musiccaps
1 row where `audioset_names` contains "Banjo", "Country", and "Gospel music"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| 4KqSdK5KM-I | | A banjo is playing chords while an upright bass is playing a bassline. These two form the rhythmic section. On top of that a mandolin is playing a fast and complex solo. In the background you can hear the crowd cheering and clapping hands. This is an amateur recording. This song may be playing in a big pub. | ["bluegrass", "semi acoustic mandolin", "banjo", "upright bass", "background noises", "amateur recording"] | ["Gospel music", "Banjo", "Country", "Guitar", "Music", "Mandolin", "Musical instrument", "Plucked string instrument", "Bluegrass"] | 6 | 80 | 90 | 0 | 1 | ["/m/016cjb", "/m/018j2", "/m/01lyv", "/m/0342h", "/m/04rlf", "/m/04rzd", "/m/04szw", "/m/0fx80y", "/m/0gg8l"] |
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
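The filter shown above ("audioset_names contains …") can be approximated locally against this schema. A minimal sketch using Python's built-in `sqlite3`, assuming `audioset_names` is stored as a JSON-encoded list of strings (as in the row above); the `LIKE '%"…"%'` pattern matches a quoted element inside that JSON text and stands in for Datasette's richer contains filter:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [musiccaps] (
        [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
        [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
        [start_s] TEXT, [end_s] TEXT,
        [is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER,
        [audioset_ids] TEXT
    )"""
)

# Abbreviated version of the single matching row from the table above.
row = (
    "4KqSdK5KM-I",
    None,  # url omitted here
    "A banjo is playing chords while an upright bass is playing a bassline.",
    json.dumps(["bluegrass", "banjo", "upright bass"]),
    json.dumps(["Gospel music", "Banjo", "Country", "Guitar", "Music",
                "Mandolin", "Musical instrument", "Plucked string instrument",
                "Bluegrass"]),
    "6", "80", "90", 0, 1,
    json.dumps(["/m/016cjb", "/m/018j2", "/m/01lyv"]),
)
conn.execute("INSERT INTO musiccaps VALUES (?,?,?,?,?,?,?,?,?,?,?)", row)

# Every requested name must occur as a quoted element of the JSON list.
hits = conn.execute(
    """SELECT ytid FROM musiccaps
       WHERE audioset_names LIKE '%"Banjo"%'
         AND audioset_names LIKE '%"Country"%'
         AND audioset_names LIKE '%"Gospel music"%'"""
).fetchall()
print(hits)  # [('4KqSdK5KM-I',)]
```

SQLite's `json_each` table-valued function would give exact element matching instead of substring matching, at the cost of a slightly longer query.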