musiccaps
1 row where aspect_list contains "acoustic guitar", aspect_list contains "e-guitar", aspect_list contains "saxophone" and aspect_list contains "uptempo"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| ChyayWIp_vU | | A male voice is singing along to a fast paced reggae instrumental. The drums are playing a reggae-groove along with a harmonica on the offbeat. Two e-guitars are playing a single note melody. Both are following the bassline adding harmonies. A saxophone is adding a little melody after every end of a phrase. The male voice is singing in the mid-range and sounds doubled. Some of the instruments are panned to the left and right side of the speakers. This song may be playing at a local bar. | ["reggae", "saxophone", "acoustic guitar", "e-guitar", "e-bass", "acoustic drums", "harmonica", "male voice singing", "positive energy", "uptempo"] | ["Music of Africa", "Music", "Reggae", "Song"] | 6 | 30 | 40 | 1 | 1 | ["/m/0164x2", "/m/04rlf", "/m/06cqb", "/m/074ft"] |
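Because `aspect_list` is stored as a JSON-encoded TEXT column (see the schema below), a "contains" filter reduces to substring matching on the encoded array. The exact mechanism the hosting page uses is not shown here; a minimal sketch against an in-memory SQLite copy of the table, using one `LIKE` clause per aspect, reproduces the single-row result above:

```python
import sqlite3

# In-memory stand-in for the musiccaps table (schema from this page,
# trimmed to the columns the filter touches).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE musiccaps (ytid TEXT PRIMARY KEY, aspect_list TEXT)"
)
conn.execute(
    "INSERT INTO musiccaps VALUES (?, ?)",
    (
        "ChyayWIp_vU",
        '["reggae", "saxophone", "acoustic guitar", "e-guitar", "e-bass", '
        '"acoustic drums", "harmonica", "male voice singing", '
        '"positive energy", "uptempo"]',
    ),
)

# Each `aspect_list contains X` filter becomes a LIKE clause over the
# JSON-encoded text; all four must match for a row to be returned.
rows = conn.execute(
    """
    SELECT ytid FROM musiccaps
    WHERE aspect_list LIKE '%"acoustic guitar"%'
      AND aspect_list LIKE '%"e-guitar"%'
      AND aspect_list LIKE '%"saxophone"%'
      AND aspect_list LIKE '%"uptempo"%'
    """
).fetchall()
print(rows)  # [('ChyayWIp_vU',)]
```

Quoting each aspect inside the `LIKE` pattern (`'%"e-guitar"%'`) avoids accidental prefix matches against longer aspect names.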
```sql
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
```
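Note that the list-valued columns (`aspect_list`, `audioset_names`, `audioset_ids`) are TEXT columns holding JSON arrays, and even the numeric `start_s`/`end_s` offsets are TEXT, so consumers need to decode and cast before use. A minimal sketch, using the values from the row above:

```python
import json

# aspect_list arrives as a JSON-encoded string in a TEXT column;
# decode it to a Python list before membership tests or filtering.
aspect_list_text = (
    '["reggae", "saxophone", "acoustic guitar", "e-guitar", "e-bass", '
    '"acoustic drums", "harmonica", "male voice singing", '
    '"positive energy", "uptempo"]'
)
aspects = json.loads(aspect_list_text)
print("uptempo" in aspects)  # True

# start_s / end_s are also TEXT; cast to int for arithmetic.
start_s, end_s = "30", "40"
clip_length = int(end_s) - int(start_s)
print(clip_length)  # 10
```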