musiccaps
1 row where aspect_list contains "ambient room nosiness", "energetic", "live performance", and "male singer"
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
---|---|---|---|---|---|---|---|---|---|---|
jFek2xLbEww | | A male singer raises this passionate melody. The song is medium tempo with an acoustic guitar accompaniment, groovy bass line and a subtle drumming rhythm. The song is passionate and relaxing. The song is played live and has poor audio quality. | ["male singer", "live performance", "groovy bass line", "medium tempo", "story telling", "ambient room nosiness", "people talking", "guitar accompaniment", "bass rhythm", "steady percussions", "folk music", "emotional", "energetic", "people talking", "poor audio quality", "folk music", "folk music", "guitarist"] | ["Singing", "Music", "Rattle (instrument)"] | 1 | 190 | 200 | 0 | 1 | ["/m/015lz1", "/m/04rlf", "/m/05r5wn"] |
CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY,
  [url] TEXT,
  [caption] TEXT,
  [aspect_list] TEXT,
  [audioset_names] TEXT,
  [author_id] TEXT,
  [start_s] TEXT,
  [end_s] TEXT,
  [is_balanced_subset] INTEGER,
  [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT
);
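Because aspect_list is a JSON-encoded TEXT column in this schema, the "contains" filter above amounts to substring matching on the quoted aspect name. A minimal sketch using Python's standard-library sqlite3 module against an in-memory copy of the table (the inserted row is abbreviated to just the columns needed for the match; the exact query Datasette generates may differ):

```python
import sqlite3

# Recreate the table from the schema above in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
  [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
  [start_s] TEXT, [end_s] TEXT,
  [is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT )""")

# Insert the matching row, abbreviated to ytid and aspect_list.
conn.execute(
    "INSERT INTO musiccaps (ytid, aspect_list) VALUES (?, ?)",
    ("jFek2xLbEww",
     '["male singer", "live performance", "ambient room nosiness", "energetic"]'),
)

# "aspect_list contains X" becomes a LIKE match on the quoted aspect name,
# one AND-ed clause per aspect.
aspects = ["ambient room nosiness", "energetic",
           "live performance", "male singer"]
sql = ("SELECT ytid FROM musiccaps WHERE "
       + " AND ".join(["aspect_list LIKE ?"] * len(aspects)))
rows = conn.execute(sql, [f'%"{a}"%' for a in aspects]).fetchall()
print(rows)  # → [('jFek2xLbEww',)]
```

Quoting the aspect name inside the LIKE pattern avoids partial-word matches (e.g. "energetic" accidentally matching a longer aspect that merely contains it as a substring would still require the surrounding quotes).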