musiccaps
1 row where aspect_list contains "acoustic guitar", aspect_list contains "atmospheric", aspect_list contains "emotional" and aspect_list contains "passionate"
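The "contains" filters above can be reproduced directly in SQL. Because `aspect_list` is stored as a TEXT column holding a JSON-encoded array (see the schema below), each "contains" condition reduces to a `LIKE` clause on the quoted tag. A minimal sketch, using an in-memory SQLite stand-in for the table with the published schema:

```python
import sqlite3

# In-memory stand-in for the musiccaps table, built from the published schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE [musiccaps] ( [ytid] TEXT PRIMARY KEY, [url] TEXT, "
    "[caption] TEXT, [aspect_list] TEXT, [audioset_names] TEXT, "
    "[author_id] TEXT, [start_s] TEXT, [end_s] TEXT, "
    "[is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER, "
    "[audioset_ids] TEXT )"
)
# One row with just the columns the filter touches (others stay NULL).
conn.execute(
    "INSERT INTO musiccaps (ytid, aspect_list) VALUES (?, ?)",
    ("gTX4SG70cEY",
     '["acoustic guitar", "atmospheric", "emotional", "passionate"]'),
)

# aspect_list is TEXT, so each "contains" filter becomes a LIKE on the
# quoted tag; AND-ing the clauses reproduces the faceted filter above.
rows = conn.execute(
    "SELECT ytid FROM musiccaps "
    "WHERE aspect_list LIKE '%\"acoustic guitar\"%' "
    "AND aspect_list LIKE '%\"atmospheric\"%' "
    "AND aspect_list LIKE '%\"emotional\"%' "
    "AND aspect_list LIKE '%\"passionate\"%'"
).fetchall()
print(rows)  # [('gTX4SG70cEY',)]
```

Note the quotes inside the `LIKE` patterns: matching `"emotional"` with its JSON quoting avoids false hits on tags that merely contain the substring (e.g. a hypothetical "unemotional").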
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| gTX4SG70cEY | | A male vocalist sings this sad melody. The tempo is slow with an acoustic guitar accompaniment and a droning keyboard harmony. The song is soft, mellow, sad, emotional, sentimental, regretful, lonely and poignant. This song is an Alternative Rock/Indie. | ["male singer", "slow tempo", "acoustic guitar", "atmospheric", "poignant", "dream pop", "lonely", "pop rock", "longing", "sombre", "breakup song", "passionate", "sad", "wistful", "emotional", "sentimental", "regretful", "wistful", "movie soundtrack", "droning keyboard harmony"] | ["Music", "Theremin"] | 7 | 230 | 240 | 0 | 0 | ["/m/04rlf", "/m/07kc_"] |
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
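Note that the list-valued columns (`aspect_list`, `audioset_names`, `audioset_ids`) are stored as TEXT holding JSON-encoded arrays, and even the clip bounds `start_s`/`end_s` are TEXT, so consumers must decode and cast. A minimal sketch, using the values from the row above:

```python
import json

# List-valued columns are JSON-encoded TEXT; json.loads recovers Python lists.
aspect_list = '["acoustic guitar", "atmospheric", "emotional", "passionate"]'
aspects = json.loads(aspect_list)
print(aspects[0])  # acoustic guitar

# start_s / end_s are stored as TEXT and need an explicit cast.
start_s, end_s = "230", "240"
clip_len = int(end_s) - int(start_s)
print(clip_len)  # 10 (MusicCaps captions describe 10-second clips)
```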