musiccaps
1 row where aspect_list contains "chaotic", "jazz", and "male vocalist"
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids
---|---|---|---|---|---|---|---|---|---|---
KmBaE7ozWow | | A male vocalist sings this melodic jazz/blues. The tempo is medium fast with a piano accompaniment, groovy bass lines, infectious drumming and guitar accompaniment. There is a sound of syncopated ukulele and another string instrument playing over the song. The song is melodic and ambient but the superimposed strings melody is making it chaotic, busy, confusing and confusing. | ["male vocalist", "fast tempo", "jazz", "blues", "dissonant strings", "groovy bass", "steady drumming", "intense", "keyboard harmony", "lively drumming", "superimposed syncopated strings", "syncopated music", "groovy bass lines", "two tracks playing", "unrelated tracks", "chaotic", "confusing", "ukele", "acustic guitar", "bass guitar", "boomy"] | ["Music", "Ukulele"] | 7 | 30 | 40 | 0 | 1 | ["/m/04rlf", "/m/07xzm"]
```sql
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
```
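The "aspect_list contains …" filter in the page header can be reproduced in plain SQLite: since `aspect_list` is stored as a JSON-encoded TEXT column, an array-contains test becomes an `EXISTS` over `json_each`. A minimal sketch, assuming SQLite is built with the JSON1 extension (as Python's bundled build normally is); the in-memory table and the abbreviated inserted row are illustrative, not the full dataset:

```python
import sqlite3

# In-memory copy of the musiccaps schema from the CREATE TABLE above.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT, [aspect_list] TEXT,
    [audioset_names] TEXT, [author_id] TEXT, [start_s] TEXT, [end_s] TEXT,
    [is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER, [audioset_ids] TEXT
)""")

# A shortened stand-in for the row shown above (aspect_list is a JSON string).
conn.execute(
    "INSERT INTO musiccaps (ytid, aspect_list) VALUES (?, ?)",
    ("KmBaE7ozWow", '["male vocalist", "jazz", "chaotic", "confusing"]'),
)

# "aspect_list contains X" = the JSON array has an element equal to X.
rows = conn.execute(
    """
    SELECT ytid FROM musiccaps
    WHERE EXISTS (SELECT 1 FROM json_each(aspect_list) WHERE value = 'chaotic')
      AND EXISTS (SELECT 1 FROM json_each(aspect_list) WHERE value = 'jazz')
      AND EXISTS (SELECT 1 FROM json_each(aspect_list) WHERE value = 'male vocalist')
    """
).fetchall()
print(rows)  # → [('KmBaE7ozWow',)]
```

Matching on `json_each(...).value` checks whole array elements exactly, so it will not spuriously match a substring of a longer aspect (e.g. `'jazz'` will not match `"jazz fusion"` the way a `LIKE '%jazz%'` pattern would).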