musiccaps
1 row where aspect_list contains "moderate tempo", aspect_list contains "orchestral music" and aspect_list contains "string section"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| aq3vov8-fw8 |  | This orchestral song features flutes playing the main melody. This is backed by bass played on the cello. The percussion is played on the tambourine. This song is in a compound time signature. Toward the end of the clip, the tambourine sound is played loud in a specific rhythm. | ["orchestral music", "no voices", "moderate tempo", "flute sounds", "tambourine", "flute harmony", "instrumental", "string section"] | ["Music", "Tambourine"] | 0 | 30 | 40 | 0 | 1 | ["/m/04rlf", "/m/07brj"] |
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
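The filter above ("aspect_list contains …") can be reproduced against this schema with `LIKE` matches on the JSON-encoded `aspect_list` text column. A minimal sketch in Python's `sqlite3`, using an in-memory database seeded with the row shown on this page (the `url` cell is left blank, as it is here; `rows_with_aspects` is a hypothetical helper, not part of the dataset):

```python
import json
import sqlite3

# In-memory database with the musiccaps schema from above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [musiccaps] (
        [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
        [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
        [start_s] TEXT, [end_s] TEXT, [is_balanced_subset] INTEGER,
        [is_audioset_eval] INTEGER, [audioset_ids] TEXT
    )"""
)

# The single row shown on this page.
aspects = ["orchestral music", "no voices", "moderate tempo", "flute sounds",
           "tambourine", "flute harmony", "instrumental", "string section"]
conn.execute(
    "INSERT INTO musiccaps VALUES (?,?,?,?,?,?,?,?,?,?,?)",
    ("aq3vov8-fw8", "",
     "This orchestral song features flutes playing the main melody.",
     json.dumps(aspects), json.dumps(["Music", "Tambourine"]),
     "0", "30", "40", 0, 1, json.dumps(["/m/04rlf", "/m/07brj"])),
)

def rows_with_aspects(conn, *terms):
    """Return ytids whose aspect_list contains every given term.

    Each term is matched as a quoted JSON string, which avoids
    partial-word hits (e.g. "tempo" matching "moderate tempo" only).
    """
    where = " AND ".join(["aspect_list LIKE ?"] * len(terms))
    params = [f'%"{t}"%' for t in terms]
    return conn.execute(
        f"SELECT ytid FROM musiccaps WHERE {where}", params
    ).fetchall()

print(rows_with_aspects(conn, "moderate tempo", "orchestral music",
                        "string section"))
# → [('aq3vov8-fw8',)]
```

Note that `start_s` and `end_s` are declared `TEXT` in the schema even though they hold numbers, so numeric comparisons on them would need a `CAST`.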