musiccaps
2 rows where aspect_list contains "zitar" and audioset_ids contains "/m/0jtg0"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| LKurVRvkmKc | | Someone is playing a melody on a zitar along with a shrutibox in the background. At the end male and female voices are chanting in the mid to high range and a percussive sound can be heard. This song may be playing live at a concert while meditating. | ["oriental", "meditative", "zitar", "percussive sounds", "shrutibox", "voices chanting", "slow to medium tempo"] | ["Folk music", "Music", "Sitar"] | 6 | 30 | 40 | 0 | 0 | ["/m/02w4v", "/m/04rlf", "/m/0jtg0"] |
| yreWOyWr6Uk | | Someone is playing a snare with brushes along to zitars playing a repeating melody. In the background you can hear an e-bass and an acoustic guitar strumming chords. This song may be playing in a live concert. | ["zitar", "acoustic snare", "acoustic drums", "e-bass", "amateur recording", "medium tempo", "relaxing"] | ["Country", "Music", "Musical instrument", "Plucked string instrument", "Sitar"] | 6 | 330 | 340 | 1 | 1 | ["/m/01lyv", "/m/04rlf", "/m/04szw", "/m/0fx80y", "/m/0jtg0"] |
CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY,
  [url] TEXT,
  [caption] TEXT,
  [aspect_list] TEXT,
  [audioset_names] TEXT,
  [author_id] TEXT,
  [start_s] TEXT,
  [end_s] TEXT,
  [is_balanced_subset] INTEGER,
  [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT
);
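Because `aspect_list` and `audioset_ids` are stored as JSON-encoded TEXT columns, the filter shown above ("aspect_list contains "zitar" and audioset_ids contains "/m/0jtg0"") can be reproduced with plain substring matching. The following is a minimal sketch against an in-memory SQLite database using the schema above; the captions are truncated and the `url` cells left empty, as on this page.

```python
import sqlite3

# Build the table with the schema shown above (list-valued columns are
# JSON strings stored as TEXT).
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE musiccaps (
    ytid TEXT PRIMARY KEY, url TEXT, caption TEXT, aspect_list TEXT,
    audioset_names TEXT, author_id TEXT, start_s TEXT, end_s TEXT,
    is_balanced_subset INTEGER, is_audioset_eval INTEGER, audioset_ids TEXT
)""")

# The two rows from the table above (captions truncated for brevity).
rows = [
    ("LKurVRvkmKc", "", "Someone is playing a melody on a zitar ...",
     '["oriental", "meditative", "zitar", "percussive sounds", '
     '"shrutibox", "voices chanting", "slow to medium tempo"]',
     '["Folk music", "Music", "Sitar"]', "6", "30", "40", 0, 0,
     '["/m/02w4v", "/m/04rlf", "/m/0jtg0"]'),
    ("yreWOyWr6Uk", "", "Someone is playing a snare with brushes ...",
     '["zitar", "acoustic snare", "acoustic drums", "e-bass", '
     '"amateur recording", "medium tempo", "relaxing"]',
     '["Country", "Music", "Musical instrument", '
     '"Plucked string instrument", "Sitar"]',
     "6", "330", "340", 1, 1,
     '["/m/01lyv", "/m/04rlf", "/m/04szw", "/m/0fx80y", "/m/0jtg0"]'),
]
conn.executemany(
    "INSERT INTO musiccaps VALUES (" + ",".join("?" * 11) + ")", rows)

# Reproduce the page's filter: aspect_list contains "zitar" and
# audioset_ids contains "/m/0jtg0".
matches = conn.execute(
    """SELECT ytid FROM musiccaps
       WHERE aspect_list LIKE '%"zitar"%'
         AND audioset_ids LIKE '%/m/0jtg0%'"""
).fetchall()
print([r[0] for r in matches])  # both rows match
```

Matching `'%"zitar"%'` (with the quotes) avoids false positives from aspects that merely contain the substring; for stricter queries, SQLite's `json_each` could unpack the arrays instead.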