musiccaps
1 row where aspect_list contains "guitar playing" and aspect_list contains "melancholic"
aspect_list (array), 21 values:
- accordion player 1
- ambient crowd noise 1
- dance music 1
- enthusiastic 1
- ethnic instruments 1
- flute playing 1
- folk music 1
- guitar playing 1
- gypsy music 1
- home studio 1
- instrumental harmony 1
- instrumental music 1
- jam session 1
- life is good 1
- live jam 1
- live performance 1
- medium tempo 1
- melancholic 1
- nostalgic moments 1
- people enjoying the music 1
- relaxing 1
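Facet counts like the ones above (each value appearing once, since only one row matches) can be derived by flattening every matching row's aspect list and tallying occurrences. A minimal sketch, with an illustrative single-row input standing in for the real query result:

```python
from collections import Counter

# Illustrative input: the aspect lists of the matching rows (one row here).
rows = [
    ["guitar playing", "melancholic", "folk music"],
]

# Flatten all aspect lists and count how often each value appears.
counts = Counter(aspect for aspects in rows for aspect in aspects)
print(counts["guitar playing"])  # -> 1
print(counts["melancholic"])     # -> 1
```

This mirrors how a faceting UI produces per-value counts for an array column.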
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids
---|---|---|---|---|---|---|---|---|---|---
tbovKStEnME | | The song is an instrumental. The tempo is medium with an accordion player, guitar rhythm, flute player and other percussion playing in harmony. The song is groovy and has gypsy flavour to it. The song audio quality is poor with ambient crowd noises. | ["accordion player", "life is good", "relaxing", "jam session", "home studio", "melancholic", "nostalgic moments", "people enjoying the music", "instrumental harmony", "folk music", "flute playing", "guitar playing", "live jam", "instrumental music", "medium tempo", "ambient crowd noise", "enthusiastic", "dance music", "gypsy music", "folk music", "ethnic instruments", "dance music", "live performance"] | ["Wind instrument, woodwind instrument", "Accordion"] | 1 | 270 | 280 | 0 | 0 | ["/m/085jw", "/m/0mkg"] |
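Note that the aspect_list cell is a JSON-encoded array stored as text, and it contains duplicates ("folk music" and "dance music" each appear twice). A minimal sketch of decoding it and removing duplicates while preserving order, using an abbreviated illustrative value:

```python
import json

# Abbreviated stand-in for the aspect_list cell above (a JSON array as TEXT),
# including the duplicated values.
raw = '["folk music", "guitar playing", "folk music", "dance music", "melancholic", "dance music"]'

# dict.fromkeys() deduplicates while keeping first-seen order.
aspects = list(dict.fromkeys(json.loads(raw)))
print(aspects)
# -> ['folk music', 'guitar playing', 'dance music', 'melancholic']
```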
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
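Because aspect_list is stored as a JSON array in a TEXT column, the "contains both values" filter that produced this page can be expressed with SQLite's json_each() table-valued function (available when the JSON1 extension is compiled in, as it is in recent SQLite builds). A minimal sketch against an abbreviated two-column version of the schema, with an illustrative row:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Abbreviated illustrative schema: just the key and the JSON-text column.
conn.execute("CREATE TABLE musiccaps ([ytid] TEXT PRIMARY KEY, [aspect_list] TEXT)")
conn.execute(
    "INSERT INTO musiccaps VALUES (?, ?)",
    ("tbovKStEnME", json.dumps(["guitar playing", "melancholic", "folk music"])),
)

# json_each() expands the JSON array into rows; one EXISTS clause per
# required value implements "contains X AND contains Y".
matches = conn.execute(
    """
    SELECT ytid FROM musiccaps
    WHERE EXISTS (SELECT 1 FROM json_each(aspect_list) WHERE value = 'guitar playing')
      AND EXISTS (SELECT 1 FROM json_each(aspect_list) WHERE value = 'melancholic')
    """
).fetchall()
print(matches)  # -> [('tbovKStEnME',)]
```

Plain `aspect_list LIKE '%guitar playing%'` would also match here, but json_each() avoids false positives from substrings of other values.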