musiccaps
2 rows where audioset_names contains "Inside, large room or hall" and audioset_names contains "Yodeling"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| -EVRXQpt1-8 | | Someone is playing a guitar-like instrument in a tremolo fashion along with someone playing a melody on a harp along with a bass playing the root note. A male voice is singing, sounding sad and sensitive. A backing voice that seems to be female is singing along, providing harmonies. This song may be playing in a dance performance. | ["oriental/ballad", "guitar like instrumental", "bass", "male voice singing", "female backing voice singing", "sad", "slow tempo", "harp"] | ["Yodeling", "Music", "Inside, large room or hall"] | 6 | 50 | 60 | 0 | 1 | ["/m/01swy6", "/m/04rlf", "/t/dd00126"] |
| bwHPVG6vbNQ | | The low quality recording features a live performance where the song is played on playback and it consists of sustained strings and piano melody, energetic crash cymbals, shimmering hi hats and toms roll, over which passionate male vocal and harmonizing background female vocals are singing. It sounds emotional, passionate and soulful. | ["low quality", "mono", "noisy", "playback", "sustained strings melody", "toms roll", "shimmering hi hats", "energetic crash cymbal", "passionate male vocal", "harmonizing background female vocals", "piano chords", "emotional", "passionate", "soulful"] | ["Yodeling", "Music", "Inside, large room or hall"] | 4 | 20 | 30 | 0 | 1 | ["/m/01swy6", "/m/04rlf", "/t/dd00126"] |
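The "contains both labels" filter shown above can be sketched in Python over the table's JSON export. Since `audioset_names` is stored as a JSON-encoded list in a TEXT column, membership checks work after `json.loads`. The rows below are abbreviated to the two fields the filter touches, and the third (non-matching) row is a hypothetical example added for contrast.

```python
import json

# Abbreviated rows from the JSON export; the third row is hypothetical,
# included only so the filter has something to reject.
rows = [
    {"ytid": "-EVRXQpt1-8",
     "audioset_names": '["Yodeling", "Music", "Inside, large room or hall"]'},
    {"ytid": "bwHPVG6vbNQ",
     "audioset_names": '["Yodeling", "Music", "Inside, large room or hall"]'},
    {"ytid": "hypothetical",
     "audioset_names": '["Music", "Outside, rural or natural"]'},
]

def has_labels(row, *labels):
    """True if every label appears in the row's JSON-encoded audioset_names list."""
    names = json.loads(row["audioset_names"])
    return all(label in names for label in labels)

matches = [r["ytid"] for r in rows
           if has_labels(r, "Inside, large room or hall", "Yodeling")]
print(matches)  # -> ['-EVRXQpt1-8', 'bwHPVG6vbNQ']
```

The same check generalizes to any number of labels, since `has_labels` takes the target labels as varargs.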
```sql
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
```
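Against this schema, the two-label filter can also be expressed in SQL. A minimal sketch, assuming the SQLite build includes the JSON1 functions (true of the SQLite bundled with modern Python): `json_each` expands the JSON-encoded `audioset_names` column so each label can be matched individually. Only the columns the query touches are populated here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
    [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
    [start_s] TEXT, [end_s] TEXT,
    [is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
)
""")

# Abbreviated rows: only ytid and audioset_names are filled in.
conn.executemany(
    "INSERT INTO musiccaps (ytid, audioset_names) VALUES (?, ?)",
    [("-EVRXQpt1-8", '["Yodeling", "Music", "Inside, large room or hall"]'),
     ("bwHPVG6vbNQ", '["Yodeling", "Music", "Inside, large room or hall"]')],
)

# Select rows whose JSON label list contains BOTH target labels.
sql = """
SELECT ytid FROM musiccaps
WHERE EXISTS (SELECT 1 FROM json_each(audioset_names)
              WHERE json_each.value = 'Inside, large room or hall')
  AND EXISTS (SELECT 1 FROM json_each(audioset_names)
              WHERE json_each.value = 'Yodeling')
ORDER BY ytid
"""
ytids = [row[0] for row in conn.execute(sql)]
print(ytids)
```

Note that `start_s` and `end_s` are declared TEXT in the schema even though they hold numbers, so any range queries on them would need a `CAST` to compare numerically.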