musiccaps
2 rows where aspect_list contains "acoustic piano", "sad", and "slow tempo"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| GLIXnXZEOxY | | An acoustic piano is playing a sad melody while two women are singing as a duo taking turns. Their voices have some reverb and in the background you can hear the crowd cheering. This song may be playing live on stage. | ["pop-ballad", "acoustic piano", "female singers", "crowd cheering", "amateur recording", "slow tempo", "sad"] | ["Singing", "Music", "/t/dd00004"] | 6 | 90 | 100 | 1 | 1 | ["/m/015lz1", "/m/04rlf", "/t/dd00004"] |
| MpS2SSIhe2g | | An acoustic piano is playing a slow melody along with strings and brass in the background playing long minor chords. A bansuri flute is playing the lead melody. The whole composition sounds sad and may be playing in a sad movie scene like Titanic. | ["ballad", "acoustic piano", "bansuri flute", "strings", "synthesizer strings/brass", "slow tempo", "sad"] | ["Music", "Sad music"] | 6 | 30 | 40 | 0 | 1 | ["/m/04rlf", "/t/dd00033"] |
```sql
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
```
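Because `aspect_list` is stored as a JSON-encoded TEXT column rather than a normalized table, the "contains all three aspects" filter shown at the top can be approximated with `LIKE` matches on the quoted JSON strings. A minimal sketch using Python's `sqlite3`, with an abbreviated two-column version of the schema above and the two sample rows from this page (the query wording is an assumption about how the page's filter works, not the site's actual SQL):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated schema: only the columns this query needs.
conn.execute("CREATE TABLE musiccaps (ytid TEXT PRIMARY KEY, aspect_list TEXT)")

rows = [
    ("GLIXnXZEOxY", json.dumps(["pop-ballad", "acoustic piano", "female singers",
                                "crowd cheering", "amateur recording", "slow tempo", "sad"])),
    ("MpS2SSIhe2g", json.dumps(["ballad", "acoustic piano", "bansuri flute", "strings",
                                "synthesizer strings/brass", "slow tempo", "sad"])),
]
conn.executemany("INSERT INTO musiccaps VALUES (?, ?)", rows)

# Match each aspect as a quoted JSON string so "sad" does not
# accidentally match a longer aspect like "sad music".
query = """
    SELECT ytid FROM musiccaps
    WHERE aspect_list LIKE '%"acoustic piano"%'
      AND aspect_list LIKE '%"sad"%'
      AND aspect_list LIKE '%"slow tempo"%'
"""
matches = [ytid for (ytid,) in conn.execute(query)]
print(matches)
```

Both sample rows satisfy the filter, matching the "2 rows" count shown above. For exact rather than substring matching, SQLite's `json_each` table-valued function (available when the JSON1 extension is compiled in) would be the more robust choice.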