musiccaps
2 rows where aspect_list contains "acoustic drums", aspect_list contains "acoustic piano" and aspect_list contains "e-guitar"
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids
---|---|---|---|---|---|---|---|---|---|---
HLz3N5nG8fQ | | An acoustic drum is playing a jazzy groove on the ride along with a bassline while an e-guitar is playing jazz chords with a lot of changes together with an acoustic piano rendering phrases with an uplifting melody. The singer's voice sounds low and romantic. This song may be playing while dancing with a partner. | ["jazz/ballad", "acoustic piano", "acoustic drums", "e-bass", "e-guitar", "male deep voice singing", "crowd cheering and clapping", "medium to uptempo", "romantic"] | ["Singing", "Applause", "Music"] | 6 | 30 | 40 | 0 | 1 | ["/m/015lz1", "/m/028ght", "/m/04rlf"]
imIFtW4O5S0 | | This song contains someone playing a blues melody on the acoustic piano along with an e-guitar and an e-bass. The acoustic drums are playing a constant rhythm while male deep voice is backed by higher sounding male voices singing harmonies along with him. Then a male voice starts talking that seems not to be part of the musical composition. This song may be playing in a cafe. | ["gospel/blues", "male voices singing", "low to higher register", "male voice talking", "acoustic piano", "e-bass", "acoustic drums", "e-guitar", "uptempo"] | ["Gospel music", "Music", "Rhythm and blues", "Speech"] | 6 | 30 | 40 | 0 | 0 | ["/m/016cjb", "/m/04rlf", "/m/06j6l", "/m/09x0r"]
CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY,
  [url] TEXT,
  [caption] TEXT,
  [aspect_list] TEXT,
  [audioset_names] TEXT,
  [author_id] TEXT,
  [start_s] TEXT,
  [end_s] TEXT,
  [is_balanced_subset] INTEGER,
  [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT
);
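Because `aspect_list` is stored as a JSON array in a plain TEXT column, the "aspect_list contains X" filter above boils down to substring matches on the quoted aspect strings. A minimal sketch of that query using Python's `sqlite3` and the schema shown (only `ytid` and `aspect_list` are populated here; the other columns are elided):

```python
import sqlite3

# In-memory database with the musiccaps schema from the page.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE [musiccaps] ( [ytid] TEXT PRIMARY KEY, [url] TEXT, "
    "[caption] TEXT, [aspect_list] TEXT, [audioset_names] TEXT, "
    "[author_id] TEXT, [start_s] TEXT, [end_s] TEXT, "
    "[is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER, "
    "[audioset_ids] TEXT )"
)

# The two rows shown above, reduced to ytid and aspect_list.
rows = [
    ("HLz3N5nG8fQ",
     '["jazz/ballad", "acoustic piano", "acoustic drums", "e-bass", '
     '"e-guitar", "male deep voice singing", "crowd cheering and clapping", '
     '"medium to uptempo", "romantic"]'),
    ("imIFtW4O5S0",
     '["gospel/blues", "male voices singing", "low to higher register", '
     '"male voice talking", "acoustic piano", "e-bass", "acoustic drums", '
     '"e-guitar", "uptempo"]'),
]
conn.executemany("INSERT INTO musiccaps (ytid, aspect_list) VALUES (?, ?)", rows)

# "contains" on a JSON-encoded TEXT column: match the quoted aspect string.
query = """
    SELECT ytid FROM musiccaps
    WHERE aspect_list LIKE '%"acoustic drums"%'
      AND aspect_list LIKE '%"acoustic piano"%'
      AND aspect_list LIKE '%"e-guitar"%'
"""
matches = [ytid for (ytid,) in conn.execute(query)]
print(matches)  # both ytids match the three-aspect filter
```

A substring `LIKE` is enough here because every aspect is stored as a double-quoted JSON string; with SQLite's JSON1 extension you could instead join against `json_each(aspect_list)` for exact element matching.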