musiccaps
2 rows where audioset_names contains "Applause" and author_id = 6
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids
---|---|---|---|---|---|---|---|---|---|---
HLz3N5nG8fQ | | An acoustic drum is playing a jazzy groove on the ride along with a bassline while an e-guitar is playing jazz chords with a lot of changes together with an acoustic piano rendering phrases with an uplifting melody. The singer's voice sounds low and romantic. This song may be playing while dancing with a partner. | ["jazz/ballad", "acoustic piano", "acoustic drums", "e-bass", "e-guitar", "male deep voice singing", "crowd cheering and clapping", "medium to uptempo", "romantic"] | ["Singing", "Applause", "Music"] | 6 | 30 | 40 | 0 | 1 | ["/m/015lz1", "/m/028ght", "/m/04rlf"]
zSGWoXDFM64 | | A male choir is at the end of a phrase before a single male voice takes over singing with a mid ranged opera voice along to a piano and an accordion. In the background the crowd is cheering, laughing and clapping. This song may be playing in a theater. | ["musical/opera", "male mid range voice singing", "acoustic piano", "accordion", "male choir", "background cheering and clapping", "uptempo", "comedic"] | ["Applause", "Music"] | 6 | 30 | 40 | 0 | 1 | ["/m/028ght", "/m/04rlf"]
CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY,
  [url] TEXT,
  [caption] TEXT,
  [aspect_list] TEXT,
  [audioset_names] TEXT,
  [author_id] TEXT,
  [start_s] TEXT,
  [end_s] TEXT,
  [is_balanced_subset] INTEGER,
  [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT
);
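The filter shown above ("audioset_names contains "Applause" and author_id = 6") can be reproduced against a local copy of this table. Below is a minimal sketch using Python's sqlite3 module; the in-memory table, the abbreviated captions, and the LIKE-based substring match (approximating the page's "contains" filter, since audioset_names is stored as JSON text) are illustrative assumptions, and note the schema types author_id as TEXT, so the comparison uses the string '6'.

```python
import sqlite3

# In-memory sketch of the musiccaps table, using the schema above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [musiccaps] (
        [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
        [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
        [start_s] TEXT, [end_s] TEXT, [is_balanced_subset] INTEGER,
        [is_audioset_eval] INTEGER, [audioset_ids] TEXT
    )"""
)

# The two rows from the table above (captions and aspect lists abbreviated).
rows = [
    ("HLz3N5nG8fQ", None, "An acoustic drum is playing a jazzy groove...",
     '["jazz/ballad"]', '["Singing", "Applause", "Music"]',
     "6", "30", "40", 0, 1, '["/m/015lz1", "/m/028ght", "/m/04rlf"]'),
    ("zSGWoXDFM64", None, "A male choir is at the end of a phrase...",
     '["musical/opera"]', '["Applause", "Music"]',
     "6", "30", "40", 0, 1, '["/m/028ght", "/m/04rlf"]'),
]
conn.executemany("INSERT INTO musiccaps VALUES (?,?,?,?,?,?,?,?,?,?,?)", rows)

# audioset_names is JSON stored as TEXT, so a quoted-substring LIKE match
# approximates the "contains" filter; author_id is TEXT, hence '6'.
matches = conn.execute(
    """SELECT ytid FROM musiccaps
       WHERE audioset_names LIKE '%"Applause"%' AND author_id = '6'"""
).fetchall()
print(matches)
```

Both rows match, since "Applause" appears in each row's audioset_names and both share author_id 6.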