musiccaps
1 row where audioset_names contains "Ambient music", "Reggae", and "Singing"
| ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids |
|---|---|---|---|---|---|---|---|---|---|---|
| Cn3xoxvbkF0 |  | This is the live performance of a reggae piece. There is a female vocalist in the lead singing melodically. The keyboard can be heard playing the main melody. The bass guitar is playing a simple but groovy bass line. In the rhythmic background, there is a slow tempo acoustic drum beat. The atmosphere is emotional. This is an amateur recording and a bit dated, so the audio quality is quite poor. | ["reggae", "live performance", "amateur recording", "female vocal", "melodic singing", "keyboard", "bass guitar", "acoustic drums", "emotional", "slow tempo"] | ["Singing", "Music", "Reggae", "Ambient music"] | 9 | 30 | 40 | 0 | 1 | ["/m/015lz1", "/m/04rlf", "/m/06cqb", "/m/0fd3y"] |
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
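Since audioset_names is stored as a TEXT column holding a JSON array, the page's "contains all three labels" filter can be reproduced in SQL. Below is a minimal sketch using Python's sqlite3 module, assuming the bundled SQLite has the JSON1 extension (the default in modern builds); only the columns needed for the filter are populated, and the caption is omitted for brevity.

```python
import json
import sqlite3

# In-memory copy of the musiccaps schema shown above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [musiccaps] (
        [ytid] TEXT PRIMARY KEY, [url] TEXT, [caption] TEXT,
        [aspect_list] TEXT, [audioset_names] TEXT, [author_id] TEXT,
        [start_s] TEXT, [end_s] TEXT,
        [is_balanced_subset] INTEGER, [is_audioset_eval] INTEGER,
        [audioset_ids] TEXT
    )"""
)

# Insert the single row from the table above (other columns left NULL).
conn.execute(
    "INSERT INTO musiccaps (ytid, audioset_names, start_s, end_s) "
    "VALUES (?, ?, ?, ?)",
    (
        "Cn3xoxvbkF0",
        json.dumps(["Singing", "Music", "Reggae", "Ambient music"]),
        "30",
        "40",
    ),
)

# Reproduce the page filter: rows whose audioset_names JSON array
# contains all three labels. json_each() unpacks the stored JSON text
# into one row per array element.
sql = """
    SELECT ytid FROM musiccaps
    WHERE (
        SELECT COUNT(*) FROM json_each(musiccaps.audioset_names)
        WHERE json_each.value IN ('Ambient music', 'Reggae', 'Singing')
    ) = 3
"""
rows = conn.execute(sql).fetchall()
print(rows)  # [('Cn3xoxvbkF0',)]
```

The correlated subquery counts how many of the three target labels appear in each row's array; requiring the count to equal 3 gives AND semantics, matching the page's filter. Note that start_s and end_s are declared TEXT in the schema, so numeric comparisons on them would need a CAST.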