musiccaps
1 row where audioset_names contains "Bass guitar", audioset_names contains "Electronic tuner" and audioset_names contains "Inside, small room"
ytid: RDW_kz4SXo0
url:
caption: This song features a bass guitar being played. At first, a single note is ringing. After a brief pause, an ascending lick is played in common time. The first note is played eight times. The second note is played four times, the third and fourth, two times each. The fifth note is played eight times. The sixth and seventh notes are played four times each. The last note is played once as a double stop with the open string above it and a fretted note on the higher register. This song is an instrumental and can be played in an instructional audio.
aspect_list: ["bass song", "no other instruments", "no voices", "instructional song"]
audioset_names: ["Bass guitar", "Guitar", "Music", "Musical instrument", "Electronic tuner", "Plucked string instrument", "Inside, small room"]
author_id: 0
start_s: 70
end_s: 80
is_balanced_subset: 0
is_audioset_eval: 0
audioset_ids: ["/m/018vs", "/m/0342h", "/m/04rlf", "/m/04szw", "/m/0b_fwt", "/m/0fx80y", "/t/dd00125"]
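Several columns (aspect_list, audioset_names, audioset_ids) are stored as JSON-encoded TEXT, so consumers must decode them. A minimal sketch, using the values from the row above and assuming the names and AudioSet ontology IDs are parallel arrays (they appear to be ordered consistently here):

```python
import json

# Values copied verbatim from the row above; both columns are JSON text.
audioset_names = json.loads(
    '["Bass guitar", "Guitar", "Music", "Musical instrument", '
    '"Electronic tuner", "Plucked string instrument", "Inside, small room"]'
)
audioset_ids = json.loads(
    '["/m/018vs", "/m/0342h", "/m/04rlf", "/m/04szw", '
    '"/m/0b_fwt", "/m/0fx80y", "/t/dd00125"]'
)

# Assumption: the two arrays line up index-by-index, so zipping them
# yields a name -> ontology-ID mapping.
mapping = dict(zip(audioset_names, audioset_ids))
print(mapping["Bass guitar"])  # /m/018vs
```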
CREATE TABLE [musiccaps] (
    [ytid] TEXT PRIMARY KEY,
    [url] TEXT,
    [caption] TEXT,
    [aspect_list] TEXT,
    [audioset_names] TEXT,
    [author_id] TEXT,
    [start_s] TEXT,
    [end_s] TEXT,
    [is_balanced_subset] INTEGER,
    [is_audioset_eval] INTEGER,
    [audioset_ids] TEXT
);
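Because audioset_names is JSON text rather than a relational column, the "contains" filter at the top of the page translates into substring matching. A minimal sketch of how that query can be reproduced against the schema above, using an in-memory SQLite database with only the relevant columns populated (the LIKE patterns include the surrounding quote characters to avoid partial-word matches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE musiccaps (
    ytid TEXT PRIMARY KEY, url TEXT, caption TEXT, aspect_list TEXT,
    audioset_names TEXT, author_id TEXT, start_s TEXT, end_s TEXT,
    is_balanced_subset INTEGER, is_audioset_eval INTEGER, audioset_ids TEXT
)
""")
# Insert just the columns needed for the filter; values from the row above.
conn.execute(
    "INSERT INTO musiccaps (ytid, audioset_names) VALUES (?, ?)",
    ("RDW_kz4SXo0",
     '["Bass guitar", "Guitar", "Music", "Musical instrument", '
     '"Electronic tuner", "Plucked string instrument", "Inside, small room"]'),
)

# Each "audioset_names contains X" clause becomes a LIKE over the JSON text.
rows = conn.execute("""
SELECT ytid FROM musiccaps
WHERE audioset_names LIKE '%"Bass guitar"%'
  AND audioset_names LIKE '%"Electronic tuner"%'
  AND audioset_names LIKE '%"Inside, small room"%'
""").fetchall()
print(rows)  # [('RDW_kz4SXo0',)]
```

SQLite's built-in JSON functions (e.g. json_each) would be a more robust way to test array membership, but plain LIKE is enough to reproduce the single-row result shown here.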