musiccaps
1 row where audioset_names contains "Humming", "Inside, large room or hall", and "Inside, small room"
ytid | url | caption | aspect_list | audioset_names | author_id | start_s | end_s | is_balanced_subset | is_audioset_eval | audioset_ids
---|---|---|---|---|---|---|---|---|---|---
FkpJaXzgMBQ | | A female vocalist sings this song. There are two separate tracks where the vocalists sing very low notes in the former and high notes in the latter with ukelele accompaniment. The voice is controlled and melodious but the tracks are unrelated. | ["female vocalist", "slow tempo", "two different tracks", "ukelele accompaniment", "low pitch", "high pitch", "unrelated", "female singer", "female performer"] | ["Singing", "Humming", "Music", "Inside, small room", "Inside, large room or hall"] | 7 | 180 | 190 | 0 | 1 | ["/m/015lz1", "/m/02fxyj", "/m/04rlf", "/t/dd00125", "/t/dd00126"]
```sql
CREATE TABLE [musiccaps] (
  [ytid] TEXT PRIMARY KEY,
  [url] TEXT,
  [caption] TEXT,
  [aspect_list] TEXT,
  [audioset_names] TEXT,
  [author_id] TEXT,
  [start_s] TEXT,
  [end_s] TEXT,
  [is_balanced_subset] INTEGER,
  [is_audioset_eval] INTEGER,
  [audioset_ids] TEXT
);
```
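Because `audioset_names` is stored as JSON-encoded text rather than a relational list, a "contains" filter like the one above can be approximated with one `LIKE` match per wanted label against the quoted string inside the JSON. This is a minimal sketch using Python's `sqlite3` (row values abbreviated; how Datasette actually implements its `contains` filter may differ):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE musiccaps (
         ytid TEXT PRIMARY KEY, url TEXT, caption TEXT, aspect_list TEXT,
         audioset_names TEXT, author_id TEXT, start_s TEXT, end_s TEXT,
         is_balanced_subset INTEGER, is_audioset_eval INTEGER, audioset_ids TEXT
       )"""
)

# The single matching row from the table above (caption and lists abbreviated).
names = ["Singing", "Humming", "Music", "Inside, small room",
         "Inside, large room or hall"]
conn.execute(
    "INSERT INTO musiccaps VALUES (?,?,?,?,?,?,?,?,?,?,?)",
    ("FkpJaXzgMBQ", None, "A female vocalist sings this song.",
     json.dumps(["female vocalist"]), json.dumps(names),
     "7", "180", "190", 0, 1, json.dumps(["/m/02fxyj"])),
)

# Approximate the 'contains' filter: one LIKE clause per wanted label,
# each matching the quoted label inside the JSON-encoded text column.
wanted = ["Humming", "Inside, large room or hall", "Inside, small room"]
where = " AND ".join(["audioset_names LIKE ?"] * len(wanted))
params = [f'%"{w}"%' for w in wanted]
rows = conn.execute(
    f"SELECT ytid FROM musiccaps WHERE {where}", params
).fetchall()
print(rows)  # -> [('FkpJaXzgMBQ',)]
```

Matching on the quoted label (`"Humming"` including the double quotes) avoids accidental substring hits on labels that merely contain the word; SQLite's JSON1 `json_each` would be a stricter alternative where exact element equality matters.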