The VAD dataset is a large-scale dataset for video affective content analysis. It addresses the shortage of datasets in this field: VAD provides rich emotional labels and multiple modalities, with the aim of advancing research on related tasks.
The VAD dataset consists of 19,267 carefully segmented video clips from user-generated videos on the BiliBili website. The clips are annotated through a crowdsourcing platform with discrete valence, arousal, and primary emotion labels, as well as comparisons of valence and arousal between two consecutive video clips. Unlike previous datasets, which include only video clips, our VAD dataset also provides danmu, as shown in Fig 1: real-time comments posted by users as they watch a video. Danmu provides extra information for video affective content analysis.
Fig 1. Original video from BiliBili with danmu in our VAD dataset.
The purpose of the annotation process is to assign valence, arousal, and a primary emotion describing the intended affect to each segmented video clip, and, at the same time, to compare two adjacent clips from the same video with respect to valence and arousal, as shown in Fig 2.
Fig 2. Dataset annotation.
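To make the label structure concrete, below is a minimal Python sketch of how the annotations described above could be organized per clip. The field names, value ranges, and example values are illustrative assumptions only; the actual file format is given in the directory description provided with the dataset.

# Hypothetical representation of one annotated clip; field names and values are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClipAnnotation:
    clip_id: str                                # identifier of the segmented video clip
    valence: int                                # discrete valence label
    arousal: int                                # discrete arousal label
    primary_emotion: str                        # primary emotion category, e.g. "joy"
    valence_vs_previous: Optional[str] = None   # valence compared with the preceding clip in the same video
    arousal_vs_previous: Optional[str] = None   # arousal compared with the preceding clip in the same video

# Example with made-up values:
clip = ClipAnnotation(
    clip_id="example_video_001_seg03",
    valence=1,
    arousal=2,
    primary_emotion="joy",
    valence_vs_previous="higher",
    arousal_vs_previous="equal",
)
print(clip)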
To access the VAD dataset, researchers need to print the user license agreement, sign it, and send it to us. Please note that the agreement must be signed by an individual holding an official position at their institution. This ensures that the signatory has the authority and responsibility to adhere to the terms of use and licensing conditions outlined in the agreement. After we verify the agreement, we will provide a link to download the dataset.
If the VAD dataset infringes upon the rights of a third party, please contact us. For a description of the directory structure of the dataset we provide to users, please refer to this file.
For any inquiries about the VAD dataset, please contact:
Bingzhao Cai (In charge of the database): cbz_2020@mail.ustc.edu.cn
Shangfei Wang: sfwang@ustc.edu.cn
@article{wang2024vad,
  title={VAD: A Video Affective Dataset with Danmu},
  author={Wang, Shangfei and Li, Xin and Zheng, Feiyi and Pan, Jicai and Li, Xuewei and Chang, Yanan and Li, Qiong and Wang, Jiahe and Xiao, Yufei and others},
  journal={IEEE Transactions on Affective Computing},
  year={2024},
  publisher={IEEE}
}