Capturing Dependencies among Labels and Features for Multiple Emotion Tagging of Multimedia Data

Publication
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA

In this paper, we tackle the problem of emotion tagging of multimedia data by modeling the dependencies among multiple emotions in both the feature and label spaces. These dependencies, which carry crucial top-down and bottom-up evidence for improving multimedia affective content analysis, have not been thoroughly exploited yet. To this end, we propose two hierarchical models that independently and dependently learn the shared features and global semantic relationships among emotion labels to jointly tag multimedia data with multiple emotion labels. Efficient learning and inference algorithms for the proposed models are also developed. Experiments on three benchmark emotion databases demonstrate the superior performance of our methods over existing methods.
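The figure below indicates the proposed models are built from RBMs. As an illustrative sketch only, and not the authors' architecture, the snippet shows the general idea of capturing feature-label and label-label dependencies with one binary RBM whose visible layer concatenates feature units and multiple emotion-label units, so shared hidden units model both kinds of dependency jointly. The class name, hyperparameters, and CD-1 training here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class JointRBM:
    """Toy RBM over [features | labels]; hidden units capture dependencies."""

    def __init__(self, n_feat, n_labels, n_hidden=16, lr=0.05):
        self.n_feat, self.n_labels = n_feat, n_labels
        n_vis = n_feat + n_labels
        self.W = 0.01 * rng.standard_normal((n_vis, n_hidden))
        self.b = np.zeros(n_vis)     # visible biases
        self.c = np.zeros(n_hidden)  # hidden biases
        self.lr = lr

    def _h_given_v(self, v):
        return sigmoid(v @ self.W + self.c)

    def _v_given_h(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def fit(self, X, Y, epochs=100):
        # One-step contrastive divergence (CD-1) on the joint visible vector.
        V = np.hstack([X, Y])
        for _ in range(epochs):
            ph = self._h_given_v(V)
            h = (rng.random(ph.shape) < ph).astype(float)
            pv = self._v_given_h(h)
            ph2 = self._h_given_v(pv)
            self.W += self.lr * (V.T @ ph - pv.T @ ph2) / len(V)
            self.b += self.lr * (V - pv).mean(axis=0)
            self.c += self.lr * (ph - ph2).mean(axis=0)

    def predict_labels(self, X, gibbs_steps=20):
        # Clamp the feature units and run Gibbs updates on the label units,
        # so each predicted label is informed by features AND the other labels.
        Y = np.full((len(X), self.n_labels), 0.5)
        for _ in range(gibbs_steps):
            V = np.hstack([X, Y])
            h = self._h_given_v(V)
            Y = self._v_given_h(h)[:, self.n_feat:]
        return Y
```

Because all labels share one hidden layer and are updated together during inference, co-occurrence patterns among emotions (e.g. two emotions that frequently appear together) are reflected in the joint prediction rather than in independent per-label classifiers.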

Fig. Two proposed methods. (a) Combining a multi-task RBM with a three-layer RBM to capture dependencies among features and labels independently. (b) Capturing dependencies among features and labels dependently.
Shangfei Wang
Professor of Artificial Intelligence

My research interests include Pattern Recognition, Affective Computing, Probabilistic Graphical Models, and Computational Intelligence.
