USTC-AC
Qiang Ji
Latest
A Novel Dynamic Model Capturing Spatial and Temporal Patterns for Facial Expression Analysis
Exploring Domain Knowledge for Facial Expression-Assisted Action Unit Activation Recognition
Knowledge-Augmented Multimodal Deep Regression Bayesian Networks for Emotion Video Tagging
Posed and Spontaneous Expression Distinction Using Latent Regression Bayesian Networks
Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey
Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition
Content-Based Video Emotion Tagging Augmented by Users' Multiple Physiological Responses
Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies
Facial Action Unit Recognition Augmented by Their Dependencies
Thermal Augmented Expression Recognition
Weakly Supervised Facial Action Unit Recognition With Domain Knowledge
A Multimodal Deep Regression Bayesian Network for Affective Video Content Analyses
Capturing Dependencies among Labels and Features for Multiple Emotion Tagging of Multimedia Data
Deep Facial Action Unit Recognition from Partially Labeled Data
Differentiating Between Posed and Spontaneous Expressions with Latent Regression Bayesian Network
Expression-assisted facial action unit recognition under incomplete AU annotation
Feature and label relation modeling for multiple-facial action unit classification and intensity estimation
Capturing global spatial patterns for distinguishing posed and spontaneous expressions
Employing subjects' information as privileged information for emotion recognition from EEG signals
Facial Expression Intensity Estimation Using Ordinal Information
Facial expression recognition through modeling age-related spatial patterns
Gender recognition from visible and thermal infrared facial images
Implicit hybrid video emotion tagging by integrating video content and users' multiple physiological responses
Multiple Facial Action Unit recognition by learning joint features and label relations
Multiple facial action unit recognition enhanced by facial expressions
Emotion Recognition with the Help of Privileged Information
Facial Action Unit Classification with Hidden Knowledge under Incomplete Annotation
Implicit video emotion tagging from audiences' facial expression
Learning with privileged information using Bayesian networks
Multi-instance Hidden Markov Model for facial expression recognition
Multiple Aesthetic Attribute Assessment by Exploiting Relations Among Aesthetic Attributes
Multiple Emotion Tagging for Multimedia Data by Exploiting High-Order Dependencies Among Emotions
Multiple emotional tagging of multimedia data by exploiting dependencies among emotions
Posed and spontaneous expression recognition through modeling their spatial patterns
Posed and spontaneous facial expression differentiation using deep Boltzmann machines
Video Affective Content Analysis: A Survey of State-of-the-Art Methods
Capture expression-dependent AU relations for expression recognition
Early Facial Expression Recognition Using Hidden Markov Models
Emotion recognition from thermal infrared images using deep Boltzmann machine
Emotion recognition from users' EEG signals with the help of stimulus videos
Enhancing multi-label classification by modeling dependencies among labels
Exploiting multi-expression dependences for implicit multi-emotion video tagging
Facial Action Unit recognition by relation modeling from both qualitative knowledge and quantitative data
Fusion of visible and thermal images for facial expression recognition
Hybrid video emotional tagging using users' EEG and video content
Multi-label Learning with Missing Labels
Multiple-Facial Action Unit Recognition by Shared Feature Learning and Semantic Relation Modeling
Sequence-based bias analysis of spontaneous facial expression databases
Active Labeling of Facial Feature Points
Capturing Complex Spatio-temporal Relations among Facial Muscles for Facial Expression Recognition
Capturing Global Semantic Relationships for Facial Action Unit Recognition
Emotional tagging of videos by exploring multiple emotions' coexistence
Eye localization from thermal infrared images
Facial Expression Recognition Using Deep Boltzmann Machine from Thermal Infrared Images
Implicit video multi-emotion tagging by exploiting multi-expression relations
Simultaneous Facial Feature Tracking and Facial Expression Recognition
Bias analyses of spontaneous facial expression database