USTC-AC
Latest
A Multi-Modal Hierarchical Recurrent Neural Network for Depression Detection
Facial Action Unit Recognition Enhanced by Text Descriptions of FACS
Temporal Enhancement for Video Affective Content Analysis
Upcoming Talk at the 36th CSIG Young Elite Qingyun Forum
VAD: A Video Affective Dataset with Danmu
A Multi-Stage Visual Perception Approach for Image Emotion Analysis
Pose-robust personalized facial expression recognition through unsupervised multi-source domain adaptation
MEDIC: A Multimodal Empathy Dataset in Counseling
Patch-Aware Representation Learning for Facial Expression Recognition
Progressive Visual Content Understanding Network for Image Emotion Classification
Upcoming Talk at the 2023 CSIG Conference on Emotional Intelligence
Pose-Aware Facial Expression Recognition Assisted by Expression Descriptions
Privacy-Protected Facial Expression Recognition Augmented by High-Resolution Facial Images
UniFaRN: Unified Transformer for Facial Reaction Generation
Occluded Facial Expression Recognition using Self-supervised Learning
Human Pose Estimation with Shape Aware Loss
Low-Resolution Face Recognition Enhanced by High-Resolution Facial Images
Knowledge Guided Representation Disentanglement for Face Recognition from Low Illumination Images
Representation Learning through Multimodal Attention and Time-sync Comments for Video Affective Content Analysis
Two-Stage Multi-Scale Resolution-Adaptive Network for Low-Resolution Face Recognition
Adversarial Stacking Ensemble for Facial Landmark Tracking
Knowledge-Driven Self-Supervised Representation Learning for Facial Action Unit Recognition
Pose-Invariant Facial Expression Recognition
Micro-Expression Recognition Enhanced by Macro-Expression from Spatial-Temporal Domain
Emotional Attention Detection and Correlation Exploration for Image Emotion Distribution Learning
Capturing Emotion Distribution for Multimedia Emotion Tagging
Exploring Adversarial Learning for Deep Semi-Supervised Facial Action Unit Recognition
Multi-task face analyses through adversarial learning
Attentive One-Dimensional Heatmap Regression for Facial Landmark Detection and Tracking
Dual Learning for Facial Action Unit Detection Under Nonfull Annotation
A Novel Dynamic Model Capturing Spatial and Temporal Patterns for Facial Expression Analysis
Capturing Joint Label Distribution for Multi-Label Classification Through Adversarial Learning
Exploiting Multi-Emotion Relations at Feature and Label Levels for Emotion Tagging
Exploiting Self-Supervised and Semi-Supervised Learning for Facial Landmark Tracking with Unlabeled Data
Exploring Domain Knowledge for Facial Expression-Assisted Action Unit Activation Recognition
Knowledge-Augmented Multimodal Deep Regression Bayesian Networks for Emotion Video Tagging
Learning from Macro-expression: a Micro-expression Recognition Framework
Occluded Facial Expression Recognition with Step-Wise Assistance from Unpaired Non-Occluded Images
Pose-aware Adversarial Domain Adaptation for Personalized Facial Expression Recognition
Posed and Spontaneous Expression Distinction Using Latent Regression Bayesian Networks
Unpaired Multimodal Facial Expression Recognition
Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey
Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition
Capturing Spatial and Temporal Patterns for Facial Landmark Tracking through Adversarial Learning
Content-Based Video Emotion Tagging Augmented by Users' Multiple Physiological Responses
Dual Semi-Supervised Learning for Facial Action Unit Recognition
Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies
Identity- and Pose-Robust Facial Expression Recognition through Adversarial Feature Learning
Image Aesthetic Assessment Assisted by Attributes through Adversarial Learning
Integrating Facial Images, Speeches and Time for Empathy Prediction
KDSL: a Knowledge-Driven Supervised Learning Framework for Word Sense Disambiguation
Multiple Face Analyses through Adversarial Learning
Occluded Facial Expression Recognition Enhanced through Privileged Information
Weakly Supervised Dual Learning for Facial Action Unit Recognition
Facial Action Unit Recognition Augmented by Their Dependencies
Facial Expression Recognition Enhanced by Thermal Images through Adversarial Learning
Learning with privileged information for multi-Label classification
Personalized Multiple Facial Action Unit Recognition through Generative Adversarial Recognition Network
Thermal Augmented Expression Recognition
Weakly Supervised Facial Action Unit Recognition Through Adversarial Training
Weakly Supervised Facial Action Unit Recognition With Domain Knowledge
A Multimodal Deep Regression Bayesian Network for Affective Video Content Analyses
Capturing Dependencies among Labels and Features for Multiple Emotion Tagging of Multimedia Data
Capturing Spatial and Temporal Patterns for Distinguishing between Posed and Spontaneous Expressions
Deep Facial Action Unit Recognition from Partially Labeled Data
Deep multimodal network for multi-label classification
Differentiating Between Posed and Spontaneous Expressions with Latent Regression Bayesian Network
Emotion recognition through integrating EEG and peripheral signals
Exploring Domain Knowledge for Affective Video Content Analyses
Expression-assisted facial action unit recognition under incomplete AU annotation
Feature and label relation modeling for multiple-facial action unit classification and intensity estimation
Learning with Privileged Information for Multi-Label Classification
Personalized video emotion tagging through a topic model
Capturing global spatial patterns for distinguishing posed and spontaneous expressions
Emotion Recognition from EEG Signals Enhanced by User's Profile
Emotion recognition from peripheral physiological signals enhanced by EEG
Employing subjects' information as privileged information for emotion recognition from EEG signals
Facial Expression Intensity Estimation Using Ordinal Information
Facial expression recognition through modeling age-related spatial patterns
Facial Expression Recognition with Deep two-view Support Vector Machine
Gender recognition from visible and thermal infrared facial images
Implicit hybrid video emotion tagging by integrating video content and users' multiple physiological responses
Multiple Facial Action Unit recognition by learning joint features and label relations
Multiple facial action unit recognition enhanced by facial expressions
Posed and Spontaneous Expression Recognition Through Restricted Boltzmann Machine
Emotion Recognition from EEG Signals by Leveraging Stimulus Videos
Emotion Recognition from EEG Signals using Hierarchical Bayesian Network with Privileged Information
Emotion Recognition with the Help of Privileged Information
Enhanced facial expression recognition by age
Expression Recognition from Visible Images with the Help of Thermal Images
Facial Action Unit Classification with Hidden Knowledge under Incomplete Annotation
Implicit video emotion tagging from audiences' facial expression
Learning with privileged information using Bayesian networks
Multi-instance Hidden Markov Model for facial expression recognition
Multiple Aesthetic Attribute Assessment by Exploiting Relations Among Aesthetic Attributes
Multiple Emotion Tagging for Multimedia Data by Exploiting High-Order Dependencies Among Emotions
Multiple emotional tagging of multimedia data by exploiting dependencies among emotions
Posed and spontaneous expression recognition through modeling their spatial patterns
Posed and spontaneous facial expression differentiation using deep Boltzmann machines
Video Affective Content Analysis: A Survey of State-of-the-Art Methods
Capture expression-dependent AU relations for expression recognition
Early Facial Expression Recognition Using Hidden Markov Models
Emotion recognition from thermal infrared images using deep Boltzmann machine
Emotion recognition from users' EEG signals with the help of stimulus videos
Enhancing multi-label classification by modeling dependencies among labels
Exploiting multi-expression dependences for implicit multi-emotion video tagging
Facial Action Unit recognition by relation modeling from both qualitative knowledge and quantitative data
Fusion of visible and thermal images for facial expression recognition
Hybrid video emotional tagging using users' EEG and video content
Multi-label Learning with Missing Labels
Multiple-Facial Action Unit Recognition by Shared Feature Learning and Semantic Relation Modeling
Sequence-based bias analysis of spontaneous facial expression databases
Active Labeling of Facial Feature Points
Analyses of a Multimodal Spontaneous Facial Expression Database
Analyses of the Differences between Posed and Spontaneous Facial Expressions
Capturing Complex Spatio-temporal Relations among Facial Muscles for Facial Expression Recognition
Capturing Global Semantic Relationships for Facial Action Unit Recognition
Emotional Influence on SSVEP Based BCI
Emotional tagging of videos by exploring multiple emotions' coexistence
Eye localization from thermal infrared images
Facial Expression Recognition Using Deep Boltzmann Machine from Thermal Infrared Images
Implicit video multi-emotion tagging by exploiting multi-expression relations
Simultaneous Facial Feature Tracking and Facial Expression Recognition
A qualitative and quantitative study of color emotion using valence-arousal
Analysis of Affective Effects on Steady-State Visual Evoked Potential Responses
Bias analyses of spontaneous facial expression database
Eye Localization from Infrared Thermal Images
Facial expression recognition from infrared thermal images using temperature difference by voting
Facial Expression Recognition from Infrared Thermal Videos
Posed and spontaneous expression distinguishment from infrared thermal images
Similarity Measurement and Feature Selection Using Genetic Algorithm
Spontaneous Facial Expression Recognition by Fusing Thermal Infrared and Visible Images
A real-time attitude recognition by eye-tracking
Affective Classification in Video Based on Semi-supervised Learning
Emotion Recognition Using Hidden Markov Models from Facial Temperature Sequence
Spontaneous Facial Expression Recognition Based on Feature Point Tracking
Spontaneous facial expression recognition by using feature-level fusion of visible and thermal infrared images
Towards robot incremental learning constraints from comparative demonstration
A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference
A spontaneous facial expression recognition method using head motion and AAM features
An emotional harmony generation system
Infrared Face Recognition Based on Histogram and K-Nearest Neighbor Classification
Musical perceptual similarity estimation using interactive genetic algorithm
Emotional speech synthesis by XML file using interactive genetic algorithms
Analysis of Relationships between Color and Emotion by Classification Based on Associations
Emotional Music Generation Using Interactive Genetic Algorithm
Infrared Facial Expression Recognition Using Wavelet Transform
User Fatigue Reduction by an Absolute Rating Data-trained Predictor in IEC
Case-Based Facial Action Units Recognition Using Interactive Genetic Algorithm
Emotion Semantics Image Retrieval: A Brief Overview
Evaluation of User Fatigue Reduction Through IEC Rating-Scale Mapping
Kansei-Oriented Image Retrieval