Multimodal Knowledge Distillation for Emotion Recognition
Multimodal emotion recognition has emerged as a prominent field in affective computing, offering superior performance to single-modality methods. Among physiological signals, electroencephalography (EEG) and electrooculography (EOG) are particularly valued for their complementary strengths in emotion recognition. However...
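The abstract in this record is truncated, but the title points to knowledge distillation across modalities. Purely as an illustrative aid (not the authors' published method), a minimal sketch of a soft-label distillation loss is shown below; the temperature `T`, weight `alpha`, and the `teacher_logits`/`student_logits` names are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-label knowledge distillation: blend KL divergence to the
    teacher's tempered predictions with standard cross-entropy.
    All names and hyperparameters are illustrative assumptions."""
    # Tempered distributions; the teacher is treated as fixed.
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Scale the KL term by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

In a multimodal setting such as the one the title suggests, the teacher could consume both EEG and EOG features while the student sees only one modality; that pairing is an assumption here, not a detail taken from this record.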
| Main Authors: | Zhenxuan Zhang, Guanyu Lu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Brain Sciences |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2076-3425/15/7/707 |
Similar Items
- Intelligent emotion recognition for drivers using model-level multimodal fusion
  by: Xing Luan, et al.
  Published: (2025-07-01)
- CAG-MoE: Multimodal Emotion Recognition with Cross-Attention Gated Mixture of Experts
  by: Axel Gedeon Mengara Mengara, et al.
  Published: (2025-06-01)
- MFENet: A Multi-Feature Extraction Network for Enhanced Emotion Detection Using EEG and STFT
  by: N. Ramesh Babu, et al.
  Published: (2025-01-01)
- A Comprehensive Review of Multimodal Emotion Recognition: Techniques, Challenges, and Future Directions
  by: You Wu, et al.
  Published: (2025-06-01)
- Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG
  by: Jiahui Pan, et al.
  Published: (2024-01-01)