CMHFE-DAN: A Transformer-Based Feature Extractor with Domain Adaptation for EEG-Based Emotion Recognition

Bibliographic Details
Main Authors: Manal Hilali, Abdellah Ezzati, Said Ben Alla
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/16/7/560
Description
Summary: EEG-based emotion recognition (EEG-ER) with deep learning models has gained increasing attention in recent years, with researchers focusing on architecture design, feature extraction, and generalisability. This paper presents a novel end-to-end deep learning framework for EEG-ER that combines temporal feature extraction, self-attention mechanisms, and adversarial domain adaptation. The architecture comprises a multi-stage 1D CNN that extracts spatiotemporal features from raw EEG signals, followed by a transformer-based attention module that models long-range dependencies, and a domain-adversarial neural network (DANN) module with gradient reversal that learns domain-invariant features for robust subject-independent generalisation. Experiments on benchmark datasets (DEAP, SEED, DREAMER) demonstrate that the approach achieves state-of-the-art performance, with a significant improvement in cross-subject recognition accuracy over non-adaptive frameworks. The architecture tackles key challenges in EEG emotion recognition, including generalisability, inter-subject variability, and temporal dynamics modelling. The results highlight the effectiveness of combining convolutional feature learning with adversarial domain adaptation for robust EEG-ER.
ISSN: 2078-2489
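
Note: the summary describes a three-stage pipeline (multi-stage 1D CNN, transformer attention, DANN with gradient reversal), but this record contains no implementation details. The following is a minimal PyTorch sketch of such a pipeline; the module names (GradientReversal, DANNEmotionClassifier), layer sizes, and hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor.
        return -ctx.lambda_ * grad_output, None


class DANNEmotionClassifier(nn.Module):
    """Sketch of the pipeline outlined in the summary:
    1D CNN feature extractor -> transformer encoder -> emotion head + domain head."""
    def __init__(self, n_channels=32, d_model=64, n_classes=2, n_domains=32):
        super().__init__()
        # Multi-stage 1D CNN over raw EEG, input shape (batch, channels, time).
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(d_model), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm1d(d_model), nn.ReLU(),
        )
        # Transformer encoder for long-range temporal dependencies.
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                                   batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.emotion_head = nn.Linear(d_model, n_classes)
        # Domain head, e.g. one class per training subject (DEAP has 32 subjects).
        self.domain_head = nn.Linear(d_model, n_domains)

    def forward(self, x, lambda_=1.0):
        feats = self.cnn(x).transpose(1, 2)          # (batch, time', d_model)
        feats = self.transformer(feats).mean(dim=1)  # temporal average pooling
        emotion_logits = self.emotion_head(feats)
        # Gradient reversal pushes the shared extractor toward domain-invariant features.
        domain_logits = self.domain_head(GradientReversal.apply(feats, lambda_))
        return emotion_logits, domain_logits


# Example usage on a dummy batch of 8 trials, 32 channels, 512 time samples.
model = DANNEmotionClassifier()
emotion_logits, domain_logits = model(torch.randn(8, 32, 512))
```

In this kind of setup, the emotion head is trained with the usual classification loss while the domain head is trained to identify the source subject; the gradient reversal layer flips the domain gradients before they reach the shared extractor, which is what encourages subject-invariant features and the cross-subject generalisation the abstract emphasises.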