An Adaptive Convolutional Neural Network With Spatio-Temporal Attention and Dynamic Pathways (ACNN-STADP) for Robust EEG-Based Motor Imagery Classification


Bibliographic Details
Main Authors: Aaqib Raza, Mohd Zuki Yusoff
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11037490/
Description
Summary: Electroencephalogram (EEG)-based Brain-Computer Interfaces (BCIs) have gained substantial attention, particularly for motor imagery (MI), which facilitates direct brain-to-device communication without any muscular movement. However, existing classification models face limitations such as inter-subject variability, lack of generalizability, high computational demands, low signal-to-noise ratios, and inefficient feature extraction, which impede their robustness and accuracy. Moreover, advanced deep learning models often rely on rigid architectures with fixed spatio-temporal filters, restricting their adaptability to dynamic EEG patterns. To address these challenges, this paper proposes an Adaptive Convolutional Neural Network with Spatio-Temporal Attention and Dynamic Pathways (ACNN-STADP), which introduces a novel dynamic pathway mechanism and adaptive attention strategy for robust MI-EEG decoding. The proposed model integrates a Dynamic Pathway Convolution Network (DPCN) for adaptive feature extraction, incorporating a Dynamic Gating Controller (DGC) and Dynamic Adaptive Spatio-Temporal (DAST) blocks to efficiently capture multi-scale spatial and temporal dependencies. Additionally, an Adaptive Attention Fusion (AAF) module employs Dual Multi-Head Self-Attention (DMHSA) and a U-Net-inspired Adaptive Fusion Block (AFB) to enhance feature integration and improve classification performance. Furthermore, the model introduces three key innovations: Dynamic Multi-Scale Convolutional Learning for adaptive kernel selection, Unified Spatio-Temporal Attention (USTA) for efficient feature recalibration, and the AFB for multi-scale feature fusion while preserving long-range dependencies. The model is validated on BCI Competition IV Dataset 2a, achieving a peak accuracy of 90.77%, and is further evaluated across six additional MI-EEG datasets, demonstrating an overall average accuracy above 78.98%. ACNN-STADP significantly improves generalization, reduces computational complexity, and enhances real-time applicability, establishing a robust multi-dataset adaptive deep learning framework for EEG-based MI classification.
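The abstract's core mechanism — a gating controller that adaptively weights multi-scale convolutional branches — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the record links only to the paper); the averaging kernels, the energy-based gating rule, and all function names below are illustrative assumptions standing in for the learned DGC and Dynamic Multi-Scale Convolutional Learning components.

```python
import numpy as np

def conv1d_same(x, kernel):
    # "Same"-length temporal convolution applied independently per EEG channel.
    # x: (channels, time) array; kernel: 1-D filter.
    return np.array([np.convolve(ch, kernel, mode="same") for ch in x])

def softmax(z):
    # Numerically stable softmax over a 1-D vector of branch scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_multiscale(x, kernel_sizes=(3, 7, 15)):
    """Sketch of gated multi-scale temporal filtering for an EEG window.

    Each branch filters the signal at a different temporal scale; a tiny
    gating controller turns the branches' mean absolute responses into
    softmax weights and mixes the branches accordingly. In the paper the
    gates would be produced by a learned controller, not this heuristic.
    """
    branches = []
    for k in kernel_sizes:
        kern = np.ones(k) / k              # illustrative moving-average kernel
        branches.append(conv1d_same(x, kern))
    energies = np.array([np.abs(b).mean() for b in branches])
    gates = softmax(energies)              # one adaptive weight per scale
    fused = sum(g * b for g, b in zip(gates, branches))
    return fused, gates
```

For a 22-channel window of 128 samples, `dynamic_multiscale` returns a fused feature map of the same shape plus three gate weights summing to one; in the full model such fused maps would then feed the attention-fusion stage.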
ISSN: 2169-3536