Intelligent emotion recognition for drivers using model-level multimodal fusion
Unstable emotions are considered an important factor contributing to traffic accidents. The probability of accidents can be reduced if drivers' emotional anomalies are quickly identified and addressed. In this paper, we present a multimodal emotion recognition model, MHLT, which performs...
Main Authors: Xing Luan, Quan Wen, Bo Hang
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-07-01
Series: Frontiers in Physics
Online Access: https://www.frontiersin.org/articles/10.3389/fphy.2025.1599428/full
Similar Items
- Multimodal Knowledge Distillation for Emotion Recognition
  by: Zhenxuan Zhang, et al.
  Published: (2025-06-01)
- A Comprehensive Review of Multimodal Emotion Recognition: Techniques, Challenges, and Future Directions
  by: You Wu, et al.
  Published: (2025-06-01)
- Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG
  by: Jiahui Pan, et al.
  Published: (2024-01-01)
- Feasibility of internet-based multimodal emotion recognition training in adolescents with and without autism: A pilot study
  by: Nora Choque Olsson, et al.
  Published: (2025-09-01)
- Dual-stage gated segmented multimodal emotion recognition method
  by: MA Fei, et al.
  Published: (2025-06-01)