MHS-VIT: Mamba hybrid self-attention vision transformers for traffic image detection.


Bibliographic Details
Main Authors: Xude Zhang, Weihua Ou, Xiaoping Wu, Changzhen Zhang
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0325962
Description
Summary: With the rapid development of intelligent transportation systems, and of traffic image detection in particular, the transformer architecture has greatly advanced model performance. However, traditional transformer models incur high computational costs during training and deployment because of the quadratic complexity of their self-attention mechanism, which limits their application in resource-constrained environments. To overcome this limitation, this paper proposes a novel hybrid architecture, Mamba Hybrid Self-Attention Vision Transformers (MHS-VIT), which combines the strengths of the Mamba state-space model (SSM) and the transformer to improve the efficiency and accuracy of traffic image modeling. Mamba, an SSM with linear time complexity, effectively reduces the computational burden without sacrificing performance, while the transformer's self-attention mechanism excels at capturing long-range spatial dependencies in images, which is crucial for understanding complex traffic scenes. Experimental results showed that MHS-VIT performed excellently on traffic image detection tasks: whether in vehicle detection, pedestrian detection, or traffic sign recognition, the model identified target objects accurately and quickly. Compared with backbone networks of the same scale, MHS-VIT achieved significant improvements in both accuracy and model parameter count.
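The hybrid idea described in the abstract — a linear-time state-space recurrence paired with quadratic-cost self-attention — can be illustrated with a minimal NumPy sketch. This is not the paper's actual MHS-VIT design: the function names, residual wiring, and the scalar `A`, `B`, `C` state-space parameters are illustrative assumptions, shown only to contrast the O(T) scan with the O(T²) attention step.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear-time state-space recurrence (illustrative, scalar parameters):
    h_t = A * h_{t-1} + B * x_t ;  y_t = C * h_t.  Cost is O(T)."""
    T, d = x.shape
    h = np.zeros(d)
    ys = np.empty_like(x)
    for t in range(T):
        h = A * h + B * x[t]
        ys[t] = C * h
    return ys

def self_attention(x):
    """Standard scaled dot-product self-attention over T tokens.
    The T x T score matrix is where the quadratic cost comes from."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)            # row-wise softmax
    return w @ x

def hybrid_block(x, A=0.9, B=0.1, C=1.0):
    """Hypothetical hybrid block: SSM branch then attention branch,
    each with a residual connection (not the published MHS-VIT layout)."""
    x = x + ssm_scan(x, A, B, C)
    x = x + self_attention(x)
    return x

# Example: 8 tokens of dimension 4, e.g. flattened image patches.
tokens = np.random.default_rng(0).standard_normal((8, 4))
out = hybrid_block(tokens)
```

The design point the sketch makes concrete: stacking mostly SSM blocks and only a few attention blocks keeps overall cost near-linear in sequence length while retaining attention's global receptive field.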
ISSN: 1932-6203