RCTNet: Residual conv-attention transformer network for corn hyperspectral image classification

Bibliographic Details
Main Authors: Yihan Li, Yan Li, Gongchao Chen, Linfang Li, Songlin Jin, Ling Zhou
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-06-01
Series: Frontiers in Remote Sensing
Online Access: https://www.frontiersin.org/articles/10.3389/frsen.2025.1583560/full
Description
Summary: Classifying corn varieties presents a significant challenge due to the high-dimensional characteristics of hyperspectral images and the complexity of feature extraction, which hinder progress in developing intelligent agriculture systems. To cope with these challenges, we introduce the Residual Convolution-Attention Transformer Network (RCTNet), an innovative framework designed to optimize hyperspectral image classification. RCTNet integrates Conv2D with Channel Attention (2DWCA) and Conv3D with Spatial Attention (3DWSA) modules for efficient local spatial-spectral feature extraction, ensuring meaningful feature selection across multiple dimensions. Additionally, a residual transformer module is incorporated to enhance global feature learning by capturing long-range dependencies and improving classification performance. By effectively fusing local and global representations, RCTNet maximizes feature utilization, leading to superior accuracy and robustness in classification tasks. Extensive experimental results on a corn seed hyperspectral image dataset and two widely used remote sensing datasets validate the effectiveness, efficiency, and generalizability of RCTNet in hyperspectral image classification applications.
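
The abstract describes the architecture only at a high level, so the PyTorch sketch below is merely an illustration of the stated ideas: a Conv2D branch with channel attention, a Conv3D branch with spatial attention, and a residual transformer block whose output is fused with the local features for classification. All module designs, layer sizes, the band count, the patch size, and the additive fusion are assumptions made for illustration; they are not the authors' exact RCTNet configuration.

```python
# Minimal sketch of the RCTNet ideas from the abstract (assumed design, not
# the published architecture): 2D conv + channel attention, 3D conv + spatial
# attention, and a residual transformer block for global context.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention3D(nn.Module):
    """Spatial attention over a 3D spatial-spectral volume (assumed design)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)            # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)           # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn


class ResidualTransformer(nn.Module):
    """Transformer encoder layer wrapped with an extra residual skip."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim,
            batch_first=True)

    def forward(self, tokens):
        return tokens + self.block(tokens)


class RCTNetSketch(nn.Module):
    def __init__(self, bands=200, num_classes=4, dim=64):
        super().__init__()
        # 2D branch: spectral bands treated as input channels.
        self.branch2d = nn.Sequential(
            nn.Conv2d(bands, dim, 3, padding=1), nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True), ChannelAttention(dim))
        # 3D branch: joint spatial-spectral convolution on a 1-channel cube.
        self.branch3d = nn.Sequential(
            nn.Conv3d(1, 8, (7, 3, 3), padding=(3, 1, 1)), nn.BatchNorm3d(8),
            nn.ReLU(inplace=True), SpatialAttention3D())
        self.proj3d = nn.Conv2d(8 * bands, dim, 1)   # fold depth into channels
        self.transformer = ResidualTransformer(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cube):                         # cube: (B, bands, H, W)
        local2d = self.branch2d(cube)                # (B, dim, H, W)
        vol = self.branch3d(cube.unsqueeze(1))       # (B, 8, bands, H, W)
        local3d = self.proj3d(vol.flatten(1, 2))     # (B, dim, H, W)
        fused = local2d + local3d                    # additive local fusion
        tokens = fused.flatten(2).transpose(1, 2)    # (B, H*W, dim)
        global_feat = self.transformer(tokens).mean(dim=1)
        return self.head(global_feat)


if __name__ == "__main__":
    model = RCTNetSketch(bands=200, num_classes=4)
    x = torch.randn(2, 200, 9, 9)                    # batch of 9x9 patches
    print(model(x).shape)                            # torch.Size([2, 4])
```

Running the script prints the logit shape for a batch of two 9 x 9 hyperspectral patches, which checks that the two local branches and the transformer stage fit together dimensionally; the summation-based fusion and mean pooling over tokens are placeholder choices.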
ISSN: 2673-6187