HSI Reconstruction: A Spectral Transformer With Tensor Decomposition and Dynamic Convolution


Bibliographic Details
Main Authors: Le Sun, Xihan Ma, Xinyu Wang, Qiao Chen, Zebin Wu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11022735/
Description
Summary: The core challenge of hyperspectral compressive imaging is to reconstruct a three-dimensional hyperspectral image from two-dimensional compressed measurements. While recent deep learning-based methods have demonstrated outstanding performance, they often lack robust theoretical interpretability. Conversely, traditional iterative optimization algorithms are built upon sound mathematical derivations. To combine the advantages of both approaches, we propose a spectral transformer network, termed STTODNet, which integrates deep tensor decomposition and omni-dimensional dynamic convolution (ODConv). Specifically, we incorporate a deep Tucker decomposition module within the self-attention mechanism to effectively extract the low-rank prior features inherent in the hyperspectral image. Moreover, we replace the conventional linear projection layer with ODConv to substantially improve feature extraction capabilities. A three-scale U-Net structure is designed as the approximation operator for solving the prior subproblem within our deep unfolding network architecture. Extensive experimental results demonstrate that STTODNet achieves superior results in terms of reconstruction quality, interpretability, and computational efficiency when compared to state-of-the-art methods.
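The low-rank Tucker prior the summary refers to can be illustrated with a minimal, generic truncated-HOSVD sketch in NumPy. This is not the paper's learned deep Tucker module; the function names and rank choices below are illustrative assumptions, showing only the classical decomposition that motivates it: a hyperspectral cube factored into a small core tensor and per-mode factor matrices.

```python
import numpy as np

def mode_unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    # Mode-n product: contract factor U (J x I_mode) with axis `mode` of T.
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_hosvd(T, ranks):
    # Truncated HOSVD: factors are the leading left singular vectors of
    # each mode unfolding; the core is T projected onto those factors.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    # Multiply the core by every factor matrix to rebuild the full tensor.
    T = core
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T

# Toy "HSI" cube (height x width x bands) that is exactly low-rank,
# built from a random Tucker model, then recovered by HOSVD.
rng = np.random.default_rng(0)
core0 = rng.standard_normal((3, 3, 2))
facs0 = [rng.standard_normal((8, 3)),
         rng.standard_normal((8, 3)),
         rng.standard_normal((16, 2))]  # 16 spectral bands
X = tucker_reconstruct(core0, facs0)
core, facs = tucker_hosvd(X, (3, 3, 2))
err = np.linalg.norm(X - tucker_reconstruct(core, facs)) / np.linalg.norm(X)
```

For an exactly low-rank tensor such as `X`, the relative error `err` is at machine-precision level; on real hyperspectral data the truncation instead acts as a denoising low-rank prior, which is the property STTODNet's attention module is designed to exploit in a learned fashion.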
ISSN: 1939-1404, 2151-1535