Improved leaf area index reconstruction in heavily cloudy areas: A novel deep learning approach for SAR-Optical fusion integrating spatiotemporal features


Bibliographic Details
Main Authors: Mingqi Li, Pengxin Wang, Kevin Tansey, Fengwei Guo, Ji Zhou
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series: International Journal of Applied Earth Observation and Geoinformation
Online Access: http://www.sciencedirect.com/science/article/pii/S1569843225003929
Description
Summary: The Leaf Area Index (LAI) is an essential parameter for assessing vegetation growth, but LAI derived from optical data suffers from gaps caused by cloud cover. Synthetic Aperture Radar (SAR), with its all-weather observation capability, offers a way to fill these gaps. This study proposes a new deep learning approach for reconstructing time-series LAI from SAR and optical data in two steps. First, a two-dimensional Convolutional Neural Network-Transformer (2D CNN-Transformer) is applied to bridge SAR and optical data. Second, the 2D CNN-Transformer-predicted LAI and the Sentinel-2 LAI are input into the Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion (EDCSTFN) to further improve accuracy. The novelty lies in this two-step framework, which combines a 2D CNN-Transformer for spatiotemporal feature extraction with a deep learning fusion algorithm that refines the LAI reconstruction. Results showed that the 2D CNN-Transformer achieved higher accuracy (R² = 0.64, RMSE = 0.38 m²/m²) in establishing a relationship between SAR and optical data than 1D CNN, 2D CNN-LSTM, and 1D CNN-Transformer models. In the second step, the EDCSTFN-reconstructed LAI reached a best-case R² of 0.81 and an RMSE of 0.22 m²/m², with an average R² of 0.61 and RMSE of 0.37 m²/m² across millions of cropland and forest pixels, improving on the first-step results. The approach effectively fills gaps in spatial detail and yields a more spatially continuous LAI distribution. It generalizes well across millions of pixels under frequent cloud cover and complex surface conditions and provides a new strategy for fusing optical and SAR data.
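The abstract describes the first step (a 2D CNN-Transformer bridging SAR and optical data) only at a high level. As a rough illustration of that idea, the sketch below shows a minimal PyTorch model that encodes each SAR acquisition in a patch time series with a small 2D CNN, models temporal dependencies with a Transformer encoder, and regresses a per-date LAI value. The class name CNNTransformerLAI, all layer sizes, and the input shape are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class CNNTransformerLAI(nn.Module):
    """Hypothetical sketch of a 2D CNN-Transformer mapping SAR patches to LAI."""

    def __init__(self, in_channels=2, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # 2D CNN extracts spatial features from each SAR acquisition
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one feature vector per time step
        )
        # Transformer encoder models temporal dependencies across acquisitions
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # regress LAI per time step

    def forward(self, x):
        # x: (batch, time, channels, height, width) SAR patch time series
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).flatten(1)  # (b*t, d_model)
        feats = feats.reshape(b, t, -1)
        feats = self.temporal(feats)          # (b, t, d_model)
        return self.head(feats).squeeze(-1)   # (b, t) predicted LAI

# Example: 8 acquisition dates, 2 polarizations (e.g. VV/VH), 16x16-pixel patches
sar = torch.randn(4, 8, 2, 16, 16)
lai = CNNTransformerLAI()(sar)
print(lai.shape)  # torch.Size([4, 8])

In the paper's second step, such per-date LAI predictions would be fused with available Sentinel-2 LAI by the EDCSTFN model; that fusion network is not sketched here.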
ISSN: 1569-8432