Gradient Guided Depth Super-Resolution Using Attention-Based Cross Domain Interaction Module
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11048879/
Summary: Color guided depth image super-resolution (DISR), leveraging the strong structural similarity between the registered high-resolution (HR) color image and the low-resolution (LR) depth image, has achieved remarkable results. However, when the sampling factor is large, DISR struggles to reconstruct accurate depth edges due to the severe loss of high-frequency components. Crucially, directly using the high-frequency (HF) information of the registered color map may transfer color textures into the depth result and blur the depth edges. To address this, we propose GDISRNet, a gradient-guided DISR network that not only exploits the HF information of the color image but also fundamentally avoids texture-copying artifacts. First, we compute a gradient map from the Y channel of the HR color image. Second, an Attention-based Multi-Scale Feature extraction module (AMSF) extracts multi-scale features separately from the gradient map and the interpolated LR depth map. Third, a novel Cross-Domain Interaction Module (CDIM) fuses these features, enhancing depth boundaries while suppressing color textures. Additionally, a gradient loss is introduced to effectively prevent edge smoothing. Finally, a residual progressive structure of AMSF and CDIM reconstructs the SR depth image with enhanced high-frequency information. Experiments on several popular benchmark datasets, including Middlebury, Sintel, Lu, and NYU V2, demonstrate the superiority of the proposed GDISRNet over representative state-of-the-art methods.
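Two ingredients of the pipeline described in the summary can be illustrated concretely: computing a gradient map from the Y channel of an RGB image, and a gradient loss that penalizes edge smoothing. The sketch below is a minimal NumPy illustration under assumed details (BT.601 luma weights, a Sobel operator, an L1 penalty); the function names are hypothetical and the paper's actual network operates on learned multi-scale features, not raw pixels.

```python
import numpy as np

def rgb_to_y(rgb):
    """Luma (Y) channel from an (H, W, 3) RGB image in [0, 1], BT.601 weights."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def sobel_gradient(img):
    """Gradient-magnitude map of a 2-D image via 3x3 Sobel filters (zero padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img, 1)                      # zero-pad so output keeps the input size
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):                      # correlate with the two Sobel kernels
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_l1_loss(pred, target):
    """L1 distance between gradient maps: large when predicted edges are smoothed."""
    return float(np.mean(np.abs(sobel_gradient(pred) - sobel_gradient(target))))
```

A depth prediction that blurs a sharp edge has a weaker gradient map than the ground truth there, so this term directly penalizes edge smoothing, which is the stated purpose of the gradient loss in GDISRNet.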
ISSN: | 2169-3536 |