Attention-CARU With Texture-Temporal Network for Video Depth Estimation
Video depth estimation has a wide range of applications, especially in the tasks of robot navigation and autonomous driving. RNN-based encoder-decoder architectures are the most commonly used methods for depth feature prediction, but recurrent operators have limitations of large-scale perspective fr...
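As a rough illustration of the RNN-based encoder-decoder pattern the abstract refers to, the sketch below runs a small convolutional encoder per frame, carries a recurrent hidden state across frames, and decodes a per-frame depth map. This is a minimal stand-in, not the paper's Attention-CARU or texture-temporal network; the ConvGRU cell, layer widths, and output normalisation are illustrative assumptions.

```python
# Minimal sketch of an RNN-based encoder-decoder for per-frame depth prediction.
# NOT the paper's Attention-CARU model; cell choice and layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell used as a stand-in recurrent operator."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update/reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * n


class RecurrentDepthNet(nn.Module):
    """Encoder -> recurrent bottleneck -> decoder, applied frame by frame."""

    def __init__(self, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, hid_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.rnn = ConvGRUCell(hid_ch, hid_ch)
        self.decoder = nn.Sequential(
            nn.Conv2d(hid_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),  # single-channel depth logits
        )

    def forward(self, clip):
        # clip: (B, T, 3, H, W) video tensor; returns (B, T, 1, H, W) depth maps.
        h, depths = None, []
        for t in range(clip.size(1)):
            feat = self.encoder(clip[:, t])
            h = self.rnn(feat, h)   # temporal state carried across frames
            d = self.decoder(h)
            d = F.interpolate(d, size=clip.shape[-2:], mode="bilinear", align_corners=False)
            depths.append(torch.sigmoid(d))  # normalised inverse-depth-style output
        return torch.stack(depths, dim=1)


if __name__ == "__main__":
    model = RecurrentDepthNet()
    video = torch.randn(2, 4, 3, 128, 160)  # batch of 2 clips, 4 frames each
    print(model(video).shape)               # torch.Size([2, 4, 1, 128, 160])
```

The single recurrent state at the bottleneck is exactly the point the abstract criticises: each step only sees the previous hidden state, which is what motivates adding attention over longer temporal context.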
Main Authors: | Sio-Kei Im, Ka-Hou Chan |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/11034974/ |
Similar Items
- Enhanced Localisation and Handwritten Digit Recognition Using ConvCARU
  by: Sio-Kei Im, et al.
  Published: (2025-06-01)
- Exploring Attention in Depth: Event-Related and Steady-State Visual Evoked Potentials During Attentional Shifts Between Depth Planes in a Novel Stimulation Setup
  by: Jonas Jänig, et al.
  Published: (2025-04-01)
- Monocular Depth Estimation: A Review on Hybrid Architectures, Transformers and Addressing Adverse Weather Conditions
  by: Kumara Lakindu, et al.
  Published: (2025-01-01)
- Parallel Multi-Scale Semantic-Depth Interactive Fusion Network for Depth Estimation
  by: Chenchen Fu, et al.
  Published: (2025-07-01)
- Comparative Analysis of Attention Mechanisms in Densely Connected Network for Network Traffic Prediction
  by: Myeongjun Oh, et al.
  Published: (2025-06-01)