Unified Depth-Guided Feature Fusion and Reranking for Hierarchical Place Recognition
Main Authors: | , , , , , |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-06-01 |
Series: | Sensors |
Online Access: | https://www.mdpi.com/1424-8220/25/13/4056 |
Summary: | Visual Place Recognition (VPR) is a pivotal task in computer vision and robotics. Prevailing VPR methods predominantly rely on RGB features for query image retrieval and correspondence establishment. However, such unimodal visual representations are inherently susceptible to environmental variations, which inevitably degrades precision. To address this problem, we propose a robust VPR framework that integrates the RGB and depth modalities. The architecture follows a coarse-to-fine paradigm: global retrieval of the top-N candidate images is performed using fused multimodal features, followed by geometric verification of these candidates leveraging depth information. A Discrete Wavelet Transform Fusion (DWTF) module generates robust multimodal global descriptors by combining RGB and depth data via the discrete wavelet transform. Furthermore, a Spiking Neuron Graph Matching (SNGM) module extracts geometric structure and spatial distances from the depth data and employs graph matching for accurate depth feature correspondence. Extensive experiments on several VPR benchmarks demonstrate that our method achieves state-of-the-art performance while maintaining the best accuracy–efficiency trade-off. |
ISSN: | 1424-8220 |
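The DWTF module described in the summary fuses RGB and depth features in the wavelet domain. This record does not specify the paper's exact fusion rule, so the sketch below assumes a common baseline scheme: a single-level 2D Haar DWT, averaging the low-frequency (LL) sub-bands of the two modalities and keeping the stronger (max-absolute) high-frequency coefficients before inverting the transform. All function names here are illustrative, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2D Haar DWT; x has even height and width.
    # Returns the four sub-bands (LL, LH, HL, HH).
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2 (perfect reconstruction).
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    x = np.empty((2 * h, 2 * w))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x

def dwt_fuse(rgb_feat, depth_feat):
    """Fuse two same-shaped 2D feature maps in the wavelet domain:
    average the low-frequency bands, and per location keep the
    high-frequency coefficient with the larger magnitude (max-abs rule).
    This fusion rule is an assumption, not the paper's exact method."""
    br = haar_dwt2(rgb_feat)
    bd = haar_dwt2(depth_feat)
    LL = (br[0] + bd[0]) / 2.0
    highs = [np.where(np.abs(r) >= np.abs(d), r, d)
             for r, d in zip(br[1:], bd[1:])]
    return haar_idwt2(LL, *highs)
```

Because the Haar pair above is perfectly invertible, fusing a feature map with itself returns the map unchanged, which is a convenient sanity check for the transform code.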