LiDGS: An efficient 3D reconstruction framework integrating lidar point clouds and multi-view images for enhanced geometric fidelity

Bibliographic Details
Main Authors: Li Yan, Jiang Song, Hong Xie, Pengcheng Wei, Gang Li, Longze Zhu, Zhongli Fan, Shucheng Gong
Format: Article
Language: English
Published: Elsevier 2025-08-01
Series:International Journal of Applied Earth Observations and Geoinformation
Online Access:http://www.sciencedirect.com/science/article/pii/S1569843225003772
Description
Summary:Multi-view reconstruction of real-world scenes is an important and challenging task. Although methods based on Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have made significant progress in rendering quality, limitations remain in the fidelity of geometric structures. To address this challenge, we propose LiDGS, a novel 3D reconstruction approach within the 3DGS framework that integrates lidar point clouds and multi-view images. LiDGS achieves high-fidelity 3D scene reconstruction by introducing high-precision geometric priors and multiple geometric constraints derived from lidar point clouds, while guaranteeing efficient and accurate scene rendering. Specifically, we adopt adaptive checkerboard sampling and multi-hypothesis joint view selection (ACMP) for whole-image depth propagation, generating a high-precision dense depth map that provides continuous and accurate depth-prior constraints for Gaussian optimization. We then design an adaptive Gaussian densification strategy that guides the geometric structure of the 3D scene through geometric anchors and adaptively adjusts the number and volume of Gaussians to characterize object surfaces more finely. Finally, we introduce a depth regularization method that corrects the depth estimate of each Gaussian, ensuring the consistency of depth information across viewpoints and, in turn, improving reconstruction quality. Experimental results show that the method achieves superior performance in both novel view synthesis and 3D reconstruction, outperforming other classical methods. Our source code will be published at https://github.com/SongJiang-WHU/LiDGS.
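To illustrate the kind of depth supervision the abstract describes, the sketch below shows a generic depth-prior regularization term: an L1 penalty between the depth rendered from the Gaussians and a lidar-propagated dense depth map, added to a photometric loss. This is a minimal illustrative sketch only; the function names, masking scheme, and weighting are assumptions, not the paper's actual formulation.

```python
import numpy as np

def depth_prior_loss(rendered_depth, prior_depth, valid_mask):
    """L1 penalty between the rendered depth map and the lidar-propagated
    depth prior, averaged over pixels where the prior is valid.
    All inputs are illustrative 2D arrays of the same shape."""
    diff = np.abs(rendered_depth - prior_depth)
    n_valid = max(int(valid_mask.sum()), 1)  # avoid division by zero
    return float((diff * valid_mask).sum() / n_valid)

def total_loss(photometric, rendered_depth, prior_depth, valid_mask, lam=0.1):
    # Photometric term plus a weighted depth-consistency term; the weight
    # lam is a placeholder, not a value from the paper.
    return photometric + lam * depth_prior_loss(rendered_depth,
                                                prior_depth, valid_mask)
```

In practice such a term would be evaluated per training view during Gaussian optimization, so that depth estimates stay consistent with the lidar-derived prior across viewpoints.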
ISSN:1569-8432