EKNet: Graph Structure Feature Extraction and Registration for Collaborative 3D Reconstruction in Architectural Scenes

Bibliographic Details
Main Authors: Changyu Qian, Hanqiang Deng, Xiangrong Ni, Dong Wang, Bangqi Wei, Hao Chen, Jian Huang
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/13/7133
Summary: Collaborative geometric reconstruction of building structures can significantly reduce the communication overhead of data sharing, protect privacy, and support large-scale robot deployment and management. Geometric reconstruction of building structures has been partially studied in recent years, but the alignment and fusion of geometric structure models reconstructed by multiple UAVs (Unmanned Aerial Vehicles) remain largely unexplored. The vertices and edges of geometric structure models are sparse, and existing methods face challenges such as low feature-extraction efficiency and substantial data requirements when processing the sparse graph structures that result from geometrization. To address these challenges, this paper proposes an efficient deep graph matching registration framework that integrates interpretable feature extraction with network training. Specifically, we first extract multidimensional local properties of nodes by combining geometric features with complex-network features. Next, we construct a lightweight graph neural network, named EKNet, to strengthen feature representation and improve performance in low-overlap registration scenarios. Finally, feature matching and discrimination modules eliminate incorrect pairings and further improve accuracy. Experiments demonstrate that the proposed method registers 27.28% faster than a traditional GCN (Graph Convolutional Network) and improves registration accuracy by 80.66% over the second-best method, and it remains robust in scenes with high noise and low overlap rates. Additionally, we construct a standardized geometric point cloud registration dataset.
ISSN: 2076-3417
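
The summary describes combining geometric node attributes with complex-network descriptors before graph matching. The following is a minimal, hypothetical Python sketch of that feature-extraction step; the function name, the particular descriptors (degree and clustering coefficient), and the use of networkx are illustrative assumptions, since the record does not specify the paper's exact formulation.

import numpy as np
import networkx as nx

def node_feature_matrix(vertices, edges):
    """Per-node features for a sparse geometric structure model.

    vertices: (N, 3) array-like of 3D vertex coordinates
    edges:    iterable of (i, j) vertex-index pairs
    returns:  (N, 5) array of [x, y, z, degree, clustering coefficient]
    """
    g = nx.Graph()
    g.add_nodes_from(range(len(vertices)))
    g.add_edges_from(edges)

    # Complex-network descriptors of each node's local connectivity.
    degree = np.array([g.degree(n) for n in g.nodes()], dtype=float)
    clustering = np.array([nx.clustering(g, n) for n in g.nodes()])

    # Concatenate geometric coordinates with the network descriptors.
    return np.hstack([np.asarray(vertices, dtype=float),
                      degree[:, None], clustering[:, None]])

# Example: a square wireframe with 4 vertices and 4 edges.
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(node_feature_matrix(verts, edges))  # shape (4, 5)

A matrix of this kind would then feed a graph neural network such as the paper's EKNet; the subsequent feature matching and discrimination stages mentioned in the summary are not sketched here.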