Large-Scale Point Cloud Semantic Segmentation with Density-Based Grid Decimation


Bibliographic Details
Main Authors: Liangcun Jiang, Jiacheng Ma, Han Zhou, Boyi Shangguan, Hongyu Xiao, Zeqiang Chen
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: ISPRS International Journal of Geo-Information
Online Access: https://www.mdpi.com/2220-9964/14/7/279
Description
Summary: Accurate segmentation of point clouds into categories such as roads, buildings, and trees is critical for applications in 3D reconstruction and autonomous driving. However, large-scale point cloud segmentation encounters challenges such as uneven density distribution, inefficient sampling, and limited feature extraction capabilities. To address these issues, this paper proposes <i>RT-Net</i>, a novel framework that incorporates a density-based grid decimation algorithm for efficient preprocessing of outdoor point clouds. The proposed framework alleviates the problem of uneven density distribution and improves computational efficiency. <i>RT-Net</i> also introduces two modules: Local Attention Aggregation, which extracts local detailed features of points via an attention mechanism, enhancing the model’s ability to recognize small-sized objects; and Attention Residual, which integrates local details of point clouds with global features via an attention mechanism to improve the model’s generalization ability. Experimental results on the <i>Toronto3D</i>, <i>Semantic3D</i>, and <i>SemanticKITTI</i> datasets demonstrate the superiority of <i>RT-Net</i> for small-sized object segmentation, achieving state-of-the-art mean Intersection over Union (<i>mIoU</i>) scores of 86.79% on <i>Toronto3D</i> and 79.88% on <i>Semantic3D</i>.
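The record does not detail RT-Net's density-based grid decimation algorithm. As a rough illustration of the general idea behind grid decimation (evening out point density by capping how many points survive in each 3D grid cell), here is a minimal sketch; the function name, cell size, and per-cell cap are illustrative assumptions, not the paper's actual method.

```python
import math
from collections import defaultdict


def grid_decimate(points, cell_size=0.1, max_per_cell=1):
    """Illustrative grid decimation (not RT-Net's algorithm).

    Buckets points into a 3D voxel grid of edge length `cell_size`
    and keeps at most `max_per_cell` points per occupied cell, so
    dense regions are thinned while sparse regions are left intact.
    `points` is an iterable of (x, y, z) tuples.
    """
    counts = defaultdict(int)  # points kept so far in each cell
    kept = []
    for p in points:
        key = (
            math.floor(p[0] / cell_size),
            math.floor(p[1] / cell_size),
            math.floor(p[2] / cell_size),
        )
        if counts[key] < max_per_cell:
            kept.append(p)
            counts[key] += 1
    return kept
```

For example, 100 coincident points in one cell plus one isolated point reduce to just two points with the default cap, showing how the dense cluster is decimated while the sparse point survives.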
ISSN:2220-9964