Uncertainty Estimation for Photogrammetric Point Clouds of UAV Imagery

Bibliographic Details
Main Authors: D. Huang, R. Qin
Format: Article
Language: English
Published: Copernicus Publications, 2025-07-01
Series: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Online Access: https://isprs-archives.copernicus.org/articles/XLVIII-G-2025/657/2025/isprs-archives-XLVIII-G-2025-657-2025.pdf
Description
Summary: Unmanned aerial vehicles (UAVs) are now widely used across photogrammetric applications to collect high-resolution images for 3D reconstruction. Modern photogrammetric pipelines typically employ Structure-from-Motion (SfM) and Multi-View Stereo (MVS) to generate dense 3D point clouds from unordered image sets. Estimating the uncertainty of these point clouds is crucial: it yields error covariance matrices that indicate the reliability of the reconstruction. Despite its importance, little effort has been made to model uncertainty, particularly at the MVS stage, and to rigorously propagate it through the photogrammetric pipeline to the final 3D points, which leads to misinterpretation of their quality. Recent work on disparity uncertainty estimation also focuses solely on stereo matching, ignoring the rich information provided by the MVS framework. In this work, we propose a novel method for estimating metric uncertainty in 3D point clouds derived from UAV imagery using error propagation. Specifically, we leverage multi-ray points from the MVS framework to map dense matching costs to metric disparity uncertainty. Our method requires no training data, making it generalizable across UAV datasets. We evaluate it on public and self-collected UAV datasets, and the results show that it outperforms existing approaches in terms of bounding rate.
ISSN: 1682-1750
2194-9034
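
The abstract's central operation, first-order error propagation from image-space disparity uncertainty to a metric 3D point covariance, can be illustrated with the standard rectified-stereo model. The Python sketch below is a generic textbook formulation, not the authors' multi-ray cost-to-uncertainty mapping, which is described only in the full paper; the focal length f, baseline B, principal point (cx, cy), and the noise levels sigma_px and sigma_d are assumed inputs for illustration.

import numpy as np

def point_covariance(u, v, d, f, B, cx, cy, sigma_px, sigma_d):
    """First-order propagation of observation noise (pixel noise
    sigma_px on u, v; disparity noise sigma_d on d) to the 3x3
    covariance of a triangulated point, for the rectified-stereo
    model X = B*(u - cx)/d, Y = B*(v - cy)/d, Z = f*B/d."""
    # Jacobian of (X, Y, Z) with respect to the observations (u, v, d).
    J = np.array([
        [B / d, 0.0,   -B * (u - cx) / d**2],
        [0.0,   B / d, -B * (v - cy) / d**2],
        [0.0,   0.0,   -f * B / d**2],
    ])
    # Assume independent observation noise and propagate: Sigma = J S J^T.
    S = np.diag([sigma_px**2, sigma_px**2, sigma_d**2])
    return J @ S @ J.T

# Example: per-axis standard deviations of one reconstructed point
# (all numbers are illustrative, not from the paper's datasets).
cov = point_covariance(u=1100.0, v=800.0, d=40.0, f=2400.0, B=0.3,
                       cx=960.0, cy=540.0, sigma_px=0.5, sigma_d=0.8)
print(np.sqrt(np.diag(cov)))  # metric sigma_X, sigma_Y, sigma_Z

In this model the depth term dominates: sigma_Z = (Z**2 / (f * B)) * sigma_d grows quadratically with depth, which is one reason a well-calibrated metric disparity uncertainty, as the abstract proposes, matters for interpreting UAV point-cloud quality.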