TY - JOUR
T1 - Development of large-scale synthetic 3D point cloud datasets for vision-based bridge structural condition assessment
AU - Shi, Mingyu
AU - Kim, Hyunjun
AU - Narazaki, Yasutaka
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Visual recognition of 3D point cloud data of bridge inspection scenes is a key step in automating the visual inspection process, which is currently largely manual and inefficient. To alleviate the lack of large-scale annotated point cloud datasets for training such 3D visual recognition algorithms, this research investigates an approach for developing large-scale synthetic point cloud datasets. The proposed approach proceeds in four steps: (1) random generation of different types of bridges in computer graphics environments; (2) sampling of camera trajectories that represent data collection scenarios during bridge inspection; (3) 3D reconstruction using Structure from Motion (SfM) applied to rendered synthetic images; (4) automated annotation of the reconstructed point cloud using ground truth masks obtained from the synthetic images. In addition, this research proposes storing point uncertainty information, defined as the error between the ground truth depth and the depth computed from the SfM results. Prior to training, thresholds can be applied to this uncertainty information to control the level of outliers in the dataset. This research demonstrates the proposed approach by generating point cloud datasets for two data collection scenarios. The effectiveness of the generated datasets is investigated by training 3D semantic segmentation algorithms and evaluating their performance on real and synthetic point cloud data. The proposed approach for point cloud dataset generation will facilitate the development of generalizable, high level-of-detail 3D recognition algorithms toward autonomous bridge inspection.
AB - Visual recognition of 3D point cloud data of bridge inspection scenes is a key step in automating the visual inspection process, which is currently largely manual and inefficient. To alleviate the lack of large-scale annotated point cloud datasets for training such 3D visual recognition algorithms, this research investigates an approach for developing large-scale synthetic point cloud datasets. The proposed approach proceeds in four steps: (1) random generation of different types of bridges in computer graphics environments; (2) sampling of camera trajectories that represent data collection scenarios during bridge inspection; (3) 3D reconstruction using Structure from Motion (SfM) applied to rendered synthetic images; (4) automated annotation of the reconstructed point cloud using ground truth masks obtained from the synthetic images. In addition, this research proposes storing point uncertainty information, defined as the error between the ground truth depth and the depth computed from the SfM results. Prior to training, thresholds can be applied to this uncertainty information to control the level of outliers in the dataset. This research demonstrates the proposed approach by generating point cloud datasets for two data collection scenarios. The effectiveness of the generated datasets is investigated by training 3D semantic segmentation algorithms and evaluating their performance on real and synthetic point cloud data. The proposed approach for point cloud dataset generation will facilitate the development of generalizable, high level-of-detail 3D recognition algorithms toward autonomous bridge inspection.
KW - bridge inspection
KW - point cloud data
KW - semantic segmentation
KW - structure from motion
KW - synthetic dataset
KW - unmanned aerial vehicles
UR - http://www.scopus.com/inward/record.url?scp=85196642935&partnerID=8YFLogxK
U2 - 10.1177/13694332241260077
DO - 10.1177/13694332241260077
M3 - Article
AN - SCOPUS:85196642935
SN - 1369-4332
VL - 27
SP - 2901
EP - 2928
JO - Advances in Structural Engineering
JF - Advances in Structural Engineering
IS - 16
ER -