TY - JOUR
T1 - Improving Aerial Targeting Precision
T2 - A Study on Point Cloud Semantic Segmentation with Advanced Deep Learning Algorithms
AU - Bozkurt, Salih
AU - Atik, Muhammed Enes
AU - Duran, Zaide
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/8
Y1 - 2024/8
AB - The integration of technological advancements has significantly impacted artificial intelligence (AI), enhancing the reliability of AI model outputs. This progress has led to the widespread adoption of AI across sectors including automotive, robotics, healthcare, space exploration, and defense. Today, air defense operations rely predominantly on laser designation, a process that depends entirely on the capability and experience of human operators. Given that UAV systems can have flight durations exceeding 24 hours, this process is highly prone to human error. The aim of this study is therefore to automate the laser designation process by applying advanced deep learning algorithms to 3D point clouds obtained from different sources, thereby eliminating operator-related errors. Two data sources were used: dense 3D point clouds containing color information produced by photogrammetric methods, and point clouds produced by LiDAR systems. The photogrammetric point clouds were generated within the scope of this study from images captured by the Akinci UAV’s multi-axis gimbal camera system, while the DublinCity LiDAR dataset was used for testing with LiDAR-derived point clouds. The point clouds were segmented using the PointNet++ and RandLA-Net algorithms, and distinct differences were observed between them. Relying solely on geometric features, RandLA-Net achieved an accuracy of approximately 94%; integrating color features significantly improved its performance, raising the accuracy to nearly 97%. Similarly, PointNet++ achieved an accuracy of approximately 94% using geometric features alone. As a unique contribution of this study, the PointNet++ algorithm was enriched with color attributes, yielding a significant improvement to an accuracy of approximately 96%. These results demonstrate a notable improvement in PointNet++ with the proposed approach and show that the methodology can be applied directly to data generated from different sources in aerial scanning systems.
KW - aerial defense
KW - defense industry
KW - LiDAR
KW - mapping
KW - photogrammetry
KW - UAV
UR - http://www.scopus.com/inward/record.url?scp=85202681152&partnerID=8YFLogxK
U2 - 10.3390/drones8080376
DO - 10.3390/drones8080376
M3 - Article
AN - SCOPUS:85202681152
SN - 2504-446X
VL - 8
JO - Drones
JF - Drones
IS - 8
M1 - 376
ER -