Abstract
Modeling, understanding, and interpreting the environment have become important tasks for autonomous systems in vehicles. The ability to sense the environment accurately and robustly in real time is essential for autonomous driving. Mobile point clouds are data acquired with laser scanners mounted on a moving vehicle. Accurate perception of the environment and precise localization are essential for autonomous cars to navigate reliably and operate safely in complex dynamic contexts. For these purposes, semantic segmentation of point clouds is an essential requirement. This study presents a projection-based point cloud semantic segmentation approach that combines the 3D data structure with 2D segmentation techniques. Range images are created by projecting the irregular structure of the point cloud onto a 2D plane, and each pixel in the range image is described by a vector of 3D geometric features. Experiments were carried out on PandaSet, a mobile lidar scanning point cloud dataset. PandaSet contains 4800 unorganized lidar point cloud scans of various city scenes captured with the Pandar 64 sensor. The dataset provides semantic segmentation labels for 42 different classes, including car, road, and pedestrian. U-Net was used as the segmentation algorithm. The study achieved 91.89% overall accuracy and 60.82% mIoU with the U-Net architecture.
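The abstract does not include code, but the projection step it describes (mapping an irregular point cloud to a range image whose pixels carry 3D geometric features) can be illustrated with a minimal sketch. The version below is an assumption-laden example in Python/NumPy, not the authors' implementation: the image size, vertical field-of-view limits, and the choice of [range, x, y, z] as the per-pixel feature vector are illustrative placeholders.

```python
import numpy as np

def point_cloud_to_range_image(points, h=64, w=1024,
                               fov_up_deg=15.0, fov_down_deg=-25.0):
    """Project an (N, 3) lidar point cloud onto an (h, w) range image.

    Each pixel stores a feature vector [range, x, y, z]. The field-of-view
    limits and image dimensions are illustrative assumptions, not values
    reported in the paper.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8        # range of each point
    yaw = np.arctan2(y, x)                           # azimuth angle
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))     # elevation angle

    # Normalize angles to [0, 1] and scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w                # column index
    v = (1.0 - (pitch - fov_down) / fov) * h         # row index
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Keep the closest point per pixel: write far points first, near last.
    order = np.argsort(r)[::-1]
    image = np.zeros((h, w, 4), dtype=np.float32)    # channels: r, x, y, z
    image[v[order], u[order]] = np.stack(
        [r[order], x[order], y[order], z[order]], axis=1)
    return image

if __name__ == "__main__":
    # Random points stand in for a real lidar scan in this usage example.
    pts = np.random.uniform(-50.0, 50.0, size=(100_000, 3))
    range_image = point_cloud_to_range_image(pts)
    print(range_image.shape)  # (64, 1024, 4)
```

Such multi-channel range images can then be fed to a standard 2D U-Net (as the paper does for segmentation), with the number of input channels set to the length of the per-pixel feature vector.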
Original language | English |
---|---|
Publication status | Published - 2022 |
Event | 43rd Asian Conference on Remote Sensing, ACRS 2022 - Ulaanbaatar, Mongolia |
Duration | 3 Oct 2022 → 5 Oct 2022 |
Conference
Conference | 43rd Asian Conference on Remote Sensing, ACRS 2022 |
---|---|
Country/Territory | Mongolia |
City | Ulaanbaatar |
Period | 3/10/22 → 5/10/22 |
Bibliographical note
Publisher Copyright: © 43rd Asian Conference on Remote Sensing, ACRS 2022.
Keywords
- Deep Learning
- Geometric Feature
- Point Cloud
- Range Image
- Semantic Segmentation