Deep learning-based vehicle detection from orthophoto and spatial accuracy analysis

Muhammed Yahya Biyik, Muhammed Enes Atik*, Zaide Duran

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Deep learning algorithms are used by many different disciplines for a wide range of purposes thanks to their ever-improving data processing capabilities. Convolutional neural networks (CNNs) are the architecture most commonly developed and used for such image-based tasks. At the same time, the widespread use of Unmanned Aerial Vehicles (UAVs) enables the collection of aerial photographs for photogrammetric studies. This study brings these two fields together: it aims to find the global-coordinate equivalents of objects detected from UAV images using deep learning and to evaluate their accuracy over these values. To this end, the v3 and v4 versions of the YOLO algorithm, which predicts the midpoint (bounding-box center) of each detected object, were trained in Google Colab's virtual machine environment using the prepared dataset. The coordinate values read from the orthophoto were compared with the coordinate values of the object midpoints derived from the predictions of the YOLOv3 and YOLOv4-CSP models, and their spatial accuracy was calculated. An accuracy of 16.8 cm was obtained with YOLOv3 and 15.5 cm with YOLOv4-CSP. In addition, the mAP value was 80% for YOLOv3 and 87% for YOLOv4-CSP, and the F1-score was 80% for YOLOv3 and 85% for YOLOv4-CSP.
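The georeferencing step described above can be sketched in a few lines: YOLO's detected box centers (in image pixels) are mapped into the orthophoto's coordinate system through its affine geotransform, and the spatial accuracy is then the root-mean-square of the planimetric differences against the coordinates read from the orthophoto. This is a minimal illustration, not the authors' implementation; the geotransform values, detections, and reference coordinates below are hypothetical.

```python
import math

# Hypothetical affine geotransform in GDAL order:
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, -pixel_height)
gt = (420000.0, 0.05, 0.0, 4530000.0, 0.0, -0.05)  # 5 cm ground sample distance

def pixel_to_map(col, row, gt):
    """Convert an image position (col, row) to map coordinates (x, y)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical YOLO detections: bounding-box centers in pixel coordinates.
detected_px = [(1204.5, 868.2), (2310.0, 1510.7)]
# Hypothetical reference midpoints read from the orthophoto (map coordinates).
reference_xy = [(420060.30, 4529956.55), (420115.55, 4529924.40)]

predicted_xy = [pixel_to_map(c, r, gt) for c, r in detected_px]

# Spatial accuracy as the RMS of the planimetric (x, y) errors.
sq_errors = [(px - rx) ** 2 + (py - ry) ** 2
             for (px, py), (rx, ry) in zip(predicted_xy, reference_xy)]
rmse = math.sqrt(sum(sq_errors) / len(sq_errors))
print(f"Spatial RMSE: {rmse:.3f} m")
```

With real data, the geotransform would be read from the orthophoto (e.g. its world file or GeoTIFF tags) rather than hard-coded.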

Original language: English
Pages (from-to): 138-145
Number of pages: 8
Journal: International Journal of Engineering and Geosciences
Issue number: 2
Publication status: Published - 5 Jul 2023

Bibliographical note

Publisher Copyright:
© Author(s) 2023.


Keywords

  • Deep Learning
  • Object Detection
  • Orthophoto
  • Photogrammetry
  • UAV


