COMPARATIVE STUDY OF DEEP LEARNING MODELS IN MULTI-LABEL SCENE CLASSIFICATION

Saziye Ozge Atik*

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

Many state-of-the-art studies are being conducted on environmental monitoring with computer vision applications, and remotely sensed images are widely preferred data in this regard. Many open datasets have been generated in this area, and multi-class classification is carried out automatically in these studies. Compared with human interpretation, machine learning algorithms provide economical, fast, and robust utilities. The UC Merced Land Use dataset is one of the most common datasets, containing a wide variety of land use classes. In this study, seven different deep learning models were applied to the UC Merced Land Use dataset, and multi-class land use classification results were compared quantitatively. The algorithms yielded accuracies higher than 95%. The highest overall accuracy was obtained with the DenseNet121 model, and the lowest score was obtained with AlexNet. For several test images, the SqueezeNet model provided more successful predictions for some classes. In future studies, domain-shift applications can strengthen this work for more expansive areas.
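
The comparative setup described in the abstract can be illustrated with a short sketch. The snippet below fine-tunes ImageNet-pretrained torchvision backbones (DenseNet-121, AlexNet, SqueezeNet) on a UC Merced-style image folder for 21-class scene classification; the dataset path, train/validation split, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: fine-tuning torchvision backbones on a UC Merced-style
# image-folder dataset. Paths, hyperparameters, and the train/val split
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

NUM_CLASSES = 21  # UC Merced Land Use has 21 scene classes

def build_model(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "densenet121":
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif name == "alexnet":
        m = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "squeezenet":
        m = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
        m.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
        m.num_classes = NUM_CLASSES
    else:
        raise ValueError(f"unknown model: {name}")
    return m

def main() -> None:
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Assumed layout: UCMerced_LandUse/Images/<class>/<image>.tif
    data = datasets.ImageFolder("UCMerced_LandUse/Images", transform=tfm)
    n_train = int(0.8 * len(data))
    train_set, val_set = random_split(data, [n_train, len(data) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model("densenet121").to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        model.eval()
        correct = 0
        with torch.no_grad():
            for x, y in val_loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
        print(f"epoch {epoch}: val accuracy {correct / len(val_set):.3f}")

if __name__ == "__main__":
    main()
```

Swapping the name passed to build_model reruns the same training loop for a different backbone, which is one way the quantitative comparison across models can be organized.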

Original language: English
Publication status: Published - 2022
Externally published: Yes
Event: 43rd Asian Conference on Remote Sensing, ACRS 2022 - Ulaanbaatar, Mongolia
Duration: 3 Oct 2022 - 5 Oct 2022

Conference

Conference: 43rd Asian Conference on Remote Sensing, ACRS 2022
Country/Territory: Mongolia
City: Ulaanbaatar
Period: 3/10/22 - 5/10/22

Bibliographical note

Publisher Copyright:
© 43rd Asian Conference on Remote Sensing, ACRS 2022.

Keywords

  • Deep learning
  • Multi-class classification
  • Scene classification
