Genişletilmiş deeplabv3 mimarisi ile anlamsal bölütleme

Translated title of the contribution: Semantic segmentation with extended DeepLabv3 architecture

Salih Can Yurtkulu, Yusuf Huseyin Sahin, Gozde Unal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

98 Citations (Scopus)

Abstract

In this work, semantic segmentation is addressed with convolutional neural networks (CNNs), a widely used recent approach in the field of computer vision. In experiments on the Cityscapes dataset, the images are scaled by various rates, and the CNN architecture DeepLabv3 is trained with different hyperparameters on these images. After the training phase, the success rates of the trained models are compared. The most successful DeepLabv3 model achieves a success rate of 78.83% on the Cityscapes test set. Afterwards, an ensemble of two different DeepLabv3 models and the Extended DeepLabv3 model is tested. In the test results, while the overall success rate remains nearly the same, an increase is observed in classes such as road and sidewalk.
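The abstract describes training DeepLabv3 on images rescaled by various rates and fusing predictions from several models. As a rough illustration only, the sketch below is not the authors' code: the backbone, pretrained weights, scale factor, and the logit-averaging fusion rule are all assumptions, since the paper's exact setup is not given in the abstract.

```python
# Minimal sketch, not the authors' implementation: run a pretrained torchvision
# DeepLabv3 on an image rescaled by a chosen rate, and fuse several models by
# averaging their per-pixel logits. The pretrained weights used here are not
# Cityscapes-specific; all concrete choices are illustrative assumptions.
import torch
import torchvision.transforms.functional as TF
from torchvision.models.segmentation import deeplabv3_resnet101
from PIL import Image


def segment_logits(image_path: str, scale: float = 0.5) -> torch.Tensor:
    """Return per-pixel class logits of shape (1, num_classes, H, W)."""
    model = deeplabv3_resnet101(weights="DEFAULT").eval()
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    # Rescale the input by the chosen rate (the abstract mentions training on
    # images scaled by various rates).
    img = img.resize((int(w * scale), int(h * scale)))
    x = TF.normalize(TF.to_tensor(img),
                     mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        return model(x.unsqueeze(0))["out"]


def ensemble_prediction(logit_maps: list[torch.Tensor]) -> torch.Tensor:
    """Hypothetical fusion step: average the logits of several models,
    then take the per-pixel argmax to get class indices."""
    return torch.stack(logit_maps).mean(dim=0).argmax(dim=1)
```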

Translated title of the contribution: Semantic segmentation with extended DeepLabv3 architecture
Original language: Turkish
Title of host publication: 27th Signal Processing and Communications Applications Conference, SIU 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728119045
DOIs
Publication status: Published - Apr 2019
Event: 27th Signal Processing and Communications Applications Conference, SIU 2019 - Sivas, Turkey
Duration: 24 Apr 2019 – 26 Apr 2019

Publication series

Name: 27th Signal Processing and Communications Applications Conference, SIU 2019

Conference

Conference: 27th Signal Processing and Communications Applications Conference, SIU 2019
Country/Territory: Turkey
City: Sivas
Period: 24/04/19 – 26/04/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.
