High-Frequency Attention U-Net for Road Segmentation in High-Resolution Remote Sensing Imagery

Bahaa Awad*, Isin Erer

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper explores the application of a single expanding path of the Frequency Attention U-Net (FAUNet), specifically employing its frequency attention mechanism for road detection in remote sensing. In contrast to the full dual-path architecture of the recently proposed FAUNet, this study capitalizes on only the high-frequency attentive path, which is tailored for edge detection in road segmentation tasks. By focusing on this single path, the modified FAUNet highlights the fine details necessary for accurate road boundary identification in high-resolution remote sensing images. Comparative evaluations are conducted against traditional models such as U-Net, U-Net++, and a generic CNN under consistent experimental conditions, including identical datasets, loss functions, and training loops.
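The paper itself does not include code; the sketch below is a minimal, hypothetical illustration of the idea the abstract describes: gating U-Net skip-connection features with a high-pass (high-frequency) attention map so that edge detail such as road boundaries is emphasized in the single expanding path. The class name HighFreqAttention, the fixed Laplacian kernel, and the 1x1 projection are assumptions for illustration only, not the authors' exact FAUNet implementation.

```python
# Hypothetical sketch of a high-frequency attention gate for a U-Net
# expanding path (NOT the authors' published FAUNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighFreqAttention(nn.Module):
    """Gates skip-connection features by their high-frequency content."""
    def __init__(self, channels: int):
        super().__init__()
        # Fixed 3x3 Laplacian kernel applied depthwise as a high-pass filter
        # (an assumed choice of high-frequency extractor).
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        self.register_buffer(
            "kernel", lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.channels = channels
        # 1x1 projection lets the network reweight the high-pass response.
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        # Depthwise high-pass response emphasizes edges (road boundaries).
        hf = F.conv2d(skip, self.kernel, padding=1, groups=self.channels)
        # Sigmoid attention map derived from the high-frequency response.
        attn = torch.sigmoid(self.proj(hf))
        return skip * attn

# Usage inside a single expanding path: gate each skip feature map before
# concatenating it with the upsampled decoder feature.
if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # example skip-connection tensor
    gate = HighFreqAttention(64)
    print(gate(x).shape)               # torch.Size([1, 64, 128, 128])
```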

Original language: English
Title of host publication: IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 9609-9613
Number of pages: 5
ISBN (Electronic): 9798350360325
DOIs
Publication status: Published - 2024
Event: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024 - Athens, Greece
Duration: 7 Jul 2024 - 12 Jul 2024

Publication series

Name: International Geoscience and Remote Sensing Symposium (IGARSS)

Conference

Conference: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024
Country/Territory: Greece
City: Athens
Period: 7/07/24 - 12/07/24

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • Attention
  • Edge detection
  • Frequency Attention
  • Road segmentation
  • U-Net
