Benefiting from multitask learning to improve single image super-resolution

Mohammad Saeed Rad*, Behzad Bozorgtabar, Claudiu Musat, Urs Viktor Marti, Max Basler, Hazım Kemal Ekenel, Jean Philippe Thiran

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Despite significant progress toward super-resolving more realistic images with deeper convolutional neural networks (CNNs), reconstructing fine and natural textures still remains a challenging problem. Recent works on single image super-resolution (SISR) are mostly based on optimizing pixel-wise and content-wise similarity between the recovered and high-resolution (HR) images and do not benefit from the recognizability of semantic classes. In this paper, we introduce a novel approach that uses categorical information to tackle the SISR problem; we present a decoder architecture able to extract and use semantic information to super-resolve a given image through multitask learning, performed simultaneously for image super-resolution and semantic segmentation. To explore categorical information during training, the proposed decoder employs only one shared deep network with two task-specific output layers. At run-time, only the layers producing the HR image are used and no segmentation label is required. Extensive perceptual experiments and a user study on images randomly selected from the COCO-Stuff dataset demonstrate the effectiveness of our proposed method, which outperforms state-of-the-art methods.
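To make the multitask setup described above concrete, the following is a minimal sketch of a shared deep network with two task-specific output layers (a super-resolution head and a segmentation head), where only the SR path is evaluated at run-time. The layer sizes, PyTorch implementation, class count, and loss weighting are illustrative assumptions, not the authors' exact architecture or training objective.

```python
# Minimal multitask sketch: one shared backbone feeds two task-specific heads
# (super-resolution and semantic segmentation). Layer choices and the loss
# weighting below are assumptions for illustration, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultitaskSR(nn.Module):
    def __init__(self, num_classes=182, scale=4, channels=64):
        super().__init__()
        # Shared deep network used by both tasks during training.
        self.shared = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific head 1: pixel-shuffle upsampling to the HR image.
        self.sr_head = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Task-specific head 2: per-pixel class logits (training only).
        self.seg_head = nn.Conv2d(channels, num_classes, 3, padding=1)

    def forward(self, lr_image, with_segmentation=True):
        features = self.shared(lr_image)
        sr = self.sr_head(features)
        if not with_segmentation:        # run-time path: SR output only
            return sr
        seg = self.seg_head(features)    # segmentation logits at LR resolution
        return sr, seg


def multitask_loss(sr, hr, seg_logits, seg_labels, seg_weight=0.1):
    """Joint objective: reconstruction loss plus a weighted segmentation loss."""
    return F.l1_loss(sr, hr) + seg_weight * F.cross_entropy(seg_logits, seg_labels)


if __name__ == "__main__":
    model = MultitaskSR()
    lr = torch.randn(2, 3, 32, 32)                # low-resolution input batch
    hr = torch.randn(2, 3, 128, 128)              # ground-truth HR images
    labels = torch.randint(0, 182, (2, 32, 32))   # per-pixel class labels
    sr, seg = model(lr)                           # training: both heads
    multitask_loss(sr, hr, seg, labels).backward()
    sr_only = model(lr, with_segmentation=False)  # inference: SR branch only
```

Because the segmentation head shares all of its features with the SR branch, the categorical supervision shapes the shared representation during training while adding no cost at inference, which is the property the abstract emphasizes.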

Original language: English
Pages (from-to): 304-313
Number of pages: 10
Journal: Neurocomputing
Volume: 398
DOIs
Publication status: Published - 20 Jul 2020

Bibliographical note

Publisher Copyright:
© 2019 Elsevier B.V.

Keywords

  • Generative adversarial network
  • Multitask learning
  • Recovering realistic textures
  • Semantic segmentation
  • Single image super-resolution
