Nonlocal adaptive direction-guided structure tensor total variation for image recovery

Ezgi Demircan-Tureyen*, Mustafa E. Kamasak

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

A common strategy in variational image recovery is to exploit the nonlocal self-similarity property when designing energy functionals. One such contribution is nonlocal structure tensor total variation (NLSTV), which lies at the core of this study. This paper is concerned with boosting the NLSTV regularization term through the use of directional priors. More specifically, NLSTV is leveraged so that, at each image point, it gains more sensitivity in the direction presumed to have the minimum local variation. The main difficulty is capturing this directional information from the corrupted image. To this end, we propose a method that employs anisotropic Gaussian kernels to estimate the directional features later used by the proposed model. The experiments validate that our two-stage framework achieves better results than the NLSTV model and two other competing local models, in terms of both visual and quantitative evaluation.
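The orientation-estimation step can be pictured with a small sketch. The following is a minimal, hypothetical illustration, not the authors' published algorithm: it builds a bank of elongated anisotropic Gaussian kernels, smooths the image with each, and at every pixel keeps the angle whose smoothing perturbs the intensity the least, taking that as the direction of minimum local variation. All function names and parameter values here are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def anisotropic_gaussian_kernel(theta, sigma_u=3.0, sigma_v=1.0, size=15):
    """Elongated 2-D Gaussian oriented at angle theta (radians).

    sigma_u is the standard deviation along the kernel's long axis,
    sigma_v along its short axis; sigma_u > sigma_v makes it anisotropic.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate pixel coordinates into the kernel's (u, v) frame.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()

def estimate_orientation_field(img, n_angles=12):
    """Per-pixel orientation of minimum local variation (illustrative).

    Smooths the image with a bank of oriented anisotropic Gaussians and,
    at each pixel, keeps the angle whose smoothed result deviates least
    from the original intensity: a kernel aligned with the local
    structure blurs it the least.
    """
    img = np.asarray(img, dtype=float)
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    residuals = np.stack([
        np.abs(convolve(img, anisotropic_gaussian_kernel(t)) - img)
        for t in angles
    ])
    return angles[np.argmin(residuals, axis=0)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64))
    theta = estimate_orientation_field(img)
    print(theta.shape)  # (64, 64): one angle in [0, pi) per pixel
```

In a two-stage framework of the kind the abstract describes, an orientation field like this would then steer the directional sensitivity of the NLSTV regularizer at each image point.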

Original language: English
Pages (from-to): 1517-1525
Number of pages: 9
Journal: Signal, Image and Video Processing
Volume: 15
Issue number: 7
DOIs
Publication status: Published - Oct 2021

Bibliographical note

Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.

Funding

This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 115R285.

Funders:
TUBITAK (funder number 115R285)
Türkiye Bilimsel ve Teknolojik Araştırma Kurumu

Keywords

• Directional total variation
• Image recovery
• Nonlocal regularization
• Orientation field estimation
• Structure tensor
