Abstract
Unfolding is a powerful approach for improving deep network performance in image restoration problems. Recent results in the literature have demonstrated the improvement achieved by unfolding structures when compared to a single, non-unfolded application of the same network. Recently, unfolding models have been proposed as promising solutions for the Magnetic Resonance Image (MRI) reconstruction problem. In this work we propose a novel deep unfolding structure for MR image reconstruction. Inspired by conventional iterative thresholding-based reconstruction models, we introduce an adaptive noise level parameter into the unfolding structure. The noise level parameter is computed at each iteration from the error between the current network output and the initial zero-filling estimate. This parameter is supplied as an additional input to the network and acts as an evolving regularizer on the strength of the network's image manipulation over the unrolling iterations. Introducing this adaptivity over iterations during training also improves the deep models' reconstructed image quality at inference. Empirical results indicate that the proposed technique converges to better reconstruction results than state-of-the-art unfolding structures that lack such an adaptive parameter. The additional adaptive parameter introduces only a marginal increase in parameter complexity, and the required reconstruction times remain very similar. In this study, the statistical differences between the developed techniques are investigated using one-way ANOVA. Additionally, a t-test is used to assess the difference between the means of the two proposed structures. These results indicate that the differences in the performance metrics are statistically significant.
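A minimal sketch of the unrolled loop described above, assuming a PyTorch-style denoising network that accepts a scalar noise-level input; the names `denoiser` and `data_consistency`, and the norm-based noise-level estimate, are illustrative assumptions rather than the authors' exact formulation.

```python
import torch

def unrolled_reconstruction(zero_fill, denoiser, data_consistency, n_iters=10):
    """Illustrative unrolled MRI reconstruction with an adaptive noise-level input.

    zero_fill:        initial zero-filled reconstruction, shape (B, C, H, W)
    denoiser:         network taking (image, noise_level) and returning a refined image
    data_consistency: callable enforcing consistency with the acquired k-space samples
    """
    x = zero_fill
    for _ in range(n_iters):
        # Adaptive noise-level estimate: distance between the current estimate and the
        # initial zero-filled image (one plausible choice of error measure, assumed here).
        sigma = torch.linalg.vector_norm(x - zero_fill) / zero_fill.numel() ** 0.5

        # The noise level is passed to the network as an extra input, acting as an
        # evolving regularizer on how strongly the network alters the image.
        x = denoiser(x, sigma)

        # Enforce fidelity to the measured k-space data after each network pass.
        x = data_consistency(x)
    return x
```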
| Original language | English |
|---|---|
| Article number | 104016 |
| Journal | Biomedical Signal Processing and Control |
| Volume | 78 |
| DOIs | |
| Publication status | Published - Sep 2022 |
Bibliographic note
Publisher Copyright: © 2022 Elsevier Ltd
Funding
This work was supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under project no. 119E248.
| Funders | Funder number |
|---|---|
| Türkiye Bilimsel ve Teknolojik Araştırma Kurumu | 119E248 |