Supervised learning of smoothing parameters in image restoration by regularization under cellular neural networks framework

Bilge Gunsel*, Cuneyt Guzelis

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

4 Citations (Scopus)

Abstract

Estimation of smoothing parameters is one of the difficult problems in applying regularization techniques to image restoration. The objective of this paper is to show that Cellular Neural Networks (CNNs) combined with a learning algorithm can be useful for adaptive learning of the smoothing parameters of regularization. To this end, a CNN model is first designed to minimize a regularization cost function of quadratic form. The connection weights of this CNN are obtained by comparing the cost function with a Lyapunov function of the CNN. Unlike the common approaches in the literature, instead of learning the connection weights of a neural network, we propose supervised learning of the regularization smoothing parameters by a modified version of the Recurrent Perceptron Learning Algorithm (RPLA) [1], which was recently developed for completely stable CNNs operating in a bipolar binary output mode. It is concluded that CNNs trained with the RPLA allow us to determine a set of suitable smoothing parameters, resulting in robust restoration of noisy images. For comparison purposes, experimental results obtained with a median filter are also reported.
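As a rough illustration of the underlying idea only (this is not the paper's CNN or RPLA implementation; the particular quadratic cost, smoothness operator, and lambda-update rule below are assumptions made for the sketch), the following Python snippet restores a noisy image by gradient descent on a quadratic regularization cost E(f) = ||f - g||^2 + lambda * sum of squared neighbour differences, and tunes the smoothing parameter lambda in a supervised, error-driven fashion against a known clean training image:

import numpy as np

def laplacian(f):
    # discrete Laplacian with replicated borders (smoothness operator)
    p = np.pad(f, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f

def restore(g, lam, steps=300, step=0.1):
    # gradient descent on E(f) = ||f - g||^2 + lam * ||grad f||^2
    f = g.copy()
    for _ in range(steps):
        grad = 2.0 * (f - g) - 2.0 * lam * laplacian(f)
        f = f - step * grad
    return f

def learn_lambda(g, clean, lam=0.1, epochs=20, eta=0.05):
    # supervised tuning of lam: nudge it in the direction that reduces
    # the restoration error on a known clean image (a simple stand-in
    # for learning the smoothing parameter from training data)
    for _ in range(epochs):
        err_now = np.mean((restore(g, lam) - clean) ** 2)
        err_up = np.mean((restore(g, lam * (1.0 + eta)) - clean) ** 2)
        lam *= (1.0 + eta) if err_up < err_now else (1.0 - eta)
    return lam

# example usage on a synthetic training pair
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
lam = learn_lambda(noisy, clean)
restored = restore(noisy, lam)

In the paper, by contrast, the quadratic cost is minimized by a CNN whose connection weights are read off by matching the cost against the network's Lyapunov function, and the smoothing parameters are learned with the modified RPLA rather than the finite-difference update sketched above.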

Original language: English
Pages: 470-473
Number of pages: 4
Publication status: Published - 1996
Event: Proceedings of the 1995 IEEE International Conference on Image Processing. Part 3 (of 3) - Washington, DC, USA
Duration: 23 Oct 1995 - 26 Oct 1995

Conference

Conference: Proceedings of the 1995 IEEE International Conference on Image Processing. Part 3 (of 3)
City: Washington, DC, USA
Period: 23/10/95 - 26/10/95
