
CMR COLLEGE OF ENGINEERING & TECHNOLOGY

(UGC AUTONOMOUS)
(NAAC Accredited with ‘A+’ Grade & NBA Accredited)
(Approved by AICTE, Permanently Affiliated to JNTU Hyderabad)
KANDLAKOYA, MEDCHAL ROAD, HYDERABAD-501401

Image Denoising using Denoising Convolutional Neural Network [DnCNN]
Batch B19
Ch Bharath - 19H51A04C6
V Sanvithi  - 19H51A04F0
Ch Siva Sai - 19H51A04J6

Guide :
Dr. M. Nagaraju Naik [Professor]

CONTENTS
 Introduction

 Literature Survey

 Limitations of Existing Methodology 

 Abstract

 Objectives

 Proposed Methodology 

 Hardware and Software Requirements 

 Results

 References

 Source Code

INTRODUCTION
What is Digital Image Processing?
 Digital image processing deals with the manipulation of digital images through a digital computer.
 It is a subfield of signals and systems, but focuses particularly on images.
 DIP focuses on developing a computer system that is able to perform processing on an image.
 The input to the system is a digital image; the system processes that image using efficient algorithms and gives an image as the output.
 The three primary phases that constitute image processing are listed below (a minimal MATLAB sketch follows the list):
1. Importing the image using image acquisition tools
2. Processing and manipulating the image
3. Output, in which the result may be an altered image or a report based on the image analysis.
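
A minimal MATLAB sketch of these three phases (the sample image, grayscale conversion and median filter are illustrative choices only, not part of this project):

I = imread('peppers.png');              % 1. acquisition: read a built-in sample image
G = rgb2gray(I);                        % 2. processing: convert to grayscale ...
F = medfilt2(G, [3 3]);                 %    ... and apply a simple 3x3 median filter
figure, imshowpair(G, F, 'montage');    % 3. output: display the processed result
title('Grayscale input vs. median-filtered output');
imwrite(F, 'processed.png');            %    or save it for a report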

INTRODUCTION

Need of Image Denoising


 An image is often corrupted by noise during its acquisition and transmission. Image denoising is used to remove the additive noise while retaining the important signal features as much as possible.
 Generally, data sets collected by image sensors are contaminated by noise. Imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all corrupt the data of interest. Noise reduction is therefore an important technology in image analysis.
 Image denoising plays an important role in a wide range of applications such as image restoration, visual tracking, image registration, image
segmentation, and image classification, where obtaining the original image content is crucial for strong performance.

Image Enhancement
 Image enhancement is a set of techniques used to improve the quality and visual appearance of digital images. The goal of image enhancement is
to make the images more visually appealing, easier to interpret, and more useful for various applications.
 Image enhancement works by applying various techniques and algorithms to improve the visual quality of an image. These techniques fall into two main categories: point operations and spatial operations (see the short example after this list).
 Image enhancement is widely used in various fields, including medical imaging, remote sensing, surveillance, and digital photography.
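
To make the two categories concrete, here is a minimal MATLAB sketch (the sample image and the specific operations are illustrative assumptions, not the project's method). A point operation maps each pixel independently of its neighbours, whereas a spatial operation uses a pixel's neighbourhood:

I = im2double(imread('pout.tif'));   % built-in low-contrast sample image
P = imadjust(I);                     % point operation: per-pixel contrast stretching
S = imgaussfilt(I, 1.0);             % spatial operation: Gaussian smoothing over a neighbourhood
montage({I, P, S});                  % original | point-operation result | spatial-operation result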

LITERATURE SURVEY
[1] Ce Liu, Richard Szeliski, Sing Bing Kang, C. Lawrence Zitnick, William T. Freeman, "Automatic Estimation and Removal of Noise from a Single Image", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
Inference: Based on a simple piecewise-smooth image prior, a segmentation-based approach is developed to automatically estimate and remove noise from color images. The noise level function (NLF) is obtained by estimating the lower envelope of the standard deviations of image variance per segment.
Drawbacks: Only a single image is taken into consideration; segmentation does not reduce noise in all cases; if the details of the camera used are unknown, image denoising is difficult.

[2] Madhu S. Nair, K. Revathy, Rao Tatavarti, "Removal of Salt-and-Pepper Noise in Images: A New Decision-Based Algorithm", Proceedings of the International MultiConference of Engineers and Computer Scientists, 2008.
Inference: The advantage of this algorithm lies in replacing only the noisy pixel, either by the median value or by the mean of the previously processed neighbouring pixel values. Different grayscale and color images have been tested with the proposed algorithm and found to produce better PSNR and SSIM values.
Drawbacks: Quality of the image is lost when data transmission occurs.

[3] Wangmeng Zuo, Lei Zhang, Chunwei Song, David Zhang, "Texture Enhanced Image Denoising via Gradient Histogram Preservation", IEEE Conference on Computer Vision and Pattern Recognition, 2013.
Inference: A texture-enhanced image denoising (TEID) method that enforces the gradient distribution of the denoised image to be close to the estimated gradient distribution of the original image. A novel gradient histogram preservation (GHP) algorithm is developed to enhance texture structures while removing noise.
Drawbacks: The system cannot be directly applied to non-additive noise removal; more general models and algorithms for non-additive noise removal with texture enhancement need to be studied. Not suitable for colour quality measures.

[4] Julien Mairal, Francis Bach, Jean Ponce, "Sparse Modeling for Image and Vision Processing", Foundations and Trends in Computer Graphics and Vision, 2014.
Inference: The monograph presents the optimization techniques that make dictionary learning easy to use for researchers who are not experts in the field.
Drawbacks: The algorithms require tuning at least one parameter to obtain fast convergence in practice, which cannot be done automatically. The methods can be adapted to large-scale problems, but they require a careful choice of a step-size parameter and are therefore somewhat difficult to use.

[5] Kai Zhang, Wangmeng Zuo, Lei Zhang, "FFDNet: Toward a Fast and Flexible Solution for CNN-based Image Denoising", IEEE Transactions on Image Processing, 2018.
Inference: FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance.
Drawbacks: A specific model must be trained for each noise level.

[6] Bhagya Prasad Bugge, Bh.S.S.D.S. Nagendra Varma, A. Amarnath, "Image Denoising & Metric Parameters Improvement using Dictionary Learning and Sparse Coding", International Journal of Recent Technology and Engineering (IJRTE), 2019.
Inference: An alternative to the wavelet-based noise-removal method is the bilateral filter, which uses both spatial and intensity information between a pixel and its neighbouring pixels.
Drawbacks: Gaussian noise is added prior to the image denoising algorithm.

[7] Zhang Y., Wu J., Kong Y., Coatrieux G., Shu H., "Image denoising via a non-local patch graph total variation", PLoS ONE, 2019.
Inference: Unlike most continuous total-variation-based methods for image denoising, the problem is formulated using a graph-spectral approach: an image is represented as an undirected graph whose edge weights are computed by means of a Gaussian kernel function.
Drawbacks: Increases complexity, as graph construction is difficult.

[8] M. Sarvesh, M. Sivagami, N. Maheswari, "Removal of Noise in an Image using Boundary Detection Technique", Journal of Physics: Conference Series, 2021.
Inference: Image data recorded by devices contain mistakes or noise due to the geometry and brightness values of the pixels. This approach removes noise from the image using local neighbourhood processing and preserves the edges using a boundary-based approach. The noise-filtered image is tested against the entropy parameter to validate the result.
Drawbacks: Only about 50% noise removal is possible, and the method is applicable only to salt-and-pepper noise.

[9] Keya Huang, Hairong Zhu (School of Mechanical and Electric Engineering, Soochow University, Suzhou, China), "Image Noise Removal Method Based on Improved Nonlocal Mean Algorithm", 2021.
Inference: Aimed at the problem of unclear images acquired in interactive systems; compared with the traditional non-local mean algorithm, the proposed algorithm achieves better visual quality and peak signal-to-noise ratio (PSNR) on complex noisy images.
Drawbacks: Only black-and-white or grayscale images are used; the method cannot be applied to colour images. The time spent on image denoising could be reduced.

LIMITATIONS OF EXISTING METHODOLOGY

Despite their high denoising quality, most image prior-based methods typically suffer from major drawbacks:
 These methods generally involve a complex optimization problem in the testing stage, making the denoising process time-consuming. Thus, most prior-based methods can hardly achieve high performance without sacrificing computational efficiency.
 The models are in general non-convex and involve several manually chosen parameters, providing some leeway to boost denoising performance.
 A large amount of training data is needed for a CNN to be effective.
To overcome the limitations of prior-based approaches, several discriminative learning methods have been recently developed
to learn image prior models.

ABSTRACT
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance.
In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to
embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning
and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing
discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our
DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy,
DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle
several general image denoising tasks such as Gaussian denoising, single image super-resolution and JPEG image deblocking. Our extensive
experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be
efficiently implemented by benefiting from GPU computing.

OBJECTIVES

 Training the model using deep learning and developing the algorithm with the help of CNN for Image Denoising.

 Comparison of quality metrics such as PSNR, SSIM and MSE with existing methods to achieve better results (a short metric-computation sketch follows this list).

 To restore the fine details of the denoised output image through Image Enhancement.
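
A minimal sketch of how the three metrics above could be computed in MATLAB (assuming the Image Processing Toolbox functions psnr, ssim and immse, and that I and denoisedI hold the clean reference and the denoised result):

peaksnr = psnr(denoisedI, I);    % peak signal-to-noise ratio, in dB
ssimval = ssim(denoisedI, I);    % structural similarity index, typically in [0, 1]
mseval  = immse(denoisedI, I);   % mean squared error
fprintf('PSNR = %.3f dB, SSIM = %.4f, MSE = %.2f\n', peaksnr, ssimval, mseval);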

PROPOSED METHODOLOGY BLOCK DIAGRAM

Input Image (from Data Set) → Degradation Process → Model Training using DnCNN → Image Enhancement → Output Image

PROPOSED METHODOLOGY
 Image Acquisition and Pre-Processing
We take grayscale images from the database as the input for image acquisition.
a. Noise Addition:
The images in our database are not captured by us; they are downloaded from the internet. These images do not contain noise, so noise is added to them (a short sketch of this step follows). If the database consisted of natural images that already contain noise, noise would not need to be added.
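
A minimal sketch of the noise-addition step (the folder names and the noise variance are illustrative assumptions):

imds = imageDatastore('dataset/clean');                    % assumed folder of clean grayscale images
outDir = 'dataset/noisy';                                  % assumed output folder for the noisy copies
if ~exist(outDir, 'dir'), mkdir(outDir); end
while hasdata(imds)
    [I, info] = read(imds);                                % read the next clean image
    noisyI = imnoise(im2double(I), 'gaussian', 0, 0.01);   % add zero-mean Gaussian noise, variance 0.01
    [~, name, ext] = fileparts(info.Filename);
    imwrite(noisyI, fullfile(outDir, [name ext]));         % save the noisy counterpart
end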

 Model Training using DnCNN

 The size of the convolutional filters is set to 3×3 and all pooling layers are removed. Therefore, the receptive field of a DnCNN with depth D is (2D+1)×(2D+1).

 For Gaussian denoising at a certain noise level, the receptive field size of DnCNN is set to 35×35, corresponding to a depth of 17. For other general image denoising tasks, a larger receptive field is adopted by setting the depth to 20.

 The residual learning formulation is adopted: for a noisy observation y = x + v, the network is trained to learn the residual mapping R(y) ≈ v, so the clean image is recovered as x = y − R(y). Thus, R(y) is learnt rather than x.

 To be specific, there are three types of layers (a minimal layer-stack sketch follows this description).


(i) Conv+ReLU: For the first layer, 64 filters of size 3×3×c are used to generate 64 feature maps. c = 1 for gray image and c = 3 for color
image.
(ii) Conv+BN+ReLU: For layers 2 to (D-1), 64 filters of size 3×3×64 are used, and batch normalization is added between convolution
and ReLU.
(iii) Conv: for the last layer, c filters of size 3×3×64 are used to reconstruct the output.

 By incorporating convolution with ReLU, DnCNN can gradually separate image structure from the noisy observation through the hidden layers.
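
The layer arrangement described above can be sketched in MATLAB as follows (a minimal illustration assuming grayscale input, depth D = 17 and 50×50 training patches; the full construction used in this project is the dnCNNLayers function listed under Source Code):

D = 17; c = 1;                                             % network depth and number of image channels
layers = [ imageInputLayer([50 50 c], 'Normalization', 'none')
           convolution2dLayer(3, 64, 'Padding', 1)         % (i)   Conv + ReLU
           reluLayer ];
for k = 2:D-1                                              % (ii)  Conv + BN + ReLU
    layers = [layers
              convolution2dLayer(3, 64, 'Padding', 1)
              batchNormalizationLayer
              reluLayer]; %#ok<AGROW>
end
layers = [layers
          convolution2dLayer(3, c, 'Padding', 1)           % (iii) Conv, reconstructs the residual
          regressionLayer];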

 Image Enhancement
Image enhancement is applied to the denoised image to restore details that may have been lost during denoising (a hedged sketch of one possible post-processing step follows).
Hence the output image has minimal noise and better detail.
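
The slides do not specify which enhancement operations are used; as one possibility (an illustrative assumption, not the project's exact method), a mild unsharp mask followed by a contrast stretch could be applied to the denoised result denoisedI:

enhancedI = imsharpen(denoisedI, 'Radius', 1.5, 'Amount', 0.6);   % unsharp mask to recover edge detail
enhancedI = imadjust(enhancedI, stretchlim(enhancedI, 0.01));     % clip 1% tails and stretch contrast
figure, imshowpair(denoisedI, enhancedI, 'montage');
title('Denoised vs. enhanced');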

HARDWARE & SOFTWARE REQUIREMENTS

HARDWARE: Personal Computer

Processor – Intel Core i3 or better (minimum quad-core)
RAM – 8 GB (minimum), 16 GB (recommended)
Storage – 40–45 GB (SSD drive)
OS – Windows 7 and above

SOFTWARE: MATLAB R2022b (version 9.13)

NOTES:
1. An online MATLAB Live Editor is needed to execute the current code.
2. The Deep Learning Toolbox is required for the pre-trained denoising CNN.

RESULTS

Fig 1: Input Image (PNG, 87.6 KB, 551 × 577)
Fig 2: Noisy Image (PNG, 120.4 KB, 551 × 577)
Fig 3: Denoised Output Image (PNG, 86.5 KB, 551 × 577)

RESULTS

Fig 4: Input Image (PNG, 68 KB, 582 × 609)
Fig 5: Noisy Image (PNG, 138 KB, 582 × 604)
Fig 6: Denoised Output Image (PNG, 62 KB, 570 × 592)

RESULTS

Fig 7: Input Image (PNG, 78 KB, 576 × 602)
Fig 8: Noisy Image (PNG, 140 KB, 570 × 593)
Fig 9: Denoised Output Image (PNG, 74 KB, 578 × 594)

METRIC CALCULATIONS
Image   Description   PSNR (dB)   SSIM
1       Original      20.089      103.00
1       Denoised      29.099       79.00
2       Original      20.203      109.00
2       Denoised      27.908       72.00
3       Original      20.057      104.00
3       Denoised      28.888       67.00

REFERENCES
[1] Ce Liu, Richard Szeliski, Sing Bing Kang, C. Lawrence Zitnick and William T. Freeman, "Automatic Estimation and Removal of Noise from a Single Image", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 2, February 2008.

[2] Madhu S. Nair, K. Revathy, and Rao Tatavarti, "Removal of Salt-and-Pepper Noise in Images: A New Decision-Based Algorithm", Proceedings of the International MultiConference of Engineers and Computer Scientists 2008, Vol. I, IMECS 2008, 19-21 March 2008, Hong Kong.

[3] Wangmeng Zuo, Lei Zhang, Chunwei Song, and David Zhang, "Texture Enhanced Image Denoising via Gradient Histogram Preservation", IEEE Conference on Computer Vision and Pattern Recognition, 23-28 June 2013.

[4] Julien Mairal, Francis Bach, and Jean Ponce, "Sparse Modeling for Image and Vision Processing", Foundations and Trends in Computer Graphics and Vision, Vol. 8, No. 2-3, pp. 85-283, 19 Dec 2014.

[5] Kai Zhang, Wangmeng Zuo, and Lei Zhang, "FFDNet: Toward a Fast and Flexible Solution for CNN-based Image Denoising", IEEE Transactions on Image Processing, Vol. 27, Issue 9, September 2018.

[6] Bhagya Prasad Bugge, Bhadra Varma, and A. Amarnath, "Image Denoising & Metric Parameters Improvement using Dictionary Learning and Sparse Coding", International Journal of Recent Technology and Engineering (IJRTE), Blue Eyes Intelligence Engineering & Sciences Publication, 2019.

[7] Zhang Y., Wu J., Kong Y., Coatrieux G., and Shu H., "Image denoising via a non-local patch graph total variation", PLoS ONE 14(12): e0226067, 2019. https://doi.org/10.1371/journal.pone.0226067

[8] M. Sarvesh, M. Sivagami, and N. Maheswari, "Removal of Noise in an Image using Boundary Detection Technique", Journal of Physics: Conference Series 1911 (2021) 012018, IOP Publishing. doi:10.1088/1742-6596/1911/1/012018

[9] Keya Huang and Hairong Zhu, "Image Noise Removal Method Based on Improved Nonlocal Mean Algorithm", Volume 2021, Article ID 5578788. https://doi.org/10.1155/2021/5578788

SOURCE CODE
clear; close all; clc;

% Read the clean reference image
I = imread('05.png');
figure, imshow(I); title('Input Image');

% Corrupt the input with Gaussian noise
noisyI = imnoise(I, 'gaussian', 0.02);
figure, imshow(noisyI); title('Noisy Image');

% Load the pre-trained DnCNN (Deep Learning Toolbox) and denoise the image
net = denoisingNetwork('DnCNN');
denoisedI = denoiseImage(noisyI, net);
figure, imshow(denoisedI); title('Output');

% Metric values (measerr, from the Wavelet Toolbox, returns PSNR and MSE)
disp('Metric values');
fprintf('\n');

disp('Noisy image values');
[psnrNoisy, mseNoisy] = measerr(I, noisyI);
fprintf('\npsnr: %f', psnrNoisy(:,:,1));
fprintf('\nmse: %f\n', mseNoisy(:,:,1));

disp('Denoised image values');
[psnrDenoised, mseDenoised] = measerr(I, denoisedI);   % compare the denoised result, not the noisy one
fprintf('\npsnr: %f', psnrDenoised(:,:,1));
fprintf('\nmse: %f\n', mseDenoised(:,:,1));
function net = denoisingNetwork(ModelName)
images.internal.requiresNeuralNetworkToolbox(mfilename);

narginchk(1,1);
validateModelName(ModelName);

validModelNames = {'dncnn'};
ModelName = validatestring(ModelName, validModelNames, mfilename, 'ModelName');

switch ModelName
case ('dncnn')
data = load('defaultDnCNN-B-Grayscale.mat');
net = data.net;
end
end

function validateModelName(ModelName)
supportedClasses = {'char','string'};
attributes = {'nonempty','scalartext'};
validateattributes(ModelName, supportedClasses, attributes, mfilename, 'ModelName');
end

function I = denoiseImage(A, net)
matlab.images.internal.errorIfgpuArray(A, net);
images.internal.requiresNeuralNetworkToolbox(mfilename);

narginchk(2,2);

validateInputImage(A);
validateInputNetwork(net);

channelCompatible = net.Layers(1).InputSize(3) == size(A,3) && ...
    net.Layers(end-1).NumFilters == size(A,3);
if ~channelCompatible
error(message('images:denoiseImage:incompatibleImageNetwork'));
end

classOfA = class(A);
if ~isa(A,'single')
inputImage = im2single(A);
else
inputImage = A;
end

numLayers = size(net.Layers,1);
res = activations(net,inputImage,numLayers-1,'OutputAs','channels');

I = inputImage - res;

if isinteger(A)
I = cast(double(intmax(classOfA))*I,classOfA);
elseif isa(A,'double')
I = cast(I,'double');
end

end

function validateInputImage(A)
supportedClasses = {'uint8','uint16','single','double'};
attributes = {'nonempty','nonsparse','real','nonnan','finite'};

validateattributes(A, supportedClasses, attributes, mfilename, 'A');


if ndims(A) > 4
error(message('images:denoiseImage:invalidImageFormat'));
end
end

function validateInputNetwork(net)
supportedClasses = {'SeriesNetwork','DAGNetwork'};
attributes = {'nonempty','nonsparse'};
validateattributes(net, supportedClasses, attributes, mfilename, 'net');
validateattributes(net.Layers(end), {'nnet.cnn.layer.RegressionOutputLayer'}, ...
    attributes, mfilename, 'net');
if ~isa(net.Layers(end-1),'nnet.cnn.layer.Convolution2DLayer')
error(message('images:denoiseImage:lastLayerNotConv2d'));
end
end
function layers = dnCNNLayers(varargin)
images.internal.requiresNeuralNetworkToolbox(mfilename);

narginchk(0,2);
options = parseInputs(varargin{:});
NetworkDepth = options.NetworkDepth;

layers = getDnCNNLayers('grayscale', NetworkDepth);


end

function options = parseInputs(varargin)


parser = inputParser();
parser.addParameter('NetworkDepth',20,@validateNetworkDepth);
parser.parse(varargin{:});
options = parser.Results;
end

function validateNetworkDepth(NetworkDepth)
supportedClasses = images.internal.iptnumerictypes;
attributes = {'nonempty','nonsparse','real','nonnan','finite', 'integer', ...
'>=',3,'positive','nonzero','scalar'};
validateattributes(NetworkDepth,supportedClasses,attributes,mfilename, ...
'NetworkDepth');
end

function layers = getDnCNNLayers(ChannelFormat, NetworkDepth)

if strcmp(ChannelFormat,'grayscale')
c = 1;
else
c = 3;
end
layers = imageInputLayer([50 50 c],'Name','InputLayer','Normalization','none');
convLayer = convolution2dLayer(3,64,...
'Padding', 1, ...
'BiasL2Factor', 0,...
'Name', 'Conv1');
% He initialization
convLayer.Weights = sqrt(2/(9*64))*randn(3,3,c,64);
convLayer.Bias = zeros(1,1,64);

relLayer = reluLayer('Name', 'ReLU1');


layers = [layers convLayer relLayer];

for layerNumber = 2:NetworkDepth-1


convLayer = convolution2dLayer(3, 64,...
'BiasLearnRateFactor',0,...
'BiasL2Factor', 0,...
'Padding', [1 1],...
'Name', ['Conv' num2str(layerNumber)]);
% He initialization
convLayer.Weights = sqrt(2/(9*64))*randn(3,3,64,64);
convLayer.Bias = zeros(1,1,64);

scaleInit = sqrt(2/(9*64))*randn(1,1,64);
batchNormLayer = batchNormalizationLayer('Offset',zeros(1,1,64),...
'Scale', scaleInit,...
'OffsetL2Factor',0,...
'ScaleL2Factor',0,...
'Name',['BNorm' num2str(layerNumber)]);

relLayer = reluLayer('Name', ['ReLU' num2str(layerNumber)]);


layers = [layers convLayer batchNormLayer relLayer]; %#ok<AGROW>
end
convLayer = convolution2dLayer(3,c,...
'NumChannels',64,...
'Padding', [1 1],...
'BiasL2Factor', 0,...
'Name', ['Conv' num2str(NetworkDepth)]);
convLayer.Weights = sqrt(2/(9*64))*randn(3,3,64,c);
convLayer.Bias = zeros(1,1,c);

layers = [layers convLayer];


layers = [layers regressionLayer('Name','FinalRegressionLayer')];

end
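
For completeness, a minimal sketch of how a DnCNN built with dnCNNLayers could be trained (assuming the Image Processing Toolbox denoisingImageDatastore and Deep Learning Toolbox trainNetwork; the folder name and hyperparameters are illustrative assumptions, not the settings used in this project):

imds = imageDatastore('dataset/clean');                    % assumed folder of clean grayscale images
dnds = denoisingImageDatastore(imds, ...
    'PatchesPerImage', 512, ...
    'PatchSize', 50, ...
    'GaussianNoiseLevel', [0.01 0.1], ...                  % a range of noise levels for blind denoising
    'ChannelFormat', 'grayscale');

layers  = dnCNNLayers('NetworkDepth', 20);                 % the layer graph defined above
options = trainingOptions('adam', ...
    'MaxEpochs', 30, ...
    'InitialLearnRate', 1e-3, ...
    'MiniBatchSize', 128, ...
    'Plots', 'training-progress');

net = trainNetwork(dnds, layers, options);                 % returns a trained network for denoiseImage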

THANK YOU!
