
IMAGE DEBLURRING

Kowshic
Avinash
Vidit Shah
Dinesh
Jaheer Khan
Indhu Lekha
01
Introduction
Image deblurring refers to the process of removing blurriness or restoring the sharpness of
an image. Image blur is a common problem in photography and computer vision, and it can
be caused by a variety of factors, including camera shake, poor lighting, low-quality camera
lenses, and motion of the object or camera during image capture.

There are many different approaches to image deblurring, including blind deconvolution,
non-blind deconvolution, and deep learning-based methods. Each of these approaches has
its own advantages and disadvantages, and the choice of method depends on the specific
requirements of the application.
02
Literature Survey
Paper Title: "Blind Image Deblurring Using Dark Channel Prior"
Authors: Kaiming He, Jian Sun, and Xiaoou Tang
Methodology: This paper proposes a blind image deblurring method that uses the dark channel prior, a statistic of natural images that captures the local minimum of image intensity. The method estimates the blur kernel and the latent image simultaneously by solving a convex optimization problem.

Paper Title: "Regularization by Denoising (RED) for Neural Network-Aided Image Restoration"
Authors: K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang
Methodology: This paper proposes a deep learning-based image deblurring method that uses the RED algorithm, which regularizes the solution by denoising it with a deep denoising network. The method achieves state-of-the-art results on several benchmark datasets.

Paper Title: "Learning to Deblur"
Authors: K. Xu, J. Ren, C. Liu, and J. Jia
Methodology: This paper proposes a deep learning-based image deblurring method that uses a convolutional neural network (CNN) to learn a mapping between blurred and sharp images. The method also incorporates a spatially varying blur model to handle motion blur.

Paper Title: "Fast and Robust Multi-Image Blind Deblurring"
Authors: Cho et al.
Methodology: This paper presents a method for multi-image blind deblurring, which is useful when multiple images of the same scene are available but each has a different blur. The method is fast and robust, using a sparsity-based approach that exploits the statistical properties of natural images.

Paper Title: "Deep Generative Prior for Blind Deblurring"
Authors: Zhang et al.
Methodology: This paper proposes a deep generative prior approach to blind deblurring, which combines a CNN with a generative adversarial network (GAN). The method is able to generate realistic sharp images from blurred inputs even when the blur kernel is unknown.

Paper Title: "Image Deblurring with Blurred/Noisy Image Pairs"
Authors: Xu et al.
Methodology: This paper presents a method for image deblurring using pairs of blurred and noisy images. The method is based on a deep learning approach that jointly learns to remove both blur and noise and is able to handle a range of blur kernels and noise levels.
03
Methodology
Encoder model - We build a stack of Conv2D(64) - Conv2D(128) - Conv2D(256). The model takes an input of shape (128, 128, 3) with kernel size 3. The encoder compresses this to shape (16, 16, 256) and then flattens it into a one-dimensional vector, which becomes the input to the decoder.

Decoder model - similar to the encoder model but performing the reverse computations. We first reshape the one-dimensional vector from the encoder back to (16, 16, 256) and then pass it through the decoder to recover the (128, 128, 3) shape. The stack here is Conv2DTranspose(256) - Conv2DTranspose(128) - Conv2DTranspose(64). A sketch of this architecture follows below.
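Below is a minimal Keras sketch of this encoder-decoder, assuming stride-2 convolutions with "same" padding (to go from 128x128 down to 16x16 and back) and a final 3-channel output layer; these details are not spelled out in the slides.

# Sketch of the encoder-decoder described above (strides/padding are assumptions).
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128, 128, 3))

# Encoder: (128, 128, 3) -> (16, 16, 256), then flattened
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
encoded = layers.Flatten()(x)

# Decoder: reshape back to (16, 16, 256), then upsample to (128, 128, 3)
x = layers.Reshape((16, 16, 256))(encoded)
x = layers.Conv2DTranspose(256, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)  # assumed output layer

autoencoder = keras.Model(inputs, outputs)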

We chose the loss function to be Mean Squared Error, the optimizer to be Adam, and the evaluation metric to be Accuracy.

We also defined a learning rate reducer to lower the learning rate when there is no improvement in the monitored metric.
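A sketch of this training configuration, assuming the autoencoder model from the sketch above; the monitored quantity, reduction factor, patience, and data array names are illustrative assumptions.

# Training setup: MSE loss, Adam optimizer, accuracy metric, and a
# learning-rate reducer that fires when the monitored metric stops improving.
from tensorflow import keras

autoencoder.compile(optimizer="adam", loss="mse", metrics=["accuracy"])

reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_accuracy",  # assumed monitored metric
    factor=0.5,              # illustrative reduction factor
    patience=3,              # illustrative patience
)

# Hypothetical training call (blurred_train/sharp_train etc. are placeholder names):
# autoencoder.fit(blurred_train, sharp_train,
#                 validation_data=(blurred_val, sharp_val),
#                 epochs=50, batch_size=32, callbacks=[reduce_lr])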
[Figures: Encoder Model, Decoder Model, and Auto Encoder diagrams]
04
Work Done
The main aim of the project is hyperparameter tuning. Below are some of the hyperparameters that we varied to understand their effect on accuracy.

Activation Function : It determines the output of a neuron by mapping the resulting values into a range such as 0 to 1 or -1 to 1, depending on the function. The activation functions we used here (see the sketch after this list) are:

1. ReLU activation function
2. Leaky ReLU activation function
3. Tanh (hyperbolic tangent) activation function
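A small sketch of how the activation could be swapped per convolution block; conv_block is a hypothetical helper, not the exact project code.

from tensorflow.keras import layers

def conv_block(x, filters, activation="relu"):
    # One encoder block with a configurable activation function.
    x = layers.Conv2D(filters, 3, strides=2, padding="same")(x)
    if activation == "leaky_relu":
        return layers.LeakyReLU()(x)        # Leaky ReLU is applied as its own layer
    return layers.Activation(activation)(x)  # "relu" or "tanh"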

Batch Size : The batch size is the number of samples processed before the model is updated. It must be at least one and at most the number of samples in the training dataset. Networks typically train faster with mini-batches, because the weights are updated after every batch rather than after a full pass over the data. A usage sketch follows below.
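A sketch of how the batch sizes in the table below could be compared; build_autoencoder is a hypothetical factory (similar to the one sketched under "Architecture of encoder and decoder" below) that returns a fresh model for each run, and the data array names are placeholders.

# Comparing batch sizes: in Keras the batch size is simply an argument to fit().
for batch_size in [8, 16, 32, 64]:
    model = build_autoencoder()  # hypothetical: fresh, untrained model per run
    model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
    model.fit(blurred_train, sharp_train,               # placeholder data names
              validation_data=(blurred_val, sharp_val),
              epochs=20, batch_size=batch_size)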
Optimizer : An optimizer is an algorithm that adapts the neural network's attributes, such as the learning rate and the weights, and so helps improve accuracy and reduce the total loss. Choosing the right optimizer and its settings for a given model is not trivial.
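For illustration, other common Keras optimizers could be swapped into compile() in the same way; the learning rates shown are illustrative values, not settings taken from the project.

from tensorflow import keras

# Any of these could replace "adam" in autoencoder.compile(optimizer=...).
optimizers = {
    "adam": keras.optimizers.Adam(learning_rate=1e-3),
    "sgd": keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
    "rmsprop": keras.optimizers.RMSprop(learning_rate=1e-3),
}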

Architecture of encoder and decoder:

The encoder and decoder can consist of a varying number of layers, with each layer containing a different number of filters. We can change these parameters and observe the result; a parameterized sketch is shown below.
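A sketch of how the filter lists in the table below could parameterize the encoder and decoder; the helper name and the final output layer are assumptions, and the Flatten/Reshape bottleneck from the earlier sketch is omitted for brevity.

from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder(filters=(64, 128, 256), input_shape=(128, 128, 3)):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    # Encoder: one stride-2 Conv2D per entry in `filters`
    for f in filters:
        x = layers.Conv2D(f, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: mirror the encoder with Conv2DTranspose layers
    for f in reversed(filters):
        x = layers.Conv2DTranspose(f, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(input_shape[-1], 3, padding="same", activation="sigmoid")(x)
    return keras.Model(inputs, outputs)

# e.g. build_autoencoder((64, 128)), build_autoencoder((64, 128, 256, 512)), ...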
Hyperparameter        Setting                     Accuracy   Validation Accuracy

Activation Function   ReLU                        0.8164     0.8161
                      Leaky ReLU                  0.8230     0.8271
                      TanH                        0.7373     0.7396

Batch Size            8                           0.8125     0.8111
                      16                          0.8024     0.8011
                      32                          0.7980     0.8196
                      64                          0.7646     0.7631

Architecture          [64, 128]                   0.8115
                      [64, 128, 256]              0.8092     0.8064
                      [64, 128, 256, 512]         0.7930
                      [64, 128, 256, 512, 1024]   0.7604

06
Conclusion and Future Work
We achieved a decent accuracy of around 78% - 81%. This is not the end; the project could be improved further by:

● Getting more data to improve the model's accuracy
● Further hyperparameter tuning
● Investigating overfitting problems in depth
Thank You
