
UMT Artificial Intelligence Review (UMT-AIR)

Volume 1 Issue 2, Fall 2021


ISSN(P): 2791-1276 ISSN(E): 2791-1268
Journal DOI: https://doi.org/10.32350/UMT-AIR
Issue DOI: https://doi.org/10.32350/UMT-AIR.0102
Homepage: https://journals.umt.edu.pk/index.php/UMT-AIR


Article: Convolutional Autoencoder for Image Denoising

Author(s): Abdul Ghafar1, Usman Sattar2


Affiliation: 1Department of Information Systems, Dr Hassan Murad School of Management, University of Management and Technology, Lahore, Pakistan
2Department of Management Sciences, Beaconhouse National University, Lahore, Pakistan


Citation: A. Ghafar and U. Sattar, "Convolutional autoencoder for image denoising," UMT Artificial Intelligence Review, vol. 1, no. 2, pp. 1–11, 2021. https://doi.org/10.32350/UMT-AIR.0102.01

Copyright Information: This article is open access and is distributed under the terms of the Creative Commons Attribution 4.0 International License.

A publication of the
Dr Hassan Murad School of Management,
University of Management and Technology, Lahore, Pakistan

Convolutional Autoencoder for Image Denoising


Abdul Ghafar1*, Usman Sattar2

1Department of Information Systems, Dr Hassan Murad School of Management, University of Management and Technology, Lahore, Pakistan
*Corresponding Author: abdul.ghafar@umt.edu.pk
2Department of Management Sciences, Beaconhouse National University, Lahore, Pakistan

ABSTRACT: Image denoising is a process used to remove noise from an image in order to create a sharp and clear image. It is mainly used in medical imaging, where, due to the malfunctioning of machines or due to the precautions taken to protect patients from radiation, medical imaging machines introduce considerable noise into the final image. Several techniques can be used to avoid such distortions in the image before its final printing. Autoencoders are the most notable software used to denoise images before their final printing. However, these programs are not intelligent, so the resultant image is not of good quality. In this paper, we introduce a modified autoencoder built on a deep convolutional neural network. It creates better quality images than traditional autoencoders. After training with a test dataset monitored on TensorBoard, the modified autoencoder was tested on a different dataset containing various shapes. The results were satisfactory but not desirable, for several reasons. Nevertheless, our proposed system still performed better than traditional autoencoders.

KEYWORDS: image denoising, deep learning, convolutional neural network, image autoencoder, image convolutional autoencoder

I. INTRODUCTION

Image denoising is a common problem in computer vision that has been studied very carefully. The field employs partial differential equations (PDEs) and other transforms such as non-local techniques, as well as a family of models that uses sparse coding techniques. The underlying observation model is represented by the following expression:

z = x + y

where z is the observed noisy image, x is the original image, and y represents the noise and other distortion factors.

Thanks to recent advancements in deep learning, the results obtained from these techniques have been effective. Autoencoders are used to clean images. They are very good at removing noise and are not very sensitive to how the noise or distortion was created. Denoising autoencoders built with convolutional layers can effectively denoise digital images because they can take advantage of strong spatial correlations [1]. Medical imaging, such as X-rays, magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and other modalities, is often affected by noise.
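The additive model above can be sketched in a few lines of NumPy; the image and the noise level used here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# x: a hypothetical 8x8 "clean" image patch with pixel values in [0, 1).
x = rng.random((8, 8))

# y: zero-mean Gaussian noise (sigma = 0.1 is an assumed level).
y = rng.normal(0.0, 0.1, size=x.shape)

# z = x + y: the observed noisy image the denoiser would receive.
z = x + y
```

A denoising method is then asked to recover an estimate of x given only z.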


When the radiation level goes down, the noise level goes up. Often, both people and machines need to denoise images in order to analyze them properly [1]. In this study, we attempted to create a simple version of a denoising autoencoder built with a convolutional layer network, using a small dataset.

II. RELATED WORK

Even though block-matching and 3D filtering (BM3D) is well designed and is considered the best way to denoise images, a simple multilayer perceptron (MLP) can do the same work [2]. Denoising autoencoders are a newer addition to the literature on image denoising. This design of autoencoders can be used to build deep networks [3]: feeding the output of one denoising autoencoder into the one below it yields a very large network.

One line of research designed a way to clean up images using convolutional neural networks; even with a small number of training images, it obtained better results than approaches based on wavelets and Markov random fields. Stacked sparse autoencoders (SSAE) were utilized for image denoising and inpainting, accomplishing the same thing as K-singular value decomposition (K-SVD). Researchers have also used stacked sparse autoencoders to build adaptive multi-column deep neural networks for image denoising; this system was found to be very robust against different types of noise [4].

Segmentation is a critical step in Digital Rock Physics (DRP), as the original images are available in a gray-scale format. Conventional methods often use thresholding to delineate distinct phases and, consequently, a watershed algorithm to identify the existing phases [5]. One paper proposed an autoencoder-based deep learning model for image denoising: the autoencoders learn noise from the training images and then try to eliminate the noise in a novel image. The experimental outcomes show that this model achieves higher PSNR than conventional models [6].

Another study proposed a deep convolutional autoencoder (DCAE) for high-resolution SAR image classification; its experimental results demonstrate that appropriate parameter settings of the network can achieve impressive classification performance [7].


The purpose of one research paper was to develop a novel particle swarm optimization (PSO) algorithm (namely, PSOAO) to automatically discover the optimal architecture of a flexible convolutional autoencoder (FCAE) for image classification problems without manual intervention [8].

Another paper proposed an energy compaction-based image compression architecture using a CAE; it first presented the CAE architecture and then discussed the impact of different network structures on coding performance [9].

Finally, one research paper focused on developing a semi-adversarial network for imparting soft-biometric privacy to face images [10].

III. AUTOENCODER

Modern autoencoders are networks that attempt to encode a piece of input into a low-dimensional latent space and then decode it back. To make the input more realistic, they employ concepts from deep learning and Bayesian inference.

As demonstrated in the diagram below, a traditional image autoencoder takes an image, encodes it into a latent vector space, and then decodes it into an output with the same dimensions as the original picture. The autoencoder is trained using the same pictures as both input and target, so it learns to replicate its original inputs. It is possible to constrain the code (the encoder's output) in interesting ways, causing the autoencoder to learn more interesting hidden data representations. Generally, the code needs to be largely zeros if you want it to be low-dimensional and sparse. In this scenario, the encoder reduces the size of the input data by compressing it into a lower number of bits.

Fig. 1. Autoencoder Diagram [11]
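The encode-then-decode idea can be illustrated with a deliberately tiny linear autoencoder in NumPy, trained to replicate its inputs through a 5-dimensional code. All sizes, the learning rate, and the iteration count are arbitrary choices for this sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples of 20-dimensional inputs (sizes are illustrative).
X = rng.random((100, 20))
W_enc = rng.normal(scale=0.1, size=(20, 5))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(5, 20))   # decoder weights

lr = 0.05
for _ in range(2000):
    code = X @ W_enc        # encode into the 5-dimensional latent space
    X_hat = code @ W_dec    # decode back to the input dimensions
    err = X_hat - X         # reconstruction error
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
```

Because the code is lower-dimensional than the input, the network is forced to learn a compact approximation, much like PCA.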

Classical autoencoders do not produce usable or well-organized latent spaces in practice, and they are not particularly good at compression. For this reason, many practitioners dislike them. Conversely, variational autoencoders (VAEs) add statistical machinery that allows them to learn a continuous, well-structured latent space, and they have proven effective for generating photographs [11].

Fig. 2. Basic Architecture of an Autoencoder [1]

The process of autoencoding begins with layer L1, which is taken as the input layer. The input is encoded into a latent representation in layer L2 and then recreated at layer L3. The autoencoder is forced to choose a compact approximation because there are fewer hidden units than inputs. Typically, an autoencoder detects low-dimensional representations, much like Principal Component Analysis (PCA). However, if certain sparsity constraints are imposed, even hidden layers that are bigger than the number of input variables can provide meaningful insights.

A. Denoising Autoencoders

When we apply a denoising autoencoder, we train the model to learn how to rebuild the input from a noisy version of it. Some of the inputs are randomly set to zero using a process known as stochastic corruption. This causes the denoising autoencoder to predict the missing (corrupted) values for randomly picked subsets of the inputs. The diagram below explains how to construct a denoising autoencoder.

Fig. 3. Basic Architecture of a Denoising Autoencoder [1]
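The stochastic corruption step described above can be sketched as follows; the corruption probability is an assumed setting, not one taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_corruption(x, drop_prob=0.3):
    """Masking noise: each input is independently set to zero with
    probability drop_prob (an assumed corruption level)."""
    mask = rng.random(x.shape) >= drop_prob
    return x * mask

x = rng.random((4, 8)) + 0.5   # a toy batch with strictly positive entries
x_tilde = stochastic_corruption(x)
# A denoising autoencoder is then trained to reconstruct x from x_tilde.
```

The network never sees which positions were zeroed, so it must learn to infer the missing values from the surviving ones.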


It is possible to stack denoising autoencoders to make a very large network using the stacked denoising autoencoder (SDAE) method. The current layer receives the output from the layer below, and training is done layer by layer.

Fig. 4. A Stacked Denoising Autoencoder [1]

B. Convolutional Autoencoder

Convolutional autoencoders follow the standard autoencoder architecture, except that instead of the usual fully connected design, they use convolutional encoding and decoding layers. This type of encoder is better at image processing than traditional autoencoders because it employs deep neural networks at full capacity to examine the structure of the images it processes. Weights are shared among all input locations in convolutional autoencoders to preserve the local spatiality of each input. The representation of the k-th latent feature map is given as

h^k = s(x * W^k + b^k)

where the bias b^k is spread across the whole map (a single bias is used for each latent map), * denotes two-dimensional (2D) convolution, and s stands for the activation function. The reconstruction is obtained through the following expression:

y = s( Σ_{k ∈ H} h^k * W̃^k + c )

where c is a single bias applied for each input channel, H is the group of latent feature maps, and W̃ denotes the weights with both dimensions flipped. Backpropagation is used to compute the gradient of the error function with respect to the parameters [1].

IV. DATA SET

We created our own dataset for experimentation. In order to create the dataset, we placed different objects on a canvas and then recorded a video.
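A convolutional autoencoder of the kind described in Section B can be sketched with the Keras library as follows; the input size, filter counts, and kernel sizes here are illustrative assumptions, not the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed 200x200 grayscale inputs; all layer sizes are illustrative.
inputs = layers.Input(shape=(200, 200, 1))

# Encoder: convolutional layers with max pooling for downsampling.
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2, padding="same")(x)

# Decoder: transposed convolutions for upsampling back to 200x200.
x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(encoded)
x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# For denoising, fit() would pair noisy inputs with clean targets:
# autoencoder.fit(noisy_imgs, clean_imgs, epochs=..., batch_size=...)
```

The shared convolutional weights implement the feature-map expression above, with pooling and transposed convolution providing the down- and upsampling.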


Fig. 5.

Fig. 6. Extracted Dataset

Fig. 7. After Adding Noise to Dataset
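A noising step like the one shown in Fig. 7 could be implemented as follows; the paper does not state which noise model its custom function used, so salt-and-pepper noise is shown purely as one possibility:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(images, amount=0.05):
    """Corrupt a batch of [0, 1] images with salt-and-pepper noise.
    The fraction of corrupted pixels (amount) is an assumed setting."""
    noisy = images.copy()
    r = rng.random(images.shape)
    noisy[r < amount / 2] = 0.0                      # "pepper" pixels
    noisy[(r >= amount / 2) & (r < amount)] = 1.0    # "salt" pixels
    return noisy

# Stand-in for the extracted frames: ten 200x200 grayscale images.
frames = rng.random((10, 200, 200))
noisy_frames = add_salt_pepper(frames)
```

The resulting (noisy, clean) pairs serve as training inputs and targets for the denoising autoencoder.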

8 UMT Artificial Intelligence


Volume 1 Issue 2, Winter 2021
Ghafar and Sattar

Subsequently, we wrote a custom function to add noise to every image, which created noisy images as shown above.

V. EXPERIMENTAL SETUP

All of the images were pre-processed before modelling. Pre-processing involved resizing all images to 200 x 200 pixels so that they take up less space on the computer.

Fig. 8. Model Summary

VI. MODEL ARCHITECTURE

We used the Keras library to create a convolutional autoencoder. First, we define the input layer; then we define two convolutional layers followed by a max pooling layer for downsampling. Subsequently, we apply two Conv2DTranspose layers for upsampling.

VII. EXPERIMENT RESULTS AND DISCUSSION

We used TensorBoard to monitor the experimental results, including the training loss and the validation loss. The model did not perform up to the mark, as shown in Figure 9.

Fig. 9. Baseline Model Training

• We ran our experiment on Colab, which provides limited GPU capacity. For this reason, we needed to keep our batch size as small as possible to avoid memory errors.
• In order to train the autoencoder, we could not use a data loader library to noise the data at run time. To work around this, we first added noise to the data and then loaded the whole dataset into RAM.
• Due to RAM limitations, we could not load a large dataset into RAM. For this reason, we needed to keep our dataset as small as possible.
• We kept the autoencoder to up to 2 layers during the


encoder part and 2 layers during the decoder, to keep it as simple as possible.

VIII. CONCLUSION

Image denoising is one of the most important techniques used to produce quality digital images. Software such as autoencoders can be used to remove noise from digital images to obtain a high-quality print. A modified autoencoder using a deep convolutional neural network is the most advanced denoising software. It can help produce quality images, especially in the medical field, as well as in high-quality photography. After training with various training datasets monitored on TensorBoard, we tested the convolutional autoencoder with test datasets. Its results were quite satisfactory but not desirable due to the physical limitations of the system, such as limited GPU availability and RAM limitations. Nevertheless, our proposed system performed better than traditional autoencoders.

REFERENCES

1. L. Gondara, "Medical image denoising using convolutional denoising autoencoders," in 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), 2016, pp. 241-246.
2. H. C. Burger, C. J. Schuler, and S. Harmeling, "Image denoising: Can plain neural networks compete with BM3D?," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2392-2399.


3. P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th International Conference on Machine Learning, 2008, pp. 1096-1103.
4. V. Jain and S. Seung, "Natural image denoising with convolutional networks," Advances in Neural Information Processing Systems, vol. 21, 2008.
5. S. Karimpouli and P. Tahmasebi, "Segmentation of digital rock images using deep convolutional autoencoder networks," Computers & Geosciences, vol. 126, pp. 142-150, 2019.
6. K. Bajaj, D. K. Singh, and M. A. Ansari, "Autoencoders based deep learner for image denoising," Procedia Computer Science, vol. 171, pp. 1535-1541, 2020.
7. J. Geng, J. Fan, H. Wang, X. Ma, B. Li, and F. Chen, "High-resolution SAR image classification via deep convolutional autoencoders," IEEE Geoscience and Remote Sensing Letters, vol. 12, pp. 2351-2355, 2015.
8. Y. Sun, B. Xue, M. Zhang, and G. G. Yen, "A particle swarm optimization-based flexible convolutional autoencoder for image classification," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, pp. 2295-2309, 2018.
9. Z. Cheng, H. Sun, M. Takeuchi, and J. Katto, "Energy compaction-based image compression using convolutional autoencoder," IEEE Transactions on Multimedia, vol. 22, pp. 860-873, 2019.
10. V. Mirjalili, S. Raschka, A. Namboodiri, and A. Ross, "Semi-adversarial networks: Convolutional autoencoders for imparting privacy to face images," in 2018 International Conference on Biometrics (ICB), 2018, pp. 82-89.
11. F. Chollet, Deep Learning with Python. Simon and Schuster, 2021.

