
STUDY OF AUTO ENCODERS &
PERFORMANCE OF VARIOUS
ACTIVATION FUNCTIONS

PROJECT SYNOPSIS
OF PROJECT -I
BACHELOR OF TECHNOLOGY

SUBMITTED BY
D VIKASH RANA
SUPERVISOR: MR ANKIT SHARMA
DEPARTMENT OF ELECTRONICS
AND COMMUNICATION
ENGINEERING

Page 1 of 8
1. Auto encoders
The concept of the auto encoder was originally proposed by
LeCun in 1987; early works on auto encoders were used for
dimensionality reduction or feature learning. Recently, with
the popularity of deep learning research, the auto encoder has
been brought to the forefront of generative modelling. Many
variants of the auto encoder have been proposed by different
researchers and have been successfully applied in many
fields, such as computer vision, speech recognition and
natural language processing.
An auto encoder is a type of artificial neural network used
to learn efficient data coding in an unsupervised manner.
There are two parts in an auto encoder: the encoder and
the decoder. The encoder generates a reduced feature
representation from an initial input x through a hidden
layer h. The decoder reconstructs the initial input
from the encoder's output by minimising the loss function.
The auto encoder thus converts high-dimensional data to
low-dimensional data.
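The encoder-decoder structure described above can be sketched numerically. The following is a minimal forward-pass sketch, not a trained model: the layer sizes (8-dimensional input, 3-dimensional code), the sigmoid non-linearity and the random weights are all illustrative assumptions, not values from this synopsis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative dimensions: an 8-dimensional input x compressed to a 3-dimensional code h.
n_input, n_hidden = 8, 3
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_input))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.1, size=(n_input, n_hidden))
b_dec = np.zeros(n_input)

def encode(x):
    # Hidden layer h: the reduced feature representation of the input x.
    return sigmoid(W_enc @ x + b_enc)

def decode(h):
    # Reconstruction of the initial input from the encoder's output.
    return sigmoid(W_dec @ h + b_dec)

x = rng.random(n_input)
h = encode(x)
x_hat = decode(h)
loss = np.mean((x - x_hat) ** 2)  # reconstruction loss to be minimised during training
print(h.shape, x_hat.shape)       # (3,) (8,)
```

Training would adjust the four weight arrays by gradient descent on `loss`; here only the data flow (input, code, reconstruction, loss) is shown.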

Page 2 of 8
2. Motivation

Fig. 1.1 Motivation and advantages of auto encoders.


An activation function is a function added into an
artificial neural network to help the network learn
complex patterns in the data. Compared with a
neuron-based model like the one in our brains, the activation
function ultimately decides what is to be fired to the
next neuron. That is exactly what an activation function
does in an ANN as well: it takes the output signal from
the previous cell and converts it into a form that can be
taken as input by the next cell.


3. LITERATURE REVIEW:

Fig. 1.2 Activation function.


A signal xi forms the input to the i-th synapse, which has
weight wi. The value of any weight may be positive or negative.
A positive weight has an excitatory effect, while a negative
weight has an inhibitory effect on the output of the
summation junction.
The summation junction adds the input signals, each weighted
by its respective synaptic weight. Because it is a linear
combiner, or adder, of the weighted input signals, the output
of the summation junction can be expressed as follows:
y_{sum} = \sum_{i=1}^{n} w_i x_i
A threshold activation function (or simply the activation
function, also known as a squashing function) is then applied
to this sum to produce the output signal.
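The weighted sum and threshold activation above can be worked through with concrete numbers. The input values, weights and threshold below are illustrative assumptions chosen only to show the computation.

```python
# Illustrative inputs x_i and synaptic weights w_i.
x = [0.5, -1.0, 2.0]
w = [0.8, 0.4, 0.3]  # positive weights excite; negative values would inhibit

# Linear combiner: y_sum = sum over i of w_i * x_i.
y_sum = sum(wi * xi for wi, xi in zip(w, x))

def threshold(y, theta=0.0):
    # Threshold (squashing) activation: fire (1) if the sum reaches theta, else 0.
    return 1 if y >= theta else 0

print(y_sum)             # approximately 0.6
print(threshold(y_sum))  # 1: the sum exceeds the threshold, so the neuron fires
```

Replacing `threshold` with a smooth function such as the sigmoid gives the continuous activations discussed in the next section.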

4. METHODOLOGY/PLANNING OF WORK:

Study of artificial neural networks.
Study of auto encoders and different types of auto encoders.
Study of activation functions.
Study of different types of activation functions (sigmoid
function, tanh function, ReLU function, Leaky ReLU function).
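The four activation functions listed above can each be written in a few lines. This is a plain-Python sketch of their standard textbook definitions; the leaky slope `alpha=0.01` is a common convention, assumed here rather than taken from the text.

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any real input into the range (-1, 1); zero-centred.
    return math.tanh(x)

def relu(x):
    # Passes positive inputs unchanged and zeroes out negative ones.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but lets a small fraction of negative inputs through.
    return x if x > 0 else alpha * x

print(sigmoid(0.0))      # 0.5
print(tanh(0.0))         # 0.0
print(relu(-2.0))        # 0.0
print(leaky_relu(-2.0))  # -0.02
```

The comparison at x = -2.0 shows the practical difference between ReLU and Leaky ReLU: the former blocks the signal entirely, the latter preserves a small gradient.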

Code size: The code size, or the size of the bottleneck, is the
most important hyperparameter used to tune the auto
encoder. The bottleneck size decides how much the data
has to be compressed. This can also act as a regularisation
term.
Number of layers: As with all neural networks, an important
hyperparameter for tuning auto encoders is the depth of the
encoder and the decoder. While a higher depth increases
model complexity, a lower depth is faster to process.
Number of nodes per layer: The number of nodes per layer
defines the weights we use per layer. Typically, the number of
nodes decreases with each subsequent layer in the auto
encoder, as the input to each of these layers becomes
smaller across the layers.
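The three hyperparameters above (code size, depth, nodes per layer) together determine the shape of the network. The sketch below makes that relationship concrete; the specific numbers (a 784-dimensional input, code size 32, depth 3, halving the width at each layer) are illustrative assumptions, not prescriptions from this synopsis.

```python
# Illustrative hyperparameter choices.
input_dim = 784  # e.g. a flattened 28x28 image
code_size = 32   # bottleneck size: controls how much the data is compressed
depth = 3        # number of layers before the bottleneck

# Nodes per layer shrink toward the bottleneck (here, halving at each step)...
encoder_layers = [input_dim // (2 ** i) for i in range(depth)] + [code_size]
# ...and the decoder mirrors the encoder back out to the input size.
decoder_layers = encoder_layers[::-1]

print(encoder_layers)  # [784, 392, 196, 32]
print(decoder_layers)  # [32, 196, 392, 784]
```

A smaller `code_size` forces stronger compression (and acts as regularisation), while a larger `depth` adds model complexity at the cost of processing speed, exactly the trade-offs described above.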

5. FACILITIES REQUIRED FOR PROPOSED
WORK
Keras is a deep learning API written in Python, running on top
of the machine learning platform TensorFlow. It was developed
with a focus on enabling fast experimentation. Being able to go
from idea to result as fast as possible is key to doing good
research.
Keras is:
Simple -- but not simplistic. Keras reduces developer cognitive
load to free you to focus on the parts of the problem that really
matter.
Flexible -- Keras adopts the principle of progressive disclosure
of complexity: simple workflows should be quick and easy,
while arbitrarily advanced workflows should be possible via a
clear path that builds upon what you've already learned.
Powerful -- Keras provides industry-strength performance and
scalability: it is used by organisations and companies including
NASA, YouTube and Waymo.

6. REFERENCES
https://ieeexplore.ieee.org/document/8616075

https://www.researchgate.net/figure/Motivations-Advantages-of-Autoencoders_fig1_362800782

https://www.geeksforgeeks.org/types-of-activation-function-in-ann/