•Deep learning has gained massive popularity in scientific computing, and its algorithms are
widely used by industries to solve complex problems. Deep learning algorithms rely on
different types of neural networks to perform specific tasks.
•This article examines essential artificial neural networks and how deep learning
algorithms work to mimic the human brain.
•Deep learning algorithms train machines by learning from examples. Industries such as
health care, e-commerce, entertainment, and advertising commonly use deep learning.
NEURAL NETWORKS
[Figure: a feed-forward network. 784 pixel inputs enter the input layer; channels carry
weighted sums (e.g. X1*0.1 + X2*0.2) plus a bias B1 to the hidden-layer neurons, which
apply an activation function before passing values on to the output layer.]
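The diagram's core computation is simple: each neuron takes a weighted sum of its inputs plus a bias, then applies an activation function. A minimal NumPy sketch, where the 784-pixel input matches the figure but the hidden-layer size, weights, and sigmoid activation are illustrative choices, not values from the article:

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = rng.random(784)                           # input layer: a 28x28 image flattened to 784 pixels
W1 = rng.standard_normal((16, 784)) * 0.01    # hidden-layer weights (16 neurons, illustrative)
b1 = np.zeros(16)                             # hidden-layer biases (the B1 in the figure)
W2 = rng.standard_normal((10, 16)) * 0.01     # output-layer weights (10 classes, e.g. digits 0-9)
b2 = np.zeros(10)

h = sigmoid(W1 @ x + b1)   # hidden layer: weighted sum + bias, then activation
y = sigmoid(W2 @ h + b2)   # output layer: one score per class
print(y.shape)             # (10,)
```

Each `@` is exactly the figure's "X1*0.1 + X2*0.2 + ..." weighted sum, done for every neuron at once.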
HOW DO DEEP LEARNING ALGORITHMS WORK?
CONVOLUTIONAL NEURAL NETWORKS (CNNS)
CNNs, also known as ConvNets, consist of multiple layers and are mainly used for
image processing and object detection. Yann LeCun developed the first CNN in
1988, when it was called LeNet; it was used for recognizing characters such as ZIP
codes and digits.
CNNs are widely used to identify satellite images, process medical images,
forecast time series, and detect anomalies.
CONVOLUTIONAL OPERATION
[Figure: the convolution operation applied to a 5x5 binary input matrix:
0 0 1 0 0
0 1 0 1 0
0 0 1 0 0
0 1 0 1 0
0 0 1 1 0]
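The convolution operation slides a small filter over the input and, at each position, sums the element-wise products of the filter and the patch beneath it. A sketch using the 5x5 binary input from the figure; the 3x3 filter is an illustrative choice, not from the article:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as CNNs use)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height shrinks by kernel size - 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply the patch element-wise by the filter and sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# The 5x5 binary input matrix from the figure
image = np.array([[0, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 1, 0]], dtype=float)

# Illustrative 3x3 filter
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map.shape)   # (3, 3)
```

A 5x5 input convolved with a 3x3 filter yields a 3x3 feature map; each output cell measures how strongly the filter's pattern appears at that position.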
HOW DO CNNS WORK?
1. Convolution Layer: CNN has a convolution layer with several filters that perform the
convolution operation.
2. Rectified Linear Unit (ReLU): CNNs have a ReLU layer that performs operations on the
elements. The output is a rectified feature map.
3. Pooling Layer: The rectified feature map next feeds into a pooling layer. Pooling is a
down-sampling operation that reduces the dimensions of the feature map.
4. Fully Connected Layer: A fully connected layer forms when the flattened matrix from
the pooling layer is fed in as input; it classifies and identifies the images.
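The stages after convolution can be sketched end to end. All shapes, the random feature map standing in for a convolution output, and the 3-class fully connected layer are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
feature_map = rng.standard_normal((4, 4))   # stand-in for a convolution layer's output

# ReLU: element-wise, zeroes out negatives -> rectified feature map
rectified = np.maximum(feature_map, 0)

# Pooling: 2x2 max pooling down-samples the rectified map to 2x2
pooled = rectified.reshape(2, 2, 2, 2).max(axis=(1, 3))

# Flatten the pooled map into a vector
flat = pooled.reshape(-1)

# Fully connected layer: maps the flattened vector to class scores
W = rng.standard_normal((3, flat.size))     # 3 output classes, illustrative
scores = W @ flat
print(scores.shape)   # (3,)
```

The `reshape(2, 2, 2, 2).max(axis=(1, 3))` trick splits the map into 2x2 blocks and keeps each block's maximum, which is exactly non-overlapping max pooling.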
RECURRENT NEURAL NETWORKS (RNNS)
RNNs have connections that form directed cycles, which allow the output of the previous
step to be fed back as an input to the current step. Thanks to this internal memory, an
RNN can memorize previous inputs. RNNs are commonly used for image captioning,
time-series analysis, natural-language processing, handwriting recognition, and
machine translation.
HOW DO RNNS WORK?
[Figure: Google Search autocomplete as an RNN example — input is fed to the network,
which then autocompletes the search.]
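At each time step an RNN combines the current input with the hidden state carried over from the previous step, reusing the same weights throughout. A minimal unrolled sketch; the dimensions and random weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
input_size, hidden_size = 8, 4

# Weights shared across every time step
W_x = rng.standard_normal((hidden_size, input_size)) * 0.1    # input -> hidden
W_h = rng.standard_normal((hidden_size, hidden_size)) * 0.1   # hidden -> hidden (the directed cycle)
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The previous output h_prev is fed back in as input: the RNN's internal memory
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

sequence = rng.standard_normal((5, input_size))   # a sequence of 5 time steps
h = np.zeros(hidden_size)                         # initial hidden state
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)   # (4,)
```

After the loop, `h` summarizes the whole sequence, which is what a task head (e.g. a caption decoder or next-character predictor) would consume.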
APPLICATIONS OF RECURRENT NEURAL NETWORKS
Image Captioning
Time Series Prediction
Natural Language Processing
LONG SHORT TERM MEMORY NETWORKS (LSTMS)
LSTMs are a type of Recurrent Neural Network (RNN) that can learn and memorize
long-term dependencies, retaining information over time. LSTM was designed by
Hochreiter & Schmidhuber. It tackles the long-term-dependency problem of RNNs:
a plain RNN can make accurate predictions from recent information but cannot use
a word stored far back in its long-term memory, and as the gap length increases its
performance degrades. An LSTM, by default, can retain information for a long period
of time. It is used for processing, predicting, and classifying on the basis of
time-series data.
STRUCTURE OF LSTM
❖ LSTM has a chain structure that contains four neural networks and different
memory blocks called cells.Information is retained by the cells and the memory
manipulations are done by the gates.
❖ There are three gates –
1. Forget Gate: The information that is no longer useful in the cell state is removed
with the forget gate.
2. Input gate: The addition of useful information to the cell state is done by the
input gate.
3. Output gate: The task of extracting useful information from the current cell state
to be presented as output is done by the output gate.
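A single LSTM cell update, showing where the three gates act. This is a standard-formulation sketch in NumPy; the sizes and random weights are illustrative, and each gate here gets its own weight matrix over the concatenated previous hidden state and current input:

```python
import numpy as np

def sigmoid(z):
    # Gate activations lie in (0, 1): 0 blocks information, 1 passes it through
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
input_size, hidden_size = 6, 4

# One weight matrix per gate (+ the candidate), acting on [h_prev, x_t]
W_f, W_i, W_o, W_c = (rng.standard_normal((hidden_size, hidden_size + input_size)) * 0.1
                      for _ in range(4))
b_f = b_i = b_o = b_c = np.zeros(hidden_size)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W_f @ z + b_f)         # forget gate: removes no-longer-useful cell state
    i = sigmoid(W_i @ z + b_i)         # input gate: decides what new information to add
    c_tilde = np.tanh(W_c @ z + b_c)   # candidate values for the cell state
    c = f * c_prev + i * c_tilde       # updated cell state (the memory block)
    o = sigmoid(W_o @ z + b_o)         # output gate: extracts the part of the state to expose
    h = o * np.tanh(c)
    return h, c

x_t = rng.standard_normal(input_size)
h, c = lstm_step(x_t, np.zeros(hidden_size), np.zeros(hidden_size))
print(h.shape, c.shape)   # (4,) (4,)
```

The key design point is the cell-state update `c = f * c_prev + i * c_tilde`: because it is additive rather than repeatedly squashed, information can survive across many steps, which is what lets LSTMs hold long-term dependencies.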
HOW DO LSTMS WORK?
APPLICATIONS OF LSTM
Robot control.
Time series prediction.
Speech recognition.
Rhythm learning.
Music composition.
Grammar learning.