
TYPES OF NEURAL NETWORK

Presented By:- Jyoti Kumari


Regd. No.:- 18150402009
WHAT IS ARTIFICIAL NEURAL NETWORK?

• Artificial neural networks are computational models that work similarly to the functioning of a human nervous system. There are several kinds of artificial neural networks. These types of networks are implemented based on the mathematical operations and a set of parameters required to determine the output. Let's look at some of the neural networks.
FEATURES OF ARTIFICIAL NEURAL NETWORK (ANN)

• Artificial neural networks may be physical devices or simulated on conventional computers. From a practical point of view, an ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific way in order to perform a particular task. Some important features of artificial neural networks are as follows.
• Artificial neural networks are extremely powerful computational devices (Universal computers).
• ANNs are modeled on the basis of current brain theories, in which information is represented by weights.
• ANNs have massive parallelism which makes them very efficient.
• They can learn and generalize from training data so there is no need for enormous feats of programming.
• Storage is fault tolerant i.e. some portions of the neural net can be removed and there will be only a small degradation in the
quality of stored data.
• They are particularly fault tolerant which is equivalent to the “graceful degradation” found in biological systems.
• Data are naturally stored in the form of associative memory which contrasts with conventional memory, in which data are
recalled by specifying address of that data.
• They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.
• In practice, they can do anything a symbolic/ logic system can do and more.
• Neural networks can interpolate and extrapolate from their stored information. The neural networks can also be trained: special training teaches the net to look for significant features or relationships in the data.
TYPES OF NEURAL NETWORK

• ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter are much more complicated, but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
• Some of the main breakthroughs include: convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low and high frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis and photo-real talking heads; and competitive networks such as generative adversarial networks, in which multiple networks (of varying structure) compete with each other on tasks such as winning a game or deceiving the opponent about the authenticity of an input.
SINGLE LAYER NETWORK

• A single layer neural network consists of a set of units organized in a layer. Each unit Un receives a weighted input Ij with weight Wjn. The figure shows a single layer neural network with j inputs and n outputs.
• Let I = (i1, i2, …, ij) be the input vector and let the activation function f simply pass the net sum through, so that the activation value of a unit is just its net sum. The output is then computed from the j×n weight matrix as shown in the figure.
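• As a minimal sketch of this computation (in Python with NumPy, which the slides do not use; the inputs and weights are made-up values), the single layer output with such a pass-through activation is just a matrix-vector product:

import numpy as np

# j = 3 inputs, n = 2 output units
I = np.array([0.5, -1.0, 2.0])      # input vector (i1, i2, i3)
W = np.array([[0.1, 0.4],           # j x n weight matrix, W[j][n] = Wjn
              [0.2, -0.3],
              [0.0, 0.7]])

# With a pass-through activation, each unit's value is just its net sum.
O = I @ W                           # output vector of the n units
print(O)                            # [-0.15  1.9 ]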
MULTILAYER NEURAL NETWORK
• A multilayer network has two or more layers of units, with the output from one layer serving as input to the next. Generally a multilayer network has three kinds of layers: an input layer, an output layer and one or more hidden layers. Layers with no external output connections are referred to as hidden layers. A multilayer neural network structure is given in the figure.
• Any multilayer system with fixed weights and a linear activation function is equivalent to a single layer linear system. Consider, for example, a two layer system. The input vector to the first layer is I, the first layer produces the output O = W1 × I, and the second layer produces the output O2 = W2 × O. Hence
• O2 = W2 × (W1 × I) = (W2 × W1) × I
• So a linear system with any number n of layers is equivalent to a single layer linear system whose weight matrix is the product of the n intermediate weight matrices. A multilayer system that is not linear can provide more computational capability than a single layer system. Generally, multilayer networks have proven to be more powerful than single layer neural networks; any Boolean function can be implemented by such a network. At the output layer of a multilayer neural network the output vector is compared to the expected output. If the difference is zero, no changes are made to the weights of the connections. If the difference is not zero, the error is calculated and propagated back through the network.
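• A quick numerical check of this equivalence, as a hedged sketch with arbitrary random weights:

import numpy as np

rng = np.random.default_rng(0)
I  = rng.standard_normal(4)         # input vector
W1 = rng.standard_normal((3, 4))    # first layer weight matrix
W2 = rng.standard_normal((2, 3))    # second layer weight matrix

two_layer = W2 @ (W1 @ I)           # O2 = W2 x (W1 x I)
one_layer = (W2 @ W1) @ I           # O2 = (W2 x W1) x I
print(np.allclose(two_layer, one_layer))   # True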
FEEDFORWARD NEURAL NETWORK
• This neural network is one of the simplest forms of ANN, where the data or the input travels in one direction. The data passes through the input nodes and exits on the output nodes. This neural network may or may not have hidden layers. In simple words, it has a front propagated wave and no backpropagation, usually by using a classifying activation function.
• Beside is a single layer feed-forward network. Here, the sum of the products of inputs and weights is calculated and fed to the output. The output is considered if it is above a certain value, i.e. a threshold (usually 0): the neuron fires with an activated output (usually 1), and if it does not fire, the deactivated value is emitted (usually -1).
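• A minimal sketch of this thresholding behaviour (the inputs, weights and threshold here are illustrative assumptions):

import numpy as np

def feed_forward(x, w, threshold=0.0):
    # Sum of the products of inputs and weights, fed to the output.
    net = np.dot(x, w)
    # The neuron fires with an activated output (1) above the threshold;
    # otherwise the deactivated value (-1) is emitted.
    return 1 if net > threshold else -1

x = np.array([1.0, 0.5, -0.2])
w = np.array([0.4, -0.1, 0.6])
print(feed_forward(x, w))           # net = 0.23 > 0, so the neuron fires: 1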
WORKING OF FEEDFORWARD NEURAL NETWORK:
• Data are introduced into the system through an input layer. This is followed by processing in one or more intermediate (hidden) layers. Output data emerge from the network's final layer. The transfer functions contained in the individual neurons can be almost anything. The input layer, also called the zeroth layer of the network, serves to redistribute input values and does no processing. The output of this layer is described mathematically as follows:
• Om = im, where m = 1, 2, …, N
• (N represents the number of neurons in the input or zeroth layer.)
• The input to each neuron in the first hidden layer of the network is a summation of all weighted connections between the input or zeroth layer and that neuron. We will write this weighted sum as the net sum or net input. The net input to a neuron in the first layer is the sum of the products of each input im and its weight factor wm, plus a bias term θ. The total weighted input to the neuron is a summation of these individual input signals, described as follows:
• net sum = Σ (m = 1 to N) wm im + θ
• where N represents the number of neurons in the input layer.
•  The net sum to the neuron is transformed by the neuron's activation or transfer function f to produce a new output value for the neuron. With back propagation, this transfer function is most commonly either a sigmoid or a linear function. In addition to the net sum, the bias term θ is generally added to offset the input. The bias can be viewed as a weight coming from a unitary valued input, denoted as W0. So, the final output of the neuron is given by O = f(net sum).
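• Putting these pieces together, a minimal sketch of one neuron's computation, assuming a sigmoid transfer function (one of the common choices named above); the values are illustrative:

import numpy as np

def neuron_output(i, w, theta):
    # net sum = sum over m of wm * im, plus the bias term theta
    net = np.dot(w, i) + theta
    # The transfer function f, here a sigmoid, transforms the net sum.
    return 1.0 / (1.0 + np.exp(-net))

i = np.array([0.2, 0.9])            # outputs of the zeroth layer (Om = im)
w = np.array([0.5, -0.3])           # weight factors wm
theta = 0.1                         # bias: a weight W0 from a unitary input
print(neuron_output(i, w, theta))   # O = f(-0.07), roughly 0.48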

• But one question may arise in the reader's mind: why are we using the hidden layer between the input and output layers? The answer is quite simple. Each layer in a multilayer neural network has its own specific function. The input layer accepts input signals from the outside world and redistributes these signals to all neurons in the hidden layer; the input layer rarely includes computing neurons and thus does not process input patterns. The output layer accepts output signals, or in other words stimulus patterns, from the hidden layer and establishes the output pattern of the entire network. Neurons in the hidden layer detect the features; the weights of these neurons represent the features hidden in the input patterns. These features are then used by the output layer in determining the output patterns. With one hidden layer we can represent any continuous function of the input signals, and with two hidden layers even discontinuous functions can be represented. A hidden layer "hides" its desired output: neurons in the hidden layer cannot be observed through the input/output behaviour of the network, the desired output of the hidden layer is determined by the layer itself, and generally there is no obvious way to know what that desired output should be.
APPLICATIONS OF FEED FORWARD NEURAL
NETWORK

• Applications of feedforward neural networks are found in computer vision and speech recognition, where classifying the target classes is complicated. These kinds of neural networks are responsive to noisy data and easy to maintain.
BACK PROPAGATION NEURAL NETWORK

• Multilayer neural networks most commonly use the back propagation algorithm, one of a variety of learning techniques. In a back propagation neural network, the output values are compared with the correct answer to compute the value of some predefined error function. By various techniques the error is then fed back through the network. Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network will usually converge to some state where the error of the calculation is small.
• The goal of back propagation, as with most training algorithms, is to iteratively adjust the weights in the network to produce the desired output by minimizing the output error. The algorithm's goal is to solve the credit assignment problem. Back propagation is a gradient-descent approach in that it uses the minimization of first-order derivatives to find an optimal solution.
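• A minimal gradient-descent sketch for a single linear neuron; the squared-error loss, learning rate and data here are illustrative assumptions, not from the slides:

import numpy as np

def backprop_step(w, x, target, lr=0.1):
    y = np.dot(w, x)                # forward pass through a linear neuron
    error = y - target              # compare output with the correct answer
    grad = error * x                # first-order derivative of squared error
    return w - lr * grad            # adjust weights by a small amount

w = np.array([0.0, 0.0])
for _ in range(100):                # repeat for many training cycles
    w = backprop_step(w, np.array([1.0, 2.0]), target=3.0)
print(np.dot(w, [1.0, 2.0]))        # converges toward the target, 3.0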
RADIAL BASIS FUNCTION NEURAL
NETWORK:

• Radial basis functions consider the distance of a point with respect to a center. RBF networks have two layers: first, the features are combined with the radial basis function in the inner layer; then the output of these features is taken into consideration while computing the same output in the next time-step, which is basically a memory.
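• A minimal sketch of a Gaussian radial basis unit, whose activation depends only on the distance from a center; the centers, width and output weights are illustrative assumptions:

import numpy as np

def rbf(x, center, width=1.0):
    # Activation depends only on the distance of the point from the center.
    return np.exp(-np.sum((x - center) ** 2) / (2 * width ** 2))

centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]   # inner-layer centers
out_weights = np.array([0.7, -0.2])                      # output-layer weights

x = np.array([0.5, 0.5])
features = np.array([rbf(x, c) for c in centers])   # inner layer: RBF features
print(out_weights @ features)                       # output: linear combination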
APPLICATIONS OF RADIAL BASIS FUNCTION
NEURAL NETWORK
• This neural network has been applied in Power Restoration Systems. Power systems have increased in size and
complexity. Both factors increase the risk of major power outages. After a blackout, power needs to be restored
as quickly and reliably as possible.
• Power restoration usually proceeds in the following order:
 The first priority is to restore power to essential customers in the communities. These customers provide health
care and safety services to all and restoring power to them first enables them to help many others. Essential
customers include health care facilities, school boards, critical municipal infrastructure, and police and fire
services.
 Then focus on major power lines and substations that serve larger numbers of customers
 Give higher priority to repairs that will get the largest number of customers back in service as quickly as possible
 Then restore power to smaller neighborhoods and individual homes and businesses
The diagram beside shows the typical order of the
power restoration system.
 Referring to the diagram, first priority goes to fixing the problem at point A, on the transmission line. With this line out, none of the houses can have power restored.
 Next, fixing the problem at B on the main distribution line running out of the substation. Houses 2, 3, 4 and 5 are affected by this problem.
 Next, fixing the line at C, affecting houses 4 and 5.
 Finally, we would fix the service line at D to house 1.
KOHONEN SELF ORGANIZING NEURAL
NETWORK:
• The objective of a Kohonen map is to map input vectors of arbitrary dimension onto a discrete map comprised of neurons. The map needs to be trained to create its own organization of the training data, and it comprises either one or two dimensions. When training the map, the location of each neuron remains constant but the weights differ depending on the value. This self-organization process has different parts: in the first phase, every neuron is initialized with a small weight and with the input vector.
• In the second phase, the neuron closest to the point is the 'winning neuron', and the neurons connected to the winning neuron will also move towards the point, like in the graphic beside. The distance between the point and the neurons is calculated by the Euclidean distance; the neuron with the least distance wins. Through the iterations, all the points are clustered and each neuron represents a kind of cluster. This is the gist behind the organization of the Kohonen neural network.
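• A minimal sketch of the winner selection and weight update described above (neighbourhood updates are omitted for brevity; the learning rate and data are illustrative):

import numpy as np

def som_step(weights, x, lr=0.5):
    # Euclidean distance from the point to every neuron's weight vector.
    dists = np.linalg.norm(weights - x, axis=1)
    winner = np.argmin(dists)       # the neuron with the least distance wins
    # Move the winning neuron towards the point.
    weights[winner] += lr * (x - weights[winner])
    return winner

weights = np.random.rand(4, 2)      # 4 neurons initialized with small weights
for x in np.random.rand(20, 2):     # iterate over the input points
    som_step(weights, x)
print(weights)                      # each neuron drifts towards one cluster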
APPLICATIONS OF KOHONEN SELF ORGANIZING NEURAL
NETWORK:

• The Kohonen neural network is used to recognize patterns in data. Its application can be found in medical analysis, to cluster data into different categories: a Kohonen map was able to classify patients having glomerular or tubular disease with high accuracy, with the categorization done mathematically using the Euclidean distance algorithm. Beside is an image displaying a comparison between a healthy and a diseased glomerulus.
RECURRENT NEURAL NETWORK(RNN) – LONG
SHORT TERM MEMORY:
• The recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input to help in predicting the outcome of the layer. Here, the first layer is formed similarly to the feed forward neural network, with the product of the sum of the weights and the features. The recurrent neural network process starts once this is computed; this means that from one time-step to the next, each neuron will remember some information it had in the previous time-step.
• This makes each neuron act like a memory cell in performing computations. In this process, we need to let the neural network work on the front propagation and remember what information it needs for later use. Here, if the prediction is wrong, we use the learning rate or error correction to make small changes so that the network will gradually work towards making the right prediction during back propagation. Beside is how a basic recurrent neural network looks.
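• A minimal sketch of one recurrent time-step, assuming a tanh activation (a plain RNN cell, not a full LSTM; all shapes and weights are illustrative):

import numpy as np

def rnn_step(x, h_prev, W_in, W_rec, b):
    # h_prev carries the information remembered from the previous time-step.
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

W_in  = np.random.randn(3, 2) * 0.1   # input-to-hidden weights
W_rec = np.random.randn(3, 3) * 0.1   # hidden-to-hidden feedback weights
b     = np.zeros(3)

h = np.zeros(3)                       # the memory starts empty
for x in np.random.randn(5, 2):       # a sequence of five 2-feature inputs
    h = rnn_step(x, h, W_in, W_rec, b)
print(h)                              # final state summarizes the sequence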
APPLICATIONS OF RECURRENT NEURAL
NETWORK(RNN) – LONG SHORT TERM MEMORY
• The application of recurrent neural networks can be found in text-to-speech (TTS) conversion models. One example is Deep Voice, developed at Baidu's Artificial Intelligence Lab in California. It was inspired by the traditional text-to-speech structure, replacing all the components with neural networks. First, the text is converted to 'phonemes' and an audio synthesis model converts them into speech. RNNs are also implemented in Tacotron 2, which produces human-like speech from text. An insight about it can be seen beside.
CONVOLUTIONAL NEURAL NETWORK:

• Convolutional neural networks are similar to feed forward neural networks, in that the neurons have learnable weights and biases. Their application has been in signal and image processing, where they have taken over from OpenCV in the field of computer vision.
• Beside is a representation of a ConvNet. In this neural network, the input features are taken in batches, as if passed through a filter. This helps the network remember the images in parts and compute operations on them. These computations involve the conversion of the image from the RGB or HSI scale to gray-scale. Once we have this, the changes in pixel value help to detect the edges, and images can be classified into different categories.
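• A minimal sketch of the convolution operation on a gray-scale image, using an illustrative edge-detection kernel (the image here is random stand-in data):

import numpy as np

def convolve2d(image, kernel):
    # Slide the filter over the image patch by patch (stride 1, no padding).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

gray = np.random.rand(6, 6)           # stand-in for an image converted to gray-scale
edge_kernel = np.array([[-1, 0, 1],   # changes in pixel value reveal vertical edges
                        [-1, 0, 1],
                        [-1, 0, 1]])
print(convolve2d(gray, edge_kernel).shape)   # (4, 4) feature map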
APPLICATIONS OF CONVOLUTIONAL NEURAL NETWORK:

• ConvNets are applied in techniques like signal processing and image classification. Computer vision techniques are dominated by convolutional neural networks because of their accuracy in image classification. For example, image analysis and recognition techniques, where agricultural and weather features are extracted from open-source satellites like LSAT to predict the future growth and yield of a particular land, are being implemented.
MODULAR NEURAL NETWORK
• Modular neural networks have a collection of different networks working independently and contributing towards the output. Each neural network has a set of inputs that are unique compared to the other networks constructing and performing sub-tasks. These networks do not interact with or signal each other in accomplishing the tasks.
• The advantage of a modular neural network is that it breaks down a large computational process into smaller components, decreasing the complexity. This breakdown helps in decreasing the number of connections and negates the interaction of these networks with each other, which in turn increases the computation speed. However, the processing time will depend on the number of neurons and their involvement in computing the results.
• Beside is a visual representation,
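• A minimal sketch of this decomposition, with two independent sub-networks on disjoint inputs whose outputs are simply combined (all shapes, weights and the tanh activation are illustrative assumptions):

import numpy as np

def subnetwork(x, w):
    # Each sub-network works independently on its own unique inputs.
    return np.tanh(w @ x)

x1, x2 = np.random.rand(3), np.random.rand(4)        # disjoint input sets
w1, w2 = np.random.rand(2, 3), np.random.rand(2, 4)  # independent weights

# The sub-networks never signal each other; only their outputs are combined.
y = subnetwork(x1, w1) + subnetwork(x2, w2)
print(y)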
THANK
YOU
