
FUZZY LOGIC AND NEURAL NETWORKS
Back Propagation Neural Algorithm
What Is A Multilayer Perceptron?
• A Perceptron network with one or more hidden layers is called a Multilayer Perceptron (MLP) network. An MLP is also a feed-forward network: it consists of a single input layer, one or more hidden layers, and a single output layer.
• The added layers overcome the limited information-processing ability of simple Perceptron networks and give MLP networks a highly flexible approximation ability. MLP networks are trained, and their weights updated, using the Back Propagation learning method, which is explained in detail below.
• Problems that a simple Single Layer Perceptron cannot solve, such as the XOR problem, can be solved with MLP networks (a sketch follows below).
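As an illustration, a minimal sketch of a 2-2-1 MLP that computes XOR with hand-chosen weights; the weights, NumPy usage, and threshold activation are illustrative assumptions, not values from these slides:

```python
import numpy as np

def step(x):
    # Threshold activation: 1 if the net input is >= 0, else 0.
    return (x >= 0).astype(int)

# Hand-chosen (hypothetical) weights for a 2-2-1 MLP computing XOR.
# Hidden unit 1 fires for "x1 OR x2"; hidden unit 2 fires for "x1 AND x2".
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])
W_out = np.array([1.0, -2.0])   # OR minus 2*AND equals XOR
b_out = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(np.array(x) @ W_hidden + b_hidden)   # hidden layer
    y = step(h @ W_out + b_out)                   # output layer
    print(x, "->", y)                             # prints 0, 1, 1, 0
```

No single-layer perceptron can reproduce this mapping, since XOR is not linearly separable; the hidden layer supplies the extra decision boundary.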
Back Propagation Network (BPN)
• A Back Propagation (BP) Network is a feed-forward multilayer perceptron network in which each layer has a differentiable activation function.
• For a given training set, the weights of the layers in a BPN are adjusted so that the network classifies the input patterns. The weights are updated by gradient descent, in the same way as for single perceptron networks.
Back Propagation Network (BPN)
• Back-propagation is a way of propagating the total loss back through the neural network to determine how much of the loss each node is responsible for, and then updating the weights so that the nodes most responsible receive the largest changes, thereby minimizing the loss (a scalar sketch follows).
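A minimal scalar sketch of this idea, assuming a single sigmoid output neuron with a squared-error loss; the input, target, weight, and learning-rate values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One output neuron: y = sigmoid(w*x + b), squared error E = 0.5*(t - y)^2.
x, t = 1.0, 1.0          # input and target (illustrative)
w, b = 0.2, 0.0          # current weight and bias
eta = 0.5                # learning rate

z = w * x + b
y = sigmoid(z)
E = 0.5 * (t - y) ** 2

# Chain rule: dE/dw = dE/dy * dy/dz * dz/dw -- the weight's "share" of the loss.
dE_dy = -(t - y)
dy_dz = y * (1 - y)          # derivative of the sigmoid
dz_dw = x
grad_w = dE_dy * dy_dz * dz_dw

# Gradient descent: move the weight against its gradient to reduce the loss.
w_new = w - eta * grad_w
print(E, grad_w, w_new)
```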
Back Propagation Network Learning
• Back Propagation learning is done in 3 stages (a sketch of one training step follows this list):
• The input training pattern is fed forward.
• The error between the actual output and the target value is calculated.
• The weights are updated.
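A minimal NumPy sketch of one such training step for a 1-hidden-layer BPN with squared-error loss and the binary sigmoid; the network size, the single training pattern, and the learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative 2-3-1 network: 2 inputs, 3 hidden units, 1 output.
V = rng.uniform(-0.5, 0.5, (2, 3))   # input -> hidden weights
b_v = rng.uniform(-0.5, 0.5, 3)      # hidden biases
W = rng.uniform(-0.5, 0.5, (3, 1))   # hidden -> output weights
b_w = rng.uniform(-0.5, 0.5, 1)      # output bias
eta = 0.5                            # learning rate

x = np.array([0.0, 1.0])             # one training pattern
t = np.array([1.0])                  # its target value

# Stage 1: feed the input training pattern forward.
h = sigmoid(x @ V + b_v)             # hidden activations
y = sigmoid(h @ W + b_w)             # actual output

# Stage 2: calculate the error between actual output and target.
delta_o = (t - y) * y * (1 - y)          # error term at the output layer
delta_h = (delta_o @ W.T) * h * (1 - h)  # error propagated back to the hidden layer

# Stage 3: update the weights (and biases).
W += eta * np.outer(h, delta_o)
b_w += eta * delta_o
V += eta * np.outer(x, delta_h)
b_v += eta * delta_h
```

Repeating this step over all training patterns, for many epochs, drives the error down.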
Minimization Of Error Using BP Algorithm
• In this algorithm, the error between the actual output and the target is propagated back to the hidden units. The error is first calculated at the output layer, and the weights are updated so as to minimize it.
• To minimize the error further, the error at the hidden layer must also be calculated; it is obtained by propagating the output-layer error back through the weights (as in the sketch above), and using it to update the hidden-layer weights leads to more accurate output.
• With a greater number of hidden layers the network becomes more complex and slower to train, but it can represent more complex mappings. The network can also be trained with a single hidden layer; once trained, it produces its output rapidly.
BPN Architecture
• A BPN is a feed-forward multilayer network with an input layer, a hidden layer, and an output layer. Biases are added at the hidden layer and the output layer as extra units whose activation is always 1. The inputs and outputs of the BPN can be either binary (0, 1) or bipolar (-1, +1).
• The activation function must be differentiable and monotonically increasing, and is generally chosen to be either the binary sigmoid or the bipolar sigmoid.
• A BPN has a feed-forward phase, where the data is fed from the input towards the output, and a back-propagation phase, where the error signals are sent back in the reverse direction to minimize the error (a sketch of the feed-forward phase follows).
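A minimal sketch of the feed-forward phase with bipolar inputs, showing how each bias can be implemented as an extra unit whose activation is fixed at 1; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

def bipolar_sigmoid(z):
    # Bipolar sigmoid: maps net inputs into (-1, +1).
    return 2.0 / (1.0 + np.exp(-z)) - 1.0

rng = np.random.default_rng(1)

x = np.array([-1.0, +1.0, +1.0])      # one bipolar input pattern

# Append a constant 1 so the last row of each weight matrix acts as the bias.
x_aug = np.append(x, 1.0)             # input plus bias unit
V = rng.uniform(-0.5, 0.5, (4, 2))    # (3 inputs + bias) -> 2 hidden units
h = bipolar_sigmoid(x_aug @ V)

h_aug = np.append(h, 1.0)             # bias unit for the output layer
W = rng.uniform(-0.5, 0.5, (3, 1))    # (2 hidden + bias) -> 1 output unit
y = bipolar_sigmoid(h_aug @ W)
print(y)
```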
Activation Functions
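The two sigmoids mentioned above, together with the derivatives used during back-propagation; a minimal sketch using the standard closed-form derivative expressions (the test values are illustrative):

```python
import numpy as np

def binary_sigmoid(z):
    # f(z) = 1 / (1 + e^(-z)), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def binary_sigmoid_deriv(z):
    f = binary_sigmoid(z)
    return f * (1.0 - f)

def bipolar_sigmoid(z):
    # f(z) = 2 / (1 + e^(-z)) - 1, output in (-1, +1)
    return 2.0 / (1.0 + np.exp(-z)) - 1.0

def bipolar_sigmoid_deriv(z):
    f = bipolar_sigmoid(z)
    return 0.5 * (1.0 + f) * (1.0 - f)

z = np.linspace(-3.0, 3.0, 7)
print(binary_sigmoid(z), binary_sigmoid_deriv(z))
print(bipolar_sigmoid(z), bipolar_sigmoid_deriv(z))
```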
Factors Affecting The Back-Propagation Network
• Initial Weights: The initial random weights are chosen to be very small, because large net inputs to the binary sigmoid function can saturate it at the very beginning, leaving the network stuck at a local minimum. One way of initializing the weights is Nguyen-Widrow initialization, which is based on an analysis of the response of the hidden neurons to a single input; it improves the learning ability of the hidden units and leads to faster convergence of the BPN (a sketch follows this slide).
• Learning rate: A large learning rate helps faster convergence but may lead to overshooting. Values of the learning rate η ranging from 10⁻³ to 10 have been used in various BPN experiments.
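A minimal sketch of the Nguyen-Widrow initialization of the input-to-hidden weights, following the commonly cited recipe with scale factor β = 0.7·p^(1/n) for n input units and p hidden units; the layer sizes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

n, p = 2, 4                      # illustrative: 2 input units, 4 hidden units
beta = 0.7 * p ** (1.0 / n)      # Nguyen-Widrow scale factor

# Start from small random weights in [-0.5, 0.5].
V = rng.uniform(-0.5, 0.5, (n, p))

# Rescale each hidden unit's weight vector to have norm beta.
norms = np.linalg.norm(V, axis=0)
V = beta * V / norms

# Hidden biases are drawn uniformly from [-beta, beta].
b_v = rng.uniform(-beta, beta, p)
print(V, b_v)
```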
Factors Affecting The Back-Propagation Network
• Number of Training Data: The training data should cover the entire input space, and the training patterns should be chosen randomly.
• Number of Hidden Layer Nodes: The number of hidden layer nodes is chosen for optimum performance of the network. For networks that do not converge to a solution, more hidden nodes can be used, while for networks that converge quickly, fewer hidden nodes are sufficient.
