UNIT-2
NEURAL NETWORKS FOR CONTROL
Hopfield-Tank model
The TSP must be mapped, in some way, onto the neural network structure. Each row corresponds to a particular city and each column to a particular position in the tour, so a feasible tour appears as a permutation matrix of active units.
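This mapping can be sketched concretely. Below is a minimal illustration, assuming hypothetical random city coordinates and a NumPy setup (none of this is from the original slides): a feasible state V is a permutation matrix over the city-by-position grid, and the tour length can be read off from V.

```python
import numpy as np

# Minimal sketch: representing a TSP tour on an N x N grid of units.
# V[x][i] = 1 means city x occupies position i in the tour.
N = 5                                  # number of cities (illustrative)
rng = np.random.default_rng(0)
cities = rng.random((N, 2))            # hypothetical city coordinates
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

def tour_length(V):
    """Tour length implied by a feasible (permutation-matrix) state V."""
    order = np.argmax(V, axis=0)       # city at each tour position
    return sum(dist[order[i], order[(i + 1) % N]] for i in range(N))

# A feasible state is a permutation matrix: one 1 per row and per column.
perm = rng.permutation(N)
V = np.eye(N)[perm].T                  # V[perm[i], i] = 1
print(tour_length(V))
```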
{V_i}, i = 1, ..., L
L : number of units
V_i : activation level of unit i
Since the energy can only decrease over time and the number of configurations is finite, the network must converge (but not necessarily to the minimum-energy state).
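This convergence argument can be checked numerically. The sketch below uses arbitrary illustrative weights (not values from the slides) and performs asynchronous threshold updates; for symmetric T with zero diagonal, each such update cannot increase the energy E(V) = -1/2 * sum_ij T_ij V_i V_j - sum_i I_i V_i.

```python
import numpy as np

# Sketch: asynchronous updates of a discrete Hopfield network never
# increase E(V) = -1/2 * V.T @ T @ V - I @ V (T symmetric, zero diagonal).
rng = np.random.default_rng(1)
L = 8
T = rng.standard_normal((L, L)); T = (T + T.T) / 2; np.fill_diagonal(T, 0)
I = rng.standard_normal(L)
V = rng.integers(0, 2, L).astype(float)

def energy(V):
    return -0.5 * V @ T @ V - I @ V

for step in range(50):                 # asynchronous update sweep
    i = rng.integers(L)
    V[i] = 1.0 if T[i] @ V + I[i] > 0 else 0.0
    # energy(V) is non-increasing after every such update
print(energy(V))
```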
Continuous Hopfield-Tank
The neuron function is continuous (a sigmoid function). The evolution of the units over time is now characterized by the following differential equation:

du_i/dt = -u_i / tau + sum_j T_ij V_j + I_i,   with V_i = g(u_i)

where g is the sigmoid g(u) = (1/2) (1 + tanh(u / u_0)).
Continuous Hopfield-Tank
Energy function:

E = -(1/2) sum_i sum_j T_ij V_i V_j - sum_i I_i V_i + (1/tau) sum_i integral from 0 to V_i of g^{-1}(v) dv
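As a rough illustration of these dynamics, the sketch below integrates the differential equation above with simple Euler steps. The weights, gain u_0, time constant, and step size are illustrative assumptions, not values from the original slides.

```python
import numpy as np

# Sketch: Euler integration of the continuous Hopfield-Tank dynamics
#   du_i/dt = -u_i / tau + sum_j T_ij * V_j + I_i,   V_i = g(u_i)
# with g(u) = 0.5 * (1 + tanh(u / u0)). All parameters are illustrative.
rng = np.random.default_rng(2)
L, tau, u0, dt = 8, 1.0, 0.02, 0.001
T = rng.standard_normal((L, L)); T = (T + T.T) / 2; np.fill_diagonal(T, 0)
I = rng.standard_normal(L)
u = 0.01 * rng.standard_normal(L)      # small random initial state

g = lambda u: 0.5 * (1 + np.tanh(u / u0))
for _ in range(5000):
    V = g(u)
    u += dt * (-u / tau + T @ V + I)   # one Euler step of the ODE
print(g(u))                            # settled activation levels
```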
Results of Hopfield-Tank
Hopfield and Tank were able to solve a randomly generated 10-city problem with parameter values A = B = 500, C = 200, N = 15.
They reported that over 20 trials the network converged 16 times to feasible tours.
Half of those tours were one of the two optimal tours.
PROCESS IDENTIFICATION
Pattern Classification Network
[Network diagram: 100 input nodes, 16 hidden nodes, 5 output nodes]
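A minimal sketch of this classifier's forward pass follows. It assumes the 16-node layer is the hidden layer and uses sigmoid activations; both are assumptions, since the slides do not state the activation function or the layer roles explicitly.

```python
import numpy as np

# Sketch of the 100-16-5 feedforward classifier described above
# (100 pixel inputs, 16 hidden nodes assumed, 5 output classes).
rng = np.random.default_rng(3)
W1 = rng.standard_normal((16, 100)) * 0.1   # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.standard_normal((5, 16)) * 0.1     # hidden -> output weights
b2 = np.zeros(5)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):                             # x: flattened 10x10 image
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)             # one activation per class

x = rng.integers(0, 2, 100).astype(float)   # dummy binary pixel pattern
print(forward(x))
```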
Character Recognition
[Figure: The Learning Curve for Training the Pattern Classification Network; vertical axis: cumulative iterations (0 to 1200), horizontal axis: error tolerance (0.01 down to 0.001)]
The chart depicts the learning curve as the error tolerance was successively lowered through the values 0.01, 0.005, 0.0025, 0.001, 0.0005, 0.00025, and finally 0.0001. The vertical axis shows the cumulative iterations needed to achieve perfect 5-out-of-5 training performance at each level of error tolerance.
Training Tip #1
Start with a relatively large error tolerance, and incrementally lower it to the desired level
as training is achieved at each succeeding level. This usually results in fewer training
iterations than starting out with the desired final error tolerance.
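A sketch of this tolerance schedule follows, using a tiny least-squares model as a stand-in for the backprop classifier; the model, data, and learning rate are illustrative assumptions, and only the scheduling pattern matches the tip.

```python
import numpy as np

# Sketch of Training Tip #1: keep the same weights while stepping the
# error tolerance down through a schedule, rather than training once at
# the final tolerance. The toy model stands in for any trainable net.
rng = np.random.default_rng(4)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5])          # noiseless target
w = np.zeros(3)

def mse(w):
    return np.mean((X @ w - y) ** 2)

total = 0
for tol in [0.01, 0.005, 0.0025, 0.001, 0.0005, 0.00025, 0.0001]:
    while mse(w) > tol:
        w -= 0.01 * (2 / len(X)) * X.T @ (X @ w - y)   # gradient step
        total += 1
    print(f"tolerance {tol}: {total} cumulative iterations")
```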
Training Tip #2
If the network fails to train at a certain error tolerance, try successively lowering the
learning rate.
Training Tip #3
In a system to be used for a real-world application, such as character recognition, you would want the network to be able to handle not only pixel noise, but also size variance (slightly smaller letters), some rotation, and some variation in font style.
To produce such a robust network classifier, you would need to add representative samples to the training set; for example, rotated versions of a character could be added.
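One hedged sketch of such augmentation follows, assuming 10 x 10 pixel images to match the 100-input network above, and using scipy.ndimage.rotate for the rotated variants; the angles and noise level are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

# Sketch of Training Tip #3: augmenting the training set with rotated
# and noisy variants of each character image (10x10 pixels assumed).
def augment(image, rng):
    samples = [image]
    for angle in (-10, 10):                       # small rotations
        samples.append(rotate(image, angle, reshape=False, order=0))
    noisy = image.copy()                          # pixel noise
    flip = rng.random(image.shape) < 0.05
    noisy[flip] = 1 - noisy[flip]
    samples.append(noisy)
    return samples

rng = np.random.default_rng(5)
image = rng.integers(0, 2, (10, 10)).astype(float)
print(len(augment(image, rng)))                   # original + 3 variants
```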
Training a Network
The Momentum Term
Smooths out the effect of weight adjustments over time.
Formula:
weight_change = learning_rate * input * output_error + momentum_parameter * previous_weight_change
The momentum term can be disabled by setting it to zero.
Warning! Setting the momentum term and learning rate too large can overshoot a good minimum, since the updates take large steps!
[Network diagram: 5 input nodes, 20 output nodes; output: angle]
See FFBRM.EXE
Euler's Method

y_{n+1} = y_n + h * f(t_n, y_n)

(one step of size h for the differential equation dy/dt = f(t, y))
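A minimal sketch of the method on a test equation (dy/dt = -2y, whose exact solution e^{-2t} is known), with an illustrative step size:

```python
# Sketch of Euler's method y_{n+1} = y_n + h * f(t_n, y_n),
# here integrating dy/dt = -2y from t = 0 to t = 1.
def euler(f, y0, t0, t1, h):
    t, y = t0, y0
    while t < t1:
        y += h * f(t, y)             # one Euler step
        t += h
    return y

print(euler(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t1=1.0, h=0.01))
# ~0.1326 vs the exact e^{-2} = 0.1353; a smaller h reduces the error
```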
[Network diagram: 4 input nodes, 16 (4 + 4) hidden nodes, 2 output nodes]
A noise factor (e.g., 0.05) could be added to the inputs during training to make the network more robust, as sketched below.
See FFBRM.EXE
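A sketch of such noise injection, assuming uniform noise scaled by the 0.05 factor mentioned above; the helper name and the 4-element sample are illustrative.

```python
import numpy as np

# Sketch of adding input noise during training, using the 0.05 noise
# factor suggested above to improve robustness.
rng = np.random.default_rng(6)

def noisy_batch(inputs, noise_factor=0.05):
    """Perturb each training input with small uniform noise."""
    noise = rng.uniform(-noise_factor, noise_factor, size=inputs.shape)
    return inputs + noise

inputs = np.array([[0.2, 0.4, 0.6, 0.8]])   # one dummy 4-input sample
print(noisy_batch(inputs))
```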
The following could be done after collecting the raw data for Neural Net training:
1.
2.
3. Calculate xFactor.
See the PDF file for more details on this.