Laxmidhar Behera
[Figure: RBFN architecture. Input layer $x_1, x_2, \dots, x_p$; hidden layer of radial basis functions $\phi_1, \dots, \phi_h$ with centers $c_1, \dots, c_h$; weights $w_1, \dots, w_h$; output layer producing $y$.]
Each hidden unit has its own receptive field in input space.
$$y = \sum_{j=1}^{h} w_j \, \phi_j, \qquad \phi_j = \phi(\|x - c_j\|)$$

Gaussian radial function: $\phi(z) = e^{-z^2/2\sigma^2}$
Thin plate spline: $\phi(z) = z^2 \log z$
Quadratic: $\phi(z) = (z^2 + r^2)^{1/2}$
Inverse quadratic: $\phi(z) = \dfrac{1}{(z^2 + r^2)^{1/2}}$

Here, $z = \|x - c_j\|$.
The most popular radial function is the Gaussian activation function.
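As a concrete illustration, here is a minimal NumPy sketch of the forward pass with Gaussian radial functions; the array names, shapes, and the single shared width `sigma` are assumptions made for this example.

```python
import numpy as np

def rbfn_forward(x, centers, sigma, w):
    """Forward pass of an RBFN with Gaussian basis functions.

    x       : input vector, shape (p,)
    centers : hidden-unit centers c_j, shape (h, p)
    sigma   : shared width of the Gaussian radial functions (assumed)
    w       : linear output weights w_j, shape (h,)
    """
    z = np.linalg.norm(x - centers, axis=1)      # z_j = ||x - c_j||
    phi = np.exp(-z**2 / (2.0 * sigma**2))       # Gaussian radial function
    return w @ phi                               # y = sum_j w_j * phi_j
```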
RBFN vs. Multilayer Network
$$w = [w_1, w_2, \dots, w_h]^T$$
$$\Phi w = y^d, \qquad y^d \text{ is the desired output}$$
$$w = (\Phi^T \Phi)^{-1} \Phi^T y^d = \Phi^{\dagger} y^d$$
$\Phi^{\dagger} = (\Phi^T \Phi)^{-1} \Phi^T$ is the pseudo-inverse of $\Phi$.
This is possible only when $\Phi^T \Phi$ is non-singular. If it is singular, singular value decomposition is used to solve for $w$.
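A short sketch of this batch least-squares solution, assuming the hidden-layer activations of all training patterns are stacked row-wise in a matrix `Phi`; `np.linalg.pinv` computes the pseudo-inverse via SVD, so it also covers the singular case mentioned above.

```python
import numpy as np

def solve_output_weights(Phi, y_d):
    """Least-squares solution for the linear output weights.

    Phi : hidden-layer activations, shape (N patterns, h units)
    y_d : desired outputs, shape (N,)
    """
    # w = (Phi^T Phi)^{-1} Phi^T y_d; pinv uses SVD internally,
    # so it also handles a singular Phi^T Phi.
    return np.linalg.pinv(Phi) @ y_d
```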
Illustration: EX-NOR problem
The truth table and the RBFN architecture are given below:
x1  x2  y^d
 0   0   1
 0   1   0
 1   0   0
 1   1   1

[Figure: RBFN architecture for the EX-NOR problem: inputs x1 and x2, two Gaussian units with centers c1 and c2, a bias input of +1, and weights w1, w2 feeding the output y.]
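A hedged sketch of solving the EX-NOR problem with this architecture; the centers at (0,0) and (1,1) and the unit width are assumptions chosen for illustration, and the output weights (including the bias weight) are obtained with the pseudo-inverse described earlier.

```python
import numpy as np

X   = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_d = np.array([1, 0, 0, 1], dtype=float)        # EX-NOR truth table

centers = np.array([[0.0, 0.0], [1.0, 1.0]])     # assumed centers c1, c2
sigma = 1.0                                      # assumed width

# Hidden-layer activations plus a bias column of +1
Z   = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
Phi = np.exp(-Z**2 / (2 * sigma**2))
Phi = np.hstack([Phi, np.ones((4, 1))])          # bias = +1

w = np.linalg.pinv(Phi) @ y_d                    # [w1, w2, bias weight]
print(Phi @ w)                                   # close to [1, 0, 0, 1]
```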
$$\frac{\partial E}{\partial c_{ij}} = \frac{\partial E}{\partial y}\,\frac{\partial y}{\partial \phi_i}\,\frac{\partial \phi_i}{\partial c_{ij}}
= -(y^d - y)\, w_i\, \frac{\partial \phi_i}{\partial z_i}\,\frac{\partial z_i}{\partial c_{ij}}$$

Now, $\dfrac{\partial \phi_i}{\partial z_i} = -\dfrac{z_i}{\sigma_i^2}\,\phi_i$

and, since $z_i = \Big(\sum_j (x_j - c_{ij})^2\Big)^{1/2}$,

$$\frac{\partial z_i}{\partial c_{ij}} = -\,\frac{x_j - c_{ij}}{z_i}$$
Contd...
The update rule for the centers is:
$$c_{ij}(t+1) = c_{ij}(t) + \eta_1\, (y^d - y)\, w_i\, \phi_i\, \frac{(x_j - c_{ij})}{\sigma_i^2}$$
The update rule for the linear weights is:
$$w_i(t+1) = w_i(t) + \eta_2\, (y^d - y)\, \phi_i$$
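A minimal sketch of one combined gradient-descent step over the centers and the linear weights, following the two update rules above; the variable names and the single shared width `sigma` are assumptions.

```python
import numpy as np

def gradient_step(x, y_d, centers, sigma, w, eta1, eta2):
    """One gradient-descent update of centers and linear weights."""
    z   = np.linalg.norm(x - centers, axis=1)          # z_i = ||x - c_i||
    phi = np.exp(-z**2 / (2 * sigma**2))               # Gaussian units
    y   = w @ phi                                      # network output
    err = y_d - y                                      # (y^d - y)

    # Centers: c_ij <- c_ij + eta1 (y^d - y) w_i phi_i (x_j - c_ij) / sigma^2
    centers += eta1 * err * (w * phi)[:, None] * (x - centers) / sigma**2
    # Weights: w_i <- w_i + eta2 (y^d - y) phi_i
    w += eta2 * err * phi
    return centers, w
```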
[Figure: input-output (I/O) data used for identification, plotted over 150 time steps.]
Contd...
The system is identified from the input-output data
using a radial basis function network
Network parameters are given below:
Number of inputs : 2 [u(t), h(t)]
Number of outputs : 1 [target: h(t + 1)]
Units in the hidden layer : 30
Number of I/O data : 150
Radial Basis Function : Gaussian
Width of the radial function ($\sigma$) : 0.707
Center learning rate ($\eta_1$) : 0.3
Weight learning rate ($\eta_2$) : 0.2
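Putting the pieces together, a sketch of the identification loop with these settings; the data arrays below are random stand-ins for the 150 recorded samples (not the actual plant data), and `gradient_step` is the routine sketched after the update rules above.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 8.0, 150)     # stand-in for the recorded input u(t)
h = rng.uniform(0.0, 1.0, 150)     # stand-in for the recorded output h(t)

X   = np.column_stack([u[:-1], h[:-1]])   # inputs [u(t), h(t)]
y_d = h[1:]                               # target h(t+1)

centers = X[rng.choice(len(X), size=30, replace=False)]   # 30 hidden units
w       = np.zeros(30)
sigma, eta1, eta2 = 0.707, 0.3, 0.2

for epoch in range(1000):          # the result slide plots up to 1000 epochs
    for xk, yk in zip(X, y_d):
        # per-sample center/weight update from the earlier sketch
        centers, w = gradient_step(xk, yk, centers, sigma, w, eta1, eta2)
```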
Result: Identification
[Figure: mean square error vs. epochs over 1000 training epochs.]
Model Validation
After identification, the model is validated on a set of 100 input-output data points that is different from the data set used for training. The result is shown in the following figure, where the left panel shows the desired and network outputs and the right panel shows the corresponding input.
[Figure: model validation over 100 time steps. Left panel: desired output y_d and actual network output y_a. Right panel: corresponding input u(t).]
Hybrid Learning
In hybrid learning, the radial basis functions relocate their
centers in a self-organized manner while the weights are
updated using supervised learning.
When a pattern is presented to the RBFN, either a new center is grown if the pattern is sufficiently novel, or the parameters of both layers are updated using gradient descent.
The test of novelty depends on two criteria:
Is the Euclidean distance between the input pattern
and the nearest center greater than a threshold $\delta(t)$?
Is the mean square error at the output greater than a
desired accuracy?
$$c_i(t+1) = c_i(t) + \alpha\,\big(x - c_i(t)\big)$$
where $c_i$ is the nearest center and $\alpha$ is the learning rate.
Thus the center moves closer to x.
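A sketch of one hybrid-learning step combining the novelty test with the self-organized center relocation; the threshold names, the initial weight given to a newly grown unit, and the learning rates are assumptions.

```python
import numpy as np

def hybrid_step(x, y_d, centers, sigma, w, alpha, delta, err_tol, eta2):
    """One hybrid-learning step: grow a new center if the pattern is novel,
    otherwise relocate the nearest center and update the linear weights."""
    z   = np.linalg.norm(x - centers, axis=1)
    phi = np.exp(-z**2 / (2 * sigma**2))
    err = y_d - w @ phi

    if z.min() > delta and abs(err) > err_tol:
        # Pattern is sufficiently novel: allocate a new hidden unit at x
        centers = np.vstack([centers, x])
        w       = np.append(w, err)          # assumed initial weight
    else:
        # Self-organized relocation of the nearest center towards x
        i = np.argmin(z)
        centers[i] += alpha * (x - centers[i])
        # Supervised update of the linear weights
        w += eta2 * err * phi
    return centers, w
```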
$$x_i = \theta_i^T \phi, \qquad i = 1, \dots, n$$
where $\phi \in \mathbb{R}^l$ is the output vector of the hidden layer and $\theta_i \in \mathbb{R}^l$ is the connection weight vector from the hidden units to the $i$th output unit. The weight update law is the supervised gradient rule given earlier.
Comparison of Results