[Figure: Hebbian learning in a neural network — input signals entering neuron i, connected by weight w_ij to neuron j, which produces the output signals]
Step 2: Activation.
Compute the neuron output at iteration p:

    y_j(p) = Σ_{i=1}^{n} x_i(p) w_ij(p) − θ_j

where n is the number of neuron inputs, and θ_j is the
threshold value of neuron j.
10/28/2019 Intelligent Systems and Soft Computing 9
Step 3: Learning.
Update the weights in the network:

    w_ij(p+1) = w_ij(p) + Δw_ij(p)

where Δw_ij(p) is the weight correction at iteration p.
The weight correction is determined by the
generalised activity product rule:

    Δw_ij(p) = φ y_j(p) [λ x_i(p) − w_ij(p)]

where φ is the forgetting factor and λ is the ratio of the learning rate to φ.
Step 4: Iteration.
Increase iteration p by one and go back to Step 2.
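Steps 2–4 above can be condensed into a small NumPy loop. This is a minimal sketch, not the definitive implementation: the zero thresholds, the values of φ and λ, the binary input pattern, and the random initial weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs = 5, 5
W = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))  # initial weights w_ij
theta = np.zeros(n_outputs)      # thresholds theta_j (zero for simplicity)
phi, lam = 0.1, 2.0              # forgetting factor and lambda (illustrative)

x = np.array([1.0, 1.0, 0.0, 0.0, 1.0])  # one binary input pattern

for p in range(100):                           # Step 4: iterate
    y = x @ W - theta                          # Step 2: y_j = sum_i x_i w_ij - theta_j
    W += phi * y * (lam * x[:, None] - W)      # Step 3: delta w_ij = phi y_j (lam x_i - w_ij)
```

With repeated presentation of the same pattern, weights from active inputs are driven toward λ·x_i while weights from inactive inputs decay toward zero, which is exactly the "strengthen active associations, forget inactive ones" behaviour of the rule.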
[Figure: (a) initial and (b) final states of the network — five input neurons x1–x5 connected to five output neurons y1–y5 (input layer on the left, output layer on the right), with the binary activations shown at each neuron]
The Kohonen network
The Kohonen model provides a topological
mapping. It places a fixed number of input
patterns from the input layer onto an output, or
Kohonen, layer, typically a one- or two-dimensional lattice of neurons.
Training in the Kohonen network begins with the
winner’s neighborhood of a fairly large size. Then,
as training proceeds, the neighborhood size
gradually decreases.
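The shrinking-neighborhood idea can be sketched as a short training loop for a one-dimensional Kohonen layer. This is a sketch under stated assumptions: the Gaussian neighborhood function, the exponential decay schedules, and the random two-dimensional inputs are illustrative choices, not prescribed by the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10                               # output (Kohonen) neurons on a 1-D line
W = rng.uniform(-1, 1, size=(m, 2))  # one weight vector per output neuron

eta0, sigma0, n_iter = 0.5, m / 2.0, 2000
for p in range(n_iter):
    x = rng.uniform(-1, 1, size=2)                     # random input pattern
    j_win = np.argmin(np.linalg.norm(W - x, axis=1))   # winning neuron
    # neighborhood starts fairly large and gradually shrinks with p
    sigma = sigma0 * np.exp(-p / (n_iter / np.log(sigma0)))
    eta = eta0 * np.exp(-p / n_iter)                   # decaying learning rate
    dist = np.arange(m) - j_win                        # lattice distance to winner
    h = np.exp(-dist**2 / (2 * sigma**2))              # Gaussian neighborhood
    W += eta * h[:, None] * (x - W)                    # pull winner and neighbours toward x
```

Because every neuron in the neighborhood is pulled toward the same input, nearby neurons end up with similar weight vectors — this is what produces the topological ordering.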
[Figure: architecture of the Kohonen network — inputs x1 and x2 fully connected to output neurons y1, y2, y3; input layer on the left, output layer on the right]
[Figure: the Mexican hat function of lateral connection — connection strength plotted against distance from the winning neuron: excitatory effect close to the winner, inhibitory effect farther away]
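The Mexican-hat lateral profile is often approximated as a difference of two Gaussians, a wide inhibitory one subtracted from a narrow excitatory one. The slides give no formula, so the function below, including its width and amplitude parameters, is an illustrative assumption.

```python
import numpy as np

def mexican_hat(d, sigma_e=1.0, sigma_i=3.0, a_e=2.0, a_i=1.0):
    """Difference-of-Gaussians lateral connection strength:
    positive (excitatory) near the winner, negative (inhibitory) farther away."""
    d = np.asarray(d, dtype=float)
    return (a_e * np.exp(-d**2 / (2 * sigma_e**2))
            - a_i * np.exp(-d**2 / (2 * sigma_i**2)))

d = np.linspace(-10, 10, 201)   # lattice distance from the winning neuron
s = mexican_hat(d)              # peaks at d = 0, dips negative at moderate distance
```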
The winner-takes-all neuron j_X is the output neuron whose weight vector W_j lies closest to the input vector X in Euclidean distance:

    ‖X − W_{j_X}‖ = min_j ‖X − W_j‖ ,  j = 1, 2, . . ., m

where m is the number of neurons in the Kohonen layer.
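The minimum-distance criterion amounts to a single argmin over Euclidean distances. A minimal sketch — the input vector and the three weight vectors below are illustrative values, not data from this section:

```python
import numpy as np

X = np.array([0.52, 0.12])          # input vector (illustrative)
W = np.array([[0.27, 0.81],         # weight vector of output neuron 1
              [0.42, 0.70],         # output neuron 2
              [0.43, 0.21]])        # output neuron 3

dists = np.linalg.norm(X - W, axis=1)   # ||X - W_j|| for j = 1..m
j_X = np.argmin(dists)                  # index of the winner-takes-all neuron
```

Here neuron 3 wins, since its weight vector (0.43, 0.21) is by far the closest to X.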
[Figures: four snapshots of the weight vectors during training, W(2,j) plotted against W(1,j) on the square [−1, 1] × [−1, 1], showing the weights spreading from their initial configuration into an ordered map]