Lecture # 3
The multilayer network
[Figure: a multilayer network with input units X1, …, Xn, output units Y1, …, Yn, and weight matrices W(n,m) connecting the layers.]
Exercise: with a desired output value of 1 and the log sigmoid as activation function, calculate the unit outputs C_out, D_out, E_out, F_out, G_out and the error terms E_c, E_d, E_e, E_f, E_g.
MLP Architecture
The Multi-Layer Perceptron (MLP) was first introduced by M. Minsky and S. Papert in 1969.
Type:
Feedforward
Learning Method:
Supervised
Why the MLP?
The single-layer perceptron classifiers discussed previously can only deal with linearly separable sets of patterns.
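A small sketch (not from the lecture) that makes this concrete: a brute-force search over a coarse grid of weights finds a separating line for AND but none for XOR, the pattern set used in the numerical example later.

```python
# Minimal sketch: a single linear threshold unit w1*x1 + w2*x2 + b > 0
# can separate AND but provably cannot separate XOR.
import itertools

def separable(patterns):
    """Brute-force search for a linear threshold unit over a coarse weight grid."""
    grid = [x / 4 for x in range(-8, 9)]          # candidate weights in [-2, 2]
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(t) for x1, x2, t in patterns):
            return True
    return False

XOR = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 0, 0)]
AND = [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)]

print("AND linearly separable:", separable(AND))   # True
print("XOR linearly separable:", separable(XOR))   # False
```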
[Figure: a single-layer perceptron.]
Terminology and Nomenclature
Backpropagation Algorithm
Step 0: Initialize weights (set to small random values).
Step 4: Each hidden unit (Z_j, j = 1, …, p) sums its weighted input signals,

$$z\_in_j = v_{0j} + \sum_{i=1}^{n} x_i v_{ij}$$

applies its activation function to compute its output signal, z_j = f(z_in_j), and sends this signal to all units in the layer above (the output units).
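A minimal NumPy sketch of this step. The input pattern and biases are illustrative; the weight values are borrowed from the numerical example at the end (their arrangement into a 2×2 matrix is an assumption):

```python
import numpy as np

def log_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 4: z_in_j = v0_j + sum_i x_i * v_ij for each hidden unit Z_j
x  = np.array([1.0, 0.0])              # one input pattern (illustrative)
v  = np.array([[0.23, 0.46],           # v[i, j]: weight from input i to hidden unit j
               [0.46, 0.92]])          # (values borrowed from the numerical example)
v0 = np.zeros(2)                       # hidden biases v0_j (assumed zero)
z_in = v0 + x @ v                      # weighted input sums
z = log_sigmoid(z_in)                  # output signals z_j = f(z_in_j)
print(z)
```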
Step 5: Each output unit (Y_k, k = 1, …, m) sums its weighted input signals,

$$y\_in_k = w_{0k} + \sum_{j=1}^{p} z_j w_{jk}$$

and applies its activation function to compute its output signal, y_k = f(y_in_k).
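A self-contained sketch of the output-layer sum in the same style. The hidden signals z and the hidden-to-output weights w are illustrative assumptions (the lecture lists no w values):

```python
import numpy as np

def log_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 5: y_in_k = w0_k + sum_j z_j * w_jk for each output unit Y_k
z  = np.array([0.56, 0.70])            # hidden output signals from Step 4 (illustrative)
w  = np.array([[0.5],                  # w[j, k]: weight from hidden unit j to output k
               [0.5]])                 # (assumed values)
w0 = np.zeros(1)                       # output biases w0_k (assumed zero)
y_in = w0 + z @ w
y = log_sigmoid(y_in)                  # network output y_k = f(y_in_k)
print(y)
```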
Step 6: Each output unit (Y_k, k = 1, …, m) receives a target pattern corresponding to the input training pattern and computes its error information term:

$$\delta_k = (t_k - y_k)\, f'(y\_in_k)$$

It then calculates its weight correction term $\Delta w_{jk} = \alpha \delta_k z_j$ and its bias correction term $\Delta w_{0k} = \alpha \delta_k$ (used to update $w_{0k}$ in Step 8).
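A sketch of the error computation. Note that for the log sigmoid, f'(y_in_k) can be rewritten as y_k(1 − y_k), so the derivative reuses the output signal already computed. All values below (target, output, hidden signals, learning rate) are illustrative assumptions:

```python
import numpy as np

# Step 6: delta_k = (t_k - y_k) * f'(y_in_k)
# For the log sigmoid, f'(y_in_k) = y_k * (1 - y_k).
t = np.array([1.0])                      # target pattern (illustrative)
y = np.array([0.65])                     # output signal from Step 5 (illustrative)
z = np.array([0.56, 0.70])               # hidden signals (illustrative)
alpha = 0.5                              # learning rate (assumed)

delta_k  = (t - y) * y * (1 - y)         # error information term
delta_w  = alpha * np.outer(z, delta_k)  # weight correction terms  Δw_jk = α δ_k z_j
delta_w0 = alpha * delta_k               # bias correction term     Δw_0k = α δ_k
```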
Step 8: Each output unit (Y_k, k = 1, …, m) updates its bias and weights (j = 0, …, p): $w_{jk}(\text{new}) = w_{jk}(\text{old}) + \Delta w_{jk}$.
Each hidden unit (Z_j, j = 1, …, p) updates its bias and weights (i = 0, …, n): $v_{ij}(\text{new}) = v_{ij}(\text{old}) + \Delta v_{ij}$.
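These notes skip Step 7, in which each hidden unit sums the deltas arriving from the output layer, $\delta_j = \left(\sum_k \delta_k w_{jk}\right) f'(z\_in_j)$; the sketch below fills that gap and applies the Step 8 updates in one pass. The weight values and learning rate are assumptions, not lecture data:

```python
import numpy as np

def f(x):                                    # log sigmoid
    return 1.0 / (1.0 + np.exp(-x))

alpha = 0.5                                  # learning rate (assumed)
x, t = np.array([1.0, 0.0]), np.array([1.0])
v, v0 = np.array([[0.23, 0.46], [0.46, 0.92]]), np.zeros(2)  # input -> hidden
w, w0 = np.full((2, 1), 0.5), np.zeros(1)                    # hidden -> output (assumed)

z = f(v0 + x @ v)                            # Step 4
y = f(w0 + z @ w)                            # Step 5
delta_k = (t - y) * y * (1 - y)              # Step 6
delta_j = (delta_k @ w.T) * z * (1 - z)      # Step 7 (not shown in the notes)
w  += alpha * np.outer(z, delta_k)           # Step 8: output weights and bias
w0 += alpha * delta_k
v  += alpha * np.outer(x, delta_j)           #         hidden weights and bias
v0 += alpha * delta_j
```

Note that delta_j is computed before w is updated: the hidden error terms must use the old output-layer weights.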
Post-processing
Interpretation & analysis of results
Termination conditions:
• Number of training iterations
• Accuracy
Numerical Example
x1  x2 | t
-----------
 1   1 | 0
 1   0 | 1
 0   1 | 1
 0   0 | 0
Activation Function:
Log Sigmoid
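For reference (a standard identity, not spelled out in the notes), the log sigmoid and the derivative form that Step 6 relies on:

$$f(x) = \frac{1}{1 + e^{-x}}, \qquad f'(x) = f(x)\bigl(1 - f(x)\bigr)$$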
Initial weights (v):
v1 = 0.23
v2 = 0.46
v3 = 0.46
v4 = 0.92
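Tying the steps together, a minimal end-to-end sketch that trains on the XOR table above. Several details are assumptions, since the notes list only v1–v4: a 2-2-1 architecture with v1–v4 filling the input-to-hidden matrix column-wise, zero initial biases, hidden-to-output weights of 0.5, and a learning rate of 0.5.

```python
import numpy as np

def f(x):                                    # log sigmoid
    return 1.0 / (1.0 + np.exp(-x))

# XOR training data from the numerical example
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

v, v0 = np.array([[0.23, 0.46], [0.46, 0.92]]), np.zeros(2)  # v1..v4 (assumed layout)
w, w0 = np.full((2, 1), 0.5), np.zeros(1)                    # assumed initial values
alpha = 0.5                                                  # assumed learning rate

for epoch in range(20000):                   # termination: iteration cap ...
    sse = 0.0
    for x, t in zip(X, T):
        z = f(v0 + x @ v)                    # Step 4
        y = f(w0 + z @ w)                    # Step 5
        delta_k = (t - y) * y * (1 - y)      # Step 6
        delta_j = (delta_k @ w.T) * z * (1 - z)   # Step 7
        w  += alpha * np.outer(z, delta_k)   # Step 8
        w0 += alpha * delta_k
        v  += alpha * np.outer(x, delta_j)
        v0 += alpha * delta_j
        sse += ((t - y) ** 2).sum()
    if sse < 0.01:                           # ... or accuracy threshold
        break

print("epochs:", epoch + 1, "SSE:", round(sse, 4))
```

The loop exercises both termination conditions from the notes: a cap on the number of training iterations and an accuracy (sum-squared error) threshold. Plain backprop can stall on XOR from an unlucky initialization, so the epoch cap also guards against non-convergence.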