Epoch 1
STEP 1: Adaline layer (INPUT layer to HIDDEN layer)
STEP 5: Error gradient (back propagation) from Hidden layer to Input layer
STEP 6: Update the weights & bias of the Input-to-Hidden layer based on the error gradient

Net input of the hidden units (activated inputs: xi = si):
  Zin1 = x1·w11 + x2·w21 + bm1
  Zin2 = x1·w12 + x2·w22 + bm2

Sigmoid activation of the hidden units and its derivative:
  Zi = 1 / (1 + e^(−Zini)),   Zi' = Zi·(1 − Zi)

Error back-propagated to hidden unit i (δy1, δy2 are the output-layer error gradients from step 3):
  ez1 = δy1·V11 + δy2·V12
  ez2 = δy1·V21 + δy2·V22

Error gradient at hidden unit i:
  δhi = ezi · Zi'

Change in weights and bias (α = learning rate):
  ΔW11 = α·x1·δh1,  ΔW12 = α·x1·δh2
  ΔW21 = α·x2·δh1,  ΔW22 = α·x2·δh2
  Δbm1 = α·δh1,     Δbm2 = α·δh2

New weights and bias:
  Wij(new) = Wij(old) + ΔWij
  bmi(new) = bmi(old) + Δbmi
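The STEP 5/STEP 6 formulas can be sketched in Python for hidden unit Z1, iteration 1 of the worked table below; the back-propagated error ez1 = 0.0368 is taken as given from the output-layer step:

```python
import math

# Given values for iteration 1 (from the worked table)
x1, x2 = 0.5, -0.5                 # activated inputs (xi = si)
w11, w21, bm1 = 0.10, -0.20, 0.01  # input-to-hidden weights and bias
ez1 = 0.0368                       # hidden-layer error, back-propagated from the output layer
alpha = 1.2                        # learning rate

# Net input and sigmoid activation of hidden unit Z1
zin1 = x1 * w11 + x2 * w21 + bm1       # 0.1600
z1 = 1.0 / (1.0 + math.exp(-zin1))     # 0.5399
z1_prime = z1 * (1.0 - z1)             # 0.2484

# STEP 5: error gradient at the hidden unit
dh1 = ez1 * z1_prime                   # 0.00914 (the table rounds this to 0.0092)

# STEP 6: weight/bias changes and updated values
dw11 = alpha * x1 * dh1
dw21 = alpha * x2 * dh1
dbm1 = alpha * dh1
w11_new, w21_new, bm1_new = w11 + dw11, w21 + dw21, bm1 + dbm1
print(round(w11_new, 4), round(w21_new, 4), round(bm1_new, 4))  # 0.1055 -0.2055 0.021
```

The printed values match the last three columns of the table's first row.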
Hidden unit Z1 (weights w11, w21; bias bm1), with Z1 = 1/(1 + e^(−Zin1)) and ez1 = δy1·V11 + δy2·V12. The last three columns hold the updated weights & bias of each iteration:

Iter | x1   | x2   | b | w11  | w21   | bm1  | Zin1    | Z1     | Z1'    | δy1     | δy2     | ez1     | δh1     | α   | Δw11   | Δw21    | Δbm1    | w11    | w21     | bm1
1    |  0.5 | -0.5 | 1 | 0.10 | -0.20 | 0.01 |  0.1600 | 0.5399 | 0.2484 |  0.0368 | -0.1057 |  0.0368 |  0.0092 | 1.2 | 0.0055 | -0.0055 |  0.0110 | 0.1055 | -0.2055 |  0.0210
2    | -0.5 |  0.5 | 1 | 0.11 | -0.21 | 0.02 | -0.1345 | 0.4664 | 0.2489 | -0.1234 |  0.1053 | -0.0790 | -0.0197 | 1.2 | 0.0118 | -0.0118 | -0.0236 | 0.1173 | -0.2173 | -0.0026
Hidden unit Z2 (weights w12, w22; bias bm2), with Z2 = 1/(1 + e^(−Zin2)) and ez2 = δy1·V21 + δy2·V22:

Iter | x1   | x2   | b | w12  | w22  | bm2  | Zin2    | Z2     | Z2'    | δy1     | δy2     | ez2     | δh2     | α   | Δw12   | Δw22    | Δbm2    | w12    | w22    | bm2
1    |  0.5 | -0.5 | 1 | 0.30 | 0.55 | 0.02 | -0.1050 | 0.4738 | 0.2493 |  0.0368 | -0.1057 |  0.0458 |  0.0114 | 1.2 | 0.0068 | -0.0068 |  0.0137 | 0.3068 | 0.5432 |  0.0337
2    | -0.5 |  0.5 | 1 | 0.31 | 0.54 | 0.03 |  0.1518 | 0.5379 | 0.2486 | -0.1234 |  0.1053 | -0.1326 | -0.0330 | 1.2 | 0.0198 | -0.0198 | -0.0396 | 0.3266 | 0.5234 | -0.0059
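As a check on the tables above, both iterations for hidden unit Z1 can be replayed in a loop: the updated weights of iteration 1 (0.1055, −0.2055, 0.0210) become the starting weights of iteration 2, and the final values match the last row. The ez1 values per iteration are taken as given from the output-layer gradient step:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

alpha = 1.2
w11, w21, bm1 = 0.10, -0.20, 0.01   # initial weights & bias for hidden unit Z1

# (x1, x2, ez1) per iteration; ez1 is the error back-propagated from the output layer
iterations = [(0.5, -0.5, 0.0368), (-0.5, 0.5, -0.0790)]

for x1, x2, ez1 in iterations:
    zin1 = x1 * w11 + x2 * w21 + bm1
    z1 = sigmoid(zin1)
    dh1 = ez1 * z1 * (1.0 - z1)     # error gradient at Z1: δh1 = ez1 · Z1'
    w11 += alpha * x1 * dh1         # ΔW11 = α·x1·δh1
    w21 += alpha * x2 * dh1         # ΔW21 = α·x2·δh1
    bm1 += alpha * dh1              # Δbm1 = α·δh1

print(round(w11, 4), round(w21, 4), round(bm1, 4))  # 0.1173 -0.2173 -0.0026
```

Note that the table's iteration-2 starting columns (0.11, −0.21, 0.02) are simply the iteration-1 results rounded to two decimals; the loop carries the full-precision values.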
Network diagram (summary of the figure): inputs x1 and x2, with bias inputs bm1 and bm2, feed hidden units Z1 and Z2 through weights wij; the hidden units, with bias inputs bo1 and bo2, feed output units Y1 and Y2 through weights vij. At each output the error is e1 = t1 − Y1 and e2 = t2 − Y2; the output-layer error gradients δy1 and δy2 are propagated back through the links V11, V12, V21, V22 to give the hidden-layer errors:
  ez1 = δy1·V11 + δy2·V12
  ez2 = δy1·V21 + δy2·V22

Initialisation:
1. Initialise all weights (wij & vij) and biases (bmi & boi) with SMALL RANDOM values.
2. Initialise the learning rate (α); assign the error tolerance (Es) and the momentum factor.
3. Initialise the training set (xi & ti).

VERY IMPORTANT – PLEASE DO REMEMBER:
  ΔWio or ΔVio = Δ(W or V) from input to output of a link
               = (learning rate) × (output calculated at the input side of the link) × (error gradient at the output side of the link)
where
  δhi = error gradient at the output side of the Hidden layer = hidden-layer error (ezi) × derivative of the hidden-layer output (Zi')
  δyi = error gradient at the output side of the Output layer
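The boxed rule — Δ(W or V) = α × (input-side output) × (output-side gradient) — applies to every link in the network, including the bias links, whose input is fixed at 1. A minimal sketch, using the iteration-1 values from the worked table:

```python
def delta(alpha, input_side_output, output_side_gradient):
    """Generic delta rule for any link: change in W (or V) on that link."""
    return alpha * input_side_output * output_side_gradient

# Link w11 (input-to-hidden): input-side output is x1, output-side gradient is δh1
dw11 = delta(1.2, 0.5, 0.0092)   # matches the table's Δw11 ≈ 0.0055

# Bias link bm1: the bias input is 1, so Δbm1 = α·δh1
dbm1 = delta(1.2, 1.0, 0.0092)   # matches the table's Δbm1 ≈ 0.0110

print(round(dw11, 4), round(dbm1, 4))
```

The same call with Y-side quantities (a hidden output Zi and an output gradient δyi) would give the ΔVij updates for the hidden-to-output links.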