
BACK PROPAGATION NETWORK

Epoch 1

STEP 1: Adaline layer (INPUT layer to HIDDEN layer)
  Activate inputs: xi = si
  Net input of the hidden neurons:
    Zin1 = x1*w11 + x2*w21 + bm1
    Zin2 = x1*w12 + x2*w22 + bm2
  Apply activation function: Zi = 1/(1 + e^(-Zin_i))

STEP 5: Error gradient (back propagation) from HIDDEN layer to INPUT layer
  Derivative of hidden output:      Zi' = Zi*(1 - Zi)
  Error at hidden neuron i:         ez_i = δy1*Vi1 + δy2*Vi2   (δy1, δy2 taken from STEP 3)
  Error gradient at hidden layer:   δhi = ez_i * Zi'

STEP 6: Update the weights & bias of the INPUT-to-HIDDEN layer based on the error gradient
  Change in weight and bias:  ΔWij = α * xi * δhj,   Δbmj = α * δhj
  New weight and bias:        Wij(new) = Wij(old) + ΔWij,   bmj(new) = bmj(old) + Δbmj

Hidden neuron Z1 (weights & bias taken from the last iteration; α = learning rate):

Iteration | x1   | x2   | b | w11  | w21   | bm1  | Zin1    | Z1 = 1/(1+e^-Zin1) | Z1' = Z1(1-Z1) | δy1     | δy2     | ez1 = δy1*V11 + δy2*V12 | δh1 = ez1*Z1' | α   | Δw11   | Δw21    | Δbm1    | w11(new) | w21(new) | bm1(new)
1         | 0.5  | -0.5 | 1 | 0.10 | -0.20 | 0.01 | 0.1600  | 0.5399             | 0.2484         | 0.0368  | -0.1057 | 0.0368                  | 0.0092        | 1.2 | 0.0055 | -0.0055 | 0.0110  | 0.1055   | -0.2055  | 0.0210
2         | -0.5 | 0.5  | 1 | 0.11 | -0.21 | 0.02 | -0.1345 | 0.4664             | 0.2489         | -0.1234 | 0.1053  | -0.0790                 | -0.0197       | 1.2 | 0.0118 | -0.0118 | -0.0236 | 0.1173   | -0.2173  | -0.0026

Hidden neuron Z2:

Iteration | x1   | x2   | b | w12  | w22  | bm2  | Zin2    | Z2 = 1/(1+e^-Zin2) | Z2' = Z2(1-Z2) | δy1     | δy2     | ez2 = δy1*V21 + δy2*V22 | δh2 = ez2*Z2' | α   | Δw12   | Δw22    | Δbm2    | w12(new) | w22(new) | bm2(new)
1         | 0.5  | -0.5 | 1 | 0.30 | 0.55 | 0.02 | -0.1050 | 0.4738             | 0.2493         | 0.0368  | -0.1057 | 0.0458                  | 0.0114        | 1.2 | 0.0068 | -0.0068 | 0.0137  | 0.3068   | 0.5432   | 0.0337
2         | -0.5 | 0.5  | 1 | 0.31 | 0.54 | 0.03 | 0.1518  | 0.5379             | 0.2486         | -0.1234 | 0.1053  | -0.1326                 | -0.0330       | 1.2 | 0.0198 | -0.0198 | -0.0396 | 0.3266   | 0.5234   | -0.0059
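As a cross-check on iteration 1 of the two tables above, here is a minimal Python sketch of STEPs 1, 5 and 6 (variable names such as sigmoid, dy1, dh1 are my own, not part of the worksheet). The output-layer gradients δy1 = 0.0368 and δy2 = -0.1057 are taken from STEP 3 below, exactly as the table does; the last digit of some results may differ by one because the worksheet rounds intermediate values to four decimals.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Iteration 1: inputs, input-to-hidden weights/biases and learning rate from the table
x1, x2 = 0.5, -0.5
w11, w21, bm1 = 0.10, -0.20, 0.01
w12, w22, bm2 = 0.30, 0.55, 0.02
alpha = 1.2

# STEP 1: net input and activation of the hidden layer
zin1 = x1*w11 + x2*w21 + bm1             # 0.1600
zin2 = x1*w12 + x2*w22 + bm2             # -0.1050
z1, z2 = sigmoid(zin1), sigmoid(zin2)    # 0.5399, 0.4738

# STEP 5: error gradient propagated back from the hidden layer to the input layer
dy1, dy2 = 0.0368, -0.1057               # δy1, δy2 from STEP 3 (output-layer gradients)
V11, V12, V21, V22 = 0.37, -0.22, 0.90, -0.12   # old hidden-to-output weights
ez1 = dy1*V11 + dy2*V12                  # ≈ 0.0368 (hidden-layer error at Z1)
ez2 = dy1*V21 + dy2*V22                  # ≈ 0.0458 (hidden-layer error at Z2)
dh1 = ez1 * z1*(1 - z1)                  # ≈ 0.0092 (δh1 = ez1 * Z1')
dh2 = ez2 * z2*(1 - z2)                  # ≈ 0.0114 (δh2 = ez2 * Z2')

# STEP 6: update input-to-hidden weights and biases, W(new) = W(old) + α*x*δh
w11 += alpha*x1*dh1; w21 += alpha*x2*dh1; bm1 += alpha*dh1   # ≈ 0.1055, -0.2055, 0.0210
w12 += alpha*x1*dh2; w22 += alpha*x2*dh2; bm2 += alpha*dh2   # ≈ 0.307, 0.543, 0.034
```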

[Network diagram: inputs x1, x2 (plus bias 1) connect to hidden neurons Z1, Z2 through weights wij and biases bm1, bm2; hidden outputs Z1, Z2 (plus bias 1) connect to output neurons Y1, Y2 through weights vij and biases bo1, bo2. The errors e1 = t1 - Y1 and e2 = t2 - Y2 are propagated back as δy1, δy2 through V11, V12, V21, V22 to give δh1, δh2.]

Initialisation:
1. Initialise all weights (wij & vij) and biases (bmi & boi) with SMALL RANDOM values.
2. Initialise the learning rate (α) and assign the error tolerance (Es) and momentum factor.
3. Initialise the training set (Xi & ti). (A minimal initialisation sketch is given after the formulas below.)

VERY IMPORTANT – PLEASE DO REMEMBER
ΔWio or ΔVio = Δ(W or V)_input-output
             = (learning rate) x (output calculated at the input side of the link) x (error gradient at the output side of the link)

δh1 = δy1*V11 + δy2*V12
δh2 = δy1*V21 + δy2*V22
δhi = error gradient at the output side of the hidden layer = hidden-layer error x derivative of hidden-layer output
δyi = error gradient at the output side of the output layer = output-layer error x derivative of output-layer output
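A minimal Python sketch of the initialisation step, under stated assumptions: the learning rate α = 1.2 and the two training pairs are the ones used in the worked tables, while the values for the error tolerance Es and the momentum factor are placeholders, since the worksheet does not specify them.

```python
import random

random.seed(0)

def small():
    """A small random value, as required by initialisation step 1 above."""
    return random.uniform(-0.5, 0.5)

W  = [[small(), small()], [small(), small()]]   # input-to-hidden weights wij
bm = [small(), small()]                         # hidden-layer biases bm1, bm2
V  = [[small(), small()], [small(), small()]]   # hidden-to-output weights vij
bo = [small(), small()]                         # output-layer biases bo1, bo2

alpha = 1.2    # learning rate used throughout the worked example
Es    = 0.01   # error tolerance (placeholder value; stop once the error falls below it)
mu    = 0.0    # momentum factor (placeholder; the hand calculation above does not use momentum)

# Training set (Xi, ti): the two patterns that appear as iterations 1 and 2 in the tables
training_set = [([0.5, -0.5], [0.9, 0.1]),
                ([-0.5, 0.5], [0.1, 0.9])]
```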

STEP 2: Madaline layer (HIDDEN layer to OUTPUT layer)
  Inputs to the output layer come from the hidden layer.
  Net input of the output neurons:
    Yin1 = Z1*V11 + Z2*V21 + bo1
    Yin2 = Z1*V12 + Z2*V22 + bo2
  Apply activation function: Yi = 1/(1 + e^(-Yin_i))

STEP 3: Error gradient (back propagation) from OUTPUT layer to HIDDEN layer
  Derivative of output:                      Yi' = Yi*(1 - Yi)
  Output-layer error (ti = desired output):  ey_i = ti - Yi
  Error gradient at output layer:            δyi = ey_i * Yi'   (output-layer error x derivative of output-layer output)

STEP 4: Update the weights & bias of the HIDDEN-to-OUTPUT layer based on the error gradient
  Change in weight and bias:  ΔVij = α * zi * δyj,   Δboj = α * δyj
  New weight and bias:        Vij(new) = Vij(old) + ΔVij,   boj(new) = boj(old) + Δboj

Output neuron Y1 (weights & bias taken from the last iteration; α = learning rate):

Iteration | Z1     | Z2     | b | V11  | V21  | bo1  | Yin1   | Y1 = 1/(1+e^-Yin1) | Y1' = Y1(1-Y1) | t1  | ey1 = t1-Y1 | δy1 = ey1*Y1' | α   | ΔV11    | ΔV21    | Δbo1    | V11(new) | V21(new) | bo1(new)
1         | 0.5399 | 0.4738 | 1 | 0.37 | 0.90 | 0.31 | 0.9362 | 0.7183             | 0.2023         | 0.9 | 0.1817      | 0.0368        | 1.2 | 0.0238  | 0.0209  | 0.0441  | 0.3938   | 0.9209   | 0.3541
2         | 0.4664 | 0.5379 | 1 | 0.39 | 0.92 | 0.35 | 1.0331 | 0.7375             | 0.1936         | 0.1 | -0.6375     | -0.1234       | 1.2 | -0.0691 | -0.0797 | -0.1481 | 0.3247   | 0.8412   | 0.2060
Output neuron Y2:

Iteration | Z1     | Z2     | b | V12   | V22   | bo2  | Yin2    | Y2 = 1/(1+e^-Yin2) | Y2' = Y2(1-Y2) | t2  | ey2 = t2-Y2 | δy2 = ey2*Y2' | α   | ΔV12    | ΔV22    | Δbo2    | V12(new) | V22(new) | bo2(new)
1         | 0.5399 | 0.4738 | 1 | -0.22 | -0.12 | 0.27 | 0.0944  | 0.5236             | 0.2494         | 0.1 | -0.4236     | -0.1057       | 1.2 | -0.0685 | -0.0601 | -0.1268 | -0.2885  | -0.1801  | 0.1432
2         | 0.4664 | 0.5379 | 1 | -0.29 | -0.18 | 0.14 | -0.0882 | 0.4780             | 0.2495         | 0.9 | 0.4220      | 0.1053        | 1.2 | 0.0589  | 0.0680  | 0.1264  | -0.2295  | -0.1121  | 0.2696
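The first row of each of the two tables above can be reproduced with the following minimal Python sketch of STEPs 2, 3 and 4 (names such as sigmoid, yin1, dy1 are mine, not from the worksheet).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Iteration 1: hidden outputs from STEP 1, hidden-to-output weights/biases, targets, learning rate
z1, z2 = 0.5399, 0.4738
V11, V21, bo1 = 0.37, 0.90, 0.31
V12, V22, bo2 = -0.22, -0.12, 0.27
t1, t2 = 0.9, 0.1
alpha = 1.2

# STEP 2: net input and activation of the output layer
yin1 = z1*V11 + z2*V21 + bo1            # 0.9362
yin2 = z1*V12 + z2*V22 + bo2            # 0.0944
y1, y2 = sigmoid(yin1), sigmoid(yin2)   # 0.7183, 0.5236

# STEP 3: error and error gradient at the output layer
ey1, ey2 = t1 - y1, t2 - y2             # 0.1817, -0.4236
dy1 = ey1 * y1*(1 - y1)                 # 0.0368  (δy1 = ey1 * Y1')
dy2 = ey2 * y2*(1 - y2)                 # -0.1057 (δy2 = ey2 * Y2')

# STEP 4: update hidden-to-output weights and biases, V(new) = V(old) + α*z*δy
V11 += alpha*z1*dy1; V21 += alpha*z2*dy1; bo1 += alpha*dy1   # 0.3938, 0.9209, 0.3541
V12 += alpha*z1*dy2; V22 += alpha*z2*dy2; bo2 += alpha*dy2   # -0.2885, -0.1801, 0.1432
```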
Refer to the error-propagation diagram above: Y = calculated output, Y' = derivative of calculated output, t = desired output, e = error = (t - Y), δy = error gradient at the output layer, δh = error gradient at the hidden layer.
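Chaining the six steps gives one full epoch. The sketch below (assumed names; plain Python lists rather than the worksheet layout) processes the two training patterns sequentially, exactly as the tables do; note that STEP 5 deliberately uses the hidden-to-output weights V from before the STEP 4 update, which is what the tabulated values of ez and δh imply.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W  = [[0.10, 0.30], [-0.20, 0.55]]   # W[i][j]: input x(i+1) -> hidden Z(j+1)
bm = [0.01, 0.02]                    # hidden-layer biases bm1, bm2
V  = [[0.37, -0.22], [0.90, -0.12]]  # V[i][j]: hidden Z(i+1) -> output Y(j+1)
bo = [0.31, 0.27]                    # output-layer biases bo1, bo2
alpha = 1.2                          # learning rate
data = [([0.5, -0.5], [0.9, 0.1]),   # iteration 1: (x, t)
        ([-0.5, 0.5], [0.1, 0.9])]   # iteration 2: (x, t)

for x, t in data:
    # STEPs 1-2: forward pass through the hidden and output layers
    z = [sigmoid(sum(x[i] * W[i][j] for i in range(2)) + bm[j]) for j in range(2)]
    y = [sigmoid(sum(z[i] * V[i][j] for i in range(2)) + bo[j]) for j in range(2)]
    # STEP 3: output-layer error gradients, δy = (t - Y) * Y * (1 - Y)
    dy = [(t[j] - y[j]) * y[j] * (1 - y[j]) for j in range(2)]
    # STEP 5 (computed before V is overwritten): δh = (Σ_k δy_k * V_jk) * Z * (1 - Z)
    dh = [sum(dy[k] * V[j][k] for k in range(2)) * z[j] * (1 - z[j]) for j in range(2)]
    # STEP 4: update hidden-to-output weights and biases
    for i in range(2):
        for j in range(2):
            V[i][j] += alpha * z[i] * dy[j]
    bo = [bo[j] + alpha * dy[j] for j in range(2)]
    # STEP 6: update input-to-hidden weights and biases
    for i in range(2):
        for j in range(2):
            W[i][j] += alpha * x[i] * dh[j]
    bm = [bm[j] + alpha * dh[j] for j in range(2)]

# After the epoch: V ≈ [[0.3247, -0.2295], [0.8412, -0.1121]], matching the STEP 4 tables
print([[round(v, 4) for v in row] for row in V])
```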

Soft Computing Techniques IIT, Dhanbad 17KT000283 2017-18
