
between zero and one. Larger values of the learning coefficient are used when the input data patterns are close to the ideal; otherwise, small values are used. If the nature of the input data patterns is not known, it is better to use moderate values.
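As a rough illustration of this guideline, the sketch below selects a learning coefficient from a "closeness-to-ideal" score for the input data patterns. The score, the thresholds and the returned coefficient values are illustrative assumptions, not quantities given in the text.

    def choose_learning_coefficient(closeness, low=0.1, moderate=0.45, high=0.9):
        # closeness: a score in [0, 1] describing how close the input data
        # patterns are to the ideal, or None if their nature is not known.
        # Thresholds and coefficient values are illustrative assumptions.
        if closeness is None:
            return moderate              # nature of the data unknown: moderate value
        return high if closeness > 0.8 else low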

3.7 VARIATIONS OF STANDARD BACKPROPAGATION ALGORITHM

3.7.1 Decremental Iteration Procedure

In conventional steepest descent method, the minimum is achieved by varying the variable as

\Delta w = -\eta \, \frac{\partial E}{\partial w}                (3.70)

When a step reveals no improvement, the value of η is reduced and the process is continued. Similarly, in BPN also, when no improvement is perceived in the decrease of error, the training can be continued with a different set of learning coefficients and momentum factors, further decreasing the error. Usually the training of the network reaches a plateau and the error might increase, leading to overtraining. Training can be dispensed with at this stage and then continued from the previous weights using a reduced momentum factor and learning coefficient. Usually the learning coefficient is halved and the momentum factor is reduced by a small value. This method is applied to one example, and the error rate for different values of the learning coefficient is given in Table 3.10; a sketch of the procedure follows the table.

Table 3.10  Error rate for different η, α

    η        α      Iteration    Error      Remark
    0.6      0.9        1        0.256
                        5        0.085
                        6        0.290      STOP
    0.3      0.9        1        0.085
                        4        0.009
                        5        0.056      STOP
    0.15     0.9        1        0.009
                        4        0.001
                        5        0.006      STOP
    0.075    0.9        1        0.001
                        4        0.0005     STOP
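A minimal sketch of the decremental iteration procedure described above is given below. It assumes a generic train_one_pass(weights, eta, alpha) routine that runs one pass of BPN training and returns the updated weights together with the resulting error; that routine, the starting values η = 0.6 and α = 0.9, the α decrement and the stopping threshold are assumptions made for illustration, not details fixed by the text.

    def decremental_training(weights, train_one_pass,
                             eta=0.6, alpha=0.9, alpha_step=0.05,
                             eta_min=0.05, target_error=0.0005):
        # Train until the error stops decreasing, then restart from the best
        # weights found so far with eta halved and alpha reduced by a small value.
        best_weights, best_error = weights, float("inf")
        while eta >= eta_min:
            weights, error = best_weights, float("inf")   # continue from previous weights
            while True:
                new_weights, new_error = train_one_pass(weights, eta, alpha)
                if new_error >= error:                    # error no longer decreasing
                    break
                weights, error = new_weights, new_error
                if error <= target_error:
                    return weights, error
            if error < best_error:
                best_weights, best_error = weights, error
            eta = eta / 2.0                               # learning coefficient is halved
            alpha = max(alpha - alpha_step, 0.0)          # momentum reduced by a small value
        return best_weights, best_error

Starting from η = 0.6 and halving on every restart reproduces the sequence of learning coefficients 0.6, 0.3, 0.15 and 0.075 shown in Table 3.10.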
