Back Propagation
BP has two phases:
Forward pass phase: computes the functional signal; feed-forward propagation of the input pattern signals through the network.
Backward pass phase: computes the error signal; propagation of the errors backwards from the output layer through the network.
An Overview of training
The objective of training the network is to adjust the weights so that applying a set of inputs produces the desired set of outputs. Training assumes that each input vector is paired with a target vector representing the desired output; together these are called a training pair. The group of training pairs is called a training set.
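As a concrete illustration, a training set can be stored as a list of input/target pairs. The XOR function used here is a hypothetical example, not taken from the text:

```python
import numpy as np

# Hypothetical example: the XOR function as a training set.
# Each training pair couples an input vector with its target vector.
training_set = [
    (np.array([0.0, 0.0]), np.array([0.0])),
    (np.array([0.0, 1.0]), np.array([1.0])),
    (np.array([1.0, 0.0]), np.array([1.0])),
    (np.array([1.0, 1.0]), np.array([0.0])),
]

for x, t in training_set:
    print(x, "->", t)
```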
Training Algorithm
Forward Pass
1. Select the next training pair from the training set and apply the input vector to the network input.
2. Calculate the output of the network.
3. Calculate the error between the actual output and the desired output.
Backward Pass
4. Adjust the weights of the network in a way that minimizes the error.
5. Repeat steps 1 to 4 for each vector in the training set until the error for the entire set is acceptably low.
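The five steps above can be sketched as a training loop. This is a minimal sketch for a single sigmoid layer with a bias input; the training set (the OR function), learning rate eta, epoch limit, and error threshold are all illustrative assumptions:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

# Illustrative layer: 2 inputs plus a constant bias input, 1 output.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 1))
eta = 0.5  # assumed learning rate

# Hypothetical training set: the OR function (third component is the bias input).
training_set = [
    (np.array([0.0, 0.0, 1.0]), np.array([0.0])),
    (np.array([0.0, 1.0, 1.0]), np.array([1.0])),
    (np.array([1.0, 0.0, 1.0]), np.array([1.0])),
    (np.array([1.0, 1.0, 1.0]), np.array([1.0])),
]

total_error = 0.0
for epoch in range(5000):                      # step 5: repeat until error is low
    total_error = 0.0
    for x, target in training_set:             # step 1: next pair, apply input
        out = sigmoid(x @ W)                   # step 2: calculate the output
        error = target - out                   # step 3: error vs. desired output
        total_error += float(error @ error)
        delta = out * (1.0 - out) * error      # step 4: adjust the weights
        W += eta * np.outer(x, delta)
    if total_error < 0.05:
        break

print("final squared error over the set:", total_error)
```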
Forward Pass
Signals propagate from the network input to the output. The input/target vector pair (X, T) is taken from the training set. The input is applied at the first layer, and in each layer the output NET is calculated as the weighted sum of its neurons' inputs:
NET = XW
The activation function F squashes NET to produce OUT for each neuron in that layer. Once the set of outputs for a layer is found, it serves as the input to the next layer. The process is repeated, layer by layer, until the final set of outputs is produced.
OUT = F ( XW )
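The layer-by-layer computation of NET = XW and OUT = F(XW) can be sketched as follows; the layer sizes and the choice of a sigmoid squashing function F are illustrative assumptions:

```python
import numpy as np

def F(net):
    # Sigmoid squashing function: maps NET into (0, 1).
    return 1.0 / (1.0 + np.exp(-net))

def forward_pass(x, weights):
    """Propagate an input vector through each layer in turn.

    The OUT of one layer serves as the input X to the next layer.
    """
    out = x
    for W in weights:
        net = out @ W        # NET = XW: weighted sum of the layer's inputs
        out = F(net)         # OUT = F(XW)
    return out

# Illustrative network: 3 inputs -> 4 hidden neurons -> 2 output neurons.
rng = np.random.default_rng(1)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]
x = np.array([0.2, 0.5, 0.9])
print(forward_pass(x, weights))
```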
Backward Pass
Adjusting the weights of the output layer. Consider a neuron q in the output (last) layer k, and the training of a single weight from neuron p in hidden layer j to neuron q in output layer k.
The output OUT of neuron q in layer k is subtracted from its target value to produce an ERROR signal. This is multiplied by the derivative of the squashing function [OUT(1 - OUT)] calculated for that neuron in layer k, producing the value

δ = OUT(1 - OUT)(Target - OUT)   (1)

δ is then multiplied by OUT from neuron p in hidden layer j (i.e. the output of the previous layer) and by the training-rate coefficient η, producing the weight adjustment

Δw_pq,k = η δ_q,k OUT_p,j   (2)
An identical process is performed for each weight proceeding from a neuron in the hidden layer to a neuron in the output layer
w_pq,k(t) = the value of the weight from neuron p in the hidden layer to neuron q in the output layer at step t (before adjustment); note that the subscript k indicates that the weight is associated with its destination layer
w_pq,k(t+1) = the value of the weight at step t+1 (after adjustment), i.e. w_pq,k(t+1) = w_pq,k(t) + Δw_pq,k, where Δw_pq,k is the weight adjustment computed above
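Putting these pieces together, a single output-layer weight adjustment can be sketched numerically. The learning rate eta and all values below are illustrative assumptions, not taken from the text:

```python
# Illustrative values for one weight from hidden neuron p to output neuron q.
eta = 0.5          # assumed training-rate coefficient
out_p_j = 0.8      # OUT of neuron p in hidden layer j (previous layer)
out_q_k = 0.6      # OUT of neuron q in output layer k
target = 1.0       # target value for neuron q

# delta = OUT(1 - OUT)(Target - OUT): error times derivative of the squashing function
delta_q_k = out_q_k * (1.0 - out_q_k) * (target - out_q_k)

# Weight change: delta scaled by the learning rate and the previous layer's output
w_pq_k = 0.3                       # w_pq,k(t), before adjustment
dw = eta * delta_q_k * out_p_j
w_pq_k_next = w_pq_k + dw          # w_pq,k(t+1), after adjustment
print(delta_q_k, dw, w_pq_k_next)
```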