
ARTIFICIAL NEURAL NETWORKS

Training of Weights

• For complex problems in multi-dimensional space, we need an algorithm for setting (or training) the weights.

• Supervised training: the classes of the training samples are known.

• The weights are initialized randomly.

• The threshold (bias) is treated as a weight whose input is fixed at -1, as in the sketch below.
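A minimal sketch (not from the slides; values and names are illustrative) of how the threshold can be folded into the weight vector by appending a fixed -1 input:

```python
import numpy as np

# Sketch: treating the threshold (bias) as an extra weight whose
# input is fixed at -1, with random weight initialization.

rng = np.random.default_rng(0)

x = np.array([0.5, 2.0])        # original inputs (made-up values)
x_aug = np.append(x, -1.0)      # augmented input: bias input fixed at -1

w_aug = rng.uniform(-1, 1, size=x_aug.size)   # random initial weights;
                                              # the last one acts as the threshold

activation = np.dot(w_aug, x_aug)             # sum of x_i * w_i, threshold included
print(activation)
```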


• For each training sample, we calculate the output with the current weights.

• The error is equal to
  ydesired – yactual = ydesired – f(∑ xiwi)

• For the complete set of training patterns, the error is equal to
  E = ∑p (ydesired – yactual)²

• The error is squared so that positive and negative errors do not cancel each other out during summation (see the sketch below).
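A small illustration (not from the slides; the desired and actual outputs below are made up) of the summed squared error over a set of training patterns:

```python
import numpy as np

# Sketch of E = sum over patterns p of (ydesired - yactual)^2,
# using made-up outputs for four training patterns.

y_desired = np.array([1.0, 0.0, 1.0, 1.0])
y_actual  = np.array([0.8, 0.3, 0.6, 1.1])

errors = y_desired - y_actual      # positive and negative errors
E = np.sum(errors ** 2)            # squaring stops them cancelling in the sum
print(E)                           # ~0.30
```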

• Error is the difference between the computed output and the given output (also called the target or desired value).

• How to reduce this error?
  • The input is fixed.
  • The architecture of the neuron is fixed.
  • We can only change the weights.

• Which weight should we change?
  • Since we do not know how much each weight is contributing to the error, we change all the weights (the sketch below illustrates this).
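As a rough illustration (my own sketch; the function and values below are made up), a finite-difference check shows that every weight contributes some share of the error, which is why all weights get updated:

```python
import numpy as np

# Sketch: estimate each weight's contribution to E by nudging it
# slightly and watching E change (finite differences).

def sse(w, x, y_desired):
    """Squared error of a single linear neuron, y = sum(x_i * w_i)."""
    y_actual = np.dot(x, w)
    return (y_desired - y_actual) ** 2

x = np.array([0.5, 2.0, -1.0])   # one training input (last entry is the fixed -1 bias input)
w = np.array([0.3, -0.2, 0.1])
y_desired = 1.0

eps = 1e-6
for i in range(len(w)):
    w_plus = w.copy()
    w_plus[i] += eps
    grad_i = (sse(w_plus, x, y_desired) - sse(w, x, y_desired)) / eps
    print(f"dE/dw[{i}] is roughly {grad_i:.3f}")
```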
Training of Weights: Generalized Delta Rule
Δwi = -2c[-(ydesired – yactual) * f'(act) * xi]

• This rule is called the Generalized Delta Rule.

• Example: for the logistic function f(act) = 1/(1 + e^(-λ·act)),

we have f'(act) = λ · f(act) · (1 – f(act))


[Table of common activation functions and their derivatives]
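A minimal sketch (my own, assuming a single neuron and the logistic activation from this slide; the example numbers are made up) of one Generalized Delta Rule update:

```python
import numpy as np

# One Generalized Delta Rule step for a single neuron with
# logistic activation f(act) = 1 / (1 + exp(-lam * act)).

def logistic(act, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * act))

def logistic_deriv(act, lam=1.0):
    f = logistic(act, lam)
    return lam * f * (1.0 - f)            # f'(act) = lam * f(act) * (1 - f(act))

def delta_rule_update(w, x, y_desired, c=0.1, lam=1.0):
    act = np.dot(x, w)                    # activation = sum of x_i * w_i
    y_actual = logistic(act, lam)
    # delta w_i = -2c * [-(ydesired - yactual) * f'(act) * x_i]
    dw = -2 * c * (-(y_desired - y_actual) * logistic_deriv(act, lam) * x)
    return w + dw

# Example with made-up numbers
w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
print(delta_rule_update(w, x, y_desired=1.0))
```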
Question # 1
• For the following network, find the new weight wij(new) using the generalized delta rule.
• The activation function of the neuron is linear, i.e. y = activation.
• The learning rate (c) is 0.1.
• The training pair is [3; 3], i.e. input = 3 and desired output = 3.
• The current weights are shown on the links.
• There is only one input and only one output.
Solution: Question # 1
• The derivative of the activation function is 1 (linear activation).

• With the current weights, the computed output is yactual = 14.

• Wnew = Wold – 2c[-(ydesired – yactual) * f'(act) * x]

  = 5 – 2(0.1)[-(3 – 14) * 1 * 3]
  = 5 – 0.2[-(-11) * 3]
  = 5 – 0.2[33]
  = 5 – 6.6
  = -1.6
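A quick check (my own sketch, reusing the numbers stated in the solution above):

```python
# Reproducing the Question 1 update with the numbers from the solution:
# w_old = 5, c = 0.1, x = 3, ydesired = 3, yactual = 14, f'(act) = 1 (linear).

w_old, c, x = 5.0, 0.1, 3.0
y_desired, y_actual, f_prime = 3.0, 14.0, 1.0

w_new = w_old - 2 * c * (-(y_desired - y_actual) * f_prime * x)
print(w_new)   # roughly -1.6
```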
Question # 2
• Determine the output values of the neurons labeled 'Friends' and 'Family' when the input is Romeo = 1 and Juliet = 1, and all other inputs are zero.
• Assume unit step activation functions for all neurons.
• All the weights are shown on their links.
• θ = 1
Solution: Question # 2
• For the 1st neuron:
• Activation = 1·1 + 0·1 + (-1)·(0.5) = 0.5
• Since the activation is positive, the output = 1

• For the 2nd neuron:
• Activation = 1·1 + 0·1 + (-1)·(0.5) = 0.5
• Since the activation is positive, the output = 1

• For the 3rd neuron (Friends):
• Activation = 1·1 + 1·1 + (-1)·(1.5) = 0.5
• Since the activation is positive, the output = 1

• For the 4th neuron (Family):
• Activation = 1·(-1) + 1·(-1) + (-1)·(-1.5) = -0.5
• Since the activation is negative, the output = 0
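A short check (my own sketch): the sums below mirror the activations written in the solution, with a unit step that fires on a positive activation, as for the first two neurons; the wiring of the network figure itself is not reconstructed here.

```python
# Reproducing the Question 2 arithmetic with a unit step activation.

def step(act):
    return 1 if act > 0 else 0                # unit step: fire on positive activation

a1 = 1 * 1 + 0 * 1 + (-1) * 0.5               # 1st neuron
a2 = 1 * 1 + 0 * 1 + (-1) * 0.5               # 2nd neuron
a3 = 1 * 1 + 1 * 1 + (-1) * 1.5               # 3rd neuron ('Friends'), fed by neurons 1 and 2
a4 = 1 * (-1) + 1 * (-1) + (-1) * (-1.5)      # 4th neuron ('Family'), fed by neurons 1 and 2

for name, act in [("neuron 1", a1), ("neuron 2", a2),
                  ("Friends", a3), ("Family", a4)]:
    print(name, "activation =", act, "output =", step(act))
```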
References
• Engelbrecht
  • Training of weights: Sections 2.4, 2.4.1, 2.4.2

• Laurene Fausett
  • Delta Rule: pages 86–88
  • Generalized Delta Rule: page 106
