Lecture 08
[Figure: two scatter plots of + and - samples, one showing a linearly separable pattern and one a non-linearly separable pattern]
Bias units aren't connected to any previous layer and in this sense
don't represent a true "activity".
Advancing Knowledge, Driving Change | www.kca.ac.ke
Bias
Bias is used to adjust the output along with the weighted sum of the inputs to the neuron. It is an additional parameter in the neural network.
Example
From the diagram, a bias of 1.0 has been added as a constant input.
Therefore, bias is a constant which helps the model fit the given data as well as possible.
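The effect of the bias can be sketched with a simple neuron model (a minimal illustration, not the slide's exact network; the inputs and weights below are only example values):

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a constant bias, passed through a sigmoid."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# With zero bias the weighted sum is small and the output sits near 0.5;
# adding a bias of 1.0 shifts the activation up for the same inputs.
print(neuron([0.05, 0.10], [0.15, 0.20], 0.0))
print(neuron([0.05, 0.10], [0.15, 0.20], 1.0))
```

The bias thus shifts the activation function left or right, independently of the inputs.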
A change in weight adjusts the speed of learning; that is, it makes the activation function steeper or flatter.
Example: suppose we increase the weights as follows:
weight1 is changed from 1.0 to 4.0 and weight2 from -0.5 to 1.5.
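The steepening effect can be seen by sampling a single-input sigmoid neuron before and after a weight increase (a minimal sketch using the 1.0 → 4.0 change from the example):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activation(x, weight):
    """A single-input neuron: larger |weight| makes the sigmoid steeper in x."""
    return sigmoid(weight * x)

# Sample the curve around x = 0: with weight 4.0 the output changes
# much faster per unit of input than with weight 1.0.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(x, round(activation(x, 1.0), 3), round(activation(x, 4.0), 3))
```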
Back propagation Neural network
The back propagation algorithm has four main stages:
1. Initialization of weights.
2. Feed forward - each input unit (X) receives an input signal and transmits this signal to each of the hidden units Z1, Z2, Z3, ..., Zn.
Each hidden unit then calculates its activation function and sends its signal Zi to each output unit.
Each output unit calculates its activation function to form the response to the given input pattern.
3. Back propagation of errors - each output unit compares its activation Yk with its target value Tk to determine the associated error for that unit.
Based on the error, an error factor is computed and used to distribute the error back to the hidden layer.
4. Update the weights and biases.
Given that input1 = 0.05, input2 = 0.10, and bias inputs b1 = 1 and b2 = 1, train the above network to output 0.01 and 0.99.
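A forward pass of this kind of 2-2-2 network can be sketched as follows. The slide's weight diagram is not reproduced in the text, so the initial weights below are an assumption, taken from Matt Mazur's well-known step-by-step example; with them the network produces the 0.75136507 output used in the error calculation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Inputs and targets from the example.
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99

# Assumed initial weights (hypothetical: borrowed from Mazur's worked
# example, since the slide's diagram is not reproduced here).
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output
b1, b2 = 0.35, 0.60                        # bias weights (bias input = 1)

# Feed forward: hidden layer.
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)

# Feed forward: output layer.
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)

# Squared-error loss per output unit, then the total.
E_o1 = 0.5 * (t1 - out_o1) ** 2
E_o2 = 0.5 * (t2 - out_o2) ** 2
E_total = E_o1 + E_o2

print(out_o1, out_o2, E_total)
```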
Back propagation algorithm
For example, the target output for o1 is 0.01, but the neural network outputs 0.75136507; therefore its error is:
E_o1 = 1/2 (target_o1 - out_o1)^2 = 1/2 (0.01 - 0.75136507)^2 = 0.274811
Similarly, the target output for o2 is 0.99, but the neural network outputs 0.772928; therefore the error is:
E_o2 = 1/2 (0.99 - 0.772928)^2 = 0.023560
All the errors are then added together to obtain the total error:
E_total = E_o1 + E_o2 = 0.274811 + 0.023560 = 0.298371
Example
By the chain rule, the derivative of the total error with respect to weight w5 is:
dE_total/dw5 = dE_total/dout_o1 * dout_o1/dnet_o1 * dnet_o1/dw5
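Carrying the chain rule through numerically (assuming the Mazur-style initial weights that reproduce the 0.75136507 output, so out_h1 = 0.59326999), a sketch of the w5 gradient:

```python
# Chain rule for dE_total/dw5 with a sigmoid output unit and squared error.
# Values come from the worked forward pass (assumed Mazur-style weights).
out_o1 = 0.75136507   # network output for o1
target_o1 = 0.01
out_h1 = 0.59326999   # hidden unit h1's output, which w5 multiplies

dE_dout = out_o1 - target_o1          # dE_total/dout_o1 = -(target - out)
dout_dnet = out_o1 * (1.0 - out_o1)   # sigmoid derivative at the output
dnet_dw5 = out_h1                     # net_o1 = w5*out_h1 + ..., so d/dw5 = out_h1

dE_dw5 = dE_dout * dout_dnet * dnet_dw5
print(dE_dw5)   # ≈ 0.082167
```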
Advancing Knowledge, Driving Change | www.kca.ac.ke
Advancing Knowledge, Driving Change | www.kca.ac.ke
Step 4: Backward pass
Step a) Calculate the change in weights
(i) Calculate the derivative of the total error with respect to the output:
dE_total/dout_o1
(ii) Calculate the derivative of the neuron's output with respect to its net input (how the net input affects the neuron's output):
dout_o1/dnet_o1
(iii) Calculate the derivative of the neuron's net input with respect to a weight (how a weight affects the weighted input):
dnet_o1/dw5
(iv) Finally, calculate the partial derivative of the total error with respect to the weight, e.g. w5, and update the weight:
new w5 = w5 - n * dE_total/dw5
Where:
n = predefined learning rate, e.g. 0.5
After the new weights leading into the output layer are obtained, the network is updated, and the same backpropagation procedure is applied to calculate the new weights of the preceding (hidden) layer.
From the diagram (weights shown: 0.35, 0.40, 0.51, 0.56):
dNetInput_o1/dOut_h1 = w5 = 0.40
dNetInput_o2/dOut_h1 = w7 = 0.51
Hence,
dE_total/dw1 = 0.035 * 0.24130 * 0.05 = 0.0004222
Therefore,
new w1 = 0.15 - 0.5 * 0.0004222 = 0.1497889
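This update can be checked directly; the factors 0.035, 0.24130 and 0.05 are the slide's values for dE_total/dout_h1, dout_h1/dnet_h1 and dnet_h1/dw1 respectively:

```python
# Gradient for w1 via the chain rule, using the slide's own numbers.
dE_douth1 = 0.035        # how the total error changes with h1's output
douth1_dneth1 = 0.24130  # sigmoid derivative at h1
dneth1_dw1 = 0.05        # = input1, since net_h1 = w1*input1 + ...

learning_rate = 0.5
dE_dw1 = dE_douth1 * douth1_dneth1 * dneth1_dw1
new_w1 = 0.15 - learning_rate * dE_dw1
print(dE_dw1, new_w1)
```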
References
https://medium.com/towards-artificial-intelligence/understanding-back-propagation-in-an-easier-way-you-never-before-42fe26d44a47
https://stevenmiller888.github.io/mind-how-to-build-a-neural-network/
https://hmkcode.com/ai/backpropagation-step-by-step/