
Backpropagation Algorithm (Concept)
What is Backpropagation?
Backpropagation is a supervised learning algorithm for training neural networks.
Why Do We Need Backpropagation?
Gradient Descent
Update the weights using gradient descent. Gradient descent is used for finding the minimum of a function. In our case, we want to minimize the error function.
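
To make the idea concrete, here is a minimal Python sketch (not from the slides) that minimizes an illustrative error function E(w) = (w - 3)^2; the starting weight and the learning rate 0.1 are assumptions chosen for the demo.

def dE_dw(w):
    return 2 * (w - 3)   # derivative of E(w) = (w - 3)^2

w = 0.0      # arbitrary starting weight
eta = 0.1    # learning rate (assumed for this toy example)
for _ in range(100):
    w = w - eta * dE_dw(w)   # step opposite the gradient
print(w)   # approaches 3, the minimum of E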
Overview: without a bias, a single neuron simply computes

output = w1 * input

The bias is treated as a weight w_b on a constant input of 1, so w_b = 1 * b1, and the neuron's output becomes

output = w1 * input + w_b
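
As a quick sketch of this in Python (the input value here is assumed purely for illustration):

w1, b1 = 0.15, 0.35   # one weight and one bias from the example network below
x = 0.05              # illustrative input

w_b = 1 * b1                 # the bias acts as a weight on a constant input of 1
output_no_bias = w1 * x      # output = w1 * input
output = w1 * x + w_b        # output = w1 * input + w_b
print(output_no_bias, output)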
Inputs: i1 = 0.05, i2 = 0.10

Weights into the hidden layer: w1 = 0.15, w2 = 0.20, w3 = 0.25, w4 = 0.30, with bias b1 = 0.35

Weights into the output layer: w5 = 0.40, w6 = 0.45, w7 = 0.50, w8 = 0.55, with bias b2 = 0.60

Target outputs: o1 = 0.01, o2 = 0.99
Forward Pass

Compute each neuron's net input (the weighted sum of its inputs plus the bias), then squash it using the logistic function to get the neuron's output.
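
A sketch of the forward pass in Python using the numbers above; the variable names are mine, and the logistic function is sigmoid(x) = 1 / (1 + e^(-x)).

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

i1, i2 = 0.05, 0.10

# Hidden layer: net input = weighted sum of the inputs plus the bias, then squash.
net_h1 = 0.15 * i1 + 0.20 * i2 + 0.35   # w1*i1 + w2*i2 + b1
out_h1 = sigmoid(net_h1)                # 0.593269992
net_h2 = 0.25 * i1 + 0.30 * i2 + 0.35   # w3*i1 + w4*i2 + b1
out_h2 = sigmoid(net_h2)                # 0.596884378

# Output layer: same pattern, fed by the hidden outputs.
net_o1 = 0.40 * out_h1 + 0.45 * out_h2 + 0.60   # w5*out_h1 + w6*out_h2 + b2
out_o1 = sigmoid(net_o1)                        # 0.751365070
net_o2 = 0.50 * out_h1 + 0.55 * out_h2 + 0.60   # w7*out_h1 + w8*out_h2 + b2
out_o2 = sigmoid(net_o2)                        # 0.772928465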
Calculating Total Error
Each output neuron contributes a squared error, and the total error is the sum E_total = Σ ½(target − output)².
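
Continuing the sketch, the total error for the forward-pass outputs computed above:

out_o1, out_o2 = 0.751365070, 0.772928465   # from the forward pass
t1, t2 = 0.01, 0.99                         # target outputs

E_o1 = 0.5 * (t1 - out_o1) ** 2   # 0.274811083
E_o2 = 0.5 * (t2 - out_o2) ** 2   # 0.023560026
E_total = E_o1 + E_o2             # 0.298371109
print(E_total)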
The Backwards Pass

Update the weights using gradient descent, so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole.
Output Layer:

Consider w5. By the chain rule, the partial derivative of the total error with respect to w5 is

∂E_total/∂w5 = ∂E_total/∂out_o1 × ∂out_o1/∂net_o1 × ∂net_o1/∂w5

Update the weight: to decrease the error, we then subtract this value, scaled by the learning rate η (eta), from the current weight:

w5⁺ = w5 − η × ∂E_total/∂w5
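
A sketch of this chain-rule computation for w5, reusing the forward-pass values; the learning rate η = 0.5 is an assumption (it is the value that reproduces the updated-error figures quoted at the end of this walkthrough).

out_h1 = 0.593269992            # hidden output feeding w5
out_o1, t1 = 0.751365070, 0.01
eta = 0.5                       # learning rate (assumed; reproduces the quoted numbers)

dE_dout = out_o1 - t1                     # ∂E_total/∂out_o1
dout_dnet = out_o1 * (1 - out_o1)         # ∂out_o1/∂net_o1 (logistic derivative)
dnet_dw5 = out_h1                         # ∂net_o1/∂w5
dE_dw5 = dE_dout * dout_dnet * dnet_dw5   # 0.082167041

w5_new = 0.40 - eta * dE_dw5              # 0.358916480
print(w5_new)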


We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
Hidden Layer:
The same chain rule applies to the hidden-layer weights, except that each hidden neuron's output feeds every output neuron, so the error signals from all output neurons are summed.
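
A sketch of this computation for the hidden-layer weight w1, using the original (not yet updated) w5 and w7 as noted above; η = 0.5 is assumed as before.

i1 = 0.05
out_h1 = 0.593269992
out_o1, out_o2 = 0.751365070, 0.772928465
t1, t2 = 0.01, 0.99
w5, w7 = 0.40, 0.50   # original weights, not the updated ones
eta = 0.5             # learning rate (assumed)

# Error signal at each output neuron: ∂E_total/∂net_o
delta_o1 = (out_o1 - t1) * out_o1 * (1 - out_o1)
delta_o2 = (out_o2 - t2) * out_o2 * (1 - out_o2)

# How the total error changes with out_h1: summed over both output neurons.
dE_douth1 = delta_o1 * w5 + delta_o2 * w7        # 0.036350306

dE_dw1 = dE_douth1 * out_h1 * (1 - out_h1) * i1  # 0.000438568
w1_new = 0.15 - eta * dE_dw1                     # 0.149780716
print(w1_new)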
Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085.
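
Putting everything together, here is a sketch of the whole procedure as a training loop; again η = 0.5 is assumed and the biases are held fixed, which reproduces the error figures quoted above.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
eta = 0.5   # learning rate (assumed)

for step in range(10000):
    # Forward pass.
    out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
    out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
    out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
    out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)

    # Backward pass: error signals at the output neurons.
    d_o1 = (out_o1 - t1) * out_o1 * (1 - out_o1)
    d_o2 = (out_o2 - t2) * out_o2 * (1 - out_o2)

    # Error signals at the hidden neurons (using the original w5..w8).
    d_h1 = (d_o1 * w5 + d_o2 * w7) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w6 + d_o2 * w8) * out_h2 * (1 - out_h2)

    # Gradient descent updates (biases held fixed in this walkthrough).
    w5, w6 = w5 - eta * d_o1 * out_h1, w6 - eta * d_o1 * out_h2
    w7, w8 = w7 - eta * d_o2 * out_h1, w8 - eta * d_o2 * out_h2
    w1, w2 = w1 - eta * d_h1 * i1, w2 - eta * d_h1 * i2
    w3, w4 = w3 - eta * d_h2 * i1, w4 - eta * d_h2 * i2

# One more forward pass with the final weights to measure the error.
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(E_total)   # around 0.0000351085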


Thank You
