• The BACKPROPAGATION algorithm learns the weights for a multilayer network, given a network with a fixed set of units and interconnections. It employs gradient descent to attempt to minimize the squared error between the network output values and the target values for these outputs.
• In the BACKPROPAGATION algorithm, we consider networks with multiple output units rather than a single unit as before, so we redefine E to sum the errors over all of the network output units:
E(\vec{w}) \equiv \frac{1}{2} \sum_{d \in D} \sum_{k \in outputs} (t_{kd} - o_{kd})^2

where,
• outputs - the set of output units in the network
• t_kd and o_kd - the target and output values associated with the kth output unit and training example d
• d - a training example from the training set D
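Gradient descent on this error gives the standard BACKPROPAGATION update rules for sigmoid units, stated here for reference; in the code below, \delta_k corresponds to d_output and \delta_h to d_hiddenlayer:

\delta_k = o_k (1 - o_k)(t_k - o_k) \quad \text{for each output unit } k

\delta_h = o_h (1 - o_h) \sum_{k \in outputs} w_{kh}\, \delta_k \quad \text{for each hidden unit } h

w_{ji} \leftarrow w_{ji} + \eta\, \delta_j\, x_{ji}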
import numpy as np

# Inputs X are taken from the sample output printed below; y, epoch and lr are
# assumed values (the "Actual Output" line is missing from the captured output).
X = np.array([[0.66666667, 1.0], [0.33333333, 0.55555556], [1.0, 0.66666667]])
y = np.array([[0.92], [0.86], [0.89]])  # assumed targets
epoch = 5000                            # assumed number of training iterations
lr = 0.1                                # assumed learning rate

# Defining network
inputlayer_neurons = 2
hiddenlayer_neurons = 3
output_neurons = 1
print("No. of neurons in input layer : ", inputlayer_neurons)
print("No. of neurons in hidden layer : ", hiddenlayer_neurons)
print("No. of neurons in output layer : ", output_neurons)

# Random initialisation of weights and biases
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
print("Weights from input layer to hidden layer :\n", wh)
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
print("bias in between input layer and hidden layers :\n", bh)
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
print("Weights from hidden layer to output layers :\n", wout)
bout = np.random.uniform(size=(1, output_neurons))
print("bias in between hidden and output layers :\n", bout)
# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the sigmoid output x
def derivatives_sigmoid(x):
    return x * (1 - x)
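Note that derivatives_sigmoid expects the activation value (the sigmoid output), not the raw net input; the training loop below always passes it activations. A quick illustrative check (not part of the original program):

a = sigmoid(np.array([0.0]))   # a == [0.5]
print(derivatives_sigmoid(a))  # [0.25], matching sigmoid'(0) = 0.5 * (1 - 0.5)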
# Training: forward propagation followed by backpropagation, once per epoch
for i in range(epoch):
    # Forward propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh                   # net input to the hidden layer
    hlayer_act = sigmoid(hinp)          # hidden layer activations
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout             # net input to the output layer
    output = sigmoid(outinp)            # network output

    # Backpropagation
    EO = y - output                               # error at the output layer
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad                       # error term at the output layer
    EH = d_output.dot(wout.T)                     # error propagated back to the hidden layer
    hiddengrad = derivatives_sigmoid(hlayer_act)  # how much the hidden layer contributed to the error
    d_hiddenlayer = EH * hiddengrad

    # Weight updates: each hidden output value * output-layer error term * learning rate
    wout += hlayer_act.T.dot(d_output) * lr
    # change in weights between input layer and hidden layer
    wh += X.T.dot(d_hiddenlayer) * lr

print("Input: \n", str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Predicted Output:
[[0.80331703]
[0.21313421]
[0.95037939]]
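Note that this version updates only the weights and leaves the biases bh and bout at their random initial values. A common extension (a sketch, not part of the original program) adds the corresponding bias updates inside the training loop, summing the error terms over the training examples:

    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr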
THANK YOU