
Program Name: Masters of Computer Applications

Academic Year: 2021-2022 Winter Semester


Year: FYMCA, Sem-II

Subject Code: MCA 203


Deep Neural Network
LAB Manual

Course Instructor: Dr. Yogesh Kumar Sariya


Lab Instructor: Varshita Gangadhara

Submitted by: Yashasvi Surendra Surajiwala


PRN No: 20210804028

Department of Computer Science and Engineering


D. Y. Patil International University, Akurdi
INDEX

Sr. No.  Practical List                                                              Date

1        Introduction to various libraries required to implement DNN.               8/03/2022
2        Exploring and preprocessing the data using Principal Component Analysis.   15/03/2022
3        Getting familiar with TensorFlow.                                           1/04/2022
4        Implementation of linear regression with TensorFlow.                        8/04/2022
5        TensorFlow implementation of logistic regression.                           6/05/2022
6        Building a NN model with TensorFlow.                                        13/05/2022
7        Implement forward propagation.                                              17/05/2022
8        Implement backward propagation.                                             18/05/2022
9        Project-I
10       Project-II
Practical -1

Objective: Introduction to various libraries required to implement DNN.

Theory:

Script/ Flow-Chart/ Code:


1. Numpy:
import numpy as np

# Creating two arrays of rank 2


x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])

# Creating two arrays of rank 1


v = np.array([9, 10])
w = np.array([11, 12])

# Inner product of vectors


print(np.dot(v, w), "\n")

# Matrix and Vector product


print(np.dot(x, v), "\n")

# Matrix and matrix product


print(np.dot(x, y))

Output:
2. SciPy:
from scipy.fftpack import fft, ifft
import numpy as np

x = np.array([0, 1, 2, 3])
y = fft(x)
print(y)

Output:

3. Scikit-learn:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape

Output:

4. Theano:
import theano
import theano.tensor as T
x = T.dmatrix('x')
s = 1 / (1 + T.exp(-x))
logistic = theano.function([x], s)
logistic([[0, 1], [-1, -2]])

Output:
5. Tensorflow:
Step 1: Define the variables. In this example, the values are
x = 1, y = 2, and z = 3.
Step 2: Add x and y.
Step 3: Multiply z with the sum of x and y.
Finally, the result comes out as 9.
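A minimal sketch of these steps in TensorFlow 2.x eager mode (the variable names x, y, and z follow the description above; tf.constant, tf.add, and tf.multiply are one reasonable way to express them):

import tensorflow as tf

# Step 1: define the variables
x = tf.constant(1)
y = tf.constant(2)
z = tf.constant(3)

# Step 2: add x and y
xy_sum = tf.add(x, y)

# Step 3: multiply z with the sum of x and y
result = tf.multiply(z, xy_sum)

print(result.numpy())  # 9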

6. PyTorch:
import torch

shape = (2, 3)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")

Output:
Random Tensor: tensor([[0.0048, 0.9871, 0.2899], [0.8372, 0.5228, 0.4136]])
Ones Tensor: tensor([[1., 1., 1.], [1., 1., 1.]])
Zeros Tensor: tensor([[0., 0., 0.], [0., 0., 0.]])

7. Pandas:
import pandas as pd
data = {"country": ["Brazil", "Russia", "India", "China", "South Africa"],
"capital": ["Brasilia", "Moscow", "New Delhi", "Beijing", "Pretoria"],
"area": [8.516, 17.10, 3.286, 9.597, 1.221],
"population": [200.4, 143.5, 1252, 1357, 52.98] }
data_table = pd.DataFrame(data)
print(data_table)

Output:
8. Matplotlib:
import numpy as np
import matplotlib.pyplot as plt
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)

# Plot the points using matplotlib
plt.title("sine wave form")
plt.plot(x, y)
plt.show()

Output:

9. Seaborn:
Example 1:
import seaborn as sns
sns.set(style="dark")
fmri = sns.load_dataset("fmri")
# Plot the responses for different events and regions
sns.lineplot(x="timepoint", y="signal", hue="region", style="event", data=fmri)
Output:

Example 2:
# Importing libraries
import numpy as np
import seaborn as sns
# Selecting style as white, dark, whitegrid, darkgrid or ticks
sns.set(style="white")
# Generate a random univariate dataset
rs = np.random.RandomState(10)
d = rs.normal(size=100)
# Plot a simple histogram and KDE with bin size determined automatically
# (note: distplot is deprecated in newer seaborn; histplot/displot are its replacements)
sns.distplot(d, kde=True, color="m")
Output:

Conclusion:
Practical -2

Objective: Exploring and preprocessing the data using Principal Component Analysis.

Theory: Principal Component Analysis (PCA) is a statistical procedure that converts a set of
observations of possibly correlated variables into a set of values of linearly uncorrelated
variables called principal components. Each principal component is chosen so that it describes as
much of the remaining variance as possible, and all principal components are orthogonal to each
other; the first principal component therefore has the maximum variance. PCA is a linear
dimensionality reduction technique that extracts information from a high-dimensional space by
projecting it onto a lower-dimensional subspace. It tries to preserve the essential parts of the
data that carry more variation and remove the non-essential parts with less variation.

One important thing to note about PCA is that it is an unsupervised dimensionality reduction
technique: similar data points can be grouped based on the feature correlation between them
without any supervision (or labels).

Uses of PCA:
• It is used to find inter-relation between variables in the data.
• It is used to interpret and visualize data.
• It reduces the number of variables, which makes further analysis simpler.
• It’s often used to visualize genetic distance and relatedness between populations.
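
Since no script is given for this practical, the following is a minimal sketch of PCA-based exploration and preprocessing with scikit-learn (the Iris dataset and the choice of two components are assumptions made only for illustration):

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Load a small example dataset (assumption: Iris, 4 numeric features)
X, y = load_iris(return_X_y=True)

# Standardize the features so each variable contributes equally to the components
X_scaled = StandardScaler().fit_transform(X)

# Project the data onto the first two principal components
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_scaled)

# Fraction of variance explained by each retained component
print(pca.explained_variance_ratio_)

# Visualize the projected data
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()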

Conclusion:
Practical -3

Objective: Getting familiar with TensorFlow.

Theory:

Script/ Flow-Chart/ Code:


import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv('Churn.csv')
X = pd.get_dummies(df.drop(['Churn', 'customerID'], axis=1))
y = df['Churn'].apply(lambda x: 1 if x=='Yes' else 0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
X_train.head()
y_train.head()

#Import Dependencies
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from sklearn.metrics import accuracy_score

#Build and Compile Model


model=Sequential()
model.add(Dense(units=32,activation='relu',input_dim=len(X_train.columns)))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])

#Fit, Predict and Evaluate


model.fit(X_train, y_train, epochs=200, batch_size=32)
y_hat = model.predict(X_test)
y_hat = [0 if val < 0.5 else 1 for val in y_hat]
accuracy_score(y_test, y_hat)

Result/Output:

Conclusion:
Practical -4

Objective: Implementation of Linear Regression with TensorFlow.

Theory:

Script/ Flow-Chart/ Code:


# Import Relevant libraries
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Learning rate
learning_rate = 0.01
# Number of loops for training through all your data to update the parameters
training_epochs = 100
# the training dataset
x_train = np.linspace(0, 10, 100)
y_train = x_train + np.random.normal(0,1,100)
# plot of data
plt.scatter(x_train, y_train)

# declare weights
weight = tf.Variable(0.)
bias = tf.Variable(0.)
# Define linear regression expression y
def linreg(x):
    y = weight * x + bias
    return y

# Define loss function (MSE)


def squared_error(y_pred, y_true):
    return tf.reduce_mean(tf.square(y_pred - y_true))

# train model
for epoch in range(training_epochs):

    # Compute loss within Gradient Tape context
    with tf.GradientTape() as tape:
        y_predicted = linreg(x_train)
        loss = squared_error(y_predicted, y_train)

    # Get gradients
    gradients = tape.gradient(loss, [weight, bias])

    # Adjust weights
    weight.assign_sub(gradients[0] * learning_rate)
    bias.assign_sub(gradients[1] * learning_rate)

    # Print output
    print(f"Epoch count {epoch}: Loss value: {loss.numpy()}")

print(weight.numpy())
print(bias.numpy())

# Plot the best fit line


plt.scatter(x_train, y_train)
plt.plot(x_train, linreg(x_train), 'r')
plt.show()

Result/Output:

Conclusion:
Practical -5

Objective: TensorFlow implementation of logistic regression.

Theory:

Script/ Flow-Chart/ Code:


%matplotlib inline

import numpy as np
import seaborn as sns
sns.set(style='whitegrid')
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from subprocess import check_output

iris = pd.read_csv('C:/Users/soniy/Iris.csv')
iris.shape
iris.head()
iris = iris[:100]
iris.shape
iris.Species = iris.Species.replace(to_replace=['Iris-setosa', 'Iris-versicolor'],
value=[0, 1])

plt.scatter(iris[:50].SepalLengthCm, iris[:50].SepalWidthCm, label='Iris-setosa')
plt.scatter(iris[50:].SepalLengthCm, iris[50:].SepalWidthCm, label='Iris-versicolor')
plt.xlabel('SepalLength')
plt.ylabel('SepalWidth')
plt.legend(loc='best')
X = iris.drop(labels=['Id', 'Species'], axis=1).values
y = iris.Species.values

# set seed for numpy and tensorflow


# set for reproducible results
seed = 5
np.random.seed(seed)
tf.random.set_seed(seed)
# set replace=False, Avoid double sampling
train_index = np.random.choice(len(X), round(len(X) * 0.8), replace=False)

# diff set
test_index = np.array(list(set(range(len(X))) - set(train_index)))
train_X = X[train_index]
train_y = y[train_index]
test_X = X[test_index]
test_y = y[test_index]

# Define the normalized function


def min_max_normalized(data):
    col_max = np.max(data, axis=0)
    col_min = np.min(data, axis=0)
    return np.divide(data - col_min, col_max - col_min)

# Normalized processing, must be placed after the data set segmentation,


# otherwise the test set will be affected by the training set
train_X = min_max_normalized(train_X)
test_X = min_max_normalized(test_X)

tf.compat.v1.global_variables_initializer()

# Begin building the model framework


# Declare the variables that need to be learned and initialization
# There are 4 features here, A's dimension is (4, 1)
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

A = tf.Variable(tf.random.normal(shape=[4, 1]))
b = tf.Variable(tf.random.normal(shape=[1, 1]))
init =tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Define placeholders
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

data = tf.placeholder(dtype=tf.float32, shape=[None, 4])


target = tf.placeholder(dtype=tf.float32, shape=[None, 1])

# Declare the model you need to learn


mod = tf.matmul(data, A) + b

# Declare loss function


# Use the sigmoid cross-entropy loss function,
# first doing a sigmoid on the model result and then using the cross-entropy loss function
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=mod, labels=target))
# Define the learning rate, batch_size etc.
learning_rate = 0.003
batch_size = 30
iter_num = 1500

# Define the optimizer


opt = tf.train.GradientDescentOptimizer(learning_rate)

# Define the goal


goal = opt.minimize(loss)

# Define the accuracy


# The default threshold is 0.5, rounded off directly
prediction = tf.round(tf.sigmoid(mod))
# Bool into float32 type
correct = tf.cast(tf.equal(prediction, target), dtype=tf.float32)
# Average accuracy
accuracy = tf.reduce_mean(correct)
# End of the definition of the model framework

# Start training model


# Define the variable that stores the result
loss_trace = []
train_acc = []
test_acc = []

# training model
for epoch in range(iter_num):
    # Generate random batch index
    batch_index = np.random.choice(len(train_X), size=batch_size)
    batch_train_X = train_X[batch_index]
    # convert into a matrix, so its shape corresponds to the placeholder
    batch_train_y = np.matrix(train_y[batch_index]).T
    sess.run(goal, feed_dict={data: batch_train_X, target: batch_train_y})
    temp_loss = sess.run(loss, feed_dict={data: batch_train_X, target: batch_train_y})
    temp_train_acc = sess.run(accuracy, feed_dict={data: train_X, target: np.matrix(train_y).T})
    temp_test_acc = sess.run(accuracy, feed_dict={data: test_X, target: np.matrix(test_y).T})

    # record the result
    loss_trace.append(temp_loss)
    train_acc.append(temp_train_acc)
    test_acc.append(temp_test_acc)

    # output
    if (epoch + 1) % 300 == 0:
        print('epoch: {:4d} loss: {:5f} train_acc: {:5f} test_acc: {:5f}'.format(
            epoch + 1, temp_loss, temp_train_acc, temp_test_acc))

# Visualization of the results


# loss function
plt.plot(loss_trace)
plt.title('Cross Entropy Loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

# accuracy
plt.plot(train_acc, 'b-', label='train accuracy')
plt.plot(test_acc, 'k-', label='test accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('Train and Test Accuracy')
plt.legend(loc='best')
plt.show()
Result/Output:

Conclusion:
Practical -6

Objective: Building a NN model with TensorFlow.

Theory:

Script/ Flow-Chart/ Code:


import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
import sys

training_data = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")


target_data = np.array([[0],[1],[1],[0]], "float32")

model = tf.keras.models.Sequential()
#Add the layers
model.add(tf.keras.layers.Dense(4,input_dim=2,activation='relu'))
model.add(tf.keras.layers.Dense(1,activation='sigmoid'))

model.compile(loss='mean_squared_error',optimizer='adam',metrics=['binary_accuracy'])
model.summary()
history = model.fit(training_data, target_data, epochs=500, verbose=2)
print (model.predict(training_data).round())

loss_curve = history.history["loss"]
acc_curve = history.history["binary_accuracy"]

plt.plot(loss_curve, label="Train")
plt.legend(loc='upper left')
plt.title("Loss")
plt.show()

plt.plot(acc_curve, label="Train")
plt.legend(loc='upper left')
plt.title("Accuracy")
plt.show()
Simple Neural Network in Python.

import numpy as np
# Sigmoid function
def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([
[0,0,1],
[0,1,1],
[0,1,0],
[1,1,1]
])
y = np.array([[0,0,1,1]]).T

np.random.seed(1)

#initialize weight
w0 = 2 * np.random.random((3,1)) -1

for iter in range(1000):
    l0 = X
    l1 = np.dot(l0, w0)
    l1 = nonlin(l1)
    l1_error = y - l1
    delta_l1 = l1_error * nonlin(l1, True)
    w0 = w0 + np.dot(l0.T, delta_l1)

print("End of Training. View the output")
print(l1)

Result/Output:
Conclusion:
Practical -7

Objective: Implement forward propagation.

Theory: Forward propagation (or forward pass) refers to the calculation and storage of
intermediate variables (including outputs) for a neural network in order from the input layer to the
output layer. We now work step-by-step through the mechanics of a neural network with one
hidden layer.

Script/ Flow-Chart/ Code:


import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
from sklearn.datasets import make_moons

np.random.seed(0)
data, labels = make_moons(n_samples=200,noise = 0.04,random_state=0)
print(data.shape,labels.shape)
color_map=matplotlib.colors.LinearSegmentedColormap.from_list("",["red","yellow"])
plt.scatter(data[:,0], data[:,1], c=labels)
plt.show()

from sklearn.model_selection import train_test_split
#Splitting the data into training and testing data
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
print(X_train.shape, X_val.shape)

class FeedForwardNetwork:
    def __init__(self):
        np.random.seed(0)
        self.w1 = np.random.randn()
        self.w2 = np.random.randn()
        self.w3 = np.random.randn()
        self.w4 = np.random.randn()
        self.w5 = np.random.randn()
        self.w6 = np.random.randn()
        self.b1 = 0
        self.b2 = 0
        self.b3 = 0

    def sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward_pass(self, x):
        self.x1, self.x2 = x
        self.a1 = self.w1*self.x1 + self.w2*self.x2 + self.b1
        self.h1 = self.sigmoid(self.a1)
        self.a2 = self.w3*self.x1 + self.w4*self.x2 + self.b2
        self.h2 = self.sigmoid(self.a2)
        self.a3 = self.w5*self.h1 + self.w6*self.h2 + self.b3
        self.h3 = self.sigmoid(self.a3)
        forward_matrix = np.array([[0, 0, 0, 0, self.h3, 0, 0, 0],
                                   [0, 0, (self.w5*self.h1), (self.w6*self.h2), self.b3, self.a3, 0, 0],
                                   [0, 0, 0, self.h1, 0, 0, 0, self.h2],
                                   [(self.w1*self.x1), (self.w2*self.x2), self.b1, self.a1,
                                    (self.w3*self.x1), (self.w4*self.x2), self.b2, self.a2]])
        forward_matrices.append(forward_matrix)
        return self.h3

forward_matrices = []
ffn = FeedForwardNetwork()
for x in X_train:
    ffn.forward_pass(x)

import seaborn as sns
import imageio
from IPython.display import HTML

def plot_heat_map(observation):
    fig = plt.figure(figsize=(10, 1))
    sns.heatmap(forward_matrices[observation], annot=True, vmin=-3, vmax=3)
    plt.title('Observation ' + str(observation))
    fig.canvas.draw()
    image = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8')
    image = image.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    return image

imageio.mimsave('forwardpropagation_viz.gif',
                [plot_heat_map(i) for i in range(0, len(forward_matrices), len(forward_matrices)//15)],
                fps=1)

class FeedForwardNetwork_Vectorised:
    def __init__(self):
        np.random.seed(0)
        self.W1 = np.random.randn(2, 2)
        self.W2 = np.random.randn(2, 1)
        self.B1 = np.zeros((1, 2))
        self.B2 = np.zeros((1, 1))

    def sigmoid(self, X):
        return 1.0 / (1.0 + np.exp(-X))

    def forward_pass(self, X):
        self.A1 = np.matmul(X, self.W1) + self.B1
        self.H1 = self.sigmoid(self.A1)
        self.A2 = np.matmul(self.H1, self.W2) + self.B2
        self.H2 = self.sigmoid(self.A2)
        return self.H2

ffn_v = FeedForwardNetwork_Vectorised()
ffn_v.forward_pass(X_train)

Result/Output:
Conclusion:
Practical -8

Objective: Implement backward propagation.

Theory: Backpropagation (backward propagation) is an important mathematical tool for improving
the accuracy of predictions in data mining and machine learning. Essentially, backpropagation is
an algorithm for calculating derivatives quickly.

Artificial neural networks use backpropagation as a learning algorithm: it computes the gradient
of the loss with respect to the weights, which gradient descent then uses to update them.

Script/ Flow-Chart/ Code:


# Import Libraries
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Load dataset
data = load_iris()

# Get features and target


X=data.data
y=data.target

# Get dummy variable


y = pd.get_dummies(y).values

y[:3]

#Split data into train and test data


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20, random_state=4)

# Initialize variables
learning_rate = 0.1
iterations = 5000
N = y_train.size
# number of input features
input_size = 4
# number of hidden layers neurons
hidden_size = 2
# number of neurons at the output layer
output_size = 3
results = pd.DataFrame(columns=["mse", "accuracy"])

# Initialize weights
np.random.seed(10)
# initializing weight for the hidden layer
W1 = np.random.normal(scale=0.5, size=(input_size, hidden_size))
# initializing weight for the output layer
W2 = np.random.normal(scale=0.5, size=(hidden_size , output_size))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def mean_squared_error(y_pred, y_true):
    return ((y_pred - y_true)**2).sum() / (2 * y_pred.size)

def accuracy(y_pred, y_true):
    acc = y_pred.argmax(axis=1) == y_true.argmax(axis=1)
    return acc.mean()

for itr in range(iterations):

    # feedforward propagation
    # on hidden layer
    Z1 = np.dot(X_train, W1)
    A1 = sigmoid(Z1)

    # on output layer
    Z2 = np.dot(A1, W2)
    A2 = sigmoid(Z2)

    # Calculating error
    mse = mean_squared_error(A2, y_train)
    acc = accuracy(A2, y_train)
    # (DataFrame.append was removed in pandas 2.x; pd.concat is the replacement there)
    results = results.append({"mse": mse, "accuracy": acc}, ignore_index=True)

    # backpropagation
    E1 = A2 - y_train
    dW1 = E1 * A2 * (1 - A2)

    E2 = np.dot(dW1, W2.T)
    dW2 = E2 * A1 * (1 - A1)

    # weight updates
    W2_update = np.dot(A1.T, dW1) / N
    W1_update = np.dot(X_train.T, dW2) / N

    W2 = W2 - learning_rate * W2_update
    W1 = W1 - learning_rate * W1_update

results.mse.plot(title="Mean Squared Error")

results.accuracy.plot(title="Accuracy")

# feedforward
Z1 = np.dot(X_test, W1)
A1 = sigmoid(Z1)
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
acc = accuracy(A2, y_test)
print("Accuracy: {}".format(acc))

Implementation of Gradient Descent and Backpropagation

import numpy as np

class NeuralNetwork:
    def __init__(self):
        np.random.seed(10)  # for generating the same results
        self.wij = np.random.rand(3, 4)  # input to hidden layer weights
        self.wjk = np.random.rand(4, 1)  # hidden layer to output weights

    def sigmoid(self, x, w):
        z = np.dot(x, w)
        return 1 / (1 + np.exp(-z))

    def sigmoid_derivative(self, x, w):
        return self.sigmoid(x, w) * (1 - self.sigmoid(x, w))

    def gradient_descent(self, x, y, iterations):
        for i in range(iterations):
            Xi = x
            Xj = self.sigmoid(Xi, self.wij)
            yhat = self.sigmoid(Xj, self.wjk)
            # gradients for hidden to output weights
            g_wjk = np.dot(Xj.T, (y - yhat) * self.sigmoid_derivative(Xj, self.wjk))
            # gradients for input to hidden weights
            g_wij = np.dot(Xi.T, np.dot((y - yhat) * self.sigmoid_derivative(Xj, self.wjk), self.wjk.T)
                           * self.sigmoid_derivative(Xi, self.wij))
            # update weights
            self.wij += g_wij
            self.wjk += g_wjk
        print('The final predictions from the neural network are: ')
        print(yhat)

if __name__ == '__main__':
    neural_network = NeuralNetwork()
    print('Random starting input to hidden weights: ')
    print(neural_network.wij)
    print('Random starting hidden to output weights: ')
    print(neural_network.wjk)
    X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    y = np.array([[0, 1, 1, 0]]).T
    neural_network.gradient_descent(X, y, 10000)

Result/Output:
Conclusion:
Project-I
Project-II
