
P.S.R.

ENGINEERING COLLEGE
(An Autonomous Institution, Affiliated to Anna University,
Chennai)
Sevalpatti (P.O), Sivakasi - 626140.
Virudhunagar Dt.

LABORATORY MANUAL

COURSE CODE : 191CS68

COURSE NAME : DEEP LEARNING LABORATORY

SEMESTER : VI

ACADEMIC YEAR : 2021-2022 (EVEN SEMESTER)

DEPARTMENT

OF

COMPUTER SCIENCE AND ENGINEERING

Prepared by:                                Approved by:
Dr. P. Kavitha, ASP/CSE                     HOD/CSE
Mrs. J. Amutha, AP/CSE
Ex.No:1 STYLE TRANSFER FOR IMAGES
AIM:
To get a first taste of deep learning by applying style transfer to your own images, and to
gain experience with development tools such as Anaconda and Jupyter notebooks.
PROCEDURE:
Neural-Style-Transfer:
The neural-style-transfer library composes images in the style of other images using just a
few lines of code.

Neural Style Transfer is basically an optimization technique that takes two images as
input: a reference style image and an input (content) image that you want to style. The
model blends these two images together and produces a transformed image that looks like
the given input (content) image but painted in the given reference style.
This looks interesting, but it normally requires a lot of supporting work: image
pre-processing code, model training code, optimization functions, and many other small
utilities. To get rid of all these things we have an amazing library:

NEURAL-STYLE-TRANSFER

Neural-Style-Transfer does all of this work for us in just a few lines of code; the
program below shows how to use this library.

Program
# Install the library first (in a terminal or notebook cell):
# pip install neural-style-transfer

from neuralstyletransfer.style_transfer import NeuralStyleTransfer
from PIL import Image

content_url = 'https://i.ibb.co/6mVpxGW/content.png'
style_url = 'https://i.ibb.co/30nz9Lc/style.jpg'

nst = NeuralStyleTransfer()
nst.LoadContentImage(content_url, pathType='url')
nst.LoadStyleImage(style_url, pathType='url')

# A higher contentWeight preserves the input image; a higher styleWeight
# strengthens the reference style
output = nst.apply(contentWeight=1000, styleWeight=0.01, epochs=20)
output.save('output.jpg')
OUTPUT:
Ex.No:2 BUILD MULTI-LAYER NEURAL NETWORKS

Aim:
To learn neural network basics and build a first network with Python and NumPy; to use
the modern deep learning framework PyTorch to build multi-layer neural networks and
analyze real data.

Procedure:
Prepare the Data
The first step is to load and prepare your data. Neural network models require numerical
input data and numerical output data.

Define the Model

The next step is to define a model. The idiom for defining a model in PyTorch involves defining
a class that extends the Module class. The constructor of your class defines the layers of the
model and the forward() function is the override that defines how to forward propagate input
through the defined layers of the model.
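
For reference, a minimal sketch of this idiom (the class name, layer sizes, and
activations are illustrative assumptions; the program below uses scikit-learn's
MLPClassifier instead):

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        # The constructor defines the layers of the model
        self.hidden1 = nn.Linear(n_inputs, 10)
        self.hidden2 = nn.Linear(10, 8)
        self.output = nn.Linear(8, 1)
        self.act = nn.ReLU()

    def forward(self, X):
        # forward() defines how input propagates through the layers
        X = self.act(self.hidden1(X))
        X = self.act(self.hidden2(X))
        return torch.sigmoid(self.output(X))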

Train the Model

The training process requires that you define a loss function and an optimization algorithm.
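
A matching training-loop sketch, assuming the MLP class above, a binary target, and a
train_loader DataLoader that yields float tensors (all of these are assumptions):

import torch.optim as optim

model = MLP(n_inputs=9)                        # 9 features, as in the program below
criterion = nn.BCELoss()                       # loss function for a binary target
optimizer = optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    for inputs, targets in train_loader:       # train_loader is an assumed DataLoader
        optimizer.zero_grad()                  # clear accumulated gradients
        loss = criterion(model(inputs), targets)
        loss.backward()                        # backpropagate
        optimizer.step()                       # update the weights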

Evaluate the Model

Once the model is fit, it can be evaluated on the test dataset. This can be achieved by using
the DataLoader for the test dataset and collecting the predictions for the test set, then comparing
the predictions to the expected values of the test set and calculating a performance metric.
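
A sketch of that evaluation loop, assuming the model trained above and a test_loader
DataLoader (an assumption):

import numpy as np

preds, actuals = [], []
with torch.no_grad():
    for inputs, targets in test_loader:
        yhat = model(inputs).round()           # threshold the probabilities at 0.5
        preds.append(yhat.numpy())
        actuals.append(targets.numpy())
accuracy = (np.vstack(preds) == np.vstack(actuals)).mean()
print('Accuracy:', accuracy)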

Make Predictions

A fit model can be used to make a prediction on new data.

For example, you might have a single image or a single row of data and want to make a
prediction.
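
A single-row prediction sketch with the model above (the feature values are
placeholders, not taken from the dataset):

row = torch.tensor([[0.38, 0.53, 2.0, 157.0, 3.0, 0.0, 0.0, 7.0, 1.0]])
with torch.no_grad():
    yhat = model(row)
print('Predicted probability:', yhat.item())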

Program:
import numpy as np
import pandas as pd
# Load data
data=pd.read_csv('HR_comma_sep.csv')

data.head()

Output:

Preprocessing: Label Encoding

# Import LabelEncoder
from sklearn import preprocessing

# Creating labelEncoder
le = preprocessing.LabelEncoder()

# Converting string labels into numbers.


# Note: the 'Departments ' column name contains a trailing space in this dataset
data['salary']=le.fit_transform(data['salary'])
data['Departments ']=le.fit_transform(data['Departments '])
Split the dataset
# Splitting data into features (X) and target (y)
X=data[['satisfaction_level', 'last_evaluation', 'number_project',
        'average_montly_hours', 'time_spend_company', 'Work_accident',
        'promotion_last_5years', 'Departments ', 'salary']]
y=data['left']

# Import train_test_split function


from sklearn.model_selection import train_test_split

# Split dataset into training set and test set


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)  # 70% training and 30% test
Build Classification Model
# Import MLPClassifier
from sklearn.neural_network import MLPClassifier

# Create model object


clf = MLPClassifier(hidden_layer_sizes=(6,5),
                    random_state=5,
                    verbose=True,
                    learning_rate_init=0.01)

# Fit data onto the model


clf.fit(X_train,y_train)
Make Prediction and Evaluate the Model
# Make prediction on test dataset
ypred=clf.predict(X_test)

# Import accuracy score


from sklearn.metrics import accuracy_score

# Calculate accuracy
accuracy_score(y_test,ypred)

Output:
0.9386666666666666
Ex.No: 3 CONVOLUTIONAL NEURAL NETWORKS

Aim:
To build convolutional networks and use them to classify images (faces,
melanomas, etc.) based on patterns and objects that appear in them. Use these networks
to learn data compression and image denoising.

Procedure:
Choose a Dataset
Choose a dataset of your interest or you can also create your own image dataset
for solving your own image classification problem. An easy place to choose a dataset is
on kaggle.com.
Prepare Dataset for Training
Preparing our dataset for training involves assigning paths, creating
categories (labels), and resizing our images.
Create Training Data
Training data is an array that contains the image pixel values and the index of
each image's category in the CATEGORIES list, as shown in the sketch below.
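
A minimal sketch of this step, assuming a DATADIR folder with one sub-folder per
category (the folder name, category labels, and image size are placeholders):

import os
import random
import cv2

DATADIR = 'dataset'                  # hypothetical dataset path
CATEGORIES = ['class_a', 'class_b']  # hypothetical category labels
IMG_SIZE = 64

training_data = []
for category in CATEGORIES:
    path = os.path.join(DATADIR, category)
    class_index = CATEGORIES.index(category)   # label = index in CATEGORIES
    for img_name in os.listdir(path):
        img = cv2.imread(os.path.join(path, img_name))
        if img is None:
            continue                            # skip unreadable files
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        training_data.append([img, class_index])
random.shuffle(training_data)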
Program:
# Deep Learning CNN model to recognize face
'''This script uses a database of images and creates CNN model on top of it to test
if the given image is recognized correctly or not'''

'''####### IMAGE PRE-PROCESSING for TRAINING and TESTING data #######'''

# Specifying the folder where images are present


TrainingImagePath='/Users/farukh/Python Case Studies/Face Images/Final Training Images'

from keras.preprocessing.image import ImageDataGenerator


# Understand more about ImageDataGenerator at below link
# https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

# Defining pre-processing transformations on raw images of training data


# These hyperparameters help to generate slightly distorted versions
# of the original images, which leads to a better model, since it learns
# from a good and bad mix of images
train_datagen = ImageDataGenerator(
        shear_range=0.1,
        zoom_range=0.1,
        horizontal_flip=True)

# Defining pre-processing transformations on raw images of testing data


# No transformations are done on the testing images
test_datagen = ImageDataGenerator()

# Generating the Training Data


training_set = train_datagen.flow_from_directory(
        TrainingImagePath,
        target_size=(64, 64),
        batch_size=32,
        class_mode='categorical')

# Generating the Testing Data


test_set = test_datagen.flow_from_directory(
        TrainingImagePath,  # the training folder is reused here; point this to a separate testing folder if available
        target_size=(64, 64),
        batch_size=32,
        class_mode='categorical')

# Printing class labels for each face


test_set.class_indices

Output:

Creating a mapping for index and face names


'''############ Creating lookup table for all faces ############'''
# class_indices have the numeric tag for each face
TrainClasses=training_set.class_indices

# Storing the face and the numeric tag for future reference
ResultMap={}
for faceValue,faceName in zip(TrainClasses.values(),TrainClasses.keys()):
    ResultMap[faceValue]=faceName

# Saving the face map for future reference


import pickle
with open("ResultsMap.pkl", 'wb') as fileWriteStream:
pickle.dump(ResultMap, fileWriteStream)

# The model will give answer as a numeric tag


# This mapping will help to get the corresponding face name for it
print("Mapping of Face and its ID",ResultMap)
# The number of neurons for the output layer is equal to the number of faces
OutputNeurons=len(ResultMap)
print('\n The Number of output neurons: ', OutputNeurons)

Output:
Mapping of Face and its ID {0: 'face1', 1: 'face10', 2: 'face11', 3: 'face12', 4: 'face13', 5:
'face14', 6: 'face15', 7: 'face16', 8: 'face2', 9: 'face3', 10: 'face4', 11: 'face5', 12: 'face6', 13:
'face7', 14: 'face8', 15: 'face9'}

The Number of output neurons: 16

'''######################## Create CNN deep learning model ########################'''


from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPool2D
from keras.layers import Flatten
from keras.layers import Dense

'''Initializing the Convolutional Neural Network'''


classifier= Sequential()

''' STEP--1 Convolution
# Adding the first layer of CNN
# we are using the format (64,64,3) because we are using TensorFlow backend
# It means 3 matrices of size (64X64) pixels representing the Red, Green and Blue
# components of a pixel
'''
classifier.add(Convolution2D(32, kernel_size=(5, 5), strides=(1, 1),
                             input_shape=(64,64,3), activation='relu'))

'''# STEP--2 MAX Pooling'''


classifier.add(MaxPool2D(pool_size=(2,2)))

'''############## ADDITIONAL LAYER of CONVOLUTION for better accuracy #################'''
classifier.add(Convolution2D(64, kernel_size=(5, 5), strides=(1, 1), activation='relu'))

classifier.add(MaxPool2D(pool_size=(2,2)))
'''# STEP--3 Flattening'''
classifier.add(Flatten())

'''# STEP--4 Fully Connected Neural Network'''


classifier.add(Dense(64, activation='relu'))

classifier.add(Dense(OutputNeurons, activation='softmax'))

'''# Compiling the CNN'''


#classifier.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
classifier.compile(loss='categorical_crossentropy', optimizer = 'adam', metrics=["accuracy"])

###########################################################
import time
# Measuring the time taken by the model to train
StartTime=time.time()

# Starting the model training


# Note: fit_generator is deprecated in recent Keras versions; classifier.fit accepts generators directly
classifier.fit_generator(
        training_set,
        steps_per_epoch=30,
        epochs=10,
        validation_data=test_set,
        validation_steps=10)

EndTime=time.time()

print("###### Total Time Taken: ", round((EndTime-StartTime)/60), 'Minutes ######')

Output:
'''########### Making single predictions ###########'''
import numpy as np
from keras.preprocessing import image

ImagePath='/content/drive/MyDrive/Deep Learning Lab/Face-Images/Face Images/Final Testing Images/face11/1face11.jpg'
test_image=image.load_img(ImagePath,target_size=(64, 64))
test_image=image.img_to_array(test_image)
test_image=np.expand_dims(test_image,axis=0)

result=classifier.predict(test_image,verbose=0)
#print(training_set.class_indices)

print('####'*10)
print('Prediction is: ',ResultMap[np.argmax(result)])

########################################
Output:
Prediction is: face9
Ex.No:4 RECURRENT NEURAL NETWORKS
Aim:
To build your own recurrent networks and long short-term memory networks with
PyTorch; to perform sentiment analysis and use recurrent networks to generate new
text from TV scripts.
Procedure:

o The network takes a single time-step of the input.
o We calculate the current state from the current input and the previous state.
o The current state ht then serves as ht-1 for the next time step.
o There can be n time steps, and at the end the information from all of them is
joined (see the sketch after this list).
o After completion of all the steps, the final state is used to calculate the output.
o At last, we compute the error as the difference between the actual output and
the predicted output.
o The error is backpropagated through the network to adjust the weights and produce a
better outcome.
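
The recurrence in these steps can be written in a few lines. A minimal NumPy sketch
(the sizes and random weights are illustrative assumptions, not a trained network):

import numpy as np

hidden_size, input_size = 4, 3
W_xh = np.random.randn(hidden_size, input_size) * 0.1   # input-to-hidden weights
W_hh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                    # initial state
for x_t in np.random.randn(5, input_size):   # a sequence of 5 time steps
    # current state h_t from current input x_t and previous state h_(t-1)
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
print(h)                                     # final state, used to compute the output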
Program:
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)
lstm = nn.LSTM(3, 3) # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)] # make a sequence of length 5

# initialize the hidden state.


hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

inputs = torch.cat(inputs).view(len(inputs), 1, -1)


hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3)) # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)

Output:
tensor([[[-0.0187, 0.1713, -0.2944]],

[[-0.3521, 0.1026, -0.2971]],

[[-0.3191, 0.0781, -0.1957]],

[[-0.1634, 0.0941, -0.1637]],

[[-0.3368, 0.0959, -0.0538]]], grad_fn=<StackBackward0>)


(tensor([[[-0.3368, 0.0959, -0.0538]]], grad_fn=<StackBackward0>), tensor([[[-0.9825, 0.4715,
-0.0633]]], grad_fn=<StackBackward0>))

Prepare Data:
def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

training_data = [
    # Tags are: DET - determiner; NN - noun; V - verb
    # For example, the word "The" is a determiner
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
# For each words-list (sentence) and tags-list in each tuple of training_data
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:  # word has not been assigned an index yet
            word_to_ix[word] = len(word_to_ix)  # Assign each word a unique index
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}  # Assign each tag a unique index

# These will usually be more like 32 or 64 dimensional.


# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6

Output:
{'The': 0, 'dog': 1, 'ate': 2, 'the': 3, 'apple': 4, 'Everybody': 5, 'read': 6, 'that': 7, 'book': 8}

Create the Model:


class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim

        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

        # The LSTM takes word embeddings as inputs, and outputs hidden states
        # with dimensionality hidden_dim.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)

        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
Train the Model
model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training


# Note that element i,j of the output is the score for tag j for word i.
# Here we don't need to train, so the code is wrapped in torch.no_grad()
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)
    print(tag_scores)

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)

        # Step 3. Run our forward pass.
        tag_scores = model(sentence_in)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

# See what the scores are after training


with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)

# The sentence is "the dog ate the apple". i,j corresponds to score for tag j
# for word i. The predicted tag is the maximum scoring tag.
# Here, we can see the predicted sequence below is 0 1 2 0 1
# since 0 is index of the maximum value of row 1,
# 1 is the index of maximum value of row 2, etc.
# Which is DET NOUN VERB DET NOUN, the correct sequence!
print(tag_scores)
Output:
tensor([[-1.1389, -1.2024, -0.9693],
[-1.1065, -1.2200, -0.9834],
[-1.1286, -1.2093, -0.9726],
[-1.1190, -1.1960, -0.9916],
[-1.0137, -1.2642, -1.0366]])
tensor([[-0.0462, -4.0106, -3.6096],
[-4.8205, -0.0286, -3.9045],
[-3.7876, -4.1355, -0.0394],
[-0.0185, -4.7874, -4.6013],
[-5.7881, -0.0186, -4.1778]])
Ex.No:5 GENERATIVE ADVERSARIAL NETWORKS
Aim:

To learn and implement the DCGAN model to simulate realistic images, following Ian
Goodfellow, the inventor of GANs (generative adversarial networks).

Procedure:

Define a Problem

The problem statement is key to the success of the project, so the first step is to
define your problem. GANs work with different kinds of problems, so you need to define
what you are creating: audio, a poem, text, or an image.

Train Discriminator on Real Dataset

Now the Discriminator is trained on a real dataset for n epochs. In this phase only
the Discriminator is updated; the Generator is not backpropagated through. The data
you provide is without noise and contains only real images, and for fake examples the
Discriminator uses instances created by the Generator as negative output. During
discriminator training:

 It classifies both real and fake data.
 The discriminator loss penalizes the Discriminator when it misclassifies real as
fake or vice-versa, and helps improve its performance.
 The weights of the Discriminator are updated through the discriminator loss.

Train Generator

Provide some fake input (noise) to the Generator: it uses random noise to generate
fake outputs. While the Generator is trained the Discriminator is idle, and while the
Discriminator is trained the Generator is idle. During generator training the
Generator takes random noise as input and tries to transform it into meaningful data;
getting meaningful output from the Generator takes time and runs over many epochs.
The steps to train the Generator are listed below.

 Get random noise and produce a generator output on the noise sample.
 Predict from the Discriminator whether the generator output is original or fake.
 Calculate the discriminator loss.
 Perform backpropagation through both the Discriminator and the Generator to
calculate gradients.
 Use the gradients to update the Generator's weights.

Train Discriminator on Fake Data

The samples generated by the Generator are passed to the Discriminator, which
predicts whether the data passed to it is fake or real and provides feedback to the
Generator.

Train Generator with the Output of the Discriminator

The Generator is again trained on the feedback given by the Discriminator and tries
to improve its performance.

This is an iterative process that continues until the Generator succeeds in fooling
the Discriminator.
Algorithm:

Step 1 — Select a number of real images from the training set.

Step 2 — Generate a number of fake images. This is done by sampling random noise
vectors and creating images from them using the generator.

Step 3 — Train the discriminator for one or more epochs using both fake and real images.
This will update only the discriminator’s weights by labeling all the real images as 1 and
the fake images as 0.

Step 4 — Generate another number of fake images.

Step 5 — Train the full GAN model for one or more epochs using only fake images. This
will update only the generator’s weights by labeling all fake images as 1.
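
The five steps above can be sketched as an alternating Keras training loop. This is
only a sketch: generator, discriminator, a combined gan model (with the discriminator
frozen inside it), and sample_real_images are all assumed to exist, and the program
below uses tf.GradientTape instead of this idiom.

import numpy as np

batch_size = 128                                       # illustrative batch size
for step in range(1000):                               # illustrative step count
    # Steps 1-2: real images plus fakes generated from random noise
    real = sample_real_images(batch_size)              # hypothetical helper
    noise = np.random.normal(size=(batch_size, 100))
    fake = generator.predict(noise)

    # Step 3: the discriminator sees real images as 1 and fake images as 0
    discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))

    # Steps 4-5: train the full GAN on fake images labeled 1, which updates
    # only the generator because the discriminator is frozen inside `gan`
    noise = np.random.normal(size=(batch_size, 100))
    gan.train_on_batch(noise, np.ones((batch_size, 1)))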
Program:
import tensorflow as tf
tf.__version__

Output:
2.8.0

# To generate GIFs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time

from IPython import display


(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step

train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

generator = make_generator_model()

noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)

plt.imshow(generated_image[0, :, :, 0], cmap='gray')

Output:
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                            input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model

discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)

Output:
tf.Tensor([[-0.00016806]], shape=(1, 1), dtype=float32)
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)
EPOCHS = 5
noise_dim = 100
num_examples_to_generate = 16

# You will reuse this seed over time (so it's easier to
# visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()

        for image_batch in dataset:
            train_step(image_batch)

        # Produce images for the GIF as you go
        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 seed)

        # Save the model every 15 epochs
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix = checkpoint_prefix)

        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))

    # Generate after the final epoch
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             seed)
def generate_and_save_images(model, epoch, test_input):
    # Notice `training` is set to False.
    # This is so all layers run in inference mode (batchnorm).
    predictions = model(test_input, training=False)

    fig = plt.figure(figsize=(4, 4))

    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')

    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()

train(train_dataset, EPOCHS)
Output:

Ex.No:6 DEPLOYING A SENTIMENT ANALYSIS MODEL

Aim:
Use deep neural networks to design agents that can learn to take actions in a
simulated environment. Apply reinforcement learning to complex control tasks like video
games and robotics.

Procedure:
1. Download or otherwise retrieve the data.
First, create a directory, download, and save the IMDB Dataset used for binary
sentiment classification.
2. Process / Prepare the data.
Here we are going to transform the data from its word representation to a
bag-of-words feature representation, in which each word is represented as an
integer. This allows us to transform the words appearing in the reviews into
integers and to convert each review to its integer sequence representation,
making sure to pad or truncate each review to a fixed length, as sketched below.
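
A minimal sketch of this conversion, assuming Keras preprocessing utilities (the
vocabulary size, fixed length, and sample reviews are placeholders):

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

reviews = ['the movie was great', 'the movie was terrible']  # placeholder data
tokenizer = Tokenizer(num_words=5000)       # keep the 5000 most common words
tokenizer.fit_on_texts(reviews)
sequences = tokenizer.texts_to_sequences(reviews)  # words -> integers
padded = pad_sequences(sequences, maxlen=500)      # pad/truncate to length 500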
3. Upload the processed data to S3.
When a training job is constructed using SageMaker, a container is executed which
performs the training operation. This container is given access to data stored in
S3, so we need to upload the data we want to use for training to S3 using the
session object associated with this notebook.
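
A sketch of that upload, assuming the SageMaker Python SDK is installed (the local
directory and key prefix are placeholders):

import sagemaker

session = sagemaker.Session()
# Uploads the local 'data' directory to the session's default S3 bucket
input_data = session.upload_data(path='data', key_prefix='sentiment/train')
print('Training data uploaded to:', input_data)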
4. Test the trained model (typically using a batch transform job).
We implement our own neural network in PyTorch along with a training script. For
this project, the necessary model object is provided in the model.py file inside of
the train folder. You can see the provided implementation by running the
corresponding notebook cell.
Program:
Importing the necessary packages
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import string
import nltk
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

%matplotlib inline
train = pd.read_csv('https://raw.githubusercontent.com/dD2405/Twitter_Sentiment_Analysis/master/train.csv')

train_original=train.copy()
Training dataset:
Reading the test.csv file with Pandas:
test = pd.read_csv('https://raw.githubusercontent.com/dD2405/Twitter_Sentiment_Analysis/master/test.csv')

test_original=test.copy()

Output:
Combine the train.csv and test.csv files.
combine = train.append(test,ignore_index=True,sort=True)
combine.tail()

Output:
