
Sukkur Institute of Business Administration University

Department of Electrical Engineering


Artificial Intelligence with Python

Lab Report 08: Convolutional Networks for images with Keras


Instructor: Dr. Gulsher Ali

Submission Profile
Name: Submission date (dd/mm/yy):
Enrollment ID: Receiving authority name and signature:
Comments: __________________________________________________________________________
Instructor Signature

Lab Report Rubrics
(Add the points in each column, then add across the bottom row to find the total score)

S.No  Criterion  0.5                              0.25            0.125
1     Accuracy   Desired output                   Minor mistakes  Critical mistakes
2     Timing     Submitted within the given time  1 day late      More than 1 day late

Total Marks: ______
Conv2D Layer:
The Conv2D layer is a 2-dimensional convolutional layer mainly used for image data. 2D convolutional layers take a three-dimensional input, typically an image with three color channels. They pass a filter, also called a convolution kernel, over the image, inspecting a small window of pixels at a time (for example 3×3 or 5×5 pixels) and moving the window until the entire image has been scanned. The convolution operation calculates the dot product of the pixel values in the current filter window with the weights defined in the filter. The difference between a dense layer and a convolutional layer is that a dense layer learns from individual pixel values, whereas a convolutional layer learns local patterns in the image, which makes it more suitable for image data.

Input shape:
4+D tensor with shape: batch_shape + (channels, rows, cols) if data_format='channels_first', or 4+D tensor with shape: batch_shape + (rows, cols, channels) if data_format='channels_last'.

Output shape:
4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if data_format='channels_first', or 4+D tensor with shape: batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'.
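As a quick check of these shapes, here is a minimal sketch (assuming TensorFlow's Keras and the default channels_last format):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A batch of 8 grayscale 28x28 images in channels_last format:
# batch_shape + (rows, cols, channels) = (8, 28, 28, 1)
x = np.random.rand(8, 28, 28, 1).astype("float32")

# 32 filters with a 3x3 kernel; with the default 'valid' padding each
# spatial dimension shrinks by kernel_size - 1, so 28 -> 26.
conv = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")
y = conv(x)

print(tuple(y.shape))  # (8, 26, 26, 32): batch_shape + (new_rows, new_cols, filters)
```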

Coding with CNN


First we implement a dense-layer model so that we can compare it with the CNN model.

Loading, splitting the data for validation and converting labels to categorical:
Building the dense layer model and evaluating the testing loss and testing accuracy:
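A sketch of what these two steps might look like (assuming TensorFlow's Keras; the hidden-layer width of 32 is an assumption, chosen so the parameter count matches the 25,450 quoted later in this report):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST, flatten the images, and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Convert the integer labels to one-hot (categorical) vectors.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Hold out the first 10,000 training samples for validation.
x_val, y_val = x_train[:10000], y_train[:10000]
x_train, y_train = x_train[10000:], y_train[10000:]

# Dense-only baseline model.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2,
          validation_data=(x_val, y_val), verbose=0)

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
```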

Loading and reshaping the same MNIST data into 4 dimensions:

The reason for reshaping the data into 4 dimensions is that convolutional layers expect the input to be a 4-D tensor: 3 dimensions for the data itself, with the 4th being the batch size.
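A sketch of the reshape (assuming TensorFlow's Keras MNIST loader, whose images arrive as a (samples, 28, 28) array):

```python
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Conv2D expects 4-D input, so add a trailing channels axis:
# (60000, 28, 28) -> (60000, 28, 28, 1), and scale pixels to [0, 1].
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

print(x_train.shape)  # (60000, 28, 28, 1)
```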

Splitting for validation data:


Implementing a neural network containing a CNN layer:
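A sketch of the network (TensorFlow's Keras assumed; the layer widths are assumptions, chosen so the parameter count matches the 692,906 discussed with the model summary):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    # One convolutional layer: 32 filters of 3x3 -> 32 feature maps of 26x26.
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```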

It can be seen that adding a single convolutional layer to the network boosted the accuracy from 88% to almost 97%. That is why convolutional layers work best for image data.

Checking the model summary:

The convolutional layer scans parts of the image and treats every part as a feature; in the case above it has produced 32 feature maps, each 26 by 26, which makes the number of learnable parameters very large (692,906 parameters). If we look at the summary of the model containing only dense layers, it has only 25,450 parameters, indicating that the dense-only model is very light compared to the model with a convolutional layer.

Summary of model containing only Dense Layers:

MaxPooling layer
A pooling layer is another building block of a CNN. Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently, and it also helps reduce overfitting.

Max pooling is an operation that calculates the maximum, or largest, value in each patch of each feature map. The results are downsampled, or pooled, feature maps that highlight the most prominent feature in each patch. In other words, it downsamples the input representation by taking the maximum value over the window defined by pool_size for each dimension along the features axis.
Implementing a neural network containing a MaxPooling layer:
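A sketch of the same network with a MaxPooling2D layer added after the convolution (TensorFlow's Keras assumed; the 3×3 pool size matches the parameter reduction discussed with the summary):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    # 3x3 max pooling shrinks each 26x26 feature map to 8x8.
    layers.MaxPooling2D(pool_size=(3, 3)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```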

Checking out the summary:

By just adding a maxpooling layer with a pool size of 3 by 3, we reduced the parameters from 692,906 to only 66,218, which is less than 10% of the parameters of the model without max pooling.
Dog and Cat classification using Deep Learning
Generating arrays from images for training data:

Generating arrays from images for testing data:
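One way this loading step can be sketched (using Pillow and NumPy; the directory names are hypothetical — point them at wherever the cat and dog images actually live):

```python
import os

import numpy as np
from PIL import Image

def images_to_array(folder, size=(64, 64)):
    """Load every image in `folder`, resize it, and stack into one array."""
    arrays = []
    for name in sorted(os.listdir(folder)):
        img = Image.open(os.path.join(folder, name)).convert("RGB").resize(size)
        arrays.append(np.array(img))
    return np.stack(arrays)

# Hypothetical directory layout -- adjust to the actual dataset location:
# train_cats = images_to_array("data/train/cats")
# train_dogs = images_to_array("data/train/dogs")
# test_cats  = images_to_array("data/test/cats")
# test_dogs  = images_to_array("data/test/dogs")
```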


Displaying the image:

Concatenating the dog and cat arrays into one variable each for train and test:

Creating labels (0 for cats, 1 for dogs) equal in length to the training data:
Creating labels (0 for cats, 1 for dogs) equal in length to the testing data:

Concatenating the labels for dogs and cats for training and testing:

Splitting the dataset for validation data:

Normalizing the pixel values from the range 0–255 to 0–1:


Building and training the model:
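A sketch of the classifier (TensorFlow's Keras assumed; the 64×64 image size, layer widths, and the random stand-in arrays are assumptions — in the lab, the real cat/dog arrays prepared above are used):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random stand-in data shaped like the assumed dataset (64x64 RGB images);
# replace with the concatenated cat/dog arrays built earlier.
x_train = np.random.randint(0, 256, (200, 64, 64, 3)).astype("float32")
y_train = np.random.randint(0, 2, (200,)).astype("float32")

# Normalize pixel values from [0, 255] to [0, 1].
x_train /= 255.0

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single output: cat (0) vs dog (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
```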

Evaluating the model:


Predicting an image from test data and displaying it for the cat class:

Predicting an image from test data and displaying it for the dog class:
Regularization
In machine learning, regularization applies a penalty to the model coefficients to overcome overfitting; in deep learning, it applies a penalty to (or drops) the learned weights to overcome overfitting and increase the model's generalization to new data. Some major techniques are implemented below.

Adding dropout layer:


The Dropout layer randomly drops some nodes of the model at each iteration to increase generalization and reduce overfitting. Depending on the model, it can increase or decrease performance.
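A minimal sketch of where a Dropout layer sits in a model (TensorFlow's Keras assumed; the rate of 0.5 and the layer widths are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    # Randomly zero 50% of the previous layer's activations during each
    # training step; dropout is inactive at inference time and adds no
    # trainable parameters.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
```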

L1 Regularization:

L2 Regularization:
Combined:
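A sketch of how the three penalties attach to a layer (TensorFlow's Keras regularizers; the 0.01 factors are illustrative):

```python
from tensorflow.keras import layers, regularizers

# L1 penalizes the absolute values of the weights, L2 their squares,
# and l1_l2 applies both penalties at once.
l1_layer = layers.Dense(64, activation="relu",
                        kernel_regularizer=regularizers.l1(0.01))
l2_layer = layers.Dense(64, activation="relu",
                        kernel_regularizer=regularizers.l2(0.01))
combined_layer = layers.Dense(64, activation="relu",
                              kernel_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01))
```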

K-Fold Cross Validation


Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data
sample. The procedure has a single parameter called k that refers to the number of groups that a given
data sample is to be split into.
K-Fold Interpretation:
For iteration 1, the first of the 4 folds of the data is used for testing and the remaining 3 folds for training; over the 4 iterations, the training and testing folds are rotated as shown in the picture above. This rotation of the data helps us select the model that generalizes best.

Illustration of K-fold Cross Validation on our dataset:

Declaring K-Fold module with n_splits = 4, building and compiling the model:
Evaluating the testing loss and testing accuracy for each K-fold.
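A sketch of the loop-based version (scikit-learn's KFold with TensorFlow's Keras; the random stand-in arrays and layer sizes are assumptions — the lab runs this on the data prepared earlier):

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras
from tensorflow.keras import layers

# Random stand-in data; replace with the dataset prepared earlier.
x = np.random.rand(200, 784).astype("float32")
y = np.random.randint(0, 10, (200,))

kfold = KFold(n_splits=4)
scores = []
for train_idx, test_idx in kfold.split(x):
    # Build a fresh model for every fold so no fold sees leaked weights.
    model = keras.Sequential([
        keras.Input(shape=(784,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x[train_idx], y[train_idx], epochs=1, verbose=0)
    loss, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
    scores.append(acc)

print("accuracy per fold:", scores)
```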

This implementation uses a for loop, but there is a direct method, illustrated below. Also, before applying K-fold it is good practice to shuffle the data.

Direct use of K-fold using the scikit-learn API with a Keras model.
Shuffling the data:

Normalizing the data:


Building the model:

Loading the wrapper for using the scikit-learn API with Keras models, applying K-fold with 5 splits, and averaging the accuracies over all folds:
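The lab uses the Keras scikit-learn wrapper (KerasClassifier), whose import path varies across versions (tf.keras.wrappers.scikit_learn in older TensorFlow releases, the separate scikeras package in newer ones). To keep this sketch self-contained and version-independent, a minimal hand-rolled wrapper playing the same role is used instead; the data and layer sizes are stand-ins:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import KFold, cross_val_score
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(784,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

class KerasWrapper(BaseEstimator, ClassifierMixin):
    """Minimal scikit-learn-style estimator around a Keras model."""

    def fit(self, X, y):
        self.model_ = build_model()
        self.model_.fit(X, y, epochs=1, verbose=0)
        return self

    def score(self, X, y):
        # cross_val_score calls this on each held-out fold.
        return self.model_.evaluate(X, y, verbose=0)[1]

# Random stand-in data; replace with the shuffled, normalized data above.
x = np.random.rand(200, 784).astype("float32")
y = np.random.randint(0, 10, (200,))

scores = cross_val_score(KerasWrapper(), x, y, cv=KFold(n_splits=5))
print("mean accuracy over 5 folds:", scores.mean())
```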

For 10 K-folds:
Comparison of different optimizers taken for K-fold with 5 splits:

Time taken by the single run of model:


Time taken by K-fold with 5 splits run of model:
