
TARP-ECE3999

DIGITAL ASSIGNMENT- 4
SCHOOL OF ELECTRONICS ENGINEERING
(SENSE)

SUBMITTED TO: Dr. KATHIRVELAN J

Group Members

ABHIGYAN BHOWAL 18BEC0954

AKSHAT GARG 18BEC0968

LAKSHYA PUNJABI 18BEC0371

KARTIK SHARMA 18BEC0140

NIDHI PATEL 18BEC0841

LOKESH 18BEC0649
System Development and Testing

FEATURE EXTRACTION

Feature extraction is a process of dimensionality reduction by which an initial set of raw data is reduced
to more manageable groups for processing. A characteristic of these large data sets is a large number of
variables that require a lot of computing resources to process. Feature extraction is the name for
methods that select and/or combine variables into features, effectively reducing the amount of data that
must be processed while still accurately and completely describing the original data set.
The process of feature extraction is useful when you need to reduce the number of resources needed for
processing without losing important or relevant information. Feature extraction can also reduce the
amount of redundant data for a given analysis. Moreover, reducing the data and the effort the machine
spends building variable combinations (features) speeds up the learning and generalization steps of the
machine learning process.

CODE AND ALGORITHM


In order to train a custom face mask detector, we need to break our project into two distinct phases,
each with its own respective sub-steps:
1. Training: Here we’ll focus on loading our face mask detection dataset from disk, training a model (using
Keras/TensorFlow) on this dataset, and then serializing the face mask detector to disk.
2. Deployment: Once the face mask detector is trained, we can then move on to loading the mask
detector, performing face detection, and then classifying each face as with_mask or without_mask.
The dataset must contain different kinds of masks worn by people of different complexions from all over
the world; on top of that, data augmentation techniques are applied over 1,500 images for better training.
3. Communication: Based on the boolean classification result, the IoT device communicates with the
gateway node over the network using the MQTT protocol. Data visualization on the various fields of the
ThingSpeak channel is then performed, covering the density of people violating the rules by entering
without a mask, and the threshold value, i.e. how many people are actually allowed to gather in the
building while maintaining the distancing norms.
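The report does not reproduce the channel-update code, but the ThingSpeak step above can be sketched with ThingSpeak's standard HTTP REST update endpoint (ThingSpeak also accepts MQTT publishes; the field numbers and the placeholder write key below are assumptions for illustration, not values from the project):

```python
from urllib.parse import urlencode

THINGSPEAK_WRITE_KEY = "XXXXXXXXXXXXXXXX"  # placeholder, not a real key
BASE_URL = "https://api.thingspeak.com/update"

def build_update_url(violators: int, threshold: int) -> str:
    """Build the HTTP GET URL that updates two ThingSpeak channel fields:
    field1 = number of people detected without a mask,
    field2 = the occupancy threshold for the building."""
    query = urlencode({
        "api_key": THINGSPEAK_WRITE_KEY,
        "field1": violators,
        "field2": threshold,
    })
    return f"{BASE_URL}?{query}"

# e.g. urllib.request.urlopen(build_update_url(3, 50)) would push one update
print(build_update_url(3, 50))
```

Each successful GET writes one row to the channel, which the ThingSpeak field charts then visualize.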

➢ Training Step: In the training step, we receive a batch of samples, pass them through our model via the
forward pass, and compute the loss of that batch. We trained our neural network in a Jupyter Notebook.
After training, files are generated with the weights of our neural network, and we then need to convert
those .ckpt files into a .pb file for testing.

➢ Testing Step: To test our model on real data, we use OpenCV code that is robust against
occlusions of the face. This deep learning face detector is a more accurate alternative to the Haar-Cascade
model: the face frame can fit the entire face without capturing parts of the background, which suits the
input our SSD_MobileNet neural network was trained on.
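The actual detector code is not reproduced in this document, but an SSD-style face detector outputs one row per candidate box, with a confidence score and normalized coordinates that must be filtered and scaled to the frame before cropping. A minimal sketch of that post-processing step (the tuple layout and the 0.5 threshold are assumptions for illustration):

```python
def filter_detections(detections, frame_w, frame_h, conf_threshold=0.5):
    """Keep SSD-style detections above a confidence threshold and scale
    their normalized box coordinates to pixel coordinates.

    Each detection is (confidence, x1, y1, x2, y2), with coordinates
    in [0, 1] relative to the frame."""
    boxes = []
    for conf, x1, y1, x2, y2 in detections:
        if conf < conf_threshold:
            continue  # discard weak, likely-spurious boxes
        boxes.append((
            int(x1 * frame_w), int(y1 * frame_h),
            int(x2 * frame_w), int(y2 * frame_h),
        ))
    return boxes

# One strong detection and one weak one, on a 300x300 frame:
dets = [(0.98, 0.25, 0.25, 0.5, 0.75), (0.2, 0.0, 0.0, 1.0, 1.0)]
print(filter_detections(dets, 300, 300))  # → [(75, 75, 150, 225)]
```

Each surviving box is then cropped from the frame and passed to the mask classifier.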

DATA PREPROCESSING

What is Data Preprocessing?

When we talk about data, we usually think of large datasets with a huge number of rows and columns.
While that is a likely scenario, it is not always the case — data can come in many different forms:
structured tables, images, audio files, videos, etc.

Machines don’t understand free text, image or video data as it is, they understand 1s and 0s. So it
probably won’t be good enough if we put on a slideshow of all our images and expect our machine
learning model to get trained just by that!

In any Machine Learning process, Data Preprocessing is that step in which the data gets transformed,
or Encoded, to bring it to such a state that now the machine can easily parse it. In other words,
the features of the data can now be easily interpreted by the algorithm.

Here the images are resized and placed in a list, the images are labeled, and this list is later used to
train the MobileNetV2 CNN.
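The preprocessing code itself appears only as a screenshot in the original report, but the labeling and pixel-scaling steps it describes reduce to simple arithmetic. A minimal sketch (the helper names are ours; the full pipeline would use Keras utilities, whose MobileNetV2 `preprocess_input` applies exactly this scaling):

```python
# The two class names used throughout the project
CLASSES = ["with_mask", "without_mask"]

def one_hot(label: str) -> list:
    """Encode a class name as a one-hot vector, the form a softmax
    classifier head expects during training."""
    vec = [0.0] * len(CLASSES)
    vec[CLASSES.index(label)] = 1.0
    return vec

def preprocess_pixel(value: int) -> float:
    """Scale an 8-bit pixel value to [-1, 1], matching the MobileNetV2
    preprocess_input convention in Keras."""
    return value / 127.5 - 1.0

print(one_hot("without_mask"))  # → [0.0, 1.0]
print(preprocess_pixel(255))    # → 1.0
```

After this step every image is a fixed-size array of values in [-1, 1] paired with a one-hot label, ready for training.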
Code

TRAINING
MobileNetV2 CNN
In MobileNetV2, there are two types of blocks. One is a residual block with a stride of 1. The other is a
block with a stride of 2, used for downsizing.
There are 3 layers in both types of blocks.
The first layer is a 1×1 convolution with ReLU6. The second layer is the depthwise convolution.
The third layer is another 1×1 convolution, but without any non-linearity. It is claimed that if ReLU is used
again, deep networks only have the power of a linear classifier on the non-zero-volume part of the
output domain.
There is also an expansion factor t, with t=6 for all main experiments.
If the input has 64 channels, the internal (expanded) output has 64×t = 64×6 = 384 channels.
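The three-layer structure and the channel expansion described above can be checked with a small sketch (the function name is ours, for illustration):

```python
def inverted_residual_shapes(in_ch: int, out_ch: int, t: int = 6):
    """Channel counts through the three layers of a MobileNetV2
    inverted-residual block:
      1) 1x1 expansion conv + ReLU6  : in_ch   -> in_ch*t
      2) 3x3 depthwise conv + ReLU6  : in_ch*t -> in_ch*t
      3) 1x1 projection conv, linear : in_ch*t -> out_ch
    Returns a list of (input_channels, output_channels) per layer."""
    expanded = in_ch * t
    return [(in_ch, expanded), (expanded, expanded), (expanded, out_ch)]

# The example from the text: 64 input channels with t = 6
print(inverted_residual_shapes(64, 64))
# → [(64, 384), (384, 384), (384, 64)]
```

The depthwise layer keeps the channel count unchanged because it convolves each channel independently; only the two 1×1 convolutions change the width.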

Code
Code for Deploying the Camera
This helps deploy the laptop's camera, which in turn detects whether or not a mask is present
on the face.
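The camera code appears only as a screenshot in the original report. Inside such an OpenCV capture loop, each detected face is classified and drawn with a label and a colored box; the per-face decision step can be sketched as follows (the label text and the green/red BGR convention are assumptions for illustration):

```python
def label_for(mask_prob: float, no_mask_prob: float):
    """Decide the overlay label and BGR box color for one detected face,
    given the classifier's two softmax outputs: green for a mask,
    red for no mask."""
    if mask_prob > no_mask_prob:
        return "Mask", (0, 255, 0)    # green box
    return "No Mask", (0, 0, 255)     # red box

print(label_for(0.93, 0.07))  # → ('Mask', (0, 255, 0))
```

In the full loop, the returned label and color would be passed to `cv2.putText` and `cv2.rectangle` on every frame.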
RESULTS OBTAINED WHILE TESTING
● CNN Model
The model, after training, ended with an overall accuracy of 92.49%

● Mask R-CNN Model


The model, after training, ended with an overall accuracy of 97.83%.

● MobileNet CNN Model


The model, after training, ended with an overall accuracy of around 99%.
ThingSpeak Visualization of Fields:

Data analysis using ThingSpeak-MATLAB gives the density of entries by people with and without
masks.
