
Artificial Intelligence | 2022

SUBMITTED BY
FULL NAME: Suman Ghorashine

COURSE: Artificial Intelligence


SEMESTER: Fifth
SECTION: ‘C’
YEAR OF ENROLLMENT: 2020

SUBMITTED TO
Teacher’s name: *********

Introduction:
Artificial intelligence aims to create intelligent machines, which have become an essential part of the technology industry. Research in artificial intelligence is highly technical and specialized, and it requires a clear understanding of the problem and a clear view of how a prototype should be built and used. One term closely associated with AI is machine learning: the science of getting a computer to act without being explicitly programmed for every task. Machine learning allows software applications to become predictive without being explicitly programmed to do so, and the insights it produces can be more precise than those obtained by humans analyzing the data on their own.

Choosing the right machine learning algorithm depends on several factors, including, but not limited to, data size, quality, and diversity, as well as what answers a business wants to derive from that data. Additional considerations include accuracy, training time, the number of parameters, and the number of data points. Choosing the right algorithm is therefore a combination of business need, specification, experimentation, and available time; even the most experienced data scientists cannot say which algorithm will perform best without experimenting with several. According to the current system of classification, there are four primary AI types: reactive, limited memory, theory of mind, and self-aware AI. Some of these types are not yet scientifically achievable in the present context, and most AI systems cannot function beyond the tasks they were initially designed for.

OpenCV is a great tool for image processing and computer vision tasks. It is an open-source library that can be used for tasks such as face detection, object tracking, landmark detection, and much more. It supports multiple languages, including Python, Java, and C++, and it can process images and videos to identify objects, faces, or even human handwriting. TensorFlow is an open-source software library for dataflow programming across a range of tasks. The important difference is that TensorFlow is a framework for machine learning, while OpenCV is a library for computer vision. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

For my project I have written a program using OpenCV together with the TensorFlow library, NumPy, and several other libraries for training and model development. The project can detect whether a person is wearing a mask or not. The project and the training of the model were both successful. However, some improvements could still be made to the existing project, such as detecting face masks in groups of people and using a higher-quality camera lens. The learning outcomes of the project were achieved, and the application runs successfully.
Literature Review:
OpenCV is a library of programming functions mainly aimed at real-time computer vision. Computer vision is a field of AI that trains computers to capture and interpret information from image and video data. By applying machine learning models to images, computers can classify objects and respond accordingly, for example unlocking a smartphone when it recognizes its owner's face. Since the infectious coronavirus was first reported in Wuhan, it has become a public health problem in China and around the world, and the pandemic is having devastating effects on societies and economies everywhere. My project can therefore prove useful in the present context and is applicable in real-world scenarios. Deep learning is an important breakthrough in the AI field, and it has recently shown enormous potential for extracting fine-grained features in image analysis. Due to the COVID-19 epidemic, several deep learning approaches have been proposed to detect patients infected with coronavirus. An important advantage of such methods is that the same kinds of models can be used to diagnose other chest-related diseases such as tuberculosis. We are facing a health crisis, and research on face mask detection can help identify people who violate mask-wearing rules.
The system described in this paper consists of two principal blocks. The first block covers training and testing the models, whereas the second block covers testing the whole framework. For the first block, the labeled dataset was divided into three subsets: the training subset, which represents 70% of the dataset images; the validation subset, which uses 10% of the images to validate the performance of the trained models; and the testing subset, which uses the remaining 20%. In each epoch, each model is trained on the training subset. The training results, namely the training accuracy and the training loss, are presented as curves of "accuracy versus epoch" and "loss versus epoch," respectively. After training, each model is validated on the validation subset, producing the validation accuracy and validation loss, and the two results are compared through the loss function. An error value tending toward zero indicates a well-trained model; otherwise, the hyperparameters are tuned and the model is trained for another epoch.
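As a reference, the following is a minimal sketch of how such a 70/10/20 split can be produced. It assumes scikit-learn's train_test_split, since the exact splitting code is not part of this report, and the variable names (data, labels) are illustrative placeholders for the image array and its class labels.

from sklearn.model_selection import train_test_split

# data: NumPy array of preprocessed images, labels: the corresponding class labels.
# First carve off 20% for testing, then take 10% of the whole set for validation.
X_train, X_test, y_train, y_test = train_test_split(
    data, labels, test_size=0.20, stratify=labels, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.125, stratify=y_train, random_state=42)
# 0.125 of the remaining 80% equals 10% of the full dataset,
# leaving 70% for training, 10% for validation, and 20% for testing.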
The process of calculating errors and updating the network's parameters is called backward propagation, which is the second important process in the training phase of any neural network, after the forward propagation process.

In this project, the dataset contains pictures of people wearing masks in one directory and pictures of people without masks in another, and these directory names are used as the category labels for training. After installing Raspberry Pi OS and all the required libraries, such as TensorFlow, OpenCV, and imutils, the embedded vision system is able to detect whether a user is wearing a face mask and whether the distance between people is maintained or violated. When someone is not wearing a face mask, their face is marked with a red box and the text "No Face Mask Detected."; when they are wearing one, their face is marked with a green box and the text "Mask Detected." For the social distancing task, the proposed model detects people and provides their bounding box information; the Euclidean distance between each pair of detected centroids is then computed from the (x, y) coordinates of each bounding box. In the social distancing detection task, an alert is displayed with a red box when the distance is violated and a green box when it is maintained.

This embedded vision-based application can be used in almost any environment, such as public places, stations, corporate offices, streets, shopping malls, and examination centers, where accuracy and precision are highly desired. It can also contribute to smart city innovation and could boost development in many developing countries. The framework offers a chance to be better prepared for the next crisis and to evaluate the effects of large-scale social change with respect to sanitary protection rules.

For plotting the training results, I imported the matplotlib library as plt to track the training loss and training accuracy. During training and while running the model, I tuned it so that whether the person in front of the camera is wearing a mask is detected with very high accuracy, because I trained the model several times. A major drawback of my model is that it cannot be closed once it is running; an exit option could have been added, but I chose not to, because the model is supposed to run 24/7 on a Raspberry Pi with an attachable camera.

Although numerous researchers have devoted effort to designing efficient algorithms for face detection and recognition, there is an essential difference between 'detection of the face under a mask' and 'detection of a mask over the face'. Pattern learning and object recognition are the inherent tasks that a computer vision technique must deal with, and object recognition encompasses both image classification and object detection. The task of recognizing a mask over the face in public areas can be achieved by deploying an efficient object recognition algorithm through surveillance devices.
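As a reference for the social distancing check described above, the following is a minimal sketch of how pairwise centroid distances can be computed and compared against a threshold. It assumes only NumPy, and the pixel threshold is an illustrative value, since the report does not specify the exact implementation.

import numpy as np

def find_violations(boxes, min_distance=100):
    # boxes: list of (startX, startY, endX, endY) tuples from the person detector.
    # min_distance: illustrative threshold in pixels; a real deployment would need
    # to calibrate this against the camera's field of view.
    centroids = np.array(
        [((sx + ex) / 2.0, (sy + ey) / 2.0) for (sx, sy, ex, ey) in boxes])
    violations = set()
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            # Euclidean distance between each pair of detected centroids
            if np.linalg.norm(centroids[i] - centroids[j]) < min_distance:
                violations.update([i, j])
    return violations   # boxes in this set get a red box, the rest a green box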
Prototype Development
For the development part of the assignment, I will focus mainly on the library imports and how they are declared in the code. The following figure shows the dependencies required for running the file:

Fig: 2.1, Dependency Requirements


In the dependency file, we can change the versions according to the user's requirements. While running, however, the file may report that an installed version is not compatible or may suggest installing the required versions.
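Since the exact contents of Fig 2.1 may not reproduce clearly here, a quick way to confirm that the installed versions match the requirements is a small check like the following; the package set is assumed from the libraries mentioned in this report.

from importlib.metadata import version, PackageNotFoundError

# Packages assumed from this report; the versions pinned in the requirements file may differ.
packages = ["tensorflow", "opencv-python", "numpy", "imutils", "matplotlib"]

for name in packages:
    try:
        print(f"{name}: {version(name)}")
    except PackageNotFoundError:
        print(f"{name}: NOT INSTALLED")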
After running the requirements file and satisfying all the dependencies, we can try to run the training script. Before training, the plot.png and mask_detector.model files do not yet exist; these files are produced so that we can check the activity of the training process. In this face mask detection project, I have used 20% of the images from both the mask and without-mask categories to train the model and to check its accuracy and loss. The training script needs several imports that must be configured and used. TensorFlow is an open-source, end-to-end machine learning and artificial intelligence framework, and the trained mask_detector.model file is saved in HDF5 (h5) format. Along with the face mask detection, I have also included code to track the person's face, which is essential for the detection to be accurate and precise. With os.path.join, I join the dataset directory with each category, i.e. the directories named with_mask and without_mask used for training. After this, I list all the images in each directory using os.listdir. I use the image preprocessing utilities to load the images; initially the images are collected in a Python list and later converted into a NumPy array. We also have to define a small learning rate to keep the loss to a minimum during training. A condensed sketch of this loading step is shown below.
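The following is a minimal sketch of the loading step described above, assuming the Keras preprocessing utilities; the target image size and the normalization step are assumptions, since they are not stated explicitly in this report.

import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

DIRECTORY = "dataset"                      # assumed layout: dataset/with_mask, dataset/without_mask
CATEGORIES = ["with_mask", "without_mask"]

data, labels = [], []
for category in CATEGORIES:
    path = os.path.join(DIRECTORY, category)   # join the dataset directory with the category folder
    for img_name in os.listdir(path):          # list every image inside the category folder
        image = load_img(os.path.join(path, img_name), target_size=(224, 224))  # size is an assumption
        data.append(img_to_array(image) / 255.0)  # scale pixel values; exact preprocessing may differ
        labels.append(category)                   # the folder name doubles as the label

data = np.array(data, dtype="float32")     # the Python list is converted to a NumPy array afterwards
labels = np.array(labels)

INIT_LR = 1e-4                             # a small initial learning rate, as discussed above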
These values can be changed as desired. While training the model, the console output should behave as in the figure below:

Fig: 2.2, Training Process


This is because I have used only 20% of the dataset to train the model and set the epoch value to 20. For the face detection I have used the deep neural network (DNN) module available in cv2, and I load the mask classifier with load_model, using the model we developed during training. To start the camera view I use VideoStream with source 0 to run the camera. The source is simply the camera used for video streaming; if we have two cameras and want to use the second one, we can pass 1 instead of 0. The figure below shows the code used to display the video stream:

Fig: 2.3, Video Streaming


We read every frame captured in front of the camera unit; what looks like a video is really a series of frames processed in real time. The prediction and the location of mask or no mask are carried out in a function called detect_and_predict_mask, which takes the frame, the face network, and the mask network as arguments. This is the main function that locates each face in the frame and predicts whether the person is wearing a face mask and following the protocols. A condensed sketch of how this loop fits together is shown below.
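The following is a condensed sketch of how this detection loop is typically wired together. It assumes the imutils VideoStream API, a loaded OpenCV DNN face detector and mask classifier, and the detect_and_predict_mask helper with the signature described above; the exact variable names and model file paths in my code may differ.

import cv2
import imutils
from imutils.video import VideoStream
from tensorflow.keras.models import load_model

# Assumed setup: an OpenCV DNN face detector and the trained mask classifier (file names illustrative).
faceNet = cv2.dnn.readNet("deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel")
maskNet = load_model("mask_detector.model")

vs = VideoStream(src=0).start()            # src=0 is the first camera; use 1 for a second camera

while True:                                # the report intends this loop to run continuously (24/7)
    frame = vs.read()                      # grab a single frame from the stream
    frame = imutils.resize(frame, width=800)

    # locs: face bounding boxes, preds: (mask, without_mask) probabilities for each detected face
    (locs, preds) = detect_and_predict_mask(frame, faceNet, maskNet)

    for (box, pred) in zip(locs, preds):
        (startX, startY, endX, endY) = box
        (mask, withoutMask) = pred
        label = "Mask Detected." if mask > withoutMask else "No Face Mask Detected."
        color = (0, 255, 0) if mask > withoutMask else (0, 0, 255)   # green / red in BGR order
        cv2.putText(frame, label, (startX, startY - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
        cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2)

    cv2.imshow("Face Mask Detector", frame)
    cv2.waitKey(1)                         # refreshes the window; no exit key, by design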

Evaluation and Conclusion:


Finally, the project build was successful and the predictions were accurate in all of my test runs. The figure below shows where I built my project:

Fig: 3.1, File Location


In the figure above, the model has already been trained, so we can see the output files mask_detector.model and plot.png, which have been created and saved in our project directory.
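These two files are written out at the end of training. The following is a minimal sketch of how they are typically produced, assuming a Keras History object returned by model.fit (here called H) and the epoch count used in this project; the variable names and history keys are illustrative assumptions.

import matplotlib
matplotlib.use("Agg")                      # render plots to a file instead of a window
import matplotlib.pyplot as plt
import numpy as np

EPOCHS = 20                                # the epoch value used in this project
# H is assumed to be the History object returned by model.fit(...) with validation data supplied.
plt.figure()
plt.plot(np.arange(0, EPOCHS), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, EPOCHS), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, EPOCHS), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, EPOCHS), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch")
plt.ylabel("Loss / Accuracy")
plt.legend(loc="lower left")
plt.savefig("plot.png")                    # the plot.png file seen in the project directory

model.save("mask_detector.model", save_format="h5")   # the model file, saved in h5 format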

The build process was successful, and we can see the output in the figure below (or refer to Fig 2.2):

Fig: 3.2 Starting training

The outcome of the training was as expected and is shown in the figure below (see also Fig 2.2):

Fig: 3.3, Training Starting


The below figure shows what it should look like after the training is complete:

Fig: 3.4, Training Complete


Running the file, we should be able to see the following outcome:

Fig: 3.5, Running the model

Now the only thing left is to test whether our mask detection model is working. Let's have a look at the following pictures to complete the assignment:

Fig: 3.6, Final


In the end, the project was successful, with minimal errors displayed while running the application.
