
Detection and Grading of Multiple Fruits/Vegetable using

Machine Vision

PRELIMINARY PROJECT REPORT


Submitted by

DANIEL MATHEW RANJAN PRC17CS003


HEGSYMOL RAJU CML17CS018
JINO CHERIAN VARUGHESE CML17CS022
SOORAJ S PRC17CS018

To

APJ Abdul Kalam Technological University


in partial fulfillment of the requirements for the award of B. Tech Degree in
Computer Science and Engineering

Department of Computer Science and Engineering
Providence College of Engineering, Chengannur
December 2020
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
PROVIDENCE COLLEGE OF ENGINEERING,
CHENGANNUR

CERTIFICATE
Certified that this report entitled ‘Detection and Grading of Multiple
Fruits/Vegetable using Machine Vision’ is the report of project completed by the
following students during 2020-2021 in partial fulfillment of the requirements for the
award of the Degree of Bachelor of Technology in Computer Science and
Engineering.

DANIEL MATHEW RANJAN PRC17CS003


HEGSYMOL RAJU CML17CS018
JINO CHERIAN VARUGHESE CML17CS022
SOORAJ S PRC17CS018

Ms. Renju Rachel Varghese (Project Supervisor)


Assistant Professor
Dept. of Computer Science & Engineering
Providence College of Engineering

Mr. Pramod Mathew Jacob (Project Coordinator)


Assistant Professor
Dept. of Computer Science & Engineering
Providence College of Engineering
Dr. Santhosh Simon
Head of the Department
Dept. of Computer Science & Engineering
Providence College of Engineering
DECLARATION

We hereby declare that this project report entitled 'Detection and Grading of Multiple
Fruits/Vegetable using Machine Vision' is the bonafide work of ours, carried out under the
supervision of Ms. Renju Rachel Varghese, Assistant Professor, Department of Computer
Science and Engineering. We declare that, to the best of our knowledge, the work reported
herein does not form part of any other project report or dissertation on the basis of which a
degree or award was conferred on an earlier occasion to any other candidate. The content of
this report is not being presented by any other student to this or any other University for the
award of a degree.

Sl.No Name of Student Roll Number Signature

1 DANIEL MATHEW RANJAN PRC17CS003

2 HEGSYMOL RAJU CML17CS018

3 JINO CHERIAN VARUGHESE CML17CS022

4 SOORAJ S PRC17CS018

Ms. Renju Rachel Varghese (Project Supervisor)

Dr. Santhosh Simon

Head, Department of Computer Science and


Engineering

Providence College of Engineering

Date: 10/12/2020
ACKNOWLEDGEMENTS

We take this opportunity to express our deep sense of gratitude and sincere thanks to
all who helped us to complete the preliminary project successfully.

We are deeply indebted to our Project Supervisor Ms. Renju Rachel Varghese,
Assistant Professor for her excellent guidance, positive criticism, and valuable
comments. We record our appreciation and sincere thanks to our Departmental Project
Coordinator and Co-guide Mr. Pramod Mathew Jacob, Assistant Professor for his
overall coordination and timely guidelines.

We are also greatly thankful to our Head of Department Dr. Santhosh Simon,
Associate Professor for his continuous support.

Finally, we thank our parents, family members and friends who directly and indirectly
contributed to the successful completion of our preliminary project.

Daniel Mathew Ranjan


Hegsymol Raju
Jino Cherian Varughese
Sooraj S

Date: 10/12/2020

ABSTRACT

COVID-19 is spreading rapidly throughout the world. Rinku integrates an electronic
system (ClinicalKit) comprising biomedical sensors for body temperature, pulse rate,
and oxygen saturation, as well as a digital platform for storing and displaying the
collected data. This system aims to detect whether people are wearing masks. It can
handle simultaneous information from multiple patients and provide valuable data
related to the severity of the reported symptoms, which in turn could help healthcare
professionals make management decisions to optimize their clinical resources. In
this paper, the functionality of the ClinicalKit, communication between the IoT
architecture and the cloud, and the monitoring of physiological parameters were
tested. The results showed that the enclosure design is convenient, the IoT architecture
is functional, and the tracking of temperature, heart rate, and blood oxygen levels from
subjects is promising. We consider that this system has the potential to provide an
accurate forecast regarding the demand for clinical resources and to prompt timely
actions related to this pandemic.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS............................................................................................i

ABSTRACT...................................................................................................................ii

LIST OF FIGURES........................................................................................................v

LIST OF TABLES........................................................................................................vi

LIST OF ABBREVIATIONS......................................................................................vii

CHAPTER 1 INTRODUCTION................................................................................1

1.1 Background..........................................................................................................1

1.2 Existing System....................................................................................................2

1.3 Problem Statement...............................................................................................2

1.4 Objectives.............................................................................................................2

1.5 Scope....................................................................................................................2

CHAPTER 2 LITERATURE REVIEW....................................................................4

CHAPTER 3 SYSTEM ANALYSIS.........................................................................14

3.1 Expected System Requirements.........................................................................14

3.2 Feasibility Analysis............................................................................................15

3.2.1 Technical feasibility........................................................................................15

3.2.2 Operational feasibility.....................................................................................15

3.2.3 Economic feasibility........................................................................................15

3.3 Hardware Requirements.....................................................................................15

3.4 Software Requirements......................................................................................15

3.5 Life Cycle Used..................................................................................................16

CHAPTER 4 METHODOLOGY.............................................................................18

4.1 Proposed System................................................................................................18

4.1.1 Android App....................................................................................................18

4.1.2 Classification...................................................................................................19

4.1.2.1 Image Dataset...............................................................................................19

4.1.2.2 Data Preprocessing.......................................................................................19

4.1.2.3 Convolution Neural Network.......................................................................19

4.1.2.4 Transfer Learning.........................................................................................22

4.2 Advantages of Proposed System........................................................................24

CHAPTER 5 SYSTEM DESIGN.............................................................................25

5.1 Architecture Diagram.........................................................................................25

5.2 Flow Chart..........................................................................................................26

5.3 Use Case Diagram..............................................................................................27

5.4 Activity Diagram................................................................................................28

5.5 Sequential Diagram............................................................................................28

5.6 UI Design...........................................................................................................29

REFERENCES...........................................................................................................30

LIST OF FIGURES

Figure number Figure Name Page Number


Figure 1.1 Architecture of Fruit/Vegetable Scanner 1
Figure 3.1 Incremental model 16
Figure 3.2 Gantt chart 17
Figure 4.1 Development block diagram 18
Figure 4.2 CNN layers 22
Figure 4.3 Traditional ml and transfer learning 23
Figure 5.1 Architecture diagram 25
Figure 5.2 Flow chart 26
Figure 5.3 Use case diagram 27
Figure 5.4 Activity Diagram 28
Figure 5.5 Sequence diagram 29
Figure 5.6 UI Design 29

LIST OF TABLES

Table Number Table Name Page Number


Table 3.1 COCOMO Model Coefficients 17

LIST OF ABBREVIATIONS

Abbreviation Expansion
ML Machine Learning
SVM Support Vector Machine
KNN K-Nearest Neighbors
ANN Artificial Neural Network
CCD Charge-Coupled Device
RGB Red Green Blue
COCOMO Constructive Cost Model
HSV Hue Saturation Value
BoF Bag of Features
GLCM Gray Level Co-occurrence Matrix
IDE Integrated Development Environment
APK Android Package
KLOC Kilo Lines of Code
RBF Radial Basis Function
LOC Lines of Code
VEGA Vector Evaluation Genetic Algorithm
BSA Backtracking Search Algorithm
GBSA Genetic Backtracking Search Algorithm
OS Operating System
CNNsF Convolution Neural Network Features
UML Unified Modeling Language

CHAPTER 1

INTRODUCTION

1.1 Background

Due to the COVID-19 pandemic, wearing a mask is mandatory in public spaces, as
properly wearing a mask offers a maximum preventive effect against viral
transmission. Body temperature has also become an important consideration in
determining whether an individual is healthy. In this work, we design a real-time deep
learning model to meet the current demand to detect the mask-wearing position and
head temperature of a person before he or she enters a public space.

Figure 1.1: Architecture of System

1.2 Existing System
The existing systems include automated checking of masks, temperature and the
count of people entering a place.
Although these systems have their own advantages, they also exhibit the following
disadvantages:
• Manual checking of temperature.
• Attendance tracking is not available.
• A real-time system is not available.
• Smartphone-enabled systems are not available.

1.3 Problem Statement


To develop a Machine Learning based system that detects masks, measures the
temperature of people, and counts the number of people entering a room or a place.

1.4 Objectives
Various objectives of our proposed model are:
• To check whether people are wearing masks or not.
• To check the temperature of people.
• To identify the number of people entering a room.

1.5 Scope
Technological advancements such as ML and image processing are required to help
with automation. The scope of this project has a global perspective as it works on
smartphones. The system provides an efficient, real-time application to check whether
people are obeying COVID-19 protocols.

CHAPTER 2

LITERATURE REVIEW

We have analyzed various existing works in the field of detection, different types of
classification algorithms and Machine Learning models. A summary of the 20 most
relevant papers is presented below.

Francisco Rodriguez [1] presents a system known as Rinku that aims to provide critical
data to health professionals so that they can validate COVID-19 indicators remotely.
Rinku can handle data from several patients in a given period and provide useful
information on the intensity of the symptoms reported, which could aid healthcare
professionals in making management decisions to optimize their clinical resources.
The functioning of the ClinicalKit, connectivity between the IoT architecture and the
cloud, and physiological parameter monitoring were all tested in this paper. The
findings revealed that the enclosure design is practical, the IoT architecture is
efficient, and subject tracking of temperature, heart rate, and blood oxygen levels is
promising. The authors believe that the Rinku system has the ability to provide an
accurate forecast of clinical resource demand and to help clinicians plan ahead.

Sushanta Malakar [2] proposed a reconstructive method to obtain partially
reconstructed features of the occluded part of the face, after which an existing deep
learning method is used to recognize the face. First, the occluded part of a face is
discarded and Principal Component Analysis (PCA) is applied to the remaining part
of the face. Next, the principal components, i.e. the most significant eigenvectors and
their corresponding weights, are calculated to reconstruct the occluded part. The
proposed method cannot reconstruct the occluded part completely, but it can provide
enough features of the occluded part to improve the recognition accuracy by up to 15%.

A related work describes a convolutional neural network deployed on Android-based
mobile devices. The model is composed of three convolutional layers, each activated
by a rectified linear unit function and followed by a max-pooling layer, and finally two
dense layers; an accuracy rate of 97.87% was obtained. The authors used the
TensorFlow Lite library to run the trained CNN model on Android devices. Before
deploying on the device, the model was converted from a Keras file to a TensorFlow
Lite file. The deployment method used here can be adopted in our system.
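
The conversion step described above can be illustrated with a minimal Python sketch. This is only an illustrative example; the model and file names (classifier.h5, classifier.tflite) are placeholders and not taken from the cited work.

```python
# Minimal sketch (assumed file names): converting a trained Keras model
# to a TensorFlow Lite file for deployment inside an Android app.
import tensorflow as tf

# Load the trained Keras model from disk.
model = tf.keras.models.load_model("classifier.h5")

# Convert the Keras model to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the .tflite file, which can then be bundled with the Android app.
with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)
```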

Jiri Prinosil and Ondrej Maly [3] proposed a system that deals with the evaluation of
several methods for face detection when the face is covered by a mask. The methods
evaluated are Haar cascade and Histogram of Oriented Gradients as feature-based
approaches, and Multitask Cascaded Convolutional Neural Network, Max-Margin
Object Detection and TinyFace as convolutional neural network based approaches.
Various types of face coverings are considered: disposable face mask, burka,
balaclava, ski helmet with ski goggles, hockey helmet with protective grill, costumes,
and others. The TinyFace method achieves the best accuracy, but also requires much
more computational power than the other approaches. Therefore, the paper describes
an experiment to see whether the accuracy of some of the remaining methods can be
improved by retraining their models with new image data containing faces with
various face masks.

Hong, Z. Wang, Z. He, N. Wang, X. Tian and T. Lu [4] proposed a model which is a
masked face recognition method based on person re-identification association, which
converts the masked face recognition problem into an association uncovering problem
between the masked face and the appearing faces of the same person. Based on the
characteristics that person re-identification technique does not rely solely on facial
information, it first takes advantage of re-identification to establish the association
between face-masked pedestrians and face-unveiled pedestrians. It further provides an
effective face image quality assessment to select the most identifiable faces for
subsequent recognition from a variety of appearing candidate faces. Finally, the
selected high-quality recognizable faces are used to replace masked faces for
identification. The comparison experiments with the existing disguise face recognition
methods show its superiority in terms of accuracy.

Kun Zhang, Xiang Jia, Yinghui Wang, Hongwei Zhang and Jingying Cui [5] proposed an
algorithm network structure based on the improved YOLOv3-tiny algorithm, using a
combination of nose detection and mask detection for feature fusion based on training
with massive data sets, which solves the problem of detecting whether the mask is worn
in a normative way. The experiments show that this system can detect the target of
wearing face masks in different scenes with an accuracy rate of over 99%, laying a solid
foundation for the detection of wearing face masks normatively.

X. Fan, M. Jiang and H. Yan [6] proposed a deep learning based single-shot
light-weight face mask detector to meet the low computational requirements of
embedded systems while achieving high performance. To cope with the low feature
extraction capability caused by the light-weight model, they propose two novel
methods to enhance the model's feature extraction process. First, to extract rich
context information and focus on crucial face-mask-related regions, they propose a
novel residual context attention module. Second, to learn more discriminating features
for faces with and without masks, they introduce a novel auxiliary task using
synthesized Gaussian heat map regression. Ablation studies show that these methods
can considerably boost the feature extraction ability and thus increase the final
detection performance. Comparison with other models shows that the proposed model
achieves state-of-the-art results on two public datasets, the AIZOO and Moxa3K face
mask datasets. In particular, compared with another light-weight YOLOv3-tiny model,
the mean average precision of their model is 1.7% higher on the AIZOO dataset and
10.47% higher on the Moxa3K dataset. Therefore, the proposed model has a high
potential to contribute to public health care and the fight against the coronavirus
disease 2019 pandemic.

B. Wang, Y. Zhao and C. L. P. Chen [7] proposed a two-stage approach to detect
mask wearing using hybrid machine learning techniques. The first stage is designed
to detect as many candidate mask-wearing regions as possible, based on a transfer
model of the Faster R-CNN and InceptionV2 structure, while the second stage is
designed to verify the real facial masks using a broad learning system, implemented
by training a two-class model. Moreover, the article proposes a data set for wearing
mask detection (WMD) that includes 7804 realistic images. The data set has 26403
wearing masks and covers multiple scenes, and is available at
"https://github.com/BingshuCV/WMD". Experiments conducted on the data set
demonstrate that the proposed approach achieves an overall accuracy of 97.32% for
the simple scene and an overall accuracy of 91.13% for the complex scene,
outperforming the compared methods.

S. Srinivasan, R. Rujula Singh, R. R. Biradar and S. Revathi [8] proposed a
comprehensive and effective solution to perform person detection, social distancing
violation detection, face detection and face mask classification using object detection,
clustering and a Convolutional Neural Network (CNN) based binary classifier. For this,
YOLOv3, Density-Based Spatial Clustering of Applications with Noise (DBSCAN),
the Dual Shot Face Detector (DSFD) and a MobileNetV2 based binary classifier have
been employed on surveillance video datasets. The paper also provides a comparative
study of different face detection and face mask classification models. Finally, a video
dataset labelling method is proposed along with the labelled video dataset to
compensate for the lack of datasets in the community, and is used for evaluation of the
system. The system performance is evaluated in terms of accuracy and F1 score as well
as the prediction time, which has to be low for practical applicability. The system
performs with an accuracy of 91.2% and an F1 score of 90.79% on the labelled video
dataset and has an average prediction time of 7.12 seconds for 78 frames of a video.

I. B. Venkateswarlu, J. Kakarla and S. Prakash [9] proposed a system that employs a
global pooling layer to flatten the feature vector. A fully connected dense layer
associated with a softmax layer has been utilized for classification. Their model
outperforms existing models on two publicly available face mask datasets in terms of
vital performance metrics.

M. Xu, H. Wang, S. Yang and R. Li [10] proposed SSD-Mask, which, on the basis of
the SSD algorithm, introduces a channel attention mechanism to improve the ability of
the model to express salient features. At the same time, information from different
feature levels is fully utilized and the loss function is optimized. The final
experimental results show that the algorithm can effectively achieve the goal of face
recognition and mask detection.

W. Vijitkunsawat and P. Chantngarm [11] studied the performance of three
algorithms, KNN, SVM and MobileNet, to find the algorithm best suited to checking
whether a person is wearing a face mask in a real-time situation. The results show that
MobileNet gives the best accuracy both for input images and for input video from a
camera (real-time).

A. Das, M. Wasif Ansari and R. Basak [12] present a simplified approach to achieve
this purpose using some basic Machine Learning packages such as TensorFlow, Keras,
OpenCV and Scikit-Learn. The proposed method detects the face in the image correctly
and then identifies whether it has a mask on it or not. As a surveillance task performer,
it can also detect a face along with a mask in motion. The method attains accuracies of
up to 95.77% and 94.58% respectively on two different datasets. The authors explore
optimized parameter values for a Sequential Convolutional Neural Network model to
detect the presence of masks correctly without causing over-fitting.
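
A minimal Keras Sequential binary classifier of the kind described in [12] might look like the sketch below. The input size, layer widths and dropout rate are our own assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a Sequential CNN binary classifier (mask / no-mask).
# Input size and layer widths are assumed, not taken from [12].
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # helps avoid over-fitting
    layers.Dense(1, activation="sigmoid"),  # mask / no-mask probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```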

R. Kuchta and R. Vrba [13] note that knowledge of the temperature course over a
certain time is needed in scientific, medical and industrial applications. In some
applications, however, the recorded temperature course should be read wirelessly.
Their paper describes the main principles applied in a mobile temperature data logger
and a portable interrogator with wireless transfer of digitized temperature values.

S. Sakshi, A. K. Gupta, S. Singh Yadav and U. Kumar [14] proposed a two-phased
face mask detector that is easy to deploy at the mentioned outlets. With the help of
Computer Vision, it is now possible to detect and implement this on a large scale. A
CNN/MobileNetV2 architecture was used for the implementation of the model. The
implementation is done in Python, and the Python script trains the face mask detector
on the selected dataset using TensorFlow and Keras. The authors added more robust
features and trained the model on various variations, using a large, varied and
augmented dataset so that the model is able to clearly identify and detect face masks
in real-time videos. The trained model was tested on both real-time videos and static
pictures, and in both cases the accuracy was higher than that of the other designed
models.

M. M. Rahman, M. M. H. Manik, M. M. Islam, S. Mahmud and J.-H. Kim [15]
propose a system that restricts the growth of COVID-19 by finding people who are
not wearing a facial mask in a smart city network where all public places are
monitored with Closed-Circuit Television (CCTV) cameras. When a person without a
mask is detected, the corresponding authority is informed through the city network. A
deep learning architecture is trained on a dataset that consists of images of people
with and without masks collected from various sources. The trained architecture
achieved 98.7% accuracy in distinguishing people with and without a facial mask on
previously unseen test data. It is hoped that this study will be a useful tool to reduce
the spread of this communicable disease in many countries of the world.

Another reviewed work tested its system on preselected data samples. The hardware
includes the conveyor, camera control and control systems. The software system
analyzes the fruit image and classifies the fruits. Fruit quality grading into three grades
was based on human perception. Fruits having a good shape, large size, high intensity,
high flabbiness and no defects were branded as of the best quality, i.e., grade 1. Grade
two fruits have a distorted shape, medium size, low flabbiness, low intensity and no
defects, and fruits having defects were considered grade three fruits regardless of other
features. There were problems in detecting flabbiness from the color; an impact sensor
might improve flabbiness detection. To determine the feature-based grades,
unsupervised learning techniques must be used.

K. N. S. Kumar, G. B. A. Kumar, P. P. Rajendra, R. Gatti, S. S. Kumar and N.
Nataraja [16] proposed a method that employs the YOLO technique to recognise
objects such as face masks in pictures and videos as a COVID-19 precaution measure.
Extensive testing on datasets and performance assessment of the suggested
approaches are demonstrated. Furthermore, they used a symbolic method to
successfully maintain inter- and intra-class differences in face mask detection. The
proposed work is being created as a prototype to monitor temperature and identify
masks for individuals. The first technique employs a temperature sensor to detect the
body's current temperature. The second aims at offering a safety mechanism for
individuals in order to avoid COVID-19. Extensive experimentation on 50 different
image datasets was carried out to assess the performance of the suggested technique.
For ten random trials, different training and testing percentages were used. Based on
the data, it was concluded that the symbolic method produces better outcomes than
the conventional one.
P. Ulleri, M. S., S. K., K. Zenith and S. S. N.B. [17] proposed a model that focuses
on developing a contactless employee management system with sensor fusion and
facial recognition technology. The image processing capability in the system tracks
employees entering the institution. A body temperature sensor collects the health
status of each employee at the entrance. With the onboard cameras and deep learning
algorithms, the system also checks whether the employee is wearing a mask. The aim
is to authenticate an employee and check whether they abide by the protocols of the
institution. Ease of use, maintenance, and low-cost installation are the motivation
behind the system design. The best beneficiaries are educational institutions and
corporate offices. The system records employee health data and uses it for contact
tracing.

A. Rahman, M. S. Hossain, N. A. Alrajeh and F. Alsolami [18] tested a number of
COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial
examples (AEs). The test results show that DL models that do not consider defensive
models against adversarial perturbations remain vulnerable to adversarial attacks.
Finally, they present in detail the AE generation process, the implementation of the
attack model, and the perturbations of the existing DL-based COVID-19 diagnostic
applications. This raises awareness of adversarial attacks and encourages others to
safeguard DL models in healthcare systems from attacks.

M. S. Abd Rahim, F. Yakub, A. R. Mohd Hanapiah, M. Z. Ab Rashid and S. A. Zaki
Shaikh Salim [19] proposed face mask detection through Haar cascade approaches.
Besides this, body thermal screening has also been considered in the study to
determine whether an individual is healthy or not by using a thermal sensor. A pilot
study was run for a week in order to test the capabilities and durability of the program
running in the community. The results from the pilot study showed that the thermal
scanner is able to run standalone in public for a week. Overall, the thermal scanner
device was successfully developed with a combination of deep learning techniques to
detect a person with a face mask and is able to screen body temperature using only
low-cost components.

Farady, Lin, Rojanasarit, Prompol and Akhyar [20] designed a real-time deep
learning model to meet the current demand to detect the mask-wearing position and
head temperature of a person before he or she enters a public space. In this
experiment, they use a deep learning object detection method to create a mask
position and head temperature detector using a popular one-stage object detector,
RetinaNet. They build two modules for the RetinaNet model to detect three categories
of mask-wearing positions and the temperature of the head. They implement an RGB
camera and a thermal camera to generate input images and capture a person's
temperature respectively. The output of these experiments is a live video that carries
accurate information about whether a person is wearing a mask properly and what his
or her head temperature is. The model is light and fast, achieving a confidence score
of 81.31% for the predicted object and a prediction speed below 0.1 s/image.

The above review does not give a solution to the possible challenges in the existing
system. Our goal is therefore to propose a real-time application to determine the
quality of a fruit or vegetable in an efficient way. The number of days for which the
fruit or vegetable can remain usable, i.e. its shelf life, is also calculated in the new
system.

CHAPTER 3

SYSTEM ANALYSIS

The system analysis phase includes the analysis of various functional requirements,
non-functional requirements, design constraints and hardware requirements.

3.1 Expected System Requirements


The proposed System is expected to meet the following requirements.
REQ 1: Checks whether people are wearing masks or not

REQ 2: Checks the temperature of the people through wireless sensors

REQ 3: Should be able to keep a count of the number of people that have entered the room
REQ 4: Alert the people if they are not wearing masks or if their temperature is
higher than a reference value; a minimal sketch of this decision logic is given below.
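
The following is only an illustrative sketch of the REQ 4 behaviour; the 37.5 °C threshold and the function and field names are assumptions for illustration, not fixed design decisions.

```python
# Minimal sketch of the REQ 4 alert logic: decide whether a person may
# enter and whether an alert should be raised. The threshold and the
# function/field names are illustrative assumptions.
TEMPERATURE_LIMIT_C = 37.5  # assumed reference value in degrees Celsius


def check_entry(mask_detected: bool, temperature_c: float) -> dict:
    """Return the entry decision and whether an alert is needed."""
    violations = []
    if not mask_detected:
        violations.append("no mask detected")
    if temperature_c > TEMPERATURE_LIMIT_C:
        violations.append(f"temperature {temperature_c:.1f} C above limit")

    return {
        "allow_entry": not violations,
        "raise_alert": bool(violations),
        "reasons": violations,
    }


if __name__ == "__main__":
    print(check_entry(mask_detected=True, temperature_c=36.8))
    print(check_entry(mask_detected=False, temperature_c=38.2))
```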

3.2 Feasibility Analysis
A feasibility study for the above-mentioned requirements was done and it is concluded
that it is practically possible to build such a system. The technical, economic and
operational feasibility analyses are discussed below.

3.2.1 Technical feasibility

The system is easy to use on an Android phone, as an Android application is developed
to classify and grade fruits and vegetables. As Android phones are portable and the
application works on any Android device, it can be used at any time according to the
user's requirements, and this facility can be utilized by people all over the world.

3.2.2 Operational feasibility

The system is fully automated with less involvement of manpower. A Bluetooth
module is added for communication.

3.2.3 Economic feasibility

The system is economical as only a smartphone is required for the recording purpose.
The project will also be economically feasible if developed on a large scale.

3.3 Hardware Requirements

• ESP32-CAM
• Arduino UNO
• Buzzer
• LED
• Servomotor (SG90)
• Contactless Temperature Sensor MLX90614
• HC-05 Bluetooth module

3.4 Software Requirements

• Arduino IDE
• Android Studio

3.5 Life Cycle Used
In this project we choose the incremental model. It is an iterative enhancement model.
We develop our project as different modules which will be completed as different
iterations. The Incremental model is flexible and is easier to incorporate new features
during the development phase.

Figure 3.1: Incremental Model

3.6 Software Cost Estimation

The COCOMO (Constructive Cost Model) is used for cost estimation.

Table 3.1: COCOMO model coefficients


Software Category    a     b      c     d
Organic              2.4   1.05   2.5   0.38
Semi-detached        3.0   1.12   2.5   0.35
Embedded             3.6   1.20   2.5   0.32

Software Project Category: Semi Detached


Estimated Lines of Code (LOC): 1500 LOC = 1.5 KLOC

Effort Applied = a * (KLOC)^b = 3.0 * (1.5)^1.12 = 4.72 Person-Months (PM)

Development Time = c * (Effort)^d = 2.5 * (4.72)^0.35 = 4.30 months ≈ 131 days
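
The estimate above can be reproduced with a short calculation; the sketch below is only a check of the arithmetic, using the semi-detached coefficients from Table 3.1.

```python
# Reproducing the COCOMO estimate for the semi-detached category
# (a=3.0, b=1.12, c=2.5, d=0.35) and 1.5 KLOC.
a, b, c, d = 3.0, 1.12, 2.5, 0.35
kloc = 1.5

effort_pm = a * kloc ** b          # ~4.72 person-months
time_months = c * effort_pm ** d   # ~4.30 months
time_days = time_months * 30.4     # ~131 days, assuming ~30.4 days per month

print(f"Effort: {effort_pm:.2f} PM, Time: {time_months:.2f} months "
      f"(~{time_days:.0f} days)")
```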

3.7 Project Scheduling using Gantt Chart

Figure 3.2: Gantt chart


The Gantt chart shows the project scheduling, including the start and finish dates of
the project. Our project phases will be completed as per the time schedule prescribed
in the above Gantt chart. The starting date was 01/08/2020 and the project is expected
to be completed by 10/04/2021.

CHAPTER 4

METHODOLOGY

4.1 Proposed System


The proposed system checks whether people at the entrance are wearing masks or not.
It also checks the temperature of the people. If the temperature is higher than a
reference value, they will not be permitted to enter the building or the given place.
The ultrasonic system detects a person within 10 cm.

Figure 4.1: Development Block diagram

4.1.1 ESP32-CAM

The ESP32-CAM is a very small camera module with the ESP32-S chip that costs
approximately $10. Besides the OV2640 camera, and several GPIOs to connect
peripherals, it also features a microSD card slot that can be useful to store images
taken with the camera or to store files to serve to clients.

4.1.2 Arduino UNO

The Arduino Uno is an open-source microcontroller board based on


the Microchip ATmega328P microcontroller and developed by Arduino.cc. The board
is equipped with sets of digital and analog input/output (I/O) pins that may be
interfaced to various expansion boards (shields) and other circuits. The board has 14
digital I/O pins (six capable of PWM output), 6 analog I/O pins, and is programmable
with the Arduino IDE (Integrated Development Environment), via a type B USB
cable. It can be powered by the USB cable or by an external 9-volt battery, though it
accepts voltages between 7 and 20 volts. It is similar to the Arduino Nano and
Leonardo. The hardware reference design is distributed under a Creative
Commons Attribution Share-Alike 2.5 license and is available on the Arduino
website. Layout and production files for some versions of the hardware are also
available.
4.1.3 Buzzer
A buzzer or beeper is an audio signaling device, which may
be mechanical, electromechanical, or piezoelectric (piezo for short). Typical uses of
buzzers and beepers include alarm devices, timers, and confirmation of user input
such as a mouse click or keystroke.

4.1.4 LED
In the simplest terms, a light-emitting diode (LED) is a semiconductor device that
emits light when an electric current is passed through it. Light is produced when the
particles that carry the current (known as electrons and holes) combine within the
semiconductor material.


4.1.5 Servomotor (SG90)

The main characteristic is that servo motors with magnetic encoder and brushless
motor are used. In this way it is no longer necessary to modify the servos to obtain the
desired rotation. Simply program the limits of the servos and choose the configuration
with the torque that best suits your needs.
4.1.6 Contactless Temperature Sensor MLX90614

The MLX90614 is an infrared thermometer for non-contact temperature


measurements. Both the IR sensitive thermopile detector chip and the signal
conditioning ASIC are integrated in the same TO-39 can.

4.1.7 HC-05 Bluetooth Module
The HC-05 replaces cable connections and uses serial communication to communicate
with the electronics. It is usually used to connect small devices like mobile phones
over a short-range wireless connection to exchange files. It uses the 2.45 GHz
frequency band.

CHAPTER 5

SYSTEM DESIGN

The proposed system model is illustrated using various system modeling techniques
such as Architecture diagram, Use Case Diagram, Activity Diagram and Sequence
Diagram along with the Flow Chart.

5.1 Architecture Diagram

Figure 5.1: Architecture Diagram

Description:
Figure 5.1 shows the architecture of the proposed system. The neural network is
trained on the dataset, and the output obtained from TensorFlow is converted to a
.tflite file. ML Kit is used as an easy-to-use package on the device and to run the
.tflite file. Android Studio is used to develop the application; an APK file is obtained
and can be installed on the Android device. The user can use the app on a smartphone
to scan the fruit/vegetable.
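
Before the .tflite file is packaged into the APK, the converted model can be sanity-checked from Python with the TensorFlow Lite interpreter. The sketch below is illustrative; the file name and input are placeholders, and the on-device inference itself is done through ML Kit as described above.

```python
# Minimal sketch (assumed file name): checking the converted .tflite
# model in Python before bundling it with the Android app.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy image of the expected input shape.
dummy = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Class scores:", scores)
```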

5.2 Flow Chart

Figure 5.2: Flow Chart


Description:

Figure 5.2 illustrates the flow chart of the fruit/vegetable grading system. The system
starts by classifying what type of fruit or vegetable the given product is. The
fruit/vegetable is then given one of the following ranks: 1, 2 or defective. After the
rank classification, the shelf-life is assigned one of three classes. The process ends
after the shelf-life is determined.
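
The control flow of Figure 5.2 can be summarised in the sketch below; the three model objects and their predict calls are hypothetical placeholders for the trained classifiers, used only to show the order of the steps.

```python
# Minimal sketch of the grading flow in Figure 5.2. The classifier
# objects are hypothetical placeholders; only the control flow
# (type -> rank -> shelf-life) mirrors the flow chart.
from typing import Optional


def grade_produce(image, type_model, rank_model, shelf_model) -> Optional[dict]:
    """Classify type, then rank (1, 2 or defective), then shelf-life."""
    produce_type = type_model.predict(image)   # e.g. "tomato"
    if produce_type is None:
        return None  # nothing recognised; the app shows an error message

    rank = rank_model.predict(image)           # "1", "2" or "defective"
    shelf_life = shelf_model.predict(image)    # one of three shelf-life classes

    return {"type": produce_type, "rank": rank, "shelf_life": shelf_life}
```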

5.3 Use Case Diagram
Use case diagram is a Unified Modeling Language (UML) diagram which represents
the relationship between the various use cases and actors. The system includes two
actors: User and the Developer.

Figure 5.3: Use Case Diagram

Description:

The Use Case Diagram shown in Figure 5.3 illustrates the various use cases of the
fruit grading system. The end user of this system can use the mobile application to
scan the fruit/vegetable and get the name, rank and shelf-life of the fruit or vegetable
as output.

5.4 Activity Diagram

Figure 5.4: Activity Diagram

Description:

Figure 5.4 shows the order of execution of the Android application. When the
application is opened, the device camera is first accessed. When the user points the
camera towards the fruit/vegetable, the respective frames are obtained. If the
application is unable to detect the required features, then the execution ends with an
error message shown. Otherwise, the application starts classifying the fruit/vegetable
based on its features. Using the required datasets, the name, grade, and shelf-life of
the fruit/vegetable is generated.

5.5 Sequential Diagram

Figure 5.5: Sequence Diagram


Description:

The sequence diagram shown in Figure 5.5 illustrates the interaction between the user
and the application. The user first opens the application to scan the fruit/vegetable.
The application will respond with an error message if the fruit/vegetable is
undetected. Otherwise, the application classifies the name, rank and shelf-life
respectively and they are shown as output.

5.6 UI Design

Figure 5.6: UI Design


Description:

Figure 5.6 shows the UI design of the Android mobile application, including the
starting page and the scanning area, which displays the information regarding the
fruit/vegetable.

REFERENCES

[1] "IoMT: Rinku's Clinical Kit Applied to Collect Information Related to COVID-19
Through Medical Sensors," IEEE Latin America Transactions, vol. 19, no. 6, June 2021.

[2] S. Malakar, W. Chiracharit, K. Chamnongthai and T. Charoenpong, "Masked Face
Recognition Using Principal Component Analysis and Deep Learning," ECTI-CON
2021 - Smart Electrical Systems and Technology, 2021.

[3] J. Prinosil and O. Maly, "Detecting Faces with Masks," 2021 44th International
Conference on Telecommunications and Signal Processing (TSP), 2021.

[4] Hong, Z. Wang, Z. He, N. Wang, X. Tian and T. Lu, "Masked Face Recognition
with Identification Association," 2020 IEEE 32nd International Conference on
Tools with Artificial Intelligence (ICTAI), 2020, pp. 731-735, doi:
10.1109/ICTAI50040.2020.00116.

[5] K. Zhang, X. Jia, Y. Wang, H. Zhang and J. Cui, "Detection System of Wearing
Face Masks Normatively Based on Deep Learning," 2021 International
Conference on Control Science and Electric Power Systems (CSEPS), 2021, pp.
35-39, doi: 10.1109/CSEPS53726.2021.00014.

[6] X. Fan, M. Jiang and H. Yan, "A Deep Learning Based Light-Weight Face
Mask Detector With Residual Context Attention and Gaussian Heatmap to Fight
Against COVID-19," in IEEE Access, vol. 9, pp. 96964-96974, 2021, doi:
10.1109/ACCESS.2021.3095191.

[7] B. Wang, Y. Zhao and C. L. P. Chen, "Hybrid Transfer Learning and Broad
Learning System for Wearing Mask Detection in the COVID-19 Era," in IEEE
Transactions on Instrumentation and Measurement, vol. 70, pp. 1-12, 2021, Art
no. 5009612, doi: 10.1109/TIM.2021.3069844.

[8] S. Srinivasan, R. Rujula Singh, R. R. Biradar and S. Revathi, "COVID-19
Monitoring System using Social Distancing and Face Mask Detection on
Surveillance video datasets," 2021 International Conference on Emerging Smart
Computing and Informatics (ESCI), 2021, pp. 449-455, doi:
10.1109/ESCI50559.2021.9396783.

[9] I. B. Venkateswarlu, J. Kakarla and S. Prakash, "Face mask detection using


MobileNet and Global Pooling Block," 2020 IEEE 4th Conference on Information
& Communication Technology (CICT), 2020, pp. 1-5, doi:
10.1109/CICT51604.2020.9312083.


[10] M. Xu, H. Wang, S. Yang and R. Li, "Mask wearing detection method based on
SSD-Mask algorithm," 2020 International Conference on Computer Science and
Management Technology (ICCSMT), 2020, pp. 138-143, doi:
10.1109/ICCSMT51754.2020.00034.

[11] W. Vijitkunsawat and P. Chantngarm, "Study of the Performance of Machine


Learning Algorithms for Face Mask Detection," 2020 - 5th International
Conference on Information Technology (InCIT), 2020, pp. 39-43, doi:
10.1109/InCIT50588.2020.9310963.

[12] A. Das, M. Wasif Ansari and R. Basak, "Covid-19 Face Mask Detection Using
TensorFlow, Keras and OpenCV," 2020 IEEE 17th India Council International
Conference (INDICON), 2020, pp. 1-5.

[13] R. Kuchta and R. Vrba, "Wireless Temperature Sensor System," International


Conference on Networking, International Conference on Systems and
International Conference on Mobile Communications and Learning
Technologies (ICNICONSMCL'06), 2006, pp. 163-163, doi:
10.1109/ICNICONSMCL.2006.230.

[14] S. Sakshi, A. K. Gupta, S. Singh Yadav and U. Kumar, "Face Mask Detection
System using CNN," 2021 International Conference on Advance Computing and
Innovative Technologies in Engineering (ICACITE), 2021, pp. 212-216, doi:
10.1109/ICACITE51222.2021.9404731.

[15] M. M. Rahman, M. M. H. Manik, M. M. Islam, S. Mahmud and J. -H. Kim, "An


Automated System to Limit COVID-19 Using Facial Mask Detection in Smart
City Network," 2020 IEEE International IOT, Electronics and Mechatronics
Conference (IEMTRONICS), 2020

[16] K. N. S. Kumar, G. B. A. Kumar, P. P. Rajendra, R. Gatti, S. S. Kumar and N.


Nataraja, "Face Mask Detection and Temperature Scanning for the Covid-19
Surveillance System," 2021 International Conference on Recent Trends on
Electronics, Information, Communication & Technology (RTEICT), 2021

[17] P. Ulleri, M. S., S. K., K. Zenith and S. S. N.B., "Development of Contactless
Employee Management System with Mask Detection and Body Temperature
Measurement using TensorFlow," 2021 Sixth International Conference on
Wireless Communications, Signal Processing and Networking (WiSPNET), 2021.

[18] A. Rahman, M. S. Hossain, N. A. Alrajeh and F. Alsolami, "Adversarial


Examples—Security Threats to COVID-19 Deep Learning Systems in Medical
IoT Devices," in IEEE Internet of Things Journal, vol. 8, no. 12, pp. 9603-9610,
June 2021.

[19] M. S. Abd Rahim, F. Yakub, A. R. Mohd Hanapiah, M. Z. Ab Rashid and S. A.


Zaki Shaikh Salim, "Development of Low-Cost Thermal Scanner and Mask
Detection for Covid-19," 2021 60th Annual Conference of the Society of
Instrument and Control Engineers of Japan (SICE), 2021.

[20] I. Farady, C. -Y. Lin, A. Rojanasarit, K. Prompol and F. Akhyar, "Mask


Classification and Head Temperature Detection Combined with Deep Learning
Networks," 2020 2nd International Conference on Broadband Communications,
Wireless Sensors and Powering (BCWSP), 2020.
