
CSE1901 TECHNICAL ANSWERS TO REAL WORLD PROBLEMS

Project Based Component Assessment - 5

By

20BCE2004 - Hrithik Purwar


20BCE0802 - Rishabh Agrawal
20BCE2590 - Sidharth Pidaparty
20BCE2442 - AC Akhil
20BCE2030 - Neha Baggan

School of Computer Science and Engineering

Demo Video
https://drive.google.com/drive/folders/1-9Ejdxi0NnRAa4oemgO6cmhTp7IjMa4_?usp=share_link

Academic Paper
https://drive.google.com/drive/folders/18a2t42l1k4xQ8Zugh0uyhOw-AOoK17dL?usp=sharing

Testing
Camera

| Test Case ID | Test Objective | Test Data | Expected Results | Actual Results | Pass/Fail |
| --- | --- | --- | --- | --- | --- |
| TC_1 | Angle of camera w.r.t. line of sight | Camera at 0 degrees | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_2 | Angle of camera w.r.t. line of sight | Camera at 15 degrees | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_3 | Angle of camera w.r.t. line of sight | Camera at 30 degrees | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_4 | Angle of camera w.r.t. line of sight | Camera at 45 degrees | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_5 | Angle of camera w.r.t. line of sight | Camera at 50 degrees | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_6 | Varying light conditions | Low light conditions | All edge points are detected and face is recognised | The EAR (eye aspect ratio) could not be computed | Fail |
| TC_7 | Varying light conditions | Bright light conditions | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_8 | Facial accessories | Transparent glasses | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_9 | Facial accessories | Reflective sunglasses | All edge points are detected and face is recognised | Only 33% of the EAR computations could be done and the accuracy drop-off was significant | Fail |
| TC_10 | Facial accessories | Cap up to the hairline | All edge points are detected and face is recognised | All edge points are detected and face is recognised | Pass |
| TC_11 | Facial accessories | Cap covering the forehead | All edge points are detected and face is recognised | Only 50% of the EAR computations could be done and the accuracy drop-off was significant | Fail |

GPS

| Test Case ID | Test Objective | Test Data | Expected Results | Actual Results | Pass/Fail |
| --- | --- | --- | --- | --- | --- |
| TC_12 | Location | Transmission of location data from GPS to app | Accurate location is transmitted to app | Accurate location is transmitted to app | Pass |
| TC_13 | Internet connectivity | Location data transmission with low bandwidth | Accurate location is transmitted to app | Location data is not accurately transmitted | Fail |

Driver Performance

| Test Case ID | Test Objective | Test Data | Expected Results | Actual Results | Pass/Fail |
| --- | --- | --- | --- | --- | --- |
| TC_14 | Braking metric | Accelerometer data transmission and display | Braking is detected | Braking is detected | Pass |
| TC_15 | Acceleration metric | Accelerometer data transmission and display | Acceleration is shown on app | Acceleration is shown on app | Pass |
| TC_16 | Speed metric | Accelerometer data transmission and display | Speed is shown in app | Speed is shown in app | Pass |
| TC_17 | Internet connectivity | Performance during good internet connectivity | Driver performance is calculated | Driver performance is calculated | Pass |
| TC_18 | Internet connectivity | Performance during bad internet connectivity | Driver performance is calculated | Driver performance is not calculated | Fail |
| TC_19 | Score calculation | Transmission of data and calculation based on metrics | Score is displayed on app | Score is displayed on app | Pass |

Problem Statement
Driver sleepiness has emerged as one of the key factors in recent car accidents, which can result in fatalities, serious bodily injuries, and large monetary losses. Many night drivers become sleepy during their drive, which is why car crash cases peak at night, especially crashes involving trucks. Many drivers also unknowingly put on music that makes them feel drowsy and tired instead of active and awake.
Thus, there is a need to develop a solution that can effectively meet this demand and reduce car crashes at the earliest.

Motivation
India has a vast network of roads that interlinks cities and towns with each other. A huge number of people use these roads to travel from one place to another. Driving is an activity that requires the driver's full attention and focus. Driving on highways happens at very high speeds, where any small mistake can lead to serious and fatal injuries. Driving can also be a boring, tedious, and dull job, especially over long distances. People tend to listen to music or talk over the phone while driving, which can also lead to distraction. A driver who falls asleep at the wheel loses control of the vehicle, an action that often results in a crash with either another vehicle or a stationary object.

Sleep deprivation is one of the causes of drowsiness. Studies have shown that lack of sleep for long hours can have the same effect on the body as alcohol, which poses a serious threat. It can impair the conscious mind's ability to carry out intellectual activity such as thinking, reasoning, judging situations, recollection, and remembering. It can seriously affect reaction time, reducing it to a point where the driver is not able to react in time to any event. It can affect a person's emotional capacity and cause them to make wrong decisions. Driving late at night is also dangerous because of low visibility; this requires the driver's full attention and focus on the road, not only for their own safety but also for the safety of others.

A driver should be responsible for taking adequate rest and being fully alert. A precaution against such devastating accidents would be to deploy a device that can monitor the drowsiness of the person behind the wheel and keep track of their attention and focus. It should bring to the driver's attention that they are dozing off behind the wheel, and to prevent an accident they should immediately take a break, refresh, and resume driving.
If proper monitoring is done, it can contribute to saving precious lives and keep drivers and other people safe. In this project we propose a method to help keep people behind the wheel awake and aware when they are feeling drowsy or sleepy, in an attempt to save lives.

According to available statistical data, over 1.3 million people die each year on the road,
and 20 to 50 million people suffer non-fatal injuries due to road accidents. Based on
police reports, the US National Highway Traffic Safety Administration (NHTSA) has
conservatively estimated that a total of 100,000 vehicle crashes each year are the direct
result of driver drowsiness. These crashes resulted in approximately 1,550 deaths,
71,000 injuries, and $12.5 billion in monetary losses. In 2009, the US National Sleep
Foundation (NSF) reported that 54% of adult drivers have driven a vehicle while feeling
drowsy, and 28% of them actually fell asleep. India is one of the countries with a very high rate of death due to road accidents. Besides fatalities, many people are seriously injured, which can leave them handicapped for the rest of their lives. In the last decade, over 13 lakh people have lost their lives due to road accidents and around 50 lakh have been seriously injured. India contributes around 11% of global deaths due to road accidents, which corresponds to roughly 4.5 lakh accidents a year claiming almost 1.5 lakh lives. Many of these accidents happen on highways and expressways, involving large commercial vehicles. A study conducted in 2020 by the SaveLIFE Foundation and Mahindra revealed that almost 50% of truck drivers feel sleepy or tired while driving. Many of these drivers suffer from Obstructive Sleep Apnea (OSA), which can create a very risky situation behind the wheel; this disorder can increase the risk of an accident by almost 300%. The economic damage is also substantial: according to the Ministry of Road Transport and Highways, the total socio-economic burden of road crashes in India is close to 1,47,000 crore rupees, roughly 0.77% of the country's GDP.

Project Outcome
The goal of this project is to develop a system that can accurately detect sleepy driving and raise alarms accordingly, aiming to prevent drivers from drowsy driving and create a safer driving environment.
- The project will be accomplished by a camera module that continuously captures video frames of the driver in real time; an image processing and machine learning sleep-detection algorithm will continuously make predictions on those frames. Benefiting from learned features, the widely used Face-Landmarks-68 model can detect when the eyes become sleepy; the exported model can then be integrated into a device as an application that triggers alarms in the vehicle.
- So, our main goal in this project is to develop an end-to-end application with real-time monitoring of the driver's face that can trigger the alarm in the car as soon as it detects the driver losing concentration.

We can provide our product to the audience in two different forms:

1. Pocket-size dashboard device
- We aim to develop a pocket-sized device that can be mounted on a vehicle's dashboard.
- It will be integrated with a small camera module, a buzzer, a Bluetooth module, and a memory chipset.
- The memory chipset will contain the buzzer sound to be played, the ML model, and our code script, which directs the camera module to capture frames in real time; the ML model will run predictions on each frame and, on detecting the onset of sleep, will send a message that triggers either the inbuilt buzzer module or directs the Bluetooth module to play the buzzer sound through the car's speakers.
- The cost of this device is very low, so it is feasible for common people as well.

2. Android Application
- We can also offer our product as an Android application for those who do not want to purchase the device.
- In this case, instead of the hardware device, the user may install the application on their phone.
- Since phones have all the necessary components (a front camera module, a speaker, a Bluetooth module, and memory), the application can access these resources; it will be integrated with the ML algorithm out of the box and can be used in the same way, capturing the driver's face in real time, running predictions on it, and, on predicting the onset of sleep, triggering a buzzer through the phone's inbuilt speaker or the car's speakers connected over Bluetooth.
- The Android phone can be fixed in front of the driver's face in the same way drivers mount their phone to view Google Maps, using a spring-extension phone holder.
- The cost of this application will be zero, so anyone can install it; developer revenue will come from advertisements and safety-authority contracts.

Project Plan
Introduction

Recent car accidents have been linked to driver sleepiness, which can cause fatalities, major injuries, and substantial financial losses. Sleepy night drivers cause many car crashes, especially truck crashes. Many drivers unwittingly play music that makes them sleepy. Thus, a solution is needed to meet this demand and reduce car crashes quickly.

India's road network connects cities and towns, and many people travel on these roads. Driving takes full concentration. Highway driving is dangerous because of its high speeds, and long-distance driving is tedious. Music and phone calls can also distract drivers. Drivers who fall asleep regularly crash into other vehicles or stationary objects.

Sleep deprivation produces sleepiness. Studies have demonstrated that long-term sleep deprivation is as dangerous as drinking. It can hinder thinking, reasoning, judging, recalling, and remembering. It can slow reaction time to the point where the driver cannot react in time. It can impair emotional capability and lead to bad decisions. Due to reduced visibility, late-night driving is risky for drivers and others.

Drivers must rest and stay alert. A technology that monitors driver tiredness and concentration could prevent such fatal catastrophes. It should alert motorists that they are nodding off, so that they take a break, refresh, and resume driving to avoid an accident.

Monitoring can save lives and protect drivers and others. This project proposes a way to keep drivers awake and aware of drowsiness in order to save lives. It intends to develop a system that can accurately detect sleepy driving and notify drivers, preventing drowsy driving and making driving safer.

A camera module will capture video frames of the driver in real time, and a drowsiness detection system will run predictions on those frames. Benefiting from learned features, the Face-Landmarks-68 model can identify drowsy eyes and be integrated into a device as an application that triggers vehicle alarms.

Our main goal in this project is to design an end-to-end application with real-time face monitoring that can activate the car's alarm when the driver loses attention.
Literature Review

1. [Raehat] Dua, Mohit, et al. "Deep CNN models-based ensemble approach to driver drowsiness detection." Neural Computing and Applications 33 (2021): 3155-3168.
Summary: This paper presents a literature review of driver drowsiness detection based on behavioural measures using machine learning techniques. Faces contain information that can be used to interpret levels of drowsiness, and many facial features can be extracted from the face to infer the level of drowsiness.
Pros: Use of advanced technologies like ResNet for detecting driver drowsiness.
Cons: Too many inputs are taken into consideration, making the deployed model quite complex; this can be a problem when trying to improve the model's accuracy.

2. [Raehat] Ahmed, M., Masood, S., Ahmad, M., & Abd El-Latif, A. A. (2021). Intelligent driver drowsiness detection for traffic safety based on multi CNN deep model and facial subsampling. IEEE Transactions on Intelligent Transportation Systems, 23(10), 19743-19752.
Summary: Facts reveal that numerous road accidents worldwide occur due to fatigue, drowsiness, and distraction while driving. The few works on the automated drowsiness detection problem propose extracting physiological signals of the driver, including ECG, EEG, heart rate variability, blood pressure, etc., which makes those solutions non-ideal.
Pros: Avoids the use of slow-performing algorithms for the task.
Cons: The deep learning model may show lower accuracy in a poorly lit environment.

3. [Raehat] Jabbar, Rateb, Mohammed Shinoy, Mohamed Kharbeche, Khalifa Al-Khalifa, Moez Krichen, and Kamel Barkaoui. "Driver drowsiness detection model using convolutional neural networks techniques for android application." In 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), pp. 237-242. IEEE, 2020.
Summary: A sleepy driver is arguably much more dangerous on the road than one who is speeding, as he is a victim of microsleeps. Automotive researchers and manufacturers are trying to curb this problem with several technological solutions that will avert such a crisis.
Pros: It proposes that a CNN model should be used for building ML models, as CNNs are usually more accurate at detecting faces than other models available in the field of machine learning.
Cons: The research paper does not elaborate on how an embedded system would work.

4. [Raehat] Wang, H., Xu, L., Bezerianos, A., Chen, C. and Zhang, Z., 2020. Linking attention-based multiscale CNN with dynamical GCN for driving fatigue detection. IEEE Transactions on Instrumentation and Measurement, 70, pp. 1-11.
Summary: To develop an efficient brain-computer interface (BCI) system, electroencephalography (EEG) measures neuronal activities in different brain regions through electrodes. Many EEG-based motor imagery (MI) studies do not make full use of brain network topology.
Pros: The paper proposes the use of M-GCN, in which temporal-frequency processing is performed, which is quite superior.
Cons: The paper also analyses brain activity at a fundamental level, which is beyond our scope.

5. [Raehat] Ngxande, M., Tapamo, J. R., Burke, M. Driver drowsiness detection using behavioural measures and machine learning techniques: A review of state-of-art techniques. 2017 Pattern Recognition Association of South Africa and Robotics and Mechatronics (PRASA-RobMech). 2017 Nov 30: 156-61.
Summary: This paper presents a literature review of driver drowsiness detection based on behavioural measures using machine learning techniques. Faces contain information that can be used to interpret levels of drowsiness.
Pros: The paper suggests using models that take only facial features as inputs, so developing such models will not be too complex and maintaining a model like this will be a simpler task.
Cons: Driver drowsiness cases are increasing with time, and such models may quickly become irrelevant.

6. [Rishabh] 4D: A Real-Time Driver Drowsiness Detector Using Deep Learning.
Summary: Eye abnormalities may suggest exhaustion, psychological issues, and more. This article illustrates how to develop a tiredness detection system that predicts driver eye health and driving risks.
Pros: The 4D CNN detected weariness.
Cons: The 4D model did not predict the eye state on 3% of the test dataset.

7. [Rishabh] System and Method for Driver Drowsiness Detection Using Behavioural and Sensor-Based Physiological Measures.
Summary: The hybrid approach uses AI-based Multi-Task Cascaded Convolutional Neural Networks (MTCNN) to recognize driver facial traits and a Galvanic Skin Response (GSR) sensor to measure skin conductance to increase accuracy.
Pros: The hybrid model detects driver sleepiness with 91% accuracy in all situations.
Cons: The model's efficacy was determined only through simulations.

8. [Rishabh] Driver Drowsiness Detection Using AI.
Summary: This study advises monitoring eye closure and yawning to detect driver weariness. It identifies eyes and lips in AIROS (American Institute of Road Safety) experiment recordings.
Pros: The structure works in any light once recognized.
Cons: The method reliably evaluates driver tiredness and drowsiness but needs touch measurement and has limitations.

9. [Rishabh] Driver Drowsiness Detection System Based On Eye Closure.
Summary: This article detects weariness using image processing and eye blinking. The suggested system is programmed in Python on a Raspberry Pi and utilises OpenCV and dlib; facial landmarks are used to extract the eye area.
Pros: Detecting drowsiness at an early stage can reduce the impact of an accident or avoid it completely.
Cons: More optimal non-intrusive ways to implement this do not exist.

10. [Neha] Sensor Applications and Physiological Features in Drivers' Drowsiness Detection: A Review.
Summary: The paper provides an overview of the current state of research in detecting driver drowsiness using various sensors and physiological features. It discusses sensor-based and physiological methods used for drowsiness detection, including eye-tracking, electroencephalogram (EEG), and heart rate variability. The paper also reviews the limitations and challenges of current drowsiness detection methods and provides suggestions for future research directions.
Pros: Sensor-based methods can provide objective and non-intrusive means of detecting drowsiness. Physiological features, such as EEG signals or heart rate variability, can provide insight into the level of drowsiness or fatigue experienced by a driver.
Cons: Some sensor-based methods may be prone to false alarms or misdetections. Physiological features may be influenced by factors other than drowsiness, such as stress or medical conditions, leading to inaccurate results. The cost and complexity of setting up and using some sensor-based or physiological methods may limit their practicality for widespread use.

11. [AKHIL] Trends and Future Prospects of the Drowsiness Detection and Estimation Technology.
Summary: Drowsiness detection and estimation technology is a rapidly growing field that aims to assess an individual's level of fatigue or sleepiness while driving or performing other activities. The technology uses physiological and behavioural indicators to provide a more comprehensive assessment of drowsiness and has the potential to improve road safety and reduce fatigue-related accidents. It can be integrated into vehicles or wearable devices, but its accuracy and cost can pose challenges to widespread adoption.
Pros: The technology has the potential to significantly improve road safety by detecting and alerting drivers who are experiencing excessive drowsiness or fatigue. Drowsiness detection systems can be integrated into vehicles or wearable devices, making them accessible to a wide range of users. The technology is based on multiple physiological and behavioural indicators, including eye movements, head movements, and heart rate variability, providing a more comprehensive assessment of drowsiness.
Cons: The accuracy of drowsiness detection technology varies depending on the specific method and system used, and many systems have not been validated in real-world conditions. The cost of implementing drowsiness detection systems can be a barrier to widespread adoption, particularly for individual users. There are also concerns about privacy and the potential misuse of personal data collected by drowsiness detection systems.

12. [AKHIL] Investigating Driver Fatigue versus Alertness Using the Granger Causality Network, by Wanzeng Kong, Weicheng Lin, Fabio Babiloni, Sanqing Hu and Gianluca Borghini.
Summary: A research paper that explores the use of Granger causality networks to distinguish between fatigue and alertness in drivers. Overall, the paper provides valuable insights into the potential of this approach; however, further research is needed to validate the findings and determine its feasibility and effectiveness in real-world driving conditions.
Pros: Provides a novel approach to detecting driver fatigue and alertness using Granger causality networks, which offers an objective method for measuring changes in a driver's cognitive state. The results demonstrate the potential for Granger causality networks to accurately distinguish between fatigue and alertness in drivers. The findings have important implications for developing effective strategies to reduce the risk of drowsy driving incidents and improve road safety.
Cons: The study was conducted with a small sample size, and further research is needed to validate the findings with a larger and more diverse sample. Granger causality networks may not be practical for widespread use in real-world driving conditions, as they require specialised equipment and expertise. The study does not examine the effectiveness of using Granger causality networks in reducing drowsy driving incidents or improving road safety.

13. "Driving with It is a research paper that Evaluation of a The study has a
[AKHIL] drowsy detection explores the use of Granger drowsy driving limited sample
based on EEG causality networks to detection system size and may
signals" (2015) by distinguish between fatigue and based on EEG not accurately
Y. Wang alertness in drivers. signals. reflect the
performance of
Overall, this paper provides Comparison of the the EEG-based
valuable insights into the performance of the system in a
potential of using Granger EEG-based system real-world
causality networks to with other driving
distinguish between fatigue and commonly used environment.
alertness in drivers. However, drowsy driving
further research is needed to detection methods, The paper
validate the findings and such as eye-blink primarily
determine the feasibility and detection and focuses on
effectiveness of this approach head-pose laboratory-base
in real-world driving estimation. d experiments,
conditions. and the results
Discussion of the may not be
potential benefits of directly
using EEG signals applicable to
for drowsy driving real-world
detection, driving
such as the ability scenarios.
to capture changes
in brain activity The study only
before a driver evaluates the
becomes physically EEG-based
drowsy. system and
does not
Identification of the consider other
limitations of the potential
EEG-based system factors that may
and areas for impact driving
improvement. performance,
such as
distractions and
road conditions.

14. [AKHIL] Sleep Deprivation Effect on Human Performance: A Meta-Analysis Approach (2006), by Candice Griffith and Sankaran Mahadevan.
Summary: Sleep deprivation has a significant negative effect on human performance, as demonstrated by numerous studies and meta-analyses. This effect is seen in various aspects of cognition, such as attention, memory, reaction time, and decision making. Sleep deprivation also affects emotional regulation and increases the risk of mood disorders, such as depression and anxiety. Additionally, sleep deprivation has been linked to poor physical performance, increased risk of accidents and errors, and decreased overall productivity.
Pros: Comprehensive meta-analytic review of the existing literature on the effects of sleep loss and fatigue on driving performance. Quantitative synthesis of the results from multiple studies, providing a more robust and reliable estimate of the impact of sleep loss and fatigue on driving performance. Considers individual factors, such as age and sleep disorders, that may moderate the effects of sleep loss and fatigue on driving performance. Identifies the specific driving-related skills that are most affected by sleep loss and fatigue.
Cons: The meta-analytic review relies on the quality of the individual studies included in the analysis, and the results may be impacted by the potential for bias or confounding factors in the original studies. The review primarily treats sleep loss and fatigue as separate factors, rather than considering their potential interactive effects on driving performance.

15. [AKHIL] The effects of sleep loss on young drivers' performance: A systematic review.
Summary: A systematic review of the literature on the effects of sleep loss on young drivers' performance found that sleep deprivation can have a significant impact on driving, including decreased reaction time and increased risk of accidents. The effects of sleep loss are more pronounced in young drivers than older drivers, due to their developmental stage and lower levels of experience. Despite this, there is a lack of consensus in the literature regarding the specific impact of sleep loss on different aspects of driving performance, and many studies have been limited by small sample sizes and laboratory-based designs. In conclusion, it is clear that sleep loss can negatively impact young drivers' performance and increase the risk of accidents.
Pros: Lack of sleep can have a significant impact on driving performance, including decreased reaction time and increased risk of accidents. Sleep loss has been found to have a greater impact on young drivers compared to older drivers, due to their developmental stage and lower levels of experience on the road.
Cons: There is a lack of consensus in the literature regarding the specific impact of sleep loss on different aspects of driving performance, such as attention, perception, and decision-making. Many studies have been limited by small sample sizes, short-term sleep deprivation, and laboratory-based designs that may not accurately reflect real-world driving conditions.

16. [Neha] Head movement-based driver drowsiness detection: A review of state-of-art techniques.
Summary: The paper reviews state-of-the-art techniques for detecting driver drowsiness based on head movement. The authors analyse various head movement-based drowsiness detection methods, including visual-based techniques and wearable devices, and evaluate their performance and limitations.
Pros: Non-intrusive: head movement-based drowsiness detection methods do not require intrusive sensors or devices, making them more practical for widespread use. Potential for high accuracy: the authors suggest that head movement-based methods have the potential to be highly accurate for detecting drowsiness.
Cons: Need for further research: the authors conclude that more research is needed to improve the accuracy and reliability of head movement-based drowsiness detection techniques. May not be suitable for all drivers: the effectiveness of head movement-based drowsiness detection may vary between drivers, depending on their driving habits and posture.

17. [Neha] Real time drowsiness detection using eye blink monitoring.
Summary: The paper describes a real-time drowsiness detection system that uses eye blink monitoring. The authors propose a system that uses a webcam to track eye blinks and detect drowsiness based on changes in the frequency of eye blinks. The system was tested and found to be effective for detecting drowsiness in real-world scenarios.
Pros: Real-time detection: the system can provide immediate alerts to drivers. Non-intrusive: the system uses a webcam to track eye blinks, making it practical for widespread use. Effective in real-world scenarios: the authors report that the system was tested and found to be effective for detecting drowsiness in real-world scenarios.
Cons: The effectiveness of eye blink monitoring as a method for drowsiness detection may vary between drivers, depending on their driving habits and eye movements. Potential for false alarms: the system may generate false positive drowsiness alerts if it misinterprets other factors, such as a driver's eye movements, as drowsiness.

18. [Neha] Heart Beat Based Drowsiness Detection System for Driver.
Summary: The paper presents a drowsiness detection system for drivers based on heart rate monitoring. The authors tested the system and found it to be effective and efficient in detecting drowsiness compared to traditional EEG-based systems.
Pros: Uses heart rate monitoring, which is more efficient and cost-effective than EEG-based systems. Was tested and found to be effective in detecting drowsiness.
Cons: Limited information on the reliability of the system in real-world driving scenarios. The impact of the system on driver distraction and road safety has not been fully explored.

19. [Neha] Driver drowsiness detection using mixed-effect ordered logit model considering time cumulative effect. Analytic Methods in Accident Research.
Summary: The paper presents a method for detecting driver drowsiness using a mixed-effect ordered logit model. The authors propose using a mixed-effect model that considers the time cumulative effect of drowsiness to improve the accuracy of drowsiness detection.
Pros: The mixed-effect model takes into account the cumulative effect of drowsiness, which is important for accurate detection. The proposed method is able to effectively detect drowsiness.
Cons: The study is limited in scope, and further testing and validation are needed to fully understand the potential of the mixed-effect model for drowsiness detection. The authors do not address the potential impact of the method on driver distraction and road safety.

20. [Rishabh] Drowsiness Detection using Fuzzy Inference System.
Summary: A low-cost device for detecting sleepy drivers uses a video imaging camera in front of the driver to create visuals.
Pros: The FIS, an artificial intelligence algorithm embedded on a Raspberry Pi 3, achieves 95% accuracy.
Cons: Eyeglasses and women's cosmetics reduce the system's accuracy.

21. [Hrithik] Wierwille, W. W. (1995). Overview of research on driver drowsiness definition and driver drowsiness detection. In Proceedings: International Technical Conference on the Enhanced Safety of Vehicles (Vol. 1995, pp. 462-468). National Highway Traffic Safety Administration.
Summary: Two of the reviewed studies addressed drowsiness definition issues, while the other two addressed online detection. The first definitional study used observer rating, and the second used physiological measures to predict decreases in task performance; both methods seem promising. These definitions and others could be linked to vehicle measurements like steering, lane position, and lateral acceleration to create drowsiness detection algorithms. The first detection study developed algorithms and estimated accuracy, and the second validated typical algorithms. The paper reviews the four studies and evaluates online drowsiness detection.
Pros: Both approaches gave promising results. The algorithm gave fairly good accuracy.
Cons: These algorithms have a high rate of false negatives.

22. [Hrithik] Saini, V., & Saini, R. (2014). Driver drowsiness detection system and techniques: a review. International Journal of Computer Science and Information Technologies, 5(3), 4245-4249.
Summary: The paper describes many driver fatigue detection technologies, examining emerging technologies to find the best ways to prevent the leading cause of fatal vehicle crashes. The market's best-selling product is a head-angle tilt reed switch, whose use is limited and ineffective. BMW's driver fatigue detection system in high-end cars is slightly more effective but does not warn drivers. The market and its technologies are young, and new technologies use different methods.
Pros: The study shows that algorithms such as EEG, LBP, steering wheel movement analysis, optical detection, and the eye blink technique can provide good results.
Cons: A few techniques, like yawning detection and head nodding detection, are quite unreliable. The product made by BMW is expensive.

23. [Hrithik] Ramzan, M., Khan, H. U., Awan, S. M., Ismail, A., Ilyas, M., & Mahmood, A. (2019). A survey on state-of-the-art drowsiness detection techniques. IEEE Access, 7, 61904-61919.
Summary: The systematic review describes behavioural, vehicular, and physiological drowsiness detection methods and explains the pros and cons of each. Physiological parameters-based methods yielded the most accurate results in the comparative analysis; wireless sensors on the driver's body, seat, seat cover, steering wheel, etc. reduce intrusion. Hybrids of these techniques, such as physiological measures combined with vehicular or behavioural measures, overcome the problems associated with individual techniques and improve drowsiness detection results; for example, combining ECG and EEG features yields high-performance results, demonstrating that combining physiological signals improves performance. The paper presents the best supervised learning methods and discusses their pros and cons. Classifier accuracy varies by situation: SVM is the most commonly used classifier and gives better accuracy and speed in most situations, but it is not suitable for large datasets; CNNs and HMMs train slower and cost more than SVM classifiers, but HMMs have a lower error rate.
Pros: Physiological parameters-based techniques give more accurate results than others. Hybrids of these technologies give even better results. EEG and ECG provide high-performance results.
Cons: None of the described techniques provides full accuracy. CNNs and HMMs are slow and expensive to train. SVMs are not suitable for larger datasets.

24. [Hrithik] Hu, S., & Zheng, G. (2009). Driver drowsiness detection with eyelid related parameters by Support Vector Machine. Expert Systems with Applications, 36(4), 7651-7658.
Summary: This paper examines the use of multiple eyelid movement features to detect driver drowsiness using a Support Vector Machine (SVM). Multiple features should improve prediction precision and robustness. The paper uses data from a VTI (Swedish National Road and Transport Research Institute) simulated driving experiment in the EU project SENSATION. SVM is reviewed for classification problems, the experiment is introduced, and then SVM is used to predict driver drowsiness with eyelid movement parameters derived from physiological signals, followed by conclusions and discussion.
Pros: The algorithm gives a 5-fold cross-validation accuracy of 80%.
Cons: 16.67% of predictions are false negatives. Only sleep-deprived subjects were included, with no data from alert driving conditions. Simulator driving was used instead of real driving.

25. [Hrithik] Vicente, J., Laguna, P., Bartra, A., & Bailón, R. (2016). Drowsiness detection using heart rate variability. Medical & Biological Engineering & Computing, 54, 927-937.
Summary: This work developed two HRV-based driver drowsiness detectors. The online drowsiness-episode detector alarms drivers when they become drowsy or fatigued while driving; using seven features from a database of simulated and real driving recordings, its P+, Se, and Sp are 0.96, 0.59, and 0.98. Before driving, the sleep-deprivation detector determines whether the driver is sleep-deprived and therefore unfit to drive, since sleep-deprived people should not drive. With a P+ of 0.80, Se of 0.62, and Sp of 0.88, the classifier determines sleep deprivation in 3 minutes.
Pros: Provided promising results. 96% accuracy.
Cons: A small-sized population was used.

26. [Sidharth] H. Ueno, M. Kaneda and M. Tsukino, "Development of drowsiness detection system," Proceedings of VNIS'94 - 1994 Vehicle Navigation and Information Systems Conference, Yokohama, Japan, 1994, pp. 15-20, doi: 10.1109/VNIS.1994.396873.
Summary: This paper evaluates the accuracy of detecting driver alertness using a method developed by the authors via image processing. The system uses video cameras to capture a series of images of the driver's face, and the results obtained from the detection system were corroborated with brain wave data collected throughout the test.
Pros: Despite being conducted in an era with camera resolution and processing power limitations, the study found that the system had a high level of accuracy and is reliable at detecting drowsiness. It judges the driver's alertness level continuously over time and provides a means of early detection of reduced alertness.
Cons: The system was built on camera technology and mobile computation limits from the 90s. The test subjects selected for evaluation were all drowsy, so there was no scope for evaluating false positives. Additionally, the system's reliability was tested in a controlled environment without changes in ambient brightness.

27. [Sidharth] W. Deng and R. Wu, "Real-Time Driver-Drowsiness Detection System Using Facial Features," IEEE Access, vol. 7, pp. 118727-118738, 2019, doi: 10.1109/ACCESS.2019.2936663.
Summary: The paper proposes DriCare, a system that uses video footage to monitor fatigue levels, including yawns, blinks, and the duration of eye closure, without requiring the driver to wear equipment. It also uses a new detection method for facial regions based on 68 points, and the authors propose an MC-KCF algorithm to track the driver's face using CNN and MTCNN.
Pros: The system is evaluated to have an accuracy of 93.6%, which is 7.7% higher than the least accurate state-of-the-art model and 3.4% higher than the most accurate state-of-the-art model.
Cons: The hardware used to capture face scans significantly obstructs the driver's view and creates an extra blind spot. DriCare was evaluated only under optimal lighting conditions, and the accuracy after illumination enhancement of the image was not revealed.

28. [Sidharth] Stancin, I.; Cifrek, M.; Jovic, A. A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems. Sensors 2021, 21, 3786. https://doi.org/10.3390/s21113786
Summary: This paper presents an extensive review of the systematics and short descriptions of the existing characteristics of the EEG signal and of sleepiness detection systems, and discusses various possibilities to improve the state of the art in sleepiness detection systems.
Pros: Thanks to this review, future research could have a strong impact on the field of drowsiness detection systems through the development of a unified, standard definition and description of drowsiness, which would reduce subjective bias and make different studies easier to compare.
Cons: To reduce sample bias and increase the likelihood that a model will generalise well, large numbers of participants are needed, because electrophysiological signals have high inter-individual variability.

29. [Sidharth] Poursadeghiyan, Mohsen, et al. "Using image processing in the proposed drowsiness detection system design." Iranian Journal of Public Health 47.9 (2018): 1371.
Summary: Among the numerous facial characteristics, the eyes are significantly more important, and much research on processing the state of the eyes has been undertaken. IR-illuminator systems, for example, used criteria such as PERCLOS (Percentage of Eye Closure), length of eye closure, and number of blinks to measure alertness level. Drowsiness was diagnosed only using PERCLOS. The observation test was carried out using a tiny camera to determine the degree of tiredness.
Pros: The training data consisted of 9964 frames captured from the drowsiness of the five drivers, and the network was trained for 1000 epochs. 70% of the data was used for training and the remainder for testing. The mean squared errors for the trained and tested data were 0.0623 and 0.0700, respectively, and the accuracy was evaluated to be 93%.
Cons: Criteria like open/closed mouth and movements of the head and other facial features were not used. The test was conducted in a virtual driving simulator rig with unrealistic lighting conditions and camera positioning.

30. [Sidharth] Siddiqui, H.U.R.; Saleem, A.A.; Brown, R.; Bademci, B.; Lee, E.; Rustam, F.; Dudley, S. Non-Invasive Driver Drowsiness Detection System. Sensors 2021, 21, 4833. https://doi.org/10.3390/s21144833
Summary: This paper presents the classification of drowsy and non-drowsy driver states based on respiration rate detection by non-invasive, non-touch, impulse radio ultra-wideband (IR-UWB) radar. Chest movements of 40 subjects were acquired for 5 minutes using a lab-placed IR-UWB radar system, and respiration per minute was extracted from the resulting signals.
Pros: Using the respiration rate, the Support Vector Machine model had the highest accuracy of 87%. This study offers a foundation for the verification and evaluation of UWB for successful breathing-based driver sleepiness detection.
Cons: The first results are based on a tiny dataset; more data is needed to improve classifier accuracy. Only drivers of a specified age (30-50 years) and ethnic origin were taken into account.

31. [Divyansh] Sharma, P., & Sood, N. (2020, July). Application of IoT and Machine Learning for Real-time Driver Monitoring and Assisting Device. In 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT) (pp. 1-7). IEEE.
Summary: The rising number of automobiles on Indian roadways, combined with lax traffic regulation, results in numerous human-error-caused collisions and fatalities. The study proposes a driver monitoring and aiding device that uses IoT sensors such as an alcohol sensor and an air pressure sensor to check for sobriety, and machine learning algorithms to identify micro-sleep and frequent yawns to detect sleepiness.
Pros: A proper device-building strategy to easily detect drowsiness.
Cons: Needs a lot of user education about the product and a lot of capital to build.
32. [Divyansh] Dwivedi, K., Biswaranjan, K., & Sethi, A. (2014, February). Drowsy driver detection using representation learning. In 2014 IEEE International Advance Computing Conference (IACC) (pp. 995-999). IEEE.
Summary: This paper proposes an intelligent vision-based algorithm for detecting driver drowsiness. Previous methods relied on blink rate, eye closure, yawning, eyebrow shape, and other hand-crafted facial features. The proposed algorithm instead employs features learned using a convolutional neural network to explicitly capture various latent facial features as well as complex non-linear feature interactions, with a softmax layer used to determine whether the driver is drowsy. The system is thus used to warn drivers of drowsiness or lack of attention in order to avoid traffic accidents, and both qualitative and quantitative results are presented to support the claims made in the paper.
Pros: Identifies that all prior work on visual-cue-based driver drowsiness detection uses only hand-picked features; hand-engineered features comprise eye blink, eye closure, and expression detection features (a mixture of face wrinkles and eyebrow, lip, and cheek shapes), etc.
Cons: Although novel machine learning based algorithms use multiple cues, they are unable to exploit the complex relationships between the various features.

33. [Divyansh] Phan, A. C., Nguyen, N. H. Q., Trieu, T. N., & Phan, T. C. (2021). An efficient approach for detecting driver drowsiness based on deep learning. Applied Sciences, 11(18), 8441.
Summary: The approach uses deep learning techniques with two adaptive deep neural networks based on MobileNet-V2 and ResNet-50V2. The second method analyses the videos and detects the driver's activities in every frame to learn all the features automatically. The authors leverage transfer learning to train the proposed networks on their training dataset, which solves the problem of limited training data, provides a fast training time, and keeps the advantages of deep neural networks.
Pros: The authors leverage transfer learning by pre-training the proposed networks on datasets from the Bing Search API, Kaggle, and RMFD, then using the pre-trained weights and re-training them on their own dataset to fine-tune the networks' parameters. This helps solve the problem of small training datasets and gives a fast training time.
Cons: These studies focused on analyzing the mouth and eye regions to detect blinks and yawns without considering other regions of the head and face; therefore they are not accurate enough and have revealed many limitations for drowsiness detection in actual systems, because dozing is a natural state of the human body among other behaviors. Moreover, the physiological measures used in these works may not be feasible in practice.

34. [Divyansh] Altameem, A., Kumar, A., Poonia, R. C., Kumar, S., & Saudagar, A. K. J. (2021). Early identification and detection of driver drowsiness by hybrid machine learning. IEEE Access, 9, 162805-162819.
Summary: Physiological signals such as electroencephalogram (EEG) and electrooculography (EOG) recordings are very important non-invasive measures for detecting a person's alertness or drowsiness. Since EEG signals are non-stationary and present evident dynamic characteristics, conventional linear approaches are not highly successful at recognising the drowsiness level. Furthermore, previous methods cannot produce satisfying results without considering the basic rhythms underlying the raw signals. To address these drawbacks, the authors propose a system for drowsiness detection using physiological signals.
Pros: Physiological measurements such as EEG recordings, eyelid movement, galvanic skin response (GSR), heart rate, and pulse rate provide direct insight into human activities and achieve relatively accurate quantitative evaluation of drowsiness/alertness.
Cons: Since EEG signals are non-stationary and present evident dynamic characteristics, conventional linear approaches are not highly successful at recognising the drowsiness level.

35. [Divyansh] Chen, L. L., Zhao, Y., Zhang, J., & Zou, J. Z. (2015). Automatic detection of alertness/drowsiness from physiological signals using wavelet-based nonlinear features and machine learning. Expert Systems with Applications, 42(21), 7344-7355.
Summary: Sensors in self-driving cars must detect whether a driver is sleepy, angry, or experiencing extreme changes in emotion. These sensors must constantly monitor the driver's facial expressions and detect facial landmarks in order to extract the driver's expression state and determine whether they are driving safely. As soon as the system detects such changes, it takes control of the vehicle, immediately slows it down, and alerts the driver by sounding an alarm. The proposed system is to be integrated with the vehicle's electronics, tracking the vehicle's statistics and providing more accurate results. The paper implements real-time image segmentation and drowsiness detection using machine learning methodologies; in the proposed work, an emotion detection method based on Support Vector Machines (SVM) has been implemented using facial expressions.
Pros: The research in this field focuses on four types of fatigue detection. The first draws on the drivers' physiological signals, such as the electroencephalogram (EEG), electrocardiogram (ECG), and electrooculogram (EOG); this category gives good results. The second is based on operating-behaviour methods, the third on the vehicle's condition, and the fourth on physiological characteristics.
Cons: Even though physiological measures may provide an exact indication of exhaustion, they are adversely affected by artifacts. There is an unavoidable trade-off between the speed and precision of the prediction: if the time window is short, the system may pick up noise and may therefore produce excessive false positives.

Overview of proposed work

We intend to build an end-to-end system to detect a driver's sleepiness in a car by continuously monitoring the driver's face, based on the eye blinking technique.

The steps involved are (a minimal code sketch of this loop follows the list):

1. Continuously monitor the driver's face using a camera.
2. Pass each frame to the Landmark-68 algorithm and detect the eye points.
3. Calculate the eye width and eye height using the Euclidean distance between these points.
4. Calculate the height:width ratio (the eye aspect ratio, E.A.R.).
5. If the ratio stays below a threshold for more than 6 frames, trigger an alarm.
6. The driver may switch off the alarm using the mobile app as soon as the sleep is disrupted.
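A minimal Python sketch of this loop follows, under stated assumptions: dlib's pre-trained 68-point predictor file ("shape_predictor_68_face_landmarks.dat") is available locally, the camera sits at index 0, and the 0.3 EAR threshold and 6-frame window are taken from the values given later in this report. It is an illustration of the technique, not the project's exact code.

```python
# Sketch of the proposed detection loop (an assumption of this report's
# method, not the project's exact code). Requires dlib's pre-trained
# "shape_predictor_68_face_landmarks.dat" file and a camera at index 0.
import cv2
import dlib
import numpy as np

EAR_THRESHOLD = 0.3   # eye treated as closed below this ratio
CONSEC_FRAMES = 6     # frames below threshold before the alarm fires

def eye_aspect_ratio(eye):
    # eye: 6x2 array of landmark points, ordered as in the 68-point scheme
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)   # camera facing the driver
counter = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        shape = predictor(gray, face)
        pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        counter = counter + 1 if ear < EAR_THRESHOLD else 0
        if counter >= CONSEC_FRAMES:
            print("DROWSINESS ALERT")   # stand-in for the buzzer/app alarm
```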
Implementation Methodology

A sleepiness detection system can be created to address this issue and provide an efficient solution. Placed inside any car, the system will use the driver's live video feed as input, compare it against training data, and determine whether the driver exhibits any signs of sleepiness; if so, it will automatically identify this and sound an alarm to alert the driver and other travellers. For eye tracking and monitoring, there are several different algorithms and techniques; most of them rely in some way on ocular characteristics in a video image of the driver (usually reflections from the eye).

Tech Stack Used

Frontend
● XML

Backend
● Java

IoT Device
● Arduino
● ESP32
● MPU-6050
● NodeMCU

Machine Learning (Libraries)
● OpenCV
● Dlib
● Imutils
● NumPy
● Pillow

Detailed Procedure

1. Initial Camera Setup

The first step in the system is to set up a camera facing the driver so that the driver's face can be captured successfully for further processing. The camera must be set up so that it is not intrusive, i.e., it does not get in the driver's way on the road, and it must be placed properly so that the captured face is clear and therefore provides accurate results.
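As a rough illustration (the device index and resolution here are assumptions, not values from this report), such a camera feed could be opened and sanity-checked with OpenCV:

```python
# Hypothetical camera sanity check; device index and resolution are
# illustrative assumptions, not values from this report.
import cv2

cap = cv2.VideoCapture(0)                # dashboard camera facing the driver
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # modest size keeps processing real-time
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Camera feed unavailable; check mounting and connection")
```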

2. Face Detection
The next step is detecting the driver's face in the video stream. To extract the facial landmarks of drivers, the Dlib library was imported and deployed in our application. The library uses a pre-trained face detector based on a modification of the histogram of oriented gradients, combined with a linear SVM (support vector machine) method for object detection. The facial landmark predictor was then initialized, and the facial landmarks captured by the application were used to calculate distances between points.

EAR is defined as the ratio of the height to the width of the eye: the numerator denotes the height of the eye, the denominator denotes the width of the eye, and the details of all the eye landmarks are depicted as shown. The numerator measures the distance between the upper and lower eyelids, while the denominator represents the horizontal extent of the eye. When the eyes are open, the numerator value increases, increasing the EAR value; when the eyes are closed, the numerator value decreases, decreasing the EAR value. In this context, EAR values are used to detect the driver's drowsiness. The E.A.R. value of the left and right eyes is calculated and then averaged.

In our drowsiness detector, the eye aspect ratio is monitored to check whether the value falls below the threshold and does not rise above the threshold again in the next frame. This condition implies that the person has closed his or her eyes and is in a drowsy state. On the contrary, if the EAR value increases again, it implies that the person has just blinked and there is no case of drowsiness. The figure below, Eye Aspect Ratio (E.A.R.) Computation, depicts the block diagram of our proposed approach to detect the driver's drowsiness; the following figure shows a snapshot of the facial landmark points from the Dlib library, which are used to compute the EAR.
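The description above matches the standard eye aspect ratio formulation (Soukupová and Čech, 2016). With the six eye landmarks numbered p1 to p6, where p1 and p4 are the horizontal corners, it reads:

```latex
\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}
```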
3. Face Landmark Detection and Extraction
The next step after successful face detection is recognizing the facial landmarks and extracting the desired ones. Facial landmarks can be found by several methods, but most of them work by labeling and localizing regions such as the right eyebrow, left eyebrow, right eye, left eye, nose, mouth, and jaw. We use the facial landmark detection algorithm that implements One Millisecond Face Alignment with an Ensemble of Regression Trees; this detector algorithm is part of the dlib library. The method works from manually labeled, specific (x, y) coordinates for the regions surrounding each facial structure, using this set of trained facial landmarks on an image. The detector available in the dlib library estimates the locations of the 68 (x, y) coordinates that are specific to each separate facial structure.

We can localize and extract the eye regions by using the specific facial indices for the left and right eye regions: the right eye corresponds to landmark indices 36-41 (array slice [36:42]) and the left eye to indices 42-47 (array slice [42:48]). These indices come from the 68-point iBUG 300-W [21]-[23] dataset on which the facial landmark detector available in the dlib library is trained. Irrespective of which dataset is used, if the shape predictor is trained properly on the input training data, the same Dlib framework can be used.
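A short sketch of this indexing, assuming a landmark prediction from dlib is already at hand (the variable names here are illustrative):

```python
# Sketch of extracting the eye regions from a dlib 68-point prediction,
# using the index ranges quoted above (right eye 36-41, left eye 42-47).
import numpy as np

def shape_to_np(shape):
    # convert dlib's full_object_detection into a 68x2 array of (x, y) points
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])

# pts = shape_to_np(shape)   # shape: output of dlib's landmark predictor
# right_eye = pts[36:42]     # landmark indices 36..41
# left_eye  = pts[42:48]     # landmark indices 42..47
```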
4. Eye Aspect Ratio (E.A.R.) Computation
To detect whether the driver's eyes are closed, and to successfully differentiate between standard eye blinks and eyes being closed during a state of drowsiness, we make use of an algorithm that uses a facial landmark detector. We compute a single scalar quantity called the eye aspect ratio (E.A.R.) that reflects whether the eye is closed. For each video frame, the landmarks of the eye regions are found, and the Euclidean distances giving the height and width of the eye are calculated, from which the eye aspect ratio (E.A.R.) is obtained.

Drowsiness Evaluation and Countermeasures

After we successfully compute the E.A.R., we can use that value to evaluate the driver's
state of drowsiness. The E.A.R. value remains roughly constant while the driver's eye is
open, but it drops toward zero as the eye closes, and it is largely invariant to head and
body posture. Using these findings, we classify the eye state as closed when the E.A.R.
is 0.3 or less; otherwise the state is identified as open. The final part is deciding
whether to sound the alarm. The average duration of a person's eye blink is 100-400
milliseconds, so if the driver is in a state of drowsiness, their eye-closure time exceeds
this interval. In our system, the eye-closure threshold is set at 0.3 seconds; if it is
crossed, the alarm is sounded and an alert pops up in the app.
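A minimal sketch of this decision rule (the wall-clock timing and the names used here
are illustrative assumptions; the appendix code approximates the same rule by counting
consecutive frames):

import time

EAR_THRESHOLD = 0.3   # eye treated as closed at or below this ratio
CLOSURE_LIMIT = 0.3   # seconds; longer than a typical 100-400 ms blink

closed_since = None   # timestamp of the frame where the eye first closed

def should_alarm(ear):
    """Return True when the eye has stayed closed past the blink interval."""
    global closed_since
    if ear <= EAR_THRESHOLD:
        if closed_since is None:
            closed_since = time.time()          # eye just closed
        return time.time() - closed_since >= CLOSURE_LIMIT
    closed_since = None                         # eye open again: reset
    return False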

Proposed UI

(UI mockups: Landing Page, Current Drive Evaluation, Map Route Interface)


Competitive Analysis

Some of the products that were popular among all other apps are listed below:

● Drivemode Dash: Drivemode Dash has been available for four years. This
software lets you:
1. Use talk-to-text, make a call, use the GPS, or play music.
2. Reduce distractions with large buttons and straightforward interfaces.
3. Locate contacts and locations with voice search.
4. Customise favourites lists for simple navigation.
Apple and Android users may download Drivemode Dash for free.

● OnMyWay: OnMyWay takes a more direct approach: when you're travelling
faster than 10 mph, it disables app alerts, covering messages as well as other
notifications. However, its voice controls and Bluetooth compatibility ensure
that you keep control, and you can still use navigation and audio-playback
apps hands-free. The app's other features include:
1. Integration with programmes like Spotify and Google Maps.
2. Automatic activation above 10 mph.
3. Cash incentives for signups and referrals to promote the spread of safety.
Both the Apple and Android app stores offer OnMyWay for free.

● SAFE 2 SAVE: The app takes advantage of built-in sensors to determine
whether the phone is being handled; you receive points if the phone stays still.
Once the car is moving at a speed greater than 10 mph, the app's auto-on
feature activates. SAFE 2 SAVE also includes:
1. A warning signal when it detects phone use while driving.
2. Group connections to organise safe-driving contests among friends, family,
and coworkers.
3. Leaderboards to monitor the most careful drivers.
The iPhone and Android app stores also offer SAFE 2 SAVE for free.

● TrueMotion Family Safe Driving: TrueMotion Family Safe Driving informs you
about your driving style rather than turning off notifications or paying you. It
monitors driving position, speed, and phone use, specifically for families. Each
trip report is given a rating between 0 and 100; the more signs of unsafe
driving there are, the lower the score. TrueMotion Family Safe Driving flags
any instances that may have involved distractions, including using the phone
or launching apps. It also offers:
1. GPS tracking of young drivers and teenagers.
2. Score standings for friendly competitions.
3. Real-time alerts for reckless driving, speeding, and other behaviours.
Both Android and iPhone smartphones may get TrueMotion Family Safe Driving
free of charge.

● I’m Driving: I'm Driving takes a more direct route: it updates your contact list
and notifies those contacts that you're driving. It won't block incoming alerts
or pay you for driving safely, but it will let your friends and family know that
you don't want to be contacted.
I'm Driving's features:
1. A user-friendly interface that minimises interruptions.
2. Your contact list updates automatically once you start driving.
3. An automatic notice is sent if specified contacts weren't updated.
Like the other applications, I'm Driving is free on both the Android and iPhone
app stores.

After thorough research, we found that no existing business offers direct safety
features to its customers; all the services they provide merely encourage safe
driving rather than actually enforcing direct safety for the driver. These products
offer safety that is driving-oriented but not driver-oriented. Our idea is to build a
solution capable of providing safety by analysing both fronts, the driver and the
driving data, which makes our solution unique and more effective.

Hardware Components
● ESP32 camera module
● Jumper wires
● FTDI connector
● Breadboard
● Arduino Uno
● LCD
● Power supply module
● MPU6050 accelerometer
● ESP8266 WiFi module

Hardware Configuration
(Photos: Breadboard and Arduino Configuration; Camera Module Configuration)

App Screenshots
(Screenshots: Driver Performance Metrics, Trip History, Google Maps Integration, Accelerometer View)

Functionalities
● Sleep detection and alarming
● Driver performance rating using accelerometer and speed analysis
● Overspeed detection
● GPS location tracking
● Harsh braking detection

Software Employed
● Arduino IDE
● PyCharm
● Android Studio

Video Demonstration
https://drive.google.com/file/d/1YVFi864wBZ0Y8LAWZdMOHYBNxYv6chrD/view?usp=share_link

Appendix
ESP32 Camera Input Code

#include <WebServer.h>
#include <WiFi.h>
#include <esp32cam.h>

const char* WIFI_SSID = "Naur";


const char* WIFI_PASS = "eqht6471";

WebServer server(80);

static auto loRes = esp32cam::Resolution::find(320, 240);


static auto midRes = esp32cam::Resolution::find(350, 530);
static auto hiRes = esp32cam::Resolution::find(800, 600);
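// Capture one frame at the current resolution and stream it to the HTTP client as a JPEG.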
void serveJpg()
{
auto frame = esp32cam::capture();
if (frame == nullptr) {
Serial.println("CAPTURE FAIL");
server.send(503, "", "");
return;
}
Serial.printf("CAPTURE OK %dx%d %db\n", frame->getWidth(),
frame->getHeight(),
static_cast<int>(frame->size()));

server.setContentLength(frame->size());
server.send(200, "image/jpeg");
WiFiClient client = server.client();
frame->writeTo(client);
}

void handleJpgLo()
{
if (!esp32cam::Camera.changeResolution(loRes)) {
Serial.println("SET-LO-RES FAIL");
}
serveJpg();
}

void handleJpgHi()
{
if (!esp32cam::Camera.changeResolution(hiRes)) {
Serial.println("SET-HI-RES FAIL");
}
serveJpg();
}

void handleJpgMid()
{
if (!esp32cam::Camera.changeResolution(midRes)) {
Serial.println("SET-MID-RES FAIL");
}
serveJpg();
}
void setup(){
Serial.begin(115200);
Serial.println();
{
using namespace esp32cam;
Config cfg;
cfg.setPins(pins::AiThinker);
cfg.setResolution(hiRes);
cfg.setBufferCount(2);
cfg.setJpeg(80);

bool ok = Camera.begin(cfg);
Serial.println(ok ? "CAMERA OK" : "CAMERA FAIL");
}
WiFi.persistent(false);
WiFi.mode(WIFI_STA);
WiFi.begin(WIFI_SSID, WIFI_PASS);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
}
Serial.print("http://");
Serial.println(WiFi.localIP());
Serial.println(" /cam-lo.jpg");
Serial.println(" /cam-hi.jpg");
Serial.println(" /cam-mid.jpg");

server.on("/cam-lo.jpg", handleJpgLo);
server.on("/cam-hi.jpg", handleJpgHi);
server.on("/cam-mid.jpg", handleJpgMid);

server.begin();
}

void loop()
{
server.handleClient();
}

Accelerometer Code
#include <Wire.h>

// MPU6050 Slave Device Address


const uint8_t MPU6050SlaveAddress = 0x68;

#include <ESP8266WiFi.h>
#include <FirebaseESP8266.h>

#define FIREBASE_HOST "rapidez-f8b03-default-rtdb.firebaseio.com"


// Your Firebase project URL goes here, without "http:", "\" and "/"
#define FIREBASE_AUTH "g1I7OhFbafc0aJfZYU6yjiQEJHNajtGyOnT4xZ4e" // Your Firebase database secret goes here

#define WIFI_SSID "Naur"


//WiFi SSID to which you want NodeMCU to connect
#define WIFI_PASSWORD "eqht6471"
//Password of your wifi network
// Select SDA and SCL pins for I2C communication
const uint8_t scl = D6;
const uint8_t sda = D7;
FirebaseData firebaseData;
// Sensitivity scale factors for the full-scale settings provided in the datasheet
const uint16_t AccelScaleFactor = 16384;
const uint16_t GyroScaleFactor = 131;

// MPU6050 few configuration register addresses


const uint8_t MPU6050_REGISTER_SMPLRT_DIV = 0x19;
const uint8_t MPU6050_REGISTER_USER_CTRL = 0x6A;
const uint8_t MPU6050_REGISTER_PWR_MGMT_1 = 0x6B;
const uint8_t MPU6050_REGISTER_PWR_MGMT_2 = 0x6C;
const uint8_t MPU6050_REGISTER_CONFIG = 0x1A;
const uint8_t MPU6050_REGISTER_GYRO_CONFIG = 0x1B;
const uint8_t MPU6050_REGISTER_ACCEL_CONFIG = 0x1C;
const uint8_t MPU6050_REGISTER_FIFO_EN = 0x23;
const uint8_t MPU6050_REGISTER_INT_ENABLE = 0x38;
const uint8_t MPU6050_REGISTER_ACCEL_XOUT_H = 0x3B;
const uint8_t MPU6050_REGISTER_SIGNAL_PATH_RESET = 0x68;

int16_t AccelX, AccelY, AccelZ, Temperature, GyroX, GyroY, GyroZ;


void setup() {
Serial.begin(9600); // start serial first so the status messages below are visible
Serial.println("Serial communication started\n");

WiFi.begin(WIFI_SSID, WIFI_PASSWORD); // try to connect to WiFi
Serial.print("Connecting to ");
Serial.print(WIFI_SSID);

while (WiFi.status() != WL_CONNECTED) {


Serial.print(".");
delay(500);
}

Serial.println();
Serial.print("Connected to ");
Serial.println(WIFI_SSID);
Serial.print("IP Address is : ");
Serial.println(WiFi.localIP());
//print local IP address
Firebase.begin(FIREBASE_HOST, FIREBASE_AUTH); // connect to Firebase

Firebase.reconnectWiFi(true);
delay(1000);
Wire.begin(sda, scl);
MPU6050_Init();
}

void loop() {
double Ax, Ay, Az, T, Gx, Gy, Gz;

Read_RawValue(MPU6050SlaveAddress, MPU6050_REGISTER_ACCEL_XOUT_H);

//divide each with their sensitivity scale factor


Ax = (double)AccelX/AccelScaleFactor;
Ay = (double)AccelY/AccelScaleFactor;
Az = (double)AccelZ/AccelScaleFactor;
T = (double)Temperature/340+36.53; //temperature formula
Gx = (double)GyroX/GyroScaleFactor;
Gy = (double)GyroY/GyroScaleFactor;
Gz = (double)GyroZ/GyroScaleFactor;
if (Firebase.setDouble(firebaseData, "/A", Ax)) { // On successful
Write operation, function returns 1
Serial.println("Value Uploaded Successfully");
Serial.print("Ax = ");
Serial.println(Ax);
Serial.println("\n");

else {
Serial.println(firebaseData.errorReason());
}
if (Firebase.setDouble(firebaseData, "/Ay", Ay)) { // On successful
Write operation, function returns 1
Serial.println("Value Uploaded Successfully");
Serial.print("Ay = ");
Serial.println(Ay);
Serial.println("\n");
}

else {
Serial.println(firebaseData.errorReason());
}
Serial.print("Ax: "); Serial.print(Ax);
Serial.print(" Ay: "); Serial.print(Ay);
Serial.print(" Az: "); Serial.print(Az);
Serial.print(" T: "); Serial.print(T);
Serial.print(" Gx: "); Serial.print(Gx);
Serial.print(" Gy: "); Serial.print(Gy);
Serial.print(" Gz: "); Serial.println(Gz);

delay(100);
}
void I2C_Write(uint8_t deviceAddress, uint8_t regAddress, uint8_t data){
Wire.beginTransmission(deviceAddress);
Wire.write(regAddress);
Wire.write(data);
Wire.endTransmission();
}

// Read all 14 data registers (accelerometer, temperature, gyroscope) in one burst
void Read_RawValue(uint8_t deviceAddress, uint8_t regAddress){
Wire.beginTransmission(deviceAddress);
Wire.write(regAddress);
Wire.endTransmission();
Wire.requestFrom(deviceAddress, (uint8_t)14);
AccelX = (((int16_t)Wire.read()<<8) | Wire.read());
AccelY = (((int16_t)Wire.read()<<8) | Wire.read());
AccelZ = (((int16_t)Wire.read()<<8) | Wire.read());
Temperature = (((int16_t)Wire.read()<<8) | Wire.read());
GyroX = (((int16_t)Wire.read()<<8) | Wire.read());
GyroY = (((int16_t)Wire.read()<<8) | Wire.read());
GyroZ = (((int16_t)Wire.read()<<8) | Wire.read());
}

//configure MPU6050
void MPU6050_Init(){
delay(150);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_SMPLRT_DIV, 0x07);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_PWR_MGMT_1, 0x01);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_PWR_MGMT_2, 0x00);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_CONFIG, 0x00);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_GYRO_CONFIG, 0x00); // set +/- 250 degree/second full scale
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_ACCEL_CONFIG, 0x00); // set +/- 2g full scale
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_FIFO_EN, 0x00);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_INT_ENABLE, 0x01);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_SIGNAL_PATH_RESET, 0x00);
I2C_Write(MPU6050SlaveAddress, MPU6050_REGISTER_USER_CTRL, 0x00);
}
Sleep detection and alarming

#Importing OpenCV Library for basic image processing functions


import cv2
import urllib.request # for accessing the camera URL
# Numpy for array related functions
import numpy as np
# Dlib for deep learning based Modules and face landmark detection
import dlib
#face_utils for basic operations of conversion
from imutils import face_utils

from functions.euclidian_distance import compute

from functions.blinkcheck import blinked

# import required module


from playsound import playsound

from time import time

#Initializing the face detector and landmark detector


detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") # loading the trained 68-point model

#status marking for current state


sleep = 0
drowsy = 0
active = 0
status=""
color=(0,0,0)

while True:
    # Initializing the camera and taking the frame
    winName = 'ESP32 CAMERA'
    cv2.namedWindow(winName, cv2.WINDOW_AUTOSIZE)
    url = 'http://192.168.236.86/cam-hi.jpg' # URL for accessing the ESP32 cam
    imgResponse = urllib.request.urlopen(url) # we open the URL
    imgNp = np.array(bytearray(imgResponse.read()), dtype=np.uint8)
    cap = cv2.imdecode(imgNp, -1) # decoding the JPEG frame
    # cap = cv2.VideoCapture(0)
    # _, frame = cap.read()
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(cap, cv2.COLOR_BGR2GRAY)

    faces = detector(gray) # detected faces are stored in the faces array
    # face_frame = frame.copy()
    face_frame = cap.copy()

    for face in faces:
        x1 = face.left()
        y1 = face.top()
        x2 = face.right()
        y2 = face.bottom()

        cv2.rectangle(face_frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

        landmarks = predictor(gray, face)
        landmarks = face_utils.shape_to_np(landmarks)

        # The indices below are the landmark points of the two eyes
        left_blink = blinked(landmarks[36], landmarks[37],
                             landmarks[38], landmarks[41], landmarks[40], landmarks[39])
        right_blink = blinked(landmarks[42], landmarks[43],
                              landmarks[44], landmarks[47], landmarks[46], landmarks[45])

        # Now judge what to do based on the eye blinks
        if left_blink == 0 or right_blink == 0:
            sleep += 1
            # drowsy = 0
            active = 0
            if sleep > 6:
                status = "SLEEPING !!!"
                color = (255, 255, 255)

        # elif left_blink == 1 or right_blink == 1:
        #     sleep = 0
        #     active = 0
        #     drowsy += 1
        #     if drowsy > 6:
        #         status = "Drowsy !"
        #         color = (255, 255, 255)

        else:
            drowsy = 0
            sleep = 0
            active += 1
            if active > 6:
                status = "Active :)"
                color = (0, 255, 0)

        # cv2.putText(frame, status, (100, 100), cv2.FONT_HERSHEY_SIMPLEX, 1.2, color, 3)
        cv2.putText(cap, status, (100, 100), cv2.FONT_HERSHEY_SIMPLEX,
                    1.2, color, 3)
        print(status)
        if status == "SLEEPING !!!":
            playsound('Sound Effect Beep Alert Loop.wav') # playing beep sound

        for n in range(0, 68):
            (x, y) = landmarks[n]
            cv2.circle(face_frame, (x, y), 1, (255, 255, 255), -1)

    # cv2.imshow("Frame", frame)
    cv2.imshow("Frame", cap)
    cv2.imshow("Result of detector", face_frame)
    key = cv2.waitKey(1)
    if key == 27: # Esc key exits
        break

Blink Check Function

from .euclidian_distance import compute

def blinked(a, b, c, d, e, f):
    """Return 2 when the eye is open (EAR > 0.25), else 0 (closed)."""
    up = compute(b, d) + compute(c, e)   # vertical eye opening
    down = compute(a, f)                 # horizontal eye width
    ratio = up / (2.0 * down)            # eye aspect ratio

    # Checking whether the eye is blinked/closed
    if ratio > 0.25:
        return 2
    # elif ratio > 0.21 and ratio <= 0.25:
    #     return 1
    else:
        return 0

Euclidean Distance Function

import numpy as np

def compute(ptA, ptB):
    # Euclidean distance between two landmark points
    return np.linalg.norm(ptA - ptB)

Splash Activity
package com.example.rapidez;

import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;

import androidx.appcompat.app.AppCompatActivity;

public class SplashActivity extends AppCompatActivity {

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_splash);
if (getSupportActionBar() != null) {
getSupportActionBar().hide();
}

new Handler().postDelayed(new Runnable() {


@Override
public void run() {
Intent iHome = new Intent(SplashActivity.this,
MainActivity.class);
startActivity(iHome);
finish();
}
}, 2500);
}
}

Main Activity
package com.example.rapidez;

import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.widget.Toast;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class MainActivity extends AppCompatActivity {


FirebaseDatabase firebaseDatabase;
DatabaseReference speedref;
private TextView clickbtn,mapbtn,location,speedtxt;
LinearLayout lastLocation,performance,history;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
DatabaseReference reference=
FirebaseDatabase.getInstance().getReference().child("newRequest");
firebaseDatabase = FirebaseDatabase.getInstance();
lastLocation=findViewById(R.id.lastlocation);
performance=findViewById(R.id.performance);
history=findViewById(R.id.history);
speedtxt=findViewById(R.id.speed_txt);
speedref=firebaseDatabase.getReference("V");
getSpeed();
lastLocation.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent intent=new
Intent(getApplicationContext(),LastlocationActivity.class);
startActivity(intent);
}
});
performance.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent intent=new
Intent(getApplicationContext(),Driverss_performanceActivity.class);
startActivity(intent);
}
});
history.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent intent=new
Intent(getApplicationContext(),HistoryActivity.class);
startActivity(intent);
}
});
}

private void getSpeed() {


speedref.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot snapshot) {
// this method is call to get the realtime
// updates in the data.
// this method is called when the data is
// changed in our Firebase console.
// below line is for getting the data from
// snapshot of our database.
float value = snapshot.getValue(float.class);
String value1 = Float.toString(value);

// after getting the value we are setting
// our value to our text view in the line below
speedtxt.setText(value1);
}

@Override
public void onCancelled(@NonNull DatabaseError error) {
// calling on cancelled method when we receive
// any error or we are not able to get the data.
Toast.makeText(MainActivity.this, "Fail to get data.",
Toast.LENGTH_SHORT).show();
}
});
}
}

Last Location Activity


package com.example.rapidez;

import android.Manifest;
import android.app.AlertDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.location.Location;
import android.location.LocationManager;
import android.os.Bundle;
import android.provider.Settings;
import android.widget.Toast;

import androidx.core.app.ActivityCompat;
import androidx.fragment.app.FragmentActivity;

import com.example.rapidez.databinding.ActivityLastlocationBinding;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.OnMapReadyCallback;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
public class LastlocationActivity extends FragmentActivity implements
OnMapReadyCallback {
private static final int REQUEST_LOCATION = 1;
private GoogleMap mMap;
private ActivityLastlocationBinding binding;
LocationManager locationManager;
double lat,longi;
String latitude, longitude;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);

binding = ActivityLastlocationBinding.inflate(getLayoutInflater());
setContentView(binding.getRoot());

// Obtain the SupportMapFragment and get notified when the map is ready to be used.
SupportMapFragment mapFragment = (SupportMapFragment)
getSupportFragmentManager().findFragmentById(R.id.map);
mapFragment.getMapAsync(this);
}

/**
 * Manipulates the map once available.
 * This callback is triggered when the map is ready to be used.
 * This is where we can add markers or lines, add listeners or move the
 * camera. In this case, we add a marker at the driver's last known location.
 * If Google Play services is not installed on the device, the user will be
 * prompted to install it inside the SupportMapFragment. This method will only
 * be triggered once the user has installed Google Play services and returned
 * to the app.
 */
@Override
public void onMapReady(GoogleMap googleMap) {
ActivityCompat.requestPermissions( this,
new String[] {Manifest.permission.ACCESS_FINE_LOCATION},
REQUEST_LOCATION);
locationManager = (LocationManager)
getSystemService(Context.LOCATION_SERVICE);
if (!locationManager.isProviderEnabled(LocationManager.GPS_PROVIDER)) {
OnGPS();
} else {
getLocation();
}
mMap = googleMap;

// Add a marker at the last known location and move the camera there
LatLng lastKnown = new LatLng(lat, longi);
mMap.addMarker(new MarkerOptions().position(lastKnown).title("Vellore Institute of Technology"));
mMap.moveCamera(CameraUpdateFactory.newLatLng(lastKnown));
}

private void OnGPS() {


final AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage("Enable GPS").setCancelable(false)
.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
startActivity(new
Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS));
}
}).setNegativeButton("No", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
final AlertDialog alertDialog = builder.create();
alertDialog.show();
}
private void getLocation() {
if (ActivityCompat.checkSelfPermission(

LastlocationActivity.this,Manifest.permission.ACCESS_FINE_LOCATION) !=
PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(
LastlocationActivity.this,
Manifest.permission.ACCESS_COARSE_LOCATION) !=
PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new
String[]{Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_LOCATION);
} else {
Location locationGPS =
locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
if (locationGPS != null) {
lat = locationGPS.getLatitude();
longi = locationGPS.getLongitude();

} else {
Toast.makeText(this, "Unable to find location.",
Toast.LENGTH_SHORT).show();
}
}
}
}

History Activity
package com.example.rapidez;

import android.os.Bundle;
import android.widget.TextView;
import android.widget.Toast;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

public class HistoryActivity extends AppCompatActivity {


FirebaseDatabase firebaseDatabase;
DatabaseReference ratingref;
private TextView ratingtxt,rating2txt,rating3txt;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_history);
ratingtxt=findViewById(R.id.rating1txt);
DatabaseReference reference=
FirebaseDatabase.getInstance().getReference().child("newRequest");
firebaseDatabase = FirebaseDatabase.getInstance();
ratingref=firebaseDatabase.getReference("S");
getRating();
}

private void getRating() {
ratingref.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot snapshot) {
// Called once with the current value and again whenever the
// score changes in Firebase; map the 0-100 score to a star rating.
float value = snapshot.getValue(float.class);
if (value < 10) {
ratingtxt.setText("1 Star");
} else if (value < 20) {
ratingtxt.setText("1.5 Star");
} else if (value < 30) {
ratingtxt.setText("2 Star");
} else if (value < 40) {
ratingtxt.setText("2.5 Star");
} else if (value < 50) {
ratingtxt.setText("3 Star");
} else if (value < 60) {
ratingtxt.setText("3.5 Star");
} else if (value < 70) {
ratingtxt.setText("4 Star");
} else if (value < 80) {
ratingtxt.setText("4.5 Star");
} else if (value < 90) {
ratingtxt.setText("4.7 Star");
} else {
ratingtxt.setText("5 Star");
}
}

@Override
public void onCancelled(@NonNull DatabaseError error) {
// calling on cancelled method when we receive
// any error or we are not able to get the data.
Toast.makeText(HistoryActivity.this, "Fail to get data.",
Toast.LENGTH_SHORT).show();
}
});
}
}
References

[1] Dua, Mohit, et al. "Deep CNN models-based ensemble approach to driver drowsiness detection."
Neural Computing and Applications 33 (2021): 3155-3168.

[2] Ahmed, M., Masood, S., Ahmad, M., & Abd El-Latif, A. A. (2021). Intelligent driver drowsiness
detection for traffic safety based on multi CNN deep model and facial subsampling. IEEE Transactions on
Intelligent Transportation Systems, 23(10), 19743-19752.

[3] Jabbar, Rateb, Mohammed Shinoy, Mohamed Kharbeche, Khalifa Al-Khalifa, Moez Krichen, and
Kamel Barkaoui. "Driver drowsiness detection model using convolutional neural networks techniques for
android application." In 2020 IEEE International Conference on Informatics, IoT, and Enabling
Technologies (ICIoT), pp. 237-242. IEEE, 2020.
[4] Wang, H., Xu, L., Bezerianos, A., Chen, C. and Zhang, Z., 2020. Linking attention-based multiscale
CNN with dynamical GCN for driving fatigue detection. IEEE Transactions on Instrumentation and
Measurement, 70, pp.1-11.

[5] Ngxande M, Tapamo JR, Burke M. Driver drowsiness detection using behavioral measures and
machine learning techniques: A review of state-of-art techniques. 2017 pattern recognition Association of
South Africa and Robotics and mechatronics (PRASA-RobMech). 2017 Nov 30:156-61.

[6] Jahan, I., Uddin, K. M., Murad, S. A., Miah, M., Khan, T. Z., Masud, M., ... & Bairagi, A. K. (2023). 4D:
a real-time driver drowsiness detector using deep learning. Electronics, 12(1), 235.

[7] Bajaj, J. S., Kumar, N., Kaushal, R. K., Gururaj, H. L., Flammini, F., & Natarajan, R. (2023). System
and Method for Driver Drowsiness Detection Using Behavioral and Sensor-Based Physiological
Measures. Sensors, 23(3), 1292.

[8] Sriram, D., Sanjeev, D., Yerrapragada, S. P. R., Hemanjali, A., Aathava, B. K., & Hossain, M. Driver
Drowsiness Detection Using AI.

[9] Jain Stoble, B., & Varghese, R. Driver Drowsiness Detection System Based On Eye Closure.

[10] Chowdhury, A., Shankaran, R., Kavakli, M., & Haque, M. M. (2018). Sensor applications and
physiological features in drivers’ drowsiness detection: A review. IEEE sensors Journal, 18(8), 3055-3067.

[11] Arakawa, T. (2021). Trends and future prospects of drowsiness detection and estimation technology.
Sensors, 21(23), 7921.

[12] Investigating Driver Fatigue versus Alertness Using the Granger Causality Network

[13] Wang, Weicai & Gao, Yang & Iribarren, Pablo & Lei, Yanbin & Xiang, Yang & Zhang, Guoqing &
Shenghai, Li & Lu, Anxin. (2015). Wang et al. 2015.

[14] Griffith, C. D., & Mahadevan, S. (2006). Sleep-deprivation effect on human performance: a
meta-analysis approach (No. INL/CON-06-01264). Idaho National Lab.(INL), Idaho Falls, ID (United
States).

[15] Shekari Soleimanloo, S., White, M. J., Garcia-Hansen, V., & Smith, S. S. (2017). The effects of sleep
loss on young drivers’ performance: A systematic review. PLoS One, 12(8), e0184002.

[16] Mittal, A., Kumar, K., Dhamija, S., & Kaur, M. (2016, March). Head movement-based driver
drowsiness detection: A review of state-of-art techniques. In 2016 IEEE international conference on
engineering and technology (ICETECH) (pp. 903-908). IEEE.

[17] Rahman, A., Sirshar, M., & Khan, A. (2015, December). Real time drowsiness detection using eye
blink monitoring. In 2015 National software engineering conference (NSEC) (pp. 1-7). IEEE.

[18] Purnamasari, P. D., & Hazmi, A. Z. (2018, September). Heart beat based drowsiness detection
system for driver. In 2018 International Seminar on Application for Technology of Information and
Communication (pp. 585-590). IEEE.
[19] Zhang, X., Wang, X., Yang, X., Xu, C., Zhu, X., & Wei, J. (2020). Driver drowsiness detection using
mixed-effect ordered logit model considering time cumulative effect. Analytic methods in accident
research, 26, 100114.

[20] Ahmed, H. M., Farhan, R. N., & Aliesawi, S. A. (2019). Drowsiness Detection using Fuzzy Inference
System.

[21] Wierwille, W. W. (1995). Overview of research on driver drowsiness definition and driver drowsiness
detection. In Proceedings: International Technical Conference on the Enhanced Safety of Vehicles (Vol.
1995, pp. 462-468). National Highway Traffic Safety Administration.

[22] Saini, V., & Saini, R. (2014). Driver drowsiness detection system and techniques: a review.
International Journal of Computer Science and Information Technologies, 5(3), 4245-4249.

[23] Ramzan, M., Khan, H. U., Awan, S. M., Ismail, A., Ilyas, M., & Mahmood, A. (2019). A survey on
state-of-the-art drowsiness detection techniques. IEEE Access, 7, 61904-61919.

[24] Hu, S., & Zheng, G. (2009). Driver drowsiness detection with eyelid related parameters by Support
Vector Machine. Expert Systems with Applications, 36(4), 7651-7658.

[25] Vicente, J., Laguna, P., Bartra, A., & Bailón, R. (2016). Drowsiness detection using heart rate
variability. Medical & biological engineering & computing, 54, 927-937.

[26] Ueno, H., Kaneda, M., & Tsukino, M. (1994, August). Development of drowsiness detection system.
In Proceedings of VNIS'94-1994 Vehicle Navigation and Information Systems Conference (pp. 15-20).
IEEE.

[27] Deng, W., & Wu, R. (2019). Real-time driver-drowsiness detection system using facial features. Ieee
Access, 7, 118727-118738.

[28] Stancin, I., Cifrek, M., & Jovic, A. (2021). A review of EEG signal features and their application in
driver drowsiness detection systems. Sensors, 21(11), 3786.

[29] Poursadeghiyan, M., Mazloumi, A., Saraji, G. N., Baneshi, M. M., Khammar, A., & Ebrahimi, M. H.
(2018). Using image processing in the proposed drowsiness detection system design. Iranian journal of
public health, 47(9), 1371.

[30] Siddiqui, H. U. R., Saleem, A. A., Brown, R., Bademci, B., Lee, E., Rustam, F., & Dudley, S. (2021).
Non-invasive driver drowsiness detection system. Sensors, 21(14), 4833.

[31] Sharma, P., & Sood, N. (2020, July). Application of IoT and Machine Learning for Real-time Driver
Monitoring and Assisting Device. In 2020 11th International Conference on Computing, Communication
and Networking Technologies (ICCCNT) (pp. 1-7). IEEE.

[32] Dwivedi, K., Biswaranjan, K., & Sethi, A. (2014, February). Drowsy driver detection using
representation learning. In 2014 IEEE international advance computing conference (IACC) (pp. 995-999).
IEEE.
[33] Phan, A. C., Nguyen, N. H. Q., Trieu, T. N., & Phan, T. C. (2021). An efficient approach for detecting
driver drowsiness based on deep learning. Applied Sciences, 11(18), 8441.

[34] Altameem, A., Kumar, A., Poonia, R. C., Kumar, S., & Saudagar, A. K. J. (2021). Early identification
and detection of driver drowsiness by hybrid machine learning. IEEE Access, 9, 162805-162819.

[35] Chen, L. L., Zhao, Y., Zhang, J., & Zou, J. Z. (2015). Automatic detection of alertness/drowsiness
from physiological signals using wavelet-based nonlinear features and machine learning. Expert Systems
with Applications, 42(21), 7344-7355.
