
2022 IEEE North Karnataka Subsection Flagship International Conference (NKCon)
978-1-6654-5342-4/22/$31.00 ©2022 IEEE | DOI: 10.1109/NKCON56289.2022.10126876

A Framework for Driver Drowsiness Detection using Non-Learning Methods

Dhrithi V G, Department of CSE, Nitte Meenakshi Institute of Technology, Bengaluru, India (dhriti280@gmail.com)
Santosh Botagi, Department of CSE, Nitte Meenakshi Institute of Technology, Bengaluru, India (santoshbothgi12@gmail.com)
Gurukiran K M, Department of CSE, Nitte Meenakshi Institute of Technology, Bengaluru, India (gurukiranmoger@gmail.com)
Md Azaroddin, Department of CSE, Nitte Meenakshi Institute of Technology, Bengaluru, India (mdazroddin16@gmail.com)
Vani Vasudevan, Department of CSE, Nitte Meenakshi Institute of Technology, Bengaluru, India (vani.v@nmit.ac.in)

Abstract— Driver drowsiness and fatigue are among the major reasons for road accidents that occur globally. Implementing a system with an alarm that reminds sleepy drivers to pay attention to the road and assists them in maintaining their focus will help prevent car accidents. Using Python, dlib, and OpenCV, a real-time framework is built that uses a computerized camera to monitor and process the driver's eye and yawning measurements. This prototype for a driver drowsiness detection system is proposed in this paper to reduce fatalities and to contribute generally to increasing transportation safety on the road. When the driver's eyes and yawns are observed together for a specific amount of time, the suggested system detects whether the driver is drowsy and issues a caution alert.

Keywords—driver drowsiness, face detection, yawn detection, OpenCV, Python, fleet management, image processing.

I. INTRODUCTION

Drowsiness is a process in which one's level of consciousness is diminished as a result of sleep deprivation or exhaustion, and it may cause the driver to drift off. Drowsy driving causes the driver to lose control of the vehicle; as a result, the vehicle may suddenly veer off the road, collide with an object, or flip over. Sixty percent of drivers, or around 160 million people, have driven a car while feeling sleepy, which is responsible for nearly twenty-one percent of fatal accidents. Sixty percent of adult drivers admitted to dozing off behind the wheel in the previous year. Utilizing new technology to create and construct systems that are reliable is crucial in this situation.

II. RELATED WORKS

In previously developed driver drowsiness detection systems, we observe that numerous methods have been identified, primarily methods using physiological measurements such as EEG, heart rate, and ECG. Although proven useful in detecting driver drowsiness, these involve attaching various connections to the body, which may be a hindrance while driving. Several alternative perspectives have been proposed, and they are discussed in this section.

A hardware-based system for sleepiness detection in drivers was developed by M. A. Assari et al. [1] using facial expressions. The hardware used is based on infrared light. Benefits of employing infrared light include ease of use, independence from environmental illumination conditions, and the ability to determine the state of the face from changes in its constituent parts [1].

A. A. Rahman et al. [2][3] profile and forecast motorist behaviour on one of the busiest routes in Riyadh, Saudi Arabia, using collected data. The research gave a wonderful initiative for the country's prospective researchers and made a tested dataset available for new self-driving car prototypes that may initially learn about varied traffic patterns. The algorithms run on the dataset and the results indicate that numeric datasets are easier for a model to learn than nominal datasets, with Bayesian algorithms being the best in this situation due to their statistical execution. Multi-Layer Perceptron, the most popular machine learning algorithm, is a superior strategy for the acquired dataset in the long run, as determined by the research. However, this research was confined to determining whether the driver was aggressive or not.

An affordable, real-time driver sleepiness monitoring system based on visual behaviour and machine learning is suggested by A. Kumar et al. [4]. By taking frames from a streaming video obtained from a webcam, the system measures and takes into account visual behavioural aspects including eye aspect ratio, mouth aspect ratio, and nose length ratio [4]. Using this data, they applied classifiers such as Naïve Bayes, FLDA, and SVM. However, using streaming devices and complicated algorithms leads to increased costs and computational load, with the additional drawback of reduced accuracy, as identified in the proposed paper.

Ameen Aliu Bamidele et al. [5] have developed a system that makes use of image processing techniques to detect the driver's drowsiness level by calculating the eye aspect ratio and mouth aspect ratio parameters required for capturing the drowsiness levels [5]. The data gathered on the eye aspect ratio forms the basis for detecting the driver's drowsiness levels, and an acoustic alert is issued when the driver is found to be sleepy. This system, however, does not incorporate the angle of parameterizing head-tilt movements.

A sleepiness detection framework based on a shape predictor algorithm is introduced by Roopalakshmi et al. [6]; it detects the eyes and counts the rate at which they blink


before tiredness is determined in real time. Through image processing methods, which provide a non-invasive way to detect drowsiness without any irritation or disturbance, the suggested system obtains information about the eye status [7]. In the future, the framework could be extended to detect additional details about the driver's tiredness, such as yawning.

Jun-Juh Yan et al. [7] developed a real-time, grayscale visual processing simulation system that can find sleepy drivers. Based on tests, results, and the fatigue model, the system can support monitoring the drivers' physical condition and alert them if they are becoming fatigued, even though they may not be aware of it [7]. The use of grayscale photographs, which eliminates the need to determine skin colour, is the main distinction between commercially available solutions and the system suggested in this study. Although the proposed system includes additional computation stages, it may need more memory than anticipated, which could negatively affect how quickly it runs.

Bharadwaj, P. et al. [8] created and implemented a system to detect tiredness; they utilized a camera to take photos of the driver's face and track the driver's eyes to determine whether or not the driver was drowsy. This system, however, solely used an eye-tracking method.

In order to identify driver drowsiness, Chatterjee, I. et al. [9] created a system that uses a smartphone to track the driver's eyes and body movements; the system sends a warning if the driver's eyes are closed, using the facial landmarks for eye-state detection.

Huynh, X. et al. [10] have suggested a method to detect sleepiness based on ML approaches; they employed a 3D CNN to extract features. The researchers used semi-supervised learning to improve system performance and classified drowsiness using gradient boosting.

Chellappa, A. et al. [11] suggested using a camera linked to a Raspberry Pi 3. The eye-tracking feature of that camera allows it to take a picture of the driver's face and analyze it to identify signs of driver weariness. When drowsiness is identified in this system, an alert will warn the driver.

A real-time system was created by Lashkov, I. et al. [12] based on facial features such as the driver's mouth, head, and eye movements. A smartphone was used to record video of the driver's face, and OpenCV and Dlib were used to extract the characteristics and determine whether or not the driver was feeling sleepy.

L. D. S. Cueva et al. [13] discuss the creation of a method to instantly identify tiredness in a motorist and send alerts to prevent potential accidents. The approaches for drowsiness detection by computer vision are examined, with a particular emphasis on the utilization of facial reference points. The main causes of accidents are speeding, exhaustion, drowsiness, distraction, and being too fatigued to pay attention. Advanced driver assistance systems, or ADAS, can minimize these serious human errors.

M. Hedi Baccour et al. [14] use logistic regression models to examine the potential of the eye-closure and head-rotation signals provided by a driver camera to identify the driver's level of tiredness.

O. Rigane et al. [15] outline an innovative method for an intelligent driver drowsiness detection system that uses the driver's visual behavior. Using a fuzzy logic controller, the estimation of the driver's vigilance is successfully made by combining facial and eye symptoms. Results of experiments were obtained utilizing MATLAB's fuzzy-logic simulation.

A. A. Hayawi et al. [16] centered on the development and application of a driver assistance system that incorporates monitoring and alarming of the driver using invasive acquisition techniques, namely Electrooculography (EOG) signals. An Arduino board's embedded system with an ATmega2560 microcontroller has been utilized to create the EOG signal acquisition circuit.

M. K. Hussein et al. [17] suggest that a reliable driver detection system be created to warn the motorist. This study reviews the research on numerous methods for detecting driver drowsiness, including but not limited to physical methods that look for traits such as head movement, eye-blinking rate, yawning, and eye state (closed or opened). The degree of driver drowsiness is also assessed using physiologically based methods that look at EEG, ECG, PPG, Heart Rate Variability, EOG, and EMG signals. A vehicle-based method of measuring driver drowsiness uses the standard deviation of lane position (SDLP) and steering wheel movement (SWM) to monitor and regulate the vehicle.

The experiments mentioned above were successful in detecting driver drowsiness, but they used single models or systems that were attached to the body and may injure the wearer of the device. Re-using certain of these functionalities in our system gives results that are both speedy and accurate.

III. PROPOSED METHOD

Detection methods are categorized as subjective and objective; objective detection continuously tracks the driver's physiological state and driving-behavior traits. Additionally, objective detection is separated into two categories, contact and non-contact [8]. Because systems that do not require expensive cameras or intrusive sensing enable the use of the device in more cars, the non-contact technique is more affordable and practical than the contact technique [9]. We make use of the dlib library, which provides a model pre-trained on facial datasets that is further used for identification of facial structures in the detected face region. As observed in Fig. 1, the areas of the eyes, mouth, and head are monitored for predicting drowsiness.
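As a sketch of the landmark-based monitoring just described: in the full system the 68 points would come from dlib's `get_frontal_face_detector()` and a `shape_predictor` trained on the iBUG 300-W dataset; the snippet below only shows how those points split into the monitored regions, using the standard 0-based index ranges of that layout (the helper name is ours).

```python
# Index ranges of the monitored regions in the 68-point iBUG 300-W layout
# used by dlib's facial landmark predictor (0-based, end-exclusive).
LANDMARK_SLICES = {
    "right_eye": slice(36, 42),  # points 37-42 in 1-based numbering
    "left_eye": slice(42, 48),   # points 43-48
    "mouth": slice(48, 68),      # points 49-68
}

def extract_regions(landmarks):
    """Split a list of 68 (x, y) landmark points into eye and mouth regions."""
    assert len(landmarks) == 68, "expected the 68-point dlib landmark layout"
    return {name: landmarks[s] for name, s in LANDMARK_SLICES.items()}
```

Each eye region yields the six points later fed into the eye-aspect-ratio computation of Section III-B.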

Fig. 1. Flowchart for detecting driver drowsiness.

A. Face Landmark Localization

Face landmark localization is implemented through image processing using OpenCV. For face [18] and eye detection, the Haar-Cascade Classifier algorithm is applied [10]. Detection happens inside a detection window. Face alignment, head pose estimation, face swapping, blink detection, and yawn detection have all been done effectively using facial landmarks. In our proposed method, we make use of facial recognition in a frame by primarily identifying the facial landmarks. The 68 (x, y)-coordinates are used to map the facial structures on the face, and we use the dlib library to estimate their locations. The coordinate indexes can be seen in Fig. 2. This is tested using appropriate data as mentioned in Section IV, Table I.

Fig. 2. Facial landmark recognition. [19]

As observed in Fig. 2 and Table II, detecting facial landmarks is therefore a two-step process:
Step 1 – Localize the face in the image.
Step 2 – Recognize the facial features on the face.

B. Eye detection

Six (x, y)-coordinates are used to represent an eye, beginning at the left corner (as if you were staring at the subject) and moving clockwise [10]. The dlib facial landmark predictor was trained on the 68-point iBUG 300-W dataset, which contains these annotations.

Fig. 3. Calculating eye aspect ratio. [21]

Based on Fig. 3, the crucial idea is that there is a correlation between the width and the height of these coordinates. With its help, we create an equation that depicts the connection between these coordinates, i.e., the Eye Aspect Ratio (EAR), which can be measured as

EAR = (A + B) / (2 × C)    (1)

W.r.t. eq. (1), the vertical eye landmark distances (A and B) and the horizontal eye landmark distance (C) are computed as Euclidean distances between the corresponding (x, y) coordinates, as listed below. After establishing the EAR for a single eye, we calculate the EAR for the other eye, and the total EAR is computed as the average of the two. To produce an acoustic alert and report the driver's drowsiness, we set a threshold for the number of frames and for the aspect ratio. Here, we optimally assign the number of frames to be monitored as 48, which is roughly 3-4 seconds. If the eyes appear to be closed for more than the specified threshold, an acoustic alert is fired [11]. Below are the threshold ranges set up for monitoring EAR; they are calculated by counting the minimum number of frames passed and computing the time for each frame. Similarly, the aspect ratio threshold is set by comparison with previous related works. The test case for verifying this functionality is mentioned in Section IV, Table III.

EYE_AR_THRESH = 0.27
EYE_AR_CONSEC_FRAMES = 48

Vertical landmarks
A (p2 - p6) = Euclidean distance (eye[1], eye[5])
B (p3 - p5) = Euclidean distance (eye[2], eye[4])
Horizontal landmark
C (p1 - p4) = Euclidean distance (eye[0], eye[3])
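A minimal sketch of eq. (1) and the thresholds above, using the distances A, B, and C just defined (the function and variable names are illustrative):

```python
from math import dist  # Euclidean distance between two 2D points

EYE_AR_THRESH = 0.27       # aspect-ratio threshold from the text
EYE_AR_CONSEC_FRAMES = 48  # roughly 3-4 seconds of frames

def eye_aspect_ratio(eye):
    """eye: six (x, y) points p1..p6, starting at the left corner, clockwise."""
    A = dist(eye[1], eye[5])    # vertical distance p2-p6
    B = dist(eye[2], eye[4])    # vertical distance p3-p5
    C = dist(eye[0], eye[3])    # horizontal distance p1-p4
    return (A + B) / (2.0 * C)  # eq. (1)
```

The value is computed per eye and averaged over both; the alert fires once the average stays below EYE_AR_THRESH for EYE_AR_CONSEC_FRAMES consecutive frames.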

C. Yawn detection

To determine the yawning parameter, the aspect ratio of the mouth (MAR) is computed.

Fig. 4. Calculating mouth aspect ratio. [19]

As we can observe in Fig. 4, by using the Euclidean distances between the horizontal and vertical landmarks that are detected, we can arrive at the formula for the Mouth Aspect Ratio:

MAR = (A + B + C) / (2 × D)    (2)

where A, B, and C are the Euclidean distances between the vertical mouth landmark pairs and D is the Euclidean distance between the horizontal mouth landmark pair.

From eq. (2) and Fig. 4, it is plain to see that the mouth's aspect ratio is virtually zero when the mouth is closed, as it is in the first 80 frames. The mouth aspect ratio marginally increases when the mouth is slightly open [13]. When the mouth is wide open, however, most likely for yawning, as from frame 80 onward, the mouth aspect ratio is noticeably high [23]. After determining the MAR, we set threshold values for deriving the acoustic alert. The test case for verifying this functionality is mentioned in Section IV, Table IV.

MOUTH_AR_THRESH = 0.2
MOUTH_AR_CONSECUTIVE_FRAMES = 15

D. Head Tilt Detection

The position and orientation of an object in relation to a camera are referred to as its pose in computer vision. You can alter the pose by shifting the object's position in relation to the camera or the camera's position in relation to the object. The pose estimation problem discussed here is commonly abbreviated as the Perspective-n-Point problem, or PnP [14]. The objective of this challenge is to determine the pose of an object given a calibrated camera, n 3D point locations on the object, and their matching 2D projections in the image, which can be pictured in Fig. 5.

Fig. 5. Comparison of 3D points to 2D points on the face. [22]

There are only two types of motion a 3D rigid object can make in relation to a camera:
1. Translation: Translation is the process of moving the camera from its present 3D location (x, y, z) to a new 3D location (x', y', z'). Translation has three degrees of freedom, one along each of the X, Y, and Z directions, and is expressed as the vector (x' - x, y' - y, z' - z).
2. Rotation: The camera can also be rotated about the same three axes, so a rotation also has three degrees of freedom. Rotation can be represented in a variety of ways: Euler angles (roll, pitch, and yaw), a rotation matrix, or a rotation axis and angle.

Therefore, collecting six numbers, three for translation and three for rotation, is necessary to estimate the pose of a 3D object.

The threshold criteria for determining the head position are set as below. The test case for verifying this functionality is mentioned in Section IV, Table V.

HEAD_COUNTER_FRAMES = 30
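The six-number pose just described (a translation vector plus three rotation angles) can be sketched with NumPy. In the running system these numbers would come from a PnP solver such as OpenCV's `solvePnP`, as in [20]; the composition below only illustrates the parameterization, and the function names are ours.

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Rotation matrix from Euler angles in radians: Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_pose(points, translation, euler):
    """Apply a 6-DoF pose (three translation numbers, three rotation numbers)
    to an N x 3 array of 3D model points."""
    R = euler_to_matrix(*euler)
    return np.asarray(points) @ R.T + np.asarray(translation)
```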
IV. RESULTS AND DISCUSSION
Different methods for identifying driver somnolence have been discovered as a result of the literature review, and various forms of data are used as input for their algorithms. In the initial step of our suggested process, we employ a camera for face streaming. The video consists of a sequence of frames, from which the status of the face is primarily detected, as can be visualized in Fig. 6.
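The per-frame decision logic implied by Sections III-B and III-C (EAR and MAR compared against their thresholds over consecutive frames) can be sketched as a small counter; the class name is illustrative and the head-tilt counter is omitted for brevity:

```python
EYE_AR_THRESH = 0.27
EYE_AR_CONSEC_FRAMES = 48
MOUTH_AR_THRESH = 0.2
MOUTH_AR_CONSECUTIVE_FRAMES = 15

class DrowsinessMonitor:
    """Counts consecutive frames with closed eyes (low EAR) or an open mouth
    (high MAR) and reports drowsiness once either count crosses its threshold."""

    def __init__(self):
        self.eye_frames = 0
        self.mouth_frames = 0

    def update(self, ear, mar):
        """Feed one frame's EAR and MAR; returns True when the alert should fire."""
        self.eye_frames = self.eye_frames + 1 if ear < EYE_AR_THRESH else 0
        self.mouth_frames = self.mouth_frames + 1 if mar > MOUTH_AR_THRESH else 0
        return (self.eye_frames >= EYE_AR_CONSEC_FRAMES
                or self.mouth_frames >= MOUTH_AR_CONSECUTIVE_FRAMES)
```

A frame with open eyes (or a closed mouth) resets the corresponding counter, so only sustained eye closure or sustained yawning triggers the acoustic alert.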

Fig. 6. Driver is not drowsy, mouth closed, eyes open.

Then, the system can detect the head-tilt position of the user's input face. The system can detect if the face is tilted to the right, which is visualized in Fig. 7.

Fig. 7. User input's face is tilted towards the right.

Similarly, the system can detect if the user's input face is tilted to the left, which is visualized in Fig. 8.

Fig. 8. User input's face is tilted towards the left.

The face will be located in the video in a way that won't interfere with lighting-related accuracy in facial detection. According to Fig. 6, if the face is recognised, the facial landmark detection task is carried out and extracts the region of the eyes and mouth, passing the test case listed in Table VI. When the eye is found, the system would discover that the eye aspect ratio for examining the eyelids has stayed low for a long time; we would then notice the yawn, after which a drowsiness alert is fired, which is visualized in Fig. 9.

Fig. 9. Driver is drowsy, mouth closed, eyes open.

Python, dlib, and OpenCV were used to build the system's functionality and efficiency. An alarm will sound to notify the driver if they are too tired, which is tested with input parameters in Table VII and can be depicted in Fig. 10.

Fig. 10. Driver is drowsy, mouth closed, eyes open.

After issuing the alarm sound, we will also send an email and text message to the registered user via their mobile phones, which can be depicted in Fig. 11 and Fig. 12.
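The e-mail part of this alert can be assembled with Python's standard library; the subject line and message text here are our own, and the actual SMTP delivery (and the SMS gateway) are omitted:

```python
from email.message import EmailMessage

def build_drowsiness_alert(to_addr, ear, mar):
    """Build the alert e-mail; sending it (e.g. via smtplib.SMTP) is omitted."""
    msg = EmailMessage()
    msg["Subject"] = "Drowsiness alert"
    msg["To"] = to_addr
    msg.set_content(
        f"Drowsiness detected for the registered driver.\n"
        f"EAR = {ear:.2f}, MAR = {mar:.2f}. Please check on the driver."
    )
    return msg
```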

Fig. 11. Alerting the user via SMS.

Fig. 12. Alerting the user via e-mail.

The test cases are used to verify the functionalities of the same.

TABLE I. TESTING FUNCTIONALITIES OF THE SYSTEM – VIDEO CAPTURE
Test case no. | Input | Expected output | Actual output | Result
1 | Web camera index | Video feed from selected camera | Video displayed from selected camera | Successful

TABLE II. TESTING FUNCTIONALITIES OF THE SYSTEM – FACIAL LANDMARK RECOGNITION
Test case no. | Input | Expected output | Actual output | Result
2 | Input image to be processed and facial landmarks dataset | Display facial features and plotting on to the input image | Image with all the face features plotted and highlighted | Successful

TABLE III. TESTING FUNCTIONALITIES OF THE SYSTEM – EXTRACTING EYE FEATURES
Test case no. | Input | Expected output | Actual output | Result
3 | Input image and facial landmarks | Left and right eye extracted | Both the eyes were extracted successfully | Successful

TABLE IV. TESTING FUNCTIONALITIES OF THE SYSTEM – EXTRACTING MOUTH FEATURES
Test case no. | Input | Expected output | Actual output | Result
4 | Input image and facial landmarks | Mouth points extracted | Mouth points extracted successfully from the input image | Successful

TABLE V. TESTING FUNCTIONALITIES OF THE SYSTEM – EXTRACTING HEAD TILT FEATURES
Test case no. | Input | Expected output | Actual output | Result
5 | Input image and facial landmarks | Plot head pose | Head pose plotted and extracted | Successful

TABLE VI. TESTING FUNCTIONALITIES OF THE SYSTEM – OUTPUT DISPLAY OF MAR, EAR AND HEAD POSE
Test case no. | Input | Expected output | Actual output | Result
6 | MAR, EAR and head pose input | Plotted facial features on the user's input face | Facial features plotted on the user's input face | Successful

TABLE VII. TESTING FUNCTIONALITIES OF THE SYSTEM – AUDIO ALERT FIRING
Test case no. | Input | Expected output | Actual output | Result
7 | MAR, EAR values | Audio output when values exceed threshold | Audio output heard when values exceeded | Successful

V. CONCLUSION

To detect fatigue, a framework is developed that locates and keeps track of the driver's head, mouth, and eye movements. To locate the eyes, the framework employs a combination of template-based matching and feature-based matching. During tracking, the framework decides whether the motorist is looking forward or not, as well as whether their eyes are open. When the eyelids are shut for a long time, a notice indication is issued as a bell or alarm message. The framework keeps an eye on the driver's mouth, checks to see if he or she is yawning, and determines whether the head is tilted or not. Using all of this information, we can determine whether the driver is fatigued. If the driver is drowsy, the framework sends an SMS and an email to the contact that has been given by the driver.

This project is an ongoing area of study that is still being expanded and improved by researchers. It has numerous applications, including gauging students' levels of concentration during classes and lectures. Future work on this topic will make use of blockchain technology's decentralised database feature to make the data available for businesses to monitor their drivers with a high level of privacy and security.

REFERENCES

[1] M. A. Assari and M. Rahmati, "Driver drowsiness detection using face expression recognition," 2018 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), 2018, pp. 337-341.
[2] A. A. Rahman, W. Saleem and V. V. Iyer, "Driving Behavior Profiling and Prediction in KSA using Smart Phone Sensors and MLAs," 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), 2019, pp. 34-39, doi: 10.1109/JEEIT.2019.8717533.
[3] Vani V, "Driver Behavior Analysis using Machine Learning Algorithms – A survey," Recent Trends in Cloud Computing and Web Engineering, HBRP Publications, Vol. 4, No. 1, 2022.
[4] A. Kumar and R. Patra, "Driver drowsiness monitoring system using visual behaviour and machine learning," 2018 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), 2018, pp. 339-344.
[5] Ameen Aliu Bamidele, Kamilia Kamardin, Nur Syazarin Natasha Abd Aziz, Suriani Mohd Sam, Irfanuddin Shafi Ahmed, Azizul Azizan, Nurul Aini Bani and Hazilah Mad Kaidi, "Non-intrusive Driver Drowsiness Detection based on Face and Eye Tracking," International Journal of Advanced Computer Science and Applications (IJACSA), 10(7), 2019.
[6] Roopalakshmi, R., "Driver Drowsiness Detection System Based on Visual Features," 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), IEEE, 2018.
[7] Jun-Juh Yan, Hang-Hong Kuo, Ying-Fan Lin and Teh-Lu Liao, "Real-Time Driver Drowsiness Detection System Based on PERCLOS and Grayscale Image Processing," 2016 International Symposium on Computer, Consumer and Control (IS3C), pp. 365-389.
[8] Bharadwaj, P., CN, A., Patel, T. S., & BR, K. (2019). Drowsiness Detection and Accident Avoidance System in Vehicles.
[9] I. Chatterjee and A. Sharma, "Driving Fitness Detection: A Holistic Approach For Prevention of Drowsy and Drunk Driving using Computer Vision Techniques," 2018 South-Eastern European Design Automation, Computer Engineering, Computer Networks and Society Media Conference (SEEDA_CECNSM), 2018, pp. 1-6.
[10] Huynh, X. P., Park, S. M. and Kim, Y. G., "Detection of driver drowsiness using 3D deep neural network and semi-supervised gradient boosting machine," in Computer Vision - ACCV 2016 Workshops, Lecture Notes in Computer Science, vol. 10118, Springer Verlag, 2017, pp. 134-145.
[11] Chellappa, A., Sushmanth Reddy, M., Ezhilarasie, R., Kanimozhi Suguna, S., & Umamakeswari, A. (2018). Fatigue Detection Using Raspberry Pi 3. International Journal of Engineering & Technology, 7(2.24), 29-32.
[12] Lashkov, Igor, Kashevnik, Alexey, Shilov, Nikolay, Parfenov, Vladimir and Shabaev, Anton (2019). Driver Dangerous State Detection Based on OpenCV & Dlib Libraries Using Mobile Video Processing, pp. 74-79, doi: 10.1109/CSE/EUC.2019.00024.
[13] L. D. S. Cueva and J. Cordero, "Advanced Driver Assistance System for the drowsiness detection using facial landmarks," 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), 2020, pp. 1-4.
[14] M. Hedi Baccour, F. Driewer, T. Schäck and E. Kasneci, "Camera-based Driver Drowsiness State Classification Using Logistic Regression Models," 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020.
[15] O. Rigane, K. Abbes, C. Abdelmoula and M. Masmoudi, "A Fuzzy Based Method for Driver Drowsiness Detection," 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), 2017.
[16] A. A. Hayawi and J. Waleed, "Driver's Drowsiness Monitoring and Alarming Auto-System Based on EOG Signals," 2019 2nd International Conference on Engineering Technology and its Applications (IICETA), 2019.
[17] M. K. Hussein, T. M. Salman, A. H. Miry and M. A. Subhi, "Driver Drowsiness Detection Techniques: A Survey," 2021 1st Babylon International Conference on Information Technology and Science (BICITS), 2021.
[18] Vani Vasudevan and Mohan Sellappa Gounder, "Advances in Sports Video Summarization – A Review Based on Cricket Videos," in Advances and Trends in Artificial Intelligence. From Theory to Practice: 34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE 2021), Kuala Lumpur, Malaysia, Proceedings, Part II, Springer-Verlag, Berlin, Heidelberg, pp. 347-359, 2021.
[19] B. K. Savaş and Y. Becerikli, "Real Time Driver Fatigue Detection Based on SVM Algorithm," 2018 6th International Conference on Control Engineering & Information Technology (CEIT), 2018, pp. 1-4, doi: 10.1109/CEIT.2018.8751886.
[20] https://learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
[21] A. Pondit, A. Dey and A. Das, "Real-time Driver Monitoring System Based on Visual Cues," 2020 6th International Conference on Interactive Digital Media (ICIDM), 2020, pp. 1-6, doi: 10.1109/ICIDM51048.2020.9339604.

[22] Sri Mounika, T.V.N.S.R., Phanindra, P.H., Sai Charan, N.V.V.N., Kranthi Kumar Reddy, Y. and Govindu, S. (2022). Driver Drowsiness Detection Using Eye Aspect Ratio (EAR), Mouth Aspect Ratio (MAR), and Driver Distraction Using Head Pose Estimation. In: Tuba, M., Akashe, S., Joshi, A. (eds) ICT Systems and Sustainability. Lecture Notes in Networks and Systems, vol 321. Springer, Singapore.
[23] https://hackaday.io/project/27552-blinktotext/log/68360-eye-blink-detection-algorithms

