February, 2022
Driver Drowsiness Detection System
By
Sayed Hassan Abbas Kazmi and Zakria Fazal Shinwari
CERTIFICATE
(Supervisor)
Dr. Hina Saeeda
Date:
February, 2022
Revision History
Compiled By Checked By Date Reason for Change Version
Hassan Ms. Hina Saeeda 2nd Mar 2021 Initial Version 1.0
Hassan Ms. Hina Saeeda 2nd Mar 2021 Changes in References 1.1
Hassan Ms. Hina Saeeda 2nd Mar 2021 Format Changes 1.2
Hassan Ms. Hina Saeeda 8th Mar 2021 Supervisor Comments 1.3
Zakria Ms. Hina Saeeda 17th Mar 2021 FYP Evaluators’ Comments 1.4
Zakria Ms. Hina Saeeda 6th Apr 2021 Supervisor Comments 1.5
Zakria Ms. Hina Saeeda 9th Apr 2021 Document Revision 1.6
Hassan Ms. Hina Saeeda 1st Jun 2021 Chap 3 & 4 Guidelines 1.7
Zakria Ms. Hina Saeeda 7th Jul 2021 FYP-1 Final Defense Comments 1.8
Hassan Ms. Hina Saeeda 10th Jul 2021 Supervisor Comments 1.9
Hassan Ms. Hina Saeeda 7th Feb 2022 Supervisor Comments 2.0
Hassan Ms. Hina Saeeda 8th Feb 2022 Final Version 2.1
Project Overview
Driver fatigue is, in general, very difficult to measure or observe, unlike alcohol
and drugs, which have clear indicators and readily available tests. As a result,
fatigue causes widespread accidents, many of them fatal. Probably the best
remedies for this problem are raising awareness of fatigue-related accidents and
encouraging drivers to admit fatigue when needed. The former is hard and expensive to
achieve, and the latter is unlikely without the former, since driving for long hours is
financially lucrative.
Hence, there is a need for a system that efficiently recognizes driver drowsiness
and prevents the related accidents, which our project proposes to achieve with minimal
resources. Our approach is to monitor the state of the driver in a way that does not
irritate him or her: the equipment will be installed and fitted so compactly that it
does not intrude on the driver in any way whatsoever. An infrared camera will be placed
near the instrument cluster and will keep track of the driver's face and eyes at all
times. The camera will be connected to the driver's smartphone, which will capture the
detected images of the driver and process them via the application on the smartphone.
The results of those detected and processed images will then trigger different ways of
alerting the driver.
Furthermore, comparison with existing systems yields a few key notes. First, there
is no practically implemented system like ours in Pakistan. Second, the handful of
applications that do exist lack distinguishing features. Our project, for example, adds
an extra feature as the second wave of its alert system: the volume of the car's radio
is turned up so that the driver regains control within two seconds.
With these features, we expect the system to perform efficiently. If it does, we will
do our best to put it into circulation in the Pakistani automobile sector. That way,
our project can start saving actual lives.
Dedication
Firstly, we dedicate our project to the Creator, Allah Almighty; to the one to whom
the world owes its existence, Muhammad (Peace Be Upon Him); and to our beloved parents,
our extremely dedicated and generous teachers, and our supportive friends, whose
prayers always pave the way to success for us.
Acknowledgement
Submitted in partial fulfilment of the requirements for the degree of Bachelor of Software Engineering.
(Student 1) (Student 2)
Sayed Hassan Abbas Kazmi Zakria Fazal Shinwari
Contents
Revision History ii
Dedication iv
Acknowledgements
List of Figures iv
List of Tables v
1 Introduction vi
1.1 Project Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
1.2 Project Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Existing System Description . . . . . . . . . . . . . . . . . . . . . 1
1.2.2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Future System Usage Analysis . . . . . . . . . . . . . . . . . . . . 4
1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Project Novelty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Intended Market of Project . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8 Intended Users of Project . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9 Software Process Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9.1 Process Model Introduction . . . . . . . . . . . . . . . . . . . . . 6
1.9.2 Justification of Proposing the Process Model . . . . . . . . . . . . 6
1.9.3 Steps of Process Model . . . . . . . . . . . . . . . . . . . . . . . . 6
1.10 Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.11 Work Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.11.1 Team Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.11.2 Work Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4.2 Brainstorming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.3 Scrum Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Time Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5 SYSTEM DESIGN 34
5.1 Structure Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.1 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.2 Deployment Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2 Behavioral Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.1 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.2 Communication Diagrams . . . . . . . . . . . . . . . . . . . . . . 37
5.2.3 Sequence Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . 39
7 TEST PLAN 55
7.1 Objective of the Testing Phase . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2 Levels of Tests for Testing Software . . . . . . . . . . . . . . . . . . . . . 55
7.2.1 Unit Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2.2 Integration testing . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2.3 System Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3 Test Management Process . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.1 Design the Test Strategy . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.2 Test Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.3 Test Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.4 Resource Planning . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.5 Plan Test Environment . . . . . . . . . . . . . . . . . . . . . . . . 57
7.3.6 Schedule and Estimation . . . . . . . . . . . . . . . . . . . . . . . 58
7.4 Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
8 CONCLUSION 60
8.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
References 61
List of Figures
1.1 System Level Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 OODA Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
List of Tables
1.1 Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Applications Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Team Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Work Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Chapter 1
Introduction
An increasing number of drivers suffer from a severe lack of daily sleep. Drivers of
all vehicles, from heavy trucks to light cars, face sleep problems that make for a
very unsafe driving experience. This condition, known as drowsiness, leads to
accidents that are often fatal [1]. Neglecting our duty to travel safely has led to
many accidents; following rules and regulations may look like a minor matter, but it
demands our utmost attention. No matter how experienced or inexperienced the hands on
the wheel, a single wave of sleep while driving can be a fatal mistake that takes
people's lives. Drivers often disregard the fact that they are feeling sleepy, and
when they fall asleep while driving they end up in accidents. A person too tired to
drive should not neglect the fact that their negligence might cost their own life or
the lives of others. Having studied these problems, we have designed our system to
implement safety measures and to ensure the solution is socially beneficial and safe.
Many experts have researched driver drowsiness detection; the resulting work is
effective, but not fully applicable in Pakistan. Our system will therefore provide
these services in Pakistan and, once fully developed, we hope to launch it
globally [2].
Hence, there is a need for a system that efficiently recognizes driver drowsiness
and prevents the related accidents, which our project proposes to achieve with minimal
resources. Our approach is to monitor the state of the driver in a way that does not
irritate him or her: the equipment will be installed and fitted so compactly that it
does not intrude on the driver in any way whatsoever. An infrared camera will be
placed near the instrument cluster and will keep track of the driver's face and eyes
at all times. The camera will be connected to the driver's smartphone, which will
capture the detected images of the driver and process them via the application on the
smartphone. The results of those detected and processed images will then trigger
different ways of alerting the driver.
Furthermore, comparison with existing systems yields a few key notes. First, there
is no practically implemented system like ours in Pakistan. Second, the handful of
applications that do exist lack distinguishing features. Our project, for example,
adds an extra feature as the second wave of its alert system: the volume of the car's
radio is turned up so that the driver regains control within two seconds.
With these features, we expect the system to perform efficiently. If it does, we
will do our best to put it into circulation in the Pakistani automobile sector. That
way, our project can start saving actual lives.
• Limitation 1: This application is limited in the sense that it does not provide an
alerting service that informs emergency contacts if there has been an accident.
• Limitation 1: The 1-second timer may be unreliable, in that a person might blink
for longer than one second and thereby trigger the system, e.g. people suffering
from Tourette Syndrome.
• Feature 1: This system fuses all the data gathered from different sensors such as
video, electrocardiography, photoplethysmography, temperature, and a three-axis
accelerometer.
• Feature 2: A fuzzy Bayesian framework that keeps updating itself is used to detect
drowsiness levels.
• Feature 3: Data is transferred to a mobile phone via Bluetooth, which then fakes a
call to alert the driver if drowsiness is detected.
Table 1.2 gives a comparison of the applications reviewed. The parameters selected
for the comparison are listed below.
[Table 1.2: Applications Comparison of DrowsyDet [4], the Android-based system [5],
and the proposed system across five features: CNN models, alarm system, face
detection, image processing, and music adaptability. All of the compared systems
support face detection; the proposed system supports all five features, and music
adaptability is unique to it.]
• Facial landmark marking has a drawback: if the camera unit faces blurry conditions,
drowsiness can go undetected and lead to an accident.
• The accuracy of video analysis can decrease if the equipment is not of high quality,
giving incorrect results.
• Achieving satisfactory, high-quality video-analysis results requires expensive
equipment.
Hence, if this project receives a strong response in the future, its usage could
become global.
1.3 Objectives
• A complete activity-recognition-based system that will help prevent accidents
related to sleep and fatigue.
• A Keras + OpenCV pipeline that will efficiently classify the driver's drowsiness.
• Alerting/waking the driver if the system detects drowsiness.
A visual representation of how the system will work step by step, is shown below in
Figure 1.1.
• Picking Product Owner: This step involves the OODA (Observe, Orient, Decide, Act)
cycle. Observe, as self-explanatory as it is, refers to seeing the situation in a
broader picture, involving all the perspectives present, and making a plan according
to the situation. Orient is the next step, which involves understanding and
analyzing the data gathered in step 1. Decide comes after you have analyzed and
created options for yourself: a solution is chosen according to the plan made in
step 1. The last step, Act, is where the execution of the plan occurs.
Through this cycle, the Product Owner remains engaged, via rapid feedback, in every
increment added to the product. This gives him control over the objectives and
priorities of each increment, or sprint. A feedback loop is thus created, which adds
to the process of innovation and adaptation.
• Building a team: Building a team involves a few basic attributes, such as having
the motivation and ability to create the product, and the flexibility to work toward
customer needs from different perspectives for the best outcome.
• Picking Scrum Master: An experienced person who keeps a check on the team's daily
routines and helps them along the way, much like a Project Manager.
• Creating Product Backlog: An entire list of all product requirements which are
arranged according to their priorities. It changes throughout the entire lifespan of
the product and provides guidance to the team as to what tasks need to be done.
• Specifying and estimating product backlog: According to priorities, the team should
be estimating how many resources a task will consume. This keeps sprints balanced
and provides more focus on how to finish and move on to the next sprint.
• Sprint Planning: This step refers to the amount of time it would take to make a
working increment that can be displayed to the customer. Every sprint begins with a
meeting in which the tasks for that particular sprint are discussed.
• Workflow transparency: This step refers to the workflow transparency that enables
developers or team members to check each other's progress and manage the time
schedule. Members who fall behind are helped by others in order to achieve the
common goal.
• Daily meeting or daily scrum: This step refers to the beating heart of the entire
scrum process. For a brief while, every morning the team has a meeting in which
yesterday’s documentation, today’s work planning and the challenges faced are
discussed.
• Demo: This step refers to the explanation of the entire sprint to the customer, by
the team. This explanation involves showing the customer what has been done and
what is ready right now.
• Retrospective meetup: This step refers to prototyping the system and gathering
customer feedback in the sprint meeting. The team discusses everything that was
achieved, what was lacking, what challenges were faced, and what improvements can be
made, and then plans what is to be carried out in the next scrum meeting.
• Starting next sprint immediately: This step refers to immediately engaging in the
next increment of the plan so that the customer receives the desired output at the
desired time. This makes scrum, scrum.
• An infrared camera will be placed at an angle from which both the eyes and the face
are clearly visible, and will be used to examine the details that invoke the
drowsiness-detection process.
• A dataset is created with images tagged 'Open' or 'Closed' according to the eye
state in each image. The data taken from the model (created by us) will be stored in
a system and used for training, and unwanted data will be removed manually from the
storage location to avoid excessive use of storage.
• A Convolutional Neural Network (CNN), built with Keras, will be used to construct
the system.
• OpenCV (facial and eyes detection).
• TensorFlow (Keras uses TensorFlow as backend).
• Keras (to build our classification model).
• Pygame (to play alarm sound).
• Android Studio.
• Android Device (Android 10 exception).
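As a sketch of how these tools fit together, a small Keras CNN for classifying eye crops as 'Open' or 'Closed' might look as follows. The layer sizes and the 24x24 grayscale input shape are illustrative assumptions, not the project's final architecture:

```python
# Illustrative Keras CNN for binary eye-state classification ('Open' vs.
# 'Closed'). Architecture details are assumptions for the sketch.
from tensorflow import keras
from tensorflow.keras import layers

def build_eye_classifier(input_shape=(24, 24, 1)):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),               # reduce overfitting on small data
        layers.Dense(2, activation="softmax"),  # two classes: Open, Closed
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model like this would be trained on the 'Open'/'Closed' dataset described above and queried once per captured frame.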
SOFTWARE REQUIREMENTS
SPECIFICATION
The Software Requirements Specification document will serve as a written, documented
understanding of the features and functionalities of the system. It will be used to
understand all the requirements gathered from the different stakeholders. The
document is intended to serve as input to the development team and as a basis for
system design. It will define the product scope and help us keep the system on the
right path by clearly identifying any deviation. Using the SRS, developers will be
able to track their progress and understand what they have to develop.
2.1 Introduction
The software requirements phase is where the current system is analyzed from various
aspects and its specifications are elicited.
2.1.2 Audience
This document is written for the audience mentioned below:
• Security: The system will be secure to the extent that the captured images are
stored safely in a database to which only the admin has access. There will be no way
for any outsider or third-party application to interfere with or interrupt the
detection process.
2.4.2 Brainstorming
Brainstorming will help us generate new ideas to solve problems that arise during
development and choose the best methods to solve them. It will broaden our access to
innovative ideas for solving future problems, since a variety of perspectives will
be present. We will use brainstorming as a secondary requirement-elicitation
technique, holding both live and web-based sessions wherever required.
SOFTWARE PROJECT PLAN
• Making sure that all the software quality attributes are up to par by ensuring every
deliverable is checked, passed and approved.
• Making sure all the deadlines are met so that consistency is maintained toward
completing the project timeline.
• Making the final product flexible to accept new alterations so that running and
maintaining it remains at ease.
• The project members will possess the necessary skills, including good knowledge of
the languages and technologies used, for the completion and implementation of this
project.
• All the necessary resources to develop this deep learning system will be available
whenever required which include all the hardware resources as well as the human
resources.
• We assume that our end users will get drowsy on long routes, where our system will
be present to assist them.
• The camera will be positioned correctly to detect the end user's eyes and face at
all times.
• The hardware equipment used in this project is costly; replicating the same
equipment for every end user would be expensive, and maintaining such moving parts
can be difficult.
So, we are going to identify all the risks involved in this project and then prepare
strategies to counter those risks, or at least reduce their severity, in turn
reducing their overall impact on our system.
FUNCTIONAL ANALYSIS AND MODELING
Extensions
1. If the user is new, they must register an account first.
2. If the user forgets their password, they can tap the Forgot Password option to
recover it.
Extensions
1. If the admin forgets their password, they can tap the Forgot Password option to
recover it.
Extensions
1. Once access is gained, the admin can edit data, i.e. perform CRUD operations.
Extensions
1. This use case has no extensions.
Extensions
1. The face-detection algorithm starts creating the AOI on the captured live feed.
Extensions
1. The face-detection algorithm starts detecting the AOI on the captured live feed.
Extensions
1. Classification of whether the end user's eyes are open or closed will be done.
Extensions
1. A live score determines whether the end user is drowsy; once drowsiness is
determined, a buzzer alarm will sound.
2. If the score rises further, the music-system volume will increase.
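The score-based escalation in the extensions above can be sketched as follows. The step sizes, thresholds, and action names are illustrative assumptions, not values or hooks from the project:

```python
# Sketch of the two-wave alert logic: a running drowsiness score rises while
# the eyes are closed and decays while they are open; the buzzer sounds at
# the first threshold and the music-system volume is raised at the second.
# All numbers here are illustrative assumptions.

def update_score(score, eyes_closed, step=1, floor=0):
    """Advance the drowsiness score by one frame's worth of evidence."""
    return score + step if eyes_closed else max(floor, score - step)

def alert_actions(score, buzzer_at=15, volume_at=25):
    """Map the current score to the alert 'waves' that should be active."""
    actions = []
    if score >= buzzer_at:
        actions.append("sound_buzzer")
    if score >= volume_at:
        actions.append("raise_radio_volume")
    return actions

score = 0
for closed in [True] * 30:       # 30 consecutive closed-eye frames
    score = update_score(score, closed)
print(alert_actions(score))      # ['sound_buzzer', 'raise_radio_volume']
```

Because the score decays when the eyes reopen, brief closures never reach the first threshold, while sustained closure walks through both alert waves in order.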
Extensions
1. After successful connection, the music system should be
connected to the application.
SYSTEM DESIGN
5.1 Structure Diagrams
5.1.1 Class Diagram
Figure 5.1 shows the class diagram of this project, which clearly indicates the rela-
tionship between different classes and their functionalities.
Figure 5.5 shows the second communication diagram of this project, which indicates
the communication between forgot password and database modules.
Figure 5.6 shows the third communication diagram of this project, which indicates
the communication between registration module and database.
Figure 5.7 shows the fourth communication diagram of this project, which indicates
the communication between the app and detection module.
5.2.3.2 Logout
Figure 5.9 shows the second sequence diagram of this project, which shows the user
logging off the app.
5.2.3.4 Registration
Figure 5.11 shows the fourth sequence diagram of this project, which shows the user
registering.
5.2.3.5 Connectivity
Figure 5.12 shows the fifth sequence diagram of this project, which shows the
connectivity process immediately after detection starts.
5.2.3.6 Detection
Figure 5.13 shows the last sequence diagram of this project, which shows the
detection process.
SYSTEM INTERFACE AND PHYSICAL DESIGN
TEST PLAN
7.1 Objective of the Testing Phase
Essentially, the testing phase delivers a bug-free, good-quality application. This is
achieved through step-by-step testing of all modules, which also supports the
verification and validation process. Our objectives are of exactly that nature: to
test our application under all scenarios so that we can deliver a well-built,
bug-free, easy-to-use product. The most important of these objectives are listed
below:
• The first goal of this phase will be to ensure that the final software solution meets
the specified requirements. We’ll put the product through all tests and make sure
it meets all of the specified requirements.
• Our next goal will be to find and fix bugs in the system at the earliest possible
stage of development. As a software tester, bug identification and eradication will
be the most important goal since it leads to a bug-free application.
• Our next goal of software testing is to maintain program quality and dependability.
To maintain software quality, we must keep bugs to a bare minimum.
• Our last goal of the testing phase will be to provide all stakeholders with
complete knowledge of all technical and other hazards, allowing them to make
informed decisions.
The test strategy we have used in our project is unit testing, as part of Code
Coverage in Android Studio: loading test data onto our modules one by one and
testing their functionality as we proceed.
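On the Python (detection) side of the system, the same unit-testing pattern applies: each helper is exercised in isolation with known inputs before modules are integrated. The `eyes_closed_ratio` function below is a hypothetical example of our own, not code from the project:

```python
# Illustration of the unit-testing strategy: a single hypothetical helper
# tested in isolation with known inputs. Run with: python -m unittest <module>
import unittest

def eyes_closed_ratio(frame_labels):
    """Fraction of frames in which the eyes were classified as closed."""
    if not frame_labels:
        return 0.0
    return frame_labels.count("closed") / len(frame_labels)

class EyesClosedRatioTest(unittest.TestCase):
    def test_all_open(self):
        self.assertEqual(eyes_closed_ratio(["open"] * 4), 0.0)

    def test_mixed(self):
        self.assertEqual(
            eyes_closed_ratio(["open", "closed", "closed", "open"]), 0.5)

    def test_empty_feed(self):
        self.assertEqual(eyes_closed_ratio([]), 0.0)
```

Each test feeds the helper a known frame sequence and asserts the exact expected ratio, mirroring how module-level test data is loaded and checked one module at a time.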
• Test Server: Firebase Console will act as a test server where we can cross-check
the data being fed to the application against the data shown in the database.
• Test Android Device: An Android Device will be used to test every module of our
application, all the integration between every activity and the proper functioning
of our detection system.
• Test IDE: Android Studio will be our IDE for testing, as the application is built
on it, along with Code Coverage, which allows us to perform unit testing on the
application directly from Android Studio.
• Bug Reporting Tool: Android Studio's own Logcat does an excellent job of reporting
all bugs, even at run time. Its output can be converted into plain text for
inclusion in MS Word.
CONCLUSION
8.1 Conclusion
An increasing number of drivers suffer from a severe lack of daily sleep. Drivers of
all vehicles, from heavy trucks to light cars, face sleep problems that make for a
very unsafe driving experience. This condition, known as drowsiness, leads to
accidents that are often fatal [1]. Neglecting our duty to travel safely has led to
many accidents; following rules and regulations may look like a minor matter, but it
demands our utmost attention. No matter how experienced or inexperienced the hands
on the wheel, a single wave of sleep while driving can be a fatal mistake that takes
people's lives.
Drivers often disregard the fact that they are feeling sleepy, and when they fall
asleep while driving they end up in accidents. A person too tired to drive should
not neglect the fact that their negligence might cost their own life or the lives of
others. Having studied these problems, we designed our system to implement safety
measures and to ensure the solution is socially beneficial and safe. Many experts
have researched driver drowsiness detection; the resulting work is effective, but
not fully applicable in Pakistan. Our system will therefore provide these services
in Pakistan and, once fully developed, we hope to launch it globally [2].
Hence, if this project receives a strong response in the future, its usage could
become global.
References
[1] Knipling, R.R. and Wang, J.S., 1994. Crashes and fatalities related to driver drowsi-
ness/fatigue. Washington, DC: National Highway Traffic Safety Administration.
[2] Romdhani, S., Torr, P., Scholkopf, B. and Blake, A., 2001, July. Computationally ef-
ficient face detection. In Proceedings Eighth IEEE International Conference on Com-
puter Vision. ICCV 2001 (Vol. 2, pp. 695-700). IEEE.
[3] Leger, D., 1994. The cost of sleep-related accidents: a report for the National Com-
mission on Sleep Disorders Research. Sleep, 17(1), pp.84-93.
[4] Yu, C., Qin, X., Chen, Y., Wang, J. and Fan, C., 2019, August. Drowsy-
Det: A Mobile Application for Real-time Driver Drowsiness Detection. In
2019 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced
and Trusted Computing, Scalable Computing and Communications, Cloud and
Big Data Computing, Internet of People and Smart City Innovation (Smart-
World/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 425-432). IEEE.
[5] Tombeng, M.T., Kandow, H., Adam, S.I., Silitonga, A. and Korompis, J., 2019,
August. Android-Based Application To Detect Drowsiness When Driving Vehicle. In
2019 1st International Conference on Cybernetics and Intelligent System (ICORIS)
(Vol. 1, pp. 100-104). IEEE.
[6] Lee, B.G. and Chung, W.Y., 2012. A smartphone-based driver safety monitoring
system using data fusion. Sensors, 12(12), pp.17536-17552.
[7] Vijayan, V. and Sherly, E., 2019. Real time detection system of driver drowsiness
based on representation learning using deep neural networks. Journal of Intelligent
and Fuzzy Systems, 36(3), pp.1977-1985.
[8] Ngxande, M., Tapamo, J.R. and Burke, M., 2017, November. Driver drowsiness detec-
tion using behavioral measures and machine learning techniques: A review of state-of-
art techniques. In 2017 Pattern Recognition Association of South Africa and Robotics
and Mechatronics (PRASA-RobMech) (pp. 156-161). IEEE.
[9] Vesselenyi, T., Moca, S., Rus, A., Mitran, T. and Tătaru, B., 2017, October. Driver
drowsiness detection using ANN image processing. In IOP Conference Series: Mate-
rials Science and Engineering (Vol. 252, No. 1, p. 012097). IOP Publishing.
[10] Jabbar, R., Shinoy, M., Kharbeche, M., Al-Khalifa, K., Krichen, M. and Barkaoui,
K., 2020, February. Driver drowsiness detection model using convolutional neural
networks techniques for android application. In 2020 IEEE International Conference
on Informatics, IoT, and Enabling Technologies (ICIoT) (pp. 237-242). IEEE.
[11] Mehta, S., Dadhich, S., Gumber, S. and Jadhav Bhatt, A., 2019, February. Real-time
driver drowsiness detection system using eye aspect ratio and eye closure ratio. In Pro-
ceedings of international conference on sustainable computing in science, technology
and management (SUSCOM), Amity University Rajasthan, Jaipur-India.
[12] Yu, J., Park, S., Lee, S. and Jeon, M., 2018. Driver drowsiness detection using
condition-adaptive representation learning framework. IEEE transactions on intelli-
gent transportation systems, 20(11), pp.4206-4218.
[13] Scroggins, R., 2014. SDLC and development methodologies. Global Journal of Com-
puter Science and Technology.
[14] Mahalakshmi, M. and Sundararajan, M., 2013. Traditional SDLC vs scrum method-
ology–a comparative study. International Journal of Emerging Technology and Ad-
vanced Engineering, 3(6), pp.192-196.
[15] Chen, L., Hoey, J., Nugent, C.D., Cook, D.J. and Yu, Z., 2012. Sensor-based ac-
tivity recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part C
(Applications and Reviews), 42(6), pp.790-808.