
Driver Drowsiness Detection System

Sayed Hassan Abbas Kazmi (1780163)


Zakria Fazal Shinwari (1780166)

Supervised by: Dr. Hina Saeeda

In partial fulfillment of the requirements for the degree of


Bachelor of Science (Software Engineering)
Shaheed Zulfikar Ali Bhutto Institute of Science and Technology
Islamabad, Pakistan

February, 2022
Driver Drowsiness Detection System
By

Sayed Hassan Abbas Kazmi and Zakria Fazal Shinwari

CERTIFICATE

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE


REQUIREMENTS FOR THE DEGREE OF BACHELOR OF SCIENCE IN
SOFTWARE ENGINEERING (BSSE).

This is to certify that the above students' thesis has been completed to my
satisfaction and, to my belief, its standard is appropriate for submission for
evaluation. I have also conducted a plagiarism test of this thesis using HEC-prescribed
software and found a similarity index of 12%, which is within the
permissible limit set by the HEC for the BSSE degree thesis. I have also found
the thesis to be in a format recognized by SZABIST for the BSSE thesis.

(Supervisor)
Dr. Hina Saeeda

Date:

Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of


Science and Technology, Islamabad Campus

February, 2022
Revision History
Compiled By Checked By Date Reason for Change Version
Hassan Ms. Hina Saeeda 2nd Mar 2021 Initial Version 1.0
Hassan Ms. Hina Saeeda 2nd Mar 2021 Changes in References 1.1
Hassan Ms. Hina Saeeda 2nd Mar 2021 Format Changes 1.2
Hassan Ms. Hina Saeeda 8th Mar 2021 Supervisor Comments 1.3
Zakria Ms. Hina Saeeda 17th Mar 2021 FYP Evaluators’ Comments 1.4
Zakria Ms. Hina Saeeda 6th Apr 2021 Supervisor Comments 1.5
Zakria Ms. Hina Saeeda 9th Apr 2021 Document Revision 1.6
Hassan Ms. Hina Saeeda 1st Jun 2021 Chap 3 & 4 Guidelines 1.7
Zakria Ms. Hina Saeeda 7th Jul 2021 FYP-1 Final Defense Comments 1.8
Hassan Ms. Hina Saeeda 10th Jul 2021 Supervisor Comments 1.9
Hassan Ms. Hina Saeeda 7th Feb 2022 Supervisor Comments 2.0
Hassan Ms. Hina Saeeda 8th Feb 2022 Final Version 2.1

Project Overview
Driver fatigue, unlike alcohol and drugs, which have clear indicators and readily
available tests, is very difficult to measure or observe. As a result, fatigue causes
widespread accidents, which in most cases are fatal. Probably the best remedies for this
problem are raising awareness of fatigue-related accidents and encouraging drivers to
admit fatigue when needed. The former is hard and expensive to achieve, and the latter
is unlikely without the former, since driving for long hours is financially lucrative.
Hence, there is a need for a system that efficiently recognizes driver drowsiness
and prevents related accidents, which our project proposes to achieve with minimal
resources. Our approach monitors the state of the driver without irritating him or her:
the equipment is installed in such a compact way that it does not intrude on the driver
in any way. An infrared camera placed near the instrument cluster keeps track of the
driver's face and eyes at all times. The camera is connected to the driver's smartphone,
which captures the detected images and processes them through the application on the
smartphone. The results of the processed images then trigger different ways of alerting
the driver.
Furthermore, comparison with existing systems yields a few key observations. First,
there is no practically implemented system like ours in Pakistan. Second, the handful
of applications that do exist lack distinguishing features. Our project, in contrast, adds
a second wave to the alert system: the volume of the car's radio is turned up so that
the driver regains attention within about two seconds.
With such features, the system is expected to perform efficiently. If it does, we will
do our best to put it into circulation within the Pakistan automobile sector, so that our
project can start saving actual lives.
Dedication
We dedicate this project first to our Creator, Allah Almighty, and to him to whom
the world owes its existence, Muhammad (Peace Be Upon Him). We also dedicate it to
our beloved parents, our dedicated and generous teachers, and our supportive friends,
whose prayers always pave the way to success for us.

Acknowledgement
The requirements for the degree of Bachelor of Software Engineering.

(Student 1) (Student 2)
Sayed Hassan Abbas Kazmi Zakria Fazal Shinwari
Contents
Revision History ii

Project Overview iii

Dedication iv

Acknowledgements

List of Figures iv

List of Tables v

1 Introduction vi
1.1 Project Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
1.2 Project Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Existing System Description . . . . . . . . . . . . . . . . . . . . . 1
1.2.2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Future System Usage Analysis . . . . . . . . . . . . . . . . . . . . 4
1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Project Novelty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Intended Market of Project . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.8 Intended Users of Project . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9 Software Process Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.9.1 Process Model Introduction . . . . . . . . . . . . . . . . . . . . . 6
1.9.2 Justification of Proposing the Process Model . . . . . . . . . . . . 6
1.9.3 Steps of Process Model . . . . . . . . . . . . . . . . . . . . . . . . 6
1.10 Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.11 Work Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.11.1 Team Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.11.2 Work Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 SOFTWARE REQUIREMENTS SPECIFICATION 11


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.1 Document Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.2 Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Non-Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1 Software Quality Attributes . . . . . . . . . . . . . . . . . . . . . 12
2.3.2 Other Non-Functional Requirements . . . . . . . . . . . . . . . . 13
2.4 Requirement Gathering Techniques Used . . . . . . . . . . . . . . . . . . 13
2.4.1 Focus Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13


2.4.2 Brainstorming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.3 Scrum Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Time Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3 SOFTWARE PROJECT PLAN 15


3.1 Deliverables of the Project . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Software Project Management Plan . . . . . . . . . . . . . . . . . . . . 16
3.2.1 Project Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.2 Project Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Managerial Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.1 Management Objectives and Priorities . . . . . . . . . . . . . . . 19
3.3.2 Assumptions and Constraints . . . . . . . . . . . . . . . . . . . . 20
3.4 Project Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4.1 Risk Management Plan . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4.2 Risk Management Activities . . . . . . . . . . . . . . . . . . . . . 22

4 FUNCTIONAL ANALYSIS AND MODELING 24


4.1 Use Case Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.1.1 User Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.1.2 Individual Actor Use Cases . . . . . . . . . . . . . . . . . . . . . 27
4.2 Functional Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.1 Entity Relationship Diagram . . . . . . . . . . . . . . . . . . . . . 32
4.2.2 Data Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 32

5 SYSTEM DESIGN 34
5.1 Structure Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.1 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.2 Deployment Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2 Behavioral Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.1 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2.2 Communication Diagrams . . . . . . . . . . . . . . . . . . . . . . 37
5.2.3 Sequence Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . 39

6 SYSTEM INTERFACE AND PHYSICAL DESIGN 42


6.0.1 System User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . 42
6.0.2 Firebase Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

7 TEST PLAN 55
7.1 Objective of the Testing Phase . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2 Levels of Tests for Testing Software . . . . . . . . . . . . . . . . . . . . . 55
7.2.1 Unit Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2.2 Integration testing . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2.3 System Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3 Test Management Process . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.1 Design the Test Strategy . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.2 Test Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.3 Test Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.4 Resource Planning . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.3.5 Plan Test Environment . . . . . . . . . . . . . . . . . . . . . . . . 57
7.3.6 Schedule and Estimation . . . . . . . . . . . . . . . . . . . . . . . 58
7.4 Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

8 CONCLUSION 60
8.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

References 61
List of Figures
1.1 System Level Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 OODA Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3.1 Gantt Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18


3.2 Work Breakdown Structure . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Work Breakdown Structure . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

4.1 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24


4.2 ERD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3 DFD Level 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4 DFD Level 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5 DFD Level 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.1 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34


5.2 Deployment Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.3 Activity Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.4 Communication Diagram: Login . . . . . . . . . . . . . . . . . . . . . . . 37
5.5 Communication Diagram: Forgot Password . . . . . . . . . . . . . . . . . 37
5.6 Communication Diagram: Registration . . . . . . . . . . . . . . . . . . . 38
5.7 Communication Diagram: Detection . . . . . . . . . . . . . . . . . . . . 38
5.8 Sequence Diagram: Login . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.9 Sequence Diagram: Logout . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.10 Sequence Diagram: Forgot Password . . . . . . . . . . . . . . . . . . . . 40
5.11 Sequence Diagram: Registration . . . . . . . . . . . . . . . . . . . . . . . 40
5.12 Sequence Diagram: Connectivity . . . . . . . . . . . . . . . . . . . . . . 41
5.13 Sequence Diagram: Detection . . . . . . . . . . . . . . . . . . . . . . . . 41

6.1 Slider 1: Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42


6.2 Slider 2: Face and Eye Detection . . . . . . . . . . . . . . . . . . . . . . 43
6.3 Slider 3: Supports USB Webcam . . . . . . . . . . . . . . . . . . . . . . 44
6.4 Slider 4: Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.5 User Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.6 Successful Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.7 Email Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.8 Application Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.9 Detection Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.10 Detection Module: Permissions . . . . . . . . . . . . . . . . . . . . . . . 50
6.11 Detection Module: Select USB Webcam . . . . . . . . . . . . . . . . . . 51
6.12 Detection Module: Open Eyes . . . . . . . . . . . . . . . . . . . . . . . . 52
6.13 Detection Module: Closed Eyes . . . . . . . . . . . . . . . . . . . . . . . 53
6.14 Firebase Realtime Database . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.15 Firebase: Email Verification . . . . . . . . . . . . . . . . . . . . . . . . . 54

List of Tables
1.1 Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Applications Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Team Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Work Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


2.2 Time Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3.1 Project Deliverables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


3.2 Milestones Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Documentation Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4 Resources Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.5 Quality Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.6 Roles and Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.7 Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

4.1 User Story 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


4.2 User Story 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.3 User Story 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4 User Story 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 User Story 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.6 Use-case 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.7 Use-case 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.8 Use-case 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.9 Use-case 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.10 Use-case 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.11 Use-case 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.12 Use-case 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.13 Use-case 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.14 Use-case 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.15 Use-case 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

7.1 Resource Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57


7.2 Schedule and Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.3 Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Chapter 1

Introduction
An increasing number of drivers suffer from a severe lack of daily sleep. Drivers of all
automobiles, from heavy trucks to light vehicles, face sleep problems that lead to a very
unsafe driving experience. This condition is labelled drowsiness, and it frequently leads
to accidents, many of them fatal [1]. Failing to observe our duties for a safer journey has
led to many accidents. Following rules and regulations may look like a minor matter,
but it demands our utmost acceptance. No matter how experienced or inexperienced a
driver's hands are, a single wave of sleep while driving can be a fatal mistake that ends
up taking people's lives. Drivers often disregard the fact that they are feeling sleepy, and
when they fall asleep at the wheel they end up in accidents. A person who is too tired to
drive should not neglect the fact that their negligence might cost their own life or the
lives of others. After studying these problems and the overall situation, our system is
designed to implement the relevant safety measures and to ensure that the solutions are
socially beneficial and safe. Many experts have researched driver drowsiness detection.
The existing research is efficient enough but not fully applicable in Pakistan. Thus, our
system will provide these services in Pakistan and, hopefully, once it is fully developed,
we will be able to launch it globally [2].

1.1 Project Purpose


The proposed system will efficiently detect the drowsiness level of the driver. It is a
technology designed for the safety of drivers who face a lack of sleep due to excessive
driving. The system is based on activity recognition. An infrared camera placed near
the instrument cluster scans the driver's eyes and face; when the driver closes his eyes
for several seconds, the alert system initiates. The level of drowsiness is detected using
activity recognition, OpenCV, and Keras. OpenCV collects images from the camera and
sends them to a deep-learning model that identifies whether the user's eyes are "Closed"
or "Open". The process consists of the following steps: take an image as input from the
infrared camera; detect the face in the image and create an area of interest (AOI); detect
the eyes within the AOI and pass them through the classifier, which labels them as open
or closed; and finally, calculate a score over the images to decide whether the person is
drowsy.
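The final scoring step can be sketched as a simple running counter over per-frame eye states. This is an illustrative sketch only, not the project's actual implementation; the increment, decrement, and threshold values are assumptions, not figures from this thesis.

```python
def update_score(score, eyes_closed, inc=1, dec=1):
    """Raise the drowsiness score while the eyes are closed, lower it
    while they are open. The score never drops below zero; the step
    sizes are illustrative assumptions."""
    return score + inc if eyes_closed else max(0, score - dec)

def is_drowsy(score, threshold=15):
    """Flag drowsiness once the score crosses an assumed threshold."""
    return score > threshold

# Example: a sustained run of closed-eye frames pushes the score
# past the threshold, while isolated blinks would not.
score = 0
for closed in [True] * 20:   # 20 consecutive closed-eye frames
    score = update_score(score, closed)
print(is_drowsy(score))
```

A counter of this shape tolerates brief blinks (the score decays back toward zero) while reacting to sustained closures.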

Hence, there is a need for a system that efficiently recognizes driver drowsiness
and prevents related accidents, which our project proposes to achieve with minimal
resources. Our approach monitors the state of the driver without irritating him or her:
the equipment is installed in such a compact way that it does not intrude on the driver
in any way. An infrared camera placed near the instrument cluster keeps track of the
driver's face and eyes at all times. The camera is connected to the driver's smartphone,
which captures the detected images and processes them through the application on the
smartphone. The results of the processed images then trigger different ways of alerting
the driver.

Furthermore, comparison with existing systems yields a few key observations. First,
there is no practically implemented system like ours in Pakistan. Second, the handful
of applications that do exist lack distinguishing features. Our project, in contrast, adds
a second wave to the alert system: the volume of the car's radio is turned up so that
the driver regains attention within about two seconds.

With such features, the system is expected to perform efficiently. If it does, we will
do our best to put it into circulation within the Pakistan automobile sector, so that our
project can start saving actual lives.

1.2 Project Scope


Pakistan's automobile sector and emergency departments have faced problems [3] due
to driver fatigue. This system focuses on handling driver fatigue unobtrusively: for
example, while travelling on long routes such as the Motorway, a driver who grows weary
will be alerted by the system to take a break. Alongside the alert feature, the system
can turn the radio volume up to increase driver attention. Implemented correctly, this
system could serve the twin cities for a decade to come.

Table 1.1: Abbreviations


Sr. # Abbreviation Definition
1 AOI Area of Interest
2 SRS Software Requirements Specifications
3 CNN Convolutional Neural Network
4 SDLC Software Development Life Cycle
5 OODA Observe, Orient, Decide, Act

1.2.1 Existing System Description


DrowsyDet: A Mobile Application for Real-time Driver Drowsiness Detection[4]
is an unobtrusive application, since it only needs a mobile device and no internet
connection. A large dataset of drowsy drivers, covering several head poses, lighting
angles, and both male and female subjects, was acquired to test its techniques. The
features of the application are listed below.

• Feature 1: Face recognition and landmark recognition are achieved through different
models.
• Feature 2: Three CNN models are used to identify facial drowsiness and eye state.
• Feature 3: The driver's drowsiness state is derived from the combined results of all
models.

The limitations of the application are listed below.

• Limitation 1: This application does not provide an alerting service that informs
emergency contacts if there has been an accident.

Android-Based Application To Detect Drowsiness When Driving Vehicle[5]

is a system for detecting a driver's drowsiness on the road, in order to avoid fatal
accidents. The features are listed below.

• Feature 1: Facial recognition through the mobile phone's camera, followed by a voice
alert if detection is successful.
• Feature 2: Images are processed through OpenCV, specifically the Haar Cascade
classifier.

The limitations of the application are listed below.

• Limitation 1: The one-second timer may be inadequate: a person might blink for
longer than one second and falsely trigger the system, for example, people suffering
from Tourette syndrome.
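The limitation above comes down to choosing a closure-duration threshold. A hypothetical sketch of duration-based filtering follows; the cut-off values here are illustrative assumptions, not figures from the cited paper.

```python
def classify_closure(duration_s, blink_max_s=0.5, drowsy_min_s=2.0):
    """Classify an eye-closure event by its duration in seconds.

    Thresholds are illustrative: ordinary blinks last well under half
    a second, while a sustained closure suggests drowsiness. Events in
    between are treated as ambiguous rather than triggering an alarm.
    """
    if duration_s <= blink_max_s:
        return "blink"
    if duration_s >= drowsy_min_s:
        return "drowsy"
    return "ambiguous"
```

Separating an "ambiguous" band between the blink and drowsiness thresholds is one way to avoid false alarms for drivers with unusually long blinks.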

A Smartphone-Based Driver Safety Monitoring System Using Data Fusion[6]

is another technique for detecting a driver's drowsiness level, but with discrete data
sources such as bio-signal variation, in-vehicle temperature, and vehicle speed. The
system runs on Android without any extra equipment. The features are listed below.

• Feature 1: The system fuses data gathered from different sensors: video, electro-
cardiography, photoplethysmography, temperature, and a three-axis accelerometer.
• Feature 2: A Fuzzy Bayesian framework, which keeps updating itself, is used to
detect drowsiness levels.
• Feature 3: Data is transferred to a mobile phone via Bluetooth, which then fakes
a call to alert the driver if drowsiness is detected.

The limitations of the application are listed below.

• Limitation 1: The application is outdated and might require adaptation to run on
newer Android versions.

Table 1.2 gives a comparison of the applications reviewed. The parameters selected
for the comparison are listed below.

• Parameter 1: CNN Models.


• Parameter 2: Alarm System.
• Parameter 3: Face Detection.
• Parameter 4: Image Processing.
• Parameter 5: Music Adaptability.

1.2.2 Literature Review


Deep CNN: A Machine Learning Approach for Driver Drowsiness Detection
Based on Eye State[7] is a research paper examining driver drowsiness based on a
study of Indian drivers. A variety of machine learning approaches are used to detect
drowsiness via face and eye detection.

Features extracted from Literature Review:

• The 'Viola-Jones' face detection algorithm is used.


Table 1.2: Applications Comparison

Feature              DrowsyDet [4]   Android-Based [5]   Data Fusion [6]   Proposed System
CNN Models           Yes             No                  No                Yes
Alarm System         No              No                  Yes               Yes
Face Detection       Yes             Yes                 Yes               Yes
Image Processing     Yes             No                  No                Yes
Music Adaptability   No              No                  No                Yes

• A CNN classifier with a SoftMax output layer is used to identify drowsiness.


• Accuracy of up to 96 percent on the collective dataset.

Limitations found in Literature Review:

• End product not applicable yet.
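The SoftMax stage mentioned above converts a network's raw class scores into probabilities, from which the more likely eye state is picked. A minimal stdlib-only sketch; the logit values and the two-class Closed/Open labelling are illustrative assumptions:

```python
from math import exp

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)                      # shift for stability
    exps = [exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def eye_state(logits, labels=("Closed", "Open")):
    """Return the most probable label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Invented logits favouring the "Closed" class.
label, p = eye_state([2.3, 0.4])
```

In the real classifier these logits would come from the final dense layer of the CNN; only the probability-and-argmax step is shown here.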

Driver drowsiness detection using behavioural measures and machine learning
techniques: A review of state-of-art techniques[8] is a research paper discussing
different driver behaviours that can be used to detect drowsiness levels accurately
with the right algorithms.

Features extracted from Literature Review:

• Support vector machines.

Limitations found in Literature Review:

• Support vector machines are not efficient enough.

Driver drowsiness detection using ANN image processing[9] is a research


paper discussing driver drowsiness detection through EEG and EOG signal processing
along with image detection.

Features extracted from Literature Review:

• Electroencephalogram (EEG) signal capturing.


• EOG signal processing through image capturing.

Limitations found in Literature Review:

• EEG may also be difficult to read if movements generate excessive artifacts.


• EEG might not be practical in real-world driving conditions.

Driver Drowsiness Detection Model Using Convolutional Neural Networks
Techniques for Android Application[10] is a research paper discussing driver
drowsiness detection using a CNN model, which is lighter than other classification
models.

Features extracted from Literature Review:

• Face recognition followed by a CNN classification.

Real-Time Driver Drowsiness Detection System Using Eye Aspect Ratio
and Eye Closure Ratio[11] is a system that uses face detection to construct its two
key features, Eye Aspect Ratio and Eye Closure Ratio, from which drowsiness is
detected in real time.

Features extracted from Literature Review:

• Data Procurement was used to collect data samples from 50 volunteers.


• Facial landmark marking is used to measure the distance between the upper and
lower eyelids to determine whether the driver is sleepy.

Limitations found in Literature Review:

• Facial landmark marking has a drawback when the camera's view is blurred, which
can cause the system to miss drowsiness and lead to an accident.
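The Eye Aspect Ratio named above has a standard closed form: the sum of the two vertical eyelid distances divided by twice the horizontal eye width. A short sketch with invented landmark coordinates (the coordinates are for illustration only):

```python
from math import dist  # Euclidean distance, Python 3.8+

def eye_aspect_ratio(p):
    """EAR over six eye landmarks p1..p6, given as (x, y) tuples.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|), where p1 and p4 are the
    eye corners and the other points lie on the eyelids. The ratio
    drops toward zero as the eyelids close.
    """
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

# Invented landmark sets for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
```

Thresholding this ratio over consecutive frames is what lets such systems distinguish a closing eye from an open one in real time.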

Driver Drowsiness Detection Using Condition-Adaptive Representation


Learning Framework[12] is a research paper proposing a condition-adaptive repre-
sentation learning framework for driver drowsiness detection based on a 3D deep convo-
lutional neural network.

Features extracted from Literature Review:

• The framework consists of four models: spatio-temporal representation learning,


scene condition understanding, feature fusion, and drowsiness detection.

Limitations found in Literature Review:

• The quality of video analysis decreases if the equipment is not of high quality,
which can give incorrect results.
• Achieving satisfactory video-analysis results requires expensive equipment.

1.2.3 Future System Usage Analysis


As specified in the project overview and scope, once this project is approved and in
the implementation phase, the goal would be to reach as far as the Pakistan automobile
sector. Once it reaches that level, this system can be patented and applied to each car
coming out of the factory.

Hence, if the future holds great response to this project, the usage could become
global.

1.3 Objectives
• A complete Activity Recognition based system that will help in the prevention of
accidents related to sleep/fatigue.
• A Keras + OpenCV pipeline that will efficiently classify driver drowsiness.
• Alerting/Waking the driver if the system detects drowsiness.

1.4 Problem Statement


Driver fatigue[3], unlike alcohol and drugs, which have clear indicators and readily
available tests, is very difficult to measure or observe. As a result, fatigue causes
widespread accidents, which in most cases are fatal. Probably the best remedies for this
problem are raising awareness of fatigue-related accidents and encouraging drivers to
admit fatigue when needed. The former is hard and expensive to achieve, and the latter
is unlikely without the former, since driving for long hours is financially lucrative.
Hence, there is a need for a system that efficiently recognizes driver drowsiness and
prevents related accidents, which our project proposes to achieve with minimal
resources.

1.5 Proposed Solution


The proposed solution is our system, which will efficiently detect the driver's drowsiness
level. It is a technology designed for the safety of drivers who face a lack of sleep due
to excessive driving. The system is based on activity recognition. An infrared camera
placed near the instrument cluster scans the driver's eyes and face; when the driver
closes his eyes for several seconds, the alert system initiates. The level of drowsiness is
detected using activity recognition, OpenCV, and Keras. OpenCV collects images from
the camera and sends them to a deep-learning model that identifies whether the user's
eyes are "Closed" or "Open".

1.6 Project Novelty


Driver drowsiness detection systems have been on the market for some time, but our
application distinguishes itself with a new feature. Alongside alerting the driver with a
buzzer, it also turns the car radio's volume up, making the driver attentive within a
second or two. This feature is what makes our project novel.
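The two-stage alert described here can be sketched as a simple escalation policy. The stage thresholds below are hypothetical values chosen for illustration; the real system would tune them against driver reaction times:

```python
def alert_actions(closed_seconds):
    """Escalating alerts for a sustained eye closure.

    Stage 1: sound the buzzer; Stage 2: also raise the radio volume.
    The 2 s and 4 s cut-offs are illustrative assumptions, not values
    from the thesis.
    """
    actions = []
    if closed_seconds >= 2.0:
        actions.append("sound_buzzer")
    if closed_seconds >= 4.0:
        actions.append("raise_radio_volume")
    return actions
```

On Android, the second action would map to a media-volume adjustment through the platform's audio APIs; only the escalation logic is sketched here.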

1.7 Intended Market of Project


As aforementioned, this system's intended market will firstly be the national market
of Pakistan, namely the Pakistan automobile industry, and, given a good response,
international markets afterwards. As for the worth of this system: implemented and
maintained correctly, it can become the easiest and most efficient way of saving lives
from drowsiness-related accidents.

1.8 Intended Users of Project


The main users of this product will be drivers of all kinds of vehicles, whether a bus,
a jeep, or a car. The easy installation of this product will allow any ordinary person to
fit the system in their vehicle. Just as Tesla has taken a long time to perfect its driver
drowsiness detection, the finalized production version of our system can be deployed in
everyday cars, meaning the traffic of Pakistan.

1.9 Software Process Model


The Software Process Model that will be used is Agile[12]. Agile is a methodology
that lets a team efficiently manage a project via an iterative and incremental approach.

1.9.1 Process Model Introduction


The development of a good software system requires the developers and other teams
to follow specific standards, rules, and regulations, which provide a systematic way of
developing the system. The SDLC that will be used is Agile[12]. Agile is a methodology
that lets a team efficiently manage a project via an iterative and incremental approach.
Dividing a big project into a number of smaller tasks allows for better understanding and
communication between both the team and the stakeholders, which leads to a higher
rate of improvement throughout the entire project. As stated in the reference, the Agile
model is the best SDLC to follow for this project. Within it, we will use the Scrum
Methodology[14].

1.9.2 Justification of Proposing the Process Model


As a brief overview, Scrum makes it easier for teams to manage and work on
bigger, complex projects. Scrum offers teams a collective pool of roles, tools, and
meetings, which in turn helps in managing the workload. The advantages of using the
agile Scrum methodology include flexibility and adaptability, creativity and innovation,
lower costs, quality improvement, organizational synergy, and employee and customer
satisfaction. The biggest advantage, however, is its flexible nature: after every
iteration the team receives feedback from stakeholders, which allows any change
whatsoever to be made effectively and efficiently. Along with that, the deep-learning
model we will use is built with Keras using a CNN. Activity recognition[15] is involved
as well, since we will be tracking the activities of the driver through the infrared
camera.
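The Keras/CNN classifier mentioned above could take roughly the following shape. This is a minimal sketch assuming small grayscale eye crops (24x24 here) and two output classes ("Open"/"Closed"); the layer sizes are illustrative, not the final trained architecture.

```python
# Illustrative Keras CNN for the "Open"/"Closed" eye classifier.
# The 24x24 grayscale input and layer sizes are assumptions for this
# sketch, not the final trained architecture.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,
                                     Dense, Dropout)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(24, 24, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(2, activation="softmax"),   # two classes: "Open" vs "Closed"
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then fit this model on the labeled eye-image dataset before it is wired into the detection loop.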

A visual representation of how the system will work step by step, is shown below in
Figure 1.1.

1.9.3 Steps of Process Model


Scrum Methodology consists of eleven steps, which are briefly explained below:

• Picking Product Owner: This step involves the OODA (Observe, Orientate, Decide,
Act) cycle. Observe, as self-explanatory as it is, refers to seeing the situation in
a broader picture, involving all the perspectives that are present and making a
plan according to the situation. Orientate is the next step after observing, which

Figure 1.1: System Level Block Diagram

involves understanding and analyzing the data gathered in step 1. In the Decide
step, after you have analyzed the data and created options for yourself, a solution
is chosen according to the plan that was made in step 1. The last step, Act, is
where the execution of the plan occurs.
Through this cycle, the Product Owner remains engaged in every increment that
is added to the product through rapid feedback. This gives the Product Owner
control over the objectives and priorities of each increment, or sprint. This way,
a feedback loop is created, which feeds the process of innovation and adaptation.
• Building a team: Building a team involves a few basic attributes, such as having
the motivation and ability for product creation and the flexibility to work towards
customer needs from different perspectives for the best outcome.
• Picking Scrum Master: An experienced person who keeps a check on the team's
daily routines and helps them along the way, much like a Project Manager.
• Creating Product Backlog: An entire list of all product requirements, arranged
according to their priorities. It changes throughout the entire lifespan of the
product and guides the team as to which tasks need to be done.
• Specifying and estimating product backlog: According to the priorities, the team
estimates how many resources each task will consume. This keeps sprints balanced
and keeps the focus on how to finish one sprint and move on to the next.
• Sprint Planning: This step refers to the amount of time it would take to make a
working increment which can be displayed to the customer. Every sprint begins
with a meeting in which the tasks for that particular sprint are discussed.

Figure 1.2: OODA Loop

• Workflow transparency: This step enables the developers or team members to
check each other's progress and manage the time schedule. Members who are
lagging behind are helped by others in order to achieve the common goal.
• Daily meeting or daily scrum: This step is the beating heart of the entire Scrum
process. For a brief while every morning, the team holds a meeting in which
yesterday's work, today's plan, and the challenges faced are discussed.
• Demo: This step refers to the team walking the customer through the entire
sprint, showing the customer what has been done and what is ready right now.
• Retrospective meetup: This step refers to prototyping the system and collecting
feedback from the customer in the sprint meeting. The discussion covers what was
achieved, what was lacking, what challenges were faced, and what improvements
can be made. Plans for the next Scrum meeting, and what is to be carried out,
are also discussed.
• Starting next sprint immediately: This step refers to immediately engaging in the
next increment of the plan so that the customer receives the desired output at the
desired time. This is what makes Scrum, Scrum.

1.10 Tools and Technologies


Table 1.3 shows the languages and tools used in this project.

• An infrared camera will be placed at an angle from where both the eyes and face
are clearly visible, and then will be used to examine details which will invoke the
process of drowsiness detection.
• A dataset created from images tagged 'Open' or 'Closed' according to the data in
the images. The data for the model (created by us) will be stored in a system, the
model will be trained on it, and unwanted data will be removed manually from
the storage location to avoid excessive use of storage.

• Keras will be used to build the system with a Convolutional Neural Network
(CNN).
• OpenCV (facial and eyes detection).
• TensorFlow (Keras uses TensorFlow as its backend).
• Keras (to build our classification model).
• Pygame (to play alarm sound).
• Android Studio.
• Android Device (Android 10 exception).

Table 1.3: Tools and Technologies


Languages Tools APIs
Java OpenCV OpenCV-Python
Python TensorFlow Keras
Firebase
Android Studio

1.11 Work Plan


1.11.1 Team Structure
As table 1.4 shows, Zakria Fazal Shinwari and Sayed Hassan Abbas Kazmi have both
been appointed as per their areas of expertise and management skills.

Table 1.4: Team Structure


Sr. # Team Members Role
1 Zakria Fazal Shinwari Project Leader
2 Sayed Hassan Abbas Kazmi Requirement Analyst
3 Zakria Fazal Shinwari QA Engineer
4 Sayed Hassan Abbas Kazmi Technical Writer
5 Zakria Fazal Shinwari Front End Developer
6 Sayed Hassan Abbas Kazmi Front End Developer
7 Zakria Fazal Shinwari Database Designer
8 Sayed Hassan Abbas Kazmi Database Designer
9 Zakria Fazal Shinwari Back End Developer
10 Sayed Hassan Abbas Kazmi Tester

1.11.2 Work Distribution


As table 1.5 shows, the work has been distributed according to time and expertise,
allotting equal responsibilities to each member.

Table 1.5: Work Distribution


Sr. # Work Assignment Hassan Zakria
1 Project Proposal ✓ ✓
2 Requirement Analysis ✓ ✓
3 Use Cases ✓
4 System Architecture ✓
5 Project Report (Chap 1,2) ✓
6 Front End/UI Designing ✓
7 Database ✓
8 Project Report (Chap 3,4) ✓
9 Back End Designing ✓
10 SD and SSD ✓
11 SRS ✓ ✓
12 Test Cases ✓ ✓
13 Testing ✓ ✓
14 Project Report (Chap 5,6) ✓
15 User Manual ✓
Chapter 2

SOFTWARE REQUIREMENTS SPECIFICATION
The Software Requirements Specification document will serve as a written and docu-
mented understanding of the features and functionalities of the system. It will be used
to understand all the requirements gathered from different stakeholders. This document
is intended to serve as input to the development team and as a basis for system design.
It will define the product scope and help keep the system on the right path by clearly
identifying any deviation from it. Developers will be able to track their work progress
and understand what they have to develop by using the SRS.

2.1 Introduction
The software requirements phase is where the system is analyzed from various aspects
and its specifications are elicited.

2.1.1 Document Scope


This document will specify all the functional and non-functional requirements based
on which the system will be developed. All the techniques used to gather the requirements
are specified, and the process followed in applying them is described in detail. This
document will also cover the constraints on the system, the environments in which the
system will be operational, and all the quality attributes the system will have.

2.1.2 Audience
This document is written for the audience mentioned below:

• Researchers, so that research on our system can be conducted.


• Industrialists, so that industry individuals can learn about our system for future
investments.
• Developers, so that they can write the code of the system according to the require-
ments.
• Designers, so that they can design the system according to the requirements.
• Testers, so that they can test the system according to the requirements.
• Project Managers, so that they can keep the project on the right path.
• Integrators, so that they do not make any mistakes while integrating the system.
• Implementers, so that they can implement the right decisions.
• Academia, so that the community concerned with the pursuit of research and edu-
cation can influence the system.


2.2 Functional Requirements


2.2.1 Functional Requirements
Table 2.1 discusses all the Functional Requirements of the system.

Table 2.1: Functional Requirements


Req. ID Requirement Statement
FR1 Users can register and sign in/out.
FR2 System will connect to the camera to capture images.
FR3 System will use eye and face detection to monitor drowsiness.
FR4 Face detection will allow the system to create an AOI (area of interest).
FR5 Camera will detect the eyes and face within the AOI and feed them to the classifier.
FR6 Classifier will determine whether the eyes are open or closed.
FR7 System will calculate a score to check whether the driver is drowsy or not.
FR8 System will play a buzzer sound once drowsiness is detected.
FR9 System will turn up the car's radio volume to engage the driver.
FR10 System will connect to the music system via Bluetooth.
FR11 Only the admin side will be able to access and edit data in the database.
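The scoring logic behind FR7 and FR8 can be sketched as follows: the score climbs while the eyes are classified "Closed" and decays while they are "Open", and crossing a threshold marks the driver as drowsy. All constants here are illustrative assumptions, not tuned values.

```python
# Sketch of the FR7/FR8 scoring rule: increment the score while the eyes
# are "Closed", decrement while "Open", alert above a threshold.
# The threshold value is an illustrative assumption.

SCORE_THRESHOLD = 15   # assumed; roughly one second of closed eyes at 15 fps

def update_score(score, eye_state):
    """Increment on 'Closed', decrement on 'Open', never below zero."""
    return score + 1 if eye_state == "Closed" else max(score - 1, 0)

def is_drowsy(score):
    return score > SCORE_THRESHOLD

score_drowsy = 0
for state in ["Closed"] * 20:            # sustained closure drives the score up
    score_drowsy = update_score(score_drowsy, state)

score_blink = 0
for state in ["Closed", "Open"] * 20:    # normal blinking keeps the score low
    score_blink = update_score(score_blink, state)
```

The decay on "Open" frames is what distinguishes ordinary blinking from sustained eye closure.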

2.2.2 Software Requirements


• OS: Windows and Android (Android 10 exception)
• Language: Python and Java
• Front-end: Android Studio.
• Back-end: Android Studio, Firebase, Keras, OpenCV, TensorFlow.

2.2.3 Hardware Requirements


• Processor: Core i3 or higher. Snapdragon 600/ Exynos 7780 or higher.
• RAM: 2 GB minimum.
• Hard Disk: 1 GB minimum.
• Infrared Camera.

2.3 Non-Functional Requirements


2.3.1 Software Quality Attributes
• Performance: The system will perform efficiently due to the accuracy of the classifier.
This is justified by the fact that the dataset used to train the CNN model is of
rich context, which makes the detection very accurate.
• Reliability: The system will be reliable enough to avoid false alarms; the buzzer
will sound only when real drowsiness is detected.
• Usability: The system will be user-friendly to the extent that it is simply a matter
of plug and play. The application will start running once it is connected to the
camera.

• Security: The system will be secure to the extent that the captured images will be
stored safely in the database, to which only the admin has access. There will be
no way for any outsider or third-party application to interfere with or interrupt
the detection process.
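One way to realize this admin-only restriction, assuming the Firebase database listed among the project tools, is a security-rules file of roughly the following shape. The `ADMIN_UID` placeholder is hypothetical and would be replaced by the admin account's actual user ID.

```json
{
  "rules": {
    ".read":  "auth != null && auth.uid === 'ADMIN_UID'",
    ".write": "auth != null && auth.uid === 'ADMIN_UID'"
  }
}
```

With rules of this shape, unauthenticated clients and non-admin users are denied both read and write access to the stored image and profile data.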

2.3.2 Other Non-Functional Requirements


• Legal and Copyright: No image, audio, video, or other content that is under the
copyright of any other person or organization will be used in the system.
• Platform: This system will be based on Android, so it will be available on almost
all Android phones, with OS versions ranging from Android 5 to 11+.

2.4 Requirement Gathering Techniques Used


Below mentioned are the Requirement Gathering Techniques that we will be using in
this project.

2.4.1 Focus Groups


A focus group will be helpful because our main target group is drivers. Gathering a
group of drivers to test the system and provide us with feedback will be the primary
source for eliciting the requirements of our system. This feedback will give us the
current and future needs of our customers, which we can then focus on properly, along
with additional input on existing requirements.

2.4.2 Brainstorming
Brainstorming will help us generate new ideas to solve the problems occurring in
development and choose the best methods to solve those problems. It will enhance our
reach towards innovative solutions to future problems, since a variety of different
perspectives will be present. We will use brainstorming as a secondary requirement
elicitation technique and will hold both live and web-based brainstorming sessions
wherever required.

2.4.3 Scrum Stories


Story elicitation is one of the Scrum methodology's techniques; it converts the stories
of individuals, such as customers, into Scrum feedback, which in turn becomes part of
a sprint (or an entire one). These sprints directly lead to achieving customer needs.

2.5 Time Frame


Table 2.2 shows the phases of requirements along with their durations.

Table 2.2: Time Frame


Sr. # Phase Duration
1 Requirement Inception 2 days
2 Requirement Elicitation 6 days
3 Requirement Elaboration 3 days
4 Requirement Negotiation 2 days
5 Requirement Specification 6 days
6 Requirement Validation 4 days
7 Requirement Management 4 days
Chapter 3

SOFTWARE PROJECT PLAN


3.1 Deliverables of the Project
Table 3.1 shows all of the project deliverables along with their types (documents,
model files, source code, and a WAV file) and their descriptions.

Table 3.1: Project Deliverables


Sr # Deliverable Type Description
1 SRS Document This deliverable consists of the system requirements and user specifications, along with the different requirement elicitation techniques and strategies used in the project.
2 Architecture & Design Document This deliverable consists of several UML diagrams of the system, which provide a clear understanding of all the components of the project.
3 Database Firebase Server This deliverable consists of the database schema and the Firebase server, which will store image data and help perform log in/out operations.
4 Trained Model Model File This deliverable consists of the model file which is integrated with the detection module to classify images.
5 Classifier Module Source Code This deliverable consists of the haarcascade classifying algorithm that will be used in the detection module.
6 Detection Module Source Code This deliverable consists of the source code that uses the detection algorithm and the trained model to detect drowsiness.
7 User Module Source Code This deliverable consists of all the operations an end-user will be able to perform.
8 Alarm WAV File This deliverable consists of an alarm which is played through the detection module once drowsiness is detected.
9 Test Reports Document This deliverable consists of all the test cases and system behavioral aspects, generated via various testing of the system.
10 Final Product Source Code This deliverable consists of all deliverables in a bundle, including the source code of the android app, the database, and the trained model.
11 User Guide Document This deliverable consists of a detailed guide which will help the end user to use the system with proper ease.


3.2 Software Project Management Plan


Project management is concerned with the activities involved in ensuring that the
system is delivered on time, on schedule, and in agreement with the requirements. The
subheadings below cover all of these activities in detail.

3.2.1 Project Planning


This deals with the planning of the project. A variety of plans are to be drawn up
over the course of this project to guarantee full competency. Below are the plans that
form the basic structure of the project.

3.2.1.1 Milestones Plan


Table 3.2 shows the milestones of this project along with their durations and their
start and end dates.

Table 3.2: Milestones Plan


Sr. # Milestones Duration Start Date End Date
1 Data-set 1 Week 18th Mar 2021 25th Mar 2021
2 Classifier Module 1.5 Weeks 1st Apr 2021 14th Apr 2021
3 Detection Module 3 Weeks 1st Apr 2021 20th Apr 2021
4 Database 5 Weeks 1st Jun 2021 5th Jul 2021
5 User Module 5 Weeks 1st Jun 2021 5th Jul 2021
6 Android App 5 Weeks 1st Jun 2021 5th Jul 2021
7 Music Feature 1.5 Weeks 1st Sep 2021 10th Sep 2021
8 Fine-Tuning 1 Week 16th Sep 2021 22nd Sep 2021

3.2.1.2 Documentation Plan


Table 3.3 shows the documentation of this project along with the duration of their
completion and the starting and ending date.

Table 3.3: Documentation Plan


Sr. # Documentation Duration Start Date End Date
1 SRS 1 Week 1st Apr 2021 7th Apr 2021
2 Design Documents 1.5 Weeks 10th Sep 2021 17th Sep 2021
3 Test Documents 1.5 Weeks 17th Sep 2021 27th Sep 2021
4 Prediction Guide 1.5 Weeks 1st Nov 2021 10th Nov 2021
5 Installation Guide 1.5 Weeks 17th Nov 2021 27th Nov 2021
6 User Guide 1.5 Weeks 17th Nov 2021 27th Nov 2021
CHAPTER 3. SOFTWARE PROJECT PLAN 17

3.2.1.3 Resources Plan


Table 3.4 shows the resources needed for this project along with their costs, the
durations for which the project will need them, and their start and end dates.

Table 3.4: Resources Plan


Sr. # Resource Name Cost Duration Start Date End Date
1 Dataset - 48 Weeks 1st Feb 2021 1st Feb 2022
2 Human Resources - 48 Weeks 1st Feb 2021 1st Feb 2022
3 Hardware Equipment (Infrared Camera, etc.) 20,000 32 Weeks 1st Jun 2021 1st Feb 2022
4 Laptop w/ Extensive GPU (For Deep Learning Models) 185,000 48 Weeks 1st Feb 2021 1st Feb 2022

3.2.1.4 Quality Plan


Table 3.5 shows the quality attributes of this project, along with their testing start dates
and testing end dates, where the system will be tested against these quality attributes.

Table 3.5: Quality Plan


Sr. # Quality Attribute Testing Start Date Testing End Date Description
1 Performance 17th Sep 2021 27th Sep 2021 Testing the performance of this system will help analyze the time it takes to respond to all user commands and how efficiently the detection system works.
2 Reliability 17th Sep 2021 27th Sep 2021 Testing the reliability of this system will help analyze how failure-free the operations of the system are.
3 Usability 17th Sep 2021 27th Sep 2021 Testing the usability of this system will help analyze how interactive the interfaces are, along with the percentage of human-computer interaction applied.
4 Security 17th Sep 2021 27th Sep 2021 Testing the security of this system will help analyze how secure the image data and details of the users are. Access to the database and its connection will also be a part of this testing.

3.2.2 Project Scheduling


3.2.2.1 Gantt Chart
Figure 3.1 shows the different sprints, with their respective tasks and months in which
said tasks will be executed, through which this project will be completed.

Figure 3.1: Gantt Chart

3.2.2.2 Work Breakdown Structure


Figure 3.2 and figure 3.3 show the Work Breakdown Structure of this project, through
different tasks and their sub-tasks along with their start and finish dates. A visual
representation of these tasks is also shown in the figures.

Figure 3.2: Work Breakdown Structure



Figure 3.3: Work Breakdown Structure


3.2.2.3 Critical Path Method
Figure 3.4 shows the Critical Path of this project, which indicates, with calculated
scores, which path the system considers critical during execution.

Figure 3.4: Critical Path
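As a toy illustration of how a critical path is computed, the sketch below finds the longest path through a small task dependency graph. The tasks and durations are hypothetical simplifications, not this project's actual schedule.

```python
# Toy critical-path computation: the project length is the longest
# duration-weighted path through the task dependency graph.
# Tasks and durations here are hypothetical, for illustration only.

tasks = {                      # task: (duration_in_weeks, prerequisites)
    "Dataset":    (1, []),
    "Classifier": (1.5, ["Dataset"]),
    "Detection":  (3, ["Dataset"]),
    "App":        (5, ["Classifier", "Detection"]),
    "Tuning":     (1, ["App"]),
}

def earliest_finish(task, memo={}):
    """Earliest finish time = own duration + latest prerequisite finish."""
    if task not in memo:
        dur, prereqs = tasks[task]
        memo[task] = dur + max((earliest_finish(p) for p in prereqs),
                               default=0)
    return memo[task]

# The critical path runs Dataset -> Detection -> App -> Tuning.
project_length = max(earliest_finish(t) for t in tasks)
```

Any delay to a task on this longest path delays the whole project, which is why those tasks are scheduled first.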

3.3 Managerial Process


3.3.1 Management Objectives and Priorities
Management Objectives are mentioned below:

• An effective implementation of all the project’s objectives.


• The best approach of supervision through absolute guidance and efficiently com-
municating the right decisions.
• Absolute implementation of all the functional and non-functional requirements
stated in the SRS within the specified timeline and resources.


• The entirety of the source code is to be error-free, verified via several different
testing techniques, such as Integration Testing, User Acceptance Testing, and
System Testing.
• Staying true to the documentation process by following originally stated functional-
ities and their constraints to develop a project that will be exactly as documented.
• The final product should be entirely user friendly so that the main goal, which is
to detect driver drowsiness and alert the driver before fatal accidents occur, needs
no introduction.
• To develop a complete Activity Recognition based system that will be efficient in
classifying driver drowsiness.

Management Priorities are mentioned below:

• Making sure that all the software quality attributes are up to par by ensuring every
deliverable is checked, passed and approved.
• Making sure all the deadlines are met so that consistency is maintained towards
the completion of the project timeline.
• Making the final product flexible to accept new alterations so that running and
maintaining it remains at ease.

3.3.2 Assumptions and Constraints


Some project assumptions that were made are mentioned below:

• The project members will possess the necessary skills, including good knowledge of
the languages and technologies used, for the completion and implementation of this
project.
• All the necessary resources to develop this deep learning system will be available
whenever required which include all the hardware resources as well as the human
resources.
• It is our assumption that our end users will get drowsy on long routes, for which
our system will be present to assist them.
• The camera will be positioned correctly to detect the end user's eyes and face at
all times.

Constraints will be as such:

• The hardware equipment used in this project is costly, so replicating the same
equipment for every end user would be impractical, and maintaining such moving
parts can be difficult.

3.4 Project Risk Management


Risk is the probability of something uncertain and unusual happening which affects
the schedule, quality, and performance of the software being developed. Below is the
risk management plan and its related activities.

3.4.1 Risk Management Plan


3.4.1.1 Purpose
The risk management plan contains an investigation of probable risks, with both high
and low impact, along with mitigation strategies to keep the project from being
derailed. It is a crucial aspect of any project: if we fail to recognize any risks
in our project beforehand, they can have an extreme effect on our entire system, and
by then no mitigation will be possible.

So, we are going to identify all the risks involved in this project and then prepare
strategies to counter those risks, or at least reduce their severity, in turn reducing
their overall impact on our system.

3.4.1.2 Roles and Responsibilities


Table 3.6 shows roles and responsibilities of both project members.

Table 3.6: Roles and Responsibilities


Name: Zakria
Title: Project Leader
Email: 1780166@szabist-isb.pk
Roles: QA Engineer, Front End, Database, Back End
Responsibilities: Project Proposal, Requirement Analysis, System Architecture, Front End/UI Designing, Database, Back End Designing, SRS

Name: Hassan
Title: Risk Manager
Email: 1780163@szabist-isb.pk
Roles: Requirement Analyst, Technical Writer, Front End, Database, Tester
Responsibilities: Project Proposal, Requirement Analysis, Use Cases, Project Reports, Database, SD and SSD, SRS, Test Cases, Testing, Test Reports, User Manual

3.4.2 Risk Management Activities


Table 3.7 shows the entire risk activity plan for this project.

Table 3.7: Risk Management

Risk 1: Resources must be available on this project.
Consequences: If resources are not available, it will cause hindrance in the timeline and deliverance.
Category: Project Risk. Impact: High. Probability: 40%. Priority: Very High Risk.
Mitigation Strategy: Gather resources on our own end so that the workflow remains uninterrupted.

Risk 2: Project must meet final deadlines with complete implementation.
Consequences: If final deadlines are not met with full project competency, it will directly and harshly affect the success of our project.
Category: Project Risk. Impact: High. Probability: 25%. Priority: High Risk.
Mitigation Strategy: Each and every phase has been planned according to the critical path method so that all deadlines are met with full competency.

Risk 3: Project must be up to par with the specified quality.
Consequences: If the project is not up to par with the specified quality, it will not be accepted.
Category: Technical Risk. Impact: High. Probability: 20%. Priority: High Risk.
Mitigation Strategy: By following the Verification and Validation model from our SRS document, we will be able to meet the specified quality.

Risk 4: Project must not be missing any features that were stated.
Consequences: If the project is missing any features, it will not be able to reach its stated goal and will lead to failure.
Category: Technical Risk. Impact: Medium. Probability: 10%. Priority: Medium Risk.
Mitigation Strategy: By following the Verification and Validation model from our SRS document, our project will not have any missing features.

Risk 5: Data-set should be accurate and proper.
Consequences: If the data-set is not accurate or does not follow proper standards, the efficiency of the algorithms applied to it will decrease massively.
Category: Project Risk. Impact: Medium. Probability: 15%. Priority: Medium Risk.
Mitigation Strategy: Make our own data-set with the best illumination in images, which in turn will serve the algorithms with high accuracy.

Risk 6: Any changes in requirements.
Consequences: If there is a change in any requirements, it will directly affect the scope and the deadline of the project.
Category: Technical Risk. Impact: Medium. Probability: 18%. Priority: Medium Risk.
Mitigation Strategy: While developing our project, our goal is to make the code flexible enough that any new changes can be made without disturbing the environment.

3.4.2.1 Risk Identification


Table 3.7 shows all the identified risks in our project. This way we can calculate the
impact they might have on our project so that we can better prepare for their mitigation.

3.4.2.2 Risk Analysis


Table 3.7 shows the analysis made by drawing up consequences to the identified risks
in our project. This way we can understand what will happen if these risks are not
countered properly.

3.4.2.3 Risk Response Planning


Table 3.7 shows the mitigation strategies that we have prepared against all the identified
risks in our project. This way we can control and reduce the negative impact on our
project.

3.4.2.4 Rating Risk Likelihood and Impact


Table 3.7 shows the probability of each risk and its impact. Through identification
and analysis, we can measure the likelihood of any of these risks occurring and all
their impacts.

3.4.2.5 Risk Monitoring and Control


Table 3.7 shows the control strategies that we have in place for all the risks. While having
that control, we can also monitor our progress to keep an eye out for all such risks.

3.4.2.6 Risk Assessment


Table 3.7 shows the proper way of managing these risks and assessing them through
quantitative and qualitative approaches.
Chapter 4

FUNCTIONAL ANALYSIS AND MODELING

4.1 Use Case Modeling
Figure 4.1 shows the entire use case diagram of this project, covering all the
interactions of the individual actors and their activities.

Figure 4.1: Use Case Diagram


4.1.1 User Stories


Table 4.1: User Story 1
User Story ID: 01
Title: Register/Sign-up
Actor: User
Priority: 1
User Story: As an end-user of this system, I need to set this app up quickly so I
can go about my business knowing that this system is detecting my eyes and face
so that it can alert me when I get drowsy. I should be able to register an account
with minimal details, while all the data fields should be proper and easy to use.
Acceptance criteria: The Register/Sign-up module should successfully add new
users to the system with all of their credentials stored directly into the database.
When registering a new account, name, password, contact detail and address will
be needed. If any of these details are not given, the system should show an error
message.
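The acceptance criteria above reduce to a field-completeness check: name, password, contact detail, and address must all be supplied, otherwise an error message is shown. The sketch below illustrates that check in isolation; the field names follow the user story, and storage of the credentials in Firebase is out of scope here.

```python
# Sketch of the registration check from User Story 1's acceptance
# criteria: all four fields must be present and non-empty, otherwise
# an error message is produced. Field names follow the user story.

REQUIRED_FIELDS = ("name", "password", "contact", "address")

def validate_registration(form):
    """Return (ok, message) for a registration form given as a dict."""
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return False, "Missing required field(s): " + ", ".join(missing)
    return True, "Registration accepted"

# Leaving out the address should be rejected with an error message.
ok, msg = validate_registration(
    {"name": "Ali", "password": "s3cret", "contact": "0300-0000000"})
```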
Table 4.2: User Story 2
User Story ID: 02
Title: Login
Actor: Admin
Priority: 2
User Story: As an admin of this system, I should be able to log into the system
with my name and my password. The data fields should be easy to use and the
logging in operation should be quick.
Acceptance criteria: The system should successfully move to the main screen to
start detection. However, this should only happen if the credentials are correct;
otherwise an error message should be shown.
Table 4.3: User Story 3
User Story ID: 03
Title: Database
Actor: Admin
Priority: 3
User Story: As an admin of this system, I should be able to access the database
through the Android app and edit image data and/or profile data. I would not like to
be stuck in complex UI navigation; rather, I want to easily navigate in and out of
the database module and then back to the detection module.
Acceptance criteria: The system should successfully allow admin to do operations
on database and navigate about in or out of the module and back to the detection
module.

Table 4.4: User Story 4


User Story ID: 04
Title: Login
Actor: User
Priority: 4
User Story: As an end-user of this system, I need this app to start up quickly so
I can go about my business knowing that this system is detecting my eyes and face
so that it can alert me when I get drowsy. I should be able to log in to my account
with my name and password. I don't care much for fancy interfaces, so that's not
an issue I would have.
Acceptance criteria: The system should successfully move to the main screen and
the detection process should start immediately. In the case of wrong credentials, an
error message should be shown.

Table 4.5: User Story 5


User Story ID: 05
Title: Forget Password
Actor: User
Priority: 5
User Story: As an end-user of this system, I need to reset my password when I
tap on forget password. It should let me enter my email so that it can send me a
new password.
Acceptance criteria: The system should successfully let the user reset the pass-
word. After the reset, an email with a new password should be sent to the mail
address that has been provided.

4.1.2 Individual Actor Use Cases


Table 4.6: Use-case 1
Use-case ID # 01 Register
Actor User
Type Primary
Precondition End user must have the application up and running.
Post-condition After successful registration, the end user should be able to get to
the main page and start the detection process.
Success Scenario
1. User enters name.
2. User enters password.
3. User enters address.
4. User enters contact details.
5. User will register successfully.

Extensions Use-case doesn’t have any extensions.


Table 4.7: Use-case 2
Use-case ID # 02 Login
Actor User
Type Primary
Precondition User must be registered.
Post-condition After successful login, end user should be able to get on the
main page and start detection process.
Success Scenario
1. User enters name.
2. User enters password.
3. User will login successfully.

Extensions
1. If user is new, they should register an account first.
2. If user forgets password, they can tap on forget password
option and retrieve their password.

Table 4.8: Use-case 3


Use-case ID # 03 Login
Actor Admin
Type Primary
Precondition None
Post-condition After successful login, admin should be able to reach the main
page and start the detection process.
Success Scenario
1. Admin enters name.
2. Admin enters password.
3. Admin will login successfully.

Extensions
1. If admin forgets their password, they can tap on the forget password
option and retrieve it.

Table 4.9: Use-case 4


Use-case ID # 04 Database
Actor Admin
Type Primary
Precondition Admin must be logged into the application.
Post-condition After getting access to database, admin can perform CRUD
operations on the database.
Success Scenario
1. Admin gains access to database.
2. Admin performs CRUD operations.
3. Admin leaves the database.

Extensions
1. Once access is granted, admin can edit data, i.e., perform CRUD
operations.

Table 4.10: Use-case 5


Use-case ID # 05 Capture Image
Actor System, Camera
Type Primary
Precondition End user should be logged into the application and the camera
should be connected to the application.
Post-condition After successful connection, camera should be able to start
capturing live feed.
Success Scenario
1. End user starts detection process.
2. System and Camera make successful connection.
3. Camera starts capturing live feed.

Extensions
1. Use-case doesn’t have any extensions.

Table 4.11: Use-case 6


Use-case ID # 06 Face Detection
Actor System, Camera
Type Primary
Precondition Camera should be capturing live feed.
Post-condition Face detection algorithm should start creating an AOI (area of
interest) on the captured live feed.
Success Scenario
1. Face detection algorithm starts creating AOI on the captured live
feed.
2. AOI successfully created.

Extensions
1. Face detection algorithm starts creating AOI on the captured live
feed.
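The AOI step can be sketched as cropping an area of interest out of a frame once a detector has returned a bounding box. The frame, the box, and `extract_aoi` below are all hypothetical; in the actual application the bounding box would come from a face detection algorithm (e.g., an OpenCV Haar cascade) running on the live feed.

```python
def extract_aoi(frame, box):
    """Crop the area of interest (AOI) from a grayscale frame.

    frame: 2D list of pixel rows; box: (x, y, w, h) bounding box as a
    face detector would return it.
    """
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

# Dummy 4x4 frame; suppose the detector found a face at (1, 1) of size 2x2.
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
aoi = extract_aoi(frame, (1, 1, 2, 2))  # [[5, 6], [9, 10]]
```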

Table 4.12: Use-case 7


Use-case ID # 07 Drowsiness Monitor
Actor System, Camera
Type Primary
Precondition Face detection algorithm should be running.
Post-condition Face detection algorithm should successfully detect AOI.
Success Scenario
1. Face detection algorithm starts detecting AOI.
2. AOI successfully detected.

Extensions
1. Face detection algorithm starts detecting AOI on the captured live
feed.

Table 4.13: Use-case 8


Use-case ID # 08 Classifier
Actor System
Type Primary
Precondition AOI should be created and detected.
Post-condition Classifier algorithm will classify whether end user’s eyes are
open or closed.
Success Scenario
1. Classifier algorithm should classify whether end user’s
eyes are open or closed.
2. After classification, a score will be provided.

Extensions
1. Classification of whether the end user’s eyes are open or closed
will be done.
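As a sketch of how such a classifier can work, the eye aspect ratio (EAR) approach of reference [11] compares vertical and horizontal eye-landmark distances; the ratio collapses toward zero when the eye closes. The landmark coordinates and the 0.25 threshold below are illustrative assumptions, not values from this project.

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks (p1..p6, the common 6-point convention):
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def classify_eye(landmarks, threshold=0.25):
    """Classify the eye as 'open' or 'closed' against an assumed threshold."""
    return "open" if eye_aspect_ratio(landmarks) >= threshold else "closed"

# Hypothetical landmarks: a wide-open eye versus a nearly shut one.
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

The classifier in this project may equally be a learned model (e.g., a CNN as in [10]); the EAR rule simply shows how a per-frame open/closed decision can feed the score in the next use case.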

Table 4.14: Use-case 9


Use-case ID # 09 Calculate Score
Actor System
Type Primary
Precondition Classification of AOI should be successfully completed.
Post-condition After classification of AOI, live score should be calculated to
determine whether the user is drowsy or not.
Success Scenario
1. Classification of the AOI is completed and the live score starts
being calculated.
2. The live score determines whether the user is drowsy or not.
3. Once the user is determined to be drowsy, a buzzer alarm will
sound; as the score rises further, the music system volume will
increase.

Extensions
1. The live score determines whether the end user is drowsy; once
drowsiness is determined, a buzzer alarm will sound.
2. As the score rises further, the music system volume will increase.
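The scoring behaviour above can be sketched as a per-frame update in which the score rises while the eyes are classified as closed and decays while they are open; crossing a threshold triggers the buzzer. The numbers here (`alarm_threshold=15`, the +1/-1 steps) are illustrative assumptions, not the project’s actual values.

```python
def update_score(score, eyes_closed, alarm_threshold=15):
    """Update the live drowsiness score for one video frame.

    Returns (new_score, alarm): the score grows while eyes stay closed,
    decays toward zero while they are open, and the buzzer alarm fires
    once the score exceeds alarm_threshold.
    """
    score = score + 1 if eyes_closed else max(score - 1, 0)
    return score, score > alarm_threshold

# Twenty consecutive closed-eye frames push the score past the threshold.
score, alarm = 0, False
for _ in range(20):
    score, alarm = update_score(score, eyes_closed=True)
```

A second, higher threshold could then drive the music-system volume up, matching the extension in the use case above.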

Table 4.15: Use-case 10


Use-case ID # 10 Bluetooth Connection
Actor System
Type Primary
Precondition System has made a successful Bluetooth connection.
Post-condition After a successful connection, the music system should be con-
nected to the application.
Success Scenario
1. After a successful connection, the music system is connected to
the application.
2. Once the connection is made, the music volume will increase and
decrease depending on how high the score gets.

Extensions
1. After a successful connection, the music system should be con-
nected to the application.
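The “volume rises and falls with the score” behaviour can be sketched as a simple clamped mapping. `volume_for_score` and its constants are hypothetical; the real application would set the volume through Android’s media/Bluetooth APIs.

```python
def volume_for_score(score, base=30, step=5, max_volume=100):
    """Map the live drowsiness score to a music-system volume level (0-100).

    Volume grows linearly with the score and is clamped at max_volume, so
    it also falls back toward `base` as the score decays.
    """
    return min(base + step * max(score, 0), max_volume)

low = volume_for_score(0)    # base volume while the driver is alert
high = volume_for_score(20)  # clamped at max_volume for a high score
```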

4.2 Functional Modeling


4.2.1 Entity Relationship Diagram
Figure 4.2 shows the ERD of this project, along with their cardinalities.

Figure 4.2: ERD

4.2.2 Data Flow Diagram


4.2.2.1 DFD Level 0
Figure 4.3 shows Data Flow level 0.

Figure 4.3: DFD Level 0



4.2.2.2 DFD Level 1


Figure 4.4 shows Data Flow level 1.

Figure 4.4: DFD Level 1

4.2.2.3 DFD Level 2


Figure 4.5 shows Data Flow level 2.

Figure 4.5: DFD Level 2


Chapter 5

SYSTEM DESIGN
5.1 Structure Diagrams
5.1.1 Class Diagram
Figure 5.1 shows the class diagram of this project, which clearly indicates the
relationships between the different classes and their functionalities.

Figure 5.1: Class Diagram

CHAPTER 5. SYSTEM DESIGN 35

5.1.2 Deployment Diagram


Figure 5.2 shows the deployment diagram of this project, which indicates the execution
architecture of a system.

Figure 5.2: Deployment Diagram



5.2 Behavioral Diagrams


5.2.1 Activity Diagram
Figure 5.3 shows the activity diagram of this project, which indicates the behaviour
of the system, showing the control flow from start to end.

Figure 5.3: Activity Diagram



5.2.2 Communication Diagrams


Figure 5.4 shows the first communication diagram of this project, which indicates the
communication between login and database modules.

Figure 5.4: Communication Diagram: Login

Figure 5.5 shows the second communication diagram of this project, which indicates
the communication between forgot password and database modules.

Figure 5.5: Communication Diagram: Forgot Password



Figure 5.6 shows the third communication diagram of this project, which indicates
the communication between registration module and database.

Figure 5.6: Communication Diagram: Registration

Figure 5.7 shows the fourth communication diagram of this project, which indicates
the communication between the app and detection module.

Figure 5.7: Communication Diagram: Detection



5.2.3 Sequence Diagrams


5.2.3.1 Login
Figure 5.8 shows the first sequence diagram of this project, which shows the user logging
into the app and getting validated by the database.

Figure 5.8: Sequence Diagram: Login

5.2.3.2 Logout
Figure 5.9 shows the second sequence diagram of this project, which shows the user
logging off the app.

Figure 5.9: Sequence Diagram: Logout



5.2.3.3 Forgot Password


Figure 5.10 shows the third sequence diagram of this project, which shows the user
using the forgot password option and then receiving an email to reset their password.

Figure 5.10: Sequence Diagram: Forgot Password

5.2.3.4 Registration
Figure 5.11 shows the fourth sequence diagram of this project, which shows the user
registering.

Figure 5.11: Sequence Diagram: Registration



5.2.3.5 Connectivity
Figure 5.12 shows the fifth sequence diagram of this project, which shows the user the
connectivity process right after starting detection.

Figure 5.12: Sequence Diagram: Connectivity

5.2.3.6 Detection
Figure 5.13 shows the last sequence diagram of this project, which shows the user the
detection process.

Figure 5.13: Sequence Diagram: Detection


Chapter 6

SYSTEM INTERFACE AND


PHYSICAL DESIGN
6.0.1 System User Interfaces
6.0.1.1 Slider 1: Welcome
Figure 6.1 shows the screenshot of the first slider, "Welcome!", on the first boot-up
of the application.

Figure 6.1: Slider 1: Welcome

CHAPTER 6. SYSTEM INTERFACE AND PHYSICAL DESIGN 43

6.0.1.2 Slider 2: Face and Eye Detection


Figure 6.2 shows the screenshot of the second slider: "Face and Eye Detection".

Figure 6.2: Slider 2: Face and Eye Detection



6.0.1.3 Slider 3: Supports USB Webcam


Figure 6.3 shows the screenshot of the third slider: "Supports USB Webcam".

Figure 6.3: Slider 3: Supports USB Webcam



6.0.1.4 Slider 4: Permissions


Figure 6.4 shows the screenshot of the fourth slider: "Permissions".

Figure 6.4: Slider 4: Permissions



6.0.1.5 User Registration


Figure 6.5 shows the screenshot of the User Registration Module.

Figure 6.5: User Registration



6.0.1.6 Successful Registration


Figure 6.6 shows the screenshot of the User being successfully registered.

Figure 6.6: Successful Registration

6.0.1.7 Email Verification


Figure 6.7 shows the screenshot of the Email Verification process required for every user
to log in to the application.

Figure 6.7: Email Verification



6.0.1.8 Application Dashboard


Figure 6.8 shows the screenshot of the Application Dashboard.

Figure 6.8: Application Dashboard



6.0.1.9 Detection Module


Figure 6.9 shows the screenshot of the Detection Module.

Figure 6.9: Detection Module



6.0.1.10 Detection Module: Permissions


Figure 6.10 shows the screenshot of the Detection Module prompting User for access to
camera and recording.

Figure 6.10: Detection Module: Permissions



6.0.1.11 Detection Module: Select USB Webcam


Figure 6.11 shows the screenshot of Detection Module prompting User to select USB
Webcam.

Figure 6.11: Detection Module: Select USB Webcam



6.0.1.12 Detection Module: Open Eyes


Figure 6.12 shows the screenshot of the Detection Module detecting open eyes.

Figure 6.12: Detection Module: Open Eyes



6.0.1.13 Detection Module: Closed Eyes


Figure 6.13 shows the screenshot of the Detection Module detecting closed eyes.

Figure 6.13: Detection Module: Closed Eyes



6.0.2 Firebase Views


Figure 6.14 shows the screenshot of the Firebase Realtime Database.

Figure 6.14: Firebase Realtime Database

6.0.2.1 Firebase: Email Verification


Figure 6.15 shows the screenshot of the Firebase email verification.

Figure 6.15: Firebase: Email Verification


Chapter 7

TEST PLAN
7.1 Objective of the Testing Phase
Essentially, the testing phase delivers a bug-free, good-quality application. This is
achieved by testing all of the modules step by step, which also supports the verification
and validation process. Our objectives are of the same nature: to test our application
under all scenarios so that we can deliver a well-built, bug-free, and easy-to-use
application. The most important of these objectives are listed below:

• The first goal of this phase will be to ensure that the final software solution meets
the specified requirements. We’ll put the product through all tests and make sure
it meets all of the specified requirements.
• Our next goal will be to find and fix bugs in the system at the earliest possible
stage of development. As a software tester, bug identification and eradication will
be the most important goal since it leads to a bug-free application.
• Our next goal of software testing is to maintain program quality and dependability.
To maintain software quality, we must keep bugs to a bare minimum.
• Our last goal of the testing phase will be to provide all stakeholders with complete
information about all technical and other risks, allowing them to make informed
decisions.

7.2 Levels of Tests for Testing Software


Before our software product is deployed or released, it must go through different
levels of testing processes to guarantee that our driver drowsiness detection system is
functioning properly.

7.2.1 Unit Testing


Unit testing is the first level of testing, in which each component of the system is
tested separately. This will be done to ensure that each component is carrying out its
intended role. The white box testing approach is typically used for this.
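As a language-agnostic sketch of unit testing a single component in isolation, the example below tests a hypothetical credential-validation helper (`valid_credentials` is an assumption, not the app’s real code). In the actual project, the same idea is applied with Android Studio’s unit-testing and code-coverage tools.

```python
import unittest

def valid_credentials(email, password):
    """Hypothetical login check: a SZABIST email and a 6+ character password."""
    return email.endswith("@szabist.isb-pk") and len(password) >= 6

class LoginUnitTest(unittest.TestCase):
    def test_correct_credentials(self):
        self.assertTrue(valid_credentials("1780163@szabist.isb-pk", "123456"))

    def test_incorrect_email(self):
        self.assertFalse(valid_credentials("1780163@google.isb-pk", "123456"))

    def test_short_password(self):
        self.assertFalse(valid_credentials("1780163@szabist.isb-pk", "123"))

# Run the suite programmatically (white-box testing of one unit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The sample credentials mirror the inputs used later in the Test Cases table (Table 7.3).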

7.2.2 Integration testing


In this next level of testing, we will put together components of our software modules
to ensure data flow from one module to the next. We can test how different components
of our system function at their interface in a variety of ways, using either a top-down or
bottom-up integration strategy.

7.2.3 System Testing


This last level of testing phase is critical since it allows us to determine whether our
application meets all of our specified functional and business needs.

CHAPTER 7. TEST PLAN 56

7.3 Test Management Process


All testing processes are to be managed under the Test Management Process to ensure
a swift, bug-free, and high-quality product delivery.

7.3.1 Design the Test Strategy


A test strategy is an outline that describes the testing approach of the software de-
velopment cycle. The purpose of a test strategy is to provide a rational deduction from
organizational, high-level objectives to actual test activities to meet those objectives from
a quality assurance perspective.

The test strategy used in our project is unit testing, supported by the code coverage
tooling in Android Studio: we load test data into our modules one by one and test
their functionality as we proceed.

7.3.2 Test Objectives


Our Test Objectives are as follows:

• Application loads onto device properly, without any crashes.


• Welcome sliders are shown only once, when the application is loaded onto the device
for the first time.
• All activities follow their intents properly and in their respective order.
• Registration activity successfully communicates with Firebase properly and sends
data for storage.
• Email verification works properly and authorizes user so that they can be logged
into the application.
• Dashboard successfully communicates with Firebase to show requested data in their
respective data fields.
• User doesn’t get logged out of application every time it closes.
• Detection activity prompts user permission to access camera and record for external
camera use.
• Detection module works successfully by detecting and alerting driver when drowsy.

7.3.3 Test Criteria


Test criteria will help us (the testers) organise the testing process. The criteria
will be determined according to the test effort available. To meet the criteria, we
will define test coverage metrics (e.g., the ratio of executed test cases to required
test cases). These coverage metrics will be used to determine whether or not the
integration tests are complete.
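The coverage metric described above can be computed as a simple ratio; the 90% completion rule in the example is an assumed illustration, not a figure from this project.

```python
def coverage_ratio(executed, required):
    """Test-coverage metric: executed test cases over required test cases.

    Returns a fraction in [0, 1]; raises if no test cases are required.
    """
    if required <= 0:
        raise ValueError("required test cases must be positive")
    return min(executed, required) / required

# e.g. 12 of 15 planned integration test cases have been executed:
ratio = coverage_ratio(12, 15)       # 0.8
integration_complete = ratio >= 0.9  # False under an assumed 90% rule
```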

7.3.4 Resource Planning


Table 7.1 lists all the modules under test in our project, along with their respective
tester(s), tool(s), and the time required.

Table 7.1: Resource Planning


Sr. # | Module                  | Tester(s)      | Tool(s)                                                        | Time Required
1     | Welcome                 | Zakria         | Android Studio, Android Device                                 | 1 Hour
2     | User Registration       | Hassan         | Android Studio, Android Device                                 | 2 Hours
3     | Firebase Authentication | Hassan         | Android Studio, Web Browser (Firebase Console), Android Device | 2 Hours
4     | User Login              | Zakria         | Android Studio, Web Browser (Firebase Console), Android Device | 1 Hour
5     | Dashboard               | Zakria         | Android Studio, Android Device                                 | 1 Hour
6     | Dashboard Integration   | Hassan         | Android Studio, Web Browser (Firebase Console), Android Device | 2 Hours
7     | Detection               | Hassan, Zakria | Android Studio, Android Device                                 | 1 Day

7.3.5 Plan Test Environment


A testing environment consists of a hardware and software setup that allows testers
to execute test cases efficiently. Due to its flexibility, the test environment can be
modified to meet the needs of the system being evaluated, which in turn ensures that
the software testing phase goes smoothly.

The test environment for our application is as follows:

• Test Server: The Firebase Console will act as a test server, where we can cross-check
the data being fed into the application against the data shown in the database.
• Test Android Device: An Android Device will be used to test every module of our
application, all the integration between every activity and the proper functioning
of our detection system.
• Test IDE: Android Studio will be our IDE for testing as the application is built
on it, along with Code Coverage which allows us to perform Unit Testing on our
application direct from Android Studio.
• Bug Reporting Tool: Android Studio’s own Logcat does an excellent job of reporting
all bugs, even at run time. Its output can be exported as plain text for inclusion in
MS Word.

7.3.6 Schedule and Estimation


Table 7.2 shows the schedule for all the modules under test and their estimated
durations.

Table 7.2: Schedule and Estimation


Sr. # | Module                  | Tester(s)      | Start Date   | End Date     | Time Required
1     | Welcome                 | Zakria         | 1st Feb 2022 | 1st Feb 2022 | 1 Hour
2     | User Registration       | Hassan         | 1st Feb 2022 | 1st Feb 2022 | 2 Hours
3     | Firebase Authentication | Hassan         | 1st Feb 2022 | 1st Feb 2022 | 2 Hours
4     | User Login              | Zakria         | 1st Feb 2022 | 1st Feb 2022 | 1 Hour
5     | Dashboard               | Zakria         | 1st Feb 2022 | 1st Feb 2022 | 1 Hour
6     | Dashboard Integration   | Hassan         | 1st Feb 2022 | 1st Feb 2022 | 2 Hours
7     | Detection               | Hassan, Zakria | 2nd Feb 2022 | 3rd Feb 2022 | 1 Day

7.4 Test Cases


Table 7.3 below lists all the test cases for our application, along with their respective
steps, inputs, expected outputs, actual outputs, pass/fail status, and our remarks.
Table 7.3: Test Cases

ID | Test Scenario                              | Test Steps                                                                           | Input                                          | Expected Output                | Actual Output            | Pass/Fail | Remarks
1  | Login with correct credentials             | 1) User opens application 2) User enters credentials                                 | Email: 1780163@szabist.isb-pk, Pass: 123456    | Logged in successfully         | Logged in                | Pass      | As Expected
2  | Login with incorrect credentials           | 1) User opens application 2) User enters credentials                                 | Email: 1780163@google.isb-pk, Pass: 123456     | Login failed - incorrect email | Login unsuccessful       | Pass      | As Expected
3  | Sign-up with correct email                 | 1) User opens application 2) User enters credentials                                 | Email: 1780163@szabist.isb-pk, Pass: 123456    | Sign-up successful             | Registered successfully  | Pass      | As Expected
4  | Sign-up with incorrect email               | 1) User opens application 2) User enters credentials                                 | Email: 1780163@google.isb-pk, Pass: 123456     | "Incorrect email address"      | Registration failed      | Pass      | As Expected
5  | Firebase connection                        | 1) Admin opens Firebase console 2) Opens Realtime Database                           | Checks if user data is available               | Data available                 | Data is available        | Pass      | As Expected
6  | Dashboard data display                     | 1) User logs into application                                                        | Dashboard should load up user data             | Data displayed                 | Data not available       | Fail      | Database offline
7  | Activities following proper intent         | 1) User starts Detection activity                                                    | User taps on start detection                   | Detection activity should appear | Detection activity appears | Pass    | As Expected
8  | Application loads onto device properly     | 1) User starts app                                                                   | User taps on app icon                          | App should launch              | App launches             | Pass      | As Expected
9  | Welcome sliders only show on first boot-up | 1) User downloads app 2) User starts app                                             | User taps on app icon                          | Welcome sliders should appear  | Welcome sliders appear   | Pass      | As Expected
10 | Email verification working                 | 1) User registers for app 2) User goes back to login 3) User enters credentials and logs in | User receives verification email and verifies | User can log in          | Verification successful  | Pass      | As Expected
11 | USB webcam permission prompt               | 1) User moves to Detection activity 2) User starts capturing                         | Taps on start capturing                        | Permission prompt appears      | Permission prompt appears | Pass     | As Expected
12 | Detection module working                   | 1) User starts capturing                                                             | Selects camera                                 | App should start detecting     | App starts detecting     | Pass      | As Expected
13 | Detection module not working               | 1) User starts capturing                                                             | Selects camera                                 | App should not start detecting | App crashes              | Pass      | As Expected
14 | Alarms working                             | App is detecting                                                                     | Live video recording                           | Alarm should ring              | Alarm rings              | Pass      | As Expected
15 | Alarms not working                         | App is detecting                                                                     | Live video recording                           | Alarm should not ring          | Alarm doesn't ring       | Pass      | As Expected
Chapter 8

CONCLUSION
8.1 Conclusion
An increasing number of drivers suffer from a severe lack of daily sleep. Drivers of all
vehicles, from heavy trucks to light cars, face sleep problems that make driving very
unsafe. This condition is labeled drowsiness, and it leads to accidents that are often
fatal [1]. Neglecting our duty to ensure a safer journey has led to many accidents.
Following rules and regulations may look like a minor matter, but it deserves our
utmost attention. No matter whether a car is in the most experienced or inexperienced
hands, a single wave of sleep while driving can be a fatal mistake that ends up costing
people’s lives.

Drivers often disregard the fact that they are feeling sleepy, and when they fall asleep
while driving they end up in accidents. A person who is too tired to drive should not
ignore the fact that this negligence might cost their own life or the lives of others.
Having studied the problem and the situation, our system is designed to implement
safety measures and to ensure these solutions are socially beneficial and safe. Many
experts have researched driver drowsiness detection. The existing research is efficient,
but it is not fully applicable in Pakistan. Our system will therefore provide these
services in Pakistan and, once fully developed, we hope to launch it globally [2].

8.2 Future Work


As specified in the project overview and scope, once this project is approved and in
the implementation phase, the goal is to reach the Pakistan automobile sector. Once
it reaches that level, the system can be patented and fitted to every car coming out
of the factory.

Hence, if this project receives a strong response in the future, its usage could become
global.

References
[1] Knipling, R.R. and Wang, J.S., 1994. Crashes and fatalities related to driver drowsi-
ness/fatigue. Washington, DC: National Highway Traffic Safety Administration.

[2] Romdhani, S., Torr, P., Scholkopf, B. and Blake, A., 2001, July. Computationally ef-
ficient face detection. In Proceedings Eighth IEEE International Conference on Com-
puter Vision. ICCV 2001 (Vol. 2, pp. 695-700). IEEE.

[3] Leger, D., 1994. The cost of sleep-related accidents: a report for the National Com-
mission on Sleep Disorders Research. Sleep, 17(1), pp.84-93.

[4] Yu, C., Qin, X., Chen, Y., Wang, J. and Fan, C., 2019, August. Drowsy-
Det: A Mobile Application for Real-time Driver Drowsiness Detection. In
2019 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Advanced
and Trusted Computing, Scalable Computing and Communications, Cloud and
Big Data Computing, Internet of People and Smart City Innovation (Smart-
World/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 425-432). IEEE.

[5] Tombeng, M.T., Kandow, H., Adam, S.I., Silitonga, A. and Korompis, J., 2019,
August. Android-Based Application To Detect Drowsiness When Driving Vehicle. In
2019 1st International Conference on Cybernetics and Intelligent System (ICORIS)
(Vol. 1, pp. 100-104). IEEE.

[6] Lee, B.G. and Chung, W.Y., 2012. A smartphone-based driver safety monitoring
system using data fusion. Sensors, 12(12), pp.17536-17552.

[7] Vijayan, V. and Sherly, E., 2019. Real time detection system of driver drowsiness
based on representation learning using deep neural networks. Journal of Intelligent
and Fuzzy Systems, 36(3), pp.1977-1985.

[8] Ngxande, M., Tapamo, J.R. and Burke, M., 2017, November. Driver drowsiness detec-
tion using behavioral measures and machine learning techniques: A review of state-of-
art techniques. In 2017 Pattern Recognition Association of South Africa and Robotics
and Mechatronics (PRASA-RobMech) (pp. 156-161). IEEE.

[9] Vesselenyi, T., Moca, S., Rus, A., Mitran, T. and Tătaru, B., 2017, October. Driver
drowsiness detection using ANN image processing. In IOP Conference Series: Mate-
rials Science and Engineering (Vol. 252, No. 1, p. 012097). IOP Publishing.

[10] Jabbar, R., Shinoy, M., Kharbeche, M., Al-Khalifa, K., Krichen, M. and Barkaoui,
K., 2020, February. Driver drowsiness detection model using convolutional neural
networks techniques for android application. In 2020 IEEE International Conference
on Informatics, IoT, and Enabling Technologies (ICIoT) (pp. 237-242). IEEE.

[11] Mehta, S., Dadhich, S., Gumber, S. and Jadhav Bhatt, A., 2019, February. Real-time
driver drowsiness detection system using eye aspect ratio and eye closure ratio. In Pro-
ceedings of international conference on sustainable computing in science, technology
and management (SUSCOM), Amity University Rajasthan, Jaipur-India.


[12] Yu, J., Park, S., Lee, S. and Jeon, M., 2018. Driver drowsiness detection using
condition-adaptive representation learning framework. IEEE transactions on intelli-
gent transportation systems, 20(11), pp.4206-4218.

[13] Scroggins, R., 2014. SDLC and development methodologies. Global Journal of Com-
puter Science and Technology.

[14] Mahalakshmi, M. and Sundararajan, M., 2013. Traditional SDLC vs scrum method-
ology–a comparative study. International Journal of Emerging Technology and Ad-
vanced Engineering, 3(6), pp.192-196.

[15] Chen, L., Hoey, J., Nugent, C.D., Cook, D.J. and Yu, Z., 2012. Sensor-based ac-
tivity recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part C
(Applications and Reviews), 42(6), pp.790-808.
