
CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION

Drowsiness can be defined as fatigue-induced sleepiness. It is not pure exhaustion, which manifests as a reluctance to carry on with the task at hand, although the effects of fatigue and drowsiness are extremely similar. Fatigue impairs mental sharpness, diminishes driving safety, and raises the possibility of human error, which can result in fatalities and serious injuries. Drowsiness lowers awareness, affects judgement, and slows reaction time. All professions, including those of truck drivers, train engineers, and aeroplane pilots, are impacted by fatigue and sleep deprivation. Both conditions make it difficult for the driver to focus on the primary task of driving, which raises the possibility of an accident, and the problem will only get worse as traffic volumes rise. Consequently, it is essential to create a driver attentiveness system to prevent incidents. One of the most important ways to ensure vehicle safety is through driver-vehicle interaction, including mutual monitoring and support. Although the number of traffic fatalities has decreased owing to active safety features in cars, the number of collisions is continuously rising. Drowsy driving is one of the main factors contributing to road collisions. Long-distance drivers, particularly those who drive at night or without breaks, are more likely to be involved in accidents, and numerous deadly accidents and fatalities have resulted. Consequently, drowsiness detection has developed into a thriving research area.


Existing systems rely on physiological traits, behavioural trends, and vehicle-specific traits. Physiological characteristics include electroencephalogram (EEG), electrocardiogram (ECG), and electrooculogram (EOG) signals, as well as heart rate and pulse rate. Visual driving behaviours include blinking, averting the gaze, yawning, and head nodding. Vehicle-based features draw on data such as steering-wheel movement, acceleration, vehicle speed, braking style, and lane-departure pattern. These techniques typically demand considerable time and money. We propose an alternative system that employs machine learning to recognise tiredness from camera images. The development of a reliable and inexpensive fatigue detection system is the main topic of this report.

1.2 OBJECTIVES

Driver drowsiness detection primarily serves to identify and forewarn when a driver begins to feel sleepy or fatigued while operating a vehicle. Because drowsiness is a major contributing factor in traffic accidents around the world, detecting it can help reduce accidents and enhance road safety. Drowsiness affects a driver's reaction time, cognitive function, and decision-making ability, which diminishes driving skill and raises the chance of an accident. There are several methods for detecting driver drowsiness, e.g. vision-based, sensor-based, or machine-learning-based approaches that can detect indications of drowsiness such as yawning, head movements, and changes in physiological signals. The primary objective is to avert accidents by warning the driver or by activating automatic systems that control the vehicle.


1.3 METHODOLOGY

The following steps make up our procedure for detecting driver fatigue using OpenCV and the Haar Cascade Classifier:

1. Data collection: The first stage is to gather a set of images or videos of the driver in various drowsy and alert states. The classifier will be trained using this dataset, and its accuracy will be evaluated.

2. Preprocessing: Using the OpenCV library, the acquired data is preprocessed to extract regions of interest such as the face and eyes. These regions are then resized and converted to greyscale for quicker processing.

3. Training: A Haar cascade classifier is trained on the gathered dataset. Positive samples contain images of drowsy drivers, while negative samples contain attentive drivers. The classifier learns to differentiate between positive and negative images.

4. Detection: The trained classifier is then used to find drowsiness in live video feeds. Regions of interest such as the face and eyes are extracted from the video stream in order to identify drowsiness.

5. Warning: A warning mechanism is triggered to rouse the driver as soon as drowsiness is detected. This alert system can vibrate, sound an alarm, or display a warning.

The classifier's accuracy depends on the quality of the training data and on the features used for drowsiness detection. OpenCV's Haar cascade classifiers for frontal-face, eye, and mouth detection can be utilised to identify tiredness from the driver's facial appearance. This approach can be used in conjunction with other strategies, such as sensor-based ones, to enhance accuracy.


CHAPTER 2

LITERATURE REVIEW

Driver weariness is a significant contributor to traffic accidents all over the world, making the identification of driver sleepiness an increasingly relevant issue in

the field of driver safety. The development of efficient driver sleepiness detection systems has been the subject of several research investigations in recent years.

This review of the literature will look at some of the most important studies and methodologies that have been applied to the creation of driver drowsiness detection

systems.

T. Islam et al. (2018) published "Real-time drowsiness detection using EEG-based features and machine learning techniques". This work proposes a real-time driver sleepiness detection system based on electroencephalogram (EEG) features and machine learning. The system's accuracy for identifying sleepiness was measured using a dataset of 20 participants and found to be 86.5%.

S. H. Lee et al. (2017) published "A review on driver drowsiness detection systems". This review gives an overview of the many driver drowsiness detection methods that have been suggested in the literature. The authors discuss the pros and cons of several methodologies, including physiological signals, visual signals, and aural signals.

S. S. Patil et al. (2020) published "A hybrid approach for real-time driver drowsiness detection". This work proposes a hybrid method that combines facial expression analysis with EEG-based characteristics for real-time driver sleepiness identification. The system's accuracy for identifying sleepiness was 95.56% when tested on a dataset of 15 individuals.

Overall, these studies emphasise the usefulness of several methods for detecting driver sleepiness and show the significance of doing so in enhancing road safety.


CHAPTER 3

IMPLEMENTATION

3.1 ALGORITHM
Step 1: Capture an image from the camera as input.
Step 2: Detect the face in the image and mark it as a region of interest (ROI).
Step 3: Identify the eyes within the ROI and pass them to the classifier.
Step 4: The classifier determines whether the eyes are open or closed.
Step 5: The eye aspect ratio (EAR) is computed to decide whether the driver is drowsy.
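The steps above reduce to a simple frame-counting loop: when the EAR stays below a threshold for enough consecutive frames, the driver is flagged as drowsy. A minimal sketch with made-up EAR values; the threshold and frame count are assumed illustrative values, not the project's tuned parameters:

```python
EAR_THRESHOLD = 0.25   # below this, the eyes count as closed (assumed value)
CONSEC_FRAMES = 3      # frames of closure needed to flag drowsiness (assumed)

def detect_drowsiness(ear_stream):
    """Yield True for each frame in which the driver is judged drowsy."""
    counter = 0
    for ear in ear_stream:
        if ear < EAR_THRESHOLD:
            counter += 1       # eyes closed: extend the streak
        else:
            counter = 0        # eyes reopened: reset the streak
        yield counter >= CONSEC_FRAMES

# Hypothetical per-frame EAR values: open, then a long closure, then open again.
ears = [0.30, 0.32, 0.10, 0.09, 0.08, 0.07, 0.31]
print(list(detect_drowsiness(ears)))
# [False, False, False, False, True, True, False]
```

Requiring several consecutive low-EAR frames is what separates a normal blink from genuine eye closure; a single blink rarely lasts long enough to reach the frame threshold.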

3.1.1. Face Detection and Landmark detection


The suggested system uses a face detection algorithm based on Haar features and AdaBoost. The face detector is trained using OpenCV functions; photos of faces, including people wearing glasses, are available for training. After training, the resulting face classifier can identify faces with sizes ranging from 240x240 to 320x320 pixels. Real-time detection is integrated using dlib library functions: the get_frontal_face_detector and shape_predictor routines perform real-time face detection. We used Python 3.8.2 with libraries from Dlib 19.19 and OpenCV 4.2.0. These libraries can additionally be utilised for face-swapping and morphing operations. Pretrained face and eye classifiers and detectors are offered by the OpenCV library. The mouth, left eye, right eye, and other distinguishing features are shown in the figure below.

Fig 3.1.1. 68 facial landmark detection

Finding the positions of various facial landmarks, such as the corners of the eyes, the corners of the mouth, and the tip of the nose, comes next once a face has been identified. To lessen the impact of differences in distance to the camera, uneven illumination, and image resolution, the facial image should first be normalised. The landmark positions are learned with gradient boosting, optimising the total squared-error loss. With this technique, the boundary points of the eyes and mouth are marked.
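In the 68-point model shown in the figure, each facial feature occupies a fixed index range (right eye 36-41, left eye 42-47, mouth 48-67). A minimal sketch of slicing those ranges out of a landmark array; the coordinates here are synthetic stand-ins for real predictor output:

```python
import numpy as np

# Hypothetical 68 (x, y) landmarks, as they would look after converting
# a dlib shape object to a NumPy array.
landmarks = np.arange(68 * 2).reshape(68, 2)

# Fixed index ranges in the 68-point model.
RIGHT_EYE = slice(36, 42)
LEFT_EYE = slice(42, 48)
MOUTH = slice(48, 68)

right_eye = landmarks[RIGHT_EYE]
left_eye = landmarks[LEFT_EYE]
mouth = landmarks[MOUTH]

print(len(right_eye), len(left_eye), len(mouth))  # 6 6 20
```

Each eye thus contributes exactly six points, which is what the eye aspect ratio in the next section is built from.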


3.1.2 EAR (Eye Aspect Ratio)

The eye aspect ratio (EAR) is a technique for assessing eye openness that is employed in computer vision applications including driver drowsiness detection. The EAR relates the vertical height of the eye opening to its horizontal length. The coordinate points of the eye landmarks, e.g. the inner and outer corners and the points along the upper and lower eyelids, are first determined using techniques such as machine-learning algorithms that recognise facial features. Once the landmark points have been located and their distances measured, the EAR is computed as the ratio of the vertical distances to the horizontal eye distance. Higher EAR values indicate a wider eye opening, while values approaching zero indicate a closed eye. EAR values can be used to identify patterns of change that often accompany drowsiness, e.g. reduced eye movement and drooping eyelids. Overall, combined with eye-movement and mouth-opening analysis, EAR may be included in a driver drowsiness monitoring system to provide an accurate and dependable real-time sleepiness detection system.

Fig 3.1.2. Location of eye
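With the six eye landmarks ordered p1..p6, the ratio is EAR = (||p2-p6|| + ||p3-p5||) / (2 ||p1-p4||). A minimal sketch using the standard library only; the coordinates are hypothetical, chosen to mimic an open and a nearly closed eye:

```python
from math import dist  # Euclidean distance between two points, Python 3.8+

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered p1..p6 as in the 68-point model."""
    a = dist(eye[1], eye[5])  # vertical distance p2-p6
    b = dist(eye[2], eye[4])  # vertical distance p3-p5
    c = dist(eye[0], eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]              # wide open
closing_eye = [(0, 3), (2, 3.5), (4, 3.5), (6, 3), (4, 2.5), (2, 2.5)]   # nearly shut

print(round(eye_aspect_ratio(open_eye), 3))     # 0.667
print(round(eye_aspect_ratio(closing_eye), 3))  # 0.167
```

As the eyelids close, the two vertical distances shrink while the horizontal distance stays roughly constant, so the ratio falls towards zero, which is exactly the signal the detector thresholds.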


3.1.3 Flowchart

3.2 SOFTWARE REQUIREMENT SPECIFICATIONS

The Software Requirements Specification (SRS) for driver drowsiness detection sets out the functional and non-functional requirements that a system must satisfy in order to identify driver drowsiness and alert the driver. Some of the most crucial requirements are listed below:

1. Functional requirements:

A. The system must be able to record real-time video of the driver.

B. The system must be able to track facial features including the driver's eyes, mouth, and eyebrows.

C. The system should be able to detect changes in the driver's facial expression and identify drowsiness-related behaviours such as yawning, eye closure, and head nodding.

D. If the system detects sleepiness, it should alert the driver visually or audibly.

E. The system should be able to adapt its sensitivity to the driver's circumstances and environment.

F. The system must be able to send real-time notifications to a central monitoring system or the driver's phone.


2. Non-functional requirements:

A. The system should achieve a high degree of accuracy with few false positives or false negatives.

B. The system should operate in real time with a quick reaction time.

C. The system should be simple to set up and use.

D. The system must consume few resources and be compatible with a wide range of hardware.

E. The system must respect driver privacy and adhere to data protection regulations.

F. The system must be robust, dependable, highly available, and fault tolerant.

G. The system must be highly scalable and able to handle a large number of drivers.

H. The system must be accompanied by the necessary instructions and training materials.

These specifications define the system's scope and purpose and serve as the foundation for developing and testing the system's software. Software developers, testers, and other stakeholders should refer to the SRS document, which includes both the functional and non-functional requirements of the system.

Software Requirements

1. Back End: Python
2. Domain: Machine Learning
3. Algorithm: Dlib, Haar Cascade

Hardware Requirements

1. Processor: i3 or greater
2. RAM: 4 GB or greater
3. Hard Disk: 50 GB or greater
4. Connectivity: LAN or Wi-Fi, Camera


3.2.1 TECHNOLOGY DESCRIPTION

Python

Python is a high-level, interpreted, general-purpose programming language employed in web development, AI, scientific computing, data analysis, automation, and many other applications. Simplicity is one of its key traits: Python's clean syntax and structure make it easy to write, read, and comprehend. Python is also highly adaptable, with a large standard library offering modules and functions for a wide range of tasks. Python is an object-oriented language that allows the creation and manipulation of objects, and it is dynamically typed: variables may be assigned without declaring their type, which makes them easier to use and learn. Python is also an interpreted language, meaning the code does not need to be compiled to machine code before being executed. Common frameworks and libraries include Django and Flask for web development, NumPy and Pandas for statistical computation and data analysis, TensorFlow and PyTorch for machine learning and artificial intelligence, and BeautifulSoup and Scrapy for web scraping. Python is platform-independent, running on a variety of operating systems including Windows, macOS, and Linux. Its enormous user and developer community makes getting assistance and finding answers to issues much simpler.

Open CV:

OpenCV (Open Source Computer Vision) is the Swiss Army knife of computer vision. It offers a broad range of modules that assist with a variety of computer vision problems, and its architecture and memory management are among its most beneficial features: it provides a framework for on-demand image and video manipulation without having to worry about allocating and freeing image memory, whether using built-in or custom methods. We employ OpenCV's highly optimised image-processing routines for real-time processing of live video feeds from cameras; OpenCV is optimised for and well suited to real-time video and image processing.

DLib:

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating sophisticated software that addresses practical problems. It is utilised in many industrial and scientific sectors, as well as in robotics, embedded devices, mobile phones, and large high-performance computing environments. Dlib's open-source licence makes it free to use in any application. Dlib offers image-processing utilities including object tracking, face detection, and facial landmark detection.

Pygame:

Pygame is a popular, free, and open-source Python library for creating video games and multimedia applications. It offers a straightforward and user-friendly framework for making 2D games and animations, managing user input, and working with media resources such as sounds, images, and videos. Pygame provides a simple way to play and mix sounds, including music and sound effects, and supports a variety of hardware setups and audio formats.

Distance:

The Distance module in the scipy.spatial package offers a number of distance metrics that may be used to determine how far apart two data points are from one another. These metrics have different properties and are appropriate for different sorts of data and problems. The Distance module's functions allow you to compare how similar or dissimilar two sets of data points are in various applications such as anomaly detection, classification, and clustering.
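A minimal sketch of the scipy.spatial.distance metrics mentioned above, applied to one pair of points:

```python
from scipy.spatial import distance

p = (0.0, 0.0)
q = (3.0, 4.0)

# Different metrics weigh the same pair of points differently.
print(distance.euclidean(p, q))   # straight-line distance: 5.0
print(distance.cityblock(p, q))   # Manhattan distance: 7.0
print(distance.chebyshev(p, q))   # largest per-axis difference: 4.0
```

The EAR code in Chapter 4 uses only distance.euclidean, since the eye aspect ratio is defined in terms of straight-line distances between landmarks.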


Face_utils:

Face_utils, a submodule of the imutils package, offers several convenience functions around the dlib library for facial feature recognition. The submodule provides streamlined visualisation of facial features, converts facial landmarks from dlib shape objects to NumPy arrays, and offers a practical means of calculating standard metrics such as the eye aspect ratio (EAR) and mouth aspect ratio (MAR).

Haar cascade classifier:

The Haar cascade classifier is an object detection technique that uses machine learning to identify particular objects, such as faces and cars, in images and videos. Since being proposed by Viola and Jones in 2001, it has become one of the most efficient and widely used object detection techniques. The classifier is trained with both positive and negative pictures of the target object: a positive image contains an example of the subject at the proper location and scale, whereas a negative image contains only background without the subject. By learning to differentiate between positive and negative images, the classifier builds a model that can identify objects in new photos or videos. Haar features are simple rectangular features applied to various sections of the image to capture different patterns; the classifier then combines them into a strong feature vector that can differentiate objects from backgrounds. The cascade part of the name refers to the sequence of stages applied to a picture to increase detection accuracy and decrease false alarms: each stage consists of a sequence of weak classifiers applied to various regions of the image, and only regions that pass all stages are regarded as positive detections. With their quick detection speed and good accuracy, Haar cascade classifiers are well suited to real-time applications including face detection in robots, security systems, and cameras.


CHAPTER 4
EXPERIMENTATION AND RESULTS

4.1 Experimental Work

from scipy.spatial import distance
from imutils import face_utils
import pygame
import time
import dlib
import cv2

# Initialise the mixer and load the alarm sound
pygame.mixer.init()
pygame.mixer.music.load('alarm.mp3')

# Minimum EAR below which the eyes are considered closed
EYE_ASPECT_RATIO_THRESHOLD = 0.5
# Consecutive frames the eyes must stay closed to trigger the alarm
EYE_ASPECT_RATIO_CONSEC_FRAMES = 10

COUNTER = 0

# Haar cascade for face detection (used to draw the face rectangle)
face_cascade = cv2.CascadeClassifier("haarcascades/haarcascade_frontalface_default.xml")

def eye_aspect_ratio(eye):
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    C = distance.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear

# dlib face detector and 68-point landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# Landmark index ranges for the two eyes
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS['left_eye']
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS['right_eye']

video_capture = cv2.VideoCapture(0)
time.sleep(2)  # give the camera time to warm up

while True:
    ret, frame = video_capture.read()
    frame = cv2.flip(frame, 1)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = detector(gray, 0)
    face_rectangle = face_cascade.detectMultiScale(gray, 1.3, 5)

    # Draw a rectangle around each detected face
    for (x, y, w, h) in face_rectangle:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    for face in faces:
        shape = predictor(gray, face)
        shape = face_utils.shape_to_np(shape)

        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]

        leftEyeAspectRatio = eye_aspect_ratio(leftEye)
        rightEyeAspectRatio = eye_aspect_ratio(rightEye)
        eyeAspectRatio = (leftEyeAspectRatio + rightEyeAspectRatio) / 2

        # Outline both eyes
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

        if eyeAspectRatio < EYE_ASPECT_RATIO_THRESHOLD:
            COUNTER += 1
            # print(COUNTER)
            if COUNTER >= EYE_ASPECT_RATIO_CONSEC_FRAMES:
                pygame.mixer.music.play()
                cv2.putText(frame, "You are Drowsy", (150, 200),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 255), 2)
        else:
            pygame.mixer.music.stop()
            COUNTER = 0

    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

4.2 RESULT

After executing the Python program, it opens the webcam and captures our image. The blue square is the face detected by the Haar cascade classifier, and the green contour around each eye is drawn from the facial landmark points. While our eyes are open, no message is displayed and no alert sound is played.

Fig 4.2.1 Output 1 with open eyes

Here, I closed my eyes; after some time the program displays "You are Drowsy" and plays an alert sound to rouse the driver.

Fig 4.2.2. Output 2 with closed eyes

4.3. MERITS AND DEMERITS

MERITS

• The driver's life can be saved by alerting them with the alarm system.

• The speed of the vehicle can be controlled.

• Traffic can be better managed by reducing accidents.

DEMERITS

• The accuracy of the model degrades if the eye region is not captured clearly due to obstacles such as goggles or spectacles with reflections.

• The accuracy of eye detection reduces when the driver is not facing the camera.

CHAPTER 5
CONCLUSION

5.1 CONCLUSION

As described in Chapter 3, Haar features are simple rectangles applied to various parts of the image to capture different patterns. A classifier combines these features into a strong feature vector that can differentiate between objects and backgrounds. The cascade component of the Haar cascade classifier is a sequence of stages applied to the picture to increase detection precision and decrease false alarms; only areas of the picture that pass every stage, each of which comprises a sequence of weak classifiers applied to various regions of the image, are regarded as positive detections. Haar cascade classifiers are well suited to real-time applications like face detection in cameras, security systems, and robots thanks to their many benefits, including quick detection speed and high accuracy.

Furthermore, the technique is adaptable and may integrate new features and tools, e.g. additional sensors, machine learning, and sophisticated camera systems, to increase accuracy. Technology like this has a promising future: it may be combined with other cutting-edge driving safety technologies or serve as a base for such innovations. Driver drowsiness detection using OpenCV and the Haar cascade classifier is a practical and effective solution to a challenging problem and has the potential to significantly lower the frequency of traffic accidents.

The project presented in this report enables accurate driver fatigue diagnosis and introduces the design and assessment of a device for detecting driver weariness. The suggested device can help keep drivers awake by notifying them when they start to feel sleepy, and can thereby prevent many traffic incidents caused by driver drowsiness. It serves as a foundation for drowsiness detection systems that identify and combine behavioural, driving, and physiological data. Drivers do not always yawn just before they fall asleep, which emphasises the need for test scenarios in which the subject genuinely nods off from exhaustion and sleepiness. Electrode-based physiological measurement is quite intrusive, but techniques involving non-contact electrodes can get around this invasive character. In order to create effective fatigue detection systems, it is very beneficial to combine physiological measures with behavioural and vehicle-based composite measurements. For the best results, it is also crucial to take environmental conditions into account.

5.2 FUTURE SCOPE

The future potential of driver sleepiness detection is bright, owing to technological advancements and rising driver-safety concerns. Here are a few potential future advancements in driver sleepiness detection:

1. Advanced sensor-based approach: By monitoring physiological signals including heart rate variability, blood oxygen levels, and EEG readings, sensor-based

approaches can improve the accuracy of sleepiness detection. More advanced detection systems will be created with the aid of developments in wearable sensor

technology and machine learning algorithms.

2. 3D camera technology: A more thorough picture of the driver's head and face is provided by 3D camera technology, allowing for the detection of head

movements in all directions. The accuracy of sleepiness detection may be increased and microsleep, brief episodes of unconsciousness, can be detected with this

technique.

3. Eye-tracking technology: Eye tracking can monitor eye movements and identify when a motorist is not looking at the road. This aids in identifying periods of inattention and tiredness in the driver.

4. Safety system integration: A driver sleepiness detection system can also be used in conjunction with other safety features such as automatic braking, lane departure warning, and collision avoidance. This enhances driver safety and helps to prevent accidents.

5. Autonomous vehicles with artificial intelligence: These systems can learn a driver's habits and preferences, monitor traffic conditions in real-time, and provide

them personalised warnings and feedback to keep them awake.

6. Vehicle-to-infrastructure communication: Systems for detecting drowsiness can connect to navigational aids, traffic sensors, and other infrastructure to deliver

real-time information on traffic conditions and modify warning systems accordingly.

Overall, there are a huge variety of potential future applications for driver sleepiness detection, and the fusion of several technologies may be used to create

sophisticated detection systems that reduce the risk of accidents and raise traffic safety.

REFERENCES

Here are some references on driver drowsiness detection:

1. “Study on Driver Drowsiness Detection Systems.” by Basker et al. IEEE Transactions on Intelligent Transport Systems (2018).

2. “Driver Drowsiness Detection Based on Face and Eye Tracking Using Machine Learning Algorithms.” Uddin et al. Journal of Ambient Intelligence and

Humanized Computing (2021).

3. “Driver drowsiness detection using EEG signals and image processing.” By Bhaskar et al. IEEE Sensor Journal (2021).

4. "Developing Real-Time Drowsiness Detection Systems Using Machine Learning Approaches for Driver Safety," Nirdosh Kumar and Amina Rosliza, Journal

of Ambient Intelligence and Humanized Computing (2021).

5. “Real-time fatigue and drowsiness detection using facial recognition and machine learning algorithms,” Lee et al. PhD in Applied Science (2019).

6. "In-vehicle Fatigue Detection System Using Infrared Cameras," Islam et al., Sensors (2019).

7. “A review of driver drowsiness detection systems in terms of practicality, ease of use, and effectiveness,” by Kimetto et al. Journal of Advanced Transportation

(2021).

