
Detection of Drowsiness Using Fusion of Yawning and Eyelid Movements

Vidyagouri B. Hemadri and Umakant P. Kulkarni

Department of Computer Science and Engineering, SDMCET, Dharwad- 580 002, India
{vidya_gouri,upkulkarni}@yahoo.com

Abstract. Use of technology in building human comfort and automation is
growing fast, particularly in the automobile industry. Safety of human beings is
the major concern in vehicle automation. Statistics show that 20% of all traffic
accidents are due to a diminished vigilance level of the driver, and hence the use
of technology in detecting drowsiness and alerting the driver is of prime
importance. In this paper, a method for detection of drowsiness based on
multidimensional facial features, namely eyelid movements and yawning, is
proposed. The geometrical features of the mouth and the eyelid movement are
processed in parallel to detect drowsiness. Haar classifiers are used to detect the
eye and mouth regions. Only the position of the lower lip is checked for
drowsiness since, during a yawn, only the lower lip moves (due to the downward
movement of the lower jaw) while the position of the upper lip is fixed.
Processing is done on only one of the eyes to analyze the attributes of eyelid
movement in drowsiness, thus increasing speed and reducing false detections.
Experimental results show that the algorithm can achieve 80% performance for
drowsiness detection under varying lighting conditions.

Keywords: Face detection, Eyelid movement, Yawn detection, Drowsiness
detection.

1 Introduction
The increasing number of traffic accidents due to a diminished vigilance level of the
driver has become a serious problem for society. Accidents related to driver
hypovigilance are more serious than other types of accidents, since sleepy drivers often
do not take evasive action prior to a collision [1, 2]. Monitoring the driver's level of
vigilance and alerting the driver when he is not paying adequate attention to driving
has therefore become a matter of concern in order to prevent accidents. The prevention
of such accidents is a major focus of effort in the field of active safety research.
Many researchers are working on the development of monitoring systems using
specific techniques. The best detection techniques are based on physiological
phenomena like brain waves, heart rate, pulse rate and respiration. These techniques
are, however, intrusive and cause annoyance. A driver's state of vigilance can also be
characterized by indirect behaviors of the vehicle, like lateral position and steering
wheel movements. Although these techniques are not intrusive, they are subject to
several limitations, such as the vehicle type, driver experience, geometric
characteristics and state of the road. People in fatigue show visual behavior easily
observable from changes in their facial features like the eyes, head and face.

S. Unnikrishnan, S. Surve, and D. Bhoir (Eds.): ICAC3 2013, CCIS 361, pp. 583–594, 2013.
© Springer-Verlag Berlin Heidelberg 2013

In this paper a novel algorithm to detect drowsiness based on eyelid movements and
yawning is proposed. The Viola-Jones algorithm is used to detect the face, and Haar
classifiers are used to detect the eye and mouth regions. The mouth geometrical
features and the eyelid movement are processed in parallel to detect drowsiness. The
fusion of these data allows the system to detect both drowsiness and inattention, thus
improving the reliability of existing algorithms based on unidimensional features.
Experimental results show that this algorithm can achieve 80% performance for
drowsiness detection under good lighting conditions and 50% detection under poor
lighting conditions.
This paper is organized as follows: in Section 2 the related work on the detection
of driver fatigue is presented. Section 3 describes the proposed method. Experimental
results are presented in Section 4, and finally Section 5 presents the conclusion and
future work.

2 Related Work
Boon-Giin Lee and Wan-Young Chung describe a method to monitor driver safety
by analyzing information related to fatigue based on eye movement monitoring
and bio-signal processing [3]. Video sensors are used to capture the driver image, and
a bio-signal sensor gathers the driver's photoplethysmograph signal. A monitoring
system is designed on an Android-based smartphone, where it receives sensory data
via a wireless sensor network and further processes the data to indicate the current
driving aptitude of the driver. A dynamic Bayesian network framework is used for
the driver fatigue evaluation.
Drowsiness prediction employing a support vector machine with eyelid-related
parameters extracted from EOG data is proposed by Hu Shuyan and Zheng Gangtie
[4]. The dataset is first divided into three incremental drowsiness levels, and then a
paired t-test is done to identify how the parameters are associated with the drivers'
sleepy condition. With all the features, an SVM drowsiness detection model is
constructed. The validation results show that the drowsiness detection accuracy is
quite high, especially when the subjects are very sleepy.
Shabnam Abtahi, Behnoosh Hariri and Shervin Shimiohammadi present a method for
detecting drivers' drowsiness and subsequently alerting them based on changes in
the mouth geometric features [5]. The driver's face is continuously recorded using a
camera installed under the front mirror. The face is detected and tracked through the
series of frames captured by the camera, followed by detection of the eyes and the
mouth. The mouth geometrical features are then used to detect a yawn. The system
alerts the driver of his fatigue and the improper driving situation when yawning is
detected.
Tianyi Hong and Huabiao Qin propose a method for driver drowsiness detection in
an embedded system based on eye state identification using image processing
techniques [6]. This method utilizes face detection and eye detection to initialize the
location of the driver's eyes, after which an object tracking method is used to keep
track of the eyes; finally, the drowsiness state of the driver is identified with
PERCLOS from the identified eye state. Experimental results show good agreement
with the analysis.
Danghui Liu, Peng Sun, YanQing Xiao and Yunxia Yin proposed a drowsiness
detection algorithm based on eyelid movement [7]. The driver's face is detected using
a cascaded classifier algorithm, and the diamond search algorithm is used to trace the
face. From a temporal difference image, a simple feature is extracted and used to
analyze rules of eyelid movement in drowsiness. Three criteria, namely the duration
of eyelid closure, the number of groups of continuous blinks and the frequency of
eye blink, are used to judge whether a driver is drowsy or not. Experimental results
show that this algorithm achieves a satisfactory performance for drowsiness detection.
An advanced driver assistance system for automatic driver drowsiness detection based
on visual information and artificial intelligence is presented by Marco Javier Flores,
Jose Maria Armingol and Arturo de la Escalera [8]. The face and the eyes are located
and tracked to compute a drowsiness index. This system uses advanced technologies
for analyzing and monitoring the driver's eye state in real time and under real driving
conditions.
Antoine Picot, Sylvie Charbonnier and Alice Caplier present an algorithm for
driver drowsiness detection based on visual signs that can be extracted from the
analysis of a high-frame-rate video [9]. A data mining approach is used to evaluate the
relevance of different visual features on a consistent database in order to detect
drowsiness. Fuzzy logic that merges the most relevant blinking features (duration,
percentage of eye closure, frequency of the blinks and amplitude-velocity ratio) is
proposed.
Smith, Shah and Lobo present a system for analyzing human driver alertness
[10, 11]. The system relies on estimation of global motion and color statistics to
robustly track a person's head and facial features. It classifies rotation in all viewing
directions, detects eye/mouth occlusion, detects eye blinking and eye closure, and
recovers the 3D gaze of the eyes. The system maintains tracking through occlusions
due to eye blinking and eye closure, large mouth movement, and rotation.
Azim Eskandarian and Ali Mortazavi describe the performance of a neural-network-
based algorithm for monitoring the driver's steering input; correlations are found
between the change in steering and the state of drowsiness in the case of truck drivers
[12]. The work develops a drowsiness detection method for commercial trucks using
an artificial neural network classifier, trained and tested in a simulated environment.
Taner Danisman, Ian Marius Bilasco, Chabane Djeraba and Nacim Ihaddadene
present an automatic drowsy driver monitoring and accident prevention system
based on monitoring changes in the eye blink duration [13]. This method detects
visual changes in eye locations using the horizontal symmetry feature of the eyes and
detects eye blinks via a standard webcam in real time. The eye blink duration is the
time during which the upper and lower eyelids are connected. The pattern indicates
potential drowsiness prior to the driver falling asleep, and the driver is then alerted
by voice messages. Results showed that the proposed system detects eye blinks with
94% accuracy and a 1% false positive rate.
An effective driver fatigue detection method based on spontaneous pupillary
fluctuation behavior is described by Xingliang Xiong, Lifang Deng, Yan Zhang and
Longcong Chen [14]. Using an infrared-sensitive CCD camera, a 90 s long infrared
video of the pupillogram is captured. An edge detection algorithm based on the
curvature characteristic of the pupil boundary is employed to extract a set of points of
the visible pupil boundary, particularly when the eyelids or eyelashes are occluding
the pupil. These points are fitted to a circle using the least squares error criterion; the
diameter of this fitted circle approximates the diameter of the pupil in the current
frame of the video. Finally, the pupil diameter variability (PDV) over the 90 s long
video is calculated.

Regan et al. [15] defined driver distraction and driver inattention, and a taxonomy is
presented in which driver distraction is distinguished from other forms of driver
inattention. The taxonomy and the definitions are intended to provide a common
framework for coding different forms of driver inattention as contributing factors in
crashes and incidents, so that comparable estimates of their role as contributing
factors can be made across different studies, and to make it possible to more
accurately interpret and compare, across studies, the research findings for a given
form of driver inattention. The authors suggested that driver inattention is insufficient
or no attention to activities critical for safe driving, and that driver diverted attention
(which is synonymous with "driver distraction") is just one form of driver inattention.
The other forms of driver inattention they have labeled tentatively as driver restricted
attention, driver misprioritised attention, driver neglected attention and driver cursory
attention.
Fan et al. [16] described a novel Gabor-based dynamic representation for dynamics
in facial image sequences to monitor human fatigue. Considering the multi-scale
character of different facial behaviors, Gabor wavelets were employed to extract
multi-scale and multi-orientation features for each image. Then features of the same
scale were fused into a single feature according to two fusion rules to extract the local
orientation information. The AdaBoost algorithm was exploited to select the most
discriminative features and construct a strong classifier to monitor fatigue. The
authors tested on a wide range of human subjects of different genders, poses and
illuminations under real-life fatigue conditions. Findings indicated the validity of the
proposed method, and an encouraging average correct rate was achieved.
Most of these algorithms [4-9, 12-14] are unidimensional, in which case detection
of drowsiness is based on a single parameter. The reliability of the system can be
improved by fusing these data, allowing the system to detect both drowsiness and
inattention. A novel drowsiness detection algorithm based on fusion of eyelid
movements and yawning is therefore proposed. Processing of the mouth geometrical
features and the eyelid movement is done in parallel to detect drowsiness. To increase
the speed of detection, only the position of the lower lip is selected, and only one of
the eyes is analyzed for criteria such as opening and closing of the eye, duration of
opening and closing, and frequency of blink. Experimental results show that the
algorithm achieves a satisfactory performance for drowsiness detection.

3 Proposed Approach
The proposed algorithm consists of different phases to properly analyze changes in
the facial attributes of the driver, such as changes in the mouth geometrical features
and changes in the status of the eyes. These phases are categorized as follows.
1. Face detection
2. Face tracking
3. Eye detection and mouth detection
4. Fusion of the data
5. Drowsiness detection

The sample videos are collected with different intensities or captured using a webcam
for a fixed interval of time. The face region is detected from the frames extracted
from the videos using the Viola and Jones algorithm in OpenCV [17, 18]. After
detecting the face, the mouth and eye regions of the face are detected and processed
in parallel, as shown in figure 1.

Fig. 1. Flow diagram of detection of drowsiness system

3.1 Yawn Detection


The mouth part is extracted, binarized and subjected to edge detection using the
Canny edge detector. The mouth region is selected based on the face region,
converted to gray scale, and Canny edge detection is applied to find the lip edges.
This edge detection depends on different threshold values for different light
intensities and camera resolutions. The method uses two thresholds to find edges.
The upper threshold is used to find the start point of an edge. The path of the edge is
then traced through the image pixel by pixel, marking the edge as long as it remains
above the lower threshold. A pixel is not marked if its value falls below the lower
threshold. The algorithm for yawn detection is described below.
Procedure drowsy detect
1. Capture the video.
2. Form the frames from the input video.
3. Detect the face contour. The Viola and Jones algorithm
   is used to detect the face, as shown in figure 2.
4. Call the yawn detection and eye detection algorithms.
5. Collect the returned values from the above two
   algorithms.
6. If drowsy then
      the person is alarmed by a voice alert.
   End if
End of algorithm
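As a rough sketch, the top-level fusion in the procedure above can be expressed in Python as below; the helper names `yawn_detect` and `eye_detect` are hypothetical stand-ins for the two procedures, not code from the original system.

```python
def drowsy_detect(frames, yawn_detect, eye_detect):
    """Fuse the verdicts of the two detectors over a frame sequence.

    The two detectors are logically independent and could be run in
    parallel; they are called sequentially here for simplicity. Either
    cue alone is taken as sufficient to raise the alarm.
    """
    yawn_drowsy = yawn_detect(frames)  # True if yawning is detected
    eye_drowsy = eye_detect(frames)    # True if the eyelid criteria are met
    return yawn_drowsy or eye_drowsy


# Usage with stub detectors standing in for the real algorithms:
frames = ["frame_%d" % i for i in range(10)]
alarm = drowsy_detect(frames, lambda f: False, lambda f: True)
if alarm:
    pass  # here the driver would be alerted by a voice message
```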

Procedure Yawn Detection
1. Detect the mouth region, as shown in figure 3.
2. Convert the above image to gray scale.
3. Detect the edges of the lips using the Canny edge
   detector.
4. Use template matching on the resultant image.
5. If the template matches any of the four white objects
      then the mouth is open
   else the mouth is closed.
   End if
6. If the mouth is found open in four continuous frames
   within two seconds then a yawn is detected.
   End if
7. If yawning then return drowsy. End if
End of algorithm
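The consecutive-frame rule of step 6 can be sketched as a small Python routine over per-frame mouth-open flags (an illustrative simplification: the flags are assumed to come from the template-matching step, and four consecutive open frames are taken as a yawn, assuming the frame rate keeps four frames within the two-second window).

```python
def detect_yawn(mouth_open_flags, min_consecutive=4):
    """Return True if the mouth is open in at least `min_consecutive`
    continuous frames (one boolean per frame, True = mouth open)."""
    run = 0
    for is_open in mouth_open_flags:
        run = run + 1 if is_open else 0  # length of the current open streak
        if run >= min_consecutive:
            return True
    return False


# Four continuous open frames: yawn; interrupted opening: no yawn.
print(detect_yawn([False, True, True, True, True, False]))  # True
print(detect_yawn([True, True, False, True, True]))         # False
```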

Fig. 2. Detection of face

Fig. 3. Detection of mouth

The result of applying Canny edge detection is shown in figure 4.
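The double-threshold tracing described in Section 3.1 can be illustrated with a minimal one-dimensional sketch (real Canny works on 2-D gradient magnitudes with non-maximum suppression and traces in both directions; this simplified version traces forward only):

```python
def hysteresis_threshold(gradients, low, high):
    """1-D sketch of Canny's hysteresis: a value at or above `high`
    starts an edge, which is then traced while values stay at or
    above `low`; anything below `low` ends the trace."""
    edges = [False] * len(gradients)
    i = 0
    while i < len(gradients):
        if gradients[i] >= high:          # upper threshold: edge start
            j = i
            while j < len(gradients) and gradients[j] >= low:
                edges[j] = True           # keep marking above the lower threshold
                j += 1
            i = j
        else:
            i += 1
    return edges


# The weak value 3 at index 2 is kept (connected to a strong edge);
# the weak value 3 at index 4 is rejected (no strong neighbour).
print(hysteresis_threshold([1, 5, 3, 1, 3, 1], low=2, high=4))
```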



Fig. 4. Canny edge detection

Fig. 5. Detection of downward movement of mouth

Fig. 6. Complete system to detect drowsiness based on yawn



Only the position of the lower lip is selected to check for drowsiness since, during a
yawn, only the lower lip moves down while the position of the upper lip is fixed.
Figure 5 shows a few regions obtained after the values are set. The complete system
is divided into two modules, which are shown in figure 6.
3.2 Eye Detection
Once the face region is detected, the eyes are detected using Haar classifiers.

Fig. 7. Detection of eye

The object detector returns the list of detected eyes, and a rectangle can be drawn
for each eye. Processing is done on only one of the eyes. This approach can be used even if

Fig. 8. Complete system to detect drowsiness based on eye status



one of the eyes is partially covered with hair. Eye blinking is monitored continuously
by observing the status of the eye, open or closed. In the closed state the eye is not
detected; it is detected only in the open state. This feature is used to find three
criteria: duration of eye closure, duration of opening and frequency of blink.
Normally, the eye blinks 10 times per minute, the interval between blinks is about
two to six seconds, and the duration of each blink is about 0.15 to 0.25 seconds. If
the eye is closed for more than 8 frames, the person is fatigued. When a person is
drowsy, the frequency of his eye blink becomes slower. Conversely, when he is just
becoming drowsy from a normal alert state, the frequency of his eye blink is faster
during the transition stage. If the person feels drowsy, to counteract this, he tries to
keep his eye open for a longer duration. If a person's eye blink is continuous with
higher frequency, that is, more than 10 times per minute, then he is in a fatigue state
and is alarmed. The algorithm for eye detection is described below. The complete
system to detect drowsiness based on eye parameters is shown in figure 8.
Procedure Eye Detection
1. Detect the eyes using Haar classifiers.
2. Monitor the blinking of the eye by observing its
   status, open or closed.
3. If the eye is detected then it is in the OPEN state
   else it is in the CLOSED state.
   End if
4. If (the eye is closed for more than 8 frames) OR (the
   eye is open for more than 30 frames) OR (the frequency
   of blink is more than 10 times per minute)
      then return drowsy.
   End if
End of algorithm
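The three criteria of step 4 can be sketched in Python over per-frame eye-open flags (an illustrative simplification using the thresholds quoted above; the flag for each frame is True when the eye is detected, i.e. open):

```python
def eye_drowsy(eye_open_flags, video_seconds,
               max_closed_frames=8, max_open_frames=30, max_blink_rate=10):
    """Apply the three eyelid criteria: prolonged closure, prolonged
    forced opening, and blink frequency above 10 blinks per minute."""
    closed_run = open_run = blinks = 0
    prev_open = True
    drowsy = False
    for is_open in eye_open_flags:
        if is_open:
            open_run += 1
            closed_run = 0
            if not prev_open:
                blinks += 1  # a closed -> open transition counts as one blink
        else:
            closed_run += 1
            open_run = 0
        if closed_run > max_closed_frames or open_run > max_open_frames:
            drowsy = True
        prev_open = is_open
    blink_rate = blinks * 60.0 / video_seconds  # blinks per minute
    return drowsy or blink_rate > max_blink_rate


# Eye closed for 9 consecutive frames -> drowsy.
print(eye_drowsy([True] * 5 + [False] * 9 + [True] * 5, video_seconds=60))  # True
```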

4 Experimental Results
Nearly fifty videos were collected under different lighting conditions, such as during
day time and night time. The proposed algorithm detects the face first. If the face is not

Fig. 9. % Drowsiness Detection under different light conditions (partial data)



Table 1. Detection of drowsiness under different lighting conditions

Video clip | FI (lux) | BI (lux) | Yawn duration (frames) | Eye closure duration (frames) | Eye opening duration (frames) | % Drowsy
-----------|----------|----------|------------------------|-------------------------------|-------------------------------|---------
1          | 0        | 0        | 0                      | 2                             | 8                             | 0
2          | 0        | 0        | 0                      | 3                             | 7                             | 0
3          | 20       | 0        | 0                      | 3                             | 9                             | 0
4          | 20       | 60       | 0                      | 3                             | 8                             | 0
5          | 20       | 60       | 0                      | 5                             | 8                             | 0
6          | 30       | 10       | 0                      | 3                             | 7                             | 0
7          | 30       | 10       | 3                      | 3                             | 7                             | 50
8          | 30       | 10       | 2                      | 3                             | 8                             | 25
9          | 40       | 0        | 0                      | 4                             | 7                             | 0
10         | 40       | 0        | 2                      | 3                             | 20                            | 25
11         | 80       | 10       | 6                      | 4                             | 8                             | 100
12         | 80       | 10       | 2                      | 6                             | 8                             | 25
13         | 90       | 10       | 7                      | 3                             | 7                             | 100
14         | 90       | 10       | 7                      | 8                             | 4                             | 100
15         | 90       | 10       | 7                      | 9                             | 3                             | 100
16         | 130      | 30       | 6                      | 9                             | 3                             | 100
17         | 130      | 30       | 6                      | 10                            | 2                             | 100
18         | 140      | 40       | 6                      | 9                             | 3                             | 100
19         | 140      | 40       | 7                      | 9                             | 3                             | 100
20         | 140      | 40       | 2                      | 8                             | 4                             | 25
21         | 140      | 100      | 7                      | 3                             | 7                             | 100
22         | 140      | 40       | 2                      | 5                             | 25                            | 25
23         | 160      | 10       | 8                      | 2                             | 35                            | 100
24         | 160      | 10       | 7                      | 8                             | 3                             | 100
25         | 160      | 10       | 2                      | 4                             | 5                             | 25
26         | 160      | 40       | 7                      | 3                             | 5                             | 100
27         | 160      | 40       | 2                      | 4                             | 6                             | 25
28         | 190      | 20       | 6                      | 2                             | 35                            | 100
29         | 190      | 20       | 2                      | 6                             | 4                             | 25
30         | 200      | 70       | 6                      | 10                            | 2                             | 100

detected, the algorithm fails. Following the detection of the face, the algorithm next
finds the mouth and eye regions in the face. Detection of the mouth part is followed
by yawn detection, and detection of the eyes is followed by extraction of the eye
attributes. The two algorithms are processed in parallel, and reporting is done based
on the two results. The following shows the result of applying the algorithm on
different images captured during the experiment under different intensity conditions:
FI, the front intensity, is the intensity on the face during video recording, and BI, the
back intensity, is the intensity at the background of the face.

Detection is scored 25% if the yawning or eye closure is found in 2 continuous
frames in a second, 50% if found in 3 or 4 continuous frames in a second, 75% if
found in 5 continuous frames, and 100% if found in 6 or more continuous frames in
2 seconds. Table 1 shows part of the result of drowsiness detection under different
luminance conditions. The result shows that the detection rate is 80% under lighting
conditions with light intensity greater than 50 lux and less than 8000 lux. The
algorithm fails under very dark (less than 50 lux) or very bright (more than 8000 lux)
lighting conditions.
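The scoring rule above can be written as a small helper function (a sketch; the original text overlaps at 6 frames between the 75% and 100% bands, which is resolved here in favor of 100%, consistent with the 100%-scored clips in Table 1):

```python
def detection_percentage(longest_run):
    """Map the longest run of consecutive yawn/eye-closure frames to the
    detection percentage used in Table 1."""
    if longest_run >= 6:
        return 100
    if longest_run == 5:
        return 75
    if longest_run in (3, 4):
        return 50
    if longest_run == 2:
        return 25
    return 0


# e.g. clip 7 of Table 1 has a 3-frame yawn and is scored 50%:
print(detection_percentage(3))  # 50
print(detection_percentage(7))  # 100
```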

5 Conclusion

In this paper, an algorithm to detect drowsiness based on multidimensional biometric
features, namely mouth geometrical features and eyelid movement, is proposed. The
position of the lower lip is used to check for yawning. Three parameters of the eye,
namely duration of closure, duration of opening and frequency of blink, are used to
check for drowsiness based on eyelid movement. The processing is done on only one
of the eyes, both to increase the speed of detection and to allow detection under
different hair style conditions. The algorithm works satisfactorily under reasonable
lighting conditions (50 lux to 8000 lux) and fails to detect under very dark or very
bright lighting conditions (less than 50 lux or more than 8000 lux). The algorithm
also fails to detect drowsiness under a few adverse conditions, such as certain skin
colors and the wearing of spectacles.

Acknowledgements. This project was carried out under a Research Promotion
Scheme grant from the All India Council for Technical Education (AICTE), project
Ref. No: 8023/RID/RPS-114(Pvt)/2011-12. The authors wish to thank AICTE,
New Delhi.

References
1. Bergasa, L.M., Nuevo, J., Sotelo, M.A., Barea, R., Lopez, M.E.: Real-time system for
monitoring driver vigilance. IEEE Transactions on Intelligent Transportation Systems 7(1),
63–77 (2006) ISSN: 1524-9050
2. Ueno, H., Kaneda, M., Tsukino, M.: Development of drowsiness detection system. In:
Proc. Vehicle Navigation and Information Systems Conf., pp. 15–20 (1994)
3. Lee, B.-G., Chung, W.-Y.: Driver Alertness Monitoring Using Fusion of Facial Features
and Bio-Signals. IEEE Sensors Journal 12, doi:10.1109/JSEN.2012.2190505
4. Hu, S., Zheng, G.: Driver drowsiness detection with eyelid related parameters by Support
Vector Machine. Expert Systems with Applications 36, 7651–7658 (2009),
doi:10.1016/j.eswa.2008.09.030
5. Abtahi, S., Hariri, B., Shimiohammadi, S.: Driver Drowsiness Monitoring Based on Yawn-
ing Detection, 978-1-4244-7935-1/11/IEEE
6. Hong, T., Qin, H.: Drivers Drowsiness Detection in Embedded System, 1-4244-1266-
8/07/IEEE

7. Liu, D., Sun, P., Xiao, Y., Yin, Y.: Drowsiness Detection Based on Eyelid Movement. In:
2010 Second International Workshop on Education Technology and Computer Science.
IEEE (2010) 978-0-7695-3987-4/10, doi:10.1109/ETCS.2010.292
8. Flores, M.J., Armingol, J.M., de la Escalera, A.: Real-Time Drowsiness Detection System
for an Intelligent Vehicle. In: 2008 IEEE Intelligent Vehicles Symposium (2008), 978-1-
4244-2569-3/08/IEEE
9. Picot, A., Charbonnier, S., Caplier, A.: Drowsiness detection based on visual signs: blink-
ing analysis based on high frame rate video, 978-1-4244-2833-5/10/IEEE
10. Smith, P., Shah, M., da Vitoria Lobo, N.: Monitoring head/eye motion for driver alertness
with one camera. In: Proc. 15th Int. Conf. Pattern Recognition, Barcelona, Spain, vol. 4,
pp. 636–642 (2000)
11. Smith, P., Shah, M., da Vitoria Lobo, N.: Determine driver visual attention with one cam-
era. Intelligent Transportation Systems 4, 205–218 (2003)
12. Eskandarian, A., Mortazavi, A.: Evaluation of a Smart Algorithm for Commercial Vehicle
Driver Drowsiness Detection. In: Proceedings of the 2007 IEEE Intelligent Vehicles Sym-
posium (2007), 1-4244-1068-1/07/IEEE
13. Danisman, T., Bilasco, I.M., Djeraba, C., Ihaddadene, N.: Drowsy Driver Detection Sys-
tem Using Eye Blink Patterns, 978-1-4244-8611-3/10/IEEE
14. Xiong, X., Deng, L., Zhang, Y., Chen, L.: Objective Evaluation of Driver Fatigue by Us-
ing Spontaneous Pupillary fluctuation, 978-1-4244-5089-3/11/IEEE
15. Regan, M.A., Hallett, C., Gordon, C.P.: Driver distraction and driver inattention: Defini-
tion, relationship and taxonomy. Accident Analysis and Prevention 43, 1771–1781 (2011)
16. Fan, X., Sun, Y., Yin, B., Guo, X.: Gabor-based dynamic representation for human fatigue
monitoring in facial image sequences. Pattern Recognition Letters 31, 234–243 (2010)
17. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In:
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, vol. 12, pp. I-511–I-518 (2001)
18. Intel Open Source Computer Vision Library,
http://www.intel.com/technology/computing/opencv/index.htm
