
9th IFAC Workshop on Programmable Devices and Embedded Systems

Roznov pod Radhostem, Czech Republic, February 10-12, 2009

Eye-blinking artifact detection and elimination in the EEG record


B. Babušiak*, M. Gála*

* VSB-Technical University of Ostrava, Department of Measurement and Control,


17. listopadu 15, Ostrava-Poruba 708 33, Czech Republic
e-mail: branko.babusiak@vsb.cz, michal.gala@vsb.cz

Abstract: An electroencephalogram (EEG) is often corrupted by different types of artifacts, and many efforts have been made to enhance its quality by reducing them. The EEG contains technical artifacts (mains interference, amplitude artifacts, etc.) and biological artifacts (eye artifacts, ECG and EMG artifacts). This paper is aimed at detecting eye-blinking artifacts from a video that is recorded simultaneously with the EEG data. Detecting eye artifacts is not straightforward, and many attempts have been made to find an optimal method for their detection or, better still, their elimination; Independent Component Analysis (ICA) and artificial neural networks, for example, have been used for this purpose. This article describes a detection method based on image processing and artifact elimination using ICA.

Keywords: EEG, artifact, image processing, ICA

1. INTRODUCTION

Electroencephalography is the neurophysiologic measurement of the electrical activity of the brain, recorded from electrodes placed on the scalp or, in special cases, subdurally or in the cerebral cortex. The resulting traces are known as an electroencephalogram (EEG) and represent an electrical signal (postsynaptic potentials) from a large number of neurons; they are sometimes called brainwaves. EEGs are frequently used in experimentation because the process is non-invasive for the research subject. Four main types of brain activity are recognized in the EEG: delta (frequencies up to 4 Hz), theta (4 Hz to 8 Hz), alpha (8 Hz to 12 Hz) and beta (above 12 Hz) [Lopes da Silva, F., 1982].

1.1 Measurement of the EEG

In this work the international 10-20 system of electrode placement is used. This system includes 19 EEG electrodes, each placed on the scalp as shown in Fig. 1.

Fig. 1. International 10-20 system of electrode placement

In addition to the EEG channels, the system may also include additional channels such as ECG (electrocardiography), EOG (electrooculography), EMG (electromyography) and PNG (pneumography).

The EEG record is usually digitized and stored on an appropriate storage medium (CD, DVD, hard disk, ...) for further processing and analysis. The record contains many types of artifacts. An artifact is an event or process whose source is not the examined organ. One type is the eye artifact: blinking and eye movement. Although the amplitude of the electrooculographic (EOG) signal is only about six times greater than that of the EEG, the interference is large because of the short distance between the sources of these signals. The eye artifact is best seen in the first two channels, Fp1 and Fp2 (Fig. 2).

Fig. 2. Segment of an EEG record with marked eye artifacts in channels Fp1 and Fp2



2. DETECTION METHOD

The whole detection method consists of the following steps: video and EEG record synchronization, measurement of the mean intensity in a selected area, and marking of the artifacts in the EEG record. These steps are described below.

2.1 Video and EEG record synchronization

The video record is obtained from two cameras. The first one captures the whole person in the bed and the second one is focused on the face (Fig. 3). The parameters of the video record are listed in Table 1.

Fig. 3. Video record

The video record with the parameters shown in Table 1 is not an ideal case, because it was created only to check the movements and general state of the person. It therefore has low resolution, a low frame rate, and its quality is noticeably reduced by compression. Even so, these parameters are sufficient for eye-blinking detection.

Table 1. Video parameters

File size             97 703 480 bytes
Number of frames      11 597
Frames per second     14.9914 s⁻¹
Time length           12 min 52 s
Width                 352 pixels
Height                288 pixels
Image type            Truecolor (24-bit)
Video compression     DivX50

The first step in artifact detection is the synchronization of the EEG and the video record; this step is later also used to verify the detected artifacts. The EEG activity is recorded by 19 electrodes with a sampling rate f_samp = 128 Hz. As can be seen, the sampling rate is not an integer multiple of the frame rate (FPS):

f_samp / FPS = 128 / 15 ≈ 8.533    (1)

This fact cannot be neglected when the synchronization works only with discrete values, because a small error between the video and the EEG timing is introduced. Nevertheless, this error has no significant influence on correct artifact detection. If P_video is the frame index in the video sequence, then the position of the synchronizing line P_sync in the EEG record is given by

P_sync = P_video · f_samp / FPS    (2)

This computation is performed for each new frame (in this case 15 times per second). After the computation, the synchronizing line has to be displayed in the EEG record together with the video. The video record is displayed in a new modal window of the always-on-top type. These operations consume a lot of processor time, so it is appropriate to use the hardware acceleration of the graphics card to reduce the processor load. The visual appearance of the data synchronization application is shown in Fig. 4.

Fig. 4. Video and EEG synchronization
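The mapping from video frames to EEG samples in (2) can be sketched as follows; the function below is only an illustration, assuming the frame rate and sampling rate from Table 1 and rounding the result to the nearest discrete EEG sample (the rounding strategy is our assumption, not taken from the paper).

# Sketch of the frame-to-EEG synchronization of equation (2); names and
# rounding are illustrative assumptions, not the authors' code.
F_SAMP = 128.0          # EEG sampling rate [Hz]
FPS = 14.9914           # video frame rate from Table 1 [frames/s]

def sync_position(p_video: int) -> int:
    """Return the EEG sample index of the synchronizing line for frame p_video."""
    # P_sync = P_video * f_samp / FPS, rounded to the nearest discrete sample
    return round(p_video * F_SAMP / FPS)

if __name__ == "__main__":
    # frame 150 (about 10 s of video) maps to roughly EEG sample 1281
    print(sync_position(150))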
2.2 Measurement of mean intensity

To detect eye blinking (opening and closing of the eyes), the mean value of the intensity in a selected region of interest is measured. The measurement is carried out for each frame and, at the end, a curve of mean intensity is obtained. The moments of opening or closing the eyes can then be determined from the increasing or decreasing values of this curve.

In the pre-processing phase it is appropriate to reduce the image data in order to accelerate the blink detection. This means that only the area containing the face is cut out of every frame and the color depth is converted from true color (24-bit) to grayscale (8-bit) [Umbaugh, Scott E., 1999].

Let us set a region of interest (ROI) in the reference frame. The ROI is set interactively; it has dimensions (k × l), and the coordinates of its upper left corner are (L, T), for the left and the right eye (Fig. 5).

Fig. 5. Region of interest settings
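A minimal sketch of this pre-processing step, assuming OpenCV and a hypothetical face-area rectangle chosen only for illustration:

# Sketch of the pre-processing step: crop the face area and convert each
# frame to 8-bit grayscale. The crop rectangle is a hypothetical example.
import cv2

FACE_X, FACE_Y, FACE_W, FACE_H = 100, 40, 160, 160  # assumed face area [pixels]

def preprocess_frame(frame_bgr):
    """Cut out the face area and reduce 24-bit color to 8-bit grayscale."""
    face = frame_bgr[FACE_Y:FACE_Y + FACE_H, FACE_X:FACE_X + FACE_W]
    return cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)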

The computation of the mean intensity for the N-th frame is then given by

I_avg(N) = (1 / (k·l)) · Σ_{i=1..k} Σ_{j=1..l} f(i + T, j + L)_N    (3)

where f(i, j)_N is the luminance (intensity) level of the pixel at coordinates i, j in frame N.

The algorithm computes the mean intensity for each frame of the whole video sequence, and the curve of mean intensity is then created. This curve shows the variation of the mean intensity of the selected area over time (Fig. 6 left).
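A sketch of how such a mean intensity curve could be built is given below; the frame iterator and ROI coordinates are hypothetical, and NumPy is assumed for the averaging of equation (3).

# Sketch of equation (3): mean intensity of a k x l ROI with upper left
# corner (T, L), evaluated for every grayscale frame of the sequence.
import numpy as np

def roi_mean_intensity(frame_gray, T, L, k, l):
    """I_avg(N) = (1 / (k*l)) * sum of f(i+T, j+L) over the ROI."""
    roi = frame_gray[T:T + k, L:L + l]
    return roi.astype(np.float64).mean()

def mean_intensity_curve(frames, T, L, k, l):
    """Evaluate the ROI mean for every frame and return the curve as an array."""
    return np.array([roi_mean_intensity(f, T, L, k, l) for f in frames])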
In the application the user can set an optimal ROI interactively in order to reach an adequate signal-to-noise ratio. The ROI has to be selected appropriately: the whole eye (in both the opened and the closed state) must lie inside the selected area. The area should also include some of the adjacent region in order to tolerate possible small head movements during the examination, because no motion compensation is performed yet.

Higher precision and smoothness of the curve are achieved with a higher frame rate of the video sequence. The output format of the video sequence also has a considerable influence on the image quality. For the best analysis, uncompressed records without information loss are most appropriate; compressed formats such as MPEG or DivX introduce undesirable block effects which can suppress fine details. If needed, the frequency of eye blinking can be obtained by means of the Fourier transform.

2.3 Mean intensity curve transformation

From the curve of mean intensity (Fig. 6 left) it is not clearly seen when the eyes are opened or closed. It is therefore appropriate to transform it into a Boolean curve. A thresholding process is used for this purpose. Thresholding transforms the curve of mean intensity into a Boolean curve with only two logical levels (Fig. 6 right):

if f(n) ≥ b(n) → 1 (closed eyes)
if f(n) < b(n) → 0 (opened eyes)

where f(n) is the actual value of the mean intensity and b(n) is the actual value of the computed or selected boundary (threshold). Logical 0 stands for opened eyes and logical 1 stands for closed eyes. The threshold can be set in two different ways.

In the first way, the threshold (a horizontal line) is set intuitively by the user according to the whole progress of the curve in time (Fig. 6 left). This way is not very reliable if the brightness conditions in the room change during the examination: the affected part of the curve is shifted up or down and the opened/closed-eyes state can be interpreted incorrectly.

Fig. 6. Curve of mean intensity (left) and the corresponding Boolean curve (right). The horizontal boundary is set interactively by the user.

The second way is based on the local extremes of the curve (function). The second derivative test is a criterion for determining whether a given stationary point of a function is a local maximum or a local minimum. The test states: if the function f is twice differentiable in a neighborhood of a stationary point x, meaning that f'(x) = 0, then:

if f''(x) < 0, then f has a local maximum at x;
if f''(x) > 0, then f has a local minimum at x.

When the vector of all local extremes has been found, the user (usually a physician) interactively sets the amplitude of one eye blink (circles in Fig. 6 left). A new boundary point is selected if the following condition is satisfied:

(L_max − L_min) > A_blink / 2    (4)

where (L_max − L_min) is the difference between a neighboring local maximum L_max and local minimum L_min, and A_blink is the amplitude of one eye blink.

All boundary points are then concatenated into a boundary line, as shown in Fig. 7.
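A rough sketch of this second thresholding strategy is given below. It is only an interpretation under stated assumptions: local extremes are detected from sign changes of the first difference (standing in for the second derivative test), the boundary points are taken as midpoints of qualifying extreme pairs, and A_blink is supplied by the user as in the text.

# Sketch of the local-extreme based thresholding (condition (4)) and of the
# transformation of the mean intensity curve into a Boolean curve.
import numpy as np

def local_extremes(curve):
    """Indices where the first difference changes sign (stationary points)."""
    d = np.diff(curve)
    return np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1

def boundary_from_extremes(curve, a_blink):
    """Boundary b(n) built from midpoints of neighboring extreme pairs that
    satisfy (Lmax - Lmin) > A_blink / 2, linearly interpolated over the curve."""
    curve = np.asarray(curve, dtype=float)
    ext = local_extremes(curve)
    idx, val = [], []
    for i0, i1 in zip(ext[:-1], ext[1:]):
        lo, hi = sorted((curve[i0], curve[i1]))
        if hi - lo > a_blink / 2.0:                  # condition (4)
            idx.append((i0 + i1) // 2)
            val.append((hi + lo) / 2.0)
    if not idx:                                      # fall back to a flat threshold
        return np.full(curve.shape, curve.mean())
    return np.interp(np.arange(len(curve)), idx, val)

def boolean_curve(curve, boundary):
    """1 = closed eyes where f(n) >= b(n), 0 = opened eyes otherwise."""
    return (np.asarray(curve) >= np.asarray(boundary)).astype(int)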

Fig. 7. Curve of mean intensity (left) and the corresponding Boolean curve (right). The boundary is set by finding the local extremes of the function.

2.4 Marking artifacts in the EEG record

The final step is marking the artifacts (blinks) in the EEG record. Marking is done by adding one additional channel (EYE) to the record (Fig. 8). In contrast to the previous figure, logical 1 here stands for opened eyes. Moreover, the state of closed eyes is highlighted with another color (green in this case) across the whole record. The change of colors represents blinking, and in these segments the influence of eye blinking on the other channels is visible. Thanks to this, the person who evaluates the record knows the origin of the waves in the EEG channels.

Fig. 8. Detection of blinking in the EEG record
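The marking step can be sketched as follows; resampling the Boolean curve to the EEG sampling rate through the relation in (2) and stacking it as an extra row are illustrative assumptions, not the authors' implementation.

# Sketch of adding the EYE marker channel: the Boolean curve (one value per
# video frame) is expanded to the EEG sampling rate and appended to the record.
import numpy as np

F_SAMP, FPS = 128.0, 14.9914

def add_eye_channel(eeg, boolean_curve):
    """eeg: (n_channels, n_samples) array; returns the record with an EYE row.
    Here logical 1 marks opened eyes, as in Fig. 8."""
    n_samples = eeg.shape[1]
    # map every EEG sample back to its video frame (inverse of equation (2))
    frame_idx = np.minimum((np.arange(n_samples) * FPS / F_SAMP).astype(int),
                           len(boolean_curve) - 1)
    eye = 1 - np.asarray(boolean_curve)[frame_idx]   # invert: 1 = opened eyes
    return np.vstack([eeg, eye[np.newaxis, :]])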

3. REMOVING ARTEFACTS

Blind source separation can be used to remove the eye artifacts. Blind Source/Signal Separation (BSS) is a group of digital signal processing methods whose goal is to restore the initial source signals from a mixture of signals with the help of a separation process (Fig. 8). One way to solve this problem is to use Independent Component Analysis (ICA).

3.1 Independent Component Analysis

ICA is a technique which can separate linearly mixed signals. In the simplest case, the mixed signals x(k) = [x_1(k), x_2(k), ..., x_M(k)]^T are a linear combination of N (usually M ≥ N) unknown source signals s(k) = [s_1(k), s_2(k), ..., s_N(k)]^T which are statistically independent of each other. In addition, the mixed signals are corrupted by noise and interference. The symbol [ ]^T stands for the operation of transposition and k is the index of the discrete time sequence. The mixed signals are

x(k) = A·s(k) + v(k)    (5)

x(k) - column vector of mixed signals at each discrete time k
s(k) - column vector of source signals
v(k) - column vector of additive noise
A - unknown mixing matrix of dimension M×N with elements a_ij

Fig. 8. Blind source separation scheme

In the general case, the source signals and their number are unknown; only the mixed vectors x(k) are known. The presence of the additive noise v(k) is not considered in the following text, because low noise is assumed as the initial condition [Hyvärinen, A., Karhunen, J., Oja, E., 2001].

The goal of ICA is then to find the inverse of the matrix A for the reconstruction of the sources. The inverse matrix A⁻¹ is called the separating matrix S, of dimension N×M. We try to solve the following transformation:

y(k) = S·x(k)    (6)

The process of source reconstruction using the ICA method is shown in Fig. 9.

Fig. 9. Block diagram of blind source separation represented by vectors and matrices. A is the mixing matrix, S is the separating matrix, N is the number of sources s(k) and output signals y(k), and M is the number of mixed signals x(k) and additive noises v(k).
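The mixing and separation model of (5) and (6) can be illustrated with a short sketch; the synthetic sources and the use of scikit-learn's FastICA are assumptions made for demonstration only.

# Sketch of the BSS model: mix two synthetic sources with a known matrix A
# (equation (5), noise omitted) and recover them with FastICA (equation (6)).
import numpy as np
from sklearn.decomposition import FastICA

k = np.arange(1024) / 128.0                           # 8 s at 128 Hz
s = np.vstack([np.sin(2 * np.pi * 10 * k),            # s1: 10 Hz "EEG-like" rhythm
               np.sign(np.sin(2 * np.pi * 0.5 * k))]) # s2: slow "blink-like" wave
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                            # mixing matrix (M = N = 2)
x = A @ s                                             # mixed signals x(k) = A s(k)

ica = FastICA(n_components=2, random_state=0)
y = ica.fit_transform(x.T).T                          # separated signals y(k) = S x(k)
# y recovers the sources only up to permutation and scaling (see Section 3.2)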

3.2 Properties and assumptions of ICA

Some significant properties of ICA are:

• only linearly mixed signals can be separated;
• the separated sources are permuted, i.e. they are usually not in the same order as the initial (mixed) sources;
• the separated signals are not restored with the amplitude of the source signals;
• if the mean value of the input signals equals zero and their variance equals one, ICA may not identify the sign of the values correctly.

Important assumptions for using ICA are:

• the number of sensors (mixed signals) is greater than or equal to the number of sources;
• the source signals s(k) are independent at all times;
• only signals without additive sensor noise, or with small degradation noise, are acceptable;
• at most one source signal may be stochastic with a Gaussian distribution.

3.3 Removing detected artifacts

The artifacts detected by the video detection method described above are removed by using ICA. One segment (10 seconds long) with detected eye artifacts is shown in Fig. 10.

Fig. 10. Segment of EEG with marked eye-blinking artifacts

It is supposed that all EEG channels are mixed with the EOG signal linearly. The EOG artifact is best seen in channels Fp1 and Fp2 because of the shortest distance between the EEG and EOG sources. For this segment the ICA method (the FastICA algorithm) is used to find the independent components IC; 19 ICs are found for this 10-second segment (Fig. 11). Now it is necessary to find the IC which represents the EOG artifact and then to reconstruct the EEG signals without this component.

Fig. 11. Set of independent components IC

Because the EOG artifact is best seen in channels Fp1 and Fp2, as mentioned before, let us take Fp1 as the reference signal. To find the right component to remove (reset to zero), the reference signal is compared with each of the IC signals. The comparison is realized by computing cross-correlation functions. The cross-correlation function is computed not for the whole signal but only for the detected blink area and a couple of neighboring samples, which considerably reduces the computational time.

The figure below shows the cross-correlation functions for three segments (Fig. 12). Each segment consists of the selected area (in the middle of the segment) and its neighborhood; in this case the segments are one second long (128 samples).

Fig. 12. Cross-correlation functions between the reference channel and the IC components

As shown above, the cross-correlation functions have their maximum value in the middle of the second channel. It follows that the second independent component, IC2, is the one most correlated with the reference signal Fp1, so component IC2 has to be removed from the EEG mixture. From (6) we need to obtain x:

x* = S⁻¹·y* = A·y*    (7)

x* - mixed signals with the component removed
A - original mixing matrix
y* - independent components without the EOG component

Fig. 13 shows the EEG channels with the EOG artifact removed. The artifact was cleanly removed thanks to the ICA method in combination with the artifact detection method.
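A condensed sketch of this removal procedure is given below; the use of scikit-learn's FastICA, NumPy's correlation routine and the variable names are assumptions, and the blink segment boundaries would come from the video detection step.

# Sketch of Section 3.3: decompose a 10 s EEG segment with FastICA, find the
# component most cross-correlated with Fp1 around the detected blink, zero it,
# and reconstruct the channels as in equation (7). Variable names are assumed.
import numpy as np
from sklearn.decomposition import FastICA

def remove_blink_component(eeg_segment, fp1_index, blink_slice):
    """eeg_segment: (n_channels, n_samples); blink_slice: samples around the blink."""
    ica = FastICA(n_components=eeg_segment.shape[0], random_state=0)
    ics = ica.fit_transform(eeg_segment.T)          # columns = independent components

    ref = eeg_segment[fp1_index, blink_slice]       # reference signal Fp1
    scores = [np.max(np.abs(np.correlate(ref - ref.mean(),
                                         ic[blink_slice] - ic[blink_slice].mean(),
                                         mode="full")))
              for ic in ics.T]
    eog_ic = int(np.argmax(scores))                 # component most correlated with Fp1

    ics[:, eog_ic] = 0.0                            # reset the EOG component to zero
    return ica.inverse_transform(ics).T             # x* = A . y*  (equation (7))

In practice the cross-correlation would be evaluated for each detected blink segment separately, as illustrated in Fig. 12.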

Fig. 13. Segment of EEG with the eye-blinking artifacts removed in the marked areas

4. CONCLUSION

This article presented an algorithm for the detection of eye artifacts (blinking). The algorithm was embedded in an application with a user-friendly interface. A weakness of this approach shows up in the case of rapid head movements, where several regions of interest and threshold settings for the specific segments are needed. This disadvantage will be removed in a second-generation algorithm which will be able to detect the position of the eyes automatically even when the head moves.

The designed algorithm is also able to detect eyeball movements, but it needs better video quality: a higher frame rate, higher resolution and a video sequence without compression. Future work is aimed at setting the region of interest automatically (e.g. by using pattern matching).

The ICA method was used to remove the eye-blinking artifacts. Thanks to the results of the preceding detection method, it can remove the artifacts quickly and automatically. EEG data without eye-blinking artifacts are then ready for further processing in the frequency domain.

REFERENCES

Černošek, A., Krajča, V., Petránek, S., Mohylová, J. (2000): Praktické zkušenosti s aplikací metody analýzy nezávislých komponent a analýzy hlavních komponent pro eliminaci EEG artefaktů [Practical experience with the application of independent component analysis and principal component analysis for the elimination of EEG artifacts]. Časopis Lékař a technika, volume 2, pages 31-38.
Hyvärinen, A., Karhunen, J., Oja, E. (2001): Independent Component Analysis. John Wiley & Sons, Toronto, Canada.
Lopes da Silva, F. (1982): Electroencephalography: Basic Principles, Clinical Applications and Related Fields. Urban and Schwarzenberg, Baltimore, USA.
Umbaugh, Scott E. (1999): Computer Vision and Image Processing. Prentice-Hall, New Jersey, USA.

