Proposal of a Brain Computer Interface to Command an Autonomous Car

Javier Castillo, Sandra Muller, Eduardo Caicedo, Alberto Ferreira De Souza, and Teodiano Bastos
Post-Graduate Program of Electrical Engineering, Federal University of Espirito Santo, Av. Fernando Ferrari, 514, Vitoria, Brazil
Post-Graduate Program of Informatics, Federal University of Espirito Santo, Av. Fernando Ferrari, 514, Vitoria, Brazil
Post-Graduate Program of Electrical Engineering, University of Valle, Av. Paso Ancho 13-00, Cali, Valle, Colombia
Email: javier.castillo@correounivalle.edu.co

Abstract—This paper presents a proposal of a Brain Computer Interface (BCI) to command an autonomous car. The BCI is based on the paradigms of visual evoked potentials (VEP) and event-related desynchronization (ERD). A menu interface is presented to the user with disabilities so that he/she can choose a destination for the autonomous car. The final destination is selected using visual stimuli flickering at different frequencies, detected by analyzing the brain signals present at the occipital region of the user's scalp. The power spectrum of these signals is then computed in order to identify the frequency of the visual stimulation. Preliminary tests performed with healthy users and people with disabilities reached an average success rate above 90%. The proposed system is also capable of turning the stimuli on and off, thus reducing the fatigue associated with sustained visual stimulation.

Index Terms—Brain Computer Interface, SSVEP, Autonomous Car, ERD

Fig. 1: Autonomous car of the Federal University of Espirito Santo - UFES.

I. INTRODUCTION

A Brain-Computer Interface (BCI) is a non-muscular communication channel for the transmission of signals from the brain to the outside world. A BCI processes mental decision control signals through the analysis of brain bioelectrical activity, allowing the user to interact with the interface itself [1]. A BCI operates based on different paradigms of electroencephalographic (EEG) signals, such as motor imagery, mu rhythm variation, or evoked potentials. Among the evoked potentials are Steady-State Visual Evoked Potentials (SSVEP), which are associated with the frequencies of visual stimuli observed by the user. A BCI based on these potentials is called an SSVEP-BCI [2].

Commanding an autonomous car requires the generation of relevant information related to the address or route to follow. Brain signals provide a new communication channel through which it is possible to generate actions to command such an autonomous car. The car used in this work is an unmanned autonomous car with a high level of autonomy, shown in Fig. 1. Its main features are the ability to locate itself, to orient, and to plan routes using laser, radar, GPS and computer vision. Advanced control systems interpret this information to identify the appropriate path, obstacles, and relevant signals. However, this navigation capability requires a reactive behavior, with the ability to interpret information about the environment and to recover from errors [3].

II. METHODS

This work adapted and optimized the SSVEP-BCI previously used by people with disabilities to navigate a robotic wheelchair [2]. To validate the system developed here, our proposal is a car tour of the campus of the Federal University of Espirito Santo (UFES), with the vehicle completely commanded by a person with disabilities using his/her brain signals.

In this study, an EEG signal database of 19 people aged 28 ± 7 years was used, 6 of whom had motor disabilities [2]. An FPGA was used for stimulus generation, with an accuracy of 0.01 Hz [2]. The signal processing algorithms were implemented on a PC with a 2.4 GHz processor and 2 GB of RAM, using MATLAB Version 2012b. The signal recording protocol consisted of presenting the visual stimuli for 2 minutes, with intervals of 30 s, at frequencies of 5.6, 6.4, 6.9 and 8.0 Hz. The stimulus was presented to the user at a distance of 70 cm. The following methods were evaluated in this work:

A. Traditional Power Spectral Density Analysis (Traditional-PSDA)

The PSDA method is often used for SSVEP detection [4]; it is based on signal processing in the frequency domain. The power spectral density analysis is implemented by searching the power densities around the stimulus frequencies, for which the signal-to-noise ratio (SNR) is given by Equation 1:

S_k = 10 \log_{10} \left( \frac{n P(f_k)}{\sum_{m=1}^{n/2} \left[ P(f_k + m f_{res}) + P(f_k - m f_{res}) \right]} \right),   (1)

where n is the number of samples near the stimulus frequency, P(f_k) is the power density at the stimulus frequency, and f_res is the frequency resolution, which depends on the number of samples used in the Fourier transform. P(f_k + m f_res) and P(f_k - m f_res) are the power densities around the target frequency.
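The paper's implementation was done in MATLAB; as a rough illustration only, the SNR of Equation 1 can be computed from an FFT as in the following Python sketch (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def psda_snr(signal, fs, f_target, n=6):
    """SNR of Equation 1 at one stimulus frequency (illustrative sketch).

    signal: 1-D EEG trace; fs: sampling rate in Hz; f_target: stimulus
    frequency in Hz; n: number of neighboring bins used as the noise estimate.
    """
    psd = np.abs(np.fft.rfft(signal)) ** 2      # unnormalized power spectrum
    f_res = fs / len(signal)                    # frequency resolution of the FFT
    k = int(round(f_target / f_res))            # bin index of the stimulus frequency
    noise = sum(psd[k + m] + psd[k - m] for m in range(1, n // 2 + 1))
    return 10 * np.log10(n * psd[k] / noise)
```

With a flickering stimulus present, the SNR evaluated at the true stimulus frequency is expected to be clearly larger than at the other candidate frequencies.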

B. Ratio Power Spectral Density Analysis (Ratio-PSDA)

The second method implemented is defined by Equation 2:

Ratio = \frac{error}{ampl} = \frac{|f_k - f_h|}{P(f_h)},   (2)

where k = 1, 2, \ldots, N, N is the number of stimuli, and f_k is the frequency of the k-th stimulus. f_h is the frequency showing a peak in the power spectrum, and P(f_h) is the magnitude of the peak closest to the target stimulus frequency (or one of its harmonics). When there is a peak exactly at the target frequency or one of its harmonics, the error is zero. If the error is not zero but the amplitude is very large, the value of the ratio in Equation 2 is small relative to the other stimuli, indicating that the detected frequency is very close to that stimulus.

C. Canonical Correlation Analysis (CCA)

CCA is a statistical technique that finds the maximum similarity between two data sets. Fig. 2 shows a representation of CCA in the frequency domain [4], [5].

Fig. 2: Schematic representation of canonical correlation analysis.

The EEG channels are represented by the matrix X, and the reference signals by the matrix Y. The reference signals are sines and cosines at the fundamental frequencies and their harmonics, as shown in Equation 3:

Y = \left[ \sin(2\pi f_k t), \cos(2\pi f_k t), \ldots, \sin(2\pi h f_k t), \cos(2\pi h f_k t) \right]^T,   (3)

where k = 1, 2, \ldots, N, N is the number of stimuli, and h is the number of harmonics of the target frequency.

With W_x and W_y the weight vectors that define the linear combinations x = X^T W_x and y = Y^T W_y, the CCA method seeks the linear combinations with the highest correlation, under the constraints expressed by Equations 4 and 5:

E[xx^T] = E[x^T x] = E[W_x^T X X^T W_x] = 1,   (4)

E[yy^T] = E[y^T y] = E[W_y^T Y Y^T W_y] = 1.   (5)

The total correlation is then calculated as the ratio between the cross-correlation of the input and output vectors and their autocorrelations, as shown in Equation 6:

\rho_k = \max_{W_x, W_y} \rho(x, y) = \frac{E[x^T y]}{\sqrt{E[x^T x] \, E[y^T y]}} = \frac{E[W_x^T X Y^T W_y]}{\sqrt{E[W_x^T X X^T W_x] \, E[W_y^T Y Y^T W_y]}}.   (6)
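A minimal sketch of this detection scheme, assuming a standard QR/SVD formulation of CCA (the paper's MATLAB implementation is not reproduced here): a reference matrix of sines and cosines is built per candidate frequency, and the stimulus is taken as the frequency with the largest canonical correlation.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # The singular values of Qx^T Qy are the canonical correlations.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_detect(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sine/cosine references
    correlate best with the multichannel EEG (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        Y = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, Y))
    return freqs[int(np.argmax(scores))]
```

The QR-then-SVD route avoids explicitly forming the covariance matrices of Equations 4 and 5, which keeps the sketch numerically stable for short EEG windows.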

D. Least Absolute Shrinkage and Selection Operator (LASSO)

Suppose data (x_i, y_i), i = 1, 2, \ldots, N, where x_i = (x_{i1}, \ldots, x_{ip})^T are the predictor variables and y_i are the responses [6]. Assume that the x_ij are standardized so that \sum_i x_{ij} / N = 0 and \sum_i x_{ij}^2 / N = 1. As in the usual regression setup, assume either that the observations are independent or that the y_i are conditionally independent given the x_ij. The LASSO estimate is defined by Equation 7:

(\hat{\alpha}, \hat{\beta}) = \arg\min \left\{ \sum_{i=1}^{N} \left( y_i - \alpha - \sum_j \beta_j x_{ij} \right)^2 \right\} \quad \text{subject to} \quad \sum_j |\beta_j| \le t.   (7)

The solution to Equation 7 is a quadratic programming problem with linear inequality constraints [6].
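For illustration, the equivalent penalized (Lagrangian) form of Equation 7 can be solved by coordinate descent with soft-thresholding. This is a generic LASSO sketch, not the paper's implementation, and `lam` plays the role of the constraint bound t through the usual duality:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for min (1/2N)||y - X beta||^2 + lam * ||beta||_1.

    Assumes the columns of X are standardized and y is centered,
    as in the text above.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # Soft-thresholding update: small correlations are set to zero.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta
```

The soft-thresholding step is what drives irrelevant coefficients exactly to zero, which is the feature-selection property exploited for classification.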

E. Detecting Event-Related Desynchronization (ERD)

ERD detection aims to identify a signal variation produced by an event [7]. When the user closes his/her eyes for a while, a variation of the energy in the alpha band can be detected. This variation is easiest to detect with electrodes located at the occipital region, specifically the electrodes Oz, O1 and O2. The signal is first filtered in the range of 8 Hz to 13 Hz. The resulting signal is then differentiated and squared to emphasize the high frequencies, and its integration is performed using a sliding window. Event detection is performed through a kurtosis threshold criterion [8]: kurtosis values higher than 3 are considered valid events. This criterion does not depend on amplitude values, since it relies only on the shape of the waveform. Fig. 3 shows the schematic representation of the synchronization detector.
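The pipeline above (band-pass, differentiate, square, integrate with a sliding window, score with kurtosis) can be sketched as follows. The filter order and window length are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis

def erd_event_score(eeg, fs, win=0.25):
    """Score a single-channel occipital trace for an eyes-closed event.

    Steps follow the text: 8-13 Hz band-pass, derivative, squaring,
    sliding-window integration, then Pearson kurtosis (normal = 3).
    """
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, eeg)                      # alpha-band component
    emphasized = np.diff(alpha) ** 2                 # emphasize high frequencies
    w = max(1, int(win * fs))                        # integration window in samples
    integrated = np.convolve(emphasized, np.ones(w) / w, mode="valid")
    return kurtosis(integrated, fisher=False)        # values above 3 taken as an event
```

A short alpha burst makes the integrated energy heavy-tailed, so its kurtosis rises well above the Gaussian baseline of 3, independently of the absolute signal amplitude.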

TABLE I: Some proposals for the command of autonomous vehicles.

Planning Level: Bell et al. (2008); Valbuena et al. (2007); Ferreira et al. (2009)
Steering Level: None
Control Level: Millán et al. (2004); Leeb et al. (2006); Pfurtscheller et al. (2006); Trejo et al. (2006); Ma et al. (2007); Martinez et al. (2007); Pires et al. (2008); McFarland et al. (2008); Galán et al. (2008); Brouwer et al. (2008); Ron-Angevin et al. (2009); Müller et al. (2013)
Fig. 3: Detection of synchronization events to trigger stimuli.

F. Autonomous car

The autonomous car used in this work was developed to emulate the behavior and decision-making of a common driver, using the sensors necessary for this task. From this perspective, an overview of a system for commanding an autonomous vehicle is shown in Fig. 4.

Fig. 4: Levels representing the autonomous vehicle navigation [9].

In general, the command of an autonomous vehicle has the following structure:
1) Control level: the lowest level of control arises from direct communication with the sensors (GPS, radar, etc.). This level of compensation is based on monitoring behavioral skills, such as maintaining a direction and speed.
2) Steering level: the intermediate level of command is performed with sensor information (perception), from which microcommands are issued in order to drive the car. These commands are related to rule-based monitoring of behavior; for example, one can define a reference speed or distance, with rules for stopping at red lights or changing lanes on a highway.
3) Planning level: the highest level concerns an abstract structure; when decisions are made and executed, it is called planning, and it corresponds to knowledge-based behavior. This analysis requires conscious decisions and information from the environment.

Fig. 5: Proposed system to command an autonomous vehicle using a brain computer interface.
III. BRAIN COMPUTER INTERFACES AND NAVIGATION DEVICES

In the process of commanding robotic wheelchairs or mobile robots, several interfaces have been implemented for different types of functions. Table I shows some of these proposals.

The proposal of this work, outlined in Fig. 5, is to develop a hybrid brain computer interface, combining an asynchronous BCI with an SSVEP-BCI. The motivation is that, although the SSVEP-BCI achieves a high success rate, the presence of stimuli when no decision is required can lead to fatigue. For this reason, an asynchronous BCI that enables/disables the stimuli, and can thus reduce fatigue, is used here.

The brain commands are sent to the classification module, and the autonomous car plans and executes the corresponding action. A menu was developed (Fig. 6) which presents the possible destinations, georeferenced using GPS. Each selection requires a command validation. When a decision has been validated, the system disables the stimuli. When the user wants to request a new activity, he/she closes the eyes for a while; the asynchronous BCI detects this condition and reactivates the stimuli.

IV. RESULTS

A. Synchronous BCI

Table II presents the results of data classification for the different users; the first 6 users have some type of disability. For each classification technique, the accuracy and the Information Transfer Rate (ITR) were calculated. The ITR is a standard measure of communication systems that quantifies the amount of information communicated per unit of time. It depends on both speed and accuracy and is defined by Equation 8 [10].

Fig. 6: Menu for destination selection and control of the on/off stimuli of the autonomous car.

B = (1 - P_u) \left[ \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1} \right],   (8)

where N is the number of classes, P is the rate of correct classifications, and P_u is the rate of undefined classifications. The unit of ITR is [bits/s], but it can be expressed in [bits/min] by multiplying the result by the selection speed, i.e., the number of selections performed by the system in one minute. In this paper, the developed SSVEP-BCI sends one command per second, which leads to a rate of 60 selections per minute.
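Equation 8 and the bits/min conversion can be written directly; a small sketch (the P = 1 limit is handled separately, since the P log2 P terms vanish as P approaches 1):

```python
import math

def itr_bits_per_min(n_classes, p, p_undef=0.0, selections_per_min=60):
    """ITR of Equation 8, scaled to bits/min (60 selections/min in this paper)."""
    bits = math.log2(n_classes)
    if p < 1.0:
        # These terms go to zero as p -> 1, so the branch avoids log2(0).
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return (1 - p_undef) * bits * selections_per_min
```

For example, with N = 4 classes, perfect accuracy and no undefined selections, each selection carries log2(4) = 2 bits, i.e. 120 bits/min at 60 selections per minute.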
Fig. 7 shows the distribution of the average accuracy percentages for each classification technique, presenting the maximum, minimum and average values.
B. Asynchronous BCI

Tests based on the detection of events (closing and opening the eyes) reached a success rate of 90% through the alpha rhythm evaluation. This high success rate shows the applicability of the proposed system.
V. DISCUSSION

With regard to the implemented algorithms, the following considerations can be highlighted:
1) Although the results are good in a controlled laboratory environment, it is necessary to investigate how these algorithms behave when implemented in real-world applications.
2) The classification algorithms were implemented in the time domain (CCA and LASSO) and in the frequency domain (Traditional-PSDA and Ratio-PSDA), in order to have tools that can adapt to the situations that may arise in the autonomous car. Fig. 7(a) shows that the classifiers using signals in the time domain (CCA and LASSO) have better responses with respect to the minimum and average values. On the other hand, the Ratio-PSDA performs better than the SNR-based Traditional-PSDA with regard to the mean values and the concentration of the success-rate data. Fig. 7(b) shows the success rates of the classifiers as a histogram.

Fig. 7: Success rate for classifiers based on SSVEP. (a) Representation showing the maximum, minimum and average values for the different classifiers. (b) Success rate of the different classifiers.
VI. CONCLUSIONS

Using a BCI outside the laboratory environment is a mandatory step toward practical applications. Although the ERD detector was not extensively explored for SSVEP detection, the use of both paradigms is important in a practical BCI system. The tests with SSVEP obtained average success rates between 70% and 90%. However, the highest concentration of data was found in the range of 90% to 95%, suggesting suitability for the application in autonomous cars. The next step of this work is to design a system for data fusion of the classifiers, to be implemented in the autonomous car driven by a person with disabilities.

TABLE II: Results for 19 users, out of which 6 (*) have motor disabilities.

          Traditional-PSDA         Ratio-PSDA               CCA                      LASSO
Subject   Acc[%]  Pu[%]  ITR       Acc[%]  Pu[%]  ITR       Acc[%]  Pu[%]  ITR       Acc[%]  Pu[%]  ITR
1 (*)     90.16   4.38   79.19     85.06   6.88   64.55     96.81   3.13   101.48    96.81   3.13   101.48
2 (*)     52.61   27.50  10.92     74.24   40.63  27.39     75.97   26.25  36.45     79.38   27.50  40.86
3 (*)     91.47   5.00   82.33     92.59   6.25   84.47     96.22   3.13   99.28     96.22   3.13   99.28
4 (*)     86.89   7.50   68.39     95.19   15.00  83.92     95.39   4.38   95.11     95.39   4.38   95.11
5 (*)     76.25   13.13  43.42     97.08   28.13  76.07     87.14   15.00  63.39     87.03   15.00  63.15
6 (*)     84.25   8.13   61.86     46.16   40.63  5.39      93.63   50.63  46.14     94.37   16.25  80.32
7         93.05   15.63  77.26     90.69   28.75  60.12     91.58   11.88  76.67     93.03   12.50  80.07
8         76.19   14.38  42.69     88.18   31.88  52.68     88.47   16.25  65.41     88.47   16.25  65.41
9         90.21   7.50   76.75     97.43   2.50   104.57    96.22   0.63   101.84    96.22   0.63   101.84
10        73.77   6.25   42.44     96.71   5.00   99.13     95.59   1.25   98.93     95.59   1.25   98.93
11        99.38   3.75   111.78    98.06   3.75   105.76    99.38   1.25   114.69    99.38   1.25   114.69
12        70.15   30.63  26.96     91.70   20.00  69.90     95.73   11.25  106.18    95.73   11.25  106.18
13        90.74   5.00   80.29     94.86   2.50   95.17     96.18   2.50   99.80     96.18   2.50   99.80
14        79.83   10.00  51.58     93.31   26.88  67.57     96.81   10.00  94.26     96.81   10.00  94.26
15        59.72   26.88  17.08     91.25   21.88  67.19     92.65   11.25  80.13     92.65   11.25  80.13
16        96.22   0.63   101.84    96.18   1.25   101.08    97.50   0.63   106.85    97.50   0.63   106.85
17        96.84   1.25   103.57    96.84   3.13   101.60    96.25   0.63   101.97    96.25   0.63   101.97
18        93.49   2.50   90.65     96.25   1.25   101.33    96.11   3.75   98.26     96.11   3.75   98.26
19        97.36   2.50   104.26    96.11   3.13   98.90     98.13   0.63   109.48    98.13   0.63   109.48
Average   84.13   10.13  67.01     90.42   15.23  77.20     93.99   9.18   89.28     94.28   7.47   91.48

(Acc = Accuracy [%]; Pu = rate of undefined classifications [%]; ITR in [bit/min].)

ACKNOWLEDGMENT

The authors thank CAPES and CNPq for funding (process 133707/2013-0), and the Graduate Program of the School of Electrical and Electronic Engineering, University of Valle (Colombia).
REFERENCES
[1] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002. [Online]. Available: http://dx.doi.org/10.1016/S1388-2457(02)00057-3
[2] S. Muller, T. Bastos, and M. Sarcinelli, "Proposal of a SSVEP-BCI to command a robotic wheelchair," Journal of Control, Automation and Electrical Systems, vol. 24, pp. 97–105, 2013.
[3] F. Lobo, "Sistemas e veículos autónomos - aplicações na defesa" (Autonomous systems and vehicles - defense applications), Master's thesis, Faculdade de Engenharia da Universidade do Porto, 2005, revised 4 Sep. 2013. [Online]. Available: http://paginas.fe.up.pt/flp/papers/SVA-AD flp cdn05.pdf
[4] Q. Wei, M. Xiao, and Z. Lu, "A comparative study of canonical correlation analysis and power spectral density analysis for SSVEP detection," in Third International Conference on Intelligent Human-Machine Systems and Cybernetics, 2011, pp. 7–10.
[5] T. Tanaka, C. Zhang, and H. Higashi, "SSVEP frequency detection methods considering background EEG," in IEEE SCIS-ISIS, 2012, pp. 20–24.
[6] R. Tibshirani, "Regression shrinkage and selection via the LASSO," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, pp. 267–288, 1996.
[7] A. Ferreira, T. Bastos, M. Sarcinelli, J. Martín, J. García, and M. Mazo, "Improvements of a brain-computer interface applied to a robotic wheelchair," in A. Fred, J. Filipe, and H. Gamboa (Eds.), Biomedical Engineering Systems and Technologies. Berlin: Springer Berlin Heidelberg, vol. 52, 2009, pp. 64–73.
[8] L. DeCarlo, "On the meaning and use of kurtosis," Psychological Methods, vol. 2, pp. 292–307, 1997.
[9] M. Thurling, J. Van, A. Brouwer, and P. Werkhoven, "EEG-based navigation from a human factors perspective," in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction. Springer, 2010, pp. 99–105.
[10] J. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1026–1033, 2004. [Online]. Available: http://dx.doi.org/10.1109/TBME.2004.827086
[11] C. Bell, P. Shenoy, R. Chalodhorn, and R. Rao, "Control of a humanoid robot by a noninvasive brain-computer interface in humans," Journal of Neural Engineering, vol. 2, pp. 214–220, 2008.
[12] A. Brouwer and J. Van, "A tactile P300 BCI and the optimal number of tactors: Effects of target probability and discriminability," in Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008. Graz: Verlag der Technischen Universität Graz, 2008, pp. 280–285.
[13] F. Galán, M. Nuttin, E. Lew, P. Ferrez, G. Vanacker, J. Philips, and J. Millán, "A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots," Clinical Neurophysiology, vol. 119, no. 9, pp. 2159–2169, 2008.
[14] J. Höhne, M. Schreuder, B. Blankertz, and M. Tangermann, "Two-dimensional auditory P300 speller with predictive text system," in Conf. Proc. IEEE Eng. Med. Biol. Soc., 2010, pp. 4185–4188.
[15] R. Leeb, C. Keinrath, D. Friedman, C. Guger, R. Scherer, C. Neuper, M. Garau, A. Antley, A. Steed, M. Slater, and G. Pfurtscheller, "Walking by thinking: the brainwaves are crucial, not muscles!" Presence: Teleoperators and Virtual Environments, vol. 5, pp. 500–514, 2006.
[16] Z. Ma, X. Gao, and S. Gao, "Enhanced P300-based cursor movement control," Lecture Notes in Computer Science (LNAI), vol. 4565, pp. 120–126, 2007.
[17] J. Markoff, "Google lobbies Nevada to allow self-driving cars," The New York Times, May 2011. [Online]. Available: http://www.nytimes.com/2011/05/11/science/11drive.html?_r=0
[18] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, 2007.
[19] D. McFarland, D. Krusienski, W. Sarnacki, and J. Wolpaw, "Emulation of computer mouse control with a noninvasive brain-computer interface," Journal of Neural Engineering, vol. 2, pp. 101–110, 2008.
[20] G. Pfurtscheller, R. Leeb, C. Keinrath, D. Friedman, C. Neuper, C. Guger, and M. Slater, "Walking from thought," Cognitive Brain Research, vol. 1, pp. 145–152, 2006.
[21] G. Pires, M. Castelo-Branco, and U. Nunes, "Visual P300-based BCI to steer a wheelchair: A Bayesian approach," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '08), art. no. 4649238, 2008, pp. 658–661.
[22] R. Ron-Angevin, A. Díaz, and F. Velasco, "A two-class brain computer interface to freely navigate through virtual worlds (Ein Zwei-Klassen-Brain-Computer-Interface zur freien Navigation durch virtuelle Welten)," Biomedizinische Technik, vol. 54, no. 3, pp. 126–133, 2009.
[23] G. Schalk, J. Wolpaw, D. McFarland, and G. Pfurtscheller, "EEG-based communication: presence of an error potential," Clinical Neurophysiology, vol. 111, no. 12, pp. 2138–2144, 2000.
[24] E. Sutter, "The brain response interface: communication through visually-induced electrical brain responses," Journal of Microcomputer Applications, 1992.
[25] L. Trejo, R. Rosipal, and B. Matthews, "Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 2, pp. 225–229, 2006. [Online]. Available: http://dx.doi.org/10.1109/TNSRE.2006.875578
[26] D. Valbuena, M. Cyriacks, O. Friman, I. Volosyak, and A. Gräser, "Brain-computer interface for high-level control of rehabilitation robotic systems," in IEEE 10th International Conference on Rehabilitation Robotics (ICORR '07), art. no. 4428489, 2007, pp. 619–625.
[27] J. Wolpaw and D. McFarland, "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans," Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 51, pp. 17849–17854, 2004. [Online]. Available: http://dx.doi.org/10.1073/pnas.0403504101
