
Brain-Computer Interface

Seminar’09

1.INTRODUCTION
What is a Brain-Computer Interface?

A brain-computer interface uses electrophysiological signals to control remote devices. Most current BCIs are not invasive. They consist of electrodes applied to the scalp of an individual or worn in an electrode cap such as the one shown in Figure 1-1 (left). These electrodes pick up the brain's electrical activity (at the microvolt level) and carry it into amplifiers such as the ones shown in Figure 1-1 (right). These amplifiers amplify the signal approximately ten thousand times and then pass it through an analog-to-digital converter to a computer for processing. The computer processes the EEG signal and uses it to accomplish tasks such as communication and environmental control. BCIs are slow in comparison with normal human actions because of the complexity and noisiness of the signals used, as well as the time necessary to complete recognition and signal processing.

Figure 1: An example of how the electrodes are placed


The phrase brain-computer interface (BCI), taken literally, means to interface an individual's electrophysiological signals with a computer. A true BCI uses only signals from the brain and as such must treat eye and muscle movements as artifacts or noise. On the other hand, a system that uses eye, muscle, or other body potentials mixed with EEG signals is a brain-body actuated system.

Figure 2: Scheme of an EEG-based brain-computer interface with on-line feedback. The EEG is recorded from the head surface, and signal processing techniques are used to extract features. These features are classified, and the output is displayed on a computer screen. This feedback should help the subject to control his or her EEG patterns.

The BCI system uses oscillatory electroencephalogram (EEG) signals, recorded during specific mental activity, as input and provides a control option through its output. The obtained output signals are presently evaluated for different purposes, such as cursor control, selection of letters or words, or control of a prosthesis. People who are paralyzed or have other severe movement disorders need alternative methods for communication and control. Currently available augmentative communication methods require some muscle control: they use one muscle group to supply the function normally provided by another (e.g., extraocular muscles to drive a speech synthesizer). Thus, they may not be useful for those who are totally paralyzed (e.g., by amyotrophic lateral sclerosis (ALS) or brainstem stroke) or have other severe motor disabilities. These individuals need an alternative communication channel that does not depend on muscle control. The current and most important application of a BCI is the restoration of a communication channel for patients with locked-in syndrome.

2. BRAIN-COMPUTER INTERFACE ARCHITECTURE


Figure 3: Architecture of a BCI

The processing unit is subdivided into a preprocessing unit, responsible for artefact detection, and a feature extraction and recognition unit that identifies the command sent by the user to the BCI. The output subsystem generates an action associated with this command. This action constitutes feedback to the user, who can modulate her mental activity so as to produce those EEG patterns that make the BCI accomplish her intents.
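The stages above can be read as a simple processing loop. The sketch below is a minimal, hypothetical illustration of that loop in Python; the function names, thresholds, and classifier parameters are placeholders for illustration, not part of any specific BCI described in this report.

```python
import numpy as np

def preprocess(raw_eeg, artifact_threshold_uv=100.0):
    """Reject epochs whose peak amplitude suggests an artifact (hypothetical rule)."""
    if np.max(np.abs(raw_eeg)) > artifact_threshold_uv:
        return None  # epoch discarded, no command issued
    return raw_eeg - raw_eeg.mean(axis=-1, keepdims=True)  # remove per-channel DC offset

def extract_features(eeg_epoch):
    """Toy feature: per-channel variance (stands in for band power, AAR parameters, etc.)."""
    return eeg_epoch.var(axis=-1)

def classify(features, weights, bias):
    """Linear decision rule mapping the feature vector to a binary command."""
    return "LEFT" if features @ weights + bias < 0 else "RIGHT"

def bci_step(raw_eeg, weights, bias):
    """One pass through the loop: preprocess -> extract features -> classify -> command."""
    epoch = preprocess(raw_eeg)
    if epoch is None:
        return None
    return classify(extract_features(epoch), weights, bias)

# Hypothetical 4-channel, 1-second epoch and arbitrary classifier parameters.
rng = np.random.default_rng(0)
command = bci_step(rng.normal(scale=10.0, size=(4, 250)), weights=np.ones(4), bias=-400.0)
print(command)  # the resulting command drives the output device and acts as feedback
```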


2.1. THE PARTS OF A BCI


2.1.1. SIGNAL ACQUISITION

In BCIs, the input is EEG recorded from the scalp or the surface of the brain, or neuronal activity recorded within the brain. Electrophysiological BCIs can be categorized by whether they use non-invasive (e.g. EEG) or invasive (e.g. intracortical) methodology. They can also be categorized by whether they use evoked or spontaneous inputs. Evoked inputs (e.g. EEG produced by flashing letters) result from stereotyped sensory stimulation provided by the BCI. Spontaneous inputs (e.g. EEG rhythms over sensorimotor cortex) do not depend on such stimulation for their generation. There is, presumably, no reason why a BCI could not combine non-invasive and invasive methods, or evoked and spontaneous inputs. In the signal-acquisition part of BCI operation, the chosen input is acquired by the recording electrodes, amplified, and digitized.

Most current BCIs use electrophysiological signal features that represent brain events that are reasonably well defined anatomically and physiologically. These include rhythms reflecting oscillations in particular neuronal circuits (e.g. mu or beta rhythms from sensorimotor cortex), potentials evoked from particular brain regions by particular stimuli (e.g. VEPs or P300s), or action potentials produced by particular cortical neurons. A few groups are exploring signal features, such as autoregressive parameters, that bear complex and uncertain relationships to underlying brain events. The special characteristics and capacities of each signal feature will largely determine the extent and nature of its usefulness.

SCPs are, as their name suggests, slow. They develop over 300 ms to several seconds. Thus, if an SCP-based BCI is to exceed a rate of one selection every 1-2 s, users will need to produce more than two SCP levels at one location, and/or control SCPs at several locations independently. Initial studies suggest that such control may be possible. While mu and beta rhythms have characteristic frequencies of 8-12 and 18-26 Hz, respectively, a change in mu or beta rhythm amplitude appears to have a latency of about 0.5 s. On the other hand, users are certainly able to provide more than two amplitude levels, and can achieve independent control of different rhythms.


Projecting from results to date, a mu/beta rhythm BCI might select among 4 or more choices every 2-3 s. While the possibility of distinguishing more than two amplitude ranges from VEPs or P300 potentials has not been explored, these potentials can be evoked in partially overlapping series of trials, so that the selection rate can be increased. Alternatively or in addition, the selection rate might be increased if users could learn to control shorter-latency evoked potentials. The firing rates of individual cortical neurons, if they prove to be independently controllable in the absence of the concurrent motor outputs and sensory inputs that normally accompany and reflect their activity, might support quite high information transfer rates.
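As a rough illustration of what such selection rates mean in bits per minute, the sketch below uses the information-transfer-rate formula commonly quoted in the BCI literature; the formula and the 90% accuracy value are assumptions added for illustration, not figures taken from this report.

```python
import math

def bits_per_selection(n_choices, accuracy):
    """Information per selection for N equally likely choices at a given accuracy."""
    p, n = accuracy, n_choices
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

# Hypothetical example: 4 choices every 2.5 s at 90% accuracy.
b = bits_per_selection(n_choices=4, accuracy=0.90)
print(f"{b:.2f} bits/selection, {b * 60 / 2.5:.1f} bits/min")
```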

The key determinant of a signal feature's value is its correlation with the user's intent, that is, the level of voluntary control the user achieves over it. Users are likely to differ in the signal features they can best control. In three users nearly locked in by ALS, researchers found that one used a positive SCP, another a relatively fast negative-positive SCP shift, and a third a P300. Once developed, these strategies were extremely resistant to change. Particularly early in training, BCI systems should be able to identify, accommodate, and encourage the signal features best suited to each user. User training may be the most important and least understood factor affecting the BCI capabilities of different signal features.

Up to now, researchers have usually assumed that basic learning principles apply. However, BCI signal features are not normal or natural brain output channels. They are artificial output channels created by the BCI system. It is not yet clear to what extent these new artificial outputs will observe known conditioning principles. For example, mu rhythms and other features generated in sensorimotor cortex, which is directly involved in motor output, may prove more useful than alpha rhythms generated in visual or auditory cortex, which is strongly influenced by sensory input.

The success of neuronally based BCI methods will presumably also vary from area to area. Initial efforts have focused on neurons in motor cortex. While this focus is logical, other cortical areas and even subcortical areas warrant exploration. For example, in a user paralyzed by a peripheral nerve or muscle disorder, the activity of spinal cord motoneurons controlling specific muscles, detected by implanted electrodes, might prove most useful for communication and control.


2.1.2 FEATURE EXTRACTION

The performance of a BCI, like that of any other communication system, depends on its signal-to-noise ratio. The goal is to recognize and execute the user's intent, and the signals are those aspects of the recorded electrophysiological activity that correlate with and thereby reveal that intent. The user's task is to maximize this correlation; the system's first task is to measure the signal features accurately, i.e. to maximize the signal-to-noise ratio.

When the features are mu rhythms from sensorimotor cortex, noise includes visual alpha rhythms; and when the features are the firing rates of specific neurons, noise includes the activity of other neurons. Of particular importance for EEG-based BCIs is the detection and/or elimination of non-CNS activity, such as EMG from cranial or facial muscles and EOG. Feature extraction methods can greatly affect the signal-to-noise ratio. Good methods enhance the signal and reduce both CNS and non-CNS noise. This is most important and most difficult when the noise is similar to the signal. For example, EOG is of more concern than EMG for a BCI that uses SCPs as the signal feature, because EOG and SCPs have overlapping frequency ranges; and for the same reason EMG is of more concern than EOG for BCIs that use beta rhythms.

A variety of options for improving BCI signal-to-noise ratios are under study. These include spatial and temporal filtering techniques, signal averaging, and single-trial recognition methods. Much work up to now has focused on showing by offline data analyses that a given method will work. Careful comparisons of alternative methods are also essential. A statistical measure useful in such comparisons is r², the proportion of the total variance in the signal feature that is accounted for by the user's intent. Alternative feature extraction methods can be compared in terms of r². At the same time, of course, it is essential to ensure that a high r² is not being achieved by non-CNS activity such as EMG. Finally, any method must ultimately be shown to be useful for actual online operation.
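A minimal sketch of this comparison metric, assuming a two-target experiment in which each trial's intent is known: r² is computed here as the squared Pearson correlation between the intended target and the extracted feature value. The array contents are invented for illustration.

```python
import numpy as np

def r_squared(intent, feature):
    """Proportion of feature variance accounted for by the (known) intent labels."""
    r = np.corrcoef(intent, feature)[0, 1]
    return r ** 2

# Hypothetical data: intent is 0 (down target) or 1 (up target),
# feature is e.g. the mu-rhythm amplitude on each trial.
intent = np.array([0, 0, 0, 1, 1, 1, 0, 1])
feature = np.array([4.1, 3.8, 4.5, 7.2, 6.9, 7.5, 4.0, 6.6])
print(f"r^2 = {r_squared(intent, feature):.2f}")
```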

Spatial filters derive signal features by combining data from two or more locations so as to focus on activity with a particular spatial distribution. The simplest spatial filter is the bipolar derivation, which derives the first spatial derivative and thereby enhances differences in the voltage gradient in one direction.


The Laplacian derivation is the second derivative of the instantaneous spatial voltage distribution, and thereby emphasizes activity in radial sources immediately below the recording location. It can be computed by combining the voltage at the location with the voltages of surrounding electrodes. As the distance to the surrounding electrodes decreases, the Laplacian becomes more sensitive to voltage sources with higher spatial frequencies (i.e. more localized sources) and less sensitive to those with lower spatial frequencies (i.e. more broadly distributed sources).

The choice of a spatial filter can markedly affect the signal-to-noise ratio of a BCI that uses mu and beta rhythms. On the other hand, a spatial filter best suited for mu and beta rhythms, which are relatively localized, would probably not be the best choice for measurement of SCPs or P300s, which are more broadly distributed over the scalp. Laplacian and common average reference spatial filters apply a fixed set of weights to a linear combination of channels (i.e. electrode locations). Both use weights that sum to zero, so that the result is a difference and the spatial filter has high-pass characteristics. Other spatial filters are available.
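The sketch below illustrates the fixed-weight idea for two such filters: a common average reference and a small Laplacian built from four neighbouring electrodes. The channel layout and neighbour choices are invented for the example; in both cases the weights sum to zero, as described above.

```python
import numpy as np

def common_average_reference(eeg):
    """Subtract the mean of all channels from each channel (weights sum to zero)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def small_laplacian(eeg, center, neighbours):
    """Center channel minus the average of its nearest neighbours."""
    return eeg[center] - eeg[neighbours].mean(axis=0)

# Hypothetical recording: 6 channels x 1000 samples.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(6, 1000))
car = common_average_reference(eeg)
c3_laplacian = small_laplacian(eeg, center=2, neighbours=[0, 1, 3, 4])
```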

Principal components, independent components, and common spatial patterns analyses are alternative methods for deriving the weights for a linear combination of channels. In these methods, the weights are determined by the data. Principal components analysis, which produces orthogonal components, may not be appropriate for separation of signal features from overlapping sources. Independent components analysis can, in principle, distinguish between mu rhythms from such sources. These methods have yet to be compared to simpler spatial filters like the Laplacian, in which the channel weights are data-independent. Appropriate temporal filtering can also enhance signal-to-noise ratios.


Oscillatory signals like the mu rhythm can be measured by the integrated output of a band-pass filter or by the amplitude in specific spectral bands of Fourier or autoregressive analysis. Because BCIs must provide relatively rapid user feedback, and because signals may change rapidly, frequency analysis methods (e.g. band-pass filters or autoregressive methods) that need only relatively short time segments may be superior to methods like Fourier analysis that need longer segments.
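A minimal sketch of the band-pass-and-integrate approach, assuming SciPy is available: the mu-band limits (8-12 Hz), sampling rate, and window length are example values, not parameters taken from a specific system in this report.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(signal, fs, low_hz, high_hz, order=4):
    """Band-pass filter one EEG channel, then average the squared samples."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

# Hypothetical 1-second epoch sampled at 256 Hz with a 10 Hz (mu-band) component.
fs = 256
t = np.arange(fs) / fs
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=fs)
print(f"mu-band power: {band_power(epoch, fs, 8, 12):.3f}")
```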


The choice of temporal filtering method, particularly for research studies, should also consider the need to detect non-CNS artifacts. A single band-pass filter cannot identify a broadband artifact like EMG; a representative set of such filters is needed. Similarly, when autoregressive parameters are used as signal features, additional spectral-band analyses are needed to detect artifacts like EMG.

For SCP recording, the focus on extremely low-frequency activity requires attention to eye movements and other low-frequency artifacts, like those due to amplifier drift or changes in the skin. The signal-to-noise ratios of evoked time-domain signals like the P300 can be enhanced by averaging. The accompanying loss in communication rate may be minimized by overlapping the trials. A variety of methods have been proposed for detecting signals in single trials. These methods have yet to be extensively applied in BCI research, so their potential usefulness is unclear. Invasive methods using epidural, subdural, or intracortical electrodes might give better signal-to-noise ratios than non-invasive methods using scalp electrodes. At the same time, the threshold for their use will presumably be higher. They will be used only when they can provide communication clearly superior to that provided by non-invasive methods, or when they are needed to avoid artifacts or problems that can impede non-invasive methods (e.g. uncontrollable head and neck EMG in a user with cerebral palsy).

In short, the digitized signals are subjected to one or more of a variety of feature extraction procedures, such as spatial filtering, voltage amplitude measurements, spectral analysis, or single-neuron separation. This analysis extracts the signal features that (hopefully) encode the user's message or commands. BCIs can use signal features that are in the time domain (e.g. evoked potential amplitudes or neuronal firing rates) or the frequency domain (e.g. mu or beta rhythm amplitudes). A BCI could also use both time-domain and frequency-domain signal features, and might thereby improve performance. In general, the signal features used in present-day BCIs reflect identifiable brain events, like the firing of a specific cortical neuron or the synchronized and rhythmic synaptic activation in sensorimotor cortex that produces a mu rhythm.


Spectral analysis is used to identify the frequency components having a favorable response to the user's intention. Oscillatory signals like the mu rhythm can be measured by the integrated output of a band-pass filter or by the amplitude in specific spectral bands of Fourier or autoregressive analysis. The signal-to-noise ratios of evoked time-domain signals like the P300 can be enhanced by averaging.
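A minimal sketch of such averaging, assuming the epochs have already been time-locked to the stimulus: averaging N trials reduces uncorrelated background EEG roughly by a factor of the square root of N while preserving the stimulus-locked component. The waveform shapes and trial count are invented.

```python
import numpy as np

def average_evoked_response(epochs):
    """Average stimulus-locked epochs (trials x samples) to enhance the evoked component."""
    return np.asarray(epochs).mean(axis=0)

# Hypothetical data: a P300-like bump buried in background EEG, 40 trials of 800 ms.
rng = np.random.default_rng(2)
t = np.linspace(0, 0.8, 200)
p300_template = np.exp(-((t - 0.3) ** 2) / 0.005)        # peak near 300 ms
epochs = p300_template + rng.normal(scale=2.0, size=(40, t.size))
averaged = average_evoked_response(epochs)
# Background noise shrinks roughly as 1/sqrt(N); the time-locked P300 survives the averaging.
print(f"noise std: single trial ~2.0, average ~{2.0 / np.sqrt(40):.2f}")
```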

2.1.3 TRANSLATION ALGORITHM

BCI translation algorithms convert independent variables, that is, signal features such as rhythm amplitudes or neuronal firing rates, into dependent variables (i.e. device control commands). Commands may be continuous (e.g. vertical cursor movement) or discrete (e.g. letter selection). They should be as independent of each other (i.e. orthogonal) as possible, so that, for example, vertical cursor movement and horizontal cursor movement do not depend on each other.

The success of a translation algorithm is determined by the appropriateness of its selection of signal features, by how well it encourages and facilitates the user's control of these features, and by how effectively it translates this control into device commands. If the user has no control (i.e. if the user's intent is not correlated with the signal features), the algorithm can do nothing, and the BCI will not work. If the user has some control, the algorithm can do a good or bad job of translating that control into device control.

Initial selection of signal features for the translation algorithm can be based on standard guidelines (e.g. the known locations and temporal and spatial frequencies of mu and beta rhythms), supplemented by operator inspection of initial topographical and spectral data from each user. These methods may be supplemented or even wholly replaced by automated procedures. For example, Pregenzer used the learning vector quantizer (LVQ) to select optimal electrode positions and frequency bands for each user.

Extant BCIs use a variety of translation algorithms, ranging from linear equations, to discriminant analysis, to neural networks. In the simplest case, in which only a single signal feature is used, the output of the translation algorithm can be a simple linear function of the feature value (e.g. a linear function of mu-rhythm amplitude). The algorithm needs to use appropriate values for the intercept and the slope of this function.


If the command is vertical cursor movement, the intercept should ensure that upward and downward movements are equally possible. It has been found that the mean value of the signal feature over some interval of immediately preceding performance provides a good estimate of the proper intercept. The slope determines the scale of the command (e.g. the speed of cursor movement).
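A minimal sketch of such a one-feature linear translation rule, assuming vertical cursor control: the intercept is the running mean of the feature over recent performance and the slope is a fixed gain. The buffer length and gain value are arbitrary example choices.

```python
from collections import deque

class LinearCursorTranslator:
    """Vertical cursor step = gain * (feature - running mean of recent feature values)."""

    def __init__(self, gain=1.0, history=30):
        self.gain = gain
        self.recent = deque(maxlen=history)  # features from immediately preceding performance

    def step(self, feature):
        intercept = sum(self.recent) / len(self.recent) if self.recent else feature
        self.recent.append(feature)
        return self.gain * (feature - intercept)  # positive -> move up, negative -> move down

translator = LinearCursorTranslator(gain=0.5)
for amplitude in [5.0, 5.2, 4.8, 7.1, 3.9]:   # hypothetical mu-rhythm amplitudes
    print(f"cursor step: {translator.step(amplitude):+.2f}")
```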

When a single feature is used to select among more than two choices, the slope also affects the relative accessibility of the choices. A wide variety of more complex translation algorithms is possible. These include supervised learning approaches such as linear discriminant analysis and non-linear discriminant analysis.

The evaluation of a translation algorithm reduces to determining how well it accomplishes three levels of adaptation: initial adaptation to the individual user; continuing adaptation to spontaneous changes in the user's performance (e.g. fatigue, level of attention); and continuing adaptation that encourages and guides the user's adaptation to the BCI (i.e. user training).

Up to the present, most evaluations have concentrated on the first and simplest level of adaptation. In these evaluations, alternative algorithms are applied offline to a body of data gathered from one or more users. Typically, portions of the data are used to determine the parameters of the algorithm, which is then applied to the rest of the data (i.e. the test data). The algorithm is rated according to the accuracy with which it derives the user's intent from the test data. While such evaluations are convenient and certainly valuable in making gross distinctions between algorithms, they do not take into account spontaneous changes in the signal features, nor can they assess user adaptation to the algorithm.


The second level of adaptation, continual adjustment for spontaneous changes in signal features, can be addressed by offline analysis that mimics the online situation, that is, adaptation based on earlier data and applied to later data. This analysis needs substantial bodies of data gathered over substantial periods of time, so that all major kinds of spontaneous variation can be assessed. The need for this second level of adaptation tends to favor simpler algorithms. Parameter adaptation is likely to be more difficult and more vulnerable to instabilities for complex algorithms, like those using neural networks or non-linear equations, than it is for simple algorithms, like those using linear equations with relatively few variables.

The third level of adaptation, adaptation to the user's adaptation to the BCI system, is not accessible to offline evaluation. Because this level responds to and affects the continual interactions between the user and the BCI, it can only be assessed online. The goal of this adaptation is to induce the user to develop and maintain the highest possible level of correlation between his or her intent and the signal features that the BCI employs to decipher that intent. The algorithm can presumably accomplish these aims by rewarding better performance, for example by moving the cursor or selecting the letter more quickly when the signal feature has a stronger correlation with intent. At the same time, such efforts at shaping user performance risk making the task too difficult.

As with the acquisition of conventional skills, frustration or fatigue can degrade performance. Particularly in the first stages of training, the user is easily overwhelmed by the difficulty of the task. User success may correlate with self-perception of brain states, and may be promoted by procedures that increase this perception.

Because the translation algorithm's adaptations are likely to shape the user's adaptations, and because users are likely to differ from one another, the selection of methods for this third level of adaptation inevitably requires prolonged online studies in large numbers of representative users. This level of adaptation might also help address the problem of artifacts, such as EMG or EOG for scalp EEG, or extraneous neuronal activity for neuronal recording. It may be possible to induce the user to reduce or eliminate such artifacts by making them impediments to performance.

Thus, a specific measure of EMG activity, like amplitude in a high-frequency band at a suitable location, could be monitored and, by exceeding a criterion value, could halt BCI operation. The mutual adaptation of user and BCI is likely to be important even for BCIs that use signal features (e.g. P300 evoked potentials, or mu- or beta-rhythm amplitude changes accompanying specific motor imagery) that are already present in users at the very beginning of training. Once these features are used for communication and control, they can be expected to change. Like the activity responsible for the brain's neuromuscular outputs, these electrophysiological phenomena are likely to be continuously adjusted on the basis of feedback.


The process of mutual adaptation of the user to the system and the system to the user is likely to be a fundamental feature of the operation of any BCI system. Thus, the value of starting from signal features that are already correlated with specific intents in the naive user (e.g. P300) is an empirical issue. That is, does BCI training that begins with such features ultimately lead to faster and more accurate communication and control than training that begins with other features? These adaptations by the translation algorithm may be more difficult in actual BCI applications than in the laboratory. In the usual laboratory situation, user intent is defined by the research laboratory. In real life, the user decides what to select, so the translation algorithm does not have this knowledge and adaptation is therefore more difficult. Possible solutions are to configure applications so as to ensure fairly predictable sets of probable intents, to incorporate calibration routines that consist of series of trials with defined intents, and/or to include methods for error correction (e.g. a backspace key) that permit the translation algorithm to assume that all or most final selections are correct. Unsupervised learning approaches, like cluster or principal components analysis, which can be trained without knowledge of correct results, might also be effective.

2.1.4. FEEDBACK

For most current BCIs the output device is a computer screen, and the output is the selection of targets, letters, or icons presented on it. Selection is indicated in various ways (e.g. the letter flashes). Some BCIs also provide additional, interim output, such as cursor movement toward the item prior to its selection. In addition to being the intended product of BCI operation, this output is the feedback that the brain uses to maintain and improve the accuracy and speed of communication. Initial studies are also exploring BCI control of a neuroprosthesis that provides hand closure to people with cervical spinal cord injuries; in this prospective BCI application, the output device is the user's own hand.


3. APPLICATIONS OF BRAIN-COMPUTER INTERFACE

A Brain-Computer Interface (BCI) is a system that acquires and analyzes neural signals with the goal of creating a communication channel directly between the brain and the computer. Such a channel potentially has multiple uses. The current and most important application of a BCI is the restoration of a communication channel for patients with locked-in syndrome.
1) Patients with conditions causing severe communication disorders:

– Advanced Amyotrophic Lateral Sclerosis (ALS)

– Autism

– Cerebral Palsy

– Head Trauma

– Spinal Injury

The output signals are evaluated for different purposes, such as cursor control and selection of letters or words.
2) Military Uses:
The Air Force is interested in using brain-body actuated control to make faster responses possible for fighter pilots. While brain-body actuated control is not a true BCI, it may still provide motivation for why a BCI could prove useful in the future. A combination of EEG signals and artifacts (eye movement, body movement, etc.) is used to create a signal that can fly a virtual plane.
3) Bioengineering Applications:

– Assistive devices for the disabled

– Control of prosthetic aids

4) Control of a brain-operated wheelchair.
5) Multimedia & Virtual Reality Applications:

– Virtual keyboards

– Manipulating devices such as a television set, radio, etc.

– Ability to control video games and to have video games react to actual EEG signals.

4. PRINCIPLES OF ELECTROENCEPHALOGRAPHY
4.1 The Nature of the EEG signals.
The electrical nature of the human nervous system has been recognized for more than a century. It is well known that the variation of the surface potential distribution on the scalp reflects functional activities emerging from the underlying brain. This surface potential variation can be recorded by affixing an array of electrodes to the scalp and measuring the voltage between pairs of these electrodes; the signals are then filtered, amplified, and recorded. The resulting data are called the EEG. Configurations of electrodes usually follow the International 10-20 System of placement, which is based on the relationship between the location of an electrode and the underlying area of cerebral cortex (the "10" and "20" refer to 10% or 20% interelectrode distances).


Figure 4: Electrode placement on the scalp

Figure 5: A detailed view of electrode placement on the scalp


The extended 10-20 system for electrode placement: even numbers indicate electrodes located on the right side of the head, while odd numbers indicate electrodes on the left side. The letter before the number indicates the general area of the cortex the electrode is located above: A stands for auricular, C for central, Fp for prefrontal, F for frontal, P for parietal, O for occipital, and T for temporal. In addition, electrodes for recording vertical and horizontal electrooculographic (EOG) movements are also placed. Vertical EOG electrodes are placed above and below an eye, and horizontal EOG electrodes are placed on the side of both eyes away from the nose.
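As a small illustration of this naming convention, the sketch below parses a 10-20 electrode label into its cortical region and hemisphere. The label list follows the description above; midline labels ending in "z" are a standard part of the system not discussed in the text, so they are handled here as an assumption.

```python
REGIONS = {"A": "auricular", "C": "central", "Fp": "prefrontal",
           "F": "frontal", "P": "parietal", "O": "occipital", "T": "temporal"}

def describe_electrode(label):
    """Split a 10-20 label like 'C3' into cortical region and hemisphere."""
    prefix = "Fp" if label.startswith("Fp") else label[0]
    suffix = label[len(prefix):]
    if suffix.lower() == "z":                      # assumption: midline electrodes end in 'z'
        side = "midline"
    else:
        side = "left" if int(suffix) % 2 == 1 else "right"
    return f"{label}: {REGIONS[prefix]} cortex, {side}"

for name in ["C3", "C4", "Fp1", "O2", "Cz"]:
    print(describe_electrode(name))
```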

Nowadays, modern techniques for EEG acquisition collect these underlying electrical patterns from the scalp and digitize them for computer storage. Electrodes conduct voltage potentials as microvolt-level signals and carry them into amplifiers that magnify the signals approximately ten thousand times. The use of this technology depends strongly on electrode positioning and electrode contact. For this reason, electrodes are usually constructed from conductive materials, such as gold or silver chloride, with an approximate diameter of 1 cm, and subjects must also use a conductive gel on the scalp to maintain an acceptable signal-to-noise ratio.
4.2 EEG wave groups.
The analysis of continuous EEG signals or brain waves is complex, due to the large amount of information received from every electrode. As a science in itself, it comes with its own set of perplexing nomenclature. Different waves, like so many radio stations, are categorized by the frequency of their emanations and, in some cases, by the shape of their waveforms. Although none of these waves is ever emitted alone, the state of consciousness of the individual may make one frequency range more pronounced than others. The following types are particularly important:

• BETA. The rate of change lies between 13 and 30 Hz, and the amplitude is usually low, between 5 and 30 µV. Beta is the brain wave usually associated with active thinking, active attention, focus on the outside world, or solving concrete problems. It can reach frequencies near 50 Hz during intense mental activity.

• ALPHA. The rate of change lies between 8 and 13 Hz, with 30-50 µV amplitude. Alpha waves have been thought to indicate both a relaxed awareness and also inattention. They are strongest over the occipital (back of the head) cortex and also over frontal cortex. Alpha is the most prominent wave in the whole realm of brain activity and possibly covers a greater range than previously thought. It is frequent to see a peak in the beta range as high as 20 Hz which has the characteristics of an alpha state rather than a beta, and the setting in which such a response appears also leads to the same conclusion. Alpha alone seems to indicate an empty mind rather than a relaxed one, a mindless state rather than a passive one, and can be reduced or eliminated by opening the eyes, by hearing unfamiliar sounds, or by anxiety or mental concentration.

• THETA. Theta waves lie within the range of 4 to 7 Hz, with an amplitude usually greater than 20 µV. Theta arises from emotional stress, especially frustration or disappointment. Theta has also been associated with access to unconscious material, creative inspiration, and deep meditation. The large dominant peak of the theta waves is around 7 Hz.

• DELTA. Delta waves lie within the range of 0.5 to 4 Hz, with variable amplitude. Delta waves are primarily associated with deep sleep and, in the waking state, were thought to indicate physical defects in the brain. It is very easy to confuse artifact signals caused by the large muscles of the neck and jaw with genuine delta responses. This is because these muscles are near the surface of the skin and produce large signals, whereas the signal of interest originates deep in the brain and is severely attenuated in passing through the skull. Nevertheless, with an instant-analysis EEG, it is easy to see when the response is caused by excessive movement.

• GAMMA. Gamma waves lie within the range of 35 Hz and up. It is
thought that this band reflects the mechanism of consciousness -
the binding together of distinct modular brain functions into
coherent percepts capable of behaving in a re-entrant fashion
(feeding back on themselves over time to create a sense of stream-
of-consciousness).

• MU. Mu is an 8-12 Hz spontaneous EEG wave associated with motor activities and maximally recorded over motor cortex. Mu waves diminish with movement or the intention to move. The mu wave lies in the same frequency band as the alpha wave, but the latter is recorded over occipital cortex.
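The sketch below collects the frequency ranges listed above into a small lookup table and classifies a dominant frequency accordingly. The band edges are only the approximate values quoted in this section, and mu overlaps alpha in frequency, being distinguished by scalp location rather than by frequency alone.

```python
# Approximate EEG band edges in Hz, as quoted in this section.
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 7),
    "alpha": (8, 13),    # mu shares this range but is recorded over motor cortex
    "beta": (13, 30),
    "gamma": (35, 100),  # upper edge is an arbitrary cap for this example
}

def dominant_band(frequency_hz):
    """Return the name of the band containing the given dominant frequency, if any."""
    for name, (low, high) in BANDS.items():
        if low <= frequency_hz < high:
            return name
    return "unclassified"

print(dominant_band(10))   # alpha (or mu, depending on electrode location)
print(dominant_band(22))   # beta
```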

Most attempts to control a computer with continuous EEG measurements work by monitoring alpha or mu waves, because people can learn to change the amplitude of these two waves by making the appropriate mental effort. A person might accomplish this, for instance, by recalling some strongly stimulating image or by raising his or her level of attention.


5. NEUROPSYCHOLOGICAL SIGNALS USED IN BCI APPLICATIONS
5.1 Generation of Neuropsychological Signals
Interfaces based on brain signals require on-line detection of mental states from spontaneous activity: different cortical areas are activated while thinking about different things (e.g. a mathematical computation, an imagined arm movement, a musical composition). The information about these "mental states" can be recorded with different methods. Neuropsychological signals can be generated by one or more of the following three:
• implanted methods
• evoked potentials (also known as event-related potentials)
• operant conditioning
Both evoked potential and operant conditioning methods are normally externally based BCIs, as the electrodes are located on the scalp. The table describes the different signals in common use; it may be noted that some of the described signals fit into multiple categories.
Implanted methods use signals from single neurons or small groups of neurons in order to control a BCI. In most cases, the most suitable option for placing the electrodes is the motor cortex region, because of its direct relevance to motor tasks, its relative accessibility compared to motor areas deeper in the brain, and the relative ease of recording from its large pyramidal cells. These methods have the benefit of a much higher signal-to-noise ratio at the cost of being invasive. They require no remaining motor control and may provide either discrete or continuous control.
Evoked potentials (EPs) are brain potentials that are evoked by the occurrence of a sensory stimulus. They are usually obtained by averaging a number of brief EEG segments time-registered to a stimulus in a simple task. In a BCI, EPs may provide control when the BCI application produces the appropriate stimuli. This paradigm has the benefit of requiring little to no training to use the BCI, at the cost of having to make users wait for the relevant stimulus presentation. EPs offer discrete control for almost all users.


Exogenous components, or those components influenced primarily by physical stimulus properties, generally take place within the first 200 milliseconds after stimulus onset. These components include a negative waveform around 100 ms (N1) and a positive waveform around 200 ms after stimulus onset (P2). Visual evoked potentials (VEPs) fall into this category. A VEP-based BCI uses short visual stimuli in order to determine which command an individual is looking at and therefore wants to pick. Using VEPs has the benefit of a quicker response than longer-latency components. The VEP requires the subject to have good visual control in order to look at the appropriate stimulus, and allows for discrete control.

One commonly studied ERP in BCI is a component called the P300. It is a positive peak in the potential that reaches a maximum about 300 ms after the stimulus is presented. The P3 has been shown to be fairly stable in locked-in patients, reappearing even after severe brain injuries.


Figure 6: (Solid line) The general form of the P3 component of the evoked potential (EP). The P3 is a cognitive EP that appears approximately 300 ms after a task-relevant stimulus. (Dotted line) The general form of a non-task-related response.

Operant conditioning is a method for modifying behavior (an operant), which utilizes contingencies between a discriminative stimulus, an operant response, and a reinforcer to change the probability of a response occurring again in a given situation. In the BCI framework, it is used to train patients to control their EEG. Several methods use operant conditioning on spontaneous EEG signals for BCI control. The main feature of this kind of signal is that it enables continuous rather than discrete control. This feature may also be a drawback: continuous control is fatiguing for subjects, and fatigue may cause changes in performance, since control is learned.


5.2 Common Neuropsychological Signals Used in BCIs

6. EEG SIGNAL PRE-PROCESSING


One of the main problems in the automated EEG analysis is the
detection of the different kinds of interference waveforms (artifacts)
added to the EEG signal during the recording sessions. These interference
waveforms, the artifacts, are any recorded electrical potentials not
originated in brain. There are four main sources of artifacts emission:
1. EEG equipment.
2. Electrical interference external to the subject and recording
system.
3. The leads and the electrodes.
4. The subject her/himself: normal electrical activity from the heart,
eye blinking, eyes movement, and muscles in general.
In case of visual inspections, the artifacts can be quite easily detected
by EEG experts. However, during the automated analysis these signal
patterns often cause serious misclassifications thus reducing the clinical
usability of the automated analyzing systems. Recognition and elimination
of the artifacts in real – time EEG recordings is a complex task, but
essential to the development of practical systems.
6.1 Classical Methods for Removing Eye-Blink Artifacts
• Rejection methods consist of discarding contaminated EEG, based on either automatic or visual detection. Their success crucially depends on the quality of the detection, and their use also depends on the specific application. Thus, although for epileptic applications rejection can lead to an unacceptable loss of data, for others, like a brain-computer interface, it can be adequate.
• Subtraction methods are based on the assumption that the measured EEG is a linear combination of an original EEG and a signal caused by eye movement, called the EOG (electrooculogram). The EOG is a potential produced by movement of the eye or eyelid. The original EEG is hence recovered by subtracting the separately recorded EOG from the measured EEG, using appropriate weights (rejecting the influence of the EOG on particular EEG channels).
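The sketch below shows one common way to obtain such weights, a least-squares regression of each EEG channel onto the recorded EOG channel; the regression approach is a standard choice but is not prescribed by this report, and the data shapes and propagation weights are invented.

```python
import numpy as np

def remove_eog(eeg, eog):
    """Subtract the EOG contribution from each EEG channel via least-squares weights.

    eeg: (channels, samples) measured EEG; eog: (samples,) separately recorded EOG.
    """
    weights = eeg @ eog / (eog @ eog)      # one propagation weight per EEG channel
    return eeg - np.outer(weights, eog)

# Hypothetical contaminated recording: 4 channels, 500 samples.
rng = np.random.default_rng(3)
clean = rng.normal(size=(4, 500))
eog = rng.normal(size=500) * 30             # large eye-movement potential
measured = clean + np.outer([0.4, 0.3, 0.1, 0.05], eog)
corrected = remove_eog(measured, eog)
```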


6.2 EEG Feature Extraction

For the analysis of oscillatory EEG components, the following preprocessing methods are used:
1) calculation of band power in predefined, subject-specific frequency bands in intervals of 250 (500) ms;
2) adaptive autoregressive (AAR) parameters estimated for each iteration with the recursive least squares (RLS) algorithm;
3) calculation of common spatial filters (CSP).
Band power at each electrode position is estimated by first digitally bandpass filtering the data, squaring each sample, and then averaging over several consecutive samples. Before the band power method is used for classification, the reactive frequency bands must first be selected for each subject. Based on training data, the most relevant frequency components can be determined by using the distinction sensitive learning vector quantization (DSLVQ) algorithm. This method uses a weighted distance function and adjusts the influence of different input features (e.g., frequency components) through supervised learning. When DSLVQ is applied to spectral components of the EEG signals (e.g., in the range from 5 to 30 Hz), weight values of individual frequency components according to their relevance for the classification task are obtained.
The AAR parameters, in contrast, are estimated from the EEG signals limited only by the cutoff frequencies, providing a description of the whole EEG signal. Thus, an important advantage of the AAR method is that no a priori information about the frequency bands is necessary.
For both approaches, two closely spaced bipolar recordings from the left and right sensorimotor cortex were used. In further studies, spatial information from a dense array of electrodes located over central areas was considered to improve the classification accuracy. For this purpose, the CSP method was used to estimate spatial filters that reflect the specific activation of cortical areas during hand movement imagination.
Each electrode is weighted according to its importance for the classification. The method decomposes the EEG data into spatial patterns which are extracted from two populations (EEG data during left and right movement imagination) and is based on the simultaneous diagonalization of two covariance matrices. The patterns maximize the difference between the left and right populations, and the only information contained in these patterns is where the variance of the EEG varies most when comparing the two conditions.
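A minimal sketch of CSP via a generalized eigendecomposition of the two class covariance matrices, assuming SciPy is available and that trials are stored as (trials, channels, samples) arrays; the normalization details and the number of retained filters are simplifications, not specifics of the systems discussed here.

```python
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    """Average normalized spatial covariance over trials of shape (trials, channels, samples)."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def csp_filters(trials_left, trials_right, n_filters=2):
    """Spatial filters that maximize variance for one class while minimizing it for the other."""
    c_left = class_covariance(trials_left)
    c_right = class_covariance(trials_right)
    # Generalized eigenproblem: simultaneous diagonalization of the two covariance matrices.
    eigvals, eigvecs = eigh(c_left, c_left + c_right)
    order = np.argsort(eigvals)
    # Filters at both ends of the eigenvalue spectrum discriminate best between the classes.
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks].T   # (2*n_filters, channels)

# Hypothetical imagery data: 20 trials per class, 8 channels, 250 samples.
rng = np.random.default_rng(4)
left = rng.normal(size=(20, 8, 250))
right = rng.normal(size=(20, 8, 250))
W = csp_filters(left, right)
features = np.log(np.var(W @ left[0], axis=1))   # log-variance features for one trial
```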

7. SIGNAL CLASSIFICATION PROCEDURES


An important step toward real-time processing and feedback presentation is the setup of a subject-specific classifier. For this, two different approaches are followed:
i) neural-network-based classification, e.g. learning vector quantization (LVQ);
ii) linear discriminant analysis (LDA).
Learning vector quantization (LVQ) has proven to be an effective classification procedure. LVQ has been shown to be comparable with other neural network algorithms for the task of classifying EEG signals, yielding approximately 80% classification accuracy for three out of the four subjects tested when differentiating between two different mental tasks. LVQ was mainly applied to online experiments with delayed feedback presentation. In these experiments, the input features were extracted from a 1-s epoch of EEG recorded during motor imagery. The EEG was filtered in one or two subject-specific frequency bands before calculating four band power estimates, each representing a time interval of 250 ms, per EEG channel and frequency range. Based on these features, the LVQ classifier derived a classification and a measure describing the certainty of this classification, which in turn was provided to the subject as a feedback symbol at the end of each trial.


In experiments with continuous feedback based on either AAR parameter estimation or CSPs, a linear discriminant classifier has usually been applied for on-line classification. The AAR parameters of two EEG channels, or the variance time series of the CSPs, are linearly combined and a time-varying signed distance (TSD) function is calculated. With this method it is possible to indicate both the result and the certainty of classification, e.g., by a continuously moving feedback bar. The different methods of EEG preprocessing and classification have been compared in extended on-line experiments and data analyses. These experiments were carried out using a newly developed BCI system running in real time under Windows with a 2-, 8-, or 64-channel EEG amplifier. The installation of this system, based on a rapid prototyping environment, includes a software package that supports the real-time implementation and testing of different EEG parameter estimation and classification algorithms.
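A minimal sketch of the linear-discriminant idea behind such a continuous feedback bar: features (e.g. CSP log-variances or AAR parameters) are projected onto a weight vector learned from labelled training trials, and the signed distance from the decision boundary drives the feedback. The closed-form two-class LDA fit used here is a textbook version, not the exact implementation referenced in the text.

```python
import numpy as np

def fit_lda(features, labels):
    """Two-class LDA: w = Sw^-1 (m1 - m0), with b placing the boundary between the class means."""
    x0, x1 = features[labels == 0], features[labels == 1]
    m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
    sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)
    w = np.linalg.solve(sw, m1 - m0)
    b = -0.5 * (m0 + m1) @ w
    return w, b

def signed_distance(feature_vector, w, b):
    """Time-varying signed distance (TSD): sign gives the class, magnitude the certainty."""
    return feature_vector @ w + b

# Hypothetical training data: 40 trials x 4 features.
rng = np.random.default_rng(5)
features = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(1.5, 1, (20, 4))])
labels = np.array([0] * 20 + [1] * 20)
w, b = fit_lda(features, labels)
print(signed_distance(features[0], w, b), signed_distance(features[-1], w, b))
```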


8. EXISTING BCI SYSTEMS


8.1 The Brain Response Interface
Sutter's Brain Response Interface (BRI) is a system that takes
advantage of the fact that large chunks of the visual system are devoted
to processing information from the foveal region. The BRI uses visually
evoked potentials (VEP's) produced in response to brief visual stimuli.
These EP's are then used to give a discrete command to pick a certain
part of a computer screen. This system is one of the few that have been
tested on severely handicapped individuals. Word processing output
approaches 10-12 words/min. and accuracy approaches 90% with the use
of epidural electrodes. This is the only system mentioned that uses
implanted electrodes to obtain a larger, less contaminated signal. A BRI
user watches a computer screen with a grid of 64 symbols (some of which
lead to other pages of symbols) and concentrates on the chosen symbol.
A specific subgroup of these symbols undergoes an equiluminant red/green
fine check or plain color pattern alteration in a simultaneous stimulator
scheme at the monitor vertical refresh rate (40-70 frames/s). Sutter
considered the usability of the system over time and since color alteration
between red and green was almost as effective as having the monitor
flicker, he chose to use the color alteration because it was shown to be
much less fatiguing for users. The EEG response to this stimulus is
digitized and stored. Each symbol is included in several different
subgroups and the subgroups are presented several times. The average
EEG response for each subgroup is computed and compared to a
previously saved VEP template (obtained in an initial training session),
yielding a high-accuracy system. This system is basically the EEG version
of an eye movement recognition system and contains similar problems
because it assumes that the subject is always looking at a command on
the computer screen. On the positive side, this system has one of the best


recognition rates of current systems and may be used by individuals with


sufficient eye control. Performance is much faster than most BCIs, but is
very slow when compared to the speed of a good typist (80 words/min.).
The system architecture is advanced. The BRI is implemented on a
separate processor with a Motorola 68000 CPU. A schematic of the system
is shown in Figure. The BRI processor interacts with a special display
showing the BRI grid of symbols as well as a speech synthesizer and
special keyboard interface. The special keyboard interface enables the
subject to control any regular PC programs that may be controlled from
the keyboard. In addition, a remote control is interfaced with the BRI in
order to enable the subject to control a TV or VCR. Since the BRI processor
loads up all necessary software from the hard drive of a connected PC, the
user may create or change command sequences. The main drawback of
the system architecture is that it is based on a special hardware interface.
This may be problematic when changes need to be made to the system
over time.
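A minimal sketch of the template-matching step described above, assuming epochs have been averaged per symbol subgroup and a stored VEP template exists per symbol; correlation is used here as the similarity measure, which is an assumption rather than a detail given in the text.

```python
import numpy as np

def best_matching_symbol(subgroup_averages, templates):
    """Pick the symbol whose stored VEP template best correlates with its subgroup average.

    subgroup_averages, templates: dicts mapping symbol -> 1-D response array.
    """
    scores = {sym: np.corrcoef(subgroup_averages[sym], templates[sym])[0, 1]
              for sym in templates}
    return max(scores, key=scores.get)

# Hypothetical responses for three symbols, 100 samples each.
rng = np.random.default_rng(6)
templates = {s: rng.normal(size=100) for s in "ABC"}
averages = {s: templates[s] * (1.0 if s == "B" else 0.1) + rng.normal(scale=0.5, size=100)
            for s in "ABC"}
print(best_matching_symbol(averages, templates))  # most likely "B"
```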


Figure 7: A schematic of the Brain Response Interface (BRI) system


8.2 P3 Character Recognition


In a related approach, Farwell and Donchin use the P3 evoked potential. A 6x6 grid containing letters from the alphabet is displayed on the computer monitor, and users are asked to select the letters in a word by counting the number of times that a row or column containing the letter flashes. Flashes occur at about 10 Hz, and the desired letter flashes twice in every set of twelve flashes. The average response to each row and column is computed and the P3 amplitude is measured. Response amplitude is reliably larger for the row and column containing the desired letter. After two training sessions, users are able to communicate at a rate of 2.3 characters/min, with accuracy rates of 95%. This system is currently only used in a research setting. A positive aspect of using a longer-latency component such as the P3 is that it enables differentiating between when the user is looking at the computer screen and when they are looking someplace else (as the P3 only occurs in certain stimulus conditions). Unfortunately, this system is also agonizingly slow, because of the need to wait for the appropriate stimulus presentation and because the stimuli are averaged over trials. While the experimental setup accomplishes its main goal of showing that the P3 may be used for a BCI, the subjective experiences of a subject with this system have yet to be considered. The 10 Hz rate of flashing may fatigue users, as Sutter mentions, and may trigger epileptic seizures in some subjects.
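A minimal sketch of the row/column scoring step: average the epochs for each row and column, measure the P3 amplitude in a window around 300 ms, and pick the grid cell at the intersection of the strongest row and column. The window bounds, sampling rate, and data shapes are invented for the example.

```python
import numpy as np

def p300_amplitude(epochs, fs, window=(0.25, 0.45)):
    """Mean amplitude of the averaged response in a window around 300 ms post-stimulus."""
    avg = epochs.mean(axis=0)                       # epochs: (repetitions, samples)
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return avg[start:stop].mean()

def select_cell(row_epochs, col_epochs, fs):
    """Choose the row and column whose averaged responses show the largest P3."""
    row_scores = [p300_amplitude(e, fs) for e in row_epochs]
    col_scores = [p300_amplitude(e, fs) for e in col_epochs]
    return int(np.argmax(row_scores)), int(np.argmax(col_scores))

# Hypothetical data: 6 rows and 6 columns, 15 repetitions each, 0.8 s epochs at 250 Hz.
fs, n_samples = 250, 200
rng = np.random.default_rng(7)
row_epochs = [rng.normal(size=(15, n_samples)) for _ in range(6)]
col_epochs = [rng.normal(size=(15, n_samples)) for _ in range(6)]
row_epochs[2][:, 75:112] += 3.0                     # inject a P3-like bump in row 3
col_epochs[4][:, 75:112] += 3.0                     # and in column 5
print(select_cell(row_epochs, col_epochs, fs))      # expected (2, 4)
```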
8.3 ERS/ERD Cursor Control
Pfurtscheller and his colleagues take a different approach. Using multiple electrodes placed over sensorimotor cortex, they monitor event-related synchronization/desynchronization (ERS/ERD). In all sessions, epochs with eye and muscle artifact are automatically rejected. This rejection can slow subject performance. As this is a research
system, the user application is a simple screen that allows control of a
cursor in either the left or right direction. In one experiment, for a single


trial the screen first appears blank, then a target box is shown on one side
of the screen. A cross hair appears to let the user know that he/she must
begin trying to move the cursor towards the box. Feedback may be
delayed or immediate and different experiments have slightly different
displays and protocols. After two training sessions, three out of five
student subjects were able to move a cursor right or left with accuracy
rates from 89-100%. Unfortunately, the other two students performed at
60% and 51%. When a third category was added for classification,
performance dropped to a low of 60% in the best case. The architecture of
this BCI now contains a remote control interface that allows controlling the
system over a phone line, LAN, or Internet connection.
This allows maintenance to be done from remote locations. The system may be run from a regular PC, a notebook, or an embedded computer, and is being tested for opening and closing a hand orthosis in a patient with a C5 lesion. From this information, it appears that the user application must be independent of the BCI, although it is possible that two different BCI programs were constructed.
This BCI system was designed with the following requirements in mind:
1. The system must be able to record, analyze, and classify EEG data in real time.
2. The classification results must be usable to control a device on-line.
3. The system must have the ability to support different experimental paradigms.
8.4 A Steady State Visual Evoked Potential BCI
Middendorf and colleagues use operant conditioning methods in order to train volunteers to control the amplitude of the steady-state visual evoked potential (SSVEP) to fluorescent tubes flashing at 13.25 Hz. This method of control may be considered continuous, as the amplitude may change in a continuous fashion. Either a horizontal light bar or audio feedback is provided when electrodes located over the occipital cortex measure changes in signal amplitude. If the SSVEP amplitude stays below or above a specified threshold for a specific time period, discrete control outputs are generated. After around 6 hours of training, users may achieve an accuracy rate of greater than 80% in commanding a flight simulator to roll left or right. In the flight simulator, the stimulus lamps are located adjacent to the display behind a translucent diffusion panel. As operators increase their SSVEP amplitude above one threshold, the simulator rolls to the right; rolling to the left is caused by a decrease in the amplitude.

A functional electrical stimulator (FES) has been integrated for use with this BCI. Holding the SSVEP above a specified threshold for one second causes the FES to turn on. The activated FES then begins stimulation at the muscle-contraction level and gradually increases the current, recruiting additional muscle fibers to cause knee extension. Decreasing the SSVEP for over a second causes the system to deactivate, thus lowering the limb.
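The threshold-and-dwell logic can be sketched as follows: the SSVEP amplitude at the stimulus frequency is estimated from a short occipital EEG window, and a discrete command is issued only after the amplitude has stayed on one side of the threshold for a minimum hold time. The sampling rate, window length, and thresholds below are assumptions for illustration and do not come from Middendorf's system.

import numpy as np

FS = 256           # sampling rate in Hz (assumed)
STIM_FREQ = 13.25  # flicker frequency of the stimulus lamps
WINDOW_S = 1.0     # analysis window length in seconds (assumed)
HOLD_S = 1.0       # how long the amplitude must stay past threshold (assumed)


def ssvep_amplitude(eeg_window: np.ndarray, freq: float = STIM_FREQ, fs: int = FS) -> float:
    """Amplitude of the spectral component nearest the stimulus frequency."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - freq))])


def discrete_commands(eeg: np.ndarray, threshold: float):
    """Yield 'activate'/'deactivate' once the amplitude holds past threshold for HOLD_S."""
    win = int(WINDOW_S * FS)
    hold = int(HOLD_S / WINDOW_S)      # number of consecutive windows required
    above_run = below_run = 0
    for start in range(0, len(eeg) - win + 1, win):
        amp = ssvep_amplitude(eeg[start:start + win])
        above_run = above_run + 1 if amp > threshold else 0
        below_run = below_run + 1 if amp <= threshold else 0
        if above_run == hold:
            yield "activate"           # e.g. turn the FES on / roll right
        if below_run == hold:
            yield "deactivate"         # e.g. turn the FES off / roll left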
Recognizing that the SSVEP may also be used as a natural response, Middendorf and his colleagues have recently concentrated on experiments involving the natural SSVEP.


When the SSVEP is used as a natural response, virtually no training is needed in order to use the system. The experimental task for testing this method of control has been to have subjects select virtual buttons on a computer screen. The luminance of each virtual button is modulated at a different frequency to produce the SSVEP, and the subject selects a button simply by looking at it, as in Sutter's Brain Response Interface. Across the 8 subjects participating in the experiment, the average percentage correct was 92%, with an average selection time of 2.1 seconds. Middendorf's group has advocated using visual evoked potentials in this manner, as opposed to their previous work on trained control of the SSVEP, for multiple reasons; in particular, using an inherent response means that less time is spent on training. The main drawback of this group's approach appears to be that it flickers lights at different frequencies. Sutter solved the problem of flicker-related fatigue by using alternating red/green illumination. Flicker at the main stimulus frequency of 13.25 Hz may also trigger seizures in users with photosensitive epilepsy.
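Selection among frequency-tagged buttons can be sketched in the same spirit: each button flickers at its own frequency, and the button whose tag frequency carries the most power in the occipital EEG is taken to be the one the subject is looking at. The tag frequencies and parameters below are invented for illustration only.

import numpy as np

FS = 256  # sampling rate in Hz (assumed)


def band_power(eeg: np.ndarray, freq: float, fs: int = FS, half_width: float = 0.25) -> float:
    """Power within a narrow band around the given tag frequency."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = np.abs(freqs - freq) <= half_width
    return float(spectrum[mask].sum())


def select_button(eeg: np.ndarray, button_freqs: dict[str, float]) -> str:
    """Pick the virtual button whose tag frequency dominates the occipital signal."""
    powers = {name: band_power(eeg, f) for name, f in button_freqs.items()}
    return max(powers, key=powers.get)


# Example usage with invented tag frequencies:
# choice = select_button(occipital_channel, {"yes": 7.5, "no": 8.2, "help": 9.1})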
8.5 Mu Rhythm Cursor Control
Wolpaw and his colleagues free their subjects from being tied to a flashing fluorescent tube by training them to modify their mu rhythm. This method of control is continuous, as the mu rhythm may be altered in a continuous manner; it can be attenuated by movement and tactile stimulation as well as by imagined movement. A subject's main task is to move a cursor up or down on a computer screen. While not all subjects are able to learn this type of biofeedback control, those who do perform with an accuracy greater than or equal to 90%. These experiments have also been extended to two-dimensional cursor movement, although the reported two-dimensional accuracy has not yet reached the level achieved with one-dimensional control. Since the mu rhythm is not tied to an external stimulus, it frees the user from dependence on external events for control.


The BCI system consists of a 64-channel EEG amplifier, two 32-channel A/D converter boards, a TMS320C30-based DSP board, and a PC with two monitors. One monitor is used by the subject and one by the operator of the system. Only a subset of the 64 channels is used for control, but the full number of channels allows recognition to be adjusted to the unique topographical features of each subject's head. The DSP board is programmable in the C language, enabling testing of all program code prior to running it on the DSP board. Software is also programmed in C in order to create consistency across system modules. The architecture of the system is shown in Figure 8. Four processes run between the PC and the DSP board. As signal acquisition occurs, an interrupt request is sent from the A/D board to the DSP at the end of A/D conversion. The DSP then acquires the data from all requested channels sequentially and combines them to derive the one or more EEG channels that control cursor movement. This is the data collection process.

A second process then performs a spectral analysis on the data. When this analysis is completed, the results are moved to dual-ported memory and an interrupt to the PC is generated. A background process on the PC then acquires the spectral data from the DSP board and computes cursor movement information as well as recording relevant trial information.


Figure 8: A schematic of the mu rhythm cursor control system architecture. The system contains four parallel processes.

This background process runs at a fixed interval of 125 msec. The fourth process handles the graphical user interfaces for both the operator and the subject and records data to disk. The separation of data collection and analysis enables different algorithms to be inserted for processing the EEG signals. All algorithms are written in C, which is much easier to program in than assembly language, but not as convenient as the commercial Matlab® scripting language and environment, which contains many helpful functions for mathematical processing of data. The third and fourth processes contain design decisions that may make maintenance and flexibility difficult: the graphical user interface is tied to data storage.
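The four-process organization (interrupt-driven acquisition, spectral analysis, a 125 msec cursor-update process, and a user-interface/storage process) can be caricatured with threads and queues, as in the Python sketch below. This only illustrates the structural decoupling; it is not the C/DSP implementation described above, and all signal values are placeholders.

import queue
import threading
import time

import numpy as np

raw_q = queue.Queue()        # acquisition -> spectral analysis
spectral_q = queue.Queue()   # spectral analysis -> cursor update
cursor_q = queue.Queue()     # cursor update -> GUI / storage


def acquisition():
    """Stands in for the interrupt-driven A/D collection process."""
    while True:
        raw_q.put(np.random.randn(16))   # placeholder block from the derived control channel
        time.sleep(0.0625)               # a new block every 62.5 ms (assumed)


def spectral_analysis():
    """Stands in for the DSP process that computes the spectrum of each block."""
    while True:
        block = raw_q.get()
        spectral_q.put(np.abs(np.fft.rfft(block)))


def cursor_update():
    """Stands in for the PC background process that runs every 125 msec."""
    while True:
        time.sleep(0.125)
        latest = None
        while not spectral_q.empty():    # take whatever arrived during this interval
            latest = spectral_q.get()
        if latest is not None:
            cursor_q.put(float(latest[2]))  # placeholder: one spectral bin as the control feature


def gui_and_storage():
    """Stands in for the process that drives the displays and records trial data."""
    while True:
        print("cursor control value:", cursor_q.get())


if __name__ == "__main__":
    for fn in (acquisition, spectral_analysis, cursor_update, gui_and_storage):
        threading.Thread(target=fn, daemon=True).start()
    time.sleep(1.0)  # let the pipeline run briefly for demonstration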


Conversion of EEG signals to cursor control values is spread across the DSP foreground/background processes and the PC background process. This lack of encapsulation promises to make changing the application and the signal processing difficult if such changes are planned.
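The essential translation step, from mu-band amplitude on a derived sensorimotor channel to a vertical cursor step, can be sketched as a band-power feature fed through a linear rule. The sampling rate, frequency band, baseline, and gain below are placeholders, not the parameters used by Wolpaw's group.

import numpy as np

FS = 160               # sampling rate in Hz (assumed)
MU_BAND = (8.0, 12.0)  # mu rhythm frequency range in Hz


def mu_power(window: np.ndarray, fs: int = FS) -> float:
    """Average spectral power in the mu band for one derived EEG channel."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return float(spectrum[mask].mean())


def cursor_step(window: np.ndarray, baseline: float, gain: float = 5.0) -> float:
    """Linear translation: mu power above the subject's baseline moves the cursor
    one way, power below it moves the cursor the other way."""
    return gain * (mu_power(window) - baseline)


# Usage sketch: 'derived' is a spatially filtered channel over sensorimotor cortex
# and 'baseline' is the subject's median mu power from a calibration run.
# dy = cursor_step(derived[-FS:], baseline)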
8.6 The Thought Translation Device
As another application used with severely handicapped individuals, the Thought Translation Device has the distinction of being the first BCI to enable an individual without any form of motor control to communicate with the outside world. Out of six patients with ALS, three were able to use the Thought Translation Device. Of the other three, one lost motivation and later died, and another discontinued use of the device part way through training and was later unable to regain control. The paper implies that users do not want to use the BCI unless they absolutely must, but it does not disentangle subjective satisfaction with the system from general user depression.

The training program may use either auditory or visual feedback. The slow cortical potential is extracted from the ongoing EEG on-line, filtered, corrected for eye-movement artifacts, and fed back to the patient. In the case of auditory feedback, the positivity or negativity of the slow cortical potential is represented by pitch. When using visual feedback, the target positivity or negativity is represented by a high and a low box on the screen; a ball-shaped light moves toward or away from the target box depending on the subject's performance. The subject is reinforced for good performance with the appearance of a happy face or a melodic sound sequence. When a subject performs at least 75% correct, he or she is switched to the language support program. At level one, the alphabet is split into two halves (letter-banks), which are presented successively at the bottom of the screen for several seconds. If the subject selects the letter-bank being shown by generating a slow cortical potential shift, that half of the alphabet is again split into two halves, and so on, until a single letter is chosen.


A "return function" allows the patient to erase the last written letter. These patients can now write email in order to communicate with other ALS patients world-wide, and an Internet version of the Thought Translation Device is under construction. The authors comment that patients refuse to use pre-selected word sequences because they feel less free in presenting their own intentions and thoughts.
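The letter-bank scheme amounts to a binary search over the alphabet: each confirmed slow-potential selection halves the candidate set until a single letter remains. A minimal sketch of that logic is given below, with a keyboard prompt standing in for the slow cortical potential shift.

import string


def spell_one_letter(select) -> str:
    """Repeatedly split the remaining letters in half; select(bank) returns True
    when the user 'selects' the currently presented bank (here via any callable,
    in the real device via a slow cortical potential shift)."""
    letters = list(string.ascii_uppercase)
    while len(letters) > 1:
        half = len(letters) // 2
        first, second = letters[:half], letters[half:]
        letters = first if select(first) else second
    return letters[0]


if __name__ == "__main__":
    # Stand-in for the brain response: answer y/n at the keyboard.
    choose = lambda bank: input(f"Select {''.join(bank)}? [y/n] ").strip().lower() == "y"
    print("You spelled:", spell_one_letter(choose))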


8.7 An Implanted BCI


The implanted brain-computer interface system devised by Kennedy and colleagues has been implanted in two patients. These patients are trained to control a cursor with their implant, and the velocity of the cursor is determined by the rate of neural firing. The neural waveshapes are converted to pulses, and three pulse channels form the input to the computer mouse: the first and second control the X and Y positions of the cursor, and the third acts as a mouse click or enter signal.
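The firing-rate-to-cursor mapping can be pictured as counting the detected pulses on each channel over a short window and scaling the counts into velocities, with the third channel acting as a click when its rate crosses a threshold. The window length, gains, and threshold below are invented for illustration.

def cursor_command(x_pulses: int, y_pulses: int, click_pulses: int,
                   window_s: float = 0.1, gain: float = 2.0,
                   click_threshold_hz: float = 20.0):
    """Convert pulse counts from three neural channels in one time window
    into a cursor velocity and an optional click event."""
    vx = gain * (x_pulses / window_s)          # horizontal velocity from channel 1
    vy = gain * (y_pulses / window_s)          # vertical velocity from channel 2
    click = (click_pulses / window_s) >= click_threshold_hz
    return vx, vy, click


# Example: 3 X-pulses, 1 Y-pulse and 4 click-pulses in a 100 ms window
# -> (60.0, 20.0, True) with the placeholder parameters above.
print(cursor_command(3, 1, 4))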
The patients are trained using software that contains a row of icons representing common phrases (Talk Assist, developed at Georgia Tech), or a standard 'qwerty' or alphabetical keyboard (Wivik software from Prentke Romich Co.). When using a keyboard, the selected letter appears on a Microsoft WordPad screen. When the phrase or sentence is complete, it is output as speech using Wivox software from Prentke Romich Co. or as printed text. There are two paradigms using the Talk Assist program and a third one using the visual keyboard. In the first paradigm, the cursor moves across the screen using one group of neural signals and down the screen using another group of larger-amplitude signals. Starting in the top left corner, the patient enters the leftmost icon and remains over it for two seconds so that the speech synthesizer is activated and the phrase is produced. In the second paradigm, the patient is expected to move the cursor across the screen from one icon to another; the patient is encouraged to be as accurate as possible, and then to speed up the cursor movement while attempting to remain accurate. In the third paradigm, a visual keyboard is shown and the patient is encouraged to spell his name as accurately and quickly as possible and then to spell anything else he wishes.

This system uses commercially available software, and thus the BCI implementation does not have to worry about maintenance of the user application. Unfortunately, the maximum communication rate with this BCI has been around 3 characters per minute.


This is the same rate as that quoted for EMG-based control with patient JR and is comparable with the rates achieved by externally based BCI systems. Kennedy has founded Neural Signals, Inc. in order to help create hardware and software for locked-in individuals, and the company is continually looking for ways to improve control. JR now has access to email and may be contacted through the email address shown on the company's web site.


9. Non-Invasive vs. Invasive Signal Detection


Non-Invasive
Pros
No surgical risks
Cons
Low signal resolution
Greater interference from other signals
Interfaces must be routinely cleaned and changed

Invasive
Pros
Higher resolution recording
Less interference from other signals
Faster communication possible
Cons
Difficulty in determining which neurons to record from
Surgical risks


10. R & D ACTIVITIES


Common standards and protocols: As there is no coordinated effort towards developing BCIs, each research group builds its system using custom designs and protocols. This makes universal use difficult. The development of a set of common design standards and communication protocols is therefore one of the areas demanding attention.

1. Hybrid BCIs:

Most present-day BCIs work with only a single type of brain signal, such as the P300 evoked potential, mu rhythms, or beta rhythms, since this simplifies the feature extraction and translation processes. Nevertheless, attempts are being made to develop hybrid brain-computer interfaces that detect multiple types of brain signals and decide the user's intention by combining features from all of them.
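In the simplest case, combining several signal types reduces to concatenating their feature vectors before classification, as the sketch below shows. The feature dimensions, the random placeholder data, and the use of scikit-learn's logistic regression are all assumptions made for illustration, not a prescription from any particular hybrid BCI.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Assume two feature extractors have already been run on the same trials:
# p300_features:   (n_trials, n_p300_features)   e.g. time-domain ERP samples
# rhythm_features: (n_trials, n_rhythm_features) e.g. mu/beta band powers
rng = np.random.default_rng(0)
n_trials = 200
p300_features = rng.normal(size=(n_trials, 10))    # random placeholder data
rhythm_features = rng.normal(size=(n_trials, 6))   # random placeholder data
labels = rng.integers(0, 2, size=n_trials)          # intended vs. non-intended

# Hybrid feature vector: simply concatenate the two feature sets per trial.
hybrid = np.hstack([p300_features, rhythm_features])

clf = LogisticRegression(max_iter=1000).fit(hybrid, labels)
# With random data the score will hover near chance; the point is the mechanism.
print("training accuracy:", clf.score(hybrid, labels))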

2. Silicon Implants:

As digital circuit integration technology reaches higher integration densities, we can expect single-chip computers that can be implanted in the brain itself. This would lead to the era of cybernetic organisms, in which the brain and an artificial processor work together to achieve things that are impossible today.

3. More research into mental activities:

Thorough knowledge of human psychology and of the neurological features of the brain is necessary for the successful implementation of brain-computer interfaces. Hence research in this direction is also a part of BCI research.

4. Improvements in signal detection and feature extraction:


Present-day BCIs suffer from a lack of good signal detection devices. Non-invasive systems mostly use EEG, but the level of detail that can be obtained from EEG is limited. Another option is invasive technology, in which electrodes are placed inside the brain; this requires surgery and is therefore not suitable for common use, and only a few electrodes can be placed in this way. Hence, newer methods for detecting brain activity need to be developed. Changes are also being made to the feature extraction and translation algorithms to ensure better operation.

5. Cosmetic and economic improvements:

Cosmetic improvements are a necessity for ensuring universal acceptance and wider use. BCIs today are not considered fashionable, with their strange-looking electrode caps and the large number of wires running from the cap to the computer; attempts are therefore being made to develop wearable BCIs. Economic efficiency is also a major factor. Even the cheapest BCI system available costs about Rs. 30,000, which is more than the price of a new personal computer. In order to find commercial applications, the cost of BCIs needs to be brought down.


11. CONCLUSION

A BCI is a system that records electrical activity from the brain and classifies these signals into different states. A few applications currently in use have been discussed. Since a BCI enables people to communicate and control appliances using brain signals alone, it opens many doors for disabled people, and the possible future applications are numerous. Even though this field has grown vastly in the last few years, we are still some way from the scene where people drive brain-operated wheelchairs on the streets. New technologies need to be developed, and people in the neuroscience field also need to take other brain imaging techniques, such as MEG and fMRI, into account when developing future BCIs. As time passes, BCIs might become a part of our everyday lives. Who knows, in twenty years I may not have to type a report like this with my fingers; conscious control of my thoughts alone might be enough.


12. REFERENCES

1. http://www.bci-info.org
2. http://www.ebme.com
3. http://www.google.com
4. http://www.bbci.org
5. http://www.wikipedia.com
6. http://www.youtube.com
7. Proprioceptive Feedback in BCI. Proceedings of the 4th International IEEE EMBS Conference on Neural Engineering, Antalya, Turkey, April 29 - May 2, 2009.
8. A General Framework for Brain-Computer Interface Design. IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 1, March 2003, pp. 70-85.
9. A Direct Brain Interface Based on Event-Related Potentials. IEEE Transactions, 2000, Issue 8, pp. 180-185.
10. Current Trends in Brain-Computer Interface (BCI) Research. IEEE Transactions, 2000, Issue 8, pp. 216-219.
