
Robot Control Using Brain-Computer
Interface System

Vyza Yashwanth Sai Reddy

Department of Electronics and Communication Engineering


National Institute of Technology Rourkela
Robot Control Using Brain-Computer
Interface System
Dissertation submitted in partial fulfillment

of the requirements of the degree of

Bachelor of Technology
in

Electronics and Instrumentation Engineering

by

Vyza Yashwanth Sai Reddy


(Roll Number: 113EI0591)

based on research carried out

under the supervision of

Prof. Samit Ari

May, 2017

Department of Electronics and Communication Engineering


National Institute of Technology Rourkela
Certificate
This is to certify that the work in this thesis entitled “Inexpensive Wheelchair Navigation using BCI control” by Vyza Yashwanth Sai Reddy has been carried out under my supervision in partial fulfilment of the requirements for the degree of Bachelor of Technology in Electronics and Instrumentation Engineering during the session 2016-2017 in the Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela.

To the best of my knowledge, this work has not been submitted to any other
University/Institute for the award of any degree or diploma.

Prof. Samit Ari


(Supervisor)
Assistant Professor
Department of Electronics and Communication Engineering
Declaration of Originality
I, Vyza Yashwanth Sai Reddy, Roll Number 113EI0591 hereby declare that this dissertation
entitled Robot Control Using Brain-Computer Interface System presents my original work
carried out as an undergraduate student of NIT Rourkela and, to the best of my knowledge,
contains no material previously published or written by another person, nor any material
presented by me for the award of any degree or diploma of NIT Rourkela or any other
institution. Any contribution made to this research by others, with whom I have worked
at NIT Rourkela or elsewhere, is explicitly acknowledged in the dissertation. Works of
other authors cited in this dissertation have been duly acknowledged under the sections
“Reference” or “Bibliography”. I have also submitted my original research records to the
scrutiny committee for evaluation of my dissertation.

I am fully aware that in case of any non-compliance detected in future, the Senate of NIT
Rourkela may withdraw the degree awarded to me on the basis of the present dissertation.

May, 2017
Vyza Yashwanth Sai Reddy
NIT Rourkela
Acknowledgment
First of all, I would like to express my ardent gratitude to my esteemed supervisor Dr. Samit Ari, Assistant Professor (Department of Electronics and Communication Engineering, NIT Rourkela), for his able guidance and supervision. He has been a great inspiration for me and has helped me in every possible manner throughout this project. His undaunted cooperation and valuable advice have finally helped me to complete this project successfully. I would also like to thank Mr. Harshal Suryawanshi, an M.Tech scholar from the Department of Electronics and Communication Engineering, NIT Rourkela, for helping me understand the technical aspects of this project and for guiding the hardware and software development throughout, by virtue of his vast experience of working with brain-computer interface systems and software.

Finally, I would also like to thank Mr. Sourav Kundu, PhD scholar, Department of Electronics and Communication Engineering, NIT Rourkela, for his kind support in helping me gather all the requisite knowledge related to the classification of real-time signals and the experimentation, which provided me a thorough insight into this project.

May 2017 Vyza Yashwanth Sai Reddy


NIT Rourkela Roll Number: 113EI0591
Abstract

Brain-computer interfaces (BCI) are interfacing platforms that establish communication between humans and machines. This study attempts to support further research into the development of practical and inexpensive non-invasive brain-computer interface systems for the control of prosthetic devices, especially electric wheelchairs. Motivated by the literature, the steady-state visual evoked potential is chosen as the neurological mechanism for the proposed modular BCI system. Selected papers on surveys of BCI research and on BCI designs are reviewed. Available acquisition hardware for BCI interfaces, with particular attention to non-invasive electroencephalogram (EEG) acquisition, is presented with a selection of articles reporting its use. In conclusion, some suggestions for further study towards practical BCI systems are made. Aspects such as data acquisition of the EEG signal, parameter estimation from the acquired data, and classification and pattern recognition of the real-time EEG signal are also discussed.

Keywords: Brain-Computer Interface; Electroencephalogram; Steady-State Visually Evoked Potential; Psychtoolbox; Frequency Spectrum.
Contents

Certificate

Declaration of Originality

Acknowledgment

Abstract

1 Introduction
1.1 Brain-Computer Interfaces (BCI)
1.2 Electroencephalogram (EEG)
1.3 Steady State Visually Evoked Potentials (SSVEP)
1.4 Literature Review

2 EEG Data Acquisition
2.1 Selection of Neurological Mechanism
2.2 Selection of Hardware for the BCI Subsystems
2.2.1 Data Acquisition System
2.2.2 Proposed Data Acquisition System

3 Computational Resources and Software Components
3.0.1 Detection of SSVEP
3.0.2 SSVEP Stimulus

4 Robot/Wheelchair Interface
4.1 Closed-loop system Hardware Design
4.2 Micro-controller
4.3 Chassis and Motor Interface

5 Results and Discussion

6 Conclusion
Chapter 1

Introduction
Research on brain-computer interfaces (BCI), especially non-invasive electroencephalogram-based BCIs, has increased immensely over the past decade, and the knowledge pertaining to this multi-disciplinary area is vast, yet much remains to be explored. Most of the research concerned with control-oriented BCIs is on assistive technologies, enabling handicapped patients to engage with the surrounding environment. Ironically, the only affordable brain-machine interfacing systems commercially available target the PC gaming and media industries. This work aims at the interpretation and development of an inexpensive neural interface system to be used for various purposes, such as assistance for the impaired.

1.1 Brain-Computer Interfaces (BCI)

A brain–computer interface (BCI), also known as a direct neural interface (DNI), mind-machine interface (MMI), or brain–machine interface (BMI), is a direct communication channel between the brain and an external automated device. BCIs are often aimed at mapping, researching, augmenting, assisting or repairing cognitive and sensory-motor functions.[1] The field of BMI development has for a long time primarily focused on neuroprosthetics aimed at the restoration of impaired sight, hearing and movement. Thanks to the remarkable cortical plasticity of the brain, signals acquired from implants can, after adaptation, be handled by the brain like natural sensor or effector channels.[2]

The distinction between BCIs and neuroprosthetics lies mainly in how the terms are used: neuroprosthetics typically connect the nervous system to an automated device, whereas BCIs usually connect the brain (or nervous system) with a computer system. Practical neuroprosthetics can be connected to any part of the nervous system, for instance peripheral nerves, while the term "BCI" usually designates a narrower class of systems that interface with the central nervous system.

Brain-computer interfaces can be broadly grouped into three classes based on the level of interfacing:

a. Invasive BCIs
Invasive BCI research has focused on repairing damaged vision and providing new functionality to individuals with paralysis or motor damage. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive recorders deliver the highest-quality signals of all BCI devices, but they are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign entity in the brain.[3] Invasive BCIs focusing on motor prosthetics aim either to restore movement in people with paralysis or to provide assistive devices for them, for example interfaces with PCs or robot arms. The key point here is that the recording and stimulation hardware of the interface is implanted into the living system.

b. Partially Invasive BCIs

Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better-resolution signals than non-invasive BCIs, in which the bone tissue of the skull deflects and distorts the signals, and they carry a lower risk of forming scar tissue in the brain than fully invasive BCIs. There has been preclinical demonstration of intracortical BCIs from the stroke perilesional cortex.[4]

Electrocorticography (ECoG) measures the electrical activity of the brain from beneath the skull, in a way similar to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed over the cortex, beneath the dura mater. ECoG is a very promising intermediate BCI modality, since it has higher spatial resolution, better signal-to-noise ratio, wider frequency range, and fewer training requirements than scalp-recorded EEG, while at the same time having lower technical difficulty, lower clinical risk, and probably superior long-term stability than intracortical single-neuron recording. This feature profile and recent evidence of a high level of control with minimal training requirements show potential for real-world application for individuals with motor damage and disabilities.[5]


c. Non-Invasive BCIs

The considerable majority of published BCI work involves non-invasive EEG-based BCIs. Non-invasive EEG-based technologies and interfaces have been used for a much broader variety of applications. Although EEG-based interfaces are easy to wear and do not require surgery, they have relatively poor spatial resolution and cannot effectively use higher-frequency signals, because the skull dampens the signals, scattering and blurring the electromagnetic waves created by the neurons. EEG-based interfaces also require some time and effort prior to each usage session, whereas non-EEG-based ones, as well as invasive ones, require no prior training. In general, the best BCI for each user depends on numerous factors.

Non-invasive BCIs have also been applied to enable brain control of prosthetic upper- and lower-limb devices in individuals with paralysis. For example, scientists from Austria demonstrated a BCI-controlled functional electrical stimulation system to restore upper-extremity movements in a man with tetraplegia due to spinal cord injury.[6]

1.2 Electroencephalogram (EEG)

Electroencephalography (EEG) is an electrophysiological monitoring technique used to record the electrical activity of the brain. It is typically non-invasive, with the electrodes placed along the scalp, although invasive electrodes are sometimes used in specific applications. EEG measures voltage fluctuations resulting from ionic currents within the neurons of the brain.[7] In clinical settings, EEG refers to the recording of the brain's spontaneous electrical activity over a period of time,[8] as recorded from multiple electrodes placed on the scalp. Diagnostic applications generally focus on the spectral content of the EEG, that is, the type of neural oscillations (popularly called "brain waves") that can be observed in EEG signals.


Fig.1. EEG Signals

Derivatives of the EEG technique include evoked potentials (EP), which involve averaging the EEG activity time-locked to the presentation of a stimulus of some kind (visual, somatosensory, or auditory). Event-related potentials (ERPs) refer to averaged EEG responses that are time-locked to more complex stimuli; this method is used in cognitive science, cognitive psychology, and psychophysiological research. Electroencephalography (EEG) is the most studied non-invasive interface, mainly owing to its fine temporal resolution, ease of use, portability and low set-up cost. The technology is, however, somewhat susceptible to noise.

1.3 Steady State Visually Evoked Potentials (SSVEP)

Recent studies have indicated an increased interest in BCI systems which are based on conscious modulation of natural brain responses to external stimuli of various sensory modalities. Such BCI methods, in spite of the need for stimulation equipment and increased attention effort by the user, offer advantages such as a very large number of commands, high reliability, shorter or no subject training, and higher resistance to artifact contamination, when compared to BCI approaches based only on mental imagery.

In the Steady-State Visual Evoked Potential (SSVEP) BCI paradigm, the user selectively focuses attention on one of multiple patterns/lights which reverse/flicker repetitively at slightly different frequencies. This continuous visual stimulation evokes a precisely synchronized, recognizable "steady-state" brain activity which depends on the user's choice of target, as each pattern reverses or flickers at its own unique frequency. SSVEP BCI systems have been used, for example, as a two-command flight simulator control device, or in the BCI NASA Earth viewer, in which large stationary patterns on the edges of the screen reversing at 5–7 Hz enabled four-command control of the scrolling direction of a satellite map of the Earth.


There are numerous BCI control techniques used to interface the brain with an automated system. A few of the commonly used methods are:

a. Motor imagery: Motor imagery involves the imagined movement of various body parts, resulting in sensorimotor cortex activation, which modulates the sensorimotor oscillations in the EEG. These can be detected by the BCI to infer a user's intent. Motor imagery typically requires several sessions of training before satisfactory control of the BCI is acquired. This training may take several hours over several days before users can reliably use the system with adequate levels of accuracy.

b. Bio/neurofeedback for passive BCI designs: Biofeedback is used to monitor a subject's mental state. In some cases, biofeedback does not monitor electroencephalography (EEG) but rather bodily parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV). Many biofeedback systems are used to treat certain disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four different bands (theta: 4–7 Hz, alpha: 8–12 Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them.

c. Visual evoked potential (VEP): A VEP is an electrical potential recorded after a subject is presented with a visual stimulus. There are several types of VEPs.

Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at specific frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times simply use flashing images. The frequency of the phase reversal of the stimulus can be clearly distinguished in the spectrum of the EEG; this makes detection of SSVEP relatively easy. SSVEP has proved to be successful within many BCI systems. This is due to several factors: the signal evoked is measurable in as large a population as the transient VEP, and blink movement and electrocardiographic artefacts do not affect the frequencies monitored.
Furthermore, the SSVEP output signal is remarkably robust; the topographic organization of the primary visual cortex is such that a broad area receives afferents from the central or foveal region of the visual field. SSVEP does, however, have some drawbacks. As SSVEPs use flashing stimuli to infer a user's intent, the user must gaze at one of the flickering or iterating symbols in order to interact with the system. It is therefore likely that the symbols could become irritating and uncomfortable to use during longer sessions.


Fig.2. Architecture of a typical brain–display interactive system. SSVEP: Steady-state visual evoked potential.

Another kind of VEP used in applications is the P300 potential. The P300 event-related potential is a positive peak in the EEG that occurs at approximately 300 ms after the appearance of a target stimulus (a stimulus for which the user is waiting or looking) or of oddball stimuli. The P300 amplitude decreases as the target stimuli and the ignored stimuli grow more similar. The P300 is thought to be related to a higher level of attention or to an orienting response. Using P300 as a control scheme has the advantage that the participant requires only limited training. P300-based BCIs offer discrete selections rather than a continuous control mechanism.

1.4 Literature Review

EEG-based BCIs have been used to control a cursor on a screen [9], select letters from a virtual keyboard [10], browse the Internet [11], and play games [12]. Recently, they have been used to control wheelchairs to help bring mobility back to some severely handicapped individuals [13]. A more detailed review of brain-controlled wheelchairs can be found in [14]. Furthermore, a few studies have begun to investigate how to interface with a vehicle using EEG-based BCIs. Gohring et al. [15] applied a commercial BCI product (the EPOC headset from Emotiv) to control a vehicle with the aid of knowledge of the surroundings. Bi et al. [16] applied a head-up-display-based SSVEP BCI in conjunction with the alpha wave to control a vehicle to turn left, turn right, go forward, and start and stop. Although the two studies showed the possibility of using the human "mind" to control a vehicle, BCI-based vehicle control is currently neither accurate nor reliable, and has therefore not been tested in real driving situations. Moreover, the brain-controlled vehicles have a low speed (around 3–5 m/s).

In recent years, researchers have increasingly focused on BCI systems for alphabetic
writing, known as BCI spellers [17]. EEG signals used for spelling mainly include
sensorimotor rhythm (SMR) [18], P300 event-related potential (ERP) [19], and steady-state
visually evoked potential (SSVEP) [20]. The SSVEP is a periodic neural response located
in the subject’s central visual field that is induced by a repetitive visual stimulus [21].
Due to the high information transfer rate (ITR), simple system configuration and minimal
required user training time [22], [23], the SSVEP speller has become one of the most
promising paradigms for practical BCI applications. ITR is widely used for evaluating
the performance of BCI systems because ITR simultaneously considers the number of
selectable items, accuracy, and speed [24]. Consequently, BCI researchers have generally
focused on enhancing the ITR of SSVEP spellers in terms of these three factors.
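For reference, the ITR measure discussed here is commonly computed with the standard Wolpaw definition; the expression below is included as that standard formula (it is not reproduced explicitly in this thesis), with N the number of selectable items, P the classification accuracy, and T the average time per selection in seconds:

B = \log_2 N + P \log_2 P + (1-P) \log_2 \frac{1-P}{N-1}, \qquad \mathrm{ITR} = B \cdot \frac{60}{T} \ \text{bits/min}.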

1) To increase the number of items, several methods have been introduced to realize
more targets than stimulus frequencies. Zhang et al. proposed a multiple frequencies
sequential coding protocol to implement more targets with limited frequencies. The BCI
system with four targets was carried out in an offline experiment [25]. Hwang et al.
proposed a new dual-frequency stimulation method to generalize 12 visual stimuli by
combining four different frequencies [26]. Chen et al. presented an innovative coding
method using intermodulation frequencies induced by luminance flicker (three frequencies)
and color alternating (two frequencies) to build an SSVEP speller with eight targets [27]. Jia
et al. realized a BCI system comprising 15 targets with three stimulus frequencies using the
frequency and phase information [28]. However, the number of items in these studies was
insufficient to allow every character of the BCI speller to be presented on the monitor.

2) To enhance spelling accuracy, several signal processing methods have been proposed,
such as canonical correlation analysis (CCA) [29] and minimum energy combination [30],
which have yielded better recognition results than power spectral density analysis [31], a
traditional method for SSVEP recognition. Among these methods, CCA has demonstrated
the best performance and has been widely used in previous SSVEP BCI studies [32].
Because the features of SSVEP responses are highly dependent on the participant, recent
studies have achieved further progress in terms of classification accuracy by optimizing the
reference signals [33] and selecting suitable stimulus frequencies [34], [35] using additional calibration runs. Unfortunately, misspellings induced by inter-frequency variations in the SSVEP responses at different frequencies still occur.

3) To improve spelling speed under reliable control, Volosyak proposed an adaptive


mechanism of time segment length adaptation for SSVEP-based BCI spellers [36]. In
this system, the fixed thresholds of the SSVEP response were determined based on the
offline analysis of EEG data collected from many subjects, with no individual information
employed for threshold selection. The recognition performance may deteriorate when
directly using the classification thresholds as a result of overfitting during the SSVEP-based
spelling, particularly with a short time window. Therefore, studies focusing on adaptive
SSVEP spellers have been limited to [36].

The neuroscientific and clinical interest in SSVEP has produced various signal conditioning techniques for identifying features of the SSVEP signal. For instance, in [37] a technique for detecting a single frequency component through a single electrode is proposed. Following the development of matched subspace detectors [38], it was noted that this detector is well suited to finding the prominent and distinct frequency components constituting the SSVEP signal [39].

While the focus in clinical uses of SSVEP is mostly on a few parameters of the SSVEP response [40], the BCI application places different and more challenging demands on the signal processing and detection. Typically, an array of light sources flickering at different frequencies is used as stimuli, where each light source encodes a specific control command. The task for the signal processing is to decide which light source the person is looking at, based on the evoked SSVEP response. To achieve an acceptable speed of the BCI, correct decisions must be made based on short signal segments, usually between 0.5 and 4 s.

The cost, power and processing capabilities of embedded processors and digital signal processors, and the capabilities of brain-computer interfaces, are continually improving, to the point that some feature extraction and classification algorithms for BCIs can be well supported on relatively low-cost embedded systems. The steady-state visual evoked potential (SSVEP) based BCI system is one system which, until the past couple of years, appears to have been a relatively neglected area in BCI research, as suggested by the results in Bashashati et al. [41].

A possible reason for this could be the occasionally irritating and tiring visual stimulus of the SSVEP approach, as mentioned in Garcia [42] and Wang et al. [43], and perhaps also its less expensive alternatives, such as the eye movement tracker. Until recently, the cost of implementing BCI systems, even the greatly cost-reduced BCI systems for control commonly used in research, made them rarely useful to anyone other than those patients with severe disabilities, such as locked-in syndrome, who were involved in the studies. This thesis looks at the possibility of developing a BCI system which can be implemented on a wheelchair under standard everyday constraints such as limited electrical energy, space, weight capacity, cost and the need for fast decisions.

These constraints completely rule out sensor equipment such as invasive EEG, MEG and fMRI, which are either costly or cumbersome, leaving us with two options to investigate: non-invasive EEG and near-infrared spectroscopy. Considering only the financial cost aspect of the BCI system, which also impacts its size and computational demands, we determined that one of the major cost areas of a conventional BCI system is the computation stage, which is usually either a PC or an FPGA-based embedded system. In the following sections we give reasons why the steady-state visual evoked potential is the best feature to use for a low-cost BCI system.
Chapter 2

EEG Data Acquisition

A BCI system can generally be described by three or four subsystems. In line with the general framework for a BCI system by Mason and Birch [44], the figure below depicts how we have divided our BCI system into four main modular hardware subsystems. As the functionality of the various hardware components may vary from application to application, the modular nature of our system enables us to reuse parts of the system for other BCIs without redesigning the entire BCI system; this runs in line with Quitadamo et al.'s [45] proposal of a UML model for BCI systems.

Fig.3. Functional Model of a modular BCI hardware system

The modular BCI system is depicted as having an "acquisition" subsystem, which comprises the EEG amplifiers and analog-to-digital converters, and a "BCI transducer", which contains the artifact processor (beyond those included in the acquisition stage), the feature extractor, and the feature classifier or translator. Following the transducer, the control interface controls the visual feedback and stimulus and issues commands, based on data from the transducer, to the controller. The controller operates the environment manipulation device, which in our case is the wheelchair motors.


2.1 Selection of Neurological Mechanism

The neurological mechanism of a BCI refers to the underlying brain functions or attributes which are used in that BCI system; examples are mu-rhythms, slow cortical potentials (SCP), event-related potentials (ERPs), P300 and the steady-state visual evoked potential (SSVEP).
With respect to neurological mechanisms, the SSVEP-based BCI is the one of choice for the reasons given hereafter. P300 and SSVEP-based BCI systems differ from the others in that they do not read the user's level of thought directly but rather detect the brain's response to a changing external visual stimulus on which the user focuses attention [46]. The SSVEP is characterized by a neurological synchrony or resonance in the occipital region of the brain, elicited by concentrated attention on a stimulus flickering steadily at a frequency between 3.5 Hz and 75 Hz.

Thus, unlike other neurological mechanisms, the power spectrum of the attended flicker frequency is easily extracted from the background EEG noise [47], and complex feature extraction algorithms are therefore not required. Suitable for wheelchair control, the SSVEP-based BCI has the potential for a high information transfer rate (ITR) of more than 60 bits per minute (43 bits/min in Wang et al. [43]) as compared to other BCIs having ITRs of under 30 bits per minute [42].

2.2 Selection of Hardware for the BCI Subsystems

Wu et al. [48] demonstrated that LED-flicker-evoked SSVEP has a larger frequency amplitude than LCD- and CRT-evoked SSVEPs, particularly at the lower frequencies. This suggests that, in place of the input subsystem of a BCI, the visual stimulus of an SSVEP-based BCI system can be implemented using an embedded LED system, adding to its PC-independence.

For the BCI transducer, the feature extraction and classification section, a low-cost embedded system can be implemented. A basic SSVEP-based system needs very little pre-processing, as the features of interest are usually above the noise. An FFT of the EEG, together with a basic statistical analysis of the FFT, can be implemented using a microprocessor or low-end DSP. To maintain the modular flexibility of the proposed BCI system, the control interface can also be implemented using a low-end microprocessor. What remains is the acquisition section of the BCI system for the electric wheelchair. Good-quality instrumentation amplifiers, and hence a good EEG amplifier unit in the BCI signal section, are necessary to reliably amplify the extremely weak electrical signals.


These signals are measured from the surface of the BCI user's scalp, so care must be taken to avoid compromising the patient's safety with regard to electrical isolation while maintaining signal integrity. In most non-invasive EEG systems this is done using instrumentation amplifiers, which are costly; a typical example of their use is found in the modular EEG amplifier.

Opto-isolators are used in a wired BCI system for additional electrical isolation between sections of the BCI system. Various papers, including Beverina et al. [46] and Wang et al. [43], show that an SSVEP-based BCI system needs only two EEG acquisition channels, and possibly just a single one. This will further reduce the cost of our BCI system, as less hardware will be required for the amplifier and signal acquisition stage.

2.2.1 Data Acquisition System

Recently, inexpensive portable EEG devices have been produced by Avatar EEG Solutions, Neurosky, OCZ Technology, InteraXon, PLX Devices and Emotiv Systems. These devices are not used clinically, but rather in brain control interface (BCI) and neurofeedback (a type of biofeedback) applications [49][50][51].

The least expensive EEG device is the single-channel MindWave MW001 produced by Neurosky Inc., San Jose, CA. It costs only around $80. The device consists of eight main parts: ear clip, flexible ear arm, battery area, power switch, adjustable head band, sensor tip, sensor arm and the internal ThinkGear chipset. The figure below displays the device design. The principle of operation is quite simple. Two dry sensors are used to detect and filter the EEG signals. The sensor tip detects electrical signals from the forehead.

At the same time, the sensor picks up ambient noise generated by human muscle, computers, light bulbs, electrical sockets and other electrical devices. The second sensor, the ear clip, acts as ground and reference, which allows the ThinkGear chip to filter out the electrical noise [52]. The device measures the raw signal, the power spectrum (beta, alpha, delta, theta, gamma), the attention level, the meditation level and blink detection. The raw EEG data is collected at a rate of 512 Hz; the other measured values are produced once each second. Consequently, the raw EEG signal is the fundamental source of information on EEG signals when using the MindWave MW001.


Fig.4. The MindWave MW001 device design. [a] (Courtesy: Processing and spectral
analysis of the raw EEG signal from the MindWave, Wojciech SAŁABUN)

This EEG headset comes with its own GUI in order to run numerous applications; however, the source code is not accessible. Therefore, the library file (thinkgear.dll) is used to extract the raw EEG. MATLAB allows thinkgear.dll to be integrated. This environment has wide toolbox support, which makes it well suited for scientific research. The main point in using the .dll library here is to establish a valid Bluetooth connection ID through a COMx port and to exchange packets of data. The streamed data is stored in a streamLog file. The logged packets look as follows:

1494481837.953: AA AA 04 80 02 00 34 49
1494481837.953: AA AA 04 80 02 00 34 49
1494481837.953: AA AA 04 80 02 00 35 48
1494481837.953: AA AA 04 80 02 00 35 48
1494481837.953: AA AA 04 80 02 00 35 48
1494481837.953: AA AA 04 80 02 00 35 48
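Each logged line above follows the ThinkGear stream format: two 0xAA sync bytes, a payload length byte, the payload (here code 0x80, a 2-byte big-endian raw EEG sample) and a checksum equal to the one's complement of the low byte of the payload sum. The following MATLAB sketch is illustrative only; it is not part of the thesis code, and the handling of payload codes other than 0x80 is an assumption:

% Decode one ThinkGear packet, e.g. [AA AA 04 80 02 00 34 49] from the log above.
function rawValue = decodeThinkGearPacket(pkt)
    % pkt: vector of byte values (0-255), starting with the two 0xAA sync bytes
    assert(pkt(1) == 170 && pkt(2) == 170, 'Missing 0xAA 0xAA sync bytes');
    plen    = pkt(3);                     % payload length
    payload = pkt(4:3+plen);              % payload bytes
    chk     = pkt(4+plen);                % checksum byte
    % Checksum is the one's complement of the low byte of the payload sum
    assert(double(bitcmp(uint8(mod(sum(payload), 256)))) == chk, 'Bad checksum');
    if payload(1) == 128 && payload(2) == 2          % 0x80: raw EEG, 2-byte value
        rawValue = double(typecast(uint8([payload(4) payload(3)]), 'int16'));
    else
        rawValue = NaN;                   % other codes (attention, meditation, ...) ignored
    end
end

For the first packet logged above, this returns 0x0034 = 52.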


Fig.5. The communications protocol

In the first step, the following functions are called with their parameters, in the order given:

1. libisloaded('Thinkgear') – returns true if the ThinkGear library is loaded, and false otherwise.
2. loadlibrary('Thinkgear.dll','thinkgear.h') – loads the functions defined in the header file and found in the library. After this, the function calllib() can call a function in the ThinkGear library.
3. calllib('Thinkgear', 'TG_GetDriverVersion') – returns the version of the loaded library.


In the next step, the function calllib('Thinkgear', 'TG_GetNewConnectionId') gets a new connection ID handle to ThinkGear. The value -1 is returned if too many connections have been made. In the ThinkGear library, the most important function is TG_Connect. This function needs 4 parameters: the connection ID, the serial port name, the baud rate and the type of data stream. The serial port number is assigned during the pairing of the device. The device can connect at 1200, 2400, 4800, 9600, 57600 and 115200 bits per second (bps). Furthermore, we can use three data stream formats: packets, 5V RAW and file packets. Most commonly used are the 9600 bps rate and the 5V RAW stream mode, because these parameters give the fewest transmission errors.

The connection is set up using the following function call:


calllib('Thinkgear', 'TG_Connect', Id, ComPortName, TG_BAUD_115200, TG_STREAM_5VRAW)

In the next step, an attempt is made to read a packet of data from the connection. We use the TG_ReadPackets() function with the connection ID and the number of packets to read. The call calllib('Thinkgear','TG_ReadPackets',Id,1) returns false on error, and true otherwise. The function TG_GetValueStatus() checks whether a value has been updated by TG_ReadPackets(). If this function returns true, we can use the TG_GetValue() function to get the updated value of the raw EEG signal. Thereafter, we can read the value of the raw EEG signal at a maximum rate of 512 Hz. The sampling frequency is set to 512 Hz, and we control the time delays in sampling. The value of the signal and the time are written to a data array. A short sample of the raw EEG signal is shown in the figure below.

Fig.6. Real-time recording of the raw EEG signal from the MindWave MW001.
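Putting the calls described above together, a minimal acquisition script might look as follows. This is an illustrative sketch rather than the thesis' original code: the COM port name, the 20-second duration and the numeric values of the ThinkGear constants are assumptions and should be checked against thinkgear.h.

% Minimal raw-EEG acquisition sketch for the MindWave via thinkgear.dll
if ~libisloaded('Thinkgear')
    loadlibrary('Thinkgear.dll', 'thinkgear.h');
end
TG_BAUD_115200  = 115200;   % assumed constant values; verify against thinkgear.h
TG_STREAM_5VRAW = 1;
TG_DATA_RAW     = 4;

Id = calllib('Thinkgear', 'TG_GetNewConnectionId');       % returns -1 on failure
calllib('Thinkgear', 'TG_Connect', Id, '\\.\COM6', TG_BAUD_115200, TG_STREAM_5VRAW);

Fs  = 512;                        % raw sampling rate of the MindWave
dur = 20;                         % seconds of data to record (assumption)
raw = zeros(1, Fs*dur);
n   = 0;
while n < Fs*dur
    % Read one packet; if a new raw value is available, store it
    if calllib('Thinkgear', 'TG_ReadPackets', Id, 1) > 0 && ...
            calllib('Thinkgear', 'TG_GetValueStatus', Id, TG_DATA_RAW)
        n = n + 1;
        raw(n) = calllib('Thinkgear', 'TG_GetValue', Id, TG_DATA_RAW);
    end
end
unloadlibrary('Thinkgear');       % (disconnect/cleanup calls omitted in this sketch)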


2.2.2 Proposed Data Acquisition System

In this context, we propose the design of a 2-channel EEG system with a reference electrode. This system first consists of an EEG amplifier circuit designed with pre-/post-amplifiers and filters.

Fig.7. Block diagram of proposed bio-signal amplifier

Complete signal conditioning is performed; even the 50 Hz power-line noise is removed using a notch filter. The amplified and filtered output is interfaced to a National Instruments ELVIS II board using a virtual instrumentation program in LabVIEW. The system is designed in an analog fashion, using the AD620AN precision instrumentation amplifier. For the filters, we used a standard OP07CP operational amplifier operating at unity (buffer) gain. Since the low-pass filter cut-off frequency is at 36 Hz, there is little possibility of any prevalent 50 Hz power-line noise remaining.

Fig.8. LabVIEW Data-acquisition VI

This interface enables us to acquire real-time EEG data, which is then fed into the classifier system. The advantage of having a 2-electrode system is that any particular lobe of the brain can be accessed depending on the amputee, and cross-talk is avoided. The measured amplifier characteristics are:

First-stage amplification: theoretical gain = 113.27, practical gain = 87.66
High-pass filter: cutoff frequency = 0.18 Hz
Low-pass filter: cutoff frequency = 31 Hz
Final-stage amplification: theoretical gain = 9.9818, practical gain = 11.72
Net total gain: 1027.37 (the product of the practical stage gains, 87.66 x 11.72)

Fig.9. Bio-signal Amplifier design


Chapter 3

Computational Resources and Software Components

Following the acquisition of the EEG signal, processing is done to remove certain additional artifacts. These artifacts may be residual interferences such as white noise and can be removed using digital tools such as Chebyshev or Butterworth filters. Subsequently, the parameters required for the analysis of the neuronal system are estimated. The most apt parameters for this study are the SSVEP and P300 responses.
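As an illustrative sketch of this pre-processing (not the thesis' original code; the band edges loosely follow the analog front end of Chapter 2, and the Signal Processing Toolbox is assumed to be available):

% Digital clean-up of a raw EEG vector `raw` sampled at Fs = 512 Hz
Fs = 512;
[bBand,  aBand]  = butter(2, [0.18 31] / (Fs/2), 'bandpass');  % Butterworth band-pass
[bNotch, aNotch] = butter(1, [48 52]   / (Fs/2), 'stop');      % 50 Hz power-line notch
eegFiltered = filtfilt(bBand,  aBand,  raw);    % zero-phase filtering preserves SSVEP timing
eegFiltered = filtfilt(bNotch, aNotch, eegFiltered);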

Fig.10. Obtained EEG signal through custom-made hardware

3.0.1 Detection of SSVEP

The steady-state visual evoked potential (SSVEP) is a resonance phenomenon arising predominantly in the visual cortex when a person focuses visual attention on a light source flickering at a frequency above 4 Hz. To clearly detect SSVEP signals, it is advisable to combine the electrode signals into channel signals so that the response is magnified and the disturbance signals or noise are removed. Different techniques are used to accomplish this combination, such as the following (a small sketch of two of them is given after the list):

1. Average combination
2. Native combination
3. Bipolar combination
4. Laplacian combination
5. Minimum energy combination
6. Maximum contrast combination
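As a small illustrative sketch of two of these combinations (not from the thesis; the electrode indices are placeholders for an actual montage):

% eeg: Nt-by-Ne matrix of electrode signals (Nt samples, Ne electrodes)
sBipolar = eeg(:,1) - eeg(:,2);                           % bipolar: difference of two electrodes
centre     = 1;                                           % assumed centre electrode index
neighbours = [2 3 4 5];                                   % assumed surrounding electrodes
sLaplacian = eeg(:,centre) - mean(eeg(:,neighbours), 2);  % Laplacian: centre minus neighbour mean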

Since the SSVEP response is a repetitive and periodic signal with energy only at a few particular frequencies, a test statistic for testing the presence of an SSVEP response can be formulated as described below.
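Based on the parameter definitions that follow and on the multi-channel SSVEP detection approach of Friman et al., on which this section draws, the test statistic presumably takes the form

T = \frac{1}{N_s N_h} \sum_{l=1}^{N_s} \sum_{k=1}^{N_h} \frac{\hat{P}(l,k)}{\hat{\sigma}^2(l,k)}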

Here, P(l,k) is the estimated power at SSVEP harmonic frequency number k in channel signal s_l, and sigma^2(l,k) is an estimate of the noise power in the same frequency band. The estimation of these quantities is detailed in the following paragraphs. Expressed in words, the test statistic averages the SNRs across the N_h harmonic frequencies and N_s channel signals. The interpretation is that the test statistic T tells us how many times larger the estimated SSVEP power is compared to the situation where no visual stimulus is present.
The test statistic here is essentially the same test statistic used in the detection literature cited above and, due to the (rough) orthogonality of the sinusoids, it is also equivalent to the matched subspace test statistic. Dividing by the estimated noise power is commonly referred to as a whitening operation, which in the present case takes a simple form because of the sinusoidal nature of the SSVEP signal.
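The power estimate P(l,k) referred to in the next paragraph is reconstructed here as an assumption from the description that follows (it equals the squared DFT magnitude when kf coincides with a DFT bin); up to a normalization constant it presumably has the sinusoidal correlation form

\hat{P}(l,k) = \left| \sum_{t=1}^{N_t} s_l(t)\, e^{-i 2\pi k f t / F_s} \right|^2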

This formula estimates the power at the kth SSVEP harmonic frequency in channel signal s_l. The motivation for using this definition of SSVEP power is that it equals the squared Discrete Fourier Transform (DFT) magnitude when the frequency coincides with one of the DFT frequencies. However, since the SSVEP stimulation frequency and its harmonics do not necessarily coincide with the DFT frequencies, it offers a more general recipe, as it can estimate the power at any frequency.


To restate the parameters used: N_t is the number of samples, k = 1, ..., N_h is the SSVEP harmonic frequency number, f is the SSVEP stimulation frequency, F_s is the sampling frequency, and i in the above equation is the imaginary unit.

3.0.2 SSVEP Stimulus

In neurology and neuroscience research, steady-state visually evoked potentials (SSVEP) are signals that are natural responses to visual stimulation at specific frequencies. When the retina is excited by a visual stimulus ranging from 3.5 Hz to 75 Hz, the brain generates electrical activity at the same frequency as (or at multiples of) the visual stimulus. This technique is used extensively in electroencephalographic research regarding vision and attention. SSVEPs are useful in research because of their excellent signal-to-noise ratio and relative immunity to artefacts. SSVEPs also provide a means to characterize preferred frequencies of neocortical dynamic processes. The SSVEP is generated by stationary localized sources and by distributed sources that exhibit characteristics of wave phenomena. SSVEP can also be used to create brain-computer interface (BCI) games.

Fig.11. Typical frequency spectrum of an EEG signal acquired during visual stimulation with a flickering frequency of 7 Hz. The SSVEP response can be seen as the peaks at 7 Hz and the harmonic frequencies at 14 and 21 Hz. [b] (Courtesy: Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces by Ola Friman, et al.)

At flicker frequencies in the delta band (2–4 Hz) and in the upper alpha band (10–11 Hz), an occipito-frontal system appears to phase-lock to the flicker during stimulation, increasing the magnitude of the SSVEP. At flicker frequencies in the lower alpha band (8–10 Hz), an intrinsic response to peripheral flicker, involving the parietal cortex and posterior frontal cortex, has higher amplitude when attention is directed away from the flicker and towards a competing stimulus in the fovea. Increases in SSVEP power when attention is directed to the peripheral flicker are consistently associated with increases in phase locking. By contrast, at frequencies in the lower alpha band, increases in SSVEP power when attention is directed away from the flicker and towards foveal stimuli are not associated with changes in phase locking. Accordingly, whether attention to a stimulus increases or decreases SSVEP amplitude and phase locking depends on which of two cortical systems is selected by the frequency; the different systems have distinct spatial and dynamic properties.

Accordingly, we have custom-built a flicker toolbox using Psychtoolbox interfaced with MATLAB, with a frequency-tagging test at 6.5 Hz. Psychtoolbox interfaces between MATLAB or Octave and the computer hardware. The PTB core routines give access to the display frame buffer and color look-up table, reliably synchronize with the vertical screen retrace, support sub-millisecond timing, expose raw OpenGL commands, support video playback and capture as well as low-latency audio, and facilitate the collection of observer responses. Auxiliary routines support common needs such as color space transformations and the QUEST threshold-seeking algorithm.

Visual stimulus presentation with "Screen":

1. Visual stimulus presentation: multi-display, stereo, double buffering, accurate onset timing, timestamping, standard paradigms, etc.
2. Fast 2D stimulus creation: batch drawing, texture mapping, alpha blending, movie playback, video capture, image processing, etc.
3. Automatic stimulus post-processing: HDR, calibration, etc.
4. Fast 3D stimulus creation: OpenGL for MATLAB

The visuals: Screen()

a. Controls all aspects of the graphics and display hardware.

b. Performs all of the 2D drawing operations.

c. Controls stimulus onset timing and provides timestamps.

d. Allows for some high-performance image processing.

e. Performs on-demand stimulus post-processing.

Fig.12. PTB1/2 Operation [c] (Courtesy: Psychtoolbox.org)

Basic 2D drawing commands:

(Filled) circles and ovals: Screen('FrameOval', window, color, boundingrect [, penWidth]); Screen('FillOval', window, color, boundingrect);
(Filled) rectangles: Screen('FrameRect', window, color, boundingrect [, penWidth]); Screen('FillRect', window, color, boundingrect);

Textures can be created from various sources.

From MATLAB (matrices and loaded image files):

matlabMatrix = imread('MyCuteStimulusImage.jpg');
mytex = Screen('MakeTexture', mywindow, matlabMatrix);

Basic commands for system control and timing:

T = GetSecs – query the time with microsecond resolution; uses the highest-resolution system clock available for the measurement of time.
WaitSecs(duration) – wait for a specified amount of time 'duration'. On MS-Windows this is accurate to < 2 milliseconds in general on a modern machine.
Priority() – switch the MATLAB process to realtime-scheduling mode.

Fig.13. Timing Latencies in Stimulus

Basic response collection – keyboard and mouse:

[x, y, buttons] = GetMouse(window); – query the current mouse position and mouse button state.
[down, secs, keyCode] = KbCheck; – query the current state of all keys on the keyboard; can detect and report multiple simultaneous keypresses.
Queries are fast, bypassing OS queues and application event queues. Despite that, standard keyboards are problematic for reaction-time measurements.

The major component involved in the operation of Psychtoolbox is Screen. On Windows, Screen essentially requires MATLAB to be supported by the Microsoft SDK 7.1 for active compiler support, either in the form of the C++ redistributable or the Fortran or Python environments. In parallel, this also requires a working .NET 4.0 framework to include the corresponding OpenGL graphics in case the Windows graphics stack is unresponsive.
We also calculate the average time, dfftime, for which the flickering continues. This way we can be sure of the average rate at which the image or color block on the screen is flickering. Another major point here is to allow the flickering to run for a longer duration and only then acquire the EEG signal; this ensures that the flicker response has stabilized and reached steady state. It is during this stage that we must look for VEPs in the EEG signal.
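A minimal Psychtoolbox flicker sketch along these lines is shown below. It is illustrative only: the 6.5 Hz target and 20-second duration follow the text, the full-screen square is an assumption, and frame-accurate flicker requires rounding the half-period to a whole number of monitor refreshes.

% Minimal flicker stimulus at ~6.5 Hz using Psychtoolbox (illustrative sketch)
targetHz = 6.5;                          % flicker frequency used for frequency tagging
dur      = 20;                           % stimulation time in seconds (assumption)

[win, rect]   = Screen('OpenWindow', 0, 0);          % full-screen window, black background
ifi           = Screen('GetFlipInterval', win);      % monitor refresh interval in seconds
framesPerHalf = max(1, round(1 / (2*targetHz*ifi))); % frames per on/off half-cycle

flipTimes = [];
state = 0;
vbl   = Screen('Flip', win);
tEnd  = GetSecs + dur;
while GetSecs < tEnd
    state = 1 - state;                               % toggle between black and white
    Screen('FillRect', win, state*255, rect);        % draw the flickering block
    vbl = Screen('Flip', win, vbl + (framesPerHalf - 0.5)*ifi);
    flipTimes(end+1) = vbl;                          % record flip timestamps
end
Screen('CloseAll');

dfftime = mean(diff(flipTimes));                     % average achieved half-period
fprintf('Achieved flicker frequency: %.2f Hz\n', 1/(2*dfftime));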

The corresponding stimulus window looks as shown below:

Fig.14. Frequency distribution of stimulus

Fig.15. Stimulus Screen


Chapter 4

Robot/Wheelchair Interface

4.1 Closed-loop system Hardware Design

After the EEG signal is fed to the classifier and the harmonic involved is identified, the system moves in the corresponding orientation or direction. To enable this movement, we connect the classifier output to a control interface which controls both the automation of the stimuli and the device controller. The device controller consists of a network of high-end servos connected to a wheelchair, linked by a micro-controller for automation purposes.

4.2 Micro-controller

A microcontroller (or MCU, for microcontroller unit) is a small computer on a single integrated circuit. In modern terminology, it is similar to a system on a chip (SoC). A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is often included on chip, along with a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in PCs or other general-purpose applications consisting of various discrete chips.

Microcontrollers are used in automatically controlled products and devices such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control many more processes. Mixed-signal microcontrollers are common, integrating the analog components needed to control non-digital electronic systems.


The Atmel 8-bit AVR RISC-based microcontroller combines 32 kB of ISP flash memory with read-while-write capability, 1 kB EEPROM, 2 kB SRAM, 23 general-purpose I/O lines, 32 general-purpose working registers, three flexible timer/counters with compare modes, internal and external interrupts, a serial programmable USART, a byte-oriented 2-wire serial interface, an SPI serial port, a 6-channel 10-bit A/D converter (8 channels in TQFP and QFN/MLF packages), a programmable watchdog timer with internal oscillator, and five software-selectable power-saving modes. The device operates between 1.8 and 5.5 volts and achieves a throughput approaching 1 MIPS per MHz.

Fig.16. Arduino Uno [d] (Courtesy: Arduino.Inc, Italy)

The output f max. obtained after the frequency spectrum estimation is put through a classifier that generates a logic decision, assigning a preset motion as per the table given below. Each of the motions "Forward", "Backward", "Left" and "Right" is defined as a function in the micro-controller logic.

Thus, the estimated f max. from the EEG signal is analyzed and the flag pertaining to the frequency range of the corresponding motion is raised. This flag is serially transmitted to the micro-controller for further analysis and actuation of the bot.


The corresponding serial connection is established through a COMx port with a preset transmission rate:

s = serial('COM6', 'BaudRate', 9600);   % open serial link to the micro-controller
fopen(s);
pause(1);                               % allow the port (and board reset) to settle

if (fmax > 6.0) && (fmax < 6.8)         % fmax: peak frequency estimated from the EEG spectrum
    fprintf('Forward');
    fprintf(s, '%s', 'f');              % 'f' flag -> forward motion
else
    fprintf('Dont move');
    fprintf(s, '%c', 's');              % 's' flag -> stop
end

The frequency ranges set for the various motions correspond to the degrees of freedom of the robot. These values were obtained after repeated experimentation over many trials; a sketch of the resulting classification logic is given after the table.

Frequency Range              Action

6.0 Hz < f max. < 6.8 Hz     Forward
7.0 Hz < f max. < 8.5 Hz     Left
9.5 Hz < f max. < 12.0 Hz    Right
3.5 Hz < f max. < 4.5 Hz     Backward
Table 1. Frequency ranges set for the various degrees of freedom
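Extending the snippet above to all four ranges of Table 1, a sketch of the complete decision logic could look as follows. Only the 'f' (forward) and 's' (stop) flags appear explicitly in the thesis; the remaining single-character flags are illustrative assumptions.

% Map the estimated f max. to a motion flag per Table 1 and send it over
% the already-opened serial port s (flag letters other than 'f'/'s' assumed)
if     (fmax > 6.0) && (fmax < 6.8)
    action = 'Forward';   flag = 'f';
elseif (fmax > 7.0) && (fmax < 8.5)
    action = 'Left';      flag = 'l';
elseif (fmax > 9.5) && (fmax < 12.0)
    action = 'Right';     flag = 'r';
elseif (fmax > 3.5) && (fmax < 4.5)
    action = 'Backward';  flag = 'b';
else
    action = 'Dont move'; flag = 's';
end
fprintf('%s\n', action);
fprintf(s, '%c', flag);           % one-character command to the micro-controller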


4.3 Chassis and Motor Interface

In order to demonstrate the working of the control, a sample bot has been assembled using a standard sturdy robot chassis made of iron. Features of the chassis include:

1. Durable yet light-weight chassis with numerous holes and slots for mounting robot drive and control systems
2. Rectangular cut-outs fit Continuous Rotation Servos, both original and High Speed
3. A 4-cell AA, 5-cell AA, or Li-Ion pack/charger battery pack fits inside
4. Shaped to mount a 1" Tail Wheel Ball (700-00009) with a single 1/16" cotter pin (700-00023)

Along with the chassis, four geared DC motors were fitted into the cut-outs provided. These motors are rated at 12 V, with a maximum rotation speed of 200 RPM.

To drive these motors, a motor driver circuit with a voltage regulator and transformer has been interfaced, since the micro-controller alone cannot drive DC motors. A Robodo Electronics L298 motor driver module has been used for this purpose; it has the following specifications:

1. Drives a 2-phase bipolar stepper motor or two DC motors with the L298 dual H-bridge chip, mounted on a convenient breakout board alongside all the essential peripherals

2. Ideal for mechanical and robotic applications and suitable for connection to a microcontroller, requiring only two or three control lines per motor

3. Can also be interfaced with simple manual switches, TTL logic gates, relays, and so on.

Fig.17. L298 Motor Driver Module [e] (Courtesy: Dual Full-Bridge Driver,
STMicroelectronics)

Additional features:
• Finned aluminum heatsink for the L298 IC
• Clamping rectifiers
• Filter capacitors for the logic-power and motor-power busses
• Onboard 5-volt DC regulator (a jumper selects between external 5-volt power or the onboard 5-volt regulator)
• Screw terminal connections for ground, motor power, 5-volt power and motor outputs
• 0.1 inch-pitch header pins for logic inputs
• Jumpers to tie the enable inputs to ground (if desired)
• Switches up to 2 amps per channel

The overall board dimensions are 1.70" square x 1.16" tall. Mounting holes in each corner accept 1/8" screws or standoffs. This module is easy to get running, with only 4 logic pins required.

The flag raised by the system, when serially transmitted to the micro-controller, is read and the corresponding void function is called. Each of these motion functions is individually designed in the micro-controller program to digitalWrite values to the four motor interfaces connected to the chassis of the bot. Therefore, depending on the stimulus, the EEG pattern follows it and yields an f max. value to be classified. The resulting flag is serially transmitted to the micro-controller, which interprets it and drives the motor driver to actuate the robot.
Chapter 5

Results and Discussion


A complete algorithm was implemented for extracting the raw EEG data from the MindWave MW001 EEG headset using the thinkgear.dll library. Experiments were then carried out to acquire the EEG signal while the visual stimulus was presented. The corresponding frequency spectrum of the signal was estimated using the Fast Fourier Transform (FFT). This frequency spectrum is analyzed for the dominant frequency component of the EEG signal while the subject is under stimulus. The stimulus is also set at various flicker frequency values for the various degrees of freedom of the robot or wheelchair. This dominant frequency component is then classified into one of the case structures with pre-assigned frequency limits (a sketch of this estimation step is given below).
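As an illustrative sketch of this estimation step (not the original thesis code; the 3-13 Hz search band is an assumption chosen to cover the ranges in Table 1):

% Estimate the dominant stimulus-band frequency (f max.) from the acquired EEG
Fs  = 512;                              % MindWave raw sampling rate
N   = length(raw);                      % raw: vector of EEG samples (e.g. 20 s long)
X   = fft(raw - mean(raw));             % remove DC offset, then FFT
f   = (0:N-1) * Fs / N;                 % frequency axis in Hz
mag = abs(X);
band = (f >= 3) & (f <= 13);            % restrict search to the SSVEP stimulation band
[~, idx] = max(mag .* band);
fmax = f(idx);
fprintf('Estimated f max. = %.2f Hz\n', fmax);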

Fig.18. Experimental bot setup


In order to calibrate these limits, we repeated the experiment over many trials, enabling different degrees of freedom to be assigned to the bot. Once the frequency ranges were set, the EEG signal was acquired for a period of 20 seconds, for which f max. was calculated and, based on the range in which it lies, classified into the pre-set limit values for the movement of the bot. This classification conditionally generates one of the pre-assigned bot control movements. These bot control movements are defined in the micro-controller program burned onto the device. The flag raised by the classification in one of the cases is thereby converted into a command and transmitted serially to the micro-controller connected to the bot. The micro-controller then issues a command for actuation of the motors via the motor driver connected to the motors mounted on the chassis, supported by the power supply.

The experiment was repeated numerous times (above 100 times for each case), and the accuracy of the system in classifying the incoming signal into one of the motion cases was found to be 95%.

Stimulus Frequency | f max.   | Action   | Classification
4 Hz               | 3.67 Hz  | Backward | Correct
6.5 Hz             | 6.12 Hz  | Forward  | Correct
8 Hz               | 7.36 Hz  | Left     | Correct
10.5 Hz            | 10.39 Hz | Right    | Correct
6.5 Hz             | 5.24 Hz  | Forward  | Wrong
6.5 Hz             | 6.17 Hz  | Forward  | Correct
4 Hz               | 2.34 Hz  | Backward | Wrong

Table.2. Experimental results


Fig.19. Real-time EEG signal acquired from headset

Fig.20. Total EEG signal for 20 seconds from headset


Fig.21. Frequency spectrum of acquired EEG signal

Fig.22. f max. estimation from frequency spectrum

Fig.23. Experimentation using electrodes and custom hardware
Chapter 6

Conclusion

This thesis presents how to record and process the raw EEG signal from the MindWave
MW001 in the MATLAB environment. For this purpose, Fourier analysis has been used as an
efficient means of spectral analysis. The first four chapters describe the theoretical
background, which is then applied in the experimentation quantified in the Results chapter.
Two different mental states were used to record and process two samples of the raw EEG
signal, and the observed differences are consistent with theory.

In summary, this work presents an approach to efficiently record and process the EEG signal
at low financial cost. The approach can be used for neurofeedback and for brain-computer
interface (BCI) applications, although the error of the device is still too high for clinical
use. In the Methods chapter we discussed why SSVEP-based BCI systems are the most practical
and least resource-intensive approach for a BCI used as a user interface to control an
electric-powered wheelchair. We reviewed several works on signal-processing algorithms, BCI
methods and surveys of BCI hardware systems used in research. No papers were found, to date,
reporting the use of commercially available BCI devices in prosthetic devices for assistive
technologies.

There is, however, evidence of ongoing development of low-cost BCIs for prosthesis control
in the work by Ribeiro et al., which supports the use of an SSVEP-based system as the simplest
technique for low-cost BCI devices. Overall, with the availability of the affordable EEG
acquisition systems mentioned in this thesis, further research into the development of
reliable and flexible yet affordable BCI systems for prosthesis-oriented applications can be
expected. This will benefit the further study and improvement of BCIs as assistive
technologies for severely disabled individuals.

With a high classification accuracy of 95%, this system can be interfaced with any platform
and converted into a stand-alone device to assist persons with motor disabilities.

Some of the critical advantages of developing such a system are:

1. Compatible, robust, and open to further development
2. Non-invasive
3. Low latency and high classification accuracy
4. Can run at low power using Li-ion batteries, and is highly cost-effective
5. Can be extended to wireless transmission using the EEG headset

Future Scope of work:

1. To improve the SNR of the signal acquired from the specially designed hardware and to
check for phase lock with the visual stimulus frequency.

2. To carry out a more comprehensive study of BCI acquisition from different lobes, for
different target applications and neurological systems, and to increase the simplicity of the
SSVEP stimulus while maintaining a high ITR and classification success rate.

3. To implement these algorithms directly on an FPGA or a DSP processor, in order to realize
a stand-alone device and convert it into a commercial product.

4. To further modify the thinkgear.dll-based algorithm for easier connection handling without
clearing the sequence of acquired values.

5. To replace the micro-controller and motor-driver combination with a dedicated IC serving
the same purpose, in order to create a lightweight and space-efficient design.

References
[1] Krucoff, Max O.; Rahimpour, S., 2016. “Enhancing Nervous System Recovery through
Neurobiologics, Neural Interface training, and Neurorehabilitation”. Neuroprosthetics, 10(554),
January, pp. 554.

[2] Levine, SP; et al., 2000. "A direct brain interface based on event related potentials". IEEE transactions
on rehabilitation engineering: a publication of the IEEE Engineering in Medicine and Biology Society,
8(2), pp.180–185.

[3] Polikov, Vadim S., Patrick A. Tresco, and William M. Reichert, 2005."Response of brain tissue to
chronically implanted neural electrodes". Journal of neuroscience methods, 148(1), pp.1–18.

[4] Gulati, Tanuj; Won, Seok Joon, 2015. "Robust Neuroprosthetic Control from the Stroke Perilesional
Cortex". The Journal of Neuroscience, 35(22), pp.8653–8661.

[5] Pei, X., 2011. "Decoding Vowels and Consonants in Spoken and Imagined Words Using
Electrocorticographic Signals in Humans". Journal Neural Engineering, 4(6), pp.217-227.

[6] Pfurtscheller, G.; et al., 2003. " 'Thought' – control of functional electrical stimulation to restore hand
grasp in a patient with tetraplegia". Neuroscience Letters, 351(1), pp.33–36.

[7] Niedermeyer E.; da Silva F.L., 2004. “Electroencephalography: Basic Principles, Clinical
Applications, and Related Fields.” Lippincott Williams & Wilkins, ISBN 0-7817-5126-8.

[8] Tatum, William O., 2014. “Handbook of EEG interpretation.” Demos Medical Publishing, pp. 155–
190, ISBN 9781617051807.

[9] J. R. Wolpaw and D. J. McFarland, “Control of a 2-D movement signal by a noninvasive brain-
computer interface in humans,” Proc. Nat. Acad. Sci. USA, vol. 101, no. 51, pp. 17849–17854, Dec.
2004.

[10] E. Donchin, K. M. Spencer, and R. Wijesinghe, “The mental prosthesis: Assessing the speed of a
P300-based brain-computer interface,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 8, no. 2, pp. 174–
179, Jun. 2000.

[11] A. A. Karim, T. Hinterberger, and J. Richter, “Neural Internet: Web surfing with brain potentials for
the completely paralyzed,” Neurorehabilitation. Neural Repair, vol. 20, no. 4, pp. 508–515, Dec. 2006.

[12] A. Nijholt et al., “Brain-computer interfaces for HCI and games,” in Proc. 26th Annu. CHI Conf.
Human Factors Comput. Syst., Florence, Italy, 2008, pp. 3925–3928.

[13] C. Escolano, J. M. Antelis, and J. Minguez, “A telepresence mobile robot controlled with a
noninvasive brain-computer interface,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 3, pp.
793–804, Jun. 2012.

[14] L. Bi, X. Fan, and Y. Liu, “EEG-based brain-controlled mobile robots: A survey”. IEEE Trans. Human
Mach. Syst., vol. 43, no. 2, pp. 161–176, Mar. 2013.

[15] D. Gohring, D. Latotzky, M. Wang, and R. Rojas, “Semi-autonomous car control using brain computer
interfaces,” in Advances in Intelligent Systems and Computing, vol. 194. Berlin, Germany: Springer-
Verlag, 2013, pp. 393–408.

[16] L. Bi, X. Fan, T. Teng, H. Ding, and Y. Liu, “Using a head-up display based steady state visual evoked
potentials brain-computer interface to control a simulated vehicle,” IEEE Trans. Intell. Transp. Syst.,
vol. 15, no. 3, pp. 959–966, Jun. 2014.

[17] H. Cecotti, “Spelling with non-invasive brain-computer interfaces-current and future trends,” J.
Physiol.—Paris, vol. 105, pp. 106–114, 2011.

[18] K. R. Muller, M. Tangermann, G. Dornhege, M. Krauledat, G. Curio, and B. Blankertz, “Machine
learning for real-time single-trial EEG-analysis: from brain-computer interfacing to mental state
monitoring,” J. Neurosci. Methods, vol. 167, no. 1, pp. 82–90, 2008.

[19] J. N. Mak, Y. Arbel, J. W. Minett, L. M. McCane, B. Yuksel, D. Ryan, D. Thompson, L. Bianchi, and
D. Erdogmus, “Optimizing the p300-based brain-computer interface: current status, limitations and
future directions,” J. Neural Eng., vol. 8, art. no. 025003, 2011.

[20] X. Gao, D. Xu, M. Cheng, and S. Gao, “A BCI-based environmental controller for the motion-
disabled,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 137–140, Jun. 2003.

[21] F. B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, “Steady-state visually evoked potentials:
Focus on essential paradigms and future perspectives,” Prog. Neurobiol., vol. 90, pp. 418–438, 2010.

[22] K. Shyu, Y. Chiu, P. Lee, J. Liang, and S. Peng, “Adaptive SSVEP-based BCI system with frequency
and pulse duty-cycle stimuli tuning design,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 21, no. 5, pp.
697–703, Sep. 2013.

[23] Y. Kimura, T. Tanaka, H. Higashi, and N. Morikawa, “SSVEP-based brain-computer interfaces using
FSK-modulated visual stimuli,” IEEE Trans. Biomed. Eng., vol. 60, no. 10, pp. 2831–2838, Oct. 2013.

[24] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E.
Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, “Brain-computer interface technology:
A review of the first international meeting,” IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 164–173,
Jun. 2000.

[25] Y. Zhang, P. Xu, T. Liu, J. Hu, R. Zhang, and D. Yao, “Multiple frequencies sequential coding for
SSVEP-based brain-computer interface,” PLoS One, vol. 7, no. 3, art. no. e29519, 2012.

[26] H. Hwang, D. Han, and C. Im, “A new dual-frequency stimulation method to increase the number of
visual stimuli for multi-class SSVEP based brain-computer interface (BCI),” Brain Res., vol. 1515, pp.
66–77, 2013.

[27] X. Chen, Z. Chen, S. Gao, and X. Gao, “Brain-computer interface based on intermodulation
frequency,” J. Neural Eng., vol. 10, art. no. 066009, 2013.

[28] C. Jia, X. Gao, B. Hong, and S. Gao, “Frequency and phase mixed coding in SSVEP-based brain-
computer interface,” IEEE Trans. Biomed. Eng., vol. 58, no. 1, pp. 200–206, 2011.

[29] Z. Lin, C. Zhang, W. Wu, and X. Gao, “Frequency recognition based on canonical correlation analysis
for SSVEP-based BCIs,” IEEE Trans. Biomed. Eng., vol. 54, no. 6, pp. 1172–1176, Jan. 2007.

[30] O. Friman, I. Volosyak, and A. Graser, “Multiple channel detection of steady-state visual evoked
potentials for brain-computer interfaces,” IEEE Trans. Biomed. Eng., vol. 54, no. 4, pp. 742–750, Apr.
2007.

[31] M. Cheng, X. Gao, S. Gao, and D. Xu, “Design and implementation of a brain-computer interface with
high transfer rates,” IEEE Trans. Biomed. Eng., vol. 49, no. 10, pp. 1181–1186, Oct. 2002.

[32] G. Bin, X. Gao, Z. Yan, B. Hong, and S. Gao, “An online multi-channel SSVEP-based brain-computer
interface using a canonical correlation analysis method,” J. Neural Eng., vol. 6, art. no. 046002, 2009.

[33] Y. Zhang, G. Zhou, J. Jing, M. Wang, X. Wang, and A. Cichocki, “L1 regularized multiway canonical
correlation analysis for SSVEP-based BCI,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 21, no. 6, pp.
887–896, Nov. 2013.

[34] R. Kus, A. Duszyk, P. Milanowski, M. Labecki, M. Bierzynska, Z. Radzikowska, M. Michalska, J.
Zygierewicz, P. Suffczynski, and P. J. Durka, “On the quantification of SSVEP frequency responses in
human EEG in realistic BCI conditions,” PLoS ONE, vol. 8, art. no. e77536, 2013.

[35] J. Fernandez-Vargas, H. U. Pfaff, F. B. Rodriguez, and P. Varona, “Assisted closed-loop optimization
of SSVEP-BCI efficiency,” Front. Neural Circuits, vol. 7, art. no. 27, 2013.

[36] I. Volosyak, “SSVEP-based Bremen-BCI interface-boosting information transfer rates,” J. Neural
Eng., vol. 8, art. no. 036020, 2011.

[37] J. Victor and J. Mast, “A new statistic for steady-state evoked potentials,” Electroencephalogr, Clin.
Neurophysiol., vol. 78, no. 5, pp.378–388, 1991.

[38] L. Scharf, “Statistical Signal Processing: Detection, Estimation, and Time Series Analysis.” New York:
Addison-Wesley, 1990.

[39] C. Davila, R. Srebro, and I. Ghaleb, “Optimal detection of visual evoked potentials,” IEEE Trans.
Biomed. Eng., vol. 45, no. 6, pp. 800–803, Jun. 1998.

[40] S. Tobimatsu, H. Tomoda, and M. Kato, “Normal variability of the amplitude and phase of steady-state
VEPs,” Electroencephalogr. Clin. Neurophysiol., vol. 100, no. 3, pp. 171–176, 1996.

[41] Mason, S.G., Bashashati, A., Fatourechi, M., Navarro, K.F., Birch, G.E., 2007. "A comprehensive
survey of brain interface technology designs". Annals of biomedical engineering, 35, pp.137–169.

[42] Garcia, G., 2006. “High frequency SSVEPs for BCI applications.” Springer Publications Inc.

[43] Wang, Y., Wang, R., Gao, X., Hong, B., Gao, S., 2006. "A practical VEP-based brain- computer
interface". IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14, pp.234–240.

[44] Mason, S.G., Birch, G.E., 2003. "A general framework for brain-computer interface design". IEEE
transactions on neural systems and rehabilitation engineering: a publication of the IEEE Engineering
in Medicine and Biology Society, 11, pp.70–85.

[45] Quitadamo, L., et al., 2008. "A UML model for the description of different brain-computer interface
systems". Engineering in Medicine and Biology Society. In: 30th Annual International Conference of
the IEEE Engineering in Medicine and Biology Society, EMBS 2008, pp.1363–1366.

[46] Beverina, F., Palmas, G., 2003. "User adaptive BCIs: SSVEP and P300 based interfaces". PsychNology
Journal, 1, pp.331–354.

[47] B. D. Seno, M. Matteucci, and L. T. Mainardi, “The utility metric: A novel method to assess the
overall performance of discrete brain-computer interfaces,” IEEE Trans. Rehabil. Eng., vol. 18, no. 1,
pp. 20–28, Feb. 2010.

[48] Wu, Z., Lai, Y., Xia, Y., 2008. "Stimulator selection in SSVEP based BCI". Medical engineering
physics, 30, pp.1079–1088.

[49] Rowley K., Sliney A., Pitt I., Murphy D., “Evaluating a Brain Computer Interface to Categorize
Human Emotional Response,” 10th IEEE International Conference on Advanced Learning
Technologies, pp. 276-278, 2010.

[50] Kołodziej M., Majkowski A., Rak R., “Linear discriminant analysis as a feature reduction technique of
EEG signal for brain computer interfaces,” Przegląd Elektrotechniczny, 88(3a), pp. 101-103, 2012.

[51] Rejer I., “EEG feature selection for BCI based on motor imaginary task,” Foundations of Computing
and Decision Sciences, 37(4), pp. 285-294, 2012.

[52] Vourvopoulos A., Liarokapis F, “Evaluation of commercial brain–computer interfaces in real and
virtual world environment: A pilot study,” Computers & Electrical Engineering, November 2013.

IMAGE REFERENCES:

[a] Wojciech SAŁABUN, Processing and spectral analysis of the raw EEG signal from the MindWave.
West Pomeranian University of Technology, Szczecin

[b] O. Friman, I. Volosyak, and A. Graser, “Multiple channel detection of steady-state visual evoked
potentials for brain-computer interfaces,” IEEE Trans. Biomed. Eng., vol. 54, no. 4, pp. 742–750, Apr. 2007.

[c] Psychtoolbox toolbox: https://www.psychtoolbox.org

[d] Arduino.Inc, Italy: https://www.arduino.cc/en/main/arduinoBoardUno

[e] Dual Full-Bridge Driver, STMicroelectronics: http://www.st.com/en/motor-drivers/l298.html
