
Assistive robot operated via P300-based Brain Computer Interface

Filippo Arrichiello, Paolo Di Lillo, Daniele Di Vito, Gianluca Antonelli, Stefano Chiaverini

Authors are with the Department of Electrical and Information Engineering of the University of Cassino and Southern Lazio, Via G. Di Biasio 43, 03043 Cassino (FR), Italy. {f.arrichiello, pa.dilillo, d.divito, antonelli, chiaverini}@unicas.it

Abstract— In this paper we present an architecture for the operation of an assistive robot ultimately aimed at allowing users with severe motion disabilities to perform manipulation tasks that may help in daily-life operations. The robotic system, based on a lightweight robot manipulator, receives high-level commands from the user through a Brain-Computer Interface based on the P300 paradigm. The motion of the manipulator is controlled relying on a closed-loop inverse kinematic algorithm that simultaneously manages multiple set-based and equality-based tasks. The software architecture is developed relying on widely used frameworks to operate BCIs and robots (namely, BCI2000 for the operation of the BCI and ROS for the control of the manipulator), integrating control, perception and communication modules developed for the application at hand. Preliminary experiments have been conducted to show the potentialities of the developed architecture.

I. INTRODUCTION

Research and development of robotic assistive technologies has gained tremendous momentum in the last decade, due to several factors such as the maturity level reached by several technologies, the advances in robotics and AI, and the fact that more than 700 million persons have some kind of disability or handicap [18]. For many people with mobility impairments, essential and simple tasks, such as dressing or feeding, require the assistance of dedicated people; thus, the use of devices providing independent mobility can have a large impact on their quality of life [10].

In this perspective, different classes of robotic devices can be considered, such as lightweight robotic arms that may help in manipulation tasks, intelligent semi-autonomous powered wheelchairs that help in mobility tasks, or wheelchair-mounted robotic manipulators that help with both issues. From the user perspective, the operation mode of such systems may depend on the level of autonomy provided by/required of the robotic system; from the device perspective, this corresponds to different control modes that vary from shared to supervisory control. In shared mode, the user is involved in the control loop of the system by continuously generating high-frequency motion commands; such commands are then translated by the control software into low-level functions after applying all the safety policies. In supervisory mode, the user provides high-level low-frequency commands (e.g., to start/stop actions) and the system operates in complete autonomy; the control software must generate motion directives that realize the required action while taking into account safety, comfort and efficiency.

The operation modes of the robotic devices are strictly connected to the Human-Machine Interface (HMI) used to generate and communicate commands. Among the different HMIs, Brain-Computer Interfaces (BCIs) represent a relatively new technology that has recently attracted large attention in view of the fact that BCIs may be used in the absence of motion capability of the user [12], with applications in different areas of assistive technologies such as motor recovery, entertainment, communication and control [13], [17]. Indeed, BCIs have been recently proposed to drive wheelchairs [2], [3], to guide robots for telepresence [5], [11], and to control exoskeletons [7] and mobile robots [19], [8].

Most BCIs rely on non-invasive electroencephalogram (EEG) signals, i.e. the electrical brain activity recorded from electrodes placed on the scalp. By processing such signals, the BCIs may allow the generation of commands that can be used for the communication with a software interface. EEG-BCIs can be categorized based on the considered brain activity patterns [6], i.e.: Event-Related Desynchronization/Synchronization; Steady State Visual Evoked Potentials (SSVEP); the P300 component of the Event Related Potentials; Slow Cortical Potentials (SCPs).

In this work, similarly to [20], we focus on the use of an EEG-BCI to control a robotic manipulator to perform operations such as drinking or manipulating objects. However, here the BCI is operated on the basis of the P300 paradigm, i.e. a positive potential deflection in the ongoing brain activity at a latency of roughly 300 ms after the random occurrence of a desired target stimulus. This choice is motivated by the fact that P300-based BCIs are relatively easy to use for generating a control signal without complex training of the user, and have shown great potential to be used in several applications. From the robot motion control perspective, we do not rely on path-planning algorithms but use the closed-loop inverse kinematic approach presented in [15], which allows the coding of different kinds of high-level actions required by the user and the management of multiple tasks arranged in priorities and expressed as both set-based and equality-based constraints. The effectiveness of the proposed architecture has been validated through experimental tests with a non-invasive BCI used to operate a 7-DOF lightweight robot manipulator. The video attached to the paper shows the execution of a specific mission.

II. OVERALL SYSTEM ARCHITECTURE

The proposed system is composed of: a non-invasive BCI Emotiv Epoc+, a 14-channel wireless EEG-BCI; a 7-DOF lightweight robot manipulator Kinova Jaco2; and an RGB-D sensor for human face and object recognition, the
Microsoft Kinect One. As schematically represented in Fig. 1, the different components are integrated through the Robot Operating System (ROS) framework, where specific control, perception and communication modules have been developed for the application at hand. The BCI is operated via BCI2000, a general-purpose software system that allows P300 experiments. The P300 operation paradigm allows the user to select via the BCI an option among a set. Each choice performed by the user is associated with an action, either to navigate the BCI2000 Graphical User Interface (GUI) or to send messages to external devices. In the latter case BCI2000 opens a UDP/IP socket to send messages to a client process running on a Linux machine that runs the ROS framework. The client process interprets the received message according to a previously established convention and sends commands to the control module of the manipulator. The control module also collects messages from the perception module, which identifies and localizes objects or the user's face in the workspace. In particular, the Kinect's raw data are processed through the OpenCV and PCL (Point Cloud Library) libraries. Finally, the control module of the manipulator implements a set-based closed-loop inverse kinematics motion control algorithm that computes the corresponding desired joint velocities for the robotic manipulator.

[Block diagram: brain signals from the user reach BCI2000 on the Windows machine, whose planner sends control signals over a UDP/IP socket to the Linux machine, where the UDP-ROS bridge, the workspace tasks and the kinematic control module produce the desired joint velocities for the robot.]
Fig. 1. Scheme of the overall architecture used in the experiment
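The paper does not detail the bridge implementation; the following is a minimal sketch of the Linux-side UDP-ROS client, assuming a rospy node, an illustrative port (5000), a hypothetical /bci_command topic and a plain-text message convention. None of these specifics are from the paper.

```python
#!/usr/bin/env python
# Minimal sketch of the UDP-ROS bridge: it listens for BCI2000 messages on
# a UDP socket and republishes them as ROS messages. Port, topic name and
# message convention are illustrative assumptions, not the paper's actual ones.
import socket

import rospy
from std_msgs.msg import String


def main():
    rospy.init_node('udp_ros_bridge')
    pub = rospy.Publisher('/bci_command', String, queue_size=10)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 5000))  # port chosen for illustration
    sock.settimeout(0.1)          # allows periodic shutdown checks

    while not rospy.is_shutdown():
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue
        # e.g. b"OBJ:water_bottle;ACT:move_right" under the assumed convention
        pub.publish(String(data=data.decode('ascii', 'ignore').strip()))


if __name__ == '__main__':
    main()
```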
III. OPERATION OF THE EEG-BCI VIA P300 PARADIGM

The P300 potential is a component of the Event Related Potentials (ERPs), i.e. a fluctuation in the EEG generated by the electrophysiological response to a significant sensorial stimulus or event. In particular, the P300 is the largest ERP component, and it consists of a positive shift in the EEG signal approximately 300-400 ms after a task-relevant stimulus. The P300 potential can be generated during an oddball paradigm, by which the user is subject to a sequence of events (e.g. visual stimuli) that can be categorized into two classes, one less frequent than the other. The frequent events are named standard stimuli, while the infrequent events are named target stimuli. When the user distinguishes a target stimulus from a standard one, this generates the P300 peak about 300 ms after the stimulus onset.

The P300 potential has been used as the basis for a BCI system in many studies. The classical format presents the user with a matrix of characters whose rows and columns flash successively and randomly at a rapid rate. The user selects a character by focusing attention on it and counting how many times it flashes. The row or column that contains this character evokes a P300 response, whereas all the others do not. After a proper training, the computer can determine the desired row and column with the highest P300 amplitude, and thus the desired character.

The BCI considered in the proposed architecture is operated via a general-purpose software system named BCI2000 that, among its different features, allows BCI data acquisition, signal processing, stimulus presentation, and P300 experiments. In particular, for the application at hand, we want to allow a user to operate a robotic manipulator to achieve manipulation actions on some of the objects present in its workspace. Thus, we rely on the BCI2000 P300 functionalities to allow the user to generate commands through a P300-based Graphical User Interface (GUI). This GUI has been realized by structuring the array of flashing elements in a multi-layered structure; i.e., by selecting an element from a starting array, the GUI can both switch to a different layer (i.e. to a different array of elements) and send commands to the manipulator through callback functions that open a UDP/IP socket to communicate with a remote machine.

Fig. 2 shows an example of the structure of the developed GUI, where the user can select operations such as: select an object, pause/sleep the P300 GUI, move to the home level, command an action to the robot, stop the robot, etc. It can be noticed that the characters commonly used for the P300 paradigm have been replaced by flashing icons more intuitive for the user.

[Multi-layered GUI scheme with elements such as Obj1, Obj2, Sleep, Home, Action1, Action2, SubAction1, SubAction2, Back and Emergency Stop, arranged in object-selection and action-selection layers.]
Fig. 2. Scheme of the developed BCI2000 user interface.

Before using the developed GUI, the user should train the P300 software following a specific procedure that consists in selecting, via the oddball paradigm, a set of characters in a predefined order. The BCI2000 software uses a genetic algorithm for the training of a Stepwise Linear Discriminant Analysis (SWLDA) binary classifier used in the operation mode, and it generates a specific profile file for the user.

The used BCI is an Epoc+ produced by Emotiv, a low-cost non-invasive BCI offering high-resolution (14 bits, 1 LSB = 0.51 µV) multi-channel signals and providing access to dense-array, high-quality, raw EEG data. The 14 electrodes of the neuroheadset (which generate signals with a bandwidth of 0.2-43 Hz) are located according to the 10-20 international system.
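To make the selection mechanism concrete, the sketch below illustrates the standard P300 decision rule: the EEG epochs time-locked to each flashing stimulus are averaged and scored with a linear classifier, and the stimulus with the highest score is taken as the attended one. It is a generic illustration with synthetic data and assumed dimensions (14 channels, roughly 0.8 s epochs), not the actual BCI2000/SWLDA pipeline.

```python
# Generic sketch of P300 target selection: average the epochs recorded for
# each stimulus (row/column), score the averages with linear classifier
# weights w, and pick the stimulus with the highest score. Dimensions and
# data are illustrative assumptions (14 channels, ~0.8 s epochs at 128 Hz).
import numpy as np

N_CH, N_SAMP = 14, 102  # 14 channels, about 0.8 s at 128 Hz


def select_stimulus(epochs_per_stim, w, b=0.0):
    """epochs_per_stim: dict stim_id -> (n_repetitions, N_CH, N_SAMP) array.
    w: flattened weight vector learned offline (e.g. by SWLDA in BCI2000)."""
    scores = {}
    for stim, epochs in epochs_per_stim.items():
        avg = epochs.mean(axis=0)  # averaging over repetitions boosts the P300 SNR
        scores[stim] = float(w @ avg.ravel()) + b
    return max(scores, key=scores.get)


# Usage with synthetic data: 6 rows, 10 repetitions each.
rng = np.random.default_rng(0)
epochs = {r: rng.standard_normal((10, N_CH, N_SAMP)) for r in range(6)}
w = rng.standard_normal(N_CH * N_SAMP)
print("selected row:", select_stimulus(epochs, w))
```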
IV. MOTION CONTROL OF THE ROBOTIC MANIPULATOR

The robot manipulator receives as input from the BCI high-level commands containing information about the selected object and the kind of action to perform. The set of possible actions is predefined, but their execution depends on the information collected on-line about the workspace, e.g. the placement of objects, the position of the user's face, and the possible presence of obstacles.

Each action is coded via a set of elementary tasks, each described by a suitable task function of the system state and arranged in priority order as described in [1]. The reference system velocity is computed by inverting the task function at a kinematic level and by projecting the contribution of each task into the null space of the higher-priority ones, so as to remove the velocity components that would conflict with them.

For a general robotic system with n Degrees of Freedom, the state is described by the joint values q = [q1, q2, . . . , qn]^T ∈ R^n. Let us consider a generic m-dimensional task function σ(q) ∈ R^m. The following differential relationship holds:

σ̇(q) = J(q)q̇,

where J(q) ∈ R^(m×n) is the task Jacobian matrix and q̇ is the system velocity. The reference velocity that brings the task value σ to a desired σd can be computed as

q̇ = J†(σ̇d + Kσ̃),   (1)

where J† is the pseudo-inverse of the task Jacobian, K is a positive-definite matrix of gains, and σ̃ = σd − σ is the task error. If the system is redundant with respect to the task dimension (n > m), it is possible to fulfill multiple tasks simultaneously. Defining a priority among the h tasks composing an action, the reference system velocity can be computed as

q̇ = q̇1 + N1 q̇2 + · · · + N1,h−1 q̇h,   (2)

where q̇i is the reference velocity that fulfills the i-th task and N1,i is the projector onto the null space of the augmented Jacobian

J1,i = [J1^T  J2^T  . . .  Ji^T]^T.   (3)
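A minimal numerical sketch of the prioritized inverse kinematics of (1)-(3) is reported below, using numpy pseudo-inverses; the Jacobians, gains and task errors in the usage example are random placeholders, since the actual task functions depend on the robot model.

```python
# Sketch of the task-priority inverse kinematics of Eqs. (1)-(3): each task
# contributes a velocity that is projected into the null space of all the
# higher-priority tasks. Inputs in the example are random placeholders.
import numpy as np


def clik_step(tasks, n_joints):
    """tasks: list of (J, sigma_dot_d, K, sigma_err) tuples, ordered by
    decreasing priority. Returns the reference joint velocity of Eq. (2)."""
    q_dot = np.zeros(n_joints)
    J_aug = np.empty((0, n_joints))  # augmented Jacobian of Eq. (3)
    N = np.eye(n_joints)             # null-space projector N_{1,i}
    for J, sigma_dot_d, K, sigma_err in tasks:
        q_dot_i = np.linalg.pinv(J) @ (sigma_dot_d + K @ sigma_err)  # Eq. (1)
        q_dot += N @ q_dot_i                                         # Eq. (2)
        J_aug = np.vstack((J_aug, J))
        N = np.eye(n_joints) - np.linalg.pinv(J_aug) @ J_aug
    return q_dot


# Usage: a 3-dimensional position task plus a 1-dimensional task on a
# 7-DOF arm, with random stand-ins for Jacobians and task errors.
rng = np.random.default_rng(1)
tasks = [
    (rng.standard_normal((3, 7)), np.zeros(3), 2.0 * np.eye(3), rng.standard_normal(3)),
    (rng.standard_normal((1, 7)), np.zeros(1), np.eye(1), rng.standard_normal(1)),
]
print(clik_step(tasks, n_joints=7))
```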
This framework has been recently extended in [15], [16] in order to also handle set-based tasks, i.e. monodimensional tasks requiring their value to lie in a set of values D rather than to assume a specific one. Classic set-based tasks for a robotic manipulator are mechanical joint limits, obstacle avoidance, and arm manipulability. The considered method allows the simultaneous control of a hierarchy composed of both equality-based and set-based tasks. In particular, while the equality-based tasks are always active, the set-based tasks can be activated or deactivated depending on the operational conditions. For each set-based task it is indeed necessary to set different reference values: physical thresholds σmax (σmin), activation thresholds σmax − ε (σmin + ε), and safety values σs,u with σmax − ε < σs,u < σmax (σs,l with σmin < σs,l < σmin + ε). With reference to Fig. 3, as long as the threshold for a specific set-based task is satisfied (i.e. σmin + ε < σ < σmax − ε), the task is removed from the hierarchy and the solution that fulfills the other tasks is computed. When its threshold is violated (i.e. σ < σmin + ε or σ > σmax − ε), the task is re-inserted in the hierarchy and it is transformed into an equality-based task. The desired value of the task function is set as

σd = σs,u   if σmax − ε < σ < σmax
σd = σs,l   if σmin < σ < σmin + ε       (4)

It is important that σmax − ε < σs,u < σmax (σmin < σs,l < σmin + ε), in order to avoid undesirable system behaviors such as chattering due to intermittent activation/deactivation of tasks caused by quantization errors or sensor noise in the task value computation.

Fig. 3. Activation and physical thresholds of a set-based task
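The activation logic of Fig. 3 and (4) reduces to a simple hysteresis test; the function below is an illustrative implementation, with the epsilon and safety values in the usage line chosen only for the example.

```python
# Sketch of the set-based activation test with hysteresis (Fig. 3, Eq. (4)).
# Returns whether the task must be active and, if so, its equality target.
def set_based_target(sigma, s_min, s_max, eps, s_safe_low, s_safe_up):
    if s_min + eps < sigma < s_max - eps:
        return False, None      # inside the safe set: task deactivated
    if sigma >= s_max - eps:
        return True, s_safe_up  # at the upper threshold: push back down
    return True, s_safe_low     # at the lower threshold: push back up


# Usage, e.g. for the fourth-joint limits of Sec. VI (0.7-5.5 rad); eps and
# the safety values are illustrative.
print(set_based_target(5.48, 0.7, 5.5, 0.1, 0.85, 5.35))  # -> (True, 5.35)
```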
It is not always necessary to compute the solution of the hierarchy containing all the active set-based tasks, because the desired system velocity computed by applying (2) to a hierarchy that does not contain a specific set-based task could anyway bring such a task to its valid set of values. In that case, the set-based task can be removed from the active task hierarchy on the basis of the following algorithm.

A. Set-based activation/deactivation algorithm

The algorithm can be divided into four main steps:
1) Create the active task hierarchy
2) Compute the solutions
3) Compute the projections of the solutions
4) Choose the solution

Starting from a hierarchy H of mixed set-based and equality-based tasks, in the first step the hierarchy A containing all the active tasks is created; that means A is composed of all the equality-based tasks and all the set-based tasks that exceed the activation thresholds.

Given that we cannot know a priori which solution would make all the set-based tasks stay in their specific sets of values, it is necessary to build a solution tree by computing all the solutions given by (2) on all the possible task hierarchies obtained by inserting and removing all the active set-based tasks; i.e., for an active hierarchy composed of na set-based tasks, it is necessary to compute 2^na solutions and store them in a set S.

Then we have to select, and store in a set P, among all the solutions in S, the ones that satisfy all the active set-based tasks while also fulfilling all the equality-based tasks. It is possible to check this by projecting each solution in S into all the active set-based task spaces: a solution q̇j fulfills a set-based task σk at its upper (lower) threshold if Jk q̇j < 0 (Jk q̇j > 0).
Finally, one solution among the ones in P has to be chosen as the desired system velocity q̇d. It is important to notice that the set P is never empty, because it always contains at least the solution that takes into account all the set-based tasks. If P contains more than one solution, the highest-norm one is chosen, being the least conservative in terms of system velocity.
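The four steps can be condensed into a short routine: enumerate the 2^na candidate hierarchies, solve each with the prioritized inverse kinematics of (2), keep the solutions that pass the projection check, and return the highest-norm one. The sketch below reuses the clik_step function sketched in Section IV; the sign test is a simplified stand-in for the projection check described above.

```python
# Sketch of the set-based activation/deactivation algorithm (Sec. IV-A):
# enumerate all subsets of the active set-based tasks, solve each hierarchy
# with clik_step, discard solutions that push an active set-based task
# further out of its set, and return the highest-norm valid solution.
from itertools import combinations

import numpy as np


def choose_solution(eq_tasks, sb_tasks, n_joints):
    """eq_tasks, sb_tasks: prioritized task tuples as in clik_step;
    sb_tasks are the currently active set-based tasks (at a threshold)."""
    best = None
    for r in range(len(sb_tasks) + 1):
        for subset in combinations(range(len(sb_tasks)), r):  # 2^na cases
            hierarchy = [sb_tasks[i] for i in subset] + eq_tasks
            q_dot = clik_step(hierarchy, n_joints)
            # projection check (sign convention simplified: upper thresholds)
            ok = all(float(J @ q_dot) <= 0.0 for J, _, _, _ in sb_tasks)
            if ok and (best is None or np.linalg.norm(q_dot) > np.linalg.norm(best)):
                best = q_dot
    return best
```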
V. ROBOTIC PERCEPTION SOFTWARE

The robotic platform has to be capable of interacting with the environment and with the user; thus, a perception system is needed to make the robot aware of the position of the user and of the possible objects. Objects in the environment are labelled with markers, and their detection and tracking are performed by resorting to the ArUco library [9], a well-known OpenCV module specifically designed for this kind of operation. Object positions are computed in the Kinect reference frame and then transformed into the manipulator reference frame by means of a transformation matrix computed with a preliminary calibration.
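The marker pipeline can be sketched with the OpenCV ArUco bindings contemporary with the paper; the dictionary, camera intrinsics, marker size and the calibration matrix T_arm_cam below are placeholders for the actual calibrated values.

```python
# Sketch of marker-based object localization with OpenCV/ArUco: detect the
# markers, estimate their poses in the camera frame, and map the positions
# into the manipulator frame. Intrinsics, marker size, dictionary and
# T_arm_cam are illustrative placeholders (legacy cv2.aruco API).
import cv2
import numpy as np

camera_matrix = np.array([[525.0, 0.0, 320.0],
                          [0.0, 525.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
T_arm_cam = np.eye(4)  # stand-in for the preliminary calibration result

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)


def locate_objects(bgr_image, marker_size=0.05):
    corners, ids, _ = cv2.aruco.detectMarkers(bgr_image, aruco_dict)
    objects = {}
    if ids is None:
        return objects
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_size, camera_matrix, dist_coeffs)[:2]
    for marker_id, t in zip(ids.flatten(), tvecs):
        p_cam = np.append(t.flatten(), 1.0)            # homogeneous point
        objects[int(marker_id)] = (T_arm_cam @ p_cam)[:3]
    return objects
```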
For the case study presented in the following section, we are also interested in detecting and tracking the mouth of the BCI's user. Such an operation has been divided into two steps: the first one consists in detecting the mouth position in the 2D plane of the RGB image taken from the Kinect sensor, the second in estimating its distance from the point cloud. First of all, the image in Full HD resolution is acquired from the sensor and then downsampled, in order to reduce the computational load and to make the algorithm more suitable for a real-time application. Then a Viola-Jones algorithm is applied to recognize the face in the scene, specifically using the Haar features [4], [14]. Among all the faces detected within the image, only the closest one is selected. The face image is then split into two parts, and only the lower one is taken into account for the following computations. The Haar features are once again applied in order to find the coordinates of the mouth in the image frame.
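This two-stage detection can be sketched with OpenCV's Haar cascades as below; the cascade file names and the downsampling factor are assumptions (OpenCV ships a frontal-face cascade, while mouth cascades are typically obtained from third-party distributions).

```python
# Sketch of the 2D mouth detection: downsample the Full HD frame, detect
# the closest (largest) face with a Haar cascade, then search the lower
# half of the face for the mouth. Cascade paths and scale are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
mouth_cascade = cv2.CascadeClassifier('haarcascade_mouth.xml')  # third-party


def detect_mouth(bgr_full_hd, scale=0.5):
    small = cv2.resize(bgr_full_hd, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    # the largest bounding box is assumed to be the closest face
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    lower = gray[y + h // 2:y + h, x:x + w]  # lower half of the face
    mouths = mouth_cascade.detectMultiScale(lower, 1.1, 11)
    if len(mouths) == 0:
        return None
    mx, my, mw, mh = mouths[0]
    # mouth center mapped back to full-resolution image coordinates
    cx, cy = x + mx + mw / 2.0, y + h // 2 + my + mh / 2.0
    return int(cx / scale), int(cy / scale)
```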
The second step is the computation of the 3D coordinates of the center of the mouth. The Full HD point cloud is acquired from the Kinect and then filtered by a voxel grid filter in order to reduce the number of points to be processed. The points belonging to the selected area of the image are extracted, and the mean of their x, y and z coordinates gives the position of the center of the mouth.
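The 3D step can be sketched as follows; a small numpy voxel filter stands in for the PCL VoxelGrid filter of the actual implementation, and the cloud is assumed to be an organized HxWx3 array registered to the RGB image.

```python
# Sketch of the 3D mouth localization: voxel-downsample the cloud (numpy
# stand-in for PCL's VoxelGrid) and average the points falling in the
# detected mouth region. cloud: organized HxWx3 array in meters.
import numpy as np


def voxel_filter(points, leaf=0.01):
    """Keep one representative point per leaf x leaf x leaf voxel."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]


def mouth_center_3d(cloud, cx, cy, half_win=10):
    region = cloud[cy - half_win:cy + half_win, cx - half_win:cx + half_win]
    pts = region.reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1)]  # drop invalid depth readings
    if pts.size == 0:
        return None
    return voxel_filter(pts).mean(axis=0)    # (x, y, z) of the mouth center
```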
VI. EXPERIMENTAL CASE STUDY

In the considered experimental case study, we want to allow a user to command the robot through the BCI to move objects to preselected positions or to bring a bottle to his mouth. To this purpose, a specific GUI has been developed in the BCI2000 framework according to the scheme in Fig. 2, which results navigable via the P300 BCI paradigm. The GUI is composed of the four different layers shown in Fig. 4. Referring to this figure, the first selection (image 1) represents the object selection layer, from where the user can choose the object to manipulate, in this case a bottle of coke and one of water. Furthermore, there is the possibility to pause the interface program execution (represented by the "pause" icon) and to send a stop signal to the manipulator in case of emergency (represented by the red cross). The second selection (image 2) represents the action selection layer: after the object choice the user can decide which action to perform, i.e. whether to drink or to move the chosen bottle. Even in this case there is the possibility to pause and to return to the bottle choice (represented by the round arrow). If the user decides to drink, the interface switches directly to the fourth selection (image 4) and pauses itself. In this phase the user can decide to resume the interface program (represented by the "play" icon), to send a stop signal to the robot manipulator, to return to the bottle choice, or to go back to the previous selection (represented by the "-1" icon). Instead, if the user decides to take and move the bottle, the interface switches to the third selection (image 3), which represents the sub-action selection. In this phase the user can decide to move the bottle to the left or to the right, in preassigned positions. After the choice of the location, the interface advances to the fourth selection. Once object and action are selected, the GUI sends a message to the robot control with the details of the choice performed by the user.

Fig. 4. Different choices of the BCI user interface

The results of the two kinds of performed operations are described in the following.

A. "Move" operation

For the first experiment the user has been asked to choose to move the water bottle to the right. The high-level command built by the BCI user interface is sent to the manipulator, which autonomously fulfills the operation. The chosen task hierarchy for the operation (a possible data encoding is sketched after the list) is:

1) Fourth joint mechanical limit: a maximum limit of 5.5 rad and a minimum limit of 0.7 rad have been set for the fourth joint of the manipulator, in order to avoid that it hits its own structure.
2) Obstacle avoidance: as the operator chooses a bottle, the other one automatically becomes an obstacle that the end-effector of the manipulator needs to avoid. A minimum threshold of 25 cm on the distance between the end-effector and the obstacle has been set.
3) End-effector position and orientation: a sequence of target waypoints for the end-effector position and orientation has been chosen in order to make the manipulator effectively grasp the selected bottle and move it to a specific position.

Fig. 6 shows the end-effector position and orientation error during the entire operation. Fig. 7 shows the set-based task values, and it can be seen that all the thresholds for all the set-based tasks are respected. Fig. 5 shows a sequence of snapshots of a performed moving mission.
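For illustration, such a mixed hierarchy could be handed to the controller of Section IV as plain data; the limits below mirror the ones listed above, while the schema itself is an assumed convention rather than the paper's actual interface.

```python
# Illustrative encoding of the "Move" task hierarchy (priorities top-down).
# Numerical thresholds are those of Sec. VI-A; the schema is an assumption.
MOVE_HIERARCHY = [
    {"task": "joint_limit", "joint": 4, "type": "set-based",
     "min": 0.7, "max": 5.5},                  # rad
    {"task": "obstacle_avoidance", "type": "set-based",
     "min_distance": 0.25},                    # m, from the non-selected bottle
    {"task": "ee_pose", "type": "equality",
     "waypoints": ["approach_bottle", "grasp", "place_right"]},
]
```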
Fig. 5. Frames of the "Move" operation execution

Fig. 6. "Move" operation: a) End-effector position error on x-axis (blue), y-axis (yellow) and z-axis (red) during the operation; b) End-effector orientation error on x-axis (blue), y-axis (yellow) and z-axis (red) during the operation

Fig. 7. "Move" operation: a) Fourth joint position (black) and limits (blue) over time; b) Distance from the obstacle (blue) and minimum distance (black) over time

B. "Drink" operation

For the second experiment the user has been asked to choose to drink from the water bottle. The task hierarchy is:

1) Fourth joint limit: same as for the first experiment.
2) Second joint limit: a maximum limit of 5.1 rad and a minimum limit of 1.9 rad have been chosen for the second joint position, in order to avoid a collision between the "elbow" of the manipulator and the table on which the objects are placed.
3) Obstacle avoidance: same as for the first experiment.
4) Bottle top position and orientation: in this case it is necessary to control the position and the orientation of the cap of the grasped bottle rather than the end-effector ones. Similarly to the first experiment, a proper sequence of target waypoints and orientations has been assigned in order to fulfill the operation: grasp the bottle, get it close to the operator's mouth, make him drink, and reposition the bottle on the table.

Fig. 9 and Fig. 10 show the results of the experiment. The bottle cap follows the desired positions and orientations with small errors, while respecting all the thresholds for all the set-based tasks, effectively fulfilling the operation. Fig. 8 shows a sequence of snapshots of a performed drinking mission.

Fig. 8. Frames of the "Drink" operation execution

Fig. 9. "Drink" operation: a) Bottle top position error on x-axis (blue), y-axis (yellow) and z-axis (red) during the operation; b) Bottle top orientation error on x-axis (blue), y-axis (yellow) and z-axis (red) during the operation

Fig. 10. "Drink" operation: a) Fourth joint position (black) and limits (blue) over time; b) Second joint position (black) and limits (blue) over time; c) Distance from the obstacle (blue) and minimum distance (black) over time

VII. CONCLUSIONS

This paper presents an architecture for an assistive robotic system aimed at helping users with severe motion disabilities in daily-life operations. The proposed system relies on a BCI based on the P300 paradigm for the high-level command detection, a Kinect One sensor for the environment perception, and a Kinova Jaco2 lightweight robot manipulator for performing the manipulation tasks. Details of the software modules and of the specific motion control algorithm applied for the application at hand have been described, and an experimental case study involving two different kinds of operations has been reported to prove the effectiveness of the developed system.

Further efforts will mainly concern the improvement of the perception system, the user interface, and the motion/interaction control of the robot. More in detail, we want to substitute the marker-based object detection algorithm with a detection algorithm based on the objects' geometrical shapes. Moreover, we want to make the BCI GUI capable of dynamically changing the structure of the layers to adapt to the environment scene, and to replace object icons with images dynamically taken from the Kinect. For the robot control, future activity will focus on more sophisticated obstacle avoidance algorithms to add user-safety levels, and we will consider more suitable control algorithms for the interaction with the environment and for collision detection.

ACKNOWLEDGMENTS

The Authors want to thank Alessandro Bria, Gianfranco Miele, Mario Fresilli and Gianluca Mangiapelo for the provided support.

REFERENCES

[1] G. Antonelli, F. Arrichiello, and S. Chiaverini. The null-space-based behavioral control for autonomous robotic systems. Intelligent Service Robotics, 1(1):27–39, 2008.
[2] L. Bi, X. Fan, and Y. Liu. EEG-based brain-controlled mobile robots: a survey. IEEE Transactions on Human-Machine Systems, 43(2):161–176, 2013.
[3] T. Carlson and J. Millan. Brain-controlled wheelchairs: a robotic architecture. IEEE Robotics and Automation Magazine, 20(1):65–73, 2013.
[4] M. Castrillón, O. Déniz, D. Hernández, and J. Lorenzo. A comparison of face and facial feature detectors based on the Viola–Jones general object detection framework. Machine Vision and Applications, 22(3):481–494, 2011.
[5] C. Escolano, J. Antelis, and J. Minguez. A telepresence mobile robot controlled with a noninvasive brain–computer interface. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(3):793–804, 2012.
[6] R. Fazel-Rezai, B. Allison, C. Guger, E. Sellers, S. Kleih, and A. Kübler. P300 brain computer interface: current challenges and emerging trends. Frontiers in Neuroengineering, 5:14, 2012.
[7] A. Frisoli, C. Loconsole, D. Leonardis, F. Banno, M. Barsotti, C. Chisari, and M. Bergamasco. A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1169–1179, 2012.
[8] V. Gandhi, G. Prasad, D. Coyle, L. Behera, and T. McGinnity. EEG-based mobile robot control through an adaptive brain–robot interface. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(9):1278–1285, 2014.
[9] S. Garrido-Jurado, R. Muñoz-Salinas, F.J. Madrid-Cuevas, and M.J. Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280–2292, 2014.
[10] H. Hoenig, D. Taylor, and F. Sloan. Does assistive technology substitute for personal assistance among the disabled elderly? American Journal of Public Health, 93(2):330–337, 2003.
[11] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. Millan. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proceedings of the IEEE, 103(6):969–982, 2015.
[12] D. McFarland and J. Wolpaw. Brain-computer interface operation of robotic and prosthetic devices. Computer, 41(10):52–56, 2008.
[13] J. Millán, R. Rupp, G. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, and R. Leeb. Combining brain–computer interfaces and assistive technologies: state-of-the-art and challenges. Frontiers in Neuroscience, 4:161, 2010.
[14] V.J. Mistry and M.M. Goyani. A literature survey on facial expression recognition using global features. International Journal of Engineering and Advanced Technology, 2(4):653–657, 2013.
[15] S. Moe, G. Antonelli, A. Teel, K. Pettersen, and J. Schrimpf. Set-based tasks within the singularity-robust multiple task-priority inverse kinematics framework: General formulation, stability analysis and experimental results. Frontiers in Robotics and AI, 3:16, 2016.
[16] S. Moe, A. Teel, G. Antonelli, and K. Pettersen. Stability analysis for set-based control within the singularity-robust multiple task-priority inverse kinematics framework. In 54th IEEE Conference on Decision and Control and 8th European Control Conference, pages 171–178, Osaka, Japan, December 2015.
[17] C. Moritz, P. Ruther, S. Goering, A. Stett, T. Ball, W. Burgard, E. Chudler, and R. Rao. New perspectives on neuroengineering and neurotechnologies: NSF-DFG workshop report. IEEE Transactions on Biomedical Engineering, 63(7):1354–1367, 2016.
[18] United Nations and A. Byrnes. From Exclusion to Equality: Realizing the Rights of Persons with Disabilities: Handbook for Parliamentarians on the Convention on the Rights of Persons with Disabilities and Its Optional Protocol. United Nations, Office of the High Commissioner for Human Rights, 2007.
[19] H. Riechmann, A. Finke, and H. Ritter. Using a cVEP-based brain-computer interface to control a virtual agent. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 24(6):692–699, 2016.
[20] S. Schröer, I. Killmann, B. Frank, M. Völker, L. Fiederer, T. Ball, and W. Burgard. An autonomous robotic assistant for drinking. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 6482–6487, 2015.