Need a helping hand? Tell this robotic arm—with your mind—to grasp that thing you
need while your own two hands are busy
By Emily Waltz
Imagine commanding a robotic arm to perform a task while your two human hands stay
busy with a completely different job. Now imagine that you give that command just by
thinking it.
Researchers today announced they have successfully built such a device. The user can
be thinking about two tasks at once, and the robot can decipher which of those thoughts
is directed at itself, then perform that task. And what’s special about it (as if a mind-
controlled, multitasking, supernumerary limb isn’t cool enough by itself) is that using it
might actually improve the user’s own multitasking skills.
“Multitasking tends to reflect a general ability for switching attention. If we can make
people do that using brain-machine interface, we might be able to enhance human
capability,” says Shuichi Nishio, a principal researcher at Advanced
Telecommunications Research Institute International in Kyoto, Japan, who codeveloped
the technology with his colleague Christian Penaloza, a research scientist at the same
institute.
Nishio and Penaloza achieved the feat by developing algorithms to read the electrical
activity of the brain associated with different actions. When a person thinks about
performing some kind of physical task—say picking up a glass of water—neurons in
particular regions of the brain fire, generating a pattern of electrical activity that is
unique to that type of task. Thinking about a different type of task, like balancing a tray
of precarious dishes, generates a different pattern of electrical activity.
The brain activity associated with those tasks can be recorded using electrodes
noninvasively placed on the scalp. A trained algorithm then interprets the electrical
recordings, distinguishing the brain activity patterns linked to one task from those linked to
another. It then directs the robotic arm to move according to the user’s intent. Such
systems are generally known as brain-machine interfaces (BMIs).
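The decoding pipeline described here, band-power features extracted from multichannel EEG and fed to a trained classifier, can be sketched in miniature. This is a toy illustration with synthetic signals and a nearest-centroid classifier, not the authors' actual algorithm; the sampling rate, frequency bands, and task rhythms are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def bandpower(signal, fs, band):
    """Mean power of `signal` within a frequency band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def features(epoch, fs=250):
    """Alpha (8-13 Hz) and beta (13-30 Hz) band power for each channel."""
    return np.array([bandpower(ch, fs, band)
                     for ch in epoch
                     for band in [(8, 13), (13, 30)]])

def make_epoch(task, fs=250, n_ch=4, dur=2.0):
    """Synthetic 2-second epoch: each imagined task has its own rhythm."""
    t = np.arange(int(fs * dur)) / fs
    f = 10.0 if task == "balance" else 20.0   # alpha vs. beta rhythm
    return np.vstack([np.sin(2 * np.pi * f * t)
                      + 0.3 * rng.standard_normal(t.size)
                      for _ in range(n_ch)])

# "Training": average feature vector (centroid) per imagined task.
centroids = {task: np.mean([features(make_epoch(task)) for _ in range(20)],
                           axis=0)
             for task in ("balance", "grasp")}

def classify(epoch):
    """Assign a new epoch to the nearest task centroid."""
    f = features(epoch)
    return min(centroids, key=lambda task: np.linalg.norm(f - centroids[task]))

print(classify(make_epoch("grasp")))   # → grasp
```

A real system would face far noisier, overlapping brain signals, which is why separating an imagined robot command from ongoing manual activity is the hard part of the result described above.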
To test the system, Nishio and Penaloza recruited 15 healthy volunteers to have their
minds read while they multitasked. Wearing an electrode cap, each participant sat in a
chair and used both hands to balance a ball on a board while the computer recorded the
brain’s electrical activity.
Sitting in the same chair, this time with a connected robotic arm turned on, participants
visualized the robotic arm grasping a nearby bottle. The computer recorded their neural
activity, detected the intention to grasp, and executed the command.
Then the participants were asked to simultaneously perform both tasks: balancing the
ball on the board and mind-commanding the robotic arm. With the computer’s help, the
participants successfully performed both tasks about three-quarters of the time,
according to today’s report.
Some of the participants were much better at the multitasking portion of the experiment
than others. “People were clearly separated, and the good performers were able to
multitask 85 percent of the time,” and the poor performers could only multitask about 52
percent of the time, says Penaloza. Lower scores didn’t reflect the accuracy of the BMI
system, but rather the performer’s skill in switching attention from one task to another,
he says.
It was interesting how quickly the participants learned to simultaneously perform these
two tasks, says Nishio. Normally that would take many training sessions. He and
Penaloza believe that using brain-machine interface systems like this one may provide
just the right biofeedback to help people learn to multitask better. They are
continuing to study the phenomenon in the hope that it can be used therapeutically.
There’s a clear need to develop these technologies for people with disabilities, but the
utility of such systems for able-bodied people isn’t yet clear. Still, the cool factor
has researchers and at least one philosopher-artist brainstorming the question: If we
can have a third arm, how would we use it?
https://spectrum.ieee.org/the-human-os/biomedical/bionics/mindcontrolled-robotic-third-
arm-gives-new-meaning-to-multitasking
Researchers are trying to crack open the black box so AI can be deployed in health
care
By Eliza Strickland
The health care industry may seem the ideal place to deploy artificial intelligence
systems. Each medical test, doctor’s visit, and procedure is documented, and patient
records are increasingly stored in electronic formats. AI systems could digest that data
and draw conclusions about how to provide better and more cost-effective care.
Plenty of researchers are building such systems: Medical and computer science
journals are full of articles describing experimental AIs that can parse records, scan
images, and produce diagnoses and predictions about patients’ health. However, few—
if any—of these systems have made their way into hospitals and clinics.
So what’s the holdup? It’s not technical, says Shinjini Kundu, a medical researcher and
physician at the University of Pittsburgh School of Medicine. “The barrier is the trust
aspect,” she says. “You may have a technology that works, but how do you get humans
to use it and rely on it?”
Most medical AI systems operate as “black boxes” that take in data and spit out
answers. Doctors are understandably wary about basing treatments on reasoning they
don’t understand, so researchers are trying a variety of techniques to create systems
that show their work.
Paint Us a Picture
Kundu, who described her research at the United Nations’ recent AI for Good
conference, is working on AI that analyzes medical images and then explains what
it sees. Her system starts with a machine-learning component that examines images
such as MRI scans and discovers patterns of interest to doctors.
In Kundu’s most recent experiments, the AI analyzed knee MRIs and predicted which
knees would develop osteoarthritis within three years. Then, using a technique called
“generative modeling,” the AI created a new image—its version of an MRI scan showing
a knee that was guaranteed to develop that condition. “We enabled a black box
classifier to generate an image that demonstrates the patterns it’s seeing as it makes its
diagnosis,” Kundu explains.
The AI’s generated image revealed that it was basing its predictions on subtle changes
to the cartilage shown in the MRI scans—which human doctors hadn’t noticed. “That
was another powerful aspect of this work,” says Kundu. “It helped humans understand
what the early developmental process of arthritis might be.”
These techniques showed which parts of the image the AI relied on to make decisions.
Encouragingly, the structures that contributed most to the AI’s decisions were also
those that human experts judged important.
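The core idea, synthesizing an input the classifier is certain about and then inspecting what changed, can be illustrated with a toy counterfactual on a linear model. This is not Kundu's method or data; the logistic classifier, the eight stand-in "image" features, and every number below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(8)     # weights of a stand-in "image" classifier
x0 = np.zeros(8)               # a borderline scan: predicted p = 0.5

def prob(x):
    """Logistic probability that the scan develops the condition."""
    return 1 / (1 + np.exp(-(x @ w)))

# Gradient ascent on the predicted probability: push the input until the
# classifier is (nearly) certain, yielding a "guaranteed positive" input.
x = x0.copy()
for _ in range(2000):
    p = prob(x)
    x += 0.2 * p * (1 - p) * w   # gradient of p with respect to x

# The features that moved the most are the ones the model relies on;
# in Kundu's experiments, the analogous changes pointed at cartilage.
salient = np.argsort(-np.abs(x - x0))[:3]
print(prob(x) > 0.99, salient)
```

A real generative model produces an entire synthetic MRI rather than a feature vector, but the logic is the same: the difference between the borderline input and the "certain" one reveals what the black box is looking at.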
https://spectrum.ieee.org/biomedical/devices/making-medical-ai-trustworthy-and-
transparent
A physiological signal-based method for early mental-stress detection.
Reference: www.sciencedirect.com
ABSTRACT
The early detection of mental stress is critical for effective clinical treatment. Compared
with traditional approaches, the automatic methods reported in the literature offer much
faster diagnosis. Unfortunately, most of them focus on detection accuracy rather than on
predicting treatment efficacy, which limits their robustness and clinical suitability. In this
study, we propose a
comprehensive framework for the early detection of mental stress by analysing
variations in both electroencephalogram (EEG) and electrocardiogram (ECG) signals
from 22 male subjects (mean age: 22.54 ± 1.53 years). A key contribution of this work is
that the framework can also predict treatment efficacy, achieved by defining four stress
levels and building a model for each individual. The experimental results show that the
framework achieves an accuracy, a sensitivity, and a specificity of 79.54%, 81%, and 78%,
respectively. Moreover, the results indicate significant neurophysiological differences
between the stress and control (stress-free) conditions at the individual level.
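The three reported figures are standard confusion-matrix ratios. The counts below are hypothetical (they do not come from the paper) and are chosen only to land near the reported values:

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # stressed epochs correctly flagged
    specificity = tn / (tn + fp)            # stress-free epochs correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for 100 stressed and 100 stress-free test epochs:
acc, sens, spec = metrics(tp=81, fn=19, tn=78, fp=22)
print(acc, sens, spec)   # → 0.795 0.81 0.78
```

Reporting sensitivity and specificity separately matters here because a stress detector that simply labels everything "stress-free" could still score well on accuracy alone.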
A Micro-Power EEG Acquisition SoC With Integrated Feature Extraction
Processor for a Chronic Seizure Detection System.
By: Naveen Verma, Ali Shoeb, Jose Bohorquez, Joel Dawson, John Guttag,
Anantha Chandrakasan
Reference: ieeexplore.ieee.org
ABSTRACT
This paper presents a low-power SoC that performs EEG acquisition and feature
extraction required for continuous detection of seizure onset in epilepsy patients. The
SoC corresponds to one EEG channel, and, depending on the patient, up to 18
channels may be worn to detect seizures as part of a chronic treatment system. The
SoC integrates an instrumentation amplifier, ADC, and digital processor that streams
feature vectors to a central device where seizure detection is performed via a
machine-learning classifier. The instrumentation amplifier uses chopper stabilization in
a topology that achieves high input impedance and rejects large electrode offsets while
operating at 1 V; the ADC employs power gating for low energy per conversion while
using static biasing for comparator precision; and the EEG feature-extraction processor
employs low-power hardware whose parameters are determined through validation
against patient data. The integration of sensing and local processing lowers system power by
14× by reducing the rate of wireless EEG data transmission. Feature vectors are
derived at a rate of 0.5 Hz, and the complete one-channel SoC operates from a 1 V
supply, consuming 9 µJ per feature vector.
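The headline numbers imply an average power budget that is easy to check: 9 µJ per feature vector at 0.5 vectors per second works out to 4.5 µW of average power per channel, or about 81 µW for the full 18-channel montage. A sketch of the arithmetic (ignoring the radio and the central classifier, which the paper budgets separately):

```python
ENERGY_PER_FV_J = 9e-6    # energy per feature vector (from the paper)
FV_RATE_HZ = 0.5          # feature vectors derived per second
MAX_CHANNELS = 18         # channels in the largest montage

avg_power_per_channel_w = ENERGY_PER_FV_J * FV_RATE_HZ    # ≈ 4.5 µW
full_montage_w = avg_power_per_channel_w * MAX_CHANNELS   # ≈ 81 µW

print(avg_power_per_channel_w * 1e6, full_montage_w * 1e6)
```

Budgets at this scale are what make a chronically worn, battery-powered system plausible, and they explain the design choice of transmitting compact feature vectors instead of raw EEG.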