Mind-Controlled Robotic Third Arm Gives New Meaning to “Multitasking”

Need a helping hand? Tell this robotic arm—with your mind—to grasp that thing you
need while your own two hands are busy

By Emily Waltz

Imagine commanding a robotic arm to perform a task while your two human hands stay
busy with a completely different job. Now imagine that you give that command just by
thinking it.

Researchers today announced they have successfully built such a device. The user can
be thinking about two tasks at once, and the robot can decipher which of those thoughts
is directed at itself, then perform that task. And what’s special about it (as if a mind-
controlled, multitasking, supernumerary limb isn’t cool enough by itself) is that using it
might actually improve the user’s own multitasking skills.

“Multitasking tends to reflect a general ability for switching attention. If we can make
people do that using brain-machine interface, we might be able to enhance human
capability,” says Shuichi Nishio, a principal researcher at Advanced
Telecommunications Research Institute International in Kyoto, Japan, who codeveloped
the technology with his colleague Christian Penaloza, a research scientist at the same
institute.

Nishio and Penaloza achieved the feat by developing algorithms to read the electrical
activity of the brain associated with different actions. When a person thinks about
performing some kind of physical task—say picking up a glass of water—neurons in
particular regions of the brain fire, generating a pattern of electrical activity that is
unique to that type of task. Thinking about a different type of task, like balancing a tray
of precarious dishes, generates a different pattern of electrical activity.

The brain activity associated with those tasks can be recorded using electrodes
noninvasively placed on the scalp. A trained algorithm then interprets the electrical
recordings, distinguishing brain activity patterns linked to one task versus another. It
then instructs the robotic arm to move based on the user’s thoughts. Such systems are
generally known as brain-machine interfaces (BMIs).
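
To make that pipeline concrete, here is a minimal sketch of the kind of EEG task classifier it describes. None of this code comes from the researchers’ system; the sampling rate, frequency band, features, classifier, and the robot call at the end are all illustrative assumptions.

```python
# Minimal sketch of an EEG task classifier, for illustration only.
# Assumptions (not from the article): 250 Hz sampling, an 8-30 Hz band,
# log-variance features per channel, and a linear discriminant classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250            # sampling rate in Hz (assumed)
BAND = (8.0, 30.0)  # frequency band commonly used for motor imagery (assumed)

def band_log_variance(epochs):
    """epochs: array of shape (n_trials, n_channels, n_samples) of raw EEG.
    Returns one log-variance feature per channel for each trial."""
    b, a = butter(4, BAND, btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(filtered.var(axis=-1) + 1e-12)

# X_train: epochs recorded while the user thinks about each task
# y_train: 0 = "balance the ball", 1 = "grasp the bottle" (labels are illustrative)
# clf = LinearDiscriminantAnalysis().fit(band_log_variance(X_train), y_train)
# if clf.predict(band_log_variance(new_epoch[None]))[0] == 1:
#     robot_arm.grasp()  # hypothetical robot interface, not a real API
```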

To test the system, Nishio and Penaloza recruited 15 healthy volunteers to have their
minds read while they multitasked. Wearing an electrode cap, each participant sat in a
chair and used his or her two human hands to balance a ball on a board, while the
computer recorded their brain’s electrical activity.

Sitting in the same chair, this time with a connected robotic arm turned on, participants
visualized the robotic arm grasping a nearby bottle. The computer read the neural firing
in their brains, detected the intention to grasp the bottle, and directed the arm to carry
out the command.

Then the participants were asked to simultaneously perform both tasks: balancing the
ball on the board and mind-commanding the robotic arm. With the computer’s help, the
participants successfully performed both tasks about three-quarters of the time,
according to today’s report.

Some of the participants were much better at the multitasking portion of the experiment
than others. “People were clearly separated, and the good performers were able to
multitask 85 percent of the time,” and the poor performers could only multitask about 52
percent of the time, says Penaloza. Lower scores didn’t reflect the accuracy of the BMI
system, but rather the performer’s skill in switching attention from one task to the
other, he says.

It was interesting how quickly the participants learned to simultaneously perform these
two tasks, says Nishio. Normally that would take many training sessions. He and
Penaloza say they believe that using brain-machine interface systems like this one may
provide just the right biofeedback that helps people learn to multitask better. They are
continuing to study the phenomenon in the hope that it can be used therapeutically.

We’ve seen supernumerary limbs before, such as mind-controlled hand exoskeletons
for quadriplegic individuals, inertia-controlled dual robotic arms, pain-sensing
prosthetics, cyborg athletes, and even a music-mediated drumming arm.

Penaloza and Nishio say theirs is the first mind-controlled robot that can read a
multitasking mind. “Usually when you’re controlling something with BMI, the user really
needs to concentrate so they can do one single task,” says Penaloza. “In our case it’s
two completely different tasks, and that’s what makes it special.”

There’s a clear need to develop these technologies for people with disabilities, but the
utility of such systems for able-bodied people isn’t yet clear. Still, the cool factor
has researchers and at least one philosopher-artist brainstorming the question: If we
can have a third arm, how would we use it?

https://spectrum.ieee.org/the-human-os/biomedical/bionics/mindcontrolled-robotic-third-arm-gives-new-meaning-to-multitasking

Making Medical AI Trustworthy and Transparent

Researchers are trying to crack open the black box so AI can be deployed in health
care

By Eliza Strickland

The health care industry may seem the ideal place to deploy artificial intelligence
systems. Each medical test, doctor’s visit, and procedure is documented, and patient
records are increasingly stored in electronic formats. AI systems could digest that data
and draw conclusions about how to provide better and more cost-effective care.

Plenty of researchers are building such systems: Medical and computer science
journals are full of articles describing experimental AIs that can parse records, scan
images, and produce diagnoses and predictions about patients’ health. However, few—
if any—of these systems have made their way into hospitals and clinics.

So what’s the holdup? It’s not technical, says Shinjini Kundu, a medical researcher and
physician at the University of Pittsburgh School of Medicine. “The barrier is the trust
aspect,” she says. “You may have a technology that works, but how do you get humans
to use it and rely on it?”

Most medical AI systems operate as “black boxes” that take in data and spit out
answers. Doctors are understandably wary about basing treatments on reasoning they
don’t understand, so researchers are trying a variety of techniques to create systems
that show their work.

Paint Us a Picture

Kundu, who described her research at the United Nations’ recent AI for Good
conference, is working on AI that analyzes medical images and then explains what
it sees. Her system starts with a machine-learning component that examines images
such as MRI scans and discovers patterns of interest to doctors.

In Kundu’s most recent experiments, the AI analyzed knee MRIs and predicted which
knees would develop osteoarthritis within three years. Then, using a technique called
“generative modeling,” the AI created a new image—its version of an MRI scan showing
a knee that was guaranteed to develop that condition. “We enabled a black box
classifier to generate an image that demonstrates the patterns it’s seeing as it makes its
diagnosis,” Kundu explains.
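
The article does not spell out Kundu’s generative-modeling method, but one simple way to get a classifier to “paint a picture” of what it detects is gradient ascent on the input: start from a scan and nudge its pixels until a frozen model’s predicted risk is as high as possible. The sketch below shows only that generic idea, with placeholder shapes, and should not be read as her implementation.

```python
# Illustrative sketch: gradient ascent on the input image of a frozen
# classifier. This is a generic stand-in, not Kundu's actual method.
import torch

def image_that_maximizes_risk(model, seed_image, steps=200, lr=0.05):
    """model: frozen classifier mapping a (1, 1, H, W) scan to a risk score.
    Returns a synthetic image the model scores as high risk."""
    model.eval()
    for p in model.parameters():          # freeze the classifier's weights
        p.requires_grad_(False)
    img = seed_image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        risk = model(img)                 # predicted probability of disease
        (-risk).sum().backward()          # ascend on the predicted risk
        optimizer.step()
        img.data.clamp_(0.0, 1.0)         # keep pixels in a valid range
    return img.detach()
```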

Photos: Osteoarthritis Initiative (2); University of Pittsburgh School of Medicine
The Power to Predict: Human eyes can’t tell the difference between MRI scans of those
patients who won’t develop osteoarthritis in their knees within three years and those
who will. But an AI program found subtle differences in the patterns of cartilage, which
it showed to researchers.

The AI’s generated image revealed that it was basing its predictions on subtle changes
to the cartilage shown in the MRI scans—which human doctors hadn’t noticed. “That
was another powerful aspect of this work,” says Kundu. “It helped humans understand
what the early developmental process of arthritis might be.”

Now What Do You See?

Rima Arnaout, an assistant professor and practicing cardiologist at the University of
California, San Francisco, trained a neural network to classify echocardiograms, the
ultrasound scans crucial for diagnosing heart ailments. The first version of her
AI, described in the journal NPJ Digital Medicine in March, was more accurate than
human cardiologists at sorting tiny, low-resolution images by their angle of perspective
on the heart. The next version will use this information to identify the anatomical
structures in view and diagnose cardiac diseases and defects.

But such a diagnostic system isn’t likely to be used: “I’m never going to make a
diagnosis that doesn’t sit well with me, and say, ‘The computer made me do it,’ ”
Arnaout says. So she used two techniques to understand how her classifier was making
decisions. In occlusion experiments, she covered up parts of test images to see how it
changed the AI’s answers; with saliency mapping, she traced the neural network’s final
answers back to the original image to discover which pixels carried the most weight.

Both techniques showed which parts of the image the AI relied on to make decisions.
Encouragingly, the structures that contributed most to the AI’s decisions were also
those that human experts judged important.
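
The occlusion idea in particular is easy to sketch in a framework-agnostic way: slide a blank patch across the image and record how much the model’s confidence drops at each position. The code below illustrates that general technique and is not Arnaout’s code; the patch size, stride, and fill value are arbitrary choices.

```python
# Illustrative occlusion-sensitivity sketch (not the author's code): mask one
# patch of the image at a time and measure the drop in predicted probability.
import numpy as np

def occlusion_map(predict, image, target_class, patch=16, stride=8, fill=0.0):
    """predict: callable mapping a 2-D (H, W) image to class probabilities.
    Returns a heatmap where high values mark regions the model relies on."""
    h, w = image.shape
    baseline = predict(image)[target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill   # cover up one region
            heat[i, j] = baseline - predict(occluded)[target_class]
    return heat
```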

 Moving Beyond Correlation

At Microsoft Research in Redmond, Wash., principal researcher Rich Caruana has been
on a mission for decades to make machine-learning models that aren’t just intelligent
but also intelligible. His AI uses electronic health records from hospitals to make
predictions about patient outcomes. But he has found that even models that appear
highly accurate can hide serious flaws.

He cites his ongoing research using a data set of pneumonia patients. In one study, he
trained a machine-learning model to distinguish between high-risk patients, who should
be admitted to the hospital, and low-risk patients, who could safely stay home to
recuperate. The model found that people with heart disease were less likely to die of
pneumonia and confidently asserted that these patients were low risk.

Caruana explains that heart disease patients who are diagnosed with pneumonia have
better outcomes—not because they’re low risk but because they typically go to the
emergency room at the first sign of breathing problems and therefore get immediate
diagnosis and treatment. “The correlation the model found is true,” Caruana says, “but if
we used it to guide health care interventions, we’d actually be injuring—and possibly
killing—some patients.” Based on his troubling discoveries, he’s now working on
machine-learning models that clearly show the relationship between variables, letting
him judge whether the model is not only statistically accurate but also clinically useful.
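
The article doesn’t name the model family Caruana is building, but the general idea of exposing per-variable relationships can be illustrated with off-the-shelf tools, for example partial-dependence plots over a boosted-tree classifier. The features, data, and model below are invented purely for the sketch, not drawn from his research.

```python
# Illustrative sketch of inspecting per-variable relationships with
# partial-dependence plots. Data and features are synthetic stand-ins.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
# Synthetic patient table: age, systolic blood pressure, heart-disease flag.
X = np.column_stack([
    rng.normal(65, 10, 1000),   # age
    rng.normal(120, 15, 1000),  # systolic blood pressure
    rng.integers(0, 2, 1000),   # heart-disease flag
])
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 10, 1000) > 90).astype(int)

model = GradientBoostingClassifier().fit(X, y)
# Each curve shows how predicted risk varies with one variable, which is where
# a clinically implausible relationship (like the heart-disease example above)
# would stand out for human review.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2])
plt.show()
```
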
This article appears in the August 2018 print issue.

Image: Rima Arnaout
Heart Beats: An AI program classifies low-resolution images from echocardiograms
based on which parts of the heart are in view. A next-gen version will use this
anatomical understanding to make diagnoses.

https://spectrum.ieee.org/biomedical/devices/making-medical-ai-trustworthy-and-transparent

A physiological signal-based method for early mental-stress detection.

By: Likun Xia, Aamir Saeed Malik, Ahmad Rauf Subhani

Reference: www.sciencedirect.com

ABSTRACT

The early detection of mental stress is critical for efficient clinical treatment. As
compared with traditional approaches, the automatic methods presented in the literature
have shown significance and effectiveness in terms of diagnosis speed. Unfortunately,
the majority of them mainly focus on accuracy rather than predictions for treatment
efficacy. This may result in the development of methods that are less robust and
accurate, which is unsuitable for clinical purposes. In this study, we propose a
comprehensive framework for the early detection of mental stress by analysing
variations in both electroencephalogram (EEG) and electrocardiogram (ECG) signals
from 22 male subjects (mean age: 22.54 ± 1.53 years). The significant contribution of
this paper is that the presented framework is capable of performing predictions for
treatment efficacy, which is achieved by defining four stress levels and creating models
for the individual level. The experimental results indicate that the framework has
realized an accuracy, a sensitivity, and a specificity of 79.54%, 81%, and 78%,
respectively. Moreover, the results indicate significant neurophysiological differences
between the stress and control (stress-free) conditions at the individual level.
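
For readers less familiar with the reported metrics, the snippet below shows how accuracy, sensitivity, and specificity are computed from a confusion matrix. The counts are invented, chosen only so the resulting percentages roughly match the figures quoted in the abstract.

```python
# Illustrative only: how accuracy, sensitivity, and specificity are defined.
# The counts are made up to roughly reproduce the percentages reported above.
tp, fn = 81, 19   # stressed subjects correctly / incorrectly classified
tn, fp = 78, 22   # stress-free subjects correctly / incorrectly classified

accuracy    = (tp + tn) / (tp + tn + fp + fn)  # overall fraction correct
sensitivity = tp / (tp + fn)                   # true-positive rate (stress detected)
specificity = tn / (tn + fp)                   # true-negative rate (stress-free detected)

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```
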
A Micro-Power EEG Acquisition SoC With Integrated Feature Extraction
Processor for a Chronic Seizure Detection System.

By: Naveen Verma, Ali Shoeb, Jose Bohorquez, Joel Dawson, John Guttag,
Anantha Chandrakasan

Reference: ieeexplore.ieee.org

ABSTRACT

This paper presents a low-power SoC that performs EEG acquisition and feature
extraction required for continuous detection of seizure onset in epilepsy patients. The
SoC corresponds to one EEG channel, and, depending on the patient, up to 18
channels may be worn to detect seizures as part of a chronic treatment system. The
SoC integrates an instrumentation amplifier, ADC, and digital processor that streams
feature vectors to a central device where seizure detection is performed via a
machine-learning classifier. The instrumentation-amplifier uses chopper-stabilization in
a topology that achieves high input-impedance and rejects large electrode-offsets while
operating at 1 V; the ADC employs power-gating for low energy-per-conversion while
using static-biasing for comparator precision; the EEG feature extraction processor
employs low-power hardware whose parameters are determined through validation via
patient data. The integration of sensing and local processing lowers system power by
14× by reducing the rate of wireless EEG data transmission. Feature vectors are
derived at a rate of 0.5 Hz, and the complete one-channel SoC operates from a 1 V
supply, consuming 9 µJ per feature vector.
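
As a quick back-of-the-envelope check using only the figures quoted above, 9 µJ per feature vector at 0.5 feature vectors per second works out to an average of about 4.5 µW for feature extraction:

```python
# Back-of-the-envelope check using only the figures quoted in the abstract.
energy_per_vector_j = 9e-6   # 9 microjoules per feature vector
vector_rate_hz = 0.5         # one feature vector every 2 seconds

average_power_w = energy_per_vector_j * vector_rate_hz
print(f"{average_power_w * 1e6:.1f} microwatts average")   # prints 4.5
```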
