
What If Big Tech Could Read Your Mind?
Ever since his mid-30s, Greg had lived in a nursing home. An
assault six years earlier left him minimally conscious, unable to
talk or eat. Two years of rehab had done little to help him. Most
people in Greg's condition would have remained non-verbal and
siloed off from the world for the rest of their lives. But at age 38,
Greg received a brain implant through a clinical trial.

Dr Joseph Fins

Surgeons installed an electrode on either side of his thalamus,
the main relay station of the brain. "People who are in the
minimally conscious state have intact brain circuitry, but those
circuits are under-activated," explains Joseph Fins, MD, chief of
the division of medical ethics at Weill Cornell Medicine.
Delivering electrical impulses to affected regions can reenergize
those circuits, restoring lost or compromised function. "These
devices are like pacemakers for the brain," says Fins, who co-
authored a study in Nature about Greg's surgery
(https://www.nature.com/articles/nature06041).

The researchers switched Greg's device off and on every 30
days, observing how the electrical stimulation (or lack thereof)
altered his abilities. They saw remarkable things.

"With the deep brain stimulator, he was able to say six- or seven-
word sentences, the first 16 words of the Pledge of Allegiance.
Tell his mother he loved her. Go shopping at Old Navy and voice
a preference for the kind of clothing his mother was buying,"
recalls Fins, who shared Greg's journey in his book Rights Come
to Mind: Brain Injury, Ethics and the Struggle for Consciousness.

After six years of silence, Greg had regained his voice.

Yet success stories like his aren't without controversy, as the
technology has raised a litany of ethical questions: Can a
minimally conscious person consent to brain surgery? What
happens to study participants when clinical trials are over? How
can people's neural data be responsibly used — and protected?

Dr Veljko Dubljevic

"I think that motto, 'Move fast and break things,' is a really bad
approach," says Veljko Dubljevic, PhD, DPhil, an associate
professor of science, technology, and society at North Carolina
State University. He's referring to the unofficial tagline of Silicon
Valley, the headquarters for Elon Musk's neurotechnology
company, Neuralink.

Neuralink was founded in 2016, nearly a decade after the study
about Greg's brain implant was published. Yet it has been
Musk's company that has most visibly thrust neurotechnology
into public consciousness, owing somewhat to its founder's
often overstated promises. (In 2019, Musk claimed his brain-
computer interface would be implanted in humans in 2020. He
has since moved that target to 2022.) Musk has called his
device "a FitBit in your skull," though it's officially named the
"Link."

Brain-computer interfaces (BCIs) are already implanted in 36
people around the world, according to Blackrock,1 a leading
maker of these devices. What differentiates Neuralink is its
ambitious goal to implant over 1,000 thinner-than-hair
electrodes. If the Link works as intended — by monitoring a
person's brain activity and commanding a computer to do what
they want — people with neurological disorders, like
quadriplegia, could regain significant autonomy.
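
To make that decoding step concrete: a BCI typically turns windows of
multichannel neural activity into features, then maps those features
to an intended action. The sketch below is a minimal, hypothetical
illustration in Python, not Neuralink's or Blackrock's actual method;
the channel count, command set, and simulated firing rates are all
invented for demonstration.

```python
import numpy as np

# Hypothetical setup: 96 electrode channels, 4 cursor commands.
# All data here is simulated; a real BCI would train on recorded
# neural activity aligned with the user's intended movements.
rng = np.random.default_rng(0)
n_channels, n_commands, n_trials = 96, 4, 400
COMMANDS = ["left", "right", "up", "down"]

# Simulate training data: each intended command biases the firing
# rates of a different subset of channels.
labels = rng.integers(0, n_commands, n_trials)
tuning = rng.normal(0, 1, (n_commands, n_channels))  # per-command bias
rates = rng.normal(0, 1, (n_trials, n_channels)) + tuning[labels]

# Fit a linear decoder (least-squares regression onto one-hot targets).
onehot = np.eye(n_commands)[labels]
X = np.hstack([rates, np.ones((n_trials, 1))])  # add a bias column
W, *_ = np.linalg.lstsq(X, onehot, rcond=None)

def decode(firing_rates: np.ndarray) -> str:
    """Map one window of per-channel firing rates to a command."""
    scores = np.append(firing_rates, 1.0) @ W
    return COMMANDS[int(np.argmax(scores))]

# Decode a new, simulated window of activity (user intends "up").
sample = rng.normal(0, 1, n_channels) + tuning[2]
print(decode(sample))
```

Real decoders are far more sophisticated, but the pipeline has the
same basic shape: neural features in, intended command out.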

The History Behind Brain Implants

BCIs — brain implants that communicate with an external
device, typically a computer — are often framed as a science-
fiction dream that geniuses like Musk are making a reality. But
they're deeply indebted to a technology that's been used for
decades: deep brain stimulation (DBS). In 1948, a neurosurgeon
at Columbia University implanted an electrode into the brain of a
woman diagnosed with depression and anorexia. The patient
improved — until the wire broke a few weeks later. Still, the
stage was set for longer-term neuromodulation.

It would be movement disorders, not depression, that ultimately
catapulted DBS into the medical mainstream. In the late 1980s,
French researchers published a study suggesting the devices
could improve both essential tremor and the tremor associated
with Parkinson's. The FDA approved DBS for essential tremor in
1997; approval for Parkinson's followed in 2002. DBS is now the
most common surgical treatment for Parkinson's disease.2
Since then, deep brain stimulation has been used, often
experimentally, to treat a variety of conditions, ranging from
obsessive-compulsive disorder to Tourette's to addiction. The
advancements are staggering: Newer closed-loop devices can
directly respond to the brain’s activity, detecting, for example,
when a seizure in someone with epilepsy is about to happen,
then sending an electrical impulse to stop it. Implanted
electrodes recently enabled a blind woman to decipher lines,
shapes, and letters.
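
The closed-loop behavior described above reduces, at its core, to a
detect-then-respond cycle. Here is a deliberately simplified,
hypothetical sketch of that control loop; real implants use
patient-specific detection algorithms rather than the crude power
threshold assumed here, and every number below is invented.

```python
import numpy as np

# Hypothetical parameters; real devices are tuned per patient.
WINDOW = 256        # samples per analysis window
THRESHOLD = 5.0     # signal-power level treated as pre-seizure

def band_power(window: np.ndarray) -> float:
    """Crude proxy for pathological activity: mean squared amplitude."""
    return float(np.mean(window ** 2))

def closed_loop_step(window: np.ndarray, stimulate) -> bool:
    """One detect-then-respond cycle: stimulate only when the
    monitored signal looks abnormal."""
    if band_power(window) > THRESHOLD:
        stimulate()  # deliver an electrical impulse
        return True
    return False

# Simulated demo: quiet background activity, then a high-amplitude burst.
rng = np.random.default_rng(1)
quiet = rng.normal(0, 1, WINDOW)
burst = rng.normal(0, 4, WINDOW)
for name, window in [("quiet", quiet), ("burst", burst)]:
    fired = closed_loop_step(window, stimulate=lambda: None)
    print(name, "-> stimulation" if fired else "-> no action")
```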

In clinical trials, BCIs have helped people with paralysis move
prosthetic limbs. In July, Synchron — widely considered
Neuralink's chief competitor — implanted its Stentrode device
into its first human subject in the U.S. This launched an
unprecedented FDA-approved trial and puts Synchron ahead of
Neuralink (which is still in the animal-testing phase). Australian
research has already shown that people with ALS can shop and
bank online using the Stentrode.

With breakthroughs like these, it's hard to envision any
downsides to brain implants. But neuroethicists warn that if we
don't act proactively — if companies fail to build ethical
considerations into the very fabric of neurotechnology — there
could be serious downstream consequences.

The Ethics of Safety and Durability

It's tempting to dismiss these concerns as premature — the
stuff of imaginative alarmists. But neurotechnology has already
gained a firm foothold, with deep brain stimulators implanted in
200,000 people worldwide. And it's still not clear who is
responsible for the care of those who received the devices from
clinical trials.

Even if recipients report benefits, that could change over time as
the brain encapsulates the implant in glial tissue. This
"scarification" interferes with the electrical signal, says
Dubljevic, reducing the implant's ability to communicate. But
removing the device could pose a significant risk, such as
bleeding in the brain. Although cutting-edge designs aim to
resolve this — the Stentrode, for example, is inserted into a
blood vessel, rather than through open brain surgery — many
devices are still implanted, probe-like, deep into the brain.

Although device removal is usually offered at the end of studies,
the cost is often not covered as part of the trial. Researchers
typically ask the individual's insurance to pay for the procedure,
according to a study in the journal Neuron.3 But insurers have
no obligation to remove a brain implant without a medically
necessary reason. A patient's dislike for the device generally
isn't sufficient.

Acceptance among recipients is hardly uniform. Patient
interviews suggest these devices can alter identity, making
people feel less like themselves, especially if they're already
prone to poor self-image.4 "Some feel like they're controlled by
the device," says Dubljevic, obligated to obey the implant's
warnings — for example, if a seizure may be imminent, being
forced not to take a walk or go about their day normally.

Dr Paul Ford

"The more common thing is that they feel like they have more
control and greater sense of self," says Paul Ford, PhD, director
of the neuroethics program at the Cleveland Clinic. But even
those who like and want to keep their devices may find a dearth
of post-trial support — especially if the implant wasn't
statistically proven to be helpful.

Eventually, when the device's battery dies, the person will need
surgery to replace it. "Who's gonna pay for that? It's not part of
the clinical trial," Fins says. "This is kind of like giving people
Teslas and not having charging stations where they're going."

As neurotechnology advances, it's critical that healthcare
systems invest in the infrastructure to maintain brain implants
— in much the same way that someone with a pacemaker can
walk into any hospital and have a cardiologist adjust their
device, Fins says. "If we're serious about developing this
technology, we should be serious about our responsibilities
longitudinally to these participants."

The Ethics of Privacy

It's not just the medical aspects of brain implants that raise
concerns but also the glut of personal data they record.
Dubljevic compares neural data now to blood samples 50 years
ago, before scientists could extract genetic information. Fast-
forward to today, when those same vials can easily be linked to
individuals. "Technology may progress so that more personal
information can be gleaned from recordings of brain data," he
says. "It's currently not mind-reading in any way, shape, or form.
But it may become mind-reading in something like 20 or 30
years."

That term — mind-reading — is thrown around a lot in this field.
"It's kind of the science-fiction version of where the technology
is today," says Fins. Brain implants are not currently capable of
mind-reading.

But as device signals become clearer, data will become more
precise. Eventually, says Dubljevic, scientists may be able to
infer attitudes or psychological states. "Someone could be
labeled as less attentive or less intelligent" based on neural
patterns, he says. Brain data could also expose unknown
medical conditions — for example, a history of stroke — that
may be used to raise an individual's insurance premiums or deny
coverage altogether. Hackers could potentially seize control of
brain implants, shutting them off or sending rogue signals to the
user's brain.

Some researchers, including Fins, claim that storing brain data
is really no riskier than keeping medical records on your phone.
"It's about cybersecurity writ large," he says. However, others see
brain data as uniquely personal. "These are the only data that
reveal a person's mental processes," argues a report from
UNESCO's International Bioethics Committee (IBC). "If the
assumption is that 'I am defined by my brain', then neural data
may be considered as the origin of the self and require special
definition and protection."

"The brain is such a key part of who we are — what makes us,"
says Laura Cabrera, PhD, the chair of neuroethics at Penn State
University. "Who owns the data? Is it the medical system? Is it
you, as a patient or user? I think that hasn’t really been resolved."

Dr Laura Cabrera

Many of the measures put in place to regulate what Google or
Facebook gathers and shares could also be applied to brain
data. Some insist that the industry default should be to keep
neural data private, rather than requiring individuals to opt out of
sharing. Dubljevic, however, takes a more nuanced view, since
sharing of raw data among researchers is essential for
technological advancement and accountability.

What's clear is that forestalling research isn't the solution —
transparency is. As part of the consent process, patients
should be told where their data is being stored, for how long,
and for what purpose, says Cabrera. In 2008, the U.S. passed a
law, the Genetic Information Nondiscrimination Act, prohibiting
discrimination in healthcare coverage and employment based on
genetic information. This could serve as
a helpful precedent, she says.

The Legal Question

Around the globe, legislators are contemplating the question of
neural data. A few years ago, a visit from a Columbia University
neurobiologist sparked Chile's senate to draft a bill to regulate
how neurotechnology could be used and how data would be
safeguarded. "Scientific and technological development will be
at the service of people," the amendment promised, "and will be
carried out with respect for life and physical and mental
integrity."

Chile's new constitution was voted down in September,
effectively killing the neurorights bill. But other countries are
considering similar legislation. In 2021, France amended its
bioethics law to prohibit discrimination due to brain data, while
also building in the right to ban devices that modify brain
activity.

Fins isn't convinced this type of legislation is wholly good. He
points to people like Greg — the 38-year-old who regained his
ability to communicate through a brain implant. If it's illegal to
alter or investigate the brain's state, "then you couldn't find out if
there was covert consciousness" — mental awareness that isn't
outwardly apparent — "thereby destining people to profound
isolation," he says.

Access to neurotechnology needs protecting too, especially for
those who need it to communicate. "It's one thing to do
something over somebody's objection. That's a violation of
consent — a violation of personhood," says Fins. "It's quite
another thing to intervene to promote agency." In cases of
minimal consciousness, a medical surrogate, such as a family
member, can often be called upon to provide consent. Overly
restrictive laws could prevent the implantation of neural devices
in these people. "It's a very complicated area," says Fins.

The Future of Brain Implants

Currently, invasive brain implants are strictly therapeutic. But, in
some corners, "enhancement is an aspiration," says Dubljevic.
Animal studies suggest the potential is there. In a 2013 study,
researchers monitored the brains of rats as they navigated a
maze; electrical stimulation then transferred that neural data to
rats at another lab. This second group of rodents navigated the
maze as if they'd seen it before, suggesting that the transfer of
memories may eventually become a reality. Possibilities like this
raise the specter of social inequity, since only the wealthiest
may afford cognitive enhancement.

They could also lead to ethically questionable military
programs. "We have heard staff at DARPA and the U.S.
Intelligence Advanced Research Projects Activity discuss plans
to provide soldiers and analysts with enhanced mental abilities
('super-intelligent agents')," a group of researchers wrote in a
2017 paper in Nature. Brain implants could even become a
requirement for soldiers, who may be obligated to participate in
trials; some researchers advise stringent international
regulations for military use of the technology, similar to the
Geneva Protocol for chemical and biological weapons.

The temptation to explore every application of neurotechnology
will likely prove irresistible for entrepreneurs and scientists
alike. That makes precautions essential.

"While it's not surprising to see many potential ethical issues


and questions arising from use of a novel technology," a team of
researchers, including Dubljevic, wrote in a 2020 paper in
Philosophies, "what is surprising is the lack of suggestions to
resolve them."
It's critical that the industry proceed with the right mindset, he
says, emphasizing collaboration and prioritizing ethics at every
stage.

"How do we avoid problems that may arise and find solutions


prior to those problems even arising?" Dubljevic asks. "Some
proactive thinking goes a long way."

Sources:

Joseph Fins, MD, chief of the division of medical ethics at Weill
Cornell Medicine

Veljko Dubljevic, PhD, DPhil, an associate professor of science,
technology, and society at North Carolina State University

Paul Ford, PhD, director of the neuroethics program at the
Cleveland Clinic

Laura Cabrera, PhD, the chair of neuroethics at Penn State
University

References:

1. https://blackrockneurotech.com/

2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6108190/

3. https://www.sciencedirect.com/science/article/pii/S0896627319307305

4. https://link.springer.com/article/10.1007/s11948-017-0001-5
