
NEWS EXPLAINER

02 May 2023
Mind-reading machines are here: is it time to worry?
Neuroethicists are split on whether a study that uses brain scans and AI to decode
imagined speech poses a threat to mental privacy.
Sara Reardon
The little voice inside your head can now be decoded by a brain
scanner — at least some of the time. Researchers have developed the first non-
invasive method of determining the gist of imagined speech, presenting a possible
communication outlet for people who cannot talk. But how close is the technology —
which is currently only moderately accurate — to achieving true mind-reading? And
how can policymakers ensure that such developments are not misused?

Most existing thought-to-speech technologies use brain implants that monitor activity in a person’s motor cortex and predict the words that the lips are trying
to form. To understand the actual meaning behind the thought, computer scientists
Alexander Huth and Jerry Tang at the University of Texas at Austin and their
colleagues combined functional magnetic resonance imaging (fMRI), a non-invasive
means of measuring brain activity, with artificial intelligence (AI) algorithms
called large language models (LLMs), which underlie tools such as ChatGPT and are
trained to predict the next word in a piece of text.
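
As an illustration of what ‘predicting the next word’ means in practice, the short Python sketch below queries the openly available GPT-2 model for its most likely continuations of a phrase. It demonstrates the general technique only, not the model or code used in the study.

```python
# A minimal illustration of next-word prediction with an LLM, using the
# openly available GPT-2 model via Hugging Face's transformers library.
# This is a sketch of the general technique, not the study's model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I don't have my driver's"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for every possible next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {float(p):.3f}")
```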

In a study published in Nature Neuroscience on 1 May, the researchers got three volunteers to lie in an fMRI scanner and recorded the individuals’ brain activity while they listened to 16 hours of podcasts each¹. By measuring the blood flow
through the volunteers’ brains and integrating this information with details of the
stories they were listening to and the LLM’s ability to understand how words relate
to one another, the researchers developed an encoded map of how each individual’s
brain responds to different words and phrases.
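
In essence, this map is a regression from features of the language a person hears to the brain activity it evokes. The sketch below, with synthetic data standing in for LLM embeddings and fMRI recordings, shows the simplest version of such an encoding model; the study’s actual pipeline also handles details such as hemodynamic delay, and all shapes here are hypothetical.

```python
# A heavily simplified sketch of an encoding model: ridge regression from
# stimulus features (random vectors standing in for LLM word embeddings)
# to voxel responses (simulated BOLD data). Shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 2000, 768, 500

X = rng.standard_normal((n_timepoints, n_features))       # stimulus features
W_true = 0.1 * rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + rng.standard_normal((n_timepoints, n_voxels))  # "brain" data

enc = Ridge(alpha=1.0).fit(X, Y)   # one linear map per voxel
print("fit R^2:", enc.score(X, Y))
```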

Next, the researchers recorded the participants’ fMRI activity while they listened
to a story, imagined telling a story or watched a film that contained no dialogue.
Using a combination of the patterns they had previously encoded for each individual
and algorithms that determine how a sentence is likely to be constructed on the
basis of other words in it, the researchers attempted to decode this new brain
activity. The video below shows the sentences produced from brain recordings taken
while a study participant watched a clip from the animated film Sintel, about a
girl caring for a baby dragon.

Credit: Jerry Tang and Alexander Huth
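
Conceptually, the decoding step is a guess-and-check loop: the language model proposes plausible next words, the encoding map predicts what brain activity each candidate sentence should evoke, and the candidates that best match the observed recording survive. A hypothetical sketch of that loop (the function names are placeholders, not the researchers’ code):

```python
# Hypothetical beam-search decoder: propose likely next words with a
# language model, predict the brain response each candidate sentence would
# evoke via the fitted encoding model, and keep the candidates that best
# match the observed fMRI data. `lm_propose` and `encoding_model` are
# placeholder callables, not the study's actual code.
import numpy as np

def decode(observed, lm_propose, encoding_model, beam_width=5, n_steps=20):
    beams = [("", float("inf"))]          # (candidate text, mismatch score)
    for _ in range(n_steps):
        candidates = []
        for text, _ in beams:
            for word in lm_propose(text):             # likely continuations
                extended = (text + " " + word).strip()
                predicted = encoding_model(extended)  # predicted voxel activity
                err = float(np.mean((predicted - observed) ** 2))
                candidates.append((extended, err))
        candidates.sort(key=lambda c: c[1])           # lower error = closer match
        beams = candidates[:beam_width]
    return beams[0][0]                                # best-matching sentence
```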


Hit and miss
The decoder generated sentences that got the gist of what the person was thinking:
the phrase ‘I don’t have my driver’s license yet’, for instance, was decoded as
‘she has not even started to learn to drive yet’. And it did a fairly accurate job
of describing what people were seeing in the films. But many of the sentences it
produced were inaccurate.

The researchers also found that it was easy to trick the technology. When
participants thought of a different story while listening to a recorded story, the
decoder could not determine the words they were hearing. The encoded map also
differed between individuals, meaning that the researchers could not create one
decoder that worked on everyone. Huth thinks that it will become even more
difficult to develop a universal decoder as researchers create more detailed maps
of individuals’ brains.

Determining how the brain creates meaning from language is enormously difficult,
says Francisco Pereira, a neuroscientist at the US National Institute of Mental
Health in Bethesda, Maryland. “It’s impressive to see someone pull it off.”

‘Wake-up call’
Neuroethicists are split on whether the latest advance represents a threat to
mental privacy. “I’m not calling for panic, but the development of sophisticated,
non-invasive technologies like this one seems to be closer on the horizon than we
expected,” says bioethicist Gabriel Lázaro-Muñoz at Harvard Medical School in
Boston. “I think it’s a big wake-up call for policymakers and the public.”

But Adina Roskies, a philosopher of science at Dartmouth College in Hanover, New Hampshire, says that the technology is too difficult to use — and too inaccurate —
to pose a threat at present. For starters, fMRI scanners are not portable, making
it difficult to scan someone’s brain without their cooperation. She also doubts
that it would be worth the time or cost to train a decoder for an individual for
any purpose other than restoring communication abilities. “I just don’t think it’s
time to start worrying,” she says. “There are lots of other ways the government can
tell what we’re thinking.”

Greta Tuckute, a cognitive neuroscientist at the Massachusetts Institute of Technology in Cambridge, finds it encouraging that the decoding system could not be
applied across individuals and that people could easily trick it by thinking of
other things. “It’s a nice demonstration of how much agency we actually have,” she
says.

Proceed with caution
Nevertheless, Roskies says that even if the decoder doesn’t work well, problems
could arise if lawyers or courts use it without understanding its scientific
limitations. For instance, in the current study, the phrase ‘I just jumped out [of
the car]’ was decoded as ‘I had to push her out of the car’. “The differences are
stark enough that they could make an enormous difference in a legal case,” Roskies says.
“I’m afraid they will have the ability to use this stuff when they shouldn’t.”

Tang agrees. “The polygraph is not accurate but has had negative consequences,” he
said in a press conference. “Nobody’s brain should be decoded without their
cooperation.” He and Huth called for policymakers to proactively address how mind-
reading technologies can and cannot be legally used.

Lázaro-Muñoz says that policy action could mirror a 2008 US law that prevents
insurers and employers from using people’s genetic information in discriminatory
ways. He also worries about the implications of the decoder for people with
conditions such as obsessive–compulsive disorder, who can experience unwanted,
intrusive thoughts about harming people that they would never act on.

Pereira says the matter of how accurate decoders could become is an open question,
as is whether they could eventually become universal, instead of being specific to
an individual. “It depends on how unique you think humans are,” he says.

Although the decoder could eventually become good at predicting the next word in a
series, it might struggle to interpret metaphors or sarcasm. There’s a big step,
Pereira says, between putting words together and determining how the brain encodes
the relationships between the words.

doi: https://doi.org/10.1038/d41586-023-01486-z

References
1. Tang, J., LeBel, A., Jain, S. & Huth, A. G. Nature Neurosci. https://doi.org/10.1038/s41593-023-01304-9 (2023).
