Original Article
From the Department of Neurological Surgery (D.A.M., S.L.M., J.R.L., G.K.A., J.G.M., P.F.S., J.C., M.E.D., E.F.C.), the Weill Institute for Neuroscience (D.A.M., S.L.M., J.R.L., G.K.A., J.G.M., P.F.S., J.C., K.G., E.F.C.), and the Departments of Rehabilitation Services (P.M.L.) and Neurology (G.M.A., A.T.-C., K.G.), University of California, San Francisco (UCSF), San Francisco, and the Graduate Program in Bioengineering, University of California, Berkeley–UCSF, Berkeley (S.L.M., J.R.L., E.F.C.). Address reprint requests to Dr. Chang at edward.chang@ucsf.edu. Dr. Moses, Mr. Metzger, and Ms. Liu contributed equally to this article. N Engl J Med 2021;385:217-27. DOI: 10.1056/NEJMoa2027540. Copyright © 2021 Massachusetts Medical Society.

ABSTRACT

BACKGROUND
Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication.

METHODS
We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them.
RESULTS
We decoded sentences from the participant’s cortical activity in real time at a
median rate of 15.2 words per minute, with a median word error rate of 25.6%. In
post hoc analyses, we detected 98% of the attempts by the participant to produce
individual words, and we classified words with 47.1% accuracy using cortical sig-
nals that were stable throughout the 81-week study period.
CONCLUSIONS
In a person with anarthria and spastic quadriparesis caused by a brain-stem
stroke, words and sentences were decoded directly from cortical activity during
attempted speech with the use of deep-learning models and a natural-language
model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.)
Anarthria is the loss of the ability to articulate speech. It can result from a variety of conditions, including stroke and amyotrophic lateral sclerosis.1 Patients with anarthria may have intact language skills and cognition, and some are able to produce limited oral movements and undifferentiated vocalizations when attempting to speak.2 However, paralyzed persons may be unable to operate assistive devices because of severe impairment of movement. Anarthria hinders communication with family, friends, and caregivers, thereby reducing patient-reported quality of life.3 Advances have been made with typing-based brain–computer interfaces that allow speech-impaired persons to spell out messages by controlling a computer cursor.4-8 However, letter-by-letter selection through interfaces driven by neural signal recordings is slow and effortful. A more efficient and natural approach may be to directly decode whole words from brain areas that control speech. Our understanding of how the area of the sensorimotor cortex that controls speech orchestrates the rapid articulatory movements of the vocal tract has expanded.9-14 Engineering efforts have used these neurobiologic findings, together with advances in machine learning, to show that speech can be decoded from brain activity in persons without speech impairments.15-19 (A Quick Take is available at NEJM.org.)

In paralyzed persons who cannot speak, recordings of neural activity cannot be precisely aligned with intended speech because of the absence of speech output, which poses an obstacle for training computational models.20 In addition, it is unclear whether neural signals underlying speech control are still intact in persons who have not spoken for years or decades. In earlier work, a paralyzed person used an implanted, intracortical, two-channel microelectrode device and an audiovisual interface to generate vowel sounds and phonemes but not full words.21,22 To determine whether speech can be directly decoded to produce language from the neural activity of a person who is unable to speak, we tested real-time decoding of words and sentences from the cortical activity of a person with limb paralysis and anarthria caused by a brain-stem stroke. (Videos showing speech decoding are available at NEJM.org.)

Methods

Study Overview
This work was performed as part of the BCI Restoration of Arm and Voice (BRAVO) study, which is a single-institution clinical study to evaluate the potential of electrocorticography, a method for recording neural activity from the cerebral cortex with the use of electrodes placed on the surface of the cerebral hemisphere, and custom decoding techniques to enable communication and mobility. An investigational device exemption for the device used in this study was approved by the Food and Drug Administration. As of this writing, the device had been implanted only in the participant described here. Because of regulatory and clinical considerations regarding the proper handling of the percutaneous connector, the participant did not have the opportunity to use the system independently for daily activities but underwent testing at his home.

This work was approved by the Committee on Human Research at the University of California, San Francisco, and was supported in part by a research contract under Facebook's Sponsored Academic Research Agreement. All the authors were involved in the design and execution of the clinical study; the collection, storage, analysis, and interpretation of the data; and the writing of the manuscript. No study hardware or data were transferred to any sponsor, and we did not receive any hardware or software from a sponsor to use in this work. All the authors vouch for the accuracy and completeness of the data and for the fidelity of the study to the protocol (available with the full text of this article at NEJM.org) and confirm that the study was conducted ethically. Informed consent was obtained from the participant after the reason for and nature of implantation and the training procedures and risks were thoroughly explained to him.

Participant
The participant was a right-handed man who was 36 years of age at the start of the study. At 20 years of age, he had had an extensive pontine stroke associated with a dissection of the right vertebral artery, which resulted in severe spastic quadriparesis and anarthria, as confirmed by a speech–language pathologist and neurologists (Video 1 and Fig. S1 in the Supplementary Appendix, both available at NEJM.org). His cognitive function was intact, and he had a score of 26 on the Mini–Mental State Examination (scores range from 0 to 30, with higher scores indicating better mental performance); because of his paralysis, it was not physically possible for his score to reach 30. He was able to vocalize grunts
and moans but was unable to produce intelligible speech; eye movement was unaffected. He normally communicated using an assistive computer-based typing interface controlled by his residual head movements; his typing speed was approximately 5 correct words or 18 correct characters per minute (Section S1).

Implant Device
The neural implant used to acquire brain signals from the participant was a customized combination of a high-density electrocorticography electrode array (manufactured by PMT) and a percutaneous connector (manufactured by Blackrock Microsystems). The rectangular electrode array was 6.7 cm long, 3.5 cm wide, and 0.51 mm thick and consisted of 128 flat, disk-shaped electrodes arranged in a 16-by-8 lattice formation, with a center-to-center distance between adjacent electrodes of 4 mm. During surgical implantation, general anesthesia was used, and the sensorimotor cortex of the left hemisphere, as identified by anatomical landmarks of the central sulcus, was exposed through craniotomy. The electrode array was laid on the pial surface of the brain in the subdural space. The electrode coverage enabled sampling from multiple cortical regions that have been implicated in speech processing, including portions of the left precentral gyrus, postcentral gyrus, posterior middle frontal gyrus, and posterior inferior frontal gyrus.9,11-13 The dura was closed with sutures, and the cranial bone flap was replaced. The percutaneous connector was placed extracranially on the contralateral skull convexity and anchored to the cranium. This percutaneous connector conducts cortical signals from the implanted electrode array through externally accessible contacts to a detachable digital link and cable, enabling transmission of the acquired brain activity to a computer (Fig. S2). The participant underwent surgical implantation of the device in February 2019 and had no complications. The procedure lasted approximately 3 hours. We began to collect data for this study in April 2019.

Real-Time Acquisition and Processing of Neural Data
A digital-signal processing system (NeuroPort System, Blackrock Microsystems) was used to acquire signals from all 128 electrodes of the implant device and transmit them to a computer running custom software for real-time signal analysis (Section S2 and Figs. S2 and S3).18,23 As informed by previous research that had correlated neural activity in the 70 to 150 Hz (high-gamma) frequency range with speech processing,9,12-14,18 we measured activity in the high-gamma band for each channel to use in subsequent analyses and during real-time decoding.

Word and Sentence Task Design
The study consisted of 50 sessions over the course of 81 weeks and took place at the participant's residence or a nearby office. The participant engaged in two types of tasks: an isolated-word task and a sentence task (Section S3 and Fig. S4). On average, we collected approximately 27 minutes of neural activity during these tasks at each session. In each trial of each task, a target word or sentence was presented visually to the participant as text on a screen, and then the participant attempted to produce (say aloud) that target.

In the isolated-word task, the participant attempted to produce individual words from a set of 50 English words. This word set contained common English words that can be used to create a variety of sentences, including words that are relevant to caregiving and words requested by the participant. In each trial, the participant was presented with one of these 50 words, and, after a 2-second delay, he attempted to produce that word when the text of the word on the screen turned green. We collected 22 hours of data from 9800 trials of the isolated-word task performed by the participant in the first 48 of the 50 sessions.

In the sentence task, the participant attempted to produce word sequences from a set of 50 English sentences consisting of words from the 50-word set (Sections S4 and S5). In each trial, the participant was presented with a target sentence and attempted to produce the words in that sentence (in order) at the fastest speed he could perform comfortably. Throughout the trial, the word sequence decoded from neural activity was updated in real time and displayed as feedback to the participant. We collected data from 250 trials of the sentence task performed by the participant in 7 of the final 8 sessions. This task is shown in Video 2. A conversational variant of this task, in which the participant was presented with prompts and attempted to respond to them, is shown in Figure 1 and Video 1.
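The high-gamma feature extraction described in the acquisition section is commonly implemented as a band-pass filter followed by an analytic-amplitude estimate. The study's actual pipeline is specified in Section S2; the following is only a minimal Python sketch under generic assumptions (a zero-phase Butterworth filter and a Hilbert-transform envelope, with a hypothetical function name):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def high_gamma_envelope(ecog, fs, band=(70.0, 150.0), order=4):
    """Estimate the high-gamma analytic amplitude for each channel.

    ecog: array of shape (n_samples, n_channels); fs: sampling rate in Hz.
    This is a generic band-pass + Hilbert-envelope estimate, not the
    study's exact signal-processing pipeline (see Section S2).
    """
    # Zero-phase band-pass filter restricted to the 70-150 Hz range.
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ecog, axis=0)
    # Magnitude of the analytic signal gives the band's amplitude envelope.
    return np.abs(hilbert(filtered, axis=0))
```

For example, 2 seconds of 128-channel data sampled at 1 kHz enters as an array of shape (2000, 128) and returns an envelope array of the same shape.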
[Figure 1 schematic: a 128-electrode array over the speech sensorimotor cortex conveys cortical signals through a digital connector to a neural processing system. The processed cortical signals pass through speech detection (yielding neurally detected speech events), word classification (yielding predicted word probabilities), and language modeling to produce the decoded sentence, which is displayed as the response to a prompt.]
Figure 1 (facing page). Schematic Overview of the Direct Speech Brain–Computer Interface. Shown is how neural activity acquired from an investigational electrocorticography electrode array implanted in a clinical study participant with severe paralysis is used to directly decode words and sentences in real time. In a conversational demonstration, the participant is visually prompted with a statement or question (A) and is instructed to attempt to respond using words from a predefined vocabulary set of 50 words. Simultaneously, cortical signals are acquired from the surface of the brain through the electrode array (B) and processed in real time (C). The processed neural signals are analyzed sample by sample with the use of a speech-detection model to detect the participant's attempts to speak (D). A classifier computes word probabilities (across the 50 possible words) from each detected window of relevant neural activity (E). A Viterbi decoding algorithm uses these probabilities in conjunction with word-sequence probabilities from a separately trained natural-language model to decode the most likely sentence given the neural activity data (F). The predicted sentence, which is updated each time a word is decoded, is displayed as feedback to the participant (G). Before real-time decoding, the models were trained with data collected as the participant attempted to say individual words from the 50-word set as part of a separate task (not depicted). This conversational demonstration is a variant of the standard sentence task used in this work, in that it allows the participant to compose his own unique responses to the prompts.

Modeling
We used neural activity data collected during the tasks to train, fine-tune, and evaluate custom models (Sections S6 and S7 and Table S1). Specifically, we created speech-detection and word-classification models that used deep-learning techniques to make predictions from the neural activity. To decode sentences from the participant's neural activity in real time during the sentence task, we also used a natural-language model and a Viterbi decoder (Fig. 1). The speech-detection model processed each time point of neural activity during a task and detected onsets and offsets of word-production attempts in real time (Section S8 and Fig. S5). We fitted this model using neural activity data and task-timing information collected only during the isolated-word task.

For each attempt that was detected, the word-classification model predicted a set of word probabilities by processing the neural activity spanning from 1 second before to 3 seconds after the detected onset of attempted speech (Section S9 and Fig. S6). The predicted probability associated with each word in the 50-word set quantified how likely it was that the participant was attempting to say that word during the detected attempt. We fitted this model to neural data collected during the isolated-word task.

In English, certain sequences of words are more likely than others. To use this underlying linguistic structure, we created a natural-language model that yielded next-word probabilities given the previous words in a sequence (Section S10).24,25 We trained this model on a collection of sentences that included only words from the 50-word set; the sentences were obtained with the use of a custom task on a crowd-sourcing platform (Section S4).

The final component in the decoding approach involved the use of a custom Viterbi decoder, which is a type of model that determines the most likely sequence of words given predicted word probabilities from the word classifier and word-sequence probabilities from the natural-language model (Section S11 and Fig. S7).26 With the incorporation of the language model, the Viterbi decoder was capable of decoding more plausible sentences than what would result from simply stringing together the predicted words from the word classifier.

Evaluations
To evaluate the performance of our decoding approach, we analyzed the sentences that were decoded in real time using two metrics: the word error rate and the number of words decoded per minute (Section S12). The word error rate of a decoded sentence was defined as the number of word errors made by the decoder divided by the number of words in the target sentence.

To further characterize the detection and classification of word-production attempts from the participant's neural activity, we processed the collected isolated-word data with the speech-detection and word-classification models in offline analyses performed after the recording sessions had been completed (Section S13). We measured classification accuracy as the percentage of trials in which the word classifier correctly predicted the target word that the participant attempted to produce. We also measured electrode contributions as the size of the effect that each individual electrode had on the
[Figure 2, Panels A and B: Panel A plots the word error rates per sentence block with and without the language model (with chance performance), the numbers of words decoded per minute, and the decoded sentence lengths (too few words, correct length, too many words). Panel B plots the number of word errors per trial for each of the 50 target sentences, decoded with and without the natural-language model, with histograms of the error counts across all 150 trials.]

Panel C examples:

Target Sentence               | Example Decoded without Language Model | Decoded with Language Model
Hello how are you             | Hungry how am you                      | Hello how are you
I like my nurse               | I right my nurse                       | I like my nurse
They are going outside        | They are going outside                 | They are going outside
My family is very comfortable | Glasses family is faith comfortable    | My family is very comfortable
Bring my glasses please       | Please my glasses please               | Bring my glasses please
What do you do                | What do I you                          | What do I do
How do you like my music      | How do you like bad bring              | How do you like my music
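The corrections visible in these examples come from the Viterbi decoding step described in the Methods, which combines classifier probabilities for each detected word attempt with word-sequence probabilities from the natural-language model. The following is a minimal Python sketch of that idea, assuming a simple bigram language model; the study used a custom decoder whose details are in Section S11:

```python
import numpy as np

def viterbi_decode(emissions, bigram, vocab):
    """Most likely word sequence given classifier and language-model scores.

    emissions: (n_words, vocab_size) classifier probabilities, one row per
      detected word-production attempt.
    bigram: (vocab_size, vocab_size) next-word probabilities, where
      bigram[i, j] = P(word j | previous word i).
    This is a textbook Viterbi decoder, not the study's custom
    implementation (Section S11).
    """
    n, v = emissions.shape
    log_em = np.log(emissions + 1e-12)
    log_lm = np.log(bigram + 1e-12)
    score = log_em[0].copy()              # first word: classifier evidence only
    back = np.zeros((n, v), dtype=int)
    for t in range(1, n):
        # For each candidate word, keep the best-scoring previous word.
        cand = score[:, None] + log_lm + log_em[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    # Trace back the best path from the highest-scoring final word.
    path = [int(np.argmax(score))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [vocab[i] for i in reversed(path)]
```

With a toy three-word vocabulary, a word that the classifier slightly prefers can be overruled when the language model makes the alternative sequence far more plausible, which is the mechanism behind the "with language model" column above.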
predictions made by the detection and classification models.19,27

To investigate the viability of our approach for a long-term application, we evaluated the stability of the acquired cortical signals over time using the isolated-word data (Section S14). By sampling neural data from four different date ranges spanning the 81-week study period, we assessed whether classification accuracy on a subset of data collected in the final sessions could be improved by including data from earlier subsets as part of the training set for the classification model; such improvement would indicate that training data accumulated across months or years of recording could be used to reduce the need for frequent model recalibration in practical applications of our approach.

Statistical Analyses
Results for each experimental condition are presented with 95% confidence intervals when appropriate (Section S15). No adjustments were made for multiple comparisons. The evaluation metrics (word error rate, number of words decoded per minute, and classification accuracy) were specified before the start of data collection. Analyses to assess the long-term stability of speech-detection and word-classification performance with our implant device were designed post hoc.

Results

Sentence Decoding
During real-time sentence decoding, the median word error rate across 15 sentence blocks (each block comprised 10 trials) was 60.5% (95% confidence interval [CI], 51.4 to 67.6) without language modeling and 25.6% (95% CI, 17.1 to 37.1) with language modeling (Fig. 2A, top). The lowest word error rate observed for a single sentence block was 7.0% (with language modeling). When chance performance was measured with the use of sentences that had been randomly generated by the natural-language model (Section S12), the median word error rate was 92.1% (95% CI, 85.7 to 97.2). Across all 150 trials, the median number of words decoded per minute was 15.2 with the inclusion of all decoded words and 12.5 with the inclusion of only correctly decoded words (with language modeling) (Fig. 2A, middle). In 92.0% of the trials, the number of detected words was equal to the number of words in the target sentence (Fig. 2A, bottom). Across all 15 sentence blocks, five speech attempts were erroneously detected before the first trial in the block and were excluded from real-time decoding and analysis (all other detected speech attempts were included). For almost all target sentences, the mean number of word errors decreased

Figure 2 (facing page). Decoding a Variety of Sentences in Real Time through Neural Signal Processing and Language Modeling. Panel A shows the word error rates, the numbers of words decoded per minute, and the decoded sentence lengths. The top plot shows the median word error rate (defined as the number of word errors made by the decoder divided by the number of words in the target sentence, with a lower rate indicating better performance) derived from the word sequences decoded from the participant's cortical activity during the performance of the sentence task. Data points represent sentence blocks (each block comprises 10 trials); the median rate, as indicated by the horizontal line within a box, is shown across 15 sentence blocks. The upper and lower sides of the box represent the interquartile range, and the I bars 1.5 times the interquartile range. Chance performance was measured by computing the word error rate on sentences randomly generated from the natural-language model. The middle plot shows the median number of words decoded per minute, as derived across all 150 trials (each data point represents a trial). The rates are shown for the analysis that included all words that were correctly or incorrectly decoded with the natural-language model and for the analysis that included only correctly decoded words. Each violin distribution was created with the use of kernel density estimation based on Scott's rule for computing the estimator bandwidth; the thick horizontal lines represent the median number of words decoded per minute, and the thinner horizontal lines the range (with the exclusion of outliers that were more than 4 standard deviations below or above the mean, which was the case for one trial). In the bottom chart, the decoded sentence lengths show whether the number of detected words was equal to the number of words in the target sentence in each of the 150 trials. Panel B shows the number of word errors in the sentences decoded with or without the natural-language model across all trials and all 50 sentence targets. Each small vertical dash represents the number of word errors in a single trial (there are 3 trials per target sentence; marks for identical error counts are staggered horizontally for visualization purposes). Each dot represents the mean number of errors for that target sentence across the 3 trials. The histogram at the bottom shows the error counts across all 150 trials. Panel C shows seven target sentence examples along with the corresponding sentences decoded with and without the natural-language model. Correctly decoded words are shown in black and incorrect words in red.
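The word error rate used in these results is a word-level edit distance (substitutions, insertions, and deletions) between the decoded and target sentences, divided by the number of words in the target. A minimal Python sketch of that metric follows; the study's exact scoring procedure is described in Section S12:

```python
def word_error_rate(target, decoded):
    """Word-level Levenshtein distance between decoded and target
    sentences, divided by the number of words in the target sentence.
    A standard formulation; the study's exact scoring is in Section S12."""
    ref, hyp = target.split(), decoded.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

Applied to the first example in Figure 2C, decoding "Hungry how am you" against the target "Hello how are you" yields 2 word errors in a 4-word target, a word error rate of 0.5.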
[Figure 3: Panel A maps each electrode's contribution to the speech-detection and word-classification models on the participant's brain reconstruction. Panel B is a confusion matrix of predicted word versus target word (values in percent, color scale 0 to 100) over the 50-word set: Am, Are, Bad, Bring, Clean, Closer, Comfortable, Coming, Computer, Do, Faith, Family, Feel, Glasses, Going, Good, Goodbye, Have, Hello, Help, Here, Hope, How, Hungry, I, Is, It, Like, Music, My, Need, No, Not, Nurse, Okay, Outside, Please, Right, Success, Tell, That, They, Thirsty, Tired, Up, Very, What, Where, Yes, You.]
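The confusion values plotted in Figure 3B are row-normalized percentages of the word classifier's predictions for each target word. A minimal Python sketch of that bookkeeping follows, with hypothetical target and prediction lists standing in for the study's isolated-word trial data:

```python
import numpy as np

def confusion_percentages(targets, predictions, vocab):
    """Row-normalized confusion matrix, as in Figure 3B.

    For each target word (row), entry [i, j] is the percentage of that
    word's trials on which the classifier predicted vocab[j]; each row
    sums to 100, and the diagonal holds correct classifications.
    The trial lists here are illustrative, not the study's data.
    """
    index = {word: i for i, word in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for tgt, pred in zip(targets, predictions):
        counts[index[tgt], index[pred]] += 1
    totals = counts.sum(axis=1, keepdims=True)
    return 100.0 * counts / np.where(totals == 0, 1, totals)
```

Overall classification accuracy is then the fraction of trials in which the predicted word matches the target (47.1% in the study, against 2% chance for a 50-word vocabulary).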
Figure 3 (facing page). Distinct Neural Activity Patterns during Word-Production Attempts. Panel A shows the participant's brain reconstruction overlaid with the locations of the implanted electrodes and their contributions to the speech-detection and word-classification models. Plotted electrode size (area) and opacity are scaled by relative contribution (important electrodes appear larger and more opaque than other electrodes). Each set of contributions is normalized to sum to 1. For anatomical reference, the precentral gyrus is highlighted in light green. Panel B shows word confusion values computed with the use of the isolated-word data. For each target word (each row), the confusion value measures how often the word classifier predicted (regardless of whether the prediction was correct) each of the 50 possible words (each column) while the participant was attempting to say that target word. The confusion value is computed as a percentage relative to the total number of isolated-word trials for each target word, with the values in each row summing to 100%. Values along the diagonal correspond to correct classifications, and off-diagonal values correspond to incorrect classifications. The natural-language model was not used in this analysis.

when the natural-language model was used (Fig. 2B), and in 80 of 150 trials with language modeling, sentences were decoded without error. Use of the natural-language model during decoding improved performance by correcting grammatically and semantically implausible word sequences in the predictions (Fig. 2C). Real-time demonstrations are shown in Videos 1 and 2.

Word Detection and Classification
In the offline analyses that included data from 9000 attempts to produce isolated words (and excluded the use of the natural-language model), the mean classification accuracy was 47.1% with the use of the speech detector and word classifier to predict the identity of the target word from cortical activity. The accuracy of chance performance (without the use of any models) was 2%. Additional results of the isolated-word analyses are provided in Figures S8 and S9. A total of 98% of these word-production attempts were successfully detected (191 attempts were not detected), and 968 detected attempts were spurious (not associated with a speech attempt) (Section S8). Electrodes in the most ventral aspect of the ventral sensorimotor cortex contributed to word-classification performance to a greater extent than electrodes in the dorsal aspect of the ventral sensorimotor cortex, whereas the electrodes in the dorsal aspect contributed more to speech-detection performance (Fig. 3A). Classification accuracy was consistent across most of the target words (mean [±SD] classification accuracy across the 50 target words, 47.1±14.5%) (Fig. 3B).

Long-Term Stability of Acquired Cortical Signals
The long-term stability of the speech-related cortical activity patterns recorded during attempts to produce isolated words showed that the speech-detection and word-classification models performed consistently throughout the 81-week study period without daily or weekly recalibration (Fig. S10). When the models were used to analyze cortical activity recorded at the end of the study period, classification accuracy increased when the data set used to train the classification models contained data recorded throughout the study period, including data recorded more than a year before the collection of the data used to test the models (Fig. 4).

Discussion
We showed that high-density recordings of cortical activity in the speech-production area of the sensorimotor cortex of an anarthric and paralyzed person can be used to decode full words and sentences in real time. Our deep-learning models were able to use the participant's neural activity to detect and classify his attempts to produce words from a 50-word set, and we could use these models, together with language-modeling techniques, to decode a variety of meaningful sentences. Our models, enabled by the long-term stability of recordings from the implanted device, could use data accumulated throughout the 81-week study period to improve decoding performance when evaluating data recorded near the end of the study.

Previous demonstrations of word and sentence decoding from cortical neural activity have been conducted with participants who could speak without the need for assistive technology to communicate.15-19 Similar to the problem of decoding intended movements in someone who is paralyzed, the lack of precise time alignment between intended speech and neural activity poses a challenge during model training. We addressed this time-alignment problem with
References
1. Beukelman DR, Fager S, Ball L, Dietz A. AAC for adults with acquired neurological conditions: a review. Augment Altern Commun 2007;23:230-42.
2. Nip I, Roth CR. Anarthria. In: Kreutzer J, DeLuca J, Caplan B, eds. Encyclopedia of clinical neuropsychology. 2nd ed. New York: Springer International Publishing, 2017:1.
3. Felgoise SH, Zaccheo V, Duff J, Simmons Z. Verbal communication impacts quality of life in patients with amyotrophic lateral sclerosis. Amyotroph Lateral Scler Frontotemporal Degener 2016;17:179-83.
4. Sellers EW, Ryan DB, Hauser CK. Noninvasive brain–computer interface enables communication after brainstem stroke. Sci Transl Med 2014;6(257):257re7.
5. Vansteensel MJ, Pels EGM, Bleichner MG, et al. Fully implanted brain–computer interface in a locked-in patient with ALS. N Engl J Med 2016;375:2060-6.
6. Pandarinath C, Nuyujukian P, Blabe CH, et al. High performance communication by people with paralysis using an intracortical brain–computer interface. Elife 2017;6:e18554.
7. Brumberg JS, Pitt KM, Mantie-Kozlowski A, Burnison JD. Brain–computer interfaces for augmentative and alternative communication: a tutorial. Am J Speech Lang Pathol 2018;27:1-12.
8. Linse K, Aust E, Joos M, Hermann A. Communication matters — pitfalls and promise of high-tech communication devices in palliative care of severely physically disabled patients with amyotrophic lateral sclerosis. Front Neurol 2018;9:603.
9. Bouchard KE, Mesgarani N, Johnson K, Chang EF. Functional organization of human sensorimotor cortex for speech articulation. Nature 2013;495:327-32.
14. Salari E, Freudenburg ZV, Branco MP, Aarnoutse EJ, Vansteensel MJ, Ramsey NF. Classification of articulator movements and movement direction from sensorimotor cortex activity. Sci Rep 2019;9:14165.
15. Herff C, Heger D, de Pesters A, et al. Brain-to-text: decoding spoken phrases from phone representations in the brain. Front Neurosci 2015;9:217.
16. Angrick M, Herff C, Mugler E, et al. Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J Neural Eng 2019;16:036019.
17. Anumanchipalli GK, Chartier J, Chang EF. Speech synthesis from neural decoding of spoken sentences. Nature 2019;568:493-8.
18. Moses DA, Leonard MK, Makin JG, Chang EF. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat Commun 2019;10:3096.
19. Makin JG, Moses DA, Chang EF. Machine translation of cortical activity to text with an encoder-decoder framework. Nat Neurosci 2020;23:575-82.
20. Martin S, Iturrate I, Millán JDR, Knight RT, Pasley BN. Decoding inner speech using electrocorticography: progress and challenges toward a speech prosthesis. Front Neurosci 2018;12:422.
21. Guenther FH, Brumberg JS, Wright EJ, et al. A wireless brain–machine interface for real-time speech synthesis. PLoS One 2009;4(12):e8218.
22. Brumberg JS, Wright EJ, Andreasen DS, Guenther FH, Kennedy PR. Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex. Front Neurosci 2011;5:65.
23. Moses DA, Leonard MK, Chang EF. Real-time classification of auditory sen-
27. saliency maps. In: Bengio Y, LeCun Y, eds. Workshop at the International Conference on Learning Representations. Banff, AB, Canada: ICLR Workshop, 2014.
28. Kanas VG, Mporas I, Benz HL, Sgarbas KN, Bezerianos A, Crone NE. Real-time voice activity detection for ECoG-based speech brain machine interfaces. In: 19th International Conference on Digital Signal Processing: proceedings. New York: Institute of Electrical and Electronics Engineers, 2014:862-5.
29. Dash D, Ferrari P, Dutta S, Wang J. NeuroVAD: real-time voice activity detection from non-invasive neuromagnetic signals. Sensors (Basel) 2020;20:2248.
30. Sollich P, Krogh A. Learning with ensembles: how overfitting can be useful. In: Touretzky DS, Mozer MC, Hasselmo ME, eds. Advances in neural information processing systems 8. Cambridge, MA: MIT Press, 1996:190-6.
31. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Bartlett P, Pereira FCN, Burges CJC, Bottou L, Weinberger KQ, eds. Advances in neural information processing systems 25. Red Hook, NY: Curran Associates, 2012:1097-105.
32. Shoham S, Halgren E, Maynard EM, Normann RA. Motor-cortical activity in tetraplegics. Nature 2001;413:793.
33. Hochberg LR, Serruya MD, Friehs GM, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 2006;442:164-71.
34. Watanabe S, Delcroix M, Metze F, Hershey JR, eds. New era for robust speech recognition: exploiting deep learning. Berlin: Springer-Verlag, 2017.
35. Wolpaw JR, Bedlack RS, Reda DJ, et al. Independent home use of a brain–computer interface by people with amyotrophic lateral sclerosis. Neurology 2018;
10. Lotte F, Brumberg JS, Brunner P, et al. tences using evoked cortical activity in 91(3):e258-e267.
Electrocorticographic representations of humans. J Neural Eng 2018;15:036005. 36. Silversmith DB, Abiri R, Hardy NF, et al.
segmental features in continuous speech. 24. Kneser R, Ney H. Improved backing- Plug-and-play control of a brain–computer
Front Hum Neurosci 2015;9:97. off for M-gram language modeling. In: interface through neural map stabiliza-
11. Guenther FH, Hickok G. Neural mod- Conference proceedings: 1995 Interna- tion. Nat Biotechnol 2021;39:326-35.
els of motor speech control. In:Hickok G, tional Conference on Acoustics, Speech, 37. Chao ZC, Nagasaka Y, Fujii N. Long-
Small S, eds.Neurobiology of language. and Signal Processing. Vol. 1. New York: term asynchronous decoding of arm mo-
Cambridge, MA:Academic Press, 2015: Institute of Electrical and Electronics En- tion using electrocorticographic signals in
725-40. gineers, 1995:181-4. monkeys. Front Neuroeng 2010;3:3.
12. Mugler EM, Tate MC, Livescu K, Tem- 25. Chen SF, Goodman J. An empirical 38. Rao VR, Leonard MK, Kleen JK, Lucas
pler JW, Goldrick MA, Slutzky MW. Dif- study of smoothing techniques for lan- BA, Mirro EA, Chang EF. Chronic ambu-
ferential representation of articulatory guage modeling. Comput Speech Lang latory electrocorticography from human
gestures and phonemes in precentral and 1999;13:359-94. speech cortex. Neuroimage 2017;153:273-
inferior frontal gyri. J Neurosci 2018;38: 26. Viterbi AJ. Error bounds for convolu- 82.
9803-13. tional codes and an asymptotically opti- 39. Pels EGM, Aarnoutse EJ, Leinders S,
13. Chartier J, Anumanchipalli GK, John- mum decoding algorithm. IEEE Trans Inf et al. Stability of a chronic implanted
son K, Chang EF. Encoding of articulatory Theory 1967;13:260-9. brain–computer interface in late-stage
kinematic trajectories in human speech 27. Simonyan K, Vedaldi A, Zisserman A. amyotrophic lateral sclerosis. Clin Neuro-
sensorimotor cortex. Neuron 2018;98(5): Deep inside convolutional networks: visu- physiol 2019;130:1798-803.
1042.e4-1054.e4. alising image classification models and Copyright © 2021 Massachusetts Medical Society.