
PERSPECTIVES SIG 6

Viewpoint

COVID-19, Face Masks, and Social Interaction: How a Global Pandemic Is Shining a Light on the Importance of Face-to-Face Communication
Tina M. Grieco-Calub

Purpose: The purpose of this article is to discuss the impact of COVID-19 mitigation strategies on face-to-face communication. The article covers three main areas: the effect of face masks and social distancing on aspects of communication, including speech production, speech recognition, and emotion recognition; the effect of face masks on access to visual speech cues; and downstream effects of poor speech recognition on language and cognitive abilities.

Conclusions: Face masks and social distancing are proven mitigation strategies to minimize the spread of COVID-19 and other airborne diseases. However, these strategies may place individuals with hearing, speech, and language disorders at greater risk for communication problems in their daily lives. Practitioners who work directly with these patients should consider these issues when discussing communication strategies with their patients.

The SARS-CoV-2 (COVID-19) global pandemic will be one of the defining events of 2020. At the time of this writing, the primary mode of transmission of COVID-19 is through air molecules and, therefore, primarily through person-to-person contact (Centers for Disease Control and Prevention, 2020a). Some of the common symptoms experienced by infected individuals include loss of taste, respiratory problems, fever, and/or digestive issues (Centers for Disease Control and Prevention, 2020b; World Health Organization, 2020). However, not all individuals infected with COVID-19 experience physical symptoms. Additionally, both symptomatic and asymptomatic carriers of the virus can transmit the virus to other people: a scenario that has likely negatively impacted communities' ability to control the spread of the virus. Among the millions of people who have been infected with COVID-19 worldwide, more severe complications, including death, have been observed. Individuals who are most vulnerable to these severe complications include those with preexisting health conditions and those who are elderly.

As the virus began to spread around the world in early 2020, health officials in many countries closed businesses and ordered people to stay home. These extreme mitigation measures were aimed at rapidly reducing the transmission of the virus and protecting people. In mid-2020, communities were allowed to reopen as the rate of infected individuals decreased. Businesses welcomed customers back, houses of worship opened their doors, and people began to congregate in small groups. The key to the successful reopening, while minimizing the spread of the virus, was the implementation of safety measures known to be effective at reducing transmission of airborne viruses, such as wearing face masks, social distancing (i.e., maintaining a 6-ft distance from other individuals), and minimizing the size of social gatherings to a small number of people (e.g., Chu et al., 2020; Markel et al., 2007). Despite the benefits of these mitigation strategies, they do present challenges for the many people who rely on face-to-face conversations and social interaction in their daily lives. Among these individuals are people who are deaf or hard of hearing, second-language learners, children with speech and language disorders, and older individuals whose social interactions sustain their cognitive function and activity levels. Even with the discovery of an effective vaccine against COVID-19 in late 2020 and an increase in the number of individuals being

Department of Psychiatry and Behavioral Sciences, Rush NeuroBehavioral Center, Rush University Medical Center, Chicago, IL
Correspondence to Tina M. Grieco-Calub: tina_griecocalub@rush.edu
Editor-in-Chief: Barbara Cone
Editor: Laura Dreisbach Hawe
Received February 19, 2021
Revision received June 4, 2021
Accepted June 15, 2021
Disclosure: The author has declared that no competing financial or nonfinancial interests existed at the time of publication.
https://doi.org/10.1044/2021_PERSP-21-00037

Perspectives of the ASHA Special Interest Groups • Vol. 6 • 1097–1105 • October 2021 • Copyright © 2021 American Speech-Language-Hearing Association 1097
Downloaded from: https://pubs.asha.org 141.164.139.31 on 06/30/2022, Terms of Use: https://pubs.asha.org/pubs/rights_and_permissions
SIG 6 Hearing and Hearing Disorders: Research and Diagnosis

vaccinated throughout 2021, there continue to be mandates for the use of face masks and social distancing to mitigate the spread of the virus. The patient populations served by audiologists and speech-language pathologists are at greater risk for communication difficulties, and the implementation of these mitigation strategies likely exacerbates these difficulties. Thus, the purpose of this article is to explore the myriad ways in which face masks and social distancing impact face-to-face communication. It is important to stress that the forthcoming overview is not an argument against these mitigation strategies that are effective at minimizing the spread of the virus in communities. Rather, it is a summary of the potential communication barriers that arise because of these strategies so that we, as speech, language, and hearing professionals, can better serve the populations at risk for impaired communication.

The Speech Communication Pathway

To better understand how COVID-19 mitigation strategies pose barriers to communication, let us first consider the speech communication pathway (see Figure 1). During a conversation, a transmitter (i.e., talker) shares content with a receiver (i.e., listener). To share this content, the talker produces an acoustical speech signal, which contains fluctuations in frequency and intensity. Speech production is the result of coordinated oral motor movements that are visible to the listener and, therefore, provide coincident visual speech representations. For any given spoken language, the changes in frequency and intensity of speech reflect specific speech units that, when combined, form words. The listener then decodes the speech signal to comprehend the meaning of the content. To do this, the listener must map the speech units onto their own lexical representations of speech to facilitate comprehension and extract meaning.

Figure 1. The speech communication pathway.

Speech recognition is not always trivial given the large variability in speech production both across and within talkers due to differences in biological sex, age, accent, or dialect of the talker (e.g., Magnuson & Nusbaum, 2007; Trude & Brown-Schmidt, 2011). Additionally, access to the speech signal may be disrupted by extraneous factors. One such factor is background sound, such as environmental noise or background talkers, that masks target speech or is distracting to a listener (Cherry, 1953; Festen & Plomp, 1990). Another factor is spectral filtering of the signal, which can occur when it passes through an electrical device, a hearing device, or a face mask. Finally, the amplitude of the speech signal decreases as the distance between the talker and the listener increases. In isolation or in combination, these sources of signal degradation disrupt access to the speech signal.

There are downstream consequences of poor access to the speech signal regardless of the source of degradation. Face-to-face conversations not only require that listeners hear individual words but also often require that listeners follow an ongoing speech stream, which is often rapid with few pauses. Studies utilizing eye-tracking paradigms have consistently shown that both children and adults have slower recognition of speech when the speech has been degraded by background noise or spectral filtering compared to when it is unaltered (Grieco-Calub et al., 2009; McMurray et al., 2017; Newman & Chatterjee, 2013). Degraded speech also impairs verbal cognitive processes, such as working memory and, in children, acquisition of new words (Newman et al., 2020; Sullivan et al., 2015). Finally, not only does degraded speech impact word recognition and linguistic processing, it may also disrupt acoustic cues related to speech prosody, such as intonation, pitch, and stress, all of which are critical for emotion recognition and processing (Everhardt et al., 2020; Tinnemore et al., 2018).

One strategy that listeners employ to preserve recognition when the acoustics of speech are degraded is to use visual speech cues of the talker producing the speech during face-to-face communication. A listener's ability to benefit from visual speech (i.e., lipreading or speechreading) to facilitate speech recognition in children and adults when auditory speech is degraded is well established (Erber, 1969; Grant & Seitz, 2000; Kuhl & Meltzoff, 1982; Lalonde & Holt, 2015; MacLeod & Summerfield, 1987; Ross et al., 2011). Additionally, facial


expressions are helpful when attempting to understand the emotional context of the speech signal (e.g., Busso et al., 2004). Finally, the use of gestures, touch, or other body movements can facilitate communication. Therefore, although we often focus on the acoustics of a speech signal and the ability of listeners to resolve those acoustics, other verbal and nonverbal cues can support effective communication.

How Do Face Masks and Social Distancing Influence Speech Transmission and Speech Perception?

Given our knowledge about the speech communication process, it is unsurprising that individuals report difficulty understanding the speech content of their communication partner when face masks and social distancing practices are implemented. There are three main issues related to these observations. First, given that COVID-19 is transmitted via air molecules, the use of face masks reduces the transmission of air from a talker's mouth and into a listener's nasal and mouth cavities. Although face masks are effective at mitigating the transmission of the virus, they concurrently reduce speech energy as it is transmitted out of the talker's mouth (Corey et al., 2020; Goldin et al., 2020; Mendel et al., 2008; Vos et al., 2021). Specifically, hospital-grade masks act as low-pass filters, effectively reducing the intensity of the higher speech frequencies (> 2 kHz) that are important for speech recognition. The widespread use of face masks in the general public, however, has resulted in an increase in the variety of materials and fabrics that are being used to construct the masks. Given that different materials will have variable acoustic transparency and thus different filtering properties, some masks may degrade the signal more than other masks. Additionally, some people may utilize additional filters under masks or wear more than one mask at a time as additional protection from the virus. These factors all contribute to the filtering effects on the speech signal. Less acoustically transparent mask materials are expected to exert more filtering of the speech output.

Second, social distancing creates a longer distance for speech to travel between the talker and the listener. Similar to the use of face masks, social distancing aims to reduce the spread of airborne pathogens. Close contact in confined spaces increases the likelihood that the listener comes into contact with air molecules that are carrying the virus. When considering the relation between distance and speech energy, the inverse square law tells us that the sound pressure level of the speech signal decreases by 6 dB for every doubling of the distance between the talker and the listener (e.g., doubling a 3-ft conversational distance to the recommended 6 ft reduces the level by 6 dB). This can have an impact on both the listener and the talker. From the perspective of the listener, decreased energy of the speech signal may result in parts of the signal being less audible or completely unheard. In the former case, the listener would likely need to exert greater cognitive effort to comprehend the speech signal, which can have downstream consequences on their ability to comprehend or recall the content of the signal. From the perspective of the talker, successful communication will be achieved only if the content of their speech is received by the listener. To overcome the intensity loss of the speech signal due to the greater distance between the talker and the listener, the talker will likely need to increase their vocal effort (Fiorella et al., 2021; Michael et al., 1995). Sustained increases in vocal effort can have negative downstream consequences on the physical health of the talker.

Third, face masks limit access to the talker's face, which provides both verbal and nonverbal cues that enhance communication. Related to speech recognition, disrupted access to the talker's face limits access to visual speech cues that appear coincidently with auditory speech. Since the early work of Sumby and Pollack (1954), studies have repeatedly shown that individuals across the life span benefit from both hearing and seeing a talker (i.e., audiovisual speech perception), particularly in situations when the auditory components of speech are less accessible (Erber, 1969; Grant & Seitz, 2000; Kuhl & Meltzoff, 1982; Lalonde & Holt, 2015; MacLeod & Summerfield, 1987; Ross et al., 2011). Consistent with this idea, in natural social interactions, listeners flexibly shift their visual attention to a talker in situations when the auditory components of speech are degraded or unfamiliar (Lewkowicz & Hansen-Tift, 2012; MacDonald et al., 2020). The benefit of seeing the talker is realized even when the listener has only partial access to the talker's face, suggesting that visual cues are accessible in the absence of seeing the talker's lips (Fecher & Watt, 2013; Jordan & Thomas, 2011; Schwartz et al., 2004; Swerts & Krahmer, 2008; Thomas & Jordan, 2004). Given that access to visual speech cues benefits a listener who has poor access to the acoustics of a speech signal, a standard recommendation from audiologists to their patients with hearing loss is to face their communication partners during conversations. The widespread use of face masks, therefore, has the potential to not only disrupt access to the speech signal but also disrupt established communication strategies implemented by individuals who already have poor access to the speech signal. Specifically, face masks can reduce the overall speech energy and obscure other cues that disambiguate speech information. The extent to which face masks exert these effects will relate to the mask type, how much the mask filters the acoustics of the speech signal, and how much the mask obscures visual speech cues; therefore, people may experience inconsistent difficulty with understanding speech when the talker is using a face mask.

Together, the use of face masks and social distancing may significantly reduce the overall intensity and alter the spectral shape of speech for talkers and listeners. For some people, the use of face masks and social distancing may simply require them to speak a little louder or to allocate more attention to speech so that they can understand it. For other people, the communication barriers resulting from the use of face masks and social distancing may be more difficult to overcome. For example, individuals who already have difficulty with speech production, including people with dysarthria, articulation errors, and dysphonia, may experience


more frequent communication breakdowns because of the additional effort that they need to exert. In addition, there are many people who report hearing difficulty in complex listening situations who have not received audiological management. Among these individuals are people with mild hearing loss who are able to navigate their daily lives by using communication strategies, either overtly or covertly, to compensate for their hearing loss. In addition, individuals with "hidden hearing loss"—individuals who report difficulty hearing in noise despite having audiometrically normal hearing—may experience an increase in the number of situations where they have trouble understanding speech. What does this mean for speech-language pathologists and audiologists? First, existing patients may report increased difficulty with speech production or speech recognition. Second, clinicians may experience an increase in the number of referrals that they receive for speech and hearing evaluations, especially from individuals who are no longer able to successfully implement communication strategies due to the existence of COVID-19 mitigation strategies.

What Is Effortful Listening and How Does It Relate to Face-to-Face Communication?

As we have discussed, effective social interactions rely on the ability of a listener to perceive the speech of a talker. Audiologists quantify this ability through standardized speech recognition tests. In these tests, listeners hear speech (words or sentences) and repeat what they hear to the best of their ability. The accuracy with which patients repeat the words is used to infer their ability to hear speech in their everyday environments.

There is growing consensus, however, that typical speech recognition tasks do not wholly capture what listeners experience in their daily life. Clinically implemented speech recognition tests typically occur in sound-treated rooms, in the absence of visual distractions, and at a suprathreshold intensity level. Real-world listening is different from clinical assessments in multiple ways. First, the listening environment is more complex: in contrast to sound-treated rooms, real-world environments frequently contain background sound (e.g., environmental noise and other talkers) and reverberation, both of which can mask or distort the target speech information. Second, real-world listening often involves complex bottom-up sound encoding and top-down cognitive processes as an individual hears and processes speech in real time (McClelland & Elman, 1986). Finally, real-world listening is a bit more dynamic than speech recognition testing. In daily social interactions, listeners need to not only recognize the speech of a talker but also use that speech in a meaningful way (i.e., comprehending or learning from the content of a message).

Because real-time language processing contains both bottom-up and top-down processes, a listener is able to engage both overt and covert strategies to facilitate their perception when access to speech is disrupted. To better understand this, think back to a time when you were trying to have a conversation with a friend in a noisy environment. You may have physically arranged yourself to hear her better (i.e., decreased the social distance between you). You may have focused more on her face (i.e., gathered visual speech information). You may have found yourself exerting more effort to focus on what she was saying (i.e., allocated more of your attention to her while simultaneously inhibiting attention to the noise around you). You may recall that you were able to follow the conversation just fine.

What you may not have realized is that your accurate speech perception in this complex listening environment came at a cost. The additional overt and covert strategies that listeners employ to maintain accurate recognition of degraded speech require a reallocation of cognitive resources. In other words, recognizing words and processing ongoing speech are "effortful," as evidenced by physiological, behavioral, and subjective measures. For example, individuals who exert more effort during a listening task tend to show increased physiological activity, including changes in skin conductance, cardiovascular function, and pupil dilation (as reviewed in Francis & Love, 2020). Additionally, behavioral studies have indicated that processing degraded speech requires greater attentional resources. Given that individuals have a finite set of attentional resources (Kahneman, 1973), allocating more attention to understanding speech means that a listener has fewer cognitive resources to multitask or to manipulate the speech content in a meaningful way. Support for this idea has been provided by studies implementing dual-task paradigms. Specifically, both children's and adults' performance on secondary tasks declines as the target speech of the primary speech recognition task becomes more degraded (Grieco-Calub et al., 2017; Pals et al., 2013; Ward et al., 2017). Additionally, there is growing evidence that lexical access in both children and adults is slower when the auditory input is degraded by spectral filtering, background noise, hearing loss, or soft speech (Grieco-Calub et al., 2009; Hendrickson et al., 2020; McMurray et al., 2017; Newman & Chatterjee, 2013). The results from this work suggest that listeners who are slower to recognize words in these situations may be at risk for missing speech content during a social interaction, where speech is spoken rapidly with few pauses.

Simply put, there are downstream consequences of needing to work harder to understand degraded speech. In the last decade or so, our field has attempted to model the cognitive and communicative costs related to this "effortful" listening. Theoretical models, such as the ease of language understanding model (Rönnberg et al., 2013) and the framework for understanding effortful listening (Pichora-Fuller et al., 2016), provide ways to consider the strategies that listeners use to achieve good speech perception when their access to speech is disrupted due to hearing loss, background noise, or other forms of spectral filtering. Work in this area suggests that language knowledge, cognitive capacity, and cognitive function are important components in this process. For example, listeners who score higher on standardized measures of inhibitory control (i.e., executive function) and verbal working


memory tend to perform better in adverse listening conditions than listeners who score lower on comparable measures (e.g., Ng & Rönnberg, 2020; Pichora-Fuller et al., 1995). Additionally, the ability of listeners to use existing knowledge to "fill in the gaps" of missing speech will likely influence their performance. Specifically, better language skills promote better speech processing (Benichov et al., 2012; Lash et al., 2013).

Another consideration is how effortful listening influences downstream language processing and language learning. If a listener is allocating more cognitive effort to processing speech and is slower to comprehend it, it is reasonable to expect that their comprehension will be impaired and their ability to remember or learn from the speech input will be disrupted. Consistent with this idea, school-age children perform more poorly on comprehension tasks when speech is presented in background noise (Sullivan et al., 2015). Additionally, younger children learn fewer words when the target speech is degraded by background noise (Avivi-Reich et al., 2020; Han et al., 2019; McMillan & Saffran, 2016). These issues are particularly relevant for children who are attending in-person school, either full-time or part-time (e.g., as in some hybrid academic models), where the teacher and all of the students are required to wear face masks. Classrooms typically contain significant amounts of background noise and reverberation, which negatively impact speech perception and verbal working memory tasks (Crandell & Smaldino, 2000; Klatte et al., 2013). In summary, the myriad sources of speech degradation in classrooms, including environmental noise, background talkers, and the use of face masks, may negatively impact the extent to which children can access and retain speech content.

How Do Face Masks Influence the Social Components of Face-to-Face Communication?

Face masks not only disrupt access to the acoustic and visual aspects of speech, but they also disrupt the social aspects of face-to-face communication (smiles, frowns, smirks, etc.) that serve as subtle nonverbal cues, context for verbal content, and emotion. Individuals use a variety of facial and bodily cues to both communicate and perceive emotion (Bänziger et al., 2009; Planalp, 1996). The face, however, provides particularly salient cues about the emotional state of a communication partner. Although the eyes and mouth are both able to communicate aspects of emotion, both static and dynamic mouth positions tend to do a better job (Blais et al., 2012). Thus, during times when social interaction involves the use of face masks, emotion perception may be altered.

It is important to note, however, that emotion is also communicated through nonverbal prosodic cues, which include the intonation, pitch, and stress of the talker's vocal production (e.g., Juslin & Laukka, 2003). For example, different types of emotions have different prosodic profiles that listeners use to determine the emotional state of the talker. Because emotion can be communicated by talkers across auditory and visual modalities, listeners may be able to flexibly shift their attention to the most salient cues. Thus, one might suspect that emotion perception may not be disrupted when a talker uses a face mask. There is evidence to suggest, however, that the accuracy of unimodal emotion perception varies across different emotions. For example, De Silva et al. (1997) showed that individuals were more accurate at identifying emotions such as anger, happiness, surprise, and dislike when provided with a visual-only (vs. auditory-only) cue. In contrast, individuals perceived sadness and fear with higher accuracy with an auditory-only (vs. visual-only) cue. Listeners are the most accurate, however, when they have access to both acoustic and visual markers of emotion (Busso et al., 2004; Chen et al., 1998). In summary, face masks and social distancing may alter a listener's perception of a talker's emotion, which could have consequences for their perception of a social interaction or the intent of the communication act.

Strategies on How to Overcome the Negative Effects of the COVID-19 Mitigation Strategies on Face-to-Face Communication

Understanding the existing and potentially cascading negative effects of wearing face masks and social distancing on communication is critical for both speech-language pathologists and audiologists. As discussed, there may be patient complaints about vocal effort and complaints about difficulty with understanding speech. There are, however, strategies that can be implemented to potentially overcome these negative effects.

One strategy implemented by individuals, especially those who interact with patients or students who have hearing loss, has been to use transparent masks and face shields. Both pieces of equipment offer protection against airborne pathogens and provide the listener with access to visual speech. Given the benefits of visual speech cues on speech recognition when acoustic speech is degraded, it is reasonable to expect that access to visual speech via transparent masks and face shields can counteract the acoustic filtering properties of the mask. Consistent with this idea, recent studies show that access to a talker's mouth (i.e., access to visual speech) through a transparent mask improves listeners' speech recognition (Lalonde et al., 2021; Thibodeau et al., 2021). It is important to note, however, that the materials used to make transparent masks and face shields have variable acoustic filtering properties and variable transparency due to reflections or fogging (Corey et al., 2020; L. Hunter, personal communication, June 3, 2021). Thus, the benefits observed with transparent equipment—and thus access to visual speech—may be limited if the material itself causes too much degradation to the speech signal.

Another strategy that is effective at improving speech recognition in the presence of acoustic signal degradation is the use of FM systems or remote microphones (e.g., the Roger microphone from Phonak). These systems are designed to improve the intensity of the talker's voice relative to other


sounds present in the environment. They are frequently utilized in classrooms to provide children who are deaf or hard of hearing with better access to their teacher's and classmates' voices. If masks and social distancing continue to be mandated within schools, FM systems and remote microphones may be essential to ensure that children are able to hear talkers in the classroom.

If listeners are better able to recognize speech because they have access to visual speech or because they have access to an improved signal-to-noise ratio, then the onus on the talker to overcome the acoustic filtering of face masks and social distancing is lessened. Consistent with this idea, Beechey et al. (2020) showed the benefit of hearing amplification to both the adult users with hearing loss and their communication partners. Together, the data suggest that finding ways to improve the fidelity of the speech signal may be critical to ensuring communication efficiency for people who use face masks and practice social distancing.

The Long-Term Effects of Social Isolation on Cognitive and Social Functions

The COVID-19 mitigation strategies have not only impacted face-to-face communication; it would be remiss not to address the effects of reduced social interaction on individuals across the life span. Social interaction is fundamental to the human experience. It is critical in infancy to promote language, cognitive, and social development. Social interaction promotes vocabulary development and social emotional learning throughout childhood. As we age, even small doses of social interaction improve our cognitive function (Evans et al., 2018; Glei et al., 2005; Ybarra et al., 2008).

In the early days of the global pandemic, people were instructed to isolate, and shelter-in-place orders were issued across the country. For family units, this meant more time together. In contrast, for individuals living alone, this meant reduced human, face-to-face contact. Older adults were especially impacted by shelter-in-place orders given that they were viewed as a population more vulnerable to the fatal effects of the virus. Routine social activities, such as going to religious services, having lunch with friends, or celebrating holidays with extended family members, were strongly discouraged and, in some areas, no longer permitted. There has been increasing evidence that reduced social interaction, or social isolation, is linked to cognitive decline

connect people, there have been disparities in individuals' accessibility to this technology. Reasons for these disparities include lack of devices that are capable of videoconferencing, unstable wireless access, and lack of technical knowledge of the end user. A pre-COVID study by the Pew Research Center (2019) showed that Internet access and technology use vary widely based on household income and age, with lower-income and older adults having less access. The immediate effects of this disparity have resulted in inequitable education, health care, and social connectedness. The longer term effects of technology disparities and social isolation related to COVID-19 on communication health in individuals across the life span, however, will likely not be realized for some time.

Summary

Face masks and social distancing have been proven to be successful at mitigating the spread of COVID-19. Despite this success, they may create challenges to effective communication, which can have negative downstream effects on speech production, speech recognition, language processing, and cognition. Individuals with preexisting communication disorders may be at a higher risk for these negative effects. Understanding the mechanisms that underlie communication challenges associated with face masks and social distancing will improve our ability to serve these patients.

Acknowledgments

This work was supported by National Institute on Deafness and Other Communication Disorders Grants R01DC018596 and R01DC017984 and National Institute of Child Health and Human Development Grant R01HD100439. The author would like to thank Natalie Krasuska, Diana Cortez, Samuel Shipley, and Kelsey Sherry for helpful comments on the article.

References

Avivi-Reich, M., Roberts, M. Y., & Grieco-Calub, T. M. (2020). Quantifying the effects of background speech babble on preschool children's novel word learning in a multisession paradigm: A preliminary study. Journal of Speech, Language, and Hearing Research, 63(1), 345–356. https://doi.org/10.1044/2019_JSLHR-H-19-0083

Bänziger, T., Grandjean, D., & Scherer, K. R. (2009). Emotion recognition from expressions in face, voice, and body: The
in older adults (Lin et al., 2013; Maharani et al., 2019). Multimodal Emotion Recognition Test (MERT). Emotion,
Therefore, the shelter-in-place orders throughout the 9(5), 691–704. https://doi.org/10.1037/a0017088
COVID-19 pandemic may have long-lasting effects on Beechey, T., Buchholz, J. M., & Keidser, G. (2020). Hearing aid
the elderly population within communities. amplification reduces communication effort of people with
One strategy to combat social isolation has been the hearing impairment and their conversation partners. Journal
implementation of videoconferencing technology, which of Speech, Language, and Hearing Research, 63(4), 1299–1311.
has enabled continued connection to families, friends, and https://doi.org/10.1044/2020_JSLHR-19-00350
Benichov, J., Cox, L. C., Tun, P. A., & Wingfield, A. (2012). Word
work colleagues. For many people, work, school, and so- recognition within a linguistic context: Effects of age, hearing
cial networking simply pivoted to an online format. The acuity, verbal ability, and cognitive function. Ear and Hearing,
reality of the situation, however, was that the availability 32(2), 250–256. https://doi.org/10.1097/AUD.0b013e31822f680f
of videoconferencing technology has not been equitable. Blais, C., Roy, C., Fiset, D., Arguin, M., & Gosselin, F. (2012). The
Despite the widespread success of videoconferencing to eyes are not the window to basic emotions. Neuropsychologia,
50(12), 2830–2838. https://doi.org/10.1016/j.neuropsychologia.2012.08.010

Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C. M., Kazemzadeh, A., Lee, S., Neumann, U., & Narayanan, S. (2004). Analysis of emotion recognition using facial expressions, speech and multimodal information. In Proceedings of the 6th International Conference on Multimodal Interfaces (pp. 205–211). Association for Computing Machinery.

Centers for Disease Control and Prevention. (2020a). How COVID-19 spreads. https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/how-covid-spreads.html

Centers for Disease Control and Prevention. (2020b). Symptoms of coronavirus. https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html

Chen, L. S., Huang, T. S., Miyasato, T., & Nakatsu, R. (1998). Multimodal human emotion/expression recognition. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition (pp. 366–371). IEEE. https://doi.org/10.1109/AFGR.1998.670976

Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America, 25(5), 975–979. https://doi.org/10.1121/1.1907229

Chu, D. K., Akl, E. A., Duda, S., Solo, K., Yaacoub, S., Schünemann, H. J., & COVID-19 Systematic Urgent Review Group Effort (SURGE) Study Authors. (2020). Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: A systematic review and meta-analysis. The Lancet, 395(10242), 1973–1987. https://doi.org/10.1016/S0140-6736(20)31142-9

Corey, R. M., Jones, U., & Singer, A. C. (2020). Acoustic effects of medical, cloth, and transparent face masks on speech signals. The Journal of the Acoustical Society of America, 148(4), 2371–2375. https://doi.org/10.1121/10.0002279

Crandell, C. C., & Smaldino, J. J. (2000). Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech, and Hearing Services in Schools, 31(4), 362–370. https://doi.org/10.1044/0161-1461.3104.362

De Silva, L. C., Miyasato, T., & Nakatsu, R. (1997). Facial emotion recognition using multi-modal information. Proceedings of 1997 International Conference on Information, Communications and Signal Processing (ICICS). Theme: Trends in Information Systems Engineering and Wireless Multimedia Communications, 1, 1397–1401.

Erber, N. P. (1969). Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech and Hearing Research, 12(2), 423–425. https://doi.org/10.1044/jshr.1202.423

Evans, I. E. M., Llewellyn, D. J., Matthews, F. E., Woods, R. T., Brayne, C., Clare, L., & CFAS-Wales Research Team. (2018). Social isolation, cognitive reserve, and cognition in healthy older people. PLOS ONE, 13(8), Article e0201008. https://doi.org/10.1371/journal.pone.0201008

Everhardt, M. K., Sarampalis, A., Coler, M., Başkent, D., & Lowie, W. (2020). Meta-analysis on the identification of linguistic and emotional prosody in cochlear implant users and vocoder simulations. Ear and Hearing, 41(5), 1092–1102. https://doi.org/10.1097/AUD.0000000000000863

Fecher, N., & Watt, D. (2013, September). Effects of forensically-realistic facial concealment on auditory–visual consonant recognition in quiet and noise conditions. Proceedings of AVSP 2013 (International Conference on Auditory–Visual Speech Processing), Annecy, France.

Festen, J. M., & Plomp, R. (1990). Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. The Journal of the Acoustical Society of America, 88(4), 1725–1736. https://doi.org/10.1121/1.400247

Fiorella, M. L., Cavallaro, G., Di Nicola, V., & Quaranta, N. (2021). Voice differences when wearing and not wearing a surgical mask. Journal of Voice. Advance online publication. https://doi.org/10.1016/j.jvoice.2021.01.026

Francis, A. L., & Love, J. (2020). Listening effort: Are we measuring cognition or affect, or both? Wiley Interdisciplinary Reviews: Cognitive Science, 11(1), e1514. https://doi.org/10.1002/wcs.1514

Glei, D. A., Landau, D. A., Goldman, N., Chuang, Y. L., Rodríguez, G., & Weinstein, M. (2005). Participating in social activities helps preserve cognitive function: An analysis of a longitudinal, population-based study of the elderly. International Journal of Epidemiology, 34(4), 864–871. https://doi.org/10.1093/ije/dyi049

Goldin, A., Weinstein, B., & Shiman, N. (2020). How do medical masks degrade speech perception? Hearing Review, 27(5), 8–9.

Grant, K. W., & Seitz, P. F. (2000). The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America, 108(3), 1197–1208. https://doi.org/10.1121/1.1288668

Grieco-Calub, T. M., Saffran, J. R., & Litovsky, R. Y. (2009). Spoken word recognition in toddlers who use cochlear implants. Journal of Speech, Language, and Hearing Research, 52(6), 1390–1400. https://doi.org/10.1044/1092-4388(2009/08-0154)

Grieco-Calub, T. M., Ward, K. M., & Brehm, L. (2017). Multitasking during degraded speech recognition in school-age children. Trends in Hearing, 21, 2331216516686786. https://doi.org/10.1177/2331216516686786

Han, M. K., Storkel, H., & Bontempo, D. E. (2019). The effect of neighborhood density on children's word learning in noise. Journal of Child Language, 46(1), 153–169. https://doi.org/10.1017/S0305000918000284

Hendrickson, K., Spinelli, J., & Walker, E. (2020). Cognitive processes underlying spoken word recognition during soft speech. Cognition, 198, 104196. https://doi.org/10.1016/j.cognition.2020.104196

Jordan, T. R., & Thomas, S. M. (2011). When half a face is as good as a whole: Effects of simple substantial occlusion on visual and audiovisual speech perception. Attention, Perception, & Psychophysics, 73(7), 2270–2285. https://doi.org/10.3758/s13414-011-0152-4

Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770–814. https://doi.org/10.1037/0033-2909.129.5.770

Kahneman, D. (1973). Attention and effort. Prentice-Hall.

Klatte, M., Bergström, K., & Lachmann, T. (2013). Does noise affect learning? A short review on noise effects on cognitive performance in children. Frontiers in Psychology, 4, 578.

Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218(4577), 1138–1141. https://doi.org/10.1126/science.7146899

Lalonde, K., Buss, E., Miller, M. K., & Leibold, L. J. (2021, March 28–31). Effects of face masks on auditory and audiovisual speech recognition in children with and without hearing loss [Poster presentation]. 8th International Pediatric Audiology Conference [Virtual conference]. https://osf.io/xrqft/

Lalonde, K., & Holt, R. F. (2015). Preschoolers benefit from visually salient speech cues. Journal of Speech, Language, and Hearing Research, 58(1), 135–150. https://doi.org/10.1044/2014_JSLHR-H-13-0343

Lash, A., Rogers, C. S., Zoller, A., & Wingfield, A. (2013). Expectation and entropy in spoken word recognition: Effects of age and hearing acuity. Experimental Aging Research, 39(3), 235–253. https://doi.org/10.1080/0361073X.2013.779175

Lewkowicz, D. J., & Hansen-Tift, A. M. (2012). Infants deploy selective attention to the mouth of a talking face when learning speech. Proceedings of the National Academy of Sciences, 109(5), 1431–1436. https://doi.org/10.1073/pnas.1114783109

Lin, F. R., Yaffe, K., Xia, J., Xue, Q.-L., Harris, T. B., Purchase-Helzner, E., Satterfield, S., Ayonayon, H. N., Ferrucci, L., Simonsick, E. M., & Health ABC Study Group. (2013). Hearing loss and cognitive decline in older adults. JAMA Internal Medicine, 173(4), 293–299. https://doi.org/10.1001/jamainternmed.2013.1868

MacDonald, K., Marchman, V. A., Fernald, A., & Frank, M. C. (2020). Children flexibly seek visual information to support signed and spoken language comprehension. Journal of Experimental Psychology: General, 149(6), 1078–1096. https://doi.org/10.1037/xge0000702

MacLeod, A., & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology, 21(2), 131–141. https://doi.org/10.3109/03005368709077786

Magnuson, J. S., & Nusbaum, H. C. (2007). Acoustic differences, listener expectations, and the perceptual accommodation of talker variability. Journal of Experimental Psychology: Human Perception and Performance, 33(2), 391–409. https://doi.org/10.1037/0096-1523.33.2.391

Maharani, A., Pendleton, N., & Leroi, I. (2019). Hearing impairment, loneliness, social isolation, and cognitive function: Longitudinal analysis using English longitudinal study on ageing. The American Journal of Geriatric Psychiatry, 27(12), 1348–1356. https://doi.org/10.1016/j.jagp.2019.07.010

Markel, H., Lipman, H. B., Navarro, J. A., Sloan, A., Michalsen, J. R., Stern, A. M., & Cetron, M. S. (2007). Nonpharmaceutical interventions implemented by U.S. cities during the 1918–1919 influenza pandemic. JAMA, 298(6), 644–654. https://doi.org/10.1001/jama.298.6.644

McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86. https://doi.org/10.1016/0010-0285(86)90015-0

McMillan, B. T. M., & Saffran, J. R. (2016). Learning in complex environments: The effects of background speech on early word learning. Child Development, 87(6), 1841–1855. https://doi.org/10.1111/cdev.12559

McMurray, B., Farris-Trimble, A., & Rigler, H. (2017). Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Cognition, 169, 147–164. https://doi.org/10.1016/j.cognition.2017.08.013

Mendel, L. L., Gardino, J. A., & Atcherson, S. R. (2008). Speech understanding using surgical masks: A problem in health care? Journal of the American Academy of Audiology, 19(9), 686–695. https://doi.org/10.3766/jaaa.19.9.4

Michael, D. D., Siegel, G. M., & Pick, H. L., Jr. (1995). Effects of distance on vocal intensity. Journal of Speech and Hearing Research, 38(5), 1176–1183. https://doi.org/10.1044/jshr.3805.1176

Newman, R. S., & Chatterjee, M. (2013). Toddlers' recognition of noise-vocoded speech. The Journal of the Acoustical Society of America, 133(1), 483–494. https://doi.org/10.1121/1.4770241

Newman, R. S., Morini, G., Shroads, E., & Chatterjee, M. (2020). Toddlers' fast-mapping from noise-vocoded speech. The Journal of the Acoustical Society of America, 147(4), 2432–2441. https://doi.org/10.1121/10.0001129

Ng, E. H. N., & Rönnberg, J. (2020). Hearing aid experience and background noise affect the robust relationship between working memory and speech recognition in noise. International Journal of Audiology, 59(3), 208–218. https://doi.org/10.1080/14992027.2019.1677951

Pals, C., Sarampalis, A., & Başkent, D. (2013). Listening effort with cochlear implant simulations. Journal of Speech, Language, and Hearing Research, 56(4), 1075–1084. https://doi.org/10.1044/1092-4388(2012/12-0074)

Pew Research Center. (2019, June). Mobile technology and home broadband 2019. https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2019/06/PI_2019.06.13_Mobile-Technology-and-Home-Broadband_FINAL2.pdf

Pichora-Fuller, M. K., Kramer, S. E., Eckert, M. A., Edwards, B., Hornsby, B. W., Humes, L. E., Lemke, U., Lunner, T., Matthen, M., Mackersie, C. L., Naylor, G., Phillips, N. A., Richter, M., Rudner, M., Sommers, M. S., Tremblay, K. L., & Wingfield, A. (2016). Hearing impairment and cognitive energy: The Framework for Understanding Effortful Listening (FUEL). Ear and Hearing, 37, 5S–27S. https://doi.org/10.1097/AUD.0000000000000312

Pichora-Fuller, M. K., Schneider, B. A., & Daneman, M. (1995). How young and old adults listen to and remember speech in noise. The Journal of the Acoustical Society of America, 97(1), 593–608. https://doi.org/10.1121/1.412282

Planalp, S. (1996). Varieties of cues to emotion in naturally occurring situations. Cognition and Emotion, 10(2), 137–154. https://doi.org/10.1080/026999396380303

Ross, L. A., Molholm, S., Blanco, D., Gomez-Ramirez, M., Saint-Amour, D., & Foxe, J. J. (2011). The development of multisensory speech perception continues into the late childhood years. European Journal of Neuroscience, 33(12), 2329–2337. https://doi.org/10.1111/j.1460-9568.2011.07685.x

Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., Dahlström, Ö., Signoret, C., Stenfelt, S., Pichora-Fuller, M. K., & Rudner, M. (2013). The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7, 31.

Schwartz, J.-L., Berthommier, F., & Savariaux, C. (2004). Seeing to hear better: Evidence for early audio-visual interactions in speech identification. Cognition, 93(2), B69–B78. https://doi.org/10.1016/j.cognition.2004.01.006

Sullivan, J. R., Osman, H., & Schafer, E. C. (2015). The effect of noise on the relationship between auditory working memory and comprehension in school-age children. Journal of Speech, Language, and Hearing Research, 58(3), 1043–1051. https://doi.org/10.1044/2015_JSLHR-H-14-0204

Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. The Journal of the Acoustical Society of America, 26(2), 212–215. https://doi.org/10.1121/1.1907309

Swerts, M., & Krahmer, E. (2008). Facial expression and prosodic prominence: Effects of modality and facial area. Journal of Phonetics, 36(2), 219–238. https://doi.org/10.1016/j.wocn.2007.05.001

Thibodeau, L. M., Thibodeau-Nielsen, R. B., Tran, C. M. Q., & de Souza Jacob, R. T. (2021). Communicating during COVID-19: The effect of transparent masks for speech recognition in noise. Ear and Hearing, 42(4), 772–781. https://doi.org/10.1097/AUD.0000000000001065

Thomas, S. M., & Jordan, T. R. (2004). Contributions of oral and extraoral facial movement to visual and audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 30(5), 873–888. https://doi.org/10.1037/0096-1523.30.5.873

Tinnemore, A. R., Zion, D. J., Kulkarni, A. M., & Chatterjee, M. (2018). Children's recognition of emotional prosody in spectrally-degraded speech is predicted by their age and cognitive status. Ear and Hearing, 39(5), 874–880. https://doi.org/10.1097/AUD.0000000000000546

Trude, A. M., & Brown-Schmidt, S. (2011). Talker-specific perceptual adaptation during online speech perception. Language and Cognitive Processes, 27(7–8), 979–1001. https://doi.org/10.1080/01690965.2011.597153

Vos, T. G., Dillon, M. T., Buss, E., Rooth, M. A., Bucker, A. L., Dillon, S., Pearson, A., Quinones, K., Richter, M. E., Roth, N., Young, A., & Dedmon, M. M. (2021). Influence of protective face coverings on the speech recognition of cochlear implant patients. The Laryngoscope, 131(6), E2038–E2043. https://doi.org/10.1002/lary.29447

Ward, K. M., Shen, J., Souza, P. E., & Grieco-Calub, T. M. (2017). Age-related differences in listening effort during degraded speech recognition. Ear and Hearing, 38(1), 74–84. https://doi.org/10.1097/AUD.0000000000000355

World Health Organization. (2020). Coronavirus. https://www.who.int/health-topics/coronavirus#tab=tab_1

Ybarra, O., Burnstein, E., Winkielman, P., Keller, M. C., Manis, M., Chan, E., & Rodriguez, J. (2008). Mental exercising through simple socializing: Social interaction promotes general cognitive functioning. Personality and Social Psychology Bulletin, 34(2), 248–259. https://doi.org/10.1177/0146167207310454

Perspectives of the ASHA Special Interest Groups • Vol. 6 • 1097–1105 • October 2021