
Infant Behavior and Development 54 (2019) 48–56


Full length article

Overt congruent facial reaction to dynamic emotional expressions in 9–10-month-old infants

Kazuhide Hashiya a,⁎⁎, Xianwei Meng a,b,⁎, Yusuke Uto c, Kana Tajiri d

a Faculty of Human-Environment Studies, Kyushu University, Fukuoka City, Fukuoka Prefecture, Japan
b Japan Society for the Promotion of Science, Tokyo, Japan
c Graduate School of Human-Environment Studies, Kyushu University, Fukuoka City, Fukuoka Prefecture, Japan
d The 21st Century Program, Kyushu University, Fukuoka City, Fukuoka Prefecture, Japan

ARTICLE INFO

Keywords: Dynamic expression; Facial mimicry; Facial action coding system; Infant

ABSTRACT

The current study aimed to extend the understanding of the early development of spontaneous facial reactions toward observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions that were artificially created with morphing technology. The infants’ facial responses were recorded, and the movements of facial action unit 12 (e.g., lip-corner raising, associated with happiness) and facial action unit 4 (e.g., brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as the repetition of presentation of the target emotional expressions.

Abbreviations: AU, action unit; FACS, facial action coding system; GLMMs, generalized linear mixed models; MMH, matched motor hypothesis

⁎ Corresponding author at: Graduate School of Education, Kyoto University, Kyoto City, Japan.
⁎⁎ Corresponding author.
E-mail addresses: hashiya@mindless.com (K. Hashiya), mokeni1211@gmail.com (X. Meng).

https://doi.org/10.1016/j.infbeh.2018.12.002
Received 29 June 2018; Received in revised form 30 November 2018; Accepted 9 December 2018
0163-6383/ © 2018 Elsevier Inc. All rights reserved.

1. Introduction

Reacting with congruent facial expressions to an observed face frequently occurs in daily life (Bavelas, Black, Lemery, & Mullett, 1986; Dimberg, 1982; Hess & Fischer, 2014). The congruent facial reaction is considered to contribute to the communication of affective states by facilitating both the observer’s understanding of the other’s emotional state through the experience of synchronized expressions, and the empathic reflection of the same signals to the partner (Hatfield, Cacioppo, & Rapson, 1992; Hess, Philippot, & Blairy, 1998). Reduced and temporally delayed congruent facial reactions have been identified in individuals with atypical social functioning, such as those with autism spectrum disorders and psychopathy (Pelphrey, Morris, McCarthy, & Labar, 2007; Sato, Toichi, Uono, & Kochiyama, 2012; Bird & Viding, 2014; Yoshimura, Sato, Uono, & Toichi, 2015), which has drawn attention to this phenomenon as a window on the nature of human sociality.
Different processes that are not mutually exclusive have been proposed to underlie the occurrence of congruent facial reactions. Specifically, the matched motor hypothesis (MMH) has posited that perceiving a specific behavior entrains that behavior in the observer, much like a pure motor copy process, even without the observer’s awareness (Dimberg, Thunberg, & Elmehed, 2000; Dimberg, Thunberg, & Grunedal, 2002; Gallese & Goldman, 1998; Lakin, Jefferis, Cheng, & Chartrand, 2003; Meltzoff & Decety, 2003; Rizzolatti & Craighero, 2004). Further, the mirrored muscular configuration in the observer might also synchronize the experienced emotions with the partner through a muscular feedback process (Dezecache, Eskenazi, & Grèzes, 2016; Hatfield, Cacioppo, & Rapson, 1994). On the other hand, the emotion-communicative perspective has theorized that congruent facial reactions involve emotional-communicative processes (Dezecache et al., 2016; Fischer & Hess, 2017; Grèzes & Dezecache, 2014). That is, congruent facial reactions reflect the shared emotional states between the observer and the partner, which are elicited by the observer’s evaluation of the information provided by the partner’s facial emotion while taking into account the social context (e.g., the relationship between the observer and the partner; Bourgeois & Hess, 2008; Fischer, Becker, & Veenstra, 2012; Seibt, Mühlberger, Likowski, & Weyers, 2015). This perspective might be supported by phenomena such as a competitor’s happiness eliciting a negative reaction rather than a congruent positive facial expression in the observer, as well as by evidence that congruent facial reactions can be elicited by non-facial emotional cues (e.g., vocal expressions; Dimberg & Karlsson, 1997; Lanzetta & Englis, 1989; Likowski, Mühlberger, Seibt, Pauli, & Weyers, 2011; Murata, Saito, Schug, Ogawa, & Kameda, 2016; Tamietto & de Gelder, 2008).
It remains unclear at this point how the processes underlying congruent facial reactions develop early in life. Consistent with the MMH, it has been proposed that neonates spontaneously imitate the orofacial actions of others (e.g., tongue protrusion and mouth opening) from birth (Kugiumutzakis, 1998; Maratos, 1973; Meltzoff & Moore, 1977; Myowa-Yamakoshi, Tomonaga, Tanaka, & Matsuzawa, 2004; Simpson, Murray, Paukner, & Ferrari, 2014), though this claim may require further investigation (Meltzoff et al., 2017; Oostenbroek et al., 2016). Conversely, other studies have suggested that the ability to demonstrate congruent facial reactions is postnatally acquired, possibly through complex neurocognitive processes that contribute to the evaluation of emotional information when observing facial expressions. For instance, Montague and Walker-Andrews (2001) suggested that 4-month-old infants may respond to the meaning or valence of another’s facial and vocal expressions (i.e., happy, angry, sad, fearful) during a peekaboo game. Isomura and Nakano (2016) presented dynamic clips of audiovisual, visual, and auditory emotions to four- to five-month-old infants and measured facial electromyographic activity over the corrugator supercilii (i.e., brow) and zygomaticus major (i.e., cheek). Stimuli-corresponding muscle responses were observed for the audiovisual stimuli, but not for the visual or auditory unimodal emotion stimuli. Moreover, when presented with happy, angry, and fearful facial expressions, four-month-old infants did not show selective electromyographic activation of the facial muscles, while seven-month-old infants did show selective facial reactions for happy and fearful expressions, but not for angry ones (Kaiser, Crespo-Llado, Turati, & Geangu, 2017). Further, a study that used virtual humanoid faces indicated that 12-month-old infants display valence-congruent expressions (i.e., positive and negative facial expressions in response to the presented positive vs. negative facial expressions) rather than emotion-specific facial responses when passively watching modeled facial expressions (Soussignan et al., 2017). Taken together, these studies offer some support for the emotion-communicative view of early congruent facial reaction: infants may treat others’ emotional signals as a means of conveying emotional information and communicative intentions in a social context. Thus, their facial responses are influenced by these evaluation processes, which are underpinned by their emotion-recognition abilities and sensitivity to the affective tone of facial expressions (Hess & Fischer, 2013, 2014; Kaiser et al., 2017; Soussignan et al., 2017).
The current study aims to extend our understanding of the early development of congruent facial reactions using two methods of evaluation. First, previous studies with EMG methodology have suggested that the occurrence of congruent facial reactions may require an infant’s ability for emotion recognition. Infants seem first to become able to discriminate emotions from bimodal audiovisual stimuli, and later this ability expands to unimodal auditory and visual stimuli (Flom & Bahrick, 2007; Isomura & Nakano, 2016; Walker-Andrews, 1997). From 7 months old, infants are able to discriminate between happy and angry expressions through visual unimodal stimulation (Caron, Caron, & MacLean, 1988; Flom & Bahrick, 2007). Moreover, sensitivity to the common affect among facial expressions (rather than the feature information of an emotional face) seems to develop between 7 and 10 months old (Grossmann, Striano, & Friederici, 2007; Ludemann, 1991). Therefore, Hypothesis One theorized that infants around this developmental stage (9–10 months old) would selectively show observable facial reactions to dynamic happy and angry human facial expressions. Hypothesis Two posited that congruent, but not incongruent, facial reactions would be enhanced during repeated exposure to the presented facial emotions. This prediction follows from the emotion-communicative perspective on congruent facial reactions. Specifically, repeatedly watching a specific facial emotion might reduce the ambiguity of the emotional context of its presentation and thereby enhance the recognizability of the facial expression. This process might elicit convergent emotional states in infants, which would contribute to the presence of congruent facial reactions (Mühlberger et al., 2010; Rymarczyk, Biele, Grabowska, & Majczynski, 2011). Testing the latter hypothesis would help to assess the validity of the MMH against the emotion-communicative perspective regarding the development of early congruent facial reactions, as the MMH predicts that spontaneously imitated facial movements are independent of repeated presentation (Hess & Fischer, 2014).
In the current experiment, infants observed videos of dynamic human facial expressions that were artificially created with morphing technology. The infants’ facial responses were recorded, and the movements of facial action unit 12 (AU12; i.e., lip-corner raising, associated with happiness) and facial action unit 4 (AU4; i.e., brow-lowering, associated with anger) were visually evaluated by multiple naïve adult raters (Ekman & Friesen, 1978). The clearly defined relationships between AUs and the underlying facial muscles (i.e., AU12 and AU4 respectively relate to the zygomaticus major and the corrugator supercilii) allow the current hypotheses to be formulated on the basis of previous EMG studies. However, AU coding is an image-based method, and the movements of the AUs do not provide biomechanical information about the degree of muscle activation (Donato, Bartlett, Hager, Ekman, & Sejnowski, 1999). Moreover, using dynamic stimuli comprising reliable facial emotions afforded relatively natural presentations of the emotions (Sato, Fujimura, & Suzuki, 2008; Seibt et al., 2015) and facilitated the investigation of early facial mimicry in situations where only visual information is available.


2. Method

2.1. Participants

Participants were from Japanese families living in the city of Fukuoka and were recruited from a database managed by Kyushu
University Infant Scientist Lab. The study was approved by the ethics committee of the Faculty of Human-Environment Studies at
Kyushu University and was conducted in accordance with American Psychological Association ethical standards and the Declaration
of Helsinki. Written informed consent was obtained from the children’s caregivers before the experiment. The final sample included
46 9- to 10-month-old infants (21 girls; Mage = 299 days, SD = 15.37, range = 275–325 days) and their mothers. An additional five infants participated but were excluded because of fussiness (n = 1), hiccups (n = 1), or experimental error (n = 3).

2.2. Setting

The experiment was conducted in a quiet room with infants seated on their mothers’ laps, approximately 80 cm from a monitor (23-inch TFT, 1920 × 1080 pixels) on which the experimental stimuli were presented. Two built-in speakers on the monitor transmitted the electronic sounds (i.e., a “beep” sound). Two video cameras, one in front of the participant (on top of the monitor) and one behind the participant to the right, recorded the infants’ facial expressions and the stimuli, respectively. The recordings were synchronized through a video mixer. The experimenter, who was hidden from the participants, controlled the stimuli presentation while concurrently monitoring participants’ behaviors through a video screen (23-inch TFT, 1920 × 1080 pixels).

2.3. Stimuli

Infants watched a 1.5-min video sequence that consisted of 12 silent dynamic computer-morphing videos (trials). Each video involved a smooth 3.5-second change in an adult actor’s facial expression (29 cm × 20 cm). This comprised: (1) a static image of a neutral facial expression (1 s), to (2) a dynamic change to a specific facial expression (happiness or anger; 1.5 s), to (3) a static image of that specific facial expression (1 s; Fig. 1). Each trial was bookended by a blank gray screen (1.5 s). We divided the 12 trials into four blocks, with each block comprising three trials concerning a particular type of facial expression change. To maintain the infants’ attention, inter-block intervals featuring images of a cartoon character (without overt facial expression) were presented at the beginning of each block, with a 1 s sound (6 s). Also, a 17-second video of dynamic color changes was inserted between the first half (i.e., the first two blocks) and the second half (i.e., the second two blocks) of the sequence.
Static images of neutral, angry, and happy facial expressions were chosen from a database containing images of adult Japanese models (four women: F03, F10, F13, F16, and four men: M01, M02, M06, M09; Facial Expression Image Database (DB99), ATR-Promotions, Inc., Japan). The dynamic emotional changes were created by connecting 50 pictures that morphed from the neutral to the static happy or angry photographs, in fixed 2% steps from 0 to 100% (Sqirlz Morph, xiberpix.com); a minimal illustration of this frame-generation step is sketched below. Two pseudo-random presentation orders were used: (a) M06–F03–M01 (happy), F16–M09–F10 (angry), F13–M02–F03 (angry), M02–F16–M09 (happy), and (b) M02–F16–M09 (happy), F13–M02–F03 (angry), M06–F03–M01 (happy), F16–M09–F10 (angry). Each participant was randomly assigned to one of the two patterns. We fixed the first block to present happy facial expressions, as preliminary research suggested that infants are more likely to get fussy when initially observing angry expressions.
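
For illustration, the following R sketch shows how such a sequence of intermediate frames could be generated as a simple pixel-wise cross-dissolve between the neutral and emotional photographs. This is only a simplified stand-in for the feature-based warping that dedicated morphing software such as Sqirlz Morph performs, and the file names are hypothetical placeholders.

library(png)  # readPNG()/writePNG() read and write image arrays

# Hypothetical input images of identical dimensions
neutral <- readPNG("M06_neutral.png")
happy <- readPNG("M06_happy.png")

# 50 frames in fixed 2% steps (2%, 4%, ..., 100% emotional expression)
for (i in 1:50) {
  w <- i * 0.02                           # blend weight for this frame
  frame <- (1 - w) * neutral + w * happy  # pixel-wise linear blend
  writePNG(frame, sprintf("M06_morph_%02d.png", i))
}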

2.4. Procedure

Each infant completed a warm-up phase, involving playing with the experimenters, which served to establish a cooperative
relationship. Then, the mother was seated in front of the monitor with the infant on her lap. The stimuli were presented when it was

Fig. 1. Examples of the dynamic computer-morphing facial expressions for happy and angry facial emotions.


confirmed that the infant was attending to the monitor. Mothers were instructed not to interact with their infants and to keep their own eyes closed to reduce experimental artefacts.

2.5. Coding

We extracted (using Final Cut Pro X) and coded (using ELAN; Lausberg & Sloetjes, 2009) the video records of the participants’ overt facial expressions during the 2500-ms intervals following the morphing onset. Following the Facial Action Coding System (FACS) manual, an anatomy-based system for comprehensively describing visible facial muscular movements in terms of action units (AUs; Ekman & Friesen, 1978), we coded AU4 for the corrugator supercilii and AU12 for the zygomaticus major. A 2500-ms interval, which includes both the transition between the neutral and emotional expressions (1500 ms) and the peak expressivity (1000 ms), was not expected to be long enough to elicit multiple facial reactions in infants. Thus, a simple and clear coding criterion was used to examine the effects of the presented facial expressions and their repeated presentation on the infants’ facial reactions (Isomura & Nakano, 2016; Kaiser et al., 2017). A coder blind to the conditions made binary yes/no judgments on whether each AU appeared on the participant’s face at least once during the 2500-ms interval. A second coder independently re-scored half of the trials. Trials on which the two coders disagreed (55 trials) were re-coded independently by a third coder. Trial scores for the corresponding AUs were treated as missing values if: (1) the infant viewed the stimuli for less than one second in total, (2) the presence of the specific AU could not be judged due to obstructions (e.g., the lip-corner was covered by a hand), or (3) the presence of the specific AU was detected before the participant viewed the stimuli.
The numbers of trials treated as missing were 112 (AU4) and 143 (AU12) out of 1104. Inter-rater reliability analysis using the unweighted Cohen’s kappa statistic, with each trial coded into three categories (presence, absence, or missing value), revealed high inter-rater consistency for AU4 (N = 276, κ = .88, p < .001; 95% CI [.83, .93]) and AU12 (N = 276, κ = .86, p < .001; 95% CI [.81, .91]; Hallgren, 2012).
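
As an illustration, a reliability check of this kind could be computed in R roughly as follows. This is a minimal sketch assuming the two coders’ trial-level judgments are stored as three-level factors (presence, absence, missing); the object and column names are hypothetical.

library(irr)  # kappa2() computes Cohen's kappa for two raters

# Hypothetical trial-level judgments from the two coders
lev <- c("presence", "absence", "missing")
ratings <- data.frame(
  coder1 = factor(c("presence", "absence", "missing", "absence"), levels = lev),
  coder2 = factor(c("presence", "absence", "absence", "absence"), levels = lev)
)

kappa2(ratings, weight = "unweighted")  # unweighted Cohen's kappa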

2.6. Data analysis

Statistical analyses were conducted using R (version 3.5.0), and all reported p-values are two-tailed. To examine the hypotheses, we investigated whether the presence of each AU depended on the congruency of the AU with the presented facial expression (i.e., the presence of AU12 and AU4 is respectively congruent with happy and angry stimuli) and whether the presence of these reactions increased during observation of repeated presentations of facial expressions. In all models, we used generalized linear mixed models (GLMMs) with binomial error and logit link functions (1 = presence, 0 = absence), including the individual identity and gender of the infants, presentation pattern (a, b), block (first, second), and the actor in the presented stimuli as random effects; a sketch of this model specification follows below.
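
The following is a minimal sketch of this model specification, assuming a long-format data frame dat with one row per trial and AU, and hypothetical column names (au_present, au_type, trial, infant, gender, pattern, block, actor). The paper does not name the fitting package, so lme4 is assumed here, with lmtest supplying the lrtest function used in the Results.

library(lme4)    # glmer() fits GLMMs
library(lmtest)  # lrtest() compares nested models via likelihood ratio tests

# Binomial GLMM with logit link: AU presence (1/0) as a function of AU type
# (congruent vs. incongruent), with the random effects reported in the text
m1 <- glmer(au_present ~ au_type +
              (1 | infant) + (1 | gender) + (1 | pattern) +
              (1 | block) + (1 | actor),
            data = dat, family = binomial(link = "logit"))

# Confirm the fixed effect by comparison against a model without it
m0 <- update(m1, . ~ . - au_type)
lrtest(m0, m1)

# Interaction model (trial as numeric) used to test whether congruent
# responses increase across the three repeated presentations
m2 <- glmer(au_present ~ au_type * trial +
              (1 | infant) + (1 | gender) + (1 | pattern) +
              (1 | block) + (1 | actor),
            data = dat, family = binomial(link = "logit"))
lrtest(m1, m2)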

3. Results

We tested the overall effect of AU type (i.e., congruent vs. incongruent) by performing a GLMM including AU type as a fixed factor and by applying a likelihood ratio test (LRT; lrtest function) between the models with and without this factor to confirm the effect. The LRT revealed a significant effect of AU type, χ2(6) = 19.04, p < .001 (Fig. 2a). Specifically, the presence of “congruent” AUs, which are associated with the presented expressions, was found in more trials than that of “incongruent” AUs, β = 0.79, z = 4.36, p < .001 (Table 1). We then tested whether the occurrence of congruent AUs was influenced by repeated observations of the corresponding facial emotions. A GLMM including the interaction of AU type and trial (i.e., first, second, third; as a numeric variable) as fixed factors revealed that the presence of congruent AUs significantly increased across the three repeated presentations of matched facial emotions (β = 0.42, z = 3.54, p < .001). However, trial had no effect on the presence of incongruent AUs, β = 0.08, z = 0.67, p = .501, with a significant LRT for the interaction, χ2(6) = 22.48, p < .001.

Fig. 2. Differences in infants’ overt facial reactions during the three trials of the experiment. (a) shows the mean number of trials in which the presence of stimuli-corresponding AUs (congruent) and unmatched AUs (incongruent) was observed (left), and its changes during continuous presentation of stimulus clips over three trials (right). (b-1) and (b-2) show the mean number of trials in which the presence of infants’ AU12 and AU4 was observed during observation of happy and angry stimuli, and its changes across the three successive trials (in which the videos were repeated thrice). The dotted line shows the average number for each trial. Error bars represent standard errors of the mean (* p < 0.05).


Table 1
Infants’ overt facial responses in AU12 and AU4 toward facial emotional expressions across trials. Columns give, in order: mean of trials with AU presence | frequency of AU presence per trial | duration of AU presence per trial (ms). Standard deviations in parentheses.

AU 12 + 4, Congruent
  Overall:   0.29 (0.46)  1.06 (0.25)  1109.55 (774.24)
  1st trial: 0.26 (0.44)  1.11 (0.31)  1217.21 (803.02)
  2nd trial: 0.27 (0.45)  1.03 (0.16)  1065.03 (757.53)
  3rd trial: 0.36 (0.48)  1.06 (0.24)  1059.56 (771.92)

AU 12 + 4, Incongruent
  Overall:   0.18 (0.38)  1.05 (0.22)  1224.14 (853.81)
  1st trial: 0.14 (0.35)  1.00 (0.00)  1153.00 (820.25)
  2nd trial: 0.20 (0.40)  1.07 (0.26)  1339.21 (974.96)
  3rd trial: 0.21 (0.41)  1.07 (0.26)  1162.43 (762.52)

AU 12, Happy
  Overall:   0.31 (0.47)  1.09 (0.29)  968.91 (662.01)
  1st trial: 0.25 (0.43)  1.11 (0.32)  967.22 (601.93)
  2nd trial: 0.30 (0.46)  1.05 (0.22)  1042.90 (710.56)
  3rd trial: 0.41 (0.49)  1.11 (0.31)  914.50 (679.60)

AU 12, Anger
  Overall:   0.23 (0.42)  1.04 (0.21)  1206.38 (842.34)
  1st trial: 0.15 (0.36)  1.00 (0.00)  1018.64 (630.16)
  2nd trial: 0.30 (0.46)  1.11 (0.32)  1290.89 (1025.13)
  3rd trial: 0.25 (0.43)  1.00 (0.00)  1237.00 (746.19)

AU 4, Happy
  Overall:   0.14 (0.35)  1.06 (0.25)  1249.13 (882.62)
  1st trial: 0.13 (0.33)  1.00 (0.00)  1300.80 (1003.78)
  2nd trial: 0.12 (0.33)  1.00 (0.00)  1441.22 (909.03)
  3rd trial: 0.17 (0.38)  1.15 (0.38)  1076.38 (802.28)

AU 4, Anger
  Overall:   0.27 (0.45)  1.04 (0.19)  1274.86 (865.25)
  1st trial: 0.27 (0.45)  1.10 (0.31)  1442.20 (905.02)
  2nd trial: 0.24 (0.43)  1.00 (0.00)  1092.35 (833.36)
  3rd trial: 0.31 (0.47)  1.00 (0.00)  1262.65 (862.10)

Notes. Mean of trials with AU presence = mean of the trials in which the presence of the target AU was observed. Frequency/Duration of AU presence per trial = mean frequency/duration (ms) of the presence of each AU within the trials that included AU presence.

Considering the evidence that infants are sensitive and responsive to gender information in faces (e.g., a preference for same-race female over male faces; Quinn, Yahr, Kuhn, Slater, & Pascalis, 2002; Quinn et al., 2008; Ramsey-Rennels & Langlois, 2006), we conducted post-hoc analyses to investigate the possible effect of the actors’ gender on the facial reactions of infants. A GLMM that included AU type and the actor’s gender as fixed factors showed that the gender of the actor did not influence the presence of AUs, β = 0.11, z = 0.54, p = .586, with a non-significant LRT, χ2(7) = 0.25, p = .614, and the significant effect of AU type did not change, β = 0.79, z = 4.36, p < .001, with an LRT of χ2(7) = 19.08, p < .001. Further, no effect of the actor’s gender was observed when the model included the gender of the actor and the interaction of AU type and trial as fixed factors, β = 0.11, z = 0.62, p = .533, with a non-significant LRT, χ2(8) = 0.37, p = .83. Also, the significant effect of trial on the congruent facial reactions did not change, β = 0.42, z = 3.64, p < .001, with a significant LRT for the interaction, χ2(7) = 22.65, p < .001. Therefore, infants’ facial reactions did not differ based on the gender of the actor.
Further, these effects were confirmed for both AU12 and AU4. Specifically, a GLMM including the type of emotion (i.e., happy, angry) as a fixed factor showed that the presence of AU12 was detected in significantly more trials when the infant observed happy facial expressions rather than angry ones, β = 0.57, z = 1.98, p = .048, with a significant LRT, χ2(6) = 4.05, p = .044. Increasing the number of trials positively predicted the occurrence of AU12 during observation of happy expressions, β = 0.57, z = 3.17, p = .002, but not angry expressions, β = 0.31, z = 1.68, p = .09, with the LRT for the interaction being significant, χ2(6) = 11.26, p = .004. Moreover, no effect of the actors’ gender was observed when it was included as a fixed factor in the above models (LRTs; ps > .153). A GLMM including the type of emotion (i.e., happy, angry) as a fixed factor showed that the presence of AU4 was found in fewer trials when observing happy facial expressions than when observing angry facial expressions, β = -1.20, z = -3.85, p < .001, with a significant LRT, χ2(6) = 15.92, p < .001. Moreover, increasing the number of trials positively predicted the presence of AU4 during the observation of angry expressions (β = 0.38, z = 2, p = .045), but not happy expressions (β = -0.09, z = -0.46, p = .64), with a significant LRT for the interaction, χ2(6) = 12.63, p = .002. Also, no significant effect of the actors’ gender was observed when it was included as a fixed factor in the above models (LRTs; ps > .8).
To provide a more accurate picture of infants’ observable facial reactions, we conducted post-hoc analyses to investigate whether infants show differentiated reactions of AU12 and AU4 when observing happy and angry expressions. A GLMM including AU (i.e., 12, 4) as a fixed factor revealed that the presence of AU12 was found in more trials than that of AU4 when happy expressions were presented, β = 1.33, z = 4.76, p < .001, with a significant LRT, χ2(6) = 25.19, p < .001. However, there was no significant difference between the presence of AU12 and AU4 when observing angry expressions, LRT, χ2(6) = 1.4, p = .24.

4. Discussion

The current study aimed to further our knowledge of the early development of congruent facial reactions by investigating infants’ observable facial responses toward dynamic human facial expressions. Nine- to 10-month-old infants observed videos of dynamic human facial expressions, and their facial responses (i.e., lip-corner raising and brow-lowering) during the observations were rated by naïve coders. First, the current results demonstrate that 9- to 10-month-old infants show selective facial responses to dynamic happy and angry human facial expressions, which supports Hypothesis One. The presence of congruent AUs (associated with the presented expressions) was found in significantly more trials than that of incongruent AUs, and there was also a selective response for both AU12 and AU4 when observing happy and angry faces. These results suggest the presence of observable congruent facial reactions during an infant’s first year of life. Besides inducing the corresponding affective state in the observer through a feedback process (intra-individual processing; Hatfield et al., 1992; Hess et al., 1998; but see Blair, Morris, Frith, Perrett, & Dolan, 1999; Rives Bogart & Matsumoto, 2010), congruent facial reactions have also been proposed to play an inter-individual communicative role, in that they convey empathic and cooperative intentions to the partner by being encoded and inserted into the interactive sequences and consistently decoded by the partner (Hess & Fischer, 2014; Sato & Yoshikawa, 2007). In line with this theory, previous studies have shown that observable congruent facial reactions occur during engaged social interactions, which predicts and promotes interpersonal rapport (Stel & Vonk, 2010; van der Schalk et al., 2011; Yabar & Hess, 2007). The selective, observable facial responses confirmed in the current study have also been demonstrated in adults (Sato & Yoshikawa, 2007), and are consistent with EMG findings in children and infants (Geangu, Quadrelli, Conte, Croci, & Turati, 2016; Kaiser et al., 2017).
Additionally, the results indicate that infants’ stimuli-corresponding facial responses increase during the repeated observation of the target facial expressions, which supports Hypothesis Two. Two possible pathways may contribute to the positive effect that repeated presentation of dynamic facial expressions has on the stimuli-corresponding facial responses. Repeatedly watching a specific facial emotion might: (1) reduce the ambiguity of, and enhance the recognizability of, the emotional valence of the presented facial expression (Mühlberger et al., 2010; Rymarczyk et al., 2011), and (2) promote the convergence of infants’ emotional states toward the displayed emotion, producing increased facial responses in a consistent manner. These results and the possible underlying processes suggest that early congruent facial reactions are less likely to be a simple re-enactment of observed expressions based on perception-action-matching mechanisms, and rather may support the emotion-communicative perspective on congruent facial reactions (Dezecache et al., 2016; Hess & Fischer, 2014; Kaiser et al., 2017).
Regarding the validity of the MMH and the emotion-communicative perspective on early congruent facial reactions, the current results revealed that infants show differentiated reactions of AU12 and AU4 when observing happy, but not angry, facial expressions. This pattern has already been reported in EMG research, which has demonstrated that 7-month-old infants show selective facial reactions for happy (i.e., higher zygomaticus major activation compared to the corrugator supercilii) but not angry faces (Kaiser et al., 2017). Besides the possibility that infants at this developmental stage have insufficient ability to evaluate the specific informative value of angry faces (Grossmann et al., 2007; Kaiser et al., 2017; Missana, Grigutsch, & Grossmann, 2014), the lack of selective AU12 and AU4 responses to angry expressions might be caused by the properties of anger expressions as carriers of emotion-communicative information. That is, unlike happiness, anger directly signals low affiliative intent, and thus its mimicry largely depends on the social context. Hess and Bourgeois (2010) showed that anger mimicry (i.e., frowning) was not found in a real dyadic interactive setting, which differed from the mimicry of a smile. Also, participants who recount negative events show more positive and less negative emotional facial expressions in the presence of the experimenter than when alone (Lee & Wagner, 2002). Indeed, the pattern of the presence of AU12 in response to expressions of anger suggests that infants tend to raise their lip-corners when a second angry face is presented. Infants might try to show positive expressions to convey cooperative, communicative intentions, which might be aimed at eliciting a positive response in the angry actor (Jones, Collins, & Hong, 1991; Mesman, van IJzendoorn, & Bakermans-Kranenburg, 2009). These possibilities should be examined in future research. Taken together, these results provide some support for the emotion-communicative view of early congruent facial reactions, which posits that infants’ facial responses are influenced by the evaluation of others’ emotional signals as conveying emotional information and communicative intentions in a social context (Dezecache et al., 2016; Hess & Fischer, 2013, 2014; Soussignan et al., 2017).
Further investigations are needed to clarify the processes underlying congruent facial reactions in early development. The term “contagious” has been used to describe behaviors that tend to elicit the motivation to adopt a similar behavior in uninvolved third parties (e.g., reactive crying in neonates, yawning; Dezecache et al., 2016; Martin & Clark, 1982; Provine, 1986; Sagi & Hoffman, 1976; Simner, 1971). In the context of research on emotional contagion, Hatfield et al. (1994) proposed that people first mimic others’ emotional displays (a spontaneous and unintentional process), and that their affective state is then elicited by the activated muscular configuration, which leads to emotional contagion. Others have argued that emotional contagion might be based on emotional communication processes. Specifically, an affective state, either congruent or incongruent depending on the social context (Dezecache et al., 2016; Lanzetta & Englis, 1989), is elicited by observed emotional signals, which then leads to facial expressions reflecting that affective state. As discussed above, the current results do not seem to support the former view, because infants did not faithfully mimic facial expressions. It is possible that the happy expression induced a happy emotion in the infants, leading to initial and increased happy facial expressions in the infants. To further test these hypotheses, it may be interesting to examine whether infants’ congruent facial reactions are influenced by contextual information that may alter the receiver’s evaluation of the actor’s emotional states (e.g., the background; Nisbett & Masuda, 2003; Senzaki, Masuda, Takada, & Okada, 2016).
Although the current study revealed selective facial responses toward happy and angry facial expressions, further investigation using neutral and other affective expressions would provide more comprehensive information on early facial reactions. Moreover, the artificially created dynamic human facial expressions used in the current study may lead to a lack of social engagement between infants and target faces, which is important for building affective, social coordination. In addition, future research should focus on the effect of static images of facial expressions on facial reactions in infancy, as response patterns different from those to dynamic images have been reported (Sato & Yoshikawa, 2007; Sato et al., 2008; Weyers, Mühlberger, Hefele, & Pauli, 2006).
The current study extends our understanding of the ontogeny of congruent facial reactions by demonstrating that 9–10-month-old infants show differentiated facial responses toward happy and angry facial expressions, and by suggesting that these congruent facial reactions might be influenced by the evaluation of the emotional-communicative information of the presented facial expressions.

Declaration of interest

None.

Funding

This work was supported by JSPS KAKENHI Grant Numbers JP18H04200, JP18K02461, JP17KT0057, JP17KT0139, JP26280049, and JP25118003 to KH, and JP18F18999 to XM.

References

Bavelas, J. B., Black, A., Lemery, C. R., & Mullett, J. (1986). “I show how you feel”: Motor mimicry as a communicative act. Journal of Personality and Social Psychology,
50(2), 322–329.
Bird, G., & Viding, E. (2014). The self to other model of empathy: Providing a new framework for understanding empathy impairments in psychopathy, autism, and
alexithymia. Neuroscience and Biobehavioral Reviews, 47, 520–532.
Blair, R. J. R., Morris, J. S., Frith, C. D., Perrett, D. I., & Dolan, R. J. (1999). Dissociable neural responses to facial expressions of sadness and anger. Brain, 122(5),
883–893.
Bourgeois, P., & Hess, U. (2008). The impact of social context on mimicry. Biological Psychology, 77, 343–352.
Caron, A. J., Caron, R. F., & MacLean, D. J. (1988). Infant discrimination of naturalistic emotional expressions: The role of face and voice. Child Development, 59,
604–616.
Dezecache, G., Eskenazi, T., & Grèzes, J. (2016). Emotional convergence: A case of contagion? Shared representations: Sensorimotor foundations of social life, 417.
Dimberg, U. (1982). Facial reactions to facial expressions. Psychophysiology, 19, 643–647.
Dimberg, U., & Karlsson, B. (1997). Facial reactions to different emotionally relevant stimuli. Scandinavian Journal of Psychology, 38(4), 297–303.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11, 86–89.
Dimberg, U., Thunberg, M., & Grunedal, S. (2002). Facial reactions to emotional stimuli: Automatically controlled emotional responses. Cognition & Emotion, 16,
449–471.
Donato, G., Bartlett, M. S., Hager, J. C., Ekman, P., & Sejnowski, T. J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence,
21(10), 974.
Ekman, P., & Friesen, W. V. (1978). Manual for the facial action coding system. Palo Alto: Consulting Psychologists Press.


Fischer, A., & Hess, U. (2017). Mimicking emotions. Current Opinion in Psychology, 17, 151–155.
Fischer, A., Becker, D., & Veenstra, L. (2012). Emotional mimicry in social context: The case of disgust and pride. Frontiers in Psychology, 3, 475.
Flom, R., & Bahrick, L. E. (2007). The development of infant discrimination of affect in multimodal and unimodal stimulation: The role of intersensory redundancy.
Developmental Psychology, 43(1), 238–252.
Gallese, V., & Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2, 493–501.
Geangu, E., Quadrelli, E., Conte, S., Croci, E., & Turati, C. (2016). Three-year-olds’ rapid facial electromyographic responses to emotional facial expressions and body
postures. Journal of Experimental Child Psychology, 144, 1–14.
Grèzes, J., & Dezecache, G. (2014). How do shared-representations and emotional processes cooperate in response to social threat signals? Neuropsychologia, 55,
105–114.
Grossmann, T., Striano, T., & Friederici, A. D. (2007). Developmental changes in infants’ processing of happy and angry facial expressions: A neurobehavioral study.
Brain and Cognition, 64(1), 30–41.
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8, 23–34.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1992). Primitive emotional contagion. In M. S. Clark (Ed.). Emotion and social behavior (pp. 151–177). Thousand Oaks, CA:
Sage Publications.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional contagion: Studies in emotion and social interaction. Editions de la Maison des sciences de l’homme.
Hess, U., & Bourgeois, P. (2010). You smile–I smile: Emotion expression in social interaction. Biological Psychology, 84(3), 514–520.
Hess, U., & Fischer, A. (2013). Emotional mimicry as social regulation. Personality and Social Psychology Review, 17, 142–157.
Hess, U., & Fischer, A. (2014). Emotional mimicry: Why and when we mimic emotions. Social and Personality Psychology Compass, 8(2), 45–57.
Hess, U., Philippot, P., & Blairy, S. (1998). Facial reactions to emotional facial expressions: affect or cognition? Cognition & Emotion, 12, 509–531.
Isomura, T., & Nakano, T. (2016). Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants. Proceedings of the Royal Society B:
Biological Sciences, 283(1844), 20161948.
Jones, S. S., Collins, K., & Hong, H. W. (1991). An audience effect on smile production in 10-month-old infants. Psychological Science, 2(1), 45–49.
Kaiser, J., Crespo-Llado, M. M., Turati, C., & Geangu, E. (2017). The development of spontaneous facial responses to others’ emotions in infancy: An EMG study.
Scientific Reports, 7(1), 17500.
Kugiumutzakis, G. (1998). Neonatal imitation in the intersubjective companion space. In S. Bråten (Ed.). Intersubjective communication and emotion in early ontogeny (pp.
63–88). Cambridge: Cambridge University Press.
Lakin, J. L., Jefferis, V. E., Cheng, C. M., & Chartrand, T. L. (2003). The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious
mimicry. Journal of Nonverbal Behavior, 27, 145–162.
Lanzetta, J. T., & Englis, B. G. (1989). Expectations of cooperation and competition and their effects on observers’ vicarious emotional responses. Journal of Personality
and Social Psychology, 56(4), 543.
Lausberg, H., & Sloetjes, H. (2009). Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods, 41, 841–849.
Lee, V., & Wagner, H. (2002). The effect of social presence on the facial and verbal expression of emotion and the interrelationships among emotion components.
Journal of Nonverbal Behavior, 26(1), 3–25.
Likowski, K. U., Mühlberger, A., Seibt, B., Pauli, P., & Weyers, P. (2011). Processes underlying congruent and incongruent facial reactions to emotional facial
expressions. Emotion, 11(3), 457.
Ludemann, P. M. (1991). Generalized discrimination of positive facial expressions by seven‐and ten‐month‐old infants. Child Development, 62(1), 55–67.
Maratos, O. (1973). The origin and development of imitation in the first six months of life. Paper presented at the Annual Meeting of the British Psychological Society.
Martin, G. B., & Clark, R. D. (1982). Distress crying in neonates: Species and peer specificity. Developmental Psychology, 18(1), 3.
Meltzoff, A. N., & Decety, J. (2003). What imitation tells us about social cognition: A rapprochement between developmental psychology and cognitive neuroscience.
Philosophical Transactions Biological Sciences, 358(1431), 491–500.
Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198(4312), 75–78.
Meltzoff, A. N., Murray, L., Simpson, E., Heimann, M., Nagy, E., Nadel, J., … Subiaul, F. (2017). Re-examination of Oostenbroek et al. (2016): Evidence for neonatal imitation of tongue protrusion. Developmental Science.
Mesman, J., van IJzendoorn, M. H., & Bakermans-Kranenburg, M. J. (2009). The many faces of the Still-Face Paradigm: A review and meta-analysis. Developmental
Review, 29(2), 120–162.
Missana, M., Grigutsch, M., & Grossmann, T. (2014). Developmental and individual differences in the neural processing of dynamic expressions of pain and anger. PloS
One, 9(4), e93728.
Montague, D. P., & Walker-Andrews, A. S. (2001). Peekaboo: A new look at infants’ perception of emotion expressions. Developmental Psychology, 37(6), 826.
Mühlberger, A., Wieser, M. J., Gerdes, A. B., Frey, M. C., Weyers, P., & Pauli, P. (2010). Stop looking angry and smile, please: Start and stop of the very same facial
expression differentially activate threat-and reward-related brain networks. Social Cognitive and Affective Neuroscience, 6, 321–329.
Murata, A., Saito, H., Schug, J., Ogawa, K., & Kameda, T. (2016). Spontaneous facial mimicry is enhanced by the goal of inferring emotional states: Evidence for
moderation of “Automatic” mimicry by higher cognitive processes. PloS One, 11(4), e0153128. https://doi.org/10.1371/journal.pone.0153128.
Myowa‐Yamakoshi, M., Tomonaga, M., Tanaka, M., & Matsuzawa, T. (2004). Imitation in neonatal chimpanzees (Pan troglodytes). Developmental Science, 7(4),
437–442.
Nisbett, R. E., & Masuda, T. (2003). Culture and point of view. Proceedings of the National Academy of Sciences, 100(19), 11163–11170.
Oostenbroek, J., Suddendorf, T., Nielsen, M., Redshaw, J., Kennedy-Costantini, S., Davis, J., ... Slaughter, V. (2016). Comprehensive longitudinal study challenges the
existence of neonatal imitation in humans. Current Biology, 26, 1334–1338.
Pelphrey, K. A., Morris, J. P., McCarthy, G., & Labar, K. S. (2007). Perception of dynamic changes in facial affect and identity in autism. Social Cognitive and Affective
Neuroscience, 2, 140–149.
Provine, R. R. (1986). Yawning as a stereotyped action pattern and releasing stimulus. Ethology, 72(2), 109–122.
Quinn, P. C., Uttley, L., Lee, K., Gibson, A., Smith, M., Slater, A. M., ... Pascalis, O. (2008). Infant preference for female faces occurs for same‐but not other‐race faces.
Journal of Neuropsychology, 2(1), 15–26.
Quinn, P. C., Yahr, J., Kuhn, A., Slater, A. M., & Pascalis, O. (2002). Representation of the gender of human faces by infants: A preference for female. Perception, 31(9),
1109–1121.
Ramsey-Rennels, J. L., & Langlois, J. H. (2006). Infants’ differential processing of female and male faces. Current Directions in Psychological Science, 15(2), 59–62.
Rives Bogart, K., & Matsumoto, D. (2010). Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome.
Social Neuroscience, 5(2), 241–251.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Rymarczyk, K., Biele, C., Grabowska, A., & Majczynski, H. (2011). EMG activity in response to static and dynamic facial expressions. International Journal of
Psychophysiology, 79, 330–333.
Sagi, A., & Hoffman, M. L. (1976). Empathic distress in the newborn. Developmental Psychology, 12(2), 175.
Sato, W., & Yoshikawa, S. (2007). Spontaneous facial mimicry in response to dynamic facial expressions. Cognition, 104, 1–18.
Sato, W., Fujimura, T., & Suzuki, N. (2008). Enhanced facial EMG activity in response to dynamic facial expressions. International Journal of Psychophysiology, 70,
70–74.
Sato, W., Toichi, M., Uono, S., & Kochiyama, T. (2012). Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders. BMC
Neuroscience, 13(1), 99. https://doi.org/10.1186/1471-2202-13-99.
Seibt, B., Mühlberger, A., Likowski, K., & Weyers, P. (2015). Facial mimicry in its social setting. Frontiers in Psychology, 6, 1122.
Senzaki, S., Masuda, T., Takada, A., & Okada, H. (2016). The communication of culturally dominant modes of attention from parents to children: A comparison of
Canadian and Japanese parent-child conversations during a joint scene description task. PloS One, 11(1), e0147199.


Simner, M. L. (1971). Newborn’s response to the cry of another infant. Developmental Psychology, 5(1), 136.
Simpson, E. A., Murray, L., Paukner, A., & Ferrari, P. F. (2014). The mirror neuron system as revealed through neonatal imitation: Presence from birth, predictive
power and evidence of plasticity. Philosophical Transactions Biological Sciences, 369(1644), 20130289.
Soussignan, R., Dollion, N., Schaal, B., Durand, K., Reissland, N., & Baudouin, J. Y. (2017). Mimicking emotions: How 3–12-month-old infants use the facial expressions
and eyes of a model. Cognition & Emotion, 32, 1–16.
Stel, M., & Vonk, R. (2010). Mimicry in social interaction: Benefits for mimickers, mimickees, and their interaction. British Journal of Psychology, 101, 311–323.
Tamietto, M., & de Gelder, B. (2008). Emotional contagion for unseen bodily expressions: Evidence from facial EMG. 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG ’08), 1–5.
van der Schalk, J., Fischer, A., Doosje, B., Wigboldus, D., Hawk, S., Rotteveel, M., ... Hess, U. (2011). Convergent and divergent responses to emotional displays of
ingroup and outgroup. Emotion, 11, 286–298.
Walker-Andrews, A. S. (1997). Infants’ perception of expressive behaviors: Differentiation of multimodal information. Psychological Bulletin, 121(3), 437.
Weyers, P., Mühlberger, A., Hefele, C., & Pauli, P. (2006). Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology, 43,
450–453.
Yabar, Y., & Hess, U. (2007). Display of empathy and perception of out-group members. New Zealand Journal of Psychology, 36, 42–49.
Yoshimura, S., Sato, W., Uono, S., & Toichi, M. (2015). Impaired overt facial mimicry in response to dynamic facial expressions in high-functioning autism spectrum
disorders. Journal of Autism and Developmental Disorders, 45, 1318–1328.
