
Biological Psychology 157 (2020) 107973


Sex differences in behavioral and brain responses to incongruity in emotional speech controlling for autistic traits
Ming Lui a,*,1, Xiaojing Li b,d,f,1, Werner Sommer c, Andrea Hildebrandt d,e, Gilbert Ka-Bo Lau a, Changsong Zhou b,*

a Department of Education Studies, Centre for Learning Sciences, Hong Kong Baptist University, Hong Kong, China
b Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
c Institut für Psychologie, Humboldt-Universität zu Berlin, Germany
d Department of Psychology, Carl von Ossietzky Universität Oldenburg, Germany
e Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Germany
f Chinese Academy of Disability Data Sciences, Nanjing Normal University of Special Education, Nanjing, China

* Corresponding authors. E-mail addresses: m-lui@u.northwestern.edu (M. Lui), cszhou@hkbu.edu.hk (C. Zhou).
1 Equal contribution.

https://doi.org/10.1016/j.biopsycho.2020.107973
Received 28 February 2020; Received in revised form 12 October 2020; Accepted 13 October 2020; Available online 19 October 2020

Keywords: Speech comprehension; Emotional prosody; Semantic processing; Sex differences; Autistic traits; Event-related potentials; Individual differences

ABSTRACT

Accurate interpretation of speech requires the integration of verbal and nonverbal signals. This study investigated sex differences in behavior and neural activities associated with the integration of semantic content and emotional speech prosody, while the level of autistic traits was controlled for. Adults listened to Cantonese words spoken with happy and sad prosody, and made judgments on semantic valence while event-related potentials (ERPs) were recorded. Behaviorally, men were slower than women in making semantic valence judgments. At the neural level, men had a greater congruity effect in the N400 component, whereas women had a greater congruity effect in the 1150–1300 ms time window for happy prosody. There was no effect of sex in the case of sad prosody. Our study reveals novel findings on sex differences in the timing of the integration between verbal and non-verbal signals that cannot be explained by differences in autistic traits.

1. Introduction

Speech is a major channel through which individuals communicate and understand the thoughts of others. Accurate interpretation of speech often requires the integration of multi-channel information, including both verbal and non-verbal cues. Previous reports indicate that men exhibit less multi-channel integration during speech processing as compared with women (Schirmer et al., 2006; Schirmer, Kotz, & Friederici, 2002; Schirmer, Zysset, Kotz, & von Cramon, 2004). Since men also tend to score higher on autistic trait measures (Baron-Cohen, Wheelwright, Skinner, Martin, & Clubley, 2001), which are known to be related to social and communication problems, the present study aims to investigate whether men and women differ in multi-channel integration given that sex differences in autistic traits are controlled for.

In previous studies, cross-cultural and individual differences in multi-channel integration were investigated. Cross-cultural studies have compared Asians and Westerners with respect to their integration of emotional signals. Kitayama and Ishii (2002) and Ishii, Reyes, and Kitayama (2003) required participants to judge the semantic valence of words while ignoring the emotional prosody, or vice versa. The results revealed that Japanese more automatically integrated contextual emotional prosody when processing the meaning of speech as compared to Americans (Kitayama & Ishii, 2002; Ishii et al., 2003). Tanaka et al. (2010) also found differences between Japanese and Dutch participants in integrating emotional signals expressed via faces and voices. Their results indicated that Japanese more automatically integrate vocal emotions than Dutch, while Dutch more automatically integrate facial expressions than Japanese. Consistently, Liu, Rigoulot, and Pell (2015) also found that North Americans more automatically attend to task-irrelevant faces when judging vocal emotions as compared to Chinese.


Apart from culture, there are individual differences variables, apart from autistic traits, which were shown to be associated with the integration of multi-channel information during communication. Such variables are empathy and trait anxiety. For example, Jiang and Pell (2016a) found that trait empathy was positively related to the integration of lexical contents and vocal cues of speaker confidence, while trait anxiety mediated the effect of sex on processing speaker confidence. Also, van den Brink et al. (2012) reported that highly empathetic adults integrated voice-based speaker identity information with the contents of speech to a larger extent than less empathetic adults.

The current study focuses on sex as a grouping variable consistently linked to individual differences in multi-channel integration during speech perception. Previous behavioral studies showed that women exhibit a higher tendency than men to integrate task-irrelevant vocal cues while processing verbal contents. Jiang and Pell (2016b) demonstrated a larger difference in women than in men with respect to their confidence ratings of sentences spoken with congruous and incongruous vocal tones. This was taken to indicate a stronger tendency to integrate vocal tones among women. Consistently, Schirmer et al. (2002) found a significant reaction time (RT) difference between congruous and incongruous conditions among women but not men when they judged semantic valence while ignoring emotional prosody. Apart from behavioral findings, there is neural evidence showing significant sex differences in multi-channel integration during speech perception. For example, women had larger amplitudes in the N400 and late negativity components of event-related brain potentials (ERPs) than men when they processed words spoken with incongruous emotional prosody, compared to congruous emotional prosody (Ishii, Kobayashi, & Kitayama, 2010; Schirmer et al., 2002; Schirmer et al., 2004; Schirmer et al., 2006). These findings support a larger degree of integration of prosodic and semantic cues among women than men.

As argued above, the level of autistic traits – which is itself more pronounced in males – is another variable inducing individual differences in multi-channel information processing. Autism is a clinical condition involving social communication impairments, repetitive behaviors, and restricted interests (American Psychiatric Association, 2013). However, there is evidence that autistic traits are distributed across the entire population as a continuum, including non-clinical individuals (e.g., Baron-Cohen et al., 2001). Since at the population level men on average have a higher level of autistic traits than women (e.g., Baron-Cohen et al., 2001), and given that autism is linked to impairments in multi-channel integration of sensory signals (e.g., Brandwein et al., 2012; Stevenson et al., 2014), it is possible that sex differences in multi-channel integration reported in previous studies are partially driven by higher levels of autistic traits in men compared with women. There is, however, a lack of evidence on this hypothesis because previous sex differences studies did not control for the level of autistic traits. Moreover, most previous studies on autistic traits required participants to process social information in a single modality only (Canal et al., 2019; Lassalle & Itier, 2015; Peled-Avron & Shamay-Tsoory, 2017). It is still largely unknown whether autistic traits in the non-clinical population are linked to the integration of multi-channel social signals. It needs to be additionally mentioned that many of the past studies dichotomized the spectrum of autistic traits and solely employed group comparisons between high vs. low autistic trait individuals. Relatively few studies (Gayle, Gal, & Kieffaber, 2012; Halliday, MacDonald, Sherf, & Tanaka, 2014; Peled-Avron & Shamay-Tsoory, 2017) have considered the level of autistic traits as a continuum within the subclinical population and assessed its relationship with behaviours. It is, however, arguably more fruitful to consider individual differences variables on a continuum and exploit the information in the data more extensively. The present study aimed to fill these research gaps.

1.1. Aims of the present study

The current study aimed to examine sex differences in the integration of emotional prosody and semantic content in speech, while the level of autistic traits was controlled for. Participants' degree of autistic traits was measured using the Autism Spectrum Quotient (AQ; Baron-Cohen et al., 2001), a self-administered instrument. Sex differences were investigated after aligning the mean level of AQ for females and males per recruitment design. Participants were involved in a social cognition task that required them to listen to auditory words spoken with emotional prosody, matching or mismatching in valence with the semantic word meaning.

Behaviorally, we expected sex differences in reaction times of semantic judgments. On the neural level, we hypothesized that sex would be significantly related to the ERP congruity effect (amplitude differences between incongruous and congruous conditions) in the time windows 350–650 ms (N400), 750–900 ms, and 1150–1300 ms. The time windows were selected based on two previous studies applying similar stimuli and tasks (Schirmer & Kotz, 2003; Schirmer et al., 2006). The N400 is a negative-going ERP component peaking around 400 ms after stimulus onset at central-parietal electrodes. Its amplitude increases with the degree of semantic incongruity (Kutas & Federmeier, 2011). The other time windows lie within the time frames of late components, which can be late positivity or late negativity, depending on polarity. In the literature, the late negativity starts between 600 and 800 ms and may last until 2000 ms, peaking at central and parietal sites (Herron, 2007). It implies action monitoring triggered by response conflict and memory retrieval (Herron, 2007; Johansson & Mecklinger, 2003). The late positivity starts around 500 ms and lasts for several hundred milliseconds, with peaks mostly found at central-parietal sites (Friederici, 2002). This component is related to subjective stimulus probability and may be triggered by syntactic and semantic incongruity (Coulson, King, & Kutas, 1998; Hahne & Friederici, 2002; Kutas & Hillyard, 1980). We hypothesized sex differences in ERP differences between incongruous and congruous conditions (i.e., the ERP congruity effect) in the three time windows, after the level of autistic traits was controlled for by design. Women were expected to show a larger ERP congruity effect than men. As the pattern of results may differ between prosodic conditions, we conducted the analyses separately for happy and sad prosody conditions.

2. Methods

2.1. Participants

We invited college students via posters on campus and mass emails to fill in an online AQ questionnaire (Baron-Cohen et al., 2001). In response, we received 1059 valid datasets from young adults (mean age = 20.75, SD = 2.47). Their mean AQ score was 21.58 (SD = 5.96) out of a possible score ranging from 0 to 50. For comparison, in a previous study of 6934 subclinical participants, the mean AQ was 16.94 (95 % C.I. 11.6, 20.0; Ruzich et al., 2015). Among the 1059 online respondents, 601 indicated their interest in participating in an EEG study. Since we intended to match the average AQ of female and male participants, specific questionnaire respondents were invited for the EEG study based on the selection criteria and measured AQ scores. The selection criteria were: Cantonese speaker, aged between 18 and 30 years, no hearing impairment, no psychiatric or neurological disorder, no metal inside the body, and no prescribed medication for skin problems. Twenty-three female and 23 male participants' data with matched age and AQ were included in the present study (Table 1). None of the participants reported a formal diagnosis of autism spectrum disorder (ASD). Data of 14 of the 23 male participants were reported in a previous study (Lui, So, & Tsang, 2018), which examined group differences between high and low AQ participants among men. In addition to the AQ, participants also completed the Empathy Quotient questionnaire (EQ; Baron-Cohen & Wheelwright, 2004) and a 30-minute speeded version of Raven's Advanced Progressive Matrices Test (Raven, Raven, & Court, 1998) to assess non-verbal IQ. EQ was highly correlated with AQ. To avoid multicollinearity, EQ was not included in the analyses involving AQ. Comparisons of AQ, IQ, and EQ between female and male participants were conducted to ensure that these variables were well matched between the two groups.


The study was carried out in accordance with the recommendations of the Research Ethics Committee of Hong Kong Baptist University. All subjects gave written informed consent and were reimbursed HKD $100 (about USD 13) per hour.

Table 1
Descriptive statistics of age, AQ, EQ, and nonverbal IQ among women and men.

Sex     N    Statistics   Age             AQ              EQ              IQ
Women   23   M            20.0            18.43           40.13           19.61
             SD           1.48            8.56            10.41           4.51
             Range        18–23           10–37           22–65           11–26
Men     23   M            20.78           19.61           37.26           22.18
             SD           1.17            7.20            12.24           5.88
             Range        19–22           11–36           13–58           12–33
             t            −1.99           −.50            .86             −1.45
             p            .052            .62             .40             .16
             C.I.         [−1.57, .008]   [−5.87, 3.53]   [−3.88, 9.62]   [−6.16, 1.03]

Note. AQ – Autism Spectrum Quotient; EQ – Empathy Quotient; IQ – Raven's Advanced Progressive Matrices Test Score.

2.2. Stimuli

The stimuli consisted of 240 auditory words: 120 different two-syllable Chinese words, each spoken in both happy and sad prosody. Half of the 120 words were positive and half were negative in their semantic meaning. All words were Cantonese, produced by a female native speaker with drama experience, and none of the words were homophones. The words were considered verbs in daily life usage by over 70 % of the raters in Schirmer et al. (2006). In the current study, 30 college students (15 men) were recruited to rate the word stimuli in terms of semantic valence and emotional prosody. The raters did not participate in the current EEG experiment. Both semantic valence and emotional prosody were rated on a 7-point scale (1 = very negative; 2 = negative; 3 = slightly negative; 4 = neutral; 5 = slightly positive; 6 = positive; 7 = very positive).

For semantic valence, positive words (M = 5.53; SD = .43) were rated significantly more positive than negative words (M = 2.17; SD = .36; t(118) = 46.38, p < .001). For emotional prosody, happily spoken words (M = 4.97) were rated significantly more positive in prosody than sadly spoken words (M = 2.01; t(119) = 85.18, p < .001). Positive and negative words did not differ in word frequency (Cai & Brysbaert, 2010) (t(117) = 1.46, p = .15).

The four stimulus types were classified as either congruous in semantic valence and emotional prosody, that is, happily spoken positive words and sadly spoken negative words, or incongruous, that is, happily spoken negative words and sadly spoken positive words. The sound intensity levels of congruous and incongruous stimuli did not differ (t(238) = −0.79, p = .43), nor did the mean durations of positive and negative words (t(238) = .21, p = .83); however, happily spoken words (M = 877 ms) were significantly shorter in duration than sadly spoken words (M = 1240 ms; t(119) = −27.91, p < .001).

Note that the stimuli did not include neutral words because the current study did not aim to compare emotional and neutral stimuli. Our research question concerns the integration between emotional prosody and semantic valence of words in speech processing. Therefore, the targeted comparison is between congruous and incongruous conditions, with the latter involving a conflict between the valence of the semantic meaning and the emotional prosody.

2.3. Procedure

The data collection procedure was in line with Lui et al. (2018). Participants listened to auditory words through air-tube earphones and were instructed to judge the semantic valence (positive vs. negative) of each word as accurately and as quickly as possible while ignoring the emotional prosody. The stimuli were presented, and button-press responses were recorded, using E-Prime 2.0 software (PST, Inc.). In each trial, a fixation cross was presented on the screen for 2700 ms; 500 ms after its onset, an auditory word started, requiring a button press. After the offset of the fixation cross, a feedback screen (indicating "correct", "incorrect", or "too slow") appeared. Absence of a response within 2200 ms after word onset prompted the feedback screen "too slow". Participants were instructed to blink, if necessary, during this feedback period. To minimize anticipation effects, stimulus onset asynchronies (SOAs) were equally distributed across values of 3300 ms, 3400 ms, 3500 ms, and 3600 ms, implemented by showing the feedback screen for durations of 600–900 ms. Each participant completed all 240 trials. The trials were presented in four blocks, separated by three breaks for rest. The order of stimulus presentation was pseudo-randomized, such that the first and second appearances of the same word spoken with a different prosody occurred in different blocks, one in the first half and the other in the second half of the trials (see the sketch below). To familiarize the participants with the task, they were given eight practice trials with similar stimuli and difficulty as the test trials. Participants who showed many incorrect trials during practice were given the task instructions again and went through one more round of practice. All participants responded correctly in the first or second round of practice trials.
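The pseudo-randomization constraint can be made concrete with a short sketch. The experiment itself was run in E-Prime; the following Python fragment is only an illustrative reconstruction of the counterbalancing logic, with hypothetical function and word labels:

```python
import random

def build_trial_order(words, seed=0):
    """Assign the two prosodic versions of each word to different halves of
    the session, then split each half into two blocks. This is an
    illustrative reconstruction, not the authors' E-Prime script."""
    rng = random.Random(seed)
    first_half, second_half = [], []
    for word in words:
        versions = [(word, "happy"), (word, "sad")]
        rng.shuffle(versions)          # which prosody comes first varies by word
        first_half.append(versions[0])
        second_half.append(versions[1])
    rng.shuffle(first_half)
    rng.shuffle(second_half)
    # Four blocks of 60 trials: two per half, separated by rest breaks.
    half_size = len(words) // 2
    return [first_half[:half_size], first_half[half_size:],
            second_half[:half_size], second_half[half_size:]]

blocks = build_trial_order([f"word_{i:03d}" for i in range(120)])
assert all(len(b) == 60 for b in blocks)   # 240 trials in total
```

With this scheme, the two prosodic versions of a word can never occur in the same half of the session, matching the constraint described above.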
2.4. EEG recording and data processing

The EEG was sampled at a rate of 500 Hz from 64 Ag/AgCl electrodes mounted in an electrode cap (Waveguard™ original) connected to an amplifier (eego™ mylab, ANT Neuro); the initial common reference was CPz. Electrode impedances were kept below 10 kOhm. Offline processing, including re-referencing, filtering, epoching, artifact removal, and averaging, was performed using the EEGLAB toolbox (Delorme & Makeig, 2004) and in-house MATLAB scripts. Offline, the EEG was re-referenced to the average of the mastoids. After band-pass filtering at 0.05–40 Hz, the continuous EEG data were epoched with a 200 ms pre-stimulus baseline and a 2000 ms post-stimulus window. Artifacts resulting from eye and body movements or muscle contraction were removed by an ICA algorithm using the EEGLAB toolbox. Visual inspection was performed to remove any residual atypical artifacts. The preprocessed single-trial epochs were subjected to RIDE (see below).
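The reported preprocessing was done in EEGLAB and in-house MATLAB scripts. As a rough equivalent, the same steps could be expressed in MNE-Python as in the following sketch; the file name, mastoid channel labels, ICA component count, and excluded components are placeholder assumptions, not values from the study:

```python
import mne

# Hypothetical input file; the study recorded with an ANT Neuro amplifier
# and preprocessed in EEGLAB/MATLAB. This sketch mirrors the reported steps:
# mastoid re-reference, 0.05-40 Hz band-pass, ICA cleaning, and epoching.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

raw.set_eeg_reference(["M1", "M2"])        # average of the mastoids (labels assumed)
raw.filter(l_freq=0.05, h_freq=40.0)       # band-pass as reported

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]                       # ocular/muscle components, chosen by inspection
ica.apply(raw)

events = mne.find_events(raw)              # word-onset triggers
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=2.0,
                    baseline=(-0.2, 0.0), preload=True)
```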
2.5. Electrophysiological data analyses

A methodological novelty of the current study is the application of an advanced method to analyze ERP data, Residue Iteration Decomposition (RIDE; Ouyang, Herzmann, Zhou, & Sommer, 2011; Ouyang, Sommer, & Zhou, 2015). RIDE allows one to cope with the well-known, but rarely addressed, trial-by-trial latency jitter. In the present study, the stimuli were spoken Cantonese words with two syllables. Due to the nature of Cantonese two-syllable words, the time at which the semantic meaning becomes accessible (the recognition point) varies from word to word (Marslen-Wilson, 1987). For some words, the semantic meaning is ambiguous at the first syllable and only becomes fully clear at the second syllable. For other words, the semantic valence is obvious already when the first syllable is pronounced. Therefore, the ERP latencies associated with the processing of these stimuli vary substantially from stimulus to stimulus. Conventional ERPs are computed by averaging multiple single trials in each stimulus condition. The inter-trial variability in ERP latency often blurs ERP waveforms and smears condition effects in averaged ERP amplitudes (Ouyang et al., 2015). To deal with the ERP latency jitter caused by the large variation in the timing of semantic meaning conveyed by Cantonese two-syllable words, RIDE was applied to the trials in each of the four conditions of each participant.

With RIDE, the latency variability in single-trial ERPs can be taken into account, mitigating the smearing problem by reconstructing the ERP after correcting the latency jitter. When applying RIDE, all ERPs are decomposed into three component clusters: the S cluster time-locked to the stimulus, the R cluster time-locked to the response, and the C (central) cluster time-locked to neither the stimulus nor the response. Under conventional circumstances the three clusters capture the sub-processes in the ERP involved in stimulus processing (S), response preparation and execution (R), and central cognitive processes such as stimulus classification, response selection, or decision-making (Ouyang et al., 2011).

The RIDE procedure consists of a decomposition module and a C latency estimation module, applied to each electrode in each participant. In the decomposition module, the single-trial ERPs are decomposed into a stimulus-locked component cluster S, a response-locked component cluster R, and an intermediate component cluster C without an explicit time marker, starting from initial latencies L_S and L_R and a rough initial estimate of the latency L_C from the original data. In order to estimate S, RIDE subtracts the C and R components from each single trial and aligns the residual waveforms of all trials to L_S to obtain the median waveform across trials. The same procedure is applied to C and R. In the C latency estimation module, RIDE calculates the cross-correlation between a predefined C template based on the original data and the residual single trials after removing the S and R components. The cross-correlation curves for all electrodes are averaged to identify a common C latency for the topography. The two procedures are iterated, that is, the C latency estimation uses an updated C template from the last iteration of the decomposition, and the next decomposition uses the updated C latency, until both the components S, R, and C and the C latency converge. Thus, single-trial latencies and amplitudes of the C cluster are made available by applying RIDE. More details can be found in Ouyang et al. (2015) and in the toolbox manual at http://cns.hkbu.edu.hk/RIDE.htm.
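To make the alternating logic of the two modules concrete, the following is a schematic, single-electrode NumPy sketch of a RIDE-style decomposition. It is a simplification for illustration only: the published toolbox operates over all electrodes, uses time-windowed component estimation, and is the authoritative implementation.

```python
import numpy as np

def ride_sketch(trials, stim_lat, resp_lat, c_lat0, n_iter=10):
    """Schematic single-electrode RIDE-style decomposition.
    trials:   (n_trials, n_samples) single-trial epochs
    stim_lat: stimulus latencies in samples (usually all zero)
    resp_lat: response latencies (RTs) in samples
    c_lat0:   initial guess of single-trial C latencies in samples
    Illustrative only; not the published RIDE toolbox."""
    n_trials, n_samples = trials.shape
    S = np.zeros(n_samples); R = np.zeros(n_samples); C = np.zeros(n_samples)
    c_lat = c_lat0.copy()

    def aligned_median(data, lats):
        # Shift each trial so its event sits at time zero, then take the
        # pointwise median across trials (robust to outlier trials).
        shifted = np.array([np.roll(data[i], -lats[i]) for i in range(len(lats))])
        return np.median(shifted, axis=0)

    def place(component, lats):
        # Put a component template back at each trial's latency.
        return np.array([np.roll(component, l) for l in lats])

    for _ in range(n_iter):
        # Decomposition module: estimate each cluster from the residue left
        # after subtracting the current estimates of the other two clusters.
        S = aligned_median(trials - place(C, c_lat) - place(R, resp_lat), stim_lat)
        R = aligned_median(trials - place(S, stim_lat) - place(C, c_lat), resp_lat)
        C = aligned_median(trials - place(S, stim_lat) - place(R, resp_lat), c_lat)
        # C latency module: re-estimate each trial's C latency by
        # cross-correlating the residue with the current C template.
        residue = trials - place(S, stim_lat) - place(R, resp_lat)
        for i in range(n_trials):
            xcorr = np.correlate(residue[i], C, mode="same")
            c_lat[i] = int(np.argmax(xcorr)) - n_samples // 2
    return S, R, C, c_lat
```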
Next, new ERPs are reconstructed by summing the RIDE-estimated components S, R, and C, each synchronized at its respective latency. In the present study, the ERPs of each participant were reconstructed with RIDE for each of the four stimulus conditions: happy-positive (congruous), happy-negative (incongruous), sad-positive (incongruous), and sad-negative (congruous). In the statistical analyses, ERP differences between congruous and incongruous conditions were calculated in each of the prosodic stimulus conditions (happy prosody vs. sad prosody), after applying RIDE.

Nine central-parietal electrode sites (Cz, C1, C2, C3, C4, CP1, CP2, CP3, and CP4) were averaged as a region of interest (ROI) for analysis. Past studies found that those sites typically show the largest semantic congruity effect on the N400 (Besson, Kutas, & Van Petten, 1992; Kutas & Federmeier, 2000) and late ERP components (e.g., Herring, Taylor, White, & Crites, 2011). Moreover, several studies on semantic violations in Cantonese word processing consistently found maximal congruity effects at central and parietal sites (Lui et al., 2018; Schirmer et al., 2006; Schirmer, Tang, Penney, Gunter, & Chen, 2005). Analyses were conducted on average amplitudes in three time windows: 350–650 ms, 750–900 ms, and 1150–1300 ms. The 350–650 ms time window was used in many previous N400 studies (e.g., Liu, Wang, & Jin, 2010; Schirmer & Kotz, 2003). The late time windows, 750–900 ms and 1150–1300 ms, were selected based on Schirmer and Kotz (2003) and Schirmer et al. (2006), respectively, who used similar stimuli and tasks as the current study. The latter study revealed an interaction of congruity and sex in the 1150–1300 ms time window in a Cantonese sample. A late ERP component effect was also reported previously when vocal cues were in conflict with the context of speech (e.g., Besson, Kutas, & Van Petten, 1992). This was suggested to be linked to the pragmatic inference processes required for deriving contextually appropriate meaning due to the conflict (e.g., Jiang & Pell, 2016b).
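As an illustration of how the dependent ERP measures would be derived, the following sketch computes ROI-averaged mean amplitudes in the three time windows and the resulting congruity effects. The data layout, dictionary keys, and variable names are assumptions for illustration, not the authors' code:

```python
import numpy as np

# Hypothetical layout: erp[condition] is an (n_channels, n_samples) array of
# RIDE-reconstructed ERPs sampled at 500 Hz with a 200 ms pre-stimulus baseline.
FS, BASELINE_MS = 500, 200
ROI = ["Cz", "C1", "C2", "C3", "C4", "CP1", "CP2", "CP3", "CP4"]
WINDOWS_MS = [(350, 650), (750, 900), (1150, 1300)]

def mean_amplitude(erp, ch_index, win_ms):
    """Average amplitude over the ROI channels within one time window."""
    rows = [ch_index[ch] for ch in ROI]
    start = (BASELINE_MS + win_ms[0]) * FS // 1000
    stop = (BASELINE_MS + win_ms[1]) * FS // 1000
    return erp[rows, start:stop].mean()

def congruity_effects(erp, ch_index):
    """Incongruous minus congruous mean amplitude, per prosody and window."""
    return {
        (prosody, win): mean_amplitude(erp[f"{prosody}_incongruous"], ch_index, win)
                        - mean_amplitude(erp[f"{prosody}_congruous"], ch_index, win)
        for prosody in ("happy", "sad")
        for win in WINDOWS_MS
    }
```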
Sex and AQ were the independent variables in the regression analyses. Dependent variables included reaction time (RT), accuracy, and the ERP differences between incongruous and congruous conditions (i.e., the ERP congruity effect); a minimal sketch of such a regression follows.
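The sketch below, using statsmodels with toy values, shows the form of these regressions; one row per participant is assumed. The paper reports standardized weights, which would require z-scoring the variables first:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant, with sex coded 0/1, the
# AQ score, and the ERP congruity effect for one prosody and time window.
df = pd.DataFrame({
    "sex": [0, 0, 0, 1, 1, 1],                 # 0 = women, 1 = men (toy values)
    "aq": [18, 22, 15, 20, 25, 19],
    "erp_effect": [1.2, 0.8, 1.0, 2.1, 1.9, 2.4],
})

# Multiple regression of the congruity effect on sex and AQ, mirroring the
# analysis reported for each prosody and time window.
model = smf.ols("erp_effect ~ sex + aq", data=df).fit()
print(model.summary())
```
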
3. Results

3.1. Behavioral data

The descriptive statistics with respect to age, AQ, EQ, and nonverbal IQ among women and men are provided in Table 1. We conducted independent samples t-tests with sex as the independent variable and AQ, EQ, nonverbal IQ, and age as dependent variables. There were no significant differences between men and women on AQ, EQ, IQ, or age (see Table 1 for statistics).

The average RTs and accuracies in the four stimulus conditions are displayed in Table 2. Women's and men's RTs were on average 1232.65 ms and 1302.32 ms, respectively. Please note that the auditory stimuli had an average duration of 1059 ms (happy prosodic words: 877 ms; sad prosodic words: 1240 ms), which could be the reason for these long RTs.

Two separate sets of analyses were performed to investigate different aspects of the behavioral data. First, three-way mixed-design repeated measures ANCOVAs were conducted to examine the congruity effect and its interactions with prosody, sex, and AQ. Second, multiple regression analyses were performed to examine individual differences, asking whether sex and AQ predict congruity effects in RT and accuracy (i.e., differences between incongruous and congruous conditions).

A three-way mixed-design repeated measures ANCOVA was conducted to investigate sex differences (women vs. men), prosody effects (happy vs. sad tone), and congruity effects (congruous vs. incongruous) on RT, with AQ as a covariate. The results showed a main effect of sex: men responded more slowly to the stimuli than women (F(1, 43) = 5.75, p = 0.021, partial η2 = .12). The main effect of prosody was also significant: participants responded more slowly to stimuli spoken in a sad tone as compared to a happy tone (F(1, 43) = 386.80, p < 0.001, partial η2 = .90). Furthermore, there was a main effect of congruity: participants responded more slowly to incongruous than congruous stimuli (F(1, 43) = 4.47, p = 0.04, partial η2 = 0.09). With respect to interactions, our analyses revealed a significant interaction between prosody and congruity (F(1, 43) = 9.02, p = 0.004, partial η2 = 0.17), indicating that the effect of congruity was larger for happy tone stimuli (M = 38.53 ms) than for sad tone stimuli (M = 11.07 ms). No other interaction effects were significant (ps > .05).

To investigate the effects of the same factors on accuracy, a three-way mixed-design repeated measures ANCOVA was conducted with sex (women vs. men), prosody (happy vs. sad tone), and congruity (congruous vs. incongruous) as factors, and AQ as a covariate. No significant main effects or interaction effects were observed. We should, however, note that results regarding the effects on accuracy are limited by the range restriction of accuracy scores in this task.

Multiple regression analyses were conducted with sex and AQ as predictors and the RT difference (between incongruous and congruous conditions) as the dependent variable. There were no significant effects in either the happy prosodic or the sad prosodic condition. The same analyses were performed on the accuracy data. Similarly, no significant effects resulted. Note, however, that accuracy effects are limited by the range restriction of accuracy scores in this task.

There was a significant negative correlation between the RTs of a specific prosodic condition and the congruity effect (RT of incongruous minus RT of congruous condition) in the corresponding prosodic condition (r = −0.31, p = .003). Fig. 1 illustrates this negative association between RTs and the congruity effect for all participants, with females and males coded separately. Conditions with longer RTs tended to reveal smaller congruity effects. Correlations amounted to −0.31 (p = .036) and −.29 (p = .052) for females and males, respectively.

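For illustration, the reported correlation between condition-wise mean RTs and the RT congruity effect could be computed as follows; the values are toy per-participant, per-prosody observations, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Toy values: mean RT across congruous and incongruous trials, and the RT
# congruity effect (incongruous minus congruous) in the same prosodic condition.
mean_rt = np.array([1100.0, 1180.0, 1350.0, 1420.0])
congruity_effect = np.array([42.0, 35.0, 18.0, 9.0])

r, p = pearsonr(mean_rt, congruity_effect)   # a negative r is expected, as reported
print(f"r = {r:.2f}, p = {p:.3f}")
```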


Table 2
Women's and men's reaction times (RT) and accuracies (ACC) in the four stimulus conditions.

Measure  Sex    Statistics  Happy      Happy        Sad        Sad          Congruity       Congruity     Average
                            congruous  incongruous  congruous  incongruous  effect (Happy)  effect (Sad)
RT       Women  M           1084.92    1125.26      1352.61    1367.81      40.33           15.20         1232.65
                SD          70.97      70.45        72.99      73.43
         Men    M           1155.61    1192.33      1427.21    1434.14      36.72           6.93          1302.32
                SD          119.30     122.74       120.01     112.71
ACC      Women  M           .96        .96          .96        .94          0               −.02          .96
                SD          .04        .02          .04        .05
         Men    M           .95        .93          .95        .92          −.02            −.03          .94
                SD          .04        .07          .04        .05

Note: The congruity effect is calculated as incongruous minus congruous condition.

Fig. 1. Negative correlation between the mean reaction time (Mean RT) across congruous and incongruous conditions, separately for each prosodic condition, and the RT congruity effect (RT of incongruous condition minus RT of congruous condition) in the corresponding prosodic condition.

3.2. ERP data

The grand average RIDE-corrected ERPs of women and men for the four stimulus conditions at a representative site, Cz, are provided in Figs. 2a and 2b, respectively. Fig. 3 shows bar plots of the ERP congruity effect in the happy and sad prosodic conditions at the Cz electrode in the time windows 350–650 ms, 750–900 ms, and 1150–1300 ms. Fig. 4 illustrates topographical plots of the ERP congruity effects (incongruous minus congruous condition) in the happy (A) and sad (B) prosodic conditions in the time windows 350–650 ms, 750–900 ms, and 1150–1300 ms for females vs. males.

Fig. 2. 2a & 2b. Grand ERP averages of females vs. males in congruous and incongruous conditions for happy prosodic stimuli (top panel) and sad prosodic stimuli (bottom panel) at the Cz electrode.

Fig. 3. Bar plots illustrating ERP congruity effects (incongruous minus congruous condition) in happy and sad prosodic conditions at the Cz electrode in the time windows 350–650 ms, 750–900 ms, and 1150–1300 ms.

Fig. 4. Topographical plots of ERP congruity effects (incongruous minus congruous condition) in the happy (A) and sad (B) prosodic conditions in the time windows 350–650 ms, 750–900 ms, and 1150–1300 ms for females vs. males.

Multiple regression analyses were performed with the ERP amplitude differences between the incongruous and congruous conditions (i.e., the ERP congruity effect) as the dependent variables and sex and AQ as predictors. Separate regressions were computed for the three time windows and for the happy and sad prosodic stimulus conditions (see Table 3 for a summary of the statistics).

Table 3
Regression analysis results with Sex and AQ as predictors of ERP amplitude differences between incongruous and congruous conditions, for happy and sad prosody and three time windows (350–650 ms, 750–900 ms, and 1150–1300 ms).

Prosody  Time window (ms)  R2    F     p    b_sex  t_sex  p_sex  b_AQ   t_AQ    p_AQ
Happy    350–650           .13*  3.38  .04  .34    2.43   .01*   .13    .93     .35
         750–900           .03   .75   .47  .15    1.02   .31    .10    .67     .50
         1150–1300         .09   2.15  .13  .30    2.04   .048*  .06    .39     .69
Sad      350–650           .03   .67   .51  −.02   −.11   .91    −.17   −1.15   .25
         750–900           .07   1.79  .17  −.24   −1.68  .09    −.12   −.88    .38
         1150–1300         .01   .31   .73  −.12   −.76   .45    −.03   −.21    .83

Note: *p < .05; **p < .01. The regression weights (b) are standardized estimates.

Time window 350–650 ms (N400). In the happy prosodic condition, there was a significant sex difference in the N400 congruity effect (between incongruous and congruous conditions) (b = 0.34, t = 2.43, p = .01). Men had a larger N400 congruity effect than women. The effect of AQ was not significant (b = 0.13, t = 0.93, p > .1). Sex and AQ together explained 13 % of the variance of the N400 difference (R2 = 0.13, F(2, 43) = 3.38, p = 0.04). There were no significant effects in the sad prosodic condition (see Table 3 for the estimates).

Time window 750–900 ms. No significant effects of sex (happy: p > .1; sad: p = 0.09) or AQ (ps > 0.1) occurred in either the happy or the sad prosodic condition (see Table 3).

Time window 1150–1300 ms. For happy prosodic stimuli, sex was significantly related to the congruity effect at 1150–1300 ms (b = 0.30, t = 2.04, p = .048). Women had larger ERP differences than men in this time window. There was no significant association between AQ and sad prosodic stimuli processing (see Table 3).

4. Discussion

This study aimed to examine sex differences, while controlling for autistic traits, regarding two aspects of communication processes: 1) efficiency of emotional speech processing and 2) multi-channel integration of emotional prosody and semantic valence. Regarding efficiency of processing, we found a significant sex difference in reaction time but not accuracy. The absence of sex differences in accuracy is not surprising because the task was designed to be very easy, as shown by the ceiling effect of accuracy rates among female and male participants alike. However, men responded more slowly than women when making semantic judgments about emotionally spoken words, suggesting lower efficiency in the processing of emotional speech in men. It is plausible that these differences are not due to differences in general cognitive processing speed, as sex differences were not revealed by the speeded version of the applied non-verbal IQ test. Our results showed that sex differences in reaction times were highly significant even with AQ controlled; that is, women and men differed in task performance even though on average they had similar AQ. Therefore, our results refute the hypothesis that sex differences in semantic valence processing are driven by the generally higher average AQ of men as compared to women in the population. Instead, our findings are consistent with previous studies showing women's superior performance in processing emotional signals, such as higher accuracy in identifying facial emotions (Collignon et al., 2010; Montagne, Kessels, Frigerio, de Haan, & Perrett, 2005) and vocal emotions (Schirmer et al., 2004). Note that the semantic judgment is an easy task while participants were required to respond as fast as possible. Thus, the superior performance of women was reflected by shorter reaction times instead of higher accuracy rates.

4.1. Integration between emotional prosody and semantic valence


Apart from efficiency in processing emotional signals, we aimed to demonstrate sex differences in the automatic integration between semantic valence and emotional prosody with both behavioral and neural measures, when AQ is controlled for. The integration was indicated by differences between incongruous and congruous conditions in reaction times, accuracy, and ERP amplitudes (congruity effects). In incongruous conditions, where prosody and semantic valence mismatched (e.g., words with negative meaning spoken in happy prosody), conflicts in perception and response generation are expected. A larger difference between congruous and incongruous conditions (congruity effect) indicates more automatic (involuntary) integration between prosody and semantic valence, given that emotional prosody was irrelevant to the task of judging semantic valence.

The main effect of congruity on reaction time indicates that the participants did automatically integrate emotional prosody when processing semantic valence. This replicates the findings of Schirmer et al. (2006), who revealed a main effect of congruity. Our finding indicating no sex difference in multi-channel integration of emotional prosody and semantic valence is, however, at variance with other studies demonstrating women's superior performance in processing emotional signals from multiple modalities. For example, Collignon et al. (2010) found that women had a larger bimodal facilitation than men, supporting that women integrate vocal and facial emotional expressions to a larger extent than men. In another study, Hall, Witelson, Szechtman, and Nahmias (2004) found women to be more accurate in matching sad prosodic and visual stimuli than men. Importantly, that study also showed that women and men differ in their neural patterns of responses when processing multi-modal emotional stimuli. Women showed stronger activation of the limbic system (e.g., anterior cingulate and thalamus) while men recruited more frontal areas (e.g., left inferior frontal gyrus). The authors suggested that women may treat emotional cues of multi-modal stimuli as unconditional innate stimuli, whereas men may allocate more resources to regulatory and inhibitory processes for emotional responses. In our study, although no sex differences occurred regarding the congruity effects in reaction times, it is possible that the timing of the neurocognitive integration of emotional prosody and semantic valence of spoken words differs between women and men. This is indicated in the ERP results discussed next.

Interestingly, in several time segments of the ERPs there were sex differences in congruity effects: for happy prosodic stimuli, men exhibited a greater ERP difference than women in the N400 component (350–650 ms), while women had a greater ERP difference than men in the late positive component (1150–1300 ms). These ERP results show that both sexes integrate emotional prosody and semantic valence, while their timing differs. This is a novel finding, as previous studies have not yet examined sex differences in integrating emotional prosody and semantic valence for specific types of emotions. Although past studies showed women's advantage in recognizing emotions, many of them concerned emotions in a single channel, such as faces or voices alone, rather than the integration of multi-channel information (e.g., Hall & Matsumoto, 2004; Montagne et al., 2005; Thayer & Johnsen, 2000). Our findings add to the literature by showing that men and women differ in the timing of the neurocognitive integration of happy prosody with the valence of semantic meaning and that this cannot be accounted for by differences in autistic traits. While men had a larger ERP congruity effect than women in the N400 time window, which corresponds to the incongruity detection stage (Kutas & Federmeier, 2011), women had a larger ERP congruity effect than men in the later time window (1150–1300 ms), which corresponds to the evaluation of stimulus probability. In previous language studies, the late component was found to be larger when a word is discordant with semantic or syntactic constraints (Hahne & Friederici, 2002; Van den Brink, Brown, & Hagoort, 2001). Our findings suggest that men allocate more neural resources to detecting incongruity than women, while women dedicate more neural resources to evaluating the stimulus probability (Coulson et al., 1998), even if sex differences in autistic traits are controlled for. Future studies are needed to further test the robustness of this interpretation.

Another major finding is the missing sex difference in the integration between sad prosody and semantic valence when controlling for autistic traits. This is in contrast to previous behavioral studies which showed women's higher sensitivity towards sad stimuli (e.g., Eugène et al., 2003; Fujisawa & Shinohara, 2011; Orozco & Ehlers, 1998; Rotter & Rotter, 1988) and neuroimaging studies which demonstrated that women recruited a more extensive and less lateralized neural network during induced sadness (Schneider, Habel, Kessler, Salloum, & Posse, 2000). One possibility is that sex differences in processing sad stimuli are alleviated when both sexes have a similar level of autistic traits. Autistic traits are highly and negatively correlated with empathy, and a past study revealed that adolescents with autism were less able to empathize with other people's negative emotions than with positive emotions (Mazza et al., 2014). Matching the levels of autistic traits between men and women may therefore reduce differences in processing sad stimuli, as shown by the lack of sex differences in ERPs for sad prosodic stimuli.

Another possibility could be that the applied sad stimuli were quite long in duration (sad prosodic words = 1240 ms versus happy prosodic words = 877 ms). Long stimuli may have granted participants prolonged periods of time to process and respond, as shown by the longer reaction time to sad stimuli (1387 ms) than to happy stimuli (1145 ms). The long duration of sad stimuli might have diminished the cognitive demands required to handle the conflicting information between prosody and semantic valence, which is supported by our finding that the congruity effect on reaction times was negatively correlated with reaction times. The lack of sex differences in the ERP congruity effect in the sad prosodic condition could thus be due to the overall diminished congruity effect.

A third possibility to explain the lack of a sex difference in the sad stimulus condition is the nature of the stimuli used in our study. The emotional prosodies, happy and sad, were discrete emotions, while the semantic valence was categorized more broadly as positive and negative. It is generally agreed that there are multiple negative emotions (e.g., fear, anger, disgust, and embarrassment) as compared with positive emotions (e.g., happiness, elation, calmness). In our stimuli, while happy prosody generally matches positive word meanings, sad prosody matches exactly only the meanings of around 20 (out of 60) negative words included in our study. It is therefore possible that some of the sadly spoken negative words are not exactly congruous, which may explain why the congruity effect was smaller for sad stimuli than for happy stimuli. Future studies may consider matching the emotional prosody with specific word meanings instead of a broad valence.

4.2. Limitations

The current study recorded participants' responses to the congruity between emotional prosody and semantic content in speech. It is possible that the observed congruity effects are not specific to emotional stimuli but extend to non-emotional information from multiple channels, such as in Stroop tasks (e.g., Zahedi, Abdel Rahman, Stürmer, & Sommer, 2019). Further studies may include a non-emotional conflict processing task as a control condition. Regarding participant selection, our study included neurotypical participants with different levels of autistic traits. Future studies are needed to investigate the relationship between autistic traits and emotional prosody processing by including participants diagnosed with ASD, in order to examine the targeted effects across the whole autism spectrum. In terms of stimulus characteristics, the auditory words were produced by a single female speaker. Future studies will need to include stimuli produced by both female and male speakers, and by multiple speakers, to be able to generalize across specific stimuli.

4.3. Conclusion


This study indicates that women process emotional speech more efficiently than men, while no behavioral sex differences were found in multi-channel integration processes for prosody and semantic meaning when the level of autistic traits was controlled for. At the neural level, sex differences emerged in the timing of integration between prosody and semantic valence when processing happily spoken words. Men showed integration effects in earlier neurocognitive processes than women. Our study provides novel findings on sex differences in emotional speech processing that are not due to autistic traits. This contributes to a better understanding of individual differences in prosody-semantic integration during human communication, and it helps disentangle differences between individuals in higher-order social cognition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the Hong Kong Baptist University Research Committee Interdisciplinary Research Matching Scheme [RC-IRMS/16-17/04]; the Hong Kong Baptist University Interdisciplinary Research Clusters 2018/19 [RC-IRCMs/18-19/SCI/01]; and the General Research Fund of the Research Grants Committee, Hong Kong [Ref No. 12604418]. The work was also supported by the Hong Kong University Grants Committee, Germany-Hong Kong Joint Research Scheme [G_HKBU/201/17], awarded to Changsong Zhou and Xiaojing Li, and [ID 57391438], awarded to Werner Sommer and Andrea Hildebrandt. The author Xiaojing Li is thankful to the Hanse-Wissenschaftskolleg Institute for Advanced Study, Delmenhorst, Germany, for the fellowship support.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed.). Arlington, VA: American Psychiatric Association. https://doi.org/10.1176/appi.books.9780890425596
Baron-Cohen, S., & Wheelwright, S. (2004). The Empathy Quotient: An investigation of adults with Asperger Syndrome or high functioning Autism, and normal sex differences. Journal of Autism and Developmental Disorders, 34(2), 163–175.
Baron-Cohen, S., Wheelwright, S., Skinner, R., Martin, J., & Clubley, E. (2001). The Autism-Spectrum Quotient (AQ): Evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders, 31(1), 5–17.
Besson, M., Kutas, M., & Van Petten, C. (1992). An event-related potential (ERP) analysis of semantic congruity and repetition effects in sentences. Journal of Cognitive Neuroscience, 4(2), 132–149.
Brandwein, A. B., Foxe, J. J., Butler, J. S., Russo, N. N., Altschuler, T. S., Gomes, H., & Molholm, S. (2012). The development of multisensory integration in high-functioning autism: High-density electrical mapping and psychophysical measures reveal impairments in the processing of audiovisual inputs. Cerebral Cortex, 23(6), 1329–1341. https://doi.org/10.1093/cercor/bhs109
Cai, Q., & Brysbaert, M. (2010). SUBTLEX-CH: Chinese word and character frequencies based on film subtitles. PloS One, 5(6), E10729. https://doi.org/10.1371/journal.pone.0010729
Canal, P., Bischetti, L., Di Paola, S., Bertini, C., Ricci, I., & Bambini, V. (2019). 'Honey, shall I change the baby? - well done, choose another one': ERP and time-frequency correlates of humor processing. Brain and Cognition, 132, 41–55.
Collignon, O., Girard, S., Gosselin, F., Saint-Amour, D., Lepore, F., & Lassonde, M. (2010). Women process multisensory emotion expressions more efficiently than men. Neuropsychologia, 48(1), 220–225. https://doi.org/10.1016/j.neuropsychologia.2009.09.007
Coulson, S., King, J. W., & Kutas, M. (1998). Expect the unexpected: Event-related brain response to morphosyntactic violations. Language and Cognitive Processes, 13(1), 21–58.
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21.
Eugène, F., Lévesque, J., Mensour, B., Leroux, J. M., Beaudoin, G., Bourgouin, P., … Beauregard, M. (2003). The impact of individual differences on the neural circuitry underlying sadness. Neuroimage, 19(2), 354–364.
Friederici, A. D. (2002). Towards a neural basis of auditory sentence processing. Trends in Cognitive Sciences, 6(2), 78–84.
Fujisawa, T., & Shinohara, K. (2011). Sex differences in the recognition of emotional prosody in late childhood and adolescence. The Journal of Physiological Sciences, 61(5), 429–435.
Gayle, L. C., Gal, D., & Kieffaber, P. D. (2012). Measuring affective reactivity in individuals with autism spectrum personality traits using the visual mismatch negativity event-related brain potential. Frontiers in Human Neuroscience, 6, 334.
Hahne, A., & Friederici, A. D. (2002). Differential task effects on semantic and syntactic processes as revealed by ERPs. Cognitive Brain Research, 13(3), 339–356.
Hall, J. A., & Matsumoto, D. (2004). Gender differences in judgments of multiple emotions from facial expressions. Emotion, 4(2), 201–206.
Hall, G. B., Witelson, S. F., Szechtman, H., & Nahmias, C. (2004). Sex differences in functional activation patterns revealed by increased emotion processing demands. Neuroreport, 15(2), 219–223. https://doi.org/10.1097/00001756-200402090-00001
Halliday, D. W., MacDonald, S. W., Sherf, S. K., & Tanaka, J. W. (2014). A reciprocal model of face recognition and autistic traits: Evidence from an individual differences perspective. PloS One, 9(5), Article e94013.
Herring, D. R., Taylor, J. H., White, K. R., & Crites Jr., S. L. (2011). Electrophysiological responses to evaluative priming: The LPP is sensitive to incongruity. Emotion, 11(4), 794.
Herron, J. E. (2007). Decomposition of the ERP late posterior negativity: Effects of retrieval and response fluency. Psychophysiology, 44(2), 233–244.
Ishii, K., Kobayashi, Y., & Kitayama, S. (2010). Interdependence modulates the brain response to word–voice incongruity. Social Cognitive and Affective Neuroscience, 5(2-3), 307–317.
Ishii, K., Reyes, J. A., & Kitayama, S. (2003). Spontaneous attention to word content versus emotional tone: Differences among three cultures. Psychological Science, 14(1), 39–46.
Jiang, X., & Pell, M. D. (2016a). Neural responses towards a speaker's feeling of (un)knowing. Neuropsychologia, 81, 79–93.
Jiang, X., & Pell, M. D. (2016b). The feeling of another's knowing: How "mixed messages" in speech are reconciled. Journal of Experimental Psychology: Human Perception and Performance, 42(9), 1412.
Johansson, M., & Mecklinger, A. (2003). The late posterior negativity in ERP studies of episodic memory: Action monitoring and retrieval of attribute conjunctions. Biological Psychology, 64(1–2), 91–117.
Kitayama, S., & Ishii, K. (2002). Word and voice: Spontaneous attention to emotional utterances in two languages. Cognition & Emotion, 16(1), 29–59.
Kutas, M., & Federmeier, K. D. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4(12), 463–470.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.
Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203–205.
Lassalle, A., & Itier, R. J. (2015). Autistic traits influence gaze-oriented attention to happy but not fearful faces. Social Neuroscience, 10(1), 70–88.
Liu, B., Wang, Z., & Jin, Z. (2010). The effects of punctuations in Chinese sentence comprehension: An ERP study. Journal of Neurolinguistics, 23(1), 66–80.
Liu, P., Rigoulot, S., & Pell, M. D. (2015). Culture modulates the brain response to human expressions of emotion: Electrophysiological evidence. Neuropsychologia, 67, 1–13.
Lui, M., So, W. C., & Tsang, Y. K. (2018). Neural evidence for reduced automaticity in processing emotional prosody among men with high levels of autistic traits. Physiology & Behavior, 196, 47–58.
Marslen-Wilson, W. (1987). Functional parallelism in spoken word recognition. In U. H. Frauenfelder, & L. K. Tyler (Eds.), Spoken word recognition (pp. 71–102). Cambridge, MA: MIT Press.
Mazza, M., Pino, M. C., Mariano, M., Tempesta, D., Ferrara, M., De Berardis, D., … Valenti, M. (2014). Affective and cognitive empathy in adolescents with autism spectrum disorder. Frontiers in Human Neuroscience, 8, 791. https://doi.org/10.3389/fnhum.2014.00791
Montagne, B., Kessels, R. P., Frigerio, E., de Haan, E. H., & Perrett, D. I. (2005). Sex differences in the perception of affective facial expressions: Do men really lack emotional sensitivity? Cognitive Processing, 6(2), 136–141.
Orozco, S., & Ehlers, C. L. (1998). Gender differences in electrophysiological responses to facial stimuli. Biological Psychiatry, 44(4), 281–289.
Ouyang, G., Herzmann, G., Zhou, C., & Sommer, W. (2011). Residue Iteration Decomposition (RIDE): A new method to separate ERP components on the basis of latency variability in single trials. Psychophysiology, 48(12), 1631–1647. https://doi.org/10.1111/j.1469-8986.2011.01269.x
Ouyang, G., Sommer, W., & Zhou, C. (2015). A toolbox for residue iteration decomposition (RIDE)—A method for the decomposition, reconstruction, and single trial analysis of event related potentials. Journal of Neuroscience Methods, 250, 7–21.
Peled-Avron, L., & Shamay-Tsoory, S. G. (2017). Don't touch me! Autistic traits modulate early and late ERP components during visual perception of social touch. Autism Research, 10(6), 1141–1154.
Raven, J. C., Raven, J. E., & Court, J. H. (1998). Progressive matrices. Oxford, England: Oxford Psychologists Press.
Rotter, N. G., & Rotter, G. S. (1988). Sex differences in the encoding and decoding of negative facial emotions. Journal of Nonverbal Behavior, 12(2), 139–148.
Ruzich, E., Allison, C., Smith, P., Watson, P., Auyeung, B., Ring, H., & Baron-Cohen, S. (2015). Measuring autistic traits in the general population: A systematic review of the Autism-Spectrum Quotient (AQ) in a nonclinical population sample of 6,900 typical adult males and females. Molecular Autism, 6(1), 2. https://doi.org/10.1186/2040-2392-6-2
Schirmer, A., & Kotz, S. A. (2003). ERP evidence for a sex-specific Stroop effect in emotional speech. Journal of Cognitive Neuroscience, 15(8), 1135–1148. https://doi.org/10.1162/089892903322598102


Schirmer, A., Kotz, S. A., & Friederici, A. D. (2002). Sex differentiates the role of emotional prosody during word processing. Cognitive Brain Research, 14(2), 228–233.
Schirmer, A., Zysset, S., Kotz, S. A., & von Cramon, D. Y. (2004). Gender differences in the activation of inferior frontal cortex during emotional speech perception. NeuroImage, 21(3), 1114–1123.
Schirmer, A., Tang, S.-L., Penney, T. B., Gunter, T. C., & Chen, H.-C. (2005). Brain responses to segmentally and tonally induced semantic violations in Cantonese. Journal of Cognitive Neuroscience, 17(1), 1–12. https://doi.org/10.1162/0898929052880057
Schirmer, A., Lui, M., Maess, B., Escoffier, N., Chan, M., & Penney, T. B. (2006). Task and sex modulate the brain response to emotional incongruity in Asian listeners. Emotion, 6(3), 406–417.
Schneider, F., Habel, U., Kessler, C., Salloum, J. B., & Posse, S. (2000). Gender differences in regional cerebral activity during sadness. Human Brain Mapping, 9(4), 226–238.
Stevenson, R. A., Siemann, J. K., Schneider, B. C., Eberly, H. E., Woynaroski, T. G., Camarata, S. M., & Wallace, M. T. (2014). Multisensory temporal integration in Autism Spectrum Disorders. The Journal of Neuroscience, 34(3), 691–697. https://doi.org/10.1523/JNEUROSCI.3615-13.2014
Tanaka, A., Koizumi, A., Imai, H., Hiramatsu, S., Hiramoto, E., & de Gelder, B. (2010). I feel your voice: Cultural differences in the multisensory perception of emotion. Psychological Science, 21(9), 1259–1262.
Thayer, J. F., & Johnsen, B. H. (2000). Sex differences in judgement of facial affect: A multivariate analysis of recognition errors. Scandinavian Journal of Psychology, 41(3), 243–246.
Van den Brink, D., Brown, C. M., & Hagoort, P. (2001). Electrophysiological evidence for early contextual influences during spoken-word recognition: N200 versus N400 effects. Journal of Cognitive Neuroscience, 13(7), 967–985. https://doi.org/10.1162/089892901753165872
van den Brink, D., Van Berkum, J. J., Bastiaansen, M. C., Tesink, C. M., Kos, M., Buitelaar, J. K., … Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social Cognitive and Affective Neuroscience, 7(2), 173–183.
Zahedi, A., Abdel Rahman, R., Stürmer, B., & Sommer, W. (2019). Common and specific loci of Stroop effects in vocal and manual tasks, revealed by event-related brain potentials and posthypnotic suggestions. Journal of Experimental Psychology: General, 148(9), 1575–1594. https://doi.org/10.1037/xge0000574
