
Laterality, 2014

Vol. 19, No. 4, 455–472, http://dx.doi.org/10.1080/1357650X.2013.862256

The influence of left and right hemisphere brain damage on configural and featural processing of affective faces

Jacenta D. Abbott¹, Tissa Wijeratne², Andrew Hughes³, Diana Perre⁴, and Annukka K. Lindell¹

¹School of Psychological Science, La Trobe University, Bundoora, VIC, Australia
²Department of Neurology, Western Hospital, Western Health, Footscray, VIC, Australia
³Department of Neurology, Austin Hospital, Austin Health, Heidelberg, VIC, Australia
⁴Department of Neuropsychology, Sunshine Hospital, Western Health, Sunshine, VIC, Australia

The literature about the lateralization of facial emotion perception according to valence
(positive, negative) is conflicting; investigating the underlying processes may shed light
on why some studies show right-hemisphere dominance across valence and other studies
demonstrate hemispheric differences according to valence. This is the first clinical study
to examine whether the use of configural and featural cues underlies hemispheric
differences in affective face perception. Right brain-damaged (RBD; n = 17), left brain-
damaged (LBD; n = 17) and healthy control (HC; n = 34) participants completed an
affective face discrimination task that tested configural processing using whole faces and
featural processing using partial faces. No group differences in expression perception
according to valence or processing strategy were found. Across emotions, the RBD
group was less accurate than the HC group in discriminating whole faces, whilst the
RBD and LBD groups were less accurate than HCs in discriminating partial faces. This
suggests that the right hemisphere processes facial expressions from configural and
featural information, whereas the left hemisphere relies more heavily on featural facial
information.

Address correspondence to Jacenta D. Abbott, School of Psychological Science, La Trobe University, Bundoora, VIC 3086, Australia. E-mail: jd2abbott@students.latrobe.edu.au
We thank all of the participants who volunteered their time towards this research project. We also
thank the Western and Austin Health neurology teams for their contribution towards the recruitment
of stroke participants. Many thanks are especially extended to Dr Lisa Sherry for her support in
facilitating participant recruitment from Austin Health.

© 2013 Taylor & Francis



Keywords: Lateralization; Configural; Featural; Emotion; Perception

Facial expressions are a core component of social cognition (e.g., Ekman &
Friesen, 1969; Mehrabian & Ferris, 1967; Mehrabian & Friar, 1969). As such,
considerable research has investigated how the brain is lateralized to process
emotion from faces. Despite over three decades of research, the nature of
emotion lateralization remains contentious. Research examining normal and
clinical populations supports two main theories: the Right Hemisphere Hypo-
thesis (RHH) and the Valence Hypothesis (VH).
The RHH argues that the right hemisphere dominates all emotion processing
irrespective of valence (for review see Borod, Bloom, Brickman, Nakhutina, &
Curko, 2002; Demaree, Everhart, Youngstrom, & Harrison, 2005). Chimeric face
research (e.g., Christman & Hackworth, 1993; Drebing, Federman, Edington, &
Terzian, 1997) demonstrates a left half-face bias for both positive and negative
emotions, supporting the right hemisphere’s role in the perception of emotion,
irrespective of valence, consistent with the RHH. Furthermore, fMRI studies
indicate activation in the right hemisphere during affective face perception tasks,
across valence (Narumoto, Okada, Sadato, Fukui, & Yonekura, 2001; Sato,
Kochiyama, Yoshikawa, Naito, & Matsumura, 2004). Clinical studies assessing
unilateral brain-damaged participants also support the RHH using discrimination
(Blonder, Bowers, & Heilman, 1991; Bowers, Bauer, Coslett, & Heilman, 1985;
Charbonneau, Scherzer, Aspirot, & Cohen, 2003), identification (Borod et al.,
1990; Charbonneau et al., 2003) and labelling (Bowers et al., 1985; Kucharska-
Pietura, Phillips, Gernand, & David, 2003) tasks. All these studies indicate that
negative and positive facial emotions are preferentially processed by the right
hemisphere, suggesting right-hemisphere dominance for emotion processing.
In contrast, the VH argues that negative emotion is processed in the right
hemisphere and positive emotion is processed in the left hemisphere (for review
see Borod et al., 2002; Demaree et al., 2005). Supporting the VH, fMRI research
shows higher levels of brain activity in the left hemisphere in response to
positive pictures, and greater activity in the right hemisphere for negative
pictures (Canli, Desmond, Zhao, Glover, & Gabrieli, 1998; Dolcos, LaBar, &
Cabeza, 2004). Behavioural paradigms administered to normal individuals,
including the divided visual field technique, also support the VH (e.g., Davidson,
Mednick, Moss, Saron, & Schaffer, 1987; Jansari, Tranel, & Adolphs, 2000;
Reuter-Lorenz & Davidson, 1981). For example, Jansari et al. (2000) showed a
left visual field (right hemisphere) bias for negative facial emotion and a right
visual field (left hemisphere) bias for positive facial emotion using a divided
visual field discrimination test (Jansari et al., 2000), consistent with the VH.
Contrasting with normal population findings, clinical research involving left or
right brain-damaged patients has not fully supported the VH. Rather, a large number
of clinical studies support an alternative valence model, whereby the right
hemisphere is dominant for the perception of negative emotion but both
hemispheres process positive emotion (e.g., Abbott, Cumming, Fidler, & Lindell,
2013; Adolphs, Jansari, & Tranel, 2001; Borod, Koff, Perlman Lorch, & Nicholas,
1986; Mandal et al., 1999). For example, a recent meta-analysis of the clinical
literature (Abbott et al., 2013) indicated that both right brain damage (RBD) and
left brain damage (LBD) produced emotion perception impairment across
valence, although patients with RBD demonstrated a greater tendency for negative
compared to positive emotion impairment. These findings indicate that the right
hemisphere is particularly important for perceiving negative faces, whereas both
hemispheres mediate positive facial emotion perception, supporting the alternative
valence hypothesis.
As this brief review indicates, the literature is divided: it is unclear whether the
right hemisphere preferentially processes all emotion (i.e., RHH; e.g., Christman
& Hackworth, 1993; Kucharska-Pietura et al., 2003; Narumoto et al., 2001), or
whether the right and left hemispheres are differentially lateralized to process
emotion according to valence (i.e., VH or alternative VH; e.g., Abbott et al., 2013;
Borod et al., 1986; Canli et al., 1998; Jansari et al., 2000). Regardless of cortical
lateralization for emotion processing, the right and left hemispheres may process
positive and negative emotion from faces via alternative strategies. Further
knowledge about the strategies that underlie processing of facial emotion may
help clarify how the hemispheres are lateralized to process facial expressions
according to valence.
Two main processing strategies are thought to underlie face perception and may
also influence affective face perception: configural and featural processing (for
review see Collishaw & Hole, 2000). The configural processing strategy
encompasses a holistic approach, whereby the spatial relation between facial
features is used. In contrast, the featural processing strategy is a piecemeal, analytic
approach (i.e., processing specific facial detail such as the eyes; Collishaw &
Hole, 2000).
Research examining both normal and unilateral brain-damaged individuals
demonstrates that the right hemisphere processes configural facial information,
whereas the left hemisphere processes facial information featurally (e.g., Bliem,
1998; Bourne, Vladeanu, & Hole, 2009; Rossion et al., 1999; Uttner, Bliem, &
Danek, 2002; Yin, 1970). For example, Yin (1970) found that upright face
recognition (configural) was impaired in patients with posterior RBD compared
to healthy control (HC) and LBD participants. In contrast, patients with other
(bilateral or unilateral left-sided) brain damage showed impaired recognition of
inverted faces (featural) compared to HC and posterior RBD patients.
Research testing non-clinical participants also indicates that the left and right
hemispheres are differentially specialized for configural and featural processing
(e.g., Bourne et al., 2009; Leehey, Carey, Diamond, & Cahn, 1978; Rhodes,
1993). For example, Leehey et al. (1978) and Rhodes (1993) found a right visual
field response bias for inverted faces suggesting left hemisphere expertise for
processing faces based on featural information. A similar hemispheric pattern has
also been shown using fMRI, indicating that the left fusiform area shows greater
activation than the right fusiform area in response to inverted faces (Passarotti,
Smith, DeLano, & Huang, 2007). Overall, clinical and non-clinical research
using various methods has largely supported the suggestion that the right
hemisphere is specialized to process faces configurally, whereas the left
hemisphere is specialized to process faces featurally.
Although the hemispheric specialization for configural and featural face
processing strategies has been established, only one study to date has
investigated the relationship between configural and featural processing strat-
egies and the perception of facial expressions according to valence (Bourne,
2011). This study administered a chimeric face test incorporating the six basic
emotions to normal participants, with pairs of chimeric faces presented either
upright (configural) or inverted (featural). Left visual field performance was
superior for all emotions when the faces were presented upright, whereas right
visual field performance was superior when the faces were inverted. The
magnitude of the inversion effect varied across the six emotions: the right visual
field was superior at processing positive inverted facial expressions, but no clear
lateralization effect was found for the negatively valenced inverted faces.
Overall, Bourne (2011) concluded that the right hemisphere is specialized for
processing configural facial information and the left hemisphere specialized to
process featural facial information.
However, Bourne’s (2011) results did not consistently support either the RHH
or the VH. Although the upright condition clearly supports the RHH, given the
right hemisphere bias for positive and negative emotions, the inverted condition
lends support to the VH, with hemispheric differences in the processing of
positive and negative emotions. The author was unable to adequately reconcile
these opposing findings. However, given that previous research has shown that
negative expressions are more affected by configural manipulations (Chambon,
Baudouin, & Franck, 2006; Prkachin, 2003), it was suggested that while positive
emotions can be processed on the basis of both configural and featural
information, the processing of negative emotions may be more reliant on
configural information. Thus, differences in the underlying strategies involved in
the processing of positive versus negative emotions may account for the
hemispheric differences observed in the inverted condition. To date, no clinical
studies with unilateral brain-damaged patients have assessed the strategies
behind the lateralization of emotion perception according to valence.
Given the contention in the literature concerning (1) the lateralization of
emotion perception and (2) the contributions of configural and featural processes
to left and right hemisphere emotion recognition, the present study was designed
to determine whether the hemispheres use the same or different processes to
perceive positive and negative facial expressions. As such, this study aims to
investigate the extent to which patients with RBD and LBD rely on configural
and/or featural strategies when processing positive and negative facial emotions.
To achieve this aim, three participant groups were recruited: an RBD patient
group, an LBD patient group, and an HC group. An affective face discrimination
task was used to elicit configural processing via the use of pairs of whole faces
and featural processing through the use of face pairs that included partial (i.e., only
eyes or only mouth) stimuli. This task was chosen because other studies have
used whole and part-based facial stimuli to investigate configural and featural
information processing (e.g., Harris & Aguirre, 2010; Stephan, Breen, & Caine,
2006). It is expected that RBD patients will process positive and negative
expressions from whole faces (i.e., configural) less accurately than LBD and HC
participants, supporting the RHH. In contrast, it is predicted that LBD patients
will process positively valenced partial faces (i.e., featural) less accurately than
RBD and HC participants, supporting the VH, with no hemispheric difference in
how negatively valenced partial faces are processed, contrary to existing theories
of emotion lateralization.

METHOD
Participants
Thirty-four patients who had sustained a unilateral stroke were recruited from the
Austin Health or Western Health services; 17 had suffered RBD and 17 had
suffered LBD. All stroke participants were medically stable and had experienced
a cerebral stroke between 3 and 14 months before testing (chronicity range: RBD = 101–
334 days; LBD = 112–418 days). In addition, 34 HC participants were recruited
via the social networks of the researcher as well as through advertising at the
Western Hospital. Each HC participant was demographically matched for age
and education to one of the stroke participants (please refer to Abbott, Wijeratne,
Hughes, Perre, and Lindell (submitted) for detailed demographic information for
the stroke and control groups). Briefly, the three groups did not differ in age
(F(2, 65) = .58, p = .565, η2 = .02), gender (χ2(2, N = 68) = 4.92, p = .085) or
years of education (F(2, 65) = 1.58, p = .215, η2 = .05).
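
As an aside for readers implementing a similar design, the following is a minimal sketch of one-to-one demographic matching, pairing each patient with the closest unmatched control on age and years of education. The greedy nearest-neighbour rule, function names, and example values are illustrative assumptions, not the authors' documented procedure.

```python
import numpy as np

def match_controls(patients, controls):
    """Greedily pair each patient with the closest unmatched control.

    patients, controls: arrays of shape (n, 2) holding (age, years of education).
    Returns a list of (patient_index, control_index) pairs.
    """
    unmatched = list(range(len(controls)))
    pairs = []
    for i, p in enumerate(patients):
        # Euclidean distance in (age, education) space
        dists = [np.linalg.norm(p - controls[j]) for j in unmatched]
        j = unmatched.pop(int(np.argmin(dists)))
        pairs.append((i, j))
    return pairs

# Illustrative values only
patients = np.array([[67, 12], [58, 10], [72, 16]])
controls = np.array([[70, 15], [66, 12], [59, 11]])
print(match_controls(patients, controls))  # e.g., [(0, 1), (1, 2), (2, 0)]
```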
Stroke patients with severe residual cognitive deficits as indicated by scores
lower than 24/30 on the mini mental status examination (MMSE; Folstein,
Folstein, & McHugh, 1975) were excluded. However, three stroke patients (two
RBD and one LBD) were included despite scores of 17, 20 and 22 (beneath the
24/30 cut-off), because their low scores were due to specific residual neurolo-
gical impairments and/or English being their second language, which prevented
them from attempting some of the MMSE tasks. Physical difficulties and low
English literacy are known to lower a patient’s performance on the MMSE, even
though they may be cognitively intact (Folstein et al., 1975), thus the inclusion
of these three patients was deemed appropriate. Mean values on the MMSE for
the three groups differed significantly, F(2, 65) = 5.7, p = .005, η² = .15; the
HC group scored higher than the RBD (p = .025) and LBD (p = .016) groups.
There was no difference between the RBD and LBD groups (p = .988). Further
participant details (i.e., group means with their 95% CIs) are detailed in Abbott
et al. (submitted).
Further selection criteria for all participants included: right-hand dominance
(Edinburgh test of handedness score >50; Oldfield, 1971), adequate (assisted)
vision to see A4 pictures, the ability to recognize unfamiliar faces as determined
by the short form Benton facial recognition task (Levin, Hamsher, & Benton,
1975), and no previous and/or secondary psychiatric diagnosis, neurological
disease or habitual drug or alcohol use as per the participants’ answers during an
initial interview. The DASS21 was also retrospectively used to test levels of
depression, anxiety and stress during the preceding week (Lovibond &
Lovibond, 1995). Please refer to Abbott et al. (submitted) for the mean
differences and 95% CIs between the groups for the three DASS scales (anxiety,
depression, stress), the Edinburgh handedness test, and the Benton facial
recognition test.
For each stroke participant, two senior neurologists identified the brain
regions involved, as well as the size of the lesion (please refer to Tables 2 and 3
in Abbott et al., submitted). Brain scans (CT and/or MRI) were obtained from the
Austin or Western hospitals. The scans were used to measure the lesion site and
volume (cm³) for 32/34 patients. Each patient's lesion was classified into one of
three categories (small, medium, large) by two senior neurologists. Post-hoc
analysis showed that small lesions measured less than 2 cm³, medium lesions
measured between 2 cm³ and 30 cm³, and large lesions measured greater than
30 cm³. Lesion size could not be measured for the remaining two patients (one
RBD and one LBD) because their scans were performed outside the Austin and
Western hospitals and were unavailable; however, their imaging reports were
available and were used to identify lesion site.
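
A minimal sketch of the post-hoc size classification described above, assuming hard boundaries at exactly 2 cm³ and 30 cm³ (the boundary handling is an assumption for illustration):

```python
# Post-hoc lesion-size categories reported above: small < 2 cm^3,
# medium 2-30 cm^3, large > 30 cm^3. Handling of volumes falling
# exactly on 2 or 30 cm^3 is an assumption for illustration.
def classify_lesion(volume_cm3: float) -> str:
    if volume_cm3 < 2:
        return "small"
    if volume_cm3 <= 30:
        return "medium"
    return "large"

for v in (1.5, 12.0, 45.0):
    print(v, classify_lesion(v))  # -> small, medium, large
```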
Finally, 32 of the stroke patients sustained localized cerebral infarctions, and
one LBD and one RBD sustained a localized cerebral haemorrhage. Refer to Tables
2 (RBD patients) and 3 (LBD patients) in Abbott et al. (submitted) for further
information about the stroke participants’ pathological and clinical profiles.

Materials
Participants were assessed on an affective face discrimination task that measured
configural and featural information processing (Collishaw & Hole, 2000). The
discrimination task elicited featural processing through the use of partial faces
(only the eyes or mouth; categories B, D and E within Figure 1) and configural
(holistic) processing via whole faces (categories A and C within Figure 1).
Thirty-six of Ekman and Friesen’s (1976) pictures of facial affect were
selected (12 for happiness, 12 for sadness and 12 for neutral) because they had
the highest percentage of judgements of the intended emotion, based on the data
published by Ekman and Friesen (1976). For each category, 50% of the photos
were male models and 50% were female models. All facial stimuli were
presented as black and white photographs attached to A4 white cards. The
clothing and hair of the facial stimuli were removed as far as possible so that
only the facial features could be used as recognition cues. The same model was
used in each face pair. Each of the whole faces measured approximately 8 cm
(height) × 6.5 cm (width). The partial faces measured about 3 cm (height) × 6.5
cm (width). The face pairs were vertically arranged to minimize possible
confounding effects of hemispatial neglect. The number of correct responses was
recorded for each task.
For each trial, participants were presented with one face pair and asked to
distinguish whether the faces expressed the same or different emotion(s) by
verbalizing or pointing to their response from a card that listed the two response
options vertically. Participants were assessed over two blocks of 30 trials:
the positive emotion block utilized happy and neutral stimuli, whereas the
negative emotion block utilized sad and neutral stimuli.

[Figure 1. Example stimuli for each category (A–E) of the whole versus partial face discrimination task, positive emotion block (happiness). Face pairs: A = whole happy vs. whole happy; B = half happy vs. whole happy; C = whole happy vs. whole neutral; D = half neutral vs. whole happy; E = whole neutral vs. half happy. The stimuli are reproduced from Ekman and Friesen's (1976) Pictures of Facial Affect, with permission from Paul Ekman Ph.D/Paul Ekman Group LLC.]

Each block was preceded
by the presentation of five familiarization trials (one trial for each discrimination
category), with feedback provided. The sequence of the positive and negative
blocks was counterbalanced between participants. Within each block there
were five categories of face pairs (see Figure 1). For each block there were 12
whole face stimuli trials and 18 partial face stimuli trials. A higher proportion of
partial face trials was used because two partial face stimuli categories were
required for the discrimination of different emotions: partial neutral and whole
emotional faces in category D and partial emotive and whole neutral faces in
category E. Within categories B, D and E, 50% of the partial face stimuli
consisted of eyes and 50% consisted of mouths. A rest interval of approximately
two minutes was provided between the two blocks.
Five versions of each block were created to counterbalance the models
presented within each of the five categories. This was to control for the possible
impact of different models (identities) on facial emotion perception. The five
categories were independently randomized within each block for each particip-
ant. Moreover, stimulus order was randomized for each participant.
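
To make the counterbalancing scheme concrete, the sketch below generates one plausible session: block order alternates between participants, one of the five stimulus versions is assigned, and trial order is randomized independently within each block. The parity-based assignment rules and names are illustrative assumptions, not the authors' exact protocol.

```python
import random

CATEGORIES = ["A", "B", "C", "D", "E"]  # five face-pair categories (Figure 1)

def build_session(participant_id: int, trials_per_block: int = 30):
    rng = random.Random(participant_id)      # reproducible per participant
    blocks = ["positive", "negative"]
    if participant_id % 2:                   # counterbalance block sequence
        blocks.reverse()
    version = participant_id % 5             # one of five model-counterbalanced versions
    session = []
    for block in blocks:
        # 6 trials per category = 12 whole-face (A, C) + 18 partial-face (B, D, E) trials
        trials = [(block, cat, version)
                  for cat in CATEGORIES
                  for _ in range(trials_per_block // len(CATEGORIES))]
        rng.shuffle(trials)                  # independent randomization per block
        session.append(trials)
    return session

print(build_session(7)[0][:3])  # first three trials of participant 7's first block
```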

Procedure
All stroke and HC participants were administered identical assessments. Each
session began by detailing the information and consent form with the participant.
Participants who met the selection criteria were screened with the MMSE
(Folstein et al., 1975) to ensure they had sufficient cognitive capacity to provide
informed consent. If the participant was deemed cognitively able and had signed an
informed consent form, the remaining tests were administered in the following
order: MMSE, interview (demographic and clinical questions), Edinburgh
Handedness Inventory, Benton facial recognition test (short form), whole versus
partial face discrimination task, DASS21. There was no time limit for the
stimulus exposure during the discrimination task, and each trial ended after the
participant responded. There was a rest interval of approximately two minutes
between the different tests.

RESULTS
Whole versus partial face discrimination
The influences of emotional valence (positive, negative), stimulus type (whole,
partial), participant group (RBD, LBD, HC) and participant gender (male, female)
on discrimination accuracy were explored using a mixed-design analysis of
variance (ANOVA). There was a significant interaction between stimulus type and
participant group (see Table 1 and Figure 2). Two post-hoc one-way between-groups ANOVAs examined this interaction.
TABLE 1
Statistics from the mixed ANOVAs for the discrimination of positive and negative faces

Whole faces vs. partial faces:

| Variable(s) examined | df | F | p | Partial η² |
|---|---|---|---|---|
| Valence | 1, 62 | 91.60 | <.001 | .60 |
| Valence × Group | 2, 62 | 1.16 | .320 | .04 |
| Valence × Gender | 1, 62 | 0.57 | .455 | .01 |
| Valence × Gender × Group | 2, 62 | 3.65 | .032 | .11 |
| Stimulus type | 1, 62 | 74.12 | <.001 | .55 |
| Stimulus type × Group | 2, 62 | 3.32 | .043 | .10 |
| Stimulus type × Gender | 1, 62 | 1.07 | .305 | .02 |
| Stimulus type × Gender × Group | 2, 62 | 1.64 | .201 | .05 |
| Valence × Stimulus type | 1, 62 | 1.64 | .206 | .03 |
| Valence × Stimulus type × Group | 2, 62 | 1.22 | .303 | .04 |
| Valence × Stimulus type × Gender | 1, 62 | .01 | .927 | <.01 |
| Valence × Stimulus type × Group × Gender | 2, 62 | .91 | .409 | .03 |
| Group | 2, 62 | 6.58 | .003 | .18 |
| Gender | 1, 62 | .24 | .626 | <.01 |
| Group × Gender | 2, 62 | .40 | .670 | .01 |

Partial faces: Eyes vs. mouth:

| Variable(s) examined | df | F | p | Partial η² |
|---|---|---|---|---|
| Valence | 1, 62 | 57.89 | <.001 | .48 |
| Valence × Group | 2, 62 | .30 | .739 | .01 |
| Valence × Gender | 1, 62 | .44 | .511 | .01 |
| Valence × Gender × Group | 2, 62 | 4.61 | .014 | .13 |
| Stimulus type | 1, 62 | .02 | .904 | <.01 |
| Stimulus type × Group | 2, 62 | 1.89 | .160 | .06 |
| Stimulus type × Gender | 1, 62 | .02 | .904 | <.01 |
| Stimulus type × Gender × Group | 2, 62 | .22 | .804 | .01 |
| Valence × Stimulus type | 1, 62 | 58.81 | <.001 | .49 |
| Valence × Stimulus type × Group | 2, 62 | 1.68 | .195 | .05 |
| Valence × Stimulus type × Gender | 1, 62 | .001 | .977 | <.01 |
| Valence × Stimulus type × Group × Gender | 2, 62 | .07 | .930 | <.01 |
| Group | 2, 62 | 5.84 | .005 | .16 |
| Gender | 1, 62 | .01 | .914 | <.01 |
| Group × Gender | 2, 62 | 1.03 | .363 | .03 |

Note: In the eyes vs. mouth analysis, "Stimulus type" refers to the partial stimulus type (eyes, mouth).
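
The effect sizes in Table 1 behave as partial eta-squared values; as a consistency check (a standard ANOVA identity, not a formula given in the paper), each can be recovered from its F ratio and degrees of freedom:

```latex
\eta_p^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}},
\qquad \text{e.g., Valence: } \frac{91.60 \times 1}{91.60 \times 1 + 62} \approx .60 .
```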

[Figure 2. Mean % correct with 95% CIs for the discrimination of affective faces as a function of participant group (LBD = left brain-damaged, RBD = right brain-damaged, HC = healthy control), emotional valence (positive: happy; negative: sad), and processing strategy (configural, featural).]
A large difference (η² = .14) between participant groups was found in response to whole face stimuli [F(2, 65) = 5.26,
p = .008]. The HC group (M = 91.7, SD = 6.2) was the most accurate in
discriminating facial expressions based on whole faces, followed by the LBD
group (M = 89.2, SD = 4.7) and the RBD group (M = 84.8, SD = 10.3). Tukey HSD
post-hoc comparisons indicated that the HC group was significantly (p = .005)
more accurate than the RBD group. The LBD group did not significantly differ
from the RBD (p = .176) or HC (p = .482) group. A large (η2 = .17) difference was
also found between the three participant groups in response to partial face stimuli
[F(2, 65) = 6.51, p = .003]. Again, the HC group was the most accurate (M = 84.6,
SD = 6.1), followed by the RBD group (M = 78.4, SD = 9.7), with the LBD group
the least accurate (M = 76.5, SD = 10.7) at processing facial emotion based on partial
faces. Tukey’s HSD post-hoc comparisons found the HC group was significantly
more accurate than the LBD (p = .005) and RBD (p = .039) groups; the RBD and
LBD groups did not differ (p = .774).
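
For illustration, a hedged sketch of this analysis pipeline in Python using the pingouin package follows, simplified to one within-subject factor (stimulus type) and one between-subject factor (group); the full design also crossed valence and participant gender. The column names and long-format data file are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x condition cell,
# with columns participant, group (RBD/LBD/HC), stimulus (whole/partial),
# and accuracy (% correct).
df = pd.read_csv("discrimination_accuracy.csv")

# Mixed-design ANOVA: stimulus type (within) x participant group (between)
aov = pg.mixed_anova(data=df, dv="accuracy", within="stimulus",
                     subject="participant", between="group")
print(aov)

# Post-hoc: one-way between-groups ANOVA per stimulus type,
# followed by Tukey HSD pairwise comparisons across groups
for stim, sub in df.groupby("stimulus"):
    print(stim)
    print(pg.anova(data=sub, dv="accuracy", between="group", effsize="n2"))
    print(pg.pairwise_tukey(data=sub, dv="accuracy", between="group"))
```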
There was also a significant three-way interaction between valence, group and
gender (see Table 1). A post-hoc mixed ANOVA examined this interaction. As
seen in Table 2, positive facial expressions were discriminated more accurately
than negative expressions for female RBD, male LBD, female LBD, male HC
and female HC participants. However, there was no significant difference by
valence in the discrimination of facial expressions for male RBD participants. All
other interactions proved non-significant (see Table 1).

Partial face discrimination: Eyes versus mouth


The influences of emotional valence (positive, negative), partial stimulus type
(eyes, mouth), participant group (RBD, LBD, HC) and participant gender (male,
female) on discrimination accuracy from partial stimuli (i.e., featural processing)
were explored using a mixed-design ANOVA. A significant interaction was
found between valence and partial stimulus type (see Table 1 and Figure 3). Two
post-hoc one-way repeated measures ANOVAs examined this interaction.
Participants were more accurate in discriminating positive emotions from the
mouth (M = 94.0, SD = 10.6) than from the eyes (M = 79.9, SD = 14.4) [F(1, 67)
= 45.76, p < .001, partial η2 = .41]. In contrast, participants were more accurate
in discriminating negative facial expressions from the eyes (M = 82.7, SD = 16.3)
than the mouth (M = 67.5, SD = 16.2) [F(1, 67) = 38.42, p < .001, partial η2
= .36].
In addition, there was a significant three-way interaction between valence,
participant group and gender (see Table 1). A post-hoc mixed ANOVA explored
this interaction. As seen in Table 2, positive partial affective faces were
discriminated more accurately than negative partial faces for female RBD,
male LBD, female LBD, male HC and female HC participants. However, there
was no significant difference by valence in discriminating partial facial
expressions for male RBD participants. All other interactions proved non-significant (see Table 1).
TABLE 2
Statistics for the discrimination of positive and negative faces by participant group and gender

Whole faces vs. partial faces:

| Group | Gender | Positive emotion: M % correct (SD) | Negative emotion: M % correct (SD) | F | p | Partial η² |
|---|---|---|---|---|---|---|
| RBD | Male | 86.87 (10.14) | 78.41 (13.09) | 2.93 | .118 | .23 |
| RBD | Female | 90.74 (4.53) | 68.75 (18.77) | 12.11 | .018 | .71 |
| LBD | Male | 91.67 (6.45) | 74.42 (9.14) | 47.90 | <.001 | .81 |
| LBD | Female | 89.44 (8.87) | 75.27 (9.29) | 18.77 | .012 | .82 |
| HC | Male | 94.25 (2.93) | 81.05 (7.50) | 48.68 | <.001 | .79 |
| HC | Female | 93.19 (5.19) | 83.82 (8.57) | 19.42 | <.001 | .52 |

Partial faces: Eyes vs. mouth:

| Group | Gender | Positive emotion: M % correct (SD) | Negative emotion: M % correct (SD) | F | p | Partial η² |
|---|---|---|---|---|---|---|
| RBD | Male | 81.32 (10.02) | 77.28 (9.11) | 1.32 | .278 | .12 |
| RBD | Female | 87.04 (8.36) | 66.67 (21.08) | 8.18 | .035 | .62 |
| LBD | Male | 84.72 (11.64) | 68.52 (13.47) | 11.59 | .006 | .51 |
| LBD | Female | 82.22 (16.39) | 68.89 (13.95) | 10.29 | .033 | .72 |
| HC | Male | 89.68 (4.80) | 74.61 (10.83) | 27.44 | <.001 | .68 |
| HC | Female | 90.56 (6.00) | 82.22 (8.38) | 13.57 | .002 | .42 |

[Figure 3. Mean % correct with 95% CIs for the discrimination of affective partial faces (i.e., featural processing) as a function of participant group (LBD = left brain-damaged, RBD = right brain-damaged, HC = healthy control), emotional valence (positive: happy; negative: sad), and partial stimulus type (mouth, eyes).]


DISCUSSION
This is the first study to investigate the processing strategies (configural and
featural) that underlie the hemispheric lateralization of facial expression
perception according to valence in RBD and LBD patients. Overall, there were
no group differences in the processing strategies used to perceive facial
expressions according to valence. However, across emotions the RBD group
was less accurate than the HC group in the discrimination of whole face pairs
(configural processing), while both the RBD and LBD groups were less accurate
than the HC group in discriminating partial faces (featural processing). These
results suggest that affective face perception on the basis of both configural and
featural information is impaired following RBD, indicating that the right
hemisphere uses both processes to perceive facial expressions. On the other
hand, given that the LBD group was impaired relative to HCs only when processing
faces from featural information, the results suggest that the left hemisphere
primarily uses featural information to perceive faces. The finding that the right
hemisphere processes facial emotions using configural information is consistent
with much of the past literature with normal individuals and patients with
prosopagnosia as a result of RBD (e.g., Bourne et al., 2009; Uttner et al., 2002).
However, the present results also suggest that both hemispheres use featural
information to discriminate facial emotion. This finding contrasts with much of
the research which suggests that the left hemisphere is primarily involved in
featural processing (e.g., Bourne et al., 2009; Passarotti et al., 2007; Rossion
et al., 2003; Uttner et al., 2002; Yin, 1970). However, a recent fMRI study
measuring neural adaptation found the right fusiform gyrus capable of processing
face parts as well as wholes (Harris & Aguirre, 2010), suggesting that the right
fusiform area processes both configural and featural face information. In contrast,
the left fusiform area only adapted in response to part-based facial information
(Harris & Aguirre, 2010), indicating that the left fusiform area primarily uses
featural information to process faces. Rather than suggesting that these findings
conflict with earlier findings (e.g., Uttner et al., 2002; Yin, 1970), Harris and Aguirre (2010)
argue that although the right fusiform area relies heavily on configural
processing, it is also capable of processing featural information from faces.
Their conclusions align with the current findings in suggesting that the right
hemisphere uses both configural and featural processing strategies, whereas the
left hemisphere preferentially processes faces featurally.
However, the above interpretations are limited by the non-significant
differences between RBD and LBD groups for the discrimination of facial
expressions from configural and featural information. Nevertheless, the 95%
confidence intervals (CIs) around the mean difference for affective face
discrimination accuracy from whole (configural) faces were in the direction of
the RBD group being less accurate than the LBD group. On the other hand, the
95% CIs for the RBD versus LBD difference for partial face discrimination
accuracy suggest that the LBD and RBD groups are fairly similar in their ability
to process featural information. Overall these interpretations align with our
suggestion that the right hemisphere primarily processes configural, and both
hemispheres process featural, affective face information. However, the RBD
versus LBD comparisons were not statistically significant. These inconclusive
results could reflect heterogeneity in the regions damaged in the patient groups.
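
As an illustration of the kind of interval estimate referred to here, the sketch below computes a Welch-style 95% CI for the difference between two group means. The raw scores are simulated from the reported whole-face means and SDs, and the authors' exact CI procedure is not specified, so this is an assumption-laden example rather than a reanalysis.

```python
import numpy as np
from scipy import stats

def mean_diff_ci(a, b, confidence=0.95):
    """95% CI for mean(a) - mean(b), unequal variances (Welch)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    diff, se = a.mean() - b.mean(), np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom
    dof = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    t = stats.t.ppf((1 + confidence) / 2, dof)
    return diff - t * se, diff + t * se

# Simulated scores using the reported whole-face group means and SDs (n = 17 each)
rbd = np.random.default_rng(1).normal(84.8, 10.3, 17)
lbd = np.random.default_rng(2).normal(89.2, 4.7, 17)
print(mean_diff_ci(rbd, lbd))  # interval for RBD minus LBD accuracy
```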
Research in support of hemispheric differences for configural versus featural
face processing generally involves activation of the fusiform area (occipito-
temporal gyrus) in imaging research with normal individuals (e.g., Eimer, 2000;
Rossion et al., 2003), or damage to the posterior cerebral hemispheres of the
brain (inclusive of the fusiform gyrus) in clinical patients (e.g., Bliem, 1998; Yin,
1970), indicating involvement of the posterior cerebral hemispheres in configural
and featural face processing. The present sample was limited to unilateral
cerebral stroke patients tested between 3 and 14 months post-injury, with both
male and female patients recruited in each group. Yet, the brain-damaged
participants sustained damage to a range of brain regions [for details please refer
to Tables 2 and 3 in Abbott et al. (submitted)]. In particular, only three RBD and
two LBD patients solely sustained temporal and/or occipital lobe damage. The
remaining patients sustained frontal and/or parietal lobe damage (RBD: n = 10;
LBD: n = 11), or a mixture (e.g., parietal and temporal cerebral damage; RBD: n
= 4; LBD: n = 4) of anterior and posterior regions. Thus, the RBD and LBD
groups cannot be split into neat anterior and posterior sub-groups, making
statistical analysis of anterior versus posterior cerebral unilateral damage
impossible. Only the posterior brain regions, particularly the fusiform area,
appear specifically involved in processing faces using configural and featural
information (e.g., Eimer, 2000; Eimer & McCarthy, 1999; Rossion et al., 2003;
Yin, 1970). Thus, the observed non-significant hemispheric differences may be
due to increased variability in the strategies used to discriminate facial
expressions as a result of heterogeneity in the brain regions damaged within
the patient groups.
The regions of intra-hemisphere brain damage may also have impacted the
interactions between participant group, emotional valence and participant gender.
Male and female LBD and HC participants and female RBD participants
discriminated positive faces more accurately than negative faces from both whole
and partial information, irrespective of the partial stimulus type (eyes versus
mouth). These findings are compatible with the consistent clinical finding that
positive emotions are perceived more accurately than negative emotions (e.g.,
Braun, Traue, Frisch, Deighton, & Kessler, 2005; Charbonneau et al., 2003;
Kucharska-Pietura et al., 2003). However, male RBD participants’ performance
did not differ in response to positive and negative whole and partial facial
expressions or partial facial expressions from the eyes and mouth. It is possible that
the particular regions of brain damage within the male RBD group may have
contributed to the non-significant difference in the discrimination of positive and
negative faces. Of the 11 male RBD patients, six had parietal and/or frontal
damage, one had damage to the frontal, parietal and temporal lobes, one had
incurred temporal and parietal damage, and three had sustained occipital and/or
temporal damage. fMRI research has demonstrated complex, region-specific
patterns of activation according to valence in response to facial expressions for
males (e.g., Lee et al., 2002). For example, bilateral frontal and left parietal
activation is observed when normal male participants view happy facial stimuli,
whereas bilateral frontal, right temporal and lentiform areas activate in response to
sad faces (Lee et al., 2002). Given that 7 of the 11 male RBD patients sustained
frontal damage, combined with fMRI studies indicating frontal activation in
response to happy and sad faces, it is possible that the large number of male RBD
patients with frontal damage led to the non-significant valence result.
There was also an interaction between valence and processing different types
of partial facial information (eyes versus mouth). All participants processed
positive partial faces more accurately from the mouth than eyes, whereas
negative partial faces produced the opposite pattern. These results suggest that a
smiling mouth distinguishes a happy face to a greater extent than creases in the
corner of the eyes. In contrast, changes to the eyes distinguish a sad face more
than changes to the mouth. These findings are consistent with the normal
population literature (e.g., Calvo, Fernandez-Martin, & Nummenmaa, 2012;
Eisenbarth & Alpers, 2011; Stephan et al., 2006). For example, an eye-tracking
study found that normal individuals fixate on different facial regions depending
on the emotion shown (Eisenbarth & Alpers, 2011): participants initially fixate
most frequently on the eye region for sad faces, and fixate the mouth for longer
periods while evaluating happy faces. Thus, particular facial regions appear
informative for particular emotions, and consequently people know to look at
different regions for different emotions.
Overall, this clinical study did not find a three-way interaction between
participant group (RBD, LBD, HC), emotional valence (positive versus negative)
and processing information cues (configural versus featural) for facial emotion
discrimination. The present results instead suggest that configural and featural
information processing strategies do not directly underlie how the right and left
hemispheres are lateralized to process facial emotions. Thus, neither the RHH
nor the VH was supported. It is surprising that there are no other clinical
publications on strategy use and facial expression lateralization. This raises the
possibility that past clinical studies may have investigated whether an interaction
exists between face perception according to valence and processing strategy but,
for reasons similar to those affecting this study (i.e., heterogeneity in the brain
regions damaged), obtained complicated results that were not published. Future
clinical research is thus required to examine whether configural and featural
processing strategies are involved in the lateralization of emotion perception
according to valence for patients with damage specifically to the fusiform area.

CONCLUSIONS
The present study investigated whether configural and featural processing
strategies underlie the hemispheric specialization of facial expression perception
according to valence, finding no evidence of between-group differences.
However, there were valence-based differences for the type of partial stimulus:
positive partial faces were perceived more accurately from the mouth, whilst
negative partial faces were recognized more accurately from the eyes. The
results also showed that across emotional valence, the RBD group was less
accurate than the HC group in the discrimination of whole face pairs
(configural), while the RBD and LBD groups were less accurate than the HC
group in the discrimination of partial face pairs (featural). The results thus
indicate that the right hemisphere processes facial expressions from both
configural and featural information, whereas the left hemisphere relies more
heavily on featural information. The lack of difference in the discrimination of
affective faces from configural or featural information for LBD and RBD may
stem from heterogeneity in the intra-hemisphere regions damaged. Further
research that specifically recruits patients with posterior RBD and LBD is
required to confirm this speculation, and assess the impact of fusiform damage
on configural and featural information processing strategies.
Manuscript received 3 July 2013
Revised manuscript received 29 October 2013
Revised manuscript accepted 30 October 2013
First published online 9 December 2013

REFERENCES
Abbott, J. D., Cumming, G., Fidler, F., & Lindell, A. K. (2013). The perception of positive and
negative facial expression in unilateral brain-damaged patients: A meta-analysis. Laterality, 18,
437–459. doi:10.1080/1357650X.2012.703206
Abbott, J. D., Wijeratne, T., Hughes, A., Perre, D., & Lindell, A. K. (submitted). The perception of
positive and negative facial expressions by unilateral stroke patients. Brain and Cognition.
Adolphs, R., Jansari, A., & Tranel, D. (2001). Hemispheric perception of emotional valence from
facial expressions. Neuropsychology, 15, 516–524. doi:10.1037/0894-4105.15.4.516
Bliem, H. R. (1998). Experimental and clinical exploration of a possible neural subsystem underlying
configurational face processing. Brain and Cognition, 37, 16–18.
Blonder, L. X., Bowers, D., & Heilman, K. M. (1991). The role of the right hemisphere in emotional
communication. Brain, 114, 1115–1127. doi:10.1093/brain/114.3.1115
Borod, J. C., Bloom, R. L., Brickman, A. M., Nakhutina, L., & Curko, E. A. (2002). Emotional
processing deficits in individuals with unilateral brain damage. Applied Neuropsychology, 9,
23–36. doi:10.1207/S15324826AN0901_4
Borod, J. C., Koff, E., Perlman Lorch, M., & Nicholas, M. (1986). The expression and perception of
facial emotion in brain-damaged patients. Neuropsychologia, 24, 169–180. doi:10.1016/0028-
3932(86)90050-3
Borod, J. C., Welkowitz, J., Alpert, M., Brozgold, A. Z., Martin, C., Peselow, E., & Diller, L. (1990).
Parameters of emotional processing in neuropsychiatric disorders: Conceptual issues and a
battery of tests. Journal of Communication Disorders, 23, 247–271. doi:10.1016/0021-9924(90)
90003-H
Bourne, V. J. (2011). Examining the effects of inversion on lateralisation for processing facial
emotion. Cortex, 47, 690–695. doi:10.1016/j.cortex.2010.04.003
Bourne, V. J., Vladeanu, M., & Hole, G. J. (2009). Lateralised repetition priming for featurally and
configurally manipulated familiar faces: Evidence for differentially lateralised processing
mechanisms. Laterality, 14, 287–299. doi:10.1080/13576500802383709
Bowers, D., Bauer, R. M., Coslett, H. B., & Heilman, K. M. (1985). Processing of faces by patients
with unilateral hemisphere lesions. I. Dissociation between judgments of facial affect and facial
identity. Brain and Cognition, 4, 258–272. doi:10.1016/0278-2626(85)90020-X
Braun, M., Traue, H. C., Frisch, S., Deighton, R. M., & Kessler, H. (2005). Emotion recognition in
stroke patients with left and right hemispheric lesion: Results with a new instrument-the FEEL
test. Brain and Cognition, 58, 193–201. doi:10.1016/j.bandc.2004.11.003
Calvo, M. G., Fernandez-Martin, A., & Nummenmaa, L. (2012). Perceptual, categorical, and affective
processing of ambiguous smiling facial expressions. Cognition, 125, 373–393. doi:10.1016/j.
cognition.2012.07.021
Canli, T., Desmond, J. E., Zhao, Z., Glover, G., & Gabrieli, J. D. (1998). Hemispheric asymmetry for
emotional stimuli detected with fMRI. Neuroreport, 9, 3233–3239. doi:10.1097/00001756-19981
0050-00019
Chambon, V., Baudouin, J. Y., & Franck, N. (2006). The role of configural information in facial
emotion recognition in schizophrenia. Neuropsychologia, 44, 2437–2444. doi:10.1016/j.
neuropsychologia.2006.04.008
Charbonneau, S., Scherzer, B. P., Aspirot, D., & Cohen, H. (2003). Perception and production of
facial and prosodic emotions by chronic CVA patients. Neuropsychologia, 41, 605–613.
doi:10.1016/S0028-3932(02)00202-6
Christman, S. D., & Hackworth, M. D. (1993). Equivalent perceptual asymmetries for free viewing of
positive and negative emotional expressions in chimeric faces. Neuropsychologia, 31, 621–624.
doi:10.1016/0028-3932(93)90056-6
Collishaw, S. M., & Hole, G. J. (2000). Featural and configurational processes in the recognition of
faces of different familiarity. Perception, 29, 893–909. doi:10.1068/p2949
Davidson, R. J., Mednick, D., Moss, E., Saron, C., & Schaffer, C. E. (1987). Ratings of emotion in
faces are influenced by the visual field to which stimuli are presented. Brain and Cognition, 6,
403–411. doi:10.1016/0278-2626(87)90136-9
Demaree, H. A., Everhart, D. E., Youngstrom, E. A., & Harrison, D. W. (2005). Brain lateralization of
emotional processing: Historical roots and a future incorporating “dominance”. Behavioral and
Cognitive Neuroscience Reviews, 4, 3–20. doi:10.1177/1534582305276837
Dolcos, F., LaBar, K. S., & Cabeza, R. (2004). Dissociable effects of arousal and valence on
prefrontal activity indexing emotional evaluation and subsequent memory: An event-related fMRI
study. Neuroimage, 23, 64–74. doi:10.1016/j.neuroimage.2004.05.015
Drebing, C. E., Federman, E. J., Edington, P., & Terzian, M. A. (1997). Affect identification bias
demonstrated with individual chimeric faces. Perceptual and Motor Skills, 85, 1099–1104.
doi:10.2466/pms.1997.85.3.1099
Eimer, M. (2000). The face-specific N170 component reflects late stages in the structural encoding of
faces. Neuroreport, 11, 2319–2324. doi:10.1097/00001756-200007140-00050
Eimer, M., & McCarthy, R. A. (1999). Prosopagnosia and structural encoding of faces: Evidence from
event-related potentials. Neuroreport, 10, 255–259. doi:10.1097/00001756-199902050-00010
Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial
expressions. Emotion, 11, 860–865. doi:10.1037/a0022758
Ekman, P., & Friesen, W. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists
Press.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behaviour: Categories, origins,
usage, and coding. Semiotica, 1, 49–98.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental state”. A practical method for
grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–
198. doi:10.1016/0022-3956(75)90026-6
Harris, A., & Aguirre, G. K. (2010). Neural tuning for face wholes and parts in human fusiform gyrus
revealed by FMRI adaptation. Journal of Neurophysiology, 104, 336–345. doi:10.1152/jn.
00626.2009
Jansari, A., Tranel, D., & Adolphs, R. (2000). A valence-specific lateral bias for discriminating
emotional facial expressions in free field. Cognition & Emotion, 14, 341–353. doi:10.1080/026
999300378860
Kucharska-Pietura, K., Phillips, M. L., Gernand, W., & David, A. S. (2003). Perception of emotions
from faces and voices following unilateral brain damage. Neuropsychologia, 41, 1082–1090.
doi:10.1016/S0028-3932(02)00294-4
Lee, T. M., Liu, H. L., Hoosain, R., Liao, W. T., Wu, C. T., Yuen, K. S., … Gao, J. H. (2002). Gender
differences in neural correlates of recognition of happy and sad faces in humans assessed by
functional magnetic resonance imaging. Neuroscience Letters, 333, 13–16. doi:10.1016/S0304-
3940(02)00965-5
Leehey, S., Carey, S., Diamond, R., & Cahn, A. (1978). Upright and inverted faces: The right
hemisphere knows the difference. Cortex, 14, 411–419. doi:10.1016/S0010-9452(78)80067-7
Levin, H. S., Hamsher, K. de S., & Benton, A. L. (1975). A short form of the test of facial recognition
for clinical use. Journal of Psychology, 91, 223–228. doi:10.1080/00223980.1975.9923946
Lovibond, P. F., & Lovibond, S. H. (1995). The structure of negative emotional states: Comparison of
the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories.
Behaviour Research and Therapy, 33, 335–343. doi:10.1016/0005-7967(94)00075-U
Mandal, M. K., Borod, J. C., Asthana, H. S., Mohanty, A., Mohanty, S., & Koff, E. (1999). Effects of
lesion variables and emotion type on the perception of facial emotion. Journal of Nervous and
Mental Disease, 187, 603–609. doi:10.1097/00005053-199910000-00003
Mehrabian, A., & Ferris, S. R. (1967). Inference of attitudes from nonverbal communication in two
channels. Journal of Consulting Psychology, 31, 248–252. doi:10.1037/h0024648
Mehrabian, A., & Friar, J. T. (1969). Encoding of attitude by a seated communicator via posture and
position cues. Journal of Consulting and Clinical Psychology, 33, 330–336. doi:10.1037/h00
27576
Narumoto, J., Okada, T., Sadato, N., Fukui, K., & Yonekura, Y. (2001). Attention to emotion
modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain Research, 12,
225–231. doi:10.1016/S0926-6410(01)00053-2
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory.
Neuropsychologia, 9, 97–113.
Passarotti, A. M., Smith, J., DeLano, M., & Huang, J. (2007). Developmental differences in the neural
bases of the face inversion effect show progressive tuning of face-selective regions to the upright
orientation. Neuroimage, 34, 1708–1722. doi:10.1016/j.neuroimage.2006.07.045
Prkachin, G. C. (2003). The effects of orientation on detection and identification of facial expressions
of emotion. British Journal of Psychology, 94, 45–62. doi:10.1348/000712603762842093
Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres
to the perception of happy and sad faces. Neuropsychologia, 19, 609–613. doi:10.1016/0028-3932
(81)90030-0
Rhodes, G. (1993). Configural coding, expertise, and the right hemisphere advantage for face
recognition. Brain and Cognition, 22, 19–41. doi:10.1006/brcg.1993.1022
Rossion, B., Caldara, R., Seghier, M., Schuller, A. M., Lazeyras, F., & Mayer, E. (2003). A network of
occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for
normal face processing. Brain, 126, 2381–2395. doi:10.1093/brain/awg241
Rossion, B., Delvenne, J. F., Debatisse, D., Goffaux, V., Bruyer, R., Crommelinck, M., & Guerit, J. M.
(1999). Spatio-temporal localization of the face inversion effect: An event-related potentials study.
Biological Psychology, 50, 173–189. doi:10.1016/S0301-0511(99)00013-7
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity
in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research,
20, 81–91. doi:10.1016/j.cogbrainres.2004.01.008
Stephan, B. C., Breen, N., & Caine, D. (2006). The recognition of emotional expression in
prosopagnosia: Decoding whole and part faces. Journal of the International Neuropsychological
Society, 12, 884–895. doi:10.1017/S1355617706061066
Uttner, I., Bliem, H., & Danek, A. (2002). Prosopagnosia after unilateral right cerebral infarction.
Journal of Neurology, 249, 933–935. doi:10.1007/s00415-002-0710-8
Yin, R. K. (1970). Face recognition by brain-injured patients - a dissociable ability. Neuropsychologia, 8, 395–402. doi:10.1016/0028-3932(70)90036-9