Social Neuroscience

ISSN: 1747-0919 (Print) 1747-0927 (Online) Journal homepage: http://www.tandfonline.com/loi/psns20

The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: A psychophysiological study

Andreas Mojzisch, Leonhard Schilbach, Jens R. Helmert, Sebastian Pannasch, Boris M. Velichkovsky & Kai Vogeley

To cite this article: Andreas Mojzisch, Leonhard Schilbach, Jens R. Helmert, Sebastian Pannasch, Boris M. Velichkovsky & Kai Vogeley (2006) The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: A psychophysiological study, Social Neuroscience, 1:3-4, 184-195, DOI: 10.1080/17470910600985621

To link to this article: http://dx.doi.org/10.1080/17470910600985621

Published online: 24 Feb 2007.

SOCIAL NEUROSCIENCE, 2006, 1 (3-4), 184-195

The effects of self-involvement on attention, arousal, and


facial expression during social interaction with virtual
others: A psychophysiological study

Andreas Mojzisch
Georg-August-University Göttingen, Göttingen, Germany

Leonhard Schilbach

University of Cologne, Cologne, Germany

Jens R. Helmert, Sebastian Pannasch, and Boris M. Velichkovsky


Dresden University of Technology, Dresden, Germany

Kai Vogeley
University of Cologne, Cologne, Germany

Social neuroscience has shed light on the underpinnings of understanding other minds. The current study
investigated the effect of self-involvement during social interaction on attention, arousal, and facial
expression. Specifically, we sought to disentangle the effect of being personally addressed from the effect
of decoding the meaning of another person’s facial expression. To this end, eye movements, pupil size,
and facial electromyographic (EMG) activity were recorded while participants observed virtual
characters gazing at them or looking at someone else. In dynamic animations, the virtual characters
then displayed either socially relevant facial expressions (similar to those used in everyday life situations
to establish interpersonal contact) or arbitrary facial movements. The results show that attention
allocation, as assessed by eye-tracking measurements, was specifically related to self-involvement
regardless of the social meaning being conveyed. Arousal, as measured by pupil size, was primarily
related to perceiving the virtual character’s gender. In contrast, facial EMG activity was determined by
the perception of socially relevant facial expressions irrespective of whom these were directed towards.

INTRODUCTION

Understanding other minds plays a pivotal role in social interaction. It refers to the capacity to adequately ascribe mental states to others in order to explain or predict their behavior, an ability that has been coined "theory of mind" or "mentalizing" (Frith & Frith, 2003; Premack & Woodruff, 1978). To illustrate, imagine that while having dinner in a restaurant you suddenly observe an attractive individual who is smiling at you. Is s/he smiling at you because s/he intends to flirt with you? Or is s/he just smiling because s/he is amused by someone else's joke? And anyway, is s/he really smiling at you or rather at the person sitting next to you?

Correspondence should be addressed to: Andreas Mojzisch, Institute of Psychology, Georg-August-University Göttingen,
Gosslerstrasse 14, D-37073 Göttingen, Germany. E-mail: mojzisch@psych.uni-goettingen.de
This work was funded by a grant from the European Commission (NoE COGAIN).
We are grateful to Sven Graupner, Tilman Hensch, Anke Karl, Fiona Mulvey, and Stefan Schulz-Hardt for helpful advice and
support in data analyses.

© 2006 Psychology Press, an imprint of the Taylor & Francis Group, an informa business
www.psypress.com/socialneuroscience DOI: 10.1080/17470910600985621
Research integrating approaches from cognitive neuroscience, developmental and experimental social psychology has recently begun to shed light on the neural and cognitive mechanisms underlying our understanding of other minds. Specifically, distinct regions in the anterior and posterior temporal cortex and medial prefrontal cortex have been associated with distinct components of social cognition (for reviews see Amodio & Frith, 2006; Saxe, 2006).

Despite remarkable progress in the neuroscientific investigation of social cognition, little attention has been paid to the effect of personal involvement during social interaction. Yet it is one thing to be personally addressed by another person who intends to communicate with us, but quite another thing to passively observe another person intending to communicate with someone else (Grice, 1975), also differentiated as theory of mind on-line and off-line (Frith & Frith, 2003). Recently, neuroimaging studies have begun to target aspects of online interactions that require personal involvement in social interaction (e.g., Gallagher, Jack, Roepstorff, & Frith, 2002; Kampe, Frith, & Frith, 2003; King-Casas, Tomlin, Anen, Camerer, Quartz, & Montague, 2005; McCabe, Houser, Ryan, Smith, & Trouard, 2001; Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). However, so far the differential effects of personal involvement on the neural underpinnings of "mentalizing" have not been investigated systematically.

One of the few exceptions is the study by Schilbach et al. (2006). In this functional magnetic resonance imaging (fMRI) study, short animated video sequences were presented in which participants were either gazed at by virtual characters or observed these virtual characters looking at someone else. In dynamic animations, virtual characters then either showed socially relevant facial expressions similar to those used in everyday life situations to establish interpersonal contact or showed arbitrary facial movements. The resulting four experimental conditions thus established a 2×2 factorial design. The study revealed that the ventral part of the medial prefrontal cortex (vMPFC) was recruited during the decoding of the social meaning being conveyed during an encounter regardless of whether or not participants were personally engaged in social interaction. In contrast, a more dorsal part of the MPFC (dMPFC) was activated during the experience of self-involvement regardless of whether the virtual character showed facial expressions indicative of a communicative intent or arbitrary facial movements (see Kampe et al., 2003, for similar findings). This study indicates that distinct regions of the MPFC are recruited during the decoding of the social meaning being conveyed during an encounter and the experience of self-involvement.

Notwithstanding the progress that has been made in revealing the neural correlates of understanding of other minds, we still know relatively little about the cognitive operations that are recruited during self-involvement in social interaction or on-line mentalizing. Therefore, the current study was designed to examine the effect of self-involvement during social interaction on attention allocation, arousal, and facial expression. Eye movements, pupil size, and facial electromyographic (EMG) activity were recorded while participants watched the video sequences developed by Schilbach et al. (2006). Specifically, we focused on the following questions.

First, we asked whether the allocation of attention as assessed by eye-tracking measurements would be influenced by being personally addressed during social interaction. The importance of allocating attentional resources during social cognition has been stressed by researchers from several different disciplines. For example, mounting evidence from social psychology suggests that the activation of category-based knowledge structures during person perception is moderated by the availability of attentional resources (e.g., Gilbert & Hixon, 1991; Quinn & Macrae, 2005). Another line of research has focused on individuals with autism who are severely impaired in understanding other minds. This research shows that normal adults direct their attention to the eye region for information regarding the mental state of others corresponding to the "language of the eyes" (Baron-Cohen, 1995) whereas people with autism have difficulties extracting mental states from the eyes (Baron-Cohen, Wheelwright, & Jolliffe, 1997). This is corroborated by eye-tracking studies suggesting that people with autism spend less time fixating on the eyes of other persons than normal adults do (Dalton et al., 2005; Pelphrey, Sasson, Reznick, Paul, Goldman, & Piven, 2002). All of these data indicate that the allocation of attention plays an important role in social cognition. Yet, the differential effects of being personally addressed during social interaction on attention allocation and the underlying biological mechanisms remain unclear.
Second, we asked whether personal involvement during interaction is associated with an increase in arousal by measuring pupil diameter. Notably, studies investigating whether direct eye gaze increases arousal have reported mixed results (Gale, Kingsley, Brookes, & Smith, 1978; Kampe et al., 2003; Nichols & Champness, 1971; Patterson, Jordan, Hogan, & Frerker, 1981; Strom & Buck, 1979). The present study aimed to shed light on this topic by systematically varying participants' state of self-involvement.

A third issue that seems to be of crucial importance in social interaction is whether people show an adequate emotional reaction upon perceiving another's communicative intent. EMG research consistently demonstrates that when people are exposed to emotional facial expressions, they spontaneously react with distinct facial EMG reactions. These reactions seem to reflect a tendency to mimic the interaction partner's facial displays (e.g., Bavelas, Black, Lemery, & Mullett, 1986; Dimberg, 1997; Lundqvist, 1995). For example, pictures of happy faces spontaneously evoke increased zygomatic major muscle activity (i.e., the muscle involved in smiling), whereas pictures of angry faces evoke increased corrugator supercilii activity (i.e., the muscle involved in frowning). So far, however, the effects of self-involvement on facial EMG during social interaction have not been examined.

In summary, the purpose of the present study was to examine attention allocation (eye movements), arousal (pupil size), and facial expressions (facial EMG) of participants while being personally involved in social interaction as opposed to being a passive observer of social interaction between others. Specifically, we sought to disentangle the effect of being personally addressed from the effect of decoding the meaning of another person's facial expression. Because electrophysiological studies indicate that early visual processing is attentive to differences in the gender of social targets (Mouchetant-Rostaing & Giard, 2003) and the effects of gender and mutual gaze appear to interact (Vuilleumier, George, Lister, Armony, & Driver, 2005), we also examined the effects of the virtual characters' gender. Participants were presented with the video sequences developed by Schilbach et al. (2006). Within each sequence, a virtual female or virtual male either turned towards the participant or looked towards a third person, thereby varying participants' state of self-involvement. Virtual characters then exhibited facial expressions that were either socially relevant or arbitrary.

METHODS

Participants

The sample included 23 right-handed male participants (M = 23.4 years, SD = 2.4) with no record of neurological or psychiatric illness. All participants were naive as to the purpose of the experiment. They had normal or corrected-to-normal vision and gave informed consent to participate in the study.

Design

We extended the 2×2 factorial design of Schilbach et al. (2006) by introducing the gender of the virtual characters as a third independent variable. As a consequence, the present study was based on a 2 (Self-involvement: yes vs. no) × 2 (Social Interaction: yes vs. no) × 2 (Gender of the virtual character: female vs. male) within-subjects factorial design.

Dependent variables

To measure attention allocation, we recorded participants' eye movements while they watched the video sequences developed by Schilbach et al. (2006). Because eye movements are an overt manifestation of the dynamic allocation of attention, they provide information on the operation of the attentional system. As an index of attention allocation during scene perception, we measured fixation duration. Notably, there is a close connection between fixation duration and amount of attention allocation during information processing (Inhoff & Radach, 1998; Rayner, 1998; Velichkovsky, 2001, 2002).

As an index of stimulus-induced arousal, we measured pupil size. Arousal responses are typically associated with increased sympathetic activity, and sympathetic activity induces pupillary dilatation. Therefore, an increase in arousal is reflected in an increase in pupil size.

Finally, to measure participants' facial expression, we recorded facial EMG activity in the zygomatic major muscle region.
Apparatus

Eye movements were recorded using the SR Research Ltd. EyeLink eye-tracking system with on-line detection of saccades and fixations. Position of the eye was detected using bright pupil image. This head-mounted system has a spatial resolution of better than 0.5 degrees and a sample rate of 250 Hz. Fixation onset was detected and transmitted to the presentation system with a delay of approximately 12 ms.

Pupil size was measured in arbitrary integer units referring to pupil diameter. Images were displayed using a GeForce2 MX card and a CRT display (19-inch Iiyama Vision Master 451) at 1024×768 pixels and the frame rate was 100 Hz. Viewed from a distance of 80 cm, the screen subtended a visual angle of 27° horizontally and 19° vertically.

EMG activity was measured using bipolar placement of miniature surface Ag/AgCl electrodes filled with electrode paste, attached according to the guidelines by Fridlund and Cacioppo (1986) at the zygomaticus major muscle sites on the left side of the face. The participants' skin was cleaned before the electrodes were attached. EMG activity was recorded with a LabLinc V75-04 Bioamplifier (COULBOURN Instruments, LLC, Allentown, USA), digitized at 1000 Hz and stored on a laboratory computer.

Stimuli

As illustrated in Figure 1, the temporal order of each video sequence adhered to a standardized pattern of 7.5 s. Each sequence started with the entrance of the virtual character up to 1500 ms (walk in), followed by positioning for 1000 ms (turn) either towards the participant or towards a third person (not visible to the participant, situated at an angle of approximately 30°). Of importance for eliciting the target state was the following time window extending from 2500 ms to 5500 ms (social interaction). This time window was presented at the same time across all experimental conditions and included the facial expressions or arbitrary movements. Finally, the virtual character turned away from the participant and walked out of the screen frame over the last 2000 ms (turn and walk off).

The video sequences were designed using the software package Poser 4.0 (Curious Labs). Condition-specific changes in face morphology were obtained by manipulating polygon groups on a 3D-mesh that made up the virtual character's facial structure. The polygon groups were comparable to the Action Units as described in the Facial Action Coding System (FACS; Ekman & Friesen, 1978). Ten different facial expressions were used (see Table 1). Pilot studies reported by Schilbach et al. (2006) showed that five of these were classified as unambiguously socially relevant (SOC), that is indicative of the intention to establish social contact, whereas the other five were perceived as arbitrary (ARB).

Animation of facial motion was realized by interpolating images between the neutral and condition-specific facial expressions as well as body positions of the virtual characters. AVI-formatted video sequences were generated by simulating a 100 mm focal-width camera view. The video sequences subtended 25.6° horizontally and 13.3° vertically. As depicted in Figure 2, only the head and shoulders of the virtual characters were presented in the video sequences. The virtual characters appeared against a light grey background. Across all video sequences, the following factors were systematically varied and counterbalanced: appearance on the screen (from left, from right), direction of gaze (left, right, at the subject), hair color (four different colors), hair style (four different styles). All facial expressions or movements were balanced for apparent motion change. That is, the number of frames that contained facial motion was balanced with those that contained static facial configurations.

In total, 100 video sequences were presented in which 24 different virtual characters exhibited 10 different mimic expressions, 5 of which were indicative of the intention to establish interpersonal contact (sample video sequences are demonstrated at http://www.uk-koeln.de/kliniken/psychiatrie/Bildgebung/ls_videos/schilbach-videos.htm).

Procedure

Participants received standardized instructions on the computer screen. They were told that they were part of a virtual 3D scene standing between two other agents. To help them visualize the setup, they were shown a picture illustrating this 3D scene (see Figure 3). Addressees to the right and left side of the participants (OTHER) could not be seen from the participant's position (ME).
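The fixed 7.5 s timeline of each sequence can be sketched as a simple lookup from stimulus time to segment. The segment names follow the text; the helper function itself is only an illustration, not code from the study:

```python
# Hypothetical helper mapping a time point (ms from video onset) to the
# segment of the standardized 7.5 s sequence described in the text.
SEGMENTS = [
    (0, 1500, "walk in"),
    (1500, 2500, "turn"),
    (2500, 5500, "social interaction"),
    (5500, 7500, "turn and walk off"),
]

def segment_at(t_ms: float) -> str:
    """Return the segment label for a time point within the 7.5 s video."""
    for start, end, label in SEGMENTS:
        if start <= t_ms < end:
            return label
    raise ValueError(f"{t_ms} ms lies outside the 7500 ms sequence")

print(segment_at(3000))  # falls inside the 2500-5500 ms social interaction window
```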
Figure 1. Screen shots of exemplary video sequences (ME = virtual character looks at the participant; OTHER = virtual character looks towards a third person; SOC = virtual character presents a facial expression indicative of a communicative intent; ARB = virtual character shows an arbitrary facial movement).

After the instruction was given, participants performed a 9-point calibration routine. Calibration was repeated if the error in any fixation point exceeded 1° or if the average error for all points was above 0.5°. Participants first completed two study trials in order to familiarize themselves with the task. Calibration was repeated after every second video sequence and each presentation was preceded by a drift correction.

After each video sequence, participants were asked to answer two questions, each prompted by a signal word appearing on the screen: first, whether the virtual character had looked at the participant or at the third invisible agent (WHO?); second, whether the participant had felt that the virtual character's facial expression had been indicative of the intention to greet or initiate contact (CONTACT?). The latter question had to be answered by indicating the level of perceived communicative intent on a 4-point scale ranging from 1 (strong feeling of contact) to 4 (no feeling of contact). Participants were instructed to respond as soon as they saw the signal word on screen.

TABLE 1
Facial expressions/movements and FACS codes

Description                                FACS code
Facial expressions (SOC)
  Winking                                  46
  Eyebrow flash                            1+2+5B
  Smile                                    12C+25
  Eyebrow flash/smile                      1+2+5B+12B
  Smile/winking                            12C+46
Facial movements (ARB)
  "Ooh" (lip speech configuration)         22+25
  Lip biting                               26
  Lip contraction                          18
  Lip thrusting                            23
  Blow up cheeks                           AD 33/34

Artifact control and data reduction

EMG data sets were visually inspected and data sequences with artifacts were excluded from subsequent analysis. EMG data of six participants had to be excluded from analyses because of technical difficulties and artifacts in the data stream. EMG signals were highpass filtered (50 Hz), rectified and integrated with a 30 ms time constant. Area under the curve values were calculated for the relevant time windows of each video sequence. EMG data were transformed into within-subjects z-scores prior to statistical analyses.

Owing to technical problems with the eye-tracking device, one participant had to be excluded from further analysis of the eye movement data. Fixations shorter than 40 ms as well as fixations outside of the presentation screen were excluded. In sum, 1.6% of all fixations were discarded by this trimming process.
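The EMG reduction steps (rectification, integration with a 30 ms time constant, area under the curve per analysis window, within-subject z-scores) can be sketched as follows. This is a minimal pure-Python illustration under our own assumptions about the data layout: it is not the authors' analysis code, it omits the 50 Hz highpass filter, and it approximates the 30 ms integrator with first-order exponential smoothing.

```python
import math
from statistics import mean, pstdev

def integrate_emg(samples, fs=1000, tau_ms=30):
    """Rectify, then smooth with a first-order low-pass whose time constant
    approximates the 30 ms integration described in the text."""
    alpha = 1.0 / (1.0 + tau_ms * fs / 1000.0)  # dt = 1 sample at fs = 1000 Hz
    out, y = [], 0.0
    for s in samples:
        y += alpha * (abs(s) - y)               # rectification + smoothing
        out.append(y)
    return out

def auc(smoothed, start_ms, end_ms, fs=1000):
    """Area under the smoothed curve within one analysis window."""
    i0, i1 = int(start_ms * fs / 1000), int(end_ms * fs / 1000)
    return sum(smoothed[i0:i1]) / fs

def zscores(values):
    """Within-subject z-transformation across a participant's windows."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

# Toy signal standing in for one trial's raw EMG (4 s at 1000 Hz).
signal = [math.sin(0.05 * i) for i in range(4000)]
smoothed = integrate_emg(signal)
# The three analysis windows used later in the paper (ms from video onset).
areas = [auc(smoothed, 600, 800), auc(smoothed, 2100, 2300), auc(smoothed, 3500, 3700)]
print([round(z, 2) for z in zscores(areas)])
```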
Data analyses

We divided the video sequences into three segments, which were analyzed separately (see also Figure 1): walk in (400-1500 ms), turn (1500-2500 ms), and social interaction (2500-5500 ms). The start of the walk in segment was defined by the first frame in which 50% of the head of the virtual character was visible. For all video sequences this condition was fulfilled 400 ms after the onset.

The rationale for de-composing the video sequences into three segments was that our independent variables, by definition, did not operate equally within each of the three time segments: In the walk in segment the virtual character enters the scene. During this part of the video only the gender of the virtual character can be discriminated and was hence treated as the only independent variable in the statistical analyses. Regarding the turn segment, in which the virtual character stops and either looks at the participant or at a third party, both gender and self-involvement were included in the statistical analyses. Finally, in the social interaction segment the virtual character showed either a socially relevant facial expression or arbitrary facial movements. Hence, all three independent variables were included in the statistical analyses.

Figure 2. Two screen shots of exemplary video sequences: (a) a male virtual character smiling at the participant; (b) a female virtual character looking towards a third person (not visible to the participant) and showing an arbitrary facial expression.

Figure 3. Virtual scene shown to participants in the instruction phase to help them visualize the setup.

RESULTS

Fixation duration

For each participant, median fixation duration was calculated for each of the experimental conditions and for each of the three time segments. To analyze the effect of gender on fixation duration during the walk in segment, the data
were submitted to a one-way analysis of variance (ANOVA). The results revealed that virtual females caused significantly longer fixations than virtual males, F(1, 21) = 21.98, p < .001, η2 = .51.

In the turn segment both the gender of the virtual character and self-involvement were included in the analysis. Hence, we submitted the data to a 2 (Self-involvement: yes vs. no) × 2 (Gender: female vs. male) repeated measures ANOVA. The ANOVA revealed a significant main effect of Self-involvement, F(1, 21) = 6.02, p = .02, η2 = .22, reflecting that fixation duration was significantly longer if participants were gazed at by the virtual character compared to if they observed the virtual character looking at someone else. The effect of Gender and the interaction of Gender and Self-involvement were not significant (both Fs < 1).

In the social interaction segment all three independent variables were included in the analysis. Therefore, the data were submitted to a 2 (Self-involvement: yes vs. no) × 2 (Social Interaction: yes vs. no) × 2 (Gender of the virtual character: female vs. male) repeated measures ANOVA. The results showed that none of the main effects were significant (all Fs < 1.6, all ps ≥ .22). There was a trend towards an interaction between self-involvement and social interaction, but this trend fell short of significance, F(1, 21) = 3.41, p = .08. See Table 2.

Pupil size

During the walk in segment, the results revealed a significant effect of Gender indicating that pupil size was significantly larger if the virtual character was female than if it was male, F(1, 21) = 22.89, p < .001, η2 = .52.

During the turn segment, the same effect emerged, that is, pupil size was significantly larger for virtual females than for virtual males, F(1, 21) = 17.52, p < .001, η2 = .46. In contrast, Self-involvement had no significant main effect on pupil size, F(1, 21) < 1. There was, however, a significant interaction between Gender and Self-involvement, F(1, 21) = 13.17, p = .002, η2 = .49. To examine this interaction more closely, simple effects analyses were conducted. Pupil size was found to be significantly larger if virtual males looked directly at the participants rather than if they looked towards someone beside the participant, F(1, 21) = 8.24, p = .009, η2 = .28. For virtual females, there was a trend in the opposite direction, but it fell short of significance, F(1, 21) = 3.31, p = .08, η2 = .14.

During the social interaction segment, the results again revealed a main effect of gender, F(1, 21) = 71.19, p < .001, η2 = .77. Moreover, there was a significant main effect of self-involvement, F(1, 21) = 10.46, p = .004, η2 = .33. This main effect, however, was qualified by a significant two-way interaction between gender and self-involvement, F(1, 21) = 13.97, p = .001, η2 = .40. Simple effects analyses showed that pupil size was significantly larger if virtual females had turned to someone beside the participants than if they looked directly at the participants, F(1, 21) = 21.88, p < .001, η2 = .51. In contrast, for virtual males no effect of self-involvement emerged, F(1, 21) < 1. See Table 3.

Facial EMG

According to the literature, a delay of 200 ms to 400 ms in facial muscle responses to a virtual character's facial expression can be expected (Dimberg, 1997; Dimberg & Thunberg, 1998). Regarding the analysis of the EMG data, we hence distinguished between the following three intervals: 600-800 ms (corresponds to the walk in segment), 2100-2300 ms (corresponds to the turn segment), and 3500-3700 ms (corresponds to the social interaction segment).

TABLE 2
Mean values of fixation duration (in milliseconds) for the main effects of Self-involvement, Social Interaction, and Gender (standard errors in parentheses)

                     Self-involvement                Social Interaction              Gender
                     ME              OTHER           SOC             ARB             Female          Male
Walk in              264.34 (13.26)  260.36 (8.47)   258.43 (9.51)   266.27 (10.08)  277.32 (11.55)  247.39 (8.38)
Turn                 257.50 (10.84)  242.41 (6.37)   252.23 (9.79)   247.68 (8.80)   249.05 (8.55)   250.86 (9.50)
Social interaction   279.90 (10.19)  277.93 (7.21)   274.02 (9.53)   283.82 (8.68)   274.55 (7.53)   283.30 (10.17)
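The fixation-duration values reported above rest on the trimming and aggregation steps described under Data analyses (drop fixations under 40 ms or off-screen, then take each participant's median per condition). A compact sketch, using invented sample records rather than the study's data:

```python
from statistics import median

# Hypothetical fixation records for one condition: (duration_ms, x, y) in pixels.
fixations = [(35, 512, 384), (180, 100, 200), (250, 1100, 300),
             (220, 640, 400), (300, 512, 100), (95, 700, 700)]

SCREEN_W, SCREEN_H = 1024, 768  # display resolution given under Apparatus

def trimmed(fixes):
    """Drop fixations shorter than 40 ms or landing outside the screen."""
    return [d for (d, x, y) in fixes
            if d >= 40 and 0 <= x < SCREEN_W and 0 <= y < SCREEN_H]

durations = trimmed(fixations)        # 35 ms and the off-screen fixation are discarded
print(median(durations))              # median fixation duration for this condition
```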
TABLE 3
Mean values of pupil size for the main effects of Self-involvement, Social Interaction, and Gender (standard errors in parentheses)

Self-involvement Social Interaction Gender

ME OTHER SOC ARB Female Male

Walk in 2010.58 (152.31) 2035.10 (158.58) 2005.48 (152.16) 2040.21 (158.74) 2056.47 (155.92) 1989.22 (154.93)
Turn 1947.38 (153.62) 1932.18 (152.57) 1938.23 (149.41) 1941.33 (157.16) 1988.24 (153.62) 1891.32 (149.72)
Social interaction 2043.14 (152.25) 2091.81 (160.51) 2062.88 (153.63) 2072.07 (159.58) 2138.97 (162.14) 1995.98 (150.61)

Of course, we did not expect differential facial EMG activity during the walk in or the turn segment because virtual characters did not show any facial expression until the social interaction segment. Therefore, EMG effects during the social interaction segment should be reflected statistically in significant two-way interactions between any of the three independent variables and the time of measurement. We hence submitted the EMG data to a 2 (Self-involvement: yes vs. no) × 2 (Social Interaction: yes vs. no) × 2 (Gender of the virtual character: female vs. male) × 3 (Time of measurement: walk in segment vs. turn segment vs. social interaction segment) repeated measures ANOVA.

The only significant effect to emerge from the statistical analysis was a significant two-way interaction between Time of measurement and Social Interaction, F(1, 16) = 3.86, p = .03, η2 = .01. Further analyses revealed that this interaction effect was due to the fact that there was, according to our hypothesis, neither an effect of Social Interaction in the walk in segment, F(1, 16) = 0.31, p = .59, η2 = .02, nor in the turn segment, F(1, 16) = 1.53, p = .23, η2 = .09; there was, however, a significant main effect of Social Interaction in the social interaction segment, F(1, 16) = 7.66, p = .01, η2 = .32, indicating that zygomaticus major activity was significantly larger if the virtual character smiled than if the facial expression was arbitrary. Importantly, this main effect of Social Interaction was not qualified by an interaction between Social Interaction and Self-involvement, F(1, 16) = 0.84, p = .49, η2 = .03, indicating that facial expression was determined by the perception of socially relevant facial expressions regardless of whom they were directed towards (see Table 4).

DISCUSSION

The present study investigated the effect of self-involvement during social interaction on attention, arousal, and facial expression. Specifically, we aimed to disentangle the effect of being personally addressed from the effect of decoding the meaning of another person's facial expression. To this end, we recorded eye movements, pupil size, and facial EMG activity while participants observed virtual characters gazing at them or looking towards a third person. The virtual characters then displayed socially relevant facial expressions as they would appear in greeting or approach situations (e.g., smiling) or displayed arbitrary facial movements.

Our results demonstrate that using anthropomorphic virtual characters can be an adequate tool to study the effects of perception of face-based cues conveying communicative intent on measures of attention, arousal and facial expression. We suggest that this reflects the phenomenon of eliciting a sense of "social presence" in mediated environments (Bailenson, Blascovich, Beall, & Loomis, 2003; Ijsselsteijn & Riva, 2003).

TABLE 4
Mean values of EMG-activity (z-scores) for the main effects of Self-involvement, Social Interaction, and Gender (standard errors in parentheses)

                     Self-involvement             Social Interaction           Gender
                     ME             OTHER         SOC            ARB           Female         Male
Walk in              .061 (0.048)   −.024 (0.045) .038 (0.050)   −.001 (0.041) .006 (0.050)   .031 (0.028)
Turn                 −.007 (0.033)  .011 (0.043)  −.029 (0.034)  .033 (0.033)  −.023 (0.041)  .027 (0.029)
Social interaction   .058 (0.045)   −.037 (0.030) .096 (0.059)   −.075 (0.021) −.007 (0.028)  .028 (0.045)
Attention

Measures of fixation duration revealed that during the initial appearance of the virtual characters attention allocation was influenced by the gender of the character. After the virtual character had turned, however, gender seemingly became less salient and significant differences of attention were then determined by whether or not the face was looking directly at the human observer regardless of the character's gender.

This finding nicely ties in with research showing the importance of mutual eye contact for transmitting information about another person's intentions, potentially signaling an upcoming interaction (Argyle & Cook, 1976; Baron-Cohen, 1995). As Baron-Cohen argued, "it makes . . . sense that we should be hypersensitive to when another organism may be about to attack us, or may be interested in us for some reason" (1995, p. 98). In this sense, it has been proposed that direct gaze captures the attention of perceivers and that this attentional capture increases the efficiency of cognitive operations (Baron-Cohen, 1995). Indeed, there is evidence showing that directed gaze facilitates categorical thinking (Macrae, Hood, Milne, Rowe, & Mason, 2002) and improves person memory (Mason, Hood, & Macrae, 2004). The results of the present study go beyond this work by demonstrating that being personally addressed by an animated virtual character is associated with a significant increase in the length of fixations. Such an increase in the length of fixations seems to underlie the prolonged analysis of faces with direct gaze (reflected by longer response times), as has recently been found by Vuilleumier et al. (2005) for gender judgments.

Notably, the main effect of attention allocation observed in our study corresponds to the findings of Schilbach et al. (2006) of differential neural activity in dMPFC during the experience of self-involvement regardless of whether the virtual character showed socially relevant or arbitrary facial expressions. Considering the present results and the results of Schilbach et al. (2006) in combination, we suggest that dMPFC acts as a top-down mechanism on the allocation of attentional resources. Accordingly, signals directed at oneself, such as eye contact, activate the dMPFC, which, in turn, is the source of top-down signals that modify the activity in more posterior brain areas (see Frith & Frith, 2006, for a similar interpretation).

Interestingly, it has recently been proposed that dMPFC plays an important role in the representation of triadic social relations, such as sharing attention to an object or collaborating on a shared goal (Saxe, 2006). This form of triadic attention has to be clearly distinguished from shared attention as apparent in dyadic engagement (Tomasello, Carpenter, Call, Behne, & Moll, 2005). Although the present study did not investigate joint attention per se (i.e., two individuals attend to the same object because one is monitoring or following the focus of attention of the other), it is conceivable that our task established a more basic form of triadic attention (i.e., I see that the virtual character attends to the person standing beside me). Notably, a similar increase in the length of fixations as observed in the present study has been found in conditions where states of joint attention have to be maintained as a prerequisite for dyadic collaborative actions (Velichkovsky, 1995). Future studies are needed that systematically manipulate dyadic as compared to triadic attention.

Arousal

Throughout the video sequences there was a significant effect on pupil size dependent upon the gender of the virtual character. Specifically, participants showed a larger pupil size if the virtual character was female than if it was male (note that all of our participants were male). This effect was apparent from the first appearance of the virtual characters in the visual field, that is, well before the characters' communicative orientation and their facial expression could be encoded. This result is in line with the notion that our perceptual system is adaptively tuned, maintaining a chronic vigilance to key features of the environment that are tied to adaptive challenges (Gibson, 1979; McArthur & Baron, 1983).

We assume that this constitutes further evidence indicating that participants in our study did, indeed, react to the anthropomorphic virtual characters as if they were real people. This also seems to be reflected in the significant interaction effect found for gender and self-involvement during the turn segment: Whereas eye contact with a male will increase arousal possibly to prepare for a physiologically charged exchange of dominance, the absence of eye contact with a

female can be equally distressing. Admittedly, because only male participants were recruited in the present study, we cannot be entirely certain that our results are due to the interaction between the gender of the participants and the gender of the virtual characters. Hence, future studies are required that explore both male and female participants.

Notably, our results are in line with the results of Kampe et al. (2003) in showing no main effect of being personally addressed on stimulus-induced arousal (as measured by pupil size). This finding provides evidence for the notion that the effects of direct gaze on information processing, as observed by Macrae et al. (2002) and Mason et al. (2004), are mediated by the selective allocation of attention and not by an increase in arousal.

Interestingly, it has been suggested that diminished duration of eye gaze fixation as observed in autistic individuals can be explained by the fact that eye fixation is associated with negatively valenced overarousal within the autistic group (Dalton et al., 2005). A promising avenue for future research would be to investigate whether the differences in how individuals with autism scan and process facial information are due to abnormalities in the neural circuits related to arousal or rather due to abnormalities in the neural circuits related to visual attention.

Facial expression

Even newborns tend to imitate facial expressions and manual gestures (cf. Meltzoff & Moore, 1997). In our study, participants exhibited increased zygomaticus major muscle activity if the virtual characters showed socially relevant facial expressions (compared to arbitrary facial movements). Intriguingly, this effect occurred even if participants were not personally addressed. In other words, even if participants only observed the virtual characters smiling at someone else, they tended to smile back. We hence suggest that this EMG reaction is subserved by a more basic, "reflexive" mechanism (Satpute & Lieberman, 2006).

Our finding that we tend to mimic other people even if we are not personally addressed during social interaction is in line with evidence suggesting that we mimic facial expressions even if there is no need to communicate. For example, it has been shown that viewers tend to mimic the facial expressions of people on television (e.g., Hsee, Hatfield, Carlson, & Chemtob, 1990).

It is particularly interesting to note that the automatic tendency to imitate the facial expression of others has been proposed as a means of helping us to share what other people are currently feeling. Thus, facial mimicry appears to bind people together and fosters liking, empathy, and smooth interaction (Decety & Jackson, 2004; Hatfield, Cacioppo, & Rapson, 1994). For example, in one series of experiments, Van Baaren, Holland, Kawakami, and Van Knippenberg (2004) observed that participants who had been mimicked were more generous and helpful toward other people than those participants who had not been mimicked. Of particular relevance to the results of our study, Van Baaren et al. (2004) found that the beneficial consequences of mimicry were not restricted to behavior directed toward the mimicker but also included behavior directed toward individuals not directly involved in the mimicry situation. Hence, the impact of mimicry appears to extend beyond the dyad.

Strikingly, the main effect of perceiving socially relevant facial expressions regardless of self-involvement on facial EMG is in concordance with the finding of differential activity in vMPFC for the main effect of perceiving socially relevant facial expressions (Schilbach et al., 2006). It has been proposed that vMPFC integrates interoceptive, cognitive, and motivational states thereby representing or producing changes in visceral sensations as present in affective states (Critchley et al., 2003). Accordingly, we suggest that vMPFC involvement can be conceptualized as representing embodied co-regulative activity between self and other "giving us a feel" for the meaning of social signals (see also Saxe, 2006, for a similar interpretation).

CONCLUSION

Intriguingly, measures of attention allocation, arousal, and facial muscle activity demonstrate distinct reaction patterns to different temporal segments of the stimulus material. We suggest that these patterns of reactivity subserve and contribute to the dynamic processes of person perception by constituting parts of the biological correlates of our propensity for social perception. This helps us to actively engage with other "mindful" creatures, to drive the allocation of attentional resources and to inform higher-order

cognitive functioning thereby providing essential information to help us to understand other minds.

REFERENCES

Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7, 268–277.
Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge, UK: Cambridge University Press.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin, 29, 1–15.
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a "language of the eyes"? Evidence from normal adults and adults with autism or Asperger syndrome. Visual Cognition, 4, 311–331.
Bavelas, J. B., Black, A., Lemery, C. R., & Mullett, J. (1986). "I show how you feel": Motor mimicry as a communicative act. Journal of Personality and Social Psychology, 50, 322–329.
Critchley, H. D., Mathias, C. J., Josephs, O., O'Doherty, J., Zanini, S., Dewar, B. K., et al. (2003). Human cingulate cortex and autonomic control: Converging neuroimaging and clinical evidence. Brain, 126, 2139–2152.
Dalton, K. M., Nacewicz, B. M., Johnstone, T., Schaefer, H. S., Gernsbacher, M. A., Goldsmith, H. H., et al. (2005). Gaze fixation and the neural circuitry of face processing in autism. Nature Neuroscience, 8, 519–526.
Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3, 71–100.
Dimberg, U. (1997). Facial reactions: Rapidly evoked emotional responses. Journal of Psychophysiology, 11, 115–123.
Dimberg, U., & Thunberg, M. (1998). Rapid facial reactions to emotional facial expressions. Scandinavian Journal of Psychology, 39, 39–45.
Ekman, P., & Friesen, W. (1978). Facial action coding system: A technique for the measurement of facial movement. Palo Alto, CA: Consulting Psychologists Press.
Fridlund, A. J., & Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology, 23, 567–589.
Frith, C. D., & Frith, U. (2006). How we predict what other people are going to do. Brain Research, 1079, 36–46.
Frith, U., & Frith, C. D. (2003). Development and neurophysiology of mentalizing. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 358, 459–473.
Gale, A., Kingsley, E., Brookes, S., & Smith, D. (1978). Cortical arousal and social intimacy in the human female under different conditions of eye contact. Behavioral Processes, 3, 271–275.
Gallagher, H. L., Jack, A. I., Roepstorff, A., & Frith, C. D. (2002). Imaging the intentional stance in a competitive game. NeuroImage, 16, 814–821.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Gilbert, D. T., & Hixon, J. G. (1991). The trouble of thinking: Activation and application of stereotypic beliefs. Journal of Personality and Social Psychology, 60, 509–517.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Speech acts (pp. 41–58). New York: Academic Press.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional contagion. New York: Cambridge University Press.
Hsee, C. K., Hatfield, E., Carlson, J. G., & Chemtob, C. (1990). The effect of power on susceptibility to emotional contagion. Cognition and Emotion, 4, 327–340.
Ijsselsteijn, W. A., & Riva, G. (2003). Being there: The experience of presence in mediated environments. In G. Riva, F. Davide, & W. A. Ijsselsteijn (Eds.), Being there: Concepts, effects and measurement of user presence in synthetic environments (pp. 1–14). Amsterdam: IOS Press.
Inhoff, A. W., & Radach, R. (1998). Definition and computation of oculomotor measures in the study of cognitive processes. In G. Underwood (Ed.), Eye guidance in reading and scene perception (pp. 29–54). Oxford, UK: Elsevier.
Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). "Hey John!": Signals conveying communicative intention towards the self activate brain regions associated with mentalizing regardless of modality. Journal of Neuroscience, 23, 5258–5263.
King-Casas, B., Tomlin, D., Anen, C., Camerer, C. F., Quartz, S. R., & Montague, P. R. (2005). Getting to know you: Reputation and trust in a two-person economic exchange. Science, 308, 78–83.
Lundqvist, L. O. (1995). Facial EMG reactions to facial expressions: A case of facial emotional contagion? Scandinavian Journal of Psychology, 36, 130–141.
Macrae, C. N., Hood, B. M., Milne, A. B., Rowe, A. C., & Mason, M. F. (2002). Are you looking at me? Eye gaze and person perception. Psychological Science, 13, 460–464.
Mason, M. F., Hood, B. M., & Macrae, C. N. (2004). Look into my eyes: Gaze direction and person memory. Memory, 12, 637–643.
McArthur, L. Z., & Baron, R. (1983). Toward an ecological theory of social perception. Psychological Review, 90, 215–238.
McCabe, K., Houser, D., Ryan, L., Smith, V., & Trouard, T. (2001). A functional imaging study of co-operation in two-person reciprocal exchange. Proceedings of the National Academy of Sciences, 98, 11832–11835.
Meltzoff, A. N., & Moore, M. K. (1997). Explaining facial imitation: A theoretical model. Early Development & Parenting, 6, 179–192.
Mouchetant-Rostaing, Y., & Giard, M. H. (2003). Electrophysiological correlates of age and gender

perception on human faces. Journal of Cognitive Neuroscience, 15, 900–910.
Nichols, K., & Champness, B. (1971). Eye gaze and the GSR. Journal of Experimental Social Psychology, 7, 623–626.
Patterson, M. L., Jordan, A., Hogan, M. B., & Frerker, D. (1981). Effects of nonverbal intimacy on arousal and behavioral adjustment. Journal of Nonverbal Behavior, 5, 184–198.
Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32, 249–261.
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 4, 515–526.
Quinn, K. A., & Macrae, C. N. (2005). Categorizing others: The dynamics of person construal. Journal of Personality and Social Psychology, 88, 467–479.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the ultimatum game. Science, 300, 1755–1758.
Satpute, A. B., & Lieberman, M. D. (2006). Integrating automatic and controlled processes into neurocognitive models of social cognition. Brain Research, 1079, 86–97.
Saxe, R. (2006). Uniquely human social cognition. Current Opinion in Neurobiology, 16, 235–239.
Schilbach, L., Wohlschlaeger, A. M., Kraemer, N. C., Newen, A. N., Shah, N. J., Fink, G. R., et al. (2006). Being with virtual others: Neural correlates of social interaction. Neuropsychologia, 44, 718–730.
Strom, J., & Buck, R. (1979). Staring and participants' sex: Physiological and subjective reactions. Personality and Social Psychology Bulletin, 5, 114–117.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28, 675–691.
Van Baaren, R. B., Holland, R. W., Kawakami, K., & Van Knippenberg, A. (2004). Mimicry and prosocial behavior. Psychological Science, 15, 71–74.
Velichkovsky, B. M. (1995). Communicating attention: Gaze position transfer in co-operative problem solving. Pragmatics and Cognition, 3, 199–222.
Velichkovsky, B. M. (2001). Levels of processing: Validating the concept. In M. Naveh-Benjamin, M. Moscovitch, & H. L. Roediger, III (Eds.), Perspectives on human memory and cognitive aging: Essays in honor of Fergus I. M. Craik. Philadelphia: Psychology Press.
Velichkovsky, B. M. (2002). Heterarchy of cognition: The depths and the highs of a framework for memory research. Memory, 10, 405–419.
Vuilleumier, P., George, N., Lister, V., Armony, J., & Driver, J. (2005). Effects of perceived mutual gaze on face judgments and face recognition memory. Visual Cognition, 12, 85–101.
