Report

Dynamic Facial Expressions of Emotion Transmit an Evolving Hierarchy of Signals over Time

Rachael E. Jack,¹,* Oliver G.B. Garrod,¹ and Philippe G. Schyns¹
¹Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland, G12 8QB, UK
Summary

Designed by biological [1, 2] and social [3] evolutionary pressures, facial expressions of emotion comprise specific facial movements [4–8] to support a near-optimal system of signaling and decoding [9, 10]. Although highly dynamical [11, 12], little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling [13–15], information theory [16, 17], and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals [1] supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories [18] and instead suggesting four.

Results
Knowledge of facial expressions of emotion and the information they transmit are deeply rooted in the perceptual expectations of observers (e.g., [4, 15]). Specifically, perceptual expectations are created from interacting with the external environment, whereby perceivable information (e.g., facial expression signals) is extracted, consolidated, and retained as knowledge to later adaptively predict and interpret the world [21–24]. Thus, by probing the perceptual expectations of observers, we can model the facial expression signals transmitted and perceived in the social environment (see [10, 19, 20] for coevolutionary accounts of signal production and perception).

To analyze the perceptual expectations of dynamic facial expressions of emotion, we proceeded in three steps. First, to model the dynamic perceptual expectations of the six classic facial expressions of emotion, we combined a unique generative grammar of dynamic facial movements (the Generative Face Grammar [GFG]) [13] with reverse correlation [14] in 60 Western white Caucasian observers (see the Supplemental Experimental Procedures, Observers, available online). Second, to quantify the signaling dynamics of the resulting facial expression models over time, we used information theory [16, 17]. Finally, to understand how the signaling dynamics support emotion categorization over time, we used Bayesian classifiers.
Modeling Perceptual Expectations of Dynamic Facial Expressions of Emotion
Figure 1 illustrates the GFG and task procedure. On each trial, the computer graphics platform randomly selects a set of action units (AUs; i.e., specific facial movements performed by specific facial muscle[s] as described by the Facial Action Coding System [FACS] [8]) and values specifying six temporal parameters (represented as color-coded curves) to generate a random 3D facial animation (see Movie S1 for an example and the Supplemental Experimental Procedures, Stimuli). We asked each naive observer to categorize the random facial animations according to the six classic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") or "don't know" (see the Supplemental Experimental Procedures, Task Procedure). Following the experiment, we reverse correlated each observer's categorical responses (see Table S1) with the randomly chosen AUs and temporal parameters (see the Supplemental Experimental Procedures, Modeling Perceptual Expectations of Dynamic Facial Expressions of Emotion), producing a distribution of 720 dynamic facial expression models (60 observers × 6 facial expressions of emotion × male/female faces).
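To make the reverse-correlation step concrete, here is a minimal Python sketch of the idea: estimate, for one observer and one emotion label, how much more often each AU appears on trials the observer assigned to that emotion than on trials overall. The data, names, and the reduction to binary AU presence are illustrative assumptions; the actual analysis also models the six temporal parameters per AU (see the Supplemental Experimental Procedures).

```python
import numpy as np

# Hypothetical data for one observer: each of 2,400 trials is a random
# facial animation coded as a binary vector over 41 action units (AUs),
# plus the observer's categorical response.
rng = np.random.default_rng(0)
n_trials, n_aus = 2400, 41
aus = rng.integers(0, 2, size=(n_trials, n_aus))   # AU present/absent per trial
responses = rng.choice(
    ["happy", "surprise", "fear", "disgust", "anger", "sad", "dont_know"],
    size=n_trials)

def reverse_correlate(aus, responses, emotion):
    """Classification-image style estimate: how much more often each AU
    occurs on trials categorized as `emotion` than on trials overall."""
    chosen = aus[responses == emotion].mean(axis=0)  # P(AU | response = emotion)
    base = aus.mean(axis=0)                          # P(AU) across all trials
    return chosen - base                             # positive => AU drives the percept

fear_model = reverse_correlate(aus, responses, "fear")
print(np.argsort(fear_model)[::-1][:5])              # AU indices most tied to "fear"
```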
Quantifying the Signaling Dynamics of Facial Expressions of Emotion Models
To understand the signal form of the dynamic facial expression models, we first mapped the distribution of all AUs according to when they peaked in time (i.e., the peak latency of each AU). Figure 2 shows the AU peak latency distributions for all models pooled together (n = 720, All Facial Expression Models) and split by emotion (n = 120, Models Split by Emotion). In each panel, color-coded circles in each row represent the distribution of peak latencies (one circle per model) for each AU (see row labels), where brightness indicates proximity to the median time. As illustrated, dynamic facial expression models transmit certain AUs earlier in the signaling dynamics (e.g., Upper Lid Raiser) and some comparatively later (e.g., Lip Stretcher), reflecting expectations of an ordered, not uniform, transmission of face signals over time.

To objectively quantify AU signaling over time, we used Shannon entropy, which measures (in bits) the complexity (i.e., average uncertainty) of a signal. To compute signal complexity over time, we first divided the AU distributions into ten equally spaced time bins. For each time bin, we then computed the probability that each AU (n = 41) peaked within that bin, calculated across all 720 models (in Figure 2, All Facial Expression Models). We then split the models into the six emotion categories and repeated the same calculation for each emotion separately (in Figure 2, Models Split by Emotion). As shown by the white lines in each panel of Figure 2, signal complexity follows a common pattern over time: low
complexity (i.e., low entropy, high certainty) early in the signaling dynamics is followed by increasing complexity (i.e., high entropy, low certainty), before later decreasing.

Low entropy observed early and late in the signaling dynamics reflects the high probability (i.e., certainty) of the transmission of few AUs. To identify these AUs—i.e., those systematically transmitted earlier and later in the signaling dynamics—we calculated the Shannon information of each AU (measured in bits) across time. AUs with significantly low Shannon information (p < 0.05; see the Supplemental Experimental Procedures, Shannon Information) are highlighted in magenta (early AUs) and green (later AUs) in Figure 2. As shown in Figure 2, dynamic facial expression models transmit few AUs early in the signaling dynamics—i.e., Upper Lid Raiser, Nose Wrinkler, Lip Funneler, and Mouth Stretch (see magenta highlight). In contrast, different AUs are systematically transmitted later in the signaling dynamics—i.e., Brow Raiser, Brow Lowerer, Eyes Closed, Upper Lip Raiser, Lip Corner Puller + Cheek Raiser, and Lip Stretcher (see green highlight). (Table S2 shows peak latency differences between early and late AUs per emotion.)

Notably, AUs systematically transmitted early in the signaling dynamics comprise those conferring a biological advantage to the expresser (i.e., Upper Lid Raiser and Nose Wrinkler [1]), whereas AUs transmitted later comprise information diagnostic for categorizing the six classic emotions [25].
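The entropy and Shannon-information computations described above can be sketched as follows in Python, using simulated peak-latency data; the binning and normalization details here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_aus, n_bins = 720, 41, 10

# Hypothetical peak latencies: for each model and AU, the time bin (0-9)
# in which that AU peaks (simulated; real models contain only some AUs).
peak_bin = rng.integers(0, n_bins, size=(n_models, n_aus))

for t in range(n_bins):
    # P(AU k peaks in bin t), estimated across all 720 models and
    # normalized over AUs so probabilities within the bin sum to 1.
    counts = (peak_bin == t).sum(axis=0).astype(float)
    p = counts / counts.sum()

    # Shannon entropy of the bin, H = -sum_k p_k * log2(p_k), in bits:
    # low H means a few AUs dominate the bin (high signaling certainty).
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()

    # Shannon information (surprisal) of each AU, -log2(p_k): AUs with
    # significantly low values are those systematically peaking in this bin.
    with np.errstate(divide="ignore"):
        surprisal = -np.log2(p)
    print(f"bin {t}: H = {entropy:.2f} bits, "
          f"most systematic AU = {surprisal.argmin()}")
```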
Figure 1. Generative Face Grammar to Reverse Correlate Dynamic Perceptual Expectations of Facial Expressions of Emotion
(Left) Stimulus. On each experimental trial, a computer graphics platform randomly selects a subset of the 41 action units (AUs; here, AU4 in blue, AU5 in green, and AU20 in red) and values specifying their temporal parameters (represented as color-coded curves). The dynamic AUs are then combined to produce a 3D facial animation, illustrated here with four snapshots and corresponding color-coded temporal parameter curves. The color-coded vector below indicates the three randomly selected AUs comprising the stimulus.
(Right) Perceptual expectations. Naive observers categorize the random facial animation according to six emotions (plus "don't know") if the movements correlate with their subjective perceptual expectations of that emotion (here, fear). Each observer categorized a total of 2,400 random facial animations displayed on same-race faces of both sexes.
See also Table S1 and Movie S1.
Together, these results show that dynamic facial expression models transmit an evolving hierarchy of signals over time, characterized by simpler, biologically rooted signals early in the signaling dynamics followed by more complex socially specific signals that finely discriminate the six facial expressions of emotion.
Classifying the Signaling Dynamics of Facial Expressions of Emotion
All signals, via evolutionary pressures, are designed to reliably transmit specific information to observers to support a near-optimal system of signaling and decoding [9, 26]. To understand the functional relevance of the hierarchical form of facial expression information transmission over time, we analyzed how this signaling supports emotion categorization for an idealized observer. To this aim, we constructed ten Bayesian classifiers (one per time point), where each classifier categorizes the face signals (i.e., AUs) transmitted up until that time point (e.g., at t = 10 the classifier categorizes the full signal) according to the six classic emotions (see the Supplemental Experimental Procedures, Bayesian Classifiers).

In Figure 3 (Bayesian Classifiers), each color-coded matrix shows the categorization performance of the Bayesian classifiers at each time interval, where lighter squares show higher posterior probability of an emotion and darker areas show lower posterior probability. As shown by the increasingly light squares across the diagonal, categorization performance increases over time with the progressive accumulation of signaled AUs. Squares outlined in magenta show the emotions systematically confused (p < 0.01) at each time point (e.g., at t = 3, surprise and fear are confused, as are disgust and anger). Confusions between emotion categories occur early in the signaling dynamics, whereas accurate discrimination between emotions typically occurs later (indicated in Figure 3 with green squares for two examples—surprise/fear [t = 6] and disgust/anger [t = 7]).
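Such per-time-point classification can be sketched as a naive Bayes model over cumulative binary AU signals with a flat prior and Laplace smoothing; all of these choices, and the simulated data, are assumptions for illustration rather than the paper's exact classifier (which is specified in the Supplemental Experimental Procedures).

```python
import numpy as np

rng = np.random.default_rng(2)
emotions = ["happy", "surprise", "fear", "disgust", "anger", "sad"]
n_models, n_aus, n_bins = 120, 41, 10

# Hypothetical training data: for each emotion and model, a binary record
# of which AUs have peaked by each time bin (cumulative signal over time).
signals = rng.integers(0, 2, size=(len(emotions), n_models, n_bins, n_aus))

def classify_at(t, observed):
    """Posterior over the six emotions given the AUs signaled up to time
    bin t: naive Bayes with a flat prior and Laplace smoothing."""
    log_post = np.zeros(len(emotions))
    for e in range(len(emotions)):
        # P(AU has peaked by time t | emotion e), smoothed to avoid zeros
        p = (signals[e, :, t].sum(axis=0) + 1) / (n_models + 2)
        log_post[e] = np.sum(observed * np.log(p)
                             + (1 - observed) * np.log1p(-p))
    post = np.exp(log_post - log_post.max())   # normalize in log space
    return post / post.sum()

# At t = 3, classify the cumulative signal of one "fear" model; with real
# data, early signals would confuse fear with surprise (shared AUs).
observed = signals[emotions.index("fear"), 0, 3]
print(dict(zip(emotions, classify_at(3, observed).round(2))))
```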
Figure 2. Expected Dynamic Signaling of Facial Expressions of Emotion over Time
To quantify the dynamic signaling of facial expression signals (i.e., AUs) expected over time, we mapped the distribution of expected times of all AUs comprising all models pooled ("All Facial Expression Models," n = 720 models) and also split by emotion ("Models Split by Emotion," n = 120 models).
(Top) All facial expression models. In each row, color-coded circles represent the distribution of expected times for each AU, where brightness indicates the median expected time and darkness indicates distance from the median, weighted by the proportion of models with that AU. As shown by the white line, signal complexity (measured by Shannon entropy, in bits) increases before later decreasing over the signaling dynamics, where low entropy reflects systematic signaling of few AUs. As represented by magenta circles, AUs systematically expected early in the signaling dynamics (e.g., Upper Lid Raiser, Nose Wrinkler; p < 0.05) comprise biologically adaptive AUs [1]. As represented by green circles, AUs systematically expected later (e.g., Brow Raiser, Upper Lip Raiser; p < 0.05) comprise AUs diagnostic for categorizing the six classic emotions [25].
(Bottom) Models split by emotion. Note that observers expect Upper Lid Raiser to be transmitted early in both surprise and fear, and Nose Wrinkler to be transmitted early in disgust and anger. Together, these data show that dynamic facial expressions transmit signals that evolve over time from simpler, biologically rooted signals to socially specific signals.
See also Table S2.

To identify the AUs producing early confusions and those supporting later accurate discrimination, we used a leave-one-out method that removed each AU independently from all models and time points before recomputing the Bayesian classifier performance (see the Supplemental Experimental Procedures, Confusing and Diagnostic Face Signals). Figure 3 (Confusing and Diagnostic Face Signals) shows the AUs—presented as color-coded deviation maps—that produce early confusions (outlined in magenta) and support later discrimination between emotions (outlined in green) for two confusions (surprise/fear and disgust/anger). As shown, early confusions between surprise and fear arise due to the common transmission of Upper Lid Raiser and Jaw Drop (t = 2), then Upper Lid Raiser (t = 3–5), with accurate discrimination arising due to the later availability of Eyebrow Raiser (t = 6). Similarly, disgust and anger are confused early in the signaling dynamics due to the common transmission of Nose Wrinkler (t = 2–5), then Lip Funneler (t = 6), with accurate discrimination occurring due to the later transmission of Upper Lip Raiser Left (t = 7). Based on systematic early confusions between specific emotion categories, these results reflect that expected early face signals enable discrimination of only four emotion categories—i.e., (1) happy, (2) sad, (3) fear/surprise, and (4) disgust/anger—whereas the later availability of diagnostic information supports discrimination of all six emotion categories.
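The leave-one-out analysis can be sketched in the same style: ablate one AU at a time across all models, recompute classifier accuracy, and rank AUs by the resulting drop. The data and the simple classifier below are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
emotions = ["happy", "surprise", "fear", "disgust", "anger", "sad"]
n_models, n_aus = 120, 41

# Hypothetical cumulative signals at one fixed time bin: per emotion,
# which AUs have peaked in each of its 120 models (binary, simulated).
signals = rng.integers(0, 2, size=(len(emotions), n_models, n_aus))

def accuracy(sig):
    """Naive-Bayes classification accuracy over all models (flat prior,
    Laplace smoothing), used as the baseline and post-ablation score."""
    p = (sig.sum(axis=1) + 1) / (n_models + 2)   # P(AU | emotion), smoothed
    correct = 0
    for e in range(len(emotions)):
        for m in range(n_models):
            x = sig[e, m]
            ll = (x * np.log(p) + (1 - x) * np.log1p(-p)).sum(axis=1)
            correct += ll.argmax() == e
    return correct / (len(emotions) * n_models)

# Leave-one-out: zero out each AU everywhere, re-score the classifier,
# and treat a large accuracy drop as evidence the AU is diagnostic.
base = accuracy(signals)
drops = [base - accuracy(np.where(np.arange(n_aus) == au, 0, signals))
         for au in range(n_aus)]
print(np.argsort(drops)[::-1][:5])               # most diagnostic AUs
```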
Discussion
Using perceptual expectation modeling, we derived the dynamic signaling of the six classic facial expressions of emotion—happy, surprise, fear, disgust, anger, and sad—in 60 Western white Caucasian observers. Information-theoretic analysis showed that the dynamics transmit information evolving from simpler, biologically rooted signals (e.g., Upper Lid Raiser and Nose Wrinkler) to more complex signals. Using Bayesian classifiers, we show that early signaling is characterized by the common transmission of specific AUs (e.g., Upper Lid Raiser) between emotion categories (e.g., surprise and fear), thereby giving rise to systematic confusions. In contrast, later signaling comprises the availability of diagnostic information (e.g., Eyebrow Raiser), supporting the accurate