Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations

Disa A. Sauter^a,b,1, Frank Eisner^c, Paul Ekman^d, and Sophie K. Scott^c

^aDepartment of Psychology, University College London, London WC1E 6BT, United Kingdom; ^bHenry Wellcome Building, Birkbeck College, London WC1E 7HX, United Kingdom; ^cInstitute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom; and ^dThe Paul Ekman Group, San Francisco, CA 94104

Edited by Edward E. Smith, Columbia University, New York, NY, and approved November 4, 2009 (received for review July 22, 2009)

Emotional signals are crucial for sharing important information with conspecifics, for example, to warn humans of danger. Humans use a range of different cues to communicate to others how they feel, including facial, vocal, and gestural signals. We examined the recognition of nonverbal emotional vocalizations, such as screams and laughs, across two dramatically different cultural groups. Western participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called "basic emotions" (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognized. In contrast, a set of additional emotions was only recognized within, but not across, cultural boundaries. Our findings indicate that a number of primarily negative emotions have vocalizations that can be recognized across cultures, while most positive emotions are communicated with culture-specific signals.

communication | affect | universality | vocal signals

Despite differences in language, culture, and ecology, some human characteristics are similar in people all over the world. Because we share the vast majority of our genetic makeup with all other humans, there is great similarity in the physical features that are typical for our species, even though minor characteristics vary between individuals. Like many physical features, aspects of human psychology are shared. These psychological universals can inform arguments about what features of the human mind are part of our shared biological heritage and which are predominantly products of culture and language. For example, all human societies have complex systems of communication to convey their thoughts, feelings, and intentions to those around them (1). However, although there are some commonalities between different communicative systems, speakers of different languages cannot understand each other's words and sentences. Other aspects of communicative systems do not rely on common lexical codes and may be shared across linguistic and cultural borders. Emotional signals are an example of a communicative system that may constitute a psychological universal.

Humans use a range of cues to communicate emotions, including vocalizations, facial expressions, and posture (2-4). Auditory signals allow for affective communication when the recipient cannot see the sender, for example, across a distance or at night. Infants are sensitive to vocal cues from the very beginning of life, when their visual system is still relatively immature (5).

Vocal expressions of emotions can occur overlaid on speech in the form of affective prosody. However, humans also make use of a range of nonverbal vocalizations to communicate how they feel, such as screams and laughs. In this study, we investigate whether certain nonverbal emotional vocalizations communicate the same affective states regardless of the listener's culture. Currently, the only available cross-cultural data on vocal signals come from studies of emotional prosody in speech (6-8). This work has indicated that listeners can infer some affective states from emotionally inflected speech across cultural boundaries. However, no study to date has investigated emotion recognition from the voice in a population that has had no exposure to other cultural groups through media or personal contact. Furthermore, emotional information overlaid on speech is restricted by several factors, such as the segmental and prosodic structure of the language and constraints on the movement of the articulators. In contrast, nonverbal vocalizations are relatively "pure" expressions of emotions. Without the simultaneous transmission of verbal information, the articulators (e.g., lips, tongue, larynx) can move freely, allowing for the use of a wider range of acoustic cues (9).

We examined vocal signals of emotions using the two-culture approach, in which participants from two populations that are maximally different in terms of language and culture are compared (10). The claim of universality is strengthened to the extent that the same phenomenon is found in both groups. This approach has previously been used in work demonstrating the universality of facial expressions of the emotions happiness, anger, fear, sadness, disgust, and surprise (11), a result that has now been extensively replicated (12). These emotions have also been shown to be reliably communicated within a cultural group via vocal cues, and additionally, vocalizations effectively signal several positive affective states (3, 13).

To investigate whether emotional vocalizations communicate affective states across cultures, we compared European native English speakers with the Himba, a seminomadic group of over 20,000 pastoral people living in small settlements in the Kaokoland region in northern Namibia. A handful of Himba settlements, primarily those near the regional capital Opuwo, are so-called "show villages" that welcome foreign tourists and media in exchange for payment. However, in the very remote settlements, where the data for the present study were collected, the individuals live completely traditional lives, with no electricity, running water, formal education, or any contact with culture or people from other groups. They have thus not been exposed to the affective signals of individuals from cultural groups other than their own.

Participants heard a short emotional story, describing an event that elicits an affective reaction: for example, that a person is very sad because a close relative of theirs has passed away (see Table S1). After confirming that they had understood the intended emotion of the story, they were played two vocalization sounds. One of the stimuli was from the same category as the emotion expressed in the story and the other was a distractor. The participant was asked which of the two human vocalizations matched the emotion in the story (Fig. 1). This task avoids problems with direct translation of emotion terms between languages because it includes additional information in the scenarios and does not require participants to be able to read.

Author contributions: D.A.S., F.E., P.E., and S.K.S. designed research; D.A.S. and F.E. performed research; D.A.S. and F.E. analyzed data; and D.A.S. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

Freely available online through the PNAS open access option.

^1To whom correspondence should be sent at the present address: Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands. E-mail: disa.sauter@mpi.nl

This article contains supporting information online at www.pnas.org/cgi/content/full/0908239106/DCSupplemental.

The English sounds were from a previously validated set of nonverbal vocalizations of emotion, produced by two male and two female British English-speaking adults. The Himba sounds were produced by five male and six female Himba adults, and were selected in an equivalent way to the English stimuli (13).

Fig. 1. Participant watching the experimenter play a stimulus (Upper) and indicating her response (Lower).

Results
To examine the cross-cultural recognition of nonverbal vocalizations, we tested the recognition of emotions from vocal signals from the other cultural group in each group of listeners (Fig. 2A). The English listeners matched the Himba sounds to the story at a level that significantly exceeded chance (χ² = 418.67, P < 0.0001), and they performed better than would be expected by chance for each of the emotion categories [χ² = 30.15 (anger), 100.04 (disgust), 24.04 (fear), 67.85 (sadness), 44.46 (surprise), 41.88 (achievement), 100.04 (amusement), 15.38 (sensual pleasure), and 32.35 (relief), all P < 0.001, Bonferroni corrected]. These data demonstrate that the English listeners could infer the emotional state of each of the categories of Himba vocalizations.

The Himba listeners matched the English sounds to the stories at a level that was significantly higher than would be expected by chance (χ² = 271.82, P < 0.0001). For individual emotions, they performed at better-than-chance levels for a subset of the emotions [χ² = 8.83 (anger), 27.03 (disgust), 18.24 (fear), 9.96 (sadness), 25.14 (surprise), and 49.79 (amusement), all P < 0.05, Bonferroni corrected]. These data show that the communication of these emotions via nonverbal vocalizations is not dependent on a shared culture between producer and listener: these signals are recognized across cultural borders.

We also examined the recognition of vocalizations from the listeners' own cultural groups (Fig. 2B). As expected, listeners from the British sample matched the British sounds to the story at a level that significantly exceeded chance (χ² = 271.82, P < 0.0001), and they performed better than would be expected by chance for each of the emotion categories [χ² = 81.00 (anger), 96.04 (disgust), 96.04 (fear), 81.00 (sadness), 70.56 (surprise), 96.04 (achievement), 88.36 (amusement), 51.84 (sensual pleasure), and 67.24 (relief), all P < 0.001, Bonferroni corrected]. This replicates previous findings that have demonstrated good recognition of a range of emotions from English nonverbal vocal cues both within (13) and between (3) European cultures.

The Himba listeners also matched the sounds from their own group to the stories at a level that was much higher than would be expected by chance (χ² = 111.42, P < 0.0001), and they performed better than would be expected by chance for almost all of the emotion categories [χ² = 39.86 (anger), 42.24 (disgust), 44.69 (sadness), 9.97 (surprise), 15.21 (fear), 33.14 (achievement), 19.86 (amusement), and 12.45 (sensual pleasure), all P < 0.05, Bonferroni corrected]. Only sounds of relief were not reliably paired with the relevant story. Overall, the consistently high Himba recognition of Himba sounds confirms that these stimuli constitute recognizable vocal signals of emotions to Himba listeners, and further demonstrates that this range of emotions can be reliably communicated within the Himba culture via nonverbal vocal cues.
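These comparisons amount to goodness-of-fit tests of matching counts against the 50% chance level of the two-alternative task. The following is a minimal sketch of such a test, assuming scipy as the software; the counts are illustrative, although 80 correct out of 104 trials (26 listeners x 4 trials per emotion) reproduces the anger statistic of 30.15 reported above, which suggests the reported tests are of this form:

    # Sketch: testing two-alternative matching counts against chance (50%).
    # Counts are illustrative, not taken from the study's raw data.
    from scipy.stats import chisquare

    n_trials = 26 * 4                    # 26 listeners x 4 trials per emotion
    n_correct = 80                       # illustrative number of correct matches
    observed = [n_correct, n_trials - n_correct]
    expected = [n_trials / 2, n_trials / 2]      # chance in a two-choice task

    chi2, p = chisquare(f_obs=observed, f_exp=expected)
    p_bonferroni = min(p * 9, 1.0)       # correct across nine emotion categories
    print(f"chi2(1) = {chi2:.2f}, corrected P = {p_bonferroni:.3g}")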

Discussion
The emotions that were reliably identified by both groups of listeners, regardless of the origin of the stimuli, comprise the set of emotions commonly referred to as the "basic emotions." These emotions are thought to constitute evolved functions that are shared between all human beings, both in terms of phenomenology and communicative signals (14). Notably, these emotions have been shown to have universally recognizable facial expressions (11, 12). In contrast, vocalizations of several positive emotions (achievement/triumph, relief, and sensual pleasure) were not recognized bidirectionally by both groups of listeners. This finding is despite the fact that they, with the exception of relief, were well recognized within each cultural group, and that nonverbal vocalizations of these emotions are recognized across several groups of Western listeners (3). This pattern suggests that there may be universally recognizable vocal signals for communicating the basic emotions, but that this does not extend to all affective states, including ones that can be identified by listeners from closely related cultures.

Our results show that emotional vocal cues communicate affective states across cultural boundaries. The basic emotions of anger, fear, disgust, happiness (amusement), sadness, and surprise were reliably identified by both English and Himba listeners from vocalizations produced by individuals from both groups. This observation indicates that some affective states are communicated with vocal signals that are broadly consistent across human societies, and do not require that the producer and listener share language or culture. The findings are in line with research in the domain of visual affective signals. Facial expressions of the basic emotions are recognized across a wide range of cultures (12) and correspond to consistent constellations of facial muscle movements (15). Furthermore, these facial configurations produce alterations in sensory processing, suggesting that they likely evolved to aid in the preparation for action in particularly important types of situations (16). Despite the considerable variation in human facial musculature, the facial muscles that are essential to produce the expressions associated with basic emotions are constant across individuals, suggesting that specific facial muscle structures have likely been selected to allow individuals to produce universally recognizable emotional expressions (17).

Fig. 2. Recognition performance (out of four) for each emotion category, within and across cultural groups. Dashed lines indicate chance levels (50%). Abbreviations: ach, achievement; amu, amusement; ang, anger; dis, disgust; fea, fear; ple, sensual pleasure; rel, relief; sad, sadness; and sur, surprise. (A) Recognition of each category of emotional vocalizations for stimuli from a different cultural group for Himba (light bars) and English (dark bars) listeners. (B) Recognition of each category of emotional vocalizations for stimuli from their own group for Himba (light bars) and English (dark bars) listeners.

The consistency of emotional signals across cultures supports the notion of universal affect programs: that is, evolved systems that regulate the communication of emotions, which take the form of universal signals (18). These signals are thought to be rooted in ancestral primate communicative displays. In particular, facial expressions produced by humans and chimpanzees have substantial similarities (19). Although a number of primate species produce affective vocalizations (20), the extent to which these parallel human vocal signals is as yet unknown. The data from the current study suggest that vocal signals of emotion are, like facial expressions, biologically driven communicative displays that may be shared with nonhuman primates.

In-Group Advantage. In humans, the basic emotional systems are modulated by cultural norms that dictate which affective signals should be emphasized, masked, or hidden (21). In addition, culture introduces subtle adjustments of the universal programs, producing differences in the appearance of emotional expression across cultures (12). These cultural variations, acquired through social learning, underlie the finding that emotional signals tend to be recognized most accurately when the producer and perceiver are from the same culture (12). This is thought to be because expression and perception are filtered through culture-specific sets of rules, determining what signals are socially acceptable in a particular group. When these rules are shared, interpretation is facilitated. In contrast, when cultural filters differ between producer and perceiver, understanding the other's state is more difficult.

To examine whether those listeners who were from the same culture where the stimuli were produced performed better in the recognition of the emotional vocalizations, we compared recognition performance for the two groups and for the two sets of stimuli. A significant interaction between the culture of the listener and that of the stimulus producer was found (F1,414 = 27.68, P < 0.001; means for English recognition of English sounds: 3.79; English recognition of Himba sounds: 3.34; Himba recognition of English sounds: 2.58; Himba recognition of Himba sounds: 2.90), confirming that each group performed better with stimuli produced by members of their own culture (Fig. 3). The analysis yielded no main effect of stimulus type (F < 1; mean recognition of English stimuli: 3.19; mean recognition of Himba stimuli: 3.12), demonstrating that overall, the two sets of stimuli were equally recognizable. The analysis did, however, result in a main effect of listener group, because the English listeners performed better on the task overall (F1,414 = 127.31, P < 0.001; English mean: 3.56; Himba mean: 2.74). This effect is likely because of the English participants' more extensive exposure to psychological testing and education.
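The 2 (listener culture) x 2 (stimulus culture) analysis can be sketched as a between-subjects ANOVA. In the sketch below, the cell means and group sizes are taken from the results above, but the simulated score noise and the use of statsmodels are assumptions for illustration only:

    # Sketch: 2x2 between-subjects ANOVA on recognition scores (out of 4).
    # Cell means and group sizes follow the paper; individual scores are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    cells = [("English", "English", 3.79, 25), ("English", "Himba", 3.34, 26),
             ("Himba", "English", 2.58, 29), ("Himba", "Himba", 2.90, 29)]
    rows = [(listener, stimulus, score)
            for listener, stimulus, mean, n in cells
            for score in rng.normal(mean, 0.5, n)]     # noise SD is an assumption
    df = pd.DataFrame(rows, columns=["listener", "stimulus", "score"])

    model = ols("score ~ C(listener) * C(stimulus)", data=df).fit()
    # The interaction term corresponds to the in-group advantage reported above;
    # the main effect of listener corresponds to the overall English advantage.
    print(sm.stats.anova_lm(model, typ=2))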
Fig. 3. Group averages (out of four) for recognition across all emotion categories for each set of stimuli, for Himba (black line) and English (gray line) listeners. Error bars denote standard errors.

The current study thus extends models of cross-cultural communication of emotional signals to nonverbal vocalizations of emotion, suggesting that these signals are modulated by culture-specific variation in a similar way to emotional facial expressions and affective speech prosody (12).

Positive Emotions. Some affective states are communicated using signals that are not shared across cultures, but specific to a particular group or region. In our study, vocalizations intended to communicate a number of positive emotions were not reliably identified by the Himba listeners. Why might this be? One possibility is that this is due to the function of positive emotions. It is well established that the communication of positive affect facilitates social cohesion with group members (22). Such affiliative behaviors may be restricted to in-group members with whom social connections are built and maintained. However, it may not be desirable to share such signals with individuals who are not members of one's own cultural group. An exception may be self-enhancing displays of positive affect. Recent research has shown that postural expressions of pride are universally recognized (23). However, pride signals high social status in the sender rather

than group affiliation, differentiating it from many other positive emotions. Although pride and achievement may both be considered "agency-approach" emotions (involved in reward-related actions; see ref. 24), they differ in their signals: achievement is well recognized within a culture from vocal cues (3), whereas pride is universally well recognized from visual signals (23) but not from vocalizations (24).

We found that vocalizations of relief were not matched with the relief story by Himba listeners, regardless of whether the stimuli were English or Himba. This could imply that the relief story was not interpreted as conveying relief to the Himba participants. However, this explanation seems unlikely for several reasons. After hearing a story, each participant was asked to explain what emotion they thought the person in the story was feeling. Only once it was clear that the individual had understood the intended emotion of the story did the experimenters proceed with presenting the vocalization stimuli, thus ensuring that each participant had correctly understood the target emotion of each story. Furthermore, Himba vocalizations expressing relief were reliably recognized by English listeners, demonstrating that the Himba individuals producing the vocalizations (from the same story presented to the listeners) were able to produce appropriate vocal signals for the emotion of the relief story. A more parsimonious explanation for this finding may be that the sigh used by both groups to signal relief is not an unambiguous signal to Himba listeners. Although used to signal relief, as demonstrated by the ability of Himba individuals to produce vocalizations that were recognizable to English listeners, sighs may be interpreted by Himba listeners to indicate a range of other states as well. Whether there are affective states that can be inferred from sighs across cultures remains a question for future research.

In the present study, one type of positive vocalization was reliably recognized by both groups of participants. Listeners agreed, regardless of culture, that sounds of laughter communicated amusement, exemplified as the feeling of being tickled. Tickling triggers laugh-like vocalizations in nonhuman primates (25) as well as other mammals (26), suggesting that it is a social behavior with deep evolutionary roots (27). Laughter is thought to have originated as a part of playful communication between young infants and mothers, and also occurs most commonly in both children and nonhuman primates in response to physical play (28). Our results support the idea that the sound of laughter is universally associated with being tickled, as participants from both groups of listeners selected the amused sounds to go with the tickling scenario. Indeed, given the well-established coherence between expressive and experiential systems of emotions (15), our data suggest that laughter universally reflects the feeling of enjoyment of physical play.

In our study, laughter was cross-culturally recognized as signaling joy. In the visual domain, smiling is universally recognized as a visual signal of happiness (11, 12). This raises the possibility that laughter is the auditory equivalent of smiling, as both communicate a state of enjoyment. However, a different interpretation may be that laughter and smiles are in fact quite different types of signals, with smiles functioning as a signal of generally positive social intent, whereas laughter may be a more specific emotional signal, originating in play (29). This issue highlights the importance of considering positive emotions in cross-cultural research on emotions (30). The inclusion of a range of positive states should be extended to conceptual representations in semantic systems of emotions, which have so far been explored primarily in the context of negative emotions (31).

Conclusion
In this study we show that a number of emotions are cross-culturally recognized from vocal signals, which are perceived as communicating specific affective states. The emotions found to be recognized from vocal signals correspond to those universally inferred from facial expressions of emotions (11). This finding supports theories proposing that these emotions are psychological universals and constitute a set of basic, evolved functions that are shared by all humans. In addition, we demonstrate that several positive emotions are recognized within, but not across, cultural groups, which may suggest that affiliative social signals are shared primarily with in-group members.

Materials and Methods
Stimuli. The English stimuli were taken from a previously validated set of nonverbal vocalizations of negative and positive emotions. The stimulus set comprised 10 tokens of each of nine emotions: achievement, amusement, anger, disgust, fear, sensual pleasure, relief, sadness, and surprise, based on demonstrations that all of these categories can be reliably recognized from nonverbal vocalizations by English listeners (13). The sounds were produced in an anechoic chamber by two male and two female native English speakers, and the stimulus set was normalized for peak amplitude. The actors were presented with a brief scenario for each emotion and asked to produce the kind of vocalization they would make if they felt like the person in the story. Briefly, achievement sounds were cheers, amusement sounds were laughs, anger sounds were growls, disgust sounds were retches, fear sounds were screams, sensual pleasure sounds were moans, relief sounds were sighs, sad sounds were sobs, and surprise sounds were sharp inhalations. Further details on the acoustic properties of the English sounds can be found in ref. 13.

The Himba stimuli were recorded from five male and six female Himba adults, using a procedure equivalent to that of the English stimulus production, and were also matched for peak amplitude. The researchers (D.A.S. and F.E.) excluded poor exemplars, as it was not possible to run multiple-choice pilot tests of the stimuli with Himba participants. Stimuli containing speech or extensive background noise were excluded, as were multiple, similar stimuli by the same speaker. Examples of the sounds can be found as Audio S1 and Audio S2.
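Matching a stimulus set for peak amplitude can be expressed in a few lines. The sketch below assumes WAV files and the numpy and soundfile packages; the file names are placeholders, not the study's materials:

    # Sketch: normalize a set of recordings to a common peak amplitude.
    # File names are placeholders; soundfile is an assumed I/O choice.
    import numpy as np
    import soundfile as sf

    def normalize_peak(in_path, out_path, peak=0.9):
        audio, sr = sf.read(in_path)       # float samples in [-1.0, 1.0]
        max_abs = np.max(np.abs(audio))
        if max_abs > 0:                    # leave pure silence untouched
            audio = audio * (peak / max_abs)
        sf.write(out_path, audio, sr)

    for name in ["cheer_01.wav", "sigh_03.wav"]:   # placeholder file names
        normalize_peak(name, "normalized_" + name)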
Participants. The total sample consisted of two English and two Himba groups. The English sample that heard the English stimuli consisted of 25 native English speakers (10 male, 15 female; mean age 28.7 years), and the sample that heard the Himba sounds consisted of 26 native English speakers (11 male, 15 female; mean age 29.0 years). Twenty-nine participants (13 male, 16 female) from Himba settlements in northern Namibia comprised the Himba sample who heard the English sounds, and another group of 29 participants (13 male, 16 female) heard the Himba sounds. The Himba do not have a system for measuring age, but no children or very old adults were included in the study. Informed consent was given by all participants.

Design and Procedure. We used an adapted version of a task employed in previous cross-cultural research on the recognition of emotional facial expressions (11). In the original task, a participant heard a story about a person feeling in a particular way and was then asked to choose which of three emotional facial expressions fit with the story. This task is suitable for use with a preliterate population, as it requires no ability to read, unlike the forced-choice format using multiple labels that is common in emotion-perception studies. Furthermore, the current task is particularly well suited to cross-cultural research, as it does not rely on the precise translation of emotion terms because it includes additional information in the stories. The original task included three response alternatives on each trial, with all three stimuli presented simultaneously. However, as sounds necessarily extend over time, the response alternatives in the current task had to be presented sequentially. Thus, participants were required to remember the other response alternatives as they were listening to the current response option. To avoid overloading the participants' working memory, the number of response alternatives in the current study was reduced to two.

The English participants were tested in the presence of an experimenter; the Himba participants were tested in the presence of two experimenters and one translator. For each emotion, the participant listened to a short prerecorded emotion story describing a scenario that would elicit that emotion (Audio S3 and Audio S4). After each story, the participant was asked how the person was feeling to ensure that they had understood the story. If necessary, participants could hear the story again. No participants failed to identify the intended emotion from any of the stories, although some individuals needed to hear a story more than once to understand the emotion. The emotion stories used with the Himba participants were developed together with a local person with extensive knowledge of the culture of the Himba people, who also acted as a translator during testing. The emotion stories used with the English participants were matched as closely as possible
to the Himba stories, but adapted to be easily understood by English participants. The stories were played over headphones from recordings, spoken in a neutral tone of voice by a male native speaker of each language (the Himba local language Otji-Herero and English). Once they had understood the story, the participant was played two sounds over headphones. Stimulus presentation was controlled by the experimenter pressing two computer mice in turn, each playing one of the sounds (see Fig. 1). A subgroup of the Himba participants listening to Himba sounds performed a slightly altered version of the task, where the stimuli were played without the use of computer mice, but the procedure was identical to that of the other participants in all other respects. The participant was asked which one was the kind of sound that the person in the story would make. They were allowed to hear the stimuli as many times as they needed to make a decision. Participants indicated their choice on each trial by pointing to the computer mouse that had produced the sound appropriate for the story (see Fig. 1), and the experimenter inputted their response into the computer. Throughout testing, the experimenters and the translator were naive to which response was correct and which stimulus the participant was hearing. Speaker gender was constant within any trial, with participants hearing two male and two female trials for each emotion. Thus, all participants completed four trials for each of the nine emotions, resulting in a total of 36 trials. The target stimulus was of the same emotion as the story, and the distractor was varied in terms of both valence and difficulty, such that for any emotion participants heard four types of distractors: maximally and minimally easy of the same valence, and maximally and minimally easy of the opposite valence, based on confusion data from a previous study (13). Which mouse was correct on any trial, as well as the order of stories, stimulus gender, distractor type, and whether the target was first or second, was pseudorandomized for each participant. Stimulus presentation was controlled using the PsychToolbox (32) for Matlab, running on a laptop computer.
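One way to realize this pseudorandomization is sketched below. The emotion categories and distractor types come from the description above, whereas the field names and the exact counterbalancing of gender and target position are illustrative assumptions:

    # Sketch: build a pseudorandomized 36-trial list (9 emotions x 4 trials each).
    # The crossing of gender, target position, and distractor type is illustrative.
    import random

    EMOTIONS = ["achievement", "amusement", "anger", "disgust", "fear",
                "sensual pleasure", "relief", "sadness", "surprise"]
    DISTRACTOR_TYPES = [("same valence", "easy"), ("same valence", "hard"),
                        ("opposite valence", "easy"), ("opposite valence", "hard")]

    def make_trials(seed=None):
        rng = random.Random(seed)
        trials = []
        for emotion in EMOTIONS:
            order = DISTRACTOR_TYPES[:]
            rng.shuffle(order)                 # vary distractor type across trials
            for i, (valence, difficulty) in enumerate(order):
                trials.append({
                    "emotion": emotion,
                    "distractor": (valence, difficulty),
                    "target_first": i % 2 == 0,        # target 1st on half the trials
                    "speaker_gender": "male" if i < 2 else "female",  # 2 male, 2 female
                })
        rng.shuffle(trials)                    # randomize story order per participant
        return trials

    assert len(make_trials(seed=1)) == 36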
ACKNOWLEDGMENTS. We thank David Matsumoto for discussions. This research was funded by grants from the Economic and Social Research Council, the University College London Graduate School Research Project Fund, and the University of London Central Research Fund (to D.A.S.), a contribution toward travel costs from the University College London Department of Psychology, and a grant from the Wellcome Trust (to S.K.S.).

1. Hauser MD, Chomsky N, Fitch WT (2002) The faculty of language: What is it, who has it, and how did it evolve? Science 298:1569-1579.
2. Coulson M (2004) Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. J Nonverbal Behav 28:117-139.
3. Sauter DA, Scott SK (2007) More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motiv Emot 31:192-199.
4. Ekman P, Friesen WV, Ancoli S (1980) Facial signs of emotional experience. J Pers Soc Psychol 39:1125-1134.
5. Mehler J, Bertoncini J, Barriere M, Jassik-Gerschenfeld D (1978) Infant recognition of mother's voice. Perception 7:491-497.
6. Scherer KR, Banse R, Wallbott HG (2001) Emotion inferences from vocal expression correlate across languages and cultures. J Cross Cult Psychol 32:76-92.
7. Pell MD, Monetta L, Paulmann S, Kotz SA (2009) Recognizing emotions in a foreign language. J Nonverbal Behav 33:107-120.
8. Bryant G, Barrett H (2008) Vocal emotion recognition across disparate cultures. J Cogn Cult 8:135-148.
9. Scott SK, Sauter D, McGettigan C (2009) Brain mechanisms for processing perceived emotional vocalizations in humans. Handbook of Mammalian Vocalizations, ed Brudzynski S (Academic Press, Oxford), pp 187-198.
10. Norenzayan A, Heine SJ (2005) Psychological universals: What are they and how can we know? Psychol Bull 131:763-784.
11. Ekman P, Sorenson ER, Friesen WV (1969) Pan-cultural elements in facial displays of emotion. Science 164:86-88.
12. Elfenbein HA, Ambady N (2002) On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychol Bull 128:203-235.
13. Sauter D, Calder AJ, Eisner F, Scott SK Perceptual cues in non-verbal vocal expressions of emotion. Q J Exp Psychol, in press.
14. Ekman P (1992) An argument for basic emotions. Cogn Emotion 6:169-200.
15. Rosenberg EL, Ekman P (2005) Coherence between expressive and experiential systems in emotion. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression, eds Ekman P, Rosenberg EL (Oxford University Press, New York), pp 63-85.
16. Susskind JM, et al. (2008) Expressing fear enhances sensory acquisition. Nat Neurosci 11:843-850.
17. Waller BM, Cray JJ, Burrows AM (2008) Selection for universal facial emotion. Emotion 8:435-439.
18. Ekman P (1999) Facial expressions. Handbook of Cognition and Emotion, eds Dalgleish T, Power M (Wiley, New York), pp 301-320.
19. Parr LA, Waller BM, Heintz M (2008) Facial expression categorization by chimpanzees using standardized stimuli. Emotion 8:216-231.
20. Seyfarth RM (2003) Meaning and emotion in animal vocalizations. Ann N Y Acad Sci 1000:32-55.
21. Matsumoto D, Yoo S, Hirayama S, Petrova G (2005) Development and initial validation of a measure of display rules: The Display Rule Assessment Inventory (DRAI). Emotion 5:23-40.
22. Shiota M, Keltner D, Campos B, Hertenstein M (2004) Positive emotion and the regulation of interpersonal relationships. Emotion Regulation, eds Phillipot P, Feldman R (Erlbaum, Mahwah, NJ), pp 127-156.
23. Tracy J, Robins R (2008) The nonverbal expression of pride: Evidence for cross-cultural recognition. J Pers Soc Psychol 94:516-530.
24. Simon-Thomas E, Keltner D, Sauter D, Sinicropi-Yao L, Abramson A The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, in press.
25. Ross MD, Owren MJ, Zimmermann E (2009) Reconstructing the evolution of laughter in great apes and humans. Curr Biol 19:1106-1111.
26. Knutson B, Burgdorf J, Panksepp J (2002) Ultrasonic vocalizations as indices of affective states in rats. Psychol Bull 128:961-977.
27. Ruch W, Ekman P (2001) The expressive pattern of laughter. Emotion, Qualia and Consciousness, ed Kaszniak A (World Scientific Publisher, Tokyo).
28. Panksepp J (2004) Rough-and-tumble play: The brain sources of joy. Affective Neuroscience: The Foundations of Human and Animal Emotions, ed Panksepp J (Oxford University Press, New York), pp 280-299.
29. van Hooff JARAM (1972) A comparative approach to the phylogeny of laughter and smiling. Nonverbal Communication, ed Hinde RA (Cambridge University Press, Cambridge, England).
30. Sauter D More than happy: The need for disentangling positive emotions. Curr Dir Psychol Sci, in press.
31. Romney AK, Moore CC, Rusch CD (1997) Cultural universals: Measuring the semantic structure of emotion terms in English and Japanese. Proc Natl Acad Sci USA 94:5489-5494.
32. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433-436.
