University of California Press is collaborating with JSTOR to digitize, preserve and extend access to Music
Perception: An Interdisciplinary Journal.
E. GLENN SCHELLENBERG
University of Toronto
ANIA M. KRYSCIAK
University of Windsor
R. JANE CAMPBELL
University of Toronto
that had a stable pitch (e.g., do) in the scale of the phrase. Phrases that ended with a long and stable tone sounded particularly complete, but no more so (or less so) than one would predict by simply adding the relative contributions of duration and stability.
In a related study (Monahan & Carterette, 1985), listeners heard several pairs of melodies and made a similarity rating for each pair. As attending to pitch or rhythm increased, attention directed toward the other factor tended to decrease. In other words, limited attentional resources appeared to be allocated in an additive manner based on the pitch and rhythmic information available in the melodies. Similarly, when Pitt and Monahan (1987) required listeners to judge the similarity of pairs of tone sequences with different polyrhythms, differences in pitch between standard and comparison sequences reduced perceived similarities, but there was no interaction between the pitch and rhythm manipulations.
Other researchers emphasize the integration of pitch and rhythm, which is consistent with an interactive view of music perception (Cooper & Meyer, 1960; Jones, 1987, 1993; Lerdahl & Jackendoff, 1983). Because important pitches often occur at points of rhythmic importance, listeners may perceive these events as unified instead of perceiving pitch independently from rhythm. In Jones' (1987, 1993) concept of joint accent structure, pitch and rhythmic patterns jointly determine a higher-order accent structure (see also Monahan, 1993). Pitch accents (e.g., changes in pitch contour, large intervals, stable tones) are said to interact with rhythmic accents (e.g., longer tones, tones after pauses) in their influence on listeners' perception of melodies. Jones and her colleagues (Jones & Ralston, 1991; Jones, Summerell, & Marshburn, 1987; Kidd, Boltz, & Jones, 1984) tested this hypothesis by examining listeners' ability to recognize a previously heard melody. Listeners found it difficult to ignore rhythm variations when asked to distinguish "target" from "decoy" melodies on the basis of pitch. Similarly, pitch changes that occur at points of rhythmic emphasis are more easily identified than changes at other points (Jones, Boltz, & Kidd, 1982). Several other studies have also reported that rhythmic variations affect the perception of pitch (e.g., Boltz, 1989a, 1989b, 1991, 1993; Deutsch, 1980; Schmuckler & Boltz, 1994). Conversely, variations in pitch can affect the perception of tone durations (Boltz, 1992) and the perceived duration of a silent period between tones (Crowder & Neath, 1995).
In sum, effects of pitch and rhythm appear to be additive in some experimental contexts but interactive in the majority of cases. This apparent discrepancy could be methodological in origin. One possibility is that pitch and rhythm are independent in tasks involving subjective ratings (Monahan & Carterette, 1985; Palmer & Krumhansl, 1987a, 1987b; Pitt & Monahan, 1987), but interactive in more objective tasks, such as those that measure recall or recognition (e.g., Boltz, 1991; Deutsch, 1980; Jones et al., 1987; but see Monahan, Kendall, & Carterette, 1987).
Method
PARTICIPANTS
APPARATUS
STIMULUS MATERIALS
Our initial set of stimuli contained 30 short melodies, 10 for each of the three target emotions (happy, sad, and scary). Each melody was less than 30 s in duration and consisted of four complete measures. Some of the melodies were excerpts taken from folk-song collections that were unlikely to be familiar to the listeners (e.g., Eastern European collections). Other melodies were used previously by Dolgin and Adelson (1990) or composed specifically for the present experiment. Although tones in the stimulus melodies varied in pitch and duration, amplitudes were held constant for all tones. Thus, "rhythm" was operationally defined simply as variations in tone durations, which is consistent with Jones' (1993) definition of "rhythm proper." For each of the three emotional categories, a timbre was selected from the set of factory-installed timbres on the Roland JV-90, with experimenters' judgments determining which was the most evocative of the corresponding emotion. The chosen timbres were a classical guitar (User A32) for happy excerpts, a violin section (W-Exp A24) for sad excerpts, and an organ (Preset C41) for scary excerpts. Careful selection of the melodies and timbres ensured that each stimulus had the structural properties characteristic of its specific target emotion (i.e., the happy melodies were quick in tempo with staccato articulation, the sad melodies were slow and legato, and the scary melodies had a broad pitch range).
Twenty listeners were assigned to a task designed to find two exemplars for each of the three target emotions. The 30 stimulus melodies were presented in a different random order
for each listener. Listeners made a single emotionality rating for each of the melodies. For
the 10 melodies from the happy set, they rated how "happy" the melody sounded on a scale
from 1 (not at all happy) to 7 (extremely happy). Likewise, they made sadness ratings for
the 10 sad melodies and scary ratings for the 10 scary melodies. For each of the emotional
categories, the two melodies with the highest average ratings were selected as the exem-
plars. The melodies are illustrated in musical notation in Figure 1. To ensure that the se-
lected melodies were unequivocally indicative of their respective emotions without any over-
lap, an additional seven listeners heard the six exemplars in random order and were asked
which of the three labels (happy, sad, or scary) they would assign to each melody. They were
informed that more than one label could be selected. Interrater agreement on the appropri-
ate emotional label proved to be 100%, with complete consensus that each exemplar con-
veyed its corresponding emotion and no others. Although such agreement does not guaran-
tee that our target emotions were the "best" descriptors of the selected melodies, it confirms
that our target emotions and our stimulus melodies had a one-to-one correspondence.
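The exemplar-selection rule described above (the two melodies with the highest mean rating per category) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the toy ratings are invented.

```python
def select_exemplars(ratings, n=2):
    """ratings: dict mapping melody id -> list of 1-7 emotionality
    ratings from the listeners. Returns the n melodies with the
    highest mean rating, best first."""
    means = {melody: sum(r) / len(r) for melody, r in ratings.items()}
    return sorted(means, key=means.get, reverse=True)[:n]

# Toy data: three hypothetical "happy" candidates rated by four listeners.
happy = {"H1": [6, 7, 6, 5], "H2": [3, 4, 4, 3], "H3": [7, 6, 7, 6]}
print(select_exemplars(happy))  # ['H3', 'H1']
```

With 20 listeners and 10 candidates per category, the same rule yields the two exemplars per emotion used in the remainder of the experiment.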
The exemplars were subsequently manipulated to produce three altered versions. Thus,
there were eight stimulus sequences (4 versions of each of the 2 exemplars) for each of the
three emotions, for a total of 24 melodies. In the pitch and rhythm condition, the melodies
were simply the exemplars with pitch and duration values corresponding to those specified
in Figure 1. In the pitch-only condition, the pitch of the component tones of the melody was
the same as in the exemplars, but individual tone durations were altered such that all tones
were of equal duration yet the overall duration of the exemplar remained unchanged (e.g.,
if an exemplar had 20 component tones and a total duration of 15 s, each tone in the pitch-
only condition would be 0.75 s). In the rhythm-only condition, tone durations were the
same as those of the exemplars but pitches were altered so that all tones had the same pitch
(i.e., the median pitch of the exemplar). Finally, in the baseline condition, component tones had uniform pitch and duration. Each of the modified excerpts had the same overall duration and timbre as the exemplars, as well as the same constant amplitudes. Thus, the conditions formed a 2 × 2 factorial design, with two levels of pitch (varying or equal) and two levels of rhythm (varying or equal), with all other factors (i.e., timbre, overall length, amplitude, tempo) held constant. Because the baseline conditions were monotonic with a regular rhythm, they provided a baseline measure of the perceived emotionality of the selected timbres.
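The pitch-only, rhythm-only, and baseline transformations can be sketched in code. The melody representation (pitch, duration) and the function names below are illustrative assumptions, not taken from the study; only the arithmetic (equal durations preserving total length, median pitch) follows the description above.

```python
from statistics import median

def pitch_only(melody):
    """Keep the pitches but equalize tone durations so the total
    duration of the excerpt is unchanged (e.g., 20 tones spanning
    15 s become 0.75 s each)."""
    total = sum(duration for _, duration in melody)
    equal = total / len(melody)
    return [(pitch, equal) for pitch, _ in melody]

def rhythm_only(melody):
    """Keep the durations but flatten every tone to the median pitch."""
    med = median(pitch for pitch, _ in melody)
    return [(med, duration) for _, duration in melody]

def baseline(melody):
    """Uniform pitch and uniform duration (monotonic, regular rhythm)."""
    return rhythm_only(pitch_only(melody))

# A toy 4-tone "melody" as (MIDI pitch, duration in seconds) pairs.
example = [(60, 0.5), (64, 1.0), (67, 0.25), (64, 0.25)]
print(pitch_only(example))   # all durations become 0.5 s
print(rhythm_only(example))  # all pitches become the median, 64
```

Applying both transformations yields the baseline condition, so the four versions of each exemplar form the 2 × 2 design described above.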
PROCEDURE
Each of 30 listeners (15 men, 15 women) heard all 24 stimulus sequences blocked ac-
cording to emotional category. The order of the blocks was randomized separately for each
listener, as were the stimuli within blocks. Each emotional category was announced imme-
diately before the corresponding eight trials. After each trial, listeners were asked to rate
how happy/sad/scary the musical sequence was on a scale from 1 (not at all happy/sad/
scary) to 7 (extremely happy/sad/scary). Listeners made a single "happy" rating for each of
the eight happy stimuli, a "sad" rating for the sad stimuli, and a "scary" rating for the scary
stimuli. In order to make ratings as normally distributed as possible, listeners were urged to
use the full range of the scale, reserving ratings of 1 and 7 for extreme cases.
Results
HAPPY MELODIES
Mean ratings from each of the four versions of Happy 1 and Happy 2
are illustrated in Figure 2. An analysis of variance (ANOVA) with three
repeated measures (pitch, rhythm, and exemplar) revealed a significant three-way interaction, F(1, 29) = 16.60, p < .001, which rendered the other main
Fig. 2. Mean ratings as a function of the pitch and rhythm manipulations for Happy 1 and
Happy 2. Error bars are standard errors.
SAD MELODIES
Mean ratings from the four testing conditions for Sad 1 and Sad 2 are
illustrated in Figure 3. An ANOVA revealed that the main effect of the
exemplar variable was nonsignificant and that it did not interact with the
pitch or rhythm variables. In other words, patterns of responding were
similar across the two sad exemplars. A significant main effect of pitch
revealed that sadness ratings were higher for excerpts with pitch changes
than for excerpts without pitch changes, F(1, 29) = 37.37, p < .001. All
other main effects and interactions were nonsignificant. In short, rhythm
had no effect on listeners' ratings. Indeed, Figure 3 shows that ratings were
virtually identical for excerpts with and without variation in tone dura-
tions. Thus, for sad exemplars, only the pitch manipulation provided a
reliable explanation of the variation in emotionality ratings.
SCARY MELODIES
Mean ratings from each of the four conditions for both of the scary
exemplars are illustrated in Figure 4. An ANOVA confirmed that response
patterns were similar across the two exemplars (no main effect of exem-
plar, no interactions involving exemplar). The main effect of pitch was significant, F(1, 29) = 109.07, p < .001, as were the main effect of rhythm, F(1, 29) = 6.13, p = .019, and the interaction between pitch and rhythm, F(1, 29) = 8.26, p = .008. In general, scariness ratings were higher for
excerpts with pitch changes than for excerpts without pitch changes and
for excerpts with rhythm changes than for other excerpts. Nonetheless, the
interaction between pitch and rhythm necessitated clarification of the main
effects. Follow-up tests of simple effects revealed that scariness ratings were
higher for the two pitch-and-rhythm conditions than for the pitch-only
conditions, F(1, 29) = 16.26, p < .001, and that the rhythm-only and baseline
conditions did not differ. In other words, rhythmic variation affected lis-
teners' ratings only when differences in pitch were also present in the ex-
cerpts.
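The logic of this simple-effects follow-up, comparing rhythm's effect separately at each level of pitch, can be illustrated with the cell means of a 2 × 2 design. The numbers below are invented for illustration only; they are not the study's data.

```python
# Hypothetical cell means (1-7 scale) for a 2 x 2 design:
# keys are (pitch level, rhythm level).
means = {
    ("pitch", "rhythm"):       5.75,  # pitch-and-rhythm condition
    ("pitch", "no_rhythm"):    5.0,   # pitch-only condition
    ("no_pitch", "rhythm"):    2.5,   # rhythm-only condition
    ("no_pitch", "no_rhythm"): 2.5,   # baseline condition
}

# Simple effect of rhythm at each level of pitch:
rhythm_given_pitch = means[("pitch", "rhythm")] - means[("pitch", "no_rhythm")]
rhythm_given_no_pitch = means[("no_pitch", "rhythm")] - means[("no_pitch", "no_rhythm")]

# The interaction is the difference between the two simple effects:
interaction = rhythm_given_pitch - rhythm_given_no_pitch
print(rhythm_given_pitch)     # 0.75: rhythm matters when pitch varies
print(rhythm_given_no_pitch)  # 0.0: rhythm has no effect otherwise
print(interaction)            # 0.75: nonzero -> pitch x rhythm interaction
```

This is the pattern the scary-melody data showed: a rhythm effect in the presence of pitch variation, none in its absence, hence the significant interaction qualifying the main effects.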
Fig. 4. Mean ratings as a function of the pitch and rhythm manipulations for Scary 1 and
Scary 2. Error bars are standard errors.
Discussion
References
Bachorowski, J.-A. (1999). Vocal expression and perception of emotion. Current Directions in Psychological Science, 8, 53-57.
Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception, 17, 43-64.
Boltz, M. (1989a). Perceiving the end: Effects of tonal relationships on melodic completion. Journal of Experimental Psychology: Human Perception and Performance, 15, 749-761.
Boltz, M. (1989b). Rhythm and "good endings": Effects of temporal structure on tonality judgments. Perception & Psychophysics, 46, 9-17.
Boltz, M. (1991). Some structural determinants of melody recall. Memory & Cognition, 19, 239-251.
Boltz, M. (1992). The remembering of auditory event durations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 938-956.
Schellenberg, E. G., & Trehub, S. E. (1999). Culture-general and culture-specific factors in the discrimination of melodies. Journal of Experimental Child Psychology, 74, in press.
Scherer, K. R., & Oshinsky, J. S. (1977). Cue utilization in emotional attribution from auditory stimuli. Motivation and Emotion, 1, 331-346.
Schmuckler, M. A., & Boltz, M. G. (1994). Harmonic and rhythmic influences on musical expectancy. Perception & Psychophysics, 56, 313-325.
Sloboda, J. A. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19, 110-120.
Sloboda, J. A. (1992). Empirical studies of emotional response to music. In M. R. Jones & S. Holleran (Eds.), Cognitive bases of musical communication (pp. 33-46). Washington, DC: American Psychological Association.
Stratton, V. N., & Zalanowski, A. H. (1991). The effects of music and cognition on mood. Psychology of Music, 19, 121-127.
Terwogt, M. M., & Van Grinsven, F. (1991). Musical expression of mood states. Psychology of Music, 19, 99-109.
Thompson, W. F. (1994). Sensitivity to combinations of musical parameters: Pitch with duration, and pitch pattern with durational pattern. Perception & Psychophysics, 56, 363-374.
Thompson, W. F., & Robitaille, B. (1992). Can composers express emotions through music? Empirical Studies of the Arts, 10, 79-89.
Trainor, L. J., & Trehub, S. E. (1992). A comparison of infants' and adults' sensitivity to Western musical structure. Journal of Experimental Psychology: Human Perception and Performance, 18, 394-402.
Trainor, L. J., & Trehub, S. E. (1994). Key membership and implied harmony in Western tonal music: Developmental perspectives. Perception & Psychophysics, 56, 125-132.
Trehub, S. E., Schellenberg, E. G., & Kamenetsky, S. B. (1999). Infants' and adults' perception of scale structure. Journal of Experimental Psychology: Human Perception and Performance, 25, 965-975.
Wedin, L. (1972). A multidimensional study of perceptual-emotional qualities in music. Scandinavian Journal of Psychology, 13, 241-257.
White, B. W. (1960). Recognition of distorted melodies. American Journal of Psychology, 73, 100-107.
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151-175.
Zajonc, R. B. (1984). On the primacy of affect. American Psychologist, 39, 117-123.