8

BEYOND EMOTION

Multi-Sensory Responses to Musical Expression

Giovanni De Poli, Maddalena Murari, Sergio Canazza, Antonio Roda, and Emery Schubert


Introduction

Music experience raises many complex questions that can be analyzed from different points of view. A piece of music can interest a musicologist because of its historical and stylistic relations with the cultural environment in which it has been composed. At the same time, another listener can be attracted to the same piece of music because of the emotions or the sensations induced by that music. And for the same person, the experience can vary depending on the musical aspects on which she/he is focused. Another way of analyzing music experience is by asking listeners to describe the music using verbal labels. This method of gathering information about music experiences is often used, with many published papers reporting participants' ratings of various adjectives, for example, while auditioning a piece of music. Many of these studies focus on the emotional experience (for a detailed review, see the volume edited by Sloboda & Juslin, 2010).

In addition to emotions, music can suggest or induce in listeners a variety of images and sensations (see, e.g., Canazza, De Poli, Roda, & Vidolin, 2003). Very often, these studies use verbal labels for describing the subjects' experience when they listen to music. However, verbal descriptions only partially capture the musical experience. Non-verbal associations can be useful in allowing the assessment of non-verbal experiences, too. Such an approach may help overcome some of the limits of communication necessarily imposed by language.

We start from the assumption that the numerous ways of describing music experience do not exclude one another, but rather may be complementary points of view on the same complex experience. The different metaphors can be suitable for different applications and contexts: for example, emotional aspects of musical experience can be useful in affective human-computer interfaces (Pantic & Rothkrantz, 2003); sensory-motor aspects can be useful in direct interaction with and manipulation of contents (Luciani, Florens, & Castagne, 2005); psychological aspects of cross-modal experience may reveal mechanisms concerned with synaesthesia (Simner et al., 2006); and so on. The emerging view is that experiences of music are multi-faceted, consisting of different kinds of possibly interacting qualities, aspects, and dimensions.


A New Taxonomy of Music Expressiveness

As pointed out by Schubert and Fabian (2014), the terms expression and expressiveness can be used interchangeably, but expressiveness can be used in a more specific sense to mean musical expression rather than emotional and other forms of expression. In particular, Schubert and Fabian presented a categorization (taxonomy) in which expressiveness can be conceptualized as having two kinds of content, each further divided into two layers. The first important distinction is the differentiation between emotional expression (in either layer) and a self-contained, undefined expressiveness, which is called musical expressiveness or intransitive expressiveness. While emotional expressive content can be communicated and enhanced by the expressive intentions of the performer (Juslin, 1997; Juslin & Laukka, 2001), musical expressiveness is characterized by an intransitive quality and a particular interest in performance practice (stylishness), without needing to express a specific emotion. Regarding the two layers denoting each kind of content, we can distinguish between one based on structural elements of the music that are determined by a musical score/composer (the compositional layer) and another based on features that can be added by the performer (the performance layer), such as a nuance in the performance that exaggerates a compositional element (e.g., overdotting a dotted note, extending the length of a suspension or appoggiatura), whether for the purpose of reflecting a performance practice or as a means of personal expression.

We propose that this taxonomy can be further developed by including a third kind of expressiveness content, sensorimotor expressiveness, which can be applied both to the compositional layer and to the performance layer. Sensorimotor expressiveness refers to aspects not covered by musical and emotional expressiveness, since it investigates the domain of cross-modal associations. The starting point is that metaphorical descriptions may offer possibilities to explain and understand aspects of the musical experience that are otherwise ineffable. According to Lakoff (1993, p. 245), metaphor is not simply a literary device but a basic structure of understanding, and "metaphorical descriptions of music are grounded in embodied experience." Similarly, Zbikowski (2008, p. 511) considers metaphor "a kind of cross-domain mapping" and considers music a conceptual domain that can be drawn into such mappings. As pointed out by Eitan in two empirical studies, metaphor structures our understanding of music (Eitan & Granot, 2006), and the metaphors used to characterize musical relationships reflect the influence of culture (Eitan & Timmers, 2010).

Our experiments focus on the development and validation of new methods to investigate music perception without verbal measures. These methods are based on sensory qualities, kinetics/energy space, and action metaphors. In the section "Sensorimotor expressiveness described through verbal labels," we present the results of studies based on the description of sensorimotor expressiveness through verbal labels. Then, the sections "Sensorimotor expressiveness described through action-reaction metaphors" and "Sensorimotor expressiveness described through cross-modal associations" focus on two recently introduced methods for studying sensorimotor expressiveness without using verbal labels, based on action-reaction metaphors and cross-modal associations. Finally, we discuss the results in a broader context and hypothesize new perspectives for future research in this domain.


Sensorimotor Expressiveness Described Through Verbal Labels

In the context of music performance studies, the term expressive intention was introduced by Gabrielsson and co-workers (Gabrielsson & Juslin, 1996), with reference to the various emotional characters that a musician tries to express when performing a music score. Traditionally, expressive intentions in music refer to the emotional expression content introduced in the previous section. Indeed, Gabrielsson's work is focused exclusively on emotions, more precisely on discrete emotions, because listeners find it quite natural to attach general emotional labels to pieces of music. Nevertheless, listeners also find it quite natural to use other kinds of labels to describe pieces of music. As an example, we investigated the ranking of the most frequent tags freely attached to pieces of music by the users of Last.fm, a widely used online music repository and streaming resource (http://www.last.fm). The company has also released a public application programming interface (API; http://www.last.fm/api), which allows everyone to access the metadata of the repository and to count the frequency of the user-defined tags. Our investigation revealed that happy and sad are frequently used and are classified
in the 65th and 62nd places, respectively (out of a total of about 500,000 listed tags), but other basic emotions such as fear, tender, and solemn were not so common, ranking beyond the 2,000th place. At the same time, many other words are frequently used to tag music pieces: excluding the tags directly related to a particular musical genre (e.g., rock, jazz, or classical), we found frequently used adjectives such as hard (35th), cool (36th), heavy (54th), soft (106th), and dark (121st).
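As an illustration of how such a tag ranking can be obtained, the sketch below queries the public Last.fm web service for its most-used tags and looks up the rank of a few adjectives. It is a minimal example, not the script used in our investigation: the chart.getTopTags method and the ws.audioscrobbler.com endpoint are taken from the public API documentation, YOUR_API_KEY is a placeholder, and the exact JSON field names should be checked against the current API docs.

```python
import requests

API_ROOT = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from http://www.last.fm/api

def top_tags(pages=10, per_page=500):
    """Fetch the most-used Last.fm tags, ordered by popularity."""
    tags = []
    for page in range(1, pages + 1):
        resp = requests.get(API_ROOT, params={
            "method": "chart.getTopTags",   # public API method listing top tags
            "api_key": API_KEY,
            "format": "json",
            "limit": per_page,
            "page": page,
        })
        resp.raise_for_status()
        # Field names ("tags" -> "tag" -> "name") assumed from the JSON layout
        # of the chart.* methods; verify against the live API documentation.
        tags.extend(t["name"].lower() for t in resp.json()["tags"]["tag"])
    return tags

if __name__ == "__main__":
    ranked = top_tags()
    for adjective in ["happy", "sad", "hard", "cool", "heavy", "soft", "dark"]:
        rank = ranked.index(adjective) + 1 if adjective in ranked else None
        print(adjective, "->", rank if rank else f"not in the first {len(ranked)} tags")
```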
In several different experiments (e.g., Baraldi, De Poli, & Roda, 2006; Canazza, De Poli, Rinaldin, & Vidolin, 1997), De Poli and colleagues extended the concept of the performer's expressive intentions beyond emotions by including labels with sensorial connotations. Consequently, studies about expressive intentions in music can also refer to the third kind of content (see the section "A new taxonomy of music expressiveness"), namely sensorimotor expressiveness. The authors asked several musicians to play a short piece of music several times, changing each performance in order to convey a different expressive intention, suggested by one of the adjectives: bright, dark, light, heavy, soft, hard, and natural. The natural performance was defined as "an interpretation in accordance with musical practice" but without any particular expressive intention; it provided a useful baseline against which to measure the performance variability due to the other expressive intentions. The experiments involved different musical instruments (clarinet, violin, saxophone, and piano) and musical pieces, mainly from the Western classical repertoire but also some popular and jazz melodies. Several groups of listeners were then asked to rate the recorded performances along a set of continuous verbal scales denoting qualities the listeners felt the stimulus expressed.

The results showed that the performers' intentions and the listeners' impressions generally agreed. The listeners' ratings were analyzed by means of multivariate techniques: two quite distinct expressive factors were observed, one related to the kinetics parameters and the other related to the energy of the pieces. From a musical point of view, the first factor contrasts rapid tempo (bright and light pieces) with slow (heavy pieces) or moderate (soft pieces) tempo. The second factor, on the other hand, is mainly correlated with energy-related parameters such as intensity. The performances inspired by different expressive intentions are thus organized in a two-dimensional space, named the Kinetics-Energy space, which presents a point of departure from the commonly used two-dimensional emotion space (Eerola & Vuoskoski, 2011; Schubert, 1999). As will be shown in the next sections, other experiments carried out by De Poli and co-workers showed that, under certain conditions, music pieces are arranged by listeners along these two axes, which have consequently been interpreted as two basic dimensions for describing sensorimotor expressiveness.
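The sketch below illustrates the kind of multivariate reduction described above: listener ratings on several verbal scales are reduced to two latent factors that can be read as a kinetics axis and an energy axis. It is a schematic reconstruction using synthetic random data and scikit-learn's FactorAnalysis, not the original analysis pipeline; the scale names and the number of listeners are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative rating matrix: 60 listeners x 7 bipolar adjective scales.
# Real data would come from the listening experiment; here it is random.
scales = ["bright", "dark", "light", "heavy", "soft", "hard", "natural"]
rng = np.random.default_rng(0)
ratings = rng.normal(size=(60, len(scales)))

# Extract two latent factors, interpreted as Kinetics and Energy.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)   # listener positions in the 2-D space
loadings = fa.components_            # how each scale loads on the two factors

for name, (kinetics, energy) in zip(scales, loadings.T):
    print(f"{name:8s}  kinetics={kinetics:+.2f}  energy={energy:+.2f}")
```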

Sensorimotor Expressiveness Described Through Action-Reaction Metaphors

There is a long tradition of studying gestures associated with music (Godøy & Leman, 2009). Embodied approaches to music (e.g., Franek, van Noorden, & Režný, 2014; Leman, 2007) consider body movements to be effective in communicating emotions, expressive intentions, feelings, and ideas, both for the performer and for the listener. When listening to music, people can mirror the expressive aspects of the music in the form of actual movement patterns (see also Frances & Bruchonschweitzer, 1983). Thus, examining the connections between music and movements can be fruitful in understanding expressiveness from a non-verbal point of view. Free and spontaneous human movements in response to music were studied by Camurri, Castellano, Ricchetti, and Volpe (2006), to investigate their correlations with the emotional characterization of music excerpts, and by Amelynck, Maes, Martens, and Leman (2014), who analyzed the commonality and individuality of these movements. Clynes also examined micromovements in response to music via a hardware interface referred to as a Sentograph. The Sentograph captures finger movements from which one can deduce expressive properties in terms of predictable finger movement patterns, and Clynes spent much of his work interpreting these patterns in terms of emotional expressions (Clynes, 1973; Clynes & Nettheim, 1982). However, his work is a clear precursor to modern conceptions of responding to music non-verbally.

In order to understand sensorimotor expressiveness without using verbal labels, we focused on the associations between music and human movements from the point of view of an action-reaction paradigm, by which we mean interactions with other objects through contact, pressure, and (usually small amounts of) motion. This kind of interaction can be simulated by haptic devices, i.e., devices that measure a user's fingertip position, pressure, and motion and exert a precisely controlled force vector in response to the fingertip. Such devices enable users to interact with and feel a wide variety of virtual objects (Massie & Salisbury, 1994).

To investigate this kind of interaction, we started from the basic mechanical concepts of Friction, Elasticity, and Inertia, grouped henceforth into the so-called FEI metaphor (De Poli, Mion, & Roda, 2009). From a physical perspective, ideal Friction dissipates energy and acts as a scaling factor for the input action, while ideal Elasticity and Inertia store energy: in particular, Inertia stores kinetic energy and opposes changes in movement, while Elasticity stores potential energy and opposes changes in force. To control these parameters, we designed three elementary haptic reference stimuli synthesized by means of a Phantom Omni haptic device (http://www.sensable.com/haptic-phantom-omni.htm). The tip of the haptic device, when moved by the user's fingers, produces respectively a frictional, elastic, or inertial force reaction. From the user's point of view, these reactions give the impression of interacting with real objects (e.g., the elastic force response simulates the behavior of a spring, whereas the frictional response simulates the movement of an object in a viscous fluid).
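A minimal numerical sketch of the three ideal force laws behind the FEI metaphor is given below. It is not the control code of the Phantom Omni device; it only shows, under the usual textbook idealizations, how a frictional, an elastic, and an inertial reaction force could be computed from the state of the device tip (the coefficients are arbitrary illustrative values).

```python
def fei_force(mode, position, velocity, acceleration,
              k=80.0, b=3.5, m=0.4):
    """Ideal 1-D reaction force for the three FEI reference stimuli.

    mode         -- "friction", "elasticity", or "inertia"
    position     -- tip displacement from the rest point (m)
    velocity     -- tip velocity (m/s)
    acceleration -- tip acceleration (m/s^2)
    k, b, m      -- illustrative stiffness, damping, and mass coefficients
    """
    if mode == "friction":
        return -b * velocity       # viscous damping: dissipates energy, scales the input motion
    if mode == "elasticity":
        return -k * position       # spring: stores potential energy, opposes displacement
    if mode == "inertia":
        return -m * acceleration   # added mass: stores kinetic energy, opposes changes in movement
    raise ValueError(f"unknown mode: {mode}")

# Example: the same gesture feels different under each reaction law.
for mode in ("friction", "elasticity", "inertia"):
    print(mode, fei_force(mode, position=0.02, velocity=0.15, acceleration=1.2))
```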
The objectives of our experiments (De Poli et al., 2009) were to (a) determine whether the FEI metaphor is able to describe some aspects of the musical experience at both the compositional and the performance level, and (b) provide evidence for relationships between the FEI metaphor and other metaphors, in particular the emotional and the sensorial ones. In the experiments, participants were asked to listen to various musical excerpts and to associate each one with one of the three haptic responses. While most research on music expression deals with emotions, expressive intention is a broader concept that includes emotions as well as sensorimotor aspects. Thus, we studied expressions from both the emotional and the sensorial domain, in order to derive a more general organization (and possibly interpretation) of similarities among the expressive intentions of the two domains. As stimuli, we used simple melodies played by a performer according to the categories Happy-Sad (high and low valence) and Angry-Calm (high and low arousal) for the affective domain, and according to the categories Hard-Soft (high and low energy) and Light-Heavy (high and low kinetics) for the Kinetics-Energy space. The results of the experiments showed that subjects are able to associate each haptic response with differentially expressive musical stimuli. In particular, statistically significant relationships were found between Elasticity and the happy/light stimuli, between Inertia and the sad/calm stimuli, and between Friction and the hard/angry/heavy stimuli. Haptic responses are therefore reliable tools that use physical interaction to describe expressive characteristics of music. Furthermore, the same clusters were also found in a previous study (Mion & De Poli, 2008) in which acoustic cues were used to group music performances inspired by different emotional and sensory expressive intentions. This result supports the hypothesis that the three clusters correspond to different expressive categories that can be characterized from both an acoustic and a perceptual point of view.
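One simple way to test the kind of music-to-haptic association reported above is a contingency-table analysis: counts of which haptic response participants chose for each category of stimulus, checked with a chi-square test of independence. The sketch below uses invented counts purely to show the procedure; it is not the statistical analysis actually reported in De Poli et al. (2009).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: stimulus categories; columns: haptic response chosen by participants.
# The counts are fabricated for illustration only.
categories = ["happy/light", "sad/calm", "hard/angry/heavy"]
responses = ["Friction", "Elasticity", "Inertia"]
counts = np.array([
    [ 6, 22,  8],   # happy/light       -> mostly Elasticity
    [ 5,  7, 24],   # sad/calm          -> mostly Inertia
    [21,  9,  6],   # hard/angry/heavy  -> mostly Friction
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value indicates that the choice of haptic response is not
# independent of the expressive category of the musical stimulus.
```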
Moreover, the FEI metaphor is not strictly tied to musical contents, so the results could also be applied to other, non-musical contexts. For example, the action-reaction metaphor approach can be explored in artistic forms based on gestural dynamics, such as painting or dancing (Luciani, 2009).


Sensorimotor Expressiveness Described Through Cross-Modal Associations

As is evident from the previous sections, the capacity of music to elicit a rich number of often shared sensations and images can be attributed in part to cross-modal correspondences, which may be intrinsic attributes of our perceptual system's organization (Marks, 1978). Many studies have investigated the
relationship between music and colors, tastes, vision, and odors, suggesting that people can exhibit consistent cross-modal responses in different sensory modalities (Ward & Mattingley, 2006).

According to Spence (2011), cross-modal correspondences between different combinations of modal and amodal features are mediated not only by innate factors but also by experience and by statistical factors, so they need to be considered alongside semantic, spatiotemporal, developmental, and affective dimensions. In particular, Spence distinguishes among structural, statistical, and semantic cross-modal correspondences and hypothesizes that structural correspondences may bear fruitful comparison with synaesthesia, since they can reflect an unlearned aspect of perception, whereas other correspondences may reflect the internalization of the statistics of the natural environment or may appear semantically mediated. For example, Ward and Simner (2003) have reported an unusual case of developmental synaesthesia in which speech sounds induce an involuntary sensation of taste that is subjectively located in the mouth. The findings from this case study have important implications for the understanding of the mechanisms of developmental synaesthesia, since they emphasize the role of learning and conceptual knowledge.

In this complex and multifaceted research field, sensory scales, which we present as non-verbal bipolar scales taken from the visual, gustatory, haptic, and tactile domains, can represent a reliable tool for investigating the relationship between sensory perception, emotions, and musical stimuli. Starting from the semantic differential approach, sensory scales were adopted by Da Pos et al. (2013), who hypothesized that asking a participant to express a judgment on the warmness or coldness of a stimulus is different from asking her/him to express this evaluation by directly immersing her/his hands into warm or cold water. If we consider, for example, the word blue from the semantic point of view, its meaning can be connected to sadness (I feel blue), to a musical style (blues), or to the color blue. On the other hand, the large and problematic number of semantic associations can be reduced if we select those sensorial concepts that are less ambiguous. We propose that sensory scales are indicative of two kinds of cognitive processes: (a) the synaesthetic (or categorical correspondence), in which the connection between hearing sounds as colors, textures, etc. is direct and (usually) involuntary, and (b) the metaphoric, in which sensory modality terms are employed to enrich verbal descriptions.

The sensory domains covered by our sensory scales were visual, tactile, haptic, and gustatory. From the visual domain we adopted the scales maluma-takete, the pair of pseudo-words invented by Kohler (1929) in order to show the non-arbitrary mapping between speech sounds and the visual shape of objects, and orange-blue, the visualization of the two colors on two cards. These two colors are not perfectly complementary, but they were chosen as representatives of presumably warm and cold colors, respectively.

From the tactile domain, we selected the scales smooth-rough (sheets of sandpaper in different grit sizes), hard-soft (a piece of wood and a cylinder of polystyrene foam), cold-warm (one cup of cold water and one cup of warm water), and tense-relaxed (iron wire covered with cloth and a rubber band covered with cloth).

From the gustatory domain, we selected the scale bitter-sweet: a bitter and a sweet substance. In an experiment conducted by Crisinel and Spence (2010), cross-modal associations between tastes and words were investigated. Results showed that sugar was associated with the soft-sounding nonsense words "maluma" and "lula," but not with "bobolo" and "bouba," suggesting that the presence of the consonants "m" and "l" was an important factor in participants' choices. Our experiments replicated the coupling of maluma with sweet and with soft.

The haptic scale we employed was heavy-light: a dark plastic bottle full of liquid and the same bottle without liquid. The dull color of the bottle did not allow participants to discriminate between the full and the empty bottle.
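For readers who want to reuse the design, the set of bipolar sensory scales and the physical reference stimuli described above can be summarized in a small data structure, as in the sketch below. This is only an illustrative restatement of the apparatus; the structure and naming are our own and not part of any published toolkit.

```python
# Bipolar sensory scales used in the experiments, grouped by sensory domain,
# each with the physical reference stimuli presented at the two poles.
SENSORY_SCALES = {
    "visual": [
        ("maluma", "takete", "Kohler's pseudo-word shapes"),
        ("orange", "blue", "two colored cards (warm vs. cold color)"),
    ],
    "tactile": [
        ("smooth", "rough", "sandpaper sheets of different grit"),
        ("hard", "soft", "piece of wood vs. polystyrene foam cylinder"),
        ("cold", "warm", "cup of cold water vs. cup of warm water"),
        ("tense", "relaxed", "cloth-covered iron wire vs. cloth-covered rubber band"),
    ],
    "gustatory": [
        ("bitter", "sweet", "a bitter and a sweet substance"),
    ],
    "haptic": [
        ("heavy", "light", "identical dark bottles, full vs. empty"),
    ],
}

for domain, scales in SENSORY_SCALES.items():
    for pole_a, pole_b, stimuli in scales:
        print(f"{domain:9s} {pole_a}-{pole_b}: {stimuli}")
```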
In Murari, Roda, Canazza, De Poli, and Da Pos (2015), we designed two experiments (one with musical excerpts in major tonality and one with musical excerpts in minor tonality) in order to investigate the difference between Osgood's semantic differential (Osgood, Suci, & Tannenbaum, 1957) and a new sensory differential based on the direct perception of continuous measures. The same procedure was applied by Da Pos and Pietto (2010) in the field of normal and iridescent colors and in the study of unique hues. Musical excerpts in minor and major tonality had already been categorized in experiments by Bigand, Vieillard, Madurell, Marozeau, and Dacquet (2005) by means of multidimensional scaling and by Roda, Canazza, and De Poli (2014) by means of cluster analysis. The employment of sensory scales was considered a way of obtaining more refined results, able to go beyond music emotions.

In Murari et al. (2015), we planned to directly compare sensory scales and their verbal equivalents in a setting in which participants were divided randomly into two groups to avoid order effects. One group completed the set of sensory scales first (SV group) and the other group completed the set of verbal scales first (VS group). The order in which the rating task was completed (verbal scales first versus sensory scales first) had an impact on the ratings. In particular, the results of factor analyses performed on the SV and VS groups separately revealed a relatively stronger juxtaposition between sensory and verbal scales in the SV group, suggesting that sensory scales can be influenced by verbal labels and that, conversely, the absence of those labels still makes an important contribution to explaining variance in responses. Moreover, a factor analysis performed on the two groups of participants combined also suggested an even more accentuated juxtaposition of sensory and verbal scales, since they grouped into different factors. The results showed that asking a participant to evaluate a piece of music by immersing her/his hands into warm or cold water is different from verbally stating whether that piece of music is warm or cold. The Chopin excerpt (Prelude no. 22), for example, was rated cold only in the sensory scale condition, and the Mozart excerpt (Adagio from Concerto K 488) was rated bitter only in the sensory scale condition.

Such results confirm that sensory scales allow an alternative understanding of the musical experience, since they index different aspects of the musical experience that are not accessible to natural language. The consistency of the results confirms that the sensory differential tool created for our experiments is reliable and provides a novel way of allowing qualities of music to be differentiated. Moreover, sensory scales can provide fruitful results in the fields of cross-modal associations and synaesthesia.


Discussion and Conclusions

We propose that the taxonomy of expressiveness in music proposed by Schubert and Fabian (2014) should be augmented by including sensorimotor content in addition to emotional and musical content. The different experiments described in this chapter show that new insights can be gleaned by approaching the musical experience from an action-based metaphor perspective, both haptic and sensory. Such approaches have the advantage of being applicable to different contexts (color perception, artistic forms based on gestural dynamics, such as painting or dancing), of moving beyond the verbalization of emotions, and of mitigating some of the other problems associated with the use of verbal labels.

As pointed out by Karbusicky (1987, p. 433), "thought in music occurs primarily in asemantical shapes and formulas." Any attempt to interpret these shapes and formulas through language or linguistic theory would ultimately fail to capture the substance of musical thought. Karbusicky's notion of purely musical metaphor is exemplified by the fact that disparate musical material can be brought and combined together to generate new meaning in infinite ways. In particular, the sensory differential provided by sensory scales, as compared to Osgood's semantic differential, may serve as a useful tool to offer a more complete perspective not only on the meanings of affect terms but also to present an alternative, low-dimensional response space and to investigate perceptual aspects of synaesthesia and cross-modality.

With regard to the way in which ratings of musical excerpts can be collected, it would be interesting to develop a blended-modality instrument in which the collection of emotional responses could be augmented with sensory scales continuously in real time as the music unfolds, as well as at the end of the piece (for a review of how this has been done for emotion in music, see Schubert, 2010). Data can be
collected by sensorimotor devices, and the comparison of results deriving from the different methods can inspire new and deeper insights into essential responses to music (see, e.g., Nielsen, 1987).
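A blended-modality instrument of this kind could be prototyped as a simple logger that samples a listener's current position on one or more continuous scales at a fixed rate while the excerpt plays, in the spirit of the continuous self-report methods reviewed by Schubert (2010). The sketch below is a deliberately generic skeleton: read_scale_position stands in for whatever slider, haptic handle, or sensory control is attached, and is an assumed placeholder rather than an existing API.

```python
import csv
import time

def read_scale_position():
    """Placeholder for the attached input device (slider, haptic handle, etc.).

    In a real instrument this would return the current value of each
    continuous scale; here it simply returns neutral values.
    """
    return {"valence": 0.0, "arousal": 0.0, "warm_cold": 0.0}

def record_continuous_ratings(duration_s, sample_hz=4, out_path="ratings.csv"):
    """Sample the continuous scales at a fixed rate while the music plays."""
    interval = 1.0 / sample_hz
    start = time.monotonic()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "valence", "arousal", "warm_cold"])
        while (elapsed := time.monotonic() - start) < duration_s:
            sample = read_scale_position()
            writer.writerow([round(elapsed, 3), sample["valence"],
                             sample["arousal"], sample["warm_cold"]])
            time.sleep(interval)

# Example: log 30 seconds of responses at 4 samples per second.
record_continuous_ratings(duration_s=30, sample_hz=4)
```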
From a technological point of view, understanding the metaphors used to describe the different aspects of music (affective, sensorial, or physical) supports the development of applications for interaction with musical contents: the relations between acoustic cues and metaphors can be used to develop systems for the automatic generation of metadata, and action-based metaphors can be employed to enhance the interfaces of devices that use gestural interaction with musical content, such as portable music players or musical video games. Moreover, research on sensory scales can foster the development of innovative interfaces for browsing digital audio collections. These new devices will allow users to interrelate in a spontaneous and even expressive way with interactive multimedia systems, relying on a set of advanced musical and gestural content processing tools adapted to the profiles of individual users, adopting descriptions of perceived qualities, or making expressive movements.

From a theoretical perspective, there is much research to be done. For example, one of the studies we reported suggests that sensory scales can produce meanings that are distinct from their verbal label equivalents (selecting an actual color rather than the verbal label for the color, or feeling a warm bottle rather than reading the word warm). But some words are more likely to map onto sensory experience better than others, and some sensory experiences are likely to map onto words better than others. Identifying when these mappings do and do not occur will provide important data for understanding the cognitive processing of the stimulus under consideration, and for further developing the tools used to measure the experiences. Verbal scales are easy and efficient to deploy, whereas haptic and sensory scales are more labor and resource intensive, so while investigations of music from a multi-sensory perspective continue, we also need to continue efforts to determine the efficiency of the new measures. Furthermore, we need to find out how important the multi-sensory experiences are in the sum total of the musical experience. Do people care if music tastes bitter or sweet (sensory) versus sad or joyous (emotional)?

It should also be noted that music provides an important kind of stimulus for investigating multi-sensory experience, because unlike language it is not restricted to denotative meaning, and unlike some more mundane stimuli (colors, single tones, etc.) music listening is considered by many to be a highly meaningful, all-encompassing activity.

The discovery of new ways of reporting musical experience promises more ground to be gained in music psychology and philosophy, in particular because we are able to gradually hone in on the otherwise ineffable aspects of music, which some may consider to be a holy grail and even untouchable (see, e.g., Dickie, 1997).

References

Amelynck, D., Maes, P.-J., Martens, J.-P., & Leman, M. (2014). Expressive body movement responses to music are coherent, consistent, and low dimensional. IEEE Transactions on Cybernetics, 44(12), 2288-2301.
Baraldi, F. B., De Poli, G., & Roda, A. (2006). Communicating expressive intentions with a single piano note. Journal of New Music Research, 35(3), 197-210.
Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition and Emotion, 19(8), 1113-1139.
Camurri, A., Castellano, G., Ricchetti, M., & Volpe, G. (2006). Subject interfaces: Measuring bodily activation during an emotional experience of music. In S. Gibet, N. Courty, & J. F. Kamp (Eds.), Gesture in human-computer interaction and simulation (LNCS vol. 3881, pp. 268-279). Berlin, Germany: Springer.
Canazza, S., De Poli, G., Rinaldin, S., & Vidolin, A. (1997). Sonological analysis of clarinet expressivity. In M. Leman (Ed.), Music, gestalt, and computing: Studies in cognitive and systematic musicology (pp. 431-440). Berlin, Germany: Springer.
Canazza, S., De Poli, G., Roda, A., & Vidolin, A. (2003). An abstract control space for communication of sensory expressive intentions in music performance. Journal of New Music Research, 32(3), 281-294.
Clynes, M. (1973). Sentics: Biocybernetics of emotion communication. Annals of the New York Academy of Sciences, 220, 57-88.
Clynes, M., & Nettheim, N. (1982). The living quality of music: Neurobiologic basis of communicating feeling. In M. Clynes (Ed.), Music, mind, and brain (pp. 47-82). New York, NY: Plenum.
Crisinel, A. S., & Spence, C. (2010). As bitter as a trombone: Synesthetic correspondences in nonsynesthetes between tastes/flavors and musical notes. Attention, Perception, & Psychophysics, 72(7), 1994-2002.
Da Pos, O., Fiorentin, P., Scroccaro, A., Filippi, A., Fontana, C., Gardin, E., & Guerra, D. (2013). Subjective assessment of unique colours as a tool to evaluate colour differences in different adaptation conditions. In Proceedings of the CIE Centenary Conference "Towards a New Century of Light" (p. 488). Paris, France: Commission Internationale de l'Eclairage.
Da Pos, O., & Pietro, M. (2010). Highlighting the quality of light sources. In P. Hanselaer & F. Leloup (Eds.), Proceedings of the 2nd CIE Expert Symposium on Appearance: When Appearance Meets Lighting (pp. 161-163). Ghent, Belgium: Commission Internationale de l'Eclairage.
De Poli, G., Mion, L., & Roda, A. (2009). Toward an action based metaphor for gestural interaction with musical contents. Journal of New Music Research, 38(3), 295-307.
Dickie, G. (1997). Introduction to aesthetics: An analytic approach. Oxford, UK: Oxford University Press.
Eerola, T., & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music. Psychology of Music, 39, 18-49.
Eitan, Z., & Granot, R. Y. (2006). How music moves: Musical parameters and listeners' images of motion. Music Perception, 23(3), 221-247.
Eitan, Z., & Timmers, R. (2010). Beethoven's last piano sonata and those who follow crocodiles: Cross-domain mappings of auditory pitch in a musical context. Cognition, 114(3), 405-422.
Frances, R., & Bruchonschweitzer, M. (1983). Musical expression and body expression. Perceptual and Motor Skills, 57(2), 587-595.
Franek, M., van Noorden, L., & Režný, L. (2014). Tempo and walking speed with music in the urban context. Frontiers in Psychology, 5, 1361.
Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24(1), 68-91.
Godøy, R. I., & Leman, M. (2009). Musical gestures: Sound, movement, and meaning. New York, NY: Routledge.
Juslin, P. N. (1997). Emotional communication in music performance: A functionalist perspective and some data. Music Perception, 14(4), 383-418.
Juslin, P. N., & Laukka, P. (2001). Impact of intended emotion intensity on cue utilization and decoding accuracy in vocal expression of emotion. Emotion, 1(4), 381-412.
Karbusicky, V. (1987). "Signification" in music: A metaphor? In T. Sebeok & J. Umiker-Sebeok (Eds.), The semiotic web 1986 (pp. 430-444). Berlin, Germany: De Gruyter Mouton.
Kohler, W. (1929). Gestalt psychology (2nd ed.). New York, NY: Liveright.
Lakoff, G. (1993). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and thought (2nd ed., pp. 202-251). Cambridge, UK: Cambridge University Press.
Leman, M. (2007). Embodied music cognition and mediation technology. Cambridge, MA: MIT Press.
Luciani, A. (2009). Enaction and music: Anticipating a new alliance between dynamic instrumental arts. Journal of New Music Research, 38(3), 211-214.
Luciani, A., Florens, J.-L., & Castagne, N. (2005). From action to sound: A challenging perspective for haptics. In A. Bicchi & M. Bergamasco (Eds.), World Haptics Conference 2005 (p. 5). Pisa, Italy: IEEE Computer Society.
Marks, L. (1978). The unity of the senses: Interrelations among the modalities. New York, NY: Academic Press.
Massie, T. H., & Salisbury, J. K. (1994). The phantom haptic interface: A device for probing virtual objects. Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 55(1), 295-300.
Mion, L., & De Poli, G. (2008). Score-independent audio features for description of music expression. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 458-466.
Murari, M., Roda, A., Canazza, S., De Poli, G., & Da Pos, O. (2015). Is Vivaldi smooth and takete? Non-verbal sensory scales for describing music qualities. Journal of New Music Research, 44(4), 359-372.
Murari, M., Roda, A., Da Pos, O., Schubert, E., Canazza, S., & De Poli, G. (2015). Mozart is still blue: A comparison of sensory and verbal scales to describe qualities in music. In J. Timoney & T. Lysaght (Eds.), Proceedings of the Sound and Music Computing Conference (pp. 351-357). Maynooth, Ireland: Maynooth University.
Nielsen, F. V. (1987). Musical tension and related concepts. In T. A. Sebeok & J. Umiker-Sebeok (Eds.), The semiotic web '86 (pp. 491-513). Berlin, Germany: Mouton de Gruyter.
Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Urbana, IL: University of Illinois Press.
Pantic, M., & Rothkrantz, L. J. (2003). Toward an affect-sensitive multimodal human-computer interaction. Proceedings of the IEEE, 91(9), 1370-1390.
Roda, A., Canazza, S., & De Poli, G. (2014). Clustering affective qualities of classical music: Beyond the valence-arousal plane. IEEE Transactions on Affective Computing, 5(4), 364-376.
Schubert, E. (1999). Measuring emotion continuously: Validity and reliability of the two-dimensional emotion-space. Australian Journal of Psychology, 51, 154-165.
Schubert, E. (2010). Continuous self-report methods. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 223-253). Oxford, UK: Oxford University Press.
Schubert, E., & Fabian, D. (2014). A taxonomy of listeners' judgements of expressiveness in music performance. In D. Fabian, R. Timmers, & E. Schubert (Eds.), Expressiveness in music performance: Empirical approaches across styles and cultures (pp. 283-303). Oxford, UK: Oxford University Press.
Simner, J., Mulvenna, C., Sagiv, N., Tsakanikos, E., Witherby, S., Fraser, C., Scott, K., & Ward, J. (2006). Synaesthesia: The prevalence of atypical cross-modal experiences. Perception, 35(8), 1024-1033.
Sloboda, J. A., & Juslin, P. N. (2010). Handbook of music and emotion: Theory, research, applications. Oxford, UK: Oxford University Press.
Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.
Ward, J., & Mattingley, J. B. (2006). Synaesthesia: An overview of contemporary findings and controversies. Cortex, 42(2), 129-136.
Ward, J., & Simner, J. (2003). Lexical-gustatory synaesthesia: Linguistic and conceptual factors. Cognition, 89(3), 237-261.
Zbikowski, L. M. (2008). Metaphor and music. In R. W. Gibbs (Ed.), The Cambridge handbook of metaphor and thought (pp. 502-524). Cambridge, UK: Cambridge University Press.


9

CONVEYING EXPRESSIVITY AND INDIVIDUALITY IN KEYBOARD PERFORMANCE

Bruno Gingras


Introduction

In a musical performance, multiple layers of information are conveyed, not only acoustically but also visually in the context of a live performance (Tsay, 2013; Vines, Krumhansl, Wanderley, & Levitin, 2006). Considering only the auditory domain, listeners specifically receive information about the performer's individual style, the musical structure, and the emotional content of the piece. From a musicological standpoint, the performers' role in transmitting a particular "auditory representation" of a musical score, imbued both with their individual performance style and with their personal vision of the piece, is especially relevant (Gingras, McAdams, & Schubert, 2010; Repp, 1992). However, most research in performance science has focused on analyzing the expressive devices performers employ to communicate these aspects, whereas the issue of whether listeners can recognize performers' stylistic individuality and expressive aims has not been analyzed to the same extent (Gabrielsson & Juslin, 1996). Here, I will focus on two aspects in particular: (1) how stylistic individuality is conveyed by performers and how it is recognized by listeners, and (2) the link between the musical structure as notated in the score, the use of expressive devices in performance, notably tempo manipulation, and the perception of musical tension by listeners. To be sure, elements of artistic individuality can also be found in the manner in which performers choose to modulate the musical tension, but, for the sake of simplicity, I will focus on each of these two aspects separately.

The theoretical model underpinning my research approach is based on Kendall and Carterette's (1990) tripartite model of musical communication, which posits that the performer's interpretation of a musical composition is influenced by the musical structure as notated in the score and that the listener's aesthetic response to a performance is affected both by the notated structure and by the performer's interpretation. Although Kendall and Carterette's model represents musical communication as a more or less linear process (from the composer to the listener through the performer), it does incorporate some aspects related to music-based interaction. Notably, Kendall and Carterette emphasize that "musical communication is concerned not merely with a single frame of reference, but includes the complex relationships among composer, performer, and listener" (p. 130).

A large amount of empirical performance research has been conducted on the piano, an instrument with which most Western listeners are familiar and which possesses a wide expressive range in terms of dynamics. While this research has yielded a number of groundbreaking findings, it also raises the question of whether listeners can recognize performers' individual characteristics, as well as their expressive intentions, in the case of relatively unfamiliar instruments or instruments for which the