
Music Cognition: Learning and Processing

Martin Rohrmeier (mr397@cam.ac.uk)
Centre for Music & Science, Faculty of Music,
University of Cambridge, Cambridge CB3 9DP, UK

Patrick Rebuschat (per6@georgetown.edu)
Department of Linguistics, Georgetown University,
Washington, DC 20008 USA

Henkjan Honing (honing@uva.nl)
Institute for Logic, Language and Computation,
Cognitive Science Center Amsterdam,
University of Amsterdam, Netherlands

Psyche Loui (ploui@bidmc.harvard.edu)
Department of Neurology, Beth Israel Deaconess Medical
Center and Harvard Medical School,
Boston, MA 02215 USA

Geraint Wiggins (g.wiggins@gold.ac.uk)
Marcus T. Pearce (marcus.pearce@ucl.ac.uk)
Daniel Müllensiefen (d.mullensiefen@gold.ac.uk)
Centre for Cognition, Computation and Culture,
Goldsmiths, University of London, UK

Keywords: Music cognition; music perception; learning; pattern processing; computational modeling; cognitive neuroscience; developmental psychology.

Goals and Scope


In recent years, the study of music perception and cognition has witnessed an enormous growth of interest. Music cognition is an intrinsically interdisciplinary subject that combines insights and research methods from many of the cognitive sciences. This trend is clearly reflected, for example, in contributions to special issues on music published by journals such as Nature, Cognition, Nature Neuroscience, and Connection Science.
This symposium focuses on music learning and
processing and will feature perspectives from cognitive
neuroscience, experimental psychology, computational
modeling, linguistics, and musicology. The objective is to
bring together researchers from different research fields and
traditions in order to discuss the progress made, and future
directions to take, in the interdisciplinary study of music
cognition. The symposium also aims to illustrate how
closely the area of music cognition is linked to topics and
debates in the cognitive sciences.

Henkjan Honing
Beat induction: uniquely human and music
specific?
Beat induction, the process by which a regular isochronous pattern (the beat, or tactus) is activated while listening to music, is considered a fundamental human trait that, arguably, played a decisive role in the origin of music. Theorists are divided on whether this ability is innate or learned. In a recent study we were able to show that newborn infants develop expectations for the onset of rhythmic cycles (the downbeat), even when it is not marked by stress or other distinguishing spectral features. Omitting the downbeat elicits brain activity associated with the violation of sensory expectations. Our results thus strongly support the view that beat perception is innate and, as such, lend further support to the hypothesis that beat induction is a fundamental human trait that is specific to music.
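To make the omission design concrete, the following Python sketch generates stimulus sequences in the style of such a paradigm. The event labels, cycle length, and omission rate are illustrative assumptions, not the materials or parameters used in the study.

```python
import random

# Hypothetical stimulus generator for a downbeat-omission design: isochronous
# cycles of strokes in which position 0 (the downbeat) is occasionally silent.
# Cycle length and omission rate are illustrative, not the study's values.
def make_stimulus(n_cycles=100, cycle_len=8, p_omit=0.1, seed=0):
    rng = random.Random(seed)
    cycles = []
    for _ in range(n_cycles):
        cycle = ["stroke"] * cycle_len
        if rng.random() < p_omit:
            cycle[0] = "rest"  # deviant: downbeat omitted, timing grid unchanged
        cycles.append(cycle)
    return cycles

# ERP analysis would compare responses time-locked to "rest" downbeats
# (deviants) against responses to "stroke" downbeats (standards).
stimulus = make_stimulus()
print(sum(c[0] == "rest" for c in stimulus), "deviant cycles")
```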

Martin Rohrmeier and Patrick Rebuschat
Implicit learning in music and language
Interaction with music, in the form of listening, performing, or dancing, involves numerous cognitive processes that rely on knowledge of complex musical structures. Musical knowledge, like linguistic knowledge, is largely implicit: it is the result of a learning process that occurs incidentally and without awareness of the knowledge that was acquired (Perruchet, 2008). For example, nonmusicians possess extensive musical knowledge even though they are unable to articulate it. Similarly, in the case of language, all native speakers have developed a complex syntactic system, despite being unable to provide accurate accounts of the rules that govern their native language. Although it is generally accepted that implicit learning constitutes a key process in musical enculturation and language acquisition, the implicit learning paradigm has not found extensive application in the study of these acquisition processes. This paper compares the functions of implicit knowledge in the perception of music and language and presents the results of artificial grammar learning experiments that directly compared the implicit learning of music and language for grammars of different structural complexity. These studies exemplify how the methodology of implicit learning research can be applied to investigating the acquisition of musical and linguistic knowledge.
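As a concrete illustration of the artificial grammar learning methodology, the sketch below generates grammatical training strings from a small finite-state grammar. The grammar, symbols, and parameters shown are hypothetical; the grammars actually used in these experiments are not reproduced here, and the terminal symbols could equally be syllables, tones, or chords.

```python
import random

# A small hypothetical finite-state grammar: each state maps to a list of
# (symbol, next_state) transitions; next_state None marks an accepting exit.
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("R", 3)],
    3: [("X", None), ("R", None)],
}

def generate_string(grammar, rng, max_len=10):
    """Walk random transitions from state 0; retry until an accepting exit."""
    while True:
        state, symbols = 0, []
        while state is not None and len(symbols) < max_len:
            symbol, state = rng.choice(grammar[state])
            symbols.append(symbol)
        if state is None:  # reached an accepting exit: string is grammatical
            return "".join(symbols)

rng = random.Random(1)
training_strings = [generate_string(GRAMMAR, rng) for _ in range(20)]
print(training_strings)
# At test, participants judge the grammaticality of novel strings without
# having been told that any generating rules exist.
```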

Psyche Loui
Neural correlates of new music learning
One of the major questions in cognitive science concerns how learning occurs in the human brain. In the musical domain, learning includes the formation and refinement of expectations for musical structures, spanning melodic, harmonic, and rhythmic processes. Artificial musical systems are useful for studying learning because they offer a controlled environment, free of pre-existing associations, in which one can systematically observe the brain as it reacts to auditory exposure. Here we present a neurophysiological study on the rapid probabilistic learning of a novel musical system (Loui et al., 2009). Participants listened to different combinations of tones from a previously unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 frequency ratios, notably different from the 2:1 frequency ratios of existing musical systems (Krumhansl, 1987). Event-related brain potentials elicited by improbable sounds in the new musical system showed the emergence, over a one-hour period, of physiological signatures known to index sound expectation in standard Western music (Koelsch et al., 2000). These indices of expectation learning were eliminated when sound patterns were played equiprobably, and they covaried with individual behavioral differences in learning. The results suggest that humans use a generalized, probability-based auditory perceptual learning mechanism to process novel sound patterns in music.
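For concreteness, the equal-tempered Bohlen-Pierce scale divides the 3:1 frequency ratio (the "tritave") into 13 equal steps, in contrast to the 12 equal divisions of the 2:1 octave in Western tuning. A minimal sketch follows; the base frequency is chosen for illustration, not taken from the study.

```python
# Equal-tempered Bohlen-Pierce scale: 13 equal divisions of the 3:1 "tritave",
# in place of the 12 equal divisions of the 2:1 octave in Western tuning.
BASE_HZ = 220.0  # illustrative base frequency; not a value from the study

def bp_frequency(step, base=BASE_HZ):
    """Frequency of scale step n: base * 3 ** (n / 13)."""
    return base * 3 ** (step / 13)

# One full tritave: step 13 lands exactly on 3 * BASE_HZ (660 Hz here),
# so successive steps are related by a constant ratio of 3 ** (1 / 13).
scale = [bp_frequency(n) for n in range(14)]
print([round(f, 1) for f in scale])
```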

Geraint A. Wiggins, Marcus T. Pearce, and Daniel Müllensiefen
Computer Modelling of Music Cognition
We present a computer model of music cognition based on machine learning and information-theoretic analysis of the outputs of a predictive model, which we have extended into a preliminary partial model of a musical creative system (Wiggins et al., 2009). The model works at the level of note percepts and their sequence, deriving transition probabilities for a complex Markov model based both on the melody currently being "heard" and on a long-term memory of previously "heard" pieces, represented in a multiple-viewpoint framework (Pearce, 2005). The model is capable of predicting human pitch expectations with up to 91% correlation; it also predicts human judgements of pitch expectedness with up to 90% correlation (Pearce et al., in review). We have extended it, using information theory, to predict segmentation in melody, which it does to a level comparable with programmed, rule-based systems, with an F1 of .61 or better (Pearce et al., 2008), and to make predictions that coincide with the independent analysis of an expert musicologist. Recent work has shown statistically significant neural correlates of the expectedness values, suggesting that melody processing involves motor cortex. We believe that this is the first computational model of musical melody to work on the basis of learning (rather than being programmed), to predict an important aspect of the human experience of pitch, to then predict a different phenomenon (segmentation) merely by further analysis of its output, and to show correlation with brain activity. We therefore propose that the model is a good candidate, at a certain level of abstraction, for a mathematical understanding of the brain's processing of perceived event sequences.
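To illustrate the information-theoretic idea at the core of such a model, the sketch below trains a first-order Markov model over pitch sequences and computes the information content (negative log probability) of a note given its predecessor. This is a deliberately minimal stand-in: the actual system uses variable-order models over a multiple-viewpoint representation (Pearce, 2005), and the corpus below is toy data.

```python
import math
from collections import defaultdict

def train(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, curr in zip(melody, melody[1:]):
            counts[prev][curr] += 1
    return counts

def information_content(counts, prev, curr, alpha=1.0, vocab=128):
    """-log2 P(curr | prev), with add-alpha smoothing for unseen events."""
    total = sum(counts[prev].values())
    p = (counts[prev][curr] + alpha) / (total + alpha * vocab)
    return -math.log2(p)

# Unexpected notes carry high information content; peaks in information
# content over a melody are the model's candidate segment boundaries.
corpus = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60]]  # toy MIDI pitch lists
model = train(corpus)
print(information_content(model, 64, 65))  # seen transition -> lower IC
print(information_content(model, 64, 71))  # unseen transition -> higher IC
```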

Moderators:
Martin Rohrmeier and Patrick Rebuschat
References
Honing, H. (2006). Computational modeling of music
cognition: a case study on model selection. Music
Perception, 23(5), 365-376.
Honing, H., Ladinig, O., Winkler, I., & Háden, G. (2008).
Probing emergent meter perception in adults and newborns
using event-related brain potentials: a pilot study.
Proceedings of the Neurosciences & Music III Conference.
Montreal: McGill University.
Jackendoff, R., & Lerdahl, F. (2006). The capacity for music: What is it, and what's special about it? Cognition, 100, 33-72.
Koelsch, S., Gunter, T., Friederici, A. D., & Schröger, E.
(2000). Brain indices of music processing: "nonmusicians"
are musical. Journal of Cognitive Neuroscience, 12(3),
520-541.
Krumhansl, C. L. (1987). General properties of musical pitch
systems: Some psychological considerations. In J.
Sundberg (Ed.), Harmony and Tonality (Vol. 54).
Stockholm: Royal Swedish Academy of Music.
Loui, P., Wu, E., Wessel, D., & Knight, R. T. (2009). A generalized mechanism for perception of pitch patterns. Journal of Neuroscience, 29(3), in press.
Pearce, M. T. (2005). The Construction and Evaluation of
Statistical Models of Melodic Structure in Music
Perception and Composition. PhD thesis, Department of
Computing, City University, London, UK.
Pearce, M. T., Müllensiefen, D., & Wiggins, G. A. (2008). Melodic segmentation: A new method and a framework for model comparison. In Proceedings of ISMIR 2008.
Perruchet, P. (2008). Implicit learning. In H. L. Roediger III (Ed.), Cognitive psychology of memory (Vol. 2 of Learning and memory: A comprehensive reference, J. Byrne, Series Ed., pp. 597-621). Oxford: Elsevier.
Wiggins, G. A., Pearce, M. T., & Müllensiefen, D. (2009). Computational modelling of music cognition and musical creativity. In R. Dean (Ed.), Oxford Handbook of Computer Music and Digital Sound Culture. Oxford University Press. In press.
Winkler, I., Háden, G., Ladinig, O., Sziller, I., & Honing, H.
(in press). Newborn infants detect the beat in music.
Proceedings of the National Academy of Sciences.
[http://www.musiccognition.nl/newborns/]
