

A cybernetic model approach for free jazz improvisations

Jonas Braasch
Graduate Program in Acoustics, School of Architecture, Rensselaer Polytechnic Institute, Troy, New York, USA

Abstract
Purpose – The purpose of this paper is to better understand communication between musicians in a
free jazz improvisation in comparison to traditional jazz.
Design/methodology/approach – A cybernetic informative feedback model was used to study
communication between musicians for free jazz. The conceptual model consists of the ears as sensors,
an auditory analysis stage to convert the acoustic signals into symbolic information (e.g. notated
music), a cognitive processing stage (to make decisions and adapt the performance to what is being
heard), and an effector (e.g. muscle movement to control an instrument). It was determined which
musical features of the co-players have to be extracted to be able to respond adequately in a music
improvisation, and how this knowledge can be used to build an automated music improvisation
system for free jazz.
Findings – The three major findings of this analysis were: first, in traditional jazz a soloist
only needs to analyze a very limited set of music ensemble features, whereas in free jazz the
performer has to observe each musician individually; second, unlike traditional jazz, free jazz
is not a strict rule-based system, and consequently the musicians need to develop their own
personal symbolic representations; and third, an automated music improvisation system could
instead use a machine-adequate music representation, based on acoustic features that can be
extracted robustly by a computer algorithm.
Practical implications – Gained knowledge can be applied to build automated music improvisation
systems for free jazz.
Originality/value – The paper expands our knowledge of intelligent music improvisation
algorithms toward algorithms that can improvise with a free jazz ensemble.
Keywords Cybernetics, Automated music improvisation systems, Informative feedback models,
Artificial creativity, Cognitive modeling, Free jazz, Auditory scene analysis, Music
Paper type Conceptual paper

Introduction
The purpose of the research reported here was to develop a model of jazz-ensemble
improvisation practice to compare the communication structures for traditional jazz
and free jazz. Cybernetics has been applied to understand several forms of music
including traditional forms of jazz (Zaripov, 1969; Aufermann, 2005), but, to the
author’s knowledge, not yet specifically to free jazz. The aim of the research reported is
both to better understand the underlying mechanisms of free improvisation and to apply
this knowledge to artificial-intelligent music systems. Unlike traditional jazz, free
jazz, as indicated by its name, is not based on a rule system. The lack of such a system
has led to a dramatic change in the way musicians communicate, which needs to be
considered in the model.

This paper builds on knowledge that was gained in a project funded by the CreativeIT
program of the National Science Foundation (No. 0757454). The concepts discussed in this
paper will be part of an autonomous intelligent agent for free-music improvisation that
the author is currently developing with colleagues Selmer Bringsjord, Pauline Oliveros,
and Doug Van Nort with support from NSF (No. 1002851). The author would like to thank
Ted Krueger and two anonymous reviewers for their helpful suggestions, and Kristen
Murphy for proofreading the manuscript.
A major challenge for automated music improvisation systems has always been to
operate with concrete sounds (e.g. analyzing in real time what is being played
acoustically by musicians) instead of simply working on a symbolic, music theoretical
level. The question of how the shift from traditional jazz to free jazz will affect this
challenge is a central point of the research and addresses the second theme of the
Cybernetics: Art, Design, Mathematics (C:ADM) 2010 International Conference: From
Abstract to Actual (here: symbolic music representation vs actual sound). As will be
discussed later, the solution asks for cross-over processes, the other theme of the C:ADM
2010 conference, and it needs to draw from knowledge in the fields of
cybernetics/artificial intelligence, music theory, and psychoacoustics/machine
listening. So far, the latter has only played a very limited role in automated music
improvisation systems.
The paper is organized as follows:

• the introduction of the general cybernetic model structure that was used for this
  research, which builds on Wiener’s regulatory informative feedback model;
• an overview of current automated improvisation systems for traditional jazz, including
  a short description of the underlying musical concepts;
• an outline of the music practices in free jazz compared to those in traditional jazz,
  and how these practices affect the communication structure between musicians;
• the introduction of a new model for an automated music improvisation system that
  addresses the specific needs of free improvisation, including a comparison to
  algorithms that focus on traditional jazz; and
• a concluding section.

General cybernetic model structure based on informative feedback


The approach that is described here starts with Wiener’s basic informative feedback
model. Wiener (1961, p. 112 f.) explained his model based on the example of a person
driving on an icy road. In this case, the effector is the muscular system that steers the
car. The comparator, in Wiener’s case a simple subtractor, compares the desired
trajectory of the car with the one the car actually took. The compensator then controls
the effector in such a way that it brings the car onto the desired trajectory (e.g. by
counter steering).
For this study, Wiener’s model was extended (Figure 1), partly drawing from
cybernetic models of Craik and Arbib, and also drawing from Pask’s ideas for his
Musicolour system (Pickering, 2010, p. 313 ff). The “cognitive processes and memory”
and “representation of world” stages of the model can be attributed to Craik (cited after
Cordeschi, 2002, p. 138), and the model follows Arbib’s (1972, p. 129) definition of
“goals as a ‘desired’ course of action from ‘higher’ centers”. In the case of jazz music,
the effector is the musician’s muscular system that controls the musical instrument
(e.g. a saxophone) to send out an acoustic signal.

[Figure 1. Informative feedback model to explain human information processing and action
during a music performance. The auditory sensor (the ears) feeds an auditory analysis
stage (pitch extraction, on/offset detection, timbre analysis, beat analysis, loudness
estimation, polyphonic analysis), which passes symbolic information to the cognitive
processes and memory, including the goals, the internal representation of the world, a
comparator, and a compensator; these in turn drive the effector (muscle control of the
performing instrument). Acoustic (A) and symbolic (S) information flows are marked
separately in the original diagram.]

In a typical jazz performance, the
signal is mixed with the sound of the other instruments before it is received by the
musician’s auditory sensory organ. The sound mixture is analyzed by the auditory
system and the extracted features are made available to the cognitive stages of the
brain in the form of symbolic information. From a functional point of view, the system
also includes at least one comparator (in this case likely to observe a multidimensional
set of features), which compares the acoustic features to a given set of goals, and a
compensator to align the actual performance with the desired goals.
A very simple example could be an arranged piano solo, which, by the way, is not
uncommon in Big Band music. Since the tuning of the piano is fixed and the notes to be
played pre-determined, only the tempo has to be adjusted. If the piano is ahead of the
remaining ensemble, the compensator would address the muscular system to slow
down or, likewise, to play faster if the performer has fallen behind the band. Even for
this very simple case, a major challenge is to extract all necessary information since all
instruments, the piano and those of the remaining ensemble, overlap in time and
frequency when their signals are perceived by the auditory system.
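To make the feedback loop concrete, here is a minimal sketch in Python of the comparator
and compensator for this arranged piano solo example. It assumes beat times have already
been extracted; the error definition, gain value, and function names are illustrative
assumptions, not part of Wiener's original formulation.

    def comparator(own_beat_time, ensemble_beat_time):
        # Wiener's comparator as a simple subtractor: a negative error means
        # the piano is ahead of the ensemble, a positive error means it has
        # fallen behind.
        return own_beat_time - ensemble_beat_time

    def compensator(tempo_bpm, error_s, gain=20.0):
        # The compensator drives the effector (here: the performed tempo):
        # slow down when ahead (negative error), speed up when behind.
        return tempo_bpm + gain * error_s

    # Example: the piano reaches beat 17 at t = 8.35 s, the ensemble at t = 8.50 s.
    tempo = 120.0                      # current performed tempo in bpm
    error = comparator(8.35, 8.50)     # -0.15 s: the piano is ahead
    tempo = compensator(tempo, error)  # 117 bpm: the loop slows the piano down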

Automated music improvisation systems for traditional jazz


A brief overview of traditional jazz practices
Matters become really interesting when the musician actually improvises a solo
instead of playing one from sheet music. Before this problem is addressed, it is useful to
briefly introduce the main concepts and challenges in traditional jazz improvisations.
A more detailed introduction to jazz music theory can be found in Spitzer (2001) and
several other publications dedicated to this topic.
The term “traditional jazz” refers here to jazz styles that preceded the free jazz
era, covering styles from swing to hardbop, but purposely excluding modal jazz,
which already contained numerous elements that later became characteristic features
of free jazz. In traditional jazz, the freedom of an improviser is more constrained
than people outside this tradition might think. Typically, each solo follows the chord
progression of the song that is played by the rhythm section. The latter typically
consists of drums, bass, and one or more chordal instruments, predominantly piano
or guitar. For historical reasons, one chord progression cycle is called a “chorus”.
The tunes of the general jazz repertoire are called jazz standards, and most of these
standards originated from Tin Pan Alley songs and pieces from Broadway
musicals, in which jazz musicians performed for a living. After the theme is
played, the solo instruments take turns playing solos, and often the players of the
rhythm section take their turns, too. In traditional jazz, the performer is free to play
over as many choruses as he or she wants, but to end a solo before the end of the
chord progression cycle is a taboo. The solo typically consists of a sequence of
phrases that is chosen to match the chord progression and the intended dramaturgy.
Since the two most common chord progressions in jazz are II-V and II-V-I
(supertonic/dominant/tonic) combinations, professional jazz musicians practice phrases
based on these progressions. Extensive literature exists with collections of standard
jazz phrases.
Figure 2 shows the first eight bars of a notated saxophone solo over the 32-bar
jazz standard, How High the Moon (Hamilton and Lewis, 1940) to provide a practical
example. Charlie Parker’s Ornithology later used the same chord progression with a
new bebop-style theme. Bars 3-6 consist of the typical II-V-I chord progression: Gm7
(notes: G, B♭, D, F), C7 (C, E, G, B♭), Fmaj7 (F, A, C, E), and Bars 7 and 8 of another
II-V progression: Fm7 (F, A♭, C, E♭) and B♭7 (B♭, D, F, A♭). Notice how in the
example the saxophone initially follows the notes of the individual chords closely
with additional scale-related notes – which is typical for swing. From Bar 6 on, the
phrases change to bebop style with a faster eighth-note pattern. Also noteworthy is
the second half of Bar 7, where the saxophone plays note material outside the chord
related scale to create a dissonant effect. Whether this is appropriate depends on the
agreed-upon rules; in the swing era this would have been, plainly put, incorrect
play, but such techniques later became the characteristic style of players like Eric
Dolphy, who could elegantly switch between the so-called inside and outside play.

[Figure 2. Example transcription of a saxophone solo over the jazz standard How High the
Moon (first eight bars), with chord symbols Gmaj7, Gm7, C7, Fmaj7, Fm7, and B♭7; the
outside play in Bar 7 is marked in the original figure.]
In order to play a correct solo following the rules of jazz theory, one could easily focus
one's attention on a very limited set of features and survive gracefully, as shown in Figure 3
(it should be pointed out, though, that virtuoso jazz players are known to listen out for and
respond to many details initiated by the other players). Basically, the soloist can deal
with the rhythm section as a holistic entity, since all its musicians follow the same chord
progression. The tempo is quasi-stable, and the performance of the other soloists has to
be observed only partially, to make sure not to cut into someone else's turn. Once the
soloist has been cleared to go ahead with his/her solo, he or she no longer needs to pay
attention to the other soloists.

Rule-based machine improvisation algorithms


Numerous attempts have been made to design machine improvisation/composition
algorithms to generate music material in the context of jazz and other styles (Cope,
1987; Friberg, 1991; Widmer, 1992; Jacob, 1996). In most cases, these algorithms use a
symbolic language to code various music parameters. The widespread musical
instrument digital interface (MIDI) format, for example, codes the fundamental
frequencies of sounds into numbers. Here, the note C1 is MIDI Number 24. Note
numbers ascend in integer steps with the semitones. The temporal structure is also coded
in numeral values related to a given rhythm and tempo structure.
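As a brief illustration of this symbolic code, the following Python sketch converts note
names to MIDI numbers and MIDI numbers to fundamental frequencies; the helper names are
chosen for this sketch, and equal-tempered tuning with A4 = 440 Hz is assumed.

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def note_to_midi(name, octave):
        # C1 is MIDI Number 24, so octave n starts at MIDI number 12 * (n + 1).
        return 12 * (octave + 1) + NOTE_NAMES.index(name)

    def midi_to_hz(midi_number):
        # Equal-tempered tuning with A4 (MIDI 69) at 440 Hz.
        return 440.0 * 2.0 ** ((midi_number - 69) / 12.0)

    print(note_to_midi("C", 1))   # 24
    print(round(midi_to_hz(69)))  # 440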
By utilizing such a symbolic code, improvisation or composition can become a
mathematical problem. Typically, the program selects phrases from a database
according to their fit to a given chord progression (e.g. avoiding tones that are outside
the musical scales for these chords, as previously discussed in context of Figure 2), and
current position in the bar structure (e.g. the program would not play a phrase ending
in the beginning of a chord structure). Under such a paradigm, the quality of the
machine performance can be evaluated fairly easily by testing whether any rules were

violated or not. Of course, such an approach will not lead to a groundbreaking
performance, but the results are often in line with the skills of a professional musician.

[Figure 3. Schematic communication scheme for a traditional jazz performance. The soloists
arrange the order and duration of the solos among themselves and follow each other's
dramaturgy; each soloist needs to maintain the beat with the rhythm section (piano/guitar,
bass, drums) and to follow the actual position in the chord progression.]
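To make the rule checking concrete, the following Python sketch implements the
phrase-selection paradigm described above under stated assumptions: phrases are stored as
MIDI note numbers, each chord is mapped to the pitch classes of its chord scale, and a
phrase is rejected if it contains out-of-scale tones or would overrun the current chord.
The database, scale table, and rule set are illustrative, not a published algorithm.

    # Pitch classes (0 = C) allowed over each chord, derived from its chord scale.
    CHORD_SCALES = {
        "Gm7":   {7, 9, 10, 0, 2, 4, 5},   # G dorian
        "C7":    {0, 2, 4, 5, 7, 9, 10},   # C mixolydian
        "Fmaj7": {5, 7, 9, 10, 0, 2, 4},   # F major
    }

    def violates_rules(phrase, chord, beats_left):
        # Reject out-of-scale tones and phrases that would overrun the chord,
        # assuming a phrase is played as eighth notes (two per beat).
        out_of_scale = any(p % 12 not in CHORD_SCALES[chord] for p in phrase)
        too_long = len(phrase) > 2 * beats_left
        return out_of_scale or too_long

    def select_phrase(database, chord, beats_left):
        # Return the first stored phrase that does not violate any rule.
        for phrase in database:
            if not violates_rules(phrase, chord, beats_left):
                return phrase
        return []                            # rest if nothing fits

    # Example: two beats left on a Gm7 chord.
    database = [[60, 61, 64, 65],            # C-C#-E-F: chromatic, rejected
                [55, 58, 62, 65]]            # G-Bb-D-F: fits the G dorian scale
    print(select_phrase(database, "Gm7", 2))  # [55, 58, 62, 65]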
A system can even operate in real time as long as it has access to the live music
material on a symbolic level, for example MIDI data from an electronic keyboard.
Lewis’ (2000) Voyager system and Pachet’s (2004) Continuator work with
MIDI data to interact with an individual performer. These systems transform and
enhance the material of the human performer by generating new material from the
received MIDI code, which can be derived from an acoustical sound source using an
audio-to-MIDI converter. (Typically these systems fail if more than one musical
instrument is included in the acoustic signal.) In the case of the Continuator, learning
algorithms based on a Hidden Markov model help the system to copy the musical style
of the human performer.
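To convey the flavor of such style learning, here is a deliberately reduced first-order
Markov sketch in Python; the Continuator's actual learning is more elaborate, so the
transition-table representation below is only an illustrative assumption.

    import random
    from collections import defaultdict

    def train(notes):
        # Count observed note-to-note transitions in the performer's material.
        transitions = defaultdict(list)
        for a, b in zip(notes, notes[1:]):
            transitions[a].append(b)
        return transitions

    def continue_phrase(transitions, start, length):
        # Walk the transition table to generate new material in the learned style.
        phrase = [start]
        for _ in range(length - 1):
            options = transitions.get(phrase[-1])
            if not options:
                break
            phrase.append(random.choice(options))
        return phrase

    live_input = [62, 65, 69, 67, 65, 62, 60, 62, 65]   # MIDI notes played live
    model = train(live_input)
    print(continue_phrase(model, start=62, length=8))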
Commercial systems that can improvise jazz are also available. The program
“Band-in-a-Box” is an intelligent automatic accompaniment program that simulates a
rhythm section for solo music entertainers. The system also simulates jazz solos for
various instruments for a given chord progression and popular music style. The system
can either generate a MIDI score that can be auralized using a MIDI synthesizer or create
audio material by intelligently arranging pre-recorded jazz phrases. The restricted
framework of the jazz tradition makes this quite possible, since the “listening” abilities of
such a system can be limited to knowing the actual position within the form. Here the
system needs to count along, making sure that it keeps pace with the quasi-steady beat.
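A minimal sketch of this counting along, assuming a quasi-steady tempo and a fixed form
length (the tempo and form values below are illustrative):

    def form_position(elapsed_s, tempo_bpm=160.0, beats_per_bar=4, bars_per_chorus=32):
        # Map elapsed time onto the position within the form.
        beat = int(elapsed_s * tempo_bpm / 60.0)      # elapsed beats since the top
        bar = beat // beats_per_bar
        chorus = bar // bars_per_chorus + 1
        bar_in_chorus = bar % bars_per_chorus + 1
        beat_in_bar = beat % beats_per_bar + 1
        return chorus, bar_in_chorus, beat_in_bar

    print(form_position(95.0))   # (2, 32, 2): chorus 2, bar 32, beat 2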

Automated music improvisation systems for free jazz


Free jazz music practices
In contrast to traditional jazz, a formal set of rules does not exist in free jazz, although
there is a vibrant tradition that has been carried on and expanded. Most of this
tradition exists as tacit knowledge and is passed on in performance practice, orally, and
through musicological analyses. One example of tacit knowledge in free jazz is the
taboo against performing traditional music material (Jost, 1981), unless it is a brief
reference in the context of other adequate free music material. For the application of the
informative feedback model to free jazz, it is also important to understand how the
tradition progressed over time, deviating more and more from traditional jazz practice.
A key moment for the development of free jazz was the introduction of modal jazz at the
end of the 1950s, in which the chord progressions were replaced with fixed musical modes.
In modal jazz the standard form of 12, 16, or 32 bars was initially kept; a decisive step
toward free jazz was to give up this structure in favor of a free (variable) duration of
the form.
In the beginning, the music material was fairly traditional and could be analyzed
based on traditional music notation (and thus can be easily captured using a symbolic
music code like MIDI). However, later musicians started to use extended techniques that
shifted their performance more and more away from the traditional sound production
techniques of the orchestral instruments used in jazz. Albert Mangelsdorff’s ability to
perform multiphonics on the trombone is legendary, and so are the circular-breathed
melodic streams of Evan Parker, who developed the ability to perform arpeggio-style
continuous phrases with a variable overtone structure, containing both tonal and
non-pitch-based elements. Peter Brötzmann’s repertoire further expanded the
techniques of non-pitched sounds. Among the younger generation of free jazz
musicians are performers whose work focuses on complex musical textures outside the
context of tonal music. Mazen Kerbaj (trumpet) and Christine Sehnaoui (saxophone) are
among these players who have neglected the tonal heritage of their instruments in a
unique way.
Initially, free jazz musicians took turns performing accompanied solos, but later they
transformed the music into a genre where the boundaries between solos and accompaniment
were blurred. While in traditional jazz a soloist has to listen to another soloist only to
find a good slot for his or her own solo, now the performers had to pay attention to the
other soloists all the time. In addition, the soloist could no longer rely on the predetermined
role of the rhythm section, which was now allowed to change keys, tempo, and/or style.
The higher cognitive load that was necessary to observe all other participants in a
session led to smaller ensembles, often duos. Larger ensembles like the Willem Breuker
Kollektief remained the exception.

Machine improvisation algorithms for free (non-rule-based) music


Figure 4 shows a model of communication during a free jazz session. The diagram,
shown here for a group of three musicians, appears to be much simpler because of the
lack of rules. In contrast to the previous model for traditional jazz (Figure 3), the
distinction between rhythm section players and soloists is no longer made. While in
traditional jazz the rhythm section can be dealt with as a holistic entity with
homogeneous rhythm, tempo, and chord structure, now individual communication
channels have to be built up between all musicians. Also, the feedback structure that
each musician needs to enact to adequately respond to the other players is
fundamentally different from traditional jazz, where the communication feedback loop
(Figure 3) could simply cover a single communication stream from the ensemble (seen as
a whole) to the soloist and back. In free music, a separate communication line has to be
established between each possible pair of players (n(n - 1)/2 lines for n musicians), and
consequently each performer has to divide his/her attention to observe all n - 1 other
players individually. Since the feedback
from other musicians has to be detected with the ears, the multiple feedback-loop
structure is not apparent in Figure 1. However, the need to extract the information
individually for each musician from a complex sound field is what makes free music

improvisations a challenge. In addition, the performer always has to be prepared for the
unexpected, especially since the tacit knowledge can be extended or modified within a
session.

[Figure 4. Schematic communication scheme for a free jazz performance, shown for three
musicians. The categorical distinction between soloists and rhythm section players no
longer exists; each musician has to establish individual communication channels to all
other musicians.]
With regard to the music parameter space, for traditional jazz it is sufficient to receive
the pitches of the notes played, to determine the current chord structure and melody
lines, and to capture the on- and offset times of these notes to align the performance in
time with the rhythm section. Commercial audio-to-MIDI converters can perform this
task reliably enough for this application if the general chord progression is known in
advance. Because of the great information redundancy, the analysis can even contain
errors, as long as the algorithm can follow the given chord progression.
automated system that can improvise free music, machine listening demands are much
higher if the system is mimicking human performance (see the auditory analysis box in
Figure 1). Now, we no longer have a pre-determined chord progression that serves as a
general guideline. Even if we possessed a system that could extract the individual notes
from a complex chord cluster – which, by the way, is difficult because of the complex
overtone structure of the individual notes – it is not guaranteed that the musical
parameter space in a session is based on traditional music notes.
To address this problem adequately, the intelligent system could be equipped with a
complex model that simulates the auditory pathway. This type of model is able to
extract features out of the acoustic signal in a similar way to the human brain (see
Figure 1). The early stages of the auditory pathway (the auditory periphery and early
auditory nuclei, which perform spectral decomposition, pitch estimation, and on- and
offset detection) are thought to be purely signal driven, whereas the higher stages
(e.g. timbre recognition, recognition of musical structure) is strongly influenced by the
individual background of a person, and these auditory features are categorized along
learned patterns.
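As a flavor of the signal-driven part of such an analysis stage, the following Python
sketch extracts two of the features named in Figure 1, loudness and note onsets, from a
raw signal. The frame size, onset threshold, and drastic feature reduction are
illustrative assumptions; a full model of the auditory pathway is far more elaborate.

    import numpy as np

    def analyze(signal, sr=44100, frame=1024):
        # Split the signal into frames and compute the RMS level per frame.
        n = len(signal) // frame
        frames = signal[:n * frame].reshape(n, frame)
        rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
        level_db = 20 * np.log10(rms)
        # Mark onsets where the level rises sharply between neighboring frames
        # (a +6 dB jump is an assumed threshold).
        onsets = np.where(np.diff(level_db) > 6.0)[0] + 1
        return level_db, onsets * frame / sr

    # Example: a 440 Hz tone that starts after 0.5 s of near silence.
    sr = 44100
    t = np.arange(sr) / sr
    tone = np.where(t > 0.5, 0.5 * np.sin(2 * np.pi * 440 * t), 0.0)
    level, onset_times = analyze(tone, sr)
    print(onset_times)   # one onset detected near t = 0.5 s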
The features extracted in the auditory analysis stage are coded as symbolic
information and passed on to the cognitive processing stage. From the symbolic
information it receives, this stage can construct an internal representation of the world (in this
case, the representation of the jazz performance). As outlined in the previous section,
the art of mapping acoustic signals onto symbolic information is well defined through
jazz theory for traditional jazz. Thus, if the system does not know and follow the given
rules, it will be easily detected by other musicians and the audience.
In contrast, in free music there is no longer a standardized symbolic representation
of what is being played. Instead, to a greater degree, the music is defined through its
actual sound. Consequently, the musicians will need to derive their own symbolic
representation to classify what they have heard and experienced, and they also need to
define their own goals. For automated systems, the latter can be programmed using
methods from second-order cybernetics (Scott, 2004). With regard to symbolic music
representation in humans, musicians typically draw from their own musical
background, and significant differences can be found for musicians who primarily
received classical music training compared to those who concentrated on jazz or chose
to work with sound textures rather than pitch and harmony. These differences extend
to artists who grew up in a non-Western music tradition. For example, if a typically
trained musician hears a musical scale, he or she associates it with a scale that exists in
his or her musical culture. This association works as long as the individual pitches of
each note fall within a certain tolerance. Consequently, two people from two different
cultural backgrounds could label the same scale differently, and thus operate in
different symbolic worlds while judging the same acoustic events. In free music, interactions
between musicians of various cultural backgrounds are often sought, in the hope that
these types of collaborations will lead to new forms of music and keep musicians from
falling into stereotypical patterns. The communication, however, will only work if the
structures of the different musical systems have enough overlap that the musicians can
decipher a sufficient number of features of the other performing musicians into their
own system. Furthermore, as performers, we have only indirect access to the listening
ability of the co-musicians through observing what they play, and in case something
was not “perceived” correctly by others, we cannot measure their resulting response
(musical action) against rules, because in free music such rules do not exist.
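The scale-association idea above can be sketched in Python as follows: a listener labels a
heard scale with a template from his or her own culture if every pitch falls within a
tolerance. The two templates, the 40-cent tolerance, and the matching rule are
illustrative assumptions.

    SCALE_TEMPLATES = {                    # interval patterns in cents from the root
        "major":         [0, 200, 400, 500, 700, 900, 1100],
        "natural minor": [0, 200, 300, 500, 700, 800, 1000],
    }

    def label_scale(intervals_cents, tolerance=40.0):
        # Return the first cultural template whose degrees all lie within the
        # tolerance of the heard intervals; otherwise the scale stays unlabeled.
        for name, template in SCALE_TEMPLATES.items():
            if len(template) == len(intervals_cents) and all(
                abs(a - b) <= tolerance for a, b in zip(intervals_cents, template)
            ):
                return name
        return None

    # A slightly mistuned major scale still maps onto the listener's "major"
    # label, while a scale with a neutral (350-cent) third falls outside both
    # templates and remains unlabeled.
    print(label_scale([0, 195, 405, 498, 702, 890, 1105]))  # major
    print(label_scale([0, 200, 350, 500, 700, 850, 1000]))  # None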
Furthermore, for cross-cultural music ensembles, examples exist where
communication problems resulted from operating in different music systems. The late
Yulius Golombeck once said that when he was performing with the world music band
“Embryo”, Charlie Mariano, and the Karnataka College of Percussion, there were certain
complex Indian rhythms played by the Karnataka College of Percussion that the
Western-trained musicians could not participate in because the rhythmical structure
was too complicated to understand, despite the fact that all musicians had tremendous
experience with non-Western music (personal communication, 1995).
While the complex communication structure in free music poses a real challenge for
automated music systems, the lack of a standardized symbolic representation can be
played to a system’s advantage. Instead of mimicking the auditory system to extract the
musical features (Figure 1), an alternative approach could be a robot-adequate design.
The design could consider that as of today some parameters (e.g. complex chords) are
impossible to extract in parallel for multiple musicians, especially in the presence of
room reverberation. Instead, a music culture for machines could be developed that
emphasizes the strengths of machines and circumvents their shortcomings. These strengths
and weaknesses are summarized in Table I.
Table I. Listening strengths and weaknesses of machines compared to humans

Listening strengths: absolute sense of pitch, timbre, timing, and most other information.
Listening weaknesses: difficulty extracting information from multiple-source and
reverberant environments; difficulty reconstructing missing information; difficulty
perceptually correcting the imperfections of other players.

Cognition strengths: absolute memory; good at combinatorics.
Cognition weaknesses: not good at abstraction (not able to develop new concepts);
difficulties in judging the quality of a performance.

Action strengths: redefines the ideal of virtuosity; can play anything at any tempo
without imperfections.
Action weaknesses: difficulty performing material with great musical expression.

A directed focus on machine-adequate listening algorithms would also encourage the
design of machines to have their own identity, instead of focusing on making them
indistinguishable from humans by passing the Turing test (e.g. compare Boden, 2010).
Man/machine communication could then be treated like a cross-cultural performance,
where sufficient overlap between the various cultures is expected to allow meaningful
communication. In such collaborations, the highlight would not be to replace humans
with machines, but to build systems that inspire human performers in a unique and
creative way. A good example of machine-inspired human music performance in another
context was the introduction of the drum machine, which encouraged a new generation
of drummers around Dave Weckl in the 1980s to perform their instruments more
accurately, almost in a machine-like style.
Conclusion
The purpose of this study was to develop a cybernetic model to understand different
forms of jazz by bridging the fields of cybernetics, music theory, psychoacoustics and
machine listening. Wiener’s regulatory informative feedback model was chosen as a
starting point. It was further developed to serve as a model for jazz improvisation,
using elements of other cybernetic models by Arbib and Craik. The model was then
used to investigate the communication structure between jazz musicians with a focus
on which musical features have to be extracted by a soloist in order to adapt his/her
own performance to what is being played by the ensemble. This was accomplished
using an informative feedback process. The analysis revealed that the communication
structure in free jazz is fundamentally different from traditional jazz:
• The distinct roles of lead improviser and backing ensemble no longer exist, and the
  improviser can no longer concentrate on a single musical stream. In an informative
  feedback model this results in a parallel feedback-loop system (one loop for each
  musician). Owing to the lack of explicit rules, it can be less obvious if the performer
  misinterprets what is being played by others.
• Since free jazz is not rule based, but defined through actual sounds, a general theory
  on how to convert the acoustic events into symbolic code no longer exists.
• Consequently, automated music improvisation systems do not necessarily have to
  follow human practices but can instead use a symbolic code that focuses on features
  that can be extracted robustly by computers. Here, it is important, though, that there
  is sufficient overlap between both symbolic codes, so human performers will find that
  the machine responds in a useful and systematic manner.

References
Arbib, M.A. (1972), The Metaphorical Brain: An Introduction to Cybernetics as Artificial
Intelligence and Brain Theory, Wiley, New York, NY.
Aufermann, K. (2005), “Feedback and music: you provide the noise, the order comes by itself”,
Kybernetes, Vol. 34 Nos 3/4, p. 490.
Boden, M.A. (2010), “The turing test and artistic creativity”, Kybernetes, Vol. 39, p. 409.
Cope, D. (1987), “An expert system for computer-assisted composition”, Computer Music Journal,
Vol. 11 No. 4, pp. 30-46.
Cordeschi, R. (2002), The Discovery of the Artificial: Behavior, Mind, and Machines Before and
Beyond Cybernetics, Kluwer, Amsterdam.
Friberg, A. (1991), “Generative rules for music performance: a formal description of a rule
system”, Computer Music Journal, Vol. 15 No. 2, pp. 56-71.
Hamilton, N. and Lewis, M. (1940), How High the Moon, available at: http://en.wikipedia.org/
wiki/How_High_the_Moon
Jacob, B. (1996), “Algorithmic composition as a model of creativity”, Organised Sound, Vol. 1
No. 3, pp. 157-65.
Jost, E. (1981), Free Jazz, DaCapo, New York, NY.
Lewis, G.E. (2000), “Too many notes: computers, complexity and culture in voyager”, Leonardo
Music Journal, Vol. 10, pp. 33-9.
Pachet, F. (2004), “Beyond the cybernetic jam fantasy: the continuator”, IEEE Computer Graphics
and Applications, Vol. 24 No. 1, pp. 31-5.
Pickering, A. (2010), The Cybernetic Brain – Sketches of Another Future, University of Chicago
Press, London.
Scott, B. (2004), “Second-order cybernetics: an historical introduction”, Kybernetes, Vol. 33
Nos 9/10, pp. 1365-78.
Spitzer, P. (2001), Jazz Theory Handbook, Mel Bay Publications, Pacific, MO.
Widmer, G. (1992), “Qualitative perception modeling and intelligent musical learning”, Computer
Music Journal, Vol. 16 No. 2, pp. 51-68.
Wiener, N. (1961), Cybernetics or Control and Communication in the Animal and the Machine,
2nd ed., The MIT Press, New York, NY.
Zaripov, R.K. (1969), “Cybernetics and music”, Perspectives of New Music, Vol. 7 No. 2, pp. 115-54.


About the author


Jonas Braasch is an Acoustician, Musicologist, and Sound Artist who teaches courses in
Acoustics & Music at the School of Architecture at Rensselaer Polytechnic Institute. He obtained
a Master’s degree from Dortmund University (Germany, 1998) in Physics and two PhD degrees
from Ruhr-University Bochum, Germany (2001, 2004) in Electrical Engineering/Information
Science and Musicology. His research interests include binaural hearing, multi-channel audio
technology, telematic music systems, perceptual audio/visual integration, intelligent systems,
and musical acoustics. For his work, he has received funding from NSF, NSERC (Canada), DFG
(German Science Foundation), and NYSCA. As a soprano saxophonist and sound artist, he has
on-going collaborations with Curtis Bahn, Chris Chafe, Michael Century, Mark Dresser, Pauline
Oliveros, and Doug van Nort – among others. Jonas Braasch is a Board Member of the Deep
Listening Institute (Kingston, NY), EMPAC affiliated faculty, and holds an Adjunct Professor
Appointment with the Schulich School of Music at McGill University. Jonas Braasch can be
contacted at: braasj@rpi.edu
