All rights reserved. No part of this book may be reproduced in any form by any electronic
or mechanical means (including photocopying, recording, or information storage and
retrieval) without permission in writing from the publisher.
This book was set in Stone Serif and Stone Sans by Graphic Composition, Inc. and was
printed and bound in the United States of America.
Introduction and Background
used to create new musical relationships that may exist only between
humans and computers in a digital world.
The adjective "virtual" is a current buzzword that describes computer
simulations of things that behave like real-world objects, situations,
and phenomena. Describing such a simulation, Brenda Laurel writes,
"A virtual world may not look anything like the one we know, but the
persuasiveness of its representation allows us to respond to it as if it
were real" (1993). Interactive music techniques can be used to model
many of the key elements in making and listening to music: instru-
ments, performers, composers, and listeners. Virtual instruments behave
somewhat like real instruments, and can be played from the computer
keyboard, a MIDI controller, or as an extension of a traditional instru-
ment. Virtual performers may play with human performers, interacting
with "musical intelligence" in a duet, combo, or accompaniment role.
Virtual composers create original music based on flexible and sometimes
unpredictable processes specified by a real composer. These processes
might represent the same musical ideas that a composer may choose
in order to create a piece for acoustic instruments. Other processes may
be specifically designed to take advantage of the capabilities of the
computer. Finally, a virtual listener or virtual critic may pass judgment
by reacting to and altering the final outcome of a performance (Rowe
1993). Such a critic might be designed to analyze the accuracy of a
performance, or steer the output of a composition away from too
much repetition.
The behavior of these virtual entities must be described by the pro-
grammer, who must answer the questions: How are virtual instruments
played? How does a virtual performer respond to musical input? How
do virtual composers generate, process, and structure musical material?
What are the criteria for a virtual critic to judge the success or failure
of a performance or a piece of music? The opinions of the program-
mer are impossible to separate from the final outcome. There is a
compelling reason why composers and other artists need to become
involved in the creation of software, and why programs like Max are
needed to make software creation more accessible to artists. Among
other things, artists are experts at creating vivid imaginary worlds that
engage the mind and the senses. These talents are especially needed in
creating rich and engaging computer environments. Computers are
not intelligent. They derive their appearance of intelligence only from
the knowledge and experience of the person who creates the software
they run.
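The role of a virtual critic can be made concrete with a small sketch. Everything here is hypothetical and illustrative only: a function that measures repetition in a stream of MIDI pitch numbers and, when the music becomes too repetitive, substitutes a scale tone that has not been heard recently.

```python
import random

def repetition_score(pitches, window=8):
    """Fraction of notes in the recent window that duplicate an earlier
    note in that window (0.0 = all different, 1.0 = all the same)."""
    recent = pitches[-window:]
    if len(recent) < 2:
        return 0.0
    repeats = sum(1 for i, p in enumerate(recent) if p in recent[:i])
    return repeats / (len(recent) - 1)

def critic(pitches, candidate, threshold=0.5,
           scale=(60, 62, 64, 65, 67, 69, 71)):
    """Accept the candidate pitch unless the recent music is too
    repetitive; in that case pick a scale tone not heard recently."""
    if repetition_score(pitches + [candidate]) <= threshold:
        return candidate
    fresh = [p for p in scale if p not in pitches[-8:]]
    return random.choice(fresh) if fresh else candidate

melody = [60, 60, 60, 60, 60, 60, 60]  # a very repetitive line
print(critic(melody, 60))              # critic steers away from middle C
```

The threshold, window size, and scale are arbitrary choices; the point is only that a few lines of programmer-specified criteria are enough to give the software an "opinion" about the music it hears.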
The first two steps are practical and limited, dealing mainly with
facts. The software should be accurate in its analysis and, therefore,
must understand certain things about human performance. The last
three steps are artistic decisions limited only by a composer's skill and
imagination, since there are countless ways that performance informa-
tion can be interpreted to create original music and sound.
[Figure 1.1: Basic components of an interactive composition — a computer keyboard and mouse, a MIDI module, and hard-disk samples connected to interactive software comprising computer listening and computer composition.]
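The chain shown in figure 1.1 — performance input, a listening stage that analyzes it, and a composition stage that produces output — can be suggested with a toy loop. All function names and mapping rules below are hypothetical, chosen only to make the chain concrete:

```python
def listen(event):
    """Computer listening: extract simple features from a
    (pitch, velocity) pair standing in for a MIDI note event."""
    pitch, velocity = event
    return {"pitch": pitch, "loud": velocity > 96}

def compose(analysis):
    """Computer composition: answer loud notes with a fifth above,
    soft notes with an octave below (an arbitrary mapping)."""
    interval = 7 if analysis["loud"] else -12
    return analysis["pitch"] + interval

def interactive_loop(events):
    """One pass of the listen -> compose -> output chain per event."""
    return [compose(listen(e)) for e in events]

print(interactive_loop([(60, 110), (64, 40)]))  # [67, 52]
```

A real system would run this loop continuously on live MIDI input and send the results to a synthesizer, but the division of labor between listening and composing is the same.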
Technical
innovations in the twentieth century have reduced opportunities for
musical interaction. Recording eliminated the rapport between audi-
ence members and performers; since music could be heard anywhere
at any time, the focus of the moment, and the energy needed to create
it, was lost. Multitrack tape techniques further eliminated interaction
between musicians, making it possible for an engineer to record a band
with each member appearing at the studio on separate occasions to
record their part on separate tracks (although many bands prefer to
capture the excitement of their live interaction in the studio). Studio
musicians may respond or react to prerecorded music, rather than in-
teracting with live players. Their primary interaction is through feed-
back with a producer or engineer. This mode of creating music has
more recently spilled over into large pop music concerts that employ
prerecorded tapes, with the musicians on stage playing a minimal role
or even lip synching, so that the sound is as close to the original studio
recording as possible.
Computer music, however, did not begin as an interactive art form.
The first general-purpose computer music program was created by Max
Mathews at Bell Laboratories in the early sixties (Dodge and Jerse
1985). The first computer music systems could not be designed for in-
teraction since the primitive equipment was not capable of handling
the immense number of calculations needed for a live performance.
These systems required hours of programming and waiting before even
a single short melody could be heard. Part of the attraction for some
composers was to have absolute control over all aspects of a composi-
tion, specifying each and every detail to the computer, which would,
in turn, implement them flawlessly. Interestingly, the limitations of
these systems and the experimental nature of the software created un-
predictable results that did lead to meaningful "interactions" between
composers and computers; composers programmed what they thought
they would hear; computers interpreted the data differently, and often
composers were surprised to hear the results. Then they either adjusted
the data to sound more like their original intention, or decided that
they were delighted by their "mistake" and continued in that new di-
rection. More than a few composers have remarked that some of their
best work was discovered unintentionally through a flaw in the system
or by entering erroneous data.
During the sixties and early seventies, the first devices used for creating
computer music were very expensive, and housed at only a few re-
search facilities such as Columbia University, Stanford University, and
Bell Labs. At the same time, analog electronics were being employed
in concert settings to process the sound of instruments and to create
synthetic sounds in real time. So-called live electronic systems began
to be used in the early sixties and proliferated throughout the seven-
ties. Many of these pieces used tape delay techniques or processed ana-
log instruments through specially built electronic circuitry contained
in "modules," with each module altering the sound in a specific way,
such as adding distortion or reverberation. A few composers showed
the promise of interactive systems, where the system behaved differ-
ently with different musical input, allowing a performer not just to
trigger preset processes, but to shape them as well.
The early electronic works of John Cage influenced a number of com-
posers experimenting with "live electronic" music in the sixties.
Most of the computer music research and composition during the sev-
enties centered on sound synthesis and processing methods using
mainframe computers to produce tape pieces. One notable exception
was the GROOVE system, a pioneering work in real-time computer sys-
tems developed by Max Mathews and F. Richard Moore at Bell Labs.
The GROOVE system, in use at Bell Labs from 1968 to 1979, featured
a conducting program that enabled a person to control the tempo, dy-
namic level, and balance of a computer ensemble that had knowledge
of a predetermined musical score. The system was used by performers
to investigate performance nuances in older music, as well as by com-
posers, such as Emmanuel Ghent and Laurie Spiegel, to conduct origi-
nal compositions (Dodge and Jerse 1985).
David Behrman, another pioneer in interactive composition, spent
much of his early career creating works featuring real-time processing
of acoustic instruments. He was one of a few composers who began
to use microcomputers when they first became available in the mid-
seventies. In John Schaefer's book, New Sounds, Behrman explains: "I
used the computer as an interface between some circuitry I had built
that made electronic music, and a pitch-sensing device that listens for
pitches made by acoustic instruments." He maintains that his pieces
"are in the form of computer programs and hardware." In his work
Figure in a Clearing, a cellist improvises on a set of notes, and the com-
puter creates harmonies and timbres that depend on the order and
choice of those notes. As the performer responds to the harmonies, the
improvisation triggers a new set of computer sounds (Schaefer 1987).
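The kind of mapping Behrman describes — harmonies that depend on the order and choice of the notes played — can be suggested by a small sketch. Nothing here reproduces his circuitry or program; the rules are invented for illustration:

```python
def harmonize(pitches):
    """Choose a harmony from the last two pitches played: ascending
    motion gets a major triad, descending a minor triad, a repeated
    note a bare fifth (all choices arbitrary, for illustration)."""
    if len(pitches) < 2:
        return []
    prev, last = pitches[-2], pitches[-1]
    if last > prev:
        return [last, last + 4, last + 7]  # major triad on the new note
    if last < prev:
        return [last, last + 3, last + 7]  # minor triad on the new note
    return [last, last + 7]                # bare fifth on a repeated note

print(harmonize([60, 64]))  # ascending -> [64, 68, 71]
print(harmonize([64, 60]))  # descending -> [60, 63, 67]
```

Even a rule set this small already gives the performer something to respond to: playing the same notes in a different order produces different harmonies, which in turn can steer the improvisation.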
Composer Joel Chadabe has employed concepts of interaction in his
music since 1967, when he began creating works for a Moog analog
Many of the early interactive computer systems built in the late seven-
ties and early eighties were actually digital/analog hybrid systems that
consisted of a simple computer sending voltages to an analog synthe-
sizer. George Lewis, a well-known jazz trombonist, began building such
a system in 1979, using a computer with 1 kilobyte of RAM to control
a Moog synthesizer. His program evolved, and continues to evolve, into
a more elaborate system that enables him to use his improvisational
skills to create a true dialogue with the computer (Lewis 1994; see
chap. 3).
Although the original idea was to build an oscillator bank [for sound synthesis],
by 1985 most of the composers who tried to use the 4X were interested in
"signal processing," using the term to mean transforming the sound of a live
instrument in some way. This change of focus was the product of opportunity
and necessity. Opportunity because "signal processing" is capable of a richer
sonic result than pure synthesis, and since it is easier to create musical connec-
tions between a live player and electronics if the electronics are acting on the
live sound itself. Necessity because it was clear that after eight years of refining
the digital oscillator, we lacked the software to specify interesting real-time tim-
bral control at the level of detail needed. Signal processing, by contrast, can
often yield interesting results from only a small number of control parameters.
The primary software used for signal processing and synthesis on the
ISPW is FTS ("faster than sound"). IRCAM has continued the develop-
ment of FTS as a system that runs on multiple hardware and software
platforms. Most recently, Miller Puckette's new software system, Pd
(Pure Data), provides the main features of Max and FTS while ad-
dressing some of the shortcomings of the original Max paradigm. By
taking advantage of faster processor speeds, Pd is able to integrate
audio synthesis and signal processing with video processing and 3-D
graphics in a single real-time software environment. The graphics pro-
gram, GEM (Graphics Environment for Multimedia), was written by
Mark Danks to operate within the Pd environment. This holds great
promise for composers and visual artists to explore an interactive and
unified audiovisual medium.
These new capabilities provide composers, using off-the-shelf equip-
ment, with a sophisticated interactive system capable of handling not
only MIDI data but also real-time sound synthesis, timbre analysis,
pitch tracking, and signal processing (for more on this topic see chapter
8). Since much of this software is still in the development stages, the
remainder of this text demonstrates concepts of interactive music us-
ing the Opcode version of Max and commonly available MIDI hard-
ware. These techniques should remain viable in future incarnations of
Max and in other interactive systems.