10 Musical robots and listening machines
NICK COLLINS
Imagine being enraptured at the performance of a hotly tipped band, ‘The
Alan Turing Five’, an epic fusion of thrash drumming, vocoded belting and
virtuosic three-fingered guitar playing evoking a longing for a bygone age.
Yet after the show, an ugly rumour spreads amongst the audience that even
leads once-satisfied punters to demand their money back. All the person-
alities you observed on stage were really robot simulacra; no wonder they
played so fast!
Is this tale so far-fetched? The work of the great French engineer Vau-
canson was introduced in chapter 1 — a modern-day version of his 1738
flute-playing automaton has been designed by researchers at Waseda Uni-
versity, Japan. Their Waseda Flutist Robot aims to reproduce ‘as realistically
as possible, every single human organ involved in playing the flute’ (Solis
et al. 2006, p. 13). Although aspects of this project formulate an acoustical
enquiry, the musical applications are not forsaken: the robot has performed
duets with human flautists. Indeed, this is only one of a number of such
robots produced since the 1980s, and a second musical robot will be fea-
tured later in this chapter. Robotics is a boom area of the current generation,
particularly in Japan, and androids (humanoid robots) are even being given
soft silicone skin and affective (emotional) responses. Media coverage of
the 2006 International Next-Generation Robot Fair actively portrayed such
uncannily realistic simulations, including the scarily lifelike Repliee Q2, an
android interviewer.¹
A cultural fascination with the machine existed long before the indus-
trial revolution, though it has no doubt been intensified in the high-tech
age. Automata can be traced to antiquity and the writings of Aristotle. In
music, a fascinating history of machines and formalisms includes d’Arezzo’s
table lookup procedure for setting texts to melody (c. 1030), the first com-
putational memory devices (thirteenth-century nine-thousand-hole caril-
lons from the Netherlands), musical dice games (see chapter 6) and Ada
Lovelace’s prescient description of the application of the Analytical Engine
to musical composition (Roads 1985, 1996; chapters 1 and 4). The fictional
anticipations of artificial intelligence are also wide ranging, from the Golem
myth and Shelley’s Frankenstein’s Monster to the introduction of the Czech
term robot (from robota, forced labour) in Karel Čapek’s play Rossum’s Universal
Robots (1921). Indeed, robots, in the guise of the Man-Machine of Fritz
Lang’s Metropolis (1927), appeared as a major preoccupation of Kraftwerk,
not least associated with their Man-Machine album of 1978, and live theatrics
with mechanical mannequins for performances of the song ‘The Robots’.
The current age sees a proliferation of cyborgs, robot orchestras and software
composing machines.
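Guido d’Arezzo’s table lookup procedure mentioned above can be loosely sketched in modern terms: the vowels of a text index into a table of candidate pitches, with some freedom of choice at each step. The following Python fragment is an illustrative reconstruction only; the vowel-to-pitch mapping and note names are invented for the example and do not reproduce Guido’s actual table.

```python
import random

# Illustrative vowel-to-pitch table (NOT Guido's historical mapping):
# each vowel offers several candidate pitches, from which one is chosen.
VOWEL_TABLE = {
    'a': ['C4', 'A4'],
    'e': ['D4', 'B4'],
    'i': ['E4', 'C5'],
    'o': ['F4', 'D5'],
    'u': ['G4', 'E5'],
}

def text_to_melody(text, rng=random):
    """Set a text to melody by looking up each of its vowels in the table."""
    return [rng.choice(VOWEL_TABLE[ch]) for ch in text.lower() if ch in VOWEL_TABLE]

melody = text_to_melody('Ut queant laxis')
print(melody)  # one pitch per vowel (u, u, e, a, a, i): six notes
```

The element of choice at each vowel means the same text can yield many melodies, which is precisely what makes the procedure an early compositional formalism rather than a fixed cipher.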
There are further anticipations of autonomous creations in the virtual
characters who populate computer games and computer-animated movies.
In one entertaining trend, virtual bands have achieved chart success as the
front for human musicians, the most recognised being the Gorillaz, but
precedents exist from Alvin and the Chipmunks (1958) to the Japanese
virtual idols, 3D animated pop singers with their own cult followings. Nev-
ertheless, the backing musicians in such ventures have remained resolutely
human: no one has yet exhibited a band of fully automated computerised
musicians that can equal the scope of human musical activity. When we
investigate the current musicianship of artificial musicians, they turn out
to be far less skilled than their human counterparts, and this chapter shall
reveal a few reasons why this is so.
Without wishing to denigrate the compositions, many historical per-
formances in electronic music have been somewhat inflexible. Works for
tape and soloist (like Javier Alvarez’s Papalotl (1987) for piano and tape),
whilst demonstrating great craft in the tape parts and great excitement when
soloists can accommodate the demands, do enforce a certain rigidity out of
keeping with conventional music-making; tape cannot yield an inch. A more
pragmatic approach much used in current work is the live cueing of material
by a human director. For instance, cues might be indicated from a MIDI
keyboard, triggering complex processing mechanisms and event sequences.
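Such a cueing scheme amounts to a dispatch table from trigger keys to cue actions. A minimal Python sketch follows; the note numbers and cue names are purely hypothetical, and a real system would receive its note-on messages from a MIDI library rather than the simulated calls shown here.

```python
# Hypothetical cue list: particular keys on the director's MIDI keyboard
# each fire a named cue (a processing change, an event sequence, etc.).
cue_log = []

CUES = {
    60: lambda: cue_log.append('start tape part A'),
    62: lambda: cue_log.append('enable ring modulation'),
    64: lambda: cue_log.append('trigger event sequence 3'),
}

def on_note_on(note, velocity):
    """Dispatch an incoming note-on message to its cue, if one is assigned."""
    if velocity > 0 and note in CUES:
        CUES[note]()

# Simulated performance: the director presses keys 60, 61 and 64.
on_note_on(60, 100)
on_note_on(61, 100)   # no cue assigned to this key: ignored
on_note_on(64, 90)
print(cue_log)        # → ['start tape part A', 'trigger event sequence 3']
```

The musical intelligence here remains entirely human: the machine merely executes whatever the director triggers, which is exactly the limitation that accompaniment systems and interactive systems attempt to go beyond.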
We shall later discuss accompaniment systems that seek to automate this
playback further, given a known score. But electronic music must also be
prepared to deal with improvised, spontaneous situations, and here the abil-
ity of machines to tread on an equal footing with humans is curtailed; there
is no score for cueing or for more sophisticated accompaniment systems.
We would like to take advantage of the many novel processing and gen-
erational capabilities that machines can offer, without compromising the
sense of co-ordinated musical behaviour key to instrumental practice. The
onus is upon the machines to be brought nearer to our musical practices,
to instil a sense of what Robert Rowe terms machine musicianship (Rowe
2001).
Because such tasks quickly bring us to the fields of artificial intelligence
and the cognition of music, the reader should not be too surprised to hear
that they are unsolved problems. Engaging with music is a high-level cog-
nitive task which stretches the resources of the brain.² The level of artificial
intelligence achievable by engineering is hotly contested and debated by
philosophers and scientists. Yet, we don’t always have to seek substitutes for
human brains and bodies; many fascinating new musical applications have
been offshoots from the attempt, from the consequences of AI technology.
Not all musical machines have to be androids, to act the same way and think
somehow the same way as human beings, and many just exist as curious
and fascinating software programs.
In treating such issues, authors have characterised their work in a variety
of ways, as providing some form of interactive companionship (Thom 2003),
self-reflexive music-making (Pachet 2003), as designing settings for novel
improvisation (Lewis 1999), or as providing toolkits for machine musician-
ship (Rowe 1993, 2001) which might potentially support many varieties of
performance situation. In this chapter we shall follow Robert Rowe in the
use of the term interactive music systems to describe artificial non-human
participants in musical discourse.³ It is helpful to remember that all such
systems are devised and built by humans, so even if their creators defer real-
time interaction to their creations, these systems are not devoid of human
spirit; they show exactly those assumptions that their makers have managed
to program into them.
Four interactive improvisation systems
In order to reveal some of the principles at work in the practical construction
of interactive music systems, four examples are discussed here. The selection
is by no means meant to be definitive, but illustrative of the wider efforts of
such creators as Robert Rowe (Cypher), Peter Beyls (Oscar), Belinda Thom,
Jonathan Impett and others. Interest in this field is burgeoning, with the
availability of real-time machine listening plug-ins for such environments as
Max/MSP or SuperCollider, and organisations such as Live Algorithms for
Music in the UK, or the artbots network in New York.
As suggested above, perhaps the greatest challenge in interaction terms is
exemplified by systems built especially for co-improvisation with a human
musician, from scratch. Improvisation is a ubiquitous practice in musical
culture, providing an important sense of location in space and time for each
performance, essential to constant challenge and renewal for performers
and to participation and communion for audiences (Bailey 1980). The four
systems described here are unified by an ability to start from a blank slate
in performance, and whilst this does not mean they are free of assumptions
about what might happen, they should ideally allow immediate interaction.
Yet they also demonstrate a variety of thinking on interactive systems. Whilst
none of these four systems is available at the time of writing for public
evaluation of their source code and interaction, they are all documented
in videos, recordings, academic papers and most importantly of all, live
concerts.
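The ‘blank slate’ character of such systems can be suggested with a toy sketch: a responder that assumes nothing about the material in advance, simply tracking what it hears and replying with a transformation. The class below is an invented illustration, not a model of any of the four systems discussed; pitches are MIDI note numbers, and in a real system they would arrive from a machine listening front end rather than the simulated input shown.

```python
# A toy 'blank slate' responder: no score, no prepared material, just a
# running memory of incoming pitches and a simple transformational reply.
class EchoImproviser:
    def __init__(self, transpose=7):
        self.transpose = transpose   # reply a fifth above (an assumption)
        self.heard = []

    def listen(self, pitch):
        """Record an incoming pitch (MIDI note number)."""
        self.heard.append(pitch)

    def respond(self):
        """Reply based on the most recent pitch heard, or stay silent."""
        if not self.heard:
            return None              # nothing heard yet: no assumptions made
        return self.heard[-1] + self.transpose

improviser = EchoImproviser()
for pitch in [60, 62, 64]:           # simulated human phrase: C4, D4, E4
    improviser.listen(pitch)
print(improviser.respond())          # → 71, a fifth above the last note, E4
```

Even this trivial case exhibits the defining property of the four systems above: it can begin interacting immediately, yet everything it does is exactly what its maker programmed into it.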