TW: It’s very gratifying to receive this kind of recognition from the
musical community, especially as the type of music I make is
almost invisible in the cultural press in the UK, where electro-
acoustic composition is squeezed out between classical
instrumental music and popular music.
USO: Why did you choose the electronic medium and how has this
affected your way of composing?
TW: Not strictly. I very often use the human voice (sometimes my
own, sometimes live performers like ‘Electric Phoenix’ or
‘Singcircle’, sometimes voices from the media, or from the local
community), for two reasons. On the one hand the voice is much
more than a musical instrument. Through speech it connects us to
the social world, and thence to the traditions of poetry, drama,
comedy, etc. It reveals much about the speaker, from gender, age
and health to attitude, mood and intention, and it also connects us
with our primate relatives. The listener recognizes a voice even
when it is minimally indicated, or vastly transformed, just as we
recognize faces from very few cues. And the average listener will be
affected by, e.g., a vocal multiphonic, an immediate empathic or
guttural link, in a way that they will not be affected by a clarinet
multiphonic (appreciated more in the sphere of contemporary-music
aficionados).
At the same time, apart from the computer itself, the voice is the
richest sound-producing ‘instrument’ that we have, generating a vast
variety of sounds from the stably pitched, to the entirely noisy, to
complex rapidly-changing multiphonics or textures of grit and so on.
This is a rich seam to mine for musical exploration.
USO: How was your experience working at IRCAM for "Vox 5"?
TW: There are two main reasons. The first is poverty (!). Most of my
life I’ve been a freelance composer, and being a freelance
experimental composer in England is seriously difficult! When I first
worked with electronics you were dependent on custom-built black
boxes like the SPX90. The problem for me was, even if I could afford
to buy one of these, I could not afford to upgrade every other year,
as University music departments could. It became clear that the
solution to this would be to have IRCAM-like software running on a
desktop computer. A group of like-minded composers and engineers
in York got together (in 1985-6) as the Composers’ Desktop Project,
and began porting some of the IRCAM/Stanford software to the Atari
ST (the Mac was, then, too slow for professional audio). We then
began to design our own software.
The original ports onto the Atari ran very slowly: e.g. doing a
spectral transformation of a 4-second stereo sound might take 4
minutes at IRCAM, but took 2 days on the Atari. However, on your
own system at home, you could afford to wait … certainly easier
than attempting to gain access to the big institutions every time you
wanted to make a piece.
Gradually PCs got faster – even IRCAM moved onto Macs. The CDP
graduated onto the PC (Macs were too expensive for freelancers!)
and the software gradually became faster than realtime.
TW: Offline and realtime work are different both from a programming
and from a musical perspective. The principal thing you don’t have
to deal with in offline work is getting the processing done in a
specific time. In realtime, the program must be efficient and fast,
and handle timing issues; offline, all this is simply irrelevant. Offline,
too, you can take your time to produce a sound-result: e.g. a
process might consult the entire sound and make some decisions
about what to do, then run a second process, and on the basis of
this a third, and so on. As machines get faster and programmers get
cleverer (and if composers are happy for sounds to be processed off-
line and re-injected into the system later) then you can probably get
round most of these problems.
TW: In the 70s I worked in analogue 4-track. But then the 4-track
technology died. Since then I have been cautious about using any
spatialisation procedure beyond stereo, as I don’t want my work to
be dependent on a technology which might not last. I’m also
concerned with the average listener, who will not go to a concert, or
a special institution with complex spatialisation facilities. Most
people will listen to music on headphones or domestic hifis. So the
music must work in this context. With diffusion, however, I can
expand the stereo work, using the diffusion process to reinforce and
expand the gestures within the music. My current piece will
probably be in 8-tracks, partly for aesthetic reasons – it is based on
recognizable human speech, and I would like the speech
‘community’ to surround the audience – and partly because 8-track
is, perhaps, future-proof, being essentially 4 times stereo! I’m
excited by the multichannel spatialisation systems being developed
at the moment, but I would like to see the development of very
cheap, high quality loudspeakers, to make these technologies
accessible to smaller (normal) venues, and to composers like me,
who work at home.
USO: How should your music be performed (by you or any other
sound artist) in a live context?
TW: With the pure electro-acoustic works, the (stereo) pieces should
be diffused over many stereo pairs. I provide basic diffusion scores
if I am not diffusing the pieces myself.