ELLIOT SIMPSON
Universidad Politécnica de Madrid
JORGE GRUNDMAN
Universidad Politécnica de Madrid
‘Animated Sound’: An application of digital technologies and open scores in interdisciplinary collaboration and education
ABSTRACT
We describe a collaborative project carried out with an ensemble of amateur adult musicians in which a participant created a composition derived from her background as an animator in conjunction with the appropriation of interface elements used in digital animation software. Conceived of during the 2020 quarantine for online performance with an undefined instrumentation of acoustic and electronic sound sources, subsequent versions have been created for film soundtracks, ensemble concerts, live-scoring/coding audio-visual presentations and solo performances. These diverse realizations are analysed through two frameworks. The first, adapted from an analysis of the embodiment of values in digital games, indicates that the community-oriented values present in early ensemble versions are replaced by more typical aesthetic values, such as complexity and virtuosity, as the work evolves from one realization to another. The
KEYWORDS
digital animation
community music
music composition
experimental music
digital scores
sound and image
values
INTRODUCTION
The boundaries between experimental music, digital technologies and educa-
tion are rich in possibilities for new musical forms and experiences. Open
scores, a term that we are using to refer to works in which the composer
explicitly leaves undetermined one or more fundamental musical parame-
ters, create a space for adaptability and inclusivity that overlaps with many
practices of education and community music. Digital technologies can prove
uniquely effective as mediators within this space, serving as instruments,
notations and interfaces. In the case study presented in this article we describe
the realization of a compositional project situated at this point of convergence,
concluding in five versions of the work demonstrating contrasting approaches
to a common process and material. This project was chosen as a case
study because its participatory approach and artistic realizations make
especially apparent the influence of the disciplines of experimental
music, technology and education upon one another, as well as the
remarkable range of creative and pedagogical outcomes that can result
from their synergy. We draw on two frameworks for analysis of the differ-
ent versions. The first, a framework constructed to examine the embodiment
of values in digital games, suggests that the project highlights certain values
through specific elements of its realizations. As the composition evolves from
version to version, the constellation of values indicated seems to change as
well. The second framework, adapted from a discussion of the relationship
between facilitator and participants in socially engaged practices, provides
a basis for discussing aesthetic outcomes which appear to diverge from the
values embodied by the project while at the same time existing as manifesta-
tions of it.
BACKGROUND
The project presented in this article emerged out of a course on experimental
music offered for amateur adult musicians, directed by guitarist Elliot Simpson
and hosted jointly by the Escuela Municipal de Música y Danza María Dolores
Pradera and the American Space Madrid (supported by the International
Institute and the US Embassy). Much of the course was spent discussing
and performing historical examples of experimental music, including works
by Cage, Wolff, Cardew, Oliveros and Fluxus artists, alongside compositions
by more contemporary composers. In this repertoire we can find a diverse
assortment of works that do not presuppose a background in music notation
or instrumental technique, and in which undetermined musical parameters
permit realizations in contexts where resources of time, space and instru-
ments are limited. Importantly, the participants themselves were encouraged
to begin composing for the group. As a starting point, the instructor suggested
that they consider elements of their own hobbies and professions that could
be interpreted or sonified in a meaningful way. This use of experimental and
ANIMATING SOUND
The case study at hand features a series of pieces with the title ‘Animated
Sound’, composed by a member of the ensemble – Mandy Toderian – who has
spent her career working in digital animation studios. Versions of the compo-
sition were workshopped in close collaboration with another four members of
the group. Toderian is an intermediate-level guitarist, while the other members
have backgrounds in electronic music, but without formal music education.
Toderian had noticed similarities between the f-curve graphs (see Figure 1)
used to control movement in animated characters, and examples of graphic
scores that the group had performed. Similar in appearance and function to
envelope interfaces commonly seen in digital audio workstations, f-curves
control the multidimensional parametric morphology of movements and
are dictated by keyframe points and rules for interpolation between them.
Toderian proposed using these graphs as notational material, while at the
same time potentially emphasizing the relation to the visual element derived
from the same information. Modern digital animation software itself plays an
important role in the pieces. Animation programmes are capable of interpo-
lating and automating actions, including compressing keyframe information,
modelling physics and mechanizing movements. These features are deeply
embedded in the workflows and muscle memory of professionals in the field.
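The mechanism described above — keyframe points plus an interpolation rule yielding a continuous parameter over time — can be pictured with a short sketch (an illustrative simplification assuming linear interpolation; animation packages such as Blender also offer Bézier and other interpolation modes, and the curve values here are invented):

```python
def fcurve_value(keyframes, t):
    """Evaluate an f-curve at time t by interpolating linearly
    between (time, value) keyframe points."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:       # before the first keyframe
        return keyframes[0][1]
    if t >= keyframes[-1][0]:      # after the last keyframe
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            # Linear interpolation between the surrounding keyframes.
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A hypothetical 'intensity' curve for one breath: rise, hold, release.
breath = [(0.0, 0.2), (2.0, 1.0), (4.0, 1.0), (8.0, 0.1)]
print(fcurve_value(breath, 1.0))  # halfway up the rise, ~0.6
```

Changing the interpolation rule between the same keyframes — the feature animators exploit constantly — changes the whole shape of the movement, which is what makes the keyframes such compact notational material.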
In Toderian’s words, animators ‘attempt to imbue an inanimate object
with personality and character, giving it the perception of having a life of
its own’ (2022: 1). She has found that subtle visual cues are the most effec-
tive, necessitating complex and carefully controlled manipulation of even the
smallest of movements. The physical actions utilized in various versions of
the piece – breathing, heartbeats, eye blinks, eye movements and noncon-
scious micro-movements of the body – are what Toderian uses to make an
animated character appear alive, even while seemingly motionless on-screen.
While we are normally unaware of these actions in our own bodies, the
composition asks interpreters to become profoundly conscious of them by
using them as cues to make sounds. Each interpreter produces their sounds
following these natural rhythms, and in accordance with the intensity defined
by a corresponding f-curve. Intensity can refer to a number of parameters, as
understood by the interpreter. The score explicitly mentions pitch, volume,
duration/articulation, sound quality and mood, but leaves open the possibil-
ity of further parameters. The curves created for each action are indicative of
f-curves that would be used for the animation of the same action, although in
some cases expanded in time.
The curves are filtered through an interpreter’s own corporeal cues: if a
change in a curve for an eye blink, for example, does not coincide with the
eye blink of the performer, that change will likely go unreflected in the
sound. The instructions also mention the possibilities of doubling performers
on parts, of omitting some curves from a performance and of changing mid-performance
from one action/curve to another. The undefined instrumentation implies flex-
ibility in the organization of concerts well beyond that of adaptation to diverse
capabilities of individual performers, suggesting potential realizations ranging
from typical concert environments with potentially large ensembles to solo
versions in any number of more or less formal contexts. In some versions, the
animator is incorporated as an onstage member of the ensemble. In others,
score material is pre-generated, meaning that the animator does not need to
be present for a performance. The visual output, manipulated as a by-product
of the scoring process, is also optional, and its presence or absence substan-
tially changes the audience’s experience of the composition.
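The filtering of the curves through corporeal cues can be understood as sampling: the curve's value is only heard at the moments an action actually occurs. A minimal sketch of this idea (hypothetical values throughout; the most recent keyframe value is held, a step approximation of the curve):

```python
import bisect

def sample_at(curve, event_times):
    """Sample a (time, value) curve only at the moments a corporeal
    event (e.g. an eye blink) actually occurs; changes in the curve
    between events pass unheard."""
    times = [t for t, _ in curve]
    values = [v for _, v in curve]
    sampled = []
    for t in event_times:
        # Hold the most recent keyframe value (a step approximation).
        i = max(bisect.bisect_right(times, t) - 1, 0)
        sampled.append((t, values[i]))
    return sampled

# Intensity curve for eye blinks, and the performer's actual blinks.
blink_curve = [(0.0, 0.3), (5.0, 0.9), (10.0, 0.4)]
blinks = [1.2, 4.8, 6.1, 9.5]
# No blink coincides with the change at t = 10.0, so it goes unheard.
print(sample_at(blink_curve, blinks))  # [(1.2, 0.3), (4.8, 0.3), (6.1, 0.9), (9.5, 0.9)]
```

The final curve change in this example never sounds, which is precisely the behaviour the score describes: the notation proposes, but the performer's body disposes.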
Each corporeal action, as a compositional mechanism, suggests differ-
ent sonic characteristics. The heartbeat is regular, with slight and gradual
variations. The sound itself is quite short, producing a rhythmic foundation.
Breathing also produces mostly stable rhythms, although the sonic embodi-
ment, which is performed alongside each exhalation, has a much longer
envelope to it. With eye blinks, sounds become less predictable, both in rhyth-
mic occurrence and, potentially, duration; at one point the score mentions: ‘If
you so choose, you may also close [your eyes] and play the sound’ (Toderian
2022: 2). The sounds produced by nonconscious micro-movements are the
least predictable and most varied in duration. The sources of all sounds are
left undefined. In rehearsals and recordings the group used acoustic instru-
ments, including guitar, violin, cello and piano, alongside electronic sources
(analogue oscillators, keyboards, digital sources, field recordings and mobile
phones) and found objects (kitchen appliances, empty bottles, dial tones and
hairdryers).
the basis of any music interaction with the digital score can be under-
stood through the interplay between the musician and the agents,
objects and fields that are co-present within the shared music-space.
This interplay has much in common with game play immersion and
human-computer interaction.
(2019: 203)
II
A second version was recorded by Elliot Simpson on electric bass as
the soundtrack to a short film created by the composer (Toderian 2022:
Version2Clip.mp3). With a shorter duration of just two minutes imposed
by the film, the timeframe used in the initial version was compressed,
while the curves remained unchanged. Two further curves, representing
eye movements, were added. Although intended to accompany
a film, there was no connection between the sound and image, and no
additional visual output was generated. For this version, intensity was
related to the duration/articulation of each event, with low-intensity
sounds resulting in short, staccato events and high-intensity sounds in
more sustained, legato notes. Other parameters, including dynamics and
pitch, remained static. This version served as a transition between first
experiments with the scoring technique and later iterations that incorpo-
rated visual components.
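The compression described here amounts to rescaling keyframe times while leaving the shape of each curve intact, with intensity read off as articulation. A hypothetical sketch (the original version's duration and the curve values are assumed; only the two-minute target comes from the text):

```python
def compress(keyframes, old_duration, new_duration):
    """Rescale keyframe times to a new total duration; the values
    (the shape of each curve) are left unchanged."""
    return [(t * new_duration / old_duration, v) for t, v in keyframes]

def articulation(intensity):
    """Version-two mapping: low intensity as short, staccato events,
    high intensity as more sustained, legato notes."""
    return 'staccato' if intensity < 0.5 else 'legato'

# A curve from a hypothetical ten-minute version compressed into the
# film's two minutes (600 s -> 120 s).
curve = [(0.0, 0.2), (300.0, 1.0), (600.0, 0.1)]
print(compress(curve, 600.0, 120.0))  # [(0.0, 0.2), (60.0, 1.0), (120.0, 0.1)]
print(articulation(0.9))              # legato
```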
III
A third version was created collaboratively over several in-person
rehearsals for a live performance in December 2021, with four interpret-
ers from the ensemble present alongside the animator (Toderian 2022:
Version3Clip.mp3). The visual material manipulated in real time as a
by-product of the scoring process was projected above the stage along-
side the performance score. The inclusion of a fourth performer allowed
for the addition of a fourth curve, representing nonconscious micro-
movements of the body.
The video score consisted of a scrolling graph editor window containing
four superimposed colour-coded curves, with a play head used to maintain
synchronization both between performers and between sound and image.
Performers could be silenced when their curve was turned off or ran out
of the visible screen area. Toderian participated as a performer, generating
the f-curves and resulting visual manipulations onstage, and the visible and
audible elements of her interactions with the computer formed an important
element of the experience. The sound sources used in the performance were
acoustic guitar with e-bow and slide, cello, digital oscillator and computer-
based sampler. As representations of the same information, the three outputs
– video score, animated image and sound – were sometimes clearly related.
At other moments the relation between components was less apparent.
Figure 2: Software window with interpreters’ live score (bottom window) and animated output (top
window), Blender (Blender Foundation 2021).
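The silencing rule — a performer sounds only while their colour-coded curve is switched on and present in the scrolling window around the play head — can be sketched as follows (the performer names, curve spans and window width are illustrative assumptions, not the project's actual implementation):

```python
def active_performers(curves, playhead, window=10.0):
    """Return performers whose curve is switched on and currently
    visible in the scrolling window that follows the play head."""
    visible_start, visible_end = playhead, playhead + window
    active = []
    for name, (enabled, start, end) in curves.items():
        # A curve that is off, or has run out of the visible screen
        # area, silences its performer.
        if enabled and start < visible_end and end > visible_start:
            active.append(name)
    return active

curves = {
    'breath':    (True, 0.0, 120.0),
    'heartbeat': (True, 30.0, 60.0),
    'blinks':    (False, 0.0, 120.0),  # switched off: silent
}
print(active_performers(curves, playhead=70.0))  # ['breath']
```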
IV
A fourth version was created for a solo performance by Elliot Simpson in
January 2022 (Toderian 2022: Version4Clip.mp3). Toderian generated new
visuals and curves, and audio was pre-recorded by the performer as a tape
part. The same four actions (heartbeats, eye movements, breaths and micro-
movements) were used as cues for sounds, with an electric bass again used
as the sound source. The micro-movements action was realized by attempt-
ing to hold a slide against the strings with the amplifier set to an extreme
volume and the musician seated in an intentionally uncomfortable physical
position. The resulting small but uncontrolled movements produced subtle
trembling sounds. Curves were interpreted as changes in filter frequencies
for three actions, and as pitch for the fourth. Although pre-recorded, param-
eter changes were performed in real time, following a video recording of the
f-curve graph playback.
As this version was primarily intended for a live solo performance by a
guitarist, the composer took the additional step of generating a notated score
by overlaying the superimposed keyframe points onto a music stave. Time was
indicated with a proportional notation, with each system equal to 30 seconds
and the full piece lasting 25 minutes. Notes that fell between pitches were
interpreted as semitone deviations, and smaller displacements as microtonal
deviations. While not absolute in precision, care was taken to preserve the
relative deviations between adjacent pitches. The visual output was projected
above the performer during the performance.
Figure 3: Notated version for guitar with curves included below staves.
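One possible reading of this transcription rule — 30-second systems, the nearest semitone, and the residual displacement as a microtonal deviation in cents — can be sketched as follows (a hypothetical interpretation for illustration, not the composer's actual procedure; the example keyframe is invented):

```python
def transcribe(time_sec, value):
    """Place a keyframe in proportional notation: the 30-second
    system it falls in, the nearest semitone (as a MIDI number) and
    the residual microtonal deviation in cents."""
    system = int(time_sec // 30) + 1         # systems numbered from 1
    semitone = round(value)
    cents = round((value - semitone) * 100)  # displacement from the semitone
    return system, semitone, cents

# A keyframe 95 seconds in, 40 cents above E4 (MIDI 64): fourth system.
print(transcribe(95.0, 64.4))  # (4, 64, 40)
```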
V
The latest version of the project deviates substantially from previous iterations.
The idea arose from the concept of a live feedback loop between performer
and visual material (Toderian 2022: Version5Clip.mp3). Attempts at real-time
animation of multiple simultaneous movements proved unwieldy for the
animation medium; this real-time approach would likely be more suitable for
techniques incorporating motion capture technology. A compromise between
real-time capture/feedback and the animation medium was achieved through
a string of single parameter captures fed back to the performer as score mate-
rial. Some adaptations were made. Heartbeats were not visible to the animator
and so were disregarded as an action. Conversely, the act of playing an instrument
(guitar, in this case) implied a new collection of relevant physical movements.
The six actions settled upon were breaths, eye blinks, right hand plucking, left
hand position, head movements and guitar/torso movements.
In this version, the interpreter begins by sitting and looking at a screen
while the animator observes and animates the interpreter’s breathing.
When the predetermined duration passes, the play head returns to the
beginning of the timeline and the interpreter performs the keyframe points
created during the previous animation process. As the points are performed,
the animator moves on to capturing the next action. This process is repeated
until six curves are obtained.
Figure 4: Six superimposed captures with keyframe points, Blender (Blender Foundation 2021).
To bypass the additional step of transcription of
keyframe points onto a music stave, as in version four, here the pitches can be
read directly from the grid of the software interface, with the y-axis indicating
twelve frets, and each of the six curves corresponding to a string of the guitar.
This creates a field of pitches in tablature for immediate performance during
the generation of the piece and allows for further transcription after the fact.
As in earlier versions, pitches between fret indications were interpreted as
relative microtonal deviations. When transcribed into music notation, the six
video frames within each grid unit were represented as eighth notes in meas-
ures of 6/8, resulting in a metrically precise notation.
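The direct-reading scheme — six curves as the six strings, the y-axis as frets and six video frames per grid unit as eighth notes in 6/8 — can be sketched as follows (standard tuning is an assumption on our part; the article does not specify the tuning used, and the example keyframe is invented):

```python
# Standard tuning, low E to high E, as MIDI note numbers (an assumed
# tuning; the article does not specify one).
OPEN_STRINGS = [40, 45, 50, 55, 59, 64]  # E2 A2 D3 G3 B3 E4

def keyframe_to_tab(curve_index, frame, y_value, frames_per_unit=6):
    """Read a keyframe straight off the software grid: the curve is a
    string, the y-axis a fret and each frame an eighth note in 6/8
    (six frames per grid unit = one measure)."""
    fret = round(y_value)  # values between frets would read as microtones
    return {
        'string': curve_index + 1,
        'fret': fret,
        'midi': OPEN_STRINGS[curve_index] + fret,
        'measure': frame // frames_per_unit + 1,
        'eighth': frame % frames_per_unit + 1,
    }

# Frame 13 on the third curve (D string), y = 5: a G3 on the second
# eighth note of the third measure.
print(keyframe_to_tab(2, 13, 5.0))
```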
This version, although specifically generated for a given performer
and context, remains completely adaptable. Any number of curves can be
performed by any instruments, and the entire process could be adapted for
any number of participants or animators. The intended outcome in this case
was the creation of a notated score, but the process is equally viable for a live-
scoring setting.
demanding works for solo guitar, which can freely incorporate flexible groupings
of other diverse musicians or be regenerated for other instruments or situations.
These soloist-centric works are much more in line with the world of typical
contemporary classical music. Their realization is contingent upon a great deal
of resources – time, instrument, performance space and everything else implied
by a background that equips a musician with the necessary skills to perform
demanding repertoire. The meaning found in the addition of this fifth stage
is primarily directed towards specialized musicians and audiences through
emphasis on aesthetic outcome and compositional process. Values of virtuosity,
musicality, innovation and complexity are embodied in these versions.
CONCLUSION
This case study demonstrates how themes of inclusivity, pedagogy, collabo-
ration and technology can converge in the creation of new educational and
musical experiences. The values embodied in participatory compositions and
performances are just as meaningful as any sounding or notated outcomes,
and through the additional stage of artistic remediation these approaches also
have much to offer in more conventional musical contexts. The digitally medi-
ated ensemble versions of ‘Animated Sound’ are oriented towards the goal of
empowering amateur musicians to compose and perform, with all the inter-
pretive, social and sonic exploration implied by those achievements. The addi-
tion of a technically demanding notated component generated from the same
collaborative compositional process points to a much different constellation of
values, where more typical aesthetic priorities come to the foreground, but in
which open elements retained throughout the project’s evolution ensure that
the urgently important values behind the composition need not be entirely
absent. In subsequent workshops given by Elliot Simpson, this project was
utilized as example and template to encourage diverse participants to search
out unexamined material in other fields, particularly technological or digital,
to be scored or sonified in open forms capable of being realized in a wide
variety of performance situations. The potential for the creation of unimagined
new musical works, relevant in educational and inclusive contexts as well as
more formal contemporary music settings, is enormous.
Audio clips of each version, an example of the visual output, and a version of
the ensemble score from April 2022 can be found at: https://doi.org/10.21950/
ZXZQ75.
ACKNOWLEDGEMENTS
Permission was obtained from all participants for the use of score materials and
audio documentation for non-commercial research dissemination. The authors
would like to thank Jesús Jara and Lee Douglas for their support of the project,
and Luis Osa Gomez del Campo, Susana Rica Romero, María Santos, Ilona
Scerbak and Mandy Toderian for their generosity of time and creativity.
REFERENCES
Autodesk, Inc. (2017), Autodesk Maya [software], Version 2019.
Benito Gutiérrez, I. (2019), ‘Inicio’, Isabel Benito Gutiérrez Contemporary
Music Composer, 5 September, https://isabelbenitogutierrez.wordpress.
com. Accessed 19 February 2023.
Bishop, C. (2012), Artificial Hells: Participatory Art and the Politics of Spectatorship,
London: Verso.
Blender Foundation (2021), Blender [software], Version 2.93.5.
Bourriaud, N. ([1998] 2002), Relational Aesthetics (trans. S. Pleasance and F.
Woods), Dijon: Les presses du réel.
Chamorro, L. (2022), ‘Ya Fue la Música’, https://luchamorro.hotglue.me/yafuelamusica. Accessed 19 February 2023.
Ciciliani, M. (2020), ‘Virtual 3D environments as composition and performance
spaces’, Journal of New Music Research, 49:1, pp. 104–13.
Flanagan, M. and Nissenbaum, H. (2014), Values at Play in Digital Games,
Boston, MA: MIT Press.
Gilchrist, P., Holmes, C., Lee, A., Moore, N. and Ravenscroft, N. (2015),
‘Co-designing non-hierarchical community arts research: The collabora-
tive stories spiral’, Qualitative Research Journal, 14:4, pp. 459–71.
Grant, M. J. (2003), ‘Experimental music semiotics’, International Review of the
Aesthetics and Sociology of Music, 34:2, pp. 173–91.
Higgins, L. (2018), ‘The community within community music: Special needs,
community music, and adult learning’, in G. E. McPherson and G. F.
Welch (eds), An Oxford Handbook of Music Education, vol. 4, Oxford: Oxford
University Press, pp. 104–19.
Holland, D. (2015), ‘A constructivist approach for opening minds to sound-
based music’, Journal of Music, Technology and Education, 8:1, pp. 23–39.
López Rodríguez, J. M. (2008), ‘Aprendiendo a través de la indeterminación’
(‘Learning through indeterminacy’), Lista Electrónica Europea de Música en
la Educación, 22, pp. 15–28.
Matthews, H. and Moorehouse, A. (2021), ‘Evaluating socially engaged practi-
ces in art: The autonomy of artists and artworks in community collabora-
tions’, Question Journal, 6, pp. 18–27.
Pignato, J. M. and Begany, G. M. (2015), ‘Deterritorialized, multilocated and
distributed: Musical space, poietic domains, and cognition in distance
collaboration’, Journal of Music, Technology and Education, 8:2, pp. 111–28.
Smith, R. R. (2023), ‘Animated notation’, Ryan Ross Smith, http://ryanrosssmith.com/animatednotation.html. Accessed 19 February 2023.
Tinkle, A. (2015), ‘Experimental music with young novices’, Leonardo Music
Journal, 25, pp. 30–33.
Toderian, M. (2022), ‘Animated Sound’, unpublished score and mp3 files,
https://doi.org/10.21950/ZXZQ75.
Turowski, P. (2016), ‘Digital game as music notation’, Ph.D. dissertation,
Charlottesville, VA: University of Virginia, https://doi.org/10.18130/
V3HS18.
Vear, C. (2019), The Digital Score: Musicianship, Creativity and Innovation, New
York: Routledge.
Veblen, K. K. (2018), ‘Adult music learning in formal, nonformal, and informal
contexts: Special needs, community music, and adult learning’, in G. E.
McPherson and G. F. Welch (eds), An Oxford Handbook of Music Education,
vol. 4, Oxford: Oxford University Press, pp. 243–56.
Whiteman, N. (2022), ‘Home page’, Nina Whiteman website, 14 September,
http://ninawhiteman.com. Accessed 19 February 2023.
Woods, P. J. (2019), ‘Conceptions of teaching in and through noise: A study
of experimental musicians’ beliefs’, Music Education Research, 21:4, pp.
459–68.
SUGGESTED CITATION
Simpson, Elliot and Grundman, Jorge (2023), ‘“Animated Sound”: An appli-
cation of digital technologies and open scores in interdisciplinary colla-
boration and education’, Journal of Music, Technology & Education, Special
Issue: ‘Exploring Audio and Music Technology in Education: Pedagogical,
Research and Sociocultural Perspectives’, 15:1, pp. 57–74, https://doi.
org/10.1386/jmte_00046_1
CONTRIBUTOR DETAILS
Guitarist Elliot Simpson has given premieres of works by such iconic figures
as Sofia Gubaidulina, Alvin Lucier, Michael Finnissy, Walter Zimmermann
and Larry Polansky, and has worked closely with many other prominent
young composers in the creation of new pieces. He has appeared in many
of the arts capitals of the world, including San Francisco, Los Angeles, New
York City, Mexico City, Santiago de Chile, Buenos Aires, São Paulo, London,
Amsterdam, Cologne, Berlin, Salzburg and Shanghai, in master classes, work-
shops and performances ranging from early music to free improvisation. His
recordings can be found on the Microfest, XI, Brilliant Classics, ECM, New
World, Infrequent Seams, Soundset and Hermes record labels. Originally
from New Mexico, United States, Elliot studied with David Tanenbaum at
the San Francisco Conservatory of Music and with Zoran Dukić at the Royal
Conservatoire of The Hague, where he was a recipient of the prestigious
Huygens Grant from the Netherlands Ministry of Education, Culture and
Science. His master’s degree in The Hague, as both soloist and chamber musi-
cian, was awarded ‘with distinction for his extraordinary contribution to new
music’. He is completing a doctoral degree at the Universidad Politécnica de
Madrid.
Contact: Universidad Politécnica de Madrid, Calle Ramiro de Maeztu 7, 28040
Madrid, Spain.
E-mail: elliotsimpson@gmail.com
https://orcid.org/0000-0001-6323-6385
Solo Violin and Sacred Temple’ and the ‘Cantata Levi per Violino, Soprano e
Orchestra da Camera’.
Contact: Universidad Politécnica de Madrid, Calle Ramiro de Maeztu 7, 28040
Madrid, Spain.
E-mail: jorge.grundman@upm.es
https://orcid.org/0000-0001-7953-7609
Elliot Simpson and Jorge Grundman have asserted their right under the
Copyright, Designs and Patents Act, 1988, to be identified as the authors of
this work in the format that was submitted to Intellect Ltd.