global structure—for instance, to create a sound from a set of synthesis parameters).

In terms of our previous formalism, the maquette corresponds to an object o = {(d_i, ac_i)}_{i∈N} (a container for compositional processes). Its evaluation is the sequential execution of a set of 'compute' operations ac_i, which correspond to the programs contained in the maquette, and its rendering implies a single, preliminary scheduling operation on o, for the objects it contains and/or those resulting from the program executions. Although made of processes, the musical structure is therefore a static one (it will not change at rendering time, after these initial phases of execution and scheduling are performed).
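To make this static model concrete, the following minimal sketch (in Python, with hypothetical names; the actual system is implemented on top of OpenMusic, i.e. Common Lisp) separates the two phases: evaluation runs every compute operation, then a single scheduling pass orders the results before rendering starts.

    # Hypothetical sketch of the static maquette model: the container is an
    # iterable of (date, compute) pairs; nothing changes once rendering starts.

    def evaluate(maquette):
        """Run every 'compute' operation ac_i and keep the data it produces."""
        return [(date, compute()) for (date, compute) in maquette]

    def render(maquette, play):
        """One preliminary scheduling pass, then playback of a static structure."""
        timeline = sorted(evaluate(maquette), key=lambda entry: entry[0])
        for date, obj in timeline:   # the structure no longer changes here
            play(date, obj)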
5.2 Application programming interface

Our architecture comes along with a programming interface. We provide a small set of operations which allow the manipulation of objects and processes. Note that in the following definitions, o = {(d_i, a_i)}_{i∈N} and a score s is an object ∈ O aggregating other objects.

• add(s, o, d): add an object o in a score s at date d;
  ≡ s ← s ∪ {(d + d_i, a_i)}_{i∈N}
• remove(s, o): remove an object o from a score s;
  ≡ s ← s \ o
• move(s, o, d): move an object o in a score s by adding d to its date;
  ≡ ∀(d_i, a_i) ∈ s ∩ o, (d_i, a_i) ← (d_i + d, a_i)
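As an illustration, the sketch below (Python; ScoreObject and the lowercase function names are illustrative, not the names used in the OpenMusic-based prototype) implements these three operations over a score modelled as a set of (d_i, a_i) pairs, following the definitions given above.

    # Hedged sketch of the API defined above: a score is a set of (d_i, a_i)
    # pairs; 'ScoreObject', 'add', 'remove' and 'move' are illustrative names.

    from dataclasses import dataclass, field
    from typing import Callable, Set, Tuple

    Entry = Tuple[float, Callable[[], None]]   # a (d_i, a_i) pair

    @dataclass
    class ScoreObject:
        entries: Set[Entry] = field(default_factory=set)

    def add(s: ScoreObject, o: ScoreObject, d: float) -> None:
        """add(s, o, d): insert every (d_i, a_i) of o into s, shifted by d."""
        s.entries |= {(d + di, ai) for (di, ai) in o.entries}

    def remove(s: ScoreObject, o: ScoreObject) -> None:
        """remove(s, o): take the entries of o out of s."""
        s.entries -= o.entries

    def move(s: ScoreObject, o: ScoreObject, d: float) -> None:
        """move(s, o, d): shift by d the dates of o's entries that belong to s."""
        shared = s.entries & o.entries
        s.entries = (s.entries - shared) | {(di + d, ai) for (di, ai) in shared}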
5.3 Real-time control over compositional processes

The notion of process composition extends the concepts found in the maquette (see Section 5.1) by bringing the computation and rendering steps of its execution together in the same process.

We are working in a prototype environment derived from OpenMusic, designed to embed this research (Bresson et al., 2015). The core scheduling system of this environment is implemented following the architecture described in Section 4.1. It allows the user to play musical objects concurrently in the graphical environment. The rendering of the prototype maquette interface used in our examples, currently represented as a superimposition of separate tracks, is implemented as a set of calls to the scheduling system as described in Section 4.2.

Objects in the maquette interface are actually visual programs generating data. They can be set to compute these data either 'on-demand', prior to rendering (as in the standard maquette model), or dynamically at runtime (typically when the playhead reaches the corresponding 'box' in the sequencer interface). These objects can be linked to their context (and in particular to their containing maquette), so their computation can modify it using the calls available from the system's Application Programming Interface (API). The synthesis patch—here called control patch—is active during the rendering and can also dynamically modify the contents of the maquette (for instance through a reactive OSC receiver thread, triggering some of the modification methods from the API listed in Section 5.2).
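A rough sketch of such a runtime control loop is given below (Python; the queue-based listener merely stands in for the reactive OSC receiver thread, and it reuses the illustrative ScoreObject/add/move names from the Section 5.2 sketch).

    # Hypothetical control loop: messages received during rendering trigger
    # the Section 5.2 operations and thus mutate the score while it plays.

    import queue
    import threading

    def control_loop(score, inbox):
        """Apply score modifications received while the piece is playing."""
        while True:
            message = inbox.get()
            if message is None:                  # shutdown signal
                break
            kind, obj, date = message
            if kind == "add":
                add(score, obj, date)            # inject new material at runtime
            elif kind == "move":
                move(score, obj, date)           # shift existing material in time

    # Usage: start the listener, then feed it messages while rendering proceeds.
    score, inbox = ScoreObject(), queue.Queue()
    threading.Thread(target=control_loop, args=(score, inbox), daemon=True).start()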
aims at programming scores as processes with erratic computations and a global musical timeline.

Interleaving rendering and composition can be used to build dynamic/interactive scores. Nowadays, two main computer music systems also share this outlook:

• i-score is an interactive sequencer for intermedia creation (Baltazar, de la Hogue, & Desainte-Catherine, 2014). This software allows the user to define temporal scenarios for automations and events, with chronological relationships and constraints between them. The sequencer is designed in a multi-linear way: each object has its own temporality, and may have its own playhead according to execution delays introduced by the scenario or interactions. This representation differs from the one we presented, which is simply linear, using mobile boxes instead of playhead splitting. While i-score does not provide composition-oriented tools, it allows the definition and manipulation of time relations between predefined control objects.
Fig. 13. Maquette rendering the auditory distortion product synthesis example.
Fig. 14. Control patch implementing the auditory distortion product synthesis example.
Wischik, 2005) used in data networking. However, the key features of this technique are to reduce latency during re-scheduling operations and to maintain musical coherence at the output. In data networking, it is mostly used to avoid data overflow and to adapt resource usage.

7. Discussion and conclusion

We presented a formal framework and a musical system scheduling architecture that supports the high-level specification, execution and real-time control of compositional processes.
The examples used throughout this work illustrate how composers can use such a system to compose at a 'meta' level, that is, to create a score including compositional processes running during the execution of the piece. Implementing the system in OpenMusic allows the user to take advantage of the capabilities of one of the most flexible CAC packages for expressing many types of musical ideas. While the architecture aims at being used in computer music software, it could be implemented in any software that requires the programming of evolving scenarios with a chronological representation of mixed data.

The asynchronous nature of our system leads to non-deterministic score rendering: we do not consider or check the actual feasibility of the processes defined by composers. Thus, a score may produce irrelevant musical output if, for instance, the computations involved in its execution do not terminate on time.6 Therefore, one has to pay attention to the complexity of its development, and experiment by running the system. In this respect, we are currently investigating techniques to efficiently anticipate and organize computations. We are also working on robustness against time fluctuations due to lower-level OS-controlled preemptive scheduling (as the computer music systems we consider are deployed on general-audience machines), and on a single-threaded version of the architecture in order to enable a 'synchronous' rendering mode. This would lead the system towards more deterministic behaviours, and for instance let computations delay the score execution, a useful feature for composers who want to gradually compute scores using physical input (with no need for precise timing, outside a performance context). Finally, fieldwork with composers will help to refine the system and develop tools enabling better control over the introduced dynamic and adaptive scheduling features.

6 Here, 'musical output' may refer not only to sounds or control messages, but also to the score display.

Acknowledgements

We thank Jean-Louis Giavitto for his numerous suggestions on both formal and technical aspects. We also thank Hélianthe Caure, Julia Blondeau and Jérémie Garcia for their support.

Funding

This work is funded by the French National Research Agency project with reference [ANR-13-JS02-0004].

References

Agostini, A., & Ghisi, D. (2013). Real-time computer-aided composition with bach. Contemporary Music Review, 32, 41–48.
Assayag, G., Bloch, G., Chemillier, M., Cont, A., & Dubnov, S. (2006). OMax Brothers: A dynamic topology of agents for improvization learning. In Workshop on Audio and Music Computing for Multimedia, ACM MultiMedia, Santa Barbara, CA, USA.
Assayag, G., Rueda, C., Laurson, M., Agon, C., & Delerue, O. (1999). Computer-assisted composition at IRCAM: From PatchWork to OpenMusic. Computer Music Journal, 23, 59–72.
Baltazar, P., de la Hogue, T., & Desainte-Catherine, M. (2014). i-score, an interactive sequencer for the intermedia arts. In Joint Fortieth International Computer Music Conference/Eleventh Sound and Music Computing Conference, Athens, Greece.
Bouche, D., & Bresson, J. (2015). Planning and scheduling actions in a computer-aided music composition system. In Proceedings of the 9th International Scheduling and Planning Applications Workshop (SPARK), Jerusalem, Israel.
Bresson, J., & Agon, C. (2006). Temporal control over sound synthesis processes. In Sound and Music Computing, Marseille, France.
Bresson, J., Bouche, D., Garcia, J., Carpentier, T., Jacquemard, F., MacCallum, J., & Schwarz, D. (2015). Projet EFFICACE : Développements et perspectives en composition assistée par ordinateur [EFFICACE research project: Developments and outlooks in computer-aided composition]. In Journées d'Informatique Musicale, Montréal, Canada.
Bresson, J., & Giavitto, J.-L. (2014). A reactive extension of the OpenMusic visual programming language. Journal of Visual Languages and Computing, 4, 363–375.
Chadabe, J. (1984). Interactive composing: An overview. Computer Music Journal, 8, 22–27.
Chechile, A. (2015). Creating spatial depth using distortion product otoacoustic emissions in music composition. In International Conference on Auditory Display, Graz, Austria.
Cointe, P. (1983). Evaluation of object oriented programming from Simula to Smalltalk. In Eleventh Simula Users' Conference, Paris, France.
Echeveste, J., Cont, A., Giavitto, J.-L., & Jacquemard, F. (2013). Operational semantics of a domain specific language for real time musician–computer interaction. Discrete Event Dynamic Systems, 23, 343–383.
Hernandez, J., & Torres, J. (2013). Electromechanical design: Reactive planning and control with a mobile robot. In Proceedings of the 10th Electrical Engineering, Computing Science and Automatic Control (CCE) Conference, Mexico City, Mexico.
Letz, S., Orlarey, Y., & Fober, D. (2000). Real-time composition in Elody. In ICMA (Ed.), Proceedings of the International Computer Music Conference, Berlin, Germany.
Nika, J., Bouche, D., Bresson, J., Chemillier, M., & Assayag, G. (2015). Guided improvisation as dynamic calls to an offline model. In Sound and Music Computing (SMC), Maynooth, Ireland.
Nika, J., Chemillier, M., & Assayag, G. (in press). ImproteK: Introducing scenarios into human-computer music improvisation. ACM Computers in Entertainment, special issue on Musical Metacreation.
Puckette, M. (1991). Combining event and signal processing in the MAX graphical programming environment. Computer Music Journal, 15, 68–77.
Raina, G., Towsley, D. F., & Wischik, D. (2005). Part 2: Control theory for buffer sizing. Computer Communication Review, 35, 79–82.
Rintanen, J. (2013). Tutorial: Algorithms for classical planning. In 23rd International Joint Conference on Artificial Intelligence, Beijing, China.
Rodet, X., & Cointe, P. (1984). Formes: Composition and scheduling of processes. Computer Music Journal, 8, 32–50.
Vidal, Jr., E. C., & Nareyek, A. (2011). A real-time concurrent planning and execution framework for automated story planning for games (AAAI Technical Report WS-11-18).
Xenakis, I. (1992). Formalized music: Thought and mathematics in composition (revised). Harmonologia series. Stuyvesant, NY: Pendragon Press.