
Towards Intelligent Orchestration Systems

Aurélien Antoine and Eduardo R. Miranda

Interdisciplinary Centre for Computer Music Research (ICCMR),


University of Plymouth,
Plymouth
PL4 8AA, UK
aurelien.antoine@postgrad.plymouth.ac.uk

Abstract. Orchestration is a problem that has been relatively unexplored in computer music until recently, perhaps due to its complexity. This paper presents a survey of the field of computer-assisted orchestration. The survey covers the orchestration systems initiated by the spectral music movement and developed over the last 40 years, in order to establish the current state of the art. We also introduce some of our work-in-progress ideas, which work towards intelligent orchestration systems.

Keywords: Computer-Aided Orchestration, Orchestration, Intelligent Systems, Computer Music, Spectral Music, Timbre

1 Introduction
This paper focuses on a specific compositional art, named orchestration, from the perspective of Western classical music. Orchestration can be defined as the art of combining pitches to compose music for an orchestra or, more generally, an ensemble. This involves writing for a number of instruments and can be seen as a symbolic view of composing. Furthermore, orchestration is the art of mixing instrumental properties: for example, by combining small sounds from different instruments, the orchestration creates a sound that could not exist on its own. This second aspect can be described as the sonic view of composing. Orchestration is an interesting compositional art, as it can help achieve a musical idea that cannot be realised with a single instrument. Although some attempts have been made to define the art of orchestration, such as in [7], [19], [20] and [1], its teaching and practice remain mainly empirical. This is because there is no mathematical foundation or long theoretical tradition for this activity, as there is for other compositional aspects. This is possibly why this musical discipline is said to be “at the crossing of daring and experience” [24].
Since the invention of the computer, composers have been interested in exploring its potential in the compositional process. The Illiac Suite [16] was one of the first pieces to use computers to compose music. Since then, several different computing techniques and systems have been developed for musical composition. These tools mainly allow composers to manipulate symbolic musical objects, such as notes or chords, giving them the ability to focus on harmony and

rhythm. These aspects of composition have been successfully implemented in several Computer-Aided Composition systems since the late 1950s (see [29], [3] or [26] for examples). These tools can aid or completely automate the compositional process. In this paper, we use the term computer-assisted system, and not automatic composition system, as it includes all kinds of systems that can help the compositional process.
Until recently, orchestration has been relatively unexplored in the domain of computer music. We believe that computers can be helpful in orchestration, as they have been, and still are, for other aspects of musical writing. In this paper, we present an overview of the computer-assisted orchestration field, including different systems and examples of compositions. We also introduce our own ideas for future developments in this fascinating field, which work towards intelligent orchestration systems.

2 Computer-Assisted Orchestration

Here we define computer-assisted orchestration (or computer-aided orchestration) as a system that assists users/composers in making a piece for an ensemble or an orchestra. This could result in a generated/automatic orchestration or in help with orchestrating a part. Such a system will not replace the composer, but can assist the compositional process in many ways.
For the purposes of this paper, we divide computer-aided orchestration systems into two categories: “semi-automated” and “automatic”. We classify as semi-automated a system or set of techniques that involves using computers in one or several parts of the orchestration process. In these systems, the composition is still dependent on a hand-written process. We define as automatic a system in which the orchestration is completely produced by the computer. However, this does not necessarily result in a complete composition; it can be used as a starting point or as inspiration for an orchestration.

2.1 Semi-Automated Systems

The first computer-assisted orchestration systems can be associated with the Spectral Music movement initiated in the 1970s. In [11], Hugues Dufourt coined the term spectral music. Composers such as Gérard Grisey and Tristan Murail initiated this movement in France in the early 1970s, with the ensemble L’Itinéraire, based at IRCAM in Paris. During the same period, composers such as Johannes Fritsch or Clarence Barlow, from the Feedback Studio in Cologne, were also part of this new musical movement.
One of the first pieces associated with spectral music is entitled Partiels, composed by Gérard Grisey in 1975 for 18 instruments. The spectral analysis of a trombone sound was realised using an electronic sonogram. Following this experiment, and with the development of technological and scientific knowledge and tools, various composers started to use computers to help them compose music for an ensemble.

In his piece L’Esprit des dunes (1994), realised at IRCAM, Tristan Murail started his composition by analysing fragments from different sources, such as diphonic Mongolian singing and Tibetan singing. The material of this composition, for an ensemble of 11 instruments and electronics, was generated by spectral analysis of the aforementioned sources. He used an analysis program developed for additive synthesis, then constructed a database of these analyses to be evaluated and modified with libraries he developed in the visual programming environment PatchWork [3].
Some composers used speech analysis for their compositions. Clarence Barlow developed a technique called Synthrumentation, which consisted of performing spectral analysis of speech and then mapping these analyses onto acoustic instruments [5] [27]. Claudy Malherbe also used voice analysis techniques for some of his compositions. In Locus (1997) [23], a piece for four voices and electronics, Malherbe recorded the singers using two microphones placed at two different distances, in order to obtain two recordings with different characteristics. After the segmentation, smoothing and normalisation of the recordings, Malherbe applied an FFT analysis to obtain a representation of these recordings in the form of a sonogram. Then, a detection of partials was applied and the most prominent ones were selected. These data were subsequently input into PatchWork and transcribed into symbolic representations for ease of manipulation. For the rhythmic representation and manipulation, Malherbe used Kant, a rhythmic editor developed at IRCAM by the Computer Assisted Composition group (http://www.ircam.fr/repmus.html).
More recently, for his musical piece entitled Metal Extensions (2001), Yann Maresz used a set of techniques combining handwriting and computational processes. In [24], he described his process as follows:

“Selection of the region of sound to orchestrate from the electronic sound file,
placement by hand of markers on the region within the sound file that interested
me, for a chord-sequence analysis with AudioSculpt (peaks), inharmonic partial
analysis on the totality of the sound file in the same programme, transcription
of the given results into symbolic notation in OpenMusic and finally, realization
of the final score by hand.”

In summary, several composers began to see the ability of the computer to help them orchestrate musical ideas, or at least started to use computers in various compositional processes. As seen with the spectral movement, computers were used for spectral analysis and representations of audio signals. Composers used software such as AudioSculpt, PatchWork or OpenMusic to analyse sound, and also for the representation and manipulation of the symbolic view of orchestration.
With the evolution of technology and the experimentation of several composers, the idea of developing systems for orchestration started to arise in some research groups.


2.2 Automatic Systems

Surprisingly, only a few computer-aided orchestration systems are available, and the majority of these have been developed in the last decade. This could be due to the complexity of orchestration and the limits of the available technology. In this section, we present the attempts at designing computer-assisted orchestration tools that we know of so far. One of the first attempts is a tool developed by Rose and Hetrick [30]. They propose a system that analyses a given orchestration. It is also possible to give a target sound to the system, and it outputs an orchestration that tries to approach the target file. Their algorithm uses a Singular Value Decomposition (SVD) method, either for the analysis of a given orchestration or for the proposition of new orchestrations using the spectrum of the target sound. The SVD approach is interesting in terms of its low computational cost, and its solution is the nearest to the target sound. However, this approach does not take into account the composition of the orchestra or the problem of instrumental combinations.
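The decomposition idea can be illustrated with a small sketch: treat each instrument's spectrum as a column of a matrix and recover the mixture weights via the SVD-based pseudo-inverse. The data and function below are hypothetical illustrations of the principle, not Rose and Hetrick's actual implementation:

```python
import numpy as np

def decompose_target(instrument_spectra, target):
    """Approximate a target spectrum as a linear mix of instrument
    spectra, using the pseudo-inverse (computed internally via SVD).

    instrument_spectra: (n_bins, n_instruments) matrix, one column
    per instrument spectrum; target: (n_bins,) target spectrum.
    Returns the mixing weights minimising the least-squares error.
    """
    # np.linalg.pinv is SVD-based, so this is the minimum-norm
    # least-squares fit of the instruments to the target spectrum.
    return np.linalg.pinv(instrument_spectra) @ target

# Toy example: two "instruments" with simple, partly overlapping spectra.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
t = np.array([1.0, 2.0, 1.5])
w = decompose_target(A, t)
```

Here the target is an exact mixture (one part of the first column, two of the second), so the recovered weights are close to [1.0, 2.0].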
Psenicka proposes another approach with his program called SPORCH (short for SPectral ORCHestration) [28]. Like the system proposed by Rose and Hetrick, this program analyses a target file and outputs the orchestration solutions in the form of a list of data, comprising instrument names, pitches and dynamic levels, in order to create a timbre and quality that fit the target file. Psenicka decided to make the search algorithm focus on instruments instead of sounds. Hence, the system is divided into two parts: the instrument database and the orchestration function. The database first needs to be built in order to run the program; it contains a list of instruments with their pitch range, dynamic level range and the most significant partials associated with each instrument (see [28] for more details). To find the orchestration solutions, Psenicka uses an iterative matching algorithm to establish the combination of instruments that fits the original file. The algorithm extracts the peaks of the target sound and then compares them with each instrument in the database in order to select those closest in frequency. This approach has a low computational cost, and the instrumental composition of the orchestra is incorporated in the matching algorithm. However, this method tends to output simple orchestrations and only the solution that best matches the target file; it therefore discards all other solutions that could be more interesting in terms of musical ideas.
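A minimal sketch of this peak-matching idea is shown below, with a hypothetical three-instrument database of partial frequencies. SPORCH itself stores richer data (pitch ranges, dynamics) and uses a more elaborate iterative search; this only illustrates the greedy closest-in-frequency selection step:

```python
# Hypothetical database: instrument name -> most significant partial
# frequencies (Hz). Illustrative values only.
DATABASE = {
    "flute":    [262.0, 524.0, 786.0],
    "clarinet": [147.0, 441.0, 735.0],
    "oboe":     [330.0, 660.0, 990.0],
}

def match_peaks(target_peaks, database, n_picks=2):
    """Greedy sketch of peak matching: each round, select the
    instrument whose partials lie closest (summed absolute frequency
    distance) to the still-unmatched target peaks."""
    remaining = list(target_peaks)
    chosen = []
    for _ in range(n_picks):
        if not remaining:
            break
        def cost(partials):
            # distance from each remaining peak to its nearest partial
            return sum(min(abs(p - q) for q in partials) for p in remaining)
        best = min(database, key=lambda name: cost(database[name]))
        chosen.append(best)
        # drop the target peaks this instrument accounts for (within 20 Hz)
        remaining = [p for p in remaining
                     if min(abs(p - q) for q in database[best]) > 20.0]
    return chosen

picks = match_peaks([262.0, 524.0, 330.0], DATABASE)
```

With these toy values, the flute covers the first two peaks and the oboe the third, so the greedy search picks those two instruments in that order.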
Another attempt is the system developed by Hummel [17]. Like Psenicka’s method, it uses an iterative algorithm, but instead of analysing spectral peaks, the program works with spectral envelopes, i.e. the frequency-amplitude curves derived from an FFT analysis. As with the two previous systems, it analyses a target sound, retrieving its spectral envelope, and then searches iteratively for the best approximation. Hummel reports that his system works better with non-pitched sounds (e.g. whispered vowels). This is due to it using spectral envelopes instead of spectral peaks; hence, the perceived pitches of the solutions can differ from the pitches of the target file.
IRCAM addressed the question of computer-assisted orchestration in 2003, with a research project proposed by Yann Maresz [24], whose work is mentioned in the previous section of this paper. This resulted in three Ph.D. theses [35] [8] [13] and in computer-aided orchestration programs that evolved through the years. Like the aforementioned systems, the user inputs a target sound and the program computes an orchestration. The first version, named Orchidée, was the result of the two Ph.D. theses written by Damien Tardieu [35] and Grégoire Carpentier [8]. They respectively addressed the problem of the analysis of instrumental sound and its perception, and the rapid increase in the number of possible solutions produced by the system. The system extracts audio descriptors from the target sound, and also from the audio samples contained in the orchestra database. These descriptors are the material for the combinatorial algorithm developed to match the target sounds [36] [9]. The algorithm does not output only the best solution but rather a selection of optimal solutions, which is an advantage as it proposes different orchestrations for one target sound. This version does not consider the temporal problems of orchestration: it proposes only static orchestration solutions, and the system works with static and harmonic target sounds. Jonathan Harvey, assisted by researchers and computer music designers from IRCAM, was one of the first composers to benefit from this new computer-assisted orchestration system. He used it for his composition Speakings (2008), for live electronics and a large orchestra [25]. He recorded a mantra sung on three vowels: Oh/Ah/Hum. His idea was to input these recordings into Orchidée in order to try to imitate the sound of the sung mantra with an ensemble of 13 instruments.
The system evolved into a new version, under the name Ato-ms (for Abstract Temporal Orchestration), which is the result of the third Ph.D. thesis, by Philippe Esling [13]. One of the major improvements of this version is the management of time: it generates orchestration solutions that evolve over time, as opposed to static ones. Another improvement is the use of a multi-objective, time-series matching algorithm based on optimal warping. In this version, the user can design envelopes for the audio features, thereby creating an abstract target. According to Maresz [24], the solutions “suffer from a lack of quality in their timbral accuracy”, and these two versions only address the problem of timbre matching.
In November 2014, IRCAM released a completely new version of this system, named Orchids (Fig. 1), available at http://www.forumnet.ircam.fr/product/orchids. This standalone application implements the best features from its predecessors and integrates new improvements. It proposes abstract and temporal orchestration and is also optimised for timbral mixture. Like the aforementioned systems, the user inputs a target sound, but in Orchids they also have the ability to design an abstract target by shaping various psychoacoustic descriptors. Orchids also includes a database of over 30 orchestral instruments, whose samples are analysed and indexed by the program. The user can extend the sound database by simply adding a folder into the program, which the system will analyse and index. The user has the ability to define the instruments they want to include in the orchestra. Moreover, the user can position the instruments, as Orchids integrates the notion of spatialisation of the orchestral

Fig. 1. Orchids interface, showing the Analysis tab

space. The system first analyses the defined psychoacoustic features of the target. Different matching algorithms are available, depending on the type of solutions the user wants or on which is the most appropriate for the type of audio file (see [14]). Orchids usually proposes several orchestration solutions in the form of a musical score. The program uses the Bach library (http://www.bachproject.net) for the symbolic representations. It is also possible to listen to the solutions, thanks to the audio samples contained in the database, before exporting the interesting orchestrations. Furthermore, the user can start to construct and edit the composition directly inside the program, in addition to exporting it afterwards.
In this section, we have discussed several approaches that try to incorporate the complex problem of orchestration into computer-assisted orchestration systems. The latest program, Orchids, presents promising improvements for the problem of computer orchestration and is set to be a powerful tool for computer-aided orchestration. However, a problem we found in these systems is how to classify the solutions or, in other words, how to guide the system to match the kind of orchestration or sound we want. The aforementioned systems usually propose several solutions, and the user can spend a lot of time before the ‘best’ orchestration is found. In the next section of this paper, we discuss possible solutions to these questions and introduce ideas to improve computer-assisted orchestration systems.

3 Future Developments of Orchestration Systems


As described in the previous section, computer-aided orchestration systems have evolved during the last ten years. With regard to the current state of the art, Orchids presents the most efficient and interesting approaches to solving some problems of orchestration. However, Orchids produces numerous solutions for orchestrating a given sound, which are tedious and time-consuming to go through.

We believe the next step in computer-aided orchestration is to focus on how to personalise systems to a user’s style, in order to offer more appropriate orchestration solutions. We do not want to completely restrain the solutions, as chance or surprise is part of the compositional process. The ability to obtain all possible solutions should still be available, but systems could be more efficient with regard to achieving composer-specific musical ideas.
Timbre characteristics are important in composing for an orchestra, as it involves writing for instruments that play simultaneously, thus creating a unique new sound. From this observation, we believe timbre can be a useful criterion for filtering the solutions proposed by a system. This could help the composer achieve the orchestration they have in mind. In order to discuss this approach, we first need to define the notion of timbre; we then introduce our ideas for filtering the solutions proposed by a computer-assisted orchestration system.

3.1 Timbre
As we are working on computer-aided orchestration, we focus only on instrumental music; therefore, in attempting to define the term timbre, we omit the timbral characteristics of electroacoustic music. The notion of musical timbre is complex and has been widely discussed in recent decades (see [21] or [34] for examples). However, the American Standards Association [2] suggests the following definition: “Timbre is that attribute of auditory sensation in terms of which a listener can judge that two sounds similarly presented and having the same loudness and pitch are dissimilar”. Furthermore, a note to the definition adds: “Timbre depends primarily upon the spectrum of stimulus, but it also depends upon the wave form, the sound pressure, and the frequency location of the spectrum of the stimulus”. To summarise, timbre is the set of sound properties that enables us to distinguish and recognise one instrument’s sound from another.

3.2 Filtering Solutions With Timbral Characteristics


As discussed in the previous section, the term timbre is not easily defined. However, using timbre to compose music is a widespread practice [12] [6]. Timbre is also an important notion in the complex process of writing for an orchestra. The instrumental mixture is a fusion of the timbres of the individual instruments’ sounds. Effects emerging from these fusions are musically interesting, and composers often integrate this aspect into their orchestral processes. This notion of instrumental mixtures is incorporated in the matching algorithms used in computer-aided orchestration systems.
Composers are not necessarily acousticians, and the psychoacoustic properties used in computer-aided orchestration systems are not always explicit. Hence, it is not very intuitive for composers to use these parameters to define the type of solutions they want. From this observation, we decided to use timbre to propose a method for filtering orchestration solutions. Composers may be looking for specific perceptions of the timbres emerging from their instrumental mixtures in order to achieve their musical ideas. We propose to offer the user

the possibility to choose the type of solutions according to verbal descriptors of timbral qualities.
Terms like brightness or roughness are everyday words used to describe timbre and its perception. These terms are more explicit than their correlated acoustic features (e.g. spectral centroid, critical bands, etc.). In his Ph.D. thesis, Duncan Williams compiled a list of different timbral attributes with their associated acoustic cues [40], which we decided to use as the initial timbral attributes to examine and implement in our method.
We decided to use the most advanced computer-aided orchestration system so far: Orchids. This system proposes several interesting orchestration solutions, but we think too many solutions can sometimes be unproductive, as the user would spend a considerable amount of time listening to them before finding the ‘perfect’ one. This is one of the reasons why we propose to design a new approach to classify the solutions using timbral attributes. We are aware that the perception of sound can vary from person to person, so we decided to use the available literature on each timbral quality as a starting reference for our system.
Our algorithm uses the solutions generated by Orchids as its input. The first step is to analyse all the generated solutions. Then, we use the literature on each timbral attribute to specify the acoustic features to extract from the sound files. Finally, the solutions are indexed by the selected timbral attribute. For our preliminary development, we chose to implement two timbral attributes to test the feasibility of our idea: brightness and roughness.
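These steps can be sketched as a small pipeline. The feature extractors below are hypothetical stand-ins that read pre-computed values; a real implementation would analyse the audio of each solution:

```python
# Sketch of the proposed filtering pipeline. The extractor functions
# are hypothetical stand-ins for real audio-analysis routines.

def estimate_brightness(solution):
    # stand-in: a real implementation would compute the spectral
    # centroid of the rendered solution
    return solution["centroid_hz"]

def estimate_roughness(solution):
    # stand-in for a Sethares-style roughness estimate
    return solution["roughness"]

EXTRACTORS = {"brightness": estimate_brightness,
              "roughness": estimate_roughness}

def index_solutions(solutions, attribute):
    """Index orchestration solutions from the most to the least
    pronounced value of the chosen timbral attribute."""
    feature = EXTRACTORS[attribute]
    return sorted(solutions, key=feature, reverse=True)

# Toy solutions with pre-computed features.
sols = [{"name": "A", "centroid_hz": 800.0, "roughness": 0.4},
        {"name": "B", "centroid_hz": 1500.0, "roughness": 0.1}]
ranked = index_solutions(sols, "brightness")
```

With these toy values, solution B (higher centroid, hence brighter) is ranked before solution A.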
According to the available literature on the perception of brightness, this attribute is highly correlated with the spectral centroid [18] [10] [31] [32]. To approximate the level of brightness of the solutions, we calculate the spectral centroid of each sound file. For a power spectrum with components P_i at frequencies f_i, the spectral centroid F_c is defined as

F_c = (Σ_i f_i P_i) / (Σ_i P_i)     (1)

and F_c is a frequency. The higher the frequency, the brighter the sound. Hence, we index the solutions, by their spectral centroid, from the brightest to the least bright.
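Equation (1) translates directly into code. The sketch below computes the centroid from the power spectrum of a mono signal; it is a minimal illustration, not the exact analysis chain of our system:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centroid F_c = sum(f_i * P_i) / sum(P_i), computed
    from the power spectrum of a mono signal, as in Eq. (1)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2                # P_i
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)  # f_i
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A pure 440 Hz sine has all its power at 440 Hz, so its centroid
# should land close to 440.
sr = 8000
t = np.arange(sr) / sr
fc = spectral_centroid(np.sin(2 * np.pi * 440 * t), sr)
```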
For the second timbral attribute, we chose to implement the perception of roughness. This timbral quality is correlated with beats between two partials of a sound, critical bands and partials above the 6th harmonic [15] [37] [4] [33]. We decided to use the mirroughness function from MIRtoolbox v1.6.1 (available at https://goo.gl/d61EO0) [22], a set of functions written in Matlab (http://www.mathworks.com/products/matlab). The mirroughness function implements three methods of estimating the roughness of a sound. The first method is based on an estimation proposed by Sethares [33]. The second method is a variant of the Sethares model proposed by Weisser and Lartillot [39]. The last method is also a variant of the Sethares model, developed by Vassilakis [38].
As with the brightness algorithm, we index the solutions from the roughest sound to the least rough, based on their respective roughness values.
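For illustration, the pairwise-dissonance idea behind Sethares' model can be sketched as follows. The constants follow common statements of the model, and this is only a simplified approximation of the principle, not what mirroughness computes:

```python
import math
from itertools import combinations

def sethares_roughness(partials):
    """Roughness estimate in the spirit of Sethares' model: each pair
    of partials (f, a) contributes a1*a2*(exp(-b1*s*df) - exp(-b2*s*df)),
    where df is the frequency difference and s is scaled by the lower
    frequency of the pair."""
    b1, b2 = 3.5, 5.75
    total = 0.0
    for (f1, a1), (f2, a2) in combinations(sorted(partials), 2):
        s = 0.24 / (0.021 * f1 + 19.0)  # frequency-dependent scaling
        df = f2 - f1
        total += a1 * a2 * (math.exp(-b1 * s * df) - math.exp(-b2 * s * df))
    return total

# Two close partials beat against each other and should be rougher
# than two widely separated ones.
close = sethares_roughness([(440.0, 1.0), (470.0, 1.0)])
far = sethares_roughness([(440.0, 1.0), (880.0, 1.0)])
```

The model peaks for small frequency differences and decays as the partials separate, which matches the intuition that beating between nearby partials drives perceived roughness.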

4 Final Remarks

In this paper, we surveyed the evolution of the computer-assisted orchestration field, initiated by the spectral music movement in the early 1970s. Several composers associated with the spectral movement, including Gérard Grisey, Tristan Murail and Johannes Fritsch, to name but three, started to work on the spectrum of sound, using emerging technology to help them achieve their compositions for ensembles or orchestras. Timbre, defined in Section 3.1, plays an important part in the process of writing for an orchestra, as it involves several instruments that play simultaneously.
During the last ten years, researchers have proposed a few computer-aided orchestration systems that work with a target (an audio file or an abstract target) and a database of instruments, containing either instrumental properties or analysed and indexed audio samples. The user has the ability to specify the composition of the desired orchestra. These systems analyse the target, try to match its characteristics with the instrumental information contained in the database and then output the orchestration solutions.
We believe that one of the next areas of development for computer-aided orchestration systems is to focus on the orchestration solutions. An area of research could be to try to propose solutions related to the musical idea of the user, rather than proposing all the correct solutions in terms of instrumental mixture. The path we decided to take to filter the solutions is to use verbal descriptors of timbral qualities. This gives the user the ability to guide the system to output the solutions that match their ideal kind of sound.
Our preliminary implementation uses the solutions generated by Orchids. Our system analyses the sound files and extracts acoustic features related to two timbral attributes: brightness and roughness. After the analysis is done, the solutions are indexed from the brightest or roughest sound to the least bright or rough sound. The next steps for our system are to implement more timbral attributes and to test the efficiency of our indexing. Furthermore, in terms of computational efficiency, this method of filtering would be better implemented directly in the search algorithm of the computer-assisted orchestration system, instead of performing the analysis and classification afterwards.
Another approach to personalising the solutions proposed by these systems could be to learn the preferences of the user/composer, in order to propose solutions related to his or her musical style. Adding an artificial intelligence approach to computer-assisted orchestration systems could be beneficial for improving the generated solutions. These ideas need to be explored to move towards more intelligent orchestration systems.

5 Acknowledgments

This research is supported by an AHRC 3D3 Centre for Doctoral Training Ph.D.
studentship.

References

1. Adler, S.: The Study of Orchestration, 3rd edn. WW Norton (2002)
2. American Standards Association: Acoustical Terminology, Definition 12.9, Timbre
(1960)
3. Assayag, G., Rueda, C., Laurson, M., Agon, C., Delerue, O.: Computer-assisted composition at IRCAM: from PatchWork to OpenMusic. Computer Music Journal 23(3), 59–72 (1999)
4. Aures, W.: A procedure for calculating auditory roughness. Acustica 58(5), 268–281 (1985)
5. Barlow, C.: On the spectral analysis of speech for subsequent resynthesis by acous-
tic instruments. In: Forum phoneticum. vol. 66, pp. 183–190. Hector (1998)
6. Barrière, J.B.: Le Timbre: métaphore pour la composition. Christian Bourgois
(1991)
7. Berlioz, H.: Traité d’instrumentation et d’orchestration. Henri Lemoine, Paris, 2e
edn. (1855)
8. Carpentier, G.: Approche computationnelle de l’orchestration musicale. Ph.D. thesis, Université Pierre et Marie Curie (UPMC) and IRCAM, Paris (2008)
9. Carpentier, G., Bresson, J.: Interacting with symbol, sound, and feature spaces in
orchidée, a computer-aided orchestration environment. Computer Music Journal
34(1), 10–27 (2010)
10. Disley, A.C., Howard, D.M., Hunt, A.D.: Timbral description of musical instru-
ments. In: International Conference on Music Perception and Cognition. pp. 61–68
(2006)
11. Dufourt, H.: Musique spectrale: pour une pratique des formes de l’énergie.
Bicéphale (3), 85–89 (1981)
12. Erickson, R.: Sound structure in music. Univ of California Press (1975)
13. Esling, P.: Multiobjective time series matching and classification. Ph.D. thesis, Université Pierre et Marie Curie (UPMC) and IRCAM, Paris (2012)
14. Esling, P., Bouchereau, A.: Orchids: Abstract and temporal orchestration software.
IRCAM, Paris, first edn. (November 2014)
15. Fastl, H., Zwicker, E.: Psychoacoustics: Facts and models, vol. 22. Springer Science
& Business Media (2007)
16. Hiller, L., Isaacson, L.: Experimental Music: Composition with an Electronic Computer. McGraw-Hill, New York (1959)
17. Hummel, T.A.: Simulation of human voice timbre by orchestration of acoustic
music instruments. In: Proceedings of International Computer Music Conference
(ICMC). p. 185 (2005)
18. Johnson, C.G., Gounaropoulos, A.: Timbre interfaces using adjectives and adverbs.
In: Proceedings of the 2006 conference on New interfaces for musical expression.
pp. 101–102. IRCAM—Centre Pompidou (2006)
19. Kennan, K.: The technique of orchestration. Prentice Hall, New Jersey (1952)
20. Koechlin, C.: Traité de l’Orchestration. Max Eschig, Paris (1943)

21. Krumhansl, C.L.: Why is musical timbre so hard to understand. Structure and
perception of electroacoustic sound and music 9, 43–53 (1989)
22. Lartillot, O.: MIRtoolbox 1.6.1 User’s Manual. Aalborg University, Denmark
(2014)
23. Malherbe, C.: The OM Composer’s Book, vol. 2, chap. Locus: rien n’aura eu lieu
que le lieu. Editions Delatour, France (2008)
24. Maresz, Y.: On computer-assisted orchestration. Contemporary Music Review
32(1), 99–109 (2013)
25. Nouno, G., Cont, A., Carpentier, G., Harvey, J.: Making an orchestra speak. In:
Sound and Music Computing (SMC). Porto, Portugal (2009)
26. Pennycook, B.W.: Computer-music interfaces: a survey. ACM Computing Surveys
(CSUR) 17(2), 267–289 (1985)
27. Poller, T.R.: Clarence Barlow’s technique of ‘synthrumentation’ and its use in Im Januar am Nil. Tempo 69(271), 7–23 (January 2015)
28. Psenicka, D.: SPORCH: an algorithm for orchestration based on spectral analyses of recorded sounds. In: Proceedings of the International Computer Music Conference (ICMC). p. 184 (2003)
29. Roads, C.: The computer music tutorial. MIT press (1996)
30. Rose, F., Hetrick, J.: Spectral analysis as a resource for contemporary orchestration technique. In: Proceedings of the Conference on Interdisciplinary Musicology (2005)
31. Schubert, E., Wolfe, J.: Does timbral brightness scale with frequency and spectral
centroid? Acta acustica united with acustica 92(5), 820–825 (2006)
32. Schubert, E., Wolfe, J., Tarnopolsky, A.: Spectral centroid and timbre in complex,
multiple instrumental textures. In: Proceedings of the international conference on
music perception and cognition, North Western University, Illinois. pp. 112–116
(2004)
33. Sethares, W.A.: Tuning, timbre, spectrum, scale. Springer (1998)
34. Smalley, D.: Defining timbre—refining timbre. Contemporary Music Review 10(2),
35–48 (1994)
35. Tardieu, D.: Modèles d’instruments pour l’aide à l’orchestration. Ph.D. thesis, Université Pierre et Marie Curie (UPMC) and IRCAM, Paris (2008)
36. Tardieu, D., Carpentier, G., Rodet, X.: Computer-aided orchestration based on
probabilistic instruments models and genetic exploration. In: Proceedings of Inter-
national Computer Music Conference, Copenhagen, Denmark (2007)
37. Terhardt, E.: On the perception of periodic sound fluctuations (roughness). Acta
Acustica united with Acustica 30(4), 201–213 (1974)
38. Vassilakis, P.N.: Perceptual and physical properties of amplitude fluctuation and
their musical significance. Ph.D. thesis, University Of California, Los Angeles
(2001)
39. Weisser, S., Lartillot, O.: Investigating non-western musical timbre: a need for joint
approaches. 3rd International Workshop on Folk Music Analysis (2013)
40. Williams, D.: Towards a Timbre Morpher. Ph.D. thesis, University of Surrey (2010)
