//for examples
(
SynthDef(\acsound,{|freq=440,amp=0.1,dur= 0.2,cutoff=2000|
var sound, filter;
sound= Saw.ar(freq, amp)*EnvGen.kr(Env([0,1,1,0],[0.01,0.8,0.19]*dur), doneAction:2); //a source line was missing in the text; a plain sawtooth with a simple envelope is assumed here
filter= LPF.ar(sound,Line.kr(cutoff,300,dur));
Out.ar(0,filter.dup(2))
}).add;
)
Practical probability
Probability is one of the great tools for algorithmic composition work. Rather than
deterministic works, which are fixed given a starting state, we can create
probabilistic works that differ with every run.
//try these:
2.rand //generates an integer, either 0 or 1
2.rand2 // -2 to 2
These functions make uniform selections: each number in the range, or each option
in an Array, has an equal chance of turning up.
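For instance, the following are all standard uniform selections (the particular values chosen here are arbitrary):

```
rrand(60, 72)           //uniform integer between 60 and 72 inclusive
rrand(0.1, 0.5)         //uniform float when either argument is a float
exprand(100.0, 1000.0)  //exponential distribution, useful for frequencies
[48, 55, 60, 67].choose //uniform choice of one element from the Array
```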
1.0.bilinrand //similar, goes between positive and negative 1.0, more weight
towards 0 in all cases
1.0.sum3rand //sum of 3 uniform random numbers between plus and minus 1.0 (will
come out with more chance of numbers nearer 0, since values can cancel between
positive and negative; in general, a sum of distributions tends to a centre-
weighted normal distribution)
gauss(1.0,0.1) //mean 1.0, most output values within 0.3 (3*0.1) either side, so
0.7 to 1.3
Most often, you use arbitrary weights amongst a discrete set of options. Think of
choosing amongst a set of possible MIDI notes, or dynamic levels, or durations.
When you use wchoose, the array of weights has to add up to 1.0 (a standard feature
of a probability distribution). There is a helper function for this:
[14, 3.7, 5.6, 8, 11].normalizeSum //make array add up to 1.0 by dividing by the
sum of the entries
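A minimal sketch of wchoose combined with normalizeSum, reusing the weights above (the MIDI note values are an arbitrary choice for illustration):

```
//weights need not be pre-normalized if normalizeSum is applied first
[60, 62, 64, 67, 69].wchoose([14, 3.7, 5.6, 8, 11].normalizeSum) //60 is most likely
```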
Finally, for rolls of the dice in decision making, the coin function is very
useful:
0.5.coin //fair coin, equal chance of heads or tails: true or false as output
You can achieve a lot just with controlled use of probability distributions in this
way.
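As a sketch of these pieces working together, here is a short probabilistic melody. It assumes the \acsound SynthDef from the start of these notes; the notes and weights are arbitrary:

```
(
{
	16.do{
		var note = [60, 62, 64, 67, 69].wchoose([0.3, 0.1, 0.2, 0.3, 0.1]);
		if(0.25.coin, { note = note + 12 }); //occasionally jump up an octave
		Synth(\acsound, [\freq, note.midicps, \amp, rrand(0.05, 0.2)]);
		0.25.wait;
	};
}.fork;
)
```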
You may also want to explore using different probability distributions at different
points in time during a piece, perhaps by varying parameters in gauss, changing the
weights for wchoose, or moving between entirely different functions. You can remap
the values to different ranges at different points in time, or restrict which parts
of a distribution you select from. We've seen similar ideas before in terms of time
varying availability; e.g. 'tendency masks' in granular synthesis for allowed
parameter ranges at different points in time.
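One hedged sketch of time-varying weights: linearly interpolating between two weight arrays over the course of a phrase (the pitches and weights are arbitrary, and \acsound is assumed from above):

```
(
{
	var startweights = [0.7, 0.2, 0.1, 0.0];
	var endweights   = [0.0, 0.1, 0.2, 0.7];
	32.do{|i|
		var now = i/31; //position in the phrase, 0.0 to 1.0
		//crossfade the two weight arrays, then renormalize for wchoose
		var weights = ((startweights * (1 - now)) + (endweights * now)).normalizeSum;
		Synth(\acsound, [\freq, [60, 63, 67, 70].wchoose(weights).midicps]);
		0.125.wait;
	};
}.fork;
)
```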
Use of conditionals
//examples:
//deterministic
(
var pitch=60, amp=1.0;
if(pitch==60,{amp= 0.2});
amp
)
//probabilistic
(
var pitch=60, amp=1.0;
if(pitch==60 and: {0.5.coin},{amp= 0.2}); //amp gets set to 0.2 on average half the
time
amp
)
It is for you to build the model of music; there are as many music theories as you
want to explore. Of course, not all correspond well to existing 'styles' or
'genres', and there is much scope for experiment.
Conditional probability
If some event is known to have happened, this gives information about the situation
which restricts what else may happen alongside it. Such reductions from a full
choice within probability space to a more restricted area is the domain of
conditional probability.
P (B | A) = P (A and B) / P (A)
If A is observed, to find P (B | A), look at the probability that both A and B can
happen (the intersection of the areas of the probability space represented by A and
B) relative to the probability of A happening in the first place.
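A quick numeric check of the formula: if P(A) = 0.5 and P(A and B) = 0.2, then

```
(
var pA = 0.5, pAandB = 0.2;
(pAandB / pA).postln; //P(B | A) = 0.4
)
```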
(sidenote: Bayes' theorem follows from the above, as P(B | A) = P(A | B) P(B) / P(A).
Bayes' theorem is useful for calculating one conditional probability in terms of
another; i.e., A might be some observations and B a world state, and Bayes' theorem
then lets us calculate the most likely world state by looking at how well each
potential world state explains the observations)
We can also think about this in terms of an event observed a moment ago further
constraining what could happen next:
In general, decisions can take account of not only the current situation, but the
history of past states. We move away from '0th-order' systems where each choice is
entirely independent of anything else that has happened.
Markovian systems
The idea of the current choice being dependent on past choices is encapsulated in a
Markov system of order n, where n is the number of previous choices at stake.
Simple Markov processes need to keep track of a larger and larger number of
possibilities, in a combinatorial explosion. Suppose there are 3 options at each
time step, say three notes to choose from. Then in general there are 3*3 = 9
transition probabilities to specify. Each increase in order gains another factor
of 3, so a second-order Markov system requires 3*3*3 = 27 probabilities to be set
up, and an Nth-order system needs 3**(N+1) probabilities.
(
var markovmatrix;
var currentstate = 3.rand; //start in a random state

markovmatrix= [
	[0.7,0.2,0.1],
	[0.0,0.5,0.5],
	[0.3,0.4,0.3]
];

{
	20.do{
		Synth(\acsound,[\freq, [48,60,64].at(currentstate).midicps]);
		//which row of the matrix to use depends on the current state;
		//this transition step was missing in the text as printed
		currentstate = [0,1,2].wchoose(markovmatrix.at(currentstate));
		0.25.wait;
	};
}.fork;
)
For fixed and variable order Markovian modeling see also the MathLib and
ContextSnake Quarks.
Search
Exhaustive (brute-force) search works through all options. It can find an optimal
solution, but is usually too computationally intensive.
An early strategy (used back in 1955 by Hiller and Isaacson) was generate and
test: random numbers are generated until they pass a test. The number that passes
becomes the latest choice, and a new selection is then made. Alternatively, we
might restrict generation to only the acceptable options in the first place (by
heuristics).
You should see that any machinery from AI (whether GOFAI symbolic or connectionist)
may be imported to problems of musical search.
(
var generateandtest;
var currentvalue = 60;

generateandtest= {|previous=60|
	var number=rrand(24,127);
	var keeplooking;
	while({
		keeplooking = false;
		//note we could replace this test with just generating number in the
		//allowable range to start with
		if (abs(number-previous)>12) {
			keeplooking= true;
		};
		((number.asString)++(if(keeplooking," rejected"," accepted"))).postln;
		keeplooking
	},{
		//no need to do anything here, all done in while test function
		number=rrand(24,127);
	});
	number
};

{
	20.do{
		currentvalue = generateandtest.(currentvalue);
		Synth(\acsound,[\freq, currentvalue.midicps]);
		0.25.wait;
	};
}.fork;
)
Sonification of mathematics
(
var logisticmap;
var currentvalue = 0.5; //logistic map state must lie between 0.0 and 1.0
var r = 3.74;

logisticmap= {|previous=0.5|
	((1.0-previous)*previous*r).postln;
};

{
	50.do{
		currentvalue = logisticmap.(currentvalue);
		Synth(\acsound,[\freq, 300+(currentvalue*600)]); //map state to a frequency in Hz; the particular mapping here is an assumption
		0.125.wait;
	};
}.fork;
)
The example here demonstrates how the logistic map acts as a generator of values at
the rate required for musical events, much as a UGen is a (usually much faster
running) generator of sample values at audio rate. Analogous networks of number
generation and modification (synthesis and processing) can be formed in
algorithmic composition to determine musical parameter values for event streams.
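As one hedged sketch of such a network, a generator (the logistic map) can feed a modification stage (quantizing to a scale) before reaching the synthesis parameter; the scale and mapping here are arbitrary choices, and \acsound is assumed from above:

```
(
var r = 3.99, x = 0.5;
var scale = [0, 2, 4, 7, 9]; //pentatonic degrees, an arbitrary choice
{
	32.do{
		var degree;
		x = (1.0 - x) * x * r;                     //generation stage
		degree = (x * scale.size).floor.asInteger; //modification stage: quantize to scale
		Synth(\acsound, [\freq, (60 + scale.at(degree)).midicps]);
		0.125.wait;
	};
}.fork;
)
```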
We'll continue this next year in advanced computer music by discussing mappings and
musical modeling in general. For example, we haven't touched here on data-driven
modeling where a corpus is automatically analyzed to create a generative model. You
may still approach such things intuitively, by formulating rules via your own
personal analyses of musical style.