
AUTOMATIC MUSICAL FOUNTAIN SCENARIO GENERATION
USING MUSICAL INFORMATION ANALYSIS

Min-Joon Yoo, Yonsei University, Department of Computer Science
In-Kwon Lee, Yonsei University, Department of Computer Science
ABSTRACT

In this paper, we introduce our project, the 'Intelligent Musical Fountain Authoring System', a system that automatically generates musical fountain scenarios by analyzing the musical information in the accompanying music. The onset and beat information of the musical piece is retrieved, and its musical structure is analyzed, to provide temporal information for scenario generation. New scenarios can then be generated using a Bayesian network built from sample scenarios. Through the guidance of the musical information in the accompanying music, musical fountain scenarios can be generated much faster than by manual authoring.

1. INTRODUCTION

Majestic musical fountains such as the Bellagio Music Fountain in Las Vegas and the Magic Fountain of Montjuic in Barcelona are greatly appreciated by spectators. A profusion of water jets and colored lights gives these fountains a spectacular appearance. What distinguishes them, however, is that their displays are synchronized with accompanying music.

Musical fountains can produce a romantic or dynamic atmosphere by changing the shape of the water bursts in synchrony with the music. Thus, the major difference between a musical fountain and an ordinary fountain is that a careful choreography, a scenario of changes in the shape of the water bursts set to music, has to be created.

Usually the scenario is created by experts, because knowledge of the water burst shapes produced by the various nozzles and a careful analysis of the music are both required to make a musical fountain scenario. Since creating such a scenario is quite challenging and time-consuming, the cost of scenario generation is usually very high.

Due to this high cost, many musical fountains play a small number of short musical pieces repeatedly. Frequently repeated pieces reduce the interest of the spectators and thus lessen the practical utility of the musical fountain.

Our project, the 'Intelligent Musical Fountain Authoring System', aims to generate musical fountain scenarios automatically by using an example-based approach and musical information analysis.

A model is built by analyzing musical fountain scenario samples created by experts. New scenarios are then synthesized from the data in this model under the guidance of the semantic information of an arbitrary piece of accompanying music. This information includes several types of temporal data, such as onsets, beats and musical structure boundaries.

There are a few software and hardware components that control the water burst shapes automatically according to the musical content. However, most previous systems are limited in that they control the height of the water bursts using only the volume or the frequency content of the accompanying music. In our system, scenarios of higher quality can be generated, because the generated scenarios are based on example scenarios made by experts and are produced by applying several forms of semantic information extracted from the music.

2. MUSICAL INFORMATION ANALYSIS

The quality of a fountain scenario depends heavily on the synchronization of the music and the water burst shapes. Our system achieves synchronization by extracting the temporal features of the music that are appropriate for scenario generation. This synchronization task is normally performed by experts and is very time consuming. Our system greatly reduces the time required for scenario generation by automating the synchronization task with various music information retrieval techniques. The system calculates three temporal features of the music, and the timing of the changes in water burst shape is based on these features.
2.1. Onset Detection

Locating the exact starting time of a musical event such as a note is a fundamental operation in any attempt to identify musical events in audio signals. Some types of synchronization in musical fountain scenarios can be obtained by changing the water burst shapes at the onset times of musical events. A range of onset detection algorithms is surveyed in [1, 2]. Among the several ways to detect onsets, we selected spectral flux, which can be computed quickly and accurately. Spectral flux is defined as follows:

    SF(n) = \sum_{k} H( |X(n,k)| - |X(n-1,k)| ),                      (1)

where X(n,k) is the magnitude of the n-th frame and k-th frequency bin of the frequency-domain representation of the audio signal, and H(x) = (x + |x|)/2 is the half-wave rectifier function.

A peak-picking process is then used to select the exact onsets, which must fulfil the following condition:

    SF(n) \geq SF(m)   for all m s.t. n - w \leq m \leq n + w.        (2)

The parameter w controls how many onsets are selected. This parameter can be chosen by the user, and is directly related to the 'busyness' of the resulting fountain scenario.
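To make the computation above concrete, the following sketch (in Python with NumPy) computes the spectral-flux detection function of Eq. (1) from STFT magnitudes and applies the window-based peak picking of Eq. (2). The FFT size, hop size and default window w are illustrative assumptions, not values taken from the paper.

import numpy as np

def spectral_flux_onsets(x, sr, n_fft=2048, hop=512, w=3):
    """Sketch of onset detection by spectral flux and window-based peak picking.
    Frame sizes and the peak-picking window w are illustrative choices."""
    # Short-time Fourier transform magnitudes |X(n, k)|
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    mag = np.empty((n_frames, n_fft // 2 + 1))
    for n in range(n_frames):
        frame = x[n * hop:n * hop + n_fft] * window
        mag[n] = np.abs(np.fft.rfft(frame))

    # Spectral flux (Eq. 1): half-wave rectified frame-to-frame magnitude increase
    diff = np.diff(mag, axis=0)
    sf = np.sum((diff + np.abs(diff)) / 2.0, axis=1)   # H(x) = (x + |x|) / 2

    # Peak picking (Eq. 2): SF(n) must be maximal within a window of +-w frames
    onsets = []
    for n in range(w, len(sf) - w):
        if sf[n] >= sf[n - w:n + w + 1].max():
            onsets.append(n * hop / sr)                 # convert frame index to seconds
    return np.array(onsets)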
2.2. Beat Tracking

Although some synchronization can be obtained by changing the water burst shapes at onset times, it is sometimes not enough to use only the onset data. For example, one may want to change the shape in the middle of a long calm portion of the music, during which no onsets may be detected. Thus, our system also uses beat times, in order to provide more natural timing information for scenario generation.

Beat tracking is a technique for extracting beat data from audio signals. Generally, this type of tracking finds beat times based on onset data.

Our system uses the beat tracking algorithm presented by Ellis [3]. In this algorithm, an onset energy envelope is calculated from the audio signal and the tempo is estimated from it. An objective function combining onset energy and tempo is then defined, and finally a beat sequence that optimizes this objective function is computed using dynamic programming.

The onset energy is calculated as follows. First, the frequency information of the audio signal is extracted using the Fourier transform. An acoustic model is then built by mapping the spectrum onto the Mel scale. After converting the values to decibels, the first-order differences are calculated. These values indicate onsets; that is, they represent increases in energy output.

Tempo estimates corresponding to the most frequent time intervals between onsets are calculated using the autocorrelation of the onset energy. Exploiting the fact that the average listener perceives tempo most readily near 120 bpm, tempo candidates near 120 bpm are weighted more strongly.

Optimized beat sequences are then calculated by defining an objective function with two terms, one for onset energy and one for tempo, and solving it with dynamic programming. The resulting beat sequences therefore fall on times of high onset energy while also staying close to the estimated tempo. One can refer to [3] for a more detailed description of this algorithm.
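The two stages described above can be sketched as follows: tempo estimation from the autocorrelation of the onset-energy envelope, weighted towards 120 bpm, followed by dynamic-programming beat placement that trades onset energy against deviation from the estimated period, in the spirit of Ellis [3]. The exact weighting curve and the penalty constant alpha are illustrative assumptions.

import numpy as np

def estimate_period(onset_env, fps, bpm_center=120.0):
    """Estimate the beat period (in envelope frames) from an onset-energy envelope.
    A log-Gaussian weight around bpm_center models the preference for ~120 bpm."""
    max_lag = int(fps * 2.0)                       # consider periods up to 2 seconds
    acf = np.array([np.dot(onset_env[lag:], onset_env[:-lag])
                    for lag in range(1, max_lag)])
    lags = np.arange(1, max_lag)
    bpm = 60.0 * fps / lags
    weight = np.exp(-0.5 * np.log2(bpm / bpm_center) ** 2)   # illustrative weighting
    return int(lags[np.argmax(acf * weight)])

def track_beats(onset_env, period, alpha=400.0):
    """Dynamic programming beat placement: each beat should fall on high onset
    energy while successive beats stay close to the estimated period."""
    n = len(onset_env)
    score = onset_env.astype(float).copy()
    backlink = np.full(n, -1)
    for t in range(n):
        lo, hi = max(0, t - 2 * period), max(0, t - period // 2)
        if hi <= lo:
            continue
        prev = np.arange(lo, hi)
        penalty = -alpha * np.log(np.maximum(t - prev, 1) / period) ** 2
        best = int(np.argmax(score[prev] + penalty))
        score[t] += score[prev[best]] + penalty[best]
        backlink[t] = prev[best]
    # Trace back from the highest-scoring frame to recover the beat sequence
    beats = [int(np.argmax(score))]
    while backlink[beats[-1]] >= 0:
        beats.append(int(backlink[beats[-1]]))
    return np.array(beats[::-1])

estimate_period returns the beat period in envelope frames; track_beats returns beat frame indices, which can be converted to seconds by dividing by the envelope frame rate.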
2.3. Structure Analysis

The onset and beat data provide local temporal information for scenario generation. The system also uses global temporal information obtained through structure analysis. The accompanying music is segmented using a classical self-similarity matrix and novelty scores [4], and some changes in the scenario are placed at the boundaries of the resulting segments. By setting the size of the kernel used to calculate the novelty scores to 4-8 seconds, structure boundaries that may not be found using only the onset data can be obtained.

We are now implementing more elaborate musical structure analysis techniques to detect the structures more exactly. For example, more accurate boundaries of musical structures can be obtained with Maddage's method [5], in which the musical structure is analyzed using chord patterns and singing-voice boundaries as well as onset data.
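A minimal sketch of this segmentation step, following the general self-similarity and novelty-score approach of Foote [4]: per-frame feature vectors are compared pairwise, and a checkerboard kernel slid along the diagonal of the similarity matrix yields a novelty curve whose local maxima are taken as structure boundaries. The cosine similarity, the feature representation and the simple peak picking are illustrative choices.

import numpy as np

def novelty_boundaries(features, fps, kernel_sec=6.0):
    """Segment boundaries from a self-similarity matrix and a novelty score.
    `features` is a (frames x dims) matrix of per-frame audio features."""
    # Cosine self-similarity matrix S[i, j]
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    S = norm @ norm.T

    # Checkerboard kernel of roughly kernel_sec seconds (4-8 s in the paper)
    half = int(kernel_sec * fps / 2)
    sign = np.ones(2 * half)
    sign[half:] = -1
    kernel = np.outer(sign, sign)            # +1 within segments, -1 across them

    # Novelty score: correlate the kernel along the diagonal of S
    n = S.shape[0]
    novelty = np.zeros(n)
    for i in range(half, n - half):
        novelty[i] = np.sum(kernel * S[i - half:i + half, i - half:i + half])

    # Boundaries are local maxima of the novelty curve
    peaks = [i for i in range(1, n - 1)
             if novelty[i] > novelty[i - 1] and novelty[i] > novelty[i + 1]]
    return np.array(peaks) / fps             # boundary times in seconds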
3. SCENARIO GENERATION

This paper focuses on musical fountain scenario generation by analyzing musical signals and retrieving musical information, so we only briefly explain our scenario generation method here. A more detailed description will be presented in a regular paper to be published soon.

3.1. Constructing the Bayesian network

A Bayesian network is a model of the probabilistic relationships among a set of variables, and it is frequently used for encoding uncertain expert knowledge [6]. We designed a Bayesian network from sample scenarios created by experts, and the network can then be used to generate new musical fountain scenarios automatically.

3.2. Selecting operated nozzles

At each recognized time (onset time, beat time or segment boundary time), the operating probability of each nozzle is calculated using the Bayesian network. The calculation of the operating probability takes into account the current operating states of the nozzles and the volume of the accompanying music. Because the network is probabilistic, the generated scenarios can differ from one attempt to the next; thus, our system generates a variety of diverse scenarios from a single piece of music.
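The paper does not specify the structure of the Bayesian network, so the sketch below only illustrates the general idea in a deliberately reduced form: the probability of each nozzle being active, conditioned on its previous state and a quantised volume level, is estimated from expert-made example scenarios and then sampled at every onset, beat or boundary time. The two-parent structure, the data format and the Laplace smoothing are assumptions made for illustration.

import numpy as np
from collections import defaultdict

class NozzleModel:
    """Toy conditional-probability model for nozzle activation, estimated from
    expert-made scenarios. Parents assumed here: previous state and volume level."""

    def __init__(self, n_nozzles):
        self.n_nozzles = n_nozzles
        # counts[(nozzle, prev_state, volume)] = [count_off, count_on]
        self.counts = defaultdict(lambda: np.ones(2))   # Laplace smoothing

    def fit(self, scenarios):
        """scenarios: list of (states, volumes); states is a (times x nozzles) 0/1
        table, volumes is a per-time quantised volume level."""
        for states, volumes in scenarios:
            for t in range(1, len(states)):
                for j in range(self.n_nozzles):
                    key = (j, states[t - 1][j], volumes[t])
                    self.counts[key][states[t][j]] += 1

    def sample_step(self, prev_state, volume, rng):
        """Sample the on/off state of every nozzle at one onset/beat/boundary time."""
        next_state = np.zeros(self.n_nozzles, dtype=int)
        for j in range(self.n_nozzles):
            c = self.counts[(j, prev_state[j], volume)]
            next_state[j] = rng.random() < c[1] / c.sum()
        return next_state

Because the nozzle states are sampled from probabilities, repeated runs over the same piece produce different but plausible scenarios, matching the behaviour described above.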
4. SIMULATION SYSTEM

Since testing and verifying generated scenarios on an actual fountain is expensive, the system provides a 3D simulation for testing scenarios on a computer monitor. Automatically generated scenarios can be verified immediately using a particle model that simulates the jets of water moving under the influence of gravity, which are then rendered in 3D. We have implemented our own particle dynamics engine, and rendering is performed with GPU-based vertex shading. Figure 1 depicts our system. Using this system, the user can generate, edit and verify musical fountain scenarios.

Figure 1. A screenshot of the 'Intelligent Musical Fountain Authoring System'. Top left: layout of nozzles; top middle: 3D simulation; right: property setting window; bottom left: scenario track.
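As a rough illustration of the particle model mentioned above (a sketch only; the authors' engine is GPU-accelerated and considerably more elaborate), each active nozzle emits particles whose launch speed encodes the desired burst height, and the positions are integrated under gravity each frame:

import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def step_particles(positions, velocities, dt=1.0 / 60.0):
    """Advance water-jet particles one frame under gravity (simple Euler step)."""
    velocities = velocities + GRAVITY * dt
    positions = positions + velocities * dt
    alive = positions[:, 1] >= 0.0           # drop particles that fell below the basin
    return positions[alive], velocities[alive]

def emit(nozzle_pos, height, count=50, spread=0.3, rng=None):
    """Emit particles from a nozzle; the launch speed encodes the burst height."""
    rng = np.random.default_rng() if rng is None else rng
    speed = np.sqrt(2.0 * 9.81 * height)      # speed needed to reach `height`
    directions = rng.normal([0.0, 1.0, 0.0], spread, size=(count, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return np.tile(np.asarray(nozzle_pos, dtype=float), (count, 1)), directions * speed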
5. CONCLUSION AND FUTURE WORK

We presented our ongoing project, the 'Intelligent Musical Fountain Authoring System'. This system can produce fountain scenarios automatically by combining a Bayesian model derived from sample scenarios with an analysis of semantic features of the musical accompaniment.

Though the scenarios produced by the system may not be fully satisfying when compared with scenarios crafted through great effort by experts, automatic generation is much faster, and experts can easily edit and verify the automatically generated scenarios with the graphical interface of our system.

Since scenario generation depends on the content analysis of audio signals, the inclusion of better analysis techniques should lead to improved scenario generation. We are implementing improved analysis techniques that have been presented in music information retrieval journals and conferences.

We are also exploring interactive techniques for controlling fountains. We plan to create fountain shows that accompany music played in real time by implementing real-time techniques such as real-time beat tracking. We believe that a fountain can be used as a performance tool with various types of music, including electronic music.

[To reviewers: This system is now being applied to the musical fountain of the Seongnam Arts Center in South Korea, and the installation will be completed in March 2009. We hope to report more interesting results in the final version of this paper, if accepted.]

6. REFERENCES

[1] Bello, J.P., Daudet, L., Abdallah, S., Duxbury, C., Davies, M., and Sandler, M.B. "A Tutorial on Onset Detection in Music Signals", IEEE Transactions on Speech and Audio Processing, Vol. 13(5), 2005, pp. 1035-1047.

[2] Collins, N. "A Comparison of Sound Onset Detection Algorithms with Emphasis on Psychoacoustically Motivated Detection Functions", AES 118th Convention, 2005.

[3] Ellis, D.P.W. "Beat Tracking by Dynamic Programming", Journal of New Music Research, Vol. 36(1), 2007, pp. 51-60.

[4] Foote, J. "Automatic Audio Segmentation Using a Measure of Audio Novelty", in Proceedings of the IEEE International Conference on Multimedia and Expo, New York, USA, 2000, pp. 452-455.

[5] Maddage, N.C. "Automatic Structure Detection for Popular Music", IEEE Multimedia, Vol. 13(1), 2006, pp. 65-77.

[6] Heckerman, D. "A Tutorial on Learning with Bayesian Networks", in Learning in Graphical Models, M. Jordan, ed., MIT Press, Cambridge, MA, 1999.
