All content following this page was uploaded by In-Kwon Lee on 10 February 2017.
Although some synchronization can be obtained by changing the water burst shapes at onset times, the onset data alone are sometimes not enough. For example, one may want to change the shape in the middle of a long calm portion of the music, during which no onsets may be detected. Thus, our system also uses beat times, in order to provide more natural timing information for scenario generation.

Beat tracking is a technique for extracting beat data from audio signals; generally, it finds beat times based on onset data. Our system uses the beat tracking algorithm presented by Ellis [3]. In this algorithm, the onset energy is calculated from the audio signal and the tempo is estimated from the onsets. An objective function combining onset energy and tempo is then defined, and finally a beat sequence that optimizes this objective function is computed using dynamic programming.

The onset energy is calculated in the following way. First, the frequency content of the audio signal is extracted using the Fourier transform. Then an acoustic model is built by weighting the frequency information with mel-scale constants. After converting the values to decibels, the first derivatives are calculated. These values represent onsets, that is, increases in energy output.

We are now implementing more elaborate musical structure analysis techniques to detect the structures more exactly. For example, more accurate boundaries of musical structures can be obtained with Maddage's method [5], in which the musical structure is analyzed using chord patterns and singing-voice boundaries as well as onset data.

3. SCENARIO GENERATION

This paper focuses on musical fountain scenario generation by analyzing musical signals and retrieving musical information, so we first briefly explain our scenario generation method. A more detailed description will be presented in a forthcoming regular paper.

3.1. Constructing the Bayesian network

A Bayesian network is a model of probabilistic relationships among a set of variables that is frequently used for encoding uncertain expert knowledge [6]. We designed a Bayesian network from sample scenarios created by experts, and the network can now be used to generate new musical fountain scenarios automatically.
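To make the idea concrete, the following is a minimal, hypothetical sketch of drawing nozzle on/off states from conditional probabilities. The variables (music volume level, previous nozzle state), the probability values, and all function names are illustrative assumptions, not the network actually learned from the expert scenarios.

```python
import random

# Illustrative conditional probability table for a single nozzle:
# P(nozzle operates | music volume level, nozzle currently on).
# These numbers are invented for the sketch, not taken from the paper.
CPT = {
    ("loud", True): 0.9,
    ("loud", False): 0.7,
    ("quiet", True): 0.3,
    ("quiet", False): 0.1,
}

def sample_nozzle(volume_level, currently_on, rng=random):
    """Sample one nozzle's next on/off state from its conditional distribution."""
    p = CPT[(volume_level, currently_on)]
    return rng.random() < p

def generate_step(volume_level, states, rng=random):
    """Advance all nozzle states at one recognized time (onset, beat, or boundary)."""
    return [sample_nozzle(volume_level, s, rng) for s in states]
```

Because each step is sampled rather than chosen deterministically, repeated runs over the same piece of music yield different scenarios, which is the behavior described in Section 3.2.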
3.2. Selecting operated nozzles

At each recognized time – onset time, beat time, or segmentation boundary time – the operating probability of each nozzle is calculated using the Bayesian network. This calculation considers the current operating states of the nozzles and the volume of the accompanying music. Because the network is probabilistic, the generated scenarios can differ with each attempt; thus, our system generates various and diverse scenarios for a single piece of music.

4. SIMULATION SYSTEM

Since testing and verifying generated scenarios on an actual fountain is expensive, the system provides a 3D simulation for testing scenarios on a computer monitor. Automatically generated scenarios can be verified immediately using a particle model that simulates the jets of water moving under the influence of gravity and then renders them in 3D. We have implemented our own particle dynamics engine, and rendering is performed by GPU-based vertex shading. Figure 1 depicts our system, in which the user can generate, edit, and verify musical fountain scenarios.

Figure 1. A screenshot of the 'Intelligent Musical Fountain Authoring System'. Upper left: layout of nozzles; upper middle: 3D simulation; right: property setting window; lower left: scenario track.

5. CONCLUSION AND FUTURE WORK

We presented our ongoing project, the 'Intelligent Musical Fountain Authoring System'. This system can produce fountain scenarios automatically by combining a Bayesian model derived from sample scenarios with an analysis of semantic features of the accompanying music.

Though the scenarios produced by the system may not be as satisfying as scenarios crafted with great effort by experts, automatic generation is much faster, and experts can easily edit and verify the automatically generated scenarios with the graphical interface of our system.

Since scenario generation depends on the content analysis of audio signals, better analysis techniques should lead to improved scenario generation. We are implementing improved analysis techniques that have been presented in music information retrieval journals and conferences.

We are also exploring interactive techniques for controlling fountains. We plan to create fountain shows that accompany music playing in real time by implementing techniques such as real-time beat tracking. We believe that a fountain can be used as a performance tool with various types of music, including electronic music.

[To reviewers: This system is now being applied to the musical fountain of the Seongnam Art Center in South Korea and will be completed in March 2009. We hope to report more interesting results in the final version of this paper, if accepted.]

6. REFERENCES

[1] Bello, J.P., Daudet, L., Abdallah, S., Duxbury, C., Davies, M., and Sandler, M.B. "A Tutorial on Onset Detection in Music Signals", IEEE Transactions on Speech and Audio Processing, Vol. 13(5), 2005, pp. 1035-1047.

[2] Collins, N. "A Comparison of Sound Onset Detection Algorithms with Emphasis on Psychoacoustically Motivated Detection Functions", AES 118th Convention, 2005.

[3] Ellis, D.P.W. "Beat Tracking by Dynamic Programming", Journal of New Music Research, Vol. 36(1), 2007, pp. 51-60.

[4] Foote, J. "Automatic Audio Segmentation Using a Measure of Audio Novelty", in Proceedings of the IEEE International Conference on Multimedia and Expo, New York, USA, 2000, pp. 452-455.

[5] Maddage, N.C. "Automatic Structure Detection for Popular Music", IEEE Multimedia, Vol. 13(1), 2006, pp. 65-77.

[6] Heckerman, D. "A Tutorial on Learning with Bayesian Networks", in Learning in Graphical Models, M. Jordan, ed., MIT Press, Cambridge, MA, 1999.