
A visual representation of music

Guillermo Cornejo Suárez
Escuela de Ingeniería Eléctrica
Universidad de Costa Rica
PRIS-LAB collaborator
Email: gmocornejos@gmail.com

Fabián Meléndez Bolaños
Escuela de Ingeniería Eléctrica
Universidad de Costa Rica
PRIS-LAB collaborator
Email: fabmelb@gmail.com

Abstract: We attempt to create an intelligent bio-inspired system capable of identifying different structures in a song in order to create a visual representation of music. The representation establishes a relation between sound and color based on the physical properties of waves and generates an aesthetic result in which tone, pitch, and volume keep a relation with color, texture, and shape, respectively.

July, 2014

I. INTRODUCTION
Finding a reliable visual representation of music is a cutting-edge research field that depends on transdisciplinary work from neuroscience, engineering, computer science, psychology, music, and the visual arts. Research in this area could be fundamental to clarifying complex phenomena such as synesthesia, in which two senses resonate together, for example hearing and vision, or touch and vision [1].
There is evidence of matches between music and colors that are mediated by emotional associations. Faster music in the major mode is usually related to color choices that are more saturated, lighter, and yellower, whereas slower music in the minor mode is related to the opposite pattern (choices that are desaturated, darker, and bluer) [3].
Color-music investigations are closely related to color-hearing synesthesia. Cytowic explained that the ability to hear colors is very rare [1]; he estimated that only about one person in 25,000 has it. However, Nagata et al. [2] report that the color-hearing sense is a general sense that everyone has. So, in a way, as Palmer et al. [3] concluded, there is a common sense for relating color and music.
Another related concept is color music. According to Poast [4], color music attempts to create a notation system composed of painted colors and shapes to provoke musical responses: it is a complex representation of musical composition ideas in a visually fixed form. The system is based on the idea that color sensations can trigger correlations to musical sound in a performer who is sensitive to visual experience. By combining visual and aural stimuli, the system and its use can result in an extremely powerful form of expression.

A visual representation of music could be applied in several fields, such as music therapy for disabled children and new music learning techniques, as well as in synesthetic, stimulus, or perception studies; in the future it could also help bio-inspired systems mix visual and audio data towards better human-computer interaction.

A. Related work
In two papers, Kawanobe et al. first investigate the corresponding affect between music and color [8] in order to realize a media conversion between music and image according to feeling. This research led to the creation of an algorithm for media conversion [10] based on the feelings and perceptions of a small group of people, possibly not a representative sample of the population, and it relies on other valuable psychological elements to describe music, such as words, which are not directly computable information.
Husain et al. [6] proposed a framework for visualising music mood using texture images. The goal was to help users find new music. This framework could be used to establish a relation between music and texture.
Shimotomai et al. [11] proposed a mapping model from chord to color. Based on a questionnaire applied to ordinary people, they created a linear statistical model for chord-to-color mapping; the result was a table relating a small number of chords to colors in HLS color coordinates.

II. A VISUAL REPRESENTATION OF MUSIC BASED ON PHYSICAL RELATIONS

Many works in the field propose to establish a relation between music and color based on perceptions or psychological elements, resulting in complex tables or algorithms. We propose instead to follow a relation based on the physical correspondence between sound and light waves. Pérez and Gilabert [9] derived a simple logarithmic relation between the frequency of a sound (related to the tone of the music) and a light wavelength, given by the equation:


c = 72.135 ln(340/f) + 577.76    (1)
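As a minimal illustration, the following Python sketch (our own helper, not part of the original system) evaluates this relation for a given frequency; it assumes f is the sound frequency in Hz and that the result is a light wavelength in nanometres.

```python
import math

def frequency_to_wavelength_nm(f_hz: float) -> float:
    """Evaluate Eq. (1): map a sound frequency (Hz) to a light wavelength.

    340/f is the acoustic wavelength (speed of sound over frequency); the
    logarithmic fit places the result in the visible range (assumed nm).
    """
    return 72.135 * math.log(340.0 / f_hz) + 577.76

# Example: A4 (440 Hz) maps to roughly 559 nm, a greenish wavelength.
print(frequency_to_wavelength_nm(440.0))
```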

The use of texture to visualize music mood was proposed by Husain et al. [6], but it uses emotional perceptions to establish the relations, for example linking cheerful and elegant moods to smooth textures, or rough textures to aggressive genres such as rock. Even so, it could be used as a basis for finding a relation between pitch and texture, and a more mathematical treatment of the common elements of rough or smooth music could lead to a relation better suited to our intentions. Smith and Williams [5] proposed representing volume as the relative size of displayed spheres: each sphere's radius is restricted to the range [0, 127], corresponding to the minimum and maximum volume values for a tone.
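A minimal sketch of this volume-to-size idea, assuming MIDI-style volume values in the range [0, 127] and a normalized maximum radius (both assumptions ours):

```python
def volume_to_radius(volume: int, max_radius: float = 1.0) -> float:
    """Scale a MIDI-style volume value (0-127) to a relative sphere radius."""
    clamped = max(0, min(127, volume))
    return max_radius * clamped / 127.0
```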


Fig. 1: Block diagram for creating the proposed visual representation of music
Fig. 1 shows the block diagram for creating the proposed visual representation. The process starts with the sound capture. The image display must occur in less than the human reaction time, around 100 ms, so the process records only a minimum number of samples. It also stores roughly the last minute of audio to perform an analysis. This analysis consists of a frequency-domain transform followed by a search for the minimum window capable of holding the shortest constant-frequency segment present in the song; in other words, it looks for the minimum window capable of holding the shortest playable note, which could be a sixty-fourth note at 270 beats per minute in an especially fast piece.
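As a rough sketch of this stage (our own illustration, with assumed names and parameters), the dominant frequency of a window can be found with an FFT, and the sixty-fourth-note bound gives the window duration; we assume here that the tempo refers to quarter-note beats per minute.

```python
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Return the strongest frequency (Hz) in a block of audio samples."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

def analysis_window_seconds(bpm: float = 270.0) -> float:
    """Duration of a sixty-fourth note at the given tempo.

    A quarter-note beat lasts 60/bpm seconds; a sixty-fourth note is one
    sixteenth of that, bounding the smallest window worth analysing.
    """
    return (60.0 / bpm) / 16.0
```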
Note that the process of creating the visual representation can be understood as a sampling of the music, using the window determined above as the sampling period. In order to respect the Nyquist-Shannon sampling theorem, the number of frames per second of the display must be at least double the sampling frequency, i.e. each frame must be shown for less than half the window determined above. Finally, for each window we determine a corresponding color using the relation found by Pérez and Gilabert [9], shown in Equation (1), and translate this wavelength information to the RGB color space.
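The translation from wavelength to RGB is not specified in detail; one common piecewise-linear approximation of the visible spectrum, shown below only as an illustrative sketch (the breakpoints and naming are our assumptions), is:

```python
def wavelength_to_rgb(nm: float) -> tuple[float, float, float]:
    """Approximate RGB components (0-1) for a visible wavelength in nm."""
    if 380 <= nm < 440:
        r, g, b = (440 - nm) / 60, 0.0, 1.0   # violet to blue
    elif 440 <= nm < 490:
        r, g, b = 0.0, (nm - 440) / 50, 1.0   # blue to cyan
    elif 490 <= nm < 510:
        r, g, b = 0.0, 1.0, (510 - nm) / 20   # cyan to green
    elif 510 <= nm < 580:
        r, g, b = (nm - 510) / 70, 1.0, 0.0   # green to yellow
    elif 580 <= nm < 645:
        r, g, b = 1.0, (645 - nm) / 65, 0.0   # yellow to red
    elif 645 <= nm <= 780:
        r, g, b = 1.0, 0.0, 0.0               # red
    else:
        r, g, b = 0.0, 0.0, 0.0               # outside the visible range
    return (r, g, b)
```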
As the concept of color music embraces, the representation of music involves a mixture of color, texture, patterns, and shapes. To take an abstract yet mathematical approach towards creating a visual representation of music, we decided to create fractals, specifically Julia set fractals, which are closely related to the Mandelbrot set. Essentially, this is a way of studying the behavior of complex numbers under an iterative function and comparing the modulus of the result with a certain value, which defines a region of the complex plane.
In order to represent the colors and shapes of our music visualization, we take three color values (in RGB notation) from which we produce the images. For this, we modified the usual Julia set approach into an algorithm that compares the modulus of the resulting complex numbers with more than one value, obtaining different shapes and areas, which are painted with the three different colors. The movement of the fractals, used to create moving shapes, is achieved by changing a constant in the Julia set's iterative function following the pattern of a rhodonea curve (a sinusoid plotted in polar coordinates). One of the generated frames can be seen in Fig. 2.
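A minimal sketch of this multi-threshold Julia rendering (our own illustrative code; the iteration bound, view window, thresholds, and rose-curve parameters are assumptions, not the values used in the actual system):

```python
import numpy as np

def julia_frame(width, height, c, thresholds, max_iter=60):
    """Render one Julia-set frame, classifying pixels by final modulus.

    Returns an integer image with values 0..len(thresholds); each value
    indexes one of the colors used to paint the corresponding region.
    """
    x = np.linspace(-1.5, 1.5, width)
    y = np.linspace(-1.5, 1.5, height)
    z = x[np.newaxis, :] + 1j * y[:, np.newaxis]
    for _ in range(max_iter):
        mask = np.abs(z) < 2.0        # iterate only points that have not escaped
        z[mask] = z[mask] ** 2 + c    # the Julia iteration z -> z^2 + c
    modulus = np.abs(z)
    # Count how many thresholds each pixel's modulus exceeds.
    return sum((modulus > t).astype(int) for t in thresholds)

def rhodonea_constant(t, petals=4, radius=0.8):
    """Julia constant c moving along a rhodonea curve r = radius * cos(petals * t)."""
    r = radius * np.cos(petals * t)
    return complex(r * np.cos(t), r * np.sin(t))

# Example: one frame with three modulus thresholds, hence three colored regions.
frame = julia_frame(320, 240, rhodonea_constant(0.3), thresholds=(1.0, 2.0, 3.0))
```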

Fig. 2: One of the generated frames. Color is obtained by analysing the principal frequencies of the sound, and shape is created using fractals.
III. FUTURE WORK
To improve the current implementation, more statistical methods should be considered in the analysis of the song, detecting even more complex features such as pitch. Simple features such as amplitude should also be part of the algorithm. A current obstacle to a real-time algorithm for live music is the processing time of the images, which needs to be faster; alternatively, the system could be implemented with colored light sets.
REFERENCES
[1] R. E. Cytowic. The Man Who Tasted Shapes. Soushisya, Tokyo, 2002.
[2] N. Nagata and Shokuchi. Mapping extraction of color-heard persons. Japan, pp. 97-104, 2002-MUS-4717, 2002.
[3] Stephen E. Palmer, Karen B. Schloss, Zoe Xu, and Lilia R. Prado. Music-color associations are mediated by emotions. Proceedings of the National Academy of Sciences, Vol. 110, No. 22, pp. 8836-8841, May 13, 2013.
[4] Michael Poast. Color Music: Visual Color Notation for Musical Expression. Leonardo, The MIT Press, Vol. 33, No. 3, pp. 215-221, 2000.
[5] Sean M. Smith and Glen N. Williams. A Visualization of Music. Proceedings of the 8th IEEE Visualization '97 Conference, 1997.
[6] Adzira Husain, Mohd Fairuz Shiratuddin, and Kok Wai Wong. A Proposed Framework for Visualising Music Mood using Texture Image. 3rd International Conference on Research and Innovation in Information Systems, 2013.
[7] Kazuhiro Yamawaki and Hisao Shiizuka. Correlation of Synesthesia and Common Recognition Concerning Music and Color. IEEE International Conference on Systems, Man and Cybernetics, 2004.
[8] Makoto Kawanobe, Masashi Kameda, and Makoto Miyahara. Corresponding Affect between Music and Color. IEEE International Conference on Systems, Man and Cybernetics, Vol. 5, 2003.
[9] J. Pérez and E. Gilabert. Color y música: Relaciones físicas entre tonos de color y notas musicales. Óptica Pura y Aplicada, 43 (4), pp. 267-274, 2010.
[10] Makoto Kawanobe and Masashi Kameda. Development of an Algorithm for Media Conversion between Music and Color Combination based on Impressions. 6th International Conference on Information, Communications & Signal Processing, 2007.
[11] Takayuki Shimotomai, Takashi Omori, Eriko Aiba, Takashi X. Fujisawa, and Noriko Nagata. Mapping Model from Chord to Color. Joint 6th International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), Kobe, Japan, 2012.
