
From Robotics and Biomechanics to Musical Applications

New Ideas in the Compositional and Performing Environment and Beyond

Alexis Perepelycia
Student No. 10848045
aperepelycia01@qub.ac.uk
MA in Sonic Arts, Sonic Arts Research Centre, Queen's University Belfast

Abstract

This paper introduces new approaches to the composition and performance of live electronic music in real time, along with new technologies that can be implemented in the music field. It covers topics such as sensor technology, actuators, human-computer interaction, the use of bio-electrical signals to gain more control over digital environments, robotics and bio-mechanics, and Virtual Reality versus Augmented Reality, among others.

1 Introduction

Even before the introduction of computers into music, musicians and performers were interested in controlling electric instruments, trying not only to achieve the degree of control they had over acoustic instruments but also to explore the unseen possibilities of these technologies. Nowadays, with the fast development of software and hardware in the field of computer music, live performance of electro-acoustic music has become a relatively easy task, and the number of live performers has grown greatly. But there is still a gap in both areas that needs to be filled: the lack of feedback between performers and computers. What is needed is a system that allows us to interact with it in a logical way and, most importantly, that allows us to be inside the system so as to gain more control during the performance; a system that enhances gestural live performance while also accommodating the latest technology.

Until now we have simply had to combine different pieces of hardware to do different tasks. For instance, we need a mixer to mix our sound sources, a computer to run our software, perhaps a MIDI controller to control the computer, and so on. But if we had a better integrated structure, one that allowed us not just to put together the components of a larger system but also to have more control over the actions we perform and the reactions they produce, we might really feel the computer's feedback and achieve full interaction with it.
To follow these paths, the data stream should be pre-defined: not just the implementation of sensor and actuator technology, but also the link between the real (physical) world and the computer environment. Modular software for data gloves should be developed. We can also make use of virtual technology to widen the options almost infinitely, limited only by CPU power.

Technologies coming from medicine or even military research could be implemented to go even further, avoiding all external connections between humans and computers by using bio-sensor implants.

2 a) Sensors

Sensors are the sense organs of machines. Basically, a sensor converts an amount of energy into data. Through a sensor, a machine (e.g. a computer) can receive information and therefore be controlled. Sensors are also the first step in a system of interaction with a digital environment. Sensors are classified by the type of energy they convert. One possible classification would be:

- kinetic
- light
- sound
- temperature
- smell
- humidity
- electricity
- magnetism

A more appropriate classification for musical applications, however, would be according to human output:

- Muscle action (isometric or isotonic): isometric sensors measure pressure, while isotonic sensors measure displacement in two different ways, either contactless (e.g. eye-movement tracking systems) or by physical or mechanical contact (tilt sensors).
- Bio-electricity: technologies such as the electrocardiogram, electromyogram or electroencephalogram are usually employed to capture bio-electricity from the body. Galvanic-skin-response sensors were the first type of bio-electrical sensor developed.
- Blowing: sensors that measure the passage of air.
- Voice: different types of microphones.
- Temperature: temperature sensors such as thermometers.
- Heart rate and blood pressure: various sensors from these areas are commonly used in medicine.
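As an illustration of how such human-output sensor data might reach a digital environment, the following Python sketch normalizes raw readings into the 0-1 control values a synthesis engine typically expects. The sensor names and value ranges are purely hypothetical; no real device is assumed.

```python
# Hypothetical sketch: normalizing raw sensor readings into control data.
# Sensor names and ranges are illustrative, not taken from any real device.

SENSOR_RANGES = {
    "pressure":  (0, 1023),   # isometric muscle action (e.g. a pressure pad)
    "tilt":      (-90, 90),   # isotonic muscle action, mechanical contact
    "breath":    (0, 255),    # air-flow (blowing) sensor
    "heartrate": (40, 200),   # beats per minute
}

def normalize(sensor, raw):
    """Map a raw reading into the 0.0-1.0 range a synthesis engine expects."""
    lo, hi = SENSOR_RANGES[sensor]
    value = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, value))  # clamp out-of-range readings

print(normalize("pressure", 512))   # mid-scale pressure
print(normalize("tilt", 0))         # level tilt sensor
```

A clamping step like this matters in practice, since physical sensors routinely overshoot their nominal range.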

This type of classification is used by electronic composers and sound artists to determine which type of sensor would be most appropriate for a live performance integrating computers with performers, or for a sound installation where some sort of interaction with the environment might be needed.

b) Actuators

Actuators are devices that react to a stimulus received from a sensor or to data received from an interface. Actuators are the other half of a truly interactive system: they receive their input from a sensor and react accordingly.

If a sensor represents the action in a system, an actuator represents the reaction. Together they constitute real interaction with a digital environment or a machine.

Examples of actuators could be:

- Speakers
- Screens
- Electro-active polymers
- Nano-muscles
- Pneumatic systems
- Electro-mechanical film (EMFi)
- etc.

3 The Current System of Interaction vs. the Proposed Model

The current system of interaction between human and computer works in the following way:

1. The human performs an Action through his or her effectors (muscle action, speech, breath, etc.).
2. This Action is perceived by the computer's sensors.
3. The computer reacts through its actuators and sends Feedback to the human.
4. The human perceives this Feedback through the senses and translates it into data.
5. The human then reacts to this data through the effectors.

This system represents a loop-like data stream between human and computer: a constant cycle of action and reaction.
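The loop just described can be sketched in code. This is a minimal illustration only; the class and method names are invented for the example and do not describe any existing system.

```python
# Minimal sketch of the Action/Reaction loop between human and computer.
# All class and method names here are hypothetical.

class Computer:
    def sense(self, action):
        # Sensors convert the human action into data.
        return {"gesture": action}

    def actuate(self, data):
        # Actuators turn data into feedback the human can perceive.
        return f"feedback for {data['gesture']}"

class Human:
    def act(self, intent):
        return intent                       # effectors: muscle, breath, voice

    def perceive(self, feedback):
        return f"adjusted {feedback}"       # senses translate feedback into data

def interaction_loop(human, computer, intent, steps=2):
    """One pass per step: Action -> Sensor -> Actuator -> Perception -> Reaction."""
    trace = []
    for _ in range(steps):
        action = human.act(intent)
        data = computer.sense(action)
        feedback = computer.actuate(data)
        intent = human.perceive(feedback)   # the reaction seeds the next action
        trace.append(intent)
    return trace

print(interaction_loop(Human(), Computer(), "fader move"))
```

The point of the sketch is the closed loop: each reaction becomes the input to the next action, exactly the constant action-reaction cycle described above.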

[Figure: Current System of Interaction]

But this system does not deliver feedback properly, because computers do not address tactual feedback as much as visual (screens) and auditory (generated sound) feedback. Moreover, most of the time the feedback does not correspond to the action performed: you can obtain many different results from the same action. For example, by moving a single fader on a MIDI mixer you could change the volume, pitch, panning, playback speed, filter parameters, envelopes, and so on of a single sound. To perform these tasks on an acoustic instrument you would have to carry out different actions to achieve the corresponding results. For instance, guitar players move the left hand (for right-handed players) across the fretboard to change the pitch of the sound. If they place one finger before the previous sound stops, they can achieve a legato. If the player moves the right hand between the bridge and the fretboard, he or she can draw out different timbres, from more aggressive to mellower. For a shorter sound, the player simply releases the finger pressure and the sound stops. All these simple actions differ drastically from one another, and each yields a different result, mainly because of tactile contact with the instrument. We could therefore state that tactual perception allows us to modify different parameters of sound on acoustic instruments.
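The one-to-many mapping problem described above can be made concrete with a short sketch: the identical physical gesture changes whichever parameter the fader happens to be routed to. The parameter names and the routing scheme are hypothetical.

```python
# Illustrative sketch of the one-to-many mapping problem: the same fader
# gesture changes whichever parameter it is currently routed to.
# Parameter names are hypothetical.

MAPPABLE = {"volume", "pitch", "panning", "filter_cutoff"}

def apply_fader(state, fader_value, active_mapping):
    """The identical physical action yields a different result per mapping."""
    if active_mapping not in MAPPABLE:
        raise ValueError(f"unknown mapping: {active_mapping}")
    new_state = dict(state)
    new_state[active_mapping] = fader_value
    return new_state

state = {}
state = apply_fader(state, 0.8, "volume")
state = apply_fader(state, 0.8, "pitch")   # same gesture, different result
print(state)
```

Unlike the guitarist's distinct physical actions, nothing in the gesture itself distinguishes the two calls; only the invisible routing does, which is exactly the feedback problem the text identifies.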

[Figure: Proposed System of Interaction]

One possible solution to the lack of tactile realness when interacting with computers would be haptic technology. This technology is being developed and implemented to give blind or visually impaired people access to environments like the Internet, letting them achieve things that would be almost impossible otherwise. Thanks to devices such as the SensAble PHANTOM and the Logitech WingMan force-feedback mouse, users can receive tactual feedback while interacting with a computer. The system of interaction would therefore be greatly improved if, for example, we added actuators to a pair of data gloves (as the CyberForce or CyberTouch by Immersion Co. already have): we could then receive tactual feedback (e.g. pulses, sustained vibrations or even force feedback), which could definitely enhance the feel of performing different tasks while interacting with computers and, most importantly, could react differently to the different actions the performer makes.

4 Bio-electrical Signals

Devices that detect bio-electrical signals have been used in medical applications since the 1980s, but humans have long been interested in detecting these signals for other purposes as well. Without doubt the first technology to measure bio-electricity was galvanic skin response (GSR). Introduced in the early 1900s, it has been used for many tasks; its main problem is that it is not reliable enough and lacks accuracy. Later, equipment using bio-electrical signals such as the electroencephalogram (EEG), electromyogram (EMG) and electrocardiogram (EKG) was applied more successfully in medicine. Taking advantage of these properties, different musical applications were developed. The BioMuse (developed by BioControl Systems and adapted by Benjamin Knapp for Atau Tanaka) was introduced in 1992.
The BioMuse is a bio-electric signal controller that allows users to control computer functions directly from muscle, eye-movement or brainwave signals, entirely bypassing standard input hardware such as a keyboard or mouse. It receives data from four main sources of electrical activity in the human body: the muscles (EMG signals), eye movements (EOG signals), the heart (EKG signals) and brain waves (EEG signals). These signals are acquired using standard non-invasive transdermal electrodes. As used by Tanaka, the device implements EMG technology to capture the bio-impulses received by a sensor patched to the performer's forearm. The latest projects involving these technologies have focused on two main fields: wireless connectivity (not wireless Internet access, but attempts to get rid of the cables between the user and the interface or CPU) and the development of reliable systems driven by brain signals. The projects that have gone furthest in the wireless field are certainly Cyborg I (1998) and Cyborg II (2002), led by Prof. Kevin Warwick at Reading University.
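As a hedged illustration of how a raw EMG stream might be reduced to a usable control signal, the following sketch rectifies a bipolar signal and smooths it with a moving average. The window size and sample values are invented for the example; this is not the BioMuse's actual algorithm.

```python
# Sketch of a common EMG conditioning chain: full-wave rectification
# followed by a moving-average smoother. Window size and sample values
# are illustrative only.

def emg_envelope(samples, window=4):
    """Return a smoothed amplitude envelope suitable as a control signal."""
    rectified = [abs(s) for s in samples]        # EMG is bipolar; rectify first
    env = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        env.append(sum(chunk) / len(chunk))      # moving average over the window
    return env

raw = [0.1, -0.4, 0.9, -0.8, 0.2, -0.1]          # a few fake EMG samples
print(emg_envelope(raw))
```

The envelope, rather than the raw oscillating signal, is what a performer would typically map to a musical parameter such as amplitude.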

[Figure: Prof. Warwick's first electrode array]

[Figure: First electrode implant]

These two projects are based on the principle of an electrode array implanted in the median nerve of the left forearm. The signals are captured and transferred to a receiver connected to a CPU, which converts them into data. With this method Prof. Warwick is able to perform tasks such as controlling a robotic arm just by moving his own arm, and even to send the data over the Internet.
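The capture-convert-transmit path described above can be sketched as sending digitized readings over a network. The packet format below is invented purely for illustration; it is not the protocol used in the Cyborg projects.

```python
# Illustrative sketch of sending digitized sensor readings over a network.
# The (channel, value) packet format is invented for this example.

import socket
import struct

def encode_reading(channel, value):
    """Pack one reading as network-order (channel: uint16, value: float32)."""
    return struct.pack("!Hf", channel, value)

def decode_reading(packet):
    return struct.unpack("!Hf", packet)

# Loopback demonstration using UDP datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                # the OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode_reading(3, 0.75), rx.getsockname())

channel, value = decode_reading(rx.recv(64))
print(channel, round(value, 2))          # the reading survives the round trip
tx.close()
rx.close()
```

A real remote-performance system would add timestamps and loss handling, but the essential step of turning a captured signal into transmissible data looks much like this.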

[Figure: Prof. Warwick's second electrode array]

This system would allow musicians to give performances using their gestures, sending their music in real time over the Internet to a venue thousands of kilometres away. It could also enhance simultaneous collaborations between different artists, and allow a performance to be transmitted not just to one place but to many places in real time.

Regarding projects using brain signals, two of the most advanced deserve mention. The first is the Brain Machine Interface (BMI), or BrainGate; the second is the Cerebus, developed at the Massachusetts Institute of Technology. The BrainGate is based on the same principle as Prof. Warwick's project: a one-hundred-electrode array implanted in the primary motor cortex of the brain detects the electrical activity of brain cells. The signal is then sent to a cart of computers, signal processors and monitors, where all the received data is processed and recorded. While the BrainGate is an invasive technology, because it requires surgery, the Cerebus is not, which makes it more suitable and accessible for the general public and for musicians: it uses an external helmet connected to the computer system by wireless technology instead of cables.

5 Robotics, Bio-mechanics and Bio-robotics

Since the beginning of robotics, the aim of developing and implementing the technology has been to extend human capabilities and to achieve humanly impossible tasks. Nowadays robotics includes every kind of flexible automation, which brings almost unlimited possibilities. Robots are designed to perform physical or mechanical tasks, and thanks to computer programming they can perform the requested operation automatically and without stopping (e.g. robots on a factory production line).
The natural evolution of robotics made robots more effective, more precise and also smaller. Particularly in medicine, where robots have assisted surgeons since the first non-laparoscopic neurosurgery performed in 1985, it became crucial to reduce the size of the robot arms that assist in surgery. In the late 90s micro-surgery, or minimally invasive surgery, came to the fore, and micro-robots started playing a main role in it. In 1999 surgeons at the London Health Sciences Centre performed the first robot-assisted closed-chest bypass on a beating heart. With the rise of this technology in medicine, the crossover between humans and robots became real in the year 2000, when the first bionic hand was successfully implanted at the University of Edinburgh. Even though the implant was successful, this bionic hand is far from achieving the capabilities of a human hand.

So what is the relation between these technologies and music? How can they be implemented, and how can they help musicians? Recently NASA scientists developed artificial muscles using a new technology called electro-active polymers (EAP). This technology allows robots (robotic arms or hands) to simulate human muscles: like human muscles, EAP muscles contract in response to an electric impulse, and when the impulses stop they stretch back to their original position. EAP muscles are still at the development stage, but we may soon have this technology available to implement on humans. For instance, EAP muscles could be fitted to the fingers of a piano player, with electric impulses sent directly to the muscles controlling the motion of the fingers to double their speed. Performers could then develop technical skills easily, and so spend more time on musical aspects such as interpretation and phrasing instead of on technique.

Another interesting field I would like to include in this chapter is bio-genetics. Hormone treatment has now been used on athletes for more than twenty years. In the late 90s it became really popular, because the products became more accessible and the improvement in the people using them, mostly professional athletes, was very obvious.
Performing any instrument requires muscle movement; even subtle movements engage a certain amount of muscle mass. Even singers need to pass a certain amount of air through their vocal folds in order to sing. But singers differ from other musicians in that their performance mainly involves tissues rather than muscles. Hormone treatment could therefore be applied to singers to improve the quality or timbre of the voice, or even to extend their register. The National Center for Voice and Speech (NCVS) has been studying the evolution of voice quality over time, and states that the voices of trained singers can maintain their quality for a few more years than those of untrained people. But due to muscle fatigue it is inevitable that voice quality will decrease over a certain period of time. It has also been shown that adults' vocal folds become more rigid than children's. We could therefore improve a singer's tone and voice quality by making the tissues lighter and softer, as they were in the singer's teenage years.

[Figure: Vocal Quality Prediction Formula]

[Figure: Vocal Quality Graph]

Implementing this technology could allow singers to maintain a mellow, smooth tone of voice throughout their entire career. Moreover, it could open new possibilities, such as an extended register. Can you imagine a baritone reaching an alto register because of the texture of his vocal-fold tissue?

6 Virtual Reality vs. Augmented Reality

Undoubtedly one of the most significant additions to music in the last 25 years was computer technology, and the possibilities became even greater with the beginning of the digital era. In the mid 80s a new technology based almost entirely on software, with only a few pieces of hardware, was introduced: Virtual Reality (VR). The basis of this technology is the representation of real or imaginary environments, created by special software and rendered in 3D. Using special gloves and a pair of special glasses, you can see and grab virtually created objects. A powerful implementation of VR is the CAVE, a virtual representation of a real room inside the actual room. This is a very promising technology for electro-acoustic music diffusion. The performer would have a virtual representation of the actual room with the exact coordinates of the speaker system, and the 3D software would control the virtual speakers. The signal would be sent to a computer linked to the real mixer of the speaker system, so that changes made in the virtual domain are heard in the real world in real time. An interesting part of this idea is that the virtual glasses could show a graphical representation of the relative loudness and average frequency of each speaker, so that these parameters could be modified in real time by hand movements. To enhance the feeling of performing with a virtual system, haptic technology could be implemented to deliver haptic feedback to the performer.
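One way the hand-controlled virtual diffusion described above might compute per-speaker levels is simple inverse-distance weighting: speakers nearer the performer's virtual hand position receive more level. This is only a sketch; the coordinates and the weighting law are assumptions, not part of any existing CAVE system.

```python
# Hypothetical sketch: the virtual hand position sets per-speaker gains
# by inverse distance. Speaker coordinates are invented for the example.

import math

SPEAKERS = {                        # virtual coordinates of the real speakers
    "front_left":  (-2.0,  3.0),
    "front_right": ( 2.0,  3.0),
    "rear_left":   (-2.0, -3.0),
    "rear_right":  ( 2.0, -3.0),
}

def speaker_gains(hand_xy):
    """Closer speakers get more level; gains are normalized to sum to 1."""
    weights = {}
    for name, position in SPEAKERS.items():
        d = math.dist(hand_xy, position)
        weights[name] = 1.0 / (d + 0.1)          # offset avoids division by zero
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

gains = speaker_gains((0.0, 3.0))                # hand near the front pair
print(max(gains, key=gains.get))
```

The resulting gain vector would be sent to the real mixer, so a sweep of the hand through the virtual room smoothly pans the sound across the physical speaker array.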
Taking this technology one step further, we can mix Virtual Reality with the real world. The result is Augmented Reality (AR). Augmented Reality combines elements of VR but is intended to be portable: you do not have to stay in an indoor space to use it, and you can transport your equipment to perform in any theatre or any other place, because it requires only a CPU, a tracking device and a pair of ordinary glasses with a tiny projector attached to one of the arms, which projects a representation of the CPU screen onto one lens of the glasses.

AR is flexible enough to let the performer see the virtual domain and the real world at the same time. This seems the perfect combination when, for instance, you are working with other performers or musicians. A piece involving electro-acoustic music as well as acoustic instruments might require the performer using the AR system to watch a conductor, read a score or, in a small ensemble, simply watch the other players, while still controlling the diffusion of the music. Different universities and research centres throughout the world, such as the Massachusetts Institute of Technology, the Graz University of Technology, Columbia University and the University of South Australia, among many others, are developing AR systems to be truly portable. Most of them use PDA systems, which makes the technology not just pocket-sized but really accessible, enhancing the idea of real-time interaction between different artists. One project at the Graz University of Technology (Austria), called the Invisible Train, is based on this principle of interaction. Although just a game, it demonstrates multiple performers each using only a PDA with Wi-Fi that controls the main CPU. Musical applications are already being developed, such as Augmented Groove, a DJ-oriented instrument: basically a virtual simulation of a turntable that uses two vinyl-like controllers to perform different tasks inside the virtual environment. Even though this project has been presented at different festivals and meetings and works well, its possibilities are limited, perhaps because of the limitations of the real instrument it represents. There are other projects, such as the Augmented Composer and the Augmented Bass, but none of them seems to reach the full power AR offers.
7 Conclusions

I hope the work in this paper helps to develop new ways of performing electro-acoustic and acoustic music, making proper use of the technology available today. We should realize that the technology to achieve our goals already exists; what we still need is to combine these technologies to achieve even greater capabilities. A better integration of sensors, interfaces and performers would enhance the live performances of electro-acoustic musicians and allow them to make full use of gesture. Gesture in music and in art is the essence of a performance: it carries the expressiveness that most modern electronic instruments lack. Technology should not set limitations on artists; on the contrary, it should empower their skills and their creativity when performing and interacting with one another. In addition, science has proved that an almost full integration of sensors, transducers and, lately, electrode arrays (combining both) into the human body is possible. Nevertheless, developing non-invasive technologies would make these tools available not just to scientists but also to the field of music, opening a whole new chapter for live performances of electro-acoustic music. Finally, I should point out that we must be cautious about the way technology is implemented, because technology seems to be overtaking us, and the essence of art should not lie in technology. Technology is a tool, a very powerful tool that we must use, but we should treat it as a tool.

8 Acknowledgements

This research would have been impossible without the thoughtful guidance of Prof. Ricardo Climent, who always supported and encouraged me.

I would like to thank Ravi Kuber for introducing me to his force-feedback devices; Bert Bongers for his support and for sharing his knowledge with me; Prof. Kevin Warwick for his encouragement and for letting me use his pictures in this paper; and Dr. Wai Yu for letting me try the Virtual Reality devices at the Virtual Engineering Centre.

9 References

[1] O. Anderson, Genetics and Performance: Now science is getting to the long and the short of how genes influence performance, http://www.pponline.co.uk/encyc/0831.htm
[2] BBC News, Bionic hand success hailed, 2000, http://news.bbc.co.uk/1/hi/health/1035304.stm
[3] R. Berry, M. Makino, N. Hikawa, M. Suzuki, The Augmented Composer Project: The Music Table, IEEE, 2003.
[4] BioControl Systems, http://www.biocontrol.com/
[5] B. Bongers, Physical Interfaces in the Electronic Arts: Interaction Theory and Interfacing Techniques for Real-Time Performance, 2001??, pp. 127-133, 136-142.
[6] B. Bongers, An Interview with Sensorband, Computer Music Journal, 1998.
[7] D. De Rossi, F. Carpi, F. Lorussi, A. Mazzoldi, R. Paradiso, E. P. Scilingo, A. Tognetti, Electroactive Fabrics and Wearable Biomonitoring Devices, Autex Research Journal, 2003, pp. 180-184.
[8] Early Brain-Machine Interface Test Promising, 2004, http://www.betterhumans.com/
[9] Electroactive Polymer Actuators, http://ndeaa.jpl.nasa.gov/nasande/lommas/eap/EAP-web.htm
[10] S. Güven, S. Feiner, Authoring 3D Hypermedia for Wearable Augmented and Virtual Reality, 2003.
[11] Y. Harris, B. Bongers, Approaches to creating spaces, from intimate to inhabited interfaces.
[12] R. Hayes, In the Pipeline: Genetically Modified Humans?, 2000, http://multinationalmonitor.org/mm2000/mm0001.09.html
[13] Immersion Corporation, CyberForce, CyberGrasp, CyberTouch and CyberGlove Data Sheets, 2002, http://www.immersion.com
[14] National Center for Voice and Speech, Voice Changes Throughout Life, http://www.ncvs.org/ncvs/tutorials/voiceprod/tutorial/changes.html
[15] J. A. Paradiso, N. Gershenfeld, Musical Applications of Electric Field Sensing.
[16] N. S. Pollard, R. C. Gilbert, Tendon Arrangement and Muscle Force Requirements for Humanlike Force Capabilities in a Robotic Finger.
[17] A. Regalado, Biochip/Implant: Making Brain Control of Computers and Other Machines Possible, 2001, http://www.techreview.com/articles/jan01/tr10_nicolelis_printable.html
[18] Robotics: the Future of Minimally Invasive Heart Surgery, Division of Biology and Medicine, Brown University, http://biomed.brown.edu/Courses/BI108/BI108_2000_Groups/Heart_Surgery/Robotics.html
[19] K. Salisbury, Haptics: The Technology of Touch, HPCwire, 1995.
[20] S. Sapir, Interactive Digital Audio Environments: Gesture As a Musical Parameter, 2000.
[21] V. Stanford, Biosignals Offer Potential for Direct Interfaces and Health Monitoring, IEEE, 2004, pp. 99-103, http://www.computer.org/pervasive
[22] R. M. Sunderland, Biorobotic Manipulation Project Proposal, 2003.
[23] A. Tanaka, R. B. Knapp, Multimodal Interaction in Music Using the Electromyogram and Relative Position Sensing.
[24] Tinmith AR System, http://www.tinmith.net/
[25] D. Wagner, I. Barakonyi, Augmented Reality Kanji Learning.
[26] D. Wagner, D. Schmalstieg, First Steps Towards Handheld Augmented Reality.
[27] Wearable Computing, http://www.media.mit.edu/wearables/index.html
[28] D. Wessel, Technologies for Wearable Electronics.
[29] T. Winkler, Making Motion Musical: Gestural Mapping Strategies for Interactive Computer Music.
[30] W. Yu, K. Guffie, S. Brewster, Image to Haptic Data Conversion: A First Step to Improving Blind People's Accessibility to Printed Graphs, 2001, http://www.multivis.org