Applications of a Motion Capturing System:
Music, Modeling and Animation

Chase Finch, Kevin Musselman, Bill Mummert
College of Engineering and Science, Clemson University


ABSTRACT
Recent technological advancements in the field of motion
capturing have allowed animators and modelers to apply
extremely realistic movement patterns to virtual models
and objects. However, it is becoming increasingly common
to see the effects of motion-capturing technology applied to
a wider variety of fields. Systems using motion capture for
music composition and performance, dance instruction,
and combinations of animation and sound accompany the
traditional uses of animation and modeling. The purpose of
this report is to compare various uses of motion-capturing
technology, and to assess the usefulness, practicality and
future potential of each. Our comparisons show that
modern motion-capturing systems have great potential for
uses which are uncommon today, and that alternative uses
should continue to be explored.
Keywords
Motion capture, gestural input, modeling.

INTRODUCTION
Motion capture, a recently popular tool in the fields of
gaming, animation, and virtual reality, extracts data from
the motion of a set of sensors. This data can be
manipulated and applied in many ways. The most
common use of motion capture is to apply lifelike motion
to humanlike characters. However, the data is usable in
other applications as well, and the number of fields in
which it is employed is growing.

The Vicon 8 Motion Capturing System (MoCap) is
recognized as one of the best options available today for the
gathering of information about three-dimensional
movement. Currently, the uses of MoCap extend to
animation, modeling, biomechanics research, and even
manipulation of music, lighting, and digital video. What
follows is a comparison of successful and promising
applications of MoCap, as well as predictions about how
these and other applications will be explored in the future.





HISTORY
In 1980-83, biomechanics labs at Simon Fraser University
were beginning to use computers to analyze human motion.
Soon after, MIT began applying commercial optical tracking
systems, such as Op-Eye and SelSpot, to computer graphics.
In 1988, deGraf/Wahrman developed "Mike the Talking
Head" for Silicon Graphics to show off the real-time
capabilities of their new 4D machines; the head could be
controlled in real time by a single person. This trend has
continued to the present day, with motion capture systems
now viable options for computer animation.

DATA FROM THE MOTION CAPTURING SYSTEM
The Vicon 8 motion capture system is a passive optical
system: it uses 8 infrared cameras to track reflective
"markers" placed on the joints of the subject whose
movement is to be recorded. The cameras track each
marker's position, and Vicon records the Cartesian x, y,
and z coordinates of every marker. As the markers move,
the system records the motion at an average of 120 frames
per second (fps). A quick processing step allows the
information to be applied to another set of joints, for
example, those placed on a virtual three-dimensional
model.
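
As a rough illustration of how such data might be handled
once exported, the following sketch stores each frame as a
set of named marker coordinates and estimates a marker's
velocity from consecutive 120 fps samples. The data layout
and marker names are assumptions made for illustration, not
Vicon's actual export format.

# Hypothetical per-marker MoCap data, one dictionary per captured frame.
FRAME_RATE = 120.0  # average capture rate of the Vicon 8, in fps

frames = [
    {"left_knee": (102.4, 310.7, 455.1), "right_knee": (98.9, 312.0, 454.8)},
    {"left_knee": (103.1, 311.2, 454.6), "right_knee": (99.0, 311.8, 455.0)},
    # ... one entry per captured frame
]

def marker_velocity(frames, marker, i):
    """Approximate a marker's velocity (mm/s) at frame i by finite differences."""
    x0, y0, z0 = frames[i - 1][marker]
    x1, y1, z1 = frames[i][marker]
    dt = 1.0 / FRAME_RATE
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)

print(marker_velocity(frames, "left_knee", 1))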


Figure 1: A visual representation of MoCap information

The newest system, the Vicon MX13, is a 12-camera system
that can record full 1280x1024 grayscale frames at speeds
of up to 484 fps.

MOTION CAPTURING IN ANIMATION
Motion capture for computer animation involves mapping
the collected data onto a virtual character. The motion
capture system produces a rig to which this data is applied.
A rig, or character skeleton, is a set of bones with control
points that move the model. When the mapping is direct, as
when a human leg motion controls a character's leg motion,
the captured rig is simply applied to another rig attached to
the model of our choice. The target rig must therefore
follow the same naming convention as the motion capture
system for the data to be applied correctly. Recently,
software has been developed that uses the raw motion
capture data directly, without first applying it to a rig. One
such package is MotionBuilder, which already contains
many models and rigs that the motion can be applied to.
We can also import our own models, but these
unfortunately must already have a rig attached before they
can be used.
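
The naming-convention requirement can be illustrated with
a short sketch: each captured joint is looked up in a mapping
table and its data is copied onto the matching rig bone, with
unmatched joints skipped. The joint names and mapping
below are illustrative assumptions, not those of any
particular software package.

# Hypothetical table matching MoCap joint names to rig bone names.
mocap_to_rig = {
    "LKnee": "char_left_knee",
    "RKnee": "char_right_knee",
    "LElbow": "char_left_elbow",
}

def apply_frame_to_rig(mocap_frame, rig):
    """Copy each captured joint's data onto the matching rig bone,
    skipping joints whose names have no counterpart in the rig."""
    for mocap_joint, rotation in mocap_frame.items():
        rig_bone = mocap_to_rig.get(mocap_joint)
        if rig_bone is not None:
            rig[rig_bone] = rotation
    return rig

rig = {"char_left_knee": None, "char_right_knee": None, "char_left_elbow": None}
frame = {"LKnee": (0.0, 12.5, 3.1), "RKnee": (0.0, -11.8, 2.9), "Head": (1.0, 0.0, 0.0)}
print(apply_frame_to_rig(frame, rig))  # "Head" is ignored: no matching bone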
Another related technique is facial motion capture. This is
somewhat more complex because the movements are so
minute. It is generally performed with multiple cameras
arranged in a hemisphere at close range and small markers
attached to the face. As the technology improves, motion
capture systems can track many markers at a time with
accuracy down to the millimeter. Once again, a rig is still
required to use this data.
Video games use motion capture extensively, for example
for the athletes in sports games and for combat moves in
action games. Movies use it for computer-generated effects
and for entirely computer-generated creatures such as
Gollum, the Mummy, and King Kong. Some virtual and
augmented reality applications use motion capture as well,
but because these require real-time input of the user's
position, a high-speed, high-resolution system is needed.

MOTION CAPTURING IN MODELING
While animation requires mapping motion capture data
onto a skeletal system, or rig, to make a model move,
motion capture data can also be used to derive a skeletal
structure itself. The University of Southern California has
built a system based on volume data and a motion capturing
system. The system estimates skeletal curves and kinematic
posture from volume data collected by a set of cameras
placed around the space the actor moves in. The approach
assumes that the human body consists of rigid segments
connected by joints, and that these connections create
"nonlinearity" in the collected volume data. The volume
data is then visualized frame by frame in a reduced-
dimensional space using Isomap, which produces
"Da Vinci" pose images. The skeletal points can then be
estimated and the curves derived. The estimated skeletal
system is extrapolated back to the original volume data,
and the system breaks the volume into body parts and
determines joint angles [6].
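
As a rough sketch of the dimensionality-reduction step only,
the following code embeds a cloud of occupied voxel centers
for a single frame into a lower-dimensional space using
scikit-learn's Isomap. The voxel data is random placeholder
data, and the subsequent estimation of skeletal curves and
joint angles in the USC pipeline is not reproduced here.

import numpy as np
from sklearn.manifold import Isomap

# Placeholder: N occupied voxel centers (x, y, z) reconstructed from the cameras.
rng = np.random.default_rng(0)
voxels = rng.uniform(low=0.0, high=1.0, size=(500, 3))

# Embed the volume data for one frame in a reduced-dimensional space.
embedding = Isomap(n_neighbors=10, n_components=2)
pose_2d = embedding.fit_transform(voxels)

print(pose_2d.shape)  # (500, 2): each voxel mapped into the reduced space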

Figure 2: Sample 3D model

MOTION CAPTURING IN AUDIO/VISUAL INTERACTION
Production of sound is often associated with a physical
motion that creates the sound. For example, a percussive
musical instrument is designed so that the motion of
striking a surface creates a noise. More broadly, the
correlation between audio/visual production and movement
lends itself to the exploration of motion capture as a means
of digitally controlling this production. It follows that
motion capture may be useful for performing, creating, and
manipulating sounds, digital videos, music, and even the
accompanying lighting effects. Methods that allow music to
be controlled either by conductor-like gestures or by the
precise movements of a dancer have been investigated.

Motion Capture Music
At the University of California, Irvine (UCI), a small group
of artists and scientists is working on a system to translate
MoCap data into control sequences for musical creation.
The working title for this project is Motion Capture Music
(MCM). The group notes that the Vicon 8 system is
receiving interest from the music community because of a
component known as the "Vicon Real-time Engine," which
enables real-time transmission and manipulation of
captured motion information. The MCM group uses an
emulator of the Vicon Real-time Engine, known as
RTEmulator, to output mapped commands as MIDI, a
simple, standard protocol for musical control messages.
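
The kind of gesture-to-MIDI mapping the MCM group
describes can be sketched as follows, where a hand marker's
height is scaled into a MIDI note number and its speed into
a velocity (loudness) value. The marker choice, ranges, and
mapping are illustrative assumptions rather than the MCM
group's actual design.

def position_to_midi(z_mm, speed_mm_s, z_min=0.0, z_max=2000.0):
    """Map a hand marker's height to pitch and its speed to MIDI velocity."""
    z = min(max(z_mm, z_min), z_max)
    note = int(36 + (z - z_min) / (z_max - z_min) * 48)   # roughly C2..C6
    velocity = int(min(speed_mm_s / 3000.0, 1.0) * 127)   # faster motion, louder note
    return note, velocity

# Example: a raised hand moving quickly produces a high, loud note.
print(position_to_midi(z_mm=1650.0, speed_mm_s=2200.0))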



Documentation by the MCM team concludes with
predictions about how the system might be used in the
future. Motion capture, it explains, allows the "playing of a
virtual instrument." Because the instrument is not
physically present, a wide variety of improvisations based
on the entire range of human movement is possible, making
the virtual instrument far more versatile than any physical
instrument could be. Another benefit of MCM-style systems
is that when the MoCap system is used for animation, the
same data that drives the animation can be used to create
the soundtrack, enabling a wide variety of creative
audio/visual combinations.

Dance Instruction
Three developers in Japan have documented a system they
created to teach dance with the help of a motion capturing
system. Their stated goal was to focus on beginners and
teach the basics of traditional dance. A method in which a
student wears position-capturable sensors and dances with a
robot was compared against a system in which the student
is simply shown images of the correct form. Feedback, in
the form of physical vibration, is given to the student when
their motion is found to be inconsistent with stored
movement data from a professional dancer. As a result,
students know when to make changes and in what area the
changes need to take place.
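
The feedback rule can be sketched as a simple frame-by-
frame comparison: any joint whose position deviates from
the stored professional data by more than a threshold is
flagged (in the actual system, this would trigger a vibration
cue at that body area). The joint names and threshold below
are illustrative assumptions.

import math

THRESHOLD_MM = 80.0  # assumed allowable deviation per joint

def deviating_joints(student_frame, reference_frame):
    """Return the joints whose position differs from the reference
    by more than the threshold."""
    flagged = []
    for joint, (sx, sy, sz) in student_frame.items():
        rx, ry, rz = reference_frame[joint]
        if math.dist((sx, sy, sz), (rx, ry, rz)) > THRESHOLD_MM:
            flagged.append(joint)
    return flagged

student = {"left_wrist": (420.0, 900.0, 1100.0), "right_wrist": (180.0, 880.0, 1050.0)}
reference = {"left_wrist": (400.0, 910.0, 1120.0), "right_wrist": (60.0, 870.0, 1040.0)}
print(deviating_joints(student, reference))  # -> ['right_wrist']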

Documentation by the dance instruction team concludes
with results showing that the use of MoCap greatly
increases the effectiveness of the dance-training program.
The authors suggest that future improvements to motion
capture technology will make systems like these almost
indispensable for dance and other motion-related
instruction.




OBSERVATIONS

Motion capture systems will continue to develop and
become more advanced. Eventually, markers on the body
may not be needed at all, and the system will be able to
create a rig and model based solely on footage of the person
being filmed. Systems will also become more precise,
accurately recording slight motions such as breathing and
facial movements.
The MoCap system also shows great potential in the area of
audio/visual manipulation. The "Motion Capture Music"
system illustrates the prospective use of MoCap as a
gestural music generator (in a sense, a virtual musical
instrument). The "Vicon Real-time Engine" greatly
increases the usefulness of the MoCap system in real-time
applications such as the musical performances discussed.
The system also opens up creative avenues to change the
aesthetics of a musical performance, or to sync animation
with a soundtrack that is based on the same control data as
the animation itself.

WEAKNESSES
Motion capture is an evolving tool, and there are reasons
why it is not in widespread use. Video-based motion
capture depends on small markers, and some error in the
placement of these markers is inevitable. In addition,
because information is retrieved only from the "joints" of a
subject (where the markers are placed), the amount of data
that can be collected is limited.

The major problem with motion capture is that the
necessary tools are very expensive. The prices of the
systems discussed in this paper currently range from
$60,000 to $100,000 and above. For purposes such as dance
lessons and musical performance, that price tag is difficult
to justify in most situations. Progress in the motion-
capturing field therefore depends on future technological
developments that make the capture systems themselves
more affordable.

ACKNOWLEDGMENTS
We thank Prof. Andrew T. Duchowski for the motivation to
perform the research in this document. We also thank the
College of Engineering and Science for the resources made
available to us in the form of published CHI papers.

REFERENCES
1. David J. Sturman. A Brief History of Motion Capture
for Computer Character Animation. Available at
http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm

2. Christopher Dobrian and Frédéric Bevilacqua. Gestural
Control of Music Using the Vicon 8 Motion Capturing
System. Available at
http://music.arts.uci.edu/dobrian/motioncapture/NIME03DobrianBevilacqua.pdf.
Presented at the Proceedings of the New Interfaces for
Musical Expression 2003 Conference, Montréal, Quebec,
Canada.

3. Frédéric Bevilacqua, Christopher Dobrian, and Jeff
Ridenour. Mapping Sound to Human Gestures: Demos
from Video-Based Motion Capturing Systems. Available at
http://music.arts.uci.edu/dobrian/motioncapture/GW03-Bevilacqua-et-al.pdf.
Demo presented at the 5th International Workshop on
Gesture and Sign Language based Human-Computer
Interaction, 2003, Genoa, Italy.

4. S. Tabata, A. Nakamura, and Y. Kuno. Development of
an Easy Dance Teaching System Using Active Devices.
