
From Motion Capture to Real-Time Character Animation

Franck Multon (1,2), Richard Kulpa (1), Ludovic Hoyet (2), and Taku Komura (3)

(1) M2S, University of Rennes 2, Av. Charles Tillon CS 24414, 35044 Rennes, France
(2) Bunraku, IRISA, Campus Universitaire de Beaulieu, 35042 Rennes, France
(3) IPAB JCMB, Edinburgh Univ, Kings Buildings, Mayfield Rd, Edinburgh EH9 3JZ, UK
{franck.multon,richard.kulpa}@uhb.fr, lhoyet@irisa.fr, tkomura@inf.ed.ac.uk

Abstract. This paper describes a framework for animating virtual characters in real-time environments from motion capture data. We mainly focus on the adaptation of motion capture data to the virtual skeleton and to its environment. To speed up this real-time process we introduce a morphology-independent representation of motion. Based on this representation, we have redesigned the methods for inverse kinematics and kinetics, so that our method can adapt the motion to spacetime constraints, including control of the center of mass position. If the resulting motion does not satisfy general mechanical laws (such as keeping the angular momentum constant during aerial phases), the current pose is corrected. Additional external forces can also be taken into account in the dynamic correction module, so that the character automatically bends his hips when pushing heavy objects, for example. The whole process is performed in real-time.

Keywords: Virtual human, real-time animation, kinematics, dynamics.

1 Introduction
Animating virtual humans from motion capture data generally requires a lot of precomputation and manual editing. Firstly, motion capture data generally contain holes, and the Cartesian positions of surface markers must be converted into joint angles. Most commercial software packages, such as Motion Builder (Autodesk) and IQ for Vicon systems (Oxford Metrics), offer very efficient interfaces and algorithms for these processes. Secondly, motion capture data must be applied to the virtual character used in the application (so-called motion retargeting), which entails corrections when, for example, the feet are not in contact with the ground during contact phases. Thanks to research in computer animation, this task can be performed almost automatically in a precomputation step. New problems then arise when the animation has to be applied in interactive environments such as video games. In the real-time engine, the 3D environment may differ from the one in which the actor performed his motion. The pose of the

A. Egges, A. Kamphuis, and M. Overmars (Eds.): MIG 2008, LNCS 5277, pp. 72–81, 2008.

c Springer-Verlag Berlin Heidelberg 2008

Fig. 1. Left) The motion of the character is adapted in real-time in order to push a heavy cupboard, while the original motion was simply walking with the two hands placed in front of him (light gray character). Right) The character reacts in real-time to an external force exerted on his shoulder while walking.

character should be adapted in order to take this new situation into account. This problem is generally solved concurrently with the motion retargeting step by precomputing some situations in advance. However, for a wide set of possible actions and situations, this manual approach leads to very long and tedious work. Moreover, the resulting database of motions is very large, leading to long loading times and requiring large amounts of storage and memory.
Once motions are correctly precomputed, the real-time animation engine has to select the most suitable clip for a given situation and character. However, new behaviors and situations generally cannot be processed during real-time animation. Imagine an application where the user can create a character by tuning its dimensions and appearance. In that case, it is impossible to perform motion retargeting in advance. Let us now consider that this new character is driven in real-time by the user and has to interact with other players. For example, a user can put objects into a cupboard, or other characters can concurrently interact with that cupboard. The player wishes to see the virtual character adapt his pose as a consequence of a change of the mass of the cupboard (see figure 1).
This paper addresses this kind of problem by introducing real-time algorithms
capable of adapting a motion clip to the morphology of the character, to kine-
matic constraints and to some dynamic constraints.

2 Related Work
Adapting motion capture data to new situations raises three main problems: solving kinematic constraints, ensuring continuity and preserving the original style. Displacement maps [1] [2] [3] were introduced to solve these

problems. These methods solve the constraints where needed and then filter the resulting discontinuous motion to obtain a continuous one. This approach can be used to retarget a motion to a character [4]. If the character that
has to be animated has a different topology from the one that was used to pro-
cess motion capture data (a character with an accurate model of the shoulder vs.
an original model with a rigid torso for example), an intermediate skeleton can
be used [5]. However, displacement maps require a complete knowledge of the
sequence and the constraints in advance. In real-time interactions with a user,
such constraints are not accurately known in advance as the user can modify the
actions and the environment in an unpredictable way.
Inverse kinematics [6] [7] [8] and kinetics [9] (controlling the position of the center of mass) can also be solved in real-time. In the specific case of foot-skate clean-up, this process is very fast [10]. For whole-body pose control, however, it is very difficult to control more than one character in real-time, whereas the computation time spent on this process could be used for other tasks in the animation engine. Moreover, these techniques are only designed to solve kinematic and kinetic constraints in interactive time; they cannot ensure that mechanical laws are satisfied (such as keeping the angular momentum constant during aerial phases or limiting joint torques).
To solve kinematic and dynamic constraints concurrently, several authors have proposed optimizing an objective function gathering all these constraints [11] [12] [13] [14] [15]. By nature, this process cannot be used in real-time animation, as it requires a complete knowledge of the constraints in advance. Another solution consists in applying inverse dynamics and tuning the motion until the forces and torques reach acceptable values [16]. This method has been extended to deal with dynamic stability [17] and to correct the angular momentum in aerial phases through time warping [18] or real-time local adjustments [19]. Dynamic filters were introduced to adapt an incorrect motion so that it satisfies the mechanical laws frame-by-frame [20]. They have been applied to obtain a motion that is stable from the dynamic point of view [21].
To react to external perturbations, it is also possible to simulate the passive behavior of the virtual human's mechanical system and to search a database of possible reactions for the candidate that best fits the simulation conditions [22] [23]. This process calculates a whole sequence and is difficult to apply in real-time for interactive applications. Moreover, only local perturbations such as pushes are considered; it cannot be used to compensate continuous external actions, such as pushing a heavy object.
In this paper, we propose a framework to animate virtual humans from motion capture data while taking kinematic and dynamic constraints into account. This framework was designed for real-time animation (per-frame solvers). Section 3 provides an overview of this framework. The whole framework is based on a morphology-independent representation of motion, described in section 4. Then, the inverse kinematics and kinetics solver (section 5) and the on-line dynamic correction (section 6) are presented.

3 Overview

The whole framework is able to synchronize and blend several motion capture clips in order to solve a complex task in real-time, as described in [24]. When all the desired motions are blended into a unique pose, the latter can be adapted to a set of constraints. This paper focuses on this part of the framework only. Let us denote by q the set of data associated to a pose of the character. In this paper, these data are based on a morphology-independent representation of motion, as described in section 4. At time t, q is associated to a set of kinematic and kinetic constraints (modeled as equality equations fi(q) = ci).
Before solving these constraints, q must be scaled to the virtual human’s
morphology, as depicted in figure 2. The resulting vector qs is not made of
joint angles any more but consists of Cartesian and angular data. The inverse
kinematics and kinetics solver is applied on qs in order to find a pose that
satisfies a compromise of all the constraints. At this step, the solver delivers joint
angles that are compatible with the virtual character. However, the resulting
motion doesn’t take external forces into account so that the angular and linear
momentums may be incorrect from the mechanical point of view. The Dynamic
Correction module aims at modifying the resulting angles in order to ensure
that external forces are taken into account. This module doesn’t compute the
joint torques but is based on analyzing the mechanical constraints applied to the
global system. Hence, during the aerial phase, the angular momentum should
remain constant and the acceleration of the center of mass should equal gravity.
During the contact phase, we assume that the character should counteract the
external forces to preserve as much as possible the initial movement. To do so,
the system can tune the orientation of groups of body segments. The goal is
to modify the ground reaction force, the center of pressure and the position of
the center of mass in order to counteract the new imposed external forces, as
depicted in figure 1.
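As a sketch, the three per-frame stages of figure 2 can be chained as follows (the helper names and the callback-based design are illustrative assumptions, not part of the actual framework):

```python
import numpy as np

def scale_to_morphology(q_norm, segment_lengths):
    """Stage 1: scale morphology-independent data to the target skeleton."""
    return {name: np.asarray(value, float) * segment_lengths[name]
            for name, value in q_norm.items()}

def adapt_pose(q_norm, segment_lengths, solve_constraints, correct_dynamics):
    """Chain the three per-frame stages of the framework."""
    qs = scale_to_morphology(q_norm, segment_lengths)  # scaling
    qs = solve_constraints(qs)                         # inverse kinematics/kinetics
    return correct_dynamics(qs)                        # dynamic correction
```

Passing the two solvers as callables keeps the sketch independent of any particular constraint set; each stage consumes and returns the same pose structure.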

Fig. 2. Overview of the motion adaptation framework



Fig. 3. Morphology-independent representation of motion

4 Morphology-Independent Representation of Motion


The morphology-independent representation of motion was first described in [25]. It is based on a Cartesian normalized representation of a posture. In this representation, the human body is subdivided into kinematic sub-chains (see figure 3) that describe parts of the skeleton. These kinematic chains are divided into three main types:
– normalized segments, which consist of only one body segment (such as the hands, the feet, the clavicle and the scapula). They are normalized by their length;
– limbs with variable length, which encode the upper and lower limbs, each composed of two body segments. In this representation, the intermediate joints (elbows and knees) are not encoded, given that their positions are directly linked to the character's anthropometric properties. To retrieve them, an analytical solution derived from IKAN [26] is used;
– the spine, modeled by a spline (as suggested in biomechanics), whose advantage is that it can easily be subdivided into as many segments as desired, depending on the character description. This spline is easily normalized by scaling its coefficients between 0 (close to the pelvis) and 1 (close to the skull).
Given this data structure, any motion capture data or edited trajectory (whatever the size of the initial skeleton) can be used as an initial pose. This pose is then adapted to the new skeleton by simply multiplying all the normalized data by the dimensions of the new character.
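For a one-segment chain, the normalization and its inverse can be illustrated by a minimal sketch (hypothetical helper names; positions are assumed to be 3D vectors):

```python
import numpy as np

def normalize_segment(p_child, p_parent, length):
    """Store a one-segment chain as a length-normalized direction."""
    return (np.asarray(p_child, float) - np.asarray(p_parent, float)) / length

def retarget_segment(direction, p_parent, new_length):
    """Rebuild the joint position for a character with a different segment length."""
    return np.asarray(p_parent, float) + np.asarray(direction, float) * new_length
```

Retargeting then amounts to a single multiplication per segment, which is what makes the scaling stage cheap enough for real-time use.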

5 Inverse Kinematics and Kinetics


Once a set of parameters qs is scaled to the size of the virtual character, the system has to solve the equality constraints fi(qs) = ci. Generally, this task is performed by an inverse kinematics solver that locally linearizes fi in order to invert it. However, the computation cost is very high and this process sometimes leads to unrealistic poses. The main problem is to solve constraints expressed in the Cartesian frame whereas qs is a set of joint angles. With our representation based on relative Cartesian positions, the problem is simpler. Hence, for a group of body segments, it is possible to find an analytical solution for each constraint. For the arm, the new location of the wrist in the shoulder reference frame can be determined very efficiently while preserving the initial orientation of the arm. The same kind of direct solution can be found for each group of body segments (limbs, trunk and head).

Fig. 4. Dozens of characters controlled through interactively constraining the wrists and ankles. The center of mass is also constrained to a vertical line going through the middle of the feet.
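A hedged sketch of such an analytical solution for a variable-length limb: clamp the wrist target to the reachable distance while keeping the shoulder-to-wrist direction, then recover the elbow flexion from the law of cosines (function names are illustrative; the paper relies on an IKAN-derived solution [26]):

```python
import numpy as np

def place_wrist(shoulder, target, l_upper, l_fore):
    """Clamp the wrist target to the arm's reachable sphere around the shoulder,
    keeping the original shoulder-to-wrist direction."""
    v = np.asarray(target, float) - np.asarray(shoulder, float)
    d = float(np.linalg.norm(v))
    reach = l_upper + l_fore
    if d > reach:               # out of reach: keep the direction, clamp the distance
        v *= reach / d
        d = reach
    return np.asarray(shoulder, float) + v, d

def elbow_angle(d, l_upper, l_fore):
    """Interior elbow angle recovered analytically from the law of cosines."""
    c = (l_upper**2 + l_fore**2 - d**2) / (2.0 * l_upper * l_fore)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```

No iteration is needed: one clamp plus one arccos per limb, which is the point of dropping the intermediate joints from the representation.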
The main problem is to control the whole body whereas the solutions can only be found locally, for each group of segments. To solve this problem, we have adapted a Cyclic Coordinate Descent algorithm [27]. It consists in iteratively adapting the groups in a specific order until convergence. The order is chosen so that the distal segments are adapted first, because they are associated with less energy than the trunk or the whole-body position and orientation. Moreover, the limbs offer a wide range of reachable positions, which generally enables the constraints to be solved in only one iteration (for targets ci placed close to the body). The corresponding algorithm is:

it = 0
do
    adaptGroup(HEAD)
    adaptGroup(RIGHT_ARM)
    adaptGroup(LEFT_ARM)
    adaptGroup(RIGHT_LEG)
    adaptGroup(LEFT_LEG)
    adaptGroup(TRUNK)
    adaptRoot()
    completed = computeError()
while (it++ < maxIt and not completed)
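The loop above can be transcribed into a runnable sketch, assuming each group adapter and the error function are provided as callables (an illustration, not the actual implementation):

```python
def adapt_pose_ccd(pose, adapters, error, tol=1e-3, max_it=10):
    """Cyclic Coordinate Descent over body groups, distal groups first."""
    for _ in range(max_it):
        for adapt in adapters:   # head, arms, legs, trunk, then the root
            pose = adapt(pose)
        if error(pose) < tol:    # all constraints solved well enough
            break
    return pose
```

Because each adapter solves its group analytically, one pass over the list is often enough; the outer loop only handles targets that couple several groups.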

This method also enables us to control the position of the center of mass [28], since the local centers of mass of each group are immediately available. Thus, analytical solutions can also be proposed for each body group to displace its local center of mass to a convenient place. However, for this process, the order in which the groups are adapted is different: from the heaviest masses (leading to a large displacement of the global center of mass) to the lightest ones (small displacements). This way, only the minimum number of necessary groups is modified. Figure 4 depicts dozens of various characters that have to satisfy the same geometric constraints (placing the two wrists and ankles onto targets) while ensuring that the center of mass stays along a vertical line.
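The kinetics ordering can be sketched as follows, with each group reduced to a (mass, local COM) pair (a hypothetical data layout, for illustration only):

```python
import numpy as np

def global_com(groups):
    """Whole-body center of mass from (mass, local COM) pairs."""
    total = sum(mass for mass, _ in groups)
    return sum(mass * np.asarray(com, float) for mass, com in groups) / total

def kinetics_order(groups):
    """Adapt the heaviest groups first: they displace the global COM the most."""
    return sorted(groups, key=lambda g: g[0], reverse=True)
```

Processing heavy groups first means the remaining COM error is small by the time the light groups are reached, so most of them are left untouched.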

6 Dynamic Correction
The dynamic correction consists in adapting the joint angles when the corresponding whole-body motion does not satisfy the mechanical laws. It can be subdivided into two parts:
– During aerial phases, some global mechanical quantities are well known. For example, the angular momentum remains constant and the acceleration of the center of mass equals gravity. Let h(qs) be the function that returns the angular momentum according to the joint angles. If an error Δh in the angular momentum appears, it is possible to retrieve the joint angles that eliminate this error by locally linearizing h (see [19] for details).
– During contact phases, the way the global mechanical quantities should change is more complex. If the user adds external forces that were not initially considered, the system should be able to adapt the sequence of poses to react to this perturbation. For example, when a character is pushing a cupboard, he should adapt his poses in order to react to the corresponding external force. Similarly, when a character is turning, he should react to centrifugal accelerations by bending the whole body toward the inside of the turn.
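For the aerial-phase case, the linearized correction amounts to one Gauss-Newton step on h; a generic sketch with a finite-difference Jacobian ([19] details the actual adjustment, which this does not reproduce):

```python
import numpy as np

def correct_momentum(q, h_func, h_target, eps=1e-5):
    """One Gauss-Newton step: linearize h around q and cancel the momentum error."""
    q = np.asarray(q, float)
    h = np.asarray(h_func(q), float)
    J = np.zeros((h.size, q.size))
    for j in range(q.size):      # finite-difference Jacobian of h w.r.t. the angles
        dq = np.zeros(q.size)
        dq[j] = eps
        J[:, j] = (np.asarray(h_func(q + dq), float) - h) / eps
    delta_h = np.asarray(h_target, float) - h
    return q + np.linalg.pinv(J) @ delta_h   # least-norm angle correction
```

The pseudo-inverse yields the smallest angle change that cancels Δh, which keeps the corrected pose close to the captured one.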
Let us now consider the last item. The main goal is to perform the initial task as well as possible while reacting to the continuous external forces Fe added in real-time. To do so, the system can act through two complementary methods:
– displacing the center of pressure COP within the base of support in order to create a new torque ΔCOP × R, where R is the ground reaction force. If the COP required to counteract the whole perturbation remains in the base of support, no other change is needed: nothing is changed in the character's pose;
– if the first solution is not enough to counterbalance Fe, the next step consists in changing the ground reaction force R. Theoretically, R becomes R − Fe. All the components of R change, but the vector should stay within the friction cone. Each change of the ground reaction force leads to a change of the global orientation and of the position of the center of mass. As the character is attached to the ground through kinematic constraints, this leads to a change of the pose of the character, as depicted in figure 1.
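A much-simplified sagittal-plane sketch of this two-step strategy (1D support interval, a unit lever arm for the residual torque; the model and all names are illustrative assumptions, not the paper's formulation):

```python
def balance(tau_e, cop_x, x_min, x_max, reaction):
    """Two-step reaction to a perturbation torque tau_e (sagittal plane).
    reaction = (horizontal, vertical) ground reaction force components."""
    r_z = reaction[1]
    new_cop = cop_x + tau_e / r_z          # COP shift whose torque cancels tau_e
    if x_min <= new_cop <= x_max:
        return new_cop, reaction           # step 1 suffices: pose unchanged
    new_cop = min(max(new_cop, x_min), x_max)
    residual = tau_e - (new_cop - cop_x) * r_z
    # step 2: absorb the residual torque by changing the horizontal reaction
    # component (a unit lever arm is assumed in this sketch)
    return new_cop, (reaction[0] - residual, r_z)
```

Small perturbations are absorbed by the COP alone; only when the COP saturates at the support edge does the reaction force, and hence the pose, change.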

For the latter, the main problem is to express Newton's laws as a function of the global orientation of the character, q1, at time t:

MW(q1) + MR(q1) + MFe − Ḣ(q1) + M(mẍ(q1)) = 0    (1)

MW(q1) is the torque due to the body weight, MR(q1) is the torque due to the ground reaction force (linked to the position of the center of pressure), MFe is the torque due to the additional external forces and Ḣ(q1) is the derivative of the angular momentum. The center of pressure is assumed to be the Zero Moment Point, usually used for stabilizing biped robots [17]. Without additional external forces, when a motion is retargeted to a new character, this sum is certainly different from 0. This is due to a set of approximations: the model is different from a real human body, and the segment lengths and mass properties are also
different. So equation 1 becomes:
min_{q1} ‖MW(q1) + MR(q1) + MFe − Ḣ(q1) + M(mẍ(q1))‖²    (2)

This expression is minimized to calculate the global orientation of the character that best satisfies equation 1. Once the global orientation is known, some kinematic constraints may be violated, and a new inverse kinematics step is performed to locally correct the pose of the character, as depicted in figure 2. This minimization includes the perturbations due to the external forces. As a consequence, the global orientation is modified not only to compensate the change of morphology but also the addition of those external forces. q1 should also remain in the neighborhood of the current pose in order to avoid discontinuities. Hence, the search space is quite small and a simple iterative method can find an optimal solution in only a few steps.
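Since the search space around the current pose is small, even a naive greedy search over a scalar orientation parameter illustrates the idea (a toy sketch; the actual q1 is a full 3D orientation and the residual is the torque sum of equation 2):

```python
def best_orientation(q1, residual, step=0.01, iters=50):
    """Greedy neighborhood search minimizing the squared residual of eq. (2),
    staying close to the current global orientation."""
    for _ in range(iters):
        q1 = min((q1 - step, q1, q1 + step), key=lambda q: residual(q) ** 2)
    return q1
```

The small step size doubles as the continuity safeguard: the orientation can only drift a bounded distance from the current pose per frame.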
The left part of figure 1 depicts a character that has to push a piece of furniture. Depending on its weight and on the friction forces on the ground, the virtual human has to compensate an external force applied to his hands. In this figure, the light gray character stands for the original pose, without external force. The other character is obtained with our method for a 50 kg piece of furniture. We can see that the system automatically rotates the whole body to compensate the external forces exerted by the furniture. The kinematic constraints (keeping the feet on the ground without sliding and putting the hands on the furniture) are also satisfied. This correction is performed in real-time.

7 Conclusion
We have described a framework to control the motion of virtual characters from motion capture data while satisfying kinematic constraints with very little computation time. In addition to kinematics, the system is able to correct poses that do not satisfy general mechanical laws, considering only the center of mass mechanical system. Although this approach is limited to the global system, it enables us to calculate motions that take continuous external forces into account. This is the case when a character has to push or carry objects, for example.

The goal is to propose a very fast computation instead of calculating all the joint torques that would be necessary to control the character's mechanical skeleton. Although this very fast computation leads to some approximations, it can be used for animating numerous characters concurrently while taking some dynamic effects into account: bending the body while turning, inclining the trunk while walking on sloped ground, carrying heavy objects, etc.
In the future, this method should be extended to deal with local adjustments instead of global ones. Moreover, with the method presented in this paper, the character reacts instantaneously to external perturbations, whereas some latency occurs in the real world. We should also take this latency into account, otherwise the character may look like Superman in some cases.

Acknowledgements
This work has been supported by the "Conseil Régional de Bretagne", the "Conseil Général d'Ille et Vilaine" and Rennes Metropole.

References
1. Gleicher, M.: Motion editing with spacetime constraints. In: Proceedings of Sym-
posium on Interactive 3D Graphics, pp. 139–148 (1997)
2. Lee, J., Shin, S.: A hierarchical approach to interactive motion editing for human-
like figures. In: Proceedings of ACM SIGGRAPH 1999, pp. 39–48 (1999)
3. LeCallennec, B., Boulic, R.: Interactive motion deformation with prioritized con-
straints. In: Boulic, R., Pai, D.K. (eds.) Proceedings of ACM/Eurographics SCA,
Grenoble, France, pp. 163–171 (August 2004)
4. Gleicher, M.: Retargetting motion to new characters. In: Proc. of ACM SIG-
GRAPH, pp. 33–42 (July 1998)
5. Monzani, J., Baerlocher, P., Boulic, R., Thalmann, D.: Using an intermediate skele-
ton and inverse kinematics for motion retargeting. In: Eurographics 2000, vol. 19(3)
(2000)
6. Shin, H., Lee, J., Shin, S., Gleicher, M.: Computer puppetry: An importance-based
approach. ACM Trans. Graph. 20(2), 67–94 (2001)
7. Yamane, K., Nakamura, Y.: Natural motion animation through constraining and
deconstraining at will. IEEE Trans. on Visualization and Computer Graphics 9(3),
352–360 (2003)
8. Baerlocher, P., Boulic, R.: An inverse kinematics architecture enforcing an arbitrary number of strict priority levels. The Visual Computer 20(6), 402–417 (2004)
9. Boulic, R., Mas, R., Thalmann, D.: A robust approach for the center of mass
position control with inverse kinetics. Journal of Computers and Graphics 20(5)
(1996)
10. Kovar, L., Gleicher, M., Schreiner, J.: Footskate cleanup for motion capture. In:
ACM Siggraph Symposium on Computer Animation 2002 (2002)
11. Witkin, A., Kass, M.: Spacetime constraints. In: Proceedings of ACM SIGGRAPH,
pp. 159–168. Addison Wesley, Atlanta (1988)
12. Liu, C.K., Popović, Z.: Synthesis of complex dynamic character motion from simple
animations. ACM Transaction on Graphics 21(3), 408–416 (2002)

13. Abe, Y., Liu, C., Popović, Z.: Momentum-based parameterization of dynamic character motion. In: Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Grenoble, France, pp. 173–182 (2004)
14. Sulejmanpasic, A., Popovic, J.: Adaptation of performed ballistic motion. ACM
Trans. on Graphics 24(1), 165–179 (2005)
15. Chai, J., Hodgins, J.: Constraint-based motion optimization using a statistical
dynamic model. ACM Transactions on Graphics - Siggraph 2007 26(3), 8 (2007)
16. Ko, H., Badler, N.I.: Animating human locomotion in real-time using inverse dy-
namics. IEEE Computer Graphics & Applications (1996)
17. Shin, H., Kovar, L., Gleicher, M.: Physical touch-up of human motions. In: Pro-
ceedings of Pacific Graphics, Alberta, Canada (2003)
18. Majkowska, A., Faloutsos, P.: Flipping with physics: Motion editing for acrobat-
ics. In: Proceedings of Eurographics/ACM SIGGRAPH Symposium on Computer
Animation, San Diego, USA (in Press, 2007)
19. Multon, F., Hoyet, L., Komura, T., Kulpa, R.: Dynamic motion adaptation for 3d
acrobatic humanoids. In: Proceedings of IEEE Humanoids 2007 (2007)
20. Yamane, K., Nakamura, Y.: Dynamic filters - concept and implementations of
online motion generator for human figures. IEEE transactions on robotics and
automation 19(3), 421–432 (2003)
21. Tak, S., Ko, H.: A physically-based motion retargeting filter. ACM Transactions on Graphics 24(1), 98–117 (2005)
22. Arikan, O., Forsyth, D., O’Brien, J.F.: Pushing people around. In: SCA 2005:
Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer
animation, pp. 59–66 (2005)
23. Zordan, V., Majkowska, A., Chiu, B., Fast, M.: Dynamic response for motion cap-
ture animation. ACM Trans. Graph. 24(3), 697–701 (2005)
24. Multon, F., Kulpa, R., Bideau, B.: Mkm: a global framework for animating humans
in virtual reality applications. Presence 17(1), 17–28 (2008)
25. Kulpa, R., Multon, F., Arnaldi, B.: Morphology-independent representation of mo-
tions for interactive human-like animation. Computer Graphics Forum, Eurograph-
ics 2005 special issue 24(3), 343–352 (2005)
26. Tolani, D., Goswami, A., Badler, N.: Real-time inverse kinematics techniques for
anthropomorphic limbs. Graphical Models 62, 353–388 (2000)
27. Lander, J.: Making kine more flexible. Game Developer Magazine 1, 15–22 (1998)
28. Kulpa, R., Multon, F.: Fast inverse kinematics and kinetics solver for human-like
figures. In: Proceedings of IEEE Humanoids, Tsukuba, Japan, December 2005, pp.
38–43 (2005)
