
POLITECHNICA UNIVERSITY OF BUCHAREST

FACULTY OF AUTOMATIC CONTROL AND COMPUTERS


COMPUTER SCIENCE DEPARTMENT

MASTER THESIS
Machine Learning in Character Animation

Alex Dinu
Thesis supervisors:
Alex Grădinaru, Ph.D. Stdt.
Alin Moldoveanu, Ph.D.
Anca Morar, Ph.D.
Florica Moldoveanu, Ph.D.
Irina Mocanu, Ph.D.

BUCHAREST
June 2019

Table of Contents
Abstract
1. Introduction
2. State-of-the-art
  2.1. Finite State Machine Animation Controller
  2.2. Phase-Functioned Neural Networks
    2.2.1. Phase
    2.2.2. PFNN Architecture
    2.2.3. Phase Function
    2.2.4. Training
    2.2.5. Runtime
  2.3. Motion Matching. Related Work
3. Animation System
  3.1. System overview
  3.2. Typical FSMAC approach
  3.3. Phase-Functioned Finite State Machine Animation Controller
    3.3.1. FSMAC control override. Injecting unseen data into the PFNN. Smooth state transitioning
    3.3.2. Fabricating new locomotion by interpolating a PFNN & a FSMAC
    3.3.3. Framerate for the PFNN
  3.4. Implementation
4. Discussions & Results
  4.1. Relevant gameplay animation elements
    4.1.1. Bone velocities
    4.1.2. User input. Effectors & Motion Trajectories. Target Gain. Target Decay
    4.1.3. Environment Geometry and Physical Properties. External Forces
  4.2. Results
5. Conclusions
BIBLIOGRAPHY
Abstract

We present a real-time animated character control technique which is stable, extendable and
produces naturalistic output, while minimizing the effort that animation developers must put into
producing animated characters.
Believable character locomotion is a decisive element in the success of interactive applications which
deal with such content, whether videogames or any 3D simulated world.
The immersion that animation builds for the user is peerless, transferring the feeling of power and
freedom from the simulated character directly to the human user. A successful game character's
actions must therefore be believable and constitute a seamless translation of user input. This is a
challenging task, given the enormous number of motion states in which a character might find itself,
unpredictable user behavior and users' expectations regarding the interactivity of the system.
We describe a method which integrates a state-of-the-art, machine learning based motion
synthesis technique into a highly modular industry standard technique.

1. Introduction

This paper continues the work of Daniel Holden et al. [1], integrating the authors' high-quality
character locomotion technique, the Phase-Functioned Neural Network (PFNN), with the highly
modular industry standard, the Finite State Machine Animation Controller (FSMAC). An appropriate
example of the latter is the Mecanim animation system [2] in Unity 3D, which we use.

The main motivation for this work is simple: merging the naturalness of motion generated by
the PFNN with a wide range of phase-independent and phase-dependent actions within a FSMAC,
which compile into a phase-functioned finite state machine.
There are other benefits to choosing this approach: once a model has been trained for a PFNN-driven
character, there is no need to embed further actions as locomotion styles into the network, which
would imply additional training and a properly labeled locomotion footage dataset. Instead,
we extend the degree of locomotion freedom by bone-weighted interpolation of the PFNN-synthesized
locomotion with the motion of different FSMAC states, smoothly with respect to time. For specific
animations this seems the right approach, as the responsiveness of the simulated character increases,
bypassing the phase-dependent PFNN behavior while still relying on it as a locomotion driver. This is
ideal for situations like an action game, where one would have a strictly defined set of acyclic actions
to perform. The functionality of the PFNN carries over into FSMAC-driven actions, which may even be still
snapshots of a specific move stored as short animation clips occupying very little memory, e.g. swinging the arms
in sync with the legs while gesturing, raising a fighting guard, attacking, blocking attacks, etc.

2. State-of-the-art

This section covers industry approaches to the character animation controller problem as well as
research work in the field. Sections 2.1 and 2.2 are meant to build up a view of the
system that this paper describes, as they play a large part in it.

2.1. Finite State Machine Animation Controller

The FSMAC [2, 3, 4, 5, 6] is the most used approach in the industry. It is highly modular and
customizable as well as scriptable. It can generate very complex behaviors, through state machine
nesting, blend trees, custom state transitioning parameters and animation layering.
The FSMAC defines the state space of motion in which a character finds itself and the transition
rules between states, as well as custom behaviors defined by the developer.

A state is an animation or a representative of a class of interrelated animations, in which case the state
is an animation blend tree or a state machine itself, called a (nested) sub-state machine.
We concluded that the flexibility of this approach is a highly desirable advantage in character design.
However, the designer of the animation system must keep it organized, e.g. Figure 2, as the complexity
growth during character development can get out of hand, as shown in Figure 1.

Figure 1. Growing complexity in FSMACs, from left to right [7]
Figure 2. Organized FSMAC template used on characters in "Paragon" [7]

FSMACs are a feasible iterative prototyping tool because they allow easy replacement of
animations and state transitioning policies. They are also stable and scalable, being far less prone to
artefacts than procedural animation solutions, which are rather difficult to maintain and scale. With a
FSMAC, characters will never jump from one state to another without a well-defined rule.
This makes possible well-defined gameplay mechanics where the character can interact both with the
environment and with other characters in a deterministic manner. Because predefined animation
tracks are assigned to each state, there is no need to simulate any motion in advance in order to make
predictions. All transitions between states are smooth with respect to time and look as smooth as they
are allowed to: oftentimes, short transitions can appear rather sudden and unnatural, while making them
longer implies the classical trade-off of sacrificing low latency of response to user control for motion
smoothness with respect to time.

2.2. Phase-Functioned Neural Networks

The PFNN [1] is a novel motion synthesis technique for real-time character control. It is designed around
the idea that locomotion can be mathematically described as a smooth function of a cyclic timing
variable, which makes most sense for gait-type animations, and it operates frame by frame.
A PFNN produces time-consistent locomotion and specializes in rough terrain traversal
for a biped character, but such a model can be trained on any type of cyclic motion capture dataset
labeled appropriately with the phase and other high-level control data (e.g. trajectory) on each frame,
and it can also be extended to fit more control parameters and gait styles / actions. PFNNs can furthermore be
extended to make motion smooth with respect to more than one cyclic timing variable, i.e. multiple
phases; a real example of this feature is Mode-Adaptive Neural Networks for Quadruped Motion
Control by H. Zhang, S. Starke et al. [15].

2.2.1. Phase

The phase variable p ∈ [0, 2π) represents the timing of the contact between each ankle and
the ground: zero represents the left ankle's contact with the ground, π the right ankle's contact, and so on.
Locomotion that is smooth with respect to p is achieved through the architecture of the neural model, which
explicitly avoids mixing, within its parameters, motion data from frames with a big difference in phase.

Instead, training is performed with every frame as input and the next frame as output, smoothly
changing the network parameters, cycling with the phase extracted from the motion dataset.

2.2.2. PFNN Architecture

The PFNN, denoted Φ, is a feed-forward (from input to output) neural network, but holistically
a recurrent one, in the sense that its output has the same structure as its input and part of the
output at the current frame, the character posture, is fed back as input to Φ for the next frame, along
with user control parameters. Its weights and biases, α = {W0, b0, W1, b1, W2, b2}, are not fixed, but
cycle every wavelength of p. Equation (1) shows the layered architecture of Φ.

y = Φ(x, α) = W2 ELU(W1 ELU(W0 x + b0) + b1) + b2    (1)

𝑥 and y represent postures and character control parameters for two consecutive animation frames.
W0 and b0 map the input layer to the first hidden layer, W1 and b1 map the first hidden layer to the
second hidden layer and W2 and b2 map the second hidden layer to the output layer.
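As an illustration, the following is a minimal sketch of one evaluation of equation (1) in Python/NumPy; the parameter set alpha is assumed to be the output of the phase function for the current phase, and tensor shapes are left implicit.

import numpy as np

def elu(v):
    # Exponential Linear Unit, the activation used in equation (1).
    return np.where(v > 0.0, v, np.exp(np.minimum(v, 0.0)) - 1.0)

def pfnn_forward(x, alpha):
    # One evaluation of equation (1): y = W2 ELU(W1 ELU(W0 x + b0) + b1) + b2.
    # alpha is the parameter set {W0, b0, W1, b1, W2, b2} produced by the
    # phase function for the current phase p.
    W0, b0, W1, b1, W2, b2 = alpha
    h0 = elu(W0 @ x + b0)    # input layer -> first hidden layer
    h1 = elu(W1 @ h0 + b1)   # first hidden layer -> second hidden layer
    return W2 @ h1 + b2      # second hidden layer -> output layer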

Figure 3. PFNN architecture and the cyclic phase function which computes its weights and biases [1]
Figure 4. Visual representation of the input/output parameterization of the PFNN [1]

2.2.3. Phase Function

A configuration αj = {W0, b0, W1, b1, W2, b2}j of weights and biases (network parameters) is computed
by the phase function Θ(pj, β) for an arbitrary phase pj. The image of Θ is a smooth cyclic manifold
with respect to p, defined by control points β = {α0 ... αk}, themselves network parameter configurations.

Θ is responsible for the one-to-one correspondence between the phase and the network parameter configuration.

Θ could be any smooth, cyclic function of phase outputting tensors with the same dimensionality
as a network parameter configuration. The original authors chose a cyclic Catmull-Rom
spline with four control points β = {α0, α1, α2, α3}.
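As a minimal Python/NumPy sketch, such a cyclic Catmull-Rom spline over four control points can be evaluated as below, treating each αi as a flattened parameter vector; the exact segment indexing of the original implementation may differ.

import numpy as np

def phase_function(p, beta):
    # Theta(p, beta): cyclic Catmull-Rom spline through the control points
    # beta = [a0, a1, a2, a3], each a flat vector holding one full
    # configuration of network parameters.
    n = len(beta)                    # four control points
    s = (p / (2.0 * np.pi)) * n      # map phase to the spline parameter
    k = int(s) % n                   # index of the current segment
    w = s - int(s)                   # position inside the segment, in [0, 1)
    p0, p1, p2, p3 = (beta[(k - 1) % n], beta[k % n],
                      beta[(k + 1) % n], beta[(k + 2) % n])
    # Standard Catmull-Rom basis, evaluated componentwise on the parameters.
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * w
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * w ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * w ** 3)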

2.2.4. Training

Equation (2) defines the optimization problem performed on β, for input x representing bone
transformations and velocities together with user input (target direction, target velocity and locomotion style)
at frame i, and output y representing the skeletal transformations, the change in phase and the future
trajectory for frame i + 1. Θ is the phase function which computes the weights and biases, and β are the
parameters trained by minimizing the cost function. The rightmost term ensures that the network
parameters do not become exceedingly large.

cost(x, y, p, β) = ‖y − Φ(x, Θ(p, β))‖ + γ|β|    (2)

During the training stage, every gradient descent iteration updates the control points β of the cyclic
phase function Θ, plugging transform and character control information from every two
consecutive frames of the dataset in as x and y, together with the corresponding phases, and minimizing the
cost defined in equation (2).
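In the style of the earlier sketches, the cost of equation (2) can be written as below; unpack_params (an assumed helper which reshapes the flat spline output back into {W0, b0, W1, b1, W2, b2}) and the value of gamma are our assumptions, not taken from the original implementation.

import numpy as np

def cost(x, y, p, beta, gamma=0.01):
    # Equation (2): distance between the ground-truth next frame y and the
    # network output computed with the parameters that the phase function
    # yields at phase p, plus a regularization term on the control points.
    alpha = unpack_params(phase_function(p, beta))  # assumed helper
    return (np.linalg.norm(y - pfnn_forward(x, alpha))
            + gamma * sum(np.sum(np.abs(a)) for a in beta))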

The control points β of the phase function Θ represent the actual trained model, stored on disk and invoked
at the runtime stage, described in the next section.

2.2.5. Runtime

During runtime, the PFNN works as follows: every frame, the bone transformations, the future
trajectory of the character and the change in phase, Δp, are computed (predicted) by the neural
network Φ, whose weights and biases vary smoothly in time, looping with respect to p; more
precisely, they are outputs of the phase function, repeating themselves every phase wavelength, 2π.
Φ is fed the bone posture from the previous frame as well as pre-processed user
parameters, i.e. target velocity, target direction and a binary vector of desired locomotion styles, such
that all these user-controlled parameters are also smooth with respect to time, in order to produce
seamless motion when suddenly changing direction / steering. This pre-processing is just a linear
interpolation between the user control parameters queried at the current frame and at the previous one.
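As a sketch, this amounts to nothing more than the following, where blend is a hypothetical smoothing factor:

def smooth_control(previous, queried):
    # Linear interpolation between the control parameters (target velocity,
    # target direction, style vector) of the previous frame and the values
    # queried at the current frame, so that sudden stick flicks do not reach
    # the network as discontinuities.
    blend = 0.1  # hypothetical smoothing factor
    return (1.0 - blend) * previous + blend * queried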

This concludes a brief overview of how the PFNN works, but further discussion on its capabilities is
held in section 3.

2.3. Motion Matching. Related Work

The idea of the freedom to jump from any animation frame to any other available animation frame
became very appealing to software developers, who realized that, given a set of rules, one
could find the closest character posture to match the desired user input. Much research work
has been put into this field; to name a few: Parametric Motion Graphs [9], Snap-Together Motion:
Assembling Run-Time Animations [10], Motion Fields for Interactive Character Animation [11], Human
Motion Synthesis with Optimization-based Graph [12], Active Learning for Real-Time Motion Controllers [13]
and Motion Blending [14]. Ubisoft debuted in this field with Kristian Zadziuk's talk at GDC 2016 on motion
matching technology, giving a few details on real-life scenarios in which their team had to organize the
motion capture data in order to speed up the search in posture space.

This field of research is actively populated by great work, and yet it is still open to insights
on how to get the best out of interactive animation.

3. Animation System

3.1. System overview

Our animation system relies on two hierarchical skeletal structures for character control: the
PFNN-driven actor, illustrated in yellow, and the FSMAC-controlled actor, illustrated in blue, throughout
the whole paper. They both sit under the same game object, whose attached controller script balances
control between the two actors' motion data at runtime via spherical linear interpolation
(SLERP) [8] of the quaternions representing each bone's rotation in its parent space, i.e. its local rotation.
The main focus of this work has been finding a stable way of interpolating between the
postures of the two actors, smoothly with respect to time, such that new, believable locomotion is
generated from both sub-systems, with the possibility of full control override by one or the other at
any time at runtime. This brought about the challenges discussed in sections 3.3.1 and 3.3.2.

3.2. Typical FSMAC approach

With the FSMAC approach to character control, for a biped one would have a "grounded"
state (the character's feet are on the ground) within which the character performs all gait animations.
This is typically a massive blend tree parameterized by real numbers representing user control (desired
velocity, orientation, gait styles, etc.) and possibly other sub-states for dodging, take-off, stopping,
etc. A grounded-state blend tree is mainly parameterized by a space which Unity calls "2D
freeform directional", basically representing the desired velocity along the X and Z axes, mapped
directly to a joystick on a gamepad or simply to keyboard buttons.
An oversimplified example of a grounded-state blend tree is shown in Figure 5. The character moves
by interpolating (blending) between the 9 states, which must have matching phases, or otherwise the
animation will present undesirable foot sliding. No jogging, no 45° motion, no stopping or broad turns,
no locomotion styles: basically an indie game character, or simply a prototype. A real-world AAA
application relying on a FSMAC for gait locomotion would be about one order of magnitude larger
than this example. Though very modular and customizable, achieving naturalistic motion with this
approach is rather difficult and requires much manual labor, such as fine-tuning.

Figure 5. An animation blend tree state which produces very limited gait locomotion

3.3. Phase-Functioned Finite State Machine Animation Controller

We replace the massive gait blend tree discussed in the previous section with the PFNN, benefiting
from its full locomotion synthesis capabilities while being able to alter the motion it produces, as
well as to smoothly transition to other states in the FSMAC, allowing them to take full control, and then
smoothly transition back into the PFNN grounded state. This is done by carefully interpolating the
local rotations with per-bone dynamic blend factors computed from user control, orientation rules,
the phase variable, the normalized time of either the current state or the current transition, and the
time elapsed since the beginning of the application.
Figure 6 shows the high-level architecture of the system: the embedding of the PFNN behavior in the
FSMAC grounded state.

Figure 6. Architectural diagram of the animation system

The diagram above is described in the following sections.


The Driven_Actor is the skeletal structure of the character that we finally see on screen. For
efficiency purposes it is also the actor driven by the FSMAC by default. Thus, in the LateUpdate
callback of the main controller script, attached to the character object encapsulating the whole
system, we replace the default FSMAC skeletal transformation configuration
(posture) with the newly fabricated posture, referred to as the composite, obtained by interpolating
between the PFNN & FSMAC postures with a dynamic blend factor for every bone.
In the diagram, this interpolation process is represented as the Quaternion SLERP bone local rotations
pipeline. It is a pipeline because the final blend factor for each bone is computed via the composition of
several one-dimensional functions, with inputs such as the normalized time of the
animation/transition, the phase, character orientation, desired velocity, body part, etc. Moreover, not
only are the two skeletal structures' local rotations interpolated, but the previous frame also
contributes to the result, so a second interpolation may be performed per bone, especially during
transitions between states. This pipeline is customizable by the user, such that the alteration of the PFNN-
synthesized motion matches the desired effect.
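As a minimal Python sketch of the pipeline idea, one possible composition of such one-dimensional functions is given below; the shaping functions and weights are hypothetical placeholders, not our tuned values, and the resulting factor feeds the per-bone quaternion SLERP of equation (4).

import math

def clamp01(v):
    return max(0.0, min(1.0, v))

def composite_blend_factor(part_weight, phase, norm_time):
    # part_weight: per body part weight, e.g. arms may blend toward the FSMAC
    # more strongly than legs; norm_time: normalized time of the current
    # state or transition. The final blend factor is a composition of several
    # one-dimensional functions of these inputs.
    f_time = norm_time * norm_time * (3.0 - 2.0 * norm_time)  # smooth ramp
    f_phase = 0.5 * (1.0 + math.cos(phase))                   # phase damping
    return clamp01(part_weight * f_time * (0.75 + 0.25 * f_phase))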

3.3.1. FSMAC control override. Injecting unseen data into the PFNN. Smooth state transitioning

The blending of the PFNN with the FSMAC is performed at any frame while cycling through, transitioning into,
or transitioning out of the grounded state. Whenever the FSMAC is in full control override mode, the FSMAC
bone transformations are forced into the PFNN input, illustrated as the FSMAC control override feedback.
This is done for smooth transitioning from the FSMAC back to the PFNN. During such a transition, if the FSMAC
suddenly stops injecting bone transformations into the PFNN, the synthesized motion becomes noisy,
especially if the acyclic animation from the state that just played was an extreme gesture like a kick.
This occurs because the model has not seen anything similar during the training stage. To avoid
the noisy motion, the PFNN is instead injected with transforms interpolated between the previous
frame and the actual PFNN output. If this is done right, e.g. the previous frames have high priority in
the SLERP during the first half of the transition, no noise is perceived by the user. As long as the PFNN
is allowed to stabilize (and it does so rather quickly), we only see motion that is smooth over time.
Transitioning from the PFNN to the FSMAC is done in a similar manner, but is usually easier, because one
can get away with just blending the previous frame with the FSMAC frame, discarding the PFNN
transforms at the current frame: state entries are generally more sudden than state exits, because
the users demand high responsiveness to their actions. On entering such a
transition, the PFNN injection begins right from the first frame. A sketch of this feedback path follows.
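In this sketch, poses are dictionaries of per-bone unit quaternions, slerp is the interpolation of equation (4) (sketched in the next section) and the quadratic weighting is a hypothetical stand-in for the tuned priority curve.

def pfnn_input_pose(fsmac_pose, prev_pose, pfnn_pose, t, fsmac_override):
    # Feedback path into the PFNN input. t is the normalized time of the
    # current transition back into the grounded state.
    if fsmac_override:
        # Full override mode: force the FSMAC bone transforms into the input.
        return dict(fsmac_pose)
    # Transitioning back to the PFNN: for small t the previous frame
    # dominates, so the network can stabilize without visible noise.
    w = t * t
    return {b: slerp(prev_pose[b], pfnn_pose[b], w) for b in prev_pose}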

Figure 7. Two transitions (top, left to right, and bottom, left to right) from full FSMAC control to PFNN & FSMAC
control, i.e. from a punching/kicking state to the grounded state. The PFNN is allowed to stabilize
while the actual driven character performs motion that is smooth with respect to time, as during the
stabilization stage the NN has very low composite priority.

3.3.2. Fabricating new locomotion by interpolating a PFNN & a FSMAC

The BlendTree inside the grounded state in the diagram is the modifier which produces new
animations on top of the PFNN. Given the neural model trained on a dataset containing only basic
locomotion gaits (walking, jogging, running, sneaking), we successfully morphed these behaviors into
fighting locomotion by using a blend tree which interpolates between still fight stances. The blend tree
is parameterized by a one-dimensional factor equal to the sign of the vertical component of the cross
product of our character's facing direction and the direction from the character to its enemy, multiplied
by the dot product of the same two direction vectors, similar to what is shown in equation (3), resolving
the axis alignment problem. For two unit vectors A and B in 3D space, the quaternion which aligns A
with B is:

q(A→B) = [A · B, A × B]    (3)

While we're at it, we recall that the spherical linear interpolation of two quaternions is given by:

SLERP(q1, q2, t) = (sin[(1 − t)θ] / sin θ) q1 + (sin(tθ) / sin θ) q2, where θ = arccos(q1 · q2)    (4)

It has a property worth mentioning especially for this work, which uses an entire pipeline
of SLERP functions: the interpolation has a unique solution along the shortest arc
between q1 and q2, thus ensuring deterministic behavior when transforming the bones of our character.
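A direct Python/NumPy sketch of equation (4) follows, including the conventional sign flip that keeps the interpolation on that shortest arc (q and -q encode the same rotation):

import numpy as np

def slerp(q1, q2, t, eps=1e-6):
    # Spherical linear interpolation of two unit quaternions, equation (4).
    q1 = q1 / np.linalg.norm(q1)
    q2 = q2 / np.linalg.norm(q2)
    d = float(np.dot(q1, q2))
    if d < 0.0:
        # Take the shortest arc: q2 and -q2 represent the same rotation.
        q2, d = -q2, -d
    if d > 1.0 - eps:
        # Nearly parallel quaternions: fall back to normalized lerp.
        q = (1.0 - t) * q1 + t * q2
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1.0 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)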

3.3.3. Framerate for the PFNN

Figure 8. Activity diagram for one frame of the animation system

To get the most performance out of the PFNN, one can pre-compute the phase function Θ
at a fixed number of points, representing a fixed framerate, e.g. 60 fps, yielding 60 configurations of
weights and biases for the network instead of computing Θ at runtime from the control points. One
could either interpolate through the precomputed weights or, even better, run the simulation
with granular prediction, as described in the activity diagram above, i.e. simulating
prediction sub-frames at lower framerates while catching up with the application time.
An important fact not mentioned in the activity diagram of Figure 8 is that there
should be a threshold on the maximum number of sub-frames allowed in a regular frame: once it is
reached, no further computation is performed, even if pfnnTime is still smaller than the application's
elapsed time, in order to avoid loss of performance.
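In the sketch below, step_pfnn stands for one fixed-rate network evaluation, and the cap of four sub-frames per regular frame is a hypothetical choice:

PFNN_DT = 1.0 / 60.0   # fixed prediction framerate: 60 fps
MAX_SUBFRAMES = 4      # hypothetical cap, avoids spiraling on slow frames

def update_pfnn(pfnn_time, app_time, step_pfnn):
    # Granular prediction: run fixed-rate PFNN sub-frames until the network
    # clock catches up with the application clock, but never more than
    # MAX_SUBFRAMES times per rendered frame.
    subframes = 0
    while pfnn_time + PFNN_DT <= app_time and subframes < MAX_SUBFRAMES:
        step_pfnn()          # one fixed-rate prediction sub-frame
        pfnn_time += PFNN_DT
        subframes += 1
    return pfnn_time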

3.4. Implementation

Algorithm 1. Pseudocode to illustrate core functionality of the animation system

Most of the functionality described by the architecture and activity diagrams is captured
by the pseudocode in Algorithm 1, the system's main runtime loops. This is an oversimplification of
the controller script attached to the character object which encapsulates the system.
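Since Algorithm 1 is reproduced only as a figure, the following condensed Python sketch conveys the structure of its two runtime callbacks; every name not defined in the earlier sketches (query_user_input, the pfnn and blend_pipeline objects, etc.) is a hypothetical stand-in for a mechanism described in section 3.

class CharacterController:
    # Condensed sketch of the controller script attached to the character
    # object (cf. Algorithm 1).

    def update(self, dt):
        # Smooth the raw user input, then let the PFNN catch up with the
        # application clock in fixed-rate sub-frames (section 3.3.3).
        self.control = smooth_control(self.control, query_user_input())
        self.app_time += dt
        self.pfnn_time = update_pfnn(self.pfnn_time, self.app_time,
                                     lambda: self.pfnn.step(self.control))

    def late_update(self):
        # Runs after the FSMAC has posed the driven actor: override that
        # default posture with the PFNN & FSMAC composite, bone by bone.
        for bone in self.driven_actor.bones:
            f = self.blend_pipeline.factor(bone, self.phase, self.norm_time)
            bone.local_rotation = slerp(self.pfnn_pose[bone.name],
                                        self.fsmac_pose[bone.name], f)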

4. Discussions & Results

4.1. Relevant gameplay animation elements

4.1.1. Bone Velocities

The velocities of each bone in one animation frame are used by the PFNN at runtime to predict
the character's pose for the next frame, which yields smooth locomotion. They are also used
for driving different parts of the body when blending between animations, e.g. while the character is
attacking, if the user tries to move it in some direction D, we offer slight feedback by
applying root motion to the character along D', which is D interpolated with the hips' default velocity
at the current frame. Interpolation functions like the cubic Bezier ease-in-and-out curve are used to
generate believable motion, as sketched below.
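We use the cubic smoothstep below as a stand-in for a tuned cubic Bezier ease, since both have zero slope at the endpoints; the tuned curve would differ only in its tangents.

def ease_in_out(t):
    # Cubic ease-in-and-out: zero velocity at t = 0 and t = 1, so root
    # motion along D' ramps up and down without a visible pop.
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)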
Rigid bodies in the scene are also directly affected by kinematic objects like the character, so velocities
play a decisive role in the former's motion paths.
Real-time simulations are constrained by the small amount of time allocated to one frame. To achieve
stability of motion under fluctuating framerates and/or high velocities, several simulation sub-frames are
computed during one regular frame.

4.1.2. User input. Effectors & Motion Trajectories. Target Gain. Target Decay

User input is a very high priority variable in a real-time system, because it is what makes the application
interactive: the quicker the application's response, the deeper the immersion.
The fact that the application must respond quickly to user input can compromise smooth transitions
between animations, thus generating unbelievable motion. This is why dynamic generation of key-
frames at runtime is desirable, rather than just rendering key-frames from the predefined animations
of the current state with possible fixes like time dilation and contraction, which may ruin the
animation's timing. The trade-off between the responsiveness of the character and the smoothness of
its motion is governed by the variables TargetGain and TargetDecay, which produce the easing-in and
easing-out animations respectively; they are in fact an acceleration and a deceleration which transform
an often binary variable representing user input (e.g. a button) into a real number that is smooth with
respect to time.
Trajectories are freeform curves which describe paths for rigid bodies, for the effectors and for the
characters' pivot points. The PFNN computes the future trajectory by interpolating between its prediction
and the user's desired trajectory, also via TargetGain & TargetDecay, as sketched below.
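The sketch below shows one way such smoothing can be realized; the gain and decay rates are hypothetical.

def smooth_binary_input(current, pressed, dt, target_gain=8.0, target_decay=4.0):
    # Turns a binary input (button held or not) into a real number that is
    # smooth with respect to time: it ramps toward 1 at the TargetGain rate
    # while pressed and decays toward 0 at the TargetDecay rate when released.
    rate = target_gain if pressed else -target_decay
    return max(0.0, min(1.0, current + rate * dt))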
Effectors are used as positional targets for objects like limbs in fine-tuning scenarios such as grabbing,
attacking, etc. They ensure proper relative positioning (snapping) between interactive objects, such
that two or more physical objects interact in the simulation.
When an effector acts as a driver for an object which is part of a hierarchical structure, like a human
skeleton, an Inverse Kinematics (IK) solver is used to drive the other parts of the structure which lie
on higher levels of the hierarchy. We usually do this at the end of a simulation frame, as a correction
applied after the PFNN and FSMAC postures have been interpolated into the composite posture.

4.1.3. Environment Geometry and Physical Properties. External Forces

These may be obstacles with various properties such as mass and physical constraints (e.g.
hinge/spring/ball joints) or scripted colliders which trigger transitions in the character’s animation
state machine.

Any forces or impulses applied to the character that are not negligible require a simplified
physical model of the character to be simulated: a puppet (a ragdoll, built from primitives
which encapsulate the body parts) driven both by the physics engine and by the animations in the FSMAC.

4.2. Results
We managed to build the PFFSMAC, which produces naturalistic motion and, through its modular
design, is as flexible as a bare FSMAC. It is stable, framerate independent and fun to play with.
Further work is yet to come: we are considering exploring motion matching techniques or
extending the PFNN's parameters.
Performance-wise, the animation system updates one frame in about 2 ms, while the original PFNN
takes about 1 ms. This is no latency to worry about, as our system runs at a smooth 60 fps on
a Core i7 8550U laptop CPU, a GeForce MX150 GPU and 12 GB of RAM (much more than needed).

Figure 9. Snapshots of our system in action, to show the range of movement achieved with PFFSMAC

5. Conclusions

We conclude that the machine learning based motion synthesis method is compatible with
the highly modular industry standard and can produce realistic locomotion with little effort.
The key insight is that the PFNN is a robust system, capable of producing consistent
locomotion from unseen input which partly resembles the training data; this is exactly our case,
because the input parameters of the neural network are well defined within biological constraints,
namely the degrees of freedom of human bones. Thus, with a properly built blending mechanism, one
can rely on feeding novel motion data into the network at runtime to get the most out of both technologies.

Figure 10. Old snapshots during the system development (even before implementing fighting guard)

BIBLIOGRAPHY

[1] D. Holden, J. Saito, T. Komura, Phase-Functioned Neural Networks for Character Control, 2017
http://theorangeduck.com/media/uploads/other_stuff/phasefunction.pdf
[2] Unity 3D Animation Reference Docs
https://docs.unity3d.com/Manual/AnimationOverview.html
[3] R. Barrera, A. S. Kyaw, C. Peters, T. N. Swe, Unity AI Game Programming, Second Edition, pp. 17-39
https://s3-eu-west-1.amazonaws.com/lercm.aa/AV/IAAV/htm/IAAV_S5/data/UAIGP_C2.pdf
[4] CISC 486: Game Engine Development, School of Computing, Queen's University, Fall 2017
http://research.cs.queensu.ca/home/cisc486/assignments/as2/index.html
[5] Unreal Engine Animation Reference Docs
https://docs.unrealengine.com/en-US/Engine/Animation/StateMachines/index.html
[6] CryEngine Animation Reference Docs
https://docs.cryengine.com/display/SDKDOC2/Mannequin+Actions
[7] JC Delannoy, Most Inspiring Game Animation Tech Talks of 2016
https://medium.com/@jcdelannoy/the-best-game-animation-tech-talks-of-2016-d6a86e3d5a26
[8] Ken Shoemake, Animating Rotations with Quaternion Curves, 1985
http://run.usc.edu/cs520-s15/assign2/p245-shoemake.pdf
[9] Rachel Heck, Michael Gleicher, Parametric Motion Graphs
http://pages.cs.wisc.edu/~heckr/Papers/PMGFullPaper.pdf
[10] Michael Gleicher et al., Snap-Together Motion: Assembling Run-Time Animations, 2003
http://research.cs.wisc.edu/graphics/Gallery/kovar.vol/SnapTogetherMotion/SnapTogetherMotion.pdf
[11] Y. Lee, K. Wampler, G. Bernstein, J. Popovic, Z. Popovic (University of Washington, Bungie, Adobe Systems), Motion Fields for Interactive Character Animation
https://grail.cs.washington.edu/projects/motion-fields/motion-fields.pdf
[12] C. Ren, L. Zhao, A. Safonova, Human Motion Synthesis with Optimization-based Graph
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.227.6702&rep=rep1&type=pdf
[13] S. Cooper, A. Hertzmann, Z. Popovic (University of Washington, University of Toronto), Active Learning for Real-Time Motion Controllers
http://grail.cs.washington.edu/projects/active-learn-controller/active-learn.pdf
[14] Kristine Slot, Motion Blending, February 2007
http://image.diku.dk/projects/media/kristine.slot.07.pdf
[15] H. Zhang, S. Starke, T. Komura, J. Saito (Adobe Research), Mode-Adaptive Neural Networks for Quadruped Motion Control, SIGGRAPH 2018
http://homepages.inf.ed.ac.uk/tkomura/dog.pdf
