1 Introduction
Modern animation packages for film and game production enable the automatic
generation of sequences between key frames previously created by an animator.
Applications for this exist in computer games, animated feature films, simulations,
and digitized special effects, for example synthesized crowd scenes or background
action. However, automated animation has, until very recently, been limited to the
extent that characters move without autonomy, goals, or awareness of their
environment. For example, in moving from A to B a character might come into unintended contact with an obstacle, but instead of taking evasive action or suffering a realistic collision, the animation package generates a scene in which the character simply passes through the obstacle (Figure 1a). Such incidents must be repaired manually by the human animator (Figure 1b). Although recent versions of
commercially available animation packages have incorporated limited environment
awareness and a degree of collision avoidance, there remains considerable scope for
applying AI to animated characters to endow them with a full animation-oriented cognitive model, as advocated by Funge et al. (Funge, 1998; Funge et al., 1999). The role of the cognitive model is to give characters perception, goals, decision making, and autonomous interaction with their surroundings and with other characters. This paper describes our “FreeWill” prototype (Forte et al., 2000; Amiguet-Vercher et al., 2001), which has recently been initiated with the eventual aim of adding such capabilities to commercially available animation packages.
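-- keyframe the left and right arm controllers at frame 0 (starting pose)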
biped.AddNewKey LarmCont3 0
biped.AddNewKey RarmCont3 0
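-- move the time slider to frame 10, rotate the right forearm, and key both arm controllers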
sliderTime = 10
rotate RForearm3 30 [-1,0,0]
biped.AddNewKey LarmCont3 10
biped.AddNewKey RarmCont3 10
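-- at frame 20, apply a further forearm rotation about a different axis and key the arms again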
sliderTime = 20
rotate RForearm3 80 [0,0,-1]
biped.AddNewKey LarmCont3 20
biped.AddNewKey RarmCont3 20
Figure 2 Sample script for generating avatar behavior
Figure 3 Avatar interaction
One of the key elements of the knowledge base is the internal world model. Every
time an avatar performs an action, it first updates this world model. The avatar senses the world through a vision cone, which gives it
awareness of immediate objects in its path (see Figure 5). The information obtained
from the vision cone is then used to modify the avatar’s plan and perform the next
action.
Figure 5 Scene as seen by an avatar
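The geometry of the vision cone is not spelled out above. Purely as an illustration, a simple 2D cone test over the objects in the world could be sketched as follows (Python; the VisionCone fields, the get_image signature and the example object layout are assumptions made for this sketch, not the FreeWill implementation):

import math
from dataclasses import dataclass

# Illustrative sketch only: a 2D vision cone that reports which world objects
# an avatar can currently see. All names and parameters here are assumed.
@dataclass
class VisionCone:
    half_angle: float   # half of the cone's opening angle, in radians
    max_range: float    # how far the avatar can see

    def get_image(self, avatar_pos, avatar_heading, world_objects):
        """Return the identifiers of objects that fall inside the cone."""
        visible = []
        for obj_id, (ox, oy) in world_objects.items():
            dx, dy = ox - avatar_pos[0], oy - avatar_pos[1]
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist > self.max_range:
                continue
            # Signed angle between the avatar's heading and the direction to the object.
            diff = (math.atan2(dy, dx) - avatar_heading + math.pi) % (2 * math.pi) - math.pi
            if abs(diff) <= self.half_angle:
                visible.append(obj_id)
        return visible

# Example: an avatar at the origin facing along +x sees the lamp post ahead of it
# but not the bench behind it.
cone = VisionCone(half_angle=math.radians(45), max_range=10.0)
objects = {"lamp post": (5.0, 1.0), "bench": (-3.0, 0.0)}
print(cone.get_image((0.0, 0.0), 0.0, objects))   # ['lamp post']

The result of such a sensing step is what is used to update the internal world model and revise the plan, as shown in the pseudocode later in this section.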
An avatar’s behavior is goal-directed. The primary goal is provided by the user and represents the aim of the simulation for that avatar. In the example illustrated in Figure 3, the primary goal is to ‘get to the end of the sidewalk’. However, fulfilling this goal may require the accomplishment of secondary goals, which are set and assessed by the avatar; examples are ‘avoid collisions’ and ‘shake hands with friends’. Such goals are part of the avatar’s knowledge, and when to give them priority can be inferred from the current world state. The avatar’s behavior is encoded in the knowledge base as sets of facts and rules. The knowledge
base also provides logical information about static world objects and other avatars
(e.g. a list of friends). The logic controlling the avatar’s behavior is outlined below; nested braces show the calls made inside the preceding method:
DoSensing()
{
    image = Body.Sense()
    {
        return VisionCone.GetImage()
    }
    Mind.UpdateWorldModel(image)
    {
        KnowledgeBase.ModifyWorld(image)
        {
            WorldModel.ModifyWorld(image)
        }
    }
    Mind.RevisePlan()
    {
        ActionPlanner.Plan()
        {
            KnowledgeBase.GetGoals()
            ExploreSolutions()
            KnowledgeBase.GetObjectInfo()
            {
                WorldModel.GetObjectAttribs()
            }
            CreatePlan()
            lastAction = SelectLastPlannedAction()
            MotionControl.Decompose(lastAction)
        }
    }
    action = Mind.PickAction()
    {
        microA = ActionPlanner.GetMicroAction()
        {
            return MotionControl.GetCurrentAction()
        }
        return microA
    }
    return ConvertActionToEvent(action)
}
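The pseudocode above leaves open how goals and rules are represented in the knowledge base. As a rough sketch only, secondary goals could be selected from the current world state along the following lines (Python; the rule format, the priorities and the pick_goal helper are hypothetical and are not taken from the FreeWill knowledge base):

# Hypothetical sketch: a knowledge base fragment that selects the goal to
# pursue next. Rule conditions, priorities and goal names are assumptions.
PRIMARY_GOAL = "get to the end of the sidewalk"

# Each rule pairs a condition on the perceived world state with a secondary
# goal and a priority (higher values win over lower ones).
RULES = [
    (lambda state: state.get("obstacle_ahead", False), "avoid collision", 2),
    (lambda state: state.get("friend_in_view", False), "shake hands with friend", 1),
]

def pick_goal(world_state):
    """Return the highest-priority triggered secondary goal, else the primary goal."""
    triggered = [(prio, goal) for cond, goal, prio in RULES if cond(world_state)]
    return max(triggered)[1] if triggered else PRIMARY_GOAL

# A friend is in view but an obstacle is directly ahead, so collision avoidance
# takes priority for the next planning cycle.
print(pick_goal({"obstacle_ahead": True, "friend_in_view": True}))   # avoid collision
print(pick_goal({}))                                                 # get to the end of the sidewalk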
The main simulation loop is located in the Scheduler class, which consecutively picks events from an event queue. Control is then passed to the world object to which the event refers (in most cases an avatar) and the necessary actions are taken. These can be:
- an ‘act’ action – such as moving a hand or taking a step. The action is rolled out (the avatar’s state variables are updated) and a new line is added to the MaxScript file. This action returns a new sensing event to be inserted in the event queue.
- a ‘sense’ action – the avatar compares the perceived fragment of the world with its own internal model. It then has a chance to rethink its plan and possibly update its goals and the planned set of future actions. This action returns a new acting event.
The returned events are inserted in the event queue and the time is advanced so that
the next event can be selected. A PeriodicEventGenerator class has been introduced to
generate cyclic sensing events for each avatar so that even a temporarily passive
avatar has its internal world model updated.
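As an illustration of this event-driven scheme only, a stripped-down loop in the spirit of the Scheduler and PeriodicEventGenerator might look as follows (Python; the StubAvatar class, the post and run methods and the one-tick delays are assumptions made for this sketch, not the actual implementation):

import heapq
import itertools

class StubAvatar:
    """Minimal stand-in for an avatar: acting and sensing just report themselves."""
    def __init__(self, name):
        self.name = name
    def act(self, time):
        print(f"{time}: {self.name} acts (state updated, MaxScript line emitted)")
        return time + 1        # schedule the follow-up sensing event one tick later
    def sense(self, time):
        print(f"{time}: {self.name} senses (world model updated, plan revised)")
        return time + 1        # schedule the next acting event one tick later

class Scheduler:
    def __init__(self):
        self._queue = []                   # heap of (time, tie_breaker, avatar, kind)
        self._counter = itertools.count()  # tie-breaker keeps insertion order at equal times
    def post(self, time, avatar, kind):
        heapq.heappush(self._queue, (time, next(self._counter), avatar, kind))
    def run(self, until):
        while self._queue:
            time, _, avatar, kind = heapq.heappop(self._queue)   # time is advanced here
            if time > until:
                break
            if kind == "act":     # roll out the action, then queue a sensing event
                self.post(avatar.act(time), avatar, "sense")
            else:                 # 'sense': update model and plan, then queue an acting event
                self.post(avatar.sense(time), avatar, "act")

# Cyclic sensing in the spirit of PeriodicEventGenerator: seed every avatar with a
# sensing event so that even a temporarily passive avatar keeps its model updated.
scheduler = Scheduler()
for avatar in (StubAvatar("A"), StubAvatar("B")):
    scheduler.post(0, avatar, "sense")
scheduler.run(until=3)

In the full system the ‘act’ branch would also append MaxScript commands of the kind shown in Figure 2.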
This paper has explained our framework for supporting autonomous behavior for
animated characters, and the mechanisms that drive the characters in the simulation.
The resulting actions are rendered in an animation package as illustrated. Our current
prototype indicates that there is considerable scope for the application of AI to the
automatic generation of animated sequences. In the current system the
implementation of goal based planning is inspired by STRIPS (Fikes and Nilsson,
1971; Fikes, Hart and Nilsson, 1972). As a next step it would be interesting to extend
our framework to experiment with planning activity that is distributed across several
agents and takes place in a dynamic complex environment requiring the intertwining
of planning and execution. Such requirements imply that goals may need to be
changed over time, using ideas described for example by Long et al (Long, 2000).
The prototype we have developed is a useful environment for developing and testing
such cognitive architectures in the context of a practical application.
References
1. Amiguet-Vercher J., Szarowicz A., Forte P., Synchronized Multi-agent Simulations for
Automated Crowd Scene Simulation, AGENT-1 Workshop Proceedings, IJCAI 2001, Aug
2001.
2. Fikes R., and Nilsson, N., STRIPS: A new approach to the application of theorem proving to
problem solving, Artificial Intelligence, Vol. 2, pp 189-208, 1971.
3. Fikes R., Hart, P., Nilsson, N., Learning and executing generalised robot plans, Artificial
Intelligence, Vol. 3, pp 251-288, 1972.
4. Forte P., Hall J., Remagnino P., Honey P., VScape: Autonomous Intelligent Behavior in
Virtual Worlds, Sketches & Applications Proceedings, SIGGRAPH 2000, Aug 2000.
5. Funge J., Making Them Behave: Cognitive Models for Computer Animation, PhD thesis,
Department of Computer Science, University of Toronto, 1998.
6. Funge J., Tu X., Terzopoulos D., Cognitive Modeling: Knowledge, reasoning and planning
for intelligent characters, Computer Graphics Proceedings: SIGGRAPH 99, Aug 1999.
7. Long D., The AIPS-98 Planning Competition, AI Magazine, Vol. 21, No. 2, pp 13-33, 2000.
8. Object Management Group, OMG Unified Modeling Language Specification, June 1999.
Version 1.3. See also http://www.omg.org
9. Russell S., Norvig P., Artificial Intelligence, A Modern Approach, Prentice Hall, 1995.