Massive Software 2 - Learning Tutorials
The Massive Tutorial Library includes a set of lessons covering all aspects of Massive. The Comprehensive
Track is strongly recommended as the best way to thoroughly understand the concepts behind Massive, and
should be chosen if time permits.
Comprehensive
Motion Track
Brain Track
Body Track
COMPREHENSIVE TRACK
SECTION 1: BASICS
1-1
Skeleton
1-2
Basic Channels
2-2
2-3
2-4
2-5
Sound
2-6
Vision
2-7a
Creating Flowfields
2-7b
2-8a
Painting Colour
2-8b
SECTION 3: ACTIONS
3-1
Tree Design
3-2
Importing Actions
3-3
Adjusting Transitions
3-4
Brain Modules
SECTION 4: DYNAMICS
4-1
Dynamics Basics
4-2
4-3
Wind
Shot Track
SECTION 5: GEOMETRY
5-1
Attaching Geometry
5-2
5-3
Cloth
5-4
Blend Shapes
SECTION 6: RENDERING
6-1
Materials
6-2
Cameras
6-3
6-4
Rendering
SECTION 7: MISCELLANEOUS
7-1
Variation
7-2
Brain Variables
7-3
Spawning
MOTION TRACK
TREE DESIGN
3-1
IMPORTING ACTIONS
3-2
Importing Actions
3-3
Adjusting Transitions
BRAIN MODULES
3-4
BRAIN TRACK
BRAIN BASICS
1-2
Basic Channels
2-1
2-2
USING INPUTS
2-5
Sound
2-6
Vision
2-6b
2-7b
BRAIN TOOLS
7-2
Brain Variables
BODY TRACK
SKELETON
1-1
Skeleton
7-1
Variation
GEOMETRY
5-1
Attaching Geometry
5-2
5-3
Cloth
5-4
Blend Shapes
MATERIALS
6-1
Materials
DYNAMICS
4-1
Dynamics Basics
4-2
SHOT TRACK
SCENE ARRANGEMENT
2-3
TERRAIN MAPS
2-6a
Creating Flowfields
2-7a
Painting Colour
RENDERING
6-2
Cameras
6-3
6-4
Rendering
LEARNING
Welcome to Massive Learning Materials. Here you can access a number of helpful documents that will assist you
in getting started with Massive and integrating it into your pipeline.
OBJECTIVES
(1) Learn how to build a skeleton with segment primitives.
(2) Learn the difference between the shape and rest tabs.
(3) Learn how to use segment symmetry.
Files needed: none
Tutorial movies: skeleton01.mov
OVERVIEW
Massive agents' skeletons are built out of four primitives: the sphere, the tube, the disc, and the box.
The tutorial video skeleton01.mov will show you how to build a simple skeleton from scratch.
Below are concepts covered in this lesson:
1. Building A Skeleton
2. Shape And Rest Tabs
3. Symmetry
CONCEPTS
> BUILDING A SKELETON
Building a skeleton in Massive is fairly simple. To add segments to the skeleton, drag them from the left toolbar in
the body page. If a preexisting segment is already selected when you drag a new segment onto the page, the new
segment will automatically be a child of the highlighted segment.
Massive will also automatically place the new segment relative to its parent in world space in a relationship similar
to the placement of the segment node relative to the parent node. This is usually a pretty approximate placement,
and needs to be followed up with some adjustment.
Unlike in some other programs, Massive skeleton segments actually occupy volume in three-dimensional space,
and this volume is used to calculate collisions and dynamics as well as being the actual objects seen in agents'
vision.
> SHAPE AND REST TABS
The shape tab differs from the rest tab in that it contains segment shape information such as radius, length, or size in x, y, and z. Like the rest tab, it contains rotate and translate transform sliders, but these move the shape while leaving the segment axis alone. This has the same effect as modifying the pivot point in other programs.
The rest tab contains rotate and translate sliders that manipulate the whole segment including the axis. When the
agent itself is selected, the rest tab moves the whole agent including the agent axis. When the root segment is
moved in the rest tab, the agent axis stays where it is.
In the picture above, the large axis on the ground is the agent axis and the smaller ones are segment axes. Helpful keys for working with the skeleton include: alt-s, alt-a, alt-shift-a, alt-l, and alt-n.
> SYMMETRY
Symmetry is on by default in Massive, but can be turned off in the menu by selecting Edit->symetric. It applies automatically to any pair of segments with the same name differing only in an "L" or "R" (either case) at the beginning or end of the name. Any transformations made to the shape or rest of one segment will automatically be mirrored to the other segment in the pair.
SUGGESTED EXERCISES
OBJECTIVES
(1) Learn how to use basic agent and segment channels.
Files needed: none
Tutorial movie: none
OVERVIEW
The most basic channels in Massive are those used for simple transforms - translation and rotation. This lesson
goes over how they are implemented and used in Massive.
These are the concepts covered in this lesson:
1. Using Output And Input Nodes
2. Position and Speed
3. Agent vs. Segment Channels
CONCEPTS
> USING OUTPUT AND INPUT NODES
Output nodes contain the output of the agent's brain, which controls the agent's behavior. Output nodes often are
found at the end of a long series of fuzzy logic in the brain, but can be as simple as a single output node with a
manually-entered value.
The output node offers the following options:
Of the two text boxes, the name is on the left and the channel is on the right. It's a common mistake when hurrying
to accidentally type the channel (tz - translate z - in this case) in the name box, and become confused about why
the output channel isn't working.
The channel text box should usually be filled with the name of a Massive-recognized channel, such as tx, ty, tz, rx,
ry, rz.
The range only sets the range of the UI slider and is set for convenience's sake. Moving the slider manually sets
the value of the output node. If there is an incoming input connection into the node, that sets the value of the output
node. To adjust the value manually in that case, select the "manual" button. To set a value out of slider range, you
can just type the value in the box beside it.
Input nodes get input values that the agent's brain can use to make decisions. Like the output node, the name
goes on the left and the input source, usually a Massive-recognized channel, goes under source.
In this case, the output node above would set the agent's translate-z speed, while the input node pictured here
would read the agent's translate-z speed.
The pos and speed buttons in the output and input nodes affect whether the node is reading/assigning the values
as a function of speed or position.
By default, tz is the translation of the agent in the z direction per second; essentially, the speed of the agent in z. This is what a tz output or input node refers to when set to the default of "pos". When set to "speed", a tz output or input node refers to the rate of increase of tz per second, or effectively, the acceleration of the agent in z.
When set to 45, this output channel rotates the agent at a continuous rate of 45 degrees per second in y.
When set to 45, this output channel rotates the agent at a rate accelerating by 45 degrees/s/s.
When set to 45, this output channel sets box1 to a static pose rotated 45 degrees from its original position.
When set to 45, this output channel rotates box1 at a continuous rate of 45 degrees per second in y.
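The semantics of the four examples above can be sketched as a per-frame integration loop. This is an illustrative model, not Massive code; the 24 fps step and the function name are assumptions. For an agent channel, "pos" mode treats the value as a rate (units per second) and "speed" mode as a rate of increase of that rate.

```python
# Hypothetical sketch of agent-channel "pos" vs "speed" integration.

DT = 1.0 / 24.0   # assumed simulation step of 24 frames per second

def integrate_agent_channel(value, frames, mode="pos"):
    pos = 0.0
    rate = 0.0
    for _ in range(frames):
        if mode == "pos":
            rate = value          # "pos": value is units per second
        else:
            rate += value * DT    # "speed": value accelerates the rate
        pos += rate * DT
    return pos

one_second_pos = integrate_agent_channel(45.0, 24)            # about 45 degrees
one_second_acc = integrate_agent_channel(45.0, 24, "speed")   # less: still speeding up
```

Note that, as the box1 examples show, segment channels behave differently: for a segment, "pos" mode sets a static pose rather than a rate.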
OBJECTIVES
(1) Learn how fuzzy logic is applied in Massive.
(2) Become familiar with terrain inputs.
(3) Create a simple terrain-following agent.
Files needed: Terrain1/ground.obj
Tutorial movie: terrain01.mov
OVERVIEW
Follow the instructions in the tutorial movie terrain01.mov. The tutorial will take you through all the steps of
creating a box agent from scratch and making a brain for it capable of following an uneven terrain. It will also
introduce a few additional concepts in Massive that may be new to you.
These concepts are covered below:
1. Fuzzy Logic In Massive
2. Terrain-Related Channels
3. Copying and Pasting
4. "Master Control Switches"
5. Macros
CONCEPTS
> FUZZY LOGIC IN MASSIVE
The basic structure of fuzzy logic in Massive looks something like this:
Inputs are numerical inputs from the world which describe what the agent perceives, such as sound frequency,
vertical distance from terrain, slope of terrain, colour value of terrain, direction of flow field, and many other
possibilities. (See the channels page in the manual for a full list of input channels.)
Inputs can potentially be any numerical value, and therefore need to be converted into a fuzzy value so that they
can work within the fuzzy logic rules. All fuzzy values are ultimately between 0 and 1, indicating a truthfulness from
0 (completely false) to 1 (completely true).
The fuzz node converts numerical input values into fuzzy values using a membership function, which is the curve
you see when you highlight the fuzz node. You decide what sort of quality you would like the fuzz node to define,
adjust the membership function to define it, and the output value of the fuzz node tells how "true" it is. Fuzz nodes
are used to define qualities such as "high", "red", "right", and many more. A value of 1 coming from a fuzz node
called "high" means that your agent is completely within the range you call "high". Usually, there are two or more
fuzz nodes connected to an input node, defining various ranges of that input in fuzzy terms. Examples include high/
ok/low, right/ahead/left, red/blue/black.
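As an illustration of how several membership functions define overlapping fuzzy terms over one input, here is a minimal sketch. The triangular shape and the breakpoints are assumptions; Massive lets you draw the membership curve freely in the UI.

```python
# Hypothetical sketch of fuzz-node membership functions.

def triangular(x, left, peak, right):
    """Truth in [0, 1]: zero outside (left, right), one at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Three fuzz nodes reading the same sound.x input (degrees):
def fuzz_left(x):  return triangular(x, -180.0, -90.0, 0.0)
def fuzz_ahead(x): return triangular(x, -45.0, 0.0, 45.0)
def fuzz_right(x): return triangular(x, 0.0, 90.0, 180.0)

# A sound at 45 degrees is half "right" and not at all "left":
half_right = fuzz_right(45.0)
not_left   = fuzz_left(45.0)
```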
Outputs from fuzz nodes are then usually combined into one or more rules. AND and OR rules evaluate the
truthfulness of all their combined inputs (must be fuzzy values) and output the resulting fuzzy value.
To convert the results of these rules back into a "real world" value useful for setting speed, turning your agent a
certain angle, or anything else, defuzz nodes are required. In most cases, you set the value of the defuzz node.
http://outranet.scm.tees.ac.uk:8002/Resources/docs/massive/learning/tutorials/terrain.html (2 of 4) [13/05/2007 00:47:20]
We might set the value of the "go up" node to 10. Depending on how true the input is (0 to 1), the defuzz node will
output some value up to 10, telling your agent how far to go up.
The output of the defuzz node connects to an output node, in this case ty, which indicates what channel the defuzz
is driving. Usually in this setup, there should be at least two defuzz nodes connected to the output, often one of
them an "else" node, so that Massive has some defined value to give the output at all times. Otherwise you might
see "jumping" or jerky behavior.
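The whole input -> fuzz -> rule -> defuzz -> output chain can be sketched end to end. Assumptions here: fuzzy AND is the minimum, OR is the maximum, and the output blends each defuzz node's set value by its incoming truth, with an "else" rule keeping the output defined at all times. Massive's exact defuzzification may differ; this only illustrates the flow.

```python
# Hypothetical sketch of Massive's fuzzy-logic chain; names illustrative.

def fuzzy_and(*truths):
    return min(truths)

def fuzzy_or(*truths):
    return max(truths)

def blend_defuzz(pairs):
    """pairs: (truth, value) for each defuzz node feeding one output."""
    total = sum(t for t, _ in pairs)
    if total == 0.0:
        return 0.0
    return sum(t * v for t, v in pairs) / total

# "too low" is 0.8 true, so go up 10; the "else" defuzz covers the rest,
# so the output is always defined and never jumps:
too_low = 0.8
ty = blend_defuzz([(too_low, 10.0), (1.0 - too_low, 0.0)])   # -> 8.0
```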
> TERRAIN-RELATED CHANNELS

channel     type    range
ground      input   -inf to inf
ground.dx   input   -inf to inf
> "MASTER CONTROL SWITCHES"
This comes in handy when you create a large block of brain that you'd like to control with one switch. Because the input is connected to the AND nodes, they will always be false when the switch is off, making that whole section of the brain effectively inactive.
> MACROS
For better organization of your brain, large sections of brain nodes can be compressed into macros. Select the
portion you're interested in and hit alt-g to compress it into a macro. Shift-alt-g undoes this process. To enter a
macro, hover your mouse over it and hit enter. To exit it, hit backspace.
While inside a macro, you can select any node and hit alt-x to make it an external input or output. Which one it
appears as depends on how close to the left or right side that node is. Any node that was already connected to
something external when you made it into a macro will automatically be an external input or output. To undo this,
make sure the node is unconnected to anything external, select it, and hit alt-x again.
SUGGESTED EXERCISES
1. Add a "head" (a smaller box) to your box agent. Using the principles in this lesson, try to get the head to tilt
so that it stays upright as the body follows the terrain.
OBJECTIVES
(1) Learn how to use the timer node.
(2) Learn how to use the noise node.
Files needed: none
Tutorial movies: timernoise01.mov
OVERVIEW
Noise nodes help add a degree of randomness to an agent's brain, and timer nodes are useful for many different
applications. The tutorial timernoise01.mov will give you an introduction to how these nodes work.
Below are concepts covered in this lesson:
1. The Timer Node
2. The Noise Node
CONCEPTS
> THE TIMER NODE
The timer node is a basic timer with a few additional options. On the left hand side, there is a bar showing the
progress of the timer and a slider to control the rate of its increase. If there is an alt (black) input coming into the
timer, the value from that input will be the timer's rate, and will override this slider.
On the right hand side, there is an input box that allows you to set the timer's range, which only applies if it is not an
endless timer. This can be toggled with the endless button. If on, the timer will continue to increase without
resetting. If off, the timer will reset every time it reaches the end of its range.
Timers can run on their own, or as a result of being triggered by input. If a timer has an input connection, it will not start until it receives an input value greater than 0.5. When "if stopped" is selected, the timer will wait until it finishes its cycle before it can be triggered again. When "always" is selected, the timer will remain at the start as long as the input is greater than 0.5, and will start running when the input drops below 0.5.
Timers can be used for numerous applications in Massive. They can be used to animate a character by using the fuzz curves as animation curves controlling segments' rotations and translations. They can be used as an agent's "memory" of recent events or decisions. They can be used to set a series of behaviors in motion that you want to happen at specific intervals. Timers are versatile and very useful.
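A hedged sketch of the timer behavior described above: a value that rises at "rate" per second, resets at the end of its range unless endless, and (if it has an input) starts only when the trigger input exceeds 0.5. The class and argument names are illustrative, not Massive API; only the "if stopped" trigger mode is modelled.

```python
# Illustrative model of the timer node's described behavior.

class Timer:
    def __init__(self, rate, range_=1.0, endless=False):
        self.rate = rate
        self.range_ = range_
        self.endless = endless
        self.value = 0.0
        self.running = False

    def step(self, dt, trigger=1.0):
        if not self.running and trigger > 0.5:
            self.running = True
        if self.running:
            self.value += self.rate * dt
            if not self.endless and self.value >= self.range_:
                self.value = 0.0      # reset at the end of the range
                self.running = False  # "if stopped": wait for a new trigger
        return self.value

t = Timer(rate=0.5, range_=1.0)   # reaches its range in two seconds
for _ in range(3):
    t.step(0.5)                   # value climbs 0.25 per half-second step
```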
After the rate is raised above 0, the noise node changes its output value over time. Agents in the scene will have randomly different noise values from each other at any given time, but every time you reset and run the simulation, the progression of values will be the same, and the simulation will give the same results as the first time. However, any small change to the scene or re-placing of agents will result in a different pattern for all agents involved.
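The repeatability described above is what you would get from seeding a noise generator per agent: agents differ from each other, but re-running the simulation reproduces the same values, while any change to the seed inputs changes the whole pattern. A hypothetical sketch, not Massive's actual noise implementation:

```python
# Deterministic per-agent noise sketch.

import random

def agent_noise(agent_id, frame, rate=1.0):
    """Noise in [0, 1]: the same (agent, frame, rate) always yields the
    same value, so a re-run of the simulation repeats exactly."""
    rng = random.Random(agent_id * 1_000_003 + int(frame * rate))
    return rng.uniform(0.0, 1.0)

first_run  = agent_noise(1, 10)
second_run = agent_noise(1, 10)   # identical: the simulation repeats
```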
SUGGESTED EXERCISES
1. Create an agent that turns from white to red over two seconds and then starts over again (use the agent channel colour; 0 is white and 0.7 is red).
2. Add a noise node to make this behavior occur randomly. Tinker with the settings to make this occurrence common (at least three times in every ten seconds or so), and then rare (once in twenty seconds).
OBJECTIVES
(1) Learn how to place agents with the place tool.
(2) Learn how to edit individual locators.
Files needed: Place1/ground.obj, Place1/plod.cdl, Place1/ball.cdl
Tutorial movie: place01.mov
OVERVIEW
The Place tool allows you to create a variety of generators which generate individual locators that stand in for
where your agents will be placed. Ctrl-P places instances of agents where the locators are and Ctrl-D deletes
them, leaving the locators in their place.
Follow the instructions in the tutorial movie place01.mov. The tutorial will show you how to use the Place dialog to
place groups of agents. It will also cover many of the Place dialog's options. Options not covered in the tutorial will
be mentioned in the concepts section below.
These are the concepts covered in this lesson:
1. The Placement Generators
2. The Place Dialog Options
3. Editing Individual Locators
4. Painting Terrain
When you're finished with the tutorial, save the scene as place.mas.
CONCEPTS
> THE PLACEMENT GENERATORS
At the top of the Place dialog are 5 buttons representing the 5 types of
generators you can use for placement. These are, in order, the point
generator, the circle generator, the polygon generator, the spline
generator, and the colour generator.
Select the generator you want to create, and click the add button to begin
creating it. The point generator creates a single point around which
generated agents will be clustered. Left-click once in the view window to
create it.
The circle generator creates a circle populated with agents. Left-click in the
view window to establish the center of the circle and drag out to establish the
radius. Depending on the distance you select, some agents may appear
outside the borders of the circle. This is the easiest generator for setting up a
simple crowd.
The polygon generator allows you to draw a polygon shape by left clicking
points in the view window. Right click to finish. It allows you to define a more
precise shape than the circle generator.
The spline generator allows you to draw a curved spline by left clicking
points in the view window. Right click to finish. These generators are useful
for generating rows and columns of agents (cars, armies, parades).
The colour generator is different from the others in that it places agents
according to the colour painted on the terrain. You can adjust which r, g, and
b levels you would like your agents to be densest in.
Generators can be moved or edited after creation. A shift-left-button-drag will
by default move the generator. If the button on the far upper right of the Place
dialog is selected, then shift-left-button-drag will instead move a component of
the generator. This can be used to edit the radius of the circle generator, or
move points in the polygon and spline generators.
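As a conceptual sketch of what the circle generator does, here is a scatter of locators within a radius around a centre point. The distribution (uniform over the disc) is an assumption; Massive's actual spacing and jitter are internal to the Place tool.

```python
# Illustrative circle-generator sketch; names are not Massive API.

import math
import random

def circle_locators(cx, cz, radius, count, seed=0):
    """Return (x, z) locator positions scattered inside a circle."""
    rng = random.Random(seed)
    points = []
    for _ in range(count):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        r = radius * math.sqrt(rng.random())  # sqrt keeps density uniform
        points.append((cx + r * math.cos(angle), cz + r * math.sin(angle)))
    return points

locators = circle_locators(0.0, 0.0, 10.0, 50)   # a simple 50-agent crowd
```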
SUGGESTED EXERCISES
OBJECTIVES
(1) Learn how Massive files relate to each other.
Files needed: none
Tutorial movie: none
OVERVIEW
Massive files consist of agents (.cdl files) placed in the context of a scene file (.mas files). Both of these types of
files in turn will almost always contain links to many additional files for geometry, terrain, actions, texture maps, and
other information. Both agent and scene files are text files and can be easily viewed and edited in a text editor.
This lesson covers the relationship of all these different files to each other and takes a brief look at the format of the
text files.
These are the concepts covered in this lesson:
1. Relationships Between Massive Files
2. Suggested Directory Structure
CONCEPTS
> RELATIONSHIPS BETWEEN MASSIVE FILES
As you can see in the diagram above, the scene file (.mas) is the master file ultimately linking together all the
elements in your scene. The .mas file itself contains information such as global render settings, camera and light
information, flowfield spline and setting information, and placement information. Within the code of the .mas file are
direct links to agent (.cdl) files, terrain (.obj) files, and terrain map (.tif) files.
The agent file (.cdl) contains agent information, such as the skeleton and brain data, as well as skinning ("Bones")
data. It contains links to action files (.actb, .amc), agent geometry (.obj), shaders, and texture maps.
With this structure in mind, it makes sense to always re-save your .mas file when you add a new agent or change
the name of your current one (also when you add/change terrain or terrain maps or flowfields).
Directory Name   Contains
(top level)      scene (.mas) files
CDL              agent (.cdl) files
Geo              agent geometry (.obj) files
ACT              action files (.actb, .amc)
Terrain          terrain (.obj) files
Maps             terrain map (.tif) files
OBJECTIVES
(1) Become familiar with the sound input/output channels.
(2) Create an agent that avoids other agents using sound.
Files needed: none
Tutorial movie: sound01.mov
OVERVIEW
Create a simple box agent (no brain needed), then follow the instructions in the tutorial movie sound01.mov. The
tutorial shows you how to quickly make a box agent that avoids collisions with other box agents using sound.
Concepts are covered in further detail below:
1. Uses of Sound
2. Sound-Related Channels
3. Viewing Sound Emission
4. Tips on Using Sound
CONCEPTS
> USES OF SOUND
Sound is the simplest method in Massive for allowing an agent to perceive other agents, identify them, know where they are in relation to itself, and identify their current emotional state or action, in addition to communicating all this information about itself.
Sound can be used to keep agents from bumping into each other, or to cause some agents to follow others. Sound can be used to keep a group in formation by having an agent perceive the positions of its nearest neighbors and avoid getting too close, or too far ahead or behind.
An agent can emit different frequencies to indicate to other agents its identity (which "team" it's on for example), its
status ("dead" agents can emit a certain frequency that causes others not to attack), or orders (a "leader" agent can
make sounds that direct its listeners to do certain things).
Sound does not work in close combat situations where agents need to face off and react precisely to their enemy's actions (e.g., blocking a strike). In those cases, vision is recommended.
> SOUND-RELATED CHANNELS

channel     type    range
sound.a     both    0 to inf
sound.f     both    0 to inf
sound.f1f   both    0 to inf
sound.x     input   -180 to 180
sound.y     input   -90 to 90
sound.d     input   0 to 1
sound.o     input   -180 to 180
sound.ox    input   -180 to 180
sound.oy    input   -90 to 90
Sound channels receive multiple inputs simultaneously, which can lead to some confusion on the user's part.
Suppose you have a fuzz node called "near" connected to a sound.d input node and a fuzz node called "right"
connected to a sound.x input node. You connect both "near" and "right" to an AND node which you call "near and
right". Is this active when you hear some sounds to the right (but not necessarily near) and some sounds near (but
not necessarily to the right)? Or is it only active when at least one of the sounds you are hearing is both near and
right? It is in fact the latter. All the sound fuzz nodes connected to an AND node must be true about at least one
specific sound-emitting agent in order for the AND node to be true.
All of the sounds being heard by your agent can be seen in the membership curve window of one of its sound input
fuzz nodes. They appear as colored vertical lines, the color of the line corresponding to the frequency of the
sound.
There can sometimes be some confusion over sound.x versus sound.ox. Sound.x tells an agent the angle of the
sound source, using the agent's center as the origin and straight ahead as 0 degrees. In both diagrams below, the
red agent will register the grey agent at 30 degrees in its sound.x input.
The direction the target agent is facing has no effect. If it's important for your agent to know which way the target
agent is facing, the channels you want to use are sound.o, sound.ox, and sound.oy. Below are some diagrams
showing the use of sound.ox in particular.
In the picture above, the red agent will read the grey agent as 30 degrees in the sound.ox channel, and 0 degrees
(straight ahead) in the sound.x channel.
This picture will also result in the same sound.ox (30 degrees), although the sound.x is now about 30 degrees as
well.
The red agent in the diagram above will register the grey agent as 90 degrees in the sound.ox channel, and again 0
degrees in sound.x.
In this diagram, each agent will read the other as 180 degrees in the sound.ox channel.
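The geometry in the diagrams above can be sketched with a hypothetical helper and an assumed convention (0 degrees is straight ahead along +z, positive angles to the right): sound.x is the emitter's bearing in the listener's frame, while sound.ox is the listener's bearing in the emitter's frame, which is why sound.ox reflects the emitter's facing and sound.x does not.

```python
# Illustrative sketch of sound.x vs sound.ox; conventions are assumptions.

import math

def bearing(from_pos, from_heading_deg, to_pos):
    """Angle of to_pos relative to from_pos's facing, in [-180, 180)."""
    dx = to_pos[0] - from_pos[0]
    dz = to_pos[1] - from_pos[1]
    world = math.degrees(math.atan2(dx, dz))   # 0 deg = +z = "ahead"
    return (world - from_heading_deg + 180.0) % 360.0 - 180.0

listener = (0.0, 0.0)                  # facing +z (heading 0)
emitter  = (0.0, 10.0)                 # directly ahead, facing the listener
sound_x  = bearing(listener, 0.0, emitter)     # 0: heard straight ahead
sound_ox = bearing(emitter, 180.0, listener)   # 0: emitter faces listener
```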
A very important thing to note is that sound inputs must be connected in this order:
input -> fuzz -> AND ->
An AND node must come between the sound input fuzz node and anything that comes after it. Sound input fuzz
nodes connected directly to a defuzz won't work correctly.
Massive is extremely flexible in that as long as that rule is observed, you can combine sound inputs and fuzz nodes
in almost any way you like. However, there are a few simple tips to setting up a sound-detecting brain that may help
speed up the process.
For your basic collision-avoidance agent, you will need sound.x fuzz nodes representing left, center, and right.
It is suggested that you create left and right fuzzy membership curves that peak at -90 and 90 degrees
respectively. This makes sense considering that 90 degrees should be when "right" is most true. You may want to
begin the curve at 0, because you want your agent to react to any obstacle between 0 and 90 degrees, reacting
gradually more strongly as the angle increases.
In the graph pictured, you'll notice there's a sharper falloff above 90 degrees. This is a good idea for collision avoidance purposes because as the obstacle moves beyond 90 degrees, it ends up pretty much behind your agent, and doesn't really necessitate turning away. If you imagine an obstacle at 135 degrees, you'll see it makes more sense to just keep going forwards; turning away would be a bit of an overcorrection, and can lead to your agent spending way too much effort on avoidance when it has other fish to fry (following, hunting, etc.).
Now, if you want your agent to follow some things in addition to avoiding some other things, you may want to
create additional left and right fuzz nodes, perhaps called "Xleft" and "Xright" (for extreme left and extreme right).
While the "right" curve pictured above works well for avoidance, a slightly different shape of curve may work better
for following. An agent 135 degrees to the right may not be worth turning to avoid, but it would be worth turning to
follow.
For an "Xright" fuzz node used for following purposes, I would suggest a membership curve that rises steadily from 0 to about 179 degrees. The farther to the right the target is, the harder the agent will turn, turning hardest when the target is almost directly behind it.
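The two curve shapes discussed can be sketched as simple piecewise functions of the sound.x angle (degrees to the right). The exact breakpoints are illustrative choices, not Massive defaults.

```python
# Illustrative avoidance vs following membership shapes.

def right_avoid(x):
    """Avoidance "right": peaks at 90 degrees, sharp falloff to 135."""
    if x <= 0.0 or x >= 135.0:
        return 0.0
    return x / 90.0 if x <= 90.0 else (135.0 - x) / 45.0

def xright_follow(x):
    """Following "Xright": rises steadily from 0 toward 179 degrees, so
    the agent turns hardest when the target is almost directly behind."""
    if x <= 0.0:
        return 0.0
    return min(x / 179.0, 1.0)

# An agent at 135 degrees: not worth avoiding, very much worth following.
not_worth_avoiding = right_avoid(135.0)     # 0.0
worth_following    = xright_follow(135.0)   # about 0.75
```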
Below is an example of a "center" curve and a "right" curve that do not overlap. Don't do this.
Suppose something is near at 57 degrees. Is it to the right? Is it center? Is it left? No, it's nothing. It won't trigger anything in the brain, and the agent won't do anything. If it were within the "center" curve, it would have triggered the brain to slow down. If it were within the "right" curve, it would have triggered the brain to turn away to the left. As it is, it continues plowing ahead at full speed and may collide with that target. In any case, this isn't optimal for smooth collision avoidance.
If those curves had some overlap, an object at that point would register as a little bit center and a little bit right, and the agent would slow down a little and turn a little to the left. Much better.
Another problem that often occurs is when you try to create an agent that both avoids and follows. It may avoid
some types of agents and follow others, or it may avoid any agents that are too close, but attempt to gather with
other agents when it is too far from any other agents.
The major problem here is divided loyalties between attempting to avoid and attempting to follow. You will
undoubtedly find some agents colliding with each other, trying to turn both left and right at the same time, for
different reasons, which averages out to straight.
There are a number of ways to solve this. The main idea is to set a priority. In this case, avoiding collisions is the
main priority. Your agent should only attempt to follow others when it is NOT in the midst of avoiding a collision.
Therefore avoidance takes priority. You can specify this in a multitude of ways.
In this diagram, "near left", "near right" and "near front" are all situations in which the agent will be avoiding
something. These 3 situations feed into an OR indicator node, "avoiding". When it's on, it means the agent is
avoiding something. The two illegible AND nodes under "follow" are rules that instruct the agent to follow its own
kind. They both have a black NOT input from avoiding, so that they'll only activate if the agent is NOT avoiding
something. This puts them at a second priority to the three avoiding rules.
On the far right is a quicker, if less precise, way of preventing simultaneous left/right turn attempts. See if you can
trace how it works.
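The priority scheme described above can be sketched in a few lines: following activates only when the agent is NOT avoiding. Here fuzzy OR is the maximum, AND is the minimum, and a black (NOT) input contributes 1 minus its truth; the names are illustrative, not Massive node names.

```python
# Illustrative priority gating: avoidance first, following second.

def follow_priority(near_left, near_right, near_front, follow_rule):
    avoiding = max(near_left, near_right, near_front)   # OR indicator
    return min(follow_rule, 1.0 - avoiding)             # AND with NOT(avoiding)

# Mid-avoidance, the follow rule is suppressed even though it is true:
suppressed = follow_priority(0.0, 0.5, 0.0, 1.0)   # 0.5
clear      = follow_priority(0.0, 0.0, 0.0, 1.0)   # 1.0
```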
SUGGESTED EXERCISES
1. Make an agent that avoids other agents when it's too close but tries to come back to them when it gets too
far from everyone.
2. Make an agent that wanders around randomly. Make another type of agent that follows the first agent in a
group, avoiding collisions.
OBJECTIVES
(1) Become familiar with the vision input/output channels.
(2) Learn how to use the vision settings.
(3) Create an agent that avoids and follows other agents using vision.
Files needed: Vision/boxen.obj
Tutorial movies: vision01.mov, vision02.mov
OVERVIEW
Vision works in a way very similar to sound. Several of the channels are almost equivalent to each other, but there
are some major differences that you should make note of in deciding when to use vision or sound for an application.
Create a simple box agent (no brain needed), then follow the instructions in the tutorial movies vision01.mov and
vision02.mov. The tutorial shows you how to quickly make a box agent that avoids collisions with other box agents
using vision, while not wandering too far from the other agents.
Concepts are covered in further detail below:
1. Uses of Vision
2. Vision-Related Channels
3. Vision Settings
4. Standins
5. Tips on Using Vision
CONCEPTS
> USES OF VISION
Vision is very similar to sound, allowing an agent to perceive other agents, identify them, know where they are in relation to itself, and identify their current emotional state or action, in addition to communicating all this information about itself.
In addition, vision allows the agent to locate objects more precisely - in order to hit or block for example. Also very
useful is the ability to make different segments of an agent different colors, so that other agents can react
specifically to its weapon or head or other skeleton segment.
The major drawback of vision is that it takes more processing time than sound.
Vision can also be used to navigate terrain. In order for the agent to see terrain, however, you must make sure you
select Terrain->visible to agents.
> VISION-RELATED CHANNELS

channel        type    range     segment form
vision.x       input   -1 to 1   head:vision.x
vision.y       input   -1 to 1   head:vision.y
vision.z       input   0 to 1    head:vision.z
vision.h       input             head:vision.h
vision.i       input   -1 to 1   head:vision.i
vision.active  input             head:vision.active, vision.active
Vision channels receive multiple inputs simultaneously, which can lead to some confusion on the user's part.
Suppose you have a fuzz node called "near" connected to a vision.z input node and a fuzz node called "right"
connected to a vision.x input node. You connect both "near" and "right" to an AND node which you call "near and
right". Is this active when you see something to the right (but not necessarily near) and see something near (but not
necessarily to the right)? Or is it only active when at least one target is both near and right? It is in fact the latter. All
the vision fuzz nodes connected to an AND node must be true about at least one specific target in order for the
AND node to be true.
There is often confusion about the difference between vision.x and vision.i. The two channels have very similar functions. The only difference is that vision.x gives you an angle relative to agent space, whereas vision.i returns an angle relative to segment space. Vision.x is almost always the channel you need, as you are interested in whether things are in front of, to the left of, or to the right of your AGENT, not necessarily the segment. You don't want its navigation affected by whether its head is turned slightly left or right. The primary use of vision.i is head tracking, that is, getting the agent's head to turn and follow some target.
Vision.z, like sound.d, measures distance on a logarithmic scale, so that 1 is closest and 0 is infinitely far.
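The exact mapping is internal to Massive, but a curve with the stated properties (1 when touching, approaching 0 as distance grows, scaled by the z factor) can be sketched as an exponential falloff. This is purely illustrative, not Massive's actual formula.

```python
# Hypothetical vision.z-style distance mapping.

import math

def vision_z(distance, z_factor=1.0):
    """1 is closest, 0 is infinitely far; z_factor stretches the scale."""
    return math.exp(-distance / z_factor)

near, far = vision_z(1.0), vision_z(50.0)   # near reads higher than far
```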
Field of view allows you to set a horizontal and vertical range for your agent's vision. In the example above, field of view is set to 180 for X, allowing this agent to see 90 degrees to the left and 90 degrees to the right. Field of view is set to 90 for Y, allowing the agent to see 45 degrees up and 45 degrees down.
Each agent's vision is rendered as a small image at a resolution specified in this tab. Only the Y resolution is editable: the image necessarily has the same aspect ratio as the field of view, so the X resolution is determined automatically.
It's recommended that you keep your resolution at the minimum number of pixels that works for you. Vision images
are rendered for each agent with vision, and larger resolutions add significantly to processing time.
Render slices deal with the problem of distortion when the field of view becomes too wide. Two or more separate
images are rendered and stitched together. The process is automatic. You only need to specify the number of
slices. 1 slice is sufficient for a field of view up to about 150 degrees. 2 slices are recommended for about 150-240
degrees, and 3 slices are good up to 360 degrees.
The z factor deals with the scale of the distance logarithm for vision.z, and usually doesn't need to be adjusted.
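One plausible way to picture such a channel (the exact formula Massive uses is not documented here, so this exponential fall-off is an assumption for illustration only):

```python
import math

def vision_z(distance, z_factor=1.0):
    # Assumed mapping: 1.0 at zero distance, approaching 0 as
    # distance grows; a larger z_factor compresses the fall-off.
    return math.exp(-distance * z_factor)
```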
The button on the far right allows agents to see geometry. Remember, for agents to see terrain, you need to select
Terrain->visible to agents from the menu.
There is also a vision dialog available from the menu which contains many of the same features. This is found
under Edit->Vision.
Field of view, resolution, and render slices, all have the same effect as adjusting them in the vision tab. The display
vision button toggles on and off the image of what the currently selected agent is seeing, which appears in the
bottom left corner of the view window. This can be also done with View->vision.
The zoom option doesn't affect the actual resolution of the agent's vision, but scales up the display of it in the
corner of your view window, for more convenient viewing.
> STANDINS
To help simplify things and save processing time, there is a feature called standins that allows complex agents to
appear as simple primitives once they are further than a certain distance from the viewing agent. This is the same
principle as level of detail (LOD) switching.
In the image below, we are seeing what the nearest agent sees. He sees the man closest in front of him as a full-resolution man, but all the others as rectangles.
Currently, the only way to incorporate standins is by inserting a small bit of code into the cdl file, as shown below.
standin
primitive billboard
size 70.000000 180.000000 0.000000
centre 0.000000 0.000000 0.000000
bone_rotate 0.000000 0.000000 0.000000
range 200.000000 inf
This standin is a billboard type. There are four types of standins: point, line, billboard, and box. The billboard is one of the most useful, and is a two-dimensional rectangle that always faces the viewer.
http://outranet.scm.tees.ac.uk:8002/Resources/docs/massive/learning/tutorials/vision.html (4 of 5) [13/05/2007 00:47:38]
The other important variable is the range. The first number is the minimum range. Agents farther from the viewer
than this will appear as standins. The second number is the maximum range, which you'd usually set to infinity.
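The range logic can be sketched as follows (a hypothetical helper; `representation` and its default values are illustrative, mirroring the cdl snippet above):

```python
# Sketch: level-of-detail selection mirroring the cdl "range" line.
# range 200.0 inf  ->  use the standin beyond 200 units.

def representation(distance, range_min=200.0, range_max=float("inf")):
    if range_min <= distance <= range_max:
        return "standin"      # e.g. a billboard rectangle
    return "full"             # full-resolution agent
```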
SUGGESTED EXERCISES
1. Make an agent that travels around a maze and avoids walls. Use the sample file boxen.obj as terrain.
2. Make a red box agent that chases a blue box agent through the maze. Make the blue box agent do its best
to avoid red agents. Place several of each agent.
OBJECTIVES
(1) Learn how to create a flow field in Massive.
Files needed:
Tutorial movies:
flowFields01.mov
OVERVIEW
Flow fields are a convenient way to direct your agents along a set path, as well as having other uses. The tutorial
movie flowFields01.mov will show you how to make and edit a flow field.
Below are concepts covered in this lesson:
1. Creating a Flow Field
2. Editing a Flow Field
3. Using Gaps
CONCEPTS
> CREATING A FLOW FIELD
To create a flow field, you first need terrain. The flow field will follow the contour of the terrain, and will actually be baked into the terrain as an alpha channel before agents can detect it.
Creating a new flow field is fairly straightforward, and is done through the flow field dialog accessed by Edit->flow
field in the menu.
With the spline icon on the left selected, click "add" and begin clicking in the View window to add spline points.
Right click when done. Sliders are available to adjust radius, angle, edge angle, and edge width, as well as u and v
indicators.
The number of u and v indicators affects the density of the little arrows used to visualize the flow field, and does not affect the behavior of the flow field at all.
When the spline icon is selected, the radius, angle, edge angle, and edge width sliders will affect these settings for the entire spline.
Radius changes the radius of the flow field, and angle changes the overall angle of direction. Edge angle affects
how far the edge indicators of the flow field are angled in or out, as shown below:
In this view, the red color represents the level of the alpha channel. When saving the flow field, be sure to save the
setup (.mas) file, as it contains all the spline information, as well as the terrain map, as it contains the baked results.
If you forget to save the terrain map, or wish to bring in a new one with rgb data on it, you can just bake the flow
field again by hitting "apply" in the flow field editing window.
To add a point, click "add" and then click anywhere along the spline to add the point. To delete a point, shift-select
a point in the view window and click "delete". To move a point, shift-drag it in the view window.
Also, all of the blue sliders can be adjusted for individual points. For example, here is the result of adjusting the
radius for a single point.
To add a gap, select the gap icon, click "add", and click in the view window where you would like to place the gap.
They can be moved and deleted in the same way as spline points.
Options available for adjustment are the radius of the gap and the weight of its effect on the flow field.
SUGGESTED EXERCISES
1. Create a flow field that the agents follow very tightly, then create one where the agents move more freely but
do not leave the flow field.
OBJECTIVES
(1) Learn how to create an agent that responds to flow fields.
Files needed:
Tutorial movie:
flowField02.mov
OVERVIEW
It is very simple to make a Massive agent follow a flow field. The tutorial above will show you how to make a quick
flow field and create a box agent that follows it.
For more information on making a flow field, see the lesson Creating Flow Fields.
Below are concepts covered in this lesson:
1. Using ground.flow
CONCEPTS
> USING GROUND.FLOW
The input channel ground.flow tells the agent the orientation of the flow field relative to its axis. In the picture
below, the flow field under the agent is angled about 30 degrees right from the agent's forward Z axis.
The result is a value of about positive 30 in ground.flow, as shown in its fuzz node graph.
The simple curves above for "flowfield left" and "flowfield right" are all that you need to get the agent to follow the
flow field. They can be connected directly to left and right turn defuzz nodes or can be incorporated into more
complex behaviors.
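A minimal version of those two curves and their defuzz can be sketched as follows (Python; the curve shapes and the `max_turn` gain are illustrative assumptions, not Massive's internals):

```python
# Sketch of "flowfield left"/"flowfield right" fuzz curves feeding
# left/right turn defuzz. The ground.flow value is the flow direction
# relative to the agent's forward axis, in degrees (positive = right).

def flow_right(angle):
    return max(0.0, min(1.0, angle / 90.0))

def flow_left(angle):
    return max(0.0, min(1.0, -angle / 90.0))

def turn(angle, max_turn=10.0):
    # Simple weighted defuzz: degrees to turn this frame.
    return (flow_right(angle) - flow_left(angle)) * max_turn
```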
SUGGESTED EXERCISES
1. Create an agent that only moves perpendicular (+90 degrees) to a flow field.
OBJECTIVES
(1) Learn how to paint colour maps on terrain.
Files needed:
Tutorial movies:
paint01.mov
OVERVIEW
Colour maps on terrain can be read by agents passing over them and also used for placement of agents. Massive's
3D paint tool allows the user to paint terrain directly in Massive, in addition to the option of importing a pre-made
image as a terrain map. The tutorial movie paint01.mov will show you how to use the paint tool.
Below are concepts covered in this lesson:
1. Terrain Maps
2. The Paint Dialog
CONCEPTS
> TERRAIN MAPS
Once there is a terrain in the scene, a terrain map can be imported onto it through the menu option File->load
terrain map.
Alternatively, you can paint your own map from scratch in Massive, or even import a map and modify it in Massive.
You can save the resulting map back out as a .tif file with the menu option File->save terrain map.
Options available include size, opacity, and stipple. Some examples of the different brushes are shown below.
Round brush.
Soft brush.
If you had wanted to add blue without removing the red, you would need to make sure the red colour channel was
off when painting the blue line. The results would be as below:
SUGGESTED EXERCISES
1. Create a red circle that falls off gradually to black at the edges, maintaining a relatively linear gradient from
center to edge.
OBJECTIVES
(1) Learn how to use ground colour channels.
(2) Learn how to use ground colour gradient channels.
(3) Get an agent to follow a colored path.
Files needed:
Tutorial movies:
colour01.mov
colour02.mov
OVERVIEW
Ground colour, as described in a terrain texture map, can be a useful way to control and direct agents, as well as
placing them. (See lesson on placement.) The tutorial colour01.mov will show you how to get agents to respond to
different colours on the ground, and the tutorial colour02.mov will introduce you to some additional channels that
will help you get an agent to follow a coloured path.
Below are concepts covered in this lesson:
1. Detecting Ground Colours
2. Detecting Colour Gradients
3. Colour Tips
CONCEPTS
> DETECTING GROUND COLOURS
The three agent channels ground.r, ground.g, and ground.b detect the value of red, green, or blue on the ground
directly below the agent axis. In this example, the ground beneath the agent is a combination of red and green:
So the resulting channels register a high value of red and green, and a zero value for blue. The range for each of
these channels is between 0 and 1.
When connected to fuzz nodes, these colour channels can affect the behavior of agents as they pass over different
colours. They can be used to affect anything in the agent's brain, but one example would be to have agents switch
from a regular walk to a slower trudging action when passing through a colored area that symbolizes "mud".
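That mud example can be sketched as follows (illustrative only; `action_weights` is a hypothetical helper, not a Massive function):

```python
# Sketch: switching from walk to trudge as the agent crosses a red
# "mud" area. ground_r stands in for the ground.r input channel.

def action_weights(ground_r):
    mud = max(0.0, min(1.0, ground_r))   # fuzz: "on mud"
    return {"walk": 1.0 - mud, "trudge": mud}
```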
The top left agent is to the left of the path, and so its ground.b.dx gradient would be positive, as the blue gradient
increases to the right. The ground.b.dz gradient would be negative, as the blue gradient is increasing behind it. It
can be difficult to remember which way is positive and which is negative for all of Massive's channels, but it's simple enough to step forward in the simulation and look to see what value the node is showing.
To get a good path follow, a minimum of four fuzz nodes and four rules is necessary.
Using dx and dz input gradient nodes, you will need fuzz nodes for "path is left/right" and "path is forward/behind",
and the four simple rules from combining those. More inputs and fuzz nodes can be added for even better results.
Possibilities include "no change" fuzz nodes for both inputs, as well as a ground.b value input so that it knows its
approximate distance from the path center as well.
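The four-rule setup can be sketched as follows (Python; the rule weights and curve shapes are illustrative assumptions):

```python
# Sketch of the minimal four-rule path follow using blue gradients.
# dx stands in for ground.b.dx: positive when blue increases to the
# agent's right. dz stands in for ground.b.dz: positive when blue
# increases ahead of the agent.

def fuzz(v):
    return max(0.0, min(1.0, v))

def steer(dx, dz, gain=45.0):
    right, left = fuzz(dx), fuzz(-dx)
    ahead, behind = fuzz(dz), fuzz(-dz)
    # Four rules: (right & ahead) -> gentle right turn,
    # (right & behind) -> hard right turn, mirrored for left.
    turn = min(right, ahead) * gain + min(right, behind) * 2 * gain
    turn -= min(left, ahead) * gain + min(left, behind) * 2 * gain
    return turn
```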
SUGGESTED EXERCISES
1. Adjust the agent and place it facing the other way so that it approaches the intersection and follows the blue
path around the corner.
OBJECTIVES
(1) Learn how to design a motion tree.
(2) Learn how to use tree controls.
(3) Learn how to use action lists and take lists.
Files needed:
Tutorial movies:
tree01.mov
none
OVERVIEW
A motion tree is the starting point for building a complete Massive agent. Here is where you plan out what your
agent needs to be capable of and come up with the actions you will need to capture and animate.
The tutorial movie tree01.mov will show you how to set up a simple stand/walk tree.
Below are concepts covered in this lesson:
1. Creating A Motion Tree
2. Adding Tree Controls
3. Creating Action Lists and Take Lists
CONCEPTS
> CREATING A MOTION TREE
Motion trees are composed of two types of nodes. Transition nodes represent a pose and are conceptual. Action
nodes represent actual actions that will be captured (or animated), imported, and edited. For example, the action
"stand" begins and ends on the same pose, which we also call "stand". The action "stand_to_walk" also begins with
that same pose, but ends in a different one.
You can see how this is depicted in the tree by the direction of the connections. The action "stand" has incoming
and outgoing connections to the "stand" node while "stand_to_walk" only has an incoming connection. This
represents how the "stand_to_walk" action begins with the "stand" pose and ends in the "walk" pose.
You will notice that walk_45L has a single outgoing connection to the "walk" action. This is a blend action, and is
the only case where you want to connect an action to another action. Blends are used when one action is a
variation of another, and can be blended with it in varying degrees. In this example, walk_45L (a 45 degree left
walk while facing forward) is a variation of a walk, and blending it partially could result in a 10 degree walk or a 30
degree walk.
A bad example for blending would be blending a sitting action with a sitting and reading a newspaper action. A
halfway blend between those two actions would look bad. It would be best to set up those sorts of actions like our
stand and walk actions are set up above.
Good ideas for blending include "walk" and "walk_in_place" (blending in the second one would slow the agent down
in a natural-looking way), and "walk" and "walk_up" (blend in the second one gradually as the slope of terrain
increases).
Once all your actions and transition nodes have been created and connected properly, you will need to add inputs
to trigger and control all your actions, as well as some outputs to help keep track of your agent's state.
First, all trees need triggers to know when to start an action. The tree above requires two triggers:
"Walk" and "stand_to_walk" both are assigned the same trigger, "walk". Later on, in the brain, the agent will trigger
the input "walk" whenever the circumstances are right for walking. The tree will automatically determine whether the
"walk" action or the "stand_to_walk" action needs to be started, depending on whether the agent is currently
walking or standing. The brain builder doesn't need to worry about any of this, and only needs to trigger "walk" in
the brain.
Similarly, "stand", "walk_to_standL", and "walk_to_standR" all have the trigger "stand". The tree can automatically
tell whether "stand" or one of the "walk_to_stand" actions is needed, but how can it tell whether to use
"walk_to_standL" or "walk_to_standR"? This depends on which foot is currently forward, and this information needs
to be fed to the tree. This can be done using latches.
We have created two latches, "leftfoot" and "rightfoot". Each latch represents a certain range in the playback of the
previous action. The previous action, "walk", has two possible exit points, right foot forward and left foot forward,
each defined with a latch low (a range where the latch curve value is 0) in the walk action's latch curve.
To adjust the latches, select the multiple-latched action in question (in this case, walk) then go to the latch dialog.
The action's latch curve will appear in the dialog, along with a blue shaded range. Use the left mouse button to drag
the edges of this region to the area you want to define, and give the defined area a logical name, such as "rightfoot".
We next click on "walk_to_standR" and assign it the trigger "stand" and the latch "rightfoot". Now, when the trigger
"stand" is on, the latch "rightfoot" is on, and the agent's current action is "walk", then "walk_to_standR" will be
activated.
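The selection logic the tree performs can be sketched as a lookup (illustrative pseudocode mirroring this lesson's stand/walk tree, not Massive's internals):

```python
# Sketch: how the tree picks the next action from a trigger plus the
# current action and latch state. The transition table is illustrative.

TRANSITIONS = {
    # (trigger, current action, latch) -> next action
    ("walk",  "stand", None):        "stand_to_walk",
    ("stand", "walk",  "leftfoot"):  "walk_to_standL",
    ("stand", "walk",  "rightfoot"): "walk_to_standR",
}

def next_action(trigger, current, latch=None):
    if trigger == current:
        return current                 # already performing the target
    return TRANSITIONS.get((trigger, current, latch), current)
```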
For further control over the tree's choices, there are also modifier options. Modifier options allow you to specify an
additional condition that needs to be met in order to run an action.
For example, you may have a trigger called "attack" assigned to three actions, "attack_high", "attack_middle", and
"attack_low". You can specify which attack occurs by assigning the "high", "middle", and "low" modifiers to those
respective actions.
For greatest efficiency, modifiers can be reused among different sets of actions. For example, the above modifiers
could be used for blocks as well, in conjunction with a trigger called "block", to trigger the actions "block_high",
"block_middle", and "block_low".
Finally, how do we activate walk_45L (and the walk_45R that we will also need)? Blend actions are assigned
special blend activators.
These are assigned to blend actions as "blends", not triggers. When activated while the base action ("walk") is
running, these will blend on top of the base action.
This is a simple tree, but in a more complex tree we may have many actions that involve the agent staying in one place, such as stand, sit, kneel, stand_draw_sword, etc. We may want one output to tell us if the agent is not moving, so that we can make sure it doesn't try to turn, for example. In this case, we can create an output.
We can assign the "still" output to all the actions that involve staying in one place.
SUGGESTED EXERCISES
1. Try creating a tree for an agent that stands, walks, and runs. Don't use run as a blend on top of walk.
OBJECTIVES
(1) Learn how to import and edit actions for Massive agents.
Files needed:
Tutorial movies:
motion_import01a.mov
motion_import01b.mov
import02.mov
OVERVIEW
Actions, created by motion capture or keyframe animation, are generally the basis for all movements in a typical
Massive agent. In this tutorial, you will learn how to import actions and edit them for proper use in Massive.
The tutorials listed above will take you through importing and editing a few actions.
Concepts are covered in further detail below:
1. Importing and Saving Actions
2. Step 1: Transform
3. Step 2: Loop/Trim
4. Step 3: Agent Curves
5. Step 4: IK
6. Action Tips
7. Useful Hotkeys
CONCEPTS
> IMPORTING AND SAVING ACTIONS
Actions can be brought in as motion capture data in .amc format, as keyframed animation in .ma format, or in .act
or .actb formats previously saved from Massive.
To import .amc, .act, or .actb files, simply go to File->load actions.
To import Maya .ma files, go to File-> Import Maya ascii and make sure "motion" is selected.
After importing, actions will appear in the action editor. Imported .ma actions will be named "new" by default.
It is recommended that actions be saved as .actb (action binary) files, as the most compact format that is handled
best by Massive. This is done through the menu selection File -> save actions.
After adding a new set of actions to an agent, or editing an existing set of actions, you should first save the .actb
and then the agent (.cdl) file. This will save a link to the .actb file in the agent (.cdl) file.
If you save the .cdl file without saving the .actb file first, Massive will embed all the action data in the .cdl file, which
is generally a less preferred way of working, as it is less efficient and results in large .cdl files where you may be
duplicating the action data every time you save a new version of the agent.
Generally you will not need to adjust any of the other transforms.
Now go to the loop tab. The curves you selected will help you keep track of where you are, so that the whole span
doesn't look the same.
In this tab, the first task is to trim the motion. It is best to have the "solo" button selected, so that this action plays
alone, and play or step the agent while looking in the view window. When you are close to the critical point, step
forward frame-by-frame using the "." key, or backward using the "," key.
You can set the start and end points by dragging the green start and end indicators from the beginning and end of
the graph. You can also set the points by stepping or playing up to the point you want to set as the start or end and
clicking the "loop start" or "loop end" buttons.
For a looping action, the object is to trim a loop such that the start and end pose are as similar as possible. Since
in reality, they won't be exactly identical, you will generally need to apply a cross fade. Specify the length of the
cross fade in the "cross fade" box.
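A simplified picture of what a loop cross fade does (this sketch fades the tail of one curve toward its starting value so the loop closes cleanly; Massive's actual blending may differ):

```python
# Sketch: cross fading the last `fade` frames of a looping curve
# toward the curve's starting value, so the end pose matches the
# start pose. The linear blend ramp is an assumption.

def cross_fade_loop(curve, fade):
    out = list(curve)
    n = len(curve)
    for i in range(fade):
        w = (i + 1) / float(fade)
        out[n - fade + i] = curve[n - fade + i] * (1 - w) + curve[0] * w
    return out
```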
To let Massive know which curves to cross fade, select one of the boxes: static, turning, locomotion, or ramp. Static
is an action where the agent stays in one spot, turning involves the agent facing a different direction at the end of
the loop, locomotion is an action where the agent moves in the x or z directions, and ramp is an action where the
agent is going up or down, as on a slope. Cross fade curves can also be selected manually, to the right of those
options.
This will create agent translation and rotation curves which keep the agent axis under its root as it performs its
action. You can see the results before applying by remaining in this tab and playing the action. Turn off alt-shift-f
(camera exact follow), and there should be no "jumps".
If all looks well, apply and continue.
> STEP 4: IK
Finally, if your agent is walking in any way at all, it's best to assign some IK to its feet, to avoid the appearance of
sliding when it is turning or moving in any way slightly different from the originally captured motion. Massive can use
an IK hold channel to keep your agent's feet anchored in place while they are touching the ground. This IK hold
can also be used in the brain to adjust the stepping of feet on uneven terrain.
The first step is to put a rotation constraint on the foot or ankle joint that will attempt to keep the foot parallel to the
ground as IK works on the rest of the chain. If you look at the foot segment in the body page, you will see that it has
its skip option selected under IK.
IK in Massive is simple. When you create IK curves on a segment, Massive assumes that segment is the IK target,
and the next two segments above it in the hierarchy are in the IK chain. Assigning IK to the toe would automatically
make the foot and lower leg part of the chain. To customize this, segments can be skipped. In this case, the foot is
skipped. The toe, lower leg, and upper leg form an IK chain.
Since the foot is skipped, it may end up at odd angles as the rest of the IK chain does its work. The rotation
constraint curves keep it relatively parallel to the ground. To assign these, select any of the foot curves in the
curve tab, then go to the IK tab and select "RC curves".
Next, assign IK curves to the toe by choosing a toe curve in the curves tab. Return to the IK tab and click "IK
curves".
Finally, create IK hold curves by clicking "hold curve". This attribute, when on (1), holds the IK target in place and
when off (0), allows the joints to move just as described in the original action.
It's easy to get a good approximation of this curve by moving the thresholds representing speed and height (green and yellow horizontal lines) up and down by dragging with the middle mouse or left mouse button. Keep an
eye on your agent's action in the view window to see if these curves make sense. It helps to actually see the IK in
action, by toggling alt-i to view IK. Blue spheres represent the location of the IK target (toe), red spheres appear
when IK hold is on. If IK hold is on when your agent's foot is on the ground, and off when it is up and/or moving,
your curve should be about right.
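The threshold logic can be sketched as follows (illustrative; Massive derives this curve internally from the action data):

```python
# Sketch: deriving an IK hold curve from the toe's speed and height
# against the two threshold lines. 1 holds the foot in place, 0
# releases it to follow the original action.

def hold_curve(speeds, heights, speed_thresh, height_thresh):
    return [1.0 if s < speed_thresh and h < height_thresh else 0.0
            for s, h in zip(speeds, heights)]
```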
To fine tune this curve, or any of your other curves, go to the edit tab.
In the edit tab, you can move points by selecting and dragging with the mouse, delete points, and insert points by
pressing insert and clicking on the curve. Buttons to the lower right allow you to lock or unlock the ability to move
points horizontally or vertically, as well as entering numerical values.
To test that IK is working properly, have your agent walk a slightly curved path. Create an output node with the
name of the action as its channel (e.g., "walk"), and set it to 1. Create another output node called "[ry]:offset" and
assign it a small angle, such as 30 degrees. Make sure "solo" is off or the action window is closed, and run the simulation. Your agent should be walking a slightly curved path without any evidence of "sliding" feet.
> USEFUL HOTKEYS
shift-alt-f: camera exact follow
".": step forward
",": step backward
(right mouse drag)
SUGGESTED EXERCISES
OBJECTIVES
(1) Learn how to adjust transition and latch curves for actions.
Files needed:
Tutorial movies:
import03.mov
man_m.cdl
actions.actb (from previous lesson)
OVERVIEW
In order to ensure that actions transition smoothly from one to another, transition and latch curves need to be
adjusted properly for each action. The tutorial import03.mov picks up from the previous lesson on importing
actions and shows how to finish the final step of importing in adjusting transition and latch curves.
Concepts are covered in further detail below:
1. Transition Curves
2. Latch Curves
3. The Sequencer
CONCEPTS
> TRANSITION CURVES
The transition curve, pictured on the left in the above graph, is created automatically when you generate agent
curves. It controls the timing of this current action (walk) transitioning in from any previous action (such as
stand_to_walk).
When the transition curve is at 0, the action has not yet begun the transition. When it is between 0 and 1, the action
is blending on some level with the previous action. When the transition curve is at 1, the current action is completely
on.
Unlike the transition curve, the latch curve (pictured on the right above) has no gradual incline. The latch curve
simply provides an on/off value to tell Massive when it is all right to transition to another action. When the value of
the latch curve is 0, the action is free to transition to another action.
An action can have multiple latches, such as a walk action, which may transition to walk_to_standL when on the left
foot and may transition to walk_to_standR when on the right foot.
To add actions to the sequencer, choose any of the actions in your agent (left side) and click append to add them to the end of the list, insert to insert them above the selected action, or replace to replace the selected action.
The number on the right of each action represents which latch of the previous action it will cut in at. This is set to 1 by default and does not need to be changed for most actions. However, if the previous action has two or more latches (e.g., for left and right foot in a walk), you will need to specify which latch you want the current action to cut in at. Walk_to_standL may cut in at latch 1, for example, and walk_to_standR may cut in at latch 2.
Sometimes it can also be helpful to toggle View->playbacks in the menu. This allows you to see a display in the
View window that shows which actions are currently playing.
SUGGESTED EXERCISES
1. Set up transition and latch curves for all the actions and test them all with the sequencer.
OBJECTIVES
(1) Trigger actions from the brain.
(2) Set a default action or actions.
(3) Be able to read the active motion tree.
(4) Pass tree outputs to the brain.
Files needed:
Tutorial movies:
tree_brain01.mov
OVERVIEW
Actions in the tree can be triggered from the brain using the controls set up when setting up the tree.
The tutorial movie tree_brain01.mov will give you a brief overview of triggering tree-controlled actions from the
brain.
Below are concepts covered in this lesson:
1. Setting a Default Action
2. Triggering Actions in the Brain
3. Accessing Tree Outputs in the Brain
CONCEPTS
> SETTING A DEFAULT ACTION
The default action is the action that will play when no other action is being activated from the brain.
The easiest way to set the default action is to choose the action node of the action you want to set as default, then
click the default button next to the action's name.
This will set that action as the default, and its name can be seen when the whole tree is selected (click on nothing in
the motion page node area). The default action is listed under tree default action.
Default actions can also be set using agent variables, which allows for variation from agent to agent. See the
variation section for how to set agent variables. After these agent variables are created, they can be assigned to
individual actions to control the likelihood that action will be the default for that agent.
This happens in the agent default actions section. Select an action in the actions column and assign it a
corresponding variable in the variables column on the right. Whichever agent variable in the list has the highest
value, that variable's associated action will be the default for that agent - regardless of the tree default action.
The purpose of the tree is to control the flow of actions and give the agent a smaller, more efficient set of controls to
trigger actions. These controls are triggers, modifiers, and blends.
All of these, once created in the tree and associated with tree actions, can be accessed as output channels in the
brain.
A trigger output node, when on to any degree, will trigger the appropriate action associated with it. When more
than one trigger is active at the same time, the one with the highest value will trigger actions, and the others will be
ignored.
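That winner-takes-all behavior can be sketched as follows (illustrative pseudocode):

```python
# Sketch: when several triggers fire at once, only the strongest wins.

def winning_trigger(triggers):
    # triggers: {name: activation in 0..1}. Returns the name of the
    # highest-valued active trigger, or None if nothing is active.
    active = {name: v for name, v in triggers.items() if v > 0.0}
    return max(active, key=active.get) if active else None
```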
A modifier output node, when active, will allow the triggering of any action it's associated with, if that action's
trigger is on as well. For example, the modifier "high", in conjunction with the trigger "attack", will activate the action
"attack_high".
A blend output node will activate its associated blend action. The degree to which the output node is active (0 to 1)
affects the degree to which the blend is active. When 0, this node would have no effect, and at 1, this node would
result in the blend action completely taking over the base action. If the base action (example: walk) for the blend
(example: walk_45L) is not playing, the blend will have no effect.
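The blend weighting can be sketched as follows (poses reduced to a single joint angle for illustration; this is not Massive's internal representation):

```python
# Sketch: a blend output node at weight w mixing walk_45L on top of
# walk. The blend has no effect unless its base action is playing.

def blended_pose(base_playing, base_angle, blend_angle, w):
    if not base_playing:
        return base_angle
    w = max(0.0, min(1.0, w))
    return base_angle * (1 - w) + blend_angle * w
```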
SUGGESTED EXERCISES
1. Set up a brain such that if the agent is staying still, it will begin walking, and if it is walking, it will stop.
2. Copy the brain from the sound-avoiding agent soundbox.cdl included in this lesson directory and paste it into
the man_tree.cdl agent. (You can open two sessions of Massive at once and alt-c/alt-v to copy and paste
between the two agents.) Replace the basic transformations the box agent used to move around (tz, ry, tx)
with actions and offsets. RY can be left as is, but changed to [ry]:offset in order to be applied on top of the
agent's actions. Sidesteps should be replaced with the 45L/R actions, stop/go replaced with stand/walk. See
if you can get a group of man agents to walk around and avoid each other.
OBJECTIVES
(1) Learn how to turn on dynamics for part of the skeleton.
(2) Learn how to adjust rotation limits.
Files needed:
Tutorial movies:
dynamics01.mov
Dynamics1/man.cdl
OVERVIEW
In addition to running actions and other controlled motions from the brain, Massive agents can also react to
dynamic forces, allowing them to perform stunts and automatically animate dangling parts such as tails or chains.
The tutorial above will show you how to make a dynamic ponytail on a walking man agent.
Concepts are covered in further detail below:
1. Switching On Dynamics
2. Springs
3. Rotation Limits
CONCEPTS
> SWITCHING ON DYNAMICS
The agent output channel dynamics.active switches on dynamics for the whole agent. The segment channel
segment:dynamics.active switches on dynamics for that segment and all the segments below it in the hierarchy.
This will cause that segment to "fall off" the rest of the skeleton unless a spring is used.
A very important thing to note is that once dynamics.active has been switched on, dynamics cannot be switched off
again.
> SPRINGS
There are five types of springs that can be used in Massive. The most basic type is normal, which functions much
like you would expect a spring to behave.
This type of spring connects between two segments and exhibits a springlike force between them.
You can manually set the point at which the spring connects to each segment on the right side of the spring
settings. These are X, Y, and Z coordinates in segment space.
Forces and other settings for the spring can also be set in the spring node. Here, you can set the spring force,
damper force, and rest length of the spring. You can also determine whether objects can collide with the spring,
and what the collision radius will be.
Of the other types of springs, stretch and squash are identical to the normal spring except that they resist
stretching and squashing respectively. The pin spring tries to keep both ends together using the force provided.
The parent spring is used to keep a dynamic segment attached to the non-dynamic part of the agent. It only needs
to be attached to the top dynamic segment in the chain and will keep it parented to its parent segment in the
skeleton using the translate force specified. It will use the specified rotate forces (rx, ry, rz) to keep the segment
from rotating on its axes.
Rotation is not automatically restricted to the angle limits you specify, but Massive's dynamics engine will attempt to
keep it there using the rotation limits force. The magnitude of this force can be changed in the dynamics tab. In
this example, it is set to 200.
This force may have to fight against gravity, forces applied by connected segments, and other forces, so you will
have to choose a force strong enough to balance against the other elements acting on the segment, yet not so
strong as to throw the agent off balance and cause it to spin.
Sometimes it will be necessary to apply different rotation limit forces to different segments in the skeleton. In this
case, just turn off the "inherit" button next to the rotation limit force in a segment's dynamics tab, and manually
change the force for just that segment.
SUGGESTED EXERCISES
1. Set up one of the agent's arms as a dynamic arm that dangles at his side.
OBJECTIVES
(1) Create an agent that transitions from actions to dynamics.
(2) Learn how to adjust dynamic forces for stunts.
(3) Learn how to use smart stunts.
Files needed:
Tutorial movies:
dynamics02.mov
OVERVIEW
Dynamics in Massive can be used to make agents perform stunts, such that they can react to collisions and falls in
a physically and physiologically realistic way. Additionally, with smart stunts, agents can perform various actions
while dynamic forces are being applied to them.
Concepts are covered in further detail below:
1. Collisions
2. Dynamics Settings
3. Smart Stunts
CONCEPTS
> COLLISIONS
If dynamics are on, the agent automatically reacts to collisions according to the parameters specified in the
dynamics tab. Additionally, collision information can be registered in the brain with a number of channels, shown
below.
channel                            description                 type    range      example
collide                            depth of collision          input   inf        head:collide
collide.v                          velocity of collision       input   inf        head:collide.v
collide.x collide.y collide.z      position of collision       input   -inf inf   head:collide.x
collide.vx collide.vy collide.vz   collision velocity per axis input   -inf inf   head:collide.vx
collide.nx collide.ny collide.nz   normal of collision         input   -1 1       head:collide.nx
All these channels are segment channels only except for collide, which can be used as an agent channel that
returns the depth of collision for the first segment to collide with anything. These channels can be used whether
dynamics is on or off.
The most common use of collide with dynamics is to have an agent turn on dynamics upon the first instance of
colliding with anything, so that, for example, an agent can walk until something hits it, then let dynamics take over
for the fall. The simplest way to do this is connect a collide input node to a dynamics.active output node.
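The latch behaviour described above - dynamics switching on at the first collision and, as noted in the previous lesson, never switching off again - can be sketched outside Massive in a few lines of Python. This is an illustration only; the function name and the frame-by-frame channel lists are not part of Massive's API.

```python
def dynamics_active(collide_depths):
    """Latch dynamics on at the first collision and keep it on.

    collide_depths: per-frame values of the agent's collide channel
    (depth of the first colliding segment, 0 when nothing touches).
    Returns the per-frame value of dynamics.active.
    """
    active = False
    out = []
    for depth in collide_depths:
        if depth > 0:
            active = True  # once on, dynamics cannot be switched off again
        out.append(1.0 if active else 0.0)
    return out
```

Inside Massive, wiring a collide input node to a dynamics.active output node achieves the same effect.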
> DYNAMICS SETTINGS
Settings only available in the segment dynamics tab are mass and density. Adjusting one automatically adjusts
the other, according to the volume of the segment. Additionally, you can choose to turn collisions on or off for
individual segments.
The rest of the attributes are adjustable in both agent and segment dynamics tabs. Segments by default will inherit
these values from the agent values unless the inherit button is unclicked, in which case you can edit that value for
that individual segment.
Drag affects the air resistance of segments and is especially important to adjust when using wind. If you are
attempting to cause your agent to move in "slow motion" during dynamics, decreasing gravity usually works better
than increasing drag.
Under "collisions", force represents collision force. If it is too low, the segment will fail to collide and will pass
through things. If collision force is a little too high, the agent will recoil or "bounce" from the collision. If it is far too
high, the simulation will be unstable and give bad results. Damper reduces the bounciness of the impact.
Collision friction affects friction when colliding with other objects or agents. When low, the colliding segments will
slide. When high, they will resist sliding, and if certain segments, such as hands, are set with especially high
friction, they can mimic gripping a surface to some degree.
Rotation limit force, covered in the previous lesson, is the force used by Massive to keep the segment within the
rotation limits specified in the dof tab. Friction in this section applies to the friction between segments when this is
being done.
> SMART STUNTS
channel                                      description                                       type     range   example
servo.force                                  force holding segments to the action's pose       output   inf     head:servo.force
servo.force.x servo.force.y servo.force.z    per-axis servo force for a segment                output   inf     head:servo.force.x
servo.rx servo.ry servo.rz                   set pose angles used when no action is running    output
The servo-related channels control the amount of force used to keep the segments rotated to the positions the
running action says they should be in at that moment in time.
This is useful if you want, for example, an agent to shield his face as he is hit. If you had an action where he brings
his arms to cover his face, and run it as a smart stunt while he's being hit dynamically, he will shield his face as he's
hit and falling to the ground, in an action similar to, but not identical to, the original. It would be a combination of the
action and dynamic forces.
The principle works very similarly to rotation limits, except instead of using forces to keep the skeleton in a static
pose, the forces keep the skeleton's pose corresponding to an action. If there is no action running, servo forces can
be used to keep each segment in a set pose, and that set pose can be specified using the channels servo.rx,
servo.ry, and servo.rz for each segment. These channels can also be animated over time.
Servo.force as an agent channel can be used to control servo forces for all segments in the agent. For finer
control, you can use segment servo.force channels, and if even finer control is needed, the servo.force.x,
servo.force.y, and servo.force.z channels control the force used for each rotation axis of a segment.
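The spring-damper principle behind servo forces can be illustrated with a small Python sketch. This is not Massive's solver; servo_torque and settle are hypothetical names, and the force and damper values are arbitrary, chosen only to show a joint angle being pulled toward an action's target pose.

```python
def servo_torque(current, target, velocity, force, damper=0.0):
    # Spring-damper torque pulling a joint angle toward the pose an
    # action prescribes; "force" plays the role of a servo.force value.
    return force * (target - current) - damper * velocity

def settle(current, target, force, damper, steps, dt=1.0 / 24):
    # Semi-implicit Euler integration of a single rotation axis.
    velocity = 0.0
    for _ in range(steps):
        velocity += servo_torque(current, target, velocity, force, damper) * dt
        current += velocity * dt
    return current
```

Too low a force and the joint never reaches the pose; too high and, as with rotation limit forces, the simulation becomes unstable.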
SUGGESTED EXERCISES
1. Create a stuntman stable enough to sustain realistic hits from all directions. Try as many angles as possible.
2. If you have easy access to a "fetal position" animation (a simple two-keyframe curl-up can be created in
Maya), bring it in as an action and have the agent perform it as a smart stunt when falling or being hit.
OBJECTIVES
(1) Learn how to generate and vary wind.
(2) Learn the effects of wind.
Files needed:
Tutorial movies:
wind01.mov
OVERVIEW
Wind in Massive is a versatile feature that can affect dynamically driven segments or cloth, as well as be read as
values in the brain. The tutorial wind01.mov will show you how to set up a wind-generating agent and give you an
overview of how wind works.
Below are concepts covered in this lesson:
1. Generating Wind
2. Viewing Wind
3. Wind Input Channels
4. Wind and Dynamics
CONCEPTS
> GENERATING WIND
All the wind in the scene is controlled by a single agent, through output channels in its brain. If more than one
agent in the scene is setting wind values, the last one to set them controls the wind. To avoid confusion, it is
usually best to set up one agent dedicated to controlling wind.
The following channels can be used as output channels for an agent to control wind in the scene.
wind.x
wind.y
wind.z
wind.a
wind.f
Wind.x, wind.y, and wind.z affect the speed of the wind in the x, y, and z directions. Using only these will result in
a uniform wind that affects all objects in the same way.
For more realistic variation, wind noise is needed. Noise in wind is controlled by wind.a and wind.f, which
respectively control the amplitude and frequency of the noise that controls wind variation.
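As an illustration only, a base wind vector plus noise might combine as in the Python sketch below. A sine wave stands in for Massive's actual noise function, and wind_at is a hypothetical helper, not part of Massive.

```python
import math

def wind_at(t, base=(5.0, 0.0, 0.0), amplitude=2.0, frequency=0.5):
    """Base wind (wind.x/y/z) perturbed by periodic noise.

    amplitude and frequency play the roles of wind.a and wind.f;
    a sine stands in for Massive's actual noise function.
    """
    noise = amplitude * math.sin(2.0 * math.pi * frequency * t)
    return tuple(c + noise for c in base)
```

With amplitude at zero the result is the uniform wind described above; raising wind.a and wind.f adds variation over time.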
> VIEWING WIND
Wind can be visualized in the View window as a number of blue arrows pointing in the direction of the wind. To view
this, select View->Wind from the menu.
To change the scale of the wind display, select Options->Wind display scale and change the number accordingly.
SUGGESTED EXERCISES
1. Open the file Wind/man_ponytail.cdl and try to get the man's ponytail to flap realistically in wind.
2. Open the file Wind/cloth.cdl and experiment with different wind settings to blow the cloth around the scene.
OBJECTIVES
(1) Learn how to use the geo node to add geometry to agents.
(2) Learn how to bind geometry using smooth or rigid binds.
(3) Learn how to use the option node for variation.
Files needed:
Tutorial movies:
attach01.mov
OVERVIEW
Geometry is very simple to attach in Massive. All agent geometry (except for dynamic cloth) is brought in using a
geo node and binds rigidly or smoothly to the skeleton depending on how the node is connected to the skeleton.
The tutorial video attach01.mov will take you through attaching geometry and setting up an option node to vary
geometry between agents.
Below are concepts covered in this lesson:
1. Rigid and Smooth Binding
2. Adjusting Transforms
3. The Option Node
CONCEPTS
> RIGID AND SMOOTH BINDING
Agent geometry is stored in a geo node - one .obj file per geo node. If this node has no connections to any of the
skeleton segments, Massive will automatically treat it as a smooth bind, binding it to all segments whose area of
influence it falls in.
Below is a geo node with no connections, which is therefore smooth bound to all segments whose areas of
influence it falls in:
(Note: the pants geometry and the skeleton are being shown at the same time in this picture for demonstration
purposes. Massive defaults to showing the skeleton only, and the hotkey alt-m will toggle between that and
showing the geometry only. If you want to see both, manually make sure both are checked in the View menu.)
If you connect a geo node to a single skeleton segment, the result is a rigid bind to that segment. Below is a
picture of a rigid bind between the pants and the root.
While bad for pants, this obviously is good for hats and weapons and the like.
Finally, connecting a geo node to several segments results in a smooth bind to those segments only, as in the
example below.
> ADJUSTING TRANSFORMS
If you modeled the geometry to fit on your agent's skeleton (a hat modeled 1.6 meters from the ground in
world space), it will seem to be in the wrong place when attached rigidly to a segment (1.6 meters above the head).
Simply click the "world space" button, and Massive will put it back where it was in world space.
> THE OPTION NODE
Connecting the geo nodes to the option node automatically lists them as inputs in the option node window. Each
input is assigned an integer, starting with 0, so in this case, shirt is 0, and sweatshirt is 1.
You can also assign a variable to the option node. The variable name here is "shirt", which may be confusing, but it
is not related to "shirt" the name of the option node or "shirt" the name of the geo node. It is an agent variable
created in the variables tab of the agent, and could have any name. Agent variables are discussed further in the
variables lesson.
When the variable "shirt" is set up properly, it will have a random value between 0 and 1, different for every instance
of this agent that you place. (For more information on placement, see the placement lesson.)
On the left side of the option node settings you will see this:
The option slider displays what the value of the option is for the particular instanced agent you've selected. This will
be the value of that agent's "shirt" variable in this case. Since we have two inputs, shirt (value 0), and sweatshirt
(value 1), when this option value is closer to 0, the agent will wear a shirt, and when the value is closer to 1, the
agent will wear a sweatshirt. There is no gradual blending in this case.
To check that all of your geometry options are working, you can click the manual button and adjust the slider
yourself.
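The nearest-index selection behaviour described above can be sketched in Python. select_option is a hypothetical name for illustration, not part of Massive.

```python
def select_option(value, inputs):
    """Pick the geo input whose index is nearest the option value.

    inputs are listed in connection order and numbered from 0;
    there is no gradual blending, just nearest-index selection.
    """
    index = min(max(int(round(value)), 0), len(inputs) - 1)
    return inputs[index]
```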
SUGGESTED EXERCISES
OBJECTIVES
(1) Learn how to use Bones to adjust agent skinning.
(2) Learn how to save and import weights.
Files needed:
Tutorial movies:
bones01.mov
OVERVIEW
Massive uses the Bones system to skin geometry. Bones uses a volume-of-influence system and is very simple to
use. It takes very little time to adjust and generally gives very good results. See the tutorial movie bones01.mov to
learn how to adjust skinning with Bones.
Alternatively, weights can be imported from an outside program such as Maya, if written into Massive's .w weight
format.
Below are concepts covered in this lesson:
1. Using Bones
2. Saving And Importing Weights
CONCEPTS
> USING BONES
As mentioned above, Bones is very easy to use. The first step is to initialize Bones on a skeleton segment. Before
you do this, the segment has no skinning influence at all. Be sure to double-check this first if geometry or cloth does
not seem to be working on your agent.
To initialize, just click the "initialize" button on the Bones tab.
After initializing, a number of sliders appear in the Bones tab, and if you open the Bones window, you can see the
graphical representation of the volume of influence.
The volume of influence can be adjusted with the sliders in the Bones tab. Sx, sy, and sz adjust the size of the
volume of influence, and the blue slider adjusts the falloff.
A bone's influence on geometry can be seen by selecting the geometry and the bone or bones at the same time,
then viewing the result in the Bones window.
Symmetry is on by default, meaning any changes you make to r_shoulder will effect the same changes on
l_shoulder. This can be switched off through the button in the bottom right corner of the Bones window or through
Edit->symmetry in the menu.
> SAVING AND IMPORTING WEIGHTS
Look over a saved .w weights file in a text editor, and create a script in Maya or whatever program you are using to
write out weights in the same format. Massive will be able to read it in, and this will override Bones skinning.
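A minimal export helper might look like the sketch below - but note that the line layout shown here (vertex index followed by segment/weight pairs) is purely hypothetical. Copy the actual layout from a .w file that Massive wrote out.

```python
def format_weights(weights):
    """Format per-vertex skin weights in a *hypothetical* text layout.

    weights: list, one dict per vertex, mapping segment name -> weight.
    The real .w layout must be copied from a file Massive wrote out;
    this function only illustrates the shape of such an exporter.
    """
    lines = []
    for i, per_vertex in enumerate(weights):
        pairs = " ".join(f"{seg} {w:.4f}" for seg, w in sorted(per_vertex.items()))
        lines.append(f"{i} {pairs}")
    return lines
```

A Maya-side script would gather the skinCluster weights per vertex and write them through a formatter like this.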
SUGGESTED EXERCISES
OBJECTIVES
(1) Learn how to create cloth in Massive.
(2) Learn how to import geometry into Massive as cloth.
(3) Become familiarized with cloth parameters.
Files needed:
Tutorial movies:
cloth01.mov
cloth02.mov
OVERVIEW
Cloth in Massive is relatively simple to use and calculated very efficiently. Cloth can be created in Massive, as
shown in the tutorial cloth01.mov, or brought in as an .obj, as shown in the tutorial cloth02.mov.
Below are concepts covered in this lesson:
1. Creating Cloth
2. Importing Cloth
3. Adjusting Cloth Transforms
4. Adjusting Cloth Dynamics
5. Attaching Cloth to Agents
CONCEPTS
> CREATING CLOTH
A rectangular piece of cloth is created when you create a cloth node. The type is automatically set to "grid",
meaning it's a rectangular grid of cloth created in Massive.
The cloth is made out of triangles, and the resolution of the cloth in the x and y dimensions can be edited with the
sliders, as can the size of the cloth.
> IMPORTING CLOTH
Quite often it will be necessary to model a piece of clothing or other cloth and bring it into Massive. In this case, just
import the .obj in question by clicking on the file folder button next to the file text box.
The file button will now be automatically selected, instead of the grid button. The X/Y resolution and X/Y size
sliders do not apply to an imported piece of cloth.
Under "stretch resistance", force is the force that resists stretching of the cloth. The higher it is, the less the cloth
will stretch.
Under "collision", force represents the collision force. Like segment collision forces, too low a force will result in the
cloth passing through the object it should collide with, and too high a force will cause wild behavior. Thickness
refers to the simulated thickness of the cloth - that is, how close cloth can get to other objects before experiencing a
"collision". The options terrain, skeleton, and geometry turn on and off collisions with those three types of objects.
You can also set the number of steps per second for calculation of the cloth simulation. Cloth generally requires far
fewer steps than segment dynamics, and 100 is often fine.
The last setting is drag, which refers to air resistance when moving through the air, as well as affecting how much
effect wind has on the cloth.
> ATTACHING CLOTH TO AGENTS
Cloth can be set to collide with segments, but often that's not enough to get a perfect fit for clothing. Some parts of
a piece of clothing, such as the shoulders and torso area of the robe, should be pretty tight on the character and
can just be smooth bound instead of being simulated.
When a cloth node is connected directly to a segment, Massive will smooth bind the part of the cloth that falls in
that segment's volume of influence. Cloth vertices outside of that volume will continue to be calculated as a cloth
simulation.
In the above picture, the robe is smooth bound to the segments r_shoulder and l_shoulder, but outside of their
volume of influence, it will be dynamically simulated cloth.
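The partition described above - vertices inside a connected segment's volume of influence are smooth bound, the rest stay simulated - can be sketched in Python. Spherical volumes and the function name are assumptions for illustration only.

```python
def partition_cloth(vertices, volumes):
    """Split cloth vertices into smooth-bound and simulated sets.

    volumes: list of (centre, radius) spheres standing in for the
    connected segments' volumes of influence (an assumption here).
    Returns (bound_indices, simulated_indices).
    """
    bound, simulated = [], []
    for i, (x, y, z) in enumerate(vertices):
        inside = any((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r
                     for (cx, cy, cz), r in volumes)
        (bound if inside else simulated).append(i)
    return bound, simulated
```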
To see which specific vertices are smooth bound and which are not, select the cloth and segments in question and
view them in the Bones window in unshaded mode (alt-s). You will be able to see the segments' volumes of
influence, and the bound vertices will appear yellow, as shown below.
SUGGESTED EXERCISES
1. Try binding the robe to different numbers of segments and see what gets the best results.
2. Create a flag that flies in the wind.
OBJECTIVES
(1) Learn how to create blend shapes in Massive.
(2) Learn how to control blend shapes in the brain.
Files needed:
Tutorial movies:
blend01.mov
Blendshapes/blend_eye.cdl
Blendshapes/eye1.obj
Blendshapes/eye2.obj
Blendshapes/eyemap.tif
Blendshapes/face1.obj
Blendshapes/face1_O.obj
Blendshapes/face1_smile.obj
OVERVIEW
Like many animation programs, Massive allows you to create blend shapes, useful for facial expressions and other
similar applications. The tutorial above will show you how to make some simple blend shapes.
Below are concepts covered in this lesson:
1. Creating Blend Shapes
2. Activating Blend Shapes
CONCEPTS
> CREATING BLEND SHAPES
To create blend shapes, first load a base shape into a geo node. Go to the blend shape tab in the geo node and
click "add". This adds a new blend shape, and options for that blend shape appear to the right.
First, the blend shape needs a name that will be used to refer to it in Massive. Next, under file, import an .obj mesh
to be the blend shape target. This mesh must have the same number of vertices in the same vertex order as the
base mesh.
If the blend is a linear blend, vertices will travel a linear path from one blend target to the other. If the blend is
radial - important for blends such as eyelids - the vertices travel in an arc around the centre specified in xyz
coordinates under centre.
> ACTIVATING BLEND SHAPES
Once the blend is created, its effects can be viewed by moving the active slider from 0 (inactive) to 1 (100% active).
Multiple blend shapes can be added to one geo node.
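The difference between linear and radial blends can be illustrated with a 2D Python sketch. These helpers are hypothetical, not Massive code; a real eyelid blend works the same way in 3D around the centre you specify.

```python
import math

def linear_blend(p0, p1, t):
    # Vertex travels a straight line from base shape to target shape.
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def radial_blend(p0, p1, centre, t):
    # Vertex travels an arc around centre (2D sketch) - the kind of
    # path an eyelid vertex needs so it doesn't cut through the eyeball.
    a0 = math.atan2(p0[1] - centre[1], p0[0] - centre[0])
    a1 = math.atan2(p1[1] - centre[1], p1[0] - centre[0])
    r0 = math.hypot(p0[0] - centre[0], p0[1] - centre[1])
    r1 = math.hypot(p1[0] - centre[0], p1[1] - centre[1])
    angle = a0 + t * (a1 - a0)
    radius = r0 + t * (r1 - r0)
    return (centre[0] + radius * math.cos(angle),
            centre[1] + radius * math.sin(angle))
```

At t = 0.5 the linear vertex sits at the midpoint of the chord, while the radial vertex stays at full radius on the arc.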
SUGGESTED EXERCISES
1. Try creating simple blends using some of your own meshes that you may have used for blend shapes in
another application.
OBJECTIVES
(1) Learn how to assign materials to geometry.
(2) Learn how to adjust materials settings.
(3) Learn how to assign shaders to materials.
Files needed:
Tutorial movies:
materials01.mov
OVERVIEW
Material nodes are the link between geometry and shaders in Massive and are key to rendering an agent properly.
The tutorial file materials01.mov will show you how to assign a material node to a piece of geometry and adjust
some of its parameters.
Below are concepts covered in this lesson:
1. Attaching Material Nodes
2. Material Node Options
3. Assigning Shaders
CONCEPTS
> ATTACHING MATERIAL NODES
To assign a material to a piece of geometry, create a material node and connect it to the geometry node you want
to assign it to.
> MATERIAL NODE OPTIONS
Unlike the other two colour tabs, the diffuse tab has the option to load a texture map, which will be viewable in OpenGL.
> ASSIGNING SHADERS
When you click any of the four shader options, it will open a shader dialog box that allows you to select any of the
shaders in your shader path directory and adjust their parameters. The shader selection process is shown below.
If the shader list is blank and gives you no shaders to choose from, make sure you have the correct renderer
selected under Options->Render, and make sure your shader path is set correctly.
All of the shader's adjustable parameters are listed. In addition to setting a value manually, you can also
assign an agent variable to each parameter by using the variable box.
For a parameter that is a filename, such as texture map name, a variable can be incorporated into the name by
using single quotes.
In the case above, "shirt_map" is an agent variable with a value from 1 to 4. For any instance of the agent, the
shader will use one of the four texture map files "short_sleeve_shirt1B.tif" through "short_sleeve_shirt4B.tif",
depending on the value of the "shirt_map" variable for that particular instance of the agent. The variable's value
won't necessarily be an integer, but Massive will round to the nearest integer.
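The quoting convention can be sketched as a small resolver. resolve_map_name is a hypothetical helper for illustration; Massive performs this substitution internally when it evaluates the parameter.

```python
import re

def resolve_map_name(pattern, variables):
    # Replace each 'var' section of a filename with the rounded value
    # of the agent variable of that name, as described above.
    def sub(match):
        return str(int(round(variables[match.group(1)])))
    return re.sub(r"'([^']+)'", sub, pattern)
```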
SUGGESTED EXERCISES
OBJECTIVES
(1) Become familiar with cameras in Massive.
(2) Learn how to import a camera from Maya.
Files needed:
Tutorial movies:
camera01.mov
OVERVIEW
Rendering in Massive is done through its cameras, which are very similar to cameras in other 3D rendering
programs. Cameras can be constrained to agents or imported in Maya ASCII format.
Below is a list of topics covered:
1. The Massive Camera
2. Importing Maya Cameras
CONCEPTS
> THE MASSIVE CAMERA
Massive opens with 4 default cameras created, named camera1, camera2, camera3, and camera4. The camera
nodes can be viewed in the scene page. Whichever node is currently selected will be the camera you look through
in the view window.
To see other cameras in the scene, choose View -> cameras from the menu.
Clicking the camera node gives you access to a number of tabs with more camera options.
The camera transform tab allows you to rotate and translate the camera using the sliders. Cameras can also be
moved by navigating around in the view window while the desired camera is selected.
The constrain tab allows you to constrain the camera to an agent in a number of ways. Pressing alt-f while an
agent is selected will toggle the camera constraint on that agent, defaulting to the follow 3D constraint type. If you
want to constrain the camera to a new agent, select the new agent and toggle alt-f off and on again.
The constraint types are:
off
look at - the camera does not translate, but rotates in place to keep the target agent in the center of view
agent
segment
pov
follow XZ
follow 3D
The smooth slider affects the smoothness of camera movement as it follows the agent. A low smoothness will
keep the camera fixed on the agent precisely, moving jerkily if the agent moves jerkily, whereas a high smoothness
will result in smoother motion of the camera such that it may take more time to "catch up" if the agent puts on a
sudden burst of speed.
The projection tab allows you to adjust fov (field of view) and filmback of the camera. The 35 mm button sets the
filmback to the correct settings for 35 mm full aperture.
Pixel aspect ratio can also be set, and this will apply to both the view in the view window as well as the final render.
Zmin and zmax are the minimum and maximum depth the camera can see, and can also be thought of as near
and far clipping planes.
The animation tab only applies to animated cameras. The file field contains the path of the Massive camera file
(.cam) containing the data for this camera.
The slider allows you to scrub through the animation of the camera, but it does not advance the sim. It gives you an
idea of where the camera will be at any point in its animation.
The frame of a camera animation wouldn't necessarily be related to the simulation time anyway, as camera
animation can be offset by either a positive or negative number of frames within the sim.
The background tab allows you to import an image to use as a background in the camera.
A series of tif files can be loaded for use as an animated background as well. The following conventions are used
for frame numbering:
@ = frame number written with no padding in the filename
# (x the number of digits) = frame number padded to the number of digits specified
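Combining this with the Sim-dialog convention described in these lessons (a single # pads to 4 digits; two or more # pad to that many digits), the expansion can be sketched as follows. expand_frame is a hypothetical helper, not part of Massive.

```python
import re

def expand_frame(pattern, frame):
    # @ -> frame number with no padding; a run of # -> zero-padded frame
    # number (a single # pads to 4 digits, two or more # to that many).
    def pad(match):
        width = 4 if len(match.group(0)) == 1 else len(match.group(0))
        return str(frame).zfill(width)
    return re.sub(r"#+", pad, pattern.replace("@", str(frame)))
```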
> IMPORTING MAYA CAMERAS
Ensure that "camera" is selected. Make sure the camera file output directory (Cam/ by default) exists, as this is
where Massive will write out a .cam file that stores all the data for this camera. With all that taken care of, find the
.ma file you wish to import and import it. The new camera will appear as a new node in the scene page.
SUGGESTED EXERCISES
1. Bring the terrain file (terrain1.obj) into Maya and animate a camera through it. Export the animated camera
as a Maya ascii file and bring it back into the Massive scene.
OBJECTIVES
(1) Learn how to write out and read in files with the Sim Dialog.
(2) Learn how to run a sim from the command line.
(3) Learn how to replay selected agents for two-pass sims.
Files needed:
Tutorial movies:
sim01.mov
sim02.mov
OVERVIEW
The Sim dialog allows you to run a Massive simulation with a variety of possible input sources and a variety of
possible output formats. Some of these files are necessary for rendering. Other uses of the Sim dialog include
multiple-pass simulations and writing out OpenGL tifs for preview purposes.
Below is a list of topics covered:
1. The Sim Dialog Window
2. Running Sims from the Command Line
3. Multiple Passes
CONCEPTS
> THE SIM DIALOG WINDOW
The Sim dialog can be accessed from the menu by selecting Run->Sim. There are a number of options for both
input and output types.
In this dialog, you will need to specify a start and end frame, as well as selecting all the input and output types you
wish to read and write by clicking the appropriate buttons on the left hand side.
The brains button turns on or off brain processing by the agents during the Sim.
The real time button slows the Sim to real time if it is running too fast.
In the text fields, be sure that the proper paths or patterns for each selected type are specified. Directories can have
any name you like, but they need to already exist. Massive does not create them.
The Sim files can be written out as .apf or .amc files (selected in the option box to the right), and will be
automatically named. These files record all motion in the simulation, for the selected frame range.
Cloth files have the extension .mgeo, a format specific to Massive. These store information generated by the
simulation of cloth in Massive. Only a directory needs to be specified.
Camera files have the extension .cam, a format specific to Massive. Cam files contain all the information about a
Massive camera and are important for animated cameras. The name of the camera file should be specified in the
text field.
Particle files can be written out in a format recognized by Maya as a particle cache. These are useful for previz
purposes, to give a general idea of where your agents are in the scene if you are animating a camera in Maya that
you want to bring into Massive.
A call sheet is a text file containing detailed information about each instanced agent, listing one agent per line.
Rib files, which store specific rendering information to pass to the renderer, require you to enter a name, such as
"a" (by default), or "men". The # symbol represents 4-digit padding for frame numbers. Two or more # symbols
specify that number of digits for padding. The @ symbol specifies no padding.
Pic files are tifs of the OpenGL display. Like the rib files, the path and format needs to be specified in the text field.
Pics are useful for writing out an image sequence to use as an OpenGL preview if your scene is intense enough
that it will not run in real time when played in Massive.
> RUNNING SIMS FROM THE COMMAND LINE
frame range for sim I/O; also causes automatic execution of the sim
-iapf path
-iapfz path
-icloth file
-isim path (see -iamc)
-iamc path
-iamcz path
-oapf path
-oapfz path
-ocall file
-ocloth file
-orib path
-osim path (see -oamc)
-oamc path
-oamcz path
-opics pattern
> MULTIPLE PASSES
Sims allow you to tackle a scene in multiple passes, in a number of different combinations.
One possibility is to run the agents in a sim with skeletons only, concentrating on their motion (writing out .amc
files), then simulating cloth on a second pass (.amc input, cloth output), and then writing out rib files for rendering
on a third pass (.rib output).
Another possibility is to run different groups of agents in different passes, which can be useful in some
circumstances, such as very large scenes. In this case, you would set up all the agents that will go in the first pass,
then run a sim that outputs their motion into .amc files. Then you would place more agents for the second pass.
These agents will get their motion from brain processing, but the first set of agents should replay their motion from
the previous pass by reading and playing the saved .amc files.
In order to do this, select all the agents you wish to have replay, and select "replay" under "process" in the body
page, turning off "brain".
The easiest way to select a large number of agents at once is to lasso select them. In order to do this, hold down
the shift key while dragging in the view window with the left mouse button down.
If you wanted to save at this point, saving the setup file (.mas) would save data regarding which agents are set to
replay and which are set to brain. The next step would be to run the sim again, writing out .amcs which now include
agents from both the first and second passes.
These .amc files can be used to run a third pass with even more agents, and so on for as many passes as you
need.
Agents on replay will not avoid collisions or react to anything, as they are strictly replaying exactly what was
recorded. Agents with active brains, however, can see and hear and react to agents on replay. That is why it is a
good idea to make the first pass out of agents whose actions take priority, such as a parade marching in formation,
and the second pass out of lower priority agents, such as children darting in and out between people as they cross
through the parade.
SUGGESTED EXERCISES
1. Try doing two passes by writing out Sim files in .amc format, then reading them in and outputting a tif
sequence for preview. Look over the finished tif sequence with fcheck or a similar viewer.
2. Write out Sim files for the group of agents in plod.mas, then place a new set of agents near them and try to
run them as a second pass. Try for a third pass as well.
OBJECTIVES
(1) Learn how to set render options in Massive.
(2) Learn how to preview a render in Massive.
(3) Learn how to write out rib and sim files for batch rendering.
Files needed:
Tutorial movies:
render01.mov
OVERVIEW
Massive allows for rendering through three RenderMan compatible renderers. The tutorial render01.mov will give
you a quick overview of how rendering works in Massive.
Below are some of the concepts covered in more detail:
1. Render Options
2. Render Preview
3. Batch Rendering
CONCEPTS
> RENDER OPTIONS
The Render Options dialog is available through the menu under Options->Render.
Massive supports three Renderman-based renderers: PRMan, Air, and 3Delight.
Output pics allows you to specify naming options for the rendered tifs. The # symbol represents 4-digit padding for
frame numbers. Two or more # symbols specify that number of digits for padding. The @ symbol specifies no
padding.
The shader path lists the default directory for shaders. This path being set incorrectly is one of the main reasons
for render problems.
For more details about the rest of the options, see the Massive manual entry on Rendering.
The start and end frames must be specified here, and both the Sim and Rib buttons should be selected under
OUTPUT.
Once the Sim is run, Sim files will be written to the specified Sim directory and Rib files to the Rib directory.
There are several different types of Rib files, as you may notice. If [name] is the Rib filename you specified in
the Sim dialog, such as [name].#.rib, the Rib directory should now contain a series of files called
[name].[framenumber].rib, one for each frame, as well as a matching set of files called
[name]._archive[framenumber].rib, again one for each frame.
If you have shadow-mapped shadows, there will also be a set of files for the shadow-casting light (in this case,
"key"). These files will be named [name]._key[framenumber].rib.
Lastly, if you have terrain in your scene, there will be a .rib file for the terrain called [name]._terrain.rib.
The main rib files, [name].[framenumber].rib, contain references to all the others. The archive rib files each
contain one line per instanced agent, calling massive.so, which looks at the CDL file for basic agent information
needed for rendering. The CDL files will also contain links to the geometry that is to be rendered.
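To make the file layout concrete, the following sketch enumerates the Rib files described above for a hypothetical two-frame scene named "crowd" with one shadow-casting light named "key" and terrain. The scene name, frame range, and 4-digit padding are assumptions for illustration:

```python
name = "crowd"     # hypothetical rib base name
light = "key"      # the shadow-casting light from the example above
files = []
for frame in (1, 2):
    f = f"{frame:04d}"                       # 4-digit padding assumed
    files.append(f"{name}.{f}.rib")          # main rib; references all the others
    files.append(f"{name}._archive{f}.rib")  # one line per instanced agent
    files.append(f"{name}._{light}{f}.rib")  # shadow map rib for the light
files.append(f"{name}._terrain.rib")         # single rib for the terrain
print("\n".join(files))
```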
In brief, rendering in Massive requires access to all the Sim files, all the Rib files, all the CDL files, all the geometry
files, and all the texture maps used in the scene. It is important to keep all of these paths in mind when moving a
ready-to-render scene to another location or rendering from a remote location.
Once all this is set up, it is time to actually render. You will notice that a file called render_script.sh was generated
when you wrote out ribs from the Sim dialog. This script, when run, will automatically batch render all the frames in
your frame range. This script can also be easily modified to fit your specific needs.
SUGGESTED EXERCISES
1. Render this scene in your own renderer. Reassign the shaders if necessary and ensure they use the
provided texture maps.
OBJECTIVES
(1) Learn how to create and adjust agent variables.
(2) Learn how to apply agent variables to create agent variation.
Files needed:
Tutorial movies:
variation01.mov
OVERVIEW
Agent variables and variation allow Massive to vary almost any agent characteristic randomly over a group of
agents, enabling the user to create a diverse crowd from only one original agent. The tutorial variation01.mov will
show you how to get started using agent variables.
Below is a list of topics covered:
1. Creating Agent Variables
2. Skeleton Variation
3. Geometry Variation
4. Material and Shader Variation
5. Workflow
CONCEPTS
> CREATING AGENT VARIABLES
Agent variables can be created and edited in the variables tab in the body page, when the entire agent is selected
(no segments selected).
Variables require a name and either a default value with a min-max range, or an expression. In the tutorial, we
created a height variable with a range of 0.8-1.2 and a default of 1. Upon placement (instancing of the agents),
Massive automatically generates a random value within that range for each agent's "height" variable. We also
created a head_height variable with the expression "1/height"; in that case, the range is ignored and the value
is taken directly from the expression.
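The placement-time behaviour described above can be sketched as follows. The variable names and ranges come from the tutorial, but the randomisation scheme here is an illustrative assumption, not Massive's actual implementation:

```python
import random

def place_agent(seed=None):
    """Sketch of variable assignment at placement: ranged variables get
    a random value in their range; expression variables ignore the range
    and are computed from other variables."""
    rng = random.Random(seed)
    height = rng.uniform(0.8, 1.2)   # ranged variable, default 1
    head_height = 1 / height         # expression variable: "1/height"
    return {"height": height, "head_height": head_height}

agent = place_agent(seed=1)
print(agent)
```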
All the agent's variables are listed in the box on the left side of the tab, along with their current values.
The agent vary tab allows you to assign agent variables to the agent's scale as well as x, y, and z translations.
There is also a special characteristic called thick which only applies to spheres and tubes, affecting their radius.
The segment vary tab only allows you to assign variables to the scale and thick attributes.
Each piece of geometry connected to the option node is displayed on the right hand side. The first one in the list is
assigned a value of 0, the second is 1, and so on. In this example, the type of shirt that appears on the agent will
depend on whether the assigned variable "shirt" is closer to 0 or 1. Since the range of "shirt" was set to 0-1, we can
expect a fairly even distribution of sweatshirts and shirts.
Massive will always pick only one piece of geometry to assign. There is no blending in the option node.
Additionally, variables can be incorporated into the texture map text field by enclosing them in single quotes. The
full name in the texture map field above is "short_sleeve_shirt'shirt_map'.tif". If the variable shirt_map has a range
of 1-4, Massive will choose randomly between the four files "short_sleeve_shirt1.tif" through
"short_sleeve_shirt4.tif". Again, the nearest integer will be chosen.
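The nearest-integer rule for both the option node and texture map names can be sketched like this. The geometry and filename examples are taken from the tutorial; the helper functions themselves are hypothetical illustrations:

```python
def pick_option(value: float, options: list[str]) -> str:
    """Option node sketch: the geometry whose index is nearest to the
    variable's value is chosen; there is no blending."""
    index = round(value)
    return options[max(0, min(index, len(options) - 1))]

def texture_name(field: str, variables: dict) -> str:
    """Texture field sketch: a variable in single quotes is replaced by
    its value rounded to the nearest integer."""
    for name, value in variables.items():
        field = field.replace(f"'{name}'", str(round(value)))
    return field

print(pick_option(0.3, ["sweatshirt", "shirt"]))   # index 0: sweatshirt
print(texture_name("short_sleeve_shirt'shirt_map'.tif", {"shirt_map": 2.6}))
```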
Variables can also be assigned to parameters in the Renderman shaders. The shader dialog, brought up by
clicking a shader button in the Renderman tab, allows you to assign an agent variable to any parameter in the
shader.
Here, the variable "shirt_value" is assigned to the Kd parameter of this shader.
Texture maps and other referenced files in the shader can also have agent variables incorporated into their
filenames, just as for the OpenGL texture map field mentioned above.
> WORKFLOW
When working with agent variation, it is best to keep the original, uninstanced agent in its own CDL file, placing it
only after you have finished tinkering with the variables and have saved the agent.
The original agent contains all the instructions and parameters for the variations that occur upon placement;
editing variation on an already-instanced agent after placement is therefore ineffective.
SUGGESTED EXERCISES
1. Assign a material to the sweatshirt and arrange it so that Massive chooses randomly between the two
available sweatshirt maps.
2. Experiment with scaling different segments so that some agents are fatter than others around the waist.
OBJECTIVES
(1) Learn how to create brain variables with output nodes.
(2) Become familiar with predefined variables.
(3) Learn how to use variables and expressions in input nodes.
Files needed:
Tutorial movies:
variables01.mov
OVERVIEW
Massive allows you to create your own variables within the brain, which can be used in expressions and applied in
a number of ways. The tutorial variables01.mov shows one possible application, assigning energy levels to each
agent that decrease over time. Additionally, there are a few predefined variables in Massive that you may find
useful.
Below is a list of topics covered:
1. Creating Brain Variables
2. Predefined Variables
3. Using Variables In The Brain
CONCEPTS
> CREATING BRAIN VARIABLES
Brain variables are created the moment you type a name into the channel field of an output node which is not a
word recognized by Massive as a channel name or predefined variable.
That word will become the name of a new variable stored in Massive. Below is an example of a variable called
"effort" created in an output node:
In this case, the speed option is selected, so that the value of this node is actually the rate of change in the value of
the "effort" variable, per second. The actual value of the "effort" variable is stored internally within Massive and can
be observed if an input node is created with "effort" typed in the source field and its type set to pos. That input
node would then show the actual value of the variable "effort" at any given time.
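The speed option's per-second integration can be sketched as below. The frame rate and the drain value are assumptions for illustration; Massive's internal bookkeeping is not exposed:

```python
# Sketch: with the speed option on, the output node's value is the rate
# of change of the variable per second, accumulated each frame.
fps = 24                 # assumed frame rate
effort = 0.0             # the brain variable's stored value
rate = -0.5              # node outputs -0.5: effort drains half a unit per second
for frame in range(fps): # simulate one second of frames
    effort += rate / fps
print(round(effort, 6))  # after one second, effort has dropped by 0.5
```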
> PREDEFINED VARIABLES
Massive also provides a few predefined variables, including:
id: the id number of the agent.
input: the value coming into the node through an input connection.
keyboard: the ASCII value of a key that was pressed. The value is zero if no key has been pressed. A keypress
gives a value for one frame.
All types of variables can be referenced in an input node, by typing the variable name into the source field of the
input node, either by itself or as part of an expression. Below, the predefined variable "input" is used in an
expression.
This input node will take whatever value is coming into it through an input connection and multiply it by 10. The
subsequent value can be passed on to another input node, or fuzz nodes, or any other valid connection in Massive.
Other possibilities include:
effort*10
Assuming that the variable "effort" has already been created in an output node, this would take the current value of
that variable and multiply it by 10.
height/2
Agent variables can be accessed as well. If height has already been created as an agent variable, an input node
containing that expression as its source would return the height of the agent divided by 2.
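The expression examples above can be mimicked with a tiny evaluator. The expression syntax is taken from the tutorial; this evaluator and the sample variable values are illustrative assumptions only:

```python
# Hypothetical variable values an input node might see.
variables = {"input": 3.0, "effort": 2.5, "height": 1.1}

def evaluate(source: str) -> float:
    """Evaluate an input node's source expression against the current
    brain and agent variables (illustration only, not Massive's parser)."""
    return eval(source, {"__builtins__": {}}, variables)

print(evaluate("input*10"))   # incoming connection value times 10
print(evaluate("effort*10"))  # brain variable times 10
print(evaluate("height/2"))   # agent variable divided by 2
```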
For a list of all available expressions for use in Massive, see the expressions page in the manual.
SUGGESTED EXERCISES
1. Create agents that go at different speeds depending on an agent variable called speed. (See the variation
lesson if you're unsure of how to create agent variables.)
OBJECTIVES
(1) Learn how to spawn agents from another agent
Files needed:
Tutorial movies:
spawn01.mov
OVERVIEW
The spawn channel allows one agent to spawn other agents from a selected segment. This is useful for generating
projectiles as well as generating characters as needed.
Below is a list of topics covered:
1. Spawning Agents
CONCEPTS
> SPAWNING AGENTS
Spawning agents comes in handy for a variety of situations, most obviously for projectiles such as arrows and
bullets. The following is a suggested series of steps for setting up spawning:
1) Create an agent to be the spawner. The spawning segment should have the same name as the group of the
agent to be spawned. (This will most often be the same name as the agent to be spawned, as when agents are first
loaded into Massive, their group is usually given the same name as the agent. Double check in the scene page to
be sure.) In our example, the name of the spawning segment and spawned agent was "bullet". Save the spawner
(in our example, bond.cdl) as a CDL.
2) Create an agent to be spawned. Give it the agreed-upon name ("bullet"). If it is a projectile, you may want to
create a rudimentary output node for movement (for example, tx, ty, or tz with a positive value); you can give it a
more detailed brain later. Save it as a CDL.
3) Load both CDLs in Massive. Place the spawner agent. Don't place the spawnee, because you want it to
appear only when spawned. Ensure in the scene page that the group name for the bullet agent is indeed "bullet".
4) Create an output node in the spawner agent's brain with the channel bullet:spawn. Every time this channel is
triggered, it will spawn a bullet agent at the location and orientation of the segment named "bullet".
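The naming contract at the heart of the steps above (spawning segment name matches the spawned agent's group name) can be sketched with plain data structures. Everything here is illustrative; Massive exposes no such API:

```python
# Group name -> CDL of the agent to spawn (the "bullet" group from the
# tutorial; filename is a hypothetical example).
groups = {"bullet": "bullet.cdl"}

def spawn(segment_name: str, position):
    """Sketch: when the <group>:spawn channel fires, a new agent from
    the matching group appears at the segment's transform."""
    cdl = groups.get(segment_name)
    if cdl is None:
        raise KeyError(f"no group named {segment_name!r} to spawn from")
    return {"agent": cdl, "at": position}

print(spawn("bullet", (0.0, 1.5, 0.0)))
```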
SUGGESTED EXERCISES
1. Create a teleporter box (see lesson picture at top of page) that spawns bond agents at random intervals.