Basics • 3
Copyright and Disclaimer
4 • Softimage
Contents
Section 3
Moving in 3D Space . . . 61
Coordinate Systems . . . 62
Transformations . . . 64
Center Manipulation . . . 70
Freezing Transformations . . . 70
Resetting Transformations . . . 70
Setting Neutral Poses . . . 70
Transform Setup . . . 71
Transformations and Hierarchies . . . 71
Snapping . . . 72

Section 6
Curves . . . 103
About Curves . . . 104
Drawing Curves . . . 104
Manipulating Curve Components . . . 107
Modifying Curves . . . 110
Creating Curves from Other Objects . . . 110
Importing EPS Files . . . 111
Section 7
Polygon Mesh Modeling . . . 113
Overview of Polygon Mesh Modeling . . . 114
About Polygon Meshes . . . 114
Converting Curves to Polygon Meshes . . . 118
Drawing Polygons . . . 119
Subdividing . . . 120
Drawing Edges . . . 121
Extruding Components . . . 122
Removing Polygon Mesh Components . . . 123
Combining Polygon Meshes . . . 124
Symmetrizing Polygons . . . 125
Cleaning Up Meshes . . . 126
Reducing Polygons . . . 126
Polygon Normals . . . 127
Subdivision Surfaces . . . 128

Section 8
NURBS Surface Modeling . . . 131
About Surfaces . . . 132
Building Surfaces . . . 133
Modifying Surfaces . . . 134
Projecting and Trimming with Curves . . . 135
Surface Meshes . . . 136

Section 9
Animation . . . 139
Bringing It to Life . . . 140
Playing the Animation . . . 143
Previewing Animation . . . 145
Animating with Keys . . . 146
Animating Transformations . . . 151
Editing Keys and Function Curves . . . 154
Layering Animation . . . 159
Constraints . . . 160
Path Animation . . . 163
Linking Parameters . . . 164
Expressions . . . 166
Copying Animation . . . 168
Scaling and Offsetting Animation . . . 169
Plotting (Baking) Animation . . . 170
Removing Animation . . . 170

Section 10
Character Animation . . . 171
Character Animation in a Nutshell . . . 172
Setting Up Your Character . . . 175
Building Skeletons for Characters . . . 177
Enveloping . . . 181
Rigging a Character . . . 187
Animating Characters with FK and IK . . . 190
Walkin’ the Walk Cycle . . . 194
Motion Capture . . . 195
Making Faces with Face Robot . . . 198

Section 11
Shape Animation . . . 201
Things are Shaping Up . . . 202
Using Construction Modes for Shape Animation . . . 204
Creating and Animating Shapes in the Shape Manager . . . 205
Selecting Target Shapes to Create Shape Keys . . . 206
Storing and Applying Shape Keys . . . 207
Using the Animation Mixer for Shape Animation . . . 208
Mixing the Weights of Shape Keys . . . 209

Section 12
Actions and the Animation Mixer . . . 211
What Is Nonlinear Animation? . . . 212
The Animation Mixer . . . 213
Storing Animation in Action Sources . . . 214
Working with Clips in the Animation Mixer . . . 216
Mixing the Weights of Action Clips . . . 217
Modifying and Offsetting Action Clips . . . 218
Sharing Animation between Models . . . 220
Adding Audio to the Mix . . . 222

Section 13
Simulation . . . 223
Simulated Effects . . . 224
Making Things Move with Forces . . . 225
Hair and Fur . . . 227
Rigid Body Dynamics . . . 232
Cloth Dynamics . . . 237
Soft Body Dynamics . . . 239

Section 14
ICE: The Interactive Creative Environment . . . 241
What is ICE? . . . 242
The ICE Tree View . . . 244
ICE Simulations . . . 247
Forces and ICE Simulations . . . 250
ICE Deformations . . . 252
Building ICE Trees . . . 255
ICE Compounds . . . 267

Section 15
ICE Particles . . . 271
Making ICE Particle Effects . . . 272
Particles that Bounce, Splash, Stick, Slide, and Flow . . . 277
Particle Goals . . . 279
Spawning New Particles . . . 281
Particle Strands . . . 283
Particle Instances . . . 285
ICE Particle States . . . 287
ICE Rigid Bodies . . . 289
ICE Particle Shaders . . . 292

Section 16
Shaders . . . 295
The Shader Library . . . 296
About Surface Shaders . . . 300
Basic Surface Color Attributes . . . 302
Reflectivity, Transparency, and Refraction . . . 303
Applying Shaders to Scene Elements . . . 306
The Render Tree . . . 307
Building Shader Networks . . . 310
Creating Shader Compounds . . . 312

Section 17
Materials . . . 315
About Materials . . . 316
The Material Manager . . . 317
Creating and Assigning Materials . . . 319
Material Libraries . . . 321

Section 18
Texturing . . . 323
How Surface and Texture Shaders Work Together . . . 324
Types of Textures . . . 325
Applying Textures . . . 326
Texture Projections and Supports . . . 327
Editing Texture Projections . . . 333
UV Coordinates . . . 335
Editing UV Coordinates in the Texture Editor . . . 336
Texture Layers . . . 338
Bump Maps and Displacement Maps . . . 342
Reflection Maps . . . 344
Baking Textures with RenderMap . . . 345
Painting Colors at Vertices . . . 346
Section 19
Lighting . . . 347
Types of Lights . . . 348
Placing Lights . . . 349
Setting Light Properties . . . 350
Selective Lights . . . 352
Creating Shadows . . . 352
Global Illumination . . . 355
Caustics . . . 357
Final Gathering . . . 358
Ambient Occlusion . . . 359
Image-Based Lighting . . . 359
Light Effects . . . 360

Section 20
Cameras . . . 361
Types of Cameras . . . 362
The Camera Rig . . . 363
Working with Cameras . . . 364
Setting Camera Properties . . . 365
Lens Shaders . . . 366
Motion Blur . . . 368

Section 21
Rendering . . . 369
Rendering Overview . . . 370
Render Passes . . . 371
Render Channels . . . 375
Setting Rendering Options . . . 375
Different Ways to Render . . . 379

Section 22
Compositing and 2D Paint . . . 381
Softimage Illusion . . . 382
Adding Images and Render Passes . . . 383
Adding and Connecting Operators . . . 384
Editing and Previewing Operators . . . 386
Rendering Effects . . . 387
2D Paint . . . 388
Vector Paint vs. Raster Paint . . . 389
Painting Strokes and Shapes . . . 390
Merging and Cloning . . . 392

Section 23
Customizing Softimage . . . 393
Plug-ins and Add-ons . . . 394
Toolbars and Shelves . . . 395
Custom and Proxy Parameters . . . 396
Scripts . . . 399
Key Maps . . . 400
Other Customizations . . . 401
Welcome to Autodesk® Softimage®!
Softimage is a powerful 3D system that integrates modeling, animation, simulation, compositing, and rendering into a single, seamless environment. Softimage incorporates many standard 3D tools and functions, but goes far beyond that in terms of tool sophistication and artistic control.

Modeling

The modeling tools are designed for creating and editing seamless animated models of any sort. Softimage offers many tools for creating, editing, and deforming polygons and subdivision surfaces, as well as NURBS curves and surfaces.
Animation
Softimage provides you with a complete set of both low-level and high-
level animation tools. All the fundamental low-level tools are there
with keyframing, fcurve editor, dopesheet, constraints, linked
parameters, and expressions. You can also layer keyframe animation on
top of animation, such as motion capture (mocap) data.
Shape animation is achieved using a number of techniques and tools,
including the popular and easy-to-use shape manager.
For high-level animation, you have the animation mixer which lets you
mix, transition, and combine all forms of animation, shapes, and audio
in a nonlinear and non-destructive manner.
Character Animation

Building and animating characters is fully supported with all the regular animation tools, as well as special character tools such as skeletons that use inverse kinematics, envelopes and weight maps, and easy-to-create character rigs and rigging tools. As well, you can retarget any type of animation, including mocap data, to any type of rig.

The Face Robot module lets you make faces in a unique way. You first set up a facial rig by going through several simple stages. Once the facial rig is created, you can animate the facial controls and sculpt and tune the soft facial tissue using a special set of tools.

The Interface

Softimage’s interface is laid out in a way that gives you both a large viewing area as well as easy access to all the tools you need, all the time. You can easily resize any panel or viewport in the Softimage interface, as well as customize its layout to exactly what you want.
Simulation

You can simulate almost any kind of natural, or unnatural, phenomena you can think of using rigid bodies, soft bodies, or cloth — or grow some hair! Simulation-type objects can then be influenced by forces and collisions to create simulated animations.

ICE: Interactive Creative Environment

ICE is a visual programming environment available directly within the Softimage interface. Using a node-based data tree format, you can modify how any tool works, create custom tools and effects, and see the results interactively, all without scripting a line of code. ICE is currently used mostly for creating particle and deformation effects.

Using ICE trees, you can create almost any type of particle effect you want. You can make natural phenomena, such as smoke, fire, and rain, but you can also have objects or characters act in a simulated environment: rocks tumbling, glass pieces breaking, grass or hair growing, or humans running about.

Shaders and Texturing

Using a graphical node-based connection tool called the render tree, you can create an unlimited range of materials by connecting any type of shader to any object. You can also project 2D and 3D textures into texture spaces, which can then be manipulated like a 3D object.

Rendering

Drawing upon the integration of mental ray® rendering technology, Softimage offers full-resolution, interactive rendering, caustics, global illumination, and motion blur, not only for the final render, but also within a render region that can be drawn in any Softimage viewport. It renders everything in Softimage, letting you adjust your render parameters at any stage of modeling, animating, or even during playback.

As well, you can embed unlimited render passes into a single scene and for each pass, generate multiple rendered channels such as specular or reflections. Softimage’s render passes and render channels are extremely easy to create, customize, and edit.

Painting and Compositing

Softimage has a built-in compositor, called Softimage Illusion. Softimage Illusion is designed to edit textures and image-based lighting in real time. You can use it to rough out final shots, touch up your textures, morph, warp and rig images, create custom mattes, and tweak the results of a multi-pass render, all within Softimage.
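The node-based data tree that ICE uses can be pictured as a small data-flow evaluator. The sketch below is purely illustrative (it is not the ICE runtime or its API), but it shows the core idea: each node computes its output from the outputs of the nodes connected to it.

```python
class Node:
    """A toy data-flow node: an operation applied to connected inputs."""

    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        # Pull values up the tree, the way a data-flow graph evaluates.
        return self.op(*(node.evaluate() for node in self.inputs))


class Value(Node):
    """A leaf node holding a constant value."""

    def __init__(self, value):
        self.value = value

    def evaluate(self):
        return self.value


# For example, scale a per-particle size by a factor by wiring two
# value nodes into a multiply node (names here are made up):
size = Value(2.0)
factor = Value(0.5)
scaled = Node(lambda a, b: a * b, size, factor)
print(scaled.evaluate())  # 1.0
```

Real ICE trees work on whole per-point data sets and are built by connecting nodes visually, but the pull-based evaluation idea is the same.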
About this Guide
This guide provides an overview of the main features, tools, and
workflows of Softimage, helping you get a head start in understanding
and using the software:
• If you’re new to Softimage, it gives you a foot in the proverbial
Softimage door. You may be new to 3D, or just new to Softimage
but familiar with other 3D software packages. Either way, you can
skim through this guide and quickly see what’s possible in
Softimage, as well as discover what the different tools and elements
are called.
• If you’re an old hand at Softimage, this guide may provide you with
a quick start for areas of Softimage that you’ve never needed to use
before. For example, if modeling is your thing and now you have to
do some animation, this guide can help you get a sense of what’s
possible in animation and what tools you can use.
This guide has been updated for Softimage 2010, but because it covers
the fundamental concepts and workflows of Softimage, the
information it contains will apply to Softimage well beyond this
version.
If you’re eager to take Softimage for a spin, there’s enough information
in this guide to get you started without needing to do more homework.
Many workflow overviews are included, as well as command names
that tell you where to find things.
Remember that all the detailed information and procedures are
covered in the Softimage User’s Guide and the Softimage SDK Guide
available from the Help menu on the main menu bar in Softimage (or
press the F1 key): we’ve just filtered out the main goodies for you here.
Now, go fire up Softimage and have some fun!
The Softimage Documentation Team
Section 1
Introducing Softimage
New to Softimage? Take a quick guided tour through
the interface and basic operations.
The Softimage Interface

A Title bar
Displays the version of Softimage, your license type, and the name of the open project and scene.

B Viewports
Lets you view the contents of your scene in different ways. You can resize, hide, and mute viewports in any combination.
See Working with Views on page 21 for details.

D Main Toolbar
Contains commands and tools for different aspects of 3D work. Press 1 for the Model toolbar, 2 for Animate, 3 for Render, 4 for Simulate, and Ctrl+2 for Hair. You can also access these controls from the main menu bar.
For more information about other controls that can be displayed in this area, see The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 and Switching Toolbars on page 16.

E Icons
Switch between toolbar and other panels, or choose viewport presets.
See The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 as well as Viewport Presets on page 22 for details.

Sample Content

Softimage ships with a sample database XSI_SAMPLES containing scenes, models, presets, scripts, and other goodies. Open a Softimage file browser (View > General > Browser or press 5 at the top of the keyboard), then click Paths and choose Sample Project.
Getting Commands and Tools
D The arrow buttons move along the sequence of property editors (up a level, previous, and next).

E Revert changes, or save and load presets.

F Use the tabs to quickly move between different property sets in an editor.
Click the triangle to collapse a property set (like Scene Material in this picture) or expand it (like Phong).
For help on the parameters in a property set, click the corresponding help icon (?).

G Within a property set like Phong, tabs switch between groups of parameters.
Setting Values for Properties
A To set a color, click in the color area and then adjust it using the slider.
To select which color components appear in the color area and which one appears on the slider, click the ">" button.

B The color box on the left shows the previous color for reference.

D Use the numeric boxes to set color values precisely. To select a color model, click the ">" button.

J The Numeric Entry commands select the color model for the numeric boxes.

K The Normalized option specifies whether numeric values are represented as real numbers in the range [0.0–1.0] or as integers in the range [0, 255].

L The Gamma Correction option toggles gamma correction display for all color controls in the color editor.
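To make the Normalized and Gamma Correction options concrete, here is a small Python sketch of the two numeric representations and a simple display gamma curve. The gamma value of 2.2 is an illustrative assumption, not a documented Softimage default.

```python
def to_int255(value):
    # Clamp a normalized component to [0.0, 1.0], then map it to [0, 255].
    return round(max(0.0, min(1.0, value)) * 255)

def to_normalized(value):
    # Map an integer component in [0, 255] back to [0.0, 1.0].
    return value / 255

def display_gamma(value, gamma=2.2):
    # A simple display gamma curve on a normalized component.
    # The 2.2 exponent is an assumption for illustration only.
    return value ** (1.0 / gamma)

print(to_int255(0.5))                # 128
print(round(to_normalized(128), 3))  # 0.502
print(round(display_gamma(0.5), 2))  # 0.73
```

The same color, then, reads as 0.5 in normalized mode and 128 in integer mode; gamma correction only changes how it is displayed, not the stored value.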
Working with Views
• Right-click on the Resize icon to open a menu as shown.

Viewport Presets

Instead of switching views and resizing viewports manually, you can use the buttons at the lower left to display various preset combinations.

Muting and Soloing Viewports

The letter identifier in the upper-left corner of the title bar allows you to mute and solo viewports. Muting a viewport’s neighbors helps speed up its refresh rate.

• Middle-click the letter to mute the viewport. A muted viewport does not update until you un-mute it. The letter of a muted viewport is displayed in orange. Middle-click the letter again to un-mute the viewport.

• Click the letter to solo the viewport. Soloing a viewport mutes all the others. The letter of a soloed viewport is displayed in green. Middle-click the letter again to un-solo the viewport.

To control how viewports update when playing back animation, see Selecting a Viewport for Playback on page 143.

• To bring a window to the front and display it on top of other windows, click in it.

• To close a window, click x in the top right corner.

• To minimize a window, click _ in the top right corner.

You can cycle through all open windows, whether minimized or not, using Ctrl+Tab. Use Shift+Ctrl+Tab to cycle backwards.

You can collapse a floating view by double-clicking on its title bar. When collapsed, only the title bar is visible and you can still move it around by dragging. To expand a collapsed view, double-click on the title bar again; the view is restored at its current location.

A Word about the Active Window

The active window is always the one directly under the mouse pointer—it’s the one that has “focus” and accepts keyboard and mouse input even if it is not on top.

For example, you can open a floating explorer window, then move the pointer over the camera viewport and press F to frame the selected elements. If you pressed F while the pointer was still over the explorer, the list would have expanded and scrolled to find the next selected object.

Be careful that you don’t accidentally send commands to the wrong window.
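The mute and solo behavior of viewports amounts to one piece of state per viewport. This toy model is only a sketch of that logic (real viewports hold views and refresh state, not booleans):

```python
class ViewportPanel:
    """Toy model of viewport mute/solo behavior, using the A-D letter
    identifiers. Illustrative only; not Softimage code."""

    def __init__(self, letters=("A", "B", "C", "D")):
        self.muted = {letter: False for letter in letters}

    def middle_click(self, letter):
        # Middle-click toggles muting: a muted viewport stops updating.
        self.muted[letter] = not self.muted[letter]

    def click(self, letter):
        # Clicking solos a viewport: all of its neighbors are muted.
        for other in self.muted:
            self.muted[other] = other != letter


panel = ViewportPanel()
panel.click("B")                           # solo B: A, C, and D are muted
print(panel.muted["A"], panel.muted["B"])  # True False
panel.middle_click("A")                    # un-mute A individually
print(panel.muted["A"])                    # False
```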
Working in 3D Views
3D views are where you view, edit, and manipulate the geometric elements of your scene.

A Viewport letter identifier: Click to solo the viewport or middle-click to mute it.
C Memo cams: Store up to 4 views for quick recall. Left click to recall,
middle-click to save, Ctrl+middle-click to overwrite, and right-click to
clear.
F XYZ buttons: Click on X to view the right side, Y to view the top
side, and Z to show the front side. Middle-click to view the left, back,
and bottom sides respectively. These commands change the
viewpoint but you can still orbit afterwards unlike in the Top, Front,
and Right views selected from the Views menu. Click again to return
to the previous viewpoint.
Types of 3D Views

There are many ways to view your scene in the 3D views. These viewing modes are available from the Views menu in viewports and from the View menu in the object view.

Except for camera views, all of the viewing modes are “viewpoints”. Like camera views, viewpoints show you the geometry of objects in a scene. They can be previewed in the render region, but they cannot be rendered to file like camera views.

Camera views let you display your scene in a 3D view from the point of view of a particular camera. You can also choose to display the viewpoint of the camera associated to the current render pass.

The Render Pass view is also a camera view: it shows the viewpoint of the particular camera associated to the current render pass. Only a camera associated to a render pass is used in a final render.

The Top, Front, and Right views are also orthographic, which means that the viewpoint is perpendicular (orthogonal) to specific planes:

• The Top view faces the XZ plane.

• The Front view faces the XY plane.

• The Right view faces the YZ plane.

You cannot orbit the camera in an orthographic view.

Spotlight Views

Spotlight views let you select from a list of spotlights available in the scene. Selecting a spotlight from this list switches the point of view in the active 3D view relative to the chosen spotlight. The point of view is set according to the direction of the light cone defined for the chosen spotlight.
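The view-to-plane relationships of the orthographic views can be summarized as a projection that simply discards one axis. This is an illustrative sketch, not Softimage code:

```python
def orthographic_project(point, view):
    """Project a 3D point onto the plane each orthographic view faces.

    Top faces the XZ plane (Y is discarded), Front faces the XY plane
    (Z is discarded), and Right faces the YZ plane (X is discarded).
    """
    x, y, z = point
    planes = {
        "Top": (x, z),    # looking down the Y axis
        "Front": (x, y),  # looking down the Z axis
        "Right": (z, y),  # looking down the X axis
    }
    return planes[view]


print(orthographic_project((1.0, 2.0, 3.0), "Top"))  # (1.0, 3.0)
```

Because one axis is discarded entirely, depth never changes apparent size in these views, which is what makes them useful as modeling guides.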
To open the object view, do one of the following:

• From any viewport’s views menu, choose Object View.

or

• From the main menu, choose View > General > Object View.

C Memo cams: Store up to 4 views for quick recall. Left click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.

D XYZ buttons: Click on X to view the right side, Y to view the top side, and Z to show the front side. Middle-click to view the left, back, and bottom sides respectively. These commands change the viewpoint but you can still orbit afterwards unlike in the Top, Front, and Right views in viewports. Also unlike in the viewports, they are not temporary overrides and you cannot click them again to return to the previous viewpoint.

E Lock: Prevent the view from updating when you select a different object in another view. Click again to unlock.
Tool or Command        Key      Description
Roll                   L        Rotates a perspective view along its Z axis. Use the
                                different mouse buttons to roll at different speeds.
Frame                  F        Frames the selected elements in the view under the
                                mouse pointer.
Frame (All Views)      Shift+F  Frames the selected elements in all open views.
Frame All              A        Frames the entire scene in the view under the
                                mouse pointer.
Frame All (All Views)  Shift+A  Frames the entire scene in all open views.

Display Modes

You can display scene objects in different ways by choosing various display modes from a 3D view’s Display Mode menu. The Display Mode menu always displays the name of the current display mode, such as Wireframe.

Wireframe

Shows the geometric object made up of its edges, drawn as lines resembling a model made of wire. This image displays all edges without removing hidden parts or filling surfaces.
Rotoscopy

Rotoscopy is the use of images in the background of the 3D views. You can use rotoscopy in different 3D views (Front, Top, Right, User, Camera, etc.) and any display mode (Wireframe, Shaded, etc.). Furthermore, you can use different images for each view.

• Single images are useful as guides for modeling in the orthographic views.

• Image sequences or clips are useful for matching animation with footage of live action in the perspective views.

To load an image in a view, choose Rotoscope from the Display Mode menu and select an image and other options.

There are two types of rotoscoped images:

• By default, rotoscoped images in perspective views have Image Placement set to Attached to Camera. This means that they follow the camera as it moves and zooms so that you can match animation with live action plates.

• On the other hand, rotoscoped images that are displayed in the orthographic views (Front, Top, and Right) have the Image Placement option set to Fixed by default. This allows you to navigate the camera while modeling without losing the alignment between the image and the modeled geometry.

Fixed images are sometimes called image planes, and they can be displayed in all views, not just the one for which they were defined.
Pixel Zoom
The Explorer

The explorer displays the contents of your scene in a hierarchical structure called a tree. This tree can show objects as well as their properties as a list of nodes that expand from the top root. You normally use the explorer as an adjunct while working in Softimage, for example, to find or select elements.
To open an explorer in a floating window, press 8 at the top of the keyboard, or choose View > General > Explorer from the main menu.
A Scope of elements to view. See Setting the Scope of the Explorer on page 33.
B Viewing and sorting options.
C Filters for displaying element types. See Filtering the Display on page 33.
D Lock and update. This works only when the scope is set to Selection.
E Search by name, type, or keyword.
Exploring Your Scene
Keeping Track of Selected Elements

If you have selected objects, their nodes are highlighted in the explorer. If their nodes are not visible, choose View > Find Next Selected Node. The explorer scrolls up or down to display the first object node in the order of its selection. Each time you choose this option, the explorer scrolls up or down to display the next selected node. After the last selected item, the explorer goes back to the first.
Choose View > Track Selection if you want to automatically scroll the explorer so that the node of the first selected object is always visible.

Setting the Scope of the Explorer

The Scope button determines the range of elements to display. You can display entire scenes, specific parts, and so on.
The bold item in the menu indicates the last selected scope. Middle-click the Scope button to quickly select this view.
The Selection option in the explorer’s scope menu isolates the selected object. If you click the Lock button with the Selection option active, the explorer continues to display the property nodes of the currently selected objects, even if you go on to select other objects in other views. When Lock is on, you can also select another object and click Update to lock on to it and update the display.

Filtering the Display

Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters or parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes.
The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters that are available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several different preset combinations of filters, followed by specific filters that you can toggle on or off individually.
Object Explorers

A Explorer filter buttons.
1 Example: Click the Selection filter button...
2 ...to display a pop-up explorer showing all property nodes associated with the selected object.
The Explore button opens a pop-up menu of additional filters for specifying the type of information you wish to obtain on the scene.
Click outside a pop-up explorer to close it.

Scene Search

A Enter part of the name to search for. Softimage waits for you to pause typing before it displays the search results. You can continue typing to modify the search string, and the updated results will be displayed when you pause again. Softimage finds the elements that contain the search string anywhere in their names (substring search). Strings are not case-sensitive. Alternatively, you can also use wildcards and a subset of regex (regular expressions), just like in the explorer.
B Recall a recent search string.
C Clear the search string and close the search results.
D Open the floating Scene Search window with the current search and additional options.
E The search results are listed here. They obey the current settings in the Scene Search view for sorting and name/path display.
• To select an element, click on it.
• To select a range of elements, click on the first one and then Shift+click on the last one.
• To toggle-select an element, Ctrl+click on it.
• To deselect an element, Ctrl+Shift+click on it.
• To rectangle-select a range of elements, click in the background first and then drag across the elements to select. This is easier if only names are displayed, rather than paths.
• To select all elements found, press Ctrl+A.
• To rename the selected elements, press F2.
• Right-click on any element for a context menu. If you right-click on a selected element, then some commands apply to all selected elements.
F To dismiss the list of results, click anywhere outside the pop-up or press Escape.

The Schematic View

The schematic view presents the scene in a hierarchical structure so that you can analyze the way a scene is constructed. It includes graphical links that show the relationships between objects, as well as material and texture nodes to indicate how each object is defined.
To open a schematic view in a floating window, press 9 at the top of the keyboard, or choose View > General > Schematic from the main menu.
• Press the spacebar to click and select nodes. Use the left mouse button for node selection, the middle mouse button for branch selection, and the right mouse button for tree selection.
• Press M to click and drag nodes to new locations. The schematic remembers the location of nodes, so you can arrange them as you please.
• Press s or z to pan and zoom.
Relationships between elements are displayed as lines called links. You
can display or hide links for different types of relationship using the
Show menu.
You can also click a parent-child link to select the child. This is useful if
you have located the parent but can’t find the child in a jumbled
hierarchy. Again, use the left, middle, or right mouse buttons to select
the child in node, branch, or tree modes.
When other types of link are displayed, you can click and drag across
the link to select the corresponding operator, such as a constraint or
expression. When a link is selected, you can press Enter to open the
property editor related to the associated relationship (if applicable), or
press Delete to remove the operator.
A Scope: Show the entire scene, the current selection, or the current
layer.
E Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.
F Lock: Prevent the view from updating when you select a different
object in another view (if Scope = Selection). Click again to unlock.
Section 2
Elements of a Scene
This section provides a guide to the objects,
properties, and components you will find in
Softimage scenes, and describes some of the
workflows for working with them.
What’s in a Scene?

Scenes contain objects. In turn, objects can have components and properties.

Objects

Objects are elements that you can put in your scene. They have a position in space, and can be transformed by translating, rotating, and scaling. Examples of objects include lights, cameras, bones, nulls, and geometric objects. Geometric objects are those with points, such as polygon meshes, surfaces, curves, particles, hair, and lattices.

Properties

Properties control how an object looks and behaves: its color, position, selectability, and so on. Each property contains one or more parameters that can be set to different values.
Properties can be applied to elements directly, or they can be applied at a higher level and passed down (propagated) to the child elements in a hierarchy.
Components

Components are the subelements that define the shape of geometric objects: points, edges, polygons, and so on. You can deform a geometric object by moving its components. Components can be grouped into clusters for ease of selection and other purposes.
Points on different geometry types: polygon mesh, curve, surface, and lattice.

Element Names

All elements have a name. For example, if you choose Get > Primitive > Polygon Mesh > Sphere, the new sphere is called sphere by default, but you can rename it if you want. In fact, it’s a good idea to get into the habit of giving descriptive names to elements to keep your scenes understandable. You can see the names in the explorer and schematic views, and you can even display them in the 3D views.
You can typically name an element when you create it. You can rename an object at any time by choosing Rename from a context menu or pressing F2 in the explorer or schematic.
Softimage restricts the valid characters in element names to a–z, A–Z, 0–9, and the underscore (_) to keep them variable-safe for scripting. You can also use a hyphen (-), but it is not recommended. Invalid characters are automatically converted to underscores. In addition, element names cannot start with a digit; Softimage automatically adds an underscore at the beginning. If necessary, Softimage adds a number to the end of names to keep them unique within their namespace.
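The renaming rules above can be sketched in a few lines. This is only an illustration of the behavior described in the text, not Softimage's actual implementation; the function name and details are hypothetical:

```python
import re

def sanitize_name(name, existing=()):
    """Illustrative sketch of the element-naming rules described above."""
    # Replace any character outside a-z, A-Z, 0-9, _ and - with an underscore.
    clean = re.sub(r"[^A-Za-z0-9_-]", "_", name)
    # Names may not start with a digit: prefix an underscore.
    if clean and clean[0].isdigit():
        clean = "_" + clean
    # Append a number if needed to keep the name unique in its namespace.
    candidate, n = clean, 1
    while candidate in existing:
        candidate = f"{clean}{n}"
        n += 1
    return candidate
```

For example, `sanitize_name("my light!")` yields `my_light_`, and `sanitize_name("2sphere")` yields `_2sphere`.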
Selecting Elements

Selecting is fundamental to any software program. In Softimage, you select objects, components, and other elements to modify and manipulate them.
In Softimage, you can select any object, component, property, group, cluster, operator, pass, partition, source, clip, and so on; in short, just about anything that can appear in the explorer. The only things that you can’t select are individual parameters; parameters are marked for animation instead of selected.
A Select menu: Access a variety of selection tools and commands.
B Select icon: Reactivates the last active selection tool and filter.
C Filter buttons: Select objects or their components, such as points, curves, etc.
D Object Selection and Sub-object Selection text boxes: Enter the name of the object and its components you want to select. You can use * and other wildcards to select multiple objects and properties.
F Group/Cluster button: Selects groups and clusters.
G Center button: Not used for selection.
H Hierarchy navigation: Select an object’s sibling or parent.

Overview of Selection

To select an object in a 3D or schematic view, press the space bar and click on it. Use the left mouse button for single objects (nodes), the middle mouse button for branches, and the right mouse button for trees and chains.
To select components, first select one or more geometric objects, then press a hotkey for a component selection mode (such as T for rectangle point selection), and click on the components. Use the middle mouse button for clusters.
For elements with no predefined hotkey, you can manually activate a selection tool and a selection filter.
In all cases:
• Shift+click adds to the selection.
• Ctrl+click toggle-selects.
• Ctrl+Shift+click deselects.
• Alt lets you select loops and ranges. You can use Alt in combination with Shift, Ctrl, and Ctrl+Shift.
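The wildcard matching in these text boxes behaves like familiar glob patterns. A rough sketch of the idea, assuming standard glob semantics (Softimage's exact wildcard syntax may differ in details):

```python
import fnmatch

def match_names(pattern, names):
    """Return the names matching a glob-style pattern such as 'light*'.

    Illustration only: this uses Python's fnmatch glob rules as a
    stand-in for Softimage's wildcard matching.
    """
    return [n for n in names if fnmatch.fnmatchcase(n, pattern)]
```

For example, the pattern `light*` would match `light1` and `light2` but not `camera`.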
T: Select points with the Rectangle selection tool, in either supra or sticky mode.
Y: Select polygons with the Rectangle selection tool, in either supra or sticky mode.
U: Select polygons with the Raycast selection tool, in either supra or sticky mode.
I: Select edges with the Raycast selection tool, in either supra or sticky mode.
' (apostrophe): Select hair tips with the Rectangle selection tool, in either supra or sticky mode.
F7: Activate Rectangle selection tool using current filter.
F8: Activate Lasso selection tool using current filter.
F9: Activate Freeform selection tool using current filter.
F10: Activate Raycast selection tool using current filter.

Rectangle Selection Tool

Rectangle selection is sometimes called marquee selection. You select elements by dragging diagonally to define a rectangle that encompasses the desired elements.

Raycast Selection Tool

The Raycast tool casts rays from under the mouse pointer into the scene; elements that get hit by these rays as you click or drag the mouse are affected. Raycast never selects elements that are occluded by other elements.

Lasso Selection Tool

The Lasso tool lets you select one or more elements by drawing a free-form shape around them. This is especially useful for selecting irregularly shaped sets of components.
Selection Filters

Selection filters determine what you can select in the 3D and schematic views. You can restrict the selection to a specific type of object, component, or property. Press Shift while activating a new filter to keep the current selection, allowing you to select a mixture of component types.

Range Selection

Alt+click to select a range of components using any selection tool (except Paint). This allows you to select the interconnected components that lie on a path between two components you pick.
- Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+click to toggle the selection of a range.
- Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+click to deselect a range.

Loop Selection

1 First specify the anchor.
2 Then specify another component to select the entire loop of components.
1. Do one of the following:
- Select the first “anchor” component normally, then Alt+middle-click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed.
or
- Alt+middle-click to select two adjacent components in a single mouse movement.
All components on an extended path connecting the two components become selected.
Note that for edges, the direction is implied, so you only need to Alt+middle-click on a single edge. However, for parallel edge loops, you still need to specify two edges as described previously.
2. Use the following key and mouse combinations to further refine the selection:
- Use Shift to add individual components to the selection as usual. The last selected component becomes the anchor for any new loop. Once you have selected a new anchor, you can Alt+Shift+middle-click to add another loop to the selection.
- Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+middle-click to toggle the selection of a loop.
- Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+middle-click to deselect a loop.

Modifying the Selection

The Select menu has a variety of commands you can use to modify the selection. For example, among many other things, you can:
• Invert the selection.
• Grow or shrink a component selection (polygon meshes only).
• Select adjacent points, edges, or polygons.

Defining Selectability

You can make an object unselectable in the 3D and schematic views by opening its Visibility properties and turning off Selectability. This can come in handy and speed up your workflow if you are working in a very dense scene and there are one or more objects that you don’t wish to select.
Unselectable objects are displayed in dark gray in the wireframe and schematic views. Regardless of whether an object’s Selectability is on or off, you can always select it using the explorer or using its name.
The selectability of an object can also be affected by its membership in a group or layer.
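The anchor-then-pick behavior in loop selection can be pictured as walking a closed ring of component indices from the anchor to the second pick. This is an illustration only; Softimage derives the actual path from the mesh's connectivity:

```python
def loop_range(anchor, second, loop_size):
    """Sketch: components selected between an anchor and a second pick
    along a closed loop of loop_size components, walking forward from
    the anchor. Illustrative; real meshes use connectivity, not indices.
    """
    i = anchor
    selected = [i]
    while i != second:
        i = (i + 1) % loop_size  # step to the next component on the loop
        selected.append(i)
    return selected
```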
Objects

Objects can be duplicated, cloned, and organized into hierarchies, groups, and layers.

Duplicating and Cloning Objects

Duplicating an object creates an independent copy: modifying the original after duplication has no effect on the copy. Cloning creates a linked copy: modifying the geometry of the original affects the clone, but you can still make additional changes to the clone without affecting anything else. All the related commands can be found in Edit > Duplicate/Instantiate.
To duplicate an object, select it and choose Edit > Duplicate/Instantiate > Duplicate Single or press Ctrl+D. The object is duplicated using the current options and the copy is immediately selected. You may need to move it away from the original. By default, any transformation you apply is remembered for the next duplicate.
To make multiple copies, choose Edit > Duplicate/Instantiate > Duplicate Multiple or press Ctrl+Shift+D. Specify the number of copies and the incremental transformations to apply to each one.
When an object is duplicated, the original and its duplicates can be modified separately with no effect on each other.
You can clone objects using the Clone commands on the Edit > Duplicate/Instantiate menu.
Clones are displayed in the explorer with a cyan c superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label Cl.
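Duplicate Multiple's incremental transformations can be pictured as repeatedly offsetting each new copy from the previous one. A sketch using translation only (an illustration, not Softimage's code):

```python
def duplicate_multiple(position, copies, step):
    """Sketch of Duplicate Multiple: each copy is offset from the
    previous one by an incremental translation (illustration only)."""
    results = []
    x, y, z = position
    dx, dy, dz = step
    for i in range(1, copies + 1):
        # The i-th copy accumulates i applications of the increment.
        results.append((x + i * dx, y + i * dy, z + i * dz))
    return results
```

Three copies of an object at the origin with an increment of 2 units in X would land at X = 2, 4, and 6.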
Removing Groups
You can remove a group by selecting it and pressing Delete. When you
delete groups, only the group node and its properties are deleted, not
the member objects themselves.
G Use the cells of a layer group to control all layers in the group. You
can still change the settings of individual layers afterward. When
different layers in the group have different values, the cell has a light
gray checkmark.
Properties

A property is a set of related parameters that controls some aspect of objects in a scene.

Applying Properties

You can apply many properties using the Get > Property menu of any toolbar. This applies the default preset of a property’s parameter values to the selected objects, possibly replacing an existing version of the same property.

Editing Properties

To edit an existing property, open its property editor by clicking on the property node in an explorer. A handy way to do this is to press F3 to see a mini-explorer for the selected object, or click the Selection button at the bottom of the Select panel. You can also right-click on Selection to display properties according to type.

How Properties Are Propagated

Objects can inherit properties from many different sources. This inheritance is called propagation.
For some properties, such as Display and Geometry Approximation, an object can have only one at a time. If it inherits the same property from more than one source, the source with the highest “strength” is used.
In increasing order of strength, the possible sources of property propagation are:
• Scene Default: This is the weakest source. If an object does not inherit a property from anywhere else, it uses the scene’s default values. For example, if an object has never had a material applied to it, it uses the scene default material.
• Branch: If a parent has a property applied when it is branch-selected, its children all inherit the property.
• Local: If a child inherits a branch property from its parent, but has the same property applied directly to it, it uses its local values.
• Cluster: Materials, textures, and other properties applied to a cluster take precedence over those applied to the object.
• Group: If an object is a member of a group, then any properties applied to the group take precedence over local and branch properties. Similarly, if a cluster is a member of a group, any properties applied to the group take precedence over those applied directly to the cluster.
• Layer: Any properties applied to an object’s layer take precedence over group, local, and branch properties.
• Partition: Properties applied to a partition of a render pass have the highest priority of all when that render pass is current.
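Because the sources form a strict strength order, resolving a single-instance property amounts to picking the strongest source present. A sketch of that rule (an illustration, not Softimage's code):

```python
# Strength order from weakest to strongest, per the list above.
STRENGTH = ["Scene Default", "Branch", "Local", "Cluster", "Group", "Layer", "Partition"]

def effective_source(sources):
    """Return the winning source for a single-instance property.

    `sources` lists the places the property is inherited from; the
    strongest one wins. Illustration only.
    """
    return max(sources, key=STRENGTH.index)
```

For example, an object with both a branch-inherited and a local copy of a property uses the local values.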
For other types of properties, an object can have many at the same
time. For example, an object can have several local annotations as well
as several annotations inherited from different ancestors, groups, and
so on.
Simple Propagation: In this sphere hierarchy, each sphere is parented to the one above it. Because the larger sphere was branch-selected when the texture was applied, every sphere beneath it inherits the checkerboard texture.
Branch Propagation: One sphere was branch-selected and given a cloud texture. The remaining sphere retains the checkerboard texture because it is on another branch.
Local Material/Texture Application: One sphere was single-selected and given a blue surface. This applies a local material/texture to the selected object only, and none of its children; the sphere’s children still inherit the checkerboard texture, despite the local texture assigned to their parent.
Reverting to the Scene’s Default Material: The larger sphere was single-selected and has had its material deleted. Since the other spheres can no longer inherit their texture from the parent (because it’s been deleted), they revert back to the scene’s default gray (or another color you’ve defined).
You can also set the following options in the explorer’s View menu:
• Local Properties displays only those properties that have been
applied directly to an object.
• Applied Properties shows all properties that are active on an object, no matter how they are propagated.
You can display the various component types in a specific 3D view using the individual options available from its eye icon (Show menu), or in all open 3D views using the options on the Display > Attributes menu on the main menu bar.
For more options, you can set the visibility options in the Camera Visibility property editor: click a 3D view’s eye icon (Show menu) and choose Visibility Options, or choose Display > Visibility Options for all open 3D views.
Note that when you activate a component selection filter, the corresponding components are automatically displayed in the 3D views.

Clusters

A cluster is a named set of components that are grouped together for a specific modeling, animation, or texturing purpose. Grouping and naming components makes it easier to work with those same components again and again. For example, by grouping all points that form an…
Spinning top with two clusters (Top and Bottom).
You can define as many clusters on an object as you like, and the same component can belong to a number of different clusters.
You can define clusters for points, edges, polygons, subsurfaces, and other components. Each cluster can contain one type of component. For example, a cluster can contain points or polygons, but not both.
Clusters may shift if you edit an operator in an object’s construction history and add components before the position where the cluster was created.

Creating Clusters

To create a cluster, select some components and click Cluster on the Edit panel (the Cluster button changes to Group when objects are selected). As soon as the cluster is created, it is selected and you can press Enter to open its property editor and change its name.
To create a cluster whose components aren’t already in other clusters, choose Edit > Create Non-overlapping Cluster instead. You can also use Edit > Create Cluster with Center to make a cluster with a null “center” that you can transform and animate. If you prefer to use a different object as a center, simply create a cluster and apply Deform > Cluster Center manually.
Components and Clusters
Adding and Removing Components from Clusters

To add components to a cluster, select the cluster and add the components you want to the selection. In the Edit panel, click the + button (next to the Cluster button).
To remove components from a cluster, select the cluster, add the components to remove to the selection, and click the – button.
When you add components to an object, any new components that are surrounded by similar components in a cluster are automatically added to the cluster.

Selecting Clusters

You can select clusters using the Clusters button at the bottom of the Select panel, or in any other explorer.

Removing Clusters

To remove a cluster, select it and press Delete. Removing a cluster removes the group, but does not remove the individual components from the object.

Manipulating Components and Clusters

Not every type of component or cluster can be directly manipulated in Softimage. You can select and manipulate points, edges, and polygons in the 3D views, and you can select and manipulate texture UV coordinates (samples) in the texture editor.
• You can transform points, edges, and polygons in 3D space. This is a fundamental part of modeling an object’s shape.
• You can apply deformations to deform points, edges, and polygons in the same way that you apply them to objects.
• You cannot animate component and cluster transformations directly. Instead, you can use a deformer such as a cluster center or volume deformer and animate the deformer, or you can use shape animation.
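Conceptually, a cluster is just a named, typed set of component indices. A minimal sketch of that idea (not Softimage's actual data model; the class and type tags are hypothetical):

```python
class Cluster:
    """Sketch of a cluster: a named set of component indices of one
    type, e.g. points ('pnt') or polygons ('poly'). Illustration only."""

    def __init__(self, name, ctype):
        self.name = name      # e.g. "Top"
        self.ctype = ctype    # a cluster holds one component type only
        self.indices = set()  # the same index can appear in other clusters too

    def add(self, indices):
        self.indices |= set(indices)

    def remove(self, indices):
        self.indices -= set(indices)
```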
Parameter Maps

Certain parameters are mappable: you can vary the parameter’s value across an object’s geometry by connecting a weight map, texture map, vertex color property, or other cluster property. This allows you to, for example, control the amplitude of a deformation or the emission rate of a particle system across an object’s surface.
Mappable parameters have a connection icon in their property editors that allows you to drive the value using a map. The icon changes to show whether a map is connected or unconnected.

Which Parameters Are Mappable?

Almost any parameter with a connection icon in its property editor is mappable. These parameters include:
• Certain deformation parameters, such as Amplitude in the Push operator or Strength in the Smooth operator.
• The Multiplier parameter in the Polygon Reduction operator.
• Edge and vertex crease values.
• Various simulation parameters, such as the length and density of hair, the stiffness of cloth, and so on.
• Shapes in the animation mixer.

What Can You Connect to Mappable Parameters?

You can connect just about any cluster property to a mappable parameter. The most useful properties include the following:
• Weight maps allow you to start from a base map such as a constant value or gradient, and then paint values on top.
• Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are connected to parameters instead of shaders.
• Vertex color properties are color values stored at each polynode or texture sample of a geometric object.
In addition to the attributes listed above, you can connect mappable parameters to other cluster properties, including UV coordinates (texture projections), shapes, user normals, and envelope weights. While these may not always be useful for driving modeling and simulation parameters, the ability to connect to these properties may be useful for custom developers.

Connecting Maps

No matter what type of map you want to connect to a parameter, the basic procedure is the same. In a property editor, click on the connection icon of a mappable parameter and choose Connect. A pop-up explorer opens; navigate through the explorer and pick the desired map:
• Weight maps are found under the appropriate cluster.
• Texture maps are properties directly under the object. They can also be found under the appropriate cluster. Make sure you don’t accidentally select the texture projection.
• Vertex color properties are also found under the appropriate cluster.
The connection icon changes to show that a map is connected. When a map is connected, you can click on this icon to open the map’s property editor.
If you connect a map that has multiple components, like an RGBA color, to a parameter that has a single dimension, like Amplitude, you can use the options in the Map Adaptor to control the conversion.
To disconnect a weight map, right-click on the connection icon of a connected parameter and choose Disconnect.
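Conceptually, such an adaptor reduces a multi-component value to one dimension. A sketch of what a conversion like that might look like (the mode names here are illustrative assumptions, not the actual Map Adaptor options):

```python
def to_scalar(rgba, mode="average"):
    """Sketch of converting a multi-component map value (RGBA) to a
    single scalar, as a map adaptor might. Modes are illustrative."""
    r, g, b, a = rgba
    if mode == "average":
        return (r + g + b) / 3.0
    if mode == "alpha":
        return a
    if mode == "luminance":
        # Rec. 709 luma weights for perceived brightness
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    raise ValueError(f"unknown mode: {mode}")
```

A pure white pixel would drive Amplitude with 1.0 in average mode, while a pure red pixel would contribute only its luma weight in luminance mode.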
A spot of paint, and it’s as good as new! A slight Push is all that’s needed.
6. You can reselect the weight map and continue to paint on it to modify the effect further.
If your object has multiple maps, you may need to select the desired one before you can paint on it. You can do this easily using Explore > Property Maps from the Select panel.

Freezing Weight Maps

Weight maps can be frozen to simplify your scene’s data. Freezing collapses the weight map generator (the base constant or gradient map you chose when you created the weight map) together with any strokes you have applied.
To freeze a weight map, select it and click the Freeze button on the Edit panel. After you have frozen a weight map, you can still add new strokes, but you cannot change the base map or delete any strokes you performed before freezing.

Texture Maps

Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are used to control operator parameters instead of surface colors.
HDR images are fully supported. Floating-point values are not truncated.

Creating Texture Maps

To create a texture map, you select the texture projection method and then link an image file to it.
1. Apply a texture projection and texture map to the selected object by doing one of the following:
- If the object already has a set of UV coordinates (texture projection) that you want to use, select it and choose Get > Property > Texture Map > Texture Map. This creates a blank texture map property for the object and opens a blank Texture Map property editor in which you need to set the texture projection and select an image that will be used as the map (as described in the next steps).
or
- To create a new texture projection for the map, select the object and choose Get > Property > Texture Map > projection type (such as Cylindrical, Spherical, UV, or XZ) that is appropriate for the shape of the object. This creates a texture map property and texture projection for the object, but doesn’t open the Texture Map property editor. Now you must open the Texture Map property editor to associate the image to this projection to use as the map (in the explorer, click the Texture Map property under the object).
Section 3
Moving in 3D Space
Working in 3D space is fundamental to Softimage.
You will use the transformation tools constantly as
you model and animate objects and components.
Coordinate Systems
Softimage uses coordinate systems, also called reference frames, to describe the position of objects in 3D space.

Cartesian Coordinates

One essential concept that a first-time user of 3D computer graphics should understand is the notion of working within a virtual three-dimensional space using a two-dimensional user interface. Softimage uses the classical Euclidean/Cartesian mathematical representation of space. The Cartesian coordinate system is based on three perpendicular axes, X, Y, and Z, intersecting at one point. This reference point is called the origin. You can find it by looking at the center of the grid in any of the 3D windows.

XYZ Coordinates

With the Cartesian coordinate system, you can locate any point in space using three coordinates. Positions are measured from the origin, which is at (0, 0, 0). For example, if X = +2, Y = +1, Z = +3, a point would be located to the right of, above, and in front of the origin, at Location = (2, 1, 3).

XYZ Axes

Softimage uses a "Y-up" system, where the Y direction represents height. This is different from some other software, which is "Z-up". Keep this in mind if you are familiar with other software or are trying to import data into Softimage.

A small icon representing the three axes and their directions is shown in the corner of 3D views. The icon's three axes are represented by color-coded vectors: red for X, green for Y, and blue for Z.

An easy way to remember the color coding is RGB = XYZ. This mnemonic is repeated throughout Softimage: object centers, manipulators, axis controls on the Transform panel, and so on.

XZ, XY, YZ Planes

Since you are working with a two-dimensional interface, spatial planes are used to locate points in three-dimensional space. The perpendicular axes extend as spatial planes: XZ, XY, and YZ. In the 3D views, these planes correspond to three of the parallel projection windows: Top, Front, and Right. Imagine that the XZ, XY, and YZ planes are folded together like the top, front, and right side of a box. This helps you keep a sense of orientation when you are working within the parallel projection windows.
Transformations
Transformations are fundamental to 3D. They include the basic operations of scaling, rotating, and translating: scaling affects an element's size, rotation affects an element's orientation, and translation affects an element's position. Transformations are sometimes called SRTs.

You transform by selecting an object or components, activating a transform tool, then clicking and dragging a manipulator in a 3D view.

Transforming Interactively

1. Select objects or components to transform and activate a tool: Scale (press x), Rotate (press c), or Translate (press v).

2. Set the manipulation mode. See Manipulation Modes on page 65.

3. If desired, specify the active axes. See Specifying Axes on page 67.

4. If desired, set the pivot. See Setting the Pivot on page 67.

Local versus Global Transformations

There are two types of transformation values that can be stored for animation: local and global. Local transformations are stored relative to an object's parent, while global ones are stored relative to the origin of the scene's global coordinate system. The global transformation values are the final result of all the local transformations that are propagated down the object hierarchy from parent to child.

You can animate either the local or the global transformation values. It's usually better to animate the local transformations—this lets you move the parent while all objects in the hierarchy keep their relative positions rather than staying in place.
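The relationship between local and global values can be sketched with a small example. The following snippet (an illustration of the general math in 2D, not Softimage's API) composes a parent's transform with a child's stored local transform to produce the child's global position:

```python
import math

def compose(parent, child):
    """Compose two 2D transforms given as (angle, (tx, ty)).
    The child's translation is stored relative to the parent, so it is
    rotated by the parent's angle before the translations are added."""
    pa, (px, py) = parent
    ca, (cx, cy) = child
    gx = px + cx * math.cos(pa) - cy * math.sin(pa)
    gy = py + cx * math.sin(pa) + cy * math.cos(pa)
    return (pa + ca, (gx, gy))

# A child stored 1 unit along its parent's X axis. With the parent at
# (5, 0) and rotated 90 degrees, the child's global position works out
# to (5, 1): moving or rotating the parent carries the child along.
parent = (math.pi / 2, (5.0, 0.0))
child_local = (0.0, (1.0, 0.0))
angle, (gx, gy) = compose(parent, child_local)
```

Because the stored local values never change when the parent moves, animating them keeps the hierarchy's relative arrangement intact, which is why local animation is usually preferable.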
Global

Global translations and rotations are performed along the scene's global axes. The object is transformed using the global axes as the reference.

If you are using the SRT manipulators in a perspective view like Camera or User, View mode uses the global scene axes.

Local

Local transformations are performed along the axes of the object's local coordinate system as defined by its center. This is the only true mode available for scaling—scaling is always performed along an object's own axes.

Par

Par, or parent, translations and rotations use the axes of the object's parent. For translation, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object's local position for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local position parameters are automatically marked. To activate Par for rotations, activate Add and press Ctrl.
Par mode is not available for components. In its place, Object mode uses the local coordinates of the object that "owns" the components.

Add

Add, or additive, mode is only available for rotation. It lets you directly control the object's local X, Y, and Z rotations as stored relative to its parent. This mode is especially useful when animating bones and other objects in hierarchies.

For rotations, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object's local orientation for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local rotation parameters are automatically marked.

Uni

Uni, or uniform, is available only for scaling. It is not really a mode but it modifies the way objects are scaled locally. It scales along all active local axes at the same time with a single mouse button. You can activate and deactivate axes as described in Specifying Axes on page 67. You can also temporarily turn on Uni by pressing Shift while scaling.

Vol

Like Uni, Vol or volume is available only for scaling and is a modifier rather than a mode. It scales along one or two local axes, while automatically compensating the other axes so that the volume of the object's bounding box remains constant.

Ref

Ref, or reference, mode lets you translate an object along the X, Y, and Z axes of another element or an arbitrary reference plane. Right-click on Ref to set the reference.
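The compensation that Vol performs can be sketched mathematically. Assuming a unit bounding box and scaling only the X axis (an illustrative example, not Softimage's actual implementation), the other two axes must each shrink by the square root of the applied factor to keep the volume constant:

```python
def vol_scale(sx):
    """Scale X by sx and compensate Y and Z equally so that the
    bounding-box volume (sx * sy * sz) stays at 1."""
    comp = 1.0 / (sx ** 0.5)
    return (sx, comp, comp)

# Stretching X to 4x squeezes Y and Z to 0.5x each:
sx, sy, sz = vol_scale(4.0)
volume = sx * sy * sz   # 4.0 * 0.5 * 0.5 == 1.0
```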
Translate Manipulator

Click and drag on a single axis to translate along it. Click and drag between two axes to translate along the corresponding plane. Click and drag on the center to translate in the viewing plane.

Click and drag the center left or right to scale all active axes uniformly.

In addition to dragging the handles, you can:

• Middle-click and drag anywhere in the 3D views to translate along the axis that most closely matches the drag direction.

• Click and drag anywhere in the 3D views (except on the manipulator) to perform different actions, depending on the setting for Click Outside Manipulator in the Tools > Transform preferences.

• Right-click on the manipulator to open a context menu, where you can set the manipulation mode and other options.
Transform > Transform Preferences contains several settings that affect the display, interaction, and other options of the transformation tools. Since you will be spending a great deal of your time transforming things, it's a good idea to explore these and find the settings that are most comfortable for you.

Hierarchical (Softimage) versus Classic Scaling

Hierarchical (Softimage) scaling uses the local axes of child objects when their parent is scaled. This maintains the relative shape of the children without shearing if they are rotated with respect to their parent.

When this option is off, the result is called classic scaling—children are scaled along their parent's axes and may be sheared with non-uniform scaling. Classic scaling is recommended if you are exchanging data with other applications, such as game engines, motion capture systems, or 3D applications that do not understand Softimage scaling.

To specify hierarchical or classic scaling

1. Select one or more child objects and open their Local Transform property editor.

2. On the Scaling tab, turn Hierarchical (Softimage) Scaling off or on. If it is off, classic scaling is used.

To set the default scaling mode used for all new objects

1. Choose File > Preferences from the main menu bar.

2. Click General.

3. Toggle Use Classical Scaling for Newly Created Objects.
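The shearing that classic scaling can introduce is easy to demonstrate. In this illustrative sketch (generic 2D matrix math, not Softimage code), a child rotated 45 degrees is scaled non-uniformly along its parent's axes; the child's own axes end up no longer perpendicular:

```python
import math

def classic_scale(point, angle, scale):
    """Rotate a child-space point into the parent's space, then scale
    along the parent's axes (the classic-scaling behavior)."""
    x = point[0] * math.cos(angle) - point[1] * math.sin(angle)
    y = point[0] * math.sin(angle) + point[1] * math.cos(angle)
    return (x * scale[0], y * scale[1])

# The child's unit X and Y axes, with the child rotated 45 degrees
# and the parent scaled non-uniformly by (2, 1).
a = math.pi / 4
ex = classic_scale((1.0, 0.0), a, (2.0, 1.0))
ey = classic_scale((0.0, 1.0), a, (2.0, 1.0))

# A nonzero dot product means the child's axes are no longer
# perpendicular: the child has been sheared.
dot = ex[0] * ey[0] + ex[1] * ey[1]
```

Hierarchical scaling avoids this by applying the scale along the child's own axes, which keeps the axes perpendicular regardless of the child's rotation.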
Transform Setup

You apply a Transform Setup property by choosing Get > Property > Transform Setup from any toolbar and then setting the options. You can modify the options later by opening the property from the explorer.

While Transform Setups are useful for many tasks, like animating a rig, at other times you don't want the current tool to keep changing as you select objects. In these cases, you can ignore Transform Setups for all objects in your scene by turning off Transform > Enable Transformation Setups. Turn it back on to resume using the preferred tool of each object.

• If an object is node-selected, then children with local animation follow the parent. This is because the local animation values are stored relative to the parent's center. However, what happens to non-animated children depends on the ChldComp (Child Transform Compensation) option on the Constrain panel.

Child Transform Compensation

The ChldComp option on the Constrain panel controls what happens to non-animated children if an object is node-selected and transformed.

• If this option is off, all children with an active parent constraint follow the parent. You cannot move the parent without moving its children.

• If this option is on, the children are not visibly affected. Their local transformations are compensated so that they maintain the same global position, orientation, and size.

Child Transform Compensation does not affect what happens when a child has local animation on the corresponding transformation parameters, nor when the parent is branch-selected.
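The compensation itself is straightforward in principle: the child's local values are adjusted by the inverse of the parent's change so that the global result is unchanged. A translation-only sketch (illustrative; the real feature also compensates orientation and size):

```python
def compensate(child_local, parent_delta):
    """When the parent translates by parent_delta, subtract that motion
    from the child's local translation so the child's global position
    stays exactly where it was."""
    return tuple(c - d for c, d in zip(child_local, parent_delta))

child_local = (1.0, 2.0, 0.0)    # stored relative to the parent
parent_delta = (3.0, 0.0, 0.0)   # the parent moves 3 units in X
new_local = compensate(child_local, parent_delta)
# Global position before: parent + (1, 2, 0). After the move, the
# compensated local (-2, 2, 0) cancels the parent's motion.
```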
Section 4

Organizing Your Data

Scenes
A scene file contains all the information necessary to identify and position all the models and their animation, lights, cameras, textures, and so on for rendering. All the elements of a scene are compiled into a single file with an .scn extension.

The Softimage title bar identifies the name of the current scene and the project in which it resides.

The File menu contains most of the commands for creating, opening, and managing scenes.
The external files view provides controls for viewing and managing external files. Click the refresh button to refresh the list of files. The left pane allows you to choose whether to show all external files used by the scene, or only those used by a particular model. The grid lists all of the external files for the scene or model specified in the left pane, and of the type specified in the File Type list. Selected files are highlighted in green; files with invalid paths are highlighted in red.
Projects

In Softimage, you always work within the structure of a project. A project is a system of folders that contain the scenes you build and the external files referenced by those scenes.

Projects are used to keep your work organized and provide a level of consistency that can simplify production for a workgroup. A project can exist locally on your machine or can be shared from a network drive.

When you open Softimage for the first time, an untitled scene is created in the XSI_SAMPLES factory project. You can set your own project as the default project that opens with Softimage. The project name in the title bar at the top of the Softimage interface is the active project.

Project lists are text-based files with an .xsiprojects file name extension. You can build, manage, and distribute your project lists among members of your workgroup using the Project Manager.

The Project Manager shows the location of your project folder and lets you set the selected project as the active project, or as the default project that opens automatically when you start Softimage.
Models
Models are like "mini scenes" that can be easily reused in scenes and projects. They act as a container for objects, usually hierarchies of objects, and many of their properties. Models contain not just the objects' geometry but also the function curves, shaders, mixer information, groups, and other properties. They can also contain internal expressions and constraints; that is, those expressions and constraints that refer only to elements within the model's hierarchy.

The "Club bot" model structure contains many things that define the character.

Models and Namespaces

Each model defines its own namespace. This means that each object in a model's hierarchy must have a unique name, but objects in different models can have the same name. For example, two characters in the same scene can both have chains named left_arm and right_arm if they are in different models.

All models exist in the namespace of the scene. This means that each model must have its own unique name, even if it is within the hierarchy of another model.

Namespaces let you reuse animations that have been stored as actions. If an action contains animation for one model's left_arm chain, you can apply the action to another model and it automatically connects to the second model's left_arm. If your models contain elements with different naming schemes, for example, LeftArm and L_ARM, you can use connection mapping templates to specify the proper connections.
Exporting Models

Use File > Export > Model to export models created in Softimage for use in other scenes. Using models to export objects is the main way of sharing objects between scenes.

When you export a model, a copy is saved as an independent file. The file names of exported models have an .emdl extension.

The original model remains in the scene. If you ever need to modify the model, you can change it in the original scene, and then re-export it using the same file name. If other scenes use that file as a referenced model, they will update automatically when you open them. If you imported the file into another scene as a local model, you must delete the model from that scene and re-import it from the file to obtain the updated version.

For example, let's say that you're modeling a car that will be used in various scenes, but the animator needs to start animating with the car on another computer before you can finish the details. You export the car as porsche.emdl, which the animator can import into her scene while you continue your work. Any changes that the animator makes to the car, such as setting keys or expressions, are automatically stored in the model's delta in the scene.

When you're done modeling the car, you can re-export using the same file name. Now when the animator loads the scene or updates the referenced model, all the changes you made are automatically reflected in the car in her scene. After the model is updated, Softimage reapplies the changes stored in the delta to the model within the animator's scene.

Referenced models also let you work at different levels of detail. You can have a low-resolution model for fast interaction while animating, a medium-resolution model for more accurate previewing, and a high-resolution model for the final results.

Referenced models are indicated in the explorer by a white man icon. The default name of this node depends on the name of the external file, but you can change it if you want. The name of the active resolution appears in square brackets after the model's name. The name of a delta's target model appears after the delta's name.

Importing Local Models

When you import a model locally instead of as a referenced model, its data becomes part of your scene. It is as if the model was created directly in the scene—there is no live link to the .emdl file. You can make any changes you want to the model and its children.

To import a model locally, choose File > Import > Model from the main menu. You can also drag an .emdl file from a browser or a link on a Net View page and drop it onto the background of a 3D view. On Windows, you can also drag an .emdl file from a folder window.
Instantiating Models

An instance is an exact replica of a model. Any type of model can be instanced. You can create as many instances as you like using the commands on the Edit > Duplicate/Instantiate menu, and position them anywhere in your scene. When you modify the original "master" model, all instances update automatically.

Instances are useful because they require very little memory: only the transformations of the instance root are stored. However, you cannot modify, for example, an instance's geometry or material.

Instantiation has the following advantages:

• Instances use much less disk space than duplicates or clones because you're not duplicating the geometry.

• Editing multiple identical objects is very simple because you only have to edit the original.

• Wireframe, shading, and memory operations are much faster.

Instances are displayed in the explorer with a cyan i superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label I.
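The memory saving comes from sharing: an instance stores a transform and a reference to the master's data rather than a copy of the geometry. A minimal sketch (illustrative only, not Softimage's data model):

```python
class Instance:
    """An instance stores only a transform; the point data is shared
    with the master model, so edits to the master show everywhere."""
    def __init__(self, master, translation):
        self.master = master          # shared reference, not a copy
        self.translation = translation

    def points(self):
        tx, ty, tz = self.translation
        return [(x + tx, y + ty, z + tz) for x, y, z in self.master]

master = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
a = Instance(master, (5.0, 0.0, 0.0))
b = Instance(master, (0.0, 5.0, 0.0))

master.append((0.0, 1.0, 0.0))   # edit the master...
# ...and both instances reflect the new point automatically, which is
# also why you cannot edit an instance's geometry independently.
```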
Section 5
General Modeling
Modeling is the task of creating the objects that you
will animate and render. No matter what type of
object you are modeling, the same basic concepts
and techniques apply. This section explores the
aspects of modeling that aren't specific to any
particular type of geometry, such as curves,
polygon meshes, or NURBS surfaces.
Overview of Modeling
1. Start with a basic object, such as a primitive cube.

2. Add more subdivisions to work with.

3. Rough out the basic shape of the object.

4. Iteratively refine the object, moving points and adding more detail where required.

5. Once the modeling is done, the object is ready to be textured and animated. If changes are necessary, you can still perform modeling operations on the animated, textured object.
Geometric Objects
By definition, geometric objects have points. The set of these points and their positions determine the shape of an object and are often called the object's geometry. The number of points and how they are connected is called its topology.

No matter what the type of geometry, Softimage allows you to select, manipulate, and deform points in the same way.

Types of Geometry

The main types of renderable geometry in Softimage are polygon meshes and NURBS surfaces. In addition, there are other types of geometry that you can use for specialized purposes.

Polygon Meshes

Polygon meshes are quilts of polygons joined at their edges and vertices. One advantage of polygon meshes is that they allow for almost arbitrary topology—you are not limited to rectangular patches and you can add extra points for more detail where needed.

On the other hand, polygon meshes may require very heavy geometry (that is, many points) to approximate smoothly curved objects. However, you can subdivide them to create "virtual" geometry that is smoother—for example, a subdivision surface created from a cube.

NURBS Surfaces

Surfaces are two-dimensional NURBS (non-uniform rational B-spline) patches defined by intersecting curves in the U and V directions. In a cubic NURBS surface, the surface is mathematically interpolated between the control points, resulting in a smooth shape with relatively few control points.

The accuracy of NURBS makes them ideal for smooth, manufactured shapes like car and aeroplane bodies. One limitation of surfaces is that they are always four-sided.
Curves

In Softimage, curves are one-dimensional NURBS of linear or cubic degree. Cubic curves with Bézier knots can be manipulated as if they were Bézier curves.

Curves have points but they are not renderable because they have no thickness. Nevertheless, they have many uses, such as serving as the basis for constructing polygon meshes and surfaces, as paths for objects to move along, for controlling deformations like deform by curve and deform by spine, and so on.

Particles

Particles are disconnected points in a point cloud. They are often emitted in simulations to create a variety of effects, such as fire, water, and smoke.

In Softimage, point clouds are controlled by ICE trees. See ICE Particles on page 271.

Density

Density refers to the number of points on an object. Part of the art of modeling is controlling the balance of density. Generally speaking, you need more density in areas where an object has high detail or needs to deform smoothly. However, too much density means that an object will be unnecessarily slow to load, update, and render.
Normals

On polygon meshes and surfaces, the control points form bounded areas. Normals are vectors perpendicular to these closed areas on the surface, and they indicate the visible side of the object and how its surface is oriented. Normals are used to compute shading between surface triangles.

Normals are represented by thin blue lines. To display or hide them, click the eye icon (Show menu) of a 3D view and choose Normals.
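The cross-product construction behind normals can be sketched for a single triangle (generic geometry math, not Softimage code). The winding order of the points determines which side the normal points to, which is why a "wrong" winding makes a face appear flipped:

```python
def polygon_normal(a, b, c):
    """Unit normal of the triangle (a, b, c), from the cross product
    of two edge vectors. Counter-clockwise winding, as seen from the
    visible side, gives a normal pointing toward the viewer."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A triangle lying in the XZ plane, wound counter-clockwise when
# viewed from above: its normal points up the +Y axis.
n = polygon_normal((0, 0, 0), (0, 0, 1), (1, 0, 0))
```

Reversing the order of the last two points flips the normal to -Y, the "wrong" side.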
Starting from Scratch

When modeling, you need to start somewhere. You can:

• Get a basic shape from the Primitive menu.

• Create text.

• Generate an object from a curve.

Primitives

Primitives are basic shapes like cubes, grids, and spheres. You can add them to a scene and then modify them as you wish. For example, you can start with a sphere and move points to create a head. You can then attach eyeballs and ears to the head and put the whole head on a model of a character.

There are several different primitive shapes for each geometry type. Each primitive shape has parameters that are particular to it—for example, a sphere has a radius that you can specify, a cube has a length, a cylinder has both height and radius, and so on.

There are also several parameters that are common to all or to several primitive shapes: Subdivisions, Start and End Angles, and Close End.

Getting Primitives

You add a primitive object to the scene by choosing an option from the Get > Primitive menu on any of the toolbars at the left of the main window.

1. Choose Get > Primitive.

2. Choose an item from the submenus:

- Curve displays a submenu from which you can choose an available NURBS curve shape.

- Polygon Mesh displays a submenu from which you can choose an available polygon mesh shape.

- Surface displays a submenu from which you can choose an available NURBS surface shape.

3. Set the parameters as desired. The geometric primitives (curves, polygon meshes, and surfaces) have certain typical controls:

- The shape-specific page contains the basic characteristics of the shape. Each shape has different characteristics; for example, a sphere has one radius and a torus has two.

- The Geometry page controls how the implicit shape is subdivided when converted into a surface. More subdivisions yield more points, resulting in greater detail but heavier geometry.

Text

You can create text in Softimage, as well as import it from RTF (rich text format) files. Text is not a type of geometric object in Softimage; instead, text information is immediately converted to curves. After that, the curves can optionally be converted to planar or extruded polygon meshes.

Creating Text

• Choose one of the following commands from the Model toolbar:

- Create > Text > Curves creates a Text primitive and converts it to a curve object.

- Create > Text > Planar Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0. The curve object is automatically hidden.
- Create > Text > Solid Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0.5 by default. Once again, the curve object is automatically hidden.

In each case, a property editor is displayed with pages for entering text and font properties, converting the text to curves, and (optionally) converting the curves to polygon meshes.

Create Polygon Mesh from Curves

The commands and the general procedures on these two menus are the same—the only difference is the type of object that is created.
1. Select the first input curve, then add the remaining input curves (if any) to the selection.

Different commands require different numbers of input curves. For example, Revolution Around Axis requires only one curve, while Loft allows for any number of profile curves to define the cross-section.

You are not limited to curve objects. You can also select curves on surfaces, including any combination of isolines, knot curves, boundaries, surface curves, and trim curves. For example, you can create a loft surface that joins two surface boundaries while passing through other curves.

2. Choose one of the commands from the first group in the Create > Surf. Mesh or the Create > Poly. Mesh menu on the Model toolbar.

3. In the property editor that opens, adjust the parameters as desired. For more information, refer to the Softimage Reference by clicking on the ? in the property editor.

An example is extruding a curve along another curve.

Operator Stack

The operator stack (also known as the modifier stack or construction history) is fundamental to modeling in Softimage. Every time you perform a modeling operation, such as modifying the topology or applying a deformation, an operator is added to the stack. Operators propagate their effects upwards through the stack, with the output of one operator being the input of the next. At any time, you can go back and modify or delete operators in the stack.

Viewing and Modifying Operators

You can view the operator stack of an object in an explorer if Operators is active in the Filters menu. The operator stack is under the first subnode of an object in the explorer, typically named Polygon Mesh, NURBS Surface Mesh, NURBS Curve List, and so on.

For example, suppose you get a primitive polygon mesh grid, apply a twist, then randomize the surface. The operator stack shows the operators that have been applied. You can open the property page of any operator by clicking on its icon, and then modify values. Any changes you make are passed up through the history and reflected in the final object.

• Change the size of the grid in its Geometry node.

• Change the angle, offset, and axis of the twist in Twist Op.

• Change the random displacement parameters in Randomize Op.
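The propagation behavior described above can be modeled as a simple function pipeline (an illustrative sketch only, not Softimage's actual operator system): each operator takes the previous operator's output as its input, so editing an operator low in the stack changes everything above it.

```python
def evaluate(base_points, stack):
    """Re-evaluate an operator stack from the bottom up: the output
    of one operator is the input of the next."""
    points = base_points
    for op in stack:
        points = op(points)
    return points

# Hypothetical stand-ins for operators like Twist Op or Randomize Op,
# acting on 1D "points" for simplicity.
stack = [lambda pts: [p + 1.0 for p in pts],   # an offset operator
         lambda pts: [p * 2.0 for p in pts]]   # a scale operator
result = evaluate([0.0, 1.0, 2.0], stack)      # [2.0, 4.0, 6.0]

# Editing the bottom operator propagates through the whole history:
stack[0] = lambda pts: [p + 10.0 for p in pts]
result2 = evaluate([0.0, 1.0, 2.0], stack)     # [20.0, 22.0, 24.0]
```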
(Figure: regions of the operator stack, with the Display Mode menu — Modeling; Animation: apply envelopes or other animated deformations; Secondary Shape: define shapes on top of envelopes, e.g., muscle bulges.)
Changing the Order of Operators

You can change the order of operators in an object's stack by dragging and dropping them in an explorer view. You must always drop the operator onto the operator or marker that is immediately below the position where you want the dragged operator to go.

Be aware that you might not always get the results you expect, particularly if you move topology operators or move other operators across topology operators, because operators that previously affected certain components may now affect different ones. In addition, some deformation operators like MoveComponent or Offset may not give expected results when moved because they store offsets for point positions whose reference frames may be different at another location in the stack.

When you try to drag and drop an operator, Softimage evaluates the implications of the change to make sure it creates no dependency cycles in the data. If it detects a dependency, it will not let you drop the operator in that location. Moving an operator up often works better than moving it down—this is because of hidden cluster creation operators on which some operators depend.

• Freezing removes any animation on the modeling operators (such as the angle of a Twist deformation). The values at the current frame are used.

• For hair objects, the Hair Generator and Hair Dynamics operators are never removed.

Collapsing Deformation Operators

Sometimes, it is useful to "freeze" certain operators in the stack without freezing earlier operators that are lower in the stack. For example, you might have many MoveComponent operators that are slowing down your scene, but you don't want to lose an animated deformation or a generator (if your object has a modeling relation that you want to keep).

In these cases, you can collapse several deformation operators into a single Offset operator. The Offset operator is a single deformation that contains the net effect of the collapsed deformations at the current frame. Simply select the deformation operators in an explorer and choose Edit > Operator > Collapse Operators.
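The "net effect at the current frame" idea can be sketched as follows (illustrative only, not Softimage's implementation): bake each point's total displacement through the chain, then replace the chain with a single operator that re-applies those offsets.

```python
def collapse(base_points, deformers):
    """Replace a run of deformation operators by one Offset operator
    that stores each point's net displacement at the current frame."""
    deformed = base_points
    for d in deformers:
        deformed = d(deformed)
    offsets = [dp - bp for dp, bp in zip(deformed, base_points)]
    # The returned function is the single collapsed Offset operator.
    return lambda pts: [p + o for p, o in zip(pts, offsets)]

# Two hypothetical deformers acting on 1D "points":
deformers = [lambda pts: [p + 0.5 for p in pts],
             lambda pts: [p * 3.0 for p in pts]]
offset_op = collapse([1.0, 2.0], deformers)
# offset_op([1.0, 2.0]) reproduces the chain's result for these points.
```

Note that the baked offsets only reproduce the chain exactly for the point positions they were computed from, which matches the idea of capturing the net effect at the current frame.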
Modeling Relations

When you generate an object from other objects, a modeling relation is established. For example, if you create a surface by extruding one curve along another curve, the resulting surface is linked to its generator curves. If you modify the curves, the surface updates automatically. The modeling relation is sometimes called construction history in other software.

You can modify the generated object in any way you like, for example, by moving points or applying a deformation. When you modify the generators, the generated object is updated while any modifications you have made to it are preserved.

If you delete the input objects, the generated object is removed as well. To avoid this, freeze the generated object, or at least the generator operator, before deleting the inputs. If you use the Delete button in the Inputs section of the generator's property editor, the generator is automatically frozen first.

You can display the modeling relations:

• In a 3D view, click the eye icon (Show menu) and make sure that Relations is on.

• In a schematic view, make sure that Show > Operator Links is on.

If the selected object has a modeling relation, it is linked to its input objects by lines. A label on the line identifies the type of relation (such as wave or revolution) and the name of the input object. You can click the line to select the corresponding operator.

For example, a road created by extruding a cross-section along a guide has a modeling relation with the guide: when the original guide is deformed into a loop, the road updates automatically.
Attribute Transfer (GATOR)
Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.

- Click and release the mouse button to select the highlighted component. A manipulator appears (unless you've toggled it off). You can use the manipulator to transform the selection, or if you prefer you can first modify the selection, change the pivot, and set other options.

The Tweak Component tool uses the Ctrl, Shift, and Alt modifier keys with the left and middle mouse buttons to perform different functions—look at the mouse/status line at the bottom of the Softimage window for brief descriptions, or read the rest of this section for the details. The right mouse button opens a context menu.

5. The Tweak Component tool remains active, so you can repeat steps 3 and 4 to manipulate other components.

When you have finished, deactivate the tool by pressing Esc or activating a different tool.

Switching between Translation, Rotation, and Scaling

The Tweak Component tool lets you translate, rotate, or scale components. Select the desired transformation using the v, c, and x keys—press and release a key to change the transformation (sticky mode), or press and hold a key to temporarily override the current transformation (supra mode).

• To translate, press v or choose Translate from the context menu. Drag the center to translate freely in the viewing plane, or drag an axis to translate in the corresponding direction.
Manipulating Components
• To scale, press x or choose Scale from the context menu. Drag the center to scale uniformly. Drag an axis to scale in the corresponding direction.

The mouse pointer updates to reflect the current action. You can also press Tab to cycle through the three actions, or Shift+Tab to cycle in reverse order.

To activate the standard Translate, Rotate, or Scale tools, you must either deactivate the Tweak Component tool before pressing v, c, or x, or use the t, r, or s buttons on the Transform panel.

Setting Manipulation Modes

The Tweak Component tool uses the manipulation modes shown on the Transform panel. They affect the axes and pivot used for the transformation.

• Global transformations are performed along the scene’s global axes.

• Local transformations use the component’s own reference frame. In this mode, Y is the normal direction.

• View transformations are performed with respect to the viewing plane of the 3D view.

• Object transformations are performed in the local coordinate system of the object that contains the components.

• Ref, or reference, mode lets you transform elements using another component or object as the reference frame. See Setting the Pivot on page 98.

• Plane mode is similar to Ref. It uses the same axes as Ref but the object center as the pivot.

Activating Axes

You can activate or deactivate axes on the Transform panel:

• Click an axis icon to activate it and deactivate the others.

• Shift+click an axis icon to activate it without affecting the others.

• Ctrl+click an axis icon to toggle it.

• Click the All Axes icon to activate all three axes.

• Ctrl+click the All Axes icon to toggle all three axes.

Alternatively, if the Tweak manipulator is displayed, you can activate a single axis by double-clicking on it. Double-click on the same axis again to re-activate all axes, or on a different one to activate it instead.
Selecting Components

The Tweak Component tool lets you select components in a similar way to the standard selection tools, but there are some differences.

Selecting, Deselecting, and Extending the Selection

Use the following keyboard and mouse combinations for selection:

• Click a component to select it.

• Shift+click a component to add it to the selection.

• Shift+middle-click to toggle-select a component.

Note that for edge loops, the direction is implied, so you can simply Alt+middle-click on an edge to select the loop and then Alt+Shift+middle-click to select additional loops. However, to select parallel edge loops, you still need to specify two components as described above.

Selecting by Type

The Tweak Component tool allows you to manipulate points, edges, and polygons, but you can limit it to a particular type of component if you desire. Use the context menu to activate Tweak All, Points, Edges, Polygons, or Points + Edges.
To activate proportional modeling, click the Prop button on the Transform panel. Components that are affected by the proportional falloff are highlighted, and the Distance Limit is displayed as a circle.

You can change the Distance Limit interactively when proportional modeling is active by pressing and holding r while dragging the mouse left or right. You can change the Falloff (Bias) profile by pressing and holding Shift+R while dragging the mouse. To change other proportional settings, right-click on Prop.

Selected edge loop, the effect of sliding, and the effect of ordinary translation for comparison.

To activate or deactivate sliding while the Tweak Component tool is active, do one of the following:

- Press j. Press and release the key to toggle sliding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode).

- Click the on-screen Slide Components button at the bottom of the view.

- Right-click and choose Slide Components.
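The proportional falloff described above can be sketched in plain Python (not the Softimage API); the `bias` argument here is a hypothetical stand-in for the Falloff (Bias) profile:

```python
def proportional_weights(distances, distance_limit, bias=0.0):
    """Weight per component: 1.0 at the picked component, falling off to
    0.0 at the Distance Limit. `bias` reshapes the falloff curve."""
    weights = []
    for d in distances:
        if d >= distance_limit:
            weights.append(0.0)          # outside the circle: unaffected
        else:
            t = 1.0 - d / distance_limit  # linear falloff toward the limit
            weights.append(t ** (2.0 ** bias))
    return weights

# Components at increasing distances from the picked component:
print(proportional_weights([0.0, 1.0, 2.0, 3.0], distance_limit=2.0))
# [1.0, 0.5, 0.0, 0.0]
```

Each affected component would then be moved by its weight times the dragged offset.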
Snapping

You can use the Ctrl key to snap while using the Tweak Component tool:

• Press Ctrl to toggle snapping to targets on or off (depending on its current setting on the Snap panel) while translating.

• Press Ctrl to snap by increments while scaling.

For more information about snapping options, see Snapping on page 72.

Welding Points

You can interactively weld pairs of points on polygon meshes while using the Tweak Component tool. Welding merges points into a single vertex.

To weld points

1. While the Tweak Component tool is active, toggle Weld Points on by doing one of the following:

- Press l. Press and release the key to toggle welding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode).

- Click the on-screen Weld Points icon at the bottom of the view.

3. Release the mouse button over the point you want to weld to.

Note that interactive welding uses the same snapping region size as the Snap tool. You can modify the region size using the Snap menu.

4. Repeat steps 2 and 3 to weld more points, if desired. When you have finished welding, toggle Weld Points off.

Hiding the Manipulator

If you don’t like working with the manipulator, you can hide or unhide it by clicking the on-screen Toggle Manipulator button at the bottom of the view or by choosing Toggle Manipulator from the context menu.

When the manipulator is off, the Tweak Component tool is always in click-and-drag mode:

• If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel.

• Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.
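Snapping by increments, mentioned under Snapping above, amounts to rounding a value to the nearest multiple of a step; a minimal sketch in plain Python (the increment values are illustrative, not Softimage defaults):

```python
def snap_to_increment(value, increment):
    """Round a transform value to the nearest multiple of `increment`,
    as when pressing Ctrl to snap by increments while scaling."""
    return round(value / increment) * increment

print(snap_to_increment(1.37, 0.25))   # a scale factor snapped to 1.25
print(snap_to_increment(47.0, 15.0))   # an angle snapped to 45.0
```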
Deformations

Deformations are operators that change the shape of geometric objects. Softimage provides a large variety of deformation types, available from the Modify > Deform menu of the Model and Simulate toolbars as well as the Deform > Deform menu of the Animate toolbar.

Some deformations, like Bend and Twist, are very simple. Others, like Lattice and Curve, use additional objects to control the effect.

Deformations can be used either as modeling tools or animation tools. Depending on the type of deformation, you can animate the deformation’s own parameters, such as the amplitude of a Push, or the properties of a controlling object, such as the center of a Wave.

Examples of Deformations

Here are just some examples of the many types of deformation and their possible uses.

Deformation by Curve: the object and curve are shown before and after the deformation is applied.

Lattice Deformation

Wave Deformation: circular and planar waves.

Muting Deformations

All deformations can be muted, which temporarily disables their effect. To mute a deformation, activate Mute in its property editor. Alternatively, right-click on its operator in an explorer and choose Mute.
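As a concrete illustration of a simple deformation, here is a minimal generic sketch of a Twist-style operator in plain Python (not the Softimage API): each point is rotated around the Y axis by an angle proportional to its height.

```python
import math

def twist(points, degrees_per_unit_y):
    """Rotate each (x, y, z) point around the Y axis by an angle
    proportional to its height y -- a sketch of a Twist deformation."""
    out = []
    for x, y, z in points:
        a = math.radians(degrees_per_unit_y * y)
        out.append((x * math.cos(a) - z * math.sin(a),
                    y,
                    x * math.sin(a) + z * math.cos(a)))
    return out

# A point at height 1, twisted 90 degrees per unit of height,
# moves from (1, 1, 0) to approximately (0, 1, 1).
print(twist([(1.0, 1.0, 0.0)], 90.0))
```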
Section 6
Curves
Softimage provides a full set of tools for creating and
editing curves in 3D space. Although they can’t be
rendered by themselves, curves form the basis for a
lot of modeling and animation techniques.
Curve Components

Curves have many components. You can display these components using the options on a viewport’s Show menu (eye icon) and select them using the filters on the Select panel.

Linear curve and cubic curve; the knot shown has multiplicity 1.
Drawing Curves
Manipulating Curve Components
Inverting Curves

Modify > Curve > Invert switches the start and end points of a curve. The result is as if you had drawn the curve clockwise instead of counterclockwise, or vice versa.

For example, if an object uses the curve as a path, it moves in the opposite direction once you invert the curve. Similarly, if a surface has been built from the curve and its operator stack was not frozen, its normals become reversed.

Original sketched curve and the new curve fitted onto it.

Creating Curves from Intersecting Surfaces

Intersection between two surfaces.
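Inverting a curve amounts to reversing the order of its control points, so parameter t on the new curve corresponds to 1 - t on the old one and a path animation traverses it backwards. A minimal generic sketch:

```python
def invert_curve(control_points):
    """Swap the start and end of a curve by reversing its control
    points, as Modify > Curve > Invert does conceptually."""
    return list(reversed(control_points))

path = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(invert_curve(path))
# [(4, 0), (3, 2), (1, 2), (0, 0)]
```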
Importing EPS Files
Filleting Curves
Section 7
Polygon Mesh
Modeling
Polygon meshes are one of the basic renderable
geometry types in Softimage. They are ideally suited
for modeling non-organic objects with hard edges
and corners, but they can also be used to
approximate smooth, organic objects. Polygon
meshes are particularly used for games development
because of the requirements of most game engines.
Polygon meshes are also the basis of subdivision
surfaces.
Polygon-by-polygon Modeling
With polygon-by-polygon modeling, you draw each polygon directly.
About Polygon Meshes
Polygon Meshes

A polygon mesh is a 3D object composed of one or more polygons. Typically these polygons share edges to form a three-dimensional patchwork.

However, a single polygon mesh object can also contain discontiguous sections that are not connected by edges. These disconnected polygon “islands” can be created by drawing them directly or by combining existing polygon meshes.

A polygon mesh sphere, with a polygon and a point labeled.

• Points are the vertices of the polygons. Each point can be shared by many adjacent polygons in the same mesh.

• Edges are the straight line segments that join two adjacent points. Edges can be shared by no more than two polygons. Edges that are not shared represent the boundary of the polygon mesh object and are displayed in light blue if Boundaries and Hard Edges are visible in a 3D view.

• Polygons are the closed shapes that make up the “tiles” of the mesh.

Planar and Non-planar Polygons

When an individual polygon on a polygon mesh is completely flat, it is called planar. All its vertices lie in the same plane, and are thus coplanar. Planar polygons give better results when rendering.

Triangles are always planar because any three points define a plane. However, quadrilaterals and other polygons can become non-planar, particularly as you move vertices around in 3D space. When objects are automatically tessellated before rendering, non-planar polygons are divided into triangles. However, other applications such as game engines may not support non-planar polygons properly.
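The coplanarity test behind this distinction can be sketched as follows in plain Python; the tolerance value is an assumption for illustration, not a Softimage setting:

```python
def is_planar(vertices, tolerance=1e-6):
    """True if every vertex lies in the plane of the first three.
    Triangles are always planar; quads can drift out of plane."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = vertices[:3]
    u = (x1 - x0, y1 - y0, z1 - z0)
    v = (x2 - x0, y2 - y0, z2 - z0)
    # Plane normal is the cross product u x v.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    for x, y, z in vertices[3:]:
        w = (x - x0, y - y0, z - z0)
        if abs(n[0] * w[0] + n[1] * w[1] + n[2] * w[2]) > tolerance:
            return False
    return True

flat_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
bent_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)]
print(is_planar(flat_quad), is_planar(bent_quad))  # True False
```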
A hole in a polygon mesh: at least two polygons are required.

• Edges cannot be shared by more than two polygons. Tri-wings are not supported. To connect three polygons in this way, a double edge is required.

• Softimage does support one case of non-manifold geometry: a single point can be shared by two otherwise unconnected parts of a single mesh object.

If you export geometry from Softimage, remember that such geometry may not be considered valid by other applications.

A non-manifold geometry that is valid in Softimage.
The illusion of smoothness is created by averaging the normals of adjacent polygons. When normals are averaged in this way, the shading is a smooth gradient along the surface of a polygon. When normals are not averaged, there is an abrupt change of shading at the polygon edges. Faceted polygons are appropriate for geometric shapes like dice.

Automatic discontinuity lets you turn off the averaging of normals for sharper edges, and the discontinuity Angle lets you specify how sharp edges must be before they appear faceted. If the dihedral angle (angle between normals) of two adjacent polygons is less than the Discontinuity Angle, the normals are averaged; otherwise, they are not averaged.

• If Automatic is on and Angle is 0, the object is completely faceted.

• If Automatic is off, the object is completely smooth.

Selected edges marked as hard.
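The dihedral-angle rule above can be sketched as a simple decision function in plain Python, assuming unit-length face normals:

```python
import math

def shade_smooth(normal_a, normal_b, discontinuity_angle_deg):
    """Decide whether two adjacent polygons' normals should be averaged:
    if the angle between the face normals is less than the Discontinuity
    Angle, the shared edge is shaded smooth; otherwise it stays hard."""
    dot = sum(a * b for a, b in zip(normal_a, normal_b))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle < discontinuity_angle_deg

# Two faces meeting at a 90-degree dihedral angle:
up = (0.0, 1.0, 0.0)
side = (0.0, 0.0, 1.0)
print(shade_smooth(up, side, 120))  # True: 90 < 120, edge shaded smooth
print(shade_smooth(up, side, 45))   # False: edge appears faceted
```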
Interior closed curves can become holes.

Tessellating

Tessellation is the process of tiling the curves’ shapes with polygons. Softimage offers three different tessellation methods:

• Minimum Polygon Count uses the least number of polygons possible but yields irregular polygons.

• Medial Axis creates concentric contour lines along the medial axes (averages between the input boundary curves), morphing from one boundary shape to the next. This method creates mainly quads with some triangles, so it is well-suited for subdivision surfaces.

Other Options

In addition to controlling the tessellation, there are many other options to control holes, extrusion, beveling, embossing, and so on.
Drawing Polygons

Modify > Poly. Mesh > Add/Edit Polygon Tool is a multi-purpose tool that lets you draw polygons interactively by placing vertices. You can use it to add polygons to an existing mesh, add or remove points on existing polygons, or to create a new polygon mesh object.

1. Do one of the following:

- To create a new polygon mesh object, first make sure that no polygon meshes are currently selected.

or

- To add polygons to an existing polygon mesh object, select the mesh first.

or

- To add or remove points on an existing polygon in an existing polygon mesh object, select that polygon.

2. Choose Modify > Poly. Mesh > Add/Edit Polygon Tool from the Model toolbar or press n.

3. Do one of the following:

- Click in a 3D view to add a point. If necessary, you can adjust the position by moving the mouse pointer before releasing the button.

or

- Click an existing point on another polygon in the same mesh to attach the current polygon to it.

or

- Click an existing edge of another polygon in the same mesh to attach the current polygon to it.

or

- Left-click and drag on a vertex of the current polygon to move it.

or

- Middle-click a vertex of the current polygon to remove it.

As you move the mouse pointer, the edges that would be created are outlined in red. To insert the new point between a different pair of vertices of the current polygon, first move the mouse across the edge connecting them.

The direction of the normals is determined by the direction in which you draw the vertices. If the vertices are drawn in a counterclockwise direction, the normals face toward the camera, and if drawn clockwise, they face away from the camera. As you draw, red arrows indicate the order of the vertices.

4. When you have finished drawing a polygon, do one of the following:

- To start a new polygon and automatically share an edge with the current one, first move the mouse pointer across the desired edge and then click the middle mouse button. Repeat step 3 as necessary.

or

- To start a new polygon without automatically sharing an edge, click the right mouse button. Repeat step 3 as necessary.

or

- When you are finished drawing polygons, exit the Add/Edit Polygon tool by clicking the right mouse button twice in a row, by choosing a different tool, or by pressing Esc.
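The winding rule for normals can be illustrated with the usual signed-area test, sketched here in plain Python and assuming screen coordinates with x to the right and y up:

```python
def winding_is_counterclockwise(screen_points):
    """Twice the signed area of a polygon drawn in screen space.
    A positive area means counterclockwise winding, i.e. the polygon's
    normal faces toward the camera; negative means it faces away."""
    area2 = 0.0
    for i, (x0, y0) in enumerate(screen_points):
        x1, y1 = screen_points[(i + 1) % len(screen_points)]
        area2 += x0 * y1 - x1 * y0
    return area2 > 0.0

ccw = [(0, 0), (1, 0), (1, 1)]
print(winding_is_counterclockwise(ccw))        # True: faces the camera
print(winding_is_counterclockwise(ccw[::-1]))  # False: faces away
```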
Subdividing

You can subdivide polygon meshes to add more detail where needed.

Subdividing Polygons and Edges Evenly

You can subdivide polygons and edges evenly using Modify > Poly. Mesh > Subdivide Polygons/Edges from the Model toolbar. Select specific polygons or edges first, or just select a polygon mesh object to subdivide all polygons.

For polygons, you can choose different subdivision types: Plus, Diamond, X, and Triangles.

For edges, you can connect the new points and extend the subdivision to a loop of parallel edges (that is, the opposite edges of quad polygons). The illustrations show Parallel Edge Loop and Connect both off, Connect on, Parallel Edge Loop on, and both on.

Subdividing Polygons with Smoothing

You can subdivide and smooth selected polygons using Modify > Poly. Mesh > Local Subdivision from the Model toolbar.

Splitting Edges

You can split edges interactively using Modify > Poly. Mesh > Split Edge Tool from the Model toolbar. Activate this tool then click an edge to split it. Use the middle mouse button to split parallel edges. Press Ctrl while clicking to bisect edges evenly.

Related tools:

• Add Vertex Tool
• Split Polygon Tool
• Split Edges (with split control)
• Dice Polygons
• Slice Polygons
Drawing Edges
Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar to split or cut polygons interactively by drawing new edges. You can use this tool to redraw your object’s flow lines freehand.

Middle-click to continue drawing edges from the previous point.
Extruding Components

You can extrude polygon mesh components to create local details, such as indentations or protuberances like limbs and tentacles. You can extrude polygons, edges, or points.

2. Use the transform tools or the Tweak Component tool to translate, rotate, and scale the extruded components as desired.

If you want to adjust other properties, open the Extrude Op property editor in the stack.

Extruding with Options

To display additional options when extruding, select one or more components and press Ctrl+Shift+d or choose Modify > Polygon Mesh > Extrude Along Axis. This lets you control whether adjacent components are extruded separately or together, as well as specify the subdivisions, inset, transformations, and other values.

Duplicating Polygons

Duplicating is similar to extruding, but the polygons are not connected to the original geometry. This is useful for building repeating forms like steps or railings. Choose Modify > Polygon Mesh > Duplicate, or check Duplicate Polygons in the Extrude Op property editor.
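At its simplest, extruding a polygon offsets a copy of its vertices along the face normal; a minimal generic sketch in plain Python (the side polygons connecting the old and new rings are left out for brevity):

```python
def extrude_polygon(vertices, normal, distance):
    """Sketch of extruding one polygon along its normal: the original
    vertices stay put and a translated copy forms the new cap."""
    nx, ny, nz = normal
    cap = [(x + nx * distance, y + ny * distance, z + nz * distance)
           for x, y, z in vertices]
    return cap

quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(extrude_polygon(quad, (0, 0, 1), 2.0))
# [(0, 0, 2.0), (1, 0, 2.0), (1, 1, 2.0), (0, 1, 2.0)]
```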
Removing Polygon Mesh Components
Selected polygons
will be dissolved.
• With Blend, nearby boundaries on different objects are joined by new polygons.

The illustrations show the original objects, the blended object (near boundaries joined), and the merged object (near boundaries merged).

You can also combine meshes using the Boolean commands on the Create > Poly. Mesh and Modify > Poly. Mesh menus.
Symmetrizing Polygons
You can model one half of a polygon mesh object and then symmetrize it. This creates new polygons that mirror the geometry on the original side.

1. Model the polygons on one side of the object. In the example below, an ornamental curlicue was added to the hilt of the dagger.

2. Prepare the other side of the object for symmetrization. For example, if you intend to merge the symmetrized portions by welding or bridging, then you may need to create holes for the new polygons to fit and add vertices to aid the merge.

3. Select the polygons to be symmetrized. You can symmetrize the whole object or just a portion.

4. Choose Modify > Poly. Mesh > Symmetrize Polygons from the Model toolbar.

5. In the Symmetrize Polygon Op property editor, set the parameters as desired, for example, to specify the plane of symmetry.

The finished dagger.
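Mirroring geometry across a plane of symmetry is just a sign flip on one coordinate; a minimal sketch assuming the YZ plane (x = 0) as the plane of symmetry:

```python
def symmetrize_x(points):
    """Mirror points across the YZ plane, a sketch of what symmetrizing
    does geometrically when the plane of symmetry is YZ."""
    return [(-x, y, z) for x, y, z in points]

half = [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5)]
print(symmetrize_x(half))
# [(-1.0, 0.0, 0.0), (-2.0, 1.0, 0.5)]
```

The mirrored points would then be joined to the original mesh, which is why step 2 above prepares holes and vertices for the merge.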
Cleaning Up Meshes

You can filter polygon mesh objects to clean them up. Filtering removes components that match certain criteria, for example, small components that represent insignificant detail.

Filtering Edges

Modify > Poly. Mesh > Filter Edges on the Model toolbar removes edges by collapsing them based on either their length or angle. In both cases, you can protect boundary edges using Keep Border Edges Intact.

Edge filtering is especially useful for reducing the triangulation on polygon meshes generated by Boolean operations.

Filtering Points

Modify > Poly. Mesh > Filter Points on the Model toolbar welds together vertices that are within a specified distance of each other. Among other things, this can be very useful for fixing disconnected polygons in “exploded” meshes, which can occur when meshes are exported from some other programs.

• Average position welds each clump of points in the selection together at their average position.

• Selected point welds each clump of points in the selection together at the position of the point that is nearest to the average position.

• Unselected point welds each selected point to an unselected point on the same object.

Filtering Polygons

Modify > Poly. Mesh > Filter Polygons removes polygons based on their area or their dihedral angles:

• When you filter polygons by angle, adjacent polygons are merged together if their dihedral angle is less than the threshold you specify. Small angles correspond to flat areas, so this method preserves sharp detail.

• When you filter polygons by area, the smallest polygons are removed. This eliminates small, “noisy” details.

Reducing Polygons

The Modify > Poly. Mesh > Polygon Reduction command on the Model toolbar lightens a heavy object by reducing the number of polygons, while still retaining a useful fidelity to the shape of the original high-resolution version. For example, you can use polygon reduction to meet maximum polygon counts for game content, or to reduce file size and rendering times by simplifying background objects.

Polygon reduction also allows you to generate several versions of an object at different levels of detail (LODs).

Polygon reduction works by collapsing edges into points. Edges are chosen according to their “energy”, which is a metric based on their length, orientation, and other criteria. In addition, you have options to control the extent to which certain features, such as quad polygons, are preserved by the process.
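The Average position mode of Filter Points can be sketched as a greedy clustering pass in plain Python; the real operator's clustering strategy may differ, so treat this only as an illustration of the idea:

```python
def filter_points(points, distance):
    """Weld points that lie within `distance` of a seed point into one
    vertex at their average position (greedy, first-come clustering)."""
    remaining = list(points)
    welded = []
    while remaining:
        seed = remaining.pop(0)
        clump = [seed]
        clump += [p for p in remaining
                  if sum((a - b) ** 2 for a, b in zip(seed, p)) <= distance ** 2]
        remaining = [p for p in remaining if p not in clump]
        n = len(clump)
        welded.append(tuple(sum(c[i] for c in clump) / n for i in range(3)))
    return welded

pts = [(0, 0, 0), (0.01, 0, 0), (5, 0, 0)]
print(filter_points(pts, 0.1))
# the two near-coincident points weld to one vertex; (5, 0, 0) survives
```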
Polygon Normals
Shading normals are vectors that are perpendicular to the surface of polygons at each corner. They control how polygon meshes are shaded. If the normals are averaged across an edge or corner, the shading is smooth. If they are not averaged, the shading is faceted and the edge is considered “hard”.

To display normals on selected objects, click on a view’s Show menu (eye icon) and choose Normals.

Controlling User Normals

Instead of relying on the automatically generated normals, you can specify custom normals to use for shading. These custom normals are called user normals, or explicit normals in some other programs including 3ds Max. User normals allow you to create things like a box with rounded corners using a minimum number of polygons.
Subdivision Surfaces
Subdivision surfaces (sometimes called “subdees”) allow you to create smooth, high-resolution polygon meshes from lower-resolution ones. They provide the smoothness of NURBS surfaces with the local detail and texturing capabilities of polygon meshes.

Applying Geometry Approximation

You can turn a polygon mesh object into a subdivision surface by pressing + and – on the numeric keypad. This applies a local Geometry Approximation property if there isn’t already one, and sets the subdivision level for render and display. The higher the subdivision level, the smoother the object.

The original geometry forms a hull that is used to control the shape of the smoothed, “proxy” geometry. You can toggle the display of the hull and the subdivision surface on the Show menu (eye icon).

Subdivision Rules

Softimage gives you a choice of several subdivision rules (smoothing algorithms): Catmull-Clark, XSI-Doo-Sabin, and linear. In addition, you have the option of using Loop for triangles when using Catmull-Clark or linear. The subdivision rule is set in the Polygon Mesh property editor.

Catmull-Clark

The Catmull-Clark subdivision algorithm produces rounder shapes. The generated polygons are all quadrilateral.

XSI-Doo-Sabin
Loop Subdivision

With the Catmull-Clark and linear subdivision methods, you have the option of using Loop subdivision for triangles. The Loop method subdivides triangles into smaller triangles instead of into quads, which gives better results when smoothing and shading.
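The linear rule is the easiest of these to illustrate: each quad gains four edge midpoints and a face centre, and splits into four smaller quads. A minimal single-quad sketch in plain Python (Catmull-Clark inserts points at the same connectivity but smooths their positions):

```python
def subdivide_quad_linear(quad):
    """One level of linear subdivision of a single quad (a, b, c, d):
    insert edge midpoints and the face centre, return four sub-quads."""
    def mid(p, q):
        return tuple((x + y) / 2.0 for x, y in zip(p, q))
    a, b, c, d = quad
    centre = tuple(sum(v[i] for v in quad) / 4.0 for i in range(len(a)))
    ab, bc, cd, da = mid(a, b), mid(b, c), mid(c, d), mid(d, a)
    return [(a, ab, centre, da), (ab, b, bc, centre),
            (centre, bc, c, cd), (da, centre, cd, d)]

unit = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(len(subdivide_quad_linear(unit)))  # 4 quads per input quad
```

Applied repeatedly, each level multiplies the quad count by four, which is why higher subdivision levels smooth the surface at the cost of heavier geometry.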
Section 8
NURBS Surface
Modeling
NURBS surfaces are one of the basic types of
renderable geometry in Softimage. They are
rectangular patches that allow for very smooth shapes
with relatively few control points. Surfaces can model
precise shapes using less geometry than polygon
meshes and they’re ideal for smooth, manufactured
objects like car and aeroplane bodies.
About Surfaces
In Softimage, surfaces are NURBS patches. Mathematically, they are an interconnected patchwork of smaller surfaces defined by intersecting NURBS curves.

Components of Surfaces

You can display surface components and attributes in the 3D views, as well as select them for various tasks.

• Points are the control points of the curves that define the surface. Their positions define the shape of the surface. You can display lines between points.

• NURBS hulls are display lines that join consecutive control points. It can be useful to display them when working with curves and surfaces.

• Surface knots are the knots of the curves that define the surface; they lie on the surface where the U and V curve segments meet.

• Knot curves (sometimes called isoparams or isoparms) are sets of connected knots along U or V—they are the “wires” shown in wireframe views. You can select knot curves and use them, for example, to build other surfaces using the Loft operator.

• Isolines are not true components. They are, in fact, arbitrary lines of constant U or V on a surface. You can use the U and V Isoline selection filter to help you pick isolines for lofting and other operations.
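The idea of an isoline can be illustrated on the simplest possible patch, a bilinear one: holding u constant while v varies traces a line of constant U. This is a plain Python sketch, not the NURBS evaluation Softimage actually performs:

```python
def bilinear_patch(corners, u, v):
    """Evaluate a point on a bilinear patch at parameters (u, v).
    `corners` is (p00, p10, p01, p11)."""
    p00, p10, p01, p11 = corners
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))
    return lerp(lerp(p00, p10, u), lerp(p01, p11, u), v)

corners = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 1)]
# Sample the isoline u = 0.5 at five values of v:
isoline = [bilinear_patch(corners, 0.5, v / 4.0) for v in range(5)]
print(isoline[0], isoline[-1])  # endpoints of the constant-U line
```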
Building Surfaces
The commands on the Create > Surf. Mesh menu can be used to build NURBS surfaces in a variety of ways. The first set of commands generate surfaces from curves—see Objects from Curves on page 90 for an overview of the basic procedure. Here are a few examples of some of the other ways you can build surfaces.

Blending Surfaces

Blending creates a new surface that fills the gap between the selected boundaries on two other surfaces.

Merging Surfaces

Merging two surfaces creates a third surface that spans the originals. You have the option of also selecting an intermediary curve for the merged surface to pass through.

Filleting Intersections

A fillet is a surface that smooths the intersection of two others, like a molding between a wall and a ceiling.
Modifying Surfaces
You can modify surfaces in a variety of ways using the commands in Opening and Closing Surfaces
the Modify > Surface menu of the Model toolbar, for instance, by
adding and removing knot curves. Here are a few examples of some You can open a closed surface and close an open surface. A surface can
other ways of modifying surfaces. be open in both U and V like a grid, closed in both like a torus, or open
in one and closed in the other like a tube.
Inverting Normals
If the normals of a surface are pointing in the wrong direction, you can
invert them.
Open Closed
Inverting a surface
Extending Surfaces
You can extend a surface from the selected boundary to a curve.
Projecting and Trimming with Curves
• Use Is Boundary to choose whether to trim the inside or the outside.

• Use Projection Precision to control the precision used to calculate the projection. If the shape of the projected curve is not accurate, increase this value. However, high values take longer to calculate and may slow down your computer. For best performance, set this parameter to the lowest value that gives good results.

Deleting Trims

Deleting a trim allows you to remove a trim operation even after you have frozen the surface’s operator stack. Set the selection filter to Trim Curve, select one or more trim curves on the surface, and choose Modify > Surface > Delete Trim from the Model toolbar.

Surface Meshes

Surface meshes provide a way to assemble multiple surfaces into a single object that remains seamless under animation and deformation.

1. Create a collection of separate surfaces. These will become the surface mesh’s subsurfaces.

Line the surfaces up into a basic configuration. This illustration shows a common configuration for a leg or arm. Snap opposite boundaries together to connect the surfaces across the junction.
3. Select all the surfaces and choose Create > Surf. Mesh > Assemble. The surfaces are assembled into a single surface mesh. The continuity manager ensures that the continuity is preserved at the seams.

4. You can now deform and animate the surface mesh as desired.

Excluding Points from Continuity Management

All assembled surface meshes have a special cluster called NonFixingPointsCluster. If a point on a subsurface boundary is in this cluster, its continuity is not managed by SCM when Don’t Fix the Tagged Points is on. The other points on the same junction are not affected. This lets you create holes in the surface mesh for mouths, eyes, and so on.
Section 9
Animation
To animate means to make things come alive, and
life is always signified by change: growth,
movement, dynamism. In Softimage, everything can
be animated, and animation is the process of
changing things over time. For example, you can
make a cat leap on a chair, a camera pan across a
scene, a chameleon change color, or a face change
shape.
Bringing It to Life
The animation tools in Softimage let you create animation quickly so that you can spend your time editing movements, changing the timing, and trying out different techniques for perfecting the job. Softimage gives you the control and quick feedback you need to produce great animation. Basically, if you want to make something move, Softimage has the tools.

What Can You Animate in Softimage?

You can animate every scene element and most of their parameters—in effect, if a parameter exists on a property page, it can probably be animated.

• Motion: Probably the most common form of animation, this involves transforming an object by either moving (translating), rotating, or scaling (resizing) it. Special character tools let you easily animate humans, animals, and all manner of fantastical creatures. You can also use dynamic simulations to create movement according to the physical forces of nature.

• Geometry: You can animate an object’s geometry by changing values such as U and V subdivision, radius, length, or scale. You can also use numerous deformation tools and skeletons to bend, twist, and contort your object.

• Appearance: Material, textures, visibility, lighting, and transparency are just some of the parameters controlling appearance that can be changed over time.
You store animation or shapes in sources, then use the animation
mixer to edit, mix, and reuse those sources as clips.

To use these levels together, you can animate at a low level by
keyframing a specific parameter, then store that animation and others
in action sources and mix them together in the animation mixer to
animate at a high level. This lets you easily manage complex
animation yet retain the ability to work at the most granular level.

So Many Choices ...

Softimage provides you with many choices of tools and techniques for
animating: explore and decide which tool lets you animate in the most
effective way. In most projects, you will probably use a combination
of these tools together to get the best results.

• The most basic method of animation is keying. You set parameter
  values at specific frames, and then set keys for these values. The
  values for the frames between the keys are calculated by
  interpolation.
• Character animation tools offer you control for creating and
  animating skeletons. You can animate them with forward or inverse
  kinematics, apply mocap data, add an enveloping model, set up a
  rig, and fine-tune the skeleton’s movements in a myriad of ways to
  get just the right motion.

• Dynamic simulations let you create realistic motion with natural
  forces acting on rigid bodies, soft bodies, cloth, hair, and
  particles (done with ICE). With simulations, you can create
  animation that could be difficult or time-consuming to achieve with
  other animation techniques.
Playing the Animation
You can set up the default frame format and frame rate preferences for
your scene using the options in the Output Format preferences
property editor (choose File > Preferences). These settings propagate to
many other parts of Softimage that depend on timing. Regardless of
whether you enter time code or a frame number as the frame format,
Softimage internally converts your entry into time code.
Previewing Animation
You can capture and cache images from an animation sequence and play
them back in a flipbook to help you see the animation in real time.
Anything that is shown in the viewport you choose is captured—render
region, rotoscoped scene with background, or any display mode
(wireframe, textured, shaded, etc.). For example, you may want to set
the display mode to Hidden Line Removal for a “pencil test” effect.

You can include audio files to play back with the flipbook, which is
especially useful for lip synching. You can also export flipbooks in
a variety of standard formats, such as AVI and QuickTime.

Creating a Flipbook

1. In the viewport whose images you want to capture, set the display
   options as you like. Then click the camera icon in that viewport
   and choose Start Capture.

2. In the Capture Viewport dialog box, set the options for the
   flipbook’s file name, image size, format, sequence, padding, and
   frame rate.

3. View the flipbook in the Softimage flipbook or in the native media
   player on your computer. You can open the Softimage flipbook by
   choosing Flipbook from the Playback menu.

Ghosting

Animation ghosting, also known as onion-skinning, lets you display a
series of snapshots of animated objects at frames or keyframes behind
and/or ahead of the current frame. This lets you visualize an
object’s motion, helping you improve its timing and flow. You can
display an object’s geometry, points, centers, trails, and velocity
vectors as ghosts.

Ghosting works for any object that moves in 3D space, whether its
transformation parameters (scaling, rotation, and translation) are
animated in any way, its geometry is changed by shape animation or
deformations (including envelopes), or it is simulated as rigid
bodies, soft bodies, or cloth.

Ghosting is set per object by selecting the Ghosting option in the
object’s Visibility property editor. Once this is done, you can set
ghosting per scene layer or per group, in their respective property
editors.

To see ghosting in a 3D view, such as a viewport, choose the
Animation Ghosting command in the Display Mode menu of a 3D view,
then set up the ghost display options in the Camera Display property
editor.
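The ghost display described above is just a set of snapshots at
regular frame offsets around the current frame. As a rough
illustration in plain Python (not Softimage code — the function and
its defaults are invented for this sketch):

```python
def ghost_frames(current, behind=3, ahead=2, step=5):
    """Frames at which ghost snapshots would be drawn: a fixed number
    of steps behind and ahead of the current frame."""
    before = [current - step * i for i in range(behind, 0, -1)]
    after = [current + step * i for i in range(1, ahead + 1)]
    return before + after

print(ghost_frames(50))  # [35, 40, 45, 55, 60]
```

In Softimage itself, the equivalent choices (how many ghosts, at what
spacing) are made in the ghost display options rather than in code.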
Animating with Keys

Keyframing (or “keying”) is the process of animating values over
time. In traditional animation, an animator draws the extreme (or
critical) poses at the appropriate frames (key frames), thus creating
“snapshots” of movement at specific moments.

As in traditional animation, a keyframe in Softimage is also a
“snapshot” of one or more values at a given frame, but unlike
traditional animation, Softimage handles the in-betweening for you,
computing the intermediate values between keyframes by interpolation.

Keys set at frames 1, 50, and 100. Intermediate frames are
interpolated automatically.

You can set keys for just about anything in Softimage that has a
value: this includes an object’s transformation, geometry, colors,
textures, lighting, and visibility.

You can set keys for any animatable parameter in any order and at any
time. When you add a new key, Softimage recalculates the
interpolation between the previous and next keys. If you set a key
for a parameter at a frame that already has a key set for that
parameter, the new key overwrites the old one.

When you set keys on a parameter’s value, a function curve (or
fcurve) is created. An fcurve is a graph that represents the changes
of a parameter’s values over time, as well as how the interpolation
between the keys occurs. When you edit an fcurve, you change the
animation.

Methods of Keying

There are a number of ways in which you can set keys in Softimage,
depending on what type of workflow you’re used to and the tools you
want or need to use for your production. Whichever way you choose,
each method results in keyframes being created.

There are three main keying workflows from which to choose:

• Keyable parameters on the keying panel
• Character key sets
• Marked parameters (and marking sets)

Before you start setting keys, you need to set a preference that
determines the way in which you key: with keyable parameters, with
character key sets, or with marked parameters.

This preference determines which parameters are keyed when you save a
key by pressing K, by clicking the keyframe icon in the Animation
panel, or by choosing the Save Key command from the Animation menu.

To set the preference, click the Save Key preference button in the
Animation panel, then select an option from the menu.
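The in-betweening idea can be sketched outside Softimage. This
hypothetical helper shows only the simplest, linear case — Softimage
fcurves also support other interpolation types:

```python
def interpolate(keys, frame):
    """Linearly interpolate a parameter value at `frame` from a list
    of (frame, value) keys, clamping outside the keyed range."""
    keys = sorted(keys)
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Keys at frames 1, 50, and 100, as in the figure caption:
keys = [(1, 0.0), (50, 10.0), (100, 0.0)]
print(interpolate(keys, 25))  # roughly 4.9 — partway toward the frame-50 key
```

Adding, moving, or overwriting a key only changes the list of
(frame, value) pairs; the intermediate frames are always recomputed
from the keys, which is exactly why editing an fcurve changes the
animation.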
• Click the autokey button to automatically set a key each time you
  change a parameter’s values.

• Choose Animation > Set Keys at Multiple Frames to set keys for the
  parameters’ current values at the multiple frames that you enter.
  This is handy for setting up basic keyframes for pose-to-pose type
  animation.
Animating Transformations
Animating the transformations (scaling, rotation, and translation) of
objects is something that you will be doing frequently. It is one of
the most fundamental things to animate in Softimage.

You can find transformation parameters in the object’s Kinematics
node in the explorer. Kinematics in this case refers to “movement,”
not to inverse or forward kinematics as used in skeleton animation.

Animating Local or Global Transformations

You can animate objects either in terms of their parents (local
animation) or in terms of the scene’s world origin (global
animation).

It’s usually better to animate the local transformations because you
usually animate relative to the object’s parent instead of relative
to the world origin. Animating locally lets you branch-select an
object’s parent and move it while all objects in the hierarchy keep
their positions relative to the parent.

If you animate both the local and the global transformations, the
global animation takes precedence.
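The local-versus-global distinction can be sketched in plain Python
(names are illustrative, not Softimage’s API): a global position is
the parent’s global transform applied to the local position, which is
why moving a parent carries its children along.

```python
def to_global(parent_global_pos, local_pos):
    """Global position = parent's global position + the child's local
    (parent-relative) offset. Rotation and scaling are omitted here to
    keep the idea visible."""
    return tuple(p + l for p, l in zip(parent_global_pos, local_pos))

hip = (0.0, 9.0, 0.0)           # parent's global position
knee_local = (0.0, -4.0, 0.5)   # child animated relative to its parent

print(to_global(hip, knee_local))  # (0.0, 5.0, 0.5)

# Branch-move the hip: the knee's local values are untouched,
# but its global position follows the parent.
hip = (2.0, 9.0, 0.0)
print(to_global(hip, knee_local))  # (2.0, 5.0, 0.5)
```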
translate in Par mode. These are the only two manipulation modes that
transform in the same way as local animation: they are both relative
to the object’s parent.

Of course, you can always set and animate the values directly in the
object’s Local Transform or Global Transform property editor.

Marking Transformation Parameters

When you activate any of the transformation tools, all three of the
corresponding local transformation parameters (X, Y, Z) are
automatically marked.

For example, when you rotate in Local mode, all three rotation axes
are marked automatically, even if only one rotation axis is selected.

Remembering Transformation Tools for an Object

When you’re manipulating or animating an object, you often use the
same transformation tool for it, such as always using the Rotate tool
for bones in a skeleton. You can create a transform setup property
(choose Get > Property > Transform Setup) for an object so that the
same transformation tool is automatically activated when you select
that object.

This is very useful for working quickly with control objects in a
character rig—for example, when you select the head’s effector, the
Translate tool is automatically activated.
Editing Keys and Function Curves
A  The Explorer, Lock, and Update buttons apply only to the animation
   explorer (D).

B  Timeline. Click and drag the red playback cursor in it to “scrub”
   through the animation.

C  Summary tracks display keys for all objects in the scene or all
   objects currently displayed in the dopesheet.

D  Animation explorer displays the parameters of objects that you
   select.

E  Regions (press Q) let you edit multiple keys, including moving
   them, scaling them, copying and pasting them, and deactivating
   animation.

F  The keys represent the keyframes of the selected parameter’s
   animation. Each colored block is one frame long. You can edit
   (move, copy, paste) individual keys on tracks.

G  The tracks display and let you manipulate the animation keys. You
   can expand and collapse tracks to view exactly what you want.
• Copy and paste an fcurve and keys. You can also set paste options
  to control how keys are pasted—whether they replace the selection
  or are added to it.

• Scale fcurves or regions of keys. When you shorten the length, you
  speed up the animation; increasing the length slows it down.
  Scaling vertically changes the values.

• Cycle the fcurves for repetitive motions. You can create basic
  cycles, or you can have relative cycles that are progressively
  offset, such as when creating a walk cycle.
Layering Animation
Animation layering allows you to have one or more levels of animation
on top of the base animation on an object’s parameters at the same
time. You usually want to layer animation when you need to add an
offset to the base animation on an object without changing the
original animation, such as with mocap data. You can only add keys in
the layers, and the existing base animation must be either action
clips or fcurves.

Animation layers are non-destructive, meaning that they don’t alter
your base animation in any way: the keys in the layers always remain
a separate entity. Layering allows you to experiment with different
effects on your animations and build several variations, each in its
own layer.

For example, let’s say that you’ve imported a mocap action clip of a
character running down a flight of stairs. However, in your current
scene, the stairs are shallower than those used for the mocap
session, so the character steps “through” the stairs instead of on
them.

To fix this problem, you create an animation layer, offset the
contact points for the character’s feet so that they step on the
stairs, then set keys. The result is an offset animation that sits on
top of the mocap data: you don’t need to touch the original mocap
clip at all. You can then easily edit the fcurves for the animation
layer, tweaking it as you like.

Animation layers are actually controlled and managed in the animation
mixer, but you don’t need to access the mixer to create and set keys
in layers. You can use the Animation Layers panel (click the KP/L tab
on the main command panel) to do this. However, you may want to use
the animation mixer for added control over each layer, such as for
setting each layer’s weight.
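Conceptually, the evaluated result is the base animation plus the
keyed offsets in each layer, which is why the base clip is never
touched. A plain-Python sketch of that idea (not Softimage internals;
the names and weighting scheme are illustrative):

```python
def evaluate(base, layers, frame, weights=None):
    """Base value at `frame` plus the weighted offset from each layer."""
    weights = weights or [1.0] * len(layers)
    value = base(frame)
    for layer, w in zip(layers, weights):
        value += w * layer(frame)
    return value

mocap_foot_y = lambda f: 1.0    # foot height coming from the mocap clip
offset_layer = lambda f: -0.25  # keyed correction so the foot meets the stair

print(evaluate(mocap_foot_y, [offset_layer], frame=12))  # 0.75
```

Setting a layer’s weight to zero removes its contribution without
deleting its keys — the non-destructive behavior described above.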
Constraints
With almost all types of constraints, you can set offsets using the
controls in their property editors. The offset is set between the
centers of the constrained and constraining objects on any axis.

To set an offset interactively, you can use the CnsComp button
(Constraint Compensation) on the Constrain panel. With compensation,
you can interactively offset the constrained object from the
constraining object and animate it independently while keeping the
constraint.

Blending Constraints

You can blend multiple constraints on an object with each other, as
well as blend constraints with other animation on the constrained
object. You set the Blend Weight parameter’s value in each
constraint’s property editor to blend the weight (or “strength”) of
one constraint against the others. And, of course, you can animate
the blending to have it change over time.

Blending is done in the order in which you applied the constraints,
from the first-applied constraint to the last. Each constraint takes
the previous result and gives a new one based on the value you set.
For example, if you have three position constraints on an object, you
can have the object placed exactly in the center of them.

In the example on the right, the cone has three blended position
constraints to keep it positioned in the middle of the triangle
formed by objects A, B, and C.
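The sequential blending described above can be sketched in plain
Python. Whether Softimage uses exactly this mixing formula is an
assumption; the sketch only mirrors the described behavior of each
constraint blending its target against the previous result:

```python
def blend_constraints(start, constraints):
    """Apply position constraints in order; each blend weight mixes the
    constraining position against the previous result."""
    pos = start
    for target, blend_weight in constraints:
        pos = tuple(p + blend_weight * (t - p) for p, t in zip(pos, target))
    return pos

A, B, C = (0.0, 0.0, 0.0), (6.0, 0.0, 0.0), (3.0, 6.0, 0.0)

# Weights 1, 1/2, 1/3 applied in order leave the object at the
# centroid of the triangle — "exactly in the center" of A, B, and C.
cone = blend_constraints((0.0, 0.0, 0.0),
                         [(A, 1.0), (B, 0.5), (C, 1.0 / 3.0)])
print(cone)  # approximately (3.0, 2.0, 0.0)
```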
Path Animation
A path provides a route in global space for an object to follow in
order to get from one point to another. The object stays on the path
because its center is constrained to the curve for the duration of
the animation.

You can create path animation in Softimage using a number of methods,
each one having its own advantages:

• The quickest and easiest way of animating an object along a path is
  to use the Create > Path > Set Path command and pick the curve to
  be used as the path. There’s no need to set keyframes—just set the
  start and end frames. The object is automatically constrained to
  the path and animated along a percentage of the curve’s length.

• Constrain an object to a curve using the Curve (Path) constraint
  and manually set keys for the percentage of the path traveled.

• Choose the Create > Path > Set Trajectory command and pick a curve
  to use its knots as indicators of the object’s position at each
  frame.

After you’ve created path animation, you can modify the animation by
changing the timing of the object on the path (choose the Create >
Path > Path Retime command), or by moving, adding, or removing points
on the path curve as you would to edit any curve.

For example, using the Path Retime command, you can shorten (and
therefore speed up) a path animation that went from frame 1 to 100 to
frames 20 to 70. You can even reverse the animation—for example,
enter 100 as the start and 1 as the end frame.
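The percentage-of-path idea, including retiming and reversal, reduces
to a simple frame remapping. An illustrative sketch in plain Python
(not Softimage code), assuming a linear timing:

```python
def path_percentage(frame, start, end):
    """Percentage of the path traveled at `frame`, for a linear timing
    over the start/end frame range. If end < start, the motion along
    the path is reversed."""
    return 100.0 * (frame - start) / (end - start)

print(path_percentage(50.5, 1, 100))  # 50.0 — halfway along the curve
print(path_percentage(70, 20, 70))    # 100.0 — retimed to frames 20-70
print(path_percentage(1, 100, 1))     # 100.0 — reversed: start 100, end 1
```

Keying the percentage yourself (the Curve constraint method above)
simply replaces this linear mapping with whatever fcurve you author.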
Linking Parameters
When you create linked parameters, also known as driven keys, you
create a relationship in which one parameter depends on the animation
state of another. In Softimage, you can create simple one-to-one
links with one parameter controlling another, or you can have
multiple parameters controlling one parameter:

• Drive a single parameter with the combined animation values of
  multiple parameters. This allows you to create more complex
  relationships, where many parameter values are interpolated to
  create an output value for one parameter.

• Drive a single parameter with the whole orientation of an object.

After you link parameters, you set the values that you want the
parameters to have, relative to a certain condition (when A does
this, B does this).

A Venus flytrap eyes its victim: its jaw’s rotation Z parameter is
linked to the position X parameter of the fly that is animated along
a path.

Overview of Linking Parameters

To open the Parameter Connection Editor, choose View > Animation >
Parameter Connection Editor. Then follow these steps:
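Independently of the editor workflow, the underlying driven-key
relationship is a lookup that interpolates between the conditions you
set. A hypothetical sketch using the flytrap example (the values and
names are invented for illustration):

```python
def driven_value(driver, conditions):
    """Interpolate the driven value from (driver_value, driven_value)
    condition pairs — "when A does this, B does this"."""
    conditions = sorted(conditions)
    if driver <= conditions[0][0]:
        return conditions[0][1]
    if driver >= conditions[-1][0]:
        return conditions[-1][1]
    for (d0, v0), (d1, v1) in zip(conditions, conditions[1:]):
        if d0 <= driver <= d1:
            return v0 + (driver - d0) / (d1 - d0) * (v1 - v0)

# Jaw closes as the fly's X position approaches: fully open (45) at
# posx 0, fully closed (0) at posx 10, interpolated in between.
jaw_rotz = driven_value(driver=5.0, conditions=[(0.0, 45.0), (10.0, 0.0)])
print(jaw_rotz)  # 22.5
```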
Expressions
Expressions are mathematical formulas that you can use to control any
parameter that can be animated, such as translation, rotation,
scaling, materials, colors, or textures. Expressions are useful for
creating regular or mechanical movements, such as oscillations or
rotating wheels. As well, they allow you to create almost any
connection you like between any parameters, from simple “A = B”
relationships to very complex ones using predefined variables,
standard math functions, random number generators, and more.

However you use expressions, you will find that they are very
powerful because they allow you to animate precisely, right down to
the parameter level. Once you’re more experienced with them, you can
create all sorts of custom setups, like character rigs and animation
control systems.

Overview of writing an expression

1. Select an object and open the expression editor by pressing
   Ctrl+9.

2. Select the target, which is the parameter controlled by the
   expression. The Current Value box below it shows the value of the
   expression at the current frame.

3. Enter the expression in the expression pane by typing directly or
   by choosing items from the Function, Object, and Param menus.

   You can also enter parameter names by typing their script names
   and then pressing F12. This prompts you with a list of possible
   parameters in context.

   You can copy, cut, and paste in the expression pane using standard
   keyboard shortcuts (Ctrl+C, Ctrl+X, and Ctrl+V, respectively).

4. The message pane updates as you work, letting you know whether the
   expression is valid or not.

5. Click the Validate and Apply buttons to validate and then apply
   the expression.

For a complete description and syntax of all the functions and
constants available, refer to the Expression Function Reference
(choose Help > User’s Guides).
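As a concrete example of a “regular or mechanical movement,” an
oscillation expression — a sine of an angle that grows with the frame
number — can be evaluated outside Softimage like this (the exact
variables available in the expression pane, such as the current-frame
variable, are documented in the Expression Function Reference):

```python
import math

def oscillation(frame, amplitude=2.0, degrees_per_frame=18.0):
    """A regular mechanical movement: amplitude * sin of an angle that
    grows linearly with the frame number."""
    return amplitude * math.sin(math.radians(frame * degrees_per_frame))

print(oscillation(5))   # 2.0 — peak of the wave (5 * 18 = 90 degrees)
print(oscillation(10))  # back through zero (180 degrees)
```

Typed as an expression on, say, a Y-position parameter, the same
formula would make the object bob up and down without a single key.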
Copying Animation
There are different levels at which you can copy animation in
Softimage: between parameters, between objects, or between models.
Here are some of the main ways to do this.

• You can copy animation between any parameters in the explorer or a
  property editor in a number of ways:
Scaling and Offsetting Animation
Section 10
Character Animation
Character animation is all about bringing your
characters to life, whether it’s some guy dancing in a
club, a dog catching a frisbee, or a simple bouncing
ball with personality to spare.
Even though you’re working in a virtual
environment, your job is to make these characters
seem believable in their movements and expression.
In Softimage, you’ll find everything you need to
make any type of character come alive.
Character Animation in a Nutshell
Getting Started with Ready-Made Characters

Looking for a quick way to get started with characters in Softimage?
Check out the ready-made models in the Get > Primitive > Model and
Get > Primitive > Character menus. Here are just a few of the
characters you’ll meet on these menus:

All predefined skeletons, bodies, characters, and rigs are
implemented as models. As well, most of the bipeds share the same
basic hierarchy structure, which you can see in the explorer, making
it easy to share animation later, especially if you’re using actions
in the animation mixer.

Making Custom Characters and Faces

The Character Designer (choose Get > Primitive > Character > Man
Maker) loads a generic male body; you then use sliders in a property
editor to interactively manipulate individual body and head features.
You can create many bodies, each with its own distinctive look, yet
have all bodies share the same underlying topology.

The Face Maker (choose Get > Primitive > Character > Face Maker)
loads a predefined low-resolution polygon mesh head (male or female).
This lets you create any number of different faces with the same
topology, allowing you to easily copy shape animation keys between
them. Perfect for testing out some shape animation!
Face Maker
Setting Up Your Character
Tools for Easy Viewing and Selecting

When you’re animating a skeleton, you may want to work with a
low-resolution version of the envelope on the skeleton. This helps
you get a sense of how the animation will work with the final
envelope. However, working with enveloped skeletons can make it
difficult to view or select chain elements. To help you with this,
Softimage has several viewing and selection options, with the most
common ones shown here.

X-ray shading lets you see and select the underlying chains while
still seeing the shaded surface of the envelope.

You can display the chains in screen (bones inside) or overlay (bones
on top) modes.

You can set up a character synoptic view for other members of your
team, allowing them to use your character easily. Synoptic views
allow you and others to quickly access commands and data related to a
specific object or model. They consist of a simple HTML image map
stored as a separate file outside of the Softimage scene file. The
HTML file is then linked to a scene element.

Clicking on a hot spot in the image either opens another synoptic
view or runs a linked script. You can include all sorts of
information about the character, set up hot spots for selecting body
parts, setting keys on different elements, running a script, and so
on.

Synoptic views: click on a hot spot on the synoptic image to run the
script that is linked to that image.
Building Skeletons for Characters
Anatomy of a skeleton

The root is a null that is the starting point of the chain. It is the
parent of all other elements in the chain. Because the first joint is
local to the root, the root’s position and rotation determine the
position and rotation of the rest of the chain.

The bones are connected by joints. A bone always rotates about its
joint, which is at its top; the first bone rotates around the root.
The first bone in the chain is a child of the root, and all other
bones are children of their preceding bones. Keying the rotation of
bones is how you animate with forward kinematics (FK).

A joint is the connection between elements in a chain: between bones
in the chain, between the root and the first bone, and between the
last bone and the effector. By default, joints are not shown, but you
can easily display them.

The effector is a null that is the last part of a chain. Moving the
effector invokes inverse kinematics (IK), which modifies the angles
of all the joints in that chain. When you create a chain, the
effector is a child of the root, not the preceding bone.

• In a 2D chain, the joints act as hinges, restricting movement so
  that it’s easier to create typical limb actions, such as bending an
  arm or leg. Only its first joint at the root acts as a ball joint,
  allowing a free range of movement: when using IK, the rest of the
  2D chain’s joints rotate only on the root’s Z axis, like hinges. Of
  course, you can rotate the joints of a 2D chain in any direction
  with FK, but this is overridden as soon as you invoke IK.

• In a 3D chain, the joints can move any which way they like. All of
  its joints are like ball joints that can rotate freely on any axis,
  allowing you to animate wiggly objects like a tail or seaweed.
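Keying bone rotations (FK) amounts to accumulating each joint’s hinge
angle down the chain to place the bones. An illustrative plain-Python
sketch of a 2D chain in a plane — not Softimage code:

```python
import math

def fk_positions(root, bone_lengths, joint_angles_deg):
    """Positions of each joint/effector in a plane, given per-joint
    hinge angles: each joint rotates relative to its parent bone."""
    x, y = root
    heading = 0.0
    positions = []
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)  # accumulate down the chain
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((round(x, 6), round(y, 6)))
    return positions

# A two-bone "arm": bend 90 degrees at the root, then -90 at the elbow.
print(fk_positions((0.0, 0.0), [3.0, 2.0], [90.0, -90.0]))
# [(0.0, 3.0), (2.0, 3.0)]
```

IK is the inverse problem — given a desired effector position, solve
for these joint angles — which is what moving the effector triggers.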
Creating Skeletons

Drawing chains is pretty simple in Softimage: you choose the Create >
Skeleton > Draw 2D Chain or Draw 3D Chain command on the Animate
toolbar and click where you want the root, joints, and effector to
be.

Here are some tips to help you draw chains:

• Draw the chains in relation to the default pose of the envelope
  that you’re planning to use. This means you don’t have to spend as
  much time adjusting each bone’s size and position later.

• Draw the chain with at least a slight bend to determine its
  direction of movement when using IK. Drawing bones in a straight
  line can result in unpredictable bending.

• If you want two chains to be mirrored, such as a character’s arms
  or legs, you can draw one and have the other one created at the
  same time. Just activate symmetry (Sym) mode and then draw a chain.

Tip: You can try out a joint’s location by keeping the mouse button
held down as you drag: the bone and joint are not created until you
let go of the mouse button. Click again to create another bone and
joint; when you’re ready to finish, right-click to create the
effector and end the chain.

After you have created the chains for a character’s skeleton, you
need to organize them in a hierarchy. Hierarchies are parent-child
relationships that make it easy to animate the skeleton. There are
many different ways in which you can set up a hierarchy, depending on
the skeleton’s structure and the type of movements that the character
needs to make.

Part of a skeleton hierarchy structure shown in the schematic view.
In this case, the spine root is the parent of the leg roots, spine,
and spine effector. These elements are, in turn, parents of the legs,
neck, shoulders, spine, and so on.

To build the hierarchy, select the node you want to be the parent,
click the Parent button, and then pick the elements that will be its
children. Right-click to end the parenting mode.
Enveloping
An envelope is an object that deforms automatically, based on the
pose of its skeleton or other deformers. In this way, for example, a
character moves as you animate its skeleton. The process of setting
up an envelope is sometimes called skinning or boning.

Every point in an envelope is assigned to one or more deformers. For
each point, weights control the relative influence of its deformers.
Each point on an envelope has a total weight of 100, which is divided
between the deformers to which it is assigned. For example, if a
point is weighted by 75 to the femur and 25 to the tibia, then the
femur pulls on the point three times more strongly than the tibia.

Setting Envelopes

1. Make sure the envelope and deformers are in the reference pose
   (sometimes called a bind pose). The reference pose determines how
   points are initially assigned and weighted. It’s best to choose a
   reference pose that makes it easy to see and control how points
   will be assigned.

2. Select the objects, hierarchies, or clusters to become envelopes.

3. Choose Deform > Envelope > Set Envelope from the Animate toolbar.

   If the current construction mode is not Animation, you are
   prompted to apply the envelope operator in the animation region of
   the operator stack anyway. In most cases, this is probably what
   you want.

4. Pick the objects that will act as deformers. You are not
   restricted to skeleton bones; you can pick any object. Left-click
   to pick individual objects and middle-click to pick branches. You
   can also pick groups in the explorer—this is equivalent to picking
   every object in the group individually. If you make a mistake,
   Ctrl+click to undo the last pick.

5. When you have finished picking deformers, right-click to terminate
   the picking session. Each deformer is assigned a color, and points
   that are weighted 50% or more toward a particular deformer are
   displayed in the same color.

   Use the Automatic Envelope Assignment property editor to adjust
   the basic settings.

6. Move the deformers to see how the envelope deforms. If necessary,
   you can now change the deformers to which points are assigned, as
   well as modify the envelope weights using the methods described in
   the next few sections.

If you ever need to reopen the Automatic Envelope Assignment property
editor, you can find it in the envelope weight stack in an explorer.
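The weighting rule described above — a total weight of 100 split
across deformers, acting as each deformer’s relative pull — can be
sketched in plain Python (illustrative only, not how Softimage
implements enveloping):

```python
def deform_point(rest, weights, deformer_offsets):
    """Blend deformer translations by their normalized weights.
    Weights for a point conventionally total 100."""
    total = sum(weights.values())
    moved = list(rest)
    for name, weight in weights.items():
        for axis, offset in enumerate(deformer_offsets[name]):
            moved[axis] += (weight / total) * offset
    return tuple(moved)

weights = {"femur": 75, "tibia": 25}  # femur pulls 3x more strongly
offsets = {"femur": (4.0, 0.0, 0.0),  # how far each deformer moved
           "tibia": (0.0, 4.0, 0.0)}

print(deform_point((0.0, 0.0, 0.0), weights, offsets))  # (3.0, 1.0, 0.0)
```

The point moves 3 units toward the femur’s motion but only 1 toward
the tibia’s — the 75/25 split from the example.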
Painting Envelope Weights

You can use the Paint tool to adjust envelope weights. This lets you
use a brush to apply and remove weights on points in the 3D views.

1. Select an envelope.

2. Activate the Paint tool using the weight paint panel or by
   pressing w.

3. Pick a deformer for which you want to paint weights by selecting
   it in the list in the weight paint panel or by pressing d while
   picking it in a 3D view.

4. If desired, set the paint mode. Most of the time you will be using
   Add (additive), but Smooth, Erase, and Abs (absolute) are also
   sometimes useful.

5. If desired, adjust the brush properties:

   - Use the r key to change the brush radius interactively.
   - Use the e key to change the opacity interactively.
   - Set other options in the Brush Properties editor (Ctrl+w).

6. Click and drag to paint on points on the envelope. In normal
   (additive) paint mode:

   - To add weight, use the left mouse button.
   - To remove weight, either use the right mouse button or press
     Shift+left mouse button.
   - To smooth weight values between deformers, press Alt+left mouse
     button.

7. Repeat steps 3 to 6 for other deformers and points until you are
   satisfied with the weighting.

If your envelope has multiple maps, for example, a weight map in
addition to an envelope weight map, then you may need to select the
envelope weight map explicitly before you can paint on it. A quick
way is to select the enveloped geometry object, then choose Explore >
Property Maps from the Select panel and select the map to paint on.

Reassigning Points to Specific Deformers

You can reassign points to specific deformers. This is useful in case
the automatic assignment did not assign the points to the desired
bones.

1. Select points on the envelope.

2. Choose Deform > Envelope > Reassign Locally on the Animate
   toolbar, or click Local Reassign on the weight paint panel.

3. Pick one or more of the original deformers.
The weight editor’s controls include:

• Control the display of points and deformers.
• Limit the number of deformers per point.
• Lock weights.
• Weight assignment options.
• Multiple envelopes: double-click to expand and collapse, or
  right-click for more options.
Adding and Removing Deformers

After you have applied an envelope, you can add and remove deformers. To add deformers, select the envelope, choose Deform > Envelope > Set Envelope from the Animate toolbar, pick the new deformers, and right-click when you have finished. If the envelope weights have been frozen, or if Automatically Reassign Envelope When Adding Deformers is off, no points are weighted to the new deformers, so you must do that manually. Otherwise, the initial weight assignments are recalculated and any modifications you made to them are preserved.

To remove deformers, choose Deform > Envelope > Remove Deformers from the Animate toolbar, pick the deformers to remove, and right-click when you are finished.

Modifying Enveloped Objects

Sometimes, after carefully assigning weights manually, you discover that you need to make a substantial change to the enveloped object, such as adding points. Luckily, you do not need to redo all your weighting: you can add and move points after enveloping.

When you add a point to an enveloped object, it is automatically weighted based on the surrounding points. It is better to add new points before removing old ones; this means that there is more weight information for the new points. You can assign the new points to specific deformers and modify weights as with any point on the envelope.

If you want to apply a deformation or move points on an enveloped object, make sure to first set the construction mode based on what you want to accomplish. For example:

• If you want to modify the base shape of the envelope, set the construction mode to Modeling.
• If you want to author shape keys on top of the envelope, for example, to create muscle bulges, set the construction mode to Secondary Shape Modeling.

Limiting the Number of Deformers per Point

You can limit the number of deformers to which each point's weight is assigned. This can be especially important for game characters, because some game engines have a limit on the number of deformers.

1. Set the maximum number of deformers on the weight editor's command bar.
   If a point's weight is assigned to more than this number of deformers, its row is shown in yellow in the weight editor. If an envelope has any such points, its row is shown in yellow, too.
2. To try to fix these points automatically, click Enforce Limit. A Limit Envelope Deformers operator is applied, and its property page is opened automatically. By default, the limit is the one you set on the command bar, but you can change it for individual operators.
   If a point has more than the maximum number of deformers, the operator unassigns the deformers with the lowest weights and then normalizes the weight among the remainder. However, it respects locked weights: locked weights are never changed, even if other deformers have greater weight. If there aren't enough unlocked weights to modify, the total weight might not add up to 100%.
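The Enforce Limit behavior can be pictured with a short sketch: drop the lowest-weighted unlocked deformers over the limit, then rescale the remaining unlocked weights so the total approaches 100%. This is an illustrative Python model of the idea, not Softimage's actual operator; the function name and data layout are assumptions.

```python
def enforce_deformer_limit(weights, locked, max_deformers):
    """Limit a point's weights to max_deformers, respecting locked weights.

    weights: dict mapping deformer name -> weight (in percent)
    locked:  set of deformer names whose weights must not change
    """
    # Keep all locked deformers, then the highest unlocked weights
    unlocked = sorted((d for d in weights if d not in locked),
                      key=lambda d: weights[d], reverse=True)
    room = max_deformers - len(set(locked) & set(weights))
    keep = set(locked) | set(unlocked[:max(room, 0)])
    kept = {d: w for d, w in weights.items() if d in keep}
    # Renormalize only the unlocked weights toward a 100% total
    locked_total = sum(w for d, w in kept.items() if d in locked)
    unlocked_total = sum(w for d, w in kept.items() if d not in locked)
    target = 100.0 - locked_total
    if unlocked_total > 0 and target > 0:
        scale = target / unlocked_total
        for d in kept:
            if d not in locked:
                kept[d] *= scale
    return kept
```

Note that, as the manual warns, if the locked weights leave no room to rescale, the total can end up different from 100%.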
Rigging a Character
Control rigs allow for "puppeteering" a character, helping you easily pose and animate it. Once a control rig is set up properly, you can animate more quickly and accurately than without one.

There are a number of tools in Softimage to help you create a rig for your character. You can use them to create control objects and constrain them to the skeleton, and to create shadow rigs and manage the constraints between them and their parent rigs.

You can also use the prefab guides and rigs in Softimage to help you get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The rigs are skeletons that include control objects that you can position and orient to animate the various parts of the character's body.

Shadow Rigs and Exporting Animation

Shadow rigs are simpler rigs that are constrained to the more complex main rig that is used for animating the character. Shadow rigs are usually used for exporting animation, such as to a game or crowd engine or to other 3D software programs.

You can load a basic shadow rig with the Get > Primitive > Model > Biped - Box command. You can also create a shadow rig from a guide with the Character > Hierarchy from Guide command, or generate a shadow rig at the same time that you create a prefab rig.

To transfer the animation from the complex (animated) rig to its shadow rig, you plot the animation while the shadow rig is still constrained to the complex rig. Then you can export the shadow rig or just its animation.
[Figure: the ready-made (prefab) biped rig that comes with Softimage, and an animated shadow rig. Animation is transferred to the shadow rig while it's constrained to the main rig. You can create either a quaternion or regular chain spine and head. Volume indicators help you work with envelopes. Separate controls for the chest, upper body, and hips let you position and rotate each area individually.]
Using Prefab Guides and Rigs

You can use the prefab guides and rigs in Softimage to get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The resulting rigs created from the guides are skeletons that include control objects that you can position and orient to animate the various parts of the character's body.

You can customize these guides and rigs so that they contain only the elements you need. They can be used as a starting point for different rigging styles, and technical directors can write their own proportioning script to attach their own rig to a guide.

The guides have synoptic views to help you select and animate the rig controls: select any control and press F3. There are also preset character key sets and action sources to help you animate the rig.
1. Create a guide by choosing Character > Biped Guide (or quadruped or biped dog-leg) and adjust it to fit your character's envelope. Drag the red cubes to resize the different parts of the body. You can use symmetry to resize the limbs on both sides of the body at the same time.
2. When the guide is fitted to the envelope, create a rig based on it by choosing Character > Rig from Biped Guide. The rig is a skeleton that also includes standard Softimage objects as control objects.
3. Apply the body geometry as an envelope to the rig, using the envelope_group in the rig's model to apply it to the correct parts of the rig.
4. Position and rotate the rig controls and key them to animate the various parts of the skeleton.
Animating Characters with FK and IK
Basic Concepts for Inverse Kinematics

There are two fundamental concepts you should understand when working in IK: the chain's preferred angle and its resolution plane.

When you draw a chain, you usually draw it with a bend to be able to predict its behavior when using IK. This bend is called the chain's preferred angle. When you move the effector, the chain's built-in solver computes a solution that considers these angles and the effector's position.

You can change a joint's preferred angle to get the correct skeleton structure for the animation that you want to create. This solves the IK in a new way, affecting the movement of the whole chain. You can also reset a bone's rotation to the value of its preferred rotation, which resets the chain to its pose when you created it.

With 2D chains, the preferred axis of a chain (the X axis, by default) is perpendicular to the plane in which Softimage tries to keep the chain when moving the effector. This plane is referred to as the general orientation or resolution plane of a chain. It is in the space of this plane that the IK system resolves the joints' rotations when you move the effector.
[Figure: the resolution plane (gray triangle), oriented by a third point (a null constrained by an up-vector constraint).]
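The way a solver resolves joint rotations in the resolution plane can be sketched with standard two-bone trigonometry. This is a generic law-of-cosines solver, purely illustrative and not Softimage's actual algorithm; the function name and angle conventions are assumptions (bone 2's world angle is taken as root_angle - elbow).

```python
import math

def solve_two_bone_ik(root, target, len1, len2):
    """Solve a 2-bone planar IK chain with the law of cosines.

    root, target: (x, y) points in the chain's resolution plane.
    Returns (root_angle, elbow) in radians, or None if the
    target is out of reach. elbow = 0 means a straight chain.
    """
    dx, dy = target[0] - root[0], target[1] - root[1]
    dist = math.hypot(dx, dy)
    # The effector cannot go beyond (or inside) the chain's reach
    if dist > len1 + len2 or dist < abs(len1 - len2):
        return None
    # Bend at the elbow, via the law of cosines
    cos_elbow = (len1**2 + len2**2 - dist**2) / (2 * len1 * len2)
    elbow = math.pi - math.acos(cos_elbow)
    # Rotation at the root: aim at the target, offset by the bend
    cos_root = (len1**2 + dist**2 - len2**2) / (2 * len1 * dist)
    root_angle = math.atan2(dy, dx) + math.acos(cos_root)
    return root_angle, elbow
```

A real solver also uses the preferred angle to pick between the two mirror-image solutions; this sketch always bends one way.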
1. Key the position and rotation of the character's arms, legs, and hips on one side of the body. Key the 5 basic poses at frames 1, 5, 9, etc., or frames 1, 6, 11, depending on your character's stride. The start and end poses must match so that the motion can be properly cycled in the animation mixer.

You can use rotoscoped images of models as a template on which to base the character's poses to be keyed. You'll need to tweak your character's walk afterward to make it look natural and appropriate for the character.

Tip: It helps to make the arms and legs of the left and right sides different colors. Here, the right leg and arm are in black.

3. If the feet slide when they're on the ground, you can fix it by making the fcurve interpolation flat between the pose keys. Open the animation (fcurve) editor, select the keys on the fcurves, and choose Keys > Zero Slope Orientation.

6. Cycle the walk clip in the mixer by dragging one of the clip's lower corners. You can also quicken or slow down the walk pace, blend it with another action, or create a transition to yet another action, such as a run cycle. Use the cid clip effect variable to add a progressive forward offset to a stationary cycle.
Motion Capture
Motion captured animation (usually known as mocap) offers a way to animate a character based on motion that is electronically gathered from a human or animal. This is useful for animating actions that are particularly difficult to do well with keyframing or other methods of animation creation. In Softimage, you can import mocap data and apply it onto rigs, as well as retarget animation from BVH or C3D mocap files to rigs.

Adding Offsets to Mocap Data

It's inevitable: the director took a look at the mocap animation for this character. It looks good, but now he has some comments and wants to make a few changes. This can be problematic when the change affects a key pose or move, because many other moves and poses are usually linked to it.
• Creating action clip effects in the mixer. Clip effects let you adjust the animation in an action clip without affecting the original animation in the action source. Clip effects add values "on top" of a clip, such as noise or offsets.

Working with High-density Fcurves

When you import motion capture data, the fcurves often have many keys, usually one per frame. A high-density fcurve is difficult to edit because if you change even a few keys, you then have to adjust many other keys to retain the overall shape of the curve.

Because editing these fcurves is not always easy, there are tools in the fcurve editor that can help you work with them: the HLE (high-level editing) tool and the curve processing tools (for smoothing, resampling, and fitting curves).

[Figure: The HLE tool in the fcurve editor lets you shape an fcurve in an overall fashion, like lattices shaping an object's geometry. The HLE tool creates a sculpting curve that has few keys (shown here in green), but each one refers to a group of points on the dense fcurve.]

Retargeting Animation with MOTOR

Retargeting allows you to transfer any type of animation between characters, regardless of their size or proportions. Retargeting involves first tagging (identifying) the elements of a rig, then transferring animation from another rig or a mocap data file to the target rig. The animation is retargeted to the new rig as it's transferred. The retargeted animation is "live" on the rig, controlled by the retargeting operators that live on the tagged rig elements. Because of this, you can adjust the animation on the rig at any time so that the motion is exactly as you like. If you want to commit the retargeted animation to fcurves, you can plot it on the rig.

While you can retarget any type of animation between characters, it is especially useful for reusing motion capture data to animate many different characters with the same movements, such as you would for a game. For example, you can reuse a basic run mocap file for many characters and then adjust the animation for each one as you like by adding offsets in different animation layers. Using the retargeting and layering tools in Softimage, you can quickly test out many variations of animation on the characters.

Using the commands in the Tools > MOTOR menu on the Animate toolbar, you can perform all of these tasks:

• Tag rig elements so that animation can be retargeted onto them.
• Retarget any type of animation from one rig to another.
• Retarget animation from BVH or C3D mocap files to a rig.
• Adjust the retargeted animation on the rig, such as by setting position and rotation offsets for the whole rig or just certain elements.
• Save any type of retargeted animation in a normalized motion format (.motor file) so that it can be loaded and retargeted on any tagged rig. This makes it easy to build up libraries of animation that can be used across all your rigs.
• Plot the retargeted animation on a rig into fcurves so that you can keep and edit the animation.

Before you start tagging the character elements or retargeting animation, make sure that the skeleton or rig is in a model. Retargeting can work only within model structures.
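The curve-processing idea of fitting a sparse curve to a dense one can be illustrated with a classic simplification algorithm. The sketch below applies generic Ramer-Douglas-Peucker simplification to (frame, value) keys; it shows the concept of resampling and fitting only, and is not Softimage's actual implementation.

```python
def decimate_keys(keys, tolerance):
    """Reduce a dense list of (frame, value) keys while preserving
    the curve's overall shape (Ramer-Douglas-Peucker)."""
    if len(keys) < 3:
        return list(keys)
    (x0, y0), (x1, y1) = keys[0], keys[-1]
    # Find the interior key farthest from the chord between endpoints
    worst_i, worst_err = 0, 0.0
    for i in range(1, len(keys) - 1):
        x, y = keys[i]
        chord_y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        err = abs(y - chord_y)
        if err > worst_err:
            worst_i, worst_err = i, err
    if worst_err <= tolerance:
        return [keys[0], keys[-1]]      # the chord is close enough
    # Otherwise split at the worst key and simplify each half
    left = decimate_keys(keys[:worst_i + 1], tolerance)
    right = decimate_keys(keys[worst_i:], tolerance)
    return left[:-1] + right
```

A larger tolerance removes more keys; mocap spikes that exceed the tolerance survive, which is exactly the "retain the overall shape" goal described above.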
Select a rig and choose the Tools > MOTOR > Tag Rig command to tag its elements. Once you have tagged a rig, you can use it for retargeting with another rig or with mocap data.

Select the source rig, then press Ctrl and select the target rig. Then choose the Tools > MOTOR > Rig to Rig command to retarget the animation from the source to the target rig. If you want to save the animation on the target rig, you must plot (bake) it into fcurves.

Choose the Tools > MOTOR > Mocap to Rig command to load either a C3D or Biovision file and apply it to a rig in Softimage. You can then save the mocap animation on the rig in a .motor file so that you can apply it to any tagged rig of the same structure.

[Figure: a Biovision rig and a C3D rig.]
Making Faces with Face Robot

[Figure: the Face Robot interface.]

A. The main menu bar contains all standard menu commands. This is the same as in the main Softimage interface.
B. The Face Robot panel gives you access to all six Face Robot stages for completing your facial animation.
C. Click this button to hide/display the Face Robot panel and enlarge the viewport.
D. Click this button to display/hide the Softimage main command panel (MCP).
E. Click this button to display/hide the standard Softimage tool bars.
Section 11
Shape Animation
Shape animation is the process of deforming an
object over time. You take “snapshots” called shape
keys of the object in different poses, then you blend
these poses over time to animate them.
Softimage offers a number of tools with which you
can create shape animation, allowing you to choose
the method that works for you.
Things are Shaping Up
Always store shape keys using the same cluster of points. When you deform an object but store a shape key only for a cluster of points on that object, the deformed points that don't belong to that cluster snap back to their original position when you change frames.

To make it easier to use the same cluster, give the cluster a descriptive name as soon as you create it.

[Figure: shape reference modes. Local Relative Mode: shape deforms with the object. Object Relative Mode: shape deforms with the object but keeps its original orientation. Absolute Mode: shape stays locked in place as the object deforms.]
Creating and Animating Shapes in the Shape Manager

1. Open the shape manager in a viewport or in a floating window (choose View > Animation > Shape Manager). With an object selected, select Shape or an existing shape in the shape list.
2. Duplicate the shape and rename it.
3. Deform the object or cluster into a new shape in the shape viewer.
4. Repeat these two steps to create a library of different shapes for this object.
1. Create the base object in a neutral pose. This is the object to be deformed with the target shapes.
2. Duplicate the object and deform the duplicates into different shapes (target shapes), such as for phonemes. Move them out of the way of the camera.
3. Select Shape Modeling Mode from the Construction Mode list.
4. Select the base object and choose Deform > Shape > Select Shape Key. Then pick each of the target shapes in the order that you want to create shape keys for the object. For each target shape you pick, a shape key is added to the model's Mixer > Sources > Shape folder.
5. Label the first shape key created in the Name text box, such as face. The other shape keys use this name plus a number, such as face1, face2, etc.
6. To create the animation, set the values for each shape key's weight slider in the animation mixer or in the Shape Weights custom parameter set. In either the mixer or the parameter set, click the weight slider's animation icon to key this value at this frame.
Storing and Applying Shape Keys
To add a shape key as a clip to a track in the mixer, right-click on a blue shape track and choose Insert Source, then pick the source (shape key) you've stored. You can also drag a shape key from the model's Mixer > Sources > Shapes folder in the explorer and drop it on a blue shape track.

Notice how the shape interpolates over time, from clip to clip.
Mixing the Weights of Shape Keys
[Figure: Shape 1 + Shape 2 = a combined shape, or an alternative blend.]
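Mixing shape key weights amounts to summing weighted deltas from the base pose: each target contributes weight * (target - base). Here is a minimal sketch of that standard linear blend-shape idea; the function name and data layout are illustrative, not Softimage's internal implementation.

```python
def blend_shapes(base, targets, weights):
    """Blend shape keys additively: each target contributes its delta
    from the base pose, scaled by its weight.

    base:    list of (x, y, z) point positions
    targets: list of shapes, each a list of (x, y, z) like base
    weights: one weight per target (1.0 = full shape)
    """
    result = [list(p) for p in base]
    for shape, w in zip(targets, weights):
        for i, (p, q) in enumerate(zip(base, shape)):
            for axis in range(3):
                result[i][axis] += w * (q[axis] - p[axis])
    return [tuple(p) for p in result]
```

Animating the weight values over time, as the weight sliders do, interpolates the object between these poses.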
Section 12
Actions and the Animation Mixer
The animation mixer is well-suited for editing existing material and bringing together all the pieces of an animation. In it, you can assemble all the bits and pieces you've imported from different scenes and models to help you build them into a final animation.

There are a number of ways in which you can share animation between models, whether they are in the same scene or different scenes. You can copy action sources, clips, compound clips, and even a model's whole Mixer node between models. And when you duplicate a model, all sources, clips, and mixer information are also duplicated.
The Animation Mixer
Storing Animation in Action Sources
Changing What's in an Action Source

After you have created an action source, you can modify the animation data stored in its source, remove items from it, or even add keys to fcurves in the source. When you modify the source, you change the animation for all action clips that were created from that source and refer to it.

Because editing an action source is destructive (you're changing the original animation data), you should always make a backup copy of it before editing. This is also useful to do if you don't want all action clips to share the same source (duplicate the source before creating clips from it).

You can access the animation data in an action source by right-clicking an action clip and choosing Source, or right-click and choose Animation Editor to access the source's fcurves.

If you want to modify an action clip without affecting the original source, you must use clip effects.

Restoring the Original Animation to an Object

You can return to the original animation stored in an action source at any time by applying that action source to the object. This is useful if you removed the animation when you created an action source; you can also apply the animation in the source to another model.

To apply the action source to a model, simply select the source in the model's Mixer > Sources > Animation folder in the explorer and choose the Actions > Apply > Action command.
To add a clip to a track in the mixer, right-click on a track and choose Insert Source, then pick the source you've stored. You can also drag a source from the model's Sources folder in the explorer and drop it on a track in the mixer.

Select and move clips: select and drag a clip to move it somewhere else on the same track or a different track of the same type (action, shape, or audio). Press Ctrl while dragging the clip to copy it. You can copy clips between different models' mixers this way, one clip at a time.

Drag on either of the clip's upper corners to hold the clip's first or last frames for any number of frames.
Mixing the Weights of Action Clips
Mixing Fcurves with Action Clips

Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, by selecting the Mix Current Animation option in the Mixer Properties editor, you can blend fcurves on the object directly with an action clip over the same frames.

For example, you can paste a clip in the mixer that contains the final animation for an object, then blend it with other fcurve animation you have added to that object, such as a slight offset or a minor adjustment to a mocap clip.

Being able to mix clips directly with fcurves means that you can easily create animation using the mixer, as well as use it for blending and tweaking final animations. You can keep manipulating and setting keys for the animated object without having to make its animation into a clip to blend it with another clip.

[Figure: Club-bot with a run action clip active in the animation mixer. Open the Mixer Properties editor and select Mix Current Animation. Then adjust the leg and arm a bit and key it. The Mix Weight value determines how much influence the fcurve animation has over the animation in the clip. Key this parameter to blend the fcurves in and out of the action clips.]

Modifying and Offsetting Action Clips

If you want to modify an action clip that contains animation data from fcurves, you can create a clip effect. A clip effect is a package of any number of variables and functions that you use to modify the data in the action source. Each clip effect is an independent package, associated with its action clip, and sits "on top" of the clip's original action source animation without touching it.

Because the effect is an independent unit, you can easily activate or deactivate it, allowing you to toggle between the clip's original animation and the animation modifications in the clip effect. This makes it easy to test out changes to your animation.

You may need to edit a clip's animation for a number of reasons:

• Add a progressive offset (using the cid variable) to a stationary walk cycle so that a character moves forward with each cycle.
• Animation coming from a library of stored actions often needs to be modified to fit a particular goal or environment. For example, you have a walk cycle, but the character must now step over an obstacle, so you have to move the leg over the obstacle.
• Animation that was originally created or captured for a given character must be applied to a different character that has different proportions.
• Animation with numerous keys, such as motion capture animation, must be adjusted, but you don't want to touch the original animation because it can be difficult to edit.

[Figure: Moving a key point in a mocap fcurve results in a peak in the curve.]
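The two blending controls mentioned here, the Mix Weight value and the cid clip effect variable, can be modeled with a toy formula: crossfade the clip's value with the fcurve value, then add a per-cycle offset. The function below is an illustrative assumption, not Softimage's actual evaluation.

```python
def evaluate_mixed_value(clip_value, fcurve_value, mix_weight,
                         cycle_index=0, cycle_offset=0.0):
    """Toy model of mixing and offsetting in the animation mixer.

    clip_value:   value from the action clip at the current frame
    fcurve_value: value from live fcurves on the object
    mix_weight:   0.0 = clip only, 1.0 = fcurves only (Mix Weight idea)
    cycle_index:  completed cycles of the clip (the cid variable idea)
    cycle_offset: offset added per cycle, e.g. a stride length
    """
    blended = (1.0 - mix_weight) * clip_value + mix_weight * fcurve_value
    return blended + cycle_index * cycle_offset
```

With cycle_offset equal to the stride length, a stationary walk cycle accumulates forward motion on each repetition, which is the progressive-offset effect described above.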
Sharing Animation between Models
Copying Action Sources between Models

If you want to share an action source between models in the same scene, you can drag-and-drop one from the model's Mixer > Sources > Animation folder in the explorer onto the mixer of another model. This makes a copy of that action source for the model.

To copy compound sources between models, press Ctrl while you drag the compound action source from the model's Mixer > Sources > Animation folder to a track in the other model's mixer.

1. Open the animation mixer for the model to which you want to copy the action source (the target).
2. Open an explorer and expand the Model node for the model from which you want to copy the action source (the original).
3. Drag a source from the original model's Mixer > Sources > Animation folder in the explorer and drop it on a track in the animation mixer of the target model.

Mapping Model Elements for Sharing

Sharing actions is possible because each model has its own namespace. This means that each object in a single model's hierarchy must have a unique name, but objects in different models can have the same name. For example, if an action contains animation for Bob's left_arm, you can apply the action to Biff's model and it automatically connects to Biff's left_arm element.

If the names of some of the objects and parameters in the source don't match when you're copying sources between models, the Action Connection Resolution dialog box opens, in which you can resolve how the objects or parameters are mapped.

You can also create connection-mapping templates to specify the proper connections between models before you copy action sources between models. These templates set up rules for mapping the object and parameter names stored in the action sources, such as when similar elements have different naming schemes, like L_ARM and LeftArm.

To create a connection-mapping template, open the animation mixer and choose Effect > Create Empty Connection Template. A template is created for the current model and the Connection Map property editor opens. Once you have created an empty connection-mapping template, you can add and modify the rules as you like.

[Figure: Jaiqua's (on the left) elements are mapped to the corresponding ones on the Club-bot using a connection-mapping template. This is set up before action sources are shared between them.]
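A connection-mapping template's rules can be pictured as ordered pattern-to-replacement mappings applied to element names. Here is a minimal sketch assuming regex-style rules; the rule list and function are hypothetical illustrations, not the template's real storage format.

```python
import re

def map_element_name(name, rules):
    """Resolve a source element name to a target model's naming scheme.

    rules: ordered list of (pattern, replacement) pairs, applied like
    a connection-mapping template's rules.
    """
    for pattern, replacement in rules:
        if re.fullmatch(pattern, name):
            return re.sub(pattern, replacement, name)
    return name  # no rule matched: rely on identical names

# Example rules (assumed): map an L_/R_ prefix scheme to Left/Right.
rules = [
    (r"L_(\w+)", r"Left\1"),   # L_ARM -> LeftARM
    (r"R_(\w+)", r"Right\1"),  # R_LEG -> RightLEG
]
```

A real template would also handle case conventions (ARM vs. Arm) and parameter names; this sketch keeps the remainder of the name unchanged.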
Section 13
Simulation
Imagine a scene with an alien climbing out of her
space ship: it has just crashed to the ground after
breaking through fence posts like match sticks,
smoke streaming out of the engine. As she stares at
the burning rubble that was once her home in the
skies, a single tear rolls down her cheek. She
stumbles through a raging snow storm, the
howling wind whipping through her hair and
tearing at her cape.
You can use all the simulation powers in Softimage
to create your own compelling scenes—all the
tools are there for you.
[Figure: examples of simulation in Softimage: cloth, particles, and rigid bodies.]
Making Things Move with Forces

You can apply a force to hair, soft bodies, and cloth as described below.

3. The force is automatically applied to the selected object.

You could also select the hair object and apply an existing force to it by choosing Modify > Environment > Apply Force on the Hair toolbar, or select the cloth/soft body object and choose Cloth/Soft Body > Modify > Apply Force on the Simulate toolbar.

For rigid bodies, the process is simpler: create a force from the Get > Force menu and it is applied to all rigid bodies in the current simulation environment.

• The Fan creates a "local" effect of wind blowing through a cylinder so that everything inside the cylinder is affected.
• The Toric force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a torus.
• The Attractor force attracts or repels simulated objects much like a magnet attracts or repels iron filings.
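As an illustration of what a force like the Attractor evaluates each simulation step, here is a toy inverse-square attraction/repulsion on a point. The falloff law and the function name are assumptions for the sketch, not Softimage's actual force model.

```python
def attractor_force(pos, center, magnitude):
    """Toy attractor: pulls (magnitude > 0) or pushes (magnitude < 0)
    a simulated point toward or away from a center, magnet-style,
    with inverse-square falloff. Purely illustrative physics."""
    delta = [c - p for p, c in zip(pos, center)]
    dist2 = sum(d * d for d in delta) or 1e-12  # avoid divide-by-zero
    dist = dist2 ** 0.5
    scale = magnitude / dist2
    # Force vector points along the unit direction toward the center
    return tuple(scale * d / dist for d in delta)
```

The simulator would add such a vector to each object's accumulated forces before integrating velocities and positions.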
Types of Forces

[Figure: the available force types, keyed by letter in the illustration.]
Hair and Fur
Basic Grooming 101

When you're styling, you always work with the guide hairs: these are the hairs that are similar to and behave like segmented IK chains. In fact, you can grab a hair tip and position it the same way as you would the effector on an IK chain.

Because guide hairs are actual geometry, you can use all of the standard deformation tools on them to come up with some groovy hairdos! Lattices, envelopes, deform by cluster center, randomize, and deform by volume usually produce the best results. However, if you animate the deformations, you cannot then use dynamics on the hair.

• You can find all styling tools on the Hair toolbar (press Ctrl+2).
• Use the Brush tool to sculpt hairs with a natural falloff, like proportional modeling.
• Comb the hair in the desired direction, such as in the negative Y direction. Maybe use Puff to give some lift at the roots.
• Translate and rotate specific tips or points of hair.
• Select tips, points, or entire strands of hair to style in any way. Here, just the tips of some hair strands are selected.
• Use the Clump tool to bring hair strands or points together or fan them out.
• Change the length of the guide hairs using the Cut tool or the Scale tool.
• When you use a styling tool after selecting Tip, press Alt+spacebar to return to the Tip selection tool.
• You can deform the shape of the hair using any deformation tool, like a lattice. To have smoother animation, activate Stretchy mode to allow the hair segments to stretch along with the deformation.
Making Hair Move with Dynamics

When you apply dynamics to hair, you make it possible for the hair to move according to the velocity of the hair emitter object, like long hair whipping around as a character turns her head quickly. The dynamics calculations also take into account any forces applied to hair, such as gravity or wind, as well as any collisions of the hair with obstacles.

You can also use dynamics as a styling tool by freezing the hair when it's at a state that you like. For example, apply dynamics, apply some wind to the hair, then freeze the hair when it has that wind-swept look.

Getting the Look with Render Hairs

The render hairs are the "filler" hairs that are generated from and interpolated between the guide hairs. And as their name implies, render hairs are the hairs that are actually rendered. You can change the look of a hair style quite a lot by modifying the render hairs.

[Figure: Set the number of render hairs to be rendered, then decide which percentage of this value you want to display. To work quickly, display a low percentage, then display the full amount of hair for the final render.]
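Conceptually, each render hair is interpolated from nearby guide hairs. A minimal sketch of that idea is a weighted blend of guide strand points; the weighting scheme and function name here are assumptions for illustration, and Softimage's actual interpolation is more sophisticated.

```python
def interpolate_render_hair(guides, weights):
    """Generate one 'filler' render hair by blending guide hairs.

    guides:  list of guide strands, each a list of (x, y, z) points
             (all strands assumed to have the same point count)
    weights: one weight per guide, summing to 1
    """
    n_points = len(guides[0])
    strand = []
    for i in range(n_points):
        # Blend the i-th point of every guide strand
        x = sum(w * g[i][0] for g, w in zip(guides, weights))
        y = sum(w * g[i][1] for g, w in zip(guides, weights))
        z = sum(w * g[i][2] for g, w in zip(guides, weights))
        strand.append((x, y, z))
    return strand
```

Generating many render hairs with varied weights fills in the surface between the relatively few guide hairs you actually style.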
Hair Shaders and Rendering

Rendering hair is similar to rendering any other object in Softimage. You can use all standard lighting techniques (including final gathering and global illumination), set shadows, and apply motion blur. Hair is rendered as a special hair primitive geometry by the mental ray renderer.

While you can use any type of Softimage shader on hair, the Hair Renderer and Hair Geo shaders give you the most control for making the hair look the way you want. You can determine different coloring, transparency, and translucency anywhere along the length of the hair, such as at the roots and tips.

How to attach shaders to hair

1. Select the hair and open a render tree (press 7). This tree shows the default shader connection when you create hair.
2. To switch to the Hair Geo shader, choose Nodes > Hair > Hair Geometry Shading and attach it to the hair's Material node in the same way as the Hair Renderer shader.
3. To connect other Softimage shaders to the hair, disconnect the current Hair shader. Then you can load and connect another shader directly to the hair's Material node. For example, you can attach a Toon Paint or standard surface shader to the Surface and Shadow inputs of the hair's Material node to change the hair's color.

[Figure notes: The Hair Renderer shader gives you control over coloring, transparency, and shadows along the hair strands. You can also optimize the render and take advantage of final gathering. The Hair Geo shader lets you set the coloring, transparency, and translucency using gradient sliders, which give you lots of control over where the shading occurs along the hair strand. You can even add incandescence to make the hair "glow", for example on the inner part of the hair strand.]
Connecting a Texture Map to Hair Color Parameters

A texture map is the combination of a texture projection plus an image file whose pattern of colors you want to map. Instead of a value being applied over the surface as with a weight map, a texture map applies a color. When mapping a texture to the hair color parameters in the hair shaders, the color of each individual strand is derived from the texture color found at the root of the hair.

Unlike other geometry in Softimage, hair is not a typical surface, so you can't apply projections directly to it. Instead, you need to create a texture map property for the hair emitter object first, and then transfer it to the hair itself.

To do this, apply a texture map to the hair emitter using one of the Get > Property > Texture Map commands, associate an image to this projection to use as the map, then transfer the texture map from the hair emitter to the hair object itself using the Transfer Map button on the Hair toolbar.

[Figure: You can change the color of the hair using a texture map connected to the hair shaders' color parameters.]

Rendering Objects (Instances) in Place of Hairs

Replacing hairs with objects allows you to use any type of geometry in a hair simulation. You can replace hair with one or more geometric objects (referred to as instances) to create many different effects. For example, you could instance a feather object for a bird or instance a leaf object to create a jungle of lush vegetation.

The instanced geometry can be animated, such as its local rotation or scaling, or animated with deformations. This allows you to animate the hair without needing to use dynamics, such as instancing wriggling snakes on a head to transform an ordinary character into Medusa!

[Figure: You can render instances of 3D objects as hair instead of the hair's geometry. The instance objects can even be animated!]

To render instances for the hairs, simply put the objects you want to instance into a group; each object in the group is assigned to a guide hair using the Instancing options in the Hair property editor. The instanced geometry is calculated at render time, so you'll only see the effect in a render region or when you render the frames of your scene.

You can choose whether to replace the render hairs or just the guide hairs. You can also control how the instances are assigned to the hair (randomly or using weight map values), as well as control their orientation by using a tangent map or having them follow an object's direction.
Basics • 231
Section 13 • Simulation
Rigid Body Dynamics
(Figure: rigid body constraint types, including Hinge, Spring, and Fixed constraints between objects A and B.)
Cloth Dynamics
The cloth simulator uses a spring-based model for animating cloth dynamics. You can specify and control the mass of the fabric, the friction, and the degree of stiffness, allowing you to simulate different materials such as leather, silk, dough, or even paper.
Cloth deformation is controlled by a virtual “spring net” which is made up of three different types of springs, each controlling a different kind of deformation: shearing, stretching, and bending.
• Bend controls the resistance to bending. With low values, the cloth moves very freely like silk; with high values, the cloth appears like rigid linen or even leather.
• Stretch controls the resistance to stretching, which is the elasticity of the material. Low values allow the cloth to deform without resistance, while higher values prevent the cloth from having elasticity.
• Shear controls the resistance to shearing (crosswise stretching), keeping the cloth as close to its original shape as possible. Try decreasing this value if the cloth’s wrinkling is too rigid.
(Figures: cloth with low resistance to Bend, low resistance to Stretch, and low resistance to Shear.)
After you set up how the cloth is deformed according to its own “internal” spring-based forces, you can then affect how it’s deformed using external forces, such as gravity, wind, fans, and eddies.
As well, you can have the cloth collide with external objects or with itself. The obstacles can be animated or deformed and interact with the cloth model according to the cloth’s and obstacle’s friction.
Although you can apply cloth only to single objects, you could create a larger object (such as a garment) made of multiple NURBS surface patches stitched together using any number of points. You must first assemble the different patches into a single surface mesh object, then apply cloth to that object. Set the Stitching parameters in the ClothOp property editor to create seams between the different NURBS surfaces of the same surface mesh model.
To give you a head start on creating cloth, there are several presets in the Cloth property editor that let you quickly simulate the look and behavior of different materials, such as leather, paper, silk, or pizza dough.
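The spring types above all follow the same basic principle. As a rough illustration in Python (not Softimage’s actual solver, whose internals are not exposed), a Hooke’s-law spring force might look like this, with the stiffness value playing the role of the Stretch, Shear, or Bend resistance:

```python
import math

def spring_force(p_a, p_b, rest_length, stiffness):
    """Hooke's-law force pulling point A toward its rest distance from B."""
    delta = [b - a for a, b in zip(p_a, p_b)]
    length = math.sqrt(sum(c * c for c in delta))
    if length == 0.0:
        return [0.0, 0.0, 0.0]
    # The force grows with the deviation from the rest length; a higher
    # stiffness means more resistance to that kind of deformation.
    magnitude = stiffness * (length - rest_length)
    return [magnitude * c / length for c in delta]

# A cloth "spring net" layers three spring types over the same points:
# structural springs (stretch), diagonal springs (shear), and springs
# that skip a point (bend), each with its own stiffness.
force = spring_force([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], rest_length=1.0, stiffness=50.0)
# force is [50.0, 0.0, 0.0]: the stretched spring pulls the point back toward B.
```

A spring at its rest length contributes no force, which is why a relaxed cloth holds its shape until an external force disturbs it.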
Soft Body Dynamics
Section 14
ICE: The Interactive Creative Environment
What is ICE?
ICE is a node-based system for controlling all the attributes that define a deformation or particle effect. There are two parts to ICE:
• At its basic level, ICE is a complete visual programming environment. You can combine basic nodes for getting data, modifying data, setting data, and controlling execution flow into elaborate ICE trees. You can easily experiment, in a way that you can’t when writing code, by simply connecting nodes and seeing the results immediately in the viewports. When you’re done, you can package your tree into reusable compounds that you can use in other scenes, share with your team, or even put online to share with the Softimage community.
• On top of that level, Softimage comes with a comprehensive set of predefined compounds for particle simulations. For simple effects, you can connect compounds that define forces or basic behaviors like sticking and bouncing. For more complex effects, you can use the predefined state machine to switch between several behaviors on a per-particle basis.
You can use ICE to:
• Completely control particle systems. You can add and remove points on point clouds. You can move points directly, or apply a simulation using particle or rigid body behavior.
• Deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds.
You cannot use ICE on hair, non-ICE (legacy) particle clouds, groups, or branches.
There are three ways you can approach ICE:
• You can simply use the predefined compounds and adjust their input values to create basic effects.
• At the other extreme, you can dive right in and create your own custom effects from scratch using the base nodes.
• Between the two extremes, you can start with the factory compounds and then modify or augment them with extra nodes to create your own variations of effects.
It’s All About the Nodes
Nodes are the building blocks for ICE: they are operators that work on object data. Some nodes get data from the scene, and some modify and process this data. They have input and output ports that allow them to be connected to each other. (Figures: two nodes with ports connected together; a compound with several input ports.)
The ICETree node is like Grand Central Station for an ICE tree: it’s the main operator that processes all the data that flows into it. Nodes in the tree must be connected to it in order to be evaluated. You can have multiple ICE trees per object as long as each ICETree operator has a different name—and you can easily rename it in the explorer.
Compounds
Compounds are the “über nodes” of the ICE world. They can contain a whole ICE tree or just parts of it. Compounds make it easy to create more complex effects in the ICE tree because they package numerous nodes into one. And because they’re in a package, you can easily bring compounds into other scenes or share them with other users.
You can connect compounds in the same way that you do for nodes in the ICE tree. As well, you can open up a compound to edit it or just to see what makes it tick.
Softimage ships with many compounds that are designed specifically for particle and deformation workflows. You can find these on the Tasks tab of the preset manager in the ICE Tree view.
Attributes
Attributes are at the heart of ICE. Attributes are data that is associated with objects, or with components such as points, edges, polygons, and nodes. With attributes, you can get and set information such as a particle’s color or shape, or an object’s point position. Almost every ICE tree involves getting and setting attributes in some way.
Attributes can be inherent (always part of the scene), predefined (innately understood by certain base ICE nodes, but dynamic in that they only exist when they are set), or custom (create your own). You can view attributes in an explorer. (Figure: some of the many attributes that are available for point clouds.)
The ICE Tree View
A Memo Cams. Save and restore up to four views: left-click to recall a stored view; middle-click to store the current view; Ctrl+middle-click to overwrite a stored view with the current view; right-click to clear a stored view.
B Lock. Prevents the view from updating when you select other objects in the scene.
D Clear. Clears the view.
E Opens the preset manager in a floating window.
F Displays or hides the preset manager embedded in the left panel (J).
G Displays or hides the local explorer embedded in the right panel (L).
H Bird’s Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird’s Eye View.
L Local explorer. When there are multiple ICE trees on the same object, click to select the one to view. You can also click on a material to switch to the render tree view.
Anatomy of an ICE Tree
The following illustration shows a typical ICE tree for a simple particle system. To see some examples of how to build up an ICE tree, check out the three tutorials at the end of this guide.
B The ICETree node is the main operator that processes all the data that flows into it. Nodes must be connected to it to be evaluated.
C Execution flows sequentially from top to bottom along the input ports of the ICETree node (and any other type of Execute node). Because the nodes are evaluated in order, it matters where you plug them in. Sometimes one operation requires another to be done first so that it can be evaluated properly.
D Nodes that are connected to an Emit node’s Execute on Emit port are applied only to new points that are generated on the current frame. They are not applied to all particles on every frame.
E Nodes that are connected to the root node are executed on every frame. You can control which data gets set on which elements by using If and Filter nodes in the upstream branches.
The simulation framework resets every particle’s force to 0 at the end of each frame, so forces must be reapplied at every frame, which is why the Add Forces node is plugged into the ICETree node and not the Emit node.
F The Simulate Particles node is the “standard” particles node that updates the position and velocity of each particle at each frame based on mass and force.
You could use the Simulate Rigid Bodies node instead to make particles into rigid bodies. Particles can then collide with each other and with other objects that are set as obstacles.
You do not need to include a simulation node in your tree—if you prefer, you can set point positions directly.
ICE Simulations
As with animation, a simulation calculates the way in which an object changes over time. However, with a simulation, the result of the current frame depends on the result of the previous frame.
With ICE, you can create both particle and deformation simulations.
• You can emit and change particles in a point cloud for effects such as cigarette smoke curling as it rises, leaves falling lazily to the ground, vines growing up out of the ground, or even crowds of people milling about in the street.
• You can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds, to create effects such as turbulent ocean waves, gentle ripples on a pond, or ribbons twisting in the wind.
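The frame-to-frame dependence can be sketched in a few lines of Python (a generic illustration, not ICE’s actual evaluation code): each frame’s state is computed from the previous frame’s state, so frame N cannot be evaluated without stepping through the frames before it.

```python
def step(position, velocity, dt):
    """Advance one frame; the result depends entirely on the previous frame."""
    gravity = -9.8
    velocity = velocity + gravity * dt
    position = position + velocity * dt
    return position, velocity

# To know where the particle is at frame 30, every earlier frame must be
# computed first, which is why simulations are played forward (or cached)
# rather than scrubbed backwards.
position, velocity = 10.0, 0.0
for frame in range(30):
    position, velocity = step(position, velocity, dt=1.0 / 30)
```

This is also why caching matters: once the per-frame results are stored, any frame can be recalled directly without recomputing its predecessors.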
Simulations and the Construction Regions
An ICETree node can be either simulated or not: the only difference between the two is the ICETree operator’s position in the object’s construction stack.
When you create a simulated ICETree node, the Simulation and Post-Simulation regions are created in the object’s construction stack, and the ICETree operator is placed in the Simulation region.
Operators in the Simulation region calculate the result of the current frame based on the previous frame rather than on the construction regions that are below it. This is true not only for ICE trees, but for all operators in the Simulation region. For example, if you apply a non-ICE Twist deformation with a small Angle value in the Simulation region and play back the scene, the object becomes progressively more twisted.
Operators in the Post-Simulation region are applied on top of the simulation. You could use the Post-Simulation region to apply a deformation, such as a lattice, on top of a particle simulation.
When the simulation is not active, the operators in the Simulation region are skipped. On the first frame that the simulation is active, the operators below the Simulation region are evaluated to define the default initial state, but the operators in the Simulation region are not evaluated—this means that if you are emitting particles, for example, they will appear on the second frame of the simulation. While the simulation is active, the operators below the Simulation region are not re-evaluated.
You can turn a simulated tree into a non-simulated one by moving it to another region, like Modeling, and vice versa. However, remember that the lower regions are not re-evaluated when the simulation is active if the Simulation region exists in an object’s construction stack. To fix this, you can select and delete the Simulation region marker from the construction operator stack. Both the Simulation and Post-Simulation region markers are removed if either one is deleted, but operators in these regions are not removed and can be moved to the desired regions afterward.
The Simulation Environment
A simulation environment is automatically created when a simulated ICETree node is created. This simulation environment houses the Simulation Time Control, the cache files, and any non-ICE forces used in the simulation.
The Simulation Time Control property is where you set the frame range during which the simulation is active. It’s also where you set the Play Mode, which controls how the simulation plays back: Live, Standard, or Interactive.
To play the simulation, use the standard playback controls below the timeline to play, scrub, or jog forward. Since simulations depend on the previous frame, the viewports do not update if you play, scrub, or jog backwards unless the simulation has been cached. If you jump to a later frame, the intervening frames are calculated in the background.
Setting the ICE Simulation’s Initial State
By default, the initial state of a simulation is the result of the operators in the construction regions that are below the Simulation region on the first frame that the simulation is active.
However, with simulations you often need to have a certain state be the first frame of the simulation, such as a candle already burning or rigid bodies already settled. You can select any frame in an existing simulation and use that as the initial state by choosing ICE > Edit > Set Initial State from the Simulate toolbar.
Caching ICE Simulations
Much of the work in creating a convincing simulation is a process of trial and error. Caching can help you try out different combinations of settings until you find the right effect. Caching stores the current simulation frames into a file that you can play back using the ICE tree, the animation mixer, or simply the playback controls and the timeline.
With cache files in the animation mixer, you can scale, trim, cycle (loop), blend, etc. them in the same way that you can for action clips.
There are three file formats from which you can choose to create cache files: the default ICECache file format, the PC2 file format, and the Custom file format (if you create your own custom plug-in for caching).
There are three ways of caching ICE simulations:
A Use the Tools > Plot > Write Geometry Cache command on the Animate toolbar to plot any type of simulation (except hair) or animation into cache files. Then select an object and load the cache files on it with the Plot > Load Geometry Cache command. This brings them into the animation mixer.
B Use the Caching option in the Simulation Time Control to cache the simulation frames from any simulated object into an action source, which you can then bring into the animation mixer.
C Use the Cache on File node in the ICE tree to write the simulation or animation data stored on an ICE object to a cache file, which you can bring into the animation mixer. You can also read the cache data with this node.
Forces and ICE Simulations
In the ICE tree, you can make simulated particles and deformed objects move according to different types of forces. Each simulated object can have multiple forces applied to it.
You can use either of these types of forces in an ICE tree:
• The forces that are available from the Get > Primitive > Forces menu on any toolbar (see Making Things Move with Forces on page 225).
• The ICE forces that are available as compounds in the ICE tree view’s preset manager or Nodes menu.
You can also create your own force compounds using the nodes found within ICE.
The main ICE force is the Add Forces compound, which is a hub for all the forces in your ICE tree. It adds up the effect of all forces that are plugged into it, then outputs one force (vector). The order in which the forces are plugged into the Add Forces compound is not important.
If nothing is plugged into the Add Forces compound, you can use it to set a simple directional force on each axis.
A Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from objects or particles, their size must be taken into consideration.
B The Surface force attracts particles/objects to or repels them from an object’s surface. While this force is similar to creating goals for particles, this force keeps the particles moving around (“swarming”) on the surface object instead of stopping once they reach the goal.
C The Wind is a directional force with velocity and strength. It generates a force that speeds up particles or objects to a target velocity.
D The Null Controller force uses a null to attract or repel particles/objects, much like how particles move toward or away from a goal object. Changing the icon shape of the null (to something like Rings, Square, or Circle) changes the behavior of this force.
E The Neighboring Particles force attracts particles to each other when they get within a certain range, but there is no friction between the particles so they don’t stay clumped together—they keep moving.
F The Drag force opposes the movement of simulated objects, as if they were in a fluid.
G The Coagulate force attracts points toward their neighbors to form clumps. Once the points get within a certain range of each other, the friction (drag) slows them down.
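The behavior of the Add Forces hub comes down to simple vector addition, which is why the plug-in order does not matter. A sketch in Python, with made-up example force values:

```python
def add_forces(*forces):
    """Sum any number of force vectors into one resultant force.
    Addition is commutative, so the order of the inputs is irrelevant."""
    return [sum(axis) for axis in zip(*forces)]

gravity = [0.0, -10.0, 0.0]    # illustrative values only
wind = [2.0, 0.0, 0.5]
drag = [-0.5, 0.25, -0.25]

total = add_forces(gravity, wind, drag)
# total is [1.5, -9.75, 0.25], the single force vector passed downstream
```

Plugging the same three forces in any other order produces the same resultant vector.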
ICE Deformations
Any ICE tree that modifies point positions on an object without adding or deleting points can be considered a deformation. With ICE, you can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds.
A deformer works by getting current point positions, modifying them based on other variables, then setting new positions. This means that you can create your own custom deformers with ICE.
You can create three types of deformations with ICE: simulated, animated, and non-time based.
Non–time-based Deformations
You can create deformations that are not time-based but instead
depend on the position of deformer objects or other factors to modify
point positions. The deformation can then be controlled by animating
the deformers in any way.
The following example is a variation of the Push deformation that uses
the proximity of a null to displace points along their normals.
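The same idea can be sketched outside ICE. In the hypothetical Python snippet below, points are displaced along their normals with a weight that falls off with distance from a control object; the function name and the linear falloff are illustrative, not the actual Push compound:

```python
import math

def push_by_proximity(points, normals, control_pos, radius, amplitude):
    """Displace each point along its normal, weighted by how close the
    control object (the 'null') is. No time is involved: move the control
    and re-evaluate, and the deformation follows."""
    result = []
    for point, normal in zip(points, normals):
        distance = math.dist(point, control_pos)
        # Full push at the control object, fading linearly to zero at radius.
        weight = max(0.0, 1.0 - distance / radius)
        result.append([p + amplitude * weight * n for p, n in zip(point, normal)])
    return result

points = [[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]]
normals = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
deformed = push_by_proximity(points, normals, [0.0, 0.0, 0.0], radius=2.0, amplitude=1.0)
# The first point is pushed to y = 1.0; the second is outside the radius and unmoved.
```

Animating the control object’s position then animates the deformation, with no dependence on the previous frame.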
Building ICE Trees
Boolean: A value that is either True or False.
Integer: A positive or negative number without decimal fractions, for example, 7, –2, or 0.
Scalar: A real number represented as a decimal value, for example, 3.14. Internally this is a single-precision float value.
2D Vector: A two-dimensional vector [x, y] whose entries are scalars, for example, a UV coordinate.
3D Vector: A three-dimensional vector [x, y, z] whose entries are scalars, for example, a position, velocity, or force.
4D Vector: A four-dimensional vector [w, x, y, z] whose entries are scalars.
Quaternion: A quaternion [x, y, z, w]. Quaternions are usually used to represent an orientation. Quaternions can be easily blended and interpolated, and help address gimbal-lock problems when dealing with animated rotations.
Geometry: A reference to a geometric object in the scene, such as a polygon mesh, NURBS curve, NURBS surface, or point cloud. You can sample the surface of a geometry to generate surface locations for emitting particles.
Surface Location: A location on the surface of a geometric object. The locator is “glued” to the surface of the object so that even if the object transforms and deforms, the locator moves with the object and stays in the same relative position.
Execution: Not a data type in the conventional sense. You connect Execution ports such as the output of a Set Data into an Execute or root node to control the flow of execution in the tree.
Reference: Also not a data type in the conventional sense. This is a reference to an object, parameter, or attribute in the scene, expressed as a character string. You can daisy-chain these as described in Daisy-chaining References on page 261.
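To illustrate why quaternions blend well, here is a minimal normalized-linear-interpolation (nlerp) sketch in Python. This is a generic technique for blending orientations, not code from Softimage:

```python
import math

def nlerp(q1, q2, t):
    """Blend two quaternions [x, y, z, w] by normalized linear interpolation."""
    # Negate one input if needed so the blend takes the shorter arc.
    if sum(a * b for a, b in zip(q1, q2)) < 0.0:
        q2 = [-c for c in q2]
    blended = [(1.0 - t) * a + t * b for a, b in zip(q1, q2)]
    norm = math.sqrt(sum(c * c for c in blended))
    return [c / norm for c in blended]

identity = [0.0, 0.0, 0.0, 1.0]  # no rotation
# A 90-degree rotation about Y, as [x, y, z, w]:
quarter_y = [0.0, math.sin(math.pi / 4), 0.0, math.cos(math.pi / 4)]
halfway = nlerp(identity, quarter_y, 0.5)  # roughly 45 degrees about Y
```

Because the blend happens on the quaternion components rather than on Euler angles, no axis ever collapses onto another, which is how gimbal lock is avoided.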
Polymorphic Ports
Polymorphic ports can accept several different data types. For example, the Add node can be used to add together two or more integers, or two or more scalars, or two or more vectors, and so on.
Once you connect a value to a polymorphic port, its port type becomes resolved. Other input and output ports on the same node and on connected nodes may also become resolved and only accept specific data types. This reflects the fact that, for example, you cannot add an integer to a vector.
Before anything is connected, the Add node’s ports are unresolved (black). Once a node is connected to Value1, then Value2 and Result become resolved. In this case, they are yellow for 3D vectors.
Even after a port’s type has been resolved, you can still change it by replacing the connection with a different data type. However, this works only if the port is not resolved by other connections in the tree.
If a port’s type is unresolved, you cannot set values in its property editor. Once it is resolved, the appropriate controls appear in the property editor. Different data types use different controls: for example, checkboxes for Booleans, sliders for scalars, and so on. Before any connection, the Add node’s property editor is blank. After connection, controls appear for Value2; there are no controls for Value1 because it is being driven by the connection.
While polymorphic ports accept several data types, they don’t necessarily accept all types of connection. For example, the ports of a Pass Through node accept any type of value, but it doesn’t make sense to use a Multiply by Scalar node with a Boolean value.
Data Context
In ICE, attributes are always associated with elements, either objects or one of their component types such as points, polygons, edges, and so on. For example, “sphere.PointNormal” consists of one 3D vector for each point of the object called “sphere”; in other words, the context is per point of sphere.
For two ports to be connectable, their contexts must be compatible. Context is determined by two factors:
• The type of element associated with the data: object or a specific component type (points, polygons, etc.).
• The object that owns the components.
The data context gets propagated through node connections in the same way as the data types of polymorphic nodes.
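Port resolution behaves like a simple form of type inference. A toy Python analogy (not ICE code) of how the first connection pins down a polymorphic Add node:

```python
def resolve_add(inputs):
    """An 'Add' that accepts several data types, but once the first
    input fixes the type, all other inputs must match it."""
    resolved_type = type(inputs[0])  # the first connection resolves the port
    for value in inputs[1:]:
        if type(value) is not resolved_type:
            raise TypeError("cannot add a %s to a %s"
                            % (type(value).__name__, resolved_type.__name__))
    if resolved_type is list:        # vectors: add per component
        return [sum(axis) for axis in zip(*inputs)]
    return sum(inputs)               # integers or scalars

total_int = resolve_add([1, 2, 3])                  # resolves to integer: 6
total_vec = resolve_add([[1.0, 0.0], [0.0, 2.0]])   # resolves to 2D vector: [1.0, 2.0]
# resolve_add([1, [0.0, 0.0]]) would raise TypeError: you cannot
# add an integer to a vector.
```

The mismatch error plays the role of ICE refusing the connection in the first place.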
The different types of context are summarized in the following table:
Singleton: A data set containing exactly one value. For example, an object’s position, a bone’s length, a mesh’s volume, etc. The singleton context includes data that is associated directly with objects rather than their components, as well as scene data such as the current time and the frame rate. Singleton data is always compatible with other singleton data. Singleton data is also usually compatible with other contexts; for example, you can add the position of one object to the point positions of another object.
Point: A data set containing one value for each point of a geometric object (point cloud, polygon mesh, NURBS surface, curve, lattice, etc.). For example, point positions, envelope weight assignments, etc.
Line: A data set containing one value for each edge, subcurve, or surface boundary. For example, edge lengths.
Specifying Scene References
Certain nodes can refer to elements in the scene using strings as references. For example, references can specify things like:
• Attributes to get or set.
• Point clouds to which to add particles.
• Geometric objects to query for closest points, etc.
References are resolved by name. Character strings are not case-sensitive. Object, property, and attribute names are separated by a period (.), for example, “grid.PointPosition” or “sphere.cls.WeightMapCls.Weight_Map.Weights”.
You can specify a scene reference by using controls in a node’s property editor to enter, explore for, or pick elements in the scene. Alternatively, you can right-click on a node and choose Explore for Port Data.
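Name resolution for these string references can be pictured as a case-insensitive walk down the dotted names. A toy Python model, in which the miniature scene data is made up purely for illustration:

```python
# A made-up miniature "scene": objects mapped to their attribute data,
# stored under lowercased names.
scene = {
    "grid": {"pointposition": [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]},
    "sphere": {"pointnormal": [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]},
}

def resolve(reference):
    """Resolve a dotted string reference such as 'grid.PointPosition'.
    Names are matched case-insensitively, as in ICE."""
    node = scene
    for name in reference.lower().split("."):
        node = node[name]
    return node

positions = resolve("Grid.POINTPOSITION")  # case does not matter
```

Each segment of the string narrows the search one level, which is the same pattern longer references like “sphere.cls.WeightMapCls.Weight_Map.Weights” follow.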
Getting and Setting Data in ICE Trees
Almost every ICE tree involves getting data, performing calculations, and then setting data. You can get and set any data using Get Data, Set Data, and other nodes found in the Data Access category of the Tools tab in the preset manager. There are also some compounds for getting and setting specific data on the Task > Particles or Deformations tabs.
You can get any data in the scene. You can set only certain data:
• Some intrinsic attributes, such as PointPosition or EdgeCrease. Other attributes are read-only, like PointNormal and PolygonArea.
Getting Data
You get data using Get Data nodes. You can add a Get Data node to your scene by dragging it from the preset manager (it’s in the Data Access category of the Tools tab) or by selecting it from the Nodes > Data Access menu. You can also get a specific object or other element by dragging its name from any explorer view. Once you have a Get Data node in your tree, you can specify or modify the reference as described in Specifying Scene References on page 260.
You can get data by explicit string references or at locations.
• When you get data by an explicit string reference, you get a set of values with one value for each component. For example, if you get “sphere.PointNormal”, you get one 3D vector for each point of the sphere object; in other words, the context is per point of sphere.
• When you get data at a location, the context depends on the context of the set of locations that is connected to the Source port of the Get Data node. For example, if you start by getting “grid.PointPosition”, then use that to get the closest location on sphere, and in turn use that to get PointNormal, the data consists of normals on the sphere but the context is per point of the grid. If instead you started by getting “grid.PolygonPosition”, the context would be per polygon of the grid.
Getting Data at Locations
To get data at a location, plug any location data into a Get Data node’s Source port. When a location is plugged into the Source port of a Get Data node in this way, its Explorer button shows only the attributes that are available at that location.
You can use this technique to get data from other objects using geometry queries like Get Closest Location nodes. For example, you can get PointNormal at the closest location on a sphere.
Reusing Get Data Nodes
You can connect the same Get Data node to as many nodes as you want if you need the same data elsewhere in the tree. However, if the data has changed in between, the Get Data node will return the new data later in the tree. (Figure: the Get Self.Foo node returns different values to Stuff and More Stuff because Self.Foo was set in between.)
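The caveat about reused Get Data nodes comes down to evaluation order. A toy Python model (the attribute name Self.Foo follows the figure; the class is illustrative, not Softimage’s API):

```python
class GetData:
    """Stand-in for a Get Data node: each evaluation re-reads the
    attribute store, so downstream nodes see the value that is current
    at the moment they are evaluated."""
    def __init__(self, store, name):
        self.store = store
        self.name = name

    def evaluate(self):
        return self.store[self.name]

attributes = {"Self.Foo": 1}
getter = GetData(attributes, "Self.Foo")

first = getter.evaluate()     # 1
attributes["Self.Foo"] = 2    # a Set Data node runs in between
second = getter.evaluate()    # 2: the same node now returns the new data
```

The node is not a snapshot of the value at the time it was created; it is a live lookup, re-run at each point in the execution order where it is connected.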
ICE Compounds
Compounds are ICE nodes that are built from other nodes, which can be base nodes or even other compounds.
You can use compounds to simplify and organize your ICE trees to make them easier to read and understand, but the real advantage of compounds is that you can export them and reuse them in other ICE trees and scenes, as well as share them with other users.
Softimage includes many pre-built compounds for performing specific tasks. You can find these in the preset manager in the ICE tree view. These compounds are built from the same nodes that are also available in the preset manager. Inspecting the supplied compounds is a great way to see how ICE trees work. You can then edit these compounds to use them as a base for building your own effect.
Editing Compounds
When you edit a compound, you can change the compound name and
expose different ports of the nodes inside so that they are easily
accessible from your compound later on.
A Opens the compound editor. Move the mouse over a compound node, and click the e icon that pops up. Or right-click on a compound node (not over a port) and choose Edit Compound.
B Compound name. To change the name, double-click and type a new one.
I Expand or collapse the list of exposed output parameters (shown collapsed). Here again, when the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection.
J Expose a new output port. Drag an output port from any node onto the black circle. You can have as many output ports as you want.
Versioning Compounds
Softimage uses a built-in versioning system to manage updates to
exported compounds. You should use this versioning system instead of
renaming .xsicompound files manually; otherwise, you may end up
with multiple compounds that share the same name and version. If this
happens, Softimage warns you and indicates the location of the file that
will be used and the files that will be ignored.
The major and minor version numbers are stored in the .xsicompound
file. Major version changes are for large functional changes, while
minor version changes are for bug fixes and small adjustments.
If you modify a compound in an ICE tree and don’t export the new
version, it is identified by an asterisk.
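The version comparison can be pictured as ordering by (major, minor) pairs. The sketch below assumes the highest version wins when names collide; that rule and the file paths are illustrative assumptions, not Softimage’s actual loader code:

```python
def pick_compound(candidates):
    """Among .xsicompound files sharing one name, assume the highest
    (major, minor) version is used and the rest are ignored."""
    used = max(candidates, key=lambda c: (c["major"], c["minor"]))
    ignored = [c for c in candidates if c is not used]
    return used, ignored

candidates = [
    {"path": "user/Flock.xsicompound", "major": 1, "minor": 4},      # hypothetical paths
    {"path": "factory/Flock.xsicompound", "major": 2, "minor": 0},
]
used, ignored = pick_compound(candidates)
# used is the major-version-2 file; the 1.4 file would be ignored
```

Comparing tuples orders first by major and only then by minor, which matches the convention that a major bump outranks any number of minor fixes.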
Section 15
ICE Particles
ICE is a complete visual programming environment
that allows you to create particle effects.
In the real world, you think of particles as being
small pieces of matter such as dust, sea salt, water
droplets, sand, smoke, or sparks from a fire. With
ICE particles, you can create all these types of
natural phenomena and so much more!
Making ICE Particle Effects
1 Create a point cloud or emit particles: The simplest way is to select one or more objects to be the particle emitter(s) and then choose ICE > Create > Emit Particles from Selection on the Simulate toolbar. This automatically creates a point cloud and sets up certain nodes in the ICE Tree for that point cloud. You can also set up these nodes in the ICE tree from scratch.

2 Open the ICE tree view: Press Alt+9 or choose ICE > Edit > Open ICE Tree on the Simulate toolbar to open it in a floating window.

A • The ICETree node is the main processing operator in an ICE tree. Because this is a particle simulation, the ICETree node type is simulated.

B • The disc is the particle emitter object. The Get Data node for it simply gets the disc's object data so that it can be used in the ICE tree.

C • The Emit compound is responsible for emitting the particles and setting certain particle attributes (such as size, color, velocity, mass, shape, etc.) at emission time. At every frame, it adds points to the point cloud.
The Emit compounds are always plugged into the top of the ICETree node in a particle simulation because you need to emit the particles before anything else can happen to them.

D • The Simulate Particles node updates the position and velocity of each particle at each frame based on its mass, position, and velocity from the previous frame.
This node is usually plugged into the bottom of the ICETree node because it needs to take all the information from the nodes that precede it and then use that information to update each particle at each frame.

3 Edit the Emit parameters: These define how the particles will look and act when they are emitted: set the particle rate, speed, orientation, direction, color, mass, etc.

4 Delete particles at their age limit: The Set Particle Age Limit compound determines how long each particle will live, then the Delete Particles at Age Limit compound does its job.
If you don't put a limit on their age, the particles live for the duration of the simulation, which you may want for some effects.

5 Add forces to make the particles move: The Add Forces compound is a hub into which other forces can be connected. Here, only the Turbulence force is modifying the particles, but you could easily add other forces.

6 Build the particle ICE tree: Plug in different nodes for different effects. Remember this:
• When you plug nodes into the ICETree node, their output gets evaluated at every frame. You want to do this if you want the particle data to be updated throughout the simulation, not just when the particles are emitted.
• When you plug nodes into any of the Emit compounds, their output is evaluated only once, upon particle emission. This means that data from these nodes won't change the particles during the rest of the simulation.
• You can connect ports together only if their data matches in type and context.

7 Create a compound: This step is not necessary, but creating a compound of this particle effect lets you use it in other scenes or share it with others.

8 Render the particles as volumes using ICE particle shaders, or render them as surfaces using Softimage surface shaders.
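The evaluation model these steps describe (emission attributes set once, per-frame nodes run every frame, age-limited deletion) can be sketched conceptually. This is purely illustrative Python, not the ICE API; the dictionary-based particle representation is an assumption for the sketch.

```python
# Conceptual sketch only (not ICE): one emit-and-simulate loop for a
# minimal point cloud, mirroring the roles of the nodes above.

def emit(cloud, rate, speed):
    """Like an Emit compound: attributes here are evaluated only once,
    at emission, and never touched again by this function."""
    for _ in range(rate):
        cloud.append({"pos": 0.0, "vel": speed, "age": 0})

def simulate_step(cloud, dt, age_limit):
    """Per-frame update: integrate velocity into position (the role of
    Simulate Particles), age the particles, then delete at the limit."""
    for p in cloud:
        p["pos"] += p["vel"] * dt
        p["age"] += 1
    cloud[:] = [p for p in cloud if p["age"] < age_limit]

cloud = []
for frame in range(5):
    emit(cloud, rate=2, speed=1.0)       # adds points at every frame
    simulate_step(cloud, dt=1.0, age_limit=3)
```

After a few frames the cloud reaches a steady state: each frame two particles are born and the two oldest reach the age limit and are deleted, just as with Set Particle Age Limit and Delete Particles at Age Limit.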
1 Create a point cloud by choosing Get > Primitive > Point Cloud > Empty Cloud (or any of the shapes) from any toolbar.

2 In the ICE tree view, create a Simulated ICE Tree node: from the menu bar of the ICE Tree, choose Create > Simulated ICE Tree.

3 Drag the emitter's name from an explorer into the ICE Tree view to create a Get Data node for it. An easy way is to select the object and press F3 so that a floating explorer opens, then drag the emitter's name from there into the ICE Tree.

4 Drag one of the Emit compounds from the preset manager into the ICE tree view.

5 Drag the Simulate Particles node from the preset manager into the ICE tree view.

Plug all the nodes together as shown here. You can then continue to build your ICE tree as you like.
Particles that Bounce, Splash, Stick, Slide, and Flow
Stick
Splash
Bounce
Slide
Particle Goals

When you create a goal for particles, the particles are attracted to it or repelled from it, similar to magnets. With goals, you can create a number of particle effects, such as drops of water forming into a puddle, paint being sprayed over a surface, or butterflies following the infamous ClubBot.

Goals are part of the overall particle simulation, which means that any particles that are progressing toward a goal can also react to any other forces that are applied to them. In fact, goals are a force on particles, similar to how an attraction force works.

Creating goals requires the Move Towards Goal compound. This compound lets you do two things: choose the location on the goal object to which the particles are attracted (or repelled), and define how the particles move toward the goal, such as their speed, acceleration, and alignment with the goal.
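The "goals are a force" idea can be sketched as follows. This is an illustrative Python sketch, not the Move Towards Goal compound; the function names are hypothetical.

```python
# Illustrative sketch (not ICE): a goal expressed as one force among
# many, so particles moving toward a goal still react to other forces.

def goal_force(pos, goal, strength):
    """Accelerate toward the goal (positive strength) or away from it
    (negative strength), like a magnet."""
    return [(g - p) * strength for p, g in zip(pos, goal)]

def apply_forces(pos, vel, forces, dt):
    """Sum all forces acting on the particle, then integrate."""
    accel = [sum(force[i] for force in forces) for i in range(len(pos))]
    vel = [v + a * dt for v, a in zip(vel, accel)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel
```

Because the goal contributes just another entry in the `forces` list, it naturally combines with turbulence, gravity, or any other force in the same simulation.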
Spawning New Particles

Spawning generates new particles (points) from existing particles. These new particles are often referred to as particle trails. Spawning makes it easy to create effects such as fireworks, laser shots, streams of falling rain, or smoke trails.

If you spawn particles into the same point cloud, the shaders and forces on the spawned particles are the same as for the original point cloud. You can, however, add new attributes to the spawned particles to change their color, size, shape, and so on.

Spawning into a different point cloud is similar to creating a new particle simulation because this point cloud has a separate ICE tree. You can also use different shaders for that point cloud, giving you control over the rendered look of the spawned particles.
Spawning Trails
The Spawn Trails compound gives you a basic way to spawn new
particles. Here, pixie dust is spawned as a trail to follow the original
particle as it travels upwards.
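The trail idea can be sketched conceptually: each live particle adds a new point behind itself every frame. This is an illustrative Python sketch, not the Spawn Trails compound; the attribute inheritance shown is an assumption for the sketch.

```python
# Illustrative sketch (not ICE): spawning a trail particle at each live
# particle's position, copying selected attributes from the parent.

def spawn_trails(cloud, inherit=("color",)):
    """Return new trail particles born at each parent's position.
    Inherited attributes match the parent; age restarts at zero."""
    trails = []
    for parent in cloud:
        child = {"pos": parent["pos"], "age": 0}
        for attr in inherit:
            child[attr] = parent[attr]
        trails.append(child)
    return trails
```

Calling this every frame while the parent moves leaves a string of points along its path, which is exactly the pixie-dust trail effect described above.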
Particle Strands

Particle strands are solid shape trails that are drawn after a particle. These solid shapes are actually continuous segments of the shape that you have chosen for the particle, such as spheres, rectangles, boxes, discs, blobs, or even instanced particle geometry. Strands make it easy to create effects that require more solid-looking objects than trails, such as ribbons, seaweed, or hair, and much more.

Using the numerous Strands compounds, you have a lot of control over the appearance and movement of strands to create many types of particle effects. There are two main compounds you can use to actually create the strands, using two different methods:

• Create Strands is the basic compound that creates particle strands. You can use any particle shape for the strands.

• Generate Strand Trails lets you dynamically generate particle strands based on the length of the simulation and the number of segments, such as for "growing" things like grass or vines. One strand segment is created per second up to the maximum number of segments that you have set.

Because these two compounds create strands in different ways, you can use only one of them at a time on the same set of particles.
Create Strands
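The Generate Strand Trails timing rule described above (one segment per second, capped at a maximum) reduces to a small calculation. The function name below is hypothetical and the sketch is illustrative, not ICE code.

```python
# Illustrative sketch: segment count for a growing strand trail,
# assuming one segment per second up to a user-set maximum.

def strand_segment_count(elapsed_seconds, max_segments):
    """Number of strand segments after `elapsed_seconds` of simulation."""
    return min(int(elapsed_seconds), max_segments)
```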
Twist Strand
Turbulize Strand
Particle Instances

You can use any 3D geometric object, hierarchy of objects, or group of objects in place of particles to create many different effects. For example, you could use cars to create a flow of traffic or characters to create a crowd scene; or create flocking scenes with flying birds, butterflies, or insects. The object is assigned to a particle and stays with that particle for its lifetime.

To use instances as particles, you assign them to the point cloud using either the Instance Shape node or the Set Instance Geometry compound in the ICE Tree:

• If the instanced objects are not animated, you should use the Instance Shape node. This node provides the simplest and fastest way to create large numbers of instances whose geometry is not animated.

• If the instanced object is animated, you can use the Set Instance Geometry and Control Instance Animation compounds. If an object's transformation is animated, it has to be in relation to its parent, and then you choose the parent as the instance object.

Instances are exact copies of their master object, including its materials (color) and rendering information. However, instances inherit the particle's position, velocity, orientation, and size: the instance's own transformation is not used, although children keep their relative transformation to their parent.

If you're using instances as particle shapes in collisions with an obstacle (as rigid bodies or using a compound with surface interaction, such as Bounce Off Surface), you can use an approximated box or sphere around the instance: its actual shape is not used.
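The box/sphere approximation mentioned above is a standard collision simplification. The sketch below is illustrative only (not how Softimage computes it): it builds a crude centroid-based bounding sphere around a set of points.

```python
# Illustrative sketch: a rough bounding sphere around instance geometry,
# of the kind used to stand in for the real shape during collisions.

def bounding_sphere(points):
    """Centroid-centered bounding sphere for a list of (x, y, z) points.
    Crude but cheap: the sphere encloses every point."""
    n = len(points)
    center = tuple(sum(p[i] for p in points) / n for i in range(3))
    radius = max(
        sum((p[i] - center[i]) ** 2 for i in range(3)) ** 0.5
        for p in points
    )
    return center, radius
```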
ICE Particle States

Each State compound you define is plugged into the State Machine compound. This compound is the "grand central station" for the states. The states are executed in the order in which they're plugged into the State Machine, from top to bottom.
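The top-to-bottom ordering rule can be sketched as a simple loop. This is illustrative Python, not the State Machine compound; the test/action structure is an assumption for the sketch.

```python
# Illustrative sketch: states run in the order they are plugged in,
# each with a trigger condition and an action.

def run_state_machine(states, particle):
    """Apply each state's action to the particle when its trigger
    condition is met, in plug-in (top-to-bottom) order."""
    for state in states:
        if state["test"](particle):
            state["action"](particle)
    return particle
```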
ICE Rigid Bodies
This character is made up of rigid body particle cubes and is heading for a rigid body particle wall. What will happen?

Not so lucky this time! Here, the wall is set as passive, but the character isn't. Ouch.
ICE Particle Shaders
The ICE Particle shaders and shader compounds can be found in the preset manager or in the Nodes menu in the render tree.

Connecting Particle Shaders

To apply shaders to the point cloud, you connect them in the render tree. This gives you precise control over which shaders are connected together using which ports.

The shaders that you choose to plug in to a point cloud's Material node depend on whether you want to render the particles as a surface or as a volume.

Particle Surfaces

If you want to render particles as a surface, you can hook up any surface shader to the Surface port of the point cloud's Material node. In fact, when you create an ICE particle simulation, the Phong shader is connected to the point cloud's Material node by default.

Particle image sprites are rendered onto rectangle particle shapes using the Phong shader.

Particles using the Blob shape are rendered using the Lambert shader.

Particle Shader Compounds

Shader compounds are like ICE data compounds in that they contain several connected nodes (in this case, shader nodes). Once you have shaders hooked up together in the render tree as you like them, you can create a compound that contains all of these shaders. This allows you to create a standard particle shader effect, such as fire, that you can use in different scenes or share with other people.

Softimage ships with several particle shader compounds that you can use as a starting point for your own shader effects.

• Start out with the Particle Renderer or Particle Shaper shader compound to render a volume quickly. These compounds use the Particle Volume Cloud shader as a base.

• The Particle Gradient Fcurve compound creates a curve that you can plug into a Gradient port of a shader to control the gradient's falloff over distance.

• The Particle Strand Gradient compound sets up a color/alpha gradient for rendering particle strands.
Section 16
Shaders
A shader is a miniature computer program that
controls the behavior of the rendering software
during, or immediately after, the rendering process.
Some shaders compute the color values of pixels.
Other shaders can displace or create geometry on the
fly.
Shaders are used to create materials and effects in
just about every part of a scene. An object’s surface
and shadows are controlled by shaders. So are scene
lighting and camera lens effects. Even shaders’
parameters are usually controlled by other shaders.
You can even apply shaders at the render pass level
to affect the entire scene.
The Shader Library
H Clears the filter string (show all nodes). You can also delete the text string to show all nodes again.
About Surface Shaders
Strauss

Uses only the diffuse color to simulate a metal surface. The surface's specular is defined with smoothness and "metalness" parameters that control the diffuse to specular ratio as well as reflectivity and highlights.

Anisotropic

Sometimes called Ward, this shading model simulates a glossy surface using an ambient, diffuse, and a glossy color. To create a "brushed" effect, such as brushed aluminum, it is possible to define the specular color's orientation based on the object's surface orientation. The specular is calculated using UV coordinates.

Constant

Uses only the diffuse color. It ignores the orientation of surface normals. All the object's surface triangles are considered to have the same orientation and be the same distance from the light.

It yields an object whose surface appears to have no shading at all, like a paper cutout. This can be useful when you want to add static blur to an object so that there is no specular or ambient light.

Toon

This model begins with a constant-shading-like base color. Ambient lighting, as well as highlights and rim lights, are composited over the base color to produce the final result.

The result is a cel-animation type of shading that can vary enormously depending on how you configure the highlights and rim lights. The toon shading model is typically used in conjunction with the Toon Ink Lens shader (applied to the render pass camera), which creates the cartoon-style ink lines.
You can create a very specific color for an object by defining its ambient, diffuse, and specular colors separately on the Illumination page of its surface shader property editor.

To open an object's surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

The combined result of the ambient, diffuse, and specular colors/lighting contributions.

Diffuse

This is the color that the light scatters equally in all directions so that the surface appears to have the same brightness from all viewing angles. It usually contributes the most to an object's overall appearance and it can be considered the "main" color of the surface.

Ambient

This color simulates a uniform non-directional lighting that pervades the entire scene. It is multiplied by the scene ambience value, and blended with the diffuse color. Often, the ambient color is set to the same value as the diffuse color, allowing the scene ambience to provide the ambient color.

Specular

This is the color of shiny highlights on the surface. It is usually set to white or to a brighter shade of the diffuse color. The size of the highlight depends on the defined Specular Decay value. Specular highlights are not visible in all shading models.

Not all shading models support all of these basic characteristics. For example, only the Phong, Blinn, Cook-Torrance, and Anisotropic shading models support specular highlights (although the Strauss shader's Smoothness and Metalness parameters affect specularity). Similarly, the Strauss shader does not support an ambient color, while most other models do.

It's also worth noting that because different shading models compute these basic characteristics differently, the parameters that control the attributes vary from one property editor to another. For example, the Anisotropic shader has much more elaborate specular highlight controls than the Phong shader.
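How the three contributions combine can be shown with a simplified sketch. This is not a real shader: it assumes a single light, folds the highlight term into one number, and uses hypothetical parameter names purely to illustrate the ambient/diffuse/specular split.

```python
# Simplified sketch (not Softimage's shading code): combining ambient,
# diffuse, and specular contributions per color channel.

def shade(ambient, diffuse, specular, ambience, n_dot_l, highlight):
    """Ambient is scaled by scene ambience, diffuse by the light angle
    (n_dot_l), and specular by a precomputed highlight factor."""
    n_dot_l = max(0.0, n_dot_l)   # surfaces facing away get no diffuse
    return tuple(
        a * ambience + d * n_dot_l + s * highlight
        for a, d, s in zip(ambient, diffuse, specular)
    )
```

With the ambient color set equal to the diffuse color, the `ambience` factor alone lifts the dark side of the object, which matches the common setup described under Ambient above.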
Reflectivity, Transparency, and Refraction

In addition to controlling an object's basic surface shading characteristics, surface shaders also control reflectivity, transparency, and refraction. Parameters for controlling these attributes are on the Transparency/Reflection tab of the surface shader's property editor.

To open an object's surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

A surface shader's Reflection parameters control an object's reflectivity. The more reflective an object is, the more other objects in the scene appear reflected in the object's surface.

No reflectivity in gray ball's material

35% reflectivity

As an object becomes more reflective, its other surface parameters, such as those related to diffuse, ambient, and specular areas of illumination, become less visible. If an object's material is fully reflective, its other material attributes are not visible at all.

Reflectivity values are defined using color sliders. Setting the color to black makes the object completely non-reflective, while setting the color to white makes it completely reflective. If necessary, you can even control reflectivity in individual color channels.

You can also control reflectivity using a texture by connecting the texture to the surface shader's reflectivity input.

In this example, the surface shader's reflectivity parameter is connected to a simple black and white stripe texture. The white areas are reflective, while the black areas are not.

Normally, grayscale images are used since black, white, and shades of gray adjust reflectivity uniformly in all color channels. Black areas of the image make the corresponding portions of the object non-reflective, white areas make the corresponding portions completely reflective, and gray areas make the corresponding portions partially reflective.
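The black-to-white mapping described above is a linear blend per channel. The sketch below is illustrative, not the renderer's code; the function name is hypothetical.

```python
# Illustrative sketch: blending the surface color with the reflected
# color by a reflectivity value, where 0 = black (non-reflective) and
# 1 = white (fully reflective).

def mix_reflection(surface, reflected, reflectivity):
    """Linear per-channel blend of surface and reflected colors."""
    return tuple(
        s * (1.0 - reflectivity) + r * reflectivity
        for s, r in zip(surface, reflected)
    )
```

At reflectivity 1.0 the surface color disappears entirely, which is the "fully reflective objects show none of their other material attributes" behavior noted above.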
70% transparency with 30% reflection.

As with reflectivity, transparency affects the visibility of an object's other surface attributes. You can compensate for this by increasing the attributes' values, such as changing specular color values that were 1 on an opaque object to 10 or higher on a transparent object.

Transparency values are also defined using color sliders. Setting the color to black makes the object completely opaque, while setting the color to white makes it completely transparent. If necessary, you can even control transparency in individual color channels.

Normally, grayscale images are used since black, white, and shades of gray adjust transparency uniformly in all color channels. Black areas of the image make the corresponding portions of the object opaque, white areas make the corresponding portions completely transparent, and gray areas make the corresponding portions partially transparent, or translucent.
Refraction
When transparency is incorporated into an object’s surface definition,
you can also define the refraction value. Refraction is the bending of
light rays as they pass from one transparent medium to another, such
as from air to glass or water.
You can set the index of refraction from a surface shader’s property
editor. The default value is 1, which represents the density of air. This
value allows light rays to pass straight through a transparent surface
without bending. Higher values make the light rays bend, while values less than 1 make light rays bend in the opposite direction, simulating
light passing from air into an even less dense material (such as a
vacuum).
Refractive index values usually vary between 0 and 2, but you can type
in higher values as needed.
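The bending being described is Snell's law, n1·sin(θ1) = n2·sin(θ2). The sketch below works through it; the function name is hypothetical and this is a physics illustration, not renderer code.

```python
# Worked sketch of Snell's law: an index of 1 (air) leaves the ray
# straight, while a higher index bends it toward the surface normal.

import math

def refracted_angle(incident_deg, n1, n2):
    """Angle (degrees) of the ray after crossing the n1 -> n2 boundary,
    or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None   # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))
```

Passing from air (1.0) into a denser medium like glass (about 1.5) bends a 30° ray to roughly 19.5°, while equal indices leave it unchanged.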
• Material manager: You can use this tool to create and apply a material to an object. See The Material Manager on page 317.

• Preset manager: Drag and drop a shader or material preset from here onto the appropriate type of object to apply it, or drag it into the render tree as a node. See The Preset Manager on page 299.

• Shader's property editor: A shader's property editor contains all the parameters that you can edit. To the right of each parameter, there is a "plug" connection icon. Clicking this icon opens a menu that lists shaders that you can attach directly to that parameter.

• Shader stacks: Some scene elements, like render passes and cameras, have shader "stacks" in their property editors where you apply shaders that affect the whole scene rather than individual objects.
The Render Tree
Data travels from this node's output port ...

... and is processed by this node via its input port.

D Clears the render tree workspace.

E Opens the preset manager in a floating window.

I Bird's Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird's Eye View.

J Embedded preset manager shows all shader nodes and compounds that are available to use. You can drag and drop shader nodes from here into the render tree workspace. You can also get shaders from the Nodes menu.

K The render tree workspace. This is where you can connect shader nodes together to build trees.

N Texture layers. These layers let you mix several textures together so that each texture is blended with the cumulative result of the preceding textures.

O Material node: This node acts like a placeholder for every shader that is applied to an object. Every object must have one or it won't render. Its input ports support each type of shader.

When a port is connected, the value of its corresponding parameter is driven by the connection, which means that you can no longer set the parameter's value in that shader's property editor. In fact, the parameter and its controls (checkboxes, sliders, etc.) are not even displayed. If you remove the connection, the controls reappear in the property editor.
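The cumulative blending used by texture layers (each layer mixed over the result of the layers before it) can be sketched as a fold. This is illustrative Python, not the layering code; a layer is reduced here to a (value, weight) pair.

```python
# Illustrative sketch: each texture layer blends with the cumulative
# result of the preceding layers, top to bottom.

def composite_layers(base, layers):
    """Fold (value, weight) layers over a base value; weight 0 leaves
    the running result untouched, weight 1 replaces it."""
    result = base
    for value, weight in layers:
        result = result * (1.0 - weight) + value * weight
    return result
```

Because each layer sees the running result rather than the base, reordering the layers changes the outcome, which is why layer order matters in the stack.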
Node Color Codes

Every shader node in the render tree is color coded, as are each of its ports. This coding system helps you visualize which shaders are doing what within their render tree structures. The color-coded node types are: Material node, Material phenomenon, Surface shader, Texture shader, Lightmap shader, Environment shader, Realtime shader, Volume shader, Output shader, Lens/camera shader, and Light shader.

Click the arrow to expand or collapse a node. Click the port to create a connection arrow. Selected nodes are highlighted in white.

Shader node ports are also color coded. A node's output is indicated by a port (colored dot) in the top right of the node, while each input port is indicated on the left side of the node. The color of a port identifies what type of input value the port will accept, and what type of value it will output.

The following table shows which input/output port color is assigned to which type of value:

Color: Returns or outputs a color (RGB) value. These ports are usually used in conjunction with the surface of an object or when defining a light or camera.

Scalar: Represents a scalar input/output with any value between 0 and 1.

Vector: Represents an output/input that corresponds to vector positions or coordinates.

Boolean: Represents an input/output that corresponds to a 0 or 1, or On/Off.

Integer: Consists of a single integer (such as 2 or 73).

Texture/Image Clip: Accepts or returns an image file.

RealTime: Accepts connections from other realtime shaders and outputs to other realtime shaders or to the Material node's RealTime port.

Lightmap: Outputs the result of a lightmap shader to the Material node's Lightmap port.

Material Phenomenon: Outputs the result of a material phenomenon shader to the Material node's Material port.
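The rule that the color coding visualizes (an output port can drive an input port only when their types are compatible) can be sketched with a strict type check. This is a conceptual sketch, not the render tree's actual logic, and it assumes strict type equality with no implicit conversions.

```python
# Conceptual sketch: validating a connection between two ports by
# comparing their value types (strict matching assumed).

PORT_TYPES = {"color", "scalar", "vector", "boolean", "integer",
              "texture", "realtime", "lightmap", "material_phenomenon"}

def can_connect(output_type, input_type):
    """True if an output port of one node can drive an input port of
    another, under a strict same-type rule."""
    if output_type not in PORT_TYPES or input_type not in PORT_TYPES:
        raise ValueError("unknown port type")
    return output_type == input_type
```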
Building Shader Networks
Creating Shader Compounds
Section 17
Materials
In Softimage, an object’s look and feel is defined by
one or more shaders that are plugged into the
object’s material node. The material node itself
provides access to the object’s attributes while the
shaders control how those attributes appear when
rendered. This section introduces ways of creating
and working with materials.
About Materials
Every object needs a material. In Softimage, the term "material" is used to refer to the cumulative effect of all of the shaders that you use to alter an object's look and feel. Strictly speaking, though, materials in Softimage are really just containers for an object's various attributes. If an object's material has no shaders attached to it, nothing defines the object's look, and the object won't render.

The easiest way to understand a material is to look at it in the render tree, where it is represented by a Material node. The Material node lists all of the inputs to a given material. These inputs are sometimes referred to as "ports." Each port controls a set of object attributes. When the material is assigned to an object, the shaders that you connect to these ports alter the corresponding attributes.

For example, the Surface port controls object surface characteristics. By connecting a shader or a network of shaders to this port, you can change an object's color, transparency, reflectivity, and so on. The important thing to understand is that nearly every change you make to an object's appearance involves connecting shaders to define the object's material.

The Default Scene Material

Every new scene has a default material, called Scene_Material, which is assigned to the scene's root in branch mode. An object (in a hierarchy or not) that does not inherit a material from a parent, and does not have a locally-defined material, inherits the scene's default material. In the explorer, you can view the default material in the material library's hierarchy, or as a node of the scene root, which you can display by choosing Local Properties from the Show menu.

When you assign a local material to an object, it replaces the default scene material for that object only. If you remove or delete the object's local material, the object inherits the default scene material again.

You can modify the default scene material as you would any other material, and the changes are applied to any objects that inherit it.

If you delete the default scene material, the oldest created material in the scene becomes the new default material, and is assigned to all objects to which the previous default material was assigned (whether explicitly or through propagation).

Materials and Surface Shaders

It's worth noting that all new materials that you create in Softimage start out with some type of surface shader attached to them. This provides basic surface shading so that the material is renderable from the beginning. For example, if you create a material from within a material library, it has a Phong shader attached to its Surface, Shadow, and Photon ports. If you create a material using a command from the Render toolbar's Get > Material menu, you can choose a surface shader to attach to the material.

By default, new materials have a surface shader, like the Phong shader, attached to them.
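The inheritance rule described above (local material first, then the nearest material inherited from a parent, then the scene default) can be sketched as a walk up the hierarchy. This is an illustrative sketch, not Softimage's resolution code; the dictionary-based node representation is an assumption.

```python
# Illustrative sketch: resolving which material an object uses, per
# the rule described above.

def resolve_material(obj, scene_default="Scene_Material"):
    """Walk up the parent chain looking for a material; fall back to
    the default scene material if none is found."""
    node = obj
    while node is not None:
        if node.get("material"):
            return node["material"]
        node = node.get("parent")
    return scene_default
```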
The Material Manager
A The left panel contains the explorer that has the Scene (cluster) and Image Clip tabs; see the image on the right for more details.

In the Scene explorer, you can switch between local materials (applied locally on the object or cluster itself) and applied materials. Selecting a material in the explorer highlights it in the shelf and displays it in the bottom panel.

In the Image Clip explorer, all image clips in the scene are displayed.

B On the top, the command bar provides tools for applying materials, such as creating, duplicating, or deleting materials, as well as tools for managing material libraries.

C The middle right is a shelf with shaderballs for the materials in your scene. Multiple libraries appear on separate tabs.

Scene and Image Clip Explorer

G Drag and drop one or more images into the image clip explorer panel to create sources and clips.
Creating and Assigning Materials
Polygon mesh object with global material assigned. Object with specific polygons selected. Local material assigned to selected polygons.

In the explorer, a cluster's material appears under the cluster's node, rather than directly under the object's node. To access it, expand the object's Polygon Mesh > Clusters > name of cluster node. (The cluster's material is here; the object's material is here.)

If you remove a material from a cluster, the cluster inherits the material either assigned to or inherited by the object.

Simple Propagation

The larger sphere was branch-selected and given a checkerboard material. Because it was applied in branch mode, the material is inherited by all the descendants.

Local Material Application

One sphere was selected and given a blue material. This material is local for the selected object only, but not for any of its children.
Material Libraries
Most properties in Softimage are owned by the scene elements to which they're applied. Materials, on the other hand, belong to material libraries. Material libraries are common containers for all of the materials in a scene. Each time you create a material, it's added to a material library. Although all of the materials in a scene belong to a library, they are used only by the objects to which they are assigned.

Storing materials in a library makes it easy to share a single material between several objects. It also allows you to access and edit all of the materials in a scene from a single place. Furthermore, because materials belong to libraries and not to individual objects, you can delete an object from the scene, but keep its material for later use. If you no longer want to use a material, you can simply delete it once, regardless of the number of objects to which it's assigned.

The material manager is designed to let you easily view and manage your material libraries. Most of the commands that you need for managing your libraries are found in the Libraries menu.

Click a library tab to switch between libraries. The selected tab becomes the current library. Unless you explicitly create a new material in another library, all newly created materials are added to the current library.

You can also manage your libraries using an explorer with its scope set to Materials (press M).

You can create as many material libraries as you need. For example, you might want to keep separate libraries for different types of materials (wood, metals, rock, skin, scales, and so on), or create a material library for each character in your scene.

You can drag and drop materials onto the Favorites tab in the material manager to create shortcuts to materials that you want to keep handy. You can also create your own custom "favorites" tabs to collect and sort the material shortcuts as you like.

By default, material libraries are stored internally as part of the scene. However, you can store them externally, as dotXSI (.xsi) or material library (.xsiml) files, which allows you to share them between multiple scenes.
Section 18
Texturing
Texturing is the process of adding color and texture
to an object. You can use textures to define
everything from basic surface color to more tactile
characteristics like bumps or dirt. Textures can also
be used to drive a wide variety of shader parameters,
allowing you to create maps that define an object’s
transparency, reflectivity, bumpiness, and so on.
Types of Textures
Softimage allows you to use two different types of textures: image textures, which are separate image files applied to an object's surface, and procedural textures, which are calculated mathematically.

Image Textures

Image textures are images that can be wrapped around an object's surface, much like a piece of paper that's wrapped around an object. To use a 2D texture, you start with any type of picture file (PIC, TIFF, PSD, etc.). These can be scanned photos or any file containing data that describes all the pixels in an image, RGB or RGBA data.

2D textures are wrapped around objects.

Image Sources and Clips

Every time you select an image to use as a texture or for rotoscopy, an image clip and an image source of the selected image is created.

• An image source is not really a usable scene element. It is merely a pointer to the original image stored on disk. Image sources are listed in your scene in the Sources folder of the Scene Root. They can be stored within your project folder structure, or outside of it.

• An image clip is a copy, or instance, of an image source file. Each time you use an image source, an image clip of it is created. You can have as many clips of the same source as you wish. You can then modify the image clip without affecting the original source image. Clips are useful because they allow you to create different representations of the same texture image (source), such as five different blur levels of the same source image. Also, clips are memory-efficient because the source is only loaded once, regardless of the number of clips created from it.

Procedural Textures

Procedural textures are generated mathematically, each according to a particular algorithm. Typically, they are used to simulate natural materials and patterns such as wood, marble, rock, veins, and so on.

Softimage's shader library contains both 2D and 3D procedural textures. 2D procedurals are calculated on the object's surface — according to their texture projections — while 3D procedurals are calculated through the object's volume. In other words, unlike 2D textures, 3D textures are projected "into" objects rather than onto them. This means they can be used to represent substances having internal structure, like the rings and knots of wood.

3D textures are defined throughout an object.
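The idea behind a 3D procedural can be sketched in a few lines of Python. This is purely a conceptual illustration (not Softimage's shader code): a wood-like pattern computed from a point's distance to a trunk axis, so the value is defined at every point inside the volume.

```python
import math

def wood_rings(x, y, z, ring_frequency=8.0):
    """Toy 3D 'wood' procedural: concentric rings around the Y axis.

    Because the value depends only on the 3D point itself, the pattern
    exists throughout the object's volume, not just on its surface.
    """
    radius = math.sqrt(x * x + z * z)       # distance from the trunk axis
    wave = math.sin(radius * ring_frequency * 2.0 * math.pi)
    return 0.5 + 0.5 * wave                 # remap from [-1, 1] to [0, 1]

# Sampling the same function on the surface or deep inside both work:
surface_sample = wood_rings(1.0, 0.0, 0.0)
interior_sample = wood_rings(0.5, 0.3, 0.2)
```

Cutting into an object textured this way would reveal the same ring structure, which is exactly why 3D procedurals suit substances with internal structure.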
Applying Textures
There are a number of ways to connect textures to objects in Softimage. These include:

• Using the render tree, where you can choose a texture from the Nodes > Texture menu. Once you choose a texture, it is added to the render tree workspace and you can connect it to the material's or other shaders' ports.

• Using the parameter connection icon menu in a shader's property editor, which lists textures that you can attach directly to the parameter. Attaching a texture to a parameter lets you control the parameter with a texture instead of a simple color or numeric value. This is a convenient way to connect a texture to a surface shader's Ambient and Diffuse ports immediately after applying the surface shader to the object.
Texture Projections and Supports
Types of Texture Projections

Choosing the right type of texture projection is an important part of the texturing process. The more closely the projection conforms to the original shape of the object, the less you'll have to adjust the texture to get the object looking just right. This section describes the types of texture projections that are available to you.

All of the projections described can be applied to objects from the Render toolbar's Get > Property > Texture Projection menu. You can also create and apply texture projections from any texture shader's property editor. Every texture shader needs a projection to define where the texture should appear on the object.

Planar projections are used for mapping textures onto an object's XY, XZ, and YZ planes. By default, the projection plane is one pixel smaller than the surface plane, therefore no "streaking" or distortion occurs on the object's other planes. (Figure: Planar XY, XZ, and YZ projections.)

If you map the picture file cylindrically, it is projected as if wrapped around a cylinder. (Figure: Cylindrical, Lollipop, and Spherical projections.)
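The geometric difference between these projection types can be pictured with a small sketch. This is illustrative Python only (not Softimage's implementation, and the function names are hypothetical); each projection maps a 3D point to a (u, v) pair in its own way.

```python
import math

def planar_xy_uv(x, y, z, size=2.0):
    """Planar XY projection: ignore Z, remap X and Y into [0, 1]."""
    return (x / size + 0.5, y / size + 0.5)

def cylindrical_uv(x, y, z, height=2.0):
    """Cylindrical projection: angle around the Y axis gives U,
    height along Y gives V, as if the image were wrapped around
    a cylinder."""
    u = (math.atan2(z, x) / (2.0 * math.pi)) + 0.5
    v = y / height + 0.5
    return (u, v)

def spherical_uv(x, y, z):
    """Spherical projection: longitude gives U, latitude gives V."""
    r = math.sqrt(x * x + y * y + z * z) or 1.0
    u = (math.atan2(z, x) / (2.0 * math.pi)) + 0.5
    v = math.acos(max(-1.0, min(1.0, y / r))) / math.pi
    return (u, v)
```

The closer the chosen mapping matches the object's shape (planar for flat surfaces, cylindrical for tubes, spherical for round objects), the less the texture stretches.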
A cubic projection is applied to a head so that a different part of the texture image is projected onto each face. (Figure labels: +Z face (front), +X face (right), -Y face (bottom).)

By default, a spatial projection's texture support appears in the center of the textured object's volume. (Figure: Polygon sphere with a vein texture applied using a spatial projection.)
(Figure captions: Texture image used; wireframe view of the rendered frame; top view showing where the texture is projected.)
Unique UVs Projection (Polygons Only)

Unique UVs mapping applies a texture to polygon objects using one of two possible methods:

• Individual polygon packing assigns each polygon's UV coordinates to its own distinct piece of the texture so that no one polygon's coordinates overlap another's. This is useful for rendermapping polygon objects. You can apply textures to an object using a projection type appropriate to its geometry, then rendermap the object using a new Unique UVs projection to output a texture image that you can reapply to the object. The texture is applied so that each polygon is textured properly, without you having to worry about "unfolding" it to fit.

• Angle Grouping, after deciding on a projection direction, groups neighboring polygons whose normal directions fall within a specified angle tolerance. This process is repeated until all of the object's polygons are in a group. The groups—or islands—are then assigned to distinct pieces of the texture so that no two islands' coordinates overlap each other.

Unique UVs projections do not have a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.

(Figure captions: A Unique UVs projection was applied to this sphere. The Individual Polygon Packing method produces UV coordinates with each polygon's UVs separated from the rest of the coordinate set so they can be assigned to their own portion of the texture. The Angle Grouping method produces "islands" of polygons.)
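The Angle Grouping pass can be pictured as a flood fill over polygon adjacency, grown while neighbor normals stay within the tolerance. The sketch below is a conceptual illustration, not Softimage's actual algorithm; the data layout is hypothetical.

```python
import math

def angle_between(n1, n2):
    """Angle in degrees between two unit normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def group_by_angle(normals, neighbors, tolerance=30.0):
    """Grow islands of adjacent polygons whose normals stay within
    the angle tolerance of the island's seed polygon."""
    unassigned = set(range(len(normals)))
    islands = []
    while unassigned:
        seed = min(unassigned)
        island, queue = {seed}, [seed]
        unassigned.remove(seed)
        while queue:
            poly = queue.pop()
            for nbr in neighbors[poly]:
                if nbr in unassigned and \
                        angle_between(normals[seed], normals[nbr]) <= tolerance:
                    unassigned.remove(nbr)
                    island.add(nbr)
                    queue.append(nbr)
        islands.append(sorted(island))
    return islands

# A box-like strip: two up-facing polygons, then two side-facing ones.
normals = [(0, 1, 0), (0, 1, 0), (1, 0, 0), (1, 0, 0)]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
islands = group_by_angle(normals, neighbors)  # [[0, 1], [2, 3]]
```

The 90-degree change in normal direction between polygons 1 and 2 exceeds the tolerance, so the strip splits into two islands, each of which would then get its own piece of the texture.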
Editing Texture Projections
• Choose Modify > Projection > Inspect All UVs to open a multi-selection property editor for all of the object's texture projections.

• Choose Modify > Texture > name of the texture from the Render toolbar to open the texture's property editor. Then click the Edit button on the Texture tab (beside the Texture Projection list) to open the Texture Projection property editor.

The texture projection's wrapping options control whether the texture extends past the projection's boundaries to wrap around the object. The examples below show a sphere whose texture projection has been adjusted such that the texture covers only a portion of the object's surface. You can see the effect of wrapping in different directions.
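What a wrapping toggle does to a coordinate that falls outside the projection's 0-to-1 range can be sketched in one function (illustrative only; the function name is hypothetical):

```python
def apply_wrapping(u, wrap=True):
    """Map a texture coordinate that falls outside [0, 1].

    With wrapping on, the texture repeats past the projection's
    boundary; with wrapping off, the coordinate is clamped so the
    texture's edge simply extends.
    """
    if wrap:
        return u % 1.0                # repeat: 1.25 -> 0.25
    return max(0.0, min(1.0, u))      # clamp: 1.25 -> 1.0
```

The same choice is made independently in the U and V directions, which is why the manual shows the effect of wrapping in different directions separately.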
Transforming Texture Projections

By default, a texture projection fills the entire texture support. For example, if you apply a simple XZ Planar projection to a grid, the texture coordinates span the entire projection from one grid corner to the other. You can transform the texture projection to reposition the texture, or to make room on the support for other projections in different locations.

There are two ways to transform texture projections: using the projection manipulator in a 3D view, or by editing the scaling, rotation, and translation values in the Texture Projection property editor.

To activate the projection manipulator, press j, or choose Modify > Projection > Edit Projection Tool from the Render toolbar. The manipulator allows you to reposition a texture projection on an object by changing the projection's position on the texture support. In edit mode, the mouse cursor changes to the manipulator icon. Right-click to switch to another projection, if one exists.

• Drag the intersection of the red and green arrows to translate the projection freely.
• Drag the red line to translate the projection horizontally; drag the green line to translate it vertically.
• Drag the red arrow to scale the projection horizontally; drag the green arrow to scale it vertically.
• Drag one of the corner handles or borders to scale the projection.
• Middle-click + drag to rotate the projection about its center.

Alternatively, you can use the texture projection definition parameters (the UVW transformation controls in the Texture Projection property editor) to transform a texture on the surface of an object.
UV Coordinates
Applying a texture projection to an object creates a set of texture coordinates — often called UV coordinates or simply UVs — that control where the texture corresponds to the surface of the object.

• On a polygon object, each vertex can hold multiple UV coordinates — one for each polygon corner that shares the vertex. The portion of the texture enclosed by a polygon's UVs is mapped to the polygon.

• On NURBS objects, UV coordinates are not stored at the vertices; instead, they are generated based on a regular sampling of the object's surface. However, as with polygon objects, the portion of the texture enclosed by, say, four UVs is mapped to the corresponding portion of the object.

You can view and adjust UV coordinates using the texture editor, where they are represented by sample points. When you select sample points, you are actually selecting the UV coordinates held at the corresponding position on the object.

For example, as you can see in the images below, the center point of a 2x2 polygon grid holds four UV coordinates. When you select the corresponding sample point in the texture editor, you are selecting all four coordinates (although it is possible to select a single polygon-corner's UV coordinate).

In this example, the image shown left was used to texture a 2x2 polygon grid such that each polygon's UV coordinates were mapped to the texture differently. This exploded view of the textured grid shows how each polygon's UVs correspond to the texture image.
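The storage rule described above (one UV per polygon corner, so a shared vertex can hold several) can be sketched with plain Python dictionaries. This is a hypothetical data layout for illustration, not Softimage's internal format:

```python
# A 2x2 polygon grid: the center vertex (index 4) is shared by all
# four polygons, so it can hold four distinct UV coordinates, one
# per polygon corner. UVs are keyed by (polygon_index, vertex_index).
corner_uvs = {
    (0, 4): (1.0, 0.0),  # polygon 0's corner at the center vertex
    (1, 4): (0.0, 0.0),  # polygon 1's corner at the same vertex
    (2, 4): (1.0, 1.0),
    (3, 4): (0.0, 1.0),
}

def uvs_at_vertex(vertex_index):
    """Selecting the sample point at a vertex selects every
    polygon-corner UV stored there."""
    return [uv for (poly, vtx), uv in corner_uvs.items()
            if vtx == vertex_index]

center_uvs = uvs_at_vertex(4)  # four UV coordinates, one per corner
```

This mirrors what happens in the texture editor: clicking the center sample point grabs all four coordinates at once, while a single (polygon, corner) key corresponds to selecting one polygon-corner's UV.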
(Figure: the texture editor.)

• Texture image: the image clip currently applied to the object.
• Connectivity tabs help you make sense of the object's UVs by highlighting boundaries shared between UV "islands".
• The status bar displays the UV coordinates, pixel coordinates, and RGBA values of the current mouse pointer position.
• Selected UVs are highlighted red, and unselected UVs are blue.
• This character and his head are separate objects, each with its own projection. Both sets of UVs are shown in the texture editor.
Editing UV Coordinates in the Texture Editor
Texture Layers
The Texture Layer Editor

The texture layer editor is a grid-style editor from which you can view and edit all of a shader or material's texture layers.

The advantage of using the texture layer editor is that it packs a tremendous amount of information into a relatively compact interface. At a glance, you can see which shaders are directly connected to a shader's port, how many texture layers have been added to the shader, how many ports those layers affect, and how and in which order the layers are blended together. Add to this the ability to modify the majority of each layer's properties, and the texture layer editor makes for quite a powerful tool.

To open the texture layer editor, choose View > Rendering/Texturing > Texture Layer Editor from the main menu.
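Why layer order matters can be sketched with a minimal blending loop. This is a single-channel, conceptual illustration (an assumed simple mix, not the editor's actual blend modes):

```python
def blend_layers(base, layers):
    """Blend texture layers in order over a base value.

    Each layer is a (color, mask) pair; the mask controls how much
    of the layer's color replaces the accumulated result, so the
    order of the layers changes the final value.
    """
    result = base
    for color, mask in layers:
        result = result * (1.0 - mask) + color * mask
    return result

# A half-strength white layer followed by a quarter-strength black one:
value = blend_layers(0.0, [(1.0, 0.5), (0.0, 0.25)])  # 0.375
```

Swapping the two layers in this example yields 0.5 instead of 0.375, which is exactly the kind of ordering information the texture layer editor surfaces at a glance.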
Texture Layers in the Render Tree

When a shader has one or more texture layers, a new section called Layers is added to its node in the render tree. The Layers section contains a parameter group for each of the shader's layers. Expanding the Layers section reveals all of the individual layer parameter groups. Expanding an individual texture layer's parameter group reveals the ports for its Color and Mask parameters.

Layers behave exactly like any other parameter group in the render tree, meaning that you can connect shaders to texture layer parameters as you would to any other shader parameter. This lets you control each texture layer with its own branch of the render tree.

(Figure labels: Layers section; collapsed layer parameter group; expanded layer parameter group.)
The sphere shown here was bump-mapped with a fine noise. A negative bump factor was used to make the white areas bump outward.

Creating a Bump Map

To give you the most control over surface bumping, the best way to create a bump map is to connect a Bumpmap shader to the Bump Map port of an object's material node.

However, every texture shader has bump map parameters, so you can create a bump map using textures that you've connected to, for example, a surface shader's Ambient and Diffuse ports.

Consider the sphere shown here: even with a very high bump step, the bumping is not convincing on the silhouette, where there is no indication that the surface is raised. In these cases, it's better to either model the necessary geometry or to use a displacement map.

Displacement Maps

A displacement map is a scalar map that, for each point on an object's surface, displaces the geometry in the direction of the object's normal. Unlike regular bump mapping, which "fakes" the look of relief, displacement mapping creates actual self-shadowing geometry. The sphere shown here was displacement-mapped using the texture shown below.
342 • Softimage
Bump Maps and Displacement Maps
Creating a Displacement Map

You create a displacement map by connecting a texture, preferably grayscale, to the Displacement port of an object's material node. It is often helpful to add an intensity node between the map and the material node to help control the displacement.

The sphere on the left uses a bump map, while the one on the right uses a displacement map. In this case, the difference is slight enough that the bump map's shorter render time makes it the better choice.

Using Displacement Maps and Bump Maps Together

You can use bump maps and displacement maps together to create extremely detailed surfaces. Typically, the best approach is to use a displacement map to create the coarser surface detail — major features that need to be visible at the object's edges and can benefit from self-shadowing. You can then use the bump map to create a top layer of fine detail. The bump-mapping is applied to the displaced geometry.
Reflection Maps
Reflection maps, also called environment maps, can be used to simulate an image reflected on an object's surface, without using actual raytraced reflections. They can also be used to add an extra reflection to an object's reflective, raytraced surface.

When objects are reflective, you can define whether the reflections on their surfaces are Raytracing Enabled or Environment Only. Reflection settings are found on the Transparency/Reflection tab of the object's surface shader's property editor (choose Modify > Shader from the Render toolbar to open the property editor).

Raytraced reflections are slower to render because they actually compute reflections for everything around the object. Non-raytraced reflection maps are much faster to compute because they simulate the reflection of a specified texture or image, defined by an environment map, on the object's surface.

When reflection mapping is used without raytracing, only the reflection map appears on the object's surface; when used with raytracing, the map is combined with raytraced reflections.

• Raytraced reflection only: note how reflective objects reflect other objects in the scene. For example, you can see the flask and the floor reflected in the retort.
• Reflection map only: using only a reflection map, no scene objects are reflected in reflective surfaces. Instead, the only reflection is that simulated by the reflection map.
• Raytraced reflection and reflection map: with both types of reflection activated, you get the real reflections of scene objects and simulated reflections from the map, producing highly detailed reflections.
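Both approaches start from the same reflection direction: the incoming direction mirrored about the surface normal. A sketch of that standard calculation (illustrative Python):

```python
def reflect(direction, normal):
    """Mirror an incoming direction about a unit surface normal,
    using R = D - 2 (D . N) N. A reflection map is then simply
    looked up in the direction of R, instead of tracing rays
    into the scene to find what is actually there."""
    dx, dy, dz = direction
    nx, ny, nz = normal
    dot = dx * nx + dy * ny + dz * nz
    return (dx - 2.0 * dot * nx,
            dy - 2.0 * dot * ny,
            dz - 2.0 * dot * nz)

# A ray heading straight down onto an upward-facing floor bounces up:
bounced = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
```

The environment-map lookup is what makes non-raytraced reflections cheap: one vector computation and one texture fetch, versus a full secondary-ray trace.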
344 • Softimage
Baking Textures with RenderMap
Before RenderMap

The disembodied hand shown here was textured using a combination of several images (color, alpha, displacement, specular, and bump maps) mixed together in a complex render tree, and lit using two infinite lights. The result is a highly detailed surface that incorporates color, bump, displacement, and lighting information, and takes a fair amount of time to render.

After RenderMap

To bake the hand's surface attributes into a single texture file, a RenderMap property was applied to the hand, and a Surface Color map was generated. The resulting texture image was then applied directly to the Surface input of the hand's material node. Finally, the scene lights were deleted, producing the result shown at right: a good approximation of the hand's original appearance.

Because the hand's illumination is baked into the rendermap image, you can get this result without using lights or an illumination shader.
1. Choose Get > Property > Color at Vertices Map to add a CAV property to the selected object. An object can have as many CAV properties as you need.

2. Press Ctrl+W to open the Brush Properties property editor. On the Vertex Colors tab, you can choose a paint mode and color, set the brush size, set falloff and bleeding options, and so on. Basically, you're defining how the brush strokes look.
Section 19
Lighting
Conventional lighting (direct light sources), indirect lighting, and image-based lighting are all techniques that contribute to a scene's illumination and affect the way all object surfaces appear in the rendered image.
Types of Lights
You can add lights to a scene by choosing them from the Render toolbar's Get > Primitive > Light menu. Every light type has its own special characteristics and is represented by its own icon in 3D views.

Infinite (Default)

Infinite lights simulate light sources that are infinitely far from objects in the scene. There is no position associated with an infinite light, only a direction. All objects are lit by parallel light rays. The scene's default light is infinite.

Point

Point lights cast rays in all directions from the position of the light. They are similar to light bulbs, whose light rays emanate from the bulb in all directions.

Spot

Spot lights cast rays in a cone shape, simulating real spotlights. This is useful for lighting a specific object or area. The manipulators can be used to edit the light cone's length, width, and falloff points.

Light Box

Light box lights simulate a light diffused with a white fabric. The light and shadows created by this light are very soft. Specularity is still visible, but noticeably weaker. Manipulating the box shapes the projected light.

Neon

Neon lights simulate real-world neon lights. They are essentially point lights whose settings and shapes are altered to resemble fluorescent tubes. The manipulators can be used to change the tube into any rectangular or square shape.
Placing Lights
You can translate, rotate, and scale lights as you would any other object. However, scaling a light only affects the size of its icon and does not change any of the light's properties.

Rotating an infinite light. This is the only useful transformation for infinite lights, since their scale and position do not affect the lighting. Rotating the light, on the other hand, changes its direction.

Placing Spotlights Using the Spot Light View

The Spot Light view is a 3D view that lets you select from a list of spotlights available in the scene. A spotlight view is useful to see what objects a spotlight is lighting and from what angle.

1. Select a spotlight from the view menu to see the scene from the light's point of view.
(Figure caption: Intensity: 0.75.)
Setting Light Properties
Falloff

(Figure captions: Start falloff = 6, End falloff = 8; the inner, solid cone is the spotlight's Spread Angle; the lower circle is the End Falloff point. Start and End Falloff values using a point light, umbra = 0: the bottom corner of the chess board is 0; the top left corner is 10.)
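One simple model of start and end falloff, matching the figure's values of 6 and 8, is full intensity inside the start distance, zero beyond the end distance, and a linear fade between the two. This is an assumed interpolation for illustration only:

```python
def falloff_attenuation(distance, start=6.0, end=8.0):
    """Attenuate light intensity between the start and end falloff
    distances: 1.0 inside start, 0.0 beyond end, and a linear
    fade in between."""
    if distance <= start:
        return 1.0
    if distance >= end:
        return 0.0
    return (end - distance) / (end - start)

# With start=6 and end=8, a point at distance 7 gets half intensity:
half = falloff_attenuation(7.0)  # 0.5
```

Moving the start and end values closer together sharpens the edge of the lit region; moving them apart softens it.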
You also need to make sure that the Primary Rays Type is set to
Raytracing in the renderer options.
Creating Shadows
Global Illumination
Global illumination simulates the way bright light bounces off of objects and bleeds their color into surrounding surfaces. When global illumination is activated, photons emitted from a designated light travel through the scene, bounce off photon-casting objects, and are stored by photon-receiving objects.

Photon casting and reception are not mutually exclusive properties: an object can do both, but only a light can emit photons. Global illumination is often used with caustics, which is also a photon effect.

The following is an overview of how to set up global illumination for the mental ray renderer.

2. Set the light to emit global illumination photons.
Caustics
Caustic effects recreate the way that light is distorted when it bounces off a specular surface or passes through refractive objects or volumes. The classic example is the light sparkling in the middle of a wine glass or on the floor of a swimming pool. In either case, light passes through refractive surfaces and is distorted, creating complex light patterns on the surfaces that it affects.

As with global illumination, caustics compute how photons emitted from a light travel across the scene and bounce over and through caster and receiver objects.

Here is an overview of setting up caustic lighting for the mental ray renderer, which is almost identical to setting up global illumination:

1. Define objects as casters and receivers. An object's visibility property allows you to set options that control how the object responds to caustic photons emitted from a light.

3. Adjust the caustic effect. Adjust the rendering options that control the photon effect on the Caustics and GI tab for the renderer. Activate Caustics on this tab, then set these two important parameters:

• Caustic Accuracy specifies the number of photons that are considered when any point is rendered.

• Photon Search Radius specifies the distance from the rendered point within which photons are considered.

You'll also need to go back to the property editors of all emitting lights and fine-tune the photon intensity and the number of emitted photons.
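How the two parameters interact can be sketched conceptually: candidate photons are those within the search radius of the shaded point, and at most the accuracy count of the nearest ones are used. This is an illustration of the idea, not mental ray's actual estimator:

```python
def gather_photons(point, photons, search_radius, accuracy):
    """Collect the photons used to shade a point.

    Only photons within search_radius of the point are candidates;
    of those, the nearest ones are kept, up to the accuracy count.
    """
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, point))

    nearby = [p for p in photons if dist2(p) <= search_radius ** 2]
    nearby.sort(key=dist2)
    return nearby[:accuracy]

photons = [(0.1, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
hits = gather_photons((0.0, 0.0, 0.0), photons,
                      search_radius=1.0, accuracy=2)
# The far photon is outside the radius; the two near ones are used.
```

Raising the accuracy smooths the effect at the cost of render time, while the search radius bounds how far afield the renderer looks for contributing photons.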
You can use the scene objects' visibility properties to precisely control how each object participates in final gathering calculations.
Ambient Occlusion

Ambient occlusion is a fast and computationally inexpensive way to simulate indirect illumination. It works by firing sample rays into a predefined hemispherical region above a given point on an object's surface in order to determine the extent to which the point is blocked, or occluded, by other geometry.

Once the amount of occlusion has been determined, a bright and a dark color are returned for points that are unoccluded and occluded, respectively. Where the object is partially occluded, the bright and dark colors are mixed in accordance with the amount of occlusion.

In Softimage, you can create an ambient occlusion effect by connecting the Ambient Occlusion shader in the render tree. This is most commonly done at the render pass level to create an occlusion pass that can be added in and adjusted during compositing. You can also use the shader on individual objects to limit the occlusion calculation.

The image above shows a scene rendered using only the Ambient Occlusion shader. The bright color is set to white and the dark color to black. This type of rendering can be composited with other passes to add the occlusion effect to the scene's color and illumination.

Image-Based Lighting

You can light your scenes with images using the Environment shader, which surrounds the scene with an image. This shader has a set of parameters that allow you to control the image's contribution to final gathering and reflections.

Although you can use any image to light the scene this way, you will get the best results using a High Dynamic Range (HDR) image. That's because HDR images contain a greater range of illumination than regular images, making them better able to simulate real-world lighting.
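Returning to ambient occlusion, the bright/dark mixing rule described above reduces to a single interpolation once the hemisphere sampling has produced an occlusion fraction. A sketch for illustration:

```python
def ambient_occlusion_color(occlusion, bright=1.0, dark=0.0):
    """Mix the bright and dark colors by the fraction of sample rays
    that were blocked: 0.0 means fully unoccluded (bright color),
    1.0 means fully occluded (dark color)."""
    return bright * (1.0 - occlusion) + dark * occlusion

open_sky = ambient_occlusion_color(0.0)    # unoccluded: bright color
crevice = ambient_occlusion_color(1.0)     # fully occluded: dark color
partial = ambient_occlusion_color(0.25)    # partially occluded mix
```

With bright set to white and dark to black, as in the example render above, the output is a grayscale pass that composites cleanly over the scene's color and illumination.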
Light Effects
The point light inside of this street lamp uses a flare effect. Flares are created as properties of scene lights.

In the background of the scene, you can see the effect of depth-fading. Even though it affects the entire scene, the depth fading is defined by a light's volumic property.
Section 20
Cameras
Virtual cameras in Softimage are similar to physical cameras in the real world. They define the views that you can render. You can add as many cameras as you want in a scene. You can also achieve a photorealistic motion blur effect for every object and/or camera in your scene.
Types of Cameras
Each of the images below was taken from the same position, but using
a different camera each time. The image on the right shows a
wireframe view of the original scene, including the position of the
camera.
These camera types are available from the Get > Primitive > Camera
menu.
(Figure: Telephoto and Orthographic cameras.)
The Camera Rig
Setting Camera Properties
Camera Format
The camera's "format" refers to the picture standard that the camera is using and the corresponding picture ratio. You can also specify a custom picture standard with a picture ratio that you define. The default camera format is NTSC D1 4/3 720x486, with a picture ratio of 1.333, but several standard NTSC, PAL, HDTV, Cine, and Slide formats are also available.

The camera's Vertical field of view was made large enough to accommodate the entire building. The Horizontal field of view was automatically calculated based on the aspect ratio.

Using the same camera in the same location, the Vertical field of view is much smaller, thus making only a small part of the building visible.
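The automatic calculation mentioned above (horizontal field of view derived from the vertical field of view and the picture ratio) follows standard perspective geometry. A sketch:

```python
import math

def horizontal_fov(vertical_fov_deg, picture_ratio):
    """Derive the horizontal field of view from the vertical one and
    the picture (aspect) ratio, using the usual perspective relation
    tan(hfov / 2) = ratio * tan(vfov / 2)."""
    half_v = math.radians(vertical_fov_deg) / 2.0
    half_h = math.atan(picture_ratio * math.tan(half_v))
    return math.degrees(2.0 * half_h)

# With the default 1.333 picture ratio, a 45-degree vertical field
# of view corresponds to a horizontal field of view of roughly 58
# degrees; with a square 1.0 ratio, the two angles are equal.
hfov = horizontal_fov(45.0, 1.333)
```

This is why widening the vertical field of view to fit the whole building automatically widens the horizontal field of view in proportion to the format's aspect ratio.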
This is a camera with no clipping planes set, which means the resulting image (right) shows every object in the scene.

This is a camera with near and far clipping planes set. The near plane is between the first two buildings and the far clipping plane is between the last two buildings. Everything before the near plane is invisible and everything beyond the far clipping plane is also invisible, as seen in the resulting image (right).

(Shader stack buttons: one applies a shader to the camera; the other removes a shader from the shader stack.)
Lens Shaders
The images below and beside show this scene rendered using three different lens shaders. (Figure: Toon Ink Lens shader.)
Motion Blur

Motion blur adds realism to a scene's moving objects by simulating the blur that results from objects passing in front of a camera lens over a specified period of exposure. In Softimage, you can easily achieve a photorealistic motion blur effect for every object and/or camera in your scene.

You can apply motion blur properties to cameras. This is useful when both the camera and scene objects are moving, but you only want the blur caused by the objects' movement.

Rendering Motion Blur

Motion blur is active for the scene by default. To view the motion blur of objects in a scene, activate the motion blur settings in the render region options and/or the render pass options. As long as these options are on and you have a moving object in your scene, the motion blur is visible.

First set the scene motion blur settings. In particular, set the Speed option, which specifies the time interval (usually between 0 and 1) during which the geometry and any motion transformations and motion vectors are evaluated for the frame. The motion data is then pushed to the renderer (by default, mental ray).

Setting the Speed value to 0 turns motion blur off. Longer (slower) shutter speeds (a difference of greater than 0.6) create a wider and/or longer motion blur effect, simulating a faster speed. Shorter (quicker) shutter speeds (a difference of less than 0.3) create subtler motion blurs.
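The Speed interval can be pictured as the set of sub-frame times at which moving geometry is evaluated. The sketch below is a conceptual illustration only, not how mental ray samples internally:

```python
def shutter_sample_times(frame, speed, samples=5):
    """Return evenly spaced evaluation times across the shutter
    interval. A speed of 0 disables motion blur (a single sample at
    the frame); larger speeds widen the interval, and therefore the
    resulting blur."""
    if speed <= 0.0 or samples < 2:
        return [float(frame)]
    step = speed / (samples - 1)
    return [frame + i * step for i in range(samples)]

times = shutter_sample_times(10, speed=0.5)
# Five samples spread over half a frame:
# 10.0, 10.125, 10.25, 10.375, 10.5
```

An object's positions at these times are what get smeared together, which is why a larger Speed value reads as faster apparent motion.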
Section 21
Rendering
Rendering is the last step in the 3D content creation
process. Once you have created your objects,
textured them, animated them, and so on, you can
render out your scene as a sequence of 2D images.
Your ultimate goal may not be just to render, but to
optimize rendering quality and speed.
Rendering Overview
The process of rendering out your scenes can vary considerably from project to project. However, here is a typical sequence of tasks you might follow when rendering:

1. Set up render passes and define their options. Render passes let you render different aspects of your scene separately, such as a matte pass, a shadow pass, a highlight pass, or a complete beauty pass. You can define as many render passes as you want: within each pass, you can create partitions of lights and objects, then apply shaders and control their settings together.

2. Set up render channels and define their options. These allow you to output different information about the pass to separate files.

3. Set rendering options. All objects, including lights and cameras, are defined by their rendering properties. For example, you can determine whether a geometric object is visible, whether its reflection is visible, and whether it casts shadows. Rendering properties can be set per render pass as well.

4. Preview the results of any modifications. The viewports can display your scene in different display modes, including wireframe, hidden-line removal, shaded, and textured. In addition, you can view any portion of your scene rendered in a viewport by defining a render region, or preview a full frame using Render Preview.

5. Render the passes and their render channels. Softimage gives you the option of rendering using any one of the following methods:
   - Interactively from the Render Region.
   - Interactively, using the single-frame preview tool.
   - Interactively from the Softimage user interface.
   - Batch rendering using the [xsi -render | xsibatch -render] command line.
   - Batch rendering with scripts using the -script option at the command line.
   - Using the ray3.exe command line.
   - Using mental ray's tile-based distributed rendering across several machines. To do so, you must define which machines to use and how.

6. Composite and apply effects to passes. You can use Softimage Illusion, a compositing and effects toolset that's fully integrated in Softimage, or you can use another post-production tool.

Softimage and mental ray®

Softimage uses mental ray as its core rendering engine. mental ray is fully integrated in Softimage, meaning that most mental ray features are exposed in Softimage's user interface, and are easy to adjust — both while creating a scene and during the final renderings. Full integration with mental ray also allows artists to generate final-quality preview renders interactively in 3D views, using the render region.

Rendering Visibility

Every geometric object in a scene has a visibility property that controls whether it is visible when rendering, and in particular whether it is visible to various types of rays (primary, secondary, final gathering, and so on). This visibility property exists locally on every 3D object in Softimage and cannot be applied or deleted. However, visibility can be overridden at the partition level. In complex scenes, setting rendering visibility options can be difficult to manage on a per-object basis. It's easier to partition objects and use overrides to control rendering visibility for all of the objects in a partition.
370 • Softimage
Render Passes
A render pass creates a "layer" of a scene that can be composited with any other passes to create a complete image. Passes also allow you to quickly re-render a single layer without re-rendering the entire scene. Later, you can composite the rendered passes back together, making adjustments to each layer as needed.

Each scene can contain as many render passes as you need. When you first create a scene in Softimage, it has a single pass named Default_pass. This is a "beauty pass" that is set to render every element of the scene. You can create additional passes to render specific elements and attributes as needed.

This photograph (background pass) is the background scene over which the dinosaur will be composited. The specular pass is used to capture an object's highlights. This image is the composite of all these passes. Rendering in passes allows you to tweak each isolated element separately without having to re-render your scene.
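Recombining rendered passes at the compositing stage typically uses the standard premultiplied "over" operation. The following is a minimal sketch of that operation for a single pixel; it is an illustration of the general compositing math, not Softimage Illusion's actual implementation, and the `over` function name is invented here.

```python
def over(fg, bg):
    """Composite a premultiplied foreground pixel over a background pixel.

    Each pixel is (r, g, b, a) with premultiplied color, values in 0..1.
    """
    fr, fgr, fb, fa = fg
    br, bgr, bb, ba = bg
    inv = 1.0 - fa
    return (fr + br * inv, fgr + bgr * inv, fb + bb * inv, fa + ba * inv)

# An opaque red foreground completely hides the background...
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # (1.0, 0.0, 0.0, 1.0)
# ...while a half-transparent foreground lets half of it through.
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))  # (0.5, 0.0, 0.5, 1.0)
```

Because each pass is stored separately, an adjustment to one layer (say, darkening the background) only requires re-running this composite, not re-rendering the 3D scene.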
Basics • 371
Section 21 • Rendering
Creating Passes
You will most likely want to create several passes as your scene grows in
size and complexity. You can create a variety of pass types from the
Render toolbar’s Pass > Edit > New Pass menu.
Creating Partitions

A partition is a division of a pass that behaves like a group. There are two types of partitions: object and light. Light partitions can only contain lights, and object partitions can only contain geometric objects.

Placing objects in partitions allows you to control their attributes by modifying them at the partition level rather than at the individual object level. The modifications affect only the objects in the partition for the specific render pass to which the partition belongs. This allows you to change object attributes on a per-pass basis.

Create an empty partition by choosing Pass > Partition > New Partition on the Render toolbar and then add elements to it. Or you can select some objects and choose the same command to create a partition that automatically includes these objects.

Viewing Passes and Partitions in the Explorer

In an explorer, set the scope to Passes (press P) to see a hierarchical list of all of the render passes in your scene with their contents.

A Current pass. The current pass is always displayed in bold typeface. Each pass has its own options. This lets you optimize your rendering by enabling only those options you need for each pass. For example, you could enable shadow calculations only in the shadow pass. Expanding any pass node displays its renderer options, the active camera for the pass, its partitions, and any environment, output, and/or volume shaders applied to the pass as a whole.

B Pass renderer options. Depending on which renderer you have chosen for your pass, click the Hardware Renderer or mental ray icon to edit the pass's renderer options. You can identify whether the pass is using a local or global set of render options by the Roman or italic typeface displayed for the renderer's node.

C Pass camera. Click the camera icon to define camera and lens-shader options for the pass. You can add new cameras to your scene and set them as active if needed.
Partition. A partition is a division of a pass, which behaves like a group. Partitions are used to organize scene elements within a pass. Expanding any partition node allows you to see its contents, as well as any materials, shaders, overrides, and other properties that are applied to it. Each pass has two default partitions: a background objects partition that contains most or all of the scene's objects, and a background lights partition that contains most or all of the scene's lights. You can add as many additional partitions as you need for a pass, but an object can only be in one partition per pass.

F Framebuffers. The framebuffers folder holds all the active render channels defined for the pass, including its Main render channel.

G Passes. Additional passes, including the default "beauty" pass, are listed in creation order unless you have modified the explorer's sort order settings.

H A material is assigned to a partition. The "B" indicates that it was applied in branch mode and is propagated to every object in the partition. If any objects in the partition have local materials, they will be overridden by the partition-level material for this pass.

Applying Shaders to Passes and Partitions

You can apply environment, volume, output, and lens shaders to an entire pass using the shader stacks in the render pass property editor. In the shader stack controls, one button applies a shader to the camera, another removes a shader from the stack, and a third opens the selected shader's property editor. The applied shaders are listed in the stack.

When you apply shaders to partitions using the Get > Material command, they take precedence over the shaders applied directly to objects in the scene, but only for that pass.

Overriding Shader Parameters

You can use an override property to redefine specific shader parameters in a partition. For example, if a scene contains several hundred objects and you want to edit each object's transparency value without modifying the original material, you can create a partition that contains the objects you want to change, and apply an override property that affects only the transparency parameter of each material.

An override changes the ambient and diffuse values to black, but leaves the other values untouched.
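The precedence rule described above, where a partition-level override wins over an object's own material value, but only for the pass that owns the partition, can be sketched as a simple lookup. This is a conceptual model only; the data structures and the `resolve` function are invented for illustration and are not the Softimage API.

```python
# Hypothetical data model: per-object material parameters, optionally
# shadowed by an override on the partition the object belongs to in a pass.
object_params = {
    "table":  {"transparency": 0.0, "diffuse": (0.6, 0.4, 0.2)},
    "bottle": {"transparency": 0.2, "diffuse": (0.1, 0.8, 0.3)},
}

# The shadow pass puts both objects in one partition and overrides only
# the transparency parameter, leaving everything else untouched.
pass_partitions = {
    "shadow_pass": {
        "members": ["table", "bottle"],
        "overrides": {"transparency": 1.0},
    }
}

def resolve(obj, param, render_pass):
    """Return the value used when rendering obj in render_pass."""
    part = pass_partitions.get(render_pass)
    if part and obj in part["members"] and param in part["overrides"]:
        return part["overrides"][param]   # partition override wins for this pass
    return object_params[obj][param]      # otherwise, the object's own value

print(resolve("bottle", "transparency", "shadow_pass"))   # 1.0 (overridden)
print(resolve("bottle", "transparency", "Default_pass"))  # 0.2 (object value)
```

The point of the model is that the original material is never modified: removing the partition or switching passes restores the object's own values.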
Render Channels
Render channels are a mechanism for outputting multiple images, each containing different information, from a single pass. When you render the pass, you can specify which channels should be output in addition to the full pass. By default, a Main render channel is defined for every pass (you can think of it as the "beauty" channel rendered for each pass). You can use these images at the compositing stage, the same way you would use any render pass.

The advantage of using render channels is that they are easy to define and quick to add to any pass. Preset render channels allow you to isolate scene attributes that are commonly rendered in separate passes. You do not need to create complex systems of partitions and overrides to extract a particular scene attribute. All you need is your default pass, and you can quickly output the preset diffuse, specular, reflection, refraction, and irradiance render channels.
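The value of isolating components like diffuse, specular, and reflection is that, in many renderers, the additive components approximately reconstruct the beauty image when summed at the compositing stage, so each one can be graded independently. A toy sketch of that reconstruction for a single pixel (illustrative only; the channel values and `rebuild_beauty` are invented, and real channel math varies by renderer):

```python
# Toy pixel: component channels that approximately sum to the beauty pixel.
channels = {
    "diffuse":    (0.30, 0.20, 0.10),
    "specular":   (0.10, 0.10, 0.10),
    "reflection": (0.05, 0.05, 0.05),
}

def rebuild_beauty(channels):
    """Sum the component channels per RGB element."""
    return tuple(round(sum(c[i] for c in channels.values()), 6)
                 for i in range(3))

print(rebuild_beauty(channels))  # (0.45, 0.35, 0.25)
```

Scaling one channel before the sum (for example, halving "specular" to tone down highlights) changes only that contribution without re-rendering.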
Setting Rendering Options
A Explorer panel (left panel). Select from the explorer the various render options available for editing. You can edit render options for the scene, for the renderer, and for each pass defined in the scene. Depending on your selection, the options are displayed in the middle or right panel.

B Render pass panel (middle panel). When you select a render pass, the render options for the selected pass are displayed in the middle panel. If you select multiple passes (Ctrl-select), you can simultaneously edit their common parameters. "Multi Edit" will appear at the top of the panel to indicate that you are in this mode. If your selected passes use different renderers, then "Mixed Selection" will appear at the top of the panel and no options are displayed.

C Renderer options panel (right panel). When you select Scene Render Options or one of the global renderers (mental ray, Hardware Renderer, etc.), the options for the selected item are displayed in the right panel. This is also the case when you select a render pass that contains a set of local render options.

G Passes. The render options for all the render passes defined in your scene. The pass render options allow you to modify settings specific to each pass. You can set output paths, specify the pass camera, output your pass to a movie file, apply pass-level shaders, add render channels, and more.

H Global renderers. The render options for all available renderers.

I Scene Render Options. The scene render options allow you to modify global settings for the entire scene. You can specify things like the renderer to use, the frames to render, and the basic output path and format for rendered images. You can also create custom render channels that you can add to individual passes.

J Current pass. The current pass is displayed in bold in the explorer.
Scanline

Scanline rendering is a rendering method used to determine primary visible surfaces. Scene objects are projected onto a 2D viewing plane and sorted according to their X and Y coordinates. The image is then rendered point by point and scanline by scanline, rather than object by object. Scanline rendering is faster than raytracing but does not produce as accurate results for reflections and refractions.

This scene was rendered using scanline rendering only. Notice how the transparency has little depth, and there is no reflection or refraction.

Hardware Rendering

The Softimage hardware renderer allows you to output a scene as it appears when displayed in any 3D view whose viewpoint is that of the pass camera. Most of the hardware rendering modes correspond to the 3D views' display modes: Wireframe, Shaded, Textured, and so on.

Hardware rendering is useful for generating previews of your scene using all of the display options available in 3D views. It is also useful for outputting realtime shader effects to file.
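The scanline idea described above, resolving primary visibility per pixel along each horizontal line rather than per object, can be sketched with a per-scanline depth test. This is a deliberately simplified illustration (surfaces reduced to flat horizontal spans; all names invented), not the actual mental ray scanline algorithm.

```python
# Minimal sketch of per-scanline hidden-surface removal: for each pixel on a
# scanline, keep the surface with the smallest depth (closest to the camera).
# Projected surfaces are simplified to spans: (x_start, x_end, depth, name).
def render_scanline(width, spans):
    depth = [float("inf")] * width
    color = [None] * width
    for x0, x1, z, name in spans:
        for x in range(max(0, x0), min(width, x1)):
            if z < depth[x]:          # closer surface wins the pixel
                depth[x] = z
                color[x] = name
    return color

spans = [(0, 6, 5.0, "wall"), (2, 4, 1.0, "chair")]
print(render_scanline(6, spans))
# ['wall', 'wall', 'chair', 'chair', 'wall', 'wall']
```

Because the test only asks "which surface is closest here?", the method is fast, but it never follows a ray onward, which is why reflections and refractions come out flat without raytracing.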
Different Ways to Render

There are several ways to render a scene, from single-frame previews to large sequences rendered to file. Some rendering methods are launched from Softimage's interface, others from the command line.

Previewing Interactively with the Render Region

You can view a rendering of any section or object in your scene quickly and easily using a render region. Rather than setting up and launching a preview, you can simply draw a render region over any 3D view and see how your scene will appear in the final render.

To draw a render region, press Q to activate the render region tool and drag in any 3D view to define the region's rectangle. Press Shift+Q to toggle the region on and off.

You can resize and move a render region, select objects and elements within the region, as well as modify its properties to optimize your preview. Whatever is displayed inside that region is continuously updated as you make changes to the rendering properties of the objects. Only this area is refreshed when changing object, camera, and light properties, when adjusting rendering options, or when applying textures and shaders.

Comparing Render Regions

The render region has memo regions that allow you to store, compare, and recall settings. They look similar to the viewports' memo cams, but are not saved with the scene. Middle-click to store, and click to display. The currently displayed cache is highlighted in white. Right-click for other options.
Previewing a Single Frame

The Render > Preview command in the Render toolbar lets you preview the current frame at fully rendered quality in a floating window. The frame is rendered using the render options for the current render pass or using the render region options defined in any of the four viewports.

Rendering to File from the Softimage Interface

You can render your passes directly from the Softimage interface. Once the pass options are set, all you need to do is start the render in any of these ways:

• To render all of your scene's passes, click the Render Pass > All button in the Render Manager, or choose Render > Render > All Passes from the Render toolbar.

• To render the current pass, click the Render Pass > Current button in the Render Manager, or choose Render > Render > Current Pass from the Render toolbar.

• To render a selection of passes, select the passes in the explorer and click the Render Pass > Selected button in the Render Manager, or choose Render > Render > Selected Passes from the Render toolbar. The passes are rendered one after the other.

Batch Rendering (xsi -render | xsibatch -render)

You can use -render command-line options to render scenes without opening the Softimage user interface. In addition, you can export render archives from the command line. The most common rendering options are available directly from the command line, while other options can be changed by specifying a script using the -script option.

ray3.exe Rendering

You can render scenes from a command line using the mental ray standalone, ray3.exe. Although many of the ray3.exe commands are available in the Softimage interface, you may want to use the ray3.exe command-line tool to manually override options in exported MI2 files. You can edit the MI2 files to define extra shaders, create objects, swap textures, or perform other tasks.

Distributed Rendering

Distributed rendering is a way of sharing rendering tasks among several networked machines. It uses a tile-based rendering method where each frame is broken up into segments, called tiles, which are distributed to participating machines. Each machine renders one tile at a time, until all of the frame's tiles are rendered and the frame is reassembled. By spreading the workload this way, you can decrease overall rendering time considerably.

Once you've set up a distributed rendering network, rendering tasks are distributed automatically once a render is initiated on a computer. The initiating computer is referred to as the master and the other computers on the network are referred to as slaves. The master and slaves communicate via a mental ray service that listens on a designated TCP port and passes information to the mental ray renderer.
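The tile-splitting step of distributed rendering can be sketched as follows: cut the frame into fixed-size tiles (edge tiles may be smaller) and deal them out to the participating machines. This is a conceptual illustration of the work division only; `make_tiles`, `assign`, the round-robin policy, and the machine names are all invented here, and mental ray's actual scheduler is adaptive rather than round-robin.

```python
# Sketch of tile-based work splitting: the frame is divided into tiles,
# which are dealt out to participating machines.
def make_tiles(width, height, tile):
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

def assign(tiles, machines):
    jobs = {m: [] for m in machines}
    for i, t in enumerate(tiles):
        jobs[machines[i % len(machines)]].append(t)  # round-robin hand-out
    return jobs

tiles = make_tiles(640, 480, 256)          # 3 across x 2 down = 6 tiles
jobs = assign(tiles, ["master", "slave1", "slave2"])
print(len(tiles))                          # 6
print([len(v) for v in jobs.values()])     # [2, 2, 2]
```

Each machine renders its tiles independently; the frame is complete once every tile has been returned and reassembled.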
Section 22
Compositing and
2D Paint
Softimage Illusion is a fully integrated compositing,
effects, and 2D paint toolset that is resolution
independent and supports 8, 16, and 32-bit floating-
point compositing.
You can use Softimage Illusion operators to perform
compositing and effects tasks ranging from
tweaking the results of a multi-pass render to
creating complex special effects sequences.
The effects that you create are part of your scene: they are accessible from the explorer and from Softimage's scripting and animation features, and they support clips and sources as well as render passes.
Softimage Illusion
The Softimage Illusion toolset consists of three core views: the FxTree, where you build networks of effects operators; the Fx Viewer, where you preview the results; and the Fx Operator Selector, from which you insert pre-connected operators into the FxTree. Each of these views can be opened in a viewport or as a floating view (choose View > Compositing > name of view from the main menu).

There is also a Compositing layout available from the View > Layouts menu. It contains the three core Fx tools arranged in a way that makes it easy to build and preview effects. Using this layout for compositing and effects work is usually more efficient than simply opening the required views in viewports, because the non-compositing tools and views are mostly hidden.

Fx Tree: where you create networks of linked operators to composite images and create effects. You can create multiple instances of the FxTree workspace, called trees, to organize effects more efficiently.

Fx Viewer: a 2D viewer in which you can preview each operator to see how it contributes to the overall effect.

Fx Operators: operators are represented by nodes that you can link together manually or connect beforehand using the Fx Operator Selector.

Fx Operator Selector: lists all of the available compositing and effects operators. Once you select an operator here, you can pre-set its connections to existing operators in the Fx Tree and then simultaneously insert and connect it in the Fx Tree.
Adding Images and Render Passes
Adding and Connecting Operators
Fx Operator Types
Whether you're compositing a simple foreground image over a background, or applying a complex series of effects to an image, every step of the process is accomplished by an operator in the FxTree. By connecting these operators together, you can create composites and special effects.

Image. Image operators act as the in and out points for each effect in the FxTree.
• File Input operators are placeholders for images in the tree.
• Paint Clip operators are used to import images into the FxTree for raster painting.
• Vector Paint operators are used to create vector paint layers in the FxTree.
• PSD Layer Extract operators extract a single layer from a .psd image.
• File Output operators let you set the output and rendering options for your composites and effects.

Composite. Composite operators offer you several ways to combine foreground images with a background image to produce a composited result. Most compositing operators require a foreground image, a background image, and an internal or external matte.

Retiming. Retiming operators allow you to change the timing of image sequences. You can, for example, convert from 24 to 30 frames per second and vice versa, interlace and de-interlace clips, and change the duration of clips by dropping frames or combining them together in different ways.

Transition. Transition operators create animated changes from one image clip to another. You can use transition operators to apply dissolves, fades, wipes, pushes, and peels.

Color Adjust. Color Adjust operators let you color correct clips in the FxTree. You can modify and animate hue, saturation, lightness, brightness, contrast, gamma, and RGB values. You can also perform various operations like inverting images, premultiplying images, and so on.

Color Curves. Use the Color Curves operators to graphically adjust color components of images in the FxTree, and to extract mattes for foreground images so that you can composite them over background images.

Grain. Grain operators alter the appearance of film grain in your image sequences. You can add and remove grain, as well as add and remove noise.

Optics. Optics operators create optical effects in images in the FxTree. These include depth-of-field, lens flares, and flare rings.

Filter. Filter operators let you control the appearance of images in the FxTree. Among other things, they can reproduce the effects of different lens filters, apply blurs, and add or remove noise.

Distort. Distort operators simulate 3D changes to images in the FxTree. Use these operators to apply distortions and transformations.

Transform. Transform operators adjust the dimensions and/or position of images in the FxTree. Besides cropping and resizing images, you can also use the 3D Transform operator to transform an image in a simulated 3D space, as well as warp and morph images.

Plugins. The Plugins operators offer a variety of patterns and special effects that you can use in your FxTrees. All of the Plugins operators are custom operators, called UFOs, that were created using the UFO SDK.

Painterly Effects®. Painterly Effects operators allow you to apply a variety of classic artistic effects to images in the FxTree. The Softimage compositor's three sets of Painterly Effects operators let you apply effects like Chalk & Charcoal, Watercolor, Bas Relief, Palette Knife, Stained Glass, and many more, to images in the FxTree.
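As a concrete illustration of the retiming idea (converting 24 fps material to 30 fps by repeating frames), a retime can be sketched as a frame-index mapping: each output frame samples the source frame at the same point in time. This is a generic sketch of one simple retiming strategy, not the FxTree Retiming operator's actual algorithm; the `retime` function is invented for illustration.

```python
# Map output frame indices to source frame indices by matching timestamps:
# output frame i at dst_fps corresponds to source frame floor(i * src / dst).
# Going from 24 to 30 fps, 2 of every 10 output frames repeat a source frame.
def retime(src_fps, dst_fps, n_out):
    return [int(i * src_fps / dst_fps) for i in range(n_out)]

print(retime(24, 30, 10))  # [0, 0, 1, 2, 3, 4, 4, 5, 6, 7]
```

Running the mapping the other way (30 to 24) drops frames instead of repeating them; interlaced pulldown patterns refine the same idea at the field level.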
Operator Info: displays info about the operators being viewed and edited.

Navigation Tool: drag in the rectangle to pan; drag on the slider to zoom.

Compare Area: displays a portion of one image while you're editing another image. This is useful for seeing one operator's effect on another.

Display Area: displays the operator that you're previewing. (Image courtesy of Ouch! Animation)

Viewer controls: split viewers A and B; switch viewers A and B; display the image's alpha channel as a red overlay; isolate one of the image's color channels; display the current image at full size; force the current image to fit in the viewer; toggle the Compare Area; mix the view with the Merge Source.
Rendering Effects
Once you have your effect looking the way you want it, you can render it to a variety of different image formats using a File Output operator. The File Output property editor is where you set all of the effect's output options, including the picture standard, file format, and range of frames.

Once you've set the output options, all you need to do is click the Render button to start the rendering process. The Rendering window displays information about the rendering of the sequence.

Rendering Effects from the Command Line

You can also render effects non-interactively from the command line using xsi -script or xsibatch -script. Make sure that your script contains the following line (VBScript example):

RenderFxOp "OutputOperator", False

where OutputOperator is the name of the FileOutput operator that you want to render. The False argument specifies that the Fx Rendering dialog box should not be displayed during rendering.
2D Paint

Softimage's compositing and effects toolset includes a 2D paint module that offers 8- and 16-bit raster and vector painting. To paint on images, you set up paint operators in the FxTree and then paint on them in the Fx Viewer, where a Paint menu gives you access to a variety of paint tools.

You work with paint operators the same way you work with other Fx operators, making it easy to touch up images, fine-tune effects, edit image clips, paint custom mattes, create write-on effects, and so on. You can also use blank paint operators to paint images from scratch.

Paint Menu: when you edit a paint operator, the Paint menu is added to the Fx Viewer, giving you access to all of the paint-related commands and tools.

Fx Paint Brush List: lists all of the paint brushes available for painting strokes. All of the brushes are presets based on the same core set of properties. The Fx Paint Brush List is an optional view in the compositing layout (shown here).

Fx Viewer: when you edit and preview a paint operator, the Fx Viewer is where you actually paint strokes and shapes.

Fx Color Selector: allows you to choose foreground and background paint colors using a variety of different color models. To open: choose View > Compositing > Fx Color Selector from the main menu.
Vector Paint vs. Raster Paint
Choose a brush category from the brush-type list.

3 Choose a paint tool from the Fx Viewer's Draw menu.
Painting Strokes and Shapes
The Line tool, as you might imagine, allows you to draw straight lines. This is especially useful for painting wires out of an image or sequence. In vector paint operators, drawing a line creates a two-point color shape drawn using the outline (stroke) only.

The Freehand Shape tool allows you to draw editable vector shapes as if you were using a pen and paper. You need only drag the paint cursor around the outline of the shape that you wish to draw. The Freehand Shape tool is only available in vector paint operators.

6 If you are using vector paint operators, you can edit any vector shapes that you've painted. The two images below show the manipulators used to transform a vector shape and to edit a vector shape's points.
Merge Source

You can set any operator in the Fx Tree as the merge source by right-clicking it and choosing Set as Paint Merge Source from the menu. This adds a small paint-bucket icon to the operator to help you identify it as the merge source.

When you paint using the Clone brush, you'll only see a result if you use a brush offset. The offset is the distance between the area from which you're painting and the area to which you're painting. You can offset the brush in any direction and use any offset distance, as long as both the source and destination cursors can be placed somewhere on the target image simultaneously.
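The clone-offset rule above, where paint is copied from a fixed distance away and both cursors must stay on the image, can be sketched per dab. This is a generic illustration of clone painting, not the Fx paint implementation; `clone_dab` and the pixel grid are invented for the example.

```python
# Sketch of clone painting with a brush offset (dx, dy): each painted pixel
# copies from (x - dx, y - dy), provided both positions lie inside the image.
def clone_dab(img, x, y, dx, dy):
    sx, sy = x - dx, y - dy
    h, w = len(img), len(img[0])
    if 0 <= sx < w and 0 <= sy < h and 0 <= x < w and 0 <= y < h:
        img[y][x] = img[sy][sx]
        return True
    return False  # source or destination cursor fell off the image

img = [[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]]
clone_dab(img, 2, 2, 1, 1)         # copies the value at (1, 1) to (2, 2)
print(img[2][2])                   # 4
print(clone_dab(img, 0, 0, 1, 0))  # False: source would be off the image
```

The direction and length of (dx, dy) are free, which is exactly the constraint the text describes: any offset works as long as both cursors can sit on the image at once.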
Section 23
Customizing Softimage
You can extend Softimage in a variety of ways by
customizing it. Many customizations are too
involved to cover here, but you can get more details
in the Softimage User’s Guide and Softimage SDK
Guide.
Toolbars and Shelves
For example, you can use a set of sliders in a property editor to drive the pose of a character instead of creating a virtual control panel using 3D objects.

First, create a custom parameter set by selecting an element and using Create > Parameter > New Custom Parameter Set on the Animate toolbar, and then giving it a meaningful name.

Proxy Parameters

Proxy parameters are similar to custom parameters, but with a fundamental difference. Custom parameters can drive target parameters, but they are still separate and different parameters. This means that when you set keyframes, you key the custom parameter and not the driven parameter. So what do you do when you want to drive the actual parameter, or create a single parameter set that holds only those existing parameters you are interested in? You can use proxy parameters.

Unlike custom parameters, proxy parameters are cloned parameters: they reflect the data of another parameter in the scene. Any operation done on a proxy parameter has the same result as if it had been done on the real parameter itself (change a value, save a key, etc.).

While you can create proxy parameters for any purpose, it's most likely that you will use them to create custom property pages. You can create your own property pages for just about anything you like: for example, locate all animatable parameters for an object on a single property page, making it much quicker and easier to add keys because all the animated parameters are in one place. Or, as a technical director, you can expose only the necessary parameters for your animation team to use, thereby streamlining their workflow and reducing potential errors.

First, create a custom parameter set, then open an explorer and drag and drop parameters into the custom property editor or onto the custom parameter set node in an explorer. Alternatively, use Create > Parameter > New Proxy Parameter to specify parameters with a picking session.

Select one or more objects with a DisplayInfo custom parameter set. If nothing is selected, the DisplayInfo set of the scene root is displayed (if it has one).
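The "cloned parameter" idea behind proxy parameters, where reading or writing the proxy is the same as reading or writing the real parameter, can be sketched with a forwarding property. This is a conceptual analogy in plain Python, not the Softimage object model; the `Parameter` and `ProxyParameter` classes are invented for illustration.

```python
# Conceptual sketch: a proxy parameter forwards every read and write to the
# real parameter it clones, unlike a custom parameter that merely drives it.
class Parameter:
    def __init__(self, value):
        self.value = value

class ProxyParameter:
    def __init__(self, target):
        self._target = target          # the real parameter being cloned

    @property
    def value(self):
        return self._target.value      # reads come from the real parameter

    @value.setter
    def value(self, v):
        self._target.value = v         # writes go to the real parameter

posx = Parameter(0.0)
proxy = ProxyParameter(posx)
proxy.value = 5.0                      # operating on the proxy...
print(posx.value)                      # 5.0 ...changes the real parameter
```

This is why keying a proxy keys the actual parameter: there is no second value to fall out of sync, just one parameter surfaced in two places.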
- Press Alt to extend beyond the range of the parameter's slider in its property editor (if the slider range is smaller than its total range).

If the parameter that you click on is not marked, it becomes marked. If it is already marked, then all marked parameters are modified as you drag.

Note: if there is a DisplayInfo property on the scene root, you cannot edit its parameters on-screen unless the scene root is selected.
Scripts
Scripts are text files containing instructions for modifying data in
Softimage. They provide a powerful way to automate many tasks and
simplify your workflow.
Command box: displays the most recent command. Modify the contents or type a new command, then press Enter to execute it.

Selects any of the last 25 commands.
Key Maps

Key maps determine the keyboard combinations that are used to run commands, open windows, and activate tools. You can create your own key maps to create new key bindings or change the default ones.

Keyboard shortcuts are grouped by interface component. Click an interface component in the Group list to display its commands and their keyboard shortcuts in the Command list. Click a command in the Command list to display its keyboard shortcut in red.

To see which command is mapped to a key, click the appropriate modifiers (Alt, Ctrl, Shift) from the check boxes or the keyboard diagram, then rest your mouse pointer over a key on the keyboard diagram.
The keyboard keys are color-coded to indicate the following:

• White: no keyboard shortcut has been assigned to this key.
• Beige: a keyboard shortcut from another interface component has been assigned to this key.
• Light Brown: a keyboard shortcut from the currently selected interface component has been assigned to this key.
• Red: this keyboard shortcut corresponds to the currently selected item in the Command box.

To see key conflicts with other windows, select View and choose a window from the adjacent list. Keys that are used by the selected window are highlighted in dark brown. For combinations involving modifiers, select the appropriate Ctrl, Shift, and Alt boxes or press and hold those keys on your keyboard.

Other Customizations

In addition to the customizations briefly mentioned so far, there are many other ways you can extend Softimage:

• Custom commands can automate repetitive or difficult tasks. Commands can be scripted or compiled.
• Custom operators can automatically update data in the operator stack. Operators can be scripted or compiled.
• Layouts define the main window of Softimage. You can create layouts based on your preferences or common tasks.
• Views can be floating or embedded in a layout. You can create views for specialized tasks.
• Events run automatically when certain situations occur in Softimage.
• Synoptic views allow you to run scripts by clicking hotspots in an image; for example, you can create custom control panels for a rig.
• Net View allows you to create an HTML interface for sharing scripts, models, and other data.
• Shaders give you complete control over the final look of your work.

For more information about customizing Softimage, see the SDK Guides, as well as Customization in the Softimage Guides.