You are on page 1of 402

The Basics Guide

Copyright and Disclaimer


2009 Autodesk, Inc. All rights reserved. Except as otherwise permitted by Autodesk, Inc., this publication, or parts thereof, may not be reproduced in any form, by any method, for any purpose. Certain materials included in this publication are reprinted with the permission of the copyright holder. The following are registered trademarks or trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and other countries: 3DEC (design/logo), 3December, 3December.com, 3ds Max, ADI, Algor, Alias, Alias (swirl design/logo), AliasStudio, Alias|Wavefront (design/logo), ATC, AUGI, AutoCAD, AutoCAD Learning Assistance, AutoCAD LT, AutoCAD Simulator, AutoCAD SQL Extension, AutoCAD SQL Interface, Autodesk, Autodesk Envision, Autodesk Intent, Autodesk Inventor, Autodesk Map, Autodesk MapGuide, Autodesk Streamline, AutoLISP, AutoSnap, AutoSketch, AutoTrack, Backburner, Backdraft, Built with ObjectARX (logo), Burn, Buzzsaw, CAiCE, Can You Imagine, Character Studio, Cinestream, Civil 3D, Cleaner, Cleaner Central, ClearScale, Colour Warper, Combustion, Communication Specification, Constructware, Content Explorer, Create>what's>Next> (design/logo), Dancing Baby (image), DesignCenter, Design Doctor, Designer's Toolkit, DesignKids, DesignProf, DesignServer, DesignStudio, Design|Studio (design/logo), Design Web Format, Discreet, DWF, DWG, DWG (logo), DWG Extreme, DWG TrueConvert, DWG TrueView, DXF, Ecotect, Exposure, Extending the Design Team, Face Robot, FBX, Fempro, Filmbox, Fire, Flame, Flint, FMDesktop, Freewheel, Frost, GDX Driver, Gmax, Green Building Studio, Headsup Design, Heidi, HumanIK, IDEA Server, i-drop, ImageModeler, iMOUT, Incinerator, Inferno, Inventor, Inventor LT, Kaydara, Kaydara (design/logo), Kynapse, Kynogon, LandXplorer, Lustre, MatchMover, Maya, Mechanical Desktop, Moldflow, Moonbox, MotionBuilder, Movimento, MPA, MPA (design/logo), Moldflow Plastics Advisers, MPI, Moldflow Plastics Insight, MPX, MPX (design/logo), Moldflow Plastics Xpert, Mudbox, Multi-Master Editing, NavisWorks, ObjectARX, ObjectDBX, Open Reality, Opticore, Opticore Opus, Pipeplus, PolarSnap, PortfolioWall, Powered with Autodesk Technology, Productstream, ProjectPoint, ProMaterials, RasterDWG, Reactor, RealDWG, Real-time Roto, REALVIZ, Recognize, Render Queue, Retimer,Reveal, Revit, Showcase, ShowMotion, SketchBook, Smoke, Softimage, Softimage|XSI (design/logo), Sparks, SteeringWheels, Stitcher, Stone, StudioTools, Topobase, Toxik, TrustedDWG, ViewCube, Visual, Visual Construction, Visual Drainage, Visual Landscape, Visual Survey, Visual Toolbox, Visual LISP, Voice Reality, Volo, Vtour, Wire, Wiretap, WiretapCentral, XSI, and XSI (design/logo). Python is a registered trademark of Python Software Foundation. All other brand names, product names or trademarks belong to their respective holders. Disclaimer THIS PUBLICATION AND THE INFORMATION CONTAINED HEREIN IS MADE AVAILABLE BY AUTODESK, INC. "AS IS." AUTODESK, INC. DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE REGARDING THESE MATERIALS. Documentation Team Judy Bayne, Grahame Fuller, Amy Green, Edna Kruger, and Naomi Yamamoto. 11 2009

Basics 3

Copyright and Disclaimer

4 Softimage

Contents
Welcome to Autodesk Softimage! . . . . . . . . . . . . . . . . . 9 Section 1 Introducing Softimage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
The Softimage Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Getting Commands and Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . Setting Values for Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Working with Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Working in 3D Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Exploring Your Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 16 18 21 23 32

Section 4 Organizing Your Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


Where Files Get Stored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Scenes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Importing and Exporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 75 78 79 82

Section 5 General Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83


Overview of Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 Geometric Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 Accessing Modeling Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 88 Starting from Scratch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Operator Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 Modeling Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 Attribute Transfer (GATOR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Manipulating Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Deformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

Section 2 Elements of a Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37


Whats in a Scene? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Selecting Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Components and Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Parameter Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 39 45 51 54 56

Section 3 Moving in 3D Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61


Coordinate Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Center Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Freezing Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Resetting Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Setting Neutral Poses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transform Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transformations and Hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . Snapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 64 70 70 70 70 71 71 72

Section 6 Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103


About Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Drawing Curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Manipulating Curve Components . . . . . . . . . . . . . . . . . . . . . . . . Modifying Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Creating Curves from Other Objects . . . . . . . . . . . . . . . . . . . . . . Importing EPS Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 104 107 110 110 111

Basics 5

Section 7 Polygon Mesh Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . 113


Overview of Polygon Mesh Modeling . . . . . . . . . . . . . . . . . . . . . . 114 About Polygon Meshes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 Converting Curves to Polygon Meshes . . . . . . . . . . . . . . . . . . . . . 118 Drawing Polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Subdividing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Drawing Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 Extruding Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Removing Polygon Mesh Components . . . . . . . . . . . . . . . . . . . . . 123 Combining Polygon Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Symmetrizing Polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Cleaning Up Meshes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Reducing Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Polygon Normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 Subdivision Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Linking Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Expressions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 Copying Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 Scaling and Offsetting Animation . . . . . . . . . . . . . . . . . . . . . . . . . 169 Plotting (Baking) Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Removing Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

Section 10 Character Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171


Character Animation in a Nutshell . . . . . . . . . . . . . . . . . . . . . . . . 172 Setting Up Your Character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Building Skeletons for Characters . . . . . . . . . . . . . . . . . . . . . . . . . 177 Enveloping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 Rigging a Character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Animating Characters with FK and IK . . . . . . . . . . . . . . . . . . . . . . 190 Walkin the Walk Cycle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194 Motion Capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 Making Faces with Face Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

Section 8 NURBS Surface Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . 131


About Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Building Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 Modifying Surfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 Projecting and Trimming with Curves . . . . . . . . . . . . . . . . . . . . . . 135 Surface Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Section 11 Shape Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201


Things are Shaping Up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202 Using Construction Modes for Shape Animation. . . . . . . . . . . . . . 204 Creating and Animating Shapes in the Shape Manager . . . . . . . . 205 Selecting Target Shapes to Create Shape Keys . . . . . . . . . . . . . . . 206 Storing and Applying Shape Keys . . . . . . . . . . . . . . . . . . . . . . . . . 207 Using the Animation Mixer for Shape Animation . . . . . . . . . . . . . 208 Mixing the Weights of Shape Keys . . . . . . . . . . . . . . . . . . . . . . . . 209

Section 9 Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


Bringing It to Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 Playing the Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 Previewing Animation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Animating with Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 Animating Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Editing Keys and Function Curves . . . . . . . . . . . . . . . . . . . . . . . . . 154 Layering Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 Constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 Path Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6 Softimage

Section 12 Actions and the Animation Mixer . . . . . . . . . . . . . . . . . . . 211


What Is Nonlinear Animation? . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 The Animation Mixer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Storing Animation in Action Sources. . . . . . . . . . . . . . . . . . . . . . . 214 Working with Clips in the Animation Mixer . . . . . . . . . . . . . . . . . 216 Mixing the Weights of Action Clips. . . . . . . . . . . . . . . . . . . . . . . . 217

Modifying and Offsetting Action Clips. . . . . . . . . . . . . . . . . . . . . 218 Sharing Animation between Models . . . . . . . . . . . . . . . . . . . . . . 220 Adding Audio to the Mix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222

Section 16 Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295


The Shader Library. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . About Surface Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Basic Surface Color Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . Reflectivity, Transparency, and Refraction . . . . . . . . . . . . . . . . . . Applying Shaders to Scene Elements . . . . . . . . . . . . . . . . . . . . . . The Render Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Building Shader Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Creating Shader Compounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 296 300 302 303 306 307 310 312

Section 13 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223


Simulated Effects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Making Things Move with Forces . . . . . . . . . . . . . . . . . . . . . . . . Hair and Fur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rigid Body Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cloth Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Soft Body Dynamics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 225 227 232 237 239

Section 17 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315


About Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Material Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Creating and Assigning Materials . . . . . . . . . . . . . . . . . . . . . . . . Material Libraries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316 317 319 321

Section 14 ICE: The Interactive Creative Environment . . . . . . . . . . . 241


What is ICE? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The ICE Tree View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Forces and ICE Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Deformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Building ICE Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Compounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 244 247 250 252 255 267

Section 18 Texturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323


How Surface and Texture Shaders Work Together . . . . . . . . . . . . Types of Textures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Applying Textures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Texture Projections and Supports. . . . . . . . . . . . . . . . . . . . . . . . . Editing Texture Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . UV Coordinates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Editing UV Coordinates in the Texture Editor . . . . . . . . . . . . . . . . Texture Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bump Maps and Displacement Maps. . . . . . . . . . . . . . . . . . . . . . Reflection Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Baking Textures with RenderMap . . . . . . . . . . . . . . . . . . . . . . . . Painting Colors at Vertices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 325 326 327 333 335 336 338 342 344 345 346

Section 15 ICE Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271


Making ICE Particle Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Particles that Bounce, Splash, Stick, Slide, and Flow. . . . . . . . . . . Particle Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spawning New Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Particle Strands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Particle Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Particle States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Rigid Bodies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ICE Particle Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 277 279 281 283 285 287 289 292

Basics 7

Section 19 Lighting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347


Types of Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Placing Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Setting Light Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 Selective Lights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Creating Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Global Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Caustics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357 Final Gathering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 Ambient Occlusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Image-Based Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Light Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360

Section 22 Compositing and 2D Paint . . . . . . . . . . . . . . . . . . . . . . . . . . 381


Softimage Illusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 Adding Images and Render Passes . . . . . . . . . . . . . . . . . . . . . . . . 383 Adding and Connecting Operators . . . . . . . . . . . . . . . . . . . . . . . . 384 Editing and Previewing Operators . . . . . . . . . . . . . . . . . . . . . . . . . 386 Rendering Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 2D Paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Vector Paint vs. Raster Paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Painting Strokes and Shapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390 Merging and Cloning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392

Section 23 Customizing Softimage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393


Plug-ins and Add-ons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 Toolbars and Shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395 Custom and Proxy Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 Key Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 Other Customizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401

Section 20 Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361


Types of Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362 The Camera Rig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363 Working with Cameras. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364 Setting Camera Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Lens Shaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366 Motion Blur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368

Section 21 Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369


Rendering Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370 Render Passes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 Render Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 Setting Rendering Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 Different Ways to Render . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

8 Softimage

Welcome to Autodesk Softimage!


Softimage is a powerful 3D system that integrates modeling, animation, simulation, compositing, and rendering into a single, seamless environment. Softimage incorporates many standard 3D tools and functions, but goes far beyond that in terms of tool sophistication and artistic control. Modeling The modeling tools are designed for creating and editing seamless animated models of any sort. Softimage offers many tools for creating, editing, and deforming polygons and subdivision surfaces, as well as NURBS curves and surfaces. Animation Softimage provides you with a complete set of both low-level and highlevel animation tools. All the fundamental low-level tools are there with keyframing, fcurve editor, dopesheet, constraints, linked parameters, and expressions. You can also layer keyframe animation on top of animation, such as motion capture (mocap) data. Shape animation is achieved using a number of techniques and tools, including the popular and easy-to-use shape manager. For high-level animation, you have the animation mixer which lets you mix, transition, and combine all forms of animation, shapes, and audio in a nonlinear and non-destructive manner. Character Animation Building and animating characters is fully supported with all the regular animation tools, as well as special character tools such as skeletons that use inverse kinematics, envelopes and weight maps, and easy-to-create character rigs and rigging tools. As well, you can retarget any type of animation, including mocap data, to any type of rig. The Face Robot module lets you make faces in a unique way. You first set up a facial rig by going through several simple stages. Once the facial rig is created, you can animate the facial controls and sculpt and tune the soft facial tissue using a special set of tools.

Copyright 2005 by Paramount Pictures Corporation and Viacom International Inc. All Rights Reserved. Nickelodeon, Barnyard and all related titles, logos and characters are trademarks of Viacom International Inc.

The Interface Softimages interface is laid out in a way that gives you both a large viewing area as well as easy access to all the tools you need, all the time. You can easily resize any panel or viewport in the Softimage interface, as well as customize its layout to exactly what you want.

Basics 9

Welcome to Autodesk Softimage!

Simulation You can simulate almost any kind of natural, or unnatural, phenomena you can think of using rigid bodies, soft bodies, or cloth or grow some hair! Simulation-type objects can then be influenced by forces and collisions to create simulated animations. ICE: Interactive Creative Environment ICE is a visual programming environment available directly within the Softimage interface. Using a node-based data tree format, you can modify how any tool works, create custom tools and effects, and see the results interactively, all without scripting a line of code. ICE is currently used mostly for creating particle and deformation effects. Using ICE trees, you can create almost any type of particle effect you want. You can make natural phenomena, such as smoke, fire, and rain, but you can also use objects or characters act in a simulated environment: rocks tumbling, glass pieces breaking, grass or hair growing, or humans running about. Shaders and Texturing Using a graphical node-based connection tool called the render tree, you can create an unlimited range of materials by connecting any type of shader to any object. You can also project 2D and 3D textures into texture spaces, which can then be manipulated like a 3D object.

Rendering Drawing upon the integration of mental ray rendering technology, Softimage offers full-resolution, interactive rendering, caustics, global illumination, and motion blur, not only for the final render, but also within a render region that can be drawn in any Softimage viewport. It renders everything in Softimage, letting you adjust your render parameters at any stage of modeling, animating, or even during playback. As well, you can embed unlimited render passes into a single scene and for each pass, generate multiple rendered channels such as specular or reflections. Softimages render passes and render channels are extremely easy to create, customize, and edit. Painting and Compositing Softimage has a built-in compositor, called Softimage Illusion. Softimage Illusion is designed to edit textures and image-based lighting in real time. You can use it to rough out final shots, touch up your textures, morph, warp and rig images, create custom mattes, and tweak the results of a multi-pass render, all within Softimage.

10 Softimage

About this Guide


This guide provides an overview of the main features, tools, and workflows of Softimage, helping you get a headstart in understanding and using the software: If youre new to Softimage, it gives you a foot in the proverbial Softimage door. You may be new to 3D, or just new to Softimage but familiar with other 3D software packages. Either way, you can skim through this guide and quickly see whats possible in Softimage, as well as discover what the different tools and elements are called. If youre an old hand at Softimage, this guide may provide you with a quick start for areas of Softimage that youve never needed to use before. For example, if modeling is your thing and now you have to do some animation, this guide can help you get a sense of whats possible in animation and what tools you can use. This guide has been updated for Softimage 2010, but because it covers the fundamental concepts and workflows of Softimage, the information it contains will apply to Softimage well beyond this version. If youre eager to take Softimage for a spin, theres enough information in this guide to get you started without needing to do more homework. Many workflow overviews are included, as well as command names that tell you where to find things. Remember that all the detailed information and procedures are covered in the Softimage Users Guide and the Softimage SDK Guide available from the Help menu on the main menu bar in Softimage (or press the F1 key): weve just filtered out the main goodies for you here. Now, go fire up Softimage and have some fun! The Softimage Documentation Team

Basics 11

Welcome to Autodesk Softimage!

12 Softimage

Section 1

Introducing Softimage
New to Softimage? Take a quick guided tour through the interface and basic operations.

What youll find in this section ...


The Softimage Interface Getting Commands and Tools Setting Values for Properties Working with Views Working in 3D Views Exploring Your Scene

Basics 13

Section 1 Introducing Softimage

The Softimage Interface


Welcome to your new homethe Softimage interface. The interface is composed of several toolbars and panels surrounding the viewports that display the elements in your scene. Each part of the interface is designed to help you accomplish different aspects of your work. The image below shows the default layout. Take a minute to become familiar with the names and locations of the parts of the interface. You
A C D B

can toggle parts of the standard layout using View > Optional Panels. Other layouts are available from the View > Layouts menu. You can even create your own layout for a customized workflow. Softimage has many preferences for many tools, editors, and working methods (choose File > Preferences). If you want to change something, chances are theres a preference for it!

14 Softimage

The Softimage Interface

Title bar Displays the version of Softimage, your license type, and the name of the open project and scene.

Sample Content Softimage ships with a sample database XSI_SAMPLES containing scenes, models, presets, scripts, and other goodies. Open a Softimage file browser (View > General > Browser or press 5 at the top of the keyboard), then click Paths and choose Sample Project.

Viewports Lets you view the contents of your scene in different ways. You can resize, hide, and mute viewports in any combination. See Working with Views on page 21 for details.

C D

Main menu bar Main Toolbar Contains commands and tools for different aspects of 3D work. Press 1 for the Model toolbar, 2 for Animate, 3 for Render, 4 for Simulate, and Ctrl+2 for Hair. You can also access these controls from the main menu bar. For more information about other controls that can be displayed in this area, see The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 and Switching Toolbars on page 16.

Icons Switch between toolbar and other panels, or choose viewport presets. See The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 as well as Viewport Presets on page 22 for details.

Main command panel (MCP) Contains frequently used commands grouped by category. Switch between the MCP, KP/L, and MAT panels using the tabs at lower right. See The MCP, KP/L, and MAT Panels on page 17 for details.

Lower interface controls The controls at the bottom of the interface include a command box, script editor icon, the mouse/status line, the timeline, the playback panel, and the animation panel.

Basics 15

Section 1 Introducing Softimage

Getting Commands and Tools


There are several different types of menus in Softimage. Each menu typically contains a mixture of commands and tools: Commands have an immediate effect on the scene, for example, duplicating the selected object. Tools activate a mode that requires mouse interaction, for example, selecting elements, translating an object, orbiting the camera, or drawing polygons and curves. A tool stays active until you deactivate it by pressing Esc or by activating a different tool.

The Main Toolbar, Weight Paint Panel, and Palette Toolbar


The three buttons at the lower left switch between the main toolbar, the weight paint panel, and the palette:
Main toolbar Palette

Weight paint panel

Menu Buttons
Buttons with a triangle open up a menu of commands and tools. You can middle-click on a menu button to repeat the last action you performed on that menu.

The main toolbar is where youll do most of your work. The weight paint panel contains a specialized layout for editing envelope weights. See The Weight Paint Panel on page 182. The palette contains some wire color and display mode presets, as well as a custom toolbar where you can store custom commands.

Context Menus
You can right-click on elements in the views to open a menu with items that relate to the element under the mouse pointer. This is a quick and convenient way to access commands and tools, for example, when modeling. In the explorer or schematic view, right-click on an element to open its context menu. In a 3D view, Alt+right-click (Ctrl+Alt+right-click on Linux) on an object to open its context menu, or on the background to open the Camera View menu. When object components like points, polygons, or edges are selected, right-click anywhere on the object for the selected components context menu. Right-click anywhere else for the Camera View menu. Some tools like the Tweak Components tool have their own rightclick menus with options specific for each tool.

Switching Toolbars
The main toolbar on the left side of the interface can display categories for modeling, animation, rendering, simulation, and hair. You can switch between these categories by clicking on the toolbars title as shown at right, or by pressing 1, 2, 3, 4, or Ctrl+2 (use the number keys at the top of the keyboard, not on the numeric keypad). If you prefer, you can also access the same commands from the main menu bar:

16 Softimage

Getting Commands and Tools

The MCP, KP/L, and MAT Panels


The three tabs at the bottom of the panel on the right side of the interface switch between the MCP, KP/L, and MAT panels: MCP is the main command panel. It is divided into sub-panels with controls for selection, transformation, constraints, snapping, and editing. The tools and commands available here are described in context throughout this guide. KP/L contains the keying panel as well as controls for working with animation and scene layers. See Keying Parameters in the Keying Panel on page 147, Layering Animation on page 159, and Scene Layers on page 49. MAT is the material panel. It provides similar controls to the texture layer editor, but in a different arrangement. See Texture Layers on page 338.

Tearing Off Menus


To tear off a menu, click on the dotted line at the top of a menu or submenu and drag to any area in the interface. The menu stays open in a floating window until you close it.

Hotkeys: Sticky or Supra


Using hotkeys, tools can be activated in either of two modes: Sticky: Press and release the key quickly. The tool stays active until you activate a different tool or press Esc. Supra: Press and hold the key to temporarily override the current tool. The new tool stays active only while the key is held down. When you release the key, the previous tool is reactivated.

Collapsing MCP Panels


You can collapse panels in the MCP by rightclicking on their main menu buttons. To expand a collapsed panel, simply right-click on it again. This is useful when working on small monitors, like on laptops.

Repeating Commands and Tools


Press . (period) to repeat the last command, and press , (comma) to reactivate the last tool (other than selection, navigation, or transformation).

Basics 17

Section 1 Introducing Softimage

Setting Values for Properties


Property editors are where youll find an elements properties. They are a basic tool that you use constantly to define and modify elements in a scene. Select an object or property and press Enter to open its property editor, or click its icon in an explorer. In addition to property editors, you can enter values in many of the text boxes in the main command panel, such as the Transform panel, and use virtual sliders to change values for marked parameters in the explorer.
A C F G D B E C A The title bar of the property shows the name of the element being edited. When multiple elements are selected for editing, the title bar shows multi. Control how property editors update: Focus updates only for properties of the same type when other elements are selected. Recycle updates with the properties of the currently selected elements. Lock does not update when other elements are selected. Click the key button to set or remove a key on all parameters in all property sets in the editor. Right-click the key button to access a menu of commands that affect all parameters. Use the arrows to move between next and previous keys on any parameter. H I E J F Revert changes, or save and load presets. Use the tabs to quickly move between different property sets in an editor. Click the triangle to collapse a property set (like Scene Material in this picture) or expand it (like Phong). For help on the parameters in a property set, click the corresponding help icon (?). K L G H Within a property set like Phong, tabs switch between groups of parameters. The animation icon shows if and how the parameter is animated. Click to set or remove a key. Right-click to access animation commands for that parameter. I Drag a slider to change values. To change R, G, and B values simultaneously for a color, press Ctrl while dragging any one of them. D The arrow buttons move along the sequence of property editors (up a level, previous, and next).

18 Softimage

Setting Values for Properties

Type a numerical value in a text box to change the parameters values precisely. You can sometimes enter values beyond the slider range. Drag the mouse in a circular motion over the text box to change values (scrubbing). Scrub clockwise to increase and counterclockwise to decrease. Increment values using [ and ]. Ctrl and Shift change the increment size. For example, press Ctrl+] to increment by 10. You can also press Ctrl or Shift with the arrow keys to change values by increments. Enter relative values with the addition (+), subtraction (-), multiplication (*), and division (/) symbols after the value. For example, 2- decreases the value by 2. On the other hand, -2 enters negative two. With multiple elements, use l(min, max) for a linear range, r(min, max) for random values, and g(mean, var) for a normal distribution.

Entering Values Outside of Slider Ranges


Many parameters with sliders let you set values outside of the slider range. For example, the range of the Local Transform property editors Position sliders is between -50 and +50, but objects can be much farther from their parents origin than that. If a parameter supports values outside of the slider range, you can set such values by typing them into the associated numeric box or by pressing Alt while using the virtual slider tool. When you set a value outside the slider range, the displayed range automatically expands to twice the current value. For example, if the default range of a parameter is between 0 and 10 and you set the value to 15, the new range is 0 to 30. However, the change is not permanentif you set the parameter to a value within the default range and then close and reopen the property editor, the displayed range is back to its default.

Click a color box to open the color editors, from which you can pick or define the colors you want. See Color Editors on page 20. You can copy colors by dragging and dropping one color box onto another. Click the label below the box to cycle the color space for the sliders through RGB, HLS, and HSV.

Virtual Sliders
Virtual sliders let you do the job of a slider without having to open up a property editor. Select one or more objects, mark the desired parameters, then press F4 and middle-drag in a 3D view. Use Ctrl, Shift, and Ctrl+Shift to change increments, and Alt to extend beyond the sliders display range.

The connection icon links a parameter value to a shader, weight map, or texture map which modulates it. Click the icon to inspect the connected element, or right-click for options.

Basics 19

Section 1 Introducing Softimage

Color Editors
Instead of using the RGB color sliders, you can click on a color box to open a color editor.
A

To pick a color: Click the color picker button (the eyedropper) and click anywhere in the Softimage window. This tool can be especially useful when trying to match a color in the Image Clip editor. On Windows systems, you can click outside of the Softimage window to pick a color, even though the mouse pointer does not show that the color picker is active outside of the window. This does not work on Linux systems, but you can import an image clip and load it into the Image Clip editor as a workaround. To cancel the color picker, click the right mouse button. The color picker takes the color you see on the screen rather than the true color of the objects. There may be rounding errors because most display adapters have only 256 levels for each of the RGB channels.

G B C D J H I G H I J K F Click on the browse (...) button to open the full color editor, where you can use additional controls. Click the palette button to choose a preset color. Click the > button to open the menu shown. The Color Area commands specify the configuration of the color area and slider. The Numeric Entry commands select the color model for the numeric boxes. The Normalized option specifies whether numeric values are represented as real numbers in the range [0.01.0] or as integers in the range [0, 255]. The Gamma Correction option toggles gamma correction display for all color controls in the color editor.

K L

To set a color, click in the color area and then adjust it using the slider. To select which color components appear in the color area and which one appears on the slider, click the > button. The color box on the left shows the previous color for reference. The color box on the right shows the current color. Use the numeric boxes to set color values precisely. To select a color model, click the > button.

B C D

20 Softimage

Working with Views

Working with Views


Views provide a window into the current scene, whether they display a 3D view of geometric objects such as in the Camera view or a hierarchical view of the data such as in the explorer. Views can be displayed docked in a viewport, or floating in separate windows. Any spotlights that are present in your scene. The Object view, which shows the selected object in isolation. See Working in 3D Views on page 23. The other views include alternative representations of your scene data such as the explorer or the schematic views (see Exploring Your Scene on page 32), as well as tools for specialized tasks. Resizing Viewports Viewports can be resized, maximized, or expanded vertically and horizontally. Drag the horizontal and vertical splitter bars (or their intersection) to resize the viewports. Middle-click the bars to reset them.

Views Docked in the Viewports


There are four viewports in the view manager at the center of the default Softimage layout. Each viewport is identified by a letter. When you start Softimage, viewport A (top left) shows the Top orthographic view, viewport B (top right) shows the Camera perspective view, viewport C (bottom left) shows the Front orthographic view, and viewport D (bottom right) shows the Right orthographic view. Switching Views in the Viewports You can change the view displayed by a viewport using the menu on the left of its title bar. Middle-click to display the previous view.

The 3D views show the geometry of your scene and include: Any cameras that are present in your scene. The orthographic Top, Front, and Right views. The User view, which is not a real camera but an extra perspective view that you can navigate in without modifying your main camera setup or its animation.
Basics 21

Section 1 Introducing Softimage

Use the Resize icon at the right of a viewports toolbar to maximize, expand, and restore: Left-click to maximize a viewport, or restore a maximized viewport. Alternatively, press F12 while the pointer is over the viewport. Middle-click to expand or restore horizontally. Ctrl+middle-click to expand or restore vertically. Right-click on the Resize icon to open a menu as shown. Viewport Presets Instead of switching views and resizing viewports manually, you can use the buttons at the lower left to display various preset combinations. Muting and Soloing Viewports The letter identifier in the upper-left corner of the title bar allows you to mute and solo viewports. Muting a viewports neighbors helps speed up its refresh rate. Middle-click the letter to mute the viewport. A muted viewport does not update until you un-mute it. The letter of a muted viewport is displayed in orange. Middle-click the letter again to un-mute the viewport. Click the letter to solo the viewport. Soloing a viewport mutes all the others. The letter of a soloed viewport is displayed in green. Middle-click the letter again to un-solo the viewport. To control how viewports update when playing back animation, see Selecting a Viewport for Playback on page 143.

Floating Views
You can open views as floating windows using the first group of submenus on the Views menu. Some floating views also have shortcut keys. Depending on the type of view, you can have multiple windows of the same type open at the same time. You can adjust floating windows in the usual ways: To move a window, drag its title bar. To resize a window, drag its borders. To bring a window to the front and display it on top of other windows, click in it. To close a window, click x in the top right corner. To minimize a window, click _ in the top right corner. You can cycle through all open windows, whether minimized or not, using Ctrl+Tab. Use Shift+Ctrl+Tab to cycle backwards. You can collapse a floating view by double-clicking on its title bar. When collapsed, only the title bar is visible and you can still move it around by dragging. To expand a collapsed view, double-click on the title bar again; the view is restored at its current location.

A Word about the Active Window


The active window is always the one directly under the mouse pointerits the one that has focus and accepts keyboard and mouse input even if it is not on top. For example, you can open a floating explorer window, then move the pointer over the camera viewport and press F to frame the selected elements. If you pressed F while the pointer was still over the explorer, the list would have expanded and scrolled to find the next selected object. Be careful that you dont accidentally send commands to the wrong window.

22 Softimage

Working in 3D Views

Working in 3D Views
3D views are where you view, edit, and manipulate the geometric elements of your scene.
A B C D E F G H A B C Viewport letter identifier: Click to solo the viewport or middle-click to mute it. Views menu: Choose which view to display in the viewport. Memo cams: Store up to 4 views for quick recall. Left click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear. Camera icon menu: Navigate and frame elements in the scene. Eye icon menu (Show menu): Specify which object types, components, and attributes are visible in the viewports. Hold down the Shift key to keep the menu open while you choose multiple options. XYZ buttons: Click on X to view the right side, Y to view the top side, and Z to show the front side. Middle-click to view the left, back, and bottom sides respectively. These commands change the viewpoint but you can still orbit afterwards unlike in the Top, Front, and Right views selected from the Views menu. Click again to return to the previous viewpoint. Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options. Resize icon: Resizes viewports to full-screen, horizontal, or vertical layouts. Click to maximize and restore. Middle-click to maximize and restore horizontally. Ctrl+middle-click to maximize and restore vertically. Right-click for a menu.

D E

G H

Basics 23

Section 1 Introducing Softimage

Types of 3D Views
There are many ways to view your scene in the 3D views. These viewing modes are available from the Views menu in viewports and from the View menu in the object view. Except for camera views, all of the viewing modes are viewpoints. Like camera views, viewpoints show you the geometry of objects in a scene. They can be previewed in the render region, but they cannot be rendered to file like camera views. Camera Views Camera views let you display your scene in a 3D view from the point of view of a particular camera. You can also choose to display the viewpoint of the camera associated to the current render pass. The Render Pass view is also a camera view: it shows the viewpoint of the particular camera associated to the current render pass. Only a camera associated to a render pass is used in a final render. Spotlight Views Spotlight views let you select from a list of spotlights available in the scene. Selecting a spotlight from this list switches the point of view in the active 3D view relative to the chosen spotlight. The point of view is set according to the direction of the light cone defined for the chosen spotlight. Top, Front, and Right Views The Top, Front, and Right views are parallel projection views, called such because the objects projection lines do not converge in these views. Because of this, the distance between an object and the viewpoint has no influence on the scale of the object. If one object is close and an identical object is farther away, both appear to be the same size.

The Top, Front, and Right views are also orthographic, which means that the viewpoint is perpendicular (orthogonal) to specific planes: The Top view faces the XZ plane. The Front view faces the XY plane. The Right view faces the YZ plane. You cannot orbit the camera in an orthographic view.
Top

Front

Right

User View (Viewports Only) The User view is a viewpoint that shows objects in a scene from a virtual cameras point of view, but is not actually linked to a scene camera or spot light. The User point of view can be placed at any position and at any angle. You can orbit, dolly, zoom, and pan in this view. Its useful for navigating the scene without changing the render cameras position and zoom settings.

24 Softimage

Working in 3D Views

The Object View The object view is a 3D view that displays only the selected scene elements. It has standard display and show menus, and works the same way as any 3D view in most respects. Selection, navigation, framing, and so on work as they do in any viewport. There are also some custom viewing options, available from the object views View menu, that make it easier to work with local 3D selections. To open the object view, do one of the following: From any viewports views menu, choose Object View. or From the main menu, choose View > General > Object View.
A B C C E F G

View menu: Choose the viewpoint to display, and set various viewing options. This is similar to the viewports Views menu, but includes special viewing controls for the object view. Show menu (equivalent to the eye icon menu): Specify which object types, components, and attributes are visible in the viewports. Hold down the Shift key to keep the menu open while you choose multiple options. Memo cams: Store up to 4 views for quick recall. Left click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear. XYZ buttons: Click on X to view the right side, Y to view the top side, and Z to show the front side. Middle-click to view the left, back, and bottom sides respectively. These commands change the viewpoint but you can still orbit afterwards unlike in the Top, Front, and Right views in viewports. Also unlike in the viewports, they are not temporary overrides and you cannot click them again to return to the previous viewpoint. Lock: Prevent the view from updating when you select a different object in another view. Click again to unlock. Update: Refresh the view if it is locked. Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options.

E F G

Basics 25

Section 1 Introducing Softimage

Navigating in 3D Views
In 3D views, a set of navigation controls and shortcut keys lets you control the viewpoint. You can use these controls and keys to zoom in and out, frame objects, as well as orbit, track, and dolly among other things. Activating Navigation Tools Most navigation tools have a corresponding shortcut key so you can quickly activate them from the keyboard. However, some tools are only available from a viewports camera icon menu. In either case, activating a navigation tool makes it the current tool for all 3D views, including object views which do not have an equivalent to the camera icon menu.
Selecting navigation tools from the camera icon menu activates them for all 3D views.

Tool or Command Pan/Zoom

Key Z

Description Moves the camera laterally, or changes the field of view: Pan (track) with the left mouse button. Zoom in with the middle mouse button. Zoom out with the right mouse button. In your Tools > Camera preferences, you can activate Zoom On Cursor to center the zoom wherever the mouse pointer is located.

Rectangular Zoom

Shift+Z

Zooms onto a specific area: Draw a diagonal with the left mouse button to fit the corresponding rectangle in the view. Draw a diagonal with the right mouse button to fit the current view in the corresponding rectangle. In perspective (non-orthographic) views, rectangular zoom activates pixel zoom

After you activate a tool, check the mouse bar at the bottom of the Softimage interface to see which mouse button does what.
Tool or Command Zoom Key mouse wheel Description By default, zooms in and out in various views and editors. You can control how the mouse wheel is used for zooming in your Tools > Camera preferences. Navigation S Combines the most common navigation tools: Pan (track) with the left mouse button. Dolly with the middle mouse button. Orbit with the right mouse button. In your Tools > Camera preferences, you can change the order of the mouse buttons as well as remap this tool to the Alt key. Dolly P Orbit O

mode , which offsets and enlarges the view without changing the cameras pose or field of view. Rotates a camera, spotlight, or user viewpoint around its point of interest. This is sometimes called tumbling or arc rotation. Use the left mouse button to orbit freely. Use the middle mouse button to orbit horizontally. Use the right mouse button to orbit vertically. In your Tools > Camera preferences, you can set Orbit Around Selection. Moves the camera forward and back. Use the different mouse buttons to dolly at different speeds. In orthographic views, dollying is equivalent to zooming.

26 Softimage

Working in 3D Views

Tool or Command Roll

Display Modes
Key L Description Rotates a perspective view along its Z axis. Use the different mouse buttons to roll at different speeds. Frames the selected elements in the view under the mouse pointer. Frames the selected elements in all open views. Frames the entire scene in the view under the mouse pointer. Frames the entire scene in all open views. Centers the selected elements in the view under the mouse pointer. Centering is similar to framing, but without any zooming or dollying. The camera is tracked horizontally and vertically so that the selected elements are at the center of the viewport. Centers the selected elements in all open views.

You can display scene objects in different ways by choosing various display modes from a 3D views Display Mode menu. The Display Mode menu always displays the name of the current display mode, such as Wireframe.

Frame Frame (All Views) Frame All Frame All (All Views) Center

F Shift+F A Shift+A Alt+C

Wireframe Shows the geometric object made up of its edges, drawn as lines resembling a model made of wire. This image displays all edges without removing hidden parts or filling surfaces.

Center (All Views) Reset

Shift+ Alt+C R

Bounding Box
Resets the view under the mouse pointer to its default viewpoint.

In addition to the above, there are other tools available on the camera icon menu, such as pivot, walk, fly, and so on. Undoing Camera Navigation As you navigate in a 3D view, you may want to undo one or more camera moves. Luckily, there is a separate camera undo stack that lets you undo navigation in 3D views. To undo a camera move, press Alt+Z. To redo an undone camera move, press Alt+Y.

Reduces all scene objects to simple cubes. This speeds up the redrawing of the scene because fewer details are calculated in the screen refresh.

Basics 27

Section 1 Introducing Softimage

Depth Cue Applies a fade to visible objects, based on their distance from the camera, in order to convey depth. You can set the depth cue range to the scene, selection, or a custom start and end point. Objects within the range fade as they near the edge of the range, while objects completely outside the range are made invisible. You can also display depth cue fog to give a stronger indication of fading. Hidden Line Removal Shows only the edges of objects that are facing the camera. Edges that are hidden from view by the surface in front of them are not displayed.

Constant Ignores the orientation of surfaces and instead considers them to be pointing directly toward an infinite light source. All the objects surface triangles are considered to have the same orientation and be the same distance from the light. This results in an object that appears to have no shading. This mode is useful when you want to concentrate on the silhouettes of objects. Shaded Provides an OpenGL hardware-shaded view of your scene that shows shading, material color, and transparency, but not textures, shadows, reflections, or refraction. By default, selected objects have their wireframes superimposed, making it easy to manipulate points and other components.

28 Softimage

Working in 3D Views

Textured Similar to Shaded, but also shows image-based textures (not procedural textures).

Realtime Shaders Evaluates the real-time shaders that have been applied to objects. In the example shown here, the same textures have been used as for the non-realtime shaders, so the result is similar to the textured mode.Several realtime display modes are available, depending on your graphics card: OpenGL: displays realtime shader attributes for objects that have been textured using OpenGL realtime shaders.

Textured Decal This is like the textured, viewing mode, but textures are displayed with constant lighting. The net effect is a general brightening of your textures and an absence of shadow. This allows you to see a texture on any part of an object regardless of how well that part is lit.

Cg: displays realtime shader attributes for objects that have been textured using Cg realtime shaders as well as Softimages Cg-compatible MetaShaders. DirectX: displays realtime shader attributes for objects that have been textured using DirectX realtime shaders.

Basics 29

Section 1 Introducing Softimage

Rotoscopy
Rotoscopy is the use of images in the background of the 3D views. You can use rotoscopy in different 3D views (Front, Top, Right, User, Camera, etc.) and any display mode (Wireframe, Shaded, etc.). Furthermore, you can use different images for each view. Single images are useful as guides for modeling in the orthographic views. Image sequences or clips are useful for matching animation with footage of live action in the perspective views. To load an image in a view, choose Rotoscope from the Display Mode menu and select an image and other options. There are two types of rotoscoped images: By default, rotoscoped images in perspective views have Image Placement set to Attached to Camera. This means that they follow the camera as it moves and zooms so that you can match animation with live action plates.
Attached to Camera

On the other hand, rotoscoped images that are displayed in the orthographic views (Front, Top, and Right) have the Image Placement option set to Fixed by default. This allows you to navigate the camera while modeling without losing the alignment between the image and the modeled geometry. Fixed images are sometimes called image planes, and they can be displayed in all views, not just the one for which they were defined.
Fixed

Navigating with Images Attached to the Camera Normally when a rotoscoped image or sequence is attached to the camera, it is fully displayed in the background no matter how the camera is zoomed, panned, or framed. However you can activate Pixel Zoom mode if you need to maintain the alignment between objects in the scene and the background, for example if you want to temporarily zoom into a portion of the scene.
Pixel Zoom

30 Softimage

Working in 3D Views

In Pixel Zoom mode, you can: Zoom (Z + middle or right mouse button, S + middle mouse button) Pan (Z + left mouse button, S + left mouse button) Frame (F for selection, A for all) The original view is restored when you exit Pixel Zoom mode. Be careful not to orbit, dolly, roll, pivot, or track because these actions change the cameras transformations and will not be undone when you deactivate Pixel Zoom.

Object Visibility Each object in the scene has its own set of visibility controls that allow you to control how objects appear in the scene, or whether they appear at all, as well as how shadows, reflections, transparency, final gathering, and other attributes are rendered. For example, you may wish to temporarily exclude objects from a render but retain them in the scene. This can come in handy when you are working with complex objects and want to reduce lengthy refresh times. You can open an objects Visibility property editor from the explorer by clicking the Visibility icon in the objects hierarchy. Object Display You can control how individual objects are displayed in a 3D view. Giving an object or objects different display characteristics is particularly useful for heavily-animated scenes. For example, if you want to tweak a static object within a scene that has a complex animated character, you could set the character in wireframe display mode while adjusting the lighting of your static object in shaded mode. You can open an objects Display property editor from the explorer by clicking the Display icon in the objects hierarchy.

Setting Viewing Options and Preferences


There are several places you can go to set options and preferences related to viewing. Colors You can modify scene, element, and component colors (such as the viewport background) by choosing Scene Colors from any viewports camera icon menu. For instance, by default a selected object is displayed in white and an unselected object is displayed in black; points are displayed in blue, knots are displayed in brown, and so on. Camera and 3D Views Display You can set display options to control how cameras and views display scene objects. These camera display options can be set for individual 3D views, or for all 3D views at once. To open an individual 3D views Camera Display property editor, choose Display Options from any viewport or object views Display Mode menu. To open the Camera Display property editor for all 3D views, choose Display > Display Options (all cameras) from the main menu.

The ability to view different objects in different display modes works only when you turn off Override Object Properties in a views Display Mode menu.

Basics 31

Section 1 Introducing Softimage

Exploring Your Scene


Three of the most important tools for exploring your scene are the explorer, the quick filter box, and the schematic.

The Explorer
The explorer displays the contents of your scene in a hierarchical structure called a tree. This tree can show objects as well as their properties as a list of nodes that expand from the top root. You normally use the explorer as an adjunct while working in Softimage, for example, to find or select elements. To open an explorer in a floating window, press 8 at the top of the keyboard, or choose View > General > Explorer from the main menu.
A B C D E E F G H F G I I A B C D Scope of elements to view. See Setting the Scope of the Explorer on page 33. Viewing and sorting options. Filters for displaying element types. See Filtering the Display on page 33. Lock and update. This works only when the scope is set to Selection. Search by name, type, or keyword. Expand and collapse the tree. Click an icon to open property editor. Click a name to select. Use Shift to select ranges and Ctrl to toggle-select. Middle-click to branch-select. Right-click for a context menu. You can pan the view by dragging up and down in an empty area within the explorer. You can also use the mouse wheel to scroll up and down. First make sure the explorer has focus by clicking anywhere in the explorer.

32 Softimage

Exploring Your Scene

Keeping Track of Selected Elements If you have selected objects, their nodes are highlighted in the explorer. If their nodes are not visible, choose View > Find Next Selected Node. The explorer scrolls up or down to display the first object node in the order of its selection. Each time you choose this option, the explorer scrolls up or down to display the next selected node. After the last selected item, the explorer goes back to the first. Choose View > Track Selection if you want to automatically scroll the explorer so that the node of the first selected object is always visible. Setting the Scope of the Explorer The Scope button determines the range of elements to display. You can display entire scenes, specific parts, and so on.
A

The Selection option in the explorers scope menu isolates the selected object. If you click the Lock button with the Selection option active, the explorer continues to display the property nodes of the currently selected objects, even if you go on to select other objects in other views. When Lock is on, you can also select another object and click Update to lock on to it and update the display. Filtering the Display Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters nor parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes. The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters that are available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several different preset combinations of filters, followed by specific filters that you can toggle on or off individually.

Preset display filter combinations. C

A B C

Click the Scope button to select the range of elements to view. The current scope is indicated by the button label. It is also bulleted in the list. The bold item in the menu indicates the last selected scope. Middleclick the Scope button to quickly select this view.

Individual display filter toggles.

Basics 33

Section 1 Introducing Softimage

Other Explorer Views You can view other smaller versions of the explorer (pop-up explorers) elsewhere in the interface. They are used to view the properties of selected scene elements.
Select Panel Explorer

The Quick Filter Box


The Quick Filter box on the main Softimage menu bar lets you find scene objects by name.
A B C D

Explorer filter buttons in the Select panel offer a shortcut by instantly displaying filtered information on specific aspects of currently selected objects.
A E F

1 2 A 1 2 Explorer filter buttons Example: Click the Selection filter button... ...to display a pop-up explorer showing all property nodes associated with the selected object. B C D A Enter part of the name to search for. Softimage waits for you to pause typing before it displays the search results. You can continue typing to modify the search string, and the updated results will be displayed when you pause again. Softimage finds the elements that contain the search string anywhere in their names (substring search). Strings are not case-sensitive. Alternatively, you can also use wildcards and a subset of regex (regular expressions) just like in the explorer. Recall a recent search string. Clear the search string and close the search results. Open the floating Scene Search window with the current search and additional options.

The Explore button opens a pop-up menu of additional filters for specifying the type of information you wish to obtain on the scene. Click outside a pop-up explorer to close it.
Object Explorers

You can quickly display a pop-up explorer for a single objectjust select the object and press Shift+F3. If the object has no synoptic property or annotation, you can press simply F3. Click outside the pop-up explorer or press those keys again to close it.

34 Softimage

Exploring Your Scene

The search results are listed here. They obey the current settings in the Scene Search view for sorting and name/path display. To select an element, click on it. To select a range of elements, click on the first one and then Shift+click on the last one. To toggle-select an element, Ctrl+click on it. To deselect an element, Ctrl+Shift+click on it. To rectangle-select a range of elements, click in the background first and then drag across the elements to select. This is easier if only names are displayed, rather than paths. To select all elements found, press Ctrl+A. To rename the selected elements, press F2. Right-click on any element for a context menu. If you right-click on a selected element, then some commands apply to all selected elements.

The Schematic View


The schematic view presents the scene in a hierarchical structure so that you can analyze the way a scene is constructed. It includes graphical links that show the relationships between objects, as well as material and texture nodes to indicate how each object is defined. To open a schematic view in a floating window, press 9 at the top of the keyboard, or choose View > General > Schematic from the main menu. Press the spacebar to click and select nodes. Use the left mouse button for node selection, the middle mouse button for branch selection, and the right mouse button for tree selection. Press M to click and drag nodes to new locations. The schematic remembers the location of nodes, so you can arrange them as you please. Press s or z to pan and zoom. Relationships between elements are displayed as lines called links. You can display or hide links for different types of relationship using the Show menu. You can also click a parent-child link to select the child. This is useful if you have located the parent but cant find the child in a jumbled hierarchy. Again, use the left, middle, or right mouse buttons to select the child in node, branch, or tree modes. When other types of link are displayed, you can click and drag across the link to select the corresponding operator, such as a constraint or expression. When a link is selected, you can press Enter to open the property editor related to the associated relationship (if applicable), or press Delete to remove the operator.

To dismiss the list of results, click anywhere outside the pop-up or press Escape.

Basics 35

Section 1 Introducing Softimage

A B A B C D E F G C D E

Scope: Show the entire scene, the current selection, or the current layer. Edit: Access navigation and selection commands. Show: Set filters that specify which elements to display. View: Set various viewing options. Memo cams: Store up to 4 views for quick recall. Left click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear. Lock: Prevent the view from updating when you select a different object in another view (if Scope = Selection). Click again to unlock. Update: Refresh the view if it is locked. To select a node, click its label. Middle-click to branch-select and right-click to tree-select. To open a nodes property editor, click its icon or double-click its label. Alt+right-click (Ctrl+Alt+right-click on Linux) on a node to open a context menu for the node.

F G H

H I

Press F2 to rename the selected node. Alt+right-click (Ctrl+Alt+right-click on Linux) in an empty area to quickly access a number of viewing and navigation commands.

36 Softimage

Section 2

Elements of a Scene
This section provides a guide to the objects, properties, and components you will find in Softimage scenes, and describes some of the workflows for working with them.

What youll find in this section ...


Whats in a Scene? Selecting Elements Objects Properties Components and Clusters Parameter Maps

Basics 37

Section 2 Elements of a Scene

Whats in a Scene?
Scenes contain objects. In turn, objects can have components and properties.

Properties
Properties control how an object looks and behaves: its color, position, selectability, and so on. Each property contains one or more parameters that can be set to different values. Properties can be applied to elements directly, or they can be applied at a higher level and passed down (propagated) to the children elements in a hierarchy.

Objects
Objects are elements that you can put in your scene. They have a position in space, and can be transformed by translating, rotating, and scaling. Examples of objects include lights, cameras, bones, nulls, and geometric objects. Geometric objects are those with points, such as polygon meshes, surfaces, curves, particles, hair, and lattices.

Components
Components are the subelements that define the shape of geometric objects: points, edges, polygons, and so on. You can deform a geometric object by moving its components. Components can be grouped into clusters for ease of selection and other purposes.
Points on different geometry types: polygon mesh, curve, surface, and lattice.

Element Names
All elements have a name. For example, if you choose Get > Primitive > Polygon Mesh > Sphere, the new sphere is called sphere by default, but you can rename it if you want. In fact, its a good idea to get into the habit of giving descriptive names to elements to keep your scenes understandable. You can see the names in the explorer and schematic views, and you can even display them in the 3D views. You can typically name an element when you create it. You can rename an object at any time by choosing Rename from a context menu or pressing F2 in the explorer or schematic. Softimage restricts the valid characters in element names to az, AZ, 09, and the underscore (_) to keep them variable-safe for scripting. You can also use a hyphen (-) but it is not recommended. Invalid characters are automatically converted to underscores. In addition, element names cannot start with a digit; Softimage automatically adds an underscore at the beginning. If necessary, Softimage adds a number to the end of names to keep them unique within their namespace.

38 Softimage

Selecting Elements

Selecting Elements
Selecting is fundamental to any software program. In Softimage, you select objects, components and other elements to modify and manipulate them. In Softimage, you can select any object, component, property, group, cluster, operator, pass, partition, source, clip, and so on; in short, just about anything that can appear in the explorer. The only thing that you cant select are individual parametersparameters are marked for animation instead of selected.
A B F G F G H Group/Cluster button: Selects groups and clusters. Center button: Not used for selection. Hierarchy navigation: Select an objects sibling or parent.

Overview of Selection
To select an object in a 3D or schematic view, press the space bar and click on it. Use the left mouse button for single objects (nodes), the middle mouse button for branches, and the right mouse button for trees and chains. To select components, first select one or more geometric objects, then press a hotkey for a component selection mode (such as T for rectangle point selection), and click on the components. Use the middle mouse button for clusters. For elements with no predefined hotkey, you can manually activate a selection tool and a selection filter. In all cases: Shift+click adds to the selection.

D E

A B C D

Select menu: Access a variety of selection tools and commands. Select icon: Reactivates the last active selection tool and filter. Filter buttons: Select objects or their components, such as points, curves, etc. Object Selection and Sub-object Selection text boxes: Enter the name of the object and its components you want to select. You can use * and other wildcards to select multiple objects and properties. Explore menu and explorer filter buttons: Display the current scene hierarchy, current selection, or the clusters or properties of the current selection. These buttons are particularly useful because they display pre-filtered information but dont take up a viewport.

Ctrl+click toggle-selects. Ctrl+Shift+click deselects. Alt lets you select loops and ranges. You can use Alt in combination with Shift, Ctrl, and Ctrl+Shift.

Basics 39

Section 2 Elements of a Scene

Selection Hotkeys
Key space bar E T Y U I ' (apostrophe) F7 F8 F9 F10 Shift+F10 Ctrl+F7 Ctrl+F8 Ctrl+F9 Ctrl+F10 Alt+space bar Tool or action Select objects with the Rectangle selection tool, in either supra or sticky mode. Select edges with the Rectangle selection tool, in either supra or sticky mode. Select points with the Rectangle selection tool, in either supra or sticky mode. Select polygons with the Rectangle selection tool, in either supra or sticky mode. Select polygons with the Raycast selection tool, in either supra or sticky mode. Select edges with the Raycast selection tool, in either supra or sticky mode. Select hair tips with the Rectangle selection tool, in either supra or sticky mode. Activate Rectangle selection tool using current filter. Activate Lasso selection tool using current filter. Activate Freeform selection tool using current filter. Activate Raycast selection tool using current filter. Activate Rectangle-Raycast selection tool using current filter. Activate Object filter with current selection tool. Activate Point filter with current selection tool. Activate Edge filter with current selection tool. Activate Polygon filter with current selection tool. Activate last-used selection filter and tool.

Selection Tools
To select something in the 3D views, a selection tool must be active. Softimage offers a choice of several selection tools, each with a different mouse interaction: Rectangle, Lasso, Raycast, and others. The choice of selection tool is partly a matter of personal preference, and partly a matter of what is easiest or best to use in a particular situation. They are all available from the Select > Tools menu or hotkeys. Rectangle Selection Tool Rectangle selection is sometimes called marquee selection. You select elements by dragging diagonally to define a rectangle that encompasses the desired elements. Raycast Selection Tool The Raycast tool casts rays from under the mouse pointer into the sceneelements that get hit by these rays as you click or drag the mouse are affected. Raycast never selects elements that are occluded by other elements. Lasso Selection Tool The Lasso tool lets you select one or more elements by drawing a free-form shape around them. This is especially useful for selecting irregularly shaped sets of components.

Freeform Selection Tool The Freeform tool lets you select elements by drawing a line across them. This is particularly useful for selecting a series of edges when modeling with polygon meshes, or for selecting a series of curves in order for lofting or creating hair from curves, as well as in many other situations.

40 Softimage

Selecting Elements

Rectangle-Raycast Tool The Rectangle-Raycast selection tool is mixture of the Rectangle and the Raycast tools. You select by dragging a rectangle to enclose the desired elements, like the Rectangle tool. Elements that are occluded behind others in Hidden Line Removal, Constant, Shaded, Textured, and Textured Decal display modes are ignored, like the Raycast tool. Paint Selection Tool The Paint selection tool lets you use a brush to select components. It is limited to selecting points (on polygons meshes and NURBS), edges, and polygons. The brushs radius controls the size of the area selected by each stroke, which you can adjust interactively by pressing R and dragging to the left or right. Use the left mouse button to select and the right mouse button to deselect. Press Ctrl to toggle-select.

Selection and Hierarchies


You can select objects in hierarchies in several ways: node, branch, and tree. Node Selection Left-click to node-select an object. Node selection is the simplest way in which an object can be selected. When you node-select an object, only it is selected. If you apply a property to a node-selected object, that property is not inherited by its descendants.

Selection Filters
Selection filters determine what you can select in the 3D and schematic views. You can restrict the selection to a specific type of object, component, or property. Press Shift while activating a new filter to keep the current selection, allowing you to select a mixture of component types.
Effect of nodeselecting Object.

A B C A Selection filter buttons: Select objects or their components in the 3D views. The component buttons are contextual: they change depending on what type of object is currently selected. Click the triangle for additional filters. Click the bottom button to re-activate the last filter.

B C

Basics 41

Section 2 Elements of a Scene

Branch Selection Middle-click to branch-select an object. When you branch-select an object, its descendants inherit the selection status and are highlighted in light gray. You would branch-select an object when you want to apply a property that gets inherited by all the objects descendants.

Selecting Ranges and Loops of Components


Use the Alt key to select ranges or loops of components. Softimage tries to find a path between two components that you pick. In the case of ranges, it selects all components along the path between the picked components. In the case of loops, it extends the path, if possible, and selects all components along the entire path. For polygon meshes, you can select ranges or loops of points, edges, or polygons. Several strategies are used to find a path, but priority is given to borders and quadrilateral topology. For NURBS curves and surfaces, you can select ranges or loops of points, knots, or knot curves. Points and knots must lie in the same U or V row. In addition, paths and loops stop at junctions between subsurfaces on assembled surface meshes. Range Selection Alt+click to select a range of components using any selection tool (except Paint). This allows you to select the interconnected components that lie on a path between two components you pick.

Effect of branchselecting Object.

Tree Selection Right-click to tree-select an object. This selects the objects topmost ancestor in branch mode. For kinematic chains, right-clicking will select the entire chain.

1 Effect of treeselecting Object. 1 2 First specify the anchor.

Then specify the end component to select the range of components in-between.

42 Softimage

Selecting Elements

1. Select the first anchor component normally. 2. Alt+click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed. All components between the two components on a path become selected. 3. Use the following key and mouse combinations to further refine the selection: - Use Shift to add individual components to the selection as usual. If you want to add additional ranges or loops using Alt+Shift, the last component added to the selection is the new anchor. If you want to start a new range anchored at the end of the previous range, you must reselect the last component by Shift+clicking or Alt+Shift+clicking. Once you have selected a new anchor, you can Alt+Shift+click to add another range to the selection. - Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+click to toggle the selection of a range. - Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+click to deselect a range.

Loop Selection Alt+middle-click to select a loop of components using any selection tool (except Paint). When you select a loop of components, Softimage finds a path between two components that you pick. It then extends the path in both directions, if it is possible, and selects all components along the extended path.

1 1 2 First specify the anchor.

Then specify another component to select the entire loop of components.

1. Do one of the following: - Select the first anchor component normally, then Alt+middleclick on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed. or - Alt+middle-click to select two adjacent components in a single mouse movement. All components on an extended path connecting the two components become selected.

Basics 43

Section 2 Elements of a Scene

Note that for edges, the direction is implied so you only need to Alt+middle-click on a single edge. However, for parallel edge loops, you still need to specify two edges as described previously. 2. Use the following key and mouse combinations to further refine the selection: - Use Shift to add individual components to the selection as usual. If you want to add additional ranges or loops using Alt+Shift, the last component added to the selection is the new anchor. The last selected component becomes the anchor for any new loop. Once you have selected a new anchor, you can Alt+Shift+middle-click to add another loop to the selection. - Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+middle-click to toggle the selection of a loop. - Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+middle-click to deselect a loop.

Modifying the Selection


The Select menu has a variety of commands you can use to modify the selection. For example, among many other things, you can: Invert the selection. Grow or shrink a component selection (polygon meshes only). Select adjacent points, edges, or polygons.

Defining Selectability
You can make an object unselectable in the 3D and schematic views by opening up its Visibility properties and turning off Selectability. This can come in handy and speed up your workflow if you are working in a very dense scene and there are one or more objects that you dont wish to select. Unselectable objects are displayed in dark gray in the wireframe and schematic views. Regardless of whether an objects Selectability is on or off, you can always select it using the explorer or using its name. The selectability of an object can also be affected by its membership in a group or layer.

44 Softimage

Objects

Objects
Objects can be duplicated, cloned, and organized into hierarchies, groups, and layers. To duplicate an object, select it and choose Edit > Duplicate/ Instantiate > Duplicate Single or press Ctrl+D. The object is duplicated using the current options and the copy is immediately selected. You may need to move it away from the original. By default, any transformation you apply is remembered for the next duplicate. To make multiple copies, Edit > Duplicate/Instantiate > Duplicate Multiple or press Ctrl+Shift+d. Specify the number of copies and the incremental transformations to apply to each one.
Example: Applying multiple transformations to duplicated objects 1 Select the object (a step) to be duplicated and transformed.

Duplicating and Cloning Objects


Duplicating an object creates an independent copy: modifying the original after duplication has no effect on the copy. Cloning creates a linked copy: modifying the geometry of the original affects the clone, but you can still make additional changes to the clone without affecting anything else. All the related commands can be found in Edit > Duplicate/Instantiate. Duplicating Objects

2 With the step selected, press Ctrl+Shift+d. Specify 5 copies and a transformation to apply to each.

3 Result: Five copies of the original step are generated, with each duplicate translated, rotated and scaled to give the appearance of a flight of spiral stairs. Note: The center of the step was repositioned to the right so that the step could be rotated along its right edge. When an object is duplicated, the original and its duplicates can be modified separately with no effect on each other.

Other commands in the Edit > Duplicate/Instantiate menu let you duplicate symmetrically, from animation, and so on.
Basics 45

Section 2 Elements of a Scene

Cloning Objects

Hierarchies
Hierarchies describe the relationship between objects, usually using a combination of parent-child and tree analogies, as you do with a family tree. Objects can be associated to each other in a hierarchy for a number of reasons, such as to make manipulation easier, to propagate applied properties, or to animate children in relation to a parent. For example, the parent-child relationship means that any properties applied to the parent (in branch mode) also affect the child. In a hierarchy there is a parent, its children, its grandchildren, and so on: A root is a node at the base of either a branch or the entire tree. A tree is the whole hierarchy of nodes stemming from a common root. A branch is a subtree consisting of a node and all its descendants.

When an object is cloned, editing the original object affects all the clones but editing one of the clones has no effect on the others.

Nodes with the same parent are called siblings.

You can clone objects using the Clone commands on the Edit > Duplicate/Instantiate menu. Clones are displayed in the explorer with a cyan c superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label Cl.
Clone in the explorer. Clone in the schematic view.

46 Softimage

Objects

Creating Hierarchies You can create a hierarchy by selecting an object and activating the Parent tool from the Constrain panel (or pressing the / key). Click on another object to make it the child of the selected object, or middle-click to make the selected object the child of the picked object. Continue picking objects or right-click to exit the tool. You can also create hierarchies by dragging and dropping in the explorer:
1 2

Deleting an Object in a Hierarchy If you delete an object with children, it is replaced by a null with the same name in order to preserve the hierarchy structure. Deleting this null just replaces it with another one. If you want to get rid of it, you must first cut its children if you want to keep them, or branch-select the object to remove it and its children.

Groups
You can organize 3D objects, cameras, and lights into groups for the purpose of selection, applying operations, assigning properties and shaders, and attaching materials and textures. For example, you can add several objects to a group, and then apply a property like Display, Geometry Approximation, or a material to the group. The groups properties override the members own ones. Besides being able to organize objects into groups, you can also create a group of groups. An object can be a member of more than one group. Groups, however, cant be added in hierarchies. They can only live immediately beneath the scene root or a model. In Softimage, groups are a tool for organizing and sharing properties. If you are familiar with Autodesk Maya and want to use groups to control transformations, for example, in a character rig, use transform groups instead. If you are familiar with Autodesk 3ds Max, note that you dont need to open a group to select its members individually. You can always select either the group as a whole or any of its members.

1 2

Make the ball_child a child of the ball_parent by dropping its node onto the ball_parents node. The ball_child is now under the ball_parents node.

In the schematic, you can create a hierarchy by pressing Alt while dragging a node onto a new parent. Cutting Links in a Hierarchy You will often need to cut the hierarchical links between a parent and its child or children in a hierarchy of objects. If the child is also a parent, the links to its own children are not affected. Select the child and click Cut in the Constrain panel, or press Ctrl+/. A cut object becomes a child of its model. If an object is cut from its model, it becomes a child of the parent model.

Basics 47

Section 2 Elements of a Scene

Creating Groups To create a group, select some objects and click Group in the Edit panel or press Ctrl+g. In the Group property editor, enter a name for your group and select the different View and Render Visibility, Selectability, and Animation Ghosting options.

Selecting Groups You can select groups in the 3D and schematic views using the Group selection button or the = key. Note that the Group button changes to the Cluster button when a component filter is active.

Group selection (or use = key)

Once a group is selected, you can select all its members using Select > Select Members/ Components. The members of the group are selected as multiple objects. If you want to select a single member of a group, simply select it normally in any 3D, explorer, or schematic view. Adding and Removing Elements from Groups
Add to Group

All selected objects are grouped together. In the explorer, you can see the group with all its members within it.

To add objects to a group, select the group and add the objects you want to the selection. In the Edit panel, click the + button (next to the Group button). You can also drag objects onto a group in an explorer view.

Remove from Group

If an object is a member of just one group, you can ungroup it by just selecting it and clicking the button (next to the Group button). If an object is a member of multiple groups, you must select the group to remove it from before selecting the object. Alternatively, use the context menu in the explorer.

Right-click on name of object within the group to be removed and choose Remove from Group.

Removing Groups You can remove a group by selecting it and pressing Delete. When you delete groups, only the group node and its properties are deleted, not the member objects themselves.
48 Softimage

Objects

Scene Layers
Scene layers are containers similar to groups or render passes that help you organize, view, display, and edit the contents of your scene. For example, you can put different objects into different scene layers and then hide a particular layer when you dont want to see that part of your scene. Or you might want to make a scene layers objects unselectable if the scene is getting too complex to select objects accurately. You can create as many layers as your scene requires. The main differences between a scene layer and a group are that every object is a member of a layer (that would be the default layer if you havent created any new layers) and objects cannot belong to more than one layer. Scene Layer Attributes Each scene layer has four main attributes: viewport visibility, rendering visibility, selectability, and animation ghosting. You can activate or deactivate each these attributes for each layer in the scene. Scene layers can also have custom properties such as wireframe color and geometry approximation. Scene Layers in the Explorer You can view and edit scene layers in the explorer. This is most useful when you wish to move several objects between layers, since you can quickly drag and drop them from one layer to another.

The Scene Layer Manager The scene layer manager is a grid-style view from which you can quickly view and edit all of the layers in a scene. You can use the layer control to do things like add objects to or remove them from layers, create new scene layers, toggle scene layer attributes, select objects in a scene layer, and so on. To open the scene layer manager in a floating window, press 6 at the top of the keyboard, or choose View > General > Scene Layer Manager from the main menu. The scene layer manager is also available on the KP/L panel.
A H B G C D

E A

The Layers menu contains commands for creating layers, moving selected objects into the current layer, and so on. Other commands are available by right-clicking in the grid. The View menu contains various display preferences, including how layers should be sorted and which columns are visible. Press and hold Shift to keep the menu open while you toggle multiple items.

Scene layers are represented as indented rows. Right-click anywhere in the row for various commands that affect the corresponding layer.

Basics 49

Section 2 Elements of a Scene

The current layer is indicated by a green background and a doublechevron. To make a layer current, click in the in the leftmost column of the corresponding row. Scene layer groups are represented as rows with a light gray background. Right-click anywhere in the row for various commands that affect all layers in the group. Click the triangle at left to hide or display the rows of its individual layers. To rename a layer or group, double-click on its name, type a new name, and press Enter. You can select multiple layers for certain commands by clicking on their names. To select a range, click on the first layer and then Shift+click on the last, or drag across the desired rows. To add individual layers to the selection, Ctrl+click on their rows. Note that selecting layers in the grid in this way simply selects them for certain commands in the scene layer managerit does not affect the global scene selection.

Scene layer attributes: wireframe color, view visibility, render visibility, selectability, and animation ghosting. Click in a cell to toggle its value. Click+drag to toggle multiple cells in a rectangular area. Right-click on a column heading and choose Check All or Uncheck All. Double-click on a color swatch to set the wireframe color and other display attributes.

Use the cells of a layer group to control all layers in the group. You can still change the settings of individual layers afterward. When different layers in the group have different values, the cell has a light gray checkmark. Right-click on a column heading and choose Check All or Uncheck All. Resize a column by dragging the borders of its heading.

50 Softimage

Properties

Properties
A property is a set of related parameters that controls some aspect of objects in a scene.

How Properties Are Propagated


Objects can inherit properties from many different sources. This inheritance is called propagation. For some properties, such as Display and Geometry Approximation, an object can have only one at a time. If it inherits the same property from more than one source, the source with the highest strength is used. In increasing order of strength, the possible sources of property propagation are: Scene Default: This is the weakest source. If an object does not inherit a property from anywhere else, it uses the scenes default values. For example, if an object has never had a material applied to it, it uses the scene default material. Branch: If a parent has a property applied when it is branchselected, its children all inherit the property. Local: If a child inherits a branch property from its parent, but has the same property applied directly to it, it uses its local values. Cluster: Materials, textures, and other properties applied to a cluster take precedence over those applied to the object. Group: If an object is a member of a group, then any properties applied to the group take precedence over local and branch properties. Similarly, if a cluster is a member of a group, any properties applied to the group take precedence over those applied directly to the cluster. Layer: Any properties applied to an objects layer take precedence over group, local, and branch properties. Partition: Properties applied to a partition of a render pass have the highest priority of all when that render pass is current.

Applying Properties
You can apply many properties using the Get > Property menu of any toolbar. This applies the default preset of a propertys parameter values to the selected objects, possibly replacing an existing version of the same property.

Editing Properties
To edit an existing property, open its property editor by clicking on the property node in an explorer. A handy way to do this is to press F3 to see a mini-explorer for the selected object, or click the Selection button at the bottom of the Select menu. You can also right-click on Selection to display properties according to type.
Click Selection...

...then click a property icon...

...or right-click Selection.

Basics 51

Section 2 Elements of a Scene

For other types of properties, an object can have many at the same time. For example, an object can have several local annotations as well as several annotations inherited from different ancestors, groups, and so on.

Simple Propagation In this sphere hierarchy, each sphere is parented to the one above it. Because the larger sphere was branch-selected when the texture was applied, every sphere beneath it inherits the checkerboard texture.

Branch Propagation One sphere was branch-selected and given a cloud texture. The remaining sphere retains the checkerboard texture because it is on another branch.

Reverting to the Scenes Default Material Local Material/Texture Application The larger sphere was single-selected One sphere was single-selected and given a blue surface. This applies a local and has had its material deleted. Since material/texture that is in turn applied to other spheres can no longer inherit their the selected object only and none of texture from the parent (because its been deleted), they revert back to the scenes its children; the spheres children still inherit the checkerboard texture, despite default gray (or another color youve assigning a local texture to their parent. defined).

52 Softimage

Properties

Viewing Propagation in the Explorer


A

Creating Presets of Property Settings


You can save property settings as a preset. Presets are data files with a .preset file extension that contain property information. Presets let you work more efficiently because you can save the modified properties and reuse them as needed, as well as transfer settings between scenes. For quick access, you can also place presets on a toolbar. To save or load a preset, click the button at the top Save/Load Presets of a property editor. The saved preset contains values for only the parameter set currently selected on the property set tabs in the property editor. For materials and shaders, it also contains parameter settings for any connected shaders. Presets do not contain any animationonly the current parameter values are stored. If there is a render region open when you save a preset, it will be used as a thumbnail.

A B

Properties that are applied in branch-mode, and therefore propagated, are marked with B. Shared properties such as materials are shown in italics. The propertys source (where its propagated from) is shown in parentheses. If no source is shown, then it is inherited from the scene root.

You can also set the following options in the explorers View menu: Local Properties displays only those properties that have been applied directly to an object. Applied Properties shows all properties that are active on a object, no matter how they are propagated.

Basics 53

Section 2 Elements of a Scene

Components and Clusters


Components are elements, like points and edges, that define the shape of 3D objects. Clusters are named groups of components. eyebrow, you can easily deform the eyebrow as if it were an object instead of trying to reselect the same points each time you work with it. You can also apply operators like deformations or Cloth to specific clusters instead of an entire object. You can define as many clusters on an object as you like, and the same component can belong to a number of different clusters.
Eye icon

Displaying Components
You can display the various component types in a specific 3D view using the individual options available from its eye icon (Show menu) or in all open 3D views using the options on the Display > Attributes menu on the main menu bar.

You can define clusters for points, edges, polygons, subsurfaces, and other components. Each cluster can contain one type of component. For example, a cluster can contain points or polygons, but not both. Clusters may shift if you edit an operator in an objects construction history and add components before the position where the cluster was created. Creating Clusters To create a cluster, select some components and click Cluster on the Edit panel (the Cluster button changes to Group when objects are selected). As soon as the cluster is created, it is selected and you can press Enter to open its property editor and change its name. To create a cluster whose components arent already in other clusters, choose Edit > Create Non-overlapping Cluster instead. You can also use Edit > Create Cluster with Center to make a cluster with a null center that you can transform and animate. If you prefer to use a different object as a center, simply create a cluster and apply Deform > Cluster Center manually.

For more options, you can set the visibility options in the Camera Visibility property editor: click a 3D views eye icon (Show menu) and choose Visibility Options, or Display > Visibility Options for all open 3D views. Note that when you activate a component selection filter, the corresponding components are automatically displayed in the 3D views.

Clusters
A cluster is a named set of components that are grouped together for a specific modeling, animation, or texturing purpose. By grouping and naming components, it makes it easier to work with those same components again and again. For example, by grouping all points that form an
Spinning top with two clusters Top

Bottom

54 Softimage

Components and Clusters

Adding and Removing Components from Clusters To add components to a cluster, select the cluster and add the components you want to the selection. In the Edit panel, click the + button (next to the Cluster button). To remove components from a cluster, select the cluster, add the components to remove to the selection, and click the button.
Add to Cluster

Manipulating Components and Clusters


Not every type of component or cluster can be directly manipulated in Softimage. You can select and manipulate points, edges, and polygons in the 3D views, and you can select and manipulate texture UV coordinates (samples) in the texture editor. You can transform points, edges, and polygons in 3D space. This is a fundamental part of modeling an objects shape.

Remove from Cluster

When you add components to an object, any new components that are surrounded by similar components in a cluster are automatically added to the cluster. Selecting Clusters You can select clusters using the Clusters button at the bottom of the Select panel, or in any other explorer.

You can apply deformations to deform points, edges, and polygons in the same way that you apply them to objects. You cannot animate component and cluster transformations directly. Instead, you can use a deformer such as a cluster center or volume deformer and animate the deformer, or you can use shape animation.

You can also select clusters in a 3D view when a component selection filter is active. Simply activate the Cluster button at the top of the Select panel, or press =, or use the middle mouse button while clicking on any component in the cluster. Removing Clusters To remove a cluster, select it and press Delete. Removing a cluster removes the group, but does not remove the individual components from the object.

Basics 55

Section 2 Elements of a Scene

Parameter Maps
Certain parameters are mappableyou can vary the parameters value across an objects geometry by connecting a weight map, texture map, vertex color property, or other cluster property. This allows you to, for example, control the amplitude of a deformation or the emission rate of a particle system across an objects surface. Mappable parameters have a connection icon in their property editors that allows you to drive the value using a map.
Connection icon unconnected connected

Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are connected to parameters instead of shaders. Vertex color properties are color values stored at each polynode or texture sample of a geometric object. In addition to the attributes listed above, you can connect mappable parameters to other cluster properties, including UV coordinates (texture projections), shapes, user normals, and envelope weights. While these may not always be useful for driving modeling and simulation parameters, the ability to connect to these properties may be useful for custom developers.

Connecting Maps
No matter what type of map you want to connect to a parameter, the basic procedure is the same. In a property editor, click on the connection icon of a mappable parameter and choose Connect. A pop-up explorer opensnavigate through the explorer and pick the desired map: Weight maps are found under the appropriate cluster. Texture maps are properties directly under the object. They can also be found under the appropriate cluster. Make sure you dont accidentally select the texture projection. Vertex color properties are also found under the appropriate cluster. The connection icon changes to show that a map is connected. When a map is connected, you can click on this icon to open the maps property editor. If you connect a map that has multiple components, like an RGBA color, to a parameter that has a single dimension, like Amplitude, you can use the options in the Map Adaptor to control the conversion. To disconnect a weight map, right-click on the connection icon connected parameter and choose Disconnect. of a

Which Parameters Are Mappable? Almost any parameter with a connection icon in its property editor is mappable. These parameters include: Certain deformation parameters, such as Amplitude in the Push operator or Strength in the Smooth operator. The Multiplier parameter in the Polygon Reduction operator. Edge and vertex crease values. Various simulation parameters, such as the length and density of hair, the stiffness of cloth, and so on. Shapes in the animation mixer. What Can You Connect to Mappable Parameters? You can connect just about any cluster property to a mappable parameter. The most useful properties include the following: Weight maps allow you to start from a base map such as a constant value or gradient, and then paint values on top.

56 Softimage

Parameter Maps

To connect maps to hair parameters, you must first transfer the maps from the emitter to the hair object. In the case of weight maps and deformations, you can simply select the weight map and then apply the deformation instead of manually connecting it. Since the weight map is selected by default as soon as you create it, this technique is quick and easy.

2. Optionally, select some points or a cluster.

Selected cluster

3. Apply a weight map using Get > Property > Weight Map.

Weight Maps
Weight maps are properties of point clusters on geometric objects. They associate each point in a cluster with a weight value. Each cluster can have multiple weight maps, so you can modulate different parameters on different operators in different ways. Each weight map has its own operator stack. When you create a weight map, a WeightMapOp operator sets the base map, which can be constant or one of a variety of gradients. Then when you paint on the weight map, the strokes are added to a WeightPainter operator on top of the WeightMapOp in the stack. Like other elements with operator stacks, you can freeze a weight map to discard its history and simplify your scene data. The following steps present a quick overview of the workflow for using weight maps. 1. Start with an object.
Blank weight map, ready for painting

4. Press W to activate the Paint tool, then use the mouse to paint on the weight map. - Press R and drag the mouse to control the brush radius. - Press E and drag the mouse to control the opacity. - Press Ctrl+W to open the Brush properties to set other parameters. In the default paint mode (normal, also called additive), use the left mouse button to add paint and the right mouse button to remove weight. Press Alt to smooth.

A spot of paint and its as good as new!

5. Connect the weight map to drive the value of a parameterfor example in the image below, it is driving the Amplitude of a Push deformation.

Basics 57

Section 2 Elements of a Scene

Texture Maps
A slight Push is all thats needed.

Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are used to control operator parameters instead of surface colors. HDR images are fully supported. Floating-point values are not truncated. Creating Texture Maps To create a texture map, you select the texture projection method and then link an image file to it. 1. Apply a texture projection and texture maps to the selected object by doing one of the following: - If the object already has a set of UV coordinates (texture projection) that you want to use, select it and choose Get > Property > Texture Map > Texture Map. This creates a blank texture map property for the object and opens a blank Texture Map property editor in which you need to set the texture projection and select an image that will be used as the map (as described in the next steps). or - To create a new texture projection for the map, select the object and choose Get > Property > Texture Map > projection type (such as Cylindrical, Spherical, UV, or XZ) that is appropriate for the shape of the object. This creates a texture map property and texture projection for the object, but doesnt open the Texture Map property editor. Now you must open the Texture Map property editor to associate the image to this projection to use as the map (in the explorer, click the Texture Map property under the object).

6. You can reselect the weight map and continue to paint on it to modify the effect further. If your object has multiple maps, you may need to select the desired one before you can paint on it. You can do this easily using Explore > Property Maps from the Select panel. Freezing Weight Maps Weight maps can be frozen to simplify your scenes data. Freezing collapses the weight map generator (the base constant or gradient map you chose when you created the weight map) together with any strokes you have applied. To freeze a weight map, select it and click the Freeze button on the Edit panel. After you have frozen a weight map, you can still add new strokes but you cannot change the base map or delete any strokes you performed before freezing.

58 Softimage

Parameter Maps

2. In the Clip section of the Texture Map property editor, select an image or sequence to use as the map. If there isnt already a clip for the desired image, click New to create one. 3. In the UV Property area beneath the image, select an existing texture projection or create a New texture projection (if there isnt already one) that is appropriate to the shape of the object or how you want to project the mapped image. Editing Texture Maps To edit the UV coordinates of a texture maps projection, select the object and open the text editor. If necessary, use the Clips menu to display the correct image and the UVs menu to display the correct projection. If you do this, you should make sure that the operator connected to the texture map is above the modeling region of the construction history, for example, in the animation region. Otherwise, the UV edits are above the operator and appear to have no effect. You can move the operator back to the modeling region when you are done.

Basics 59

Section 2 Elements of a Scene

60 Softimage

Section 3

Moving in 3D Space
Working in 3D space is fundamental to Softimage. You will use the transformation tools constantly as you model and animate objects and components.

What youll find in this section ...


Coordinate Systems Transformations Center Manipulation Freezing Transformations Resetting Transformations Setting Neutral Poses Transform Setup Transformations and Hierarchies Snapping

Basics 61

Section 3 Moving in 3D Space

Coordinate Systems
Softimage uses coordinate systems, also called reference frames, to describe the position of objects in 3D space.

XYZ Coordinates
With the Cartesian coordinate system, you can locate any point in space using three coordinates. Positions are measured from the origin, which is at (0, 0, 0). For example, if X = +2, Y = +1, Z = +3, a point would be located to the right of, above, and in front of the origin.
Location = (2, 1, 3) Y=1

Cartesian Coordinates
One essential concept that a first-time user of 3D computer graphics should understand is the notion of working within a virtual three-dimensional space using a two-dimensional user interface. Softimage uses the classical Euclidean/ Cartesian mathematical representation of space. The Cartesian coordinate system is based on three perpendicular axes, X, Y, and Z, intersecting at one point. This reference point is called the origin. You can find it by looking at the center of the grid in any of the 3D windows.

Origin Z=3 X=2

XYZ Axes
Softimage uses a Y-up system, where the Y direction represents height. This is different from some other software, which are Z-up. This is something to keep in mind if you are familiar with other software, or are trying to import data into Softimage. A small icon representing the three axes and their directions is shown in the corner of 3D views. The icons three axes are represented by color-coded vectors: red for X, green for Y, and blue for Z. An easy way to remember the color coding is RGB = XYZ. This mnemonic is repeated throughout Softimage: object centers, manipulators, axis controls on the Transform panel, and so on.

XZ, XY, YZ Planes


Since you are working with a twodimensional interface, spatial planes are used to locate points in three-dimensional space. The perpendicular axes extend as spatial planes: XZ, XY, and YZ. In the 3D views, these planes correspond to three of the parallel projection windows: Top, Front, and Right. Imagine that the XZ, XY, and YZ planes are folded together like the top, front, and right side of a box. This helps you keep a sense of orientation when you are working within the parallel projection windows.

62 Softimage

Coordinate Systems

Global and Local Coordinate Systems


The location of an object in 3D space is defined by a point called its center. This location can be described in more than one way or according to more than one frame of reference. For example, the global position is expressed in relation to the scenes origin. The local position is expressed in terms of the center of the objects parent.
Parent Scene origin

Softimage Units
Throughout Softimage, lengths are measured in Softimage units. How big is a Softimage unit? It is an arbitrary, relative value that can be anything you want: a foot, 10 cm, or anything else. However, it is generally recommended that you avoid making your objects too big, too small, or too far from the scene origin. This is because rounding errors can accumulate in mathematical calculations, resulting in imprecisions or even jittering in object positions. As a general rule of thumb, an entire character should not fit within 1 or 2 units, nor exceed 1000 units. The Softimage units used for objects also matters for creating dynamic simulations where objects have mass or density and are affected by forces such as gravity. For simulations, Softimage assumes that 1 unit is 10 cm by default, but you can change this by changing the strength of gravity.

Object and its center

The center of an object is only a referenceit is not necessarily in the middle of the object because it can be relocated (as well as rotated and scaled). The position, orientation, and scaling (collectively known as the pose) of the objects center defines the frame of reference for the local poses of its own children.

Basics 63

Section 3 Moving in 3D Space

Transformations
Transformations are fundamental to 3D. They include the basic operations of scaling, rotating, and translating: scaling affects an elements size, rotation affects an elements orientation, and translation affects an elements position. Transformations are sometimes called SRTs. You transform by selecting an object or components, activating a transform tool, then clicking and dragging a manipulator in a 3D view.

Transforming Interactively

Local versus Global Transformations


There are two types of transformation values that can be stored for animation: local and global. Local transformations are stored relative to an objects parent, while global ones are stored relative to the origin of the scenes global coordinate system. The global transformation values are the final result of all the local transformations that are propagated down the object hierarchy from parent to child. You can animate either the local or the global transformation values. Its usually better to animate the local transformationsthis lets you move the parent while all objects in the hierarchy keep their relative positions rather than staying in place.

1 Select objects or components to transform and activate a tool: Scale (press x) Rotate (press c) Translate (press v) 3 If desired, specify the active axes. See Specifying Axes on page 67.

2 Set the manipulation mode. See Manipulation Modes on page 65.

4 If desired, set the pivot. See Setting the Pivot on page 67.

5 Click and drag on the manipulator. See Using the Transform Manipulators on page 68.

64 Softimage

Transformations

Manipulation Modes
When you transform interactively, you always do so using one of several modes set on the Transform panel: View, Local, Global, etc. The mode determines the axes and the default pivot used for manipulation. If an object isnt transforming as you expected, its possible that you need to change the manipulation mode. It is important to remember that the mode does not affect the values stored for animation (local versus global), it only affects your interaction with the transform tool. Global Global translations and rotations are performed along the scenes global axes.
Object is transformed...

View View translations and rotations are performed with respect to the 3D view. The plane in which the object moves depends on whether you are manipulating it in the Camera, Top, Front, Right, or other view.
Object is transformed using the axes of the 3D view as the reference.

If you are using the SRT manipulators in a perspective view like Camera or User, View mode uses the global scene axes. Par
...using global axes as the reference.

Local Local transformations are performed along the axes of the objects local coordinate system as defined by its center. This is the only true mode available for scalingscaling is always performed along an objects own axes.
Object is transformed...

Par, or parent, translations and rotations use the axes of the objects parent. For translation, this is the only mode where the axes of interaction correspond exactly to the coordinates of the objects local position for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local position parameters are automatically marked. To activate Par for rotations, activate Add and press Ctrl.
Object is transformed...

...using the local space of its parent as the reference.

...using the objects own local axes as the reference.


Basics 65

Section 3 Moving in 3D Space

Par mode is not available for components. In its place, Object mode uses the local coordinates of the object that owns the components. Add Add, or additive, mode is only available for rotation. It lets you directly control the objects local X, Y, and Z rotations as stored relative to its parent. This mode is especially useful when animating bones and other objects in hierarchies. For rotations, this is the only mode where the axes of interaction correspond exactly to the coordinates of the objects local orientation for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local position parameters are automatically marked. Uni Uni, or uniform, is available only for scaling. It is not really a mode but it modifies the way objects are scaled locally. It scales along all active local axes at the same time with a single mouse button. You can activate and deactivate axes as described in Specifying Axes on page 67. You can also temporarily turn on Uni by pressing Shift while scaling.

Vol Like Uni, Vol or volume is available only for scaling and is a modifier rather than a mode. It scales along one or two local axes, while automatically compensating the other axes so that the volume of the objects bounding box remains constant.

Ref Ref, or reference, mode lets you translate an object along the X, Y, and Z axes of another element or an arbitrary reference plane. Right-click on Ref to set the reference.
Object is transformed...

...using the local space of a picked object as its reference.

66 Softimage

Transformations

Plane Plane mode lets you drag an object along the XZ plane of another element or an arbitrary reference plane. Right-click on Plane to choose the plane.
Object is transformed...

If Allow Double-click to Toggle Active Axes is on in the Transform preferences, then you can also specify transformation axes by doubleclicking in the 3D views while a transformation tool is active: Double-click on a single axis to activate it and deactivate the others. If only one axis is currently active, double-click on it to activate all three axes. Shift+double-click on an axis to toggle it on or off individually. (If it is the only active axis, it will be deactivated and both of the other two axes will be activated).

...using the local space of a user-defined plane in space.

Setting the Pivot


When transforming elements interactively, you can set the pivot by pressing the Alt key while a transformation tool is active. The pivot defines the position around which elements are rotated or scaled (center of transformation). When translating and snapping, the pivot is the position that snaps to the target.
Individual axes

Specifying Axes
When transforming interactively, you can specify which axes are active using the x, y, and z icons in the Transform panel. For example, you can activate rotation in Y only, or deactivate translation only in Z. Active icons are colored, and inactive icons are gray. Click an axis icon to activate it and deactivate the others. Shift+click an axis icon to activate it without affecting the others. Ctrl+click an axis icon to toggle it. Click the All Axes icon to activate all three axes. Ctrl+click the All Axes icon to toggle all three axes.

1. Make sure that Transform > Modify Object Pivot is set to the desired value: - Off (unchecked) to set the tool pivot used for interactive manipulation only. This is useful if you are simply moving elements into place. The tool pivot is normally reset when you change the selection. However, you can lock and reset the position manually. - On (checked) to modify the object pivot. The object pivot acts like a center for the objects local transformations. It is used when playing back animated transformations, and is also the objects default pivot for manipulation. You can animate the object pivot to create a rolling cube. 2. Activate a transform tool.

All Axes

Basics 67

Section 3 Moving in 3D Space

3. Do any of the following: - Alt+drag the manipulators center, or one of its axes, to change the position of the pivot manually. You can use snapping, as well as change manipulation modes on the Transform panel. - Alt+click in a geometry view. The pivot snaps to the closest point, edge midpoint, polygon midpoint, or object center among the selected objects. This lets you easily rotate or scale an object about one of its components. - Alt+middle-click to reset the pivot to the default. You can lock the pivot by pressing Alt, clicking on the Pivot icon triangle below the pivot icon, and choosing Lock. The tool pivot remains at its current location, even if you change the selection.

Rotate Manipulator
Click and drag on a single ring to rotate around that axis. Click and drag on the silhouette to rotate about the viewing axis. This does not work in Add mode. Click and drag on the ball to rotate freely. This does not work in Add mode.

Scale Manipulator
Click and drag on a single axis to scale along it. Click and drag along the diagonal between two axes to scale both those axes uniformly.

Using the Transform Manipulators


Translate Manipulator
Click and drag on a single axis to translate along it. Click and drag between two axes to translate along the corresponding plane.

Click and drag the center left or right to scale all active axes uniformly.

In addition to dragging the handles, you can: Middle-click and drag anywhere in the 3D views to translate along the axis that most closely matches the drag direction. Click and drag anywhere in the 3D views (except on the manipulator) to perform different actions, depending on the setting for Click Outside Manipulator in the Tools > Transform preferences. Right-click on the manipulator to open a context menu, where you can set the manipulation mode and other options.

Click and drag on the center to translate in the viewing plane.

68 Softimage

Transformations

Setting Values Numerically


As an alternative to transforming objects interactively, you can enter numerical values in the boxes on the Transform panel: In Global mode, values are relative to the scene origin. In Ref mode, values are relative to the active reference plane. In View mode, values can be either global or relative to the objects parent depending on whats set in your preferences. In all other modes, values are relative to the objects parent.
Parent and child branchselected before scaling. Scaled in Y using hierarchical scaling. Scaled in Y using classic scaling.

Transformation Preferences
Transform > Transform Preferences contains several settings that affect the display, interaction, and other options of the transformation tools. Since you will be spending a great deal of your time transforming things, its a good idea to explore these and find the settings that are most comfortable for you.

You specify which method to use for each child in its Local Transform property. You can also set the default value used for all new objects.
To specify hierarchical or classic scaling

1. Select one or more child objects and open their Local Transform property editor. 2. On the Scaling tab, turn Hierarchical (Softimage) Scaling off or on. If it is off, classic scaling is used.
To set the default scaling mode used for all new objects

Hierarchical (Softimage) versus Classic Scaling


Hierarchical (Softimage) scaling uses the local axes of child objects when their parent is scaled. This maintains the relative shape of the children without shearing if they are rotated with respect to their parent. When this option is off, the result is called classic scalingchildren are scaled along their parents axes and may be sheared with non-uniform scaling. Classic scaling is recommended if you are exchanging data with other applications, such as game engines, motion capture systems, or 3D applications that do not understand Softimage scaling.

1. Choose File > Preferences from the main menu bar. 2. Click General. 3. Toggle Use Classical Scaling for Newly Created Objects.

Basics 69

Section 3 Moving in 3D Space

Center Manipulation
Center manipulation lets you move the center of an object without moving its points. This changes the default pivot point used for rotation and scaling. You can manipulate the center by using Center mode interactively, or by using commands on the Transform menu (Move Center to Vertices and Move Center to Bounding Box). Its important to note that center manipulation is actually a deformation. As the center is moved, the geometry is compensated to stay in place. Because it is a deformation, you cannot manipulate the center of non-geometric objects. This includes nulls, bones, implicit objects, control objects, and anything else without points.

Resetting Transformations
The Transform > Reset commands return an objects local scaling, rotation, and translation return to the default values. It effectively removes transformations applied since the object was created or parented, or since its transformations were frozen. If you want an object to return to a pose other than the origin of its parents space when you reset its transformations, set a neutral pose for it.

Setting Neutral Poses


The Transform > Set Neutral commands zero out an objects transformations. This is useful if you want an object to return to a pose other than the origin of its parents space when you reset its transformations. For example, you can set the neutral pose of a chain bone so that it returns to a natural position when you reset it. Neutral poses are also useful for visualizing the transformation valuesits easier to imagine a rotation from 0 to 45 degrees than from 78.4 to 123.4 degrees. The neutral pose acts as an offset for the objects local transformation values, as if there was an intermediate null between the object and its parent in the hierarchy. The neutral pose values are stored in the objects Local Transform property, and can be viewed or modified on the Neutral Pose tab of that property editor. When you set the neutral pose, any existing animation of the local transformation values is interpreted with respect to the new pose. This may give unexpected results when you play back the animation. For that reason, you should set the neutral pose before animating the transformations of an object. If you remove the neutral pose using Transform > Remove Neutral Pose, the neutral pose values are added to the local transformation before being reset to the defaults. The object does not move in global space as a result.

Freezing Transformations
The Transform > Freeze commands reset an objects size, orientation, or location to the default values without moving the objects geometry in global space. For instance, freezing an objects translation moves its center to (0, 0, 0) in its parents space without visibly displacing its points. Like center manipulation, freezing transformations is actually a deformation. As the center is transformed, the geometry is compensated to stay in place. If a neutral pose exists when you freeze an objects transformations, the objects center moves to the neutral pose instead of the origin of its parents space. If you want the objects center to be at the origin, you should remove the neutral pose in addition to freezing the transformations. You can perform these two operations in either order.

70 Softimage

Transform Setup

Transform Setup
The Transform Setup property lets you define a preferred transformation for an object. When you select that object, its preferred transformation tool is automatically activated. Of course, you can still choose a different tool and change transformation options manually if you want to. Transform setups are particularly useful when building animation rigs for characters. If you are using an object to control a characters head orientation, you can set its preferred transformation to rotation. If you are using another object to control the characters center of gravity (COG), you can set its preferred transformation to translation. When you select the head control, the Rotate tool is automatically activated, and then when you select the COG control, the Translate tool is automatically activated. You apply a Transform Setup property by choosing Get > Property > Transform Setup from any toolbar and then setting all the options. You can modify the options later by opening the property from the explorer. While Transform Setups are useful for many tasks, like animating a rig, at other times you dont want the current tool to keep changing as you select objects. In these cases, you can ignore Transform Setups for all objects in your scene by turning off Transform > Enable Transformation Setups. Turn it back on to resume using the preferred tool of each object.

Transformations and Hierarchies


Transformations are propagated down hierarchies. Each objects local position is stored relative to its parent. Its as if the parents center is the origin of the childs world.

Basics of Transforming Hierarchies


Objects in hierarchies behave differently when they transformed depending on whether the objects are node-selected or branch-selected. By default: If an object is branch-selected, then its children are transformed as well. You can change this behavior by modifying the parent constraint on the Options tab of the childs Local Transform property editor. If an object is node-selected, then children with local animation follow the parent. This is because the local animation values are stored relative to the parents center. However, what happens to non-animated children depends on the ChldComp (Child Transform Compensation) option on the Constrain panel.

Child Transform Compensation


The ChldComp option on the Constrain panel controls what happens to non-animated children if an object is node-selected and transformed. If this option is off, all children with an active parent constraint follow the parent. You cannot move the parent without moving its children. If this option is on, the children are not visibly affected. Their local transformations are compensated so that they maintain the same global position, orientation, and size. Child Transform Compensation does not affect what happens when a child has local animation on the corresponding transformation parameters nor when the parent is branch-selected.

Basics 71

Section 3 Moving in 3D Space

Snapping
Snapping lets you align components and objects when moving or adding them. You can snap to targets like objects, components, and the viewport grids, or you can snap by increments.

Incremental Snapping
When translating, rotating, and scaling elements, you can snap incrementally. Instead of snapping to a target, elements jump in discrete increments from their current values. This is useful if you want to move an element by exact multiples of a certain value, but keep it offset from the global grid. To snap incrementally: Press Shift while rotating or translating an element. Press Ctrl while scaling (Shift is used for scaling uniformly).

Snapping to Targets
Use the Snap panel to activate snapping to targets.
Set a variety of options from the menu.

Activate or deactivate snapping. Use Ctrl to temporarily toggle the current state. Specify the type of target: points, curves/edges, facets, or the grid. Right-click to select various sub-types.

You can set the Snap Increments using Transform > Transform Preferences.

The grid used for snapping depends on the manipulation mode: Global, Local, Par, Object, and Ref use the Snap Increments set in the Transform > Transform Preferences. They do not use the visible floor/grid displayed in 3D views. View mode uses the Floor/Grid Setup set in the Camera Visibility property editor (Shift+s over a specific 3D view, or Display > Visibility Options (All Cameras)). Plane mode uses the Snap Size set in the Reference Plane property editor.

72 Softimage

Section 4

Organizing Your Data


Working in Softimage involves saving and retrieving files between systems. A typical project in Softimage contains many files that need to be easily accessible to you or members of your workgroup. Softimage provides data management features that help you optimize your production pipeline.

What youll find in this section ...


Where Files Get Stored Scenes Projects Models Importing and Exporting

Basics 73

Section 4 Organizing Your Data

Where Files Get Stored


There are two types of files in Softimage: project files and application data files. Project files include scenes as well as any accompanying files such as texture images, referenced models, cached simulations, rendered pictures, and so on. They are stored in various subfolders of a main project folder. Application data files are not specific to a single project. They include presets and various customizations you can make or install, such as commands, keyboard mappings, toolbars, shelves, views, layouts, plugins, add-ons, and so on. The application data files can be stored in various subfolders at one of three locations: User is the location for your personal customizations. Typically, it is C:\users\username\Autodesk\Softimage_2010 on Windows or ~/Autodesk/Softimage_2010 on Linux. Workgroup is the location for customizations that are shared among a group of users working on the same local area network. Installation (Factory) is the location for presets and sample customizations that ship with Softimage. It is located in the directory where the Softimage program files are installed. It is not recommended that you store your own customizations here.

Setting a Workgroup
Workgroups provide a method for easily sharing customizations among a group of people working on the same project. Simply set your workgroup path to a shared location on your local network, and you can take advantage of any presets, plug-ins, add-ons, shaders, toolbars, views, and layouts that are installed there. The workgroup is usually created by a technical director or site supervisor. To connect to an existing workgroup, choose File > Plug-in Manager, click the Workgroups tab, click Connect, and specify the location.

Whenever you use an Softimage file browser to access files on disk, you can quickly switch among your project, user, workgroup, and installation locations using the Paths button.

74 Softimage

Scenes

Scenes
A scene file contains all the information necessary to identify and position all the models and their animation, lights, cameras, textures, and so on for rendering. All the elements of a scene are compiled into a single file with an .scn extension. The Softimage title bar identifies the name of the current scene and the project in which it resides.

The File Menu contains most of the commands for creating, opening, and managing scenes.

Merging Scenes combines objects in any number of Softimage scenes. When you merge a scene into the current scene, it is automatically loaded as a model. Press the Ctrl key as you drag and drop a scene (*.scn) file from an external window into a 3D view to merge it as a model under the scene root. Save or Save As to update the existing scene or save it to a new name in the current project. Manage scenes and their associated projects using the Project Manager. You can also create, open, and save scenes to different projects from here. Import and export scenes from and to other 3D or CAD/CAM programs saved in the dotXSI, COLLADA, FBX, DirectX, IGES, and OBJ formats. Choose Preferences > Data Management to set options for backing up, autosaving, recovering, and debugging your scenes.

A New Scene is automatically generated when you start Softimage or create a new project. You can also create a new scene any time while you work. Every new scene is created in the active project and its name appears as Untitled in the Softimage title bar. Choose Edit > Delete All from the Edit panel in the main command panel or press Ctrl+Delete to clear the workspace before creating a new scene. Open a scene. or Open a recently used scene. You can also drag and drop a scene (*.scn) file from an external window into a 3D view to open the scene. Note that you cannot drag and drop scenes from external windows on Linux systems. When you open a scene file, a temporary lock file is created. Anyone else who opens the file in the meantime must work on a copy and any changes to the scene must be saved under a different file name. The lock file is deleted when you close the scene

Basics 75

Section 4 Organizing Your Data

Managing External Files in Scenes


Scenes can reference many external files such as referenced models, texture images, action sources, and audio clips. Some of these referenced files may be located outside of your project structure. When you save a scene, the path information that lets Softimage locate and refer to these external files is saved as well. As you develop the scene, youll probably need to perform some cleanup and management operations on its external files. For example, you might need to update some paths or locate a missing image. You can do all this, as well as perform other file management tasks, using the external files manager. Choose File > External Files to open the external files manager.

Click here to refresh the list of files.

The controls for viewing and managing external files.

Selected files are highlighted in green.

The left pane allows you to choose whether to show all external files used by the scene, or only those used by a particular model.

The grid lists all of the external files for the scene/model specified in the left-hand pane, and of the type specified in the File Type list.

Files with invalid paths are highlighted in red.

76 Softimage

Scenes

Displaying Scene Information


You can obtain important statistics for your scene by choosing Edit > Info Scene from the Edit panel or by pressing Ctrl+Enter. This information can be helpful when evaluating a scenes complexity for the purpose of optimization.

Getting and Setting Data in the Scene TOC


Scene files can be further modified by its scene TOC. The scene TOC (scene table of contents) is an XML-based file that contains scene information. It has an extension of .scntoc with the same name and in the same folder as the corresponding scene file. By default, the scene TOC is created automatically when you save a scene. When you open a scene file, Softimage looks for a corresponding scene TOC file. If it is found, Softimage automatically applies the information it contains. This lets you use a text editor or XML editor to change the path for external files such as referenced models or texture images, change render options, change the current render pass, and so on.

Basics 77

Section 4 Organizing Your Data

Projects
In Softimage, you always work within the structure of a project. A project is a system of folders that contain the scenes you build and the external files referenced by those scenes. Projects are used to keep your work organized and provide a level of consistency that can simplify production for a workgroup. A project can exist locally on your machine or can be shared from a network drive. When you open Softimage for the first time, an untitled scene is created in the XSI_SAMPLES factory project. You can set your own project as the default project that opens with Softimage. The project name in the title bar at the top of the Softimage interface is the active project. Project lists are text-based files with an .xsiprojects file name extension. You can build, manage and distribute your project lists among members of your workgroup using the Project Manager.

The Project Manager


The tool for managing multiple projects and scenes. You can create new projects and scenes, open existing projects and scenes, scan your system for projects, delete projects, as well as add and remove projects from the project list. Select a project from the project list.

The Project Structure


Subfolders created in every new project folder store and organize the elements of your work such as rendered pictures, scenes, material libraries, external action sources, etc.

Scan for projects in a specified path and add them to the project list. Export the list of projects and have all members of the workgroup import it. Sort projects by Name, Origin (factory [F], user [U], and workgroup [W]), or none. Location of your project folder. Sets the selected project as the active project. Sets the default project that opens automatically when you start Softimage.

78 Softimage

Models

Models
Models are like mini scenes that can be easily reused in scenes and projects. They act as a container for objects, usually hierarchies of objects, and many of their properties. Models contain not just the objects geometry but also the function curves, shaders, mixer information, groups, and other properties. They can also contain internal expressions and constraints; that is, those expressions and constraints that refer only to elements within the models hierarchy.

Models and Namespaces


Each model defines its own namespace. This means that each object in a models hierarchy must have a unique name, but objects in different models can have the same name. For example, two characters in the same scene can both have chains named left_arm and right_arm if they are in different models. All models exist in the namespace of the scene. This means that each model must have its own unique name, even if it is within the hierarchy of another model. Namespaces let you reuse animations that have been stored as actions. If an action contains animation for one models left_arm chain, you can apply the action to another model and it automatically connects to the second models left_arm. If your models contain elements with different naming schemes, for example, LeftArm and L_ARM, you can use connection mapping templates to specify the proper connections.

Club bot model structure contains many things that define the character.

Creating Local Models


To create a model in your scene, select the elements you want it to contain and choose Create > Model from the Model toolbar. At this point, the model has its own namespace and its own mixer, so it can share action sources with other models in the same scene. It can also be instantiated or duplicated within the same scene. If thats all you need a model for, you do not need to export and import it. You can add elements to the model by parenting them to the model hierarchy. To remove elements, cut them from the hierarchy.

There are two types of models: Local models are specific to a single scene. Referenced models are external files that can be reused in many scenes.

Basics 79

Section 4 Organizing Your Data

Exporting Models
Use File > Export > Model to export models created in Softimage for use in other scenes. Using models to export objects is the main way of sharing objects between scenes. When you export a model, a copy is saved as an independent file. The file names of exported models have an .emdl extension. The original model remains in the scene. If you ever need to modify the model, you can change it in the original scene, and then re-export it using the same file name. If other scenes use that file as a referenced model, they will update automatically when you open them. If you imported the file into another scene as a local model, you must delete the model from that scene and re-import it from the file to obtain the updated version.

For example, lets say that youre modeling a car that will be used in various scenes, but the animator needs to start animating with the car on another computer before you can finish the details. You export the car as porsche.emdl, which the animator can import into her scene while you continue your work. Any changes that the animator makes to the car, such as setting keys or expressions, are automatically stored in the models delta in the scene. When youre done modeling the car, you can re-export using the same file name. Now when the animator loads the scene or updates the referenced model, all the changes you made are automatically reflected in the car in her scene. After the model is updated, Softimage reapplies the changes stored in the delta to the model within the animators scene. Referenced models also let you work at different levels of detail. You can have a low-resolution model for fast interaction while animating, a medium-resolution model for more accurate previewing, and a highresolution model for the final results. Referenced models are indicated in the explorer by a white man icon. The default name of this node depends on the name of the external file, but you can change it if you want. The name of the active resolution appears in square brackets after the models name. The name of a deltas target model appears after the deltas name.

Importing Local Models


When you import a model locally instead of as a referenced model, its data becomes part of your scene. It is as if the model was created directly in the scenethere is no live link to the .emdl file. You can make any changes you want to the model and its children. To import a model locally, choose File > Import > Model from the main menu. You can also drag an .emdl file from a browser or a link on a Net View page and drop it onto the background of a 3D view. On Windows, you can also drag an .emdl file from a folder window.

Importing Referenced Models


Referenced models are models that are imported using File > Import > Referenced Model or converted to referenced using Edit > Model > Convert to Referenced. Their data is not stored in the sceneit is referenced from an external .emdl or .xsi file. Changes made to the external model are reflected in your scene the next time you open the scene or update the reference.

Use the Modify > Model menu on the Model toolbar to set the current resolution, or to temporarily offload models.

80 Softimage

Models

You can change a referenced models Parameters display a white lock icon but they can still parameters values, animate them, apply be modified and animated. new properties, and so on. These changes are stored in the clip and reapplied when the model is updated. There are some changes you cant make, such as adding an object to the hierarchy or deleting a property. Whatever changes you perform, make sure that they are selected in the deltas Recorded/Applied Modifications property, otherwise they will be lost the next time the model is updated.

Instantiating Models
An instance is an exact replica of a model. Any type of model can be instanced. You can create as many instances as you like using the commands on the Edit > Duplicate/Instantiate menu, and position them anywhere in your scene. When you modify the original master model, all instances update automatically. Instances are useful because they require very little memory: only the transformations of the instance root is stored. However, you cannot modify, for example, an instances geometry or material. Instantiation has the following advantages: Instances use much less disk space than duplicates or clones because youre not duplicating the geometry. Editing multiple identical objects is very simple because you only have to edit the original. Wireframe, shading, and memory operations are much faster. Instances are displayed in the explorer with a cyan i superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label I.
Instance in the explorer. Instance in the schematic view.

Basics 81

Section 4 Organizing Your Data

Importing and Exporting


In any production pipeline, you will need to import and export scene data for reuse in other scenes or software packages. Softimage provides a number of importers and exporters available from the File > Import, File > Export, and File > Crosswalk menus. Softimage also supports many other file types such as audio, video, various graphics and middleware formats, as well as specialized scene elements such as function curves, actions, and motion capture data.

Importing and Exporting with Point Oven


Point Oven is a suite of plug-ins available from within Softimage that allow you to simplify your Softimage scenes by baking in vertex and function curve data. These plug-ins also allow you to streamline your pipeline by providing data transfer between different applications that also use Point Oven. The Softimage Point Oven plug-ins let you load and save various types of data: you can import and export Lightwave Object (LWO2) files, bake vertices to MDD files, import and export Point Oven scenes (PSC), export Lightwave scenes (LWS), export Messiah scenes (FXS), and import MDD files. You can access the Point Oven plug-ins from the File > Import > Point Oven and File > Export > Point Oven menus.

Importing and Exporting with Crosswalk


Crosswalk is a set of plug-ins and converters that lets you transfer assets such as scenes and models between Softimage and other programs in your pipeline such as Autodesk Maya and Autodesk 3ds Max. The Crosswalk converters are available in Softimage from File > Crosswalk. You can download the latest version of Crosswalk from www.autodesk.com/softimage-crosswalk. FBX, Collada, and dotXSI You can use Crosswalk in Softimage to import and export scenes and models in FBX (.fbx), Collada (.dae, .xml), and dotXSI (.xsi) formats. 3ds Max and Maya Crosswalk plug-ins for Maya and 3ds Max allow you to import and export dotXSI files in those programs. This allows you to share assets back and forth with Softimage. Crosswalk SDK You can use the templates and examples provided in the Crosswalk SDK to create converters to import dotXSI files into your own custom format, such as for games content.

Importing and Exporting Obj Files


You can import and export Wavefront Obj files to transfer data back and forth with other programs that support this format using File > Import > Obj File and File > Export > Obj File.

Importing and Exporting Other Formats


In addition to the formats explicitly mentioned here, Softimage supports a large number of other formats for scenes, animation, motion capture, images, and so on.

82 Softimage

Section 5

General Modeling
Modeling is the task of creating the objects that you will animate and render. No matter what type of object you are modeling, the same basic concepts and techniques apply. This section explores the aspects of modeling that arent specific to any specific type of geometry such as curves, polygon meshes, or NURBS surfaces.

What youll find in this section ...


Overview of Modeling Geometric Objects Accessing Modeling Commands Starting from Scratch Operator Stack Modeling Relations Attribute Transfer (GATOR) Manipulating Components Deformations

Basics 83

Section 5 General Modeling

Overview of Modeling
1 Start with a basic object, such as a primitive cube. 2 Add more subdivisions to work with.

Rough out the basic shape of the object.

Iteratively refine the object, moving points and adding more detail where required.

Once the modeling is done, the object is ready to be textured and animated. If changes are necessary, you can still perform modeling operations on the animated, textured object.

84 Softimage

Geometric Objects

Geometric Objects
By definition, geometric objects have points. The set of these points and their positions determine the shape of an object and are often called the objects geometry. The number of points and how they are connected is called its topology. No matter what the type of geometry, Softimage allows you to select, manipulate, and deform points in the same way. On the other hand, polygon meshes may require very heavy geometry (that is, many points) to approximate smoothly curved objects. However, you can subdivide them to create virtual geometry that is smoother.

Types of Geometry
The main types of renderable geometry in Softimage are polygon meshes and NURBS surfaces. In addition, there are other types of geometry that you can use for specialized purposes. Polygon Meshes Polygon meshes are quilts of polygons joined at their edges and vertices. One advantage of polygon meshes is that they allow for almost arbitrary topologyyou are not limited to rectangular patches and you can add extra points for more detail where needed. NURBS Surfaces Surfaces are two-dimensional NURBS (non-uniform rational B-splines) patches defined by intersecting curves in the U and V directions. In a cubic NURBS surface, the surface is mathematically interpolated between the control points, resulting in a smooth shape with relatively few control points. The accuracy of NURBS makes them ideal for smooth, manufactured shapes like car and aeroplane bodies. One limitation of surfaces is that they are always four-sided.
A subdivision surface created from a cube.

A polygon mesh sphere

NURBS surfaces allow for smooth geometry with relatively few control points.

Basics 85

Section 5 General Modeling

Curves In Softimage, curves are one-dimensional NURBS of linear or cubic degree. Cubic curves with Bzier knots can be manipulated as if they are Bzier curves. Curves have points but they are not renderable because they have no thickness. Nevertheless, they have many uses, such as serving as the basis for constructing polygon meshes surfaces, paths for objects to move along, controlling deformations like deform by curve and deform by spine, and so on.
A simple cubic NURBS curve.

Particles Particles are disconnected points in a point cloud. They are often emitted in simulations to create a variety of effects, such as fire, water, and smoke. In Softimage, point clouds are controlled by ICE trees. See ICE Particles on page 271.

Hair Lattices Lattices are a hybrid between geometric objects and control objects. Although they have points, they do not render and are used only to deform other geometric objects. Hair objects let you use guide hairs to control a full head of render hairs. You can style the hairs manually as well as apply a dynamic simulation.

Density
Density refers to the number of points on an object. Part of the art of modeling is controlling the balance of density. Generally speaking, you need more density in areas where an object has high detail or needs to deform smoothly. However, too much density means that an object will be unnecessarily slow to load, update, and render.

86 Softimage

Geometric Objects

Normals
On polygon meshes and surfaces, the control points form bounded areas. Normals are vectors perpendicular to these closed areas on the surface, and they indicate the visible side of the object and how its surface is oriented. Normals are used to compute shading between surface triangles. Normals are represented by thin blue lines. To display or hide them, click the eye icon (Show menu) of a 3D view and choose Normals.

Eye icon

When normals are oriented in the wrong direction, they cause modeling or rendering problems. You can invert them using Modify > Surface > Invert Normals or Modify > Poly. Mesh > Invert Normals on the Model toolbar. If an object was generated from curves, you can also invert its normals by inverting one or more of its generator curves with Modify > Curve > Inverse.
Normals should point toward the camera.

Right

Wrong

Basics 87

Section 5 General Modeling

Accessing Modeling Commands


The modeling tools can be found, not surprisingly, on the Model toolbar. In addition, the context menu also contains many of the most useful modeling commands that apply to the current selection.

Context Menus
Many modeling commands are available from context menus. The context menu appears when you Alt+right-click in the 3D views (Ctrl+Alt+right-click on Linux). If you click a selected object, the menu items apply to all selected objects. On Windows, you can also press the context-menu key (next to the right Ctrl key on some keyboards). If you click an unselected object, the menu items apply only to that object. When components are selected, you can right-click anywhere on the object that owns the selected components. The items on the context menu apply to the selected components. If you click over an empty area of a 3D view, the menu items apply to the view itself.

Model Toolbar
Youll find the Model toolbar at the far left of the screen. These commands are also available from the main menu.
Get commands Create generic elements, including primitive objects, cameras, and lights (also available on Animate, Render, and Simulate toolbars).

To display the Model toolbar: Click the toolbar title and choose Model.

Create commands Draw new objects or generate them from existing ones.

or Press 1 at the top of the keyboard.

Modify commands Change an objects topology or deform its geometry.

If the Palette or Paint panel is currently displayed, first click the Toolbar icon or press Ctrl+1.

88 Softimage

Starting from Scratch

Starting from Scratch


When modeling, you need to start somewhere. You can: Get a basic shape from the Primitive menu. Create text. Generate an object from a curve.

- Surface displays a submenu from which you can choose an available NURBS surface shape. 3. Set the parameters as desired. The geometric primitives (curves, polygon meshes, and surfaces) have certain typical controls: - The shape-specific page contains the basic characteristics of the shape. Each shape has different characteristics; for example, a sphere has one radius and a torus has two. - The Geometry page controls how the implicit shape is subdivided when converted into a surface. More subdivisions yield more points, resulting in greater detail but heavier geometry.

Primitives
Primitives are basic shapes like cubes, grids and spheres. You can add them to a scene and then modify them as you wish. For example, you can start with a sphere and move points to create a head. You can then attach eyeballs and ears to the head and put the whole head on a model of a character. There are several different primitive shapes for each geometry type. Each primitive shape has parameters that are particular to itfor example, a sphere has a radius that you can specify, a cube has a length, a cylinder has both height and radius, and so on. There are also several parameters that are common to all or to several primitive shapes: Subdivisions, Start and End Angles, and Close End. Getting Primitives You add a primitive object to the scene by choosing an option from the Get > Primitive menu on any of the toolbars at the left of the main window. 1. Choose Get > Primitive. 2. Choose an item from the submenus: - Curve displays a submenu from which you can choose an available NURBS curve shape. - Polygon Mesh displays a submenu from which you can choose an available polygon mesh shape.

Text
You can create text in Softimage, as well as import it from RTF (rich text format) files. Text is not a type of geometric object in Softimage; instead, text information is immediately converted to curves. After that, the curves can be optionally converted to planar or extruded polygon meshes.

Creating Text Choose one of the following commands from the Model toolbar: - Create > Text > Curves creates a Text primitive and converts it to a curve object. - Create > Text > Planar Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0. The curve object is automatically hidden.

Basics 89

Section 5 General Modeling

- Create > Text > Solid Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0.5 by default. Once again, the curve object is automatically hidden. In each case, a property editor with the following pages is displayed:
Enter text and font properties.

Convert curves to polygon meshes (optional).

Objects from Curves


You can generate polygon meshes and surfaces from curves using the first group of commands in the Create > Surf. Mesh menu or the Create > Poly. Mesh menu on the Model toolbar.

Create surface from curves Convert text to curves. Create polygon mesh from curves

The commands and the general procedures on these two menus are the samethe only difference is the type of object that is created.

90 Softimage

Operator Stack

1. Select the first input curve, then add the remaining input curves (if any) to the selection. Different commands require different numbers of input curves. For example, Revolution Around Axis requires only one curve, while Loft allows for any number of profile curves to define the crosssection. You are not limited to curve objects. You can also select curves on surfaces, including any combination of isolines, knot curves, boundaries, surface curves, and trim curves. For example, you can create a loft surface that joins two surface boundaries while passing through other curves. 2. Choose one of the commands from the first group in the Create > Surf. Mesh or the Create > Poly. Mesh on the Model toolbar. 3. In the property editor that opens, adjust the parameters as desired. For more information, refer to the Softimage Reference by clicking on the ? in the property editor.

Operator Stack
The operator stack (also known as the modifier stack or construction history) is fundamental to modeling in Softimage. Every time you perform a modeling operation, such as modify the topology or apply a deformation, an operator is added to the stack. Operators propagate their effects upwards through the stack, with the output of one operator being the input of the next. At any time, you can go back and modify or delete operators in the stack.

Viewing and Modifying Operators


You can view the operator stack of an object in an explorer if Operators is active in the Filters menu. The operator stack is under the first subnode of an object in the explorer, typically named Polygon Mesh, NURBS Surface Mesh, NURBS Curve List, and so on. For example, suppose you get a primitive polygon mesh grid, apply a twist, then randomize the surface. The operator stack shows the operators that have been applied. You can open the property page of any operator by clicking on its icon, and then modify values. Any changes you make are passed up through the history and reflected in the final object.

Click icon to open the property editor.

Click the name to select the operator. Then you can press Enter to open the editor, or press Delete to remove the operator.

2 Guide curve 1 Profile curve Example of extruding a curve along another curve

For example, you can: Change the size of the grid in its Geometry node. Change the angle, offset, and axis of the twist in Twist Op. Change the random displacement parameters in Randomize Op.

Basics 91

Section 5 General Modeling

To quickly open the last operator in the selected objects stack, press Ctrl+End or choose Edit > Properties > Last Operator in Stack. If you modify specific components, then go back earlier in the stack and change the number of subdivisions, youll probably get undesirable results because the indices of the affected points have changed.

Here is a quick overview of the workflow for using construction modes: 1. Set the current construction mode using the selector on the main menu bar. 2. Continue modeling objects by applying new operators. New deformations (operations that only change the positions of points) are applied at the top of the current region, and new topology modifiers (operators that change the number of components) are always applied at the top of the Modeling region. If you apply a deformation in the wrong region, you can move it by dragging and dropping in the explorer. 3. At any time as you work, you can display the final result (the result of all operators in all regions) or the just the current mode (the result of all operators in the current region and those below it) by selecting an option from the Construction Mode Display submenu of the Display Mode menu on the top right of a viewport: - Result (top) always shows the final result of all operators, no matter which construction mode is current. - Sync with construction mode shows the result of the operators in the current construction region and below.

Construction Modes and Regions


The construction history is divided into four regions: Modeling, Shape Modeling, Animation, and Secondary Shape Modeling. The purpose of these regions is to keep the construction history clean and well ordered by allowing you to classify operators according to how you intend to use them. For example, when you apply a deformation, you might be building the objects basic geometry (Modeling), or creating a shape key for use with shape animation (Shape Modeling), or creating an animated effect (Animation), or creating a shape key to tweak an enveloped object (Secondary Shape Modeling).

Secondary Shape Modeling Define shapes on top of envelopes, e.g., muscle bulges. Shape Modeling Define shapes for animation.

Animation Apply envelopes or other animated deformations. Modeling Create the basic shape and topology of an object. Use Freeze M to freeze this region.

Display Mode menu

You can even have different displays in different views so, for example, you can see and move points in one view in Modeling mode while you see the results after enveloping and other deformations in another view.

92 Softimage

Operator Stack

Changing the Order of Operators


You can change the order of operators in an objects stack by dragging and dropping them in an explorer view. You must always drop the operator onto the operator or marker that is immediately below the position where you want the dragged operator to go. Be aware that you might not always get the results you expect, particularly if you move topology operators or move other operators across topology operators, because operators that previously affected certain components may now affect different ones. In addition, some deformation operators like MoveComponent or Offset may not give expected results when moved because they store offsets for point positions whose reference frames may be different at another location in the stack. When you try to drag and drop an operator, Softimage evaluates the implications of the change to make sure it creates no dependency cycles in the data. If it detects a dependency, it will not let you drop the operator in that location. Moving an operator up often works better than moving it downthis is because of hidden cluster creation operators on which some operators depend.

Freezing removes any animation on the modeling operators (such as the angle of a Twist deformation). The values at the current frame are used. For hair objects, the Hair Generator and Hair Dynamics operators are never removed.

Collapsing Deformation Operators


Sometimes, it is useful to freeze certain operators in the stack without freezing earlier operators that are lower in the stack. For example, you might have many MoveComponent operators that are slowing down your scene, but you dont want to lose an animated deformation or a generator (if your object has a modeling relation that you want to keep). In these cases, you can collapse several deformation operator into a single Offset operator. The Offset operator is a single deformation that contains the net effect of the collapsed deformations at the current frame. Simply select the deformations operators in an explorer and choose Edit > Operator > Collapse Operators.

Freezing the Operator Stack


When you are satisfied with an object, you can freeze all or part of its operator stack. This removes the current historyas a result, the object requires less memory and is quicker to update. However, you can no longer go back and change values. To freeze the entire stack, select the object and click Freeze on the Edit panel. To freeze just the modeling region, select the object and click Freeze M. To freeze from a specific operator down, select the operator in an explorer and click Freeze.

Basics 93

Section 5 General Modeling

Modeling Relations
When you generate an object from other objects, a modeling relation is established. For example, if you create a surface by extruding one curve along another curve, the resulting surface is linked to its generator curves. If you modify the curves, the surface updates automatically. The modeling relation is sometimes called construction history in other software. You can modify the generated object in any way you like, for example, by moving points or applying a deformation. When you modify the generators, the generated object is updated while any modifications you have made to it are preserved. If you delete the input objects, the generated object is removed as well. To avoid this, freeze the generated object or at least the generator operator before deleting the inputs. If you use the Delete button in the Inputs section of the generators property editor, the generator is automatically frozen first. You can display the modeling relations: In a 3D view, click the eye icon (Show menu) and make sure that Relations is on. In a schematic view, make sure that Show > Operator Links is on. If the selected object has a modeling relation, it is linked to its input objects by lines. A label on the line identifies the type of relation (such as wave or revolution) and the name of the input object. You can click the line to select the corresponding operator.
Modeling Relation The road was created by extruding a crosssection along a guide. When the original guide was deformed into a loop, the road was updated automatically.

94 Softimage

Attribute Transfer (GATOR)

Attribute Transfer (GATOR)


You can transfer and merge clusters with properties from object to object. The cluster properties that you can transfer in this way include materials, texture UV coordinates, vertex colors, property weight maps, envelope weights, and shape animation. Attributes can be transferred in two ways: If you are generating a polygon mesh object from others, for example using Merge or Subdivision, use the controls in the generators property editor to transfer attributes from the input objects to the generated objects. Otherwise, select the target object, choose Get > Property > GATOR, pick one or more input objects, and right-click to end the picking session. You can use any combination of polygon meshes and NURBS surfaces.

Manipulating Components
Tweak Component is the main tool for moving components. It allows you to translate, rotate, and scale points, polygons, and edges. You can use it in two ways: Click and drag components for a fast, uninterrupted interaction. Select a component and then use the manipulator for a more controlled interaction.
To use the Tweak Component tool

1. Select a geometric object. 2. Activate the Tweak Component tool by pressing m or choosing Modify > Component > Tweak Component Tool from the Model toolbar. Note that if a curve is selected, then pressing m activates the Direct Manipulation tool instead. However, you can still use Tweak Component with curves by choosing it from the toolbar menu. 3. Move the mouse pointer over the object in any geometry view. As the pointer moves, the component under the pointer is highlighted. The Tweak Component tool will not highlight backfacing components, or components that are occluded by parts of the same object. When there are multiple types of components within the picking radius, priority is given first to points, then to edges, and finally to polygons. 4. Do one of the following: - Click+drag to perform a simple transformation on the highlighted component. If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel. For example, to translate along a points normal, activate Local and the Y axis only.
Basics 95

Transfer and merge surface attributes. Transfer and merge animation attributes. Transfer and merge specific attributes manually.

Section 5 General Modeling

Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others. - Click and release the mouse button to select the highlighted component. A manipulator appears (unless youve toggled it off). You can use the manipulator to transform the selection, or if you prefer you can first modify the selection, change the pivot, and set other options. The Tweak Component tool uses the Ctrl, Shift, and Alt modifier keys with the left and middle mouse buttons to perform different functionslook at the mouse/status line at the bottom of the Softimage window for brief descriptions, or read the rest of this section for the details. The right mouse button opens a context menu. 5. The Tweak Component tool remains active, so you can repeat steps 3 and 4 to manipulate other components. When you have finished, deactivate the tool by pressing Esc or activating a different tool.

Switching between Translation, Rotation, and Scaling


The Tweak Component tool lets you translate, rotate, or scale components. Select the desired transformation using the v, c, and x keyspress and release a key to change the transformation (sticky mode) or press and hold a key to temporarily override the current transformation (supra mode). To translate, press v or choose Translate from the context menu.
Drag the center to translate freely in the viewing plane. Drag an axis to translate in the corresponding direction.

To rotate, press c or choose Rotate from the context menu.


Drag an axis to rotate in the corresponding direction.

96 Softimage

Manipulating Components

To scale, press x or choose Scale from the context menu.


Drag the center to scale uniformly. Drag an axis to scale in the corresponding direction.

Ref, or reference, mode lets you transform elements using another component or object as the reference frame. See Setting the Pivot on page 98. Plane mode is similar to Ref. It uses the same axes as Ref but the object center as the pivot.

Activating Axes
You can activate or deactivate axes on the Transform panel: Click an axis icon to activate it and deactivate the others. The mouse pointer updates to reflect the current action. You can also press Tab to cycle through the three actions, or Shift+Tab to cycle in reverse order. To activate the standard Translate, Rotate, or Scale tools, you must either deactivate the Tweak Component tool before pressing v, c, or x, or use the t, r, or s buttons on the Transform panel. Shift+click an axis icon to activate it without affecting the others. Ctrl+click an axis icon to toggle it. Click the All Axes icon to activate all three axes. Ctrl+click the All Axes icon to toggle all three axes. Alternatively if the Tweak manipulator is displayed, you can activate a single axis by double-clicking on it. Double-click on the same axis again to re-activate all axes, or on a different one to activate it instead.
All Axes Individual axes

Setting Manipulation Modes


The Tweak Component tool uses the manipulation modes shown on the Transform panel. They affect the axes and pivot used for the transformation. Global transformations are performed along the scenes global axes. Local transformations use the components own reference frame. In this mode, Y is the normal direction. View transformations are performed with respect to the viewing plane of the 3D view. Object transformations are performed in the local coordinate system of the object that contains the components.

Basics 97

Section 5 General Modeling

Selecting Components
The Tweak Component tool lets you select components in a similar way to the standard selection tools, but there are some differences. Selecting, Deselecting, and Extending the Selection Use the following keyboard and mouse combinations for selection: Click a component to select it. Shift+click a component to add it to the selection. Shift+middle-click to toggle-select a component. Ctrl+Shift+click to deselect a component. To quickly deselect all components, click anywhere outside the object. Note that you can only multi-select components of the same type. You cannot select a heterogeneous collection of points, edges, and polygons. Selecting Loops and Ranges Use the Alt key to select loops or ranges of components.
To select loops or ranges of components

Note that for edge loops, the direction is implied so you can simply Alt+middle-click on an edge to select the loop and then Alt+Shift+middle-click to select additional loops. However, to select parallel edge loops, you still need to specify two components as described above. Selecting by Type The Tweak Component tool allows you to manipulate points, edges, and polygons, but you can limit it to a particular type of component if you desire. Use the context menu to activate Tweak All, Points, Edges, Polygons, or Points + Edges.

Setting the Pivot


You can quickly set the pivot by middle-clicking on a component. For example, to rotate a polygon about one of its edges, simply click to select the polygon and then middle-click to specify the edge as the reference. The manipulator does not react to middle-clicks unless Shift is pressed, so you can pick a component even if the manipulator is covering it in a view. Middle-clicking temporarily switches to Ref manipulation mode. As soon as you select a new component, the previous manipulation mode is restored. If you want to transform several components about the same reference one after another, you should manually switch to Ref mode and then middle-click to specify the reference. In this way, the reference frame does not revert to the default when you select a new component to manipulate.

1. Click to select the first or anchor component. 2. Do one of the following: - Alt+click on a second component to select all components on a path between the two components. - Alt+middle-click on a second component to select all components in the loop that contains both components. 3. To select additional loops or ranges, use Shift+click to specify a new anchor and then Alt+Shift+click for a new range or Alt+Shift+middle-click for a new loop.

98 Softimage

Manipulating Components

Using Proportional Modeling


When you manipulate points, edges, and polygons, you can use proportional modeling. When this option is on, neighboring components are affected as well, with a falloff that depends on distance. Proportional modeling is sometimes known as magnet or soft selection.

Sliding Components
You can slide components with the Tweak Component tool. This helps to preserve the contours of objects as you tweak them. Sliding an edge moves its endpoints along the adjacent edges by an equal percentage. Sliding a point or a polygon clamps the associated points to the nearest location on the surface of the mesh, as if they had been shrinkwrapped to the original untweaked object. Sliding works only on polygon mesh components.

Proportional modeling off

Proportional modeling on

Selected edge loop.

Effect of sliding.

To activate proportional modeling, click the Prop button on the Transform panel.

Effect of ordinary translation for comparison.

To activate or deactivate sliding:

While the Tweak Component tool is active, do one of the following: Components that are affected by the proportional falloff are highlighted, and the Distance Limit is displayed as a circle. You can change the Distance Limit interactively when proportional modeling is active by pressing and holding r while dragging the mouse left or right. You can change the Falloff (Bias) profile by pressing and holding Shift+R while dragging the mouse. To change other proportional settings, right-click on Prop. - Press j. Press and release the key to toggle sliding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode). - Click the on-screen Slide icon at the bottom of the view.
Slide Components button

- Right-click and choose Slide Components.

Basics 99

Section 5 General Modeling

Snapping
You can use the Ctrl key to snap while using the Tweak Component tool: Press Ctrl to toggle snapping to targets on or off (depending on its current setting on the Snap panel) while translating. Press Ctrl to snap by increments while scaling. For more information about snapping options, see Snapping on page 72.

3. Release the mouse button over the point you want to weld to. Note that interactive welding uses the same snapping region size as the Snap tool. You can modify the region size using the Snap menu. 4. Repeat steps 2 and 3 to weld more points, if desired. When you have finished welding, toggle Weld Points off.

Hiding the Manipulator


If you dont like working with the manipulator, you can hide or unhide it by Toggle Manipulator button clicking the on-screen button at the bottom of the view or by choosing Toggle Manipulator from the context menu. When the manipulator is off, the Tweak Component tool is always in click-and-drag mode: If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel. Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.

Welding Points
You can interactively weld pairs of points on polygon meshes while using the Tweak Component tool. Welding merges points into a single vertex.
To weld points

1. While the Tweak Component tool is active, toggle Weld Points on by doing one of the following: - Press l. Press and release the key to toggle welding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode). - Click the on-screen Weld Points icon at the bottom of the view.
Weld Points button

- Right-click and choose Weld Points. 2. Click and drag a point. As you move the mouse pointer, the point snaps to points within the region.

100 Softimage

Manipulating Components

Manipulating Components Symmetrically


Symmetrical manipulation lets you move points and other components while maintaining the symmetry of an object. Any manipulation performed on components on one side is mirrored to the corresponding components on the other side. Components that lie directly on the plane of symmetry are locked down; they can be translated or moved only along the plane of symmetry itself. There are two ways to do this in Softimage: To move components symmetrically in live mode, simply activate Sym on the Transform panel. Softimage automatically finds symmetrical components (within a small tolerance) and moves them, too. If you will need to maintain a correspondence between points even after an object is no longer symmetrical, you first need to apply a symmetry map (Get > Property > Symmetry Map) while the object is still symmetrical. This allows you to manipulate components symmetrically after a character has been enveloped and posed, for example. To specify the plane of symmetry or set other options, right-click on Sym.

Alternatives to the Tweak Component Tool


In addition to the Tweak Component tool, Softimage provides many other ways to manipulate components. For example, you could use the regular selection and transformation tools, or some of the other tools on the Modify > Component menu.

Basics 101

Section 5 General Modeling

Deformations
Deformations are operators that change the shape of geometric objects. Softimage provides a large variety of deformation types available from the Modify > Deform menu of the Model and Simulate toolbars as well as the Deform > Deform menu of the Animate toolbar. Some deformations, like Bend and Twist, are very simple. Others, like Lattice and Curve, use additional objects to control the effect. Deformations can be used either as modeling tools or animation tools. Depending on the type of deformation, you can animate the deformations own parameters, such as the amplitude of a Push, or the properties of a controlling object, such as the center of a Wave. Lattice Deformation

Wave Deformation

Examples of Deformations
Here are just some examples of the many types of deformation and their possible uses. Deformation by Curve
Circular wave

Planar wave

Object and curve before the deformation is applied

Object and curve after the deformation is applied

Muting Deformations
All deformations can be muted. This temporarily disables its effect. To mute a deformation, activate Mute in its property editor. Alternatively, right-click on its operator in an explorer and choose Mute.

102 Softimage

Section 6

Curves
Softimage provides a full set of tools for creating and editing curves in 3D space. Although they cant be rendered by themselves, curves form the basis for a lot of modeling and animation techniques.

What youll find in this section ...


About Curves Drawing Curves Manipulating Curve Components Modifying Curves Inverting Curves Importing EPS Files

Basics 103

Section 6 Curves

About Curves
In Softimage, you can use curves to: To build objects, for example, by revolving, extruding, or using Curves to Mesh, To deform objects, for example, using curve or spine deformations. As paths and trajectories for animation. Curves are linear (degree 1) or cubic (degree 3) NURBS (Non-Uniform Rational B-Splines). NURBS are a class of curves that computers can easily manipulate, allowing for a great deal of flexibility in modeling.

Drawing Curves
Softimage has tools and commands that let you draw and manipulate curves in a variety of ways. In Softimage, you can draw and manipulate two types of curve: linear and cubic. Linear curves are composed of straight segments, and cubic curves are composed of curved segments.

Curve Components
Curves have many components. You can display these components using the options on a viewports Show menu (eye icon) and select them using the filters on the Select panel.
Knots lie on the curve. Linear Curve Cubic Curve Knot has multiplicity 1.

NURBS Boundaries show the beginning of the curve (U = 0).

Cubic curves are interpolated between points.

Cubic Curve Knot has multiplicity 2.

Cubic Curve Knot has multiplicity 3 (Bzier).

Segments are the span between knots.

On a cubic curve, each knot can have a multiplicity of 1, 2, or 3. This value refers to the number of control points associated to the knot. In general, knots with higher multiplicity are less smooth but provide more control over the trace of the curve. A knot with multiplicity 3 is like a Bzier point, with one control point at the position of the knot and the other two control points acting as the tangent handles.
Hulls join points.

The Tweak Curve tool allows you to manipulate these knots in a Bzierlike mannersee Manipulating Curve Components on page 107. Whether the back and forward tangents remain aligned depends on how you manipulate themit is not a property of the knot itself.

104 Softimage

Drawing Curves

Draw Linear allows you to draw lines of connected straight segments (sometimes called polylines). The straight segments meet at the locations you click. To add points or knots to an existing curve, use the corresponding commands on the Modify > Curve menu. To remove points or knots, select them and press Delete.
Broken tangents create a sharp corner. Four control points create a straight segment when they are lined up.

Bzier knots also allow you to create straight segments by rotating the tangents to point at adjacent knots, so that four control points are lined up in a row. Again, whether the control points remain lined up depends on how you manipulate the adjacent knotsit is not a property of the segment. See Drawing a Combination of Linear and Curved Segments on page 106. You can draw cubic or linear curves by clicking to place control points or to place knots. Use one of the following commands from the Create > Curve menu of the Model or Animate toolbar: Draw Cubic by CVs allows you to place control points (also known as control vertices or CVs). The curve does not pass through the locations you click but is a weighted interpolation between the control points. As you add more points, the existing knot positions may change but the point positions do not. Draw Cubic by Bzier-Knot Points allows you to place knots of multiplicity 3. The curve passes through the points you click. As you add more knots, the positions of the control points are automatically adjusted to ensure maximum smoothness of the curve as the curve passes through the existing knot positions. Draw Cubic by Knot Points allows you to place knots of multiplicity 1. Again, the curve always passes through the locations you click and the positions of the control points are automatically adjusted as you add more knots.

The choice between linear, cubic Bzier, and cubic non-Bzier drawing tools depends on the situation. When creating profiles for modeling, linear curves give a good sense of the final result. For paths, youll want cubic curvesnon-Bzier curves are smoother but you may find Bzier curves easier to control. Bzier curves also give you the ability to have sharp corners, and to mix curved and straight segments. The choice between placing control points or placing knots to draw cubic non-Bzier curves is simply a matter of personal preference. While drawing a curve: To add a point at the end of the curve, use the left mouse button. To add a point between two existing points, use the middle mouse button. To add a point before the first point, first right-click and choose LMB = Add at Start and then use the left mouse button. To return to adding points at the end of the curve, first right-click and choose LMB = Add at End. Other useful commands are available on the context menu when you right-click: Open/Close, Invert, Start New Curve, and, of course, Exit Tool. Before you release the mouse button, you can drag the mouse to adjust the points location. Snapping can also be very useful for controlling the position of points and knots. While drawing, you can move any point or knot by pressing and holding m while dragging to activate the Tweak Curve tool in supra mode.

Basics 105

Section 6 Curves

If you will be using curves as profiles for modeling, you should draw them in a counterclockwise direction. This ensures that the normals of any surface or polygon mesh you create from the curves will be oriented correctly. If you will be using curves as paths for animation or extruding, you should draw them from beginning to end. Otherwise, you may need to invert the curves or generated objects later. Drawing a Combination of Linear and Curved Segments Although Softimage does not support having linear and cubic NURBS segments in the same subcurve, you can use Bzier knots to obtain straight segments on a cubic curve: If you have already begun drawing a linear curve, make it cubic using Modify > Curve > Raise Degree and then use Modify > Curve > Add Point Tool by Bzier-Knot Points to draw curved sections. Press Shift while adding knots to preserve the existing trace if you want the last-drawn segment to remain straight. If you have already begun drawing a cubic curve, place the knots where you want them and then straighten the desired segments as described in Creating Straight Segments on page 109. Straight segments are not inherently linear. Whether they remain straight depends on how you manipulate them. Using the Tweak Curve tool to move a knot preserves the linearity, but it will break if you move a tangent or use another tool.

Setting Knot Multiplicity You can change the multiplicity of a knot to suit your needs. For example, reducing the multiplicity makes a curve smoother, but increasing the multiplicity to 3 allows you to use Bzier controls and make sharp angles. 1. Select one or more knots on a cubic curve. To affect all knots on one or more curves, select the curve objects instead. 2. Choose one of the following commands from the Modify > Curve menu of the Model toolbar: - Make Knots Bezier set the multiplicity of the selected knots to 3. - Make Knots Non-Bezier set the multiplicity of the selected knots to 1. - Set Knots Multiplicity opens the Set Crv Knot Multiplicity Op property editor, where you can set the multiplicity of the selected knots to 0, 1, 2, or 3. Setting it to 0 is equivalent to removing the knot.

106 Softimage

Manipulating Curve Components

Manipulating Curve Components


The main tool for manipulating curve components is Tweak Curve. It allows you to manipulate curves in a Bzier-like manner. In addition to Bzier knots, you can manipulate non-Bzier knots, control points, and isopoints. 1. Select a curve and activate the Tweak Curve tool by pressing m or choosing Modify > Curve > Tweak Curve from the Model toolbar. Note that pressing m when a curve is not selected will activate the Tweak Component tool instead. 2. As you move the mouse pointer close to a knot, the manipulator jumps to it. Click and drag the manipulators handles to adjust the knots position, tangent angle, or tangent length.

Drag the round handle to rotate the tangent without changing its length. Handle on a Bzier knot Drag the square handle to move the tangent freely. Use the middle mouse button to drag one side independently. Once the tangent is broken in this way, the handles always move independently until you align them again. Shift+drag to scale the tangent length without affecting the slope. Again, use the middle mouse button to scale one side independently. Use middle mouse button to rotate one side independently. If the handles have been broken and you want to maintain their relative angle while rotating them, right-click on the manipulator and choose LMB Binds Broken Tangents. Drag the central knot to move it freely. The tangent handles maintain their relative positions to the knot, unless an adjacent segment is linear (four control points lined up). In that case, the tangent handles are automatically adjusted to maintain the linearity of the segment. Use the middle mouse button to drag the central knot while leaving the tangent points in place.

Handle on a non-Bzier point Drag the round handle to rotate the tangent without changing its length. Drag the knot (or isopoint) to move it freely.

Drag the square handle to move the tangent freely. Press Shift to scale the tangent length without affecting the slope.

Drag a control point to move it and affect the trace of the curve indirectly.

Basics 107

Section 6 Curves

You can also: - Click and drag a control point to move it to a new location. - Select an isopoint by clicking on a curve segment between knots. A manipulator appears at the isopoint. To select an isopoint that is very close to a knot, you can click on the curve farther away and then slide the mouse pointer closer before releasing the button. - Right-click on a knot or isopoint manipulator to access a context menu containing commands that affect that point, as well as other tool options. Note that if you right-click on a selected knot (or on another part of the curve while knots are selected), the context menu is different (although many of the same items are available on both menus). In this case, the commands apply to all selected knots and not just the one under the mouse pointer. - Click and drag a rectangle across one or more knots to select them. Use Shift to add to the selection, Ctrl to toggle, or Ctrl+Shift to deselect. This allows you to apply commands to multiple selected knots using the context menu or the Modify > Curve menu. 3. The Tweak Curve tool remains active, so you can repeat step 2 as often as you like. When you have finished, exit the tool by pressing Esc or activating a different tool. Note that if you move an isopoint that is adjacent to Bzier knots, the tangents will break. If desired, first add a Bzier knot at the isopoints location to preserve continuity.

Breaking and Aligning Bzier Tangents


On a Bzier knot, the back and forward tangents can have different orientations. When the tangents are broken or unlinked in this way, the result is a sharp corner.

Broken tangents

Aligned tangents

Breaking Tangents To break Bzier tangents and adjust the handles independently of each other, use the middle mouse button while using the Tweak Curve tool. Aligning Tangents After tangent handles have been broken, they can be realigned to make the curve smooth again at that point. Select one or more Bzier knots and choose one of the following commands from the Modify > Curve menu on the Model toolbar: Align Bezier Handles sets the slopes of both tangents to their average orientation. Align Bezier Handles Back to Forward sets the slope of back tangent equal to the forward tangent. Align Bezier Handles Forward to Back sets the slope of forward tangent equal to the back tangent. Back and forward are considered in terms of the curves parameterization from start to end point.

108 Softimage

Manipulating Curve Components

Creating Straight Segments You can create straight segments on curves using the commands available on the Modify > Curve menu of the Model toolbar, or on the context menu of the Tweak Curve tool. Softimage creates Bzier knots, if necessary, and rotates the appropriate tangents to point at the adjacent knots. Once a straight segment has been created this way, the Tweak Curve tool maintains the linearity when you move the adjacent knots. However, the segment will revert to a curve if you adjust the tangent handles, or if you use a different tool to move control points.

To straighten segments adjacent to a knot

1. Select a curve. 2. Activate the Tweak Curve tool (press m). 3. Move the mouse pointer over an unselected knot. 4. Right-click and choose one of the following commands from the context menu: - Make Adjacent Knot Segments Linear straightens both segments connected to the knot. - Make Fwd Knot Segment Linear straightens the forward segment. - Make Bwd Knot Segment Linear straightens the back segment. Back and forward are considered in terms of the curves parameterization from start to end point.

To straighten segments between knots

Alternatives to the Tweak Curve Tool


In addition to the Tweak Curve tool, Softimage provides many other ways to manipulate components. For example, you could use the regular selection and transformation tools, or some of the other tools on the Modify > Component menu.

1. Select the knots at both ends of each segment you want to straighten. You must do this individually for each segment you want to straighten, even if segments are consecutive. 2. Choose Modify > Curve > Make Knot Segments Linear from the Model toolbar. The segments between selected knots become straight.

Basics 109

Section 6 Curves

Modifying Curves
The Modify > Curve menu of the Model toolbar contains a variety of commands you can use to modify curves in various ways. Two of the more common modifications are inverting and opening/closing, but there are other operations you can perform as well.

Creating Curves from Other Objects


Many of the commands on the Create > Curve menu of the Model toolbar allow you to create curves based on other objects in your scene. The illustrations here give you an idea of just some of the possibilities. Extracting Curve Segments

Opening and Closing Curves


Modify > Curve > Open/Close opens a closed curve and closes an open one.

Original curve

Extracted segment

Fitting Curves onto Curves


Open curve Closed curve

Inverting Curves
Modify > Curve > Invert switches the start and end points of a curve. The result is as if you had drawn the curve clockwise instead of counterclockwise or vice versa. For example, if an object uses the curve as a path, it moves in the opposite direction once you invert the curve. Similarly, if a surface has been built from the curve and its operator stack was not frozen, its normals become reversed.
Original sketched curve New curve fitted onto sketched curve

Creating Curves from Intersecting Surfaces

Intersection between two surfaces

110 Softimage

Importing EPS Files

Blending Curves

Importing EPS Files


Use File > Import > EPS File from the main menu to import curves saved as EPS (encapsulated PostScript) and AI (Adobe Illustrator) files from a drawing program. Once in Softimage, you can convert them to polygon meshes using Create > Poly. Mesh > Curves to Mesh to create planar or extruded logos.

Original curves

New blend curve

Filleting Curves

Preparing EPS and AI Files for Import There are some restrictions on the files you can import. Follow these guidelines: Make sure the file contains only curves. Convert text and other elements to outlines.
Intersecting curves Fillet between them

Save or export as version 8 or previous. Do not include a TIFF preview header.

Creating Curves from Animation If you have animated the translation of an object, you can use Tools > Plot > Curve from the Animate toolbar plot the motion of its center to generate a curve. For example, this can be used to create a trajectory curve. You can also plot the movement of a selected point or cluster.

Basics 111

Section 6 Curves

112 Softimage

Section 7

Polygon Mesh Modeling


Polygon meshes are one of the basic renderable geometry types in Softimage. They are ideally suited for modeling non-organic objects with hard edges and corners, but they can also be used to approximate smooth, organic objects. Polygon meshes are particularly used for games development because of the requirements of most game engines. Polygon meshes are also the basis of subdivision surfaces.

What youll find in this section ...


Overview of Polygon Mesh Modeling About Polygon Meshes Converting Curves to Polygon Meshes Drawing Polygons Subdividing Drawing Edges Extruding Components Removing Polygon Mesh Components Combining Polygon Meshes Symmetrizing Polygons Cleaning Up Meshes Reducing Polygons Subdivision Surfaces

Basics 113

Section 7 Polygon Mesh Modeling

Overview of Polygon Mesh Modeling


There are three basic approaches to modeling with polygon meshes.

About Polygon Meshes


When working with polygon meshes, there are some basic concepts you should understand.

Box Modeling
Box modeling starts with a primitive like a cube, then adds subdivision and shapes it by deforming, adding edges, extruding, and so on.

Polygons
A polygon is a closed 2D shape formed by straight edges. The edges meet at points called vertices. There are exactly the same number of vertex points as edges. The simplest polygon is a triangle.

Triangle

Quad

N-gon

Modeling with Curves


When you model with curves, you begin with curves outlining the basic shape of your object and convert them to polygon meshes. You can then continue to add detail using any techniques you like. Polygons are classified by the number of edges or vertices. Triangles and quadrilaterals (or quads) are the most commonly used for modeling. Triangles have the advantage of always being planar, while quads give better results when used as the basis of subdivision surfaces. Certain game engines may require that objects be composed entirely of triangles or quads. Polygons that are very long and thin, or that have extremely sharp angles, can give poor results when deforming or shading. Polygons that are regularly shaped, with all edges and angles being almost equal, generally give the best results.

Polygon-by-polygon Modeling
With polygon-by-polygon modeling, you draw each polygon directly.

114 Softimage

About Polygon Meshes

Polygon Meshes
A polygon mesh is a 3D object composed of one or more polygons. Typically these polygons share edges to form a threedimensional patchwork. However, a single polygon mesh object can also contain discontiguous sections that are not connected by edges. These disconnected A polygon mesh sphere polygon islands can be created by drawing them directly or by combining existing polygon meshes.

Edges that are not shared represent the boundary of the polygon mesh object and are displayed in light blue if Boundaries and Hard Edges are visible in a 3D view. Polygons are the closed shapes that make up the tiles of the mesh.

Planar and Non-planar Polygons


When an individual polygon on a polygon mesh is completely flat, it is called planar. All its vertices lie in the same plane, and are thus coplanar. Planar polygons give better results when rendering.

Types of Polygon Mesh Components


Polygon meshes contain several different types of component: points (vertices), edges, and polygons.
Planar polygon on the ground plane with normals visible.

Polygon

Edge

Non-planar polygon created by moving a point below the ground plane.

Point

Points are the vertices of the polygons. Each point can be shared by many adjacent polygons in the same mesh. Edges are the straight line segments that join two adjacent points. Edges can be shared by no more than two polygons.

Triangles are always planar because any three points define a plane. However, quadrilaterals and other polygons can become non-planar, particularly as you move vertices around in 3D space. When objects are automatically tessellated before rendering, non-planar polygons are divided into triangles. However, other applications such as game engines may not support non-planar polygons properly.

Basics 115

Section 7 Polygon Mesh Modeling

Valid Meshes
Softimage has strict rules for valid polygon mesh structures and wont let you create an invalid mesh. Some of the rules are: Every point must belong to at least one polygon. Every edge must belong to at least one polygon. A given point can be used only once in the same polygon. All edges of a single polygon must be connected to each other. Among other things, this means that you cannot have a hole in a single polygon. To get a hole in a polygon mesh, you must have at least two polygons.

Controlling Shading on Meshes


Use the meshs Geometry Approximation property to control whether the shading is smooth or faceted across polygons. If the object doesnt already have a Geometry Approximation property, choose Get > Property > Geometry Approximation from any toolbar. The Discontinuity parameters on the Polygon Mesh page of the Geometry Approximation property editor control whether the objects are faceted or smooth at the edges.

Hole in a polygon mesh

At least two polygons are required.

Edges cannot be shared by more than two polygons. Tri-wings are not supported. To connect three polygons in this way, a double edge is required. Softimage does support one case of non-manifold geometry. A single point can be shared by two otherwise unconnected parts of a single mesh object. If you export geometry from Softimage, remember that such geometry may not be considered valid by other applications.

Faceted polygons are appropriate for geometric shapes like dice.

A non-manifold geometry that is valid in Softimage.

Smooth polygons are appropriate for organic shapes like faces.

116 Softimage

About Polygon Meshes

The illusion of smoothness is created by averaging the normals of adjacent polygons. When normals are averaged in this way, the shading is a smooth gradient along the surface of a polygon. When normals are not averaged, there is an abrupt change of shading at the polygon edges. Automatic discontinuity lets you turn off the averaging of normals for sharper edges and the discontinuity Angle lets you specify how sharp edges must be before they appear faceted. If the dihedral angle (angle between normals) of two adjacent polygons is less than the Discontinuity Angle, the normals are averaged; otherwise, they are not averaged.

If Automatic is on and Angle is 0, the object is completely faceted.

If Automatic is off, the object is completely smooth.

Dihedral angles: flatter edges have small angles and sharper edges large angles.

Discontinuity on Selected Edges You can achieve different effects by adjusting these two parameters: If Automatic is on, then the Angle determines the threshold for faceted polygons.
Flat edges: normals averaged, smooth shading Sharp edges: normals not averaged, faceted

In addition to setting the geometry approximation for an entire object, you can make selected edges discontinuous by marking them as hard using Modify > Component > Mark Hard Edge/Vertex from the Model toolbar. Hard edges are displayed in dark blue when Boundaries and Hard Edges is checked on a viewports Show menu (eye icon).
Selected edges marked as hard.

Basics 117

Section 7 Polygon Mesh Modeling

Converting Curves to Polygon Meshes


Use Create > Poly. Mesh > Curves to Mesh from the Model toolbar to create a polygon mesh based on the selected curves.
Exterior closed curves become disjoint parts of the same mesh object.

Delaunay generates a mesh composed entirely of triangular polygons. This method gives consistent and predictable results, and in particular, it will not give different results if the curves are rotated.

Interior closed curves can become holes.

Tesselating
Tesselation is the process of tiling the curves shapes with polygons. Softimage offers three different tesselation methods: Minimum Polygon Count uses the least number of polygons possible but yields irregular polygons.

Medial Axis creates concentric contour lines along the medial axes (averages between the input boundary curves), morphing from one boundary shape to the next. This method creates mainly quads with some triangles, so it is well-suited for subdivision surfaces.

Other Options
In addition to controlling the tesselation, there are many other options to control holes, extrusion, beveling, embossing, and so on.

118 Softimage

Drawing Polygons

Drawing Polygons
Modify > Poly. Mesh > Add/Edit Polygon Tool is a multi-purpose tool that lets you draw polygons interactively by placing vertices. You can use it to add polygons to an existing mesh, add or remove points on existing polygons, or to create a new polygon mesh object. 1. Do one of the following: - To create a new polygon mesh object, first make sure that no polygon meshes are currently selected. or - To add polygons to an existing polygon mesh object, select the mesh first. or - To add or remove points on an existing polygon in a existing polygon mesh object, select that polygon. 2. Choose Modify > Poly. Mesh > Add/Edit Polygon Tool from the Model toolbar or press n. 3. Do one of the following: - Click in a 3D view to add a point. If necessary, you can adjust the position by moving the mouse pointer before releasing the button. or - Click an existing point on another polygon in the same mesh to attach the current polygon to it. or - Click an existing edge of another polygon in the same mesh to attach the current polygon to it. or - Left-click and drag on a vertex of the current polygon to move it. or - Middle-click a vertex of the current polygon to remove it. As you move the mouse pointer, the edges that would be created are outlined in red. To insert the new point between a different pair of vertices of the current polygon, first move the mouse across the edge connecting them. The direction of the normals is determined by the direction in which you draw the vertices. If the vertices are drawn in a counterclockwise direction, the normals face toward the camera and if drawn clockwise, they face away from the camera. As you draw, red arrows indicate the order of the vertices. 4. When you have finished drawing a polygon, do one of the following: - To start a new polygon and automatically share an edge with the current one, first move the mouse pointer across the desired edge and then click the middle mouse button. Repeat step 3 as necessary. or - To start a new polygon without sharing automatically sharing an edge, click the right mouse button. Repeat step 3 as necessary. or - When you are finished drawing polygons, exit the Add/Edit Polygon tool by clicking the right mouse button twice in a row, by choosing a different tool, or by pressing Esc.

Basics 119

Section 7 Polygon Mesh Modeling

Subdividing
You can subdivide polygon meshes to add more detail where needed.

Subdividing Polygons with Smoothing


You can subdivide and smooth selected polygons using Modify > Poly. Mesh > Local Subdivision from the Model toolbar.

Subdividing Polygons and Edges Evenly


You can subdivide polygons and edges evenly using Modify > Poly. Mesh > Subdivide Polygons/Edges from the Model toolbar. Select specific polygons or edges first, or just select a polygon mesh object to subdivide all polygons. For polygons, you can choose different subdivision types:

Plus

Diamond

Triangles

Splitting Edges
You can split edges interactively using Modify > Poly. Mesh > Split Edge Tool from the Model toolbar. Activate this tool then click an edge to split it. Use the middle mouse button to split parallel edges. Press Ctrl while clicking to bisect edges evenly.

For edges, you can connect the new points and extend the subdivision to a loop of parallel edges (that is, the opposite edges of quad polygons):

Other Ways to Subdivide


The Modify > Poly. Mesh menu of the Model toolbar contains many other tools and commands that can subdivide and add detail to polygon meshes. For example:
Parallel Edge Loop and Connect both off. Connect on.

Add Vertex Tool Split Polygon Tool Split Edges (with split control) Dice Polygons Slice Polygons

Parallel Edge Loop on.

Parallel Edge Loop and Connect both on.

120 Softimage

Drawing Edges

Drawing Edges
Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar to split or cut polygons interactively by drawing new edges. You can use this tool to freeform or redraw your objects flow lines. 1. Select a polygon mesh object. 2. Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar or press \ . 3. Start a new edge by clicking on an existing edge or point. You can also: - Press Ctrl while clicking or middle-clicking an edge to bisect it evenly. - Press Shift while clicking or middle-clicking an edge to ensure that the angle between the new edge and the target edge snaps to multiples of the Snap Increments - Rotate value set in your Transform preferences. For example, if Snap Increments - Rotate is 15, then the new edge will snap at 15 degrees, 30 degrees, 45 degrees, and so on. Angles are calculated in screen space. - Press Ctrl+Shift while clicking or middle-clicking an edge to attach the new edge at a right angle to the target edge. The angle is calculated in object space. - Press Alt while clicking in the middle of the polygon to add a point and connect it to the nearest edge by a triangle. If you are trying to attach a new edge to an existing edge or vertex, and the target does not become highlighted when you move the pointer over it, it means that you cannot attach the new edge at that location because it would create an invalid mesh.
You cannot attach the edge to this point. Middle-click to continue drawing edges from the previous point.

You can also press Alt while clicking to start in the middle of a polygon and automatically connect to the nearest edge by a triangle 4. If desired, click in the interior of a polygon to add a point. You can repeat this step to add as many interior points as you like, creating a polyline, before terminating it.
Click inside a polygon to add an interior point.

5. Terminate the new edge by clicking or middle-clicking on an existing edge or point.


Click to continue drawing edges from the last point.

6. To continue adding edges starting at a new location, right-click and then repeat steps 2 to 4. To exit the Add Edge tool, press Esc or choose a different tool.
Basics 121

Section 7 Polygon Mesh Modeling

Extruding Components
You can extrude polygon mesh components to create local details, such as indentations or protuberances like limbs and tentacles. You can extrude polygons, edges, or points. If you want to adjust other properties, open the Extrude Op property editor in the stack.

Extruding with Options


To display additional options when extruding, select one or more components and press Ctrl+Shift+d or choose Modify > Polygon Mesh > Extrude Along Axis. This lets you control whether adjacent components are extruded separately or together, as well as specify the subdivisions, inset, transformations, and other values.

Extruding Along a Curve Extruding Components


1. Select one or more components on a polygon mesh, and then press Ctrl+d or choose Edit > Duplicate/Instantiate > Duplicate Single. You can get more control over the shape of an extrusion by using a curve. Select one or more components, choose Modify > Polygon Mesh > Extrude Along Curve, and then pick the curve.

2. Use the transform tools or the Tweak Component tool to translate, rotate, and scale the extruded components as desired.

Duplicating Polygons
Duplicating is similar to extruding, but the polygons are not connected to the original geometry. This is useful for building repeating forms like steps or railings. Choose Modify > Polygon Mesh > Duplicate, or check Duplicate Polygons in the Extrude Op property editor.

122 Softimage

Removing Polygon Mesh Components

Removing Polygon Mesh Components


There are several different ways to remove polygon mesh components using different commands from the Modify > Poly. Mesh menu: Delete Components, Collapse Components, Dissolve Components, and Dissolve and Clean Adjacent Vertices. When components are selected, pressing Delete performs different actions: Points and edges are dissolved and adjacent vertices are cleaned. Polygons are deleted.
Dissolving selected polygons

Dissolving Components
Dissolving removes selected components and then fills in the holes with new polygons.

Deleting Polygon Mesh Components


Deleting removes selected components and anything attached to them, leaving empty holes.

Dissolving Components and Cleaning Vertices


Cleaning automatically collapses vertices that are shared by only two edges after dissolving, but were shared by more before.

Deleting selected point

Before

Selected polygons will be dissolved.

Collapsing Polygon Mesh Components


Collapsing removes selected components and reattaches the adjacent ones, creating no new holes.
After Dissolving and cleaning vertices Vertices shared by two edges after dissolving are collapsed. Vertices already shared by two edges are not collapsed. Vertices shared by three or more edges are not collapsed.

Collapsing selected edge

Basics 123

Section 7 Polygon Mesh Modeling

Combining Polygon Meshes


You can combine two or more polygon mesh objects into a single new one. Select all the meshes you want to combine, then choose Create > Poly. Mesh > Blend or Merge from the Model toolbar. The two commands differ in how they treat boundary edges on different objects when the boundaries are close to each other. With Blend, nearby boundaries on different objects are joined by new polygons. With Merge, nearby boundaries on different objects are merged into a single edge at the average position. There is a Tolerance parameter for determining the maximum distance in Softimage units between boundaries for them to be considered nearby.

Other Ways of Combining Meshes


You can also combine meshes using the Boolean commands on the Create > Poly. Mesh and Modify > Poly. Mesh menus.

Original objects

Far boundaries are not joined

Blended object Near boundaries are joined

Far boundaries are not merged

Merged object Near boundaries are merged

124 Softimage

Symmetrizing Polygons

Symmetrizing Polygons
You can model one half of a polygon mesh object and then symmetrize it. This creates new polygons that mirror the geometry on the original side. 1. Model the polygons on one side of the object. In the example below, an ornamental curlicue was added to the hilt of the dagger.
Model one side of the object.

3. Select the polygons to be symmetrized. You can symmetrize the whole object or just a portion.
Select the desired polygons.

4. Choose Modify > Poly. Mesh > Symmetrize Polygons from the Model toolbar. 2. Prepare the other side of the object for symmetrization. For example, if you intend to merge the symmetrized portions by welding or bridging, then you may need to create holes for the new polygons to fit and add vertices to aid the merge. 5. In the Symmetrize Polygon Op property editor, set the parameters as desired, for example, to specify the plane of symmetry.

The finished dagger.

Prepare the other side.

Basics 125

Section 7 Polygon Mesh Modeling

Cleaning Up Meshes
You can filter polygon mesh objects to clean them up. Filtering removes components that match certain criteria, for example, small components that represent insignificant detail.

When you filter polygons by area, the smallest polygons are removed. This eliminates small, noisy details.

Reducing Polygons
The Modify > Poly. Mesh > Polygon Reduction command on the Model toolbar lightens a heavy object by reducing the number of polygons, while still retaining a useful fidelity to the shape of the original highresolution version. For example, you can use polygon reduction to meet maximum polygon counts for game content, or to reduce file size and rendering times by simplifying background objects. Polygon reduction also allows you to generate several versions of an object at different levels of detail (LODs). Polygon reduction works by collapsing edges into points. Edges are chosen according to their energy, which is a metric based on their length, orientation, and other criteria. In addition, you have options to control the extent to which certain features, such as quad polygons, are preserved by the process.

Filtering Edges
Modify > Poly. Mesh > Filter Edges on the Model toolbar removes edges by collapsing them based on either their length or angle. In both cases, you can protect boundary edges using Keep Borders Edges Intact. Edge filtering is especially useful for reducing the triangulation on polygon meshes generated by Boolean operations.

Filtering Points
Modify > Poly. Mesh > Filter Points on the Model toolbar welds together vertices that are within a specified distance from each other. Among other things, this can be very useful for fixing disconnected polygons in exploded meshes which can occur when meshes are exported from some other programs. Average position welds each clump of points in the selection together at their average position. Selected point welds each clump of points in the selection together at the position of the point that is nearest to the average position. Unselected point welds each selected point to an unselected point on the same object.

Filtering Polygons
Modify > Poly. Mesh > Filter Polygons removes polygons based on their area or their dihedral angles: When you filter polygons by angle, adjacent polygons are merged together if their dihedral angle is less than the threshold you specify. Small angles correspond to flat areas, so this method preserves sharp detail.
126 Softimage

Polygon Normals

Polygon Normals
Shading normals are vectors that are perpendicular to the surface of polygons at each corner. They control how polygon meshes are shaded. If the normals are averaged across an edge or corner, the shading is smooth. If they are not averaged, the shading is faceted and the edge is considered hard. To display normals on selected objects, click on a views Show menu (eye icon) and choose Normals.

Controlling User Normals


Instead of relying on the automatically generated normals, you can specify custom normals to use for shading. These custom normals are called user normals, or explicit normals in some other programs including 3ds Max. User normals allow you to create things like a box with rounded corners using a minimum number of polygons.

In Softimage, polygon meshes can have auto normals or user normals: Auto normals are calculated automatically based on a meshs geometry. User normals are custom-defined.

On a cube with beveled edges, the interpolation of the automatic normals creates a gradation in the shading across the large, flat sides. To create the illusion of a box with rounded corners, you can set user normals so that their interpolation produces the correct shading.

There are two main ways to set user normals: Activate Modify > Component Tweak User Normals Tool on the Model toolbar, and then drag normals interactively in the viewports. Select points, polygons, and edges and then use the commands on the Modify > Component > Set User Normals submenu.

Controlling Auto Normals


The best way to control auto normals on a polygon mesh is to apply a Geometry Approximation property (from the Get > Property menu) if there isnt already one on the object, and turn off Discontinuity: Automatic. Then, manually mark any edges or vertices you want to be hard by selecting them and using Modify > Component > Mark Hard Edge/Vertex on the Model toolbar.

Basics 127

Section 7 Polygon Mesh Modeling

Subdivision Surfaces
Subdivision surfaces (sometimes called subdees) allow you to create smooth, high-resolution polygon meshes from lower-resolution ones. They provide the smoothness of NURBS surfaces with the local detail and texturing capabilities of polygon meshes.

Subdivision Rules
Softimage gives you a choice of several subdivision rules (smoothing algorithms): Catmull-Clark, XSI-DooSabin, and linear. In addition, you have the option of using Loop for triangles when using Catmull-Clark or linear. The subdivision rule is set in the Polygon Mesh property editor. Catmull-Clark The Catmull-Clark subdivision algorithm produces rounder shapes. The generated polygons are all quadrilateral. XSI-Doo-Sabin
Catmull-Clark Subdivision

Applying Geometry Approximation


You can turn a polygon mesh object into a subdivision surface by pressing + and on the numeric keypad. This applies a local Geometry Approximation property if there isnt already one, and sets the subdivision level for render and display. The higher the subdivision level, the smoother the object. The original geometry forms a hull that is used to control the shape of the smoothed, proxy geometry. You can toggle the display of the hull and the subdivision surface on the Show menu (eye icon).
Polymesh hull Subdivision surface

The XSI-Doo-Sabin subdivision algorithm is a variation of the standard Doo-Sabin algorithm. It produces more geometry than Doo-Sabin, but it works better with cluster properties such as texture UVs, vertex colors, and weight maps, as well as with creases.
XSI-Doo-Sabin Subdivision

128 Softimage

Subdivision Surfaces

Linear Subdivision Linear subdivision does not perform any smoothing, so the objects shape is unchanged. It is useful when you want an object to deform smoothly without rounding its contours.
Linear Subdivision

Creases
Subdivision surfaces typically produce a smooth result because the original vertex positions are averaged during the subdivision process. However, you can still create sharp spikes and creases in subdivision surfaces. This is done by adjusting the hardness value of points or edges on the hull. The harder a component, the more strongly it pulls on the resulting subdivision surface. Use Modify > Component > Mark Hard Edge/Vertex to make components completely hard, or Set Edge/Vertex Crease Value to apply an adjustable value.

Loop Subdivision With the Catmull-Clark and linear subdivision methods, you have the option of using Loop subdivision for triangles. The Loop method subdivides triangles into smaller triangles rather instead of into quads, which gives better results when smoothing and shading.
Catmull-Clark with Loop Catmull-Clark

Other Methods of Subdividing


You can create a new object that is a smoother, denser version of an existing one using Create > Poly. Mesh > Subdivision from the Model toolbar. You can create a new object that is a smoother, denser version based on the Geometry Approximation settings of an existing object using Edit > Duplicate/Instantiate > Duplicate Using Geometry Approx.

Basics 129

Section 7 Polygon Mesh Modeling

130 Softimage

Section 8

NURBS Surface Modeling


NURBS surfaces are one of the basic types of renderable geometry in Softimage. They are rectangular patches that allow for very smooth shapes with relatively few control points. Surfaces can model precise shapes using less geometry than polygon meshes and theyre ideal for smooth, manufactured objects like car and aeroplane bodies.

What youll find in this section ...


About Surfaces Building Surfaces Modifying Surfaces Projecting and Trimming with Curves Surface Meshes

Basics 131

Section 8 NURBS Surface Modeling

About Surfaces
In Softimage, surfaces are NURBS patches. Mathematically, they are an interconnected patchwork of smaller surfaces defined by intersecting NURBS curves. Knot curves (sometimes called isoparams or isoparms) are sets of connected knots along U or Vthey are the wires shown in wireframe views. You can select knot curves and use them, for example, to build other surfaces using the Loft operator.

Components of Surfaces
You can display surface components and attributes in the 3D views, as well as select them for various tasks. Points are the control points of the curves that define the surface. Their positions define the shape of the surface.

Knots lie on the surface. Knot curves connect knots.

Points define and control the surface. You can display lines between points.

Isolines are not true components. They are, in fact, arbitrary lines of constant U or V on a surface. You can use the U and V Isoline selection filter to help you pick isolines for lofting and other operations.

NURBS hulls are display lines that join consecutive control points. It can be useful to display them when working with curves and surfaces. Surface knots are the knots of the curves that define the surface; they lie on the surface where the U and V curve segments meet.
Isolines are arbitrary lines on the surface in U or V.

132 Softimage

Building Surfaces

Building Surfaces
The commands on the Create > Surf. Mesh menu can be used to build NURBS surfaces in a variety of ways. The first set of commands generate surfaces from curvessee Objects from Curves on page 90 for an overview of the basic procedure. Here are a few examples of some of the other ways you can build surfaces.

Merging Surfaces
Merging two surfaces creates a third surface that spans the originals. You have the option of also selecting an intermediary curve for the merged surface to pass through.

Blending Surfaces
Blending creates a new surface that fills the gap between the selected boundaries on two other surfaces.

Input surfaces

Single merged surface

Filleting Intersections
A fillet is a surface that smooths the intersection of two others, like a molding between a wall and a ceiling.

Input surfaces

Resulting blend

Input surfaces

Resulting fillet

Shaded view

Basics 133

Section 8 NURBS Surface Modeling

Modifying Surfaces
You can modify surfaces in a variety of ways using the commands in the Modify > Surface menu of the Model toolbar, for instance, by adding and removing knot curves. Here are a few examples of some other ways of modifying surfaces.

Opening and Closing Surfaces


You can open a closed surface and close an open surface. A surface can be open in both U and V like a grid, closed in both like a torus, or open in one and closed in the other like a tube.

Inverting Normals
If the normals of a surface are pointing in the wrong direction, you can invert them.

Open

Closed

Inverting a surface

Extending Surfaces
You can extend a surface from the selected boundary to a curve.

134 Softimage

Projecting and Trimming with Curves

Projecting and Trimming with Curves


You can project curves onto surfaces and then use the result to remove a portion of the surface, or for any other modeling purpose. This is useful for modeling manufactured objects like car parts with holes or for creating smooth surfaces that arent four-sided like a standard NURBS patch. Trim Curves If you use the curve to remove part of the surface, it is called a trim curve.

What Are Surface and Trim Curves?


Both surface and trim curves involve projecting a curve object onto a NURBS surface. The difference is whether the result is used to remove a portion of the surface or not. Surface Curves If the curve object is just projected and nothing more, the result is called a surface curve. It is a new component of the surface. This surface curve can be used like any other curve component of the surface (isoline, knot curve, and so on) for modeling operations like Loft, Extend to Curve, and others.
Curve object Trim curve

Trimming affects the visible portion of the surface. All the underlying points are still there and you can still affect the surfaces shape by moving points in the trimmed area.

Projecting or Trimming by Curves


Select a NURBS surface object, choose Modify > Surface > Trim by Projection from the Model toolbar, and then pick a curve object. The curves are projected onto the surface and, by default, the surface is trimmed using all projected curves. In the Trim Surface by Space Curve property editor, do any of the following: To trim the surface using only some of the projected curves, click Pick Trims and then pick the desired surface curves. Right-click when you have finished picking. To trim the surface using all the projected curves, click Trim with All. To project the curve onto the surface, click Project All.

NURBS surface

Surface curve

Basics 135

Section 8 NURBS Surface Modeling

Use Is Boundary to choose whether to trim the inside or the outside. Use Projection Precision to control the precision used to calculate the projection. If the shape of the projected curve is not accurate, increase this value. However, high values take longer to calculate and may slow down your computer. For best performance, set this parameter to the lowest value that gives good results.

Surface Meshes
Surface meshes provide a way to assemble multiple surfaces into a single object that remains seamless under animation and deformation. 1. Create a collection of separate surfaces. These will become the surface meshs subsurfaces.
Line the surfaces up into a basic configuration. This illustration shows a common configuration for a leg or arm.

Deleting Trims
Deleting a trim allows you to remove a trim operation even after you have frozen the surfaces operator stack. Set the selection filter to Trim Curve, select one or more trim curves on the surface, and choose Modify > Surface > Delete Trim from the Model toolbar.

2. Optionally, line up pairs of boundaries by selecting them and choosing Create > Surf Mesh > Snap Boundary from the Model toolbar.

Snap opposite boundaries together to connect the surfaces across the junction.

136 Softimage

Surface Meshes

3. Select all the surfaces and choose Create > Surf Mesh > Assemble. The surfaces are assembled into a single surface mesh. The continuity manager ensures that the continuity is preserved at the seams.

Excluding Points from Continuity Managements


All assembled surface meshes have a special cluster called NonFixingPointsCluster. If a point on a subsurface boundary is in this cluster, its continuity is not managed by SCM when Dont Fix the Tagged Points is on. The other points on the same junction are not affected. This lets you create holes in the surface mesh for mouths, eyes, and so on.

Notice how the assembled surface mesh blends smoothly across the junctions.

4. You can now deform and animate the surface mesh as desired.

If you ever freeze the assembled surface, you will need to reapply the surface continuity manager manually using Create > Surf Mesh > Continuity Manager.

Basics 137

Section 8 NURBS Surface Modeling

138 Softimage

Section 9

Animation
To animate means to make things come alive, and life is always signified by change: growth, movement, dynamism. In Softimage, everything can be animated, and animation is the process of changing things over time. For example, you can make a cat leap on a chair, a camera pan across a scene, a chameleon change color, or a face change shape.

What youll find in this section ...


Animating with Keys Animating Transformations Playing the Animation Editing Keys and Function Curves Layering Animation Constraints Path Animation Linking Parameters Expressions Copying Animation Scaling and Offsetting Animation Plotting (Baking) Animation Removing Animation

Basics 139

Section 9 Animation

Bringing It to Life
The animation tools in Softimage let you create animation quickly so that you can spend your time editing movements, changing the timing, and trying out different techniques for perfecting the job. Softimage gives you the control and quick feedback you need to produce great animation. Basically, if you want to make something move, Softimage has the tools.

What Can You Animate in Softimage?


You can animate every scene element and most of their parametersin effect, if a parameter exists on a property page, it can probably be animated. Motion: Probably the most common form of animation, this involves transforming an object by either moving (translating), rotating, or scaling (resizing) it. Special character tools let you easily animate humans, animals, and all manner of fantastical creatures. You can also use dynamic simulations to create movement according to the physical forces of nature. Geometry: You can animate an objects geometry by changing values such as U and V subdivision, radius, length, or scale. You can also use numerous deformation tools and skeletons to bend, twist, and contort your object. Appearance: Material, textures, visibility, lighting, and transparency are just some of the parameters controlling appearance that can be changed over time.

The Highs and Lows of Animation


One of the most important features of Softimage is its high and lowlevel approach to animation: Low-level animation means getting down to the parameters of an object and animating their values. Keyframing is the most common method of direct animation, but you can also use path animation, constraints, linked parameters, expressions, and scripted operators for creating animation control relationships.
Motion, geometry deformations, and appearances can all be animated in Softimage.

High-level animation means that you are working with animation in a way that is nonlinear (the animation is independent of the timeline) and non-destructive (any modifications do not destroy your original animation data).

140 Softimage

Bringing It to Life

You store animation or shapes in sources, then use the animation mixer to edit, mix, and reuse those sources as clips. To use these levels together, you can animate at a low level by keyframing a specific parameter, then store that animation and others into action sources and mix them together in the animation mixer to animate at a high level. This allows you to easily manage complex animation yet retain the ability to work at the most granular level.

So Many Choices ...


Softimage provides you with many choices of tools and techniques for animating: explore and decide which tool lets you animate in the most effective way. In most projects you have, you will probably use a combination of a number of these tools together to get the best results. The most basic method of animation is keying. You set parameter values at specific frames, and then set keys for these values. The values for the frames between the keys are calculated by interpolation.

Create animation relationships between objects at the lowest (parameter) level. These include constraints, path animation, linked parameters, expressions, and scripted operators.
Keyframed (low-level) animation can be contained in action sources, then brought into the animation mixer as a clip (high level).

Basics 141

Section 9 Animation

Character animation tools offer you control for creating and animating skeletons. You can animate them with forward or inverse kinematics, apply mocap data, add an enveloping model, set up a rig, and fine-tune the skeletons movements in a myriad of ways to get just the right motion.

Dynamic simulations let you create realistic motion with natural forces acting on rigid bodies, soft bodies, cloth, hair, and particles (done with ICE). With simulations, you can create animation that could be difficult or time-consuming to achieve with other animation techniques.

Animation and Models


The animation mixer is a powerful editing tool that is nonlinear and non-destructive. Any type of animation that you generate can be stored and reused later, on the same model or a different one. You can also mix different types of animation together and weight them against each other. Shape animation lets you can change the geometry of an object over time. To do this, you deform the object into different shapes using any type of deformation tool, then store shape keys for each pose that you want to animate. Models in Softimage are data containers (like mini scenes) that make it easy to organize elements that need to be kept together, such as all the parts that make up a character. The main reason for using models for animation is that they provide the easiest way to import and export animated objects between scenes, and to copy animation between objects. Models also make it easy to use the animation mixer. Each model can have only one Mixer node that contains mixer and animation data. This means that if you have many objects in a scene that use the mixer and each is within a model, you can copy animation from one object to another.

142 Softimage

Playing the Animation

Playing the Animation


The first thing you need to do before starting an animation is to set up your frame rate and format to match the medium in which you will be saving the final animation. In animation, the smallest unit of time is the amount required to display a single frame. The speed at which frames are displayed, or the frame rate, is always determined by how the final animation will be viewed. If you are compositing your animation with other film or video footage, its usually best for the animation to be at the same frame rate as the footage. When you change the timing of the animation, you change the way that the actions look. This means that the timing that looked correct while you were previewing it in Softimage may not look as good on video or film. For example, an action that spans 24 frames would take one second on film; changing the frame rate to suit North American video at 30 fps would cause the same 24 frames to span 0.8 seconds.
Setting up the timing for your animation is the first thing you should do before you start. You can set the frame rate and frame format in the Output Format preferences. These settings affect many areas of Softimage, including the timeline and playback controls.

Selecting a Viewport for Playback


To optimize playback speed, you can specify a single viewport for playback (viewport B by default). When the playback is over, the other viewports are updated to the current frame. If you scrub in the timeline, however, all viewports are updated at each frame. To select a viewport for this, choose All Views, Active View, or a specific viewport (A, B, C, or D) from the Playback > Playback View menu. You can also set it as a preference in the Interaction Preferences.

You can set up the default frame format and frame rate preferences for your scene using the options in the Output Format preferences property editor (choose File > Preferences). These settings propagate to many other parts of Softimage that depend on timing. Regardless of whether you enter time code or a frame number as the frame format, Softimage internally converts your entry into time code.

Basics 143

Section 9 Animation

Using the Timeline and the Playback Controls


A big part of the animation process is the constant tweaking and replaying of the animation to see that you get things right. There are different ways of playing back animation in the viewports, but the most common way is by dragging the playback cursor in the timeline and using the playback controls below the timeline. Before you start playing back the animation, you should set up the time range, the time display format, and the timelines start and end frames. These define the range of frames in which you can play in the scene.
Timeline

Playback menu displays many playback options, such as for setting preferences, opening the flipbook, setting real-time play rates, and setting the current viewport. Increment Backward/Forward moves the currently displayed frame backward/forward by predefined increments (default is 1). Start/First Frame displays (resets) the first frame at the beginning of the timeline. End/Last Frame displays the last frame at the end of the timeline. Play Backward plays/stops the animation or simulation in the backward direction (to the left on timeline). Click this icon to play from the last frame on the timeline; click it again to stop playback; middle-click to play from the current frame. Note that you can only play simulations backwards if you have cached them. Play Forward plays/stops the animation or simulation in the forward direction (to the right on timeline). Click this icon to play from the first frame on the timeline; click it again to stop playback; middle-click it to play from the current frame. Loop repeats the animation or simulation in a continuous loop. Audio toggles sound on/off during playback. It is on by default. When the audio is off (muted), the icon appears highlighted. All/RT toggles between playing back frame by frame (All) or in real time (RT).

Time range

The time range determines the global range of frames, and the range slider in it lets you play back a smaller range of frames within the global range. If you are working with an animation sequence that is very long, you can focus on just a subsection of frames which you can easily change and move along the timeline. You can set the global length by entering frame numbers in the boxes at either end of the time range. The timeline displays which frames can be played, which is linked to the range slider. The current frame of the animation is indicated by the playback cursor (the vertical red bar), which you can drag to different frames. You can set the scenes length by entering frame numbers in the boxes at either end of the timeline. The controls in the Playback panel below the timeline allow you to view and play animations, simulations, and audio in different ways.

144 Softimage

Previewing Animation

Previewing Animation
You can capture and cache images from an animation sequence and play them back in a flipbook to help you see the animation in real time. Anything that is shown in the viewport you choose is captured render region, rotoscoped scene with background, or any display mode (wireframe, textured, shaded, etc.). For example, you may want to set the display mode to Hidden Line Removal for a pencil test effect. You can include audio files to play back with the flipbook, especially useful for lip synching. You can also export flipbooks in a variety of standard formats, such as AVI and QuickTime. Creating a Flipbook 1. In the viewport whose images you want to capture, set the display options as you like. Then click the camera icon in that viewport and choose Start Capture. 2. In the Capture Viewport dialog box, set the options for the flipbooks file name, image size, format, sequence, padding, and frame rate. 3. View the flipbook in the Softimage flipbook or in the native media player on your computer. You can open the Softimage flipbook by choosing Flipbook from the Playback menu. Ghosting Animation ghosting, also known as onion-skinning, lets you display a series of snapshots of animated objects at frames or keyframes behind and/or ahead of the current frame. This lets you visualize an objects motion, helping you improve its timing and flow. You can display an objects geometry, points, centers, trails, and velocity vectors as ghosts. Ghosting works for any object that moves in 3D space, either by having its transformation parameters (scaling, rotation, and translation) animated in any way, or by having its geometry changed by shape animation or deformations (including envelopes), or with simulated rigid bodies, soft bodies, or cloth. Ghosting is set per object by selecting the Ghosting option in the objects Visibility property editor. Once this is done, you can set ghosting per scene layer or per group, in their respective property editors. To see ghosting in a 3D view, such as a viewport, choose the Animation Ghosting command in the Display Mode menu of a 3D view, then set up the ghost display options in the Camera Display property editor.

Basics 145

Section 9 Animation

Animating with Keys


Keyframing (or keying) is the process of animating values over time. In traditional animation, an animator draws the extreme (or critical) poses at the appropriate frames (key frames), thus creating snapshots of movement at specific moments. As in traditional animation, a keyframe in Softimage is also a snapshot of one or more values at a given frame, but unlike traditional animation, Softimage handles the in-betweening for you, computing the intermediate values between keyframes by interpolation.

When you set keys on a parameters value, a function curve (or fcurve) is created. An fcurve is a graph that represents the changes of a parameters values over time, as well as how the interpolation between the keys occurs. When you edit an fcurve, you change the animation.

Methods of Keying
There are a number of ways in which you can set keys in Softimage depending on what type of workflow youre used to and the tools you want or need to use for your production. Any way you choose, each method results in keyframes being created. There are three main keying workflows from which to choose: Keyable parameters on the keying panel. Character key sets Marked parameters (and marking sets)

Always Set the Keying Preference First!


Keys set at frames 1, 50, and 100. Intermediate frames are interpolated automatically.

Before you start setting keys, you need to set a preference that determines the way in which you key: with keyable parameters, with character key sets, or with marked parameters. This preference determines which parameters are keyed when you save a key by pressing K, by clicking the keyframe icon in the Animation panel, or by choosing the Save Key command from the Animation menu. To set the preference, click the Save Key preference button in the Animation panel, then select an option from the menu.

You can set keys for just about anything in Softimage that has a value: this includes an objects transformation, geometry, colors, textures, lighting, and visibility. You can set keys for any animatable parameter in any order and at any time. When you add a new key, Softimage recalculates the interpolation between the previous and next keys. If you set a key for a parameter at a frame that already has a key set for that parameter, the new key overwrites the old one.

146 Softimage

Animating with Keys

Keying Parameters in the Keying Panel


Using the keying panel (click the KP/L tab on the main command panel), you can quickly and easily change values and set keys for specific parameters of a selected object. The parameters that are displayed in the keying panel are called keyable parameters. If youre using the Maya interaction model, Softimage is automatically set up to work in this manner. Once you have set up the objects keying panel with the keyable parameters you want, you simply select that object and press K or click the keyframe icon to set a key on whatever is in its keying panel. Overview of Using the Keying Panel 1 2 3 4 5 6 Set the Save Key preference to Key All Keyable. Select an object and open the keying panel (click the KP/L tab). If you need to add other keyable parameters to the keying panel, select them in the keyable parameters editor. Go to a frame where you want to set a key. Change the values for the selected objects keyable parameters. Set a key for the keyable parameters.
6 5 4 3

Basics 147

Section 9 Animation

Keying with Character Key Sets


Character key sets are sets of keyable parameters that you create for an object or hierarchy for quick and easy keying. Once you have created key sets, you dont need to select an object first to key its parameters just press K or click the keyframe icon and whatever is in the current character key set is keyed. If youre transferring from another 3D software, you may prefer this method of working. Character key sets let you keep the same set of parameters available for any object or hierarchy for easy keying, such as only the rotation parameters for the upper body control in a rig. Overview of Using Character Key Sets 1 2 3 4 5 6 Create a character key set that includes the parameters you want to key on an object. Set the current character key set. If you just created a character key set, it is set as the current one. Set the Save Key preference to Key Character Key Set. Go to a frame where you want to set a key. Change the values for the parameters in the set. Set a key for the parameters in the current character key set.
6 3 4 2 1

148 Softimage

Animating with Keys

Keying Marked Parameters


Marking parameters is a way of identifying which parameters you want to use for a specific animation task, such as keying. By keying only the marked parameters, you can keep the animation information small and specific to the selected object. Overview of Marking Parameters 1 2
1 2

Set the Save Key preference to Key Marked Parameters. Select the object you want to animate and go to the frame at which you want to set a key. Mark the parameters you want to key. You can mark parameters by clicking them in the marked parameter list (in the lower-right of the interface), a property editor, the explorer, or the keying panel. Marked parameters are highlighted in yellow.

Transformation parameters are automatically marked when you activate a transformation tool.
4

4 5

Set the marked parameter values for the selected object. Set a key for the marked parameters at this frame.

Keying with Marking Sets You can also create marking sets, which are similar to character key sets. You can have only one marking set per object at a time. Marking sets make it easy to key in hierarchies because each object within that structure can have its own marking set, such as a marking set of rotation parameters for bones, or a marking set of translation parameters for IK effectors. To create a marking set, select an object and mark the parameters you want to keep in the set. Then press Ctrl+Shift+M. To key marking sets, select one or more objects with a marking set. Then press Ctrl+M to activate the marking set, then set a key by pressing K. Press Alt+K to set a branch key, which is useful for working with characters and other hierarchies.
Basics 149

Section 9 Animation

Setting Keys on Individual Parameters


In addition to the three main keying workflows, you can also set keys directly on individual parameters in these different ways. These methods dont need to consider the keying preference that you have selected.
A C D A Click the keyframe icon to set keys on, or remove keys from, all or only marked parameters on the property page. Click the animation icon to set keys on, or remove keys from, only that parameter. You can also right-click it and choose Set Key or Remove Key from the menu. In an explorer, right-click a parameters animation icon and choose Set Key or Remove Key from the menu. Click the autokey button to automatically set a key each time you change a parameters values. Choose Animation > Set Keys at Multiple Frames to set keys for the parameters current values at the multiple frames that you enter. This is handy for setting up basic keyframes for pose-to-pose type animation.

D C

150 Softimage

Animating Transformations

Animating Transformations
Animating the transformations (scaling, rotation, and translation) of objects is something that you will be doing frequently. It is one of the most fundamental things to animate in Softimage. You can find transformation parameters in the objects Kinematics node in the explorer. Kinematics in this case refers to movement, not to inverse or forward kinematics as is used in skeleton animation.

Animating Local or Global Transformations


You can animate objects either in terms of their parents (local animation) or in terms of the scenes world origin (global animation). Its usually better to animate the local transformations because you usually animate relative to the objects parent instead of animating relative to the world origin. Animating locally lets you branch-select an objects parent and move it while all objects in the hierarchy keep their relative positions to the parent. If you animate both the local and the global transformations, the global animation takes precedence.

Manipulation Modes versus Transformation Values


When you transform an object interactively in a 3D view, you use one of several modes that determine which coordinate system to use for manipulation. The manipulation mode affects the interaction only, the resulting values of which you see in the Transform panel. This is important to know, particularly for understanding the Local manipulation mode: the values shown in the Transform panel while using a transformation tool may not be the same as the local transform values that are stored for the object: that is, the values that you animate. So, how do you manipulate an object so that the values on the Transform panel are the same as the stored values for local animation? You rotate in Add mode or
Basics 151

Within the Kinematics node are the Global Transform and Local Transform nodes, referring to the type of transformation. Within each of the Transform nodes, there are the Pos (position, also called translation), Ori (orientation, also called rotation), and Scl (scale) folders. Each of the Pos, Ori, and Scl folders contain the X, Y, and Z parameters corresponding to each axis. Manipulation modes for current transformation (in this case, translation).

Section 9 Animation

translate in Par mode. These are the only two manipulation modes that transform in the same way as local animation: they are both relative to the objects parent. Of course, you can always set and animate the values as you like directly in the objects Local Transform or Global Transform property editor.

Remembering Transformation Tools for an Object


When youre manipulating or animating an object, you often use the same transformation tool for it, such as always using the Rotate tool for bones in a skeleton. You can create a transform setup property (choose Get > Property > Transform Setup) for an object so that the same transformation tool is automatically activated when you select that object. This is very useful for working quickly with control objects in a character rigfor example, when you select the heads effector, the Translate tool is automatically activated.

Marking Transformation Parameters


When you activate any of the transformation tools, all three of their corresponding local transformation parameters (X, Y, Z) are automatically marked. For example, when you rotate in Local mode, all three rotation axes are marked automatically, even if only one rotation axis is selected.

To have only specific axes X, Y, or Z marked, you can rotate in Add mode or translate in Par mode. Or you can choose Transform > Automark Active Transform Axes: then when you click a transformations specific axis button (such as the Rotations Y button) on the Transform panel, only that axis is marked, regardless of the current manipulation mode.

152 Softimage

Animating Transformations

Animating Transformations in Hierarchies


Transformations are propagated down through hierarchies so that each objects local position is stored relative to its parent. Objects in hierarchies behave differently when they are transformed, depending on whether the objects are node-selected (left-click) or branch-selected (middle-click). By default: When you branch-select a parent object and animate its transformation, the animation is propagated to its children. When you node-select a parent and animate its transformation, its children are not transformed unless their respective local transformations are animated. For example, suppose the childs local translation is animated but its rotation isnt: if you translate the parent, the child follows; however if you rotate the parent, the child stays put. This is because animation on the local transformations is stored relative to the parents center. You can make unanimated children follow the parent with the Child Transform Compensation command (or ChldComp button) on the Constrain panel. When you animate a child object, its animation is always done relative to its parent (local animation). When you animate anything in global, its always done in relation to the world origin: it does not matter if your objects are in a hierarchy or not. Nothing is inherited if you have global transformation keys because they override any parent-to-child inheritance. Skeleton chains are an exception to these hierarchy animation rules because the end location of one chain element always determines the start location of the next one in the chain.

Animating Rotations
When you animate rotations in Softimage, you normally use three separate function curves that are connected to the X, Y, and Z rotation parameters. These three rotation parameters are called Euler angles. Euler interpolation works well when the axis of interpolation coincides with one of the XYZ rotation axes, but is not as good at interpolating arbitrary orientations. Euler angles can also suffer from gimbal lock, which is the phenomenon of two rotational axes aligning with each other so that they both point in the same direction. To solve this, you can change the order in which the rotation axes are evaluated (by default, its XYZ), which changes where the gimbal lock occurs. As well, you can convert Euler fcurves to quaternion. Quaternion interpolation provide smooth interpolation with any sequence of rotations. The XYZ angles are treated as a single unit to determine an objects orientation, so they are not restricted to a particular order of rotation axes. Quaternions interpolate the shortest path between two rotations. You can create quaternion fcurves by setting quaternion keys directly, or by converting Euler fcurves to quaternion using the Animation > Convert commands in the Animation panel. And you can always convert back to Euler fcurves in the same way.
Cone is rotated on 90 degrees in X and Y.

Euler interpolation of the rotation values.

Quaternion interpolation of the rotation values.

Basics 153

Section 9 Animation

Editing Keys and Function Curves


After you have set keys to animate a parameters value, you can edit the keys and the function curve (or fcurve) to edit the animation. An fcurve is a graph that represents the changes of a parameters values over time, as well as how the interpolation between the keys occurs. Softimage has several tools that help you edit keys and function curves: Editing keys in the timeline is the easiest and most direct method for working with keys. The dopesheet lets you work with keys as well, but with tools and in a larger viewgreat for working at the scene level when youre offsetting and scaling animation. The fcurve editor is the most sophisticated view that gives you the best tools for making the fcurves exactly as you want them.
C A

Editing Keys in the Timeline


You can view and edit keys in the timeline similar to how you do in the dopesheet. The advantage of doing this in the timeline, of course, is that you dont need to open up a separate window for the dopesheet: the keys are right there. This lets you keep the object that youre animating in full view at all times. Once you have selected an animated object, you can easily move its keys, cut or copy and paste its keys, and scale a region of keys, all within the timeline. This is especially useful for blocking out rough animations before you do more detailed editing. You can also select single keys and move, cut, copy, and paste them.
A B Keys are displayed as red lines in the timeline. Right-click in the timeline to open a menu of options for displaying and editing the keys. Press Shift+drag to draw a region, then drag it to a new area on the timeline. Press Ctrl while dragging to copy the keys, or choose Copy and Paste from the right-click menu. Press Shift+click to select a single key, then you can move it, or cut/ copy, and paste it. You can scale a region by dragging either of its ends in the appropriate direction.

D E

154 Softimage

Editing Keys and Function Curves

Editing Keys in the Dopesheet


The dopesheet provides you with a way of viewing and editing key animation. Similar to a cel animators dopesheet, it shows your entire animated sequence, frame by frame. Because you can see your whole animation in the dopesheet, it makes an ideal tool for editing overall motion and timing. For example, if you wanted to change a 100-frame sequence to 200 frames, you would simply stretch (scale) the animation segment on the track to be 200 frames long.
A B C

You can modify your animation sequences by editing regions of keys on the tracks with standard operations such as moving, scaling, copying, cutting, and pasting. You can delete them, shift them left and right, scale themall with or without a ripple. Summary tracks help you see the animation for the whole scene or just the selected objects. To open a dopesheet, you can open the animation editor (press 0 [zero]), then choose Editor > Dopesheet from its command bar. Or choose it in a viewport, like any other view.

G D E

F A B C D The Explorer, Lock, and Update buttons apply only to the animation explorer (4). Timeline. Click and drag the red playback cursor in it to scrub through the animation. Summary tracks display keys for all objects in the scene or all objects currently displayed in the dopesheet. Animation explorer displays the parameters of objects that you select. E Regions (press Q) let you edit multiple keys, including moving them, scaling them, copying and pasting them, and deactivating animation. The keys represent the keyframes of the selected parameters animation. Each colored block is one frame long. You can edit (move, copy, paste) individual keys on tracks. The tracks display and let you manipulate the animation keys. You can expand and collapse tracks to view exactly what you want.

Basics 155

Section 9 Animation

Editing Function Curves


When you set keyframes to animate a parameter, a function curve, or fcurve, is created. An fcurve is a representation of the animated parameters values over time. You can edit fcurves in the fcurve editor, which lives in the animation editor and is its default editor. You can also display the dopesheet, expression editor, and scripted operator editor in the animation editor. The fcurve editor is an ideal tool to help you control the animations speed and interpolation, as well as easily adding and deleting keys. Press the 0 (zero) key to open the animation editor in a floating window, or you can open it in any viewport. If you open it with an object already selected, its fcurves automatically appear in the fcurve editor. The graph in the fcurve editor is where you manipulate the fcurve: time is shown along the graphs X axis (horizontal), while the parameters value is plotted along the graphs Y axis (vertical). The shape of the fcurve shows how the parameters value changes over time. On the fcurve, keyframes are represented by key points (also referred to as keys) and the interpolation between them is represented by segments of the curve linking the key points. You can change the interpolation for a segment or for the whole fcurve. The slope of the curve between keys determines the rate of change in the animation, while the handles at each key let you define the fcurves slope in the same way that control points define Bzier curves.

E B C F G

156 Softimage

Editing Keys and Function Curves

A B C D

Command bar contains menu commands and icons to edit fcurves in many different ways. Animation explorer displays the parameters of objects that you select. Values for the parameter are shown on the graphs Y (vertical) axis. Timeline. Time is shown on the graphs X (horizontal) axis. Click and drag the red playback cursor in it to scrub through the animation. Selected fcurves are white. When not selected, the curves for X, Y, and Z parameters are red, green, and blue, respectively. You can also change the color of any fcurve you like. The keys on the fcurves represent the keyframes of the selected parameters animation. You must select an fcurve before you can select its keys. Selected keys are red with slope handles. Unselected keys match the color of their fcurve. The slope handles (tangents) at each key indicate the rate at which an fcurves value changes at that key. These handles only appear on keys on fcurves that have spline interpolation.

Editing a Function Curves Slope The fcurves slope determines the rate of change in the animation. By modifying the slope, you change the acceleration or deceleration in or out from a key, making the animation change rapidly or slowly, or even reversing it. You can change the slope of any fcurve that uses spline interpolation by using the two handles (called slope handles) that extend out from a key. By modifying the handles length and direction, you can define the way the curve moves into and out from each key. You can change the length and angle of each handle in unison or individually. The slope handles are tangent to the curve at their key when Unified Slope Orientation is on. (A) This keeps the acceleration and deceleration smooth, but you can also turn off this option to break the slope at a certain point. (B) This creates a sudden animation acceleration or deceleration, or change of direction altogether.

A Types of interpolation: By default, fcurves use spline interpolation to calculate intermediate values. The curves ease into and ease out of each key, resulting in a smooth transition. H Linear interpolation connects keys by straight line segments. This creates a constant speed with sudden changes at each key. Constant interpolation repeats the value of a key until the next one. The creates sudden changes at keys and static positions between keys, such as for animating a cut from one camera to another. B

Basics 157

Section 9 Animation

Ways of Editing Function Curves and Keys When you select one or more fcurves, any modifications you perform are done only to them. You can select keys on the selected fcurves to edit only them, including regions of keys on fcurves.
A B C Move fcurves and keys in X (horizontally) to change the time or in Y (vertically) to change the values. Add or delete keys on an fcurve. Create regions (press Q) of keys for editing. A Drag the region up or down to move the keys, or drag the regions handles to scale. Copy and paste an fcurve and keys. You can also set paste options to control how keys are pastedwhether they replace the selection or are added to it. Scale fcurves or regions of keys. When you shorten the length, you speed up the animation; increasing the length slows it down. Scaling vertically changes the values. Cycle the fcurves for repetitive motions. You can create basic cycles, or you can have relative cycles that are progressively offset, such as when creating a walk cycle.

D C B E

158 Softimage

Layering Animation

Layering Animation
Animation layering allows you to have one or more levels of animation on top of an objects parameters base animation at the same time. You usually want to layer animation when you need to add an offset to the base animation on an object, but you dont want to change the original animation, such as with mocap data. You can only add keys in the layers, and the existing base animation must be either action clips or fcurves. Animation layers are non-destructive, meaning that they dont alter your base animation in any way: the keys in the layers always remain as a separate entity. Layering allows you to experiment with different effects on your animations and build several variations, each in its own layer. For example, lets say that youve imported a mocap action clip of a character running down the flight of stairs. However, in your current scene, the stairs are shallower than those used for the mocap session, so the character steps through the stairs instead of on them. To fix this problem, you create an animation layer, offset the contact points for the characters feet so that they step on the stair, then set keys. The result is an offset animation that sits on top of the mocap data: you dont need to touch the original mocap clip at all. You can then easily edit the fcurves for the animation layer, tweaking it as you like. Animation layers are actually controlled and managed in the animation mixer, but you dont need to access the mixer for creating and setting keys in layers. You can use the Animation Layers panel (click the KP/L tab on the main command panel) to do this. However, you may want to use the animation mixer for added control over each layer, such as for setting each layers weight.
6 1 2

4 5

Basics 159

Section 9 Animation

Overview of Layering Animation There are different ways in which you can work with animation layers in Softimage, but heres a simple overview just to get you started. 1 2 Make sure the objects are in a model structure. Animate the objects. This animation is the base layer. You cannot create animation layers without first having a base layer. Create an animation layer in the Animation Layer panel. Select the animated objects, change their values, and set keys for them in the layer you created. Edit the layers fcurves. Collapse the layer to combine its animation with the base layer of animation.

Constraints
Constraining is a way of increasing the speed and efficiency in which you animate. It lets you animate one object via another ones animation. You can constrain different properties, such as position or direction, of one object to that of an animated object. Then when the animated object moves, the constrained object follows in the same way.
Radar dish constrained by direction to the plane The X axis of the radar dish continually points in the direction of the planes center.

3 4 5 6

There are a number of types of constraints in Softimage: Constraining transformations: in position, orientation, direction, scaling, pose (all transformation), and symmetry. Constraining in space: by distance, or between 2, 3, or any number of points. Constraining to objects: to clusters, surfaces and curves, bounding volumes, and bounding planes. For many of the constraints, you can add a tangency or up-vector directions to the mix. The tangency and up-vector constraints are properties of several constraint types that determine the direction in which the constrained object should point. For example, if you apply a Direction constraint to an object, you can also add an up-vector (Y axis) to control the roll of the direction-constrained object.

160 Softimage

Constraints

Overview of Constraining Objects 1 2 3 4 Select the object to be constrained. Choose the constraint command from the Constrain menu. Pick the constraining (control) object. The constraint is created between the objects. Adjust the constraint in its property editor that opens. You can see constraint information in the viewport if you click the eye icon in a viewports menu bar and select Relations.

Creating Offsets between Constrained Objects


When you constrain an object, you often need to offset it in some way from the constraining object. This could be an offset in position, orientation, or scaling. For example, if you position-constrain one object to another without an offset, both objects end up sharing the same position (on top of each other), so you need to offset them.
Constraining object (magnet) Constrained object (airplane) Position constraint without offset: The position of the constrained objects center matches that of the constraining objects center.

Position constraint with offset: An offset is applied to the position of the constrained objects center.

Basics 161

Section 9 Animation

With almost all types of constraints, you can set offsets using the controls in their property editors. The offset is set between the centers of the constrained and constraining objects on any axis. To set an offset interactively, you can use the CnsComp button (Constraint Compensation) on the Constrain panel. With compensation, you can interactively offset the constrained object from the constraining object and animate it independently while keeping the constraint.

Blending Constraints
You can blend multiple constraints on an object with each other, as well as blend constraints with other animation on the constrained object. You set the Blend Weight parameters value in each constraints property editor to blend the weight (or strength) of one constraint against the others. And, of course, you can animate the blending to have it change over time. Blending is done in the order in which you applied the constraints, from the first-applied constraint to the last. Each constraint takes the previous result and gives a new one based on the value you set. For example, if you have three position constraints on an object, you can have the object placed exactly in the center of them. In the example on the right, the cone has three blended position constraints to keep it positioned in the middle of the triangle formed by objects A, B, and C: A B C First to A with a blend weight of 1. Next to B with a blend weight of 0.5. Lastly to C with a blend weight of 0.333. You can see the order of the constraints as well as their blend weight values in a viewport if you click the eye icon in a viewport and select Relations and Relations Info.
C

162 Softimage

Path Animation

Path Animation
A path provides a route in global space for an object to follow in order to get from one point to another. The object stays on the path because its center is constrained to the curve for the duration of the animation. You can create path animation in Softimage using a number of methods, each one having its own advantages: The quickest and easiest way of animating an object along a path is by using the Create > Path > Set Path command and picking the curve to be used as the path. Theres no need to set keyframesjust set the start and end frames. The object is automatically constrained to the path and animated along the percentage of the curves length. Constrain an object to curve using the Curve (Path) constraint and manually set keys for the percentage of the path traveled. Choose the Create > Path > Set Trajectory command and pick a trajectory to use a curves knots as indicators of the objects position at each frame.
C D

After youve created path animation, you can modify the animation by changing the timing of the object on the path (choose the Create > Path > Path Retime command), or by moving, adding, or removing points on the path curve as you would to edit any curve. For example, using the Path Retime command, you can shorten (and therefore increase the speed) a path animation that went from frame 1 to 100 to frames 20 to 70. You can even reverse the animationfor example, enter 100 as the start and 1 as the end frame.

A B C D

The dotted line is connected to the center of the constraining curve. You can select the line and press Enter to open the PathCns or TrajectoryCns property editor. A triangle represents a locked-path key. A square represents a key saved on the path. A circle represents a key set directly from a property page or the animation editor. These are the only type of keys found on trajectories. You can see path information in a viewport if you click the eye icon in a viewport and select Relations.

Move an object about your scene and save path keys with the Create > Path > Save Key on Path command at different positionsthe path curve is created automatically as you go. Convert the existing movement of an object into a path using the Create > Path > Convert Position Fcurves to Path command. Want to convert a path animation to translation? Plot the position of the path-animated object, then apply the result to the object or as an action in the animation mixer.

Basics 163

Section 9 Animation

Linking Parameters
When you create linked parameters, also known as driven keys, you create a relationship in which one parameter depends on the animation state of another. In Softimage, you can create simple one-to-one links with one parameter controlling another, or you can have multiple parameters controlling one parameter. After you link parameters, you set the values that you want the parameters to have, relative to a certain condition (when A does this, B does this). Drive a single parameter with the combined animation values of multiple parameters. This allows you to create more complex relationships, where many parameter values are interpolated to create an output value for one parameter. Drive a single parameter with the whole orientation of an object. Overview of Linking Parameters To open the Parameter Connection Editor, choose View > Animation > Parameter Connection Editor. Then follow these steps:

Venus flytrap eyes its victim. Its jaws rotation Z parameter is linked to the position X parameter of the fly that is animated along a path.

You can link any animatable parameters togetherfrom translation to colorto create some very interesting or unusual animation conditions. For example, you could create a chameleon effect so that when object A approaches object B, it changes color. Basically, if you can animate a parameter, you can link it. There are three basic ways in which you can link parameters. You can:
5

Create simple one-to-one links with one parameter driving one or more other parameters. When you link one parameter to another, a relationship is established that makes the value of the linked parameter depend on the value of the driving parameter.

164 Softimage

Linking Parameters

Select an object, then select one or more of its parameters in the Driven Target explorer. These are the parameters whose values will be controlled by the driving parameter. Click the lock icon to prevent the explorer from changing when you select other objects. Select an object, then select one of its parameter in the Driving Source explorer. This is the parameter whose values will control the linked parameters. If you are driving a single parameter with multiple parameters, select two or more of the parameters (Ctrl+click) here. These are the parameters whose interpolated values will control the linked parameter.

2 3

Select Link With from the link list. If you are driving a single parameter with multiple parameters, select Link With Multi.

Click the Link button. A link relationship is established between the parameters. An l_fcv expression appears in the Definition text box and the animation icon of the linked parameter displays an L to indicate this. If you are driving a single parameter with multiple parameters, an l_interp expression appears in the Definition text box.

Set the driving and linked parameters values as you want them to be relative to each other, then click the Set Relative Values button. Repeat this step for each relative state you want to set.

Basics 165

Section 9 Animation

Expressions
Expressions are mathematical formulas that you can use to control any parameter that can be animated, such as translation, rotation, scaling, materials, colors, or textures. Expressions are useful to creating regular or mechanical movements, such as oscillations or rotating wheels. As well, they allow you to create almost any connection you like between any parameters, from simple A = B relationships to very complex ones using predefined variables, standard math functions, random number generators, and more. However you use expressions, you will find that they are very powerful because they allow you to animate precisely, right down to the parameter level. Once youre more experienced using them, you can create all sorts of custom setups, like character rigs and animation control systems. Overview of writing an expression

1 2

Select an object and open the expression editor by pressing Ctrl+9. Select the target, which is the parameter controlled by the expression. The Current Value box below it shows the value of the expression at the current frame.

Enter the expression in the expression pane by typing directly or by choosing items from the Function, Object, and Param menus. You can also enter parameter names by typing their script names and then pressing F12. This prompts you with a list of possible parameters in context. You can copy, cut, and paste in the expression pane using standard keyboard shortcuts (Ctrl+C, Ctrl+X, and Ctrl+V, respectively).

4 5

The message pane updates as you work, letting you know whether the expression is valid or not. Click the Validate and Apply buttons to validate and then apply the expression. For a complete description and syntax of all the functions and constants available, refer to the Expression Function Reference (choose Help > Users Guides).

166 Softimage

Expressions

How to create a simple equal (=) expression: 3 ways Use any of these methods to create a simple equal expression between two parameters:
A B

In a property editor, drag an unanimated parameters animation icon onto another parameters animation icon. This animation icon shows an equal sign and its value is made to be equal to the first parameter. In the explorer, drag the name of an unanimated parameter and drop it on another parameters name. In the parameter connection editor, set up the Driving Source and Target parameters, then select Equals (=) Expression.

B C

Basics 167

Section 9 Animation

Copying Animation
There are different levels at which you can copy animation in Softimage: between parameters, between objects, or between models. Here are some of the main ways to do this. You can copy animation between any parameters in the explorer or a property editor in a number of ways:

You can copy any type of animation between selected objects, models, or parameters using the Copy Animation commands from the Animation menu in the Animation panel. You can copy keys between parameters or objects in the dopesheet, or copy function curves and keys between parameters or objects in the fcurve editor. In the dopesheet, you can copy animation from one model to another, or from one hierarchy of objects to another within the same model. For example, you can paste a walk cycle animation from the Bob model to the Fred model, as long as Fred has the same parameter names as Bob. Store an objects animation in an action source and copy it between models, which is especially useful for exchanging animation between scenes.

A B

In the explorer, drag the name of an animated parameter and drop it on another parameters name. In a property editor, drag the animation icon of an animated parameter and drop it on another parameters animation icon. In either the explorer or a property editor, right-click the animation icon of an animated parameter and choose Copy Animation. Paste this on another parameter with the Paste Animation command. In the explorer, you can drag an entire folder from one object onto another objects folder of the same name, such as the Pos folder which contains translation (position) parameters.

168 Softimage

Scaling and Offsetting Animation

Scaling and Offsetting Animation


If you find that your whole animation is a bit too long or too short, or you just want to offset by a few frames, you can do so with the Sequence Animation commands from the Animation menu in the Animation panel. They give you control over animation by offsetting or scaling (shortening or lengthening) the motion of all objects, selected objects, or just the marked parameters of selected objects. You can offset or scale all function curves. You can scale and offset using explicit values, or else you can retime an animation by fitting it into a specified frame range. You can even easily reverse an animation.
A

You can also use the dopesheet to offset or scale animation for an object or even the scene, especially using its summary tracks.
A B C The selected fcurve (white) has been scaled to twice its length. The ghosted fcurve (black) shows the original fcurves size. The selected fcurve has been offset by about 20 frames. The selected fcurve has been retimed so that a range of 125 frames in the middle of it has been compressed into a range of 80 frames.

Basics 169

Section 9 Animation

Plotting (Baking) Animation


When you plot the animation on an object using the commands in the Tools > Plot menu on the Animate toolbar, the animation is evaluated frame by frame and function curves are created. Plotting is useful for generating function curves from any type of animation or simulation, such as from the simulation of a spring-based tail on a dog, or plotting mocap animation from a rig. You can also plot the animation of a constrained object and then remove its constraints so that only the plotted animation remains on the object.
Animation of an object constrained between two points is plotted.

Removing Animation
There are different levels at which you can remove animation in Softimage: between parameters, between objects, or between models. Here are some of the main ways to do this. You can remove any type of animation from selected objects, models, or parameters using the Remove Animation commands from the Animation menu in the Animation panel. You can remove all keys from parameters or objects in the timeline or in the dopesheet, or remove fcurves or all keys from parameters or objects in the fcurve editor. When you remove keys from an fcurve, a flat (static) fcurve remains. To remove the static fcurve, choose Remove Animation > from All Parameters, Static Fcurves from the Animation menu. In the dopesheet, you can easily remove all animation from a model or from a hierarchy of objects using its summary tracks. To remove animation from parameters in a property editor, right-click the keyframe icon at the top of the editor and choose Remove Animation. This removes animation from all or marked animated parameters on that property page. To remove animation from parameters in the explorer or a property editor, right-click the animation icon of an animated parameter and choose Remove Animation.

Plotting is done by first creating an action source. You can choose to either keep or delete this action source after the animation has been plotted: You can apply the plotted animation (fcurves) immediately to the object and delete the action source. You can apply the plotted animation (fcurves) to the object and also keep them stored in an action source. This may be useful if youre using the animation mixer. You can keep the action source of the plotted animation (fcurves) but not have it applied to the object immediately. This may be useful for creating a library of action sources that can be applied to the same or even a different object.

170 Softimage

Section 10

Character Animation
Character animation is all about bringing your characters to life, whether its some guy dancing in a club, a dog catching a frisbee, or a simple bouncing ball with personality to spare. Even though youre working in a virtual environment, your job is to make these characters seem believable in their movements and expression. In Softimage, youll find everything you need to make any type of character come alive.

What youll find in this section ...


Character Animation in a Nutshell Setting Up Your Character Building Skeletons for Characters Enveloping Rigging a Character Animating Characters with FK and IK Walkin the Walk Cycle Motion Capture Making Faces with Face Robot

Basics 171

Section 10 Character Animation

Character Animation in a Nutshell


Softimage has many tools to help you create and animate your characters. Some of them are tools designed for character animation, such as inverse kinematics, while others are part of the standard Softimage tool set, such as modeling and keying tools.
1 Model the body geometry that is to be used as the envelope (skin). You can use either a low or high-resolution version of the envelope. A low-res envelope lets you work out the animation with it as a reference, but doesnt hinder the refresh speed. You can later switch to the high-res version for the final animation and rendering.

The following outline gives you an idea of which steps to take and which tools to use for developing and animating characters in Softimage.

2 Create a model structure for your character, starting with the body geometry. Then as you create the other elements (skeleton, rig controls, Mixer node), you put them in the model to keep all the characters elements together. This makes it easy to copy or export your character later on.

Build a skeleton to provide a framework for a character, and to pose or deform it intuitively. The structure of your characters skeleton determines every aspect of how it will move. With the envelope as a guide, you can create the bones for the skeleton and assemble them into a hierarchy.

Create a rig using different control objects to help you to pose and animate the character more quickly and accurately than without a rig. While simple characters may not require a rig, a character that is complex or needs to do complicated movements will need a rig.

172 Softimage

Character Animation in a Nutshell

Apply the envelope to the skeleton. This also involves setting how the different parts of the envelope are weighted to the different bones in the skeleton. You should also save a reference pose of the envelope before you start animating for a home base to which you can return.

Animate the skeleton using inverse kinematics (IK) and forward kinematics (FK). You can also apply mocap data to your character to animate it, including retargeting the data onto different characters with the MOTOR tools.

Adjust the animation using any of the animation tools in Softimage, such as the dopesheet, the fcurve (animation) editor, animation layers, or the animation mixer. For example, you many want to fix foot sliding in the fcurve editor, add a progressive offset to a walk cycle in the mixer, or add a few keyframes on top of some mocap data with animation layers.

Basics 173

Section 10 Character Animation

Getting Started with Ready-Made Characters


Looking for a quick way to get started with characters in Softimage? Check out the ready-made models in the Get > Primitive > Model and Get > Primitive > Character menus. Here are just a few of the characters youll meet on these menus:

All predefined skeletons, bodies, characters, and rigs are implemented as models. As well, most of the bipeds share the same basic hierarchy structure that you can see in the explorer, making it easy to share animation later, especially if youre using actions in the animation mixer. Making Custom Characters and Faces The Character Designer (choose Get > Primitive > Character > Man Maker) loads a generic male body, then use sliders in a property editor to interactively manipulate individual body and head features. You can create many bodies, each with their own distinctive look, yet have all bodies sharing the same underlying topology. The Face Maker (choose Get > Primitive > Character > Face Maker) loads a predefined low-resolution polygon mesh head (male or female). This lets you can create any number of different faces with the same topology, allowing you to easily copy shape animation keys between them. Perfect for testing out some shape animation!

Complete Woman Skeleton and Biped Character

Man Maker

Face Maker

XSI Man Armored and Elephant

174 Softimage

Setting Up Your Character

Setting Up Your Character


How you set up your character determines its destiny in many different ways. Here are some issues to think about while youre planning out your character animation.

Organizing Your Character into Scene Layers and Groups


Scene layers let you divide up different scene elements into groupings whose visibility, selectability, renderability, and ghosting can be controlled. Press 6 to open the scene layer manager and set up the layers. For example, you can separate the characters envelope (geometry), its skeleton, and its control objects for the rig each into different layers. Layers, however, live only at the scene level, so if youre importing and exporting models between scenes, theyre not going to include any layer information. This is where groups can be of help. Groups let you keep certain character elements together for easy selection, such as all objects that are to be enveloped. Groups are properties of a model, so you can export them with your character model.

Putting the Characters Elements into Models


Models in Softimage are containers that make it easy to organize scene elements that need to be kept together. A characters skeleton hierarchy, rig controls, envelope geometry, and groups are often kept together within a model. The main reasons for using models with character animation is that they provide the easiest way to import and export characters between scenes and to copy animation between characters. You can refine your rigs and character models over the course of a production without fear of lost animation. For example, character animators can start roughing out animation with a simple rig and low resolution proxy model while the other creative work is still being worked out. As long as you keep the rig controls names and their coordinate space consistent, all the animation is kept and can be reapplied as the character and rigging both get more complex. Another reason to work with models is to easily use the animation mixer. Each model can only have one Mixer node. If you have many characters in a scene but they arent within models, you have only one Mixer node for the whole scene (under the scene root, which is technically a model) which means that you cant copy animation from one character to another.

Basics 175

Section 10 Character Animation

Tools for Easy Viewing and Selecting


When youre animating a skeleton, you may want to work with a lowresolution version of the envelope on the skeleton. This helps you get a sense of how the animation will work with the final envelope. However, working with enveloped skeletons can make it difficult to view or select chain elements. To help you with this, Softimage has several viewing and selection options, with the most common ones shown here.
X-ray shading lets you see and select the underlying chains while still seeing the shaded surface of the envelope. You can display the chains in screen (bones inside) or overlay (bones on top) modes.

You can set up a character synoptic view for other members of your team, allowing them to use your character easily. Synoptic views allow you and others to quickly access commands and data related to a specific object or model. They consist of a simple HTML image map stored as a separate file outside of the Softimage scene file. The HTML file is then linked to a scene element. Clicking on a hot spot in the image either opens another synoptic view or runs a linked script. You can include all sorts of information about the character, set up hotspots for selecting body parts, setting keys on different elements, running a script, etc.

Synoptic views Click on a hot spot on the synoptic image to run the script that is linked to that image.

Shadow icons are displayed here as cylinders for many bones. These shadows have been resized and offset from the bone to make them easy to see and grab. You can also color-code the shadows to identify different groups of controls. You can also change the shape, color, and size of the chain elements themselves (such as resizing the bones), including having no chain element displayed at all.

176 Softimage

Building Skeletons for Characters

Building Skeletons for Characters


Skeletons provide an intuitive way to pose and animate your character. A well-constructed skeleton can be used for a wide variety of poses and actions. Skeletons in Softimage are made up of bones that are linked together by joints that can rotate. The combination of bones and joints is referred to generically as a chain in Softimage because you can use chains for animating any type of object, not just humans or creatures. Chains have several elements, each of which has an important part to play, as shown below.

Anatomy of a skeleton
The bones are connected by joints. A bone always rotates about its joint, which is at its top. The first bone rotates around the root. The root is a null that is the starting point on the chain. It is the parent of all other elements in the chain. Because the first joint is local to the root, the roots position and rotation determine the position and rotation of the rest of the chain. A joint is the connection between elements in a chain: between bones in the chain, between the root and the first bone, and between the last bone and the effector. By default, joints are not shown but you can easily display them. In a 2D chain, the joints act as hinges, restricting movement so that its easier to create typical limb actions, such as bending an arm or leg. Only its first joint at the root acts as a ball joint, allowing a free range of movement: when using IK, the rest of the 2D chains joints rotate only on the roots Z axis, like hinges. Of course, you can rotate the joints of a 2D chain in any direction with FK, but this is overridden as soon as you invoke IK. In a 3D chain, the joints can move any which way they like. All of its joints are like ball joints that can rotate freely on any axis, allowing you to animate wiggly objects like a tail or seaweed. The first bone in the chain is a child of the root, and all other bones are children of their preceding bones. Keying the rotation of bones is how you animate with forward kinematics (FK).

The effector is a null that is the last part of a chain. Moving the effector invokes inverse kinematics (IK), which modifies the angles of all the joints in that chain. When you create a chain, the effector is a child of the root, not the preceding bone.

Basics 177

Section 10 Character Animation

Creating Skeletons
Drawing chains is pretty simple in Softimage: you choose the Create > Skeleton > Draw 2D Chain or 3D Chain command on the Animate toolbar and click where you want the root, joints, and effector to be. Here are some tips to help you draw chains: Draw the chains in relation to the default pose of the envelope that youre planning to use. This means you dont have to spend as much time adjusting each bones size and position later. Draw the chain with at least a slight bend to determine its direction of movement when using IK. Drawing bones in a straight line can result in unpredictable bending. If you want two chains to be mirrored, such as a characters arms or legs, you can draw one and have the other one created at the same time. Just activate symmetry (Sym) mode and then draw a chain.

After you have created the chains for a characters skeleton, you need to organize them in a hierarchy. Hierarchies are parent-child relationships that make it easy to animate the skeleton. There are many different ways in which you can set up a hierarchy, depending on the skeletons structure and the type of movements that the character needs to make.
Part of a skeleton hierarchy structure shown in the schematic view. In this case, the spine root is the parent of the leg roots, spine, and spine effector. These elements are, in turn, parents of the legs, neck, shoulders, spine, and so on.

Choose the Create > Skeleton > Draw 2D Chain or Draw 3D Chain command.

How to create a hierarchy


In an explorer, drag the nodes you want to be children and drop them onto the node that will be the parent. OR Select the node you want to be the parent, click the Parent button and then pick the elements that will be its children. Right-click to end the parenting mode.

2 Click once to create root and first joint.

3 Click again to create first bone and second joint. Tip: You can try out the joints location by keeping the mouse button held down as you drag. The bone and joint are not created until you let go of the mouse button. 4 Click once more to create another bone and joint. 5 When youre ready to finish, right-click to create the effector and end the chain.

178 Softimage

Building Skeletons for Characters

Hold That Pose!


When youre creating a skeleton, its a good idea to save it in a default position (pose) before its animated or enveloped. This way you have a solid reference point to revert to when enveloping and animating the skeleton. This pose is known as the neutral pose, reference pose, base pose, or bind pose, and is usually set up so that the character has outstretched arms and legs (a T-pose), making it easy to weight the envelope and adjust its textures. To save the skeleton in a pose, you can create an action source using the Skeleton > Store Skeleton Pose command. To return to this pose at any time, you apply it to your character with the Skeleton > Apply Skeleton Pose command. Because this pose is saved in an action source, you can pop it into the animation mixer to do nonlinear animation. For example, you could use this pose, as well as other stored action poses, to block out a rough animation for the character in the mixer.

Neutral Poses for Easy Keying


While a characters reference or neutral T-pose makes it easy to weight the envelope and adjust its textures, its not the best pose for animating. This is because it can create local transformation values that are not easy to key. For example, if you load the default skeleton that comes in Softimage and you want to key the rotation of the finger bones, youll see that the bones local rotation values are difficult numbers to use for keying because they often involve several decimal places. To solve this problem, pose your character how you want for its neutral pose and then simply choose the Skeleton > Create Neutral Pose command. This creates a neutral pose that uses zero for its local transformation values (0 for rotation and translation, 1 for scaling). Basically, this neutral pose acts as an offset for the objects current local transformation values. To return to this neutral pose, you can enter zero in the Transform panel (zero out the values). Then when you key the characters values, they reflect the relative difference from zero, and not a number thats difficult to use. For example, when you key a hand bone at coordinates (0, 3, 0), you know that its 3 units in the Y axis above the neutral pose.
Branch-selected hand bone in neutral pose at 0.

Hand bone rotated and keyed. Notice how the rotation values are easy to understand because theyre using 0 as a reference.

Character in his neutral pose for weighting and texturing. If you store a skeleton pose of this position, its easy to return to it at any point of your characters development.

Basics 179

Section 10 Character Animation

Making Adjustments to a Skeleton


Even though youve created your skeleton with the envelope in mind, you always need to resize bones, chains, or a whole skeleton to achieve the exact structure you want. As well, you may need to add or remove bones to the skeleton. Its usually better to modify a skeleton before you apply the envelope to it so that you dont have to reweight the envelope to the bones. However, you can change the skeleton after its been enveloped, and decide whether to have the envelope adjust to the skeleton or not.
Adding bones
You can add bones to a chain using the Create > Skeleton > Add Bone to Chain command. Click at the point where you want the new bone to end, and the new bone is added between the last bone and the effector. Keep on adding as many bones as you like, then right-click to end the mode.

Resizing bones
The easiest way to resize bones is to use the Create > Skeleton > Move Joint/Branch tool (press Ctrl+J). This tool lets you interactively resize bones by moving any chain element to a new location. The bones that are immediately connected to that chain element are resized and rotated to fit the chain elements new location. Moving the knee joint using Move Branch resizes only the bone above it: this joints children are moved as a group but are not resized.

Use the Move Joint tool to move the knee joint to a new position. The bones connected above and below this joint are resized.

Removing bones
You cant select and delete individual bones from a chain because of their hierarchy dependencies, but you can branch-select (middle-click) a chain and then delete it. If there are children in that chain that you want to keep, make sure to Cut their links before deleting the chain, and then reparent them to the modified chain.

Modifying bones for an enveloped skeleton


If you resize or add bones to a skeleton thats already enveloped, the envelope automatically adjusts to the new skeleton. This means that you may need to adjust the weighting on the envelope. If you want to resize bones without having the envelope adjust to the new size, you set a new reference pose with the Deform > Envelope > Set Reference Pose command.

180 Softimage

Enveloping

Enveloping
An envelope is an object that deforms automatically, based on the pose of its skeleton or other deformers. In this way, for example, a character moves as you animate its skeleton. The process of setting up an envelope is sometimes called skinning or boning. Every point in an envelope is assigned to one or more deformers. For each point, weights control the relative influence of its deformers. Each point on an envelope has a total weight of 100, which is divided between the deformers to which it is assigned. For example, if a point is weighted by 75 to the femur and 25 to the tibia, then the femur pulls on the point three times more strongly than the tibia. in the explorerthis is equivalent to picking every object in the group individually. If you make a mistake, Ctrl+click to undo the last pick. 5. When you have finished picking deformers, right-click to terminate the picking session. Each deformer is assigned a color, and points that are weighted 50% or more toward a particular deformer are displayed in the same color. Use the Automatic Envelope Assignment property editor to adjust the basic settings. 6. Move the deformers to see how the envelope deforms. If necessary, you can now change the deformers to which points are assigned, as well as modify the envelope weights using the methods described in the next few sections. If you ever need to reopen the Automatic Envelope Assignment property editor, you can find it in the envelope weight stack in an explorer.

Setting Envelopes
1. Make sure the envelope and deformers are in the reference pose (sometimes called a bind pose). The reference pose determines how points are initially assigned and weighted. Its best to choose a reference pose that makes it easy to see and control how points will be assigned. 2. Select the objects, hierarchies, or clusters to become envelopes. 3. Choose Deform > Envelope > Set Envelope from the Animate toolbar. If the current construction mode is not Animation, you are prompted to apply the envelope operator in the animation region of the operator stack anyway. In most cases, this is probably what you want. 4. Pick the objects that will act as deformers. You are not restricted to skeleton bones; you can pick any object. Left-click to pick individual objects and middle-click to pick branches. You can also pick groups

Basics 181

Section 10 Character Animation

The Weight Paint Panel


The weight paint panel is very useful when modifying weights. It combines several features from the weight editor, brush properties, and the Animate toolbar. To display the weight paint panel, press Ctrl+3 or click the weight paint panel icon at the bottom of the toolbar.

Chose a paint mode. Weight Paint Panel Activate Paint tool. Set paint density. Set brush size. Update continuously (on) or only when mouse button is released (off). Pick a deformer for painting from the 3D views. Select the deformer with the most influence on the point you pick Click to pick deformer for painting. Right-click for other options.

Set weight assignment of selected points to current deformer numerically. Numeric weight assignment options. Smooth weights on object or selected points. Reassign points to other deformers. Freeze initial weight assignment and any modifications. Open weight editor Display only current deformers weight map.

Change color of current deformer.

Painting Envelope Weights


You can use the Paint tool to adjust envelope weights. This lets you use a brush to apply and remove weights on points in the 3D views. 1. Select an envelope. 2. Activate the Paint tool using the weight paint panel or by pressing w. 3. Pick a deformer for which you want to paint weights by selecting it in the list in the weight paint panel or by pressing d while picking it in a 3D view.

4. If desired, set the paint mode. Most of the time you will be using Add (additive) but Smooth, Erase, and Abs (absolute) are also sometimes useful. 5. If desired, adjust the brush properties: - Use the r key to change the brush radius interactively. - Use the e key to change the opacity interactively. - Set other options in the Brush Properties editor (Ctrl+w). 6. Click and drag to paint on points on the envelope. In normal (additive) paint mode: - To add weight, use the left mouse button.

182 Softimage

Enveloping

- To remove weight, either use the right mouse button or press Shift+left mouse button. - To smooth weight values between deformers, press Alt+left mouse button. 7. Repeat steps 3 to 6 for other deformers and points until you are satisfied with the weighting. If your envelope has multiple maps, for example, a weight map in addition to an envelope weight map, then you may need to select the envelope weight map explicitly before you can paint on it. A quick way is to select the enveloped geometry object, then choose Explore > Property Maps from the Select panel and select the map to paint on.

Reassigning Points to Specific Deformers


You can reassign points to specific deformers. This is useful in case the automatic assignment did not assign the points to the desired bones. 1. Select points on the envelope. 2. Choose Deform > Envelope > Reassign Locally on the Animate toolbar, or click Local Reassign on the weight paint panel. 3. Pick one or more of the original deformers.

Smoothing Envelope Weights


In addition to painting in Smooth mode, you can select an envelope or specific points and click Apply Smooth on the weight panel. This applies a Smooth Envelope Weight operator with several options.

Mirroring Envelope Weights Symmetrically


You can mirror the envelope weighting symmetrically. This lets you set up the weighting on one half or your character and then copy the weights to the corresponding points and deformers on the other half. First, you must establish the correspondence between symmetrical points and deformers using Deform > Envelope > Create Symmetry Mapping Template from the Animate toolbar. Then, you can select properly weighted points and copy their values to the other side using Deform > Envelope > Mirror Weights.

These points are incorrectly assigned to this deformer.

Basics 183

Section 10 Character Animation

Setting Weights Numerically


The weight editor allows you to modify envelope weight assignments numerically. You can open the weight editor by pressing Ctrl+e or by clicking Weight Editor on the weight panel.
Transfer cell selection to 3D views.

Control display of enveloped objects.

Reassign points to other deformers. Smooth weights on object or selected points. Freeze the envelope operator stack.

Control display of points and deformers. Lock weights. Deformers are listed in columns. Rightclick for display options. Drag a column border to resize. Multiple envelopes. Double-click to expand and collapse, or right-click for more options. If some points arent fully weighted, the name is shown in red. Hover the mouse pointer over the name to see how many points arent fully weighted.

Limit the number of deformers per point. Weight assignment options. Set weight of selected cells.

Points are listed in rows. Click to select, right-click for display options. Drag a row border to resize. Points that arent fully weighted are shown in red.

Points with more deformers than the limit are shown in yellow, as are envelopes with such points. Selected cells are highlighted. Non-zero weights are shaded.

184 Softimage

Enveloping

Locking Envelope Weights


You can lock or hold the values of envelope weights using the weight editor, the Envelope menu of the Animate toolbar, or the context menu in the deformer list of the weight panel. Locking prevents you from accidentally modifying points that you have carefully adjusted when you are working on other points. It is also useful for setting exact numeric values while keeping Normalize on so that points dont inadvertently become partially weighted to no deformer. If you need to modify locked points later, you must first unlock them. Points that are locked for all deformers are drawn in black in the 3D views.

Using Envelope Presets


You can use the commands on the File menu of the weight editor to save and load presets of envelope weights. This can be useful if you want to experiment with modifying weightsyou can save the current weights and reload them later if you dont like the results. To share presets between different envelopes, the envelopes must meet the following conditions: They must have exactly the same topology. This includes both the number of points and their connections. If you added points after you created a preset, and then reapply the preset to the modified geometry, the new points are not weighted to any deformer until you assign them manually. Their deformers must have the same names. The easiest way to meet these conditions is to simply duplicate a model containing an envelope and its deformers.

Freezing Envelope Weights


When you freeze envelope weights using Freeze Weights on the weight paint panel, the weight maps operator stack is collapsed, removing the original Automatic Envelope Assignment property along with any Weight Painter, Modify Envelope Weight, and Smooth Envelope Weight operators that have been applied. This reduces the amount of stored data and increases performance, but also has a number of other effects: The initial envelope weights can no longer be recalculatedits as if the envelope was imported as is. If you change the reference pose, you can no longer change the initial envelope weights based on the new pose. If you add a deformer to an envelope, you can no longer recalculate the weights automatically. The envelope points are all weighted 0 to the new deformer, and you must assign weights manually. However, you can still add new paint strokes, smooth weights, and edit weights numerically after freezing. In addition, you can still reassign points locally to other deformers.

Changing Reference Poses


After an envelope has been assigned, you can change the reference pose of the envelope. The reference pose is the stance that the envelope and its deformers return to when you use the Reset Actor command. It is also the pose that determines the initial weighting of points to deformers based on proximity. First mute the envelope, then adjust the positions of the envelope and deformers. Next, select both the envelope and deformers and choose Deform > Envelope > Set Reference Poses from the Animate toolbar. Finally, unmute the envelope.

Basics 185

Section 10 Character Animation

Adding and Removing Deformers


After you have applied an envelope, you can add and remove deformers. To add deformers, select the envelope, choose Deform > Envelope > Set Envelope from the Animate toolbar, pick the new deformers, and right-click when you have finished. If the envelope weights have been frozen or if Automatically Reassign Envelope When Adding Deformers is off, no points are weighted to the new deformers so you must do that manually. Otherwise, the initial weight assignments are recalculated and any modifications you made to them are preserved. To remove deformers, simply choose Deform > Envelope > Remove Deformers from the Animate toolbar, pick the deformers to remove, and right-click when you are finished.

Limiting the Number of Deformers per Point


You can limit the number of deformers to which each points weight is assigned. This can be especially important for game characters, because some game engines have a limit on the number of deformers. 1. Set the maximum number of deformers on the weight editors command bar.
Maximum number of deformers

If a points weight is assigned to more than this number of deformers, its row is shown in yellow in the weight editor. If an envelope has any such points, its row is shown in yellow, too. 2. To try to fix these points automatically, click Enforce Limit. A Limit Envelope Deformers operator is applied, and its property page is opened automatically. By default, the limit is the one you set on the command bar, but you can change it for individual operators. If a point has more than the maximum number of deformers, the operator unassigns the deformers with the lowest weights and then normalizes the weight among the remainder. However, it will respect locked weightslocked weights are never changed, even if other deformers have greater weight. If there arent enough unlocked weights to modify, then the total weight might not add up to 100%.

Modifying Enveloped Objects


Sometimes, after carefully assigning weights manually, you discover that you need to make a substantial change to the enveloped object, such as adding points. Luckily, you do not need to redo all your weightingyou can add and move points after enveloping. When you add a point to an enveloped object, it is automatically weighted based on the surrounding points. It is better to add new points before removing old onesthis means that there is more weight information for the new points. You can assign the new points to specific deformers and modify weights as with any point on the envelope. If you want to apply a deformation or move points on an enveloped object, make sure to first set the construction mode based on what you want to accomplish. For example: If you want to modify the base shape of the envelope, set the construction mode to Modeling. If you want to author shape keys on top of the envelope, for example, to create muscle bulges, set the construction mode to Secondary Shape Modeling.

186 Softimage

Rigging a Character

Rigging a Character
Control rigs allow for puppeteering a character, helping you easily pose and animate it. Once a control rig is set up properly, you can animate more quickly and accurately than without one. There are a number of tools in Softimage to help you create a rig for your character. You can use them to create control objects and constrain them to the skeleton, and to create shadows rigs and manage the constraints between them and their parent rigs. You can also use the prefab guides and rigs in Softimage to help you get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The rigs are skeletons that include control objects that you can position and orient to animate the various parts of the characters body.
Ready-made (prefab) biped rig that comes with Softimage Animated main rig You can create either a quaternion or regular chain spine and head. Separate controls for the chest, upper body, and hips let you position and rotate each area individually.

Shadow Rigs and Exporting Animation


Shadow rigs are simpler rigs that are constrained to your more complex main rig that is used for animating the character. Shadow rigs are usually used for exporting animation, such as to a games or crowd engine or other 3D software programs. You can load a basic shadow rig with the Get > Primitive > Model > Biped - Box command. You can also create a shadow rig from a guide with the Character > Hierarchy from Guide command, or generate a shadow rig at the same time that you create a prefab rig. To transfer the animation from the complex (animated) rig to its shadow rig, you plot the animation while the shadow rig is still constrained to the complex rig. Then you can export the shadow rig or just its animation.
Animation transferred to shadow rig while its constrained to the main rig.

Volume indicators help you work with envelopes.

Feet have three controls to allow for complex angles and foot rolls.

Basics 187

Section 10 Character Animation

Creating Your Own Rig


There are a number of tools in Softimage to help you create a rig for your character. You can create primitive control objects (such as spheres and cubes) or sophisticated control elements, (such as spines and spring-based tails) and constrain them to the skeleton. Expressions and scripted operators on these controls allow you to have ultimate control over your characters animation. There are also tools to help you easily create shadows rigs and manage the constraints between them and their parent rigs.
1 Create control objects out of primitive objects or curves for each skeleton element you want to control. You can also create your own objects to look like the body parts youre controlling, such as the feet, hands, head, or hips. Use up-vector constraints for controlling the resolution plane of the arms and legs when using IK. Put the control objects behind the legs or arms and constrain them to the thigh or upper-arm bones using the Create > Skeleton or Constrain > Chain Up Vector command. 2 Constrain the control object to its skeleton element using constraints from the Constrain menu. The pose constraint is often used because it constrains all transformations (SRT) of the control object to its skeleton element.

You can create a simple but flexible spine with the Create > Skeleton > Create Spine command. This creates a quaternion-blended spine for controlling a character the way you like. You constrain the top and bottom vertebrae to hip and chest control objects that you create.

3 Create an object, such as a null, and make it the parent of all skeleton and rig control objects. Also make sure that all the rig control objects are within the characters model.

Create spring-based tail or ear controls using the Create > Skeleton > Create Tail command. Spring-based controls use dynamics to make them react to motion, such as bouncing when a character runs or jumps.

You can also create a Transform Group in which a null becomes an invisible parent of all selected objects.

188 Softimage

Rigging a Character

Using Prefab Guides and Rigs


You can use the prefab guides and rigs in Softimage to get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The resulting rigs created from the guides are skeletons that include control objects that you can position and orient to animate the various parts of the characters body.

You can customize these guides and rigs so that they contain only the elements you need. They can be used as a starting point for different rigging styles, and technical directors can write their own proportioning script to attach their own rig to a guide. The guides have synoptic views to help you select and animate the rig controls: select any control and press F3. There are also preset character key sets and action sources to help you animate the rig.
3 Apply the body geometry as an envelope to the rig using the envelope_group in the rigs model to apply it to the correct parts of the rig.

1 Create a guide by choosing Character > Biped Guide (or quadruped or biped dog-leg) and adjust it to fit your characters envelope. Drag the red cubes to resize the different parts of the body. You can use symmetry to resize the limbs on both sides of the body at the same time.

2 When the guide is fitted to the envelope, create a rig based on it by choosing Character > Rig from Biped Guide. The rig is a skeleton that also includes standard Softimage objects as control objects.

4 Position and rotate the rig controls and key them to animate the various parts of the skeleton.

You can also create tail, ear, and belly controls that are driven by springs. This lets you create secondary animation on these body parts using dynamics.

Basics 189

Section 10 Character Animation

Animating Characters with FK and IK


Skeletons provide an intuitive way to pose and animate your model. A well-constructed skeleton can be used for a wide variety of poses and actions, in much the same way as the skeletons in our bodies can. How parts of the skeleton move relative to each other is determined by the way your skeleton hierarchy is built, whether and how objects are constrained to each other. Before you start animating your character, it is important to understand how animating transformations work in Softimage. There are several issues related to local and global animation, as well as animating transformations in skeleton hierarchies (see Animating Transformations on page 151). You animate skeletons using inverse kinematics (IK) and forward kinematics (FK). The method you choose depends on what type of motion youre trying to achieve. Of course, you can animate with both IK and FK on the same chain and then blend between them, allowing you the flexibility to animate as you like. Have a movement properly follow through, such as giving a good, hard kick to a football. Forward kinematics
Bones in arm are rotated and keyed in order from the upper arm down to move from an outstretched position to a raised position with a flexed wrist.

To animate with FK

1 2 3 4

Select a bone or the control rig object to which a bone is constrained. Click the Rotate (r) button in the Transform panel or press C. Rotate the bone into position on any axis (X, Y, Z). Key the bones rotation values.

Animating with Forward Kinematics


Forward kinematics, or FK as it is usually known, allows for complete control of the chains behavior. When you animate with FK, you rotate a bone into position, which sets the angle of its joint, and then key the bones rotation values (its orientation). Each movement needs to be planned to create the resulting animation. For example, to bend an arm, you start from the top and move down by rotating the upper arm bone, the forearm bone, and finally the hand bone. With FK, you can: Key the exact orientation (in X, Y, Z) of a joint. This prevents any surprises from occurring when 2D chains flatten on their resolution plane. Control certain joints that are difficult to animate, such as shoulders and arms.

You could also animate with FK by first translating the chains effector (invoking IK) to move the bones into position, and then tweaking each bones rotation as necessary. When things are in position, choose Create > Skeleton > Key All Bone Rotations to set rotation keys for all the bones in that chain.

To help make keying easier, you can create a character key set that contains all the rotation parameters for the bones. Then you can quickly key using this set. In a similar way, you can use the keying panel to key only the rotation parameters that you have set as keyable for the bones.

190 Softimage

Animating Characters with FK and IK

Animating with Inverse Kinematics


Inverse kinematics, usually referred to as simply IK, is a goal-oriented way of animating: you define the chains goal position by placing its effector where you want, then Softimage calculates the angles at which the previous joints in the chain must rotate so that the chain can reach that goal. IK is an intuitive way of animating because its how you probably think of movement. For example, when you want to grab an apple, you think about moving your hand to the apple (goal-oriented), not rotating your shoulder first, then your arm, and then your hand. With IK, you can: Easily try out different poses. Dragging an effector to reach a goal is intuitive for certain types of actions. Quickly animate simple movements, including 2D chains that have a limited range of movement. Easily set up poses for a chain by positioning the effector, then keying either the effectors translation (IK) or the bones rotation values (FK). Translation values on effectors of chains created in Softimage are local to the effectors parent (by default, the chain root). By not having the effector tied to its preceding bone, you are free to create local animation on the effector that can be translated with its parent. However, many animators prefer to constrain effectors and bones to a separate hierarchy of control objects (control rigs) so that they never animate the skeleton itself directly. To help make keying easier, you can create a character key set that contains all the translation parameters for the effector. Then you can quickly key using this set. In a similar way, you can use the keying panel to key only the translation parameters that you have set as keyable for the effector.

Inverse kinematics

Legs effector is branchselected (middle-clicked) and translated to move the leg from a standing position to doing the can-can.

To animate with IK

Select the chains effector or the control rig object to which the effector is constrained. Click the Translate (t) button in the Transform panel or press V. Move the effector so that the chain is in the position you want. Key the effectors translation values.

2 3 4

You could also constrain the effector to a curve with the Constrain > Path command and animate it with path animation. The chain is solved in the same way as if you keyed the effectors positions.

Basics 191

Section 10 Character Animation

Basic Concepts for Inverse Kinematics


There are two fundamental concepts you should understand when working in IK: the chains preferred angle and its resolution plane. When you draw a chain, you usually draw it with a bend to be able to predict its behavior when using IK. This bend is called the chains preferred angle. When you move the effector, the chains built-in solver computes a solution that considers these angles and the effectors position.

You can change the joints preferred angle to get the correct skeleton structure for the animation that you want to create. This solves the IK in a new way, affecting the movement of the whole chain. You can also reset a bones rotation to the value of its preferred rotation, which resets the chain to its pose when you created it. With 2D chains, the preferred axis of a chain (the X axis, by default) is perpendicular to the plane in which Softimage tries to keep the chain when moving the effector. This plane is referred to as the general orientation or resolution plane of a chain. It is in the space of this plane that the IK system resolves the joints rotations when you move the effector.
Constraining the chain to prevent flipping Using an up-vector constraint for chains, you can constrain the orientation of a chain to prevent it from flipping when it crosses certain zones. The up-vector constraint forces the Y axis of a chain to point to a constraining object so that the solver knows exactly how to resolve the chains rotations. You add up-vector constraints to the first bone of a chain because that is the bone that determines the resolution plane.

Preferred angle Chain is drawn with a slight bend to determine its direction of movement when using IK. This determines the preferred angle of rotation for each bones joint.

Resolution plane

The resolution plane of this skeletons leg is shown with a gray triangle, connecting the root, the effector, and the knee joint. This plane is defined by the first joints XY plane, and any joint rotations stay aligned with this plane. When the first joint is rotated, the resolution plane rotates accordingly, and all joint rotations remain on the resulting resolution plane.

First point (joint 1 at chain root)

Second point (effector)

Resolution plane (gray triangle)

Third point (a null constrained by an up-vector constraint)

192 Softimage

Animating Characters with FK and IK

Blending between FK and IK Animation


When youre animating a skeleton, you may need to use both FK and IK animation on the same chain. For example, you want to use IK to have the hand grab at something, but to get a more convincing swing from the shoulder, you need to use FK. In Softimage, its easy to blend between FK and IK using the Blend FK/ IK slider in the Kinematics Chain property editor. This slider controls the influence that IK and FK both have on a chain, smoothly blending the results of bone rotation and effector translation. By blending, you can animate with rotations to get a good whip effect (FK), and then blend in specific grabbing/punching/kicking (goal-oriented IK) movements, or mix goal-oriented movements (IK) against motion capture data (FK).
1 Animate the chain in FK (key the bones rotation parameters), as well as in IK (key the effectors position). Here, the blue ghost above the arm shows the chain at full FK; the red ghost below the arm shows the chain at full IK. 2 Drag the Blend FK/IK slider to set the value you want between FK (0) and IK (1). The chain interpolates smoothly between its IK and FK positions. To help you see how the chain is blending, you can use ghosting. Ghosts are shown for the full FK and IK positions of the chains.

Solving the Dreaded Gimbal Lock


When youre setting up a character, you should consider how the bones will be rotating for each body part so that you can choose the proper rotation order for them. While the default rotation order of XYZ works for some body parts, there are certain body parts or movements for which this order can cause gimbal lock. Gimbal lock is a state that Euler angles go through when two rotation axes overlap. The angle values can change drastically when rotations are interpolated through it. When you change the rotation order, you can solve the gimbal lock. You can change the order in which an object is rotated about its parents axes by selecting a Rotation > Order in the bones Local Transform > SRT property page (select the bone and press Ctrl+K). You can also convert the rotation angles from Euler to quaternion using the Animation > Convert to Quaternion command in the Animation panel. Quaternion rotation angles produce a smooth interpolation which helps to prevent gimbal lock.

3 Set keys for the Blend FK/IK values at the appropriate frames where you want the blend to start and finish.

Basics 193

Section 10 Character Animation

Walkin the Walk Cycle


A walk cycle is probably the most common task youre going to do as an animator. You can do this with traditional tools, such as keying and the fcurve editor, but Softimage provides other excellent tools to help you animate your character. These include all the tools shown in this section, as well as the animation mixer.
1 Key the position and rotation of the characters arms, legs, and hips on one side of the body. Key the 5 basic poses at frames 1, 5, 9, etc., or frames 1, 6, 11, depending on your characters stride. The start and end poses must match so that the motion can be properly cycled in the animation mixer.

You can store the walk cycle in an action source, then bring that source into the mixer to cycle it. Once in the mixer, you can reverse it, stretch it out or compress it to change the timing, cycle it, move it around in time, mix it with other actions, and moreall in a nondestructive way.

You can use rotoscoped images of models to act as a template from which you can base the characters poses to be keyed. Youll need to tweak your characters walk afterward to make it look natural and appropriate for the character. Tip: It helps to make the arms and legs of the left and right side in different colors. Here, the right leg and arm are in black.

Repeat the same poses for the other side of the body on frames 21, 25, 29, and 3 (the first pose is the same as the last pose of the side you just did).

Save the finished walk cycle in an action source using the Action > Store > Fcurves command.

Open the animation mixer, and load the action source into it by rightclicking on a green track and choosing Insert Source. This create an action clip for the walk cycle on that track.

If the feet slide when theyre on the ground, you can fix it by making the fcurve interpolation flat between the pose keys. Open the animation (fcurve) editor, select the keys on the fcurves, and choose Keys > Zero Slope Orientation. The fcurve editor is the tool to help you fine-tune the walks fcurves in many ways.

6 Cycle the walk clip in the mixer by dragging one of the clips lower corners. You can also quicken or slow down the walk pace, blend it with another action, or create a transition to yet another action, such as to a run cycle. Use the cid clip effect variable to add a progressive forward offset to a stationary cycle.

194 Softimage

Motion Capture

Motion Capture
Motion captured animation (usually known as mocap) offers a way to animate a character based on motion that is electronically gathered from a human or animal. This is useful for animating actions that are particularly difficult to do well with keyframing or other methods of animation creation. In Softimage, you can import mocap data and apply it onto rigs, as well as retarget animation from BVH or C3D mocap files to rigs.

Adding Offsets to Mocap Data


Its inevitable: the director took a look at the mocap animation for this character. It looks good but now he has some comments and wants to make a few changes. This can be problematic when the change affects a key pose or move because many other moves and poses are usually linked to it.
Club-bot with a mocap run action clip in the animation mixer.

Importing Acclaim and Biovision Mocap Data


You can import motion capture information into Softimage using the File > Import > Acclaim and Biovision commands. Once the files are imported, you can constrain the skeletons to a rig and plot the mocap data into fcurves so that you can edit the animation. Acclaim Skeleton files (ASF) contain information about the hierarchy and base pose of the skeleton. The animation for this skeleton is saved in an accompanying Acclaim Motion Capture (AMC) file. Biovision (BVH) files contain information about the hierarchy of the skeleton.
Mocap files with hierarchy imported as bone chains.

The left leg and arm are rotated a bit and then keyed as an offset to the clip.

Luckily, in Softimage you can easily add non-destructive offsets to mocap data in any of these ways: Creating animation layers: Create a layer of keys as an offset to mocap animation. Layers let you keyframe as you would normally, but those keys are kept in a separate layer of animation so that they dont affect the base mocap animation. After youve added one or more layers of keys and youre happy with the results, you can collapse the layers to bake them into the base layer of animation. Mixing fcurves with an action clip: Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, you can blend fcurves directly with an action clip over the same frames. This allows you to blend mixer animation with scene level animation.

Mocap files with hierarchy imported as nulls.

Basics 195

Section 10 Character Animation

Creating action clip effects in the mixer. Clip effects let you adjust the animation in an action clip without affecting the original animation in the action source. Clip effects add values on top of a clip, such as noise or offsets.

Retargeting Animation with MOTOR


Retargeting allows you to transfer any type of animation between characters, regardless of their size or proportions. Retargeting involves first tagging (identifying) the elements of a rig, then transferring animation from another rig or a mocap data file to the target rig. The animation is retargeted to the new rig as its transferred. The retargeted animation is live on the rig, controlled by the retargeting operators that live on the tagged rig elements. Because of this, you can adjust the animation on the rig at any time so that the motion is exactly as you like. If you want to commit the retargeted animation to fcurves, you can plot it on the rig. While you can retarget any type of animation between characters, it is especially useful for reusing motion capture data to animate many different characters with the same movements, such as you would for a game. For example, you can reuse a basic run mocap file for many characters and then adjust the animation for each one as you like by adding offsets in different animation layers. Using the retargeting and layering tools in Softimage, you can quickly test out many variations of animation on the characters. Using the commands in the Tools > MOTOR menu on the Animate toolbar, you can perform all of these tasks: Tag rig elements so that animation can be retargeted onto them. Retarget any type of animation from one rig to another. Retarget animation from BVH or C3D mocap files to a rig. Adjust the retargeted animation on the rig, such as by setting position and rotation offsets for the whole rig or just certain elements. Save any type of retargeted animation in a normalized motion format (.motor file) so that it can be loaded and retargeted on any tagged rig. This makes it easy to build up libraries of animation that can be used across all your rigs.

Working with High-density Fcurves


When you import motion capture data, the fcurves often have many keys, usually one per frame. A high-density fcurve is difficult to edit because if you change even a few keys, you then have to adjust many other keys to retain the overall shape of the curve. Because editing these fcurves is not always easy, there are tools in the fcurve editor that can help you work with them: the HLE (high-level editing) tool and the curve processing tools (for smoothing, resampling, and fitting curves).

The HLE tool in the fcurve editor lets you shape an fcurve in an overall fashion, like lattices shaping an objects geometry. The HLE tool creates a sculpting curve that has few keys (shown here in green), but each one refers to a group of points on the dense fcurve.

196 Softimage

Motion Capture

Plot the retargeted animation on a rig into fcurves so that you can keep and edit the animation.

Before you start tagging the character elements or retargeting animation, make sure that the skeleton or rig is in a model. Retargeting can work only within model structures.
Retargeting animation between rigs When you retarget animation between rigs, the retargeting operator figures out which rig elements match based on their tags. Then it maps and generates the animation that is transferred to the target rig. The animation between the two rigs is a live link that allows for interaction. Select the source rig, then press Ctrl and select the target rig. Then choose the Tools > MOTOR > Rig to Rig command to retarget the animation from the source to the target rig. If you want to save the animation on the target rig, you must plot (bake) it into fcurves.

Tagging a rigs elements Tagging tells Softimage which part is which on your character, such as its hips, chest, legs, root, and so on. You tag the rig controls or skeleton parts that you use to animate the character. These tags are used to create a map (template) for that character. Select a rig and choose the Tools > MOTOR > Tag Rig command to tag its elements. Once you have tagged a rig, you can use it for retargeting with another rig or with mocap data.

Retargeting mocap data from a file to a rig You can retarget mocap data from either C3D or BVH files to a tagged rig. Choose the Tools > MOTOR > Mocap to Rig command to load either a C3D or Biovision file and apply it to a rig in Softimage.

You can then save the mocap animation on the rig in a .motor file so that you can apply it to any tagged rig of the same structure.

Biovision rig C3D rig

Basics 197

Section 10 Character Animation

Making Faces with Face Robot


Face Robot is a suite of tools that work together to help you easily rig and animate life-like human and humanoid faces, no matter what that face may look like! To start out with Face Robot, you load in a head model from the Face Robot layouts Stage 1 panel. You then follow the instructions on the first four stage panels in Face Robot to create a solved head. A solved head is one that has been processed by Face Robot and contains all the necessary objects and operators, as is shown on the right. Once the head is solved, you can move freely between Stage 5 and Stage 6 to animate and sculpt the face. Stage 1: Assemble: Load in a single head model and possibly face parts (such as eyeballs, teeth, and a tongue). These need to be polygon meshes. Stage 2: Pick: Identify each face part for Face Robot (head, eyeballs, teeth, and an optional tongue). A visual guide on the panel helps you through this process. Face Robot lets you quickly set up a facial rig by taking you through several required stages. Once the facial rig is created, you can animate the facial controls and sculpt and tune the soft facial tissue using Face Robot-specific tools, as well as some standard Softimage ones. When youre done, you can export the Face Robot head in different ways, including as a games rig or a shape animation rig. Face Robot has its own interface layout and operators that are separate from the rest of the Softimage interface layout. As a result, you need to enable a special mode that opens the Face Robot layout and loads its operators: choose Face Robot > Enable Face Robot from the main menu at the top of the Softimage window. Stage 3: Landmarks: Pick the landmark points on the face that tell Face Robot the heads size and proportions. A visual guide on the panel helps you through this process. Stage 4: Fit: Make any adjustments to the facial controls that are generated from the landmarks. Stage 5: Act: Set keyframes on the faces animation controls or apply facial motion capture files (C3D) to them. You can retarget the mocap, and also blend it with keyframes. Stage 6: Tune: Use different tools to adjust the deformation of the faces soft tissue to achieve the range of facial expressions that your character needs to make.

198 Softimage

Making Faces with Face Robot

A B

E C

A B C

Main menu bar contains all standard menu commands. This is the same as in the main Softimage interface. The Face Robot panel gives you access to all six Face Robot stages for completing your facial animation. Click this button to hide/display the Face Robot panel and enlarge the viewport.

D E

Click this button to display/hide the Softimage main command panel (MCP). Click this button to display/hide the standard Softimage tool bars.

Basics 199

Section 10 Character Animation

200 Softimage

Section 11

Shape Animation
Shape animation is the process of deforming an object over time. You take snapshots called shape keys of the object in different poses, then you blend these poses over time to animate them. Softimage offers a number of tools with which you can create shape animation, allowing you to choose the method that works for you.

What youll find in this section ...


Different Tools for Animating Shapes Shape Animation on Clusters Using Construction Modes for Shape Animation Creating and Animating Shapes in the Shape Manager Selecting Target Shapes to Create Shape Keys Storing and Applying Shape Keys Using the Animation Mixer for Shape Animation Mixing the Weights of Shape Keys

Basics 201

Section 11 Shape Animation

Things are Shaping Up


With shape animation, you can change the shape of an object over time. To do this, you move the objects clusters of points in different ways, then store shape keys for each pose of these clusters that you want. You can create shape keys from any kind of deformation to produce shape animation. For example, you can store shape keys for clusters on an object by moving points or by deforming by spline, such as for facial animation and lip-syncing. Or you can create a shape key for an objects overall deformation using envelopes, lattices, or any of the standard deform operators (Bend, Bulge, Twist, etc.). In Softimage, all shape animation is done on clusters. This means that you can have multiple clusters animated at the same time on the same object, such as a cluster for each eyebrow, one for the upper lip, one for the lower lip, etc. Or you can treat a complete object as one cluster, such as a head, and store shape keys for it. Different Tools for Animating Shapes Shape animation in Softimage uses the animation mixer under the hood to do its work. You can also use the animation mixer to do your shape work, but there are other methods too. You can: Use the shape manager to easily create and animate shape keys. This is probably the fastest and easiest way to work. Create shape keys for a base object from a group of target shapes (sometimes called morphing or blend shapes). Store shape keys, then apply them at different frames. You can use the animation mixer with any of these methods. It is a powerful tool that gives you a high degree of flexibility in reworking your shape animation in a nonlinear way. Because shape animation is essentially pose-based, you can easily reorder the poses in time, reuse the same pose several times, and mix the poses together as you like, in the animation mixer. You can even add audio clips to the mixer to synchronize your shape animation to sound, such as for lip syncing. Shape Animation and Models Before you start to animate shapes, its a good idea to create a model containing the object that is to be shape-animated. This puts the object under its own Model node and creates a Mixer node for that model that contains all its shape keys. This way, the shape keys are stored with the model rather than just being in the entire scene. You can then reuse the model with its shape animation in another scene, import and export the model with all its shapes and mixer, or duplicate the model with its shape animation.

Shape animation is done for this face by simply moving the points in different clusters on the head object, then storing a shape key for each clusters pose. You could also treat the whole head object as a cluster and deform its points in the same way, then store shape keys for each pose for the object.

You can use surface or polygon objects to create shape animation, or even curves, particles, and latticesany geometry that has a static number of points.

202 Softimage

Things are Shaping Up

Shape Animation on Clusters


All shape animation is done on clusters. You can have multiple clusters on the same object, or you can treat an object as one cluster. You can even store shape keys for tagged points that are not saved as a cluster.

Shape Sources and Clips


A shape source is the shape that you have stored and is usually referred to as a shape key. By storing several shapes for an object, you can build up a library of sources. Shape sources are stored in the models Mixer > Sources > Shape folder. A shape clip is an instance of that source on a track in the animation mixer. Even if you dont use the mixer for shape animation, a clip is always created when you create a shape key.

Whole object. A cluster including all points on head is automatically created when you store a shape key.

Object with cluster

Object with tagged points. A cluster of these points is automatically created when you store a shape key.

Shape Reference Modes


Shape reference modes control how the shape behaves when the base shape is deformed in Modeling mode. You should select a reference mode before you store shape keys on a cluster.

Click the Clusters button on the Select panel to see a list of the objects clusters.

Shape key on a single cluster

Always store shape keys using the same cluster of points. When you deform an object, but store a shape key only for a cluster of points on that object, the deformed points that dont belong to that cluster snap back to their original position when you change frames. To make it easier to use the same cluster, give the cluster a descriptive name as soon as you create it.

Local Relative Mode Shape deforms with object.

Object Relative Mode: Shape deforms with object but keeps original orientation.

Absolute Mode Shape stays locked in place as object deforms.

Basics 203

Section 11 Shape Animation

Using Construction Modes for Shape Animation


When youre creating shapes, you can use any number of deformation operators, including envelopes, as the tools for sculpting the shapes. Because you can use these deformation operators for tasks other than shape animation, you need to let Softimage know how you want to use them. For example, when you apply a deformation, you could be building the objects basic geometry (modeling), or creating a shape key for use with shape animation (shape modeling), or creating an animated deformation effect (animation). To tell Softimage how youre using the deformation, you need to select the correct construction mode: Modeling, Shape Modeling, Animation, or Secondary Shape. The mode puts the deformation operator in one of four regions in the objects construction history that corresponds to that mode. These regions keep the construction history clean and well ordered by allowing you to classify operators according to how you want to use them. Here is a quick overview of how you can use the four different construction modes for doing shape animation:

In Modeling mode, create and deform the object to be shapeanimated. This is the base shape for the object, which is a result of all the operators in the Modeling region of the objects construction history. When you create shape keys, they are stored as the difference of point positions from this base shapes geometry.

Select one of the four construction modes from the list in the menu bar at the top of the Softimage window.

If the object is to be an envelope for a skeleton, switch to Animation mode and apply it as an envelope. In this case, the jaw bone is rotated to help deform the envelope for lip syncing.

3 Switch to Shape Modeling mode to create shape keys. These shape keys are set in reference to the objects base shape (each cluster is an offset from the base).

Markers in the explorer divide up the objects construction history into regions that correspond to the four construction modes. Deformation operators are kept in their appropriate region.

4 To fix any geometry problems due to the envelopes animation, switch to Secondary Shape mode and create shape keys in reference to the animated envelopes geometry. For example, you can fix up the shape in the corner of the mouth in relation to the jaw opening and deforming the envelope.

204 Softimage

Creating and Animating Shapes in the Shape Manager

Creating and Animating Shapes in the Shape Manager


The shape manager provides you with an environment for creating, editing, and animating shapes. To help you work efficiently, the shape manager has a viewer that immediately displays the results of the changes as you make them to the object.
1 Open the shape manager in a viewport or in a floating window (choose View > Animation > Shape Manager). 2 Duplicate the shape and rename it.

When you create a new shape in the shape manager, a shape key is added to the objects Mixer > Sources > Shape list and shape clips are created for the object in the animation mixer.
4 Repeat these two steps to create a library of different shapes for this object.

3 Deform the object or cluster into a new shape in the shape viewer.

With an object selected, select Shape or an existing shape in the shape list.

Go to the next frame at which you want to set a key, change the values of the weight sliders, and set another key. Continue on in this manner.

5 On the Animate tab, set the values of the shape weight sliders until you get the shape you want. Notice the object update in the shape viewer as you change the slider values. Set a key at this frame.

Basics 205

Section 11 Shape Animation

Selecting Target Shapes to Create Shape Keys


Selecting shape keys (also known as morphing or blend shapes) lets you deform an object using a series of objects that are deformed in different shapes (called target shapes). These objects must have the same type of geometry and the same topology (number and arrangement of points) as the base object that theyre shape-animating. The easiest way to do this is to duplicate the base object that you want to shape-animate, and then deform each of the copies in a different way that will correspond to a target shape.
1 Create the base object in a neutral pose. This is the object to be deformed with the target shapes. 2 Duplicate the object and deform into different shapes (target shape) such as for phonemes. Move them out of the way of the camera.

Selecting target shapes sets up a relation between the base object and the shape keys, allowing to you fine-tune the target shapes and have those adjustments appear on the base object. For example, if your client thinks that the nose is too long on one of the target shapes, all you have to do is change the nose for it and the nose on the base object is updated. You can also choose to break the relationship between the base object and its target shapes to keep performance optimal.

Select Shape Modeling Mode from the Construction Mode list.

Select the base object and choose Deform > Shape > Select Shape Key. Then pick each of the target shapes in the order that you want to create shape keys for the object.

6 5 Label the first shape key created in the Name text box, such as face. The other shape keys use this name plus a number, such as face1, face2, etc. For each target shape you pick, a shape key is added to the models Mixer > Sources > Shape folder.

To create the animation, set the values for each shape keys weight slider in the animation mixer or in the Shape Weights custom parameter set. In either the mixer or the parameter set, click the weight sliders animation icon to key this value at this frame.

206 Softimage

Storing and Applying Shape Keys

Storing and Applying Shape Keys


When you store and apply shape keys, you create a shape source in the models Mixer > Sources > Shape folder, as well as a shape clip in the animation mixer. If you want to use the mixer for doing your shape animation, this is an easy way to work because the clips are set up for you. In the mixer, you can then change the length of the clips, create transitions between clips, change the weight of the clips, and so on. If you dont want to use the mixer, storing and applying shape keys is still an easy way to work because everything is set up under the hood in the mixer for you. You can then animate the shape weights in the Shape Weights custom parameter set that is automatically created for you. This custom parameter set contains a proxy of each shape keys weight slider. You can also simply store shape keys and then apply them to the object or cluster later. When you store shape keys, a shape key is created for the current shape and added to the models list of shape sources, but it does not create a shape clip in the mixer. Storing shape keys is a good way to build up a library of shapes: when youre ready to apply the shape keys, you can load them into the animation mixer to create shape clips. Or if you dont want to use the mixer, you can simply apply the shape keys to the object or cluster at different frames.
4 Deform the cluster or object into a shape that you want to store, then choose Deform > Shape > Store and Apply Shape Key.

Select a cluster of points or the whole object (creates one cluster for the object).

Select Shape Modeling Mode from the Construction Mode list.

Go to the frame at which you want to set a shape key.

When you store and apply, the shape key is applied to the cluster or object at the current frame. A shape clip for this shape key is also created in the animation mixer.

6 5 Go to the next frame at which you want to set a shape key, deform the cluster or object, and store and apply another shape key.

You can edit the shape animation in the mixer. You can resize and layer the clips, and add transitions between the clips for a smooth change between shapes. You can also animate the weight of each shape clip against each other in the mixer or in the Shape Weights custom parameter set.

Basics 207

Section 11 Shape Animation

Using the Animation Mixer for Shape Animation


Once you have created shape keys, you can use the animation mixer to sequence and mix them as shape clips. This lets you easily move shape clips around in a nonlinear way and change the weighting between two or more clips where they overlap in time. The first step to using shape keys in the mixer is to add them as shape clips to a shape track. If you stored and applied shape keys or selected shape keys, this is automatically done for you. Shape clips do not actually contain animationthey are simply static poses. This is why you need to create transitions between them and/or weight their shapes against each other to animate. Transitions create smooth and more complex animation than is possible with shape keys simply set at different frames with no transitions or weighting. Once you have added shape clips to the animation mixer, you can use any of the mixers features to move, reorder, copy, scale, trim, and blend them.
To add a shape key as a clip to a track in the mixer, right-click on a blue shape track and choose Insert Source, then pick the source (shape key) youve stored. You can also drag a shape key from the models Mixer > Sources > Shapes folder in the explorer and drop it on a blue shape track.

Notice how the shape interpolates over time, from clip to clip.

You can make composite shapes by creating compound clips for different clusters on the same frames of different tracks. For example, one compound clip could drive the eyebrow cluster of a character while another clip drives the mouth cluster.

You can easily reorder the shape clips in time on the tracks, or duplicate a clip to repeat a shape several times over the animation. Because each shape clip refers to the source, you dont need to duplicate the source. Create sequence of shapes by creating clips one after another using transitions to help smooth the spaces between them.

208 Softimage

Mixing the Weights of Shape Keys

Mixing the Weights of Shape Keys


Shape clips dont contain any animationthey are simply static poses. As a result, one way to create animation with shapes is to animate the weight of each shape. Weighting is always done in relation to another shape key. This means that shape keys have to be overlapping in time with at least one other shape key to be weighted. The higher the weight value, the more strongly a clip contributes to the combined animation. For example, if you set the weights value to 1, the clips contribution to the animation is 100% of its weight. You can mix shape key weights in different ways, depending on how you created the shape keys in the first place and on how you like to work. You can mix shape key weights: Using the shape manager. Using the animation mixer. Using a custom parameter set, either the Shape Weights one or one you set up yourself. The advantage of having a custom control panel is that you can have all the sliders in one property editor that you can easily move around in the workspace. As well, you can key all the sliders values at once by clicking the property sets keyframe icon. Click the Shape Weights icon beneath the shape-animated object in the explorer to open the custom parameter set.
5

No matter which tool you use, the basic process is the same: go to the frame you want, set each shape weights value, then click the keyframe or animation icon to set a key. You can then edit the resulting weight fcurve in the animation editor as you would any other fcurve.

How to Mix and Key Action Weights in the Mixer


1 Put clips on different tracks and overlap them where you want to mix them. In most cases, this is for the whole duration of the scene. 2 Move to the frame at which you want to set a key. 3 Set a weight value for each clip at this frame. Red curves in the clip display its weight values. 4 Click each weights animation icon to set a key for this value at this frame.

After you are done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weights animation icon and choose Animation Editor.

Basics 209

Section 11 Shape Animation

Normalized or Additive Weighting


One of the most important things to understand about weighting is to know whether weights are normalized (averaged) or additive. You can control how the weights of clips are combined, depending on whether or not you select the Normalize option in the Mixer Properties. Youll know that shapes are normalized if they seem to average or smooth each other out, or if different clusters on the same object affect each other when they shouldnt (such as an eyebrow affecting the mouth shape). You may want to use the normalized mode if youre mixing together shapes for a whole object. In many cases, you will probably want the weight to be additive instead of normalized, such as if youre mixing different clusters on one face over the same frames. This adds the shapes together but doesnt blend them together.
Additive mix of Shapes 1 and 2. The shapes are literally added together to create a composite result. You can also exaggerate shapes by setting weight values higher than 1.

+
Shape 1 Shape 2

or

Normalized mix of Shapes 1 and 2. The shapes are averaged resulting in a combination of the shapes. The total weight value of the two shapes equals 1.

210 Softimage

Section 12

Actions and the Animation Mixer


Actions are packages of low-level animation, such as function curves, expressions, constraints, and linked parameters. By creating a package that represents the animation, you can work at a higher level of animation that is not restricted by time. The animation mixer is the tool that lets you work with actions, all in a nonlinear and non-destructive way.

What youll find in this section ...


What Is Nonlinear Animation? The Animation Mixer Storing Animation in Action Sources Working with Clips in the Animation Mixer Mixing the Weights of Action Clips Modifying and Offsetting Action Clips Sharing Animation between Models Adding Audio to the Mix

Basics 211

Section 12 Actions and the Animation Mixer

What Is Nonlinear Animation?


Nonlinear animation is a way of animating that does not restrict you to a fixed time frame. You store animation into a package called an action source, then load this package in the animation mixer. In the mixer, you can layer and mix the animation sequences at a higher level in a nonlinear and non-destructive way. You can reuse and fine-tune animation youve created with keyframes, expressions, constraints, and shape animation (shape keys stored in shape sources). You can even add audio clips to the mixer to help synchronize it with the animation. And at any time, you can go back and modify the animation data at the lower levels, without needing to begin again and redo all your work. When you bring an action source into the animation mixer, it becomes a clip. In the mixer, you can move an action clip around anywhere in time, squeeze or stretch its length as you like, apply one action after another in sequences, and combine two or more actions together to create a new animation. On the frames covered by the clip, the data stored in the source drives the objects animation. If youre modifying someone elses animation, you dont really have to deconstruct their workjust add a layer with your own animation. You can even modify the existing animation with a clip effect, acting as a separate and removable layer on top of the original animation.

Models and the Mixer


Models provide a way of organizing the objects in a scene, like a mini scene. You should always put your object structures within a model so that you have a Mixer node for it, because each model can have only one Mixer node. This node contains mixer data, such as action sources, mixer tracks, clips, transitions, and compounds. If the characters in the scene arent within models, you have only one Mixer node for the whole scene (in the Scene Root) which means that you cant easily copy animation from one model to another.
Club_bot model structure contains many elements, including a Mixer node that has its action sources.

The animation mixer is well-suited for editing existing material and bringing together all the pieces of an animation. In it, you can assemble all the bits and pieces youve imported from different scenes and models to help you build them into a final animation.

There are a number of ways in which you can share animation between models, whether they are in the same scene or different scenes. You can copy action sources, clips, compound clips, and even a models whole Mixer node between models. And when you duplicate a model, all sources and clips and mixer information are also duplicated.

212 Softimage

The Animation Mixer

The Animation Mixer


The animation mixer gives you high-level control over animation because you can layer and mix sequences in a nonlinear and nondestructive way, making it the ideal tool to use for complex animation. The animation mixer looks like a digital video editor, but instead of editing video sequences, you create animation sequences, transitions, and mixes. It helps you reuse and fine-tune animation youve created with keyframes, expressions, and constraints. You can use the animation mixer with animation data (action sources), shape animation data (shape keys as shape sources), and add audio files for synchronization. Once you have a library of action sources created, you bring them into the mixer as action clips.
You can display the animation mixer in any viewport, or display it in a floating window by pressing Alt+0 (zero). Select an object, then click the Update icon in the mixer to see its tracks and clips. Icons indicate the type of track and let you select the track.

Each action clip is an instance of its action source. The original animation data stays untouched, making it easy to experiment with the animation without fear of destroying anything. You can always go back and change the original data and all your changes will automatically be applied; or you can add animation on top of the original animation source, as you may want to do with motion capture data. On the frames covered by the clip, the data stored in the source drives the animation for the object. The mixer overrides any other animation that is on the object at that frame, unless you set a special option that mixes an action clip with fcurves on the object over the same frames.
Multiple tracks let you overlap clips in time and mix their weights. The playback cursor shows the current frame on the timeline.

Tracks are the background on which you add and sequence clips in the mixer. You can sequence one clip after another on the same track or different tracks. To overlap clips in time for mixing, they must be on separate tracks. Animation (action) tracks are green. Shape tracks are blue. Audio tracks are sand.

You can ripple, mute, solo, and ghost all clips on a track.

Clips appear as colored bars according to their type. Create sequences of clips on the same track or on different tracks.

Mix overlapping clips by setting and animating their weight values in the weight panel.

To add a track, press Shift+A, Shift+S, or Shift+U to add animation (action), shape, or audio tracks, respectively. You can also choose a type from the Track menu.

Basics 213

Section 12 Actions and the Animation Mixer

Storing Animation in Action Sources


Action sources are packages of animation that you can use in the animation mixer. This is where the animation lives. You can package function curves, expressions, constraints, and linked parameters into a source, as well as rigid body or ICE simulations. You can create an entire library of actions, like walk cycles or jumps, and then share them among any number of models. How to Create Action Sources and Clips
1 Animate an object or model. Each animation sequence here will be stored in its own source.

When you create an action source, it is saved in the Sources > model folder for the scene, which you can find in the explorer. This lets you see all sources for all models in the scene. However, for convenience, a copy of the source is available in the models Mixer > Sources > Animation folder. The name of this source is in italics to indicate that its a copy of the original source.

2 Select the animated object and choose an appropriate command from the Actions > Store menu. This stores the animation in an action source.

3 Right-click on a track and choose Insert Source. An action clip is created. You can also drag a source from the models Sources folder in the explorer and drop it on a track.

Arm wave

Step and look

Ground jimmy

Once the clip is in the mixer, you can manipulate it in many ways. Here are some ideas ...

You can composite actions by adding clips for different parameters on the same frames of different tracks. Here, the top clip drives the legs of the character while the bottom clip drives the arms.

You can use the mixer as a simple sequencing tool that lets you position and scale multiple clips on a single track. You may find the technique of pose-topose animation using the mixer easy to do by saving static poses of a character, loading the actions onto the tracks in sequence, and then creating transitions between the poses.

214 Softimage

Storing Animation in Action Sources

Changing Whats in an Action Source


After you have created an action source, you can modify the original animation data stored in its source, remove items from it, or even add keys to fcurves in the source. When you modify the source, you change the animation for all action clips that were created from that source and refer to it. Because editing an action source is destructive (youre changing the original animation data), you should always make a backup copy of it before editing. This is also useful to do if you dont want all action clips to share the same source (duplicate the source before creating clips from it). You can access the animation data in an action source by right-clicking an action clip and choosing Source, or right-click and choose Animation Editor to access the sources fcurves.
You can also deactivate or remove certain parameters in the source.

If you want to modify an action clip without affecting the source, you must use clip effects.

Restoring the Original Animation to an Object


You can return to the original animation stored in an action source at any time by applying that action source to the object. This is useful if you removed the animation when you created an action source, or you can also apply the animation in the source to another model. To apply the action source to a model, you simply select the source in the models Mixer > Sources > Animation folder in the explorer and choose the Actions > Apply > Action command.

Click this button to access the sources fcurves or constraints (depending on the type of animation in the source)

Select the action source in the models Mixer > Sources > Animation node, then choose the Actions > Apply > Action command to restore it to that object.

Creating Action Sources from Clips Because applying works only on sources, you cant use it on clips. But what do you do when you want to combine some clips? You can select the clips and choose Clip > Freeze to New Source or Clip > Merge to New Source in the mixer to create a new source. You can then apply this new source to the model with the Actions > Apply > Action command.

If expressions are stored in the source, enter information in a Value cell to edit them.

To add keys to a source, use the Action Key button in the mixers command bar.

Basics 215

Section 12 Actions and the Animation Mixer

Working with Clips in the Animation Mixer


Clips are instances of action sources that you have created. While sources contain data such as function curves, clips dont actually contain any animation: they simply reference the animation in the source and wrap it with timing information. You can create multiple clips from the same source and modify the clips independently of each other without affecting the animation data in the source.
To add a clip to a track in the mixer, right-click on a track and choose Insert Source, then pick the source youve stored. You can also drag a source from the models Sources folder in the explorer and drop it on a track in the mixer.

Clips are represented by boxes on tracks in the mixer that you can move, scale, copy, trim, cycle, bounce, etc. Clips define the range of frames over which the animation items in the source are active and play back. You can also create compound clips which are a way of packaging multiple clips together so that you can work with larger amounts of animation data more easily.
Select and move clips Select only Select and drag a clip to move it somewhere else on the same track or a different track of the same type (action, shape, or audio).

Press Ctrl while dragging the clip to copy it. You can copy clips between different models mixers this way, one clip at a time. Drag on either of the clips upper corners to hold the clips first or last frames for any number of frames. Drag on either of the clips lower corners to cycle it. Press Ctrl+drag on either of the clips lower corners to bounce it.

Click and drag in the middle of either end of a clip to scale it.

Transitions interpolate from one clip to the next, making the animation flow smoothly between clips rather than jerk suddenly at the start of the next clip. If youre working in a pose-to-pose method of animation using pose-based action clips, you need to use transitions to prevent a blocky-looking animation.

Add markers to clips and add information to a clip, such as to synchronize action or shape clips with audio clips.

Create thumbnails for each clip to help quickly identify whats in them.

216 Softimage

Mixing the Weights of Action Clips

Mixing the Weights of Action Clips


One of the most powerful features of the animation mixer is its ability to mix the weight of clips against each other. When two or more clips overlap in time and drive the same objects, you can mix them by setting their weights. By adjusting the weight of a clip, you can control how much of an influence it has compared to the other clips in the resulting animation. The higher the mix weight, the more strongly a clip contributes to the animation. Mixing compound clips is an easy way to blend animation at an even higher level. You can set keys on each clips weight to animate the changes. When the weight is animated, a weight fcurve is created that you can adjust like any other fcurve.
2 Move to the frame at which you want to set a key. 3 Set a weight value for each clip at this frame. Red curves on the clip display its weight values.

How to Mix and Key Action Clip Weights


1 Put clips on different tracks and overlap them where you want to mix them. This can also be for the duration of the scene.

For the club-bot here, an arm wave action is being mixed with a dejected turn action.

4 Click each weights animation icon to set a key for this value at this frame.

You can control how the weights of clips are combined using the Normalize option in the Mixer Properties: When Normalize is on, the weight values of the separate clips are averaged out. This is useful if youre blending similar actions, such as two leg actions of a character. When Normalize is off, mixes are additive meaning that the weight values of the separate clips are added on top of each other. This is useful if youre weighting dissimilar actions against each other, such as weighting arm and leg actions of a character.

You can also create a custom parameter set, then drag and drop the animation icons from each action clip weight in the mixer into the parameter set to make proxies of those weight sliders.

5 After youre done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weights animation icon and choose Animation Editor.

Basics 217

Section 12 Actions and the Animation Mixer

Mixing Fcurves with Action Clips


Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, by selecting the Mix Current Animation option in the Mixer Properties editor, you can blend fcurves on the object directly with an action clip over the same frames. For example, you can paste a clip in the mixer that contains the final animation for an object, then you can blend it with other fcurve animation you have added to that object, such as a slight offset or a minor adjustment to a mocap clip. Being able to mix clips directly with fcurves means that you can easily create animation using the mixer, as well as using it for blending and tweaking final animations. You can keep manipulating and setting keys for the animated object and not have to make its animation into a clip to blend it with another clip.
Club-bot with a run action clip active in the animation mixer. Open the Mixer Properties editor and select Mix Current Animation. Then adjust the leg and arm a bit (as below right) and key it. The Mix Weight value determines how much influence the fcurve animation has over the animation in the clip. Key this parameter to blend the fcurves in and out of the action clips.

Modifying and Offsetting Action Clips


If you want to modify an action clip that contains animation data from fcurves, you can create a clip effect. A clip effect is a package of any number of variables and functions that you use to modify the data in the action source. Each clip effect is an independent package, associated with its action clip, and sits on top of the clips original action source animation without touching it. Because the effect is an independent unit, you can easily activate or deactivate it, allowing you to toggle between the clips original animation and the animation modifications in the clip effect. This makes it easy to test out changes to your animation. You may need to edit a clips animation for a number of reasons: Add a progressive offset (using the cid variable) to a stationary walk cycle so that a character moves forward with each cycle. Animation coming from a library of stored actions often needs to be modified to fit a particular goal or environment. For example, you have a walk cycle, but the character must now step over an obstacle, so you have to move the leg over the obstacle. Animation that was originally created or captured for a given character must be applied to a different character that has different proportions. Animation with numerous keys, such as motion capture animation, must be adjusted, but you dont want to touch the original animation because it can be difficult to edit.
Moving a key point in a mocap fcurve results in a peak in the curve.

218 Softimage

Modifying and Offsetting Action Clips

How to Add a Clip Effect to a Clip


1 2 Right-click an action clip and choose Clip Properties. In the Instanced Action property editor, click the Clip Item Information tab. 3 Enter formulas for any items expression to create a clip effect.

Offsetting Clip Values


Offsetting actions is a task that you will probably perform frequently. This lets you move an object in local space so that its animation occurs in a different location from where it was originally defined.

Original position on left with foot in ball.

Leg effector is translated to a position where Club-bot is just about to kick the ball and an offset key is set.

The clip effect is created and displayed as a yellow bar above the clip.

To offset a clips values, you can: Click the Offset Map button in the mixers command bar. Choose the Set Offset Map - Changed Parameters command which compares the current value of all parameters driven by the clip and sets an offset if there is a difference. Choose the Effect > Set Offset Keys - Marked Parameters, which is the same as creating a clip effect, except that the clip effects offset expression is created for you. Choose the Set Pose Offset command to offset all transformations (scaling, rotation, and translation). All parameters to be offset are calculated together as a whole instead of as independent entities. The pose offset is especially useful for offsetting an objects rotation, as well as position. As with clip effects, pose offsets sit on top of a clips animation.

The cid variable in a clip effect is the cycle ID number. The cycle ID can be used to progressively offset a parameter in an action, such as for having a walk cycle move forward. The Cycle ID of the current frame is in the Time Control property editor (select the clip and press Ctrl+T). For example, with a clip effect expression like (cid * 10) + this the parameter value of the action is used for the duration of the original clip, then 10 is added for the first cycle, 20 is added for the second cycle, and so on.

Basics 219

Section 12 Actions and the Animation Mixer

Changing Time Relationships (Timewarps)


A timewarp basically defines the speed of the animation in a clip. Timewarps change the relationship between the local time of the clip and the time of its parent (either a compound clip or the entire scene) while taking into account other things like scales, cycles etc. You can make a clip speed up, slow down, and reverse itself in a nonlinear way (such as making a character run or walk backwards).

Sharing Animation between Models


One of the great things about actions is that you can use them again and again. You can create an action for one model and then use it again to animate another model in the same or another scene. You can even use the same action for different objects within the same model.

When you apply a timewarp to a compound clip, it creates an overall effect that encompasses all clips that are contained within the compound clip. If your clip is cycled or bounced, the timewarp can either be repeated on each cycle or bounce or encompass the duration of the whole extrapolated clip (the warp is not repeated with each cycle or bounce). This means, for example, that the overall animation on a cycled clip could increase in speed with each cycle. You can apply a timewarp by right-clicking a clip and choosing Time Properties, or by selecting a clip and pressing Ctrl+T. The Warp page is home to both the Do Warp and Clip Warp options. Use the Clip Warp option for applying a warp over an extrapolated clip to warp its overall animation.

These two models can share actions easily because they have similar hierarchies.

There are a number of ways in which you can share animation between models, whether they are in the same scene or a different scene: Copy action sources and compound sources between models in the same scene. Copy action clips and compound clips (which lets you combine a number of clips non-destructively) between models. Save an action source as a preset to copy action sources between models in different scenes. Create an external action source in a separate file in different formats (.xsi or .eani) to be used in other Softimage scenes. Import and export action sources in different file formats to be used in other scenes or other software packages. Import and export a models animation mixer as a preset (.xsimixer) to copy it to models in the same scene or another scene.

220 Softimage

Sharing Animation between Models

Copying Action Sources between Models


If you want to share an action source between models in the same scene, you can drag-and-drop one from the models Mixer > Sources > Animation folder in the explorer onto the mixer of another model. This makes a copy of that action source for the model. To copy compound sources between models, press Ctrl while you drag the compound action source from the models Mixer > Sources > Animation to a track in the other models mixer.
1 2 3 Open the animation mixer for the model to which you want to copy the action source (the target). Open an explorer and expand the Model node for the model from which you want to copy the action source (the original). Drag a source from the original models Mixer > Sources > Animation folder in the explorer and drop it on a track in the animation mixer of the target model.

You can also create connection-mapping templates to specify the proper connections between models before you copy action sources between models. These templates set up rules for mapping the object and parameter names stored in the action sources, such as when similar elements have with different naming schemes, such as L_ARM and LeftArm. To create a connection-mapping template, open the animation mixer and choose Effect > Create Empty Connection Template. A template is created for the current model and the Connection Map property editor opens. Once you have created an empty connection-mapping template, you can add and modify the rules as you like.

Jaiquas (on the left) elements are mapped to the corresponding ones on the Club-bot using a connectionmapping template. This is set up before action sources are shared between them.

Mapping Model Elements for Sharing


Sharing actions is possible because each model has its own namespace. This means that each object in a single models hierarchy must have a unique name, but objects in different models can have the same name. For example, if an action contains animation for Bobs left_arm, you can apply the action to Biff s model and it automatically connects to Biff s left_arm element. If the names for some of the objects and parameter names in the source dont match when youre copying sources between models, the Action Connection Resolution dialog box opens up in which you can resolve how the object or parameters are mapped.

Basics 221

Section 12 Actions and the Animation Mixer

Adding Audio to the Mix


You can add audio files to your scenes using the animation mixer. This allows you to adjust the timing of your animations by using the sound as a reference. For example, you can use an audio file as reference for lip syncing with a shape-animated face, or sync up some special effect noise with an animation. Or you could load an audio file to do some previsualization or storyboarding as youre experimenting with your animation project. How to Synchronize Audio with Animation
1 Load an audio source file on an audio track in the animation mixer to create an audio clip. To do this, right-click a tan-colored audio track and choose Load Source from File. 2

Sound files are added as audio clips on tracks in the animation mixer in the same way that you load action and shape sources as clips on tracks. Once you have an audio clip in the mixer, you can move it along the track, copy it, scale it, add markers to it, mute, and solo it. The following process shows how you can easily load and play sound files in the animation mixer.
In the Playback panel, click the All button so that RT (real-time playback) is active. Play the audio clip using the regular playback controls below the timeline, including scrubbing in the timeline and looping. Toggle the sound on and off by clicking the headphones icon.

On

Muted

Markers let you delimit different portions of the audio clip and give their wave patterns a corresponding meaningful name to help you synchronize more easily with the animation. Move the playback cursor to the portion of audio wave you want to mark. Create markers with the Create Marker tool in the mixer by pressing the M key, then dragging over a range of frames on the clip.

4 Adjust the animation of the character (such as facial animation) to match the marked audio waveforms. To help do this, you can view the audio waveform in the timeline or the fcurve editor to sync with the animation. Or you can create a flipbook to preview the animation with audio.

When youre satisfied with the results, do a final render and use an editing suite to add the sound to the final animation.

222 Softimage

Section 13

Simulation
Imagine a scene with an alien climbing out of her space ship: it has just crashed to the ground after breaking through fence posts like match sticks, smoke streaming out of the engine. As she stares at the burning rubble that was once her home in the skies, a single tear rolls down her cheek. She stumbles through a raging snow storm, the howling wind whipping through her hair and tearing at her cape. You can use all the simulation powers in Softimage to create your own compelling scenesall the tools are there for you.

What youll find in this section ...


Simulated Effects Making Things Move with Forces Hair and Fur Rigid Body Dynamics Soft Body Dynamics Cloth Dynamics

Basics 223

Section 13 Simulation

Simulated Effects
In Softimage, you can simulate almost any kind of natural, or unnatural, phenomena you can think of. To simulate these phenomena, you must first make objects into rigid bodies, soft bodies, or cloth, generate hair from an emitter, or create ICE particles. Only these types of objects can be influenced by forces and collisions to create simulations. Forces make simulated objects move and add realism. As well, you can create collisions using any type and number of obstacles for any type of simulated object.

About Particles in Softimage


The Particles, Fluid, and Explode operators that existed in Softimage for many versions (now referred to as legacy particles) have been removed from Softimage to make room for ICE particles. If youre used to working with the legacy particle system, youre going to recognize some of the same concepts and features in ICE particles, but thats where it ends. Everything for ICE particles works in a completely different system. ICE (Interactive Creative Environment) is a visual programming environment designed to easily create particle effects, and much more, by connecting data nodes together to create an ICE tree. You may find the learning curve for using the ICE tree a little steep at first, depending on what you want to do and what your technical level is, but soon youll find yourself connecting nodes together like a pro! For information, see ICE: The Interactive Creative Environment on page 241 and ICE Particles on page 271.

Hair Particles

Cloth

Particles

Rigid bodies

224 Softimage

Making Things Move with Forces

Making Things Move with Forces


Forces make simulated objects move according to different types of forces in nature. Each force in Softimage has a control object that you can select, translate, rotate, and scale like any other object in a scene. For example, you can animate the rotation of a fans control object to create the effect of a classic oscillating fan. Scaling a forces control object changes its strength as well as its size. Each simulated object can have multiple natural forces applied to it, and the same force can be applied to any number of simulated objects.
A

To use forces on ICE particles, see Forces and ICE Simulations on page 250.

Types of Forces
You can use any of these forces with hair, ICE particles, and rigid bodies, but not all forces work with soft body or cloth.
Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from simulated objects, their size must be taken into consideration. The Fan creates a local effect of wind blowing through a cylinder so that everything inside the cylinder is affected. An Eddy force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a cylinder. The Drag force opposes the movement of simulated objects, as if they were in a fluid. The Vortex simulates a spiralling, swirling movement. The Wind is a directional force with velocity and strength. It generates a force that speeds up simulated objects to a target velocity. The Turbulence force builds a wind field to let you imitate turbulence effects, such as the violent gusts of air that occur when an airplane lands. The Toric force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a torus. The Attractor force attracts or repels simulated objects much like a magnet attracts/repels iron filings.

Creating and Applying a Force


You can apply a force to hair, soft bodies, and cloth as described below.
3 2 D E F B C

1. Select the hair, cloth, or soft body object to which you want to apply the force. 2. Create a force from the Get > Force menu on the Simulate toolbar. 3. The force is automatically applied to the selected object. You could also select the hair object and apply an existing force to it by choosing Modify > Environment > Apply Force on the Hair toolbar, or select the cloth/soft body object and choose Cloth/Soft Body > Modify > Apply Force on the Simulate toolbar. For rigid bodies, the process is simpler: simply create a force from the Get > Force menu and it is applied to all rigid bodies in the current simulation environment.

H I

Basics 225

Section 13 Simulation

Types of Forces
B

C D

E I

226 Softimage

Hair and Fur

Hair and Fur


In Softimage you can make all sorts of hairy and furry thingsfrom Lady Godiva to wolves, bears, and grass. Hair in Softimage is a fully integrated hair generator that interacts with other elements in the scene. If you apply dynamics to the hair, the dynamics operator calculates the movement of the hair according to the velocity of the emitter object and any forces that are applied to the hair object. Hair comes with a set of styling tools that allow you to groom and style the hair, almost as easily as if it was on your head. You can control the styling hairs one at a time, or grab many and style in an overall way. To control the rendered look, you can use two special shaders designed for hair, or you can use any other Softimage shader with hair. And as with all things rendered in Softimage, you can use the render region to preview accurate results. Hair is represented by two types of hairs: guide hairs and render hairs. Guide hairs are segmented curves that are used for styling, while render hairs are the filler hairs that are generated from and interpolated between the guide hairs. Render hairs are the only hairs that are actually rendered. Overview of Growing and Grooming Hair
1 Emit hair from an object, cluster, or curves. 2 Style the guide hairs using tools on the Hair toolbar.

3 View and set up how the render hairs look.

Apply dynamics to have hair respond to movement, forces, and collision.

The render hairs are interpolated between the guide hairsthese are the hairs that are rendered. Guide hairs shown in white (selected). These are the hairs that you style.

5 Select obstacles for hair collisions. 6 Adjust the default hair shader or apply another one to the hair.

Basics 227

Section 13 Simulation

Basic Grooming 101


When youre styling, you always work with the guide hairs: these are the hairs that are similar to and behave like segmented IK chains. In fact, the you can grab a hair tip and position it the same way as you would the effector on an IK chain.
You can find all styling tools on the Hair toolbar (press Ctrl+2). Comb the hair in the desired direction, such as in the negative Y direction. Maybe use Puff to give some lift at the roots.

Because guide hairs are actual geometry, you can use all of the standard Deformation tools on them to come up with some groovy hairdos! Lattices, envelopes, deform by cluster center, randomize, and deform by volume usually produce the best results. However, if you animate the deformations, you cannot then use dynamics on the hair.
Use the Brush tool to sculpt hairs with a natural falloff, like proportional modeling. Translate and rotate specific tips or points of hair.

Select tips, points, or entire strands of hair to style in any way. Here, just the tips of some hair strands are selected.

When you use a styling tool after selecting Tip, press Alt+spacebar to return to the Tip selection tool. Copy the style to another hair object.

Use the Clump tool to bring hair strands or points together or fan them out.

Change the length of the guide hairs using the Cut tool or the Scale tool.

You can deform the shape of the hair using any deformation tool, like a lattice. To have smoother animation, activate Stretchy mode to allow the hair segments to stretch along with the deformation.

228 Softimage

Hair and Fur

Making Hair Move with Dynamics


When you apply dynamics to hair, you make it possible for the hair to move according to the velocity of the hair emitter object, like long hair whipping around as a character turns her head quickly. The dynamics calculations also take into account any forces applied to hair, such as gravity or wind, as well as any collisions of the hair with obstacles. You can also use dynamics as a styling tool by freezing the hair when its at a state that you like. For example, apply dynamics, apply some wind to the hair, then freeze the hair when it has that wind-swept look. How to apply dynamics to hair
1 Select the hair and choose Create > Dynamics on the Hair toolbar. Play through the simulationyou may want to loop it. Animate the hair emitter objects translation or rotation, or apply a force to the hair to make it move. Adjust the hairs Stiffness, Wiggle, and Dampening parameters, if necessary.

Getting the Look with Render Hairs


The render hairs are the filler hairs that are generated from and interpolated between the guide hairs. And as their name implies, render hairs are the hairs that are actually rendered. You can change the look of a hair style quite a lot by modifying the render hairs.
Set the number of render hairs to be rendered, then decide which percentage of this value you want to display. To work quickly, display a low percentage, then display the full amount of hair for the final render.

Set the render hair root and tip thickness separately.

Add kink, waves, and frizz to render hairs to change their shape.

5 Set the Cache to Read&Write, then play the simulation to cache it to a file for faster playback and scrubbing. Caching also helps for more consistent rendering results.

Change the number of segments to change the hairs resolution. Use a higher amount for curly or wavy hair.

Tip: Click the Style button on the Hair toolbar to toggle the dynamics state. You can style the hair only when dynamics is off.

Set the hairs density according to a weight or texture map so that you can create some bald spots or sparser growth. You can also use cut maps for the render hair length so that some areas have shorter hair than others according to a weight map.

Basics 229

Section 13 Simulation

Hair Shaders and Rendering

Rendering hair is similar to rendering any other object in Softimage. You can use all standard lighting techniques (including final gathering and global illumination), set shadows, and apply motion blur. Hair is rendered as a special hair primitive geometry by the mental ray renderer. How to attach shaders to hair

While you can use any type of Softimage shader on hair, the Hair Renderer and Hair Geo shaders give you the most control for making the hair look the way you want. You can determine different coloring, transparency, and translucency anywhere along the length of the hair, such as at the roots and tips.

Select the hair and open a render tree (press 7). This tree shows the default shader connection when you create hair.

2 The Hair Renderer shader gives you control over coloring, transparency, and shadows along the hair strands. You can also optimize the render and take advantage of final gathering.

To switch to the Hair Geo shader, choose Nodes > Hair > Hair Geometry Shading and attach it to the hairs Material node in the same way as the Hair Renderer shader.

3 To connect other Softimage shaders to the hair, disconnect the current Hair shader. Then you can load and connect another shader directly to the hairs Material node. For example, you can attach a Toon Paint or standard surface shader to the Surface and Shadow inputs of the hairs Material node to change the hairs color.

The Hair Geo shader lets you set the coloring, transparency, and translucency using gradient sliders, which give you lots of control over where the shading occurs along the hair strand. You can even add incandescence to make the hair glow.

Incandescence on the inner part of the hair strand.

To get started with some hair coloring, choose View > General > Preset Manager, then drag and drop a preset from the Materials > Hair tab onto a hair object. These presets use the Hair Renderer shader. Incandescence on the rim of the hair strand.

230 Softimage

Hair and Fur

Connecting a Texture Map to Hair Color Parameters


A texture map is the combination of a texture projection plus an image file whose pattern of colors you want to map. Instead of a value being applied over the surface as with a weight map, a texture map applies a color. When mapping a texture to the hair color parameters in the hair shaders, the color of the individual strands are derived from the texture color found at the root of the hair. Unlike other geometry in Softimage, hair is not a typical surface so you cant apply projections directly to it. Instead, you need to create a texture map property for the hair emitter object first, and then transfer it to the hair itself. To do this, apply a texture map to the hair emitter using one of the Get > Property > Texture Map commands, associate an image to this projection to use as the map, then transfer the texture map from the hair emitter to the hair object itself using the Transfer Map button on the Hair toolbar.

Rendering Objects (Instances) in Place of Hairs


Replacing hairs with objects allows you to use any type of geometry in a hair simulation. You can replace hair with one or more geometric objects (referred to as instances) to create many different effects. For example, you could instance a feather object for a bird or instance a leaf object to create a jungle of lush vegetation. The instanced geometry can be animated, such as its local rotation or scaling, or animated with deformations. This allows you to animate the hair without needing to use dynamics, such as instancing wriggling snakes on a head to transform an ordinary character into Medusa!
You can render instances of 3D objects as hair instead of the hairs geometry. The instance objects can even be animated!

Transfer the texture map from the hair emitter to the hair object using the Transfer Map button.

To render instances for the hairs, simply put the objects you want to instance into a group, and each object in the group is assigned to a guide hair using the Instancing options in the Hair property editor. The instanced geometry is calculated at render time, so youll only see the effect in a render region or when you render the frames of your scene.
You can change the color of the hair using a texture map connected to the hair shaders color parameters.

You can choose whether to replace the render hairs or just the guide hairs. You can also control how the instances are assigned to the hair (randomly or using a weight map values), as well as control their orientation by using a tangent map or have them follow an objects direction.

Basics 231

Section 13 Simulation

Rigid Body Dynamics


Rigid body dynamics let you create realistic motion using rigid body objects (referred to as rigid bodies), which are objects that do not deform in a collision. With rigid body dynamics, you can create animation that could be difficult or time-consuming to achieve with other animation techniques, such as keyframing. For instance, you can easily make effects such as curling rocks colliding and rebounding off each other, a brick wall crumbling into pieces, or a saloon door swinging on its hinges. You can make a regular object into a rigid body by simply selecting it and choosing a Create > Rigid Body command from the Simulate toolbar. This applies rigid body properties to that object, which include the objects physical and collision properties, such as its mass or density, center of mass, elasticity, and friction. The center of mass is the location at which a rigid body spins around itself when dynamics is applied (forces and/or collisions). By default, the center of mass is at the same location as the objects center, but you can move it to wherever you like.
Center of mass at default location of objects center. Notice how the box bounces a bit in the middle before falling off the edge. 5

How to Create a Rigid Body Simulation


1 Select an object and choose either Create > Rigid Body > Active Rigid Body or Passive Rigid Body from the Simulate toolbar. A simulation environment is automatically created in which the rigid body dynamics are calculated. 2 Apply a force to the scene, such as gravity. The force is added to the simulation environment. If a rigid body is animated, you dont need a force to make it move: just make sure to use its animation as its initial state for the simulation. 3 Have two or more rigid bodies collide make their geometries intersect at any time other than at the first frame. Here, the floor is set as an obstacle by making it a passive rigid body. 4 Set up the playback for the environment. This includes the duration of the simulation, the playback mode, and caching the simulation. Play the simulation!

Center of mass is moved to the bottom right corner of the object. Notice how the box hits the edge and tumbles more quickly with more spinning.

Tip: Animation ghosting lets you display a series of snapshots of the rigid bodies at frames behind and/or ahead of the current frame. You can preview the simulation result without having to run the simulation!

232 Softimage

Rigid Body Dynamics

Simulation Environments
All elements that are part of a rigid body simulation are controlled within a simulation environment. A simulation environment is a set of connection groups, one for each type of element in the simulation.
You can see the current simulation environment by using the Curr. Envir. scope in the explorer. Or use the Environments scope to see all simulation environments in the scene. All elements involved in the rigid body simulation are contained within this environment.

Adding Forces to the Environment


When you create a force in a scene, that force is automatically added to the Forces group in the current simulation environment and the dynamics solver calculates all active rigid bodies movements according to the force. If there are other simulations in the scene (such as particles or hair), they are not affected by the force unless you specifically apply it to them. After you apply the force, you can adjust its weight individually on the rigid bodies. For example, you may want to have only 50% of a gravity forces weight applied to a specific rigid body, while you want 100% of the gravitys weight used on all the other rigid bodies in the simulation.

Passive or Active?
Rigid bodies can be either active or passive: Active rigid bodies are affected by dynamics, meaning that they can be moved by forces and collisions with other rigid bodies.

A simulation environment is created as soon as you make an object into a rigid body. You can also create more environments so that you have multiple simulation environments in one scene. The dynamics operator solves the simulation for all elements that are in this environment. You have a choice of dynamics operators in Softimage: physX or ODE. physX is the default operator, offering you stable and accurate collisions with many rigid bodies in a scene, even when using the rigid bodys actual shape as the collision geometry. ODE is a free, open source library for simulating rigid body dynamics.

Passive rigid bodies participate in the simulation but are not affected by dynamics; that is, they do not move as a result of forces or collisions with other rigid bodies. They can, however, be animated. You often use passive objects as stationary obstacles or as stationary objects in conjunction with rigid constraints (as an anchor point). You can easily change the state of a rigid body by toggling the Passive option in the rigid bodys property editor.
The pool table is a passive rigid body, while the white ball is an active rigid body with the gravity force applied. The ball rebounds off the table but the table does not move.

Basics 233

Section 13 Simulation

Animation or Simulation?
You can apply rigid body dynamics to objects that are animated or not: If the rigid bodies are animated, you can use their animation (position, rotation, and linear/angular velocity) for the initial state of the simulation. When you apply a force to an animated rigid body, the force takes over the objects movement as soon as the simulation starts. If the rigid bodies are not animated, you need to apply a force to make them move. You can easily animate the active/passive state of a rigid body to achieve various effects: you simply animate the activeness of the Passive option in the rigid bodys property editor.

Creating Collisions with Rigid Bodies


Rigid bodies are all collision objectsyou dont need to specifically set an object as an obstacle with rigid bodies. For example, to animate billiard balls colliding with each other, you simply make the balls into rigid bodies. Then when they come in contact with each other, they all react to the collision. At least one rigid body must be active to create a collision. When you have collisions between two or more active objects, they all move because they are all affected by the dynamics. You can put rigid bodies into different collision layers, which lets you create exclusive groups of rigid bodies that can collide only with each other. By putting rigid bodies that dont need to collide together in different layers, you can lessen the collision processing time.

Animation The billiard ball is a passive rigid body whose rotation and translation is animated to make it move to the tables edge. A gravity force has been applied to the simulation environment. When the ball reaches the edge of the table, the balls state is switched from passive to active, the simulation takes over, and gravity makes the ball fall down. 2 1

All billiard balls are assigned as active rigid bodies. When the white ball (circled) hits them, they all react to the collision.

Simulation 3

234 Softimage

Rigid Body Dynamics

Elasticity and Friction All rigid bodies use a set of collision properties to calculate their reactions to each other during a collision, including elasticity and friction. Elasticity is the amount of kinetic energy that is retained when an object collides with another object. For example, when a billiard ball hits the table, elasticity influences how much the ball rebounds. Friction is the resisting force that determines how much energy is lost by an object as it moves along the surface of another. For example, a billiard ball rolling along a table has a lower friction value than a rubber ball along a table. Likewise, a billiard ball rolling on a carpet would have more friction than if it was rolling on a marble floor. Collision Geometry Types The collision type is the geometry used for the collision, which can be a bounding box/capsule/sphere, a convex hull, or the actual shape of the rigid bodys geometry. Bounding shapes (capsules, spheres, and boxes) provide a quick solution for collisions when shape accuracy is not an issue or the bounding shapes geometry is close enough to the shape of the rigid body. Actual Shape provides an accurate collision but takes longer to calculate than bounding shapes or convex hulls. This is useful for rigid body geometry that is irregular in shape or has holes, dips, or peaks that you want to consider for the collision, such as this bowl with cherries falling inside of it. Convex hulls give a quick approximation of a rigid bodys shape, with the results similar to a box being shrinkwrapped around the rigid body. They have the advantage of being very fast. Any dips or holes in the rigid body geometry are not calculated, but it is otherwise the same as the rigid bodys original shape.

Bounding shapes: box, sphere, and capsule

Actual Shape provides an accurate collision using the rigid bodys original shape.

Convex hull doesnt calculate the dip in this bowl, but is otherwise the same as the bowls shape.

Basics 235

Section 13 Simulation

Constraints between Rigid Bodies


You can set constraints between rigid bodies to limit a rigid body to a specific type of movement. For example, you could create a trap door that has a hinge at one of its ends. Then when some crates fall on the trap door, the collision causes the trap door to open up and the crates fall through it. Rigid body constraints are actual objects that you can transform (translate, rotate, and scale), select, and delete like any other 3D object in Softimage. You can constrain two rigid bodies together, a single rigid body to a point in global space, or constrain several active rigid bodies together as a chain.

How to constrain rigid bodies


1 Choose a constraint from the Create > Rigid Body > Rigid Constraint menu, then left-click to pick the position for the constraint object. To constrain multiple rigid bodies to one, choose a command from the Create > Rigid Body > Multi Constraint Tool menu.

2 Pick the first constrained rigid body (A). The constraint object connects to its center. A is a passive rigid body and B is an active rigid body.

Types of rigid body constraints


Slider Ball and socket 3 Pick the second constrained rigid body (B). The constraint connects to its center, joining the two rigid bodies together.

Hinge Spring A B

Fixed Rigid body Bs resulting movement with gravity applied. Notice how the constraint object is attached to both rigid bodies centers.

236 Softimage

Cloth Dynamics

Cloth Dynamics
The cloth simulator uses a spring-based model for animating cloth dynamics. You can specify and control the mass of the fabric, the friction, and the degree of stiffness, allowing you to simulate different materials such as leather, silk, dough, or even paper. Cloth deformation is controlled by a virtual spring net which is made up from three different types of springs, each controlling a different kind of deformation: shearing, stretching, and bending. After you set up how the cloth is deformed according to its own internal spring-based forces, you can then affect how its deformed using external forces, such as gravity, wind, fans, and eddies. As well, you can have the cloth collide with external objects or with itself. The obstacles can be animated or deformed and interact with the cloth model according to the cloths and obstacles friction. Although you can apply cloth only to single objects, you could create a larger object (such as a garment) made of multiple NURBS surface patches stitched together using any number of points. You must first assemble the different patches into a single surface mesh object, then apply cloth to that object. Set the Stitching parameters in the ClothOp property editor to create seams between the different NURBS surfaces of the same surface mesh model.
Low resistance to Bend. Low resistance to Stretch. Low resistance to Shear. Bend controls the resistance to bending. With low values, the cloth moves very freely like silk; with high values, the cloth appears like rigid linen or even leather. Stretch controls the resistance to stretching, which is the elasticity of the material. Low values allow the cloth to deform without resistance, while higher values prevent the cloth from having elasticity. Shear controls the resistance to shearing (crosswise stretching), keeping as much to the original shape as possible. Try to decrease this value if the cloths wrinkling is too rigid.

To give you a head start on creating cloth, there are several presets in the Cloth property editor that let you quickly simulate the look and behavior of different materials, such as leather, paper, silk, or pizza dough.

Paper preset

Silk preset

Basics 237

Section 13 Simulation

How to apply cloth to an object


5 Select objects as obstacles for collisions and choose Cloth > Modify > Set Obstacle. You can also have the cloth collide with itself by activating Self Collision in the ClothOp property page.

Select Animation as the Construction Mode. This tells Softimage that you want to use cloth as an animated deformation.

Select an object and choose Create > Cloth > From Selection from the Simulate toolbar.

Play the simulation. To calculate the whole simulation more quickly, go to the last frame of the simulation. You can cache the simulation to files to play back faster, as well as being able to scrub the simulation and play it backwards.

Set the cloths physical properties such as mass, friction, and resistance to shearing, bending, and stretching.

Apply forces to make the cloth move. Here, a little gravity and a large fan are applied to create the effect of a strong wind blowing on the flag.

You can also set clusters of points to define specific areas of a cloth that you want to be affected by the cloth simulation, then use the Nail parameter to nail down these clusters. For example, you can anchor down clusters at the sides or corners of a flag to keep it from blowing away in the wind. As well, you can animate the Nail parameter as being on or off, making it easy to create the effect of a cloth being grabbed and then let go.

238 Softimage

Soft Body Dynamics

Soft Body Dynamics


As the name would indicate, soft bodies are objects that easily deform when they collide with obstacles. In fact, the main reason to create soft bodies is to have collisions with obstacles. You can, for example, use soft body to deform a beach ball being blown across the sand and have it get squashed when it collides with a pail.

How to apply soft body to an object or cluster


1 Select Animation as the Construction Mode. This tells Softimage that you want to use soft body as an animated deformation. Select an object or cluster and choose Create > Soft Body > From Selection from the Simulate toolbar. The object can also be animated.

3 Set the soft body physical properties such as mass, friction, stiffness, and plasticity. To give you a head start, click a button on the Presets page to quickly make the object behave like a rubber ball, an air bag, and more.

Soft body is a deform operator meaning that it moves only an objects vertices, never the objects center. Soft body computes the movements and deformations of the object by means of a spring-based lattice whose resolution you can define using the Sampling parameter in the SoftBodyOp property editor. You can use soft body on clusters (such as points and polygons), allowing only that part of an object to be deformed by soft body. For example, you can have the cluster of points that form a characters belly be deformed by soft body for some jelly-like fun! If the soft-body object is animated, you can either preserve its animation or recalculate it according to any forces you apply, such as wind and gravity. If you keep the objects animation, soft body acts only as a deformer on the object, but does not influence its movement. If you want to convert the soft body simulation to animation, you can plot it as shape animation using the Tools > Plot > Shape command on the Animate toolbar.

4 Apply a gravity and/or wind force. If the soft body is not already animated, you need to apply a force to make it move.

5 Select objects as obstacles for collisions and choose Soft Body > Modify > Set Obstacle. Then play the simulation and watch the ball bounce!

Basics 239

Section 13 Simulation

240 Softimage

Section 14

ICE: The Interactive Creative Environment


ICE is a graph-based system for controlling deformations and particle effects in Softimage. You can quickly create an effect by connecting a few nodes, or you can dig deeper and use ICE as a complete visual programming environment. This section describes some of the basic concepts of ICE. The next section, ICE Particles on page 271, describes the workflow for using the predefined ICE compounds to create particle systems.

What youll find in this section ...


What is ICE? The ICE Tree View ICE Simulations Forces and ICE Simulations ICE Deformations Building ICE Trees ICE Compounds

Basics 241

Section 14 ICE: The Interactive Creative Environment

What is ICE?
ICE is a node-based system for controlling all the attributes that define a deformation or particle effect. There are two parts to ICE: At its basic level, ICE is a complete visual programming environment. You can combine basic nodes for getting data, modifying data, setting data, and controlling execution flow into elaborate ICE trees. You can easily experiment, in a way that you cant when writing code, by simply connecting nodes and seeing the results immediately in the viewports. When youre done, you can package your tree into reusable compounds that you can use in other scenes, share with your team, or even put online to share with the Softimage community. On top of that level, Softimage comes with a comprehensive set of predefined compounds for particle simulations. For simple effects, you can connect compounds that define forces or basic behaviors like sticking and bouncing. For more complex effects, you can use the predefined state machine to switch between several behaviors on a per-particle basis. You can use ICE to: Completely control particle systems. You can add and remove points on point clouds. You can move points directly, or apply a simulation using particle or rigid body behavior. Deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds. You cannot use ICE on hair, non-ICE (legacy) particle clouds, groups, or branches. There are three ways you can approach ICE: You can simply use the predefined compounds and adjust their input values to create basic effects. At the other extreme, you can dive right in and create your own custom effects from scratch using the base nodes. Between the two extremes, you can start with the factory compounds and then modify or augment them with extra nodes to create your own variations of effects.

Under the hood, many nodes connected together in the point clouds ICE tree are doing all the work.

242 Softimage

What is ICE?

A Few Thing to Know About ICE...


Its All About the Nodes Nodes are the building blocks for ICE: they are operators that work on object data. Some nodes get data from the scene, and some modify and process this data. They have input and output ports that allow them to be connected to each other.

The ICETree Node The ICETree node is like Grand Central Station for an ICE tree: its the main operator that processes all the data that flows into it. Nodes in the tree must be connected to it in order to be evaluated. You can have multiple ICE trees per object as long as each ICETree operator has a different nameand you can easily rename it in the explorer. Attributes

Two nodes with ports connected together. Compound with several input ports.

Attributes are at the heart of ICE. Attributes are data that is associated with objects, or with components such as points, edges, polygons, and nodes. With attributes, you can get and set information such as a particles color or shape, or an objects point position. Almost every ICE tree involves getting and setting attributes in some way. Attributes can be inherent (always part of the scene), predefined (innately understood by certain base ICE nodes, but dynamic in that they only exist when they are set), or custom (create your own).

Compounds Compounds are the ber nodes of the ICE world. They can contain a whole ICE tree or just parts of it. Compounds make it easy to create more complex effects in the ICE tree because they package numerous nodes into one. And because theyre in a package, you can easily bring compounds into other scenes or share them with other users. You can connect compounds in the same way that you do for nodes in the ICE tree. As well, you can open up a compound to edit it or just to see what makes it tick. Softimage ships with many compounds that are designed specifically for particle and deformation workflows. You can find these on the Tasks tab of the preset manager in the ICE Tree view.

Some of the many attributes that are available for point clouds. You can view attributes in an explorer.

Basics 243

Section 14 ICE: The Interactive Creative Environment

The ICE Tree View


The ICE tree view is where you build ICE trees by connecting nodes. You can open an ICE tree view in a floating window by pressing Alt+9 or by choosing View > General > ICE Tree.
A B C D E F G H

To display an ICE Layout with the ICE tree view embedded, choose View > Layouts > ICE.

J A

K D E F G H

L Clear. Clears the view. Opens the preset manager in a floating window. Displays or hides the preset manager embedded in the left panel (J). Displays or hides the local explorer embedded in the right panel (L). Birds Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Birds Eye View.

Memo Cams. Save and restore up to four views: Left-click to recall stored view. Middle-click to store current view. Ctrl+middle-click to overwrite stored view with current view. Right-click to clear stored view.

B C

Lock. Prevents the view from updating when you select other objects in the scene. Refresh. When the view is locked, forces it to update with the current selection in the scene.

244 Softimage

The ICE Tree View

Control timers and display performance highlights. This is an advanced feature used for profiling and optimizing the performance of ICE trees. Embedded preset manager. You can press Ctrl+F to quickly put the cursor in the preset managers text box so that you can start typing a search string. Pressing Ctrl+F will also temporarily display the preset manager if it is hidden.

ICE Nodes in the Preset Manager In the preset manager, ICE nodes are separated into two tabs: The Tasks tab contains higher-level compounds for accomplishing specific tasks. You can select a task (Particles or Deformation) from the drop-down, and then select a sub-task from the list below. The Tools tab contains base nodes and general utility compounds for performing basic operations, like getting data, setting data, adding values, etc. You can drag a node from the preset manager into an ICE tree and connect it to the graph.

ICE tree workspace. Connect nodes by dragging an output port from the right side of one node onto an input port on the left side of another node. You can connect the same output to as many inputs as you want. Open a nodes property editor by double-clicking on it. This lets you set parameters that cannot be driven by connections. Right-click on a node, on a port, or on the background for various options. Hover the mouse pointer over a connection to highlight the connected ports. If a port is not visible because it has been collapsed or because the view is zoomed out too far, information about the port is displayed in a pop-up. The nodes in the tree can be base nodes or compound nodes. Compounds are encapsulated subtrees built from base nodes and other compounds. Base nodes have a single border and compound nodes have a double border. See ICE Compounds on page 267 for information on building and exporting your own compounds. Nodes that cannot be evaluated because of a structural error are displayed in red. Other nodes that will not be evaluated because of an error in their branch are displayed in yellow. See Debugging ICE Trees on page 264.

Local explorer. When there are multiple ICE trees on the same object, click to select the one to view. You can also click on a material to switch to the render tree view.

Basics 245

Section 14 ICE: The Interactive Creative Environment

Anatomy of an ICE Tree


The following illustration shows a typical ICE tree for a simple particle system. To see some examples of how to build up an ICE tree, check out the three tutorials at the end of this guide.

Execution flows sequentially from top to bottom along the input ports of the ICETree node (and any other type of Execute node). Because the nodes are evaluated in order, it matters where you plug them in. Sometimes one operation requires another to be done first so that it can be evaluated properly.

D A E

Nodes that are connected to an Emit nodes Execute on Emit port are applied only to new points that are generated on the current frame. They are not applied to all particles on every frame. Nodes that are connected to the root node are executed on every frame. You can control which data gets set on which elements by using If and Filter nodes in the upstream branches. The simulation framework resets every particles force to 0 at the end of each frame, so forces must be reapplied at every frame, which is why the Add Forces node is plugged into the ICETree node and not the Emit node.

D E C F

The Simulate Particles node is the standard particles node that updates the position and velocity of each particle at each frame based on mass and force. You could use the Simulate Rigid Bodies node instead to make particles into rigid bodies. Particles can then collide with each other and with other objects that are set as obstacles. You do not need to include a simulation node in your treeif you prefer, you can set point positions directly.

Data flows downstream from left to right along connections from one nodes output ports to the next nodes input ports. Each connection represents a data set. The ICETree node is the main operator that processes all the data that flows into it. Nodes must be connected to it to be evaluated.

246 Softimage

ICE Simulations

ICE Simulations
As with animation, a simulation calculates the way in which an object changes over time. However, with a simulation, the result of the current frame depends on the result of the previous frame. With ICE, you can create both particle and deformation simulations. You can emit and change particles in a point cloud for effects such as cigarette smoke curling as it rises, leaves falling lazily to the ground, vines growing up out of the ground, or even crowds of people milling about in the street. You can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds, to create effects such as turbulent ocean waves, gentle ripples on a pond, or ribbons twisting in the wind.

ICE snow particles fly from the point of impact of the boulder with the snow on the hill. An ICE deformation also occurs on the hill as the boulder rolls down it, crushing the snow as it goes.

The point clouds simulated ICE tree emits the snow particles and makes them move. A simulated ICE tree also exists for the polygon mesh hills deformation effect.

Basics 247

Section 14 ICE: The Interactive Creative Environment

Simulations and the Construction Regions


An ICETree node can be either simulated or not: the only difference between the two is the ICETree operators position in the objects construction stack. When you create a simulated ICETree node, the Simulation and Post-Simulation regions are created in the objects construction stack, and the ICETree operator is placed in the Simulation region. Operators in the Simulation region calculate the result of the current frame based on the previous frame rather than on the construction regions that are below it. This is true not only for ICE trees, but for all operators in the Simulation region. For example, if you apply a nonICE Twist deformation with a small Angle value in the Simulation region and play back the scene, the object becomes progressively more twisted. Operators in the Post-Simulation region are applied on top of the simulation. You could use the Post-Simulation region to apply a deformation, such as a lattice, on top of a particle simulation. When the simulation is not active, the operators in the Simulation region are skipped. On the first frame that the simulation is active, the operators below the Simulation region are evaluated to define the default initial state but the operators in the Simulation region are not evaluatedthis means that if you are emitting particles, for example, they will appear on the second frame of the simulation. While the simulation is active, the operators below the Simulation region are not re-evaluated. You can turn a simulated tree into a non-simulated one by moving it to another region, like Modeling, and vice versa. However, remember that the lower regions are not re-evaluated when the simulation is active if the Simulation region exists in an objects construction stack. To fix
248 Softimage

this, you can select and delete the Simulation region marker from the construction operator stack. Both the Simulation and Post-Simulation region markers are removed if either one is deleted, but operators in these regions are not removed and can be moved to the desired regions afterward.

The Simulation Environment


A simulation environment is automatically created when a simulated ICETree node is created. This simulation environment houses the Simulation Time Control, the cache files, and any non-ICE forces used in the simulation. The Simulation Time Control property is where you set the frame range during which the simulation is active. Its also where you set the Play Mode which controls how the simulation plays back: Live, Standard, or Interactive. To play the simulation, use the standard playback controls below the timeline to play, scrub, or jog forward. Since simulations depend on the previous frame, the viewports do not update if you play, scrub, or jog backwards unless the simulation has been cached. If you jump to a later frame, the intervening frames are calculated in the background.

Setting the ICE Simulations Initial State


By default, the initial state of a simulation is the result of the operators in the construction regions that are below the Simulation region on the first frame that the simulation is active. However, with simulations you often need to have a certain state be the first frame of the simulation, such as a candle already burning or rigid bodies already settled. You can select any frame in an existing simulation and use that as the initial state by choosing ICE > Edit > Set Initial State from the Simulate toolbar.

ICE Simulations

Caching ICE Simulations


Much of the work in creating a convincing simulation is the process of trial and error. Caching can help you try out different combinations of settings until you find the right effect. Caching stores the current simulation frames into a file that you can play back using the ICE tree, the animation mixer, or simply the playback controls and the timeline. With cache files in the animation mixer you can scale, trim, cycle (loop), blend, etc. them in the same way that you can for action clips. There are three file formats from which you can choose to create cache files: the default ICECache file format, the PC2 file format, and the Custom file format (if you create your own custom plug-in for caching).

There are three ways of caching ICE simulations:


A Use the Tools > Plot > Write Geometry Cache command on the Animate toolbar to plot any type of simulation (except hair) or animation into cache files. Then select an object and load the cache files on it with the Plot > Load Geometry Cache command. This brings them into the animation mixer. Use the Caching option in the Simulation Time Control to cache the simulation frames from any simulated object into an action source, which you can then bring into the animation mixer. Use the Cache on File node in the ICE tree to write the simulation or animation data stored on an ICE object to a cache file, which you can bring into the animation mixer. You can also read the cache data with this node.

A C

Basics 249

Section 14 ICE: The Interactive Creative Environment

Forces and ICE Simulations


In the ICE tree, you can make simulated particles and deformed objects move according to different types of forces. Each simulated object can have multiple forces applied to it. You can use either of these types of forces in an ICE tree: The forces that are available from the Get > Primitive > Forces menu on any toolbar (see Making Things Move with Forces on page 225). The ICE forces that are available as compounds in the ICE tree views preset manager or Nodes menu. You can also create your own force compounds using the nodes found within ICE. The main ICE force is the Add Forces compound, which is a hub for all the forces in your ICE tree. It adds up the effect of all forces that are plugged into it, then outputs one force (vector). The order in which the forces are plugged into the Add Force compound is not important. If nothing is plugged into the Add Force compound, you can use it to set a simple directional force on each axis.

ICE Forces
A Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from objects or particles, their size must be taken into consideration. The Surface force attracts particles/objects to or repels them from an objects surface. While this force is similar to creating goals for particles, this force keeps the particles moving around (swarming) the surface object instead of stopping once they reach the goal. The Wind is a directional force with velocity and strength. It generates a force that speeds up particles or objects to a target velocity. The Null Controller force uses a null to attract or repel particles/ objects, much like how particles move toward or away from a goal object. Changing the icon shape of the null (to something like Rings, Square, or Circle) changes the behavior of this force. The Neighboring Particles force attracts particles to each other when they get within a certain range, but there is no friction between the particles so they dont stay clumped togetherthey keep moving. The Drag force opposes the movement of simulated objects, as if they were in a fluid. The Coagulate force attracts points toward their neighbors to form clumps. Once the points get within a certain range of each other, the friction (drag) slows them down. The Point force attracts particles/objects to or repels them from a position in space that you define.

C D

F G

250 Softimage

Forces and ICE Simulations

Types of ICE Forces


A

D E

F B

G H

Basics 251

Section 14 ICE: The Interactive Creative Environment

ICE Deformations
Any ICE tree that modifies point positions on an object without adding or deleting points can be considered a deformation. With ICE, you can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds. A deformer works by getting current point positions, modifying them based on other variables, then setting new positions. This means that you can create your own custom deformers with ICE. You can create three types of deformations with ICE: simulated, animated, and non-time based.

The snow on the polygon mesh hill crushes under the weight of the boulder as it rolls down the hill.

The simulated ICE tree for the polygon mesh hills deformation effect. A Bulge operation is used along with turbulence.

252 Softimage

ICE Deformations

Simulated Deformations To create a simulated deformation in ICE, you need to use a Simulated ICETree node. You can then change the object point positions as you like with any type of deformer, including one of your own design. As an example, the Footprints compound creates a simple deformation. It lowers the points of an object where the surface of another geometric object (the deformer) is below them in the objects local Y axis. The points stay deformed during the simulation, so you can move the deformer to create more indentations. When you return to the first frame of the simulation, the geometry returns to its initial undeformed state.

Time-based, Non-simulated Deformations You can also use ICE to create deformations that are time-based, but not simulated in that they are not in the Simulation region of the construction stack and therefore do not depend on the previous frames point positions. One way to do this is to simply animate the input port values of the ICE tree. Another way is to include time-dependent nodes in the ICE tree, such as a Turbulence node. This node creates a coherent noise pattern that varies continuously in space, as well as optionally in time. Here, the Turbulence node is used to set the point positions in Y. Space Frequency was set differently in X and Z, resulting in long, thin ripples. There are also several Turbulize compounds based on this node, but designed to work with specific situations. You can find them in the preset manager.
1

Select the geometric object to be deformed and choose Deform > Footprints (ICE) from the Model, Animate, or Simulate toolbar. This creates an ICE tree for this object. Alternatively, you can get the Footprints compound from the preset manager and set up this tree yourself.

2 3

Pick the geometric object to act as the deformer. In this case, its the infamous foot! Play the scene to run the simulation, then move the deformer to create indentations in the object.

Basics 253

Section 14 ICE: The Interactive Creative Environment

Nontime-based Deformations You can create deformations that are not time-based but instead depend on the position of deformer objects or other factors to modify point positions. The deformation can then be controlled by animating the deformers in any way. The following example is a variation of the Push deformation that uses the proximity of a null to displace points along their normals.

254 Softimage

Building ICE Trees

Building ICE Trees


ICE allows you to create operators by building a network of nodes called an ICE tree. The ICE tree makes it easy to build up an effect by connecting pieces of data together. The real work in creating an ICE tree, however, is finding out the type of data you can use and then figuring out how to connect that data together to achieve the effect you want. Using the compounds that come with Softimage can get you pretty far for some effects, but you might need to work at the base node level at some point. While this isnt rocket science, its also not trivial. The level at which you get into tree building depends on what you want or need to do, as well as how comfortable you are with math and programming concepts. 4 Add nodes to the workspace in a variety of ways, such as by dragging them from the preset manager into the ICE tree workspace or by choosing them from the Nodes menu. You can also get data from a scene element. An easy way to do this is to select the object and press F3 so that a floating explorer opens, then drag the emitters name from there into the ICE tree workspace. This adds a pre-filled Get Data node for that object. 5 Connect the nodes together to achieve the effect you want. This is where all the thinking and work takes place! You can also open a nodes property editor to edit parameters that are not (or cannot be) driven by connections. Right-click on a node, on a port, or on the background for various options. 6 If the tree is a particle simulation, add either the Simulate Particles or Simulate Rigid Bodies node to make sure that it updates properly at each frame. You can create a compound node and export it for reuse in other trees and scenes.

Overview of How to Create an ICE Tree


This is a basic workflow for creating ICE trees. 1 2 Select the geometric object to which you want to apply an ICE tree. Display the ICE tree view by pressing Alt+9. Click the Update button in the ICE tree view to show the selection. The view will be empty if theres no ICE tree on the object yet. 3 Create an ICE tree or a simulated ICE tree (for particle or deformation simulations) by choosing Create > ICE Tree or Simulated ICE Tree. This creates the ICETree node.

Basics 255

Section 14 ICE: The Interactive Creative Environment

1 2

256 Softimage

Building ICE Trees

The Way Trees Work


Each connection in a tree represents a set of data, with one value per element of the set. For example, if you get Self.PointPosition, the set consists of one 3D vector per point of the Self object (the object with the ICE tree). When tracing the logic and connections of an ICE tree, you can think of the nodes as working on all members of the data set at once, or you can concentrate on what happens to a single representative of that set. When you combine a single constant value (or something else in the singleton context) with a data set, it gets combined with every member of the set. For example, you can add the same value to all members of a set, or you can multiply them all by the same number, and so on. When you combine two data sets, the corresponding members of the set are combined. For example, if you add Self.PointPosition and Self.PointNormal as you might do in a Push-type deformation, then each points position vector is added to its own normal vector. This is why component contexts must be the same when you combine them there must be the same number of elements and there must be a correspondence between the members. A data set is not an array, or at least, its not exposed as an array in ICE. Traditional programming concepts related to arrays do not apply. You do not need to use the nodes in the Array category to work with data sets (unless your data set actually contains arrays, for example Self.PointNeighbors, and even then you can connect directly to many nodes without worrying too much about the fact that the data consists of arrays). You do not need to iterate on the members of the data set just plug the data into another node, such as a Math node, to process the data.

Connecting Nodes
In general, you connect ICE nodes by clicking and dragging an output port from the right of one node onto the input port on the left of another node. You can connect the same output to as many inputs as you want. Data flows along the connection from the first node and is processed by the second node.

When you connect to an input port, any existing animation on the ports value is lost. Some nodes, such as Execute, Add, Multiply, and so on, allow an unlimited number of input connections. These nodes have special virtual ports identified as New (port name). You can connect to the New port to create a new port, or right-click on an existing port to manually insert and remove ports. There are some special factors that determine whether you can connect two ports together: The type of the data, as indicated by the port colors. The context of the data. The structure of the data: either single or array (ordered set).

Basics 257

Section 14 ICE: The Interactive Creative Environment

Data Types
The data type defines the kind of values that a port can pass or accept, such as Boolean, integer, scalar or vector. The data type is identified by the color of the port. You cannot connect two ports if their data types are incompatible. However, you can convert between many data types using the different Conversion nodes. Here are the types of data you might see:
Type Polymorphic Boolean Integer Scalar Description Accepts a variety of data types. See Polymorphic Ports on page 259. A Boolean value: True or False. A positive or negative number without decimal fractions, for example, 7, 2, or 0. A real number represented as a decimal value, for example, 3.14. Internally this is a singleprecision float value. A two-dimensional vector [x, y] whose entries are scalars, for example, a UV coordinate.

Type Rotation 3x3 Matrix

Description A rotation as represented by an axis vector [x, y, z] and an angle in degrees. A 3-by-3 matrix whose entries are real numbers. 3x3 matrices are often used to represent rotation and scaling. A 4-by-4 matrix whose entries are real numbers. 4x4 matrices are often used to represent transformations (scaling, rotation, and translation). A primitive geometrical shape, or a reference to the shape of an object in the scene. This data type is used to determine the shape of particles. A reference to a geometrical object in the scene, such as a a polygon mesh, NURBS curve, NURBS surface, or point cloud. You can sample the surface of a geometry to generate surface locations for emitting particles. A location on the surface of a geometric object. The locator is glued to the surface of the object so that even if the object transforms and deforms, the locator moves with the object and stays in same relative position. Not a data type in the conventional sense. You connect Execution ports such as the output of a Set Data into an Execute or root node to control the flow of execution in the tree. Also not a data type in the conventional sense. This is a reference to an object, parameter, or attribute in the scene, expressed as a character string. You can daisy-chain these as described in Daisy-chaining References on page 261.

4x4 Matrix

Shape

Geometry

Surface Location

2D Vector 3D Vector

Execution A three-dimensional vector [x, y, z] whose entries are scalars, for example, a position, velocity, or force. A four-dimensional vector [w, x, y, z] whose entries are scalars. A quaternion [x, y, z, w]. Quaternions are usually used to represent an orientation. Quaternions can be easily blended and interpolated, and help address gimbal-lock problems when dealing with animated rotations. Reference

4D Vector Quaternion

258 Softimage

Building ICE Trees

Polymorphic Ports Polymorphic ports can accept several different data types. For example, the Add node can be used to add together two or more integers, or two or more scalars, or two or more vectors, and so on. Once you connect a value to a polymorphic port, its port type becomes resolved. Other input and output ports on the same node and on connected nodes may also become resolved and only accept specific data types. This reflects the fact that, for example, you cannot add an integer to a vector.
Before anything is connected, the Add nodes ports are unresolved (black). After connection, controls appear for Value2. There are no controls for Value1 because it is being driven by the connection.

Before any connection, the Add nodes property editor is blank.

While polymorphic ports accept several data types, they dont necessarily accept all types of connection. For example, the ports of a Pass Through node accept any type of value, but it doesnt make sense to use a Multiply by Scalar node with a Boolean value.

Once a node is connected to Value1, then Value2 and Result become resolved. In this case, they are yellow for 3D vectors.

Data Context
In ICE, attributes are always associated with elements, either objects or one of their component types such as points, polygons, edges, and so on. For example, sphere.PointNormal consists of one 3D vector for each point of the object called sphere; in other words, the context is per point of sphere. For two ports to be connectable, their contexts must be compatible. Context is determined by two factors: The type of element associated with the data: object or a specific component type (points, polygons, etc.). The object that owns the components. The data context gets propagated through node connections in the same way as the data types of polymorphic nodes.

Even after a ports type has been resolved, you can still change it by replacing the connection with a different data type. However, this works only if the port is not resolved by other connections in the tree. If a ports type is unresolved, you cannot set values in its property editor. Once it is resolved, the appropriate controls appear in the property editor. Different data types use different controls: for example, checkboxes for Booleans, sliders for scalars, and so on.

Basics 259

Section 14 ICE: The Interactive Creative Environment

The different types of context are summarized in the following table:


Context Singleton Description A data set containing exactly one value. For example, an objects position, a bones length, a meshs volume, etc. The singleton context includes data that is associated directly with objects rather than their components, as well as scene data such the current time and the frame rate. Singleton data is always compatible with other singleton data. Singleton data is usually compatible with other contexts, for example, you can add the position of one object to the point positions of another object. Point A data set containing one value for each point of a geometric object (point cloud, polygon mesh, NURBS surface, curve, lattice, etc.). For example, point positions, envelope weight assignments, etc. A data set containing one value for each edge, subcurve, or surface boundary. For example, edge lengths. A data set containing one value for each polygon or subsurface. For example, polygon normals or polygon areas. A data set containing one value for each texture sample of a geometry. A sample is usually a polygon node on a polygon mesh, but there can also be samples on NURBS surfaces and curves. In some cases, the context is bound to a node in the ICE tree. For example, the Generate Sample Set node generates a set of random point locators on the surface of a geometric object. The size of the set of point locators does not necessarily match the number of any kind of element in the scene, but is actually controlled by the rate parameter of the Generate Sample Set node. Node-bound contexts are typically incompatible with each other. For example, if you generate two sets of locations on the same geometry, they are bound to different nodes and cannot be combined.

Specifying Scene References


Certain nodes can refer to elements in the scene using strings as references. For example, references can specify things like: Attributes to get or set. Point clouds to which to add particles. Geometric objects to query for closest points, etc. References are resolved by name. Character strings are not casesensitive. Object, property, and attribute names are separated by a period (.), for example, grid.PointPosition or sphere.cls.WeightMapCls.Weight_Map.Weights. You can specify a scene reference by using controls in a nodes property editor to enter, explore for, or pick elements in the scene. Alternatively, you can right-click on a node and choose Explore for Port Data.

Line Face

A D A

Sample

Node

Type the reference. Use periods to separate objects, properties, and attributes. Strings are not case-sensitive. Use the token self to refer to the object on which the tree exists. You can also use the tokens this (same as self) and this_model (the model that contains the object with the tree). Click Explorer, expand the tree, and choose an element. The tree shows the attributes that you can get from the current element name path or location. This list includes predefined attributes and any custom attributes (including those defined in unconnected Set Data nodes). Click Pick and then pick an element from a viewport, explorer, or schematic view. You can combine methods A and B, for example, type self, click Explore, and then choose an attribute such as PointPosition.

C D

260 Softimage

Building ICE Trees

Daisy-chaining References You can use the In Name and Out Name ports to connect references on Get Data and other nodes in sequence, like a daisy chain. For example, you can get sphere and then use that to get sphere.PointPosition, sphere.PointNormal, and so on. If you want to change sphere to torus later on, theres only one node that needs to be changed. This is particularly useful when creating compounds, because you only need to expose the leftmost reference.

Tokens in References The token self always refers to the object on which the ICE tree is directly applied. This token allows you to create trees that are easily reusable because they dont depend on specific object names. Other tokens that you can use are this (same as self ) and this_model (refers to the model that contains the object with the ICE tree). If you have built an ICE tree using specific object names and want to make it more generic so that you can make a compound to use on other objects, you can automatically replace the object name with Self using User Tools > Replace Object Name with Self (Nested Compounds). Resolving Scene References Scene references are automatically maintained as you modify the scene.

References that are connected in this way are concatenated, so for example Get Data (Self ) plugged into Get Data (PointPosition) results in Self.PointPosition. You do not need to worry about periods at the beginning or end of the referencesperiods are automatically added or removed as necessary. When a node has a reference connected to its In Name port, then the Explore button and Explore for Port Data command both start from the current path. For example, if you click Explore in the property editor of a leaf (left-most) Get Data, you can select anything starting from the scene root. However if a Get Data has a reference to a geometric object connected to its In Name, you can select properties and attributes on that object.

If you have an object called sphere and you rename it to ball, references to sphere are automatically updated to ball. If you delete the object named sphere instead, any references to it are invalid and the affected nodes become red. If you later add another object named sphere, or rename an existing object to sphere, then the references become resolved again. If you add the object named sphere to a model named Fluffy, references to sphere are automatically updated to Fluffy.sphere. If the ICE tree is on an object in the Fluffy model, the references are updated to this_model.sphere instead.

Basics 261

Section 14 ICE: The Interactive Creative Environment

Getting and Setting Data in ICE Trees


Almost every ICE tree involves getting data, performing calculations, and then setting data. You can get and set any data using Get Data, Set Data, and other nodes found in the Data Access category of the Tools tab in the preset manager. There are also some compounds for getting and setting specific data on the Task > Particles or Deformations tabs.

When you get data by an explicit string reference, you get a set of values with one value for each component. For example, if you get sphere.PointNormal, you get one 3D vector for each point of the sphere object; in other words, the context is per point of sphere. When you get data at a location, the context depends on the context of the set of locations that is connected to the Source port of the Get Data node. For example, if you start by getting grid.PointPosition, then use that to get the closest location on sphere, and in turn use that to get PointNormal, the data consists of normals on the sphere but the context is per point of the grid. If instead you started by getting grid.PolygonPosition, the context would be per polygon of the grid. Getting Data at Locations To get data at a location, plug any location data into a Get Data nodes Source port. When a location is plugged into the Source port of a Get Data node in this way, its Explorer button shows only the attributes that are available at that location.

You can get any data in the scene. Once you have a Get Data node in your tree, you can specify or modify the reference. You can set only certain data: Some intrinsic attributes, such as PointPosition or EdgeCrease. Other attributes are read-only, like PointNormal and PolygonArea. Any dynamic attribute, including predefined ones like Force, Velocity, and so on. Any property in Softimage except for kinematics. Getting Data You get data using Get Data nodes. You can add a Get Data node to your scene by dragging it from the preset manager (its in the Data Access category of the Tools tab) or by selecting it from the Nodes > Data Access menu. You can also get a specific object or other element by dragging its name from any explorer view. Once you have a Get Data node in your tree, you can specify or modify the reference as described in Specifying Scene References on page 260. You can get data by explicit string references or at locations.

262 Softimage

Building ICE Trees

You can use this technique to get data from other objects using geometry queries like Get Closest Location nodes. For example, you can get PointNormal at the closest location on a sphere.

Reusing Get Data Nodes You can connect the same Get Data node to as many nodes as you want if you need the same data elsewhere in the tree. However if the data has changed in-between, the Get Data node will return the new data later in the tree.

If an attribute is stored on points, you can still get it at an arbitrary location. The value is interpolated among the neighboring point values. You can convert a location on a geometry into a position (3D vector) by getting the PointPosition attribute at that location.

The Get Self.Foo node returns different values to Stuff and More Stuff because Self.Foo was set in-between.

Basics 263

Section 14 ICE: The Interactive Creative Environment

Setting Data To set data, use the Set Data compound. You can find this node in the Data Access category of the Tools tab in the preset manager, or on the Nodes > Data Access menu. Simply specify the desired reference and value, either through connections or directly in the property editor. See Specifying Scene References on page 260. Not all attributes can be set. Read-only attributes like NbPoints are not shown in the Set Data nodes explorer. You can set data using an explicit string reference only. You cannot set data at locations. To set an attribute, you must be in the appropriate context. For example, to set PointPosition, you must be in the per point context of the appropriate object. If data has been set for some but not all components in a data set, uninitialized components have default values: zero for most data types, false for Booleans, identity for matrices, black for color, etc. Setting Custom Attributes To create a custom attribute, simply use a Set Data node and make up a new attribute name. Dont forget to include the full reference including the object name, for example, PointCloud.my_custom_attribute. You can use custom attributes to store any type of value, including locations. The context and data type of custom attributes are determined by the connected nodes. If the data type is undetermined, the Set Data node is in error (red) you can use a node from the Constant category to force a specific data type. If the context is undetermined, it defaults to the object context. However, this context can be changed to a component context if you connect nodes that force a different context, as long as there are no conflicting constraints on the context.

Debugging ICE Trees


ICE includes some basic tools that help you identify and correct the different types of problems you may encounter when building ICE trees. Structural Problems Structural problems are caused by incompatible data types, contexts, or structures in the tree. Nodes that are in error because of structural problems are displayed in red, and other nodes in that branch that will not be evaluated because of the error are displayed in yellow. If you have red nodes, or if you cannot connect nodes that you think should be connectable, then your tree has structural problems. Messages on ports and nodes help you identify structural problems: Hover the mouse pointer over a port to display a pop-up message showing the data types, context, and structure that the port supports, for example, Array of 3D Vector per Point of PointCloud.pointcloud. To see more detailed information about a port, right-click over a port or connection and then choose Log port type details. Information is logged to the history pane of the script editor. If a node is red (in error), hover the mouse pointer over it (not over a port) to see the first error message. To see all error messages, right-click over the node and choose Show Messages. When you drag an output port onto an incompatible input port, a pop-up message informs you of the conflict and shows the data types, contexts, and structure that are supported by the two ports.

264 Softimage

Building ICE Trees

Logical Problems If a tree is working but not doing what you think it should be doing, it may be that the values being passed to ports are not what you expect them to be. You can display port values in the 3D views by right-clicking on a connection and choosing Show Values. There are several options for controlling the color, style, and placement of the information. When port values are displayed, a V icon appears on the connection. Click the icon to change display properties, or right-click and choose Hide Values to remove the display.

Performance Problems You can profile the performance of ICE trees by displaying execution times directly on nodes in the ICE tree viewer. This shows you which nodes take the most processing time, and lets you see where you can try to optimize the tree.

D A

Start Performance Timers. Activates and deactivates performance logging. Typically, you activate this and then play back or advance frames. Reset Performance Timers. Clears the performance numbers. When you have made changes and want to start logging the new performance values, click this. Performance Highlight. Choose one: No Highlight. Displays nodes and ports normally. Time (Top Thread). Shows the performance of the worst thread per node. The number on the root ICETree node is still the total for the entire tree and its inputs. Time (All Threads). Shows the total performance of all threads per node.

Displaying values on this connection.

Update. You may need to click this to see new values.

Basics 265

Section 14 ICE: The Interactive Creative Environment

Adding a Comment or Two


When youre building a tree, its immensely useful to write down notes about it as you go, especially when a tree grows many branches. You can easily do this by adding comments to individual nodes in a tree or to a group of nodes. To add a comment to a single node, right-click on it and choose Create Comment. Enter the comment text and set its color. To add a comment that is not connected to any specific node, use a Comment node.
o

To add a comment a group of nodes, use a Group Comment node. To move the comment along with the node group, middle-click and drag in the comment area. Group Comment colors are visible in the birds eye view, so they are a handy way of visually organizing your trees.

266 Softimage

ICE Compounds

ICE Compounds
Compounds are ICE nodes that are built from other nodes, which can be base nodes or even other compounds. You can use compounds to simplify and organize your ICE trees to make them easier to read and understand, but the real advantage of compounds is that you can export them and reuse them in other ICE trees and scenes, as well as share them with other users. Softimage includes many pre-built compounds for performing specific tasks. You can find these in the preset manager in the ICE tree view. These compounds are built from the same nodes that are also available in the preset manager. Inspecting the supplied compounds is a great way to see how ICE trees work. You can then edit these compounds to use them as a base for building your own effect. Overview of How to Create and Use ICE Compounds 1 You cant store the ICETree node in a compound, so insert an Execute node to merge all the root connections into a single output. To do this, right-click the ICETree node and choose Insert Execute Node. Select all the nodes you want to save in your compound. To keep the compound generic, you should leave out objectspecific nodes (such as particle emitter data) so that you can apply this effect to any appropriate object in any scene. Convert the selected subtree into a compound: choose Compounds > Create Compound from the ICE tree toolbar. Edit the compoundsee Editing Compounds on page 268. Export the compoundsee Exporting Compounds on page 269. You can modify the compound and re-export itsee Versioning Compounds on page 270.
5 6 4 3 2 1

3 4 5 6

Basics 267

Section 14 ICE: The Interactive Creative Environment

Editing Compounds
When you edit a compound, you can change the compound name and expose different ports of the nodes inside so that they are easily accessible from your compound later on.
A

L B G C D I J K

H E

268 Softimage

ICE Compounds

Parts of the Compound Editor


A Opens the compound editor. Move the mouse over a compound node, and click the e icon that pops up. Or right-click on a compound node (not over a port) and choose Edit Compound. Compound name. To change the name, double-click and type a new one. Category is used to organize exported compounds on the Tool tab of the preset manager. To change the category, double-click and type a different name. To create a new category, simply enter a new category name. The new category is automatically added to the preset manager and Nodes menu when the compound is exported. If a compound has no category, it does not appear on the Tool tab of the preset manager. Tasks are used to further organize exported compounds by workflow on the Task tab of the preset manager. Double-click to enter or change a comma-separated list of tasks. Use a slash to separate tasks and subtasks, for example, task/subtask,task1/subtask1. To create a new task or subtask, simply enter new names. New tasks and subtasks are automatically added to the preset manager when the compound is exported. If a compound has no task, it does not appear on the Task tab of the preset manager. Modify the tree by adding, editing, and connecting nodes in the usual way. Expand or collapse the list of exposed input parameters (shown expanded). When the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection. Expose a new input port or parameter. Drag this icon onto a nodes input. Unlike ports, parameters dont display a circle next to their labels but you can still drag this icon onto them to expose parameters such as references. Exposed input ports and parameters. Double-click on a ports name to change it while the list is expanded. Drag the circle icon onto another node to share the input. Right-click on a specific port to change the order, remove it, or set properties. I Expand or collapse the list of exposed output parameters (shown collapsed). Here again, when the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection. Expose a new output port. Drag an output port from any node onto the black circle. You can have as many output ports as you want. Exposed output ports. Double-click on a ports name to change it while the list is expanded. Right-click on a specific port to change the order, remove it, or set properties. Not all properties apply to output ports. Exit and return to the parent tree.

B C

J K

Exporting Compounds
Compounds are XML-based files that contain all the connections and data of all the nodes in the tree. They are saved as .xsicompound files. Exporting a compound allows you to use it in other trees and scenes, including sharing it with others by downloading to Softimage|NET. To export a compound, right-click on a compound (not over a port) and choose Export Compound and give a file name and location for it. You can then bring your exported compounds into an ICE tree in the usual way: from the preset manager, from the Nodes menu, using Compounds > Import Compound, or by dragging it from a Softimage file browser or folder window. If two or more compounds have the same name, Softimage logs a warning message telling you the locations of the version that will be used and the versions that will be ignored.

E F

Basics 269

Section 14 ICE: The Interactive Creative Environment

Versioning Compounds
Softimage uses a built-in versioning system to manage updates to exported compounds. You should use this versioning system instead of renaming .xsicompound files manually; otherwise, you may end up with multiple compounds that share the same name and version. If this happens, Softimage warns you that the locations of the file that will be used and the files that will be ignored. The major and minor version numbers are stored in the .xsicompound file. Major version changes are for large functional changes, while minor version changes are for bug fixes and small adjustments. If you modify a compound in an ICE tree and dont export the new version, it is identified by an asterisk.

Compounds that already exist in a scene are not updated automatically even if new versions are available. You can update them individually, or by using the Compound Version Manager (Compounds > Compound Version Manager).

270 Softimage

Section 15

ICE Particles
ICE is a complete visual programming environment thats allows you to create particle effects. In the real world, you think of particles as being small pieces of matter such as dust, sea salt, water droplets, sand, smoke, or sparks from a fire. With ICE particles, you can create all these types of natural phenomena and so much more!

What youll find in this section ...


Making ICE Particle Effects Particles that Bounce, Splash, Stick, Slide, and Flow Particle Goals Spawning New Particles Particle Strands Particle Instances ICE Particle States ICE Rigid Bodies ICE Particle Shaders

Basics 271

Section 15 ICE Particles

Making ICE Particle Effects


In Softimage, ICE particles are simply points in a point cloud that are simulated using nodes in the ICE tree. While that doesnt sound too exciting, you can actually create any type of particle effect you want with them: you can make natural phenomena such as smoke, fire, and sparks. But you can also make objects or characters act like particles: rocks tumbling, glass pieces breaking, grass growing, or humans running about.

Creating ICE Particles


You can create ICE trees on a point cloud to create particle simulations. This point cloud can simply exist in the scene or it can have its points (particles) emitted from a scene element. You can emit particles from polygon meshes and NURBS surfaces, from within object volumes, from curves, from nulls, from multiple objects and groups of objects, or even from any random position in global space.

ICE firework particles are emitted from different positions in space. When they reach a certain position, they explode into a new cloud of spawned particles.

The point clouds simulated ICE tree emits the particles and uses a state system to determine the condition under which the fireworks will explode and spawn a new cloud.

272 Softimage

Making ICE Particle Effects

Overview of ICE Particle Workflow

1 B

C A

Basics 273

Section 15 ICE Particles

274 Softimage

Making ICE Particle Effects

Create a point cloud or emit particles: The simplest way is to select one or more objects to be the particle emitter(s) and then choose ICE > Create > Emit Particles from Selection on the Simulate toolbar. This automatically creates a point cloud and sets up certain nodes in the ICE Tree for that point cloud. You can also set up these nodes in the ICE tree from scratch.

Edit the Emit parameters: These define how the particles will look and act when they are emitted: set the particle rate, speed, orientation, direction, color, mass, etc. Delete particles at their age limit: The Set Particle Age Limit compound determines how long the particle will live, then the Delete Particles at Age Limit compound does its job. If you dont put a limit on their age, the particles live the duration of the simulation, which you may want for some effects.

Open the ICE tree view: press Alt+9 or choose ICE > Edit > Open ICE Tree on the Simulate toolbar to open it in a floating window. The ICETree node is the main processing operator in an ICE tree. Because this is a particle simulation, the ICETree node type is simulated. The disc is the particle emitter object. The Get Data node for it simply gets the discs object data so that it can be used in the ICE tree. The Emit compound is responsible for emitting the particles and setting certain particle attributes (such as size, color, velocity, mass, shape, etc.) at emission time. At every frame, it adds points to the point cloud. The Emit compounds are always plugged into the top of the ICETree node in a particle simulation because you need to emit the particles before anything else can happen to them. 5

Add forces to make the particles move. The Add Forces compound is a hub into which other forces can be connected. Here, only the Turbulence value is modifying the force, but you could easily add other forces. Build the particle ICE tree: Plug in different nodes for different effects. Remember this: When you plug nodes into the ICETree node, their output gets evaluated at every frame. You want to do this if you want the particle data to be updated throughout the simulation, not just when the particles are emitted. When you plug nodes into any of the Emit compounds, their output is evaluated only once, upon particle emission. This means that data from this node wont change the particles during the rest of the simulation. You can connect ports together only if their data matches in type and context.

The Simulate Particles node updates the position and velocity of each particle at each frame based on its mass, position, and velocity of the previous frame. This node is usually plugged into the bottom of the ICETree node because it needs to take all information from the nodes that precede it and then use that information to update each particle at each frame.

Create a compound: This step is not necessary, but creating a compound of this particle effect lets you use it in other scenes or share it with others. Render the particles as volumes using ICE particle shaders, or render particles as surfaces using Softimage surface shaders.

Basics 275

Section 15 ICE Particles

Setting Up a Particle Emission From Scratch


You can create any type of particle emission by creating and connecting nodes yourself in a point clouds ICE Tree.
4

Create a point cloud by choosing Get > Primitive > Point Cloud > Empty Cloud (or any of the shapes) from any toolbar. In the ICE tree view, create a Simulated ICE Tree node: from the menu bar of the ICE Tree, choose Create > Simulated ICE Tree. Drag the emitters name from an explorer into the ICE Tree view to create a Get Data node for it. An easy way is to select the object and press F3 so that a floating explorer opens, then drag the emitters name from there into the ICE Tree.

4 5

Drag one of the Emit compounds from the preset manager into the ICE tree view. Drag the Simulate Particles node from the preset manager into the ICEtree view. Plug all the nodes together as shown here. You can then continue to build your ICE tree as you like.

276 Softimage

Particles that Bounce, Splash, Stick, Slide, and Flow

Particles that Bounce, Splash, Stick, Slide, and Flow


There are several compounds that let you control a particles motion and the way it interacts with object surfaces. These compounds are fairly complete within themselves, but you can also use them in conjunction with State systems as part of a larger effect. Within a State system, you can choose one of these compounds to create the effect that happens when a trigger compounds value is reached: for example, new particles can be spawned when they collide with an obstacle. When particles collide with an obstacles geometry, the collision geometry type that is used for the obstacles is its actual shape. As well, the particles size is taken into account upon collision. However, if youre using instances as the particle geometry, an approximated box (or sphere) is created around it: its actual shape is not used. Bouncing Using the Bounce Off Surface compound, you can make particles bounce off any number of obstacles upon collision. This is useful for creating ballistics, fragments, rain, or other debris bouncing off surfaces. Sticking - and Letting Go Using the Stick to Surface compound, you can make particles stick to an obstacles surface upon collision and remain there for the duration of their lifetime, such as paint being sprayed onto a surface. If the obstacless geometry is deform- or shape-animated, the particles will follow the shape of the changing surface. If you want the particles to unstick, you can set up some condition that makes them fall off, such as the obstacle being at a certain angle or a force threshold being reached. Making a Splash The Emit Splash from Surface Collision compound lets you emit a splash of new particles upon impact with an obstacle in a collision. This can be useful for creating effects like dust puffing up as a foot steps on the ground, mud splashing up as a ball hits a mud puddle, or sparks flying as two metal objects collide. Sliding and Dripping Off Using the Slide on Surface compound, you can make particles cling to and then slide on an obstacles surface. Obstacles can be shapeanimated and the particles will follow the shape of the changing surface. You can set the conditions in which the particles will drip off the obstacle, such as the obstacle being at a certain angle or a force threshold being reached. This is useful for creating sweat, water condensation, or other liquid drops sliding down a surface and then dripping off. Flowing Along a Curve You can make particles flow along a curve using the appropriatelynamed Flow Along Curve compound. This is useful for when you need particles to follow a path or direction, such as a school of fish swimming and turning suddenly, blood cells flowing through arteries, or lava oozing in streams down a mountain. Flowing Around an Object You can make particles flow around an obstacle using the Flow Around Surface compound. This is useful for doing effects such as water flowing around a rock in a stream, or for doing crowds/flocking simulations where the characters need to move around an obstacle.

Basics 277

Section 15 ICE Particles

Stick Splash Bounce

Slide

Flow Along

Flow Around

278 Softimage

Particle Goals

Particle Goals
When you create a goal for particles, the particles are attracted to it or repelled from it, similar to magnets. With goals, you can create a number of particle effects, such as drops of water forming into a puddle, paint being sprayed over a surface, or butterflies following the infamous ClubBot. Goals are part of the overall particle simulation, which means that any particles that are progressing toward a goal can also react to any other forces that are applied to them. In fact, goals are a force on particles, similar to how an attraction force works. Creating goals requires the Move Towards Goal compound. This compound lets you do two things: choose the location on the goal object to which the particles are attracted (or repelled) and define how the particles move toward the goal, such as their speed, acceleration, and alignment with the goal. Moving Toward One Goal You can set up a simple goal ICE tree with particles moving toward one goal, as you see on the left with the butterflies fluttering toward the walking ClubBot.

When a particle is born, it is assigned to a location on the goal object that you have defined, and it evolves towards this location throughout its life. This can be a random location on the goal, the location on the goal that it closest to the particle, or any location that you specify on the goal. The particles try to reach the position and/or shape of the goal objects, even as the goal moves or its surface is deformed. When the particles reach the goal, their velocity decreases and they stop until the goal moves or is deformed again.

Basics 279

Section 15 ICE Particles

Moving Towards Two Goals You can use two Move Towards Goal compounds with two goals and the If node to have particles move to two goals at once based on a condition that you set up.

Moving From Goal to Goal If you want to have particles move from one goal to another, you can create several sets of Move Towards Goal+goal object nodes, then plug each set into the Multi Goal Sequencer compound.

280 Softimage

Spawning New Particles

Spawning New Particles


Spawning generates new particles (points) from existing particles. These new particles are often referred to as particle trails. Spawning makes it easy to create effects such as fireworks, laser shots, streams of falling rain, or smoke trails.

If you spawn particles into the same point cloud, the shaders and forces on the spawned particles are the same as for the original point cloud. You can, however, add new attributes to the spawned particles to change their color, size, shape, and so on. Spawning into a different point cloud is similar to creating a new particle simulation because this point cloud has a separate ICE tree. You can also use different shaders for that point cloud, giving you control over the rendered look of the spawned particles. Spawning Trails The Spawn Trails compound gives you a basic way to spawn new particles. Here, pixie dust is spawned as a trail to follow the original particle as it travels upwards.

Different sets of spawned particles create fireworks with some help from a state system.

To spawn particles, you can use several different Spawn compounds, either on their own or as part of a larger effect via a State system: Spawn Trails is the basic compound that creates particle trails. Spawn on Collision spawns particles upon collision with an object. Spawn on Trigger spawns particles when a trigger value is reached. Each of the Spawn compounds is based on the Clone Point node. This node is responsible for creating new particles which are an exact replica of the original particles. The points, including all of their attributes (except ID, which is unique), are copied from a point cloud and then added to either the same point cloud or to another point cloud that you select.

Basics 281

Section 15 ICE Particles

Spawning Upon Collision You can use the Spawn on Collision compound in conjunction with any of the Surface Interaction compounds (such as Bounce Off Surface) to have new particles spawned when a particle collides with an object. Here, the small blue particles are spawned when an orange particle bounces on the surface of an obstacle.

Spawning on Trigger You can use the Spawn on Trigger compound with either a State system or just a simple If node system. Either way, you need to set the condition upon which new particles are spawned. Here, the small blue particles are spawned when the bubblelooking trail particles reach their age limit.

282 Softimage

Particle Strands

Particle Strands
Particle strands are solid shape trails that are drawn after a particle. These solid shapes are actually continuous segments of the shape that you have chosen for the particle, such as spheres, rectangles, boxes, discs, blobs, or even instanced particle geometry. Strands makes it easy to create effects that require more solid-looking objects than trails, such as ribbons, seaweed, or hair, and much more. Using the numerous Strands compounds, you have a lot of control over the appearance and movement of strands to create many types of particle effects. There are two main compounds you can use to actually create the strands using two different methods:

Create Strands is the basic compound that creates particle strands. You can use any particle shape for the strands. Generate Strand Trails lets you dynamically generate particle strands based on the length of the simulation and the number of segments, such as for growing things like grass or vines. One strand segment is created per second up to the maximum number of segments that you have set. Because these two compounds create strands in different ways, you can use only one of them at a time on the same set of particles.

Create Strands

Generate Strand Trails

Basics 283

Section 15 ICE Particles

Modifying the Strands


You can use any of these compounds with either the Create Strands or Generate Strand Trails compound to change the strand behavior: Bend Strand

Viewing and Rendering Strands


You can view the strands as trails in a 3D view if you set the particle shape or display type to Segment. The Segment shape draws a line from each point position through each strand position point. You can easily simulate hair using this shape type. If you want to see what the strands will look like rendered using any shape type, however, you need to draw a render region. You can render strands as surfaces using the Softimage surface shaders (such as Phong, Lambert, etc.), or you can render strands as volumes with the Particle Strand Gradient shader compound connected to the Particle Volume Cloud shader (or the Particle Renderer or Particle Shaper shader compounds).

Strand Sine Wave

Twist Strand

Turbulize Strand

284 Softimage

Particle Instances

Particle Instances
You can use any 3D geometric object, hierarchy of objects, or group of objects in place of particles to create many different effects. For example, you could use cars to create a flow of traffic or characters to create a crowd scene; or create flocking scenes with flying birds, butterflies, or insects. The object is assigned to a particle and stays with that particle for its lifetime.

To use instances as particles, you assign them to the point cloud using either the Instance Shape node or the Set Instance Geometry compound in the ICE Tree: If the instanced objects are not animated, you should use the Instance Shape node. This node provides the simplest and fastest way to create large numbers of instances whose geometry is not animated.

If the instanced object is animated, you can use the Set Instance Geometry and Control Instance Animation compounds. If an objects transformation is animated, it has to be in relation to its parent, and then you choose the parent as the instance object.

Instances are exact copies of their master object, including its materials (color) and rendering information. However, instances inherit the particles position, velocity, orientation, and size: the instances transformation is not used, although children keep their relative transformation to their parent. If youre using instances as particle shapes in collisions with an obstacle (as rigid bodies or using a compound with surface interaction, such as Bounce Off Surface), you can use an approximated box or sphere around it: its actual shape is not used.

Basics 285

Section 15 ICE Particles

Using Groups of Instanced Objects If youve selected a group of objects for the instances, you have some control over which object is instanced. The objects in the group are picked according to their creation order, as shown in the explorer. You can choose View > Reorder Tool in the explorer to change the objects order in the group. You can also plug in a Randomize compound into the Group Object Index port to change their order randomly.
0 1 2 3

There are three compounds that help you control animated instances: The Set Instance Geometry compound lets you choose the instance object to use, as well as which frame of its animation to use as the starting frame for each particle. The Control Instance Animation compound is like a playback control for how you want the instances animation played during the particle simulation. For example, if the instances animation goes from frames 1 - 50, you can choose to use only frames 20 - 40 for its animation in the particle simulation. The Control Displacement Instance Animation compound scales the instanced objects animation according to its size when it becomes a particle. For example, in the image below are two simple animated rigs that are used as master objects: one hopping, one rolling. When they become instanced, they are much smaller than their original size, so their animation cycles must go at a faster rate to cover the same distance as the original animation.
Master objects Instanced objects as particles

Controlling the Instances Animation If the instanced objects are animated, you can create crowds or flocking scenes, such as with flying birds, butterflies, or walking characters. If youre doing a crowd, for example, each character can walk at a different pace. If an objects transformation is animated, such as a walk cycle, it has to be in relation to its parent. You then select the parent as the instance object, and choose Object and Children in the Set Instance Geometry compound.

286 Softimage

ICE Particle States

ICE Particle States


Particle states offer a way of dividing particles into behavior groups. States are basically a combination of two things: a trigger and an effect. The trigger determines what causes the particles behavior to change, and the effect is the behavior that the particles adopt when the trigger is executed. Using these two elements, you can have many different combinations of things happening to particles. Of course, state systems can be much more elaborate than this, with many State compounds defining many behavioral changes happening to the particles. For example, you could create fireworks: the particle trail is seen going up into the sky, then suddenly bursts into another particle cloud at the end of its lifetime and leaves trails. Or by spawning particles at every frame, you could create something simple such as the smoky trail left by a hurtling fireball. How to Create a Particle State System This is an overview of the state workflow using a simple example of particles changing their size and shape when they reach their age limit.

Each State compound you define is plugged into the State Machine compound. This compound is the grand central station for the states. The states are executed in the order in which theyre plugged into the State Machine, from top to bottom.

Basics 287

Section 15 ICE Particles

288 Softimage

ICE Rigid Bodies

1 2

Create a particle simulation. Then drag in a State Machine compound and plug it into the ICE Tree node. Drag in a State compound for each behavior set you want to define. Plug each one into the State Machine compound in the order you want them executed. Disconnect the Simulate Particles compound from the ICE Tree node. This is because each State compound has its own Simulate Particles node inside. Give each state a unique ID to identify it in the system, and give it a unique color to help you identify each states particles as you work. Get a trigger compound and plug it into the first State compound. Here, the trigger compound tests when the age limit of the particle is reached. Define the triggers value. This is done by setting the particle age limit value, which is set to 2 seconds here. Specify the state to which you want the particle to transition when the trigger is pulled. In this case, State 0 transitions to State 1. Get one or more effect nodes or compounds and plug them into the second state. Here, these two Set compounds will set the particle shape and size when the particle age limit is reached. Define the effects behavior. The values of the Set Particle compounds are set so that the size decreases to 0.1 and the shape is changed to a Cone when the particle age limit is reached. You can keep adding state compounds and defining each trigger/effect set by following steps 4 - 9 to create more complex effects.

ICE Rigid Bodies


ICE rigid body dynamics let you create realistic motion using particles as rigid body objects, which are objects that do not deform in a collision. The PhysX dynamics engine that is used for creating non-ICE rigid bodies is also used for creating ICE rigid bodies. With rigid body particles, you can create effects that involve many small pieces that collide or accumulate, such as bricks, stones, or anything falling in a pile or being blasted apart. The node that makes all this rigid body action happen is the Simulate Rigid Bodies node. It updates a particles position, velocity, orientation, and angular velocity from the previous frame based on its rigid body attributes (such as elasticity, friction, shape, size, and scale).

6 7

Rigid body particles can collide with geometric objects (obstacles) that are set as rigid bodiesjust plug them into an Obstacle > Geometry port on the Simulate Rigid Bodies node in the ICE Tree. Rigid body particles can also collide with each other if theyre in the same point cloud. To create the illusion of particles from several point clouds colliding, you can use several emitters and/or emissions in the ICE tree of a single point cloud. Then set up the emission properties for each to look like different particles.

10

Basics 289

Section 15 ICE Particles

Passive Rigid Bodies


By default, rigid body particles are active, meaning that they change position and orientation when affected by forces and collisions with other rigid body particles in the same point cloud, as well as with obstacle objects. Using the IsPassiveRigidBody attribute, however, you can make rigid body particles passive so that they act as obstacles: their position does not change when active rigid body particles collide with them.

Luckily for the character, hes set as passive in this situation, so hes unscathed by the collision with the wall.

This character is made up of rigid body particle cubes and is heading for a rigid body particle wall. What will happen?

Not so lucky this time! Here, the wall is set as passive, but the character isnt. Ouch.

290 Softimage

ICE Rigid Bodies

Collision Geometry
The Simulate Rigid Bodies node calculates the particle and obstacle collisions according to the shape of their collision geometry. The collision geometry used is different depending on whether the rigid bodies are particles or obstacle objects: For rigid body particles, this is a bounding shape (sphere, capsule, or box) that approximates the particle Shape that you have set. Bounding shapes provide a quick solution for calculating particle collisions because they dont have to calculate detailed geometry. For instanced geometry on the particles, a box or sphere is used, not the instanced objects actual geometry. This is done to make the calculation time faster. For rigid body obstacle objects, this is a convex hull. Convex hulls give a quick approximation of an objects actual shape, with the results similar to an object being shrinkwrapped. Convex hull doesnt calculate any dips or holes in the rigid body obstacles geometry, but is otherwise the same as the obstacles original shape.

Elasticity and Friction


All rigid bodies use a set of collision attributes to calculate their reactions to each other during a collision. These attributes include elasticity and friction (static and dynamic). Elasticity determines how much energy is retained when rigid bodies collide. For example, when a basketball hits the ground, its elasticity influences how much the ball rebounds. Friction is the resistive force acting between rigid bodies that tends to oppose and dampen motion. For example, a bowling ball rolling on a carpet would have more friction than if it was rolling on a wooden floor. Its the combination of the friction and elasticity attributes of all rigid bodies involved in a collision that determines the results. Any rigid body attribute values you set for the particles are multiplied with the obstacles rigid body properties that you set. You can set the Elasticity, StaticFriction, and DynamicFriction attributes for each rigid body particle in a point cloud using a Set Data node. You can set the obstacles rigid body properties in the Simulate Rigid Bodies nodes property editor.
Elasticity set to 1 on both the table (obstacle) and the particles.

Convex hull collision geometry for an obstacle. The dip in the obstacle is not calculated so the boxes simply bounce off the obstacle top.

Elasticity set to 1 on only the table (obstacle).

Basics 291

Section 15 ICE Particles

ICE Particle Shaders


In many basic ways, rendering particles is similar to rendering any other object in Softimage. You can use shaders, all standard lighting techniques, set shadows, and apply motion blur. To the mental ray renderer, particles (point clouds) are a surface, just like any other object in Softimage, which means that you can plug many of the regular shaders into a point clouds render tree to shade the particles as surfaces. However, when you render particles, you often want them to look like certain types of volume-based phenomena, such as smoke, clouds, or fire. To help you do this, there are some special ICE shaders that are designed to create volumic effects on ICE particles. The Particle Volume Cloud shader is the main particle shader that renders the point clouds bounding box as a volume. You can also choose Get > Material > ICE Particle Volume from the Render toolbar to do some basic shader connection work for you. The Particle Density shader renders noise functions as density fields to create clouds, fireballs, smoke etc. This shader helps define each particles shape within the point clouds volume so that it doesnt look like a single volumetric mass. The Particle Gradient shader lets you change the color and/or density of the particles based on density, age, or any other ICE attribute that you define for the particles. The Fractal Scalar and Cell Scalar shaders are actually texture shaders, but they are very useful for adding noise to particle volume effects, such as smoke, clouds, and fire.

Dragon breath particles are rendered as a volume.

The point clouds render tree shows how the particles get their volume and definition from the Particle Volume and Particle Shape compounds. The color and density are defined by the Particle Gradient shader, with a Fractal Scalar adding noise to the density.

292 Softimage

ICE Particle Shaders

The ICE Particle shaders and shader compounds can be found in the preset manager or in the Nodes menu in the render tree.

Connecting Particle Shaders


To apply shaders to the point cloud, you connect them in the render tree. This gives you precise control over which shaders are connected together using which ports. The shaders that you choose to plug in to a point clouds Material node depend on whether you want to render the particles as a surface or as a volume. Particle Surfaces If you want to render particles as a surface, you can hook up any surface shader to the Surface port of the point clouds Material node. In fact, when you create an ICE particle simulation, the Phong shader is connected to the point clouds Material node by default.

Particle Shader Compounds Shader compounds are like ICE data compounds in that they contain several connected nodes (in this case, shader nodes). Once you have shaders hooked up together in the render tree as you like them, you can create a compound that contains all of these shaders. This allows you to create a standard particle shader effect, such as fire, that you can use in different scenes or share with other people. Softimage ships with several particle shader compounds that you can use as a starting point for your own shader effects. Start out with the Particle Renderer or Particle Shaper shader compound to render a volume quickly. These compounds use the Particle Volume Cloud shader as a base. The Particle Gradient Fcurve compound creates a curve that you can plug into a Gradient port of a shader to control the gradients falloff over distance. The Particle Strand Gradient compound sets up a color/alpha gradient for rendering particle strands.
Particles using the Blob shape are rendered using the Lambert shader.

Particle image sprites are rendered onto rectangle particle shapes using the Phong shader.

Basics 293

Section 15 ICE Particles

Particle Volume If you want to render particles as a volume, you need to first hook up the Particle Volume Cloud shader (or the Particle Renderer shader compound) to the Volume port of the Material node.

Bringing ICE Data into the Render Tree


The ICE attribute shaders allow you to control a point clouds shading in the render tree based on calculations done in the point clouds ICE tree. To use the attribute shaders, you must make sure that the particles first have the appropriate attribute created for it in the ICE tree. For example, you can control the particles transparency based on the distance to a surface by creating an ICE tree that sets a DistancetoSurface attribute, and then accessing that attribute in the render tree via an attribute shader. Another example is to override the color of the instanced particle geometry with the particles Color or Init_Color attribute. You can use the Attribute Color shader to do this.

Dry ice particle volume is created with a combination of several ICE particle shaders.

The Fractal Scalar and Cell Scalar shaders help to give this particle volume a unique look.

In the render tree, drag the appropriate shader from the Attributes group in the preset manager or from the Nodes menu. There is one Attribute shader per data type: Boolean, Color, Integer, Scalar, Transform, and Vector.

294 Softimage

Section 16

Shaders
A shader is a miniature computer program that controls the behavior of the rendering software during, or immediately after, the rendering process. Some shaders compute the color values of pixels. Other shaders can displace or create geometry on the fly. Shaders are used to create materials and effects in just about every part of a scene. An objects surface and shadows are controlled by shaders. So are scene lighting and camera lens effects. Even shaders parameters are usually controlled by other shaders. You can even apply shaders at the render pass level to affect the entire scene.

What youll find in this section ...


The Shader Library About Surface Shaders Applying Shaders to Scene Elements The Render Tree Building Shader Networks Creating Shader Compounds

Basics 295

Section 16 Shaders

The Shader Library


Softimages shaders are divided into several different categories based on how they are used in a render tree. shaders can be quickly and easily accessed from the preset manager, the browser, an explorer view, or the Nodes menu in the render tree view. Surface shaders are one of the most important types of shaders. All geometric objects in a scene have an associated surface shader, even if it is only the scenes default shader. Surface shaders determine an objects basic color and illumination characteristics. Surface shaders are also responsible for object transparency, refraction and reflectivity. 2D texture shaders apply a twodimensional texture onto an object, just as 3D texture shaders implement a three-dimensional texture into an object. They are connected to the objects surface shader to define the objects texture. Light shaders define the characteristics of the scenes light sources. For example, a spotlight shader uses the illumination direction to attenuate the amount of light emitted. A light shader is used whenever a surface shader uses a built-in function to evaluate a light. If shadows are used, light shaders normally cast shadow rays to detect occluding objects between the light source and the illuminated point. Lens shaders are used when a primary ray is cast by the camera. They may modify the rays origin and direction to implement cameras other than the standard pinhole camera and they may modify the result of the primary ray to implement effects such as lens flares, distortion, or cartoon ink lines.

296 Softimage

The Shader Library

Environment shaders are used instead of surface shaders when a visible ray leaves the scene entirely without intersecting an object or when the maximum ray depth is reached.They are used to create backgrounds for scenes, create quick-rendering reflections, light scenes with High Dynamic Range Images, and so on. Volume shaders modify rays as they pass through an object (local volume shader) or the scene as a whole (global volume shader). They can simulate effects such as clouds, smoke, and fog. There are also particle volume shaders that help you create these same types of effects on a point cloud. Toon shaders apply nonphotorealistic or cartoon style effects to objects. They control celanimation type properties like inking and painting. To get a full toon effect, its best to use the toon material shaders in conjunction with the toon lens shaders.

Shadow shaders determine how the light coming from a light source is altered when it is obstructed by an object. They are used to define the way an objects shadow is cast, such as its opacity and color. Lightmap shaders sample object surfaces and store the result in a file that can be used later. For example, you can use a lightmap shader to bake a complex material into a single texture file. Lightmaps are also used by the Fast Subsurface Scattering and Fast Skin shaders to store information about scattered light. Photon shaders are used for global illumination and caustics. They process light to determine how it floods the scene. Photon rays are cast from light sources rather than from a camera.

BBC Everyman: Animation by Aldis Animation

Basics 297

Section 16 Shaders

Output shaders operate on images after they are rendered but before they are written to a file. They can perform such as glows, blurs, background colors, and so on.

Displacement shaders alter an objects surface by displacing its points. The resulting bumps are visibly raised and can cast shadows.

Realtime shaders allow you to use the render tree to build and control the multipass realtime rendering pipeline. You can connect these shaders together to achieve a multitude of sophisticated rendering effects, from basic surface shading to complex texture blending and reflection.

Material phenomena are combinations of shaders that are packaged into a single shader node. These are often used to create more complex rendering effects. Connecting a material phenomenon to an objects material prevents that material from accepting other shaders directly, though you can extend the phenomenons effect by driving its parameters with other shaders. The Fast Subsurface Scattering and Fast Skin shaders are examples of material phenomena.

Geometry shaders are evaluated before rendering starts. This allows the shader to introduce procedural geometry into the scene. For example, a geometry shader might be used to create feathers on a bird or leaves on a tree.

Tool shaders let you create a shader from scratch or extend an existing one. Although some tool shaders can be used on their own, many of them must work in conjunction with another to achieve a highly customized effect. Some examples of tool shaders include: Color Channels, Conversion, Image Processing, Math, Mixers, and Texture Generators, Texture Space Controller, and Texture Space Generators.

298 Softimage

The Shader Library

The Preset Manager


Many shaders and material (and ICE node) presets are installed with Softimage, all accessible from the preset manager. The preset manager is available on the left side of the render tree (and the ICE tree). You can also open it as a floating window by choosing View > General > Preset Manager from the main menu. You can apply shaders or materials by dragging and dropping them onto objects in the scene. This connects the shader or material to the objects Material node ports. You can also drag shaders or shader compounds into the render tree as a shader node that you can then connect to an objects tree to build up an effect.
D E F G H

A B

Select from Materials, Shaders, or ICE Nodes type of presets. Select Favorites, All Nodes, or a specific category. You can add items to your Favorites for easier access to presets that you use frequently. Items in the selected category appear in this panel. You can drag and drop materials onto objects and material libraries; shaders onto objects and into render trees, and ICE nodes into ICE trees. Sets thumbnail size and arrangement. Refresh. Clicking this button forces an update. This may be necessary if you have moved, added, or removed preset files on disk since opening the preset manager. Enter all or part of a name to filter the presets that are displayed in the right panel (3). Filtering works across all categories. In this case, grad is entered, so all shaders in all categories that have grad in their names appear in the right panel.

D E

G A B H

Recalls previous filter strings. Clears the filter string (show all nodes). You can also delete the text string to show all nodes again.

Basics 299

Section 16 Shaders

About Surface Shaders


Surface shaders are some of the most commonly used shaders in Softimage. Each one defines an objects basic surface characteristics, like color, transparency, reflectivity, specularity, and so on, according to a specific shading model. Shading models determine how an objects surface reacts to scene lighting. Phong Uses ambient, diffuse, and specular colors. This shading model reads the surface normals orientation and interpolates between them to create an appearance of smooth shading. It also processes the relation between normals, the light, and the cameras point of view to create a specular highlight. The result is a smoothly shaded object with diffuse and ambient areas of illumination on its surface and a specular highlight so that the object appears shiny, like a billiard ball or plastic. Lambert Uses the ambient and diffuse colors to create a matte surface with no specular highlights. It interpolates between normals of adjacent surface triangles so that the shading changes progressively, creating a matte surface. The result is a smoothly shaded object, like an egg or ping-pong ball. Blinn Uses diffuse, ambient, and specular color, as well as a refractive index for calculating the specular highlight. Blinn produces results that are virtually identical to Phong except that the shape of the specular highlight reflects the actual lighting more accurately when there is a high angle of incidence between the camera and the light. Blinn is useful for rough or sharp edges and simulating a metal surface. The specular highlight also appears brighter than the Phong model. Cook-Torrance Uses diffuse, ambient, and specular color, as well as a refractive index used to calculate the specular highlight. It reads the surface normals orientation and interpolates between them to create an appearance of smooth shading. It also processes the relation between normals, the light, and the cameras point of view to create a specular highlight. Cook-Torrance produces results that are somewhere between Blinn and Lambert and is useful for simulating smooth and reflective objects, such as leather. Because this shading model is more complex to calculate, it takes longer to render than the other shading models.

300 Softimage

About Surface Shaders

Strauss Uses only the diffuse color to simulate a metal surface. The surfaces specular is defined with smoothness and metalness parameters that control the diffuse to specular ratio as well as reflectivity and highlights. Anisotropic Sometimes called Ward, this shading model simulates a glossy surface using an ambient, diffuse, and a glossy color. To create a brushed effect, such as brushed aluminum, it is possible to define the specular colors orientation based on the objects surface orientation. The specular is calculated using UV coordinates.

Constant Uses only the diffuse color. It ignores the orientation of surface normals. All the objects surface triangles are considered to have the same orientation and be the same distance from the light. It yields an object whose surface appears to have no shading at all, like a paper cutout. This can be useful when you want to add static blur to an object so that there is no specular or ambient light. Toon This model begins with a constant-shading-like base color. Ambient lighting, as well as highlights and rim lights are composited over the base color to produce the final result. The result is a cel-animation type of shading that can vary enormously depending on how you configure the highlights and rim lights. The toon shading model is typically used in conjunction with the Toon Ink Lens shader (applied to the render pass camera), which creates the cartoon-style ink lines.

Basics 301

Section 16 Shaders

Basic Surface Color Attributes


You can create a very specific color for an object by defining its ambient, diffuse, and specular colors separately on the Illumination page of its surface shader property editor. To open an objects surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

Diffuse This is the color that the light scatters equally in all directions so that the surface appears to have the same brightness from all viewing angles. It usually contributes the most to an objects overall appearance and it can be considered the main color of the surface. Ambient This color simulates a uniform non-directional lighting that pervades the entire scene. It is multiplied by the scene ambience value, and blended with the diffuse color. Often, the ambient color is set to the same value as the diffuse color, allowing the scene ambience to provide the ambient color. Specular This is the color of shiny highlights on the surface. It is usually set to white or to a brighter shade of the diffuse color. The size of the highlight depends on the defined Specular Decay value. Specular highlights are not visible in all shading models.

The combined result of the ambient, diffuse, and specular colors/lighting contributions.

Not all shading models support all of these basic characteristics. For example, only the Phong, Blinn, Cook-Torrance and Anisotropic shading models support specular highlights (although the Strauss shaders Smoothness and Metalness parameters affect specularity). Similarly, the Strauss shader does not support an ambient color, while most other models do. Its also worth noting that because different shading models compute these basic characteristics, the parameters that control the attributes vary from one property editor to another. For example, the Anisotropic shader has much more elaborate specular highlight controls than the Phong shader.

302 Softimage

Reflectivity, Transparency, and Refraction

Reflectivity, Transparency, and Refraction


In addition to controlling an objects basic surface shading characteristics, surface shaders also control reflectivity, transparency, and refraction. Parameters for controlling these attributes are on the Transparency/Reflection tab of the surface shaders property editor. To open an objects surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

As an object becomes more reflective, its other surface parameters, such as those related to diffuse, ambient, and specular areas of illumination, become less visible. If an objects material is fully reflective, its other material attributes are not visible at all. Reflectivity values are defined using color sliders. Setting the color to black makes the object completely non-reflective, while setting the color to white makes it completely reflective. If necessary, you can even control reflectivity in individual color channels. Controlling Reflectivity with Textures You can also control reflectivity using a texture by connecting the texture to the surface shaders reflectivity input.
In this example, the surface shaders reflectivity parameter is connected to a simple black and white stripe texture. The white areas are reflective, while the black areas are not.

Reflectivity
A surface shaders Reflection parameters control an objects reflectivity. The more reflective an object is, the more other objects in the scene appear reflected in the objects surface.

No reflectivity in gray balls material

35% reflectivity

Normally, grayscale images are used since black, white and shades of gray adjust reflectivity uniformly in all color channels. Black areas of the image make the corresponding portions of the object non-reflective, white areas make the corresponding portions of the object completely reflective, and gray areas make the corresponding portions of the object partially reflective.

Basics 303

Section 16 Shaders

Transparency
A surface shaders Transparency parameters control an objects transparency. The more transparent an object is, the more you can see through it.

Controlling Transparency with Textures As with reflectivity, you can also control transparency using a texture by connecting the texture to the surface shaders reflectivity input.
In this example, the surface shaders transparency parameter is connected to a simple black and white stripe texture. The white areas are transparent, while the black areas are opaque.

75% transparency

70% transparency with 30% reflection.

As with reflectivity, transparency affects the visibility of an objects other surface attributes. You can compensate for this by increasing the attributes values, such as changing specular color values that were 1 on an opaque object to 10 or higher on a transparent object. Transparency values are also defined using color sliders. Setting the color to black makes the object completely opaque, while setting the color to white makes it completely transparent. If necessary, you can even control transparency in individual color channels.

Normally, grayscale images are used since black, white and shades of gray adjust transparency uniformly in all color channels. Black areas of the image make the corresponding portions of the object opaque, white areas make the corresponding portions of the object completely transparent, and gray areas make the corresponding portions of the object partially transparent or translucent.

304 Softimage

Reflectivity, Transparency, and Refraction

Refraction
When transparency is incorporated into an objects surface definition, you can also define the refraction value. Refraction is the bending of light rays as they pass from one transparent medium to another, such as from air to glass or water.

Refraction value of 0.9

Refraction value of 1.1

You can set the index of refraction from a surface shaders property editor. The default value is 1, which represents the density of air. This value allows light rays to pass straight through a transparent surface without bending. Higher values make the light rays bend, while values less than 1 makes light rays bend in the opposite direction, simulating light passing from air into an even less dense material (such as a vacuum). Refractive index values usually vary between 0 and 2, but you can type in higher values as needed.

Basics 305

Section 16 Shaders

Applying Shaders to Scene Elements


There are a number of ways to apply and connect shaders in Softimage. You can use any of these methods depending on the tool you prefer to use or the task you need to perform. Render tree: You can connect many shader nodes together to build up a tree for the object. Easy! See The Render Tree on page 307. Netview: Drag and drop a shader or material preset from a netview window (press Alt+5) onto the appropriate type of object to apply it, or drag it into the render tree as a node. Render toolbar: - Choose Get > Material on the Render toolbar to create and apply materials to selected objects. - The Get > Shader menu has sub-menus containing shaders that you can connect to all of an objects Material nodes input ports. - The Get > Texture menu lists commonly used texture shaders and allows you to connect them to any combination of a surface shaders ambient, diffuse, transparency and reflection ports. Material manager: You can use this tool to create and apply a material to an object. See The Material Manager on page 317. Shaders property editor: A shaders property editor contains all the parameters that you can edit. To the right of each parameter, there is a plug connection icon . Clicking this icon opens a menu that lists shaders that you can attach directly to that parameter.

Preset manager: Drag an drop a shader or material preset from here onto the appropriate type of object to apply it, or drag it into the render tree as a node. See The Preset Manager on page 299.

Shader stacks: Some scene elements, like render passes and cameras, have shader stacks in their property editors where you apply shaders that affect the whole scene rather than individual objects.

306 Softimage

The Render Tree

The Render Tree


The render tree is where you can connect shader nodes together to build trees that create a visual effect for an object. You can have one render tree per object. To open the render tree, select an object and press 7 or choose View > Rendering/Texturing > Render Tree. Click the Refresh icon in the render tree to show the shader nodes available for that object. Shaders in the render tree are called nodes as a way of describing their representation as a container. These nodes can be single shader nodes or shader compound nodes. Shader compounds are shader packages built from shader nodes and possibly other shader compounds. Every shaders node exposes a set of inputs and outputs (called ports) for most or all of a shaders parameters. You connect shaders together by simply dragging a connection arrow from one shaders output port to another shaders input port. Its so easy!

M O

N K

Basics 307

Section 16 Shaders

A B C D E F G H I J

Memo Cams. You can save and restore up to four views of the render tree workspace. Lock. Prevents the view from updating when you select other objects in the scene. Refresh. When the view is locked, clicking this button forces it to update with the current selection in the scene. Clears the render tree workspace. Opens the preset manager in a floating window. Displays or hides shaderballs on the shader nodes. Displays or hides the preset manager embedded in the left panel (10). Name and path of the current Material node. Birds Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Birds Eye View. Embedded preset manager shows all shader nodes and compounds that are available to use. You can drag and drop shader nodes from here into the render tree workspace. You can also get shaders from the Nodes menu.

Connecting Shader Nodes


You connect shader nodes by clicking and dragging an output port from the right side of one shader node onto an input port on the left side of another shader node. Data flows along the connection from the first node and is processed by the second node. All data ends up being processed by the Material node.
Data travels from this nodes output port ... .. and is processed by this node via its input port.

When a port is connected, the value of its corresponding parameter is driven by the connection, which means that you can no longer set the parameters value in that shaders property editor. In fact, the parameter and its controls (checkboxes, sliders, etc.) are not even displayed. If you remove the connection, the controls reappear in the property editor.

K L

The render tree workspace. This is where you can connect shader nodes together to build trees. Connection arrow between shaders output and input ports shows the data flow between them. Data always flows from the left to the right of the tree. Shader node. This shader is a texture shader, as indicated by its light green color. Each type of shader has a different color. Texture layers. These layers let you mix several textures together so that each texture is blended with the cumulative result of the preceding textures. Material node: This node acts like a placeholder for every shader that is applied to an object. Every object must have one or it wont render. Its input ports support each type of shader.

M N

308 Softimage

The Render Tree

Node Color Codes


Every shader node in the render tree is color coded, as are each of its ports. This coding system helps you visualize which shaders are doing what within their render tree structures.
Material node Material phenomenon Surface shader Texture shader Lightmap shader Environment shader Realtime shader Volume shader Output shader

The following table shows which input/output port color is assigned to which type of value: Color Input/ Output Port Color Result Returns or outputs a color (RGB) value. These ports are usually used in conjunction with the surface of an object or when defining a light or camera. Represents a scalar input/output with any value between 0 and 1. Represents an output/input that corresponds to vector positions or coordinates. Represents an input/output that corresponds to a 0 or 1, or On/ Off. Consists of a single integer (such as 2 or 73). Accepts or returns an image file. Accepts connections from other realtime shaders and outputs to other realtime shaders or to the Material nodes RealTime port. Outputs the result of a lightmap shader to the Material nodes Lightmap port. Outputs the result of a material phenomenon shader to the Material nodes Material port.

Scalar
Lens /camera shader Light shader

Vector

Boolean
Click the arrow to expand or collapse a node. Click the port to create a connection arrow.

Integer Texture/ Image Clip RealTime

Selected nodes are highlighted in white.

Shader node ports are also color coded. A nodes output is indicated by a port (colored dot) in the top right of the node, while each input port is indicated on the left side of the node. The color of a port identifies what type of input value the port will accept, and what type of value it will output.

Lightmap Material Phenomenon

Basics 309

Section 16 Shaders

Building Shader Networks


The process of building shader networks in the render tree is best explained visually. Essentially, you create an effect by connecting shaders to an objects material, using other shaders to control those shaders parameters, and so on. There are no hard and fast rules for how shaders should be connected, and experimenting with different connections is usually rewarding. What follows is a simple example of how to connect shaders in the render tree to build an objects material. 1 To begin with, the mug has a Phong shader connected to its material nodes Surface port to create basic surface shading ambient and diffuse colors, specular highlights and, in this case, some reflectivity. Since there are no other objects in the scene, the mugs reflectivity is not apparent. Connecting an Environment map shader to the material nodes Environment port makes the reflectivity visible and creates some reflections on the mugs surface. Now its time to add some color and detail. Connecting two textures to a Mix2Colors shader blends the textures together. The combined result is then connected to the Phong shaders Ambient and Diffuse ports, coloring the mugs surface.
1

Connecting a Bump Map generator shader to the material nodes Bump Map port adds some bumpiness to the mugs surface. Note how this affects the reflections from the environment map. The mug now looks more like stoneware than porcelain. Finally, connecting an Ambient Occlusion shader between the Phong shader and the material nodes Surface port darkens the mug where it occludes itself. The Phong shaders branch, which includes the textures, is connected to the Ambient Occlusion shaders Bright Color port, while the Dark Color is set to black. The Ambient Occlusion effect is most visible on the inside of the mug and the inner surface of the handle.

310 Softimage

Building Shader Networks

Basics 311

Section 16 Shaders

Creating Shader Compounds


In the render tree, you can hook up shaders together and set their values to create an effect. Once you have things set up as you like, you can then create a compound of this tree that contains all the shaders. Shader compounds allow you to create an effect and save it in one node, then use it in different scenes or share it with other users. You can expose only the parameters of each shader that you want others to see and adjust.
1 3 2

You can create a shader compound containing any type of shader. The compound can contain many shaders connected together, or just one shader, if you like. Softimage ships with some shader compounds for ICE particles and subsurface scattering effects open them up and see what makes them tick!
.

6 4

10

312 Softimage

Creating Shader Compounds

Overview of Creating a Shader Compound These steps show the basic process of how to create a shader compound of your own. 1 In the render tree, select all the shader nodes you want to save in the compound. To keep the compound generic, you should leave out the Material node so that you can apply this compound to any object. From the render tree toolbar, choose Compounds > Create Shader Compound. This creates a compound named ShaderCompound, which contains all the shaders that have just disappeared.

You can rename an exposed port by right-clicking on it and choosing Properties, then entering new names: The Display Name is the one that is displayed in the compound node in the render tree and in the compounds property editor. If this is blank, then the display name is the same as the Name. The Name is the one that is displayed in the blue bar on the left here and is used in scripting. Double-clicking on an exposed port or right-clicking and choosing Rename sets the scripting name only, not the display name.

8 3 Click the little e on your new compound to edit it. This opens up the compound editor in which you can expose ports for the compound. Only exposed ports will be available for connections and editing back in the render tree. The bar on the left shows all the exposed shader ports for your compound. Click this arrow to expand or collapse the list of exposed input parameters. When the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection. To expose a shader port, click the black circle beside Expose Input and drag it to a port. That port is included on the bar. Keep doing this for every port you want to expose.

Create the output port by dragging an output port from the shader on the furthest right (the shader into which all other shaders are plugged) to black dot on the bar on the right. In the bar at the top, double-click where ShaderCompound is written and give your compound a class name (in this example, its Bonfire). Do the same for the Category, which is where it will show up in the groups in the preset manager, such as Particle. If you like, you can add comments to your compound to document how everything inside it works.

4 5

10 11

Click the little x box in the upper-left corner to close the compound and return to the regular render tree. Choose Compounds > Export Shader Compound from the render tree toolbar to export your compound so that it can be used in other scenes or by other users.

Basics 313

Section 16 Shaders

314 Softimage

Section 17

Materials
In Softimage, an objects look and feel is defined by one or more shaders that are plugged into the objects material node. The material node itself provides access to the objects attributes while the shaders control how those attributes appear when rendered. This section introduces ways of creating and working with materials.

What youll find in this section ...


About Materials The Material Manager Creating and Assigning Materials Material Libraries

Basics 315

Section 17 Materials

About Materials
Every object needs a material. In Softimage, the term material is used to refer to the cumulative effect of all of the shaders that you use to alter an objects look and feel. Strictly speaking, though, materials in Softimage are really just containers for an objects various attributes. If an objects material has no shaders attached to it, nothing defines the objects look, and the object wont render. The easiest way to understand what a material is to look at it in the render tree where it is represented by a Material node. The Material node lists all of the inputs to a given material. These inputs are sometimes referred to as ports. Each port controls a set of object attributes. When the material is assigned to an object, the shaders that you connect to these ports alter the corresponding attributes. For example, the Surface port controls object surface characteristics. By connecting a shader or a network of shaders to this port, you can change an objects color, transparency, reflectivity, and so on. The important thing to understand is that nearly every change you make to an objects appearance involves connecting shaders to define the objects material. When you assign a local material to an object, it replaces the default scene material for that object only. If you remove or delete the objects local material, the object inherits the default scene material again.
Default Scene Material You can modify the default scene material as you would any other material and the changes are applied to any objects that inherit it.

If you delete the default scene material, the oldest created material in the scene becomes the new default material, and is assigned to all objects to which the previous default material was assigned (whether explicitly or through propagation).

Materials and Surface Shaders


Its worth noting that all new materials that you create in Softimage start out with some type of surface shader attached to them. This provides basic surface shading so that the material is renderable from the beginning. For example, if you create a material from within a material library, it has a Phong shader attached to its Surface, Shadow, and Photon ports. If you create a material using a command from the Render toolbars Get > Material menu, you can choose a surface shader to attach to the material.
By default, new materials have a surface shader, like the Phong shader, attached to them.

The Default Scene Material


Every new scene has a default material, called Scene_Material, which is assigned to the scenes root in branch mode. An object (in a hierarchy or not) that does not inherit a material from a parent, and does not have a locally-defined material, inherits the scenes default material. In the explorer, you can view the default material in the material librarys hierarchy, or as a node of the scene root, which you can display by choosing Local Properties from the Show menu.

316 Softimage

The Material Manager

The Material Manager


The material manager is a tool that is designed for creating, managing, and editing all your materials and libraries. You can open the material manager by pressing Ctrl+7 or choosing Modify > Materials from the Render toolbar. Its different areas are outlined here.
D C

Basics 317

Section 17 Materials

The left panel contains the explorer that has the Scene (cluster) and Image Clip tabssee the image on the right for more details. In the Scene explorer, you can switch between local materials (applied locally on object or cluster itself) and applied materials. Selecting a material in the explorer highlights it in the shelf and displays it in the bottom panel. In the Image Clip explorer, all image clips in the scene are displayed.

Scene and Image Clip Explorer

A D E

B C

On the top, the command bar provides tools for applying materials, such as creating, duplicating, or deleting materials, as well as tools for managing material libraries. The middle right is a shelf with shaderballs for the materials in your scene. Multiple libraries appear on separate tabs. Click a shaderball to select the material, or drag a shaderball onto an object or cluster in the scene to apply it. The tabs on the bottom of the material manager can display one of several views: The selected material in the render tree (default view). The selected material in the texture layer editor. A list of image clips used by the selected material. Right-click on a clips thumbnail for a context menu that allows you to edit a clips properties and other options. In the Material Manager preferences, you can set the size of the thumbnails used on this tab. A list of objects and clusters that use the selected material (Who Uses?). In the Material Manager preferences, you can set the size of the thumbnails used on this tab.

C D E

Select the thumbnail size for the clips displayed in this list: small, medium, large, or list view. You can turn off the display of the thumbnails to optimize performance. Filters clips by All, Used, and Unused clips. Filters clips displayed by scene layer. Filter clips displayed by user keywords. Filters clips displayed by name. Right-click a clip to display a context menu. Drag and drop one or more images into the image clip explorer panel to create sources and clips.

B C D E F G

318 Softimage

Creating and Assigning Materials

Creating and Assigning Materials


Giving an object a material is the first step in defining its look. There are a couple of different tools you can use to create new materials and assign them to objects. Once you create a material, it belongs to a material library, and you can assign it to as many objects as youd like.

Assigning Any Material


Once a material is created, you can access it from a material library and then assign it to an element in your scene. The material manager provides you with several ways of doing this, but you can also use the explorer or the Get > Material > Assign Material command on the Render toolbar. This is one of the easiest ways to assign a material using the material manager. 1. Click a tab to select a material library. 2. Select a material you want to assign. 3. Drag the materials shaderball and drop it on an unselected object, or on one or more selected objects to apply it.
1

Creating a New Material


You can create materials and assign them to objects using the material manager, the explorer, or the Get > Material commands on the Render toolbar. With any of these methods, a new material is created, consisting of a Material node with a Phong shader connected to its Surface, Shadow, and Photon ports. The easiest way to do this is to use the material manager. 1. Select one or more objects. 2. Click the Create New Material icon, or choose a material from the Create menu. 3. Click the Assign Material to Selected Objects icon to apply the material to the objects.
1

3 2 3

Basics 319

Section 17 Materials

Assigning Materials to Polygons and Clusters


Using the same tools and techniques that you use to assign materials to objects, you can assign materials locally to selections of polygons and/ or polygon clusters on a polygon mesh object. If you choose the former, a cluster is created from the selection. The clusters local material always overrides the one assigned to the entire object.

Assigning Materials to Hierarchies


You can assign a material to a hierarchys parent object using the same tools and techniques that you use to assign materials to objects. The only thing different is that you must middle-click the parent object to branch-select it. Children in the hierarchy that dont have a locally assigned material then inherit the parents material. For example, if you have an object such as a table, you may want the legs and top to be the same color. If you assign a material to color the parent (table top), the material definition is propagated to its children (table legs). Materials assigned to hierarchies are subject to the same rules of propagation as any other properties.

Polygon mesh object with global material assigned.

Object with specific polygons selected.

Local material assigned to selected polygons.

Simple Propagation The larger sphere was branch-selected and given a checkerboard material. Because it was applied in branch mode, the material is inherited by all the descendants.

In the explorer, a clusters material appears under the clusters node, rather than directly under the objects node. To access it, expand the objects Polygon Mesh > Clusters > name of cluster node.

The clusters material is here. Local Material Application One sphere was selected and given a blue material. This material is local for the selected object only, but not for any of its children.

The objects material is here.

If you remove a material from a cluster, the material inherits the material either assigned to or inherited by the object.

320 Softimage

Material Libraries

Material Libraries
Most properties in Softimage are owned by the scene elements to which theyre applied. Materials, on the other hand, belong to material libraries. Material libraries are common containers for all of the materials in a scene. Each time you create a material, its added to a material library. Although all of the materials in a scene belong to a library, they are used only by the objects to which they are assigned. The material manager is designed to let you easily view and manage your material libraries. Most of the commands that you need for managing your libraries are found in the Libraries menu. Click a library tab to switch between libraries. The selected tab becomes the current library. Unless you explicitly create a new material in another library, all newly created materials are added to the current library. You can also manage your libraries using an explorer with its scope set to Materials (press M). Storing materials in a library makes it easy to share a single material between several objects. It also allows you to access and edit all of the materials in a scene from a single place. Furthermore, because materials belong to libraries and not to individual objects, you can delete an object from the scene, but keep its material for later use. If you no longer want to use a material, you can simply delete it once, regardless of the number of objects to which its assigned. You can create as many material libraries as you need. For example, you might want to keep separate libraries for different types of materials (wood, metals, rock, skin, scales, and so on), or create a material library for each character in your scene. You can drag and drop materials onto the Favorites tab in the material manager to create shortcuts to materials that you want to keep handy. You can also create your own custom favorites tabs to collect and sort the material shortcuts as you like. By default, material libraries are stored internally as part of the scene. However, you can store them externally, as dotXSI (.xsi) or material library (.xsiml) files, which allows you to share them between multiple scenes.

The Default Material Library


Every new scene has a material library called DefaultLib. Initially, the library contains only the default scene material, but all new materials that you create in the scene are added to the default library until you create or import a new library and set it as the current library.

Basics 321

Section 17 Materials

322 Softimage

Section 18

Texturing
Texturing is the process of adding color and texture to an object. You can use textures to define everything from basic surface color to more tactile characteristics like bumps or dirt. Textures can also be used to drive a wide variety of shader parameters, allowing you to create maps that define an objects transparency, reflectivity, bumpiness, and so on.

What youll find in this section ...


How Surface and Texture Shaders Work Together Types of Textures Applying Textures Texture Projections and Supports Editing Texture Projections UV Coordinates Editing UV Coordinates in the Texture Editor Texture Layers Bump Maps and Displacement Maps Baking Textures with RenderMap Painting Colors at Vertices

Basics 323

Section 18 Texturing

How Surface and Texture Shaders Work Together


Surface shaders and texture shaders work together to create an objects look. A surface shader defines how an object responds to lighting, and defines other basic characteristics such as transparency and reflectivity. A texture shader applies either an image or a procedural texture onto the object. The texture doesnt cover the surface shader; rather, it is combined with the surface shader such that the object is textured and responds correctly to scene lighting. In most cases, a surface shader is connected to the material nodes Surface port, and then a texture shader is connected to the Ambient and Diffuse parameters of the surface shader. The following example illustrates how combining texture shaders and surface shaders affects the final result.

A Blinn shader connected to the Surface port of the cows bodys material node. The hoofs, horns, and so on have different materials.

A texture shader connected to the Surface port of the cows bodys material. Note that without a surface shader, the lighting appears constant.

Using the texture shader to drive the surface shaders Ambient and Diffuse colors produces a textured cow that responds properly to lighting.

324 Softimage

Types of Textures

Types of Textures
Softimage allows you to use two different types of textures: image textures, which are separate image files applied to an objects surface, and procedural textures, which are calculated mathematically. An image clip is a copy, or instance, of an image source file. Each time you use an image source, an image clip of it is created. You can have as many clips of the same source as you wish. You can then modify the image clip without affecting the original source image. Clips are useful because they allow you to create different representations of the same texture image (source), such as five different blur levels of the same source image. Also, clips are memory-efficient because the source is only loaded once, regardless of the number of clips are created from it.

Image Textures
Image textures are images that can be wrapped around an objects surface, much like a piece of paper thats wrapped around an object. To use a 2D texture, you start with any type of picture file (PIC, TIFF, PSD, etc.). These can be scanned photos or any file containing data that describes all the pixels in an image, RGB or RGBA data.

Procedural Textures
Procedural textures are generated mathematically, each according to a particular algorithm. Typically, they are used to simulate natural materials and patterns such as wood, marble, rock, veins, and so on. Softimages shader library contains both 2D and 3D procedural textures. 2D procedurals are calculated on the objects surface according to their texture projections while 3D procedurals are calculated through the objects volume. In other words, unlike 2D textures, 3D textures are projected into objects rather than onto them. This means they can be used to represent substances having internal structure, like the rings and knots of wood.

2D textures are wrapped around objects.

Image Sources and Clips Every time you select an image to use as a texture or for rotoscopy, an image clip and an image source of the selected image is created. An image source is not really a usable scene element. It is merely a pointer to the original image stored on disk. Images sources are listed in your scene in the Sources folder of the Scene Root. They can be stored within your project folder structure, or outside of it.

3D textures are defined throughout an object.

Basics 325

Section 18 Texturing

Applying Textures
There are a number of ways to connect textures to objects in Softimage. These include: Using the render tree, where you can choose a texture from the Nodes > Texture menu. Once you choose a texture, it is added to the render tree workspace and you can connect it to the materials or other shaders ports. Using the parameter connection icon menu in a shaders property editor lists textures that you can attach directly to the parameter. Attaching a texture to a parameter lets you control the parameter with a texture instead of a simple color or numeric value. This is a convenient way to connect a texture to a surface shaders Ambient and Diffuse ports immediately after applying the surface shader to the object. Adding More Textures To add a texture in addition to the one applied using Method 1, choose Modify > Texture > Add from the Render toolbar.
Choosing a texture from the Nodes > Texture menu adds it to the render tree workspace.

This adds a new texture layer to the objects surface shader. The parameters that you add the new texture to are added to the layer, and the layers texture is blended with them.
Choose Modify > Texture > Add from the Render toolbar.

Using the Get > Texture menu lists commonly used texture shaders that can be connected to any combination of a surface shaders ambient, diffuse, transparency and reflection ports.

The menu lists texture shaders that can be blended with the surface shader via a new texture layer.

326 Softimage

Texture Projections and Supports

Texture Projections and Supports


Typically when you apply a texture to an object, a texture projection and texture support are created. The texture support is a graphical representation of how the texture is projected on the object. It defines the type of projection and applies textures to your 3D objects using that definition. By default, an objects texture support is constrained to the object; otherwise, animated objects would move through space without their projection. Transforming the texture support is a useful way of animating or repositioning a texture on an object. Texture projections exist on the support and record the correspondence between pixels in the texture and points on the objects surfacein other words they define where the texture is projected on the object. You can transform a texture projection on a given support to define the part of the object to which the texture is applied. You can then add any number of projections, adjacent or overlapping, to the support. The sphere shown below has three texture projections connected to its support. The wireframe view on the left shows how the projections are positioned, and the textured view on the right shows the rendered result.

Texture projections Texture support

Rendered result of how the textures are projected onto this sphere.

Basics 327

Section 18 Texturing

Types of Texture Projections


Choosing the right type of texture projection is an important part of the texturing process. The more closely the projection conforms to the original shape of the object, the less youll have to adjust the texture to get the object looking just right. This section describes the types of texture projections that are available to you.

All of the projections described can be applied to objects from the Render toolbars Get > Property > Texture Projection menu. You can also create and apply texture projections from any texture shaders property editor. Every texture shader needs a projection to define where the texture should appear on the object.

Planar Projections
Planar projections are used for mapping textures onto an objects XY, XZ, and YZ planes. By default, the projection plane is one pixel smaller than the surface plane, therefore no streaking or distortion occurs on the objects other planes. XY YZ

Cylindrical Projections
If you map the picture file cylindrically, it is projected as if wrapped around a cylinder.

XZ

Planar XY

Cylindrical

Lollipop Projections
A lollipop projection is a spherical-type projection that stretches the texture over the top of the object so its corners meet on the bottom, like the wrapper of a lollipop. A single pinch-point occurs at the -Y pole. Lollipop

Spherical Projections
A standard spherical projection stretches the texture over the front of the object so that its edges meet at the back. Distortion occurs towards the pinch points at the objects +Y and -Y poles. Spherical

328 Softimage

Texture Projections and Supports

Cubic Projections
A cubic projection assigns an objects polygons to a specific face of the cube based either on the orientation of their normals, or their positions relative to the cubic texture support. The texture is then projected onto each face using a planar or spherical projection method. By default, the entire texture is projected onto each face. However, you can choose from a number of different cubic projection presets. You can also transform each face of the cube individually and save the transformations as presets of your own. +Y face (top) -X face (left) -Z face (back)

UV Projections
UV projections are useful for texturing NURBS surface objects. They behaves like a rubber skin stretched over the objects surface. The points of the object correspond exactly to a particular coordinate in the texture, allowing you to accurately map a texture to the objects geometry. Even when you deform an object, its texture follows the objects geometry.

A NURBS surface (left) with a wood texture applied using an planar XZ map (below, left) and UV map (below, right). With the UV map applied, the pattern accurately follows the contours of the object.

+Z face (front) A cubic projection is applied to a cube so that the entire texture image is projected onto each face. -X face (left)

+X face (right) -Y face (bottom)

+Y face (top) -Z face (back)

Spatial Projections
A spatial projection is a three-dimensional UVW texture projection that has either the objects origin or the scenes origin as its center. Spatial projections are used to apply procedural textures that are computed mathematically, rather than being somehow wrapped around the object. By default, a spatial projections texture support appears in the center of the textured objects volume.

A cubic projection is applied to a head so that a different part of the texture image is projected onto each face.

+Z face (front) -Y face (bottom)

+X face (right) Polygon sphere with a vein texture applied using a spatial projection.

Basics 329

Section 18 Texturing

Camera Projections A simple and convenient way to texture objects is to project a texture from the camera onto the objects surface, much like a slide projector does. This is useful for projecting live action backgrounds into your scene so you can model and animate your 3D elements against them. Changing the cameras position changes the projections position. Once you have positioned the texture on the surface to your liking, you can freeze the projection.

Unfolding Unfolding creates a UV texture projection by unwrapping a polygon mesh object using the edges you specify as cut lines or seams. When unfolding, the cut lines are treated as if they are disconnected to create borders or separate islands in the texture projection. The result is like peeling an orange or a banana and laying the skin out flat.

Texture image used

Wireframe view of the rendered frame.

Top view showing where the texture is projected.

Unfolding does not rely on a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.

Final rendered frame In this example, the corner of a room was textured using the original texture (top-left). The texture was projected from a scene camera (top right). The rendered result shows the modeled teddy bear against the projected background.

330 Softimage

Texture Projections and Supports

Contour Stretch UVs Projection (Polygons Only) Contour Stretch UVs projections allow you to project a texture image onto a selection of an objects polygons. Rather than projecting according to a specific form, however, a contour stretch projection analyzes a four-cornered selection to determine how best to stretch the polygons UV coordinates over the image. Contour stretch projections are useful for a number of different texturing tasks, particularly for applying textures to tracks, and irregular, terrain-like meshes. They are also useful for fitting regularshaped textures onto curved meshes. For example, they would be useful to place a label texture on a beer bottle, right at the junction of the bottles neck and body.

The contour stretch projection is ideal for texturing a curvy path like this road.

Contour stretch projections do not have the same alignment and positioning options as other projections. Instead, you select a stretching method that is appropriate to the selections topology and complexity. Also, contour stretch projections do not have a texture support. To adjust it further, edit the UV coordinates in the texture editor.

Basics 331

Section 18 Texturing

Unique UVs Projection (Polygons Only) Unique UVs mapping applies a texture to polygon objects using one of two possible methods: Individual polygon packing assigns each polygons UV coordinates to its own distinct piece of the texture so that no one polygons coordinates overlap anothers. This is useful for rendermapping polygon objects. You can apply textures to an object using a projection type appropriate to its geometry, then rendermap the object using a new Unique UVs

projection to output a texture image that you can reapply to the object. The texture is applied to texture each polygon properly without you worrying about unfolding it to fit properly. Angle Grouping, after deciding on a projection direction, groups neighboring polygons whose normal directions fall within a specified angle tolerance. This process is repeated until all of the objects polygons are in a group. The groupsor islandsare then assigned to distinct pieces of the texture so that no two islands coordinates overlap each other. Unique UVs projections do not have a texture support. To adjust it further, edit the UV coordinates in the texture editor.

A Unique UVs projection was applied to this sphere.

The Individual Polygon Packing method produces UV coordinates that look like this: each polygons UV coordinates separated from the rest of the coordinate set so it can be assigned to its own portion of texture.

The Angle Grouping method produces islands of polygons.

332 Softimage

Editing Texture Projections

Editing Texture Projections


A texture projections property editor contains options for modifying, transforming and renaming the projection. You can open a texture projections property editor by selecting an object and choosing one of the following from the Render toolbar: Modify > Projection > Inspect Current UV opens the property editor for the objects current texture projection. This is the projection used when the object is viewed in a textured display mode (textured, textured decal, and so on). Modify > Projection > Inspect All UVs opens a multi-selection property editor for all of the objects texture projections. Modify > Texture > name of the texture from the Render toolbar to open the textures property editor. Then click the Edit button on the Texture tab (beside the Texture Projection list) to open the Texture Projection property editor. You would make a texture projection implicit to obtain a better overall result for spherical and cylindrical projections on an object, especially one with fewer polygons. For example, when mapping a texture onto a sphere (using either spherical or cylindrical projection), implicit texturing produces more accurate results at the spheres poles than does explicit projection.

Wrapping Texture Projections


The texture projections wrapping options control whether the texture extends past the projections boundaries to wrap around the object. The examples below show a sphere whose texture projection has been adjusted such that the texture covers only a portion of the objects surface. You can see the effect of wrapping in different directions.

Making Projections Implicit


You can make most texture projections into implicit projections. Implicit projections are slightly slower to render because it performs its own projection computation (based on a predefined projection model; that is, spherical, planar, and so on) at each pixel, as opposed to using predefined interpolated UV data like explicit projections.
Both spheres have a texture applied to their diffuse parameter. The sphere on the left uses an explicit projection and the sphere on the right uses an implicit projection. Wrap in V Wrap in U and V

No Wrapping

Wrap in U

Basics 333

Section 18 Texturing

Transforming Texture Projections


By default, a texture projection fills the entire texture support. For example, if you apply a simple XZ Planar projection to a grid, the texture coordinates span the entire projection from one grid corner to the other. You can transform the texture projection to reposition the texture, or to make room on the support for other projections in different locations.
The texture projection manipulator allows you to reposition a texture projection on an object by changing the projections position on the texture support. Drag the green arrow to scale the projection vertically. Drag the green line to translate the projection vertically. Drag the intersection of the red and green arrows to translate the projection freely. Drag one of the corner handles or borders to scale the projection.

There are two ways to transform texture projectionsusing the projection manipulator in a 3D view, or by editing the scaling, rotation, and translation values in the Texture Projection property editor. To activate the projection manipulator, press j, or choose Modify > Projection Edit Projection Tool from the Render toolbar.
Alternatively, you can use the texture projection definition parameters to transform a texture on the surface of an object.

In edit mode, the mouse cursor changes to this icon. Right-click to switch to another projection, if one exists. Drag the red arrow to scale the projection horizontally. Drag the red line to translate the projection horizontally.

UVW Transformation controls

Middle-click + drag to rotate the projection about its center.

Muting and Freezing Texture Projections


Once you have scaled, rotated, or translated a texture projection to your liking, you can freeze it permanently or mute it temporarily. Freezing a texture projection is the equivalent of freezing the texturing operator stack. This is useful if you want to avoid accidentally editing or moving your texture support, especially when the object is animated.
334 Softimage

UV Coordinates

UV Coordinates
Applying a texture projection to an object creates a set of texture coordinates often called UV coordinates or simply UVs that control where the texture corresponds to the surface of the object. On a polygon object, each vertex can hold multiple UV coordinates one for each polygon corner that shares the vertex. The portion of the texture enclosed by a polygons UVs is mapped to the polygon. On NURBS objects, UV coordinates are not stored at the vertices; instead, they are generated based on a regular sampling of the objects surface. However, as with polygon objects, the portion of the texture enclosed by, say, four UVs is mapped to the corresponding portion of the object. You can view and adjust UV coordinates using the texture editor, where they are represented by sample points. When you select sample points, you are actually selecting the UV coordinates held at the corresponding position on the object. For example, as you can see in the images below, the center point of a 2x2 polygon grid holds four UV coordinates. When you select the corresponding sample point in the texture editor, you are selecting all four coordinates (although it is possible to select a single polygoncorners UV coordinate).

In this example, the image shown left was used to texture a 2 x2 polygon grid such that each polygons UV coordinates were mapped to the texture differently.

This exploded view of the textured grid shows how each polygons UVs correspond to the texture image.

The grids middle vertex holds four overlapping UVs. Each UV belongs to a specific polygon and holds a coordinate which, along with the polygons other UV coordinates, defines the portion of the texture mapped onto that polygon.

Basics 335

Section 18 Texturing

Editing UV Coordinates in the Texture Editor


When you apply an image to an object, its unlikely that it will fit perfectly. The next step after applying the texture is to adjust what parts of the image correspond to the various parts of your object. You can do this using the texture editor, which displays an objects UV coordinates. These are a two-dimensional representation of the objects geometry which, when superimposed on a texture image, shows what portion of the texture appears on any part of the objects surface.
Texture editor workspace is where you manipulate the selected objects UV coordinates.

By selecting the objects UV coordinates and moving them to a new location, you can control which portions of the texture correspond to different parts of the object. The texture editor has a wide variety of tools to help you select and move UV coordinates. To open the texture editor, press 7 or choose View > Rendering/ Texturing > Texture Editor from the main menu.
UV position boxes allow you to move selected sample points to precise U and V locations. Texture editor command bars provide quick access to commonly used texture editor commands.

Texture editor menu bar contains all of the texture editor commands, including those accessible from the command bar

Texture image The image clip currently applied to the object. Connectivity Tabs help you make sense of the objects UVs by highlighting boundaries shared between of UV islands.

This character and his head are separate objects, each with its own projection. Both sets of UVs are shown in the texture editor.

Status bar displays the UV coordinates, pixel coordinates, and RGBA values of the current mouse pointer position

Selected UVs are highlighted red, and unselected UVs are blue.

336 Softimage

Editing UV Coordinates in the Texture Editor

Choosing a Texture Image to Display


The texture editors Clips menu allows you to choose the image clip that is displayed in the texture editor workspace, as well as on the object in the 3D views in Textured and Textured Decal display modes while the texture editor is open. It contains an alphabetical list of all of the image clips used by the selected object(s). Choosing different clips allows you to see how the same set of UV coordinates maps different textures onto an object. Displaying an image clip in the texture editor workspace does not apply the image to the selected object. Displaying the Checkerboard Clips > Checkerboard displays a checkerboard image in the texture editor workspace, as well as on objects in the 3D views in Textured and Textured Decal display modes while the texture editor is open. This is useful for seeing how the texture image gets stretched over different areas of an object. An even distribution of regularly sized squares indicates minimal stretching, which is usually preferable, although you may want a higher density of squares in areas of high detail such as the face of a character. You can set the number of squares in the texture editor preferences.

Dimming the Texture Image If youre having trouble seeing a projections UV coordinates in the texture editor workspace, you can dim the texture image to make the coordinates more visible. Click the Dim Image button or choose View > Dim Image.

Selecting and Editing UV Coordinates


Use the Vertex, Edge, or Polygon selection filters to select the texture samples associated with vertices, edges, or polygons. ALL lets you select the samples associated with any component type, depending on what you click on. ISL selects whole islands, and CLS restricts the selection to clusters.

Once you have selected samples, you can edit them using the transform tools (x, c, and v) or other commands. Tearing

When tearing is off, connected and coincident UV samples are automatically affected by any manipulation even if they are not explicitly selected. When tearing is on, its possible to separate samples into discontinuous islands. Polynode bisectors appear, which allow you to select individual samples at a vertex. Polygon Bleeding belonging to the adjacent polygons become selected automatically. This allows you to move the polygons in a block without internal distortion.

When polygon bleeding is on and you select samples, all samples


Basics 337

Section 18 Texturing

Editing Multiple UV Sets


You can display and edit multiple UV coordinate sets simultaneously in the texture editor. Simply select multiple objects or an object with multiple UV sets, and open or update the texture editor. On the UVs menu, Shift+click to toggle the display of specific UV coordinate sets. Any UV coordinate set that is currently displayed in the texture editor is live and can be edited. You can select and modify sample points on multiple coordinate sets simultaneously. You can snap sample points from one set to another, and copy and paste coordinates between sets. The different coordinate sets are independent for the purposes of operations like healing, relaxing, and matching.

Texture Layers
Texture layering is the process of mixing several textures together, one after the other, such that each texture is blended with the cumulative result of the preceding textures. In Softimage, you can use this technique to build complex effects by adding texture layers to an objects material or its shaders. When you add a texture layer to a shader, one or more of that shaders parameters, or ports, is added to the layer. The layer is mixed on the selected ports, in accordance with its assigned strength, or weight, using one of several different mixing methods. For texture layering purposes, the shaders ports are collectively treated as the base layer with which the texture layers are blended. If some of the shaders ports are connected to other shaders, those shaders are considered part of the base layer as well. For example, if youve connected a Cell texture to a Phong shaders Ambient and Diffuse ports, the Cell texture is treated as part of the Phongs base layer. What makes texture layers so powerful is that at any time in the texturing process, you can add, modify, and remove any layer, giving you complete control over the resulting effect. You can also quickly and easily change the order in which layers are blended together, something thats quite difficult to do when you mix textures using mixer shaders in the render tree. Because texture layers only affect designated ports, you can blend a number of layers with each of a shaders attributes and create a complex effect for each.

338 Softimage

Texture Layers

The parameters of the grids Lambert surface shader are represented in the base layers. In this case, nothing is connected to the Lambert shaders ports, so only the base colors are shown.

The first layer adds the basic sign texture to the Ambient and Diffuse ports. The textures alpha channel is used to control transparency, cutting out the shape of the sign. The weatherbeaten road sign shown here was created by adding three texture layers to a basic Lambert-shaded grid. The images on the left show the cumulative effect of the layers.

The second layer adds some rust. The rust texture is blended with the Ambient and Diffuse ports according to its alpha channel, and a separate maskin this case, a weight map.

The final layer, blended with Ambient, Diffuse, and Transparency adds the bullet holes. Bump mapping is activated in the layers shader, creating the depression around each bullet hole.

Basics 339

Section 18 Texturing

The Texture Layer Editor


The texture layer editor is a grid-style editor from which you can view and edit all of a shader or materials texture layers. The advantage of using the texture layer editor is that it packs a tremendous amount of information into a relatively compact interface. At a glance, you can see which shaders are directly connected to a shaders port, how many texture layers have been added to the shader,

how many ports those layers affect, and how and in which order the layers are blended together. Add to this the ability to modify the majority of each layers properties, and the texture layer editor makes for quite a powerful tool. To open the texture layer editor, choose View >Rendering/Texturing > Texture Layer Editor from the main menu.

The shader list displays all of the shaders connected to the current selections material. Select a shader to update the editor with its layers. The texture controls allow you to control the texture projections assigned to selected layers inputs. The Base Colors layer displays color boxes for unconnected ports Base layers represent shaders that are directly connected to the current shaders ports. Texture layers are blended with the base layer and with each other.

The Selected shaders ports can be added to texture layers and base layers.

Layer/port controls indicate that the port has been added to the layer. An empty cell indicates that the port is not affected by the layer.

Layer controls and layer/port controls allow you to set texture layer properties.

340 Softimage

Texture Layers

Texture Layers in the Render Tree


When a shader has one or more texture layers, a new section called Layers is added to its node in the render tree. The Layers section contains a parameter group for each of the shaders layers. Expanding the Layers section reveals all of the individual layer parameter groups. Expanding an individual texture layers parameter group reveals the ports for its Color and Mask parameters.

Layers behave exactly like any other parameter group in the render tree, meaning that you can connect shaders to texture layer parameters as you would to any other shader parameter. This lets you control each texture layer with its own branch of the render tree.

Shader ports that have been added to layers are marked with a small blue L.

Layers section Collapsed layer parameter group Expanded layer parameter group. Layer Color and Mask ports.

Basics 341

Section 18 Texturing

Bump Maps and Displacement Maps


Although real surfaces can be perfectly smooth, you are more likely to encounter surfaces with flaws, bumps, and ridges. You can add this kind of noise to object surfaces using bump maps and displacement maps.

Bump Maps
Bump maps use textures to perturb an objects shading normals to create the illusion of relief on the objects surface. Because they do not actually change the objects geometry, they are best suited to creating fine detail that does not come too far off the surface.
The sphere shown here was bumpmapped with a fine noise. A negative bump factor was used to make the white areas bump outward.

When Not to Use Bump Maps Because bump maps do not actually alter object geometry, their limitations can become apparent when too much relief is required. Consider the sphere shown here: even with a very high bump step, the bumping is not convincing on the silhouette where there is no indication that the surface is raised. In these cases, its better to either model the necessary geometry or to use a displacement map.

Displacement Maps
A displacement map is a scalar map that, for each point on an objects surface, displaces the geometry in the direction of the objects normal. Unlike regular bump mapping that fakes the look of relief, displacement mapping creates actual self-shadowing geometry.
The sphere shown here was displacement-mapped using the texture shown below.

Creating a Bump Map To give you the most control over surface bumping, the best way to create a bump map is to connect a Bumpmap shader to the Bump Map port of an objects material node.

However, every texture shader has bump map parameters, so you can create a bump map using textures that youve connected to, for example, a surface shaders Ambient and Diffuse ports.
342 Softimage

Bump Maps and Displacement Maps

Creating a Displacement Map You create a displacement map by connecting a texture, preferably grayscale, to the Displacement port of an objects material node. It is often helpful to add an intensity node between the map and the material node to help control the displacement.

Using Displacement Maps and Bump Maps Together You can use bump maps and displacement maps together to create extremely detailed surfaces. Typically, the best approach is to use a displacement map to create the coarser surface detail major features that need to be visible at the objects edges and can benefit from selfshadowing. You can then use the bump map to create a top layer of fine detail. The bump-mapping is applied to the displaced geometry.

Setting Displacement Map Parameters In addition to any shaders that you add to the render tree to modulate displacement, the main displacement controls are on the Displacement tab of the objects Geometry Approximation property editor. From there, you can choose the type of displacement appropriate to your object and refine the displacement effect. When Not to Use a Displacement Map Because they actually modify object geometry, displacement maps can take considerably longer to render than bump maps. Generally speaking you should not use a displacement map if you can achieve a satisfactory effect using a bump map.

This sphere uses the texture on the left as a displacement map to create coarse surface detail, and the texture on the right as a bump map to create fine surface detail.

The sphere on the left uses a bump map, while the one on the right uses a displacement map. In this case, the difference is slight enough that the bump maps shorter render time makes it the better choice.

Basics 343

Section 18 Texturing

Reflection Maps
Reflection maps, also called environment maps, can be used to simulate an image reflected on an objects surface, without using actual raytraced reflections. They can also be used to add an extra reflection to an objects reflective, raytraced surface. When objects are reflective, you can define whether the reflections on its surface are Raytracing Enabled or Environment Only. Reflection settings are found on the Transparency/Reflection tab of the objects surface shaders property editor (choose Modify > Shader from the Render toolbar to open the property editor). Raytraced Reflections are slower to render because they actually compute reflections for everything around them. Non-Raytraced Reflection Maps are much faster to compute because they simulate the reflection of a specified texture or image, defined by an environment map, on the objects surface. When reflection mapping is used without raytracing, only the reflection map appears on the objects surface; when used with raytracing, the map is combined with raytraced reflections.

Raytraced reflection only Note how reflective objects reflect other objects in the scene. For example, you can see the flask and the floor reflected in the retort.

Reflection map only Using only a reflection map, no scene objects are reflected in reflective surfaces. Instead, the only reflection is that simulated by the reflection map.

Raytraced reflection and reflection map With both types of reflection activated, you get the real reflections of scene object and simulated reflections from the map, producing highly detailed reflections. You can apply a reflection map to the entire scene by adding an environment map shader to a render pass shader stack.

You can apply a reflection map to an object by connecting an environment map shader to the Environment port of the objects material node.

344 Softimage

Baking Textures with RenderMap

Baking Textures with RenderMap


RenderMap allows you to capture a wide variety of surface information from scene objects, and bake that information into image files that can be reapplied to the rendermapped object and/or used for a myriad of other purposes. RenderMap captures surface information by casting rays from a virtual camera in order to sample each point on an objects surface. The results are rendered as one or more 2D images that you can apply to the object as you would any other 2D texture.
Color map

To rendermap an object, you need to apply a RenderMap property. Choose Get > Property > RenderMap from the Render Toolbar. This opens the RenderMap property editor, from which you can configure all of the maps that you wish to output. The following example shows how you can use RenderMap to create a single texture (which includes lighting information) out of a complex render tree.

Alpha map

Before RenderMap
The disembodied hand shown here was textured using a combination of several images mixed together in a complex render tree, and lit using two infinite lights. The result is a highly detailed surface that incorporates color, bump, displacement, and lighting information, and takes a fair amount of time to render. Bump map

Displacement map

Specular map

After RenderMap
To bake the hands surface attributes into a single texture file, a RenderMap property was applied to the hand, and a Surface Color map was generated. The resulting texture image was then applied directly to the Surface input of the hands material node. Finally, the scene lights were deleted, producing the result shown at righta good approximation of the hands original appearance. Because the hands illumination is baked into the rendermap image, you can get this result without using lights or an illumination shader.

Basics 345

Section 18 Texturing

Painting Colors at Vertices


Another way to apply color to polygon mesh objects is to paint their vertices. Vertex colors arent considered to be material or texture shaders: they are actually a constant color stored directly in the vertices of a polygon at the geometry level. Each vertex of a polygon has polynodes (a type of subvertex) that hold its UV coordinates and vertex colors. The Color at Vertices (CAV) property allows you to color an entire polygon or just its edge rather than the actual vertex (the information is stored at the vertex level, hence the name). For example, you can paint each edge of a square polygon a different color. As a result, the center of the polygon would display a blend of each of the four colors. If necessary, you can store several color at vertices properties on the same object. Vertex colors are often used in games because they are an efficient way of storing color information that can be used in a variety of ways, e.g., for pre-baked lighting, texture blending, and so on.

Choose Get > Property > Color at Vertices Map to add a CAV Property to the selected object. An object can have as many CAV properties as you need.

Press Ctrl+W to open the Brush Properties property editor. On the Vertex Colors tab, you can choose a paint mode and color, set the brush size, set falloff and bleeding options and so on. Basically, youre defining how the brush strokes look.

Press Shift+W to activate the brush tool and paint the color (or other attribute) onto the object in any 3D view. When you move the brush into any 3D view, the views display mode automatically changes to Constant.

If youd like, you can render the result of the color at vertices property using a Vertex RGBA shader in the render tree.

346 Softimage

Section 19

Lighting
Conventional lighting (direct light sources), indirect lighting, and image-based lighting are all techniques that contribute to a scenes illumination and affects the way all object surfaces appear in the rendered image.

What youll find in this section ...


Types of Lights Placing Lights Setting Light Properties Selective Lights Creating Shadows Global Illumination Caustics Final Gathering Image-Based Lighting Light Effects

Basics 347

Section 19 Lighting

Types of Lights
You can add lights to a scene by choosing them from the Render toolbars Get > Primitive > Light menu. Every light type has its own special characteristics and is represented by its own icon in 3D views. Infinite (Default) Infinite lights simulate light sources that are infinitely far from objects in the scene. There is no position associated with an infinite light, only a direction. All objects are lit by parallel light rays. The scenes default light is infinite. Spot Spot lights cast rays in a cone-shape, simulating real spotlights. This is useful for lighting a specific object or area. The manipulators can be used to edit the light cones length, width, and falloff points. Neon Neon lights simulate realworld neon lights. They are essentially point lights whose settings and shapes are altered to resemble fluorescent tubes. The manipulators can be used to change the tube into any rectangular or square shape.

Point Point lights casts rays in all directions from the position of the light. They are similar to light bulbs, whose light rays emanate from the bulb in all directions.

Light Box Light box lights simulate a light diffused with a white fabric. The light and shadows created by this light are very soft. Specularity is still visible, but noticeably weaker. Manipulating the box shapes the projected light.

348 Softimage

Placing Lights

Placing Lights
You can translate, rotate, and scale lights as you would any other object. However, scaling a light only affects the size of the icon and does not change any of the light properties.
Rotating an infinite light. This is the only useful transformation for infinite lights since their scale and position do not affect the lighting. Rotating the light, on the other hand, changes its direction.

Placing Spotlights Using the Spot Light View The Spot Light view in a 3D view that lets you select from a list of spotlights available in the scene. A spotlight view is useful to see what objects a spotlight is lighting and from what angle.
1 Select a spotlight from the view menu to see the scene from the lights point of view.

Translating a point light. Rotating and scaling point lights does not affect the lighting. Translating a point light changes its position, which does change the scene lighting.

2 Navigate in the spotlight viewport to change the position of the light. The inner and outer circles correspond to the lights spread angle and cone angle respectively.

Translating a spotlight. When you translate the spotlight, it rotates automatically to point toward its interest. Scaling a spotlight has no effect on the lighting. Since the spotlight is normally constrained to its interest, you cannot rotate it either (unless you delete the interest). 3 The rendered result shows the scene lit from the spotlight.

Spotlights have a third set of manipulators that let you control their start and end falloff, as well as their spread and cone angles. Area lights also have a third set of manipulators that let you scale the geometric area from which the light rays emanate. These manipulators are discussed later in this section.

Note that the light falls off exactly where the cone and spread circles indicate that it should.

Basics 349

Section 19 Lighting

Setting Light Properties


Once you create a light, you can edit its properties from its property editor. To open a lights property editor, select the light and choose Modify > Shader from the Render toolbar. Some of the most commonly edited light properties are described below. When you define the color of an objects material, you should work with a white light because colored light sources affect the materials appearance. You can color your light source afterward to achieve the final look of the scene.

Setting Light Color


The color of a light controls the color of the rays emitted by the light. The final result depends on both the color of the light and the color of objects.

Setting Light Intensity


You can control a lights intensity by adjusting the Intensity slider in the lights property editor. By default, values range from 0 to 1, but you can set much higher values if needed. Alternatively, you can control light intensity indirectly using its color channels. Setting RGB values greater than 1 creates more intense light.

White Light

Pale Yellow Light

Intensity: 0.25

Intensity: 0.5

Pale Blue Light Intensity: 0.75

350 Softimage

Setting Light Properties

Setting Light Falloff


Falloff refers to the diminishing of a lights intensity over distance, also called attenuation. This mimics the way light behaves naturally. The falloff options are available only for point and spotlights. You can set the distance at which the light begins to diminish, as well as the distance at which the falloff is complete (darkness). This means you can set the values so the falloff affects only those features you want. In addition, you can control how quickly or slowly the light diminishes.
Start falloff = 0 End falloff = 4

Setting a Spotlight
A spotlight casts its rays in a cone aimed at its interest. Spotlights have special parameters, called Spread and Cone Angle, that control the size and shape of the cone. You can set these options using the spotlights property editor or its 3D manipulators. You can also use the 3D manipulators to set the lights falloff.

The white line indicates the cone angle.

The yellow line indicates the lights spread angle.

To activate a spotlights manipulators, select the light and press B. You can then adjust the light by dragging any of the manipulators labeled in the image below.
The upper circle is the Start Falloff point. The wireframe outline is the spotlights Cone Angle.

Start falloff = 0 End falloff = 8

Start falloff = 6 End falloff = 8 Falloff Start and End Falloff values. Using a point light, umbra = 0; bottom corner of chess board is 0; top, left corner is 10.

The inner, solid cone is the spotlights Spread Angle.

The lower circle is the End Falloff point.

Basics 351

Section 19 Lighting

Selective Lights
When you create a light, it affects all visible objects in the scene. However, every light has a selective property that you can use to make it affect, or not affect, a designated group of objects called Associated Models. This can reduce rendering time by limiting the number of calculations per light. You can set a lights selective property to be Inclusive or Exclusive. Exclusive illuminates every object except for those in the lights Associated Models group. Inclusive illuminates every object defined in the lights Associated Models group.
A simple scene illuminated by a point light. None of the geometric objects are included in the lights Associated Models list, so they are not affected by the lights selective property.

Creating Shadows
You can create shadows that appear to be cast by the objects in your scene. Shadows can make all the difference in a scene: a lack of them can create a sterile environment, whereas the right amount can augment the realism of the same scene. Shadows are controlled independently for each light source, so you can have some lights casting shadows and others not. To create a shadow using the mental ray renderer for a scene or a render pass, you must set up three things: A light that generates shadows. Objects that cast and receive shadows. Rendering options that render shadows. There are three basic kinds of shadows you can create using mental ray: raytraced, shadow-mapped, and soft.

Raytraced Shadows
Raytraced shadows use the renderers raytracing algorithm to calculate how light rays are reflected, refracted, and obstructed. The shadows are very realistic but take longer to render than other types of shadows. To create raytraced shadows, you need to activate shadows in the lights property editor.

The King piece (center) has been added to the lights Associated Models list, making it affected by the lights selective property. The light has been defined as Exclusive, thereby not illuminating the objects on the lights Associated Models list.

The light is set to Inclusive. Now the light source affects only the objects listed in the Associated Models list (only the King piece) and ignores the rest.

Activates shadows for the light.

You also need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.
352 Softimage

Creating Shadows

Shadow-Mapped Shadows
Shadow-mapped shadows, also known as depth-mapped shadows, use the renderers scanline algorithm. They are quick to render, but not as accurate as raytraced shadows. The shadow map algorithm calculates color and depth (z-channel) information for each pixel, based on its surface and distance from the camera. Before rendering starts, a shadow map is generated for the light. This map contains information about the scene from the perspective of the lights origin. The information describes the distance from the light to objects in the scene and the color of the shadow on that object. During the rendering process, the map is used to determine if an object is in a shadow. To create shadow-mapped shadows, you need to activate shadows and configure the Shadow Map in the lights property editor. Then, you need to enable shadow maps in the renderer options.

Volumic Shadow Maps Volumic shadow maps, are similar to regular shadow maps, but store more detail. Instead of simply storing the distance from the light to the first object hit, the volumic shadow map algorithm raymarches through the scene from the lights origin until it hits a fully opaque object. Along the way it stores changes in light color or intensity along with the depth at which the change occurred. Volumic shadow maps are typically used when rendering shadows for geometry hair.

Regular shadow-mapped shadow of hair.

Volumic shadow-mapped shadow of hair.

Basics 353

Section 19 Lighting

Soft Shadows
Soft shadows are created by defining area lights which are a special kind of point or spotlight. The rays emanate from a geometric area instead of a single point. This is useful for creating soft shadows with both an umbra (the full shadow where an object blocks all rays from the light) and a penumbra (the partial shadow where an object blocks some of the rays). The shadows relative softness (the relation between the umbra and penumbra) is affected by the shape and size of the lights geometry. You can choose from four shapes and set the size as you wish. To determine the amount of illumination on a surface, a sample of points is distributed evenly over the area light geometry. Rays are cast from each sample point; all, some, or none of the rays may be blocked by an object. This creates a smoothly graded penumbra.

To create raytraced shadows, you need to activate shadows in the lights property editor. You also need to activate and configure the Area Light in the lights property editor. Finally, you need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.

A rectangular area light emits light from a rectangular object like this one.

354 Softimage

Global Illumination

Global Illumination
Global illumination simulates the way bright light bounces off of objects and bleeds their color into surrounding surfaces. When global illumination is activated, photons emitted from a designated light travel through the scene, bounce off photon-casting objects and are stored by photon-receiving objects. Photon casting and reception are not mutually exclusive properties: an object can do both, but only a light can emit photons. Global illumination is often used with caustics, which is also a photon effect. The following is an overview of how to set up global illumination for the mental ray renderer.
1 Define objects as casters and receivers. An objects visibility property allows you to set options that control how the object responds to global illumination photons emitted from a light. Caster controls whether photons bounce off of the object and continue to travel through the scene. When this is off, the object simply absorbs photons. Receiver controls whether the object receives and stores photons. When this is off, the photon effect is not visible on the objects surface. Visible controls whether the object is visible to photons at all. When this is off, photons simply pass through the object. Activate Global Illumination on the Photon tab of the lights property editor. You can then set the Intensity of the photon energy, which determines the intensity of the color that bleeds onto photon receiving objects. You can also set the Number of Emitted Photons. Typically, both of these values will need to be set in the tens or hundreds of thousands for the final global illumination effect. 2 Set the light to emit global illumination photons.

Basics 355

Section 19 Lighting

Adjust the global illumination effect. Once youve defined the caster, receivers and emitting lights, you need to adjust the rendering options that control the photon effect. On the Caustics and GI tab for the renderer, activate Global Illumination, then set these two important parameters: GI Accuracy specifies the number of photons that are considered when any point is rendered. Photon Search Radius specifies the distance from the rendered point within which photons are considered. Youll also need to fine-tune the photon intensity and the number of emitted photons for each of the emitting lights.

Increase the radiance of the receiver object. To further fine-tune the global illumination effect, adjust the Radiance of the global illumination receiver objects. Radiance controls the strength of the photon effect on the objects surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set in each objects surface shader.

356 Softimage

Caustics

Caustics
Caustic effects recreate the way that light is distorted when it bounces off a specular surface or passes through refractive objects/volumes. The classic example is the light sparkling in the middle of a wine glass or the floor of a swimming pool. In either case, light passes through refractive surfaces and is distorted, creating complex light patterns on surfaces that it affects. As with global illumination, caustics compute how photons emitted from a light travel across the scene and bounce over and through caster and receiver objects. Here is an overview of setting up caustic lighting for the mental ray renderer, which is almost identical to setting up global illumination:
1 Define objects as casters and receivers. An objects visibility property allows you to set options that control how the object responds caustics photons emitted from a light. 3 Adjust the caustic effect.

Adjust the rendering options that control the photon effect on the Caustics and GI tab for the renderer. Activate Caustics on this tab, then set these two important parameters: Caustic Accuracy specifies the number of photons that are considered when any point is rendered. Photon Search Radius specifies the distance from the rendered point within which photons are considered. Youll also need to go back to the property editors of all emitting lights and fine tune the photon intensity and the number of emitted photons.

Set the light to emit caustic photons.


4

Increase the radiance of the receiver objects. To fine-tune the caustics effect, adjust the Radiance of the caustics receiver objects. Radiance controls the strength of the photon effect on the objects surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set for each objects surface shader.

To make a light into a global illumination photon emitter, activate Caustics on the Photon tab of the lights property editor. You can then set the Intensity of the photon energy and the Number of Emitted Photons.

Basics 357

Section 19 Lighting

Final Gathering
Final gathering is a way of calculating indirect illumination without using photon energy. Instead of using rays cast from a light to calculate illumination, final gathering uses rays cast from each illuminated point on an objects surface. The rays sample a hemisphere of a specified radius above each point and calculate direct and indirect illumination based on what they hit. The overall effect is that every object in the scene becomes a light source and influences the color and illumination of the objects and environment surrounding it.

4 5

Direct illumination contribution. Final gathering point is computed.

Creating a Final Gathering Effect


Creating final gathering in a scene is more straightforward than applying caustics or global illumination. Most of the options that control the final gathering effect for the mental ray renderer are on the Final Gathering tab of the renderer options. The final gathering Accuracy options are the main settings used to control the quality of a final gathering render.

You can use the scene objects visibility properties to precisely control how each object participates in final gathering calculations.

3 4

1 2

Camera eye ray intersects with geometry whos shading needs to calculate indirect illumination. Final gathering rays are shot into the hemisphere above the intersection point to sample for illumination. Indirect illumination contribution.

This scene was rendered using final gathering, which collects the indirect and direct light around illuminated points on an objects surface to simulate real-world lighting.

358 Softimage

Ambient Occlusion

Ambient Occlusion
Ambient occlusion is a fast and computationally inexpensive way to simulate indirect illumination. It works by firing sample rays into a predefined hemispherical region above a given point on an object's surface in order to determine the extent to which the point is blocked, or occluded, by other geometry. Once the amount of occlusion has been determined, a bright and a dark color are returned for points that are unoccluded and occluded respectively. Where the object is partially occluded the bright and dark colors are mixed in accordance with the amount of occlusion. In Softimage, you can create an ambient occlusion effect by connecting the Ambient Occlusion shader in the render tree. This is most commonly done at the render pass level to create an occlusion pass that can be added in and adjusted during compositing. You can also use the shader on individual objects to limit the occlusion calculation.

Image-Based Lighting
You can light your scenes with images using the Environment shader which surrounds the scene with an image. However, this shader has a set of parameters that allow you to control the images contribution to final gathering and reflections.

The image above shows a scene rendered using only the Ambient Occlusion shader. The bright color is set to white and the dark color to black. This type of rendering can be composited with other passes to add the occlusion effect to the scenes color and illumination.

Although you can use any image to light the scene this way, you will get the best results using a High Dynamic Range (HDR) image. Thats because HDR images contain a greater range of illumination than regular images, making them better able to simulate real-world lighting.

Basics 359

Section 19 Lighting

Light Effects
The point light inside of this street lamp uses a flare effect. Flares are created as properties of scene lights. Softimage includes a number of lighting effects that you can use to enhance the realism and alter the look and mood of your rendered scenes. Different effects are applied differently. Some are applied as properties of lights, while others are defined by shaders in the render tree. This scene uses a variety of light effects to capture the feeling of a dimly lit alley on a foggy evening. In the background of the scene, you can see the effect of depth-fading. Even though it affects the entire scene, the depth fading is defined by a lights volumic property.

The volumic light shining out from the window in the stairwell is created using a volumic property applied to a light.

360 Softimage

Section 20

Cameras
Virtual cameras in Softimage are similar to physical cameras in the real world. They define the views that you can render. You can add as many cameras as you want in a scene. you can also achieve a photorealistic motion blur effect for every object and/or camera in your scene.

What youll find in this section ...


Types of Cameras The Camera Rig Working with Cameras Setting Camera Properties Lens Shaders Motion Blur

Basics 361

Section 20 Cameras

Types of Cameras
Each of the images below was taken from the same position, but using a different camera each time. The image on the right shows a wireframe view of the original scene, including the position of the camera. These camera types are available from the Get > Primitive > Camera menu.

Perspective (Default) Uses a perspective projection, which simulates depth. Perspective cameras are useful for simulating a physical camera. The default camera in any new scene is a perspective camera.

Wide Angle Creates a wide-angle view by using a perspective projection and a large angle (100) of view. Wide angle cameras have a very large field of view and can often distort the perspective.

Telephoto Uses a perspective projection and a small angle of view (5) to simulate a telephoto lens view where objects are zoomed.

Orthographic Makes all of the camera rays parallel. Objects stay the same size regardless of their distance from the camera. These projections are useful for architectural and engineering renderings.

362 Softimage

The Camera Rig

The Camera Rig


Each camera that you create is made up of three separate parts: the camera root, the camera interest, and the camera itself. If you look at a camera in the explorer, youll see that the camera root is the parent of both the camera and its interest. Each of these elements is displayed in the 3D views as well.

The Camera The camera is the camera is the camera. In the 3D views, it is represented by a wireframe control object that you can manipulate in 3D space. The camera has a directional constraint to the camera interest.

Camera Direction The camera icon displays a blue and a green arrow. The blue arrow shows where the camera is looking; that is, the direction the lens is facing. The green arrow shows the cameras up direction, which you can change by rolling the camera (press L).

The Camera Interest The cameras interestwhat the camera is always looking atis represented by a null. You can translate and animate the null to change the cameras interest.

The Camera Root The camera root is represented by a null. By default, it appears in the middle of the wireframe camera, but you can translate and animate it as you would any other object. The null is useful as an extra level of control over the camera rig, allowing you to translate and animate the entire rig the same way that you animate its individual components.

Basics 363

Section 20 Cameras

Working with Cameras


Once youve created your cameras, youll probably want to move them around to capture just the right angles. You may also need to switch back and forth between different cameras to compare points of view.
Choose a camera from the list to switch the viewport to that cameras view. Choose Render Pass to switch to the camera view defined for your render pass. You can select a predefined orthographic viewpoint, but its not an actual camera view.

Selecting Cameras and Camera Interests


Cameras or their interests can be tricky to select. Luckily, there are several ways to select either or both. You can: Locate the camera or interest in a 3D view and click it to select. From any viewport, click the camera icon on its menu bar, then choose Select Camera or Select Interest. This selects the camera used in that viewport. From the Select panel, choose Explore > Cameras. This opens a floating explorer that shows every camera in your scene and its interest. Select a camera or interest from the list. Of course, you can also do the same thing from a regular explorer once you locate the cameras.

Positioning Cameras
Once you select a camera, you can translate, rotate, and scale it as you would any other object. However, scaling a camera only affects the size of the icon and does not change any of the camera properties. Generally, the most intuitive way of positioning cameras is to set a 3D view to a camera view and then use the 3D view navigation tools to change the cameras position. As you navigate in the 3D view, the camera is subject to any transformations that are necessary to keep its interest in the center of its focal view. Since positioning cameras is often a process of trial and error, youll probably find yourself wanting to undo and redo camera moves. Press Alt+Z to undo the last camera move. Press Alt+Y to redo the last undone camera move. If youve zoomed in and out too much and the perspective on your camera is in need of a reset or refresh, press R. This resets the camera in the 3D view in which the cursor is.

Selecting Camera Views


Camera views let you display your scene in a 3D view from the point of view of a particular camera. If you have created more than one camera in your scene, you can display a different camera view in each 3D view. Choosing a camera from a viewports Cameras menu switches the viewpoint to that of a real camera in your scene. All other views such as User, Top, Front, and Right are orthogonal viewpoints and are not associated to an actual camera.

364 Softimage

Setting Camera Properties

Setting Camera Properties


The Camera property editor contains every parameter needed to define how a camera sees your scene. To open the camera property editor, do one of the following: Select a camera and choose Modify > Shader from the Render toolbar. Click on the cameras icon in the explorer. Double-click the cameras node in the render tree. From the camera icon menu of any viewport set to the cameras view, choose Properties.

Field of View
The field of view is the angular measurement of how much the camera can see at any one time. By changing the field of view, you can distort the perspective to give a narrow, peephole effect or a wide, fish-eye effect.

Camera Format
The cameras format refers to the picture standard that the camera is using and the corresponding picture ratio. You can also specify a custom picture standard with a picture ratio that you define. The default camera format is NTSC D1 4/3 720x486, with a picture ratio of 1.333, but several standard NTSC, PAL, HDTV, Cine, and Slide formats are also available.

The cameras Vertical field of view was made large enough to accommodate the entire building. The Horizontal field of view was automatically calculated based on the aspect ratio.

Using the same camera in the same location, the Vertical field of view is much smaller, thus making only a small part of the building visible.

Basics 365

Section 20 Cameras

Setting Clipping Planes


You can use clipping planes to set the minimum and maximum viewable distances from the camera. Objects outside these planes are not visible. By default, the near plane is very close to the camera and the far plane is very far away, so most objects are usually visible. You can set clipping planes to display or hide specific objects.

Lens Shaders
Lens shaders are used to apply a variety of different effects to everything that a camera sees. Some lens shaders create generalized effects, such as depth of field, cartoon ink lines, or lens distortion. Others are more utility oriented, and do things like emulate real-world camera lenses or render depth information. Lens shaders can be used alone, or in conjunction with other lens shaders. For example, you might want to render a bulge distortion and depth of field simultaneously. You can apply lens shaders to cameras as well as passes.

This is a camera with no clipping planes setwhich means the resulting image (right) is every object in the scene.

Applies a shader to the camera. Removes a shader from the shader stack. Opens the selected shaders property editor. Lists every shader applied to a camera. Lens shaders are applied via the shader stack on the Lens Shaders tab of the cameras property editor.

This is a camera with near and far clipping planes set. The near plane is between the first two buildings and the far clipping plane is between the last two buildings. Everything before the first plane is invisible and everything beyond the far clipping plane is also invisible, as seen in the resulting image (right).

366 Softimage

Lens Shaders

The images below and beside show this scene rendered using three different lens shaders.

Toon Ink Lens shader

Lens Effects Shader (Fisheye distortion setting) Depth of Field shader

Basics 367

Section 20 Cameras

Motion Blur
Motion blur adds realism to a scenes moving objects by simulating the blur that results from objects passing in front of a camera lens over a specified period of exposure. In Softimage, you can easily achieve a photorealistic motion blur effect for every object and/or camera in your scene.

You can apply motion blur properties to cameras. This is useful when both the camera and scene objects are moving, but you only want the blur caused by the objects movement.

Rendering Motion Blur


Motion blur is active for the scene by default. To view the motion blur of objects in a scene, activate the motion blur settings in the render region options and/or the render pass options. As long as these options are on and you have a moving object in your scene, the motion blur is visible. First set the scene motion blur settings. In particular, the Speed option which specifies the time interval (usually between 0 and 1) during which the geometry and any motion transformations and motion vectors are evaluated for the frame. The motion data is then pushed to the renderer (by default mental ray). Setting the Speed value to 0 turns motion blur off. Longer (slow) shutter speeds (a difference of greater than 0.6) create a wider and/or longer motion blur effect, simulating a faster speed. Shorter (quicker) shutter speeds (a difference of less than 0.3) create subtler motion blurs.

Creating a Motion Blur Property


To control motion blur for a specific object in a scene, you can assign it a motion blur property. This is primarily useful when you want to force motion blur off for a given object, or when you have a few objects that need deformation motion blur. To create the motion blur property, select one or more objects and choose Get > Property > Motion Blur from the Render toolbar.

In the first image (left), a quick shutter speed (< 0.1) is used, then a slower shutter speed (middle), and finally (right) a very slow shutter speed (> 0.6).

You can also specify an Offset for the shutters time interval which allows you to push the motion blur trails, even extend them into later frames. Additionally, you can define where on the frame the blur is evaluated and rendered.

368 Softimage

Section 21

Rendering
Rendering is the last step in the 3D content creation process. Once you have created your objects, textured them, animated them, and so on, you can render out your scene as a sequence of 2D images. Your ultimate goal may not be just to render, but to optimize rendering quality and speed.

What youll find in this section ...


Rendering Overview Render Passes Render Channels Setting Rendering Options Different Ways to Render

Basics 369

Section 21 Rendering

Rendering Overview
The process or rendering out your scenes can vary considerably from project to project. However, here is a typical sequence of tasks you might follow when rendering: 1. Set up render passes and define their options. Render passes let you render different aspects of your scene separately, such as a matte pass, a shadow pass, a highlight pass, or a complete beauty pass. You can define as many render passes as you want: within each pass, you can create partitions of lights and objects, then apply shaders and control their settings together. 2. Set up render channels and define their options. These allow you to output different information about the pass to separate files. 3. Set rendering options. All objects, including lights and cameras, are defined by their rendering properties. For example, you can determine whether a geometric object is visible, whether its reflection is visible, and whether it casts shadows. Rendering properties can be set per render pass as well. 4. Preview the results of any modifications. The viewports can display your scene in different display modes, including wireframe, hidden-line removal, shaded, and textured. In addition, you can view any portion of your scene in a viewport and rendered by defining a render region. Or preview a full frame using Render Preview. 5. Render the passes and their render channels. Softimage gives you the option of rendering using any one of the following methods: - Interactively from the Render Region. - Interactively, using the single-frame preview tool. - Interactively from the Softimage user interface.
370 Softimage

- Batch rendering using the [xsi -render | xsibatch -render] command line. - Batch rendering with scripts using the -script option at the command line. - Using the ray3.exe command line. - Using mental rays tile-base distributed rendering across several machines. To do so, you must define which machines to use and how. 6. Composite and apply effects to passes. You can use Softimage Illusion, a compositing and effects toolset thats fully integrated in Softimage, or you can use another postproduction tool.

Softimage and mental ray


Softimage uses mental ray as its core rendering engine. mental ray is fully integrated in Softimage, meaning that most mental ray features are exposed in Softimages user interface, and are easy to adjust both while creating a scene and during the final renderings. Full integration with mental ray also allows artists to generate final-quality preview renders interactively in 3D views, using the render region.

Rendering Visibility
Every geometric object in a scene has a visibility property that controls whether it is visible when rendering, and in particular whether it is visible to various types of rays (primary, secondary, final gathering, and so on). This visibility property exists locally on every 3D object in Softimage and cannot be applied or deleted. However, visibility can be overridden at the partition level. In complex scenes, setting rendering visibility options can be difficult to manage on a per-object basis. Its easier to partition objects and use overrides to control rendering visibility for all of the objects in a partition.

Render Passes

Render Passes
A render pass creates a layer of a scene that can be composited with any other passes to create a complete image. Passes also allow you to quickly re-render a single layer without re-rendering the entire scene. Later, you can composite the rendered passes back together, making adjustments to each layer as needed. Each scene can contain as many render passes as you need. When you first create a scene in Softimage, it has a single pass named Default_pass. This is a beauty pass that is set to render every element of the scene. You can create additional passes to render specific elements and attributes as needed.

This photograph (background pass) is the background scene over which the dinosaur will be composited.

This image is the composite of all these passes. Rendering in passes allows you to tweak each isolated element separately without having to re-render your scene.

The specular pass is used to capture an objects highlights.

This pass is a rendered image of the dinosaur. Compositing it over the background would make the scene rather flat and unrealistic.

The matte pass cuts out a section of the rendered image so another image can be composited over or beneath it.

The shadow pass isolates the scenes shadows so you can composite them in later. This allows you to edit a shadows blur, intensity, and color without any additional rendering.

Basics 371

Section 21 Rendering

Render Pass Workflow


The following steps provide an overview of how to use render passes: 1. Create and name a new render pass. 2. Edit the pass using either Pass > Edit > Current Pass from the Render toolbar or the render manager. 3. Define partitions to edit the objects and lights in your render pass depending on what effect you want to achieve. 4. Specify the active camera for the pass. 5. Apply shaders and override properties for the pass and its partitions. An override lets you control specific shader parameters in a partition. 6. Set the pass options for each pass. 7. Set the renderer options for each pass. 8. After you have set up render passes, you can render them. 9. You can then composite and apply effects to the passes using Softimage Illusion.

Setting the Current Pass


The current pass is the pass to which all pass and partition properties are applied. The current pass is also the pass displayed in 3D views when the Render Pass view is displayed in a viewport.
To set the current pass click the arrow beside the Pass selection menu on the Render toolbar. Then from the pass list, choose the render pass you want to set as current.

Setting the Pass Camera


You can specify the camera you want to use for each render pass. The active camera provides the viewpoint from which the pass is rendered. In the render pass property editor, on the Output tab, choose a camera from the Pass Camera list, which lists all of a scenes cameras. Choose Cameras > Render Pass from any viewport Views menu to see the current pass from the viewpoint of the active camera.

Creating Passes
You will most likely want to create several passes as your scene grows in size and complexity. You can create a variety of pass types from the Render toolbars Pass > Edit > New Pass menu.

372 Softimage

Render Passes

Creating Partitions
A partition is a division of a pass that behaves like a group. There are two types of partitions: object and light. Light partitions can only contain lights, and object partitions can only contain geometric objects. Placing objects in partitions allows you to control their attributes by modifying them at the partition level rather than at the individual object level. The modifications affect only the objects in the partition for the specific render pass to which the partition belongs. This allows you to change object attributes on a per-pass basis. Create an empty partition by choosing Pass > Partition > New Partition on the Render toolbar and then add elements to it. Or you can select some objects and choose the same command to create a partition that automatically includes these objects.

Current pass. The current pass is always displayed in bold typeface. Each pass has its own options. This lets you optimize your rendering by enabling only those options you need for each pass. For example, you could enable shadow calculations only in the shadow pass. Expanding any pass node displays its renderer options, the active camera for the pass, its partitions, and any environment, output, and/or volume shaders applied to the pass as a whole.

Pass renderer options. Depending on which renderer you have chosen for your pass, click the Hardware Renderer or mental ray icon to edit the passs renderer options. You can identify whether the pass is using a local or global set of render options by the Roman or italic typeface displayed for the renderers node.

Viewing Passes and Partitions in the Explorer


In an explorer set the scope to Passes (press P) to see a hierarchical list of all of the render passes in your scene with their contents.

Pass camera. Click the camera icon to define camera and lens-shader options for the pass. You can add new cameras to your scene and set them as active if needed. Background partition. Every pass is created with two background partitions which contain the scenes objects and lights. Background partitions usually contain every object in your scene that isnt modified in the pass. However, nothing is stopping you from modifying the contents of these partitions as well.

D A B C D E H

F G

Basics 373

Section 21 Rendering

Partition. A partition is a division of a pass, which behaves like a group. Partitions are used to organize scene elements within a pass. Expanding any partition node allows you to see its contents, as well as any materials, shaders, overrides, and other properties that are applied to it. Each pass has two default partitions: a background objects partition that contains most or all of the scenes objects, and a background lights partition that contains most or all of the scenes lights. You can add as many additional partitions as you need for a pass, but an object can only be in one partition per pass.

Applying Shaders to Passes and Partitions


You can apply environment, volume, output, and lens shaders to an entire pass using the shader stacks in the render pass property editor.
Applies a shader to the camera. Removes a shader from the stack. Opens the selected shaders property editor. The applied shaders are listed in the stack.

Framebuffers. The framebuffers folder holds all the active render channels defined for the pass including its Main render channel. Passes. Additional passes including the default beauty pass are listed in creation order unless you have modified the explorers sort order settings. A material is assigned to a partition. The B indicates that it was applied in branch mode and is propagated to every object in the partition. If any objects in the partition have local materials, they will be overridden by the partition-level material for this pass.

When you apply shaders to partitions using the Get > Material command, they take precedent over the shaders applied directly to objects in the scene, but only for that pass.

Overriding Shader Parameters


You can use an override property to redefine specific shader parameters in a partition. For example, if a scene contains several hundred objects and you want to edit each objects transparency value without modifying the original material, you can create a partition that contains the objects you want to change, and apply an override property that affects only the transparency parameter of each material.

An override changes the ambient and diffuse values to black, but leaves the other values untouched.

374 Softimage

Render Channels

Render Channels
Render channels are a mechanism for outputting multiple images, each containing different information, from a single pass. When you render the pass, you can specify which channels should be output in addition to the full pass. By default a Main render channel is defined for every pass (you can think of it as the beauty channel rendered for each pass). You can use these images at the compositing stage, the same way you would use any render pass. The advantage of using render channels is that they are easy to define and quick to add to any pass. Preset render channels allow you to isolate scene attributes that are commonly rendered in separate passes. You do not need to create complex systems of partitions and overrides to extract a particular scene attribute. All you need is your default pass and you can quickly output the preset diffuse, specular, reflection, refraction, and irradiance render channels.

Setting Rendering Options


This scene defines six preset render channels, each extracting specific attributes of the objects surface materials. Any combination of these channels can be rendered with the pass.

Rendering options are set for the scene, for your renderer of choice (by default this is mental ray), and for each render pass you define. For interactive preview renders, the render region has its own set of renderer options. You can access these rendering options from different places: Render toolbar: opens the scene, pass, and renderer property editors. - Choose Render > Scene Options - Choose Render > Pass Options (for the current pass) - Choose Render > Renderer Options (active renderer for the current pass) Explorer: press 8 to open an explorer and then press P to set the scope to Passes or press U to set it to Current Pass. From there you can click the scene, pass, or renderer nodes to display their property editors. Render Manager: a dedicated view for editing scene, pass, and renderer options. It contains a built-in explorer view, quick access to pass rendering, rendering and output preferences, and a copy manager for you render settings. Choose Render > Render Manager from the Render toolbar.

Refraction Channel

Reflection Channel

Irradiance Channel

Ambient Channel

Diffuse Channel

Specular Channel

Basics 375

Section 21 Rendering

Anatomy of the Render Manager


E D F

H I J

376 Softimage

Setting Rendering Options

Explorer panel (left panel)

Select from the explorer the various render options available for editing. You can edit render options for the scene, for the renderer, and for each pass defined in the scene. Depending on your selection, the options are displayed in the middle or right panel. When you select a render pass, the render options for the selected pass are displayed in the middle panel. If you select multiple passes (Ctrl-select), you can simultaneously edit their common parameters. Multi Edit will appear at the top of the panel to indicate that you are in this mode.

Passes

The render options for all the render passes defined in your scene. The pass render options allow you to modify settings specific to each pass. You can set output paths, specify the pass camera, output your pass to a movie file, apply pass-level shaders, add render channels, and more.

Render pass panel (middle panel)

H I

Global renderers Scene Render Options

The render options for all available renderers. The scene render options allow you to modify global settings for the entire scene. You can specify things like the renderer to use, the frames to render, the basic output path and format for rendered images. You can also create custom render channels that you can add to individual passes. The current pass is displayed in bold in the explorer.

Renderer options panel (right panel)

When you select Scene Render Options or one of the global renderers (mental ray, Hardware Renderer, etc.), the options for the selected item are displayed in the right panel. This is also the case when you select a render pass that contains a set of local render options. If your selected passes use different renderers then Mixed Selection will appear at the top of the panel and no options are displayed.

Current pass

D Render menu E Edit menu

Use these commands to render the current pass, the selected passes, all passes in the scene, the current frame, or the current frame for all passes in the scene. Edit > Override Marked Pass Parameters Edit > Make Renderer Local to Pass Edit > Make Pass Renderer Global Edit > Open Rendering Preferences Edit > Open Output Format Preferences Edit > Copy Render Options

F Refresh

Updates the render manager after modifications.

Basics 377

Section 21 Rendering

Selecting a Renderer
You usually render a scene using the default mental ray rendering software, which is built into Softimage. mental ray uses three rendering algorithms: scanline, raytracing, and rasterizer. You can also use the hardware renderer, which renders whatever is displayed in a 3D view (such as a viewport in Shaded display mode). Scanline and raytracing are normally used together. mental ray uses the scanline method until an eye ray changes direction (due to reflection or refraction and so on), at which point it switches to the raytracing method. Once it switches, it does not go back to scanline until the next eye ray is fired. Without scanline rendering, the render is usually slower. Without raytracing, transparency rays are rendered, but reflection rays cannot be cast and refraction rays are not computed. The Rasterizer accelerates motion blur rendering in large and complex scenes with a lot of motion blur. You must set special sampling options. Scanline Scanline rendering is a rendering method used to determine primary visible surfaces. Scene objects are projected onto a 2D viewing plane, and sorted according to their X and Y coordinates. The image is then rendered point-by-point and scanline-by-scanline, rather than objectby-object. Scanline rendering is faster than raytracing but does not produce as accurate results for reflections and refractions.
This scene was rendered using scanline rendering only. Notice how the transparency has little depth, and there is no reflection or refraction.

Raytracing Raytracing calculates the light rays that are reflected, refracted, and obstructed by surface, producing more realistic results. Each refraction or reflection of a light ray creates a new branch of that ray when it bounces off an object and is cast in another direction. The various branches a ray constitute a ray tree. Each new branch can be thought of as a layer: if you add together the total number of a rays layers, it represents the depth of that ray.
This scene was rendered using the raytracing render method. Notice how the glass reflections, transparency, and refraction are more realistic than with Scanline rendering.

Hardware Rendering The Softimage hardware renderer allows you to output a scene as it appears when displayed in any 3D view whose viewpoint is that of the pass camera. Most of the hardware rendering modes correspond to the 3D views display modes Wireframe, Shaded, Textured, and so on. Hardware rendering is useful for generating previews of your scene using all of the display options available in 3D views. It is also useful for outputting realtime shader effects to file.

378 Softimage

Different Ways to Render

Different Ways to Render


There are several ways to render a scene, from single frame previews to large sequences rendered to file. Some rendering methods are launched from Softimages interface, others from the command-line.

Previewing Interactively with the Render Region


You can view a rendering of any section or object in your scene quickly and easily using a render region. Rather than setting up and launching a preview, you can simply draw a render region over any 3D view and see how your scene will appear in the final render. To draw a render region, press Q to activate the render region tool and drag in any 3D view to define the regions rectangle. Press Shift+Q to toggle the region on and off.

You can resize and move a render region, select objects and elements within the region, as well as modify its properties to optimize your preview. Whatever is displayed inside that region is continuously updated as you make changes to the rendering properties of the objects. Only this area is refreshed when changing object, camera, and light properties, when adjusting rendering options, or when applying textures and shaders. Comparing Render Regions The render region has memo regions that allow you to store, compare, and recall settings. They look similar to the viewports memo cams, but are not saved with the scene.
Middle-click to store, and click to display. The currently displayed cache is highlighted in white. Right-click for other options.

The left side shows the stored region.

The right side shows the current settings.

Drag the swiper to show more or less of one image or the other.

The render region uses the same renderer as the final render (mental ray), so you can set the region to render your previews at final output quality. This gives you an accurate preview of what your final rendered scene will look like.

Be careful when comparing render regions. You should do this only when you are tweaking material and rendering parameters, and not making other changes to the scene. If you revert to previous settings, either accidentally or on purpose, you will lose any modeling, animation, or other changes you have made in the meantime.

Basics 379

Section 21 Rendering

Previewing a Single Frame


The Render > Preview command in the Render toolbar lets you preview the current frame at fully rendered quality in a floating window. The frame is rendered using the render options for the current render pass or using the render region options defined in any of the four viewports.

To render a selection of passes, select the passes in the explorer and click the Render Pass > Selected button in the Render Manager, or choose Render > Render > Selected Passes from the Render toolbar. The passes are rendered one after the other.

Batch Rendering (xsi -render| xsibatch -render)


You can use -render command-line options to render scenes without opening the Softimage user interface. In addition, you can export render archives from the command line. The most common rendering options are available directly from the command line, while other options can be changed by specifying a script using the -script option.

ray3.exe Rendering
You can render scenes using the mental ray standalone ray3.exe from a command line. Although many of the ray3.exe commands are available in the Softimage interface, you may want to use the ray3.exe command line tool to manually override options in exported MI2 files. You can edit the MI2 files to define extra shaders, create objects, swap textures, or perform other tasks.

Distributed Rendering Rendering to File from the Softimage Interface


You can render your passes directly from the Softimage interface. Once the pass options are set, all you need to do is start the render in any of these ways: To render all of your scenes passes, click the Render Pass > All button in the Render Manager, or choose Render > Render > All Passes from the Render toolbar. To render the current pass, click the Render Pass > Current button in the Render Manager, or choose Render > Render > Current Pass from the Render toolbar. Distributed rendering is a way of sharing rendering tasks among several networked machines. It uses a tile-based rendering method where each frame is broken up into segments, called tiles, which are distributed to participating machines. Each machine renders one tile at a time, until all of the frames tiles are rendered and the frame is reassembled. By spreading the workload this way, you can decrease overall rendering time considerably. Once youve set up a distributed rendering network, rendering tasks are distributed automatically once a render is initiated on a computer. The initiating computer is referred to as the master and the other computers on the network are referred to as slaves. The master and slaves communicate via a mental ray service that listens on a designated TCP port and passes information to the mental ray renderer.

380 Softimage

Section 22

Compositing and 2D Paint


Softimage Illusion is a fully integrated compositing, effects, and 2D paint toolset that is resolution independent and supports 8, 16, and 32-bit floatingpoint compositing. You can use Softimage Illusion operators to perform compositing and effects tasks ranging from tweaking the results of a multi-pass render to creating complex special effects sequences. The effects that you create are part of your scene that are accessible from the explorer, are accessible to Softimages scripting and animation features, and support clips and sources, as well as render passes.

What youll find in this section ...


Softimage Illusion Adding Images and Render Passes Adding and Connecting Operators Editing and Previewing Operators Rendering Effects 2D Paint Vector Paint vs. Raster Paint Painting Strokes and Shapes Merging and Cloning

Basics 381

Section 22 Compositing and 2D Paint

Softimage Illusion
The Softimage Illusion toolset consists of three core views: the FxTree, where you build networks of effects operators; the Fx Viewer, where you preview the results; the Fx Operator Selector, from which you insert pre-connected operators into the FxTree. Each of these views can be opened in a viewport or as a floating view (choose View > Compositing > name of view from the main menu). There is also a Compositing layout available from the View > Layouts menu. It contains the three core Fx tools arranged in a way that makes it easy to build and preview effects. Using this layout for compositing and effects work is usually more efficient than simply opening the required views in viewports because the non-compositing tools and views are mostly hidden.

Fx Tree where you create networks of linked operators to composite images and create effects. You can create multiple instances of the FxTree workspace called trees to organize effects more efficiently.

Fx Viewer 2D viewer in which you can preview each operator to see how it contributes to the overall effect.

Fx Operator Selector Lists all of the available compositing and effects operators. Fx Operators Operators are represented by nodes that you can link together manually or connect beforehand using the Fx Operator Selector. Once you select an operator here, you can pre-set its connections to existing operators in the Fx Tree and then simultaneously insert and connect it in the Fx Tree.

382 Softimage

Adding Images and Render Passes

Adding Images and Render Passes


Before you can composite anything, or create any effects, you need to import images into the Fx Tree. There are several ways of doing this.

Getting Image Clips


The Fx Tree has direct access to all of the image clips in your project. Inserting image clips into the FxTree creates a pair of Image Clip operators for each imported clip. Image Clip operator pairs consist of two operators:

Clip In Operator

Getting File Input Operators


Importing images into the FxTree creates a File Input operator for each imported image. The operator points directly to the image on disk without creating an image source or clip. When you import an image, the File Input operators properties are automatically updated according to the images properties. To import image files, click the Import Images button in the FxTree menu bar or choose File > Import Images from the FxTree menu bar. A browser opens from which you can select an image to import.

Clip Out Operator

Clip In (or From): reads from the image clip. Clip Out (or To): writes back to it. You can modify the image clip itself by adding effects operators between the Clip In and Clip Out operators. This updates the clip wherever it is used in the scene. The Clip In and Clip Out operators are primarily used to modify images that are used outside of the Fx Tree. For an actual composite or effect that you intend to render to file, its better to use File Input operators. To import image clips, select an image clip from the FxTrees Clips menu.

Select this option to import an image using a File Input operator.

Getting Render Passes


In the FxTree, you can import any rendered pass (or all of them at once) from the Passes menu. A File Input node is created for each pass that you import, and the file name, start frame, and end frame are all based on the pass render options. The file extension is based on the pass image format and output channels.
Select this option to add all rendered passes to the Fx Tree. Select a pass to add it to the Fx Tree.

Setting Image Defaults


Before you begin building effects, you may want to adjust the Fx Trees image defaults to conform to your chosen picture format. The image defaults affect all operators that create an image (the Pattern Generator operator, for example), and are applied when you opt to output an operator with the default size, and/or bit-depth. Each tree that you create has its own set of image defaults that specifies the width/height, bit depth, and pixel ratio. To set the image defaults, choose File > Tree Properties from the Fx Tree menu.

Basics 383

Section 22 Compositing and 2D Paint

Adding and Connecting Operators


The FxTree is where you create networks of linked operators to composite images and create effects. Operators are represented by nodes that you can link together manually, by dragging connection lines, or connect beforehand using the Fx Operator Selector.
Fx Tree Menu Provides access to operators, render passes, image clips, and Fx Tree tools and preferences. 1 Start by adding images and/or sequences to the Fx Tree. These are the images that you want to composite together and/or build effects on.

If you need to build several different networks, you can create multiple instances of the FxTree workspacecalled trees to organize them more efficiently. Each tree is a separate operator in the scene with its own node in the explorer.
Navigation Control Allows you to navigate in the Fx Tree workspace when a network of operators becomes to large to display all at once. Dragging in the rectangle pans in the Fx Tree workspace. Dragging the zoom slider up and down zooms in and out. Operator Connection Icons Green icons accept image inputs. You can connect almost any operator to green inputs. Blue icons accept matte (A) inputs, which are generally used to control transparency. Red connections icons are outputs, plain and simple. Fx Operator Selector A tool for inserting operators into the Fx Tree. Select an operator from the list, then consecutively middleclick the existing operators you wish to connect to its inputs and output. Middle-click in an empty area of the Fx Tree workspace to add the operator.

Next you need to add and connect the operators required to build your effect. You can get any operator from the Ops menu and connect it by dragging connection lines from other operators outputs to its inputs. You can also use the operator selector to pre-define operator connections before you inset the operators into the Fx Tree.

Once youve built your effect, you can render it out using a File Output operator. Operator information Positioning the mouse pointer over an operator displays information at the bottom of the Fx Tree.

Once you define all of the needed connections, middle-click an empty area of the Fx Tree workspace to add the operator.

384 Softimage

Adding and Connecting Operators

Fx Operator Types
Whether youre compositing a simple foreground image over a background, or applying a complex series of effects to an image, every step of the process is accomplished by an operator in the FxTree. By connecting these operators together, you can create composites and special effects.
Operator Type Image Description Image operators act as the in and out points for each effect in the FxTree. File input operators are placeholders for images in the tree. Paint Clip operators are used to import images into the FxTree for raster painting. Vector Paint operators are used to create vector paint layers in the FxTree. PSD Layer Extract operators extract a single layer from a .psd image. File Output Operators let you set the output and rendering options for your composites and effects. Composite Composite operators offer you several ways to combine foreground images with a background image to produce a composited result. Most compositing operators require a foreground image, a background image, and an internal or external matte. Retiming operators allow you to change the timing of image sequences. You can, for example, convert from 24 to 30 frames-per-second and vice versa, interlace and de-interlace clips, and change the duration of clips by dropping frames, or combining them together in different ways. Transition operators create animated changes from one image clip to another. You can use transition operators to apply dissolves, fades, wipes, pushes, and peels. Color adjust operators let you color correct clips in the FxTree. You can modify and animate hue, saturation, lightness, brightness, contrast, gamma, and RGB values. You can also perform various operations like inverting, images, premultiplying images, and so on. Grain Operator Type Color Curves Description Use the Color Curves operators to graphically adjust color components of images in the FxTree, and to extract mattes for foreground images so that you can composite them over background images. Grain operators alter the appearance of film grain in your image sequences. You can add and remove grain, as well as adding and removing noise. Optics operators create optical effects in images in the FxTree. These include depth-of-field, lens flares, and flare rings. Filter operators let you control the appearance of images in the FxTree. Among other things they can reproduce the effects of different lens filters, apply blurs, and add or remove noise. Distort operators simulate 3D changes to images in the FxTree. Use these operators to apply distortions and transformations Transform operators adjust the dimensions and/or position of Images in the FxTree. Besides cropping and resizing images, you can also use the 3D Transform operator to transform an image in a simulated 3D space, as well as warp and morph images. The plugins operators offer a variety of patterns and special effects that you can use in your FxTrees. All of the Plugins operators are custom operatorscalled UFOs that were created using the UFO SDK. Painterly Effects operators allow you to apply a variety of classic artistic effects to images in the FxTree. The Softimage compositors three sets of Painterly Effects operators let you apply effects like Chalk & Charcoal, Watercolor, Bas Relief, Palette Knife, Stained Glass, and many more, to images in the FxTree.

Optics

Filter

Distort

Retiming

Transform

Transition

Plugins

Color Adjust

Painterly Effects

Basics 385

Section 22 Compositing and 2D Paint

Editing and Previewing Operators


A big part of building an effect is previewing operators and editing their properties. As you adjust an operators parameters, you can see the effect of your changes reflected in the Fx Viewer. There are several ways to edit and preview operators, but the easiest is to use the View and Edit hotspots that appear when you position the mouse pointer over an operator. The Edit hotspot opens the operators property editor, while the View hotspot previews the operator in the Fx Viewer. This allows you to open one operators property editor while youre previewing another operator. For example, you might want to see how color correcting one image affects the composited result of that image and another one.

Operator Info Displays info about the operators being viewed and edited. Navigation Tool Drag in the rectangle to pan. Drag on the slider to zoom. Click the Edit hotspot to open the operators property editor. Click the View hotspot to preview the operator in the Fx Viewer. Compare Area Displays a portion of one image while you're editing another image. This is useful for seeing one operators effect on another. Image courtesy of Ouch! Animation Display Area Displays the operator that youre previewing. Displays the current image at full size. Toggles the Compare Area Updates the Compare Area with the current image. Switch viewers A and B. Isolate one of the images color channels. Forces the current image to fit in the viewer. Mixes the view with the Merge Source.

Split viewers A and B.

Display images alpha channel as a red overlay.

386 Softimage

Rendering Effects

Rendering Effects
Once you have your effect looking the way you want it, you can render it to a variety of different image formats using a File Output operator. The File Output property editor is where you set all of the effects output options, including the picture standard, file format, and range of frames. Rendering Effects From the Command Line You can also render effects non-interactively from the command line using xsi -script or xsibatch -script. Make sure that your script contains the following line (VBScript example):
RenderFxOp "OutputOperator", False

Click here to open the Rendering window. Enter a valid filename, path and format here. When the sequence is rendered, click here to open a flipbook and view it. Specify the range of frames to render.

where OutputOperator is the name of the FileOutput operator that you want to render. The False statement specifies that the Fx Rendering dialog box should not be displayed during rendering.

Once youve set the output options, all you need to do is click the Render button to start the rendering process. In the Rendering window, you will get information regarding the rendering of the sequence.

Basics 387

Section 22 Compositing and 2D Paint

2D Paint
Softimages compositing and Effects toolset includes a 2D paint module which offers 8 and 16-bit raster and vector painting. To paint on images, you set up paint operators in the FxTree and then paint on them in the Fx viewer, where a Paint menu gives you access to a variety of paint tools. You work with paint operators the same way you work with other Fx operators, making it easy to touch up images, fine-tune effects, edit image clips, paint custom mattes, create write-on effects, and so on. You can also use blank paint operators to paint images from scratch.

Paint Menu When you edit a paint operator, the paint menu is added the Fx Viewer, giving you access to all of the paint-related commands and tools. Fx Paint Brush List Lists all of the paint brushes available for painting strokes. All of the brushes are presets based on the same core set of properties. The Fx Paint Brush List is an optional view in the compositing layout (shown here). To open: choose View > Compositing > Fx Color Selector from the main menu. Fx Viewer When you edit and preview a paint operator, the Fx Viewer is where you actually paint strokes and shapes.

Fx Color Selector Allows you to choose foreground and background paint colors using a variety of different color models. To open: position the mouse pointer in the Fx Viewer and press 1, or choose View > Compositing > Fx Color Selector from the main menu.

Paint Operators Behave exactly like other operators in the Fx Tree, and can be connected manually or using the operator selector.

388 Softimage

Vector Paint vs. Raster Paint

Vector Paint vs. Raster Paint


Softimages paint tools allow for both vector- and raster-based painting. Each has its advantage, as well as its own operator to use in the Fx Tree. Mask Shapes The Mask Shapes operator is an alpha-only version of the Vector Paint operator. You can use the vector paint tools in a Mask Shapes to paint a matte that you can use in any other Fx operator.

Vector Paint
Vector painting is a non-destructive, shape-based process where every brush stroke is editable even after youve painted it. Rather than painting directly on an image, you paint on a vector shapes layer that is composited over an input image or other operator. In the Fx Tree, you add a vector shapes layer overtop of an image by connecting the images operator to Vector Paint operators input. You can then paint on the vector shapes layer in the Fx viewer. A Vector Paint operator has a small paint brush/shape icon in its upper-left corner. This differentiates it from non-paint operators, which you cannot paint on, and from raster paint operators, which use a different icon. One convenience of painting in vector paint operators is that you dont have to manage changes to each frame the way you do with raster paint clips. Every shape in a vector paint operator is stored as part of the operators data, and is animatable. This allows you to paint shapes and strokes that stay in the image for as many frames as you need. Vector paint operators are blank by default and do not have source images. Instead, they are more like other Fx operators in that they have both an input and an output and use other operators outputs as their sources. However, theres nothing preventing you from keeping them blank and painting their contents from scratch.

Raster Paint
Raster painting is the process of painting directly on an image. It is destructive, meaning that each time you paint a stroke, youre directly altering the images pixels. Once youve painted on the image, the stroke or shape cannot be moved or altered (unless, of course, you paint a new stroke over it). In the Fx Tree, you can paint on images or sequences (but not a movie file .avi, Quicktime, and so on) loaded in a Paint Clip operator, which is available from the Ops menu. You can also insert a blank paint clip and configure it later. A Paint Clip operator has a small paint brush icon in its upper-left corner. When you paint on a sequence, you can manage changes to frames using the tools on the Modified Frames tab of an Paint Clips property editor. You can revert painted frames back to their last saved state, and save changes when youre ready to commit them.

Where you manage painted frames. Save changes to frames. Revert frames to their original state. Lists every unsaved frame that youve changed.

Basics 389

Section 22 Compositing and 2D Paint

Painting Strokes and Shapes


At its most basic, painting on an image is a simple matter of inserting a paint operator in the FxTree, choosing a paint color, brush and tool, and using the mouse pointer to paint in the Fx viewer. The following a general overview of the paint process, intended to give you an idea of workflow, as well as a sense of where to set the options necessary for defining strokes and shapes.

Set the active paint brush from the Fx Paint Brush List. The active paint brush is used by any paint tool that can paint a stroke (the paint brush tool, the line tool, the shape tools, and so on).

Add a paint operator to the Fx Tree workspace and edit its properties. This activates the Fx Viewers paint menu, giving you access to paint tools and options.

Select a brush from the list.

If necessary, edit the brush properties. To open the brush property editor, position the mouse pointer in the Fx Viewer and press 2.

Choose a paint tool from the Fx Viewers Draw menu.

Choose a brush category from the brush-type list.

Choose the foreground and (if needed) the background color from the Fx Color Selector.

If necessary, edit the tool properties. To open the tool property editor, position the mouse pointer in the Fx Viewer and press 3.

The five most recently used colors are stored in the selector for easy access.

390 Softimage

Painting Strokes and Shapes

Paint on the operator in the Fx Viewer. The Flood Fill tool (not shown) fills pixels that you click, and neighboring pixels of similar color, with the specified foreground color. The Draw Rectangle and Draw Ellipse tools are unique in that they are the only shape tools that work in both raster paint clips and vector paint operators (all other shape tools are vector-paint only). In either mode, the shapes are drawn using the current colors and paint brush settings. The Mark Out Shape tool allows you to create an editable vector shape by clicking to define the locations of the shape's points. As you add points, each new point is connected to the previous point by a line segment. The line segments curve, or lack thereof, depends on the type of shape youre drawing: Bzier, B-Spline, or Polyline. The Mark Out Shapes tool is only available in vector paint operators.

The Brush tool is the most basic tool for painting brush strokes. You use it to paint on images as if you were using a real paint brush, or one of the myriad tools simulated by the brush presets in the Fx paint brush list. Painting is a simple matter of clicking and dragging on a paint operators image.

The Line tool, as you might imagine, allows you to draw straight lines. This is especially useful for painting wires out of an image or sequence. In vector paint operators, drawing a line creates a two-point color shape drawn using the outline (stroke) only

The Freehand Shape tool allows you to draw editable vector shapes as if you were using a pen and paper. You need only drag the paint cursor around the outline of the shape that you wish to draw. The Freehand Shape tool is only available in vector paint operators.

If you are using vector paint operators, you can edit any vector shapes that youve painted. The two images below show the manipulators used to transform a vector shape and to edit a vector shapes points.

Basics 391

Section 22 Compositing and 2D Paint

Merging and Cloning


Merging and cloning are both ways of painting using an images pixels as the paint color. In the Brushes category of the Fx paint brush list, youll find the Merge brush and the Clone brush, which you can use to paint strokes and lines, or draw shapes that use a source image as the paint color.

Cloning
Cloning is the process of painting pixels from one region of an image to a different region of the same image. This can be useful for duplicating elements in an image, as in the example below. It is also often used to paint out unwanted elements. For example, you can remove wires from a clear sky by painting over them with adjacent pixels.
Before In this example, the trumpet player and his shadow are cloned into the left side of the frame.

Merging
Merging is the process of painting pixels from a source image called the merge source onto the corresponding portion, or a different portion, of a destination image. This is useful for painting unwanted elements, like wires, out of images. It is also useful for painting new elements into images, like the clouds in the example below.

Destination

Offset

Source After

Merge Source

In this example, the image of the clouds is set as the merge source and is being painted into the image of the field, as shown below.

Clone

Original

You can set any operator in the Fx Tree as the merge source by right-clicking it and choosing Set as Paint Merge Source from the menu. This adds a small paint-bucket icon to the operator to help you identify it as the merge source.

Merge Source icon

When you paint using the Clone brush, youll only see a result if you use a brush offset. The offset is the distance between the area from which youre painting and the area to which youre painting. You can offset the brush in any direction and use any offset distance, as long as both the source and destination cursors can be placed somewhere on the target image simultaneously.

392 Softimage

Section 23

Customizing Softimage
You can extend Softimage in a variety of ways by customizing it. Many customizations are too involved to cover here, but you can get more details in the Softimage Users Guide and Softimage SDK Guide.

What youll find in this section ...


Plug-ins and Add-ons Toolbars and Shelves Custom and Proxy Parameters Displaying Custom and Proxy Parameters in 3D Views Scripts Key Maps Other Customizations

Basics 393

Section 23 Customizing Softimage

Plug-ins and Add-ons


You can extend the functionality of Softimage using plug-ins and addons: A plug-in is a customization, for example, a command or operator, implemented in a single file (possibly with a separate help file). An add-on is a set of related customization files stored together in an Add-on directory. It may consist of a toolbar and its associated commands, operators, properties, and so on. Add-ons can be packaged into a single .xsiaddon file to distribute to others. Plug-ins and add-ons can be managed using the Plug-in Manager.

Installing Add-on Packages


The easiest way to install (and uninstall) a packaged add-on is to use the Plug-in Manager.
To install an .xsiaddon

1. In the Plug-in Tree, right-click User Root or the first workgroup in the tree and choose Install .xsiaddon. If you want to install the add-on in a different workgroup, go to the Workgroup tab and move that workgroup to the top of the list. You can install add-ons only in the first workgroup. 2. In the Select Add-on File dialog box, locate the .xsiaddon file you want to install, and click OK. You can also install an add-on by dragging an .xsiaddon file to an Softimage viewport. This installs the add-on in the User location or the first workgroup, depending on the value in the DefaultDestination tag of the .xsiaddon. The SDK Guides contain additional information about other methods of installing add-ons.
To uninstall an .xsiaddon

Plug-in Manager
The Plug-in Manager is the central location for managing your customizations. You can display the Plug-in Manager using File > Plug-in Manager or in the Tool Development Environment (View > Layouts > Tool Development Environment).

In the Plug-in Tree, right-click an add-on and choose Uninstall Add-on. Installing a simple plug-in is as easy as copying the script or library file to the Plugins directory of your user or workgroup location.

394 Softimage

Toolbars and Shelves

Toolbars and Shelves


Softimage lets you create and edit your own custom toolbars and shelves. This gives you convenient access to commands, presets, and other files. Toolbars contain buttons for running commands or applying presets. Shelves are floating windows that contain tabs. Each tab can be a toolbar, display the contents of a file directory, or hold other items. Toolbars and shelves can be floating windows, or embedded in a view or layout. They are stored as XML-based files with the .xsitb extension in the Application\toolbars subdirectory of the user, workgroup, or factory path. At startup, Softimage gathers the files it finds in these locations and adds them to the View > Toolbars menu. Toolbars and shelves that are found in your user location are marked with u in the View menu, and those that are found in a workgroup location are marked with w. You can remove toolbars and shelves stored in the user location. Choose View > Manage, check any items you want to remove, and click Delete. The items are not physically deleted but they are marked for removal. When you exit Softimage, the file extensions are changed to .bak so they wont be detected and loaded when you restart. The Custom tab of the main shelf (View > Optional Panels > Main Shelf ). To create a new toolbar, choose View > New Custom Toolbar. To add presets to the toolbar, drag them from a file browser. To add commands and tools, choose View > Customize Toolbar, select a command category, and drag items onto the toolbar. Use the Toolbar Widgets category to organize your toolbar. To add a script, drag lines from the script editor or a script file from a browser and choose Script Button. To remove an item from a custom toolbar, right-click on a toolbar button and choose Remove Button. To save the toolbar, right-click on an empty area of the toolbar and choose Save or Save As.

Custom Toolbars

You can create your own toolbar and use it to hold commonly used tools and presets. Tools and presets are represented as buttons on the toolbar. Softimage also includes a couple of blank toolbars that are ready for you to customize by adding your own scripts, commands, and presets: the Custom tab of the main shelf (View > Optional Panels > Main Shelf), and the lower area of the palette and script toolbar.

- To create a new toolbar, choose View > New Custom Toolbar.
- To add presets to the toolbar, drag them from a file browser.
- To add commands and tools, choose View > Customize Toolbar, select a command category, and drag items onto the toolbar. Use the Toolbar Widgets category to organize your toolbar.
- To add a script, drag lines from the script editor or a script file from a browser and choose Script Button (a sample button script follows this list).
- To remove an item from a custom toolbar, right-click a toolbar button and choose Remove Button.
- To save the toolbar, right-click an empty area of the toolbar and choose Save or Save As.
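The following is the kind of short script you might place on a Script Button; the object name is just an example.

    # Create a sphere and raise it above the grid -- a simple toolbar button script.
    si = Application
    si.CreatePrim("Sphere", "MeshSurface", "ButtonSphere")
    si.SetValue("ButtonSphere.kine.local.posy", 4.0)
    si.LogMessage("ButtonSphere created and moved up.")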

Shelves

To create a custom shelf, choose View > New Custom Shelf. To add a tab, right-click an empty part of the tab area and choose an item from the Add Tab menu (if no tabs have been defined yet, you can right-click anywhere in the shelf).

- Folder tabs display the files in a specific directory. You can drag files such as presets from a folder tab onto objects and views in Softimage.
- Toolbar tabs hold buttons for commands and presets.
- Driven tabs can be filled with scene elements such as clips by using the object model of the SDK.

To save a custom shelf, click the Options icon and choose Save or Save As.


Custom and Proxy Parameters


Custom parameters are parameters that you create for your own purposes. Proxy parameters are linked copies of other parameters that you can add to your own custom parameter sets. Both custom parameters and proxy parameters are contained in custom properties, also known as custom parameter sets.

For example, you can use a set of sliders in a property editor to drive the pose of a character instead of creating a virtual control panel using 3D objects.

First, create a custom parameter set by selecting an element and choosing Create > Parameter > New Custom Parameter Set on the Animate toolbar, and then give it a meaningful name. Next, create new parameters using Create > Parameter > New Custom Parameter. If your object has only one custom parameter set, the custom parameters are placed in it; if there are multiple sets, select the desired one beforehand. If there aren't any custom parameter sets on the selected object, one is created using a default name.

At this point, the custom parameter set exists only in the scene in which it was created; it is not installed at the application level. You can copy it to other objects in the same scene, or save a preset to apply it to objects in other scenes. If you want, you can convert the custom parameter set into a self-installing custom property by right-clicking the light gray header bar of the property editor and choosing Migrate to Self-installed. This lets you distribute the property as a script plug-in, and you can edit the script file to control the layout and logic of the property.
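The same setup can be scripted through the object model. This sketch (with example names) adds a custom parameter set to the first selected object and gives it one slider.

    from win32com.client import constants as c

    si = Application
    obj = si.Selection(0)                              # first selected object
    pset = obj.AddProperty("CustomProperty", False, "PoseControls")
    # AddParameter3(ScriptName, Type, Default, Min, Max) creates a custom parameter.
    pset.AddParameter3("ArmLift", c.siDouble, 0.0, 0.0, 1.0)
    si.InspectObj(pset)                                # open its property editor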

Custom Parameters
Custom parameters are parameters that you create for any specific animation purpose you want. You typically create a custom parameter and then connect it to other parameters using expressions or linked parameters. You can then use the sliders in the custom parameter set's property editor to drive the connected parameters in your scene.
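For example, you can wire a custom parameter to a target parameter with an expression; in this sketch, the object and parameter names are hypothetical.

    # Drive a cube's height from the ArmLift slider created above.
    si = Application
    si.SetExpr("cube.kine.local.posy", "cube.PoseControls.ArmLift * 5")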

Proxy Parameters
Proxy parameters are similar to custom parameters, but with a fundamental difference. Custom parameters can drive target parameters, but they are still separate and distinct parameters: when you set keyframes, you key the custom parameter and not the driven parameter. So what do you do when you want to drive the actual parameter, or create a single parameter set that holds only those existing parameters you are interested in? You can use proxy parameters.

Unlike custom parameters, proxy parameters are cloned parameters: they reflect the data of another parameter in the scene. Any operation performed on a proxy parameter has the same result as if it had been performed on the real parameter itself (changing a value, saving a key, and so on).

While you can create proxy parameters for any purpose, you will most likely use them to create custom property pages. You can create your own property pages for just about anything you like. For example, you can gather all animatable parameters for an object on a single property page, making it much quicker and easier to add keys because all the animated parameters are in one place. Or, as a technical director, you can expose only the necessary parameters for your animation team to use, thereby streamlining their workflow and reducing potential errors.

First, create a custom parameter set, then open an explorer and drag and drop parameters into the custom property editor or onto the custom parameter set node in the explorer. Alternatively, use Create > Parameter > New Proxy Parameter to specify parameters with a picking session.
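In scripts, the object model exposes proxy parameters through the custom property. The following sketch assumes the CustomProperty.AddProxyParameter method described in the SDK Guides; the names are examples.

    si = Application
    obj = si.Selection(0)
    pset = obj.AddProperty("CustomProperty", False, "QuickKeys")
    # Clone the object's local Y position into the set; keying the proxy
    # keys the real parameter.
    posy = obj.Kinematics.Local.Parameters("posy")
    pset.AddProxyParameter(posy)
    si.InspectObj(pset)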

Displaying Custom and Proxy Parameters in 3D Views

You can display and edit parameter values directly in a 3D view; this is sometimes called a heads-up display (HUD). You do this by creating a custom parameter set whose name starts with the text DisplayInfo. You can simply display information, for example, about your company or a particular scene shot, or you can mark parameters and change their values.

Viewing the Information in a 3D View

Select one or more objects with a DisplayInfo custom parameter set. If nothing is selected, the DisplayInfo set of the scene root is displayed (if it has one). To view the DisplayInfo information, click the eye icon in a 3D view and choose Visibility Options, then select Show Custom DisplayInfo Parameters on the Stats page of the Camera Visibility property editor.
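Creating a HUD is just a matter of naming the parameter set appropriately. A sketch, with example names:

    from win32com.client import constants as c

    si = Application
    obj = si.Selection(0)
    # Any custom parameter set whose name starts with "DisplayInfo" is
    # picked up by the 3D views.
    hud = obj.AddProperty("CustomProperty", False, "DisplayInfo_ShotNotes")
    hud.AddParameter3("TakeNumber", c.siInt4, 1, 1, 99)
    hud.AddParameter3("Approved", c.siBool, False)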

Changing Parameter Values in a 3D View

You can easily modify the parameters displayed in the 3D views. A preference controls the interaction: if Enable On-screen Editing of DisplayInfo Parameters is on in your Display preferences, you can modify the values as well as animate them directly in the display. If on-screen editing is disabled, you can still mark the parameters and modify them using the virtual slider.

If on-screen editing is enabled, the parameters appear in a transparent box in the view. The title of the parameter set is shown at the top (without the DisplayInfo_ prefix), and each parameter has animation controls that allow you to set keys.


You can do any of the following:

- Click and drag on a parameter name to modify the value. You don't need to explicitly activate the virtual slider tool.
  - Drag to the left to decrease the value, and drag to the right to increase it.
  - Press Ctrl for coarse control.
  - Press Shift for fine control.
  - Press Ctrl+Shift for ultra-fine control.
  - Press Alt to extend beyond the range of the parameter's slider in its property editor (if the slider range is smaller than its total range).
  If the parameter that you click is not marked, it becomes marked. If it is already marked, then all marked parameters are modified as you drag.
- Double-click a numeric value to edit it using the keyboard. The current value is highlighted, so you can type in a new value. Only the parameter you click is affected, even if multiple parameters are marked.
- Double-click a Boolean value to toggle it. Only the parameter you click is affected, even if multiple parameters are marked.
- Click an animation icon to set or remove a key for the corresponding parameter.
- Right-click an animation icon to open the animation context menu for the corresponding parameter.
- Click the triangle in the top right corner to expand or collapse the parameter set.

The color of the animation icon indicates the following:

- Gray: The parameter is not animated.
- Red: There is a key for the current value at the current frame.
- Yellow: The parameter is animated by an fcurve, and the current value has been modified but not keyed.
- Green: The parameter is animated by an fcurve, and the current value is the interpolated result between keys.
- Blue: The parameter is animated by something other than an fcurve (expression, constraint, mixer, etc.).

Note that if there is a DisplayInfo property on the scene root, you cannot edit its parameters on-screen unless the scene root is selected.


Scripts
Scripts are text files containing instructions for modifying data in Softimage. They provide a powerful way to automate many tasks and simplify your workflow.
The script editor includes the following areas and controls:

- Command box: displays the most recent command. Modify the contents or type a new command, then press Enter to execute it. An adjacent list lets you select any of the last 25 commands, and the script editor icon opens the script editor itself.
- History pane: contains the most recently used commands in your current session. Drag and drop lines into the editing pane to get a head start on your own scripts. The history pane also contains messages related to importing and exporting, debugging information, and so on.
- Editing pane: a text editor in which you can create scripts by typing or pasting. Right-click for a context menu.
- Run: runs the lines selected in the editing pane; if no lines are selected, the entire script is run.
- Help: gets help on the command selected in the editing pane.
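A first script to try in the editing pane (select a few objects, then run it):

    # Log the full name of every selected object.
    si = Application
    for obj in si.Selection:
        si.LogMessage(obj.FullName)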


Key Maps
Key maps determine the keyboard combinations that are used to run commands, open windows, and activate tools. You can create your own key maps to define new key bindings or change the default ones. Key maps are stored as XML-based .xsikm files in the Application\keymaps subdirectory of the user, workgroup, or factory path. At startup, Softimage gathers the files it finds in these locations and makes them available for selection in the Keyboard Mapping editor. When you change a key mapping, the new key automatically appears next to the command in menus and context menus (for some menus, you must restart Softimage to see the new label).

Open the Keyboard Mapping editor by choosing File > Keyboard Mapping from the main menu. Select an existing key map, or click New to create a new one.
Keyboard shortcuts are grouped by interface component. Click an interface component in the Group list to display its commands and their keyboard shortcuts in the Command list. Click a command in the Command list to display its keyboard shortcut in red. To see which command is mapped to a key, click the appropriate modifiers (Alt, Ctrl, Shift) from the check boxes or the keyboard diagram, then rest your mouse pointer over a key on the keyboard diagram.

Create or modify a shortcut by dragging a command label to a shortcut key. Hold down the Shift, Ctrl, or Alt key while dragging to add a modifier to the new shortcut command.

Remove a shortcut key by selecting a command from the Command box and pressing Clear.


The keyboard keys are color-coded to indicate the following:

- White: no keyboard shortcut has been assigned to this key.
- Beige: a keyboard shortcut from another interface component has been assigned to this key.
- Light brown: a keyboard shortcut from the currently selected interface component has been assigned to this key.
- Red: this keyboard shortcut corresponds to the currently selected item in the Command box.

To see key conflicts with other windows, select View and choose a window from the adjacent list. Keys that are used by the selected window are highlighted in dark brown. For combinations involving modifiers, select the appropriate Ctrl, Shift, and Alt boxes or press and hold those keys on your keyboard.

Other Customizations
In addition to the customizations briefly mentioned so far, there are many other ways you can extend Softimage:

- Custom commands can automate repetitive or difficult tasks. Commands can be scripted or compiled.
- Custom operators can automatically update data in the operator stack. Operators can be scripted or compiled.
- Layouts define the main window of Softimage. You can create layouts based on your preferences or common tasks.
- Views can be floating or embedded in a layout. You can create views for specialized tasks.
- Events run automatically when certain situations occur in Softimage (see the sketch at the end of this section).
- Synoptic views allow you to run scripts by clicking hotspots in an image; for example, you can create custom control panels for a rig.
- Net View allows you to create an HTML interface for sharing scripts, models, and other data.
- Shaders give you complete control over the final look of your work.

For more information about customizing Softimage, see the SDK Guides, as well as Customization in the Softimage Guides.
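To close with an example of one of these extension points, here is a hedged sketch of a scripted event: a self-installing plug-in that logs a message whenever a scene finishes opening. The plug-in name is invented; see the SDK Guides for the full list of event identifiers.

    from win32com.client import constants as c

    def XSILoadPlugin(in_reg):
        in_reg.Author = "you"
        in_reg.Name = "SceneOpenLoggerPlugin"
        in_reg.Major = 1
        in_reg.Minor = 0
        # Register a callback for the end-of-scene-open event.
        in_reg.RegisterEvent("SceneOpenLogger", c.siOnEndSceneOpen)
        return True

    def SceneOpenLogger_OnEvent(in_ctxt):
        Application.LogMessage("A scene was just opened.")
        return True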
