Basics 3
4 Softimage
Contents
Welcome to Autodesk Softimage! . . . 9

Section 1: Introducing Softimage . . . 13
The Softimage Interface . . . 14
Getting Commands and Tools . . . 16
Setting Values for Properties . . . 18
Working with Views . . . 21
Working in 3D Views . . . 23
Exploring Your Scene . . . 32
Linking Parameters . . . 164
Expressions . . . 166
Copying Animation . . . 168
Scaling and Offsetting Animation . . . 169
Plotting (Baking) Animation . . . 170
Removing Animation . . . 170
Modifying and Offsetting Action Clips . . . 218
Sharing Animation between Models . . . 220
Adding Audio to the Mix . . . 222
Copyright 2005 by Paramount Pictures Corporation and Viacom International Inc. All Rights Reserved. Nickelodeon, Barnyard and all related titles, logos and characters are trademarks of Viacom International Inc.
The Interface
Softimage's interface is laid out to give you both a large viewing area and easy access to all the tools you need, all the time. You can easily resize any panel or viewport in the Softimage interface, as well as customize its layout to exactly what you want.
Simulation
You can simulate almost any kind of natural, or unnatural, phenomenon you can think of using rigid bodies, soft bodies, or cloth, or grow some hair! Simulation-type objects can then be influenced by forces and collisions to create simulated animations.

ICE: Interactive Creative Environment
ICE is a visual programming environment available directly within the Softimage interface. Using a node-based data tree format, you can modify how any tool works, create custom tools and effects, and see the results interactively, all without scripting a line of code. ICE is currently used mostly for creating particle and deformation effects. Using ICE trees, you can create almost any type of particle effect you want. You can make natural phenomena, such as smoke, fire, and rain, but you can also make objects or characters act in a simulated environment: rocks tumbling, glass pieces breaking, grass or hair growing, or humans running about.

Shaders and Texturing
Using a graphical node-based connection tool called the render tree, you can create an unlimited range of materials by connecting any type of shader to any object. You can also project 2D and 3D textures into texture spaces, which can then be manipulated like a 3D object.
Rendering
Drawing upon the integration of mental ray rendering technology, Softimage offers full-resolution, interactive rendering, caustics, global illumination, and motion blur, not only for the final render, but also within a render region that can be drawn in any Softimage viewport. It renders everything in Softimage, letting you adjust your render parameters at any stage of modeling, animating, or even during playback. As well, you can embed unlimited render passes into a single scene and, for each pass, generate multiple rendered channels such as specular or reflections. Softimage's render passes and render channels are extremely easy to create, customize, and edit.

Painting and Compositing
Softimage has a built-in compositor, called Softimage Illusion, designed to edit textures and image-based lighting in real time. You can use it to rough out final shots, touch up your textures, morph, warp, and rig images, create custom mattes, and tweak the results of a multi-pass render, all within Softimage.
Section 1
Introducing Softimage
New to Softimage? Take a quick guided tour through the interface and basic operations.
You can toggle parts of the standard layout using View > Optional Panels. Other layouts are available from the View > Layouts menu. You can even create your own layout for a customized workflow. Softimage has many preferences for its tools, editors, and working methods (choose File > Preferences). If you want to change something, chances are there's a preference for it!
Title bar Displays the version of Softimage, your license type, and the name of the open project and scene.
Sample Content Softimage ships with a sample database XSI_SAMPLES containing scenes, models, presets, scripts, and other goodies. Open a Softimage file browser (View > General > Browser or press 5 at the top of the keyboard), then click Paths and choose Sample Project.
Viewports Lets you view the contents of your scene in different ways. You can resize, hide, and mute viewports in any combination. See Working with Views on page 21 for details.
Main menu bar

Main toolbar: Contains commands and tools for different aspects of 3D work. Press 1 for the Model toolbar, 2 for Animate, 3 for Render, 4 for Simulate, and Ctrl+2 for Hair. You can also access these controls from the main menu bar. For more information about other controls that can be displayed in this area, see The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 and Switching Toolbars on page 16.
Icons Switch between toolbar and other panels, or choose viewport presets. See The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 as well as Viewport Presets on page 22 for details.
Main command panel (MCP) Contains frequently used commands grouped by category. Switch between the MCP, KP/L, and MAT panels using the tabs at lower right. See The MCP, KP/L, and MAT Panels on page 17 for details.
Lower interface controls The controls at the bottom of the interface include a command box, script editor icon, the mouse/status line, the timeline, the playback panel, and the animation panel.
Menu Buttons
Buttons with a triangle open up a menu of commands and tools. You can middle-click on a menu button to repeat the last action you performed on that menu.
The main toolbar is where you'll do most of your work. The weight paint panel contains a specialized layout for editing envelope weights; see The Weight Paint Panel on page 182. The palette contains some wire color and display mode presets, as well as a custom toolbar where you can store custom commands.
Context Menus
You can right-click on elements in the views to open a menu with items that relate to the element under the mouse pointer. This is a quick and convenient way to access commands and tools, for example, when modeling. In the explorer or schematic view, right-click on an element to open its context menu. In a 3D view, Alt+right-click (Ctrl+Alt+right-click on Linux) on an object to open its context menu, or on the background to open the Camera View menu. When object components like points, polygons, or edges are selected, right-click anywhere on the object for the selected components' context menu. Right-click anywhere else for the Camera View menu. Some tools, like the Tweak Components tool, have their own right-click menus with options specific to each tool.
Switching Toolbars
The main toolbar on the left side of the interface can display categories for modeling, animation, rendering, simulation, and hair. You can switch between these categories by clicking on the toolbar's title as shown at right, or by pressing 1, 2, 3, 4, or Ctrl+2 (use the number keys at the top of the keyboard, not on the numeric keypad). If you prefer, you can also access the same commands from the main menu bar.
Type a numerical value in a text box to change the parameter's value precisely. You can sometimes enter values beyond the slider range.
Drag the mouse in a circular motion over the text box to change values (scrubbing). Scrub clockwise to increase and counterclockwise to decrease.
Increment values using [ and ]. Ctrl and Shift change the increment size; for example, press Ctrl+] to increment by 10. You can also press Ctrl or Shift with the arrow keys to change values by increments.
Enter relative values with the addition (+), subtraction (-), multiplication (*), and division (/) symbols after the value. For example, 2- decreases the value by 2. On the other hand, -2 enters negative two.
With multiple elements, use l(min, max) for a linear range, r(min, max) for random values, and g(mean, var) for a normal distribution.
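The relative-value syntax (a trailing +, -, *, or / operator) and the l(min, max) linear spread can be sketched in Python. This is an illustrative model only, not Softimage's actual input parser, and the function names are made up for the example:

```python
def apply_entry(current, text):
    """Mimic relative value entry: a trailing +, -, *, or / applies the
    number to the current value; otherwise the text replaces it."""
    text = text.strip()
    if len(text) > 1 and text[-1] in "+-*/":
        num = float(text[:-1])
        op = text[-1]
        if op == "+":
            return current + num
        if op == "-":
            return current - num
        if op == "*":
            return current * num
        return current / num
    return float(text)

def linear_spread(n, lo, hi):
    """Mimic l(min, max): evenly spaced values across n elements."""
    if n == 1:
        return [lo]
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

print(apply_entry(10.0, "2-"))  # relative: 10 - 2 -> 8.0
print(apply_entry(10.0, "-2"))  # absolute: -2.0
print(linear_spread(3, 0.0, 10.0))  # [0.0, 5.0, 10.0]
```

Note the ordering rule the manual describes: the operator must trail the number, so "2-" is relative while "-2" is an absolute negative value.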
Click a color box to open a color editor, from which you can pick or define the colors you want. See Color Editors on page 20. You can copy colors by dragging and dropping one color box onto another. Click the label below the box to cycle the color space for the sliders through RGB, HLS, and HSV.
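The RGB, HLS, and HSV spaces that the sliders cycle through are standard color models; Python's colorsys module shows how the same color translates between them (all components in the 0..1 range):

```python
import colorsys

# Pure red in RGB
r, g, b = 1.0, 0.0, 0.0

h, l, s = colorsys.rgb_to_hls(r, g, b)
hv, sv, v = colorsys.rgb_to_hsv(r, g, b)

print("HLS:", h, l, s)    # hue 0.0, lightness 0.5, saturation 1.0
print("HSV:", hv, sv, v)  # hue 0.0, saturation 1.0, value 1.0

# Converting back recovers the original RGB values exactly
assert colorsys.hls_to_rgb(h, l, s) == (1.0, 0.0, 0.0)
```

This is why cycling the label changes the slider meanings but not the color itself: each space is just a different parameterization of the same value.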
Virtual Sliders
Virtual sliders let you do the job of a slider without having to open a property editor. Select one or more objects, mark the desired parameters, then press F4 and middle-drag in a 3D view. Use Ctrl, Shift, and Ctrl+Shift to change increments, and Alt to extend beyond the slider's display range.
The connection icon links a parameter value to a shader, weight map, or texture map which modulates it. Click the icon to inspect the connected element, or right-click for options.
Color Editors
Instead of using the RGB color sliders, you can click on a color box to open a color editor.
To pick a color: Click the color picker button (the eyedropper) and click anywhere in the Softimage window. This tool can be especially useful when trying to match a color in the Image Clip editor. On Windows systems, you can click outside of the Softimage window to pick a color, even though the mouse pointer does not show that the color picker is active outside of the window. This does not work on Linux systems, but you can import an image clip and load it into the Image Clip editor as a workaround. To cancel the color picker, click the right mouse button. The color picker takes the color you see on the screen rather than the true color of the objects. There may be rounding errors because most display adapters have only 256 levels for each of the RGB channels.
Click the browse (...) button to open the full color editor, where you can use additional controls.
Click the palette button to choose a preset color.
Click the > button to open the menu shown. The Color Area commands specify the configuration of the color area and slider. The Numeric Entry commands select the color model for the numeric boxes. The Normalized option specifies whether numeric values are represented as real numbers in the range [0.0, 1.0] or as integers in the range [0, 255]. The Gamma Correction option toggles gamma correction display for all color controls in the color editor.
To set a color, click in the color area and then adjust it using the slider. To select which color components appear in the color area and which one appears on the slider, click the > button. The color box on the left shows the previous color for reference. The color box on the right shows the current color. Use the numeric boxes to set color values precisely. To select a color model, click the > button.
The 3D views show the geometry of your scene and include: Any cameras that are present in your scene. The orthographic Top, Front, and Right views. The User view, which is not a real camera but an extra perspective view that you can navigate in without modifying your main camera setup or its animation.
Use the Resize icon at the right of a viewport's toolbar to maximize, expand, and restore:
Left-click to maximize a viewport, or restore a maximized viewport. Alternatively, press F12 while the pointer is over the viewport.
Middle-click to expand or restore horizontally.
Ctrl+middle-click to expand or restore vertically.
Right-click on the Resize icon to open a menu as shown.

Viewport Presets
Instead of switching views and resizing viewports manually, you can use the buttons at the lower left to display various preset combinations.

Muting and Soloing Viewports
The letter identifier in the upper-left corner of the title bar allows you to mute and solo viewports. Muting a viewport's neighbors helps speed up its refresh rate.
Middle-click the letter to mute the viewport. A muted viewport does not update until you un-mute it. The letter of a muted viewport is displayed in orange. Middle-click the letter again to un-mute the viewport.
Click the letter to solo the viewport. Soloing a viewport mutes all the others. The letter of a soloed viewport is displayed in green. Click the letter again to un-solo the viewport.
To control how viewports update when playing back animation, see Selecting a Viewport for Playback on page 143.
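The mute/solo behavior described above (soloing one viewport mutes all the others) can be modeled as a small state machine. This is a toy sketch for illustration, not Softimage code; the class and method names are invented for the example:

```python
class ViewportStates:
    """Toy model of viewport mute/solo state for viewports A-D."""
    def __init__(self):
        self.muted = {v: False for v in "ABCD"}

    def toggle_mute(self, letter):
        # Middle-click: mute or un-mute a single viewport.
        self.muted[letter] = not self.muted[letter]

    def solo(self, letter):
        # Click: soloing a viewport mutes all the others.
        for v in self.muted:
            self.muted[v] = (v != letter)

views = ViewportStates()
views.solo("B")
print([v for v, m in views.muted.items() if not m])  # ['B']
```

The key point the sketch captures is that solo is not a separate flag: it is expressed entirely through the mute states of the other viewports.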
Floating Views
You can open views as floating windows using the first group of submenus on the Views menu. Some floating views also have shortcut keys. Depending on the type of view, you can have multiple windows of the same type open at the same time. You can adjust floating windows in the usual ways: To move a window, drag its title bar. To resize a window, drag its borders. To bring a window to the front and display it on top of other windows, click in it. To close a window, click x in the top right corner. To minimize a window, click _ in the top right corner. You can cycle through all open windows, whether minimized or not, using Ctrl+Tab. Use Shift+Ctrl+Tab to cycle backwards. You can collapse a floating view by double-clicking on its title bar. When collapsed, only the title bar is visible and you can still move it around by dragging. To expand a collapsed view, double-click on the title bar again; the view is restored at its current location.
Working in 3D Views
3D views are where you view, edit, and manipulate the geometric elements of your scene.
Viewport letter identifier: Click to solo the viewport or middle-click to mute it.
Views menu: Choose which view to display in the viewport.
Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.
Camera icon menu: Navigate and frame elements in the scene.
Eye icon menu (Show menu): Specify which object types, components, and attributes are visible in the viewport. Hold down the Shift key to keep the menu open while you choose multiple options.
XYZ buttons: Click X to view the right side, Y to view the top side, and Z to view the front side. Middle-click to view the left, back, and bottom sides, respectively. These commands change the viewpoint, but you can still orbit afterwards, unlike in the Top, Front, and Right views selected from the Views menu. Click again to return to the previous viewpoint.
Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options.
Resize icon: Resizes viewports to full-screen, horizontal, or vertical layouts. Click to maximize and restore. Middle-click to maximize and restore horizontally. Ctrl+middle-click to maximize and restore vertically. Right-click for a menu.
Types of 3D Views
There are many ways to view your scene in the 3D views. These viewing modes are available from the Views menu in viewports and from the View menu in the object view. Except for camera views, all of the viewing modes are viewpoints. Like camera views, viewpoints show you the geometry of objects in a scene. They can be previewed in the render region, but they cannot be rendered to file like camera views.

Camera Views
Camera views display your scene in a 3D view from the point of view of a particular camera. The Render Pass view is also a camera view: it shows the viewpoint of the camera associated with the current render pass. Only a camera associated with a render pass is used in a final render.

Spotlight Views
Spotlight views let you select from a list of spotlights available in the scene. Selecting a spotlight from this list switches the point of view in the active 3D view to that of the chosen spotlight. The point of view is set according to the direction of the light cone defined for the chosen spotlight.

Top, Front, and Right Views
The Top, Front, and Right views are parallel projection views, called such because the objects' projection lines do not converge in these views. Because of this, the distance between an object and the viewpoint has no influence on the scale of the object: if one object is close and an identical object is farther away, both appear to be the same size.
The Top, Front, and Right views are also orthographic, which means that the viewpoint is perpendicular (orthogonal) to specific planes: The Top view faces the XZ plane. The Front view faces the XY plane. The Right view faces the YZ plane. You cannot orbit the camera in an orthographic view.
Top
Front
Right
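The difference between parallel and perspective projection can be shown with a few lines of math. In a parallel projection the depth coordinate is simply discarded, so distance never affects apparent size; in a simple pinhole (perspective) model, it does. This is an illustrative sketch, not Softimage's actual camera code:

```python
def ortho_project(x, y, z):
    # Parallel (orthographic) projection onto the viewing plane:
    # depth z is dropped, so distance never changes scale.
    return (x, y)

def perspective_project(x, y, z, focal=1.0):
    # Simple pinhole model (an assumption for illustration): points
    # farther from the camera (larger z) shrink toward the center.
    return (focal * x / z, focal * y / z)

near = perspective_project(1.0, 0.0, 2.0)
far = perspective_project(1.0, 0.0, 10.0)
print(near, far)  # (0.5, 0.0) (0.1, 0.0): the far point appears smaller
print(ortho_project(1.0, 0.0, 2.0) == ortho_project(1.0, 0.0, 10.0))  # True
```

This is exactly why two identical objects at different depths look the same size in the Top, Front, and Right views but not in a camera view.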
User View (Viewports Only)
The User view is a viewpoint that shows objects in a scene from a virtual camera's point of view, but is not actually linked to a scene camera or spotlight. The User point of view can be placed at any position and at any angle. You can orbit, dolly, zoom, and pan in this view. It's useful for navigating the scene without changing the render camera's position and zoom settings.
The Object View
The object view is a 3D view that displays only the selected scene elements. It has standard Display and Show menus, and works the same way as any 3D view in most respects: selection, navigation, framing, and so on work as they do in any viewport. There are also some custom viewing options, available from the object view's View menu, that make it easier to work with local 3D selections. To open the object view, do one of the following: from any viewport's Views menu, choose Object View, or from the main menu, choose View > General > Object View.
View menu: Choose the viewpoint to display, and set various viewing options. This is similar to the viewports' Views menu, but includes special viewing controls for the object view.
Show menu (equivalent to the eye icon menu): Specify which object types, components, and attributes are visible. Hold down the Shift key to keep the menu open while you choose multiple options.
Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.
XYZ buttons: Click X to view the right side, Y to view the top side, and Z to view the front side. Middle-click to view the left, back, and bottom sides, respectively. These commands change the viewpoint, but you can still orbit afterwards, unlike in the Top, Front, and Right views in viewports. Also unlike in the viewports, they are not temporary overrides, so you cannot click them again to return to the previous viewpoint.
Lock: Prevent the view from updating when you select a different object in another view. Click again to unlock.
Update: Refresh the view if it is locked.
Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options.
Navigating in 3D Views
In 3D views, a set of navigation controls and shortcut keys lets you control the viewpoint. You can use these controls and keys to zoom in and out, frame objects, and orbit, track, and dolly, among other things.

Activating Navigation Tools
Most navigation tools have a corresponding shortcut key so you can quickly activate them from the keyboard. However, some tools are only available from a viewport's camera icon menu. In either case, activating a navigation tool makes it the current tool for all 3D views, including object views, which do not have an equivalent to the camera icon menu.
Selecting navigation tools from the camera icon menu activates them for all 3D views.
Zoom (mouse wheel): By default, zooms in and out in various views and editors. You can control how the mouse wheel is used for zooming in your Tools > Camera preferences.

Navigation (S): Combines the most common navigation tools: pan (track) with the left mouse button, dolly with the middle mouse button, and orbit with the right mouse button. In your Tools > Camera preferences, you can change the order of the mouse buttons as well as remap this tool to the Alt key.

Pan/Zoom (Z): Moves the camera laterally, or changes the field of view: pan (track) with the left mouse button, zoom in with the middle mouse button, and zoom out with the right mouse button. In your Tools > Camera preferences, you can activate Zoom On Cursor to center the zoom wherever the mouse pointer is located.

Rectangular Zoom (Shift+Z): Zooms onto a specific area: draw a diagonal with the left mouse button to fit the corresponding rectangle in the view, or draw a diagonal with the right mouse button to fit the current view in the corresponding rectangle. In perspective (non-orthographic) views, rectangular zoom activates pixel zoom mode, which offsets and enlarges the view without changing the camera's pose or field of view.

Orbit (O): Rotates a camera, spotlight, or user viewpoint around its point of interest. This is sometimes called tumbling or arc rotation. Use the left mouse button to orbit freely, the middle mouse button to orbit horizontally, and the right mouse button to orbit vertically. In your Tools > Camera preferences, you can set Orbit Around Selection.

Dolly (P): Moves the camera forward and back. Use the different mouse buttons to dolly at different speeds. In orthographic views, dollying is equivalent to zooming.

Roll (L): Rotates a perspective view along its Z axis. Use the different mouse buttons to roll at different speeds.

Frame (F): Frames the selected elements in the view under the mouse pointer.

Frame (All Views) (Shift+F): Frames the selected elements in all open views.

Frame All (A): Frames the entire scene in the view under the mouse pointer.

Frame All (All Views) (Shift+A): Frames the entire scene in all open views.

Center (Alt+C): Centers the selected elements in the view under the mouse pointer. Centering is similar to framing, but without any zooming or dollying. The camera is tracked horizontally and vertically so that the selected elements are at the center of the viewport.

Center (All Views) (Shift+Alt+C): Centers the selected elements in all open views.

Reset (R): Resets the view under the mouse pointer to its default viewpoint.

After you activate a tool, check the mouse bar at the bottom of the Softimage interface to see which mouse button does what. Selecting navigation tools from the camera icon menu activates them for all 3D views. In addition to the above, there are other tools available on the camera icon menu, such as pivot, walk, fly, and so on.

Undoing Camera Navigation
As you navigate in a 3D view, you may want to undo one or more camera moves. Luckily, there is a separate camera undo stack that lets you undo navigation in 3D views. To undo a camera move, press Alt+Z. To redo an undone camera move, press Alt+Y.

Display Modes
You can display scene objects in different ways by choosing various display modes from a 3D view's Display Mode menu. The Display Mode menu always displays the name of the current display mode, such as Wireframe.

Wireframe
Shows the geometric object made up of its edges, drawn as lines resembling a model made of wire. This mode displays all edges without removing hidden parts or filling surfaces.

Bounding Box
Reduces all scene objects to simple cubes. This speeds up the redrawing of the scene because fewer details are calculated in the screen refresh.
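A separate camera undo stack like the one behind Alt+Z and Alt+Y can be sketched with two stacks: undoing a move pushes the current pose onto a redo stack, and any new move clears the redo history. This illustrates the general technique only, not Softimage's implementation:

```python
class CameraUndoStack:
    """Sketch of a dedicated undo/redo history for camera moves."""
    def __init__(self, pose):
        self.pose = pose
        self.undo_stack = []
        self.redo_stack = []

    def move(self, new_pose):
        self.undo_stack.append(self.pose)
        self.pose = new_pose
        self.redo_stack.clear()  # a new move invalidates redo history

    def undo(self):  # Alt+Z
        if self.undo_stack:
            self.redo_stack.append(self.pose)
            self.pose = self.undo_stack.pop()

    def redo(self):  # Alt+Y
        if self.redo_stack:
            self.undo_stack.append(self.pose)
            self.pose = self.redo_stack.pop()

cam = CameraUndoStack("home")
cam.move("orbited")
cam.move("dollied")
cam.undo()
print(cam.pose)  # orbited
cam.redo()
print(cam.pose)  # dollied
```

Keeping this history separate from the main edit-undo stack is what lets you navigate freely without polluting your modeling undo history.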
Depth Cue
Applies a fade to visible objects, based on their distance from the camera, in order to convey depth. You can set the depth cue range to the scene, the selection, or a custom start and end point. Objects within the range fade as they near the edge of the range, while objects completely outside the range are made invisible. You can also display depth cue fog to give a stronger indication of fading.

Hidden Line Removal
Shows only the edges of objects that are facing the camera. Edges that are hidden from view by the surface in front of them are not displayed.
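The depth cue fade can be pictured as a falloff over a start/end range, with objects fading to invisible beyond the far edge. The linear falloff below is an assumption for illustration; the manual does not specify the exact curve Softimage uses:

```python
def depth_cue_fade(distance, start, end):
    """Illustrative linear depth-cue factor: 1.0 = fully visible at or
    before the near edge of the range, 0.0 = invisible at or beyond
    the far edge, fading linearly in between."""
    if distance <= start:
        return 1.0
    if distance >= end:
        return 0.0
    return 1.0 - (distance - start) / (end - start)

print(depth_cue_fade(5.0, 0.0, 10.0))   # 0.5: halfway through the range
print(depth_cue_fade(12.0, 0.0, 10.0))  # 0.0: outside the range, invisible
```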
Constant
Ignores the orientation of surfaces and instead considers them to be pointing directly toward an infinite light source. All of the object's surface triangles are considered to have the same orientation and be the same distance from the light. This results in an object that appears to have no shading. This mode is useful when you want to concentrate on the silhouettes of objects.

Shaded
Provides an OpenGL hardware-shaded view of your scene that shows shading, material color, and transparency, but not textures, shadows, reflections, or refraction. By default, selected objects have their wireframes superimposed, making it easy to manipulate points and other components.
Textured
Similar to Shaded, but also shows image-based textures (not procedural textures).
Textured Decal
This is like the Textured viewing mode, but textures are displayed with constant lighting. The net effect is a general brightening of your textures and an absence of shadow. This allows you to see a texture on any part of an object regardless of how well that part is lit.

Realtime Shaders
Evaluates the realtime shaders that have been applied to objects. In the example shown here, the same textures have been used as for the non-realtime shaders, so the result is similar to the Textured mode. Several realtime display modes are available, depending on your graphics card:
OpenGL: Displays realtime shader attributes for objects that have been textured using OpenGL realtime shaders.
Cg: Displays realtime shader attributes for objects that have been textured using Cg realtime shaders as well as Softimage's Cg-compatible MetaShaders.
DirectX: Displays realtime shader attributes for objects that have been textured using DirectX realtime shaders.
Rotoscopy
Rotoscopy is the use of images in the background of the 3D views. You can use rotoscopy in different 3D views (Front, Top, Right, User, Camera, etc.) and any display mode (Wireframe, Shaded, etc.). Furthermore, you can use different images for each view. Single images are useful as guides for modeling in the orthographic views. Image sequences or clips are useful for matching animation with footage of live action in the perspective views. To load an image in a view, choose Rotoscope from the Display Mode menu and select an image and other options. There are two types of rotoscoped images: By default, rotoscoped images in perspective views have Image Placement set to Attached to Camera. This means that they follow the camera as it moves and zooms so that you can match animation with live action plates.
Attached to Camera
On the other hand, rotoscoped images that are displayed in the orthographic views (Front, Top, and Right) have the Image Placement option set to Fixed by default. This allows you to navigate the camera while modeling without losing the alignment between the image and the modeled geometry. Fixed images are sometimes called image planes, and they can be displayed in all views, not just the one for which they were defined.
Fixed
Navigating with Images Attached to the Camera Normally when a rotoscoped image or sequence is attached to the camera, it is fully displayed in the background no matter how the camera is zoomed, panned, or framed. However you can activate Pixel Zoom mode if you need to maintain the alignment between objects in the scene and the background, for example if you want to temporarily zoom into a portion of the scene.
Pixel Zoom
In Pixel Zoom mode, you can:
Zoom (Z + middle or right mouse button, S + middle mouse button)
Pan (Z + left mouse button, S + left mouse button)
Frame (F for selection, A for all)
The original view is restored when you exit Pixel Zoom mode. Be careful not to orbit, dolly, roll, pivot, or track, because these actions change the camera's transformations and will not be undone when you deactivate Pixel Zoom.
Object Visibility
Each object in the scene has its own set of visibility controls that allow you to control how objects appear in the scene, or whether they appear at all, as well as how shadows, reflections, transparency, final gathering, and other attributes are rendered. For example, you may wish to temporarily exclude objects from a render but retain them in the scene. This can come in handy when you are working with complex objects and want to reduce lengthy refresh times. You can open an object's Visibility property editor from the explorer by clicking the Visibility icon in the object's hierarchy.

Object Display
You can control how individual objects are displayed in a 3D view. Giving one or more objects different display characteristics is particularly useful for heavily animated scenes. For example, if you want to tweak a static object within a scene that has a complex animated character, you could set the character to wireframe display mode while adjusting the lighting of your static object in shaded mode. You can open an object's Display property editor from the explorer by clicking the Display icon in the object's hierarchy.
The ability to view different objects in different display modes works only when you turn off Override Object Properties in a views Display Mode menu.
The Explorer
The explorer displays the contents of your scene in a hierarchical structure called a tree. This tree can show objects as well as their properties as a list of nodes that expand from the top root. You normally use the explorer as an adjunct while working in Softimage, for example, to find or select elements. To open an explorer in a floating window, press 8 at the top of the keyboard, or choose View > General > Explorer from the main menu.
Scope of elements to view. See Setting the Scope of the Explorer on page 33.
Viewing and sorting options.
Filters for displaying element types. See Filtering the Display on page 33.
Lock and update. This works only when the scope is set to Selection.
Search by name, type, or keyword.
Expand and collapse the tree.
Click an icon to open its property editor. Click a name to select. Use Shift to select ranges and Ctrl to toggle-select. Middle-click to branch-select. Right-click for a context menu.
You can pan the view by dragging up and down in an empty area within the explorer. You can also use the mouse wheel to scroll up and down; first make sure the explorer has focus by clicking anywhere in it.
Keeping Track of Selected Elements
If you have selected objects, their nodes are highlighted in the explorer. If their nodes are not visible, choose View > Find Next Selected Node. The explorer scrolls up or down to display the first object node in the order of its selection. Each time you choose this option, the explorer scrolls to display the next selected node. After the last selected item, the explorer goes back to the first. Choose View > Track Selection if you want to automatically scroll the explorer so that the node of the first selected object is always visible.

Setting the Scope of the Explorer
The Scope button determines the range of elements to display. You can display entire scenes, specific parts, and so on.
The Selection option in the explorer's scope menu isolates the selected object. If you click the Lock button with the Selection option active, the explorer continues to display the property nodes of the currently selected objects, even if you go on to select other objects in other views. When Lock is on, you can also select another object and click Update to lock on to it and update the display.

Filtering the Display

Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters or parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes. The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters that are available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several different preset combinations of filters, followed by specific filters that you can toggle on or off individually.
Click the Scope button to select the range of elements to view. The current scope is indicated by the button label. It is also bulleted in the list. The bold item in the menu indicates the last selected scope. Middle-click the Scope button to quickly select this scope.
Other Explorer Views You can view other smaller versions of the explorer (pop-up explorers) elsewhere in the interface. They are used to view the properties of selected scene elements.
Select Panel Explorer
Explorer filter buttons in the Select panel offer a shortcut by instantly displaying filtered information on specific aspects of currently selected objects.
Example: Click the Selection filter button to display a pop-up explorer showing all property nodes associated with the selected object.

Searching by name:
- Enter part of the name to search for. Softimage waits for you to pause typing before it displays the search results. You can continue typing to modify the search string, and the updated results are displayed when you pause again. Softimage finds the elements that contain the search string anywhere in their names (substring search). Strings are not case-sensitive. Alternatively, you can use wildcards and a subset of regex (regular expressions), just as in the explorer.
- Recall a recent search string.
- Clear the search string and close the search results.
- Open the floating Scene Search window with the current search and additional options.
The Explore button opens a pop-up menu of additional filters for specifying the type of information you wish to obtain about the scene. Click outside a pop-up explorer to close it.
Object Explorers
You can quickly display a pop-up explorer for a single object: just select the object and press Shift+F3. If the object has no synoptic property or annotation, you can simply press F3. Click outside the pop-up explorer or press those keys again to close it.
The search results are listed here. They obey the current settings in the Scene Search view for sorting and name/path display.
- To select an element, click on it.
- To select a range of elements, click on the first one and then Shift+click on the last one.
- To toggle-select an element, Ctrl+click on it.
- To deselect an element, Ctrl+Shift+click on it.
- To rectangle-select a range of elements, click in the background first and then drag across the elements to select. This is easier if only names are displayed, rather than paths.
- To select all elements found, press Ctrl+A.
- To rename the selected elements, press F2.
- Right-click on any element for a context menu. If you right-click on a selected element, some commands apply to all selected elements.
To dismiss the list of results, click anywhere outside the pop-up or press Escape.
- Scope: Show the entire scene, the current selection, or the current layer.
- Edit: Access navigation and selection commands.
- Show: Set filters that specify which elements to display.
- View: Set various viewing options.
- Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.
- Lock: Prevent the view from updating when you select a different object in another view (if Scope = Selection). Click again to unlock.
- Update: Refresh the view if it is locked.
To select a node, click its label. Middle-click to branch-select and right-click to tree-select. To open a node's property editor, click its icon or double-click its label. Alt+right-click (Ctrl+Alt+right-click on Linux) on a node to open a context menu for the node.
Press F2 to rename the selected node. Alt+right-click (Ctrl+Alt+right-click on Linux) in an empty area to quickly access a number of viewing and navigation commands.
Section 2
Elements of a Scene
This section provides a guide to the objects, properties, and components you will find in Softimage scenes, and describes some of the workflows for working with them.
Whats in a Scene?
Scenes contain objects. In turn, objects can have components and properties.
Properties
Properties control how an object looks and behaves: its color, position, selectability, and so on. Each property contains one or more parameters that can be set to different values. Properties can be applied to elements directly, or they can be applied at a higher level and passed down (propagated) to the child elements in a hierarchy.
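This propagation rule can be sketched as a lookup that walks up the hierarchy until it finds the nearest node where the property was applied. The class and names below are purely illustrative, not Softimage's actual object model:

```python
# Illustrative sketch: a property applied to an ancestor (in branch mode)
# is inherited by every descendant that has no local copy of its own.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.local_props = {}          # properties applied directly to this node

    def apply_property(self, prop, value):
        self.local_props[prop] = value

    def resolve(self, prop, default=None):
        """Return the effective value: local first, then nearest ancestor."""
        node = self
        while node is not None:
            if prop in node.local_props:
                return node.local_props[prop]
            node = node.parent
        return default

root = Node("scene_root")
big = Node("big_sphere", parent=root)
small = Node("small_sphere", parent=big)

big.apply_property("texture", "checkerboard")   # applied at a higher level
print(small.resolve("texture"))                  # inherited: checkerboard
small.apply_property("texture", "cloud")         # a local property wins
print(small.resolve("texture"))                  # cloud
```

A local property always shadows an inherited one, which matches the material and texture propagation examples later in this section.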
Objects
Objects are elements that you can put in your scene. They have a position in space, and can be transformed by translating, rotating, and scaling. Examples of objects include lights, cameras, bones, nulls, and geometric objects. Geometric objects are those with points, such as polygon meshes, surfaces, curves, particles, hair, and lattices.
Components
Components are the subelements that define the shape of geometric objects: points, edges, polygons, and so on. You can deform a geometric object by moving its components. Components can be grouped into clusters for ease of selection and other purposes.
Points on different geometry types: polygon mesh, curve, surface, and lattice.
Element Names
All elements have a name. For example, if you choose Get > Primitive > Polygon Mesh > Sphere, the new sphere is called "sphere" by default, but you can rename it if you want. In fact, it's a good idea to get into the habit of giving descriptive names to elements to keep your scenes understandable. You can see the names in the explorer and schematic views, and you can even display them in the 3D views. You can typically name an element when you create it. You can rename an object at any time by choosing Rename from a context menu or pressing F2 in the explorer or schematic. Softimage restricts the valid characters in element names to a-z, A-Z, 0-9, and the underscore (_) to keep them safe for use as variable names in scripting. You can also use a hyphen (-), but it is not recommended. Invalid characters are automatically converted to underscores. In addition, element names cannot start with a digit; Softimage automatically adds an underscore at the beginning. If necessary, Softimage adds a number to the end of names to keep them unique within their namespace.
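The renaming rules described above can be modeled as a small sanitizing function. This is a sketch of the documented behavior, not the routine Softimage actually runs internally:

```python
import re

def sanitize_name(name):
    """Mimic the documented rules: characters outside a-z, A-Z, 0-9 and
    underscore become underscores, and a leading digit gets an
    underscore prepended."""
    clean = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if clean and clean[0].isdigit():
        # Element names cannot start with a digit.
        clean = "_" + clean
    return clean

print(sanitize_name("my sphere #2"))   # my_sphere__2
print(sanitize_name("2ndSphere"))      # _2ndSphere
print(sanitize_name("left_arm"))       # left_arm (already valid)
```

Uniqueness within a namespace (the trailing number Softimage appends) is a separate step and is not modeled here.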
Selecting Elements
Selecting is fundamental to any software program. In Softimage, you select objects, components, and other elements to modify and manipulate them. You can select any object, component, property, group, cluster, operator, pass, partition, source, clip, and so on; in short, just about anything that can appear in the explorer. The only things you cannot select are individual parameters; parameters are marked for animation rather than selected.
- Group/Cluster button: Selects groups and clusters.
- Center button: Not used for selection.
- Hierarchy navigation: Select an object's sibling or parent.
Overview of Selection
To select an object in a 3D or schematic view, press the space bar and click on it. Use the left mouse button for single objects (nodes), the middle mouse button for branches, and the right mouse button for trees and chains. To select components, first select one or more geometric objects, then press a hotkey for a component selection mode (such as T for rectangle point selection), and click on the components. Use the middle mouse button for clusters. For elements with no predefined hotkey, you can manually activate a selection tool and a selection filter. In all cases: Shift+click adds to the selection.
- Select menu: Access a variety of selection tools and commands.
- Select icon: Reactivates the last active selection tool and filter.
- Filter buttons: Select objects or their components, such as points, curves, and so on.
- Object Selection and Sub-object Selection text boxes: Enter the name of the object and its components you want to select. You can use * and other wildcards to select multiple objects and properties.
- Explore menu and explorer filter buttons: Display the current scene hierarchy, current selection, or the clusters or properties of the current selection. These buttons are particularly useful because they display pre-filtered information but don't take up a viewport.
Ctrl+click toggle-selects. Ctrl+Shift+click deselects. Alt lets you select loops and ranges. You can use Alt in combination with Shift, Ctrl, and Ctrl+Shift.
Selection Hotkeys
Key              Tool or action
space bar        Select objects with the Rectangle selection tool, in either supra or sticky mode.
E                Select edges with the Rectangle selection tool, in either supra or sticky mode.
T                Select points with the Rectangle selection tool, in either supra or sticky mode.
Y                Select polygons with the Rectangle selection tool, in either supra or sticky mode.
U                Select polygons with the Raycast selection tool, in either supra or sticky mode.
I                Select edges with the Raycast selection tool, in either supra or sticky mode.
' (apostrophe)   Select hair tips with the Rectangle selection tool, in either supra or sticky mode.
F7               Activate the Rectangle selection tool using the current filter.
F8               Activate the Lasso selection tool using the current filter.
F9               Activate the Freeform selection tool using the current filter.
F10              Activate the Raycast selection tool using the current filter.
Shift+F10        Activate the Rectangle-Raycast selection tool using the current filter.
Ctrl+F7          Activate the Object filter with the current selection tool.
Ctrl+F8          Activate the Point filter with the current selection tool.
Ctrl+F9          Activate the Edge filter with the current selection tool.
Ctrl+F10         Activate the Polygon filter with the current selection tool.
Alt+space bar    Activate the last-used selection filter and tool.
Selection Tools
To select something in the 3D views, a selection tool must be active. Softimage offers a choice of several selection tools, each with a different mouse interaction: Rectangle, Lasso, Raycast, and others. The choice of selection tool is partly a matter of personal preference, and partly a matter of what is easiest or best to use in a particular situation. They are all available from the Select > Tools menu or hotkeys.

Rectangle Selection Tool
Rectangle selection is sometimes called marquee selection. You select elements by dragging diagonally to define a rectangle that encompasses the desired elements.

Raycast Selection Tool
The Raycast tool casts rays from under the mouse pointer into the scene; elements that are hit by these rays as you click or drag the mouse are affected. Raycast never selects elements that are occluded by other elements.

Lasso Selection Tool
The Lasso tool lets you select one or more elements by drawing a free-form shape around them. This is especially useful for selecting irregularly shaped sets of components.
Freeform Selection Tool
The Freeform tool lets you select elements by drawing a line across them. This is particularly useful for selecting a series of edges when modeling with polygon meshes, or for selecting a series of curves for lofting or creating hair from curves, as well as in many other situations.
Rectangle-Raycast Tool
The Rectangle-Raycast selection tool is a mixture of the Rectangle and Raycast tools. You select by dragging a rectangle to enclose the desired elements, like the Rectangle tool. Elements that are occluded behind others in Hidden Line Removal, Constant, Shaded, Textured, and Textured Decal display modes are ignored, like the Raycast tool.

Paint Selection Tool
The Paint selection tool lets you use a brush to select components. It is limited to selecting points (on polygon meshes and NURBS), edges, and polygons. The brush's radius controls the size of the area selected by each stroke, which you can adjust interactively by pressing R and dragging to the left or right. Use the left mouse button to select and the right mouse button to deselect. Press Ctrl to toggle-select.
Selection Filters
Selection filters determine what you can select in the 3D and schematic views. You can restrict the selection to a specific type of object, component, or property. Press Shift while activating a new filter to keep the current selection, allowing you to select a mixture of component types.
Effect of node-selecting an object.
- Selection filter buttons: Select objects or their components in the 3D views. The component buttons are contextual: they change depending on what type of object is currently selected.
- Click the triangle for additional filters.
- Click the bottom button to re-activate the last filter.
Branch Selection
Middle-click to branch-select an object. When you branch-select an object, its descendants inherit the selection status and are highlighted in light gray. You would branch-select an object when you want to apply a property that is inherited by all the object's descendants.
Tree Selection
Right-click to tree-select an object. This selects the object's topmost ancestor in branch mode. For kinematic chains, right-clicking selects the entire chain.
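The three selection modes can be sketched as set operations on a hierarchy. The classes and names below are illustrative only, not the actual Softimage API:

```python
# Node-select picks one object; branch-select adds its descendants;
# tree-select branch-selects the topmost ancestor.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

def node_select(obj):
    return {obj.name}

def branch_select(obj):
    """The object plus all of its descendants."""
    result = {obj.name}
    for child in obj.children:
        result |= branch_select(child)
    return result

def tree_select(obj):
    """Branch-select the topmost ancestor, i.e. the whole tree."""
    while obj.parent is not None:
        obj = obj.parent
    return branch_select(obj)

torso = Node("torso")
arm = Node("arm", torso)
hand = Node("hand", arm)

print(sorted(branch_select(arm)))   # ['arm', 'hand']
print(sorted(tree_select(hand)))    # ['arm', 'hand', 'torso']
```

This is why branch-selecting a parent is the natural way to apply a property that should propagate to everything beneath it.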
Then specify the end component to select the range of components in-between.
1. Select the first anchor component normally. 2. Alt+click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed. All components between the two components on a path become selected. 3. Use the following key and mouse combinations to further refine the selection: - Use Shift to add individual components to the selection as usual. If you want to add additional ranges or loops using Alt+Shift, the last component added to the selection is the new anchor. If you want to start a new range anchored at the end of the previous range, you must reselect the last component by Shift+clicking or Alt+Shift+clicking. Once you have selected a new anchor, you can Alt+Shift+click to add another range to the selection. - Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+click to toggle the selection of a range. - Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+click to deselect a range.
Loop Selection Alt+middle-click to select a loop of components using any selection tool (except Paint). When you select a loop of components, Softimage finds a path between two components that you pick. It then extends the path in both directions, if it is possible, and selects all components along the extended path.
1. Do one of the following: - Select the first anchor component normally, then Alt+middle-click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed. or - Alt+middle-click to select two adjacent components in a single mouse movement. All components on an extended path connecting the two components become selected.
Note that for edges, the direction is implied, so you only need to Alt+middle-click on a single edge. However, for parallel edge loops, you still need to specify two edges as described previously. 2. Use the following key and mouse combinations to further refine the selection: - Use Shift to add individual components to the selection as usual. The last selected component becomes the anchor for any new loop. Once you have selected a new anchor, you can Alt+Shift+middle-click to add another loop to the selection. - Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+middle-click to toggle the selection of a loop. - Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+middle-click to deselect a loop.
Defining Selectability
You can make an object unselectable in the 3D and schematic views by opening its Visibility properties and turning off Selectability. This can come in handy and speed up your workflow if you are working in a very dense scene and there are one or more objects that you don't wish to select. Unselectable objects are displayed in dark gray in the wireframe and schematic views. Regardless of whether an object's Selectability is on or off, you can always select it using the explorer or its name. The selectability of an object can also be affected by its membership in a group or layer.
Objects
Objects can be duplicated, cloned, and organized into hierarchies, groups, and layers. To duplicate an object, select it and choose Edit > Duplicate/Instantiate > Duplicate Single or press Ctrl+D. The object is duplicated using the current options and the copy is immediately selected. You may need to move it away from the original. By default, any transformation you apply is remembered for the next duplicate. To make multiple copies, choose Edit > Duplicate/Instantiate > Duplicate Multiple or press Ctrl+Shift+D. Specify the number of copies and the incremental transformations to apply to each one.
Example: Applying multiple transformations to duplicated objects 1 Select the object (a step) to be duplicated and transformed.
2 With the step selected, press Ctrl+Shift+D. Specify 5 copies and a transformation to apply to each.
3 Result: Five copies of the original step are generated, with each duplicate translated, rotated and scaled to give the appearance of a flight of spiral stairs. Note: The center of the step was repositioned to the right so that the step could be rotated along its right edge. When an object is duplicated, the original and its duplicates can be modified separately with no effect on each other.
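The incremental transformations accumulate from copy to copy, which is what produces the spiral. The numbers below are hypothetical values chosen to match the stairs example, and only Y translation and rotation are modeled:

```python
# Each copy applies the same incremental transform on top of the
# previous copy's transform, producing a flight of spiral stairs.

def duplicate_multiple(copies, dy, dangle):
    """Return the accumulated (height, angle) of each generated copy."""
    results = []
    y, angle = 0.0, 0.0
    for _ in range(copies):
        y += dy            # incremental translation per copy
        angle += dangle    # incremental rotation per copy
        results.append((y, angle))
    return results

steps = duplicate_multiple(5, dy=1.0, dangle=30.0)
print(steps[0])    # first copy:  (1.0, 30.0)
print(steps[-1])   # fifth copy:  (5.0, 150.0)
```

Because each duplicate is an independent copy, moving one step afterward has no effect on the others.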
Other commands in the Edit > Duplicate/Instantiate menu let you duplicate symmetrically, from animation, and so on.
Cloning Objects
When an object is cloned, editing the original object affects all the clones, but editing one of the clones has no effect on the others. You can clone objects using the Clone commands on the Edit > Duplicate/Instantiate menu. Clones are displayed in the explorer with a cyan "c" superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label "Cl".

Clone in the explorer. Clone in the schematic view.

Hierarchies

Hierarchies describe the relationships between objects, usually using a combination of parent-child and tree analogies, as in a family tree. Objects can be associated with each other in a hierarchy for a number of reasons, such as to make manipulation easier, to propagate applied properties, or to animate children in relation to a parent. For example, the parent-child relationship means that any properties applied to the parent (in branch mode) also affect the child. In a hierarchy there is a parent, its children, its grandchildren, and so on:
- A root is a node at the base of either a branch or the entire tree.
- A tree is the whole hierarchy of nodes stemming from a common root.
- A branch is a subtree consisting of a node and all its descendants.
Creating Hierarchies You can create a hierarchy by selecting an object and activating the Parent tool from the Constrain panel (or pressing the / key). Click on another object to make it the child of the selected object, or middle-click to make the selected object the child of the picked object. Continue picking objects or right-click to exit the tool. You can also create hierarchies by dragging and dropping in the explorer:
Deleting an Object in a Hierarchy If you delete an object with children, it is replaced by a null with the same name in order to preserve the hierarchy structure. Deleting this null just replaces it with another one. If you want to get rid of it, you must first cut its children if you want to keep them, or branch-select the object to remove it and its children.
Groups
You can organize 3D objects, cameras, and lights into groups for the purpose of selection, applying operations, assigning properties and shaders, and attaching materials and textures. For example, you can add several objects to a group, and then apply a property like Display, Geometry Approximation, or a material to the group. The group's properties override the members' own. Besides being able to organize objects into groups, you can also create a group of groups. An object can be a member of more than one group. Groups, however, can't be placed in hierarchies. They can only live immediately beneath the scene root or a model. In Softimage, groups are a tool for organizing and sharing properties. If you are familiar with Autodesk Maya and want to use groups to control transformations, for example in a character rig, use transform groups instead. If you are familiar with Autodesk 3ds Max, note that you don't need to open a group to select its members individually. You can always select either the group as a whole or any of its members.
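Group properties overriding member properties can be sketched as a two-level lookup. The classes below are illustrative only; in particular, the order in which multiple groups are consulted here is an assumption of the sketch, not documented Softimage behavior:

```python
# Sketch: an object's effective property value comes from a group it
# belongs to if any group defines it, otherwise from the object itself.

class SceneObject:
    def __init__(self, name):
        self.name = name
        self.local_props = {}
        self.groups = []       # an object may belong to several groups

class Group:
    def __init__(self, name):
        self.name = name
        self.props = {}
        self.members = []

    def add(self, obj):
        self.members.append(obj)
        obj.groups.append(self)

def effective_value(obj, prop, default=None):
    """Group properties override the member's own local properties."""
    for group in obj.groups:
        if prop in group.props:
            return group.props[prop]
    return obj.local_props.get(prop, default)

ball = SceneObject("ball")
ball.local_props["wireframe_color"] = "white"

red_group = Group("red_things")
red_group.props["wireframe_color"] = "red"
red_group.add(ball)

print(effective_value(ball, "wireframe_color"))   # red: group overrides
```

Deleting the group (removing it from `ball.groups`) would let the object's own white wireframe color show through again, mirroring how deleting a group node leaves the member objects intact.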
Make the ball_child a child of the ball_parent by dropping its node onto the ball_parent's node. The ball_child is now under the ball_parent's node.
In the schematic, you can create a hierarchy by pressing Alt while dragging a node onto a new parent.

Cutting Links in a Hierarchy
You will often need to cut the hierarchical links between a parent and its child or children in a hierarchy of objects. If the child is also a parent, the links to its own children are not affected. Select the child and click Cut in the Constrain panel, or press Ctrl+/. A cut object becomes a child of its model. If an object is cut from its model, it becomes a child of the parent model.
Creating Groups
To create a group, select some objects and click Group in the Edit panel or press Ctrl+G. In the Group property editor, enter a name for your group and set the View and Render Visibility, Selectability, and Animation Ghosting options.
Selecting Groups
You can select groups in the 3D and schematic views using the Group selection button or the = key. Note that the Group button changes to the Cluster button when a component filter is active.
Once a group is selected, you can select all its members using Select > Select Members/Components. The members of the group are selected as multiple objects. If you want to select a single member of a group, simply select it normally in any 3D, explorer, or schematic view.

Adding and Removing Elements from Groups
Add to Group
All selected objects are grouped together. In the explorer, you can see the group with all its members within it.
To add objects to a group, select the group and add the objects you want to the selection. In the Edit panel, click the + button (next to the Group button). You can also drag objects onto a group in an explorer view.
If an object is a member of just one group, you can ungroup it by selecting it and clicking the - button (next to the Group button). If an object is a member of multiple groups, you must select the group to remove it from before selecting the object. Alternatively, use the context menu in the explorer.
Right-click on the name of the object within the group to be removed and choose Remove from Group.
Removing Groups
You can remove a group by selecting it and pressing Delete. When you delete groups, only the group node and its properties are deleted, not the member objects themselves.
Scene Layers
Scene layers are containers similar to groups or render passes that help you organize, view, display, and edit the contents of your scene. For example, you can put different objects into different scene layers and then hide a particular layer when you don't want to see that part of your scene. Or you might want to make a scene layer's objects unselectable if the scene is getting too complex to select objects accurately. You can create as many layers as your scene requires. The main differences between a scene layer and a group are that every object is a member of a layer (the default layer, if you haven't created any new layers) and objects cannot belong to more than one layer.

Scene Layer Attributes
Each scene layer has four main attributes: viewport visibility, rendering visibility, selectability, and animation ghosting. You can activate or deactivate each of these attributes for each layer in the scene. Scene layers can also have custom properties such as wireframe color and geometry approximation.

Scene Layers in the Explorer
You can view and edit scene layers in the explorer. This is most useful when you wish to move several objects between layers, since you can quickly drag and drop them from one layer to another.
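The key difference from groups, that every object belongs to exactly one layer, can be sketched as a move operation rather than an add operation. The names below are illustrative, not the actual Softimage API:

```python
# Sketch: layer membership is exclusive, so assigning an object to a
# new layer implicitly removes it from its previous layer.

class LayerManager:
    def __init__(self, default_layer="Layer_Default"):
        self.default_layer = default_layer
        self.layer_of = {}                 # object name -> layer name

    def assign(self, obj, layer):
        self.layer_of[obj] = layer         # old membership is replaced

    def layer(self, obj):
        # Every object is a member of some layer, the default if unset.
        return self.layer_of.get(obj, self.default_layer)

    def members(self, layer):
        return sorted(o for o, l in self.layer_of.items() if l == layer)

mgr = LayerManager()
mgr.assign("sphere", "Layer_Default")
mgr.assign("cube", "Layer_Default")
mgr.assign("sphere", "Background")         # moved, not copied

print(mgr.members("Layer_Default"))        # ['cube']
print(mgr.members("Background"))           # ['sphere']
```

Contrast this with groups, where adding an object to a second group leaves its other memberships intact.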
The Scene Layer Manager
The scene layer manager is a grid-style view from which you can quickly view and edit all of the layers in a scene. You can use it to do things like add objects to or remove them from layers, create new scene layers, toggle scene layer attributes, select objects in a scene layer, and so on. To open the scene layer manager in a floating window, press 6 at the top of the keyboard, or choose View > General > Scene Layer Manager from the main menu. The scene layer manager is also available on the KP/L panel.
The Layers menu contains commands for creating layers, moving selected objects into the current layer, and so on. Other commands are available by right-clicking in the grid. The View menu contains various display preferences, including how layers should be sorted and which columns are visible. Press and hold Shift to keep the menu open while you toggle multiple items.
Scene layers are represented as indented rows. Right-click anywhere in the row for various commands that affect the corresponding layer.
The current layer is indicated by a green background and a double chevron. To make a layer current, click in the leftmost column of the corresponding row. Scene layer groups are represented as rows with a light gray background. Right-click anywhere in the row for various commands that affect all layers in the group. Click the triangle at the left to hide or display the rows of its individual layers. To rename a layer or group, double-click on its name, type a new name, and press Enter. You can select multiple layers for certain commands by clicking on their names. To select a range, click on the first layer and then Shift+click on the last, or drag across the desired rows. To add individual layers to the selection, Ctrl+click on their rows. Note that selecting layers in the grid in this way simply selects them for certain commands in the scene layer manager; it does not affect the global scene selection.
Scene layer attributes: wireframe color, view visibility, render visibility, selectability, and animation ghosting. Click in a cell to toggle its value. Click+drag to toggle multiple cells in a rectangular area. Right-click on a column heading and choose Check All or Uncheck All. Double-click on a color swatch to set the wireframe color and other display attributes.
Use the cells of a layer group to control all layers in the group. You can still change the settings of individual layers afterward. When different layers in the group have different values, the cell has a light gray checkmark. Right-click on a column heading and choose Check All or Uncheck All. Resize a column by dragging the borders of its heading.
Properties
A property is a set of related parameters that controls some aspect of objects in a scene.
Applying Properties
You can apply many properties using the Get > Property menu of any toolbar. This applies the default preset of a property's parameter values to the selected objects, possibly replacing an existing version of the same property.
Editing Properties
To edit an existing property, open its property editor by clicking on the property node in an explorer. A handy way to do this is to press F3 to see a mini-explorer for the selected object, or click the Selection button at the bottom of the Select menu. You can also right-click on Selection to display properties according to type.
Click Selection...
For other types of properties, an object can have many at the same time. For example, an object can have several local annotations as well as several annotations inherited from different ancestors, groups, and so on.
Simple Propagation In this sphere hierarchy, each sphere is parented to the one above it. Because the larger sphere was branch-selected when the texture was applied, every sphere beneath it inherits the checkerboard texture.
Branch Propagation One sphere was branch-selected and given a cloud texture. The remaining sphere retains the checkerboard texture because it is on another branch.
Local Material/Texture Application
One sphere was single-selected and given a blue surface. This applies a local material/texture to the selected object only and none of its children; the sphere's children still inherit the checkerboard texture, despite assigning a local texture to their parent.

Reverting to the Scene's Default Material
The larger sphere was single-selected and has had its material deleted. Since other spheres can no longer inherit their texture from the parent (because it's been deleted), they revert back to the scene's default gray (or another color you've defined).
Properties that are applied in branch mode, and therefore propagated, are marked with a B. Shared properties such as materials are shown in italics. The property's source (where it is propagated from) is shown in parentheses. If no source is shown, it is inherited from the scene root.
You can also set the following options in the explorer's View menu:
- Local Properties displays only those properties that have been applied directly to an object.
- Applied Properties shows all properties that are active on an object, no matter how they are propagated.
Displaying Components
You can display the various component types in a specific 3D view using the individual options available from its eye icon (Show menu) or in all open 3D views using the options on the Display > Attributes menu on the main menu bar.
You can define clusters for points, edges, polygons, subsurfaces, and other components. Each cluster can contain one type of component. For example, a cluster can contain points or polygons, but not both. Clusters may shift if you edit an operator in an objects construction history and add components before the position where the cluster was created. Creating Clusters To create a cluster, select some components and click Cluster on the Edit panel (the Cluster button changes to Group when objects are selected). As soon as the cluster is created, it is selected and you can press Enter to open its property editor and change its name. To create a cluster whose components arent already in other clusters, choose Edit > Create Non-overlapping Cluster instead. You can also use Edit > Create Cluster with Center to make a cluster with a null center that you can transform and animate. If you prefer to use a different object as a center, simply create a cluster and apply Deform > Cluster Center manually.
For more options, you can set the visibility options in the Camera Visibility property editor: click a 3D view's eye icon (Show menu) and choose Visibility Options, or choose Display > Visibility Options for all open 3D views. Note that when you activate a component selection filter, the corresponding components are automatically displayed in the 3D views.
Clusters
A cluster is a named set of components that are grouped together for a specific modeling, animation, or texturing purpose. Grouping and naming components makes it easier to work with those same components again and again. For example, by grouping all points that form an
Spinning top with two clusters (Top and Bottom)
Adding and Removing Components from Clusters To add components to a cluster, select the cluster and add the components you want to the selection. In the Edit panel, click the + button (next to the Cluster button). To remove components from a cluster, select the cluster, add the components to remove to the selection, and click the - button.
Add to Cluster
When you add components to an object, any new components that are surrounded by similar components in a cluster are automatically added to the cluster. Selecting Clusters You can select clusters using the Clusters button at the bottom of the Select panel, or in any other explorer.
You can apply deformations to deform points, edges, and polygons in the same way that you apply them to objects. You cannot animate component and cluster transformations directly. Instead, you can use a deformer such as a cluster center or volume deformer and animate the deformer, or you can use shape animation.
You can also select clusters in a 3D view when a component selection filter is active. Simply activate the Cluster button at the top of the Select panel, or press =, or middle-click on any component in the cluster. Removing Clusters To remove a cluster, select it and press Delete. Removing a cluster removes the group, but does not remove the individual components from the object.
Parameter Maps
Certain parameters are mappable: you can vary the parameter's value across an object's geometry by connecting a weight map, texture map, vertex color property, or other cluster property. This allows you to, for example, control the amplitude of a deformation or the emission rate of a particle system across an object's surface. Mappable parameters have a connection icon in their property editors that allows you to drive the value using a map.
Connection icon (unconnected and connected states)
Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are connected to parameters instead of shaders. Vertex color properties are color values stored at each polynode or texture sample of a geometric object. In addition to the attributes listed above, you can connect mappable parameters to other cluster properties, including UV coordinates (texture projections), shapes, user normals, and envelope weights. While these may not always be useful for driving modeling and simulation parameters, the ability to connect to these properties may be useful for custom developers.
Connecting Maps
No matter what type of map you want to connect to a parameter, the basic procedure is the same. In a property editor, click the connection icon of a mappable parameter and choose Connect. A pop-up explorer opens; navigate through it and pick the desired map: Weight maps are found under the appropriate cluster. Texture maps are properties directly under the object; they can also be found under the appropriate cluster. Make sure you don't accidentally select the texture projection. Vertex color properties are also found under the appropriate cluster. The connection icon changes to show that a map is connected. When a map is connected, you can click this icon to open the map's property editor. If you connect a map that has multiple components, like an RGBA color, to a parameter that has a single dimension, like Amplitude, you can use the options in the Map Adaptor to control the conversion. To disconnect a map, right-click on the connection icon of a connected parameter and choose Disconnect.
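As a rough illustration of what such a conversion involves conceptually, a multi-component color can be reduced to a single scalar weight as sketched below. This is plain Python for illustration only; the function name and weighting choices are hypothetical, not part of Softimage's Map Adaptor options.

```python
def rgba_to_scalar(r, g, b, a, mode="luminance"):
    """Reduce a multi-component map value to one dimension.

    Illustrative only: the modes mimic the kind of choices offered
    when an RGBA map drives a single-dimension parameter.
    """
    if mode == "luminance":
        # Rec. 709 luma weights for perceptual brightness
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    if mode == "alpha":
        # use the alpha channel directly
        return a
    # fall back to a plain channel average
    return (r + g + b) / 3.0
```

For example, a pure white RGBA value maps to a weight of 1.0 under the luminance mode, regardless of its alpha.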
Which Parameters Are Mappable? Almost any parameter with a connection icon in its property editor is mappable. These parameters include: Certain deformation parameters, such as Amplitude in the Push operator or Strength in the Smooth operator. The Multiplier parameter in the Polygon Reduction operator. Edge and vertex crease values. Various simulation parameters, such as the length and density of hair, the stiffness of cloth, and so on. Shapes in the animation mixer. What Can You Connect to Mappable Parameters? You can connect just about any cluster property to a mappable parameter. The most useful properties include the following: Weight maps allow you to start from a base map such as a constant value or gradient, and then paint values on top.
To connect maps to hair parameters, you must first transfer the maps from the emitter to the hair object. In the case of weight maps and deformations, you can simply select the weight map and then apply the deformation instead of manually connecting it. Since the weight map is selected by default as soon as you create it, this technique is quick and easy.
Selected cluster
3. Apply a weight map using Get > Property > Weight Map.
Weight Maps
Weight maps are properties of point clusters on geometric objects. They associate each point in a cluster with a weight value. Each cluster can have multiple weight maps, so you can modulate different parameters on different operators in different ways. Each weight map has its own operator stack. When you create a weight map, a WeightMapOp operator sets the base map, which can be constant or one of a variety of gradients. Then when you paint on the weight map, the strokes are added to a WeightPainter operator on top of the WeightMapOp in the stack. Like other elements with operator stacks, you can freeze a weight map to discard its history and simplify your scene data. The following steps present a quick overview of the workflow for using weight maps. 1. Start with an object.
Blank weight map, ready for painting
4. Press W to activate the Paint tool, then use the mouse to paint on the weight map. - Press R and drag the mouse to control the brush radius. - Press E and drag the mouse to control the opacity. - Press Ctrl+W to open the Brush properties to set other parameters. In the default paint mode (normal, also called additive), use the left mouse button to add paint and the right mouse button to remove weight. Press Alt to smooth.
5. Connect the weight map to drive the value of a parameter; for example, in the image below it is driving the Amplitude of a Push deformation.
Texture Maps
A slight Push is all that's needed.
Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are used to control operator parameters instead of surface colors. HDR images are fully supported, and floating-point values are not truncated. Creating Texture Maps To create a texture map, you select the texture projection method and then link an image file to it. 1. Apply a texture projection and texture map to the selected object by doing one of the following: - If the object already has a set of UV coordinates (texture projection) that you want to use, select it and choose Get > Property > Texture Map > Texture Map. This creates a blank texture map property for the object and opens a blank Texture Map property editor in which you need to set the texture projection and select an image that will be used as the map (as described in the next steps). or - To create a new texture projection for the map, select the object and choose Get > Property > Texture Map > projection type (such as Cylindrical, Spherical, UV, or XZ) that is appropriate for the shape of the object. This creates a texture map property and texture projection for the object, but doesn't open the Texture Map property editor. Now you must open the Texture Map property editor to associate the image to this projection to use as the map (in the explorer, click the Texture Map property under the object).
6. You can reselect the weight map and continue to paint on it to modify the effect further. If your object has multiple maps, you may need to select the desired one before you can paint on it. You can do this easily using Explore > Property Maps from the Select panel. Freezing Weight Maps Weight maps can be frozen to simplify your scene's data. Freezing collapses the weight map generator (the base constant or gradient map you chose when you created the weight map) together with any strokes you have applied. To freeze a weight map, select it and click the Freeze button on the Edit panel. After you have frozen a weight map, you can still add new strokes, but you cannot change the base map or delete any strokes you applied before freezing.
2. In the Clip section of the Texture Map property editor, select an image or sequence to use as the map. If there isn't already a clip for the desired image, click New to create one. 3. In the UV Property area beneath the image, select an existing texture projection or create a New texture projection (if there isn't already one) that is appropriate to the shape of the object or how you want to project the mapped image. Editing Texture Maps To edit the UV coordinates of a texture map's projection, select the object and open the texture editor. If necessary, use the Clips menu to display the correct image and the UVs menu to display the correct projection. If you do this, you should make sure that the operator connected to the texture map is above the modeling region of the construction history, for example, in the animation region. Otherwise, the UV edits are above the operator and appear to have no effect. You can move the operator back to the modeling region when you are done.
Section 3
Moving in 3D Space
Working in 3D space is fundamental to Softimage. You will use the transformation tools constantly as you model and animate objects and components.
Coordinate Systems
Softimage uses coordinate systems, also called reference frames, to describe the position of objects in 3D space.
XYZ Coordinates
With the Cartesian coordinate system, you can locate any point in space using three coordinates. Positions are measured from the origin, which is at (0, 0, 0). For example, if X = +2, Y = +1, Z = +3, a point would be located to the right of, above, and in front of the origin.
Location = (2, 1, 3)
Cartesian Coordinates
One essential concept that a first-time user of 3D computer graphics should understand is the notion of working within a virtual three-dimensional space using a two-dimensional user interface. Softimage uses the classical Euclidean/Cartesian mathematical representation of space. The Cartesian coordinate system is based on three perpendicular axes, X, Y, and Z, intersecting at one point. This reference point is called the origin. You can find it by looking at the center of the grid in any of the 3D views.
XYZ Axes
Softimage uses a Y-up system, where the Y direction represents height. This is different from some other software, which is Z-up. This is something to keep in mind if you are familiar with other software, or are trying to import data into Softimage. A small icon representing the three axes and their directions is shown in the corner of 3D views. The icon's three axes are represented by color-coded vectors: red for X, green for Y, and blue for Z. An easy way to remember the color coding is RGB = XYZ. This mnemonic is repeated throughout Softimage: object centers, manipulators, axis controls on the Transform panel, and so on.
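The kind of axis remapping needed when bringing Z-up data into a Y-up system can be sketched as follows. The exact sign convention depends on each package's handedness, so treat this as one plausible choice rather than a universal rule:

```python
def z_up_to_y_up(point):
    """Remap a right-handed Z-up coordinate (x, y, z) to Y-up.

    Height (old Z) becomes the new Y; old Y becomes depth.
    The negation keeps the coordinate system right-handed.
    """
    x, y, z = point
    return (x, z, -y)
```

So a Z-up point (1, 2, 3), sitting 3 units above the ground plane, becomes (1, 3, -2) with its height now stored in Y.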
Softimage Units
Throughout Softimage, lengths are measured in Softimage units. How big is a Softimage unit? It is an arbitrary, relative value that can be anything you want: a foot, 10 cm, or anything else. However, it is generally recommended that you avoid making your objects too big, too small, or too far from the scene origin. This is because rounding errors can accumulate in mathematical calculations, resulting in imprecision or even jittering in object positions. As a general rule of thumb, an entire character should not fit within 1 or 2 units, nor exceed 1000 units. The Softimage units used for objects also matter when creating dynamic simulations where objects have mass or density and are affected by forces such as gravity. For simulations, Softimage assumes that 1 unit is 10 cm by default, but you can change this by changing the strength of gravity.
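Because 1 unit is assumed to be 10 cm by default for simulations, real-world quantities have to be rescaled accordingly. A minimal sketch of that conversion, with the constant reflecting the stated default (which you can change in your scene):

```python
CM_PER_UNIT = 10.0  # default simulation scale: 1 Softimage unit = 10 cm

def cm_to_units(cm):
    """Convert a real-world length in centimeters to Softimage units."""
    return cm / CM_PER_UNIT

# Standard gravity is about 981 cm/s^2, which works out to
# 98.1 units/s^2 at the default scale.
gravity_units = cm_to_units(981.0)
```

If you decide that 1 unit means 1 m instead, the same conversion with CM_PER_UNIT = 100 gives 9.81 units/s², which is why changing the strength of gravity effectively changes the scale of your scene.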
The center of an object is only a reference: it is not necessarily in the middle of the object because it can be relocated (as well as rotated and scaled). The position, orientation, and scaling (collectively known as the pose) of the object's center defines the frame of reference for the local poses of its own children.
Transformations
Transformations are fundamental to 3D. They include the basic operations of scaling, rotating, and translating: scaling affects an element's size, rotation affects an element's orientation, and translation affects an element's position. Transformations are sometimes called SRTs. You transform by selecting an object or components, activating a transform tool, then clicking and dragging a manipulator in a 3D view.
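The "SRT" name also encodes the order of application: scaling first, then rotation, then translation. A small sketch of that order in plain Python (rotation about Y only, for brevity; this is illustrative math, not Softimage code):

```python
import math

def apply_srt(point, scale, rot_y_deg, translate):
    """Apply scale, then a rotation about Y, then translation, to a point."""
    # 1. scaling affects size
    x, y, z = (point[0] * scale[0], point[1] * scale[1], point[2] * scale[2])
    # 2. rotation affects orientation (right-handed rotation about the Y axis)
    a = math.radians(rot_y_deg)
    x, z = x * math.cos(a) + z * math.sin(a), -x * math.sin(a) + z * math.cos(a)
    # 3. translation affects position
    return (x + translate[0], y + translate[1], z + translate[2])
```

For example, applying a uniform scale of 2, a 90-degree Y rotation, and a translation of (0, 5, 0) to the point (1, 0, 0) first stretches it to (2, 0, 0), swings it around to roughly (0, 0, -2), and finally lifts it to about (0, 5, -2). Reordering the steps would give a different result, which is why the SRT order matters.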
Transforming Interactively
1 Select objects or components to transform and activate a tool: Scale (press x) Rotate (press c) Translate (press v) 3 If desired, specify the active axes. See Specifying Axes on page 67.
4 If desired, set the pivot. See Setting the Pivot on page 67.
5 Click and drag on the manipulator. See Using the Transform Manipulators on page 68.
Manipulation Modes
When you transform interactively, you always do so using one of several modes set on the Transform panel: View, Local, Global, etc. The mode determines the axes and the default pivot used for manipulation. If an object isn't transforming as you expected, it's possible that you need to change the manipulation mode. It is important to remember that the mode does not affect the values stored for animation (local versus global); it only affects your interaction with the transform tool. Global Global translations and rotations are performed along the scene's global axes.
Object is transformed...
View View translations and rotations are performed with respect to the 3D view. The plane in which the object moves depends on whether you are manipulating it in the Camera, Top, Front, Right, or other view.
Object is transformed using the axes of the 3D view as the reference.
If you are using the SRT manipulators in a perspective view like Camera or User, View mode uses the global scene axes.
...using global axes as the reference.
Local Local transformations are performed along the axes of the object's local coordinate system as defined by its center. This is the only true mode available for scaling: scaling is always performed along an object's own axes.
Object is transformed...
Par, or parent, translations and rotations use the axes of the object's parent. For translation, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object's local position for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local position parameters are automatically marked. To activate Par for rotations, activate Add and press Ctrl.
Object is transformed...
Par mode is not available for components. In its place, Object mode uses the local coordinates of the object that owns the components. Add Add, or additive, mode is only available for rotation. It lets you directly control the object's local X, Y, and Z rotations as stored relative to its parent. This mode is especially useful when animating bones and other objects in hierarchies. For rotations, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object's local orientation for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local rotation parameters are automatically marked. Uni Uni, or uniform, is available only for scaling. It is not really a mode but a modifier of the way objects are scaled locally. It scales along all active local axes at the same time with a single mouse button. You can activate and deactivate axes as described in Specifying Axes on page 67. You can also temporarily turn on Uni by pressing Shift while scaling.
Vol Like Uni, Vol or volume is available only for scaling and is a modifier rather than a mode. It scales along one or two local axes, while automatically compensating the other axes so that the volume of the object's bounding box remains constant.
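The compensation Vol performs can be pictured as keeping the product of the three scale factors equal to 1, so the bounding-box volume never changes. A plain illustration of the math, not Softimage code:

```python
def vol_compensate(sx, sy):
    """Given scale factors for two axes, return (sx, sy, sz) with sz
    chosen so that sx * sy * sz == 1, i.e. the bounding-box volume
    of the object stays constant."""
    return sx, sy, 1.0 / (sx * sy)
```

For example, doubling both X and Y forces Z down to a quarter of its size, since 2 x 2 x 0.25 = 1.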
Ref Ref, or reference, mode lets you translate an object along the X, Y, and Z axes of another element or an arbitrary reference plane. Right-click on Ref to set the reference.
Object is transformed...
Plane Plane mode lets you drag an object along the XZ plane of another element or an arbitrary reference plane. Right-click on Plane to choose the plane.
Object is transformed...
If Allow Double-click to Toggle Active Axes is on in the Transform preferences, then you can also specify transformation axes by double-clicking in the 3D views while a transformation tool is active: Double-click on a single axis to activate it and deactivate the others. If only one axis is currently active, double-click on it to activate all three axes. Shift+double-click on an axis to toggle it on or off individually. (If it is the only active axis, it will be deactivated and the other two axes will be activated.)
Specifying Axes
When transforming interactively, you can specify which axes are active using the x, y, and z icons in the Transform panel. For example, you can activate rotation in Y only, or deactivate translation only in Z. Active icons are colored, and inactive icons are gray. Click an axis icon to activate it and deactivate the others. Shift+click an axis icon to activate it without affecting the others. Ctrl+click an axis icon to toggle it. Click the All Axes icon to activate all three axes. Ctrl+click the All Axes icon to toggle all three axes.
1. Make sure that Transform > Modify Object Pivot is set to the desired value: - Off (unchecked) to set the tool pivot used for interactive manipulation only. This is useful if you are simply moving elements into place. The tool pivot is normally reset when you change the selection, but you can lock and reset its position manually. - On (checked) to modify the object pivot. The object pivot acts like a center for the object's local transformations. It is used when playing back animated transformations, and is also the object's default pivot for manipulation. You can animate the object pivot to create effects such as a rolling cube. 2. Activate a transform tool.
All Axes
3. Do any of the following: - Alt+drag the manipulator's center, or one of its axes, to change the position of the pivot manually. You can use snapping, as well as change manipulation modes on the Transform panel. - Alt+click in a geometry view. The pivot snaps to the closest point, edge midpoint, polygon midpoint, or object center among the selected objects. This lets you easily rotate or scale an object about one of its components. - Alt+middle-click to reset the pivot to the default. You can lock the pivot by pressing Alt, clicking the triangle below the Pivot icon, and choosing Lock. The tool pivot then remains at its current location, even if you change the selection.
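The Alt+click behavior described above amounts to a nearest-neighbor pick among the candidate snap positions. A sketch of that selection logic in plain Python (the function and its arguments are illustrative, not Softimage's API):

```python
def snap_pivot(click, candidates):
    """Return the candidate position nearest to the clicked location.

    `click` and each candidate are (x, y, z) tuples; candidates would
    be points, edge midpoints, polygon midpoints, and object centers.
    """
    def sq_dist(p):
        # squared distance is enough for comparison; no sqrt needed
        return sum((a - b) ** 2 for a, b in zip(p, click))
    return min(candidates, key=sq_dist)
```

Comparing squared distances avoids the square root while producing the same ordering, a common shortcut for this kind of proximity test.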
Rotate Manipulator
Click and drag on a single ring to rotate around that axis. Click and drag on the silhouette to rotate about the viewing axis. This does not work in Add mode. Click and drag on the ball to rotate freely. This does not work in Add mode.
Scale Manipulator
Click and drag on a single axis to scale along it. Click and drag along the diagonal between two axes to scale both those axes uniformly.
Click and drag the center left or right to scale all active axes uniformly.
In addition to dragging the handles, you can: Middle-click and drag anywhere in the 3D views to translate along the axis that most closely matches the drag direction. Click and drag anywhere in the 3D views (except on the manipulator) to perform different actions, depending on the setting for Click Outside Manipulator in the Tools > Transform preferences. Right-click on the manipulator to open a context menu, where you can set the manipulation mode and other options.
Transformation Preferences
Transform > Transform Preferences contains several settings that affect the display, interaction, and other options of the transformation tools. Since you will be spending a great deal of your time transforming things, it's a good idea to explore these and find the settings that are most comfortable for you.
You specify which method to use for each child in its Local Transform property. You can also set the default value used for all new objects.
To specify hierarchical or classic scaling
1. Select one or more child objects and open their Local Transform property editor. 2. On the Scaling tab, turn Hierarchical (Softimage) Scaling off or on. If it is off, classic scaling is used.
To set the default scaling mode used for all new objects
1. Choose File > Preferences from the main menu bar. 2. Click General. 3. Toggle Use Classical Scaling for Newly Created Objects.
Center Manipulation
Center manipulation lets you move the center of an object without moving its points. This changes the default pivot point used for rotation and scaling. You can manipulate the center by using Center mode interactively, or by using commands on the Transform menu (Move Center to Vertices and Move Center to Bounding Box). It's important to note that center manipulation is actually a deformation: as the center is moved, the geometry is compensated to stay in place. Because it is a deformation, you cannot manipulate the center of non-geometric objects. This includes nulls, bones, implicit objects, control objects, and anything else without points.
Resetting Transformations
The Transform > Reset commands return an object's local scaling, rotation, and translation to their default values. This effectively removes transformations applied since the object was created or parented, or since its transformations were frozen. If you want an object to return to a pose other than the origin of its parent's space when you reset its transformations, set a neutral pose for it.
Freezing Transformations
The Transform > Freeze commands reset an object's size, orientation, or location to the default values without moving the object's geometry in global space. For instance, freezing an object's translation moves its center to (0, 0, 0) in its parent's space without visibly displacing its points. Like center manipulation, freezing transformations is actually a deformation: as the center is transformed, the geometry is compensated to stay in place. If a neutral pose exists when you freeze an object's transformations, the object's center moves to the neutral pose instead of the origin of its parent's space. If you want the object's center to be at the origin, you should remove the neutral pose in addition to freezing the transformations. You can perform these two operations in either order.
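Conceptually, freezing translation rewrites the center and compensates the points in one step, so every point's global position (center plus local offset) is unchanged. A minimal sketch of that bookkeeping, assuming simple additive centers with no rotation or scaling involved:

```python
def freeze_translation(center, local_points):
    """Move the center to the parent's origin while offsetting the
    points so that their global positions (center + local offset)
    are unchanged."""
    new_center = (0.0, 0.0, 0.0)
    # fold the old center's offset into each point's local position
    new_points = [tuple(c + p for c, p in zip(center, pt))
                  for pt in local_points]
    return new_center, new_points
```

Before and after the freeze, each point sits at the same place in global space; only the split between "center" and "local offset" has changed, which is exactly why the object does not appear to move.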
Transform Setup
The Transform Setup property lets you define a preferred transformation for an object. When you select that object, its preferred transformation tool is automatically activated. Of course, you can still choose a different tool and change transformation options manually if you want to. Transform setups are particularly useful when building animation rigs for characters. If you are using an object to control a character's head orientation, you can set its preferred transformation to rotation. If you are using another object to control the character's center of gravity (COG), you can set its preferred transformation to translation. When you select the head control, the Rotate tool is automatically activated, and then when you select the COG control, the Translate tool is automatically activated. You apply a Transform Setup property by choosing Get > Property > Transform Setup from any toolbar and then setting the options. You can modify the options later by opening the property from the explorer. While transform setups are useful for many tasks, like animating a rig, at other times you don't want the current tool to keep changing as you select objects. In these cases, you can ignore transform setups for all objects in your scene by turning off Transform > Enable Transformation Setups. Turn it back on to resume using the preferred tool of each object.
Snapping
Snapping lets you align components and objects when moving or adding them. You can snap to targets like objects, components, and the viewport grids, or you can snap by increments.
Incremental Snapping
When translating, rotating, and scaling elements, you can snap incrementally. Instead of snapping to a target, elements jump in discrete increments from their current values. This is useful if you want to move an element by exact multiples of a certain value, but keep it offset from the global grid. To snap incrementally: Press Shift while rotating or translating an element. Press Ctrl while scaling (Shift is used for scaling uniformly).
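Incremental snapping can be pictured as rounding the change rather than the value, so the element keeps its original offset from the grid. A sketch in plain Python, with the step size standing in for the Snap Increments preference:

```python
def snap_incremental(start, current, step):
    """Snap `current` to the nearest multiple of `step` away from
    `start`, preserving the starting offset from the global grid."""
    return start + round((current - start) / step) * step
```

For example, an element starting at 0.3 and dragged to 1.45 with a 0.5 increment lands on 1.3 (two full steps from 0.3), not on the grid value 1.5.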
Snapping to Targets
Use the Snap panel to activate snapping to targets.
Set a variety of options from the menu.
Activate or deactivate snapping. Use Ctrl to temporarily toggle the current state. Specify the type of target: points, curves/edges, facets, or the grid. Right-click to select various sub-types.
You can set the Snap Increments using Transform > Transform Preferences.
The grid used for snapping depends on the manipulation mode: Global, Local, Par, Object, and Ref use the Snap Increments set in the Transform > Transform Preferences. They do not use the visible floor/grid displayed in 3D views. View mode uses the Floor/Grid Setup set in the Camera Visibility property editor (Shift+s over a specific 3D view, or Display > Visibility Options (All Cameras)). Plane mode uses the Snap Size set in the Reference Plane property editor.
Section 4
Setting a Workgroup
Workgroups provide a method for easily sharing customizations among a group of people working on the same project. Simply set your workgroup path to a shared location on your local network, and you can take advantage of any presets, plug-ins, add-ons, shaders, toolbars, views, and layouts that are installed there. The workgroup is usually created by a technical director or site supervisor. To connect to an existing workgroup, choose File > Plug-in Manager, click the Workgroups tab, click Connect, and specify the location.
Whenever you use a Softimage file browser to access files on disk, you can quickly switch among your project, user, workgroup, and installation locations using the Paths button.
Scenes
A scene file contains all the information necessary to identify and position all the models and their animation, lights, cameras, textures, and so on for rendering. All the elements of a scene are compiled into a single file with an .scn extension. The Softimage title bar identifies the name of the current scene and the project in which it resides.
The File Menu contains most of the commands for creating, opening, and managing scenes.
Merging Scenes combines objects from any number of Softimage scenes. When you merge a scene into the current scene, it is automatically loaded as a model. Press the Ctrl key as you drag and drop a scene (*.scn) file from an external window into a 3D view to merge it as a model under the scene root. Save or Save As to update the existing scene or save it under a new name in the current project. Manage scenes and their associated projects using the Project Manager. You can also create, open, and save scenes to different projects from there. Import and export scenes from and to other 3D or CAD/CAM programs using the dotXSI, COLLADA, FBX, DirectX, IGES, and OBJ formats. Choose Preferences > Data Management to set options for backing up, autosaving, recovering, and debugging your scenes.
A New Scene is automatically generated when you start Softimage or create a new project. You can also create a new scene at any time while you work. Every new scene is created in the active project, and its name appears as Untitled in the Softimage title bar. Choose Edit > Delete All from the Edit panel in the main command panel, or press Ctrl+Delete, to clear the workspace before creating a new scene. Open a scene, or open a recently used scene. You can also drag and drop a scene (*.scn) file from an external window into a 3D view to open the scene. Note that you cannot drag and drop scenes from external windows on Linux systems. When you open a scene file, a temporary lock file is created. Anyone else who opens the file in the meantime must work on a copy, and any changes to the scene must be saved under a different file name. The lock file is deleted when you close the scene.
The left pane allows you to choose whether to show all external files used by the scene, or only those used by a particular model.
The grid lists all of the external files for the scene/model specified in the left-hand pane, and of the type specified in the File Type list.
Projects
In Softimage, you always work within the structure of a project. A project is a system of folders that contain the scenes you build and the external files referenced by those scenes. Projects are used to keep your work organized and provide a level of consistency that can simplify production for a workgroup. A project can exist locally on your machine or can be shared from a network drive. When you open Softimage for the first time, an untitled scene is created in the XSI_SAMPLES factory project. You can set your own project as the default project that opens with Softimage. The project name in the title bar at the top of the Softimage interface is the active project. Project lists are text-based files with an .xsiprojects file name extension. You can build, manage, and distribute your project lists among members of your workgroup using the Project Manager.
Scan for projects in a specified path and add them to the project list. Export the list of projects and have all members of the workgroup import it. Sort projects by Name, Origin (factory [F], user [U], and workgroup [W]), or none. Location of your project folder. Sets the selected project as the active project. Sets the default project that opens automatically when you start Softimage.
Models
Models are like mini-scenes that can be easily reused in scenes and projects. They act as a container for objects, usually hierarchies of objects, and many of their properties. Models contain not just the objects' geometry but also the function curves, shaders, mixer information, groups, and other properties. They can also contain internal expressions and constraints; that is, those expressions and constraints that refer only to elements within the model's hierarchy.
Club bot model structure contains many things that define the character.
There are two types of models: Local models are specific to a single scene. Referenced models are external files that can be reused in many scenes.
Exporting Models
Use File > Export > Model to export models created in Softimage for use in other scenes. Using models to export objects is the main way of sharing objects between scenes. When you export a model, a copy is saved as an independent file. The file names of exported models have an .emdl extension. The original model remains in the scene. If you ever need to modify the model, you can change it in the original scene, and then re-export it using the same file name. If other scenes use that file as a referenced model, they will update automatically when you open them. If you imported the file into another scene as a local model, you must delete the model from that scene and re-import it from the file to obtain the updated version.
For example, let's say that you're modeling a car that will be used in various scenes, but the animator needs to start animating with the car on another computer before you can finish the details. You export the car as porsche.emdl, which the animator can import into her scene while you continue your work. Any changes that the animator makes to the car, such as setting keys or expressions, are automatically stored in the model's delta in her scene.

When you're done modeling the car, you can re-export using the same file name. Now when the animator loads the scene or updates the referenced model, all the changes you made are automatically reflected in the car in her scene. After the model is updated, Softimage reapplies the changes stored in the delta to the model within the animator's scene.

Referenced models also let you work at different levels of detail. You can have a low-resolution model for fast interaction while animating, a medium-resolution model for more accurate previewing, and a high-resolution model for the final results.

Referenced models are indicated in the explorer by a white man icon. The default name of this node depends on the name of the external file, but you can change it if you want. The name of the active resolution appears in square brackets after the model's name. The name of a delta's target model appears after the delta's name.
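The delta mechanism described above can be sketched in miniature: local changes are recorded separately from the master data and reapplied whenever the master updates. This is a conceptual Python sketch, not the Softimage API; the function and parameter names are hypothetical.

```python
# Conceptual sketch (not the Softimage API): a referenced model's delta
# records local changes and reapplies them whenever the master updates.

def apply_delta(master_params, delta):
    """Return the scene's view of the model: master values
    overridden by whatever the delta recorded."""
    result = dict(master_params)   # start from the external .emdl contents
    result.update(delta)           # reapply locally recorded changes
    return result

# The modeler re-exports the car with updated values...
master = {"wheel_radius": 0.35, "body_length": 4.2}
# ...while the animator's delta holds her local tweak.
delta = {"body_length": 4.5}

updated = apply_delta(master, delta)
# The new master values arrive, but the animator's change survives.
```

The key property this illustrates is that the master and the delta are stored independently, so re-exporting the master never destroys locally recorded changes.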
Use the Modify > Model menu on the Model toolbar to set the current resolution, or to temporarily offload models.
Parameters of a referenced model display a white lock icon, but they can still be modified and animated. You can change a referenced model's parameter values, animate them, apply new properties, and so on. These changes are stored in the delta and reapplied when the model is updated. There are some changes you can't make, such as adding an object to the hierarchy or deleting a property. Whatever changes you perform, make sure that they are selected in the delta's Recorded/Applied Modifications property; otherwise they will be lost the next time the model is updated.
Instantiating Models
An instance is an exact replica of a model. Any type of model can be instanced. You can create as many instances as you like using the commands on the Edit > Duplicate/Instantiate menu, and position them anywhere in your scene. When you modify the original master model, all instances update automatically. Instances are useful because they require very little memory: only the transformations of the instance root are stored. However, you cannot modify, for example, an instance's geometry or material.

Instantiation has the following advantages:
- Instances use much less disk space than duplicates or clones because you're not duplicating the geometry.
- Editing multiple identical objects is very simple because you only have to edit the original.
- Wireframe, shading, and memory operations are much faster.

Instances are displayed in the explorer with a cyan i superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label I.
Instance in the explorer. Instance in the schematic view.
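The memory saving comes from the fact that an instance holds only a reference to the master plus its own root transformation. A minimal Python sketch of that idea (illustrative only, not the Softimage API; class names are hypothetical):

```python
# Conceptual sketch (not the Softimage API): an instance stores only its
# root transformation; the geometry is shared with the master model.

class MasterModel:
    def __init__(self, points):
        self.points = points            # geometry stored once

class Instance:
    def __init__(self, master, offset):
        self.master = master            # a reference, not a copy
        self.offset = offset            # only the root transform is stored

    def world_points(self):
        ox, oy, oz = self.offset
        return [(x + ox, y + oy, z + oz) for x, y, z in self.master.points]

master = MasterModel([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
inst = Instance(master, (5.0, 0.0, 0.0))

# Editing the master updates every instance automatically,
# because instances read the shared geometry at evaluation time.
master.points[1] = (2.0, 0.0, 0.0)
```

This is also why you cannot edit an instance's geometry directly: there is no per-instance copy to edit.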
Section 5
General Modeling
Modeling is the task of creating the objects that you will animate and render. No matter what type of object you are modeling, the same basic concepts and techniques apply. This section explores the aspects of modeling that aren't specific to any particular type of geometry, such as curves, polygon meshes, or NURBS surfaces.
Overview of Modeling
1. Start with a basic object, such as a primitive cube.
2. Add more subdivisions to work with.
3. Iteratively refine the object, moving points and adding more detail where required.
4. Once the modeling is done, the object is ready to be textured and animated. If changes are necessary, you can still perform modeling operations on the animated, textured object.
Geometric Objects
By definition, geometric objects have points. The set of these points and their positions determine the shape of an object and are often called the object's geometry. The number of points and how they are connected is called its topology. No matter what the type of geometry, Softimage allows you to select, manipulate, and deform points in the same way. Polygon meshes may require very heavy geometry (that is, many points) to approximate smoothly curved objects; however, you can subdivide them to create virtual geometry that is smoother.
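The geometry/topology split can be made concrete with a tiny data-structure sketch: point positions on one side, point connectivity on the other. This is illustrative Python only (the dictionary layout is hypothetical, not how Softimage stores meshes):

```python
# Illustrative sketch: a polygon mesh as point positions (geometry)
# plus point connectivity (topology).

mesh = {
    # geometry: where the points are
    "points": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    # topology: how the points are connected (a single quad here)
    "polygons": [[0, 1, 2, 3]],
}

def deform(mesh, dz):
    """Moving points changes the geometry but leaves the topology alone."""
    mesh["points"] = [(x, y, z + dz) for x, y, z in mesh["points"]]

deform(mesh, 2.0)
# The quad still references the same four points; only positions moved.
```

This distinction matters later in the section: deformations change only geometry, while topology modifiers change the number of components and their connections.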
Types of Geometry
The main types of renderable geometry in Softimage are polygon meshes and NURBS surfaces. In addition, there are other types of geometry that you can use for specialized purposes.

Polygon Meshes
Polygon meshes are quilts of polygons joined at their edges and vertices. One advantage of polygon meshes is that they allow for almost arbitrary topology: you are not limited to rectangular patches, and you can add extra points for more detail where needed.

NURBS Surfaces
Surfaces are two-dimensional NURBS (non-uniform rational B-spline) patches defined by intersecting curves in the U and V directions. In a cubic NURBS surface, the surface is mathematically interpolated between the control points, resulting in a smooth shape with relatively few control points. The accuracy of NURBS makes them ideal for smooth, manufactured shapes like car and aeroplane bodies. One limitation of surfaces is that they are always four-sided.
A subdivision surface created from a cube.
NURBS surfaces allow for smooth geometry with relatively few control points.
Curves
In Softimage, curves are one-dimensional NURBS of linear or cubic degree. Cubic curves with Bézier knots can be manipulated as if they were Bézier curves. Curves have points, but they are not renderable because they have no thickness. Nevertheless, they have many uses, such as serving as the basis for constructing polygon meshes and surfaces, acting as paths for objects to move along, controlling deformations like deform by curve and deform by spine, and so on.
A simple cubic NURBS curve.
Particles
Particles are disconnected points in a point cloud. They are often emitted in simulations to create a variety of effects, such as fire, water, and smoke. In Softimage, point clouds are controlled by ICE trees. See ICE Particles on page 271.
Lattices
Lattices are a hybrid between geometric objects and control objects. Although they have points, they do not render and are used only to deform other geometric objects.

Hair
Hair objects let you use guide hairs to control a full head of render hairs. You can style the hairs manually as well as apply a dynamic simulation.
Density
Density refers to the number of points on an object. Part of the art of modeling is controlling the balance of density. Generally speaking, you need more density in areas where an object has high detail or needs to deform smoothly. However, too much density means that an object will be unnecessarily slow to load, update, and render.
Normals
On polygon meshes and surfaces, the control points form bounded areas. Normals are vectors perpendicular to these closed areas on the surface, and they indicate the visible side of the object and how its surface is oriented. Normals are used to compute shading between surface triangles. Normals are represented by thin blue lines. To display or hide them, click the eye icon (Show menu) of a 3D view and choose Normals.
When normals are oriented in the wrong direction, they cause modeling or rendering problems. You can invert them using Modify > Surface > Invert Normals or Modify > Poly. Mesh > Invert Normals on the Model toolbar. If an object was generated from curves, you can also invert its normals by inverting one or more of its generator curves with Modify > Curve > Inverse.
Normals should point toward the camera.
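Under the hood, a face normal is simply the cross product of two edge vectors, and inverting normals flips the sign of that vector. A minimal Python sketch of the idea (illustrative only; the function names are hypothetical, not Softimage commands):

```python
# Illustrative sketch: a face normal is the cross product of two edge
# vectors; inverting the normal just flips its sign.

def face_normal(a, b, c):
    """Unnormalized normal of triangle a-b-c (counterclockwise winding)."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def invert(n):
    return (-n[0], -n[1], -n[2])

n = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))   # counterclockwise in XY
# n points along +Z; reversing the winding order, or inverting,
# makes it point along -Z, i.e. away from a camera looking down +Z.
```

This is also why drawing generator curves counterclockwise (mentioned later in the Curves section) produces correctly oriented normals: the winding order determines the cross-product direction.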
Context Menus
Many modeling commands are available from context menus. The context menu appears when you Alt+right-click in the 3D views (Ctrl+Alt+right-click on Linux). On Windows, you can also press the context-menu key (next to the right Ctrl key on some keyboards).

If you click a selected object, the menu items apply to all selected objects. If you click an unselected object, the menu items apply only to that object. When components are selected, you can right-click anywhere on the object that owns the selected components; the items on the context menu apply to the selected components. If you click over an empty area of a 3D view, the menu items apply to the view itself.
Model Toolbar
You'll find the Model toolbar at the far left of the screen. These commands are also available from the main menu.
Get commands: Create generic elements, including primitive objects, cameras, and lights (also available on the Animate, Render, and Simulate toolbars).

Create commands: Draw new objects or generate them from existing ones.

To display the Model toolbar, click the toolbar title and choose Model. If the Palette or Paint panel is currently displayed, first click the Toolbar icon or press Ctrl+1.
Primitives

Primitives are basic shapes like cubes, grids, and spheres. You can add them to a scene and then modify them as you wish. For example, you can start with a sphere and move points to create a head. You can then attach eyeballs and ears to the head and put the whole head on a model of a character.

There are several different primitive shapes for each geometry type. Each primitive shape has parameters that are particular to it; for example, a sphere has a radius that you can specify, a cube has a length, a cylinder has both height and radius, and so on. There are also several parameters that are common to all or to several primitive shapes: Subdivisions, Start and End Angles, and Close End.

Getting Primitives
You add a primitive object to the scene by choosing an option from the Get > Primitive menu on any of the toolbars at the left of the main window.
1. Choose Get > Primitive.
2. Choose an item from the submenus:
- Curve displays a submenu from which you can choose an available NURBS curve shape.
- Polygon Mesh displays a submenu from which you can choose an available polygon mesh shape.
- Surface displays a submenu from which you can choose an available NURBS surface shape.
3. Set the parameters as desired. The geometric primitives (curves, polygon meshes, and surfaces) have certain typical controls:
- The shape-specific page contains the basic characteristics of the shape. Each shape has different characteristics; for example, a sphere has one radius and a torus has two.
- The Geometry page controls how the implicit shape is subdivided when converted into a surface. More subdivisions yield more points, resulting in greater detail but heavier geometry.
Text
You can create text in Softimage, as well as import it from RTF (rich text format) files. Text is not a type of geometric object in Softimage; instead, text information is immediately converted to curves. After that, the curves can optionally be converted to planar or extruded polygon meshes.
Creating Text Choose one of the following commands from the Model toolbar: - Create > Text > Curves creates a Text primitive and converts it to a curve object. - Create > Text > Planar Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0. The curve object is automatically hidden.
- Create > Text > Solid Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0.5 by default. Once again, the curve object is automatically hidden. In each case, a property editor with the following pages is displayed:
The property editor's pages let you enter text and font properties, convert the text to curves, and create a surface or polygon mesh from the curves.
The commands and the general procedures on these two menus are the same; the only difference is the type of object that is created.
1. Select the first input curve, then add the remaining input curves (if any) to the selection. Different commands require different numbers of input curves. For example, Revolution Around Axis requires only one curve, while Loft allows for any number of profile curves to define the cross-section. You are not limited to curve objects: you can also select curves on surfaces, including any combination of isolines, knot curves, boundaries, surface curves, and trim curves. For example, you can create a loft surface that joins two surface boundaries while passing through other curves.
2. Choose one of the commands from the first group in the Create > Surf. Mesh or Create > Poly. Mesh menu on the Model toolbar.
3. In the property editor that opens, adjust the parameters as desired. For more information, refer to the Softimage Reference by clicking the ? in the property editor.
Operator Stack
The operator stack (also known as the modifier stack or construction history) is fundamental to modeling in Softimage. Every time you perform a modeling operation, such as modifying the topology or applying a deformation, an operator is added to the stack. Operators propagate their effects upward through the stack, with the output of one operator being the input of the next. At any time, you can go back and modify or delete operators in the stack.
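The "output of one operator is the input of the next" behavior can be sketched as a simple pipeline that is re-evaluated whenever any operator in it changes. This is a conceptual Python sketch, not the Softimage API; the operator names and signatures are hypothetical:

```python
# Illustrative sketch: an operator stack where each operator's output
# feeds the next, and any operator can be edited later.

def push(points, amplitude):
    # a stand-in deformation: move each point along +Y
    return [(x, y + amplitude, z) for x, y, z in points]

def scale(points, factor):
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

# Each stack entry: (display name, operator function, parameter value).
stack = [("Push Op", push, 1.0), ("Scale Op", scale, 2.0)]

def evaluate(base_points, stack):
    pts = base_points
    for _name, op, value in stack:
        pts = op(pts, value)     # output of one operator feeds the next
    return pts

base = [(1.0, 0.0, 0.0)]
before = evaluate(base, stack)             # push by 1, then scale by 2

# Go back and change the Push amplitude; re-evaluating the stack
# propagates the change upward through every later operator.
stack[0] = ("Push Op", push, 3.0)
after = evaluate(base, stack)
```

Deleting an operator is just removing its entry and re-evaluating, which is why the caveat below about editing earlier topology operators matters: later operators that stored component indices may receive different inputs.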
Click the name to select the operator. Then you can press Enter to open the editor, or press Delete to remove the operator.
Example of extruding a profile curve (1) along a guide curve (2).
For example, you can: Change the size of the grid in its Geometry node. Change the angle, offset, and axis of the twist in Twist Op. Change the random displacement parameters in Randomize Op.
To quickly open the last operator in the selected object's stack, press Ctrl+End or choose Edit > Properties > Last Operator in Stack. If you modify specific components and then go back earlier in the stack and change the number of subdivisions, you'll probably get undesirable results because the indices of the affected points have changed.
Here is a quick overview of the workflow for using construction modes:
1. Set the current construction mode using the selector on the main menu bar.
2. Continue modeling objects by applying new operators. New deformations (operators that only change the positions of points) are applied at the top of the current region, and new topology modifiers (operators that change the number of components) are always applied at the top of the Modeling region. If you apply a deformation in the wrong region, you can move it by dragging and dropping in the explorer.
3. At any time as you work, you can display the final result (the result of all operators in all regions) or just the current mode (the result of all operators in the current region and those below it) by selecting an option from the Construction Mode Display submenu of the Display Mode menu at the top right of a viewport:
- Result (top) always shows the final result of all operators, no matter which construction mode is current.
- Sync with construction mode shows the result of the operators in the current construction region and below.
The construction regions, from the top of the stack down:
- Secondary Shape Modeling: Define shapes on top of envelopes, e.g., muscle bulges.
- Animation: Apply envelopes or other animated deformations.
- Shape Modeling: Define shapes for animation.
- Modeling: Create the basic shape and topology of an object. Use Freeze M to freeze this region.
You can even have different displays in different views so, for example, you can see and move points in one view in Modeling mode while you see the results after enveloping and other deformations in another view.
Freezing removes any animation on the modeling operators (such as the angle of a Twist deformation). The values at the current frame are used. For hair objects, the Hair Generator and Hair Dynamics operators are never removed.
Modeling Relations
When you generate an object from other objects, a modeling relation is established. For example, if you create a surface by extruding one curve along another curve, the resulting surface is linked to its generator curves. If you modify the curves, the surface updates automatically. The modeling relation is sometimes called construction history in other software.

You can modify the generated object in any way you like, for example, by moving points or applying a deformation. When you modify the generators, the generated object is updated while any modifications you have made to it are preserved.

If you delete the input objects, the generated object is removed as well. To avoid this, freeze the generated object, or at least the generator operator, before deleting the inputs. If you use the Delete button in the Inputs section of the generator's property editor, the generator is automatically frozen first.

You can display the modeling relations:
- In a 3D view, click the eye icon (Show menu) and make sure that Relations is on.
- In a schematic view, make sure that Show > Operator Links is on.

If the selected object has a modeling relation, it is linked to its input objects by lines. A label on the line identifies the type of relation (such as wave or revolution) and the name of the input object. You can click the line to select the corresponding operator.
Modeling relation: the road was created by extruding a cross-section along a guide. When the original guide was deformed into a loop, the road was updated automatically.
Manipulating Components
Tweak Component is the main tool for moving components. It allows you to translate, rotate, and scale points, polygons, and edges. You can use it in two ways:
- Click and drag components for fast, uninterrupted interaction.
- Select a component and then use the manipulator for more controlled interaction.
To use the Tweak Component tool
1. Select a geometric object. 2. Activate the Tweak Component tool by pressing m or choosing Modify > Component > Tweak Component Tool from the Model toolbar. Note that if a curve is selected, then pressing m activates the Direct Manipulation tool instead. However, you can still use Tweak Component with curves by choosing it from the toolbar menu. 3. Move the mouse pointer over the object in any geometry view. As the pointer moves, the component under the pointer is highlighted. The Tweak Component tool will not highlight backfacing components, or components that are occluded by parts of the same object. When there are multiple types of components within the picking radius, priority is given first to points, then to edges, and finally to polygons. 4. Do one of the following: - Click+drag to perform a simple transformation on the highlighted component. If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel. For example, to translate along a points normal, activate Local and the Y axis only.
Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.
- Click and release the mouse button to select the highlighted component. A manipulator appears (unless you've toggled it off). You can use the manipulator to transform the selection, or if you prefer, you can first modify the selection, change the pivot, and set other options.
The Tweak Component tool uses the Ctrl, Shift, and Alt modifier keys with the left and middle mouse buttons to perform different functions; look at the mouse/status line at the bottom of the Softimage window for brief descriptions, or read the rest of this section for the details. The right mouse button opens a context menu.
5. The Tweak Component tool remains active, so you can repeat steps 3 and 4 to manipulate other components. When you have finished, deactivate the tool by pressing Esc or activating a different tool.
Ref, or reference, mode lets you transform elements using another component or object as the reference frame. See Setting the Pivot on page 98. Plane mode is similar to Ref. It uses the same axes as Ref but the object center as the pivot.
Activating Axes
You can activate or deactivate axes on the Transform panel:
- Click an axis icon to activate it and deactivate the others.
- Shift+click an axis icon to activate it without affecting the others.
- Ctrl+click an axis icon to toggle it.
- Click the All Axes icon to activate all three axes.
- Ctrl+click the All Axes icon to toggle all three axes.

Alternatively, if the Tweak manipulator is displayed, you can activate a single axis by double-clicking on it. Double-click on the same axis again to reactivate all axes, or on a different one to activate it instead.

The mouse pointer updates to reflect the current action. You can also press Tab to cycle through the three actions, or Shift+Tab to cycle in reverse order. To activate the standard Translate, Rotate, or Scale tools, you must either deactivate the Tweak Component tool before pressing v, c, or x, or use the t, r, or s buttons on the Transform panel.
Selecting Components
The Tweak Component tool lets you select components in a similar way to the standard selection tools, but there are some differences.

Selecting, Deselecting, and Extending the Selection
Use the following keyboard and mouse combinations for selection:
- Click a component to select it.
- Shift+click a component to add it to the selection.
- Shift+middle-click to toggle-select a component.
- Ctrl+Shift+click to deselect a component.
- To quickly deselect all components, click anywhere outside the object.

Note that you can only multi-select components of the same type. You cannot select a heterogeneous collection of points, edges, and polygons.

Selecting Loops and Ranges
Use the Alt key to select loops or ranges of components.
To select loops or ranges of components
1. Click to select the first, or anchor, component.
2. Do one of the following:
- Alt+click on a second component to select all components on a path between the two components.
- Alt+middle-click on a second component to select all components in the loop that contains both components.
3. To select additional loops or ranges, use Shift+click to specify a new anchor and then Alt+Shift+click for a new range or Alt+Shift+middle-click for a new loop.

Note that for edge loops, the direction is implied, so you can simply Alt+middle-click on an edge to select the loop and then Alt+Shift+middle-click to select additional loops. However, to select parallel edge loops, you still need to specify two components as described above.

Selecting by Type
The Tweak Component tool allows you to manipulate points, edges, and polygons, but you can limit it to a particular type of component if you desire. Use the context menu to activate Tweak All, Points, Edges, Polygons, or Points + Edges.
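A range selection like the one in step 2 can be thought of as finding a path of components between the anchor and the second pick. One common way to find such a path is breadth-first search over the mesh connectivity; this is an illustrative Python sketch under that assumption, not Softimage's actual selection algorithm:

```python
# Illustrative sketch: a "range" as a shortest path of components
# between the anchor and the second pick, found by breadth-first search.
from collections import deque

def component_path(edges, start, goal):
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    queue, came_from = deque([start]), {start: None}
    while queue:
        v = queue.popleft()
        if v == goal:
            break
        for n in neighbors.get(v, []):
            if n not in came_from:
                came_from[n] = v
                queue.append(n)
    # walk back from the goal to recover the path
    path, v = [], goal
    while v is not None:
        path.append(v)
        v = came_from[v]
    return path[::-1]

# A 2x3 grid of points, numbered 0..5 row by row.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
path = component_path(edges, 0, 5)
```

On this small grid the search walks along the top row and down the last column, giving the range [0, 1, 2, 5].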
Sliding Components
You can slide components with the Tweak Component tool. This helps to preserve the contours of objects as you tweak them. Sliding an edge moves its endpoints along the adjacent edges by an equal percentage. Sliding a point or a polygon clamps the associated points to the nearest location on the surface of the mesh, as if they had been shrinkwrapped to the original untweaked object. Sliding works only on polygon mesh components.
Effect of sliding, with proportional modeling on.
While the Tweak Component tool is active, do one of the following:
- Press j. Press and release the key to toggle sliding on or off (sticky mode), or press and hold it to temporarily override the current behavior (supra mode).
- Click the on-screen Slide Components icon at the bottom of the view.

To activate proportional modeling, click the Prop button on the Transform panel. Components that are affected by the proportional falloff are highlighted, and the Distance Limit is displayed as a circle. You can change the Distance Limit interactively when proportional modeling is active by pressing and holding r while dragging the mouse left or right. You can change the Falloff (Bias) profile by pressing and holding Shift+R while dragging the mouse. To change other proportional settings, right-click on Prop.
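The edge-slide behavior described above, where both endpoints move along their adjacent edges by an equal percentage, reduces to a pair of linear interpolations. A minimal illustrative sketch (hypothetical function names, not Softimage's implementation):

```python
# Illustrative sketch: sliding an edge moves each endpoint along its
# adjacent edge by the same fraction, preserving the surface contour.

def lerp(a, b, t):
    """Linear interpolation from point a toward point b by fraction t."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def slide_edge(p0, p1, adj0, adj1, t):
    """Move endpoints p0, p1 toward adj0, adj1 by an equal fraction t."""
    return lerp(p0, adj0, t), lerp(p1, adj1, t)

# An edge (0,0,0)-(1,0,0) sliding halfway toward the next edge loop
# at (0,1,0)-(1,1,0):
new0, new1 = slide_edge((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), 0.5)
```

Because both endpoints use the same fraction t, the slid edge stays parallel to its original orientation whenever the adjacent edges are parallel.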
Snapping
You can use the Ctrl key to snap while using the Tweak Component tool:
- Press Ctrl to toggle snapping to targets on or off (depending on its current setting on the Snap panel) while translating.
- Press Ctrl to snap by increments while scaling.

For more information about snapping options, see Snapping on page 72.
Welding Points

You can interactively weld pairs of points on polygon meshes while using the Tweak Component tool. Welding merges points into a single vertex.

To weld points
1. While the Tweak Component tool is active, toggle Weld Points on by doing one of the following:
- Press l. Press and release the key to toggle welding on or off (sticky mode), or press and hold it to temporarily override the current behavior (supra mode).
- Click the on-screen Weld Points icon at the bottom of the view.
- Right-click and choose Weld Points.
2. Click and drag a point. As you move the mouse pointer, the point snaps to points within the region.
3. Release the mouse button over the point you want to weld to. Note that interactive welding uses the same snapping region size as the Snap tool. You can modify the region size using the Snap menu.
4. Repeat steps 2 and 3 to weld more points, if desired. When you have finished welding, toggle Weld Points off.
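Conceptually, welding removes one point and remaps every polygon index that referred to it. An illustrative Python sketch of that bookkeeping (hypothetical function, not Softimage's implementation):

```python
# Illustrative sketch: welding merges point src into point dst and
# remaps polygon indices that referred to the removed point.

def weld(points, polygons, src, dst):
    remap = {i: i for i in range(len(points))}
    remap[src] = dst
    new_points = [p for i, p in enumerate(points) if i != src]

    def fix(i):
        i = remap[i]
        # indices above the removed point shift down by one
        return i - 1 if i > src else i

    new_polys = [[fix(i) for i in poly] for poly in polygons]
    return new_points, new_polys

points = [(0, 0, 0), (1, 0, 0), (1, 0, 0.001), (0, 1, 0)]
polys = [[0, 1, 3], [1, 2, 3]]

# Weld the near-coincident point 2 into point 1.
points, polys = weld(points, polys, src=2, dst=1)
# Both triangles now share vertex 1; the second one becomes degenerate
# (a real weld would also clean up such collapsed polygons).
```

The remap-then-shift step is the essential part: after a weld, every component index downstream of the removed point changes, which is the same index-shifting issue noted earlier for editing low in the operator stack.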
Deformations
Deformations are operators that change the shape of geometric objects. Softimage provides a large variety of deformation types, available from the Modify > Deform menu of the Model and Simulate toolbars as well as the Deform > Deform menu of the Animate toolbar. Some deformations, like Bend and Twist, are very simple. Others, like Lattice and Curve, use additional objects to control the effect.

Deformations can be used either as modeling tools or animation tools. Depending on the type of deformation, you can animate the deformation's own parameters, such as the amplitude of a Push, or the properties of a controlling object, such as the center of a Wave.

Examples of Deformations

Here are just some examples of the many types of deformation and their possible uses: a Lattice deformation, a Wave deformation (circular or planar), and deformation by curve.
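As an example of how simple a deformation can be, a Twist rotates each point around an axis by an angle proportional to its position along that axis. This is an illustrative Python sketch of the idea, not Softimage's Twist operator:

```python
# Illustrative sketch: a Twist-style deformation rotates each point
# around the Y axis by an angle proportional to its height, leaving
# the topology untouched.
import math

def twist(points, angle_per_unit):
    out = []
    for x, y, z in points:
        a = math.radians(angle_per_unit * y)   # more height, more twist
        out.append((x * math.cos(a) - z * math.sin(a),
                    y,
                    x * math.sin(a) + z * math.cos(a)))
    return out

pts = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
twisted = twist(pts, 90.0)   # the point at y=1 rotates 90 degrees
```

Because the operator only maps point positions to new positions, its parameter (the angle here) can be edited or animated later without invalidating anything else on the object, which is exactly why deformations work as both modeling and animation tools.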
Muting Deformations
All deformations can be muted, which temporarily disables their effect. To mute a deformation, activate Mute in its property editor. Alternatively, right-click on its operator in an explorer and choose Mute.
Section 6
Curves
Softimage provides a full set of tools for creating and editing curves in 3D space. Although they can't be rendered by themselves, curves form the basis for many modeling and animation techniques.
About Curves
In Softimage, you can use curves:
- To build objects, for example, by revolving, extruding, or using Curves to Mesh.
- To deform objects, for example, using curve or spine deformations.
- As paths and trajectories for animation.

Curves are linear (degree 1) or cubic (degree 3) NURBS (non-uniform rational B-splines). NURBS are a class of curves that computers can easily manipulate, allowing for a great deal of flexibility in modeling.
Drawing Curves
Softimage has tools and commands that let you draw and manipulate curves in a variety of ways. In Softimage, you can draw and manipulate two types of curve: linear and cubic. Linear curves are composed of straight segments, and cubic curves are composed of curved segments.
Curve Components
Curves have many components. You can display these components using the options on a viewports Show menu (eye icon) and select them using the filters on the Select panel.
Knots lie on the curve, and hulls join the control points.

On a cubic curve, each knot can have a multiplicity of 1, 2, or 3. This value refers to the number of control points associated with the knot. In general, knots with higher multiplicity are less smooth but provide more control over the trace of the curve. A knot with multiplicity 3 is like a Bézier point, with one control point at the position of the knot and the other two control points acting as the tangent handles.

The Tweak Curve tool allows you to manipulate these knots in a Bézier-like manner; see Manipulating Curve Components on page 107. Whether the back and forward tangents remain aligned depends on how you manipulate them; it is not a property of the knot itself.
You can draw cubic or linear curves by clicking to place control points or to place knots. Use one of the following commands from the Create > Curve menu of the Model or Animate toolbar:
- Draw Cubic by CVs allows you to place control points (also known as control vertices or CVs). The curve does not pass through the locations you click but is a weighted interpolation between the control points. As you add more points, the existing knot positions may change but the point positions do not.
- Draw Cubic by Bézier-Knot Points allows you to place knots of multiplicity 3. The curve passes through the points you click. As you add more knots, the positions of the control points are automatically adjusted to ensure maximum smoothness of the curve as it passes through the existing knot positions.
- Draw Cubic by Knot Points allows you to place knots of multiplicity 1. Again, the curve always passes through the locations you click, and the positions of the control points are automatically adjusted as you add more knots.
- Draw Linear allows you to draw lines of connected straight segments (sometimes called polylines). The straight segments meet at the locations you click.

To add points or knots to an existing curve, use the corresponding commands on the Modify > Curve menu. To remove points or knots, select them and press Delete.

Bézier knots also allow you to create straight segments by rotating the tangents to point at adjacent knots, so that four control points are lined up in a row. Broken tangents create a sharp corner; four control points create a straight segment when they are lined up. Again, whether the control points remain lined up depends on how you manipulate the adjacent knots; it is not a property of the segment. See Drawing a Combination of Linear and Curved Segments on page 106.

The choice between linear, cubic Bézier, and cubic non-Bézier drawing tools depends on the situation. When creating profiles for modeling, linear curves give a good sense of the final result. For paths, you'll want cubic curves; non-Bézier curves are smoother, but you may find Bézier curves easier to control. Bézier curves also give you the ability to have sharp corners, and to mix curved and straight segments. The choice between placing control points or placing knots to draw cubic non-Bézier curves is simply a matter of personal preference.

While drawing a curve:
- To add a point at the end of the curve, use the left mouse button.
- To add a point between two existing points, use the middle mouse button.
- To add a point before the first point, first right-click and choose LMB = Add at Start and then use the left mouse button. To return to adding points at the end of the curve, first right-click and choose LMB = Add at End.

Other useful commands are available on the context menu when you right-click: Open/Close, Invert, Start New Curve, and, of course, Exit Tool. Before you release the mouse button, you can drag the mouse to adjust the point's location. Snapping can also be very useful for controlling the position of points and knots. While drawing, you can move any point or knot by pressing and holding m while dragging to activate the Tweak Curve tool in supra mode.
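The "weighted interpolation between the control points" used when drawing by CVs can be made concrete with one uniform cubic B-spline segment. This sketch is illustrative only (unweighted basis, uniform knots): it is not the general NURBS evaluation Softimage performs, but it shows why the curve stays inside the control hull rather than passing through the CVs.

```python
# Illustrative sketch: one uniform cubic B-spline segment evaluated
# from four control points. The basis weights always sum to 1, so the
# result is a weighted average that lies inside the control hull.

def bspline_point(p0, p1, p2, p3, t):
    """Point on the segment for t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

cvs = [(0, 0), (1, 2), (2, 2), (3, 0)]
mid = bspline_point(*cvs, 0.5)
# mid lies between the inner CVs, below their y=2 height:
# the curve interpolates the hull, not the control points themselves.
```

By contrast, a knot of multiplicity 3 pins the curve to a control point, which is exactly the Bézier-knot behavior described above.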
Basics 105
Section 6 Curves
If you will be using curves as profiles for modeling, you should draw them in a counterclockwise direction. This ensures that the normals of any surface or polygon mesh you create from the curves will be oriented correctly. If you will be using curves as paths for animation or extruding, you should draw them from beginning to end. Otherwise, you may need to invert the curves or generated objects later.
Drawing a Combination of Linear and Curved Segments
Although Softimage does not support having linear and cubic NURBS segments in the same subcurve, you can use Bézier knots to obtain straight segments on a cubic curve:
- If you have already begun drawing a linear curve, make it cubic using Modify > Curve > Raise Degree and then use Modify > Curve > Add Point Tool by Bézier-Knot Points to draw curved sections. Press Shift while adding knots to preserve the existing trace if you want the last-drawn segment to remain straight.
- If you have already begun drawing a cubic curve, place the knots where you want them and then straighten the desired segments as described in Creating Straight Segments on page 109.
Straight segments are not inherently linear. Whether they remain straight depends on how you manipulate them. Using the Tweak Curve tool to move a knot preserves the linearity, but it will break if you move a tangent or use another tool.
Setting Knot Multiplicity
You can change the multiplicity of a knot to suit your needs. For example, reducing the multiplicity makes a curve smoother, but increasing the multiplicity to 3 allows you to use Bézier controls and make sharp angles.
1. Select one or more knots on a cubic curve. To affect all knots on one or more curves, select the curve objects instead.
2. Choose one of the following commands from the Modify > Curve menu of the Model toolbar:
- Make Knots Bezier sets the multiplicity of the selected knots to 3.
- Make Knots Non-Bezier sets the multiplicity of the selected knots to 1.
- Set Knots Multiplicity opens the Set Crv Knot Multiplicity Op property editor, where you can set the multiplicity of the selected knots to 0, 1, 2, or 3. Setting it to 0 is equivalent to removing the knot.
Handle on a Bézier knot
- Drag the round handle to rotate the tangent without changing its length. Use the middle mouse button to rotate one side independently.
- Drag the square handle to move the tangent freely. Use the middle mouse button to drag one side independently. Once the tangent is broken in this way, the handles always move independently until you align them again.
- Shift+drag to scale the tangent length without affecting the slope. Again, use the middle mouse button to scale one side independently.
- If the handles have been broken and you want to maintain their relative angle while rotating them, right-click on the manipulator and choose LMB Binds Broken Tangents.
- Drag the central knot to move it freely. The tangent handles maintain their relative positions to the knot, unless an adjacent segment is linear (four control points lined up). In that case, the tangent handles are automatically adjusted to maintain the linearity of the segment. Use the middle mouse button to drag the central knot while leaving the tangent points in place.
Handle on a non-Bézier point
- Drag the round handle to rotate the tangent without changing its length.
- Drag the knot (or isopoint) to move it freely.
- Drag the square handle to move the tangent freely. Press Shift to scale the tangent length without affecting the slope.
Drag a control point to move it and affect the trace of the curve indirectly.
You can also:
- Click and drag a control point to move it to a new location.
- Select an isopoint by clicking on a curve segment between knots. A manipulator appears at the isopoint. To select an isopoint that is very close to a knot, you can click on the curve farther away and then slide the mouse pointer closer before releasing the button.
- Right-click on a knot or isopoint manipulator to access a context menu containing commands that affect that point, as well as other tool options. Note that if you right-click on a selected knot (or on another part of the curve while knots are selected), the context menu is different (although many of the same items are available on both menus). In this case, the commands apply to all selected knots and not just the one under the mouse pointer.
- Click and drag a rectangle across one or more knots to select them. Use Shift to add to the selection, Ctrl to toggle, or Ctrl+Shift to deselect. This allows you to apply commands to multiple selected knots using the context menu or the Modify > Curve menu.
3. The Tweak Curve tool remains active, so you can repeat step 2 as often as you like. When you have finished, exit the tool by pressing Esc or activating a different tool.
Note that if you move an isopoint that is adjacent to Bézier knots, the tangents will break. If desired, first add a Bézier knot at the isopoint's location to preserve continuity.
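The rectangle-selection modifiers follow standard set semantics: Shift adds, Ctrl toggles, and Ctrl+Shift removes. A minimal sketch with Python sets (an illustration of the behavior described above, not Softimage code):

```python
# Rectangle-selection modifier semantics sketched with Python sets.

def rectangle_select(selection, hits, shift=False, ctrl=False):
    """Return the new selection after dragging a rectangle over `hits`."""
    if ctrl and shift:          # Ctrl+Shift: deselect the hit knots
        return selection - hits
    if ctrl:                    # Ctrl: toggle the hit knots
        return selection ^ hits
    if shift:                   # Shift: add the hit knots
        return selection | hits
    return set(hits)            # no modifier: replace the selection

sel = {1, 2}
sel = rectangle_select(sel, {2, 3}, shift=True)           # {1, 2, 3}
sel = rectangle_select(sel, {1, 4}, ctrl=True)            # {2, 3, 4}
sel = rectangle_select(sel, {3}, shift=True, ctrl=True)   # {2, 4}
print(sel)
```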
Broken tangents
Aligned tangents
Breaking Tangents
To break Bézier tangents and adjust the handles independently of each other, use the middle mouse button while using the Tweak Curve tool.
Aligning Tangents
After tangent handles have been broken, they can be realigned to make the curve smooth again at that point. Select one or more Bézier knots and choose one of the following commands from the Modify > Curve menu on the Model toolbar:
- Align Bezier Handles sets the slopes of both tangents to their average orientation.
- Align Bezier Handles Back to Forward sets the slope of the back tangent equal to the forward tangent.
- Align Bezier Handles Forward to Back sets the slope of the forward tangent equal to the back tangent.
Back and forward are considered in terms of the curve's parameterization from start to end point.
Creating Straight Segments
You can create straight segments on curves using the commands available on the Modify > Curve menu of the Model toolbar, or on the context menu of the Tweak Curve tool. Softimage creates Bézier knots, if necessary, and rotates the appropriate tangents to point at the adjacent knots. Once a straight segment has been created this way, the Tweak Curve tool maintains the linearity when you move the adjacent knots. However, the segment will revert to a curve if you adjust the tangent handles, or if you use a different tool to move control points.
1. Select a curve.
2. Activate the Tweak Curve tool (press m).
3. Move the mouse pointer over an unselected knot.
4. Right-click and choose one of the following commands from the context menu:
- Make Adjacent Knot Segments Linear straightens both segments connected to the knot.
- Make Fwd Knot Segment Linear straightens the forward segment.
- Make Bwd Knot Segment Linear straightens the back segment.
Back and forward are considered in terms of the curve's parameterization from start to end point.
1. Select the knots at both ends of each segment you want to straighten. You must do this individually for each segment you want to straighten, even if segments are consecutive.
2. Choose Modify > Curve > Make Knot Segments Linear from the Model toolbar. The segments between selected knots become straight.
Modifying Curves
The Modify > Curve menu of the Model toolbar contains a variety of commands for modifying curves. Two of the more common modifications are inverting and opening/closing, but there are other operations you can perform as well.
Original curve
Extracted segment
Inverting Curves
Modify > Curve > Invert switches the start and end points of a curve. The result is as if you had drawn the curve clockwise instead of counterclockwise or vice versa. For example, if an object uses the curve as a path, it moves in the opposite direction once you invert the curve. Similarly, if a surface has been built from the curve and its operator stack was not frozen, its normals become reversed.
Original sketched curve New curve fitted onto sketched curve
Blending Curves
Original curves
Filleting Curves
Preparing EPS and AI Files for Import
There are some restrictions on the files you can import. Follow these guidelines:
- Make sure the file contains only curves.
- Convert text and other elements to outlines.
Intersecting curves Fillet between them
Creating Curves from Animation If you have animated the translation of an object, you can use Tools > Plot > Curve from the Animate toolbar to plot the motion of its center and generate a curve. For example, this can be used to create a trajectory curve. You can also plot the movement of a selected point or cluster.
Section 7
Box Modeling
Box modeling starts with a primitive like a cube; you then add subdivisions and shape the object by deforming, adding edges, extruding, and so on.
Polygons
A polygon is a closed 2D shape formed by straight edges. The edges meet at points called vertices. A polygon always has exactly as many vertices as edges. The simplest polygon is a triangle.
Triangle
Quad
N-gon
Polygon-by-polygon Modeling
With polygon-by-polygon modeling, you draw each polygon directly.
Polygon Meshes
A polygon mesh is a 3D object composed of one or more polygons. Typically these polygons share edges to form a three-dimensional patchwork. However, a single polygon mesh object can also contain discontiguous sections that are not connected by edges. These disconnected polygon islands can be created by drawing them directly or by combining existing polygon meshes.
A polygon mesh sphere
Edges that are not shared represent the boundary of the polygon mesh object and are displayed in light blue if Boundaries and Hard Edges are visible in a 3D view. Polygons are the closed shapes that make up the tiles of the mesh.
Polygon
Edge
Point
Points are the vertices of the polygons. Each point can be shared by many adjacent polygons in the same mesh. Edges are the straight line segments that join two adjacent points. Edges can be shared by no more than two polygons.
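These sharing rules can be checked directly from the polygon data: an edge used by exactly one polygon lies on the mesh boundary, and a valid mesh never uses an edge more than twice. A sketch over generic vertex-loop data (not Softimage's API):

```python
from collections import Counter

# Count how many polygons use each undirected edge.
# Boundary edges are used once; interior edges exactly twice.

def edge_use_counts(polygons):
    counts = Counter()
    for poly in polygons:
        for i, v in enumerate(poly):
            a, b = v, poly[(i + 1) % len(poly)]
            counts[tuple(sorted((a, b)))] += 1  # undirected edge key
    return counts

# Two quads sharing the edge (1, 2):
quads = [[0, 1, 2, 3], [1, 4, 5, 2]]
counts = edge_use_counts(quads)
boundary = sorted(e for e, n in counts.items() if n == 1)
print(boundary)   # every edge except the shared (1, 2)
```

In a viewport, these boundary edges are the ones drawn in light blue when Boundaries and Hard Edges is visible.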
Triangles are always planar because any three points define a plane. However, quadrilaterals and other polygons can become non-planar, particularly as you move vertices around in 3D space. When objects are automatically tessellated before rendering, non-planar polygons are divided into triangles. However, other applications such as game engines may not support non-planar polygons properly.
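The planarity test itself is simple vector math: three points define a plane, so a quad is planar exactly when its fourth point lies in the plane of the first three (the scalar triple product is zero). Illustrative math, not Softimage's internals:

```python
# Planarity check for a quad via the scalar triple product.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def quad_is_planar(p0, p1, p2, p3, tol=1e-9):
    normal = cross(sub(p1, p0), sub(p2, p0))   # plane of the first three points
    return abs(dot(normal, sub(p3, p0))) < tol

flat_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
bent_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)]  # one lifted vertex
print(quad_is_planar(*flat_quad))   # True
print(quad_is_planar(*bent_quad))   # False -> would be split into triangles
```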
Valid Meshes
Softimage has strict rules for valid polygon mesh structures and won't let you create an invalid mesh. Some of the rules are:
- Every point must belong to at least one polygon.
- Every edge must belong to at least one polygon.
- A given point can be used only once in the same polygon.
- All edges of a single polygon must be connected to each other. Among other things, this means that you cannot have a hole in a single polygon. To get a hole in a polygon mesh, you must have at least two polygons.
Edges cannot be shared by more than two polygons. Tri-wings are not supported. To connect three polygons in this way, a double edge is required. Softimage does support one case of non-manifold geometry. A single point can be shared by two otherwise unconnected parts of a single mesh object. If you export geometry from Softimage, remember that such geometry may not be considered valid by other applications.
The illusion of smoothness is created by averaging the normals of adjacent polygons. When normals are averaged in this way, the shading is a smooth gradient along the surface of a polygon. When normals are not averaged, there is an abrupt change of shading at the polygon edges. Automatic discontinuity lets you turn off the averaging of normals for sharper edges, and the Discontinuity Angle lets you specify how sharp edges must be before they appear faceted. If the dihedral angle (angle between normals) of two adjacent polygons is less than the Discontinuity Angle, the normals are averaged; otherwise, they are not.
Dihedral angles: flatter edges have small angles and sharper edges large angles.
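The discontinuity test above can be sketched as a few lines of vector math: compute the angle between two unit face normals and average them only when it falls below the threshold. Generic illustration, not Softimage's shading code:

```python
import math

# Average two face normals only when the dihedral angle (angle between the
# normals) is below the Discontinuity Angle.

def angle_deg(n1, n2):
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))  # unit normals

def shading_normal(n1, n2, discontinuity_angle=60.0):
    if angle_deg(n1, n2) < discontinuity_angle:
        avg = [a + b for a, b in zip(n1, n2)]
        length = math.sqrt(sum(c * c for c in avg))
        return [c / length for c in avg]   # smooth: averaged normal
    return None                            # hard edge: keep faceted normals

flat = shading_normal((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))    # averaged
sharp = shading_normal((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))   # 90 deg: faceted
print(flat, sharp)
```

The default angle of 60.0 here is just an assumption for the sketch; in Softimage you set the value in the geometry approximation properties.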
You can achieve different effects by adjusting these two parameters: if Automatic is on, then the Angle determines the threshold for faceted polygons.
Discontinuity on Selected Edges
Flat edges: normals averaged, smooth shading Sharp edges: normals not averaged, faceted
In addition to setting the geometry approximation for an entire object, you can make selected edges discontinuous by marking them as hard using Modify > Component > Mark Hard Edge/Vertex from the Model toolbar. Hard edges are displayed in dark blue when Boundaries and Hard Edges is checked on a viewport's Show menu (eye icon).
Selected edges marked as hard.
Tessellating
Tessellation is the process of tiling the curves' shapes with polygons. Softimage offers three different tessellation methods:
- Minimum Polygon Count uses the least number of polygons possible but yields irregular polygons.
- Medial Axis creates concentric contour lines along the medial axes (averages between the input boundary curves), morphing from one boundary shape to the next. This method creates mainly quads with some triangles, so it is well-suited for subdivision surfaces.
- Delaunay generates a mesh composed entirely of triangular polygons. This method gives consistent and predictable results; in particular, it will not give different results if the curves are rotated.
Other Options
In addition to controlling the tessellation, there are many other options to control holes, extrusion, beveling, embossing, and so on.
Drawing Polygons
Modify > Poly. Mesh > Add/Edit Polygon Tool is a multi-purpose tool that lets you draw polygons interactively by placing vertices. You can use it to add polygons to an existing mesh, add or remove points on existing polygons, or create a new polygon mesh object.
1. Do one of the following:
- To create a new polygon mesh object, first make sure that no polygon meshes are currently selected. or
- To add polygons to an existing polygon mesh object, select the mesh first. or
- To add or remove points on an existing polygon in an existing polygon mesh object, select that polygon.
2. Choose Modify > Poly. Mesh > Add/Edit Polygon Tool from the Model toolbar or press n.
3. Do one of the following:
- Click in a 3D view to add a point. If necessary, you can adjust the position by moving the mouse pointer before releasing the button. or
- Click an existing point on another polygon in the same mesh to attach the current polygon to it. or
- Click an existing edge of another polygon in the same mesh to attach the current polygon to it. or
- Left-click and drag on a vertex of the current polygon to move it. or
- Middle-click a vertex of the current polygon to remove it.
As you move the mouse pointer, the edges that would be created are outlined in red. To insert the new point between a different pair of vertices of the current polygon, first move the mouse across the edge connecting them.
The direction of the normals is determined by the direction in which you draw the vertices. If the vertices are drawn in a counterclockwise direction, the normals face toward the camera; if drawn clockwise, they face away from the camera. As you draw, red arrows indicate the order of the vertices.
4. When you have finished drawing a polygon, do one of the following:
- To start a new polygon and automatically share an edge with the current one, first move the mouse pointer across the desired edge and then click the middle mouse button. Repeat step 3 as necessary.
or
- To start a new polygon without automatically sharing an edge, click the right mouse button. Repeat step 3 as necessary. or
- When you are finished drawing polygons, exit the Add/Edit Polygon tool by clicking the right mouse button twice in a row, by choosing a different tool, or by pressing Esc.
Subdividing
You can subdivide polygon meshes to add more detail where needed.
Plus
Diamond
Triangles
Splitting Edges
You can split edges interactively using Modify > Poly. Mesh > Split Edge Tool from the Model toolbar. Activate this tool then click an edge to split it. Use the middle mouse button to split parallel edges. Press Ctrl while clicking to bisect edges evenly.
For edges, you can connect the new points and extend the subdivision to a loop of parallel edges (that is, the opposite edges of quad polygons):
Related tools: Add Vertex Tool, Split Polygon Tool, Split Edges (with split control), Dice Polygons, Slice Polygons.
Drawing Edges
Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar to split or cut polygons interactively by drawing new edges. You can use this tool to freely redraw your object's flow lines.
1. Select a polygon mesh object.
2. Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar or press \.
3. Start a new edge by clicking on an existing edge or point. You can also:
- Press Ctrl while clicking or middle-clicking an edge to bisect it evenly.
- Press Shift while clicking or middle-clicking an edge to ensure that the angle between the new edge and the target edge snaps to multiples of the Snap Increments - Rotate value set in your Transform preferences. For example, if Snap Increments - Rotate is 15, then the new edge will snap at 15 degrees, 30 degrees, 45 degrees, and so on. Angles are calculated in screen space.
- Press Ctrl+Shift while clicking or middle-clicking an edge to attach the new edge at a right angle to the target edge. The angle is calculated in object space.
- Press Alt while clicking in the middle of the polygon to add a point and connect it to the nearest edge by a triangle.
If you are trying to attach a new edge to an existing edge or vertex, and the target does not become highlighted when you move the pointer over it, it means that you cannot attach the new edge at that location because it would create an invalid mesh.
You cannot attach the edge to this point. Middle-click to continue drawing edges from the previous point.
4. If desired, click in the interior of a polygon to add a point. You can repeat this step to add as many interior points as you like, creating a polyline, before terminating it.
Click inside a polygon to add an interior point.
6. To continue adding edges starting at a new location, right-click and then repeat steps 2 to 4. To exit the Add Edge tool, press Esc or choose a different tool.
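The Shift angle snapping in the procedure above rounds the drawn angle to the nearest multiple of the Rotate increment. A minimal sketch of that rounding (generic math, not Softimage code):

```python
# Snap an angle to the nearest multiple of the snap increment,
# as with Snap Increments - Rotate in the Transform preferences.

def snap_angle(angle_deg, increment=15.0):
    """Round an angle to the nearest multiple of the snap increment."""
    return round(angle_deg / increment) * increment

print(snap_angle(37.0))        # 30.0
print(snap_angle(38.0))        # 45.0
print(snap_angle(100.0, 10))   # 100.0
```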
Extruding Components
You can extrude polygon mesh components to create local details, such as indentations or protuberances like limbs and tentacles. You can extrude polygons, edges, or points. If you want to adjust other properties, open the Extrude Op property editor in the stack.
2. Use the transform tools or the Tweak Component tool to translate, rotate, and scale the extruded components as desired.
Duplicating Polygons
Duplicating is similar to extruding, but the polygons are not connected to the original geometry. This is useful for building repeating forms like steps or railings. Choose Modify > Polygon Mesh > Duplicate, or check Duplicate Polygons in the Extrude Op property editor.
Dissolving Components
Dissolving removes selected components and then fills in the holes with new polygons.
Before
Original objects
Symmetrizing Polygons
You can model one half of a polygon mesh object and then symmetrize it. This creates new polygons that mirror the geometry on the original side. 1. Model the polygons on one side of the object. In the example below, an ornamental curlicue was added to the hilt of the dagger.
Model one side of the object.
2. Prepare the other side of the object for symmetrization. For example, if you intend to merge the symmetrized portions by welding or bridging, then you may need to create holes for the new polygons to fit and add vertices to aid the merge.
3. Select the polygons to be symmetrized. You can symmetrize the whole object or just a portion.
Select the desired polygons.
4. Choose Modify > Poly. Mesh > Symmetrize Polygons from the Model toolbar.
5. In the Symmetrize Polygon Op property editor, set the parameters as desired, for example, to specify the plane of symmetry.
Cleaning Up Meshes
You can filter polygon mesh objects to clean them up. Filtering removes components that match certain criteria, for example, small components that represent insignificant detail.
When you filter polygons by area, the smallest polygons are removed. This eliminates small, noisy details.
Reducing Polygons
The Modify > Poly. Mesh > Polygon Reduction command on the Model toolbar lightens a heavy object by reducing the number of polygons, while still retaining a useful fidelity to the shape of the original high-resolution version. For example, you can use polygon reduction to meet maximum polygon counts for game content, or to reduce file size and rendering times by simplifying background objects. Polygon reduction also allows you to generate several versions of an object at different levels of detail (LODs).
Polygon reduction works by collapsing edges into points. Edges are chosen according to their energy, which is a metric based on their length, orientation, and other criteria. In addition, you have options to control the extent to which certain features, such as quad polygons, are preserved by the process.
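The collapse loop can be sketched greedily: repeatedly pick the lowest-energy edge and merge its endpoints. In the sketch below, energy is simplified to edge length only; Softimage's real metric also weighs orientation and other criteria, so this is an illustration, not the actual reduction algorithm.

```python
import math

# Greedy edge-collapse sketch: repeatedly collapse the shortest edge
# (a stand-in for the full "energy" metric) until a target point count.

def collapse_shortest(points, edges, target_point_count):
    pts = {i: list(p) for i, p in enumerate(points)}
    es = {tuple(sorted(e)) for e in edges}
    while len(pts) > target_point_count and es:
        a, b = min(es, key=lambda e: math.dist(pts[e[0]], pts[e[1]]))
        mid = [(x + y) / 2 for x, y in zip(pts[a], pts[b])]
        pts[a] = mid                                   # merge b into a
        pts.pop(b)
        es = {tuple(sorted(a if v == b else v for v in e))
              for e in es if set(e) != {a, b}}
        es = {e for e in es if e[0] != e[1]}           # drop degenerate edges
    return pts, es

points = [(0, 0), (0.1, 0), (1, 0), (1, 1)]
pts, es = collapse_shortest(points, [(0, 1), (1, 2), (2, 3)], 3)
print(len(pts))   # 3: the short edge (0, 1) was collapsed first
```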
Filtering Edges
Modify > Poly. Mesh > Filter Edges on the Model toolbar removes edges by collapsing them based on either their length or angle. In both cases, you can protect boundary edges using Keep Borders Edges Intact. Edge filtering is especially useful for reducing the triangulation on polygon meshes generated by Boolean operations.
Filtering Points
Modify > Poly. Mesh > Filter Points on the Model toolbar welds together vertices that are within a specified distance of each other. Among other things, this can be very useful for fixing disconnected polygons in exploded meshes, which can occur when meshes are exported from some other programs.
- Average position welds each clump of points in the selection together at their average position.
- Selected point welds each clump of points in the selection together at the position of the point that is nearest to the average position.
- Unselected point welds each selected point to an unselected point on the same object.
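The Average position mode can be sketched as clustering points that fall within the threshold distance and replacing each clump by its average. A naive O(n^2) illustration, not Softimage's implementation:

```python
import math

# Weld points closer than `distance`: cluster them, then replace each
# clump with its average position.

def weld_points(points, distance):
    clumps = []
    for p in points:
        for clump in clumps:
            if any(math.dist(p, q) <= distance for q in clump):
                clump.append(p)
                break
        else:
            clumps.append([p])
    return [tuple(sum(c) / len(clump) for c in zip(*clump)) for clump in clumps]

pts = [(0.0, 0.0), (0.05, 0.0), (2.0, 0.0)]
print(weld_points(pts, 0.1))   # the two near points weld to (0.025, 0.0)
```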
Filtering Polygons
Modify > Poly. Mesh > Filter Polygons removes polygons based on their area or their dihedral angles: When you filter polygons by angle, adjacent polygons are merged together if their dihedral angle is less than the threshold you specify. Small angles correspond to flat areas, so this method preserves sharp detail.
Polygon Normals
Shading normals are vectors that are perpendicular to the surface of polygons at each corner. They control how polygon meshes are shaded. If the normals are averaged across an edge or corner, the shading is smooth. If they are not averaged, the shading is faceted and the edge is considered hard. To display normals on selected objects, click on a view's Show menu (eye icon) and choose Normals.
In Softimage, polygon meshes can have auto normals or user normals:
- Auto normals are calculated automatically based on a mesh's geometry.
- User normals are custom-defined.
On a cube with beveled edges, the interpolation of the automatic normals creates a gradation in the shading across the large, flat sides. To create the illusion of a box with rounded corners, you can set user normals so that their interpolation produces the correct shading.
There are two main ways to set user normals:
- Activate Modify > Component > Tweak User Normals Tool on the Model toolbar, and then drag normals interactively in the viewports.
- Select points, polygons, and edges and then use the commands on the Modify > Component > Set User Normals submenu.
Subdivision Surfaces
Subdivision surfaces (sometimes called subdees) allow you to create smooth, high-resolution polygon meshes from lower-resolution ones. They provide the smoothness of NURBS surfaces with the local detail and texturing capabilities of polygon meshes.
Subdivision Rules
Softimage gives you a choice of several subdivision rules (smoothing algorithms): Catmull-Clark, XSI-Doo-Sabin, and linear. In addition, you have the option of using Loop for triangles when using Catmull-Clark or linear. The subdivision rule is set in the Polygon Mesh property editor.
Catmull-Clark
The Catmull-Clark subdivision algorithm produces rounder shapes. The generated polygons are all quadrilateral.
XSI-Doo-Sabin
Catmull-Clark Subdivision
The XSI-Doo-Sabin subdivision algorithm is a variation of the standard Doo-Sabin algorithm. It produces more geometry than Doo-Sabin, but it works better with cluster properties such as texture UVs, vertex colors, and weight maps, as well as with creases.
XSI-Doo-Sabin Subdivision
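The all-quad property of Catmull-Clark is easy to verify by counting: one subdivision step splits every n-sided polygon into n quads, so the output is all quadrilaterals regardless of the input. A counting-only sketch (the actual rule also computes new point positions):

```python
# Catmull-Clark bookkeeping: each n-gon becomes n quads after one step.

def faces_after_step(face_sides):
    """face_sides: list of polygon side counts; returns counts after one step."""
    return [4] * sum(face_sides)

cube = [4] * 6                       # a cube: 6 quads
print(len(faces_after_step(cube)))   # 24 quads after one step

mixed = [3, 4, 5]                    # triangle, quad, pentagon
print(len(faces_after_step(mixed)))  # 12 quads, all four-sided
```

This also shows why polygon counts grow quickly with each subdivision level.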
Linear Subdivision
Linear subdivision does not perform any smoothing, so the object's shape is unchanged. It is useful when you want an object to deform smoothly without rounding its contours.
Linear Subdivision
Creases
Subdivision surfaces typically produce a smooth result because the original vertex positions are averaged during the subdivision process. However, you can still create sharp spikes and creases in subdivision surfaces. This is done by adjusting the hardness value of points or edges on the hull. The harder a component, the more strongly it pulls on the resulting subdivision surface. Use Modify > Component > Mark Hard Edge/Vertex to make components completely hard, or Set Edge/Vertex Crease Value to apply an adjustable value.
Loop Subdivision
With the Catmull-Clark and linear subdivision methods, you have the option of using Loop subdivision for triangles. The Loop method subdivides triangles into smaller triangles rather than into quads, which gives better results when smoothing and shading.
Catmull-Clark with Loop Catmull-Clark
Section 8
About Surfaces
In Softimage, surfaces are NURBS patches. Mathematically, they are an interconnected patchwork of smaller surfaces defined by intersecting NURBS curves. Knot curves (sometimes called isoparams or isoparms) are sets of connected knots along U or V; they are the wires shown in wireframe views. You can select knot curves and use them, for example, to build other surfaces using the Loft operator.
Components of Surfaces
You can display surface components and attributes in the 3D views, as well as select them for various tasks. Points are the control points of the curves that define the surface. Their positions define the shape of the surface.
Points define and control the surface. You can display lines between points.
Isolines are not true components. They are, in fact, arbitrary lines of constant U or V on a surface. You can use the U and V Isoline selection filter to help you pick isolines for lofting and other operations.
NURBS hulls are display lines that join consecutive control points. It can be useful to display them when working with curves and surfaces. Surface knots are the knots of the curves that define the surface; they lie on the surface where the U and V curve segments meet.
Isolines are arbitrary lines on the surface in U or V.
Building Surfaces
Building Surfaces
The commands on the Create > Surf. Mesh menu can be used to build NURBS surfaces in a variety of ways. The first set of commands generates surfaces from curves; see Objects from Curves on page 90 for an overview of the basic procedure. Here are a few examples of some of the other ways you can build surfaces.
Merging Surfaces
Merging two surfaces creates a third surface that spans the originals. You have the option of also selecting an intermediary curve for the merged surface to pass through.
Blending Surfaces
Blending creates a new surface that fills the gap between the selected boundaries on two other surfaces.
Input surfaces
Filleting Intersections
A fillet is a surface that smooths the intersection of two others, like a molding between a wall and a ceiling.
Input surfaces
Resulting blend
Input surfaces
Resulting fillet
Shaded view
Modifying Surfaces
You can modify surfaces in a variety of ways using the commands in the Modify > Surface menu of the Model toolbar, for instance, by adding and removing knot curves. Here are a few examples of some other ways of modifying surfaces.
Inverting Normals
If the normals of a surface are pointing in the wrong direction, you can invert them.
Open
Closed
Inverting a surface
Extending Surfaces
You can extend a surface from the selected boundary to a curve.
Trimming affects the visible portion of the surface. All the underlying points are still there, and you can still affect the surface's shape by moving points in the trimmed area.
NURBS surface
Surface curve
Use Is Boundary to choose whether to trim the inside or the outside. Use Projection Precision to control the precision used to calculate the projection. If the shape of the projected curve is not accurate, increase this value. However, high values take longer to calculate and may slow down your computer. For best performance, set this parameter to the lowest value that gives good results.
Surface Meshes
Surface meshes provide a way to assemble multiple surfaces into a single object that remains seamless under animation and deformation.
1. Create a collection of separate surfaces. These will become the surface mesh's subsurfaces.
Line the surfaces up into a basic configuration. This illustration shows a common configuration for a leg or arm.
Deleting Trims
Deleting a trim allows you to remove a trim operation even after you have frozen the surface's operator stack. Set the selection filter to Trim Curve, select one or more trim curves on the surface, and choose Modify > Surface > Delete Trim from the Model toolbar.
2. Optionally, line up pairs of boundaries by selecting them and choosing Create > Surf Mesh > Snap Boundary from the Model toolbar.
Snap opposite boundaries together to connect the surfaces across the junction.
3. Select all the surfaces and choose Create > Surf Mesh > Assemble. The surfaces are assembled into a single surface mesh. The continuity manager ensures that the continuity is preserved at the seams.
Notice how the assembled surface mesh blends smoothly across the junctions.
4. You can now deform and animate the surface mesh as desired.
If you ever freeze the assembled surface, you will need to reapply the surface continuity manager manually using Create > Surf Mesh > Continuity Manager.
Section 9
Animation
To animate means to make things come alive, and life is always signified by change: growth, movement, dynamism. In Softimage, everything can be animated, and animation is the process of changing things over time. For example, you can make a cat leap on a chair, a camera pan across a scene, a chameleon change color, or a face change shape.
Bringing It to Life
The animation tools in Softimage let you create animation quickly so that you can spend your time editing movements, changing the timing, and trying out different techniques for perfecting the job. Softimage gives you the control and quick feedback you need to produce great animation. Basically, if you want to make something move, Softimage has the tools.
High-level animation means that you are working with animation in a way that is nonlinear (the animation is independent of the timeline) and non-destructive (any modifications do not destroy your original animation data).
You store animation or shapes in sources, then use the animation mixer to edit, mix, and reuse those sources as clips. To use these levels together, you can animate at a low level by keyframing a specific parameter, then store that animation and others into action sources and mix them together in the animation mixer to animate at a high level. This allows you to easily manage complex animation yet retain the ability to work at the most granular level.
Create animation relationships between objects at the lowest (parameter) level. These include constraints, path animation, linked parameters, expressions, and scripted operators.
Keyframed (low-level) animation can be contained in action sources, then brought into the animation mixer as a clip (high level).
Character animation tools offer you control for creating and animating skeletons. You can animate them with forward or inverse kinematics, apply mocap data, add an enveloping model, set up a rig, and fine-tune the skeleton's movements in myriad ways to get just the right motion.
Dynamic simulations let you create realistic motion with natural forces acting on rigid bodies, soft bodies, cloth, hair, and particles (done with ICE). With simulations, you can create animation that could be difficult or time-consuming to achieve with other animation techniques.
You can set up the default frame format and frame rate preferences for your scene using the options in the Output Format preferences property editor (choose File > Preferences). These settings propagate to many other parts of Softimage that depend on timing. Regardless of whether you enter time code or a frame number as the frame format, Softimage internally converts your entry into time code.
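The manual doesn't spell out the internal conversion, but the basic arithmetic for going between frame numbers and an HH:MM:SS:FF time code at a given frame rate can be sketched in plain Python (a minimal sketch assuming a whole-number frame rate and 1-based frame numbers; the function names are illustrative, not Softimage API):

```python
def frame_to_timecode(frame, fps=30):
    """Convert a 1-based frame number to an HH:MM:SS:FF time code string."""
    total = frame - 1              # frame 1 corresponds to time zero
    ff = total % fps               # residual frames within the current second
    seconds = total // fps
    ss = seconds % 60
    minutes = seconds // 60
    mm = minutes % 60
    hh = minutes // 60
    return "%02d:%02d:%02d:%02d" % (hh, mm, ss, ff)

def timecode_to_frame(tc, fps=30):
    """Inverse conversion: HH:MM:SS:FF back to a 1-based frame number."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff + 1
```

For example, at 30 fps, frame 31 is one second in, i.e. time code 00:00:01:00. Drop-frame time code (used for 29.97 fps NTSC) is more involved and is not modeled here.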
The Playback menu displays many playback options, such as setting preferences, opening the flipbook, setting real-time play rates, and setting the current viewport.
- Increment Backward/Forward moves the currently displayed frame backward/forward by predefined increments (the default is 1).
- Start/First Frame displays (resets to) the first frame at the beginning of the timeline.
- End/Last Frame displays the last frame at the end of the timeline.
- Play Backward plays/stops the animation or simulation in the backward direction (to the left on the timeline). Click this icon to play from the last frame on the timeline; click it again to stop playback; middle-click to play from the current frame. Note that you can only play simulations backward if you have cached them.
- Play Forward plays/stops the animation or simulation in the forward direction (to the right on the timeline). Click this icon to play from the first frame on the timeline; click it again to stop playback; middle-click it to play from the current frame.
- Loop repeats the animation or simulation in a continuous loop.
- Audio toggles sound on/off during playback. It is on by default. When the audio is off (muted), the icon appears highlighted.
- All/RT toggles between playing back frame by frame (All) or in real time (RT).
Time range
The time range determines the global range of frames, and the range slider in it lets you play back a smaller range of frames within the global range. If you are working with a very long animation sequence, you can focus on just a subsection of frames, which you can easily change and move along the timeline. You can set the global length by entering frame numbers in the boxes at either end of the time range. The timeline displays which frames can be played, and it is linked to the range slider. The current frame of the animation is indicated by the playback cursor (the vertical red bar), which you can drag to different frames. You can set the scene's length by entering frame numbers in the boxes at either end of the timeline. The controls in the Playback panel below the timeline let you view and play animations, simulations, and audio in different ways.
Previewing Animation
You can capture and cache images from an animation sequence and play them back in a flipbook to help you see the animation in real time. Anything that is shown in the viewport you choose is captured: a render region, a rotoscoped scene with background, or any display mode (wireframe, textured, shaded, etc.). For example, you may want to set the display mode to Hidden Line Removal for a pencil-test effect. You can include audio files to play back with the flipbook, which is especially useful for lip syncing. You can also export flipbooks in a variety of standard formats, such as AVI and QuickTime.

Creating a Flipbook
1. In the viewport whose images you want to capture, set the display options as you like. Then click the camera icon in that viewport and choose Start Capture.
2. In the Capture Viewport dialog box, set the options for the flipbook's file name, image size, format, sequence, padding, and frame rate.
3. View the flipbook in the Softimage flipbook or in the native media player on your computer. You can open the Softimage flipbook by choosing Flipbook from the Playback menu.

Ghosting
Animation ghosting, also known as onion-skinning, lets you display a series of snapshots of animated objects at frames or keyframes behind and/or ahead of the current frame. This lets you visualize an object's motion, helping you improve its timing and flow. You can display an object's geometry, points, centers, trails, and velocity vectors as ghosts. Ghosting works for any object that moves in 3D space, either by having its transformation parameters (scaling, rotation, and translation) animated in any way, by having its geometry changed by shape animation or deformations (including envelopes), or with simulated rigid bodies, soft bodies, or cloth. Ghosting is set per object by selecting the Ghosting option in the object's Visibility property editor. Once this is done, you can set ghosting per scene layer or per group, in their respective property editors.
To see ghosting in a 3D view, such as a viewport, choose the Animation Ghosting command in the Display Mode menu of a 3D view, then set up the ghost display options in the Camera Display property editor.
When you set keys on a parameter's value, a function curve (or fcurve) is created. An fcurve is a graph that represents the changes of a parameter's values over time, as well as how the interpolation between the keys occurs. When you edit an fcurve, you change the animation.
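Conceptually, an fcurve is just a mapping from frame to value defined by its keys and the interpolation between them. A minimal sketch of evaluating such a curve with linear interpolation (plain Python for illustration, not the Softimage API):

```python
def eval_fcurve(keys, frame):
    """Evaluate an fcurve given as a sorted list of (frame, value) keys,
    using linear interpolation between keys. Values are held constant
    before the first key and after the last one (extrapolation ignored)."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # fraction of the way to the next key
            return v0 + t * (v1 - v0)
```

Adding, moving, or deleting a (frame, value) pair changes the animation everywhere between the neighboring keys, which is exactly why editing an fcurve edits the motion.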
Methods of Keying
There are a number of ways in which you can set keys in Softimage, depending on the type of workflow you're used to and the tools you want or need to use for your production. Whichever way you choose, each method results in keyframes being created. There are three main keying workflows from which to choose:
- Keyable parameters on the keying panel
- Character key sets
- Marked parameters (and marking sets)
Before you start setting keys, you need to set a preference that determines the way in which you key: with keyable parameters, with character key sets, or with marked parameters. This preference determines which parameters are keyed when you save a key by pressing K, by clicking the keyframe icon in the Animation panel, or by choosing the Save Key command from the Animation menu. To set the preference, click the Save Key preference button in the Animation panel, then select an option from the menu.
You can set keys for just about anything in Softimage that has a value: this includes an object's transformation, geometry, colors, textures, lighting, and visibility. You can set keys for any animatable parameter in any order and at any time. When you add a new key, Softimage recalculates the interpolation between the previous and next keys. If you set a key for a parameter at a frame that already has a key set for that parameter, the new key overwrites the old one.
1. Set the Save Key preference to Key Marked Parameters.
2. Select the object you want to animate and go to the frame at which you want to set a key.
3. Mark the parameters you want to key. You can mark parameters by clicking them in the marked parameter list (in the lower-right of the interface), a property editor, the explorer, or the keying panel. Marked parameters are highlighted in yellow. Transformation parameters are automatically marked when you activate a transformation tool.
4. Set the marked parameter values for the selected object.
5. Set a key for the marked parameters at this frame.
Keying with Marking Sets
You can also create marking sets, which are similar to character key sets. You can have only one marking set per object at a time. Marking sets make it easy to key in hierarchies because each object within that structure can have its own marking set, such as a marking set of rotation parameters for bones, or a marking set of translation parameters for IK effectors.
To create a marking set, select an object and mark the parameters you want to keep in the set. Then press Ctrl+Shift+M.
To key marking sets, select one or more objects with a marking set. Press Ctrl+M to activate the marking set, then set a key by pressing K. Press Alt+K to set a branch key, which is useful for working with characters and other hierarchies.
Animating Transformations
Animating the transformations (scaling, rotation, and translation) of objects is something you will do frequently. It is one of the most fundamental things to animate in Softimage. You can find transformation parameters in the object's Kinematics node in the explorer. Kinematics in this case refers to movement, not to inverse or forward kinematics as used in skeleton animation.
Within the Kinematics node are the Global Transform and Local Transform nodes, referring to the type of transformation. Within each of the Transform nodes are the Pos (position, also called translation), Ori (orientation, also called rotation), and Scl (scale) folders. Each of the Pos, Ori, and Scl folders contains the X, Y, and Z parameters corresponding to each axis.
Manipulation modes for the current transformation (in this case, translation).
These are the only two manipulation modes that transform in the same way as local animation: they are both relative to the object's parent. Of course, you can always set and animate the values as you like directly in the object's Local Transform or Global Transform property editor.
To have only specific axes (X, Y, or Z) marked, you can rotate in Add mode or translate in Par mode. Or you can choose Transform > Automark Active Transform Axes: then, when you click a transformation's specific axis button (such as Rotation's Y button) on the Transform panel, only that axis is marked, regardless of the current manipulation mode.
Animating Rotations
When you animate rotations in Softimage, you normally use three separate function curves connected to the X, Y, and Z rotation parameters. These three rotation parameters are called Euler angles. Euler interpolation works well when the axis of interpolation coincides with one of the XYZ rotation axes, but is not as good at interpolating arbitrary orientations. Euler angles can also suffer from gimbal lock, the phenomenon of two rotational axes aligning with each other so that they both point in the same direction. To work around this, you can change the order in which the rotation axes are evaluated (by default, it's XYZ), which changes where the gimbal lock occurs. As well, you can convert Euler fcurves to quaternion.
Quaternion interpolation provides smooth interpolation with any sequence of rotations. The XYZ angles are treated as a single unit to determine an object's orientation, so they are not restricted to a particular order of rotation axes. Quaternions interpolate the shortest path between two rotations. You can create quaternion fcurves by setting quaternion keys directly, or by converting Euler fcurves to quaternion using the Animation > Convert commands in the Animation panel. And you can always convert back to Euler fcurves in the same way.
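The "shortest path" behavior of quaternion interpolation can be illustrated with a standalone spherical linear interpolation (slerp) function. This is a generic sketch of the underlying math, not Softimage's implementation; quaternions are represented here as (w, x, y, z) tuples:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z).
    Interpolates along the shortest arc between the two orientations."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                         # q and -q are the same rotation:
        q1 = tuple(-c for c in q1)        # flip one to take the shorter arc
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)                # angle between the two quaternions
    if theta < 1e-6:                      # nearly identical: avoid divide-by-zero
        return q0
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Halfway between the identity and a 90-degree rotation about Z, slerp yields exactly the 45-degree rotation, with no risk of gimbal lock, which is what makes quaternion fcurves attractive for arbitrary orientations.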
Cone rotated 90 degrees in X and Y.
You can modify your animation sequences by editing regions of keys on the tracks with standard operations such as moving, scaling, copying, cutting, and pasting. You can delete keys, shift them left and right, and scale them, all with or without a ripple. Summary tracks help you see the animation for the whole scene or just the selected objects.
To open a dopesheet, open the animation editor (press 0 [zero]), then choose Editor > Dopesheet from its command bar. Or choose it in a viewport, like any other view.
- The Explorer, Lock, and Update buttons apply only to the animation explorer.
- Timeline. Click and drag the red playback cursor in it to scrub through the animation.
- Summary tracks display keys for all objects in the scene or all objects currently displayed in the dopesheet.
- Animation explorer displays the parameters of objects that you select.
- Regions (press Q) let you edit multiple keys, including moving them, scaling them, copying and pasting them, and deactivating animation.
- The keys represent the keyframes of the selected parameters' animation. Each colored block is one frame long. You can edit (move, copy, paste) individual keys on tracks.
- The tracks display and let you manipulate the animation keys. You can expand and collapse tracks to view exactly what you want.
- Command bar contains menu commands and icons to edit fcurves in many different ways.
- Animation explorer displays the parameters of objects that you select.
- Values for the parameter are shown on the graph's Y (vertical) axis.
- Timeline. Time is shown on the graph's X (horizontal) axis. Click and drag the red playback cursor in it to scrub through the animation.
- Selected fcurves are white. When not selected, the curves for X, Y, and Z parameters are red, green, and blue, respectively. You can also change the color of any fcurve you like.
- The keys on the fcurves represent the keyframes of the selected parameters' animation. You must select an fcurve before you can select its keys. Selected keys are red with slope handles. Unselected keys match the color of their fcurve.
- The slope handles (tangents) at each key indicate the rate at which an fcurve's value changes at that key. These handles appear only on keys on fcurves that have spline interpolation.
Editing a Function Curve's Slope
The fcurve's slope determines the rate of change in the animation. By modifying the slope, you change the acceleration or deceleration into or out of a key, making the animation change rapidly or slowly, or even reversing it. You can change the slope of any fcurve that uses spline interpolation by using the two handles (called slope handles) that extend out from a key. By modifying the handles' length and direction, you define the way the curve moves into and out of each key. You can change the length and angle of each handle in unison or individually.
The slope handles are tangent to the curve at their key when Unified Slope Orientation is on (A). This keeps the acceleration and deceleration smooth, but you can also turn off this option to break the slope at a certain point (B). This creates a sudden animation acceleration or deceleration, or a change of direction altogether.
Types of interpolation:
- Spline: By default, fcurves use spline interpolation to calculate intermediate values. The curves ease into and out of each key, resulting in a smooth transition.
- Linear interpolation connects keys by straight line segments. This creates a constant speed with sudden changes at each key.
- Constant interpolation repeats the value of a key until the next one. This creates sudden changes at keys and static positions between keys, such as for animating a cut from one camera to another.
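Spline interpolation with slope handles corresponds to cubic Hermite interpolation between keys: each segment is determined by the two key values and the two tangent slopes. A minimal sketch of the math (the generic Hermite basis, not Softimage's exact evaluator):

```python
def hermite(v0, m0, v1, m1, t):
    """Cubic Hermite interpolation between key values v0 and v1, whose
    outgoing and incoming slopes are m0 and m1 (the slope handles),
    for t in [0, 1] across the segment."""
    h00 = 2 * t**3 - 3 * t**2 + 1    # weight of the starting value
    h10 = t**3 - 2 * t**2 + t        # weight of the starting slope
    h01 = -2 * t**3 + 3 * t**2       # weight of the ending value
    h11 = t**3 - t**2                # weight of the ending slope
    return h00 * v0 + h10 * m0 + h01 * v1 + h11 * m1
```

With both slopes set to zero, the curve eases out of the first key and into the second, which is the smooth ease-in/ease-out behavior described above; making the handles longer or steeper changes the acceleration through the segment.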
Ways of Editing Function Curves and Keys
When you select one or more fcurves, any modifications you perform are done only to them. You can select keys on the selected fcurves to edit only those keys, including regions of keys on fcurves.
- Move fcurves and keys in X (horizontally) to change the time, or in Y (vertically) to change the values.
- Add or delete keys on an fcurve.
- Create regions (press Q) of keys for editing. Drag the region up or down to move the keys, or drag the region's handles to scale.
- Copy and paste an fcurve and keys. You can also set paste options to control how keys are pasted: whether they replace the selection or are added to it.
- Scale fcurves or regions of keys. When you shorten the length, you speed up the animation; increasing the length slows it down. Scaling vertically changes the values.
- Cycle the fcurves for repetitive motions. You can create basic cycles, or relative cycles that are progressively offset, such as when creating a walk cycle.
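Moving and scaling a region of keys is simple arithmetic on the key times. A sketch of the idea (a hypothetical helper for illustration, not the Softimage API; keys are (frame, value) pairs):

```python
def transform_region(keys, offset=0.0, scale=1.0, pivot=0.0):
    """Move and scale a region of (frame, value) keys in time, the way
    dragging a region or its handles does: scaling about a pivot frame
    shortens or lengthens the animation; offsetting shifts it in time."""
    return [(pivot + (f - pivot) * scale + offset, v) for f, v in keys]
```

Scaling by 0.5 about the region's first key halves its length and therefore doubles the playback speed of that span; an offset of 5 pushes every key 5 frames later.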
Layering Animation
Animation layering allows you to have one or more levels of animation on top of the base animation of an object's parameters at the same time. You usually want to layer animation when you need to add an offset to the base animation on an object but you don't want to change the original animation, such as with mocap data. You can only add keys in the layers, and the existing base animation must be either action clips or fcurves. Animation layers are non-destructive, meaning that they don't alter your base animation in any way: the keys in the layers always remain a separate entity. Layering allows you to experiment with different effects on your animations and build several variations, each in its own layer.
For example, let's say that you've imported a mocap action clip of a character running down a flight of stairs. However, in your current scene, the stairs are shallower than those used for the mocap session, so the character steps through the stairs instead of on them. To fix this problem, you create an animation layer, offset the contact points for the character's feet so that they step on the stairs, then set keys. The result is an offset animation that sits on top of the mocap data: you don't need to touch the original mocap clip at all. You can then easily edit the fcurves for the animation layer, tweaking it as you like.
Animation layers are actually controlled and managed in the animation mixer, but you don't need to access the mixer to create and set keys in layers. You can use the Animation Layers panel (click the KP/L tab on the main command panel) to do this. However, you may want to use the animation mixer for added control over each layer, such as setting each layer's weight.
Overview of Layering Animation
There are different ways in which you can work with animation layers in Softimage, but here's a simple overview just to get you started.
1. Make sure the objects are in a model structure.
2. Animate the objects. This animation is the base layer. You cannot create animation layers without first having a base layer.
3. Create an animation layer in the Animation Layer panel.
4. Select the animated objects, change their values, and set keys for them in the layer you created.
5. Edit the layer's fcurves.
6. Collapse the layer to combine its animation with the base layer of animation.
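Conceptually, the value a parameter takes at a frame is its base animation plus each layer's keyed offset scaled by that layer's weight. A toy sketch of this simplified additive model (the real evaluation is handled by the animation mixer; this is illustration only):

```python
def layered_value(base, layers, frame):
    """Result of additive layering at a frame: the base animation's value
    plus each layer's offset curve scaled by the layer's weight.
    base and each layer are callables mapping frame -> value;
    layers is a list of (layer_curve, weight) pairs."""
    return base(frame) + sum(weight * layer(frame) for layer, weight in layers)
```

Because the layers are stored separately and only summed at evaluation time, muting a layer (weight 0) or deleting it restores the base animation untouched, which is what "non-destructive" means here.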
Constraints
Constraining is a way of increasing the speed and efficiency with which you animate. It lets you animate one object via another one's animation. You can constrain different properties of one object, such as position or direction, to those of an animated object. Then, when the animated object moves, the constrained object follows in the same way.
Radar dish constrained by direction to the plane: the X axis of the radar dish continually points in the direction of the plane's center.
There are a number of types of constraints in Softimage:
- Constraining transformations: position, orientation, direction, scaling, pose (all transformations), and symmetry.
- Constraining in space: by distance, or between 2, 3, or any number of points.
- Constraining to objects: to clusters, surfaces and curves, bounding volumes, and bounding planes.
For many of the constraints, you can add tangency or up-vector directions to the mix. The tangency and up-vector constraints are properties of several constraint types that determine the direction in which the constrained object should point. For example, if you apply a Direction constraint to an object, you can also add an up-vector (Y axis) to control the roll of the direction-constrained object.
Overview of Constraining Objects
1. Select the object to be constrained.
2. Choose the constraint command from the Constrain menu.
3. Pick the constraining (control) object. The constraint is created between the objects.
4. Adjust the constraint in the property editor that opens.
You can see constraint information in the viewport if you click the eye icon in a viewport's menu bar and select Relations.
Position constraint with offset: an offset is applied to the position of the constrained object's center.
With almost all types of constraints, you can set offsets using the controls in their property editors. The offset is set between the centers of the constrained and constraining objects on any axis. To set an offset interactively, you can use the CnsComp button (Constraint Compensation) on the Constrain panel. With compensation, you can interactively offset the constrained object from the constraining object and animate it independently while keeping the constraint.
Blending Constraints
You can blend multiple constraints on an object with each other, as well as blend constraints with other animation on the constrained object. You set the Blend Weight parameter's value in each constraint's property editor to blend the weight (or strength) of one constraint against the others. And, of course, you can animate the blending to have it change over time.
Blending is done in the order in which you applied the constraints, from the first-applied constraint to the last. Each constraint takes the previous result and gives a new one based on the value you set. For example, if you have three position constraints on an object, you can have the object placed exactly in the center of them. In the example on the right, the cone has three blended position constraints to keep it positioned in the middle of the triangle formed by objects A, B, and C:
- First to A with a blend weight of 1.
- Next to B with a blend weight of 0.5.
- Lastly to C with a blend weight of 0.333.
You can see the order of the constraints as well as their blend weight values in a viewport if you click the eye icon in a viewport and select Relations and Relations Info.
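The sequential blending described above can be sketched as repeated linear interpolation toward each constraint's target in application order (a conceptual model for illustration, not the Softimage implementation). With weights 1, 1/2, and 1/3, the result lands exactly on the centroid of the three targets, which is why the cone sits in the middle of the triangle:

```python
def blend_position(constraints, start=(0.0, 0.0, 0.0)):
    """Apply position constraints in order: each one blends the previous
    result toward its target position by its blend weight (0..1).
    constraints is a list of (target_xyz, weight) pairs."""
    pos = start
    for target, weight in constraints:
        pos = tuple(p + weight * (t - p) for p, t in zip(pos, target))
    return pos
```

Step through the book's example: weight 1 snaps to A; weight 0.5 moves halfway to B (the midpoint of A and B); weight 1/3 moves a third of the way to C, giving (A + B + C) / 3.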
Path Animation
A path provides a route in global space for an object to follow in order to get from one point to another. The object stays on the path because its center is constrained to the curve for the duration of the animation. You can create path animation in Softimage using a number of methods, each one having its own advantages:
- The quickest and easiest way of animating an object along a path is by using the Create > Path > Set Path command and picking the curve to be used as the path. There's no need to set keyframes; just set the start and end frames. The object is automatically constrained to the path and animated along the percentage of the curve's length.
- Constrain an object to a curve using the Curve (Path) constraint and manually set keys for the percentage of the path traveled.
- Choose the Create > Path > Set Trajectory command and pick a trajectory to use a curve's knots as indicators of the object's position at each frame.
After you've created path animation, you can modify it by changing the timing of the object on the path (choose the Create > Path > Path Retime command), or by moving, adding, or removing points on the path curve as you would to edit any curve. For example, using the Path Retime command, you can shorten a path animation that went from frames 1 to 100 to frames 20 to 70 (and therefore increase its speed). You can even reverse the animation; for example, enter 100 as the start and 1 as the end frame.
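Retiming is a linear remapping of the original frame range onto a new one. A minimal sketch of that mapping (plain arithmetic for illustration, not the Path Retime implementation):

```python
def retime(frame, old_start, old_end, new_start, new_end):
    """Remap a frame from the original path-animation range to a new range.
    Swapping new_start and new_end reverses the animation."""
    t = (frame - old_start) / (old_end - old_start)   # 0..1 along the path
    return new_start + t * (new_end - new_start)
```

Mapping frames 1-100 onto 20-70 compresses 99 frames of travel into 50, roughly doubling the speed along the path; mapping onto 100-1 plays the same path backward.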
- The dotted line is connected to the center of the constraining curve. You can select the line and press Enter to open the PathCns or TrajectoryCns property editor.
- A triangle represents a locked-path key.
- A square represents a key saved on the path.
- A circle represents a key set directly from a property page or the animation editor. These are the only type of keys found on trajectories.
You can see path information in a viewport if you click the eye icon in a viewport and select Relations.
- Move an object about your scene and save path keys with the Create > Path > Save Key on Path command at different positions; the path curve is created automatically as you go.
- Convert the existing movement of an object into a path using the Create > Path > Convert Position Fcurves to Path command.
Want to convert a path animation to translation? Plot the position of the path-animated object, then apply the result to the object or as an action in the animation mixer.
Linking Parameters
When you create linked parameters, also known as driven keys, you create a relationship in which one parameter depends on the animation state of another. In Softimage, you can create simple one-to-one links with one parameter controlling another, or you can have multiple parameters controlling one parameter. After you link parameters, you set the values that you want the parameters to have, relative to a certain condition (when A does this, B does this).
Venus flytrap eyes its victim. Its jaw's rotation Z parameter is linked to the position X parameter of the fly that is animated along a path.
You can link any animatable parameters together, from translation to color, to create some very interesting or unusual animation conditions. For example, you could create a chameleon effect so that when object A approaches object B, it changes color. Basically, if you can animate a parameter, you can link it.
There are three basic ways in which you can link parameters. You can:
- Create simple one-to-one links with one parameter driving one or more other parameters. When you link one parameter to another, a relationship is established that makes the value of the linked parameter depend on the value of the driving parameter.
- Drive a single parameter with the combined animation values of multiple parameters. This allows you to create more complex relationships, where many parameter values are interpolated to create an output value for one parameter.
- Drive a single parameter with the whole orientation of an object.
Overview of Linking Parameters
To open the Parameter Connection Editor, choose View > Animation > Parameter Connection Editor. Then follow these steps:
1. Select an object, then select one or more of its parameters in the Driven Target explorer. These are the parameters whose values will be controlled by the driving parameter. Click the lock icon to prevent the explorer from changing when you select other objects.
2. Select an object, then select one of its parameters in the Driving Source explorer. This is the parameter whose values will control the linked parameters. If you are driving a single parameter with multiple parameters, select two or more of the parameters (Ctrl+click) here; these are the parameters whose interpolated values will control the linked parameter.
3. Select Link With from the link list. If you are driving a single parameter with multiple parameters, select Link With Multi.
4. Click the Link button. A link relationship is established between the parameters. An l_fcv expression appears in the Definition text box, and the animation icon of the linked parameter displays an L to indicate this. If you are driving a single parameter with multiple parameters, an l_interp expression appears in the Definition text box.
5. Set the driving and linked parameters' values as you want them to be relative to each other, then click the Set Relative Values button. Repeat this step for each relative state you want to set.
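Conceptually, the relative states you save define a mapping from driver value to driven value, and the link interpolates between them, much as a key-based curve does over time. A minimal sketch assuming linear interpolation between saved states (for illustration only; the actual l_fcv link uses an fcurve, whose interpolation you can edit):

```python
def driven_value(relative_states, driver):
    """Evaluate a linked (driven) parameter. relative_states is a sorted
    list of (driver_value, driven_value) pairs, like those saved with
    Set Relative Values; values are held beyond the first and last states."""
    if driver <= relative_states[0][0]:
        return relative_states[0][1]
    if driver >= relative_states[-1][0]:
        return relative_states[-1][1]
    for (d0, v0), (d1, v1) in zip(relative_states, relative_states[1:]):
        if d0 <= driver <= d1:
            t = (driver - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)
```

In the flytrap example, two states might say "jaw open (0 degrees) when the fly is at X = -5" and "jaw shut (-45 degrees) when the fly reaches X = 0"; in between, the jaw closes progressively as the fly approaches.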
Expressions
Expressions are mathematical formulas that you can use to control any parameter that can be animated, such as translation, rotation, scaling, materials, colors, or textures. Expressions are useful for creating regular or mechanical movements, such as oscillations or rotating wheels. As well, they allow you to create almost any connection you like between any parameters, from simple A = B relationships to very complex ones using predefined variables, standard math functions, random number generators, and more. However you use expressions, you will find that they are very powerful because they allow you to animate precisely, right down to the parameter level. Once you're more experienced using them, you can create all sorts of custom setups, like character rigs and animation control systems.
Overview of Writing an Expression
1. Select an object and open the expression editor by pressing Ctrl+9.
2. Select the target, which is the parameter controlled by the expression. The Current Value box below it shows the value of the expression at the current frame.
3. Enter the expression in the expression pane by typing directly or by choosing items from the Function, Object, and Param menus. You can also enter parameter names by typing their script names and then pressing F12; this prompts you with a list of possible parameters in context. You can copy, cut, and paste in the expression pane using standard keyboard shortcuts (Ctrl+C, Ctrl+X, and Ctrl+V, respectively).
4. The message pane updates as you work, letting you know whether the expression is valid or not.
5. Click the Validate and Apply buttons to validate and then apply the expression.
For a complete description and syntax of all the functions and constants available, refer to the Expression Function Reference (choose Help > User's Guides).
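As an example of the kind of regular, mechanical movement expressions are good at, an oscillation of the form amplitude * sin(frame * 360 / period) produces a smooth back-and-forth cycle. The sketch below evaluates such a formula in plain Python, assuming degrees (as expression trig functions conventionally take); the function name and parameters are illustrative, not Softimage syntax:

```python
import math

def eval_oscillation(frame, amplitude=2.0, period=50.0):
    """Value of the expression amplitude * sin(frame * 360 / period)
    at a given frame: a full back-and-forth cycle every `period` frames."""
    return amplitude * math.sin(math.radians(frame * 360.0 / period))
```

Hooked up to a translation parameter, this would bob the object between -2 and 2 units every 50 frames with no keyframes at all, and changing `period` retimes the whole motion in one place.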
How to create a simple equal (=) expression: three ways
Use any of these methods to create a simple equal expression between two parameters:
- In a property editor, drag an unanimated parameter's animation icon onto another parameter's animation icon. The animation icon shows an equal sign, and its value is made equal to the first parameter.
- In the explorer, drag the name of an unanimated parameter and drop it on another parameter's name.
- In the parameter connection editor, set up the Driving Source and Target parameters, then select Equals (=) Expression.
Copying Animation
There are different levels at which you can copy animation in Softimage: between parameters, between objects, or between models. Here are some of the main ways to do this:
- You can copy any type of animation between selected objects, models, or parameters using the Copy Animation commands from the Animation menu in the Animation panel.
- You can copy keys between parameters or objects in the dopesheet, or copy function curves and keys between parameters or objects in the fcurve editor. In the dopesheet, you can copy animation from one model to another, or from one hierarchy of objects to another within the same model. For example, you can paste a walk cycle animation from the Bob model to the Fred model, as long as Fred has the same parameter names as Bob.
- Store an object's animation in an action source and copy it between models, which is especially useful for exchanging animation between scenes.
You can also copy animation between any parameters in the explorer or a property editor in a number of ways:
- In the explorer, drag the name of an animated parameter and drop it on another parameter's name.
- In a property editor, drag the animation icon of an animated parameter and drop it on another parameter's animation icon.
- In either the explorer or a property editor, right-click the animation icon of an animated parameter and choose Copy Animation. Paste this on another parameter with the Paste Animation command.
- In the explorer, you can drag an entire folder from one object onto another object's folder of the same name, such as the Pos folder, which contains translation (position) parameters.
You can also use the dopesheet to offset or scale animation for an object or even the scene, especially using its summary tracks.
A: The selected fcurve (white) has been scaled to twice its length. The ghosted fcurve (black) shows the original fcurve's size.
B: The selected fcurve has been offset by about 20 frames.
C: The selected fcurve has been retimed so that a range of 125 frames in the middle of it has been compressed into a range of 80 frames.
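The scale, offset, and retime edits shown above are simple arithmetic on key times. A hedged sketch of that arithmetic in plain Python (this is not the Softimage API; keys are modeled as (frame, value) pairs):

```python
def scale_keys(keys, factor, pivot=0.0):
    """Scale key times about a pivot frame (factor 2.0 doubles the length)."""
    return [(pivot + (f - pivot) * factor, v) for f, v in keys]

def offset_keys(keys, frames):
    """Shift every key later (or earlier) in time."""
    return [(f + frames, v) for f, v in keys]

def retime_keys(keys, start, end, new_length):
    """Compress or expand the keys between start..end into new_length
    frames; keys after the range slide so the curve stays contiguous."""
    old_length = end - start
    ratio = new_length / old_length
    out = []
    for f, v in keys:
        if f < start:
            out.append((f, v))
        elif f <= end:
            out.append((start + (f - start) * ratio, v))
        else:
            out.append((f - old_length + new_length, v))
    return out

keys = [(1, 0.0), (50, 1.0), (100, 0.0)]
print(scale_keys(keys, 2.0, pivot=1))   # keys land on frames 1, 99, 199
print(offset_keys(keys, 20))            # keys land on frames 21, 70, 120
print(retime_keys(keys, 25, 150, 80))   # the 125-frame middle becomes 80
```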
Removing Animation

There are different levels at which you can remove animation in Softimage: from parameters, from objects, or from models. Here are some of the main ways to do this:
- You can remove any type of animation from selected objects, models, or parameters using the Remove Animation commands from the Animation menu in the Animation panel.
- You can remove all keys from parameters or objects in the timeline or in the dopesheet, or remove fcurves or all keys from parameters or objects in the fcurve editor. When you remove keys from an fcurve, a flat (static) fcurve remains. To remove the static fcurve, choose Remove Animation > from All Parameters, Static Fcurves from the Animation menu.
- In the dopesheet, you can easily remove all animation from a model or from a hierarchy of objects using its summary tracks.
- To remove animation from parameters in a property editor, right-click the keyframe icon at the top of the editor and choose Remove Animation. This removes animation from all or marked animated parameters on that property page.
- To remove animation from parameters in the explorer or a property editor, right-click the animation icon of an animated parameter and choose Remove Animation.

Plotting (Baking) Animation

Plotting is done by first creating an action source. You can choose to either keep or delete this action source after the animation has been plotted:
- You can apply the plotted animation (fcurves) immediately to the object and delete the action source.
- You can apply the plotted animation (fcurves) to the object and also keep it stored in an action source. This may be useful if you're using the animation mixer.
- You can keep the action source of the plotted animation (fcurves) but not have it applied to the object immediately. This may be useful for creating a library of action sources that can be applied to the same or even a different object.
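Conceptually, plotting just samples the current animation, whatever produces it (expressions, the mixer, a simulation), at every frame and stores the result as ordinary keys. A rough sketch of the idea in plain Python (illustrative only, not Softimage's plotting command):

```python
import math

def plot_animation(evaluate, start, end, step=1):
    """Bake any animation into plain keyframes by sampling the final
    evaluated value at every frame in the range."""
    return [(f, evaluate(f)) for f in range(start, end + 1, step)]

# e.g. bake a procedural bounce (stand-in for an expression or mixer
# result) into an fcurve-like list of keys
bounce = lambda frame: abs(math.sin(frame * 0.2))
fcurve = plot_animation(bounce, 1, 100)
# fcurve now holds 100 keys that reproduce the motion without the source
```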
Section 10
Character Animation
Character animation is all about bringing your characters to life, whether it's some guy dancing in a club, a dog catching a frisbee, or a simple bouncing ball with personality to spare. Even though you're working in a virtual environment, your job is to make these characters seem believable in their movements and expressions. In Softimage, you'll find everything you need to make any type of character come alive.
The following outline gives you an idea of which steps to take and which tools to use for developing and animating characters in Softimage.
2 Create a model structure for your character, starting with the body geometry. Then as you create the other elements (skeleton, rig controls, Mixer node), you put them in the model to keep all the character's elements together. This makes it easy to copy or export your character later on.
Build a skeleton to provide a framework for a character and to pose or deform it intuitively. The structure of your character's skeleton determines every aspect of how it will move. With the envelope as a guide, you can create the bones for the skeleton and assemble them into a hierarchy.
Create a rig using different control objects to help you to pose and animate the character more quickly and accurately than without a rig. While simple characters may not require a rig, a character that is complex or needs to do complicated movements will need a rig.
Apply the envelope to the skeleton. This also involves setting how the different parts of the envelope are weighted to the different bones in the skeleton. You should also save a reference pose of the envelope before you start animating, as a home base to which you can return.
Animate the skeleton using inverse kinematics (IK) and forward kinematics (FK). You can also apply mocap data to your character to animate it, including retargeting the data onto different characters with the MOTOR tools.
Adjust the animation using any of the animation tools in Softimage, such as the dopesheet, the fcurve (animation) editor, animation layers, or the animation mixer. For example, you may want to fix foot sliding in the fcurve editor, add a progressive offset to a walk cycle in the mixer, or add a few keyframes on top of some mocap data with animation layers.
All predefined skeletons, bodies, characters, and rigs are implemented as models. As well, most of the bipeds share the same basic hierarchy structure that you can see in the explorer, making it easy to share animation later, especially if you're using actions in the animation mixer.

Making Custom Characters and Faces

The Character Designer (choose Get > Primitive > Character > Man Maker) loads a generic male body; you then use sliders in a property editor to interactively manipulate individual body and head features. You can create many bodies, each with its own distinctive look, yet have all bodies share the same underlying topology.

The Face Maker (choose Get > Primitive > Character > Face Maker) loads a predefined low-resolution polygon mesh head (male or female). This lets you create any number of different faces with the same topology, allowing you to easily copy shape animation keys between them. Perfect for testing out some shape animation!
Man Maker
Face Maker
You can set up a character synoptic view for other members of your team, allowing them to use your character easily. Synoptic views allow you and others to quickly access commands and data related to a specific object or model. They consist of a simple HTML image map stored as a separate file outside of the Softimage scene file. The HTML file is then linked to a scene element. Clicking a hot spot in the image either opens another synoptic view or runs a linked script. You can include all sorts of information about the character, and set up hot spots for selecting body parts, setting keys on different elements, running a script, and so on.
Synoptic views: click a hot spot on the synoptic image to run the script that is linked to that image.
Shadow icons are displayed here as cylinders for many bones. These shadows have been resized and offset from the bone to make them easy to see and grab. You can also color-code the shadows to identify different groups of controls. You can also change the shape, color, and size of the chain elements themselves (such as resizing the bones), including having no chain element displayed at all.
Anatomy of a skeleton
The bones are connected by joints. A bone always rotates about its joint, which is at its top. The first bone rotates around the root.

The root is a null that is the starting point of the chain. It is the parent of all other elements in the chain. Because the first joint is local to the root, the root's position and rotation determine the position and rotation of the rest of the chain.

A joint is the connection between elements in a chain: between bones in the chain, between the root and the first bone, and between the last bone and the effector. By default, joints are not shown, but you can easily display them.

In a 2D chain, the joints act as hinges, restricting movement so that it's easier to create typical limb actions, such as bending an arm or leg. Only its first joint at the root acts as a ball joint, allowing a free range of movement: when using IK, the rest of the 2D chain's joints rotate only on the root's Z axis, like hinges. Of course, you can rotate the joints of a 2D chain in any direction with FK, but this is overridden as soon as you invoke IK.

In a 3D chain, the joints can move any which way they like. All of its joints are like ball joints that can rotate freely on any axis, allowing you to animate wiggly objects like a tail or seaweed.

The first bone in the chain is a child of the root, and all other bones are children of their preceding bones. Keying the rotation of bones is how you animate with forward kinematics (FK).
The effector is a null that is the last part of a chain. Moving the effector invokes inverse kinematics (IK), which modifies the angles of all the joints in that chain. When you create a chain, the effector is a child of the root, not the preceding bone.
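Because each bone is a child of the one before it, an FK pose is computed by accumulating joint rotations down the chain: rotating the first bone carries everything below it. A small 2D sketch of this accumulation (illustrative Python only; Softimage's own evaluation works on full 3D transforms):

```python
import math

def fk_positions(root, bone_lengths, joint_angles_deg):
    """Accumulate each joint's rotation down the chain: children inherit
    their parent bone's orientation, as in the hierarchy described above.
    Returns the root, each joint, and finally the effector position."""
    x, y = root
    angle = 0.0
    points = [(x, y)]
    for length, deg in zip(bone_lengths, joint_angles_deg):
        angle += math.radians(deg)        # child adds onto parent's angle
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        points.append((x, y))             # last point is the effector
    return points

# two bones: first rotated 90 degrees up, second bent 90 degrees back
pts = fk_positions((0, 0), [2.0, 1.5], [90, -90])
print([(round(x, 6), round(y, 6)) for x, y in pts])
# → [(0, 0), (0.0, 2.0), (1.5, 2.0)]
```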
Creating Skeletons
Drawing chains is pretty simple in Softimage: you choose the Create > Skeleton > Draw 2D Chain or Draw 3D Chain command on the Animate toolbar and click where you want the root, joints, and effector to be. Here are some tips to help you draw chains:
- Draw the chains in relation to the default pose of the envelope that you're planning to use. This means you don't have to spend as much time adjusting each bone's size and position later.
- Draw the chain with at least a slight bend to determine its direction of movement when using IK. Drawing bones in a straight line can result in unpredictable bending.
- If you want two chains to be mirrored, such as a character's arms or legs, you can draw one and have the other one created at the same time. Just activate symmetry (Sym) mode and then draw a chain.
After you have created the chains for a character's skeleton, you need to organize them into a hierarchy. Hierarchies are parent-child relationships that make it easy to animate the skeleton. There are many different ways in which you can set up a hierarchy, depending on the skeleton's structure and the type of movements that the character needs to make.
Part of a skeleton hierarchy structure shown in the schematic view. In this case, the spine root is the parent of the leg roots, spine, and spine effector. These elements are, in turn, parents of the legs, neck, shoulders, spine, and so on.
Choose the Create > Skeleton > Draw 2D Chain or Draw 3D Chain command.
3 Click again to create the first bone and second joint. Tip: You can try out the joint's location by keeping the mouse button held down as you drag; the bone and joint are not created until you let go of the mouse button.
4 Click once more to create another bone and joint.
5 When you're ready to finish, right-click to create the effector and end the chain.
Hand bone rotated and keyed. Notice how the rotation values are easy to understand because they're using 0 as a reference.
Character in his neutral pose for weighting and texturing. If you store a skeleton pose of this position, it's easy to return to it at any point of your character's development.
Resizing bones
The easiest way to resize bones is to use the Create > Skeleton > Move Joint/Branch tool (press Ctrl+J). This tool lets you interactively resize bones by moving any chain element to a new location. The bones that are immediately connected to that chain element are resized and rotated to fit the chain element's new location. Moving the knee joint using Move Branch resizes only the bone above it: this joint's children are moved as a group but are not resized.
Use the Move Joint tool to move the knee joint to a new position. The bones connected above and below this joint are resized.
Removing bones
You can't select and delete individual bones from a chain because of their hierarchy dependencies, but you can branch-select (middle-click) a chain and then delete it. If there are children in that chain that you want to keep, make sure to Cut their links before deleting the chain, and then reparent them to the modified chain.
Enveloping
An envelope is an object that deforms automatically, based on the pose of its skeleton or other deformers. In this way, for example, a character moves as you animate its skeleton. The process of setting up an envelope is sometimes called skinning or boning.

Every point in an envelope is assigned to one or more deformers. For each point, weights control the relative influence of its deformers. Each point on an envelope has a total weight of 100, which is divided between the deformers to which it is assigned. For example, if a point is weighted by 75 to the femur and 25 to the tibia, then the femur pulls on the point three times more strongly than the tibia.

Setting Envelopes

1. Make sure the envelope and deformers are in the reference pose (sometimes called a bind pose). The reference pose determines how points are initially assigned and weighted. It's best to choose a reference pose that makes it easy to see and control how points will be assigned.
2. Select the objects, hierarchies, or clusters to become envelopes.
3. Choose Deform > Envelope > Set Envelope from the Animate toolbar. If the current construction mode is not Animation, you are prompted to apply the envelope operator in the animation region of the operator stack anyway. In most cases, this is probably what you want.
4. Pick the objects that will act as deformers. You are not restricted to skeleton bones; you can pick any object. Left-click to pick individual objects and middle-click to pick branches. You can also pick groups in the explorer; this is equivalent to picking every object in the group individually. If you make a mistake, Ctrl+click to undo the last pick.
5. When you have finished picking deformers, right-click to terminate the picking session. Each deformer is assigned a color, and points that are weighted 50% or more toward a particular deformer are displayed in the same color. Use the Automatic Envelope Assignment property editor to adjust the basic settings.
6. Move the deformers to see how the envelope deforms. If necessary, you can now change the deformers to which points are assigned, as well as modify the envelope weights using the methods described in the next few sections. If you ever need to reopen the Automatic Envelope Assignment property editor, you can find it in the envelope weight stack in an explorer.
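Under the hood, the weighting described above amounts to a weighted average: each deformer carries the point to where its own motion would put it, and the weights blend those candidate positions. A toy sketch of the idea (plain Python; the deformers are simplified to functions rather than real bone matrices, and the names are just for the femur/tibia example):

```python
def deform_point(rest_point, weights, transforms):
    """weights: {deformer: percent} summing to 100, as in the femur-75 /
    tibia-25 example above. transforms: per-deformer functions mapping the
    rest position to a deformed position. Result is the weighted average."""
    x = y = z = 0.0
    for name, w in weights.items():
        px, py, pz = transforms[name](rest_point)
        x += w / 100.0 * px
        y += w / 100.0 * py
        z += w / 100.0 * pz
    return (x, y, z)

# the femur pulls three times more strongly than the tibia
transforms = {
    "femur": lambda p: (p[0] + 1.0, p[1], p[2]),   # femur moved +1 in X
    "tibia": lambda p: (p[0], p[1] + 1.0, p[2]),   # tibia moved +1 in Y
}
print(deform_point((0.0, 0.0, 0.0), {"femur": 75, "tibia": 25}, transforms))
# → (0.75, 0.25, 0.0)
```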
Weight Paint Panel:
- Choose a paint mode.
- Activate the Paint tool.
- Set paint density.
- Set brush size.
- Update continuously (on) or only when the mouse button is released (off).
- Pick a deformer for painting from the 3D views: click to pick the deformer for painting, or right-click for other options. You can also select the deformer with the most influence on the point you pick.
- Set the weight assignment of selected points to the current deformer numerically.
- Numeric weight assignment options.
- Smooth weights on the object or selected points.
- Reassign points to other deformers.
- Freeze the initial weight assignment and any modifications.
- Open the weight editor.
- Display only the current deformer's weight map.
4. If desired, set the paint mode. Most of the time you will be using Add (additive) but Smooth, Erase, and Abs (absolute) are also sometimes useful. 5. If desired, adjust the brush properties: - Use the r key to change the brush radius interactively. - Use the e key to change the opacity interactively. - Set other options in the Brush Properties editor (Ctrl+w). 6. Click and drag to paint on points on the envelope. In normal (additive) paint mode: - To add weight, use the left mouse button.
- To remove weight, either use the right mouse button or press Shift+left mouse button. - To smooth weight values between deformers, press Alt+left mouse button. 7. Repeat steps 3 to 6 for other deformers and points until you are satisfied with the weighting. If your envelope has multiple maps, for example, a weight map in addition to an envelope weight map, then you may need to select the envelope weight map explicitly before you can paint on it. A quick way is to select the enveloped geometry object, then choose Explore > Property Maps from the Select panel and select the map to paint on.
In the weight editor:
- Reassign points to other deformers.
- Smooth weights on the object or selected points.
- Freeze the envelope operator stack.
- Control the display of points and deformers.
- Lock weights.
- Deformers are listed in columns. Right-click for display options. Drag a column border to resize.
- Multiple envelopes: double-click to expand and collapse, or right-click for more options. If some points aren't fully weighted, the envelope's name is shown in red. Hover the mouse pointer over the name to see how many points aren't fully weighted.
- Limit the number of deformers per point.
- Weight assignment options.
- Set the weight of selected cells.
- Points are listed in rows. Click to select, right-click for display options. Drag a row border to resize. Points that aren't fully weighted are shown in red.
- Points with more deformers than the limit are shown in yellow, as are envelopes with such points. Selected cells are highlighted. Non-zero weights are shaded.
If a point's weight is assigned to more than this number of deformers, its row is shown in yellow in the weight editor. If an envelope has any such points, its row is shown in yellow, too.
2. To try to fix these points automatically, click Enforce Limit. A Limit Envelope Deformers operator is applied, and its property page is opened automatically. By default, the limit is the one you set on the command bar, but you can change it for individual operators. If a point has more than the maximum number of deformers, the operator unassigns the deformers with the lowest weights and then normalizes the weight among the remainder. However, it respects locked weights: locked weights are never changed, even if other deformers have greater weight. If there aren't enough unlocked weights to modify, then the total weight might not add up to 100%.
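The Enforce Limit behavior described above (drop the lowest weights beyond the limit, renormalize the rest, never touch locked weights) can be sketched in a few lines. This is a plain-Python illustration of that logic under the stated assumptions, not the actual operator:

```python
def enforce_limit(weights, limit, locked=()):
    """Drop the lowest-weighted unlocked deformers beyond the limit and
    renormalize the remainder to 100. Locked weights are never changed,
    so the total can fall short of 100 if too much weight is locked."""
    items = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(items[:limit])
    for name, w in items[limit:]:
        if name in locked:        # locked weights are never unassigned
            kept[name] = w
    locked_total = sum(w for n, w in kept.items() if n in locked)
    free_total = sum(w for n, w in kept.items() if n not in locked)
    target = max(100.0 - locked_total, 0.0)
    for name in kept:
        if name not in locked and free_total > 0:
            kept[name] = kept[name] / free_total * target
    return kept

print(enforce_limit({"hip": 50, "femur": 30, "tibia": 15, "foot": 5}, limit=2))
# → {'hip': 62.5, 'femur': 37.5}
```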
Rigging a Character
Control rigs allow for puppeteering a character, helping you easily pose and animate it. Once a control rig is set up properly, you can animate more quickly and accurately than without one. There are a number of tools in Softimage to help you create a rig for your character. You can use them to create control objects and constrain them to the skeleton, and to create shadow rigs and manage the constraints between them and their parent rigs. You can also use the prefab guides and rigs in Softimage to help you get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The rigs are skeletons that include control objects that you can position and orient to animate the various parts of the character's body.
Ready-made (prefab) biped rig that comes with Softimage:
- Animated main rig.
- You can create either a quaternion or regular chain spine and head.
- Separate controls for the chest, upper body, and hips let you position and rotate each area individually.
Feet have three controls to allow for complex angles and foot rolls.
You can create a simple but flexible spine with the Create > Skeleton > Create Spine command. This creates a quaternion-blended spine for controlling a character the way you like. You constrain the top and bottom vertebrae to hip and chest control objects that you create.
3 Create an object, such as a null, and make it the parent of all skeleton and rig control objects. Also make sure that all the rig control objects are within the character's model.
Create spring-based tail or ear controls using the Create > Skeleton > Create Tail command. Spring-based controls use dynamics to make them react to motion, such as bouncing when a character runs or jumps.
You can also create a Transform Group in which a null becomes an invisible parent of all selected objects.
You can customize these guides and rigs so that they contain only the elements you need. They can be used as a starting point for different rigging styles, and technical directors can write their own proportioning script to attach their own rig to a guide. The guides have synoptic views to help you select and animate the rig controls: select any control and press F3. There are also preset character key sets and action sources to help you animate the rig.
1 Create a guide by choosing Character > Biped Guide (or quadruped or biped dog-leg) and adjust it to fit your character's envelope. Drag the red cubes to resize the different parts of the body. You can use symmetry to resize the limbs on both sides of the body at the same time.
2 When the guide is fitted to the envelope, create a rig based on it by choosing Character > Rig from Biped Guide. The rig is a skeleton that also includes standard Softimage objects as control objects.
3 Apply the body geometry as an envelope to the rig, using the envelope_group in the rig's model to apply it to the correct parts of the rig.
4 Position and rotate the rig controls and key them to animate the various parts of the skeleton.
You can also create tail, ear, and belly controls that are driven by springs. This lets you create secondary animation on these body parts using dynamics.
To animate with FK
1 Select a bone or the control rig object to which a bone is constrained.
2 Click the Rotate (r) button in the Transform panel or press C.
3 Rotate the bone into position on any axis (X, Y, Z).
4 Key the bone's rotation values.
You could also animate with FK by first translating the chain's effector (invoking IK) to move the bones into position, and then tweaking each bone's rotation as necessary. When things are in position, choose Create > Skeleton > Key All Bone Rotations to set rotation keys for all the bones in that chain.
To help make keying easier, you can create a character key set that contains all the rotation parameters for the bones. Then you can quickly key using this set. In a similar way, you can use the keying panel to key only the rotation parameters that you have set as keyable for the bones.
Inverse kinematics
Leg's effector is branch-selected (middle-clicked) and translated to move the leg from a standing position to doing the can-can.
To animate with IK
1 Select the chain's effector or the control rig object to which the effector is constrained.
2 Click the Translate (t) button in the Transform panel or press V.
3 Move the effector so that the chain is in the position you want.
4 Key the effector's translation values.
You could also constrain the effector to a curve with the Constrain > Path command and animate it with path animation. The chain is solved in the same way as if you keyed the effector's positions.
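When you move the effector, the IK solver works backward from the target position to the joint angles. For a two-bone limb in a plane, that solve reduces to the law of cosines. A hedged 2D sketch of that math (illustrative only, not Softimage's solver, which also accounts for the resolution plane and preferred angles):

```python
import math

def two_bone_ik(l1, l2, target):
    """Solve joint angles so the effector reaches `target`, with the root
    at the origin and the chain bending in the plane like a 2D chain.
    Returns (root_angle, knee_angle) in degrees; unreachable targets are
    clamped to the chain's full extension."""
    tx, ty = target
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-9)   # clamp to reachable length
    # law of cosines gives the interior angle at the knee joint
    cos_knee = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # the root angle aims at the target, minus the triangle's corner angle
    cos_root = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    root = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_root)))
    return math.degrees(root), math.degrees(knee)

root_deg, knee_deg = two_bone_ik(1.0, 1.0, (1.0, 1.0))
# root is about 0 degrees and the knee about 90: the chain bends to reach (1, 1)
```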
You can change a joint's preferred angle to get the correct skeleton structure for the animation that you want to create. This solves the IK in a new way, affecting the movement of the whole chain. You can also reset a bone's rotation to the value of its preferred rotation, which resets the chain to its pose when you created it.

With 2D chains, the preferred axis of a chain (the X axis, by default) is perpendicular to the plane in which Softimage tries to keep the chain when moving the effector. This plane is referred to as the general orientation or resolution plane of a chain. It is in the space of this plane that the IK system resolves the joints' rotations when you move the effector.
Constraining the chain to prevent flipping

Using an up-vector constraint for chains, you can constrain the orientation of a chain to prevent it from flipping when it crosses certain zones. The up-vector constraint forces the Y axis of a chain to point at a constraining object so that the solver knows exactly how to resolve the chain's rotations. You add up-vector constraints to the first bone of a chain because that is the bone that determines the resolution plane.
Preferred angle: the chain is drawn with a slight bend to determine its direction of movement when using IK. This determines the preferred angle of rotation for each bone's joint.
Resolution plane
The resolution plane of this skeleton's leg is shown with a gray triangle connecting the root, the effector, and the knee joint. This plane is defined by the first joint's XY plane, and any joint rotations stay aligned with this plane. When the first joint is rotated, the resolution plane rotates accordingly, and all joint rotations remain on the resulting resolution plane.
3 Set keys for the Blend FK/IK values at the appropriate frames where you want the blend to start and finish.
You can store the walk cycle in an action source, then bring that source into the mixer to cycle it. Once in the mixer, you can reverse it, stretch it out or compress it to change the timing, cycle it, move it around in time, mix it with other actions, and more, all in a nondestructive way.
You can use rotoscoped images of models to act as a template on which you can base the character's poses to be keyed. You'll need to tweak your character's walk afterward to make it look natural and appropriate for the character. Tip: It helps to make the arms and legs of the left and right sides different colors. Here, the right leg and arm are in black.
Repeat the same poses for the other side of the body on frames 21, 25, 29, and 33 (the first pose is the same as the last pose of the side you just did).
Save the finished walk cycle in an action source using the Action > Store > Fcurves command.
Open the animation mixer, and load the action source into it by right-clicking a green track and choosing Insert Source. This creates an action clip for the walk cycle on that track.
If the feet slide when they're on the ground, you can fix it by making the fcurve interpolation flat between the pose keys. Open the animation (fcurve) editor, select the keys on the fcurves, and choose Keys > Zero Slope Orientation. The fcurve editor is the tool to help you fine-tune the walk's fcurves in many ways.
6 Cycle the walk clip in the mixer by dragging one of the clip's lower corners. You can also quicken or slow down the walk pace, blend it with another action, or create a transition to yet another action, such as to a run cycle. Use the cid clip effect variable to add a progressive forward offset to a stationary cycle.
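Cycling a clip and adding a progressive offset can be pictured as simple time arithmetic: local time wraps around the clip length, and each completed cycle adds a forward offset, roughly what the cid-based offset above achieves. A hedged Python sketch of that evaluation (not the mixer's actual implementation; linear key interpolation is assumed, and frames at or after the clip start):

```python
def evaluate_cycled_clip(keys, frame, offset_per_cycle=0.0):
    """Evaluate a cycled clip: time wraps over the clip's length, and each
    completed cycle can add a forward offset to the value (so a stationary
    walk cycle actually travels)."""
    start, end = keys[0][0], keys[-1][0]
    length = end - start
    cycles, local = divmod(frame - start, length)
    local += start
    # linear interpolation between the surrounding keys
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= local <= f1:
            t = (local - f0) / (f1 - f0)
            return v0 + t * (v1 - v0) + cycles * offset_per_cycle
    return keys[-1][1] + cycles * offset_per_cycle

# hypothetical forward-position keys for one stationary 32-frame stride
walk_z = [(1, 0.0), (17, 1.2), (33, 0.0)]
print(round(evaluate_cycled_clip(walk_z, 49, offset_per_cycle=2.4), 6))
# → 3.6  (frame 49 is frame 17 of the second cycle, plus one cycle's offset)
```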
Motion Capture
Motion captured animation (usually known as mocap) offers a way to animate a character based on motion that is electronically gathered from a human or animal. This is useful for animating actions that are particularly difficult to do well with keyframing or other methods of animation creation. In Softimage, you can import mocap data and apply it onto rigs, as well as retarget animation from BVH or C3D mocap files to rigs.
The left leg and arm are rotated a bit and then keyed as an offset to the clip.
Luckily, in Softimage you can easily add non-destructive offsets to mocap data in any of these ways:
- Creating animation layers: Create a layer of keys as an offset to mocap animation. Layers let you keyframe as you would normally, but those keys are kept in a separate layer of animation so that they don't affect the base mocap animation. After you've added one or more layers of keys and you're happy with the results, you can collapse the layers to bake them into the base layer of animation.
- Mixing fcurves with an action clip: Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, you can blend fcurves directly with an action clip over the same frames. This allows you to blend mixer animation with scene-level animation.
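The layering idea above is additive: the final value at a frame is the dense base animation plus the sum of the sparse layer offsets, and collapsing the layers just bakes that sum back into one set of keys. A minimal sketch of this, assuming purely additive layers (illustrative Python, not Softimage's layer system):

```python
def evaluate_with_layers(base, layers, frame):
    """Final value = base (e.g. mocap) value plus the sum of each layer's
    offset at that frame. Collapsing layers bakes this sum into one curve."""
    return base(frame) + sum(layer(frame) for layer in layers)

# dense base animation (stand-in for mocap) and one sparse offset layer
mocap = lambda f: 10.0 * f
tweak = lambda f: 5.0 if f > 50 else 0.0   # a few keys lifting the late frames

print(evaluate_with_layers(mocap, [tweak], 60))   # → 605.0
# the base curve is untouched: remove the layer and you get 600.0 back
```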
- Creating action clip effects in the mixer: Clip effects let you adjust the animation in an action clip without affecting the original animation in the action source. Clip effects add values on top of a clip, such as noise or offsets.
The HLE tool in the fcurve editor lets you shape an fcurve in an overall fashion, like a lattice shaping an object's geometry. The HLE tool creates a sculpting curve that has few keys (shown here in green), but each one refers to a group of points on the dense fcurve.
Plot the retargeted animation on a rig into fcurves so that you can keep and edit the animation.
Before you start tagging the character elements or retargeting animation, make sure that the skeleton or rig is in a model. Retargeting can work only within model structures.
Retargeting animation between rigs When you retarget animation between rigs, the retargeting operator figures out which rig elements match based on their tags. Then it maps and generates the animation that is transferred to the target rig. The animation between the two rigs is a live link that allows for interaction. Select the source rig, then press Ctrl and select the target rig. Then choose the Tools > MOTOR > Rig to Rig command to retarget the animation from the source to the target rig. If you want to save the animation on the target rig, you must plot (bake) it into fcurves.
Tagging a rigs elements Tagging tells Softimage which part is which on your character, such as its hips, chest, legs, root, and so on. You tag the rig controls or skeleton parts that you use to animate the character. These tags are used to create a map (template) for that character. Select a rig and choose the Tools > MOTOR > Tag Rig command to tag its elements. Once you have tagged a rig, you can use it for retargeting with another rig or with mocap data.
Retargeting mocap data from a file to a rig You can retarget mocap data from either C3D or BVH files to a tagged rig. Choose the Tools > MOTOR > Mocap to Rig command to load either a C3D or Biovision file and apply it to a rig in Softimage.
You can then save the mocap animation on the rig in a .motor file so that you can apply it to any tagged rig of the same structure.
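At its core, the tag map described above lets retargeting match elements by role rather than by name: animation stored on a source control is transferred to whichever target control carries the same tag. A simplified sketch of that mapping (plain Python, not the MOTOR tools; the Bob/Fred control names are hypothetical):

```python
def retarget(source_anim, source_tags, target_tags):
    """Tag-based retargeting sketch: animation keyed per source control is
    transferred to the target control that carries the same tag (hips,
    chest, and so on), regardless of how the controls are named."""
    by_tag = {tag: ctrl for ctrl, tag in source_tags.items()}
    return {
        target_ctrl: source_anim[by_tag[tag]]
        for target_ctrl, tag in target_tags.items()
        if tag in by_tag and by_tag[tag] in source_anim
    }

# hypothetical rigs with differently named controls but matching tags
source_tags = {"Bob.hip_ctrl": "hips", "Bob.chest_ctrl": "chest"}
target_tags = {"Fred.pelvis": "hips", "Fred.torso": "chest"}
anim = {"Bob.hip_ctrl": [(1, 0.0), (10, 1.0)], "Bob.chest_ctrl": [(1, 0.0)]}

print(retarget(anim, source_tags, target_tags))
# Bob's hip animation lands on Fred.pelvis, chest animation on Fred.torso
```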
A: Main menu bar contains all standard menu commands. This is the same as in the main Softimage interface.
B: The Face Robot panel gives you access to all six Face Robot stages for completing your facial animation.
C: Click this button to hide/display the Face Robot panel and enlarge the viewport.
D: Click this button to display/hide the Softimage main command panel (MCP).
E: Click this button to display/hide the standard Softimage toolbars.
Section 11
Shape Animation
Shape animation is the process of deforming an object over time. You take snapshots called shape keys of the object in different poses, then you blend these poses over time to animate them. Softimage offers a number of tools with which you can create shape animation, allowing you to choose the method that works for you.
Shape animation is done for this face by simply moving the points in different clusters on the head object, then storing a shape key for each cluster's pose. You could also treat the whole head object as a cluster and deform its points in the same way, then store shape keys for each pose of the object.
You can use surface or polygon objects to create shape animation, or even curves, particles, and lattices: any geometry that has a static number of points.
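Blending shape keys works per point: each key stores how far each point moves from the base shape, and the animated result is the base plus the weighted sum of those offsets. A small sketch of that blend (illustrative Python with made-up two-point data, not Softimage's shape system):

```python
def blend_shapes(base_points, shape_deltas, weights):
    """Each shape key is stored as per-point offsets (deltas) from the
    base shape; the weighted deltas are summed onto the base points."""
    out = []
    for i, (x, y, z) in enumerate(base_points):
        for name, w in weights.items():
            dx, dy, dz = shape_deltas[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

# a toy two-point "head" with two hypothetical shape keys
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = {"smile":    [(0.0, 0.2, 0.0), (0.0, 0.3, 0.0)],
          "jaw_open": [(0.0, -0.5, 0.0), (0.0, 0.0, 0.0)]}

result = blend_shapes(base, deltas, {"smile": 1.0, "jaw_open": 0.5})
print([tuple(round(c, 3) for c in p) for p in result])
# → [(0.0, -0.05, 0.0), (1.0, 0.3, 0.0)]
```

Animating the weights over time (as the weight sliders do later in this section) is what turns these static keys into shape animation.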
Whole object. A cluster including all points on head is automatically created when you store a shape key.
Object with tagged points. A cluster of these points is automatically created when you store a shape key.
Click the Clusters button on the Select panel to see a list of the object's clusters.
Always store shape keys using the same cluster of points. When you deform an object but store a shape key only for a cluster of points on that object, the deformed points that don't belong to that cluster snap back to their original position when you change frames. To make it easier to use the same cluster, give the cluster a descriptive name as soon as you create it.
Object Relative Mode: Shape deforms with object but keeps original orientation.
In Modeling mode, create and deform the object to be shape-animated. This is the base shape for the object, which is a result of all the operators in the Modeling region of the object's construction history. When you create shape keys, they are stored as the difference of point positions from this base shape's geometry.
Select one of the four construction modes from the list in the menu bar at the top of the Softimage window.
If the object is to be an envelope for a skeleton, switch to Animation mode and apply it as an envelope. In this case, the jaw bone is rotated to help deform the envelope for lip syncing.
3 Switch to Shape Modeling mode to create shape keys. These shape keys are set in reference to the object's base shape (each cluster is an offset from the base).
Markers in the explorer divide up the object's construction history into regions that correspond to the four construction modes. Deformation operators are kept in their appropriate region.
4 To fix any geometry problems caused by the envelope's animation, switch to Secondary Shape mode and create shape keys in reference to the animated envelope's geometry. For example, you can fix up the shape in the corner of the mouth in relation to the jaw opening and deforming the envelope.
When you create a new shape in the shape manager, a shape key is added to the object's Mixer > Sources > Shape list and shape clips are created for the object in the animation mixer.
4 Repeat these two steps to create a library of different shapes for this object.
3 Deform the object or cluster into a new shape in the shape viewer.
With an object selected, select Shape or an existing shape in the shape list.
Go to the next frame at which you want to set a key, change the values of the weight sliders, and set another key. Continue on in this manner.
5 On the Animate tab, set the values of the shape weight sliders until you get the shape you want. Notice how the object updates in the shape viewer as you change the slider values. Set a key at this frame.
Selecting target shapes sets up a relationship between the base object and the shape keys, allowing you to fine-tune the target shapes and have those adjustments appear on the base object. For example, if your client thinks that the nose is too long on one of the target shapes, all you have to do is change the nose on that shape and the nose on the base object is updated. You can also choose to break the relationship between the base object and its target shapes to keep performance optimal.
Select the base object and choose Deform > Shape > Select Shape Key. Then pick each of the target shapes in the order that you want to create shape keys for the object.
Label the first shape key created in the Name text box, such as face. The other shape keys use this name plus a number, such as face1, face2, and so on. For each target shape you pick, a shape key is added to the model's Mixer > Sources > Shape folder.
To create the animation, set the values for each shape key's weight slider in the animation mixer or in the Shape Weights custom parameter set. In either the mixer or the parameter set, click the weight slider's animation icon to key this value at this frame.
Select a cluster of points or the whole object (creates one cluster for the object).
When you store and apply, the shape key is applied to the cluster or object at the current frame. A shape clip for this shape key is also created in the animation mixer.
Go to the next frame at which you want to set a shape key, deform the cluster or object, and store and apply another shape key.
You can edit the shape animation in the mixer. You can resize and layer the clips, and add transitions between the clips for a smooth change between shapes. You can also animate the weight of each shape clip against each other in the mixer or in the Shape Weights custom parameter set.
Notice how the shape interpolates over time, from clip to clip.
You can make composite shapes by creating compound clips for different clusters on the same frames of different tracks. For example, one compound clip could drive the eyebrow cluster of a character while another clip drives the mouth cluster.
You can easily reorder the shape clips in time on the tracks, or duplicate a clip to repeat a shape several times over the animation. Because each shape clip refers to the source, you don't need to duplicate the source. Create a sequence of shapes by creating clips one after another, using transitions to help smooth the changes between them.
No matter which tool you use, the basic process is the same: go to the frame you want, set each shape weight's value, then click the keyframe or animation icon to set a key. You can then edit the resulting weight fcurve in the animation editor as you would any other fcurve.
After you are done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weight's animation icon and choose Animation Editor.
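Between two weight keys, the fcurve interpolates the value. A linear version of that interpolation can be sketched like this (a conceptual illustration; Softimage fcurves default to smoother spline interpolation):

```python
def eval_fcurve_linear(keys, frame):
    """Evaluate a weight fcurve with linear interpolation between keys.
    keys: a sorted list of (frame, value) pairs."""
    if frame <= keys[0][0]:
        return keys[0][1]            # hold the first key before the curve starts
    if frame >= keys[-1][0]:
        return keys[-1][1]           # hold the last key after the curve ends
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

keys = [(1, 0.0), (25, 1.0), (50, 0.0)]   # weight ramps up, then back down
print(eval_fcurve_linear(keys, 13))       # 0.5: halfway up the ramp
```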
Shape 1 + Shape 2: a normalized mix of Shapes 1 and 2. The shapes are averaged, resulting in a combination of the shapes; the total weight value of the two shapes equals 1.
Section 12
The animation mixer is well suited to editing existing material and bringing together all the pieces of an animation. In it, you can assemble all the bits and pieces you've imported from different scenes and models and build them into a final animation.
There are a number of ways in which you can share animation between models, whether they are in the same scene or different scenes. You can copy action sources, clips, compound clips, and even a model's whole Mixer node between models. And when you duplicate a model, all of its sources, clips, and mixer information are duplicated as well.
Each action clip is an instance of its action source. The original animation data stays untouched, making it easy to experiment with the animation without fear of destroying anything. You can always go back and change the original data and all your changes will automatically be applied; or you can add animation on top of the original animation source, as you may want to do with motion capture data. On the frames covered by the clip, the data stored in the source drives the animation for the object. The mixer overrides any other animation that is on the object at that frame, unless you set a special option that mixes an action clip with fcurves on the object over the same frames.
Multiple tracks let you overlap clips in time and mix their weights. The playback cursor shows the current frame on the timeline.
Tracks are the background on which you add and sequence clips in the mixer. You can sequence one clip after another on the same track or on different tracks. To overlap clips in time for mixing, they must be on separate tracks. Animation (action) tracks are green, shape tracks are blue, and audio tracks are sand-colored.
You can ripple, mute, solo, and ghost all clips on a track.
Clips appear as colored bars according to their type. Create sequences of clips on the same track or on different tracks.
Mix overlapping clips by setting and animating their weight values in the weight panel.
To add a track, press Shift+A, Shift+S, or Shift+U to add animation (action), shape, or audio tracks, respectively. You can also choose a type from the Track menu.
When you create an action source, it is saved in the Sources > model folder for the scene, which you can find in the explorer. This lets you see all sources for all models in the scene. For convenience, however, a copy of the source is also available in the model's Mixer > Sources > Animation folder. The name of this source is in italics to indicate that it's a copy of the original source.
2 Select the animated object and choose an appropriate command from the Actions > Store menu. This stores the animation in an action source.
3 Right-click a track and choose Insert Source. An action clip is created. You can also drag a source from the model's Sources folder in the explorer and drop it on a track.
Once the clip is in the mixer, you can manipulate it in many ways. Here are some ideas ...
You can composite actions by adding clips for different parameters on the same frames of different tracks. Here, the top clip drives the legs of the character while the bottom clip drives the arms.
You can use the mixer as a simple sequencing tool that lets you position and scale multiple clips on a single track. You may find pose-to-pose animation easy to do with the mixer: save static poses of a character, load the actions onto the tracks in sequence, and then create transitions between the poses.
If you want to modify an action clip without affecting the source, you must use clip effects.
Click this button to access the source's fcurves or constraints (depending on the type of animation in the source).
Select the action source in the model's Mixer > Sources > Animation node, then choose the Actions > Apply > Action command to restore it to that object.
Creating Action Sources from Clips
Because applying works only on sources, you can't use it on clips. But what do you do when you want to combine some clips? You can select the clips and choose Clip > Freeze to New Source or Clip > Merge to New Source in the mixer to create a new source. You can then apply this new source to the model with the Actions > Apply > Action command.
If expressions are stored in the source, enter information in a Value cell to edit them.
To add keys to a source, use the Action Key button in the mixer's command bar.
Clips are represented by boxes on tracks in the mixer that you can move, scale, copy, trim, cycle, bounce, etc. Clips define the range of frames over which the animation items in the source are active and play back. You can also create compound clips which are a way of packaging multiple clips together so that you can work with larger amounts of animation data more easily.
Select and move clips: select and drag a clip to move it somewhere else on the same track or to a different track of the same type (action, shape, or audio).
Press Ctrl while dragging the clip to copy it. You can copy clips between different models' mixers this way, one clip at a time. Drag either of the clip's upper corners to hold the clip's first or last frames for any number of frames. Drag either of the clip's lower corners to cycle it. Ctrl+drag either of the clip's lower corners to bounce it.
Click and drag in the middle of either end of a clip to scale it.
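How a clip remaps scene time into source time when it is scaled, cycled, or bounced can be sketched as follows. This is a conceptual model with hypothetical names, not the mixer's actual code:

```python
def clip_to_source_frame(frame, clip_start, source_length, scale=1.0, mode="cycle"):
    """Map a scene frame to a frame inside the clip's source.
    'cycle' repeats the source; 'bounce' plays it forward, then backward."""
    local = (frame - clip_start) / scale          # undo the clip's scaling
    if mode == "cycle":
        return local % source_length
    if mode == "bounce":
        period = 2 * source_length                # forward pass + backward pass
        t = local % period
        return t if t < source_length else period - t
    return min(max(local, 0), source_length)      # hold outside the clip

# A 10-frame source, clip starting at frame 0, scaled 2x (plays at half speed):
print(clip_to_source_frame(24, 0, 10, scale=2.0, mode="cycle"))   # 2.0
print(clip_to_source_frame(24, 0, 10, scale=2.0, mode="bounce"))  # 8.0
```

At scene frame 24 the scaled clip has played 12 source frames, so a cycled clip is 2 frames into its second repetition, while a bounced clip is 8 frames in, playing backward.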
Transitions interpolate from one clip to the next, making the animation flow smoothly between clips rather than jerking suddenly at the start of the next clip. If you're working in a pose-to-pose method of animation using pose-based action clips, you need transitions to prevent a blocky-looking animation.
Add markers to clips to attach information, such as notes that help you synchronize action or shape clips with audio clips.
Create thumbnails for each clip to help you quickly identify what's in them.
For the club-bot here, an arm wave action is being mixed with a dejected turn action.
4 Click each weight's animation icon to set a key for this value at this frame.
You can control how the weights of clips are combined using the Normalize option in the Mixer Properties: When Normalize is on, the weight values of the separate clips are averaged out. This is useful if you're blending similar actions, such as two leg actions of a character. When Normalize is off, mixes are additive, meaning that the weight values of the separate clips are added on top of each other. This is useful if you're weighting dissimilar actions against each other, such as weighting arm and leg actions of a character.
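The difference between the two modes is just arithmetic, and can be sketched like this (a conceptual illustration of the math, not Softimage's implementation):

```python
def mix_clips(values, weights, normalize=True):
    """Combine the parameter values of overlapping clips by weight.
    Normalize on: weights are scaled to sum to 1, so the values are averaged.
    Normalize off: each weighted value is added on top of the others."""
    total = sum(weights)
    if normalize and total > 0:
        weights = [w / total for w in weights]
    return sum(w * v for w, v in zip(weights, values))

# Two overlapping clips both driving a parameter to 1.0, each at full weight:
print(mix_clips([1.0, 1.0], [1.0, 1.0], normalize=True))   # 1.0 (averaged)
print(mix_clips([1.0, 1.0], [1.0, 1.0], normalize=False))  # 2.0 (additive)
```

This is why normalizing suits similar actions (two leg cycles should average, not double up), while additive mixing suits dissimilar actions layered on different parameters.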
You can also create a custom parameter set, then drag and drop the animation icons from each action clip weight in the mixer into the parameter set to make proxies of those weight sliders.
5 After you're done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weight's animation icon and choose Animation Editor.
Leg effector is translated to a position where Club-bot is just about to kick the ball and an offset key is set.
The clip effect is created and displayed as a yellow bar above the clip.
To offset a clip's values, you can:
Click the Offset Map button in the mixer's command bar.
Choose the Set Offset Map - Changed Parameters command, which compares the current value of all parameters driven by the clip and sets an offset if there is a difference.
Choose Effect > Set Offset Keys - Marked Parameters, which is the same as creating a clip effect, except that the clip effect's offset expression is created for you.
Choose the Set Pose Offset command to offset all transformations (scaling, rotation, and translation). All parameters to be offset are calculated together as a whole instead of as independent entities. The pose offset is especially useful for offsetting an object's rotation as well as its position. As with clip effects, pose offsets sit on top of a clip's animation.
The cid variable in a clip effect is the cycle ID number. The cycle ID can be used to progressively offset a parameter in an action, such as making a walk cycle move forward. The cycle ID of the current frame is shown in the Time Control property editor (select the clip and press Ctrl+T). For example, with a clip effect expression like (cid * 10) + this, the parameter value of the action is used for the duration of the original clip, then 10 is added for the first cycle, 20 for the second cycle, and so on.
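The behavior of that expression can be sketched in Python, where this stands for the parameter's value from the source and cid for the cycle ID (a conceptual illustration, not the mixer's expression engine):

```python
def offset_per_cycle(source_value, cid, step=10):
    """Mimic the clip effect expression (cid * 10) + this: the source value
    is used as-is on the original clip (cid 0), then offset by 10 per cycle."""
    return cid * step + source_value

# A walk cycle whose root position reaches 3 at some frame of the source:
print(offset_per_cycle(3, cid=0))  # 3  -> original clip
print(offset_per_cycle(3, cid=1))  # 13 -> first cycle, offset by 10
print(offset_per_cycle(3, cid=2))  # 23 -> second cycle, offset by 20
```

Each repetition of the cycle therefore starts where the previous one would have ended, so the character walks forward instead of sliding back to its start position.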
When you apply a timewarp to a compound clip, it creates an overall effect that encompasses all clips that are contained within the compound clip. If your clip is cycled or bounced, the timewarp can either be repeated on each cycle or bounce or encompass the duration of the whole extrapolated clip (the warp is not repeated with each cycle or bounce). This means, for example, that the overall animation on a cycled clip could increase in speed with each cycle. You can apply a timewarp by right-clicking a clip and choosing Time Properties, or by selecting a clip and pressing Ctrl+T. The Warp page is home to both the Do Warp and Clip Warp options. Use the Clip Warp option for applying a warp over an extrapolated clip to warp its overall animation.
These two models can share actions easily because they have similar hierarchies.
There are a number of ways in which you can share animation between models, whether they are in the same scene or different scenes:
Copy action sources and compound sources between models in the same scene.
Copy action clips and compound clips (which let you combine a number of clips non-destructively) between models.
Save an action source as a preset to copy action sources between models in different scenes.
Create an external action source in a separate file in different formats (.xsi or .eani) to be used in other Softimage scenes.
Import and export action sources in different file formats to be used in other scenes or other software packages.
Import and export a model's animation mixer as a preset (.xsimixer) to copy it to models in the same scene or another scene.
You can also create connection-mapping templates to specify the proper connections between models before you copy action sources between them. These templates set up rules for mapping the object and parameter names stored in the action sources, which is useful when similar elements have different naming schemes, such as L_ARM and LeftArm. To create a connection-mapping template, open the animation mixer and choose Effect > Create Empty Connection Template. A template is created for the current model and the Connection Map property editor opens. Once you have created an empty connection-mapping template, you can add and modify the rules as you like.
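Conceptually, a connection-mapping template is a set of name-mapping rules. A minimal sketch (hypothetical rule names, not the Connection Map's actual format):

```python
# Conceptual sketch of connection-mapping rules: translate object names
# stored in an action source into the names used by the target model.
rules = {
    "L_ARM": "LeftArm",
    "R_ARM": "RightArm",
    "L_LEG": "LeftLeg",
}

def map_connection(stored_name, rules):
    """Return the target model's name for an object name stored in the source."""
    return rules.get(stored_name, stored_name)  # unmapped names pass through

print(map_connection("L_ARM", rules))   # LeftArm
print(map_connection("SPINE", rules))   # SPINE (no rule: name is unchanged)
```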
Jaiqua's elements (on the left) are mapped to the corresponding ones on the Club-bot using a connection-mapping template. This is set up before action sources are shared between them.
Sound files are added as audio clips on tracks in the animation mixer in the same way that you load action and shape sources as clips on tracks. Once you have an audio clip in the mixer, you can move it along the track, copy it, scale it, add markers to it, mute it, and solo it. The following process shows how you can easily load and play sound files in the animation mixer.
In the Playback panel, click the All button so that RT (real-time playback) is active. Play the audio clip using the regular playback controls below the timeline, including scrubbing in the timeline and looping. Toggle the sound on and off by clicking the headphones icon.
Markers let you delimit different portions of the audio clip and give their wave patterns a corresponding meaningful name to help you synchronize more easily with the animation. Move the playback cursor to the portion of audio wave you want to mark. Create markers with the Create Marker tool in the mixer by pressing the M key, then dragging over a range of frames on the clip.
4 Adjust the animation of the character (such as facial animation) to match the marked audio waveforms. To help do this, you can view the audio waveform in the timeline or the fcurve editor to sync with the animation. Or you can create a flipbook to preview the animation with audio.
When youre satisfied with the results, do a final render and use an editing suite to add the sound to the final animation.
Section 13
Simulation
Imagine a scene with an alien climbing out of her space ship: it has just crashed to the ground after breaking through fence posts like match sticks, smoke streaming out of the engine. As she stares at the burning rubble that was once her home in the skies, a single tear rolls down her cheek. She stumbles through a raging snow storm, the howling wind whipping through her hair and tearing at her cape. You can use all the simulation powers in Softimage to create your own compelling scenes; all the tools are there for you.
Simulated Effects
In Softimage, you can simulate almost any kind of natural, or unnatural, phenomena you can think of. To simulate these phenomena, you must first make objects into rigid bodies, soft bodies, or cloth, generate hair from an emitter, or create ICE particles. Only these types of objects can be influenced by forces and collisions to create simulations. Forces make simulated objects move and add realism. As well, you can create collisions using any type and number of obstacles for any type of simulated object.
Simulated object types shown: hair, particles, cloth, and rigid bodies.
To use forces on ICE particles, see Forces and ICE Simulations on page 250.
Types of Forces
You can use any of these forces with hair, ICE particles, and rigid bodies, but not all forces work with soft body or cloth.
Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from simulated objects, their size must be taken into consideration.
The Fan creates a local effect of wind blowing through a cylinder, so that everything inside the cylinder is affected.
An Eddy force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a cylinder.
The Drag force opposes the movement of simulated objects, as if they were in a fluid.
The Vortex simulates a spiralling, swirling movement.
The Wind is a directional force with velocity and strength. It generates a force that speeds up simulated objects to a target velocity.
The Turbulence force builds a wind field to let you imitate turbulence effects, such as the violent gusts of air that occur when an airplane lands.
The Toric force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a torus.
The Attractor force attracts or repels simulated objects, much like a magnet attracts or repels iron filings.
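The way several of these forces act on a simulated point can be sketched with simple Euler integration. This is a conceptual illustration with made-up constants, not the solver Softimage uses:

```python
def step(pos, vel, dt, gravity=-9.8, drag=0.1, wind=(2.0, 0.0)):
    """One Euler step for a 2D point: gravity accelerates it downward,
    drag opposes its motion, and wind pushes it toward a target velocity."""
    ax = drag * (wind[0] - vel[0])            # wind speeds the point toward its target
    ay = gravity + drag * (wind[1] - vel[1])  # gravity plus drag toward the wind's y target
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Drop a point from height 10 into a rightward wind, simulating 10 steps of 0.1s:
pos, vel = (0.0, 10.0), (0.0, 0.0)
for _ in range(10):
    pos, vel = step(pos, vel, 0.1)
```

After the loop, gravity has given the point a downward velocity while the wind has begun accelerating it toward the wind's target speed, which is the qualitative behavior the force descriptions above describe.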
1. Select the hair, cloth, or soft body object to which you want to apply the force. 2. Create a force from the Get > Force menu on the Simulate toolbar. 3. The force is automatically applied to the selected object. You could also select the hair object and apply an existing force to it by choosing Modify > Environment > Apply Force on the Hair toolbar, or select the cloth/soft body object and choose Cloth/Soft Body > Modify > Apply Force on the Simulate toolbar. For rigid bodies, the process is simpler: simply create a force from the Get > Force menu and it is applied to all rigid bodies in the current simulation environment.
The render hairs are interpolated between the guide hairs; these are the hairs that are rendered. Guide hairs are shown in white (selected). These are the hairs that you style.
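Render hairs being interpolated between guide hairs can be sketched as blending neighboring guides point by point. This is a conceptual illustration; the real interpolation weights come from the hair's emitter surface:

```python
def render_hair(guides, weights):
    """Build one render hair as a weighted blend of guide hairs.
    guides: list of guide hairs, each a list of (x, y) segment points."""
    n_points = len(guides[0])
    hair = []
    for i in range(n_points):
        x = sum(w * g[i][0] for w, g in zip(weights, guides))
        y = sum(w * g[i][1] for w, g in zip(weights, guides))
        hair.append((x, y))
    return hair

guide_a = [(0.0, 0.0), (0.0, 1.0)]   # a guide standing straight up
guide_b = [(2.0, 0.0), (3.0, 1.0)]   # a guide leaning to the right
print(render_hair([guide_a, guide_b], [0.5, 0.5]))  # [(1.0, 0.0), (1.5, 1.0)]
```

Because render hairs are derived from the guides, styling a few guide hairs is enough to shape the much larger number of hairs that actually appear in the render.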
5 Select obstacles for hair collisions. 6 Adjust the default hair shader or apply another one to the hair.
Because guide hairs are actual geometry, you can use all of the standard Deformation tools on them to come up with some groovy hairdos! Lattices, envelopes, deform by cluster center, randomize, and deform by volume usually produce the best results. However, if you animate the deformations, you cannot then use dynamics on the hair.
Use the Brush tool to sculpt hairs with a natural falloff, like proportional modeling. Translate and rotate specific tips or points of hair.
Select tips, points, or entire strands of hair to style in any way. Here, just the tips of some hair strands are selected.
When you use a styling tool after selecting Tip, press Alt+spacebar to return to the Tip selection tool. Copy the style to another hair object.
Use the Clump tool to bring hair strands or points together or fan them out.
Change the length of the guide hairs using the Cut tool or the Scale tool.
You can deform the shape of the hair using any deformation tool, like a lattice. To have smoother animation, activate Stretchy mode to allow the hair segments to stretch along with the deformation.
Add kink, waves, and frizz to render hairs to change their shape.
5 Set the Cache to Read&Write, then play the simulation to cache it to a file for faster playback and scrubbing. Caching also helps for more consistent rendering results.
Change the number of segments to change the hair's resolution. Use a higher number for curly or wavy hair.
Tip: Click the Style button on the Hair toolbar to toggle the dynamics state. You can style the hair only when dynamics is off.
Set the hair's density according to a weight or texture map so that you can create bald spots or sparser growth. You can also use cut maps for the render hair length so that some areas have shorter hair than others, according to a weight map.
Rendering hair is similar to rendering any other object in Softimage. You can use all standard lighting techniques (including final gathering and global illumination), set shadows, and apply motion blur. Hair is rendered as a special hair primitive geometry by the mental ray renderer.
How to attach shaders to hair
While you can use any type of Softimage shader on hair, the Hair Renderer and Hair Geo shaders give you the most control for making the hair look the way you want. You can determine different coloring, transparency, and translucency anywhere along the length of the hair, such as at the roots and tips.
Select the hair and open a render tree (press 7). This tree shows the default shader connection when you create hair.
2 The Hair Renderer shader gives you control over coloring, transparency, and shadows along the hair strands. You can also optimize the render and take advantage of final gathering.
To switch to the Hair Geo shader, choose Nodes > Hair > Hair Geometry Shading and attach it to the hair's Material node in the same way as the Hair Renderer shader.
3 To connect other Softimage shaders to the hair, disconnect the current Hair shader. Then you can load and connect another shader directly to the hair's Material node. For example, you can attach a Toon Paint or standard surface shader to the Surface and Shadow inputs of the hair's Material node to change the hair's color.
The Hair Geo shader lets you set the coloring, transparency, and translucency using gradient sliders, which give you lots of control over where the shading occurs along the hair strand. You can even add incandescence to make the hair glow.
To get started with some hair coloring, choose View > General > Preset Manager, then drag and drop a preset from the Materials > Hair tab onto a hair object. These presets use the Hair Renderer shader.
Incandescence on the rim of the hair strand.
Transfer the texture map from the hair emitter to the hair object using the Transfer Map button.
To render instances for the hairs, simply put the objects you want to instance into a group; each object in the group is assigned to a guide hair using the Instancing options in the Hair property editor. The instanced geometry is calculated at render time, so you'll only see the effect in a render region or when you render the frames of your scene.
You can change the color of the hair using a texture map connected to the hair shaders color parameters.
You can choose whether to replace the render hairs or just the guide hairs. You can also control how the instances are assigned to the hair (randomly or using weight map values), as well as control their orientation by using a tangent map or having them follow an object's direction.
Center of mass is moved to the bottom right corner of the object. Notice how the box hits the edge and tumbles more quickly with more spinning.
Tip: Animation ghosting lets you display a series of snapshots of the rigid bodies at frames behind and/or ahead of the current frame. You can preview the simulation result without having to run the simulation!
Simulation Environments
All elements that are part of a rigid body simulation are controlled within a simulation environment. A simulation environment is a set of connection groups, one for each type of element in the simulation.
You can see the current simulation environment by using the Curr. Envir. scope in the explorer. Or use the Environments scope to see all simulation environments in the scene. All elements involved in the rigid body simulation are contained within this environment.
Passive or Active?
Rigid bodies can be either active or passive: Active rigid bodies are affected by dynamics, meaning that they can be moved by forces and collisions with other rigid bodies.
A simulation environment is created as soon as you make an object into a rigid body. You can also create more environments so that you have multiple simulation environments in one scene. The dynamics operator solves the simulation for all elements in this environment. You have a choice of dynamics operators in Softimage: PhysX or ODE. PhysX is the default operator, offering stable and accurate collisions with many rigid bodies in a scene, even when using the rigid body's actual shape as the collision geometry. ODE is a free, open-source library for simulating rigid body dynamics.
Passive rigid bodies participate in the simulation but are not affected by dynamics; that is, they do not move as a result of forces or collisions with other rigid bodies. They can, however, be animated. You often use passive objects as stationary obstacles, or as stationary objects in conjunction with rigid constraints (as an anchor point). You can easily change the state of a rigid body by toggling the Passive option in the rigid body's property editor.
The pool table is a passive rigid body, while the white ball is an active rigid body with the gravity force applied. The ball rebounds off the table but the table does not move.
Animation or Simulation?
You can apply rigid body dynamics to objects whether or not they are animated: If the rigid bodies are animated, you can use their animation (position, rotation, and linear/angular velocity) as the initial state of the simulation. When you apply a force to an animated rigid body, the force takes over the object's movement as soon as the simulation starts. If the rigid bodies are not animated, you need to apply a force to make them move. You can easily animate the active/passive state of a rigid body to achieve various effects: you simply animate the Passive option in the rigid body's property editor.
Animation: The billiard ball is a passive rigid body whose rotation and translation are animated to make it move to the table's edge. A gravity force has been applied to the simulation environment. When the ball reaches the edge of the table, the ball's state is switched from passive to active, the simulation takes over, and gravity makes the ball fall down.
All billiard balls are assigned as active rigid bodies. When the white ball (circled) hits them, they all react to the collision.
Simulation
Elasticity and Friction
All rigid bodies use a set of collision properties to calculate their reactions to each other during a collision, including elasticity and friction.
Elasticity is the amount of kinetic energy that is retained when an object collides with another object. For example, when a billiard ball hits the table, elasticity influences how much the ball rebounds.
Friction is the resisting force that determines how much energy is lost by an object as it moves along the surface of another. For example, a billiard ball rolling along a table has a lower friction value than a rubber ball rolling along a table. Likewise, a billiard ball rolling on a carpet has more friction than one rolling on a marble floor.
Collision Geometry Types
The collision type is the geometry used for the collision, which can be a bounding box/capsule/sphere, a convex hull, or the actual shape of the rigid body's geometry.
Bounding shapes (capsules, spheres, and boxes) provide a quick solution for collisions when shape accuracy is not an issue or when the bounding shape's geometry is close enough to the shape of the rigid body.
Actual Shape provides an accurate collision but takes longer to calculate than bounding shapes or convex hulls. This is useful for rigid body geometry that is irregular in shape or has holes, dips, or peaks that you want considered in the collision, such as a bowl with cherries falling inside it.
Convex hulls give a quick approximation of a rigid body's shape, with results similar to a box being shrinkwrapped around the rigid body. They have the advantage of being very fast. Any dips or holes in the rigid body geometry are not calculated, but the hull is otherwise the same as the rigid body's original shape.
Actual Shape provides an accurate collision using the rigid bodys original shape.
Convex hull doesn't calculate the dip in this bowl, but is otherwise the same as the bowl's shape.
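How elasticity and friction enter a collision response can be sketched like this (a simplified conceptual model, not the PhysX or ODE solver):

```python
def collide(vel_normal, vel_tangent, elasticity, friction):
    """Resolve a collision against a surface: elasticity scales how much
    normal velocity is kept (the rebound); friction removes tangential energy
    (slowing the slide along the surface)."""
    bounced = -elasticity * vel_normal          # rebound off the surface
    sliding = vel_tangent * (1.0 - friction)    # friction slows the slide
    return bounced, sliding

# A billiard ball hits the table at 4 units/s downward while sliding at 2 units/s:
print(collide(-4.0, 2.0, elasticity=0.5, friction=0.2))  # (2.0, 1.6)
```

With elasticity 1.0 the ball would rebound at full speed; with elasticity 0.0 it would stop dead against the surface, which matches the descriptions of the two properties above.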
2 Pick the first constrained rigid body (A). The constraint object connects to its center. A is a passive rigid body and B is an active rigid body.
Hinge, Spring, and Fixed constraints between rigid bodies A and B. Rigid body B's resulting movement with gravity applied. Notice how the constraint object is attached to both rigid bodies' centers.
Cloth Dynamics
The cloth simulator uses a spring-based model for animating cloth dynamics. You can specify and control the mass of the fabric, the friction, and the degree of stiffness, allowing you to simulate different materials such as leather, silk, dough, or even paper. Cloth deformation is controlled by a virtual spring net made up of three different types of springs, each controlling a different kind of deformation: shearing, stretching, and bending.
After you set up how the cloth is deformed according to its own internal spring-based forces, you can then affect how it's deformed using external forces, such as gravity, wind, fans, and eddies. As well, you can have the cloth collide with external objects or with itself. The obstacles can be animated or deformed, and they interact with the cloth model according to the cloth's and obstacles' friction.
Although you can apply cloth only to single objects, you can create a larger object (such as a garment) made of multiple NURBS surface patches stitched together using any number of points. You must first assemble the different patches into a single surface mesh object, then apply cloth to that object. Set the Stitching parameters in the ClothOp property editor to create seams between the different NURBS surfaces of the same surface mesh model.
Low resistance to Bend. Low resistance to Stretch. Low resistance to Shear.

Bend controls the resistance to bending. With low values, the cloth moves very freely like silk; with high values, the cloth behaves like rigid linen or even leather.

Stretch controls the resistance to stretching, which is the elasticity of the material. Low values allow the cloth to deform without resistance, while higher values remove the cloth's elasticity.

Shear controls the resistance to shearing (crosswise stretching), keeping as much of the original shape as possible. Try decreasing this value if the cloth's wrinkling is too rigid.
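All three resistances come from the same underlying idea: springs that push back when the cloth deviates from its rest shape. A minimal Hooke-spring sketch follows; the real cloth solver is more elaborate, and the names and exact formula here are illustrative assumptions.

```python
import math

# One spring of the virtual spring net. Stretch, shear, and bend
# springs all use the same kind of formula; they differ in which
# cloth points they connect and in their stiffness values.

def spring_force(p_a, p_b, rest_length, stiffness):
    """Force on point A pulling it toward its rest distance from B."""
    delta = [b - a for a, b in zip(p_a, p_b)]
    length = math.sqrt(sum(c * c for c in delta))
    if length == 0.0:
        return [0.0, 0.0, 0.0]
    stretch = length - rest_length   # > 0 stretched, < 0 compressed
    return [stiffness * stretch * c / length for c in delta]

# Twice the rest length apart: the spring pulls A toward B.
f = spring_force([0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                 rest_length=1.0, stiffness=10.0)
```

A high stiffness value for the stretch springs corresponds to a high Stretch resistance in the property editor, and likewise for shear and bend.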
To give you a head start on creating cloth, there are several presets in the Cloth property editor that let you quickly simulate the look and behavior of different materials, such as leather, paper, silk, or pizza dough.
Paper preset
Silk preset
Select Animation as the Construction Mode. This tells Softimage that you want to use cloth as an animated deformation.
Select an object and choose Create > Cloth > From Selection from the Simulate toolbar.
Play the simulation. To calculate the whole simulation more quickly, go to the last frame of the simulation. You can cache the simulation to files for faster playback, which also lets you scrub the simulation and play it backward.
Set the cloths physical properties such as mass, friction, and resistance to shearing, bending, and stretching.
Apply forces to make the cloth move. Here, a little gravity and a large fan are applied to create the effect of a strong wind blowing on the flag.
You can also set clusters of points to define specific areas of a cloth that you want to be affected by the cloth simulation, then use the Nail parameter to nail down these clusters. For example, you can anchor down clusters at the sides or corners of a flag to keep it from blowing away in the wind. As well, you can animate the Nail parameter on or off, making it easy to create the effect of a cloth being grabbed and then let go.
3 Set the soft body physical properties such as mass, friction, stiffness, and plasticity. To give you a head start, click a button on the Presets page to quickly make the object behave like a rubber ball, an air bag, and more.
Soft body is a deform operator, meaning that it moves only an object's vertices, never the object's center. Soft body computes the movements and deformations of the object by means of a spring-based lattice whose resolution you can define using the Sampling parameter in the SoftBodyOp property editor.

You can use soft body on clusters (such as points and polygons), allowing only that part of an object to be deformed by soft body. For example, you can have the cluster of points that forms a character's belly be deformed by soft body for some jelly-like fun!

If the soft-body object is animated, you can either preserve its animation or recalculate it according to any forces you apply, such as wind and gravity. If you keep the object's animation, soft body acts only as a deformer on the object and does not influence its movement. If you want to convert the soft body simulation to animation, you can plot it as shape animation using the Tools > Plot > Shape command on the Animate toolbar.
4 Apply a gravity and/or wind force. If the soft body is not already animated, you need to apply a force to make it move.
5 Select objects as obstacles for collisions and choose Soft Body > Modify > Set Obstacle. Then play the simulation and watch the ball bounce!
Section 14
What is ICE?
ICE is a node-based system for controlling all the attributes that define a deformation or particle effect. There are two parts to ICE:

At its basic level, ICE is a complete visual programming environment. You can combine basic nodes for getting data, modifying data, setting data, and controlling execution flow into elaborate ICE trees. You can easily experiment, in a way that you can't when writing code, by simply connecting nodes and seeing the results immediately in the viewports. When you're done, you can package your tree into reusable compounds that you can use in other scenes, share with your team, or even put online to share with the Softimage community.

On top of that level, Softimage comes with a comprehensive set of predefined compounds for particle simulations. For simple effects, you can connect compounds that define forces or basic behaviors like sticking and bouncing. For more complex effects, you can use the predefined state machine to switch between several behaviors on a per-particle basis.

You can use ICE to:

Completely control particle systems. You can add and remove points on point clouds. You can move points directly, or apply a simulation using particle or rigid body behavior.

Deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds.

You cannot use ICE on hair, non-ICE (legacy) particle clouds, groups, or branches.

There are three ways you can approach ICE: you can simply use the predefined compounds and adjust their input values to create basic effects; at the other extreme, you can dive right in and create your own custom effects from scratch using the base nodes; or, between the two extremes, you can start with the factory compounds and then modify or augment them with extra nodes to create your own variations of effects.
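The "nodes for getting, modifying, and setting data" idea can be sketched as a tiny dataflow graph. This is a conceptual model in Python, not the ICE API; all class and variable names are made up for illustration.

```python
# A miniature dataflow graph in the spirit of ICE: nodes get data,
# modify it, and pass it downstream. Purely illustrative.

class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn           # the operation this node performs
        self.inputs = inputs   # upstream nodes (the connections)

    def evaluate(self):
        # Data flows left to right: evaluate upstream ports first,
        # then apply this node's operation to the results.
        return self.fn(*(n.evaluate() for n in self.inputs))

get_data = Node(lambda: [1.0, 2.0, 3.0])                 # "Get Data"
modify = Node(lambda v: [x * 2 for x in v], get_data)    # "Multiply"
result = modify.evaluate()                               # [2.0, 4.0, 6.0]
```

Connecting a different upstream node changes the result without touching the downstream node, which is the appeal of experimenting visually rather than in code.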
Under the hood, many nodes connected together in the point cloud's ICE tree are doing all the work.
The ICETree Node

The ICETree node is like Grand Central Station for an ICE tree: it's the main operator that processes all the data that flows into it. Nodes in the tree must be connected to it in order to be evaluated. You can have multiple ICE trees per object as long as each ICETree operator has a different name, and you can easily rename it in the explorer.

Attributes
Two nodes with ports connected together. Compound with several input ports.
Attributes are at the heart of ICE. Attributes are data that is associated with objects, or with components such as points, edges, polygons, and nodes. With attributes, you can get and set information such as a particles color or shape, or an objects point position. Almost every ICE tree involves getting and setting attributes in some way. Attributes can be inherent (always part of the scene), predefined (innately understood by certain base ICE nodes, but dynamic in that they only exist when they are set), or custom (create your own).
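One way to picture attribute storage: named data sets attached to an object, with one value per component. The class and method names below are illustrative assumptions, not Softimage's implementation; only the attribute names echo the text.

```python
# Sketch of ICE-style attributes: per-component data keyed by name.
# Custom and predefined attributes exist only once they are set.

class PointCloud:
    def __init__(self, n_points):
        self.n_points = n_points
        self.attributes = {
            # an inherent attribute: one 3D vector per point
            "PointPosition": [[0.0, 0.0, 0.0] for _ in range(n_points)],
        }

    def set_attr(self, name, values):
        self.attributes[name] = values   # creates the attribute if new

    def get_attr(self, name):
        return self.attributes[name]

cloud = PointCloud(3)
cloud.set_attr("Color", [[1.0, 0.0, 0.0]] * 3)   # a custom attribute
```

Getting `PointPosition` returns one value per point, which is exactly the "per point" context discussed later in this section.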
Compounds

Compounds are the über nodes of the ICE world. They can contain a whole ICE tree or just parts of it. Compounds make it easy to create more complex effects in the ICE tree because they package numerous nodes into one. And because they're in a package, you can easily bring compounds into other scenes or share them with other users. You can connect compounds in the same way that you do for nodes in the ICE tree. As well, you can open up a compound to edit it or just to see what makes it tick. Softimage ships with many compounds that are designed specifically for particle and deformation workflows. You can find these on the Tasks tab of the preset manager in the ICE Tree view.
Some of the many attributes that are available for point clouds. You can view attributes in an explorer.
To display an ICE Layout with the ICE tree view embedded, choose View > Layouts > ICE.
Clear. Clears the view.
Opens the preset manager in a floating window.
Displays or hides the preset manager embedded in the left panel (J).
Displays or hides the local explorer embedded in the right panel (L).
Bird's Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird's Eye View.
Memo Cams. Save and restore up to four views: left-click to recall a stored view; middle-click to store the current view; Ctrl+middle-click to overwrite a stored view with the current view; right-click to clear a stored view.
Lock. Prevents the view from updating when you select other objects in the scene. Refresh. When the view is locked, forces it to update with the current selection in the scene.
Control timers and display performance highlights. This is an advanced feature used for profiling and optimizing the performance of ICE trees.
Embedded preset manager. You can press Ctrl+F to quickly put the cursor in the preset manager's text box so that you can start typing a search string. Pressing Ctrl+F will also temporarily display the preset manager if it is hidden.
ICE Nodes in the Preset Manager In the preset manager, ICE nodes are separated into two tabs: The Tasks tab contains higher-level compounds for accomplishing specific tasks. You can select a task (Particles or Deformation) from the drop-down, and then select a sub-task from the list below. The Tools tab contains base nodes and general utility compounds for performing basic operations, like getting data, setting data, adding values, etc. You can drag a node from the preset manager into an ICE tree and connect it to the graph.
ICE tree workspace. Connect nodes by dragging an output port from the right side of one node onto an input port on the left side of another node. You can connect the same output to as many inputs as you want.

Open a node's property editor by double-clicking on it. This lets you set parameters that cannot be driven by connections. Right-click on a node, on a port, or on the background for various options.

Hover the mouse pointer over a connection to highlight the connected ports. If a port is not visible because it has been collapsed or because the view is zoomed out too far, information about the port is displayed in a pop-up.

The nodes in the tree can be base nodes or compound nodes. Compounds are encapsulated subtrees built from base nodes and other compounds. Base nodes have a single border and compound nodes have a double border. See ICE Compounds on page 267 for information on building and exporting your own compounds.

Nodes that cannot be evaluated because of a structural error are displayed in red. Other nodes that will not be evaluated because of an error in their branch are displayed in yellow. See Debugging ICE Trees on page 264.
Local explorer. When there are multiple ICE trees on the same object, click to select the one to view. You can also click on a material to switch to the render tree view.
Execution flows sequentially from top to bottom along the input ports of the ICETree node (and any other type of Execute node). Because the nodes are evaluated in order, it matters where you plug them in. Sometimes one operation requires another to be done first so that it can be evaluated properly.
Nodes that are connected to an Emit node's Execute on Emit port are applied only to new points that are generated on the current frame. They are not applied to all particles on every frame. Nodes that are connected to the root node are executed on every frame. You can control which data gets set on which elements by using If and Filter nodes in the upstream branches. The simulation framework resets every particle's force to 0 at the end of each frame, so forces must be reapplied at every frame, which is why the Add Forces node is plugged into the ICETree node and not the Emit node.
The Simulate Particles node is the standard particles node that updates the position and velocity of each particle at each frame based on mass and force. You could use the Simulate Rigid Bodies node instead to make particles into rigid bodies. Particles can then collide with each other and with other objects that are set as obstacles. You do not need to include a simulation node in your tree; if you prefer, you can set point positions directly.
Data flows downstream from left to right along connections from one node's output ports to the next node's input ports. Each connection represents a data set. The ICETree node is the main operator that processes all the data that flows into it. Nodes must be connected to it to be evaluated.
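The per-frame behavior described above (forces applied, particle state updated, forces reset to zero) can be sketched with simple Euler integration. Softimage's actual integrator is not documented here; this function and its names are illustrative assumptions only.

```python
# One frame of a particle update in the spirit of Simulate Particles.
# Forces are summed, used to integrate velocity and position, then
# cleared -- which is why forces must be re-added every frame.

def step(position, velocity, mass, forces, dt):
    total = [sum(f[i] for f in forces) for i in range(3)]
    velocity = [v + total[i] / mass * dt for i, v in enumerate(velocity)]
    position = [p + velocity[i] * dt for i, p in enumerate(position)]
    return position, velocity, []   # forces reset to 0 at end of frame

pos, vel, forces = [0.0] * 3, [0.0] * 3, [[0.0, -9.8, 0.0]]
pos, vel, forces = step(pos, vel, 1.0, forces, 1.0)
# forces is now empty: an Add Forces node would refill it next frame
```

Because the returned force list is empty, a gravity force plugged only into the emit branch would act on a particle for a single frame, matching the behavior the text describes.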
ICE Simulations
As with animation, a simulation calculates the way in which an object changes over time. However, with a simulation, the result of the current frame depends on the result of the previous frame. With ICE, you can create both particle and deformation simulations. You can emit and change particles in a point cloud for effects such as cigarette smoke curling as it rises, leaves falling lazily to the ground, vines growing up out of the ground, or even crowds of people milling about in the street. You can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds, to create effects such as turbulent ocean waves, gentle ripples on a pond, or ribbons twisting in the wind.
ICE snow particles fly from the point of impact of the boulder with the snow on the hill. An ICE deformation also occurs on the hill as the boulder rolls down it, crushing the snow as it goes.
The point cloud's simulated ICE tree emits the snow particles and makes them move. A simulated ICE tree also exists for the polygon mesh hill's deformation effect.
To do this, you can select and delete the Simulation region marker from the construction operator stack. Both the Simulation and Post-Simulation region markers are removed if either one is deleted, but operators in these regions are not removed and can be moved to the desired regions afterward.
ICE Forces
Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from objects or particles, their size must be taken into consideration.

The Surface force attracts particles or objects to, or repels them from, an object's surface. While this force is similar to creating goals for particles, it keeps the particles moving around (swarming) the surface object instead of stopping once they reach the goal.

Wind is a directional force with velocity and strength. It generates a force that speeds up particles or objects to a target velocity.

The Null Controller force uses a null to attract or repel particles or objects, much like how particles move toward or away from a goal object. Changing the icon shape of the null (to something like Rings, Square, or Circle) changes the behavior of this force.

The Neighboring Particles force attracts particles to each other when they get within a certain range, but there is no friction between the particles, so they don't stay clumped together; they keep moving.

The Drag force opposes the movement of simulated objects, as if they were in a fluid.

The Coagulate force attracts points toward their neighbors to form clumps. Once the points get within a certain range of each other, the friction (drag) slows them down.

The Point force attracts particles or objects to, or repels them from, a position in space that you define.
ICE Deformations
Any ICE tree that modifies point positions on an object without adding or deleting points can be considered a deformation. With ICE, you can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds. A deformer works by getting current point positions, modifying them based on other variables, then setting new positions. This means that you can create your own custom deformers with ICE. You can create three types of deformations with ICE: simulated, animated, and non-time based.
The snow on the polygon mesh hill crushes under the weight of the boulder as it rolls down the hill.
The simulated ICE tree for the polygon mesh hill's deformation effect. A Bulge operation is used along with turbulence.
Simulated Deformations

To create a simulated deformation in ICE, you need to use a Simulated ICETree node. You can then change the object point positions as you like with any type of deformer, including one of your own design.

As an example, the Footprints compound creates a simple deformation. It lowers the points of an object where the surface of another geometric object (the deformer) is below them in the object's local Y axis. The points stay deformed during the simulation, so you can move the deformer to create more indentations. When you return to the first frame of the simulation, the geometry returns to its initial undeformed state.
Time-based, Non-simulated Deformations

You can also use ICE to create deformations that are time-based but not simulated, in that they are not in the Simulation region of the construction stack and therefore do not depend on the previous frame's point positions.

One way to do this is to simply animate the input port values of the ICE tree. Another way is to include time-dependent nodes in the ICE tree, such as a Turbulence node. This node creates a coherent noise pattern that varies continuously in space, as well as optionally in time. Here, the Turbulence node is used to set the point positions in Y. Space Frequency was set differently in X and Z, resulting in long, thin ripples. There are also several Turbulize compounds based on this node, but designed to work with specific situations. You can find them in the preset manager.
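A minimal sketch of such a time-based, non-simulated deformation follows. A real Turbulence node produces coherent noise; plain sine waves are used here only to show how Y can depend on separate space frequencies in X and Z plus time. All names and values are illustrative assumptions.

```python
import math

def ripple_y(x, z, t, freq_x=4.0, freq_z=0.5, amplitude=0.2):
    """New Y position for a point at (x, z) at time t.
    Different frequencies in X and Z give long, thin ripples."""
    return amplitude * math.sin(freq_x * x + t) * math.sin(freq_z * z)

# Evaluated per point, per frame; the result depends only on the
# point's position and the current time, never on the previous frame,
# so this is time-based but not simulated.
y = ripple_y(1.0, 2.0, t=0.5)
```

Animating `t` from the current frame gives motion that can be scrubbed freely in either direction, unlike a simulation.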
1 Select the geometric object to be deformed and choose Deform > Footprints (ICE) from the Model, Animate, or Simulate toolbar. This creates an ICE tree for this object. Alternatively, you can get the Footprints compound from the preset manager and set up this tree yourself.
2 Pick the geometric object to act as the deformer. In this case, it's the infamous foot!
3 Play the scene to run the simulation, then move the deformer to create indentations in the object.
Non-time-based Deformations

You can create deformations that are not time-based but instead depend on the position of deformer objects or other factors to modify point positions. The deformation can then be controlled by animating the deformers in any way. The following example is a variation of the Push deformation that uses the proximity of a null to displace points along their normals.
Connecting Nodes
In general, you connect ICE nodes by clicking and dragging an output port from the right of one node onto the input port on the left of another node. You can connect the same output to as many inputs as you want. Data flows along the connection from the first node and is processed by the second node.
When you connect to an input port, any existing animation on the port's value is lost. Some nodes, such as Execute, Add, Multiply, and so on, allow an unlimited number of input connections. These nodes have special virtual ports identified as New (port name). You can connect to the New port to create a new port, or right-click on an existing port to manually insert and remove ports.

There are some special factors that determine whether you can connect two ports together:
The type of the data, as indicated by the port colors.
The context of the data.
The structure of the data: either single or array (ordered set).
Data Types
The data type defines the kind of values that a port can pass or accept, such as Boolean, integer, scalar, or vector. The data type is identified by the color of the port. You cannot connect two ports if their data types are incompatible. However, you can convert between many data types using the different Conversion nodes. Here are the types of data you might see:
Polymorphic: Accepts a variety of data types. See Polymorphic Ports on page 259.
Boolean: A Boolean value: True or False.
Integer: A positive or negative number without decimal fractions, for example, 7, 2, or 0.
Scalar: A real number represented as a decimal value, for example, 3.14. Internally this is a single-precision float value.
2D Vector: A two-dimensional vector [x, y] whose entries are scalars, for example, a UV coordinate.
3D Vector: A three-dimensional vector [x, y, z] whose entries are scalars, for example, a position, velocity, or force.
4D Vector: A four-dimensional vector [w, x, y, z] whose entries are scalars.
Quaternion: A quaternion [x, y, z, w]. Quaternions are usually used to represent an orientation. They can be easily blended and interpolated, and help address gimbal-lock problems when dealing with animated rotations.
Rotation: A rotation as represented by an axis vector [x, y, z] and an angle in degrees.
3x3 Matrix: A 3-by-3 matrix whose entries are real numbers. 3x3 matrices are often used to represent rotation and scaling.
4x4 Matrix: A 4-by-4 matrix whose entries are real numbers. 4x4 matrices are often used to represent transformations (scaling, rotation, and translation).
Shape: A primitive geometrical shape, or a reference to the shape of an object in the scene. This data type is used to determine the shape of particles.
Geometry: A reference to a geometrical object in the scene, such as a polygon mesh, NURBS curve, NURBS surface, or point cloud. You can sample the surface of a geometry to generate surface locations for emitting particles.
Surface Location: A location on the surface of a geometric object. The locator is glued to the surface of the object so that even if the object transforms and deforms, the locator moves with the object and stays in the same relative position.
Execution: Not a data type in the conventional sense. You connect Execution ports, such as the output of a Set Data node, into an Execute or root node to control the flow of execution in the tree.
Reference: Also not a data type in the conventional sense. This is a reference to an object, parameter, or attribute in the scene, expressed as a character string. You can daisy-chain these as described in Daisy-chaining References on page 261.
Polymorphic Ports

Polymorphic ports can accept several different data types. For example, the Add node can be used to add together two or more integers, or two or more scalars, or two or more vectors, and so on. Once you connect a value to a polymorphic port, its port type becomes resolved. Other input and output ports on the same node and on connected nodes may also become resolved and only accept specific data types. This reflects the fact that, for example, you cannot add an integer to a vector.
Before anything is connected, the Add node's ports are unresolved (black). After connection, controls appear for Value2. There are no controls for Value1 because it is being driven by the connection.
While polymorphic ports accept several data types, they don't necessarily accept all types of connection. For example, the ports of a Pass Through node accept any type of value, but it doesn't make sense to use a Multiply by Scalar node with a Boolean value.
Once a node is connected to Value1, then Value2 and Result become resolved. In this case, they are yellow for 3D vectors.
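The resolution behavior can be sketched as simple type unification: the first connection fixes the node's type, and later incompatible connections are rejected. This is illustrative Python only, not how ICE is implemented; the class and its logic are assumptions.

```python
# Sketch of polymorphic port resolution on an Add-like node.

class AddNode:
    def __init__(self):
        self.resolved_type = None   # unresolved (black) until connected

    def connect(self, value):
        value_type = type(value).__name__   # e.g. 'int', 'float', 'list'
        if self.resolved_type is None:
            self.resolved_type = value_type   # first connection resolves it
        elif self.resolved_type != value_type:
            # "you cannot add an integer to a vector"
            raise TypeError("cannot add %s to %s"
                            % (value_type, self.resolved_type))

add = AddNode()
add.connect([1.0, 2.0, 3.0])    # resolves the node to vector-like data
try:
    add.connect(5)              # an integer is now rejected
except TypeError as e:
    print(e)                    # cannot add int to list
```

Replacing the first connection with one of a different type would un-resolve and re-resolve the node, which mirrors the behavior described in the surrounding text.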
Data Context
In ICE, attributes are always associated with elements, either objects or one of their component types such as points, polygons, edges, and so on. For example, sphere.PointNormal consists of one 3D vector for each point of the object called sphere; in other words, the context is per point of sphere. For two ports to be connectable, their contexts must be compatible. Context is determined by two factors: The type of element associated with the data: object or a specific component type (points, polygons, etc.). The object that owns the components. The data context gets propagated through node connections in the same way as the data types of polymorphic nodes.
Even after a port's type has been resolved, you can still change it by replacing the connection with a different data type. However, this works only if the port is not resolved by other connections in the tree. If a port's type is unresolved, you cannot set values in its property editor. Once it is resolved, the appropriate controls appear in the property editor. Different data types use different controls: for example, checkboxes for Booleans, sliders for scalars, and so on.
A. Type the reference. Use periods to separate objects, properties, and attributes. Strings are not case-sensitive. Use the token self to refer to the object on which the tree exists. You can also use the tokens this (same as self) and this_model (the model that contains the object with the tree).
B. Click Explorer, expand the tree, and choose an element. The tree shows the attributes that you can get from the current element name path or location. This list includes predefined attributes and any custom attributes (including those defined in unconnected Set Data nodes).
C. Click Pick and then pick an element from a viewport, explorer, or schematic view.
You can combine methods A and B: for example, type self, click Explorer, and then choose an attribute such as PointPosition.
Daisy-chaining References

You can use the In Name and Out Name ports to connect references on Get Data and other nodes in sequence, like a daisy chain. For example, you can get sphere and then use that to get sphere.PointPosition, sphere.PointNormal, and so on. If you want to change sphere to torus later on, there's only one node that needs to be changed. This is particularly useful when creating compounds, because you only need to expose the leftmost reference.
Tokens in References

The token self always refers to the object on which the ICE tree is directly applied. This token allows you to create trees that are easily reusable because they don't depend on specific object names. Other tokens that you can use are this (same as self) and this_model (refers to the model that contains the object with the ICE tree).

If you have built an ICE tree using specific object names and want to make it more generic so that you can make a compound to use on other objects, you can automatically replace the object name with Self using User Tools > Replace Object Name with Self (Nested Compounds).

Resolving Scene References

Scene references are automatically maintained as you modify the scene.
References that are connected in this way are concatenated, so for example Get Data (Self) plugged into Get Data (PointPosition) results in Self.PointPosition. You do not need to worry about periods at the beginning or end of the references; periods are automatically added or removed as necessary.

When a node has a reference connected to its In Name port, then the Explore button and Explore for Port Data command both start from the current path. For example, if you click Explore in the property editor of a leaf (leftmost) Get Data, you can select anything starting from the scene root. However, if a Get Data has a reference to a geometric object connected to its In Name, you can select properties and attributes on that object.
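The concatenation rule is easy to model: join the incoming reference and the local reference with a period, cleaning up any stray periods. A sketch follows; the function name is made up for illustration.

```python
# Join a daisy-chained In Name reference to a node's own reference,
# adding or removing periods as necessary.

def concat_reference(in_name, local_ref):
    parts = [p for p in (in_name.strip("."), local_ref.strip(".")) if p]
    return ".".join(parts)

print(concat_reference("Self", "PointPosition"))    # Self.PointPosition
print(concat_reference("sphere.", ".PointNormal"))  # sphere.PointNormal
```

Stray leading or trailing periods on either side are stripped before joining, mirroring the behavior the text describes.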
If you have an object called sphere and you rename it to ball, references to sphere are automatically updated to ball. If you delete the object named sphere instead, any references to it are invalid and the affected nodes become red. If you later add another object named sphere, or rename an existing object to sphere, then the references become resolved again. If you add the object named sphere to a model named Fluffy, references to sphere are automatically updated to Fluffy.sphere. If the ICE tree is on an object in the Fluffy model, the references are updated to this_model.sphere instead.
When you get data by an explicit string reference, you get a set of values with one value for each component. For example, if you get sphere.PointNormal, you get one 3D vector for each point of the sphere object; in other words, the context is per point of sphere.

When you get data at a location, the context depends on the context of the set of locations that is connected to the Source port of the Get Data node. For example, if you start by getting grid.PointPosition, then use that to get the closest location on sphere, and in turn use that to get PointNormal, the data consists of normals on the sphere but the context is per point of the grid. If instead you started by getting grid.PolygonPosition, the context would be per polygon of the grid.

Getting Data at Locations

To get data at a location, plug any location data into a Get Data node's Source port. When a location is plugged into the Source port of a Get Data node in this way, its Explorer button shows only the attributes that are available at that location.
You can get any data in the scene. Once you have a Get Data node in your tree, you can specify or modify the reference. You can set only certain data:
Some intrinsic attributes, such as PointPosition or EdgeCrease. Other attributes are read-only, like PointNormal and PolygonArea.
Any dynamic attribute, including predefined ones like Force, Velocity, and so on.
Any property in Softimage except for kinematics.

Getting Data

You get data using Get Data nodes. You can add a Get Data node to your scene by dragging it from the preset manager (it's in the Data Access category of the Tools tab) or by selecting it from the Nodes > Data Access menu. You can also get a specific object or other element by dragging its name from any explorer view. Once you have a Get Data node in your tree, you can specify or modify the reference as described in Specifying Scene References on page 260. You can get data by explicit string references or at locations.
You can use this technique to get data from other objects using geometry queries like Get Closest Location nodes. For example, you can get PointNormal at the closest location on a sphere.
Reusing Get Data Nodes You can connect the same Get Data node to as many nodes as you want if you need the same data elsewhere in the tree. However if the data has changed in-between, the Get Data node will return the new data later in the tree.
If an attribute is stored on points, you can still get it at an arbitrary location. The value is interpolated among the neighboring point values. You can convert a location on a geometry into a position (3D vector) by getting the PointPosition attribute at that location.
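Getting a per-point attribute at an arbitrary location amounts to a weighted blend of neighboring point values. In this sketch the neighbor indices and weights are supplied directly; in ICE they come from where the location falls on the surface. The function name is an illustrative assumption.

```python
# Interpolate a per-point attribute at a surface location as a
# weighted average of the neighboring points' values.

def attribute_at_location(point_values, neighbor_ids, weights):
    return sum(point_values[i] * w for i, w in zip(neighbor_ids, weights))

# A scalar attribute stored on four points of a patch:
values = [0.0, 1.0, 2.0, 3.0]

# A location halfway between points 1 and 2:
blended = attribute_at_location(values, [1, 2], [0.5, 0.5])   # 1.5
```

The same idea extends component-wise to vector attributes such as PointPosition, which is how a location can be converted into a 3D position.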
The Get Self.Foo node returns different values to Stuff and More Stuff because Self.Foo was set in-between.
Setting Data

To set data, use the Set Data compound. You can find this node in the Data Access category of the Tools tab in the preset manager, or on the Nodes > Data Access menu. Simply specify the desired reference and value, either through connections or directly in the property editor. See Specifying Scene References on page 260.

Not all attributes can be set. Read-only attributes like NbPoints are not shown in the Set Data node's explorer. You can set data using an explicit string reference only; you cannot set data at locations.

To set an attribute, you must be in the appropriate context. For example, to set PointPosition, you must be in the per point context of the appropriate object. If data has been set for some but not all components in a data set, uninitialized components have default values: zero for most data types, false for Booleans, identity for matrices, black for colors, and so on.

Setting Custom Attributes

To create a custom attribute, simply use a Set Data node and make up a new attribute name. Don't forget to include the full reference including the object name, for example, PointCloud.my_custom_attribute. You can use custom attributes to store any type of value, including locations.

The context and data type of custom attributes are determined by the connected nodes. If the data type is undetermined, the Set Data node is in error (red); you can use a node from the Constant category to force a specific data type. If the context is undetermined, it defaults to the object context. However, this context can be changed to a component context if you connect nodes that force a different context, as long as there are no conflicting constraints on the context.
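The default-value rule for uninitialized components can be written out directly. The mapping mirrors the text; treating the color default's alpha as opaque is an assumption, as is the function name.

```python
# Defaults for components that were never set: zero for most types,
# false for Booleans, identity for matrices, black for colors.

def default_value(data_type):
    defaults = {
        "scalar": 0.0,
        "integer": 0,
        "boolean": False,
        "color": (0.0, 0.0, 0.0, 1.0),   # black (alpha assumed opaque)
        "matrix": ((1, 0, 0), (0, 1, 0), (0, 0, 1)),   # 3x3 identity
    }
    return defaults[data_type]
```

So if a Set Data node fills only half the points of a cloud with a scalar attribute, the other half read back as 0.0.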
Logical Problems

If a tree is working but not doing what you think it should be doing, it may be that the values being passed to ports are not what you expect them to be. You can display port values in the 3D views by right-clicking on a connection and choosing Show Values. There are several options for controlling the color, style, and placement of the information. When port values are displayed, a V icon appears on the connection. Click the icon to change display properties, or right-click and choose Hide Values to remove the display.
Performance Problems

You can profile the performance of ICE trees by displaying execution times directly on nodes in the ICE tree viewer. This shows you which nodes take the most processing time, and lets you see where you can try to optimize the tree.
- Start Performance Timers: Activates and deactivates performance logging. Typically, you activate this and then play back or advance frames.
- Reset Performance Timers: Clears the performance numbers. When you have made changes and want to start logging the new performance values, click this.
- Performance Highlight: Choose one:
  - No Highlight: Displays nodes and ports normally.
  - Time (Top Thread): Shows the performance of the worst thread per node. The number on the root ICETree node is still the total for the entire tree and its inputs.
  - Time (All Threads): Shows the total performance of all threads per node.
To add a comment to a group of nodes, use a Group Comment node. To move the comment along with the node group, middle-click and drag in the comment area. Group Comment colors are visible in the bird's-eye view, so they are a handy way of visually organizing your trees.
ICE Compounds
Compounds are ICE nodes that are built from other nodes, which can be base nodes or even other compounds. You can use compounds to simplify and organize your ICE trees to make them easier to read and understand, but the real advantage of compounds is that you can export them and reuse them in other ICE trees and scenes, as well as share them with other users.

Softimage includes many pre-built compounds for performing specific tasks. You can find these in the preset manager in the ICE tree view. These compounds are built from the same nodes that are also available in the preset manager. Inspecting the supplied compounds is a great way to see how ICE trees work. You can then edit these compounds to use them as a base for building your own effect.

Overview of How to Create and Use ICE Compounds

1. You can't store the ICETree node in a compound, so insert an Execute node to merge all the root connections into a single output. To do this, right-click the ICETree node and choose Insert Execute Node.
2. Select all the nodes you want to save in your compound. To keep the compound generic, you should leave out object-specific nodes (such as particle emitter data) so that you can apply the effect to any appropriate object in any scene.
3. Convert the selected subtree into a compound: choose Compounds > Create Compound from the ICE tree toolbar.
4. Edit the compound; see Editing Compounds on page 268.
5. Export the compound; see Exporting Compounds on page 269.
6. You can modify the compound and re-export it; see Versioning Compounds on page 270.
Editing Compounds
When you edit a compound, you can change the compound name and expose different ports of the nodes inside so that they are easily accessible from your compound later on.
Exporting Compounds
Compounds are XML-based files that contain all the connections and data of all the nodes in the tree. They are saved as .xsicompound files. Exporting a compound allows you to use it in other trees and scenes, including sharing it with other users, for example via Softimage|NET. To export a compound, right-click on it (not over a port), choose Export Compound, and give it a file name and location. You can then bring your exported compounds into an ICE tree in the usual way: from the preset manager, from the Nodes menu, using Compounds > Import Compound, or by dragging the file from a Softimage file browser or folder window. If two or more compounds have the same name, Softimage logs a warning message telling you the location of the version that will be used and the versions that will be ignored.
Versioning Compounds
Softimage uses a built-in versioning system to manage updates to exported compounds. You should use this versioning system instead of renaming .xsicompound files manually; otherwise, you may end up with multiple compounds that share the same name and version. If this happens, Softimage warns you, logging the location of the file that will be used and the files that will be ignored. The major and minor version numbers are stored in the .xsicompound file. Major version changes are for large functional changes, while minor version changes are for bug fixes and small adjustments. If you modify a compound in an ICE tree and don't export the new version, it is identified by an asterisk.
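The resolution rule when duplicate names exist can be sketched like this, assuming each candidate file carries its (major, minor) version pair:

```python
def resolve_compound(candidates):
    """Pick the .xsicompound file with the highest (major, minor) version.

    candidates: dict mapping file path -> (major, minor) version tuple
    Returns (used_path, ignored_paths) so callers can log a warning.
    """
    used = max(candidates, key=lambda path: candidates[path])
    ignored = sorted(p for p in candidates if p != used)
    return used, ignored
```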
Compounds that already exist in a scene are not updated automatically even if new versions are available. You can update them individually, or by using the Compound Version Manager (Compounds > Compound Version Manager).
Section 15
ICE Particles
ICE is a complete visual programming environment that lets you create particle effects. In the real world, you think of particles as small pieces of matter such as dust, sea salt, water droplets, sand, smoke, or sparks from a fire. With ICE particles, you can create all these types of natural phenomena and much more!
ICE firework particles are emitted from different positions in space. When they reach a certain position, they explode into a new cloud of spawned particles.
The point cloud's simulated ICE tree emits the particles and uses a state system to determine the condition under which the fireworks explode and spawn a new cloud.
Create a point cloud or emit particles: The simplest way is to select one or more objects to be the particle emitter(s) and then choose ICE > Create > Emit Particles from Selection on the Simulate toolbar. This automatically creates a point cloud and sets up certain nodes in the ICE Tree for that point cloud. You can also set up these nodes in the ICE tree from scratch.
Edit the Emit parameters: These define how the particles look and act when they are emitted: set the particle rate, speed, orientation, direction, color, mass, etc. Delete particles at their age limit: The Set Particle Age Limit compound determines how long each particle lives, then the Delete Particles at Age Limit compound does its job. If you don't put a limit on their age, the particles live for the duration of the simulation, which you may want for some effects.
Open the ICE tree view: press Alt+9 or choose ICE > Edit > Open ICE Tree on the Simulate toolbar to open it in a floating window. The ICETree node is the main processing operator in an ICE tree. Because this is a particle simulation, the ICETree node type is simulated. The disc is the particle emitter object. The Get Data node for it simply gets the disc's object data so that it can be used in the ICE tree. The Emit compound is responsible for emitting the particles and setting certain particle attributes (such as size, color, velocity, mass, shape, etc.) at emission time. At every frame, it adds points to the point cloud. Emit compounds are always plugged into the top of the ICETree node in a particle simulation because you need to emit the particles before anything else can happen to them.
Add forces to make the particles move: The Add Forces compound is a hub into which other forces can be connected. Here, only the Turbulence value is modifying the force, but you could easily add other forces.

Build the particle ICE tree: Plug in different nodes for different effects. Remember the following:
- When you plug nodes into the ICETree node, their output gets evaluated at every frame. You want this if the particle data should be updated throughout the simulation, not just when the particles are emitted.
- When you plug nodes into any of the Emit compounds, their output is evaluated only once, upon particle emission. This means that data from these nodes won't change the particles during the rest of the simulation.
- You can connect ports together only if their data matches in type and context.
The Simulate Particles node updates the position and velocity of each particle at each frame based on its mass, position, and velocity of the previous frame. This node is usually plugged into the bottom of the ICETree node because it needs to take all information from the nodes that precede it and then use that information to update each particle at each frame.
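Conceptually, that per-frame update is a simple integration step. The sketch below is plain Python illustrating the idea, not the actual Simulate Particles implementation:

```python
def simulate_step(position, velocity, force, mass, dt):
    """Advance one particle by one frame from its previous state.

    position, velocity, force: (x, y, z) tuples; dt: frame duration.
    """
    # Acceleration from the accumulated forces and the particle's mass.
    acceleration = tuple(f / mass for f in force)
    # Update velocity first, then position from the new velocity.
    new_velocity = tuple(v + a * dt for v, a in zip(velocity, acceleration))
    new_position = tuple(p + v * dt for p, v in zip(position, new_velocity))
    return new_position, new_velocity
```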
Create a compound: This step is not necessary, but creating a compound of this particle effect lets you use it in other scenes or share it with others. Render the particles as volumes using ICE particle shaders, or render particles as surfaces using Softimage surface shaders.
Create a point cloud by choosing Get > Primitive > Point Cloud > Empty Cloud (or any of the shapes) from any toolbar. In the ICE tree view, create a Simulated ICE Tree node: from the menu bar of the ICE tree view, choose Create > Simulated ICE Tree. Drag the emitter's name from an explorer into the ICE tree view to create a Get Data node for it. An easy way is to select the object and press F3 so that a floating explorer opens, then drag the emitter's name from there into the ICE tree.
Drag one of the Emit compounds from the preset manager into the ICE tree view. Drag the Simulate Particles node from the preset manager into the ICE tree view. Plug all the nodes together as shown here. You can then continue to build your ICE tree as you like.
Slide
Flow Along
Flow Around
Particle Goals
When you create a goal for particles, the particles are attracted to it or repelled from it, similar to magnets. With goals, you can create a number of particle effects, such as drops of water forming into a puddle, paint being sprayed over a surface, or butterflies following the infamous ClubBot. Goals are part of the overall particle simulation, which means that any particles that are progressing toward a goal can also react to any other forces that are applied to them. In fact, goals are a force on particles, similar to how an attraction force works. Creating goals requires the Move Towards Goal compound. This compound lets you do two things: choose the location on the goal object to which the particles are attracted (or repelled) and define how the particles move toward the goal, such as their speed, acceleration, and alignment with the goal. Moving Toward One Goal You can set up a simple goal ICE tree with particles moving toward one goal, as you see on the left with the butterflies fluttering toward the walking ClubBot.
When a particle is born, it is assigned to a location on the goal object that you have defined, and it evolves towards this location throughout its life. This can be a random location on the goal, the location on the goal that is closest to the particle, or any location that you specify on the goal. The particles try to reach the position and/or shape of the goal objects, even as the goal moves or its surface is deformed. When the particles reach the goal, their velocity decreases and they stop until the goal moves or is deformed again.
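The move-toward-and-stop behavior can be sketched as a per-frame step that never overshoots the assigned goal location. The function and parameters below are illustrative, not the Move Towards Goal compound's actual ports:

```python
import math

def step_towards_goal(position, goal, speed, dt):
    """Move a particle toward its assigned goal location, stopping on arrival."""
    offset = tuple(g - p for g, p in zip(goal, position))
    distance = math.sqrt(sum(d * d for d in offset))
    if distance == 0.0:
        return position  # already at the goal; wait for it to move again
    travel = min(speed * dt, distance)  # never overshoot the goal
    return tuple(p + d / distance * travel for p, d in zip(position, offset))
```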
Moving Towards Two Goals You can use two Move Towards Goal compounds with two goals and the If node to have particles move to two goals at once based on a condition that you set up.
Moving From Goal to Goal If you want to have particles move from one goal to another, you can create several sets of Move Towards Goal+goal object nodes, then plug each set into the Multi Goal Sequencer compound.
If you spawn particles into the same point cloud, the shaders and forces on the spawned particles are the same as for the original point cloud. You can, however, add new attributes to the spawned particles to change their color, size, shape, and so on. Spawning into a different point cloud is similar to creating a new particle simulation because this point cloud has a separate ICE tree. You can also use different shaders for that point cloud, giving you control over the rendered look of the spawned particles. Spawning Trails The Spawn Trails compound gives you a basic way to spawn new particles. Here, pixie dust is spawned as a trail to follow the original particle as it travels upwards.
Different sets of spawned particles create fireworks with some help from a state system.
To spawn particles, you can use several different Spawn compounds, either on their own or as part of a larger effect via a State system: Spawn Trails is the basic compound that creates particle trails. Spawn on Collision spawns particles upon collision with an object. Spawn on Trigger spawns particles when a trigger value is reached. Each of the Spawn compounds is based on the Clone Point node. This node is responsible for creating new particles that are exact replicas of the original particles. The points, including all of their attributes (except ID, which is unique), are copied from a point cloud and then added to either the same point cloud or to another point cloud that you select.
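The copy-everything-except-ID behavior of the Clone Point node can be sketched like this, with a particle represented as a plain attribute dictionary (an assumption for illustration):

```python
def clone_point(point, new_id):
    """Replicate a point with all of its attributes except ID, which stays unique."""
    spawned = dict(point)   # copy every attribute...
    spawned["id"] = new_id  # ...then assign a fresh, unique ID
    return spawned
```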
Spawning Upon Collision You can use the Spawn on Collision compound in conjunction with any of the Surface Interaction compounds (such as Bounce Off Surface) to have new particles spawned when a particle collides with an object. Here, the small blue particles are spawned when an orange particle bounces on the surface of an obstacle.
Spawning on Trigger You can use the Spawn on Trigger compound with either a State system or just a simple If node system. Either way, you need to set the condition upon which new particles are spawned. Here, the small blue particles are spawned when the bubble-looking trail particles reach their age limit.
Particle Strands
Particle strands are solid shape trails drawn behind a particle. These solid shapes are actually continuous segments of the shape that you have chosen for the particle, such as spheres, rectangles, boxes, discs, blobs, or even instanced particle geometry. Strands make it easy to create effects that require more solid-looking objects than trails, such as ribbons, seaweed, hair, and much more. Using the numerous Strands compounds, you have a lot of control over the appearance and movement of strands to create many types of particle effects. There are two main compounds that you can use to create the strands, using two different methods:
Create Strands is the basic compound that creates particle strands. You can use any particle shape for the strands. Generate Strand Trails lets you dynamically generate particle strands based on the length of the simulation and the number of segments, such as for growing things like grass or vines. One strand segment is created per second up to the maximum number of segments that you have set. Because these two compounds create strands in different ways, you can use only one of them at a time on the same set of particles.
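The growth rule of Generate Strand Trails, one segment per second up to the maximum, can be sketched as a tiny helper (hypothetical name, not the compound's actual parameter):

```python
def strand_segment_count(elapsed_seconds, max_segments):
    """One strand segment is generated per second, capped at the maximum."""
    return min(int(elapsed_seconds), max_segments)
```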
Create Strands
Twist Strand
Turbulize Strand
Particle Instances
You can use any 3D geometric object, hierarchy of objects, or group of objects in place of particles to create many different effects. For example, you could use cars to create a flow of traffic or characters to create a crowd scene; or create flocking scenes with flying birds, butterflies, or insects. The object is assigned to a particle and stays with that particle for its lifetime.
To use instances as particles, you assign them to the point cloud using either the Instance Shape node or the Set Instance Geometry compound in the ICE Tree: If the instanced objects are not animated, you should use the Instance Shape node. This node provides the simplest and fastest way to create large numbers of instances whose geometry is not animated.
If the instanced object is animated, you can use the Set Instance Geometry and Control Instance Animation compounds. If an object's transformation is animated, it has to be animated in relation to its parent, and then you choose the parent as the instance object.
Instances are exact copies of their master object, including its materials (color) and rendering information. However, instances inherit the particle's position, velocity, orientation, and size: the instance's own transformation is not used, although children keep their relative transformation to their parent. If you're using instances as particle shapes in collisions with an obstacle (as rigid bodies or using a compound with surface interaction, such as Bounce Off Surface), an approximated box or sphere around the instance is used: its actual shape is not used.
Using Groups of Instanced Objects If you've selected a group of objects for the instances, you have some control over which object is instanced. The objects in the group are picked according to their creation order, as shown in the explorer. You can choose View > Reorder Tool in the explorer to change the objects' order in the group. You can also plug a Randomize compound into the Group Object Index port to change their order randomly.
There are three compounds that help you control animated instances: The Set Instance Geometry compound lets you choose the instance object to use, as well as which frame of its animation to use as the starting frame for each particle. The Control Instance Animation compound is like a playback control for how the instance's animation is played during the particle simulation. For example, if the instance's animation goes from frames 1-50, you can choose to use only frames 20-40 for its animation in the particle simulation. The Control Displacement Instance Animation compound scales the instanced object's animation according to its size when it becomes a particle. For example, in the image below are two simple animated rigs that are used as master objects: one hopping, one rolling. When they are instanced, they are much smaller than their original size, so their animation cycles must go at a faster rate to cover the same distance as the original animation.
Master objects
Instanced objects as particles
Controlling the Instances' Animation If the instanced objects are animated, you can create crowds or flocking scenes, such as with flying birds, butterflies, or walking characters. If you're doing a crowd, for example, each character can walk at a different pace. If an object's transformation is animated, such as a walk cycle, it has to be animated in relation to its parent. You then select the parent as the instance object, and choose Object and Children in the Set Instance Geometry compound.
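The playback-control idea, restricting an instance's animation to a sub-range of frames and looping it, can be sketched as a frame remap. The helper below is hypothetical, not the Control Instance Animation compound's actual parameters:

```python
def playback_frame(particle_age_in_frames, first_frame, last_frame):
    """Map a particle's age onto a looping sub-range of the master animation."""
    cycle_length = last_frame - first_frame + 1
    return first_frame + particle_age_in_frames % cycle_length
```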
Each State compound you define is plugged into the State Machine compound. This compound is the grand central station for the states. The states are executed in the order in which they're plugged into the State Machine, from top to bottom.
1. Create a particle simulation. Then drag in a State Machine compound and plug it into the ICETree node.
2. Drag in a State compound for each behavior set you want to define. Plug each one into the State Machine compound in the order you want them executed.
3. Disconnect the Simulate Particles compound from the ICETree node. This is because each State compound has its own Simulate Particles node inside.
4. Give each state a unique ID to identify it in the system, and give it a unique color to help you identify each state's particles as you work.
5. Get a trigger compound and plug it into the first State compound. Here, the trigger compound tests when the age limit of the particle is reached.
6. Define the trigger's value. This is done by setting the particle age limit value, which is set to 2 seconds here.
7. Specify the state to which you want the particle to transition when the trigger is pulled. In this case, State 0 transitions to State 1.
8. Get one or more effect nodes or compounds and plug them into the second state. Here, these two Set compounds set the particle shape and size when the particle age limit is reached.
9. Define the effect's behavior. The values of the Set Particle compounds are set so that the size decreases to 0.1 and the shape changes to a cone when the particle age limit is reached.
10. You can keep adding State compounds and defining each trigger/effect set by following steps 4-9 to create more complex effects.
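The trigger/effect/transition logic of such a state system can be sketched per particle. Everything here, the dictionary layout, the function names, the two-second age trigger, is illustrative, not the actual compound internals:

```python
def update_particle_state(particle, states):
    """Run one state-machine step: if the current state's trigger fires,
    apply its effect and transition the particle to the next state."""
    state = states[particle["state_id"]]
    if state["trigger"](particle):
        state["effect"](particle)
        particle["state_id"] = state["next_state"]
    return particle

def shrink_to_cone(particle):
    particle["size"] = 0.1
    particle["shape"] = "cone"

states = {
    0: {"trigger": lambda p: p["age"] >= 2.0,  # age limit reached
        "effect": shrink_to_cone,
        "next_state": 1},
    1: {"trigger": lambda p: False, "effect": lambda p: None, "next_state": 1},
}
```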
Rigid body particles can collide with geometric objects (obstacles) that are set as rigid bodies; just plug them into an Obstacle > Geometry port on the Simulate Rigid Bodies node in the ICE tree. Rigid body particles can also collide with each other if they're in the same point cloud. To create the illusion of particles from several point clouds colliding, you can use several emitters and/or emissions in the ICE tree of a single point cloud. Then set up the emission properties for each to look like different particles.
Luckily for the character, he's set as passive in this situation, so he's unscathed by the collision with the wall.
This character is made up of rigid body particle cubes and is heading for a rigid body particle wall. What will happen?
Not so lucky this time! Here, the wall is set as passive, but the character isn't. Ouch.
Collision Geometry
The Simulate Rigid Bodies node calculates the particle and obstacle collisions according to the shape of their collision geometry. The collision geometry used differs depending on whether the rigid bodies are particles or obstacle objects:
- For rigid body particles, this is a bounding shape (sphere, capsule, or box) that approximates the particle Shape that you have set. Bounding shapes provide a quick solution for calculating particle collisions because they don't require detailed geometry. For instanced geometry on the particles, a box or sphere is used, not the instanced object's actual geometry. This keeps calculation times fast.
- For rigid body obstacle objects, this is a convex hull. Convex hulls give a quick approximation of an object's actual shape, with results similar to the object being shrinkwrapped. A convex hull doesn't account for any dips or holes in the rigid body obstacle's geometry, but is otherwise the same as the obstacle's original shape.
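The speed advantage of bounding shapes is easy to see in a sketch: a sphere-versus-sphere test needs only one distance comparison and no mesh data at all. This is illustrative code, not the actual solver:

```python
def bounding_spheres_collide(center_a, radius_a, center_b, radius_b):
    """Cheap collision test between two bounding spheres."""
    distance_squared = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    # Comparing squared distances avoids a square root per test.
    return distance_squared <= (radius_a + radius_b) ** 2
```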
Convex hull collision geometry for an obstacle. The dip in the obstacle is not calculated, so the boxes simply bounce off the obstacle's top.
The point cloud's render tree shows how the particles get their volume and definition from the Particle Volume and Particle Shape compounds. The color and density are defined by the Particle Gradient shader, with a Fractal Scalar shader adding noise to the density.
The ICE Particle shaders and shader compounds can be found in the preset manager or in the Nodes menu in the render tree.
Particle Shader Compounds Shader compounds are like ICE data compounds in that they contain several connected nodes (in this case, shader nodes). Once you have shaders hooked up together in the render tree as you like them, you can create a compound that contains all of these shaders. This allows you to create a standard particle shader effect, such as fire, that you can use in different scenes or share with other people. Softimage ships with several particle shader compounds that you can use as a starting point for your own shader effects. Start out with the Particle Renderer or Particle Shaper shader compound to render a volume quickly. These compounds use the Particle Volume Cloud shader as a base. The Particle Gradient Fcurve compound creates a curve that you can plug into a Gradient port of a shader to control the gradients falloff over distance. The Particle Strand Gradient compound sets up a color/alpha gradient for rendering particle strands.
Particles using the Blob shape are rendered using the Lambert shader.
Particle image sprites are rendered onto rectangle particle shapes using the Phong shader.
Particle Volume If you want to render particles as a volume, you need to first hook up the Particle Volume Cloud shader (or the Particle Renderer shader compound) to the Volume port of the Material node.
Dry ice particle volume is created with a combination of several ICE particle shaders.
The Fractal Scalar and Cell Scalar shaders help to give this particle volume a unique look.
In the render tree, drag the appropriate shader from the Attributes group in the preset manager or from the Nodes menu. There is one Attribute shader per data type: Boolean, Color, Integer, Scalar, Transform, and Vector.
Section 16
Shaders
A shader is a miniature computer program that controls the behavior of the rendering software during, or immediately after, the rendering process. Some shaders compute the color values of pixels. Other shaders can displace or create geometry on the fly. Shaders are used to create materials and effects in just about every part of a scene. An object's surface and shadows are controlled by shaders. So are scene lighting and camera lens effects. Even shaders' parameters are usually controlled by other shaders. You can even apply shaders at the render pass level to affect the entire scene.
Environment shaders are used instead of surface shaders when a visible ray leaves the scene entirely without intersecting an object, or when the maximum ray depth is reached. They are used to create backgrounds for scenes, create quick-rendering reflections, light scenes with High Dynamic Range Images, and so on. Volume shaders modify rays as they pass through an object (local volume shader) or the scene as a whole (global volume shader). They can simulate effects such as clouds, smoke, and fog. There are also particle volume shaders that help you create these same types of effects on a point cloud. Toon shaders apply nonphotorealistic, cartoon-style effects to objects. They control cel-animation type properties like inking and painting. To get a full toon effect, it's best to use the toon material shaders in conjunction with the toon lens shaders.
Shadow shaders determine how the light coming from a light source is altered when it is obstructed by an object. They are used to define the way an objects shadow is cast, such as its opacity and color. Lightmap shaders sample object surfaces and store the result in a file that can be used later. For example, you can use a lightmap shader to bake a complex material into a single texture file. Lightmaps are also used by the Fast Subsurface Scattering and Fast Skin shaders to store information about scattered light. Photon shaders are used for global illumination and caustics. They process light to determine how it floods the scene. Photon rays are cast from light sources rather than from a camera.
Output shaders operate on images after they are rendered but before they are written to a file. They can perform effects such as glows, blurs, background colors, and so on.
Displacement shaders alter an objects surface by displacing its points. The resulting bumps are visibly raised and can cast shadows.
Realtime shaders allow you to use the render tree to build and control the multipass realtime rendering pipeline. You can connect these shaders together to achieve a multitude of sophisticated rendering effects, from basic surface shading to complex texture blending and reflection.
Material phenomena are combinations of shaders that are packaged into a single shader node. These are often used to create more complex rendering effects. Connecting a material phenomenon to an objects material prevents that material from accepting other shaders directly, though you can extend the phenomenons effect by driving its parameters with other shaders. The Fast Subsurface Scattering and Fast Skin shaders are examples of material phenomena.
Geometry shaders are evaluated before rendering starts. This allows the shader to introduce procedural geometry into the scene. For example, a geometry shader might be used to create feathers on a bird or leaves on a tree.
Tool shaders let you create a shader from scratch or extend an existing one. Although some tool shaders can be used on their own, many of them must work in conjunction with another shader to achieve a highly customized effect. Some examples of tool shader categories include: Color Channels, Conversion, Image Processing, Math, Mixers, Texture Generators, Texture Space Controllers, and Texture Space Generators.
- Select from Materials, Shaders, or ICE Nodes types of presets.
- Select Favorites, All Nodes, or a specific category. You can add items to your Favorites for easier access to presets that you use frequently.
- Items in the selected category appear in this panel. You can drag and drop materials onto objects and material libraries; shaders onto objects and into render trees; and ICE nodes into ICE trees.
- Sets thumbnail size and arrangement.
- Refresh: Clicking this button forces an update. This may be necessary if you have moved, added, or removed preset files on disk since opening the preset manager.
- Enter all or part of a name to filter the presets that are displayed in the right panel. Filtering works across all categories. In this case, grad is entered, so all shaders in all categories that have grad in their names appear in the right panel.
- Recalls previous filter strings.
- Clears the filter string (shows all nodes). You can also delete the text string to show all nodes again.
Strauss Uses only the diffuse color to simulate a metal surface. The surface's specular is defined with smoothness and metalness parameters that control the diffuse-to-specular ratio as well as reflectivity and highlights. Anisotropic Sometimes called Ward, this shading model simulates a glossy surface using an ambient, a diffuse, and a glossy color. To create a brushed effect, such as brushed aluminum, it is possible to define the specular color's orientation based on the object's surface orientation. The specular is calculated using UV coordinates.
Constant Uses only the diffuse color. It ignores the orientation of surface normals. All the object's surface triangles are considered to have the same orientation and be the same distance from the light. It yields an object whose surface appears to have no shading at all, like a paper cutout. This can be useful when you want to add static blur to an object so that there is no specular or ambient light. Toon This model begins with a constant-shading-like base color. Ambient lighting, as well as highlights and rim lights, are composited over the base color to produce the final result. The result is a cel-animation type of shading that can vary enormously depending on how you configure the highlights and rim lights. The toon shading model is typically used in conjunction with the Toon Ink Lens shader (applied to the render pass camera), which creates the cartoon-style ink lines.
Diffuse This is the color that the light scatters equally in all directions so that the surface appears to have the same brightness from all viewing angles. It usually contributes the most to an object's overall appearance, and it can be considered the main color of the surface. Ambient This color simulates a uniform, non-directional lighting that pervades the entire scene. It is multiplied by the scene ambience value and blended with the diffuse color. Often, the ambient color is set to the same value as the diffuse color, allowing the scene ambience to provide the ambient color. Specular This is the color of shiny highlights on the surface. It is usually set to white or to a brighter shade of the diffuse color. The size of the highlight depends on the defined Specular Decay value. Specular highlights are not visible in all shading models.
The combined result of the ambient, diffuse, and specular colors/lighting contributions.
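To make the combination of these contributions concrete, here is a small sketch using the classic textbook Phong model. This is an illustration of the general idea only, not Softimage's actual renderer math; the function name and parameters are hypothetical.

```python
# Textbook-style sketch of how ambient, diffuse, and specular contributions
# combine into a final surface color (illustrative only -- not Softimage's
# exact shading code).

def phong(ambient, diffuse, specular, n, l, v, decay, ambience=1.0):
    """Shade one point. n, l, v are unit vectors (normal, to-light, to-viewer);
    colors are (r, g, b) tuples in 0..1; decay is the specular exponent."""
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Reflect l about n: r = 2(n.l)n - l
    r = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(n, l))
    rdotv = max(0.0, sum(a * b for a, b in zip(r, v)))
    spec = rdotv ** decay
    return tuple(min(1.0, amb * ambience + dif * ndotl + sp * spec)
                 for amb, dif, sp in zip(ambient, diffuse, specular))

# A surface lit and viewed head-on: diffuse and specular at full strength.
color = phong(ambient=(0.1, 0.1, 0.1), diffuse=(0.6, 0.3, 0.2),
              specular=(0.2, 0.2, 0.2), n=(0, 0, 1), l=(0, 0, 1),
              v=(0, 0, 1), decay=20)
print(tuple(round(c, 3) for c in color))
```

Note how each of the three colors contributes a separate term; a shading model that lacks a specular term (such as Lambert) simply omits the last one.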
Not all shading models support all of these basic characteristics. For example, only the Phong, Blinn, Cook-Torrance, and Anisotropic shading models support specular highlights (although the Strauss shader's Smoothness and Metalness parameters affect specularity). Similarly, the Strauss shader does not support an ambient color, while most other models do. It's also worth noting that because different shading models compute these basic characteristics differently, the parameters that control them vary from one property editor to another. For example, the Anisotropic shader has much more elaborate specular highlight controls than the Phong shader.
302 Softimage
As an object becomes more reflective, its other surface parameters, such as those related to the diffuse, ambient, and specular areas of illumination, become less visible. If an object's material is fully reflective, its other material attributes are not visible at all. Reflectivity values are defined using color sliders. Setting the color to black makes the object completely non-reflective, while setting the color to white makes it completely reflective. If necessary, you can even control reflectivity in individual color channels.

Controlling Reflectivity with Textures
You can also control reflectivity using a texture by connecting the texture to the surface shader's reflectivity input.
In this example, the surface shader's reflectivity parameter is connected to a simple black and white stripe texture. The white areas are reflective, while the black areas are not.
Reflectivity
A surface shader's Reflection parameters control an object's reflectivity. The more reflective an object is, the more other objects in the scene appear reflected in its surface.
35% reflectivity
Normally, grayscale images are used, since black, white, and shades of gray adjust reflectivity uniformly in all color channels. Black areas of the image make the corresponding portions of the object non-reflective, white areas make them completely reflective, and gray areas make them partially reflective.
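The per-channel behavior of the reflectivity color can be sketched as a simple mix: each channel of the reflectivity value decides how much of the reflected color replaces the surface color in that channel. This is an assumption-level illustration of the principle, not the renderer's actual compositing code.

```python
# Sketch of per-channel reflectivity blending: black = keep the surface
# color, white = pure reflection, gray = a mix (illustrative only).

def blend_reflection(surface, reflected, reflectivity):
    """Each channel of `reflectivity` (0..1) controls how much of the
    reflected color replaces the surface color in that channel."""
    return tuple(s * (1.0 - k) + r * k
                 for s, r, k in zip(surface, reflected, reflectivity))

surface = (0.8, 0.2, 0.2)      # a red-ish object
reflected = (0.1, 0.4, 0.9)    # a blue-ish environment
print(blend_reflection(surface, reflected, (0.0, 0.0, 0.0)))  # black: no reflection
print(blend_reflection(surface, reflected, (1.0, 1.0, 1.0)))  # white: pure reflection
```

Setting a reflectivity color of, say, pure red would make only the red channel of the environment show up in the reflection, which is what controlling reflectivity "in individual color channels" means.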
Transparency
A surface shader's Transparency parameters control an object's transparency. The more transparent an object is, the more you can see through it.
Controlling Transparency with Textures
As with reflectivity, you can also control transparency using a texture by connecting the texture to the surface shader's transparency input.
In this example, the surface shader's transparency parameter is connected to a simple black and white stripe texture. The white areas are transparent, while the black areas are opaque.
75% transparency
As with reflectivity, transparency affects the visibility of an object's other surface attributes. You can compensate for this by increasing the attributes' values, such as changing specular color values that were 1 on an opaque object to 10 or higher on a transparent object. Transparency values are also defined using color sliders. Setting the color to black makes the object completely opaque, while setting the color to white makes it completely transparent. If necessary, you can even control transparency in individual color channels.
Normally, grayscale images are used, since black, white, and shades of gray adjust transparency uniformly in all color channels. Black areas of the image make the corresponding portions of the object opaque, white areas make them completely transparent, and gray areas make them partially transparent, or translucent.
Refraction
When transparency is incorporated into an object's surface definition, you can also define the refraction value. Refraction is the bending of light rays as they pass from one transparent medium to another, such as from air to glass or water.
You can set the index of refraction from a surface shader's property editor. The default value is 1, which represents the density of air; this value allows light rays to pass straight through a transparent surface without bending. Higher values make the light rays bend, while values less than 1 make light rays bend in the opposite direction, simulating light passing from air into an even less dense material (such as a vacuum). Refractive index values usually vary between 0 and 2, but you can type in higher values as needed.
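The bending described above follows Snell's law. The sketch below shows the underlying math, independent of any particular shader implementation; the function name is hypothetical.

```python
# Snell's law sketch: how an index of refraction bends a ray crossing from
# one medium into another. n1*sin(theta1) = n2*sin(theta2).
import math

def refracted_angle(incident_deg, ior_from=1.0, ior_to=1.5):
    """Angle (degrees from the surface normal) of the transmitted ray,
    or None when total internal reflection occurs."""
    s = ior_from * math.sin(math.radians(incident_deg)) / ior_to
    if abs(s) > 1.0:
        return None  # no transmitted ray
    return math.degrees(math.asin(s))

# Light entering glass (IOR ~1.5) at 30 degrees bends toward the normal:
print(round(refracted_angle(30.0), 2))   # about 19.47
# With IOR 1.0 (the default, matching air), the ray passes straight through:
print(refracted_angle(30.0, ior_to=1.0))
```

Note that going from a denser to a less dense medium at a steep angle can yield no transmitted ray at all (total internal reflection), which is why the function can return None.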
Preset manager: Drag and drop a shader or material preset from here onto the appropriate type of object to apply it, or drag it into the render tree as a node. See The Preset Manager on page 299.
Shader stacks: Some scene elements, like render passes and cameras, have shader stacks in their property editors where you apply shaders that affect the whole scene rather than individual objects.
Memo Cams. You can save and restore up to four views of the render tree workspace.
Lock. Prevents the view from updating when you select other objects in the scene.
Refresh. When the view is locked, clicking this button forces it to update with the current selection in the scene.
Clears the render tree workspace.
Opens the preset manager in a floating window.
Displays or hides shaderballs on the shader nodes.
Displays or hides the preset manager embedded in the left panel.
Name and path of the current Material node.
Bird's Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird's Eye View.
Embedded preset manager. Shows all shader nodes and compounds that are available to use. You can drag and drop shader nodes from here into the render tree workspace. You can also get shaders from the Nodes menu.
When a port is connected, the value of its corresponding parameter is driven by the connection, which means that you can no longer set the parameter's value in that shader's property editor. In fact, the parameter and its controls (checkboxes, sliders, etc.) are not even displayed. If you remove the connection, the controls reappear in the property editor.
The render tree workspace. This is where you connect shader nodes together to build trees.
Connection arrow. The arrow between a shader's output and input ports shows the data flow between them. Data always flows from the left to the right of the tree.
Shader node. This shader is a texture shader, as indicated by its light green color. Each type of shader has a different color.
Texture layers. These layers let you mix several textures together so that each texture is blended with the cumulative result of the preceding textures.
Material node. This node acts like a placeholder for every shader that is applied to an object. Every object must have one or it won't render. Its input ports support each type of shader.
The following table shows which input/output port type corresponds to which type of value (each type is identified by its own port color):

Color: Returns or outputs a color (RGB) value. These ports are usually used in conjunction with the surface of an object or when defining a light or camera.
Scalar: Represents a scalar input/output with any value between 0 and 1.
Vector: Represents an input/output that corresponds to vector positions or coordinates.
Boolean: Represents an input/output that corresponds to a 0 or 1, or On/Off.
Integer: Consists of a single integer (such as 2 or 73).
Image: Accepts or returns an image file.
Realtime: Accepts connections from other realtime shaders and outputs to other realtime shaders or to the Material node's RealTime port.
Lightmap: Outputs the result of a lightmap shader to the Material node's Lightmap port.
Material: Outputs the result of a material phenomenon shader to the Material node's Material port.
Lens/camera shader and light shader ports also have their own colors.
Click the arrow to expand or collapse a node. Click the port to create a connection arrow.
Shader node ports are also color-coded. A node's output is indicated by a port (colored dot) in the top right of the node, while each input port is indicated on the left side of the node. The color of a port identifies what type of value the port will accept, and what type of value it will output.
Connecting a Bump Map generator shader to the material node's Bump Map port adds some bumpiness to the mug's surface. Note how this affects the reflections from the environment map: the mug now looks more like stoneware than porcelain. Finally, connecting an Ambient Occlusion shader between the Phong shader and the material node's Surface port darkens the mug where it occludes itself. The Phong shader's branch, which includes the textures, is connected to the Ambient Occlusion shader's Bright Color port, while the Dark Color is set to black. The ambient occlusion effect is most visible on the inside of the mug and the inner surface of the handle.
You can create a shader compound containing any type of shader. The compound can contain many shaders connected together, or just one shader, if you like. Softimage ships with some shader compounds for ICE particles and subsurface scattering effects; open them up and see what makes them tick!
Overview of Creating a Shader Compound
These steps show the basic process of creating a shader compound of your own.
1. In the render tree, select all the shader nodes you want to save in the compound. To keep the compound generic, leave out the Material node so that you can apply the compound to any object.
2. From the render tree toolbar, choose Compounds > Create Shader Compound. This creates a compound named ShaderCompound, which contains all the shaders that have just disappeared from the workspace.
You can rename an exposed port by right-clicking it and choosing Properties, then entering new names. The Display Name is the one displayed in the compound node in the render tree and in the compound's property editor; if it is blank, the display name is the same as the Name. The Name is the one displayed in the blue bar on the left and is used in scripting. Double-clicking an exposed port, or right-clicking it and choosing Rename, sets the scripting name only, not the display name.
3. Click the little e on your new compound to edit it. This opens the compound editor, in which you can expose ports for the compound. Only exposed ports will be available for connections and editing back in the render tree. The bar on the left shows all the exposed shader ports for your compound. Click the arrow to expand or collapse the list of exposed input parameters. When the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection. To expose a shader port, click the black circle beside Expose Input and drag it to a port. That port is added to the bar. Repeat this for every port you want to expose.
Create the output port by dragging an output port from the shader on the furthest right (the shader into which all the other shaders are plugged) to the black dot on the bar on the right. In the bar at the top, double-click where ShaderCompound is written and give your compound a class name (in this example, it's Bonfire). Do the same for the Category, which determines where the compound shows up in the groups in the preset manager, such as Particle. If you like, you can add comments to your compound to document how everything inside it works.
Click the little x box in the upper-left corner to close the compound and return to the regular render tree. Then choose Compounds > Export Shader Compound from the render tree toolbar to export your compound so that it can be used in other scenes or by other users.
Section 17
Materials
In Softimage, an object's look and feel is defined by one or more shaders that are plugged into the object's material node. The material node itself provides access to the object's attributes, while the shaders control how those attributes appear when rendered. This section introduces ways of creating and working with materials.
About Materials
Every object needs a material. In Softimage, the term material refers to the cumulative effect of all of the shaders that you use to alter an object's look and feel. Strictly speaking, though, materials in Softimage are really just containers for an object's various attributes. If an object's material has no shaders attached to it, nothing defines the object's look, and the object won't render.

To understand what a material is, look at it in the render tree, where it is represented by a Material node. The Material node lists all of the inputs to a given material. These inputs are sometimes referred to as ports. Each port controls a set of object attributes. When the material is assigned to an object, the shaders that you connect to these ports alter the corresponding attributes. For example, the Surface port controls object surface characteristics. By connecting a shader or a network of shaders to this port, you can change an object's color, transparency, reflectivity, and so on.

The important thing to understand is that nearly every change you make to an object's appearance involves connecting shaders to define the object's material. When you assign a local material to an object, it replaces the default scene material for that object only. If you remove or delete the object's local material, the object inherits the default scene material again.
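The "material as a container of ports" idea can be sketched as a tiny data structure. This is a conceptual model only, an assumption for illustration; it is not Softimage's object model, and the port names are just the ones mentioned in this section.

```python
# Conceptual sketch: a material is a container of named ports, and
# connecting shaders to ports is what defines an object's look.
# (Illustrative only -- not the Softimage SDK object model.)

class Material:
    PORTS = ("Surface", "Bump Map", "RealTime", "Lightmap")

    def __init__(self):
        self.ports = {name: None for name in self.PORTS}

    def connect(self, port, shader_name):
        if port not in self.ports:
            raise KeyError(f"unknown port: {port}")
        self.ports[port] = shader_name

    def renderable(self):
        # With no surface shader attached, nothing defines the look.
        return self.ports["Surface"] is not None

m = Material()
print(m.renderable())          # nothing connected yet
m.connect("Surface", "Phong")
print(m.renderable())          # a surface shader now defines the look
```

The point of the sketch is the last method: a material with no shader on its Surface port leaves the object's look undefined, which mirrors the statement that such an object won't render.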
Default Scene Material
You can modify the default scene material as you would any other material, and the changes are applied to any objects that inherit it.
If you delete the default scene material, the oldest created material in the scene becomes the new default material, and is assigned to all objects to which the previous default material was assigned (whether explicitly or through propagation).
The left panel contains the explorer, which has the Scene (cluster) and Image Clip tabs; see the image on the right for more details. In the Scene explorer, you can switch between local materials (applied to the object or cluster itself) and applied materials. Selecting a material in the explorer highlights it in the shelf and displays it in the bottom panel. In the Image Clip explorer, all image clips in the scene are displayed.
On the top, the command bar provides tools for applying materials, such as creating, duplicating, or deleting materials, as well as tools for managing material libraries.
The middle right is a shelf with shaderballs for the materials in your scene. Multiple libraries appear on separate tabs. Click a shaderball to select the material, or drag a shaderball onto an object or cluster in the scene to apply it.
The tabs on the bottom of the material manager can display one of several views:
- The selected material in the render tree (the default view).
- The selected material in the texture layer editor.
- A list of image clips used by the selected material. Right-click a clip's thumbnail for a context menu that allows you to edit the clip's properties and other options. In the Material Manager preferences, you can set the size of the thumbnails used on this tab.
- A list of objects and clusters that use the selected material (Who Uses?). In the Material Manager preferences, you can set the size of the thumbnails used on this tab.
Select the thumbnail size for the clips displayed in this list: small, medium, large, or list view. You can turn off the display of the thumbnails to optimize performance.
Filters clips by All, Used, and Unused.
Filters clips displayed by scene layer.
Filters clips displayed by user keywords.
Filters clips displayed by name.
Right-click a clip to display a context menu.
Drag and drop one or more images into the image clip explorer panel to create sources and clips.
Simple Propagation
The larger sphere was branch-selected and given a checkerboard material. Because it was applied in branch mode, the material is inherited by all of the descendants.
In the explorer, a cluster's material appears under the cluster's node, rather than directly under the object's node. To access it, expand the object's Polygon Mesh > Clusters > name of cluster node.
The cluster's material is here.

Local Material Application
One sphere was selected and given a blue material. This material is local to the selected object only, not to any of its children.
If you remove a material from a cluster, the cluster inherits the material either assigned to or inherited by the object.
Material Libraries
Most properties in Softimage are owned by the scene elements to which they're applied. Materials, on the other hand, belong to material libraries. Material libraries are common containers for all of the materials in a scene. Each time you create a material, it's added to a material library. Although all of the materials in a scene belong to a library, they are used only by the objects to which they are assigned.

The material manager is designed to let you easily view and manage your material libraries. Most of the commands that you need for managing your libraries are found in the Libraries menu. Click a library tab to switch between libraries. The selected tab becomes the current library. Unless you explicitly create a new material in another library, all newly created materials are added to the current library. You can also manage your libraries using an explorer with its scope set to Materials (press M).

Storing materials in a library makes it easy to share a single material between several objects. It also allows you to access and edit all of the materials in a scene from a single place. Furthermore, because materials belong to libraries and not to individual objects, you can delete an object from the scene but keep its material for later use. If you no longer want to use a material, you can simply delete it once, regardless of the number of objects to which it's assigned.

You can create as many material libraries as you need. For example, you might want to keep separate libraries for different types of materials (wood, metals, rock, skin, scales, and so on), or create a material library for each character in your scene. You can drag and drop materials onto the Favorites tab in the material manager to create shortcuts to materials that you want to keep handy. You can also create your own custom favorites tabs to collect and sort the material shortcuts as you like.

By default, material libraries are stored internally as part of the scene.
However, you can store them externally, as dotXSI (.xsi) or material library (.xsiml) files, which allows you to share them between multiple scenes.
Section 18
Texturing
Texturing is the process of adding color and surface detail to an object. You can use textures to define everything from basic surface color to more tactile characteristics like bumps or dirt. Textures can also be used to drive a wide variety of shader parameters, allowing you to create maps that define an object's transparency, reflectivity, bumpiness, and so on.
A Blinn shader connected to the Surface port of the cow body's material node. The hooves, horns, and so on have different materials.
A texture shader connected to the Surface port of the cow body's material. Note that without a surface shader, the lighting appears constant.
Using the texture shader to drive the surface shader's Ambient and Diffuse colors produces a textured cow that responds properly to lighting.
Types of Textures
Softimage allows you to use two different types of textures: image textures, which are separate image files applied to an object's surface, and procedural textures, which are calculated mathematically.

An image clip is a copy, or instance, of an image source file. Each time you use an image source, an image clip of it is created. You can have as many clips of the same source as you wish. You can then modify an image clip without affecting the original source image. Clips are useful because they allow you to create different representations of the same texture image (source), such as five different blur levels of the same source image. Also, clips are memory-efficient because the source is only loaded once, regardless of how many clips are created from it.
Image Textures
Image textures are images that can be wrapped around an object's surface, much like a piece of paper wrapped around an object. To use a 2D texture, you start with any type of picture file (PIC, TIFF, PSD, etc.). These can be scanned photos or any files containing data that describes all the pixels in an image as RGB or RGBA data.
Procedural Textures
Procedural textures are generated mathematically, each according to a particular algorithm. Typically, they are used to simulate natural materials and patterns such as wood, marble, rock, veins, and so on. Softimage's shader library contains both 2D and 3D procedural textures. 2D procedurals are calculated on the object's surface according to their texture projections, while 3D procedurals are calculated through the object's volume. In other words, unlike 2D textures, 3D textures are projected into objects rather than onto them. This means they can be used to represent substances with internal structure, like the rings and knots of wood.
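A tiny sketch makes the "calculated through the volume" point concrete: a 3D procedural is just a function evaluated at each shaded point's 3D coordinates. The function below is a made-up, simplified wood-ring pattern for illustration, not Softimage's wood shader.

```python
# Sketch of a 3D procedural texture: concentric "wood rings" computed from
# 3D coordinates, so the pattern exists everywhere inside the volume.
# (Illustrative toy function only.)
import math

def wood_rings(x, y, z, ring_spacing=0.25):
    """Return a 0..1 intensity for concentric rings around the Y axis."""
    radius = math.hypot(x, z)          # distance from the trunk's axis
    return (radius / ring_spacing) % 1.0  # position within the current ring

# Two points at the same distance from the axis but different heights get
# the same value -- the rings run through the whole volume:
print(wood_rings(0.3, 0.0, 0.0))
print(wood_rings(0.3, 5.0, 0.0))  # same value: Y doesn't affect the rings
```

Cutting or deforming an object textured this way reveals a consistent internal pattern, which is exactly what a surface-mapped 2D texture cannot do.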
Image Sources and Clips
Every time you select an image to use as a texture or for rotoscopy, an image clip and an image source of the selected image are created. An image source is not really a usable scene element; it is merely a pointer to the original image stored on disk. Image sources are listed in your scene in the Sources folder of the Scene Root. They can be stored within your project folder structure, or outside of it.
Applying Textures
There are a number of ways to connect textures to objects in Softimage:
- Using the render tree, where you can choose a texture from the Nodes > Texture menu. Once you choose a texture, it is added to the render tree workspace and you can connect it to the material's or other shaders' ports.
- Using the parameter connection icon menu in a shader's property editor, which lists textures that you can attach directly to the parameter. Attaching a texture to a parameter lets you control the parameter with a texture instead of a simple color or numeric value. This is a convenient way to connect a texture to a surface shader's Ambient and Diffuse ports immediately after applying the surface shader to the object.

Adding More Textures
To add a texture in addition to the one applied using Method 1, choose Modify > Texture > Add from the Render toolbar.
Choosing a texture from the Nodes > Texture menu adds it to the render tree workspace.
This adds a new texture layer to the objects surface shader. The parameters that you add the new texture to are added to the layer, and the layers texture is blended with them.
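The layer blending described above, where each new texture mixes with the cumulative result of the layers before it, can be sketched as a simple weighted fold. This is an assumption-level illustration using a plain linear mix; Softimage's texture layers offer many more blend modes.

```python
# Sketch of texture-layer mixing: each layer blends over the cumulative
# result of the preceding layers, bottom to top (simple linear mix only).

def mix_layers(base, layers):
    """layers: list of ((r, g, b), weight) applied bottom to top."""
    result = base
    for color, weight in layers:
        result = tuple(r * (1.0 - weight) + c * weight
                       for r, c in zip(result, color))
    return result

wood = (0.6, 0.4, 0.2)
layers = [((0.2, 0.2, 0.2), 0.5),   # a half-strength dirt layer
          ((1.0, 1.0, 1.0), 0.25)]  # a faint highlight layer on top
print(mix_layers(wood, layers))
```

Because each layer sees the accumulated result rather than the original base, reordering the layers generally changes the final color.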
Choose Modify > Texture > Add from the Render toolbar.
Using the Get > Texture menu, which lists commonly used texture shaders that can be connected to any combination of a surface shader's ambient, diffuse, transparency, and reflection ports.
The menu lists texture shaders that can be blended with the surface shader via a new texture layer.
Rendered result of how the textures are projected onto this sphere.
All of the projections described here can be applied to objects from the Render toolbar's Get > Property > Texture Projection menu. You can also create and apply texture projections from any texture shader's property editor. Every texture shader needs a projection to define where the texture should appear on the object.
Planar Projections
Planar projections are used for mapping textures onto an object's XY, XZ, and YZ planes. By default, the projection plane is one pixel smaller than the surface plane, so no streaking or distortion occurs on the object's other planes.
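The math behind a planar projection is the simplest of all the mappings: drop one coordinate and scale the other two into texture space. The sketch below is the generic planar-mapping idea with a hypothetical helper name, not Softimage's projection code.

```python
# Generic planar XZ mapping sketch: UVs come straight from two of the three
# coordinates; the third (here Y) is ignored (illustrative only).

def planar_xz_uv(x, y, z, size=2.0):
    """Map a point to (u, v) by dropping Y and scaling XZ into 0..1
    for an object that spans -size/2 .. +size/2."""
    return ((x + size / 2.0) / size, (z + size / 2.0) / size)

# Points that differ only in Y receive identical UVs:
print(planar_xz_uv(0.5, 0.0, -0.5))
print(planar_xz_uv(0.5, 3.0, -0.5))  # same (u, v) as the line above
```

Because the dropped axis plays no part in the mapping, every point along that axis samples the same texel, which is why the choice of projection plane matters for each face of the object.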
Cylindrical Projections
If you map the picture file cylindrically, it is projected as if wrapped around a cylinder.
Planar XY
Cylindrical
Lollipop Projections
A lollipop projection is a spherical-type projection that stretches the texture over the top of the object so that its corners meet on the bottom, like the wrapper of a lollipop. A single pinch point occurs at the -Y pole.
Spherical Projections
A standard spherical projection stretches the texture over the front of the object so that its edges meet at the back. Distortion occurs toward the pinch points at the object's +Y and -Y poles.
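The pinching at the poles falls directly out of the spherical-mapping math. The sketch below is the generic textbook longitude/latitude mapping, not Softimage's exact projection code.

```python
# Generic spherical mapping sketch: longitude -> U, latitude -> V
# (illustrative only).
import math

def spherical_uv(x, y, z):
    """Map a point on a unit sphere to (u, v) in 0..1."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)            # around the equator
    v = 0.5 + math.asin(max(-1.0, min(1.0, y))) / math.pi   # pole to pole
    return u, v

# A point on the equator maps to the middle of the texture:
print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5)
# At the +Y pole, v reaches 1.0 for every u -- the whole top edge of the
# texture collapses to one point, which is the pinching described above.
print(spherical_uv(0.0, 1.0, 0.0))
```

The same collapse happens at the -Y pole (v = 0), which is why spherical projections distort most near the poles.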
Cubic Projections
A cubic projection assigns an object's polygons to a specific face of the cube based either on the orientation of their normals, or on their positions relative to the cubic texture support. The texture is then projected onto each face using a planar or spherical projection method. By default, the entire texture is projected onto each face. However, you can choose from a number of different cubic projection presets. You can also transform each face of the cube individually and save the transformations as presets of your own.
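The normal-based face assignment amounts to picking the axis a polygon's normal points along most strongly. The sketch below illustrates that idea with a hypothetical helper; it is not Softimage's implementation.

```python
# Sketch of the normal-based part of a cubic projection: each polygon is
# assigned to the cube face its normal points toward most strongly
# (illustrative only).

def cube_face(nx, ny, nz):
    """Pick a face name (+X/-X/+Y/-Y/+Z/-Z) from a polygon normal."""
    axis, value = max(zip("xyz", (nx, ny, nz)), key=lambda p: abs(p[1]))
    sign = "+" if value >= 0 else "-"
    return sign + axis.upper()

print(cube_face(0.0, 1.0, 0.1))    # mostly up -> "+Y" (top)
print(cube_face(-0.9, 0.2, 0.3))   # mostly left -> "-X"
```

Once every polygon has a face, each face's polygons are textured with an ordinary planar (or spherical) mapping, as described above.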
UV Projections
UV projections are useful for texturing NURBS surface objects. A UV projection behaves like a rubber skin stretched over the object's surface. The points of the object correspond exactly to particular coordinates in the texture, allowing you to accurately map a texture to the object's geometry. Even when you deform an object, its texture follows the object's geometry.
A NURBS surface (left) with a wood texture applied using a planar XZ map (below, left) and a UV map (below, right). With the UV map applied, the pattern accurately follows the contours of the object.
A cubic projection is applied to a cube so that the entire texture image is projected onto each face.
Spatial Projections
A spatial projection is a three-dimensional UVW texture projection that has either the object's origin or the scene's origin as its center. Spatial projections are used to apply procedural textures that are computed mathematically, rather than being somehow wrapped around the object. By default, a spatial projection's texture support appears in the center of the textured object's volume.
A cubic projection is applied to a head so that a different part of the texture image is projected onto each face.
Polygon sphere with a vein texture applied using a spatial projection.
Camera Projections
A simple and convenient way to texture objects is to project a texture from the camera onto the object's surface, much like a slide projector does. This is useful for projecting live-action backgrounds into your scene so you can model and animate your 3D elements against them. Changing the camera's position changes the projection's position. Once you have positioned the texture on the surface to your liking, you can freeze the projection.
Unfolding
Unfolding creates a UV texture projection by unwrapping a polygon mesh object using the edges you specify as cut lines, or seams. When unfolding, the cut lines are treated as if they were disconnected, creating borders or separate islands in the texture projection. The result is like peeling an orange or a banana and laying the skin out flat.
Unfolding does not rely on a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.
In this example, the corner of a room was textured using the original texture (top left). The texture was projected from a scene camera (top right). The final rendered frame shows the modeled teddy bear against the projected background.
Contour Stretch UVs Projection (Polygons Only)
Contour Stretch UVs projections allow you to project a texture image onto a selection of an object's polygons. Rather than projecting according to a specific form, however, a contour stretch projection analyzes a four-cornered selection to determine how best to stretch the polygons' UV coordinates over the image. Contour stretch projections are useful for a number of different texturing tasks, particularly for applying textures to tracks and irregular, terrain-like meshes. They are also useful for fitting regular-shaped textures onto curved meshes. For example, they would be useful for placing a label texture on a beer bottle, right at the junction of the bottle's neck and body.
The contour stretch projection is ideal for texturing a curvy path like this road.
Contour stretch projections do not have the same alignment and positioning options as other projections. Instead, you select a stretching method that is appropriate to the selections topology and complexity. Also, contour stretch projections do not have a texture support. To adjust it further, edit the UV coordinates in the texture editor.
Unique UVs Projection (Polygons Only)
Unique UVs mapping applies a texture to polygon objects using one of two possible methods:
- Individual polygon packing assigns each polygon's UV coordinates to its own distinct piece of the texture so that no polygon's coordinates overlap another's. This is useful for rendermapping polygon objects. You can apply textures to an object using a projection type appropriate to its geometry, then rendermap the object using a new Unique UVs projection to output a texture image that you can reapply to the object. The texture is then applied properly to each polygon without your having to worry about unfolding it to fit.
- Angle grouping, after deciding on a projection direction, groups neighboring polygons whose normal directions fall within a specified angle tolerance. This process is repeated until all of the object's polygons are in a group. The groups, or islands, are then assigned to distinct pieces of the texture so that no two islands' coordinates overlap each other.
Unique UVs projections do not have a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.
The Individual Polygon Packing method produces UV coordinates that look like this: each polygon's UV coordinates are separated from the rest of the coordinate set so they can be assigned to their own portion of the texture.
Texture wrapping examples: No Wrapping and Wrap in U.
There are two ways to transform texture projections: using the projection manipulator in a 3D view, or by editing the scaling, rotation, and translation values in the Texture Projection property editor. To activate the projection manipulator, press j, or choose Modify > Projection > Edit Projection Tool from the Render toolbar.
Alternatively, you can use the texture projection definition parameters to transform a texture on the surface of an object.
In edit mode, the mouse cursor changes to this icon. Right-click to switch to another projection, if one exists. Drag the red arrow to scale the projection horizontally. Drag the red line to translate the projection horizontally.
UV Coordinates
Applying a texture projection to an object creates a set of texture coordinates, often called UV coordinates or simply UVs, that control where the texture corresponds to the surface of the object. On a polygon object, each vertex can hold multiple UV coordinates: one for each polygon corner that shares the vertex. The portion of the texture enclosed by a polygon's UVs is mapped to the polygon. On NURBS objects, UV coordinates are not stored at the vertices; instead, they are generated based on a regular sampling of the object's surface. However, as with polygon objects, the portion of the texture enclosed by, say, four UVs is mapped to the corresponding portion of the object.

You can view and adjust UV coordinates using the texture editor, where they are represented by sample points. When you select sample points, you are actually selecting the UV coordinates held at the corresponding position on the object. For example, as you can see in the images below, the center point of a 2x2 polygon grid holds four UV coordinates. When you select the corresponding sample point in the texture editor, you are selecting all four coordinates (although it is possible to select a single polygon corner's UV coordinate).
In this example, the image shown at left was used to texture a 2x2 polygon grid such that each polygon's UV coordinates were mapped to the texture differently.
This exploded view of the textured grid shows how each polygon's UVs correspond to the texture image.
The grid's middle vertex holds four overlapping UVs. Each UV belongs to a specific polygon and holds a coordinate which, along with the polygon's other UV coordinates, defines the portion of the texture mapped onto that polygon.
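To make the mapping concrete, here is a minimal sketch in plain Python (illustrative only, not Softimage code) of how a UV pair corresponds to a pixel position in a texture image. The V flip is an assumption based on the common convention that image rows are stored top-down while V increases from the bottom of the texture.

```python
def uv_to_pixel(u, v, width, height):
    # Map a UV coordinate (each axis in the 0..1 range) to a pixel position.
    # V is flipped because image rows are commonly stored top-down,
    # while V conventionally increases from the bottom of the texture.
    x = u * (width - 1)
    y = (1.0 - v) * (height - 1)
    return (x, y)

# The middle vertex of a 2x2 grid holds UV (0.5, 0.5) for each of the four
# polygons that share it; on a 256x256 texture that lands at pixel (127.5, 127.5).
print(uv_to_pixel(0.5, 0.5, 256, 256))
```

Each of the four polygons sharing the middle vertex can hold a different UV at that vertex, which is why moving one polygon's corner UV need not affect its neighbors.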
By selecting the object's UV coordinates and moving them to a new location, you can control which portions of the texture correspond to different parts of the object. The texture editor has a wide variety of tools to help you select and move UV coordinates. To open the texture editor, press 7 or choose View > Rendering/Texturing > Texture Editor from the main menu.
UV position boxes allow you to move selected sample points to precise U and V locations.

Texture editor command bars provide quick access to commonly used texture editor commands.

The texture editor menu bar contains all of the texture editor commands, including those accessible from the command bars.

Texture image: the image clip currently applied to the object.

Connectivity tabs help you make sense of the object's UVs by highlighting boundaries shared between UV islands.
This character's body and head are separate objects, each with its own projection. Both sets of UVs are shown in the texture editor.
The status bar displays the UV coordinates, pixel coordinates, and RGBA values at the current mouse pointer position.
Selected UVs are highlighted red, and unselected UVs are blue.
Dimming the Texture Image

If you're having trouble seeing a projection's UV coordinates in the texture editor workspace, you can dim the texture image to make the coordinates more visible. Click the Dim Image button or choose View > Dim Image.
Once you have selected samples, you can edit them using the transform tools (x, c, and v) or other commands.

Tearing

When tearing is off, connected and coincident UV samples are automatically affected by any manipulation, even if they are not explicitly selected. When tearing is on, it's possible to separate samples into discontinuous islands. Polynode bisectors appear, which allow you to select individual samples at a vertex.

Polygon Bleeding

When polygon bleeding is on, samples belonging to the adjacent polygons become selected automatically. This allows you to move the polygons in a block without internal distortion.
Texture Layers
Texture layering is the process of mixing several textures together, one after the other, such that each texture is blended with the cumulative result of the preceding textures. In Softimage, you can use this technique to build complex effects by adding texture layers to an object's material or its shaders.

When you add a texture layer to a shader, one or more of that shader's parameters, or ports, is added to the layer. The layer is mixed on the selected ports, in accordance with its assigned strength, or weight, using one of several different mixing methods.

For texture layering purposes, the shader's ports are collectively treated as the base layer with which the texture layers are blended. If some of the shader's ports are connected to other shaders, those shaders are considered part of the base layer as well. For example, if you've connected a Cell texture to a Phong shader's Ambient and Diffuse ports, the Cell texture is treated as part of the Phong's base layer.

What makes texture layers so powerful is that at any time in the texturing process, you can add, modify, and remove any layer, giving you complete control over the resulting effect. You can also quickly and easily change the order in which layers are blended together, something that's quite difficult to do when you mix textures using mixer shaders in the render tree. Because texture layers only affect designated ports, you can blend a number of layers with each of a shader's attributes and create a complex effect for each.
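The cumulative mixing described above can be sketched in a few lines of plain Python (an illustration of the concept, not Softimage's actual mixing code). A simple weighted "over" mix is assumed here; real texture layers offer several other mixing methods.

```python
def blend_layers(base, layers):
    # Blend texture layers over a base value, one after the other:
    # each layer is mixed with the cumulative result of the preceding ones.
    # Each layer is a (value, weight) pair; weight 0 leaves the running
    # result untouched, weight 1 replaces it entirely.
    result = base
    for value, weight in layers:
        result = result * (1.0 - weight) + value * weight
    return result

# A white layer at half weight over a black base gives mid-gray;
# a second black layer at half weight darkens the running result again.
print(blend_layers(0.0, [(1.0, 0.5)]))
print(blend_layers(0.0, [(1.0, 0.5), (0.0, 0.5)]))
```

Because each layer operates on the cumulative result, reordering the layers generally changes the outcome, which is why being able to reorder them easily matters.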
The parameters of the grid's Lambert surface shader are represented in the base layers. In this case, nothing is connected to the Lambert shader's ports, so only the base colors are shown.
The weather-beaten road sign shown here was created by adding three texture layers to a basic Lambert-shaded grid. The images on the left show the cumulative effect of the layers. The first layer adds the basic sign texture to the Ambient and Diffuse ports. The texture's alpha channel is used to control transparency, cutting out the shape of the sign.
The second layer adds some rust. The rust texture is blended with the Ambient and Diffuse ports according to its alpha channel and a separate mask, in this case a weight map.
The final layer, blended with Ambient, Diffuse, and Transparency, adds the bullet holes. Bump mapping is activated in the layer's shader, creating the depression around each bullet hole.
The texture layer editor lets you see at a glance which layers a shader has, how many ports those layers affect, and how and in which order the layers are blended together. Add to this the ability to modify the majority of each layer's properties, and the texture layer editor makes for quite a powerful tool. To open the texture layer editor, choose View > Rendering/Texturing > Texture Layer Editor from the main menu.
The shader list displays all of the shaders connected to the current selection's material. Select a shader to update the editor with its layers.

The texture controls allow you to control the texture projections assigned to selected layers' inputs.

The Base Colors layer displays color boxes for unconnected ports. Base layers represent shaders that are directly connected to the current shader's ports. Texture layers are blended with the base layer and with each other.

The selected shader's ports can be added to texture layers and base layers.

Layer/port controls indicate whether a port has been added to a layer; an empty cell indicates that the port is not affected by the layer.

Layer controls and layer/port controls allow you to set texture layer properties.
Layers behave exactly like any other parameter group in the render tree, meaning that you can connect shaders to texture layer parameters as you would to any other shader parameter. This lets you control each texture layer with its own branch of the render tree.
Shader ports that have been added to layers are marked with a small blue L.
Layers section.

A collapsed layer parameter group.

An expanded layer parameter group, with its Layer Color and Mask ports.
Bump Maps
Bump maps use textures to perturb an object's shading normals to create the illusion of relief on the object's surface. Because they do not actually change the object's geometry, they are best suited to creating fine detail that does not come too far off the surface.
The sphere shown here was bump-mapped with a fine noise. A negative bump factor was used to make the white areas bump outward.
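The normal perturbation at the heart of bump mapping can be sketched as follows (plain Python, not Softimage's shader code). The tangent-space math is simplified for illustration, and the du/dv slopes stand in for finite-difference samples of the bump texture.

```python
def bump_normal(normal, du, dv, bump_factor):
    # Tilt a unit shading normal by the bump texture's height gradient.
    # du/dv are the texture's slopes at the shaded point; a negative
    # bump_factor inverts the relief (e.g. making bright areas bump outward).
    nx, ny, nz = normal
    px = nx - bump_factor * du
    py = ny - bump_factor * dv
    length = (px * px + py * py + nz * nz) ** 0.5
    return (px / length, py / length, nz / length)

# A flat gradient leaves the normal unchanged; a slope tilts it,
# which is what changes the shading without moving any geometry.
print(bump_normal((0.0, 0.0, 1.0), 0.0, 0.0, 1.0))
print(bump_normal((0.0, 0.0, 1.0), 1.0, 0.0, 1.0))
```

Since only the normal is tilted, the silhouette of the object never changes, which is exactly the limitation discussed below.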
When Not to Use Bump Maps

Because bump maps do not actually alter object geometry, their limitations can become apparent when too much relief is required. Consider the sphere shown here: even with a very high bump step, the bumping is not convincing on the silhouette, where there is no indication that the surface is raised. In these cases, it's better to either model the necessary geometry or use a displacement map.
Displacement Maps
A displacement map is a scalar map that, for each point on an object's surface, displaces the geometry in the direction of the object's normal. Unlike regular bump mapping, which fakes the look of relief, displacement mapping creates actual self-shadowing geometry.
The sphere shown here was displacement-mapped using the texture shown below.
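In contrast with bump mapping, displacement actually moves each surface point. As a rough sketch (illustrative Python, not the renderer's implementation), each point travels along its normal by the scalar map value, multiplied by a user-controlled strength:

```python
def displace_point(point, normal, height, scale):
    # Move a surface point along its (unit) normal by the scalar map
    # value 'height', multiplied by a user-controlled strength 'scale'.
    return tuple(p + n * height * scale for p, n in zip(point, normal))

# A point whose normal points straight up, with a map value of 0.5
# and a strength of 2.0, moves one unit along the normal.
print(displace_point((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5, 2.0))
```

Because the geometry genuinely moves, displaced detail shows up on the silhouette and can cast shadows on itself, at the cost of longer render times.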
Creating a Bump Map

To give you the most control over surface bumping, the best way to create a bump map is to connect a Bumpmap shader to the Bump Map port of an object's material node.
However, every texture shader has bump map parameters, so you can create a bump map using textures that you've connected to, for example, a surface shader's Ambient and Diffuse ports.
Creating a Displacement Map

You create a displacement map by connecting a texture, preferably grayscale, to the Displacement port of an object's material node. It is often helpful to add an intensity node between the map and the material node to help control the displacement.
Using Displacement Maps and Bump Maps Together

You can use bump maps and displacement maps together to create extremely detailed surfaces. Typically, the best approach is to use a displacement map to create the coarser surface detail: major features that need to be visible at the object's edges and can benefit from self-shadowing. You can then use the bump map to create a top layer of fine detail. The bump mapping is applied to the displaced geometry.
Setting Displacement Map Parameters

In addition to any shaders that you add to the render tree to modulate displacement, the main displacement controls are on the Displacement tab of the object's Geometry Approximation property editor. From there, you can choose the type of displacement appropriate to your object and refine the displacement effect.

When Not to Use a Displacement Map

Because they actually modify object geometry, displacement maps can take considerably longer to render than bump maps. Generally speaking, you should not use a displacement map if you can achieve a satisfactory effect using a bump map.
This sphere uses the texture on the left as a displacement map to create coarse surface detail, and the texture on the right as a bump map to create fine surface detail.
The sphere on the left uses a bump map, while the one on the right uses a displacement map. In this case, the difference is slight enough that the bump map's shorter render time makes it the better choice.
Reflection Maps
Reflection maps, also called environment maps, can be used to simulate an image reflected on an object's surface without using actual raytraced reflections. They can also be used to add an extra reflection to an object's reflective, raytraced surface.

When objects are reflective, you can define whether the reflections on their surfaces are Raytracing Enabled or Environment Only. Reflection settings are found on the Transparency/Reflection tab of the object's surface shader's property editor (choose Modify > Shader from the Render toolbar to open the property editor).

Raytraced reflections are slower to render because they actually compute reflections of everything around the object. Non-raytraced reflection maps are much faster to compute because they simulate the reflection of a specified texture or image, defined by an environment map, on the object's surface. When reflection mapping is used without raytracing, only the reflection map appears on the object's surface; when used with raytracing, the map is combined with raytraced reflections.
Raytraced reflection only. Note how reflective objects reflect other objects in the scene. For example, you can see the flask and the floor reflected in the retort.
Reflection map only. Using only a reflection map, no scene objects are reflected in reflective surfaces. Instead, the only reflection is the one simulated by the reflection map.
Raytraced reflection and reflection map. With both types of reflection activated, you get the real reflections of scene objects and simulated reflections from the map, producing highly detailed reflections.

You can apply a reflection map to the entire scene by adding an environment map shader to a render pass shader stack.
You can apply a reflection map to an object by connecting an environment map shader to the Environment port of the object's material node.
To rendermap an object, you need to apply a RenderMap property: choose Get > Property > RenderMap from the Render toolbar. This opens the RenderMap property editor, from which you can configure all of the maps that you wish to output. The following example shows how you can use RenderMap to create a single texture (which includes lighting information) out of a complex render tree.
Alpha map
Before RenderMap
The disembodied hand shown here was textured using a combination of several images mixed together in a complex render tree, and lit using two infinite lights. The result is a highly detailed surface that incorporates color, bump, displacement, and lighting information, and takes a fair amount of time to render.

Bump map
Displacement map
Specular map
After RenderMap
To bake the hand's surface attributes into a single texture file, a RenderMap property was applied to the hand, and a Surface Color map was generated. The resulting texture image was then applied directly to the Surface input of the hand's material node. Finally, the scene lights were deleted, producing the result shown at right: a good approximation of the hand's original appearance. Because the hand's illumination is baked into the rendermap image, you can get this result without using lights or an illumination shader.
Choose Get > Property > Color at Vertices Map to add a CAV Property to the selected object. An object can have as many CAV properties as you need.
Press Ctrl+W to open the Brush Properties property editor. On the Vertex Colors tab, you can choose a paint mode and color, set the brush size, set falloff and bleeding options, and so on. Basically, you're defining how the brush strokes look.
Press Shift+W to activate the brush tool and paint the color (or other attribute) onto the object in any 3D view. When you move the brush into any 3D view, the view's display mode automatically changes to Constant.
If you'd like, you can render the result of the color at vertices property using a Vertex RGBA shader in the render tree.
Section 19
Lighting
Conventional lighting (direct light sources), indirect lighting, and image-based lighting are all techniques that contribute to a scene's illumination and affect the way all object surfaces appear in the rendered image.
Section 19 Lighting
Types of Lights
You can add lights to a scene by choosing them from the Render toolbar's Get > Primitive > Light menu. Every light type has its own special characteristics and is represented by its own icon in 3D views.

Infinite (Default)

Infinite lights simulate light sources that are infinitely far from objects in the scene. There is no position associated with an infinite light, only a direction. All objects are lit by parallel light rays. The scene's default light is infinite.

Spot

Spot lights cast rays in a cone shape, simulating real spotlights. This is useful for lighting a specific object or area. The manipulators can be used to edit the light cone's length, width, and falloff points.

Neon

Neon lights simulate real-world neon lights. They are essentially point lights whose settings and shapes are altered to resemble fluorescent tubes. The manipulators can be used to change the tube into any rectangular or square shape.
Point

Point lights cast rays in all directions from the position of the light. They are similar to light bulbs, whose light rays emanate from the bulb in all directions.
Light Box

Light box lights simulate a light diffused by white fabric. The light and shadows created by this light are very soft. Specularity is still visible, but noticeably weaker. Manipulating the box shapes the projected light.
Placing Lights
You can translate, rotate, and scale lights as you would any other object. However, scaling a light only affects the size of the icon and does not change any of the light properties.
Rotating an infinite light. This is the only useful transformation for infinite lights since their scale and position do not affect the lighting. Rotating the light, on the other hand, changes its direction.
Placing Spotlights Using the Spot Light View

The Spot Light view is a 3D view that lets you select from a list of spotlights available in the scene. A spotlight view is useful for seeing what objects a spotlight is lighting and from what angle.
1 Select a spotlight from the view menu to see the scene from the light's point of view.
Translating a point light. Rotating and scaling point lights does not affect the lighting. Translating a point light changes its position, which does change the scene lighting.
2 Navigate in the spotlight viewport to change the position of the light. The inner and outer circles correspond to the light's spread angle and cone angle respectively.
Translating a spotlight. When you translate the spotlight, it rotates automatically to point toward its interest. Scaling a spotlight has no effect on the lighting. Since the spotlight is normally constrained to its interest, you cannot rotate it either (unless you delete the interest).

3 The rendered result shows the scene lit from the spotlight.
Spotlights have a third set of manipulators that let you control their start and end falloff, as well as their spread and cone angles. Area lights also have a third set of manipulators that let you scale the geometric area from which the light rays emanate. These manipulators are discussed later in this section.
Note that the light falls off exactly where the cone and spread circles indicate that it should.
A white light shown at Intensity 0.25 and at Intensity 0.5.
Setting a Spotlight
A spotlight casts its rays in a cone aimed at its interest. Spotlights have special parameters, called Spread and Cone Angle, that control the size and shape of the cone. You can set these options using the spotlight's property editor or its 3D manipulators. You can also use the 3D manipulators to set the light's falloff.
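The relationship between the spread angle, the cone angle, and the falloff between them can be sketched like this (plain Python, illustrative only; a linear falloff curve is assumed here purely for illustration, and the actual curve used by the renderer may differ):

```python
def spot_attenuation(angle, spread_angle, cone_angle):
    # Full intensity inside the spread angle, zero outside the cone
    # angle, and a linear fade between the two (angles in degrees).
    if angle <= spread_angle:
        return 1.0
    if angle >= cone_angle:
        return 0.0
    return (cone_angle - angle) / (cone_angle - spread_angle)

# With a 20-degree spread inside a 40-degree cone, a ray 30 degrees
# off-axis lands halfway through the falloff region.
print(spot_attenuation(30.0, 20.0, 40.0))
```

This is why, in the spotlight viewport, the inner circle marks where full intensity ends and the outer circle marks where the light disappears entirely.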
To activate a spotlight's manipulators, select the light and press B. You can then adjust the light by dragging any of the manipulators labeled in the image below.
The upper circle is the Start Falloff point. The wireframe outline is the spotlight's Cone Angle.
Start falloff = 6; end falloff = 8. The falloff manipulators set the Start and End Falloff values. Using a point light with umbra = 0: the bottom corner of the chess board is at 0, and the top left corner is at 10.
Selective Lights
When you create a light, it affects all visible objects in the scene. However, every light has a selective property that you can use to make it affect, or not affect, a designated group of objects called Associated Models. This can reduce rendering time by limiting the number of calculations per light. You can set a light's selective property to be Inclusive or Exclusive. Exclusive illuminates every object except those in the light's Associated Models group. Inclusive illuminates only the objects in the light's Associated Models group.
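The Inclusive/Exclusive rule can be summarized in a few lines of plain Python (an illustration of the logic, not Softimage code; the model names are hypothetical):

```python
def light_illuminates(mode, associated_models, obj):
    # Inclusive: only objects in the Associated Models group are lit.
    # Exclusive: every object EXCEPT those in the group is lit.
    in_group = obj in associated_models
    return in_group if mode == "inclusive" else not in_group

# An Exclusive light skips the King piece but lights everything else;
# switching the same light to Inclusive reverses this.
print(light_illuminates("exclusive", {"king"}, "king"))
print(light_illuminates("inclusive", {"king"}, "king"))
```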
A simple scene illuminated by a point light. None of the geometric objects are included in the light's Associated Models list, so they are not affected by the light's selective property.
Creating Shadows
You can create shadows that appear to be cast by the objects in your scene. Shadows can make all the difference in a scene: a lack of them can create a sterile environment, whereas the right amount can augment the realism of the same scene. Shadows are controlled independently for each light source, so you can have some lights casting shadows and others not.

To create a shadow using the mental ray renderer for a scene or a render pass, you must set up three things: a light that generates shadows, objects that cast and receive shadows, and rendering options that render shadows. There are three basic kinds of shadows you can create using mental ray: raytraced, shadow-mapped, and soft.
Raytraced Shadows
Raytraced shadows use the renderer's raytracing algorithm to calculate how light rays are reflected, refracted, and obstructed. The shadows are very realistic but take longer to render than other types of shadows. To create raytraced shadows, you need to activate shadows in the light's property editor.
The King piece (center) has been added to the light's Associated Models list, making it subject to the light's selective property. The light has been defined as Exclusive, so it does not illuminate the objects on its Associated Models list.
The light is set to Inclusive. Now the light source affects only the objects in the Associated Models list (only the King piece) and ignores the rest.
You also need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.
Shadow-Mapped Shadows
Shadow-mapped shadows, also known as depth-mapped shadows, use the renderer's scanline algorithm. They are quick to render, but not as accurate as raytraced shadows.

The shadow map algorithm calculates color and depth (z-channel) information for each pixel, based on its surface and distance from the camera. Before rendering starts, a shadow map is generated for the light. This map contains information about the scene from the perspective of the light's origin. The information describes the distance from the light to objects in the scene and the color of the shadow on each object. During the rendering process, the map is used to determine if an object is in shadow.

To create shadow-mapped shadows, you need to activate shadows and configure the Shadow Map in the light's property editor. Then, you need to enable shadow maps in the renderer options.
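The core of the shadow-map lookup is a depth comparison, sketched here in plain Python (illustrative only, not the renderer's code; the small bias is a standard trick, assumed here, to avoid surfaces shadowing themselves due to limited map precision):

```python
def in_shadow(depth_from_light, shadow_map_depth, bias=0.001):
    # A point is shadowed when it lies farther from the light than the
    # depth the shadow map recorded for that direction: something
    # closer to the light must be blocking it.
    return depth_from_light > shadow_map_depth + bias

# The map recorded an occluder at depth 3.0; a point at depth 5.0 is
# behind it (shadowed), while a point at depth 2.0 is in front (lit).
print(in_shadow(5.0, 3.0))
print(in_shadow(2.0, 3.0))
```

Because the test only consults a precomputed map, no rays need to be traced at render time, which is what makes shadow-mapped shadows so fast.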
Volumic Shadow Maps

Volumic shadow maps are similar to regular shadow maps, but store more detail. Instead of simply storing the distance from the light to the first object hit, the volumic shadow map algorithm raymarches through the scene from the light's origin until it hits a fully opaque object. Along the way, it stores changes in light color or intensity along with the depth at which each change occurred. Volumic shadow maps are typically used when rendering shadows for geometry hair.
Soft Shadows
Soft shadows are created by defining area lights, which are a special kind of point light or spotlight. The rays emanate from a geometric area instead of a single point. This is useful for creating soft shadows with both an umbra (the full shadow, where an object blocks all rays from the light) and a penumbra (the partial shadow, where an object blocks some of the rays). The shadow's relative softness (the relation between the umbra and penumbra) is affected by the shape and size of the light's geometry. You can choose from four shapes and set the size as you wish.

To determine the amount of illumination on a surface, a set of sample points is distributed evenly over the area light geometry. Rays are cast from each sample point; all, some, or none of the rays may be blocked by an object. This creates a smoothly graded penumbra.
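The sampling described above can be sketched as follows (plain Python, illustrative only): the fraction of area-light samples whose rays reach the surface point determines where it sits between umbra and penumbra.

```python
def penumbra_illumination(sample_points, is_blocked):
    # Cast one shadow ray per sample point on the area light; the lit
    # fraction is 1.0 in open light, 0.0 in the umbra, and somewhere
    # in between inside the penumbra.
    lit = sum(0 if is_blocked(p) else 1 for p in sample_points)
    return lit / len(sample_points)

# Four samples across a 1-unit-wide area light; a hypothetical occluder
# blocks the half of the light with x < 0.5, giving a half-lit penumbra.
samples = [0.0, 0.25, 0.5, 0.75]
print(penumbra_illumination(samples, lambda x: x < 0.5))
```

More samples give a smoother gradation between umbra and penumbra, at the cost of more shadow rays per shaded point.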
To create soft shadows, you need to activate shadows in the light's property editor. You also need to activate and configure the Area Light in the light's property editor. Finally, you need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.
A rectangular area light emits light from a rectangular object like this one.
Global Illumination
Global illumination simulates the way bright light bounces off objects and bleeds their color into surrounding surfaces. When global illumination is activated, photons emitted from a designated light travel through the scene, bounce off photon-casting objects, and are stored by photon-receiving objects. Photon casting and reception are not mutually exclusive properties: an object can do both, but only a light can emit photons. Global illumination is often used with caustics, which is also a photon effect. The following is an overview of how to set up global illumination for the mental ray renderer.
1 Define objects as casters and receivers. An object's visibility property allows you to set options that control how the object responds to global illumination photons emitted from a light. Caster controls whether photons bounce off the object and continue to travel through the scene; when this is off, the object simply absorbs photons. Receiver controls whether the object receives and stores photons; when this is off, the photon effect is not visible on the object's surface. Visible controls whether the object is visible to photons at all; when this is off, photons simply pass through the object.

2 Set the light to emit global illumination photons. Activate Global Illumination on the Photon tab of the light's property editor. You can then set the Intensity of the photon energy, which determines the intensity of the color that bleeds onto photon-receiving objects. You can also set the Number of Emitted Photons. Typically, both of these values need to be set in the tens or hundreds of thousands for the final global illumination effect.
Basics 355
Section 19 Lighting
3 Adjust the global illumination effect. Once you've defined the casters, receivers, and emitting lights, you need to adjust the rendering options that control the photon effect. On the Caustics and GI tab for the renderer, activate Global Illumination, then set these two important parameters: GI Accuracy specifies the number of photons that are considered when any point is rendered; Photon Search Radius specifies the distance from the rendered point within which photons are considered. You'll also need to fine-tune the photon intensity and the number of emitted photons for each of the emitting lights.
4 Increase the radiance of the receiver objects. To further fine-tune the global illumination effect, adjust the Radiance of the global illumination receiver objects. Radiance controls the strength of the photon effect on the object's surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set in each object's surface shader.
Caustics
Caustic effects recreate the way that light is distorted when it bounces off a specular surface or passes through refractive objects or volumes. The classic example is the light sparkling in the middle of a wine glass or on the floor of a swimming pool. In either case, light passes through refractive surfaces and is distorted, creating complex light patterns on the surfaces that it affects. As with global illumination, caustics compute how photons emitted from a light travel across the scene and bounce over and through caster and receiver objects. Here is an overview of setting up caustic lighting for the mental ray renderer, which is almost identical to setting up global illumination:
1 Define objects as casters and receivers. An object's visibility property allows you to set options that control how the object responds to caustic photons emitted from a light.

2 Set the light to emit caustic photons. To make a light into a caustic photon emitter, activate Caustics on the Photon tab of the light's property editor. You can then set the Intensity of the photon energy and the Number of Emitted Photons.

3 Adjust the caustic effect. Adjust the rendering options that control the photon effect on the Caustics and GI tab for the renderer. Activate Caustics on this tab, then set these two important parameters: Caustic Accuracy specifies the number of photons that are considered when any point is rendered; Photon Search Radius specifies the distance from the rendered point within which photons are considered. You'll also need to go back to the property editors of all emitting lights and fine-tune the photon intensity and the number of emitted photons.

4 Increase the radiance of the receiver objects. To fine-tune the caustics effect, adjust the Radiance of the caustics receiver objects. Radiance controls the strength of the photon effect on the object's surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set in each object's surface shader.
Final Gathering
Final gathering is a way of calculating indirect illumination without using photon energy. Instead of using rays cast from a light to calculate illumination, final gathering uses rays cast from each illuminated point on an object's surface. The rays sample a hemisphere of a specified radius above each point and calculate direct and indirect illumination based on what they hit. The overall effect is that every object in the scene becomes a light source and influences the color and illumination of the objects and environment surrounding it.
You can use the scene objects visibility properties to precisely control how each object participates in final gathering calculations.
A camera eye ray intersects with geometry whose shading needs to calculate indirect illumination. Final gathering rays are shot into the hemisphere above the intersection point to sample for illumination, providing the indirect illumination contribution.
This scene was rendered using final gathering, which collects the indirect and direct light around illuminated points on an object's surface to simulate real-world lighting.
Ambient Occlusion
Ambient occlusion is a fast and computationally inexpensive way to simulate indirect illumination. It works by firing sample rays into a predefined hemispherical region above a given point on an object's surface in order to determine the extent to which the point is blocked, or occluded, by other geometry. Once the amount of occlusion has been determined, a bright color is returned for unoccluded points and a dark color for occluded points. Where the object is partially occluded, the bright and dark colors are mixed in accordance with the amount of occlusion.

In Softimage, you can create an ambient occlusion effect by connecting the Ambient Occlusion shader in the render tree. This is most commonly done at the render pass level to create an occlusion pass that can be added in and adjusted during compositing. You can also use the shader on individual objects to limit the occlusion calculation.
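The bright/dark mixing can be sketched as a simple linear blend (plain Python, illustrative only, not the shader's actual implementation):

```python
def occlusion_color(bright, dark, occlusion):
    # Mix the bright and dark colors by the measured occlusion fraction:
    # 0.0 = fully unoccluded (bright), 1.0 = fully occluded (dark).
    return tuple(b * (1.0 - occlusion) + d * occlusion
                 for b, d in zip(bright, dark))

# A point whose hemisphere is 25% blocked, with white and black as the
# bright and dark colors, shades to 75% gray.
print(occlusion_color((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.25))
```

With white and black as the two colors, the result is exactly the kind of grayscale occlusion pass described below, ready to be multiplied over a beauty pass in compositing.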
Image-Based Lighting
You can light your scenes with images using the Environment shader, which surrounds the scene with an image. This shader has a set of parameters that allow you to control the image's contribution to final gathering and reflections.
The image above shows a scene rendered using only the Ambient Occlusion shader. The bright color is set to white and the dark color to black. This type of rendering can be composited with other passes to add the occlusion effect to the scene's color and illumination.
Although you can use any image to light the scene this way, you will get the best results using a High Dynamic Range (HDR) image. That's because HDR images contain a greater range of illumination than regular images, making them better able to simulate real-world lighting.
Light Effects
Softimage includes a number of lighting effects that you can use to enhance the realism and alter the look and mood of your rendered scenes. Different effects are applied in different ways: some are applied as properties of lights, while others are defined by shaders in the render tree.

The point light inside this street lamp uses a flare effect. Flares are created as properties of scene lights.

This scene uses a variety of light effects to capture the feeling of a dimly lit alley on a foggy evening. In the background of the scene, you can see the effect of depth-fading. Even though it affects the entire scene, the depth-fading is defined by a light's volumic property.
The volumic light shining out from the window in the stairwell is created using a volumic property applied to a light.
Section 20
Cameras
Virtual cameras in Softimage are similar to physical cameras in the real world. They define the views that you can render. You can add as many cameras as you want to a scene. You can also achieve a photorealistic motion blur effect for every object and/or camera in your scene.
Section 20 Cameras
Types of Cameras
Each of the images below was taken from the same position, but using a different camera each time. The image on the right shows a wireframe view of the original scene, including the position of the camera. These camera types are available from the Get > Primitive > Camera menu.
Perspective (Default) Uses a perspective projection, which simulates depth. Perspective cameras are useful for simulating a physical camera. The default camera in any new scene is a perspective camera.
Wide Angle Creates a wide-angle view by using a perspective projection and a large angle of view (100°). Wide-angle cameras have a very large field of view and can often distort the perspective.
Telephoto Uses a perspective projection and a small angle of view (5°) to simulate a telephoto lens, where objects appear zoomed in.
Orthographic Makes all of the camera rays parallel. Objects stay the same size regardless of their distance from the camera. These projections are useful for architectural and engineering renderings.
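The difference between the projection types comes down to whether a point's depth is used. The sketch below is illustrative projection math only (the function and parameter names are invented), not how Softimage computes camera rays:

```python
def project(point, focal_length=1.0, orthographic=False):
    """Project a 3D camera-space point (x, y, z) onto the image plane.

    Perspective projection divides by depth (z), so distant objects
    appear smaller; orthographic projection simply drops the depth,
    so size is independent of distance from the camera.
    """
    x, y, z = point
    if orthographic:
        return (x, y)
    return (focal_length * x / z, focal_length * y / z)

# The same point, twice as far away, projects to half the size
# in perspective but stays the same size in orthographic:
print(project((1.0, 0.0, 2.0)))                     # (0.5, 0.0)
print(project((1.0, 0.0, 4.0)))                     # (0.25, 0.0)
print(project((1.0, 0.0, 4.0), orthographic=True))  # (1.0, 0.0)
```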
The Camera In the 3D views, the camera is represented by a wireframe control object that you can manipulate in 3D space. The camera has a directional constraint to the camera interest.
Camera Direction The camera icon displays a blue and a green arrow. The blue arrow shows where the camera is looking; that is, the direction the lens is facing. The green arrow shows the camera's up direction, which you can change by rolling the camera (press L).
The Camera Interest The camera's interest (what the camera is always looking at) is represented by a null. You can translate and animate the null to change the camera's interest.
The Camera Root The camera root is represented by a null. By default, it appears in the middle of the wireframe camera, but you can translate and animate it as you would any other object. The null is useful as an extra level of control over the camera rig, allowing you to translate and animate the entire rig the same way that you animate its individual components.
Positioning Cameras
Once you select a camera, you can translate, rotate, and scale it as you would any other object. However, scaling a camera only affects the size of its icon and does not change any of the camera's properties. Generally, the most intuitive way of positioning cameras is to set a 3D view to a camera view and then use the 3D view navigation tools to change the camera's position. As you navigate in the 3D view, the camera is subject to any transformations necessary to keep its interest in the center of its field of view. Since positioning cameras is often a process of trial and error, you'll probably find yourself wanting to undo and redo camera moves. Press Alt+Z to undo the last camera move, and Alt+Y to redo the last undone camera move. If you've zoomed in and out too much and the camera needs a reset, press R. This resets the camera in the 3D view under the mouse pointer.
Field of View
The field of view is the angular measurement of how much the camera can see at any one time. By changing the field of view, you can distort the perspective to give a narrow, peephole effect or a wide, fish-eye effect.
Camera Format
The camera's format refers to the picture standard that the camera is using and the corresponding picture ratio. You can also specify a custom picture standard with a picture ratio that you define. The default camera format is NTSC D1 4/3 720x486, with a picture ratio of 1.333, but several standard NTSC, PAL, HDTV, Cine, and Slide formats are also available.
The camera's Vertical field of view was made large enough to accommodate the entire building. The Horizontal field of view was automatically calculated based on the aspect ratio.
Using the same camera in the same location, the Vertical field of view is much smaller, thus making only a small part of the building visible.
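The relationship between the two angles and the picture ratio follows standard lens geometry. Here is a small sketch of that calculation (illustrative only; it mirrors the usual formula and is not taken from Softimage's code):

```python
import math

def horizontal_fov(vertical_fov_deg, picture_ratio):
    """Derive the horizontal field of view from the vertical field
    of view and the picture (aspect) ratio, using the standard
    relationship tan(h/2) = ratio * tan(v/2)."""
    half_v = math.radians(vertical_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(picture_ratio * math.tan(half_v)))

# A 45-degree vertical FOV at the default 1.333 picture ratio gives
# a horizontal FOV of roughly 57.8 degrees.
print(round(horizontal_fov(45.0, 1.333), 1))
```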
Lens Shaders
Lens shaders are used to apply a variety of different effects to everything that a camera sees. Some lens shaders create generalized effects, such as depth of field, cartoon ink lines, or lens distortion. Others are more utility oriented, and do things like emulate real-world camera lenses or render depth information. Lens shaders can be used alone, or in conjunction with other lens shaders. For example, you might want to render a bulge distortion and depth of field simultaneously. You can apply lens shaders to cameras as well as passes.
Lens shaders are applied via the shader stack on the Lens Shaders tab of the camera's property editor, which lists every shader applied to the camera. From here you can apply a shader to the camera, remove a shader from the shader stack, and open the selected shader's property editor.
This is a camera with no clipping planes set, which means the resulting image (right) shows every object in the scene.
This is a camera with near and far clipping planes set. The near plane is between the first two buildings and the far plane is between the last two buildings. Everything in front of the near plane and everything beyond the far plane is invisible, as seen in the resulting image (right).
The images below and beside show this scene rendered using three different lens shaders.
Motion Blur
Motion blur adds realism to a scene's moving objects by simulating the blur that results from objects passing in front of a camera lens over a specified period of exposure. In Softimage, you can easily achieve a photorealistic motion blur effect for every object and/or camera in your scene.
You can apply motion blur properties to cameras. This is useful when both the camera and scene objects are moving, but you only want the blur caused by the objects' movement.
In the first image (left), a fast shutter speed (< 0.1) is used, then a slower shutter speed (middle), and finally (right) a very slow shutter speed (> 0.6).
You can also specify an Offset for the shutter's time interval, which allows you to push the motion blur trails and even extend them into later frames. Additionally, you can define where on the frame the blur is evaluated and rendered.
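The shutter parameters can be thought of as defining a sampling interval per frame. This toy model (function and parameter names are invented for illustration, not Softimage's API) shows how shutter speed and offset interact:

```python
def blur_interval(frame, shutter_speed, offset=0.0):
    """Return the (open, close) times, in frames, over which motion
    is sampled for the blur at a given frame.

    shutter_speed is the exposure as a fraction of the frame (e.g.
    0.1 for a fast shutter, 0.6 for a slow one), and offset pushes
    the whole interval later, extending trails into later frames.
    """
    open_time = frame + offset
    return (open_time, open_time + shutter_speed)

# A slow shutter samples a longer interval, producing longer trails;
# an offset shifts the interval past the frame boundary.
print(blur_interval(10, 0.1))
print(blur_interval(10, 0.6, offset=0.25))
```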
Section 21
Rendering
Rendering is the last step in the 3D content creation process. Once you have created your objects, textured them, animated them, and so on, you can render out your scene as a sequence of 2D images. Your ultimate goal may not be just to render, but to optimize rendering quality and speed.
Rendering Overview
The process of rendering out your scenes can vary considerably from project to project. However, here is a typical sequence of tasks you might follow when rendering:
1. Set up render passes and define their options. Render passes let you render different aspects of your scene separately, such as a matte pass, a shadow pass, a highlight pass, or a complete beauty pass. You can define as many render passes as you want: within each pass, you can create partitions of lights and objects, then apply shaders and control their settings together.
2. Set up render channels and define their options. These allow you to output different information about the pass to separate files.
3. Set rendering options. All objects, including lights and cameras, are defined by their rendering properties. For example, you can determine whether a geometric object is visible, whether its reflection is visible, and whether it casts shadows. Rendering properties can be set per render pass as well.
4. Preview the results of any modifications. The viewports can display your scene in different display modes, including wireframe, hidden-line removal, shaded, and textured. In addition, you can view any portion of your scene rendered in a viewport by defining a render region, or preview a full frame using Render Preview.
5. Render the passes and their render channels. Softimage gives you the option of rendering using any one of the following methods:
- Interactively from the render region.
- Interactively, using the single-frame preview tool.
- Interactively from the Softimage user interface.
- Batch rendering using the xsi -render or xsibatch -render command line.
- Batch rendering with scripts using the -script option at the command line.
- Using the ray3.exe command line.
- Using mental ray's tile-based distributed rendering across several machines. To do so, you must define which machines to use and how.
6. Composite and apply effects to passes. You can use Softimage Illusion, a compositing and effects toolset that's fully integrated in Softimage, or you can use another postproduction tool.
Rendering Visibility
Every geometric object in a scene has a visibility property that controls whether it is visible when rendering, and in particular whether it is visible to various types of rays (primary, secondary, final gathering, and so on). This visibility property exists locally on every 3D object in Softimage and cannot be applied or deleted. However, visibility can be overridden at the partition level. In complex scenes, rendering visibility options can be difficult to manage on a per-object basis. It's easier to partition objects and use overrides to control rendering visibility for all of the objects in a partition.
Render Passes
A render pass creates a layer of a scene that can be composited with any other passes to create a complete image. Passes also allow you to quickly re-render a single layer without re-rendering the entire scene. Later, you can composite the rendered passes back together, making adjustments to each layer as needed. Each scene can contain as many render passes as you need. When you first create a scene in Softimage, it has a single pass named Default_pass. This is a beauty pass that is set to render every element of the scene. You can create additional passes to render specific elements and attributes as needed.
This photograph (background pass) is the background scene over which the dinosaur will be composited.
This image is the composite of all these passes. Rendering in passes allows you to tweak each isolated element separately without having to re-render your scene.
This pass is a rendered image of the dinosaur. Compositing it over the background would make the scene rather flat and unrealistic.
The matte pass cuts out a section of the rendered image so another image can be composited over or beneath it.
The shadow pass isolates the scene's shadows so you can composite them in later. This allows you to edit a shadow's blur, intensity, and color without any additional rendering.
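At the compositing stage, each of these passes is combined per pixel. The sketch below shows the classic "over" operation for a premultiplied foreground and its matte; it is the generic compositing formula for illustration, not Softimage Illusion's implementation:

```python
def over(fg, bg, matte):
    """Composite a premultiplied foreground color over a background
    color, using the matte (alpha) to hold out the background."""
    return tuple(f + b * (1.0 - matte) for f, b in zip(fg, bg))

# Where the matte is solid (1.0) the rendered dinosaur replaces the
# background photograph; where it is empty (0.0) the photo shows through.
print(over((0.4, 0.3, 0.2), (0.8, 0.8, 0.8), 1.0))  # (0.4, 0.3, 0.2)
print(over((0.0, 0.0, 0.0), (0.8, 0.8, 0.8), 0.0))  # (0.8, 0.8, 0.8)
```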
Creating Passes
You will most likely want to create several passes as your scene grows in size and complexity. You can create a variety of pass types from the Render toolbar's Pass > Edit > New Pass menu.
Creating Partitions
A partition is a division of a pass that behaves like a group. There are two types of partitions: object and light. Light partitions can only contain lights, and object partitions can only contain geometric objects. Placing objects in partitions allows you to control their attributes by modifying them at the partition level rather than at the individual object level. The modifications affect only the objects in the partition for the specific render pass to which the partition belongs. This allows you to change object attributes on a per-pass basis. Create an empty partition by choosing Pass > Partition > New Partition on the Render toolbar and then add elements to it. Or you can select some objects and choose the same command to create a partition that automatically includes these objects.
Current pass. The current pass is always displayed in bold typeface. Each pass has its own options. This lets you optimize your rendering by enabling only those options you need for each pass. For example, you could enable shadow calculations only in the shadow pass. Expanding any pass node displays its renderer options, the active camera for the pass, its partitions, and any environment, output, and/or volume shaders applied to the pass as a whole.
Pass renderer options. Depending on which renderer you have chosen for your pass, click the Hardware Renderer or mental ray icon to edit the pass's renderer options. You can tell whether the pass is using a local or a global set of render options by whether the renderer's node is displayed in roman or italic typeface.
Pass camera. Click the camera icon to define camera and lens-shader options for the pass. You can add new cameras to your scene and set them as active if needed. Background partitions. Every pass is created with two background partitions, which contain the scene's objects and lights. Background partitions usually contain every object in your scene that isn't modified in the pass. However, nothing is stopping you from modifying the contents of these partitions as well.
Partition. A partition is a division of a pass that behaves like a group. Partitions are used to organize scene elements within a pass. Expanding any partition node lets you see its contents, as well as any materials, shaders, overrides, and other properties applied to it. Each pass has two default partitions: a background objects partition that contains most or all of the scene's objects, and a background lights partition that contains most or all of the scene's lights. You can add as many additional partitions as you need for a pass, but an object can only be in one partition per pass.
Framebuffers. The framebuffers folder holds all the active render channels defined for the pass, including its Main render channel. Passes. Additional passes, including the default beauty pass, are listed in creation order unless you have modified the explorer's sort order settings. A material is assigned to a partition. The B indicates that it was applied in branch mode and is propagated to every object in the partition. If any objects in the partition have local materials, they are overridden by the partition-level material for this pass.
When you apply shaders to partitions using the Get > Material command, they take precedence over the shaders applied directly to objects in the scene, but only for that pass.
An override changes the ambient and diffuse values to black, but leaves the other values untouched.
Render Channels
Render channels are a mechanism for outputting multiple images, each containing different information, from a single pass. When you render the pass, you can specify which channels should be output in addition to the full pass. By default a Main render channel is defined for every pass (you can think of it as the beauty channel rendered for each pass). You can use these images at the compositing stage, the same way you would use any render pass. The advantage of using render channels is that they are easy to define and quick to add to any pass. Preset render channels allow you to isolate scene attributes that are commonly rendered in separate passes. You do not need to create complex systems of partitions and overrides to extract a particular scene attribute. All you need is your default pass and you can quickly output the preset diffuse, specular, reflection, refraction, and irradiance render channels.
Rendering options are set for the scene, for your renderer of choice (by default, mental ray), and for each render pass you define. For interactive preview renders, the render region has its own set of renderer options. You can access these rendering options from different places:
- Render toolbar: opens the scene, pass, and renderer property editors. Choose Render > Scene Options, Render > Pass Options (for the current pass), or Render > Renderer Options (active renderer for the current pass).
- Explorer: press 8 to open an explorer, then press P to set the scope to Passes or U to set it to Current Pass. From there you can click the scene, pass, or renderer nodes to display their property editors.
- Render Manager: a dedicated view for editing scene, pass, and renderer options. It contains a built-in explorer view, quick access to pass rendering, rendering and output preferences, and a copy manager for your render settings. Choose Render > Render Manager from the Render toolbar.
Refraction Channel
Reflection Channel
Irradiance Channel
Ambient Channel
Diffuse Channel
Specular Channel
In the explorer, select the render options you want to edit. You can edit render options for the scene, for the renderer, and for each pass defined in the scene. Depending on your selection, the options are displayed in the middle or right panel. When you select a render pass, the render options for the selected pass are displayed in the middle panel. If you select multiple passes (Ctrl-select), you can simultaneously edit their common parameters. Multi Edit appears at the top of the panel to indicate that you are in this mode.
Passes
The render options for all the render passes defined in your scene. The pass render options allow you to modify settings specific to each pass. You can set output paths, specify the pass camera, output your pass to a movie file, apply pass-level shaders, add render channels, and more.
The render options for all available renderers. The scene render options allow you to modify global settings for the entire scene. You can specify things like the renderer to use, the frames to render, the basic output path and format for rendered images. You can also create custom render channels that you can add to individual passes. The current pass is displayed in bold in the explorer.
When you select Scene Render Options or one of the global renderers (mental ray, Hardware Renderer, etc.), the options for the selected item are displayed in the right panel. This is also the case when you select a render pass that contains a set of local render options. If your selected passes use different renderers then Mixed Selection will appear at the top of the panel and no options are displayed.
Current pass
Use these commands to render the current pass, the selected passes, all passes in the scene, the current frame, or the current frame for all passes in the scene.
- Edit > Override Marked Pass Parameters
- Edit > Make Renderer Local to Pass
- Edit > Make Pass Renderer Global
- Edit > Open Rendering Preferences
- Edit > Open Output Format Preferences
- Edit > Copy Render Options
Selecting a Renderer
You usually render a scene using the default mental ray rendering software, which is built into Softimage. mental ray uses three rendering algorithms: scanline, raytracing, and the rasterizer. You can also use the hardware renderer, which renders whatever is displayed in a 3D view (such as a viewport in Shaded display mode). Scanline and raytracing are normally used together: mental ray uses the scanline method until an eye ray changes direction (due to reflection, refraction, and so on), at which point it switches to the raytracing method. Once it switches, it does not go back to scanline until the next eye ray is fired. Without scanline rendering, the render is usually slower. Without raytracing, transparency rays are rendered, but reflection rays cannot be cast and refraction rays are not computed. The rasterizer accelerates motion blur rendering in large and complex scenes with a lot of motion blur; to use it, you must set special sampling options. Scanline Scanline rendering is a rendering method used to determine the primary visible surfaces. Scene objects are projected onto a 2D viewing plane and sorted according to their X and Y coordinates. The image is then rendered point-by-point and scanline-by-scanline, rather than object-by-object. Scanline rendering is faster than raytracing but does not produce as accurate results for reflections and refractions.
This scene was rendered using scanline rendering only. Notice how the transparency has little depth, and there is no reflection or refraction.
Raytracing Raytracing calculates the light rays that are reflected, refracted, and obstructed by surfaces, producing more realistic results. Each refraction or reflection of a light ray creates a new branch of that ray when it bounces off an object and is cast in another direction. The various branches of a ray constitute a ray tree. Each new branch can be thought of as a layer: the total number of a ray's layers represents the depth of that ray.
This scene was rendered using the raytracing render method. Notice how the glass reflections, transparency, and refraction are more realistic than with Scanline rendering.
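The ray-tree idea can be modelled directly: each ray optionally spawns reflection and refraction branches, and the depth is the longest chain from the eye ray down. This toy structure is invented purely for illustration and is not mental ray's internal representation:

```python
def ray_tree_depth(ray):
    """Return the depth of a ray tree: the length of the longest
    chain of reflection/refraction branches from the eye ray down.

    A ray is modelled as a dict with optional 'reflection' and
    'refraction' entries holding child rays.
    """
    children = [ray.get("reflection"), ray.get("refraction")]
    return 1 + max((ray_tree_depth(c) for c in children if c is not None),
                   default=0)

# An eye ray that reflects off a mirror, and whose reflected ray then
# refracts through glass, has a ray tree of depth 3.
eye_ray = {"reflection": {"refraction": {}}}
print(ray_tree_depth(eye_ray))  # 3
```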
Hardware Rendering The Softimage hardware renderer allows you to output a scene as it appears when displayed in any 3D view whose viewpoint is that of the pass camera. Most of the hardware rendering modes correspond to the 3D views display modes Wireframe, Shaded, Textured, and so on. Hardware rendering is useful for generating previews of your scene using all of the display options available in 3D views. It is also useful for outputting realtime shader effects to file.
You can resize and move a render region, select objects and elements within the region, as well as modify its properties to optimize your preview. Whatever is displayed inside the region is continuously updated as you make changes to the rendering properties of objects. Only this area is refreshed when you change object, camera, and light properties, adjust rendering options, or apply textures and shaders. Comparing Render Regions The render region has memo regions that allow you to store, compare, and recall settings. They look similar to the viewports' memo cams, but are not saved with the scene.
Middle-click to store, and click to display. The currently displayed cache is highlighted in white. Right-click for other options.
Drag the swiper to show more or less of one image or the other.
The render region uses the same renderer as the final render (mental ray), so you can set the region to render your previews at final output quality. This gives you an accurate preview of what your final rendered scene will look like.
Be careful when comparing render regions. You should do this only when you are tweaking material and rendering parameters, and not making other changes to the scene. If you revert to previous settings, either accidentally or on purpose, you will lose any modeling, animation, or other changes you have made in the meantime.
To render a selection of passes, select the passes in the explorer and click the Render Pass > Selected button in the Render Manager, or choose Render > Render > Selected Passes from the Render toolbar. The passes are rendered one after the other.
ray3.exe Rendering
You can render scenes using the mental ray standalone ray3.exe from a command line. Although many of the ray3.exe commands are available in the Softimage interface, you may want to use the ray3.exe command line tool to manually override options in exported MI2 files. You can edit the MI2 files to define extra shaders, create objects, swap textures, or perform other tasks.
Section 22
Softimage Illusion
The Softimage Illusion toolset consists of three core views: the FxTree, where you build networks of effects operators; the Fx Viewer, where you preview the results; the Fx Operator Selector, from which you insert pre-connected operators into the FxTree. Each of these views can be opened in a viewport or as a floating view (choose View > Compositing > name of view from the main menu). There is also a Compositing layout available from the View > Layouts menu. It contains the three core Fx tools arranged in a way that makes it easy to build and preview effects. Using this layout for compositing and effects work is usually more efficient than simply opening the required views in viewports because the non-compositing tools and views are mostly hidden.
Fx Tree where you create networks of linked operators to composite images and create effects. You can create multiple instances of the FxTree workspace (called trees) to organize effects more efficiently.
Fx Viewer 2D viewer in which you can preview each operator to see how it contributes to the overall effect.
Fx Operator Selector Lists all of the available compositing and effects operators. Fx Operators Operators are represented by nodes that you can link together manually or connect beforehand using the Fx Operator Selector. Once you select an operator here, you can pre-set its connections to existing operators in the Fx Tree and then simultaneously insert and connect it in the Fx Tree.
Clip In Operator
Clip In (or From) reads from the image clip; Clip Out (or To) writes back to it. You can modify the image clip itself by adding effects operators between the Clip In and Clip Out operators. This updates the clip wherever it is used in the scene. The Clip In and Clip Out operators are primarily used to modify images that are used outside of the FxTree. For an actual composite or effect that you intend to render to file, it's better to use File Input operators. To import image clips, select an image clip from the FxTree's Clips menu.
If you need to build several different networks, you can create multiple instances of the FxTree workspace (called trees) to organize them more efficiently. Each tree is a separate operator in the scene with its own node in the explorer.
Navigation Control Allows you to navigate in the FxTree workspace when a network of operators becomes too large to display all at once. Dragging in the rectangle pans the FxTree workspace; dragging the zoom slider up and down zooms in and out. Operator Connection Icons Green icons accept image inputs; you can connect almost any operator to green inputs. Blue icons accept matte (A) inputs, which are generally used to control transparency. Red icons are outputs. Fx Operator Selector A tool for inserting operators into the FxTree. Select an operator from the list, then consecutively middle-click the existing operators you wish to connect to its inputs and output. Middle-click in an empty area of the FxTree workspace to add the operator.
Next, you need to add and connect the operators required to build your effect. You can get any operator from the Ops menu and connect it by dragging connection lines from other operators' outputs to its inputs. You can also use the operator selector to predefine operator connections before you insert the operators into the FxTree.
Once you've built your effect, you can render it out using a File Output operator. Operator information Positioning the mouse pointer over an operator displays information at the bottom of the FxTree.
Once you define all of the needed connections, middle-click an empty area of the Fx Tree workspace to add the operator.
Fx Operator Types
Whether you're compositing a simple foreground image over a background or applying a complex series of effects to an image, every step of the process is accomplished by an operator in the FxTree. By connecting these operators together, you can create composites and special effects.
Image: Image operators act as the in and out points for each effect in the FxTree. File Input operators are placeholders for images in the tree, Paint Clip operators are used to import images into the FxTree for raster painting, Vector Paint operators are used to create vector paint layers in the FxTree, and PSD Layer Extract operators extract a single layer from a .psd image.
File Output: File Output operators let you set the output and rendering options for your composites and effects.
Composite: Composite operators offer you several ways to combine foreground images with a background image to produce a composited result. Most compositing operators require a foreground image, a background image, and an internal or external matte.
Retiming: Retiming operators allow you to change the timing of image sequences. You can, for example, convert from 24 to 30 frames per second and vice versa, interlace and de-interlace clips, and change the duration of clips by dropping frames or combining them together in different ways.
Transition: Transition operators create animated changes from one image clip to another. You can use transition operators to apply dissolves, fades, wipes, pushes, and peels.
Color Adjust: Color Adjust operators let you color correct clips in the FxTree. You can modify and animate hue, saturation, lightness, brightness, contrast, gamma, and RGB values. You can also perform various operations like inverting images, premultiplying images, and so on.
Color Curves: Use the Color Curves operators to graphically adjust color components of images in the FxTree, and to extract mattes for foreground images so that you can composite them over background images.
Grain: Grain operators alter the appearance of film grain in your image sequences. You can add and remove grain, as well as adding and removing noise.
Optics: Optics operators create optical effects in images in the FxTree. These include depth-of-field, lens flares, and flare rings.
Filter: Filter operators let you control the appearance of images in the FxTree. Among other things, they can reproduce the effects of different lens filters, apply blurs, and add or remove noise.
Distort: Distort operators simulate 3D changes to images in the FxTree. Use these operators to apply distortions and transformations.
Transform: Transform operators adjust the dimensions and/or position of images in the FxTree. Besides cropping and resizing images, you can also use the 3D Transform operator to transform an image in a simulated 3D space, as well as warp and morph images.
Plugins: The Plugins operators offer a variety of patterns and special effects that you can use in your FxTrees. All of the Plugins operators are custom operators (called UFOs) that were created using the UFO SDK.
Painterly Effects: Painterly Effects operators allow you to apply a variety of classic artistic effects to images in the FxTree. The Softimage compositor's three sets of Painterly Effects operators let you apply effects like Chalk & Charcoal, Watercolor, Bas Relief, Palette Knife, and Stained Glass, among many others.
Operator Info Displays information about the operators being viewed and edited. Navigation Tool Drag in the rectangle to pan; drag the slider to zoom. Click the Edit hotspot to open the operator's property editor. Click the View hotspot to preview the operator in the Fx Viewer. Compare Area Displays a portion of one image while you're editing another; this is useful for seeing one operator's effect on another. Image courtesy of Ouch! Animation Display Area Displays the operator that you're previewing. Displays the current image at full size. Toggles the Compare Area. Updates the Compare Area with the current image. Switches viewers A and B. Isolates one of the image's color channels. Forces the current image to fit in the viewer. Mixes the view with the Merge Source.
Rendering Effects
Once you have your effect looking the way you want it, you can render it to a variety of different image formats using a File Output operator. The File Output property editor is where you set all of the effect's output options, including the picture standard, file format, and range of frames. Rendering Effects from the Command Line You can also render effects non-interactively from the command line using xsi -script or xsibatch -script. Make sure that your script contains the following line (VBScript example):
RenderFxOp "OutputOperator", False
In the File Output property editor, enter a valid filename, path, and format, and specify the range of frames to render. Clicking Render opens the Rendering window. When the sequence is rendered, you can open a flipbook to view it.
where OutputOperator is the name of the File Output operator that you want to render. The False argument specifies that the Fx Rendering dialog box should not be displayed during rendering.
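For example, a minimal batch-rendering script might look like this. This is a sketch only: the scene path and operator name are placeholders for your own, and you should check the SDK Guides for the exact command signatures.

```vbscript
' Sketch of a batch render script (run with: xsibatch -script render_fx.vbs).
' The scene path and operator name below are placeholders, not real names.
OpenScene "C:\projects\MyProject\Scenes\MyComposite.scn"
RenderFxOp "OutputOperator", False   ' False = don't show the Fx Rendering dialog
```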
Once you've set the output options, all you need to do is click the Render button to start the rendering process. In the Rendering window, you will get information regarding the rendering of the sequence.
2D Paint
Softimage's compositing and effects toolset includes a 2D paint module which offers 8- and 16-bit raster and vector painting. To paint on images, you set up paint operators in the FxTree and then paint on them in the Fx Viewer, where a Paint menu gives you access to a variety of paint tools. You work with paint operators the same way you work with other Fx operators, making it easy to touch up images, fine-tune effects, edit image clips, paint custom mattes, create write-on effects, and so on. You can also use blank paint operators to paint images from scratch.
Paint Menu: When you edit a paint operator, the Paint menu is added to the Fx Viewer, giving you access to all of the paint-related commands and tools.

Fx Paint Brush List: Lists all of the paint brushes available for painting strokes. All of the brushes are presets based on the same core set of properties. The Fx Paint Brush List is an optional view in the compositing layout (shown here). To open: choose View > Compositing > Fx Paint Brush List from the main menu.

Fx Viewer: When you edit and preview a paint operator, the Fx Viewer is where you actually paint strokes and shapes.
Fx Color Selector Allows you to choose foreground and background paint colors using a variety of different color models. To open: position the mouse pointer in the Fx Viewer and press 1, or choose View > Compositing > Fx Color Selector from the main menu.
Paint Operators Behave exactly like other operators in the Fx Tree, and can be connected manually or using the operator selector.
Vector Paint
Vector painting is a non-destructive, shape-based process where every brush stroke is editable even after you've painted it. Rather than painting directly on an image, you paint on a vector shapes layer that is composited over an input image or other operator. In the FxTree, you add a vector shapes layer on top of an image by connecting the image's operator to a Vector Paint operator's input. You can then paint on the vector shapes layer in the Fx Viewer. A Vector Paint operator has a small paint brush/shape icon in its upper-left corner. This differentiates it from non-paint operators, which you cannot paint on, and from raster paint operators, which use a different icon. One convenience of painting in vector paint operators is that you don't have to manage changes to each frame the way you do with raster paint clips. Every shape in a vector paint operator is stored as part of the operator's data, and is animatable. This allows you to paint shapes and strokes that stay in the image for as many frames as you need. Vector paint operators are blank by default and do not have source images. Instead, they are more like other Fx operators in that they have both an input and an output and use other operators' outputs as their sources. However, there's nothing preventing you from keeping them blank and painting their contents from scratch.
Raster Paint
Raster painting is the process of painting directly on an image. It is destructive, meaning that each time you paint a stroke, you're directly altering the image's pixels. Once you've painted on the image, the stroke or shape cannot be moved or altered (unless, of course, you paint a new stroke over it). In the FxTree, you can paint on images or sequences (but not movie files: .avi, QuickTime, and so on) loaded in a Paint Clip operator, which is available from the Ops menu. You can also insert a blank paint clip and configure it later. A Paint Clip operator has a small paint brush icon in its upper-left corner. When you paint on a sequence, you can manage changes to frames using the tools on the Modified Frames tab of a Paint Clip's property editor. You can revert painted frames back to their last saved state, and save changes when you're ready to commit them.
The Modified Frames tab is where you manage painted frames: it lists every unsaved frame that you've changed, and lets you save changes to frames or revert frames to their original state.
Add a paint operator to the FxTree workspace and edit its properties. This activates the Fx Viewer's Paint menu, giving you access to paint tools and options.

Set the active paint brush from the Fx Paint Brush List. The active paint brush is used by any paint tool that can paint a stroke (the paint brush tool, the line tool, the shape tools, and so on).

If necessary, edit the brush properties. To open the brush property editor, position the mouse pointer in the Fx Viewer and press 2.

Choose the foreground and (if needed) the background color from the Fx Color Selector. The five most recently used colors are stored in the selector for easy access.

If necessary, edit the tool properties. To open the tool property editor, position the mouse pointer in the Fx Viewer and press 3.
Paint on the operator in the Fx Viewer.

The Flood Fill tool (not shown) fills pixels that you click, and neighboring pixels of similar color, with the specified foreground color.

The Draw Rectangle and Draw Ellipse tools are unique in that they are the only shape tools that work in both raster paint clips and vector paint operators (all other shape tools are vector-paint only). In either mode, the shapes are drawn using the current colors and paint brush settings.

The Mark Out Shape tool allows you to create an editable vector shape by clicking to define the locations of the shape's points. As you add points, each new point is connected to the previous point by a line segment. Whether the line segments curve depends on the type of shape you're drawing: Bézier, B-Spline, or Polyline. The Mark Out Shape tool is only available in vector paint operators.
The Brush tool is the most basic tool for painting brush strokes. You use it to paint on images as if you were using a real paint brush, or one of the myriad tools simulated by the brush presets in the Fx Paint Brush List. Painting is a simple matter of clicking and dragging on a paint operator's image.

The Line tool, as you might imagine, allows you to draw straight lines. This is especially useful for painting wires out of an image or sequence. In vector paint operators, drawing a line creates a two-point color shape drawn using the outline (stroke) only.

The Freehand Shape tool allows you to draw editable vector shapes as if you were using a pen and paper. You need only drag the paint cursor around the outline of the shape that you wish to draw. The Freehand Shape tool is only available in vector paint operators.
If you are using vector paint operators, you can edit any vector shapes that you've painted. The two images below show the manipulators used to transform a vector shape and to edit a vector shape's points.
Cloning
Cloning is the process of painting pixels from one region of an image to a different region of the same image. This can be useful for duplicating elements in an image, as in the example below. It is also often used to paint out unwanted elements. For example, you can remove wires from a clear sky by painting over them with adjacent pixels.
In this example, the trumpet player and his shadow are cloned into the left side of the frame (before and after shown).
Merging
Merging is the process of painting pixels from a source image called the merge source onto the corresponding portion, or a different portion, of a destination image. This is useful for painting unwanted elements, like wires, out of images. It is also useful for painting new elements into images, like the clouds in the example below.
In this example, the image of the clouds is set as the merge source and is being painted into the image of the field. (The figure callouts label the source and destination images, the merge source, the clone offset, and the original and after views.)
You can set any operator in the Fx Tree as the merge source by right-clicking it and choosing Set as Paint Merge Source from the menu. This adds a small paint-bucket icon to the operator to help you identify it as the merge source.
When you paint using the Clone brush, you'll only see a result if you use a brush offset. The offset is the distance between the area from which you're painting and the area to which you're painting. You can offset the brush in any direction and use any offset distance, as long as both the source and destination cursors can be placed somewhere on the target image simultaneously.
Section 23
Customizing Softimage
You can extend Softimage in a variety of ways by customizing it. Many customizations are too involved to cover here, but you can get more details in the Softimage User's Guide and Softimage SDK Guide.
Plug-in Manager

The Plug-in Manager is the central location for managing your customizations. You can display the Plug-in Manager using File > Plug-in Manager or in the Tool Development Environment (View > Layouts > Tool Development Environment). Installing a simple plug-in is as easy as copying the script or library file to the Plugins directory of your user or workgroup location.

To install an .xsiaddon

1. In the Plug-in Tree, right-click User Root or the first workgroup in the tree and choose Install .xsiaddon. If you want to install the add-on in a different workgroup, go to the Workgroup tab and move that workgroup to the top of the list. You can install add-ons only in the first workgroup.

2. In the Select Add-on File dialog box, locate the .xsiaddon file you want to install, and click OK.

You can also install an add-on by dragging an .xsiaddon file to a Softimage viewport. This installs the add-on in the User location or the first workgroup, depending on the value of the DefaultDestination tag of the .xsiaddon. The SDK Guides contain additional information about other methods of installing add-ons.

To uninstall an .xsiaddon

In the Plug-in Tree, right-click the add-on and choose Uninstall Add-on.
Shelves
To create a custom shelf, choose View > New Custom Shelf. To add a tab, right-click on an empty part of the tab area and choose an item from the Add Tab menu. If no tabs have been defined yet, you can right-click anywhere in the shelf.

Folder tabs display files in a specific directory. You can drag files like presets from a folder tab onto objects and views in Softimage. Toolbar tabs hold buttons for commands and presets. Driven tabs can be filled with scene elements such as clips by using the object model of the SDK.

To save a custom shelf, click the Options icon and choose Save or Save As.
Custom Toolbars
You can create your own toolbar and use it to hold commonly used tools and presets. Tools and presets are represented as buttons on the toolbar. Softimage also includes a couple of blank toolbars that are ready for you to customize by adding your own scripts, commands, and presets: the lower area of the palette, and the script toolbar.
Custom Parameters
Custom parameters are parameters that you create for any specific animation purpose you want. You typically create a custom parameter and then connect it to other parameters using expressions or linked parameters. You can then use the sliders in the custom parameter set's property editor to drive the connected parameters in your scene.

For example, you can use a set of sliders in a property editor to drive the pose of a character instead of creating a virtual control panel using 3D objects. First, create a custom parameter set by selecting an element and using Create > Parameter > New Custom Parameter Set on the Animate toolbar, and then giving it a meaningful name.

Proxy Parameters

Proxy parameters are similar to custom parameters, but with a fundamental difference. Custom parameters can drive target parameters, but they are still separate and different parameters. This means that when you set keyframes, you key the custom parameter and not the driven parameter. So what do you do when you want to drive the actual parameter, or create a single parameter set that holds only those existing parameters you are interested in? You can use proxy parameters. Unlike custom parameters, proxy parameters are cloned parameters: they reflect the data of another parameter in the scene. Any operation done on a proxy parameter has the same result as if it had been done on the real parameter itself (changing a value, saving a key, and so on). While you can create proxy parameters for any purpose, it's most likely that you will use them to create custom property pages. You can create your own property pages for just about anything you like: for example, locate all animatable parameters for an object on a single property page, making it much quicker and easier to add keys because all the animated parameters are in one place. Or, as a technical director, you can expose only the necessary parameters for your animation team to use, thereby streamlining their workflow and reducing potential errors. First, create a custom parameter set, then open an explorer and drag and drop parameters into the custom property editor or onto the custom parameter set node in an explorer. Alternatively, use Create > Parameter > New Proxy Parameter to specify parameters with a picking session.
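As a rough sketch of how creating a custom parameter set might be scripted, the following uses the Softimage object model. The property and parameter names are illustrative, and the exact method signatures should be checked against the SDK Guides.

```vbscript
' Sketch: create a custom parameter set with one slider on the selected object.
' "PoseControls" and "ArmBend" are illustrative names, not built-in ones.
Set oObj = Selection(0)                              ' assumes something is selected
Set oPSet = oObj.AddCustomProperty("PoseControls")   ' the custom parameter set
oPSet.AddParameter3 "ArmBend", siDouble, 0, 0, 90    ' name, type, default, min, max
InspectObj oPSet                                     ' open its property editor
```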
Changing Parameter Values in a 3D View

Select one or more objects with a DisplayInfo custom parameter set. If nothing is selected, the DisplayInfo set of the scene root is displayed (if it has one).

You can easily modify the parameters displayed in the 3D views. There is a preference that controls the interaction: if Enable On-screen Editing of DisplayInfo Parameters is on in your Display preferences, you can modify the values as well as animate them directly in the display. If on-screen editing is disabled, you can still mark the parameters and modify them using the virtual slider. If on-screen editing is enabled, the parameters appear in a transparent box in the view. The title of the parameter set is shown at the top (without the DisplayInfo_ prefix). Each parameter has animation controls that allow you to set keys.
You can do any of the following:

Double-click on a numeric value to edit it using the keyboard. The current value is highlighted, so you can type in a new value. Only the parameter you click on is affected, even if multiple parameters are marked.

Double-click on a Boolean value to toggle it. Only the parameter you click on is affected, even if multiple parameters are marked.

Click on an animation icon to set or remove a key for the corresponding parameter. Right-click on an animation icon to open the animation context menu for the corresponding parameter.

Click the triangle in the top right corner to expand or collapse the parameter set.

Click and drag on a parameter name to modify the value. You don't need to explicitly activate the virtual slider tool.
- Drag to the left to decrease the value, and drag to the right to increase it.
- Press Ctrl for coarse control.
- Press Shift for fine control.
- Press Ctrl+Shift for ultra-fine control.
- Press Alt to extend beyond the range of the parameter's slider in its property editor (if the slider range is smaller than its total range).
If the parameter that you click on is not marked, it becomes marked. If it is already marked, then all marked parameters are modified as you drag.

The color of the animation icon indicates the following information:
Gray: the parameter is not animated.
Red: there is a key for the current value at the current frame.
Yellow: the parameter is animated by an fcurve, and the current value has been modified but not keyed.
Green: the parameter is animated by an fcurve, and the current value is the interpolated result between keys.
Blue: the parameter is animated by something other than an fcurve (expression, constraint, mixer, etc.).

If there is a DisplayInfo property on the scene root, you cannot edit its parameters on-screen unless the scene root is selected.
Scripts
Scripts are text files containing instructions for modifying data in Softimage. They provide a powerful way to automate many tasks and simplify your workflow.
Command box: displays the most recent command. Modify the contents or type a new command, then press Enter to execute it. You can also recall any of the last 25 commands. The script editor icon opens the script editor.

History pane: contains the most recently used commands in your current session. Drag and drop lines into the editing pane to get a head start on your own scripts. The history pane also contains messages related to importing and exporting, debugging information, and so on.

Editing pane: a text editor in which you can create scripts by typing or pasting. Right-click for a context menu.

Run: runs the lines selected in the editing pane. If no lines are selected, the entire script is run.
Key Maps
Key maps determine the keyboard combinations that are used to run commands, open windows, and activate tools. You can create your own key maps to define new key bindings or change the default ones. Key maps are stored as XML-based .xsikm files in the \Application\keymaps subdirectory of the user, workgroup, or factory path. At startup, Softimage gathers the files it finds at these locations and makes them available for selection in the Keyboard Mapping editor. When you change a key mapping, the new key automatically appears next to the command in menus and context menus. For some menus, you must restart Softimage to see the new label. Open the Keyboard Mapping editor by choosing File > Keyboard Mapping from the main menu. Select an existing key map, or click New to create a new one.
Keyboard shortcuts are grouped by interface component. Click an interface component in the Group list to display its commands and their keyboard shortcuts in the Command list. Click a command in the Command list to display its keyboard shortcut in red. To see which command is mapped to a key, click the appropriate modifiers (Alt, Ctrl, Shift) from the check boxes or the keyboard diagram, then rest your mouse pointer over a key on the keyboard diagram.
Create or modify a shortcut by dragging a command label to a shortcut key. Hold down the Shift, Ctrl, or Alt key while dragging to add a modifier to the new shortcut command.
Remove a shortcut key by selecting a command from the Command box and pressing Clear.
The keyboard keys are color-coded to indicate the following:
White: no keyboard shortcut has been assigned to this key.
Beige: a keyboard shortcut from another interface component has been assigned to this key.
Light brown: a keyboard shortcut from the currently selected interface component has been assigned to this key.
Red: this keyboard shortcut corresponds to the currently selected item in the Command box.

To see key conflicts with other windows, select View and choose a window from the adjacent list. Keys that are used by the selected window are highlighted in dark brown. For combinations involving modifiers, select the appropriate Ctrl, Shift, and Alt boxes or press and hold those keys on your keyboard.
Other Customizations
In addition to the customizations briefly mentioned so far, there are many other ways you can extend Softimage:

Custom commands can automate repetitive or difficult tasks. Commands can be scripted or compiled.
Custom operators can automatically update data in the operator stack. Operators can be scripted or compiled.
Layouts define the main window of Softimage. You can create layouts based on your preferences or common tasks.
Views can be floating or embedded in a layout. You can create views for specialized tasks.
Events run automatically when certain situations occur in Softimage.
Synoptic views allow you to run scripts by clicking hotspots in an image; for example, you can create custom control panels for a rig.
Net View allows you to create an HTML interface for sharing scripts, models, and other data.
Shaders give you complete control over the final look of your work.

For more information about customizing Softimage, see the SDK Guides, as well as Customization in the Softimage Guides.