
BAFM

TEXTURING

A) Directional light and Spot light:


A directional light sets a single vector for all its illumination and hits
every object from the same angle, no matter where the object is
located. All the shadows cast by a directional light are cast in the
same direction and are orthogonal projections of each object's shape.
It does not matter where a directional light is located relative to the
objects being lit. The only thing that matters in placing a directional
light is which way it is pointed. The angle used by a directional light is
controlled by the rotation manipulator.
Because a directional light is not as easy to aim or confine to a local
area as a point light or spot light, it is most useful as a part of your
secondary or fill lighting, and not as the main light on a subject. A set
of directional lights from different angles can be used together to
provide fill light, even if the individual lights from each angle are very
dim. Directional lights can fill very large areas with illumination that
appears to be ambient or atmospheric, such as filling in daylight from
the sky, providing a quick, effective alternative to global ambience.
Directional light is used to simulate sunlight because it uses
parallel rays of light, as if illuminating the object from a far
distance. The position of the light is not as important as the direction of
its arrow. A point light in Maya, by contrast, shines evenly in all directions
from a small point source.
Spot lights are a basic staple of most lighting designs in computer
graphics. Spot lights are a popular choice of many artists because
they can be controlled conveniently to aim light at a specific target.
A spot light simulates light radiating from a point, much like a point
light. A spot light, however, limits the illumination to light within a
specified cone or beam of light only. The rotation of a spot light can
determine where the beam is aimed. You can also link a "target" to
the light so that the light is always oriented toward the position of the
target. You can also group a spot light with a 3D object, such as a
model flashlight or car headlight assembly, so that the beam of light
will be aimed as if the light were radiating from the object.
Spot lights are staples of visual effects in your renderings. A spot light
has extra controls and options not found on other types of lights.
Options such as projecting an image map from a light, or making a
beam of light visible as if shining through fog, are often best
controlled with the beam of a spot light.
Other common spot light parameters enable you to control the width
of the cone (usually specified in degrees) to vary between a narrow
beam and a broad one. The amount of Dropoff of the cone allows the
intensity of the light to diminish more gradually as it approaches the
edge of the beam. A softer edge on a spot light's beam will make the
light's individual location less obvious and will avoid creating a harsh
"circle" of projected light.
This enables you to more subtly lighten or darken areas with a spot
light. With a very soft-edged beam, for example, you can aim a spot
light from within a room to brighten the general area around a window
and curtains, or aim a spot light with a negative brightness at the
corner of a room to darken it.
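As a quick illustration of the spot light parameters described above, here is a minimal Python (maya.cmds) sketch that creates a spot light and sets its cone angle, penumbra (soft edge), dropoff and intensity; the specific values are only examples.

import maya.cmds as cmds

# Create a spot light with a 40-degree cone, a soft 10-degree penumbra,
# and some dropoff so intensity fades toward the edge of the beam.
light_shape = cmds.spotLight(coneAngle=40, penumbra=10, dropOff=2.0, intensity=1.5)

# Aim the light by rotating its transform (the parent of the light shape).
light_xform = cmds.listRelatives(light_shape, parent=True)[0]
cmds.setAttr(light_xform + '.rotateX', -35)

# A negative intensity can be used to subtly darken an area, as noted above.
# cmds.setAttr(light_shape + '.intensity', -0.3)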

Bump map and Displacement Map: Normal and Bump maps are an
illusion designed to make surfaces look more detailed than the underlying
geometry. These create shading on a surface that makes part of the
surface look raised or lowered. A normal map is like a fancy bump map that
operates on all three axes instead of just perpendicular to the surface.
Normal maps are extremely common ways of creating the appearance of
additional detail without needing extra geometry.
The effectiveness of a bump depends on the viewing and light angles. Very
large bumps tend to break down and look cruddy, so bump maps are best
kept to low values (a Bump Depth of 0-2) and used on small details, like
bricks or scratches. Bumps create a separate node in your network.
Displacement: Displacement maps can be an excellent tool for
adding surface detail that would take far too long using regular
modeling methods. Displacement mapping differs from bump mapping in
that it alters the geometry, and therefore will have a correct silhouette, and
self-shadowing effects. Depending on the type of input, the displacement
can occur in two ways: Float, RGB & RGBA inputs will displace along the
normal while a vector input will displace along the vector.
For example, a simple plane with the addition of a displacement map can
produce an interesting-looking scene.
You should ensure that your base mesh geometry has a sufficient number
of polygons; otherwise, subtle differences can occur between the displaced
low-resolution geometry and the high-resolution mesh from which it was
generated.

The Displacement node must be connected to the displacement attribute of
the shading group of the material that is assigned to the mesh that requires
displacement.
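For example, this connection can be made by script. A minimal Python (maya.cmds) sketch, assuming a hypothetical shading group named myMaterialSG already assigned to the mesh:

import maya.cmds as cmds

# Create a displacement shader node and a file texture for the height map.
disp_node = cmds.shadingNode('displacementShader', asShader=True)
disp_file = cmds.shadingNode('file', asTexture=True, name='dispMap')

# Drive the displacement amount with the texture's alpha (a scalar value).
cmds.connectAttr(disp_file + '.outAlpha', disp_node + '.displacement')

# Connect the displacement shader to the shading group of the assigned material.
# 'myMaterialSG' is a placeholder for the mesh's actual shading group.
cmds.connectAttr(disp_node + '.displacement', 'myMaterialSG.displacementShader', force=True)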

Always ensure that you use the highest quality texture maps for
displacement mapping. Arnold works well with very high-resolution maps,
as long as the maps have been pre-processed with the maketx utility. It will
convert them into .tx files (which are tiled, mipmapped files). See the pages
about the maketx utility and .tx files.
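A minimal sketch of running maketx from Python, assuming the maketx executable shipped with MtoA is on your PATH and that diffuse.exr is a hypothetical source texture:

import subprocess

# Convert a high-resolution texture into a tiled, mipmapped .tx file.
subprocess.run(['maketx', 'diffuse.exr', '-o', 'diffuse.tx'], check=True)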

color map and transparency map:


Color: Just what it says—the diffuse color of the object. This is the
underlying palette Maya uses when determining what color each pixel of
the object will be. This color is then adjusted by other attributes, such as
light levels or transparency, to yield the final pixel.

A color map is a set of values that are associated with colors. Color
maps are used to display a single-band raster consistently with the
same colors. Each pixel value is associated with a color, defined as a
set of red, green, and blue (RGB) values. Since each value has a
distinct color associated with it, it will always display the same way
each time you open it in a program that can read an image with a color
map.

Transparency: Another fairly straightforward one. Light parts make things
go clear, dark parts keep them opaque. Transparency does not denote
volume, so sharp holes in objects appear like thin shells instead of deep
holes. Parameters like specular and reflectivity are unaffected by
transparency, as it only affects the diffuse color of the object.

Color or Image maps determine how a polygon is colored, but it is
possible to use a texture map to declare what parts of a texture are
transparent as well. This is the role of a transparency map.

In the image at right there are three rectangular polygons. The
human figure is composed of an image map (shown below, left). To
keep the rest of the rectangle from showing, a transparency map is
also defined, using the image shown below (right). The gray-scale
value of this image will determine how see-through the associated
texture pixels are.
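A minimal Python (maya.cmds) sketch of this pairing on a Lambert material; the texture names are hypothetical:

import maya.cmds as cmds

# Hypothetical setup: one file texture drives the color, another the transparency.
shader = cmds.shadingNode('lambert', asShader=True)
color_map = cmds.shadingNode('file', asTexture=True, name='colorMap')
transp_map = cmds.shadingNode('file', asTexture=True, name='transparencyMap')

# Color map: drives the diffuse color of the surface.
cmds.connectAttr(color_map + '.outColor', shader + '.color')

# Transparency map: light (white) areas become clear, dark areas stay opaque.
cmds.connectAttr(transp_map + '.outColor', shader + '.transparency')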
3 point lighting:

Three-point lighting is a standard method used in visual media such as
theatre, video, film, still photography, computer-generated imagery and 3D
computer graphics. By using three separate light positions, you can illuminate the
shot's subject (such as a person) however desired, while also controlling
(or eliminating entirely) the shading and shadows produced by direct
lighting.

Three different Types are -

1. Key Light - the main source illuminating the object is known as the Key Light.
The key light, as the name suggests, shines directly upon the subject and
serves as its principal illuminator; more than anything else, the strength,
color and angle of the key determines the shot's overall lighting design.

2. Secondary (Fill) Light - the light that highlights details of the object is known as
the Secondary (Fill) Light.
The fill light also shines on the subject, but from a side angle relative to the
key and is often placed at a lower position than the key (about at the level
of the subject's face).

3. Back Light - the light that distinguishes the object from the background is
known as the Back Light. The back light shines on the subject from behind,
often (but not necessarily) to one side or the other. It gives the subject a rim
of light, serving to separate the subject from the background and
highlighting contours.
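A rough Python (maya.cmds) sketch of a three-point setup; the positions, rotations and intensities are only example values, and spot lights are used here although directional or area lights would work just as well:

import maya.cmds as cmds

def make_light(name, intensity, translate, rotate):
    # Create a spot light and position/aim its transform.
    shape = cmds.spotLight(name=name, intensity=intensity, coneAngle=60, penumbra=10)
    xform = cmds.listRelatives(shape, parent=True)[0]
    cmds.xform(xform, translation=translate, rotation=rotate)
    return xform

key  = make_light('keyLight',  1.2, (5, 6, 8),  (-30, 30, 0))   # main illuminator
fill = make_light('fillLight', 0.4, (-6, 3, 6), (-15, -45, 0))  # softens key shadows
back = make_light('backLight', 0.8, (0, 7, -8), (-140, 0, 0))   # rim/separation light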
Normal Map: Normal and Bump maps are an illusion designed to make
surfaces look more detailed than the underlying geometry. These create
shading on a surface that makes part of the surface look raised or lowered.
A normal map is like a fancy bump map that operates on all three axes
instead of just perpendicular to the surface. Normal maps are extremely
common ways of creating the appearance of additional detail without
needing extra geometry.
The effectiveness of a bump depends on the viewing and light angles. Very
large bumps tend to break down and look cruddy, so bump maps are best
kept to low values (a Bump Depth of 0-2) and used on small details, like
bricks or scratches. Bumps create a separate node in your network.

Normal maps can be referred to as a newer, better type of bump
map. As with bump maps, the first thing you need to understand
about normal maps is that the detail they create is also fake.
There is no additional resolution added to the geometry in your
scene. In the end, a normal map does create the illusion of depth
detail on the surface of a model, but it does it differently than a
bump map. As we already know, a bump map uses grayscale
values to provide either up or down information. A normal map
uses RGB information that corresponds directly with the X, Y and
Z axes in 3D space. This RGB information tells the 3D application
the exact direction in which the surface normals are oriented for
each and every polygon. The orientation of the surface normals,
often just referred to as normals, tells the 3D application how the
polygon should be shaded.

In learning about normal maps, you should know that there are
two totally different types, which look completely different when
viewed in 2D space. The most commonly used is called a Tangent
Space normal map and is a mixture of primarily purples and blues.
These maps work best for meshes that have to deform during
animation, so Tangent Space normal maps are great for things like
characters. For assets that do not need to deform, an Object Space
normal map is often used instead. These maps have a rainbow
assortment of different colors as well as slightly improved
performance over Tangent Space maps.

There are some things you need to be aware of when considering
using a normal map. Unlike a bump map, these types of maps can
be very difficult to create or edit in 2D software like Photoshop;
more likely, you will bake a normal map out using a high-resolution
version of your mesh. There are, however, some exceptions for
editing these types of maps. MARI, for example, has the ability to
paint the type of surface normal information we see in a normal
map. When it comes to support, normal maps are pretty well
integrated into most pipelines, though there are exceptions; one of
those is mobile game design. Only recently has hardware evolved
to the point where mobile games are beginning to adopt normal
mapping into their pipelines.
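As a hedged sketch of the usual Maya setup, the map is routed through a bump2d node whose interpretation is switched from grayscale bump to tangent-space normals. The bumpInterp enum value used below (1 = Tangent Space Normals) is the common convention but should be verified for your Maya version.

import maya.cmds as cmds

shader = cmds.shadingNode('blinn', asShader=True)
bump_node = cmds.shadingNode('bump2d', asUtility=True)
normal_map = cmds.shadingNode('file', asTexture=True, name='normalMap')

# Interpret the map as tangent-space normals rather than grayscale bump values.
cmds.setAttr(bump_node + '.bumpInterp', 1)

# Feed the map into the bump node and the bump node into the shader.
cmds.connectAttr(normal_map + '.outAlpha', bump_node + '.bumpValue')
cmds.connectAttr(bump_node + '.outNormal', shader + '.normalCamera')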

Projection

Turns any 2d texture into a 3d texture that you can place on the surface
using one of the available projection types. Use it to adjust the texture
placement on the surface.

Find this utility in the Create tab (see Create tab).

To use this utility, see Use the Projection utility.

Interactive Placement

Displays the Projection manipulators in the scene view.


Tip:

Use Fit To BBox to center the manipulator around the object.

Using these manipulators in combination with Maya’s transform
tools, you can orient and position the texture map in three
dimensions. The manipulators for texture mapping are exactly the
same as those used for texturing polygons. See the Polygon
modeling overview in the Polygonal Modeling guide for details.

Note: If no place3dTexture node exists, Maya displays an alert box.
Click the Create a placement node button to create the
place3dTexture node.
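As a rough illustration, the same network can be built by script. The node and attribute names below (projection, place3dTexture, placementMatrix, image) follow the standard Maya node library, but treat the exact projType enum value for a planar projection as an assumption to verify.

import maya.cmds as cmds

proj = cmds.shadingNode('projection', asTexture=True)
place3d = cmds.shadingNode('place3dTexture', asUtility=True)
img = cmds.shadingNode('file', asTexture=True, name='projectedImage')

# The placement node's world-inverse matrix drives the projection placement.
cmds.connectAttr(place3d + '.worldInverseMatrix[0]', proj + '.placementMatrix')

# The 2D texture to project, and the projection type (1 = planar, assumed).
cmds.connectAttr(img + '.outColor', proj + '.image')
cmds.setAttr(proj + '.projType', 1)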

Fit To BBox

The texture map coincides with the bounding box of the mapped
object or set. See also Interactive Placement.

Proj Type
Select a projection type from the drop-down list to display seven
projection manipulators.

Off

Uses no projection type.

Planar

Default Proj Type. Places the texture on a planar surface and
projects it onto the object.
Spherical

Places the texture inside a sphere and projects it onto the object.

Cylindrical

Places the texture inside a cylinder and projects it onto the object.

Ball

Places the texture inside a ball and projects it onto the object. For
example, Maya projects the texture as if a candy wrapper is pulled
around a lollipop. There is one pinch point to the mapping at the
-z-pole, as opposed to the two pinch points at the +y and -y poles in
spherical and cylindrical mapping.

Cubic

Defines the projection surface as a box. Maya places images on
each plane and projects them onto the object.

Triplanar

Extrudes the texture along the axis defined by the maximum
direction of the surface normal. The texture is projected much like
fabric pulled around an arc.
Concentric

Projects a vertical slice of the texture from the inside to the outside
edge of the voxel. The vertical slice used is randomly chosen for
each voxel. A voxel is the 3D equivalent of a pixel: a voxel grid is a
series of 3D cubes that line up to form a bigger cube.

Perspective

Integrates 3D elements with a background image or a live action
sequence.

Examples
You have a background sequence with a vase on a table you want to
blow up. It can be difficult to produce a parametric or solid texture map
that exactly matches the vase, but if you use the Perspective Proj
Type, you can project the real image of the vase from the background
sequence onto a 3D vase placed in a matching position. This allows a
perfect match of textures from the camera’s point of view.

You want to place a 3D flying saucer within an image of a street
scene. The image of the street scene should accurately reflect onto
the flying saucer. You could do this with an environment texture, but
that would require that you have other images available—if you use
the Perspective Proj Type, you can project the image of the street
scene onto stand-in geometry. The image projected onto the stand-in
geometry accurately reflects onto the flying saucer.

Image

The 2D texture to be used as a map.

Tip:

To undo a mapping, in the Attribute Editor, right-click while the
cursor is over the attribute’s name and select Break Connection
from the pop-up menu.

U angle

For spherical and cylindrical mapping only. Changes the U angle.

V angle

For spherical mapping only. Changes the V angle.


Camera Projection Attributes
Control a Projection node when the Proj Type is Perspective.

Link To Camera

The drop-down list contains a list of the perspective cameras in the
scene. Choose the camera from which you want to project the
image.

Fit Type
Controls how the texture fits to the camera when Proj Type is
Perspective. Select from the following:

None

The image is not squeezed or stretched to fit. One of its axes
(determined by the Fit Fill setting) fits to the film gate, and the other
resizes appropriately.

Match Camera Film Gate

Squeezes the image to fit the film gate.

Match Camera Resolution

If you use this to match a backdrop, match these settings to the
settings in the Image Plane. Usually, the image plane is the same
size as the rendered image. If so, use the Match Camera
Resolution setting.

Fit Fill

Only available if Proj Type is Perspective and Fit Type is None. If
the image plane aspect ratio is not the same as the film gate aspect
ratio, this attribute decides which axis of the image is fit to the film
gate.

Noise Attributes
Controls the amount of fractal noise added to a Projection node.
(Adding fractal noise randomizes or blurs the texture).

Amplitude X/Amplitude Y

Scales the amount of fractal noise added to the projection in the X
or Y direction. When Amplitude X and Amplitude Y are 0, no fractal
noise is added.

Ratio

Controls the frequency of fractal noise. Increase this value to
increase the fineness of detail.

Ripples
Determines how wavy the projected image is when projected, but
controls the scale of the frequency of any fractal noise added to the
texture. If increased in any direction, the fractal detail seems to
smear out in that direction.

Recursion Depth
Depth

Controls the amount of calculation done by the texture when
Ripples are added. Fractal noise such as ripples is created by a
mathematical process; as the process goes over more levels, it
produces a more detailed fractal, but takes longer. Normally, the
texture chooses a level appropriate for the volume rendered. You
can use Depth Min and Depth Max to control the minimum and
maximum amount of calculation.

Surface materials

From the Hypershade Create tab and Create Render Node window, you can create
Surface Materials, Volumetric Materials, and Displacement Materials. For more
information on surface materials, see Maya materials.

Some attributes (such as color and transparency) are common to most surface
materials and are described in Common surface material attributes.
Some attributes are shared among many surface materials; they are therefore grouped
separately from the Common surface attributes and are described in Shared surface
material sections.

Surface material-specific descriptions are provided in this section under the material
name.

Topics in this section

● Common surface material attributes


● Common surface material Specular Shading attributes
● Anisotropic
● Blinn
● CgFX shader
● DirectX 11 Shader
● GLSL Shader
● HLSL shader
● Lambert
● Layered Shader
● Ocean Shader
● Phong
● Phong E
● Ramp Shader
● Shading Map
● Standard Surface
● Surface Shader
● Use Background
● Shared surface material sections
Anisotropic

Is a material (shader) that represents surfaces with grooves, such as a CD, feathers,
or fabrics like velvet or satin. The appearance of specular highlights on an
Anisotropic material depends on the properties of these grooves and their
orientation. The Specular shading attributes (shiny highlights) determine the
direction of the grooves as well as their properties.

An isotropic material (such as Phong or Blinn) reflects specular light identically in all
directions. If you spin an isotropic sphere, its specular highlight remains still.

An anisotropic material reflects specular light differently in different directions. If you
spin an anisotropic sphere, its specular highlight changes, depending on the
direction of the grooves.

You can set attributes of Anisotropic materials to control the appearance of
highlights, determine the orientation and spread of grooves, set the roughness and
reflectivity, and reduce spherical aberrations (fresnel index).

You can find this material in the Create tab.

Specular Shading attributes


Control the appearance of specular highlights on a surface.

Angle

Determines the orientation of the grooves. The range is 0.0 (default) to 360.0.
Use it to determine the X and Y directions for a non-uniform specular highlight.
Spread X/Spread Y

Determines how much the grooves spread out in the X and Y directions. The X
direction is the U direction rotated counter-clock-wise by the specified Angle
degrees. The Y direction is perpendicular to the X direction in UV space.

For Spread X, the range is 0.1 to 100.0 and the default is 13.0. For Spread Y, the
range is 0.1 to 100.0 and the default is 3.0.

Large values correspond to surfaces which vary smoothly in the X or Y direction.
Small values correspond to surfaces with fine structure. When increased, the
specular highlight in the X or Y direction shrinks in size—when decreased, the
specular highlight spreads out.

When the Spread X value is equal to the Spread Y value, the surface becomes
isotropic—equally smooth in all directions. When the Spread X value is more
than the Spread Y value, the surface is smooth in the X direction and rough in
the Y direction.

For example, when a surface such as a piece of cloth whose fibers run along the
X direction is rendered, the highlights non-uniformly spread out with more
highlights along the Y direction.
Roughness

Determines the overall roughness of the surface. The range is 0.01 to 1.0. The
default is 0.7. Smaller values correspond to smoother surfaces and the specular
highlights are more concentrated. Larger values correspond to rougher surfaces
and the specular highlights are more spread out—similar to being diffused.

Fresnel Index

A fresnel is a flat lens consisting of a number of concentric rings that reduces
spherical aberrations. The Fresnel Index value computes the fresnel factor that
connects the reflected light wave to the incoming light wave. For instance, the
Fresnel Index for water is 1.33. Values range from 1.0 to 20.0.

Specular Color

See Common surface material Specular Shading attributes.

Reflectivity

See Common surface material Specular Shading attributes.

Reflected Color

See Common surface material Specular Shading attributes.

Anisotropic Reflectivity

If on, Maya automatically calculates Reflectivity as a fraction of Roughness.
Anisotropic Reflectivity is on by default.
If off, Maya uses the specified Reflectivity value for the environment map
(mapped on the Reflected Color attribute), similar to how the Phong and Blinn
materials work.

In the following, Anisotropic Reflectivity is on, an environment is mapped on the
Reflected Color, and the Roughness is set to 0.01, 0.05, 0.1, and 1.0 (from very
smooth to very rough).
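A minimal Python (maya.cmds) sketch of setting these Anisotropic attributes; the attribute names are taken to correspond to the UI labels described above (angle, spreadX, spreadY, roughness, fresnelRefractiveIndex, anisotropicReflectivity) and should be verified against your Maya version.

import maya.cmds as cmds

aniso = cmds.shadingNode('anisotropic', asShader=True)

# Orientation and spread of the grooves (see Angle / Spread X / Spread Y above).
cmds.setAttr(aniso + '.angle', 45.0)
cmds.setAttr(aniso + '.spreadX', 20.0)
cmds.setAttr(aniso + '.spreadY', 3.0)

# Overall roughness and the fresnel factor for the reflected highlight.
cmds.setAttr(aniso + '.roughness', 0.4)
cmds.setAttr(aniso + '.fresnelRefractiveIndex', 1.33)

# Let Maya derive reflectivity from roughness (Anisotropic Reflectivity).
cmds.setAttr(aniso + '.anisotropicReflectivity', 1)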

Standard Surface attributes


Standard Surface consists of the following components:

● Transparency
● Coat - a layer that sits on top of all other layers
● Emission - a layer below coat that is useful for simulating light
sources with a coat on top
● Metal
● Thin film - a layer on top of the specular components that can
be used to create spectral coloring effects
● Specular reflection and refraction
● Sheen - a layer to model cloth
● Diffuse reflection and refraction
● Subsurface scattering

A tooltip with a brief description appears when you mouse over each
attribute. For a detailed description of the attributes, see the Autodesk
Standard Surface Whitepaper. See also the Arnold for Maya User
Guide for a description of the Arnold implementation of the Standard
Surface shader.
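As a hedged sketch, a Standard Surface material can be created and a few of the components listed above set by script; the attribute long names used here (baseColor, metalness, specularRoughness, coat, coatRoughness) are the usual standardSurface names but should be confirmed in the Attribute Editor.

import maya.cmds as cmds

ss = cmds.shadingNode('standardSurface', asShader=True)

# Diffuse base and metalness.
cmds.setAttr(ss + '.baseColor', 0.8, 0.2, 0.2, type='double3')
cmds.setAttr(ss + '.metalness', 0.0)

# Specular reflection roughness.
cmds.setAttr(ss + '.specularRoughness', 0.35)

# Coat layer that sits on top of all other layers.
cmds.setAttr(ss + '.coat', 0.5)
cmds.setAttr(ss + '.coatRoughness', 0.1)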

Renderer support
You can preview Standard Surface in the viewport, in all of the DirectX
11, OpenGL - Core Profile (Compatibility), and OpenGL - Core Profile
(Strict) modes. OpenGL - Legacy mode is not supported.

Rendering with the Arnold for Maya renderer is supported, and you
can preview Standard Surface in the Material Viewer of the
Hypershade using either the Hardware or Arnold renderers or any
other software renderer that supports Standard Surface. In addition,
the Maya Software and Maya Vector renderers also give approximate
representations of Standard Surface.

Lighting support
Standard Surface works with both direct and indirect lighting (image
based lighting) in the viewport and in the Hypershade Material Viewer.

Limitations

● Almost all attributes on the Standard Surface node can be
previewed in the viewport. Attributes that are not supported
are:
○ Transmission attributes, with the exception of Weight,
which is supported
○ Subsurface attributes
○ Thin Film attributes
○ Thin Walled attribute
○ Coat Bump Mapping
● Standard Surface materials are not included when exporting
an object or a scene to a GPU cached file.
Note: Export your Standard Surface materials via FBX files
instead.
● Use Flat Lighting viewport mode is not supported
● Coat roughness only affects direct lighting and not
environment lighting in the viewport, as the viewport only
supports one roughness attribute when calculating
environment lighting. The Specular > Roughness attribute is
used for these calculations, and therefore Coat > Roughness
is ignored.

Textures - Arnold for Maya

Auto-generate TX Textures

When using Arnold it is best to use a tiled mipmapped texture format such as .exr or
.tx that has been created using maketx.

● A tutorial that covers this process can be found here.

By default, tiled and mipmapped TX textures are automatically generated for each
image shader. The resulting TX will be placed next to the original texture files. When
the texture filename contains tags such as UDIMs, a TX texture will be generated for
each sub-tile.

It can take a bit of time to convert a texture to TX, especially for large textures stored
on a network share, but usually, this is done only for the first render. For subsequent
renders, if an existing matching TX texture is detected, it won't be regenerated
unless the source texture contents or colorspace has changed. Also, note that if the
input texture filename already has a .tx extension, it will be left as-is.

Disabling auto-generation

A toggle in the Textures tab of the Arnold Render Settings disables TX
auto-generation globally. This behavior can also be disabled per texture with the
Auto-generate TX Textures toggle on the Ai Image shader and Maya file nodes.

Linearization

The TX texture will also be linearized according to the color space rules in the Color
Management settings in Maya.

There is no texture filtering when not using mipmapped textures. This can cause
some differences, more notably for emissive maps and displacement maps.

Flushing Caches
The Flush Caches menu can be found within the Arnold menubar in the main application window.

If you have the Auto-mipmap or Auto-tile options enabled (they are on by default), a
pre-process of the texture will be performed by OIIO in the first render that a texture
is used. This will be saved in the texture cache, and the process will not be required
in the following renders. The drawback of this is that in Windows, textures will be
locked after the first render.

If you need to modify a texture while you are rendering, you have the following
options:

● Use the Flush Caches > Textures command.
● Disable the Auto-mipmap and Auto-tile options while you are modifying textures.
With this option, you will not have any preprocess before each render.
Used texture files are locked after a render has been started (even if it is not an IPR
session) and they cannot be written by another application such as Photoshop. The
typical message that is shown is:

The user then has to close the application and reopen the scene for the Arnold
plugin to accept the newly saved texture file. A workaround is to flush the caches.

● Auto-TX must be disabled as well, otherwise, the .tx texture is used and the
original can be modified.
● This issue only exists on the Windows platform. Linux and MacOS should be
unaffected.
● A short video that shows this workflow can be found here.

Selected Textures

Only flushes the selected textures.

Skydome lights

Flush all textures assigned to a skydome light.

Quad lights

Flush all textures assigned to a quad light.

All

Flush all texture caches in the scene, including normal, skydome and quad light
textures.

Maketx
Maketx is a command-line utility to convert images to tiled, MIP-mapped textures,
similar to txmake in Pixar's RenderMan. It is part of OpenImageIO (
http://www.openimageio.org ) and was developed by Larry Gritz at Sony Pictures
Imageworks.

● A tutorial that shows the workflow involved when working with .tx files can be
found here.
● There is no texture filtering when not using mipmapped textures. This can
cause some differences, more notably for emissive maps and displacement
maps.
● Do not linearize textures used to drive scalar values, as you will lose precision.
● The maketx utility is available in the MtoA plugin folder. On Windows this would
be C:\Program Files\Autodesk\Arnold\maya2020.

Mip-Mapping Bias
Mip-Mapping Bias offsets the Mip-Map level from which a texture is sampled.
Negative values indicate a larger Mip-Map level (bigger texture); a positive value
indicates a smaller Mip-Map level (smaller texture). This is evident at the top of the
images below:
Diagram showing Mip-Mapping process

The example below shows the effect on a render when increasing the Mip-Mapping
bias:

Tokens
In addition to the Arnold attr, tile, and udim tokens, MtoA also supports shapeName
and shapePath tokens.


shapeName

The texture token <shapeName> provides a direct connection between the name of
the mesh and the texture name, replacing the name of the 'shape' at render time.

Here's an example shapeName workflow:

● Rename the textures to: main_BlueShape.jpg and main_RedShape.jpg.
● Add main_<shapeName>.jpg to the image name of the file texture that is
assigned to both meshes in Maya.
● Rename the meshes to 'Red' and 'Blue' and the textures will automatically be
replaced.

shapePath
The <shapePath> token gets replaced with the full name of the node as exported to
Arnold, with "|" replaced with "_". But only if full paths are exported in the render
settings.
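A small Python (maya.cmds) sketch of assigning a tokenized path to a file texture, following the shapeName workflow above; the exact token spelling should match the MtoA documentation.

import maya.cmds as cmds

tex = cmds.shadingNode('file', asTexture=True, name='sharedBodyTexture')

# The <shapeName> token is replaced with the mesh's shape name at render time,
# so the 'Red' and 'Blue' meshes pick up main_RedShape.jpg / main_BlueShape.jpg.
cmds.setAttr(tex + '.fileTextureName', 'textures/main_<shapeName>.jpg', type='string')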

Use PSD Networks as textures in Maya

The PSD File node lets you use a PSD file as a texture network in Maya. It is similar
to Maya’s File texture node, but it’s for PSD files only.

By default, Maya links a PSD file to the composite image, which is included in the
PSD file. Maya can only read image and vector layers, so when the PSD node is
linked to Adobe Photoshop’s composite image (it is by default), Maya supports
anything that Adobe Photoshop supports, including (for example) layer styles,
adjustment layers, text, and so on.

However, you can choose to link the PSD node to a layer set instead, in which case
layer styles and adjustment layers are not supported and should be rasterized
before the PSD file is read in Maya.

The advantage of PSD files with layer sets

PSD files with layer sets facilitate iterative painting:


● Adobe Photoshop artists can add, modify, or delete any number of layers
within a layer set while maintaining the connections in Maya (see also Create
a PSD file with layer sets from within Maya).
● Maya artists can convert a PSD node to a Layered Texture, and see the layer
sets as multiple PSD File Textures connected to a layered texture in
Hypershade.
Note:
Maya treats all layers within a layer set as a single flattened image.

To use an existing PSD file in Maya

1. In Hypershade, load your PSD image file with the new PSD File Texture node
(Create > 2D textures > PSD File).
The PSD file is linked to Adobe Photoshop’s composite image.
2. If the file has multiple mask channels, you can choose which one to see.
Select the mask from the Alpha to Use attribute in the Attribute Editor.
3. If the PSD file has layer sets, you can choose a layer set to link to. Select the
layer from the Link to Layer Set attribute in the Attribute Editor.
4. As with any File Texture in Maya, you can set any PSD File Texture attributes.
See File Attributes.
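A hedged Python (maya.cmds) sketch of steps 1 and 3; the node type psdFileTex and the layerSetName attribute are the assumed names from the standard Maya node library, and the file path and layer set name are hypothetical.

import maya.cmds as cmds

# Create the PSD File texture node and point it at a PSD file.
psd = cmds.shadingNode('psdFileTex', asTexture=True)
cmds.setAttr(psd + '.fileTextureName', 'sourceimages/character.psd', type='string')

# Optionally link the node to a specific layer set instead of the composite image.
cmds.setAttr(psd + '.layerSetName', 'skinDetails', type='string')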

To convert an existing Adobe Photoshop File Texture with layer sets to a
Layered Texture
1. In Hypershade, do one of the following:
● Right-click a PSD File Texture, then select Convert to Layered Texture.
● Select Edit > Convert PSD to Layered Texture.
2. To see the multiple layer sets connected to the layered texture, regraph the
layered texture.

Topics in this section

● Adjust conversion options


● Convert a PSD node to a file texture
● Create a PSD file with layer sets from within Maya
● Display the alpha of a PSD file in the scene view
● Edit PSD Networks
● Open a PSD network in Adobe Photoshop from Maya
● Sketch out guidelines (“lipstick”) for paint application
● Update PSD Networks
● Photoshop integration limitations

Maya LT features
Maya® LT 3D game development software includes animation, rigging, modeling,
and lighting features for indie game creators to work faster and without creative
limits.
3D modeling, UVs, and textures
3D modeling tools
Produce realistic in-game characters, props, and environments with sophisticated
3D modeling tools.
UV editor
Quickly create and edit UV topology with artist-friendly tools and functionality.
Shaders and materials
Create high-quality materials with shading, or work with Allegorithmic Substance
materials in Maya LT.
Built-in sculpting tools
Use brush-based sculpting tools to perform high-level edits on your models without
having to export to a different tool.
LOD tools for game model efficiency
Optimize content for mobile devices with polygon reduction, data cleanup, blind data
tagging, and level-of-detail tools.
Physically based shader materials
Use ShaderFX to create high-quality physically based shader materials within Maya
LT.
Lighting and texture baking
Simulate realistic game lighting. Use global illumination tools to bake lighting data
into texture or vertex maps.
Modeling improvements
Make modeling more efficient with quad draw, multi-cut, bevel, and symmetry
enhancements.

Overall UV Mapping strategies

There are many ways that you can map a 3D object, all of which balance optimization versus
minimizing stretching/pinching differently. Here are three of the most common strategies along with
their uses.

Heavily optimized UV mapping

One extreme is a heavily optimized UV map, which is used exclusively for real-time graphics. The
end-goal is to have as much coverage on a map that is as small as possible. Heavily optimized UV
maps are most common when working with mobile/low-poly graphics, where you have heavy
limitations on your assets, but also for current-gen graphics, since loading texture maps into
graphics memory is heavy work. The Texel density will vary a lot, and stretching and pinching will be
created on purpose where needed. Every shell that can be mirrored (along a plane or radially) is
often stacked on top of its counterpart. Furthermore, shells are oriented straight along U/V, and those
with odd shapes are cut up into smaller parts. Do not be afraid of straightening border shells so that
you can pack shells tighter together and fit them within the 0 to +1 UV space. Additionally, for
mobile graphics, consider pushing the UVs of a shell into a line segment if you only need information
in one direction (like a gradient) or even a point (if you only need a color).
See the Optimization section below for more details.

Technical UV mapping

On the other end of the spectrum is technical UV mapping. This approach is most common when the
model is going to be used for pre-rendered graphics, technical demonstrations, or for promo
material. Pixel aspect ratio is very important, while texture space and optimization is not. It is
important that all your shells have the same Texel Density and that you eradicate stretching and
pinching as best you can. It is also common practice to use multiple large UV maps for different parts
of the mesh (known as multi-tile UV mapping). However, not all pre-rendered art will require a UV
map, as it is common to use procedural 3d textures for different materials as well. Always check with
your art director/art lead to make sure that the UV map is actually necessary.

Continuous UV mapping

Located somewhere in-between the previous two extremes. This is the most common method when
working with more high-detail organic models (e.g. a character or a tree). The focus is on reducing
the number of seams and to preserve the Texel Density across the shells. Heavy optimizations are
difficult due to all the oddly-shaped UV shells, but try not to neglect optimizing altogether.

Optimizations

Unless you are mapping for technical models or VFX assets, you need to think about optimizations
that will enhance performance while reducing memory usage. Here are some tips:

● Consider what parts of your model are going to be visible to the camera, how often, and at
what distances. Start by setting a uniform texel density to all UV shells and then scale up or
down according to those factors. For example, if you are working on an FPS weapon with iron
sights then the scope part should have the highest density, and the right and front side of the
gun the lowest.
● Symmetry mapping: When working on symmetrical meshes, you can stack shells on top of
each other that are mirrored over a plane, or even radially, so that they reference the same
portion of the texture. However, when doing so be careful to consider what kind of ambient
occlusion shadows the affected shells will receive (if you use AO). If you have some form of
text or logo on one side of the model that you don't want mirrored to the other, consider
mirroring the entire shell except the area with the logo. Sometimes you can even add extra
geometry around this logo in order to cut out that particular part of the shell, allowing
everything else from the large shells to be stacked. To make symmetry mapping easier, cut
your mesh in half before performing a layout.
● Divide and conquer: Start laying out UV shells by placing the largest and most oddly-shaped
shells into the UV range (0 to 1) first. Also consider going for a ratio other than 1:1, such as
2:1 or even 4:1, as the shape of the texture map does not affect the texture processing at all.
Work from one corner towards the opposite one, while trying to keep the layout ratio intact.
This way, if you end up with too little or too much UV space left, you can simply select your
entire UV layout and scale it.
● Loading textures into the graphics memory is slowed down by the following factors: How
many channels there are in the texture (RGBA is 4x more expensive than grayscale), how
many pixels there are in the texture, and how many texture maps your asset uses. The latter
is very important. When doing environment art, you are strongly advised to use texture
atlases.

Shell spacing

Shell spacing refers to the amount of space between UV shells (also known as shell padding). There
are a few things to note when it comes to shell spacing:

● Texture bleeding: Texture bleeding is when the color information inside one UV shell bleeds
into another UV shell due to texture filtering. In general, it's good practice to keep at least 2
pixels around all UV shells so that there is a 2px margin to the texture map border, and a 4px
margin between UV shells. However, that only applies to the final version of the texture. If the
texture map is further reduced in size by the engine, you need to increase this padding.
● If LOD models are being used, then every LOD/Mipmap step requires double the shell
spacing. For example: If you have an asset with 3x LOD steps and an original texture size of
2048px, then on the smallest mipmap level the texture is only 512px. In this case, the
spacing needs to be 4px between shells at 512px, then 8px at 1024px, then 16px at 2048px.
Thus, when doing the layout you need to make sure that you have at least 16px distance
between shells and 8px distance to the UV map border. You can use the UV Toolkit's
Measure tool to keep track of pixel distances.
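A small Python helper that reproduces the arithmetic above: given the full texture size, the smallest mip level that will be used, and the minimum padding required at that smallest level, it returns the padding needed at full resolution. The function name and values are only illustrative.

def padding_at_full_res(full_size, smallest_size, min_padding_at_smallest=4):
    # Double the padding for every halving between the smallest mip and full size.
    padding = min_padding_at_smallest
    size = smallest_size
    while size < full_size:
        size *= 2
        padding *= 2
    return padding

# 2048px texture, smallest mip used is 512px -> 16px padding between shells.
print(padding_at_full_res(2048, 512))   # 16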

Keep UVs within the 0 to +1 texture coordinates

The UV Editor displays a grid marking the texture space for UVs. The working area of the grid
begins at 0 and extends to 1. By default, the UV mapping operations in Maya automatically fit UVs
within the 0 to 1 coordinates. While it is possible to move or scale the UVs so they reside outside of
this 0 to 1 region, you should keep the UVs for a surface positioned within these 0 to 1 coordinates,
in the majority of situations.

When the UVs extend beyond the 0 to 1 range, the texture will appear to repeat or wrap around the
corresponding vertices when viewed in the 3D scene or rendered image. The exception to this
guideline is when you actually want the texture to repeat on the surface, such as a brick texture
along the model of a wall.

Overlapping UV shells

If any of the UV shells overlap in the UV Editor, the texture will repeat on the corresponding vertices.
Depending on the mapping strategy (above) you want to use, you will either want to take advantage
of / avoid this. Shells can be easily stacked using the UV Toolkit's Stack Shells command, or
separated using its Layout command.

Snapping UVs

You can use snapping in the UV Editor to lock your transformations to existing objects in the scene.
This functionality is similar to the snapping functionality in the scene view.

You can use the Preserve Component Spacing option in the Move Tool settings when transforming
multiple UVs to maintain their relative spacing.
To snap to grid intersections, hold x (snap icon in the Status Line).
To snap to other UVs (points), hold v (snap icon in the Status Line).
To snap to pixels, turn on pixel snapping (icon in the UV Editor toolbar).

Note:

● If snapping is on and you drag an axis manipulator (as opposed to the manipulator’s center),
the manipulator snaps to the nearest point or grid intersection restricted to that axis
(depending on the snapping mode). Alternatively, you can use Shift + x or Shift + v to snap to
the nearest point restricted by the U or V axis respectively.
● Pixel Snapping is measured by monitor pixels. You can zoom in close to the UVs to achieve
better results. This setting also affects snapping for rotating and scaling pivot locations.

Related topics

● Mapping UVs
● Creating UVs
● Viewing and evaluating UVs
● UV Editor overview

Apply the checker pattern shader to a UV mesh

You can apply a checker texture to your UV mesh to spot problems, like stretched or overlapped
UVs, that can occur during UV mapping.

To apply the checker pattern shader

1. In the UV Editor toolbar, click the checker pattern shader icon or go to Textures >
Checker Map.
The checker pattern shader is applied to the surface of your UV mesh and appears behind
the grid in the UV Editor. You can toggle between a simple black and white shader and a color
gradient in the Checker Map options. The color gradient makes it easy to locate tiles on your
UV mesh.
Tip: If the checker pattern shader doesn't appear on your mesh, ensure that Viewport 2.0 is
selected in the Renderer panel menu.
In the following example, the size of the checkers is inconsistent, indicating that the texture is
stretched in some areas.
Note: The checker pattern shader does not affect the object's original materials, shaders,
and texture assignments.
2. Do one of the following to turn off the checker shader:
● Close the UV Editor.
● Click the checker pattern shader icon.
The mesh's original materials, shaders, and texture assignments reappear on your UV mesh
and in the UV Editor.

Related topics

● Prepare a UV shell for unfolding


● Identify UV distortion

UV Set Editor

Navigate to Create UVs > UV Set Editor.

The UV Set Editor lets you create and edit UV sets for multiple polygon meshes simultaneously.

The UV Set Editor lists only the UV sets for the currently selected polygon meshes. You must first
select the polygon meshes in order to edit the UV sets.

New

Creates a new, empty UV set on the currently selected objects. You can then create the UVs in the
set using one of the mapping/projection methods. This feature is the same as Create UVs > Create
Empty UV Set. For information, see Create Empty UV Set options.

Rename

Lets you rename the currently selected UV set.

Delete

Deletes the currently selected UV set.

Note: Maya won't allow you to delete the top-most entry in the UV Set list. If you do need to delete
this entry, you can move it lower in the list and then delete it.

Copy

Creates a new UV set based on an existing UV layout or transfers a UV layout from one set to
another. This feature is the same as Copy UVs to UV Set.

Propagate

Assigns the selected UV set from the UV Set Editor list to the selected objects in the scene. The
selected UV set becomes the active UV set for those objects.
Unmapped

Selects unmapped faces on any selected objects in the scene. This aids in visually determining
any areas where texture maps do not appear or appear incorrectly.
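The same operations can be scripted with the polyUVSet command; a short Python (maya.cmds) sketch, using a hypothetical mesh and UV set names:

import maya.cmds as cmds

cmds.select('pSphere1')                                                  # hypothetical mesh

cmds.polyUVSet(create=True, uvSet='detailUVs')                           # New
cmds.polyUVSet(rename=True, uvSet='detailUVs', newUVSet='lightmapUVs')   # Rename
cmds.polyUVSet(copy=True, uvSet='map1', newUVSet='lightmapUVs')          # Copy map1 into it
cmds.polyUVSet(currentUVSet=True, uvSet='lightmapUVs')                   # make it the active set
# cmds.polyUVSet(delete=True, uvSet='lightmapUVs')                       # Delete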

Confirm UV placement

Confirming that UVs are positioned correctly is critical if you want the textures to appear correctly on
the surfaces of your mesh. One method of confirming the UV placement is to apply a temporary
texture map, which you can then evaluate on your model for possible distortions. There are two such
shaders made specifically for this purpose: checker and distortion.

To apply a temporary map to a UV shell

1. Select the mesh.


2. Open the UV Editor.

3. Click the Checker Shader or Distortion Shader button.


The checker shader allows you to evaluate distorted areas by seeing where the colored squares are
distorted.

The distortion shader applies specific colors to areas where the UV shell is stretched or compacted
(red and blue respectively). To avoid distortion, you'll want to move UVs until the majority of the UV
shell is white.

Related topics

● Mapping UVs
● Planar UV mapping
● Cylindrical UV mapping
● Spherical UV mapping
Unfolding a UV mesh

A UV mesh is made of UVs similar to how a polygon mesh is made of vertices. Unfolding a UV mesh
refers to the process of cutting a seam in the UV mesh and then unfolding along that seam. The
process is similar to cutting a seam along a shirt and laying it flat on a table. By laying the UVs flat,
you can easily paint a texture on the 2D surface, which you can then wrap around the model.

Unfold works well in situations where UV meshes are created from polygonal models that have
complex organic shapes. In these situations, other projection methods may not be as successful and
automatic mapping would produce too many individual UV shells that would necessitate many move
and sew operations afterwards. For example, polygon models that are prone to overlap via other
projection methods are ideal for unfolding. You do not need to unfold non-organic poly mesh forms.
In these cases, other projection techniques would be better suited and more straightforward. For
example, a wall can be planar-projected or a bottle can be cylindrically projected.

Topics in this section

● Prepare a UV mesh for unfolding


● Unfold a UV mesh
● Best practices for unfolding
Layout UV shells

The Layout feature automatically repositions UV shells so they don’t overlap in UV texture space
and maximizes the spacing and fit between them. This is useful for ensuring that the UV shells
occupy their own separate UV texture space. For example, if you are applying Fur to a surface, the
UV texture coordinates on a given shell must not overlap.

In general, you should keep UV shells separated for convenience and clarity, but it is not absolutely
necessary. For example, you may want the UV shells to overlap so different faces use the same
region of a texture.

You can also use the Layout feature to

● Scale or stretch the UV shells to fit within the 0 to 1 coordinates of the UV Editor. This is
useful if you need to maximize the texture space used when creating a texture map. For
example, when using 3D Paint.
● Arrange the UV layout of multiple selected objects simultaneously. This improves your
efficiency when you need to quickly sort the UVs for multiple objects within the UV Editor, or
when multiple objects need to share different parts of the same texture.

The Layout feature is available from within the UV Editor by selecting Modify > Layout from the UV
Editor’s menu bar, or Arrange & Layout > Layout in the UV Toolkit.

Note: Before using Layout you should already have performed the necessary UV mapping. That is,
the Layout feature will only arrange existing UV texture coordinates; it will not create them.

To lay out UVs for multiple objects simultaneously

1. Select the objects or faces whose UVs you want to lay out.
2. Select UV > UV Editor to display the UV Editor.

3. In the UV Editor, select Modify > Layout > (if you need to modify options), or in the UV
Toolkit's Arrange & Layout section click Layout.
4. In the Layout UVs Options window, set the following options depending on your required
outcomes:
● Set Multiple Objects to Pack Separately (overlapping) when you require multiple
object’s UVs to overlap within the UV texture space.
● Set Multiple Objects to Pack Together (non-overlapping) (Default) when you require
the UVs to be separated. This is useful when you need each UV set to be separate
and distinct from each other.
● Set Shell Transform Settings to customize how Maya moves or rotates shells around
the UV space.
● Set the Shell Padding and Tile Padding to specify how far shells of the same object
are spaced from each other and from the edge of the UV space respectively.
5. Click Apply to perform the layout operation or Layout UVs if you want to perform the
operation and close the Layout UVs Options window.
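The procedure above can also be approximated by script with the polyLayoutUV command; a minimal sketch, where the object names and flag values are only examples and should be adjusted for your scene:

import maya.cmds as cmds

cmds.select('pCube1.f[*]', 'pSphere1.f[*]')     # hypothetical objects/faces

# Pack the UV shells so they do not overlap, scale them uniformly to fit the
# 0 to 1 space, and leave a small percentage of space between shells.
cmds.polyLayoutUV(layout=2, scale=1, percentageSpace=0.2, flipReversed=False)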

Related topics

● Display overlapping UVs


● Layout UVs options
● Separate and attach UV shells

Cut UV Tool

The Cut UV Tool lets you split UVs by dragging along edges. Press Ctrl to temporarily activate the
Sew UV Tool and weld UVs together. For more information on cutting edges, see Separate and
attach UVs.

To open the Cut UV Tool, select it from the Cut & Sew section of the UV Toolkit or select Tools > Cut
from the UV Editor menus.

Note: When moving between the Viewport and UV Editor, the Cut Tool automatically changes to the
3D Cut and Sew Tool.

The following options appear in the Tool Settings window when you select Tools > Cut > .

Brush Options

Edge Select Sensitive

Specifies the radius of the tool. Higher Edge Select Sensitive values increase the radius. Any
edges that are within the radius when you drag your cursor across a shell are cut.

Steady Stroke

Helps to produce a smoother stroke by filtering mouse movement. When on, a vector displays on
the tool cursor and no cut appears until you drag a distance equal to the length of the vector. The
length of the vector is set by the Distance setting.

Distance

Sets the length of the Steady Stroke vector on the tool cursor.

Cut Open Ratio


Specifies the size of the gap between the split UVs. A higher value produces a wider gap.

Display All Shell Borders

When on, highlights shell borders. Sets of connected edges are displayed in different colors,
making it easy to identify edges that are shared. Display All Shell Borders is off by default.

Related topics

● Separate and Attach UVs


● 3D Cut and Sew Tool

Select UV shells

To select UV shells in the scene

1. Right-click your object, and then select UV > UV Shell.


2. Move the cursor over your object to highlight individual shells.

3. Click a UV shell to select it.


To select UV shells in the UV Editor

1. Do one of the following in the UV Texture Editor:


● Right-click in the 2D view, select Shell, and then click a shell to select it.

● Turn on the UV Shell selection mask ( ) in the UV Toolkit, and then click a shell
to select it.
● Double-click a face in face selection mode.
Tip: To convert the selected UV shell to another component mode, Ctrl + right-click in the 2D
view, and then select an option from the marking menu.

To select all components on a shell in the UV Editor

1. Double-click a single vertex, face, or UV.


All connected components of that type on the same shell are selected.
2. Ctrl + double-click a single vertex, face, or UV to deselect all connected components.

To deselect all components on shell in the UV Editor

1. Ctrl + double-click a single vertex, face, or UV.


All connected components of that type on the same shell are deselected.

To select border edges in the UV Editor

1. Double-click a border edge.


All of the border edges on the same shell are selected.

To select components that share a common quality in the UV Editor

1. Switch to the desired component selection mode.


2. In the UV Toolkit, go to the Select By Type section and click the appropriate button.
All components with the appropriate commonality are selected.
Cylindrical UV mapping

Cylindrical mapping creates UVs for an object based on a cylindrical projection shape that gets
wrapped around the mesh. This projection is best for shapes which can be completely enclosed and
visible within a cylinder, without projecting or hollow parts.

1. Select the faces you want to project UVs onto.


2. Select UV > Cylindrical Mapping, or in the UV Editor's UV Toolkit go to Create > Cylindrical.
3. Use the manipulator to change the position and size of the projection shape.
4. Use the UV Editor to view and edit the resulting UVs.

Note:
Projection mapping only works properly on a single object at a time. If you need to apply a
projection to multiple polygonal objects in a single step, combine the objects into one, apply
the projection, and then separate the parts back out. Otherwise, perform a projection on
each object separately.
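A minimal Python (maya.cmds) sketch of the steps above, applied to a hypothetical bottle mesh:

import maya.cmds as cmds

# Apply a cylindrical projection to all faces of the mesh.
cmds.select('bottle.f[*]')
cmds.polyCylProjection()

# The resulting projection node exposes the projection's center, rotation and
# scale, which correspond to the manipulator described in step 3 above.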
Display connected edges on UV shells

When you want to perform a UV mapping operation on a model with many UV shells, it's helpful to
know which shell edges are connected.

Note: Connected edges are only visible in the UV Editor.

To display connected edges on UV shells

1. Press the '8' key.

Sets of connected edges are displayed in different colors, making it easy to identify edges that are
shared.
Tip: You can adjust the width of shell borders by selecting Display > Polygons > Edge Width.

Alternatively, you can Shift + right-click in the 2D view and select Toggle Shell Borders.
Select which UV set to use for nHair

Use UV sets to define where the hair goes and to control its distribution and density.

By default nHair uses UV set map1, but you can link your hair systems to different UV sets on
your polygonal surface. For example, if you have different UV sets defined for different
projections, you can link the hair systems to the UV set that gives the best result.

In the Relationship Editor, when you link a hair system to a UV set, all the hair systems
attached to the polygon are linked to that UV set. However, the UV set is controlled by the Map
Set Name attribute in the Extra Attributes section in the follicleShape, so you could define it
from there. (Currently you cannot edit the follicle’s Map Set Name attribute in the Attribute
Spreadsheet.)
Related topics

● UV sets
● Creating UVs
● Create UVs > Automatic Mapping
Tip:
Currently you need to create a UV set for hair that is between zero and 1. You can use
regions outside this range for areas you want to be bald. Or if you simply do not
define UVs for a triangle, then it will be bald. You should do this before creating the
hairs, as on creation of hair, hairs are rejected that are located at undefined UV
locations. If the UV for a follicle becomes undefined after creation, then the follicle will
simply move to the center of the object; however, you could then manually select and
delete these follicles. See Create your own hair on surfaces.

To link attached hair systems to a different UV set

1. Select the object with the hair system(s) you want to link.
2. Select Window > Relationship Editors> UV Linking > Hair/UV to open the Relationship
Editor.
The left panel lists all the hair systems that are attached to the selected polygon.
The right panel lists mesh nodes with UV Sets for the selected polygon. If more than
one polygon is selected, only the last selected polygon is listed.
3. In the left panel, click a hair system. In the right panel, the UV set the hair system is
linked to is highlighted.
4. In the right panel, click the UV set you want to link the hair system to.
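
As a scripted alternative to the Relationship Editor, you can set the follicles' Map Set Name
attribute directly, as described above. In this sketch the attribute long name mapSetName and
the UV set name uvSet2 are assumptions; verify them on your own follicleShape nodes.

    import maya.cmds as cmds

    # Point every follicle at a different UV set ('uvSet2' is a placeholder name).
    for follicle in cmds.ls(type='follicle'):
        if cmds.attributeQuery('mapSetName', node=follicle, exists=True):
            cmds.setAttr(follicle + '.mapSetName', 'uvSet2', type='string')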
Texture Options

● In the Animation and Rigging menu sets: Deform > (Create) Texture >

Basic tab

Point Space

Specifies how to sample the texture.

World

With World, the texture deformer converts the world position of each vertex to a sample point.

For a 2D texture, the Y axis is ignored; the values on the X and Z axes are converted to UV coordinates.

Because the world position of vertices can be changed by transforming the object, you can
transform the object to modify the deformation.

Local

With Local, the texture deformer converts the local position of each vertex to a sample point.

For a 2D texture, the Y axis is ignored; the values on the X and Z axes are converted to UV coordinates.

Because the local position of vertices cannot be changed by transforming the object, you cannot
transform the object to modify the deformation.
UV

With UV, the texture deformer uses the current UV value of each vertex to sample pixel data. UV
only supports 2D textures.

Note: UV point space does not support NURBS objects.

Direction

Specifies the direction of the deformation.

Handle

With Handle, you can control the direction of the deformation by using a handle. When you select
Handle, a handle manipulator appears at the origin. You can move, rotate, and scale the handle to
edit the texture deformer's effect. When other Direction options are selected, the handle is hidden.

Note: If you delete the handle you will delete both the deformer node and the tweak node.

Normal

With Normal, the direction of the deformation is replaced by the normal of each polygon face.

Vector

With Vector, the texture affects the direction of the deformation. The texture's three color
channels can be used to compute the deformation on the X, Y, and Z axes separately.

Vector Space

Specifies the coordinate space in which the texture deformation is applied.
Tangent

The coordinate space on a face is defined by the normal, tangent and binormal, which are
orthogonalized and normalized relative to each other. Use if your texture is extracted with Tangent
mode. With Tangent, the coordinate space CAN be changed by rotating and scaling the object. In
other words, when the user rotates or scales the object, the deformation is rotated or scaled as
well.

Object

Local coordinate space for the model. Use if your texture is extracted with Object mode. With
Object, the coordinate space CAN be changed by rotating and scaling the object. In other words,
when the user rotates or scales the object, the deformation is rotated or scaled as well.

World

Coordinate space for the 3D scene. Use if your texture is extracted with World mode. With World,
the coordinate space CANNOT be changed by rotating and scaling the object. In other words, the
deformation is fixed on three Axes when the user rotates or scales the object.

Texture

Specifies the input texture of the texture deformer. The input texture is used to compute the
deformation of each point. When Handle or Normal (see Direction) is selected, the texture is
converted to luminance; larger values typically make points deform more along the specified
direction. When Vector (see Direction) is selected, the texture's three channels are used for the
three axes.

You can connect a texture's outColor attribute to Texture to drive the deformation with color data.
Three texture types are supported: 2D, 3D, and layered textures.

Texture has a default value of black, which you can modify in the Attribute Editor. When Texture
has no connection, the deformer uses the default value.
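
As a rough scripted example of this connection (the node names and the attribute long name
texture are placeholders/assumptions):

    import maya.cmds as cmds

    # Drive the deformation with an existing texture's outColor, replacing the default black value.
    cmds.connectAttr('checker1.outColor', 'textureDeformer1.texture')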
Strength

Specifies the influence intensity of the deformation. Larger values increase the intensity of
deformation.

Strength applies when Direction is set to Handle or Normal.

Offset

Specifies the offset of the translation.

Changing this value can create a wriggling effect. Use the slider to specify values from -10.0000 to
10.0000. Default is 0.0000.

Offset applies when Direction is set to Handle or Normal.

Vector Strength

Specifies the influence intensity of the deformation. With Vector Strength you can set a different
intensity on each of the three axes.

Vector Strength applies when Direction is set to Vector.

Vector Offset

Specifies the offset of the deformation. With Vector Offset you can set a different offset on each
of the three axes.

Vector Offset applies when Direction is set to Vector.

Envelope
Specifies the deformation scale factor.

A value of 0.0000 provides no deformation, a value of 0.5000 provides a deformation effect scaled
to half of its full effect, and a value of 1.0000 provides the full deformation effect. Use the slider to
select values between 0.0000 and 1.0000. Default is 1.0000.

Advanced tab

See Advanced deformer options.

Deformation order

Specifies the placement of the deformer node in the deformable object's history. For more
information about deformer placement, see Deformation order.

Exclusive

Specifies whether the deformer set is in a partition. Sets in a partition can have no overlapping
members. If on, the Partition To Use and New Partition Name options become available. Default is
off.

Partition to use

Lists any existing partitions, and a default selection Create New Partition. If you select Create New
Partition, you can edit the New Partition Name field to specify the name of a new partition. Only
available if Exclusive is on.

New Partition Name

Specifies the name of a new partition that will include the deformer set. The suggested partition
name is deformPartition, which will be created if it does not already exist. Typically, you might put
all your exclusive deformer sets in the partition named deformPartition. However, you can create
as many partitions as you like, and name them whatever you want. Only available if Exclusive is
on.
Create a texture deformer

Two nodes are created when you create a texture deformer: textureDeformer and
textureDeformerHandle. The textureDeformer node deforms the mesh, while the
textureDeformerHandle node controls the deformation direction. textureDeformerHandle is hidden
by default. You can display it by setting Direction to Handle, either in the Attribute Editor or in the
Texture Options.

To create a texture deformer

1. Select a deformable object.


2. Select Deform > Texture > (the option box).
3. In the Texture Options that appear, set the options you want.

4. Click the map button next to the Texture attribute in the Attribute Editor.
The Create Render Node window appears.
5. Select a texture, then manipulate the texture handle.
The object deforms.
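
The same procedure can be sketched in Python with maya.cmds. The command name
textureDeformer and the attribute long names used below (texture, strength) are assumed to
match the options documented above; verify them in your Maya version.

    import maya.cmds as cmds

    # A test surface with enough subdivisions to show the deformation.
    plane = cmds.polyPlane(width=10, height=10, subdivisionsX=50, subdivisionsY=50)[0]

    # Creating the deformer also creates the hidden textureDeformerHandle node.
    deformer = cmds.textureDeformer(plane)[0]

    # Drive the deformation with a procedural texture and raise its influence.
    tex = cmds.shadingNode('fractal', asTexture=True)
    cmds.connectAttr(tex + '.outColor', deformer + '.texture')
    cmds.setAttr(deformer + '.strength', 2.0)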

Position 3D textures

For more information about positioning textures with the place3dTexture node, see 3D texture positioning.

To use the 3D placement manipulator

1. Assign a 3D texture to a surface.


The texture’s place3dTexture node (swatch) appears in Hypershade and a manipulator
appears in the view panel.
2. Double-click the node to open the Attribute Editor.
3. Click Interactive Placement (see Interactive Placement) to show the placement manipulator and
reposition the 3D texture. (This tool is similar to a combined version of the Move, Rotate, and
Scale tools.)
To use the Fit to group bbox option

1. Assign a 3D texture to a surface.


The texture’s place3dTexture node (swatch) appears in Hypershade and a manipulator
appears in the view panel.
2. Double-click the node to open the Attribute Editor.
3. Click Fit to group bbox to reposition the 3D texture. This scales, moves, and rotates the 3D
texture as necessary to fit the assigned object’s bounding box.
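
For reference, a rough script-level equivalent of assigning and positioning a 3D texture is shown
below. Hypershade normally makes this connection for you when you assign a 3D texture; the
placementMatrix / worldInverseMatrix wiring used here is the assumed connection.

    import maya.cmds as cmds

    # Create a 3D texture and its placement node, then wire the placement.
    tex = cmds.shadingNode('marble', asTexture=True)
    place = cmds.shadingNode('place3dTexture', asUtility=True)
    cmds.connectAttr(place + '.worldInverseMatrix[0]', tex + '.placementMatrix')

    # Transforming the place3dTexture node repositions the texture in the scene,
    # which is what the placement manipulator does interactively.
    cmds.setAttr(place + '.translateY', 2)
    cmds.setAttr(place + '.rotateY', 45)
    cmds.setAttr(place + '.scaleX', 2)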
Texture placement vs. label mapping

By default, the Texture Placement tool is set for Label Mapping, which lets you stretch, shrink, move,
and rotate the texture as if it were a label. You can also set the tool to Surface Placement, which
lets you stretch, shrink, move, and rotate the texture as if it were wallpaper.

In both cases, wrap U and wrap V control the tiling of the texture in horizontal and vertical directions
(based on the UV coordinates on the object).

To change the Texture Placement Tool settings, double-click the Texture Placement Tool icon in the
toolbar, or select Texturing > NURBS Texture Placement Tool >. See the Texturing menu for more
information.

Surface placement

When you drag the manipulator handles, the attribute settings for Repeat UV, Offset, and Rotate UV
change in the place2dTexture’s Attribute Editor. This lets you stretch, shrink, move, and rotate the
texture as if it were wallpaper.

Label mapping

When you drag the manipulator handles, the attribute settings for Coverage, Translate Frame, and
Rotate Frame change in the place2dTexture’s Attribute Editor. This lets you stretch, shrink, move,
and rotate the texture as if it were a label.
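
Both sets of settings live on the place2dTexture node and can also be set from a script. In this
sketch the node name place2dTexture1 is a placeholder, and the attribute long names are assumed
to correspond to the settings named above.

    import maya.cmds as cmds

    p2d = 'place2dTexture1'   # placeholder node name

    # Surface placement (wallpaper-style): Repeat UV, Offset, Rotate UV.
    cmds.setAttr(p2d + '.repeatU', 4)
    cmds.setAttr(p2d + '.repeatV', 4)
    cmds.setAttr(p2d + '.offsetU', 0.25)
    cmds.setAttr(p2d + '.rotateUV', 45)

    # Label mapping (label-style): Coverage, Translate Frame, Rotate Frame.
    cmds.setAttr(p2d + '.coverageU', 0.5)
    cmds.setAttr(p2d + '.coverageV', 0.5)
    cmds.setAttr(p2d + '.translateFrameU', 0.25)
    cmds.setAttr(p2d + '.rotateFrame', 90)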
