
H16 Mantra User Guide


ABSORPTION AND NESTED DIELECTRICS

ABSORPTION

When light travels through a transparent medium such as glass, the material absorbs a certain amount
of light. This effect becomes more obvious when the transparent medium has been tinted or colored in
some way, like stained glass or fruit juice.

When rendering with Mantra, this effect can be simulated in a variety of ways – using a volume to
attenuate light as it travels through an object, for instance. However, for simplicity, a set of parameters
has been provided which allows an artist to quickly achieve the desired look without the need for
complex setups. These parameters can be found on the Principled Shader (as well as the Classic Shader).

Transmission Color
The color which will be used to tint a ray as it passes through a transparent object. It is important to
remember that the amount of tinting is dependent on both the distance the ray has travelled through
the object and the At Distance parameter.

At Distance
This parameter determines how the transmission color affects a ray as it travels through a transparent
object. When the distance travelled through an object matches the At Distance parameter, one hundred
percent of the Transmission Color will be blended with the result. In cases where the ray does not travel
far enough inside the object to reach the At Distance limit, a smaller percentage of the Transmission
Color is used. Conversely, if the At Distance limit is reached before the ray has travelled all the way
through the object, a darker and more saturated version of the Transmission Color is used.




For simplicity, imagine that each ray which encounters a transparent surface is “tagged” with data like
the hit location, Transmission Color, and At Distance value. When this ray hits the other side of the
transparent object, it can now use this data to calculate how far it has travelled and therefore how to
apply the Transmission Color. Eventually, this ray may hit an opaque object where it evaluates the
surface shader and combines the result with the previously calculated absorption color.
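To make the relationship between travelled distance, Transmission Color, and At Distance concrete, the
sketch below shows one common way such a tint can be computed – a Beer-Lambert style falloff where
the color is raised to the power of (distance / At Distance). This is only an illustration of the behaviour
described above, not Mantra's actual implementation, and the function and variable names are hypothetical.

# Illustrative sketch only (not Mantra's code): a Beer-Lambert style absorption tint.
def absorption_tint(transmission_color, at_distance, travelled):
    # At travelled == at_distance the tint equals the Transmission Color exactly;
    # shorter paths are tinted less, longer paths become darker and more saturated.
    return tuple(c ** (travelled / at_distance) for c in transmission_color)

color = (0.2, 0.8, 0.9)                      # a cyan-ish Transmission Color
print(absorption_tint(color, 2.0, 1.0))      # half of At Distance: lighter tint
print(absorption_tint(color, 2.0, 2.0))      # exactly At Distance: the color itself
print(absorption_tint(color, 2.0, 4.0))      # twice At Distance: darker, more saturated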


Using this method, it is possible to accumulate absorption colors across multiple, non-overlapping,
transparent objects. (For a solution to overlapping transparent objects, see the section on Nested
Dielectrics.) Essentially, the process of tagging a ray and calculating its color can be repeated for every
transparent object encountered. Rather than replacing the absorption color, the values are multiplied
together before being combined with the color returned upon hitting an opaque object.
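Continuing the sketch above, accumulating absorption across several non-overlapping transparent
objects simply amounts to multiplying the individual tints together before applying the result to the
color returned by the opaque hit. Again, this is an illustration of the idea rather than Mantra's implementation.

# Illustrative only: multiply the tints from each transparent object a ray passed
# through, then apply the product to the shaded color of the opaque surface it hit.
def apply_accumulated_absorption(tints, opaque_color):
    total = (1.0, 1.0, 1.0)
    for tint in tints:
        total = tuple(a * b for a, b in zip(total, tint))
    return tuple(t * c for t, c in zip(total, opaque_color))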



It should be noted that a ray does not have to exit a transparent object for absorption to take place.
Consider the case of an opaque object embedded in a transparent one. Upon entering the transparent
surface, the ray is still tagged with the relevant data to calculate the absorption color. The only
difference is that the distance travelled is calculated at the point the ray collides with the opaque object
rather than upon exiting the transparent one.


Additionally, the transparent object does not have to be a closed surface - single-sided surfaces can take
advantage of absorption as well. In places where a ray does not collide with another object in the
scene, the absorption information is discarded and the color black is returned. (Essentially, this treats the
one-sided surface as an infinitely deep transparent object which absorbs all light.)


NOTE: A limitation of one-sided transparent surfaces is that the absorption color will not be applied to
Light Objects seen through transparent surfaces. If this effect is required, consider adding depth to your
transparent object, or creating emissive geometry as a stand-in for your light.

ABSORPTION WORKFLOW CONSIDERATIONS



At Distance and Scale
When applying a shader with Absorption, it is always important to remember that the amount of
absorption which occurs is based on the distance travelled through the object. Both the travel distance
and the At Distance parameter are measured in World Space. This means that the scale of your object
can have a large influence on the final rendered results. Consider the following objects with the same
shader settings:


The example above illustrates how important it is to do your look development on an object at the scale
it will be rendered when using Absorption. If you were to do your shader setup at one scale, but then
the object was placed in the final scene at a drastically different scale, you would not get the same look.

Absorption and Camera Position


As explained above, absorption works by tagging rays that have intersected with transparent objects
with useful pieces of data. Only when a ray intersects a surface which is facing the camera will this
information be considered (this allows Mantra to track when a ray enters an object versus when it
leaves an object). However, this also means that placing a camera inside a transparent object will not
generate correct results.



Notice how the Transmission Color has been lost when the camera is placed inside the object. The ray
leaves the camera and intersects with the inside of the sphere as it leaves the transparent object - it
never receives information about when it entered the sphere, so it is unable to calculate the distance
travelled.

Absorption and Volumetric Effects


Absorption is a very simple but effective way of representing the attenuation of light through a
transparent medium. However, because it is not a truly volumetric effect, it is not appropriate for
representing all types of lighting effects. In some objects, the attenuation of light is caused by
particulate matter suspended in the medium; these particles can both absorb and scatter light. Consider
a large fluid effect like an ocean:


You can see that while the ocean surface has a realistic feeling of depth using absorption, only the true
volumetric rendering displays the characteristic light scattering of a real ocean. Unfortunately, volumes
are costly to render, so some consideration must be given to the quality of a render versus the speed. In
many cases, like a swimming pool or shallow river, absorption may be sufficient to achieve the desired
result without a loss in quality.

NESTED DIELECTRICS

Rendering multiple transparent objects which are embedded inside each other is a complex task, both
from the point of view of the renderer and from that of the artist building the scene.
Consider the following example:



The simplest approach would be to model each of the objects separately and simply have them overlap.
However, if you consider how a ray will travel through each of the transparent objects, it quickly
becomes clear that you will generate a cascading series of incorrect refractions.


To make matters worse, not only will rays refract incorrectly, but there will be surfaces in places where
none should exist (fluid intersecting the ice cubes, glass intersecting the fluid). Each one of these errors
compounds the next, creating a poor render. In the diagram above, every red dot is an incorrectly
evaluated surface. This method is good in terms of workflow, but poor in terms of the result.



An alternative method would be to model each object without overlaps. This means fitting the fluid to
the glass and carving holes in the fluid to make room for the ice cubes.


Unfortunately, this is not enough to correctly render this scene. In the diagram above, the Blue Dots
indicate where a ray will need to travel through coincident surfaces. To render this scene correctly,
those surfaces would need to have a special set of IOR values to correctly transition from one material
to another (Glass, water, and ice all have different IOR values). While this setup would result in a better
final render, the work required is both tedious and error prone. Additionally, precision errors may arise
from completely coincident surfaces. To correct for this, you may require a small gap between each
material, but this will not give physically correct results.

The best solution would be to simply overlap the objects, but somehow let the renderer know which
objects should have precedence over others. In some sense, it would be like asking Mantra to remove
the overlaps at render time.

This is the concept behind Nested Dielectrics. By providing a number which represents the priority of a
material, Mantra can track which object a ray has entered and simply ignore surfaces which have a
lower priority. To achieve this, the Principled Shader (as well as the Classic Shader) has a parameter
called Surface Priority.


Simply by setting this surface priority, Mantra would know both where and how a ray should transition
from one transparent material to the other, correctly calculating the IOR values as it goes. In the above
example, the glass would have the highest priority, followed by the ice, and finally the water.




Surface Priority
A numerical value which establishes an order of precedence for transparent materials in a scene. A value
of 0 indicates that the surface priority should be ignored. Increasing values indicate lower priority.
Consider this simple 2D example.


Notice that when all circles have equal priority, they simply overlap, creating multiple intersecting
surfaces. However, as the priority value is increased for the green and blue circles, the interior surfaces
are removed.
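One way to picture this rule is that the ray keeps track of the priorities of the media it is currently
inside, and a hit only counts as a real boundary when nothing it is inside has a higher priority (a lower
number). The sketch below is purely conceptual – it is not Mantra's implementation – and the priority
values follow the glass/ice/water example above.

# Conceptual sketch of nested-dielectric priority handling (not Mantra's code).
# Lower numbers mean higher priority; a value of 0 means priority is ignored.
def is_real_boundary(hit_priority, inside_priorities):
    # inside_priorities lists the priorities of media the ray is currently inside.
    if hit_priority == 0 or not inside_priorities:
        return True
    # Ignore the hit if the ray is already inside a higher-priority medium.
    return hit_priority <= min(inside_priorities)

# Ice (priority 2) surface hit while the ray is still inside the glass (priority 1):
print(is_real_boundary(2, [1]))   # False -> surface ignored, no IOR transition
# Glass (priority 1) surface hit while the ray is inside the water (priority 3):
print(is_real_boundary(1, [3]))   # True  -> real boundary, IOR transition occurs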

In 3D, on refractive surfaces with absorption, the effect of setting correct surface priority can be
dramatic.


In the above render, both the red sphere and the blue box have the same surface priority. You can
immediately see the problems in both the refraction and absorption. These problems arise because it is
unclear to Mantra which surface is “inside” another surface and therefore it cannot correctly calculate
the result.



In this case, the red sphere has a higher priority than the blue box (Remember, lower numbers mean
higher priority. Red Sphere = 1, Blue Box =2). The absorption and refraction on the parts of the sphere
inside the box is now correct and it appears as if there is no overlap between the two objects. Mantra
simply ignores whichever parts of the Blue Box have overlapped the Red Sphere. This setup would work
well for something like Ice Cubes floating in Water.


In the above example, the Surface Priority values have been switched (Blue Box =1, Red Sphere =2). This
has the effect of removing any parts of the red sphere which overlapped the blue box. This would be an
ideal setup for having water droplets resting on the surface of a glass.

DISPERSION

In optics, dispersion can refer to the separation of light into its component wavelengths as it travels
through a refractive material. A classic example of this effect is the spectrum produced by light travelling
through a dispersive prism.



When rendering with Mantra, it is possible to simulate this effect using the Dispersion parameter
included on the Principled Shader (As well as the Classic Shader).

Dispersion
When this parameter is set to a non-zero value, refracted rays are tagged with a single wavelength in
the visible spectrum. Each of these wavelengths modifies the underlying IOR, causing the rays to separate
as they travel through the refractive material. The larger this value, the larger the separation.
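A common way to model the wavelength-dependent IOR this describes is Cauchy's equation,
n(λ) = A + B/λ². Whether Mantra uses this exact model is not documented here, so the sketch below
should be read only as an illustration of why rays tagged with different wavelengths spread apart.

# Illustrative only: a wavelength-dependent IOR via Cauchy's equation n = A + B / wavelength^2.
def cauchy_ior(wavelength_nm, a=1.5, b=5000.0):
    return a + b / (wavelength_nm ** 2)

for wavelength in (400.0, 550.0, 700.0):     # violet, green, red
    print(wavelength, round(cauchy_ior(wavelength), 4))
# Shorter (violet) wavelengths receive a higher IOR and bend more, which is what
# separates the tagged rays and produces the rendered spectrum.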


Because each ray is tagged with a single wavelength, it is important to have enough samples to
represent the entire spectrum. Each ray is randomly assigned a wavelength; however, some attempt is
made to ensure that the visible spectrum is uniformly sampled per Pixel Sample.

In the following diagram, you can see how a single Pixel sample, with 3 secondary rays, will not be able
to adequately cover the visible spectrum. This will almost certainly result in noise in the final render as
each pixel sample returns a random distribution of wavelengths.


However, as the number of secondary rays are increased, we see that more of the visible spectrum is
represented in a single Pixel Sample. This will result in more consistency pixel to pixel and therefore less
noise in the render.



The following sequence of renders demonstrates how noise from Dispersion changes with the number of
secondary rays. On the left, with only 1 Pixel Sample and 1 Secondary Ray, each pixel essentially receives
a random color from the visible spectrum. As the number of secondary rays is increased to 5, 25 and
100, the noise caused by dispersion is completely resolved.



DISPERSION WORKFLOW CONSIDERATIONS


Removing Noise
Most often, noise in a render has the appearance of small changes in brightness from one pixel to its
neighbours. In the case of dispersion, this grainy look can be amplified by the introduction of color noise
alongside the changes in luminance.


In the above example, both are similarly under-sampled but the image on the right appears to exhibit
much more noise. This is because small changes in brightness are far less obvious than the dramatic changes
in color caused by insufficiently sampling the color spectrum.


If you look closely at the white areas of the sphere, you’ll see very similar noise patterns. However, the
image on the right appears dramatically noisier due to the chromatic nature of the noise. It can often be
necessary to increase the amount of sampling on objects with dispersion compared to similar objects
without dispersion. In the following example, you can see that significantly more sampling was required
to achieve similar amounts of noise between both objects.


In this cropped version, you can see that the white areas of the spheres now have very similar noise
levels and patterns. This is because enough of the visible spectrum has been sampled to converge back
to the color white. But, it required almost twice the number of samples to achieve this result.


Because of this difference in sampling, it may be useful to override Refraction Quality parameters on any
transparent object with dispersion enabled. This way you can be sure that you are sending extra
refraction samples only to the objects which require them.



THE SAMPLING TAB

The Sampling Tab can be found under the Rendering Tab on the Mantra Node.
The parameters found on this tab control the amount of sampling performed by Mantra while
generating an image. Adjusting these parameters will have a dramatic effect on the quality and
clarity of your images as well as the amount of time it takes to render them. Changing these
values should be done carefully to avoid over-sampling and extended render times.
For an explanation of how sampling works, see the “Sampling and Noise” section.

Pixel Samples
This parameter controls the number of primary rays Mantra will use to sample your scene per
pixel. The two numbers represent an arrangement of samples in the X and Y axis and are
generally the same number. However, for non-square pixels it may be necessary to use
different values in X and Y. Multiplying these two values together will give you the number of
primary rays per pixel.


Increasing Pixel Samples will result in a cleaner, higher quality image. However, since all other
sampling values are multiplied by the number of Pixel Samples, they should only be increased
when necessary. For more details on when to increase Pixel Samples, see the “Removing Noise”
section.
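As a quick sanity check of how these numbers add up, the sketch below multiplies the two Pixel Sample
values and scales by a hypothetical image resolution to show how many primary rays a frame costs
before any secondary rays are counted. It is illustrative arithmetic only.

# Illustrative arithmetic only: primary rays implied by a Pixel Samples setting.
def primary_rays(samples_x, samples_y, width=1920, height=1080):
    per_pixel = samples_x * samples_y
    return per_pixel, per_pixel * width * height

print(primary_rays(3, 3))   # 9 primary rays per pixel, about 18.7 million per HD frame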

Ray Variance Antialiasing


When enabled, this parameter will cause Mantra to use ray variance antialiasing when
determining the number of Secondary Rays to send for every Primary Ray.

This means that rather than using a specific number of rays, Mantra will first send out a small
number of rays and use this sample set to evaluate the Variance. Depending on the amount of
variance, Mantra will continue to send more rays up to the Max Ray Samples value. Ray Variance
Antialiasing is useful for optimizing your render by sending more rays only in the areas where they are
needed.
In cases where the minimum number of rays to remove noise is equal to the maximum number
of rays, you may save a small amount of render time by disabling Ray Variance Antialiasing.
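As a rough mental model of this behaviour, the loop below fires a minimum batch of samples, then keeps
adding samples while their measured variance stays above a threshold, stopping at the maximum. It is a
conceptual illustration only, not Mantra's sampler, and the "shader" here is a stand-in function.

import random
from statistics import pvariance

# Conceptual illustration of ray variance antialiasing (not Mantra's sampler).
def variance_driven_samples(shade, min_samples, max_samples, noise_level):
    values = [shade() for _ in range(min_samples)]
    while len(values) < max_samples and pvariance(values) > noise_level:
        values.append(shade())
    return sum(values) / len(values), len(values)

noisy_shader = lambda: 0.5 + random.uniform(-0.2, 0.2)   # stand-in for a real shading call
print(variance_driven_samples(noisy_shader, min_samples=2, max_samples=9, noise_level=0.001))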

Min Ray Samples


This value is the minimum number of secondary rays to use when generating an image. When
Ray Variance Antialiasing is disabled, this number represents the number of secondary rays to
send regardless of the noise level.
Remember, this number is multiplied by the current number of Pixel Samples.

Max Ray Samples


When Ray Variance Antialiasing is enabled, this parameter represents the maximum number of
secondary rays allowed even if the Noise Level is never reached. This parameter, along with
Min Ray Samples, essentially allows you to create a range of acceptable sampling for your
image. Carefully controlling the total number of potential rays is the best way to optimize your
renders.

Remember, this number is multiplied by the current number of Pixel Samples.


For more details on when to increase Max Ray Samples, see the “Removing Noise” section.

Noise Level
This parameter represents a threshold in the amount of variance allowed before Mantra will
send more secondary rays. Variance essentially represents how “spread out” the values in a set
of samples are. For instance, a set of samples that were all the same would have a variance of
0. It is generally a good idea to keep this value as high as possible so that rays are sent only into
those areas where an unacceptable amount of noise is present.
Adding “direct samples” and “indirect samples” image planes can help you track how many
samples are being sent and to which parts of the image. For more information about sampling,
see the “Sampling and Noise” section.
If you find that certain objects in your scene require substantially more samples than other
parts of your image and you are unable to “target” those objects using the Noise Level
parameter, it may be a better idea to add per-object sampling parameters to the problem
areas. See the “Removing Noise” section for more details.

Diffuse Quality
This parameter controls the quality of Indirect Diffuse sampling. ( For more information
regarding the difference between Direct and Indirect rays, see the section on Sampling and
Noise )
Often, indirect sources of light will be a significant cause of noise in your renders. This quality
slider allows you to adjust the amount of secondary rays sent to help resolve this type of noise.
Keep in mind that indirect sources of light can be the surfaces of other objects in your scene as
well as light scattered inside of a volume.
Essentially, the Diffuse Quality parameter acts as a multiplier on the “Max Ray Samples” while
also acting as a divisor for “Noise Level”. For instance, let's say you have set “Max Ray Samples”
to 8 and your Noise Level to 0.1. If you then set your “Diffuse Quality” parameter to 2, Mantra
will send up to 16 secondary ray samples based on a Noise Level of 0.05. It is important to
remember that these numbers apply only to the indirect samples; your original values will be
used for all direct sampling.
To find out how much noise is present in your indirect diffuse component, it can be useful to
add the “Indirect Lighting ( per-component )” image plane in the “Extra Image Planes” tab. This
will allow you to investigate each indirect component in isolation.
If you find that increasing the “Diffuse Quality” does not improve the amount of noise in your
Indirect Diffuse Component, it may be because your Noise Level is set too low and the variance
threshold is being met before more indirect samples can be sent. Try slowly lowering the Noise
Level amount until you begin to see a noticeable difference in your indirect noise.
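Because the same multiplier/divisor relationship applies to the SSS, Reflection, and Refraction Quality
parameters described below, the arithmetic from the example above is worked through once here. The
helper is hypothetical and only illustrates how the numbers combine.

# Illustrative arithmetic only: how a per-component Quality slider scales sampling.
def effective_sampling(max_ray_samples, noise_level, quality, pixel_samples=(3, 3)):
    per_primary_max = max_ray_samples * quality          # e.g. 8 * 2 = 16
    per_primary_noise = noise_level / quality             # e.g. 0.1 / 2 = 0.05
    primaries = pixel_samples[0] * pixel_samples[1]       # secondary sampling is multiplied by this
    return per_primary_max, per_primary_noise, per_primary_max * primaries

print(effective_sampling(8, 0.1, 2))   # (16, 0.05, 144) with 3x3 Pixel Samples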

SSS Quality
This parameter controls the quality of Indirect sampling sent to materials with Sub Surface
Scattering enabled.
Materials with Sub Surface Scattering enabled can exhibit a large amount of noise, especially
when the Sub Surface Distance is set to a high value. This quality slider allows you to adjust the
amount of secondary rays sent to help resolve this type of noise.
Essentially, the SSS Quality parameter acts as a multiplier on the “Max Ray Samples” while also
acting as a divisor for “Noise Level”. For instance, let's say you have set “Max Ray Samples” to 8
and your Noise Level to 0.1. If you then set your “SSS Quality” parameter to 2, Mantra will
send up to 16 secondary ray samples based on a Noise Level of 0.05. It is important to
remember that these numbers apply only to the indirect samples; your original values will be
used for all direct sampling.
To find out how much noise is present in your Indirect SSS component, it can be useful to add
the “Indirect Lighting ( per-component )” image plane in the “Extra Image Planes” tab. This will
allow you to investigate each indirect component in isolation.
If you find that increasing the “SSS Quality” does not improve the amount of noise in your
Indirect SSS Component, it may be because your Noise Level is set too low and the variance
threshold is being met before more indirect samples can be sent. Try slowly lowering the Noise
Level amount until you begin to see a noticeable difference in your indirect noise.

Reflection Quality
This parameter controls the quality of Indirect Reflection sampling. ( For more information
regarding the difference between Direct and Indirect rays, see the section on Sampling and
Noise )
Indirect Reflections, which are reflections of other objects in your scene, can sometimes be the
source of noise in your scene. This quality slider allows you to adjust the amount of secondary
rays sent to help resolve this type of noise.
Essentially, the Reflection Quality parameter acts as a multiplier on the “Max Ray Samples”
while also acting as a divisor for “Noise Level”. For instance, let's say you have set “Max Ray
Samples” to 8 and your Noise Level to 0.1. If you then set your “Reflection Quality” parameter
to 2, Mantra will send up to 16 secondary ray samples based on a Noise Level of 0.05. It is
important to remember that these numbers apply only to the indirect samples; your original
values will be used for all direct sampling.
To find out how much noise is present in your indirect reflection component, it can be useful to
add the “Indirect Lighting ( per-component )” image plane in the “Extra Image Planes” tab. This
will allow you to investigate each indirect component in isolation.
If you find that increasing the “Reflection Quality” does not improve the amount of noise in
your Indirect Reflection Component, it may be because your Noise Level is set too low and the
variance threshold is being met before more indirect samples can be sent. Try slowly lowering
the Noise Level amount until you begin to see a noticeable difference in your indirect noise.


Refraction Quality
This parameter controls the quality of Indirect Refraction sampling. ( For more information
regarding the difference between Direct and Indirect rays, see the section on Sampling and
Noise )
Indirect Refractions, which are the refracted images of other objects in your scene, can
sometimes be the source of noise in your scene, especially when using blurry refractions. This
quality slider allows you to adjust the amount of secondary rays sent to help resolve this type of
noise.
Essentially, the Refraction Quality parameter acts as a multiplier on the “Max Ray Samples”
while also acting as a divisor for “Noise Level”. For instance, let's say you have set “Max Ray
Samples” to 8 and your Noise Level to 0.1. If you then set your “Refraction Quality” parameter
to 2, Mantra will send up to 16 secondary ray samples based on a Noise Level of 0.05. It is
important to remember that these numbers apply only to the indirect samples; your original
values will be used for all direct sampling.
To find out how much noise is present in your indirect refraction component, it can be useful to
add the “Indirect Lighting ( per-component )” image plane in the “Extra Image Planes” tab. This
will allow you to investigate each indirect component in isolation.
If you find that increasing the “Refraction Quality” does not improve the amount of noise in
your Indirect Refraction Component, it may be because your Noise Level is set too low and the
variance threshold is being met before more indirect samples can be sent. Try slowly lowering
the Noise Level amount until you begin to see a noticeable difference in your indirect noise.



Volume Quality
This parameter controls how finely or coarsely a volume is sampled as a ray travels through it.
Volumetric objects are made up of 3D structures called voxels; the value of this parameter
represents the number of voxels a ray will travel through before performing another sample.


The default value is 0.25, which means that one out of every four voxels will be sampled. A
value of 1 would mean that all voxels are sampled and a value of 2 would mean that all voxels
are sampled twice. This means that the volume quality value behaves in a similar way to pixel
samples, acting as a multiplier on the total number of samples for volumetric objects.
For volumes that aren’t voxel based, like CVEX procedural volumes, Mantra will divide the
bounding box of the volume into roughly 100 “virtual” voxels. In these cases, setting the
Volume Quality correctly is essential to maintaining the correct level of detail.
Keep in mind that increasing the volume quality can dramatically increase render times, so it
should only be adjusted when necessary. Also, while increasing the default from 0.25 can
reduce volumetric noise, increasing the value beyond 1 will rarely produce a similar improvement.
For more information about volume sampling, see the “Sampling and Noise” section.
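The description above implies a simple relationship between Volume Quality and the ray-march step
size: the step is one voxel divided by the quality value. The sketch below only illustrates that
relationship; the exact stepping scheme Mantra uses is not spelled out here.

# Illustrative only: samples taken along a ray for a given Volume Quality.
def volume_samples(path_length, voxel_size, volume_quality):
    step_size = voxel_size / volume_quality     # quality 0.25 -> one sample every 4 voxels
    return max(1, round(path_length / step_size))

print(volume_samples(10.0, 0.1, 0.25))   # 25 samples across a 10-unit path of 0.1-unit voxels
print(volume_samples(10.0, 0.1, 1.0))    # 100 samples: every voxel
print(volume_samples(10.0, 0.1, 2.0))    # 200 samples: every voxel sampled twice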

Stochastic Transparency
Enabling this parameter will activate a raytracing optimization for translucent objects
(volumes, sprites, transparent surfaces). Essentially, while the accumulation of density, or
opacity, will occur at every step, the shading will occur randomly along the ray. This means less
sampling is performed overall, speeding up renders. This parameter defaults to “on” because it
is usually much faster than performing shading samples at every step, without a significant
loss in visual quality.
For more information about stochastic transparency and volume sampling in general see
“Sampling and Noise”.

Stochastic Samples


This parameter controls the number of transparent samples to be shaded as a ray travels
through translucent objects. Increasing this value will result in less noise in translucent objects
and is generally less costly than increasing Pixel Samples, Volume Quality, or Min and Max Ray
Samples. However, Stochastic Samples will not have any effect on noise from indirect sources.

Sample Lock
Sampling generally occurs in random patterns which change on every frame of an animation.
This can cause a distracting “buzz” when there is a significant amount of noise in your images
which can make evaluation of other aspects of the scene difficult. Enabling this parameter will
“lock” the sampling patterns so that the noise remains the same on every frame.
Also, in some cases where the final rendered images will be sent through a post-render de-
noise process, it can be useful to have the noise remain constant frame to frame. Consistent
sampling patterns can help when analyzing the noise.
It defaults to “off” because it is generally unacceptable to have a locked sampling pattern for
final sequences.

Random Seed
Adjusting this parameter will cause the pixel sampling patterns used by Mantra to be
regenerated in different configurations. By default, the patterns change on every frame, so
manually changing this value is not necessary.

Allow Image Motion Blur


This parameter is related to the motion blur parameters which are available only when Motion
Blur is enabled. Disabling this option will cause motion blur to be removed from the final
rendered image; however, the blurred position will still be calculated, allowing for custom
motion vector image planes to be created. For more information, see the section on “Motion
Blur”.

Adaptive Sampling
This toggle will enable Adaptive Sampling.
Adaptive sampling allows Mantra to redistribute its sampling pattern in order to address noise
in areas of the scene where radiance is changing quickly.
Essentially, Mantra will perform some subset of the Max Ray Samples and compare their
intensities. In areas where this group of samples shows high contrast, the remaining samples will
be redistributed to have a higher chance of resolving the noise. It behaves in a similar fashion to
the Noise Level parameter; however, rather than dictating the number of samples to be sent
into the scene, it dictates how those samples are distributed.


Adaptive Sampling Threshold
This parameter controls how sensitive the adaptive sampling is to changes in radiance. Larger
values mean that the current set of samples being examined would need to have a large
difference in intensity before the redistribution of samples would occur. Smaller values mean
that the samples would only need to vary a small amount before triggering the redistribution of
samples.

In general, the default value of 0.1 will give reasonable results for most scenes.


THE LIMITS TAB


The Limits Tab can be found under the Rendering Tab on the Mantra Node.


The parameters found on this tab control the number of times a ray associated with a specific
component is allowed to propagate through a scene. Setting these limits has influence over the
final look of your scene as well as the amount of time it will take to render your image without
noise.

Reflect Limit


This parameter controls the number of times a ray can be reflected in your scene.


The above example shows a classic “Hall of Mirrors” scenario with the subject placed between
two mirrors. This effectively creates an infinite series of reflections.


From this camera angle the reflection limits are obvious and have a large impact on the accuracy
of the final image. However, in most cases the reflection limit will be subtler, allowing you to
reduce the number of reflections in your scene and optimize the time it takes to render them.


Remember that the first time a light source is reflected in an object, it is considered a direct
reflection. Therefore, even with Reflect Limit set to 0, you will still see specular reflections of
light sources.


To control what happens when the maximum number of reflections is exceeded, see the At Ray
Limit parameter on the Limits tab.

Refract Limit


This parameter controls the number of times a ray can be refracted in your scene.


The above example shows a simple scene with ten grids all in a row. By applying a refractive
shader, we will be able to see through the grids to an image of a sunset in the background.


From this camera angle, for the image to be accurate, the refraction limit must match the
number of grids that are in the scene. However, most scenes will not have this number of
refractive objects all in a row and so it is possible to reduce the refract limit without affecting the
final image while also reducing the time it takes to render them.


Keep in mind that this Refract Limit refers to the number of surfaces that the ray must travel
through, not the number of objects.


Remember that the first time a light source is refracted through a surface, it is considered a
direct refraction. Therefore, even with Refract Limit set to 0, you will see refractions of Light
Sources. However, since most objects in your scene will have at least two surfaces between it
and the light source, direct refractions are often not evident in your final render.


To control what happens when the maximum number of refractions is exceeded, see the At Ray
Limit parameter on the Limits tab.

Diffuse Limit


This parameter controls the number of times diffuse rays can propagate through your scene.
Unlike the Reflect and Refract Limits, this parameter will increase the overall amount of light in
your scene and is the main contributor to global illumination. With this parameter set above zero,
diffuse surfaces will accumulate light from other objects in addition to direct light sources.


In this example, increasing the Diffuse Limit has a dramatic effect on the appearance of the final
image. To replicate realistic lighting conditions, it is often necessary to increase the Diffuse Limit.
However, since the amount of light contribution usually decreases with each diffuse bounce,
increasing the Diffuse Limit beyond 4 does little to improve the visual fidelity of a scene.
Additionally, increasing the Diffuse Limit can dramatically increase noise levels and render times.




SSS Limit



This parameter controls the number of times light generated from materials with Sub-Surface
Scattering will be included in the evaluation of indirect light. It is intrinsically linked to the Diffuse
Limit, since the propagation of indirect diffuse rays is what allows the evaluation of new SSS
samples.


In this example, increasing the SSS limit allows the grey SSS material to receive indirect
illumination from the orange SSS material. You may also notice a relationship between Diffuse
Limit and SSS Limit – essentially, to match the contribution of indirect light, you will usually need
one extra SSS Sample.
Keep in mind that materials with SSS enabled absorb and scatter light, so the light contribution
to other SSS objects will often be quite small. Increasing SSS limits beyond 2 will do little to
improve the realism of a final render but may require dramatically more SSS samples. In fact, in
many cases even an SSS Limit of 1 ( essentially limiting the contribution to the object itself ) will
be sufficient to create highly realistic renders.



Volume Limit

This parameter controls the number of times a volumetric ray can propagate through a scene. It
functions in a similar fashion to the Diffuse Limit parameter.
Increasing the Volume Limit parameter will result in much more realistic volumetric effects. This
is especially noticeable in situations where only part of a volume is receiving direct lighting. Also,
in order for a volumetric object to receive indirect light from other objects, the Volume Limit
parameter must be set above 0.


With the Volume Limit set to values above zero, the fog volume takes on the characteristic light
scattering you would expect from light travelling through a volume. However, as with the Diffuse
Limit, the light contribution generally decreases with each bounced ray and therefore using
values above 4 does not necessarily result in a noticeably more realistic image.
Also, increasing the value of this parameter can dramatically increase the amount of time spent
rendering volumetric images.

Opacity Limit


As a ray travels through many transparent surfaces, or through a volume, it will calculate the
cumulative amount of opacity. When this value exceeds the Opacity Limit, Mantra will assume all
surfaces beyond this point are opaque.
This parameter behaves in a similar fashion to both the Reflect and Refract Limit but operates on
accumulated values rather than simply the number of surfaces the ray has passed through.
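The sketch below illustrates the accumulation described above: opacity builds up surface by surface, and
once it crosses the limit, everything further along the ray is treated as opaque and skipped. The limit
value used here is arbitrary for illustration; it is not Mantra's default.

# Conceptual sketch of the Opacity Limit (not Mantra's code).
def surfaces_actually_shaded(surface_opacities, opacity_limit=0.98):
    accumulated, shaded = 0.0, 0
    for opacity in surface_opacities:
        shaded += 1
        accumulated = accumulated + (1.0 - accumulated) * opacity
        if accumulated >= opacity_limit:
            break                     # everything beyond this point is treated as opaque
    return shaded

print(surfaces_actually_shaded([0.1] * 10))   # 10: all ten low-opacity grids are shaded
print(surfaces_actually_shaded([0.7] * 5))    # 4: the limit is hit before the last surface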


In the above example, each grid has a shader attached with an opacity value of 0.1. It is
important to remember that in this case “transparent” refers to objects whose opacity is less
than 100% and does not include refractive objects which can appear transparent.


In the above example, the sphere on the left has an opacity of 0.5, with no refraction. The sphere
on the right has an Opacity of 1 with refraction enabled. You can see that the Opacity Limit has
no effect on the amount of refraction, only affecting objects whose opacity value is less than 1.
While reducing the Opacity Limit may save a small amount of render time (1–5%), using low
values may result in banding and other artifacts when your camera is moving or an animation is
evolving. This can be especially noticeable in smoke simulations where opacity values are
constantly changing.


The default value for Opacity Limit is quite aggressive, changing this value should be done
carefully and the results inspected across a range of frames in an animated sequence.

Color Limit


This parameter controls the maximum value a shading sample is allowed to return from indirect
sources.
Physically Based Rendering can cause “spikes” in color values when extremely bright indirect
light sources are under sampled. This results in “fireflies” in the final rendered image which can
be very difficult to remove without very high sampling rates.


You can see in the example above that even at 12x12 pixel samples, the “fireflies” still remain.
Adjusting the Min and Max Ray Samples settings for indirect rays could remove this noise, but at the cost of
longer render times.
Decreasing the Color Limit parameter clamps the color values in these indirect samples and can
help to avoid these “spikes”.


Reducing the Color Limit can be an effective way of removing “fireflies” without increasing
sampling rates. However, clamping the values in indirect lighting can result in an overall
reduction in the amount of light in your scene. This is especially evident in scenes which are
mostly illuminated by indirect light.

Color Limit Decay


This parameter causes the Color Limit to decay as rays propagate through the scene.
Since the Color Limit parameter acts as clamp on the indirect values in your scene, it can
occasionally cause indirect reflections to appear too dim. To disguise this effect, the Color Limit
Decay can decrease the color limit after each bounce. This way the decrease in light only
becomes apparent after several bounces where the effect is less noticeable.


In the above example (color corrected with a brightness value of 0.1) you can see that by
reducing the Color Limit value, all secondary values are clamped to the same amount. However,
by setting the color limit decay without adjusting the color limit, the brightness of each reflection
is reduced after each bounce producing a more subtle reduction in intensity.
Setting the Color Limit Decay value to 0.9 will cause the color limit to be 90% of its original value
after one bounce, 81% after two bounces, etc. The Color Limit will never decay below a value of
1, so this setting will not affect colors in the 0-1 range.
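Following the numbers in the paragraph above, the decayed limit after a given number of bounces can be
computed as shown below. This is an illustrative helper only, not a Mantra API.

# Illustrative arithmetic for Color Limit Decay, per the description above.
def decayed_color_limit(color_limit, decay, bounces):
    # The limit shrinks by the decay factor each bounce, but never drops below 1.0.
    return max(1.0, color_limit * decay ** bounces)

for bounce in range(4):
    print(bounce, decayed_color_limit(10.0, 0.9, bounce))   # 10.0, 9.0, 8.1, 7.29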


At Ray Limit
This parameter allows you to control how Mantra deals with rays that reach the ray tracing limit
(For example the Reflect Limit or Refract Limit).




In the above example, the Refract Limit has been set to 2.
Setting the “At Ray Limit” parameter to “Use Black Background” will simply render black once
the limits are reached. This is the default setting and will work in most scenes since the Reflect or
Refract Limit is unlikely to be reached. However, in scenes where the limit is noticeable in the
rendered image, the black color can be quite noticeable and stand out against the colors in the
scene.
In this case, it is advisable to increase the limit until the effect is avoided or use the second
option for this parameter, “Use Direct Lighting as Background Color”. This will replace the black
color with whichever color or image is used in your direct lighting, for instance an Environment
Light.
For more information about how the settings on an Environment Light affect this parameter, see
the Lighting section.



SAMPLING AND NOISE


When generating an image, Mantra must determine a color value for each pixel by examining
the scene behind the image plane. Mantra achieves this by sending out a number of rays from
the camera’s position until they hit an object in the scene. Every time a ray hits an object, it will
return some piece of information about the object (Its color, for instance). This process can
broadly be described as “Sampling” the scene.


Sampling once per pixel, however, can cause “aliasing” where information is lost between
samples. This is particularly evident in scenes with fine detail and high-frequency variation in shape and color.


Increasing the number of samples per pixel gives an “anti-aliased” image which better represents
the actual scene.



The samples described above are known as “Primary” rays (or pixel samples). They
determine the overall quality of the image being rendered, especially with regard to the shape
and accuracy of the objects in the scene.
For other aspects of an image, like lighting, reflections and refractions, more rays must be cast
into the scene, originating from the hit location of the primary rays. For each primary ray,
Mantra will fire at least one “Secondary” ray. These secondary rays can be divided into two types
- Direct and Indirect.

Direct and Indirect Rays


Direct Rays can be described as rays which deal with Lights. This generally means that the rays
travel from some position in the scene toward the various light sources. These rays determine if
a surface is in shadow, and if not, lighting information can be calculated.


The same “aliasing” problems described previously can exist with these direct rays, resulting in a
noisy image. You will usually find noise from direct sources showing up when rendering specular
highlights or the soft edges of shadows cast from area lights. In these cases, it may be necessary
to send more direct rays.


When evaluating the effect of sending more Direct Rays in your renders, it can sometimes be
challenging to separate one source of noise from another. Adding the “Direct Lighting ( per
component )” image plane will allow you to view the direct contribution of each component
separately.


When attempting to optimize the number of direct rays in your scene, the “Direct Samples”
image plane can be added. This plane will show you the number of direct rays used throughout
your image displayed as intensity.


Indirect Rays can be described as rays which deal with objects and their surface properties. This
generally means that rays travel from some position in the scene in directions determined by the
shader attached to the object. Refraction rays will travel “through” objects, Reflection Rays will
bounce, and Diffuse Rays will scatter in a random direction within a hemispherical distribution.


With indirect rays, “aliasing” can be much worse than with Direct rays and is usually the greatest
cause of noise in a render. Generally speaking, small, very bright features will cause the most
noise in indirect samples – soft reflections of very bright specular highlights on other objects, for
example. In these cases, it may be necessary to send more indirect rays.



When evaluating the effect of sending more Indirect Rays in your renders, it can sometimes be
challenging to separate one source of noise from another. Adding the “Indirect Lighting ( per
component )” image plane will allow you to view the indirect contribution of each component
separately.


When attempting to optimize the number of indirect rays in your scene, the “Indirect Samples”
image plane can be added. This plane will show you the number of indirect rays used throughout
your image.

VOLUMES
Sampling volumetric objects requires a different approach than sampling surfaces. While direct
rays are still used, they must sample the volume multiple times while travelling through the
volume. Indirect rays behave in a similar fashion, sent multiple times as the ray “steps” through
the volumetric object. This means that sampling volumes is a much more expensive process than
sampling a surface.


As a ray travels through a volume, it moves forward at a rate determined by the Volume Quality.
At each step, it evaluates the shader and accumulates the density of the volume. Because these
density values can vary drastically throughout the volume, nearby rays can calculate substantially
different values, introducing noise into the render. In these cases, it may be necessary to send
out more direct and indirect rays or to take smaller steps through the volume.


Even at low sampling rates, it can be costly to render clean images of volumetric data. This is
because the shading is run for every step through the volume. However, Mantra has a variety of
ways to optimize volume rendering which can decrease render times without sacrificing detail.


One optimization, known as Stochastic Transparency, decouples the accumulation of density
values from the shading samples. This means that the amount of sampling can be greatly
reduced, as variations in lighting information are less obvious than variations in density.


When evaluating the effect of sending more Direct and Indirect rays in your renders, it can
sometimes be challenging to separate one source of noise from another. Adding the “Direct
Lighting ( per component )” and “Indirect Lighting ( per component )” image planes will allow you to
view the direct and indirect volume contributions separately.




REMOVING NOISE
As described in the Sampling section, under-sampling is almost always the cause of noise in your
renders. Simply increasing the overall amount of sampling will reduce the amount of noise, but it will
also cause many parts of your image to be over-sampled and your render times to increase. Targeting
the various types of rays to the correct part of your image is critical for optimization; sending more of
the wrong kind of ray will not increase the quality of your render. The goal, when setting your sampling
parameters, is to balance Speed with Quality. For more information about the specific parameters
described below, see The Sampling Tab and the Extra Image Planes Tab.

In general, when attempting to remove the noise in your render, it is good practice to start by adding
the following image planes:

Direct Lighting ( per component )

Indirect Lighting ( per component )

Direct Samples

Indirect Samples

These will allow you to analyze different parts of your scene one at a time.

The following render is an example of a scene with multiple material types and motion blur. All sampling
values on the Mantra Node are set to the defaults except for Pixel Samples, which are 1x1. On the limits
tab, Diffuse Limit has been set to 2, Volume Limit has been set to 2 and SSS limit has been set to 1.




For each example below, we will adjust only the parameters that are mentioned in the descriptions
accompanying each noise type.

Motion Blur
When “Allow Motion Blur” is enabled on the Mantra node, fast moving objects can cause your image to
become noisy. This noise is essentially a type of aliasing that occurs when an object must be sampled
across time as well as space. See the chapter on Motion Blur for a more in-depth explanation of how
Mantra samples objects in motion and how certain objects may be optimized for heavily motion blurred
scenes.

Increasing Pixel Samples, also described as Primary Rays, is the only way to remove this type of noise.




Increasing Pixel Samples will act as a multiplier for all other types of rays (see Sampling Tab). In the
example above, you can see that increasing the pixel samples has also removed most of the other types
of noise in this scene. For this reason, it is a good idea to address motion blur noise as a first step, as it may
solve other types of noise in your scene at the same time.

To identify this type of noise, it can be useful to check the Alpha Channel for noise at the ends of
objects. If overlapping objects make this impossible, turn off “Allow Motion Blur” and check the noise
levels versus the non-motion blurred scene.


Depth of Field
When “Enable Depth of Field” is checked on the Mantra Node, objects which are distant from the
camera’s “Focus Distance” can become noisy. This is especially evident in bright highlights and the edges
of objects.

Increasing Pixel Samples is the only way to remove this type of noise.




As with Motion Blur, removing noise from images with Depth of Field may have the side effect of
removing other types of noise as well. Consider removing this type of noise first before attempting to
remove noise from other sources. However, always check the “in focus” areas of your image for any of
the other noise types as some extra attention may be required in these areas.

To identify this type of noise, it can be useful to check the Alpha Channel for noise at the edges of
objects or along the motion path. If overlapping objects make this impossible, turn off “Enable Depth of
Field” and check the amount of noise in the image without Depth of Field blurring.

Edge Aliasing
Without enough Primary Rays, the edges of objects can appear jagged and rough. This can be especially
evident in high-contrast areas or within high-frequency patterns.

Increasing Pixel Samples is the only way to remove this type of noise.




In the above example, notice how the edges of the sphere and plane appear jagged in the image on the
left.

To identify this type of noise, it can be useful to check the Alpha Channel for noise at the edges of
objects or along the motion path.


For many scenes, setting pixel samples to 3x3 will be sufficient to remove this type of noise. For images
with high frequency patterns generated by a shader, it may be necessary to increase these values to get
a fully anti-aliased image. If possible, it may be more efficient to handle filtering in the shader, rather
than using the brute force approach of increasing pixel samples.



DIRECT ILLUMINATION
Direct Reflections
Direct Reflections refer to the reflection of light sources directly from the surface of an object. These
reflections can exhibit a speckled noise pattern especially in materials with small amounts of roughness
in combination with Area Lights or Environment Lights.

The best way to remove this type of noise is to increase the Sampling amount on the Light which is
causing the noise. Increasing Pixel Samples will also help remove the noise, but will cause an increase in
all other types of rays. It can often be a good idea to start with the default Pixel Sample value of 3x3
because it will also remove any distracting Edge Aliasing from your image.


Increasing light samples will act like a multiplier on the number of Direct Rays in your scene so it is not a
good idea to simply increase samples to extremely high values for all lights. Increasing Direct rays ( Min
and Max Ray Samples ) will help remove this type of noise. This means that you must balance the need
to clean up noise from a specific light, against cleaning up Direct Sources of noise throughout your
image.

To identify this type of noise, enable the “Direct Reflect” image layer; this will allow you to examine the
contributions to Direct Reflections without interference from other sources of noise in your scene.




In the above images, the Direct Reflection noise is much clearer since it is no longer mixed in with all other
sources of noise.

For complex scenes with many lights, it can be useful to export the “Direct Reflect” layer using the “Per
Light” option. This will allow you to isolate the specific lights that are causing noise in your scene,
allowing you to increase sampling only on the offending light sources.



Direct Refractions
Direct Refractions are caused by the refraction of light sources through a single surface (a grid, for
instance). These refractions can exhibit a speckled noise pattern especially in materials with small
amounts of roughness in combination with Area Lights or Environment Lights.

(Remember that any refractions through more than one surface will be considered an Indirect
Refraction.)

The best way to remove this type of noise is to increase the Sampling amount on the Light which is
causing the noise. Increasing Pixel samples will also help remove the noise, but will cause an increase in
all other types of rays, causing areas of the image without noise to become over-sampled.


In the above example, increasing the Pixel Samples to 3x3 removes all the Direct Refraction noise and so
the Sampling Quality on the Environment Light did not need to be adjusted. This is another good reason
to approach the removal of noise in stages. For this case, removing Edge Aliasing has effectively
resolved the Direct Refraction noise as well.

Increasing light samples will act like a multiplier on the number of Direct Rays in your scene so it is not a
good idea to simply increase samples to extremely high values for all lights. Increasing Direct rays (
Min and Max Ray Samples ) will also help clean up this type of noise. This means that you must balance
the need to clean up noise from a specific light, against cleaning up Direct Sources of noise throughout
your image.

To identify this type of noise, enable the “Direct Refract” image layer; this will allow you to examine the
contributions to Direct Refractions without interference from other sources of noise in your scene.




For complex scenes with many lights, it can be useful to export the “Direct Refract” layer using the “Per
Light” option. This will allow you to isolate the specific lights that are causing noise in your scene,
allowing you to increase sampling only on the offending light sources.

Direct Shadows
Direct Shadows, which occur when a point in your scene does not have a direct path to a light source,
can exhibit a speckled or rough noise pattern. This is especially evident in soft shadows cast from large
area lights.

The best way to remove this type of noise is to increase the Sampling amount on the light which is
causing the noise. Increasing Pixel samples will also help remove the noise, but will cause an increase in
all other types of rays, causing areas of the image without noise to become over-sampled.



Increasing light samples will act like a multiplier on the number of direct rays in your scene so it is not a
good idea to simply increase samples to extremely high values for all lights. Increasing Direct rays (
Min and Max Ray Samples ) will also help clean up this type of noise. This means that you must balance
the need to clean up noise from a specific light, against cleaning up Direct Sources of noise throughout
your image.

To identify this type of noise, enable the “Direct Diffuse” image layer; this will allow you to examine the
shadows caused by direct lighting without interference from indirect sources of shadow.


In the above example, identifying which light is responsible for the noise is difficult, especially since
environment lights have the effect of “filling in” shadows. It can be useful in these cases to export the
“Direct Diffuse” layer using the “Per Light” option. This will allow you to isolate the specific lights that are
causing noise in your scene, allowing you to increase sampling only on the offending light sources.




In the examples above, it is much more obvious which light is causing the various types of noise. This is
especially evident on the edges of the shadows cast by the area light. In this case, you can see that the
Area light required fewer samples to remove the noise than the Environment light. In complex scenes,
this kind of close examination of per-light noise can help prevent significantly over-sampling your scene.



INDIRECT ILLUMINATION
Indirect Diffuse


Indirect Diffuse, which is the light contribution from other objects in a scene, can be a significant source
of noise. This can be especially evident in scenes with physically accurate light sources which are also
very near other objects (light sconces or inset lights, for instance) and only contribute a small amount
to direct lighting.

The best way to remove this type of noise is to increase the number of indirect samples that are being
sent. You can achieve this by adjusting the Diffuse Quality parameter on the Mantra node. Increasing
Pixel samples will also help remove the noise, but will cause an increase in all other types of rays,
causing areas of the image without noise to become over-sampled.
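As a rough sketch, the following Python snippet adjusts the Diffuse Quality parameter on a Mantra node; the same pattern applies to the Reflection, Refraction, and SSS Quality parameters discussed in the sections below. The node path and the internal parameter names shown are assumptions to verify against your own Mantra node.

    import hou

    # Hypothetical ROP path; point this at your own Mantra node.
    mantra = hou.node("/out/mantra1")

    # Raise only the indirect diffuse sampling, leaving Pixel Samples alone so
    # the rest of the image is not over-sampled.
    # NOTE: the internal names here are assumed H16 property names; confirm
    # them by hovering over the parameters on the Sampling tab.
    mantra.parm("vm_diffusequality").set(2.0)

    # Equivalent knobs for the other indirect components covered below:
    #   vm_reflectquality - Reflection Quality
    #   vm_refractquality - Refraction Quality
    #   vm_sssquality     - SSS Quality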

To identify this type of noise, enable the “Indirect Diffuse” image layer. This will allow you to examine the light contributions to this layer without interference from other types of noise.



Keep in mind that you do not have to completely remove noise from this component to have a clean image; indirect noise is often imperceptible once it has been combined with direct lighting information.
Always refer to the Combined Color image plane to see how your sampling is affecting the fidelity of the
final image.

Indirect Reflections


Indirect Reflections, which are the reflections of other objects, can be responsible for much of the noise
in your scene. This can be particularly evident in scenes with very bright glossy reflections in
combination with other objects with rough reflections.

The best way to remove this type of noise is to increase the number of indirect samples that are being
sent. You can achieve this by adjusting the Reflection Quality parameter on the Mantra node. Increasing
Pixel samples will also help remove the noise, but will cause an increase in all other types of rays,
causing areas of the image without noise to become over-sampled.

To identify this type of noise, enable the “Indirect Reflect” image layer. This will allow you to examine the amount of noise caused by indirect reflections without interference from other types of noise.



Keep in mind that you do not have to completely remove noise from this component to have a clean image; indirect noise is often imperceptible once it has been combined with direct lighting information. However, unlike indirect diffuse noise, indirect reflections can be responsible for most of the color of a final pixel (an object with a mirror-like finish, for example). Always refer to the Combined Color image plane to see how your sampling is affecting the fidelity of the final image.

Indirect Refractions


Indirect Refractions, which are the refractions of other objects and surfaces in your scene, can be a significant source of noise. This is especially true when rendering refractive objects with a
high roughness value.

The best way to remove this type of noise is to increase the number of indirect samples that are being
sent. You can achieve this by adjusting the Refraction Quality parameter on the Mantra node.
Increasing Pixel samples will also help remove the noise, but will cause an increase in all other types of
rays, causing areas of the image without noise to become over-sampled.

To identify this type of noise, enable the “Indirect Refract” image layer. This will allow you to examine the amount of noise caused by indirect refractions without interference from other types of noise.



Keep in mind that you do not have to completely remove noise from this component to have a clean image; indirect noise is often imperceptible once it has been combined with direct lighting information. However, unlike indirect diffuse noise, indirect refractions can be responsible for most of the color of a final pixel (a glass of water, for instance). Always refer to the Combined Color image plane to see how your sampling is affecting the fidelity of the final image.

Subsurface Scattering


Subsurface Scattering refers to a type of indirect light caused by light scattering inside the surface of an object before exiting. Typically, this effect is seen in materials like candle wax or human skin. Objects with Subsurface Scattering enabled can contribute a significant amount of noise to your scene.

To remove this type of noise, increase the SSS Quality parameter on the Mantra node. Increasing Pixel
samples will also help remove the noise, but will cause an increase in all other types of rays, causing
areas of the image without noise to become over-sampled.

To identify this type of noise, enable the “Indirect SSS” image layer. This will allow you to examine the amount of noise caused by subsurface scattering without interference from other types of noise.



Keep in mind that you do not have to completely remove noise from this component to have a clean image; SSS noise is often imperceptible when combined with other lighting components. However, in some materials the SSS component may be responsible for much of the final pixel color, and in those cases a significant increase in the number of rays sent may be necessary. Additionally, because Subsurface Scattering is highly dependent on the viewing angle, it may be a good idea to test your sampling settings
across multiple frames if your camera or object is animated. Always refer to the Combined Color image
plane to see how your sampling is affecting the fidelity of the final image.

VOLUMES


Volumes require a different sampling strategy than surfaces: rays “march” through each object and accumulate values across multiple depth samples. This added complexity can make rendering volumes costly. Like surfaces, it is best to approach the removal of noise in a series of stages based on the type of noise present.

Direct Volumes
Direct volumetric lighting refers to volumes that receive their lighting only directly from light sources.

When rendering volumes, there can be more than one type of noise present per component - noise
from under-sampling the transparent parts of an object and noise from under-sampling the lights.

To begin, increasing Stochastic Samples will dramatically reduce noise without causing a large increase
in render times. This will be most notable in semi-transparent areas, usually in the soft edges of the
volumetric object. At some point, increasing Stochastic Samples will no longer have a significant effect
on noise. If this occurs, and noise remains in this component, begin increasing Max Ray Samples slowly
until the remaining noise is removed.
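A minimal sketch of this two-step approach from Python, using assumed internal parameter names that should be checked against your Mantra node’s Sampling tab before use:

    import hou

    mantra = hou.node("/out/mantra1")   # hypothetical ROP path

    # Clean up transparency noise first, then raise Max Ray Samples only if
    # noise remains. Both internal names below are assumptions; hover over the
    # parameters on the Sampling tab to confirm the actual spellings.
    for name, value in (("vm_randomsamples", 12),   # assumed name for Stochastic Samples
                        ("vm_maxraysamples", 6)):   # Max Ray Samples
        parm = mantra.parm(name)
        if parm is not None:
            parm.set(value)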



To identify these types of noise, enable the “Direct Volume” image layer. This will allow you to examine the amount of noise in this component without interference from other types of noise.


As with all noise types, increasing Pixel samples will help reduce this type of noise. This can be
particularly expensive when rendering volumes, so it is best to avoid this if possible. However, the
default setting of 3x3 pixel samples is often a good starting place.

If you plan to composite your volumetric images with a separate background image, be sure to
occasionally check the noise levels after compositing. Noise that is evident against a black background
may be invisible on your final plate. The opposite can also be true, where noise is invisible against a
black background, but becomes obvious when the alpha channel of the image is taken into account. As
much as possible, refer to the Combined Color channel ( or the composited final frame ) to verify how
your sampling is affecting the fidelity of your image.


Indirect Volumetric Lighting
Indirect Volumetric Lighting, where a volume receives light from indirect sources such as other objects or from light scattering within the volume itself, can create significant amounts
of noise in your renders. Indirect volumetric noise is most often noticeable in the shadowed areas of
volumetric objects.



As with Direct Volume noise, begin by removing the noise caused by under-sampling the opacity of your
volume by increasing Stochastic Samples. However, if you’ve already removed this type of noise from
your direct volume component, you may find that increasing the value of this parameter has little effect
since transparent samples are shared between these components.

The remaining noise is likely due to under-sampling the indirect sources of light in your scene. To
remove this noise type, slowly increase the Max Ray Samples parameter until the noise is resolved. Like
Stochastic sampling, this parameter is shared with the direct volume component. However, because
indirect sampling of volumes can be especially noisy, it is likely you will need to increase the max ray
samples further to remove noise from this component.

To identify this type of noise, enable the “Indirect Volume” image layer. This will allow you to examine the amount of noise caused by indirect volumetric lighting without interference from other types of noise.

Keep in mind that you do not have to completely remove noise from this component to have a clean image; indirect noise is often invisible once it has been combined with direct lighting information.
Always refer to the Combined Color image plane to see how your sampling is affecting the fidelity of the
final image.

Volume Quality
Unlike the other “Quality” parameters on the Sampling tab of the Mantra node, Volume Quality does not refer to the amount of indirect sampling. Instead, it explicitly refers to the number of voxels which are considered for sampling. (See “The Sampling tab” for more information on this parameter.)

In general, it will be unnecessary to change this parameter so long as Stochastic Transparency is enabled. However, it is possible that small details will be missed when the Volume Quality is set too low.



If you feel that there is more information in the volume than appears in the render, consider increasing
this value.

When Stochastic Transparency is disabled, this parameter directly controls the amount of sampling in
the volume and will have a dramatic effect on render times.

FINAL IMAGE
Here is a version of the complete scene with the rendering settings required to generate a clean image.

SPECIAL CASES
In some cases, there may be specific objects in your scene that are especially noisy in comparison to
other objects. You may find that to get enough samples onto these objects you will end up over-
sampling the rest of your scene. This can occur in many different circumstances, but a common cause
would be a refractive or reflective shader with high roughness values.



To avoid this over-sampling problem, you can add the sampling properties to the object itself. This means that only the problem object will receive more samples. To achieve this, go to the object and select the “Edit Rendering Parameters” option from the gear menu.


Under “Render Properties”, navigate to the Mantra/Sampling folder (or use the Filter Field to narrow your search). Add the following properties to your object:

Diffuse Quality

Reflection Quality

Refraction Quality

SSS Quality

Max Ray Samples

Min Ray Samples

Noise Level

These properties will give you the same control over sampling that
you have on the Mantra node, but isolated to this specific object.
Note that Pixel Samples cannot be altered per object, it is a global
setting.
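The same properties can also be added and set through Python. The sketch below is only an illustration; the object path, the render-property class lookup, and the internal parameter names are assumptions to adapt to your own scene and build.

    import hou

    obj = hou.node("/obj/noisy_sphere")   # hypothetical object path

    # Pick a Mantra render-property class token; this naive lookup and the
    # parameter names below are assumptions, not a definitive recipe.
    mantra_class = next(c for c in hou.properties.classes() if c.startswith("mantra"))

    for name in ("vm_minraysamples", "vm_maxraysamples", "vm_variance"):
        if obj.parm(name) is None:
            obj.addSpareParmTuple(hou.properties.parmTemplate(mantra_class, name))

    obj.parm("vm_minraysamples").set(2)
    obj.parm("vm_maxraysamples").set(16)
    obj.parm("vm_variance").set(0.005)    # Noise Level for this object only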




The sphere on the right has had the Sampling parameters added and the values adjusted to remove any noise. The rest of the objects in the scene use the sampling values set on the Mantra node (for the purposes of this example, Noise Level 0.01, Min Rays 1, Max Rays 2).


You can see that Mantra samples the objects in the scene at different rates, allowing you to optimize the rendering of specific objects without negatively affecting the overall sampling of your image.



MOTION BLUR
Many of the parameters related to Motion Blur can be found under the Rendering Tab.
When using a real-world camera, rapidly moving objects can appear blurry or “streaky”. This is
because the object changes position while the camera’s shutter is open, allowing its image to
be smeared across the negative as it is exposed. This effect is exaggerated the longer the
shutter is allowed to remain open.
Enabling the Allow Motion Blur toggle on the Mantra node tells Mantra to replicate the effect
of photographic motion blur in your renders. Many of the parameters which control motion
blur are designed to replicate the settings of a real-world camera; however, there are several
controls which are meant as rendering optimizations and have no direct correlation to
real-world settings.

Camera Settings
On the camera object, there is a sampling tab which contains parameters related to shutter
speed as well as depth of field. In the case of Motion Blur, the relevant parameter is Shutter
Time.

Shutter Time
The shutter time refers to the portion of a frame during which the shutter is actually open. On
a physical camera, this is often referred to as Shutter Speed.
A value of 0 for the shutter time means that there is no motion blur at all, as the shutter is
only “open” for an instant. A value of 1, on the other hand, means that the shutter is open for
the entire length of the frame.




In the above example the sphere is rotating a full 360 degrees over the course of a single frame.
You can see how the length of the “motion trail” or “blur” changes based on the shutter time.
In most cases, the default value of 0.5 is appropriate for animated sequences and a good match
for real-world settings.
Keep in mind that this parameter controls the portion of a single frame during which the
shutter is open. It does not refer to how long an individual frame is. To adjust the frame rate,
change the Frames Per Second parameter in the Global Animation Options.
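For example, the Shutter Time can be set through Python on the camera object; the camera path below is hypothetical, and the internal name of the Shutter Time parameter is assumed to be shutter.

    import hou

    cam = hou.node("/obj/cam1")       # hypothetical camera path

    # Shutter Time is a fraction of the frame; 0.5 (the default) keeps the
    # shutter open for half of each frame.
    cam.parm("shutter").set(0.25)     # shorter exposure, shorter motion trails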

Render Settings
Xform Time Samples
This parameter controls the number of transformation motion blur samples. Unlike a physical
camera, Mantra will only sample the motion of an object a specific number of times while the
virtual camera’s shutter is open. This allows Mantra to optimize the rendering of objects whose
path through space over the span of a frame is relatively simple. A setting of 2 is generally
enough to properly represent a motion path.


However, for objects whose path is complex over the course of a single frame, it will become
necessary to increase the number of times Mantra samples the transformation.




In the above example, 40 Xform Time Samples are required to correctly render the complex
motion that occurs within one frame. This amount of sub-frame motion is very unusual and is
only used here as a demonstration.
Keep in mind that Transformation Blur refers to objects being transformed at the object level
and does not include deforming objects. For deformation motion blur, see Geo Time Samples.

Geo Time Samples


This parameter controls the number of deformation motion blur samples. Unlike Transform
Time Samples, this refers to an object whose geometry is changing frame to frame (Although
the topology of the geometry must remain the same). This may refer to simple
transformations at the Geometry Level, but may also include a character or object which
changes shape rapidly over the course of a frame.


As with Xform Time Samples, objects whose deformations are quite complex within a single
frame will require a higher number of Geo Time Samples.




Unlike Xform Time Samples, increasing the number of Geo Time Samples can have an impact
on the amount of memory which Mantra uses. For each additional Sample, Mantra must
retain a copy of the geometry in memory while it samples across the shutter time. For this
reason, when optimizing your renders, it is a good idea to find the minimum number of Geo
Time Samples necessary to create a smooth motion trail.

Shutter Offset
This parameter controls which segment of time will be considered when generating motion
blur. A value of 1 uses the object’s current position and its position on the next frame as the
time interval for motion blur. A value of -1 uses the object’s position on the previous frame and
its current position. A value of 0 generates an interval which extends from halfway through the
previous frame to halfway into the next frame.



Adjusting this parameter is usually unnecessary unless you are attempting to match motion
blur which has been generated outside of Mantra, such as a photographic background plate.

Allow Image Motion Blur


Occasionally, when motion blur is going to be added to an image as a post-process or for other
compositing operations, it is necessary to calculate the motion blur but not include it in the
final rendered image. In these cases, Allow Image Motion Blur should be disabled.
This means that the blurred positions necessary for motion blur can be exported as a custom
Motion Vector Image Plane from within a shader, using the GetBlurP() function, without the
overhead of actually blurring the shaded image in the render.

Motion Factor
This parameter can be found on the Dicing Tab.
Fast moving objects which have significant amounts of motion blur are rendered with the same
sampling quality as slow moving or static objects. However, in cases where objects are very
blurry, small details are usually lost. In these cases, it is a useful optimization to reduce the
shading quality on those objects which are moving quickly since the loss in detail is hidden in
the motion blur.
Increasing the Motion Factor will dynamically reduce the shading quality of an object based on
the rate of motion. This optimization is primarily useful for objects which are refined at render
time like subdivision surfaces or objects with displacement-based shading.


In the above example, you can see that the motion factor does not have a large impact on the
quality of the final render.



However, sometimes too much detail can be lost, especially in cases where much of the surface
detail is generated by the shader, such as objects whose shape is derived through significant
amounts of displacement.


In these cases, the Motion Factor value must be adjusted carefully to retain a believable
amount of surface detail.

Velocity Based Motion Blur


In some cases it can be preferable to use a Velocity Attribute (v) on your geometry to calculate
Motion Blur rather than using the Geo Time Samples parameter. To enable this capability, you
must turn on the “geometry velocity blur” toggle under the sampling tab on the Object itself.
If your geometry changes topology frame-to-frame, Mantra will not be able to interpolate the
geometry to correctly calculate Motion Blur. In these cases, a “v” attribute can be calculated
which is consistent even while the underlying geometry is changing. The surface of a fluid
simulation is a good example of this. For this and other types of simulation data, the velocity is
calculated by the solvers involved, giving accurate results at render time.
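A minimal sketch of enabling this behavior from Python, assuming a hypothetical object path and that the toggle’s internal name is geo_velocityblur:

    import hou

    fluid_obj = hou.node("/obj/flip_surface")   # hypothetical fluid surface object

    # Switch this object from interpolated (Geo Time Samples) blur to blur
    # driven by the point velocity ("v") attribute computed by the solver.
    # NOTE: "geo_velocityblur" is the assumed internal name of the Geometry
    # Velocity Blur toggle in the object's render properties.
    vel = fluid_obj.parm("geo_velocityblur")
    if vel is not None:
        vel.set(1)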
Another useful case for Velocity Based Motion Blur is when you have large amounts of
geometry whose animation is derived from Geometry level transformations or deformations. In
these cases, increasing Geo Time Samples is not recommended because Mantra will have to
store multiple copies of the geometry in memory. Instead, using Velocity Based Motion Blur is
more efficient and uses less memory.
It is important to remember that Velocity Based Motion blur does not support multi-segment
motion blur. This means that sub-frame motion is not recognized and only the velocity
attribute on the current frame will be used to generate the Motion Blur.



Object Specific Sampling


Since all of the objects in your scene can be moving at different rates or have drastically
different amounts of sub-frame motion, it can be useful to sample specific objects at different
rates than others. This can be especially important for deforming objects which may require
high Geo Time Samples - isolating the most important objects can reduce memory overhead.
All of the Mantra Parameters which deal with motion blur can be added to objects in your
scene using the “Edit Rendering Parameters” interface in the gear menu. The parameters which
can be adjusted per object are:
Xform Time Samples
Geo Time Samples
Shutter Offset
Motion Factor
Enabling motion blur is a global setting on the Mantra node and can’t be set per object;
however, reducing Xform and Geo Time Samples to a value of 1 will essentially disable motion
blur for those objects.
Keep in mind that the motion of the Camera can also contribute to motion blur and can also
have its own sampling rates applied.
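As a quick sketch, the per-object motion-blur samples can also be set from Python; the object path and the internal property names used below are assumptions for illustration only.

    import hou

    hero = hou.node("/obj/hero_character")   # hypothetical deforming object

    # Assumed internal names for the per-object motion-blur properties added
    # via Edit Rendering Parameters; verify them in your build before use.
    for name, value in (("geo_motionsamples", 6),     # Geo Time Samples
                        ("xform_motionsamples", 2)):  # Xform Time Samples
        parm = hero.parm(name)
        if parm is not None:
            parm.set(value)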



Packed Primitives

What Are Packed Primitives?
Packed Primitives are a way to express a procedure to generate geometry at render time.

Packed Primitives have information about other pieces of geometry embedded inside of them.
This information could be an actual piece of geometry stored in memory, a reference to a
smaller part of another piece of geometry, or even a path to geometry stored on disk.

The information can then be used throughout Houdini to more efficiently represent geometry
in the viewport, in the bullet solver, and in Mantra.

The Types of Packed Primitives


Packed Primitives can “embed” different types of data about geometry for use in different
scenarios. Each of these Packed Primitive Types have advantages and limitations and are
generally tailored to be used in specific circumstances.

In Memory Packed Primitives
An “In Memory” packed primitive is generated by “Packing” the geometry directly in a SOP
network. This creates a Packed Geometry Primitive with an embedded reference to the current
version of your geometry stored in RAM.

In practice, the “embedded” geometry essentially becomes a single un-editable “primitive” with
a single transform.

Advantages
Because the “embedded geometry” simply refers to a piece of geometry in memory, copying a
packed primitive creates a copy of the reference rather than a copy of the geometry itself. This
means that the referenced geometry is shared among all copies of the packed primitive. This
stands in contrast to copying standard Houdini geometry, which creates duplicates of all points,
primitives, etc., in the original piece of geometry.





Copies of packed primitives use less memory, are simpler to transform, and can be drawn more
efficiently in the viewport or rendered by Mantra.

Additionally, because the geometry can exist in a traditional SOP network before being packed,
you can easily generate procedural geometry which adapts to your scene, use stamping to
generate variations of your packed geometry, or make interactive edits to your geometry while
viewing the results live. Essentially, working with “In Memory” Packed Primitives is a more
interactive and user-friendly version of traditional instancing workflows.

Individual copies of an “In Memory” Packed Primitive can also be “Unpacked” in a SOP network,
loading the referenced geometry into memory. This allows you to generate procedural
workflows which are a hybrid of traditional Houdini geometry and Packed Geometry Primitives.
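A minimal Python sketch of this workflow, assuming hypothetical node paths; the Pack and Unpack SOPs are created and wired the same way you would connect them interactively.

    import hou

    geo = hou.node("/obj/geo1")          # hypothetical geometry container
    tree = geo.node("tree_model")        # some upstream SOP to pack

    # Pack the live SOP geometry into a single "In Memory" packed primitive.
    pack = geo.createNode("pack")
    pack.setFirstInput(tree)

    # An Unpack SOP later expands the reference back into editable geometry.
    unpack = geo.createNode("unpack")
    unpack.setFirstInput(pack)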

A Helpful Reminder
“Packing” geometry has an associated memory cost. Since you are storing the original piece of
geometry in RAM as well as the memory overhead for the “In Memory” Packed Primitive itself,
a single packed primitive is not necessarily any more efficient than the original piece of
geometry. The benefit of “In Memory” Packed Primitives comes from the efficient
representation of large numbers of copies where the referenced geometry can be shared.



This is important to remember when copy-stamping packed geometry. If every instance of your
packed geometry is unique, then you will not receive any of the memory or performance
benefits. In fact, in this scenario you will use more memory than using standard Houdini
geometry, since each packed primitive has its own data on top of the embedded geometry.

TIP: It’s possible to somewhat offset the cost of packing stamped geometry when there are
limited numbers of stamped variations using the 'cache stamping' parameter - see the help for
the Copy SOP for more info.

Packed Disk Primitives
A “Packed Disk” Primitive has an embedded “path” to a file on disk rather than a reference to a
piece of geometry stored in RAM. Generally speaking, “Packed Disk” primitives are standard
Houdini Geometry which has been written to disk as a .bgeo or .bgeo.sc file and then loaded into
Houdini through a File SOP as a “Packed Disk Primitive”.

A “Packed Disk” primitive behaves in a very similar fashion to an “In Memory” Packed Primitive:
the “embedded” geometry is represented as a single un-editable primitive with a single
transform.

Advantages
Much like the “In Memory” packed primitives, a “Packed Disk” primitive is an excellent choice
for efficiently representing copies of geometry in the viewport and in Mantra. Copying a
“Packed Disk” primitive creates a copy of the path to the geometry on disk rather than a
duplicate of the geometry itself.

Another advantage shared between “In Memory” and “Packed Disk” primitives comes from
how the geometry can be represented in the viewport. The viewport does not copy the
geometry, but simply draws it multiple times with different transforms. This means that the
viewport can also refer to a smaller subset of the referenced geometry and display that instead.
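As a rough sketch, the snippet below bakes a piece of geometry to disk and loads it back as a Packed Disk Primitive through a File SOP. The paths are hypothetical, and the Load parameter name and menu token used are assumptions to confirm in your build.

    import hou

    src = hou.node("/obj/geo1/rock_model")   # hypothetical SOP to bake out
    path = "$HIP/geo/rock.bgeo.sc"

    # Write the geometry to disk once...
    src.geometry().saveToFile(hou.expandString(path))

    # ...then bring it back as a Packed Disk Primitive through a File SOP.
    assembly = hou.node("/obj/assembly")     # hypothetical container object
    file_sop = assembly.createNode("file")
    file_sop.parm("file").set(path)

    # The File SOP's "Load" menu must be set to "Packed Disk Primitive"; the
    # parameter name and menu token below are assumptions for H16.
    load = file_sop.parm("loadtype")
    if load is not None:
        load.set("delayed")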





Since “Packed Disk” primitives, by their nature, are loaded from pre-generated geometry stored
on disk, they are less dynamic than “In Memory” packed primitives, whose embedded geometry
can be generated procedurally. The only way to make edits to “Packed Disk” primitives is to
“unpack” them, however this causes the geometry to be loaded into memory as standard
Houdini geometry, negating the benefits. In this sense, “Packed Disk” primitives are less flexible
than “In Memory” packed primitives and best used for static geometry.

However, there are several advantages “Packed Disk” primitives have over “In Memory” packed
primitives when used at render time. When generating an IFD (a file which contains a complete
description of a scene and how to render it), a “Packed Disk” primitive can be represented
simply as a path to the file on disk. In contrast, an “In Memory” primitive must have the entire
piece of Geometry copied into the IFD in order to be referenced by Mantra. Both of these
methods are superior to standard Houdini geometry which must include all of the geometry as
well as all of the duplicates of the geometry in the IFD file.





Additionally, Mantra never has to load into memory a “Packed Disk” primitive which isn’t
currently being used to render the scene. Instead, “Packed Disk” geometry is streamed into the
scene when necessary and then unloaded when no longer in use.

This means that single copies of “Packed Disk” primitives can still be useful at render time,
saving memory in the IFD file, as well as reducing the amount of geometry Mantra needs to
load at any given time.

The lightweight representation of “Packed Disk” primitives makes them ideal candidates for
scene assembly, especially for static background objects. That said, the very small memory
footprint in the IFD file also makes them very useful for objects with large on-disk footprints
(like fluid, smoke, or RBD simulations).

Packed Fragments
A “Packed Fragment” primitive is generated by “Packing” some piece of geometry along with a
"name attribute”. Each piece of the geometry with a unique “name attribute” will become a
“Packed Fragment” primitive with an embedded reference to the “complete” piece of geometry
which is shared across all “Fragments”.

In practice, each “Fragment” essentially becomes a single un-editable “primitive” with a single
transform.
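A minimal sketch of generating Packed Fragments with the Assemble SOP, assuming a hypothetical fractured input carrying a “name” attribute; the toggle’s internal name is an assumption.

    import hou

    geo = hou.node("/obj/geo1")                  # hypothetical container
    shattered = geo.node("voronoifracture1")     # pieces carrying a "name" attribute

    # The Assemble SOP can emit one Packed Fragment per unique "name" value.
    # NOTE: "pack_geo" is the assumed internal name of its "Create Packed
    # Geometry" toggle.
    assemble = geo.createNode("assemble")
    assemble.setFirstInput(shattered)
    packed = assemble.parm("pack_geo")
    if packed is not None:
        packed.set(1)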



Advantages
“Packed Fragment” primitives are ideal for representing many pieces of a greater “complete”
piece of geometry. Each “Fragment” refers to some subset of the embedded geometry which is
shared across all “Fragments”. When “Unpacked”, only the smaller subset of geometry will be
loaded into memory.

Additionally, because each “Packed Fragment” represents a single reference and a transform,
they are useful for cases where each “Fragment” will receive some unique transformation such
as a Rigid Body Simulation. This stands in contrast to standard Houdini geometry which does
not share its geometry, so each individual piece must be considered its own object.





“Packed Fragment” primitives use less memory, are simpler to transform, and are more
efficiently displayed in the viewport.




A Helpful Reminder
Each “Packed Fragment” contains a reference to the larger piece of embedded geometry stored
in memory. When you have many “Fragments”, this is a very efficient way of representing the
geometry because each “Fragment” only refers to a small subset of the shared Geometry.
However, if you were to delete many of your fragments, leaving only a small number, each
“Packed Fragment” is still referring to the original “complete” piece of geometry which is stored
in memory. This can potentially mean a large amount of memory overhead which is no longer
necessary.

Consider “Unpacking” your fragments when you have far fewer “Fragments” than were in the
original piece of geometry.

Rendering Packed Primitives


Packed Primitives are extremely useful for rendering in Mantra. In general, the proper use of
packed primitives will allow you to increase the speed of your renders as well as reduce the
overall amount of memory needed. Additionally, IFD generation will be faster and use less
on-disk space.

However, it is important to understand how Mantra deals with Packed Primitives and the data
stored inside of them in order to take full advantage of them at render time.

Material Assignment
With standard Houdini geometry, Material assignment can occur at two levels – the object level
(On the object Node), or the primitives inside of the object (Using a Material SOP). Materials
assigned at “lower levels” override materials in the higher levels.

When sending a scene to be rendered, it is first analyzed to see which materials will actually be
needed in the final render. Houdini checks the objects for any material assignments, along with
any geometry attributes which apply materials, then makes sure to include the appropriate
Shaders in the IFD file.





When using Packed Geometry, this process is made more complex by adding a third level of
material assignment – materials assigned by attributes inside the Packed Primitive. Like the
previous example, materials assigned at “lower levels” override materials in the higher levels.

However, in this case, when the scene is sent to be rendered, the material assignments “inside”
the packed primitives are hidden from Houdini (remember that the packed geometry may
simply refer to a file on disk). This means that Houdini will be unable to add the appropriate
Shaders to the IFD file for use at render time. During rendering, Mantra will unpack the object
and find the material assignment, but it will not have the necessary Shaders to apply to the object.


The solution to this problem is to tell Houdini to include the Shaders in the IFD regardless of
whether or not they have been assigned to any objects or primitives. On the Mantra node,
there is an option called “Save All SHOPS” which will embed all Shaders in your scene in the IFD.
(This will increase the on-disk size of your IFD by a small amount). This way, when Mantra
unpacks the geometry at render time and finds the material assignment, the necessary Shaders
will be available.





In general, when working with Packed Geometry, it is important to remember that “packed”
data is only accessible once it has been unpacked. For more information about assigning
shaders and overriding shading parameters “inside” Packed Geometry, please see the
documentation on Material Style Sheets.


Displacement and Subdivision Surfaces
In general, when using Packed Geometry, displacement shading and subdivision surfaces are
handled in the same way as with any other piece of geometry. However, if you are primarily
using your Packed Geometry as instanced geometry, then some care must be taken to get the
most out of your workflow.

Before rendering a displaced or a subdivided surface, the geometry is “diced” into smaller
primitives such that there will be one primitive for every pixel (with shading quality set to 1).
This means that objects closer to the camera will be diced more than objects in the distance
(which have less pixel coverage). However, for instancing, this can cause a problem. As
discussed previously, the benefit of instancing comes from the fact that geometry is shared
across all instances. In the case of displacements or subdivision surfaces, the objects must be
evaluated and diced individually which means the geometry is no longer being shared.

To avoid this problem, there is a rendering property which can be added to the object
containing your instances – “vm_sharedisplace”. Enabling this parameter will tell Mantra to use
the highest level of dicing needed for the scene on one object and then share the diced
geometry across all instances. Keep in mind that this means that objects far away from the
camera will have the same level of dicing as objects very close to the camera. There is some
potential for this to cause problems in your renders; however, the benefits of instancing the
geometry most likely outweigh any downsides.
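A hedged sketch of adding and enabling this property from Python; the object path and the render-property class lookup are assumptions to adapt to your scene, while the vm_sharedisplace property name comes from the description above.

    import hou

    instances = hou.node("/obj/forest_instances")   # hypothetical instancing object

    # Add the shared-displacement property and enable it so every instance
    # reuses one diced copy of the displaced geometry. The class-token lookup
    # is an assumption about your build, as in the earlier per-object sketch.
    mantra_class = next(c for c in hou.properties.classes() if c.startswith("mantra"))
    if instances.parm("vm_sharedisplace") is None:
        instances.addSpareParmTuple(hou.properties.parmTemplate(mantra_class, "vm_sharedisplace"))
    instances.parm("vm_sharedisplace").set(1)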



In the worst case scenario, where “incorrect” dicing levels cause problems in your rendering,
you could “split” your instances into two objects, foreground and background, so that distant
objects are evaluated separately from nearby ones. Alternatively, you could also unpack any
objects close to the camera, essentially removing them from the instancing hierarchy.
