Single Pass Stereo rendering allows the GPU to share culling for both eyes. The GPU only
needs to iterate through all the GameObjects in the Scene once for culling purposes, and
then renders the GameObjects that survived the culling process.
The comparison images below show the difference between normal VR rendering and Single
Pass Stereo rendering.
(Comparison images not reproduced: the first shows normal VR rendering, the second Single Pass Stereo rendering.)
To enable this feature, open the Player settings (menu: Edit > Project Settings, then select
the Player category). Then navigate to the XR Settings panel, ensure the Virtual Reality
Supported checkbox is ticked, and select the Single Pass option from the Stereo Rendering
Method dropdown.
Unity’s built-in rendering features and Standard Assets all support this feature. However,
custom-built Shaders and Shaders downloaded from the Asset Store might need to be
modified (for example, you might need to scale and offset screen space coordinates to
access the appropriate half of the packed Render Texture) to add Single Pass Stereo
rendering support.
In the case of XR, there are multiple view matrices: one for the left eye and one for the right. You can use the built-in method UnityWorldToClipPos to ensure that Unity takes into consideration whether the calculation requires handling multiple view matrices. If you use the UnityWorldToClipPos method, the shader automatically performs the transformation calculation correctly, regardless of the platform your application is running on.
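As an illustrative sketch (the v2f struct layout is a common convention, not part of the original page), a vertex function that computes a world-space position manually can still produce correct per-eye output by passing that position through UnityWorldToClipPos:

```hlsl
#include "UnityCG.cginc"

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

v2f vert(appdata_base v) {
    v2f o;
    // Compute the world-space position, then let Unity apply the
    // correct view and projection matrices for the eye being rendered.
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    o.pos = UnityWorldToClipPos(worldPos);
    o.uv = v.texcoord;
    return o;
}
```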
UnityCG.cginc also contains the following helper methods to assist you with authoring
stereoscopic Shaders:
UnityStereoTransformScreenSpaceTex(uv)

uv: UV texture coordinates. Either a float2 for a standard UV, or a float4 for a packed pair of two UVs.

If UNITY_SINGLE_PASS_STEREO is defined, this returns the result of applying the current eye's scale and bias to the texture coordinates in uv. Otherwise, this returns the texture coordinates unaltered.
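A minimal usage sketch (the sampler name _OverlayTex is hypothetical):

```hlsl
sampler2D _OverlayTex; // hypothetical screen-space texture

fixed4 frag(v2f i) : SV_Target {
    // Remaps full-texture UVs to the current eye's half of a packed
    // Render Texture when UNITY_SINGLE_PASS_STEREO is defined;
    // otherwise the UVs pass through unchanged.
    float2 uv = UnityStereoTransformScreenSpaceTex(i.uv);
    return tex2D(_OverlayTex, uv);
}
```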
Unity exposes the built-in shader variable unity_StereoEyeIndex so that Shaders can perform eye-dependent calculations. The value of unity_StereoEyeIndex is 0 for left-eye rendering, and 1 for right-eye rendering.
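For example, a fragment function could branch on the eye index like this (a purely illustrative debug tint, not from the original page):

```hlsl
fixed4 frag(v2f i) : SV_Target {
    fixed4 col = tex2D(_MainTex, i.uv);
    // unity_StereoEyeIndex is 0 for the left eye, 1 for the right,
    // so each eye receives a slightly different tint.
    col.rgb *= (unity_StereoEyeIndex == 0)
        ? fixed3(1.0, 0.9, 0.9)   // left eye: warm tint
        : fixed3(0.9, 0.9, 1.0);  // right eye: cool tint
    return col;
}
```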
In most cases, you don’t need to modify your Shaders. However, there are situations in
which you might need to sample a monoscopic Texture as a source for Single Pass Stereo
rendering (for example, if you are creating a full-screen film grain or noise effect where the
source image should be the same for both eyes, rather than packed into a stereoscopic
image). In such situations, use ComputeNonStereoScreenPos() instead of
ComputeScreenPos() to calculate locations from the full source Texture.
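A sketch of this in a vertex function (the struct layout is illustrative):

```hlsl
struct v2f {
    float4 pos       : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert(appdata_base v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // ComputeNonStereoScreenPos skips the per-eye scale and bias, so
    // both eyes sample the same region of the full monoscopic source
    // Texture (for example, a film-grain Texture).
    o.screenPos = ComputeNonStereoScreenPos(o.pos);
    return o;
}
```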
Post-processing effects
Post-processing effects require some extra work to support Single Pass Stereo rendering.
Each post-processing effect runs once on the packed Render Texture (which contains both
the left and right eye images), but applies all draw commands that run during the post-
processing twice: once to the left-eye half of the destination Render Texture, and once to the
right-eye half.
Post-processing effects do not automatically detect Single Pass Stereo rendering, so you
need to adjust any reads of packed Stereo Render Textures so that they only read from the
correct side for the eye being rendered. There are two ways to do this depending on how
your post-processing effect is being rendered:
• Using Graphics.Blit()
• Mesh-based drawing
Without the above-mentioned adjustments, each draw command reads the whole of the
source Render Texture (containing both the left and right eye views), and outputs the entire
image pair to both the left and right eye sides of the output Render Texture, resulting in
incorrect duplication of the source image in each eye.
This happens when using Graphics.Blit or a full-screen polygon with a Texture map to draw each post-processing effect. Both methods reference the entire output of the previous post-processing effect in the chain, so when they refer to an area in a packed stereo Render Texture, they reference the whole packed Render Texture instead of just the relevant half of it.
Graphics.Blit()
Post-processing effects rendered with Blit() do not automatically reference the correct
part of packed stereo Render Textures. By default, they refer to the entire texture. This
incorrectly stretches the post-processing effect across both eyes.
For Single Pass Stereo rendering using Blit(), texture samplers in Shaders have an
additional auto-calculated variable which refers to the correct half of a packed stereo Render
Texture, depending on the eye being drawn. The variable contains scale and offset values
that allow you to transform your target coordinates to the correct location.
To access this variable, declare a half4 in your Shader with the same name as your
sampler, and add the suffix _ST (see below for a code example of this). To adjust UV
coordinates, pass in your _ST variable to scaleAndOffset and use
UnityStereoScreenSpaceUVAdjust(uv, scaleAndOffset). This method compiles to
nothing in non-Single Pass Stereo builds, meaning that shaders modified to support this
mode are still compatible with non-Single Pass Stereo builds.
The following examples demonstrate what you need to change in your fragment shader
code to support Single Pass Stereo rendering.
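Those examples are not reproduced here; the following is a sketch of the kind of change described above (the sampler name _MainTex and surrounding code are illustrative):

```hlsl
// Before: reads the whole packed Render Texture for both eyes.
//   fixed4 frag(v2f i) : SV_Target {
//       return tex2D(_MainTex, i.uv);
//   }

// After: declare a half4 with the sampler's name plus the _ST suffix,
// then adjust the UVs with the per-eye scale and offset it carries.
sampler2D _MainTex;
half4 _MainTex_ST;

fixed4 frag(v2f i) : SV_Target {
    // UnityStereoScreenSpaceUVAdjust compiles to nothing in
    // non-Single Pass Stereo builds.
    return tex2D(_MainTex, UnityStereoScreenSpaceUVAdjust(i.uv, _MainTex_ST));
}
```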
Mesh-based drawing
Post-processing effects rendered with meshes (for example, by drawing a quadrilateral in immediate mode using the low-level graphics API) also need to adjust the UV coordinates on the target Texture when rendering each eye. To adjust your coordinates in these circumstances, use UnityStereoTransformScreenSpaceTex(uv). This method correctly adjusts for packed stereo Render Textures in Single Pass Stereo rendering mode, and compiles to an unmodified read for non-packed Render Textures when Single Pass Stereo rendering mode is disabled. However, if you intend to use a Shader for both packed and unpacked Render Textures in the same mode, you need two separate Shaders.
For example, imagine a screen space effect that requires an image to be drawn over the
screen (perhaps you are drawing some kind of dirt spattered on the screen). Instead of
applying the effect over the entire output display, which would stretch the dirt image across
both eyes, you need to apply it twice: once for each eye. In cases like this, you need to
convert from using texture coordinates that reference the whole packed Render Texture, to
coordinates that reference each eye.
The following code examples show a Surface Shader that repeats an input Texture (called
_Detail) 8 x 6 times across the output image. In the second example, the shader transforms
the destination coordinates in Single Pass Stereo mode to refer to the part of the output
Texture that represents the eye currently being rendered.
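The original examples are not reproduced here; the following is a sketch of the transformed (Single Pass Stereo) version, assuming the Surface Shader's Input struct includes a screenPos member:

```hlsl
sampler2D _Detail;

struct Input {
    float2 uv_MainTex;
    float4 screenPos;
};

void surf(Input IN, inout SurfaceOutput o) {
    // Derive screen-space UVs from the interpolated screen position.
    float2 screenUV = IN.screenPos.xy / IN.screenPos.w;

    // Remap the UVs from the whole packed Render Texture to the half
    // belonging to the eye currently being rendered. Outside Single
    // Pass Stereo mode this leaves the UVs unchanged.
    screenUV = UnityStereoTransformScreenSpaceTex(screenUV);

    // Tile the detail Texture 8 x 6 times across the (per-eye) image.
    screenUV *= float2(8, 6);
    o.Albedo = tex2D(_Detail, screenUV).rgb;
}
```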