
Shadow Volumes in Real-Time Rendering

Abdelmoumne Zerari
Department of Computer Science, LESIA Laboratory
University Med Khider Biskra, B.P. 145, 07000 Biskra, Algeria
sssokba@hotmail.com

Mohamed Chaouki Babahenini
Department of Computer Science, LESIA Laboratory
University Med Khider Biskra, B.P. 145, 07000 Biskra, Algeria
Chaouki.Babahenini@gmail.com

Abstract— This paper presents an implementation of the shadow volume algorithm, accelerated to allow real-time rendering. The technique is based on [1], which makes it possible to obtain shadows in real time, but where the computation of the silhouette requires evaluating the geometry on the CPU. Taking advantage of the latest generation of GPUs, we propose to improve computation times by using geometry shaders, so that the silhouette computation is carried out entirely by a GPU program. We present the steps leading to a concrete implementation of this algorithm, the modifications that were made, and a comparative study of the results, followed by a discussion.

Keywords— Volume Rendering; Illumination; Shadow Volumes; Filtering; GPU; Real-time; Hardware Acceleration.

I. INTRODUCTION

Shadows are useful for the understanding of an image, because they provide information on the shape, position and characteristics of an object, and hence on the position, intensity and size of the lights. Shadows are the result of the interaction between three components: light sources, shadow receivers and occluders. To display shadows in a 3D scene, several techniques exist, usually grouped into the following categories [17]: traditional techniques, techniques based on ray tracing, global illumination techniques, depth buffers (shadow maps) and, finally, shadow volumes, which are the subject of our work.
Shadow volumes were first proposed by Franklin Crow in 1977 [4]. The approach consists in generating polygonal shadow volumes from the occluding objects of the scene with respect to a point light source, then displaying only certain parts of these volumes: classically, those lying between the front and back faces of the shadow volumes, as illustrated in figure 1. Seen from the source, the silhouette edges (i.e. edges shared by two polygons, one facing the source and the other not; see figure 1) form planes that delimit the shadow volume of the object. Only the points of space not contained in such a volume are lit by the source [5].

Figure 1: Illustration of the shadow volume [21].


II. COMPUTATION OF THE SHADOW VOLUME

A. Shadow volume principle

- For each occluding object in the scene, build a shadow volume.
- For each fragment drawn, count how many times the view ray enters and exits a volume, using the stencil buffer:
  - If the number of entries/exits is > 0: the fragment is in shadow.
  - If the number of entries/exits is = 0: the fragment is in light.

To build the shadow volumes it is necessary to find the silhouette of the objects as seen from the source, then to build infinite quadrilaterals (extrusions of the silhouette) resting on the source and on each silhouette edge, and finally to count the entries/exits using the stencil buffer.
Classically, the extrusion of the silhouette is done by the CPU, but this computation is very costly. Several improvements [1], [2] carry out the extrusion of the silhouette on the GPU using shaders. Silhouette detection, however, requires a pre-processing of the geometry of the objects in the scene.
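The entry/exit counting described above can be sketched with a small CPU-side simulation (illustrative Python only; in the real algorithm this counting is accumulated per fragment by the GPU stencil buffer). Here a shadow volume is reduced to a hypothetical interval (t_enter, t_exit) of depths along the ray from the eye to the shaded point:

```python
def in_shadow(point_depth, volumes):
    """Count entries/exits of shadow volumes in front of the point.

    `volumes` is a list of (t_enter, t_exit) depths along the view ray,
    mimicking what the stencil buffer accumulates per fragment.
    """
    count = 0
    for t_enter, t_exit in volumes:
        if t_enter < point_depth:   # crossed a front face: entered a volume
            count += 1
        if t_exit < point_depth:    # crossed a back face: left the volume
            count -= 1
    return count > 0                # non-zero count -> point is in shadow

# A point at depth 5, inside a volume spanning depths [2, 8], is shadowed.
print(in_shadow(5.0, [(2.0, 8.0)]))   # True
# A point at depth 10, past the same volume, is lit.
print(in_shadow(10.0, [(2.0, 8.0)]))  # False
```

This interval model corresponds to a single convex volume per occluder; the stencil buffer handles arbitrarily many overlapping volumes with the same increment/decrement rule.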

Many methods have been developed to allow fast computation of the shadowed parts of a sequence of images. One of these methods is known as stencil shadow volumes [18]. It uses the geometry of the 3D scene to extract volumes. These volumes are then drawn into an image called the stencil buffer, to obtain a mask indicating which pixels are in the shadow of the point light source. There exist two basic techniques to draw these shadow volumes into the stencil buffer, known as Z-pass [6] and Z-fail [7]. The first technique (Z-pass) presents some defects, which the Z-fail method corrects.
correction is made by Z-Fail method.
B. The Z-PASS technique
In 1991, Tim Heidmann presented the basic technique for rendering shadow volumes, commonly called the Z-pass method [6]. The principle of Z-pass is the following:

- Render the front faces of the volume, incrementing the stencil when the depth test passes, doing nothing otherwise. Display of the volume itself is disabled.
- Render the back faces of the volume, decrementing the stencil when the depth test passes, doing nothing otherwise. Display of the volume itself is disabled.

A point with a stencil value different from zero is in shadow (figure 2).

Figure 2: Z-Pass technique.

The opposite version was proposed in 2000 by John Carmack [7]. Indeed, Z-pass does not work when the observer is located inside a shadow volume.

C. The Z-FAIL technique
The principle of Z-fail is the following:

- Render the back faces of the volume, incrementing the stencil when the depth test fails, doing nothing otherwise. Display of the volume itself is disabled.
- Render the front faces of the volume, decrementing the stencil when the depth test fails, doing nothing otherwise. Display of the volume itself is disabled.

With this principle, the shadow volume must be a closed volume [16]. Consequently, it is necessary to build the front and back caps of the volume. For the front cap, we use the faces of the object directed towards the source; for the back cap, we use the back faces of the object (those which are not lit), projected to a certain distance.

D. The silhouette
The silhouettes in our case specifically denote the object-space silhouettes of well-defined polygonal models. A polygonal model is well-defined if it is composed only of triangles and each edge has exactly two adjacent triangles.
A silhouette edge is an edge where a front-facing triangle meets a back-facing triangle. As shown in figure 3, triangle (v0, v1, v2) meets triangle (v0, v3, v1) at edge (v0, v1). Triangle (v0, v1, v2) is a front-facing triangle and triangle (v0, v3, v1) is a back-facing triangle relative to the eye point, so edge (v0, v1) is a silhouette edge [3].
The algorithm to compute whether a triangle is a front face is:

Figure 3: Definition of a silhouette edge [3].

Given an eye point and a triangle (v0, v1, v2):
- Compute the normal of the triangle: N = cross(v1 - v0, v2 - v1)
- Compute the eye vector: E = v0 - eye
- If dot(E, N) > 0, triangle (v0, v1, v2) is a front face; else it is a back face.
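The front-face test above translates directly into code; the following is a minimal Python sketch for illustration (plain tuples stand in for 3D vectors; the real test runs in a shader):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_front_face(eye, v0, v1, v2):
    """Classify triangle (v0, v1, v2) against the eye point,
    following the convention of the pseudo-code above."""
    n = cross(sub(v1, v0), sub(v2, v1))  # N = cross(v1 - v0, v2 - v1)
    e = sub(v0, eye)                     # E = v0 - eye
    return dot(e, n) > 0                 # front face if dot(E, N) > 0
```

For a triangle in the z = 0 plane wound counter-clockwise, N = (0, 0, 1); an eye at z < 0 gives dot(E, N) > 0 (front face under this convention), while an eye at z > 0 gives a back face.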

Our work is based on a previous shadow volume method [1] built on the Z-FAIL technique, where the extrusion of the silhouette is implemented by a GPU program (shader). In that work, however, the silhouette detection stage is implemented on the CPU, i.e. it requires a pre-processing of the geometry, because the adjacency information between primitives is absent.
Our analysis of the limits of the older method [1] led us to propose a faster method for detecting the silhouette, taking advantage of the capabilities of recent graphics hardware.
Several research works have sought an effective way to detect the silhouette in real time [8], [9], [10], [11], [12], although most of these algorithms require a pre-processing of the geometry or multiple rendering passes [13]. In those cases there is no geometric support underlying the silhouette.

With the new geometry shader stage of GPUs, a new possibility opens up, since adjacency information becomes available. Thanks to geometry shaders, we can generate the silhouette geometry directly. Our contribution consists in implementing the silhouette detection stage on the GPU using geometry shaders.

III. GEOMETRY SHADER-BASED SILHOUETTES

Shader programs are programs executed by the graphics card (GPU). They are used to replace the fixed functions implemented in the graphics card (hard-wired functions) by functions written by the programmer [19]. They intervene at various levels of the graphics pipeline (Figure 4). The most commonly used shaders are the vertex and fragment shaders. In modern graphics cards, the geometry assembly stage of the pipeline has become programmable, which gives programmers more flexibility to manage the vertices and their connectivity (addition or removal of vertices or edges).

A. Shader

There are three basic types of shader:

- Vertex shader: executed for the processing of each vertex of the primitives.
- Fragment shader: also called pixel shader; executed for the processing of each pixel to display.
- Geometry shader: acts on the geometry of a model, i.e. on the primitives (points, lines, triangles).

B. Geometry Shader
In recent years, vertex and fragment shaders have received strong interest from the graphics community. Compared with the CPU, or with older generations of GPUs, they have the advantage of accelerating computation times and improving rendering quality. The vertex shader does not receive connectivity information (triangles or quads); it receives only the coordinates of the points, independently of each other.
The fourth version of the shader model introduced a new processing stage named geometry shaders [14], which manages the connectivity of polygons, as well as their subdivision or simplification. They are available on very recent graphics cards. In the OpenGL chain of operations, this stage sits between the vertex shader and the clipping phase, as illustrated in figure 4. The purpose of this shader is to manipulate whole primitives on the GPU, making it possible to transform or duplicate them, or to act on their vertices. Given a set of input elements, the geometry shader produces primitives such as points, lines or triangles.

Figure 4: The OpenGL pipeline with the introduction of version 4 of the shader model.

This new stage gives us the possibility of processing a mesh at the primitive level with adjacency information (neighbouring triangles) and of producing new geometry at the output (see figure 4). It is this information that allows us to detect and create the silhouette edges.
We consider a closed triangle mesh with consistently oriented triangles. The set of triangles is denoted T1, ..., TN. The set of vertices is v1, ..., vn in R^3, and normal vectors are given per triangle: nt is the normal of a triangle Tt = [vi, vj, vk], using the notation of [12]. This triangle normal is defined as the normalization of the vector (vj - vi) x (vk - vi). Given an observer at position x in R^3, we say a triangle is front facing, seen from x, if (v - x) . n <= 0 for a vertex v of the triangle; otherwise it is back facing.
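This notation can be made concrete with a short sketch (illustrative Python; plain tuples stand in for vectors in R^3, and the facing convention (v - x) . n <= 0 from the text is assumed):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def triangle_normal(vi, vj, vk):
    """n_t: the normalization of (vj - vi) x (vk - vi)."""
    n = cross(sub(vj, vi), sub(vk, vi))
    length = math.sqrt(dot(n, n))
    return (n[0] / length, n[1] / length, n[2] / length)

def is_front_facing(tri, x):
    """Front facing, seen from x, if (v - x) . n <= 0."""
    v = tri[0]
    return dot(sub(v, x), triangle_normal(*tri)) <= 0
```

For a counter-clockwise triangle in the z = 0 plane (normal (0, 0, 1)), an observer at z > 0 sees a front face under this convention, and an observer at z < 0 sees a back face.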
The silhouette of a triangle mesh is the set of edges where one of the adjacent triangles is front facing while the other is back facing. In order to detect a silhouette in a triangulated mesh, we need to process each triangle together with the triangles that share an edge with it. In order to access this adjacency information, geometry shaders require the triangle indices to be specially sorted, as shown in figure 5.
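The sorted-index layout of figure 5 can be built in a pre-processing step. The following is a hypothetical helper in Python, assuming a closed, consistently wound mesh: for each triangle (i0, i1, i2) it emits six indices, where the even positions hold the triangle's own vertices and the odd positions hold the vertex of the neighbouring triangle opposite each edge:

```python
def adjacency_indices(triangles):
    """Build triangles-with-adjacency index tuples: for each triangle
    (i0, i1, i2) emit (i0, a0, i1, a1, i2, a2), where a_k is the vertex
    of the neighbouring triangle across the edge starting at position k."""
    # Map each directed edge (a, b) to the third vertex of its triangle.
    opposite = {}
    for i0, i1, i2 in triangles:
        opposite[(i0, i1)] = i2
        opposite[(i1, i2)] = i0
        opposite[(i2, i0)] = i1
    result = []
    for i0, i1, i2 in triangles:
        # The neighbour across edge (a, b) registered the reversed edge (b, a).
        result.append((i0, opposite[(i1, i0)],
                       i1, opposite[(i2, i1)],
                       i2, opposite[(i0, i2)]))
    return result
```

For a tetrahedron with vertices 0..3, every neighbour of the face (0, 1, 2) contributes vertex 3, giving the adjacency tuple (0, 3, 1, 3, 2, 3).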
Figure 5: The triangle currently processed is the one determined by vertices (0,2,4). The triangles determined by vertices (0,1,2), (2,3,4) and (4,5,0) are those that share an edge with the current triangle.

C. Detection of the silhouette and creation of the geometry
We may detect whether a triangle edge belongs to the silhouette with respect to the observer by checking if, for the two triangles sharing the edge, one is front facing and the other back facing. Applied as is, this method generates the same edge twice, once for each triangle that shares it. To correct this, we only generate the edge if the front-facing triangle is the center one, i.e. the one currently processed by the geometry shader. This approach is also taken by Doss [11].
Once a silhouette edge has been detected, we generate new geometry for the silhouette, which is further processed at the fragment shader stage. This is carried out by creating a new polygon, extruding the edge along the vector perpendicular to the edge direction in screen space. However, this approach produces discontinuities at the start and end of the edges. These become especially visible as the width of the silhouette increases, as shown in figure 6.

Figure 6: Discontinuity produced when extruding the edge.

To correct this, we adopt the approach of McGuire and Hughes [20], but we do it at the geometry shader stage in a single step (they require three rendering passes). We create two new triangles at the start and end of the edge, along the projected vertex normal in screen space. This ensures continuity along the silhouette, as depicted in figure 7.

Figure 7: v1 and v2 are the edge vertices, and n1 and n2 are the vertex normal vectors.

In some cases, this solution may produce an error when the extrusion direction differs from the direction of the projected normal. In order to solve this, we designed two possible solutions:

- Invert the extrusion direction at this vertex.
- Invert the direction of the projected normal.

These two solutions are illustrated in figures 8(a) and 8(b), respectively. As this problem usually occurs when edges are hidden by a front-facing polygon, reversing the extrusion direction reveals the hidden edges across the visible geometry, and an artifact therefore appears. We therefore decided to use the second solution, as it produces the more visually pleasing results.

Figure 8: v1 and v2 are the edge vertices, e is the extrusion direction, n2 is the vertex normal, e' is the inverted extrusion direction and n2' is the inverted projected normal.

The geometry shader pseudo-code is shown in Algorithm 1:

Calculate the triangle normal and view direction
if triangle is front-facing then
  for all adjacent triangles do
    Calculate normal
    if adjacent triangle is back-facing then
      Extrude silhouette edge
    end if
  end for
end if

Algorithm 1: Algorithm that extrudes the silhouette edge.
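The logic of Algorithm 1, including the de-duplication rule (emit an edge only when the center triangle is the front-facing one), can be emulated on the CPU to check it off-line. This is an illustrative Python sketch only; the adjacency dictionary is a hypothetical stand-in for the sorted indices the geometry shader receives, and the facing convention of Section III is used:

```python
def _sub(a, b):
    return tuple(a[i] - b[i] for i in range(3))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def _front_facing(tri, eye):
    # Front facing, seen from eye, if (v - eye) . n <= 0.
    n = _cross(_sub(tri[1], tri[0]), _sub(tri[2], tri[0]))
    return _dot(_sub(tri[0], eye), n) <= 0

def silhouette_edges(triangles, adjacency, eye):
    """triangles: list of 3-vertex tuples; adjacency[t][e] gives the index
    of the triangle sharing edge e (vertices e and e+1) of triangle t."""
    edges = []
    for t, tri in enumerate(triangles):
        if not _front_facing(tri, eye):       # Algorithm 1: the center
            continue                          # triangle must be front facing
        for e in range(3):
            neighbour = triangles[adjacency[t][e]]
            if not _front_facing(neighbour, eye):          # adjacent back facing
                edges.append((tri[e], tri[(e + 1) % 3]))   # extrude this edge
    return edges
```

Because only the front-facing member of each front/back pair emits, every silhouette edge is generated exactly once, as in Doss's approach [11].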

D. Real-Time Rendering
Like photorealistic rendering, real-time rendering is a part of the field of computer graphics. While photorealistic rendering is concerned with the appearance of the generated images, real-time rendering mainly focuses on creating those images fast enough to keep the viewer immersed in the scene. To achieve this goal, the images must be created at a rate that does not allow the viewer to distinguish individual frames.
This requirement imposes a natural limit on the quality of the generated images. In many cases, a trade-off must be found between the visual quality of the images and the speed at which they can be generated (rendered) [22].

The term frame rate describes the rate at which images are displayed on the screen. Its unit is frames per second (fps). Haines and Akenine-Möller [15] classify an application as real-time if its frame rate is at least 15 fps.

IV. DISCUSSION

All the results were obtained at a resolution of 800x600 pixels, on an nVidia GeForce 9400M GT graphics card (OpenGL Core 3.3, GLSL 3.30), with an Intel(R) Pentium(R) D CPU at 3.00 GHz and 1 GB of RAM.
Figure 9 shows the result of the implementation of the shadow volume algorithm [1], with the Z-FAIL method and silhouette detection on the CPU. The scene is composed of 30 creatures, each made of 634 triangles, 365 vertices and 307 texture coordinates, for a total of 30 x 634 = 19020 triangles, 30 x 365 = 10950 vertices and 30 x 307 = 9210 texture coordinates. The shadow volumes as well as the self-shadowing are clearly visible. In this case the scene runs at 6 FPS with shadows enabled and 18 FPS with shadows disabled.
Figure 10 shows the result of our algorithm using the geometry shader, with the same scene composition (30 creatures of 634 triangles, 365 vertices and 307 texture coordinates each, i.e. 19020 triangles, 10950 vertices and 9210 texture coordinates in total). In this case the scene runs at 14 FPS with shadows enabled and 18 FPS with shadows disabled.

Figure 10. Implementation of the shadow volume algorithm using the Geometry Shader.

V. COMPARISON OF THE RESULTS

The comparison of the results obtained by the old method [1] and by the proposed one clearly shows the improvement in rendering speed and the optimization of computation time. With the same scene composition, the configuration of figure 10 reaches 14 FPS, while that of figure 9 yields only 6 FPS.

VI. CONCLUSION AND PROSPECTS

We presented an improvement of a real-time shadow volume rendering method based on the geometry shader, capable of rendering in real time. The improvement concerns the silhouette detection stage. Although the results do not meet expectations aesthetically, the shadows are nevertheless clearly visible.

Figure 9. Implementation of the method [1].

This algorithm works only for a limited number of triangles; in our case, in order to maintain real-time performance, we could not exceed a certain number of triangles in the scene.

VII. FUTURE WORK

Because of the extremely scene-dependent nature of shadow volume rendering performance, and because of space constraints, we defer a thorough performance evaluation of our technique. Still, we are happy to report that our rendering examples, including examples that mimic the animated behaviour of a sophisticated 3D game (see Figure 4), achieve real-time rates on current PC graphics hardware.
Future graphics hardware will support higher-order graphics primitives beyond triangles. Combining higher-order hardware primitives with shadow volumes requires automatic generation of shadow volumes in hardware. Two-sided stencil testing will be vital, since it requires only a single rendering pass over the automatically generated shadow volume geometry. Automatic generation of shadow volumes will also relieve the CPU of this chore.

REFERENCES
[1] Gunter Wallner, "Geometry of Real Time Shadows", in international journals and conference proceedings, University of Applied Arts Vienna, Department of Geometry, 2005-2008.
[2] Chris Brennan, "Shadow Volume Extrusion Using a Vertex Shader", in Wolfgang Engel, ed., Shader X, Wordware, May 2002. http://www.shaderx.com/.
[3] Shi Yazheng, "Performance comparison of CPU and GPU silhouette extraction in a shadow volume algorithm", Master's thesis in Computing Science, 10 credits, Department of Computing Science, Sweden, January 27, 2006.
[4] Franklin C. Crow, "Shadow algorithms for computer graphics", in Proc. SIGGRAPH '77, volume 11, pages 242-248, July 1977.
[5] Frédéric Mora, "Visibilité polygone à polygone : calcul, représentation, applications", PhD thesis, Université de Poitiers, July 10, 2006.
[6] Tim Heidmann, "Real shadows, real time", IRIS Universe, volume 18, pages 23-31, 1991.
[7] John Carmack, "On shadow volumes", 2000. URL http://developer.nvidia.com/attach/5628.
[8] Mitchell J. L., Brennan C., Card D., "Real-time image-space outlining for non-photorealistic rendering", in SIGGRAPH '02 (2002).
[9] Raskar R., "Hardware support for non-photorealistic rendering", in 2001 SIGGRAPH / Eurographics Workshop on Graphics Hardware (2001), ACM Press, pp. 41-46.
[10] Ashikhmin M., "Image-space silhouettes for unprocessed models", in GI '04: Proceedings of Graphics Interface 2004 (School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, 2004), Canadian Human-Computer Communications Society, pp. 195-202.
[11] Doss J., "Inking the cube: Edge detection with Direct3D 10", August 2008.
[12] Dyken C., Reimers M., Seland J., "Real-time GPU silhouette refinement using adaptively blended Bézier patches", Computer Graphics Forum 27, 1 (March 2008), pp. 1-12.
[13] P. Hermosilla and P. P. Vázquez, "Single Pass GPU Stylized Edges", MOVING Group, Universitat Politècnica de Catalunya, IV Iberoamerican Symposium in Computer Graphics (SIACG 2009), pp. 1-8.
[14] Lichtenbelt B., Brown P., "EXT_gpu_shader4 Extensions Specifications", NVIDIA, 2007.
[15] Tomas Akenine-Möller and Eric Haines, Real-Time Rendering, 2nd edition, A K Peters, 2002.
[16] Emmanuel Dupuis, "Gestion des Ombres en Temps Réel", Travail d'Étude et de Recherche, Master I STIC-Informatique, academic year 2004/2005.
[17] Jean-François St-Amour, "Génération d'ombres floues provenant de sources de lumière surfaciques à l'aide de tampons d'ombre étendus", M.Sc. thesis in computer science, Université de Montréal, August 2004.
[18] Jukka Kainulainen, Telecommunications Software and Multimedia Laboratory, Helsinki University of Technology, April 15, 2002.
[19] Randi J. Rost and Bill Licea-Kane, OpenGL Shading Language, Third Edition, Addison-Wesley, 2009.
[20] McGuire M., Hughes J. F., "Hardware-determined feature edges", in NPAR '04: Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering (New York, NY, USA, 2004), ACM, pp. 35-47.
[21] Samuel Hornus, "Maintenance de la visibilité depuis un point mobile, et applications", PhD thesis, Université Joseph Fourier, May 22, 2006.
[22] Christian Steiner, "Shadow Volumes in Complex Scenes", 2006.
