
Visible-Surface Detection Methods
Introduction

• The various algorithms used to identify the visible portions of a scene are referred to as visible-surface detection methods. Sometimes these methods are also referred to as hidden-surface elimination methods.
Classification of Visible-Surface Detection Algorithms
• Visible-surface detection algorithms are broadly classified
according to whether they deal with the object definitions or
with their projected images.
• These two approaches are called object-space methods and image-space methods.
• An object-space method compares objects and parts of
objects to each other within the scene definition to
determine which surfaces we should label as visible
• In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
Back-Face Detection
• A fast and simple object-space method for locating the back faces of a polyhedron is based on front-back tests.
• A point (x, y, z) is behind a polygon surface if Ax + By + Cz + D < 0, where A, B, C, and D are the plane parameters for the polygon.
• We use the viewing position to test for back faces.
• We can simplify the back-face test by considering the direction of the normal vector N for a polygon surface.
• If Vview is a vector in the viewing direction from our camera position, as shown in Figure 1, then a polygon is a back face if Vview · N > 0.
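As a rough sketch of these two tests in Python (the plane is given by its parameters A, B, C, D, vectors are plain (x, y, z) tuples, and the function names are illustrative, not from the text):

    def point_behind_polygon(plane, point):
        # True if the point (x, y, z) lies behind the polygon's plane:
        # Ax + By + Cz + D < 0
        A, B, C, D = plane
        x, y, z = point
        return A * x + B * y + C * z + D < 0

    def is_back_face(v_view, normal):
        # A polygon is a back face if Vview . N > 0
        return sum(v * n for v, n in zip(v_view, normal)) > 0

With the viewing direction taken along the negative z axis of a right-handed viewing system, Vview = (0, 0, -1), so the dot-product test reduces to checking whether the z component C of the polygon's normal vector is negative.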
Depth-Buffer Method
• A commonly used image-space approach for detecting visible surfaces is the depth-buffer method, which compares surface depth values throughout a scene for each pixel position on the projection plane.
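A minimal sketch of the method, assuming each surface can report which pixel positions its projection covers and its depth and color at a pixel; the covered_pixels, depth_at, and color_at helpers are illustrative assumptions, as is the convention that smaller depth values are closer to the viewer:

    def depth_buffer_render(surfaces, width, height, background_color):
        # Initialize every pixel to the farthest possible depth and the background color.
        depth_buf = [[float('inf')] * width for _ in range(height)]
        frame_buf = [[background_color] * width for _ in range(height)]

        for surface in surfaces:
            for (x, y) in surface.covered_pixels():       # pixels overlapped by the projected surface
                z = surface.depth_at(x, y)                # surface depth at this pixel position
                if z < depth_buf[y][x]:                   # nearer than anything processed so far
                    depth_buf[y][x] = z                   # record the nearer depth ...
                    frame_buf[y][x] = surface.color_at(x, y)  # ... and the surface color
        return frame_buf

Surfaces can be processed in any order; only the nearest surface at each pixel position survives in the frame buffer.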
A-Buffer Method
• The A-buffer procedure is an extension of the ideas in the depth-buffer method.
• This depth-buffer extension is an
antialiasing, area-averaging, visibility-
detection method developed at Lucasfilm
Studios for inclusion in the surface-rendering
system called REYES.
• REYES is an acronym for “Renders Everything You Ever Saw.”
• The buffer region for this procedure is
referred to as the accumulation buffer,
because it is used to store a variety of
surface data, in addition to depth values.
Drawback of the Depth-Buffer Method
• It identifies only one visible surface at each pixel position.
• In other words, it deals only with opaque surfaces and cannot accumulate color values for more than one surface, as is necessary if transparent surfaces are to be displayed (Figure).
• The A-buffer method expands the depth-
buffer algorithm so that each position in the
buffer can reference a linked list of surfaces.
• This allows a pixel color to be computed as a
combination of different surface colors for
transparency or antialiasing effects

• Each position in the A-buffer has two fields:
  1. Depth field: stores a real-number value (positive, negative, or zero).
  2. Surface data field: stores surface data or a pointer.
• If the depth field is nonnegative, the number stored at that position is the depth of a surface that overlaps the corresponding pixel area. The surface data field then stores various surface information, such as the surface color for that position and the percent of pixel coverage.
• If the depth field for a position in the A-buffer is negative, this indicates multiple surface contributions to the pixel color. The surface data field then stores a pointer to a linked list of surface data.
• Surface information in the A-buffer includes:
  • RGB intensity components
  • Opacity parameter (percent of transparency)
  • Depth
  • Percent of area coverage
  • Surface identifier
  • Other surface-rendering parameters
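A sketch of one possible representation of an A-buffer position with these fields (class and attribute names are illustrative assumptions; the sign of the depth field distinguishes the single-surface and multiple-surface cases described above):

    class SurfaceData:
        """Data stored for one surface contribution to a pixel."""
        def __init__(self, rgb, opacity, depth, coverage, surface_id):
            self.rgb = rgb                  # RGB intensity components
            self.opacity = opacity          # opacity parameter
            self.depth = depth              # depth of this surface at the pixel
            self.coverage = coverage        # percent of pixel area covered
            self.surface_id = surface_id    # surface identifier
            self.next = None                # link to the next entry in the surface list

    class ABufferPosition:
        """One position (pixel) in the accumulation buffer."""
        def __init__(self):
            # depth >= 0: value is the depth of the single surface overlapping this
            #             pixel, and surface_data holds that surface's data.
            # depth < 0:  multiple surfaces contribute to the pixel, and surface_data
            #             points to the head of a linked list of SurfaceData entries.
            self.depth = 0.0
            self.surface_data = None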
• The A-buffer visibility-detection scheme can be implemented using methods similar to those in the depth-buffer algorithm (scan lines).
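For example, once all surfaces have been processed, the linked list stored at a position can be combined into a single pixel color. The following sketch walks a front-to-back list of the SurfaceData entries from the previous sketch; the compositing rule and the reading of opacity as the fraction of light blocked by a surface are assumptions, not details given in the text:

    def resolve_pixel(first_fragment, background_rgb):
        # Accumulate color front to back, weighting each surface by how much
        # light still reaches it after passing through the surfaces in front.
        color = [0.0, 0.0, 0.0]
        remaining = 1.0                      # fraction of light not yet absorbed
        frag = first_fragment
        while frag is not None:
            weight = remaining * frag.opacity
            for i in range(3):
                color[i] += weight * frag.rgb[i]
            remaining *= 1.0 - frag.opacity
            frag = frag.next
        # Whatever light remains comes from the background.
        for i in range(3):
            color[i] += remaining * background_rgb[i]
        return tuple(color)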
