
Unit 7

Visible Surface Determination


7.1 Visible Surface Determination
7.2 Back Face Detection, Depth Buffer (Z-Buffer), A-Buffer and Scan-Line Algorithms
7.3 Depth Sorting Method (Painter’s Algorithm)
7.4 BSP tree Method, Octree and Ray Tracing
Visible-Surface Determination (Hidden Surface Elimination):
Given a set of 3D objects and a viewing specification, we must determine which lines or surfaces of the objects are visible, either from the center of projection (for perspective projections) or along the direction of projection (for parallel projections), so that only the visible surfaces or lines are displayed. This process is known as visible-line or visible-surface determination, or as hidden-line or hidden-surface elimination.
Visible-surface detection, or hidden-surface removal, is a major concern in realistic graphics: identifying those parts of a scene that are visible from a chosen viewing position. Numerous algorithms have been devised for efficient
identification of visible objects for different types of applications. Some methods
require more memory, some involve more processing time, and some apply only
to special types of objects. Deciding upon a method for a particular application
can depend on such factors as the complexity of the scene, type of objects to be
displayed, available equipment, and whether static or animated displays are to
be generated.
Visible surface detection methods are broadly classified according to
whether they deal with objects or with their projected images.
These two approaches are:
• Object-space methods: An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
• Image-space methods: Visibility is decided point by point at each pixel position on the projection plane.


Most visible-surface detection algorithms use the image-space method, but in some cases object-space methods can also be used effectively.

Techniques for Efficient Visible-Surface Algorithms:


We must organize visible-surface algorithms so that costly operations are performed as efficiently and infrequently as possible. Some techniques are described below:
Coherence:
Visible-surface algorithms are 3D versions of sorting, i.e., depth comparison. Comparing all pairs of objects can be avoided by exploiting the following kinds of coherence:
• Object coherence: no comparison between the components of two objects is needed if the objects are entirely separate from each other.
• Face coherence: surface properties vary smoothly across a face.
• Edge coherence: an edge changes visibility only infrequently.
• Implied edge coherence: the line of intersection of two faces can be determined from two intersection points.
• Area coherence: a group of adjacent pixels is often covered by the same visible surface.
• Span coherence: (a special case of area coherence) homogeneous runs within a scan line.
• Depth coherence: adjacent parts of the same surface are typically close in depth.
• Frame coherence: successive animation frames contain only small changes from the previous frame.
Bounding Volumes:
Each complex object is enclosed in a simple volume such as a sphere or a box; if the bounding volumes of two objects do not overlap, their faces need not be compared at all, as in the sketch below.
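As an illustration, a minimal sketch of such a rejection test, assuming simple bounding spheres (the function and parameter names are illustrative, not a fixed API):

import math

def spheres_overlap(center_a, radius_a, center_b, radius_b):
    # Objects whose bounding spheres do not overlap cannot occlude
    # one another, so no face-level depth comparison is needed.
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius_a + radius_b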
Back-Face Culling:
In computer graphics, back-face culling determines whether a polygon of a graphical object faces away from the viewer, so that it can be discarded before any further visibility processing.
The Perspective Transformation:
Performing the perspective transformation first maps the view volume so that all projectors become parallel; depth comparisons then reduce to simple comparisons of z coordinates.
Algorithms for Visible-Surface Detection:
1. Backface Detection Method:
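In a right-handed viewing system with the viewer looking along the negative z-axis, a polygon with outward normal N = (A, B, C) is a back face if V · N >= 0, where V is the viewing direction; with V = (0, 0, -1) this reduces to the test C <= 0. A minimal sketch of this test (the tuple representation of vectors is an assumption for illustration):

def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    # A polygon is a back face when the viewing direction and the
    # outward surface normal point the same way: V . N >= 0.
    # With view_dir = (0, 0, -1) this reduces to normal[2] <= 0.
    dot = (normal[0] * view_dir[0] +
           normal[1] * view_dir[1] +
           normal[2] * view_dir[2])
    return dot >= 0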
2. Z-Buffer method / Depth-Buffer Algorithm:
The depth-buffer method is the most commonly used image-space method for detecting visible surfaces. It is also known as the z-buffer method. It compares surface depths at each pixel position on the projection plane. It is called the z-buffer method because object depth is usually measured from the view plane along the z-axis of the viewing system.
Each surface of the scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because their depth values can be computed very quickly and the method is easy to implement; however, it can also be applied to non-planar surfaces.
With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths are compared by their z-values.

In the Z-buffer method, two buffer areas are required.


• Depth buffer (z-buffer): stores the depth value for each (x, y) position as surfaces are processed.
• Refresh buffer (image buffer): stores the intensity value for each position.
Initially, all positions in the depth buffer are set to 0, and the refresh buffer is initialized to the background color. Each surface listed in the polygon table is processed one scan line at a time, calculating the depth (z-value) at each position (x, y). The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the stored depth value, the new depth is stored, and the surface intensity at that position is determined and placed in the refresh buffer.

The depth value at surface position (x, y) is calculated from the plane equation of the surface, Ax + By + Cz + D = 0:

z = (-Ax - By - D) / C

For the next position (x + 1, y) along the same scan line, the depth is

z' = (-A(x + 1) - By - D) / C = z - A/C

The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values by a single subtraction.
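The complete procedure can be sketched as follows. This is a minimal illustration, not a fixed API: the surface object with its plane coefficients, spans() and intensity() helpers are assumptions, and depth follows the text's convention (a larger z means nearer the view plane, with both buffers initialized as above):

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]        # all depths start at 0
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)] # background color

def z_buffer(surfaces):
    for s in surfaces:                       # one surface at a time
        a, b, c, d = s.plane                 # plane equation Ax + By + Cz + D = 0
        for y, x_start, x_end in s.spans():  # pixel runs covered on each scan line
            z = (-a * x_start - b * y - d) / c   # depth from the plane equation
            for x in range(x_start, x_end + 1):
                if z > depth_buffer[y][x]:   # nearer than what is stored
                    depth_buffer[y][x] = z
                    frame_buffer[y][x] = s.intensity(x, y)
                z -= a / c                   # incremental step: z' = z - A/C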

Advantages
• It is easy to implement.
• It reduces the speed problem if implemented in hardware.
• It processes one object at a time.

Disadvantages
• It requires a large amount of memory.
• It is a time-consuming process.
3. A-Buffer Algorithm:
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw). The A-buffer expands on the depth-buffer method to allow transparency. The key data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields −


1. Depth field − It stores a positive or negative real number
2. Intensity field − It stores surface-intensity information or a pointer value

If depth >= 0, the number stored at that position is the depth of a single surface overlapping
the corresponding pixel area. The intensity field then stores the RGB components of the
surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes:

• RGB intensity components
• Opacity parameter (percent of transparency)
• Depth
• Percent of area coverage
• Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values are
used to determine the final color of a pixel.
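A minimal sketch of this cell layout and of the sign convention for the depth field (the field and function names are illustrative assumptions):

from dataclasses import dataclass

@dataclass
class SurfaceData:
    rgb: tuple          # RGB intensity components
    opacity: float      # opacity parameter (percent of transparency)
    depth: float
    coverage: float     # percent of pixel-area coverage
    surface_id: int

@dataclass
class ABufferCell:
    depth: float = 0.0        # >= 0: depth of the single overlapping surface
    intensity: object = None  # surface data, or a list of SurfaceData when depth < 0

def add_surface(cell, surf):
    # Accumulate one surface fragment into an A-buffer cell.
    if cell.intensity is None:        # first surface at this pixel
        cell.depth = surf.depth
        cell.intensity = surf
    else:
        if cell.depth >= 0:           # switch to the multi-surface form
            cell.intensity = [cell.intensity]
            cell.depth = -1.0         # negative depth flags the list
        cell.intensity.append(surf)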

4. Scan-line Algorithm:
This is an image-space method for removing hidden surfaces; it is an extension of scan-line polygon filling from single polygon interiors to multiple surfaces. In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the image buffer.
This method keeps depth information for only a single scan line. To obtain that scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time, before moving on to the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.

The Edge Table − It contains coordinate endpoints of each line in the scene,
the inverse slope of each line, and pointers into the polygon table to connect
edges to surfaces.

The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.
Scan-line Algorithm:
1. For each scan line of the image:
2. Examine all polygons in the scene to find those that intersect the current scan line.
a. For each segment of the scan line, compute the depth of every overlapping surface and identify the surface nearest to the view plane.
b. Enter the intensity of the nearest surface into the refresh buffer for that segment.

Note: Step 2 is not efficient, because not all polygons necessarily intersect the scan line. The depth calculation in 2a is not needed if only one polygon in the scene maps onto a segment of the scan line.
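As an illustration of steps 2a and 2b, a minimal sketch of how one scan line is resolved, assuming the edge table has already produced the (x, polygon) crossings for this scan line (the helper names and the depth_at method are assumptions; a larger depth means nearer, as in the z-buffer section):

def resolve_scan_line(y, crossings, polygons, frame_buffer, background):
    # `crossings` holds (x, polygon_id) pairs where edges cross scan line y.
    inside = set()                      # polygons whose surface flag is on
    prev_x = 0
    for x, poly_id in sorted(crossings):
        for px in range(prev_x, x):     # fill the span [prev_x, x)
            if not inside:
                frame_buffer[y][px] = background
            elif len(inside) == 1:      # one flag on: no depth calculation needed
                only = next(iter(inside))
                frame_buffer[y][px] = polygons[only].intensity
            else:                       # overlap: compare depths, nearest wins
                nearest = max(inside, key=lambda p: polygons[p].depth_at(px, y))
                frame_buffer[y][px] = polygons[nearest].intensity
        inside.symmetric_difference_update({poly_id})  # toggle the in/out flag
        prev_x = x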
The figure above illustrates the scan-line method for locating visible portions of
surfaces for pixel positions along the line. The active list for scan line 1 contains
information from the edge table for edges AB, BC, EH, and FG. For positions
along this scan line between edges AB and BC, only the flag for surface S1 is on.
Therefore, no depth calculations are necessary, and intensity information for
surface S1 is entered from the polygon table into the refresh buffer. Similarly,
between edges EH and FG, only the flag for surface S2 is on. No other positions
along scan line 1 intersect surfaces, so the intensity values in the other areas are
set to the background intensity. The background intensity can be loaded
throughout the buffer in an initialization routine.
For scan lines 2 and 3, the active edge list contains edges AD, EH, BC, and
FG. Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on.
But between edges EH and BC, the flags for both surfaces are on. In this interval,
depth calculations must be made using the plane coefficients for the two
surfaces. For this example, the depth of surface S1 is assumed to be less than
that of S2, so intensities for surface S1 are loaded into the refresh buffer until
boundary BC is encountered. Then the flag for surface S1 goes off, and intensities
for surface S2 are stored until edge FG is passed. We can take advantage of
coherence along the scan lines as we pass from one scan line to the next. In the figure, scan line 3 has the same active list of edges as scan line 2. Since no changes have occurred in line intersections, it is unnecessary to make the depth calculations between edges EH and BC again. The two surfaces must be in the same orientation as determined on scan line 2, so the intensities for surface S1 can be entered without further calculations.
Any number of overlapping polygon surfaces can be processed with this
scan-line method. Flags for the surfaces are set to indicate whether a position is
inside or outside, and depth calculations are performed when surfaces overlap.
When these coherence methods are used, we need to be careful to keep track
of which surface section is visible on each scan line. This works only if surfaces
do not cut through or otherwise cyclically overlap each other.
If any kind of cyclic overlap is present in a scene, we can divide the
surfaces to eliminate the overlaps. The dashed lines in this figure indicate
where planes could be subdivided to form two distinct surfaces, so that the
cyclic overlaps are eliminated.
List-Priority Algorithms:

List-priority algorithms are classified into two types:


Binary Space-Partitioning (BSP) Tree Algorithm:
A BSP tree is built by recursively subdividing the scene with the planes of selected polygons; at display time the tree is traversed relative to the viewpoint so that polygons are painted in back-to-front order, as in the sketch below.
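A minimal sketch of that display traversal (the node fields and the side_of test are illustrative assumptions):

def bsp_paint(node, eye, draw):
    # Back-to-front traversal: paint the subtree on the far side of the
    # partitioning plane first, then the node's own polygon, then the
    # near side, so nearer polygons overwrite farther ones.
    if node is None:
        return
    if node.plane.side_of(eye) > 0:     # viewer on the front side
        bsp_paint(node.back, eye, draw)
        draw(node.polygon)
        bsp_paint(node.front, eye, draw)
    else:                               # viewer on the back side
        bsp_paint(node.front, eye, draw)
        draw(node.polygon)
        bsp_paint(node.back, eye, draw)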
5. Depth Sort / Priority / Painter's Algorithm:
Surfaces are sorted by depth and painted into the frame buffer from farthest to nearest, so that nearer surfaces overwrite farther ones, just as a painter lays down the background before the foreground. When depth ranges overlap, additional tests, and possibly the splitting of surfaces, are needed before the ordering is valid.
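A minimal sketch of the simple case, where sorting by depth alone resolves visibility (the max_depth field is an illustrative assumption; cyclic overlaps are ignored):

def painters_algorithm(surfaces, draw):
    # Paint farthest-first so nearer surfaces overwrite farther ones.
    # Uses the convention from the z-buffer section: a larger depth
    # value means the surface is nearer the view plane.
    for surface in sorted(surfaces, key=lambda s: s.max_depth):
        draw(surface)   # farthest surfaces are drawn first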
Ray Tracing:
Ray tracing, also known as ray casting, is an efficient method for visibility detection. It is most effective for objects with curved surfaces, but it can also be used for polygon surfaces.

• Trace the path of an imaginary ray from the viewing position (eye) through the view plane to the objects in the scene.
• Identify the visible surface by determining which surface is intersected first by the ray.
• It can easily be combined with lighting algorithms to generate shadows and reflections.
• It is good for curved surfaces but too slow for real-time applications.
Ray casting, as a visibility detection tool, is based on geometric optics methods, which trace the paths of
light rays. Since there are an infinite number of light rays in a scene and we are interested only in those
rays that pass through pixel positions, we can trace the light-ray paths backward from the pixels through
the scene. The ray-casting approach is an effective visibility-detection method for scenes with curved
surfaces, particularly spheres.
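A minimal sketch of the first-intersection test for a scene of spheres (the scene representation is an assumption for illustration):

import math

def nearest_hit(origin, direction, spheres):
    # Each sphere is (center, radius). Solve |O + tD - C|^2 = r^2 and
    # keep the smallest positive t: the first surface the ray meets.
    best_t, best_sphere = math.inf, None
    for center, radius in spheres:
        oc = tuple(o - c for o, c in zip(origin, center))
        a = sum(d * d for d in direction)
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0:
            continue                    # the ray misses this sphere
        t = (-b - math.sqrt(disc)) / (2.0 * a)
        if 0.0 < t < best_t:            # nearer than the best hit so far
            best_t, best_sphere = t, (center, radius)
    return best_sphere, best_t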
