Oct 2013
Chapter 1 – Graphic Systems and Models
Section 1.3 – Images Physical and Synthetic
Two basic entities must be part of any image formation process…
1. Object
2. Viewer
The visible spectrum of light for humans is from 350 to 780 nm
A light source is characterized by its intensity and direction
A ray is a semi-infinite line that emanates from a point and travels to infinity
Ray tracing and photon mapping are examples of image formation techniques
Section 1.5 – Synthetic Camera Model
The conceptual foundation for modern three dimensional computer graphics is the synthetic camera
model.
A few basic principles include:
The specification of objects is independent of the specification of the viewer
Compute the image using simple geometric calculations
COP – Center of projection (center of the lens)
With synthetic cameras we move the clipping to the front by placing a clipping rectangle, or clipping
window in the projection plane. This acts as a window through which we view the world.
Section 1.7 – Graphics Architectures
2 main approaches…
1. Object Oriented Pipeline – vertices travel through the pipeline that determines the color
and pixel positions.
2. Image Oriented Pipeline – loop over pixels. For each pixel work backwards to determine
which geometric primitives can contribute to its color.
Object Oriented Pipeline
History
Graphics architecture has progressed from a single central processor doing all graphics work to a pipeline
model. Pipeline architecture reduces the total processing time for a render (think of it as multiple
specialized processors, each performing a function and then passing the result on to the next
processor).
Advantages
Each primitive can be processed independently which leads to fast performance
Memory requirements reduced because not all objects are needed in memory at the same time
Disadvantages
Cannot handle most global effects such as shadows, reflections and blending in a physically
correct manner
4 major steps in pipeline…
1. Vertex Processing
a. Does coordinate transformations
b. Computes color for each vertex
2. Clipping and Primitive Assembly
a. Clipping is performed on a primitive-by-primitive basis
3. Rasterization
a. Convert from vertices to fragments
b. Output of rasterizer is a set of fragments
4. Fragment Processing
a. Takes fragments generated by rasterizer and updates pixels
Fragments – think of them as potential pixels that carry information including color, location
and depth.
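As a rough illustration of the four stages (not actual OpenGL code – the transform, volume and window size are all simplifying assumptions), the pipeline can be sketched as plain C functions:

```c
/* Toy sketch of the pipeline stages, assuming a canonical clip volume
   |x|,|y|,|z| <= w (illustrative only, not the real OpenGL pipeline). */
#include <stdbool.h>

typedef struct { float x, y, z, w; } Vec4;

/* 1. Vertex processing: transform a vertex (here, just a translation). */
Vec4 vertex_stage(Vec4 v, float tx, float ty, float tz) {
    Vec4 r = { v.x + tx * v.w, v.y + ty * v.w, v.z + tz * v.w, v.w };
    return r;
}

/* 2. Clipping: keep vertices inside the canonical volume. */
bool inside_clip_volume(Vec4 v) {
    float w = v.w < 0 ? -v.w : v.w;
    return v.x >= -w && v.x <= w &&
           v.y >= -w && v.y <= w &&
           v.z >= -w && v.z <= w;
}

/* 3./4. Rasterization + fragment processing stand-in: map normalized
   device coordinates in [-1,1] to window (pixel) coordinates. */
void ndc_to_window(float x, float y, int width, int height, int *px, int *py) {
    *px = (int)((x + 1.0f) * 0.5f * (width  - 1));
    *py = (int)((y + 1.0f) * 0.5f * (height - 1));
}
```

A vertex flows through these stages in order: transformed, tested against the clip volume, then mapped to pixels where fragments are produced.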
6 major frames that occur in OpenGL…
1. Object / Model Coordinates
2. World Coordinates
3. Eye or Camera Coordinates
4. Clip Coordinates
5. Normalized Device Coordinates
6. Window or Screen Coordinates
Example Questions for Chapter 1
Textbook Question 1.1
What are the main advantages and disadvantages of the preferred method to form computer
generated images discussed in this chapter?
Textbook Question 1.5
Each image has a set of objects and each object comprises a set of graphical primitives. What does
each primitive comprise? What are the major steps in the imaging process?
Exam Jun 2011 1.a (6 marks)
Differentiate between the object oriented and image oriented pipeline implementation strategies
and discuss the advantages of each approach. What strategy does OpenGL use?
Exam Jun 2012 1.a (4 marks)
What is the main advantage and disadvantage of using the pipeline approach to form computer
generated images?
Exam Jun 2012 1.b (4 marks)
Differentiate between the object oriented and image oriented pipeline implementation strategies
Exam Jun 2012 1.c (4 marks)
Name the frames in the usual order in which they occur in the OpenGL pipeline
Exam Jun 2013 1.3 (3 marks)
Can the standard OpenGL pipeline easily handle light scattering from object to object? Explain.
Chapter 2 – Graphics Programming
Key concepts that need to be understood…
Typical composition of Vertices / Primitive Objects
Size & Colour
Immediate mode vs. retained mode graphics
Immediate mode
 Used to be the standard method for displaying graphics
 There is no memory of the geometric data stored
 Large overhead in time needed to transfer drawing instructions and model data for each
cycle to the GPU
Retained mode graphics
 Has the data stored in a data structure which allows it to redisplay the data with the option
of slight modifications (i.e. change color) by resending the array without regenerating the
points.
Retained mode is the opposite of immediate: most rendering data is preloaded onto the graphics
card and thus when a render cycle takes place, only render instructions, and not data, are sent.
Both immediate and retained mode can be used at the same time on all graphics cards, though the
moral of the story is that if possible, use retained mode to improve performance.
Coordinate Systems
Device Dependent Graphics – Originally graphic systems required the user to specify all information
directly in units of the display device (i.e. pixels).
Device Independent Graphics – Allows users to work in any coordinate system that they desire.
World coordinate system – coordinate system that the user decides to work in
Vertex coordinates – the units that an application program uses to specify vertex positions.
At some point with device independent graphics the values in the vertex coordinate system must be
mapped to window coordinates. The graphic system rather than the user is now responsible for this
task and mapping is performed automatically as part of the rendering process.
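The automatic mapping can be sketched as a simple linear map from a user-chosen world rectangle to window pixels (the rectangle bounds and window size are illustrative assumptions, not an OpenGL API):

```c
/* Sketch of device-independent graphics: the user works in an arbitrary
   world rectangle, and the system maps it to window (pixel) coordinates. */
typedef struct { float left, right, bottom, top; } WorldRect;

/* Linearly map a world-coordinate point into a window of the given pixel size. */
void world_to_window(WorldRect wr, float wx, float wy,
                     int win_w, int win_h, int *px, int *py) {
    *px = (int)((wx - wr.left)   / (wr.right - wr.left)   * (win_w - 1));
    *py = (int)((wy - wr.bottom) / (wr.top   - wr.bottom) * (win_h - 1));
}
```

The user only ever supplies world coordinates; the graphics system applies this kind of mapping as part of rendering.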
Color – RGB vs. Indexed
With both the indexed and RGB color models the number of colors that can be displayed depends on
the depth of the frame (color) buffer.
Indexed Color Model
In the past, memory was expensive and small and displays had limited colors.
This meant that the indexed-color model was preferred because…
 It had lower memory requirements
 Displays had limited colors available.
In an indexed color model a color lookup table is used to identify which color to display.
Color indexing presented 2 major problems
1) When working with dynamic images that needed shading we would typically need more
colors than were provided by the color index mode.
2) The interaction with the window system is more complex than with RGB color.
RGB Color Model
As hardware has advanced, RGB has become the norm.
Think of RGB conceptually as three separate buffers, one for red, green and blue. It allows us to
specify the proportion of red, green and blue in a single pixel. In OpenGL this is often stored in a
three dimensional vector.
The RGB color model can become unsuitable when the depth of the frame buffer is small because shades
become too distinct/discrete.
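A minimal sketch of the indexed-color idea: the frame buffer stores small indices and a color lookup table maps each index to an RGB triple at display time (the palette contents here are arbitrary assumptions):

```c
/* Indexed-color sketch: frame buffer holds indices, a lookup table
   (palette) supplies the displayed RGB values. */
typedef struct { unsigned char r, g, b; } RGB;

#define PALETTE_SIZE 4

/* Color lookup table: index -> displayed color. */
static const RGB palette[PALETTE_SIZE] = {
    {0,   0,   0},    /* 0: black */
    {255, 0,   0},    /* 1: red   */
    {0,   255, 0},    /* 2: green */
    {255, 255, 255},  /* 3: white */
};

/* Resolve one pixel of an indexed frame buffer to its displayed color. */
RGB resolve_pixel(unsigned char index) {
    return palette[index % PALETTE_SIZE];
}
```

With a 2-bit frame buffer only these 4 entries are addressable, which is exactly the shading limitation described above.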
Viewing – Orthographic and Two Dimensional
The orthographic view is the simplest and OpenGL’s default view. Mathematically, the orthographic
projection is what we would get if the camera in our synthetic camera model had an infinitely long
telephoto lens and we could then place the camera infinitely far from our objects.
In OpenGL, an orthographic projection with a right-parallelepiped viewing volume is the default. The
orthographic projection “sees” only those objects in the volume specified by the viewing volume.
Two dimensional viewing is a special case of three dimensional graphics. Our viewing area is in the
plane z = 0, within a three dimensional viewing volume. The area of the world that we image is
known as the viewing rectangle, or clipping rectangle. Objects inside the rectangle are in the image;
objects outside are clipped out.
Aspect Ratio and Viewports
Aspect Ratio – The aspect ratio of a rectangle is the ratio of the rectangle’s width to its height. The
independence of the object, viewing, and workstation window specifications can cause undesirable
side effects if the aspect ratio of the viewing rectangle is not the same as the aspect ratio of the
window specified.
In GLUT we use glutInitWindowSize to set this. Side effects can include distortion. Distortion is a
consequence of our default mode of operation, in which the entire clipping rectangle is mapped to
the display window.
Clipping Rectangle – The only way we can map the entire contents of the clipping rectangle to the
entire display window is to distort the contents of clipping rectangle to fit inside the display window.
This is avoided if the display window and clipping rectangle have the same aspect ratio.
Viewport – Another more flexible approach is to use the concept of a viewport. A viewport is a
rectangular area of the display window. By default it is the entire window, but it can be set to any
smaller size in pixels.
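One way to use a viewport to avoid distortion is to compute the largest viewport with the clipping rectangle's aspect ratio that fits inside the display window; the result could then be passed to glViewport. A sketch (function name and centring behaviour are assumptions):

```c
/* Compute the largest viewport matching clip_aspect (width/height)
   that fits inside a win_w x win_h window, centred in the window. */
typedef struct { int x, y, w, h; } Viewport;

Viewport fit_viewport(int win_w, int win_h, float clip_aspect) {
    Viewport vp;
    float win_aspect = (float)win_w / (float)win_h;
    if (win_aspect > clip_aspect) {
        /* Window too wide: use full height, centre horizontally. */
        vp.h = win_h;
        vp.w = (int)(win_h * clip_aspect);
        vp.x = (win_w - vp.w) / 2;
        vp.y = 0;
    } else {
        /* Window too tall: use full width, centre vertically. */
        vp.w = win_w;
        vp.h = (int)(win_w / clip_aspect);
        vp.x = 0;
        vp.y = (win_h - vp.h) / 2;
    }
    return vp;
}
```

In a reshape callback one would then call glViewport(vp.x, vp.y, vp.w, vp.h) so the clipping rectangle maps without distortion.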
OpenGL Programming Basics
Event Processing
Event processing allows us to program how we would like the system to react to certain events.
These could include mouse, keyboard or window events.
Callbacks (Display, Reshape, Idle, Keyboard, Mouse)
Each event has a callback that is specified. The callback is used to trigger actions when an event
occurs.
The idle callback is invoked when there are no other events to process. A typical use of the idle
callback is to continue to generate graphical primitives through a display function while nothing else
is happening.
Hidden Surface Removal
Given the position of the viewer and the objects being rendered we should be able to draw the
objects in such a way that the correct image is obtained. Algorithms for ordering objects so that they
are drawn correctly are called visible-surface algorithms (or hidden-surface-removal algorithms).
Z-buffer algorithm – A common hidden-surface-removal algorithm supported by OpenGL.
Double buffering
Why we need double buffering
Because an application program typically works asynchronously, changes can occur to the display
buffer at any time. Depending on when the display is updated, this can cause the display to show
partially updated results.
What is double buffering
A way to avoid partial updates. Instead of a single frame buffer, the hardware has two frame buffers.
Front buffer – the buffer that is displayed
Back buffer – the buffer that is being updated
Once updating the back buffer is complete, the front and back buffer are swapped. The new back
buffer is then cleared and the system starts updating it.
To trigger a refresh using double buffering in OpenGL we call glutSwapBuffers();
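The front/back swap can be modelled without any graphics hardware; the sketch below (buffer size and contents are arbitrary assumptions) shows why the displayed buffer never contains a half-finished frame:

```c
/* Toy model of double buffering: draw into the back buffer, then swap,
   so the displayed (front) buffer never shows a partial update. */
#include <string.h>

#define BUF_SIZE 4

typedef struct {
    int buffers[2][BUF_SIZE];
    int front;               /* index of the buffer being displayed */
} DoubleBuffer;

int *back_buffer(DoubleBuffer *db)  { return db->buffers[1 - db->front]; }
int *front_buffer(DoubleBuffer *db) { return db->buffers[db->front]; }

/* Equivalent in spirit to glutSwapBuffers(): flip front and back,
   then clear the new back buffer ready for the next frame. */
void swap_buffers(DoubleBuffer *db) {
    db->front = 1 - db->front;
    memset(back_buffer(db), 0, BUF_SIZE * sizeof(int));
}
```

All drawing goes into back_buffer(); the viewer only ever sees front_buffer(), which changes atomically at the swap.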
Menus
GLUT provides pop-up menus that can be used.
An example of doing this in code would be…
void demo_menu(int id)
{
//react to menu
}

glutCreateMenu(demo_menu); //Create callback for menu (demo_menu must be declared first)
glutAddMenuEntry("quit", 1);
glutAddMenuEntry("start rotation", 2);
glutAttachMenu(GLUT_RIGHT_BUTTON);
Purpose of glFlush statements
Similar to I/O buffering on a computer, OpenGL commands are not executed immediately. All commands are
first stored in buffers, including network buffers and buffers on the graphics accelerator itself, and
await execution until the buffers are full. For example, if an application runs over the network, it is
much more efficient to send a collection of commands in a single packet than to send each command over
the network one at a time.
glFlush() – empties all commands in these buffers and forces all pending commands to be executed
immediately without waiting for the buffers to fill. glFlush() guarantees that all OpenGL commands
made up to that point will complete execution in a finite amount of time after calling glFlush().
glFlush() does not wait until previous executions are complete and may return immediately to your
program. So you are free to send more commands even though previously issued commands are not
finished.
Vertex Shaders and Fragment Shaders
OpenGL requires a minimum of a vertex and fragment shader.
Vertex Shader
A simple vertex shader determines the color and passes the vertex location to the fragment shader.
The absolute minimum a vertex shader must do is send a vertex location to the rasterizer.
In general a vertex shader will transform the representation of a vertex location from whatever
coordinate system in which it is specified to a representation in clip coordinates for the rasterizer.
Shaders are written using GLSL (which resembles a simplified C).
Example would be ….
in vec4 vPosition;
void main()
{
gl_Position = vPosition;
}
gl_Position is a special variable known by OpenGL and used to pass data to the rasterizer.
Fragment Shader
Each invocation of the vertex shader outputs a vertex that then goes through primitive assembly and
clipping before reaching the rasterizer. The rasterizer outputs fragments for each primitive inside the
clipping volume. Each fragment invokes an execution of the fragment shader.
At a minimum, each execution of the fragment shader must output a color for the fragment unless
the fragment is to be discarded.
void main()
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
At a bare minimum, shaders need to be compiled and linked for things to work.
Example Questions for Chapter 2
Exam Nov 2011 1.1 (4 marks)
Explain what double buffering is and how it is used in computer graphics.
Exam Jun 2011 6.a (4 marks)
Discuss the difference between the RGB color model and the indexed color model with respect to the
depth of the frame (color) buffer.
Exam Nov 2012 5.4 (4 marks)
Discuss the difference between the RGB color model and the indexed color model with respect to
the depth of the frame (color) buffer.
Exam Nov 2012 1.1 (3 marks)
A real-time graphics program can use a single frame buffer for rendering polygons, clearing the
buffer, and repeating the process. Why do we usually use two buffers instead?
Exam Jun 2013 8.2 (5 marks)
GLUT uses a callback function event model. Describe how it works and state the purpose of the idle,
display and reshape callbacks.
Exam Jun 2013 8.3 (1 marks)
What is the purpose of the OpenGL glFlush statement.
Exam Jun 2013 8.4 (1 marks)
Is the following code a fragment or vertex shader
in vec4 vPosition;
void main() {gl_Position = vPosition;}
Exam Jun 2013 1.1 (4 marks)
Explain the difference between immediate mode graphics and retained mode graphics.
Exam Jun 2013 1.2 (2 marks)
Name two artifacts in computer graphics that may commonly be specified at the vertices of a
polygon and then interpolated across the polygon to give a value for each fragment within the
polygon.
Chapter 3 – Geometric Objects and Transformations
Key concepts you should know in this chapter are the following:
Surface Normals
Normals are vectors that are perpendicular to a surface. They can be used to describe the
orientation or direction of that surface.
Uses of surface normals include…
Together with a point, a normal can be used to specify the equation of a plane
The shading of objects depends on the orientation of their surfaces, a factor that is characterized
by the normal vector at each point.
Flat shading uses a single surface normal for a polygon, so the computed shade is the same at all
points on the surface.
Calculating smooth (Gouraud and Phong) shading.
Ray tracing and light interactions can be calculated from the angle of incidence and the normal.
Homogeneous Coordinates
Because there can be confusion between vectors and points we use homogeneous coordinates.
For a point, the fourth coordinate is 1 and for a vector it is 0. For example…
The point (4,5,6) is represented in homogeneous coordinates by (4,5,6,1)
The vector (4,5,6) is represented in homogeneous coordinates by (4,5,6,0)
Advantages of homogeneous coordinates include…
All affine (line preserving) transformations can be represented as matrix multiplications in
homogeneous coordinates.
Less arithmetic work is involved.
The uniform representation of all affine transformations makes carrying out successive
transformations far easier than in three dimensional space.
Modern hardware implements homogeneous coordinate operations directly, using parallelism to
achieve high speed calculations.
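A small sketch of why the fourth (w) coordinate matters: a translation matrix moves points (w = 1) but leaves vectors (w = 0) unchanged. For a translation matrix the full 4x4 multiply reduces to adding w-scaled offsets:

```c
/* Homogeneous-coordinate sketch: apply a translation T(dx,dy,dz)
   to a 4-tuple. Points (w=1) move; vectors (w=0) do not. */
typedef struct { float v[4]; } Hom4;

Hom4 translate(Hom4 p, float dx, float dy, float dz) {
    Hom4 r = p;
    r.v[0] += dx * p.v[3];
    r.v[1] += dy * p.v[3];
    r.v[2] += dz * p.v[3];
    return r;
}
```

Using the notes' example, the point (4,5,6,1) translated by (1,1,1) becomes (5,6,7,1), while the vector (4,5,6,0) is unchanged.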
Instance Transformations
An instance transformation is the product of a translation, a rotation and a scaling.
The order of the transformations that comprise an instance transformation will affect the outcome.
For instance, if we rotate a square before we apply a non-uniform scale, we will shear the square,
something we cannot do if we scale then rotate.
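The order dependence can be demonstrated with two tiny 2D transforms (a 90° rotation is chosen here only because it keeps the arithmetic exact; the choice of angle and scale factors is an assumption):

```c
/* Demonstrate that rotation and non-uniform scaling do not commute. */
typedef struct { float x, y; } Vec2;

/* Rotate a point 90 degrees anticlockwise about the origin. */
Vec2 rotate90(Vec2 p) {
    Vec2 r = { -p.y, p.x };
    return r;
}

/* Non-uniform scale about the origin. */
Vec2 scale(Vec2 p, float sx, float sy) {
    Vec2 r = { sx * p.x, sy * p.y };
    return r;
}
```

Applying rotate-then-scale versus scale-then-rotate to the same point gives different results, which is the shearing effect described above.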
Frames in OpenGL
The following is the usual order in which the frames occur in the pipeline.
1) Object (or model) coordinates
2) World coordinate
3) Eye (or camera) coordinates
4) Clip coordinates
5) Normalized device coordinates
6) Window (or screen) coordinates
Model Frame (Represents an object we want to render in our world).
A scene may comprise many models – each is oriented, sized and positioned in the World
coordinate system.
World Frame – also referred to as the application frame – represents values in world coordinates.
If we do not apply transformations to our object frames, the world and model coordinates are the
same.
The camera frame (or eye frame) is a frame whose origin is the center of the camera lens and whose
axes are aligned with the sides of the camera.
Because there is an affine transformation that corresponds to each change of frame, there are 4x4
matrices that represent the transformation from model coordinates to world coordinates and from
world coordinates to eye coordinates. These transformations are usually concatenated together into
the modelview transformation, which is specified by the modelview matrix.
After transformation, vertices are still represented in homogeneous coordinates. The division by the
w component, called perspective division, yields three dimensional representations in normalized
device coordinates.
The final transformation takes a position in normalized device coordinates and, taking into account the
viewport, creates a three dimensional representation in window coordinates.
Translation, Rotation, Scaling and Shearing
Know how to perform Translation, Rotation, Scaling and Shearing (You do not have to learn off the
matrices, they will be given to you if necessary).
Affine transformation  An affine transformation is any transformation that preserves collinearity
(i.e., all points lying on a line initially still lie on a li ne after transformation) and ratios of distances
(e.g., the midpoint of a line segment remains the midpoint after transformation).
Rigid-body Transformations – Rotation and translation are known as rigid-body transformations. No
combination of rotations and translations can alter the shape or volume of an object; they can alter
only the object’s location and orientation.
Within a frame, each affine transformation is represented by a 4x4 matrix of the form…
Translation
Translation displaces points by a fixed distance in a given direction.
P’=P+d
We can also get the same result using the matrix multiplication
P’ = Tp
where…
T is called the translation matrix
Rotation
Two dimensional rotations
Three dimensional rotations.
Rotation about the x-axis by an angle φ followed by rotation about the y-axis by an angle φ does not
give us the same result as the one that we obtain if we reverse the order of rotations.
Scaling
P’ = sP, where…
Shear
Sections that are not examinable include 3.13 & 3.14
Examples of different types of matrices are below…
Example Questions for Chapter 3
Exam Jun 2011 2 (6 marks)
Consider the diagram below and answer the question that follows…
a) Determine the transformation matrix which will transform the square ABCD to the square
A’B’C’D’. Show all workings.
Hint: Below are the transformation matrices for clockwise and anticlockwise rotation about the z-axis.
b) Using the transformation matrix in a, calculate the new position of A’ if the transformation
was performed on A’B’C’D’
Exam Nov 2011 2.1 & 2.2 (6 marks)
Consider a triangular prism with vertices a,b,c,d,e and f at (0,0,0),(1,0,0),(0,0,1),(0,2,0),(1,2,0) and
(0,2,1), respectively.
Perform scaling by a factor of 15 along the x-axis. (2 marks)
Then perform a clockwise rotation by 45° about the y-axis (4 marks)
Hint: The transformation matrix for rotation about the y-axis is given alongside (where theta is the
angle of rotation)
Exam Nov 2012 2.2 (6 marks)
Consider the following 4x4 matrices…
Which of the matrices reflect the following (give the correct letter only)
2.2.1 Identity Matrix (no effect)
2.2.2 Uniform Scaling
2.2.3 Non-uniform scaling
2.2.4 Reflection
2.2.5 Rotation about z
2.2.6 Rotation
Exam June 2012 2.a (1 marks)
What is an instance transformation?
Exam June 2012 2.b (3 marks)
Will you get the same effect if the order of transformations that comprise an instance
transformation were changed? Explain using an example.
Exam June 2012 2.c (4 marks)
Provide a mathematical proof to show that rotation and uniform scaling commute.
Exam June 2013 2.1 (3 marks)
Do the following transformation sequences commute? If they do commute under certain conditions
only, state those conditions.
2.1.1 Rotation
2.1.2 Rotation and Scaling
2.1.3 Two Rotations
Exam June 2013 2.2 (5 marks)
Consider a line segment (in 3 dimensions) with endpoints a and b at (0,1,0) and (1,2,3) respectively.
Compute the coordinates of vertices that result after each application of the following sequence of
transformations of the line segment.
2.2.1 Perform scaling by a factor of 3 along the x-axis
2.2.2 Then perform a translation of 2 units along the y-axis
2.2.3 Finally perform an anticlockwise rotation by 60° about the z-axis
Hint – the transformation matrix for rotation about the z-axis is given below (where omega is the
angle of anticlockwise rotation)
Chapter 4  Viewing
Important concepts for Chapter 4 include…
Planar Geometric Projections are the class of projections produced by parallel and perspective
views. A planar geometric projection is a projection where the surface is a plane and the projectors
are lines.
4.1 Classical and Computer Viewing
Two types of views
1. Perspective Views – Views with finite COP
2. Parallel Views – Views with infinite COP
Classical and computer viewing COP / DOP
COP – Center of Projection. For computer viewing it is the origin of the camera frame for perspective views.
DOP – Direction of projections
PRP – Projection Reference Point
In classical viewing there is an underlying notion of a principal face.
Different types of classical views include…
Parallel Viewing
Orthographic Projections – parallel view – shows a single plane
Axonometric Projections – parallel view – projectors are still orthogonal to the projection plane
but the projection plane can have any orientation with respect to the object (isometric,
dimetric and trimetric views)
Oblique Projections – most general parallel view – most difficult views to construct by hand.
Perspective Viewing
Characterized by diminution of size
Classical perspective views are known as one, two and three point perspective
Parallel lines in each of the three principal directions converge to a finite vanishing point
Perspective foreshortening – The farther an object is from the center of projection, the smaller it
appears. Perspective drawings are characterized by perspective foreshortening and vanishing points.
Perspective foreshortening is the illusion that objects and lengths appear smaller as their distance
from the center of projection increases. The points at which parallel lines appear to converge are
called vanishing points. Principal vanishing points are formed by the apparent intersection of lines
parallel to one of the three x, y or z axes. The number of principal vanishing points is determined by
the number of principal axes intersected by the view plane.
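Foreshortening follows from similar triangles in a pinhole model: with the center of projection at the origin and a projection plane at distance d, projected size is proportional to 1/z. A sketch (the pinhole setup and d are assumptions):

```c
/* Pinhole-camera sketch of perspective foreshortening: an object of a
   given size at depth z projects, by similar triangles, to size*d/z
   on a projection plane at distance d from the center of projection. */
float projected_size(float object_size, float d, float z) {
    return object_size * d / z;
}
```

Doubling the distance from the center of projection halves the projected size, which is the diminution of size that characterizes perspective views.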
4.2 Viewing with a computer – read only
4.3.1 Positioning of camera frame – read only
4.3.2 Normalization
Normalization transformation – specification of the projection matrix
VRP – View Reference Point
VRC – View Reference Coordinate
VUP – View Up Vector  the up direction of the camera
VPN – View Plane Normal – the orientation of the projection plane or back of the camera
Camera is positioned at the origin, pointing in the negative z direction. Camera is centered at a point
called the View Reference Point (VRP). Orientation of the camera is specified by View Plane Normal
(VPN) and View Up Vector (VUP). The View Plane Normal is the orientation of the projection plane
or back of camera. The orientation of the plane does not specify the up direction of the camera
hence we have View Up Vector (VUP) which is the up direction of the camera. VUP fixes the camera.
Viewing Coordinate System – The orthogonal coordinate system (see pg 240)
View Orientation Matrix – the matrix that does the change of frames. It is equivalent to the viewing
component of the modelview matrix. (Not necessary to know formulae or derivations)
4.3.3 Lookat function
The use of VRP, VPN and VUP is but one way to provide an API for specifying the position of a
camera.
The LookAt function creates a viewing matrix derived from an eye point, a reference point indicating
the center of the scene, and an up vector. The matrix maps the reference point to the negative z-axis
and the eye point to the origin, so that when you use a typical projection matrix, the center of the
scene maps to the center of the viewport. Similarly, the direction described by the up vector
projected onto the viewing plane is mapped to the positive y-axis so that it points upward in the
viewport. The up vector must not be parallel to the line of sight from the eye to the reference point.
Eye (e) and at (a) points – VPN = a − e (Not necessary to know other formulae or derivations)
4.3.4 – Other Viewing APIs  read only
4.4 Parallel Projections
A parallel projection is the limit of a perspective projection in that the center of projection (COP) is
infinitely far from the object being viewed.
Orthogonal projections – a special kind of parallel projection in which the projectors are perpendicular to
the view plane. A single orthogonal view is restricted to one principal face of an object.
Axonometric view – projectors are perpendicular to the projection plane but projection plane can
have any orientation with respect to object.
Oblique projections – projectors are parallel but can make an arbitrary angle to the projection plane
and projection plane can have any orientation with respect to object.
Projection Normalization – a process using translation and scaling that will transform vertices in
camera coordinates to fit inside the default view volume. (see page 247/248 for detailed
explanation).
4.4.5 – Oblique Projections – Leave out
4.5 Perspective projections
Perspective projections are what we get with a camera whose lens has a finite focal length, or in terms of
our synthetic camera model, the center of projection is finite.
4.5.1 Simple Perspective Projections – Not necessary to know formulae and derivations
Read pg. 257
4.6 View volume, Frustum, Perspective Functions
Two perspective functions you need to know…
1. mat4 Frustum(left, right, bottom, top, near, far)
2. mat4 Perspective(fovy, aspect, near, far)
(All variables are of type GLfloat)
View Volume (Canonical)
The view volume can be thought of as the volume that a real camera would see through its lens
(Except that it is also limited in distance from the front and back). It is a section of 3D space that is
visible from the camera or viewer between two distances.
When using orthogonal (or parallel) projections, the view volume is rectangular. In OpenGL, an
orthographic projection is defined with the function call glOrtho(left, right, bottom, top, near, far).
When using perspective projections, the view volume is a frustum and has a truncated pyramid
shape. In OpenGL, a perspective projection is defined with the function call glFrustum(xmin, xmax,
ymin, ymax, near, far) or gluPerspective(fovy, aspect, near, far).
NB: Not necessary to know formulae or derivations.
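Projection normalization for an orthographic view amounts to an affine map from the viewing box to the canonical cube [-1,1]³. A simplified sketch (OpenGL's z-axis sign convention is deliberately ignored here, so this is an illustration rather than the exact glOrtho matrix):

```c
/* Simplified orthographic normalization: map the viewing box
   [l,r] x [b,t] x [zmin,zmax] onto the canonical cube [-1,1]^3. */
typedef struct { float x, y, z; } Point3;

Point3 ortho_normalize(Point3 p, float l, float r, float b, float t,
                       float zmin, float zmax) {
    Point3 q;
    q.x = 2.0f * (p.x - l)    / (r - l)       - 1.0f;
    q.y = 2.0f * (p.y - b)    / (t - b)       - 1.0f;
    q.z = 2.0f * (p.z - zmin) / (zmax - zmin) - 1.0f;
    return q;
}
```

After this step clipping can always be done against the same fixed cube, which is the point of normalization: one clipper works for every view.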
4.7 PerspectiveProjection Matrices – read only
4.8 Hidden surface removal
Conceptually we seek algorithms that either remove those surfaces that should not be visible to the
viewer, called hidden-surface-removal algorithms, or find which surfaces are visible, called
visible-surface algorithms.
OpenGL has a particular algorithm associated with it, the z-buffer algorithm, to which we can
interface through three function calls.
Hidden-surface-removal algorithms can be divided into two broad classes…
1. Object-space algorithms
2. Image-space algorithms
Object-space algorithms
Object-space algorithms attempt to order the surfaces of the objects in the scene such that
rendering surfaces in a particular order provides the correct image, e.g. render the objects furthest
back first.
This class of algorithms does not work well with pipeline architectures in which objects are passed
down the pipeline in an arbitrary order. In order to decide on a proper order in which to render the
objects, the graphics system must have all the objects available so it can sort them into the desired
back-to-front order.
Depth Sort Algorithm
All polygons are rendered with hidden surfaces removed as a consequence of back-to-front rendering
of polygons. Depth sort orders the polygons by how far away from the viewer their maximum
z-value is. If the minimum depth (z-value) of a given polygon is greater than the maximum depth of
the polygon behind the one of interest, we can render the polygons back to front.
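The ordering step of depth sort can be sketched with a standard sort; representing a polygon's depth by a single z value per polygon is a simplifying assumption (real depth sort must also handle overlapping depth ranges and split polygons):

```c
/* Sketch of the depth-sort ordering step: sort polygons so that the
   farthest (largest depth) is rendered first, i.e. back to front. */
#include <stdlib.h>

typedef struct { int id; float max_depth; } Polygon;

/* qsort comparator: larger depth sorts earlier (back-to-front order). */
static int by_depth_desc(const void *a, const void *b) {
    float da = ((const Polygon *)a)->max_depth;
    float db = ((const Polygon *)b)->max_depth;
    return (da < db) - (da > db);
}

void depth_sort(Polygon *polys, size_t n) {
    qsort(polys, n, sizeof(Polygon), by_depth_desc);
}
```

Rendering the sorted list in order then paints nearer polygons over farther ones, which is why the technique is also called the painter's algorithm.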
Image-space algorithms
Image-space algorithms work as part of the projection process and seek to determine the
relationship among object points on each projector. The z-buffer algorithm is an example of this.
Z-Buffer Algorithm
The basic idea of the z-buffer algorithm is that for each fragment on the polygon, corresponding to
the intersection of the polygon with a ray (from the COP) through a pixel, we compute the depth
from the center of projection. If the depth is greater than the depth currently stored in the z-buffer,
the fragment is ignored; otherwise the z-buffer is updated and the color buffer is updated with the
new fragment color. Ultimately we display only the closest point on each projector. The algorithm
requires a depth buffer, or z-buffer, to store the necessary depth information as polygons are
rasterized.
Because we must keep depth information for each pixel in the color buffer, the z-buffer has the
same spatial resolution as the color buffers. The depth buffer is initialized to a value that
corresponds to the farthest distance from the viewer.
For instance with the diagram below, a projector from the COP passes through two surfaces.
Because the circle is closer to the viewer than the triangle, it is the circle’s color that determines
the color placed in the color buffer at the location corresponding to where the projector pierces the
projection plane.
2 Major Advantages of Z-Buffer Algorithm
 Its complexity is proportional to the number of fragments generated by the rasterizer
 It can be implemented with a small number of additional calculations over what we have to
do to project and display polygons without hidden-surface removal
Handling Translucent Objects using the Z-Buffer Algorithm
Any object behind an opaque object (solid object) should not be rendered. Any object behind a
translucent object (see-through object) should be composited.
The basic approach in the z-buffer algorithm would be…
 If the depth information allows a pixel to be rendered, it is blended (composited) with the pixel
already stored there.
 If the pixel is part of an opaque polygon, the depth data is updated.
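The per-pixel depth test, including the translucency rule, can be sketched as follows (a single color channel and a fixed 50/50 blend factor are simplifying assumptions, not the actual OpenGL blending state):

```c
/* Sketch of the z-buffer test for one pixel, with the translucency
   modification: translucent fragments blend and do not update depth. */
typedef struct {
    float depth;   /* depth currently stored in the z-buffer (smaller = nearer) */
    float color;   /* value stored in the color buffer (one channel) */
} Pixel;

void shade_fragment(Pixel *px, float frag_depth, float frag_color, int opaque) {
    if (frag_depth >= px->depth)
        return;                                /* hidden: ignore fragment */
    if (opaque) {
        px->color = frag_color;                /* opaque: overwrite color */
        px->depth = frag_depth;                /* and update the z-buffer */
    } else {
        px->color = 0.5f * px->color + 0.5f * frag_color; /* composite */
    }
}
```

Because translucent fragments leave the depth unchanged, later fragments behind them can still contribute, which is exactly why fragment order starts to matter in such scenes.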
4.9 Displaying Meshes – read only
4.10 Projections and Shadows – Know
The creation of simple shadows is an interesting application of projection matrices.
To add physically correct shadows we would typically have to do global calculations that are difficult.
This normally cannot be done in real time.
There is the concept of a shadow polygon – a flat polygon which is the projection of the
original polygon onto the surface, with the center of projection at the light source.
Shadows are easier to calculate if the light source is not moving; if it is moving, the shadows would
possibly need to be recalculated in the idle callback function.
For a simple environment such as a plane flying over a flat terrain casting a single shadow, this is an
appropriate approach. When objects can cast shadows on other objects, this method becomes
impractical.
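Each vertex of the shadow polygon lies where the ray from the light through the original vertex hits the ground. A sketch with the ground plane taken to be y = 0 (the plane choice and light position are assumptions):

```c
/* Sketch of the shadow-polygon idea: project a vertex p from the light
   position onto the ground plane y = 0 along the ray light -> p. */
typedef struct { float x, y, z; } Point3;

Point3 shadow_vertex(Point3 light, Point3 p) {
    /* Parameter t where the ray light + t*(p - light) reaches y = 0. */
    float t = light.y / (light.y - p.y);
    Point3 s;
    s.x = light.x + t * (p.x - light.x);
    s.y = 0.0f;
    s.z = light.z + t * (p.z - light.z);
    return s;
}
```

Applying this to every vertex of a polygon and rendering the result in a dark color gives the simple projected shadow described above.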
Example Questions for Chapter 4
Exam Jun 2011 3.3 (4 marks)
Differentiate between orthographic and perspective projections in terms of projectors and the
projection plane.
Exam Jun 2012 3.a (4 marks)
Define the term View Volume with respect to computer graphics and with reference to both
perspective and orthogonal views.
Exam Nov 2012 1.3 (6 marks)
Define the term “View Volume” with reference to both perspective and orthogonal views. Provide
the OpenGL functions that are used to define the respective view volumes.
Exam Jun 2011 3.b (4 marks)
Orthogonal, oblique and axonometric view scenes are all parallel view scenes. Explain the
differences between orthogonal, axonometric, and oblique view scenes.
Exam June 2013 3.1 (1 mark)
Explain what is meant by ‘non-uniform’ foreshortening of objects under a perspective camera.
Exam June 2013 3.2 (3 marks)
What is the purpose of projection normalization in the computer graphics pipeline? Name one
advantage of using this technique.
Exam June 2013 3.3 (4 marks)
Draw a view frustum. Position and name the three important rectangular planes at their correct
positions. Make sure that the position of the origin and the orientation of the z-axis are clearly
distinguishable. State the name of the coordinate system (or frame) in which the view frustum is
defined.
Exam June 2013 6.1 (2 marks)
Draw a picture of a set of simple polygons that the depth-sort algorithm cannot render without
splitting the polygons.
Exam June 2013 6.2 (3 marks)
Why can’t the standard z-buffer algorithm handle scenes with both opaque and translucent objects?
What modifications can be made to the algorithm for it to handle this?
Exam June 2012 3.a (4 marks)
Hidden surface removal can be divided into two broad classes. State and explain each of these
classes.
Exam June 2012 3.b (4 marks)
Explain the problem of rendering translucent objects using the z-buffer algorithm, and describe how
the algorithm can be adapted to deal with this problem (without sorting the polygons).
Exam June 2012 4.a (4 marks)
What is parallel projection? What specialty do orthogonal projections provide? What is the
advantage of the normalization transformation process?
Exam June 2012 4.b (2 marks)
Why are projections produced by parallel and perspective viewing known as planar geometric
projections?
Exam June 2012 4.c (4 marks)
The specification of the orientation of a synthetic camera can be divided into the specification of the
view reference point (VRP), view-plane normal (VPN) and the view-up vector (VUP). Explain each of
these.
Exam Nov 2012 3.1 (6 marks)
Differentiate between depth-sort and z-buffer algorithms for hidden surface removal.
Exam Nov 2012 3.2 (6 marks)
Briefly describe, with any appropriate equations, the algorithm for removing (or ‘culling’) back-facing
polygons. Assume that the normal points out from the visible side of the polygon.
Chapter 5 – Lighting and Shading
A surface can either emit light by self-emission, or reflect light from other surfaces that illuminate it.
Some surfaces can do both.
Rendering equation – to represent lighting correctly we would need a recursive calculation that
blends light between sources; this can be described mathematically using the rendering equation.
There are various approximations of this equation using ray tracing; unfortunately, these methods
cannot render scenes at the rate at which we can pass polygons through the modeling-projection
pipeline.
For rendering-pipeline architectures we focus on a simpler rendering model, based on the Phong
reflection model, that provides a compromise between physical correctness and efficient calculation.
Rather than looking at a global energy balance, we follow rays of light from light-emitting (or
self-luminous) surfaces that we call light sources. We then model what happens to these rays as they
interact with reflecting surfaces in the scene. This approach is similar to ray tracing, but we consider
only single interactions between light sources and surfaces.
2 independent parts of the problem…
1. Model the light sources in the scene
2. Build a reflection model that deals with the interactions between materials and light.
We need to consider only those rays that leave the source and reach the viewer’s eye (either directly
or through interactions with objects). These are the rays that reach the center of projection (COP)
after passing through the clipping rectangle.
Interactions between light and materials can be classified into three groups:
1. Specular Surfaces – appear shiny because most of the light that is reflected or scattered is
in a narrow range of angles close to the angle of reflection. Mirrors are perfectly specular
surfaces.
2. Diffuse surfaces – characterized by reflected light being scattered in all directions. Walls
painted with matt paint are diffuse reflectors.
3. Translucent Surfaces – allow some light to penetrate the surface and to emerge from
another location on the object. The process of refraction characterizes glass and water.
5.1 – Light and Matter
There are 4 basic types of light sources…
1. Ambient Lighting
2. Point Sources
3. Spotlights
4. Distant Lights
These four lighting types are sufficient for rendering most simple scenes.
5.2 – Light Sources
Ambient Light
Ambient light produces light of constant intensity throughout the scene. All objects are illuminated
from all sides.
Point Sources
Point sources emit light equally in all directions, but the intensity of the light diminishes with the
distance between the light and the objects it illuminates. Surfaces facing away from the light source
are not illuminated.
Umbra – the area that is fully in the shadow
Penumbra – the area that is partially in the shadow
Spotlights
A spot light source is similar to a point light source except that its illumination is restricted to a cone
in a particular direction.
Spotlights are characterized by a narrow range of angles through which light is emitted. More
realistic spotlights are characterized by the distribution of light within the cone – usually with most
of the light concentrated in the center of the cone.
Distant Light Sources
A distant light source is like a point light source except that the rays of light are all parallel.
Most shading calculations require the direction from the point on the surface to the light source
position. As we move across a surface, calculating the intensity at each point, we must recompute
this vector repeatedly – a computation that is a significant part of the shading calculation. Because
the direction of a distant light source is constant across the surface, distant light sources can be
calculated faster than near light sources (see pg. 294 for parallel light).
5.3  Phong Reflection Model
The Phong model uses 4 vectors to calculate a color for an arbitrary point p on a surface:
1. l – from p to the light source
2. n – the normal at point p
3. v – from p to the viewer
4. r – the reflection of l about n
The Phong model supports the three types of material–light interactions:
1. Ambient Light: I_a = k_a L_a, where k_a is the ambient reflection coefficient and L_a is the
ambient term
2. Diffuse Light: I_d = k_d (l · n) L_d
3. Specular Light: I_s = k_s L_s max((r · v)^α, 0), where α is the shininess coefficient
There are 3 types of reflection
1. Ambient Reflection
2. Diffuse Reflection
3. Specular Reflection
What is referred to as the Phong model, including the distance term, is written:
I = 1/(a + bd + cd²) · (k_d L_d max(l · n, 0) + k_s L_s max((r · v)^α, 0)) + k_a L_a
where d is the distance to the light source and a, b, c are attenuation constants.
Lambertian Surfaces (Applies to Diffuse Reflection)
An example of a Lambertian surface is a rough surface. This can also be referred to as diffuse
reflection.
Lambert’s Law – the surface is brightest at noon and dimmest at dawn and dusk because we see only
the vertical component of the incoming light.
More technically, the amount of diffuse light reflected is directly proportional to cos θ, where θ is the
angle between the normal at the point of interest and the direction of the light source.
If both l and n are unit-length vectors, then
cos θ = l · n
Using Lambert’s Law, derive the equation for calculating approximations to diffuse reflection on
a computer.
If we consider the direction of the light source (l) and the normal at the point of interest (n) to be
unit-length vectors, then cos θ = l · n.
If we add a reflection coefficient k_d representing the fraction of incoming diffuse light that is
reflected, we have the diffuse reflection term:
I_d = k_d (l · n) L_d, where L_d is the diffuse component of the light source
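This diffuse term can be sketched in a few lines of Python (the function name and parameters are illustrative; l and n are assumed to be unit vectors, and the dot product is clamped at zero for surfaces facing away from the light):

```python
import numpy as np

def diffuse(l, n, kd, Ld):
    """Diffuse (Lambertian) term I_d = k_d (l . n) L_d, clamped at zero
    for surfaces facing away from the light.  l and n are unit vectors."""
    return kd * max(np.dot(l, n), 0.0) * Ld

n = np.array([0.0, 1.0, 0.0])                  # surface normal
l = np.array([0.0, 1.0, 0.0])                  # light directly overhead
print(diffuse(l, n, kd=0.8, Ld=1.0))           # maximal reflection: 0.8
l45 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # light at 45 degrees
print(round(diffuse(l45, n, 0.8, 1.0), 3))     # 0.8 * cos 45 ~ 0.566
```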
Difference between the Phong Model and the Blinn-Phong Model
The Blinn-Phong model attempts to provide a performance optimization by using the unit vector
halfway between the viewer vector and the light-source vector, which avoids the recalculation of r.
When we use the halfway vector in the calculation of the specular term we are using the Blinn-
Phong model. This model is the default in systems with a fixed-function pipeline.
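The halfway-vector idea can be sketched as follows (illustrative names; all vectors are assumed to be unit length, and the exponent plays the role of the shininess coefficient α):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def specular_blinn_phong(l, v, n, ks, Ls, alpha):
    """Blinn-Phong specular term: instead of computing the reflection
    vector r, use the halfway vector h = (l + v)/|l + v| and evaluate
    I_s = k_s * max(n . h, 0)**alpha * L_s."""
    h = normalize(l + v)
    return ks * max(np.dot(n, h), 0.0) ** alpha * Ls

# Mirror geometry: light and viewer at 45 degrees on opposite sides,
# so h lines up with the normal and the specular term is maximal.
n = np.array([0.0, 1.0, 0.0])
l = normalize(np.array([1.0, 1.0, 0.0]))
v = normalize(np.array([-1.0, 1.0, 0.0]))
print(specular_blinn_phong(l, v, n, ks=0.9, Ls=1.0, alpha=50))  # 0.9
```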
5.4 – Computation of Vectors – Read Only
5.5 – Polygonal Shading
Flat Shading – a polygon is filled with a single color or shade across its surface. A single normal is
calculated for the whole surface, and this determines the color. It works on the basis that if the three
vectors (l, n, v) are constant, then the shading calculation needs to be carried out only once for each
polygon, and each point on the polygon is assigned the same shade.
Smooth Shading  the color per vertex is calculated using vertex normal and then this color is
interpolated across the polygon.
Gouraud Shading – an estimate of the surface normal at each vertex is found. Using this value,
lighting computations based on the Phong reflection model are performed to produce color
intensities at the vertices.
Phong Shading  the normals at the vertices are interpolated across the surface of the polygon. The
lighting model is then applied at every point within the polygon. Because normals give the local
surface orientation, by interpolating the normals across the surface of a polygon, the surface
appears to be curved rather than flat, hence the smoother appearance of Phong-shaded images.
5.6 – Approximation of a sphere by recursive subdivision  Read Only
5.7 – Specifying Lighting Parameters
Read pg. 314–315
5.8 – Implementing a Lighting Model
Read pg. 314–315
5.9  Shading of the sphere model
5.10 – Per fragment Lighting
5.11 – Global Illumination
Example Questions for Chapter 5
Exam Nov 2013 4.1 (4 marks)
The Phong reflection model is an approximation of the physical reality to produce good renderings
under a variety of lighting conditions and material properties. In this model there are three terms, an
ambient term, a diffuse term, and a specular term. The Phong shading model for a single light source
is: I = k_a L_a + k_d max(l · n, 0) L_d + k_s max((r · v)^α, 0) L_s
Exam Nov 2013 4.1.1 (4 marks)
Describe the four vectors the model uses to calculate a color for an arbitrary point p. Illustrate with a
figure.
Exam Nov 2013 4.1.2 (2 marks)
In the specular term, there is a factor of (r · v)^p. What does p refer to? What effect does varying the
power p have?
Exam Nov 2013 4.1.3 (3 marks)
What is the term k_a L_a? What does k_a refer to? How will decreasing k_a affect the rendering of the
surface?
Exam June 2012 5.a (4 marks)
Interactions between light and materials can be classified into three categories. State and describe
these categories.
Exam June 2012 5.b (4 marks)
State and explain Lambert’s Law using a diagram.
Exam Nov 2011 4.b (3 marks)
State and explain Lambert’s Law using a diagram
Exam Nov 2011 4.c (3 marks)
Using Lambert’s Law, derive the equation for calculating approximations to the diffuse reflection
term used in the Phong lighting model.
Exam June 2012 5.c (4 marks)
Using Lambert’s Law, derive the equation for calculating approximations to diffuse reflection on a
computer.
Exam Nov 2012 4.1 (9 marks)
The shading intensity at any given point p on a surface is in general composed of three
contributions, each of which corresponds to a distinct physical phenomenon. List and describe all
three, stating how they are computed in terms of the following vectors.
n – the normal at point p
v – from p to viewer
l – from p to light source
r – reflection of ray from light source to p
Exam Nov 2013 4.1.4 (3 marks)
Consider Gouraud and Phong shading. Which one is more realistic, especially for highly curved
surfaces? Why?
Exam Jun 2011 4.e (3 marks)
Why do Phong-shaded images appear smoother than Gouraud- or flat-shaded images?
Exam Jun 2011 4.a (1 marks)
Explain what characterizes a diffuse reflecting surface.
Exam Jun 2011 4.d (6 marks)
Describe distinguishing features of ambient, point, spot and distant light sources.
Chapter 6 – From Vertices to Fragments
6.1 – Basic Implementation Strategies
There are two basic implementation strategies
1. Image Oriented Strategy
2. Object Oriented Strategy
Image Oriented Strategy
Loop through each pixel (row by row, along scanlines) and work our way back to determine what
determines the pixel’s color.
Main disadvantage – unless we first build a data structure from the geometric data, we do not know
which primitives affect which pixels (These types of data structures can be complex).
Main Advantage – They are well suited to handle global effects such as shadows and reflections (e.g.
Ray Tracing).
Object Oriented Strategy
Loop through each object and determine each object’s color and whether it is visible.
In the past the main disadvantage was the memory required to do this; however, this has been
overcome as memory has become cheaper and denser.
Main disadvantage  each geometric primitive is processed independently – complex shading effects
that involve multiple geometric objects such as reflections cannot be handled except by approximate
methods.
Main advantage – real-time processing and generation of 3D views.
One major exception to this is hidden-surface removal, where the z-buffer is used to store global
information.
6.2 – Four Major Tasks
There are four major tasks that any graphics system must perform to render a geometric entity, such
as a 3D polygon. They are…
1. Modeling
2. Geometry Processing
3. Rasterization
4. Fragment Processing
Modeling
Think of the modeler as a black box that produces geometric objects and is usually a user program.
One function a modeler can perform is “clipping”: intelligently eliminating objects that do not need
to be rendered or that can be simplified.
Geometry Processing
The first step in geometry processing is to change representations from object coordinates to
camera or eye coordinates using the modelview transformation.
The second step is to transform vertices using the projection transformation to a normalized view
volume in which objects that might be visible are contained in a cube centered at the origin.
Geometric objects are transformed by a sequence of transformations that may reshape and move
them or may change their representations.
Eventually only those primitives that fit within a specified volume (the view volume) can appear on
the display after rasterization.
2 reasons why we cannot allow all objects to be rasterized are…
1. Rasterizing objects that lie outside the view volume is inefficient because such objects
cannot be visible.
2. When vertices reach the rasterizer, they can no longer be processed individually and first
must be assembled into primitives.
Rasterization
To generate a set of fragments that give the locations of the pixels in the frame buffer corresponding
to these vertices, we only need their x, y components or, equivalently, the results of the orthogonal
projection of these vertices. We determine these fragments through a process called rasterization.
Rasterization determines which fragments should be used to approximate a line segment between
the projected vertices.
The rasterizer starts with vertices in normalized device coordinates but outputs fragments whose
locations are in units of the display (window coordinates)
Fragment Processing
The process of taking fragments generated by the rasterizer and updating pixels in the frame buffer.
Depth information together with transparency of fragments “in front” as well as texture and bump
mapping are used to update the fragments in the frame buffer to form pixels that can be displayed
on the screen.
Hidden-surface removal is typically carried out on a fragment-by-fragment basis. In the simplest
situation, each fragment is assigned a color by the rasterizer and this color is placed in the frame
buffer at the location corresponding to the fragment location.
6.3 – Clipping
Clipping is performed before perspective division.
The most common primitives to pass down the pipeline are line segments and polygons and there
are techniques for clipping on both types of primitives.
6.4 – Linesegment Clipping
A clipper decides which primitives are accepted (displayed) or rejected. There are two well-known
clipping algorithms for line segments, they are…
1. Cohen-Sutherland Clipping
2. Liang-Barsky Clipping
Liang-Barsky clipping is more efficient than Cohen-Sutherland clipping.
Cohen-Sutherland Clipping
The Cohen-Sutherland algorithm was the first to seek to replace most of the expensive floating-point
multiplications and divisions with a combination of floating-point subtractions and bit operations.
The center region is the screen, and the other 8 regions are on different sides outside the screen.
Each region is given a 4-bit binary number, called an "outcode". The codes are chosen as follows:
If the region is above the screen, the first bit is 1
If the region is below the screen, the second bit is 1
If the region is to the right of the screen, the third bit is 1
If the region is to the left of the screen, the fourth bit is 1
Obviously an area can't be to the left and the right at the same time, or above and below it at the
same time, so the third and fourth bit can't be 1 together, and the first and second bit can't be 1
together. The screen itself has all 4 bits set to 0.
Both endpoints of the line can lie in any of these 9 regions, and there are a few trivial cases:
If both endpoints are inside or on the edges of the screen, the line is entirely inside the screen
and can be drawn without clipping. This case is the trivial accept.
If both endpoints are on the same side of the screen (e.g., both endpoints are above the screen),
certainly no part of the line can be visible on the screen. This case is the trivial reject, and the
line doesn't have to be drawn.
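The outcode rules and the two trivial tests above can be sketched as follows (the bit assignments mirror the list above; the names are illustrative):

```python
ABOVE, BELOW, RIGHT, LEFT = 1, 2, 4, 8   # one bit per outside region

def outcode(x, y, xmin, xmax, ymin, ymax):
    """Compute the 4-bit Cohen-Sutherland outcode for a point."""
    code = 0
    if y > ymax:
        code |= ABOVE
    elif y < ymin:
        code |= BELOW
    if x > xmax:
        code |= RIGHT
    elif x < xmin:
        code |= LEFT
    return code

def trivial_decision(c1, c2):
    """Trivial accept when both endpoint codes are 0 (both inside);
    trivial reject when the codes share a bit (same outside side)."""
    if c1 == 0 and c2 == 0:
        return "accept"
    if c1 & c2 != 0:
        return "reject"
    return "needs clipping"

print(outcode(5, 5, 0, 10, 0, 10))    # 0: inside the window
print(outcode(-1, 15, 0, 10, 0, 10))  # 9: above and to the left
```

Segments that are neither accepted nor rejected are shortened at a window edge and the test is applied again.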
Advantages
1. This algorithm works best when there are many line segments but few are actually
displayed.
2. The algorithm can be extended to three dimensions
Disadvantage
1. It must be used recursively
Liang-Barsky Clipping
The Liang-Barsky algorithm uses the parametric equation of a line and inequalities describing the
range of the clipping window to determine the intersections between the line and the clipping
window. With these intersections it knows which portion of the line should be drawn.
How it works
Suppose we have a line segment defined by two endpoints p(x1, y1) and q(x2, y2). The parametric
equation of the line segment gives x-values and y-values for every point in terms of a parameter α
that ranges from 0 to 1:
x(α) = (1 − α)x1 + αx2
y(α) = (1 − α)y1 + αy2
There are four parameter values at which the line intersects the sides of the window:
t_B, t_L, t_T, t_R (bottom, left, top, right)
We can order these points and then determine where clipping needs to take place. If for example tL
> tR, this implies that the line must be rejected as it falls outside the window.
To use this strategy effectively we need to avoid computing intersections until they are needed.
Many lines can be rejected before all four intersections are known.
Efficiency
The Liang-Barsky algorithm is significantly more efficient than Cohen-Sutherland.
The efficiency of this approach, compared to that of the Cohen-Sutherland algorithm, is that we avoid
multiple shortenings of line segments and the related re-executions of the clipping algorithm.
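A compact sketch of the algorithm for an axis-aligned window (the (p, q) formulation below follows the common textbook presentation; names are illustrative):

```python
def liang_barsky(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    """Clip the segment p(x1,y1)-q(x2,y2) against an axis-aligned window.
    Returns the clipped endpoints, or None if the line is rejected."""
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0
    # Each (p, q) pair tests one window edge: left, right, bottom, top.
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:                 # line parallel to this edge
            if q < 0:
                return None        # parallel and outside: reject
        else:
            t = q / p
            if p < 0:              # entering the window at parameter t
                if t > t1:
                    return None
                t0 = max(t0, t)
            else:                  # leaving the window at parameter t
                if t < t0:
                    return None
                t1 = min(t1, t)
    return (x1 + t0 * dx, y1 + t0 * dy), (x1 + t1 * dx, y1 + t1 * dy)

# A horizontal segment crossing a 10x10 window is clipped to its edges.
print(liang_barsky(-5.0, 5.0, 15.0, 5.0, 0, 10, 0, 10))
```

Note how a line can be rejected as soon as the parameter interval becomes empty, before all four intersections are computed.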
6.5 – Polygon Clipping – do not learn
6.6  Clipping of other primitives – do not learn
6.7  Clipping in three dimensions – do not learn
6.8 – Rasterization
Rasterization is the task of taking an image described in a vector graphics format (shapes) and
converting it into a raster image (pixels or dots) for output on a video display or printer, or for
storage in a bitmap file format.
In normal usage, the term refers to the popular rendering algorithm for displaying three-dimensional
shapes on a computer. Rasterization is currently the most popular technique for producing real-time
3D computer graphics. Real-time applications need to respond immediately to user input, and
generally need to produce frame rates of at least 24 frames per second to achieve smooth
animation.
Compared with other rendering techniques such as ray tracing, rasterization is extremely fast.
However, rasterization is simply the process of computing the mapping from scene geometry to
pixels and does not prescribe a particular way to compute the color of those pixels. Shading,
including programmable shading, may be based on physical light transport, or artistic intent.
6.9 – Bresenham’s Algorithm
Bresenham derived a line-rasterization algorithm that avoids all floating-point calculations and has
become the standard algorithm used in hardware and software rasterizers.
It is preferred over the DDA algorithm because although the DDA algorithm is efficient and can be
coded easily, it requires a floating-point addition for each pixel generated, which Bresenham’s
algorithm doesn’t require.
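The integer-only update can be sketched as follows (this is the common generalized form of Bresenham’s algorithm that handles all octants via sign variables; names are illustrative):

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterization: returns the pixels on the line.
    The decision variable err is updated with additions only -- no
    floating-point arithmetic is needed."""
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:               # step in x
            err -= dy
            x0 += sx
        if e2 < dx:                # step in y
            err += dx
            y0 += sy
    return pixels

print(bresenham(0, 0, 4, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```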
6.10 – Polygon Rasterization
There are several different types of polygon rasterization. Some of the ones that work with the
OpenGL pipeline include:
InsideOutside Testing
Concave Polygons
Fill and Sort
Flood Fill
Singularities
Crossing or Odd-Even Test
The most widely used test for making inside-outside decisions. Suppose p is a point whose status we
want to determine; imagine a ray emanating from p and going off into infinity. Count the number of
polygon edges this ray crosses: if it crosses an odd number of edges, p is inside the polygon;
otherwise it is outside.
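The odd-even test can be sketched as follows (illustrative; a horizontal ray is cast to the right of p, and each edge that straddles the ray’s height is checked for a crossing to the right of p):

```python
def inside_polygon(p, vertices):
    """Odd-even (crossing) test: cast a horizontal ray from p towards
    +infinity and count how many polygon edges it crosses.
    An odd count means p is inside."""
    x, y = p
    inside = False
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # Edge crosses the ray's height, and the crossing lies right of p.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside_polygon((2, 2), square))   # True
print(inside_polygon((5, 2), square))   # False
```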
Winding Test
p.g. 358
6.11 – Hidden Surface Removal
2 main approaches
1) Object space approaches
2) Image space approaches (Image space approaches are more popular)
Scanline Algorithms

Back-Face Removal (Object Space Approach)
A simple object-space algorithm is back-face removal (or back-face culling), where no faces on the
back of the object are displayed.
Limitations of the back-face removal algorithm…
It can only be used on solid objects modeled as a polygon mesh.
It works fine for convex polyhedra but not necessarily for concave polyhedra.
Z-Buffer Algorithm (Image Space Approach)
The easiest way to achieve hidden-surface removal is to use the depth buffer (sometimes called a
z-buffer). A depth buffer works by associating a depth, or distance from the viewpoint, with each
pixel on the window. Initially, the depth values for all pixels are set to the largest possible distance, and
then the objects in the scene are drawn in any order.
Graphical calculations in hardware or software convert each surface that's drawn to a set of pixels
on the window where the surface will appear if it isn't obscured by something else. In addition, the
distance from the eye is computed. With depth buffering enabled, before each pixel is drawn, a
comparison is done with the depth value already stored at the pixel.
If the new pixel is closer to the eye than what is there, the new pixel's color and depth values replace
those that are currently written into the pixel. If the new pixel's depth is greater than what is
currently there, the new pixel would be obscured, and the color and depth information for the
incoming pixel is discarded.
Since information is discarded rather than used for drawing, hiddensurface removal can increase
your performance.
Shading is performed before hidden-surface removal. In the z-buffer algorithm, polygons are first
rasterized, and then for each fragment of the polygon depth values are determined and compared to
the z-buffer.
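The core depth-compare-and-write step can be sketched in a few lines (illustrative; real hardware works on normalized depth values, but the logic is the same):

```python
import numpy as np

WIDTH, HEIGHT = 4, 4
depth = np.full((HEIGHT, WIDTH), np.inf)   # start at the farthest distance
color = np.zeros((HEIGHT, WIDTH, 3))

def write_fragment(x, y, z, rgb):
    """Keep the fragment only if it is closer than what is already stored."""
    if z < depth[y, x]:
        depth[y, x] = z
        color[y, x] = rgb

write_fragment(1, 1, 5.0, (1, 0, 0))   # red triangle fragment at depth 5
write_fragment(1, 1, 2.0, (0, 1, 0))   # green circle fragment, closer: wins
write_fragment(1, 1, 9.0, (0, 0, 1))   # blue fragment behind: discarded
print(color[1, 1])                     # [0. 1. 0.]
```

Note that the fragments may arrive in any order; the depth comparison resolves visibility regardless.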
Scan Conversion with the z-Buffer

Depth Sort and the Painter’s Algorithm (Object Space Approach)
The idea behind the Painter’s algorithm is to draw polygons far away from the eye first, followed by
drawing those that are close to the eye. Hidden surfaces will be written over in the image as the
surfaces that obscure them are drawn.
Situations where the depth-sort algorithm is troublesome include:
If three or more polygons overlap cyclically
If a polygon pierces another polygon
6.12 – Antialiasing
An error arises whenever we attempt to go from the continuous representation of an object (which
has infinite resolution) to a sampled approximation, which has limited resolution – this is called
aliasing.
Aliasing errors are caused by 3 related problems with the discrete nature of the frame buffer.
1. The number of pixels of the frame buffer is fixed. Many different line segments may be
approximated by the same pattern of pixels. We can say that all these segments are aliased
as the same sequence of pixels.
2. Pixel locations are fixed on a uniform grid; regardless of where we would like to place pixels,
we cannot place them at other than evenly spaced locations.
3. Pixels have a fixed size and shape.
In computer graphics, antialiasing is a software technique for diminishing jaggies: stair-step-like
lines that should be smooth.
Jaggies occur because the output device, the monitor or printer, doesn't have a high enough
resolution to represent a smooth line. Antialiasing reduces the prominence of jaggies by surrounding
the stair-steps with intermediate shades of gray (for grayscale devices) or color (for color devices).
Although this reduces the jagged appearance of the lines, it also makes them fuzzier.
Another method for reducing jaggies is called smoothing, in which the printer changes the size and
horizontal alignment of dots to make curves smoother.
Antialiasing is sometimes called oversampling.
Interpolation
Interpolation is a way of determining a value (of some parameter) for any point between two
endpoints at which the parameter values are known (e.g. the color of any point between two points,
or the normal of any point between two points).
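For two known values, linear interpolation can be sketched as (illustrative):

```python
def lerp(a, b, t):
    """Linear interpolation: the value at parameter t in [0, 1] between
    endpoint values a (at t = 0) and b (at t = 1)."""
    return (1 - t) * a + t * b

# Interpolating a red -> blue color halfway along an edge gives purple.
c = tuple(lerp(a, b, 0.5) for a, b in zip((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
print(c)   # (0.5, 0.0, 0.5)
```

The same formula, applied per component, is what the rasterizer uses to interpolate colors (Gouraud shading) or normals (Phong shading) across a polygon.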
Accumulation Buffer
The accumulation buffer can be used for a variety of operations that involve combining multiple
images. One of the most important uses of the accumulation buffer is for antialiasing. Rather than
antialiasing individual lines and polygons, we can antialias an entire scene using the accumulation
buffer.
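The accumulation-buffer idea can be sketched as follows (illustrative; render_fn stands in for a full scene render taken with a small sub-pixel jitter offset):

```python
import numpy as np

def accumulate(render_fn, jitters):
    """Accumulation-buffer antialiasing sketch: render the scene several
    times with sub-pixel jitter offsets and average the resulting images.
    render_fn(dx, dy) is assumed to return the frame as a float array."""
    acc = None
    for dx, dy in jitters:
        frame = render_fn(dx, dy)
        acc = frame if acc is None else acc + frame
    return acc / len(jitters)
```

Averaging several slightly shifted renders softens the stair-steps at edges, at the cost of rendering the scene once per jitter sample.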
6.13 – Display Considerations – do not learn
Example Questions for Chapter 6
Exam Nov 2012 1.2 (3 marks)
Briefly explain what the accumulation buffer is and how it is used with respect to antialiasing.
Exam Nov 2012 6 (6 marks)
Using diagrams, describe briefly the Liang-Barsky clipping algorithm
Exam Jun 2012 7.a (8 marks)
Describe, with the use of diagrams, the Cohen-Sutherland line clipping algorithm
Exam Jun 2012 7.b (2 marks)
What are the advantages and disadvantages of the Cohen-Sutherland line clipping algorithm?
Exam Jun 2013 7.1 (2 marks)
Give one advantage and one disadvantage of the Cohen-Sutherland line clipping algorithm.
Exam Jun 2013 7.2 (3 marks)
What is the crossing or odd-even test? Explain it with respect to a point p inside a polygon
Exam Jun 2011 5.a (2 marks)
In the case of the z-buffer algorithm for hidden surface removal, is shading performed before or
after hidden surfaces are eliminated? Explain.
Exam Jun 2011 5.c (2 marks)
Bresenham derived a line-rasterization algorithm that has become the standard approach used in
hardware and software rasterizers, as opposed to the simpler DDA algorithm. Why is this so?
Exam Jun 2011 1.b (4 marks)
Give brief definitions of the following terms in the context of computer graphics
i) Antialiasing
ii) Normal Interpolation
Chapter 7 – Discrete Techniques
7.1 – Buffers – Do not learn
7.2 – Digital Images – Do not learn
7.3  Writing into Buffers – Do not learn
7.4  Mapping Methods
Mipmapping
A way to deal with the minification problem, i.e. the distortion of a mapped texture due to the texels
being smaller than one pixel. Mipmapping enables us to use a sequence of texture images at different
resolutions to give texture values that are the average of texel values over various areas.
7.5 – Texture Mapping
Texture mapping maps a pattern (of colors) to a surface.
All approaches of texture mapping require a sequence of steps that involve mappings among three
or four different coordinate systems. They are…
Screen coordinates – where the final image is produced
World coordinates – where the objects onto which the textures will be mapped are described
Texture coordinates – used to describe the texture
Parametric coordinates – used to define curved surfaces
7.6 – Texture Mapping in OpenGL
7.7  Texture Generation – Do not learn
7.8  Environment Maps
7.9 – Reflection Map Example – Do not learn
7.10 – Bump Mapping
Whereas texture maps give detail by mapping patterns onto surfaces, bump maps distort the normal
vectors during the shading process to make the surface appear to have small variations in shape, like
bumps or depressions.
7.11 – Compositing Techniques
7.12  Sampling and Aliasing – Do not learn
Example Questions for Chapter 7
Exam Jun 2013 5.1 (4 marks)
Explain what is meant by bump mapping. What does the value at each pixel in a bump map
correspond to? How is this data used in rendering?
Exam Jun 2013 5.2 (1 marks)
What technique computes the surroundings visible as a reflected image in a shiny object?
Exam Jun 2013 5.3 (3 marks)
Describe what is meant by point sampling and linear filtering? Why is linear filtering a better choice
than point sampling in the context of aliasing of textures?
Exam Jun 2013 1.4 (5 marks)
The 4 major stages of the modern graphics pipeline are:
a. Vertex Processing
b. Clipping and primitive assembly
c. Rasterization
d. Fragment processing
In which of these 4 stages would the following normally occur?
1.4.1 Texture Mapping
1.4.2 Perspective Division
1.4.3 Inside-outside testing
1.4.4 Vertices are assembled into objects
1.4.5 Z-buffer algorithm
Exam Nov 2012 5.1 (2 marks)
Explain the term texture mapping
Exam Nov 2012 5.3 (2 marks)
Consider the texture map with U,V coordinates in the diagram on the left below. Draw the
approximate mapping if the square on right were textured using the above image.
Exam May 2012 6.a (3 marks)
Texture mapping requires interaction between the application program, the vertex shader and the
fragment shader. What are the three basic steps of texture mapping?
Exam May 2012 6.b (4 marks)
Explain how the alpha channel and the accumulation buffer can be used to achieve antialiasing with
line segments and edges of polygons.
Exam May 2012 6.c (3 marks)
Explain what is meant by texture aliasing. Explain how point sampling and linear filtering help to
solve this problem.
Exam Jun 2011 6.b (4 marks)
Define the following terms and briefly explain their use in computer graphics
i) Bitmap
ii) Bump Mapping
iii) Mipmapping