
2.4. The Z Buffer for Hidden Surface Removal

 The Z-buffer is a 2D array that stores a depth value for each pixel.
 It is referred to as the Z-buffer, since the depth of an object is mainly
calculated from the view plane along the 𝑧 axis of the coordinate system.
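The Z-buffer algorithm itself can be stated briefly: clear every depth entry to the farthest possible value, then, for each fragment of each surface, keep the fragment only if it is nearer than the depth already stored at that pixel. A minimal sketch in C (the buffer size, the colour representation, and the function names are illustrative assumptions, with depth measured so that a smaller 𝑧 means nearer to the viewer):

```c
#include <float.h>

#define W 8
#define H 8

static double zbuf[H][W];     /* one depth value per pixel    */
static int    frame[H][W];    /* stands in for colour memory  */

/* Reset every depth entry to the farthest representable value. */
void zbuffer_clear(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x]  = DBL_MAX;  /* "infinitely far"   */
            frame[y][x] = 0;        /* background colour  */
        }
}

/* Plot a fragment only if it is nearer than what is stored there. */
void zbuffer_plot(int x, int y, double z, int colour) {
    if (z < zbuf[y][x]) {           /* nearer surface hides the old one */
        zbuf[y][x]  = z;
        frame[y][x] = colour;
    }
}

int zbuffer_colour(int x, int y) { return frame[y][x]; }
```

Because the test is per pixel, surfaces can be drawn in any order and the nearest one always wins, which is why the Z-buffer removes hidden surfaces without sorting.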
Chapter Three: What is OpenGL?

 OpenGL is strictly defined as “a software interface to graphics
hardware.”
 It is a 3D graphics and modeling library that is highly portable and very
fast.
 Using OpenGL, you can create elegant and beautiful 3D graphics with
exceptional visual quality.


Cont’d

 Initially, it used algorithms carefully developed and optimized by
Silicon Graphics, Inc. (SGI), an acknowledged world leader in
computer graphics and animation.
 Over time, OpenGL has evolved as other vendors have contributed
their expertise and intellectual property to develop high-performance
implementations of their own.

Cont’d

 The OpenGL API itself is not a programming language like C or C++.
 It is more like the C runtime library, which provides some
prepackaged functionality.
 OpenGL is intended for use with computer hardware that is designed
and optimized for the display and manipulation of 3D graphics.
 However, software-only implementations of OpenGL are also possible.
Rendering Process with OpenGL

The key steps involved in creating and displaying graphics:

1. Setting up the graphics context

 The first step in using OpenGL is to set up the graphics context.
 This involves creating a window or surface for rendering and
initializing the OpenGL context within it.
 Platform-specific libraries like GLFW (OpenGL Framework) or GLUT
(OpenGL Utility Toolkit) can be used to manage the window and
OpenGL context.
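As a sketch of this first step, a minimal GLFW program might look as follows (assuming GLFW 3; the window size and title are placeholders):

```c
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit())                   /* start the GLFW library */
        return -1;

    /* Create a window together with its OpenGL context. */
    GLFWwindow *window = glfwCreateWindow(640, 480, "My Scene", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);    /* make the context active */

    /* ... rendering loop goes here (see section 5 below) ... */

    glfwTerminate();                   /* clean up window and context */
    return 0;
}
```

GLUT offers equivalent calls (glutInit, glutCreateWindow); the structure, create a window, obtain a context, enter a loop, is the same either way.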
2. Defining the geometry

 In OpenGL, 3D objects are represented as collections of vertices,
edges, and faces.
 To render a 3D scene, the application must define the geometry of the
objects using vertex data.
 The vertex data includes the coordinates of the vertices, information
about normals (surface directions), texture coordinates, and other
attributes.

3. Sending Data to the GPU

 The vertex data is sent from the CPU (Central Processing Unit) to the
GPU (Graphics Processing Unit) using OpenGL buffer objects.
 These buffer objects efficiently store the vertex data in the GPU's
memory for fast access during rendering.
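Steps 2 and 3 together can be sketched as follows, using a single triangle as the geometry (this assumes an OpenGL 1.5+ context is already current; how the GL headers are pulled in varies by platform):

```c
#include <GL/gl.h>

/* Define one triangle (step 2) and copy its vertex data into a GPU
   buffer object (step 3). Returns the buffer object's name. */
GLuint upload_triangle(void) {
    /* x, y, z coordinates for each of three vertices */
    GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,   /* bottom-left  */
         0.5f, -0.5f, 0.0f,   /* bottom-right */
         0.0f,  0.5f, 0.0f,   /* top          */
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);                /* create a buffer object        */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);   /* make it the active VBO        */
    glBufferData(GL_ARRAY_BUFFER,         /* copy the data into GPU memory */
                 sizeof(vertices), vertices, GL_STATIC_DRAW);
    return vbo;
}
```

GL_STATIC_DRAW is a usage hint telling the driver the data will be written once and drawn many times, so it can place the buffer in fast GPU memory.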
4. Compiling and Linking Shaders

 Shaders are small programs written in the OpenGL Shading Language
(GLSL) that run on the GPU.
 There are two types of shaders used in OpenGL:
1. vertex shaders, which manipulate vertices, and
2. fragment shaders, which determine the color and depth of fragments
(pixels) generated during rasterization.
 The application must compile and link the shader programs before
they can be used in the rendering process.

5. Rendering Loop

 The rendering process in OpenGL occurs within a rendering loop, also
known as the game loop.
 This loop continually updates the scene and renders it on the screen.
 The loop typically involves the following steps:
A) Clearing the Frame Buffer
B) Updating the Scene
C) Setting Up the Camera
D) Binding Shaders and Uniforms
E) Drawing the Geometry
F) Displaying the Frame
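Steps 4 and 5 can be sketched together (assuming the GLFW window from step 1; update_scene and set_up_camera are hypothetical placeholders for application code, shown commented out):

```c
#include <GL/gl.h>
#include <GLFW/glfw3.h>

/* Compile one shader stage from GLSL source and check for errors. */
GLuint compile_shader(GLenum type, const char *src) {
    GLuint shader = glCreateShader(type);      /* GL_VERTEX_SHADER or
                                                  GL_FRAGMENT_SHADER   */
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);
    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;                    /* 0 signals failure */
}

/* Link a vertex and a fragment shader into one GPU program. */
GLuint link_program(GLuint vert, GLuint frag) {
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vert);
    glAttachShader(prog, frag);
    glLinkProgram(prog);
    return prog;
}

/* The rendering (game) loop, following steps A–F above. */
void render_loop(GLFWwindow *window, GLuint prog) {
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* A */
        /* update_scene();    B: animation, input           */
        /* set_up_camera();   C: view/projection uniforms   */
        glUseProgram(prog);                                 /* D */
        glDrawArrays(GL_TRIANGLES, 0, 3);                   /* E */
        glfwSwapBuffers(window);                            /* F */
        glfwPollEvents();     /* process window events      */
    }
}
```

Double buffering is implicit here: drawing happens on a back buffer, and glfwSwapBuffers displays the finished frame all at once (step F), which avoids visible flicker.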


3.1. Role of OpenGL in the Reference Model

 The Reference Model is a conceptual framework that defines the
components and processes involved in creating and displaying graphics
on a computer screen.
 OpenGL fits into this model as the graphics API responsible for
rendering 2D and 3D graphics efficiently and interactively.

3.2. Coordinate system

 A coordinate system is a mathematical framework used to specify the
precise location of points in space or on a surface.
 It provides a way to represent and measure positions or directions
relative to a reference point or reference axes.
 The origin of the 2D Cartesian system is at 𝑥 = 0, 𝑦 = 0.
3.2. Coordinate system Cont’d

 For example, a standard VGA screen has 640 pixels from left to
right and 480 pixels from top to bottom.
 To specify a point in the middle of the screen, you specify that a
point should be plotted at (320, 240)
 that is, 320 pixels from the left of the screen and 240 pixels
down from the top of the screen.
 In OpenGL, or almost any 3D API, when you create a window to
draw in, you must also specify the coordinate system you want
to use and how to map the specified coordinates into physical
screen pixels.
 OpenGL takes care of the mapping between Cartesian coordinates and
window pixels when it comes time to rasterize (actually draw) your
geometry on-screen.

Figure 3.1 Cartesian space
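The mapping OpenGL performs can be illustrated with a small sketch (this is not OpenGL's own code, just the window-to-pixel transformation it carries out; the function name is made up for illustration):

```c
/* Map a point in a Cartesian "world" window (left..right, bottom..top)
   to integer pixel coordinates on a width x height screen whose origin
   is the top-left corner, as on a VGA display. */
void world_to_pixel(double x, double y,
                    double left, double right, double bottom, double top,
                    int width, int height,
                    int *px, int *py) {
    double u = (x - left)   / (right - left);    /* 0..1 across the window */
    double v = (y - bottom) / (top - bottom);    /* 0..1 up the window     */
    *px = (int)(u * (width - 1) + 0.5);          /* round to nearest pixel */
    *py = (int)((1.0 - v) * (height - 1) + 0.5); /* flip: pixel y grows
                                                    downward              */
}
```

With the window (-1, 1) on both axes and a 640 × 480 screen, the world origin (0, 0) lands on pixel (320, 240), the centre point from the VGA example above.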
3.3. Viewing Using a Synthetic Camera

What does a camera do?
 Takes in a 3D scene
 Places (i.e., projects) the scene onto a 2D medium, such as a roll
of film or a digital pixel array

Cont’d

 The synthetic camera is a programmer’s model for specifying how a
3D scene is projected onto the screen.

Figure: a pinhole camera
3D Viewing: The Synthetic Camera

 General synthetic camera: each package has its own, but they are all
(nearly) equivalent, with the following parameters/degrees of freedom:
 Camera position and orientation
 Field of view (angle of view, e.g., wide, narrow/telephoto, normal...)
 Depth of field/focal distance (near distance, far distance)
 Tilt of view/film plane (if not perpendicular to the viewing direction,
produces oblique projections)
 Perspective or orthographic projection

3.4. Output primitives

 Output primitives are the basic geometric shapes that can be
rendered by a graphics system.
 Common output primitives:
› Points: A single pixel or dot.
› Lines: A straight line segment connecting two vertices.
› Triangles: A three-sided polygon defined by three vertices.
› Quads: A four-sided polygon defined by four vertices.
› Other polygons: Graphics systems may support polygons with
more than three or four sides, but they are typically tessellated
into triangles for rendering.
Output attributes

 Output attributes define the characteristics and appearance of
output primitives.
 These attributes include:
› Vertex Attributes: Each vertex of an output primitive may have
various attributes, such as position, color, texture coordinates,
normal vectors, and other user-defined properties.
› Color: The color attribute determines the color of a primitive.
Color can be represented using RGB (Red, Green, Blue) values,
RGBA (RGB with an alpha component for transparency), or
other color models.

Chapter Four: Geometry and Line Generation
Introduction

 All graphics packages construct pictures from basic building
blocks known as graphics primitives.
 Primitives that describe the geometry, or shape, of these
building blocks are known as geometric primitives.
 They can be anything from 2-D primitives such as points, lines
and polygons to more complex 3-D primitives such as spheres
and polyhedra (a polyhedron is a 3-D surface made from a mesh
of 2-D polygons).

Cont’d…

 In the following sections we will examine some algorithms
for drawing different primitives, and where appropriate we
will introduce the routines for displaying these primitives in
OpenGL.
OpenGL Point drawing primitives

 The most basic type of primitive is the point.
 Many graphics packages, including OpenGL, provide
routines for displaying points.

glBegin(GL_POINTS);
glVertex2f(-0.5, 0.5);  // First point (top-left)
glVertex2f(0.5, 0.5);   // Second point (top-right)
glVertex2f(-0.5, -0.5); // Third point (bottom-left)
glVertex2f(0.5, -0.5);  // Fourth point (bottom-right)
glEnd();
Line Drawing Algorithms

 Lines are a very common primitive and will be supported by almost all
graphics packages.
 Lines are normally represented by the two end-points of the line, and
points (𝑥, 𝑦) along the line must satisfy the following slope-intercept
equation:

𝑦 = 𝑚𝑥 + 𝑏 ----------------------------------------- eq. (1)

where 𝑚 is the slope or gradient of the line, and 𝑏 is the coordinate at
which the line intercepts the 𝑦 axis.

Cont’d…

 Given two end-points (𝑥0, 𝑦0) and (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), we can calculate values
for 𝑚 and 𝑏 as follows:

𝑚 = (𝑦𝑒𝑛𝑑 − 𝑦0) / (𝑥𝑒𝑛𝑑 − 𝑥0) -------------------------- eq. (2)

𝑏 = 𝑦0 − 𝑚 ⋅ 𝑥0 -------------------------------------------------- eq. (3)

 Furthermore, for any given x-interval Δ𝑥, we can calculate the
corresponding y-interval Δ𝑦:

Δ𝑦 = 𝑚 ⋅ Δ𝑥 -------------------------------------------------- eq. (4)

Δ𝑥 = (1/𝑚) ⋅ Δ𝑦 ---------------------------------------------- eq. (5)
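Equations (2) and (3) translate directly into code. A small sketch (the function name is illustrative; the slides do not give an implementation), which also flags the vertical-line case where the gradient is undefined:

```c
/* Compute the gradient m and intercept b of eq. (1) from two
   end-points, as in eq. (2) and (3). Returns 0 for a vertical line
   (xend == x0), which has no finite gradient, and 1 otherwise. */
int line_coefficients(double x0, double y0, double xend, double yend,
                      double *m, double *b) {
    if (xend == x0)
        return 0;                      /* vertical: m undefined */
    *m = (yend - y0) / (xend - x0);    /* eq. (2) */
    *b = y0 - (*m) * x0;               /* eq. (3) */
    return 1;
}
```

For the end-points (10, 10) and (15, 13) used in the worked example below, this gives m = 0.6 and b = 4.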


DDA Line-Drawing Algorithm

 The Digital Differential Analyser (DDA) algorithm operates by starting
at one end-point of the line,
 and then using 𝐸𝑞. (4) and (5) to generate successive pixels until the
second end-point is reached.
 Therefore, first, we need to assign values for Δ𝑥 and Δ𝑦.
 Suppose we simply increment the value of 𝑥 at each iteration (i.e.
Δ𝑥 = 1) and then compute the corresponding value for 𝑦 using
𝑒𝑞. (2) 𝑎𝑛𝑑 (4).

Cont’d

Figure 1 – ‘Holes’ in a Line Drawn by Incrementing 𝑥 and Computing
the Corresponding y-Coordinate

 This would compute correct line points but, as illustrated by Figure 1,
it would leave gaps in the line.
 The reason for this is that the value of Δ𝑦 is greater than one, so the
gap between subsequent points in the line is greater than 1 pixel.
Cont’d

 The solution to this problem is to make sure that both Δ𝑥 and Δ𝑦 have
values less than or equal to one.
 To ensure this, we must first check the size of the line gradient. The
conditions are:
 𝐼𝑓 |𝑚| ≤ 1:
o Δ𝑥 = 1
o Δ𝑦 = 𝑚
 𝐼𝑓 |𝑚| > 1:
o Δ𝑥 = 1/𝑚
o Δ𝑦 = 1

Cont’d

 Once we have computed values for Δ𝑥 and Δ𝑦, the basic DDA
algorithm is:
 Start with (𝑥0, 𝑦0)
 Find successive pixel positions by adding on (Δ𝑥, Δ𝑦) and
rounding to the nearest integer, i.e.
o 𝑥𝑘+1 = 𝑥𝑘 + Δ𝑥
o 𝑦𝑘+1 = 𝑦𝑘 + Δ𝑦
 For each position (𝑥𝑘, 𝑦𝑘) computed, plot a line point at
(𝑟𝑜𝑢𝑛𝑑(𝑥𝑘), 𝑟𝑜𝑢𝑛𝑑(𝑦𝑘)), where the round function will round to
the nearest integer.
 Note that the actual pixel value used will be calculated by rounding to
the nearest integer, but we keep the real-valued location for calculating
the next pixel position.
Examples (DDA algorithm)

 Apply the DDA algorithm for drawing a straight-line segment.
 Given: (𝑥0, 𝑦0) = (10, 10)
(𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑) = (15, 13)
 First compute a value for the gradient 𝑚:

𝑚 = (𝑦𝑒𝑛𝑑 − 𝑦0) / (𝑥𝑒𝑛𝑑 − 𝑥0) = (13 − 10) / (15 − 10) = 3/5 = 0.6

 Now, because |𝑚| ≤ 1, we compute Δ𝑥 and Δ𝑦 as follows:

Δ𝑥 = 1 and Δ𝑦 = 0.6

Cont’d…

 Using these values of Δ𝑥 and Δ𝑦 we can now start to plot line points:
 Start with (𝑥0, 𝑦0) = (10, 10) – colour this pixel
 Next, (𝑥1, 𝑦1) = (10 + 1, 10 + 0.6) = (11, 10.6) – so we colour pixel (11, 11)
 Next, (𝑥2, 𝑦2) = (11 + 1, 10.6 + 0.6) = (12, 11.2) – so we colour pixel (12, 11)
 Next, (𝑥3, 𝑦3) = (12 + 1, 11.2 + 0.6) = (13, 11.8) – so we colour pixel (13, 12)
 Next, (𝑥4, 𝑦4) = (13 + 1, 11.8 + 0.6) = (14, 12.4) – so we colour pixel (14, 12)
 Next, (𝑥5, 𝑦5) = (14 + 1, 12.4 + 0.6) = (15, 13) – so we colour pixel (15, 13)
 We have now reached the end-point (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), so the algorithm
terminates.
Cont’d…

Figure: the line plotted by the DDA algorithm from (10, 10) to (15, 13)

Bresenham’s Line-Drawing Algorithm

 Bresenham’s line-drawing algorithm provides significant improvements
in efficiency over the DDA algorithm.
 These improvements arise from the observation that for any given line,
 if we know the previous pixel location, we only have a choice of 2
locations for the next pixel.
 This concept is illustrated in Figure 3: given that we know (𝑥𝑘, 𝑦𝑘) is a
point on the line, we know the next line point must be either pixel A or
pixel B.

Cont’d

 Therefore we do not need to compute the actual floating-point location
of the ‘true’ line point; we need only make a decision between pixels A
and B.

Figure 3 - Bresenham's Line-Drawing Algorithm
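A sketch of how that A-or-B decision works in code, using the standard integer decision-variable formulation (the derivation of the decision variable 𝑝 is not covered in these notes; this version handles only the first octant, 0 ≤ m ≤ 1 with x0 < xend):

```c
/* Integer-only Bresenham for the first octant (0 <= m <= 1, x0 < xend).
   At each step the decision variable p chooses between the two candidate
   pixels A (same row) and B (one row up) without any floating point. */
int bresenham_line(int x0, int y0, int xend, int yend, int pixels[][2]) {
    int dx = xend - x0, dy = yend - y0;
    int p = 2 * dy - dx;            /* initial decision value */
    int x = x0, y = y0, n = 0;
    while (x <= xend) {
        pixels[n][0] = x;
        pixels[n][1] = y;
        n++;
        if (p < 0) {
            p += 2 * dy;            /* keep pixel A: same row     */
        } else {
            y++;                    /* take pixel B: next row up  */
            p += 2 * dy - 2 * dx;
        }
        x++;
    }
    return n;
}
```

For the worked example (10, 10) to (15, 13), this selects the same six pixels as the DDA trace above, but using only integer additions and comparisons, which is where the efficiency gain comes from.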
