You are on page 1of 13

Unit 3 Window to Viewport Transformation 

is the process of Line clipping is a method of reducing a line to fit within a


What is Window Port ? :Window port is the world transforming 2D world-coordinate objects to device coordinates. specific window or viewport. It is often used in computer
coordinate area to display. A window is a graphical control Objects inside the world or clipping window are mapped to the graphics to ensure that a line is drawn only within the bounds of
element. It consists of a visual area containing some of the viewport which is the area on the screen where world coordinates the screen or other display area.
Graphical User Interface (GUI). Furthermore, there is a frame are mapped to be displayed. There are several algorithms for performing line clipping, but
by a window decoration. Therefore, this window port is a the most common is the Cohen-Sutherland algorithm. The
window that defines a rectangular area in world coordinates.  procedure for this algorithm is as follows:
1.Divide the screen into nine regions, or "octants," using a pair
of perpendicular lines called the "clip window."
2.Assign a unique binary code, called an "outcode," to each
point on the line based on which octant it lies in.
3.Test whether the line is trivially accepted or rejected based on
Furthermore, it is possible to define the window size. It can be the outcodes of the two endpoints. If either endpoint has an
larger, smaller or the same size depending on whether to Mathematical Calculation of Window to Viewport:  outcode that indicates it is outside the clip window, the line is
display all data or only a part of data. It may be possible that the size of the Viewport is much smaller rejected. If both outcodes are zero, the line is accepted.
What is View Port? : A view port is a part of the computer or greater than the Window. In these cases, we have to increase or 4.If the line is not trivially accepted or rejected, find the
screen. In other words, it is an area on a device to map the decrease the size of the Window according to the Viewport and intersection points of the line with each side of the clip window.
object. Therefore, it is the region according to the device for this, we need some mathematical calculations. 5.Replace the original endpoints of the line with the intersection
coordinates. It is possible to describe the viewport by (xw, yw): A point on Window points found in step 4, if necessary. 6.Repeat the outcode test
rendering device-specific coordinates. For example, it denotes (xv, yv): Corresponding point on Viewport with the modified line. If the modified line is accepted, it is
the pixels of screen coordinates to render. Depending on the We have to calculate the point (xv, yv) drawn on the screen. If it is rejected, it is not drawn.
requirement, it is possible to display the entire display device
or to display only a portion. Furthermore, viewing transform or Requirement for line clipping algoritm:
window to view port mapping is the process of mapping the There are several requirements for a line clipping algorithm.
part of the world coordinate scene to device coordinate sense. Here are a few:
Likewise, it is possible to map the objects in the world 1.The algorithm should be able to clip a line against a
coordinates to the view port. Now the relative position of the object in Window and Viewport rectangular clipping window.
Viewport transformation : “ The mapping of a port of a are same. 2.The algorithm should be able to handle lines that are partially
world co-ordinate scene to a device co-ordinates is referred to For x coordinate, For y coordinate, inside and partially outside the clipping window.
as viewing transformation.” Sometimes , the 2D viewing 3.The algorithm should be able to handle lines that are
transformation is simply referred to as the window – to – completely outside the clipping window.
viewport transformation or the windowing transformation. 4.The algorithm should be able to handle horizontal, vertical,
• Fig. given below illustrates the mapping of a picture and diagonal lines.
So, after calculating for x and y coordinate, we get 5.The algorithm should be able to handle lines with positive and
selections that falls within a rectangular window onto a
designated rectangular viewpoint. negative slopes.
Where sx is the scaling factor of x coordinate and sy is the scaling
factor of y coordinate Polygon Clipping:

Matrix Representation of the above three steps of


Transformation:
Fig. A viewing transformation using standard rectangles for Sutherland-Hodgman Polygon Clipping Algorithm:
the window and viewpoint. The Sutherland Hodgman algorithm is used for clipping
Diff between window port & view port : polygons. In this algorithm, all the
vertices of the polygon are clipped against each edge of the
Step1:Translate clipping window.
window to origin 1 First the polygon is clipped against the left edge of the clipping
window to get new vertices
          Tx=-Xwmin Ty=-Ywmin
of the polygon. These new vertices are used to clip the polygon
Step2:Scaling of the window to match its size to the viewport against right edge, top edge,
          Sx=(Xymax-Xvmin)/(Xwmax-Xwmin) bottom edge, of the clipping window.
          Sy=(Yvmax-Yvmin)/(Ywmax-Ywmin) To find new sequence of vertices four cases are considered.
Step3:Again translate viewport to its correct position on screen.
          Tx=Xvmin
          Ty=Yvmin
Above three steps can be represented in matrix form:
          VT=T * S * T1 T
Homogeneous coordinates are a way of representing points in = Translate window to the origin
space using a coordinate system with an extra dimension. In a S=Scaling of the window to viewport size
three-dimensional Cartesian coordinate system, we represent a T1=Translating viewport on screen.
point using three coordinates (x, y, z). In homogeneous
coordinates, we represent the same point using four
coordinates (x, y, z, w), where w is a scaling factor.
Why homogenous coordinate are used for transformation
computation in computer graphics? :Homogeneous
coordinates are used for transformations in computer graphics
because they allow for the representation of translation (as
well as rotation and scaling) using matrix multiplication. This
can be convenient because matrix multiplication is easy to
implement and can be performed efficiently on modern Viewing Transformation= T * S * T1
computers. In homogeneous coordinates, a point in 2D space is
represented by a 3-element vector of the form [x, y, w], and a Application :
point in 3D space is represented by a 4-element vector of the 1.The window to viewport transformation is a technique used in Point Clipping : Assuming that the clip window is a rectangle
form [x, y, z, w]. The w element is known as the homogeneous computer graphics to map coordinates from one coordinate in standard position, we save a point P=(x, y) for display if the
coordinate and it is used to encode the translation information. system to another. The transformation is usually used to map following inequalities are satisfied:
By setting the value of w to a non-zero number, it is possible coordinates from a logical coordinate system (also known as a xwmin ≤ x ≤ xwmax
to represent a translation as a simple scaling operation, which "window") to a physical coordinate system (also known as a ywmin ≤ y ≤ ywmax
can then be performed using matrix multiplication.One "viewport"). Where the edges of the clip window (xwmin, xwmax, ywmin,
advantage of using homogeneous coordinates is that they ywmax) can be either the world- coordinate window boundaries
2.One way to perform the transformation is to use a
allow for the representation of all affine transformations or viewport boundaries. If any one of these four inequalities is
transformation matrix. The transformation matrix is a 3x3 matrix
(which include translation, rotation, scaling, and shearing) not satisfied, the point is clipped.
that defines how the coordinates in the window are mapped to
using a single matrix. This can simplify the process of Algorithm: 1.
the coordinates in the viewport. Get the minimum and maximum coordinates of the viewing
composing multiple transformations, as it is only necessary to
multiply the matrices representing the individual plane.
transformations to obtain the matrix representing the 2. Get the coordinates for a point.
composite transformation. 3. Check whether given input lies between minimum and
maximum coordinates of viewing plane.
4. If yes display the point which lies inside the region otherwise
discard it.
Liang-Barsky Line Clipping Algorithm : This algorithm is Unit 4 Projection - Projection is any method of mapping three
considered to be the faster parametric line-clipping algorithm. Parallel Projection Perspective Projection dimensional (3D) objects into two dimensional (2D) view plane
The following concepts are used in this clipping: (screen). In general, projection transforms a N-dimension points
1. The parametric equation of the line. Parallel projection Perspective projection represents to N-1 dimensions.
2. The inequalities describing the range of the clipping represents the object in a the object in three dimensional way.- Two types of projection:
window which is used to determine the intersection between different way like telescope.
a) Parallel projection: In parallel projection, coordinate positions
the line and the clip window. In parallel projection, these In perspective projection, objects are transformed to view plane along parallel line.
The parametric equation of a line can be given by, effects are not created. that are far away appear smaller,
- A parallel projection preserves relative proportion of objects so
x = x1 + u∆x and objects that are near appear
bigger. that accurate views of various sides of an object are obtained
y = y1 + u∆y, 0 ≤ u ≤ 1
Where, ∆x = x2 − x1 & ∆y = y2 − y1 but doesn’t give realistic representation of the 3D objects.
Then, writing the point-clipping conditions in the parametric The distance of the object The distance of the object from the - Can be used for exact measurement so parallel lines remain
form: xwmin ≤ x1 + u∆x ≤ xwmax from the center of projection center of projection is finite.
ywmin ≤ y1 + u∆y ≤ ywmax is infinite.
The above four inequalities can be expressed as,
upk ≤ qk where, k = 1, 2, 3, 4 (corresponds to the left, right, Parallel projection can give Perspective projection cannot give
bottom, and top boundaries, respectively). the accurate view of object. the accurate view of object.
The parameters p & q are defined as,
p1 = −∆x , q1 = x1 − xwmin (Left Boundary)
p2 = ∆x , q2 = xwmax − x1 (Right Boundary) The lines of parallel The lines of perspective projection
p3 = −∆y , q3 = y1 − ywmin (Bottom Boundary) projection are parallel. are not parallel.
parallel.
p4 = ∆y , q4 = ywmax − y1 (Top Boundary)
Fig: Parallel projection of an object to the view plane
When the line is parallel to a view window boundary, the p Projector in parallel Projector in perspective projection Parallel projection : On the basis of angle made by projection
value for the boundary is zero. projection is parallel. is not parallel.
When pk < 0, as u increase line goes from the outside to inside line with view plane, there are two types of parallel projection.
(entering). i. Orthographic parallel projection
Two types of parallel Three types of perspective
When pk > 0, line goes from the inside to outside (existing). projection : projection: ii. Oblique parallel projection
When pk = 0 & qk < 0 then line is trivially invisible because it 1.Orthographic, 1.one point perspective, i. Orthographic parallel projection
is outside view window. 2.Oblique 2.Two point perspective, - When the projection lines are perpendicular to view plane, the
When pk = 0 & qk > 0 then line is inside the corresponding 3. Three point perspective, projection is orthographic parallel projection.
window boundary.
Using the following conditions, the position of line can be It does not form realistic It forms a realistic view of object.
determined: view of object.

2D transformations refer to operations that manipulate the


position, size, and orientation of objects in a two-dimensional
space, such as scaling, rotating, and translating. These
transformations can be useful for a variety of applications, Here, after projection of P(x, y, z) on XY-plane we get P′(x′, y′)
including image processing, computer graphics, and user where, x′ = x, y′ = y & z=0
interface design.There are several types of 2D transformations In homogenous coordinate form,
that are commonly used:
Parameters u1 & u2 can be calculated that define the part of Translation : Translation involves moving an object from one
line that lies within the clip rectangle. When, position to another by a certain distance in the x and y directions.
Scaling: Scaling changes the size of an object by a scaling factor.
Rotation: Rotation rotates an object around a point by a certain
angle. ii. Oblique parallel projection -
If u1 > u2, the line is completely outside the clip window and Shearing: Shearing skews an object by a certain angle in the x or Here, projection lines make certain angle to view plane. -
it can be rejected. Otherwise, the endpoints of the clipped line y direction.
Oblique projection is specified with two angle ‘α’ & ‘φ’ where
are calculated from the two values of parameter u. These transformations can be combined to create more complex
transformations. They can also be represented using matrices, ‘α’ is the angle made by projection line with view plane line (L)
which can make it easier to apply multiple transformations to an which is formed by joining oblique projected point P′(xp, yp)
object at once. and orthogonal projected point P′′(x, y) on point (x, y) on view
plane & ‘φ’ is the angle between ‘L’ & horizontal direction of
3D transformations refer to operations that manipulate the view plane.
position, size, and orientation of objects in a three-dimensional
space, such as scaling, rotating, and translating. These
transformations can be useful for a variety of applications,
including computer graphics, engineering design, and scientific
visualization.There are several types of 3D transformations that
are commonly used:
Translation: Translation involves moving an object from one
position to another by a certain distance in the x, y, and z
directions.
Scaling: Scaling changes the size of an object by a scaling factor.
Rotation: Rotation rotates an object around a point by a certain
angle.
Shearing: Shearing skews an object by a certain angle in the x, y,
or z direction.
Projection : Projection maps a 3D object onto a 2D plane, which
can be useful for creating 2D views of a 3D scene.
These transformations can be combined to create more complex
Clipping transformations. They can also be represented using matrices,
which can make it easier to apply multiple transformations to an
: The process of discarding those parts of a picture which are
object at once.
outside of a specified region or window is called clipping. The
procedure using which we can identify whether the portions 2D shearing is a geometric transformation that distorts the shape b) Perspective projection: In perspective projection, objects
of an object by slanting its copy along one or both axes. It is often positions are transformed to the view plane along lines that
of the graphics object is within or outside a specified region or used in computer graphics to give the illusion of three- converge to point behind view plane.
space is called clipping dimensionality and can be represented by a 2x2 matrix.
algorithm. - A perspective projection produces realistic views but does not
The matrix for 2D shearing along the x-axis is given by:
Applications of clipping: preserve relative proportions. Projections of distance objects
[1 sx]
- Extracting part of a defined scene for viewing [0 1] from view plane are smaller than the projections of objects of
- Identifying visible surfaces in three-dimensional views where sx is the shear factor along the x-axis. the same size that are closer to the projection place.
- Antialiasing line segments or object boundaries The matrix for 2D shearing along the y-axis is given by:
- Creating objects using solid-modeling procedures [1 0]
- Displaying a multi-window environment [sy 1]
- Drawing and painting operations that allow parts of a picture where sy is the shear factor along the y-axis.
to be selected for copying, To apply shearing to an object, you can multiply the
moving, erasing, or duplicating. transformation matrix by the object's matrix of coordinates. For
example, if you have a triangle with vertices at (0,0), (1,0), and
(0,1), and you want to apply shearing along the x-axis with a
shear factor of 0.5, the resulting vertices would be (0,0), (1.5,0),
and (0,1). Fig: Perspective projection of an object to the view plane
Derive relation for three dimensional translation and rotation : Describe 3d windows to viewport transformation with matrix Unit 8
In 3D space, translation and rotation can be represented using representation for each step.
matrices. A translation matrix is a 3x3 matrix that represents a Three-dimensional window to viewport transformation involves
transforming three-dimensional coordinates from the window Define realism in human perception : Realism in human
translation by a certain distance in the x, y, and z directions. The perception refers to the idea that our perceptions of the world
coordinates system to the viewport coordinates system. This can
translation matrix for a translation by (dx, dy, dz) is given by: be done using a series of matrix transformations. around us accurately represent the true nature of the objects and
Scaling: The first step is to scale the three-dimensional events that we are perceiving. This means that when we see, hear,
coordinates to fit within the dimensions of the touch, taste, or smell something, our brain is accurately
1.viewport. This can be done using a scaling matrix of the form: interpreting the sensory information it receives and constructing
[sx 0 0 0] an accurate representation of the object or event in question.
[0 sy 0 0] Realism in perception is often contrasted with other philosophical
[0 0 sz 0] perspectives, such as idealism, which posits that our perceptions
A rotation matrix is a 3x3 matrix that represents a rotation
[0 0 0 1] do not accurately reflect the true nature of the world, and that our
around a point by a certain angle. There are several different where (sx, sy, sz) are the scaling factors in the x, y, and z
types of rotation matrices that can be used, depending on the experiences are shaped by our thoughts and beliefs.
dimensions, respectively.
axis of rotation and the direction of the rotation. 2.Translation: The next step is to translate the scaled coordinates
so that the origin of the window coordinates system maps to the Significance difference between rendering and image synthesis in
For example, the rotation matrix for a rotation around the x-axis
origin of the viewport coordinates system. This can be done using creating computer generated 3d image:
by an angle theta is given by:
a translation matrix of the form: Rendering and image synthesis are two closely related techniques
[1 0 0 tx] that are used to generate computer-generated 3D images.
[0 1 0 ty] However, there are some key differences between the two:
[0 0 1 tz] 1.Rendering refers to the process of taking a 3D scene and creating
[0 0 0 1]
a 2D image from it, using algorithms to simulate the way light
where (tx, ty, tz) are the translation factors in the x, y, and z
dimensions, respectively. interacts with the objects and surfaces in the scene. Image
3.Rotation: The final step is to rotate the transformed coordinates synthesis, on the other hand, refers to the process of generating a
to align the axes of the window and viewport coordinates systems. 2D image from scratch using computer algorithms, without the
The rotation matrix for a rotation around the y-axis by an angle This can be done using a rotation matrix of the form: need for a pre-existing 3D scene.
theta is given by: [cos(theta) -sin(theta) 0 0] 2.Rendering is typically done using specialized software that is
[sin(theta) cos(theta) 0 0] designed specifically for this purpose, such as 3D rendering
[0 0 1 0]
applications like 3ds Max, Maya, or Blender. Image synthesis, on
[0 0 0 1]
where theta is the angle of rotation. the other hand, can be done using a wide range of software tools
These three transformations can be combined into a single matrix and techniques, including drawing and painting programs,
by performing the scaling transformation first, followed by the generative art tools, and more.
The rotation matrix for a rotation around the z-axis by an angle translation transformation, and finally the rotation transformation. 3.Rendering is usually focused on creating photorealistic images
theta is given by: This can be done using matrix multiplication. that look as realistic as possible, while image synthesis can be used
to create a wide range of styles and visual effects, from
3D viewing refers to the process of representing three-
photorealistic to highly stylized and abstract.
dimensional objects on a two-dimensional display, such as a
Overall, rendering and image synthesis are
computer screen or a printed page. There are several ways to
both important techniques for creating computer-generated 3D
achieve this, including:
Orthographic projection: This method represents a 3D object as a images, and they can be used together or separately depending on
2D image by projecting its vertices onto a plane perpendicular to the needs of a particular project.
These matrices can be combined to perform multiple
the line of sight. It is often used for technical drawings and
translations and rotations in sequence. For example, to translate Flat shading Gouraud shading Phong shading
computer-aided design (CAD) applications.
an object by (dx, dy, dz) and then rotate it around the x-axis by Perspective projection: This method represents a 3D object as it Computes Computes Applies
an angle theta, you can multiply the translation matrix and the would appear to the human eye, with objects farther away illumination once illumination illumination at
rotation matrix to obtain the combined transformation matrix: appearing smaller than those that are closer. It is often used in per polygon and atvertices and every point of
computer graphics and photography to create the illusion of depth apply it to whole interpolates. polygon surface.
and distance. polygon.
Parallel projection: This method represents a 3D object as a 2D Creates Interpolates colors Interpolates normal
image by projecting its vertices onto a plane parallel to the line of discontinuous along edges and instead of colors.
sight. It is often used for architectural drawings and maps. incolor. scan line.
To view a 3D object on a 2D display,
Problems of Mach Handles Mach Removes Mach
you can use a 3D graphics software package that allows you to bands. bands problem bands completely.
This rotate, scale, and translate the object as needed. You can also use a found in flat
combined matrix can then be applied to any point in 3D space to 3D viewer, which is a software tool that allows you to view 3D shading.
perform both the translation and the rotation in one step. models on your computer
Low cost. Not so expensive. More expensive
3D translation: In computer graphics, 3D translation refers to the than gouraud
How can you represent 3d viewing? :
process of moving an object in 3D space by a specified distance shading.
There are several ways to represent 3D viewing, each with their
along a given axis or direction. This can be represented Requires very less Required moderate Requires complex
own equation and practical applications. Here are a few common
mathematically using a 3D translation matrix, which is defined as processing and is processing time. processing and is
techniques: fast in time. slower but is more
follows: Perspective projection: This is a way of simulating the way that
[x'] [1 0 0 Tx] [x] efficient as
objects appear to the human eye, with objects that are farther away compared to other
[y'] = [0 1 0 Ty] * [y]
appearing smaller than those that are closer. The equation for shading methods.
[z'] [0 0 1 Tz] [z]
[w'] [0 0 0 1] [w] perspective projection is:
x' = x / (z + d) Lighting equation Lighting equation Lighting equation
Where (x, y, z) are the original coordinates of the point, (x', y', z')
y' = y / (z + d) used once per used at each vertex. used at each pixel.
are the new coordinates of the point after translation, and (Tx, Ty,
Where x, y, and z are the 3D coordinates of the object, d is the polygon.
Tz) are the translation distances along the x, y, and z axes,
respectively. The value w is a homogenous coordinate and is distance of the viewer from the projection plane, and x' and y' are
typically set to 1. the 2D coordinates of the projected image. Define intensity attenuation : Intensity attenuation refers to the
In practical applications, 3D translation is used extensively in Orthographic projection: This is a way of projecting 3D objects decrease in intensity or strength of a signal, light, or energy source
computer graphics and computer games to move objects within a onto a 2D plane without any perspective distortion. It is often used over distance. This can occur due to various factors, including
3D environment. It is also used in 3D modeling and visualization for technical drawings and computer-aided design (CAD) absorption, scattering, and interference.
software, as well as in virtual and augmented reality applications. applications. The equation for orthographic projection is: In the context of lighting, intensity attenuation refers to
x' = x the way that the intensity of a light source decreases as it travels
2D mirror :A 2D mirror is a flat surface that reflects a 2D image y' = y through space. This is often modeled using an attenuation function,
of an object. In computer graphics, this can be represented which describes the rate at which the intensity of the light decreases
Where x, y, and z are the 3D coordinates of the object, and x' and
mathematically using a 2D reflection matrix, which is defined as over distance. The attenuation function can be used to calculate the
y' are the 2D coordinates of the projected image.
follows: intensity of the light at any given distance from the source.
[x'] [1 0] [x] Isometric projection: This is a way of representing 3D objects in Intensity attenuation is an important concept in many
[y'] = [0 -1] * [y] 2D space with a combination of orthographic and perspective fields, including optics, computer graphics, and physics. It is often
Where (x, y) are the original coordinates of the point, and (x', y') projection. It is often used in video games and other interactive 3D taken into account when designing lighting systems, calculating the
are the reflected coordinates of the point. applications. The equation for isometric projection is: range and effectiveness of sensors, and more.
In practical applications, 2D reflection is used in computer x' = x - y
graphics and image processing to create a mirrored version of an y' = (x + y) / sqrt(2) - z
image or to flip an image horizontally or vertically. It is also used Where x, y, and z are the 3D coordinates of the object, and x' and
in user interface design to create symmetry and balance. y' are the 2D coordinates of the projected image.
To reflect an image about a line other than the x- or y-axis, a 2D These are just a few examples of the ways that 3D viewing can be
rotation matrix can be combined with the reflection matrix. For represented. There are many other techniques, each with their own
example, to reflect an image about the line y = x, the following equation and practical applications.
matrix could be used:
[x'] [0 1] [x]
[y'] = [1 0] * [y]
This matrix rotates the image by 90 degrees counterclockwise
and then reflects it about the y-axis.
Polygon Rendering Method a) What is quadric surface? How can we detect shadow in computer graphics?
Constant intensity shading method b) A quadric surface is a type of mathematical surface that can be
Gouraud shading method (Intensity Interpolation) c) defined using a quadratic equation. Quadric surfaces are defined
Phong shading mehod (Normal vector interpolation) a) by the equation:
Constant Intensity Shading (Flat Shading)Method -A ax^2 + by^2 + cz^2 + 2fyz + 2gzx + 2hxy + 2ux + 2vy + 2wz +
fast and simple method for rendering an object with polygon d=0
where a, b, c, etc. are constants. Quadric surfaces include well-
surfaces is constant intensity shading.
known shapes such as spheres, cylinders, cones, and ellipsoids.
- In this method, a single intensity is calculated for each polygon
In computer graphics, quadric surfaces are often used to
and the intensity is applied to all the points of surface of polygon. represent 3D shapes and objects. They are relatively simple to
Hence, all the points over the surface of the polygon are model and can be easily manipulated using mathematical
displayed with same intensity value. - transformations, making them a popular choice for use in 3D
Constant shading is useful for quickly displaying the general graphics applications.
appearance of a curved surfaces. This approach is valid if:
a. Light source is at infinity (N. L is constant on polygon). Difference between diffuse and specular reflection:
b. Viewer is at infinity (V. R is constant on polygon). c.
Object is polyhedron and is not an approximation of an object
with a curved surface.
b) Gouraud shading method Why is shading required in computer graphics? Shading
- Developed by Henri Gouraud. - is an important aspect of computer graphics because it helps to
It renders the polygon surface by linearly interpolating intensity create the illusion of three-dimensionality and realism in a
values across the surface. - scene. When an object is shaded correctly, it appears to have
Intensity values for each polygon are matched with the values of volume and to be affected by the lighting in the scene, which
adjacent polygon along the common edge, thus eliminating the helps the viewer to perceive the object as a physical, three-
intensity discontinuities that occur in flat shading. dimensional entity. Shading is
Steps: 1. Determine the average unit normal vector at each vertex achieved in computer graphics by simulating the way that light
interacts with the surface of an object. This can be done using
a variety of techniques, such as diffuse shading, specular
shading, and ambient occlusion. By applying these techniques,
of polygon.
a rendering engine can create the appearance of shadows,
2. Apply illumination model at each vertex to calculate the vertex
highlights, and other surface details that help to make an
intensity.
object look more realistic. In addition
3. Linearly interpolate the vertex intensities over the projected
to adding realism to a scene, shading is also used to convey
area of polygon .
information about the surface properties of an object. For
example, a shiny, metallic surface will reflect light differently
than a matte, rubber surface, and shading can be used to
communicate this to the viewer. Shading is also used to create
visual interest and to direct the viewer's attention to certain
parts of a scene.
Blobby objects are three-dimensional shapes that are made up
of a large number of interconnected, spherical or blob-like
elements. These elements, also known as "blobs," are typically
interconnected by a network of structural elements, such as
springs or beams, and can move and deform as a unit.
Blobby objects are often used in computer graphics
to simulate soft bodies, such as jello, skin, or cloth. They can
also be used to simulate rigid bodies with complex shapes, such
as rocks or plants.
Blobby objects are typically simulated using
techniques from physics and computer science, such as mass-
spring systems or finite element analysis. They can be rendered
using a variety of techniques, including ray tracing or
c) Phong shading method rasterization.
- Developed by Phong Bui Tuong.
- It renders the polygon surface by linearly interpolating normal Basic illumination models:
vector across the surface. 1. Ambient Illumination :
Steps: 1. Determine the average unit normal vector at each vertex Assume you are standing on a road, facing a building with glass
exterior and sun rays are falling on that building reflecting back
from it and the falling on the object under observation. This
of polygon. would be Ambient Illumination. In simple words, Ambient
2. Linearly interpolate the vertex normal over the projected area Illumination is the one where source of light is indirect.The
of the polygon. reflected intensity Iamb of any point on the surface is:
3. Apply illumination model at position along each scan line to
calculate pixel intensities using interpolated normal vectors.

2. Diffuse Reflection :
Diffuse reflection occurs on the surfaces which are rough or
grainy. In this reflection the brightness of a point depends upon
the angle made by the light source and the surface.
The reflected intensity Idiff of a point on the surface is:

3. Specular Reflection :
When light falls on any shiny or glossy surface most of it is
reflected back, such reflection is known as Specular Reflection.
Phong Model is an empirical model for Specular Reflection
which provides us with the formula for calculation the reflected
intensity Ispec:
UNIT-6 Describe the architecture of raster scan display. Explain about sweep, octree, and boundary representations for
What is the method to recognize boundary point and interior A raster scan display is a type of display that produces an image by solid modeling.
point in solid modeling? scanning a beam of electrons across a phosphorescent screen, row Sweep, octree, and boundary representations are three different
In solid modeling, a boundary point is a point on the surface of a by row from top to bottom. The electrons cause the phosphors on techniques that can be used for solid modeling, which is the
3D object, while an interior point is a point inside the volume of the screen to emit light, creating a pattern of illuminated pixels that representation of 3D objects in a computer.
the object. One method for recognizing boundary points and forms the image. The architecture of a raster scan display typically 1. Sweep representation: A sweep representation of a 3D object is
interior points is to use a BSP (binary space partitioning) tree, as consists of the following components: created by sweeping a 2D cross-section (also known as a "profile")
I described in my previous response. Another method is to use a 1. CRT (cathode ray tube): The CRT is a vacuum tube that contains along a path in 3D space. The profile is repeatedly deformed and
CSG (constructive solid geometry) tree, which represents a 3D an electron gun at one end and a phosphorescent screen at the other translated as it is swept along the path, creating a 3D shape. Sweep
object as a combination of simpler objects using Boolean end. The electron gun generates a beam of electrons that is focused representations are useful for modeling objects with a high degree of
operations (such as union, intersection, and difference).To onto the screen by a set of electron optics. symmetry, such as cylinders, cones, and tori.
determine whether a point is a boundary point or an interior 2. Electron optics: The electron optics consists of a set of 2. Octree representation: An octree is a data structure that represents a
point using a CSG tree, we can evaluate the point against the tree electromagnets and lenses that shape and direct the beam of 3D space by recursively dividing it into smaller octants (eight
using the Boolean operations. For example, if the tree represents electrons onto the screen. The electron optics can be used to move subspaces) until the desired level of detail is achieved. Octrees are
a sphere, we can check whether the point is inside the sphere by the beam horizontally and vertically across the screen, as well as to commonly used for spatial indexing and collision detection, as they
evaluating the point against the sphere's equation. If the point is change the size and intensity of the beam. allow for efficient searches and comparisons of 3D objects.
inside the sphere, it is an interior point; if it is on the surface of 3. Phosphorescent screen: The phosphorescent screen is coated 3. Boundary representation (B-rep): A boundary representation (B-
the sphere, it is a boundary point. There are also other methods with a layer of phosphors that emit light when struck by the beam rep) of a 3D object is a representation that encodes the surface of the
for recognizing boundary points and interior points in solid of electrons. The screen is divided into a grid of pixels, and the object as a set of polygons, along with information about the topology
modeling, such as using a voxel grid or a distance field. These intensity of each pixel is controlled by the intensity of the electron (connectivity) of the polygons. B-reps are useful for modeling objects
methods can be more efficient in certain cases, but they may not beam. with complex topologies and for performing operations such as
always provide as much detail or accuracy as BSP trees or CSG 4. Video controller: The video controller is a hardware device that Boolean operations (union, intersection, difference) and surface
trees. In solid modeling, a BSP (binary space partitioning) tree is controls the movement of the electron beam across the screen. It smoothing.
a data structure that is used to represent a 3D object by receives input from the computer and generates the appropriate Each of these techniques has its own strengths and weaknesses, and
recursively dividing the object's space into convex sets. Each signals to move the beam to the desired position on the screen. the appropriate technique to use will depend on the specific
node in the tree represents a convex subspace, and the children 5. Frame buffer: The frame buffer is a memory location where the requirements of the modeling task.
of the node represent the subspaces that are obtained by image to be displayed is stored. It consists of a grid of pixels, with
each pixel represented by a set of bits that specify its color and UNIT 7
partitioning the convex set using a plane. List any two disadvantages of BSP tree method in visible surface
intensity. The video controller reads the pixel values from the
frame buffer and uses them to control the intensity of the electron detection.
Describe how BSP recursively subdivided a space into One disadvantage of using BSP trees for visible surface detection is
convex sets. beam.
6. Power supply: The power supply provides the voltage and that they can be complex to construct, especially for large and
To construct a BSP tree, we start with the entire space and select complex scenes. This can be time-consuming and may require
a plane to partition the space. The plane divides the space into current needed to operate the CRT, the electron optics, and the
video controller. significant preprocessing before the scene can be rendered. Another
two convex sets, which become the children of the root node in disadvantage of BSP trees is that they can be inefficient for scenes
the BSP tree. We then recursively apply this process to each of Raster scan displays are commonly used in computer monitors,
televisions, and other electronic devices that display images. They with many polygons. This is because the tree must be traversed for
the children, until we reach a leaf node (a node with no every pixel on the screen, which can be slow if the tree is large, or the
children).For example, consider a cube with vertices at (0,0,0), are also known as raster graphics displays or raster-scan displays.
scene has a high polygon count. This can make BSP trees less
(1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), and (0,1,1). To suitable for real-time rendering applications where high frame rates
Explain the process for solid modeling with example.
construct a BSP tree for this cube, we could choose the plane z = are important.
Solid modeling is the representation of 3D objects in a computer,
0.5 as the root node, which would partition the space into two
with a focus on the object's geometric properties and the
convex sets: one above the plane (z > 0.5) and one below the Make a comparison between Painter's algorithm and A- Buffer
relationships between its parts. Here is an example of the process
plane (z < 0.5). The above set would contain the top four algorithm.
for solid modeling an object:
vertices of the cube, and the below set would contain the bottom Painter's algorithm is a technique for rendering 3D graphics that
- Determine the requirements and constraints of the object. This
four vertices. We could then choose additional planes to orders and paints the polygons in a scene based on their depth relative
might include the size, shape, materials, and intended use of the
partition each of these convex sets further, until we reach leaf to the viewer. This algorithm is simple to implement and works well
object, as well as any design specifications or standards that must
nodes that represent the individual polygons of the cube. This for scenes with a small number of polygons, but it can be inefficient
be followed.
process is repeated until the entire space has been partitioned for scenes with many polygons because it has to recalculate the depth
- Choose a suitable solid modeling technique and software tool.
into a hierarchy of convex sets, which can then be used to of each polygon for every pixel on the screen. The A-Buffer
There are various techniques and tools available for solid modeling,
efficiently determine the location of a point relative to the 3D algorithm is an improvement on Painter's algorithm that addresses
such as boundary representation (B-rep), constructive solid
object (for example, to determine whether the point is an interior this issue by storing the depth values of each pixel in a buffer,
geometry (CSG), and voxel-based modeling. The choice of
point or a boundary point). allowing the algorithm to quickly determine which polygons are
technique and tool will depend on the requirements and constraints
of the object, as well as the user's preferences and skills. visible at each pixel without having to recalculate the depth values.
What do you mean by solid modeling? This makes the A-Buffer algorithm more efficient for scenes with
- Create a 3D model of the object using the chosen technique and
Solid modeling is the representation of 3D objects in a computer, many polygons, but it requires more memory and may be more
tool. This might involve building the model from scratch using
with a focus on the object's geometric properties and the complex to implement.
geometric primitives and construction features, or it might involve
relationships between its parts. Solid models are typically used In summary, Painter's algorithm is simple to implement and works
modifying an existing model or importing data from another
for engineering and design applications, where it is important to well for small scenes, but it can be inefficient for large scenes. The A-
source.
accurately represent the size, shape, and properties of an object. Buffer algorithm is more efficient for large scenes, but it requires
- Validate and optimize the model. This might involve checking the
In solid modeling, 3D objects are represented as a set of more memory and may be more complex to implement.
model for errors or inconsistencies, as well as making any
geometric primitives (such as points, lines, and curves) and more
necessary adjustments to the model to meet the requirements and
complex shapes that are constructed from these primitives. The What are the object space and image space method of hidden
constraints. It might also involve performing simulations or
geometric primitives and shapes are used to represent the surface removal?
analyses to verify the object's behavior or performance under
boundary of the object, as well as any internal features or details. In computer graphics, hidden surface removal is the process of
different conditions.
Solid models can be created and edited using specialized determining which surfaces of 3D objects are visible in the final
- Use the model for the intended purpose. This might involve
software tools, such as computer-aided design (CAD) systems. image, and which are occluded by other objects. There are several
generating 2D drawings, animations, or simulations of the object,
They can also be used to generate 2D drawings, animations, and different methods that can be used to perform hidden surface
as well as manufacturing or testing the object in the real world.
simulations of the object, as well as to perform analyses of the removal, including object space methods and image space methods.
For example, consider a model of a plastic toy car. The
object's properties (such as stress and strain). 1. Object space methods: Object space methods perform hidden
requirements and constraints of the model might include the size of
the car (to fit within certain dimensions), the materials used (to be surface removal by analyzing the 3D geometry of the objects in the
Describe how a polygon can be represented using BSP tree safe for children), and the intended use (to be played with by scene. These methods can be more accurate than image space
with example. children). The solid modeling technique chosen might be boundary methods, but they can also be more computationally expensive, as
A polygon can be represented using a BSP (binary space representation (B-rep), and the software tool used might be a they require the 3D coordinates of each vertex in the scene to be
partitioning) tree by creating a leaf node in the tree that computer-aided design (CAD) system. The modeler might create transformed into screen space. Some examples of object space
represents the polygon. In a BSP tree, each node represents a the model by starting with basic geometric shapes (such as a cube methods include:
convex subspace of the 3D space, and the children of the node for the body and cylinders for the wheels) and using construction - BSP trees (binary space partitioning): BSP trees are data structures
represent the subspaces that are obtained by partitioning the features (such as extrusion and fillet) to add details and complexity that recursively divide a 3D space into convex sets, using planes as
convex set using a plane. The leaf nodes in the tree represent a to the model. The modeler might then validate and optimize the the dividing surfaces. BSP trees can be used to efficiently determine
single convex set, which can be either a single polygon or a model by checking for errors and making any necessary the visibility of objects and surfaces in the scene by traversing the tree
group of coplanar polygons. To represent a polygon using a BSP adjustments, such as making sure the wheels are the right size and and comparing the positions of the objects relative to the dividing
tree, we can create a leaf node in the tree that represents the properly attached to the body. Finally, the model might be used to planes.
polygon and assign the vertices of the polygon to the leaf node generate 2D manufacturing drawings, animations of the car in - Depth sorting: Depth sorting is a method that involves sorting the
as its "points." The leaf node can also store other information action, or simulations of the car's performance. objects in the scene by their distance from the viewer and rendering
about the polygon, such as its normal vector, color, and texture them in back-to-front order. This ensures that objects that are closer
coordinates. For example, consider a triangle with vertices at to the viewer are drawn on top of objects that are further away,
(0,0,0), (1,0,0), and (0,1,0). To represent this triangle using a effectively hiding the occluded surfaces.
BSP tree, we could create a leaf node in the tree and assign the 2. Image space methods: Image space methods perform hidden
three vertices to the node as its points. The leaf node might look surface removal by analyzing the 2D image pixels that are produced
something like this: by the rendering process. These methods are typically faster than
Leaf node: object space methods, as they only require the 2D image data to be
- Points: (0,0,0), (1,0,0), (0,1,0) analyzed, rather than the 3D geometry of the objects. However, they
- Normal vector: (0,0,1) can be less accurate, as they may produce artifacts or other visual
- Color: red errors due to the projection of the 3D objects onto the 2D image
- Texture coordinates: (0,0), (1,0), (0,1) plane. Some examples of image space methods include:
We can then use the BSP tree to perform operations on the - Painters’ algorithm: The Painters algorithm is a method that
polygon, such as rendering, collision detection, or hidden surface involves rendering the objects in the scene in back-to-front order,
removal. based on their depth. This ensures that objects that are closer to the
viewer are drawn on top of objects that are further away, effectively
hiding the occluded surfaces.
- Z-buffering: Z-buffering is a method that involves maintaining a
depth buffer (also known as a Z-buffer) that stores the depth value of
each pixel in the image. As each object is rendered, its depth values
are compared to the values in the depth buffer, and only the pixels
with the smallest (
Explain the scan line algorithm for removing hidden surfaces. Explain which algorithm is better for hidden surface removal. Explain the area subdivision method for visible surface
The scan line algorithm is a technique for removing hidden The Z-buffer is the most widely used method for solving the hidden detection.
surfaces in 3D graphics that works by scanning the image from surface problem. Z-buffer algorithm: This algorithm maintains a The area subdivision method is a technique used in computer
top to bottom, one row at a time. For each scan line, the algorithm buffer (a 2D array) that stores the depth value of each pixel on the graphics to determine which surfaces of a 3D model are visible
processes the polygons that intersect that line, determining which screen. As the scene is rendered, it compares the depth of incoming from a given viewpoint. It works by dividing the screen into
polygons are visible and which are occluded by other polygons. pixels with the value stored in the buffer and only draws the closest smaller areas, or subregions, and then determining which
The algorithm keeps a list of active polygons for each scan line, surface. It has the following major advantages over other hidden surfaces in the 3D model intersect each subregion. There are
which are the polygons that intersect that line and are currently surface removal algorithms: several different algorithms that can be used to implement the
being drawn. As the scan line is processed, the algorithm updates No sorting is required. Models can be rendered in any order. area subdivision method, but they all follow a similar process:
the active polygon list by adding any new polygons that intersect No geometric intersection calculations are required. The algorithm - Divide the screen into a grid of subregions.
the line and removing any polygons that have been completely produces the correct output even for intersecting or overlapping - For each subregion, determine the set of surfaces in the 3D
processed. To determine which polygons are visible, the triangles. model that intersect it. This is often done by performing a ray
algorithm compares the depths of the polygons at each pixel and The algorithm is very simple to implement. casting operation from the viewpoint to the subregion.
only draws the polygon with the smallest depth value. This - Sort the intersecting surfaces by their distance from the
ensures that polygons closer to the viewer are drawn over Explain in detail about plain equation method. viewpoint. This is necessary because surfaces that are closer to
polygons that are farther away, eliminating hidden surfaces. The plain equation method is a technique for performing visible the viewpoint should occlude (obscure) surfaces that are further
The scan line algorithm is relatively simple to implement and can surface detection (hidden surface removal) in 3D graphics. It works away.
be efficient for scenes with a moderate number of polygons, but it by representing each polygon in a scene as a plane in 3D space, using -Starting with the closest surface, check whether it is visible
may not be suitable for very large or complex scenes. the equation of a plane to describe the orientation and position of the from the viewpoint. If it is, add it to the set of visible surfaces. If
polygon. To determine which polygons are visible, the algorithm it is not, skip it and move on to the next surface.
Explain the scan line algorithm for removing hidden surfaces. compares the depths of the polygons at each pixel and only draws the One advantage of the area subdivision method is that it is
The scan line algorithm is a technique for removing hidden polygon with the smallest depth value. This ensures that polygons relatively simple and easy to implement. However, it can be
surfaces in 3D graphics that works by scanning the image from closer to the viewer are drawn over polygons that are farther away, computationally expensive, especially if the screen is divided
top to bottom, one row at a time. For each scan line, the algorithm eliminating hidden surfaces. The equation of a plane in 3D space is into many subregions. As a result, it is not typically used in real-
processes the polygons that intersect that line, determining which typically represented as: time graphics applications where performance is critical.
polygons are visible and which are occluded by other polygons. Ax + By + Cz + D = 0
The algorithm keeps a list of active polygons for each scan line, where (A, B, C) is a vector normal to the plane and D is a constant
which are the polygons that intersect that line and are currently that determines the position of the plane. To determine the visibility Explain the visible surface detection with an algorithm.
being drawn. As the scan line is processed, the algorithm updates of a polygon, the plain equation method calculates the equation of the Visible surface detection, also known as hidden surface removal
the active polygon list by adding any new polygons that intersect plane for each polygon in the scene and then uses this equation to or occlusion culling, is the process of identifying which surfaces
the line and removing any polygons that have been completely determine the depth of the polygon at each pixel. The depth of the in a 3D scene are visible from a given point of view, and which
processed. To determine which polygons are visible, the polygon at a particular pixel is calculated by substituting the are occluded (hidden) by other objects. This is an important
algorithm compares the depths of the polygons at each pixel and coordinates of the pixel into the equation of the plane and solving for optimization technique used in 3D computer graphics to reduce
only draws the polygon with the smallest depth value. This the value of z. One advantage of the plain equation method is that it is the workload on the rendering pipeline by only rendering the
ensures that polygons closer to the viewer are drawn over relatively simple to implement and can be efficient for scenes with a visible surfaces, rather than all surfaces in the scene. There are
polygons that are farther away, eliminating hidden surfaces. moderate number of polygons. However, it may not be suitable for several algorithms that can be used to perform visible surface
The scan line algorithm is relatively simple to implement and can very large or complex scenes because it can be computationally detection, including the painter’s algorithm, the Z-buffer
be efficient for scenes with a moderate number of polygons, but it expensive to calculate the equation of a plane for each polygon in the algorithm, and the BSP tree algorithm. Each of these algorithms
may not be suitable for very large or complex scenes. scene. has its own strengths and weaknesses, and the appropriate
algorithm to use will depend on the specific needs of the
Explain the Z-buffer algorithm for removing hidden faces? Explain in detail about depth buffer method. application. In general, visible surface detection algorithms work
The Z-buffer algorithm, also known as the depth buffer The depth buffer method, also known as the Z-buffer algorithm, is a by traversing the scene from the point of view of the observer,
algorithm, is a technique for removing hidden surfaces in 3D technique for performing visible surface detection (hidden surface and determining which objects are in front of or behind other
graphics. It works by maintaining a buffer (a 2D array) that stores removal) in 3D graphics. It works by maintaining a buffer (a 2D objects. This is typically done by comparing the depth (z-value)
the depth value (the Z value) of each pixel on the screen. As the array) that stores the depth value (the Z value) of each pixel on the of the surfaces at each pixel, and deciding which surface is
3D scene is rendered, the Z-buffer algorithm compares the depth screen. As the 3D scene is rendered, the depth buffer algorithm visible based on its depth relative to other surfaces at the same
of each incoming pixel with the value stored in the buffer for that compares the depth of each incoming pixel with the value stored in pixel. Once the visible surfaces have been identified, they can be
pixel. If the incoming pixel is closer to the viewer (has a smaller the buffer for that pixel. If the incoming pixel is closer to the viewer rendered to produce the final image. One common algorithm
Z value), it is drawn, and the Z-buffer value is updated. If the (has a smaller Z value), it is drawn, and the buffer value is updated. If used for visible surface detection is the Z-buffer algorithm. This
incoming pixel is farther away (has a larger Z value), it is the incoming pixel is farther away (has a larger Z value), it is algorithm works by maintaining a depth buffer, also known as a
discarded, and the buffer value is left unchanged. This process discarded, and the buffer value is left unchanged. This process Z-buffer, which stores the depth (z-value) of the nearest surface
ensures that only the closest surface is drawn at each pixel, ensures that only the closest surface is drawn at each pixel, at each pixel of the screen. As the scene is traversed and
eliminating hidden surfaces. The Z-buffer algorithm is relatively eliminating hidden surfaces. The depth buffer algorithm is relatively rendered, the Z-buffer is updated with the depth of the surface
simple to implement and can be efficient for scenes with a simple to implement and can be efficient for scenes with a moderate being drawn at each pixel. If the depth of the surface being
moderate number of polygons, but it can be memory-intensive number of polygons, but it can be memory-intensive and may not be drawn is greater than the depth stored in the z-buffer at that
and may not be suitable for very large or complex scenes. suitable for very large or complex scenes. To implement the depth pixel, the surface is occluded by another object and is not drawn.
buffer algorithm, the following steps are typically followed: If the depth of the surface being drawn is less than or equal to
Define the algorithms for visible surface detection. - Initialize the depth buffer with a large value for all pixels. the depth in the z-buffer, the surface is drawn and the depth in
Visible surface detection, also known as hidden surface removal - For each polygon in the scene: the z-buffer is updated.
or hidden surface determination, is the process of identifying - Transform the vertices of the polygon to screen space using the Here is an outline of the steps involved in the z-buffer algorithm:
which surfaces in a 3D scene are visible to the viewer and which projection matrix. 1. Initialize the z-buffer to a large value for all pixels.
are occluded by other surfaces. There are several algorithms that - Rasterize the polygon by iterating over the pixels that it covers and 2. For each surface in the scene:
can be used to perform visible surface detection, including: calculating the depth value for each pixel. -Transform the surface vertices from world space to screen
Painter's algorithm: This algorithm orders and paints the polygons - Compare the depth value of each pixel with the value stored in the . space using the view and projection matrices.
in a scene based on their depth relative to the viewer. It is simple depth buffer. If the depth value is smaller, update the buffer value and -Rasterize the surface into screen space, generating a list of
to implement but can be inefficient for scenes with many draw the pixel. . fragments (pixels) to be drawn.
polygons. - Repeat the process for all polygons in the scene. 3. For each fragment:
A-Buffer algorithm: This algorithm is an improvement on The depth buffer method is widely used in 3D graphics and has the -Read the depth stored in the z-buffer at the fragment's screen
Painter's algorithm that stores the depth values of each pixel in a advantage of being relatively simple to implement and efficient for . space coordinates.
buffer, allowing the algorithm to quickly determine which scenes with a moderate number of polygons. However, it may not be -If the fragment depth is less than or equal to the value in the
polygons are visible at each pixel without having to recalculate suitable for very large or complex scenes because it can be memory- . z-buffer, draw the fragment and update the value in the z- . .
the depth values. intensive and may not scale well. buffer.
BSP trees: This algorithm uses a binary search tree data structure -If the fragment depth is greater than the value in the z-buffer,
to represent the polygons in a scene. The tree is traversed to Justify that depth buffer method is better than plane equation . do not draw the fragment.
determine which polygons are visible at each pixel. method.
Scan line algorithm: This algorithm processes the image from top There are a few reasons why the depth buffer method may be a good What is the task of polygon table? Why we must remove
to bottom, one row at a time, and maintains a list of active choice in some cases: hidden surface? Explain with any one methodology?
polygons for each scan line. It compares the depths of the Efficiency: The depth buffer method can be more efficient than the A polygon table is a data structure used to store the geometry of
polygons at each pixel to determine which is visible. plane equation method for scenes with many polygons. This is a 3D scene, specifically the list of polygons (triangles or other n-
Z-buffer algorithm: This algorithm maintains a buffer (a 2D because the depth buffer method only needs to calculate the depth sided shapes) that make up the surfaces of the objects in the
array) that stores the depth value of each pixel on the screen. As value for each pixel once, whereas the plane equation method must scene. The polygon table is an important input to the rendering
the scene is rendered, it compares the depth of incoming pixels calculate the depth of each polygon at every pixel. pipeline, as it provides the raw geometry data that is used to
with the value stored in the buffer and only draws the closest Simplicity: The depth buffer method is relatively simple to implement generate the final image of the scene.
surface. compared to the plane equation method. It only requires a single Removing hidden surfaces, also known as visible surface
buffer to store the depth values of the pixels, whereas the plane detection or occlusion culling, is an optimization technique used
equation method requires the calculation of the equation of a plane for in 3D graphics to improve performance by only rendering the
each polygon in the scene. visible surfaces of a scene, rather than all surfaces. This is
Memory usage: The depth buffer method can be more memory- important because rendering every surface in a scene can be
efficient than the plane equation method because it only requires a computationally expensive, especially in complex scenes with
single buffer to store the depth values of the pixels, whereas the plane many polygons. By only rendering the visible surfaces, the
equation method may require additional data structures to store the workload on the rendering pipeline can be significantly reduced,
equations of the planes. improving performance, and allowing for higher frame rates or
Overall, the depth buffer method is a widely used and effective more detailed graphics.
technique for hidden surface removal in 3D graphics. However, the One methodology for removing hidden surfaces is the painter's
choice of algorithm will depend on the specific requirements of the algorithm, which works by sorting the polygons in the scene
application and the trade-offs between efficiency, simplicity, and from back to front, based on their distance from the viewer. The
memory usage. polygons are then drawn in this order, with later-drawn polygons
occluding earlier-drawn polygons. This works because polygons
that are closer to the viewer will occlude polygons that are
farther away, so by drawing the farther away polygons first and
the closer polygons last, the hidden surfaces will be
automatically removed.
What is the role of ray tracing in visible surface detection? What is the advantage of real time rendering over offline Demonstrate how a polygon can be created using OpenGL.
Explain how scan line algorithm is used for back face rendering? In OpenGL, a polygon can be created by specifying the vertices
detection. Real-time rendering refers to the process of generating an image (or corners) of the polygon and then rendering it to the screen.
Ray tracing is a rendering technique that can be used to generate or animation that can be displayed in real-time, typically at a Here is some example code in C++ that demonstrates how to
high-quality images by simulating the way light travels through a frame rate of 30 or more frames per second. Offline rendering, on create a simple triangle using OpenGL:
3D scene. In ray tracing, rays are traced from the eye (viewpoint) the other hand, is the process of generating an image or animation #include <GL/gl.h>
through each pixel in the image plane, and the color of the pixel that is not intended to be displayed in real-time. One of the main void drawTriangle()
is determined by the color of the surface that the ray intersects. advantages of real-time rendering is that it allows the user to {
Ray tracing can be used for visible surface detection by casting interact with the rendered image or animation in real-time. This is glBegin(GL_TRIANGLES);
rays from the eye into the scene and determining which surface is useful in applications such as video games, where the user needs glVertex3f(0.0f, 1.0f, 0.0f);
visible at each pixel. This can be done by intersecting the ray to be able to respond to events in the game in a timely manner. glVertex3f(-1.0f, -1.0f, 0.0f);
with the surfaces in the scene and selecting the intersection point Real-time rendering also allows the user to see the results of their glVertex3f(1.0f, -1.0f, 0.0f);
with the smallest distance from the eye as the visible surface. actions immediately, which can be helpful for tasks such as 3D glEnd();
Scan line algorithms are another method used for visible surface modeling and design. Another advantage of real-time rendering is }
detection. These algorithms work by rasterizing the scene into a that it tends to be more efficient than offline rendering, since it This code will create a triangle with the vertices (0,1,0), (-1,-1,0),
2D image plane, like the way a raster graphics display works. As does not require the same level of processing power or time. This and (1,-1,0) and render it to the screen. The glBegin() function
each scan line (row of pixels) is drawn, the visible surfaces are makes it more suitable for use on devices with limited resources, tells OpenGL to start rendering a primitive (in this case, a
identified and drawn to the screen. such as mobile phones or tablets. triangle) and the glEnd() function tells OpenGL to stop rendering
Scan line algorithms can also be used for back face detection, the primitive. The glVertex3f() function specifies a vertex at the
which is the process of identifying which surfaces in a 3D model Discuss the limitation of Z-Buffer algorithm. given (x,y,z) coordinates.To create a more complex polygon, you
are facing away from the viewer. This is typically done by The Z-buffer algorithm is a widely used method for visible would simply specify more vertices between the glBegin() and
calculating the surface normal at each vertex of the polygon and surface detection in computer graphics. It works by maintaining a glEnd() calls. For example, to create a quadrilateral, you could
comparing it to the view direction (vector from the vertex to the buffer in memory that stores the depth (Z-value) of each pixel on use GL_QUADS as the primitive type and specify four vertices.
viewer). If the dot product of these two vectors is negative, the the screen and selecting the pixel with the smallest depth value at
surface is facing away from the viewer and is considered a back each screen position to construct the final image. Explain the following term with practical applications.
face. Back faces are usually not drawn, as they are occluded by However, the Z-buffer algorithm has several limitations that can a) Rotation
the front faces of the model. affect its accuracy and performance. Some of these limitations b) Computer Animation
include: a) Rotation is the act of turning an object around a specific point,
What is the significance of vanishing points in perspective - Z-fighting: This occurs when two or more polygons are very called the pivot point. Rotation can be specified in terms of an
Projection? close to each other in depth, and their Z-values are nearly the angle of rotation and an axis of rotation. Rotation is a common
Vanishing points are important in perspective projection because same. In this case, the Z-buffer may not be able to accurately transformation that is used in many fields, including computer
they help to create the illusion of three-dimensional space on a determine which polygon is in front, resulting in flickering or graphics, robotics, and engineering.
two-dimensional surface. In a perspective projection, the lines of other visual artifacts. Some practical applications of rotation in computer graphics
perspective converge at one or more vanishing points, which are - Limited precision: The Z-buffer uses a fixed-precision include:
typically located at the edges of the image. These converging representation for depth values, which can lead to errors when - Rotating an object in a 3D modeling software to view it from
lines give the impression of distance and depth, making the image rendering objects that are very large or very far away. This can different angles
appear more realistic. Vanishing points are typically used in result in artifacts such as holes or gaps in the image. - Rotating a character's arm in a video game to simulate
drawings and paintings to create the illusion of depth and - Fragmentation: When an object is occluded by another object, movement
distance, and they are also used in computer graphics and the Z-buffer may leave gaps or holes in the image, since it only - Rotating a camera in a virtual reality environment to change the
photography to achieve the same effect. By using vanishing stores depth information for visible pixels. This can lead to a perspective of the user
points, an artist or photographer can create a more realistic and fragmented appearance in the final image. b) Computer animation is the use of computers to create the
believable image, with objects in the foreground appearing larger - Overdraw: When an object is rendered, the Z-buffer must illusion of movement and change in static images. It is a
and closer, and objects in the background appearing smaller and update the depth value for every pixel covered by the object. This technique that is used to create movies, television shows, video
farther away. can lead to a significant amount of overdraw, where the same games, and other forms of media.
pixel is written to multiple times, which can be inefficient and There are several different types of computer animation,
Explain how Z- Buffer algorithm is used for visible surface reduce performance. including:
detection. Despite these limitations, the Z-buffer algorithm is still widely - 2D animation: This involves creating static images and then
The Z-buffer algorithm, also known as the depth buffer used in computer graphics due to its simplicity and efficiency. sequentially displaying them at a high frame rate to create the
algorithm, is a method used in computer graphics for visible
illusion of movement.
surface detection. It works by maintaining a buffer in memory UNIT 9 - 3D animation: This involves creating a digital model of an
that stores the depth (Z-value) of each pixel on the screen. As the List some significances of virtual reality. object or character and then animating it by specifying how the
graphics pipeline processes each object in the scene, it writes the Virtual reality (VR) has a number of significant applications and object should move and change over time.
object's depth information to the Z-buffer. When the pipeline is potential benefits, including: Motion graphics: This involves creating animated graphics and
finished, the Z-buffer contains the depth of every pixel on the - Entertainment: VR technology is being used to create text for use in commercials, movies, and other types of media.
screen. To construct the final image, the graphics system reads immersive, interactive experiences for gaming, movies, and other Some practical applications of computer animation include:
the Z-buffer and selects the pixel with the smallest depth value at forms of entertainment. - Creating special effects for movies and television shows
each screen position. This ensures that objects that are closer to - Education and training: VR can be used to create realistic - Creating realistic simulations for training or design purposes
the viewer occlude (obscure) objects that are farther away, simulations for training in a variety of fields, including medicine, - Creating interactive experiences for video games and virtual
creating the correct illusion of depth and visibility. The Z-buffer military, and aviation. reality environments
algorithm is a simple and efficient way to solve the visibility - Therapy: VR is being used to treat a variety of mental health - Creating animated graphics for use in commercials and other
problem, and it is widely used in computer graphics systems. It conditions, including phobias, post-traumatic stress disorder forms of media.
can be implemented in hardware or software, and it is used in a (PTSD), and anxiety.
variety of applications, including 3D graphics, computer-aided - Product design: VR allows designers to create and test What are the key issues prevalent in producing a Virtual
design (CAD), and video games. prototypes of new products in a virtual environment, saving time reality scene?
and resources. There are several key issues that are commonly encountered
Explain boundary representations techniques to represent the - Tourism: VR can be used to give people a virtual taste of a when producing a virtual reality (VR) scene. These include:
3D object with suitable example. destination before they travel, or to provide immersive - Performance: VR scenes can be resource-intensive, and it is
Boundary representation (B-rep) is a technique used to represent experiences for people who are unable to travel due to physical or important to ensure that the scene runs smoothly on the target
the geometric shape of a three-dimensional (3D) object. In a B- financial constraints. hardware. This can require optimizing the scene for performance,
rep model, the object is represented as a set of surfaces that - Socialization: VR technology is being used to enable people to including minimizing the number of polygons, using efficient
enclose a volume. These surfaces are typically defined by a set of socialize and communicate in virtual environments, even when rendering techniques, and minimizing the use of expensive
curves or lines, called "edges," that lie on the surface of the they are physically separated. effects such as shadows and reflections.
object. There are several different ways to represent the edges in - Physical therapy: VR can be used to help people recover from - Interactivity: In a VR scene, the user is able to interact with the
a B-rep model, including: injuries or surgeries by allowing them to perform exercises in a environment in real-time. This can be challenging to implement,
- Polygon meshes: In this representation, the object is divided virtual environment under the supervision of a therapist. as the scene must respond to the user's actions in a believable and
into a series of interconnected triangles or other simple polygons. consistent way.
Each polygon has a set of vertices that define its shape, and the Differentiate between virtual reality and augmented reality - User comfort: VR scenes can sometimes cause discomfort or
edges are defined by the connections between these vertices. with example. motion sickness in users, especially if the scene involves rapid
- NURBS surfaces: A non-uniform rational B-spline (NURBS) Virtual reality (VR) is a computer-generated simulation of a movement or disorienting effects. It is important to consider the
surface is a mathematical model that can be used to represent three-dimensional environment that can be interacted with in a user's comfort when designing a VR scene.
smooth, curved surfaces. A NURBS surface is defined by a set of seemingly real or physical way. In VR, users wear a headset that - Immersion: A key goal of VR is to create a sense of immersion
control points and a set of weights that control the shape of the blocks out the real world and transports them into a completely for the user. This can be challenging, as the user is aware that
surface. The edges of the surface are defined by the boundaries of artificial one. For example, VR can be used to create immersive they are in a simulated environment. To increase immersion, it is
the control points. video games or to simulate training environments for pilots or important to create a believable and consistent virtual world, with
- Implicit surfaces: An implicit surface is defined by a surgeons. realistic physics and believable character and object behavior.
mathematical equation that describes the shape of the surface. Augmented reality (AR) is a technology that overlays digital - Accessibility: It is important to consider the needs of users with
The edges of the surface are the points where the equation information on top of the real world. Unlike VR, users do not disabilities when designing a VR scene. This can include
changes value. need to wear a headset to experience AR. Instead, they use ensuring that the scene is compatible with assistive technologies
An example of a 3D object that could be represented using a B- devices such as smartphones or tablets to view the enhanced such as screen readers and designing controls that are accessible
rep model is a car. A car could be represented using a polygon reality. An example of AR is the mobile game Pokémon Go, in to users with limited mobility.
mesh, with each panel of the car's body represented as a separate which players use their phones to see and catch virtual creatures
polygon. Alternatively, the car's body could be represented using that appear in the real world. Other examples of AR include
NURBS surfaces, with the edges of the surface defined by the furniture shopping apps that allow users to see how a piece of
boundaries of the control points. The wheels of the car could be furniture would look in their home before they buy it, and
represented using implicit surfaces, with the edges defined by the interactive museum exhibits that enhance the physical exhibits
points where the equation changes value. with digital information.
Explain different hardware and software used for VR Explain the scan line algorithms with example. How virtual realities differ with our real world? Describe
purpose. Scan line algorithms are algorithms that are used to fill the interior of a some components of VR system.
There is a wide range of hardware and software that can be closed polygon in a 2D graphics rendering system. These algorithms Virtual reality (VR) is a computer-generated simulation of a
used to create and experience virtual reality (VR) scenes. work by scanning the image from top to bottom, one row at a time, and three-dimensional environment that can be interacted with in
Hardware used in VR includes: determining which pixels fall inside the polygon. The pixels that fall a seemingly real or physical way. While VR can be very
- VR headsets: These are specialized devices that a user inside the polygon are then filled with the desired color. immersive and realistic, it differs from the real world in a
wears on their head to immerse them in a virtual One example of a scan line algorithm is the scan line fill algorithm. number of ways:
environment. VR headsets can range from simple This algorithm works by first sorting the edges of the polygon from left - VR is simulated: The virtual environments and objects in
smartphone-based systems to more complex, high-end to right. It then scans each row of the image from left to right, and for VR are created by computers, rather than being real physical
systems with advanced tracking and display technology. each pixel, it determines whether the pixel is inside or outside the entities. As a result, they may not behave in exactly the same
- VR controllers: These are devices that a user holds in their polygon by checking whether the number of intersections of the pixel way as real-world objects.
hands to interact with the virtual environment. VR with the polygon's edges is even or odd. If the number of intersections - VR is limited: The virtual environments and objects in VR
controllers can range from simple handheld controllers to is odd, the pixel is inside the polygon and is filled with the desired are limited by the capabilities of the hardware and software
more complex devices with motion tracking and haptic color. that is being used to create them. This can result in limitations
feedback. Here is an example of how the scan line fill algorithm might be on the realism and interactivity of the VR experience.
- Tracking systems: Some VR systems use tracking implemented in pseudocode: - VR is controlled: In VR, the user is typically limited to
technology to allow the user to move around in the virtual for each row in the image: interacting with the environment in ways that are
environment. This can be achieved through the use of initialize a list of edges that intersect the row predetermined by the creators of the VR experience. This is in
cameras, lasers, or other sensors that track the user's sort the list of edges by x coordinate contrast to the real world, where the user has more freedom to
movements. initialize a color flag to "outside" act and explore.
Software used in VR includes: for each pixel in the row: There are several components that are typically found in a VR
- VR development platforms: These are software tools that if the pixel is intersected by an edge: system, including:
are used to create VR experiences. Examples include Unity, toggle the color flag - VR headset: A device that the user wears on their head to
Unreal Engine, and CryEngine. if the color flag is "inside": immerse them in the virtual environment. VR headsets can
- VR content creation tools: These are software tools that color the pixel range from simple smartphone-based systems to more
are used to create 3D models, textures, and other assets that This pseudocode demonstrates the basic steps involved in using the complex, high-end systems with advanced tracking and
are used in VR scenes. Examples include 3D modeling scan line fill algorithm to fill a polygon. The algorithm first loops display technology.
software such as Blender and Maya, and texture creation through each row of the image, and then for each row, it determines - VR controllers: Devices that the user holds in their hands to
software such as Substance Designer. which edges intersect the row and sorts them by x coordinate. It then interact with the virtual environment. VR controllers can
- VR content playback software: These are software tools scans the row from left to right, toggling a color flag each time the range from simple handheld controllers to more complex
that are used to play back VR experiences on the user's pixel is intersected by an edge. If the color flag is set to "inside", the devices with motion tracking and haptic feedback.
device. Examples include the Steam VR platform and the pixel is filled with the desired color. - Tracking system: Some VR systems use tracking technology
Oculus Rift software. to allow the user to move around in the virtual environment.
- VR content distribution platforms: These are online UNIT 1
platforms that are used to distribute VR content to users. Explain the basic steps for computer animation and its application What is a computer graphics? Explain in detail about the
Examples include the Oculus Store and Steam. in computer science. application of computer graphics.
Computer animation is the use of computers to create the illusion of Computer graphics is the field of computer science that deals
Explain the virtual reality and its applications in the movement and change in static images. It is a technique that is used to with generating images, animations, and visual effects using
computer graphics. create movies, television shows, video games, and other forms of computers. It involves using computers to process and
Virtual reality (VR) is a computer-generated simulation of a media. The basic steps for creating computer animation are: manipulate visual data, and to display the results on various
three-dimensional environment that can be interacted with - Planning: The first step in creating computer animation is to plan out output devices such as monitors, printers, and projectors.
in a seemingly real or physical way. In VR, users wear a the story and characters, as well as the overall look and feel of the There are many applications of computer graphics, some of
headset that blocks out the real world and transports them animation. This may involve creating detailed storyboards or concept which include:
into a completely artificial one. Virtual reality has a number art. - 2D and 3D modeling: Computer graphics can be used to
of applications in computer graphics, including: - Modeling: The next step is to create digital models of the characters, create detailed models of objects, characters, and
- Gaming: VR technology is being used to create immersive objects, and environments that will be used in the animation. This is environments in two or three dimensions. These models can
video game experiences. Players can use VR headsets and typically done using 3D modeling software such as Blender or Maya. be used in a variety of contexts, such as animation, games,
controllers to interact with the virtual environment in a - Texturing: After the models have been created, they are usually given architecture, and product design.
realistic way. textures to give them a more realistic appearance. This can be done - Visualization and simulation: Computer graphics can be
- Product design: VR allows designers to create and test using texture mapping techniques or with the use of specialized used to create visualizations of data, such as scientific
prototypes of new products in a virtual environment, saving software such as Substance Designer. simulations, medical imaging, and weather forecasting. It can
time and resources. - Rigging: The next step is to set up the skeletons and control systems also be used to create simulations of complex systems, such
- Education and training: VR can be used to create realistic that will be used to animate the characters and objects. This is known as military training, air traffic control, and industrial design.
simulations for training in a variety of fields, including as rigging, and it involves creating a hierarchy of bones and control Image and video processing: Computer graphics can be used
medicine, military, and aviation. points that can be used to move the objects in a realistic way. to edit and enhance digital images and videos, such as to
- Therapy: VR is being used to treat a variety of mental - Animation: Once the models and rigs are in place, the actual remove blemishes, adjust colors and contrast, and add special
health conditions, including phobias, post-traumatic stress animation can begin. This involves specifying how the characters and effects.
disorder (PTSD), and anxiety. objects should move and change over time, using techniques such as - User interfaces: Computer graphics is used to create the
- Socialization: VR technology is being used to enable keyframing and motion capture. graphical user interfaces (GUIs) that people use to interact
people to socialize and communicate in virtual - Rendering: The final step is to render the animation, which involves with computers and other devices.
environments, even when they are physically separated. calculating how the scene should look at each frame and generating a - Multimedia content creation: Computer graphics is used to
- Physical therapy: VR can be used to help people recover final image or series of images. This is typically done using specialized create a wide range of multimedia content, such as movies,
from injuries or surgeries by allowing them to perform rendering software such as Arnold or V-Ray. TV shows, commercials, and music videos.
exercises in a virtual environment under the supervision of - Printing and publishing: Computer graphics is used in the
a therapist. Computer animation has a number of applications in computer science, printing and publishing industry to create high-quality
- Entertainment: VR technology is being used to create including: graphics for books, magazines, and other printed materials.
immersive, interactive experiences for movies, theme parks, - Creating movies and television shows Education and training: Computer graphics is used to create
and other forms of entertainment. - Creating video games and interactive experiences educational and training materials, such as simulations,
- Creating special effects for movies and television virtual reality environments, and interactive tutorials.
What do you mean by virtual Reality and animation? - Creating realistic simulations for training or design purposes
Explain. - Creating animated graphics for use in commercials and other forms of
Virtual reality (VR) is a computer-generated simulation of a media.
three-dimensional environment that can be interacted with
in a seemingly real or physical way. In VR, users wear a
headset that blocks out the real world and transports them What is virtual reality? Explain the importance of virtual reality
into a completely artificial one. VR technology is used to and its application.
create immersive, interactive experiences for gaming, Virtual reality (VR) is a computer-generated simulation of a three-
entertainment, education, and other applications. dimensional environment that can be interacted with in a seemingly
Animation is the process of creating the illusion of real or physical way. In VR, users wear a headset that blocks out the
movement and change by displaying a series of static real world and transports them into a completely artificial one. VR
images or frames in rapid succession. In the context of technology is used to create immersive, interactive experiences for
computer graphics, animation refers to the use of computers gaming, entertainment, education, and other applications.
to create and control the movement of digital objects or The importance of virtual reality lies in its ability to create realistic and
characters. Virtual reality and animation are often used immersive experiences that can engage the user in a way that is not
together to create interactive, immersive experiences. For possible with traditional media. VR can be used to transport users to
example, in a VR video game, the game engine uses completely artificial worlds, or to enhance the real world with digital
animation to control the movement of the characters and information. There are a number of applications for virtual reality,
objects in the game world, while the VR headset allows the including:
player to experience the game world in a seemingly real - Gaming: VR technology is being used to create immersive video
way. Similarly, in a VR training simulation, animation is game experiences. Players can use VR headsets and controllers to
used to create realistic scenarios and interactions, while the interact with the virtual environment in a realistic way.
VR headset allows the user to feel like they are part of the Product design: VR allows designers to create and test prototypes of
simulation. new products in a virtual environment, saving time and resources.
- Education and training: VR can be used to create realistic simulations
for training in a variety of fields, including medicine, military, and
aviation.
- Therapy: VR is being used to treat a variety of mental health
conditions, including phobias, post-traumatic stress disorder (PTSD),
and anxiety.
- Socialization: VR technology is being used to enable people to
socialize and communicate in virtual environments, even when they are
physically separated.
What is a raster scan display system? Draw its block diagram What is the random scan system? Explain the operation of
and explain it in detail. random scan with architecture.
Define the following terms
A raster scan display system is a type of display technology that A random scan system is a type of display technology that
a) Video controller
generates an image by displaying a series of horizontal scan lines generates an image by displaying a series of dots or lines on a
b) 3D viewing
on a screen. The image is created by rapidly scanning the screen screen, rather than a series of horizontal scan lines like in a
c) Raster graphics
from top to bottom and left to right, with each scan line consisting raster scan display system. The image is created by rapidly
d) List priority
of a series of pixels that are turned on or off to create the desired scanning the screen and turning on or off the pixels at the
a) Video controller: A video controller is a hardware device
image.Here is a block diagram of a raster scan display system: desired locations to create the desired image.
that is responsible for generating the control signals that
Here is the architecture of a random scan display system:
control the display of an image on a screen. It receives input
from a computer and converts it into a format that can be
displayed on the screen.
b) 3D viewing: 3D viewing refers to the process of displaying
and viewing three-dimensional (3D) images or animations on a
two-dimensional (2D) screen. 3D viewing can be achieved
using various technologies, such as 3D glasses, head-mounted
displays, and autostereoscopic displays.
c) Raster graphics: Raster graphics are digital images that are
made up of a grid of pixels, with each pixel representing a
- Computer: The computer is the input device that generates the specific color or shade. Raster graphics are resolution-
data for the image to be displayed. dependent, which means that the quality of the image will
- Video controller: The video controller is responsible for - Display processor: The display processor is responsible for degrade if it is scaled up or down.
generating the control signals that control the display of the image. generating the control signals that control the display of the d) List priority: List priority refers to the order in which
It receives input from the computer and converts it into a format image. It receives input from the computer and converts it into items in a list are ranked or prioritized. In computer
that can be displayed on the screen. a format that can be displayed on the screen. programming, list priority is often used to determine which
- Video RAM (VRAM): The video RAM (VRAM) is a type of - Display memory: The display memory is a type of memory tasks or processes should be given priority and which should
memory that is used to store the pixel data for the image. The video that is used to store the pixel data for the image. The display be delayed or deferred.
controller reads the pixel data from the VRAM and sends it to the processor reads the pixel data from the display memory and
video display unit (VDU). sends it to the display refresh processor. Consider 256-pixel X 256 scan lines image with 24-bit true
- Video display unit (VDU): The video display unit (VDU) is the - Display refresh processor: The display refresh processor is color. If 10 minutes video is required to capture, calculate
hardware that is responsible for displaying the image on the screen. responsible for generating the control signals that control the the total memory required? Why is intensity assignment
It receives the pixel data from the video controller and uses it to scanning of the screen. It receives the pixel data from the required? In a 24-bit true color system, each pixel requires
control the intensity of the pixels on the screen. display processor and uses it to control the intensity of the 24 bits of data to store its color value.
- Screen: The screen is the output device that displays the image. It pixels on the screen. If the image has a resolution of 256 pixels by 256 scan lines, it
consists of a matrix of pixels that can be turned on or off to create - Screen: The screen is the output device that displays the has a total of 256 * 256 = <<256*256=65536>>65,536 pixels.
the desired image. image. It consists of a matrix of pixels that can be turned on or If the video has a frame rate of 25 frames per second, there are
In a raster scan display system, the video controller generates a off to create the desired image. 25 * 60 = <<25*60=1500>>1500 frames in a 10-minute video.
series of horizontal scan lines, starting at the top of the screen and In a random scan display system, the display processor The total amount of memory required to store a 10-minute
moving down to the bottom. As each scan line is displayed, the generates the control signals that control the display of the video of a 256x256 image with 24-bit true color is therefore
video controller reads the pixel data for that scan line from the image. It receives the pixel data from the computer and stores it 1500 * 65,536 * 24 bits = 20736000000 bits.
VRAM and sends it to the VDU, which uses it to control the in the display memory. The display refresh processor then This is equivalent to 20736000000 / 8 bits per byte =
intensity of the pixels on the screen. The process is repeated for reads the pixel data from the display memory and uses it to 2073600000 bytes, or around 2.07 GB.
each scan line, until the entire image has been displayed. control the intensity of the pixels on the screen as it scans the
screen. The process is repeated until the entire image has been Intensity assignment is required in a display system to
Explain the random scan display system with its advantages displayed. determine the intensity of the pixels on the screen. In a raster
and disadvantages. scan display system, the intensity of each pixel is controlled by
A random scan display system is a type of display technology that the video controller, which generates the control signals that
generates an image by displaying a series of dots or lines on a Differentiate between vector and raster graphics. determine the intensity of the pixels. In a random scan display
screen, rather than a series of horizontal scan lines like in a raster Vector graphics and raster graphics are two different types of system, the intensity of each pixel is controlled by the display
scan display system. The image is created by rapidly scanning the graphics file formats that are used to store and display digital refresh processor, which uses the pixel data from the display
screen and turning on or off the pixels at the desired locations to images. Here are the main differences between the two: memory to control the intensity of the pixels as it scans the
create the desired image. - Resolution independence: Vector graphics are resolution- screen.
Advantages of a random scan display system: independent, which means that they can be scaled to any size
- Better image quality: Because a random scan display system only without losing quality. Raster graphics, on the other hand, are Consider 1024 pixels X 1024 scan lines image with 24-bit
has to draw the pixels that are needed to create the image, it can resolution-dependent, which means that the quality of the true color. If 10 minutes video is required to capture,
produce higher quality images with less visual artifacts than a raster image will degrade if it is scaled up or down. calculate the total memory required? How can you
scan display system. - File size: Vector graphics are generally smaller in file size incorporate shadow in the computer graphics? In a 24-bit
- More efficient use of bandwidth: A random scan display system than raster graphics, because they do not need to store pixel true color system, each pixel requires 24 bits of data to
only has to transmit the data for the pixels that are being displayed, data for the image. Raster graphics, on the other hand, need to store its color value.
rather than transmitting data for all of the pixels on the screen like store pixel data for the image, which can result in larger file If the image has a resolution of 1024 pixels by 1024 scan lines,
in a raster scan display system. This makes it more efficient to use sizes. it has a total of 1024 * 1024 =
when transmitting the image data over a network or other - File format: Vector graphics are typically stored in file <<1024*1024=1048576>>1,048,576 pixels.
communication channels. formats such as SVG, EPS, and AI. Raster graphics are If the video has a frame rate of 25 frames per second, there are
- Lower power consumption: Because a random scan display typically stored in file formats such as JPG, PNG, and BMP. 25 * 60 = <<25*60=1500>>1500 frames in a 10-minute video.
system only has to draw the pixels that are needed to create the - Image types: Vector graphics are well-suited for displaying The total amount of memory required to store a 10-minute
image, it uses less power than a raster scan display system, which simple graphics and text, and are often used for logos, icons, video of a 1024x1024 image with 24-bit true color is therefore
has to draw all of the pixels on the screen. and other simple images. Raster graphics are well-suited for 1500 * 1,048,576 * 24 bits = 3,716608000000 bits.
Disadvantages of a random scan display system: displaying photographs and other complex images, and are This is equivalent to 3,716608000000 / 8 bits per byte =
- Limited to line drawings and simple graphics: A random scan often used for things like digital images, web graphics, and 46457600000 bytes, or around 4.65 GB.
display system is not well-suited for displaying complex images or print media.
photographs, as it can only display a limited number of dots or lines - Editing: Vector graphics can be edited more easily than raster To incorporate shadow in computer graphics, you can use
on the screen. graphics because they are made up of individual shapes that various techniques such as shadow mapping, ray tracing, and
- Limited to static images: A random scan display system is not can be modified and resized independently. Raster graphics, on ambient occlusion. Shadow mapping involves rendering the
well-suited for displaying moving images or video, as it can only the other hand, are made up of a grid of pixels, and any changes scene from the perspective of a light source and using the
display a static image at a time. to the image will affect all the pixels in the grid. resulting depth map to determine which parts of the scene are
- More expensive: A random scan display system is typically more in shadow. Ray tracing involves simulating the path of light
expensive to manufacture than a raster scan display system due to Consider a raster scan system having 12 inch by 10 inch rays through a scene and using the resulting data to determine
the more complex hardware required. screen with resolution of 100 pixels per inch in each the shadows and reflections in the scene. Ambient occlusion
direction. If the display controller of this system refreshes involves calculating the amount of light that is blocked by
Calculate the total memory required to store a 10-minute video the screen at the rate of 50 frames per second, how many objects in a scene and using this information to create more
in a SVGA system with 24 bit true color and 25 fps. In a SVGA pixels could be accessed per second and what is the access realistic shadows and reflections.
system with 24-bit true color, each pixel requires 24 bits of data time per second and what is the access time pre pixels of the
to store its color value. system?
If the video has a frame rate of 25 frames per second, there are 25 * In a raster scan system with a screen size of 12 inches by 10
60 = <<25*60=1500>>1500 frames in a 10-minute video. inches and a resolution of 100 pixels per inch in each direction,
If the video has a resolution of 800x600 pixels, there are a total of the screen has a total of 12 * 100 = <<12100=1200>>1200
800 * 600 = <<800*600=480000>>480,000 pixels per frame. pixels in the horizontal direction and 10 * 100 =
The total amount of memory required to store a 10-minute video in <<10100=1000>>1000 pixels in the vertical direction.
a SVGA system with 24-bit true color and 25 fps is therefore 1500 The total number of pixels on the screen is therefore 1200 *
* 480,000 * 24 bits = 1000 = <<1200*1000=1200000>>1,200,000 pixels.
<<150048000024=20736000000>>20736000000 bits. If the display controller refreshes the screen at a rate of 50
This is equivalent to 20736000000 / 8 bits per byte = 2073600000 frames per second, it can access 1,200,000 pixels per second.
bytes, or around 2.07 GB. The access time per second is therefore 1 / 50 =
<<1/50=0.02>>0.02 seconds.
The access time per pixel is therefore 0.02 / 1,200,000 =
<<0.02/1200000=1.6666666666667e-06>>1.6666666666667e-
06 seconds.
Unit 2 Bresanhams line algorithm Case II: m > 1
This algorithm is used for scan converting a line. It was
Difference between DDA line algorithm and bresanhams li developed by Bresenham. It is an efficient method because it
ne algorithm involves only integer addition, subtractions, and multiplication
DDA Line Algorithm Bresenham line operations. These operations can be performed very rapidly so
Algorithm lines can be generated quickly.
DDA stands for Digital While it has no full form. Advantage:
Differential Analyzer. 1. It involves only integer arithmetic, so it is simple.
DDA algorithm is less While it is more efficient 2. It avoids the generation of duplicate points.
efficient than Bresenham than DDA algorithm. 3. It can be implemented using hardware because it does not use
line algorithm. multiplication and division. Let (xk, yk) is pixel at k
The calculation speed of While the calculation speed Disadvantage: th
DDA algorithm is less than of Bresenham line 1. This algorithm is meant for basic line drawing only Initializing step then next point to plot may be either (xk, yk + 1) or (xk +1,
Bresenham line algorithm. algorithm is faster than is not a part of Bresenham's line algorithm. So to draw smooth yk + 1).
DDA algorithm. lines, you should want to look into a different algorithm. Let d1 & d2 be the separation of pixel position (xk, yk + 1) and
DDA algorithm is costlier While Bresenham line (xk + 1, yk + 1) from the actual line path.
than Bresenham line algorithm is cheaper than
Case I: 0 < m < 1 y = mx + b
algorithm. DDA algorithm.
DDA algorithm has less While it has more precision The actual value of x is given by;
precision or accuracy. or accuracy. x = (y − b)/m
In DDA algorithm, the While in this, the Then at sampling point (yk + 1)
complexity of calculation complexity of calculation x = (yk + 1 − b)/m
is more complex. is simple. From figure,
d1 = (x − xk =yk+1−b)/m− xk =(yk+1−b−mxk)/m
In DDA algorithm, While in this, optimization
d2 = (xk + 1) − x = (xk + 1) −(yk+1−b)/m=
optimization is not is provided.
(mxk+m−yk−1+b)/m
provided.
Now, d1 − d2 =(2yk−2mxk−2b−m+2)/m
Let us define a decision parameter Pk for the kth step by
DDA Algorithm :
DDA stands for Digital Differential Analyzer. It is an Let (xk, yk) is pixel at kth step then next point to plot may be
Pk = ∆y(d1 − d2)
incremental method of scan conversion of line. In this method either (xk + 1, yk) or (xk + 1, yk + 1).
∆y > 0 Therefore, Pk < 0 if d1 < d2
calculation is performed at each step but by using results of Let d1 & d2 be the separation of pixel position (xk + 1, yk)
Pk ≥ 0 if d1 ≥ d2
previous steps. and (xk + 1, yk + 1) from the actual line path.
Advantage: y = mx + b
1. It is a faster method than method of using direct Then at sampling point (xk + 1)
use of line equation. y = m(xk + 1) + b
2. This method does not use multiplication theorem.
3. It allows us to detect the change in the value of x From figure,
and y ,so plotting of same point twice is not d1 = y − yk = m(xk + 1) + b − yk
possible. d2 = (yk + 1) − y = (yk + 1) − m(xk + 1) − b
Now, d1 − d2 = 2m(xk + 1) − (yk + 1) − yk + 2b = 2m(xk + 1) −
Disadvantage: 2yk + 2b − 1
1. It involves floating point additions rounding off is Let us define a decision parameter Pk for the kth step by
done. Accumulations of round off error cause Pk = ∆x(d1 − d2)
accumulation of error. ∆x > 0 Therefore, Pk < 0 if d1 < d2
2. Rounding off operations and floating point Pk ≥ 0 if d1 > d2
operations consumes a lot of time.
3. It is more suitable for generating line using the ∴ Pk = ∆x(d1 − d2) = ∆x{2∆y/∆x(xk + 1) − 2yk + 2b − 1} = 2∆y.
software. But it is less suited for hardware xk − 2∆x. yk + c ......(i)
implementation
Where the constant c =2∆y + ∆x(2b − 1). Therefore, initial decision parameter,
It is a scan conversion line algorithm based on calculating eith Now, for next step; P0 = 2∆xy0 − 2∆yx0 + c
er ∆x or ∆y using equation Pk+1 = 2∆y. xk+1 − 2∆x. yk+1 + c ........(ii) = 2∆xy0 − 2∆yx0 + 2(1 − b)∆x − ∆y
m= From (i) & (ii) = 2∆xy0 − 2∆yx0 + 2∆x − 2b∆x − ∆y
∆y ∴ Pk+1 − Pk = 2∆y(xk+1 − xk) − 2∆x(yk+1 − yk) = 2∆xy0 − 2∆yx0 + 2∆x − 2(y0 − mx0)∆x − ∆y
∆x ∴ Pk+1 = Pk + 2∆y − 2∆x(yk+1 − yk) [Since, xk+1 = xk + 1] = 2∆xy0 − 2∆yx0 + 2∆x − 2(y0 −(∆y/∆x)x0)∆x − ∆y
The equation of the line is given as; Where, yk+1 − yk= 0 or 1 P0 = 2∆x − ∆y
y = mx + b ..............(i) If Pk < 0, Bresenham’s Complete algorithm:
m = (y2 − y1)/(x2 − x1) ................ (ii) yk+1 = yk so Pk+1 = Pk + 2∆y 1. Input the two end points (x1,y1) and (x2,y2)
For any interval ∆x , corresponding interval is given by ∆y = If Pk ≥ 0, 2. Compute dx= |x2 –x1| and dy = |y2 –y1|
m∆x. yk+1 = yk + 1 so Pk+1 = Pk + 2∆y − 2∆x 3. If x2 – x1 < 0 and y2 –y1 > 0 or x2-x1>0 and y2 – y1 < 0.
If m <= 1, and start point is left endpoint then ∆x = 1 and Therefore, initial decision parameter, Then set a = -1, else a = 1
yk+1 = yk + m P0 = 2∆y. x0 − 2∆x. y0 + c [from (i)] 4. If dy < dx then,
If m <= 1, and start point is right endpoint, then ∆x = −1 and =2∆yx0 − 2∆xy0 +2∆y + ∆x(2b − 1) i. If x1>x2 then, t = x1 ; x1 = x2 ; x2= t
yk+1 = yk − m =2∆yx0 − 2∆xy0 +2∆y + 2b∆x − ∆x t = y1 ; y1 = y2 ; y2 = t
If m > 1, and start point is left endpoint, then ∆y = 1 and =2∆yx0 − 2∆xy0 +2∆y + 2(y0 −(∆y/∆x)x0)∆x − ∆x ii. Find initial decision parameter P = 2dy - dx
xk+1 = xk +1/m =2∆yx0 − 2∆xy0 +2∆y + 2∆xy0 − 2∆yx0 − ∆x iii. Plot the first pixel (x1, y1)
If m > 1, and start point is right endpoint, then ∆y = −1 and =2∆y − ∆x iv. Repeat the following till |x1| < |x2|
xk+1 = xk −1/m a. If P< 0 then,
P0 = 2∆y − ∆x P = P+ 2dy
Else, P = P+ 2dy – 2dx
Algorithm y = y1+a
1. Input the two line endpoints and store the left endpoint in (x0, y b. Increase x1 by 1 i.e x1 = x1+1
0). c. Plot (x1,y1)
2. Load (x0, y0) into the frame buffer i.e. plot the first point. 5. Else |m|>1
3. Calculate constants ∆x, ∆y, 2∆y and 2∆y-2∆x and obtain the st i. Check if (y1>y2) then,
arting value for the t = x1; x1 = x2 ; x2 = t
decision parameter as t = y1 , y1 = y2 ; y2 = t.
P0 = 2∆y − ∆x ii. Find initial decision parameters. P = 2dx – dy
DDA Algorithm: 4. At each xk, along the line, starting at k=0, perform the followin iii. Plot the first point (x1,y1)
Step1: Start Algorithm g tests: iv. Repeat the following till y1 < y2
Step2: Declare x1,y1,x2,y2,dx,dy,x,y as integer variables. If Pk < 0, then next point to plot is (xk + 1, yk) and a. If P< 0 then , P = P+2dx.
Step3: Enter value of x1,y1,x2,y2. Pk+1 = Pk + 2∆y Else, P = P+ 2dx-2dy
Step4: Calculate dx = x2-x1 Otherwise, the next point to plot is (xk + 1, yk + 1) and x1 = x1 +a
Step5: Calculate dy = y2-y1 Pk+1 = Pk + 2∆y − 2∆x b. Increase y1 by 1. i.e y1 = y1+1
Step6: If ABS (dx) > ABS (dy) c. Plot (x1,y1)
            Then step = abs (dx) 5. Repeat step 4 ∆x times. Advantages of Bresenham’s line algorithm (BLA) over
            Else [Note: For m>1, just interchange the role of x & y] DDA:
Step7: xinc=dx/step - In DDA algorithm each successive point is computed in
            yinc=dy/step floating point, so it required
            assign x = x1 more time and memory space. While in BLA each successive
            assign y = y1 point is calculated in
Step8: Set pixel (x, y) integer value or whole number. So it requires less time and less
Step9: x = x + xinc memory
            y = y + yinc - In DDA, since the calculated point value is floating point
            Set pixels (Round (x), Round (y)) number, it should be rounded at
Step10: Repeat step 9 until x = x2 the end of calculation but in BLA it does not need to round, so
Step11: End Algorithm there is no accumulation
of rounding error.
- Due to rounding error, the line drawn by DDA algorithm is not
accurate, while in BLA
Scan conversion algorithm : Polygon Table
It is a process of representing graphics objects a collection of In this method, a polygon surface is specified with a set of vertex
pixels. The graphics objects are continuous. The pixels used co-ordinates and associated
are discrete. Each pixel can have either on or off state. attributes. Polygon data tables can be organized into two groups:
The circuitry of the video display device of the computer is geometrical and attribute
capable of converting binary values (0, 1) into a pixel on and tables.
pixel off information. 0 is represented by pixel off. 1 is Geometric tables: It contain vertex coordinates and parameters
represented using pixel on. Using this ability graphics to identify the spatial
computer represent picture having discrete dots. orientation of polygon surfaces
Any model of graphics can be reproduced with a dense matrix Attribute table: It gives attribute information for an object
of dots or points. Most human beings think graphics objects as (Degree of transparency, surface
points, lines, circles, ellipses. For generating graphical object, reflectivity etc.).
many algorithms have been developed. Geometric data consists of three tables:
Advantage of developing algorithms for scan conversion (i) Vertex table: It stores co-ordinate values for each vertex of
1. Algorithms can generate graphics objects at a Algorithm the object.
faster rate. 1. Input radius ‘r’ and circle center (xc, yc) and obtain the first p (ii) Edge table: It stores the edge information of polygon.
2. Using algorithms memory can be used efficiently. oint on the circumference ofa circle centered on origin as (iii) Surface table: It stores the number of surfaces present in the
3. Algorithms can develop a higher level of
polygon.
graphical objects. (x0, y0) = (0, r)
Scan conversion of a point 2. Calculate the initial value of the decision parameter as
- Scan conversion of a point requires the two data that are pixe p0 =5/4− r
l position and color for 3. At each xk position, starting at k=0, perform the following test:
display. If pk < 0, the next point on circle is (xk + 1, yk) and
- In C, a point be scan converted using function putpixel() defi pk+1 = pk + 2xk+1 + 1
ned in library. Otherwise, the next point on circle is (xk + 1, yk − 1) and
Header file is pk+1 = pk + 2xk+1 + 1 − 2yk+1
<graphics.h> 4. Determine the symmetry point in other seven octants.
putpixel(x, y, color) 5. Move each calculated pixel position (x, y) onto the circular pat
Here, x & y represent pixel position on 2D display. h centered on (xc, yc) and
Scan Conversion of line plot the co-ordinate values,
Say y = mx + b be the equation of line with end point (x1, y1)
and (x2, y2) then, x = x + xc
m=
y2−y1 , y = y + yc
x2−x1
∴m= 6. Repeat step 3 through 5 until x ≥ y.
∆y
∆x ................... (i) write a procedure to fill the interior of a given elipse with a sp
Here, m represents the slope of line path where by ∆x & ∆y gi ecified pattern
ves the deflection needed To fill the interior of an ellipse with a specified pattern, you can u
towards horizontal and vertical direction to get new pixel from se the following procedure:
current pixel position.
- Slope of line also describes the nature and characteristics of li Define the ellipse by its center coordinates (x0, y0) and its two se
ne that is going to display. mi-axis lengths (a, b).
Midpoint Circle Algorithm
The mid-point circle drawing algorithm is an algorithm used to Iterate through the pixels in the bounding box of the ellipse. For e
determine the points needed for rasterizing a circle.  ach pixel (x, y), compute the distance d from the center of the elli The object can be displayed efficiently by using data from tables
 We use the mid-point algorithm to calculate all the perimeter pse using the formula: and processing them for surface rendering and visible surface
points of the circle in the first octant and then print them along
determination.
with their mirror points in the other octants. This will work d = ((x-x0)^2)/(a^2) + ((y-y0)^2)/(b^2) Polygon Meshes
because a circle is symmetric about its centre.
A polygon mesh is collection of edges, vertices and polygons
If d <= 1, then the pixel (x, y) is inside the ellipse and should be fi connected such that each edge is shared by at most two polygons.
lled with the specified pattern. An edge connects two vertices and a polygon is a closed
sequence of edges. An edge can be shared by two polygons and a
Repeat this process for all pixels in the bounding box of the ellips vertex is shared by at least two edges.This method can be used to
e. represent a broad class of solids/surfaces in graphics. A polygon
mesh can be rendered using hidden surface removal algorithms.
Unit 5 The polygon mesh can be represented by three ways-
What is the purpose of wireframe representation? Describe a  Explicit representation
bout boundary and space partitioning.  Pointers to a vertex list
  Ans: A wireframe is a three-dimensional model that only includes  Pointers to an edge list
Assume that we have just plotted point (xk, yk). vertices and lines. It does not contain surfaces, textures, or lightin In Explicit representation, each polygon is represented by a list of
The next point is a choice between (xk+1, yk) and g like a 3D mesh. Instead, a wireframe model is a 3D image com vertex co-ordinates.
(xk+1, yk-1). We would like to choose the point that is prised of only "wires" that represent three-dimensional shapes.
nearest to the actual circle.Let us define a circle function as f(x, Wireframes provide the most basic representation of a three-dime P = ((x1, y1, z1), (x2, y2, z2), ... ... . . , (xn, yn, zn))
nsional scene or object. They are often used as the starting point i In Pointers to a vertex list, each vertex is stored just once, in
y) = x2 + y2 − r2 = { n 3D modeling since they create a "frame" for 3D structures. For
example, a 3D graphic designer can create a model from scratch b
vertex list
V = (v1, v2, ... ... .... , vn)
y simply defining points (vertices) and connecting them with line E.g. A polygon made up of vertices 3, 5, 7, 10 in vertex list be
s (paths). Once the shape is created, surfaces or textures can be ad represented as P1 = {3,5,7,10}
ded to make the model appear more realistic. Objects are represen
ted as a collection of surfaces. 3D object representation is divided
into two categories.
Boundary Representations B-reps - It describes a 3D object as
a set of surfaces that separates the object interior from the environ
ment.
Space-partitioning representations It is used to describe interio
r properties, by partitioning the spatial region containing an objec
t into a set of small, non-overlapping, contiguous solids usually c
ubes.
The most commonly used boundary representation for a 3D graph
ics object is a set of surface polygons that enclose the object interi
or. Many graphics system use this method. Set of polygons are st
ored for object description. This simplifies and speeds up the surf
ace rendering and display of object since all surfaces can be descr
ibed with linear equations.
The polygon surfaces are common in design and solid-modeling a sweep representation is a way to represent a two-dimensional
pplications, since their wireframe display can be done quickly to shape or image as a sequence of points that are generated by
give general indication of surface structure. Then realistic scenes scanning the shape or image from left to right, starting at the top.
are produced by interpolating shading patterns across polygon sur This sequence of points is known as a sweep.
face to illuminate. Sweep representations can be used to efficiently store and
manipulate shapes or images that have a large number of points,
or that have a complex shape. They can also be used to perform
operations on the shape or image, such as translation, rotation,
scaling, or other transformations.
One common use of sweep representations is in computer
graphics, where they can be used to represent and render shapes
or images on a display screen. They can also be used in image
processing, pattern recognition, and other applications where it is
necessary to work with two-dimensional shapes or images.
Clipping: Bezier Curve Properties of B-Spline curve
When we have to display a large portion of the picture, then not Bezier curve is developed by the French engineer Pierre Bezier for a) The polynomial curve has d − 1 degree.
only scaling & translation is necessary, the visible part of picture the design of Renault b) For (n+1) control points, the curve is described with (n+1) blen
is also identified. This process is not easy. Certain parts of the automobile bodies. ding function.
image are inside, while others are partially inside. The lines or - It is an approximating spline widely used in various CAD system. c) Each blending function Bk,d is defined over ‘d’ sub-interval of
elements which are partially visible will be omitted. - Bezier curve is generated under the control of points known as c the total range of ‘u’,
For deciding the visible and invisible portion, a particular process ontrol points. starting at knot value uk.
called clipping is used. Clipping determines each element into the General Bezier curve for (n+1) control point, denoted as pk = (xk, d) The sum of B-spline basis functions for any parameter value is
visible and invisible portion. Visible portion is selected. An 1
yk, zk) with ‘k’ varying
invisible portion is discarded. ∑Bk,d(u) = 1
from 0 to n is given by
Applications of clipping: e) Each basis function is positive or zero for all parameter value.
1. It will extract part we desire. f) The range of parameter ‘u’ is divided into (n+d) sub interval by
2. For identifying the visible and invisible area in the (n+d+1) values
3D object. specified in knot vector.
3. For creating objects using solid modeling. g) Each section of spline curve is influenced by ‘d’ control point.
4. For drawing operations. h) Any one control point can affect the shape of at most ‘d’ curve
sections.
Cubic spline i) The maximum order of curve is equal to the number of vertices
- It is used to set up path for object motions or to provide a of defining polygon.
representation for an existing j) The curve generally follows the shape of defining polygon.
object or drawings. k) The degree of B-spline polynomial is independent on the numb
- Compared to higher-order polynomials, cubic splines requires er of vertices of
less calculation and defining polygon.
memory and they are more stable. Compared to lower-order
polynomials, cubic splines periodic b-spline curve vs non periodic b-spline curve
are more flexible for modeling arbitrary curve shapes. A B-spline curve is a piecewise defined polynomial curve that is
- Cubic interpolation spline is obtained by fitting the input points represented by a set of control points. A periodic B-spline curve
with a piecewise cubic is a curve that is defined in such a way that the curve forms a
polynomial curve that passes through every control points. closed loop. This means that the curve starts and ends at the same
Suppose we have n+1 control points having co-ordinates point and passes through all of the control points.
Pk = (xk, yk, zk) K = 0, 1, 2, 3, ... ... ... . , n
On the other hand, a non-periodic B-spline curve is a curve that is
defined by a set of control points, but it does not form a closed
loop. This means that the curve starts at one point and ends at
another point, and it may or may not pass through all of the
control points.

algorithm of generating curve in computer graphics


A parametric cubic polynomial that is to be fitted between each p
air of control points have 1. Define the starting point and end point of the curve.
following equations: 2. Determine the type of curve to be generated (e.g.
x(u) = axu3 + bxu2 + cxu + dx Bezier, B-spline, etc.).
Properties of Bezier curve: 3. If using a parametric curve, define the parameters
y(u) = ayu3 + byu2 + cyu + dy a) The basis functions are real.
z(u) = azu3 + bzu2 + czu + dz and corresponding control points.
b) The Bezier curve always passes through first and last control 4. If using a non-parametric curve, define the control
We need to determine the values of the four coefficients a, b, c, a point i.e. p(0) =
nd d in the polynomial points and weights.
p0 & p(1) = pn. 5. Calculate the curve using the defined equations for
representation for each of the n curve section. We do this by setti c) The degree of polynomial representing Bezier curve is one less
ng enough boundary the chosen type of curve.
than the number of 6. Plot the curve points on the graphics canvas.
conditions at the “joints” between curve sections we can obtain n control points.
umerical values for all the 7. Repeat the process for any additional curves desired.
d) The Bezier curve always follows convex hull formed by control 8. Render the final image with the plotted curve(s).
coefficients. points.
- Cubic splines are more flexible for modeling arbitrary curve sha e) The Bezier curve always lies inside the polygon formed by
pes. discuss strength and weakness of human visual system
control points. The human visual system is incredibly complex and
f) Bezier blending functions are positive and sum is equal to 1. sophisticated, with various strengths and weaknesses that
∑ BEZk,n(u) = 1 contribute to our overall ability to perceive and interpret the
3D object representation is divided into two categories. • g) The direction of the tangent vector at the end points is same like
Boundary Representations (B-reps) − • It describes a 3D object as vector determined by world around us.
a set of surfaces that separates the object interior from the One of the major strengths of the human visual system is its
first and last segment. ability to process and interpret visual information quickly and
environment. • B-spline Curve
Space–partitioning representations − • It is used to describe efficiently. Our brains are capable of processing vast amounts of
B-spline curve is a set of piecewise polynomial segments that visual data in a short period of time, allowing us to quickly
interior properties, by partitioning the spatial region containing an passes close to a set of control
object into a set of small, non-overlapping, contiguous solids identify patterns, shapes, and objects in our environment. This
points. ability is especially useful for tasks such as driving, where we
(usually cubes). It has two advantage over Bezier curve: need to quickly and accurately process visual information in order
a) The degree of B-spline polynomial can be set independently of to navigate safely.
Three-Dimensional Viewing Viewing in 3D involves the the number of control
following considerations: - We can view an object from any Another strength of the visual system is its ability to perceive
points. depth and distance. Our brains are able to interpret cues such as
spatial position, eg. In front of an object, Behind the object, In the b) It allows local control over the shape of a spline curve.
middle of a group of objects, Inside an object, etc. - 3D size, perspective, and parallax in order to understand the relative
descriptions of objects must be projected onto the flat viewing distance and position of objects in our environment. This allows
surface of the output device. - The clipping boundaries enclose a us to judge the distance of objects and navigate through space
volume of space. with ease.
3D viewing pipelining : However, the human visual system also has several weaknesses
that can impact our ability to perceive and interpret visual
information accurately. One of these weaknesses is our limited
ability to see in low light conditions. Our eyes are not particularly
sensitive to light, meaning that we struggle to see in dimly lit
environments or at night. This can make tasks such as driving at
night more difficult and potentially dangerous.
Another weakness of the human visual system is our limited
PARAMETRIC CUBIC CURVE ability to perceive color. While we are able to perceive a wide
Polylines and polygons: range of colors, our color perception is not as accurate or
-Large amounts of data to achieve good accuracy. sensitive as that of some other animals. This can make it difficult
- Interactive manipulation of the data is tedious. for us to distinguish between subtle differences in color,
Higher-order curves: especially in poor lighting conditions.
-More compact (use less storage). Overall, the human visual system is incredibly complex and
-Easier to manipulate interactively. sophisticated, with a range of strengths and weaknesses that
Possible representations of curves: explicit, implicit, and paramet contribute to our ability to perceive and interpret the world
ric around us. While we have a number of strengths, such as our
There are Three Types of Parametric Cubic Curves. ability to process visual information quickly and perceive depth
Hermite Curves: and distance, we also have some weaknesses, such as our limited
Defined by two endpoints and two endpoint tangent vectors (used ability to see in low light conditions and our limited color
1st order) perception.
Bézier Curves:
Defined by two endpoints and two control points which control th
e endpoint' tangent vectors
Splines:
Defined by four control points
Fractals are mathematical structures that are self-similar, meaning UNIT 9 Creating polygons in open GL
that they contain smaller copies of themselves at various scales. In In open gl , polygons are typicaly drawn by filling in all the pixel
computer graphics, fractals are used to generate complex and intric s enclosed within the boundary but we can also drawn them as ou
ate patterns that are difficult to create using traditional techniques. Open Gl : tlined polygon or simply point at the vertices .
Fractals are often used in computer graphics to create realistic land OpenGL (Open Graphics Library) is a cross-platform, hardware- To create a polygon in OpenGL, you can use the following comm
scapes, such as mountains, valleys, and coastlines. They can also b accelerated, language-independent, industrial standard API for and:
e used to generate abstract patterns and shapes, such as those foun producing 3D graphics. It is used to draw complex 3D graphics, glBegin(GL_POLYGON);
d in psychedelic art.Fractals are generated using algorithms that re such as 3D games, simulations, and scientific visualizations, and This command tells OpenGL to begin drawing a polygon. You ca
peat a process over and over, each time adjusting the parameters sl is supported by many operating systems, including Windows, n then specify the vertices of the polygon using the glVertex*() c
ightly to create a more complex structure. This process is called ite macOS, Linux, Android, and iOS. ommands, such as glVertex2f() or glVertex3f().
ration, and the resulting structure is called a fractal. OpenGL is implemented as a graphics library, which is usually For example, to create a triangle, you can use the following code:
packaged with the graphics drivers of a particular graphics card. glBegin(GL_POLYGON);
why polygon description is consider as standard graphics obje Applications that use OpenGL call functions in the library to glVertex2f(0.0, 0.0);
ct specify the objects and operations needed to produce interactive glVertex2f(1.0, 0.0);
Polygons are considered standard graphics objects because they ar 3D graphics. The graphics library then communicates with the glVertex2f(0.5, 1.0);
e a fundamental building block for creating more complex graphic graphics hardware to execute the requested operations. glEnd();
s. They can be easily manipulated and transformed using standard OpenGL is widely used in the development of computer games, This will create a triangle with vertices at (0.0, 0.0), (1.0, 0.0), an
graphics algorithms and techniques, making them an essential tool visual simulations, and other interactive 3D applications. It is an d (0.5, 1.0). You can also specify the color and other attributes of
for creating various types of graphics. essential tool for many computer graphics professionals and is the polygon using the glColor*() and gl*() commands.
Additionally, polygons can be used to represent a wide range of re an important skill for anyone interested in working in the field
al-world objects, such as buildings, cars, and terrain. They can be a of computer graphics. Once you have finished drawing the polygon, you can use the glE
ccurately and efficiently represented in a computer graphics syste nd() command to tell OpenGL to stop drawing the polygon.
m, making them a popular choice for creating realistic graphics in
many applications.
Overall, the versatility and efficiency of polygons make them a ke Creating Pixels in open GL
y component of standard graphics systems, and they continue to be To create a pixel in OpenGL, you can use the glBegin() and glEn
widely used in various fields such as computer graphics, game dev d() functions to define a drawing mode and the glVertex2i() funct
elopment, and scientific visualization. ion to specify the position of the pixel. For example:

importance of polygon table in computer graphics glBegin(GL_POINTS);


Polygon tables are important in computer graphics because they glVertex2i(x, y);
allow for the creation and manipulation of 3D objects. A polygon glEnd();
table is a list of vertices, edges, and faces that make up a 3D
object. These vertices and edges are connected to form polygons, Where x and y are the coordinates of the pixel. You can also set t
which are the basic building blocks of 3D objects.Using a polygon he color of the pixel using the glColor3f() or glColor4f() function
table, a computer can render 3D objects in a scene by filling in the before calling glVertex2i()
polygons with colors or textures. This allows for the creation of
realistic and detailed 3D graphics in video games, movies, and
other visual media.Additionally, polygon tables allow for the
manipulation of 3D objects through rotation, scaling, and
translation. This allows for the creation of dynamic and interactive
3D graphics, such as character animations and
simulations.Overall, polygon tables play a crucial role in the
creation and manipulation of 3D objects in computer graphics.
Without them, it would be difficult to create realistic and dynamic
3D graphics.

set up a procedure for establishing polygon table for any input


set of data point defining an object
-Gather the input data points defining the object. These should be Color model in open Gl :
coordinates in a 2D plane.
-Determine the number of sides of the polygon based on the numb OpenGL uses the additive RGB color model, which works with
er of input data points. A polygon with n sides will have n data poi just the three primary colors: red, green, and blue. Many colors
nts. can be created by mixing these primary colors together in
-Create a table with columns for the x and y coordinates of each da various proportions. For example, red and green together create
ta point, as well as a column for the side number. yellow, red and blue together create magenta, and blue and
-Begin filling in the table with the data points in the order they wer green together create cyan. Add red, green, and blue together
e given. and you get white .
-Assign a side number to each data point, starting with 1 and incre This model works differently than the subtractive paint model
asing sequentially. you might have learned about in school: in the subtractive paint
-Check that the last data point is connected to the first data point to model, adding blue and yellow makes green, and adding a bunch
ensure that the polygon is closed. If necessary, add a row to the tab of colors together creates a dark brown or black. This is because
le with the coordinates of the first data point as the final side of the paint does not emit light; it absorbs it. The more colors of paint
polygon. we use, the more light is absorbed and the darker the paint
-Check the orientation of the polygon. If the vertices are listed in a appears.
clockwise direction, the polygon is concave. If the vertices are list
ed in a counterclockwise direction, the polygon is convex. Creating lines in open GL
-Save the completed table for future reference. in open gl , the term line refers to a line segment. There are easy
way to soecify or connected series of lines segment or even clos
ed segment with open gl . we can specify lines with different wi
dth and lines that are stipped in various way dotted , dashed .
To create lines in Open GL, you can use the glBegin() function a
nd specify GL_LINES as the primitive type. Then, you can use t
he glVertex2f() function to specify the coordinates of the two en
dpoints of the line. Finally, you can use the glEnd() function to
mark the end of the line drawing.

For example:

glBegin(GL_LINES);
glVertex2f(0.0, 0.0);
glVertex2f(1.0, 1.0);
glEnd();

This will create a line that starts at (0.0, 0.0) and ends at (1.0, 1.
0). You can use the glLineWidth() function to set the width of th
e line, and the glColor() function to set the color of the line.

You might also like