Computer Graphics (Chapter: 01)
Ashek Mahmud Khan, Dept. of CSE (JUST)

Q 1. Define Computer Graphics.

Computer graphics: Computer graphics may be defined as the pictorial or graphical representation of objects in a computer.

Computer graphics also refers to pictures and movies created using computers, usually image data produced with the help of specialized graphics hardware and software.

Q. 2013-1(b)/2012-1(a): Mention the Application of Computer Graphics and explain.

(i) Computer Aided Design (CAD)
(ii) Presentation Graphics
(iii) Computer Art
(iv) Entertainment
(v) Education & Training
(vi) Visualization
(vii) Image Processing
(viii) Graphical User Interfaces

(i) Computer Aided Design (CAD): A major use of computer graphics is in design
processes, particularly for engineering and architectural systems, but almost all products are
now computer designed. Generally referred to as CAD, computer-aided design methods are
now routinely used in the design of buildings, automobiles, aircraft, watercraft, spacecraft,
computers, textiles, and many, many other products.

(ii) Presentation Graphics: Another major application area is presentation graphics, used to
produce illustrations for reports or to generate 35-mm slides or transparencies for use with
projectors. Presentation graphics is commonly used to summarize financial, statistical,
mathematical, scientific, and economic data for research reports, managerial reports, and
consumer information bulletins.

(iii) Computer Art: Computer graphics methods are widely used in both fine art and
commercial art applications.

(iv) Entertainment: Computer graphics methods are now commonly used in making motion
pictures, music videos, and television shows.

(v) Education & Training: Computer-generated models of physical, financial, and economic systems are often used as educational aids.


(vi) Visualization: Scientists, engineers, medical personnel, business analysts, and others
often need to analyze large amounts of information or to study the behavior of certain
processes.

(vii) Image Processing: In computer graphics, a computer is used to create a picture. Image processing, on the other hand, applies techniques to modify or interpret existing pictures, such as photographs and TV scans.

(viii) Graphical user interface: It is common now for software packages to provide a
graphical interface. A major component of a graphical interface is a window manager that
allows a user to display multiple-window areas. Each window can contain a different process that produces graphical or non-graphical displays.

Q. 2012-1(a): What do you mean by GUI?


GUI stands for Graphical user interface. A major component of a GUI is a window manager
that allows a user to display multiple-window areas. To make a particular window active we
simply click in that window using an interactive pointing device. Interfaces also display menus
and icons for fast selection of processing options or parameter values.
Q. 2013-1(a): Define: (i) Pixel (ii) Resolution (iii) Aspect ratio (iv) Persistence

Pixel: Each screen point is referred to as a pixel or pel. The pixel (a word invented from
"picture element") is the basic unit of programmable color on a computer display or in a
computer image. Think of it as a logical - rather than a physical - unit. The physical size of a
pixel depends on how you've set the resolution for the display screen.

Resolution: Resolution is the number of pixels a display can show, usually written as width x height (and sometimes given as pixels per inch). A screen resolution of 1920x1200 means 1,920 pixels horizontally across each of 1,200 lines, which run vertically from top to bottom.

Aspect ratio: Aspect ratio is an image projection attribute that describes the proportional
relationship between the width of an image and its height. For example, if a graphic has an
aspect ratio of 2:1, it means that the width is twice as large as the height.

Persistence: Persistence is how long the phosphor continues to emit light after the CRT electron beam is removed.

Chapter: 02 (Overview of Graphics Systems)


Video Display Device: The primary output device in a graphics system is a video monitor.
The operation of most video monitors is based on the standard cathode-ray tube (CRT)
design.


Figure 2-1 Basic design of a magnetic-deflection CRT

The basic operation of a CRT: A beam of electrons (cathode rays) emitted by an electron gun passes through focusing and deflection systems that direct the beam toward specified positions on the phosphor-coated screen.
• The phosphor then emits a small spot of light at each position contacted by the electron beam. This type of display is called a refresh CRT.
• The primary components of an electron gun in a CRT are the heated metal cathode and a control grid.
• The focusing system in a CRT is needed to force the electron beam to converge into a small spot as it strikes the phosphor.
• In magnetic deflection, two pairs of coils are used, with the coils in each pair mounted on opposite sides of the neck of the CRT envelope.

Figure:2-2 Operation of an electron gun with an accelerating anode.

Figure: 2-3 Electrostatic deflection of the electron beam in a CRT.


@ Ashek Mahmud Khan; Dept. of CSE (JUST); 01725-402592
5

• In electrostatic deflection, two pairs of parallel plates are mounted inside the CRT envelope.

• Resolution: The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.

Q. 2013-1(c): Describe the Raster-scan display system.

Raster-Scan systems: The operation of the display device is controlled by a special-purpose processor called the video controller or display controller.

Fig: Architecture of a simple raster graphics system

The frame buffer can be anywhere in system memory, and the video controller accesses the frame buffer to refresh the screen.

Other processors, such as coprocessors and accelerators, implement various graphics operations.

Fig: Architecture of a raster system with a fixed portion of the system memory reserved for the frame
buffer.

The video controller resets the registers to the first pixel position on the top scan line, and the refresh process starts over.


Fig: Basic video-controller refresh operations.


Refresh operations
• X and Y registers are used to indicate the current pixel position.
• The Y register is held fixed while the X register is incremented to generate a scan line.
Double buffering
• Pixel values can be loaded into one buffer while the other buffer is used to refresh the screen.
• Provides a fast mechanism for generating real-time animation.

Fig: Architecture of Raster scan graphics system with a display processor

Advantages of raster scan:
• Decreased memory costs.
• A high degree of realism is achieved in pictures using advanced shading and hidden-surface techniques.
• Computer monitors and TVs use this method.
Disadvantages of raster scan:
• Lines produced are zigzag, as the plotted values are discrete.
• Resolution is low.

Random scan System: Random scan monitors draw a picture one line at a time and for this
reason are also referred to as vector displays.

Fig: Architecture of a simple random scan system.

The organization of a simple random-scan (vector) system is shown in the figure. An application program is input and stored in the system memory along with a graphics package. Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory.
Advantages of random scan:
• Very high resolution, limited only by the monitor.
• Easy animation: just draw at a different position.
• Requires little memory.
Disadvantages of random scan:
• Requires an "intelligent" electron beam, i.e., processor control.
• Limited screen density before flicker appears; cannot draw a complex filled image.
• Limited color capability.

Q. 2012-1(c): Distinguish between Raster Scan CRT and Random Scan CRT

Random Scan System (Vector) vs. Raster Scan System:
• Random scan cannot draw realistic shaded scenes; raster scan is used in systems that display realistic images.
• Random scan stores line-drawing instructions; raster scan stores values related to pixels (intensity values).
• Random scan has higher resolution; raster scan does not support as high a resolution.
• Random scan produces smooth line drawings; raster scan is better capable of producing curves (as pixel patterns).
• Random scan uses mathematical functions to draw an image; raster scan uses screen points (pixels) to draw an image.
• Random scan costs more; raster scan costs less.

Color CRT Monitors: A CRT monitor displays color pictures by using a combination of
phosphors that emit different -colored light. By combining the emitted light from the
different phosphors, a range of colors can be generated. The two basic techniques for
producing color displays with a CRT are
1. The Beam-Penetration method.
2. The Shadow-Mask method.
The Beam-Penetration method: The beam-penetration method for displaying color
pictures has been used with random-scan monitors. Two layers of phosphor, usually RED
and GREEN, are coated onto the inside of the CRT screen, and the displayed color depends
on how far the electron beam penetrates into the phosphor layers.
A beam of slow electrons excites only the outer RED layer.
A beam of very fast electrons penetrates through the RED layer and excites the inner
GREEN layer.
At intermediate beam speeds, combinations of red and green light are emitted to show two
additional colors, ORANGE and YELLOW.
Advantage:
Beam penetration has been an inexpensive way to produce color in random-scan monitors.
Disadvantage:
Only four colors are possible, and the quality of pictures is not as good as with other
methods.

Q. 2013-1(d): What is shadow mask?

Shadow mask: The shadow mask is one of the technologies used to manufacture cathode
ray tube (CRT) televisions and computer displays that produce color images. A shadow-
mask CRT has three phosphor color dots at each pixel position.

One phosphor dot emits a RED Light, another emits a GREEN light, and the third emits a
BLUE light.

Flat-panel display: Although most graphics monitors are still constructed with CRTs, other
technologies are emerging that may soon replace CRT monitors.

The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT.
Current uses for flat-panel displays include small TV monitors, calculators, pocket video games, and laptop computers. Flat-panel displays fall into two categories:
1. Emissive displays
2. Non-emissive displays.
Emissive displays (or emitters): These devices convert electrical energy into light. Examples: plasma panels and light-emitting diodes (LEDs).
Non-emissive displays (or non-emitters): These devices use optical effects to convert sunlight or light from some other source into graphics patterns. Example: the liquid-crystal device (LCD).
Plasma panel display: Plasma panels are also called gas-discharge displays.
These are constructed by filling the region between two glass plates with a mixture of gases
that usually includes neon.
Advantage of LCD: (i) Low power Consumption. (ii) Low cost. (iii) Small size.

Disadvantage of LCD: (i) LCD have less color capability. (ii) Resolution is not as good as
that of a CRT. (iii) LCD’s do not emit light.

LED Advantages:
1) Better black levels (most of the time).
2) Higher contrast ratios (most of the time).
3) 40% less energy use than a fluorescent-tube backlit LCD TV.


5) LEDs will not lose color accuracy with age as fluorescent tubes do.
6) Extremely thin profile.

LCD Disadvantages
• Aspect ratio: the aspect ratio and resolution are fixed.
• Contrast: lower contrast than CRTs due to a poor black level.
• Resolution: works best at the native resolution.
• Viewing angle: restricted viewing angles.

Input device: An input device is any device that provides input to a computer. Examples of input devices include keyboards, mice, scanners, digital cameras, and joysticks.

Mouse: A device that controls the movement of the cursor or pointer on a display screen. A
mouse is a small object you can roll along a hard, flat surface. Its name is derived from its
shape, which looks a bit like a mouse.

Keyboard: A keyboard is an external input device used to type data into a computer system, whether a mobile device, a personal computer, or another electronic machine. A keyboard usually includes alphabetic, numeric, and common symbol keys used in everyday transcription.

There are many types of computer keyboards. For instance, multimedia computer keyboard,
membrane computer keyboard, slim computer keyboard, capacitive key switches etc.

Digitizer: A digitizer converts analog or physical input into a digital image. This makes digitizers related to both scanners and mice, although current digitizers serve a different role.

Scanners: A computer scanner optically scans an object, such as a document, and converts the information into a digital image. The two commonly used types are (i) flatbed scanners and (ii) hand-held scanners.

Q. 2013-2(c): What is Image Scanner?

Image Scanner: In computing, an image scanner is a device used to transfer images or text
into a computer. There are special models for scanning photo negatives, or to scan books. In
the computer, the signal from the scanner is transferred to a digital image.

Joystick: A joystick is an input device consisting of a stick that pivots on a base and reports its angle or direction to the device it is controlling. A joystick is also known as a control column.


Light pens: A light pen is a computer input device in the form of a light-sensitive wand
used in conjunction with a computer's CRT display. Light pens have the advantage of
'drawing' directly onto the screen, but this can become uncomfortable, and they are not as
accurate as digitizing tablets.

What are the advantages of laser printers?

1] High speed, precision, and economy.
2] Cheap to maintain.
3] High print quality.
4] Lasts for a long time.
5] Toner powder is very cheap.
Chapter: 03
Points and lines: Point plotting is accomplished by converting a single coordinate position
furnished by an application program into appropriate operations for the output device in use.
With a CRT monitor, for example, the electron beam is turned on to illuminate the screen
phosphor at the selected location.

Line drawing is accomplished by calculating intermediate positions along the line path
between two specified endpoint positions. An output device is then directed to fill in these
positions between the endpoints. For analog devices, such as a vector pen plotter or a
random-scan display, a straight line can be drawn smoothly from one endpoint to the other.

Fig: Stair-step effect (jaggies) produced when a line is generated as a series of pixel positions.
Fig: Pixel positions referenced by scan-line number and column number.

Line-Drawing Algorithms:

• The Cartesian slope-intercept equation for a straight line:

y = m.x + b

where m is the line slope and b is the y intercept.

• For a line segment starting at (x1, y1) and ending at (x2, y2), the slope is:

m = (y2 - y1)/(x2 - x1)

b = y1 - m.x1

• For any given x interval Δx along a line, we can compute the corresponding y interval Δy:

Δy = m.Δx

• Or the x interval Δx corresponding to a given Δy:

Δx = Δy/m

For |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, with the corresponding vertical deflection then proportional to Δy.

For |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx.

For m = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.

DDA (Digital Differential Analyzer) Algorithm: A scan-conversion line algorithm based on calculating either Δy or Δx from the line equations. In each case, the choice of sampling axis depends on the slope value.

• Sample at unit x intervals (Δx = 1) and compute each successive y value as:

yk+1 = yk + m

• The subscript k takes integer values starting from 1 for the first point and increases by 1 until the final endpoint is reached. Since m can be any real number between 0 and 1, the calculated y values must be rounded to the nearest integer.

For lines with a positive slope greater than 1, we reverse the roles of x and y. That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as

xk+1 = xk + (1/m)

• If the line is processed from the right endpoint to the left and the absolute slope is less than 1, set Δx = -1 and

yk+1 = yk - m

or, if the absolute slope is greater than 1, set Δy = -1 and

xk+1 = xk - (1/m)
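A minimal Python sketch of the DDA approach described above; it samples the axis with the larger coordinate change, which covers both slope cases. The function name and the returned list of plotted pixels are illustrative assumptions, not part of the original notes.

```python
def dda_line(x1, y1, x2, y2):
    """Return the pixel positions of a line from (x1, y1) to (x2, y2) using DDA."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))          # number of unit samples along the major axis
    if steps == 0:
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps  # per-step increments (one of them is +/-1)
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))  # round to the nearest pixel
        x += x_inc
        y += y_inc
    return points

# Example: rasterize the segment (20, 10)-(30, 18)
print(dda_line(20, 10, 30, 18))
```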
Bresenham’s Line Algorithm:

• Scan converts lines using only incremental integer calculations.

• Can be adapted to display circles and other curves.

• When sampling at unit x intervals, we need to decide which of two possible pixel positions is closer to the line path at each sample step by using a decision parameter.

• The decision parameter (pk) is an integer that is proportional to the difference between the separations of the two pixel positions from the actual line path.

• Depending on the slope's sign and value, the decision parameter determines which pixel coordinates are taken in the next step.

Algorithm:

• At pixel (xk, yk) we need to decide whether (xk+1, yk) or (xk+1, yk+1) will be plotted in column xk+1.

• d1 and d2 are the vertical separations of the two candidate pixels from the mathematical line path.

• The y coordinate on the mathematical line at pixel column xk+1 is calculated as:

y = m(xk + 1) + b

d1 = y - yk

d2 = (yk + 1) - y

We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining:

pk = Δx(d1 - d2)

= 2Δy.xk - 2Δx.yk + c

If the pixel at yk is closer to the line path than the pixel at yk+l (that is, d1 < d2), then decision
parameter pk is negative and we plot the lower pixel.

pk+1 = 2Δy.xk+1 - 2Δx.yk+1 + c

Subtracting the expression for pk from the one for pk+1, we have

pk+1 - pk = 2Δy(xk+1 - xk) - 2Δx(yk+1 - yk)

But xk+1 = xk + 1, so that

pk+1 = pk + 2Δy - 2Δx(yk+1 - yk)

and p0 = 2Δy - Δx

Bresenham's Line-Drawing Algorithm for |m|< 1

1. Input the two line endpoints and store the left endpoint in (x0, y0).

2. Load (x0, y0) into the frame buffer; that is, plot the first point.


3. Calculate the constants Δx, Δy, 2Δy, and 2Δy - 2Δx, and obtain the starting value for the decision parameter as

p0 = 2Δy - Δx

4. At each xk along the line, starting at k = 0, perform the following test:

If pk < 0, the next point to plot is (xk + 1, yk) and

pk+1 = pk + 2Δy

Otherwise, the next point to plot is (xk + 1, yk + 1) and

pk+1 = pk + 2Δy - 2Δx

5. Repeat step 4 Δx times.
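A minimal Python sketch of the steps above for slopes 0 < m < 1 with the left endpoint given first (the function name and return format are assumptions for illustration):

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham line scan conversion for slopes 0 < m < 1, left endpoint first."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                      # starting decision parameter p0 = 2*dy - dx
    two_dy, two_dy_dx = 2 * dy, 2 * dy - 2 * dx
    x, y = x0, y0
    points = [(x, y)]                    # plot the first point
    for _ in range(dx):                  # repeat the test dx times
        x += 1
        if p < 0:
            p += two_dy                  # next point is (x+1, y)
        else:
            y += 1
            p += two_dy_dx               # next point is (x+1, y+1)
        points.append((x, y))
    return points

# Worked example from the notes: endpoints (20, 10) and (30, 18)
print(bresenham_line(20, 10, 30, 18))
```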

Example 3-1: Bresenham Line Drawing

To illustrate the algorithm, we digitize the line with endpoints (20, 10) and (30, 18). This line has a slope of 0.8, with

Δx = x2 - x1 = 30 - 20 = 10,  Δy = y2 - y1 = 18 - 10 = 8

The initial decision parameter has the value

p0 = 2Δy - Δx = 16 - 10 = 6

and the increments for calculating successive decision parameters are

2Δy = 16,  2Δy - 2Δx = 16 - 20 = -4

We plot the initial point (x0, y0) = (20, 10) and determine successive pixel positions along the line path from the decision parameter as:

k    pk    (xk+1, yk+1)
0     6    (21, 11)
1     2    (22, 12)
2    -2    (23, 12)
3    14    (24, 13)
4    10    (25, 14)
5     6    (26, 15)
6     2    (27, 16)
7    -2    (28, 16)
8    14    (29, 17)
9    10    (30, 18)

A plot of the pixels generated along this line path is shown in the figure.

Q. 2013-3(b): Explain the Mid-point circle Algorithm.

• Samples at unit intervals and uses a decision parameter to determine the closest pixel position to the specified circle path at each step.


• Along the circle section from x = 0 to x = y in the first quadrant, the slope of the curve varies from 0 to -1. Therefore, we can take unit steps in the positive x direction over this octant and use a decision parameter to determine which of the two possible y positions is closer to the circle path at each step. Positions in the other seven octants are then obtained by symmetry.

• To apply the midpoint method, we define a circle function:

fcircle(x, y) = x² + y² - r²

and determine the nearest y position from the decision parameter evaluated at the midpoint between the two candidate pixels:

pk = fcircle(xk + 1, yk - ½) = (xk + 1)² + (yk - ½)² - r²

Successive decision parameters are obtained using incremental calculations. We obtain a recursive expression for the next decision parameter by evaluating the circle function at the sampling position xk+1 + 1 = xk + 2:

pk+1 = pk + 2xk+1 + 1            (if pk < 0, so yk+1 = yk)

or

pk+1 = pk + 2xk+1 + 1 - 2yk+1    (if pk ≥ 0, so yk+1 = yk - 1)

where yk+1 is either yk or yk - 1, depending on the sign of pk.

Q. 2013-3(c)/Example 3-2: Derive successive decision values and positions along the circle path using the midpoint method. [where r = 10, for the first-quadrant octant from x = 0 to x = y]


Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is

p0 = 1 - r = -9

For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and the initial increment terms for calculating the decision parameters are 2x0 = 0 and 2y0 = 20 (taking (x0, y0) = (0, r)). Successive decision values and positions are then:

k    pk    (xk+1, yk+1)    2xk+1    2yk+1
0    -9    (1, 10)           2       20
1    -6    (2, 10)           4       20
2    -1    (3, 10)           6       20
3     6    (4, 9)            8       18
4    -3    (5, 9)           10       18
5     8    (6, 8)           12       16
6     5    (7, 7)           14       14
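A short Python sketch of the midpoint circle algorithm described above (the function name and the symmetry/translation step are illustrative assumptions). For r = 10 it reproduces the octant positions (0, 10) through (7, 7) listed in the example.

```python
def midpoint_circle(r, xc=0, yc=0):
    """Midpoint circle algorithm: compute one octant and reflect it into the others."""
    x, y = 0, r
    p = 1 - r                       # initial decision parameter
    octant = [(x, y)]
    while x < y:
        x += 1
        if p < 0:
            p += 2 * x + 1          # midpoint inside the circle: keep y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y  # midpoint outside the circle: step y down
        octant.append((x, y))
    # Reflect the octant into all eight octants and translate to the center (xc, yc)
    points = set()
    for px, py in octant:
        for a, b in ((px, py), (py, px)):
            for sx in (1, -1):
                for sy in (1, -1):
                    points.add((xc + sx * a, yc + sy * b))
    return points

print(sorted(midpoint_circle(10)))
```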

Midpoint Ellipse Algorithm:

Ellipse properties: Expressing the distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2), we have:

sqrt((x - x1)² + (y - y1)²) + sqrt((x - x2)² + (y - y2)²) = constant

Cartesian coordinates: ((x - xc)/rx)² + ((y - yc)/ry)² = 1

Polar coordinates: x = xc + rx cos θ,  y = yc + ry sin θ

Explain Ellipse Algorithm: the ellipse function is fellipse(x, y) = ry²x² + rx²y² - rx²ry²


Decision parameter:
fellipse(x, y) < 0 if (x, y) is inside the ellipse
fellipse(x, y) = 0 if (x, y) is on the ellipse
fellipse(x, y) > 0 if (x, y) is outside the ellipse

Slope: dy/dx = -(2ry²x)/(2rx²y)

At the boundary between region 1 and region 2: dy/dx = -1, i.e. 2ry²x = 2rx²y

Therefore, we move out of region 1 whenever 2ry²x ≥ 2rx²y.

Assuming that we have just plotted the pixel at (xi, yi), the next position is determined by:

p1i = fellipse(xi + 1, yi - ½) = ry²(xi + 1)² + rx²(yi - ½)² - rx²ry²

If p1i < 0, the midpoint is inside the ellipse and yi is closer.
If p1i ≥ 0, the midpoint is outside the ellipse and yi - 1 is closer.

At the next position [xi+1 + 1 = xi + 2]:

p1i+1 = fellipse(xi+1 + 1, yi+1 - ½) = ry²(xi + 2)² + rx²(yi+1 - ½)² - rx²ry²

Or,

p1i+1 = p1i + 2ry²(xi + 1) + ry² + rx²[(yi+1 - ½)² - (yi - ½)²]

where yi+1 = yi or yi+1 = yi - 1.
Midpoint Ellipse Algorithm:
1. Input rx, ry, and ellipse center (xc, yc), and obtain the first point on an ellipse centered on the origin as
(x0, y0) = (0, ry)
2. Calculate the initial parameter in region 1 as
p10 = ry² - rx²ry + ¼rx²
3. At each xi position, starting at i = 0, if p1i < 0, the next point along the ellipse centered on (0, 0) is (xi + 1, yi) and
p1i+1 = p1i + 2ry²xi+1 + ry²
otherwise, the next point is (xi + 1, yi - 1) and
p1i+1 = p1i + 2ry²xi+1 - 2rx²yi+1 + ry²
and continue until
2ry²x ≥ 2rx²y
4. (x0, y0) is the last position calculated in region 1. Calculate the initial parameter in region 2 as
p20 = ry²(x0 + ½)² + rx²(y0 - 1)² - rx²ry²
5. At each yi position, starting at i = 0, if p2i > 0, the next point along the ellipse centered on (0, 0) is (xi, yi - 1) and
p2i+1 = p2i - 2rx²yi+1 + rx²
otherwise, the next point is (xi + 1, yi - 1) and
p2i+1 = p2i + 2ry²xi+1 - 2rx²yi+1 + rx²
Use the same incremental calculations as in region 1. Continue until y = 0.
6. For both regions, determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values:
x = x + xc,  y = y + yc
Example: 3-3:

rx = 8, ry = 6
2ry²x = 0 (with increment 2ry² = 72)
2rx²y = 2rx²ry (with increment -2rx² = -128)

Region 1: (x0, y0) = (0, 6)

p10 = ry² - rx²ry + ¼rx² = -332

i    p1i    (xi+1, yi+1)    2ry²xi+1    2rx²yi+1
0    -332   (1, 6)            72          768
1    -224   (2, 6)           144          768
2     -44   (3, 6)           216          768
3     208   (4, 5)           288          640
4    -108   (5, 5)           360          640
5     288   (6, 4)           432          512
6     244   (7, 3)           504          384

Move out of region 1 since 2ry²x > 2rx²y.

Region 2: (x0, y0) = (7, 3) (last position in region 1)

p20 = fellipse(7 + ½, 2) = 36(7.5)² + 64(2)² - (64)(36) = -23

i    p2i    (xi+1, yi+1)    2ry²xi+1    2rx²yi+1
0     -23   (8, 2)           576          256
1     361   (8, 1)           576          128
2     425   (8, 0)            -            -

Stop at y = 0.


Fig: Positions along an elliptical path centered on the origin with rx= 8 and ry= 6 using the midpoint
algorithm to calculate pixel addresses in the first quadrant.
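A compact Python sketch of the two-region midpoint ellipse procedure above (the function name and the quadrant-reflection step are illustrative assumptions). With rx = 8, ry = 6 it reproduces the first-quadrant positions (0, 6) through (8, 0) from Example 3-3.

```python
def midpoint_ellipse(rx, ry, xc=0, yc=0):
    """Midpoint ellipse algorithm: first-quadrant positions, then 4-way symmetry."""
    rx2, ry2 = rx * rx, ry * ry
    x, y = 0, ry
    quadrant = []
    # Region 1: |slope| < 1
    p1 = ry2 - rx2 * ry + 0.25 * rx2
    while 2 * ry2 * x < 2 * rx2 * y:
        quadrant.append((x, y))
        x += 1
        if p1 < 0:
            p1 += 2 * ry2 * x + ry2
        else:
            y -= 1
            p1 += 2 * ry2 * x - 2 * rx2 * y + ry2
    # Region 2: |slope| >= 1
    p2 = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y >= 0:
        quadrant.append((x, y))
        y -= 1
        if p2 > 0:
            p2 += -2 * rx2 * y + rx2
        else:
            x += 1
            p2 += 2 * ry2 * x - 2 * rx2 * y + rx2
    # Reflect into the other three quadrants and shift to the center (xc, yc)
    return {(xc + sx * px, yc + sy * py)
            for px, py in quadrant for sx in (1, -1) for sy in (1, -1)}

print(sorted(p for p in midpoint_ellipse(8, 6) if p[0] >= 0 and p[1] >= 0))
```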

Chapter: 05
Basic transformations: Basic transformations such as translation, rotation, and scaling are
included in most graphics packages. Some packages provide a few additional
transformations that are useful in certain applications. Two such transformations are
reflection and shear.
2D Transformations: Operations applied to the geometric description of an object to change its position, orientation, or size are called geometric transformations.
Why Transformations?
“Transformations are needed to manipulate the initially created object and to display the
modified object without having to redraw it.”
Q.2013-5(b): What do you mean by Translation and Rotation in geometric
transformations?
Translation: A translation moves all points in an object along the same straight-line path to
new positions.

We can write the components:


p'x = px + tx
p'y = py + ty
or in matrix form:

P' = P + T


Rotation: A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane. To generate a rotation, we specify a rotation angle θ and the position (x1, y1) of the rotation point (or pivot point) about which the object is to be rotated (Fig. 5-3).

Figure 5-3: Rotation of an object through angle θ about the pivot point (x1, y1).

• We can write the components:

p'x = px cos θ - py sin θ
p'y = px sin θ + py cos θ

• or in matrix form:
P' = R • P
• θ can be clockwise (-ve) or counterclockwise (+ve, as in our example).
• Rotation matrix:

R = | cos θ   -sin θ |
    | sin θ    cos θ |

Scaling: Scaling changes the size of an object and involves two scale factors, Sx and Sy, for the x and y coordinates respectively.
• Scales are about the origin.
• We can write the components:
p'x = sx • px
p'y = sy • py
• or in matrix form:
P' = S • P
• Scale matrix:

S = | sx   0  |
    | 0    sy |
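A small Python sketch of these basic 2D transformations using 3x3 homogeneous matrices (the helper names, and the use of the homogeneous form rather than the 2x2 matrices above, are assumptions for illustration):

```python
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, point):
    """Apply a 3x3 homogeneous matrix to a 2D point (x, y)."""
    x, y = point
    col = (x, y, 1)
    return tuple(sum(m[r][c] * col[c] for c in range(3)) for r in range(2))

# Example: rotate the point (1, 0) by 90 degrees counterclockwise about the origin
print(apply(rotate(math.pi / 2), (1, 0)))   # approximately (0, 1)
```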

Q.2013-6(d):What do you know about Composite Transformations?


Composite Transformations: With the matrix representations of the previous section, we
can set up a matrix for any sequence of transformations as a composite transformation
matrix by calculating the matrix product of the individual transformations.
Translations: If two successive translation vectors (tx1,ty1) and (tx2,ty2) are applied to a
coordinate position P, the final transformed location P’ is calculated as:-

P’=T(tx2,ty2) . {T(tx1,ty1) .P}


={T(tx2,ty2) . T(tx1,ty1)} .P
Where P and P’ are represented as homogeneous-coordinate column vectors. We can verify
this result by calculating the matrix product for the two associative groupings. Also, the
composite transformation matrix for this sequence of transformations is: -

Or, T(tx2,ty2) . T(tx1,ty1) = T(tx1+tx2, ty1+ty2)


which demonstrates that two successive translations are additive.
Rotations: Two successive rotations applied to point P produce the transformed position:
P' = R(θ2) . {R(θ1) . P}
   = {R(θ2) . R(θ1)} . P
By multiplying the two rotation matrices, we can verify that two successive rotations are additive:
R(θ2) . R(θ1) = R(θ1 + θ2)
so the final rotated coordinates can be calculated with the composite rotation matrix as:
P' = R(θ1 + θ2) . P

Scaling: Concatenating transformation matrices for two successive scaling operations


produce the following composite scaling matrix: -


Or, S(Sx2, Sy2 ) . S(Sx1, Sy1) = S (Sx1 . Sx2, Sy1 .Sy2 )


The resulting matrix in this case indicates that successive scaling operations are
multiplicative.
General pivot-point rotation: Translate the object so that the pivot point moves to the coordinate origin, rotate the object about the coordinate origin, and then translate the object back so that the pivot point returns to its original position (see the sketch below).

General fixed-point scaling: Translate the object so that the fixed point coincides with the coordinate origin, scale the object with respect to the coordinate origin, and then translate the object back to the fixed point.
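A brief Python sketch of building a composite transformation matrix by matrix products, shown for rotation about a general pivot point (all function names are assumptions for illustration):

```python
import math

def matmul(a, b):
    """3x3 matrix product: the composite transformation matrix."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rotate_about_pivot(theta, px, py):
    """T(px, py) . R(theta) . T(-px, -py): move the pivot to the origin,
    rotate about the origin, then move the pivot back."""
    return matmul(translate(px, py), matmul(rotate(theta), translate(-px, -py)))

# Example: rotating the point (2, 1) by 180 degrees about the pivot (1, 1) gives (0, 1)
m = rotate_about_pivot(math.pi, 1, 1)
x, y = 2, 1
print(m[0][0] * x + m[0][1] * y + m[0][2], m[1][0] * x + m[1][1] * y + m[1][2])
```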

Reflection: Reflection is a transformation that produces a mirror image of an object. It can be obtained by rotating the object by 180° about the reflection axis.


Figure: Reflection of an object about the x axis.

Figure: Reflection of an object about the y axis.


Shear: A transformation that distorts the shape of an object such that the transformed shape
appears as if the object were composed of internal layers that had been caused to slide over
each other is called a shear. There are two types of shearing transformations: shears in the x direction and shears in the y direction.
Shearing transformations in three-dimensions alter two of the three coordinate values
proportionally to the value of the third coordinate.

Affine Transformation: A coordinate transformation of the form

x' = axx.x + axy.y + bx,   y' = ayx.x + ayy.y + by



is called a two-dimensional affine transformation. Each of the transformed coordinates x' and
y' is a linear function of the original coordinates x and y, and parameters aij and bk are
constants determined by the transformation type. Affine transformations have the general
properties that parallel lines are transformed into parallel lines and finite points map to finite
points.
Translation, rotation, scaling, reflection, and shear are examples of two-dimensional affine
transformations.

Chapter: 06
Clipping: Identify those portions of a picture that are either inside or outside of a specified region of space. The region against which an object is to be clipped is called a clip window.

Viewport clipping: It can reduce calculations by allowing concatenation of viewing and geometric transformation matrices.

In the following sections, we consider algorithms for clipping the following primitive types:
• Point Clipping
• Line Clipping (straight-line segments)
• Area Clipping (polygons)
• Curve Clipping
• Text Clipping
Point Clipping: Assuming that the clip window is a rectangle in standard position, we save
a point P = (x, y) for display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
Where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-
coordinate window boundaries or viewport boundaries. If any one of these four inequalities
is not satisfied, the point is clipped (not saved for display)
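A one-function Python sketch of the point-clipping test above (the function name is an assumption):

```python
def clip_point(x, y, xw_min, xw_max, yw_min, yw_max):
    """Return True if the point (x, y) lies inside the clip window and should be kept."""
    return xw_min <= x <= xw_max and yw_min <= y <= yw_max
```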
Line Clipping: Possible relationships between line positions and a standard rectangular
clipping region


• Possible relationships:
– Completely inside the clipping window
– Completely outside the window
– Partially inside the window
• Parametric representation of a line
x = x1+ u (x2- x1)
y = y1+ u (y2- y1)
• The value of u for an intersection with a rectangle boundary edge
– Outside the range 0 to 1
– Within the range from 0 to 1
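The parametric form above is the basis of parametric line clippers. As an illustrative sketch (the notes do not name a specific algorithm), here is a Liang-Barsky-style routine that finds the visible range of u against a rectangular window:

```python
def clip_line_parametric(x1, y1, x2, y2, xw_min, xw_max, yw_min, yw_max):
    """Clip the segment (x1,y1)-(x2,y2) using x = x1 + u*dx, y = y1 + u*dy, 0 <= u <= 1.
    Returns the clipped endpoints, or None if the line is completely outside."""
    dx, dy = x2 - x1, y2 - y1
    u_min, u_max = 0.0, 1.0
    # For each window edge: p is the direction term, q the signed distance to the edge
    for p, q in ((-dx, x1 - xw_min), (dx, xw_max - x1),
                 (-dy, y1 - yw_min), (dy, yw_max - y1)):
        if p == 0:                  # line parallel to this boundary
            if q < 0:
                return None         # parallel and outside
        else:
            u = q / p
            if p < 0:               # entering intersection
                u_min = max(u_min, u)
            else:                   # leaving intersection
                u_max = min(u_max, u)
    if u_min > u_max:
        return None                 # no visible portion
    return (x1 + u_min * dx, y1 + u_min * dy,
            x1 + u_max * dx, y1 + u_max * dy)

print(clip_line_parametric(-5, 5, 15, 5, 0, 10, 0, 10))   # clipped to (0,5)-(10,5)
```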
Polygons clipping: To clip polygons, we need to modify the line-clipping procedures
discussed in the previous section. A polygon boundary processed with a line clipper may be
displayed as a series of unconnected line segments, depending on the orientation of the
polygon to the clipping window. The following example illustrates a simple case of polygon
clipping.

Curve Clipping: Use bounding rectangle to test for overlap with a rectangular clip window.
Curve clipping procedures will involve non-linear equations (so requires more processing
than for objects with linear boundaries).


Text Clipping: There are several techniques that can be used to provide text clipping in a
graphics package. The clipping technique used will depend on the methods used to generate
characters and the requirements of a particular application. Methods for processing character strings relative to a window boundary include:
• All-or-none string clipping strategy
• All or none character clipping strategy
• Clip the components of individual characters


Clip the components of individual characters: treat characters the same as lines; if an individual character overlaps a clip window boundary, clip off the parts of the character that are outside the window.

Chapter: 10
Q. 2012-8(i): Write the short note of following terms-
Bézier curve: A Bézier curve is a parametric curve frequently used in computer graphics
and related fields.
In general, a Bezier curve section can be fitted to any number of control points. The number
of control points to be approximated and their relative position determine the degree of the
Bézier polynomial.

Fig: Cubic Bézier curve

Suppose we are given n + 1 control-point positions: pk = (xk, yk, zk), with k varying from 0 to n. These coordinate points can be blended to produce the following position vector P(u), which describes the path of an approximating Bezier polynomial function between p0 and pn:

P(u) = Σ (k=0..n) pk BEZk,n(u),   0 ≤ u ≤ 1

The Bezier blending functions BEZk,n(u) are the Bernstein polynomials:

BEZk,n(u) = C(n, k) u^k (1-u)^(n-k)

where the C(n, k) are the binomial coefficients:

C(n, k) = n! / (k! (n-k)!)

Equivalently, we can define the Bezier blending functions with the recursive calculation

BEZk,n(u) = (1-u) BEZk,n-1(u) + u BEZk-1,n-1(u),   n > k ≥ 1

with BEZk,k(u) = u^k and BEZ0,k(u) = (1-u)^k. The vector equation represents a set of three parametric equations for the individual curve coordinates:

x(u) = Σ xk BEZk,n(u)
y(u) = Σ yk BEZk,n(u)
z(u) = Σ zk BEZk,n(u)
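A small Python sketch that evaluates a Bezier curve directly from the Bernstein blending functions above (function and variable names are illustrative assumptions):

```python
from math import comb

def bezier_point(control_points, u):
    """Evaluate a Bezier curve at parameter u in [0, 1] using
    BEZ_{k,n}(u) = C(n, k) * u^k * (1-u)^(n-k)."""
    n = len(control_points) - 1
    point = [0.0] * len(control_points[0])
    for k, pk in enumerate(control_points):
        blend = comb(n, k) * (u ** k) * ((1 - u) ** (n - k))
        for i, coord in enumerate(pk):
            point[i] += blend * coord
    return tuple(point)

# Example: a cubic Bezier curve in 2D, sampled at 11 parameter values
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = [bezier_point(ctrl, i / 10) for i in range(11)]
print(curve)
```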


Properties of the Bézier curves:


Interpolation: A Bézier curve always interpolates the end control points.
P(0)=P0
P(1)=Pn
Tangency: The end point tangent vectors are parallel to P1- P0 and Pn- Pn-1
Convex hull property: The curve is contained in the convex hull of its defining control
points.
The second derivatives at the endpoints are
P''(0) = n(n-1)[(p2 - p1) - (p1 - p0)]
P''(1) = n(n-1)[(pn-2 - pn-1) - (pn-1 - pn)]

Variation diminishing property: No straight line intersects a Bézier curve more times than
it intersects its control polygon.
The blending functions sum to one for every value of u: Σ (k=0..n) BEZk,n(u) = 1.

The convex-hull property for a Bezier curve ensures that the polynomial smoothly follows the control points without erratic oscillations.

Bézier Surface: Two sets of orthogonal Bézier curves can be used to design an object surface by specifying an input mesh of control points. The parametric vector function for the Bezier surface is formed as the Cartesian product of Bezier blending functions:

P(u, v) = Σ (j=0..m) Σ (k=0..n) pj,k BEZj,m(v) BEZk,n(u)

with pj,k specifying the location of the (m + 1) by (n + 1) control points.

Figure 10-39 illustrates two Bezier surface plots. The control points are connected by dashed
lines, and the solid lines show curves of constant u and constant v.

Figure 10-39: Bezier surfaces constructed tor (a) m = 3, n = 3, and (b) m = 4, n = 4. Dashed
lines connect the control points.

Q. 2012-8(ii): Write the short note of following terms-


B-Spline Curve and Surfaces: These are the most widely used class of approximating
splines. B-splines have two advantages over Bezier splines: (1) the degree of a B-spline
polynomial can be set independently of the number of control points (with certain
limitations), and (2) B-splines allow local control over the shape of a spline curve or surface
The trade-off is that B-splines are more complex than Bezier splines.

We can write a general expression for the calculation of coordinate positions along a B-spline curve in a blending-function formulation as

P(u) = Σ (k=0..n) pk Bk,d(u),   umin ≤ u ≤ umax,   2 ≤ d ≤ n + 1

Where the pk are an input set of n + 1 control points. There are several differences between
this B-spline formulation and that for Bezier splines.
Blending functions for B-spline curves are defined by the Cox-deBoor recursion formulas:

Bk,1(u) = 1   if uk ≤ u < uk+1
          0   otherwise

Bk,d(u) = ((u - uk)/(uk+d-1 - uk)) Bk,d-1(u) + ((uk+d - u)/(uk+d - uk+1)) Bk+1,d-1(u)

B-spline curves have the following properties:

• The polynomial curve has degree d - 1 and C^(d-2) continuity over the range of u.
• For n + 1 control points, the curve is described with n + 1 blending functions.
• Each blending function Bk,d is defined over d subintervals of the total range of u, starting at knot value uk.
• The range of parameter u is divided into n + d subintervals by the n + d + 1 values specified in the knot vector.
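Sketched below is a direct Python transcription of the Cox-deBoor recursion and the blending-function sum for P(u) (function names are assumptions); for the quadratic example in the next section, use d = 3 and the knot vector [0, 1, 2, 3, 4, 5, 6].

```python
def bspline_blend(k, d, u, knots):
    """Cox-deBoor recursion for the B-spline blending function B_{k,d}(u)."""
    if d == 1:
        return 1.0 if knots[k] <= u < knots[k + 1] else 0.0
    value = 0.0
    denom1 = knots[k + d - 1] - knots[k]
    if denom1 != 0:
        value += (u - knots[k]) / denom1 * bspline_blend(k, d - 1, u, knots)
    denom2 = knots[k + d] - knots[k + 1]
    if denom2 != 0:
        value += (knots[k + d] - u) / denom2 * bspline_blend(k + 1, d - 1, u, knots)
    return value

def bspline_point(control_points, d, knots, u):
    """P(u) = sum over k of p_k * B_{k,d}(u), for n + 1 2D control points."""
    x = sum(px * bspline_blend(k, d, u, knots) for k, (px, _) in enumerate(control_points))
    y = sum(py * bspline_blend(k, d, u, knots) for k, (_, py) in enumerate(control_points))
    return (x, y)

# Example: quadratic B-spline (d = 3) with 4 control points and uniform knots 0..6
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(bspline_point(ctrl, 3, list(range(7)), 3.0))   # point on the curve at u = 3
```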

Fig: Local modification of a B-spline curve. Changing one of the control points in (a) produces curve (b),
which is modified only in the neighborhood of the altered control point .


Example 10-1: Uniform, Quadratic B-Splines


To illustrate the calculation of B-spline blending functions for a uniform, integer knot
vector, we select parameter values d =n= 3. The knot vector must then contain n + d + 1 = 7
knot values:
{0, 1, 2, 3, 4, 5, 6}
and the range of parameter u is from 0 to 6, with n + d = 6 subintervals.
Each of the four blending functions spans d = 3 subintervals of the total range of u. Using the recurrence relations, we obtain the first blending function as:

B0,3(u) = ½ u²                         for 0 ≤ u < 1
          ½ [u(2-u) + (u-1)(3-u)]      for 1 ≤ u < 2
          ½ (3-u)²                     for 2 ≤ u < 3

We obtain the next periodic blending function using relationship 10-57, substituting u - 1 for u in B0,3 and shifting the starting positions up by 1:

B1,3(u) = ½ (u-1)²                     for 1 ≤ u < 2
          ½ [(u-1)(3-u) + (u-2)(4-u)]  for 2 ≤ u < 3
          ½ (4-u)²                     for 3 ≤ u < 4

Similarly, the remaining two periodic functions are obtained by successively shifting B1,3 to the right:

B2,3(u) = ½ (u-2)²                     for 2 ≤ u < 3
          ½ [(u-2)(4-u) + (u-3)(5-u)]  for 3 ≤ u < 4
          ½ (5-u)²                     for 4 ≤ u < 5


Chapter: 11

Geometric Transformation: The object itself is moved relative to a stationary coordinate system or background.
With respect to some 3-D coordinate system, an object Obj is considered as a set of points.
Obj = {P(x,y,z)}
If the Obj moves to a new position, the new object Obj’ is considered:
Obj’ = { P’(x’,y’,z’)}
Translation: Moving an object is called a translation. We translate an object by translating
each vertex in the object.
x’ = x + tx
y’ = y + ty
z’ = z + tz
The translation distances (tx, ty, tz) form the translation vector or shift vector. We can also write these equations as a single matrix equation using homogeneous column vectors:

| x' |   | 1  0  0  tx |   | x |
| y' | = | 0  1  0  ty | . | y |
| z' |   | 0  0  1  tz |   | z |
| 1  |   | 0  0  0  1  |   | 1 |


Rotation: To generate a rotation transformation for an object, we must designate an axis of


rotation (about which the object is to be rotated) and the amount of angular rotation. Unlike
two-dimensional applications, where all transformations are carried out in the xy plane, a
three-dimensional rotation can be specified around any line in space. The easiest rotation
axes to handle are those that are parallel to the coordinate axes. Also, we can use
combinations of coordinate axis rotations (along with appropriate translations) to specify any
general rotation.

In 2-D, a rotation is prescribed by an angle θ & a center of rotation P. But in 3-D rotations
require the prescription of an angle of rotation & an axis of rotation.

• Rotation about the z axis:

R θ,K:  x' = x cos θ - y sin θ
        y' = x sin θ + y cos θ
        z' = z

• Rotation about the y axis:

R θ,J:  x' = x cos θ + z sin θ
        y' = y
        z' = -x sin θ + z cos θ

• Rotation about the x axis:

R θ,I:  x' = x
        y' = y cos θ - z sin θ
        z' = y sin θ + z cos θ

and the corresponding rotation matrices are

        | cos θ   -sin θ   0 |
R θ,K = | sin θ    cos θ   0 |
        | 0        0       1 |

        | cos θ    0    sin θ |
R θ,J = | 0        1    0     |
        | -sin θ   0    cos θ |

        | 1   0        0      |
R θ,I = | 0   cos θ   -sin θ  |
        | 0   sin θ    cos θ  |
Scaling:
Changing the size of an object is called scaling. The scale factor s determines whether the scaling is a magnification (s > 1) or a reduction (s < 1). Scaling with respect to the origin, where the origin remains fixed:

S sx,sy,sz:  x' = x . sx
             y' = y . sy
             z' = z . sz
The transformation equations can be written in matrix form:

| x' |   | sx  0   0  |   | x |
| y' | = | 0   sy  0  | . | y |
| z' |   | 0   0   sz |   | z |
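A minimal Python sketch that builds the three 3D rotation matrices above and applies one to a point (function names are illustrative assumptions; angles are in radians):

```python
import math

def rotation_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rotation_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rotation_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def transform(m, p):
    """Apply a 3x3 matrix to the 3D point p = (x, y, z)."""
    return tuple(sum(m[i][j] * p[j] for j in range(3)) for i in range(3))

# Example: rotating (1, 0, 0) by 90 degrees about the z axis gives approximately (0, 1, 0)
print(transform(rotation_z(math.pi / 2), (1, 0, 0)))
```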

Chapter: 12

Viewing pipeline: The viewing pipeline is a group of processes common from wireframe
display through to near photo-realistic image generation, and is basically concerned with
transforming objects to be displayed from specific viewpoint and removing surfaces that
cannot be seen from this viewpoint.

World co-ordinate system: In our world co-ordinate system, we need to specify a view
reference point - this will become the origin of the view co-ordinate system.


Finally, we need to specify the view-up direction V; this gives the y-axis direction of the viewing coordinate system.

Q. 2013-7(a): What is Projection?

Projections: Once the world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the 3D objects onto the 2D view plane. There are two types of projection: (i) parallel projection and (ii) perspective projection.

Parallel Projections: Coordinate Positions are transformed to the view plane along parallel
lines.


• Orthographic parallel projection: The projection is perpendicular to the view plane.

• Oblique parallel projection: The parallel projection is not perpendicular to the view
plane.

Perspective projection: Perspective projection transforms object positions to the view plane
while converging to a center point of projection.

Perspective projection produces realistic views but does not preserve relative proportions.
Projections of distant objects are smaller than the projections of objects of the same size that
are closer to the projection plane.

Chapter: 13 (Visible-Surface Detection Methods)


Q. 2013-8(c): Explain depth-buffer method.

Depth-buffer method: A commonly used image-space approach to detecting visible


surfaces is the depth-buffer method, which compares surface depths at each pixel position on
the projection plane. This procedure is also referred to as the z-buffer method. This method
requires 2 buffers:

1) Depth buffer or z-buffer:


• To store the depth values for each (X, Y) position, as surfaces are processed.
• 0 ≤ depth ≤ 1
2) Refresh Buffer or Frame Buffer:
• To store the intensity value or Color value at each position (X, Y).


Figure: At view-plane position (x, y), surface S, has the smallest depth from the view plane
and so is visible at that position.

Q. 2012/09-8(a): Explain the Z-buffer algorithm.


1. depthbuffer(x,y) = 0
framebuffer(x,y) = background color
2. Process each polygon one at a time
2.1. For each projected (x,y) pixel position of a polygon, calculate depth z.
2.2. If z > depthbuffer(x,y)
Compute surface color,
set depthbuffer(x,y) = z,
framebuffer(x,y) = surfacecolor(x,y)
Calculating Depth:
• We know the depth values at the vertices.
• we can calculate the depth at any other point on the surface of the polygon using the
polygon surface equation:
z = (-Ax - By - D) / C


• For any scan line, adjacent horizontal x positions or vertical y positions differ by 1 unit.
• The depth value of the next position (x + 1, y) on the scan line can be obtained using

z' = (-A(x + 1) - By - D) / C = z - A/C

• For adjacent scan lines, we can compute the x value using the slope of the projected line and the previous x value:

x' = x - 1/m

z' = z + (A/m + B) / C
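A simplified Python sketch of the depth-buffer loop described above, not the notes' own code; the polygon objects and their covered_pixels/depth/color methods are hypothetical placeholders for a scan-conversion routine, and larger depth means closer to the view plane (as in the notes, with 0 as the background depth).

```python
def zbuffer_render(polygons, width, height, background=(0, 0, 0)):
    """Minimal z-buffer sketch: keep, at each pixel, the color of the closest surface."""
    depth_buffer = [[0.0] * width for _ in range(height)]
    frame_buffer = [[background] * width for _ in range(height)]
    for poly in polygons:
        for x, y in poly.covered_pixels():   # projected (x, y) positions of the polygon
            z = poly.depth(x, y)             # depth from the plane equation, in [0, 1]
            if z > depth_buffer[y][x]:       # closer than what is currently stored
                depth_buffer[y][x] = z
                frame_buffer[y][x] = poly.color(x, y)
    return frame_buffer
```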
A-buffer method: An extension of the ideas in the depth-buffer method is the A-buffer method (at the other end of the alphabet from "z-buffer", where z represents depth). The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES.

A drawback of the depth-buffer method is that it can only find one visible surface at each
pixel position. Each position in the A-buffer has two fields:

Depth field - stores a positive or negative real number

Intensity field- stores surface-intensity information or a pointer value.


If depth >= 0, the surface data field stores the depth of that pixel position, as before.
If depth < 0, the data field stores a pointer to a linked list of surface data.
Surface information in the A-buffer includes:
– RGB intensity components
– Opacity parameter
– Depth
– Percent of area coverage
– Surface identifier
– Other surface rendering parameters
Depth-sorting method: Using both image-space and object-space operations, the depth-
sorting method performs the following basic functions:
1. Surfaces are sorted in order of decreasing depth.
2. Surfaces are scan converted in order, starting with the surface of greatest depth.
Q. 2013-8(b): Define BSP tree and Octree.
BSP tree: A binary space-partitioning (BSP) tree is an efficient method for determining
object visibility by painting surfaces onto the screen from back to front, as in the painter's
algorithm.
BSP-Tree Method: When BSP tree is complete, process the tree from the right nodes to the
left nodes


Octree: An octree is a tree in which each node has at most 8 children. When an octree
representation is used for the viewing volume, hidden surface elimination is accomplished by
projecting octree nodes onto the viewing surface in a front-to-back order.

Q. 2012/09-8(c): Explain Back-Face Detection technique


Back-Face Detection: The simplest thing we can do is find the faces on the backs of polyhedra and discard them.

We know from before that a point (x, y, z) is behind a polygon surface if

Ax + By + Cz + D < 0

where A, B, C, and D are the plane parameters for the surface.
This can actually be made even easier if we organise things to suit ourselves: with the viewer looking along the negative z axis, we can simply say that if the z component of the polygon's normal is less than zero, the surface cannot be seen.
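A small Python sketch of this test (the triangle-based helper, the counter-clockwise vertex convention, and the default view direction along -z are assumptions for illustration):

```python
def is_back_face(v0, v1, v2, view_dir=(0.0, 0.0, -1.0)):
    """Back-face test for a triangle with counter-clockwise vertices v0, v1, v2.
    The face is culled when its normal points away from the viewer, i.e. N . V_view > 0
    (equivalently, when the normal's z component is negative for a viewer along -z)."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    nx = ay * bz - az * by          # surface normal N = (v1 - v0) x (v2 - v0)
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    dot = nx * view_dir[0] + ny * view_dir[1] + nz * view_dir[2]
    return dot > 0

# Example: a triangle facing the viewer (normal along +z) is not a back face
print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # False
```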


Q. 2013-8(a): Ray-Casting Method:


 Along the line of sight, can determined which objects intersect this line
 Method is based on geometric-optics methods, which trace the parts of light rays
 Trace the light-ray paths backward from the pixels through the scene
 Effective method for scenes with curved surfaces, particularly, spheres
 Ray casting is a special case of ray-tracing algorithms
 Only follow a ray out from each pixel to the nearest object

Viewport: An area on a display device to which a window is mapped; it defines where the window's contents are to be displayed.
Q. 2013-4(b):
(i) Bitmap fonts: On a black and white system with one bit per pixel, the frame buffer is
commonly known as a bitmap.
(ii) Anti-aliasing: Anti-aliasing is the smoothing of image or sound roughness caused by aliasing. With images, approaches include adjusting pixel positions or setting pixel intensities so that there is a more gradual transition between the color of a line and the background color.
