
Computer Graphics Short Notes


COPYRIGHT © First Code - Let's Begin


Unit 1: INTRODUCTION AND APPLICATIONS
What is Computer Graphics?
Computer graphics is the art of drawing pictures, lines, charts, etc., on a computer with the help of programming. A computer-generated image is made up of a number of pixels; a pixel is the smallest graphical unit that can be represented on the computer screen. Basically, there are two types of computer graphics:
Interactive Computer Graphics: Interactive computer graphics involves two-way communication between the computer and the user. Here the observer is given some control over the image by being provided with an input device, for example the video game controller of a ping-pong game, which lets the user signal requests to the computer.
On receiving signals from the input device, the computer can modify the displayed picture appropriately. To the user it appears that the picture changes instantaneously in response to his commands. He can give a series of commands, each one generating a graphical response from the computer, and in this way maintains a conversation, or dialogue, with the computer.
Interactive computer graphics affects our lives in a number of indirect ways. For example, it helps to train the pilots of our airplanes. We can create a flight simulator which may help pilots to train not in a real aircraft but on the ground, at the controls of the flight simulator. The flight simulator is a mock-up of an aircraft flight deck, containing all the usual controls and surrounded by screens onto which computer-generated views of the terrain visible on take-off and landing are projected.
Flight simulators have many advantages over real aircraft for training purposes, including fuel savings, safety, and the ability to familiarize the trainee with a large number of the world's airports.
Non-Interactive Computer Graphics: Non-interactive computer graphics, otherwise known as passive computer graphics, is computer graphics in which the user has no control over the image. The image is merely the product of a static stored program and is drawn according to the instructions given in the program, executed linearly. The image is totally under the control of the program instructions, not the user. Example: screen savers.

Applications of Computer Graphics :


Design and Drawing : In almost all areas of engineering, be it civil, mechanical, electronic, etc., drawings are of prime importance; in fact, drawing is said to be the language of engineers. The ability of computers to store complex drawings and display them on demand was one of the major attractions for using computers in graphics mode. However, there were further advantages. Most of these drawings are the result of engineering calculations; programs can be written to make these calculations, and the results can be used to draw diagrams on the screen. If changes are to be made, one can go back to the design formulae, and so on. Thus, the area of design and drawing was one of the earliest and most useful applications of graphics.
Animation: But what brought computers close to the average individual is the concept of animation, or moving pictures. It is a well-known principle of moving pictures that a succession of related pictures, when flashed at sufficient speed, appears to be moving. In movies, a sequence of such pictures is shot and displayed at sufficient speed to make them appear to move. Computers can do it in another way.
The properties of the picture can be modified at a fairly fast rate to make it appear to move. For example, if a hand is to be moved, the successive positions of the hand at different instants of time can be computed, and pictures showing the hand at these positions can be



flashed on the screen. This led to the concept of "animation", or moving pictures. In the initial stages, animation was mainly used in computer games.
Animation software enables you to chain and sequence a series of images to simulate movement.
Each image is like a frame in a movie. It can be defined as a simulation of movement created by
displaying a series of pictures, or frames. A cartoon on television is one example of animation.
Animation on computers is one of the chief ingredients of multimedia presentations. There are many
software applications that enable you to create animations that you can display on a computer
monitor.
There is a difference between animation and video. Whereas video takes continuous motion and
breaks it up into discrete frames, animation starts with independent pictures and puts them together
to form the illusion of continuous motion.
Multimedia applications : The use of sound cards to make computers produce sound effects led to other uses of graphics. The concept of virtual reality, wherein one can be taken through an unreal experience, like walking through an unbuilt house (to see how it feels inside, once it is built), is made possible by computer graphics technology.
In fact, the ability of computers to convert electronic signals (0s and 1s) to data and then to figures and pictures has made it possible for us to get photographs of distant planets like Mars reproduced here on Earth in almost real time.
Simulation : The other revolutionary change that graphics made was in the area of simulation. Basically, simulation is a mock-up of an environment elsewhere, built to study or experience it. The availability of easy-to-use interactive devices (the mouse is one of them; we will see a few others later in the course) made it possible to build simulators.
One example is the flight simulator, wherein the trainee, sitting in front of a computer, can operate the interactive devices as if operating the flight controls, and the changes he is expected to see outside his window are made to appear on the screen, so that he can master the skills of flight operations before actually trying his hand at real flights.
Paint programs: These allow you to create rough freehand drawings. The images are stored as bitmaps and can easily be edited. A paint program is a graphics program that enables you to draw pictures on the display screen that are represented as bitmaps (bit-mapped graphics). In contrast, draw programs use vector graphics (object-oriented images), which scale better.
Most paint programs provide a set of tools in the form of icons. By selecting an icon, you can perform the functions associated with that tool. In addition to these tools, paint programs also provide easy ways to draw common shapes such as straight lines, rectangles, circles, and ovals.
Sophisticated paint applications are often called image editing programs. These applications support
many of the features of draw programs, such as the ability to work with objects. Each object,
however, is represented as a bit map rather than as a vector image.
CAD software: Enables architects and engineers to draft designs. It is the acronym for computer-
aided design. A CAD system is a combination of hardware and software that enables engineers and
architects to design everything from furniture to airplanes. In addition to the software, CAD systems
require a high-quality graphics monitor; a mouse, light pen, or digitizing tablet for drawing; and a
special printer or plotter for printing design specifications.
CAD systems allow an engineer to view a design from any angle with the push of a button and to
zoom in or out for close-ups and long-distance views. In addition, the computer keeps track of design
dependencies so that when the engineer changes one value, all other values that depend on it are
automatically changed accordingly. Until the mid-1980s, all CAD systems were specially constructed



computers. Now, you can buy CAD software that runs on general-purpose workstations and
personal computers.

Unit 2: Display devices

Random Scan Display


Random scan displays, often termed vector, stroke, or line-drawing displays, came first and are still used in some applications. Here even the characters are made of sequences of strokes (or short lines). The electron gun of a CRT illuminates straight lines in any order. The display processor repeatedly reads a variable 'display file' defining a sequence of X, Y coordinate pairs and brightness or color values, and converts these to voltages controlling the electron gun.

In a random scan display, the electron beam is deflected from endpoint to endpoint.


The order of deflection is dictated by the arbitrary order of the display commands. The display must be refreshed at regular intervals – a minimum of 30 Hz (frames per second) for a flicker-free display.

Raster Scan Display


Raster scan methods have increasingly become the dominant technology since about 1975. These methods use the TV-type raster scan. The growth in the use of such methods has been dependent on rapidly decreasing memory prices and on the availability of cheap scan-generating hardware from the TV industry.
The screen is coated with discrete dots of phosphor, usually called pixels, laid out in a rectangular
array. The image is then determined by how each pixel is intensified. The representation of the
image used in servicing the refresh system is thus an area of memory holding a value for each pixel.
This memory area holding the image representation is called the frame buffer.
The values in the frame buffer are held as a sequence of horizontal lines of pixel values from the
top of the screen down. The scan generator then moves the beam in a series of horizontal lines with



fly-back (non-intensified) between each line and between the end of the frame and the
beginning of the next frame. This is illustrated below.

Unlike a random-scan display, which is a line-drawing device, a raster-scan CRT is a point-plotting device. Raster displays store the display primitives (lines, characters, shaded and patterned areas) in a refresh buffer. The refresh buffer (also called the frame buffer) stores the drawing primitives in terms of points and pixel components. The scan is synchronized with the access of the intensity values held in the frame buffer.

Further Differences Between Vector Scan and Raster Scan Displays



Color CRT Displays
This was one of the earlier CRTs to produce color displays. Coating phosphors of different compounds can produce different colored pictures. But the basic problem in graphics is not to produce a picture of a predetermined color, but to produce color pictures with the color characteristics chosen at run time.
The basic principle behind color displays is that combining the three basic colors – red, green, and blue – can produce every color. By choosing different ratios of these three colors we can produce different colors – millions of them, in fact. We also have basic phosphors that can produce these basic colors. So, one needs a technology to combine them in different combinations.

Plasma Panel Displays


Plasma displays are bright, have a wide color gamut, and can be produced in fairly large sizes, up
to 262 cm (103 inches) diagonally. They have a very low-luminance “dark-room” black level, creating
a black some find more desirable for watching movies. The display panel is only about 6 cm (2½
inches) thick, while the total thickness, including electronics, is less than 10 cm (4 inches).
Plasma displays use as much power per square meter as a CRT or an AMLCD television. Power
consumption will vary greatly depending on what is watched on it. Bright scenes (say a football
game) will draw significantly more power than darker scenes. The xenon and neon gas in a plasma
television is contained in hundreds of thousands of tiny cells positioned between two plates of glass.
Long electrodes are also sandwiched between the glass plates, in front of and behind the cells. The
address electrodes sit behind the cells, along the rear glass plate. The transparent display
electrodes, which are surrounded by an insulating dielectric material and covered by a magnesium
oxide protective layer, are mounted in front of the cell, along the front glass plate. Control circuitry
charges the electrodes that cross paths at a cell, creating a voltage difference between front and
back and causing the gas to ionize and form a plasma; as the gas ions rush to the electrodes and
collide, photons are emitted.
In a monochrome plasma panel, the ionizing state can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes – even after the ionizing voltage is removed.
erase a cell all voltage is removed from a pair of electrodes. This type of panel has inherent memory
and does not use phosphors. A small amount of nitrogen is added to the neon to increase hysteresis.
In color panels, the back of each cell is coated with a phosphor.
The ultraviolet photons emitted by the plasma excite these phosphors to give off colored light. The
operation of each cell is thus comparable to that of a fluorescent lamp. Every pixel is made up of
three separate sub pixel cells, each with different colored phosphors. One sub pixel has a red light
phosphor, one sub pixel has a green light phosphor and one sub pixel has a blue light phosphor.
These colors blend together to create the overall color of the pixel, analogous to the “triad” of a
shadowmask CRT.
LCD Panels
An active matrix liquid crystal display (AMLCD) is a type of flat panel display, currently the
overwhelming choice of notebook computer manufacturers, due to light weight, very good image
quality, wide color gamut, and response time. The most common example of an active matrix display
contains, besides the polarizing sheets and cells of liquid crystal, a matrix of thin-film transistors
(TFTs) to make a TFT LCD.
Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent
electrodes, and two polarizing filters, the axes of transmission of which are (in most of the cases)
perpendicular to each other. With no liquid crystal between the polarizing filters, light passing



through the first filter would be blocked by the second (crossed) polarizer. The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is unidirectionally rubbed using, for example, a cloth.
The direction of the liquid crystal alignment is then defined by the direction of rubbing. When a
voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel
to the electric field, distorting their helical structure. This reduces the rotation of the polarization of the incident
light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules
in the center of the layer are almost completely untwisted and the polarization of the incident light is
not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized
perpendicular to the second filter, and thus be blocked and the pixel will appear black.
By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed
to pass through in varying amounts, thus constituting different levels of gray. Both the liquid crystal
material and the alignment layer material contain ionic compounds. If an electric field of one
particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces
and degrades the device performance.
This is avoided either by applying an alternating current or by reversing the polarity of the electric
field as the device is addressed. When a large number of pixels is required in a display, it is not
feasible to drive each directly since then each pixel would require independent electrodes. Instead,
the display is multiplexed. In a multiplexed display, electrodes on one side of the display are grouped
and wired together (typically in columns), and each group gets its own voltage source. On the other
side, the electrodes are also grouped (typically in rows), with each group getting a voltage sink.

Unit 3 : 2-D DRAWING GEOMETRY


2D Transformation
Transformation means changing some graphics into something else by applying rules. We can have
various types of transformations such as translation, scaling up or down, rotation, shearing, etc.
When a transformation takes place on a 2D plane, it is called 2D transformation.
Transformations play an important role in computer graphics to reposition the graphics on the screen
and change their size or orientation.

Homogeneous Coordinates
To perform a sequence of transformations, such as translation followed by rotation and scaling, we need to follow a sequential process −
• Translate the coordinates,
• Rotate the translated coordinates, and then
• Scale the rotated coordinates to complete the composite transformation.
To shorten this process, we use a 3×3 transformation matrix instead of a 2×2 transformation matrix. To convert a 2×2 matrix to a 3×3 matrix, we add an extra dummy coordinate W.
In this way, we can represent a point by 3 numbers instead of 2, which is called the homogeneous coordinate system. In this system, all the transformation equations can be represented as matrix multiplications. Any Cartesian point P(X, Y) can be converted to homogeneous coordinates as P′(Xh, Yh, h).
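As an illustration (not part of the original notes; the helper names below are my own), a 2D point (X, Y) becomes (X, Y, 1) with h = 1, and a translation then becomes a single 3×3 matrix multiplication:

```python
def to_homogeneous(x, y):
    """Represent the Cartesian point (x, y) as the homogeneous vector (x, y, 1)."""
    return [x, y, 1]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-element column vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def translation_matrix(tx, ty):
    """3x3 homogeneous matrix that translates by (tx, ty)."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

# Translate the point (2, 3) by (5, -1): the result is (7, 2) with h = 1.
p = mat_vec(translation_matrix(5, -1), to_homogeneous(2, 3))
```

Because every transformation is now a matrix, a composite transformation is just a product of matrices applied once per point.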

Translation



A translation moves an object to a different position on the screen. You can translate a point in 2D by adding the translation coordinates (tx, ty) to the original coordinates (X, Y) to get the new coordinates (X′, Y′).

From the above figure, you can write that –


X’ = X + tx
Y’ = Y + ty
The pair (tx, ty) is called the translation vector or shift vector. The above equations can also be represented using column vectors.

We can write it as –
P’ = P + T
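A minimal Python sketch of these translation equations (illustrative only; the function name is an assumption, not from the notes):

```python
def translate(point, tv):
    """P' = P + T: add the translation vector (tx, ty) to the point (x, y)."""
    x, y = point
    tx, ty = tv
    return (x + tx, y + ty)

# Moving (10, 20) by the shift vector (3, -5) gives (13, 15).
```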

Rotation
In rotation, we rotate the object by a particular angle θ about its origin. From the following figure, we can see that the point P(X, Y) is located at an angle φ from the horizontal X axis, at a distance r from the origin.

Let us suppose you want to rotate it by the angle θ. After rotating it to a new location, you will get a new point P′(X′, Y′).



Using standard trigonometry, the original coordinates of point P(X, Y) can be represented as −

X = r cos φ ......(1)
Y = r sin φ ......(2)

In the same way, we can represent the point P′(X′, Y′) as −

X′ = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ ......(3)
Y′ = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ ......(4)

Substituting equations (1) and (2) into (3) and (4) respectively, we get

X′ = X cos θ − Y sin θ
Y′ = X sin θ + Y cos θ

Representing the above equations in matrix form, with P as the row vector [X Y] and R the rotation matrix whose rows are [cos θ  sin θ] and [−sin θ  cos θ],

P′ = P . R
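The rotation equations can be sketched in Python (illustrative code, not from the original notes):

```python
import math

def rotate(point, theta):
    """Rotate (x, y) about the origin by theta radians:
    x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta)."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Rotating (1, 0) by 90 degrees gives (0, 1), up to floating-point error.
```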

Scaling
To change the size of an object, scaling transformation is used. In the scaling process, you either
expand or compress the dimensions of the object. Scaling can be achieved by multiplying the
original coordinates of the object with the scaling factor to get the desired result.
Let us assume that the original coordinates are X,YX,Y, the scaling factors are (SX, SY), and the
produced coordinates are X′,Y′X′,Y′.This can be mathematically represented as shown below −
X' = X . SX and Y' = Y . SY



The scaling factors SX and SY scale the object in the X and Y directions respectively.
The above equations can also be represented in matrix form, with S the diagonal matrix whose rows are [SX  0] and [0  SY] −

P′ = P . S
Where S is the scaling matrix. The scaling process is shown in the following figure.

If we provide values less than 1 to the scaling factor S, then we can reduce the size of the object. If
we provide values greater than 1, then we can increase the size of the object.

Reflection
Reflection is the mirror image of original object. In other words, we can say that it is a rotation
operation with 180°. In reflection transformation, the size of the object does not change.
The following figures show reflections with respect to X and Y axes, and about the origin
respectively.
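The three reflections can be sketched in Python (illustrative code, not from the notes):

```python
def reflect_x(point):
    """Reflection about the X axis: (x, y) -> (x, -y)."""
    x, y = point
    return (x, -y)

def reflect_y(point):
    """Reflection about the Y axis: (x, y) -> (-x, y)."""
    x, y = point
    return (-x, y)

def reflect_origin(point):
    """Reflection about the origin: (x, y) -> (-x, -y), i.e. a 180-degree rotation."""
    x, y = point
    return (-x, -y)
```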



Interactive Picture Construction Techniques
Interactive picture-construction methods are commonly used in a variety of applications, including design and painting packages. These methods provide the user with the capability to position objects, to constrain figures to predefined orientations or alignments, to sketch figures, and to drag objects around the screen. Grids, gravity fields, and rubber-band methods are used to aid in positioning and other picture-construction operations. The techniques for interactive picture construction that are incorporated into graphics packages are:
(1) Basic positioning methods: Coordinate values supplied by locator input are often used with positioning methods to specify a location for displaying an object or a character string. Coordinate positions are selected interactively with a pointing device, usually by positioning the screen cursor.
(2) Constraints: A constraint is a rule for altering input coordinate values to produce a specified orientation or alignment of the displayed coordinates. The most common constraint is horizontal or vertical alignment of straight lines.
(3) Grids: Another kind of constraint is a grid of rectangular lines displayed over some part of the screen area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid lines.
(4) Gravity field: When lines need to be connected at positions between endpoints, the graphics package converts any input position near a line to a position on the line. The conversion is accomplished by creating a gravity area around the line: any input position within the gravity field of a line is moved to the nearest position on the line. This is usually illustrated with a shaded boundary around the line.
(5) Rubber-band methods: Straight lines can be constructed and positioned using rubber-band methods, which stretch out a line from a starting position as the screen cursor moves.

(6) Dragging: This method moves objects into position by dragging them with the screen cursor.
(7) Painting and drawing: Cursor drawing options can be provided using standard curve shapes, such as circular arcs and splines, or with freehand sketching procedures. Line widths, line styles, and other attribute options are also commonly found in painting and drawing packages.

Dragging
Dragging is used to move an object from one position to another on the computer screen. To drag an object, first select the object to be moved by


holding the mouse button down. As the cursor moves on the screen, the object moves with the cursor. When the cursor reaches the desired position, the button is released.

The following diagram represents the dragging procedure:

Unit 4 : CONICS AND CURVES


DDA Algorithm :
The Digital Differential Analyzer (DDA) algorithm is a simple line generation algorithm, explained step by step here.
Step 1 − Get the input of the two end points (X0, Y0) and (X1, Y1).
Step 2 − Calculate the difference between the two end points.
dx = X1 - X0
dy = Y1 - Y0
Step 3 − Based on the differences calculated in Step 2, identify the number of steps needed to put pixels. If |dx| > |dy|, you need more steps in the x coordinate; otherwise in the y coordinate.
if (absolute(dx) > absolute(dy))
    steps = absolute(dx);
else
    steps = absolute(dy);

Step 4 − Calculate the increments in the x and y coordinates.


Xincrement = dx / (float) steps;
Yincrement = dy / (float) steps;



Step 5 − Put the pixel by successively incrementing the x and y coordinates, starting from (X0, Y0), and complete the drawing of the line.
x = X0;
y = Y0;
putpixel(Round(x), Round(y));
for (int v = 0; v < steps; v++)
{
    x = x + Xincrement;
    y = y + Yincrement;
    putpixel(Round(x), Round(y));
}
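The five steps above can be collected into one runnable function. This Python sketch (illustrative, not from the original notes) returns the list of pixels instead of calling putpixel:

```python
def dda_line(x0, y0, x1, y1):
    """Generate the pixels of the line from (x0, y0) to (x1, y1) with DDA."""
    dx = x1 - x0
    dy = y1 - y0
    steps = max(abs(dx), abs(dy))     # number of steps along the major axis
    if steps == 0:
        return [(x0, y0)]             # degenerate line: a single point
    x_inc = dx / steps                # per-step increment in x
    y_inc = dy / steps                # per-step increment in y
    x, y = float(x0), float(y0)
    pixels = [(x0, y0)]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        pixels.append((round(x), round(y)))
    return pixels
```

Note that DDA needs floating-point arithmetic and rounding at every step, which is exactly the cost Bresenham's algorithm (next section) avoids.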

Bresenham’s Line Generation


The Bresenham algorithm is another incremental scan-conversion algorithm. The big advantage of this algorithm is that it uses only integer calculations. We move across the x axis in unit intervals and at each step choose between two different y coordinates.
For example, as shown in the following illustration, from position (2, 3) you need to choose between (3, 3) and (3, 4). You would like the point that is closer to the original line.

At sample position xk + 1, the vertical separations from the mathematical line are labelled dupper and dlower.



From the above illustration, the y coordinate on the mathematical line at xk + 1 is −
y = m(xk + 1) + b
So, dupper and dlower are given as follows −
dlower = y − yk
       = m(xk + 1) + b − yk
and
dupper = (yk + 1) − y
       = yk + 1 − m(xk + 1) − b
You can use these to make a simple decision about which pixel is closer to the mathematical line. This simple decision is based on the difference between the two pixel positions.
dlower − dupper = 2m(xk + 1) − 2yk + 2b − 1
Let us substitute m with dy/dx, where dx and dy are the differences between the end points.
dx(dlower − dupper) = dx(2(dy/dx)(xk + 1) − 2yk + 2b − 1)
                    = 2dy.xk − 2dx.yk + 2dy + dx(2b − 1)
                    = 2dy.xk − 2dx.yk + C
So, the decision parameter pk for the kth step along the line is given by −
pk = dx(dlower − dupper)
   = 2dy.xk − 2dx.yk + C
The sign of the decision parameter pk is the same as that of dlower − dupper.
If pk is negative, choose the lower pixel; otherwise choose the upper pixel.
Remember, the coordinate changes occur along the x axis in unit steps, so you can do everything with integer calculations. At step k + 1, the decision parameter is given as −
pk+1 = 2dy.xk+1 − 2dx.yk+1 + C
Subtracting pk from this, we get −
pk+1 − pk = 2dy(xk+1 − xk) − 2dx(yk+1 − yk)
But xk+1 is the same as xk + 1, so −
pk+1 = pk + 2dy − 2dx(yk+1 − yk)
where yk+1 − yk is either 0 or 1, depending on the sign of pk.
The first decision parameter p0, evaluated at (x0, y0), is given as −
p0 = 2dy − dx
Now, keeping in mind all the above points and calculations, here is the Bresenham algorithm for slope m < 1 −
Step 1 − Input the two end points of the line, storing the left end point in (x0, y0).
Step 2 − Plot the point (x0, y0).
Step 3 − Calculate the constants dx, dy, 2dy, and 2dy − 2dx, and obtain the first value of the decision parameter as −
p0 = 2dy − dx
Step 4 − At each xk along the line, starting at k = 0, perform the following test −
If pk < 0, the next point to plot is (xk + 1, yk) and
pk+1 = pk + 2dy
Otherwise, the next point is (xk + 1, yk + 1) and
pk+1 = pk + 2dy − 2dx

Step 5 − Repeat Step 4 dx − 1 times.


For m > 1, the roles of x and y are interchanged: increment y in unit steps and at each step decide whether to increment x.

After solving, the equation for the decision parameter pk is very similar; only the x and y in the equations are interchanged.
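The five steps above can be sketched as a runnable function. This Python version (illustrative, not from the original notes; restricted to 0 ≤ m ≤ 1 with x0 < x1) returns the plotted pixels:

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham line generation for 0 <= slope <= 1 and x0 < x1,
    using only integer arithmetic."""
    dx = x1 - x0
    dy = y1 - y0
    p = 2 * dy - dx                 # initial decision parameter p0
    x, y = x0, y0
    pixels = [(x, y)]               # Step 2: plot the starting point
    for _ in range(dx):             # Steps 4-5: one decision per unit step in x
        x += 1
        if p < 0:                   # lower pixel is closer: keep y
            p += 2 * dy
        else:                       # upper pixel is closer: increment y
            y += 1
            p += 2 * dy - 2 * dx
        pixels.append((x, y))
    return pixels
```

Every quantity in the loop (p, 2dy, 2dy − 2dx) is an integer, which is the whole point of the derivation above.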

Unit 5 : Window and Viewport


Polygon Clipping: Sutherland–Hodgman Algorithm :
A polygon can be clipped by specifying the clipping window. The Sutherland–Hodgman polygon clipping algorithm is used for polygon clipping. In this algorithm, all the vertices of the polygon are clipped against each edge of the clipping window.
First the polygon is clipped against the left edge of the clipping window to get the new vertices of the polygon. These new vertices are used to clip the polygon against the right edge, top edge, and bottom edge of the clipping window, as shown in the following figure.

While processing an edge of the polygon against a clipping-window boundary, an intersection point is found if the edge is not completely inside the clipping window, and the partial edge from the intersection point to the outside is clipped. The following figures show the left, right, top and bottom edge clippings −
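One stage of this pipeline can be sketched in code. The Python function below (illustrative, not from the original notes) clips a polygon against the left boundary only; the full algorithm applies the same idea in turn to the right, top and bottom edges:

```python
def clip_left(polygon, xmin):
    """One Sutherland-Hodgman stage: clip a polygon (list of (x, y)
    vertices) against the left boundary x = xmin, keeping x >= xmin."""
    out = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        in1, in2 = x1 >= xmin, x2 >= xmin
        if in1 != in2:                      # edge crosses the boundary:
            t = (xmin - x1) / (x2 - x1)     # emit the intersection point
            out.append((xmin, y1 + t * (y2 - y1)))
        if in2:                             # emit end points that are inside
            out.append((x2, y2))
    return out
```

Each stage consumes the vertex list produced by the previous stage, so after all four stages only the portion of the polygon inside the window remains.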



Scan Line Polygon Fill Algorithm:

This algorithm finds the interior points of a polygon on each scan line, and these points are turned on or off according to requirement. The polygon is filled with various colors by coloring the corresponding pixels.

In the above figure, a polygon and a scan line cutting the polygon are shown. First of all, scanning is done, using the raster-scanning concept of the display device. The beam starts scanning from the top left corner of the screen and moves toward the bottom right corner as the end point. The algorithm finds the points of intersection of the scan line with the polygon while moving from left to right and top to bottom. The various points of intersection are stored in the frame buffer. The intensities of the pixels between pairs of such points are kept high. The concept of the coherence property is used: according to this property, if a pixel is inside the polygon, its neighboring pixel is likely to be inside the polygon as well.

Side effects of Scan Conversion:



1. Staircase or jagged appearance: A staircase-like appearance is seen when scan converting a line or circle.

2. Unequal intensity: This deals with the unequal brightness of different lines. An inclined line appears less bright compared to horizontal and vertical lines.

Line Clipping
In computer graphics, line clipping is the process of removing lines or portions of lines outside an area of interest. Typically, any line or part thereof which is outside of the viewing area is removed.

There are two common algorithms for line clipping: Cohen–Sutherland and Liang–Barsky.
A line-clipping method consists of various parts. Tests are conducted on a given line segment to find out whether it lies outside the view volume. Afterwards, intersection calculations are carried out with one or more clipping boundaries.
Determining which portion of the line is inside or outside of the clipping volume is done by processing the endpoints of the line with regard to the intersections.

Cohen–Sutherland
In computer graphics, the Cohen–Sutherland algorithm (named after Danny Cohen and Ivan Sutherland) is a line-clipping algorithm. The algorithm divides 2D space into 9 regions, of which only the middle region (the viewport) is visible.
In 1967, flight-simulation work by Danny Cohen led to the development of the Cohen–Sutherland
computer graphics two- and three-dimensional line clipping algorithms, created with Ivan
Sutherland.
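The 9 regions are encoded as 4-bit outcodes. A Python illustration (not from the original notes; the bit assignment shown is one common convention):

```python
# Region-code bits: one per window boundary.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Cohen-Sutherland 4-bit region code of a point relative to the
    clipping window; 0 means the point is inside the window."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

# A segment is trivially accepted when both endpoint codes are 0,
# and trivially rejected when (code1 & code2) != 0.
```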

Nicholl–Lee–Nicholl
The Nicholl–Lee–Nicholl algorithm is a fast line-clipping algorithm that reduces the chances of
clipping a single line segment multiple times, as may happen in the Cohen–Sutherland algorithm.
The clipping window is divided into a number of different areas, depending on the position of the
initial point of the line to be clipped.



Cyrus–Beck
The Cyrus–Beck algorithm is very similar to the Liang–Barsky line-clipping algorithm; the difference is that Liang–Barsky is a simplified Cyrus–Beck variation optimized for a rectangular clip window.
The Cyrus–Beck algorithm is primarily intended for clipping a line in parametric form against a convex polygon in 2 dimensions or against a convex polyhedron in 3 dimensions.

Unit 6 : 3 D GRAPHICS
In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is added.
3D graphics techniques and their application are fundamental to the entertainment, games, and
computer-aided design industries. It is a continuing area of research in scientific visualization.
Furthermore, 3D graphics components are now a part of almost every personal computer and,
although traditionally intended for graphics-intensive software such as games, they are
increasingly being used by other applications.

Translation
In 3D translation, we add a translation distance along the Z axis in addition to the X and Y axes.
The process is otherwise similar to 2D translation. A translation moves an object to a different
position on the screen.
The following figure shows the effect of translation –
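As a sketch of the idea (the matrix layout is the standard homogeneous form; function names and values are illustrative), a 3D translation can be written as a 4×4 matrix applied to the point (X, Y, Z, 1):

```python
def translation_matrix(tx, ty, tz):
    """Homogeneous 4x4 matrix that translates by (tx, ty, tz)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform_point(m, p):
    """Apply a 4x4 matrix to a 3D point (implicit homogeneous w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))
```

The matrix form matters because translation can then be composed with rotation and scaling by matrix multiplication.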



Rotation
3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation
along with the axis of rotation. We can perform 3D rotation about the X, Y, and Z axes. They are
represented in matrix form as below –

The following figure explains the rotation about various axes –
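As a hedged sketch of the standard rotation matrices about the three axes (angles in radians; the helper names are illustrative):

```python
import math

def rot_x(a):
    """Rotation about the X axis by angle a."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0],
            [0, c, -s],
            [0, s, c]]

def rot_y(a):
    """Rotation about the Y axis by angle a."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s],
            [0, 1, 0],
            [-s, 0, c]]

def rot_z(a):
    """Rotation about the Z axis by angle a."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0],
            [s, c, 0],
            [0, 0, 1]]

def rotate(m, p):
    """Apply a 3x3 rotation matrix to a 3D point."""
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))
```

For example, rotating the point (1, 0, 0) by 90° about the Z axis carries it onto the Y axis.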



Scaling
You can change the size of an object using scaling transformation. In the scaling process, you either
expand or compress the dimensions of the object. Scaling can be achieved by multiplying the
original coordinates of the object with the scaling factor to get the desired result. The following figure
shows the effect of 3D scaling –

In a 3D scaling operation, three coordinates are used. Let us assume that the original coordinates
are (X, Y, Z), the scaling factors are (Sx, Sy, Sz) respectively, and the produced coordinates are
(X′, Y′, Z′). Mathematically, X′ = X·Sx, Y′ = Y·Sy, and Z′ = Z·Sz.
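The rule, each coordinate multiplied by its own scaling factor, can be sketched in a few lines; the names are illustrative:

```python
def scale(point, sx, sy, sz):
    """Scale (X, Y, Z) by factors (Sx, Sy, Sz): X' = X*Sx, Y' = Y*Sy, Z' = Z*Sz."""
    x, y, z = point
    return (x * sx, y * sy, z * sz)
```

Factors greater than 1 expand the object along that axis; factors between 0 and 1 compress it.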



Projection
Orthographic Projection :
In orthographic projection, the direction of projection is normal to the projection plane. There
are three types of orthographic projections −

• Front Projection
• Top Projection
• Side Projection
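Each orthographic view can be sketched as simply dropping one coordinate of the 3D point; the function names are illustrative:

```python
def front_view(p):
    """Project onto the XY plane (drop Z)."""
    x, y, z = p
    return (x, y)

def top_view(p):
    """Project onto the XZ plane (drop Y)."""
    x, y, z = p
    return (x, z)

def side_view(p):
    """Project onto the YZ plane (drop X)."""
    x, y, z = p
    return (y, z)
```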

Oblique Projection
In oblique projection, the direction of projection is not normal to the projection plane. In oblique
projection, we can view the object better than in orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. The Cavalier projection makes
a 45° angle with the projection plane. The projection of a line perpendicular to the view plane has
the same length as the line itself in Cavalier projection; in other words, the foreshortening factors
for all three principal directions are equal.
The Cabinet projection makes a 63.4° angle with the projection plane. In Cabinet projection, lines
perpendicular to the viewing surface are projected at ½ their actual length. Both projections are
shown in the following figure –
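Both projections can be sketched as a shear of the Z coordinate onto the view plane: x′ = x + L·z·cos α, y′ = y + L·z·sin α, with L = 1 for Cavalier and L = ½ for Cabinet, where α is the angle the receding lines make on the page (commonly 45°). The names below are illustrative:

```python
import math

def oblique(point, l, alpha):
    """Project (x, y, z) obliquely: z maps to a line of length l*z at angle alpha."""
    x, y, z = point
    return (x + l * z * math.cos(alpha), y + l * z * math.sin(alpha))

def cavalier(p):
    return oblique(p, 1.0, math.radians(45))   # receding lines keep full length

def cabinet(p):
    return oblique(p, 0.5, math.radians(45))   # receding lines at half length
```

A unit step along Z projects to length 1 under Cavalier and length ½ under Cabinet, matching the foreshortening described above.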



Perspective Projection
In perspective projection, the distance from the center of projection to the projection plane is finite,
and the size of an object varies inversely with distance, which looks more realistic.
Distances and angles are not preserved, and parallel lines do not remain parallel; instead, they
all converge at a single point called the center of projection or projection reference point. There are 3
types of perspective projections, which are shown in the following chart.

• One point perspective projection is simple to draw.
• Two point perspective projection gives a better impression of depth.
• Three point perspective projection is the most difficult to draw.

The following figure shows all the three types of perspective projection –
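The core of perspective projection, the division by depth that makes far objects smaller, can be sketched as follows. The viewing setup (center of projection at the origin, view plane at z = d) is an illustrative assumption:

```python
def perspective(point, d):
    """Project (x, y, z) onto the plane z = d, with the center of
    projection at the origin: x' = x*d/z, y' = y*d/z."""
    x, y, z = point
    return (x * d / z, y * d / z)
```

Doubling an object's distance halves its projected size, which is exactly the inverse variation with distance described above.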



Unit 7 : ANIMATION
Computer Animation
Animation means giving life to any object in computer graphics. It has the power of injecting energy
and emotions into the most seemingly inanimate objects. Computer-assisted animation and
computer-generated animation are two categories of computer animation. It can be presented via
film or video.
The basic idea behind animation is to play back recorded images at rates fast enough to fool
the human eye into interpreting them as continuous motion. Animation can make a series of still
images come alive. Animation is used in many areas, such as entertainment, computer-aided
design, scientific visualization, training, education, e-commerce, and computer art.

Animation Techniques
Animators have invented and used a variety of different animation techniques. Basically, there are
six animation techniques, which we discuss one by one in this section.

i) Traditional Animation (frame by frame)


Traditionally, most animation was done by hand: all the frames in an animation had to be
drawn individually. Since each second of animation requires 24 frames of film, the amount of effort
required to create even the shortest of movies can be tremendous.

ii) Keyframing
In this technique, a storyboard is laid out and then the artists draw the major frames of the animation.
Major frames are the ones in which prominent changes take place; they are the key points of the
animation. Keyframing requires the animator to specify critical or key positions for the objects.
The computer then automatically fills in the missing frames by smoothly interpolating between those
positions.
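The interpolation step can be sketched as follows; linear interpolation is the simplest choice (real systems often use splines for smoother motion), and the names are illustrative:

```python
def interpolate(key0, key1, t):
    """Linearly interpolate between two key positions for t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(key0, key1))

def in_betweens(key0, key1, n):
    """Generate n evenly spaced in-between frames (excluding the keys)."""
    return [interpolate(key0, key1, (i + 1) / (n + 1)) for i in range(n)]
```

With two keyframes and three in-betweens, the computer produces a five-frame sequence from what the animator drew as only two drawings.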

iii) Procedural
In procedural animation, the objects are animated by a procedure (a set of rules), not by
keyframing. The animator specifies the rules and initial conditions and runs a simulation. The rules
are often based on the physical laws of the real world, expressed by mathematical equations.

iv) Behavioral
In behavioral animation, an autonomous character determines its own actions, at least to a certain
extent. This gives the character some ability to improvise, and frees the animator from the need to
specify each detail of every character's motion.



v) Performance Based Motion Capture
Another technique is motion capture, in which magnetic or vision-based sensors record the actions
of a human or animal subject in three dimensions. A computer then uses this data to animate the
character.
This technology has enabled a number of famous athletes to supply the actions for characters in
sports video games. Motion capture is popular with animators mainly because many
commonplace human actions can be captured with relative ease. However, there can be serious
discrepancies between the shapes or dimensions of the subject and the graphical character, and this
may lead to problems in execution.

vi) Physically Based Dynamics


Unlike keyframing and motion capture, simulation uses the laws of physics to generate the motion
of figures and other objects. Simulations can easily be used to produce slightly different sequences
while maintaining physical realism. Secondly, real-time simulations allow a higher degree of
interactivity, where a real person can maneuver the actions of the simulated character.
In contrast, applications based on keyframing and motion capture select and modify motions from a
pre-computed library of motions. One drawback that simulation suffers from is the expertise and
time required to handcraft the appropriate control systems.

What is Morphing?
If you have ever watched Michael Jackson's Black or White video, in which faces change into one
another, you must have wondered how it was possible. It is done through morphing, which has come
of age since then. It is possible to start with your face and end up with the face of a terrorist or
some celebrity. This special effect of face changing is so smooth that the viewer is left wondering
how one face turned into another. The technique involves marking the important features or the
contours of the face to be changed, and the same facial features, such as the position of the
nose and mouth, are marked on the face into which the first face has to evolve. Computer software
then distorts the first face to take the shape of the second face while cross-fading between the two
faces. Morphing has largely taken over from the cross-fading technique that was earlier used for
transitions between two scenes in TV shows.
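The cross-dissolve half of morphing (the warp that aligns the marked features is omitted here) can be sketched as a per-pixel blend of two equally sized images; the toy grayscale "images" below are illustrative:

```python
def dissolve(img_a, img_b, t):
    """Blend two same-sized grayscale images pixel by pixel:
    t = 0 gives img_a, t = 1 gives img_b."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

A full morph plays this dissolve while simultaneously warping both images so the marked features (nose, mouth, contours) stay aligned at every intermediate t.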
