Computer Graphics
Computer graphics is the art of drawing pictures, lines, charts, etc. using computers with the help of
programming. A computer graphics image is made up of a number of pixels. A pixel is the smallest
addressable graphical unit represented on the computer screen.
1.1 APPLICATIONS OF COMPUTER GRAPHICS
The development of computer graphics has been driven both by the needs of the user community and by
advances in hardware and software. The applications of computer graphics are many and varied; we can,
however, divide them into four major areas:
1. Display of information
2. Design
3. Simulation and animation
4. User interfaces
1. Display of information
a) Graphs and Charts
• An early application of computer graphics was the display of simple data graphs, usually plotted on a
character printer. Data plotting is still one of the most common graphics applications.
• Graphs & charts are commonly used to summarize functional, statistical, mathematical, engineering
and economic data for research reports, managerial summaries and other types of publications.
• Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots
and other displays showing relationships between multiple parameters in two dimensions, three
dimensions, or higher-dimensional spaces.
2. Design
a) Computer-Aided Design
• A major use of computer graphics is in design processes, particularly for engineering and architectural
systems.
• CAD (computer-aided design) and CADD (computer-aided drafting and design) methods are now
routinely used in the design of automobiles, aircraft, spacecraft, computers, home appliances, and many
other products.
• Circuits and networks for communications, water supply, or other utilities are constructed with repeated
placement of a few basic graphical shapes.
d) Computer Art
• Some television programs also use animation techniques to combine computer generated figures of
people, animals, or cartoon characters with the actor in a scene or to transform an actor’s face into
another shape.
h) Image Processing
• The modification or interpretation of existing pictures, such as photographs and TV scans, is called
image processing.
• Although methods used in computer graphics and image processing overlap, the two areas are concerned
with fundamentally different operations.
• Image processing methods are used to improve picture quality, analyze images, or recognize visual
patterns for robotics applications.
• Each screen display area can contain a different process, showing graphical or non-graphical
information, and various methods can be used to activate a display window.
• Using an interactive pointing device, such as a mouse, we can activate a display window on some systems
by positioning the screen cursor within the window display area and pressing the left mouse button.
1.2 GRAPHICS SYSTEM
A computer graphics system is a computer system; as such, it must have all the components of a general-
purpose computer system. Let us start with the high-level view of a graphics system, as shown in the block
diagram in Figure 1.1. There are six major elements in our system:
1. Input devices
2. Central Processing Unit
3. Graphics Processing Unit
4. Memory
5. Frame buffer
6. Output devices
a) Pixels and Frame buffer
• Virtually all modern graphics systems are raster based. The image we see on the output device is an
array—the raster—of picture elements, or pixels, produced by the graphics system.
• From Figure 1.2, each pixel corresponds to a location, or small area, in the image. Collectively, the
pixels are stored in a part of memory called the frame buffer.
• In full-color systems, there are 24 (or more) bits per pixel. Such systems can display sufficient colors
to represent most images realistically. They are also called true-color systems, or RGB-color systems,
because individual groups of bits in each pixel are assigned to each of the three primary colors—red,
green, and blue. High dynamic range (HDR) systems use 12 or more bits for each color component.
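As a rough worked example (the resolution here is an assumed illustration, not taken from the text), the memory a frame buffer needs follows directly from the pixel count and the color depth:

/* Sketch: frame-buffer memory for an assumed 1920 x 1080 true-color display. */
#include <stdio.h>

int main(void)
{
    int width = 1920, height = 1080;   /* assumed resolution */
    int bits_per_pixel = 24;           /* true-color (RGB) system */
    long bytes = (long)width * height * (bits_per_pixel / 8);
    printf("Frame buffer: %ld bytes (%.1f MB)\n",
           bytes, bytes / (1024.0 * 1024.0));
    return 0;
}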
b) The CPU and the GPU
• In a simple system, there may be only one processor, the central processing unit (CPU) of the system,
which must do both the normal processing and the graphical processing.
• The main graphical function of the processor is to take specifications of graphical primitives (such as
lines, circles, and polygons) generated by application programs and to assign values to the pixels in
the frame buffer that best represent these entities.
• The conversion of geometric entities to pixel colors and locations in the frame buffer is known as
rasterization, or scan conversion. Today, virtually all graphics systems are characterized by special-
purpose graphics processing units (GPUs), custom-tailored to carry out specific graphics functions.
• In a noninterlaced system, the pixels are displayed row by row, or scan line by scan line, at the refresh
rate. In an interlaced display, odd rows and even rows are refreshed alternately.
• Color CRTs have three different colored phosphors (red, green, and blue), arranged in small groups.
One common style arranges the phosphors in triangular groups called triads, each triad consisting of
three phosphors, one of each primary.
• In the shadow-mask CRT (Figure 1.4), a metal screen with small holes—the shadow mask—ensures
that an electron beam excites only phosphors of the proper color.
• Flat-panel monitors are inherently raster based. Although there are multiple technologies available,
including light-emitting diodes (LEDs), liquid-crystal displays (LCDs), and plasma panels, all use a
two-dimensional grid to address individual light-emitting elements.
• It is not necessary that the output of the mouse or trackball encoders be interpreted as a position.
Instead, either the device driver or a user program can interpret the information from the encoder as
two independent velocities to obtain a two-dimensional position.
• Thus, as a mouse moves across a surface, the integrals of the velocities yield x, y values that can be
converted to indicate the position for a cursor on the screen, as shown in Figure 1.8. By interpreting the
distance traveled by the ball as a velocity, we can use the device as a variable-sensitivity input device.
• If the ball does not rotate, then there is no change in the integrals and a cursor tracking the position of
the mouse will not move. In this mode, these devices are relative-positioning devices. Relative
positioning with a mouse or trackball is not always desirable: if the user lifts and moves the mouse while
attempting to follow a curve on the screen, the absolute position on the curve being traced is lost.
• The motion of the joystick in two orthogonal directions (Figure 1.10) is encoded, interpreted as two
velocities, and integrated to identify a screen location. The integration implies that if the stick is left in
its resting position, there is no change in the cursor position and that the farther the stick is moved
from its resting position, the faster the screen location changes. Thus, the joystick is a variable-
sensitivity device.
• The other advantage of the joystick is that the device can be constructed with mechanical elements,
such as springs and dampers.
• A spaceball looks like a joystick with a ball on the end of the stick (Figure 1.11); however, the stick
does not move. Rather, pressure sensors in the ball measure the forces applied by the user.
• The spaceball can measure not only the three direct forces (up–down, front–back, left–right) but also
three independent twists. The device measures six independent values and thus has six degrees of
freedom. Such an input device could be used, for example, both to position and to orient a camera.
• Each device measure, including the identifier for the device, is placed in an event queue. This process
of placing events in the event queue is completely independent of what the application program does
with these events.
• One way that the application program can work with events is shown in Figure 1.12. The user program
can examine the front event in the queue or, if the queue is empty, can wait for an event to occur. If
there is an event in the queue, the program can look at the event's type and decide what to do.
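A minimal sketch of this event-queue style (the types and function names below are illustrative, not any particular library's API):

#include <stdio.h>

typedef enum { EVENT_NONE, EVENT_MOUSE, EVENT_KEYBOARD } EventType;

typedef struct {
    EventType type;     /* identifies the device that produced the event */
    int x, y;           /* measure for mouse events */
    char key;           /* measure for keyboard events */
} Event;

/* Stub standing in for the system's event queue. */
static Event next_event(void) { Event e = { EVENT_NONE, 0, 0, 0 }; return e; }

int main(void)
{
    Event e = next_event();            /* examine the front event */
    switch (e.type) {                  /* dispatch on the event's type */
    case EVENT_MOUSE:    printf("mouse at (%d, %d)\n", e.x, e.y); break;
    case EVENT_KEYBOARD: printf("key '%c'\n", e.key);             break;
    default:             printf("queue empty\n");                 break;
    }
    return 0;
}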
• Light is a form of electromagnetic radiation. Taking the classical view, electromagnetic energy travels
as waves that can be characterized by either their wavelengths or their frequencies.
• The electromagnetic spectrum (Figure 1.16) includes radio waves, infrared (heat), and a portion that
causes a response in our visual systems. This visible spectrum, which has wavelengths in the range
of 350 to 780 nanometers (nm), is called (visible) light.
• A ray is a semi-infinite line that emanates from a point and travels to infinity in a particular direction.
• A portion of these infinite rays contributes to the image on the film plane of our camera. For example,
if the source is visible from the camera, some of the rays go directly from the source through the lens
of the camera and strike the film plane.
• Suppose that we orient our camera along the z-axis, with the pinhole at the origin of our coordinate
system. The film plane is located a distance d from the pinhole. A side view (Figure 1.20) allows us to
calculate where the image of the point (x, y, z) is on the film plane z = −d.
• Using the fact that the two triangles in Figure 1.20 are similar, we find that the y coordinate of the
image is at yp, where yp = −y/(z/d) = −(y d)/z.
• The point (xp, yp, −d) is called the projection of the point (x, y, z). The field, or angle, of view of our
camera is the angle made by the largest object that our camera can image on its film plane.
• We can calculate the field of view with the aid of Figure 1.21. If h is the height of the camera, the angle
of view θ is θ = 2 tan⁻¹(h/2d).
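A short numerical sketch of both formulas (the camera parameters and the point are assumed values, chosen only for illustration):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double d = 1.0, h = 2.0;             /* assumed film distance and height */
    double x = 0.5, y = 0.25, z = -4.0;  /* assumed point in front of camera */

    double xp = -x / (z / d);            /* projection onto plane z = -d */
    double yp = -y / (z / d);
    double theta = 2.0 * atan(h / (2.0 * d));   /* angle of view, radians */

    printf("projection: (%g, %g, %g)\n", xp, yp, -d);
    printf("angle of view: %g degrees\n", theta * 180.0 / acos(-1.0));
    return 0;
}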
• The ideal pinhole camera has an infinite depth of field: Every point within its field of view is in focus.
• Disadvantages:
o First, because the pinhole is so small—it admits only a single ray from a point source—almost
no light enters the camera.
o Second, the camera cannot be adjusted to have a different angle of view.
• Replacing the pinhole with a lens addresses both problems. First, the lens gathers more light than can
pass through the pinhole. The larger the aperture of the lens,
the more light the lens can collect. Second, by picking a lens with the proper focal length—a selection
equivalent to choosing d for the pinhole camera—we can achieve any desired angle of view (up to 180
degrees).
b) The Human Visual System
• The complex visual system has all the components of a physical imaging system, such as a camera or
a microscope. The major components of the visual system are shown in Figure 1.22.
• Light enters the eye through the lens and cornea, a transparent structure that protects the eye. The iris
opens and closes to adjust the amount of light entering the eye. The lens forms an image on a two-
dimensional structure called the retina at the back of the eye.
• First, the specification of the objects is independent of the specification of the viewer. Hence, we
should expect that, within a graphics library, there will be separate functions for specifying the objects
and the viewer.
• Second, we can compute the image using simple geometric calculations, just as we did with the pinhole
camera. Consider the side view of the camera and a simple object in Figure 1.24.
• The view in part (a) of the figure is similar to that of the pinhole camera. Note that the image of the
object is flipped relative to the object.
• We find the image of a point on the object on the virtual image plane by drawing a line, called a
projector, from the point to the center of the lens, or the center of projection (COP).
• In our synthetic camera, the virtual image plane that we have moved in front of the lens is called the
projection plane.
• In the synthetic camera, we can move this limitation to the front by placing a clipping rectangle, or
clipping window, in the projection plane (Figure 1.26). This rectangle acts as a window, through which
a viewer, located at the center of projection, sees the world.
• Given the location of the center of projection, the location and orientation of the projection plane, and
the size of the clipping rectangle, we can determine which objects will appear in the image.
• By clicking on these items, the user guides the software and produces images without having to write
programs.
• The interface between an application program and a graphics system can be specified through a set of
functions that resides in a graphics library. These specifications are called the application programming
interface (API). The application programmer’s model of the system is shown in Figure 1.28.
• The software drivers are responsible for interpreting the output of the API and converting these data
to a form that is understood by the particular hardware.
a) The Pen-Plotter Model
• The earliest graphics systems were two-dimensional systems. Their conceptual model is now referred
to as the pen-plotter model, referencing the output device that was available on these systems.
• A pen plotter (Figure 1.29) produces images by moving a pen held by a gantry, a structure that can
move the pen in two orthogonal directions across the paper.
• The plotter can raise and lower the pen as required to create the desired image. Pen plotters are still in
use; they are well suited for drawing large diagrams, such as blueprints. Various APIs—such as LOGO
and PostScript—have their origins in this model.
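A sketch of the pen-plotter style of API (moveto and lineto are the classic conceptual functions; the bodies below merely trace the idea rather than drive a real plotter):

#include <stdio.h>

static double cur_x = 0.0, cur_y = 0.0;   /* current pen position */

void moveto(double x, double y)   /* move the pen without drawing */
{
    cur_x = x; cur_y = y;
}

void lineto(double x, double y)   /* lower the pen and draw a segment */
{
    printf("segment (%g, %g) -> (%g, %g)\n", cur_x, cur_y, x, y);
    cur_x = x; cur_y = y;
}

int main(void)                    /* draw a unit square */
{
    moveto(0.0, 0.0);
    lineto(1.0, 0.0); lineto(1.0, 1.0);
    lineto(0.0, 1.0); lineto(0.0, 0.0);
    return 0;
}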
• An alternate raster-based, two-dimensional model relies on writing pixels directly into a frame buffer.
Such a system could be based on a single function of the form
write_pixel(x, y, color);
b) Three-Dimensional APIs
• The synthetic-camera model is the basis for a number of popular APIs, including OpenGL and
Direct3D. If we are to follow the synthetic-camera model, we need functions in the API to specify the
following:
✓ Objects
✓ A viewer
✓ Light sources
✓ Material properties
• We could also use the three points to display three pixels at locations in the frame buffer corresponding
to the three vertices. For example, in OpenGL we would use GL_TRIANGLES, GL_LINE_STRIP, or
GL_POINTS for the three possibilities we just described.
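For instance, in the legacy immediate-mode style used elsewhere in these notes, three vertices could be sent as a triangle like this (swapping the argument for GL_LINE_STRIP or GL_POINTS yields the other two interpretations; the coordinates are illustrative):

glBegin(GL_TRIANGLES);        /* or GL_LINE_STRIP, or GL_POINTS */
   glVertex3f(0.0, 0.0, 0.0);
   glVertex3f(0.0, 1.0, 0.0);
   glVertex3f(0.0, 0.0, 1.0);
glEnd();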
• We can define a viewer or camera in a variety of ways. Available APIs differ both in how much
flexibility they provide in camera selection and in how many different methods they allow.
• If we look at the camera in Figure 1.32, we can identify four types of necessary specifications: position,
orientation, focal length, and film plane.
• This block diagram suggests that we might implement the modeler and the renderer with different
software and hardware. For example, consider the production of a single frame in an animation.
• We first want to design and position our objects. Consequently, we prefer to carry out this step on an
interactive workstation with good graphics hardware. Once we have designed the scene, we want to
render it, adding light sources, material properties, and a variety of other detailed effects, to form a
production-quality image.
• This step requires a tremendous amount of computation, so we might prefer to use a render farm, a
cluster of computers configured for numerical computing.
• The interface between the modeler and the renderer can be as simple as a file produced by the modeler
that describes the objects and contains additional information important only to the renderer, such as
light sources, viewer location, and material properties.
• Models, including the geometric objects, lights, cameras, and material properties, are placed in a data
structure called a scene graph that is passed to a renderer or game engine.
1.7 GRAPHICS ARCHITECTURES
• On one side of the API is the application program. On the other is some combination of hardware and
software that implements the functionality of the API.
• A simple model of these early graphics systems is shown in Figure 1.35. The display in these systems
was based on a calligraphic CRT display that included the necessary circuitry to generate a line
segment connecting two points.
✓ The display processor would then execute repetitively the program in the display list, at a rate sufficient
to avoid flicker, independently of the host, thus freeing the host for other tasks.
✓ This architecture has become closely associated with client–server architectures.
b) Pipeline Architecture
✓ For computer-graphics applications, the most important use of custom VLSI circuits has been in
creating pipeline architectures.
✓ The concept of pipelining is illustrated in Figure 1.37 for a simple arithmetic calculation. In our
pipeline, there is an adder and a multiplier.
✓ If we use this configuration to compute a + (b ∗ c), the calculation takes one multiplication and one
addition—the same amount of work required if we use a single processor to carry out both operations.
However, suppose that we have to carry out the same computation with many values of a, b, and c.
✓ Now, the multiplier can pass on the results of its calculation to the adder and can start its next
multiplication while the adder carries out the second step of the calculation on the first set of data.
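The same computation in code form (an ordinary sequential loop; the comments mark the two stages that overlap on successive data sets in a hardware pipeline):

#include <stdio.h>

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4] = {9, 10, 11, 12};

    for (int i = 0; i < 4; ++i) {
        double product = b[i] * c[i];    /* stage 1: the multiplier */
        double result  = a[i] + product; /* stage 2: the adder */
        /* In a pipelined unit, the multiplier would already be working
           on element i+1 while the adder finishes element i. */
        printf("a + (b * c) = %g\n", result);
    }
    return 0;
}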
d) Vertex Processing
✓ In the first block of our pipeline, each vertex is processed independently. The two major functions of
this block are to carry out coordinate transformations and to compute a color for each vertex.
✓ For example, in our discussion of the synthetic camera, we observed that a major part of viewing is to
convert a representation of objects from the system in which they were defined to a representation in
terms of the coordinate system of the camera.
✓ A further example of a transformation arises when we finally put our images onto the output device.
✓ We can represent each change of coordinate systems by a matrix. We can represent successive changes
in coordinate systems by multiplying, or concatenating, the individual matrices into a single matrix.
✓ Because multiplying one matrix by another matrix yields a third matrix, a sequence of transformations
is an obvious candidate for a pipeline architecture.
✓ Eventually, after multiple stages of transformation, the geometry is transformed by a projection
transformation.
✓ This step uses 4 × 4 matrices, and thus projection fits in the pipeline. In general, we want to keep
three-dimensional information as long as possible as objects pass through the pipeline.
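A minimal sketch of concatenation: multiplying two 4 x 4 matrices into a single matrix that represents both changes of coordinate system (applying C = AB to a point is equivalent to applying B first, then A):

void mat4_multiply(const double A[4][4], const double B[4][4],
                   double C[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
        }
}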
e) Clipping and Primitive Assembly
✓ The second fundamental block in the implementation of the standard graphics pipeline is for clipping
and primitive assembly. We must do clipping because of the limitation that no imaging system can see
the whole world at once.
✓ The human retina has a limited size corresponding to an approximately 90-degree field of view.
Cameras have film of limited size, and we can adjust their fields of view by selecting different lenses.
▪ Before we develop the program, you might try to determine what the resulting image will be. A
possible form for our graphics program might be this:
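A pseudocode sketch of the point-at-a-time approach (the function names are illustrative):

main()
{
    initialize_the_system();
    p = find_initial_point();

    for (some_number_of_points) {
        q = generate_a_point(p);   /* new point from the previous one */
        display_the_point(q);      /* send it to the display immediately */
        p = q;
    }

    cleanup();
}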
▪ In a second algorithm, we compute all the points first and put them into an array or some other data structure.
▪ This approach avoids the overhead of sending small amounts of data to the graphics processor for each
point we generate at the cost of having to store all the data. The strategy used in the first algorithm is
known as immediate mode graphics.
▪ In our second algorithm, because the data are stored in a data structure, we can redisplay the data,
perhaps with some changes such as altering the color or changing the size of a displayed point, by
resending the array without regenerating the points. The method of operation is known as retained
mode graphics.
▪ Our second approach has one major flaw. Suppose that, as we might in an animation, we wish to
redisplay the same objects. The geometry of the objects is unchanged, but the objects may be moving.
▪ Displaying all the points involves sending the data from the CPU to the GPU each time we wish to
display the objects in a new position. Consider the following alternative scheme:
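A sketch of one such scheme, using OpenGL buffer objects to place the point data in GPU memory once, so that redisplay does not resend the points (points here stands for the application's array of generated points):

GLuint buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
/* Each redisplay now draws from GPU memory; only the transformation
   describing the objects' new positions need be sent per frame. */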
• We use the terms vertex and point in a somewhat different manner in OpenGL. A vertex is a position
in space; we use two-, three-, and four-dimensional spaces in computer graphics.
• The simplest geometric primitive is a point in space, which is usually specified by a single vertex. Two
vertices can specify a line segment, a second primitive object; three vertices can specify either a
triangle or a circle; four vertices can specify a quadrilateral; and so on.
• The heart of our Sierpinski gasket program is generating the points; what remains is to go from our
third algorithm to a working OpenGL program.
• One simplification is to delay a discussion of coordinate systems and transformations among them by
putting all the data we want to display inside a cube centered at the origin whose diagonal goes from
(−1, −1, −1) to (1, 1, 1). This system, known as clip coordinates, is the one that our vertex shader uses
to send information to the rasterizer.
• Objects outside this cube will be eliminated, or clipped, and cannot appear on the display. Later, we can
describe our objects in other systems—object coordinates—and use transformations to convert the data
to a representation in clip coordinates.
• We can use a simple array of two elements to hold the x- and y-values of each point.
• We have created such classes and operators and put them in a file vec.h. The types in vec.h and the
other types defined later in the three- and four-dimensional classes match the types in the OpenGL
Shading Language.
• We can access individual elements using either the usual membership operator, e.g., p.x or p.y, or by
indexing as we would an array (p[0] and p[1]).
• One small addition will make our applications even clearer. Rather than using the GLSL vec2, we
typedef a point2:
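typedef vec2 point2;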
• Within vec.h, the type vec2 is specified as a struct with two elements of type GLfloat. In OpenGL, we
often use basic OpenGL types, such as GLfloat and GLint, rather than the corresponding C types float
and int. These types are defined in the OpenGL header files, usually in the obvious way—for example:
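typedef float GLfloat;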
• However, use of the OpenGL types allows additional flexibility for implementations where, for
example, we might want to change floats to doubles without altering existing application programs.
• The following code generates 5000 points starting with the vertices of a triangle that lie in the plane
z=0:
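A sketch consistent with the vec.h types described above (the triangle vertices and the initial interior point are illustrative choices):

#include <cstdlib>     // rand()
#include "vec.h"       // vec2 and its operators, as described above

typedef vec2 point2;

const int NumPoints = 5000;
point2 points[NumPoints];

void generate_points()
{
    // Vertices of a triangle in the plane z = 0
    point2 vertices[3] = {
        point2(-1.0, -1.0), point2(0.0, 1.0), point2(1.0, -1.0)
    };

    points[0] = point2(0.25, 0.50);   // an arbitrary starting point

    // Each new point is halfway between a randomly chosen vertex
    // and the previously generated point
    for (int i = 1; i < NumPoints; ++i) {
        int j = rand() % 3;
        points[i] = (points[i - 1] + vertices[j]) / 2.0;
    }
}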
• Note that because any three noncollinear points specify a unique plane, had we started with three points
(x1, y1, z1), (x2, y2, z2), and (x3, y3, z3) along with an initial point in the same plane, then the gasket
would be generated in the plane specified by the original three vertices.
• One way to maintain the screen picture is to store the picture information as a charge distribution
within the CRT in order to keep the phosphors activated.
• The most common method now employed for maintaining phosphor glow is to redraw the picture
repeatedly by quickly directing the electron beam back over the same screen points. This type of
display is called a refresh CRT.
• The frequency at which a picture is redrawn on the screen is referred to as the refresh rate.
Operation of an electron gun with an accelerating anode
• The primary components of an electron gun in a CRT are the heated metal cathode and a control grid.
• The heat is supplied to the cathode by directing a current through a coil of wire, called the filament,
inside the cylindrical cathode structure.
• This causes electrons to be “boiled off” the hot cathode surface.
• Inside the CRT envelope, the free, negatively charged electrons are then accelerated toward the
phosphor coating by a high positive voltage.
• Intensity of the electron beam is controlled by the voltage at the control grid.
• Since the amount of light emitted by the phosphor coating depends on the number of electrons striking
the screen, the brightness of a display point is controlled by varying the voltage on the control grid.
• What we see on the screen is the combined effect of all the electron light emissions: a glowing spot
that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
• The frequency of the light emitted by the phosphor is proportional to the energy difference between
the excited quantum state and the ground state.
• Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without
flicker.
• At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying
the next scan line. The return to the left of the screen, after refreshing each scan line, is called the
horizontal retrace of the electron beam.
➢ Here, the frame buffer can be anywhere in the system memory, and the video controller accesses the
frame buffer to refresh the screen.
➢ In addition to the video controller, raster systems employ other processors as coprocessors and
accelerators to implement various graphics operations.
1.11.1 VIDEO CONTROLLER:
➢ Figure 17 below shows a commonly used organization for raster systems.
➢ A fixed area of the system memory is reserved for the frame buffer, and the video controller is given
direct access to the frame-buffer memory.
➢ Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian
coordinates.
➢ The screen surface is then represented as the first quadrant of a two-dimensional system with positive
x and y values increasing from left to right and bottom of the screen to the top respectively.
➢ Pixel positions are then assigned integer x values that range from 0 to xmax across the screen, left to
right, and integer y values that vary from 0 to ymax, bottom to top.
b) Basic Video Controller Refresh Operations
➢ The basic refresh operations of the video controller are diagrammed in Fig. 19.
▪ For characters that are defined as outlines, the shapes are scan-converted into the frame buffer by
locating the pixel positions closest to the outline, as in Fig. 22.
• Some mice use optical sensors, which detect movement across horizontal and vertical grid lines.
• Since a mouse can be picked up and put down, it is used for making relative changes in the position of
the screen cursor.
• Most general purpose graphics systems now include a mouse and a keyboard as the primary input
devices.
c) Trackballs and Spaceballs:
• A trackball is a ball device that can be rotated with the fingers or palm of the hand to produce screen
cursor movement.
• Some laptop keyboards are equipped with a trackball to eliminate the extra space required by a mouse.
• A spaceball is an extension of the two-dimensional trackball concept.
• Spaceballs are used for three-dimensional positioning and selection operations in virtual reality
systems, modeling, animation, CAD and other applications.
d) Joysticks:
• A joystick is a positioning device that uses a small vertical lever (stick) mounted on a base. It is used
to steer the screen cursor around and to select screen positions with the stick movement.
• A push or pull on the stick is measured with strain gauges and converted to movement of the screen
cursor in the direction of the applied pressure.
• Device coordinates (xdc , ydc ) are integers within the range (0, 0) to (xmax, ymax) for a particular output
device.
b) Graphics Functions
• A graphics package provides users with a variety of functions for creating and manipulating pictures.
• The basic building blocks for pictures are referred to as graphics output primitives.
• Attributes are properties of the output primitives.
• We can change the size, position, or orientation of an object using geometric transformations.
• Modeling transformations are used to construct a scene.
• Viewing transformations are used to select a view of the scene, the type of projection to be used and
the location where the view is to be displayed.
• Input functions are used to control and process the data flow from these interactive devices (mouse,
tablet and joystick).
• A graphics package contains a number of housekeeping tasks; we can lump the functions for carrying
out these tasks under the heading of control operations.
c) Software Standards
• The primary goal of standardized graphics software is portability.
• In 1984, the Graphical Kernel System (GKS) was adopted as the first graphics software standard by the
International Standards Organization (ISO).
• The second software standard to be developed and approved by the standards organizations was
Programmer’s Hierarchical Interactive Graphics System (PHIGS).
Step 2: Title
• We can state that a display window is to be created on the screen with a given caption for the title bar.
This is accomplished with the function
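glutCreateWindow ("Title of Display Window");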
• where the single argument for this function can be any character string that we want to use for the
display-window title.
GLUT Function 2:
• We can set the initial size of the display window with the glutInitWindowSize function; once the display
window is on the screen, it can still be repositioned and resized.
glutInitWindowSize (400, 300);
GLUT Function 3:
• We can also set a number of other options for the display window, such as buffering and a choice of
color modes, with the glutInitDisplayMode function.
• Arguments for this routine are assigned symbolic GLUT constants.
• Example: the following command specifies that a single refresh buffer is to be used for the display
window and that we want to use the color mode which uses red, green, and blue (RGB) components
to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
• The values of the constants passed to this function are combined using a logical OR operation.
• Actually, single buffering and RGB color mode are the default options.
• The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and
by coordinates (xmax, ymax) at the upper-right corner, as shown in Figure below:
• We can then designate one or more graphics primitives for display using the coordinate reference
specified in the gluOrtho2D statement.
• If the coordinate extents of a primitive are within the coordinate range of the display window, all of
the primitive will be displayed.
• Otherwise, only those parts of the primitive within the display-window coordinate limits will be
shown.
• Also, when we set up the geometry describing a picture, all positions for the OpenGL primitives must
be given in absolute coordinates, with respect to the reference frame defined in the gluOrtho2D
function.
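For example (the coordinate range here is an assumed illustration), the reference frame could be set with:

glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);   /* xmin, xmax, ymin, ymax */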
1.20 OpenGL POINT FUNCTIONS
• The type within glBegin() specifies the type of the object, and its value can be, for example,
GL_POINTS.
• Each vertex is then displayed as a point.
• The size of the point is at least one pixel.
• Then this coordinate position, along with other geometric descriptions we may have in our scene, is
passed to the viewing routines.
• Unless we specify other attribute values, OpenGL primitives are displayed with a default size and
color.
• The default color for primitives is white, and the default point size is equal to the size of a single screen
pixel.
• Finally, the coordinate values can be listed explicitly in the glVertex function, or a single argument
can be used that references a coordinate position as an array. If we use an array specification for a
coordinate position, we need to append v (for “vector”) as a third suffix code.
Case 2: Alternatively, we could specify the coordinate values for the preceding points in arrays and call
the OpenGL functions for plotting the three points, as shown below.
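A sketch, with illustrative coordinate values:

int p1[ ] = {50, 100};
int p2[ ] = {75, 150};
int p3[ ] = {100, 200};

glBegin (GL_POINTS);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
glEnd ( );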
Case 3: In addition, here is an example of specifying two point positions in a three-dimensional world
reference frame. In this case, we give the coordinates as explicit floating-point values:
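(The particular coordinate values below are illustrative.)

glBegin (GL_POINTS);
   glVertex3f (-78.05, 909.72, 14.60);
   glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );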
Case 3: The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. Lines are
drawn as with GL_LINE_STRIP, but an additional line is drawn to connect the last coordinate position and the
first coordinate position. Figure 4(c) shows the display of our endpoint list when we select this line option,
using the code below.
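A sketch, assuming p1 through p5 are vertex arrays like those in the Case 2 example:

glBegin (GL_LINE_LOOP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );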
Method 5: Painting and drawing programs allow pictures to be constructed interactively by using a pointing
device, such as a stylus and a graphics tablet, to sketch various curve shapes.
1.28 LINE DRAWING ALGORITHM
• A straight-line segment in a scene is defined by coordinate positions for the endpoints of the segment.
• To display the line on a raster monitor, the graphics system must first project the endpoints to integer
screen coordinates and determine the nearest pixel positions along the line path between the two
endpoints; the line color is then loaded into the frame buffer at the corresponding pixel coordinates.
• The Cartesian slope-intercept equation for a straight line is y = m · x + b (1), with m as the slope of the
line and b as the y intercept.
• Given that the two endpoints of a line segment are specified at positions (x0, y0) and (xend, yend), as
shown in the figure, we can determine the slope and intercept as m = (yend − y0)/(xend − x0) (2) and
b = y0 − m · x0 (3).
• Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in
eqs. (2) and (3).
• For a given x interval δx along a line, we can compute the corresponding y interval δy from eq. (2) as
δy = m · δx.
• These equations form the basis for determining deflection voltages in analog displays, such as vector-
scan system, where arbitrarily small changes in deflection voltage are possible.
• For lines with slope magnitude |m| ≤ 1, we sample at unit x intervals (δx = 1) and compute successive
y values as y k+1 = y k + m, where k takes integer values starting from 0 for the first point and increases
by 1 until the final endpoint is reached. Since m can be any real number between 0.0 and 1.0, each
calculated y value must be rounded to the nearest integer pixel position.
Case 2: If m > 1, we sample at unit y intervals (δy = 1) and compute successive x values:
i.e., y k+1 = y k + 1; then, from m = (y k+1 − y k)/(x k+1 − x k),
1 = m (x k+1 − x k), so x k+1 = x k + 1/m.
• At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path as
dlower and dupper (Figure 7). The y coordinate on the mathematical line at pixel column position xk + 1
is calculated as y = m (xk + 1) + b.
• We summarize Bresenham line drawing for a line with a positive slope less than 1 in the following
outline of the algorithm. The constants 2∆y and 2∆y − 2∆x are calculated once for each line to be scan-
converted, so the arithmetic involves only integer addition and subtraction of these two constants. Step
4 of the algorithm will be performed a total of ∆x times.
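A C sketch of this outline for slopes 0 < m < 1 (setPixel is an assumed helper that loads one pixel position into the frame buffer):

#include <stdlib.h>   /* abs */

void setPixel(int x, int y);             /* assumed frame-buffer helper */

void lineBres(int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs(xEnd - x0), dy = abs(yEnd - y0);
    int p = 2 * dy - dx;                 /* initial decision parameter */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    if (x0 > xEnd) {                     /* start at the left endpoint */
        x = xEnd; y = yEnd; xEnd = x0;
    } else {
        x = x0; y = y0;
    }
    setPixel(x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                  /* stay on the same row */
        else {
            y++;                         /* step up to the next row */
            p += twoDyMinusDx;
        }
        setPixel(x, y);
    }
}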
A plot of the pixels generated along this line path is shown in Figure 8.
• For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in
Cartesian coordinates as (x − xc)^2 + (y − yc)^2 = r^2.
• We could use this equation to calculate the position of points on a circle circumference by stepping
along the x axis in unit steps from xc − r to xc + r and calculating the corresponding y values at each
position as y = yc ± sqrt(r^2 − (xc − x)^2).
• One problem with this approach is that it involves considerable computation at each step. Moreover,
the spacing between plotted pixel positions is not uniform, as demonstrated in Figure 12.
• We could adjust the spacing by interchanging x and y (stepping through y values and calculating x
values) whenever the absolute value of the slope of the circle is greater than 1; but this simply increases
the computation and processing required by the algorithm.
• Another way to eliminate the unequal spacing shown in Figure 12 is to calculate points along the
circular boundary using polar coordinates r and θ (Figure 11).
• At the start position (0, r), these two terms have the values 0 and 2r, respectively. Each successive
value for the 2x k+1 term is obtained by adding 2 to the previous value, and each successive value for
the 2y k+1 term is obtained by subtracting 2 from the previous value.
• The initial decision parameter is obtained by evaluating the circle function at the start position
(x0, y0) = (0, r): p0 = 5/4 − r, which for an integer radius r is rounded to p0 = 1 − r.
A plot of the generated pixel positions in the first quadrant is shown in Figure 15.
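A C sketch of the midpoint circle algorithm as outlined above (circlePlotPoints is an assumed helper that plots a computed point and its seven symmetric counterparts about the circle center):

void circlePlotPoints(int xc, int yc, int x, int y);   /* assumed helper */

void circleMidpoint(int xc, int yc, int radius)
{
    int x = 0;
    int y = radius;
    int p = 1 - radius;              /* initial decision parameter */

    circlePlotPoints(xc, yc, x, y);  /* plot the first point in each octant */

    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;          /* midpoint inside: keep the same y */
        else {
            y--;                     /* midpoint outside: step y inward */
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints(xc, yc, x, y);
    }
}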