
UNIT 1: INTRODUCTION

Computer Graphics
Computer graphics is the art of drawing pictures, lines, charts, and so on using computers with the help of
programming. A computer graphics image is made up of a number of pixels. A pixel is the smallest addressable
graphical unit represented on the computer screen.
1.1 APPLICATIONS OF COMPUTER GRAPHICS
The development of computer graphics has been driven both by the needs of the user community and by
advances in hardware and software. The applications of computer graphics are many and varied; we can,
however, divide them into four major areas:
1. Display of information
2. Design
3. Simulation and animation
4. User interfaces
1. Display of information
a) Graphs and Charts

• An early application of computer graphics was the display of simple data graphs, usually plotted on a
character printer. Data plotting is still one of the most common graphics applications.
• Graphs & charts are commonly used to summarize functional, statistical, mathematical, engineering
and economic data for research reports, managerial summaries and other types of publications.
• Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots
and other displays showing relationships between multiple parameters in two dimensions, three
dimensions, or higher-dimensional spaces.
2. Design
a) Computer-Aided Design
• A major use of computer graphics is in design processes, particularly for engineering and architectural
systems.
• CAD (computer-aided design) or CADD (computer-aided drafting and design) methods are now
routinely used in the design of automobiles, aircraft, spacecraft, computers, and home appliances.
• Circuits and networks for communications, water supply, or other utilities are constructed with repeated
placement of a few graphical shapes.



• Animations are often used in CAD applications. Real-time, computer animations using wire-frame
shapes are useful for quickly testing the performance of a vehicle or system.
3. Simulation and animation
a) Virtual-Reality Environments

• Animations in virtual-reality environments are often used to train heavy-equipment operators or to
analyze the effectiveness of various cabin configurations and control placements.
• With virtual-reality systems, designers and others can move about and interact with objects in various
ways. Architectural designs can be examined by taking a simulated “walk” through the rooms or around
the outsides of buildings to better appreciate the overall effect of a particular design.
• With a special glove, we can even “grasp” objects in a scene and turn them over or move them from
one place to another.
b) Data Visualizations
• Producing graphical representations for scientific, engineering and medical data sets and processes is
another fairly new application of computer graphics, which is generally referred to as scientific
visualization. And the term business visualization is used in connection with data sets related to
commerce, industry and other nonscientific areas.



• There are many different kinds of data sets and effective visualization schemes depend on the
characteristics of the data. A collection of data can contain scalar values, vectors or higher-order
tensors.

c) Education and Training


• Computer generated models of physical, financial, political, social, economic & other systems are
often used as educational aids.
• Models of physical processes, physiological functions, and equipment, such as the color-coded diagram
shown in the figure, can help trainees understand the operation of a system.
• For some training applications, special hardware systems are designed. Examples of such specialized
systems are the simulators for practice sessions of aircraft pilots and air-traffic-control personnel.
• Some simulators have no video screens; for example, a flight simulator may have only a control panel for
instrument flying.

d) Computer Art



• The picture is usually painted electronically on a graphics tablet using a stylus, which can simulate
different brush strokes, brush widths and colors.
• Fine artists use a variety of other computer technologies to produce images. To create pictures the
artist uses a combination of 3D modeling packages, texture mapping, drawing programs, CAD
software, and so on. Commercial art also uses these “painting” techniques for generating logos and other
designs, page layouts combining text and graphics, TV advertising spots, and other applications.
• A common graphics method employed in many television commercials is morphing, where one object
is transformed into another.
e) Entertainment
• Television production, motion pictures, and music videos routinely use computer graphics methods.
• Sometimes graphics images are combined with live actors and scenes, and sometimes the films are
completely generated using computer rendering and animation techniques.

• Some television programs also use animation techniques to combine computer generated figures of
people, animals, or cartoon characters with the actor in a scene or to transform an actor’s face into
another shape.
f) Image Processing

• The modification or interpretation of existing pictures, such as photographs and TV scans is called
image processing.
• Although methods used in computer graphics and image processing overlap, the two areas are concerned
with fundamentally different operations.
• Image processing methods are used to improve picture quality, analyze images, or recognize visual
patterns for robotics applications.



• Image processing methods are often used in computer graphics, and computer graphics methods are
frequently applied in image processing.
• Medical applications also make extensive use of image-processing techniques for picture
enhancement in tomography and in simulations of surgical operations.
• It is also used in computed X-ray tomography (CT), positron emission tomography (PET), and computed
axial tomography (CAT).
4. User interfaces
a) Graphical User Interfaces
• It is now common for applications software to provide a graphical user interface (GUI).
• A major component of a graphical interface is a window manager that allows a user to display multiple
rectangular screen areas called display windows.

• Each screen display area can contain a different process, showing graphical or non-graphical
information, and various methods can be used to activate a display window.
• Using an interactive pointing device, such as a mouse, we can activate a display window on some systems
by positioning the screen cursor within the window display area and pressing the left mouse button.
1.2 GRAPHICS SYSTEM
A computer graphics system is a computer system; as such, it must have all the components of a general-
purpose computer system. Let us start with the high-level view of a graphics system, as shown in the block
diagram in Figure 1.1. There are six major elements in our system:
1. Input devices
2. Central Processing Unit
3. Graphics Processing Unit
4. Memory
5. Frame buffer
6. Output devices
a) Pixels and Frame buffer
• Virtually all modern graphics systems are raster based. The image we see on the output device is an
array—the raster—of picture elements, or pixels, produced by the graphics system.
• From Figure 1.2, each pixel corresponds to a location, or small area, in the image. Collectively, the
pixels are stored in a part of memory called the frame buffer.



• Its resolution (the number of pixels in the frame buffer) determines the detail that you can see in the
image. The depth, or precision, of the frame buffer is defined as the number of bits that are used for each
pixel.
For example, a 1-bit-deep frame buffer allows only two colors, whereas an 8-bit-deep frame buffer
allows 2^8 (256) colors.
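In code, the relationship is simply a power of two (a one-line illustration; the variable depth here is assumed to hold the number of bits per pixel):

unsigned long num_colors = 1UL << depth; /* depth = 8 gives 256, depth = 24 gives 16,777,216 */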

• In full-color systems, there are 24 (or more) bits per pixel. Such systems can display sufficient colors
to represent most images realistically. They are also called true-color systems, or RGB-color systems,
because individual groups of bits in each pixel are assigned to each of the three primary colors—red,
green, and blue. High dynamic range (HDR) systems use 12 or more bits for each color component.
b) The CPU and the GPU
• In a simple system, there may be only one processor, the central processing unit (CPU) of the system,
which must do both the normal processing and the graphical processing.
• The main graphical function of the processor is to take specifications of graphical primitives (such as
lines, circles, and polygons) generated by application programs and to assign values to the pixels in
the frame buffer that best represent these entities.
• The conversion of geometric entities to pixel colors and locations in the frame buffer is known as
rasterization, or scan conversion. Today, virtually all graphics systems are characterized by special-
purpose graphics processing units (GPUs), custom-tailored to carry out specific graphics functions.



• The GPU can be either on the motherboard of the system or on a graphics card. The frame buffer is
accessed through the graphics processing unit and usually is on the same circuit board as the GPU.
• GPUs are so powerful that they can often be used as mini supercomputers for general purpose
computing.
c) Output Devices
• The dominant type of display (or monitor) was the cathode-ray tube (CRT). A simplified picture of
a CRT is shown in Figure 1.3.
• When electrons strike the phosphor coating on the tube, light is emitted. The direction of the beam is
controlled by two pairs of deflection plates.
• The output of the computer is converted, by digital-to-analog converters, to voltages across the x and
y deflection plates.
• Light appears on the surface of the CRT when a sufficiently intense beam of electrons is directed at
the phosphor.
• If the voltages steering the beam change at a constant rate, the beam will trace a straight line, visible
to a viewer. Such a device is known as the random-scan, calligraphic, or vector CRT, because the
beam can be moved directly from any position to any other position.
• For a human to see a steady, flicker-free image on most CRT displays, the same path must be retraced,
or refreshed, by the beam at a sufficiently high rate, the refresh rate.

• In a noninterlaced system, the pixels are displayed row by row, or scan line by scan line, at the refresh
rate. In an interlaced display, odd rows and even rows are refreshed alternately.
• Color CRTs have three different colored phosphors (red, green, and blue), arranged in small groups.
One common style arranges the phosphors in triangular groups called triads, each triad consisting of
three phosphors, one of each primary.
• In the shadow-mask CRT (Figure 1.4), a metal screen with small holes—the shadow mask—ensures
that an electron beam excites only phosphors of the proper color.
• Flat-panel monitors are inherently raster based. Although there are multiple technologies available,
including light-emitting diodes (LEDs), liquid-crystal displays (LCDs), and plasma panels, all use a
two-dimensional grid to address individual light-emitting elements.



• Figure 1.5 shows a generic flat-panel monitor. The two outside plates each contain parallel grids of
wires that are oriented perpendicular to each other. By sending electrical signals to the proper wire in
each grid, the electrical field at a location, determined by the intersection of two wires, can be made
strong enough to control the corresponding element in the middle plate.
• The middle plate in an LED panel contains light-emitting diodes that can be turned on and off by the
electrical signals sent to the grid. In an LCD display, the electrical field controls the polarization of the
liquid crystals in the middle panel, thus turning on and off the light passing through the panel.
• A plasma panel uses the voltages on the grids to energize gases embedded between the glass panels
holding the grids. The energized gas becomes a glowing plasma.
• Most projection systems are also raster devices. Hard-copy devices, such as printers and plotters, are
also raster based but cannot be refreshed.
d) Input Devices
• Most graphics systems provide a keyboard and at least one other input device. The most common input
devices are the mouse, the joystick, and the data tablet.
• Each provides positional information to the system, and each usually is equipped with one or more
buttons to provide signals to the processor. Often called pointing devices, these devices allow a user
to indicate a particular location on the display.
• We can think about input devices in two distinct ways. The obvious one is to look at them as physical devices,
such as a keyboard or a mouse, and to discuss how they work.
• We prefer to treat input devices as logical devices whose properties are specified in terms of what they
do from the perspective of the application program. A logical device is characterized by its high-level
interface with the user program rather than by its physical characteristics.
• In computer graphics, the use of logical devices is slightly more complex because the forms that input
can take are more varied than the strings of bits or characters to which we are usually restricted in
nongraphical applications.



For example, we can use the mouse—a physical device—either to select a location on the screen of
our CRT or to indicate which item in a menu we wish to select.
• In the first case, an x, y pair (in some coordinate system) is returned to the user program; in the second,
the application program may receive an integer as the identifier of an entry in the menu.
e) Physical Input Devices
• The view used in most of the workstation literature is that there are two primary types of physical devices:
pointing devices and keyboard devices.
• The pointing device allows the user to indicate a position on the screen and almost always incorporates
one or more buttons to allow the user to send signals or interrupts to the computer.
• The keyboard device is almost always a physical keyboard but can be generalized to include any device
that returns character codes. Ex: American Standard Code for Information Interchange (ASCII).
• The mouse (Figure 1.6) and trackball (Figure 1.7) are similar in use and often in construction as well.
A typical mechanical mouse when turned over looks like a trackball. In both devices, the motion of
the ball is converted to signals sent back to the computer by pairs of encoders inside the device that
are turned by the motion of the ball. The encoders measure motion in two orthogonal directions.

• It is not necessary that the output of the mouse or trackball encoders be interpreted as a position.
Instead, either the device driver or a user program can interpret the information from the encoder as
two independent velocities to obtain a two-dimensional position.
• Thus, as a mouse moves across a surface, the integrals of the velocities yield x, y values that can be
converted to indicate the position for a cursor on the screen, as shown in Figure 1.8. By interpreting the
distance traveled by the ball as a velocity, we can use the device as a variable-sensitivity input device.
• If the ball does not rotate, then there is no change in the integrals and a cursor tracking the position of
the mouse will not move. In this mode, these devices are relative-positioning devices. Relative positioning
is not always desirable: if, while attempting to follow a curve on the screen with a mouse, the user lifts
and moves the mouse, the absolute position on the curve being traced is lost.



• Data tablets provide absolute positioning. A typical data tablet (Figure 1.9) has rows and columns of
wires embedded under its surface. The position of the stylus is determined through electromagnetic
interactions between signals traveling through the wires and sensors in the stylus.

• As shown in Figure 1.10, the motion of the joystick in two orthogonal directions is encoded, interpreted as two
velocities, and integrated to identify a screen location. The integration implies that if the stick is left in
its resting position, there is no change in the cursor position and that the farther the stick is moved
from its resting position, the faster the screen location changes. Thus, the joystick is a variable-
sensitivity device.
• The other advantage of the joystick is that the device can be constructed with mechanical elements,
such as springs and dampers.
• A spaceball looks like a joystick with a ball on the end of the stick (Figure 1.11); however, the stick
does not move. Rather, pressure sensors in the ball measure the forces applied by the user.
• The spaceball can measure not only the three direct forces (up–down, front–back, left–right) but also
three independent twists. The device measures six independent values and thus has six degrees of
freedom. Such an input device could be used, for example, both to position and to orient a camera.



f) Logical Devices
• We now consider input from inside the application program, that is, from the logical point of view. Two major
characteristics describe the logical behavior of an input device: (1) the measurements that the device
returns to the user program and (2) the time when the device returns those measurements.
• The logical string device is the same as the use of character input through scanf or cin.
• The physical pointing device can be used in a variety of logical ways. As a locator it can provide a
position to the application in either a device-independent coordinate system, such as world coordinates,
as in OpenGL, or in screen coordinates.
• A logical pick device returns the identifier of an object on the display to the application program.
• A widget is a graphical interactive device, provided by either the window system or a toolkit. Typical
widgets include menus, scrollbars, and graphical buttons.
• Widgets can be used to provide additional types of logical devices. For example, a menu provides one
of a number of choices as may a row of graphical buttons.
• A logical valuator provides analog input to the user program, usually through a widget such as a
slidebar, although the same logical input could be provided by a user typing numbers into a physical
keyboard.
g) Input Modes
• The manner by which physical and logical input devices provide input to an application program can
be described in terms of two entities: a measure process and a device trigger. The measure of a device
is what the device returns to the user program. The trigger of a device is a physical input on the device
with which the user can signal the computer.
• For example, the measure of a keyboard contains a string, and the trigger can be the Return or Enter
key. For a locator, the measure includes the position, and the associated trigger can be a button on the
pointing device.
• In request mode, the measure of the device is not returned to the program until the device is triggered.
This input mode is standard in nongraphical applications. For example, if a typical C program requires
character input, we use a function such as scanf.
• Sample-mode input is immediate. As soon as the function call in the application program is
encountered, the measure is returned. In sample mode, the user must have positioned the pointing
device or entered data using the keyboard before the function call, because the measure is extracted
immediately from the buffer.
• Our third mode, event mode, can handle these other interactions. Suppose that we are in an
environment with multiple input devices, each with its own trigger and each running a measure
process. Each time that a device is triggered, an event is generated.

• The device measure, including the identifier for the device, is placed in an event queue. This process
of placing events in the event queue is completely independent of what the application program does
with these events.
• One way that the application program can work with events is shown in Figure 1.12. The user program
can examine the front event in the queue or, if the queue is empty, can wait for an event to occur. If
there is an event in the queue, the program can look at the event’s type and decide how to process it.



• Another approach is to associate a function called a callback with a specific type of event.
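For instance, with a GLUT-style toolkit (a sketch only; the handler name and its empty body are placeholders), a mouse callback is registered once and then invoked by the toolkit each time a mouse event is triggered:

void mouse_callback(int button, int state, int x, int y)
{
    /* the event's measure: which button changed, whether it was pressed
       or released, and the cursor position in window coordinates */
}

glutMouseFunc(mouse_callback); /* associate the callback with mouse events */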

1.3 IMAGES: PHYSICAL AND SYNTHETIC


• Computer graphics started with how to construct raster images of simple two-dimensional geometric
entities (for example, points, line segments, and polygons) in the frame buffer.
• Next, we consider how to define two- and three-dimensional mathematical objects in the computer and image them
with the set of two-dimensional rasterized primitives.
a) Objects and Viewers
• We live in a world of three-dimensional objects. The development of many branches of mathematics,
including geometry and trigonometry, was in response to the desire to systematize conceptually simple
ideas, such as the measurement of size of objects and distance between objects.
• Two basic entities must be part of any image-formation process, be it mathematical or physical: object
and viewer. The object exists in space independent of any image-formation process and of any viewer.

• In most graphics systems, a set of locations in space, or of vertices, is sufficient to define, or
approximate, most objects.
For example, a line can be specified by two vertices; a polygon can be specified by an ordered list of
vertices; and a sphere can be specified by two vertices that specify its center and any point on its
circumference.
• One of the main functions of a CAD system is to provide an interface that makes it easy for a user to
build a synthetic model of the world.
• It is the viewer that forms the image of our objects. In the human visual system, the image is formed
on the back of the eye. In a camera, the image is formed in the film plane.
• Figure 1.13(a) shows two viewers observing the same building. This image is what is seen by an
observer A who is far enough away from the building to see both the building and the two other
viewers, B and C. From A’s perspective, B and C appear as objects, just as the building does.
• Figures 1.13(b) and (c) show the images seen by B and C, respectively. All three images contain the
same building, but the image of the building is different in all three.



• Figure 1.14 shows a camera system viewing a building. Here we can observe that both the object and
the viewer exist in a three-dimensional world. However, the image that they define—what we find on
the projection plane—is two-dimensional.
b) Light and Images
• The preceding description of image formation is far from complete. For example, we have yet to
mention light. If there were no light sources, the objects would be dark, and there would be nothing
visible in our image.
• In Figure 1.15, which shows a simple physical imaging system. Again, we see a physical object and a
viewer (the camera); now, however, there is a light source in the scene.
• Light from the source strikes various surfaces of the object, and a portion of the reflected light enters
the camera through the lens.

• Light is a form of electromagnetic radiation. Taking the classical view, we consider electromagnetic
energy to travel as waves that can be characterized by either their wavelengths or their frequencies.
• The electromagnetic spectrum (Figure 1.16) includes radio waves, infrared (heat), and a portion that
causes a response in our visual systems. This visible spectrum, which has wavelengths in the range
of 350 to 780 nanometers (nm), is called (visible) light.



• Wavelengths in the middle of the range, around 520 nm, are seen as green; those near 450 nm are seen
as blue; and those near 650 nm are seen as red. Just as with a rainbow, we see light at wavelengths between
red and green as yellow, and wavelengths shorter than blue generate violet light.
• A laser, for example, emits light at a single frequency, whereas an incandescent lamp emits energy
over a range of frequencies. Fortunately, in computer graphics, except for recognizing that distinct
frequencies are visible as distinct colors, we rarely need to deal with the physical properties of light.
• Geometric optics models light sources as emitters of light energy, each of which has a fixed intensity.
Modeled geometrically, light travels in straight lines, from the sources to those objects with which it
interacts.
• An ideal point source emits energy from a single location at one or more frequencies equally in all
directions. More complex sources, such as a light bulb, can be characterized as emitting light over an
area and by emitting more light in one direction than another.
c) Imaging models
• We can start building an imaging model by following light from a source. Consider the scene in Figure
1.17; it is illuminated by a single point source. We include the viewer in the figure because we are
interested in the light that reaches her eye. The viewer can also be a camera, as shown in Figure 1.18.

• A ray is a semi-infinite line that emanates from a point and travels to infinity in a particular direction.
• A portion of these infinite rays contributes to the image on the film plane of our camera. For example,
if the source is visible from the camera, some of the rays go directly from the source through the lens
of the camera and strike the film plane.



• Most rays, however, go off to infinity, neither entering the camera directly nor striking any of the
objects. The remaining rays strike and illuminate objects. These rays can interact with the objects’
surfaces in a variety of ways.
• For example, if the surface is a mirror, a reflected ray might—depending on the orientation of the
surface—enter the lens of the camera and contribute to the image.
• Ray tracing and photon mapping are image-formation techniques that are based on these ideas and
that can form the basis for producing computer-generated images.
• Other physical approaches to image formation are based on conservation of energy. The most
important in computer graphics is radiosity. This method works best for surfaces that scatter the
incoming light equally in all directions.
1.4 IMAGING SYSTEMS
a) The Pinhole Camera
• The pinhole camera in Figure 1.19 provides an example of image formation that we can understand
with a simple geometric model. A pinhole camera is a box with a small hole in the center of one side
of the box; the film is placed inside the box on the side opposite the pinhole.

• Suppose that we orient our camera along the z-axis, with the pinhole at the origin of our coordinate
system. The film plane is located a distance d from the pinhole. A side view (Figure 1.20) allows us to
calculate where the image of the point (x, y, z) is on the film plane z = −d.

• Using the fact that the two triangles in Figure 1.20 are similar, we find that the y coordinate of the
image is at yp, where

yp = −y/(z/d) = −yd/z



A similar calculation, using a top view, yields

xp = −x/(z/d) = −xd/z

• The point (xp, yp, −d) is called the projection of the point (x, y, z). The field, or angle, of view of our
camera is the angle made by the largest object that our camera can image on its film plane.
• We can calculate the field of view with the aid of Figure 1.21. If h is the height of the camera, the angle
of view θ is

θ = 2 tan⁻¹(h/(2d))

• The ideal pinhole camera has an infinite depth of field: Every point within its field of view is in focus.
• Disadvantages:
o First, because the pinhole is so small (it admits only a single ray from a point source), almost
no light enters the camera.
o Second, the camera cannot be adjusted to have a different angle of view.
• Replacing the pinhole with a lens overcomes both problems. First, the lens gathers more light than can
pass through the pinhole. The larger the aperture of the lens, the more light the lens can collect. Second,
by picking a lens with the proper focal length (a selection equivalent to choosing d for the pinhole
camera) we can achieve any desired angle of view (up to 180 degrees).
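A minimal C sketch of the projection equations above (the function name is illustrative; it assumes the point is in front of the pinhole, so z is nonzero):

/* Project the point (x, y, z) through a pinhole at the origin onto the
   film plane z = -d, using xp = -x*d/z and yp = -y*d/z. */
void pinhole_project(float x, float y, float z, float d,
                     float *xp, float *yp)
{
    *xp = -x * d / z;
    *yp = -y * d / z;
}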
b) The Human Visual System
• The complex visual system has all the components of a physical imaging system, such as a camera or
a microscope. The major components of the visual system are shown in Figure 1.22.

• Light enters the eye through the lens and cornea, a transparent structure that protects the eye. The iris
opens and closes to adjust the amount of light entering the eye. The lens forms an image on a two-
dimensional structure called the retina at the back of the eye.



• The rods and cones are light sensors and are located on the retina. They are excited by electromagnetic
energy in the range of 350 to 780 nm. The rods are low-level-light sensors that account for our night
vision and are not color sensitive; the cones are responsible for our color vision.
• The sizes of the rods and cones, coupled with the optical properties of the lens and cornea, determine
the resolution of our visual systems, or our visual acuity. Resolution is a measure of what size objects
we can see.
• There are three types of cones and a single type of rod. Whereas intensity is a physical measure of light
energy, brightness is a measure of how intense we perceive the light emitted from an object to be.
• The human visual system does not have the same response to a monochromatic (single-frequency) red
light as to a monochromatic green light. We are most sensitive to green light, and least sensitive to red
and blue.
• Most image-production systems, including film and video, work with just three basic, or primary,
colors.
• The human visual system has a back end much more complex than that of a camera or telescope. The
optic nerves are connected to the rods and cones in an extremely complex arrangement that has many
of the characteristics of a sophisticated signal processor.
• The final processing is done in a part of the brain called the visual cortex, where high-level functions,
such as object recognition, are done.
1.5 THE SYNTHETIC CAMERA MODEL
• The conceptual foundation for modern three-dimensional computer graphics is to look at creating a
computer-generated image as being similar to forming an image using an optical system. This paradigm
has become known as the synthetic-camera model.
• In Figure 1.23 we see objects and a viewer. The viewer is a bellows camera; the image is formed on
the film plane at the back of the camera. We can emulate this process to create artificial images.

• First, the specification of the objects is independent of the specification of the viewer. Hence, we
should expect that, within a graphics library, there will be separate functions for specifying the objects
and the viewer.
• Second, we can compute the image using simple geometric calculations, just as we did with the pinhole
camera. Consider the side view of the camera and a simple object in Figure 1.24.
• The view in part (a) of the figure is similar to that of the pinhole camera. Note that the image of the
object is flipped relative to the object.



• With our synthetic camera we can avoid the flipping by a simple trick: we draw another plane in front
of the lens (Figure 1.24(b)) and work in three dimensions, as shown in Figure 1.25.

• We find the image of a point on the object on the virtual image plane by drawing a line, called a
projector, from the point to the center of the lens, or the center of projection (COP).
• In our synthetic camera, the virtual image plane that we have moved in front of the lens is called the
projection plane.
• In a real camera, the angle of view is limited by the size of the film plane. In the synthetic camera, we can
move this limitation to the front by placing a clipping rectangle, or clipping window, in the projection
plane (Figure 1.26). This rectangle acts as a window, through which a viewer, located at the center of
projection, sees the world.

• Given the location of the center of projection, the location and orientation of the projection plane, and
the size of the clipping rectangle, we can determine which objects will appear in the image.



1.6 THE PROGRAMMER’S INTERFACE
• A user develops images through interactions with the display using input devices, such as a mouse and
a keyboard. In a typical application, such as the painting program in Figure 1.27, the user sees menus
and icons that represent possible actions.

• By clicking on these items, the user guides the software and produces images without having to write
programs.
• The interface between an application program and a graphics system can be specified through a set of
functions that resides in a graphics library. These specifications are called the application programming
interface (API). The application programmer’s model of the system is shown in Figure 1.28.

• The software drivers are responsible for interpreting the output of the API and converting these data
to a form that is understood by the particular hardware.
a) The Pen-Plotter Model
• The earliest graphics systems were two-dimensional. Their conceptual model is now referred
to as the pen-plotter model, after the output device that was available on these systems.
• A pen plotter (Figure 1.29) produces images by moving a pen held by a gantry, a structure that can
move the pen in two orthogonal directions across the paper.

• The plotter can raise and lower the pen as required to create the desired image. Pen plotters are still in
use; they are well suited for drawing large diagrams, such as blueprints. Various APIs—such as LOGO
and PostScript—have their origins in this model.



• We can describe such a graphics system with two drawing functions:
moveto(x,y);
lineto(x,y);
• Execution of the moveto function moves the pen to the location (x, y) on the paper without leaving a
mark. The lineto function moves the pen to (x, y) and draws a line from the old to the new location of
the pen.
• If we add the ability to change pens, to alter the drawing color or line thickness, we have a simple but
complete graphics system. Here is a fragment of a simple program in such a system:
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
• This fragment would generate the output in Figure 1.30(a). If we added the code
moveto(0, 1);
lineto(0.5, 1.866);
lineto(1.5, 1.866);
lineto(1.5, 0.866);
lineto(1, 0);
moveto(1, 1);
lineto(1.5, 1.866);
We would have the image of a cube formed by an oblique projection, as is shown in Figure 1.30(b).

• An alternate raster-based, two-dimensional model relies on writing pixels directly into a frame buffer.
Such a system could be based on a single function of the form
write_pixel(x, y, color);
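A minimal sketch of what such a function might do (the frame-buffer dimensions and representation below are illustrative, not part of any particular API):

#define WIDTH  640
#define HEIGHT 480

unsigned int frame_buffer[HEIGHT][WIDTH]; /* one packed color value per pixel */

void write_pixel(int x, int y, unsigned int color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame_buffer[y][x] = color; /* set the color of the addressed pixel */
}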
b) The Three-Dimensional APIs
• The synthetic-camera model is the basis for a number of popular APIs, including OpenGL and
Direct3D. If we are to follow the synthetic-camera model, we need functions in the API to specify the
following:
✓ Objects



✓ A viewer
✓ Light sources
✓ Material properties
• Objects are usually defined by sets of vertices. For simple geometric objects— such as line segments,
rectangles, and polygons—there is a simple relationship between a list of vertices, or positions in space,
and the object.
• A circle, for example, can be defined by three points on its circumference, or by its center and one
point on the circumference.
• The usual sets of primitives include points, line segments, polygons, and sometimes text. OpenGL
programs define primitives through lists of vertices. The following code fragment specifies three
vertices:
float vertices[3][3];

vertices[0][0] = 0.0; /* vertex A */
vertices[1][0] = 0.0;
vertices[2][0] = 0.0;
vertices[0][1] = 0.0; /* vertex B */
vertices[1][1] = 1.0;
vertices[2][1] = 0.0;
vertices[0][2] = 0.0; /* vertex C */
vertices[1][2] = 0.0;
vertices[2][2] = 1.0;
• Note that these three vertices only give three locations in a three-dimensional space and do not specify
the geometric entity that they define. The locations could describe a triangle, as in Figure 1.31, or we
could use them to specify two line segments using the first two locations to specify the first segment
and the second and third locations to specify the second segment.

• We could also use the three points to display three pixels at locations in the frame buffer corresponding
to the three vertices. For example, in OpenGL we would use GL_TRIANGLES, GL_LINE_STRIP, or
GL_POINTS for the three possibilities we just described.
• We can define a viewer or camera in a variety of ways. Available APIs differ both in how much
flexibility they provide in camera selection and in how many different methods they allow.
• If we look at the camera in Figure 1.32, we can identify four types of necessary specifications:



▪ Position The camera location usually is given by the position of the center of the lens, which is the
center of projection (COP).
▪ Orientation Once we have positioned the camera, we can place a camera coordinate system with
its origin at the center of projection. We can then rotate the camera independently around the three
axes of this system.
▪ Focal length The focal length of the lens determines the size of the image on the film plane or,
equivalently, the portion of the world the camera sees.
▪ Film plane The back of the camera has a height and a width. On the bellows camera, and in some
APIs, the orientation of the back of the camera can be adjusted independently of the orientation of
the lens.
• Classical viewing techniques, such as are used in architecture, stress the relationship between the
object and the viewer, rather than the independence that the synthetic-camera model emphasizes.
• Thus, the classical two-point perspective of a cube in Figure 1.33 is a two-point perspective because
of a particular relationship between the viewer and the planes of the cube.
• For example, consider the two function calls:
LookAt(cop, at, up);
Perspective(field_of_view, aspect_ratio, near, far);
• The first function call points the camera from the center of projection toward a desired point (the at
point), with a specified up direction for the camera. The second selects a lens for a perspective view
(the field of view) and how much of the world the camera should image (the aspect ratio and the
near and far distances).
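For example (a sketch only: the point3/vec3 types and the numeric values are illustrative, not prescribed by the API), a camera five units in front of the origin, looking back at it, with a 45-degree field of view might be specified as:

LookAt(point3(0.0, 0.0, 5.0),   /* center of projection */
       point3(0.0, 0.0, 0.0),   /* at point */
       vec3(0.0, 1.0, 0.0));    /* up direction */

Perspective(45.0,               /* field of view, in degrees */
            1.0,                /* aspect ratio (width/height) */
            0.1, 100.0);        /* near and far distances */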
c) A Sequence of Images
• Color Plate 1 shows an image of an artist’s creation of a sunlike object. Color Plate 2 shows the object
rendered using only line segments. Although the object consists of many parts, and although the
programmer may have used sophisticated data structures to model each part and the relationships
among the parts, the rendered object shows only the outlines of the parts.
• This type of image is known as a wireframe image because we can see only the edges of surfaces:
Such an image would be produced if the objects were constructed with stiff wires that formed a frame
with no solid material between the edges.
• Before raster-graphics systems became available, wireframe images were the only type of computer-
generated images that we could produce.
• In Color Plate 3, the same object has been rendered with flat polygons. Certain surfaces are not visible,
because there is a solid surface between them and the viewer; these surfaces have been removed by a
hidden-surface-removal (HSR) algorithm.



• Although the objects are three-dimensional, each surface is displayed in a single color, and the image
fails to show the three-dimensional shapes of the objects.
• Color Plate 4 illustrates smooth shading of the polygons that approximate the object; it shows that the
object is three-dimensional and gives the appearance of a smooth surface.
• Color Plate 5 shows a more sophisticated wireframe model constructed using NURBS surfaces. In
Color Plates 6 and 7, we add surface texture to our object; texture is one of many such effects.
• In Color Plate 6, we use a technique called bump mapping that gives the appearance of a rough surface
even though we render the same flat polygons as in the other examples. Color Plate 7 shows an
environment map applied to the surface of the object, which gives the surface the appearance of a
mirror.
• Color Plate 8 shows a small area of the rendering of the object using an environment map. The image
on the left shows the jagged artifacts known as aliasing errors that are due to the discrete nature of the
frame buffer. The image on the right has been rendered using a smoothing or antialiasing method.
d) The Modelling-Rendering Paradigm
• In many situations, especially in CAD applications and in the development of complex images,
such as for movies, we can separate the modeling of the scene from the production of the image, or
the rendering of the scene, as in Figure 1.34.

• This block diagram suggests that we might implement the modeler and the renderer with different
software and hardware. For example, consider the production of a single frame in an animation.
• We first want to design and position our objects. Consequently, we prefer to carry out this step on an
interactive workstation with good graphics hardware. Once we have designed the scene, we want to
render it, adding light sources, material properties, and a variety of other detailed effects, to form a
production-quality image.
• This step requires a tremendous amount of computation, so we might prefer to use a render farm, a
cluster of computers configured for numerical computing.
• The interface between the modeler and the renderer can be as simple as a file produced by the modeler that
describes the objects and that contains additional information important only to the renderer, such as
light sources, viewer location, and material properties.
• Models, including the geometric objects, lights, cameras, and material properties, are placed in a data
structure called a scene graph that is passed to a renderer or game engine.
1.7 GRAPHICS ARCHITECTURES
• On one side of the API is the application program. On the other is some combination of hardware and
software that implements the functionality of the API.
• A simple model of these early graphics systems is shown in Figure 1.35. The display in these systems
was based on a calligraphic CRT display that included the necessary circuitry to generate a line
segment connecting two points.



• The job of the host computer was to run the application program and to compute the endpoints of the
line segments in the image (in units of the display). This information had to be sent to the display at a
rate high enough to avoid flicker on the display.
a) Display Processor
✓ These display processors had conventional architectures (Figure 1.36) but included instructions to
display primitives on the CRT.
✓ The main advantage of the display processor was that the instructions to generate the image could be
assembled once in the host and sent to the display processor, where they were stored in the display
processor’s own memory as a display list, or display file.

✓ The display processor would then execute repetitively the program in the display list, at a rate sufficient
to avoid flicker, independently of the host, thus freeing the host for other tasks.
✓ This architecture has become closely associated with the client–server architectures.
b) Pipeline Architecture
✓ For computer-graphics applications, the most important use of custom VLSI circuits has been in
creating pipeline architectures.
✓ The concept of pipelining is illustrated in Figure 1.37 for a simple arithmetic calculation. In our
pipeline, there is an adder and a multiplier.

✓ If we use this configuration to compute a + (b ∗ c), the calculation takes one multiplication and one
addition—the same amount of work required if we use a single processor to carry out both operations.
However, suppose that we have to carry out the same computation with many values of a, b, and c.
✓ Now, the multiplier can pass on the results of its calculation to the adder and can start its next
multiplication while the adder carries out the second step of the calculation on the first set of data.



✓ Hence, whereas it takes the same amount of time to calculate the results for any one set of data, when
we are working on two sets of data at one time, our total time for calculation is shortened markedly.
Here the rate at which data flows through the system, the throughput of the system, has been doubled.
✓ Note that as we add more boxes to a pipeline, it takes more time for a single datum to pass through the
system. This time is called the latency of the system; we must balance it against increased throughput
in evaluating the performance of a pipeline.
✓ But that is just what we do in computer graphics, where large sets of vertices and pixels must be
processed in the same manner.
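As a rough worked example (assuming, purely for illustration, that each box takes one time unit): computing a + (b ∗ c) for n sets of data without pipelining takes 2n time units, one multiplication and one addition per set. With the two-stage pipeline, the first result emerges after 2 units and a new result emerges every unit thereafter, for a total of roughly n + 1 units. The latency for any single set of data is still 2 units, but the steady-state throughput approaches one result per time unit, about double.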
c) The Graphics Pipeline
✓ We start with a set of objects. Each object comprises a set of graphical primitives. Each primitive
comprises a set of vertices. We can think of the collection of primitive types and vertices as defining
the geometry of the scene.
✓ In a complex scene, there may be thousands—even millions—of vertices that define the objects.
✓ In terms of processing the geometry of our objects to obtain an image, we can employ the block
diagram in Figure 1.38, which shows the four major steps in the imaging process:
1) Vertex processing 2) Clipping and primitive assembly 3) Rasterization 4) Fragment
processing.

d) Vertex Processing
✓ In the first block of our pipeline, each vertex is processed independently. The two major functions of
this block are to carry out coordinate transformations and to compute a color for each vertex.
✓ For example, in our discussion of the synthetic camera, we observed that a major part of viewing is to
convert to a representation of objects from the system in which they were defined to a representation
in terms of the coordinate system of the camera.
✓ A further example of a transformation arises when we finally put our images onto the output device.
✓ We can represent each change of coordinate systems by a matrix. We can represent successive changes
in coordinate systems by multiplying, or concatenating, the individual matrices into a single matrix.
✓ Because multiplying one matrix by another matrix yields a third matrix, a sequence of transformations
is an obvious candidate for a pipeline architecture.
✓ Eventually, after multiple stages of transformation, the geometry is transformed by a projection
transformation.
✓ This step is carried out using 4 × 4 matrices, and thus projection also fits in the pipeline. In general, we want to keep
three-dimensional information as long as possible, as objects pass through the pipeline.
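A minimal C sketch of concatenation (the mat4 type and the function are illustrative, not part of any particular API):

typedef float mat4[4][4];

/* C = A * B: concatenate two transformations into a single matrix.
   C must be a different array from A and B. */
void concat(mat4 A, mat4 B, mat4 C)
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            C[i][j] = 0.0f;
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
        }
}

The combined matrix can then be applied once to every vertex, rather than applying each transformation separately.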
e) Clipping and Primitive Assembly
✓ The second fundamental block in the implementation of the standard graphics pipeline is for clipping
and primitive assembly. We must do clipping because of the limitation that no imaging system can see
the whole world at once.
✓ The human retina has a limited size corresponding to an approximately 90-degree field of view.
Cameras have film of limited size, and we can adjust their fields of view by selecting different lenses.



✓ We obtain the equivalent property in the synthetic camera by considering a clipping volume, such
as the pyramid in front of the lens in Figure 1.25. The projections of objects in this volume appear in
the image. Those that are outside do not and are said to be clipped out.
✓ Clipping must be done on a primitive-by-primitive basis rather than on a vertex-by-vertex basis. Thus,
within this stage of the pipeline, we must assemble sets of vertices into primitives, such as line
segments and polygons, before clipping can take place.
f) Rasterization
✓ The primitives that emerge from the clipper are still represented in terms of their vertices and must be
converted to pixels in the frame buffer. For example, if three vertices specify a triangle with a solid
color, the rasterizer must determine which pixels in the frame buffer are inside the polygon.
✓ The output of the rasterizer is a set of fragments for each primitive. A fragment is a potential pixel that carries
with it information, including its color and location, that is used to update the corresponding pixel in
the frame buffer.
g) Fragment Processing
✓ The final block in our pipeline takes in the fragments generated by the rasterizer and updates the pixels
in the frame buffer. If the application generated three-dimensional data, some fragments may not be
visible because the surfaces that they define are behind other surfaces.
✓ The color of a fragment may be altered by texture mapping or bump mapping, as in Color Plates 6 and
7. The color of the pixel that corresponds to a fragment can also be read from the frame buffer and
blended with the fragment’s color to create translucent effects.
1.8 PROGRAMMABLE PIPELINES
• Graphics architectures have gone through multiple design cycles in which the importance of special-
purpose hardware relative to standard CPUs has gone back and forth. However, the importance of the
pipeline architecture has remained regardless of this cycle. None of the other approaches—ray tracing,
radiosity, photon mapping—can achieve real-time behavior, that is, the ability to render complex
dynamic scenes so that the viewer sees the display without defects.
• Although some approaches such as ray tracing can come close to real time, none can achieve the performance of
pipeline architectures with simple application programs and simple GPU programs. Hence, the
commodity graphics market is dominated by graphics cards that have pipelines built into the graphics
processing unit.
• Both the vertex processor and the fragment processor are now programmable by the application
program. One of the most exciting aspects of this advance is that many of the techniques that formerly
could not be done in real time because they were not part of the fixed-function pipeline can now be
done in real time.
• Vertex programs can alter the location or color of each vertex as it flows through the pipeline. Thus,
we can implement a variety of light–material models or create new kinds of projections. Fragment
programs allow us to use textures in new ways and to implement other parts of the pipeline, such as
lighting, on a per-fragment basis rather than per vertex.
• Programmability is now available at every level, including hand-held devices such as cell phones.
WebGL is being built into Web browsers. At the high end, the speed and parallelism in programmable
GPUs make them suitable for carrying out high-performance computing that does not involve graphics.
1.9 GRAPHICS PROGRAMMING
• The API we use will be sufficient to allow you to program many interesting two- and three-dimensional problems
and to familiarize you with the basic graphics concepts.



• Our development will use a simple but informative problem: the Sierpinski gasket. It shows how we
can generate an interesting and, to many people, unexpectedly sophisticated image using only a
handful of graphics functions.
The Sierpinski Gasket
▪ We begin with the drawing of the Sierpinski gasket, an interesting shape that has a long history and is of interest in
areas such as fractal geometry.
▪ The Sierpinski gasket is an object that can be defined recursively and randomly; in the limit, however,
it has properties that are not at all random.
▪ Suppose that we start with three points in space. As long as the points are not collinear, they are the
vertices of a unique triangle and also define a unique plane.
▪ We assume that this plane is the plane z = 0 and that these points, as specified in some convenient
coordinate system, are (x1, y1, 0), (x2, y2, 0), and (x3, y3, 0). The construction proceeds as follows:
1. Pick an initial point p = (x, y, 0) at random inside the triangle.
2. Select one of the three vertices at random.
3. Find the point q halfway between p and the randomly selected vertex.
4. Display q by putting some sort of marker, such as a small circle, at the corresponding location
on the display.
5. Replace p with q.
6. Return to step 2.
▪ Thus, each time that we generate a new point, we display it on the output device. This process is
illustrated in Figure 2.1, where p0 is the initial point, and p1 and p2 are the first two points generated
by our algorithm.

▪ Before we develop the program, you might try to determine what the resulting image will be. A
possible form for our graphics program might be this:
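One possibility, written as C-like pseudocode (all the function names here are placeholders, not a real API), generates and displays one point at a time:

initialize_the_system();
p = find_initial_point();

for (some_number_of_points)
{
    q = generate_a_point(p);   /* halfway between p and a randomly chosen vertex */
    display_the_point(q);
    p = q;
}

cleanup();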



▪ This form can be converted into a real program fairly easily. However, even at this level of abstraction,
we can see two other alternatives. Consider the pseudocode.
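In the same placeholder pseudocode, this alternative computes and stores all the points first, and only then displays them:

initialize_the_system();
p = find_initial_point();

for (i = 0; i < some_number_of_points; ++i)
{
    q = generate_a_point(p);
    store_the_point(i, q);     /* save q in an array or other data structure */
    p = q;
}

display_all_points();
cleanup();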

▪ In this algorithm, we compute all the points first and put them into an array or some other data structure.
▪ This approach avoids the overhead of sending small amounts of data to the graphics processor for each
point we generate at the cost of having to store all the data. The strategy used in the first algorithm is
known as immediate mode graphics.
▪ In our second algorithm, because the data are stored in a data structure, we can redisplay the data,
perhaps with some changes such as altering the color or changing the size of a displayed point, by
resending the array without regenerating the points. The method of operation is known as retained
mode graphics.
▪ Our second approach has one major flaw. Suppose that, as we might in an animation, we wish to
redisplay the same objects. The geometry of the objects is unchanged, but the objects may be moving.
▪ Displaying all the points involves sending the data from the CPU to the GPU each time we wish to
display the objects in a new position. Consider the following alternative scheme:
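Again in placeholder pseudocode, this scheme generates and stores the points once, sends them to the GPU, and thereafter redisplays them without resending any data:

initialize_the_system();
p = find_initial_point();

for (i = 0; i < some_number_of_points; ++i)
{
    q = generate_a_point(p);
    store_the_point(i, q);
    p = q;
}

send_all_points_to_GPU();      /* the data now reside in GPU memory */
display_data_on_GPU();         /* can be repeated, e.g., each frame, without resending the points */
cleanup();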



Programming Two-Dimensional Applications
• For two-dimensional applications, such as the Sierpinski gasket, although we could use a pen-plotter
API, such an approach would limit us.
• Instead, we choose to start with a three-dimensional world; we regard two-dimensional systems, such
as the one on which we will produce our image, as special cases.
• Mathematically, we view the two-dimensional plane, or a simple two-dimensional curved surface, as
a subspace of a three-dimensional space.
• We can represent a point in the plane z = 0 as p = (x, y, 0) in the three-dimensional world, or as p = (x,
y) in the two-dimensional plane.
• We can implement representations of points in a number of ways, but the simplest is to think of a three-
dimensional point as being represented by a triplet p = (x, y, z) or by the equivalent column matrix whose
three elements are x, y, and z.

• We use the terms vertex and point in a somewhat different manner in OpenGL. A vertex is a position
in space; we use two-, three-, and four-dimensional spaces in computer graphics.
• The simplest geometric primitive is a point in space, which is usually specified by a single vertex. Two
vertices can specify a line segment, a second primitive object; three vertices can specify either a
triangle or a circle; four vertices can specify a quadrilateral; and so on.
• The heart of our Sierpinski gasket program is generating the points. To go from our third
algorithm to a working OpenGL program, we must fill in a few details.
• One simplification is to delay a discussion of coordinate systems and transformations among them by
putting all the data we want to display inside a cube centered at the origin whose diagonal goes from
(−1, −1, −1) to (1, 1, 1). This system, known as clip coordinates, is the one that our vertex shader uses
to send information to the rasterizer.
• Objects outside this cube will be eliminated, or clipped, and cannot appear on the display. Later, we can
specify data in other coordinate systems, called object coordinates, and use transformations to convert the
data to a representation in clip coordinates.
• We use a simple array of two elements to hold the x- and y-values of each point.
• We have created such classes and operators and put them in a file vec.h. The types in vec.h and the
other types defined later in the three- and four-dimensional classes match the types in the OpenGL
Shading Language.



• In addition to defining these new types, vec.h and its companion file mat2.h also define overloaded
operators and constructors for these types that match GLSL. Hence, code such as the following can appear
either in a shader or in the application:
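An illustrative fragment (the particular values are arbitrary; the constructors and the overloaded + are those provided by vec.h and GLSL):

vec2 a(1.0, 2.0);
vec2 b(3.0, 4.0);
vec2 c = a + b;    /* c is (4.0, 6.0) */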

• We can access individual elements using either the usual membership operator, e.g., p.x or p.y, or by
indexing as we would an array (p[0] and p[1]).
• One small addition will make our applications even clearer. Rather than using the GLSL vec2, we
typedef a point2
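A one-line typedef of the following form suffices (a sketch; the exact form depends on vec.h):
typedef vec2 point2;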

• Within vec.h, the type vec2 is specified as a struct with two elements of type GLfloat. In OpenGL, we
often use basic OpenGL types, such as GLfloat and GLint, rather than the corresponding C types float
and int. These types are defined in the OpenGL header files and usually in the obvious way—for
example,
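For example (the exact typedefs vary by implementation, so these lines are only representative):
typedef float GLfloat;
typedef int GLint;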

• However, use of the OpenGL types allows additional flexibility for implementations where, for
example, we might want to change floats to doubles without altering existing application programs.
• The following code generates 5000 points starting with the vertices of a triangle that lie in the plane
z=0:
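A sketch of such a loop, assuming the point2 type and the overloaded operators from vec.h; the particular triangle vertices and the starting point are only illustrative:
const int NumPoints = 5000;
point2 points[NumPoints];

// A triangle in the plane z = 0
point2 vertices[3] = { point2(-1.0, -1.0), point2(0.0, 1.0), point2(1.0, -1.0) };

// An arbitrary initial point inside the triangle
points[0] = point2(0.25, 0.50);

// Each new point is halfway between a randomly chosen vertex and the previous point
for (int i = 1; i < NumPoints; ++i) {
    int j = rand() % 3;                              // rand() is declared in <cstdlib>
    points[i] = (points[i - 1] + vertices[j]) / 2.0; // uses the overloaded + and / from vec.h
}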

• Note that because every point we generate must lie inside the triangle determined by these vertices,
we know that none of the generated points will be clipped out.
• The function rand() is a standard random-number generator that produces a new random integer each
time it is called. We use the modulus operator to reduce these random integers to the three integers 0,
1, and 2.
• We intend to generate the points only once and then place them on the GPU. Hence, we make their
creation part of an initialization function init.
• We specified our points in two dimensions. We could also have specified them in three dimensions by adding a z-coordinate, which is always zero, using the three-dimensional types in mat.h and vec.h.
• The changes to the code would be minimal; we would only need a few extra lines (sketched below) as part of initialization. Although we still do not have a complete program, Figure 2.2 shows the output that we expect to see.
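A sketch of the three-dimensional variant, again assuming the types in vec.h (values illustrative):
typedef vec3 point3;

point3 points[NumPoints];
point3 vertices[3] = { point3(-1.0, -1.0, 0.0), point3(0.0, 1.0, 0.0), point3(1.0, -1.0, 0.0) };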

• Note that because any three noncollinear points specify a unique plane, had we started with three points
(x1, y1, z1), (x2, y2, z2), and (x3, y3, z3) along with an initial point in the same plane, then the gasket
would be generated in the plane specified by the original three vertices.

1.10 VIDEO DISPLAY DEVICES


• The primary output device in a graphics system is a video monitor.
• Historically, the operation of most video monitors was based on the standard cathode ray tube (CRT)
design, but several other technologies exist.

• In recent years, flat-panel displays have become significantly more popular due to their reduced power
consumption and thinner designs.
a) Refresh Cathode-Ray Tubes
• A beam of electrons, emitted by an electron gun, passes through focusing and deflection systems that
direct the beam toward specified positions on the phosphor-coated screen.
• The phosphor then emits a small spot of light at each position contacted by the electron beam and the
light emitted by the phosphor fades very rapidly.

• One way to maintain the screen picture is to store the picture information as a charge distribution
within the CRT in order to keep the phosphors activated.
• The most common method now employed for maintaining phosphor glow is to redraw the picture
repeatedly by quickly directing the electron beam back over the same screen points. This type of
display is called a refresh CRT.
• The frequency at which a picture is redrawn on the screen is referred to as the refresh rate.
Operation of an electron gun with an accelerating anode

• The primary components of an electron gun in a CRT are the heated metal cathode and a control grid.
• The heat is supplied to the cathode by directing a current through a coil of wire, called the filament,
inside the cylindrical cathode structure.
• This causes electrons to be “boiled off” the hot cathode surface.
• Inside the CRT envelope, the free, negatively charged electrons are then accelerated toward the
phosphor coating by a high positive voltage.
• Intensity of the electron beam is controlled by the voltage at the control grid.
• Since the amount of light emitted by the phosphor coating depends on the number of electrons striking
the screen, the brightness of a display point is controlled by varying the voltage on the control grid.

• The focusing system in a CRT forces the electron beam to converge to a small cross section as it strikes
the phosphor and it is accomplished with either electric or magnetic fields.
• With electrostatic focusing, the electron beam is passed through a positively charged metal cylinder
so that electrons along the center line of the cylinder are in equilibrium position.
• Deflection of the electron beam can be controlled with either electric or magnetic fields.
• Cathode-ray tubes are commonly constructed with two pairs of magnetic-deflection coils
• One pair is mounted on the top and bottom of the CRT neck, and the other pair is mounted on opposite
sides of the neck.
• The magnetic field produced by each pair of coils results in a transverse deflection force that is
perpendicular to both the direction of the magnetic field and the direction of travel of the electron
beam.
• Horizontal and vertical deflections are accomplished with these two pairs of coils.
Electrostatic deflection of the electron beam in a CRT
• When electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope
where, one pair of plates is mounted horizontally to control vertical deflection, and the other pair is
mounted vertically to control horizontal deflection.
• Spots of light are produced on the screen by the transfer of the CRT beam energy to the phosphor.
• When the electrons in the beam collide with the phosphor coating, they are stopped and their kinetic
energy is absorbed by the phosphor.
• Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons in the phosphor atoms to move up to higher quantum-energy levels.
• After a short time, the “excited” phosphor electrons begin dropping back to their stable ground state, giving up their extra energy as small quanta of light energy called photons.

• What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
• The frequency of the light emitted by the phosphor is proportional to the energy difference between
the excited quantum state and the ground state.
• Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker.

• The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
• Resolution of a CRT is dependent on the type of phosphor, the intensity to be displayed, and the
focusing and deflection systems.
• High-resolution systems are often referred to as high-definition systems.
1.10.1) Raster-Scan Displays
➢ The electron beam is swept across the screen one row at a time from top to bottom.
➢ As it moves across each row, the beam intensity is turned on and off to create a pattern of illuminated
spots.
➢ This scanning process is called refreshing. Each complete scanning of a screen is normally called a
frame.
➢ The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, or described as 60
Hz to 80 Hz. Picture definition is stored in a memory area called the frame buffer.
➢ This frame buffer stores the intensity values for all the screen points. Each screen point is called a pixel
(picture element).
➢ Another property of raster-scan systems is the aspect ratio, which is defined as the number of pixel columns divided by the number of scan lines that can be displayed by the system (Fig. 6).

• At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying
the next scan line. The return to the left of the screen, after refreshing each scan line, is called the
horizontal retrace of the electron beam. And at the end of each frame (displayed in 1/80 to 1/60 of a
second), the electron beam returns to the upper-left corner of the screen (vertical retrace) to begin the
next frame.
• Each frame is displayed in two passes using an interlaced refresh procedure. In the first pass, the beam
sweeps across every other scan line from top to bottom. After the vertical retrace, the beam then sweeps
out the remaining scan lines (Fig. 7).

Case 1: In case of black and white systems


▪ On black and white systems, the frame buffer storing the values of the pixels is called a bitmap.
▪ Each entry in the bitmap is a 1-bit data which determine the on (1) and off (0) of the intensity of the
pixel.
Case 2: In case of color systems
▪ On color systems, the frame buffer storing the values of the pixels is called a pixmap (though nowadays many graphics libraries call it a bitmap as well).
▪ Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue channel; each channel has 2^8 = 256 levels of intensity, i.e., 256 voltage settings for each of the red/green/blue electron guns).
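As a rough storage estimate (the resolution here is chosen only for illustration): a 1024 × 768 true-color frame buffer needs 1024 × 768 × 3 bytes = 2,359,296 bytes, i.e., about 2.25 MB of memory.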
1.10.2) Flat Panel Display
• The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT.
• Because flat-panel displays are thinner than CRTs, we can hang them on walls or wear them on our wrists.
• Since we can even write on some flat-panel displays, they will soon be available as pocket notepads.
• We can separate flat panel display in two categories:
1. Emissive displays: - the emissive display or emitters are devices that convert electrical energy
into light. For Ex: Plasma panel, thin film electroluminescent displays and light emitting diodes.
2. Non emissive displays: - non emissive display or non-emitters use optical effects to convert
sunlight or light from some other source into graphics patterns. For Ex: LCD (Liquid Crystal
Display).
a) Plasma Panels displays
▪ These are also called gas-discharge displays.
▪ It is constructed by filling the region between two glass plates with a mixture of gases that usually
includes neon.
▪ A series of vertical conducting ribbons is placed on one glass panel and a set of horizontal ribbon is
built into the other glass panel.

▪ A firing voltage applied to a pair of horizontal and vertical conductors causes the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions.
▪ Picture definition is stored in a refresh buffer and the firing voltages are applied to refresh the pixel
positions, 60 times per second.
▪ Alternating current methods are used to provide faster application of firing voltages and thus brighter
displays.
▪ Separation between pixels is provided by the electric field of the conductors.
▪ One disadvantage of early plasma panels was that they were strictly monochromatic devices; that is, like a black-and-white display, they could show only one color other than black.

b) Thin Film Electroluminescent Displays


▪ It is similar to a plasma-panel display, but the region between the glass plates is filled with a phosphor (such as zinc sulfide doped with manganese) instead of a gas.
▪ When a sufficiently high voltage is applied, the phosphor becomes a conductor in the area of intersection of the two electrodes.
▪ Electrical energy is then absorbed by the manganese atoms, which release the energy as a spot of light similar to the glowing plasma effect in a plasma panel.
▪ It requires more power than a plasma panel, and good color and gray scale are difficult to achieve.

c) Light Emitting Diode (LED)
▪ In this display, a matrix of multi-color light-emitting diodes is arranged to form the pixel positions, and the picture definition is stored in a refresh buffer.
▪ Similar to the scan-line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern on the display.
d) Liquid Crystal Display (LCD)
▪ This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
▪ The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
▪ It consists of two glass plates, each with a light polarizer at right angles to the other, sandwiching the liquid-crystal material between them.
▪ Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
▪ The intersection of two conductors defines a pixel position.
▪ In the ON state, polarized light passing through the material is twisted so that it will pass through the opposite polarizer.
▪ In the OFF state, the light is reflected back toward the source.

EXTRA TOPICS
a) Random-Scan Displays
➢ When operated as a random-scan display unit, a CRT has the electron beam directed only to those
parts of the screen where a picture is to be displayed.
➢ Pictures are generated as line drawings, with the electron beam tracing out the component lines one
after the other.
➢ For this reason, random-scan monitors are also referred to as vector displays (or stroke writing displays
or calligraphic displays).
➢ The component lines of a picture can be drawn and refreshed by a random-scan system in any specified order (Fig. 8).
➢ A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device. Refresh
rate on a random-scan system depends on the number of lines to be displayed on that system.
➢ Picture definition is now stored as a set of line-drawing commands in an area of memory referred to
as the display list, refresh display file, vector file, or display program
➢ To display a specified picture, the system cycles through the set of commands in the display file,
drawing each component line in turn.
➢ After all line-drawing commands have been processed, the system cycles back to the first line
command in the list.
➢ Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each
second, with up to 100,000 “short” lines in the display list.
➢ When a small set of lines is to be displayed, each refresh cycle is delayed to avoid very high refresh
rates, which could burn out the phosphor.

Difference between Raster-Scan and Random-Scan Systems
Electron beam:
• Raster scan: the electron beam is swept across the screen, one row at a time, from top to bottom.
• Random scan: the electron beam is directed only to the parts of the screen where a picture is to be drawn.
Resolution:
• Raster scan: resolution is poorer, because lines are plotted as discrete point sets and can appear jagged (zigzag).
• Random scan: resolution is better, because the CRT beam directly follows the line path and produces smooth line drawings.
Picture definition:
• Raster scan: picture definition is stored as a set of intensity values for all screen points (pixels) in a refresh-buffer area.
• Random scan: picture definition is stored as a set of line-drawing instructions in a display file.
Realistic display:
• Raster scan: the ability to store an intensity value for each pixel makes it well suited for the realistic display of scenes containing shadow and color patterns.
• Random scan: these systems are designed for line drawing and cannot display realistic shaded scenes.
Drawing an image:
• Raster scan: screen points (pixels) are used to draw an image.
• Random scan: mathematical (line-drawing) functions are used to draw an image.

b) Color CRT Monitors


➢ A CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light.
➢ It produces a range of colors by combining the light emitted by the different phosphors.
➢ There are two basic techniques for color display:
i. Beam-penetration technique
ii. Shadow-mask technique

i) Beam-penetration technique:
• This technique is used with random scan monitors.
• In this technique inside of CRT coated with two phosphor layers usually red and green.
• The outer layer of red and inner layer of green phosphor.
• The color depends on how far the electron beam penetrates into the phosphor layer.
• A beam of fast electron penetrates more and excites inner green layer while slow electron excites outer
red layer.
• At intermediate beam speed we can produce combination of red and green lights which emit additional
two colors orange and yellow.
• The beam acceleration voltage controls the speed of the electrons and hence color of pixel.
Disadvantages:
• Although it is a low-cost technique for producing color on random-scan monitors, it can display only four colors.
• The quality of the picture is not as good as with other techniques.
ii) Shadow-mask technique
• It produces wide range of colors as compared to beam-penetration technique.
• This technique is generally used in raster-scan displays, including color TV sets.
• In this technique, the CRT has three phosphor color dots at each pixel position: one dot for red, one for green, and one for blue light. This arrangement is commonly known as a dot triangle.
• Here in CRT there are three electron guns present, one for each color dot. And a shadow mask grid
just behind the phosphor coated screen.
• The shadow mask grid consists of series of holes aligned with the phosphor dot pattern.
• Three electron beams are deflected and focused as a group onto the shadow mask and when they pass
through a hole they excite a dot triangle.
• In dot triangle three phosphor dots are arranged so that each electron beam can activate only its
corresponding color dot when it passes through the shadow mask.
• A dot triangle when activated appears as a small dot on the screen which has color of combination of
three small dots in the dot triangle.
• By changing the intensity of the three electron beams we can obtain different colors in the shadow
mask CRT.

c) Three- Dimensional Viewing Devices
▪ Graphics monitors for the display of three-dimensional scenes have been devised using a technique that reflects a CRT image from a vibrating, flexible mirror. As the varifocal mirror vibrates, it changes focal length.
▪ These vibrations are synchronized with the display of an object on a CRT so that each point on the
object is reflected from the mirror into a spatial position corresponding to the distance of that point
from a specified viewing location.
▪ This allows us to walk around an object or scene and view it from different sides.

Application of 3D viewing devices:


• In medicine, to analyze data from ultrasonography.
• In geology, to analyze topological and seismic data.
• In design, for viewing solid objects and 3D scenes.
d) Stereoscopic and Virtual-Reality Systems
▪ Stereoscopic viewing does not produce true three-dimensional images, but it produces a 3D effect (seen in Fig. 15) by presenting a different view to each eye of an observer, so that scenes appear to have depth.
▪ To obtain this, we first need to obtain two views of an object, generated with viewing directions corresponding to each eye.
▪ We can construct the two views as computer-generated scenes with different viewing positions, or we can use a stereo camera pair to photograph an object or scene.
▪ When we see both views simultaneously, the left view with the left eye and the right view with the right eye, the two views merge into an image that appears to have depth.
▪ One way to produce a stereoscopic effect is to display each of the two views with a raster system on alternate refresh cycles.
▪ The screen is viewed through glasses, with each lens designed to act as a rapidly alternating shutter that is synchronized to block out one of the views.
▪ Virtual reality is a system that produces images in such a way that the viewer feels the displayed environment is the actual surroundings, even though it is not.
▪ In virtual reality, the user can step into a scene and interact with the environment.

▪ A head set containing an optical system to generate the stereoscopic views is commonly used in
conjunction with interactive input devices to locate and manipulate objects in the scene.
▪ Sensor in the head set keeps track of the viewer’s position so that the front and back of objects can be
seen as the viewer “walks through” and interacts with the display.
▪ Virtual reality can also be produced with stereoscopic glasses and a video monitor instead of a headset; this provides a lower-cost virtual-reality system.
▪ A sensor on the display screen tracks head position and adjusts the image depth accordingly.
1.11 RASTER-SCAN SYSTEMS
➢ Interactive raster-graphics systems typically employ several processing units.
➢ In addition to the central processing unit (CPU), a special-purpose processor, called the video
controller or display controller, is used to control the operation of the display device.
➢ Organization of a simple raster system is shown in below Figure.16.

➢ Here, the frame buffer can be anywhere in the system memory, and the video controller accesses the
frame buffer to refresh the screen.
➢ In addition to the video controller, raster systems employ other processors as coprocessors and
accelerators to implement various graphics operations.
1.11.1 VIDEO CONTROLLER:
➢ The figure.17 below shows a commonly used organization for raster systems.
➢ A fixed area of the system memory is reserved for the frame buffer, and the video controller is given
direct access to the frame-buffer memory.
➢ Frame-buffer locations, and the corresponding screen positions, are referenced in the Cartesian
coordinates.

a) Cartesian reference frame:

➢ Frame-buffer locations and the corresponding screen positions, are referenced in Cartesian
coordinates.
➢ In an application (user) program, we use the commands within a graphics software package to set
coordinate positions for displayed objects relative to the origin of the Cartesian reference frame.
➢ The coordinate origin is referenced at the lower-left corner of a screen display area by the software
commands, although we can typically set the origin at any convenient location for a particular
application.
Working:
➢ Figure.18 shows a two-dimensional Cartesian reference frame with the origin at the lower left screen
corner.

➢ The screen surface is then represented as the first quadrant of a two-dimensional system with positive
x and y values increasing from left to right and bottom of the screen to the top respectively.
➢ Pixel positions are then assigned integer x values that range from 0 to xmax across the screen, left to
right, and integer y values that vary from 0 to ymax, bottom to top.
b) Basic Video Controller Refresh Operations
➢ The basic refresh operations of the video controller are diagrammed fig.19.

➢ Two registers are used to store the coordinate values for the screen pixels. Initially, the x register is set
to 0 and the y register is set to the value for the top scan line.
➢ The contents of the frame buffer at this pixel position are then retrieved and used to set the intensity
of the CRT beam.
➢ Then the x register is incremented by 1, and the process is repeated for the next pixel on the top scan
line.
➢ This procedure continues for each pixel along the top scan line.
➢ After the last pixel on the top scan line has been processed, the x register is reset to 0 and the y register
is set to the value for the next scan line down from the top of the screen.
➢ The procedure is repeated for each successive scan line.
➢ After cycling through all pixels along the bottom scan line, the video controller resets the registers to
the first pixel position on the top scan line and the refresh process starts over.
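The steps above amount to a simple nested loop. A minimal sketch in C-like code (frameBuffer, setBeamIntensity, xMax, and yTop are hypothetical names used only to mirror the steps above):
for (;;) {                                   // one iteration per refresh cycle
    for (int y = yTop; y >= 0; --y) {        // y register: start at the top scan line
        for (int x = 0; x <= xMax; ++x) {    // x register: step across the scan line
            int value = frameBuffer[y][x];   // retrieve the stored pixel value
            setBeamIntensity(value);         // use it to set the CRT beam intensity
        }
    }
}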
b.i) Speed up pixel position processing of video controller:
➢ Since the screen must be refreshed at a rate of at least 60 frames per second, the simple procedure illustrated above may not be accommodated by RAM chips if the cycle time is too slow.
➢ To speed up pixel processing, video controllers can retrieve multiple pixel values from the refresh
buffer on each pass.
➢ When a group of pixels has been processed, the next block of pixel values is retrieved from the frame buffer.
Advantages of video controller:
▪ A video controller can be designed to perform a number of other operations.
▪ For various applications, the video controller can retrieve pixel values from different memory areas on
different refresh cycles.
▪ This provides a fast mechanism for generating real-time animations.
▪ Another video-controller task is the transformation of blocks of pixels, so that screen areas can be
enlarged, reduced, or moved from one location to another during the refresh cycles.
▪ In addition, the video controller often contains a lookup table, so that pixel values in the frame buffer
are used to access the lookup table. This provides a fast method for changing screen intensity values.
▪ Finally, some systems are designed to allow the video controller to mix the framebuffer image with an
input image from a television camera or other input device.
1.11.2 RASTER-SCAN DISPLAY PROCESSOR
➢ Figure.20 shows one way to organize the components of a raster system that contains a separate display
processor, sometimes referred to as a graphics controller or a display coprocessor.

➢ The purpose of the display processor is to free the CPU from the graphics chores.
➢ In addition to the system memory, a separate display-processor memory area can be provided.
a) Scan conversion:
▪ A major task of the display processor is digitizing a picture definition given in an application program
into a set of pixel values for storage in the frame buffer.
▪ This digitization process is called scan conversion.
Example 1: displaying a line
▪ Graphics commands specifying straight lines and other geometric objects are scan converted into a set
of discrete points, corresponding to screen pixel positions.
▪ Scan converting a straight-line segment.
Example 2: displaying a character
▪ Characters can be defined with rectangular pixel grids as shown in Fig.21.
▪ The array size for character grids can vary from about 5 by 7 to 9 by 12 or more for higher-quality displays.
▪ A character grid is displayed by superimposing the rectangular grid pattern onto the frame buffer at a specified coordinate position.

▪ For characters that are defined as outlines, the shapes are scan-converted into the frame buffer by locating the pixel positions closest to the outline (Fig. 22).

b) Additional operations of Display processors:


▪ Display processors are also designed to perform a number of additional operations.
▪ These functions include generating various line styles (dashed, dotted, or solid), displaying color areas,
and applying transformations to the objects in a scene.
▪ Display processors are typically designed to interface with interactive input devices, such as a mouse.

c) Methods to reduce memory requirements in display processor:
▪ In an effort to reduce memory requirements in raster systems, methods have been devised for
organizing the frame buffer as a linked list and encoding the color information.
▪ One organization scheme is to store each scan line as a set of number pairs.
▪ Encoding methods can be useful in the digital storage and transmission of picture information.
d) Run-length encoding:
▪ The first number in each pair can be a reference to a color value, and the second number can specify the number of adjacent pixels on the scan line that are to be displayed in that color.
▪ This technique, called run-length encoding, can result in a considerable saving in storage space if a
picture is to be constructed mostly with long runs of a single color each.
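For example (color indices chosen arbitrarily), a 160-pixel scan line containing 80 pixels of color 3, then 45 pixels of color 7, then 35 pixels of color 3 could be stored as just three pairs:
(3, 80) (7, 45) (3, 35)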
▪ A similar approach can be taken when pixel colors change linearly.
e) Cell encoding: Another approach is to encode the raster as a set of rectangular areas (cell encoding).
Disadvantages of encoding:
▪ The disadvantages of encoding runs are that color changes are difficult to record and storage
requirements increase as the lengths of the runs decrease.
▪ In addition, it is difficult for the display controller to process the raster when many short runs are
involved.
▪ Moreover, the size of the frame buffer is no longer a major concern, because of sharp declines in
memory costs.
1.12 GRAPHICS WORKSTATIONS AND VIEWING SYSTEMS
• Most graphics monitors today operate as raster-scan displays, and both CRT and flat panel systems are
in common use.
• Graphics workstation range from small general-purpose computer systems to multi monitor facilities,
often with ultra–large viewing screens.
• High-definition graphics systems, with resolutions up to 2560 by 2048, are commonly used in medical
imaging, air-traffic control, simulation, and CAD.
• Many high-end graphics workstations also include large viewing screens, often with specialized
features.
• Multi-panel display screens are used in a variety of applications that require “wall-sized” viewing
areas. These systems are designed for presenting graphics displays at meetings, conferences,
conventions, trade shows, retail stores etc.
• A multi-panel display can be used to show a large view of a single scene or several individual images.
Each panel in the system displays one section of the overall picture.
• A large, curved-screen system can be useful for viewing by a group of people studying a particular
graphics application.
• A 360-degree paneled viewing system is used in the NASA control-tower simulator for training and for testing ways to solve air-traffic and runway problems at airports.
1.13 INPUT DEVICES
➢ Graphics workstations make use of various devices for data input. Most systems have a keyboard and a mouse, while some systems also have trackballs, spaceballs, joysticks, button boxes, touch panels, image scanners, and voice systems.

a) Keyboard, Button Boxes and Dials:
• Keyboard on graphics system is used for entering text strings, issuing certain commands and selecting
menu options.
• Keyboards can also be provided with features for entry of screen coordinates, menu selections or
graphics functions.
• General purpose keyboard uses function keys and cursor-control keys.
• Function keys allow user to select frequently accessed operations with a single keystroke. Cursor-
control keys are used for selecting a displayed object or a location by positioning the screen cursor.
• Buttons are often used to input predefined functions. Dials are common devices for entering scalar
values.
• Numerical values within some defined range are selected for input with dial rotations.
b) Mouse Devices:
• A mouse is a hand-held device, usually moved around on a flat surface to position the screen cursor. Wheels or rollers on the bottom of the mouse are used to record the amount and direction of movement.

• Some mice use optical sensors, which detect movement across horizontal and vertical grid lines.
• Since a mouse can be picked up and put down, it is used for making relative changes in the position of the screen cursor.
• Most general purpose graphics systems now include a mouse and a keyboard as the primary input
devices.
c) Trackballs and Spaceballs:
• A trackball is a ball device that can be rotated with the fingers or palm of the hand to produce screen
cursor movement.
• Laptop keyboards are equipped with a trackball to eliminate the extra space required by a mouse.
• Spaceball is an extension of two-dimensional trackball concept.
• Spaceballs are used for three-dimensional positioning and selection operations in virtual reality
systems, modeling, animation, CAD and other applications.
d) Joysticks:
• A joystick is used as a positioning device; it uses a small vertical lever (stick) mounted on a base. It is used to steer the screen cursor around and to select screen positions with the stick movement.
• A push or pull on the stick is measured with strain gauges and converted to movement of the screen
cursor in the direction of the applied pressure.

e) Data Gloves:
• Data glove can be used to grasp a virtual object. The glove is constructed with a series of sensors that
detect hand and finger motions.
• Input from the glove is used to position or manipulate objects in a virtual scene.
f) Digitizers:
• Digitizer is a common device for drawing, painting or selecting positions.
• Graphics tablet is one type of digitizer, which is used to input 2-dimensional coordinates by activating
a hand cursor or stylus at selected positions on a flat surface.
• A hand cursor contains cross hairs for sighting positions and stylus is a pencil-shaped device that is
pointed at positions on the tablet.
g) Image Scanners:
• Drawings, graphs, photographs or text can be stored for computer processing with an image scanner
by passing an optical scanning mechanism over the information to be stored.
• Once we have the representation of the picture, then we can apply various image processing method
to modify the representation of the picture and various editing operations can be performed on the
stored documents.
h) Touch Panels:
• Touch panels allow displayed objects or screen positions to be selected with the touch of a finger.
• A touch panel is often used for the selection of processing options that are represented as a menu of graphical icons.
• An optical touch panel uses LEDs along one vertical edge and one horizontal edge of the frame.
• An acoustical touch panel generates high-frequency sound waves in the horizontal and vertical directions across a glass plate.
i) Light Pens:
• Light pens are pencil-shaped devices used to select positions by detecting the light coming from points
on the CRT screen.
• To select positions in any screen area with a light pen, we must have some nonzero light intensity
emitted from each pixel within that area.
• Light pens sometimes give false readings due to background lighting in a room.
j) Voice Systems:
• Speech recognizers are used with some graphics workstations as input devices for voice commands.
The voice system input can be used to initiate operations or to enter data.
• A dictionary is set up by speaking the command words several times; the system then analyzes each spoken word and matches it against the stored voice-command patterns.
1.14 GRAPHICS NETWORKS
• So far, we have mainly considered graphics applications on an isolated system with a single user.
• Multiuser environments & computer networks are now common elements in many graphics
applications.
• Various resources, such as processors, printers, plotters and data files can be distributed on a network
& shared by multiple users.

• A graphics monitor on a network is generally referred to as a graphics server. The computer on a
network that is executing a graphics application is called the client.
• A workstation that includes processors, as well as a monitor and input devices can function as both a
server and a client.
1.15 GRAPHICS ON INTERNET
• A great deal of graphics development is now done on the Internet.
• Computers on the Internet communicate using TCP/IP.
• Resources such as graphics files are identified by URL (Uniform resource locator).
• The World Wide Web provides a hypertext system that allows users to locate and view documents, audio, and graphics.
• A URL is sometimes also called a universal resource locator.
• The URL contains two parts: the protocol for transferring the document, and the server that contains the document.
1.16 GRAPHICS SOFTWARE
There are two broad classifications for computer-graphics software
a. Special-purpose packages: Special-purpose packages are designed for nonprogrammers.
Example: packages that generate pictures, graphs, or charts, such as painting programs or CAD systems in some application area, where the user need not worry about the underlying graphics procedures.
b. General programming packages: general programming package provides a library of graphics functions
that can be used in a programming language such as C, C++, Java, or FORTRAN.
Example: GL (Graphics Library), OpenGL, VRML (Virtual-Reality Modeling Language), Java 2D
And Java 3D.
EXTRA TOPICS
a) Coordinate Representations
• To generate a picture using a programming package we first need to give the geometric descriptions
of the objects that are to be displayed known as coordinates.
• If coordinate values for a picture are given in some other reference frame (spherical, hyperbolic, etc.),
they must be converted to Cartesian coordinates.
• Several different Cartesian reference frames are used in the process of constructing and displaying.
• First we define the shapes of individual objects, such as trees or furniture, in separate reference frames; these reference frames are called modeling coordinates or local coordinates.
• Then we place the objects into appropriate locations within a scene reference frame called world
coordinates.
• After all parts of a scene have been specified, it is processed through various output device reference
frames for display. This process is called the viewing pipeline.
• The scene is then stored in normalized coordinates, which range from −1 to 1 or from 0 to 1. Normalized coordinates are also referred to as normalized device coordinates.
• The coordinate systems for display devices are generally called device coordinates, or screen
coordinates.
NOTE: Geometric descriptions in modeling coordinates and world coordinates can be given in floating-
point or integer values.

• Example: Figure.1 briefly illustrates the sequence of coordinate transformations from modeling
coordinates to device coordinates for a display.
• An initial modeling-coordinate position (xmc , ymc , zmc ) in this illustration is transferred to world
coordinates, then to viewing and projection coordinates, then to left-handed normalized coordinates,
and finally to a device-coordinate position (xdc , ydc ) with the sequence:
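Written out, the sequence takes the following form (the wc, vc, pc, and nc subscripts for world, viewing, projection, and normalized coordinates are added here only for clarity):
(xmc, ymc, zmc) → (xwc, ywc, zwc) → (xvc, yvc, zvc) → (xpc, ypc, zpc) → (xnc, ync, znc) → (xdc, ydc)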

• Device coordinates (xdc , ydc ) are integers within the range (0, 0) to (xmax, ymax) for a particular output
device.

b) Graphics Functions
• It provides users with a variety of functions for creating and manipulating pictures.
• The basic building blocks for pictures are referred to as graphics output primitives.
• Attributes are properties of the output primitives.
• We can change the size, position, or orientation of an object using geometric transformations.
• Modeling transformations, which are used to construct a scene.
• Viewing transformations are used to select a view of the scene, the type of projection to be used and
the location where the view is to be displayed.
• Input functions are used to control and process the data flow from these interactive devices (mouse,
tablet and joystick).
• A graphics package also contains functions for a number of housekeeping tasks; we can lump the functions for carrying out these tasks under the heading control operations.
c) Software Standards
• The primary goal of standardized graphics software is portability.
• In 1984, Graphical Kernel System (GKS) was adopted as the first graphics software standard by the
International Standards Organization (ISO)
• The second software standard to be developed and approved by the standards organizations was
Programmer’s Hierarchical Interactive Graphics System (PHIGS).

• Extension of PHIGS, called PHIGS+, was developed to provide 3-D surface rendering capabilities not
available in PHIGS.
• The graphics workstations from Silicon Graphics, Inc. (SGI), came with a set of routines called GL
(Graphics Library).
d) Other Graphics Packages
• Many other computer-graphics programming libraries have been developed, some providing general graphics routines and some aimed at specific applications (animation, virtual reality, etc.).
Examples: Open Inventor and the Virtual-Reality Modeling Language (VRML). We can also create 2D and 3D scenes within Java applets using Java 2D and Java 3D.
1.17 INTRODUCTION TO OPENGL
• OpenGL basic(core) library :-A basic library of functions is provided in OpenGL for specifying
graphics primitives, attributes, geometric transformations, viewing transformations, and many other
operations.
a) Basic OpenGL Syntax
• Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with gl, and the first letter of each component word is capitalized.
For eg:- glBegin, glClear, glCopyPixels, glPolygonMode
• Symbolic constants that are used with certain functions as parameters are all in capital letters, preceded by “GL”, and the component words are separated by underscores.
For eg:- GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.
• The OpenGL functions also expect specific data types. For example, an OpenGL function parameter
might expect a value that is specified as a 32-bit integer. But the size of an integer specification can be
different on different machines.
• To indicate a specific data type, OpenGL uses special built-in, data-type names, such as GLbyte,
GLshort, GLint, GLfloat, GLdouble, Glboolean.
b) Related Libraries
In addition to OpenGL basic(core) library(prefixed with gl), there are a number of associated libraries for
handling special operations:-
• OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up viewing and
projection matrices, describing complex objects with line and polygon approximations, displaying
quadrics and B-splines using linear approximations, processing the surface-rendering operations, and
other complex tasks. -Every OpenGL implementation includes the GLU library
• Open Inventor:- provides routines and predefined object shapes for interactive three dimensional
applications which are written in C++.
• Window-system libraries:- To create graphics we need display window. We cannot create the display
window directly with the basic OpenGL functions since it contains only device-independent graphics
functions, and window-management operations are device dependent. However, there are several
window-system libraries that support OpenGL functions for a variety of machines. Eg:- Apple
GL(AGL), Windows-to-OpenGL(WGL), Presentation Manager to OpenGL(PGL), GLX.

• OpenGL Utility Toolkit(GLUT):- provides a library of functions which acts as interface for
interacting with any device specific screen-windowing system, thus making our program device-
independent. The GLUT library functions are prefixed with “glut”.
c) Header Files
• In all graphics programs, we will need to include the header file for the OpenGL core library.
• In Windows, to include the OpenGL core library and GLU we can use the following header files:
#include <windows.h> // precedes the other header files; includes the Microsoft Windows version of the OpenGL libraries
#include <GL/gl.h>
#include <GL/glu.h>
• The above lines can be replaced by using GLUT header file which ensures gl.h and glu.h are included
correctly,
#include <GL/glut.h> //GL in windows
• In Apple OS X systems, the header-file inclusion statement is
#include <GLUT/glut.h>

d) Display-Window Management Using GLUT


We can consider a simplified example that uses the minimal number of operations for displaying a picture.
Step 1: Initialization of GLUT
• Since we are using the OpenGL Utility Toolkit, our first step is to initialize GLUT.
• This initialization function could also process any command line arguments, but we will not need to
use these parameters for our first example programs.
• We perform the GLUT initialization with the statement.

glutInit (&argc, argv);

Step 2: Title

• We can state that a display window is to be created on the screen with a given caption for the title bar.
This is accomplished with the function

glutCreateWindow ("An Example OpenGL Program");

• where the single argument for this function can be any character string that we want to use for the
display-window title.

Step 3: Specification of the display window

• Then we need to specify what the display window is to contain.


• For this, we create a picture using OpenGL functions and pass the picture definition to the GLUT
routine glutDisplayFunc, which assigns our picture to the display window.
• Example: suppose we have the OpenGL code for describing a line segment in a procedure called
lineSegment.
• Then the following function call passes the line-segment description to the display window:
glutDisplayFunc (lineSegment);

Step 4: One more GLUT function
• But the display window is not yet on the screen.
• We need one more GLUT function to complete the window-processing operations.
• After execution of the following statement, all display windows that we have created, including their
graphic content, are now activated:
glutMainLoop ( );
• This function must be the last one in our program. It displays the initial graphics and puts the program
into an infinite loop that checks for input from devices such as a mouse or keyboard.
Step 5: These parameters using additional GLUT functions
• Although the display window that we created will be in some default location and size, we can set
these parameters using additional GLUT functions.
GLUT Function 1:
• We use the glutInitWindowPosition function to give an initial location for the upper left corner of the
display window.
glutInitWindowPosition (50, 100);
• This position is specified in integer screen coordinates, whose origin is at the upper-left corner of the
screen.

GLUT Function 2:
• The glutInitWindowSize function sets the initial pixel dimensions (width and height) of the display window; after the display window is on the screen, we can still reposition and resize it.
glutInitWindowSize (400, 300);

GLUT Function 3:
• We can also set a number of other options for the display window, such as buffering and a choice of
color modes, with the glutInitDisplayMode function.
• Arguments for this routine are assigned symbolic GLUT constants.
• Example: the following command specifies that a single refresh buffer is to be used for the display
window and that we want to use the color mode which uses red, green, and blue (RGB) components
to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
• The values of the constants passed to this function are combined using a logical or operation.
• Actually, single buffering and RGB color mode are the default options.

• But we will use the function now as a reminder that these are the options that are set for our display.
• Later, we discuss color modes in more detail, as well as other display options, such as double buffering
for animation applications and selecting parameters for viewing three dimensional scenes.
e) A Complete OpenGL Program
▪ There are still a few more tasks to perform before we have all the parts that we need for a complete
program.
Step 1: To set background color
▪ For the display window, we can choose a background color.
▪ Using RGB color values, we set the background color for the display window to be white, with the
OpenGL function:
glClearColor (1.0, 1.0, 1.0, 0.0);
▪ The first three arguments in this function set the red, green, and blue component colors to the value
1.0, giving us a white background color for the display window.
▪ If, instead of 1.0, we set each of the component colors to 0.0, we would get a black background.
▪ The fourth parameter in the glClearColor function is called the alpha value for the specified color. One
use for the alpha value is as a “blending” parameter.
▪ When we activate the OpenGL blending operations, alpha values can be used to determine the resulting
color for two overlapping objects.
▪ An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0 indicates an
opaque object.
▪ For now, we will simply set alpha to 0.0.
▪ Although the glClearColor command assigns a color to the display window, it does not put the display
window on the screen.
Step 2: To set window color
▪ To get the assigned window color displayed, we need to invoke the following OpenGL function:
glClear (GL_COLOR_BUFFER_BIT);
▪ The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit values in the color buffer (refresh buffer) that are to be set to the values indicated in the glClearColor function. (OpenGL has several different kinds of buffers that can be manipulated.)
Step 3: To set color to object
▪ In addition to setting the background color for the display window, we can choose a variety of color
schemes for the objects we want to display in a scene.
▪ For our initial programming example, we will simply set the object color to be a dark green.
glColor3f (0.0, 0.4, 0.2);
▪ The suffix 3f on the glColor function indicates that we are specifying the three RGB color components
using floating-point (f) values.
▪ This function requires that the values be in the range from 0.0 to 1.0, and we have set red = 0.0, green
= 0.4, and blue = 0.2.
Example program
• For our first program, we simply display a two-dimensional line segment.

• To do this, we need to tell OpenGL how we want to “project” our picture onto the display window
because generating a two-dimensional picture is treated by OpenGL as a special case of three-
dimensional viewing.
• So, although we only want to produce a very simple two-dimensional line, OpenGL processes our
picture through the full three-dimensional viewing operations.
• We can set the projection type (mode) and other viewing parameters that we need with the following
two functions:
glMatrixMode (GL_PROJECTION);
gluOrtho2D (0.0, 200.0, 0.0, 150.0);
• This specifies that an orthogonal projection is to be used to map the contents of a two dimensional
rectangular area of world coordinates to the screen, and that the x-coordinate values within this
rectangle range from 0.0 to 200.0 with y-coordinate values ranging from 0.0 to 150.0.
• Whatever objects we define within this world-coordinate rectangle will be shown within the display
window.
• Anything outside this coordinate range will not be displayed.
• Therefore, the GLU function gluOrtho2D defines the coordinate reference frame within the display
window to be (0.0, 0.0) at the lower-left corner of the display window and (200.0, 150.0) at the upper-
right window corner.
• For now, we will use a world-coordinate rectangle with the same aspect ratio as the display window,
so that there is no distortion of our picture.
• Finally, we need to call the appropriate OpenGL routines to create our line segment.
• The following code defines a two-dimensional, straight-line segment with integer,
• Cartesian endpoint coordinates (180, 15) and (10, 145).
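The code being referred to is presumably of the following form (glVertex2i specifies integer two-dimensional coordinates):
glBegin (GL_LINES);
   glVertex2i (180, 15);
   glVertex2i (10, 145);
glEnd ( );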

The following OpenGL program is organized into three functions.


o init: We place all initializations and related one-time parameter settings in function init.

o lineSegment: Our geometric description of the “picture” that we want to display is in function
lineSegment, which is the function that will be referenced by the GLUT function glutDisplayFunc.
o main function: Main function contains the GLUT functions for setting up the display window and
getting our line segment onto the screen.
o glFlush: This is simply a routine to force execution of our OpenGL functions, which are stored by
computer systems in buffers in different locations, depending on how OpenGL is implemented.
o The procedure lineSegment that we set up to describe our picture is referred to as a display callback
function.
o And this procedure is described as being “registered” by glutDisplayFunc as the routine to invoke
whenever the display window might need to be redisplayed.
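Putting the pieces discussed above together, a minimal version of the complete program might look like the following sketch; it assembles only the statements already introduced, and the window position and size values are the ones used earlier:
#include <GL/glut.h>   // (or <GLUT/glut.h> on Mac OS X)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);    // white display window

    glMatrixMode (GL_PROJECTION);         // set up the 2D world-coordinate reference frame
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);        // clear the display window

    glColor3f (0.0, 0.4, 0.2);            // dark-green line color
    glBegin (GL_LINES);
        glVertex2i (180, 15);             // specify the line-segment geometry
        glVertex2i (10, 145);
    glEnd ( );

    glFlush ( );                          // force execution of the OpenGL functions
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);                          // initialize GLUT
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // single refresh buffer, RGB color mode
    glutInitWindowPosition (50, 100);                // initial top-left display-window position
    glutInitWindowSize (400, 300);                   // display-window width and height
    glutCreateWindow ("An Example OpenGL Program");  // create window with caption

    init ( );                                        // one-time parameter settings
    glutDisplayFunc (lineSegment);                   // register the display callback
    glutMainLoop ( );                                // display everything and enter the event loop
}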

f) Error Handling in OpenGL


• The OpenGL and GLU libraries have a relatively simple method of recording errors. When OpenGL
detects an error in a call to a base library routine or a GLU routine, it records an error code internally,
and the routine which caused the error is ignored.
• OpenGL only records one error code at a time. Once an error occurs, no other error code will be
recorded until your program explicitly queries the OpenGL error state:
GLenum code;
code = glGetError ();

• This call returns the current error code and clears the internal error flag.
• The base OpenGL library defines a number of symbolic constants which represent different error
conditions; the ones which occur most often are listed in Table 1.
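A small sketch of how an application might drain and report all pending errors, using the GLU routine gluErrorString (the helper name and the use of C stdio are assumptions made only for illustration):
#include <cstdio>   // for fprintf; the OpenGL/GLU headers shown earlier are assumed to be included

void reportOpenGLErrors (void)
{
    GLenum code;
    // Keep querying until the internal error flag has been cleared.
    while ((code = glGetError ()) != GL_NO_ERROR)
        fprintf (stderr, "OpenGL error: %s\n", (const char *) gluErrorString (code));
}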

1.18 COORDINATE REFERENCE FRAMES


To describe a picture, we first decide upon
• A convenient Cartesian coordinate system, called the world-coordinate reference frame, which could
be either 2D or 3D.
• We then describe the objects in our picture by giving their geometric specifications in terms of
positions in world coordinates.
Example: We define a straight-line segment with two endpoint positions, and a polygon is specified
with a set of positions for its vertices.
• These coordinate positions are stored in the scene description along with other info about the objects,
such as their color and their coordinate extents
• Coordinate extents: the coordinate extents are the minimum and maximum x, y, and z values for each object.
• A set of coordinate extents is also described as a bounding box for an object.
Example: For a 2D figure, the coordinate extents are sometimes called its bounding rectangle.
• Objects are then displayed by passing the scene description to the viewing routines which identify
visible surfaces and map the objects to the frame buffer positions and then on the video monitor.
• The scan-conversion algorithm stores info about the scene, such as color values, at the appropriate
locations in the frame buffer, and then the scene is displayed on the output device.
a) Screen co-ordinates:
• Locations on a video monitor are referenced in integer screen coordinates, which correspond to the
integer pixel positions in the frame buffer.
• Scan-line algorithms for the graphics primitives use the coordinate descriptions to determine the
locations of pixels.
Example: given the endpoint coordinates for a line segment, a display algorithm must calculate the
positions for those pixels that lie along the line path between the endpoints.
• Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be taken into
account by the implementation algorithms.
• For the present, we assume that each integer screen position references the centre of a pixel area.
• Once pixel positions have been identified the color values must be stored in the frame buffer.

Assume we have available a low-level procedure of the form
i)setPixel (x, y);
• stores the current color setting into the frame buffer at integer position(x, y), relative to the position of the
screen-coordinate origin
ii)getPixel (x, y, color);
• Retrieves the current frame-buffer setting for a pixel location;
• Parameter color receives an integer value corresponding to the combined RGB bit codes stored for the
specified pixel at position (x,y).
• Additional screen-coordinate information is needed for 3D scenes.
• For a two-dimensional scene, all depth values are 0.
b) Absolute and Relative Coordinate Specifications
Absolute coordinate:
• So far, the coordinate references that we have discussed are stated as absolute coordinate values.
• This means that the values specified are the actual positions within the coordinate system in use.
Relative coordinates:
• However, some graphics packages also allow positions to be specified using relative coordinates.
• This method is useful for various graphics applications, such as producing drawings with pen plotters,
artist’s drawing and painting systems, and graphics packages for publishing and printing applications.
• Taking this approach, we can specify a coordinate position as an offset from the last position that was
referenced (called the current position).
1.19 SPECIFYING A TWO-DIMENSIONAL WORLD-COORDINATE REFERENCE FRAME IN
OPENGL
• The gluOrtho2D command is a function we can use to set up any 2D Cartesian reference frames.
• The arguments for this function are the four values defining the x and y coordinate limits for the picture
we want to display.
• Since the gluOrtho2D function specifies an orthogonal projection, we need also to be sure that the
coordinate values are placed in the OpenGL projection matrix.
• In addition, we could assign the identity matrix as the projection matrix before defining the world-
coordinate range.

• This would ensure that the coordinate values were not accumulated with any values we may have
previously set for the projection matrix.
• Thus, for our initial two-dimensional examples, we can define the coordinate frame for the screen
display window with the following statements:
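Those statements are presumably of the following form, where xmin, xmax, ymin, and ymax stand for the chosen world-coordinate limits:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (xmin, xmax, ymin, ymax);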

• The display window will then be referenced by coordinates (xmin, ymin) at the lower-left corner and
by coordinates (xmax, ymax) at the upper-right corner, as shown in Figure below:

• We can then designate one or more graphics primitives for display using the coordinate reference
specified in the gluOrtho2D statement.
• If the coordinate extents of a primitive are within the coordinate range of the display window, all of
the primitive will be displayed.
• Otherwise, only those parts of the primitive within the display-window coordinate limits will be
shown.
• Also, when we set up the geometry describing a picture, all positions for the OpenGL primitives must
be given in absolute coordinates, with respect to the reference frame defined in the gluOrtho2D
function.
1.20 OpenGL POINT FUNCTIONS
• The symbolic constant within glBegin() specifies the type of primitive to be displayed; for point plotting its
value is GL_POINTS.
• Each vertex listed between glBegin and glEnd is then displayed as a point.
• The size of the point is at least one pixel.
• Then this coordinate position, along with other geometric descriptions we may have in our scene, is
passed to the viewing routines.
• Unless we specify other attribute values, OpenGL primitives are displayed with a default size and
color.
• The default color for primitives is white, and the default point size is equal to the size of a single screen
pixel.
• Finally, the coordinate values can be listed explicitly in the glVertex function, or a single argument
can be used that references a coordinate position as an array. If we use an array specification for a
coordinate position, we need to append v (for “vector”) as a third suffix code.



Case 1: In the following example, three equally spaced points are plotted along a two-dimensional straight-
line path with a slope of 2 (see Figure 3). Coordinates are given as integer pairs:
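A sketch of the intended code; the particular coordinate values below are only an example of equally spaced integer pairs lying on a line of slope 2:

glBegin (GL_POINTS);
    glVertex2i (50, 100);    /* first point                 */
    glVertex2i (75, 150);    /* 25 right, 50 up (slope 2)   */
    glVertex2i (100, 200);   /* 25 right, 50 up again       */
glEnd ( );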

Case 2: Alternatively, we could specify the coordinate values for the preceding points in arrays such as

and call the OpenGL functions for plotting the three points as
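A combined sketch of the array declarations and the plotting calls; the array names and coordinate values are illustrative:

int point1[ ] = {50, 100};
int point2[ ] = {75, 150};
int point3[ ] = {100, 200};

glBegin (GL_POINTS);
    glVertex2iv (point1);    /* v suffix: the argument is an array (vector) */
    glVertex2iv (point2);
    glVertex2iv (point3);
glEnd ( );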

Case 3: In addition, here is an example of specifying two point positions in a three dimensional world
reference frame. In this case, we give the coordinates as explicit floating-point values:
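A sketch with illustrative floating-point values (any explicit 3D coordinates would serve):

glBegin (GL_POINTS);
    glVertex3f (-1.5, 2.0, 0.5);     /* first three-dimensional position  */
    glVertex3f (3.25, -0.75, 1.0);   /* second three-dimensional position */
glEnd ( );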

1.21 OpenGL LINE FUNCTIONS


• Primitive type is GL_LINES.
• Successive pairs of vertices are taken as endpoints and are connected to form individual line
segments.
• Note that successive segments usually are disconnected because the vertices are processed on a pair-
wise basis.
• We obtain one line segment between the first and second coordinate positions and another line segment
between the third and fourth positions.
• If the number of specified endpoints is odd, the last coordinate position is ignored.
Case 1: For example, if we have five coordinate positions, labeled p1 through p5, and each is represented as
a two-dimensional array, then the following code could generate the display shown in Figure 4(a):
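A sketch assuming p1 through p5 are two-element integer arrays as stated; with GL_LINES this produces the two disconnected segments p1–p2 and p3–p4, and p5 is ignored:

glBegin (GL_LINES);
    glVertex2iv (p1);
    glVertex2iv (p2);
    glVertex2iv (p3);
    glVertex2iv (p4);
    glVertex2iv (p5);    /* odd endpoint count: this vertex is ignored */
glEnd ( );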



Case 2: Using the same five coordinate positions as in the previous example, we obtain the polyline display in Figure
4(b) with the GL_LINE_STRIP primitive and the code below.
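A sketch using the GL_LINE_STRIP (polyline) primitive with the same five vertex arrays; successive vertices are connected end to end:

glBegin (GL_LINE_STRIP);
    glVertex2iv (p1);
    glVertex2iv (p2);
    glVertex2iv (p3);
    glVertex2iv (p4);
    glVertex2iv (p5);
glEnd ( );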

Case 3: The third OpenGL line primitive is GL_LINE_LOOP, which produces a closed polyline. Lines are
drawn as with GL_LINE_STRIP, but an additional line is drawn to connect the last coordinate position and the
first coordinate position. Figure 4(c) shows the display of our endpoint list when we select this line option,
using the code below.
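A sketch using GL_LINE_LOOP with the same vertex arrays; the loop adds a closing segment from p5 back to p1:

glBegin (GL_LINE_LOOP);
    glVertex2iv (p1);
    glVertex2iv (p2);
    glVertex2iv (p3);
    glVertex2iv (p4);
    glVertex2iv (p5);
glEnd ( );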

1.24 OpenGL CURVE FUNCTIONS


• Routines for generating basic curves, such as circles and ellipses, are not included as primitive
functions in the OpenGL core library. But this library does contain functions for displaying Bezier
splines, which are polynomials that are defined with a discrete point set.
• OpenGL Utility (GLU) library has routines for three-dimensional quadrics, such as spheres and
cylinders, as well as routines for producing rational B-splines, which are a general class of splines that
include the simpler Bézier curves. Figure 5 illustrates various polyline displays that could be used for
a circle segment.



• Using rational B-splines, we can display circles, ellipses, and other two-dimensional quadrics. In
addition, there are routines in the OpenGL Utility Toolkit (GLUT) that we can use to display some
three-dimensional quadrics, such as spheres and cones, and some other shapes.



1.25 POINT ATTRIBUTES
• Basically, we can set two attributes for points: color and size.
• In a state system: The displayed color and size of a point are determined by the current values stored in
the attribute list.
• Color components are set with RGB values or an index into a color table.
• For a raster system: Point size is an integer multiple of the pixel size, so that a large point is displayed
as a square block of pixels.
OpenGL Point-Attribute Functions
a) Color:
▪ The displayed color of a designated point position is controlled by the current color values in the state
list.
▪ Also, a color is specified with either the glColor function or the glIndex function.
b) Size:
▪ We set the size for an OpenGL point with glPointSize (size); and the point is then displayed as a
square block of pixels.
▪ Parameter size is assigned a positive floating-point value, which is rounded to an integer (unless the
point is to be antialiased).
▪ The number of horizontal and vertical pixels in the display of the point is determined by parameter
size.
▪ Thus, a point size of 1.0 displays a single pixel, and a point size of 2.0 displays a 2×2 pixel array.
▪ If we activate the antialiasing features of OpenGL, the size of a displayed block of pixels will be
modified to smooth the edges.
▪ The default value for point size is 1.0.
Example program:
▪ Attribute functions may be listed inside or outside of a glBegin/glEnd pair.
▪ Example: the following code segment plots three points in varying colors and sizes.
▪ The first is a standard-size red point, the second is a double-size green point, and the third is a triple-
size blue point:
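A sketch of such a code segment; the coordinate values are illustrative, and here the point size is set before each glBegin/glEnd pair so that each size change takes effect:

glColor3f (1.0, 0.0, 0.0);        /* red                        */
glPointSize (1.0);                /* standard size              */
glBegin (GL_POINTS);
    glVertex2i (50, 100);
glEnd ( );

glColor3f (0.0, 1.0, 0.0);        /* green                      */
glPointSize (2.0);                /* double-size point          */
glBegin (GL_POINTS);
    glVertex2i (75, 150);
glEnd ( );

glColor3f (0.0, 0.0, 1.0);        /* blue                       */
glPointSize (3.0);                /* triple-size point          */
glBegin (GL_POINTS);
    glVertex2i (100, 200);
glEnd ( );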

1.26 LINE ATTRIBUTE


• In OpenGL, a straight-line segment can be displayed with three attribute settings: line color, line width, and line style.
• OpenGL provides a function for setting the width of a line and another function for specifying a line
style, such as a dashed or dotted line.
• In addition, lines may be generated with other effects, such as pen and brush strokes.



• With some packages, particularly painting and drawing systems, we can select different pen and
brush styles directly. Options in this category include shape, size, and pattern for the pen or brush.
Some example pen and brush shapes are given in Figure 2.

OpenGL Line-Attribute Functions


a) OpenGL Line-Width Function
▪ Line width is set in OpenGL with the function
Syntax: glLineWidth (width);
▪ We assign a floating-point value to parameter width, and this value is rounded to the nearest
nonnegative integer.
▪ If the input value rounds to 0, the line is displayed with a standard width of 1.0, which is the default
width.
▪ Some implementations of the line-width function might support only a limited number of widths, and
some might not support widths other than 1.0.
▪ For a thick line, the magnitudes of the horizontal and vertical separations of the line endpoints, deltax and deltay,
are compared to determine whether the line is generated using vertical pixel spans or horizontal
pixel spans.
b) OpenGL Line-Style Function
▪ By default, a straight-line segment is displayed as a solid line.
▪ But we can also display dashed lines, dotted lines, or a line with a combination of dashes and dots.
▪ We can vary the length of the dashes and the spacing between dashes or dots.
▪ We set a current display style for lines with the OpenGL function:
Syntax: glLineStipple (repeatFactor, pattern);
i) Pattern:
o Parameter pattern is used to reference a 16-bit integer that describes how the line should be displayed.
o A 1 bit in the pattern denotes an “on” pixel position, and a 0 bit indicates an “off” pixel position.
o The pattern is applied to the pixels along the line path starting with the low-order bits in the pattern.
o The default pattern is 0xFFFF (every bit position has a value of 1), which produces a solid line.
ii) repeatFactor:



o Integer parameter repeatFactor specifies how many times each bit in the pattern is to be repeated before
the next bit in the pattern is applied.
o The default repeat value is 1.
iii) Polyline:
o With a polyline, a specified line-style pattern is not restarted at the beginning of each segment.
o It is applied continuously across all the segments, starting at the first endpoint of the polyline and
ending at the final endpoint for the last segment in the series.
Example:
o For line style, suppose parameter pattern is assigned the hexadecimal representation 0x00FF and the
repeat factor is 1.
o This would display a dashed line with eight pixels in each dash and eight pixel positions that are “off”
(an eight-pixel space) between two dashes.
o Also, since low order bits are applied first, a line begins with an eight-pixel dash starting at the first
endpoint.
o This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth, until the
second endpoint position is reached.
iv) Activating line style:
o Before a line can be displayed in the current line-style pattern, we must activate the line-style feature
of OpenGL with glEnable (GL_LINE_STIPPLE);
o If we forget to include this enable function, solid lines are displayed; that is, the default pattern 0xFFFF
is used to display line segments.
o At any time, we can turn off the line-pattern feature with glDisable (GL_LINE_STIPPLE);
o This replaces the current line-style pattern with the default pattern (solid lines).
o In the following program outline, we illustrate the use of the OpenGL line-attribute functions by plotting
three line graphs in different styles and widths. Figure 3 shows the data plots that could be generated
by this program.
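A sketch of such a program outline; the function names (plotSeries, drawLineGraphs), the stipple patterns, and the data values are illustrative and are not the original listing:

void plotSeries (GLint yOffset)
{
    GLint x;
    /* Plot a small, made-up data set as a connected polyline. */
    glBegin (GL_LINE_STRIP);
        for (x = 0; x <= 100; x += 20)
            glVertex2i (20 + x, yOffset + x / 4);
    glEnd ( );
}

void drawLineGraphs (void)
{
    glEnable (GL_LINE_STIPPLE);              /* activate the line-style feature  */

    glLineStipple (1, 0x1C47);               /* dash-dot pattern, default width  */
    plotSeries (20);

    glLineWidth (2.0);                       /* double-wide line                 */
    glLineStipple (1, 0x00FF);               /* dashed: 8 pixels on, 8 off       */
    plotSeries (50);

    glLineWidth (3.0);                       /* triple-wide line                 */
    glLineStipple (1, 0x0101);               /* dotted pattern                   */
    plotSeries (80);

    glDisable (GL_LINE_STIPPLE);             /* restore solid lines              */
}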



1.27 CURVE ATTRIBUTES
• Parameters for curve attributes are the same as those for straight-line segments.
• We can display curves with varying colors, widths, dot-dash patterns, and available pen or brush
options.
• Methods for adapting curve-drawing algorithms to accommodate attribute selections are similar to
those for line drawing.
• Raster curves of various widths can be displayed using the method of horizontal or vertical pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot vertical spans;
Case 2: when the slope magnitude |m| > 1.0, we plot horizontal spans.
Different methods to draw a curve:
Method 1: Using the circle-symmetry property, we generate the circle path with vertical spans in the octant from
x = 0 to x = y, and then reflect pixel positions about the line y = x to obtain the remainder of the curve.
Method 2: Another method for displaying thick curves is to fill in the area between two Parallel curve paths,
whose separation distance is equal to the desired width. We could do this using the specified curve path as
one boundary and setting up the second boundary either inside or outside the original curve path. This
approach, however, shifts the original curve path either inward or outward, depending on which direction we
choose for the second boundary.



Method 3: The pixel masks discussed for implementing line-style options could also be used in raster curve
algorithms to generate dashed or dotted patterns.
Method 4: Pen (or brush) displays of curves are generated using the same techniques discussed for straight-
line segments, as in Figure 4.

Method 5: Painting and drawing programs allow pictures to be constructed interactively by using a pointing
device, such as a stylus and a graphics tablet, to sketch various curve shapes.
1.28 LINE DRAWING ALGORITHM
• A straight-line segment in a scene is defined by coordinate positions for the endpoints of the segment.
• To display the line on a raster monitor, the graphics system must first project the endpoints to integer
screen coordinates and determine the nearest pixel positions along the line path between the two
endpoints; the line color is then loaded into the frame buffer at the corresponding pixel coordinates.
• The Cartesian slope-intercept equation for a straight line is
y = m · x + b ....(1)
with m as the slope of the line and b as the y intercept.

• Given that the two endpoints of a line segment are specified at positions (x0, y0) and (xend, yend), as
shown in the figure, we can determine values for the slope m and the y intercept b with the calculations
m = (yend − y0) / (xend − x0) ....(2)
b = y0 − m · x0 ....(3)
• Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in
eq. (2) and (3).
• For a given x interval δx along a line, we can compute the corresponding y interval δy from eq. (2) as
δy = m · δx ....(4)
• Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy / m ....(5)
• These equations form the basis for determining deflection voltages in analog displays, such as vector-
scan system, where arbitrarily small changes in deflection voltage are possible.
• For lines with slope magnitudes



|m| < 1, δx can be set proportional to a small horizontal deflection voltage, with the corresponding
vertical deflection voltage set proportional to δy as calculated from eq. (4).
|m| > 1, δy can be set proportional to a small vertical deflection voltage, with the corresponding
horizontal deflection voltage set proportional to δx as calculated from eq. (5).
|m| = 1, δx = δy, and the horizontal and vertical deflection voltages are equal.
• This scan-conversion process for straight lines is illustrated in Figure 3 with discrete sample positions
along the x axis.

a) DDA Algorithm (DIGITAL DIFFERENTIAL ANALYZER)


• The DDA is a scan-conversion line algorithm based on calculating either δy or δx.
• A line is sampled at unit intervals in one coordinate and the corresponding integer values nearest the
line path are determined for the other coordinate
• The DDA algorithm has three cases, based on the slope equation m = (y k+1 − y k) / (x k+1 − x k).
Case 1: if m < 1, x increments in unit intervals,
i.e., x k+1 = x k + 1; then m = (y k+1 − y k) / (x k+1 − x k) gives
y k+1 = y k + m ....(6)
• where k takes integer values starting from 0, for the first point, and increases by 1 until the final endpoint
is reached. Since m can be any real number between 0.0 and 1.0, each calculated y value must be rounded
to the nearest integer pixel position.
Case 2: if m > 1, y increments in unit intervals,
i.e., y k+1 = y k + 1; then m (x k+1 − x k) = 1, which gives
x k+1 = x k + (1/m) ....(7)
Case 3: if m = 1, both x and y increment in unit intervals,
i.e., x k+1 = x k + 1 and y k+1 = y k + 1.
• Equations (6) and (7) are based on the assumption that lines are to be processed from the left endpoint
to the right endpoint (Figure 2). If this processing is reversed, so that the starting endpoint is at the
right, then either we have δx = −1 and
y k+1 = y k − m ....(8)
or (when the slope magnitude is greater than 1) δy = −1 and
x k+1 = x k − (1/m) ....(9)



• Similar calculations are carried out using equations (6) through (9) to determine the pixel positions
along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting
endpoint is at the left, we set δx = 1 and calculate y values with eq. (6).
• When the starting endpoint is at the right (for the same slope), we set δx = −1 and obtain y positions using
eq. (8).
• This algorithm is summarized in the following procedure, which accepts as input two integer screen
positions for the endpoints of a line segment.
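A sketch of such a procedure, using the setPixel routine assumed earlier; it samples along the coordinate with the greater change and rounds the other coordinate at each step (variable names are illustrative, and nonnegative screen coordinates are assumed):

#include <stdlib.h>   /* for abs() */

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    /* Sample at unit intervals along the coordinate with the larger change. */
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);

    if (steps == 0) {                  /* degenerate case: both endpoints coincide */
        setPixel (x0, y0);
        return;
    }

    xIncrement = (float) dx / (float) steps;   /* either +-1 or 1/m */
    yIncrement = (float) dy / (float) steps;   /* either m or +-1   */

    setPixel ((int) (x + 0.5), (int) (y + 0.5));       /* plot the first endpoint */
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel ((int) (x + 0.5), (int) (y + 0.5));   /* round to nearest pixel  */
    }
}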
• if m < 1, where x is incremented by 1 at each step,
y k+1 = y k + m (from eq. (6))
• So initially (k = 0), assuming (x0, y0) as the starting point, we assign x = x0, y = y0.
Illuminate pixel (x, round(y))
x1 = x + 1, y1 = y + m
Illuminate pixel (x1, round(y1))
x2 = x1 + 1, y2 = y1 + m
Illuminate pixel (x2, round(y2))
and so on, until the final endpoint is reached.
• if m > 1, where y is incremented by 1 at each step,
x k+1 = x k + (1/m) (from eq. (7))
• So initially (k = 0), assuming (x0, y0) as the starting point, we assign x = x0, y = y0.
Illuminate pixel (round(x), y)
x1 = x + (1/m), y1 = y + 1
Illuminate pixel (round(x1), y1)
x2 = x1 + (1/m), y2 = y1 + 1
Illuminate pixel (round(x2), y2)
and so on, until the final endpoint is reached.
• The DDA algorithm is a faster method for calculating pixel positions than one that directly implements the line equation (1).
• It eliminates the multiplication by making use of raster characteristics, so that appropriate increments
are applied in the x or y directions to step from one pixel position to another along the line path.
• The accumulation of round-off error in successive additions of the floating-point increment, however,
can cause the calculated pixel positions to drift away from the true line path for long line segments.
Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time
consuming.
• We can improve the performance of the DDA algorithm by separating the increments m and 1/m into integer
and fractional parts so that all calculations are reduced to integer operations.



Advantages of DDA algorithm
o It is a faster algorithm than directly computing positions from the line equation.
o It is a simple algorithm.
Disadvantage of DDA algorithm
o Floating point arithmetic is time consuming.
o Poor end point accuracy.
b) BRESENHAM’S ALGORITHM:
• It is an efficient raster line-generating algorithm that uses only incremental integer calculations.
• To illustrate Bresenham’s approach, we first consider the scan-conversion process for lines with
positive slope less than 1.0.
• Pixel positions along a line path are then determined by sampling at unit x intervals. Starting from the
left endpoint (x0, y0) of a given line, we step to each successive column (x position) and plot the pixel
whose scan-line y value is closest to the line path.
• Consider the equation of a straight line y=mx+c where m=dy/dx.




• Figure 6 demonstrates the kth step in this process. Assuming that we have determined that the pixel at
(xk,yk) is to be displayed, we next need to decide which pixel to plot in column x k+1 = xk + 1. Our
choices are the pixels at positions (xk + 1, yk ) and (xk + 1, yk + 1).

• At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path as
dlower and dupper (Figure 7). The y coordinate on the mathematical line at pixel column position xk + 1
is calculated as
y = m (xk + 1) + b
so that dlower = y − yk and dupper = (yk + 1) − y.



• This recursive calculation of decision parameters is performed at each integer x position, starting at
the left coordinate endpoint of the line: if pk < 0, the next pixel is (xk + 1, yk) and p k+1 = pk + 2∆y;
otherwise, the next pixel is (xk + 1, yk + 1) and p k+1 = pk + 2∆y − 2∆x.
• The first parameter, p0, is evaluated from Equation 14 at the
starting pixel position (x0, y0) and with m evaluated as ∆y/∆x as follows:
p0 = 2∆y − ∆x
• We summarize Bresenham line drawing for a line with a positive slope less than 1 in the following
outline of the algorithm. The constants 2∆y and 2∆y − 2∆x are calculated once for each line to be scan-
converted, so the arithmetic involves only integer addition and subtraction of these two constants. Step
4 of the algorithm will be performed a total of ∆x times.



Example: Bresenham Line Drawing
To illustrate the algorithm, we digitize the line with endpoints (20, 10) and (30, 18). This line has a slope
of 0.8, with ∆x = 10, ∆y = 8
The initial decision parameter has the value p0 = 2∆y − ∆x = 6
and the increments for calculating successive decision parameters are
2∆y = 16, 2∆y − 2∆x = −4
We plot the initial point (x0, y0) = (20, 10), and determine successive pixel positions along the line path from
the decision parameter as follows
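Applying these increments (add 2∆y = 16 to pk when pk < 0, keeping the same y; otherwise add 2∆y − 2∆x = −4 and step y up by 1), the successive values work out as follows:

k    pk    (x k+1, y k+1)
0     6    (21, 11)
1     2    (22, 12)
2    −2    (23, 12)
3    14    (24, 13)
4    10    (25, 14)
5     6    (26, 15)
6     2    (27, 16)
7    −2    (28, 16)
8    14    (29, 17)
9    10    (30, 18)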

A plot of the pixels generated along this line path is shown in Figure 8.



An implementation of Bresenham line drawing for slopes in the range 0 < m < 1.0 is given in the following
procedure. Endpoint pixel positions for the line are passed to this procedure, and pixels are plotted from the
left endpoint to the right endpoint.
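A sketch of such a procedure, using the setPixel routine assumed earlier; it follows the decision-parameter scheme derived above for slopes between 0 and 1:

#include <stdlib.h>   /* for abs() */

void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = abs (xEnd - x0), dy = abs (yEnd - y0);
    int p = 2 * dy - dx;                       /* initial decision parameter p0     */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Plot from the left endpoint to the right endpoint. */
    if (x0 > xEnd) {
        x = xEnd;   y = yEnd;   xEnd = x0;
    }
    else {
        x = x0;     y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        /* stay on the same scan line        */
        else {
            y++;                               /* step up to the next scan line     */
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}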
1.29 CIRCLE GENERATION ALGORITHM
a) Properties of Circles
• A circle (Figure 11) is defined as the set of points that are all at a given distance r from a center position
(xc,yc).

• For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in
Cartesian coordinates as
(x − xc)² + (y − yc)² = r²
• We could use this equation to calculate the position of points on a circle circumference by stepping
along the x axis in unit steps from xc − r to xc + r and calculating the corresponding y values at each
position as
y = yc ± sqrt( r² − (x − xc)² )
• One problem with this approach is that it involves considerable computation at each step. Moreover,
the spacing between plotted pixel positions is not uniform, as demonstrated in Figure 12.

• We could adjust the spacing by interchanging x and y (stepping through y values and calculating x
values) whenever the absolute value of the slope of the circle is greater than 1; but this simply increases
the computation and processing required by the algorithm.
• Another way to eliminate the unequal spacing shown in Figure 12 is to calculate points along the
circular boundary using polar coordinates r and θ (Figure 11).



• Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ,  y = yc + r sin θ
Eight way symmetry


• The shape of the circle is similar in each quadrant.
• Therefore, if we determine the curve positions in the first quadrant, we can generate the circle positions
in the second quadrant of the xy plane by reflecting them about the y axis.
• The circle sections in the third and fourth quadrants can be obtained from the sections in the first and second
quadrants by considering the symmetry about the x axis. We can take this one step further and note that
there is also symmetry between octants.
• The Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45◦ line
dividing the two octants.
• These symmetry conditions are illustrated in Figure 13, where a point at position (x, y) on a one-eighth
circle sector is mapped into the seven circle points in the other octants of the xy plane.
• Taking advantage of the circle symmetry in this way, we can generate all pixel positions around a
circle by calculating only the points within the sector from x = 0 to x = y.
• The slope of the curve in this octant has a magnitude less than or equal to 1.0. At x = 0, the circle slope
is 0, and at x = y, the slope is −1.0.

b) Midpoint Circle Algorithm


• The midpoint circle algorithm generates all points on a circle centered at the origin by incrementing
around the circle octant from x = 0 to x = y and using symmetry to obtain the rest.
• The strategy is to select which of 2 pixels is closer to the circle by evaluating a function at the midpoint
between the 2 pixels
• To apply the midpoint method, we define a circle function as
fcirc (x, y) = x² + y² − r²
• To summarize, the relative position of any point (x, y) can be determined by checking the sign of the
circle function as follows:
fcirc (x, y) < 0  if (x, y) is inside the circle boundary
fcirc (x, y) = 0  if (x, y) is on the circle boundary
fcirc (x, y) > 0  if (x, y) is outside the circle boundary


• Figure 14 shows the midpoint between the two candidate pixels at sampling position xk + 1. Assuming
that we have just plotted the pixel at (xk , yk ), we next need to determine whether the pixel at position
(xk + 1, yk ) or the one at position (xk + 1, yk − 1) is closer to the circle.
• Our decision parameter is the circle function evaluated at the midpoint between these two pixels:
pk = fcirc (xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r²
• Successive decision parameters are obtained using incremental calculations.


• We obtain a recursive expression for the next decision parameter by evaluating the circle function at
sampling position x k+1 + 1 = xk + 2:
p k+1 = fcirc (x k+1 + 1, y k+1 − 1/2) = pk + 2(xk + 1) + (y² k+1 − y² k) − (y k+1 − y k) + 1
where y k+1 is either yk or yk − 1, depending on the sign of pk.


• Increments for obtaining p k+1 are either 2x k+1 + 1 (if pk is negative) or 2x k+1 + 1 − 2y k+1. Evaluation of
the terms 2x k+1 and 2y k+1 can also be done incrementally as
2x k+1 = 2x k + 2,  2y k+1 = 2y k − 2
• At the start position (0, r), these two terms have the values 0 and 2r, respectively. Each successive
value for the 2x k+1 term is obtained by adding 2 to the previous value, and each successive value for
the 2y k+1 term is obtained by subtracting 2 from the previous value.
• The initial decision parameter is obtained by evaluating the circle function at the start position
(x0, y0) = (0, r):
p0 = fcirc (1, r − 1/2) = 1 + (r − 1/2)² − r² = 5/4 − r



• If the radius r is specified as an integer, we can simply round p0 to
p0 = 1 − r (for r an integer)
because all increments are integers.
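Putting the preceding pieces together, here is a sketch of the midpoint circle procedure; it computes pixel positions in one octant and uses a helper, circlePlotPoints (an assumed name), to apply eight-way symmetry, calling the setPixel routine assumed earlier:

void circlePlotPoints (int xc, int yc, int x, int y)
{
    /* Plot the eight symmetric points for an octant position (x, y). */
    setPixel (xc + x, yc + y);
    setPixel (xc - x, yc + y);
    setPixel (xc + x, yc - y);
    setPixel (xc - x, yc - y);
    setPixel (xc + y, yc + x);
    setPixel (xc - y, yc + x);
    setPixel (xc + y, yc - x);
    setPixel (xc - y, yc - x);
}

void circleMidpoint (int xc, int yc, int radius)
{
    int x = 0, y = radius;
    int p = 1 - radius;                 /* initial decision parameter (integer r)  */

    circlePlotPoints (xc, yc, x, y);    /* plot the starting position (0, r)       */
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;             /* midpoint inside: keep the same y        */
        else {
            y--;                        /* midpoint outside or on the boundary     */
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints (xc, yc, x, y);
    }
}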


Example: Midpoint Circle Drawing
Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the
circle octant in the first quadrant from x = 0 to x = y. The initial value of the decision parameter is
p0 = 1 − r = −9
For the circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), and initial increment terms
for calculating the decision parameters are
2x0 = 0, 2y0 = 20
Successive midpoint decision parameter values and the corresponding coordinate positions along the circle
path are listed in the following table:
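Applying the recurrence derived above (p k+1 = pk + 2x k+1 + 1 when pk < 0, and p k+1 = pk + 2x k+1 + 1 − 2y k+1 otherwise), the table works out as:

k    pk    (x k+1, y k+1)    2x k+1    2y k+1
0    −9    (1, 10)            2         20
1    −6    (2, 10)            4         20
2    −1    (3, 10)            6         20
3     6    (4, 9)             8         18
4    −3    (5, 9)            10         18
5     8    (6, 8)            12         16
6     5    (7, 7)            14         14

At this point x has reached y, so the octant is complete and the loop terminates.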

A plot of the generated pixel positions in the first quadrant is shown in Figure 15.



1.29 a) BRESENHAM’S CIRCLE GENERATION ALGORITHMS
**Refer written class notes.
