
UNIT- 1

Introduction to
Interactive Computer
Graphics
Historical Introduction

Computer Graphics

• Computer graphics deals with all aspects of creating images with a computer
­ Hardware
­ Software
­ Applications

Example

• Where did this image come from?

• What hardware/software did we need to produce it?

Preliminary Answer

• Application: The object is an artist’s rendition of the sun for an animation to be shown in a domed environment (planetarium)
• Software: Maya for modeling and rendering but Maya is
built on top of OpenGL
• Hardware: PC with graphics card for modeling and
rendering

Basic Graphics System

[Figure: a basic graphics system: input devices, image formed in the frame buffer (FB), output device]
CRT

Can be used either as a line-drawing device (calligraphic) or to display the contents of the frame buffer (raster mode)

Computer Graphics: 1950-1960

• Computer graphics goes back to the earliest days of computing
­ Strip charts
­ Pen plotters
- Simple displays using D/A converters to go from computer to calligraphic CRT
• Cost of refresh for CRT too high
­ Computers slow, expensive, unreliable

Computer Graphics: 1960-1970

• Wireframe graphics
­ Draw only lines
• Sketchpad
• Display Processors
• Storage tube

[Figure: wireframe representation of sun object]

Sketchpad

• Ivan Sutherland’s PhD thesis at MIT


­ Recognized the potential of man-machine interaction
­ Loop
• Display something
• User moves light pen
• Computer generates new display
­ Sutherland also created many of the now common algorithms for
computer graphics

Display Processor

• Rather than have the host computer try to refresh the display, use a special-purpose computer called a display processor (DPU)

• Graphics stored in display list (display file) on display processor


• Host compiles display list and sends to DPU

Direct View Storage Tube

• Created by Tektronix
­ Did not require constant refresh
­ Standard interface to computers
• Allowed for standard software
• Plot3D in Fortran
­ Relatively inexpensive
• Opened door to use of computer graphics for CAD community

Computer Graphics: 1970-1980

• Raster Graphics
• Beginning of graphics standards
- IFIP (International Federation for Information Processing)
• GKS: European effort
– Becomes ISO 2D standard
• Core: North American effort
– 3D but fails to become ISO standard

• Workstations and PCs

Raster Graphics

• Image produced as an array (the raster) of picture elements (pixels) in the frame buffer

Raster Graphics

• Allows us to go from lines and wireframe images to filled polygons

PCs and Workstations

• Although we no longer make the distinction between workstations and PCs, historically they evolved from different roots
­ Early workstations characterized by
• Networked connection: client-server model
• High-level of interactivity
­ Early PCs included frame buffer as part of user memory
• Easy to change contents and create images

Computer Graphics: 1980-1990

Realism comes to computer graphics

[Figure: smooth shading, environment mapping, bump mapping]

Computer Graphics: 1980-1990

• Special purpose hardware


­ Silicon Graphics geometry engine
• VLSI (very large-scale integration) implementation of graphics pipeline
• Industry-based standards
­ PHIGS
­ RenderMan
• Networked graphics: X Window System
• Human-Computer Interface (HCI)

Computer Graphics: 1990-2000

• OpenGL API
• Completely computer-generated feature-length movies
(Toy Story) are successful
• New hardware capabilities
­ Texture mapping
­ Blending
­ Accumulation, stencil buffers

Computer Graphics: 2000-2010

• Photorealism
• Graphics cards (GPU) for PCs dominate market
­ Nvidia, ATI
• Game boxes and game players determine direction of the market (Wii, Kinect, etc.)
• Computer graphics routine in movie industry: Maya,
Lightwave
• Programmable pipelines

Computer Graphics: 2010-

• Mobile Computing
­ iPhone
• Cloud Computing
­ Amazon Web Services (AWS)
• Virtual Reality
­ Oculus Rift
• Artificial Intelligence
­ Big Data/Deep Learning
­ Google Car

3D Graphics Techniques and Terminology
• Rendering is the process of drawing a single image of a 3-dimensional scene.
• The process begins by producing a mathematical model of the object to be rendered.
• Such a model should describe not only the shape of the object but its color, its surface
finish (shiny, matte, transparent, fuzzy, scaly, rocky).
• Producing realistic models is extremely complex, but luckily it is not our main concern.
• The scene model should also include information about the location and characteristics of
the light sources (their color, brightness), and the atmospheric nature of the medium
through which the light travels (is it foggy or clear).
• In addition we will need to know the location of the viewer.
• We can think of the viewer as holding a “synthetic camera”, through which the image is to
be photographed.
• We need to know the characteristics of this camera (its focal length, for example).
• Based on all of this information, we need to perform a number of steps to produce our
desired image.
Techniques…
Projection: Project the scene from 3-dimensional space onto the 2-dimensional
image plane in our synthetic camera.
Color and shading: For each point in our image we need to determine its color,
which is a function of the object’s surface color, its texture, the relative positions
of light sources, and (in more complex illumination models) the indirect
reflection of light off of other surfaces in the scene.
Hidden surface removal: Elements that are closer to the camera obscure more
distant ones.
We need to determine which surfaces are visible and which are not.
Rasterization: Once we know what colors to draw for each point in the image,
the final step is that of mapping these colors onto our display device.
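The projection step above can be sketched in a few lines. This is only an illustrative sketch: the pinhole-camera setup (camera at the origin looking down the -z axis, image plane at distance f) and the function name are our assumptions, not part of any particular graphics API.

```python
# Minimal sketch of perspective projection with a synthetic pinhole camera
# at the origin looking down the -z axis, focal length f.

def project(point, f=1.0):
    """Project a 3-D point (x, y, z) onto the 2-D image plane z = -f."""
    x, y, z = point
    if z >= 0:
        raise ValueError("point must be in front of the camera (z < 0)")
    # Perspective division: similar triangles give the image coordinates.
    return (-f * x / z, -f * y / z)

print(project((2.0, 1.0, -4.0)))  # (0.5, 0.25)
```

Points twice as far from the camera project to image coordinates half as large, which is exactly the foreshortening effect the synthetic camera is meant to capture.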
Techniques…
• Modeling:
• Model types: Polyhedral models, hierarchical models, fractals and fractal
dimension.
• Curves and Surfaces: Representations of curves and surfaces, interpolation,
Bezier, B-spline curves and surfaces, NURBS, subdivision surfaces.
• Surface finish: Texture-, bump-, and reflection-mapping.
• Projection:
• 3-d transformations and perspective: Scaling, rotation, translation,
orthogonal and perspective transformations, 3-d clipping.
• Hidden surface removal: Back-face culling, z-buffer method, depth-sort.
• Issues in Realism:
• Light and shading: Diffuse and specular reflection, the Phong and Gouraud
shading models, light transport and radiosity.
• Ray tracing: Ray-tracing model, reflective and transparent objects, shadows.
• Color: Gamma-correction, halftoning, and color models.

• Computer graphics is all about producing pictures (realistic or
stylistic) by computer.
• How are graphical images represented?
• There are four basic types that make up virtually all computer-generated pictures:
• polylines,
• filled regions,
• text, and
• raster images.
• Polylines: A polyline (or more properly, a polygonal curve) is a finite sequence of line segments joined end to end.
• These line segments are called edges, and the endpoints of the line
segments are called vertices.
• A single line segment is a special case. (An infinite line, which
stretches to infinity on both sides, is not usually considered to be a
polyline.)
• A polyline is closed if it ends where it starts.
• It is simple if it does not self-intersect.
• Self-intersections include such things as two edges crossing one
another, a vertex intersecting in the interior of an edge, or more
than two edges sharing a common vertex.
• A simple, closed polyline is also called a simple polygon.
• If all its internal angles are at most 180°, then it is a convex polygon.
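The convexity condition can be tested directly from the vertex list. The sketch below (function name and representation are ours, not the text's) checks that every turn between consecutive edges of a closed polyline has the same sign.

```python
# Illustrative convexity test for a closed polyline given as (x, y) vertices:
# a simple polygon is convex iff all turns agree in orientation.

def is_convex(vertices):
    n = len(vertices)
    signs = set()
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        # z-component of the cross product of edges AB and BC
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1  # all turns agree -> convex

print(is_convex([(0, 0), (2, 0), (2, 2), (0, 2)]))           # True (a square)
print(is_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))   # False (a notch)
```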

• The sequence of vertices is sufficient to encode the geometry of a polyline.


• In contrast, the way in which the polyline is rendered is
determined by a set of properties called graphical attributes.
• These include elements such as
• color,
• line width, and
• line style (solid, dotted, dashed),
• how consecutive segments are joined (rounded, mitered, or beveled).
• Many graphics systems support common special cases of
curves such as circles, ellipses, circular arcs, and Bezier
and B-splines.
• We should probably include curves as a generalization of
polylines.
• Most graphics drawing systems implement curves by
breaking them up into a large number of very small
polylines, so this distinction is not very important.

• Filled regions: Any simple, closed polyline in the
plane defines a region consisting of an inside and
outside.
• (This is a typical example of an utterly obvious fact
from topology that is notoriously hard to prove. It is
called the Jordan curve theorem.)
• We can fill any such region with a color or repeating
pattern.
• In some instances the bounding polyline itself is also drawn; in others it is not.

• A polyline with embedded “holes” also naturally defines a region that can be filled.
• In fact this can be generalized by nesting holes within holes
(alternating color with the background color).
• Even if a polyline is not simple, it is possible to generalize the
notion of interior.
• Given any point, shoot a ray to infinity.
• If it crosses the boundary an odd number of times it is colored.
• If it crosses an even number of times, then it is given the
background color.
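The ray-crossing rule just described can be sketched directly. This is an illustrative implementation (names are ours) that shoots a horizontal ray toward +x and counts how many boundary edges it crosses.

```python
# Even-odd fill rule: a point is inside (colored) if a ray from it crosses
# the boundary an odd number of times, outside (background) if even.

def inside_even_odd(point, polygon):
    px, py = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) cross the horizontal ray y = py, x > px?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside_even_odd((2, 2), square))  # True: ray crosses once
print(inside_even_odd((5, 2), square))  # False: ray crosses zero times
```

The same test works unchanged for non-simple polygons and polygons with holes, which is exactly why the even-odd rule generalizes the notion of interior.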

• Text: Although we do not normally think of text as a graphical output, it occurs frequently within graphical images such as engineering diagrams.
• Text can be thought of as a sequence of characters in some font.
• As with polylines there are numerous attributes which affect how the
text appears.
• This includes the font’s face (Times-Roman, Helvetica, Courier, for example), its weight (normal, bold, light), its style or slant (normal, italic, oblique, for example), its size (usually measured in points, a printer’s unit of measure equal to 1/72 inch), and its color.
• Raster Images: Raster images are what most of us think of
when we think of a computer generated image.
• Such an image is a 2-dimensional array of square (or generally
rectangular) cells called pixels (short for “picture elements”).
• Such images are sometimes called pixel maps.
• The simplest example is an image made up of black and white
pixels, each represented by a single bit (0 for black and 1 for
white).
• This is called a bitmap.
• For gray-scale (or monochrome) raster images, each pixel is
represented by assigning it a numerical value over some range
(e.g., from 0 to 255, ranging from black to white).
• There are many possible ways of encoding color images.
• Graphics Devices: The standard interactive graphics device today is called a
raster display.
• As with a television, the display consists of a two-dimensional array of pixels.
• There are two common types of raster displays.
• Video displays: consist of a screen with a phosphor coating that allows each pixel to be illuminated momentarily when struck by an electron beam.
• A pixel is either illuminated (white) or not (black).
• The level of intensity can be varied to achieve arbitrary gray values.
• Because the phosphor only holds its color briefly, the image is repeatedly
rescanned, at a rate of at least 30 times per second.
• Liquid crystal displays (LCDs): use an electric field to alter the polarization of crystalline molecules in each pixel.
• The light shining through the pixel is already polarized in some direction.
• By changing the polarization of the pixel, it is possible to vary the amount of light
which shines through, thus controlling its intensity.
• Irrespective of the display hardware, the computer program stores
the image in a two-dimensional array in RAM of pixel values
(called a frame buffer).
• The display hardware produces the image line-by-line (called
raster lines).
• A hardware device called a video controller constantly reads the
frame buffer and produces the image on the display.
• The frame buffer is not a device.
• It is simply a chunk of RAM memory that has been allocated for
this purpose.
• A program modifies the display by writing into the frame buffer,
and thus instantly altering the image that is displayed.
• An example of this type of configuration is shown below.
• More sophisticated graphics systems come in the form of a display processor (more commonly known as a graphics accelerator or graphics card to PC users).
• A typical display processor will provide assistance for a number of operations
including the following:
• Transformations: Rotations and scaling used for moving objects and the viewer’s
location.
• Clipping: Removing elements that lie outside the viewing window.
• Projection: Applying the appropriate perspective transformations.
• Shading and Coloring: The color of a pixel may be altered by increasing its
brightness.
• Simple shading involves smooth blending between some given values.
• Modern graphics cards support more complex procedural shading.
• Texturing: Coloring objects by “painting” textures onto their surface.
• Textures may be generated by images or by procedures.
• Hidden-surface elimination: Determines which of the various objects that project
to the same pixel is closest to the viewer and hence is displayed.
• Color: The method chosen for representing color depends on the
characteristics of the graphics output device (e.g., whether it is additive as
are video displays or subtractive as are printers).
• It also depends on the number of bits per pixel that are provided, called
the pixel depth.
• For example, the method most commonly used currently in video and color LCD displays is a 24-bit RGB representation.
• Each pixel is represented as a mixture of red, green and blue components, and each of these three colors is represented as an 8-bit quantity (0 for black and 255 for the brightest color).
• In many graphics systems it is common to add a fourth component,
sometimes called alpha, denoted A.
• This component is used to achieve various special effects, most
commonly in describing how opaque a color is.
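A common way to hold such a pixel is to pack the four 8-bit components into a single 32-bit integer. The sketch below is illustrative: the RGBA byte order is just one convention (real systems also use BGRA and others), and the helper names are ours.

```python
# Pack/unpack an RGBA pixel as one 32-bit integer (8 bits per component).

def pack_rgba(r, g, b, a=255):
    for c in (r, g, b, a):
        assert 0 <= c <= 255
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel):
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

p = pack_rgba(255, 128, 0)   # opaque orange
print(hex(p))                # 0xff8000ff
print(unpack_rgba(p))        # (255, 128, 0, 255)
```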
• In some instances 24-bits may be unacceptably large.
• For example, when downloading images from the web, 24-bits of
information for each pixel may be more than what is needed.
• A common alternative is to use a color map, also called a color
look-up-table (LUT). (This is the method used in most gif files, for
example.)
• In a typical instance, each pixel is represented by an 8-bit quantity
in the range from 0 to 255.
• This number is an index into a 256-element array, each of whose entries is a 24-bit RGB value.
• To represent the image, we store both the LUT and the image itself.
• The 256 different colors are usually chosen so as to produce the best
possible reproduction of the image.
• For example, if the image is mostly blue and red, the LUT will
contain many more blue and red shades than others.
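Indexed color can be sketched as follows. The tiny 3-entry LUT and the shades chosen are purely illustrative; a real table, as described above, would hold 256 entries tuned to the image.

```python
# Indexed color: each pixel stores a small index into a look-up table (LUT)
# of full 24-bit RGB entries, so the image itself needs far fewer bits.

lut = [
    (0, 0, 0),       # index 0: black
    (200, 30, 30),   # index 1: a red shade
    (30, 30, 200),   # index 2: a blue shade
]

indexed_image = [
    [1, 1, 2],
    [0, 2, 2],
]

# Expand the indexed image into full RGB pixels for display.
rgb_image = [[lut[i] for i in row] for row in indexed_image]
print(rgb_image[0][2])  # (30, 30, 200)
```

To represent the image we store both the LUT and the index array, which is the storage trade-off the text describes.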
• A typical photorealistic image contains many more than 256
colors.
• This can be overcome by a fair amount of clever trickery to fool
the eye into seeing many shades of colors where only a small
number of distinct colors exist.
• This process is called digital halftoning.
• Colors are approximated by putting combinations of similar
colors in the same area.
• The human eye averages them out.
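One classic halftoning technique (offered here as an illustrative example, not necessarily the exact method the slides had in mind) is ordered dithering with a Bayer threshold matrix. The sketch assumes 8-bit grayscale input.

```python
# Ordered dithering: compare each gray value (0-255) against a tiled
# threshold matrix; the density of resulting black/white dots
# approximates the original shade when averaged by the eye.

BAYER_2X2 = [[0, 2],
             [3, 1]]  # scaled below to the 0-255 range

def halftone(gray_image):
    out = []
    for y, row in enumerate(gray_image):
        out_row = []
        for x, value in enumerate(row):
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            out_row.append(1 if value > threshold else 0)
        out.append(out_row)
    return out

flat_gray = [[128] * 4 for _ in range(4)]
print(halftone(flat_gray))
```

A flat 50% gray comes out as alternating rows 1,0,1,0 and 0,1,0,1: a checkerboard whose local average is half on, which the eye perceives as gray.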
Application of Computer Graphics
Computer-Aided Design for engineering and
architectural systems etc.
• Objects may be displayed in a wireframe outline form.
• Multi-window environment is also favored for
producing various zooming scales and views.
• Animations are useful for testing performance.
Presentation Graphics
• To produce illustrations which summarize various kinds
of data.
• Besides 2D, 3D graphics are good tools for reporting more complex data.
Computer Art
• Painting packages are available.
• With a cordless, pressure-sensitive stylus, artists can produce electronic paintings which simulate different brush strokes, brush widths, and colors.
• Photorealistic techniques, morphing and animations are very useful in
commercial art.
• For films, 24 frames per second are required.
• For video monitor, 30 frames per second are required.
Entertainment
• Motion pictures, Music videos, and TV shows, Computer games
Education and Training
• Training with computer-generated models of specialized systems such
as the training of ship captains and aircraft pilots.
Visualization
• For analyzing scientific, engineering, medical and business data or
behavior.
• Converting data to visual form can help in understanding massive volumes of data very efficiently.
Image Processing
• Image processing applies techniques to modify or interpret existing pictures.
• It is widely used in medical applications.
Graphical User Interface
• Multiple windows, icons, and menus allow a computer setup to be utilized more efficiently.
Video Display devices
• Cathode-Ray Tubes (CRT) - long the most common video display device.
Cont…
• An electron gun emits a beam of electrons, which
passes through focusing and deflection systems and
hits on the phosphor-coated screen.
• The number of points displayed on a CRT is referred to as the resolution (e.g. 1024x768).
• Different phosphors emit small light spots of
different colors, which can combine to form a range
of colors.
• A common methodology for color CRT display is
the Shadow-mask method
• The light emitted by the phosphor fades very rapidly, so the picture must be redrawn repeatedly.
• There are 2 kinds of redrawing mechanisms: Raster-Scan and
Random-Scan
1. Raster-Scan
• The electron beam is swept across the screen one row at a
time from top to bottom.
• As it moves across each row, the beam intensity is turned
on and off to create a pattern of illuminated spots.
• This scanning process is called refreshing.
• Each complete scanning of a screen is normally called a
frame.
• The refreshing rate, called the frame rate, is normally 60 to
80 frames per second, or described as 60 Hz to 80 Hz.
• Picture definition is stored in a memory area called the
frame buffer.
• This frame buffer stores the intensity values for all the
screen points.
• Each screen point is called a pixel (picture element).
• On black and white systems, the frame buffer storing the
values of the pixels is called a bitmap.
• Each entry in the bitmap is a single bit that determines whether the intensity of the pixel is on (1) or off (0).
• On color systems, the frame buffer storing the values of the
pixels is called a pixmap (Though nowadays many graphics
libraries name it as bitmap too).
• Each entry in the pixmap occupies a number of bits to
represent the color of the pixel.
• For a true color display, the number of bits for each entry is 24 (8 bits per red/green/blue channel; each channel has 2^8 = 256 levels of intensity value,
• ie. 256 voltage settings for each of the red/green/blue electron guns).
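The frame-buffer idea above can be sketched in software. The dimensions and helper name are illustrative, and the display hardware that would scan the buffer out line by line is not modeled.

```python
# A software frame buffer: a 2-D array of 24-bit RGB pixels that a program
# writes into; raster hardware would repeatedly scan it out to the screen.

WIDTH, HEIGHT = 8, 4
frame_buffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:   # clip to the screen
        frame_buffer[y][x] = color

# Draw one horizontal raster line in white.
for x in range(WIDTH):
    set_pixel(x, 2, (255, 255, 255))

print(frame_buffer[2][0])  # (255, 255, 255)
```

Writing into the array is all a program has to do; on the next refresh cycle the displayed image changes, which is why the frame buffer is "just RAM" rather than a device.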
2. Random-Scan (Vector Display)
• The CRT's electron beam is directed only to
the parts of the screen where a picture is to
be drawn.
• The picture definition is stored as a set of
line-drawing commands in a refresh display
file or a refresh buffer in memory.

• Random-scan displays generally have higher resolution than raster systems and can produce smooth line drawings; however, they cannot display realistic shaded scenes.
Display Controller
• For a raster display device, the display controller reads the frame buffer and generates the control signals for the screen,
• ie. the signals for horizontal scanning and vertical scanning.

•Most display controllers include a color map (or video look-up table).

•The major function of a color map is to provide a mapping between the input pixel value and the output color.
Anti-Aliasing

•On dealing with integer pixel positions, jagged or stair-step appearances happen very often.

•This distortion of information due to undersampling is called aliasing.

•A number of anti-aliasing methods have been developed to compensate for this problem.

•One way is to display objects at higher resolution.

•However, there is a limit to how big we can make the frame buffer while still maintaining an acceptable refresh rate.
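The higher-resolution approach can be sketched as supersampling: render at double resolution, then average each 2x2 block of samples down to one displayed pixel. This is an illustrative sketch (grayscale only, names are ours), not a specific system's algorithm.

```python
# Supersampling anti-aliasing: average 2x2 blocks of a high-resolution
# grayscale image down to one pixel each, softening jagged edges.

def downsample_2x(image):
    """Average 2x2 blocks (even dimensions assumed)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            s = (image[y][x] + image[y][x + 1] +
                 image[y + 1][x] + image[y + 1][x + 1])
            row.append(s // 4)
        out.append(row)
    return out

hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [0, 255, 255, 255],
          [255, 255, 255, 255]]
print(downsample_2x(hi_res))  # [[0, 255], [191, 255]]
```

The partially covered block averages to an intermediate gray (191), which is exactly the smoothing effect that hides the stair-step artifact.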
COLOR CRT MONITORS
• CRT monitor displays color pictures by using a combination of
phosphors that emit different colored light.
• By combining the emitted light from the different phosphors, a range of colors can be generated.
• Color CRTs have 3 phosphor color dots at each pixel position, for red, green, and blue.
• Three electron guns, one for each color dot, and a metal shadow mask to differentiate the beams.
• The 2 basic techniques for producing color CRT displays are:
1. Beam penetration method
2. Shadow mask method
Beam Penetration method

• Commonly used in random-scan (vector) systems
• Two layers of phosphor, usually red and green, are coated onto
the inside of CRT screen, and the displayed color depends on
how far the electron beam penetrates into the phosphor layers.
• A beam of slow electrons excites only the outer red layer.
• A beam of very fast electrons penetrates through the red layer
and excites the inner green layer.
• At intermediate beam speeds, combinations of other colors are
produced.
SHADOW MASK
• The shadow mask is one of two major technologies used
to manufacture cathode ray tube (CRT) televisions and
computer displays that produce color images (the other is
aperture grille and its improved variant Cromaclear).
• Tiny holes in a metal plate separate the colored phosphors
in the layer behind the front glass of the screen.
• The holes are placed in a manner ensuring that electrons
from each of the tube's three cathode guns reach only the
appropriately-colored phosphors on the display.

• All three beams pass through the same holes in the mask,
but the angle of approach is different for each gun.
• The spacing of the holes, the spacing of the phosphors, and
the placement of the guns is arranged so that for example the
blue gun only has an unobstructed path to blue phosphors.
• The red, green, and blue phosphors for each pixel are
generally arranged in a triangular shape (sometimes called a
"triad")
Flat Panel Displays
• A flat CRT is obtained by initially projecting the electron beam parallel to the screen and then reflecting it 90° toward the screen.
• Reflecting the electron beam significantly reduces
the depth of the CRT bottle and, consequently, of
the display.
• Types of Flat panel displays:
I. Plasma panels
II. Thin-film electroluminescent displays
III.Light-emitting diodes
Plasma Panels

• Constructed by filling the region between two glass plates with a mixture of gases that usually includes neon.
• A series of vertical conducting ribbons is placed on one glass panel,
and a set of horizontal conducting ribbons is built into the other
glass panel.
• Firing voltages applied to an intersecting pair of horizontal and
vertical conductors cause the gas at the intersection of the two
conductors to break down into a glowing plasma of electrons and
ions.
• Picture definition is stored in a refresh buffer, and the firing
voltages are applied to refresh the pixel positions (at the
intersection of the conductors) 60 times per second.
Plasma Panels Cont…

• The xenon, neon, and helium gas in a plasma television is contained in hundreds
of thousands of tiny cells positioned between two plates of glass.
• Long electrodes are also put together between the glass plates, in front of and
behind the cells.
• The address electrodes sit behind the cells, along the rear glass plate.
• The transparent display electrodes, which are surrounded by an insulating
dielectric material and covered by a magnesium oxide protective layer, are
mounted in front of the cell, along the front glass plate.
• Control circuitry charges the electrodes that cross paths at a cell, creating a
voltage difference between front and back and causing the gas to ionize and form
a plasma.
• As the gas ions rush to the electrodes and collide, photons are emitted.
Thin-Film Electroluminescent
• These are similar in construction to a plasma panel.
• The only difference is that the region between the glass plates is filled with a phosphor, such as zinc sulphide doped with manganese, instead of a gas.
Light Emitting Diode (LED)
• A matrix of diodes is arranged to form the pixel positions in the display, and picture definition is stored in a refresh buffer.
• Information is read from the refresh buffer and
converted to voltage levels that are applied to the
diodes to produce the light patterns in the display.
Active-matrix LCD

• This type of LCD is constructed by placing a transistor at each pixel location, using thin-film transistor technology.
• The transistors are used to control the voltage at
pixel locations and to prevent charge from gradually
leaking out of the liquid-crystal cells.
Passive-matrix LCD

• Two glass plates, each containing a light polarizer that is aligned at a right
angle to the other plate, sandwich the liquid-crystal material.
• Rows of horizontal, transparent conductors are built into one glass plate,
and columns of vertical conductors are put into the other plate.
• The intersection of the two defines a pixel position.
• Polarized light passing through the material is twisted so that it will pass
through the opposite polarizer.
• The light is then reflected back to the viewer.
• To turn off the pixel, we apply a voltage to the two intersecting
conductors to align the molecules so that the light is not twisted.
