
Lecture 1: Course Introduction

Definition of Computer Graphics: "The creation of, manipulation of, analysis of, and interaction with pictorial representations of objects and data using computers." - Dictionary of Computing

Computer graphics is concerned with producing images and animations (or sequences of images) using a computer. The field dates back to the early 1960's and to Ivan Sutherland, one of its pioneers. It began with the development of (by current standards) very simple software for performing the mathematical transformations needed to produce simple line drawings of 2- and 3-dimensional scenes. As time went on, and the capacity and speed of computer technology improved, successively greater degrees of realism became achievable. Today it is possible to produce images that are practically indistinguishable from photographs (or at least that create a pretty convincing illusion of reality).

Computer Graphics History
Early 60's: Computer animations for physical simulation; Edward Zajac displays satellite research using CG in 1961.
1963: Sutherland (MIT) develops Sketchpad (direct manipulation, CAD); calligraphic (vector) display devices; interactive techniques. Douglas Engelbart invents the mouse.
1968: Evans & Sutherland founded.
1969: First SIGGRAPH.
Late 60's to late 70's: Utah Dynasty.
1970: Pierre Bezier develops Bezier curves.
1971: Gouraud shading.
1972: Pong developed.
1973: Westworld, the first film to use computer animation.
1974: Ed Catmull develops the z-buffer (Utah). First computer-animated short, Hunger: keyframe animation and morphing.
1975: Bui-Tuong Phong creates Phong shading (Utah). Martin Newell models a teapot with Bezier patches (Utah).
Mid 70's: Raster graphics (Xerox PARC, Shoup).
1976: Jim Blinn develops texture and bump mapping.
1977: Star Wars, CG used for the Death Star plans. SIGGRAPH produces the 3-D Core Graphics System, a software standard for device-independent graphics.
1979: Turner Whitted develops ray tracing.
Mid 70's - 80's: Quest for realism; radiosity; also mainstream real-time applications.
1982: Tron, Wrath of Khan. Particle systems and obvious CG.
1984: The Last Starfighter, CG replaces physical models. Early attempts at realism using CG.
1986: First CG animation nominated for an Academy Award: Luxo Jr. (Pixar).
1989: Tin Toy (Pixar) wins an Academy Award.
1995: Toy Story (Pixar and Disney), the first full-length, fully computer-generated 3D animation. Reboot, the first fully 3D CG Saturday morning cartoon. Babylon 5, the first TV show to routinely use CG models.
Late 90's: Interactive environments, scientific and medical visualization, artistic rendering, image-based rendering, path tracing, photon maps, etc.
00's: Real-time photorealistic rendering on consumer hardware? Interactively rendered movies? Ubiquitous computing, vision and graphics?

Applications of Computer Graphics
Computer graphics has grown tremendously over the past 20-30 years with the advent of inexpensive interactive display technology. The availability of high-resolution, highly dynamic, colored displays has enabled computer graphics to serve a role in intelligence amplification, where a human working in conjunction with a graphics-enabled computer can engage in creative activities that would be difficult or impossible without this enabling technology. An important aspect of this interaction is that vision is the sensory mode of highest bandwidth. Because of the importance of vision and visual communication, computer graphics has found applications in numerous areas of science, engineering, and entertainment. These include:
User Interfaces: If you have ever used a Macintosh or an IBM-compatible computer running Windows 3.1, you are a seasoned graphics user.
Cartography: Computer graphics is used to produce both accurate and schematic representations of geographical and other natural phenomena from measurement data. Examples include geographical maps, relief maps, and population density maps.
Computer-Aided Design: The design of 3-dimensional manufactured objects such as automobiles. Here, the emphasis is on interacting with a computer-based model to design components and systems of mechanical, electrical, electromechanical and electronic devices.
Drug Design: The design and analysis of drugs based on their geometric interactions with molecules such as proteins and enzymes.
Architecture: Designing buildings by computer, with the capability to perform virtual "fly-throughs" of the structure and to investigate lighting properties at various times of day and in various seasons.
Medical Imaging: Visualizations of the human body produced by 3-dimensional scanning technology.
Computational Simulations: Visualizations of physical simulations, such as air flow analysis in computational fluid dynamics or stresses on bridges.
Entertainment: Film production and computer games.
Fashion design in the textile industry.
Scientific visualization.

Interaction versus Realism: One of the most important tradeoffs faced in the design of interactive computer graphics systems is the balance between the speed of interactivity and the degree of visual realism. To provide a feeling of interaction, images should be rendered at speeds of at least 20-30 frames (images) per second. However, producing a high degree of realism at these speeds for very complex objects is difficult. This complexity arises from a number of sources:
Large Geometric Models: Large-scale architectural plans of factories and entire cityscapes can involve vast numbers of geometric elements.
Complex Geometry: Many natural objects (such as hair, fur, trees, plants, clouds) have very complex geometric structure.
Complex Illumination: Many natural objects (such as human skin) respond to light in very complex and subtle ways.

The Scope of Computer Graphics: Graphics is both fun and challenging. The challenge arises from the fact that computer graphics draws from so many different areas, including:
Mathematics and Geometry: Modeling geometric objects; representing and manipulating surfaces and shapes.

Physics (Kinetics): Understanding how physical objects behave when acted upon by various forces.
Physics (Illumination): Understanding how physical objects reflect light.
Computer Science: The design of efficient algorithms and data structures for rendering.
Software Engineering: Software design and organization for large and complex systems, such as computer games.
Computer Engineering: Understanding how graphics processors work in order to produce the most efficient computation times.

The Scope of this Course: A great deal of software has been produced to aid in the generation of large-scale software systems for computer graphics. Our focus in this course will not be on how to use these systems to produce images. (If you are interested in that topic, you should take courses in the art technology department.) As in other computer science courses, our interest is not in how to use these tools, but rather in understanding how these systems are constructed and how they work.

Course Overview: Given the state of current technology, it would be possible to design an entire university major to cover everything (important) that is known about computer graphics. In this introductory course, we will attempt to cover only the merest fundamentals upon which the field is based. Nonetheless, with these fundamentals you will have a remarkably good insight into how many modern video games and "Hollywood" movie animations are produced. This is true because even very sophisticated graphics stem from the same basic elements as simple graphics; they just involve much more complex light and physical modeling, and more sophisticated rendering techniques. In this course we will deal primarily with the task of producing both single images and animations from 2- or 3-dimensional scene models. Over the course of the semester, we will build from a simple basis (e.g., drawing a triangle in 3-dimensional space) all the way to complex methods, such as lighting models, texture mapping, motion blur, morphing and blending, and anti-aliasing.

Let us begin by considering the process of drawing (or rendering) a single image of a 3-dimensional scene. This is crudely illustrated in the figure below. The process begins by producing a mathematical model of the object to be rendered. Such a model should describe not only the shape of the object but also its color and its surface finish (shiny, matte, transparent, fuzzy, scaly, rocky). Producing realistic models is extremely complex, but luckily it is not our main concern; we will leave this to the artists and modelers. The scene model should also include information about the location and characteristics of the light sources (their color, brightness) and the atmospheric nature of the medium through which the light travels (is it foggy or clear?). In addition we will need to know the location of the viewer. We can think of the viewer as holding a "synthetic camera" through which the image is to be photographed. We need to know the characteristics of this camera (its focal length, for example).

Fig. 1: A typical rendering situation. Based on all of this information, we need to perform a number of steps to produce our desired image. Projection: Project the scene from 3-dimensional space onto the 2-dimensional image plane in our synthetic camera.

Hidden surface removal: Elements that are closer to the camera obscure more distant ones. We need to determine which surfaces are visible and which are not.
Color and shading: For each point in our image we need to determine its color, which is a function of the object's surface color, its texture, the relative positions of light sources, and (in more complex illumination models) the indirect reflection of light off of other surfaces in the scene.
Surface detail: Are the surfaces textured, either with color (as in a wood-grain pattern) or with surface irregularities (such as bumpiness)?
Rasterization: Once we know what colors to draw for each point in the image, the final step is that of mapping these colors onto our display device.

The Course in a Nutshell: The process that we have just described involves a number of steps, from modeling to rasterization. The topics that we cover this semester will consider many of these issues. By the end of the semester, you should have a basic understanding of how each of the steps is performed. Of course, a detailed understanding of most of the elements that are important to computer graphics will be beyond the scope of this one-semester course. But by combining what you have learned here with other resources (from books or the Web) you will know enough to, say, write a simple video game, write a program to generate highly realistic images, or produce a simple animation.

Lecture 2: Computer Graphics Overview

Output Technology: The display devices developed in the mid sixties and used until the mid eighties are called vector, stroke, line-drawing or calligraphic displays. The term vector is used synonymously with the word line, and a stroke is a short line. A typical vector system consists of a display processor connected as an I/O peripheral to the central processing unit (CPU), a display buffer memory and a CRT. The essence of the vector system is that the electron beam, which writes on the CRT's phosphor coating, is deflected from endpoint to endpoint, as dictated by the arbitrary order of the display commands; this technique is called random scan. Since the light output of the phosphor decays in tens or at most hundreds of microseconds, the display processor must cycle through the display list to refresh the phosphor at least 30 times per second (30 Hz) to avoid flicker.

The development, in the early seventies, of inexpensive raster graphics based on television technology contributed more to the growth of the field than did any other technology. Raster displays store the display primitives (such as lines, characters, and solidly shaded or patterned areas) in a refresh buffer in terms of the primitives' component pixels. The complete image on a raster display is formed from the raster, which is a set of horizontal scan lines, each a row of individual pixels; the raster is thus stored as a matrix of pixels representing the entire screen area. The entire image is scanned out sequentially by the video controller, one scan line at a time, from the top to the bottom and then back to the top. See picture below: Architecture of raster display. In some raster displays, a hardware display controller receives and interprets sequences of output commands. In simpler, more common systems, such as those in personal computers, the display controller exists only as a software component of the graphics library, and the refresh buffer is just a piece of the CPU's memory that can be read out by the image display subsystem (commonly called the video controller) that produces the actual picture on the screen.

Raster Scan: Since, in a raster system, the entire grid of, say, 1024 lines of 1024 pixels must be stored explicitly, the availability of inexpensive solid-state random access memory (RAM) for bitmaps in the early seventies was the breakthrough needed to make raster graphics the dominant hardware technology. Bilevel (also called monochrome) CRTs draw images in black and white or black and green. Bilevel bitmaps contain a single bit per pixel, and the entire bitmap for a screen with a resolution of 1024 by 1024 pixels is only 2^20 bits, or about 128,000 bytes. The term bitmap applies to 1-bit-per-pixel bilevel systems; for multi-bit-per-pixel systems we use the more general term pixmap. Low-end color systems have 8 bits per pixel, allowing 256 colors simultaneously. More expensive systems have 24 bits per pixel, allowing a choice of 16 million colors; a refresh buffer with 24 bits per pixel and a screen resolution of 1280 by 1024 pixels requires 3.75 MB of RAM - inexpensive by today's standards.

Advantages/Disadvantages of vector/raster graphics:
 Raster graphics are less costly than vector graphics.
 Raster graphics can display areas filled with solid color patterns, something not achievable with vector graphics.
 Because they can display solid color patterns, raster graphics can be used to convey 3-D, whereas vector graphics can only present 2-D.
 Due to the discrete nature of the pixels in raster graphics, primitives such as lines and polygons are specified in terms of their endpoints and must first be converted into their component pixels in the frame buffer (scan conversion).
 Vector graphics can draw smooth lines, whereas raster graphics lines are not always as smooth, since points on the line in a raster graphic are approximated by pixels (resulting in jaggies/staircasing).

How the image ought to be
Random Scan (vector graphics)
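To make these memory figures concrete, here is a small worked example (a C sketch written for these notes, not taken from any graphics library) that computes refresh-buffer sizes from resolution and colour depth; the 24-bits-per-pixel case reproduces the 3.75 MB figure quoted above.

#include <stdio.h>

/* Refresh-buffer size in bytes for a given resolution and colour depth. */
static unsigned long buffer_bytes(unsigned long width, unsigned long height,
                                  unsigned long bits_per_pixel)
{
    return (width * height * bits_per_pixel) / 8;
}

int main(void)
{
    /* Bilevel 1024 x 1024 bitmap: 2^20 bits = 131,072 bytes (about 128 KB). */
    printf("bilevel 1024x1024: %lu bytes\n", buffer_bytes(1024, 1024, 1));

    /* 1280 x 1024 at 24 bits per pixel: 3,932,160 bytes, roughly 3.75 MB. */
    printf("24 bpp 1280x1024: %lu bytes\n", buffer_bytes(1280, 1024, 24));
    return 0;
}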

Raster Scan with outline primitives (note the staircasing)

Raster scan with filled primitives

Input Technology
Input technology has improved over the years, from the light pen of the vector systems to the mouse. Even fancier devices that supply not just (x,y) locations on the screen, but also 3-D and even higher-dimensional input values (degrees of freedom), are becoming common. Audio communication also has exciting potential, since it allows hands-free input and natural output of simple instructions, feedback, etc. With the standard input devices, the user can specify operations or picture components by typing or drawing new information, or by pointing to existing information on the screen. These interactions do not require any knowledge of programming. Selecting menu items, typing on the keyboard, drawing in a paint program, and so on do not require special skills, thanks to input technology's contribution to computer graphics.

Software Portability and Standards
As we have seen, steady advances in hardware technology have made possible the evolution of graphics displays from one-of-a-kind special devices to the standard user interface to the computer. We may well wonder whether software has kept pace. For example, to what extent have we resolved earlier difficulties with overly complex, cumbersome and expensive graphics systems and applications software? We have moved from low-level, device-dependent packages supplied by manufacturers for their particular displays to higher-level, device-independent packages. These packages can drive a wide variety of display devices, from laser printers and plotters to film recorders and high-performance interactive displays. The main purpose of using a device-independent package in conjunction with a high-level programming language is to promote application program portability. The package provides this portability in much the same way as a high-level, machine-independent language (such as FORTRAN, Pascal, or C): by isolating the programmer from most machine peculiarities.

A general awareness of the need for standards in such device-independent graphics arose in the mid-seventies and culminated in a specification for a 3-D Core Graphics System produced by a SIGGRAPH committee in 1977 and refined in 1979. This was used as input to many subsequent standardization efforts, such as those of ANSI and ISO. The first graphics specification to be standardized officially was GKS (the Graphical Kernel System), an elaborated, cleaned-up version of the Core that, unlike the Core, was restricted to 2-D. In 1988, GKS-3D, a 3D extension of GKS, was made an official standard, as was the much more sophisticated and complex graphics system PHIGS (Programmer's Hierarchical Interactive Graphics System).

GKS supports grouping of logically related primitives, such as lines, polygons and character strings, together with their attributes, into collections called segments. In this course, we will use OpenGL with the C language; OpenGL is a standard that is device-independent and window-system independent.

Elements of 2-dimensional Graphics: Computer graphics is all about producing pictures (realistic or stylistic) by computer. Traditional 2-dimensional (flat) computer graphics treats the display like a painting surface, which can be colored with various graphical entities. Examples of the primitive drawing elements include line segments, polylines, curves, filled regions, and text.

Polylines: A polyline (or more properly a polygonal curve) is a finite sequence of line segments joined end to end. These line segments are called edges, and the endpoints of the line segments are called vertices. A single line segment is a special case. A polyline is closed if it ends where it starts. It is simple if it does not self-intersect. Self-intersections include such things as two edges crossing one another, a vertex intersecting the interior of an edge, or more than two edges sharing a common vertex. A simple, closed polyline is also called a simple polygon. If all of its internal angles are at most 180°, then it is a convex polygon. (See Fig. 2.)
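As a concrete illustration, the sketch below draws an open polyline with OpenGL's legacy immediate-mode calls (which are introduced later in these notes); the vertex coordinates are made up for the example.

#include <GL/gl.h>

/* Draw a simple open polyline; consecutive vertices are joined by edges. */
void drawPolyline(void)
{
    glColor3f(0.0f, 0.0f, 1.0f);   /* graphical attribute: colour (blue)      */
    glLineWidth(2.0f);             /* graphical attribute: line width, pixels */

    glBegin(GL_LINE_STRIP);
        glVertex2f(0.0f, 0.0f);
        glVertex2f(0.5f, 0.8f);
        glVertex2f(1.0f, 0.2f);
        glVertex2f(1.5f, 0.9f);
    glEnd();
}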

Fig. 2: Polylines and filled regions.

The geometry of a polyline in the plane can be represented simply as the sequence of the (x, y) coordinates of its vertices. The way in which the polyline is rendered is determined by a set of properties called graphical attributes. These include elements such as color, line width, and line style (solid, dotted, dashed). Polyline attributes also include how consecutive segments are joined; for example, when two line segments come together at a sharp angle, do we round the corner between them, square it off, or leave it pointed?

Curves: Curves include various common shapes, such as circles, ellipses, and circular arcs, as well as special free-form curves. Later in the semester we will discuss Bezier curves and B-splines, which are curves defined by a collection of control points.

Filled regions: Any simple, closed polyline in the plane defines a region consisting of an inside and an outside. (This is a typical example of an utterly obvious fact from topology that is notoriously hard to prove; it is called the Jordan curve theorem.) We can fill any such region with a color or a repeating pattern. In some cases it is desired to draw both the bounding polyline and the filled region, and in other cases just the filled region is to be drawn.

A polyline with embedded 'holes' also naturally defines a region that can be filled. In fact this can be generalized by nesting holes within holes (alternating the fill color with the background color). Even if a polyline is not simple, it is possible to generalize the notion of inside and outside; we will discuss various methods later in the semester. (See Fig. 2.)

Text: Although we do not normally think of text as graphical output, it occurs frequently within graphical images such as engineering diagrams. Text can be thought of as a sequence of characters in some font. As with polylines, there are numerous attributes which affect how the text appears. These include the font's face (Times-Roman, Helvetica, Courier, for example), its weight (normal, bold, light), its style or slant (normal, italic, oblique, for example), its size (usually measured in points, a printer's unit of measure equal to 1/72 inch), and its color. (See Fig. 3.)

Fig. 3: Text font properties. Raster Images: Raster images are what most of us think of when we think of a computer generated image. Such an image is a 2-dimensional array of square (or generally rectangular) cells called pixels (short for “picture elements”). Such images are sometimes called pixel maps or pixmaps. An important characteristic of pixel maps is the number of bits per pixel, called its depth. The simplest example is an image made up of black and white pixels (depth 1), each represented by a single bit (e.g., 0 for black and 1 for white). This is called a bitmap. Typical gray-scale (or monochrome) images can be represented as a pixel map of depth 8, in which each pixel is represented by assigning it a numerical value over the range 0 to 255. More commonly, full color is represented using a pixel map of depth 24, where 8 bits each are used to represent the components of red, green and blue. We will frequently use the term RGB when referring to this representation. Interactive 3-dimensional Graphics: Anyone who has played a computer game is accustomed to interaction with a graphics system in which the principal mode of rendering involves 3-dimensional scenes. Producing highly realistic, complex scenes at interactive frame rates (at least 30 frames per second, say) is made possible with the aid of a hardware device called a graphics processing unit, or GPU for short. GPUs are very complex things, and we will only be able to provide a general outline of how they work. Like the CPU (central processing unit), the GPU is a critical part of modern computer systems. It has its own memory, separate from the CPU's memory, in which it stores the various graphics objects (e.g., object coordinates and texture images) that it needs in order to do its job. Part of this memory is called the frame buffer, which is a dedicated chunk of memory where the pixels associated with your monitor are stored. Another entity, called the video controller, reads the contents of the frame buffer and generates the actual image on the monitor. This process is illustrated in schematic form in Fig. 4.
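Returning to the depth-24 RGB representation described above, one common in-memory layout stores three bytes per pixel in row-major order; the struct below is a sketch for illustration only, not the format of any particular image library.

/* A depth-24 pixmap: 3 bytes (red, green, blue) per pixel, row by row. */
typedef struct {
    int width, height;
    unsigned char *data;   /* width * height * 3 bytes */
} Pixmap;

/* Set pixel (x, y) to the RGB colour (r, g, b); each component is 0..255. */
void setPixel(Pixmap *img, int x, int y,
              unsigned char r, unsigned char g, unsigned char b)
{
    unsigned char *p = img->data + 3 * (y * img->width + x);
    p[0] = r;
    p[1] = g;
    p[2] = b;
}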

Fig. 4: Architecture of a simple GPU-based graphics system.

A typical command from your program might be "draw a triangle in 3-dimensional space at these coordinates". Objects are described in terms of vectors in 3-dimensional space (for example, a triangle might be represented by three such vectors, one per vertex). The job of the graphics system is to convert this simple request into coloring a set of pixels on your display. The process of doing this is quite complex, and involves a number of stages.

The Graphics Pipeline: The key concept behind all GPUs is the notion of the graphics pipeline. This is a conceptual tool, where your user program sits at one end sending graphics commands to the GPU, and the frame buffer sits at the other end. Each stage of the pipeline is performed by some part of the GPU, and the results are then fed to the next stage, until the final image is produced at the end. (This is mostly a conceptual aid, since the GPU architecture is not divided so cleanly.)

Traditionally, GPUs are designed to perform a relatively limited, fixed set of operations, but with blazing speed and a high degree of parallelism. Modern GPUs are much more programmable, in that they provide the user the ability to program various elements of the graphics process. For example, modern GPUs support programs called vertex shaders and fragment shaders, which provide the user with the ability to fine-tune the colors assigned to vertices and fragments. Recently there has been a trend towards what are called general-purpose GPUs, which can perform not just graphics rendering but general scientific calculations on the GPU. Since we are interested in graphics here, we will focus on the GPU's traditional role in the rendering process.

Fig. 5: Stages of the graphics pipeline.

Broadly speaking, the pipeline can be viewed as involving four major stages; the process is illustrated in Fig. 5.

Vertex Processing: Geometric objects are introduced to the pipeline from your program. In the vertex processing stage, the graphics system transforms these coordinates into a coordinate system that is more convenient to the graphics system, called screen space. For example, you might imagine that the transformation projects the vertices of the three-dimensional triangle onto the 2-dimensional coordinate system of your screen. In order to know how to perform this transformation, your program sends a command to the GPU specifying the location of the camera and its projection properties. The output of this stage is called the transformed geometry.

This stage involves other tasks as well. For one, clipping is performed to snip off any parts of your geometry that lie outside the viewing area of the window on your display. Another operation is lighting, where computations are performed to determine the colors and intensities of the vertices of your objects. (How the lighting is performed depends on commands that you send to the GPU, indicating where the light sources are and how bright they are.)

Rasterization: The job of the rasterizer is to convert the geometric shape, given in terms of its screen coordinates, into individual pixels, called fragments.

Fragment Processing: Each fragment is then run through various computations. First, it must be determined whether this fragment is visible, or whether it is hidden behind some other fragment. If it is visible, it will then be subjected to coloring. This may involve applying various coloring textures to the fragment and/or color blending from the vertices, in order to produce the effect of smooth shading.

Blending: Generally, there may be a number of fragments that affect the color of a given pixel. (This typically results from translucence or other special effects like motion blur.) The colors of these fragments are then blended together to produce the final pixel color. The final output of this stage is the frame-buffer image.

Graphics Libraries: Let us consider programming a 3-dimensional interactive graphics system. The challenge is that your program needs to specify what image is to be drawn, and to redraw it at the rate of over 30 frames per second. We call each such redrawing a display cycle or a refresh cycle. With each refresh cycle, your program refreshes the current contents of the image, since the user may have interacted with your application and user input events need to be processed. Your program communicates with the graphics system through a library, or more formally an application programmer's interface or API. There are a number of different APIs used in modern graphics systems, each providing some features relative to the others. Broadly speaking, graphics APIs are classified into two general classes:

Retained Mode: The library maintains the state of the computation in its own internal data structures. Because it knows the full state of the scene, the library can perform global optimizations automatically. This is functionally analogous to program compilation. This method is less well suited to time-varying data sets, since the internal representation of the data set needs to be updated frequently. Examples: Java3D, Ogre, Open Scenegraph.

Immediate Mode: The application provides all the primitives with each display cycle, which means that each function call results in a command being sent directly to the GPU. The library can only perform local optimizations, since it does not know the global state; it is the responsibility of the user program to perform global optimizations. This is functionally analogous to program interpretation. This approach is well suited to highly dynamic scenes. Examples: OpenGL, DirectX.

OpenGL: OpenGL is a widely used industry-standard graphics API. It has been ported to virtually all major systems, and can be accessed from a number of different programming languages (C, C++, Java, Python, ...). Because it works across many different platforms, it is very general. (This is in contrast to DirectX, which has been designed to work primarily on Microsoft systems.) For the most part, OpenGL operates in immediate mode. There are some retained elements, however.

For example, transformations, lighting, and texturing need to be set up, so that they can be applied later in the computation.

Because of the design goal of being independent of the window system and operating system, OpenGL does not provide capabilities for windowing tasks or user input and output. For example, there are no commands in OpenGL to create a window, to resize a window, to determine the current mouse coordinates, or to detect whether a keyboard key has been hit. To get these features, it is necessary to use an additional toolkit. There are a number of different toolkits, which provide various capabilities; we will cover a very simple one in this class, called GLUT, which stands for the GL Utility Toolkit. GLUT has the virtue of being very simple, but it does not have a lot of features: everything is focused just on the process of generating an image. To achieve other goals, such as the many tasks needed in a typical large graphics system, you will need to use a more sophisticated toolkit.

In addition, there are a number of software systems available that provide utility functions. For example, suppose that you want to draw a sphere. OpenGL does not have a command for drawing spheres, but it can draw triangles. What you would like is a utility function which, given the center and radius of a sphere, will produce a collection of triangles that approximate the sphere's shape. OpenGL provides a simple collection of such utilities, called the GL Utility Library, or GLU for short.

Since we will be discussing a number of the library functions for OpenGL, GLU, and GLUT during the next few lectures, let me mention that it is possible to determine which library a function comes from by its prefix. Functions from the OpenGL library begin with "gl" (as in "glTriangle"), functions from GLU begin with "glu" (as in "gluLookAt"), and functions from GLUT begin with "glut" (as in "glutCreateWindow").

We have described some of the basic elements of graphics systems. Next time, we will discuss OpenGL in greater detail.
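Before moving on, here is a concrete version of the sphere utility described above: GLU's quadric routines (and GLUT's convenience objects) provide exactly this. The sketch below uses the standard gluNewQuadric/gluSphere and glutSolidSphere calls; the choice of 32 slices and stacks is arbitrary.

#include <GL/glu.h>
#include <GL/glut.h>

/* Approximate a sphere of the given radius, centred at the current origin. */
void drawSphere(double radius)
{
    /* GLU: tessellate a quadric into 32 slices and 32 stacks. */
    GLUquadric *quad = gluNewQuadric();
    gluSphere(quad, radius, 32, 32);
    gluDeleteQuadric(quad);

    /* Alternatively, GLUT's convenience call does the same job: */
    /* glutSolidSphere(radius, 32, 32); */
}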

Lecture 3: Devices and Device Independence

Our goal will be to:
 Consider display devices for computer graphics:
o Calligraphic devices
o Raster devices: CRTs, LCDs
o Direct vs. pseudocolour frame buffers
 Discuss the problem of device independence:
o Window-to-viewport mapping
o Normalized device coordinates

Calligraphic and Raster Devices
Calligraphic display devices draw polygons and line segments directly:
 Plotters
 Direct beam control CRTs
 Laser light projection systems
Raster display devices represent an image as a regular grid of samples.
 Each sample is usually called a pixel (short for picture element).
 Rendering requires rasterization algorithms to quickly determine a sampled representation of geometric primitives.

How a Monitor Works
Raster cathode ray tubes (CRTs) are the most common display device:
 Capable of high resolution.
 Good colour fidelity.
 High contrast (100:1).
 High update rates.
A monochromatic CRT works the same way as a black-and-white television. The electron gun emits a stream of electrons that is accelerated towards the phosphor-coated screen by a high positive voltage (15,000-20,000 volts) applied near the face of the tube. On the way to the screen, the electrons are forced into a narrow beam by the focusing mechanism and are directed towards a particular point on the screen. When the electron beam strikes the phosphor-coated screen of the CRT, the phosphor emits visible light: the individual electrons arrive with kinetic energy proportional to the acceleration voltage, and while some of this energy is dissipated as heat, some is transferred to the electrons of the phosphor atoms, making them jump to higher energy levels. In returning to their previous quantum levels, these excited electrons give up their extra energy in the form of light, at differing frequencies (colors) defined by quantum theory. Phosphor fluorescence is the light emitted as these very unstable electrons lose their excess energy while the phosphor is being struck by the electrons.

The refresh rate in a raster system is independent of the complexity of the picture, whereas in a vector system the refresh rate depends directly on the picture complexity (number of lines, points, characters): the greater the complexity, the longer the time taken by a single refresh cycle and the lower the refresh rate. The entire picture must be refreshed many times per second so that the viewer sees what appears to be a constant, unflickering picture.

Phosphorescence is the light given off by the return of the excited electrons to their unexcited state once the electron beam excitation is removed. A phosphor's persistence is defined as the time from the removal of excitation to the moment when phosphorescence has decayed to 10% of the initial light output; this is usually about 10-60 microseconds. The refresh rate of a CRT is the number of times per second the image is redrawn; typically this is 60 times per second for raster displays. The horizontal scan rate is the number of scan lines per second that the circuitry driving the CRT is able to display.
 The electron beam is scanned in a regular pattern of horizontal scanlines.
 The intensity of the electron beam is modified by the pixel value.
 Raster images are stored in a frame buffer.
 Frame buffers are composed of VRAM (video RAM).
 VRAM is dual-ported memory capable of:
o Random access
o Simultaneous high-speed serial output: a built-in serial shift register can output an entire scanline at a high rate synchronized to the pixel clock.
 Burst-mode DRAM is replacing VRAM in many systems.
Colour CRTs have three different colours of phosphor and three independent electron guns. In a delta-delta shadow mask CRT, the three guns and phosphor dots are arranged in a triangular (delta) pattern. Shadow masks allow each gun to irradiate only one colour of phosphor: the shadow mask allows electrons from each gun to hit only the corresponding phosphor dots.

Colour is specified either:
 Directly, using three independent intensity channels, or
 Indirectly, using a Colour Lookup Table (LUT). In the latter case, a colour index is stored in the frame buffer.
Sophisticated frame buffers may allow different colour specifications for different portions of the frame buffer, using a window identifier also stored in the frame buffer.

Liquid Crystal Displays (LCDs) are becoming more popular and reasonably priced.
 Flat panels
 Flicker free
 Decreased viewing angle
An LCD works as follows:
 Random access to cells, like memory.
 Cells contain liquid crystal molecules that align when charged.
 Unaligned molecules twist light.
 Polarizing filters allow only light through unaligned molecules.
 Subpixel colour filter masks are used for RGB.
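To make the indirect (pseudocolour) case concrete, the sketch below shows how an 8-bit colour index stored in the frame buffer is turned into displayed RGB intensities through a lookup table; the types and sizes are illustrative only.

/* One LUT entry: the RGB intensities that a colour index maps to. */
typedef struct { unsigned char r, g, b; } RGB;

RGB lut[256];                              /* 8-bit index -> 256 colours */
unsigned char frameBuffer[1024 * 1024];    /* one colour index per pixel */

/* Conceptually, the video controller does this for every pixel scanned out. */
RGB colourOfPixel(int x, int y, int width)
{
    unsigned char index = frameBuffer[y * width + x];
    return lut[index];                     /* indirection through the LUT */
}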

Window to Viewport Mapping
 Start with a 3D scene, but eventually project to a 2D scene.
 The 2D scene is an infinite plane; the device has a finite visible rectangle. What do we do?
 Answer: map a rectangular region of the 2D scene to the device.
Window: rectangular region of interest in the scene.
Viewport: rectangular region on the device.
Usually, both rectangles are aligned with the coordinate axes.
 The window has corners (xwl, ywb) and (xwr, ywt); its length and height are Lw and Hw.
 The viewport has corners (xvl, yvb) and (xvr, yvt); its length and height are Lv and Hv.
 The window point (xw, yw) maps to the viewport point (xv, yv).
 Proportionally map each of the coordinates. To map xw to xv:
xv = xvl + (xw - xwl) * Lv / Lw
and similarly, to map yw to yv:
yv = yvb + (yw - ywb) * Hv / Hw
 If Hw/Lw != Hv/Lv the image will be distorted.
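A direct transcription of this mapping into C might look as follows (a sketch; the parameter names follow the window and viewport quantities defined above).

/* Map a window point (xw, yw) to the corresponding viewport point (xv, yv). */
void windowToViewport(double xw, double yw,
                      double xwl, double ywb, double Lw, double Hw, /* window   */
                      double xvl, double yvb, double Lv, double Hv, /* viewport */
                      double *xv, double *yv)
{
    *xv = xvl + (xw - xwl) * Lv / Lw;  /* distance from corner, scaled, offset */
    *yv = yvb + (yw - ywb) * Hv / Hw;
}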

These quantities, Hw/Lw and Hv/Lv, are called the aspect ratios of the window and viewport. Intuitively, the window-to-viewport formula can be read as:
 Convert xw to a distance from the window corner.
 Scale this w distance to get a v distance.
 Add to the viewport corner to get xv.

Normalized Device Coordinates
Where do we specify our viewport? We could specify it in device coordinates, BUT suppose we want to run our program on several hardware platforms or graphics devices. There are many different resolutions for graphics display devices:
 Workstations commonly have 1280x1024 frame buffers.
 A PostScript page is 612x792 points, but 2550x3300 pixels at 300 dpi.
 And so on.
Aspect ratios may vary. If we map directly from WCS to a DCS, then changing our device requires rewriting this mapping (among other changes). Instead, use Normalized Device Coordinates (NDC) as an intermediate coordinate system that gets mapped to the device layer:
 Windows in WCS will be mapped to viewports that are specified within a unit square in NDC space.
 Map viewports from NDC coordinates to the screen.
 Will consider using only a square portion of the device.
Two common conventions for the DCS:
 Origin in the lower left corner, with x to the right and y upward.
 Origin in the top left corner, with x to the right and y downward.

Basic User Interface Concepts
A short outline of input devices and the implementation of a graphical user interface is given:
 Physical input devices used in graphics
 Virtual devices
 Polling is compared to event processing
 UI toolkits are introduced by generalizing event processing

Physical Devices
Actual, physical input devices include:
 Dials (potentiometers)
 Selectors
 Pushbuttons
 Switches
 Keyboards (collections of pushbuttons called "keys")
 Mice (relative motion)
 Joysticks (relative motion, direction)
 Tablets (absolute position)
 Etc.
Need some abstractions to keep organized.

Virtual Devices
Devices can be classified according to the kind of value they return:
 Button: Return a Boolean value; can be depressed or released.
 Key: Return a "character", that is, one of a given set of code values.
 Valuator: Return a real value (in a given range).
 Choice: Return an option (menu, callback, ...).
 Locator: Return a position in (2D/3D) space (e.g., ganged valuators).
 Pick: Return a scene component.
 Stroke: Return a sequence of positions.
 String: Return a sequence of characters.
 Selector: Return an integral value (in a given range).
Each of the above is called a virtual device.

Device Association
To obtain device independence:
 Design an application in terms of virtual (abstract) devices.
 Implement the virtual devices using the available physical devices.
There are certain natural associations, such as Valuator ↔ Mouse-X. But if the naturally associated device does not exist on a platform, one can make do with other possibilities, e.g. Valuator ↔ number entered on keyboard. This is "public interface / private implementation".

Device Input Modes
Input from devices may be managed in different ways:
Request Mode: Alternating application and device execution.
 The application requests input and then suspends execution.
 The device wakes up, provides input and then suspends execution.
 The application resumes execution and processes the input.
Sample Mode: Concurrent application and device execution.
 The device continually updates register(s) or memory location(s).
 The application may read at any time.

Event Mode: Concurrent application and device execution, together with a concurrent queue management service.
 The device continually offers input to the queue.
 The application may request selections and services from the queue (or the queue may interrupt the application).

Application Structure
With respect to device input modes, applications may be structured to engage in:
 requesting
 polling or sampling
 event processing
Events may or may not be interruptive. Without interrupts, they may be read in a blocking or non-blocking fashion. If not interruptive, the application will engage in an event loop: not a tight loop, but a preliminary phase of register-event actions followed by a repetition of test-for-event actions.

Polling and Sampling
In polling:
 The value of the input device is constantly checked in a tight loop.
 Wait for a change in status.
Generally, polling is inefficient and should be avoided, particularly in time-sharing systems.
In sampling, the value of an input device is read and then the program proceeds.
 No tight loop.
 Typically used to track a sequence of actions (the mouse).

Event Queues
 The device is monitored by an asynchronous process.
 Upon a change in the status of the device, this process places a record into an event queue.
 The application can request read-out of the queue:
o Number of events
o 1st waiting event
o Highest priority event
o 1st event of some category
o All events
 The application can also:
o Specify which events should be placed in the queue
o Clear and reset the queue
o Etc.
 Queue reading may be blocking or non-blocking.
 Processing may be through callbacks.
 Events may be processed interruptively.
 Events can be associated with more than physical devices; the windowing system can also generate virtual events, like "Expose".
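The sketch below illustrates the test-for-event structure of such a loop in C; the Event record, the toy queue and the callbacks are all hypothetical, standing in for whatever the windowing system actually provides.

#include <stdio.h>

typedef struct { int type; int x, y; } Event;          /* hypothetical record */

/* A toy queue standing in for the system's asynchronous event queue. */
static Event queue[3] = { {1, 0, 0}, {2, 10, 20}, {1, 5, 5} };
static int head = 0, count = 3;

static int   hasEvents(void)  { return head < count; }
static Event nextEvent(void)  { return queue[head++]; }

/* Application-supplied callbacks, registered per event type. */
static void onKey(Event e)    { printf("key event at (%d,%d)\n", e.x, e.y); }
static void onMouse(Event e)  { printf("mouse event at (%d,%d)\n", e.x, e.y); }

int main(void)
{
    while (hasEvents()) {                 /* test-for-event loop               */
        Event e = nextEvent();
        switch (e.type) {                 /* dispatch to the registered action */
            case 1: onKey(e);   break;
            case 2: onMouse(e); break;
        }
    }
    return 0;
}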

"  The cursor is usually bound to a pair of valuators.  The cursor enters a window.  Each table entry associates an event with a callback function. a preliminary of register event actions followed by a repetition of test for event actions.  A widget has a graphical representation that suggests its function. For more sophisticated queue management.  An Expose event is triggered under X when a window becomes visible.  A timer event may occur after a certain interval. SDL. Modular UI functionality is provided through a set of widgets:  Widgets are parcels of the screen that can respond to events.  Event-process definition for parent may apply to child.  The cursor has moved more than a certain amount.  A Configure event is triggered when a window is resized. as well as issuing callbacks. . based on the cursor position. and child may add additional eventprocess definitions  Event-process definition for parent may be redefined within child  Widgets may have multiple parts.  Simple event queues just record a code for event (Iris GL). typically MOUSE_X and MOUSE_Y. GLUI. SUIT. corresponding callback is invoked. Discussion Questions:  Discuss the differences in architecture and pros & cons of the following display technologies o Cathode ray tube o Electro Luminiscent o Liquid Crystal o Thin Film  Discuss the various hard-copy technologies.  Events can be restricted to particular areas of the screen.  Widgets are arranged in a parent/child hierarchy. and in fact may be composed of other widgets in a heirarchy. GLUT.  Better event queues record extra information such as time stamps (X windows).  Widgets may respond to events with a change in appearance.  Divide screen into parcels. both using vector technology and raster technology. Tk. .  Event manager does most or all of the administration. use table lookup.  Provide an API to make and delete table entries. including but not limited to: o Plotters o Dot matrix printers (monochrome and color) .  A mouse button or keyboard key is released.  application merely registers event-process pairs  queue manager does all the rest “if event E then invoke process P. FORMS. Qt . Xt.  When event occurs.  Events can be very general or specific:  A mouse button or keyboard key is depressed. Toolkits and Callbacks  Event-loop processing can be generalized:  Instead of switch. and assign different callbacks to different parcels (X Windows does this). UI toolkits recommended for projects: Tk. Some UI toolkits: Xm.

o Laser printers (monochrome and color)
o Camera
o Ink jet printers (monochrome and color)
 How long would it take to load a 512 by 512 bitmap, assuming that the pixels are packed 8 to a byte and that bytes can be transferred and unpacked at the rate of 100,000 bytes per second? How long would it take to load a 1024 by 1280 by 1 bitmap?

Reference Books
 Computer Graphics, 3rd edition, by D. Hearn and M.P. Baker
 An Introduction to Graphics Programming with OpenGL, by Toby Howard
 3D Computer Graphics, 3rd edition, by A. Watt
 OpenGL Reference Manual, 2nd edition, by D. Schreiner

Lecture 4: Introduction to OpenGL
 General OpenGL Introduction
 Rendering Primitives
 Rendering Modes
 Lighting
 Texture Mapping
 Additional Rendering Attributes
 Imaging
This section provides a general introduction and overview to the OpenGL API (Application Programming Interface) and its features.

OpenGL and GLUT Overview
 What is OpenGL & what can it do for me?
 OpenGL in windowing systems
 Why GLUT
 A GLUT program template
This section discusses what the OpenGL API is and some of its capabilities.

What Is OpenGL?
OpenGL is a library for doing computer graphics. By using it, you can create interactive applications which render high-quality color images composed of 3D geometric objects and images. OpenGL is a rendering library available on almost any computer which supports a graphics monitor.

OpenGL is window and operating system independent. As such, the part of your application which does rendering is platform independent. However, in order for OpenGL to be able to render, it needs a window to draw into. Generally, this is controlled by the windowing system on whatever platform you are working on. Every windowing system where OpenGL is supported has additional API calls for managing OpenGL windows, colormaps, and other features. These additional APIs are platform dependent. Since OpenGL itself is platform independent, we need some way to integrate OpenGL into each windowing system.

You can run OpenGL with C++ by including the following library, definition, DLL and header files in your C++ project:
 glut32.dll
 glut.def
 glut32.lib
 glut.h
Today, we'll discuss the basic elements of OpenGL: rendering points, lines, polygons and images, as well as more advanced features such as lighting and texture mapping.

OpenGL is a library for rendering computer graphics:
• high-quality color images composed of geometric and image primitives
• window system independent
• operating system independent

OpenGL Architecture
This diagram represents the flow of graphical information as it is processed from the CPU to the frame buffer. There are two pipelines of data flow: the upper pipeline is for geometric, vertex-based primitives, and the lower pipeline is for pixel-based, image primitives. Texturing combines the two types of primitives together.

OpenGL as a Renderer
As mentioned, OpenGL is a library for rendering computer graphics. OpenGL has two types of things that it can render: geometric primitives and image primitives. Geometric primitives are points, lines and polygons. Image primitives are bitmaps and graphics images (i.e. the pixels that you might extract from a JPEG image after you've read it into your program). Additionally, OpenGL links image and geometric primitives together using texture mapping, which is an advanced topic. Generally, there are two operations that you do with OpenGL:
• draw something
• change the state of how OpenGL draws
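As a minimal illustration of these two operations (a sketch using the immediate-mode calls covered later in this section), the function below first changes some state and then draws a primitive.

#include <GL/gl.h>

void drawScene(void)
{
    /* Change state: the current colour and point size. */
    glColor3f(1.0f, 0.0f, 0.0f);
    glPointSize(4.0f);

    /* Draw something: a single triangle. */
    glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();
}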

The other common operation that you do with OpenGL is setting state. "Setting state" is the process of initializing the internal data that OpenGL uses to render your primitives. It can be as simple as setting up the size of points and the color that you want a vertex to be, or as involved as initializing multiple mipmap levels for texture mapping.

Related APIs
As mentioned, OpenGL is window and operating system independent. To integrate it into various window systems, additional libraries are used to modify a native window into an OpenGL-capable window. Some examples are:
• GLX for the X Window System, common on Unix platforms
• AGL for the Apple Macintosh
• WGL for Microsoft Windows
OpenGL also includes a utility library, GLU, to simplify common tasks such as rendering quadric surfaces (i.e. spheres, cylinders, cones, etc.), working with NURBS and curves, and concave polygon tessellation. (A NURBS, or non-uniform rational basis spline, is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces, offering great flexibility and precision for handling both analytic and freeform shapes.)
Finally, to simplify programming and reduce window system dependence, we'll be using the freeware library GLUT. GLUT, written by Mark Kilgard, is a public domain, window system independent toolkit for making simple OpenGL applications. It simplifies the process of creating windows, working with events in the window system and handling animation.

OpenGL and Related APIs

. GLint.h> For C.h> #include <GL/glut. may choose to use GLUT instead because of its simplified programming model and window system independence.) such as Motif or the Win32 API.e. Prototype applications.h> #include <GL/glu. their parameters and defined constant values to the compiler. or one which don‟t require all the bells and whistles of a full GUI. Preliminaries Headers Files • • • Libraries Enumerated Types • OpenGL defines numerous types for compatibility – GLfloat. #include <GL/gl. menu and scroll bars. OpenGL has header files for GL (the core library). applications which require more user interface support will use a library designed to support those types of features (i. there are a few required elements which an application must do: • Header files describe all of the function calls. Generally. and GLUT (freeware windowing toolkit).The above diagram illustrates the relationships of the various libraries and window system components. buttons. etc. GLenum. etc. GLU (the utility library).

double. like the window needing to be refreshed. GLUT Basics Here‟s the basic structure that we‟ll be using in our applications. This might include things like the background color. This is generally what you‟d do in your own OpenGL applications. etc.Note: glut.so and for Microsoft Windows. To simplify platform independence for OpenGL programs. it‟s named opengl32. glutCreateWindow( argv[0] ). The steps are: 1) Choose the type of window that you need for your application and initialize it.h and glu. 2) Initialize any OpenGL state that you don‟t need to change every frame of your program.h is recommended to avoid warnings about redefining Windows macros. glutInitDisplayMode( mode ). int.lib.h includes gl. a complete set of enumerated types are defined. char** argv ) { int mode = GLUT_RGB|GLUT_DOUBLE. or the user moving the mouse.e. . which we‟ll discuss in a few slides. including only glut. Finally. glutReshapeFunc( resize ). This is where your application receives events. • Libraries are the operating system dependent implementation of OpenGL on the system you‟re using. the OpenGL library is commonly named libGL. light positions and texture maps. glutDisplayFunc( display ). enumerated types are definitions for the basic types (i. float.h. Callbacks are routines you write that GLUT calls when a certain sequence of events occurs. For Unix systems. Sample Program void main( int argc. Each operating system has its own set of libraries. 4) Enter the main event processing loop. On Microsoft Windows. init(). Use them to simplify transferring your programs to other operating systems. 3) Register the callback functions that you‟ll need. The most important callback function is the one to render your scene.) which your program uses to store variables. and schedules when callback functions are called.

0. we enter the event processing loop. and call whatever actions are necessary. which contains our one-time initialization. which interprets events and calls our respective callback routines. We then call the init() routine. } Here‟s the internals of our initialization routine. Finally.0. where the author must receive and process each event. 1.0. glClearDepth( 1.0 ). Next. glEnable( GL_LIGHT0 ).0.glutKeyboardFunc( key ). glutIdleFunc( idle ). OpenGL Initialization Set up whatever state you’re going to use void init( void ) { glClearColor( 0. Over the course. glEnable( GL_DEPTH_TEST ). GLUT Callback Functions Routine to call when something happens GLUT uses a callback mechanism to do its event processing. you‟ll learn what each of the above OpenGL calls do. Callbacks simplify event processing for the application developer. glEnable( GL_LIGHTING ). This is the model that we‟ll use for most of our programs in the course. init(). Here we initialize any OpenGL state and other program variables that we might need to use during our program that remain constant throughout the program‟s execution. we register the callback routines that we‟re going to use during our program. As compared to more traditional event driven programming. } Here‟s an example of the main part of a GLUT based OpenGL application. callbacks simplify the . The glutInitDisplayMode() and glutCreateWindow() functions compose the window configuration step. 0.0 ). glutMainLoop().

Very useful for animations. glEnd().called when pixels in the window need to be refreshed.called when the window changes size glutKeyboardFunc() . } One of the most important callbacks is the glutDisplayFunc() callback. glVertex3fv( v[1] ). • • Rendering Callback glutDisplayFunc( display ). and automatically handling the user events.called when a key is struck on the keyboard glutMouseFunc() .called when the mouse is moved regardless of mouse button state glutIdleFunc() . glBegin( GL_TRIANGLE_STRIP ). . All the author must do is fill in what should happen when.process by defining what actions are supported.a callback function called when nothing else is going on. glVertex3fv( v[2] ).called when the user presses a mouse button on the mouse glutMotionFunc() .called when the user moves the mouse while a mouse button is pressed glutPassiveMouseFunc() . glVertex3fv( v[3] ). glutReshapeFunc() . GLUT supports many different callback actions. including: • • • • • glutDisplayFunc() . void display( void ) { glClear( GL_COLOR_BUFFER_BIT ). This callback is called when the window needs to be refreshed. glutSwapBuffers(). It‟s here that you‟d do your entire OpenGL rendering. glVertex3fv( v[0] ).

You‟ll learn more about what each of these calls do. } Animation requires the ability to draw a sequence of images. glutPostRedisplay() requests that the callback registered with glutDisplayFunc() be called as soon as possible. break. You register a routine which updates your motion variables (usually global variables in your program which control how things move) and then requests that the scene be updated. int y ) { switch( key ) { case ‘q’ : case ‘Q’ : exit( EXIT_SUCCESS ). and renders a triangle strip and then swaps the buffers for smooth animation transition. int x. void keyboard( unsigned char key.The above routine merely clears the window. . since the user may have interacted with your application and user input events need to be processed. case ‘r’ : case ‘R’ : rotate = GL_TRUE. This is preferred over calling your rendering routine directly. Idle Callbacks Use for animation and continuous update glutIdleFunc( idle ). The glutIdleFunc()is the mechanism for doing animation. User Input Callbacks Process user input glutKeyboardFunc( keyboard ). glutPostRedisplay(). void idle( void ) { t += dt. glutPostRedisplay(). break.

In this case. which is discussed in a later section.} } Above is a simple example of a user input callback. Homogenous coordinates are of the form ( x. Additionally. and what each can be used for. Elementary Rendering    Geometric Primitives Managing OpenGL State OpenGL Buffers In this section. Depending on how vertices are organized. dial button and boxes. . GLUT supports user input through a number of devices including the keyboard. w ). OpenGL also supports the rendering of bitmaps and images. OpenGL can render any of the shown primitives. OpenGL Geometric Primitives All geometric primitives are specified by vertices Every OpenGL geometric primitive is specified by its vertices. we‟ll be discussing the basic geometric primitives that OpenGL uses for rendering. y. we‟ll discuss the different types of OpenGL buffers. as well as how to manage the OpenGL state which controls the appearance of those primitives. mouse. which are homogenous coordinates. z. the routine was registered to receive keyboard input.

Simple Example

    void drawRhombus( GLfloat color[] )
    {
        glBegin( GL_QUADS );
            glColor3fv( color );
            glVertex2f( 0.0, 0.0 );
            glVertex2f( 1.0, 0.0 );
            glVertex2f( 1.5, 1.118 );
            glVertex2f( 0.5, 1.118 );
        glEnd();
    }

The drawRhombus() routine causes OpenGL to render a single quadrilateral in a single color. The rhombus is planar, since the z value is automatically set to 0.0 by glVertex2f().

OpenGL Command Formats
The OpenGL API calls are designed to accept almost any basic data type, which is reflected in the call's name. Knowing how the calls are structured makes it easy to determine which call should be used for a particular data format and size.
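For example (a short illustration of the naming scheme; the suffix gives the number of components, the data type, and whether the arguments are passed as a vector):

    GLfloat v[3] = { 1.0f, 0.5f, 0.25f };
    glVertex3f( 1.0f, 0.5f, 0.25f );   // 3 components, floats, passed directly
    glVertex3fv( v );                  // the same vertex, passed as a vector (array)
    glVertex2i( 3, 4 );                // 2 components, integers; z defaults to 0 and w to 1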

For glVertex*() calls which don't specify all the coordinates (e.g., glVertex2f()), OpenGL will default z = 0.0 and w = 1.0. As mentioned before, OpenGL uses homogeneous coordinates to specify vertices. For instance, vertices from most commercial models are stored as three-component floating point vectors. As such, for

    GLfloat coords[3];

the appropriate OpenGL command to use is glVertex3fv( coords ).

Specifying Geometric Primitives

    GLfloat red, green, blue;
    GLfloat coords[3];

    glBegin( primType );
    for ( i = 0; i < nVerts; ++i ) {
        glColor3f( red, green, blue );
        glVertex3fv( coords );
    }
    glEnd();

OpenGL organizes vertices into primitives based upon which type is passed into glBegin(). The possible types include:

    GL_POINTS
    GL_LINES
    GL_POLYGON
    GL_TRIANGLES
    GL_QUADS

OpenGL Color Models
Every OpenGL implementation must support rendering in both RGBA mode (sometimes described as TrueColor mode) and color index (or colormap) mode. For RGBA rendering, vertex colors are specified using the glColor*() call. For color index rendering, the vertex's index is specified with glIndex*(). The type of window color model is requested from the windowing system. Using GLUT, the glutInitDisplayMode() call is used to specify either an RGBA window (using GLUT_RGBA) or a color-indexed window (using GLUT_INDEX).

Shapes Tutorial

    GL_LINE_STRIP   GL_LINE_LOOP   GL_TRIANGLE_STRIP   GL_TRIANGLE_FAN   GL_QUAD_STRIP

This section illustrates the principles of rendering geometry, specifying both colors and vertices. The shapes tutorial has two views: a screen-space window and a command manipulation window. In the command manipulation window, pressing the LEFT mouse button while the pointer is over the green parameter numbers allows you to move the mouse in the y-direction (up and down) and change their values. With this action, you can change the appearance of the geometric primitive in the other window. With the RIGHT mouse button, you can bring up a pop-up menu to change the primitive you are rendering. (Note that the parameters have minimum and maximum values in the tutorials, sometimes to prevent you from wandering too far. In an application, you probably don't want to have floating-point color values less than 0.0 or greater than 1.0, but you are likely to want to position vertices at coordinates outside the boundaries of this tutorial.) In the screen-space window, the RIGHT mouse button brings up a different pop-up menu, which has menu choices to change the appearance of the geometry in different ways. The left and right mouse buttons will do similar operations in the other tutorials.

Controlling Rendering Appearance

Manipulating OpenGL State Appearance is controlled by current state for each ( primitive to render ) { update OpenGL state .OpenGL can render from a simple line-based wireframe to complex multi-pass texturing algorithms to simulate bump mapping or Phong lighting. textured or any of OpenGL‟s other modes. it uses data stored in its internal state tables to determine how the vertex should be transformed. OpenGL’s State Machine All rendering attributes are encapsulated in the OpenGL State • • • • rendering styles shading lighting texture mapping Each time OpenGL processes a vertex. lit.

        render primitive
    }

Manipulating vertex attributes is the most common way to manipulate state:
    glColor*() / glIndex*()
    glNormal*()
    glTexCoord*()

The general flow of any OpenGL rendering is to set up the required state, pass the primitive to be rendered, and repeat for the next primitive. In general, the most common way to manipulate OpenGL state is by setting vertex attributes, which include color, lighting normals, and texturing coordinates.

Controlling Current State
Setting State:

    glPointSize( size );

    glLineStipple( repeat, pattern );
    glShadeModel( GL_SMOOTH );

Setting OpenGL state usually includes modifying a rendering attribute, such as loading a texture map or setting the line width. Also, for some state changes, setting the OpenGL state also enables that feature (like setting the point size or line width). Other features need to be turned on explicitly. This is done using glEnable() and passing the token for the feature, like GL_LIGHT0 or GL_POLYGON_STIPPLE.

Enabling Features

    glEnable( GL_LIGHTING );
    glDisable( GL_TEXTURE_2D );

Lecture 5: Geometric Representations

Geometric Programming: We are going to leave our discussion of OpenGL for a while and discuss some of the basic elements of geometry, which will be needed for the rest of the course. There are many areas of computer science that involve computation with geometric entities. This includes not only computer graphics, but also areas like computer-aided design, robotics, computer vision, and geographic information systems. Computer graphics deals largely with the geometry of lines and linear objects (such as triangles) in 3-space, because light travels in straight lines. For example, here are some typical geometric problems that arise in designing programs for computer graphics.

Transformation: Given a unit cube, what are the coordinates of its vertices after rotating it 30 degrees about the vector (1, 2, 1)?

Change of coordinates: A cube is represented relative to some standard coordinate system. What are its coordinates relative to a different coordinate system (say, one centered at the camera's location)?

Geometric Intersections: Given a cube and a ray, does the ray strike the cube? If so, which face? If the ray is reflected off of the face, what is the direction of the reflection ray?

Orientation: Three noncollinear points in 3-space define a unique plane. Given a fourth point q, is it above, below, or on this plane?

Such basic geometric problems are fundamental to computer graphics, and over the next few lectures, our goal will be to present the tools needed to answer these sorts of questions, and to do this in a reasonably clean and painless way. (By the way, a good source of information on how to solve these problems is the series of books entitled "Graphics Gems". Each book is a collection of many simple graphics problems and provides algorithms for solving them.)

There are various geometric systems. The principal ones that will be of interest to us are:

Affine Geometry: A geometric system involving "flat things": points, lines, planes, and line segments. There is no defined notion of distance, angles, or orientations, however.

Euclidean Geometry: The geometric system that is most familiar to us. It enhances affine geometry by adding notions such as distances, angles, and orientations (such as clockwise and counterclockwise).

Projective Geometry: The geometric system needed for reasoning about perspective projection. Unfortunately, this system is not compatible with Euclidean geometry, as we shall see later.

    #include <cstdlib>      // standard definitions
    #include <iostream>     // C++ I/O
    #include <GL/glut.h>    // GLUT
    #include <GL/glu.h>     // GLU
    #include <GL/gl.h>      // OpenGL

    using namespace std;    // make std accessible

    void myReshape(int w, int h) {      // window is reshaped
        glViewport(0, 0, w, h);         // update the viewport
        glMatrixMode(GL_PROJECTION);    // update projection
        glLoadIdentity();

        gluOrtho2D(0.0, 1.0, 0.0, 1.0); // map unit square to viewport
        glMatrixMode(GL_MODELVIEW);
        glutPostRedisplay();            // request redisplay
    }

    void myDisplay(void) {              // (re)display callback
        glClearColor(0.5, 0.5, 0.5, 1.0);   // background is gray
        glClear(GL_COLOR_BUFFER_BIT);       // clear the window
        glColor3f(1.0, 0.0, 0.0);           // set color to red
        glBegin(GL_POLYGON);                // draw the diamond
            glVertex2f(0.90, 0.50);
            glVertex2f(0.50, 0.90);
            glVertex2f(0.10, 0.50);
            glVertex2f(0.50, 0.10);
        glEnd();
        glColor3f(0.0, 0.0, 1.0);           // set color to blue
        glRectf(0.25, 0.25, 0.75, 0.75);    // draw the rectangle
        glutSwapBuffers();                  // swap buffers
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);                          // OpenGL initializations
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);   // double buffering and RGB
        glutInitWindowSize(400, 400);                   // create a 400x400 window
        glutInitWindowPosition(0, 0);                   // ...in the upper left
        glutCreateWindow(argv[0]);                      // create the window
        glutDisplayFunc(myDisplay);                     // setup callbacks
        glutReshapeFunc(myReshape);
        glutMainLoop();                                 // start it running
        return 0;                                       // ANSI C expects this
    }

    Fig. 10: Sample OpenGL Program: Header and Main program.

You might wonder where linear algebra enters. We will make use of linear algebra as a concrete representational basis for these abstract geometric systems (in much the same way that a concrete structure like an array is used to represent an abstract structure like a stack in object-oriented programming). We will describe these systems, starting with the simplest, affine geometry.

Affine Geometry: The basic elements of affine geometry are:
    scalars, which we can just think of as being real numbers;
    points, which define locations in space;
    free vectors (or simply vectors), which are used to specify direction and magnitude, but have no fixed position.

The term "free" means that vectors do not necessarily emanate from some position (like the origin), but float freely about in space. There is a special vector called the zero vector, ~0, that has no magnitude. Note that we did not define a zero point or origin; no point is special compared to any other point. (We will eventually have to break down and define an origin in order to have a coordinate system for our points, but this is a purely representational necessity, not an intrinsic feature of affine space.)

We will use the following notational conventions. Points will usually be denoted by lower-case Roman letters such as p, q, and r. Vectors will usually be denoted with lower-case Roman letters, such as u, v, and w, and often, to emphasize this, we will add an arrow (e.g., ~u, ~v, ~w). Scalars will be represented as lower-case Greek letters (e.g., α, β, γ); in our programs scalars will be translated to Roman letters (e.g., a, b, c). (We will sometimes violate these conventions, however. For example, we may use c to denote the center point of a circle or r to denote the scalar radius of a circle.)

You might ask, why make a distinction between points and vectors? Both can be represented in the same way as a list of coordinates. The reason is to avoid hiding the intention of the programmer: points are used to denote locations, and vectors are used to denote direction and length. It is not so clear what it means to multiply a point by a scalar. (Such a point would be twice as far away from the origin, but remember, for affine space, there is no origin!) Similarly, what does it mean to add two points? By keeping these basic concepts separate, the programmer's intentions are easier to understand. Note that some operations (e.g., scalar-point multiplication and addition of points) are explicitly not defined. This is an intentional omission.

Affine Operations: The table below lists the valid combinations of these entities. The formal definitions are pretty much what you would expect. Vector operations are applied in the same way that you learned in linear algebra. For example, it makes perfect sense to multiply a vector by a scalar (we stretch the vector by this amount) or to add two vectors together; vectors are added in the usual tail-to-head manner (see Fig. 11). The difference p - q of two points results in a free vector directed from q to p. Point-vector addition r + ~v is defined to be the translation of r by displacement ~v.

Affine Combinations: Although the algebra of affine geometry has been careful to disallow point addition and scalar multiplication of points, there is a particular combination of two points that we will consider legal. The operation is called an affine combination.
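As a brief summary of the combinations the table describes (here $\alpha$ denotes a scalar, $\vec{u}, \vec{v}$ vectors, and $p, q$ points):

\[
\begin{aligned}
\text{scalar} \cdot \text{vector:} \quad & \alpha\,\vec{v} \;\rightarrow\; \text{vector} \\
\text{vector} + \text{vector:} \quad & \vec{u} + \vec{v} \;\rightarrow\; \text{vector} \\
\text{point} - \text{point:} \quad & p - q \;\rightarrow\; \text{vector} \\
\text{point} + \text{vector:} \quad & p + \vec{v} \;\rightarrow\; \text{point}
\end{aligned}
\]

and the affine combination of two points, for any scalar $\alpha$,
\[
\alpha\,p + (1-\alpha)\,q \;=\; q + \alpha\,(p - q),
\]
which is again a point. (The right-hand side shows why it is legal: it is a point plus a scaled vector.)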

Euclidean Geometry: In affine geometry we have provided no way to talk about angles or distances. Euclidean geometry is an extension of affine geometry which includes one additional operation, called the inner product. The inner product is an operator that maps two vectors to a scalar. The inner product of ~u and ~v is commonly denoted (~u · ~v). There are many ways of defining the inner product, but any legal definition should satisfy the following requirements.
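The requirements are the standard inner product axioms; briefly, for vectors $\vec{u}, \vec{v}, \vec{w}$ and scalar $\alpha$:

\[
\begin{aligned}
&\text{Positive definiteness:} && \vec{u}\cdot\vec{u} \ge 0, \text{ with equality if and only if } \vec{u}=\vec{0},\\
&\text{Symmetry:} && \vec{u}\cdot\vec{v} = \vec{v}\cdot\vec{u},\\
&\text{Bilinearity:} && \vec{u}\cdot(\vec{v}+\vec{w}) = \vec{u}\cdot\vec{v} + \vec{u}\cdot\vec{w}, \qquad (\alpha\vec{u})\cdot\vec{v} = \alpha(\vec{u}\cdot\vec{v}).
\end{aligned}
\]

The familiar example is the dot product $\vec{u}\cdot\vec{v}=\sum_i u_i v_i$, from which length and angle are defined by $\|\vec{u}\| = \sqrt{\vec{u}\cdot\vec{u}}$ and $\cos\theta = (\vec{u}\cdot\vec{v})/(\|\vec{u}\|\,\|\vec{v}\|)$.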

Lecture 6: Transformations

More about Drawing: So far we have discussed how to draw simple 2-dimensional objects using OpenGL. Suppose that we want to draw more complex scenes; for example, we want to draw objects that move and rotate, or to change the projection. We could do this by computing (ourselves) the coordinates of the transformed vertices. However, this would be inconvenient for us. It would also be inefficient: OpenGL provides methods for downloading large geometric specifications directly to the GPU, and if the coordinates of these objects were changed with each display cycle, this would negate the benefit of loading them just once. For this reason, OpenGL provides tools to handle transformations.

Transformations: Linear and affine transformations are central to computer graphics. Recall from your linear algebra class that a linear transformation is a mapping in a vector space that preserves linear combinations. Such transformations include rotations, scalings, shearings (which stretch rectangles into parallelograms), and combinations thereof. As you might expect, affine transformations are transformations that preserve affine combinations. For example, if p and q are two points, m is their midpoint, and T is an affine transformation, then the midpoint of T(p) and T(q) is T(m). Important features of affine transformations include the facts that they map straight lines to straight lines, they preserve parallelism, and they can be implemented through matrix multiplication. They arise in various ways in graphics:

Moving Objects: As needed in animations.

Change of Coordinates: This is used when objects that are stored relative to one reference frame are to be accessed in a different reference frame. One important case of this is that of mapping objects stored in a standard coordinate system to a coordinate system that is associated with the camera (or viewer).

Projection: Such transformations are used to project objects from the idealized drawing window to the viewport, and to map the viewport to the graphics display window. (We shall see that perspective projection transformations are more general than affine transformations, since they may not preserve parallelism.)

Mapping between Surfaces: This is useful when textures are mapped onto object surfaces as part of texture mapping.

Today we consider how this is done in 2-space. This will form a foundation for the more complex transformations, which will be needed for 3-dimensional viewing.

OpenGL has a very particular model for how transformations are performed. Recall that when drawing, it was convenient for us to first define the drawing attributes (such as color) and then draw a number of objects using that attribute. OpenGL uses much the same model with transformations: you specify a transformation first, and then this transformation is automatically applied to every object that is drawn afterwards, until the transformation is set again. It is important to keep this in mind, because it implies that you must always set the transformation prior to issuing drawing commands.
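As a minimal sketch of this model, the following fragment sets a modelview transformation and then draws; everything issued after the glTranslatef()/glRotatef() calls is transformed by them (the particular angle, offset, and the drawObject() routine are illustrative placeholders):

    glMatrixMode(GL_MODELVIEW);           // subsequent matrix calls affect the modelview matrix
    glLoadIdentity();                     // start from the identity transformation
    glTranslatef(0.5f, 0.0f, 0.0f);       // translate, then ...
    glRotatef(30.0f, 0.0f, 0.0f, 1.0f);   // ... rotate 30 degrees about the z-axis
    drawObject();                         // hypothetical routine; its vertices are transformed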

Because transformations are used for different purposes, OpenGL maintains three sets of matrices for performing various transformation operations. These are:

Modelview matrix: Used for transforming objects in the scene and for changing the coordinates into a form that is easier for OpenGL to deal with. (Used for the first two tasks above.)

Projection matrix: Handles parallel and perspective projections. (Used for the third task above.)

Texture matrix: This is used in specifying how textures are mapped onto objects. (Used for the last task above.)

Thus, we can use transformations for several purposes:
1. Change coordinate frames (world, window, viewport, device, etc.).
2. Compose objects of simple parts, with the local scale/position/orientation of one part defined with regard to other parts, as for articulated objects.
3. Use deformation to create new shapes, given a point cloud, polygon, or sampled parametric curve.
4. Animation.

There are three basic classes of transformations:
1. Rigid body - Preserves distances and angles. Examples: translation and rotation.
2. Conformal - Preserves angles. Examples: translation, rotation, and uniform scaling.
3. Affine - Preserves parallelism; lines remain lines. Examples: translation, rotation, scaling, shear, and reflection.

The current matrix can be manipulated with the following calls:

glLoadIdentity(): Sets the current matrix to the identity matrix.

glLoadMatrix*(M): Loads (copies) a given matrix over the current matrix. (The '*' can be either 'f' or 'd' depending on whether the elements of M are GLfloat or GLdouble.)

glMultMatrix*(M): Post-multiplies the current matrix by a given matrix and replaces the current matrix with this result. Thus, if C is the current matrix on top of the stack, it will be replaced with the matrix product C·M. (As above, the '*' can be either 'f' or 'd' depending on M.)

glPushMatrix(): Pushes a copy of the current matrix on top of the stack. (Thus the stack now has two copies of the top matrix.)

glPopMatrix(): Pops the current matrix off the stack.
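As a sketch of how the matrix stack supports composing articulated objects (the drawUpperArm() and drawLowerArm() routines and the specific angles are hypothetical):

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();                        // save the current frame
      glTranslatef(0.0f, 1.0f, 0.0f);      // position the shoulder
      glRotatef(45.0f, 0.0f, 0.0f, 1.0f);
      drawUpperArm();                      // hypothetical routine
      glPushMatrix();                      // elbow frame is defined relative to the shoulder
        glTranslatef(1.0f, 0.0f, 0.0f);
        glRotatef(-30.0f, 0.0f, 0.0f, 1.0f);
        drawLowerArm();                    // hypothetical routine
      glPopMatrix();                       // back to the shoulder frame
    glPopMatrix();                         // back to the world frame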

. a linear transformation followed by a translation. You should understand the following proofs.  The inverse of an affine transformation is also affine. assuming it exists. Uniform scaling by scalar a:  Nonuniform scaling by a and b:  Shear by scalar h:  Reflection about the y-axis: Affine Transformations An affine transformation takes a point ¯p to ¯q according to ¯q = F(¯p) = A¯p +~t.

Homogeneous Coordinates
Homogeneous coordinates are another way to represent points, used to simplify the way in which we express affine transformations. Normally, bookkeeping would become tedious when affine transformations of the form A¯p + ~t are composed. With homogeneous coordinates, affine transformations become matrices, and composition of transformations is as simple as matrix multiplication. In future sections of the course we exploit this in much more powerful ways. With homogeneous coordinates, a point ¯p is augmented with a 1 to form ˆp = (¯p, 1)ᵀ.
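Concretely, if the affine map is $\bar{q} = A\bar{p} + \vec{t}$, then in homogeneous coordinates

\[
\hat{q} \;=\;
\begin{bmatrix} A & \vec{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix}
\begin{bmatrix} \bar{p} \\ 1 \end{bmatrix}
\;=\;
\begin{bmatrix} A\bar{p} + \vec{t} \\ 1 \end{bmatrix},
\]

so composing two affine maps amounts to multiplying their matrices (3x3 matrices in 2D, 4x4 in 3D).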

Given ˆp in homogeneous coordinates, to get ¯p we divide ˆp by its last component and discard the last component. Many transformations become linear in homogeneous coordinates, including affine transformations: to produce ˆq rather than ¯q, we can add a row to the matrix, as sketched above. This is linear! Bookkeeping becomes simple under composition. With homogeneous coordinates, the following properties of affine transformations become apparent:
    Affine transformations are associative: for affine transformations F1, F2, and F3, (F3 ∘ F2) ∘ F1 = F3 ∘ (F2 ∘ F1).
    Affine transformations are not commutative: for affine transformations F1 and F2, F2 ∘ F1 ≠ F1 ∘ F2 in general.

How is the transformation done: How do gluOrtho2D() and glViewport() set up the desired transformation from the idealized drawing window to the viewport? Well, OpenGL actually does this in two steps: first mapping from the window to a canonical 2 x 2 window centered about the origin, and then mapping this canonical window to the viewport. The intermediate coordinates are often called normalized device coordinates. The reason for this intermediate mapping is that the clipping algorithms are designed to operate on this fixed-size window.

Window to Viewport Transformation
As an exercise in deriving linear transformations, let us consider doing this all in one shot. Let W denote the idealized drawing window and let V denote the viewport. We wish to derive a linear transformation that maps a point (x, y) in window coordinates to a point (x', y') in viewport coordinates. See the figure below. Let Wl, Wr, Wb, and Wt denote the left, right, bottom, and top of the window, and define Vl, Vr, Vb, and Vt similarly for the viewport.

Let f(x, y) denote the desired transformation. Since the function is linear, and it operates on x and y independently, we have

    x' = sx * x + tx,    y' = sy * y + ty,

where sx, tx, sy, and ty depend on the window and viewport coordinates. Let's derive what sx and tx are using simultaneous equations. We know that the x-coordinates for the left and right sides of the window (Wl and Wr) should map to the left and right sides of the viewport (Vl and Vr). Thus we have

    sx * Wl + tx = Vl,    sx * Wr + tx = Vr.

We can solve these equations simultaneously: subtracting them eliminates tx and gives sx, and plugging this back into either equation gives tx. A similar derivation yields sy and ty. These four formulas give the desired final transformation (see the sketch below). This can be expressed in matrix form, which is essentially what OpenGL stores internally.
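Carrying the algebra through (a brief sketch, with the same notation as above):

\[
s_x = \frac{V_r - V_l}{W_r - W_l}, \qquad t_x = V_l - s_x W_l, \qquad
s_y = \frac{V_t - V_b}{W_t - W_b}, \qquad t_y = V_b - s_y W_b,
\]

and in matrix form

\[
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.
\]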

3D Affine Transformations
Three-dimensional transformations are used for many different purposes, such as coordinate transforms, shape modeling, animation, and camera modeling. An affine transform in 3D looks the same as in 2D: ¯q = A¯p + ~t, and a homogeneous affine transformation is formed in the same way, by augmenting A with ~t and a final row (0, 0, 0, 1).

3D rotations are much more complex than 2D rotations, so we will consider only elementary rotations about the x, y, and z axes. For a rotation about the z-axis, the z coordinate remains unchanged, and the rotation occurs in the x-y plane; so if ¯q = R¯p, then qz = pz. Including the z coordinate, this becomes the matrix Rz below. Similarly, rotation about the x-axis is Rx, and rotation about the y-axis is Ry.
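For reference, the standard elementary rotation matrices (each by an angle $\theta$, written as 3x3 linear parts) are:

\[
R_z(\theta)=\begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix},\qquad
R_x(\theta)=\begin{bmatrix}1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta\end{bmatrix},\qquad
R_y(\theta)=\begin{bmatrix}\cos\theta & 0 & \sin\theta\\ 0 & 1 & 0\\ -\sin\theta & 0 & \cos\theta\end{bmatrix}.
\]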

Lecture 7: Viewing in 3-D

Viewing in OpenGL: For the next couple of lectures we will discuss how viewing and perspective transformations are handled for 3-dimensional scenes. But first, we will revisit graphics rendering and try to understand the graphics rendering pipeline. The key concept behind all GPUs is the notion of the graphics pipeline, where your user program sits at one end sending graphics commands to the GPU, and the frame buffer sits at the other end. Each stage of the process is performed by some part of the pipeline, and the results are then fed to the next stage of the pipeline, until the final image is produced at the end.

A typical command from your program might be "draw a triangle in 3-dimensional space at these coordinates." The job of the graphics system is to convert this simple request to that of coloring a set of pixels on your display. The process of doing this is quite complex and involves a number of stages. We assume that all objects are initially represented relative to a standard 3-dimensional coordinate frame, in what are called world coordinates. In OpenGL, and most similar graphics systems, the process involves the following basic steps, each of which will be discussed below.

Modelview transformation: Maps objects (actually vertices) from their world-coordinate representation to one that is centered around the viewer. The resulting coordinates are variously called view coordinates, camera coordinates, or eye coordinates. (Specified by the OpenGL command gluLookAt().)

(Perspective) projection: This projects points in 3-dimensional eye coordinates to points on a plane called the viewplane. (We will see later that this transformation actually produces a 3-dimensional output, where the third component records depth information.) This projection process consists of three separate parts: the projection transformation (affine part), clipping, and perspective normalization, of which the perspective transformation is just one component. The output coordinates are called normalized device coordinates. (Specified by the OpenGL commands gluOrtho2D(), glFrustum(), or gluPerspective().)

Mapping to the viewport: Convert the point from these idealized normalized device coordinates to the viewport. The resulting coordinates are called window coordinates or viewport coordinates. (Specified by the OpenGL command glViewport().)

Camera analogy
To understand the concept of viewing a 3-D image on a 2-D viewport, we will use the analogy of a camera. This is a conceptual tool.
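A minimal sketch of how the three steps above typically appear in code (the eye position, field of view, and window size are illustrative values, not taken from the notes):

    // Mapping to the viewport (window coordinates)
    glViewport(0, 0, 400, 400);

    // Projection: define a perspective view volume
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 0.1, 100.0);   // field of view, aspect ratio, near, far

    // Modelview: place the camera (world -> eye coordinates)
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    // eye position
              0.0, 0.0, 0.0,    // look-at point
              0.0, 1.0, 0.0);   // up vector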

Each stage of the pipeline refines the scene, converting primitives in modeling space to primitives in device space. Derived information may be added (lighting and shading) and primitives may be removed (hidden surface removal) or modified (clipping).

A number of coordinate systems are used:
    MCS: Modeling Coordinate System.
    WCS: World Coordinate System.
    VCS: Viewer Coordinate System.
    NDCS: Normalized Device Coordinate System.
    DCS or SCS: Device Coordinate System or, equivalently, Screen Coordinate System.

The image begins in the model view in the Modeling Coordinate System (MCS). The model undergoes transformation to fit into the 3-D world scene; in this state, the coordinates used are the World Coordinate System (WCS). The WCS is converted to the Viewer Coordinate System (VCS), at which point the image is transformed from the 3-D world scene to the 3-D view scene. Going back to the camera analogy, the coordinates x, y, z on the object are made to correspond to the coordinates u, v, n on the camera, also referred to as the viewing or eye coordinate system. To ensure that the image can be projected to any viewport without having to change the rendering code, the VCS has to be converted to the Normalized Device Coordinate System (NDCS). The resulting image is then rasterized (converted to pixels) and presented on the viewport as a 2-D image in the Device Coordinate System (DCS) or, equivalently, the Screen Coordinate System (SCS). However, before this, the image is clipped to fit on the viewport. Conceptually, we can represent the viewing process by the following chart.

Since the viewports are 2-D whereas the objects are 3-D, we resolve the mismatch by introducing projections on the viewports to transform 3-D objects to 2-D projection planes.

Projection types
It therefore becomes important to specify the type of projection being used to transform a 3-D object onto a 2-D projection plane.

Transformation is the act of converting the coordinates of a point, vector, etc., from one coordinate space to another. The term can also refer collectively to the numbers used to describe the mapping of one space onto another, which are also called a "3D transformation matrix."

Projection is a scheme for mapping the 3D scene geometry onto the 2D image. In general, projections transform points in a coordinate system of dimension n into points in a coordinate system of dimension less than n; here, projections transform points in a 3-D coordinate system to a 2-D window.

Clipping is the process of removing points and parts of objects that are outside the view volume. This makes it possible to display only the portions of the 3-D scene we require on the viewport.

The class of projections that we will concentrate on are planar geometric projections. These are called so since they project onto planes and not curved surfaces, and they use straight lines and not curved projectors. Under planar geometric projections, the projection of 3-D objects is defined by straight projection rays, called projectors, emanating from a center of projection, passing through each point of the object, and intersecting a projection plane to form the projection. The center of projection, being a point, is defined by homogeneous coordinates (x, y, z, 1).

In general, there are two types of projections: parallel and perspective. A planar geometric projection where the center of projection can be defined is referred to as a perspective projection; here the center of projection is a finite distance away from the projection plane. In some cases, however, it is more realistic to talk about the direction of projection, where the center of projection is tending to infinity. Where we cannot explicitly specify the center of projection because it is at infinity, we refer to the projection as a parallel projection. Since the direction of projection is a vector, we can compute it by subtracting two points. The visual effect of a perspective projection is similar to that of a photographic system and of the human visual system, and is known as perspective foreshortening.

Perspective projections
The perspective projections of any set of parallel lines that are not parallel to the projection plane converge to a vanishing point. In 3-D, the parallel lines meet only at infinity, so the vanishing point can be thought of as a projection of a point at infinity. If the set of lines is parallel to one of the three principal axes, the vanishing point is called an axis vanishing point. There are at most three such points, corresponding to the number of principal axes cut by the projection plane. Perspective projections are categorized by the number of principal vanishing points and therefore by the number of axes the projection plane cuts. For example: a one-point perspective projection of a cube onto a plane cutting the z axis (the projection plane normal is parallel to the z axis), and a two-point perspective projection of a cube onto a plane that cuts the x and z axes.

Parallel projections
These are further classified into two groups depending on the relation between the direction of projection and the normal to the projection plane: orthographic and oblique. In orthographic parallel projections, these directions are the same (or the reverse of each other), so the direction of projection is normal to the projection plane. For oblique parallel projections, they are not.

The most common of the orthographic projections are the top, side, front, and plan elevations; these find everyday use in engineering drawings. Axonometric orthographic projections use projection planes that are not normal to a principal axis and therefore show several faces of an object at once. The isometric projection is a commonly used axonometric projection. In this type of projection, the projection plane normal, and therefore the direction of projection, makes equal angles with each principal axis: if the projection plane normal is (dx, dy, dz), then |dx| = |dy| = |dz|.

Oblique projections differ from orthographic projections in that the projection plane normal and the direction of projection differ. Oblique projections combine properties of the top, side, and front orthographic projections with those of axonometric projections.

Summarily, the various projection types can be represented as in the tree diagram below.

3-D viewing
Essentially, we start with an object on a window, which we clip against a view volume, project onto a projection plane, and then transform onto a viewport. The projection and the view volume together provide all the information we need to clip and project into 2-D space.

The projection plane, also called the view plane, is defined by a point on the plane called the view reference point (VRP) and a normal to the plane called the view plane normal (VPN). The view plane may be anywhere with respect to the world objects.

To define a window on the view plane, we need means of specifying minimum and maximum window coordinates and the two orthogonal axes in the view plane along which to measure these coordinates. These axes are part of the 3-D viewing reference coordinate (VRC) system. The origin of the VRC system is the VRP. One axis of the VRC is the VPN; this axis is called the n axis. A second axis of the VRC is found from the view up vector (VUP), which determines the v axis direction on the view plane: the v axis is defined such that the projection of VUP parallel to VPN onto the view plane is coincident with the v axis. The u axis direction is defined such that u, v, and n form a right-handed coordinate system. With the VRC system defined, the window's minimum and maximum u and v coordinates can be defined. When defining the window, we explicitly define the center of the window (CW).

The CW is in general not the VRP, which does not even need to be within the window bounds.

The center of projection and the direction of projection (DOP) are defined by a projection reference point (PRP) and an indicator of the projection type. If the projection type is parallel, then the DOP is the vector from the PRP to the CW and is parallel to the VPN.

(Figure: Semi-infinite pyramid view volume for a perspective projection; CW is the center of the window.)
(Figure: Infinite parallelepiped view volume of a parallel orthographic projection; DOP is the direction of projection, and the VPN and DOP are parallel.)

Sometimes we might want the view volume to be finite, in order to limit the number of output primitives projected onto the view plane. This is done by use of a front clipping plane and a back clipping plane. These planes, sometimes called the hither and yon planes, are parallel to the view plane; their normal is the VPN, with positive distance measured in the direction of the VPN.

(Figure: Truncated view volume for a perspective projection.)
(Figure: Truncated view volume for an orthographic parallel projection.)

Lecture 8: Illumination & Shading

Lighting and Shading: We will now take a look at the next major element of graphics rendering: light and shading. This is one of the primary elements of generating realistic images. This topic is the beginning of an important shift in approach. Up until now, we have discussed graphics from a purely mathematical (geometric) perspective. Light and reflection bring us to issues involving the physics of light and color and the physiological aspects of how humans perceive light and color.

If we could accurately model the movement of all light in a 3-dimensional scene, then in theory we could produce very accurate renderings, making photorealism a reality. Unfortunately, the computational effort needed for such a complex simulation would be prohibitively large, and so we will have to settle for much simpler approximations.

OpenGL, like most interactive graphics systems, supports a very simple lighting and shading model, and hence can achieve only limited realism. This was done primarily because speed is of the essence in interactive graphics. OpenGL's light and shading model was designed to be very efficient. Although it is not physically realistic, the OpenGL designers provided many ways to "fake" realistic illumination models. Modern GPUs support programmable shaders, which offer even greater realism, but we will not discuss these now.

OpenGL assumes a local illumination model, which means that the shading of a point depends only on its relationship to the light sources, without considering the other objects in the scene. For example, OpenGL's lighting model does not model shadows, it does not handle indirect reflection from other objects (where light bounces off of one object and illuminates another), and it does not handle objects that reflect or refract light (like metal spheres and glass balls). This is in contrast to a global illumination model, in which light reflected or passing through one object might affect the illumination of other objects. Global illumination models deal with many effects, such as shadows, indirect illumination, color bleeding (colors from one object reflecting and altering the color of a nearby object), caustics (which result when light passes through a lens and is focused on another surface), and lighting effects that account for global scene geometry. An example of some of the differences between a local and a global illumination model is shown below (Local Illumination Model versus Global Illumination Model). Unfortunately, computers are not fast enough to produce a truly realistic simulation of indirect reflections in real time.

Light: A detailed discussion of light and its properties would take us more deeply into physics than we care to go. For our purposes, we can imagine a simple model of light consisting of a large number of photons being emitted continuously from each light source; we may think of photons as extremely tiny packets of energy. Each photon has an associated energy, which (when aggregated over millions of different reflected photons) we perceive as color. Light sources generate energy in the form of photons. The photons are reflected and transmitted in various ways throughout the environment: they bounce off various surfaces and may be scattered by smoke or dust in the air. Eventually, some of them enter our eye and strike our retina. What we "see" is a function of the light that enters our eye. We perceive the resulting amalgamation of photons of various energy levels in terms of color. The more accurately we can simulate this physical process, the more realistic the lighting will be. Although color is a complex phenomenon, for our purposes it is sufficient to consider color to be modeled as a triple of red, green, and blue components.

Light Sources: Before talking about light reflection, we need to discuss where the light originates. In reality, light sources come in many sizes and shapes. They may emit light in varying intensities and wavelengths according to direction; the intensity of light energy is distributed across a continuous spectrum of wavelengths, described by a luminance function. To simplify things, OpenGL assumes that each light source is a point, and that the energy emitted can be modeled as an RGB triple, described by a vector with three components L = (Lr, Lg, Lb), which indicate the intensities of red, green, and blue light respectively. We will not concern ourselves with the exact units of measurement. Note that, although your display device will have an absolute upper limit on how much energy each color component of each pixel can generate (which is typically modeled as an 8-bit value in the range from 0 to 255), in theory there is no upper limit on the intensity of light. (If you need evidence of this, go outside and stare at the sun for a while!)

The ways in which a photon of light can interact with a surface are the following.

Reflection: The photon can be reflected or scattered back into the atmosphere. If the surface were perfectly smooth (like a mirror or highly polished metal), the reflection would satisfy the rule "angle of incidence equals angle of reflection" and the result would be mirror-like and very shiny in appearance. On the other hand, if the surface is rough at a microscopic level (like foam rubber, say), then the photons are scattered nearly uniformly in all directions. We can further distinguish different varieties of reflection:
    Pure reflection: perfect mirror-like reflectors.
    Specular reflection: imperfect reflectors like brushed metal and shiny plastics.
    Diffuse reflection: uniformly scattering, and hence not shiny.

Absorption: The photon can be absorbed into the surface (and hence dissipates in the form of heat energy). We do not see this light. Thus, an object appears to be green, for example, because it reflects photons in the green part of the spectrum and absorbs photons in the other regions of the visible spectrum.

Transmission: The photon can pass through the surface. This happens perfectly with transparent objects (like glass and polished gem stones) and with a significant amount of scattering with translucent objects (like human skin or a thin piece of tissue paper). For example, human skin and many plastics are characterized by a complex phenomenon called subsurface scattering, in which light is transmitted under the surface and then bounces around and is reflected at some other point.

Of course, real surfaces possess various combinations of these elements, and these elements can interact in complex ways. All of the above involve how incident light reacts with a surface. Another way that light may result from a surface is through emission, which will be discussed below.

Lighting in real environments usually involves a considerable amount of indirect reflection between objects of the scene, so that even objects that are hidden from the light source are partially illuminated; in indoor scenes we are accustomed to seeing much softer shading than a single point source would produce. In OpenGL (and most local illumination models) this scattering of light is modeled by breaking the light source's intensity into two components: ambient emission and point emission.

Ambient emission: Refers to light that does not come from any particular location. Like heat, it is assumed to be scattered uniformly in all locations and directions. A point is illuminated by ambient emission even if it is not visible from the light source.

Point emission: Refers to light that originates from a single point. In theory, point emission only affects points that are directly visible to the light source. That is, a point p is illuminated by light source q if and only if the open line segment pq does not intersect any of the objects of the scene. Unfortunately, determining whether a point is visible to a light source in a complex scene with thousands of objects can be computationally quite expensive. So OpenGL simply tests whether the surface is facing towards the light or away from the light. That is, in the figure below (point light source visibility using a local illumination model), the point p is illuminated, and, in spite of the obscuring triangle, point p' is also illuminated, because other objects in the scene are ignored by the local illumination model. The point p'' is clearly not illuminated, because its normal is directed away from the light. If we were to ignore indirect light and simply consider a point to be illuminated only if it can see the light source, then the resulting image would be one in which objects in the shadows are totally black.

Let us consider this facing test in a more general setting. Suppose that we have a point p lying on some surface. Let n denote the normal vector at p, directed outwards from the object's interior, and let α denote the directional vector from p to the light source (α = q - p). Then p will be illuminated if and only if the angle between these vectors is acute. We can determine this by testing whether their dot product is positive, that is, n · α > 0.

Attenuation: The light that is emitted from a point source is subject to attenuation, that is, the decrease in strength of illumination as the distance to the source increases. Physics tells us that the intensity of light falls off as the inverse square of the distance. This would imply that the intensity at some (unblocked) point p would be

I(p) = I(q) / ||p - q||², where ||p - q|| denotes the Euclidean distance from p to q. However, our various simplifying assumptions (ignoring indirect reflections, for example) will cause point sources to appear unnaturally dim using the exact physical model of attenuation. Consequently, OpenGL uses an attenuation function that has constant, linear, and quadratic components. Let d = ||p - q|| denote the distance to the point source. Then the attenuation function is

    atten(d) = 1 / (a + b·d + c·d²).

In OpenGL, the user specifies the constants a, b, and c. The default values are a = 1 and b = c = 0, so there is no attenuation by default. See the OpenGL function glLight() for further information.

Directional Sources and Spotlights: A light source can be placed infinitely far away by using the projective geometry convention of setting the last coordinate to 0. Suppose that we imagine that the z-axis points up. At high noon, the sun's coordinates would be modeled by the homogeneous positional vector (0, 0, 1, 0)ᵀ. These are called directional sources. There is a performance advantage to using directional sources: if the light source is at infinity, then all points on a single polygonal patch have the same angle to the light source, and hence the angle need be computed only once for all points on the patch. OpenGL also supports something called a spotlight, where the intensity is strongest along a given direction and then drops off according to the angle θ from this direction; the intensity decreases as the angle θ increases.

Types of light reflection: The next issue needed to determine how objects appear is how this light is reflected off of the objects in the scene and reaches the viewer. So the discussion shifts from light sources to object surface properties. We will assume that all objects are opaque. The simple model that we will use for describing the reflectance properties of objects is called the Phong model. The model is over 20 years old, and is based on modeling surface reflection as a combination of the following components:

Emission: This is used to model objects that glow (even when all the lights are off). This is unaffected by the presence of any light sources. However, because our illumination model is local, an emissive object does not behave like a light source, in the sense that it does not cause any other objects to be illuminated.

Ambient reflection: This is a simple way to model indirect reflection. All surfaces in all positions and orientations are illuminated equally by this light energy.

Diffuse reflection: The illumination produced by matte (i.e., dull or non-shiny) smooth objects, such as foam rubber.

Specular reflection: The bright spots appearing on smooth shiny (e.g., metallic or polished) surfaces. Although specular reflection is related to pure reflection (as with mirrors), for the purposes of our simple model these two are different.

The Relevant Vectors: The shading of a point on a surface is a function of the relationship between the viewer, the light sources, and the surface. (Recall that because this is a local illumination model, the other objects of the scene are ignored.) The following vectors are relevant to shading. We can think of them as being centered on the point whose shading we wish to compute, and for the purposes of our equations below, it will be convenient to think of them all as being of unit length.

Normal vector: A vector n that is perpendicular to the surface and directed outwards from the surface. There are a number of ways to compute normal vectors, depending on the representation of the underlying object.

View vector: A vector v that points in the direction of the viewer (or camera).

Light vector: A vector l that points towards the light source.

Reflection vector: A vector r that indicates the direction of pure reflection of the light vector. (Based on the law that the angle of incidence with respect to the surface normal equals the angle of reflection.) The reflection vector computation reduces to an easy exercise in vector arithmetic.

Halfway vector: A vector h that is midway between l and v. Since it is half way between l and v, and both have been normalized to unit length, we can compute it by simply averaging these two vectors and normalizing (assuming that they are not pointing in exactly opposite directions).

(Figure: Vectors used in Phong shading.)

Diffuse reflection: Diffuse reflection arises from the assumption that light from any direction is reflected uniformly in all directions. Such a reflector is called a pure Lambertian reflector. The physical explanation for this type of reflection is that at a microscopic level the object is made up of microfacets that are highly irregular, and these irregularities scatter light uniformly in all directions. The reason that Lambertian reflectors appear brighter in some parts than others is that if the surface is facing (i.e., perpendicular to) the light source, then the energy is spread over the smallest possible area, and thus this part of the surface appears brightest. As the angle of the surface normal increases with respect to the angle of the light source, the same amount of light energy is spread out over a greater fraction of the surface, and hence each point of the surface receives (and hence reflects) a smaller amount of light.
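This dependence on the angle to the light source is usually written as follows (a sketch; here $k_d$ is the material's diffuse coefficient and $I_\ell$ the light source intensity):

\[
I_d \;=\; k_d\, I_\ell\, \max\!\big(0,\; \vec{n}\cdot\vec{l}\,\big),
\]

where the $\max(0,\cdot)$ clamps the contribution to zero for surfaces facing away from the light, matching the facing test described earlier.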

Specular Reflection: Most objects are not perfect Lambertian reflectors. One of the most common deviations is for smooth metallic or highly polished objects, which tend to have specular highlights (or "shiny spots"). Theoretically, these spots arise because at the microfacet level light is not being scattered perfectly randomly, but shows a preference for being reflected according to the familiar rule that the angle of incidence equals the angle of reflection. On the other hand, the facets are not so smooth that we get a clear mirror-like reflection.

There are two common ways of modeling specular reflection. The Phong model uses the reflection vector (derived earlier). OpenGL instead uses the halfway vector, because it is somewhat more efficient and produces essentially the same results. We let (n · h) be the geometric parameter which will define the strength of the specular component. (The original Phong model uses the factor (r · v) instead.) Observe that if the eye is aligned perfectly with the ideal reflection angle, then h will align itself perfectly with the normal n, and hence (n · h) will be large. On the other hand, if the eye deviates from the ideal reflection angle, then h will not align with n, and (n · h) will tend to decrease.

(Figure: Diffuse and specular reflection.)

Lighting and Shading in OpenGL: To describe lighting in OpenGL there are three major steps that need to be performed: setting the lighting and shade model (smooth or flat), defining the lights, their positions and properties, and finally defining object material properties.

Lighting/Shading model: There are a number of global lighting parameters and options that can be set through the command glLightModel*(). It has two forms, one for scalar-valued parameters and one for vector-valued parameters:

    glLightModelf(GLenum pname, GLfloat param);
    glLightModelfv(GLenum pname, const GLfloat* params);

Create/Enable lights: To use lighting in OpenGL, first you must enable lighting, through a call to glEnable(GL_LIGHTING). OpenGL allows the user to create up to 8 light sources, named GL_LIGHT0 through GL_LIGHT7. Each light source may either be enabled (turned on) or disabled (turned off); by default they are all disabled. Again, this is done using glEnable() (and glDisable()). The properties of each light source are set by the command glLight*(). This command takes three arguments: the name of the light, the property of the light to set, and the value of this property.
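As a minimal sketch of these steps (the particular position, colors, and material values are illustrative, not from the notes):

    GLfloat lightPos[]  = { 1.0f, 1.0f, 1.0f, 0.0f };   // directional source (last coordinate 0)
    GLfloat lightDiff[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat matDiff[]   = { 0.8f, 0.2f, 0.2f, 1.0f };

    glShadeModel( GL_SMOOTH );                       // shading model (smooth or flat)
    glEnable( GL_LIGHTING );                         // enable lighting ...
    glEnable( GL_LIGHT0 );                           // ... and one light source
    glLightfv( GL_LIGHT0, GL_POSITION, lightPos );   // light position/direction
    glLightfv( GL_LIGHT0, GL_DIFFUSE,  lightDiff );  // light color property
    glMaterialfv( GL_FRONT, GL_DIFFUSE, matDiff );   // object material property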

Shading model: The shading model is the algorithm used to determine the color of light leaving a surface given a description of the light incident upon it. At each pixel, during rasterization, a lighting model is calculated. The shading model usually incorporates the surface normal information, the lighting model, the surface reflectance attributes, any texture or bump mapping, and even some compositing information.

Flat Shading: Perform the lighting calculation once, and shade the entire polygon one colour.
    Shade the entire polygon one colour.
    Perform the lighting calculation at one polygon vertex, at the center of the polygon (what normal do we use?), or at all polygon vertices, averaging the colours.
    Problem: the surface looks faceted. This is OK if the model really is polygonal, but not good if it is a sampled approximation to a curved surface.

Gouraud Shading: Lighting is only computed at the vertices, and the colours are interpolated across the (convex) polygon.
    Gouraud shading interpolates colours across a polygon from the vertices.
    Lighting calculations are only performed at the vertices.
    Interpolation is well-defined for triangles. Extensions to convex polygons exist, but are not a good idea; to implement, convert to triangles.
    Gouraud shading is well-defined only for triangles. For polygons with more than three vertices:
        Sort the vertices by y coordinate.
        Slice the polygon into trapezoids with parallel top and bottom.
        Interpolate colours along each edge of the trapezoid.
        Interpolate colours along each scanline.
    Barycentric combinations are also affine combinations, so one can use repeated affine combination along edges and across spans.
    Triangular Gouraud shading is invariant under affine transformations.

Phong Shading: A normal is specified at each vertex, and this normal is interpolated across the polygon; the lighting model is then evaluated at each pixel.

    Vertex normals are independent of the polygon normal.
    Vertex normals should relate to the surface being approximated by the polygonal mesh.

Gouraud shading gives bilinear interpolation within each trapezoid. Since rotating the polygon can result in a different trapezoidal decomposition, n-sided Gouraud interpolation is not affine invariant. Gouraud shading is not good for shiny surfaces unless fine polygons are used, and aliasing is also a problem: highlights can be missed or blurred.

Phong Shading
    Phong shading interpolates lighting model parameters, not colours, and gives a much better rendition of highlights.
    A normal is specified at each vertex of a polygon.
    The normal is interpolated across the polygon (using Gouraud techniques).
    At each pixel:
        Interpolate the normal.
        Interpolate other shading parameters, etc.
        Compute the view and light vectors.
        Evaluate the lighting model.
    The lighting model does not have to be the Phong lighting model!
    Normal interpolation is nominally done by vector addition and renormalization; several "fast" approximations are possible.
    The view and light vectors may also be interpolated or approximated.

Problems with Phong shading:
    Distances change under the perspective transformation, so where do we do the interpolation?
    Normals don't map through the perspective transformation, so we can't perform the lighting calculation or linear interpolation in device space.
    We have to perform the lighting calculation in world space or view space, perform linear interpolation in world or view space, and then project into device space.
    This results in rational-linear interpolation in device space: interpolate homogeneous coordinates and do a per-pixel divide (assuming the model-view transformation is affine).
    This can be organized so that only one division per pixel is needed, regardless of the number of parameters to be interpolated.

Lecture 9: Ray Tracing

So far, we have considered only local models of illumination; they only account for incident light coming directly from the light sources. Global models include incident light that arrives from other surfaces, and lighting effects that account for global scene geometry. Such effects include:
    Shadows
    Secondary illumination (such as color bleeding)
    Reflections of other objects, in mirrors, for example

Ray Tracing was developed as one approach to modeling the properties of global illumination. Ray tracing is the process of determining the shade of a pixel in a scene consisting of arbitrary objects, various surface attributes, and complex lighting models. The basic idea is as follows. For each pixel:
    Cast a ray from the eye of the camera through the pixel, and find the first surface hit by the ray.
    Determine the surface radiance at the surface intersection with a combination of local and global models.
    To estimate the global component, cast rays from the surface point to possible incident directions to determine how much light comes from each direction. This leads to a recursive form for tracing paths of light backwards from the surface to the light sources.

The process starts like ray casting, but each ray is followed as it passes through translucent objects, is bounced by reflective objects, and intersects objects on its way to each light source to create shadows.

The Basic Idea: Consider our standard perspective viewing scenario. There is a viewer located at some position, in front of the viewer is the view plane, and on this view plane is a window. We want to render the scene that is visible to the viewer through this window. Consider an arbitrary point on this window. The color of this point is determined by the light ray that passes through this point and hits the viewer's eye.

In the physical world, light travels in rays that are emitted from the light source and hit objects in the environment. When light hits a surface, some of its energy is absorbed and some is reflected in different directions. (If the object is transparent, light may also be transmitted through the object.) The light may continue to be reflected off of other objects. Eventually some of these reflected rays find their way to the viewer's eye. If we could accurately model the movement of all light in a 3-dimensional scene, then in theory we could produce very accurate renderings, making photorealism a reality. Unfortunately, the computational effort needed for such a complex simulation would be prohibitively large.

How might we speed the process up? Observe that most of the light rays that are emitted from the light sources never even hit our eye; consequently, the vast majority of the light simulation effort is wasted. This suggests that rather than tracing light rays as they leave the light source (in the hope that they will eventually hit the eye), we instead reverse things and trace backwards along the light rays that hit the eye, since only these are relevant to the viewing process. This is the idea upon which ray tracing is based.

Ray Tracing Model: Imagine that the viewing window is replaced with a fine mesh of horizontal and vertical grid lines, so that each grid square corresponds to a pixel in the final image. We shoot rays out from the eye through the center of each grid square in an attempt to trace the path of light backwards toward the light sources.

We simply repeat this operation on all the pixels in the grid, and we have our final image. (In order to avoid problems with jagged lines, called aliasing, it is more common to shoot a number of rays per pixel and average their results.)

Consider the first object that such a ray hits. We want to know the intensity of reflected light at this surface point. This depends on a number of things, principally the reflective and color properties of the surface and the amount of light reaching this point from the various light sources. The amount of light reaching this surface point is hard to compute accurately, because light from the various light sources might be blocked by other objects in the environment and may be reflected off of others.

A purely local approach to this question would be to use the model we discussed in the Phong model, namely that a point is illuminated if the angle between the normal vector and the light vector is acute. In ray tracing it is common to use a somewhat more global approximation. We will assume that the light sources are points. We shoot a ray from the surface point to each of the light sources. For each of these rays that succeeds in reaching a light source before being blocked by another object, we infer that this point is illuminated by this source; otherwise, we assume that it is not illuminated, and hence we are in the shadow of the blocking object.

Given the direction to the light source and the direction to the viewer, and the surface normal (which we can compute because we know the object that the ray struck), we have all the information we need to compute the reflected intensity of the light at this point, say, by using the Phong model and information about the ambient, diffuse, and specular reflection properties of the object. We use this model to assign a color to the pixel. Even this simple ray tracing model is already better than what OpenGL supports because, for example, OpenGL's local lighting model does not compute shadows.

The ray tracing model can easily be extended to deal with reflective objects (such as mirrors and shiny spheres) and transparent objects (glass balls and rain drops). For example, when the ray hits a reflective object, we compute the reflection ray and shoot it into the environment, invoking the ray tracing algorithm recursively. When we get the associated color, we blend it with the local surface color and return the result. The generic algorithm is outlined below.

RayTrace(): Given the camera setup and the image size, generate a ray Rij from the eye passing through the center of each pixel (i, j) of your image window. Call trace(Rij) and assign the color returned to this pixel.
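The recursion can be sketched as follows in Python. The helpers first_hit, occluded, phong_local, and reflect are assumed stand-ins for the intersection test, the shadow-ray test, the local Phong evaluation, and the mirror-reflection formula; the material fields (ka, kr) are likewise hypothetical. This is a minimal sketch of the structure described above, not a reference implementation.

    def trace(scene, origin, direction, depth=0, max_depth=5):
        """Return the color seen along one ray (sketch of the recursive algorithm above)."""
        if depth > max_depth:                        # guard against mirrors facing mirrors
            return scene.background
        hit = first_hit(scene, origin, direction)    # assumed helper: closest intersection or None
        if hit is None:
            return scene.background
        color = hit.material.ka * scene.ambient      # ambient term of the local model
        for light in scene.lights:                   # shadow ray toward each point light source
            if not occluded(scene, hit.point, light.position):   # assumed helper
                color += phong_local(hit, light, -direction)     # assumed helper: diffuse + specular
        if hit.material.kr > 0:                      # reflective object: recurse and blend
            r = reflect(direction, hit.normal)       # assumed helper: mirror reflection
            color += hit.material.kr * trace(scene, hit.point + 1e-4 * r, r, depth + 1, max_depth)
        return color

The small offset added to hit.point before recursing is a common trick to avoid the reflected ray immediately re-intersecting the surface it just left.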

Trace(R): Shoot R into the scene and let X be the first object hit and p be the point of contact with this object.
(a) If X is reflective, then compute the reflection ray Rr of R at p. Let Cr ← trace(Rr).
(b) If X is transparent, then compute the transmission (refraction) ray Rt of R at p. Let Ct ← trace(Rt).
(c) For each light source L:
  (i) Shoot a ray RL from p to L.
  (ii) If RL does not hit any object until reaching L, then apply the lighting model to determine the shading at this point.
(d) Combine the colors Cr and Ct due to reflection and transmission (if any) along with the combined shading from (c) to determine the final color C. Return C.

Reflection: Recall the Phong reflection model. Each object is associated with a color and with its coefficients of ambient, diffuse, and specular reflection. To model the reflective component, each object will be associated with an additional parameter called the coefficient of reflection, denoted ρr. As with the other coefficients, this is a number in the interval [0, 1]. Let us assume that this coefficient is nonzero. We compute the view reflection ray (which equalizes the angle between the surface normal and the view vector). Let v denote the normalized view vector, which points backwards along the viewing ray. Thus, if the ray is p + tu, then v = -normalize(u). (This is essentially the same as the view vector used in the Phong model, but it may not point directly back to the eye because of intermediate reflections.) Let n denote the outward pointing surface normal vector, which we assume is also normalized.

Since the surface is reflective, we shoot a ray emanating from the surface contact point along the reflection direction and apply the above ray-tracing algorithm recursively. Eventually, when the ray hits a non-reflective object, the resulting color is returned. This color is then factored into the Phong model, as will be described below. Note that it is possible for this process to go into an infinite loop if, say, you have two mirrors facing each other. To avoid such looping, it is common to have a maximum recursion depth, after which some default color is returned, irrespective of whether the object is reflective.

Transparent objects and refraction: To model refraction, also called transmission, we maintain a coefficient of transmission, denoted ρt. We also need to associate each surface with two additional parameters, the indices of refraction for the incident side, ηi, and the transmitted side, ηt. Recall from physics that the index of refraction is the ratio of the speed of light through a vacuum versus the speed of light through the material. Typical indices of refraction include:

Material        Index of Refraction
Air (vacuum)    1.0
Water           1.333
Glass           1.5
Diamond         2.47
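The reflection ray of step (a) and the transmission ray of step (b) can be computed from the incident direction, the outward normal, and the two indices of refraction. Below is a minimal Python sketch under the assumption that all vectors are unit length; the function names reflect and refract are illustrative, the transmission direction follows Snell's law (discussed next), and total internal reflection is reported by returning None.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def reflect(d, n):
        """Mirror-reflect the (unit) incident direction d about the (unit) outward normal n."""
        return d - 2.0 * np.dot(d, n) * n

    def refract(d, n, eta_i, eta_t):
        """Transmission direction from Snell's law: eta_i * sin(theta_i) = eta_t * sin(theta_t).

        d is the unit incident direction, n the outward unit normal on the incident side.
        Returns None when total internal reflection occurs.
        """
        eta = eta_i / eta_t
        cos_i = -np.dot(d, n)                         # cosine of the incidence angle
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)    # sin^2 of the transmission angle
        if sin2_t > 1.0:
            return None                               # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return normalize(eta * d + (eta * cos_i - cos_t) * n)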

. Rendering: Draw the objects using illumination information from the photon trace. while dark surfaces do not.Snell's law says that if a ray is incident with angle θi (relative to the surface normal) then it will transmitted with angle θt (relative to the opposite normal) such that Global Illumination through Photon Mapping: Our description of ray tracing so far has been based on the Phong illumination. Power: The color and brightness of the photon. the rendering phase starts. Photon mapping is particularly powerful because it can handle both diffuse and non-diffuse (e. By summing the total contribution of these photons and consider surface properties (such as color and reflective properties) we determine the intensity of the resulting surface patch. which works quite well with ray tracing. a large number of photons are randomly generated from each light source and propagated into the 3-dimensional scene. Indirect illumination: This occurs when light is reflected from one surface (e. bright surfaces generate a lot of reflection. specular) reflective surfaces and can deal with complex (curved) geometries. but the results produced by photon mapping can be stunningly realistic. it is not really a full-fledged global illumination model because it cannot handle complex inter-object effects with respect to light. Thus. we check how many photons have landed near this surface point. a white wall) onto another. In order to render a point of some surface. a white wall that is positioned next to a bright green object will pick up some of the green color. Caustics: These result when light is focused through refractive surfaces like glass and water. called photon mapping. When a photon lands.. Color bleeding: When indirect illumination occurs with a colored surface.) After all the photons have been traced. This causes variations in light intensity on the surfaces on which the light eventually lands. In the first phase. There are a number of methods for implementing global illumination models. the number of photons shot into the scene must be large enough that every point has a respectable number of nearby photons. Such reflection depends on the properties of the incident surface. Because it is not a local illumination method.g. A photon hitting a colored surface is more likely to reflect the color present in the surface. As each photon hits a surface. in comparison to simple ray tracing. Incident direction: The direction from which the photon arrived on the surface. Although ray tracing can handle shadows. or (with some probability) it may be reflected onto another surface. photon mapping takes more time than simple ray-tracing using the Phong model. its direction of reflection depends on surface properties (e. .. it is represented by three quantities: Location: A position in space where the photon lands on a surface. the reflected light is colored. When the photon is reflected.g.g. The basic idea behind photon mapping involves two steps: Photon tracing: Simulate propagation of photons from light source onto surfaces. diffuse reflectors scatter photons uniformly in all directions while specular reflectors reflect photons nearly along the direction of perfect reflection. We will discuss one method. it may either stay on this surface. For this to work. For example.

Lecture 9: Rendering

Rendering is the process of taking a geometric model, a lighting model, a camera view, and other image generation parameters and computing an image. The choice of rendering algorithm depends on the model representation and on the degree of realism (interpretation of object and lighting attributes) desired. Rendering is:
• Turning ideas into pictures
• A communications tool
• A means to an end
