
Fundamentals of Computer Graphics

With Java, OpenGL, and Jogl
(Preliminary Partial Version, May 2010)

David J. Eck
Hobart and William Smith Colleges

This is a PDF version of an on-line book that is available at http://math.hws.edu/graphicsnotes/. The PDF does not include source code files, but it does have external links to them, shown in blue. In addition, each section has a link to the on-line version. The PDF also has internal links, shown in red. These links can be used in Acrobat Reader and some other PDF reader programs.


©2010, David J. Eck

David J. Eck (eck@hws.edu)
Department of Mathematics and Computer Science
Hobart and William Smith Colleges
Geneva, NY 14456

This book can be distributed in unmodified form with no restrictions. Modified versions can be made and distributed provided they are distributed under the same license as the original. More specifically: This work is licensed under the Creative Commons Attribution-Share Alike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.

The web site for this book is: http://math.hws.edu/graphicsnotes

Contents
Preface

1  Java Graphics in 2D
   1.1  Vector Graphics and Raster Graphics
   1.2  Two-dimensional Graphics in Java
        1.2.1  BufferedImages
        1.2.2  Shapes and Graphics2D
   1.3  Transformations and Modeling
        1.3.1  Geometric Transforms
        1.3.2  Hierarchical Modeling

2  Basics of OpenGL and Jogl
   2.1  Basic OpenGL 2D Programs
        2.1.1  A Basic Jogl App
        2.1.2  Color
        2.1.3  Geometric Transforms and Animation
        2.1.4  Hierarchical Modeling
        2.1.5  Introduction to Scene Graphs
        2.1.6  Compiling and Running Jogl Apps
   2.2  Into the Third Dimension
        2.2.1  Coordinate Systems
        2.2.2  Essential Settings
   2.3  Drawing in 3D
        2.3.1  Geometric Modeling
        2.3.2  Some Complex Shapes
        2.3.3  Optimization and Display Lists
   2.4  Normals and Textures
        2.4.1  Introduction to Normal Vectors
        2.4.2  Introduction to Textures

3  Geometry
   3.1  Vectors, Matrices, and Homogeneous Coordinates
        3.1.1  Vector Operations
        3.1.2  Matrices and Transformations
        3.1.3  Homogeneous Coordinates
        3.1.4  Vector Forms of OpenGL Commands
   3.2  Primitives
        3.2.1  Points
        3.2.2  Lines
        3.2.3  Polygons
        3.2.4  Triangles
        3.2.5  Quadrilaterals
   3.3  Polygonal Meshes
        3.3.1  Indexed Face Sets
        3.3.2  OBJ Files
        3.3.3  Terrain and Grids
   3.4  Drawing Primitives
        3.4.1  Java's Data Buffers
        3.4.2  Drawing With Vertex Arrays
        3.4.3  Vertex Buffer Objects
        3.4.4  Drawing with Array Indices
   3.5  Viewing and Projection
        3.5.1  Perspective Projection
        3.5.2  Orthographic Projection
        3.5.3  The Viewing Transform
        3.5.4  A Simple Avatar
        3.5.5  Viewer Nodes in Scene Graphs

4  Light and Material
   4.1  Vision and Color
   4.2  OpenGL Materials
        4.2.1  Setting Material Properties
        4.2.2  Color Material
   4.3  OpenGL Lighting
        4.3.1  Light Color and Position
        4.3.2  Other Properties of Lights
        4.3.3  The Global Light Model
        4.3.4  The Lighting Equation
   4.4  Lights and Materials in Scenes
        4.4.1  The Attribute Stack
        4.4.2  Lights in Scene Graphs
   4.5  Textures
        4.5.1  Texture Targets
        4.5.2  Mipmaps and Filtering
        4.5.3  Texture Transformations
        4.5.4  Creating Textures with OpenGL
        4.5.5  Loading Data into a Texture
        4.5.6  Texture Coordinate Generation
        4.5.7  Texture Objects

5  Some Topics Not Covered

Appendix: Source Files

Preface

These notes represent an attempt to develop a new computer graphics course at the advanced undergraduate level. The primary goal, as in any such course, is to cover the fundamental concepts of computer graphics, and the concentration is on graphics in three dimensions. However, computer graphics has become a huge and complex field, and the typical textbook in the field covers much more material than can reasonably fit into a one-semester undergraduate course. Furthermore, the selection of topics can easily bury what should be an exciting and applicable subject under a pile of detailed algorithms and equations. These details are the basis of computer graphics, but they are to a large extent built into graphics systems. While practitioners should be aware of this basic material in a general way, there are other things that are more important for students on an introductory level to learn.

These notes were written over the course of the Spring semester, 2010. More information can be found on the web page for the course at http://math.hws.edu/eck/cs424/. The notes cover computer graphics programming using Java. OpenGL is a standard and widely used graphics API. While it is most commonly used with the C programming language, Jogl, the Java API for OpenGL, gives Java programmers access to all the features of OpenGL. In these notes, Jogl is used for three-dimensional graphics programming.

The version of Jogl that was used in this course was 1.1.1a. A new version, Jogl 2, was under development as the course was being taught, but Jogl 2 is still listed as a "work in progress" in May 2010. (Unfortunately, it looks like Jogl 1.1.1a will not be upward compatible with Jogl 2, so code written for the older version will not automatically work with the new version. However, the changes that will be needed to adapt code from this book to the new version should not be large.)

In addition to OpenGL, the course covered two open-source graphics programs: GIMP briefly, and Blender in a little more depth. Some of the labs for the course deal with these programs.

∗ ∗ ∗

As often happens, not as much is covered in the notes as I had hoped, and the writing gets a bit rushed near the end. A number of topics were covered in the course that did not make it into the notes. Some examples from those topics can be found in Chapter 5 (which is not a real chapter). Here are the topics covered in the four completed chapters of the book:

• Chapter 1: Java Graphics Fundamentals in Two Dimensions. This chapter includes a short general discussion of graphics and the distinction between "painting" and "drawing." It covers some features of the Graphics2D class, including in particular the use of geometric transforms for geometric modeling and animation.

• Chapter 2: Overview of OpenGL and Jogl. This chapter introduces drawing with OpenGL in both two and three dimensions, with very basic color and lighting. It shows how to use Jogl to write OpenGL applications and applets. Drawing in this chapter uses the OpenGL routines glBegin and glEnd. It introduces the use of transforms and scene graphs to do hierarchical modeling and animation.

• Chapter 3: Geometric Modeling. This chapter concentrates on composing scenes in three dimensions out of geometric primitives. And it covers viewing, that is, the projection of a 3D scene down to a 2D picture. It introduces the topic of optimization of graphics performance by covering display lists, the use of vertex buffer objects, and the OpenGL routines glDrawArrays and glDrawElements.

• Chapter 4: Color, Lighting, and Materials. This chapter discusses how to add color, light, and textures to a 3D scene, including the use of lights in scene graphs.

Note that source code for all the examples in the book can be found in the source directory on-line or in the web site download.

∗ ∗ ∗

The web site and the PDF versions of this book are produced from a common set of sources consisting of XML files, XSLT transformations, Java source code files, UNIX shell scripts, and a couple other things. These source files require the Xalan XSLT processor and (for the PDF version) the TeX typesetting system. The sources require a fair amount of expertise to use and were not written to be published. However, I am happy to make them available upon request.

∗ ∗ ∗

Professor David J. Eck
Department of Mathematics and Computer Science
Hobart and William Smith Colleges
Geneva, New York 14456, USA
Email: eck@hws.edu
WWW: http://math.hws.edu/eck/

Chapter 1

Java Graphics Fundamentals in Two Dimensions

The focus of this course will be three-dimensional graphics using OpenGL. However, many important ideas in computer graphics apply to two dimensions in much the same way that they apply to three—often in somewhat simplified form. So, we will begin in this chapter with graphics in two dimensions. For this chapter only, we will put OpenGL to the side and will work with the standard Java two-dimensional graphics API. Some of this will no doubt be review, but you will probably encounter some corners of that API that are new to you.

1.1 Vector Graphics and Raster Graphics (online)

Computer graphics can be divided broadly into two kinds: vector graphics and raster graphics. In both cases, the idea is to represent an image. The difference is in how the image is represented.

An image that is presented on the computer screen is made up of pixels. The screen consists of a rectangular grid of pixels, arranged in rows and columns. The pixels are small enough that they are not easy to see individually, unless you look rather closely. At a given time, each pixel can show only one color. Most screens these days use 24-bit color, where a color can be specified by three 8-bit numbers, giving the levels of red, green, and blue in the color. Other formats are possible, such as grayscale, where a color is given by one number that specifies the level of gray on a black-to-white scale, or even monochrome, where there is a single bit per pixel that tells whether the pixel is on or off. In any case, the color values for all the pixels on the screen are stored in a large block of memory known as a frame buffer. Changing the image on the screen requires changing all the color values in the frame buffer. The screen is redrawn many times per second, so almost immediately after the color values are changed in the frame buffer, the colors of the pixels on the screen will be changed to match, and the displayed image will change.
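To make the 24-bit color format concrete, here is a minimal sketch (my own illustration, not an example from these notes) showing how three 8-bit levels can be packed into, and recovered from, a single int, which is the form in which such color values are typically stored:

   public class ColorPacking {
       // Pack three 8-bit components into one 24-bit color value.
       static int packRGB(int red, int green, int blue) {
           return (red << 16) | (green << 8) | blue;
       }
       // Recover the individual components from a packed value.
       static int redOf(int rgb)   { return (rgb >> 16) & 0xFF; }
       static int greenOf(int rgb) { return (rgb >> 8) & 0xFF; }
       static int blueOf(int rgb)  { return rgb & 0xFF; }

       public static void main(String[] args) {
           int magenta = packRGB(255, 0, 255);
           System.out.printf("0x%06X -> r=%d g=%d b=%d%n",
                   magenta, redOf(magenta), greenOf(magenta), blueOf(magenta));
       }
   }

This is exactly the layout used by the 32-bit int color values that we will meet later in this chapter, in the BufferedImage class.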

A computer screen used in this way is the basic model of raster graphics. The term "raster" technically refers to the mechanism used on older vacuum tube computer monitors: An electron beam would move along the rows of pixels, making them glow. The beam could be moved across the screen by powerful magnets that would deflect the path of the electrons. The stronger the beam, the brighter the glow of the pixel, so the brightness of the pixels could be controlled by modulating the intensity of the electron beam. The color values stored in the frame buffer were used to determine the intensity of the electron beam. (For a color screen, each pixel had a red dot, a green dot, and a blue dot, which were separately illuminated by the beam.)

A modern flat-screen computer monitor is not a raster in the same sense. There is no moving electron beam. The mechanism that controls the colors of the pixels is different for different types of screen. But the screen is still made up of pixels, and the color values for all the pixels are still stored in a frame buffer. The idea of an image consisting of a grid of pixels, with numerical color values for each pixel, defines raster graphics.

∗ ∗ ∗

Although images on the computer screen are represented using pixels, specifying individual pixel colors is not always the best way to create an image. Another way to create an image is to specify the basic geometric shapes that it contains, shapes such as lines, circles, triangles, and rectangles. This is the idea that defines vector graphics: represent an image as a list of the geometric shapes that it contains. To make things more interesting, the shapes can have attributes, such as the thickness of a line or the color that fills a rectangle. Of course, not every image can be composed from simple geometric shapes. This approach certainly wouldn't work for a picture of a beautiful sunset (or for most any other photographic image). However, it works well for many types of images, such as architectural blueprints and scientific illustrations.

In fact, early in the history of computing, vector graphics were even used directly on computer screens. When the first graphical computer displays were developed, raster displays were too slow and expensive to be practical. Fortunately, it was possible to use vacuum tube technology in another way: The electron beam could be made to directly draw a line on the screen, simply by sweeping the beam along that line. A vector graphics display would store a display list of lines that should appear on the screen. Since a point on the screen would glow only very briefly after being illuminated by the electron beam, the graphics display would go through the display list over and over, continually redrawing all the lines on the list. To change the image, it would only be necessary to change the contents of the display list. Of course, if the display list became too long, the image would start to flicker because a line would have a chance to visibly fade before its next turn to be redrawn.

But here is the point: For an image that can be specified as a reasonably small number of geometric shapes, the amount of information needed to represent the image is much smaller using a vector representation than using a raster representation. Consider an image made up of one thousand line segments. For a vector representation of the image, you only need to store the coordinates of two thousand points, the endpoints of the lines. This would take up only a few kilobytes of memory. To store the image in a frame buffer for a raster display would require much more memory, even for a monochrome display. Similarly, a vector display could draw the lines on the screen more quickly than a raster display could copy the same image from the frame buffer to the screen. (As soon as raster displays became fast and inexpensive, they quickly displaced vector displays because of their ability to display all types of images reasonably well.)

∗ ∗ ∗

The divide between raster graphics and vector graphics persists in several areas of computer graphics. For example, it can be seen in a division between two categories of programs that can be used to create images: painting programs and drawing programs. In a painting program, the image is represented as a grid of pixels, and the user creates an image by assigning colors to pixels. This might be done by using a "drawing tool" that acts like a painter's brush, or even by tools that draw geometric shapes such as lines or rectangles, but the point is to color the individual pixels, and it is only the pixel colors that are saved. To make this clearer, suppose that you use a painting program to draw a house, then draw a tree in front of the house. If you then erase the tree, you'll only reveal a blank canvas, not a house. In fact, the image never really contained a "house" at all—only individually colored pixels that the viewer might perceive as making up a picture of a house.

In a drawing program, the user creates an image by adding geometric shapes, and the image is represented as a list of those shapes. If you place a house shape (or collection of shapes making up a house) in the image, and you then place a tree shape on top of the house, the house is still there, since it is stored in the list of shapes that the image contains. If you delete the tree, the house will still be in the image, just as it was before you added the tree. Furthermore, you should be able to select any of the shapes in the image and move it or change its size, so drawing programs offer a rich set of editing operations that are not possible in painting programs. (The reverse, however, is also true.)

A practical program for image creation and editing might combine elements of painting and drawing, although one or the other is usually dominant. For example, a drawing program might allow the user to include a raster-type image, treating it as one shape. A painting program might let the user create "layers," which are separate images that can be layered one on top of another to create the final image. The layers can then be manipulated much like the shapes in a drawing program (so that you could keep both your house and your tree, even if in the image the house is in back of the tree).

Two well-known graphics programs are Adobe Photoshop and Adobe Illustrator. Photoshop is in the category of painting programs, while Illustrator is more of a drawing program. In the world of free software, Gimp, the GNU image-processing program, is a good alternative to Photoshop, while Inkscape is a reasonably capable free drawing program.

∗ ∗ ∗

The divide between raster and vector graphics also appears in the field of graphics file formats. There are many ways to represent an image as data stored in a file. If the original image is to be recovered from the bits stored in the file, the representation must follow some exact, known specification. Such a specification is called a graphics file format. Some popular graphics file formats include GIF, PNG, JPEG, and SVG. Most images used on the Web are GIF, PNG, or JPEG, and some web browsers also have support for SVG images.

GIF, PNG, and JPEG are basically raster graphics formats; an image is specified by storing a color value for each pixel. The amount of data necessary to represent an image in this way can be quite large. However, the data usually contains a lot of redundancy, and the data can be compressed to reduce its size. GIF and PNG use lossless data compression, which means that the original image can be recovered perfectly from the compressed data. JPEG uses a lossy data compression algorithm, which means that the image that is recovered from a JPEG file is not exactly the same as the original image—some information has been lost. This might not sound like a good idea, but in fact the difference is often not very noticeable, and using lossy compression usually permits a greater reduction in the size of the compressed data. JPEG generally works well for photographic images, but not as well for images that have sharp edges between different colors. It is especially bad for line drawings and images that contain text; PNG is the preferred format for such images. (GIF is an older file format, which has largely been superseded by PNG, but you can still find GIF images on the web.)
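A quick way to see which of these formats your own Java installation can read and write is to query the ImageIO class, which we will use later in this chapter for loading and saving images. This little sketch is my own, not an example from the notes:

   import javax.imageio.ImageIO;
   import java.util.Arrays;

   public class ListFormats {
       public static void main(String[] args) {
           // Format names this JVM can decode and encode; "png" and "jpeg"
           // should appear, and "gif" on most installations.
           System.out.println("Readers: "
                   + Arrays.toString(ImageIO.getReaderFormatNames()));
           System.out.println("Writers: "
                   + Arrays.toString(ImageIO.getWriterFormatNames()));
       }
   }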

SVG is fundamentally a vector graphics format (although SVG images can contain raster images). SVG is actually an XML-based language for describing two-dimensional vector graphics images. "SVG" stands for "Scalable Vector Graphics," and the term "scalable" indicates one of the advantages of vector graphics: There is no loss of quality when the size of the image is increased. A line between two points can be drawn at any scale, and it is still the same perfect geometric line. If you try to greatly increase the size of a raster image, on the other hand, you will find that you don't have enough color values for all the pixels in the new image; each pixel in the original image will cover a rectangle of pixels in the scaled image, and you will get large visible blocks of uniform color. The scalable nature of SVG images makes them a good choice for web browsers and for graphical elements on your computer's desktop. And indeed, some desktop environments are now using SVG images for their desktop icons.

∗ ∗ ∗

When we turn to 3D graphics, we will generally be working with geometry rather than pixels. That is, the most common techniques are more similar to vector graphics than to raster graphics: a "model" of a three-dimensional scene is built from geometric shapes, and the image is obtained by "projecting" the model onto a two-dimensional viewing surface. The three-dimensional analog of raster graphics is used occasionally: A region in space is divided into small cubes called voxels, and color values are stored for each voxel. However, the amount of data can be immense and is wasted to a great extent, since we generally only see the surfaces of objects, and not their interiors. Much more common is to combine two-dimensional raster graphics with three-dimensional geometry: A two-dimensional image can be projected onto the surface of a three-dimensional object. An image used in this way is referred to as a texture.

1.2 Two-dimensional Graphics in Java (online)

Java's support for 2D graphics is embodied primarily in two abstract classes, Image and Graphics, and in their subclasses. The Image class is mainly about raster graphics, while Graphics is concerned primarily with vector graphics. Both Java and OpenGL have support for both vector-type graphics and raster-type graphics in two dimensions, and you will need to know something about both. In the rest of this chapter, we will be looking at Java's built-in support for two-dimensional graphics.

This chapter assumes that you are familiar with the basics of the Graphics class and the related classes Color and Font, including such Graphics methods as drawLine, drawRect, fillRect, drawString, setColor, getColor, and setFont. If you need to review them, you can read Section 6.3 of Introduction to Programming Using Java.

Java's standard class java.awt.Image really represents the most abstract idea of an image. You can't do much with the basic Image other than display it on a drawing surface. A much more useful subclass is java.awt.image.BufferedImage. A BufferedImage represents a rectangular grid of pixels. It consists of a "raster," which contains color values for each pixel in the image, and a "color model," which tells how the color values are to be interpreted. (Remember that there are many ways to represent colors as numerical values.) In general, you don't have to work directly with the raster or the color model. You can simply use methods in the BufferedImage class to work with the image.

The class java.awt.Graphics represents the ability to draw on a two-dimensional drawing surface. A Graphics object has the ability to draw geometric shapes on its associated drawing surface. (It can also draw strings of characters and Images.) The Graphics class is suitable for many purposes, but its capabilities are still fairly limited. A much more complete two-dimensional drawing capability is provided by the class java.awt.Graphics2D, which is a subclass of Graphics. In fact, all the Graphics objects that are provided for drawing in modern Java are actually of type Graphics2D, and you can type-cast a variable of type Graphics to Graphics2D to obtain access to the additional capabilities of Graphics2D.

For example, a paintComponent method that needs access to the capabilities of a Graphics2D might look like this:

   protected void paintComponent(Graphics g) {
      super.paintComponent(g);
      Graphics2D g2 = (Graphics2D)g.create();
        .
        .   // Use g2 for drawing.
        .
   }

The method g.create() creates a new graphics context object that has exactly the same properties as g. You can then use g2 for drawing, and make any changes to it that you like without having any effect on g. It is recommended not to make any permanent changes to g in the paintComponent method, but you are free to do anything that you like to g2.

1.2.1 BufferedImages

There are basically two ways to get a BufferedImage: You can create a blank image using a constructor from the BufferedImage class, or you can get a copy of an existing image by reading the image from a file or some other source. When you want to create a blank image, the constructor that you are most likely to use is

   public BufferedImage(int width, int height, int imageType)

The width and height specify the size of the image, that is, the number of rows and columns of pixels. The imageType specifies the color model, that is, what kind of color value is stored for each pixel. The imageType can be specified using a constant defined in the BufferedImage class. For basic full-color images, the type BufferedImage.TYPE_INT_RGB can be used; this specifies a color model that uses three eight-bit numbers for each pixel, giving the red, green, and blue color components. It is possible to add an eight-bit "alpha," or transparency, value for each pixel; to do that, use the image type BufferedImage.TYPE_INT_ARGB, with eight bits each for the alpha, red, green, and blue components of the color. Grayscale images can be created using the image type BufferedImage.TYPE_BYTE_GRAY.

The static method

   public static BufferedImage read(File source)

in the class javax.imageio.ImageIO can be used to read an image from a file. Other methods in the ImageIO class make it possible to read an image from an InputStream or URL.

Once you have a BufferedImage, you might want to be able to modify it. The easiest way is to treat the image as a drawing surface and to draw on it using an object of type Graphics2D. The method

   public Graphics2D createGraphics()

in the BufferedImage class returns a Graphics2D object that can be used to draw on the image using the same set of drawing operations that are commonly used to draw on the screen.

It is also possible to manipulate the color of individual pixels in the image. The BufferedImage class has methods

   public int getRGB(int x, int y)
   public void setRGB(int x, int y, int rgb)

for getting and setting the color of an individual pixel. For these methods, colors are specified by 32-bit int values, and the pair of integers (x,y) picks out the pixel in column x and in row y.
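Putting these pieces together, here is a minimal, self-contained sketch of my own (not an example from the notes) that creates a blank full-color image, draws on it, touches one pixel directly, and saves the result using ImageIO's companion write method, which is discussed below:

   import java.awt.Color;
   import java.awt.Graphics2D;
   import java.awt.image.BufferedImage;
   import java.io.File;
   import java.io.IOException;
   import javax.imageio.ImageIO;

   public class MakeImage {
       public static void main(String[] args) throws IOException {
           BufferedImage image = new BufferedImage(200, 100, BufferedImage.TYPE_INT_RGB);
           Graphics2D g2 = image.createGraphics();  // draw on the image
           g2.setColor(Color.WHITE);
           g2.fillRect(0, 0, 200, 100);             // white background
           g2.setColor(Color.RED);
           g2.drawLine(0, 0, 199, 99);
           g2.dispose();                            // done drawing
           image.setRGB(10, 10, 0x0000FF);          // one blue pixel, as a packed int
           ImageIO.write(image, "PNG", new File("out.png"));
       }
   }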

There are also methods for getting and setting the color values of a large number of pixels at once; see the API documentation for more information.

Once you have your complete image, you can save it to a file using a write method from the ImageIO class. Supported image types include at least PNG and JPEG.

You can also copy the image to a drawing surface (including another image) using one of the drawImage methods from the Graphics class, such as

   public boolean drawImage(Image image, int x, int y, ImageObserver obs)

This method draws the image at its normal size with its upper left corner at the point (x,y). The ImageObserver is meant for use when drawing an image that is still being loaded, for example, over the Internet. It makes it possible to draw a partially loaded image (and the boolean return value of the method tells whether the image has been fully drawn when the method returns); the ImageObserver is there to make sure that the rest of the image gets drawn when it becomes available. Since a BufferedImage is always already in memory, you just pass null as the fourth parameter when drawing a BufferedImage, and you can ignore the return value.

One important use of BufferedImages is to keep a copy of an image that is being displayed on the screen. For example, in a paint program, a BufferedImage can be used to store a copy of the image that is being created or edited. I sometimes call an image that is used in this way an off-screen canvas. Any changes that are made to the image are applied to the off-screen canvas, which is then copied onto the screen (for example, in the paintComponent method of a JPanel). You can then draw extra stuff, such as a box around a selected area, over the on-screen image without affecting the image in the off-screen canvas. And you can perform operations on the off-screen canvas, such as reading the color of a given pixel, that you can't do on the on-screen image.

Using an off-screen image in this way is referred to as double buffering, where the term "buffer" is used in the sense of a "frame buffer" that stores color values for the pixels in an image. One advantage of double buffering is that the user doesn't see changes as they are being made to an image: you can compose an image in the off-screen canvas and then copy that image all at once onto the screen, so that the user sees only the completed image. In fact, Java uses double-buffering automatically for Swing components such as JPanel. The paintComponent method actually draws to an off-screen canvas and the result is copied onto the screen. However, you don't have direct access to the off-screen canvas that is used for this purpose; to have one that you can work with directly, you need to create an off-screen canvas of your own.
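Here is a sketch of that pattern. It is my own minimal version, with invented names such as CanvasPanel and drawLineOnCanvas: permanent drawing goes to the BufferedImage, and paintComponent copies it to the screen before adding any temporary decorations:

   import java.awt.Color;
   import java.awt.Graphics;
   import java.awt.Graphics2D;
   import java.awt.Rectangle;
   import java.awt.image.BufferedImage;
   import javax.swing.JPanel;

   public class CanvasPanel extends JPanel {
       private BufferedImage canvas;  // the off-screen canvas
       private Rectangle selection;   // temporary overlay, not part of the image

       public CanvasPanel(int width, int height) {
           canvas = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
           Graphics2D g2 = canvas.createGraphics();
           g2.setColor(Color.WHITE);
           g2.fillRect(0, 0, width, height);
           g2.dispose();
       }

       // Permanent changes are applied to the off-screen canvas only.
       public void drawLineOnCanvas(int x1, int y1, int x2, int y2) {
           Graphics2D g2 = canvas.createGraphics();
           g2.setColor(Color.BLACK);
           g2.drawLine(x1, y1, x2, y2);
           g2.dispose();
           repaint();  // ask Swing to copy the canvas to the screen
       }

       public void setSelection(Rectangle r) {
           selection = r;
           repaint();
       }

       protected void paintComponent(Graphics g) {
           super.paintComponent(g);
           g.drawImage(canvas, 0, 0, null);  // a null ImageObserver is fine here
           if (selection != null) {          // extra stuff over the on-screen image
               g.setColor(Color.BLUE);
               ((Graphics2D) g).draw(selection);
           }
       }
   }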
1.2.2 Shapes and Graphics2D

You should be familiar with Graphics methods such as drawLine and fillRect. For these methods, points are specified in pixel coordinates, where a pair of integers (x,y) picks out the pixel in column x and in row y. Using pixel coordinates allows you to determine precisely which pixels will be affected by a given drawing operation, and sometimes such precision is essential. However, for many applications, coordinates are most naturally expressed in some other coordinate system. For example, in an architectural drawing, it would be natural to use actual physical measurements as coordinates. When drawing a mathematical object such as the graph of a function, it would be nice to be able to match the coordinate system to the size of the object that is being displayed. Another problem with pixel coordinates is that when the size of the drawing area changes, all the coordinates have to be recomputed, at least if the size of the image is to change to match the change in size of the drawing area. In such cases, the natural coordinates must be laboriously converted into pixel coordinates. For this reason, computer graphics applications usually allow the specification of an arbitrary coordinate system. Shapes can then be drawn using the coordinates that are most natural to the application, and the graphics system will do the work of converting those coordinates to pixel coordinates.

The Graphics2D class supports the use of arbitrary coordinate systems. Once you've specified a coordinate system, you can draw shapes using that coordinate system, and the Graphics2D object will correctly transform the shapes into pixel coordinates for you. Later in the chapter, we will see how to specify a new coordinate system for a Graphics2D. For now, we will consider how to draw shapes in such coordinate systems.

When using general coordinate systems, coordinates are specified as real numbers rather than integers (since each unit in the coordinate system might cover many pixels or just a small fraction of a pixel). Java documentation refers to the coordinate system that is actually used for drawing as user coordinates, although the "user" in this case is the programmer. The coordinate system that I am calling "pixel coordinates" is usually referred to as device coordinates, since it is the appropriate coordinate system for drawing directly to the display device where the image appears. Another name, more appropriate to OpenGL, is world coordinates, corresponding to the idea that there is a natural coordinate system for the "world" that we want to represent in the image.

The package java.awt.geom provides support for shapes defined using real number coordinates. For example, the class Line2D in that package represents line segments whose endpoints are given as pairs of real numbers. (Although the older drawing methods such as drawLine use integer coordinates, it's important to note that any shapes drawn using these methods are subject to the same transformation as shapes such as Line2Ds that are specified with real number coordinates. For example, drawing a line with g.drawLine(1,2,5,7) will have the same effect as drawing a Line2D that has endpoints (1.0,2.0) and (5.0,7.0). In fact, all drawing is affected by the transformation of coordinates, even, somewhat disturbingly, the width of lines and the size of characters in a string.)

Java has two primitive real number types: double and float. The double type can represent a larger range of numbers, with a greater number of significant digits, than float, and double is the more commonly used type; doubles are simply easier to use in Java. There is no automatic conversion from double to float, so you have to use explicit type-casting to convert a double value into a float. Also, a real-number literal such as 3.14 is taken to be of type double, and to get a literal of type float, you have to add an "F": 3.14F. However, float values generally have enough accuracy for graphics applications, and they have the advantage of taking up less space in memory. Furthermore, computer graphics hardware often uses float values internally.

So, given these considerations, the java.awt.geom package actually provides two versions of each shape, one using coordinates of type float and one using coordinates of type double. This is done in a rather strange way. Taking Line2D as an example, the class Line2D itself is an abstract class. It has two subclasses, one that represents lines using float coordinates and one using double coordinates. The strangest part is that these subclasses are defined as static nested classes inside Line2D, named Line2D.Float and Line2D.Double. This means that you can declare a variable of type Line2D, but to create an object, you need to use Line2D.Double or Line2D.Float:

   Line2D line1 = new Line2D.Double(1,2,5,7); // Line from (1.0,2.0) to (5.0,7.0)
   Line2D line2 = new Line2D.Float(2.7F,3.1F,1.5F,7.1F); // (2.7,3.1) to (1.5,7.1)

This gives you, unfortunately, a lot of rope with which to hang yourself. For simplicity, you might want to stick to one of the types Line2D.Double or Line2D.Float, treating it as a basic type. Line2D.Double will be more convenient to use. Line2D.Float might give better performance, especially if you also use variables of type float to avoid type-casting.

∗ ∗ ∗

Let's take a look at some of the classes in package java.awt.geom. I will discuss only some of their properties and won't try to list them all here; see the API documentation for more information.

The abstract class Point2D—and its concrete subclasses Point2D.Double and Point2D.Float—represents a point in two dimensions, specified by two real number coordinates. A point can be constructed from two real numbers ("new Point2D.Double(1.2,3.7)"). If p is a variable of type Point2D, you can use p.getX() and p.getY() to retrieve its coordinates, and you can use p.setLocation(x,y) to set its coordinates. If pd is a variable of type Point2D.Double, you can also refer directly to the coordinates as pd.x and pd.y (and similarly for Point2D.Float).

The other classes in java.awt.geom that we will look at represent shapes. Each of these classes implements the interface java.awt.Shape, which represents the general idea of a geometric shape. All of them are abstract classes, and each of them contains a pair of subclasses such as Rectangle2D.Double and Rectangle2D.Float. Some shapes, such as rectangles, have interiors that can be filled; such shapes also have outlines that can be stroked. Some shapes, such as lines, are purely one-dimensional and can only be stroked.

Aside from lines, rectangles are probably the simplest shapes. A Rectangle2D has a corner point (x,y), a width, and a height, and can be constructed from that data ("new Rectangle2D.Double(x,y,w,h)"). The corner point (x,y) specifies the minimum x- and y-values in the rectangle. For the usual pixel coordinate system, (x,y) is the upper left corner; in a coordinate system in which the minimum value of y is at the bottom, (x,y) would be the lower left corner. The sides of the rectangle are parallel to the coordinate axes. A variable r of type Rectangle2D.Double has public instance variables r.x, r.y, r.width, and r.height. If the width or the height is less than or equal to zero, nothing will be drawn when the rectangle is filled or stroked.

A common problem is to define a rectangle by two corner points (x1,y1) and (x2,y2). This can be accomplished by creating a rectangle with height and width equal to zero and then adding the second point to the rectangle; adding a point to a rectangle causes the rectangle to grow just enough to include that point:

   Rectangle2D.Double r = new Rectangle2D.Double(x1,y1,0,0);
   r.add(x2,y2);

An Ellipse2D that just fits inside a given rectangle can be constructed from the corner point, width, and height of the rectangle ("new Ellipse2D.Double(x,y,w,h)"). A RoundRectangle2D is similar to a plain rectangle, except that an arc of an ellipse has been cut off each corner; the horizontal and vertical radius of the ellipse are part of the data for the round rectangle ("new RoundRectangle2D.Double(x,y,w,h,r1,r2)"). An Arc2D represents an arc of an ellipse. The data for an Arc2D is the same as the data for an Ellipse2D, plus the start angle of the arc, the end angle, and a "closure type." The angles are given in degrees, where zero degrees is in the positive direction of the x-axis. The closure types are Arc2D.CHORD (meaning that the arc is closed by drawing a line from its final point back to its starting point), Arc2D.PIE (the arc is closed by drawing two line segments, giving the form of a wedge of pie), and Arc2D.OPEN (the arc is not closed).

The Path2D shape is the most interesting, since it allows the creation of shapes consisting of arbitrary sequences of lines and curves. (Note that Path2D is new in Java 6. In older versions, you can use GeneralPath, which is equivalent to Path2D.Float.) A Path2D p is empty when it is first created ("p = new Path2D.Double()"). You can then construct the path by moving an imaginary "pen" along the path that you want to create. The method p.moveTo(x,y) moves the pen to the point (x,y) without drawing anything. It is used to specify the initial point of the path or the starting point of a new segment of the path. The method p.lineTo(x,y) draws a line from the current pen position to (x,y), leaving the pen at (x,y). The method p.close() can be used to close the path (or the current segment of the path) by drawing a line back to its starting point. (Note that paths don't have to be closed.) For example, the following code creates a triangle with vertices at (0,5), (2,−3), and (−4,1):

   Path2D p = new Path2D.Double();
   p.moveTo(0,5);
   p.lineTo(2,-3);
   p.lineTo(-4,1);
   p.close();

In addition to lines, a Path2D can contain curves. The curves that can be used are quadratic and cubic Bezier curves, which are defined by polynomials of degree two or three. For Bezier curves, you have to specify more than just the endpoints of the curve; you also have to specify control points. Control points don't lie on the curve, but they determine the velocity or tangent of the curve at the endpoints.

A quadratic Bezier curve has one control point. You can add a quadratic curve to a Path2D p using the method p.quadTo(cx,cy,x,y). The quadratic curve has endpoints at the current pen position and at (x,y), with control point (cx,cy). As the curve leaves the current pen position, it heads in the direction of (cx,cy), with a speed determined by the distance between the pen position and (cx,cy). Similarly, the curve heads into the point (x,y) from the direction of (cx,cy), with a speed determined by the distance from (cx,cy) to (x,y). Note that the control point (cx,cy) is not on the curve—it just controls the direction of the curve.

A cubic Bezier curve is similar, except that it has two control points. The first controls the velocity of the curve as it leaves the initial endpoint of the curve, and the second controls the velocity as it arrives at the final endpoint of the curve. You can add a cubic Bezier curve to a Path2D p with the method

   p.curveTo( cx1, cy1, cx2, cy2, x, y );

This adds a curve that starts at the current pen position and ends at (x,y), using (cx1,cy1) and (cx2,cy2) as control points.

To make this clearer, the picture on the left below shows a path consisting of five quadratic Bezier curves, with their control points; the picture on the right shows a path that consists of four cubic Bezier curves, with their control points. (In the pictures, the lines from the endpoints of the curves to their control points are shown as dotted lines.) Note that at each endpoint of a curve, the curve is tangent to the line from that endpoint to the control point. Note also that by choosing appropriate control points, you can always ensure that one cubic Bezier curve joins smoothly to the next, as is done at the point where the first curve joins the second in the picture.

[Figure: on the left, a path made of five quadratic Bezier curves; on the right, a path made of four cubic Bezier curves. Endpoints, control points, and the dotted lines joining them are shown.]

The source code files QuadraticBezierEdit.java and CubicBezierEdit.java define applications that let you edit the curves shown in this image by dragging the control points and endpoints of the curves. You can also find applet versions of these applications in the web-site version of these notes.

∗ ∗ ∗

Once you have a Path2D or a Line2D, or indeed any object of type Shape, you can use it with a Graphics2D. The Graphics2D class defines the methods

   public void draw(Shape s)
   public void fill(Shape s)

for drawing and filling Shapes. Calling g2.draw(s) will "stroke" the shape by moving an imaginary pen along the lines or curves that make up the shape; calling g2.fill(s) will fill the interior of the shape.

A Graphics2D has an instance variable of type Paint that tells how shapes are to be filled. Paint is just an interface, but there are several standard classes that implement that interface. One example is the class Color. If the current paint in a Graphics2D g2 is a Color, then calling g2.fill(s) will fill the Shape s with solid color. There are other types of Paint, such as GradientPaint and TexturePaint, that represent other types of fill. You can set the Paint in a Graphics2D g2 by calling g2.setPaint(p). For a Color c, calling g2.setColor(c) also sets the paint. Note that in addition to the current Stroke, the effect of g2.draw(s) also depends on the current Paint in g2: the area covered by the pen is filled with the current paint.

The pen used for stroking a shape is actually defined by an object of type Stroke, which is another interface. The stroke will almost certainly be of type BasicStroke, a class that implements the Stroke interface. By default, the pen will be one unit wide and one unit high, according to the units of the coordinate system that is in effect. In the usual pixel coordinates, this means one pixel wide; if you draw a shape in a non-standard coordinate system, the shape will be stroked using a pen that is one unit wide in that coordinate system. To set the stroke property of a Graphics2D g2, you can use the method g2.setStroke(stroke). For example, to draw lines with g2 that are 2.5 times as wide as usual, you would say

   g2.setStroke( new BasicStroke(2.5F) );

Note that the parameter in the constructor is of type float. BasicStroke has several alternative constructors that can be used to specify various characteristics of the pen, such as how an endpoint should be drawn and how one piece of a curve should be joined to the next. Most interesting, perhaps, is the ability to draw dotted and dashed lines. Unfortunately, the details are somewhat complicated, but as an example, here is how you can create a pen that draws a pattern of dots and dashes:

   float[] pattern = new float[] { 1, 3, 5, 3 }; // pattern of lines and spaces
   BasicStroke pen = new BasicStroke( 1,
             BasicStroke.CAP_BUTT, BasicStroke.JOIN_ROUND, 0, pattern, 0 );

This pen would draw a one-unit-long dot, followed by a 3-unit-wide space, followed by a 5-unit-long dash, followed by a 3-unit-wide space, with the pattern repeating after that. You can adjust the pattern to get different types of dotted and dashed lines.
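To pull the Shape, Paint, and Stroke ideas together, here is a small runnable sketch of my own (the class name and coordinates are arbitrary) that fills a path with a gradient and then strokes its outline with the dot-dash pen just described:

   import java.awt.BasicStroke;
   import java.awt.Color;
   import java.awt.GradientPaint;
   import java.awt.Graphics;
   import java.awt.Graphics2D;
   import java.awt.geom.Path2D;
   import javax.swing.JFrame;
   import javax.swing.JPanel;

   public class ShapeDemo extends JPanel {

       protected void paintComponent(Graphics g) {
           super.paintComponent(g);
           Graphics2D g2 = (Graphics2D) g.create();

           Path2D p = new Path2D.Double();   // a closed three-sided path
           p.moveTo(60, 30);
           p.lineTo(190, 80);
           p.lineTo(40, 150);
           p.close();

           // Fill with a gradient that shades from white to gray.
           g2.setPaint(new GradientPaint(0, 0, Color.WHITE, 200, 160, Color.GRAY));
           g2.fill(p);

           // Stroke the outline with the dot-dash pen from the text.
           float[] pattern = { 1, 3, 5, 3 };
           g2.setStroke(new BasicStroke(1, BasicStroke.CAP_BUTT,
                   BasicStroke.JOIN_ROUND, 1, pattern, 0));
           g2.setPaint(Color.BLACK);
           g2.draw(p);
           g2.dispose();
       }

       public static void main(String[] args) {
           JFrame frame = new JFrame("ShapeDemo");
           frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
           frame.add(new ShapeDemo());
           frame.setSize(240, 220);
           frame.setVisible(true);
       }
   }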
∗ ∗ ∗

Before leaving the topic of drawing with Graphics2D, I should mention antialiasing. It is not possible to draw geometrically perfect pictures by coloring pixels. A diagonal geometric line, for example, will cover some pixels only partially, and it is not possible to make a pixel half black and half white. When you try to draw a line with black and white pixels only, the result is a jagged staircase effect. This effect is an example of aliasing. Aliasing is a problem for raster-type graphics in general, caused by the fact that raster images are made up of small, solid-colored pixels. Aliasing can also be seen in the outlines of characters drawn on the screen and in diagonal or curved boundaries between any two regions of different color. (The term aliasing likely comes from the fact that most pictures are naturally described in real-number coordinates. When you try to represent the image using pixels, many real-number coordinates will map to the same integer pixel coordinates; they can all be considered as different names or "aliases" for the same pixel.)

Antialiasing is a term for techniques that are designed to mitigate the effects of aliasing. The idea is that when a pixel is only partially covered by a shape, the color of the pixel should be a mixture of the color of the shape and the color of the background. When drawing a black line on a white background, the color of a partially covered pixel would be gray, with the shade of gray depending on the fraction of the pixel that is covered by the line. (In practice, calculating this area exactly for each pixel would be too difficult, so some approximate method is used.) Here, for example, are two lines, greatly magnified so that you can see the individual pixels. The one on the right is drawn using antialiasing, while the one on the left is not:

[Figure: two magnified lines; the left one shows the jagged staircase effect, while the right one is antialiased, with gray pixels along its edges.]
Note that antialiasing does not give a perfect image, but it can reduce the "jaggies" that are caused by aliasing. Antialiasing can be applied to lines, to text, and to curved shapes such as ellipses. You can turn on antialiasing in a Graphics2D, and doing so is generally a good idea. To do so, simply call

   g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                       RenderingHints.VALUE_ANTIALIAS_ON);

before using the Graphics2D g2 for drawing. Note that turning on antialiasing is considered to be a "hint," which means that its exact effect is not guaranteed, and it might even have no effect at all. Turning antialiasing on does slow down the drawing process, and the slow-down might be noticeable when drawing a very complex image.

1.3 Transformations and Modeling (online)

We now turn to another aspect of two-dimensional graphics in Java: geometric transformations. The material in this section will carry over nicely to three dimensions and OpenGL. It is an important foundation for the rest of the course.

1.3.1 Geometric Transforms

Being able to draw geometric shapes is important, but it is only part of the story. Just as important are geometric transforms that allow you to modify the shapes that you draw, such as by moving them, changing their size, or rotating them. Some of this, you can do by hand, by doing your own calculations. For example, if you want to move a shape three units to the right, you can simply add 3 to each of the horizontal coordinates that are used to draw the shape. This would quickly become tedious, though, so it's nice that we can get the computer to do it for us simply by specifying an appropriate geometric transform. Furthermore, there are things that you simply can't do in Java without transforms. For example, when you draw a string of text without a transform, the baseline of the text can only be horizontal; there is no way to draw the string tilted to a 30-degree angle. Similarly, the only way to draw a tilted image is by applying a transform. Here is an example of a string and an image drawn by Java in their normal orientation and with a counterclockwise 30-degree rotation:

[Figure: a string of text and an image, each drawn both in its normal orientation and rotated 30 degrees counterclockwise.]
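The tilted string in the figure can be produced with a rotation transform. Here is a minimal runnable sketch of my own showing the idea (the rotate method is previewed here and explained in detail below):

   import java.awt.Graphics;
   import java.awt.Graphics2D;
   import javax.swing.JFrame;
   import javax.swing.JPanel;

   public class RotatedText extends JPanel {

       protected void paintComponent(Graphics g) {
           super.paintComponent(g);
           Graphics2D g2 = (Graphics2D) g.create();
           g2.drawString("Tilted text", 60, 120);  // normal, horizontal baseline
           // In pixel coordinates the y-axis points down, so a counterclockwise
           // tilt on the screen is a rotation through a negative angle.
           g2.rotate(-Math.PI / 6, 60, 120);       // 30 degrees about the text's start
           g2.drawString("Tilted text", 60, 120);  // same call, now at a 30-degree angle
           g2.dispose();
       }

       public static void main(String[] args) {
           JFrame frame = new JFrame("RotatedText");
           frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
           frame.add(new RotatedText());
           frame.setSize(300, 240);
           frame.setVisible(true);
       }
   }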
Every Graphics2D graphics context has a current transform that is applied to all drawing that is done in that context. You can change the transform, and the new value will apply to all subsequent drawing operations; it will not affect things that have already been drawn in the graphics context. Generally, in a newly created Graphics2D, the current transform is the identity transform, which has no effect at all on drawing. (There are some exceptions to this. For example, when drawing to a printer, the pixels are very small. If the same coordinates are used when drawing to the printer as are used on the screen, the resulting picture would be tiny. So, when drawing to a printer, Java provides a default transform that magnifies the picture to make it look about the same size as it does on the screen.)

The transform in a Graphics2D can be any affine transform. An affine transform has the property that when it is applied to any set of parallel lines, the result is also a set of parallel lines (or, possibly, a single line or even a single point). An affine transform in two dimensions can be specified by six numbers a, b, c, d, e, and f, which have the property that when the transform is applied to a point (x,y), the resulting point (x1,y1) is given by the formulas:

   x1 = a*x + b*y + e
   y1 = c*x + d*y + f

Affine transforms have the important property that if you apply one affine transform and then follow it with a second affine transform, the result is another affine transform. For example, moving each point three units to the right is an affine transform, and so is rotating each point by 30 degrees about the origin (0,0). Therefore the combined transform that first moves a point to the right and then rotates it is also an affine transform. Combining two affine transformations in this way is called multiplying them (although it is not multiplication in the usual sense). Mathematicians often use the term composing rather than multiplying.

It's possible to build up any two-dimensional affine transformation from a few basic kinds of simple transforms: translation, rotation, and scaling. It's good to have an intuitive understanding of these basic transforms and how they can be used, so we will go through them in some detail. The effect of a transform depends on what coordinate system you are using for drawing. For this discussion, we assume that the origin, (0,0), is at the center of the picture, with the positive direction of the x-axis pointing to the right and the positive direction of the y-axis pointing up. Note that this orientation for the y-axis is the opposite of the usual orientation in Java, where the y-axis points down rather than up; it is, however, the more common orientation in mathematics and in OpenGL.

∗ ∗ ∗

A translation simply moves each point by a certain amount horizontally and a certain amount vertically. The formula for a translation is

   x1 = x + e
   y1 = y + f

where the point (x,y) is moved e units horizontally and f units vertically. (Thus for a translation, a = d = 1, and b = c = 0, in the general formula for an affine transform.) If you wanted to apply this translation to a Graphics2D g, you could simply say g.translate(e,f). This would mean that for all subsequent drawing operations, e would be added to the x-coordinate and f would be added to the y-coordinate.

Let's look at an example. Suppose that you are going to draw an "F" centered at (0,0). If you say g.translate(4,2) before drawing the "F", then every point of the "F" will be moved over 4 units and up 2 units, and the "F" that appears on the screen will actually be centered at (4,2). Here is a picture:

[Figure: a light gray "F" centered at the origin and a dark red "F" centered at (4,2), with an arrow showing how the upper left corner of the "F" has been moved over 4 units and up 2 units.]

The light gray "F" in this picture shows what would be drawn without the translation; the dark red "F" shows the same "F" drawn after applying a translation by (4,2). Every point in the "F" is subjected to the same displacement.

Remember that when you say g.translate(e,f), the translation applies to all the drawing that you do after that, not just to the next shape that you draw. If you apply another transformation to the same g, the second transform will not replace the translation. It will be multiplied by the translation, so that subsequent drawing will be affected by the combined transformation. This is an important point, and there will be a lot more to say about it later.
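The six-number description given above is exactly what Java's java.awt.geom.AffineTransform class stores internally. As a quick check of the formulas, here is a small sketch of my own that builds a transform directly from the numbers a through f and applies it to a point:

   import java.awt.geom.AffineTransform;
   import java.awt.geom.Point2D;

   public class AffineCheck {
       public static void main(String[] args) {
           double a = 2, b = 0, c = 0, d = 1, e = 4, f = 2;
           // The AffineTransform constructor takes the six numbers
           // in the order a, c, b, d, e, f.
           AffineTransform t = new AffineTransform(a, c, b, d, e, f);
           Point2D p1 = t.transform(new Point2D.Double(3, 5), null);
           // Same results as x1 = a*x + b*y + e and y1 = c*x + d*y + f:
           System.out.println(p1.getX() + " = " + (a*3 + b*5 + e));
           System.out.println(p1.getY() + " = " + (c*3 + d*5 + f));
       }
   }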
called the angle of rotation. the more common orientation in mathematics and in OpenGL.y1 ) is given by x1 = cos(r) * x . The formula for a translation is x1 = x + e y1 = y + f where the point (x. For this purpose.translate(4. Positive angles move the positive x-axis in the direction of the positive y-axis.y) is moved e units horizontally and f units vertically. Suppose that you are going to draw an “F” centered at (0. but it is clockwise in the usual pixel coordinates. If you say g. a = d = 1. the measure of a right angle is π/2.) If you wanted to apply this translation to a Graphics2D g. JAVA GRAPHICS IN 2D 14 in Java. This would mean that for all subsequent drawing operations. Every point is rotated through the same angle.translate(e. then every point of the “F” will be moved over 4 units and up 2 units. e would be added to the x-coordinate and f would be added to the y-coordinate. (0. Here is a picture: The light gray “F” in this picture shows what would be drawn without the translation.f ). not just to the next shape that you draw.2) before drawing the “F”. it is. the translation applies to all the drawing that you do after that. so that subsequent drawing will be affected by the combined transformation. not 90. when rotation through an angle of r radians about the origin is applied to the point (x. (This is counterclockwise in the coordinate system that we are using here. Let’s look at an example. the dark red “F” shows the same “F” drawn after applying a translation by (4.0). ∗ ∗ ∗ A translation simply moves each point by a certain amount horizontally and a certain amount vertically. This is an important point. not degrees. and b = c = 0.y). angles in Java are measured in radians.2).f ). you could simply say g. and there will be a lot more to say about it later. The arrow shows that the upper left corner of the “F” has been moved over 4 units and up 2 units.CHAPTER 1. It will be multiplied by the translation.translate(e. the second transform will not replace the translation. (Thus for a translation. ∗ ∗ ∗ A rotation rotates each point about the origin. then the resulting point (x1.sin(r) * y y1 = sin(r) * x + cos(r) * y . however. For example.) Although it is not obvious.0). and the “F” that appears on the screen will actually be centered at (4. where the y-axis points down rather than up. Every point in the “F” is subjected to the same displacement. If you apply another transformation to the same g.2). Remember that when you say g.

That is, in the general formula for an affine transform, a = d = cos(r), b = −sin(r), c = sin(r), and e = f = 0. Here is a picture that illustrates a rotation about the origin by the angle −3π/4:

Again, the light gray "F" is the original shape, and the dark red "F" is the shape that results if you apply the rotation. The arrow shows how the upper left corner of the original "F" has been moved. In a Graphics2D g, you can apply a rotation through an angle r by saying g.rotate(r) before drawing the shapes that are to be rotated.

We are now in a position to see what can happen when you combine two transformations. Suppose that you say

   g.translate(4,0);
   g.rotate(Math.PI/2);

before drawing some shape. The translation applies to all subsequent drawing, and the thing that you draw after the translation is a rotated shape. That is, the translation applies to a shape to which a rotation has already been applied. An example is shown on the left in the illustration below, where the light gray "F" is the original shape. The dark red "F" shows the result of applying the two transforms. The "F" has first been rotated through a 90 degree angle, and then moved 4 units to the right.

Note that the order in which the transforms are applied is important. Transforms are applied to shapes in the reverse of the order in which they are given in the code (because the first transform in the code is applied to a shape that has already been affected by the second transform). If we reverse the order in which the two transforms are applied in this example, by saying

   g.rotate(Math.PI/2);
   g.translate(4,0);

then the result is as shown on the right in the above illustration. In that picture, the original "F" is first moved 4 units to the right, and the resulting shape is then rotated through an angle of π/2 about the origin to give the shape that actually appears on the screen.
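The difference between the two orders is easy to verify numerically. This sketch builds both combined transforms with AffineTransform (assuming the same imports as before) and applies them to the point (1,0):

   AffineTransform t1 = new AffineTransform();
   t1.translate(4, 0);
   t1.rotate(Math.PI / 2);   // translation first in the code: the rotation acts on the shape first
   AffineTransform t2 = new AffineTransform();
   t2.rotate(Math.PI / 2);
   t2.translate(4, 0);       // the reversed order
   Point2D p = new Point2D.Double(1, 0);
   System.out.println(t1.transform(p, null));  // approximately (4, 1)
   System.out.println(t2.transform(p, null));  // approximately (0, 5)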

For a useful application of using several transformations, suppose that we want to rotate a shape through an angle r about a point (p,q) instead of about the point (0,0). We can do this by first moving the point (p,q) to the origin with g.translate(−p,−q). Then we can do a standard rotation about the origin by calling g.rotate(r). Finally, we can move the origin back to the point (p,q) with g.translate(p,q). Keeping in mind that we have to apply the transformations in the reverse order, we can say

   g.translate(p,q);
   g.rotate(r);
   g.translate(-p,-q);

before drawing the shape. In fact, you can do the same thing in Java with one command: g.rotate(r,p,q) will apply a rotation of r radians about the point (p,q) to subsequent drawing operations.
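We can check this claim with AffineTransform, whose getRotateInstance(r,p,q) factory method does the same job as g.rotate(r,p,q); the values of r, p, and q in this sketch are arbitrary:

   double r = Math.PI / 6, p = 2, q = 3;
   AffineTransform threeStep = new AffineTransform();
   threeStep.translate(p, q);
   threeStep.rotate(r);
   threeStep.translate(-p, -q);
   AffineTransform oneStep = AffineTransform.getRotateInstance(r, p, q);
   // The two transforms agree (up to round-off error). For example:
   Point2D pt = new Point2D.Double(5, 5);
   System.out.println(threeStep.transform(pt, null));
   System.out.println(oneStep.transform(pt, null));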

∗ ∗ ∗

A scaling transform can be used to make objects bigger or smaller. Mathematically, a scaling transform simply multiplies each x-coordinate by a given amount and each y-coordinate by a given amount. That is, if a point (x,y) is scaled by a factor of a in the x direction and by a factor of d in the y direction, then the resulting point (x1,y1) is given by

   x1 = a * x
   y1 = d * y

If you apply this transform to a shape that is centered at the origin, it will stretch the shape by a factor of a horizontally and d vertically. Here is an example, in which the original light gray "F" is scaled by a factor of 3 horizontally and 2 vertically to give the final dark red "F":

The common case where the horizontal and vertical scaling factors are the same is called uniform scaling. Uniform scaling stretches or shrinks a shape without distorting it. To scale by (a,d) in a Graphics2D g, you can call g.scale(a,d). As usual, the transform applies to all x and y coordinates in subsequent drawing operations.

When scaling is applied to a shape that is not centered at (0,0), then in addition to being stretched or shrunk, the shape will be moved away from 0 or towards 0. In fact, the true description of a scaling operation is that it pushes every point away from (0,0) or pulls them towards (0,0). (If you want to scale about a point other than (0,0), you can use a sequence of three transforms, similar to what was done in the case of rotation. Java provides no shorthand command for this operation, however.) Note that negative scaling factors are allowed and will result in reflecting the shape as well as stretching or shrinking it.

∗ ∗ ∗

We will look at one more type of basic transform, a shearing transform. Although shears can in fact be built up out of rotations and scalings if necessary, it is not really obvious how to do so. A shear will "tilt" objects. A horizontal shear will tilt things towards the left (for negative shear) or right (for positive shear). A vertical shear tilts them up or down. Here is an example of horizontal shear:

A horizontal shear does not move the x-axis. Every other horizontal line is moved to the left or to the right by an amount that is proportional to the y-value along that line. When a horizontal shear is applied to a point (x,y), the resulting point (x1,y1) is given by

   x1 = x + b * y
   y1 = y

for some constant shearing factor b. Similarly, a vertical shear by a shearing factor c has equations

   x1 = x
   y1 = c * x + y

In Java, the method for applying a shear to a Graphics2D g allows you to specify both a horizontal shear factor b and a vertical shear factor c: g.shear(b,c). For a pure horizontal shear, you can set c to zero; for a pure vertical shear, you can set b to zero.
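For instance, the following sketch applies a pure horizontal shear with factor b = 0.5, using AffineTransform's getShearInstance method (which takes the horizontal and vertical shear factors); the point (0,1) slides half a unit to the right, while its y-coordinate is unchanged:

   AffineTransform shear = AffineTransform.getShearInstance(0.5, 0);
   Point2D p = shear.transform(new Point2D.Double(0, 1), null);
   System.out.println(p);  // Point2D.Double[0.5, 1.0]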

∗ ∗ ∗

In Java, an affine transform is represented by an object belonging to the class java.awt.geom.AffineTransform. The current transform in a Graphics2D g is an object of type AffineTransform. You can retrieve the current affine transform from a Graphics2D g by calling g.getTransform(), and you can replace the current transform by calling g.setTransform(t). Usually, however, it is recommended to use g.setTransform only for restoring a previous transform in g, after temporarily modifying it. For ordinary transformations, you will just use the corresponding methods in the Graphics2D class. Methods such as g.rotate and g.shear multiply the current transform object by another transform. There are also methods in the AffineTransform class itself for multiplying the transform by a translation, a rotation, a scaling, or a shear.

One use for an AffineTransform is to compute the inverse transform. The inverse transform for a transform t is a transform that exactly reverses the effect of t. Applying t followed by the inverse of t has no effect at all—it is the same as the identity transform. Not every transform has an inverse. For example, there is no way to undo a scaling transform that has a scale factor of zero. Most transforms, however, do have inverses. For the basic transforms, the inverses are easy: The inverse of a translation by (e,f) is a translation by (−e,−f). The inverse of rotation by an angle r is rotation by −r. The inverse of scaling by (a,d) is scaling by (1/a,1/d), provided a and d are not zero. For a general AffineTransform t, you can call t.createInverse() to get the inverse transform. This method will throw an exception of type NoninvertibleTransformException if t does not have an inverse. This is a checked exception, which requires handling, for example with a try..catch statement.

Sometimes, you might want to know the result of applying a transform or its inverse to a point. For an AffineTransform t and points p1 and p2 of type Point2D, you can call t.transform( p1, p2 ) to apply t to the point p1, storing the result of the transformation in p2. (It's OK for p1 and p2 to be the same object.) Similarly, to apply the inverse transform of t, you can call t.inverseTransform( p1, p2 ). This statement will throw a NoninvertibleTransformException if t does not have an inverse.
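Here is a small sketch of createInverse in action, with the required exception handling; the transform used is arbitrary:

   AffineTransform t = AffineTransform.getRotateInstance(Math.PI / 4);
   t.scale(2, 2);
   try {
      AffineTransform inv = t.createInverse();
      Point2D p = t.transform(new Point2D.Double(1, 1), null);
      System.out.println(inv.transform(p, null));  // approximately (1, 1)
   }
   catch (NoninvertibleTransformException e) {
      // Cannot happen here, since the scale factors are non-zero.
   }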

∗ ∗ ∗

You have probably noticed that we have discussed two different uses of transforms: Coordinate transforms change the coordinate system that is used for drawing, while modeling transforms modify shapes that are drawn. In fact, these two uses are two sides of the same coin, and you can see that it's really just a nominal distinction: The same transforms that are used for one purpose can be used for the other.

Let's see how to set up a coordinate system on a component such as a JPanel. In the standard coordinate system, the upper left corner is (0,0), and the component has a width and a height that give its size in pixels. The width and height can be obtained by calling the panel's getWidth and getHeight methods. Suppose that we would like to draw on the panel using a real-number coordinate system that extends from x1 on the left to x2 on the right and from y1 at the top to y2 at the bottom. A point (x,y) in these coordinates corresponds to pixel coordinates

   ( (x-x1)/(x2-x1) * width, (y-y1)/(y2-y1) * height )

To see this, note that (x-x1)/(x2-x1) is the distance of x from the left edge of the panel, given as a fraction of the total width, and similarly for the height. If we rewrite this as

   ( (x-x1) * (width/(x2-x1)), (y-y1) * (height/(y2-y1)) )

we see that the change in coordinates can be accomplished by first translating by (-x1,-y1) and then scaling by (width/(x2-x1), height/(y2-y1)). Keeping in mind that transforms are applied to coordinates in the reverse of the order in which they are given in the code, we can implement this coordinate transform in the panel's paintComponent method as follows:

   protected void paintComponent(Graphics g) {
      super.paintComponent(g);
      Graphics2D g2 = (Graphics2D)g.create();
      g2.scale( getWidth() / ( x2 - x1 ), getHeight() / ( y2 - y1 ) );
      g2.translate( -x1, -y1 );
        .
        .   // draw the content of the panel, using the new coordinate system
        .
   }

The scale and translate in this example set up the coordinate transformation; any further transforms can be thought of as modeling transforms that are used to modify shapes. The transforms used in this example will work even in the case where y1 is greater than y2. This allows you to introduce a coordinate system in which the minimal value of y is at the bottom rather than at the top.

Now suppose that the component in which you will draw has width 800 and height 400, and suppose that you want to use a coordinate system in which both x and y range from 0 to 1. One issue that remains is aspect ratio, which refers to the ratio between the height and the width of a rectangle. The aspect ratio for the component is 0.5. The rectangle that you want to display is a square, which has aspect ratio 1. If you simply map the square onto the component, shapes will be stretched twice as much in the horizontal direction as in the vertical direction. In general, if the aspect ratio of the coordinate rectangle that you want to display does not match the aspect ratio of the component in which you will display it, then shapes will be distorted because they will be stretched more in one direction than in the other. If this is a problem, a solution is to use only part of the component for drawing the square. We can do this by padding the square on both sides with some extra x-values. That is, while still using the range of y values from 0 to 1, we can map the range of x values from −0.5 to 1.5 onto the component. This will place the square that we really want to draw in the middle of the component. Similarly, if the aspect ratio of the component is greater than 1, we can pad the range of y values.

The applyLimits method, shown below, can be used to set up the coordinate system in a Graphics2D. The parameter limitsRequested should be an array containing the left, right, top, and bottom edges of the coordinate rectangle that you want to display. If the preserveAspect parameter is true, then the actual limits will be padded in either the horizontal or vertical direction to match the aspect ratio of the coordinate rectangle to the width and height of the "viewport." (The viewport is probably just the entire component where g2 draws, but the method could also be used to map the coordinate rectangle onto a different viewport.) This method sets the global variables pixelSize and transform. It should be called in the paintComponent method before doing any drawing to which the coordinate system should apply. This method is used in the sample program CubicBezierEdit.java, so you can look there for an example of how it is used.

   /**
    * Applies a coordinate transform to a Graphics2D graphics context. The upper
    * left corner of the viewport where the graphics context draws is assumed to
    * be (0,0). This method sets the global variables pixelSize and transform.
    *
    * @param g2 The drawing context whose transform will be set.
    * @param width The width of the viewport where g2 draws.
    * @param height The height of the viewport where g2 draws.
    * @param limitsRequested Specifies a rectangle that will be visible in the
    *    viewport. Under the transform, the rectangle with corners (limitsRequested[0],
    *    limitsRequested[1]) and (limitsRequested[2],limitsRequested[3]) will just
    *    fit in the viewport.
    * @param preserveAspect if preserveAspect is false, then the limitsRequested
    *    rectangle will exactly fill the viewport; if it is true, then the limits
    *    will be expanded in one direction, horizontally or vertically, to make
    *    the aspect ratio of the displayed rectangle match the aspect ratio of the
    *    viewport. Note that when preserveAspect is false, the units of measure in
    *    the horizontal and vertical directions will be different.
    */

   private void applyLimits(Graphics2D g2, int width, int height,
                                    double[] limitsRequested, boolean preserveAspect) {
      double[] limits = limitsRequested;
      if (preserveAspect) {
         double displayAspect = Math.abs((double)height / width);
         double requestedAspect =
                    Math.abs(( limits[3] - limits[2] ) / ( limits[1] - limits[0] ));
         if (displayAspect > requestedAspect) {
               // Pad the requested limits vertically.
            double excess = (limits[3] - limits[2]) * (displayAspect/requestedAspect - 1);
            limits = new double[] { limits[0], limits[1],
                                       limits[2] - excess/2, limits[3] + excess/2 };
         }
         else if (displayAspect < requestedAspect) {
               // Pad the requested limits horizontally.
            double excess = (limits[1] - limits[0]) * (requestedAspect/displayAspect - 1);
            limits = new double[] { limits[0] - excess/2, limits[1] + excess/2,
                                       limits[2], limits[3] };
         }
      }
      g2.scale( width / (limits[1]-limits[0]), height / (limits[3]-limits[2]) );
      g2.translate( -limits[0], -limits[2] );
      double pixelWidth = Math.abs(( limits[1] - limits[0] ) / width);
      double pixelHeight = Math.abs(( limits[3] - limits[2] ) / height);
      pixelSize = (float)Math.min(pixelWidth, pixelHeight);
      transform = g2.getTransform();
   }

The last two lines of this method assign values to pixelSize and transform, which are assumed to be global variables. These are values that might be useful elsewhere in the program. Remember that the unit of measure for the width of a stroke is the same as the unit of measure in the coordinate system that you are using for drawing. If you want a stroke—or anything else—that is measured in pixels, you need to know the size of a pixel. For example, pixelSize might be used for setting the width of a stroke to some given number of pixels:

   g2.setStroke( new BasicStroke(3*pixelSize) );

As for the transform, it can be used for transforming points between pixel coordinates and drawing coordinates. Suppose, for example, that you want to implement mouse interaction. The methods for handling mouse events will tell you the pixel coordinates of the mouse position. Sometimes, however, you need to know the drawing coordinates of that point. You can use the transform variable to make the transformation. In particular, suppose that evt is a MouseEvent. You can use the transform variable from the above method in the following code to transform the point from the MouseEvent into drawing coordinates:

   Point2D p = new Point2D.Double( evt.getX(), evt.getY() );
   transform.inverseTransform( p, p );

The point p will then contain the drawing coordinates corresponding to the mouse position.
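As a sketch of how this might look in context, here is the snippet inside a mouse listener. The panel name is a placeholder, and note that inverseTransform's checked exception must be handled even though it cannot actually occur for a transform set up by applyLimits:

   drawingPanel.addMouseListener( new MouseAdapter() {
      public void mousePressed(MouseEvent evt) {
         Point2D p = new Point2D.Double( evt.getX(), evt.getY() );
         try {
            transform.inverseTransform( p, p );  // pixel --> drawing coordinates
         }
         catch (NoninvertibleTransformException e) {
            return;  // cannot occur for a transform set up by applyLimits
         }
         // ... respond to a mouse press at ( p.getX(), p.getY() ) ...
      }
   } );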

1.3.2 Hierarchical Modeling

A major motivation for introducing a new coordinate system is that it should be possible to use the coordinate system that is most natural to the scene that you want to draw. We can extend this idea to individual objects in a scene: When drawing an object, it should be possible to use the coordinate system that is most natural to the object, and to apply a transformation to the object to place it wherever we want it in the scene. Geometric transformations allow us to specify the object using whatever coordinate system we want. Transformations used in this way are called modeling transformations. The modeling transform moves the object from its natural coordinates into its proper place in the scene.

Generally, we want an object in its natural coordinates to be centered at the origin, (0,0), or at least to use the origin as a convenient reference point. Then, to place it in the scene, we can use a scaling transform, followed by a rotation, followed by a translation to set its size, orientation, and position in the scene. Since scaling and rotation leave the origin fixed, those operations don't move the reference point for the object. A translation can then be used to move the reference point, carrying the object along with it, to any other point. Remember that in the code, the transformations are specified in the opposite order from the order in which they are applied to the object, and that the transformations are specified before drawing the object:

   g.translate(a,b);   // move object into position
   g.rotate(r);        // set the orientation of the object
   g.scale(s,s);       // set the size of the object
     .
     .   // draw the object, using its natural coordinates
     .

The modeling transformations that are used to place an object in the scene should not affect other objects in the scene. To limit their application to just the one object, we can save the current transformation before starting work on the object and restore it afterwards. The code could look something like this, where g is a Graphics2D:

   AffineTransform saveTransform = g.getTransform();
   g.translate(a,b);   // move object into position
   g.rotate(r);        // set the orientation of the object
   g.scale(s,s);       // set the size of the object
     .
     .   // draw the object, using its natural coordinates
     .
   g.setTransform(saveTransform);   // restore the previous transform

Note that we can't simply assume that the original transform is the identity transform. There might be another transform in place, such as a coordinate transform, that affects the scene as a whole. The modeling transform for the object is effectively applied in addition to any other transform that was specified previously.
Now let's extend this a bit. Suppose that the object that we want to draw is itself a complex picture, made up of a number of smaller objects. Think, for example, of a potted flower made up of pot, stem, leaves, and bloom. We would like to be able to draw the smaller component objects in their own natural coordinate systems, just as we do the main object. But this is easy: We draw each small object in its own coordinate system, and use a modeling transformation to move the small object into position within the main object. Then, on top of that, we can apply another modeling transformation to the main object, to move it into the completed scene. When we do this, its component objects are carried along with it. In fact, we can build objects that are made up of smaller objects which in turn are made up of even smaller objects, to any level. This type of modeling is known as hierarchical modeling.

Let's look at a simple example. Suppose that we want to draw a simple 2D image of a cart with two wheels. We will draw the body of the cart as a rectangle. For the wheels, suppose that we have written a method

   private void drawWheel(Graphics2D g2)

that draws a wheel. The wheel is drawn using its own natural coordinate system. Let's say that in this coordinate system, the wheel is centered at (0,0) and has radius 1.

We will draw the cart with the center of its rectangular body at the point (0,0). The rectangle has width 5 and height 2, so its corner is at (−2.5,−1). To complete the cart, we need two wheels. To make the size of the wheels fit the cart, we will probably have to scale them. To place them in the correct positions relative to the body of the cart, we have to translate one wheel to the left and one wheel to the right. When I coded this example, I had to play around with the numbers to get the right sizes and positions for the wheels, and I also found that the wheels looked better if I also moved them down a bit. We have to remember to save the current transform and to restore it after drawing each wheel. Here is a subroutine that can be used to draw the cart:

   private void drawCart(Graphics2D g2) {
      AffineTransform tr = g2.getTransform();  // save the current transform
      g2.translate(-1.5,-0.1);   // center of first wheel will be at (-1.5,-0.1)
      g2.scale(0.8,0.8);         // scale to reduce radius from 1 to 0.8
      drawWheel(g2);             // draw the first wheel
      g2.setTransform(tr);       // restore the transform
      g2.translate(1.5,-0.1);    // center of second wheel will be at (1.5,-0.1)
      g2.scale(0.8,0.8);         // scale to reduce radius from 1 to 0.8
      drawWheel(g2);             // draw the second wheel
      g2.setTransform(tr);       // restore the transform
      g2.setColor(Color.RED);
      g2.fill(new Rectangle2D.Double(-2.5,-1,5,2) );  // draw the body of the cart
   }

It's important to note that the same subroutine is used to draw both wheels. The reason that two wheels appear in the picture is that different modeling transformations are in effect for the two subroutine calls.

Once we have this cart-drawing subroutine, we can use it to add a cart to a scene. When we do this, we can apply another modeling transformation to the cart as a whole. Indeed, we could add several carts to the scene, if we want, by calling the cart subroutine several times with different modeling transformations. You should notice the analogy here: Building up a complex scene out of objects is similar to building up a complex program out of subroutines. In both cases, you can work on pieces of the problem separately, you can compose a solution to a big problem from solutions to smaller problems, and once you have solved a problem, you can reuse that solution in several places.

Here is our cart used in a scene. The scene shows the cart on a road, with hills, windmills, and a sun in the background. The on-line applet version of this example is an animation in which the windmills turn and the cart rolls down the road.

You can probably see how hierarchical modeling is used to draw the windmills in this example. There is a drawWindmill method that draws a windmill in its own coordinate system. Each of the windmills in the scene is then produced by applying a different modeling transform to the standard windmill. It might not be so easy to see how different parts of the scene can be animated. In fact, animation is just another aspect of modeling.

A computer animation consists of a sequence of frames. Each frame is a separate image, with small changes from one frame to the next. The same object can appear in many frames. From our point of view, each frame is a separate scene and has to be drawn separately. To animate the object, we can simply apply a different modeling transformation to the object in each frame. The parameters used in the transformation can be computed from the current time or from the frame number. To make a cart move from left to right, for example, we might apply a translation

   g2.translate(frameNumber * 0.1, 0);

where frameNumber is the frame number, which increases by 1 from one frame to the next. In each frame, the cart will be 0.1 units farther to the right than in the previous frame. (In fact, in the actual program, the translation that is applied to the cart is

   g2.translate(-3 + 13*(frameNumber % 300) / 300.0, 0);

which moves the reference point of the cart from −3 to 13 along the horizontal axis every 300 frames. The animation is driven by a Timer that fires every 30 milliseconds. Each time the timer fires, 1 is added to frameNumber and the scene is redrawn.)
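The timer itself takes only a few lines. Here is a minimal sketch using a javax.swing.Timer; it assumes the code appears inside the panel class, which has an instance variable frameNumber:

   Timer animationTimer = new Timer( 30, new ActionListener() {
      public void actionPerformed(ActionEvent evt) {
         frameNumber++;   // advance the animation by one frame
         repaint();       // schedule a call to paintComponent, which redraws the scene
      }
   } );
   animationTimer.start();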

The really neat thing is that this type of animation works with hierarchical modeling. For example, the drawWindmill method doesn't just draw a windmill—it draws an animated windmill, with turning vanes. That just means that the rotation applied to the vanes depends on the frame number. When a modeling transformation is applied to the windmill, the rotating vanes are scaled and moved as part of the object as a whole. This is actually an example of hierarchical modeling: The rotation of the vanes is part of the modeling transformation that places the vanes into the windmill object. The vanes are sub-objects of the windmill. Then a further modeling transformation can be applied to the windmill object to place it in the scene.

The file HierarchicalModeling2D.java contains the complete source code for this example. You are strongly encouraged to read it, especially the paintComponent method, which composes the entire scene.

Chapter 2

The Basics of OpenGL and Jogl

The step from two dimensions to three dimensions in computer graphics is a big one. Three-dimensional objects are harder to visualize than two, and three-dimensional transformations take some getting used to. Three-dimensional scenes have to be projected down onto a two-dimensional surface to be viewed, and that introduces its own complexities. Furthermore, there is the problem that realistic 3D scenes require the effects of lighting to make them appear three-dimensional to the eye. Without these effects, the eye sees only flat patches of color. To simulate lighting in 3D computer graphics, you have to know what light sources illuminate the scene, and you have to understand how the light from those sources interacts with the surface material of the objects in the scene.

As we pursue our study of OpenGL, we will cover all of this and more. We will start in this chapter with an overview that will allow you to use OpenGL to create fairly complex 3D scenes. We will draw some simple shapes, change the drawing color, and apply some geometric transforms. This will already give us enough capabilities to do some hierarchical modeling and animation. In later chapters, we will cover more of the details and advanced features of OpenGL, and we will consider the general theory of 3D graphics in more depth.

2.1 Basic OpenGL 2D Programs (online)

The first version of OpenGL was introduced in 1992. A sequence of new versions has gradually added features to the API, with the latest version being OpenGL 3.1, in 2009. In 2004, the power of OpenGL was greatly enhanced in version 2.0 with the addition of GLSL, the OpenGL Shading Language, which makes it possible to replace parts of the standard OpenGL processing with custom programs written in a C-like programming language. This course will cover nothing beyond OpenGL 2.1. (Indeed, I have no graphics card available to me that supports a later version.)

OpenGL is an API for creating images. It does not have support for writing complete applications. You can't use it to open a window on a computer screen or to support input devices such as a keyboard and mouse. To do all that, you need to combine OpenGL with a general-purpose programming language, such as Java. While OpenGL is not part of standard Java, libraries are available that add support for OpenGL to Java. The libraries are referred to as Jogl or as the Java Binding for OpenGL. We will be using Jogl version 1.1.1a (although Jogl 2.0 will be available soon). In this section, we'll see how to create basic two-dimensional scenes using Jogl and OpenGL.

2.1.1 A Basic Jogl App

To use OpenGL to display an image in a Java program, you need a GUI component in which you can draw. Java's standard AWT and Swing components don't support OpenGL drawing directly, but the Jogl API defines two new component classes that you can use, GLCanvas and GLJPanel. GLCanvas is a subclass of the standard AWT Canvas class, while GLJPanel is a subclass of Swing's JPanel class. These classes, like most of the Jogl classes that we will use in this chapter, are defined in the package javax.media.opengl. While GLCanvas might offer better performance, it doesn't fit as easily into Swing applications as does GLJPanel. For now, we will use GLJPanel.

Both of these classes implement an interface, GLAutoDrawable, that represents the ability to support OpenGL drawing. OpenGL drawing is not done in the paintComponent method of a GLJPanel. Instead, you should write an event listener object that implements the GLEventListener interface, and you should register that object to listen for events from the panel. It's the GLEventListener that will actually do the drawing, in the GLEventListener method

   public void display(GLAutoDrawable drawable)

The GLAutoDrawable will be the GLJPanel (or other OpenGL drawing surface) that needs to be drawn. This method plays the same role as the paintComponent method of a JPanel. The GLEventListener class defines three other methods:

   public void init(GLAutoDrawable drawable)
   public void reshape(GLAutoDrawable drawable,
                  int x, int y, int width, int height)
   public void displayChanged(GLAutoDrawable drawable,
                  boolean modeChanged, boolean deviceChanged)

The init method is called once, when the drawable is first created. It is analogous to a constructor. The reshape method is called when the drawable changes shape. The third method can always be left empty; it has to be there to satisfy the definition of a GLEventListener, but it will not be called (and in fact will not even exist in Jogl 2.0).

To actually do anything with OpenGL, you need an OpenGL context, which is represented in Jogl by an object of type GL. You can get a context from a GLAutoDrawable by saying

   GL gl = drawable.getGL();

Generally, this statement is the first line of the init method, of the display method, and of the reshape method in a GLEventListener. An object of type GL includes a huge number of methods, and the GL class defines a huge number of constants for use as parameters in those methods. In fact, most of the OpenGL API is contained in this class. We will spend much of our time in the rest of this course investigating the GL class.

∗ ∗ ∗

We are just about ready to look at our first program! Here is a very simple but functional Jogl application:

   import java.awt.*;
   import javax.swing.*;
   import javax.media.opengl.*;

   public class BasicJoglApp2D extends JPanel implements GLEventListener {

      public static void main(String[] args) {
         JFrame window = new JFrame("Basic JOGL App 2D");
         window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

         window.setContentPane(new BasicJoglApp2D());
         window.pack();
         window.setVisible(true);
      }

      public BasicJoglApp2D() {
         GLJPanel drawable = new GLJPanel();
         drawable.setPreferredSize(new Dimension(600,600));
         drawable.addGLEventListener(this);  // Set up events for OpenGL drawing!
         setLayout(new BorderLayout());
         add(drawable, BorderLayout.CENTER);
      }

      public void init(GLAutoDrawable drawable) {
      }

      public void display(GLAutoDrawable drawable) {
         GL gl = drawable.getGL();
         gl.glClear(GL.GL_COLOR_BUFFER_BIT);   // Fill with background color.
         gl.glBegin(GL.GL_POLYGON);            // Start drawing a polygon.
         gl.glVertex2f(-0.5f,-0.3f);           // First vertex of the polygon.
         gl.glVertex2f(0.5f,-0.3f);            // Second vertex.
         gl.glVertex2f(0,0.6f);                // Third vertex.
         gl.glEnd();                           // Finish the polygon.
      }

      public void reshape(GLAutoDrawable drawable,
                     int x, int y, int width, int height) {
      }

      public void displayChanged(GLAutoDrawable drawable,
                     boolean modeChanged, boolean deviceChanged) {
      }

   }

The constructor in this class creates a GLJPanel, sets its preferred size, and adds it to the main panel. The main panel itself acts as a GLEventListener, and registers itself to listen for OpenGL events from drawable. This is done with the command

   drawable.addGLEventListener(this);

The display method is then the place where the content of the GLJPanel is drawn. Let's look at the display method, since it's our first example of OpenGL drawing. This method starts, as discussed above, by getting an OpenGL context. The rest of the method uses that context for all OpenGL drawing operations. The goal is to draw a triangle. Default settings are used for the background color (black), the drawing color (white), and the coordinate system (x coordinates ranging from −1 on the left to 1 on the right, y coordinates from −1 on the bottom to 1 at the top).

The first step is to fill the entire component with the background color. This is done with the command

   gl.glClear(GL.GL_COLOR_BUFFER_BIT);

This command "clears" the drawing area to its background color. It's actually clearing the color buffer, where the color value for each pixel is stored. There are several buffers that can be cleared with this command, and the parameter tells which of them should be cleared. (We'll see later how to set the background color.)

The rest of the display method draws the triangle. Each of the vertices of the triangle is specified by calling the method gl.glVertex2f. This method just specifies points. To say what is done with the points, the calls to gl.glVertex2f have to come between gl.glBegin and gl.glEnd. The parameter to gl.glBegin gives the meaning of the points—in this case the fact that they are the vertices of a filled polygon. If we were to replace GL.GL_POLYGON with GL.GL_LINE_LOOP in the glBegin method, then the method would draw the outline of the triangle instead of a filled triangle.

You are probably thinking that "gl.glVertex2f" is a pretty strange name for a method. The names of the methods and constants come from the OpenGL API, which was designed with the C programming language in mind. It would have been possible to simplify the names in Java (and in particular to leave off the extra "gl"), but it was thought that programmers would be more comfortable with the more familiar names. The name "glVertex2f" actually has a very specific format. There is a basic name, "glVertex". Then the number 2 tells how many parameters there will be, and the "f" at the end says that the parameters are of type float. There is a version of the basic method that has two parameters of type double, and its name is "glVertex2d". When we move into 3D, the 2 will change to a 3, giving "glVertex3f" and "glVertex3d", since one needs three coordinates to specify a point in three dimensions. A lot of method names in OpenGL work the same way. Unfortunately, you can't always predict which possible variations of a name are actually defined.

Now that we have a sample Jogl application, we can use the same outline for other programs. We just need to change the init, display, and reshape methods.

2.1.2 Color

To set the drawing color in an OpenGL context gl, you can use one of the glColor methods, such as gl.glColor3f(red,green,blue). For this variation, the parameters are three values of type float in the range 0 to 1. For example, gl.glColor3f(1,1,0) will set the color for subsequent drawing operations to yellow. (There is also a version, glColor3d, that takes parameters of type double.)

Suppose that we want to draw a yellow triangle with a black border. This can be done with a display method:

   public void display(GLAutoDrawable drawable) {
      GL gl = drawable.getGL();
      gl.glClear(GL.GL_COLOR_BUFFER_BIT);
      gl.glColor3f(1,1,0);            // Draw with yellow.
      gl.glBegin(GL.GL_POLYGON);      // Draw a filled triangle.
      gl.glVertex2f(-0.5f,-0.3f);
      gl.glVertex2f(0.5f,-0.3f);
      gl.glVertex2f(0,0.6f);
      gl.glEnd();
      gl.glColor3f(0,0,0);            // Draw with black.
      gl.glBegin(GL.GL_LINE_LOOP);    // Draw the outline of the triangle.
      gl.glVertex2f(-0.5f,-0.3f);
      gl.glVertex2f(0.5f,-0.3f);
      gl.glVertex2f(0,0.6f);
      gl.glEnd();
   }

That takes care of changing the drawing color. The background color is a little more complicated. There is no background color as such. Instead, there is a "clear color," which is used by the glClear method to fill the color buffer. The clear color can be set by calling

   gl.glClearColor(red,green,blue,alpha);

This method has only one version, and the parameters must be float values in the range 0 to 1. The parameters specify the color components of the color that will be used for clearing the color buffer, including an alpha component, which should ordinarily be set equal to 1. (Alpha components can be used for transparency, but alphas are a more complicated topic in OpenGL than they are in standard Java graphics, and we will avoid that topic for now.)

The clear color is often set once and for all, and then does not change between one call of display and the next. In that case, it is not necessary to set the clear color in display; it can be set in init instead. Here is an example that sets the clear color to light blue:

   public void init(GLAutoDrawable drawable) {
      GL gl = drawable.getGL();
      gl.glClearColor(0.8f, 0.8f, 1, 1);
   }

This illustrates the use of the init method for setting the values of OpenGL state variables that will remain the same throughout the program. Note that this uses an important fact about OpenGL: An OpenGL context keeps many internal values in state variables, initialized to default values. This includes, among many other things, the drawing color, the clear color, the lighting setup, and the current geometric transform. All these state variables retain their values between calls to init, display, and reshape. This is different from Java graphics, where every call to paintComponent starts with a new graphics context. For example, if you are drawing in red at the end of one call to display, then you will be drawing in red at the beginning of the next call. If you want to be sure of drawing with some default drawing color at the beginning of display, you have to put in an explicit call to glColor3f to do so.

2.1.3 Geometric Transforms and Animation

Naturally, we don't want to spend too much time looking at still images. We need some action! We can use modeling transforms for animation in OpenGL, just as we did with Graphics2D in Section 1.3. We can make a simple animation by introducing a frameNumber instance variable to our program and adding a Timer to drive the animation. When the timer fires, we will increment the frame number and call the repaint method of the GLJPanel. Calling repaint will in turn trigger a call to the display method where the actual OpenGL drawing is done. If we make that drawing depend on the frame number, we get an animation.

In OpenGL, transforms are meant to work in three dimensions rather than two. We can still use 3D transforms to operate in 2D, as long as we make sure to leave the z coordinate unchanged. The main difference is that in OpenGL, in addition to the usual x and y coordinates, there is a z coordinate. For example, translation in OpenGL is done with

   gl.glTranslatef(dx,dy,dz);

where the parameters specify the amount of displacement in the x, y, and z directions. The parameters are of type float, as indicated by the "f" at the end of "translatef". If you want to use parameters of type double, you can use gl.glTranslated. The names for the other transform methods work in the same way. By setting the third parameter, dz, to 0, we get a 2D translation.

Similarly, scaling can be done with

   gl.glScalef(sx,sy,sz);

where the parameters specify the scaling factor in the x, y, and z directions. To avoid changing the scale in the z direction, the third parameter, sz, should be 1.

Rotation transforms are a little more complicated, since rotation in 3D is more complicated than rotation in 2D. In two dimensions, rotation is rotation about a point, and by default, the rotation is about the origin, (0,0). In three dimensions, rotation is about a line called the axis of rotation. Think of the rotating Earth spinning about its axis. To specify a rotation in OpenGL, you must specify both an angle of rotation and an axis of rotation. This can be done by calling the method

   gl.glRotatef(d,x,y,z);

where d is the angle of rotation, measured in degrees, not radians, and the axis of rotation is the line through the origin (0,0,0) and the point (x,y,z). For a rotation of d degrees about the 2D origin (0,0), you can set (x,y,z) to (0,0,1). That is, you can call gl.glRotatef(d,0,0,1). The direction of rotation is counterclockwise when d is positive.

For example, if we want to scale a 2D object by a factor of 1.5 horizontally and 0.8 vertically, rotate it 30 degrees, and then move it 3 units over and 1.5 units up, we could say:

   gl.glTranslatef( 3, 1.5f, 0 );
   gl.glRotatef( 30, 0, 0, 1 );
   gl.glScalef( 1.5f, 0.8f, 1 );
     .
     .   // draw the object
     .

Keep in mind that the transforms apply to all subsequent drawing, not just to the next object drawn, unless you do something to restore the original transform. And when there are multiple transforms, they are applied to shapes in the reverse of the order in which they are specified in the code. As with Graphics2D, transforms apply to drawing that is done after the transform is specified, not to things that have already been drawn. Furthermore, since OpenGL state persists across calls to display, the transform will still be in effect the next time display is called, unless you do something to prevent it! To avoid this problem, it's common to set the transform at the beginning of the display method to be the identity transform. This can be done by calling gl.glLoadIdentity().

As an example, we can use this in an animation of a spinning triangle. Here, for example, is a display method that draws a rotated triangle:

   public void display(GLAutoDrawable drawable) {
      GL gl = drawable.getGL();
      gl.glClear(GL.GL_COLOR_BUFFER_BIT);
      gl.glLoadIdentity();                  // Start from the identity transform!
      gl.glRotated(frameNumber, 0, 0, 1);   // Apply a rotation of frameNumber degrees.
      gl.glColor3f(1,1,0);                  // Draw a yellow filled triangle.
      gl.glBegin(GL.GL_POLYGON);
      gl.glVertex2f(-0.5f,-0.3f);
      gl.glVertex2f(0.5f,-0.3f);
      gl.glVertex2f(0,0.6f);
      gl.glEnd();
      gl.glColor3f(0,0,0);                  // Outline the triangle with black.
      gl.glBegin(GL.GL_LINE_LOOP);
      gl.glVertex2f(-0.5f,-0.3f);
      gl.glVertex2f(0.5f,-0.3f);
      gl.glVertex2f(0,0.6f);
      gl.glEnd();
   }

Since the rotation depends on frameNumber, the triangle rotates as the frame number increases. You can find the full source code for this simple animation program in

BasicJoglAnimation2D.java. You can find an applet version of the program on-line. Here is one frame from the animation:

2.1.4 Hierarchical Modeling

When we do geometric modeling, we need a way to limit the effect of a transform to one object. That is, to limit transforms that are applied to an object to just that object, we should save the current transform before drawing the object, and restore that transform after the object has been drawn. OpenGL has methods that are meant to do precisely that. The command

   gl.glPushMatrix();

saves a copy of the current transform. It should be matched by a later call to

   gl.glPopMatrix();

which restores the transform that was saved by gl.glPushMatrix (thus discarding any changes that were made to the current transform between the two method calls). ("Matrix," by the way, is really just another name for "transform." It refers to the way in which transforms are represented internally.) So, the general scheme for geometric modeling is:

   gl.glPushMatrix();
     .
     .   // Apply the modeling transform of the object.
     .
     .   // Draw the object.
     .
   gl.glPopMatrix();

As you might suspect from the method names, transforms are actually saved on a stack of transforms. glPushMatrix adds a copy of the current transform to the stack; glPopMatrix removes the transform from the top of the stack and uses it to replace the current transform in the OpenGL context. (Unfortunately, you cannot assume that this stack can grow to arbitrary size. However, OpenGL implementations must allow at least 32 transforms on the stack, which should be plenty for most purposes.)

The use of a stack makes it easy to implement hierarchical modeling. You can draw a scene using glPushMatrix and glPopMatrix when placing each object into the scene. But each of the objects could itself be a complex object that uses glPushMatrix and glPopMatrix to draw its sub-objects. As long as each "push" is matched with a "pop," drawing a sub-object will not make any permanent change to the transform stack. In outline, the process would go something like this:

   pushMatrix
   apply overall object modeling transform
      pushMatrix
      apply first sub-object modeling transform
      draw first sub-object
      popMatrix
      pushMatrix
      apply second sub-object modeling transform
      draw second sub-object
      popMatrix
       .
       .
       .
   popMatrix

And keep in mind that the sub-objects could themselves be complex objects, with their own nested calls to glPushMatrix and glPopMatrix.

In Subsection 1.3.2, we did some hierarchical modeling using Java Graphics2D. We're now in a position to do the same thing with Jogl. The sample program JoglHierarchicalModeling2D.java is an almost direct port of the Graphics2D example, HierarchicalModeling2D.java. It uses similar subroutines to draw a sun, a wheel, a cart, and a windmill. I encourage you to look through the program, even though it does use a few techniques that we haven't covered yet. For example, here is the cart-drawing subroutine in OpenGL:

   private void drawCart(GL gl) {
      gl.glPushMatrix();
      gl.glTranslatef(-1.5f, -0.1f, 0);   // Draw the first wheel as a sub-object.
      gl.glScalef(0.8f, 0.8f, 1);
      drawWheel(gl);   // does its own pushes and pops internally!
      gl.glPopMatrix();
      gl.glPushMatrix();
      gl.glTranslatef(1.5f, -0.1f, 0);    // Draw the second wheel as a sub-object.
      gl.glScalef(0.8f, 0.8f, 1);
      drawWheel(gl);
      gl.glPopMatrix();
      gl.glColor3f(1, 0, 0);
      gl.glBegin(GL.GL_POLYGON);          // Draw a rectangle for the body of the cart.
      gl.glVertex2f(-2.5f,-1);
      gl.glVertex2f(2.5f,-1);
      gl.glVertex2f(2.5f,1);
      gl.glVertex2f(-2.5f,1);
      gl.glEnd();
   }

When the display method draws the complete scene, it calls the drawCart method to add a cart to the scene. The cart requires its own modeling transformation to place it into the scene, and that transform requires its own calls to glPushMatrix and glPopMatrix:

   gl.glPushMatrix();
   gl.glTranslated(-3 + 13*(frameNumber % 300) / 300.0, 0, 0);
   gl.glScaled(0.3, 0.3, 1);
   drawCart(gl);
   gl.glPopMatrix();

2.1.5 Introduction to Scene Graphs

So far, we have been doing hierarchical modeling using subroutines to draw the objects. This works well, but there is another approach that goes well with object-oriented programming: we can represent the graphical objects in a scene with software objects in the program. Instead of having a cart-drawing subroutine, for example, we might have an object of type Cart, where Cart is a class that we have written to represent carts in the scene. (Or, as I will do in the actual example, we could have a cart object belonging to a more general class ComplexObject2D.) Note that the cart object would include references to the cart's sub-objects, and the scene as a whole would be represented by a linked collection of software objects. This linked data structure is an example of a scene graph. A scene graph is a data structure that represents the contents of a scene. When you need to draw the scene, you can get all the information that you need by traversing the scene graph.

The sample program JoglHMWithSceneGraph2D.java is another version of our hierarchical modeling animation, this one using a scene graph. In this version, an object in the scene is represented in the program by an object of type SceneNode2D. SceneNode2D is an abstract class (defined in scenegraph2D/SceneNode2D.java). It has an abstract method,

   public void draw(GL gl);

which draws the object in an OpenGL context. A SceneNode2D can have a color, and it can have a transformation that is specified by separate scaling, rotation, and translation factors.

SceneNode2D has a subclass ComplexObject2D that represents an object that is made up of sub-objects. Each sub-object is of type SceneNode2D and can itself be a ComplexObject2D. There is an add method for adding a sub-object to the list of sub-objects in the complex object. The draw method in the ComplexObject2D class simply draws the sub-objects, with their proper colors and transformations. Note that the complex object can have its own transformation, which is applied to the complex object as a whole, on top of the transformations that are applied to the sub-objects.

(The treatment of color for complex objects is also interesting. I have decided that the color of an object can be null, which is the default value. If the color of a sub-object is null, then its color will be inherited from the main object in which it is contained. The color in this case is an example of an attribute of a node in a scene graph. Attributes contain properties of objects beyond the purely geometric properties. The general question of how to handle attributes in scene graphs is an important design issue.)

SceneNode2D also has several subclasses to represent geometric primitives. For example, there is a class LineNode that represents a single line segment between two points. There is also PolygonNode and DiskNode. These geometric primitive classes represent the "bottom level" of the scene graph. They are objects that are not composed of simpler objects, and they are the fundamental building blocks of complex objects.

In the sample animation program, an instance variable named theWorld of type ComplexObject2D represents the entire content of the scene. This means that in the display method, drawing the content of the image requires only one line of code:

   theWorld.draw(gl);

Most of the work has been moved into a method, buildTheWorld(), that constructs the scene graph. This method is called only once, at the beginning of the program. It's interesting to see how objects in the scene are constructed.
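To make the discussion more concrete, here is a highly simplified sketch of what the two central classes might look like. This is my own reconstruction, not the actual code from the scenegraph2D directory, which has more features (separate x- and y-scaling factors, for example); the usual imports are omitted:

   public abstract class SceneNode2D {
      protected Color color;   // null means: inherit the color from the parent object.
      public void setColor(Color c) { color = c; }
      public abstract void draw(GL gl);
   }

   public class ComplexObject2D extends SceneNode2D {
      private ArrayList<SceneNode2D> subobjects = new ArrayList<SceneNode2D>();
      private double scale = 1, rotation = 0, translateX = 0, translateY = 0;
      public void add(SceneNode2D node) { subobjects.add(node); }
      public void setRotation(double degrees) { rotation = degrees; }
      public void setScaling(double s) { scale = s; }
      public void setTranslation(double x, double y) { translateX = x; translateY = y; }
      public void draw(GL gl) {
         gl.glPushMatrix();   // Limit this object's transform to this object.
         gl.glTranslated(translateX, translateY, 0);
         gl.glRotated(rotation, 0, 0, 1);
         gl.glScaled(scale, scale, 1);
         if (color != null)   // Otherwise, keep the color set by the parent.
            gl.glColor3f(color.getRed()/255f, color.getGreen()/255f, color.getBlue()/255f);
         for (SceneNode2D node : subobjects)
            node.draw(gl);    // (A full implementation would also restore the parent's
                              //  color after drawing a sub-object that has its own color.)
         gl.glPopMatrix();
      }
   }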

Here, for example, is the code that creates a wheel for the cart:

   ComplexObject2D wheel = new ComplexObject2D();
   wheel.setColor(new Color(0.75f, 0.75f, 0.75f));   // Color for most of the wheel.
   wheel.add( new DiskNode(1) );    // The parameter is the radius of the disk.
   DiskNode hubcap = new DiskNode(0.5);
   hubcap.setColor(Color.BLACK);    // Override color of hubcap.
   wheel.add(hubcap);
   for (int i = 0; i < 15; i++) {   // Add lines representing the wheel's spokes.
      double angle = (2*Math.PI/15) * i;
      wheel.add(new LineNode(0, 0, Math.cos(angle), Math.sin(angle)));
   }

Once this object is created, we can add it to the cart object. In fact, we can add it twice, with two different transformations. When the scene graph is traversed, the wheel will appear twice. The two occurrences will be in different locations because of the different transformations that are applied. To make things like this easier, I made a subclass TransformedObject of SceneNode2D to represent an object with an extra transform to be applied to that object. Here, then, is how the cart is created:

   ComplexObject2D cart = new ComplexObject2D();
   cart.add( new TransformedObject(wheel, 0.8, -1.5, -0.1) );   // First wheel.
   cart.add( new TransformedObject(wheel, 0.8, 1.5, -0.1) );    // Second wheel.
   cart.add( new PolygonNode(
                new double[] { -2.5, 0,  2.5, 0,  2.5, 2,  -2.5, 2} ) );   // The body.
   cart.setColor(Color.RED);   // Overall color; will apply only to the body.
   cart.setScaling(0.3);       // Suitable scaling for the scene.

We can now add the cart to the scene simply by calling theWorld.add(cart).

This leaves open the question of how to do animation when the objects are stored in a scene graph. For the type of animation that is done in this example, we just have to be able to change the transforms that are applied to the object. In the sample program, I store references to the objects representing the wheel, the cart, and the rotating vanes of the windmill in instance variables wheel, cart, and vanes. To animate these objects, the display method includes commands to set the rotation or translation of the object to a value that depends on the current frame number. Before drawing each frame, then, we will set the correct transforms for that frame. This is done just before the world is drawn:

   wheel.setRotation(frameNumber*20);
   cart.setTranslation(-3 + 13*(frameNumber % 300) / 300.0, 0);
   vanes.setRotation(frameNumber * (180.0/46));
   theWorld.draw(gl);

You can read the program to see how the rest of the scene graph is created and used. The classes that are used to build the scene graph can be found in the source directory scenegraph2D. The scene graph framework that is used in this example is highly simplified and is meant as an example only. More about scene graphs will be coming up later. By the way, a scene graph is called a "graph" because it's generally in the form of a data structure known as a "directed acyclic graph."

2.1.6 Compiling and Running Jogl Apps

Since Jogl is not a standard part of Java, you can't run programs that use it unless you make the Jogl classes available to that program. The classes that you need for Jogl 1.1.1 are distributed in two jar files, jogl.jar and gluegen-rt.jar.

These jar files must be on the Java classpath when you compile or run any program that uses Jogl. Furthermore, Jogl uses "native code libraries." Native code is code that is written in the machine language that is appropriate to a particular kind of computer, rather than in Java. The interface to OpenGL requires some native code, which means that you have to make the appropriate native code libraries available to Java when you run (but not when you compile) any program that uses Jogl. Unfortunately, you need different native code libraries for Linux, for MacOS, and for Windows. That's why there is a different Jogl download for each platform. The download for a given platform includes the appropriate native code libraries for that platform.

As I write this, the downloads for Jogl 1.1.1a for various platforms can be found at:

   http://download.java.net/media/jogl/builds/archive/jsr-231-1.1.1a/

You will need to get the Jogl 1.1.1a download for your platform if you want to develop Jogl programs on the command line or in the Eclipse IDE. Unzip the zip file and put the directory in some convenient place. The directory has a lib subdirectory that contains the jar files and native code libraries that you need. While you are downloading, you might also want to get the Jogl documentation download, jogl-1.1.1a-docs.zip, and unzip it as well.

If you use Netbeans (http://netbeans.org) as your development environment, you can avoid the whole issue. There is a Netbeans plugin that adds full support for Jogl/OpenGL development, and information about it can be found at

   http://kenai.com/projects/netbeans-opengl-pack/pages/Home

As I write this, the current version of the plugin supports Jogl 1.1.1. If you install the plugin, you can develop and run your programs in Netbeans with no further action.

If Eclipse (http://eclipse.org) is your development environment, when you want to use Jogl in an Eclipse project, you have to do the following setup in that project: Right-click the name of the project in the "Project Explorer" pane. In the pop-up menu, go to the "Build Path" submenu and select "Configure Build Path...". A dialog box will open. Click on the "Libraries" tab in the "Java Build Path" pane. Add the two required Jogl jar files to the build path: Click the button "Add External Jars...". In the file dialog box, browse to the lib directory in the Jogl download. Select jogl.jar and click "OK". Do the same thing for gluegen-rt.jar. Finally, set up the jar files to use the Jogl native code libraries: Click the triangle to the left of jogl.jar in the list of libraries. This will reveal a setting for "Native Library Location". Click it, then click the "Edit" button. Browse to the Jogl lib directory, and click OK. Now do the same thing for gluegen-rt.jar. If you downloaded the Jogl docs, you will also want to set up the "Javadoc Location" under "jogl.jar"; set it to point to the unzipped documentation directory. If you've gotten all the steps right, you can then compile and run Jogl programs in your Eclipse project.

∗ ∗ ∗

If you like to work on the command line, you can still do that with Jogl; it's just a little harder. When using javac to compile a class that uses Jogl, you need to add the Jogl jar file, jogl.jar, to the classpath. For example:

   javac -cp /home/eck/jogl-1.1.1/lib/jogl.jar:. packagename/*.java

Of course, you need to replace /home/eck/jogl-1.1.1/lib with the correct path to the jar file on your computer. (This example assumes a Linux or Mac OS computer; on Windows, paths look a little different.)


you need to add both jogl.jar and gluegen-rt.jar to the classpath. Furthermore, Java must be able to find the native library files that are also in the lib directory of the Jogl download. For Linux, that directory should be added to the LD_LIBRARY_PATH environment variable; for Mac OS, it should be added to DYLD_LIBRARY_PATH; and for Windows, it should be added to PATH. Here, for example, is a sequence of commands that might be used on Linux to run a class, packagename.MainClass, that uses Jogl:
    JOGL_LIB_DIR="/home/eck/jogl-1.1.1/lib"
    JOGL_JAR_FILES="$JOGL_LIB_DIR/jogl.jar:$JOGL_LIB_DIR/gluegen-rt.jar"
    export LD_LIBRARY_PATH="$JOGL_LIB_DIR"
    java -cp $JOGL_JAR_FILES:. packagename.MainClass

All that typing would quickly get annoying, so you will want to use scripts to do the compiling and running. The source code directory for this book contains two simple scripts, gljavac.sh and gljava.sh, that can be used to compile and run Jogl programs on Mac OS or Linux. You will just need to edit the scripts to use the correct path to the Jogl lib directory. (Sorry, but I do not currently have scripts for Windows.)
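Once the classpath and library path are set up, a quick way to check the installation is to compile and run a tiny program that just opens a window containing an OpenGL drawing surface. The following is a minimal sketch of my own (the class name and window size are invented; it uses the Jogl 1.1.1 package javax.media.opengl):

    import javax.media.opengl.GLCanvas;
    import javax.swing.JFrame;

    public class JoglSetupTest {
        public static void main(String[] args) {
            JFrame window = new JFrame("Jogl Setup Test");
            window.add(new GLCanvas());   // An OpenGL drawing surface.
            window.setSize(400, 300);
            window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            window.setVisible(true);      // If a window opens, Jogl was found.
        }
    }

If the jar files or the native libraries are not found, running this program will fail with an error such as a NoClassDefFoundError or an UnsatisfiedLinkError, which tells you which part of the setup to fix.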

2.2  Into the Third Dimension  (online)

As we move our drawing into the third dimension, we have to deal with the addition of a third coordinate, a z-axis in addition to an x-axis and a y-axis. There’s quite a bit more to it than that, however, since our goal is not just to define shapes in 3D. The goal is to make 2D images of 3D scenes that look like pictures of three-dimensional objects. Take a look, for example, at this picture of a set of three coordinate axes:

It’s not too hard to see this as a picture of three arrows pointing in different directions in space. If we just drew three straight lines to represent the axes, it would look like nothing more than three lines on a 2D surface. In the above picture, on the other hand, each axis is actually built from a long, thin cylinder and a cone. Furthermore, the shapes are shaded in a way that imitates the way that light would reflect from curved shapes. Without this simulated lighting, the shapes would just look like flat patches of color instead of curved 3D shapes. (This picture was drawn with a short OpenGL program. Take a look at the sample program Axes3D.java if you are interested.)


In this section and in the rest of the chapter, we will be looking at some of the fundamental ideas of working with OpenGL in 3D. Some things will be simplified and maybe even oversimplified as we try to get the big picture. Later chapters will fill in more details.

2.2.1  Coordinate Systems

Our first task is to understand 3D coordinate systems. In 3D, the x-axis and the y-axis lie in a plane called the xy-plane, and the z-axis is perpendicular to the xy-plane. But you see that we already have a problem: The origin, (0,0,0), divides the z-axis into two parts. One of these parts is the positive direction of the z-axis, and we have to decide which one. In fact, either choice will work.

In OpenGL, the default coordinate system identifies the xy-plane with the computer screen, with the positive direction of the x-axis pointing to the right and the positive direction of the y-axis pointing upwards. The z-axis is perpendicular to the screen. The positive direction of the z-axis points out of the screen, towards the viewer, and the negative z-axis points into the screen. This is a right-handed coordinate system: If you curl the fingers of your right hand in the direction from the positive x-axis to the positive y-axis, then your thumb points in the direction of the positive z-axis. (In a left-handed coordinate system, you would use your left hand in the same way to select the positive z-axis.)

This is only the default coordinate system. Just as in 2D, you can set up a different coordinate system for use in drawing; OpenGL will transform everything into the default coordinate system before drawing it. In fact, there are several different coordinate systems that you should be aware of. These coordinate systems are connected by a series of transforms from one coordinate system to another.

The coordinates that you actually use for drawing an object are called object coordinates. The object coordinate system is chosen to be convenient for the object that is being drawn. A modeling transformation can then be applied to set the size, orientation, and position of the object in the overall scene (or, in the case of hierarchical modeling, in the object coordinate system of a larger, more complex object). The coordinates in which you build the complete scene are called world coordinates. These are the coordinates for the overall scene, the imaginary 3D world that you are creating.

Once we have this world, we want to produce an image of it. In the real world, what you see depends on where you are standing and the direction in which you are looking. That is, you can't make a picture of the scene until you know the position of the "viewer" and where the viewer is looking (and, if you think about it, how the viewer's head is tilted). For the purposes of OpenGL, we imagine that the viewer is attached to their own individual coordinate system, which is known as eye coordinates. In this coordinate system, the viewer is at the origin, (0,0,0), looking in the direction of the negative z-axis, and the positive direction of the y-axis is pointing straight up. This is a viewer-centric coordinate system, and it's important because it determines what exactly is seen in the image. In other words, eye coordinates are (almost) the coordinates that you actually want to use for drawing on the screen. The transform from world coordinates to eye coordinates is called the viewing transform.

If this is confusing, think of it this way: We are free to use any coordinate system that we want on the world. Eye coordinates are the natural coordinate system for making a picture of the world as seen by a viewer. If we used a different coordinate system (world coordinates) when building the world, then we have to transform those coordinates to eye coordinates to find out what the viewer actually sees.
That transformation is the viewing transform.
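One common way to state a viewing transform directly in terms of the viewer — not the approach used in this chapter's examples, which set up the view with glOrtho and the Camera class below — is the GLU utility library's gluLookAt method. A hedged sketch, using the Jogl 1.1.1 GLU class:

    import javax.media.opengl.glu.GLU;

    GLU glu = new GLU();
    // Viewer at (0,0,10), looking toward the origin, with "up" along the +y axis.
    // This affects the current matrix, which should be the modelview matrix,
    // and is normally applied right after glLoadIdentity in the display method.
    glu.gluLookAt( 0, 0, 10,    // (eyeX, eyeY, eyeZ): the position of the viewer
                   0, 0, 0,     // (centerX, centerY, centerZ): the point looked at
                   0, 1, 0 );   // (upX, upY, upZ): the up direction

The numbers here are illustrative values of mine; gluLookAt itself is covered in the next chapter.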


Note, by the way, that OpenGL doesn't keep track of separate modeling and viewing transforms. They are combined into a single modelview transform. In fact, although the distinction between modeling transforms and viewing transforms is important conceptually, the distinction is for convenience only. The same transform can be thought of as a modeling transform, placing objects into the world, or a viewing transform, placing the viewer into the world. In fact, OpenGL doesn't even use world coordinates internally—it goes directly from object coordinates to eye coordinates by applying the modelview transformation.

We are not done. The viewer can't see the entire 3D world, only the part that fits into the viewport, the rectangular region of the screen or other display device where the image will be drawn. We say that the scene is clipped by the edges of the viewport. Furthermore, in OpenGL, the viewer can see only a limited range of z-values. Objects with larger or smaller z-values are also clipped away and are not rendered into the image. (This is not, of course, the way that viewing works in the real world, but it's required by the way that OpenGL works internally.) The volume of space that is actually rendered into the image is called the view volume. Things inside the view volume make it into the image; things that are not in the view volume are clipped and cannot be seen.

For purposes of drawing, OpenGL applies a coordinate transform that maps the view volume onto a cube. The cube is centered at the origin and extends from −1 to 1 in the x-direction, in the y-direction, and in the z-direction. The coordinate system on this cube is referred to as normalized device coordinates. The transformation from eye coordinates to normalized device coordinates is called the projection transformation. At this point, we haven't quite projected the 3D scene onto a 2D surface, but we can now do so simply by discarding the z-coordinate.

We still aren't done. In the end, when things are actually drawn, there are device coordinates, the 2D coordinate system in which the actual drawing takes place on a physical display device such as the computer screen. Ordinarily, in device coordinates, the pixel is the unit of measure. The drawing region is a rectangle of pixels; this is the rectangle that we have called the viewport. The viewport transformation takes x and y from the normalized device coordinates and scales them to fit the viewport.

Let's go through the sequence of transformations one more time. Think of a primitive, such as a line or polygon, that is part of the world and that might appear in the image that we want to make of the world. The primitive goes through the following sequence of operations:

1. The points that define the primitive are specified in object coordinates, using methods such as glVertex3f.

2. The points are first subjected to the modelview transformation, which is a combination of the modeling transform that places the primitive into the world and the viewing transform that maps the primitive into eye coordinates.

3. The projection transformation is then applied to map the view volume that is visible to the viewer onto the normalized device coordinate cube. If the transformed primitive lies outside that cube, it will not be part of the image, and the processing stops. If part of the primitive lies inside and part outside, the part that lies outside is clipped away and discarded, and only the part that remains is processed further.

4.
Finally, the viewport transform is applied to produce the device coordinates that will actually be used to draw the primitive on the display device. After that, it’s just a matter of deciding how to color individual pixels to draw the primitive on the device.
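In code, the first two steps of this pipeline show up as the order of operations in a typical display method: the viewing transform is applied to the modelview matrix first, and the modeling transforms for each object come after it. A hedged sketch (the specific transforms and drawObject are placeholders of mine):

    gl.glMatrixMode(GL.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0, 0, -10);   // Viewing transform: in effect, moves the whole
                                  // world 10 units away from the viewer along -z.
    gl.glPushMatrix();
    gl.glTranslatef(2, 0, 0);     // Modeling transform for one object.
    gl.glRotatef(30, 0, 1, 0);
    drawObject(gl);               // Placeholder for code that generates the geometry.
    gl.glPopMatrix();

Because the two kinds of transform are multiplied into the same modelview matrix, OpenGL itself makes no distinction between them; only the order in which you issue them does.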

∗ ∗ ∗

All this still leaves open the question of how you actually work with this complicated series of transformations. The good news is that OpenGL will do all the calculations that are required to implement the transformations; you just have to set them up.

The viewport is set by calling gl.glViewport(x,y,width,height), where (x,y) is the lower left corner of the rectangle that you want to use for drawing, width is the width of the rectangle, and height is the height. These values are given in device (pixel) coordinates. Note that in OpenGL device coordinates, the minimal y value is at the bottom, and y increases as you move up; this is the opposite of the convention in Java Graphics2D. With Jogl, the viewport is automatically set to what is usually the correct value, that is, to use the entire available drawing area as the viewport. This is done just before the reshape method of the GLEventListener is called. It is possible, however, to set up a different viewport by calling gl.glViewport yourself. You might use this, for example, to draw two or more views of the same scene in different parts of the drawing area: you would use glViewport to set up a viewport in part of the drawing area, and draw the first view of the scene; you would then use glViewport again to set up a different viewport in another part of the drawing area, and draw the scene again, from a different point of view.

Internally, transforms are represented as matrices (two-dimensional arrays of numbers), and OpenGL uses the terms projection matrix and modelview matrix instead of projection transform and modelview transform. OpenGL keeps track of these two matrices separately, but it only lets you work on one or the other of them at a time. You select the matrix that you want to work on by setting the value of an OpenGL state variable. To select the projection matrix, that is, to work with the projection transform, use gl.glMatrixMode(GL.GL_PROJECTION). To select the modelview matrix, use gl.glMatrixMode(GL.GL_MODELVIEW). All operations that affect the transform, such as glLoadIdentity, glRotatef, glScalef, glTranslatef, and glPushMatrix, affect only the currently selected matrix. (OpenGL keeps separate stacks of matrices for use with glPushMatrix and glPopMatrix, one for use with the projection matrix and one for use with the modelview matrix.)

The projection matrix is used to establish the view volume, the part of the world that is rendered onto the display. The view volume is expressed in eye coordinates, that is, from the point of view of the viewer. (Remember that the projection transform is the transform from eye coordinates onto the standard cube that is used for normalized device coordinates.) OpenGL has two methods for setting the view volume, glOrtho and glFrustum. These two methods represent two different kinds of projection, orthographic projection and perspective projection. Although glFrustum gives more realistic results, it's harder to understand, and we will put it aside until the next chapter. In fact, it can be clumsy to get the exact view that you want using either of these methods, and in the next chapter we'll look at more convenient ways to set up the view. For now, we consider a simple version of glOrtho, and we will just use the default view, in which the viewer is on the z-axis, looking in the direction of the negative z-axis. The modelview transform is a combination of modeling and viewing transforms; modeling is done by applying the basic transform methods glScalef, glRotatef, and glTranslatef (or their double precision equivalents). These methods can also be used for viewing.

We assume that we want to view objects that lie in a cube centered at the origin. If the cube stretches from −s to s in the x, y, and z directions, then we can use glOrtho to establish this cube as the view volume. Remember that we must call glMatrixMode to switch to the projection matrix, and after setting up the projection, we call glMatrixMode again to switch back to the modelview matrix, so that all further transform operations will affect the modelview transform. For example:

    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrtho(-s, s, -s, s, -s, s);
    gl.glMatrixMode(GL.GL_MODELVIEW);

Note that even when drawing in 2D, you need to set up a projection transform; otherwise, you can only use coordinates between −1 and 1.
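As a concrete illustration of the two-view idea mentioned above, a display method might divide the drawing area in half. This is only a sketch; drawScene is a placeholder of mine for whatever code draws the scene:

    int w = drawable.getWidth() / 2;    // Half the width of the drawing area.
    int h = drawable.getHeight();
    gl.glViewport(0, 0, w, h);          // Left half of the drawing area.
    drawScene(gl);                      // Draw the first view.
    gl.glViewport(w, 0, w, h);          // Right half of the drawing area.
    // ... change the viewing transform here for a different point of view ...
    drawScene(gl);                      // Draw the second view.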

When setting up the projection matrix for 2D drawing, you can use glOrtho to specify the range of x and y coordinates that you want to use; the range of z coordinates is not important, as long as it includes 0. For example:

    gl.glOrtho(xmin, xmax, ymin, ymax, -1, 1);

For 3D drawing, the projection can be set up most easily in the reshape method, where we know the aspect ratio of the viewport. In general, we want the view volume to have the same aspect ratio as the viewport, so we need to expand the view volume in either the x or the y direction to match the aspect ratio of the viewport. Putting this all together, we get the following reshape method:

    public void reshape(GLAutoDrawable drawable,
                              int x, int y, int width, int height) {
        GL gl = drawable.getGL();
        double s = 1.5;  // Limits of cube that we want to view go from -s to s.
        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();   // Start with the identity transform.
        if (width > height) {  // Expand x limits to match viewport aspect ratio.
            double ratio = (double)width/height;
            gl.glOrtho(-s*ratio, s*ratio, -s, s, -s, s);
        }
        else {  // Expand y limits to match viewport aspect ratio.
            double ratio = (double)height/width;
            gl.glOrtho(-s, s, -s*ratio, s*ratio, -s, s);
        }
        gl.glMatrixMode(GL.GL_MODELVIEW);
    }

This method is used in the sample program Axes3D.java. Still, it would be nice to have a better way to set up the view, and so I've written a helper class that you can use without really understanding how it works. The class is Camera.java, and you can find it in the source package glutil along with several other utility classes that I've written. A Camera object takes responsibility for setting up both the projection transform and the view transform. By default, it uses a perspective projection, and the region with x, y, and z limits from −5 to 5 is in view. To use the camera, you should call camera.apply(gl) at the beginning of the display method to set up the projection and view. If camera is a Camera, you can change the limits on the view volume by calling camera.setScale(s); this will change the x, y, and z limits to range from −s to s. When a Camera is used, there is no need to define the reshape method. Cameras are used in the remaining sample programs in this chapter.
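Based on the Camera methods just described, a program that uses the class might be organized as in the following sketch (the field name and scale value are mine):

    private Camera camera = new Camera();   // From the glutil package.

    public void init(GLAutoDrawable drawable) {
        camera.setScale(5);   // View region with x, y, and z limits from -5 to 5.
    }

    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        camera.apply(gl);     // Sets up both the projection and the view.
        // ... draw the scene ...
    }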

2.2.2  Essential Settings

Making a realistic picture in 3D requires a lot of computation. Certain features of this computation are turned off by default, since they are not always needed and certainly not for 2D drawing. When working in 3D, there are a few essential features that you will almost always want to enable. (Note that any feature that can be enabled can also be disabled, and you might even want to do so in some cases. In the future, I won't mention this explicitly.)

Perhaps the most essential feature for 3D drawing is the depth test. When one object lies behind another, only one of the objects can be seen, and which is seen depends on which one is closer to the viewer. "Depth" here really means distance from the user, and it is essentially the z-coordinate of the object, expressed in eye coordinates. By default, OpenGL always draws objects in the order in which they are generated in the code. If you don't enable the depth test, then things might appear in your picture that should really be hidden behind other objects, since what is visible depends on the order in which the objects are drawn rather than on which is closer to the viewer. The depth test is enabled by calling

    gl.glEnable(GL.GL_DEPTH_TEST);

Often, you will do this in the init method. (You can disable the depth test by calling gl.glDisable(GL.GL_DEPTH_TEST).)

In order to implement the depth test, OpenGL uses a depth buffer. This buffer stores one value for each pixel, which represents the eye-coordinate z-value of the object, if any, that is drawn at that pixel. (There is a particular value that represents "no object here yet.") When OpenGL draws an object, for each pixel that it draws, it first tests whether there is already another object at that pixel that is closer to the viewer than the object that is being drawn. If so, it leaves the pixel unchanged, since the object that is being drawn is hidden from sight by the object that is already there. This is the depth test.

To use the depth test correctly, it's not enough to enable it. Before you draw anything, the depth buffer must be set up to record the fact that no objects have been drawn yet. This is called clearing the depth buffer. You can do this by calling glClear with a flag that indicates that it's the depth buffer that you want to clear:

    gl.glClear(GL.GL_DEPTH_BUFFER_BIT);

This should be done at the beginning of the display method, at the same time that you clear the color buffer. Often, the two operations are combined into one method call, by "or-ing" together the flags for the two buffers:

    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);

This might be faster than clearing the two buffers separately, depending on how the clear operations are implemented by the graphics hardware.

There is one issue with the depth test that you should be aware of: What happens when two objects are actually at the same distance from the user? Suppose, for example, that you draw one square inside another, lying in the same plane. Does the second square appear or not? You might expect that the second square that is drawn would appear, as it would if the depth test were disabled. The truth is stranger. Because of the inevitable inexactness of real-number computations, the computed z-values for the two squares at a given pixel might be different—and which one is greater might be different for different pixels, so there is no telling which square will end up on top at a given pixel. Here is a real example, in which a black square is drawn inside a white square, which is inside a gray square. The whole picture

is rotated a bit to force OpenGL to do some real-number computations. The picture on the left is drawn with the depth test enabled, while it is disabled for the picture on the right:

When the depth test is applied, the ambiguity shows up where the squares overlap. A possible solution would be to move the white square a little bit forward and the black square forward a little bit more—not so much that the change will be visible, but enough to clear up the ambiguity about which square is in front. This will enable the depth test to produce the correct result for each pixel.

∗ ∗ ∗

The depth test ensures that the right object is visible at each pixel, but it's not enough to make a 3D scene look realistic. For that, you usually need to simulate lighting of the scene. Lighting is disabled by default. It can be enabled by calling

    gl.glEnable(GL.GL_LIGHTING);

This is possibly the single most significant command in OpenGL. Turning on lighting changes the rendering algorithm in fundamental ways. For one thing, if you turn on lighting and do nothing else, you won't see much of anything! This is because, by default, no lights are turned on, so there is no light to illuminate the objects in the scene. You need to turn on at least one light:

    gl.glEnable(GL.GL_LIGHT0);

This command turns on light number zero, which by default is a white light that shines on the scene from the direction of the viewer. It is sufficient to give decent, basic illumination for many scenes. It's possible to add other lights and to set properties of lights such as color and position, but we will leave that for Chapter 4.

Turning on lighting has another major effect: If lighting is on, then the current drawing color, as set for example with glColor3f, is not used in the rendering process. Instead, the current material is used. Material is more complicated than color, and there are special commands for setting material properties. We will consider material properties in Chapter 4; for now, note that the default material is a rather ugly light gray.

To get the best effect from lighting, you will also want to use the following command:

    gl.glShadeModel(GL.GL_SMOOTH);

This has to do with the way that interiors of polygons are filled in. When drawing a polygon, OpenGL does many calculations, including lighting calculations, only at the vertices of the polygons. OpenGL computes a color for each vertex, taking lighting into account if lighting is enabled. The results of these calculations are then interpolated to the pixels inside the polygon. This can be much faster than doing the full calculation for each pixel. Given the color information from the vertices, OpenGL then has to decide

how to use the color information from the vertices to color the pixels in the polygon. The default, which is called flat shading, is to simply copy the color from the first vertex of the polygon to every other pixel. This results in a polygon that is a uniform color. A better solution, called smooth shading, smoothly varies the colors from the vertices across the face of the polygon. Setting the shade model to GL_SMOOTH tells OpenGL to use smooth shading; you can return to flat shading by calling gl.glShadeModel(GL.GL_FLAT). (OpenGL does the same thing for lines, interpolating values calculated for the endpoints to the rest of the line.)

Putting together all the essential settings that we have talked about here, we get the following init method for 3D drawing with basic lighting:

    public void init(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClearColor(0, 0, 0, 1);      // Set background color.
        gl.glEnable(GL.GL_DEPTH_TEST);    // Turn on the depth test.
        gl.glEnable(GL.GL_LIGHTING);      // Turn on lighting.
        gl.glEnable(GL.GL_LIGHT0);        // Turn on light number 0.
        gl.glShadeModel(GL.GL_SMOOTH);    // Use smooth shading.
    }

Let's look at a picture that shows some of the effects of these commands. OpenGL has no sphere-drawing command, but we can approximate a sphere with polygons. In this case, a rather small number of polygons is used, giving only a rough approximation of a sphere. Four spheres are shown, rendered with different settings:

In the sphere on the left, only the outlines of the polygons are drawn. This is called a wireframe model. The color of the lines is the current drawing color, which has been set to white using glColor3f. Lighting doesn't work well for lines, so the wireframe model is drawn with lighting turned off. The second sphere from the left is drawn using filled polygons, but with lighting turned off. Since no lighting or shading calculations are done, the polygons are simply filled with the current drawing color; we just see a flat patch of color, and nothing here looks like a 3D sphere. The third and fourth spheres are drawn with lighting turned on, using the default light gray material color. The difference between the two spheres is that flat shading is used for the third sphere, while smooth shading is used for the fourth. You can see that drawing the sphere using lighting and smooth shading gives the most realistic appearance. The realism could be increased even more by using a larger number of polygons to draw the sphere.
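One way to produce a wireframe rendering like the leftmost sphere — not necessarily the way the sample program does it — is OpenGL's polygon mode setting, sketched here:

    gl.glDisable(GL.GL_LIGHTING);   // Lighting doesn't work well for lines.
    gl.glColor3f(1, 1, 1);          // Draw the outlines in white.
    gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE);  // Outlines only.
    // ... draw the object ...
    gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL);  // Back to filled polygons.
    gl.glEnable(GL.GL_LIGHTING);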

∗ ∗ ∗

There are several other common, but not quite so essential, settings that you might use for 3D drawing. As noted above, when lighting is turned on, the color of an object is determined by its material properties. However, if you want to avoid the complications of materials and still be able to use different colors, you can turn on a feature that causes OpenGL to use the current drawing color for the material:

    gl.glEnable(GL.GL_COLOR_MATERIAL);

This causes the basic material color to be taken from the color set by glColor3f or similar commands. (Materials have other aspects besides this basic color, but setting the basic color is often sufficient.) Examples in this chapter and the next will use this feature.

Another useful feature is two-sided lighting. OpenGL distinguishes the two sides of a polygon: one side is the front side, and one is the back side. Which is the front side and which is the back is determined by the order in which the vertices of the polygon are specified when it is drawn; the order is counterclockwise if you are looking at the front face and is clockwise if you are looking at the back face. In the default lighting model, the back faces are not properly lit. This is because in many 3D scenes, the back faces of polygons face the insides of objects and will not be visible in the scene, so the calculation that is required to light them would be wasted. However, if the back sides of some polygons might be visible in your scene, then you can turn on two-sided lighting. This forces the usual lighting calculations to be done for the back sides of polygons, at least when you are drawing those polygons. It adds some significant computational overhead to the rendering process, which is why it is turned off by default. To turn on two-sided lighting, use the command:

    gl.glLightModeli(GL.GL_LIGHT_MODEL_TWO_SIDE, GL.GL_TRUE);

You can turn it off using the same command with parameter GL.GL_FALSE in place of GL.GL_TRUE.

Finally, I will mention the GL_NORMALIZE option, which can be enabled with gl.glEnable(GL.GL_NORMALIZE). This has to do with the "normal vectors" that will be discussed in Section 2.4. (For now, just think of a vector as an arrow that has a length and a direction.) Correct lighting calculations require normal vectors of length 1, but nothing forces you to supply vectors of proper length. Furthermore, scaling transforms are applied to normal vectors as well as to geometry, and they can increase or decrease the lengths of the vectors. (Rotation and translation are OK without this option.) GL_NORMALIZE forces OpenGL to adjust the length of all normal vectors to length 1 before using them in lighting calculations. This adds computational overhead, so the feature is turned off by default. This option should be enabled in two circumstances: if you supply normal vectors that do not have length equal to 1, or if you apply scaling transforms. However, if you fail to turn it on when it is needed, the lighting calculations for your image will be incorrect.
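Collecting these optional settings, the init method from earlier in this section might grow to something like the following sketch; which of the extra lines you actually need depends on your scene:

    public void init(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClearColor(0, 0, 0, 1);
        gl.glEnable(GL.GL_DEPTH_TEST);
        gl.glEnable(GL.GL_LIGHTING);
        gl.glEnable(GL.GL_LIGHT0);
        gl.glShadeModel(GL.GL_SMOOTH);
        gl.glEnable(GL.GL_COLOR_MATERIAL);   // Take material color from glColor*.
        gl.glLightModeli(GL.GL_LIGHT_MODEL_TWO_SIDE, GL.GL_TRUE);  // Light back faces.
        gl.glEnable(GL.GL_NORMALIZE);        // Fix normal lengths after scaling.
    }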

2.3  Drawing in 3D

Finally, we are ready to create some three-dimensional scenes. We will look first at how to create three-dimensional objects and how to place them into the world. This is geometric modeling in 3D, and it is not so different from modeling in 2D; you just have to deal with the extra coordinate.

2.3.1  Geometric Modeling

We have already seen how to use glBegin/glEnd to draw primitives such as lines and polygons in two dimensions. These commands can be used in the same way to draw primitives in three dimensions. To specify a vertex in three dimensions, you can call gl.glVertex3f(x,y,z) or gl.glVertex3d(x,y,z). You can also continue to use glVertex2f and glVertex2d to specify points with z-coordinate equal to zero.

Modeling transformations are an essential part of geometric modeling, and you can use glPushMatrix and glPopMatrix to implement hierarchical modeling, just as in 2D. Scaling and translation are pretty much the same in 3D as in 2D. Rotation is more complicated. Recall that the OpenGL command for rotation is gl.glRotatef(d,x,y,z) (or glRotated), where d is the measure of the angle of rotation in degrees and x, y, and z specify the axis of rotation. The axis passes through (0,0,0) and the point (x,y,z). However, this does not fully determine the rotation, since it does not specify the direction of rotation. Is it clockwise? Counterclockwise? What would that even mean in 3D? The direction of rotation in 3D follows the right-hand rule (assuming that you are using a right-handed coordinate system). To determine the direction of rotation for gl.glRotatef(d,x,y,z), place your right hand at (0,0,0), with your thumb pointing at (x,y,z). The direction in which your fingers curl will be the direction of a positive angle of rotation; a negative angle of rotation will produce a rotation in the opposite direction. When you want to apply a rotation in 3D, you will probably find yourself literally using your right hand to figure out the direction of rotation. (If you happen to use a left-handed coordinate system, you would use your left hand in the same way to determine the positive direction of rotation.)
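For instance, in the default right-handed coordinate system, these sample rotations (my own illustrative values) behave as the right-hand rule predicts:

    gl.glRotatef(90, 0, 0, 1);  // About the z-axis: carries the +x axis onto the +y axis.
    gl.glRotatef(90, 1, 0, 0);  // About the x-axis: carries the +y axis onto the +z axis.
    gl.glRotatef(90, 0, 1, 0);  // About the y-axis: carries the +z axis onto the +x axis.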

Let's do a simple 3D modeling example. Think of a paddle wheel, with a bunch of flat flaps that rotate around a central axis. Three paddle wheels, with different numbers of flaps, rotated by various amounts around the x-axis, are drawn in the following picture:

For this example, we will just draw the flaps, with nothing physical to support them. To produce a model of a paddle wheel, we can start with one flap, drawn as a trapezoid lying in the xy-plane above the origin:

    gl.glBegin(GL.GL_POLYGON);
    gl.glVertex2d(-0.7, 0.2);
    gl.glVertex2d(0.7, 0.2);
    gl.glVertex2d(1, 2);
    gl.glVertex2d(-1, 2);
    gl.glEnd();

We can make the complete wheel by making copies of this flap, rotated by various amounts around the x-axis. If paddles is an integer giving the number of flaps that we want, then the angle between each pair of flaps will be 360/paddles degrees. Recalling that successive transforms are multiplied together, we can draw the whole set of paddles with a for loop:

    for (int i = 0; i < paddles; i++) {
        gl.glRotated(360.0/paddles, 1, 0, 0);
        gl.glBegin(GL.GL_POLYGON);
        gl.glVertex2d(-0.7, 0.2);
        gl.glVertex2d(0.7, 0.2);
        gl.glVertex2d(1, 2);
        gl.glVertex2d(-1, 2);
        gl.glEnd();
    }

The first paddle is rotated by 360/paddles degrees. By the time the second paddle is drawn, two such rotations have been applied, so the second paddle is rotated by 2*(360/paddles). Continuing in this way, the paddles are spread out to cover a full circle. If we want to make an animation in which the paddle wheel rotates, we can apply an additional rotation, depending on the frameNumber, to the wheel as a whole. To draw the set of three paddle wheels, we can write a method to draw a single wheel and then call that method three times, with different colors and translations in effect for each call. You can see how this is done in the source code for the program that produced the above image, PaddleWheels.java. You can find an applet version of the program in the on-line version of this section. By the way, in this sample program, you can use the mouse to rotate the view. The mouse is enabled by an object of type TrackBall, another of the classes defined in the glutil source directory.

2.3.2  Some Complex Shapes

Complex shapes can be built up out of large numbers of polygons, but it's not always easy to see how to do so. OpenGL has no built-in complex shapes, but there are utility libraries that provide subroutines for drawing certain shapes. One of the most basic libraries is GLUT, the OpenGL Utilities Toolkit. GLUT includes methods for drawing spheres, cones, cylinders, and regular polyhedra such as cubes and dodecahedra. The curved shapes are actually just approximations made out of polygons. Jogl includes a partial implementation of GLUT in the class com.sun.opengl.util.GLUT. To use the drawing subroutines, you should create an object, glut, of type GLUT. You will only need one such object for your entire program, and you can create it in the init method. Then, to draw a sphere, for example, you can call

    glut.glutSolidSphere(0.5, 40, 20);

This draws a polyhedral approximation of a sphere, centered at the origin and with its axis lying along the z-axis. The first parameter gives the radius of the sphere. The second gives the number of "slices" (like lines of longitude or the slices of an orange), and the third gives the number of "stacks" (divisions perpendicular to the axis of the sphere, like lines of latitude). Forty slices and twenty stacks give quite a good approximation for a sphere. You can check the GLUT API documentation for more information.

Because of some limitations to the GLUT drawing routines (notably the fact that they don't support textures), I have written a few shape classes of my own. Three shapes—sphere, cylinder, and cone—are defined by the classes UVSphere, UVCylinder, and UVCone in the sample source package glutil. I will use these three shape classes in several examples throughout the rest of this chapter. To create a sphere, you should create an object of type UVSphere, for example:

    sphere = new UVSphere(0.5, 40, 20);

This represents a sphere with radius 0.5, 40 slices, and 20 stacks. The sphere is centered at the origin, with its axis lying along the z-axis. Once you have the object, you can draw the sphere in an OpenGL context gl by saying sphere.render(gl). Cones and cylinders are used in a similar way, except that they have a height in addition to a radius:

    cylinder = new UVCylinder(0.5, 1.5, 40, 20);  // radius, height, slices, stacks

The cylinder is drawn with its base in the xy-plane, centered at the origin, and with its axis along the positive z-axis. By default, both the top and bottom ends of the cylinder are drawn, but you can turn this behavior off by calling cylinder.setDrawTop(false) and/or cylinder.setDrawBottom(false). Using a UVCone is very similar,

except that a cone doesn't have a top, only a bottom.

2.3.3  Optimization and Display Lists

The drawing commands that we have been using work fine, but they have a problem with efficiency. To understand why, you need to understand something of how OpenGL works. The calculations involved in computing and displaying an image of a three-dimensional scene can be immense. Modern desktop computers have graphics cards—dedicated, specialized hardware that can do these calculations at very high speed. In fact, the graphics cards that come even in inexpensive computers today would have qualified as supercomputers a decade or so ago. OpenGL is built into almost all modern graphics cards, and most OpenGL commands are executed on the card's specialized hardware. Using a graphics card to perform 3D graphics operations at high speed is referred to as 3D hardware acceleration. (It is possible, by the way, to implement OpenGL on a computer without any graphics card at all; the computations and drawing can be done, if necessary, by the computer's main CPU. When this is done, graphics operations are much slower—slow enough to make many 3D programs unusable.)

Although the commands can be executed very quickly once they get to the graphics card, there is a problem: The commands, along with all the data that they require, have to be transmitted from your computer's CPU to the graphics card, and the time that it takes to transmit all that information can be a real drag on graphics performance. This can be a significant bottleneck. Newer versions of OpenGL have introduced various techniques that can help to overcome the bottleneck. In this section, we will look at one of these optimization strategies, display lists. Display lists have been part of OpenGL from the beginning.

A display list is a list of graphics commands that can be stored on a graphics card (though there is no guarantee that they actually will be). Once a list has been created, it can be "called," very much like a subroutine. The key point is that calling a list requires only one OpenGL command. The contents of the display list only have to be transmitted once from the CPU to the card. Although the same list of commands still has to be executed, only one command has to be transmitted from the CPU to the graphics card, and the full power of hardware acceleration can be used to execute the commands at the highest possible speed. Display lists are useful when the same sequence of OpenGL commands will be used several times.

One difference between display lists and subroutines is that display lists don't have parameters. This limits their usefulness, since it's not possible to customize their behavior by calling them with different parameter values. However, the effect of calling a display list can depend on the OpenGL state at the time the display list is called, so calling a display list twice can result in two different effects. For example, a display list that generates the geometry for a sphere can draw spheres in different locations, as long as different modeling transforms are in effect each time the list is called. The list can also produce spheres of different colors, as long as the drawing color is changed between calls to the list.

Here, for example, is an image that shows 1331 spheres, arranged in a cube that has eleven spheres along each edge. Each sphere in the picture is a different color and is in a different location. Exactly the same OpenGL commands are used to draw each sphere; in the program that drew the image, the transform and color are changed before drawing each sphere. This is a natural place to use a display list. We can store the commands for drawing one sphere in a display list, and call that list once for each sphere that we want to draw. The graphics commands that draw the sphere only have to be sent to the card once. Then each of the 1331 spheres can be drawn by sending a single command to the card to call the list (plus a few commands to change the color and transform). Note that we are also saving some significant time on the main CPU, since it only has to generate the sphere-drawing commands once instead of 1331 times.

You can try the applet in the on-line version of these notes. You can rotate the cube of spheres by dragging the mouse over the image, and you can turn the use of display lists on and off using a checkbox below the image. You are likely to see a significant change in the time that it takes to render the image, depending on whether or not display lists are used. However, the exact results will depend very much on the OpenGL implementation that you are using. On my computer, with a good graphics card and hardware acceleration, the rendering with display lists is fast enough to produce very smooth rotation. Without display lists, the rendering is about five times slower, and the rotation is a bit jerky but still usable. Without hardware acceleration, the rendering might take so long that you get no feeling at all of natural rotation.

∗ ∗ ∗

I hope that this has convinced you that display lists can be very worthwhile. Fortunately, they are also fairly easy to use.

Remember that display lists are meant to be stored on the graphics card, so it is the graphics card that has to manage the collection of display lists that have been created. Display lists are identified by integer code numbers. If you want to use a display list, you first have to ask the graphics card for an integer to identify the list. (You can't just make one up, since the one that you pick might already be in use.) This is done with a command such as

    listID = gl.glGenLists(1);

The return value is an int which will be the identifier for the list. The parameter to glGenLists is also an int, which tells how many list IDs you want. You can actually ask for several list IDs at once; the list IDs will be consecutive integers, so that if listA is the return value from gl.glGenLists(3), then the identifiers for the three lists will be listA, listA + 1, and listA + 2.

Once you've allocated a list in this way, you can store commands into it. If listID is the ID for the list, you would do this with code of the form:

    gl.glNewList(listID, GL.GL_COMPILE);
      ...  // Generate OpenGL commands to store in the list.
    gl.glEndList();

The parameter GL.GL_COMPILE means that you only want to store commands into the list, not execute them. If you use the alternative parameter GL.GL_COMPILE_AND_EXECUTE, then the commands will be executed immediately as well as stored in the list for later reuse. Note that the previous contents of the list, if any, will be deleted and replaced.

Most, but not all, OpenGL commands can be placed into a display list. Commands for generating geometry, changing the color, applying transforms, and enabling and disabling various features can all be placed into lists. You can even add commands for calling other lists. In addition to OpenGL commands, you can have other Java code between glNewList and glEndList, and you can, of course, generate the commands on the fly, as needed. In particular, you can use a for loop to generate a large number of vertices. But remember that the list only stores OpenGL commands, not the Java code that generates them. If you use a for loop to call glVertex3f 10,000 times, then the display list will contain 10,000 individual glVertex3f commands, not a for loop. Similarly, the list stores only numerical values, not references to variables, as the parameters to these commands. If you call gl.glVertex3d(x,y,z) between glNewList and glEndList, only the values of x, y, and z at the time glVertex3d is called are stored into the list; if you change the values of the variables later, the change has no effect on the list or what it does.

Once you have a list, you can call the list with the command gl.glCallList(listID). The effect of this command is to tell the graphics card to execute a list that it has already stored.
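To illustrate consecutive list IDs, a program might allocate and fill several lists in one go. This is just a sketch; drawShape is a placeholder name of mine for whatever code generates each shape:

    int base = gl.glGenLists(3);   // IDs will be base, base+1, and base+2.
    for (int n = 0; n < 3; n++) {
        gl.glNewList(base + n, GL.GL_COMPILE);
        drawShape(gl, n);          // Placeholder: commands that draw shape number n.
        gl.glEndList();
    }
    // Later, any of the shapes can be drawn with gl.glCallList(base + n).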

A graphics card has only a limited amount of memory for storing display lists, so you shouldn't keep a list around longer than you need it. You can tell the graphics card that a list is no longer needed by calling

    gl.glDeleteLists(listID, 1);

The second parameter in this method call plays the same role as the parameter in glGenLists; that is, it allows you to delete several sequentially numbered lists, although deleting one list at a time is probably the most likely use. Deleting a list when you are through with it allows the graphics card to reuse the memory used by that list. On the other hand, if you are going to reuse a list many times, it can make sense to create it in the init method and keep it around as long as your program will run.

∗ ∗ ∗

As an example of using display lists, you can look at the source code for the program that draws the "cube of spheres," ColorCubeOfSpheres.java. The program uses an object of type UVSphere to draw a sphere. That program is complicated somewhat by the option to use or not use the display list, so let's look at some slightly modified code that always uses lists. The display list is created in the init method, and the UVSphere object is used to generate the commands that go into the list. The integer identifier for the list is displayList:

    UVSphere sphere = new UVSphere(0.4);  // For drawing a sphere of radius 0.4.
    displayList = gl.glGenLists(1);       // Allocate the list ID.
    gl.glNewList(displayList, GL.GL_COMPILE);
    sphere.render(gl);
    gl.glEndList();

When sphere.render(gl) is called, all the OpenGL commands that it generates are channeled into the list, instead of being executed immediately. In the display method, the display list is called 1331 times in a set of triply nested for loops:

    for (int i = 0; i <= 10; i++) {
        float r = i/10.0f;          // Red component of sphere color.
        for (int j = 0; j <= 10; j++) {
            float g = j/10.0f;      // Green component of sphere color.
            for (int k = 0; k <= 10; k++) {
                float b = k/10.0f;  // Blue component of sphere color.
                gl.glColor3f(r,g,b);            // Set the color for the sphere.
                gl.glPushMatrix();
                gl.glTranslatef(i-5,j-5,k-5);   // Translate the sphere.
                gl.glCallList(displayList);     // Call display list to draw sphere.
                gl.glPopMatrix();
            }
        }
    }

The glCallList in this code replaces a call to sphere.render that would otherwise be used to draw the sphere directly.

2.4  Normals and Textures  (online)

OpenGL associates several quantities with every vertex that is generated. These quantities are called attributes of the vertex. One attribute is color. The color that is assigned to a vertex is the current drawing color at the time the vertex is generated, which can be set, for example, with glColor3f. (This color is used only if lighting is off or if the GL_COLOR_MATERIAL option is on.) The drawing color can be changed between calls to glBegin and glEnd, so it is possible to assign a different color to every vertex in a polygon. For example:

    gl.glBegin(GL.GL_POLYGON);
    gl.glColor3f(1, 0, 0);      // The color associated with this vertex is red.
    gl.glVertex2d(-0.5, -0.5);
    gl.glColor3f(0, 1, 0);      // The color associated with this vertex is green.
    gl.glVertex2d(0.5, -0.5);
    gl.glColor3f(0, 0, 1);      // The color associated with this vertex is blue.
    gl.glVertex2d(0, 0.5);
    gl.glEnd();

Assuming that the shade model has been set to GL_SMOOTH, each vertex of this triangle will be a different color, and the colors will be smoothly interpolated to the interior of the triangle.

In this section, we will look at two other important attributes that can be associated with a vertex. One of them, the normal vector, is an essential of lighting calculations. Another, the texture coordinates, is used when applying texture images to surfaces.

2.4.1  Introduction to Normal Vectors

The visual effect of a light shining on a surface depends on the properties of the surface and of the light. But it also depends to a great extent on the angle at which the light strikes the surface. That's why a curved, lit surface looks different at different points, even if its surface is a uniform color. To calculate this angle, OpenGL needs to know the direction in which the surface is facing. That direction can be specified by a vector that is perpendicular to the surface. Another word for "perpendicular" is "normal," and a vector that is perpendicular to a surface at a given point is called a normal to that surface.

A vector in three dimensions is given by three numbers that specify the change in x, the change in y, and the change in z along the vector. A vector can be visualized as an arrow. It has a direction and a length, but it doesn't have a particular location, so you can visualize the arrow as being positioned anywhere you like, such as sticking out of a particular point on a surface. If you position the vector (x,y,z) with its start point at the origin, then the end point of the vector is at the point with coordinates (x,y,z).

When used in lighting calculations, a normal vector must have length equal to one. A normal vector of length one is called a unit normal. For proper lighting calculations in OpenGL, a unit normal must be specified for each vertex. (Actually, if you turn on the option GL_NORMALIZE, then you can specify normal vectors of any length, and OpenGL will convert them to unit normals for you; see Subsection 2.2.2.)

Just as OpenGL keeps track of a current drawing color, it keeps track of a current normal vector, which is part of the OpenGL state. The current normal vector can be set by calling gl.glNormal3f(x,y,z) or gl.glNormal3d(x,y,z). This can be done at any time, including between calls to glBegin and glEnd. When a vertex is generated, the value of the current normal vector is copied and is associated to that vertex as an attribute. This means that it's possible for different vertices of a polygon to have different associated normal vectors.

Now, you might be asking yourself, "Don't all the normal vectors to a polygon point in the same direction?" After all, a polygon is flat; the perpendicular direction to the polygon doesn't change from point to point. This is true, and if your objective is to display a polyhedral object whose sides are flat polygons, then in fact, all the normals of one of those polygons should point in the same direction. On the other hand, polyhedra are often used to approximate curved surfaces such as spheres. If your real objective is to make something that looks like a curved surface, then you want to use normal vectors that are perpendicular to the actual surface, not to the polyhedron that approximates it. Take a look at this example:

The two objects in this picture are made up of bands of rectangles. The two objects have exactly the same geometry, yet they look quite different. This is because different normal vectors are used in each case. For the top object, I was thinking of an object that really is a band of rectangles, and I used normal vectors that were actually perpendicular to the rectangles. There is an abrupt change in direction as you move from one rectangle to the next, so where one rectangle meets the next, the normal vectors to the two rectangles are different. The visual effect on the rendered image is an abrupt change in shading that is perceived as a corner or edge between the two rectangles.

For the object on the bottom, on the other hand, I was using the band of rectangles to approximate a smooth surface (part of a cylinder, in fact). The vertices of the rectangles are points on that surface, and I really didn't want to see the rectangles at all—I wanted to see the curved surface, or at least a good approximation. So when I specified a normal vector at one of the vertices, I used a vector that is perpendicular to the surface rather than one perpendicular to the rectangle. When two rectangles share a vertex, they also share the same normal at that vertex. Visually, this eliminates the abrupt change in shading, resulting in something that looks more like a smoothly curving surface.

Here's a two-dimensional illustration that shows the normal vectors: The thick blue lines represent the rectangles; imagine that you are looking at them edge-on. The arrows represent the normal vectors, two for each rectangle, one on each end. In the top half of this illustration, the vectors are actually perpendicular to the rectangles. In the bottom half, on the other hand, the vectors are perpendicular to a curved surface that passes through the endpoints of the rectangles.
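For a surface like this cylinder band, the smooth normals are easy to compute: at a point on a cylinder of radius 1 around the z-axis, the outward unit normal is just (cos θ, sin θ, 0). A sketch of my own showing how the bottom object's geometry might be generated (sides is the number of rectangles in the band):

    gl.glBegin(GL.GL_QUAD_STRIP);
    for (int i = 0; i <= sides; i++) {
        double angle = (2*Math.PI/sides) * i;
        double x = Math.cos(angle);
        double y = Math.sin(angle);
        gl.glNormal3d(x, y, 0);   // Unit normal points straight out from the axis;
                                  // shared by the two rectangles that meet here.
        gl.glVertex3d(x, y, 1);   // Top edge of the band.
        gl.glVertex3d(x, y, -1);  // Bottom edge of the band.
    }
    gl.glEnd();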

The upshot of this is that in OpenGL, a normal vector at a vertex is whatever you say it is, and it does not have to be literally perpendicular to your polygons. The normal vector that you choose should depend on the object that you are trying to model.

There is one other issue in choosing normal vectors: There are always two possible unit normal vectors at a vertex, pointing in opposite directions. Recall that a polygon has two faces, a front face and a back face, which are distinguished by the order in which the vertices are generated. (See Section 2.2.) A normal vector should point out of the front face of the polygon. That is, when you are looking at the front face of a polygon, the normal vector should be pointing towards you. If you are looking at the back face, the normal vector should be pointing away from you.

Unfortunately, it's not always easy to compute normal vectors, and it can involve some non-trivial math. I'll have more to say about that in Chapter 4. For now, you should at least understand that the solid surfaces that are defined in the GLUT library come with the correct normal vectors already built-in. So do the shape classes in my glutil package.

We can look at one simple case of supplying normal vectors by hand: drawing a cube. Let's say that we want to draw a cube centered at the origin, where the length of each side of the cube is one. Let's start with a method to draw the face of the cube that is perpendicular to the z-axis with center at (0,0,0.5). The unit normal to that face is (0,0,1), which points directly out of the screen along the z-axis:

    private static void drawFace1(GL gl) {
        gl.glBegin(GL.GL_POLYGON);
        gl.glNormal3d(0, 0, 1);          // Unit normal vector, which applies to all vertices.
        gl.glVertex3d(-0.5, -0.5, 0.5);  // The vertices, in counterclockwise order.
        gl.glVertex3d(0.5, -0.5, 0.5);
        gl.glVertex3d(0.5, 0.5, 0.5);
        gl.glVertex3d(-0.5, 0.5, 0.5);
        gl.glEnd();
    }

We could draw the other faces similarly. For example, the bottom face, which has normal vector (0,−1,0), could be created with:

    gl.glBegin(GL.GL_POLYGON);
    gl.glNormal3d(0, -1, 0);             // Unit normal vector for the bottom face.
    gl.glVertex3d(-0.5, -0.5, -0.5);
    gl.glVertex3d(0.5, -0.5, -0.5);
    gl.glVertex3d(0.5, -0.5, 0.5);
    gl.glVertex3d(-0.5, -0.5, 0.5);
    gl.glEnd();

However, getting all the vertices correct and in the right order is by no means trivial. Another approach is to use the method for drawing the front face for each of the six faces, with appropriate rotations to move the front face into the desired position:

gl. a brick wall or a plaid couch.glRotated( -90. // rotate -90 degrees about the x-axis drawFace1(gl). // the left face gl. 1. 1.glPopMatrix().glPopMatrix(). but they are a little bland. gl. 0. but it really does require a lot less work! 2. // the back face gl. // rotate 90 degrees about the y-axis drawFace1(gl).4. 0). A texture—or at least the kind of texture that we consider here—is a 2D image that can be applied to the surface of a 3D object.glPopMatrix(). gl. } Whether this looks easier to you might depend on how comfortable you are with transformations.glPushMatrix().glPushMatrix(). // the right face gl.glPushMatrix(). say. gl. BASICS OF OPENGL AND JOGL 54 gl. 0).glRotated( 90. 1. gl.2 Introduction to Textures The 3D objects that we have created so far look nice enough. Here is a picture that shows six objects with various textures: . 1.glRotated( -90.glPopMatrix().glPushMatrix(). // the top face gl. gl.glPopMatrix().CHAPTER 2. // rotate -90 degrees about the y-axis drawFace1(gl). 0. 0). Their uniform colors don’t have the visual appeal of. // rotate 180 degrees about the x-axis drawFace1(gl). 0). Threedimensional objects can be made to look more interesting and more realistic by adding a texture to their surfaces. gl. gl. 0.glRotated( 180. 0.

(Topographical Earth image, courtesy NASA/JPL-Caltech, http://maps.jpl.nasa.gov/. EarthAtNight image taken from the Astronomy Picture of the Day web site, http://apod.nasa.gov/apod/ap001127.html; it is also a NASA/JPL image. Brick and metal textures from http://www.grsites.com/archive/textures/. Some nicer planetary images for use on a sphere can be found at http://evildrganymede.net/art/maps.htm and http://planetpixelemporium.com/earth.html; they are free for you to use, but I can't distribute them on my web site.)

This picture comes from the sample program TextureDemo.java. You can try the applet version in the on-line version of this section. In the applet version, you can use the mouse to rotate each individual object by dragging on it; the rotation makes it look even more realistic.

Textures are one of the more complex areas in the OpenGL API, both because of the number of options involved and because of the new features that have been introduced in various versions. We will cover only a few of the many options. The Jogl API has a few classes that make textures easier to use; for now, we will work mostly with the Jogl classes. You should understand that the Jogl texture classes are not themselves part of the OpenGL API. In Jogl, the Texture class represents 2D images that can be used as textures. (This class and other texture-related classes are defined in the package com.sun.opengl.util.texture.)

To use a texture in OpenGL, you first of all need an image. To create a Texture object, you can use the helper class TextureIO, which contains several static methods for reading images from various sources. For example, if img is a BufferedImage object, you can create a texture from that image with

    Texture texture = TextureIO.newTexture( img, true );

This means that you can create a BufferedImage any way you like, even by creating the image from scratch using Java's drawing commands, and then use that image as a texture in OpenGL. (The second parameter is a boolean value that you don't need to understand just now; it has to do with optimizing the texture to work on objects of different sizes.)

More likely, you will want to read the image from a file or from a program resource. TextureIO has methods that can do that automatically. For example, if file is of type java.io.File and represents a file that contains an image, you can create a texture from that image with

    Texture texture = TextureIO.newTexture( file, true );

The case of reading an image from a resource is probably more common. A resource is, more or less, a file that is part of your program; it might be packaged into the same jar file as the rest of your program, for example. I can't give you a full tutorial on using resources here, but the idea is to place the image files that you want to include as resources in some package (that is, folder) in your project. Let's say that you store the images in a package named textures, and let's say that "brick001.jpg" is the name of one of the image files in that package. Then, to retrieve and use the image as a texture, you can use the following code in any instance method in your program:

    Texture texture;
    try {
        URL textureURL;  // URL class from package java.net.
        textureURL = getClass().getClassLoader().getResource("textures/brick001.jpg");
        texture = TextureIO.newTexture(textureURL, true, "jpg");
           // The third param above is the extension from the file name;
           // "jpg", "png", and probably "gif" files should be OK.
    }
    catch (Exception e) {
        // Won't get here if the file is properly packaged with the program!
        e.printStackTrace();
    }

The textures used in the sample program are in the folder named textures in the source directory.

    // Won't get here if the file is properly packaged with the program!
    e.printStackTrace();
}

All this makes it reasonably easy to prepare images for use as textures. But there are a couple of issues. First of all, in the original versions of OpenGL, the width and height of texture images had to be powers of two, such as 128, 256, 512, or 1024. Although this limitation has been eliminated in more recent versions, it's probably still a good idea to resize your texture images, if necessary, so that their dimensions are powers of two.

Another issue is that Java and OpenGL have different ideas about where the origin of an image should be: Java puts it at the top left corner, while OpenGL puts it at the bottom left. This means that images loaded by Java might be upside down when used as textures in OpenGL. (The Texture class has a way of dealing with this, but it complicates things.) It's easy enough to flip an image vertically if necessary, and Jogl even provides a way of doing this for a BufferedImage. Here's some alternative code that you can use to load an image resource as a texture, if you want to flip the image:

Texture texture;
try {
    URL textureURL;
    textureURL = getClass().getClassLoader().getResource(textureFileName);
    BufferedImage img = ImageIO.read(textureURL);  // read file into BufferedImage
    ImageUtil.flipImageVertically(img);
    texture = TextureIO.newTexture(img, true);
}
catch (Exception e) {
    e.printStackTrace();
}

∗ ∗ ∗

We move on now to the question of how to actually use textures in a program. Once you have a Texture object, three things are necessary to apply that texture to an object: The object must have texture coordinates that determine how the image will be mapped onto the object. You must enable texturing, or else textures are simply ignored. And you have to specify which texture is to be used.

Two of these are easy. To enable texturing with a 2D image, you just have to call the method

texture.enable();

where texture is any object of type Texture. It's important to understand that this is not selecting that particular texture for use; it's just turning on texturing in general. In fact, it doesn't even matter which Texture object you use to call the enable method. (This is true at least as long as you stick to texture images whose dimensions are powers of two.) To turn texturing off, call texture.disable(). If you are planning to texture all the objects in your scene, you could enable texturing once, in the init method, and leave it on. Otherwise, you can enable and disable texturing as needed.

Note also that textures work best if the material of the textured object is pure white, since the colors from the texture are actually multiplied by the material color, and if the color is not white then the texture will be "tinted" with the material color. To get white-colored objects, you can use glColor3f and the GL_COLOR_MATERIAL option, as discussed in Subsection 2.2.2.
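To pull these pieces together, here is a minimal sketch of how texture setup might look in an init method. This is my own illustration, not one of the book's sample programs; the instance variable texture and the resource name "textures/brick001.jpg" are assumptions carried over from the examples above:

// A sketch only: assumes an instance variable "Texture texture" and
// an image resource "textures/brick001.jpg" packaged with the program.
public void init(GLAutoDrawable drawable) {
    GL gl = drawable.getGL();
    // Use a pure white material, so that texture colors are not tinted.
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE,
                            new float[] { 1, 1, 1, 1 }, 0);
    try {
        URL textureURL = getClass().getClassLoader()
                                   .getResource("textures/brick001.jpg");
        texture = TextureIO.newTexture(textureURL, true, "jpg");
        texture.enable();   // Turn texturing on once and leave it on.
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}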

To get texturing to work, you must also specify that some particular texture is to be used. This is called binding the texture, and you can do it by calling texture.bind(), where texture is the Texture object that you want to use. That texture will be used until you bind a different texture, but of course only when texturing is enabled. If you're just using one texture, you could bind it in the init method; it will be used whenever texturing is enabled. If you are using several textures, you can bind different textures in different parts of the display method. Just remember: Call texture.enable() and texture.bind() before drawing the object that you want to texture. Call texture2.bind() if you want to switch to using a different texture, texture2. Call texture.disable() to turn texturing off.

That leaves us with the problem of texture coordinates for objects to which you want to apply textures. A texture image comes with its own 2D coordinate system. Traditionally, s is used for the horizontal coordinate on the image and t is used for the vertical coordinate. s is a real-number coordinate that ranges from 0 on the left of the image to 1 on the right, while t ranges from 0 at the bottom to 1 at the top. Values of s or t outside of the range 0 to 1 are not inside the image. Coordinates on the image expressed in terms of s and t are called texture coordinates. To apply a texture to a polygon, you have to say which point in the texture should correspond to each vertex of the polygon. This is done by calling gl.glTexCoord2d(s,t). (Or you can use glTexCoord2f.) You can call this method just before generating the vertex. Usually, every vertex of a polygon will have different texture coordinates.

For example, suppose that we want to apply part of the EarthAtNight image to a triangle. Here's the area in the image that I want to map onto the triangle, shown outlined in thick orange lines:

The vertices of this area have (s,t) coordinates (0.3,0.05), (0.45,0.05), and (0.25,0.7). When I generate the vertices of the triangle that I want to draw, I have to specify the corresponding texture coordinate in the image. To draw the triangle in this case, I could say:

gl.glBegin(GL.GL_POLYGON);
gl.glNormal3d(0,0,1);
gl.glTexCoord2d(0.3,0.05);   // Texture coords for vertex (0,0)
gl.glVertex2d(0,0);
gl.glTexCoord2d(0.45,0.05);  // Texture coords for vertex (1,0)
gl.glVertex2d(1,0);
gl.glTexCoord2d(0.25,0.7);   // Texture coords for vertex (0,1)
gl.glVertex2d(0,1);
gl.glEnd();

Note that there is no particular relationship between the (x,y) coordinates of a vertex, which give its position in space, and the (s,t) texture coordinates associated with the vertex, which tell what point in the image is mapped to the vertex. In fact, in this case, the triangle that I am drawing has a different shape from the triangular area in the image, and that piece of the image will have to be stretched and distorted to fit. Note that texture coordinates are attributes of vertices. Like other vertex attributes, values are only specified at the vertices; OpenGL will interpolate the values between vertices to calculate texture coordinates for points inside the polygon.

Sometimes, it's difficult to decide what texture coordinates to use. One case where it's easy is applying a complete texture to a rectangle. Here is a method from Cube.java that draws a square in the xy-plane, with appropriate texture coordinates to map the entire image onto the square:

/**
 * Draws a square in the xy-plane, with given radius,
 * where radius is half the length of the side.
 */
private void square(GL gl, double radius) {
    gl.glBegin(GL.GL_POLYGON);
    gl.glNormal3f(0,0,1);
    gl.glTexCoord2d(0,0);
    gl.glVertex2d(-radius,-radius);
    gl.glTexCoord2d(1,0);
    gl.glVertex2d(radius,-radius);
    gl.glTexCoord2d(1,1);
    gl.glVertex2d(radius,radius);
    gl.glTexCoord2d(0,1);
    gl.glVertex2d(-radius,radius);
    gl.glEnd();
}

At this point, you will probably be happy to know that the shapes in the package glutil come with reasonable texture coordinates already defined. To use textures on these objects, you just have to create a Texture object, enable it, and bind it. For a UVSphere, the texture is wrapped once around the sphere. The flat texture has to be distorted to fit onto the sphere, but some textures, using what is called a cylindrical projection, are made to work precisely with this type of texture mapping. The textures that are used on the spheres in the TextureDemo example are of this type. For a UVCylinder, the entire texture wraps once around the cylinder, and circular cutouts from the texture are applied to the top and bottom. For a Cube, a copy of the texture is mapped onto each face.

One last question: What happens if you supply texture coordinates that are not in the range from 0 to 1? It turns out that such values are legal, but exactly what they mean depends on the setting of two parameters in the Texture object. The most desirable outcome is probably that the texture is copied over and over to fill the entire plane, so that using s and t values outside the usual range will simply cause the texture to be repeated. In that case, if you supply texture coordinates (0,0), (2,0), (2,2), and (0,2) for the four vertices of a square, then the square will be covered with four copies of the texture. For a repeating texture, such as a brick wall image, this can be much more effective than stretching a single copy of the image to cover the entire square. This behavior is not the default. To get this behavior for a Texture object tex, you can use the following code:

tex.setTexParameteri(GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);
tex.setTexParameteri(GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT);

There are two parameters to control this behavior because you can enable the behavior separately for the horizontal and the vertical directions.
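As a concrete illustration (my own sketch, not code from the book's examples), the following would tile a unit square with a 2-by-2 grid of copies of a texture tex whose wrap parameters have been set to GL_REPEAT as above; texturing is assumed to be enabled:

// Tile the unit square with four copies of the texture,
// using texture coordinates that run from 0 to 2.
tex.bind();
gl.glBegin(GL.GL_POLYGON);
gl.glNormal3f(0,0,1);
gl.glTexCoord2d(0,0);   gl.glVertex2d(0,0);
gl.glTexCoord2d(2,0);   gl.glVertex2d(1,0);
gl.glTexCoord2d(2,2);   gl.glVertex2d(1,1);
gl.glTexCoord2d(0,2);   gl.glVertex2d(0,1);
gl.glEnd();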
OpenGL, as I have said, has many parameters to control how textures are used; this is just one example of how those parameters can be set using the Texture class.

∗ ∗ ∗

Chapter 3

Geometry

In the previous chapter, we surveyed some of the basic features of OpenGL. So far, everything that we have covered is part of OpenGL 1.1, and so should be universally available. This chapter will fill in details about the geometric aspects of OpenGL, including some of the mathematics behind them. It will cover the full range of geometric primitives and the various ways of specifying them, and it will cover geometric transformations in more depth. This chapter will also introduce some features that were introduced in later versions of OpenGL. It will explain why you might want to use those features and how you can test for their presence before you use them.

3.1  Vectors, Matrices, and Homogeneous Coordinates  (online)

Before moving on with OpenGL, we look at the mathematical background in a little more depth. The mathematics of computer graphics is primarily linear algebra, which is the study of vectors and linear transformations. In this section, we will look at vectors and matrices and at some of the ways that they can be used. The treatment is not very mathematical. The goal is to familiarize you with the properties of vectors and matrices that are most relevant to OpenGL.

A vector, as we have seen, is a quantity that has a length and a direction. A vector can be visualized as an arrow, as long as you remember that it is the length and direction of the arrow that are relevant, and that its specific location is irrelevant. Often, the distinction between a point and a vector is subtle. For example, the 3D point (x,y,z) = (3,4,5) has the same coordinates as the vector (dx,dy,dz) = (3,4,5). For the point, the coordinates (3,4,5) specify a position in space in the xyz coordinate system. For the vector, the coordinates (3,4,5) specify the change in the x, y, and z coordinates along the vector. If we represent the vector with an arrow that starts at the origin (0,0,0), then the head of the arrow will be at (3,4,5). If we visualize a vector V as starting at the origin and ending at a point P, then we can to a certain extent identify V with P, at least to the extent that both V and P have coordinates, and their coordinates are the same. For some purposes, the distinction can be ignored; for other purposes, it is important. Often, all that we have is a sequence of numbers, which we can treat as the coordinates of either a vector or a point at will.

Matrices are rectangular arrays of numbers. A matrix can be used to apply a transformation to a vector (or to a point). The geometric transformations that are so important in computer graphics are represented as matrices.

3.1.1  Vector Operations

We assume for now that we are talking about vectors in three dimensions. Most of the discussion, except for the "cross product," carries over easily into other dimensions. A 3D vector can be specified by a triple of numbers, such as (0,1,0) or (3.7,−12.88,1.02).

One of the basic properties of a vector is its length. In terms of its coordinates, the length of a vector (x,y,z) is given by sqrt(x^2+y^2+z^2). (This is just the Pythagorean theorem in three dimensions.) The length of a vector is also called its norm. If v is a vector, its length can be denoted by |v|. Vectors of length 1 are particularly important. They are called unit vectors. If v = (x,y,z) is any vector other than (0,0,0), then there is exactly one unit vector that points in the same direction as v. That vector is given by

( x/length, y/length, z/length )

where length is the length of v. Dividing a vector by its length is said to normalize the vector: The result is a unit vector that points in the same direction as the original vector.

Given two vectors v1 = (x1,y1,z1) and v2 = (x2,y2,z2), the dot product of v1 and v2 is denoted by v1·v2 and is defined by

v1·v2 = x1*x2 + y1*y2 + z1*z2

Note that the dot product is a number, not a vector. The dot product is defined in any dimension. Note also that the length of a vector v is just the square root of v·v. The dot product has several very important geometric meanings. First of all, two non-zero vectors are perpendicular if and only if their dot product is zero, since the cosine of a 90-degree angle is zero. Furthermore, the dot product of two non-zero vectors v1 and v2 has the property that

cos(angle) = v1·v2 / (|v1|*|v2|)

where angle is the measure of the angle from v1 to v2. In particular, in the case of unit vectors, whose lengths are 1, the dot product of two unit vectors is simply the cosine of the angle between them. Because of these properties, the dot product is particularly important in lighting calculations, where the effect of light shining on a surface depends on the angle that the light makes with the surface.

For vectors in 3D, there is another type of product called the cross product, which also has an important geometric meaning. For vectors v1 = (x1,y1,z1) and v2 = (x2,y2,z2), the cross product of v1 and v2 is denoted v1×v2 and is the vector defined by

v1×v2 = ( y1*z2 - z1*y2, z1*x2 - x1*z2, x1*y2 - y1*x2 )

If v1 and v2 are non-zero vectors, then v1×v2 is zero if and only if v1 and v2 point in the same direction or in exactly opposite directions. Assuming v1×v2 is non-zero, then it is perpendicular both to v1 and to v2; furthermore, the vectors v1, v2, v1×v2 follow the right-hand rule: if you curl the fingers of your right hand from v1 to v2, then your thumb points in the direction of v1×v2. If v1 and v2 are perpendicular unit vectors, then the cross product v1×v2 is also a unit vector.

Finally, I will note that given two points P1 = (x1,y1,z1) and P2 = (x2,y2,z2), the difference P2−P1, which is defined by

P2 − P1 = ( x2 − x1, y2 − y1, z2 − z1 )

is a vector that can be visualized as an arrow that starts at P1 and ends at P2.
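OpenGL itself does not provide methods for these vector operations, so it can be handy to code them yourself. Here is a minimal sketch in Java, using arrays of length three to represent vectors; the method names are my own, not part of any standard API:

static double dot(double[] v1, double[] v2) {
    return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
}
static double[] cross(double[] v1, double[] v2) {
    return new double[] { v1[1]*v2[2] - v1[2]*v2[1],
                          v1[2]*v2[0] - v1[0]*v2[2],
                          v1[0]*v2[1] - v1[1]*v2[0] };
}
static double length(double[] v) {
    return Math.sqrt(dot(v, v));   // |v| is the square root of v·v
}
static double[] normalize(double[] v) {
    double len = length(v);        // assumes v is not the zero vector
    return new double[] { v[0]/len, v[1]/len, v[2]/len };
}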

Now, suppose that P1, P2, and P3 are vertices of a polygon. Then the vectors P1−P2 and P3−P2 lie in the plane of the polygon, and the cross product (P3−P2)×(P1−P2) is either zero or is a vector that is perpendicular to the polygon. (It's possible for the cross product to be zero. This will happen if P1, P2, and P3 lie on a line; in that case, another set of three vertices might work. Note that if all the vertices of a polygon lie on a line, then the polygon degenerates to a line segment and has no interior points at all. We don't need unit normals for such polygons.) This fact allows us to use the vertices of a polygon to produce a normal vector to the polygon. Once we have that, we can normalize the vector to produce a unit normal.

3.1.2  Matrices and Transformations

A matrix is just a two-dimensional array of numbers. Suppose that a matrix M has r rows and c columns. Let v be a c-dimensional vector, that is, a vector of c numbers. Then it is possible to multiply M by v to yield another vector, which will have dimension r. For a programmer, it's probably easiest to define this type of multiplication with code. Suppose that we represent M and v by the arrays

double[][] M = new double[r][c];
double[] v = new double[c];

Then we can define the product w = M*v as follows:

double[] w = new double[r];
for (int i = 0; i < r; i++) {
    w[i] = 0;
    for (int j = 0; j < c; j++) {
        w[i] = w[i] + M[i][j] * v[j];
    }
}

If you think of a row, M[i], of M as being a c-dimensional vector, then w[i] is simply the dot product M[i]·v.
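As a quick sanity check of this code, suppose the loops above are wrapped in a hypothetical method multiply(M, v). Then a 90-degree rotation about the z-axis behaves as expected:

double[][] M = { { 0, -1, 0 },     // 90-degree rotation about the z-axis
                 { 1,  0, 0 },
                 { 0,  0, 1 } };
double[] v = { 1, 0, 0 };          // unit vector along the positive x-axis
double[] w = multiply(M, v);       // w is (0,1,0): the x-axis rotates onto the y-axis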

Using this definition of the multiplication of a vector by a matrix, a matrix defines a transformation that can be applied to one vector to yield another vector. Transformations that are defined in this way are called linear transformations, and they are the main object of study in the field of mathematics known as linear algebra. Rotation, scaling, and shear are linear transformations, but translation is not. For computer graphics, we are interested in affine transformations in three dimensions. An affine transformation can be defined, roughly, as a linear transformation followed by a translation. To include translations, we have to widen our view to include affine transformations. However, by what seems at first to be a very odd trick, we can narrow our view back to the linear by moving into the fourth dimension.

Note first of all that an affine transformation in three dimensions transforms a vector (x1,y1,z1) into a vector (x2,y2,z2) given by formulas

x2 = a1*x1 + a2*y1 + a3*z1 + t1
y2 = b1*x1 + b2*y1 + b3*z1 + t2
z2 = c1*x1 + c2*y1 + c3*z1 + t3

These formulas express a linear transformation, given by multiplication by the 3-by-3 matrix

    a1  a2  a3
    b1  b2  b3
    c1  c2  c3

followed by translation by t1 in the x direction, t2 in the y direction, and t3 in the z direction.

The trick is to replace each three-dimensional vector (x,y,z) with the four-dimensional vector (x,y,z,1), adding a "1" as the fourth coordinate. Then, instead of applying the affine transformation to the 3D vector (x1,y1,z1), we can apply a linear transformation to the 4D vector (x1,y1,z1,1). And instead of the 3-by-3 matrix, we use the 4-by-4 matrix

    a1  a2  a3  t1
    b1  b2  b3  t2
    c1  c2  c3  t3
    0   0   0   1

in which the bottom row is (0,0,0,1). If the vector (x1,y1,z1,1) is multiplied by this 4-by-4 matrix, the result is precisely the vector (x2,y2,z2,1). The result is that all the affine transformations that are so important in computer graphics can be implemented as matrix multiplication. It's really a very neat system.

One advantage of using matrices to represent transforms is that matrices can be multiplied. If A and B are 4-by-4 matrices, then their matrix product A*B is another 4-by-4 matrix, and A*B represents a linear transformation. The important fact is that applying the single transformation A*B to a vector v has the same effect as first applying B to v and then applying A to the result. Mathematically, this can be said very simply: (A*B)*v = A*(B*v). For computer graphics, it means that the operation of following one transform by another simply means multiplying their matrices. This allows OpenGL to keep track of a single modelview matrix, rather than a sequence of individual transforms. You might compose your modelview transform as a long sequence of modeling and viewing transforms, but when the transform is actually applied to a vertex, only a single matrix multiplication is necessary. The matrix that is used represents the entire sequence of transforms, all multiplied together. Transform commands such as glRotatef and glTranslated are implemented as matrix multiplication: the current modelview matrix is multiplied by a matrix representing the transform that is being applied.

3.1.3  Homogeneous Coordinates

There is one transformation in computer graphics that is not an affine transformation: the projection transformation for a perspective projection. In a perspective projection, an object will appear to get smaller as it moves farther away from the viewer, and that is a property that no affine transformation can express. Surprisingly, we can still represent a perspective projection as a 4-by-4 matrix, provided we are willing to stretch our use of coordinates even further than we have already. We have already represented 3D vectors by 4D vectors in which the fourth coordinate is 1. We now allow the fourth coordinate to be anything at all. When the fourth coordinate, w, is non-zero, we consider the coordinates (x,y,z,w) to represent the three-dimensional vector (x/w,y/w,z/w). Note that this is consistent with our previous usage, since it considers (x,y,z,1) to represent (x,y,z), as before. When the fourth coordinate is zero, there is no corresponding 3D vector, but it is possible to think of (x,y,z,0) as representing a 3D "point at infinity" in the direction of (x,y,z). This might seem pointless to you, but nevertheless, such coordinates turn out to be exactly what is needed to express a perspective projection.
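To make the trick concrete, here is a small sketch (my own illustration, not OpenGL code) that applies a 4-by-4 matrix to a 3D point by passing through homogeneous coordinates. For an affine matrix, w comes out as 1 and the division changes nothing; for a perspective matrix, the division is exactly the perspective division:

static double[] transformPoint(double[][] M, double[] p) {
    double[] hom = { p[0], p[1], p[2], 1 };   // homogeneous coordinates for p
    double[] result = new double[4];
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            result[i] += M[i][j] * hom[j];
        }
    }
    double w = result[3];  // 1 for affine transforms; may differ for projections
    return new double[] { result[0]/w, result[1]/w, result[2]/w };
}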

Coordinates (x,y,z,w) used in this way are referred to as homogeneous coordinates. If we use homogeneous coordinates, then any 4-by-4 matrix can be used to transform three-dimensional vectors, and among the transformations that can be represented in this way is the projection transformation for a perspective projection. And in fact, this is what OpenGL does internally. It represents three-dimensional points and vectors using homogeneous coordinates, and it represents all transformations as 4-by-4 matrices. You can even specify vertices using homogeneous coordinates. For example, the command

gl.glVertex4d(x,y,z,w);

generates the 3D point (x/w,y/w,z/w). Fortunately, you will almost never have to deal with homogeneous coordinates directly. The only real exception to this is that homogeneous coordinates are required when setting the position of a light, as we'll see in the next chapter.

3.1.4  Vector Forms of OpenGL Commands

Some OpenGL commands take parameters that are vectors (or points, if you want to look at it that way) given in the form of arrays of numbers. Commands that take parameters in array form have names that end in "v". For example, there are methods such as glVertex2dv and glColor3fv. In the Java API, the second parameter is an integer offset value that specifies the starting index of the vector in the array. You can use gl.glVertex3fv(A, offset) to generate a vertex that is given by three numbers in an array A of type float[]. (This will make a lot more sense if you already have the vertex data in an array for some reason.) For example, you might use an array to store the coordinates of the three vertices of a triangle, say (1,2,0), (−1,2,2), and (1,0,−3), and use that array to draw the triangle:

float[] vert = new float[] { 1,2,0,  -1,2,2,  1,0,-3 };
gl.glBegin(GL.GL_POLYGON);
gl.glVertex3fv(vert, 0);  // Equivalent to gl.glVertex3f(vert[0],vert[1],vert[2]);
gl.glVertex3fv(vert, 3);  // Equivalent to gl.glVertex3f(vert[3],vert[4],vert[5]);
gl.glVertex3fv(vert, 6);  // Equivalent to gl.glVertex3f(vert[6],vert[7],vert[8]);
gl.glEnd();

(I should note that the versions of these commands in the OpenGL API for the C programming language do not have a second parameter. In C, the only parameter is a pointer, and you can pass a pointer to any array element. You can call glVertex3fv(vert) when the vertex is given by the first three elements of the array, and you can call glVertex3fv(&vert[3]), or equivalently glVertex3fv(vert+3), when the vertex is given by the next three array elements, and so on.)

For some commands, such as glVertex and glColor, vector parameters are an option. Others, such as the commands for setting material properties, only work with vectors, as we'll see in the next chapter. The color command can be useful when working with Java Color objects, since a color object c has a method c.getColorComponents(null) that returns an array containing the red, green, blue, and alpha components of the color as float values in the range 0.0 to 1.0. (The null parameter tells the method to create and return a new array; the parameter could also be an array into which the method would place the data.) You can use this method and the command gl.glColor3fv or gl.glColor4fv to set OpenGL's color from the Java Color object c:

gl.glColor4fv( c.getColorComponents(null), 0 );

I should also mention that Java has alternative forms for

these methods that use things called "buffers" instead of arrays; these forms will be covered later in the chapter.

OpenGL has a number of methods for reading the current values of OpenGL state variables. There are many, many state variables that you can read in this way; I will mention just a few as examples. Many of these values can be retrieved using four generic methods that take an array as a parameter:

gl.glGetBooleanv( propertyCodeNumber, destinationArray, offset );
gl.glGetIntegerv( propertyCodeNumber, destinationArray, offset );
gl.glGetFloatv( propertyCodeNumber, destinationArray, offset );
gl.glGetDoublev( propertyCodeNumber, destinationArray, offset );

In these methods, the first parameter is an integer constant such as GL.GL_CURRENT_COLOR or GL.GL_MODELVIEW_MATRIX that specifies which state variable you want to read. The second parameter is an array of appropriate type into which the retrieved value of the state variable will be placed. And offset tells the starting index in the array where the data should be placed; it will probably be 0 in most cases. (For glGetBooleanv, the array type is byte[], and the numbers 0 and 1 are used to represent the boolean values false and true.) OpenGL will convert the data to the type that you request, so you could use glGetDoublev instead of glGetFloatv, if you prefer, although float values might be the most efficient.

For example, to retrieve the current color, you could say

float[] saveColor = new float[4];  // Space for red, green, blue, and alpha.
gl.glGetFloatv( GL.GL_CURRENT_COLOR, saveColor, 0 );

You might use this to save the current color while you do some drawing that requires changing to another color. Later, you could restore the current color to its previous state by saying

gl.glColor4fv( saveColor, 0 );

(I might mention that if you want to retrieve and save a value so that you can restore it later, there are often better ways to do so.)

You can read the current viewport with

int[] viewport = new int[4];  // Space for x, y, width, height.
gl.glGetIntegerv( GL.GL_VIEWPORT, viewport, 0 );

For boolean state variables such as whether lighting is currently enabled, you can call glGetBooleanv with an array of length 1:

byte[] lit = new byte[1];  // Space for the single return value.
gl.glGetBooleanv( GL.GL_LIGHTING, lit, 0 );

Perhaps most interesting for us now, you can retrieve the current transformation matrices. A transformation matrix is represented by a one-dimensional array of length 16. The 4-by-4 transformation matrix is stored in the array in column-major order, that is, the entries in the first column from top to bottom, followed by the entries in the second column, and so on. You could retrieve the current modelview transformation with

float[] transform = new float[16];
gl.glGetFloatv( GL.GL_MODELVIEW_MATRIX, transform, 0 );

It is possible to set the transformation matrix to a value given in the same form, using the glLoadMatrixf method. For example, you could restore the modelview matrix to the value that was retrieved by the previous command using

gl.glLoadMatrixf( transform, 0 );

(But again, if you just want to save a transform so that you can restore it later, there is a better way to do it, in this case by using glPushMatrix and glPopMatrix.)

Just as you can multiply the current transform matrix by a rotation matrix by calling glRotatef, or by a translation matrix by calling glTranslatef, you can multiply the current transform matrix by an arbitrary matrix using gl.glMultMatrixf(matrix, offset). As with the other matrix methods, the matrix is given by a one-dimensional array of 16 floats, in column-major order. One situation in which this can be useful is to do shear transformations. For example, consider a shear that transforms the vector (x1,y1,z1) to the vector (x2,y2,z2) given by

x2 = x1 + s * z1
y2 = y1
z2 = z1

where s is a constant shear amount. The 4-by-4 matrix for this transformation is hard to construct from scaling, rotation, and translation, but it is very simple:

    1  0  s  0
    0  1  0  0
    0  0  1  0
    0  0  0  1

To use this shear transformation as a modeling transform, you can use glMultMatrixf as follows:

float[] shear = new float[] { 1,0,0,0,  0,1,0,0,  s,0,1,0,  0,0,0,1 };
gl.glMultMatrixf( shear, 0 );

3.2  The Primitives  (online)

The term "primitive" in computer graphics refers to a geometry element that is not made up of simpler elements. Other graphics systems might have additional primitives, such as Bezier curves, circles, or spheres, but in OpenGL such shapes have to be approximated by lines or polygons. In OpenGL, the primitives are points, lines (meaning line segments), and polygons. The type of primitive that you are generating is indicated by a constant such as GL.GL_POLYGON that is passed as a parameter to gl.glBegin. There are ten such constants, which are said to define ten different primitive types in OpenGL. (For most of these primitive types, though, a single use of glBegin/glEnd can produce several basic primitives.) Primitives can be generated using glBegin/glEnd or by more efficient methods that we will cover in Section 3.4. In this section, we'll look at the ten primitive type constants and how they are used to generate primitives. We'll also cover some of the attributes that can be applied to primitives to modify their appearance.

3.2.1  Points

Individual points can be generated by using the primitive type GL.GL_POINTS. With this primitive type, each vertex that is specified generates an individual point. For example, the following code generates 10 points lying along the circumference of a circle of radius 1 in the xy-plane:

gl.glBegin(GL.GL_POINTS);
for (int i = 0; i < 10; i++) {
    double angle = (2*Math.PI/10) * i;  // Angle in radians, as needed for Java's trig functions.
    gl.glVertex3d( Math.cos(angle), Math.sin(angle), 0 );
}
gl.glEnd();

By default, a point is drawn as a single pixel. An individual pixel is just barely visible. The size of a point can be increased by calling gl.glPointSize(size). The parameter is a float that specifies the diameter of the point, in pixels. The range of sizes that is supported depends on the implementation. A point that has size larger than 1 appears by default as a square. To get circular points, you have to turn on point smoothing by calling

gl.glEnable(GL.GL_POINT_SMOOTH);

Just turning on this option will leave the edges of the point looking a little jagged. You can get nicely antialiased points by also using blending. Recall that antialiasing can be implemented by using partial transparency for pixels that are only partially covered by the object that is being drawn. When a partially transparent pixel is drawn, its color is blended with the previous color of the pixel, rather than replacing it. Blending and transparency are fairly complicated topics in OpenGL, and we will discuss them in more detail in the next chapter. For now, I will just note that you can generally get good results for antialiased points by setting the following options, in addition to GL_POINT_SMOOTH:

gl.glEnable(GL.GL_BLEND);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);

However, you should remember to turn off the GL_BLEND option when drawing polygons.

If lighting is off, the pixel takes its color from the current drawing color as set by one of the glColor methods. If lighting is on, then the same lighting calculations are done for the point as would be done for the vertex of a polygon, using the current material, normal vector, and so on. This can have some interesting effects, but generally it makes more sense to draw points with lighting disabled.

3.2.2  Lines

There are three primitive types for generating lines: GL.GL_LINES, GL.GL_LINE_STRIP, and GL.GL_LINE_LOOP. With GL_LINES, the vertices that are specified are paired off, and each pair is joined to produce a line segment. That is, a line is drawn from the first specified vertex to the second, from the third to the fourth, and so on. If the number of vertices is odd, the last vertex is ignored. When GL_LINE_STRIP is used, a line is drawn from the first vertex to the second, from the second to the third, and so on. GL_LINE_LOOP is similar, except that one additional line segment is drawn from the final vertex back to the first. This illustration shows the line segments that would be drawn if you specified the points A, B, C, D, E, and F, in that order, using the three primitive types:

When a line segment is drawn, a color is first assigned to each vertex, just as if it were a vertex in a polygon, including all lighting calculations when lighting is enabled. If the current shade model is GL_SMOOTH, then the vertex colors are interpolated between the two vertices to assign colors to the points along the line. If the shade model is GL_FLAT, then the color of the second vertex is used for the entire segment.

By default, the width of a line is one pixel. The width can be changed by calling gl.glLineWidth(width), where width is a value of type float that specifies the line width in pixels. To get antialiased lines, you should call

gl.glEnable(GL.GL_LINE_SMOOTH);

and you should set up blending with the same two commands that were given above for antialiased points.

A line can also have an attribute called a stipple pattern, which can be used to produce dotted and dashed lines. The stipple pattern is set by calling

gl.glLineStipple( multiplier, pattern );

where pattern is a value of type short and multiplier is an integer. The 16 bits in pattern specify how drawing is turned on and off along the line, with 0 turning drawing off and 1 turning it on. The multiplier tells the size of the steps: when multiplier is one, each bit in pattern corresponds to one pixel-length along the line; when multiplier is two, each bit corresponds to two pixel-lengths, and so on. For example, the value (short)0x0F33 represents the binary number 0000111100110011. The pattern is applied starting from its low-order bit, so with this value drawing is turned on for two steps along the line, off for two, on for two, off for two, then on for four steps, and finally off for the final four steps. The pattern then repeats along the length of the line. The result is a long-dash / short-dash / short-dash stippling, and the multiplier value determines the length of the dashes. (Exactly the same stippling could be produced by using pattern equal to (short)0x3535 and doubling the multiplier.) The line stipple pattern is only applied if the GL_LINE_STIPPLE option is enabled by calling

gl.glEnable(GL.GL_LINE_STIPPLE);
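For example, the following sketch (my own, not from the book's samples) would draw a simple dashed line. The pattern (short)0x00FF is eight 1-bits followed by eight 0-bits, so, starting from the low-order bit, drawing is on for eight pixels, then off for eight:

gl.glEnable(GL.GL_LINE_STIPPLE);
gl.glLineStipple(1, (short)0x00FF);  // on for 8 pixels, off for 8 pixels
gl.glBegin(GL.GL_LINES);
gl.glVertex2d(-0.9, 0);
gl.glVertex2d(0.9, 0);
gl.glEnd();
gl.glDisable(GL.GL_LINE_STIPPLE);    // back to solid lines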

3.2.3  Polygons

The remaining six primitive types are for drawing polygons. The familiar GL.GL_POLYGON, which we used almost exclusively before this section, is used to draw a single polygon with any number of sides. The other five polygon primitives are for drawing triangles and quadrilaterals and will be covered later in this section.

Up until now, I haven't bothered to mention an important fact about polygons in OpenGL: It is correct to draw a polygon in OpenGL only if the polygon satisfies certain requirements. First of all, any polygon drawn by OpenGL should be planar; that is, all of its vertices should lie in the same plane. It is not even clear what a non-planar polygon would mean. However, OpenGL will not report an error if you specify a non-planar polygon. It will, very reasonably, attempt to draw a polygon using any sequence of vertices that you give it, by following its usual rendering algorithms; it will just draw whatever its algorithms produce from the data you give them. But there is no reason to expect that the results will be geometrically correct or meaningful for polygons that don't meet the requirements. (Note that polygons that are very close to being planar will come out OK, so you don't have to be too obsessive about it.)

The other requirement is less reasonable: Polygons drawn by OpenGL must be convex. Informally, convexity means that the polygon does not have any indentations along its edge. The technical definition of convex is that whenever two points lie in the polygon, then the entire line segment between the two points also lies in the polygon. Note that a convex polygon can never be self-intersecting. In order to draw a non-convex polygon with OpenGL, like the non-convex polygon on the lower right in the illustration below, you have to sub-divide it into several smaller convex polygons and draw each of those smaller polygons.

When drawing polygons, whether with GL_POLYGON or with the more specialized primitive types discussed below, you should keep in mind that a polygon has a front side and a back side, and that OpenGL determines which side is the front by the order in which the vertices of the polygon are generated. In the default setup, the vertices have to be specified in counterclockwise order when viewed from the front (and therefore in clockwise order when viewed from the back). You can reverse this behavior by calling

gl.glFrontFace(GL.GL_CW);

You might do this, for example, if you read your polygon data from a file that lists vertices in clockwise order as seen from the front.

The front/back distinction is used in two cases. First, it is possible to assign different material properties to the front faces and to the back faces of polygons, so that they will have different colors, as we will see in the next chapter. Second, OpenGL has an option to do backface culling. This option can be turned on with

gl.glEnable(GL.GL_CULL_FACE);

It tells OpenGL to ignore a polygon when the user is looking at the back face of that polygon. This can be done for efficiency when the polygon is part of a solid object: in that case, the back faces of the polygons that make up the object are all facing the interior of the object, so the back faces will not be visible in the final image, and any time spent rendering them would be wasted.

Sometimes, it is useful just to draw the edges of a polygon. For example, you might want to draw a "wireframe" representation of a solid object, showing just the polygon edges. Or you might want to fill a polygon with one color and then outline it with another color. You can use glPolygonMode to determine how polygons are rendered. You can actually apply different settings to front faces and back faces at the same time. Assuming that you want to treat both faces the same, the three possible settings are:

gl.glPolygonMode( GL.GL_FRONT_AND_BACK, GL.GL_FILL );   // Draw polygon interiors.
gl.glPolygonMode( GL.GL_FRONT_AND_BACK, GL.GL_LINE );   // Draw polygon outlines.
gl.glPolygonMode( GL.GL_FRONT_AND_BACK, GL.GL_POINT );

The default mode is GL_FILL, where the interior of the polygon is drawn. The GL_LINE mode produces an outline of the polygon, showing just the polygon edges. And the GL_POINT mode just places a point at each vertex of the polygon.

To draw both the interior and the outline of a polygon, you have to draw the polygon twice, once using the GL_FILL polygon mode and once using GL_LINE. When you do this, you run into a problem with the depth test that was discussed in Subsection 2.2.2: Along an edge of the polygon, the interior and the outline of the polygon are at the same depth, and because of computational inaccuracy, the depth test can give erratic results when comparing objects that have the same depth values. At some points, you see the interior color; at others, you see the edge. OpenGL has a fix for this problem, called "polygon offset." Here are two pictures of the same surface, which is rendered by drawing a large number of triangles. In this case, I applied polygon offset while filling the triangles but not while drawing their outlines. The picture on the top left does not use polygon offset; the one on the bottom right does:

Polygon offset can be used to add a small amount to the depth of the pixels that make up the polygon. This eliminates the problem with drawing things that have the same depths, since the depth values for the polygon and its outline are no longer exactly the same. Polygon offset can be enabled separately for the FILL, LINE, and POINT polygon modes. The glPolygonOffset method specifies the amount of the offset. Here is some code for drawing the above picture with polygon offset, assuming that the method drawSurface generates the polygons that make up the surface:

gl.glPolygonOffset(1,1);                 // Set polygon offset amount.
gl.glEnable(GL.GL_POLYGON_OFFSET_FILL);  // Turn on offset for filled polygons.

gl.glEnable(GL.GL_LIGHTING);             // Draw polygon interiors with lighting enabled.
gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_FILL);
drawSurface(gl);                         // Fill the triangles that make up the surface.

gl.glDisable(GL.GL_LIGHTING);            // Draw polygon outlines with lighting disabled.
gl.glColor3f(0,0,0);                     // Draw the outlines in black.
gl.glPolygonMode(GL.GL_FRONT_AND_BACK, GL.GL_LINE);
drawSurface(gl);                         // Outline the triangles that make up the surface again.

The parameters used here, (1,1), produce a minimal change in depth, just enough to make the offset work. (The first parameter, which I don't completely understand, compensates for the fact that a polygon that is tilted relative to the screen will require a bigger change in depth to be effective.) In this case, polygon offset is enabled only for the FILL mode, so it does not apply when the outlines are drawn.

3.2.4  Triangles

Triangles are particularly common in OpenGL. When you build geometry out of triangles, you don't have to worry about planarity or convexity, since every triangle is both planar and convex. OpenGL has three primitive types for drawing triangles: GL.GL_TRIANGLES, GL.GL_TRIANGLE_STRIP, and GL.GL_TRIANGLE_FAN.

With GL_TRIANGLES, the vertices that you specify are grouped into sets of three, and every set of three generates a triangle. If the number of vertices is not a multiple of three, then the one or two extra vertices are ignored. Each triangle is treated as a separate polygon, and the usual rule for determining the front face applies. You can, of course, use GL_TRIANGLES to specify a single triangle by providing exactly three vertices.

With GL_TRIANGLE_STRIP, the first three vertices specify a triangle, and each additional vertex adds another triangle, which is formed from the new vertex and the two previous vertices. The triangles form a "strip" or "band" with the vertices alternating from one side of the strip to the other. This illustration shows the triangles produced using GL_TRIANGLES and GL_TRIANGLE_STRIP with the same set of vertices A, B, C, D, E, F, G, H, I, specified in that order:

Note that the vertices of triangle DEF in the picture on the left are specified in clockwise order, which means that you are looking at the back face of this triangle. On the other hand, you are looking at the front face of triangles ABC and GHI. In the picture that is made by GL_TRIANGLE_STRIP, you are looking at the front faces of all the triangles. To make this true, the orientation of every second triangle is reversed; that is, the order of the vertices is considered to be the reverse of the order in which they are specified. So, in the first triangle, the ordering is ABC; in the second, the order is DCB; in the third, CDE; in the fourth, FED; and so on. This might look complicated, but you rarely have to think about it, since it's the natural order to ensure that the front faces of all the triangles are on the same side of the strip.

The third primitive for drawing triangles is GL_TRIANGLE_FAN. It is less commonly used than the other two. In a triangle fan, the first vertex that is specified becomes one of the vertices of every triangle that is generated, and those triangles form a "fan" around that initial vertex. This illustration shows typical uses of GL_TRIANGLE_FAN:
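Whatever the layout of the fan, the code pattern is the same. For instance, a filled disk can be approximated with a fan around its center point; this is my own sketch, not code from the book:

// Approximate a disk of the given radius in the xy-plane, using a
// triangle fan around the center point. "slices" is the number of triangles.
static void drawDisk(GL gl, double radius, int slices) {
    gl.glBegin(GL.GL_TRIANGLE_FAN);
    gl.glNormal3d(0, 0, 1);
    gl.glVertex2d(0, 0);   // center of the fan, shared by every triangle
    for (int i = 0; i <= slices; i++) {
        double angle = (2 * Math.PI / slices) * i;
        gl.glVertex2d(radius * Math.cos(angle), radius * Math.sin(angle));
    }
    gl.glEnd();
}

Since the rim vertices are generated counterclockwise, the front faces of all the triangles face the positive z direction.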

3.2.5  Quadrilaterals

A quadrilateral is a four-sided polygon. OpenGL has two primitives for drawing quadrilaterals: GL.GL_QUADS and GL.GL_QUAD_STRIP. As with any polygon in OpenGL, to be valid, a quadrilateral must be planar and convex.

GL_QUADS, as you can probably guess, draws a sequence of independent quadrilaterals, using groups of four vertices. GL_QUAD_STRIP draws a strip or band made up of quadrilaterals. Here is an illustration showing the quadrilaterals produced by GL_QUADS and GL_QUAD_STRIP, using the same set of vertices, although not generated in the same order:

For GL_QUADS, the vertices of each quadrilateral should be specified in counterclockwise order, as seen from the front. For GL_QUAD_STRIP, the vertices should alternate from one side of the strip to the other, just as they do for GL_TRIANGLE_STRIP.

GL_TRIANGLE_STRIP can always be used in place of GL_QUAD_STRIP, with the same vertices in the same order. (The reverse is not true, since the quads that are produced when GL_QUAD_STRIP is substituted for GL_TRIANGLE_STRIP might not be planar.) In fact, it's more common to use triangles than quads. However, there are many common objects for which quads are appropriate, including all the shape classes in my glutil package.

As an example, let's approximate a cylinder, without a top or bottom, using a quad strip. Assume that the cylinder has radius r and height h and that its base is in the xy-plane, centered at (0,0,0). If we use a strip of 32 quads, then the vertices along the bottom of the cylinder have coordinates (r*cos(d*i), r*sin(d*i), 0), for i = 0, 1, ..., 31, where d is 1/32 of a complete circle. Similarly, the vertices along the top edge of the cylinder have coordinates (r*cos(d*i), r*sin(d*i), h). Furthermore, the unit normal to the cylinder at the point (r*cos(d*i), r*sin(d*i), z) is (cos(d*i), sin(d*i), 0), for both z = 0 and z = h. So, we can use the following code to generate the quad strip:


double d = (1.0/32) * (2*Math.PI);  // 1/32 of a complete circle, in radians for Java's functions.
gl.glBegin(GL.GL_QUAD_STRIP);
for (int i = 0; i <= 32; i++) {
    // Generate a pair of points, on top and bottom of the strip.
    gl.glNormal3d( Math.cos(d*i), Math.sin(d*i), 0 );          // Normal for BOTH points.
    gl.glVertex3d( r*Math.cos(d*i), r*Math.sin(d*i), h );      // Top point.
    gl.glVertex3d( r*Math.cos(d*i), r*Math.sin(d*i), 0 );      // Bottom point.
}
gl.glEnd();

The order of the points, with the top point before the bottom point, is chosen to put the front faces of the quads on the outside of the cylinder. Note that if GL_TRIANGLE_STRIP were substituted for GL_QUAD_STRIP in this example, the results would be identical as long as the usual GL_FILL polygon mode is selected. If the polygon mode is set to GL_LINE, then GL_TRIANGLE_STRIP would produce an extra edge across the center of each quad.

∗ ∗ ∗
In the on-line version of this section, you will find an applet that lets you experiment with the ten primitive types and a variety of options that affect the way they are drawn.

3.3  Polygonal Meshes  (online)

When creating objects that are made up of more than just a few polygons, we need some
way to organize the data and some strategy for using it effectively. The problem is how to represent polygonal meshes for efficient processing. A polygonal mesh is just a collection of connected polygons used to model a two-dimensional surface. The term includes polyhedra such as a cube or a dodecahedron, as well as meshes used to approximate smooth surfaces such as the geometry produced by my UVSphere class. In these cases, the mesh represents the surface of a solid object. However, the term is also used to refer to open surfaces with no interior, such as a mesh representing the graph of a function z = f(x,y). There are many data structures that can be used to represent polygonal meshes. We will look at one general data structure, the indexed face set, that is often used in computer graphics. We will also consider the special case of a polygonal grid.

3.3.1  Indexed Face Sets

The polygons in a polygonal mesh are also referred to as “faces” (as in the faces of a polyhedron), and one of the primary means for representing a polygonal mesh is as an indexed face set, or IFS. The data for an IFS includes a list of all the vertices that appear in the mesh, giving the coordinates of each vertex. A vertex can then be identified by an integer that specifies its index, or position, in the list. As an example, consider this pyramid, a simple five-sided polyhedron:


The vertex list for this polyhedron has the form
Vertex #0.  (1,0,1)
Vertex #1.  (1,0,-1)
Vertex #2.  (-1,0,-1)
Vertex #3.  (-1,0,1)
Vertex #4.  (0,1,0)

The order of the vertices is completely arbitrary. The purpose is simply to allow each vertex to be identified by an integer. To describe a polygon, we have to give its vertices, being careful to enumerate them in counterclockwise order as seen from the front of the polygon. In this case, the fronts of the polygonal faces of the pyramid should be on the outside of the pyramid. Now for an IFS, we can specify a vertex by giving its index in the list. For example, we can say that one of the faces of the pyramid is the polygon formed by vertex #4, vertex #3, and vertex #0. To complete the data for our IFS, we just give the indexes of the vertices for each face:
Face #1:  4 3 0
Face #2:  4 0 1
Face #3:  4 1 2
Face #4:  4 2 3
Face #5:  0 3 2 1

In Java, we can store this data effectively in two two-dimensional arrays:
float[][] vertexList = { {1,0,1}, {1,0,-1}, {-1,0,-1}, {-1,0,1}, {0,1,0} };

int[][] faceList = { {4,3,0}, {4,0,1}, {4,1,2}, {4,2,3}, {0,3,2,1} };

Note that each vertex is repeated three or four times in the face-list array. With the IFS representation, a vertex is represented in the face list by a single integer, giving an index into the vertex list array. This representation uses significantly less memory space than the alternative, which would be to write out the vertex in full each time it occurs in the face list. For this example, the IFS representation uses 31 numbers to represent the polygonal mesh, as opposed to 48 numbers for the alternative.

IFSs have another advantage. Suppose that we want to modify the shape of the polygonal mesh by moving its vertices. We might do this in each frame of an animation, as a way of "morphing" the shape from one form to another. Since only the positions of the vertices are changing, and not the way that they are connected together, it will only be necessary to update the 15 numbers in the vertex list. The values in the face list will remain unchanged. So, in this example, we would only be modifying 15 numbers, instead of 48.
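For instance, a simple morph between two versions of the mesh might be implemented with a sketch like the following, which linearly interpolates each vertex coordinate. The array and method names here are hypothetical, not from the book's sample code:

// Set vertexList to a blend of two key shapes. fraction runs from
// 0 (startList) to 1 (endList). The face list never changes, since
// only vertex positions move, not the way vertices are connected.
static void morphVertices(float[][] vertexList, float[][] startList,
                          float[][] endList, float fraction) {
    for (int i = 0; i < vertexList.length; i++) {
        for (int j = 0; j < 3; j++) {
            vertexList[i][j] = (1-fraction) * startList[i][j]
                                  + fraction * endList[i][j];
        }
    }
}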

Suppose that we want to use the IFS data to draw the pyramid. It's easy to generate the OpenGL commands that are needed to generate the vertices. But to get a correct rendering, we also need to specify normal vectors. When the IFS represents an actual polyhedron, with flat faces, as it does in this case, we can compute a normal vector for each polygon, as discussed in Subsection 3.1.1. (In fact, if P1, P2, and P3 are the first three vertices of the polygon, then the cross product (P3−P2)×(P1−P2) is the desired normal, as long as the three points do not lie on a line.) Let's suppose that we have a method, computeUnitNormalForPolygon, to compute the normal for us. Then the object represented by an IFS can be generated from the vertexList and faceList data with the following simple method:
static void drawIFS(GL gl, float[][] vertexList, int[][] faceList) {
    for (int i = 0; i < faceList.length; i++) {
        gl.glBegin(GL.GL_POLYGON);
        int[] faceData = faceList[i];  // List of vertex indices for this face.
        float[] normal = computeUnitNormalForPolygon(vertexList, faceData);
        gl.glNormal3fv( normal, 0 );
        for (int j = 0; j < faceData.length; j++) {
            int vertexIndex = faceData[j];  // Location of j-th vertex in list.
            float[] vertex = vertexList[ vertexIndex ];
            gl.glVertex3fv( vertex, 0 );
        }
        gl.glEnd();
    }
}
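The book does not give the code for computeUnitNormalForPolygon; here is one possible sketch, based directly on the cross product (P3−P2)×(P1−P2) mentioned above, assuming the face's first three vertices do not lie on a line:

// One possible implementation: compute a unit normal for a face
// from its first three vertices, using (P3-P2) x (P1-P2).
static float[] computeUnitNormalForPolygon(float[][] vertexList, int[] faceData) {
    float[] p1 = vertexList[faceData[0]];
    float[] p2 = vertexList[faceData[1]];
    float[] p3 = vertexList[faceData[2]];
    float ax = p3[0]-p2[0], ay = p3[1]-p2[1], az = p3[2]-p2[2];  // P3 - P2
    float bx = p1[0]-p2[0], by = p1[1]-p2[1], bz = p1[2]-p2[2];  // P1 - P2
    float nx = ay*bz - az*by;   // cross product (P3-P2) x (P1-P2)
    float ny = az*bx - ax*bz;
    float nz = ax*by - ay*bx;
    float len = (float)Math.sqrt(nx*nx + ny*ny + nz*nz);
    return new float[] { nx/len, ny/len, nz/len };  // normalize to a unit vector
}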

This leaves open the question of what to do about normals for an indexed face set that is meant to approximate a smooth surface. In that case, we want to use normals that are perpendicular to the surface, not to the polygons. One possibility is to expand our data structure to store a normal vector for each vertex, as we will do in the next subsection. Another possibility, which can give nice-looking results, is to calculate an approximate normal for each vertex from the data that we have. To do this for a particular vertex v, consider all the polygons in which v is a vertex. Compute vectors perpendicular to each of those polygons, and then use the average of all those vectors as the normal vector at v. Although that vector is probably not exactly perpendicular to the surface at v, it is usually good enough to pass visual inspection of the resulting image.
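A sketch of that averaging strategy, reusing the face-normal computation above (again an illustration of the idea, not code from the book):

// Approximate a unit normal at each vertex by averaging the normals
// of all the faces that contain that vertex.
static float[][] computeVertexNormals(float[][] vertexList, int[][] faceList) {
    float[][] normals = new float[vertexList.length][3];
    for (int[] face : faceList) {
        float[] n = computeUnitNormalForPolygon(vertexList, face);
        for (int vertexIndex : face) {       // add this face's normal to
            for (int k = 0; k < 3; k++) {    //   every vertex it contains
                normals[vertexIndex][k] += n[k];
            }
        }
    }
    for (float[] n : normals) {  // normalize each summed vector
        float len = (float)Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        for (int k = 0; k < 3; k++) {
            n[k] /= len;
        }
    }
    return normals;
}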

∗ ∗ ∗
Before moving on, let's think about making a minor change to the way that we have stored the data for the vertex list. It is common in OpenGL to store this type of data in a one-dimensional array, instead of a two-dimensional array: Store the first vertex in the first three array elements, the second vertex in the next three, and so on:
float[] vertexList = { 1,0,1, 1,0,-1, -1,0,-1, -1,0,1, 0,1,0 };

In general, the data for vertex number n will be stored starting at position 3*n in the array. Recall that the second parameter to gl.glVertex3fv gives the position in the array where the data for the vertex is stored. So, to use our modified data, we can now draw the IFS as follows:
static void drawIFS(GL gl, float[] vertexList, int[][] faceList) {
    for (int i = 0; i < faceList.length; i++) {
        gl.glBegin(GL.GL_POLYGON);
        int[] faceData = faceList[i];  // List of vertex indices for this face.
        float[] normal = computeUnitNormalForPolygon(vertexList, faceData);
        gl.glNormal3fv( normal, 0 );
        for (int j = 0; j < faceData.length; j++) {
            int vertexIndex = faceData[j];
            gl.glVertex3fv( vertexList, 3*vertexIndex );
        }
        gl.glEnd();
    }
}

This representation will be convenient when we look at new methods for drawing OpenGL primitives in the next section.

3.3.2  OBJ Files

For complex shapes that are not described by any simple mathematical formula, it's not feasible to generate the shape using Java code. We need a way to import shape data into our programs from other sources. The data might be generated by physical measurement, for example, or by an interactive 3D modeling program such as Blender (http://www.blender.org). To make this possible, one program must write data in a format that can be read by another program; the two programs need to use the same graphics file format. One of the most common file formats for the exchange of polygonal mesh data is the Wavefront OBJ file format. Although the official file format can store other types of geometric data, such as Bezier curves, it is mostly used for polygons, and that's the only use that we will consider here.

An OBJ file (with file extension ".obj") can store the data for an indexed face set, plus normal vectors and texture coordinates for each vertex. The data is stored as plain, human-readable text, using a simple format. Lines that begin with "v", "vn", "vt", or "f", followed by a space, contain data for one vertex, one normal vector, one set of texture coordinates, or one face, respectively. For our purposes here, other lines can be ignored. A line that specifies a vertex has the form v x y z, where x, y, and z are numeric constants giving the coordinates of the vertex. For example:
v 0.707 -0.707 1

Four numbers, specifying homogeneous coordinates, are also legal but, I believe, rarely used. All the "v" lines in the file are considered to be part of one big list of vertices, and the vertices are assigned indices based on their position in the list. The indices start at one, not zero, so vertex 1 is the vertex specified by the first "v" line in the file, vertex 2 is specified by the second "v" line, and so on. Note that there can be other types of lines interspersed among the "v" lines; those extra lines are not counted when computing the index of a vertex.

Lines starting with "vn" or "vt" work very similarly. Each "vn" line specifies a normal vector, given by three numbers. Normal vectors are not required to be unit vectors. All the "vn" lines in the file are considered to be part of one list of normal vectors, and normal vectors are assigned indices based on their order in the list. A "vt" line defines texture coordinates with one, two, or three numbers. (Two numbers would be used for 2D image textures.) All the "vt" lines in the file create a list of texture coordinates, which can be referenced by their indices in the list.

Faces are more complicated. Each "f" line defines one face, that is, one polygon. The data on the "f" line must give the list of vertices for the face. The data can also assign a normal vector and texture coordinates to each vertex. Vertices, texture coordinates, and normals are referred to by giving their indices in the respective lists. (Remember that the numbering starts from one, not from zero; if you've stored the data in Java arrays, you have to subtract 1 from the numbers given in the "f" line to get the correct array indices. These reference numbers can be negative. A negative index in an "f" line means to count backwards from the position of the "f" line in the file. For example, a vertex index of −1 refers to the "v" line that was seen most recently in the file, before encountering the "f" line; a vertex index of −2 refers to the "v" line that precedes that one, and so on. If you are reading the file sequentially and storing data in arrays as you go, then −1 simply refers to the last item that is currently in the array, −2 refers to the next-to-last item, and so on.)

In the simple case, where there are no normals or texture coordinates, an "f" line can simply list the vertex indices in order. For example, an OBJ file for the pyramid example from the previous subsection could look like this:
v 1 0 1
v 1 0 -1
v -1 0 -1
v -1 0 1
v 0 1 0
f 5 4 1
f 5 1 2
f 5 2 3
f 5 3 4
f 1 4 3 2

When texture coordinate or normal data is included, a single vertex index such as "5" is replaced by a data element in the format v/t/n, where v is a vertex index, t is a texture coordinates index, and n is a normal vector index. The texture coordinates index can be left out, but the two slash characters must still be there. For example, "5/3/7" specifies vertex number 5, with texture coordinates number 3 and normal vector number 7. And "2//1" specifies vertex 2 with normal vector 1. As an example, here is a complete OBJ file representing a cube, with its normal vectors, exactly as exported from Blender. Note that it contains additional data lines, which we want to ignore:
# Blender3D v248 OBJ File:
# www.blender3d.org
mtllib stage.mtl
v 1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 -1.000000 -1.000000
v 1.000000 1.000000 -1.000000
v 0.999999 1.000000 1.000001
v -1.000000 1.000000 1.000000
v -1.000000 1.000000 -1.000000
vn -0.000000 -1.000000 0.000000
vn 0.000000 1.000000 -0.000000
vn 1.000000 0.000000 0.000000
vn -0.000000 -0.000000 1.000000
vn -1.000000 -0.000000 -0.000000
vn 0.000000 0.000000 -1.000000
usemtl Material
s off
f 1//1 2//1 3//1 4//1
f 5//2 8//2 7//2 6//2
f 1//3 5//3 6//3 2//3
f 2//4 6//4 7//4 3//4
f 3//5 7//5 8//5 4//5
f 5//6 1//6 4//6 8//6
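In Java, reading the vertex and face data from such a file is mostly a matter of splitting each line into tokens and looking at the first token. Here is a minimal sketch of a reader for just the "v" and "f" lines; it ignores "vn", "vt", negative indices, and the v/t/n syntax, and the class name and the use of ArrayList are my own choices for illustration, not part of any standard OBJ library:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;

public class SimpleObjReader {
   public ArrayList<float[]> vertexList = new ArrayList<float[]>(); // One xyz triple per "v" line.
   public ArrayList<int[]> faceList = new ArrayList<int[]>();       // Zero-based vertex indices per "f" line.
   public void read(String fileName) throws Exception {
      BufferedReader in = new BufferedReader(new FileReader(fileName));
      String line;
      while ( (line = in.readLine()) != null ) {
         String[] tokens = line.trim().split("\\s+");
         if (tokens[0].equals("v")) {
            vertexList.add(new float[] { Float.parseFloat(tokens[1]),
                  Float.parseFloat(tokens[2]), Float.parseFloat(tokens[3]) });
         }
         else if (tokens[0].equals("f")) {
            int[] face = new int[tokens.length - 1];
            for (int i = 0; i < face.length; i++)
               face[i] = Integer.parseInt(tokens[i+1]) - 1;  // OBJ indices start at 1.
            faceList.add(face);
         }
         // All other line types are ignored in this sketch.
      }
      in.close();
   }
}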

Once you have read a geometric object from an OBJ file and stored the data in arrays, it is easy enough to use the arrays to draw the object with OpenGL.

3.3.3  Terrain and Grids

We look at one more common type of polygonal mesh, which occurs when the vertices in the mesh form a grid or net. A typical example in computer graphics programs would be to represent the ground or terrain by specifying the height of the ground at each point in a grid. For example, we could use a two-dimensional array height[i][j], for 0 <= i <= N and 0 <= j <= N, to give the height of the ground over each of the points (i/N, j/N). (This defines the ground height only over a square that is one unit on a side, but we can easily scale to extend the terrain over a larger square.) The ground would then be approximated by a polygonal mesh containing the points (i/N, height[i][j], j/N), with the points joined by line segments. The result might look something like this (although certainly a larger value of N would be desirable):

[picture of a polygonal terrain mesh]

Imagine looking down from above at the original grid of points and at the same grid connected by lines:

[pictures of the grid of points, and of the grid connected by lines]

You can see that the geometry consists of N horizontal "strips." Each of these strips can be drawn using the GL_TRIANGLE_STRIP primitive. Again, we have to worry about normal vectors, but let's assume again that we have a way to compute them (although the computation in this case is more difficult than for indexed face sets).

If we have a texture that we want to apply to the terrain, we can do so easily. As we have set things up, all the original grid points have xz-coordinates in the unit square, and we can simply use those coordinates as texture coordinates. We can then use the following method to draw the terrain:

static void drawTerrain(GL gl, float[][] height, int N) {
   for (int row = 0; row < N; row++) {
         // Draw strip number row.
      gl.glBegin(GL.GL_TRIANGLE_STRIP);
      for (int col = 0; col <= N; col++) {
            // Generate one pair of points on the strip, at horizontal position col.
         float x1 = (float)col/N;
         float z1 = (float)(row+1)/N;
         float y1 = height[row+1][col];
         float[] normal1 = computeTerrainUnitNormal(height,N,row+1,col);
         gl.glNormal3fv( normal1, 0 );
         gl.glTexCoord2f( x1, z1 );
         gl.glVertex3f( x1, y1, z1 );   // Lower point on strip.
         float x2 = (float)col/N;
         float z2 = (float)row/N;
         float y2 = height[row][col];
         float[] normal2 = computeTerrainUnitNormal(height,N,row,col);
         gl.glNormal3fv( normal2, 0 );
         gl.glTexCoord2f( x2, z2 );
         gl.glVertex3f( x2, y2, z2 );   // Upper point on strip.
      }
      gl.glEnd();
   }
}

∗ ∗ ∗

For readers with some background in calculus, this example is actually just a special case of drawing the graph of a mathematical function of two variables by plotting some points on the surface and connecting them to form lines and triangles. In that case, thinking of the function as giving y = f(x,z), we would use the points (x, f(x,z), z) for a grid of points (x,z) in the xz-plane. The graph of the function is a surface, which will be smooth if f is a differentiable function. We are approximating the smooth surface by a polygonal mesh, so we would like to use normal vectors that are perpendicular to the surface. The normal vectors can be calculated from the partial derivatives of f.

More generally, we can easily draw a polygonal approximation of a parametric surface given by a set of three functions of two variables, x(u,v), y(u,v), and z(u,v). Points on the surface are given by (x,y,z) = (x(u,v), y(u,v), z(u,v)). The surface defined by a rectangular area in the uv-plane can be approximated by plotting points on the surface for a grid of uv-points in that rectangle and using the resulting points to draw a sequence of triangle strips. Again, we would like to use normal vectors that are perpendicular to the surface. The normal at a point on the surface can be computed from partial derivatives: if we use the notation Du to mean the partial derivative with respect to u and Dv for the partial derivative with respect to v, then a normal at the point (x(u,v), y(u,v), z(u,v)) is given by the cross product

(Du x(u,v), Du y(u,v), Du z(u,v)) × (Dv x(u,v), Dv y(u,v), Dv z(u,v))

In fact, it's not even necessary to compute exact formulas for the partial derivatives. A rough numerical approximation, using the difference quotient, is good enough to produce nice-looking surfaces.
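The method computeTerrainUnitNormal used in drawTerrain is not shown in this section. As a rough sketch of the difference-quotient idea (my own version, clamping at the edges of the grid, rather than the one used in the sample code), the slopes can be estimated from neighboring grid points:

static float[] computeTerrainUnitNormal(float[][] height, int N, int row, int col) {
      // Estimate the slopes in the x- and z-directions from neighboring grid points.
   int rowPrev = Math.max(row-1, 0), rowNext = Math.min(row+1, N);
   int colPrev = Math.max(col-1, 0), colNext = Math.min(col+1, N);
   float dx = (colNext - colPrev) / (float)N;  // Horizontal run in the x-direction.
   float dz = (rowNext - rowPrev) / (float)N;  // Horizontal run in the z-direction.
   float slopeX = (height[row][colNext] - height[row][colPrev]) / dx;
   float slopeZ = (height[rowNext][col] - height[rowPrev][col]) / dz;
      // A surface y = f(x,z) has (unnormalized) normal (-df/dx, 1, -df/dz).
   float nx = -slopeX, ny = 1, nz = -slopeZ;
   float length = (float)Math.sqrt(nx*nx + ny*ny + nz*nz);
   return new float[] { nx/length, ny/length, nz/length };
}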

3.4  Drawing Primitives (online)

We have been exclusively using glBegin/glEnd for drawing primitives, but that is just one of the approaches to drawing that OpenGL makes available. This section will discuss alternative approaches. The alternative approaches are more efficient, because they make many fewer calls to OpenGL commands and because they offer the possibility of storing data in the graphics card's memory instead of retransmitting the data every time it is used. This is especially true in Java (as opposed to C), because the implementation of arrays in Java makes Java arrays unsuitable for use in certain OpenGL methods that have array parameters in C. Jogl's solution to the array problem is to use "nio buffers" instead of arrays in those cases where Java arrays are not suitable. The first subsection, below, discusses nio buffers. Unfortunately, the alternative drawing methods are more complicated than glBegin/glEnd. In fact, the glBegin/glEnd paradigm is considered to be a little old-fashioned, and it has even been deprecated in the latest versions of OpenGL.

To give us something to draw in this section, we'll look at ways to produce the following image. It shows a dark red cylinder inside a kind of cloud of points. The points are randomly selected points on a sphere. The sphere points are lit, with normal vectors and texture coordinates appropriate for the unit sphere. The texture that is used is a topographical map of the Earth's surface. The on-line version of this section has an applet that animates the image and allows you to rotate it. The source code for the program can be found in VertexArrayDemo.java.

[picture of a dark red cylinder inside a cloud of points on a sphere]

3.4.1  Java's Data Buffers

The term "buffer" was already overused in OpenGL, and Jogl's use of nio buffers makes it even harder to know exactly what type of buffer is being discussed in any particular case. The term buffer ends up meaning, basically, no more than a place where data can be stored—usually a place where data can be stored by one entity and retrieved by another entity.

A Java nio buffer is an object belonging to the class java.nio.Buffer or one of its subclasses. The package java.nio defines input/output facilities that supplement those in the older java.io package. Buffers in this package are used for efficient transfer of data between a Java program and an I/O device such as a hard drive. When used in Jogl, this type of buffer offers the possibility of efficient data transfer between a Java program and a graphics card. Subclasses of Buffer such as FloatBuffer and IntBuffer represent buffers capable of holding data items of a given primitive type. There is one such subclass for each primitive type except boolean.

To make working with nio buffers a little easier, Jogl defines a set of utility methods for creating buffers. They can be found as static methods in the class BufferUtil, from the package com.sun.opengl.util. We will be using BufferUtil.newFloatBuffer(n), which creates a buffer that can hold n floating point values, and BufferUtil.newIntBuffer(n), which creates a buffer that can hold n integers. (The buffers created by these methods are so-called "direct" buffers, which are required for the Jogl methods that we will be using. It is also possible to create a Buffer object that stores its data in a standard Java array. Buffers of that type could be used with some Jogl methods, but not others, and they have the same problems for OpenGL as plain Java arrays. We will avoid them.)

A nio Buffer, like an array, is simply a linear sequence of elements of a given type. That is, it is possible to refer to items in a buffer by their index or position in that sequence, just as for an array. A buffer differs from an array in that a buffer has an internal position pointer that indicates a "current position" within the buffer. A buffer has relative put and get methods that work on the item at the current position, and then advance the position pointer to the next position. Suppose that buffer is a variable of type FloatBuffer, i is an int, and x is a float. Then buffer.put(x) stores the value of x at the current position and advances the position pointer to the next element in the buffer. Similarly, buffer.get() returns the item at the buffer's current position and advances the pointer. There are also absolute versions of these methods: buffer.put(i,x) copies the value of x into position number i in the buffer, and buffer.get(i) can be used to retrieve the value at index i in the buffer. There are also methods for transferring an entire array of values to or from a buffer. For example, buffer.put(data, start, count) copies count values from an array, data, starting from position start in the array. The data is stored in the buffer, starting at the buffer's current position, and the buffer's position pointer is moved to the first index following the block of transferred data. These methods can be much more efficient than transferring values one at a time.
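For example, here is a small illustration of the relative and absolute methods; the specific values are mine, chosen just for the demonstration:

FloatBuffer buffer = BufferUtil.newFloatBuffer(4);
buffer.put(3.14f);         // Relative put; stores at position 0, moves pointer to 1.
buffer.put(2.72f);         // Stores at position 1, moves pointer to 2.
buffer.put(0, 1.41f);      // Absolute put; overwrites position 0, pointer stays at 2.
buffer.rewind();           // Move the position pointer back to 0.
float first = buffer.get();    // Relative get; returns 1.41f, moves pointer to 1.
float second = buffer.get(1);  // Absolute get; returns 2.72f, pointer unchanged.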

The method buffer.position(i) can be used to set the buffer's current position pointer to i. To return the position pointer to the start of the buffer, you can use buffer.position(0) or buffer.rewind().

3.4.2  Drawing With Vertex Arrays

As an alternative to using glBegin/glEnd, you can place your data into nio buffers and let OpenGL read the data from the buffers as it renders the primitives. (In C or C++, you would use arrays rather than buffers, and the technique is referred to as using vertex arrays; the name is kept in Jogl, although nio buffers, not arrays, are used.) We start with a version of this technique that has been available since OpenGL 1.1, and so can be used on all current systems. You can store other data in buffers, not just vertices, such as normal vectors and texture coordinates, as we will see below.

To use vertex arrays, you must store vertices in a buffer, and you have to tell OpenGL that you are using that buffer with the following method from the class GL:

public void glVertexPointer(int size, int type, int stride, Buffer buffer)

The size here is the number of coordinates per vertex, which can be 2, 3, or 4. (You have to provide the same number of coordinates for each vertex.) The type is a constant that tells the data type of each of the numbers in the buffer. The possible values are GL.GL_FLOAT, GL.GL_INT, and GL.GL_DOUBLE. The constant that you provide here must match the type of the buffer. For example, if you want to use float values for the vertex coordinates, specify GL.GL_FLOAT as the type and use a buffer of type FloatBuffer. The stride is usually 0, meaning that the data values are stored in consecutive locations in the buffer; if that is not the case, then stride gives the distance in bytes between the location of one value and the location of the next value. The vertex data is assumed to start at the buffer's current position. (In particular, note that you might need to set the buffer's position pointer to the appropriate value by calling buffer.position() before calling gl.glVertexPointer(); this will certainly be true if you use relative put commands to put data into the buffer.)

As an example, suppose that we want to store the vertices of a triangle in a FloatBuffer. Say the vertices are (1,0,−1), (0,1,−1), and (0.5,0.5,−1). Here are three ways to do it:

(1) Absolute put:

buf.put(0,1);    buf.put(1,0);    buf.put(2,-1);    // first vertex
buf.put(3,0);    buf.put(4,1);    buf.put(5,-1);    // second vertex
buf.put(6,.5f);  buf.put(7,.5f);  buf.put(8,-1);    // third vertex

(2) Relative put, no need to specify indices:

buf.put(1);    buf.put(0);    buf.put(-1);    // first vertex
buf.put(0);    buf.put(1);    buf.put(-1);    // second vertex
buf.put(.5f);  buf.put(.5f);  buf.put(-1);    // third vertex

(3) Bulk put, copying the data from an array, starting at position 0:

float[] vert = { 1,0,-1,  0,1,-1,  .5f,.5f,-1 };
buf.put(vert, 0, 9);

You should note that the absolute put methods used in case (1) do not move the buffer's internal pointer, so in that case the data will be at the beginning of the buffer. The relative puts in cases (2) and (3) advance the pointer nine positions in each case—and when you use them, you might need to reset the buffer's position pointer before drawing.

In addition to calling glVertexPointer, you must enable the use of the buffer by calling

gl.glEnableClientState(GL.GL_VERTEX_ARRAY);

OpenGL ignores the vertex pointer except when this state is enabled. Use gl.glDisableClientState(GL.GL_VERTEX_ARRAY) to disable the use of the array. Finally, in order to actually use the vertex data from the buffer, to tell OpenGL to generate primitives from the data, use the following method from class GL:

void glDrawArrays(int mode, int first, int count)

This method call corresponds to one use of glBegin/glEnd. The mode tells which primitive type is being drawn, such as GL.GL_POLYGON or GL.GL_TRIANGLE_STRIP. The same ten primitive types that can be used with glBegin can be used here. first is the index in the buffer of the first vertex that is to be used for drawing the primitive, and count is the number of vertices to be used, just as if glVertex were called count times. Note that the position is given in terms of vertex number, not number of floating point values.

Let's see how this could be used to draw the rectangle in the xy-plane with corners at (−1,−1) and (1,1). We need a variable of type FloatBuffer, which would probably be an instance variable:

private FloatBuffer rectVertices;

The buffer itself has to be allocated and filled with data, perhaps in the init() method:

rectVertices = BufferUtil.newFloatBuffer(8);
float[] data = { -1,-1,  1,-1,  1,1,  -1,1 };
rectVertices.put(data, 0, 8);
rectVertices.rewind();

The last line, which moves the buffer's position pointer back to zero, is essential, since positions of data in the buffer are specified relative to that pointer. With this setup, we can draw the rectangle in the display() method like this:

gl.glNormal3f(0,0,1);
gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL.GL_FLOAT, 0, rectVertices);
gl.glDrawArrays(GL.GL_POLYGON, 0, 4);  // Generate the polygon using 4 vertices.
gl.glDisableClientState(GL.GL_VERTEX_ARRAY);

The "2" that is used as the first parameter to glVertexPointer says that each vertex consists of two floating point values, giving the coordinates of a vertex that lies in the xy-plane. In this example, one normal vector will work for all the vertices, and there are no texture coordinates.

In general, we will need a different normal vector and possibly texture coordinates for each vertex. We just have to store these data in their own buffers, in much the same way that we did the vertex data. The following methods are used in the same way as glVertexPointer, to specify the buffers that hold normal vector and texture coordinate data:

public void glNormalPointer(int type, int stride, Buffer buffer)
public void glTexCoordPointer(int size, int type, int stride, Buffer buffer)

Note that glNormalPointer does not have a size parameter. For normal vectors, you must always give three numbers for each vector, just as for glNormal3f, so the size would always be 3 in any case. To use the normal and texture coordinate buffers, you have to enable the appropriate client state, using the method calls

gl.glEnableClientState(GL.GL_NORMAL_ARRAY) and gl.glEnableClientState(GL.GL_TEXTURE_COORD_ARRAY). When you call glDrawArrays, if GL_NORMAL_ARRAY is enabled, then normal vectors will be retrieved from the buffer specified by glNormalPointer. One normal vector will be retrieved for each vertex, and the normals in the normal buffer must correspond one-to-one with the vertices in the vertex buffer. The texture coordinate buffer works in the same way, but note that each vertex requires three floats while each set of texture coordinates requires two.

∗ ∗ ∗

For a more extended example, we will look at how to draw the cylinder and sphere in the picture at the beginning of this section. The sphere consists of thousands of randomly generated points on the unit sphere (that is, the sphere of radius 1 centered at the origin), with appropriate texture coordinates and normal vectors. (This example emphasizes the fact that points are essentially free-floating vertices that can have their own normals and texture coordinates.) We need buffers to hold vertices, normals, and texture coordinates. However, in this particular case, the normal to the unit sphere at a given point has the same coordinates as the point itself, so the normals and the vertices are identical; in fact I use the same set of data for the normals as for the vertices, stored in the same buffer. Here is the method that is used to create the data, where sphereVertices and sphereTexCoords are FloatBuffers and spherePointCount is the number of points to be generated:

private void createPointSphere() {
   sphereVertices = BufferUtil.newFloatBuffer(spherePointCount*3);
   sphereTexCoords = BufferUtil.newFloatBuffer(spherePointCount*2);
   for (int i = 0; i < spherePointCount; i++) {
      double s = Math.random();
      double t = Math.random();
      double u = s * Math.PI * 2;
      double z = t * 2 - 1;
      double r = Math.sqrt(1 - z*z);
      double x = r * Math.cos(u);
      double y = r * Math.sin(u);
      sphereVertices.put(3*i, (float)x);
      sphereVertices.put(3*i+1, (float)y);
      sphereVertices.put(3*i+2, (float)z);
      sphereTexCoords.put(2*i, (float)s);
      sphereTexCoords.put(2*i+1, (float)t);
   }
}

You don't need to understand the math. (Briefly, choosing z and the angle uniformly at random produces points that are uniformly distributed over the sphere.) I've used absolute, indexed put commands to store the data into the buffers. In the method for creating the data for the cylinder, I used relative put, which has the advantage that I don't need to compute the correct index for each number that I put in the buffer. I don't present that method here, but you can find it in the source code.

Once the data has been stored in the buffers, it's easy to draw the sphere. We have to set up pointers to the data, enable the appropriate client states, and call glDrawArrays to generate the points from the data, using GL_POINTS as the primitive type. The entire set of points can be drawn with one call to glDrawArrays:

// Tell OpenGL where to find the data:
gl.glVertexPointer(3, GL.GL_FLOAT, 0, sphereVertices);
gl.glNormalPointer(GL.GL_FLOAT, 0, sphereVertices);   // Use the same buffer
                                                      // for the normal vectors.
gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, sphereTexCoords);

// Tell OpenGL which arrays are being used:
gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glEnableClientState(GL.GL_TEXTURE_COORD_ARRAY);

sphereTexture.enable();  // Turn on texturing.

gl.glDrawArrays(GL.GL_POINTS, 0, spherePointCount);  // Generate the points!

// At this point, texturing and client states could be disabled.

Things are just a little more complicated for the cylinder, since it requires three calls to glDrawArrays: one for the side of the cylinder, one for the top, and one for the bottom. We can, however, store the vertices for all three parts in the same buffer, as long as we remember the starting point for each part. The data is stored in a FloatBuffer named cylinderPoints. The vertices for the side of the cylinder start at position 0, the vertices for the top start at position cylinderTopStart, and the vertices for the bottom start at cylinderBottomStart. (That is, the side uses cylinderTopStart vertices, and the first vertex for the top is at index cylinderTopStart in the list of vertices.) The normals for the side of the cylinder are stored in a FloatBuffer named cylinderSideNormals. For the top and bottom, the normal vector can be set by a single call to glNormal3f, since the same normal is used for each vertex. This means that GL_NORMAL_ARRAY has to be enabled while drawing the side but not while drawing the top and bottom of the cylinder. Putting all this together, the cylinder can be drawn as follows:

gl.glVertexPointer(3, GL.GL_FLOAT, 0, cylinderPoints);
gl.glNormalPointer(GL.GL_FLOAT, 0, cylinderSideNormals);
gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);

// Draw the side, using data from the vertex buffer and from the normal buffer.
gl.glDrawArrays(GL.GL_QUAD_STRIP, 0, (cylinderVertexCount+1)*2);

gl.glDisableClientState(GL.GL_NORMAL_ARRAY);  // Turn off normal array.
                                              // Leave vertex array enabled.

// Draw the top and bottom, using data from the vertex buffer only.
gl.glNormal3f(0,0,1);   // Normal for all vertices for the top.
gl.glDrawArrays(GL.GL_TRIANGLE_FAN, cylinderTopStart, cylinderVertexCount+2);
gl.glNormal3f(0,0,-1);  // Normal for all vertices for the bottom.
gl.glDrawArrays(GL.GL_TRIANGLE_FAN, cylinderBottomStart, cylinderVertexCount+2);

gl.glDisableClientState(GL.GL_VERTEX_ARRAY);  // Turn off vertex array.

In addition to vertex, normal, and texture coordinate arrays, glDrawArrays can use several other arrays, including a color array that holds color values and generic vertex attribute arrays that hold data for use in GLSL programs.
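A color array is handled just like the others. The following fragment is my own illustration, not part of VertexArrayDemo; it assumes rgbBuffer is a FloatBuffer holding one RGB triple per vertex and vertexCount is the number of vertices:

gl.glColorPointer(3, GL.GL_FLOAT, 0, rgbBuffer);  // Three floats per color.
gl.glEnableClientState(GL.GL_COLOR_ARRAY);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount);
gl.glDisableClientState(GL.GL_COLOR_ARRAY);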

3.4.3  Vertex Buffer Objects

Vertex arrays speed up drawing by greatly reducing the number of calls to OpenGL routines. However, the data for the routines still has to be transferred to the graphics card each time it is used, and this can still be a bottleneck on performance. OpenGL Version 1.5 introduced another technique that offers the possibility of reducing and in some cases eliminating this bottleneck. The technique is referred to as vertex buffer objects, or VBOs. A vertex buffer object is a region of memory managed by OpenGL that can store the data from vertex arrays. Note that these buffers are not the same sort of thing as Java nio buffers. When VBOs are used, the data from the Java nio buffers specified by methods such as glVertexPointer and glNormalPointer is (at least potentially) copied into a VBO, which can reside in memory inside the graphics card or in system memory that is more easily accessible to the graphics card.

Before using VBOs, you should be sure that the version of OpenGL is 1.5 or higher. It is a good idea to test this and store the answer in a boolean variable. In Jogl, you can do this with:

version_1_5 = gl.isExtensionAvailable("GL_VERSION_1_5");

VBOs are allocated by the glGenBuffers method, which creates one or more VBOs and returns an integer ID number for each buffer created. The ID numbers are stored in an array. For example, four VBOs can be created like this:

int[] bufferID = new int[4];      // Get 4 buffer IDs.
gl.glGenBuffers(4, bufferID, 0);

The third parameter is an offset that tells the starting index in the array where the IDs should be stored; it is usually 0 (and is absent in the C API).

For use with vertex arrays, the current VBO is set by calling gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboID), where vboID is the ID number of the VBO that is being made current. It's important to understand that this method does nothing except to say, "OK, from now on anything I do with VBOs that is relevant to vertex arrays should be applied to VBO number vboID." It is a switch that directs future commands to a particular VBO. Most of the OpenGL routines that work with VBOs do not mention a VBO index; instead, they work with the "current VBO." You can also call gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0) to direct commands away from VBOs altogether. Later, we'll see another possible value for the first parameter of this method. This can be important because the meaning of some commands changes when they are used with VBOs.

To specify the data that is to be stored in a VBO, use the glBufferData method from the GL class. Note that this method supplies data for the VBO whose ID has been selected using glBindBuffer:

public void glBufferData(int target, int size, Buffer data, int usage)

The target, for now, should be GL.GL_ARRAY_BUFFER (and in general should match the first parameter in glBindBuffer). size is the size of the data in bytes, and the data is stored in data, which is a Java nio Buffer. Unlike methods such as glVertexPointer, this method specifies the data itself, not just the location of the data. (Note that the data should already be in the nio buffer, even though it will be used by future commands.) The usage parameter is particularly interesting. It is a "hint" that tells

OpenGL how the data will be used, so that OpenGL will try to store the data in the optimal location for its intended use. For our purposes in this section, the possible values are:

• GL.GL_STATIC_DRAW – the data is expected to be used many times, without further modification. It would be optimal to store the data in the graphics card, where OpenGL has direct access to it. This is the appropriate usage for this section's VertexArrayDemo application.

• GL.GL_DYNAMIC_DRAW – the data will be changed many times. The data is ideally stored in memory that can be quickly accessed by both the CPU and the graphics card. This usage would be appropriate for an animation in which the vertex data can change from one frame to the next.

• GL.GL_STREAM_DRAW – the data will be used once or at most a few times. It is not so important to save it on the graphics card.

The VertexArrayDemo application actually uses VBOs when the OpenGL version is 1.5 or higher. Four VBOs are used, to store the vertex, normal, and texture coordinate data for the sphere and cylinder. Here is the code that creates the VBOs and provides them with data:

if (version_1_5) {
   int[] bufferID = new int[4];
   gl.glGenBuffers(4, bufferID, 0);       // Get 4 buffer IDs for the data.
   spherePointBufferID = bufferID[0];     // VBO for sphere vertices/normals.
   sphereTexCoordID = bufferID[1];        // VBO for sphere texture coords.
   cylinderPointBufferID = bufferID[2];   // VBO for cylinder vertices.
   cylinderNormalBufferID = bufferID[3];  // VBO for cylinder normals.
   gl.glBindBuffer(GL.GL_ARRAY_BUFFER, spherePointBufferID);
   gl.glBufferData(GL.GL_ARRAY_BUFFER, spherePointCount*3*4,
                                  sphereVertices, GL.GL_STATIC_DRAW);
   gl.glBindBuffer(GL.GL_ARRAY_BUFFER, sphereTexCoordID);
   gl.glBufferData(GL.GL_ARRAY_BUFFER, spherePointCount*2*4,
                                  sphereTexCoords, GL.GL_STATIC_DRAW);
   gl.glBindBuffer(GL.GL_ARRAY_BUFFER, cylinderPointBufferID);
   gl.glBufferData(GL.GL_ARRAY_BUFFER, ((cylinderVertexCount+1)*4+2)*3*4,
                                  cylinderPoints, GL.GL_STATIC_DRAW);
   gl.glBindBuffer(GL.GL_ARRAY_BUFFER, cylinderNormalBufferID);
   gl.glBufferData(GL.GL_ARRAY_BUFFER, (cylinderVertexCount+1)*2*3*4,
                                  cylinderSideNormals, GL.GL_STATIC_DRAW);
   gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);  // Leave no buffer ID bound.
}

The if statement tests whether the version of OpenGL is high enough to support vertex buffer objects, using the variable version_1_5, which was discussed above. Note how glBindBuffer is used to select the VBO to which the following glBufferData method will apply. Also, note that the size specified in the second parameter to glBufferData is given in bytes, not number of floating point values; since a float value takes up four bytes, the size is obtained by multiplying the number of floats by 4.
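For the GL.GL_DYNAMIC_DRAW case, the data in an existing VBO can be replaced without reallocating it, using glBufferSubData. The following fragment is my own illustration rather than part of VertexArrayDemo, and the exact integer types of the offset and size parameters should be checked against the Jogl documentation; the second parameter is a byte offset into the VBO:

// Replace the vertex data in an existing VBO, starting at the beginning.
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, spherePointBufferID);
gl.glBufferSubData(GL.GL_ARRAY_BUFFER, 0, spherePointCount*3*4, sphereVertices);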

∗ ∗ ∗

Once that's done, of course, we also have to use the data from the VBOs! The VBOs are holding vertex array data; however, the command for telling OpenGL where to find that data is a little different when VBOs are used. When the data is stored in VBOs, alternative forms of the glVertexPointer, glNormalPointer, and glTexCoordPointer methods are used:

public void glVertexPointer(int size, int type, int stride, long vboOffset)
public void glNormalPointer(int type, int stride, long vboOffset)
public void glTexCoordPointer(int size, int type, int stride, long vboOffset)

The difference is the last parameter, which is now an integer instead of a Buffer. (In the C API, there is only one version of each command, with a pointer as the fourth parameter, but that parameter is interpreted differently depending on whether VBOs are being used or not.) The vboOffset gives the starting position of the data within the VBO. This offset is given as the number of bytes from the beginning of the VBO; the value is often zero. The VBO in question is not mentioned; as usual, it is the VBO that has been most recently specified by a call to gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboID). Note that this only makes a difference when telling OpenGL the location of the data. The glDrawArrays and glEnableClientState commands are used in exactly the same way whether or not we are using VBOs.

We can now look at the complete sphere-drawing method from VertexArrayDemo, which uses VBOs when the OpenGL version is 1.5 or higher:

private void drawPointSphere(GL gl, int pointCt) {
   if (version_1_5) {
         // Use glBindBuffer to say what VBO to work on, then use
         // glVertexPointer to set the position where the data starts
         // in the buffer (in this case, at the start of the buffer).
      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, spherePointBufferID);
      gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0);
         // Use the same buffer for the normal vectors.
      gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
         // Now, set up the texture coordinate pointer in the same way.
      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, sphereTexCoordID);
      gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, 0);
      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);  // Leave no VBO bound.
   }
   else {
         // Use glVertexPointer, etc., to set the buffers from which
         // the various kinds of data will be read.
      gl.glVertexPointer(3, GL.GL_FLOAT, 0, sphereVertices);
      gl.glNormalPointer(GL.GL_FLOAT, 0, sphereVertices);
      gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, sphereTexCoords);
   }
   gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
   gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
   gl.glEnableClientState(GL.GL_TEXTURE_COORD_ARRAY);
   sphereTexture.enable();
   gl.glDrawArrays(GL.GL_POINTS, 0, pointCt);  // Generate the points!
   gl.glDisableClientState(GL.GL_VERTEX_ARRAY);
   gl.glDisableClientState(GL.GL_NORMAL_ARRAY);
   gl.glDisableClientState(GL.GL_TEXTURE_COORD_ARRAY);
   sphereTexture.disable();
}
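One detail not covered above: a program that allocates VBOs and later stops using them can free them with glDeleteBuffers, which takes the same kind of ID array that glGenBuffers fills. This sketch is my addition, not part of VertexArrayDemo:

// Free the four VBOs that were allocated by glGenBuffers.
int[] bufferID = { spherePointBufferID, sphereTexCoordID,
                   cylinderPointBufferID, cylinderNormalBufferID };
gl.glDeleteBuffers(4, bufferID, 0);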

Obviously, vertex buffer arrays are non-trivial to use. The reward is the possibility of better performance when rendering complex scenes.

3.4.4  Drawing with Array Indices

The glDrawArrays method is great for drawing primitives given as a list of vertex coordinates, but it can't be used for data in indexed face set (IFS) format. For an indexed face set, a face is specified by a sequence of integers representing indices into a list of vertices. For that, there is the glDrawElements method, which can work directly with data in this format. For use with glDrawElements, you can set up buffers to hold vertex coordinates, normal coordinates, and texture coordinates, exactly as you would for glDrawArrays (including the use of vertex buffer objects, if desired). Note, however, that the order of elements in the various arrays must correspond. That is, the first vector in the normal array and the first set of texture coordinates in the texture array must correspond to the first vertex in the vertex array, the second item in the normal and texture arrays must correspond to the second vertex, and so on. The elements in the arrays are stored in arbitrary order, not the order in which they will be used to generate primitives. The order needed for generating primitives will be specified by a separate list of indices into the arrays.

To give us an example to work with, we consider an "icosphere," a polygonal approximation of a sphere that is obtained by subdividing the faces of an icosahedron. You get a better approximation by subdividing the faces more times. Here is a picture of an icosphere created by one subdivision:

[picture of an icosphere]

You can find an applet that draws icospheres in the on-line version of these notes. The source code for the program can be found in IcosphereIFS.java.

For the icosphere example, the geometry consists of a large number of triangles, and the whole thing can be drawn with a single use of the GL_TRIANGLES primitive. Because the points are points on the unit sphere, we can use the same set of coordinates as unit normal vectors. Let's assume that the vertex coordinates have been stored in a Java nio FloatBuffer named icosphereVertexBuffer, and let's say that the face data (the list of vertex indices for each triangle) is in an IntBuffer named icosphereIndexBuffer. We could actually draw the icosphere directly using glBegin/glEnd: Assuming indexCount is the number of integers in icosphereIndexBuffer, the following code would work:

gl.glBegin(GL.GL_TRIANGLES);
for (int i = 0; i < indexCount; i++) {
   int vertexIndex = icosphereIndexBuffer.get(i);          // Index of i-th vertex.
   float vx = icosphereVertexBuffer.get(3*vertexIndex);    // Get vertex coords.

   float vy = icosphereVertexBuffer.get(3*vertexIndex+1);
   float vz = icosphereVertexBuffer.get(3*vertexIndex+2);
   gl.glNormal3f(vx,vy,vz);   // Use vertex coords as normal vector.
   gl.glVertex3f(vx,vy,vz);   // Generate the i-th vertex.
}
gl.glEnd();

But glDrawElements is meant to replace precisely this sort of code. How we use glDrawElements depends on whether or not a vertex buffer object is used for the face data. If we do not use a VBO for the face data, then the face data is passed directly to glDrawElements. At the time when glDrawElements is called, we must use glVertexPointer and glNormalPointer to set up a vertex pointer and normal pointer, just as for glDrawArrays. The icosphere can then be drawn as follows:

gl.glDrawElements(GL.GL_TRIANGLES, indexCount, GL.GL_UNSIGNED_INT, icosphereIndexBuffer);

The first parameter is the type of primitive that is being drawn, as for glDrawArrays. The second is the number of vertices that will be generated. The third is the type of data in the buffer. And the fourth is the nio Buffer that holds the face data. The face data consists of one integer for each vertex, giving an index into the data that has already been set up by glVertexPointer and related methods. Note that there is no indication of the position within the buffer where the data begins: the data begins at the buffer's current internal position pointer. If necessary, you can set that pointer by calling the buffer's position method before calling glDrawElements. (This is one place where we are forced to work with the internal buffer pointer.) This can be useful if you want to store data for more than one call to glDrawElements in the same buffer.

VBOs can also be used to store the face index data for use with glDrawElements. Unfortunately, the use of VBOs for this purpose is not exactly parallel to their use for vertex and normal data. Now, let's look at what happens when a VBO is used to hold the face data. In this case, the first parameter to glBindBuffer and glBufferData must be GL.GL_ELEMENT_ARRAY_BUFFER:

gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, icosphereIndexID);
gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, indexCount*4,
                           icosphereIndexBuffer, GL.GL_STATIC_DRAW);

The data must be loaded into the VBO before it can be used, but it is done exactly as above. At the time when glDrawElements is called, the same buffer ID must be bound, and the fourth parameter to glDrawElements is replaced by an integer giving the position of the data within the VBO, given as the number of bytes from the start of the VBO:

gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, icosphereIndexID);
gl.glDrawElements(GL.GL_TRIANGLES, indexCount, GL.GL_UNSIGNED_INT, 0);

See the source code, IcosphereIFS.java, to see how all this is used in the context of a full program.

∗ ∗ ∗

This has been an admittedly short introduction to glDrawElements, but hopefully it is similar enough to glDrawArrays that the information given here is enough to get you started using it. Let's look at another, simpler, example that shows how you might use glDrawElements to draw an IFS representing a simple polyhedron. We will use the data for the same pyramid that was used as an example in the previous section:

float[][] vertexList = { {1,0,1}, {1,0,-1}, {-1,0,-1}, {-1,0,1}, {0,1,0} };
int[][] faceList = { {4,0,1}, {4,1,2}, {4,2,3}, {4,3,0}, {0,3,2,1} };

For use with glDrawElements, this data will be flattened into one-dimensional arrays. Since this is such a small object, we will not attempt to use vertex buffer objects. We will draw one face at a time, using a single normal vector for the face; I will specify the appropriate normal directly. Here is the code for drawing the pyramid (which would actually be scattered in several parts of a program):

// Declare variables to hold the data, probably as instance variables:

FloatBuffer vertexBuffer;
IntBuffer faceBuffer;

// Create the Java nio buffers, and fill them with data, perhaps in init():

vertexBuffer = BufferUtil.newFloatBuffer(15);
faceBuffer = BufferUtil.newIntBuffer(16);
float[] vertexList = { 1,0,1,  1,0,-1,  -1,0,-1,  -1,0,1,  0,1,0 };
int[] faceList = { 4,0,1,  4,1,2,  4,2,3,  4,3,0,  0,3,2,1 };
vertexBuffer.put(vertexList, 0, 15);
vertexBuffer.rewind();
faceBuffer.put(faceList, 0, 16);  // No need to rewind; position is set later.

// Set up for using a vertex array, in init() or display():

gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertexBuffer);
gl.glEnableClientState(GL.GL_VERTEX_ARRAY);

// Do the actual drawing, one face at a time. Set the position in
// the faceBuffer to indicate the start of the data in each case:

gl.glNormal3f(0.707f, 0.707f, 0);
faceBuffer.position(0);
gl.glDrawElements(GL.GL_POLYGON, 3, GL.GL_UNSIGNED_INT, faceBuffer);
gl.glNormal3f(0, 0.707f, -0.707f);
faceBuffer.position(3);
gl.glDrawElements(GL.GL_POLYGON, 3, GL.GL_UNSIGNED_INT, faceBuffer);
gl.glNormal3f(-0.707f, 0.707f, 0);
faceBuffer.position(6);
gl.glDrawElements(GL.GL_POLYGON, 3, GL.GL_UNSIGNED_INT, faceBuffer);
gl.glNormal3f(0, 0.707f, 0.707f);
faceBuffer.position(9);
gl.glDrawElements(GL.GL_POLYGON, 3, GL.GL_UNSIGNED_INT, faceBuffer);
gl.glNormal3f(0, -1, 0);
faceBuffer.position(12);
gl.glDrawElements(GL.GL_POLYGON, 4, GL.GL_UNSIGNED_INT, faceBuffer);

∗ ∗ ∗

This has not been a complete survey of the OpenGL routines that can be used for drawing primitives. There is a glDrawRangeElements routine, introduced in OpenGL version 1.2, that is similar to glDrawElements but can be more efficient. And glMultiDrawArrays, from version 1.4, can be used to do the job of multiple calls to glDrawArrays at once. Interested readers can consult an OpenGL reference.
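As a rough illustration of glDrawRangeElements (my own example; the exact Jogl parameter list should be checked against its documentation), the routine adds two parameters giving the smallest and largest vertex index that the face data can contain, which lets the driver limit how much vertex data the call might touch. For the icosphere, where every index lies between 0 and vertexCount−1 (vertexCount being the number of vertices, a name assumed here), the non-VBO call might look like:

// Same as glDrawElements, plus the minimum and maximum index used.
gl.glDrawRangeElements(GL.GL_TRIANGLES, 0, vertexCount - 1,
                       indexCount, GL.GL_UNSIGNED_INT, icosphereIndexBuffer);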

3.5  Viewing and Projection (online)

This chapter on geometry finishes with a more complete discussion of projection and viewing transformations.

3.5.1  Perspective Projection

There are two general types of projection, perspective projection and orthographic projection. Perspective projection gives a realistic view. That is, it shows what you would see if the OpenGL display rectangle on your computer screen were a window into an actual 3D world (one that could extend in front of the screen as well as behind it). It shows a view that you could get by taking a picture of a 3D world with a camera. In a perspective view, the apparent size of an object depends on how far it is away from the viewer. Only things that are in front of the viewer can be seen.

In OpenGL, setting up the projection transformation is equivalent to defining the view volume. For a perspective transformation, you have to set up a view volume that is a truncated pyramid. The viewer is assumed to be at the origin, facing in the direction of the negative z-axis, and the part of the world that is in view is an infinite pyramid, with the viewer at the apex of the pyramid, and with the sides of the pyramid passing through the sides of the viewport rectangle. However, OpenGL can't actually show everything in this pyramid, because of its use of the depth buffer to solve the hidden surface problem. Since the depth buffer can only store a finite range of depth values, it can't represent the entire range of depth values for the infinite pyramid that is theoretically in view. Only objects in a certain range of distances from the viewer are shown in the image that OpenGL produces. That range of distances is specified by two values, near and far. Both of these values must be positive numbers, and far must be greater than near. Anything that is closer to the viewer than the near distance or farther away than the far distance is discarded and does not appear in the rendered image. The volume of space that is represented in the image is thus a "truncated pyramid." This pyramid is the view volume:

[picture of the view volume, a truncated pyramid]

The view volume is bounded by six planes—the sides, top, and bottom of the pyramid. These planes are called clipping planes because anything that lies on the wrong side of each plane is clipped away. A rather obscure term for this shape is a frustum, and a perspective transformation can be set up with the glFrustum command:

gl.glFrustum(xmin,xmax,ymin,ymax,near,far)

The last two parameters specify the near and far distances from the viewer, as already discussed. Remember that the viewer is at the origin, facing in the direction of the negative z-axis.

(These are "eye coordinates" in OpenGL.) The first four parameters specify the sides of the pyramid: xmin, xmax, ymin, and ymax specify the horizontal and vertical limits of the view volume at the near clipping plane. For example, the coordinates of the upper-left corner of the small end of the pyramid are (xmin,ymax,−near). The near clipping plane is at z = −near, and the far clipping plane is at z = −far. Note that although xmin is usually equal to the negative of xmax and ymin is usually equal to the negative of ymax, this is not required. It is possible to have asymmetrical view volumes where the z-axis does not point directly down the center of the view.

When the glFrustum method is used to set up the projection transform, the matrix mode should be set to GL_PROJECTION, and the identity matrix should be loaded before calling glFrustum (since glFrustum modifies the existing projection matrix rather than replacing it, and you don't even want to try to think about what would happen if you combine several projection matrices into one). So, a use of glFrustum generally looks like this, leaving the matrix mode set to GL_MODELVIEW at the end:

gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustum(xmin,xmax,ymin,ymax,near,far);
gl.glMatrixMode(GL.GL_MODELVIEW);

This might be used in the init() method, at the beginning of the display() method, or possibly in the reshape() method, if you want to take the aspect ratio of the viewport into account.

The glFrustum method is not particularly easy to use. The GLU library includes the method gluPerspective as an easier way to set up a perspective projection. If glu is an object of type GLU, then

glu.gluPerspective(fieldOfViewAngle, aspect, near, far);

can be used instead of glFrustum. The fieldOfViewAngle is the vertical angle between the top of the view volume pyramid and the bottom. The aspect is the aspect ratio of the view, that is, the width of the pyramid at a given distance from the eye, divided by the height at the same distance. The value of aspect should generally be set to the aspect ratio of the viewport. The near and far parameters have the same meaning as for glFrustum.
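For example, a reshape() method that keeps the projection in sync with the window might look like the following sketch; the 60-degree field of view and the near/far values are arbitrary choices of mine, and glu is assumed to be an instance variable of type GLU:

public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) {
   GL gl = drawable.getGL();
   double aspect = (double)width / height;  // Aspect ratio of the viewport.
   gl.glMatrixMode(GL.GL_PROJECTION);
   gl.glLoadIdentity();
   glu.gluPerspective(60, aspect, 1, 100);  // 60-degree vertical field of view.
   gl.glMatrixMode(GL.GL_MODELVIEW);
}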

3.5.2  Orthographic Projection

Orthographic projections are comparatively easy to understand: A 3D world is projected onto a 2D image by discarding the z-coordinate of the eye-coordinate system. This type of projection is unrealistic in that it is not what a viewer would see. For example, the apparent size of an object does not depend on its distance from the viewer, and objects in back of the viewer as well as in front of the viewer are visible in the image. Orthographic projections are still useful, however, especially in interactive modeling programs where it is useful to see true sizes and angles, unmodified by perspective.

In fact, it's not really clear what it means to say that there is a viewer in the case of orthographic projection. Nevertheless, in OpenGL there is a viewer, which is located at the eye-coordinate origin, facing in the direction of the negative z-axis. Theoretically, a rectangular corridor extending infinitely in both directions, in front of the viewer and in back, would be in view. However, as with perspective projection, only a finite segment of this infinite corridor can actually be shown in an OpenGL image. This finite view volume is a parallelepiped—a rectangular solid—that is cut out of the infinite corridor by a near clipping plane and a far clipping plane. The value of far must be greater than near, but for an orthographic projection, the value of near is allowed to be negative, putting the "near" clipping plane behind the viewer, as it is in this illustration:

[picture of an orthographic view volume, with the near clipping plane behind the viewer]

An orthographic projection can be set up in OpenGL using the glOrtho method, which is generally called like this:

gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrtho(xmin,xmax,ymin,ymax,near,far);
gl.glMatrixMode(GL.GL_MODELVIEW);

The first four parameters specify the x- and y-coordinates of the left, right, bottom, and top of the view volume. Note that the last two parameters are near and far, not zmin and zmax. A negative value for near puts the near clipping plane on the positive z-axis, which is behind the viewer. In fact, the minimum z-value for the view volume is −far and the maximum z-value is −near. However, it is often the case that near = −far, and if that is true, then the minimum and maximum z-values turn out to be near and far after all!

3.5.3  The Viewing Transform

To determine what a viewer will actually see in a 3D world, you have to do more than specify the projection; you also have to position the viewer in the world. That is done with the viewing transformation, which places the camera in the world and points it. The viewing transformation says where the viewer really is, in terms of the world coordinate system, and where the viewer is really facing. (The projection transformation can be compared to choosing which camera and lens to use to take a picture.)

Recall that OpenGL has no viewing transformation as such. It has a modelview transformation, which combines the viewing transform with the modeling transform. While the viewing transformation moves the viewer, the modeling transformation moves the objects in the 3D world. The point here is that these are really equivalent operations, if not logically, then at least in terms of the image that is produced when the camera finally snaps a picture. Suppose, for example, that we would like to move the camera from its default location at the origin back along the positive z-axis from the origin to the point (0,0,20). This operation has exactly the same effect as moving the world, and the objects that it contains, 20 units in the negative direction along the z-axis—whichever operation is performed, the camera ends up in exactly the same position relative to the objects. It follows that both operations are

implemented by the same OpenGL command, gl.glTranslatef(0,0,-20). More generally, applying any transformation to the camera is equivalent to applying the inverse, or opposite, of the transformation to the world. Rotating the camera to the left has the same effect as rotating the world to the right. This even works for scaling: Imagine yourself sitting inside the camera looking out. If the camera shrinks (and you along with it), it will look to you like the world outside is growing—and what you see doesn't tell you which is really happening.

Suppose that we use the commands

gl.glRotatef(90,0,1,0);
gl.glTranslatef(10,0,0);

to establish the viewing transformation. As a modeling transform, these commands would first translate an object 10 units in the positive x-direction, then rotate the object 90 degrees about the y-axis. An object that was originally at the origin ends up on the negative z-axis; the object is then directly in front of the viewer, at a distance of ten units. If we consider the same transformation as a viewing transform, the effect on the viewer is the inverse of the effect on the world: the transform commands first rotate the viewer −90 degrees about the y-axis, then translate the viewer 10 units in the negative x-direction. This leaves the viewer on the negative x-axis at (−10,0,0), looking at the origin, at a distance of 10 units. An object at the origin will then be directly in front of the viewer. Under both interpretations of the transformation, the relationship of the viewer to the object is the same in the end.

Since this can be confusing, the GLU library provides a convenient method for setting up the viewing transformation:

glu.gluLookAt( eyeX,eyeY,eyeZ, refX,refY,refZ, upX,upY,upZ );

This method places the camera at the point (eyeX,eyeY,eyeZ), looking in the direction of the point (refX,refY,refZ). The camera is oriented so that the vector (upX,upY,upZ) points upwards in the camera's view. In the default view, the eye is at the origin (0,0,0), looking in the direction of the negative z-axis, and the up vector is the y-axis. This method is meant to be called at the beginning of the display() method to establish the viewing transformation.
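For example, the viewing transform discussed above, which leaves the viewer at (−10,0,0) looking at the origin, could be set up directly with gluLookAt; the specific numbers here just repeat that example:

// Viewer at (-10,0,0), looking at the origin, with the y-axis up.
glu.gluLookAt( -10,0,0,  0,0,0,  0,1,0 );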

∗ ∗ ∗

The Camera class that I wrote for the glutil package combines the functions of the projection and viewing transformations. For an object of type Camera, the method camera.apply(gl) is meant to be called at the beginning of the display method to set both the projection and the view. The projection will be set with a call to either glOrtho or glFrustum, depending on whether the camera is set to use orthographic or perspective projection, and the viewing transform will be established with a call to glu.gluLookAt. The parameters that will be used in these methods must be set before the call to camera.apply, by calling other methods in the Camera object. The method

camera.setView(eyeX,eyeY,eyeZ, refX,refY,refZ, upX,upY,upZ);

sets the parameters that will be used in the call to glu.gluLookAt. And the method

camera.setLimits(xmin,xmax,ymin,ymax,zmin,zmax)

is used to specify the projection. The parameters do not correspond directly to the parameters to glOrtho or glFrustum. They are specified relative to a coordinate system in which the reference point (refX,refY,refZ) has been moved to the origin, the up vector (upX,upY,upZ) has been rotated onto the positive y-axis, and the viewer has been placed on the positive z-axis, at a distance from the origin equal to the distance between (eyeX,eyeY,eyeZ) and (refX,refY,refZ). (These are almost standard eye coordinates, except that the viewer has been moved some distance backwards along the positive z-axis.) In these coordinates, xmin, xmax, ymin, and ymax specify the left-to-right and bottom-to-top limits of the view volume on the xy-plane, and zmin and zmax specify the minimum and maximum z-values for the view volume. (The x and y limits might be adjusted, depending on the configuration of the camera, to match the aspect ratio of the viewport.) Basically, you use the camera.setLimits command to establish a box around the reference point (refX,refY,refZ) that you would like to be in view.

3.5.4  A Simple Avatar

In all of our sample programs so far, the viewer has stood apart from the world, observing it from a distance. In many applications, such as 3D games, the viewer is a part of the world and gets to move around in it. With the right viewing transformation and the right user controls, this is not hard to implement. The sample program WalkThroughDemo.java is a simple example where the world consists of some random shapes scattered around a plane, and the user can move among them using the keyboard's arrow keys. You can find an applet version of the program in the on-line version of this section. Here is a snapshot:

[snapshot from WalkThroughDemo]

The viewer in this program is represented by an object of the class SimpleAvatar, which I have added to the glutil package. A SimpleAvatar represents a point of view that can be rotated and moved. The rotation is about the viewer's vertical axis and is controlled in the applet by the left and right arrow keys. Pressing the left-arrow key rotates the viewer through a positive angle, which the viewer perceives as turning towards the left; similarly, the right-arrow key rotates the viewer towards the right. The up-arrow and down-arrow keys move the viewer forward or backward in the direction that the viewer is currently facing. The motion is parallel to the xz-plane; the viewer's height above this plane does not change.

In terms of the programming, the viewer is subject to a rotation about the y-axis and a translation. These transformations are applied to the viewer in that order (not the reverse—a rotation following a translation to the point (x,y,z) would move the viewer away from that point). Remember that to apply a transformation to a viewer, you have to apply the inverse of that transformation to the world. Basically, this viewing transform is equivalent to applying the inverse transform to the world, and it is implemented in the code as

5. but the viewer we are talking about is really a point of view. However. in . we can return to the idea of scene graphs.glLoadIdentity().CHAPTER 3. it’s possible to switch from one viewer to another and to redraw the image from the new point of view. is that the viewer’s projection and viewing transformation have to be established before anything is drawn. then the viewer should be subject to exactly the same transformation as the car and this should allow the viewer to turn and move along with the car.-y.z ) is the point to which the viewer is translated. And it means that a viewer can be part of a complex. and transformations can be applied to objects on any level of the hierarchy. hierarchical model. we need to apply the inverses of these transformations. It can be part of a complex object. This would mean that the viewer could be subjected to transformations just like any other node in the scene graph.1. Of course.5 Viewer Nodes in Scene Graphs For another example of the same idea. For example. to establish the appropriate projection and viewing transformations to show the world as seen from the avatar’s point of view. To apply the inverse of this transformation. But if we truly want to make our viewer part of the scene. gl. which represents exactly the type of viewer node that I have been discussing. there can only be one view at a time. in the same way that a cube or sphere can be represented by a node. If we place a viewer into a scene graph. The overall transformation that is finally applied to an object consists of the product of all the transformations from the root of the scene graph to the object. the transformation that we want to apply to the viewer is the product of all the transformations applied to nodes along a path from the root of the scene graph to the AvatarNode. then there should be a way for the viewer to be part of the data for the scene. When the viewer is represented by an AvatarNode in a scene graph.5.0). We would like to be able to represent the viewer as a node in a scene graph. And while there can be several viewer nodes in a scene graph. First of all. It could be “attached” to a visible object.glRotated(-angle. you have to apply the inverse of that transformation to the world. 3.-z). where angle is the rotation applied to the viewer and (x.y. represented by a projection and viewing transformation. gl. The package simplescenegraph3d contains a very simple implementation of scene graphs for 3D worlds. A scene graph is a data structure that represents the contents of a scene. then the viewer should be subject to transformation in exactly the same way. and it will be carried along with that object when the object is transformed. and that method is meant to be called in the display method. if the viewer is part of a complex object representing a car. the viewer is not quite the same as other objects in the scene graph. before anything is drawn. GEOMETRY 97 gl. which is crucial. There has to be one active viewer whose view is shown in the rendered image. the viewer is not a visible object. The viewer might be the driver in a moving car or a rider on an amusement park ride. This means that we can’t simply traverse the scene graph and implement the viewer node when we come to it—the viewer node has to be applied before we even start traversing the scene graph. that is part of the scene graph. A scene is a hierarchical structure in which complex objects can be built up out of simpler objects. 
The package simplescenegraph3d contains a very simple implementation of scene graphs for 3D worlds. One of the classes in this package is AvatarNode, which represents exactly the type of viewer node that I have been discussing. An AvatarNode can be added to a scene graph and transformed just like any other node. When the viewer is represented by an AvatarNode in a scene graph, the transformation that we want to apply to the viewer is the product of all the transformations applied to nodes along a path from the root of the scene graph to the AvatarNode. Remember that to apply a transformation to a viewer, you have to apply the inverse of that transformation to the world, which means applying the inverses of the transformations along that path, in the opposite order. To do this,

we can start at the AvatarNode and walk up the path in the scene graph, from child node to parent node, until we reach the top. Along the way, we apply the inverse of the transformation for each node. To make it possible to navigate a scene graph in this way, each node has a parent pointer that points from a child node to its parent node. There is also a method applyInverseTransform that applies the inverse of the transform in the node. So, the code in the AvatarNode class for setting up the viewing transformation is:

   SceneNode3D node = this;
   while (node != null) {
      node.applyInverseTransform(gl);
      node = node.parent;
   }

An AvatarNode has an apply method that should be called at the beginning of the display method to set up the projection and viewing transformations that are needed to show the world from the point of view of that avatar.

In the on-line version of this section, you will find an applet that uses this technique to show you a moving scene from several possible viewpoints, including two views that are represented by objects of type AvatarNode in a scene graph. A pop-up menu below the scene allows the user to select one of three possible points of view. (The third view is the familiar global view in which the user can rotate the scene using the mouse.) The source code for the program is MovingCameraDemo.java.
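To see how the pieces fit together, here is a minimal sketch of a display method that uses an avatar node as the active viewer. The names world and avatarNode are my own placeholders for the scene graph root and the active AvatarNode; the sample programs organize this somewhat differently:

   public void display(GLAutoDrawable drawable) {
      GL gl = drawable.getGL();
      gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
      avatarNode.apply(gl);  // Set up projection and viewing transforms first,
                             // by walking up the graph from the avatar node.
      world.draw(gl);        // Only then traverse the graph to draw geometry.
   }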

Chapter 4

Light and Material

In order to create realistic scenes, we need to simulate the way that light interacts with objects in the world, and we need to consider how that world is perceived by people's visual systems. OpenGL uses a model of light and vision that is a rather crude approximation for reality. It is far from good enough to fool anyone into thinking that they are looking at reality, or even at a photograph. But when used well, it is good enough, at least, to let the viewer understand the 3D world that is being represented. The goal of OpenGL is to produce a good-enough approximation of realism in a reasonable amount of time, preferably fast enough for interactive graphics at 30 to 60 frames per second. Much more realism is possible in computer graphics, but the techniques that are required to achieve such realism are not built into standard OpenGL. Some of them can be added to OpenGL using the GL Shading Language, which makes it possible to reprogram parts of the OpenGL processing pipeline. But some of them require more computational power than is currently available in a desktop computer.

This chapter will look at the OpenGL approach to simulating light and materials. So far, we have used only the default setup for lighting, and we have used only simple color for materials. It's time to look at the full range of options and at a bit of the theory behind them.

4.1 Vision and Color (online)

Most people are familiar with some basic facts about the human visual system. They know that the full spectrum of colors can be made from varying amounts of red, blue, and green. And they might know that this is because the human eye has three different kinds of "cone cells" for color vision: one kind that sees red light, one that sees green, and one that sees blue. These facts are mostly wrong, or at least, they are only approximations.

The physical property of light that corresponds to the perception of color is wavelength. The detailed physics of light is not part of our story here, but for our purposes, light is electromagnetic radiation, a kind of wave that propagates through space. Electromagnetic radiation that can be detected by the human visual system has a wavelength between about 400 and 700 nanometers. All the colors of the rainbow, from red to violet, are found in this range. This is a very narrow band of wavelengths; outside this band are other kinds of electromagnetic radiation, including ultraviolet and infrared just next to the visible band, as well as X-rays, gamma rays, and microwave and radio waves. To fully describe a light source, we have to say how much of each visible wavelength it contains. Light from the sun is a combination of all of the visible wavelengths of light (as well

as wavelengths outside the visible range), with a characteristic amount of each wavelength. Light from other sources will have a different distribution of wavelengths. When light from some source strikes a surface, some of it is absorbed and some is reflected (and some might pass into and through the object). The appearance of an object depends on the nature of the light that illuminates it and on how the surface of the object interacts with different wavelengths of light. An object that reflects most of the red light that hits it and absorbs most of the other colors will appear red, if the light that illuminates it includes some red for it to reflect.

The visual system in most people does indeed have three kinds of cone cells that are used for color vision (as well as "rod cells" that do not see in color). However, the three types of cone cell do not detect exclusively red, green, and blue light. Each type of cell responds to a range of wavelengths, and the wavelength bands for the three types of cell have a large overlap. That is, to the extent that our experience of a color boils down to a certain level of excitation of each of the three types of cone cells, it is possible in many cases to find a combination of red, green, and blue that will produce the same levels of excitation. "Red", "green", and "blue" here are inexact terms, but they can be taken to mean three wavelengths or combinations of wavelengths that are used as primary colors. It is true that red, green, and blue light can be combined to duplicate many of the colors that can be seen. But no matter what three primary colors are used, it is impossible to duplicate all visible colors with combinations of the three primary colors.

Display devices such as computer screens and projectors do, in fact, produce color by combining three primary colors. We have been discussing an RGB (red/green/blue) color system, which is appropriate for devices that make color by combining light of different wavelengths. Not all devices do this. A printer, for example, combines differently colored dyes or inks. This is a "subtractive" color system, since each color of ink subtracts some wavelengths from the light that strikes the ink. In this case, the color depends on which colors are absorbed by the ink and which are reflected by the inks, and the light that will produce the perceived color has to come from an outside source. For example, a yellow ink absorbs blue light and reflects red and green. A magenta ink absorbs green while reflecting red and blue. A combination of yellow ink and magenta ink will absorb both blue and green to some extent, leaving mostly red light to be reflected. So in this system, red is a combination of yellow and magenta. Cyan, magenta, and yellow make up the CMY color system. By adding cyan ink to the yellow and magenta, a wide range of colors can be produced. In practice, black ink is often added to the mix, giving the CMYK color system, where the "K" stands for black.
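In an idealized model of the subtractive system, each ink component simply subtracts the corresponding additive primary. The following conversion is a standard textbook idealization, shown here as my own illustration (real printers require device-specific correction):

   // Idealized conversion from additive RGB to subtractive CMY:
   // each ink amount is the fraction of that primary to be absorbed.
   static float[] rgbToCMY(float r, float g, float b) {
      return new float[] { 1 - r, 1 - g, 1 - b };
   }

For example, pure red (1,0,0) corresponds to no cyan but full magenta and yellow, matching the observation that red is a combination of yellow and magenta.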
RGB and CMYK are only two of many color systems that are used to describe colors. Users of Java are probably familiar with the HSB (hue/saturation/brightness) color system. OpenGL uses RGB colors, so I will not discuss the other possibilities further here.

The range of colors that can be shown by a given device is called the color gamut of that device. The color gamut of a CMYK printer is different from and smaller than the color gamut of a computer screen, and it is difficult to match the colors from a printer to the colors on the screen (let alone to those in real life). The color gamut of a printer can be improved by using additional inks. But no device has a color gamut that includes all possible colors.

A substantial number of people, by the way, are color blind to some extent because they lack one or more of the three types of cone cells. Many, but not all, of them have trouble distinguishing red from green; for people with the most common type of color blindness, red and green look the same. Programmers should avoid coding essential information only in the colors used, especially when the essential distinction is between red and green.

∗ ∗ ∗

In the rest of this chapter, we'll look at the OpenGL approach to light and material. There are a few general ideas about the interaction of light and materials that you need to understand before we begin.

When light strikes a surface, some of it will be reflected. Exactly how it reflects depends in a complicated way on the nature of the surface, what I am calling the material properties of the surface. In OpenGL (and in many other computer graphics systems), the complexity is approximated by two general types of reflection, specular reflection and diffuse reflection.

[Figure: two diagrams of incoming rays of light striking a surface. With specular reflection, the viewer sees a reflection at only one point; with diffuse reflection, light from all points on the surface reaches the viewer.]

In perfect specular ("mirror-like") reflection, an incoming ray of light is reflected from the surface intact. The reflected ray makes the same angle with the surface as the incoming ray. A viewer can see the reflected ray only if the viewer is in the right position, somewhere along the path of the reflected ray. Even if the entire surface is illuminated by the light source, the viewer will only see the reflection of the light source at those points on the surface where the geometry is right. Such reflections are referred to as specular highlights. In practice, we think of a ray of light as being reflected not as a single perfect ray, but as a cone of light, which can be more or less narrow. Specular reflection from a very shiny surface produces very narrow cones of reflected light; specular highlights on such a material are small and sharp. A duller surface will produce wider reflected light cones and bigger, fuzzier specular highlights. In OpenGL, the material property that determines the size and sharpness of specular highlights is called shininess.

In pure diffuse reflection, an incoming ray of light is scattered in all directions equally. A viewer would see reflected light from all points on the surface, and the surface would appear to be evenly illuminated.

When light strikes a surface, some wavelengths of light can be absorbed, some can be reflected diffusely, and some can be reflected specularly. The degree to which a material reflects light of different wavelengths is what constitutes the color of the material. We now see that a material can have two different colors: a diffuse color that tells how the material reflects light diffusely, and a specular color that tells how it reflects light specularly. The diffuse color is the basic color of the object. The specular color determines the color of specular highlights. OpenGL goes even further. There are, in fact, four colors associated with a material. The third color is the ambient color of the material. Ambient light refers to a general level of illumination that does not come directly from a light source. It consists of light that has been reflected and re-reflected so many times that it is no longer coming from any particular source. Ambient light is why shadows are not absolutely black. In fact, ambient light is only a crude approximation for the reality of multiply reflected light, but it is better than ignoring multiple reflections entirely. The ambient color of a material determines how it will reflect

various wavelengths of ambient light. Ambient color is generally set to be the same as the diffuse color.

The fourth color associated with a material is an emission color. The emission color is color that does not come from any external source, and therefore seems to be emitted by the material itself. In the presence of light, the object will be brighter than can be accounted for by the light that illuminates it, and in that sense it might appear to glow. This does not mean that the object is giving off light that will illuminate other objects, but it does mean that the object can be seen even if there is no source of light (not even ambient light). The emission color is used only rarely.

In the next section, we will look at how to use materials in OpenGL. The section after that will cover working with light sources. In standard OpenGL processing (when it has not been overridden by the GL Shading Language), material properties are assigned to vertices. Lighting calculations are done for vertices only, and the results of those calculations are interpolated to other points of the object. The calculations involved are summarized in a mathematical formula known as the lighting equation, which will be covered in the next section. But you should know that the whole calculation is a rather loose approximation for physical reality; the system is designed for speed of calculation rather than perfect realism, and the use of interpolation is another entire level of approximation that can introduce serious distortion, especially when using large polygons.

4.2 OpenGL Materials (online)

OpenGL uses the term material to refer to the properties of an object that determine how it interacts with light.

4.2.1 Setting Material Properties

Material properties are set using the glMaterial* family of commands, principally glMaterialfv and glMateriali. (There are no versions of this command that take parameters of type double.) These commands are defined in the GL class and are specified by

   public void glMateriali(int face, int propertyName, int propertyValue)

   public void glMaterialfv(int face, int propertyName, float[] propertyValue, int offset)

The first parameter tells which face of a polygon the command applies to. It must be one of the constants GL.GL_FRONT, GL.GL_BACK, or GL.GL_FRONT_AND_BACK. This reflects the fact that different material properties can be assigned to the front and the back faces of polygons, since it is sometimes desirable for the front and the back faces to have a different appearance. When the face parameter is GL.GL_FRONT_AND_BACK, the command sets the value of the material property for both sides simultaneously. The front material is also used for point and line primitives, which have their own set of properties. Note that the back material is ignored completely unless two-sided lighting has been turned on by calling gl.glLightModeli(GL.GL_LIGHT_MODEL_TWO_SIDE, GL.GL_TRUE).

The second parameter for glMaterial is propertyName, which tells which material property is being set. For glMateriali, the only legal property name is GL.GL_SHININESS. This property determines the size and sharpness of specular highlights, and the property value must be an integer in the range from 0 to 128, inclusive. Larger values produce smaller, sharper highlights. The default value is zero, which gives very large highlights that are almost never desirable. Try

values close to 10 for a larger, fuzzier highlight and values of 100 or more for a small, sharp highlight.

(It's worth noting that specular highlights are one area where polygon size can have a significant impact. Since lighting calculations are only done at vertices, OpenGL will entirely miss any specular highlight that should occur in the middle of a polygon but not at the vertices. And when a highlight does occur at a vertex, the interpolation process can smear out the highlight to points that should not show any specular highlight at all. Using very small polygons will alleviate the problem, but will require more computation.)

For glMaterialfv, the propertyName parameter can be GL.GL_AMBIENT, GL.GL_DIFFUSE, GL.GL_SPECULAR, GL.GL_EMISSION, or GL.GL_AMBIENT_AND_DIFFUSE. The names refer to the four different types of material color supported by OpenGL. The property name GL.GL_AMBIENT_AND_DIFFUSE allows both the ambient and the diffuse material colors to be set simultaneously to the same value. The third parameter for glMaterialfv is an array of type float[], and the fourth parameter specifies an index in the array. (The index is often zero and is absent in the C API.) Four numbers in the array, starting at the specified index, specify the red, green, blue,
and alpha components of a color. In the case of the red, green, and blue components of the ambient, diffuse, or specular color, the term "color" really means reflectivity. That is, the red component of a color gives the proportion of red light hitting the surface that is reflected by that surface, and similarly for green and blue. There are three different types of reflective color because there are three different types of light in the environment, and a material can have a different reflectivity for each type of light. (More about that in the next section.) The red, green, and blue component values are generally between 0 and 1, but they are not clamped to that range; it is perfectly legal to have a negative color component or a color component that is greater than 1. For example, setting the red component to 1.5 would mean that the surface reflects 50% more red light than hits it (a physical impossibility, but something that you might want to do for effect). The alpha component is used for blending, or transparency; for the time being, we will set alpha equal to 1.

The default material has ambient color (0.2,0.2,0.2,1) and diffuse color (0.8,0.8,0.8,1). Specular and emission color are both black, that is, (0,0,0,1). It's not surprising that materials, by default, do not emit extra color. However, it is a little surprising that materials, by default, have no specular reflection. This means that the objects that you have seen in all our examples so far exhibit ambient and diffuse color only, with no specular highlights. Here is an image to change that:

This image shows eight spheres that differ only in the value of the GL_SHININESS material property. For the sphere on the left, the shininess is 0, which leads to an ugly specular "highlight" that covers an entire hemisphere. Going from left to right, the shininess increases by 16 from one sphere to the next. The material colors for this image were specified using

   gl.glMaterialfv(GL.GL_FRONT, GL.GL_AMBIENT_AND_DIFFUSE,
                            new float[] { 0.75f, 0.75f, 0, 1 }, 0);
   gl.glMaterialfv(GL.GL_FRONT, GL.GL_SPECULAR,
                            new float[] { 0.75f, 0.75f, 0.75f, 1 }, 0);

for a general yellow appearance. The ambient and diffuse material colors are set to (0.75,0.75,0,1), and the specular color is (0.75,0.75,0.75,1), which adds some blue to the specular highlight, making it appear whiter as well as brighter than the rest of the sphere. The shininess for sphere number i was set using

   gl.glMateriali(GL.GL_FRONT, GL.GL_SHININESS, i*16);

4.2.2 Color Material

Materials are used in lighting calculations and are ignored when lighting is not enabled. Similarly, when lighting is enabled, the current color as set by glColor* is ignored, unless GL_COLOR_MATERIAL has been enabled. That is, calling gl.glEnable(GL.GL_COLOR_MATERIAL) will cause OpenGL to take the current color into account even while lighting is enabled. More specifically, enabling GL_COLOR_MATERIAL tells OpenGL to substitute the current color for the ambient and for the diffuse material color when doing lighting computations. The specular and emission material colors are not affected. However, you can change this default behavior of color material by using the method

   public void glColorMaterial(int face, int propertyName)

to say which material properties should be tied to glColor. The parameters in this method correspond to the first two parameters of glMaterialfv. That is, face can be GL.GL_FRONT, GL.GL_BACK, or GL.GL_FRONT_AND_BACK, and propertyName can be GL_AMBIENT_AND_DIFFUSE, GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, or GL_EMISSION. The most likely use of this method is to call

   gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_DIFFUSE);

so that GL_COLOR_MATERIAL will affect only the diffuse material color and not the ambient color.

∗ ∗ ∗

In Section 3.4, we saw how to use the glDrawArrays and glDrawElements methods with vertex arrays, normal arrays, and texture coordinate arrays. It is also possible to specify a color array to be used with these methods. Use of a color array must be enabled by calling gl.glEnableClientState(GL.GL_COLOR_ARRAY), and gl.glColorPointer must be called to specify the location of the data. Exactly how to do this depends on whether a vertex buffer object is used, but the format is exactly parallel to that used with gl.glVertexPointer for specifying the location of vertex data. (More exactly, in Jogl, the color data must actually be stored in a Java nio Buffer rather than an array.) However, it's important to note that using GL_COLOR_ARRAY is equivalent to calling glColor* for each vertex that is generated. If lighting is enabled, the color data will not have any effect, unless GL_COLOR_MATERIAL has been enabled. So, if you are using a color array while lighting is enabled, you must call gl.glEnable(GL.GL_COLOR_MATERIAL) before calling gl.glDrawArrays or gl.glDrawElements; otherwise, the color data will be ignored. You could also, optionally, call gl.glColorMaterial to say which material property the color data will affect.
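Putting these rules together, a drawing routine that uses a color array under lighting might look like the following sketch. The Buffers vertexBuffer, normalBuffer, and colorBuffer and the count vertexCount are assumed to have been prepared elsewhere:

   gl.glEnable(GL.GL_COLOR_MATERIAL);
   gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_DIFFUSE); // tie color data to diffuse only
   gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
   gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
   gl.glEnableClientState(GL.GL_COLOR_ARRAY);
   gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertexBuffer);
   gl.glNormalPointer(GL.GL_FLOAT, 0, normalBuffer);
   gl.glColorPointer(3, GL.GL_FLOAT, 0, colorBuffer);
   gl.glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount);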

4.3 OpenGL Lighting (online)

Our 3D worlds so far have been illuminated by just one light: a white light shining from the direction of the viewer onto the scene. This "viewpoint light" shines on everything that the viewer can see, and it is sufficient for some purposes. You get this light just by turning on lighting with gl.glEnable(GL.GL_LIGHTING) and enabling light number zero with gl.glEnable(GL.GL_LIGHT0). However, in OpenGL, it is possible to define multiple lights. They can shine from different directions and can have various colors. Every OpenGL implementation is required to support at least eight lights, which are identified by the constants GL_LIGHT0, GL_LIGHT1, ..., and GL_LIGHT7. (These constants are consecutive integers.) Each light can be separately enabled and disabled; they are all off by default. Only the lights that are enabled while a vertex is being rendered can have any effect on that vertex.

4.3.1 Light Color and Position

A light can have color. Just as the color of a material is more properly referred to as reflectivity, the color of a light is more properly referred to as intensity or energy. A light color in OpenGL is specified by four numbers giving the red, green, blue, and alpha intensity values for the light. These values are often between 0 and 1 but are not clamped to that range. (I have not, however, been able to figure out how the alpha component of a light color is used, or even if it is used; it should be set to 1.)

In fact, each light in OpenGL has an ambient, a diffuse, and a specular color, which are set using glLightfv with property names GL.GL_AMBIENT, GL.GL_DIFFUSE, and GL.GL_SPECULAR respectively. The diffuse intensity of a light is the aspect of the light that interacts with diffuse material color, and the specular intensity of a light is what interacts with specular material color. It is common for the diffuse and specular light intensities to be the same.

The ambient intensity of a light works a little differently. Recall that ambient light is light that is not directly traceable to any light source. Still, it has to come from somewhere, and we can imagine that turning on a light should increase the general level of ambient light in the environment. The ambient intensity of a light in OpenGL is added to the general level of ambient light. This ambient light interacts with the ambient color of a material, and this interaction has no dependence on the position of any light source. So, a light doesn't have to shine on an object for the object's ambient color to be affected by the light source.

Light number zero, as we have seen, is white by default. All the other lights, however, are black; until you set their colors, they provide no illumination at all even if they are enabled.
The color and other properties of a light can be set with the glLight* family of commands, most notably

   public void glLightfv(int light, int propName, float[] propValue, int offset)

The first parameter, light, specifies the light whose property is being set. It must be one of the constants GL.GL_LIGHT0, GL.GL_LIGHT1, and so on. The second parameter, propName, specifies which property of the light is being set. Commonly used properties are GL.GL_POSITION, GL.GL_AMBIENT, GL.GL_DIFFUSE, and GL.GL_SPECULAR. Some other properties are discussed in the next subsection. The third parameter of glLightfv is an array that contains the new value for the property, and the fourth parameter is, as usual, the starting index of that value in the array. For example, we could make a somewhat blue light with

   float[] bluish = { 0.3f, 0.3f, 0.7f, 1 };
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, bluish, 0);
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, bluish, 0);
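Defining a light's colors is not by itself enough to make it shine. As a reminder, lighting as a whole and each individual light must be enabled, typically in the init() method; a minimal sketch for the bluish light above:

   gl.glEnable(GL.GL_LIGHTING);  // turn on lighting calculations
   gl.glEnable(GL.GL_LIGHT0);    // the default white viewpoint light
   gl.glEnable(GL.GL_LIGHT1);    // the bluish light configured above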

glLightfv(GL.z.GL POSITION.z ) specifies the direction of the light: The light rays shine in the direction of the line from the point (x. is 1. So. when we say that the default light shines towards the negative direction of the z-axis. new float[] { 1.y/w.GL AMBIENT. w. in which the viewer is at the origin. new float[] { 0.z ). w. in a cone whose vertex is at that point.GL POSITION. we might want our blue light to add a slight bluish tint to the ambient light. The position specified for a light is transformed by the modelview matrix that is in effect at the time the position is set using glLightfv.0. positional and directional . setting the position of light number 1 with gl. ∗ ∗ ∗ The other major property of a light is its position.1f.glTranslatef(1.z ).) On the other hand. The property value is an array of four numbers (x. puts the light in the same place as gl. is zero. If cam is a Camera. and specular colors. then the light is directional and the point (x.3). GL. ∗ ∗ ∗ . This is related to homogeneous coordinates: The source of the light can be considered to be a point at infinity in the direction of (x.apply(gl) can be called at the beginning of display() to set up the projection and viewing transformations. Note that the default light position is. and in this case not one that has a basis in the physics of the real world.3.GL POSITION. 0). in effect. A directional light. A directional light imitates light from the sun. whose rays are essentially parallel by the time they reach the earth. if the fourth number. then that light will rotate along with the rest of the scene. Since ambient light should never be too intense. There are two types of lights.w ). then cam. the default position makes a light into a viewpoint light that shines in the direction the viewer is looking. 0}.0. this is really homogeneous coordinates: Any nonzero value for w specifies a positional light at the point (x/w. The type and position or direction of a light are set using glLightfv with property name equal to GL. 0. representing a directional light shining from the positive direction of the z-axis (and towards the negative direction of the z-axis).y.apply.y.2.1. the ambient intensity of a light source should always be rather small. if the light is made into a spotlight.2. diffuse.1 }. For example.3. lights are treated in the same way as other objects in OpenGL in that they are subject to the same transformations. the light source just has to be turned on.GL LIGHT1. on the other hand. (See Subsection 3. Real light sources do not have separate ambient.GL LIGHT1. or. Again. The Camera class in package glutil allows the user to rotate the 3D world using the mouse.glLightfv(GL. 0).GL LIGHT1.1.glLightfv(GL. of which at least one must be non-zero. That is.y. shines in parallel rays from some set direction. set before any transformation has been applied and is therefore given directly in eye coordinates. Light is emitted from that point—in all directions by default. When the fourth number. GL. LIGHT AND MATERIAL 106 shine on an object for the object’s ambient color to be affected by the light source.y. looking along the negative z-axis. The default position for all lights is (0. If a light position is set up after calling cam. Thus.y. new float[] { 0.z ) to the origin.CHAPTER 4.0).z/w ). then the light is positional and is located at the point (x. For example. we mean the z-axis in eye coordinates. 0). A positional light represents a light source at some point in 3D space. GL. 
The position specified for a light is transformed by the modelview matrix that is in effect at the time the position is set using glLightfv. Thus, lights are treated in the same way as other objects in OpenGL in that they are subject to the same transformations. For example, setting the position of light number 1 with

   gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, new float[] { 1, 2, 3, 1 }, 0);

puts the light in the same place as

   gl.glTranslatef(1, 2, 3);
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, new float[] { 0, 0, 0, 1 }, 0);

Note that the default light position is, in effect, set before any transformation has been applied and is therefore given directly in eye coordinates, in which the viewer is at the origin, looking along the negative z-axis. That is, the default position makes a light into a viewpoint light that shines in the direction the viewer is looking. So, when we say that the default light shines towards the negative direction of the z-axis, we mean the z-axis in eye coordinates.

The Camera class in package glutil allows the user to rotate the 3D world using the mouse. If cam is a Camera, then cam.apply(gl) can be called at the beginning of display() to set up the projection and viewing transformations. If a light position is set up after calling cam.apply, then that light will rotate along with the rest of the scene.
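Since light positions pass through the modelview transformation, a light can be animated with ordinary transforms. Here is a small sketch of the idea; the variable sunAngle and the specific numbers are my own assumptions, not code from the sample programs:

   gl.glPushMatrix();
   gl.glRotated(sunAngle, 0, 0, 1);  // sunAngle changes from frame to frame
   gl.glTranslated(10, 0, 0);        // move out to the sun's distance
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION,
                new float[] { 0, 0, 0, 1 }, 0);  // positional light at the sun's center
   gl.glPopMatrix();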

∗ ∗ ∗

The sample program LightDemo.java demonstrates various aspects of light and material, and the on-line version of this section includes this program as an applet. In the LightDemo example, a wave-like surface is illuminated by three lights: a red light that shines from above, a blue light that shines from below, and a white light that shines down the x-axis. (In this example, the z-axis is pointing up, and the x-axis is pointing out of the screen.) The surface itself is gray, although it looks colored under the colored lights. Both the surface material and the lights have a non-zero specular component to their color. A set of axes and a grid of lines on the surface are also shown, at the user's option. (These features are drawn with lighting turned off.)

The program also has controls that let the user turn the red, blue, and white lights, as well as ambient light, on and off. If all the lights are turned off, the surface disappears entirely. If only ambient light is on, the surface appears as a flat patch of gray; ambient light by itself does not give any 3D appearance to the surface, since it does not depend in any way on the orientation of the surface or the location of any light source. The user can use the mouse to rotate the scene, and a control below the display determines whether the lights are fixed with respect to the viewer or with respect to the world. In the first case, the surface rotates but the lights don't move, so, as the user rotates the scene, the part of the surface that is illuminated by the red light is whatever part happens to be facing upwards in the display. In the second case, the lights rotate along with the surface, so the red light always illuminates the same part of the surface,
no matter how the scene has been rotated. Note that even in the second case, the positions of specular highlights on the surface do change as the scene is rotated, since specular highlights depend on the position of the viewer relative to the surface, as well as on the position of the lights relative to the surface. I encourage you to try the applet or the corresponding application, and to take a look at the source code. The user should try other combinations of settings and try to understand what is going on.

4.3.2 Other Properties of Lights

In addition to color and position, lights have six other properties that are more rarely used. These properties have to do with spotlights and attenuation. Five of the six properties are numbers rather than arrays and are set using either of the following forms of glLight*:

   public void glLighti(int light, int propName, int propValue)
   public void glLightf(int light, int propName, float propValue)

It is possible to turn a positional light into a spotlight, which emits light in a cone of directions, rather than in all directions. (For directional lights, spotlight settings are ignored.) A positional light is a spotlight if the value of its GL_SPOT_CUTOFF property is set to a number in the range 0 to 90. The value of this property determines the size of the cone of light emitted by the spotlight: it gives the angle, measured in degrees, between the axis of the cone and the side of the cone. The default value of GL_SPOT_CUTOFF is a special value, 180, that indicates an ordinary light rather than a spotlight. The only legal values for GL_SPOT_CUTOFF are 180 and numbers in the range 0 to 90.

A spotlight is not completely specified until we know what direction it is shining. The direction of a spotlight is given by its GL_SPOT_DIRECTION property, an array of three numbers that can be set using the glLightfv method. Like the light's position, the spotlight direction is subject to the modelview transformation that is in effect at the time when the

spotlight direction is set. The default value is (0,0,−1), giving a spotlight that points in the negative direction of the z-axis. Since the default is set before the modelview transformation has been changed from the identity, this means the viewer's z-axis; that is, the default spotlight direction makes it point directly away from the viewer, into the screen. For example, to turn light number 1 into a spotlight with a cone angle of 30 degrees, positioned at the point (10,15,5) and pointing toward the origin, you can use these commands:

   gl.glLighti(GL.GL_LIGHT1, GL.GL_SPOT_CUTOFF, 30);
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, new float[] {10,15,5}, 0);
   gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPOT_DIRECTION, new float[] {-10,-15,-5}, 0);

By default, everything in the cone of a spotlight is illuminated evenly. It is also possible to make the illumination decrease as the angle away from the axis of the cone increases. The value of the GL_SPOT_EXPONENT property of a light determines the rate of falloff. This property must have a value in the range 0 to 128. The default value, zero, gives even illumination. Other values cause the illumination to fall off with increasing angle, with larger values giving a more rapid falloff. Since lighting calculations are only done at vertices, spotlights really only work well when the polygons that they illuminate are very small. When using larger polygons, don't expect to see a nice circle of illumination.

∗ ∗ ∗

In real-world physics, the level of illumination from a light source is proportional to the reciprocal of the square of the distance from the light source. OpenGL lights do not follow this model because it does not, in practice, produce nice pictures. By default in OpenGL, the level of illumination from a light does not depend at all on distance from the light. However, it is possible to turn on attenuation for a positional light source; we say that the light "attenuates" with distance. For a vertex at distance r from a light source, the intensity of the light on that vertex is computed as

   I * ( 1 / (a + b*r + c*r²) )

where I is the intrinsic intensity of the light (that is, its color level; this calculation is done for each of the red, green, and blue components of the color). The numbers a, b, and c are values of the properties GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION. By default, a is 1 while b and c are 0. With these default values, the intensity of the light at the vertex is equal to I, the intrinsic intensity of the light, and there is no attenuation. Attenuation can sometimes be useful to localize the effect of a light, especially when there are several lights in a scene. Note that attenuation applies only to positional lights; the attenuation properties are ignored for directional lights.
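As a concrete (hypothetical) setting, the following makes light number 1 fall off roughly quadratically with distance, which localizes its effect:

   gl.glLightf(GL.GL_LIGHT1, GL.GL_CONSTANT_ATTENUATION, 1);
   gl.glLightf(GL.GL_LIGHT1, GL.GL_LINEAR_ATTENUATION, 0);
   gl.glLightf(GL.GL_LIGHT1, GL.GL_QUADRATIC_ATTENUATION, 0.25f);

With these values, a vertex at distance r receives intensity I/(1 + 0.25*r²).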

4.3.3 The Global Light Model

There are a few properties of the OpenGL lighting system that are "global" in the sense that they are not properties of individual lights. These properties can be set using the glLightModeli and glLightModelfv methods:

   public void glLightModelfv(int propertyName, float[] propertyValue, int offset)
   public void glLightModeli(int propertyName, int propertyValue)

There is only one property that can be set with glLightModelfv: the global ambient light intensity, using property name GL.GL_LIGHT_MODEL_AMBIENT. Global ambient light is ambient light that is present in the environment even when no light sources are turned on. The default value is (0.2,0.2,0.2,1). For a yellowish ambient light, for example, you could say

   gl.glLightModelfv(GL.GL_LIGHT_MODEL_AMBIENT, new float[] { 0.2f, 0.2f, 0.1f, 1 }, 0);

There are three properties that can be set using glLightModeli. We have already encountered one of them, GL.GL_LIGHT_MODEL_TWO_SIDE. The command

   gl.glLightModeli(GL.GL_LIGHT_MODEL_TWO_SIDE, GL.GL_TRUE);

turns on two-sided lighting, which tells OpenGL to compute normal vectors for the back faces of polygons by reversing the normal vectors that were specified for the front faces. This also enables the front and the back faces of polygons to have different material properties. The value for this property must be either GL.GL_TRUE or GL.GL_FALSE, but these are actually just symbolic constants for the numbers 1 and 0.

The second property is GL.GL_LIGHT_MODEL_LOCAL_VIEWER. By default, for the purpose of specular light calculations, OpenGL assumes that the direction to the viewer is in the positive direction of the z-axis. Essentially, this assumes that the viewer is "at infinity." This makes the computation easier but does not produce completely accurate results. In practice, this rarely makes a significant difference, but it can be noticeable if the viewer is fairly close to the illuminated objects. To make OpenGL do the correct calculation, you can call

   gl.glLightModeli(GL.GL_LIGHT_MODEL_LOCAL_VIEWER, GL.GL_TRUE);

Finally, there is an option that is available only if the OpenGL version is 1.2 or higher: GL.GL_LIGHT_MODEL_COLOR_CONTROL. The value for this property must be GL.GL_SEPARATE_SPECULAR_COLOR or GL.GL_SINGLE_COLOR. The latter is the default, which yields poor specular highlights on objects to which a texture has been applied. The alternative will produce better specular highlights on such surfaces by applying the specular highlight after the texture has been applied. To make sure that the option is actually available, you can use the following code:

   if (gl.isExtensionAvailable("GL_VERSION_1_2")) {
      gl.glLightModeli(GL.GL_LIGHT_MODEL_COLOR_CONTROL,
                          GL.GL_SEPARATE_SPECULAR_COLOR);
   }

4.3.4 The Lighting Equation

What does it actually mean to say that OpenGL performs "lighting calculations"? The goal of the calculation is to produce a color, (r,g,b,a), for a vertex. The alpha component, a, is easy: it's simply the alpha component of the diffuse material color at that vertex. But the calculations of r, g, and b are fairly complex.

Ignoring alpha components, let's assume that the ambient, diffuse, specular, and emission colors of the material have RGB components (mar,mag,mab), (mdr,mdg,mdb), (msr,msg,msb), and (mer,meg,meb), respectively. Suppose that the global ambient intensity is (gar,gag,gab). Then the red component of the vertex color will be

   r = mer + gar*mar + I0r + I1r + I2r + ...

where Iir is the contribution to the color that comes from the i-th light. A similar equation holds for the green and blue components of the color. This equation says that the emission color is simply added to any other contributions to the color, and the contribution of global ambient light is obtained by multiplying the global ambient intensity by the material ambient color. This is the mathematical way of saying that the material ambient color is the proportion of the ambient light that is reflected by the surface.

The contributions from the light sources are much more complicated. Note first of all that if a light source is disabled, then the contribution from that light source is zero. For an enabled light source, we have to look at the geometry as well as the colors:

[Figure: the geometry of the lighting equation, showing the normal vector N, the vector L towards the light source, the vector V towards the viewer, and the reflected ray R.]

In this illustration, N is the normal vector at the point whose color we want to compute. L is a vector that points towards the light source, and V is a vector that points towards the viewer. (Both the viewer and the light source can be "at infinity", but the direction is still well-defined.) R is the direction of the reflected ray, that is, the direction in which a light ray from the source will be reflected when it strikes the surface at the point in question. The angle between N and L is the same as the angle between N and R. All of the vectors are unit vectors, that is, with length 1. Recall that for unit vectors A and B, the inner product A · B is equal to the cosine of the angle between the two vectors. Inner products occur at several points in the lighting equation.

Now, let's say that the light has ambient, diffuse, and specular color components (lar,lag,lab), (ldr,ldg,ldb), and (lsr,lsg,lsb). Also, let mh be the value of the shininess property (GL_SHININESS) of the material. Then the contribution of this light source to the red component of the vertex color can be computed as

   Ir = lar*mar + f*att*spot*( ldr*mdr*(L·N) + lsr*msr*max(0,V·R)^mh )

with similar equations for the green and blue components. Here, f is 0 if the surface is facing away from the light and is 1 otherwise. That is, f is 1 when L·N is greater than 0, that is, when the angle between L and N is less than 90 degrees. Note that even when f is 0, the ambient component of the light can still affect the vertex color; when f is zero, there is no diffuse or specular contribution from the light to the color of the vertex.

att is the attenuation factor, which represents attenuation of the light intensity due to distance from the light. The value of att is 1 if the light source is directional. If the light is positional, then att is computed as 1/(a+b*r+c*r²), where a, b, and c are the attenuation constants for the light and r is the distance from the light source to the vertex.

And spot accounts for spotlights. For directional lights and regular positional lights, spot is 1. For a spotlight, spot is zero when the angle between N and L exceeds the cutoff angle of the spotlight. Otherwise, spot is given by (N·L)^e, where e is the value of the falloff property (GL_SPOT_EXPONENT) of the light.

The diffuse component of the color, before adjustment by f, att, and spot, is given by

   ldr*mdr*(L·N)

This represents the diffuse intensity of the light times the diffuse reflectivity of the material, multiplied by the cosine of the angle between L and N. The angle is involved because for a larger angle, the same amount of energy from the light is spread out over a greater area. As the angle increases from 0 to 90 degrees, the cosine of the angle decreases from 1 to 0, so the larger the angle, the smaller the diffuse color contribution.

The specular component, lsr*msr*max(0,V·R)^mh, is similar, but here the angle involved is the angle between the reflected ray and the viewer: the specular contribution is maximal when this angle is zero, and it decreases as the angle increases. The exponent mh is the material's shininess property. When this property is 0, there is no dependence on the angle (as long as the angle is greater than 0), and the result is the sort of huge and undesirable specular highlight that we have seen in this case. For positive values of shininess, the cosine of this angle is raised to the exponent mh, so the larger the shininess value, the faster the rate of decrease. The result is that larger shininess values give smaller, sharper specular highlights.

Remember that the same calculation is repeated for every enabled light and that the results are combined to give the final vertex color. It's easy, especially when using several lights, to end up with color components larger than one. In the end, before the color is used to color a pixel on the screen, the color components must be clamped to the range zero to one; values greater than one are replaced by one. It's easy, when using a lot of light, to produce ugly pictures in which large areas are a uniform white because all the color values in those areas exceeded one. All the information that was supposed to be conveyed by the lighting has been lost.
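To make the structure of the equation concrete, here is my own translation of the per-light red-component formula into Java. It is only an illustration of the formula, not code from OpenGL or from the book's packages; the dot products L·N and V·R are assumed to have been computed from unit vectors:

   // Contribution of one enabled light to the red component of a vertex color.
   static float redContribution(float lar, float mar,   // ambient light and material
                                float ldr, float mdr,   // diffuse light and material
                                float lsr, float msr,   // specular light and material
                                float mh,               // shininess exponent
                                float dotLN, float dotVR,
                                float att, float spot) {
      float f = (dotLN > 0) ? 1 : 0;  // 1 only if the surface faces the light
      float diffuse = ldr * mdr * dotLN;
      float specular = lsr * msr * (float)Math.pow(Math.max(0, dotVR), mh);
      return lar*mar + f * att * spot * (diffuse + specular);
   }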
The effect is similar to an over-exposed photograph. It can take some work to find appropriate lighting levels to avoid this kind of over-exposure.

4.4 Lights and Materials in Scenes (online)

In this section, we turn to some of the practicalities of using lights and materials in a scene and, in particular, in a scene graph.

4.4.1 The Attribute Stack

OpenGL is a state machine with dozens or hundreds of state variables. It can be easy for a programmer to lose track of the current state. In the case of transformation matrices, the OpenGL commands glPushMatrix and glPopMatrix offer some help for managing state. Typically, these methods are used when temporary changes are made to the transformation matrix. Using a stack with push and pop is a neat way to save and restore state, and OpenGL extends this idea beyond the matrix stacks. Material and light properties are examples of attributes, and OpenGL has an attribute stack that can be used for saving and restoring attribute values. The push and pop methods for the attribute stack are defined in class GL:

   public void glPushAttrib(int mask)
   public void glPopAttrib()

There are many attributes that can be stored on the attribute stack. The attributes are divided into attribute groups, and one or more groups of attributes can be pushed onto the stack with a single call to glPushAttrib. The mask parameter tells which group or groups of attributes are to be pushed. Each call to glPushAttrib must be matched by a later call to glPopAttrib. Note that glPopAttrib has no parameters, and it is not necessary to tell it which attributes to

pop; it will simply restore the values of all attributes that were saved onto the stack by the matching call to glPushAttrib.

The parameter to glPushAttrib is a bit mask, which means that you can "or" together several attribute group names in order to save the values in several groups in one step. The attribute group that most concerns us here is the one identified by the constant GL.GL_LIGHTING_BIT. A call to

   gl.glPushAttrib(GL.GL_LIGHTING_BIT);

will push onto the attribute stack all the OpenGL state variables associated with lights and materials: material properties, properties of lights, settings for the global light model, the enable state for lighting and for each individual light, the settings for GL_COLOR_MATERIAL, and the shading model (GL_FLAT or GL_SMOOTH). All the saved values can be restored later by the single matching call to gl.glPopAttrib(). For example,

   gl.glPushAttrib(GL.GL_LIGHTING_BIT | GL.GL_CURRENT_BIT);

will save values in the GL_CURRENT_BIT group as well as in the GL_LIGHTING_BIT group. You might want to do this since the current color is not included in the "lighting" group but is included in the "current" group. It is better to combine groups in this way rather than to use separate calls to glPushAttrib, since the attribute stack has a limited size. (The size is guaranteed to be at least 16, which should be sufficient for most purposes. However, you might have to be careful when rendering very complex scenes.)

There are other useful attribute groups. GL.GL_ENABLE_BIT will save the values of boolean properties that can be enabled and disabled with gl.glEnable and gl.glDisable. GL.GL_POLYGON_BIT, GL.GL_LINE_BIT, and GL.GL_POINT_BIT can be used to save attributes relevant to the rendering of polygons, lines, and points,
such as the point size, the line width, the settings for line stippling, and the settings relevant to polygon offset. GL.GL_TEXTURE_BIT can be used to save many settings that control how textures are processed (although not the texture images).

Since glPushAttrib can be used to push large groups of attribute values, you might think that it would be more efficient to use the glGet family of commands to read the values of just those properties that you are planning to modify, and to save the old values in variables in your program so that you can restore them later. But in fact, a glGet command can require your program to communicate with the graphics card and wait for the response, which is the kind of thing that can hurt performance. In contrast, calls to glPushAttrib and glPopAttrib can be queued with other OpenGL commands and sent to the graphics card in batches, where they can be executed efficiently by the graphics hardware. For this reason, you should always prefer using glPushAttrib/glPopAttrib instead of a glGet command when you can.

I should note that there is another stack, for "client attributes." These are attributes that are stored on the client side (that is, in your computer's main memory) rather than in the graphics card. There is really only one group of client attributes that concerns us:

   gl.glPushClientAttrib(GL.GL_CLIENT_VERTEX_ARRAY_BIT);

will store the values of settings relevant to using vertex arrays, such as the values for GL_VERTEX_POINTER and GL_NORMAL_POINTER and the enabled states for the various arrays. This can be useful when drawing primitives using glDrawArrays and glDrawElements (see Section 3.4). The values saved on the stack can be restored with gl.glPopClientAttrib().
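The typical usage pattern looks something like the following sketch, where drawShinyObject is a hypothetical drawing routine:

   gl.glPushAttrib(GL.GL_LIGHTING_BIT | GL.GL_CURRENT_BIT);
   gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR,
                   new float[] { 1, 1, 1, 1 }, 0);
   gl.glMateriali(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, 64);
   drawShinyObject(gl);
   gl.glPopAttrib();  // every saved lighting and color setting is restored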

∗ ∗ ∗

Now, how can we use the attribute stack in practice? It can be used any time complex scenes are drawn, when you want to limit state changes to just the part of the program where they are needed. When working with complex scenes, you need to keep careful control over the state. It's not a good idea for any part of the program to simply change some state variable and leave it that way, because any such change affects all the rendering that is done down the road. The typical pattern is to set state variables in the init() method to their most common values; other parts of the program should not make permanent changes to the state. The easiest way to do this is to enclose the code that changes the state between push and pop operations.

Complex scenes are often represented as scene graphs. A scene graph is a data structure that represents the contents of a scene. We have seen how a scene graph can contain basic nodes representing geometric primitives, complex nodes representing groups of sub-nodes, and nodes representing viewers. In our scene graphs so far, a node can have an associated color, and if that color is null, then the color is inherited from the parent of the node in the scene graph, or from the OpenGL environment if it has no parent. A more realistic package for scene graphs, represented by the package scenegraph3d, adds material properties to the nodes and adds a node type for representing lights. All the nodes in a scene graph are represented by sub-classes of the class SceneNode3D from the scenegraph3d package. This base class defines and implements both the transforms and the material properties for the nodes in the graph. That is, instead of just the current drawing color, there are instance variables to represent all the material properties: ambient color, diffuse color, specular color, emission color, and shininess. The SceneNode3D class has a draw() method that manages the transforms and the material properties (and calls another method, drawBasic(), to do the actual drawing). Here is the method:

   final public void draw(GL gl) {
      boolean hasColor = (color != null) || (specularColor != null) ||
                   ambientColor != null || diffuseColor != null ||
                   emissionColor != null || shininess >= 0;
      if (hasColor) {
         gl.glPushAttrib(GL.GL_LIGHTING_BIT | GL.GL_CURRENT_BIT);
         if (color != null)
            gl.glColor4fv(color, 0);
         if (ambientColor != null)
            gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT, ambientColor, 0);
         if (diffuseColor != null)
            gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_DIFFUSE, diffuseColor, 0);
         if (specularColor != null)
            gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR, specularColor, 0);
         if (emissionColor != null)
            gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_EMISSION, emissionColor, 0);
         if (shininess >= 0)
            gl.glMateriali(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, shininess);
      }
      gl.glPushMatrix();
      gl.glTranslated(translateX, translateY, translateZ);
      gl.glRotated(rotationAngle, rotationAxisX, rotationAxisY, rotationAxisZ);
      gl.glScaled(scaleX, scaleY, scaleZ);
      drawBasic(gl);  // Render the part of the scene represented by this node.
      gl.glPopMatrix();
      if (hasColor)
         gl.glPopAttrib();
   }

This method uses glPushAttrib and glPopAttrib to make sure that any changes that are made to the color and material properties are limited just to this node. The variable hasColor is used to test whether any such changes will actually be made, in order to avoid doing an unnecessary push and pop. The method also uses glPushMatrix and glPopMatrix to limit changes made to the transformation matrix. The effect is that the material properties and transform are guaranteed to have the same values when the method ends as they did when it began.

4.4.2 Lights in Scene Graphs

The contents of a scene are often represented by a scene graph. In a way, lights are like other objects in a scene, and it would be nice to be able to add lights to a scene graph in the same way that we add geometric objects. We could, for example, place a light at the same location as a sun or a streetlight. We should be able to animate the position of a light in a scene graph by applying transformations to the graph node that represents the light, in exactly the same way that we animate geometric objects. If the sun moves during an animation, the light can move right along with it. We have seen that the position of a light is transformed by the modelview matrix that is in effect when the position is set, and a similar statement holds for the direction of a spotlight. So, when lights are represented by nodes in a scene graph, we can set their positions as the graph is traversed, while making sure that they are subject to all the transformations that are applied in the scene graph.

There is one way that lights differ from geometric objects: Geometric objects can be rendered in any order, but lights have to be set up and enabled before any of the geometry that they illuminate is rendered. So we have to work in two stages: Set up the lighting, then render the geometry. When lights are represented by nodes in a scene graph, we can do this by traversing the scene graph twice. During the first traversal, the properties and positions of all the lights in the scene graph are set, and geometry is ignored. During the second traversal, the geometry is rendered and the light nodes are ignored.
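In outline, a display method built on this two-stage idea might look like the following sketch. The names root and camera are my own placeholders, and the real package code differs in its details:

   public void display(GLAutoDrawable drawable) {
      GL gl = drawable.getGL();
      gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
      camera.apply(gl);        // establish projection and viewing transforms
      root.turnOnLights(gl);   // first traversal: position and enable lights
      root.draw(gl);           // second traversal: render the geometry
   }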
This is the approach taken in the scene graph package scenegraph3d. This package contains a LightNode class; a LightNode represents an OpenGL light. The draw() method for a scene graph, shown above, does a traversal of the graph for the purpose of rendering geometry, and there is also a turnOnLights() method that does a traversal of the scene graph for the purpose of turning on lights and setting their properties. One issue when working with lights in OpenGL is managing the limited number of lights. Lights in the scene graph are assigned one of the light constants (GL_LIGHT0, GL_LIGHT1, and so on) automatically, so that the user of the scene graph does not have to worry about managing the light constants. In fact, the scene graphs defined by the package scenegraph3d can handle no more than eight lights; any lights in the scene graph beyond that number are ignored.

The on-line version of this section includes an applet that demonstrates the use of LightNodes in scene graphs. There are four lights in the applet: one inside a sun that rotates around the "world," one in a moon that does likewise, and two spotlights in the headlights of a cart. The headlights come on only at night (when the sun is on the other side of the world). There is also a light that illuminates the scene from the position of the viewer and that can be turned on and off using a control beneath the display area. The viewpoint light is not part of the scene graph. The source code can be found in MovingLightDemo.java.

One interesting point is that the light constants are not simply

assigned to LightNodes, and indeed it would be impossible to do so, since the same LightNode can be encountered more than once during a single traversal of the graph. For example, in the MovingLightDemo program, the two headlights of the cart are represented by just one LightNode. There are two lights because the node is encountered twice during a graph traversal, and each encounter adds a new OpenGL light to the scene. A different light constant is used for each encounter, and the lights are in different locations because different transformations are in effect during the two encounters. For more details about how lights are managed, see the methods turnOnLights and turnOnLightsBasic in scenegraph3d/SceneNode3D.java and the implementation of turnOnLightsBasic in scenegraph3d/LightNode.java. It's also worth looking at how scene graphs are used in the sample program MovingLightDemo.java.

By the way, that example also demonstrates some of the limitations of OpenGL lighting. One big limitation is that lights in OpenGL do not cast shadows. This means that light simply shines through objects. This is why it is possible to place a light inside a sphere and still have that light illuminate other objects: The light simply shines through the surface of the sphere. That's not such a bad thing, but when the sun in the applet is below the world, it shines up through the ground and illuminates objects in the scene from below. The effect is not all that obvious; you have to look for it, but even if you don't notice it, it makes things look subtly wrong. Problems with spotlights are also apparent in the example. As mentioned previously, you don't get a neat circle of light from a spotlight unless the polygons that it illuminates are very small. And if you look at the boundary of the circle of illumination from a headlight on an object made up of large polygons, you'll see that the boundary can be very jagged indeed. As a result, the illumination from the headlights flickers a bit as the cart moves. For example, consider this screenshot of a cylinder whose left side is illuminated by one of the spotlights in the program:

4.5 Textures (online)

Textures were introduced in Subsection 2.4.2. In that section, we looked at Java's Texture class, which makes it fairly easy to use image textures. However, this class is not a standard part of OpenGL. Textures are a complex subject, and this section does not attempt to cover all the details. It covers parts of OpenGL's own texture API (plus more detail on the Texture class).
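As a reminder of how a Texture object can be obtained in the first place, here is a one-line sketch using Jogl's TextureIO utility class. The file name is hypothetical, and the IOException handling is omitted for brevity:

   // The second parameter asks TextureIO to build mipmaps for the texture.
   Texture tex = TextureIO.newTexture(new File("brick.jpg"), true);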

4.5.1  Texture Targets

So far, we have only considered texture images, which are two-dimensional textures. OpenGL also supports one-dimensional textures and three-dimensional textures. OpenGL has three texture targets corresponding to the three possible dimensions: GL_TEXTURE_1D, GL_TEXTURE_2D, and GL_TEXTURE_3D. (There are also several other targets that are used for aspects of texture mapping that I won't discuss here.) Except for one example later in this section, we will work only with 2D textures.

OpenGL maintains some separate state information for each target, and many commands for working with textures take a texture target as their first parameter, to specify which target's state is being changed. Texturing can be turned on and off for each target with

    gl.glEnable(GL.GL_TEXTURE_1D);    /    gl.glDisable(GL.GL_TEXTURE_1D);
    gl.glEnable(GL.GL_TEXTURE_2D);    /    gl.glDisable(GL.GL_TEXTURE_2D);
    gl.glEnable(GL.GL_TEXTURE_3D);    /    gl.glDisable(GL.GL_TEXTURE_3D);

At most one texture target will be used when a surface is rendered. If several targets are enabled, 3D textures have precedence over 2D textures, and 2D textures have precedence over 1D. For a Texture object tex, tex.enable() is equivalent to gl.glEnable(tex.getTarget()), where tex.getTarget() returns the target (most likely GL_TEXTURE_2D) for the texture object.

Texture wrapping was mentioned at the end of Subsection 2.4.2, where repeat in the s direction was set using an object tex of type Texture with tex.setTexParameteri(GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT). The Texture method simply provides access to the more basic OpenGL command: tex.setTexParameteri(prop,val) is simply an abbreviation for gl.glTexParameteri(tex.getTarget(),prop,val). The first parameter of glTexParameteri specifies the texture target whose state is being changed. For example,

    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);

tells OpenGL to repeat two-dimensional textures in the s direction. To make one-dimensional textures repeat in the s direction, you would use GL.GL_TEXTURE_1D as the first parameter.

The texture environment determines how the colors from the texture are combined with the color of the surface to which the texture is being applied. The surface material is first used in the lighting computation to produce a basic color for each surface pixel, before combination with the texture color. The combination mode is set by calling

    gl.glTexEnvi( GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, mode );

The default value of mode is GL.GL_MODULATE, which means that the color components from the surface are multiplied by the color components from the texture. This is commonly used with a white surface material; a white surface material means that what you end up with is basically the texture color, modified by lighting effects. This is usually what you want. The GL.GL_REPLACE mode will completely replace the surface color with the texture color. In fact, the texture environment offers many options and a great deal of control over how texture colors are used, and there are other texture combination modes for special purposes, but I will not cover them here.
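Putting these pieces together, here is what a typical state setup for an ordinary 2D image texture might look like. The particular choices of wrap mode and environment mode here are illustrative, not required, and the snippet assumes the texture data has already been loaded:

    // Typical setup for a 2D image texture.
    gl.glEnable(GL.GL_TEXTURE_2D);  // Turn on the 2D texture target.
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_REPEAT);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_REPEAT);
    gl.glTexEnvi(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_MODULATE);
    // With GL_MODULATE, a white material lets the texture color show through,
    // still modified by lighting.
    float[] white = { 1, 1, 1, 1 };
    gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE, white, 0);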

4.5.2  Mipmaps and Filtering

When a texture is applied to a surface, the pixels in the texture do not usually match up one-to-one with pixels on the surface, and in general, the texture must be stretched or shrunk as it is being mapped onto the surface. Sometimes, several pixels in the texture will be mapped to the same pixel on the surface. In this case, the color that is applied to the surface pixel must somehow be computed from the colors of all the texture pixels that map to it. This is an example of filtering; in particular, it is "minification filtering" because the texture is being shrunk. When one pixel from the texture covers more than one pixel on the surface, the texture has to be magnified, and we have an example of "magnification filtering."

One bit of terminology before we proceed: The pixels in a texture are referred to as texels, short for texture pixels, and I will use that term from now on.

When deciding how to apply a texture to a point on a surface, OpenGL has the texture coordinates for that point. Those texture coordinates correspond to one point in the texture, and that point lies in one of the texture's texels. The easiest thing to do is to apply the color of that texel to the point on the surface. This is called nearest neighbor filtering. It is very fast, but it does not usually give good results, because it doesn't take into account the difference in size between the pixels on the surface and the texels. An improvement on nearest neighbor filtering is linear filtering, which can take an average of several texel colors to compute the color that will be applied to the surface.

The problem with linear filtering is that it will be very inefficient when a large texture is applied to a much smaller surface area. In this case, many texels map to one pixel, and computing the average of so many texels becomes very inefficient. OpenGL has a neat solution for this: mipmaps.

A mipmap for a texture is a scaled-down version of that texture. A complete set of mipmaps consists of the full-size texture, a half-size version in which each dimension is divided by two, a quarter-sized version, a one-eighth-sized version, and so on. If one dimension shrinks to a single pixel, it is not reduced further, but the other dimension will continue to be cut in half until it too reaches one pixel. In any case, the final mipmap consists of a single pixel. Here are the first few images in the set of mipmaps for a brick texture:

[figure: the first few mipmaps of a brick texture, each half the size of the previous one]

You'll notice that the mipmaps become small very quickly. The total memory used by a set of mipmaps is only about one-third more than the memory used for the original texture (each mipmap contains one-quarter as many texels as the one before it, and 1/4 + 1/16 + 1/64 + ... adds up to 1/3), so the additional memory requirement is not a big issue when using mipmaps.

Mipmaps are used only for minification filtering. They are essentially a way of pre-computing the bulk of the averaging that is required when shrinking a texture to fit a surface. To texture a pixel, OpenGL can first select the mipmap whose texels most closely match the size of the pixel. It can then do linear filtering on that mipmap to compute a color, and it will have to average at most a few texels in order to do so.
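For instance, this little loop (not from the book's sample code) prints the sizes of the mipmaps for a 256-by-100 texture, illustrating that a dimension stops shrinking once it reaches one pixel:

    // Print the mipmap sizes for a 256x100 texture.  A dimension that has
    // already reached 1 stays at 1 while the other continues to shrink.
    int w = 256, h = 100;
    int level = 0;
    while (w > 1 || h > 1) {
        w = Math.max(1, w / 2);
        h = Math.max(1, h / 2);
        level++;
        System.out.println("mipmap level " + level + " is " + w + " x " + h);
    }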

∗ ∗ ∗

OpenGL supports several different filtering techniques for minification and magnification. The filters that will be used can be set with glTexParameteri. For the 2D texture target, you would call

    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, magFilter);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, minFilter);

with similar calls for 1D and 3D textures. Here, magFilter and minFilter are constants that specify the filtering algorithm. For the magFilter, the only options are GL.GL_NEAREST and GL.GL_LINEAR, giving nearest neighbor and linear filtering. The default for the MAG filter is GL_LINEAR, and there is rarely any need to change it. For minFilter, in addition to GL.GL_NEAREST and GL.GL_LINEAR, there are four options that use mipmaps for more efficient filtering. The default MIN filter is GL.GL_NEAREST_MIPMAP_LINEAR, which does averaging between mipmaps and nearest neighbor filtering within each mipmap. For even better results, at the cost of greater inefficiency, you can use GL.GL_LINEAR_MIPMAP_LINEAR, which does averaging both between and within mipmaps. (You can research the remaining two options on your own if you are curious.)

One very important note: If you are not using mipmaps for a texture, it is imperative that you change the minification filter for that texture to GL_NEAREST or, more likely, GL_LINEAR. The default MIN filter requires mipmaps, and if mipmaps are not available, then the texture is considered to be improperly formed, and OpenGL ignores it!

If you want to use mipmaps, you must either load each mipmap individually, or you must generate them yourself. (The GLU library has a method, gluBuild2DMipmaps, that can be used to generate a set of mipmaps for a 2D texture.) Starting with OpenGL Version 1.4, it is possible to get OpenGL to create and manage mipmaps automatically. For automatic generation of mipmaps for 2D textures, you just have to say

    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_GENERATE_MIPMAP, GL.GL_TRUE);

and then forget about 2D mipmaps! Of course, you should check the OpenGL version before doing this, since automatic mipmap generation is not available in earlier versions. The best news, perhaps, is that when you are using Java Texture objects to represent textures, the Texture will manage mipmaps for you without any action on your part except to ask for mipmaps when you create the object. (The methods for creating Textures have a parameter for that purpose.)
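To summarize the two common situations, here is a sketch of the relevant settings; which of the two cases you need depends on whether mipmaps will be available for the texture:

    // Case 1: no mipmaps.  The MIN filter MUST be changed from its default,
    // or the texture will be ignored entirely.
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);

    // Case 2: automatic mipmaps (OpenGL 1.4 or later).  Set GL_GENERATE_MIPMAP
    // before loading the texture data, then use a mipmapped MIN filter.
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_GENERATE_MIPMAP, GL.GL_TRUE);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
                                         GL.GL_LINEAR_MIPMAP_LINEAR);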

4.5.3  Texture Transformations

Recall that textures are applied to objects using texture coordinates. The texture coordinates for a vertex determine which point in a texture is mapped to that vertex. Textures are most often images, which are two-dimensional, and the two coordinates on a texture image are referred to as s and t. Texture coordinates can be specified using the glTexCoord* families of methods; we have used glTexCoord2d to specify texture s and t coordinates. However, since OpenGL also supports one-dimensional textures and three-dimensional textures, texture coordinates cannot be restricted to two coordinates. In fact, a set of texture coordinates in OpenGL is represented internally as homogeneous coordinates (see Subsection 3.1.4), which are referred to as (s,t,r,q). A call to gl.glTexCoord2d(s,t) is really just shorthand for gl.glTexCoord4d(s,t,0,1).

Since texture coordinates are no different from vertex coordinates, they can be transformed in exactly the same way. OpenGL maintains a texture transformation matrix as part of its state, along with the modelview matrix and projection matrix. When a texture is applied to an object, the texture coordinates that were specified for its vertices are transformed by the texture matrix. The transformed texture coordinates are then used to pick out a point in the texture. Of course, the default texture transform is the identity, which has no effect.

The texture matrix can represent scaling, rotation, translation, and combinations of these basic transforms. To specify a texture transform, you have to use glMatrixMode to set the matrix mode to GL_TEXTURE. With this mode in effect, calls to methods such as glRotated, glScalef, and glLoadIdentity are applied to the texture matrix. For example, to install a texture transform that scales texture coordinates by a factor of two in each direction, you could say:

    gl.glMatrixMode(GL.GL_TEXTURE);
    gl.glLoadIdentity();  // Make sure we are starting from the identity matrix.
    gl.glScaled(2,2,1);
    gl.glMatrixMode(GL.GL_MODELVIEW);  // Leave matrix mode set to GL_MODELVIEW.

Now, what does this actually mean for the appearance of the texture on a surface? This scaling transform multiplies each texture coordinate by 2. For example, if a vertex was assigned 2D texture coordinates (0.4,0.1), then after the texture transform is applied, that vertex will be mapped to the point (s,t) = (0.8,0.2) in the texture. The texture coordinates vary twice as fast on the surface as they would without the scaling transform. A region on the surface that would map to a 1-by-1 square in the texture image without the transform will instead map to a 2-by-2 square in the image, so that a larger piece of the image will be seen inside the region. In other words, the texture image will be shrunk by a factor of two on the surface! More generally, the effect of a texture transformation on the appearance of the texture is the inverse of its effect on the texture coordinates. (This is exactly analogous to the inverse relationship between a viewing transformation and a modeling transformation.) If the texture transform is translation to the right, then the texture moves to the left on the surface. If the texture transform is a counterclockwise rotation, then the texture rotates clockwise on the surface.

The following image shows a cube with no texture transform, with a texture transform given by a rotation about the center of the texture, and with a texture transform that scales by a factor of 0.5:

[figure: the same textured cube with three different texture transforms]

These pictures are from the sample program TextureAnimation.java. You can find an applet version of that program on-line.
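As an illustration of what an animated texture transform might look like, the following sketch rotates the texture about the center of texture space. It assumes an animation variable frameNumber, and it is not the verbatim code from TextureAnimation.java:

    // Rotate the texture about the point (0.5,0.5), the center of texture space.
    gl.glMatrixMode(GL.GL_TEXTURE);
    gl.glLoadIdentity();
    gl.glTranslated(0.5, 0.5, 0);        // Move the center of the texture to the origin,
    gl.glRotated(frameNumber, 0, 0, 1);  // rotate (one degree per frame, say),
    gl.glTranslated(-0.5, -0.5, 0);      // and move the center back.
    gl.glMatrixMode(GL.GL_MODELVIEW);

Remember that the texture will appear to rotate on the surface in the direction opposite to the rotation that is applied to the texture coordinates.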

4.5.4  Creating Textures with OpenGL

Texture images for use in an OpenGL program usually come from an external source, most often an image file. However, OpenGL is itself a powerful engine for creating images. Sometimes, instead of loading an image file, it's convenient to have OpenGL create the image internally, by rendering it. This is possible because OpenGL can read texture data from its own color buffer, where it does its drawing. To create a texture image using OpenGL, you just have to draw the image using standard OpenGL drawing commands and then load that image as a texture using the method

    gl.glCopyTexImage2D( target, mipmapLevel, internalFormat, x, y, width, height, border );

In this method, target will be GL.GL_TEXTURE_2D except for advanced applications. The mipmapLevel, which is used when you are constructing each mipmap in a set of mipmaps by hand, should be zero. The internalFormat, which specifies how the texture data should be stored, will ordinarily be GL.GL_RGB or GL.GL_RGBA, depending on whether you want to store an alpha component for each texel. x and y specify the lower left corner of the rectangle in the color buffer from which the texture will be read and are usually 0, and width and height are the size of that rectangle. The border, which makes it possible to include a border around the texture image for certain special purposes, will ordinarily be 0. As usual with textures, the width and height should ordinarily be powers of two, although non-power-of-two textures are supported if the OpenGL version is 2.0 or higher. That is, a call to glCopyTexImage2D will typically look like

    gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, 0, 0, width, height, 0);

As an example, the sample program TextureFromColorBuffer.java uses this technique to produce a texture, instead of loading an image file. The texture image in this case is a copy of the two-dimensional hierarchical graphics example from Subsection 2.1.4. Here is what this image looks like when the program uses it as a texture on a cylinder:

[screenshot]

The texture image in this program can be animated. For each frame of the animation, the program draws the current frame of the 2D animation, then grabs that image for use as a texture.

After drawing the image and grabbing the texture, the program erases the image and draws a 3D textured object, which is the only thing that the user gets to see in the end. It does this in the display() method, even though the 2D image that it draws is not shown. It's worth looking at that display method, since it requires some care to use a power-of-two texture size and to set up lighting only for the 3D part of the rendering process:

    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();

        /* First, draw the 2D scene into the color buffer.  If necessary,
         * reset the viewport while drawing the image to a power-of-two size,
         * and use that size for the texture. */

        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        int[] viewPort = new int[4];
        gl.glGetIntegerv(GL.GL_VIEWPORT, viewPort, 0);
        int textureWidth = viewPort[2];   // The width of the texture.
        int textureHeight = viewPort[3];  // The height of the texture.
        if (version_2_0) {
               // Non-power-of-two textures are supported.  Use the entire
               // view area for drawing the 2D scene.
            draw2DFrame(gl);  // Draws the animated 2D scene.
        }
        else {
               // Use a power-of-two texture image.
            textureWidth = 1024;   // Use a power of two that fits in the viewport.
            while (textureWidth > viewPort[2])
                textureWidth /= 2;
            textureHeight = 512;   // Use a power of two that fits in the viewport.
            while (textureHeight > viewPort[3])
                textureHeight /= 2;
            gl.glViewport(0, 0, textureWidth, textureHeight);
            draw2DFrame(gl);  // Draws the animated 2D scene.
            gl.glViewport(0, 0, viewPort[2], viewPort[3]);  // Restore full viewport.
        }

        /* Grab the image from the color buffer for use as a 2D texture. */

        gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA,
                                0, 0, textureWidth, textureHeight, 0);

        /* Set up 3D viewing, enable the 2D texture, and draw the object
         * selected by the user. */

        gl.glPushAttrib(GL.GL_LIGHTING_BIT | GL.GL_TEXTURE_BIT);
        gl.glEnable(GL.GL_LIGHTING);
        gl.glEnable(GL.GL_LIGHT0);
        float[] dimwhite = { 0.4f, 0.4f, 0.4f, 1 };
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, dimwhite, 0);
        gl.glEnable(GL.GL_DEPTH_TEST);
        gl.glShadeModel(GL.GL_SMOOTH);
        gl.glLightModeli(GL.GL_LIGHT_MODEL_LOCAL_VIEWER, GL.GL_TRUE);
        if (version_1_2)
            gl.glLightModeli(GL.GL_LIGHT_MODEL_COLOR_CONTROL,
                                GL.GL_SEPARATE_SPECULAR_COLOR);

        gl.glEnable(GL.GL_TEXTURE_2D);
        float[] white = { 1, 1, 1, 1 };  // Use white material for texturing.
        gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE, white, 0);
        gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_SPECULAR, white, 0);
        gl.glMateriali(GL.GL_FRONT_AND_BACK, GL.GL_SHININESS, 128);

        /* Since we don't have mipmaps, we MUST set the MIN filter
         * to a non-mipmapped version; leaving the value at its default
         * will produce no texturing at all! */

        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);

        gl.glClearColor(0, 0, 0, 1);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        camera.apply(gl);

        /* Apply some viewing transforms to the object. */

        int selectedObject = objectSelect.getSelectedIndex();
                       // Tells which of several objects to draw.
        gl.glRotated(15, 1, 0, 0);
        if (selectedObject == 1 || selectedObject == 3) {
            gl.glRotated(90, -1, 0, 0);
            gl.glTranslated(0, 0, -0.25);
        }
        objects[selectedObject].render(gl);

        gl.glPopAttrib();
    }

4.5.5  Loading Data into a Texture

Although OpenGL can draw its own textures, most textures come from external sources. Using Java's Texture class is certainly the easiest way to load existing images. However, it's good to also know how to load texture data using only basic OpenGL commands.

To load external data into a texture, you have to store the color data for that texture into a Java nio Buffer (or, if using the C API, into an array). The data can be loaded from an image file, it can be taken from a BufferedImage, or it can even be computed by your program on-the-fly. The data must specify color values for each texel in the texture. Several formats are possible, but the most common are GL.GL_RGB, which requires a red, a green, and a blue component value for every texel, and GL.GL_RGBA, which adds an alpha component for each texel. You need one number for each component, so to use GL.GL_RGB to specify a texture with n texels, you need a total of 3*n numbers. Each number is typically an unsigned byte, with a value in the range 0 to 255, although other types of data can be used as well. (The use of unsigned bytes is somewhat problematic in Java, since Java's byte data type is signed, with values in the range −128 to 127. When a byte is used as an unsigned byte, the negative numbers are reinterpreted as positive numbers. Usually, the safest approach is to use an int or short value and type-cast it to byte.)

Once you have the data in a Buffer, you can load that data into a 2D texture using the glTexImage2D method:

    gl.glTexImage2D(target, mipmapLevel, internalFormat, width, height,
                                      border, format, dataType, buffer);

The first six parameters are similar to parameters in the glCopyTexImage2D method, as discussed in the previous subsection. The other three parameters specify the data. The format is GL.GL_RGB if you are providing RGB data and is GL.GL_RGBA for RGBA data. Other formats are also possible. Note that the format and the internalFormat are often the same, although they don't have to be. The dataType tells what type of data is in the buffer and is usually GL.GL_UNSIGNED_BYTE. Given this data type, the buffer should be of type ByteBuffer. The number of bytes in the buffer must be 3*width*height for RGB data and 4*width*height for RGBA data.

It is also possible to load a one-dimensional texture in a similar way; the glTexImage1D method simply omits the height parameter. As an example, here is some code that creates a one-dimensional texture consisting of 256 texels that vary in color through a full spectrum of color:

    ByteBuffer textureData1D = BufferUtil.newByteBuffer(3*256);
    for (int i = 0; i < 256; i++) {
        Color c = Color.getHSBColor(1.0f/256 * i, 1, 1);  // A color of the spectrum.
        textureData1D.put((byte)c.getRed());    // Add color components to the buffer.
        textureData1D.put((byte)c.getGreen());
        textureData1D.put((byte)c.getBlue());
    }
    textureData1D.rewind();
    gl.glTexImage1D(GL.GL_TEXTURE_1D, 0, GL.GL_RGB, 256, 0,
                          GL.GL_RGB, GL.GL_UNSIGNED_BYTE, textureData1D);

This code is from the sample program TextureLoading.java, which also includes an example of a two-dimensional texture created by computing the individual texel colors. The two-dimensional texture is the famous Mandelbrot set. You can find an applet version of the program in the on-line version of this section. Here are two images from that program, one showing the one-dimensional spectrum texture on a cube and the other showing the two-dimensional Mandelbrot texture on a torus:

[figure: the spectrum texture on a cube and the Mandelbrot texture on a torus]

(Note, by the way, how the strong specular highlight on the black part of the Mandelbrot set adds to the three-dimensional appearance of this image.)
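In the same spirit, here is a small made-up example, not taken from TextureLoading.java, that computes a 64-by-64 black-and-white checkerboard on the fly and loads it as a 2D texture:

    // Compute a 64x64 RGB checkerboard with 8-texel squares and load it as a
    // 2D texture.  (Remember to set the MIN filter if no mipmaps are used.)
    int size = 64;
    ByteBuffer data = BufferUtil.newByteBuffer(3*size*size);
    for (int row = 0; row < size; row++) {
        for (int col = 0; col < size; col++) {
            int v = ((row/8 + col/8) % 2 == 0) ? 255 : 0;
            data.put((byte)v);  // red
            data.put((byte)v);  // green
            data.put((byte)v);  // blue
        }
    }
    data.rewind();
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, size, size, 0,
                          GL.GL_RGB, GL.GL_UNSIGNED_BYTE, data);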

4.5.6  Texture Coordinate Generation

Texture coordinates are typically specified using the glTexCoord* family of methods or by using texture coordinate arrays with glDrawArrays and glDrawElements. However, computing texture coordinates can be tedious. OpenGL is capable of generating certain types of texture coordinates on its own. This is especially useful for so-called "reflection maps" or "environment maps," where texturing is used to imitate the effect of an object that reflects its environment; OpenGL can generate the texture coordinates that are needed for this effect. However, environment mapping is an advanced topic that I will not cover here. Instead, we look at a simple case: object-linear coordinates.

With object-linear texture coordinate generation, OpenGL uses texture coordinates that are computed as linear functions of object coordinates. Object coordinates are just the actual coordinates specified for vertices, with glVertex* or in a vertex array. The default when object-linear coordinate generation is turned on is to make the object coordinates equal to the texture coordinates. For two-dimensional textures, gl.glVertex3f(x,y,z) becomes equivalent to

    gl.glTexCoord2f(x,y);
    gl.glVertex3f(x,y,z);

If you accept the default behavior, the effect will be to project the texture onto the surface from the xy-plane (in the coordinate system in which the coordinates are specified, before any transformation is applied). More generally, it is possible to compute the texture coordinates as arbitrary linear combinations of the vertex coordinates x, y, z, and w. Using coefficients (a,b,c,d) and (e,f,g,h), gl.glVertex4f(x,y,z,w) becomes equivalent to

    gl.glTexCoord2f( a*x + b*y + c*z + d*w, e*x + f*y + g*z + h*w );
    gl.glVertex4f(x,y,z,w);

To use texture coordinate generation, you have to enable and configure it for each texture coordinate separately. For two-dimensional textures, you want to enable generation of the s and t texture coordinates:

    gl.glEnable(GL.GL_TEXTURE_GEN_S);
    gl.glEnable(GL.GL_TEXTURE_GEN_T);

To say that you want to use object-linear coordinate generation, you can use the method glTexGeni to set the texture generation "mode" to object-linear for both s and t:

    gl.glTexGeni(GL.GL_S, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);
    gl.glTexGeni(GL.GL_T, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);

If you want to change the equations that are used, you can specify the coefficients using glTexGenfv. For example, to use coefficients (a,b,c,d) and (e,f,g,h) in the equations:

    gl.glTexGenfv(GL.GL_S, GL.GL_OBJECT_PLANE, new float[] { a,b,c,d }, 0);
    gl.glTexGenfv(GL.GL_T, GL.GL_OBJECT_PLANE, new float[] { e,f,g,h }, 0);

The sample program TextureCoordinateGeneration.java demonstrates the use of texture coordinate generation. It allows the user to enter the coefficients for the linear equations that are used to generate the texture coordinates. As usual, you can find an applet version on-line. The same program also demonstrates "eye-linear" texture coordinate generation, which is similar to the object-linear version but uses eye coordinates instead of object coordinates in the equations; I won't discuss it further here.
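Returning to the object-linear case, here is the complete setup with a made-up choice of coefficients, given as a concrete illustration rather than a recipe:

    // Object-linear generation with example coefficients:
    //    s = 0.5*x + 0.5   and   t = 0.5*y + 0.5   (for vertices with w = 1),
    // which maps x and y in the range -1 to 1 onto texture coordinates 0 to 1.
    gl.glTexGeni(GL.GL_S, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);
    gl.glTexGeni(GL.GL_T, GL.GL_TEXTURE_GEN_MODE, GL.GL_OBJECT_LINEAR);
    gl.glTexGenfv(GL.GL_S, GL.GL_OBJECT_PLANE, new float[] { 0.5f, 0, 0, 0.5f }, 0);
    gl.glTexGenfv(GL.GL_T, GL.GL_OBJECT_PLANE, new float[] { 0, 0.5f, 0, 0.5f }, 0);
    gl.glEnable(GL.GL_TEXTURE_GEN_S);
    gl.glEnable(GL.GL_TEXTURE_GEN_T);
    // While generation is enabled, any glTexCoord* values are ignored
    // for the s and t coordinates.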

4.5.7  Texture Objects

For our final word on textures, we look briefly at texture objects. Texture objects are used when you need to work with several textures in the same program. The usual method for loading textures, glTexImage*, transfers data from your program into the graphics card. This is an expensive operation, and switching among multiple textures by using this method can seriously degrade a program's performance. Texture objects offer the possibility of storing texture data for multiple textures on the graphics card and switching from one texture object to another with a single, fast OpenGL command. (Of course, the graphics card has only a limited amount of memory for storing textures, and texture objects that don't fit in the graphics card's memory are no more efficient than ordinary textures.)

Note that if you are using Java's Texture class to represent your textures, you won't need to worry about texture objects. If tex is of type Texture, the associated texture is actually stored as a texture object, and the method tex.bind() tells OpenGL to start using that texture object. (It is equivalent to gl.glBindTexture(tex.getTarget(), tex.getTextureObject()), where glBindTexture is a method that is discussed below.) The rest of this section tells you how to work with texture objects by hand.

Texture objects are similar in their use to vertex buffer objects, which were covered in Subsection 3.4.3. Like a vertex buffer object, a texture object is identified by an integer ID number. Texture object IDs are managed by OpenGL, and to obtain a batch of valid texture IDs, you can call the method

    gl.glGenTextures(n, idList, 0);

where n is the number of texture IDs that you want, idList is an array of length at least n and of type int[] that will hold the texture IDs, and the 0 indicates the starting index in the array where the IDs are to be placed. When you are done with the texture objects, you can delete them by calling gl.glDeleteTextures(n, idList, 0).

Every texture object has its own state, which includes the values of texture parameters such as GL_TEXTURE_WRAP_S as well as the color data for the texture itself. To work with the texture object that has ID equal to texID, you have to call

    gl.glBindTexture(target, texID);

where target is a texture target such as GL.GL_TEXTURE_1D or GL.GL_TEXTURE_2D. After this call, any use of glTexParameter*, glTexImage*, or glCopyTexImage* with the same texture target will be applied to the texture object with ID texID. Furthermore, if the texture target is enabled and some geometry is rendered, then the texture that is applied to the geometry is the one associated with that texture ID. A texture binding for a given target remains in effect until another texture object is bound to the same target. To switch from one texture to another, you simply have to call glBindTexture with a different texture object ID.
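As a summary, here is a sketch of the complete life cycle of a pair of texture objects; the texture setup calls are only indicated by comments:

    int[] texIDs = new int[2];
    gl.glGenTextures(2, texIDs, 0);  // Obtain two texture object IDs.

    gl.glBindTexture(GL.GL_TEXTURE_2D, texIDs[0]);
    // ... calls to glTexParameteri, glTexImage2D, etc., now configure
    //     the first texture object ...

    gl.glBindTexture(GL.GL_TEXTURE_2D, texIDs[1]);
    // ... configure the second texture object ...

    // While rendering, switch between the textures with a single fast command:
    gl.glBindTexture(GL.GL_TEXTURE_2D, texIDs[0]);
    // ... draw geometry textured with the first texture ...
    gl.glBindTexture(GL.GL_TEXTURE_2D, texIDs[1]);
    // ... draw geometry textured with the second texture ...

    gl.glDeleteTextures(2, texIDs, 0);  // Delete the texture objects when done.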

Chapter 5

Some Topics Not Covered

This page contains a few examples that demonstrate topics not covered in Chapters 1 through 4. The next version of this book should cover these topics, and more, in detail. The source code for the examples might be useful to people who want to learn more.

∗ ∗ ∗

In stereoscopic viewing, a slightly different image is presented to each eye, imitating the way that a two-eyed viewer sees the real world. For many people, stereoscopic views can be visually fused to create a convincing illusion of 3D depth. One way to do stereoscopic viewing on a computer screen is to combine the view from the left eye, drawn in red, and the view from the right eye, drawn in green. The images for the left eye and the right eye are rendered from slightly different viewing directions. To see the 3D effect, the image must be viewed with red/green (or red/blue or red/cyan) glasses designed for such 3D viewing. This type of 3D viewing is referred to as anaglyph (search for it on Google Images).

The sample program StereoGrapher.java, in the package stereoGraph3d, can render an anaglyph stereo view of the graph of a function. In order to combine the red and green images, the program uses the glColorMask method. The program also uses modified versions of the Camera and TrackBall classes, which can be found in the same package. (The program is also a nice example of using vertex buffer objects and glDrawElements for drawing primitives.) An applet version of the program can be found on-line; here is a screenshot:

[screenshot]
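The heart of the technique is the color mask, which restricts drawing to selected color channels. Here is a minimal sketch of how an anaglyph frame can be rendered with glColorMask; the drawScene and eye-specific camera methods are hypothetical stand-ins for the program's own code:

    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);

    gl.glColorMask(true, false, false, true);  // Draw into the red channel only.
    applyLeftEyeCamera(gl);   // Hypothetical: viewing transform for the left eye.
    drawScene(gl);            // Hypothetical: renders the scene.

    gl.glClear(GL.GL_DEPTH_BUFFER_BIT);  // Keep the red image; reset depth only.

    gl.glColorMask(false, true, false, true);  // Draw into the green channel only.
    applyRightEyeCamera(gl);  // Hypothetical: viewing transform for the right eye.
    drawScene(gl);

    gl.glColorMask(true, true, true, true);  // Restore the full color mask.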

∗ ∗ ∗

Mouse interaction with a 3D scene is complicated by the fact that the mouse coordinates are given in device (pixel) coordinates, while the objects in the scene are created in object coordinates. This makes it difficult to tell which object the user is clicking on. OpenGL can help with this "selection" problem: it offers the GL_SELECT render mode to make selection easier. Unfortunately, it's still fairly complicated, and I won't describe it here. The sample program SelectionDemo.java in the package selection demonstrates the use of the GL_SELECT render mode. This example is a version of WalkThroughDemo.java where the user can click on objects to select them. The size of the selected object is animated to show that it is selected. As usual, an applet version is on-line.

∗ ∗ ∗

Throughout the text, we have been talking about the standard OpenGL "rendering pipeline." The operations performed by this pipeline are actually rather limited when compared to the full range of graphics operations that are commonly used in modern computer graphics, and new techniques are constantly being discovered. It would be impossible to make every new technique a standard part of OpenGL. OpenGL 2.0 introduced the OpenGL Shading Language (GLSL). Parts of the OpenGL rendering pipeline can be replaced with programs written in GLSL, which are referred to as shaders. A vertex shader written in GLSL can replace the part of the pipeline that does lighting, transformation and other operations on each vertex. A fragment shader can replace the part that operates on each pixel in a primitive. GLSL is a complete programming language, based on C, which makes GLSL shaders very versatile. In the newest versions of OpenGL, shaders are preferred over the standard OpenGL processing.

Since the OpenGL API for working with shaders is rather complicated, I wrote the class GLSLProgram to represent a GLSL program (source code GLSLProgram.java in the package glsl). The sample programs MovingLightDemoGLSL.java and IcosphereIFS_GLSL.java, both in package glsl, demonstrate the use of simple GLSL programs with the GLSLProgram class. These examples are not meant as a demonstration of what GLSL can do; they just show how to use GLSL with Java. The GLSL programs can only be used if the version of OpenGL is 2.0 or higher. With lower version numbers, the programs will run, but the GLSL programs will not be applied.

The first sample program, MovingLightDemoGLSL.java, is a version of MovingLightDemo.java that allows the user to turn a GLSL program on and off. The GLSL program consists of a fragment shader that converts all pixel colors to gray scale. (Note, by the way, that the conversion applies only to pixels in primitives, not to the background color that is used by glClear.) An applet version can be found on-line.

The second sample program, IcosphereIFS_GLSL.java, uses both a fragment shader and a vertex shader. This example is a version of IcosphereIFS.java, which draws polyhedral approximations for a sphere by subdividing an icosahedron. The vertices of the icosphere are generated by a recursive algorithm, and the colors in the GLSL version are meant to show the level of the recursion at which each vertex is generated. (The effect is not as interesting as I had hoped.) An applet version can be found on-line; here is a screenshot:

[screenshot]
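For readers curious what the raw shader API looks like, here is a minimal sketch of compiling and enabling a gray-scale fragment shader with plain OpenGL 2.0 calls; this is the kind of boilerplate that GLSLProgram is meant to hide, and the shader source here is an illustrative example, not the book's exact shader:

    // Create, compile, and use a GLSL fragment shader (error checking omitted).
    String source =
          "void main() {\n"
        + "    float gray = dot(gl_Color.rgb, vec3(0.30, 0.59, 0.11));\n"
        + "    gl_FragColor = vec4(gray, gray, gray, gl_Color.a);\n"
        + "}\n";
    int shader = gl.glCreateShader(GL.GL_FRAGMENT_SHADER);
    gl.glShaderSource(shader, 1, new String[] { source },
                                 new int[] { source.length() }, 0);
    gl.glCompileShader(shader);
    int program = gl.glCreateProgram();
    gl.glAttachShader(program, shader);
    gl.glLinkProgram(program);
    gl.glUseProgram(program);  // Turn the shader on; glUseProgram(0) turns it off.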


Appendix: Source Files

This is a list of source code files for examples in Fundamentals of Computer Graphics with Java, OpenGL, and Jogl.

• HierarchicalModeling2D.java and HierarchicalModeling2DApplet.java, from Subsection 1.3.2. A program that shows an animation constructed using hierarchical modeling with Graphics2D transforms.

• QuadraticBezierEdit.java and QuadraticBezierEditApplet.java, from Subsection 1.2.2. A program that demonstrates quadratic Bezier curves and allows the user to edit them.

• CubicBezierEdit.java and CubicBezierEditApplet.java, from Subsection 1.2.2. A program that demonstrates cubic Bezier curves and allows the user to edit them.

• BasicJoglApp2D.java, from Subsection 2.1.1. An OpenGL program that just draws a triangle, showing how to use a GLJPanel and GLEventListener.

• BasicJoglAnimation2D.java and BasicJoglAnimation2DApplet.java, from Subsection 2.1.3. A very simple 2D OpenGL animation, using a rotation transform to rotate a triangle.

• JoglHierarchicalModeling2D.java and JoglHierarchicalModeling2DApplet.java, from Subsection 2.1.4. An animated 2D scene using hierarchical modeling in OpenGL. This program is pretty much a port of HierarchicalModeling2D.java, which used Java Graphics2D instead of OpenGL.

• JoglHMWithSceneGraph2D.java, from Subsection 2.1.5. Another version of the hierarchical modeling animation, this one using a scene graph to represent the scene. The scene graph is built using classes from the source directory scenegraph2D.

• glutil, introduced in Section 2.3, is a package that contains several utility classes, including glutil/Camera.java for working with the projection and view transforms, glutil/TrackBall.java for implementing mouse dragging to rotate the view, and glutil/UVCone.java, glutil/UVCylinder.java, and glutil/UVSphere.java for drawing some basic 3D shapes.

• PaddleWheels.java and PaddleWheelsApplet.java, from Section 2.2. A first example of modeling in 3D: a simple animation of three rotating "paddle wheels." The user can rotate the image by dragging the mouse (as will be true for most examples from now on). This example depends on several classes from the package glutil.

• Axes3D.java, used for an illustration in Section 2.2. A very simple OpenGL 3D program that draws a set of axes. The program uses GLUT to draw the cones and cylinders that represent the axes.

• LitAndUnlitSpheres.java, used for an illustration in Section 2.2. A very simple OpenGL 3D program that draws four spheres with different lighting settings.

• ColorCubeOfSpheres.java and ColorCubeOfSpheresApplet.java, from Subsection 2.3.3. Draws a lot of spheres of different colors, arranged in a cube. The point is to do a lot of drawing, and to see how much the drawing can be sped up by using a display list. The user can turn the display list on and off and see the effect on the rendering time. This example depends on several classes from the package glutil.

• TextureDemo.java and TextureDemoApplet.java. Shows six textured objects, with various shapes and textures. The user can rotate each object individually. This example depends on several classes from the package glutil and on textures from the directory textures.

• PrimitiveTypes.java and PrimitiveTypesApplet.java. A 2D program that lets the user experiment with the ten OpenGL primitive types and various options that affect the way they are drawn.

• VertexArrayDemo.java and VertexArrayDemoApplet.java, from Section 3.4. Uses vertex arrays and, in OpenGL 1.5 or higher, vertex buffer objects to draw a cylinder inside a sphere, where the sphere is represented as a random cloud of points.

• IcosphereIFS.java and IcosphereIFSApplet.java, from Section 3.4. Demonstrates the use of glDrawElements, with or without vertex buffer objects, to draw indexed face sets.

• LightDemo.java and LightDemoApplet.java. Demonstrates some light and material properties and setting light positions. Requires Camera and TrackBall from the glutil package.

• WalkThroughDemo.java and WalkThroughDemoApplet.java. The user navigates through a simple 3D world using the arrow keys. The user's point of view is represented by an object of type SimpleAvatar. This example uses a basic implementation of 3D scene graphs found in the package simplescenegraph3d, as well as several classes from the glutil package.

• MovingCameraDemo.java and MovingCameraDemoApplet.java. The program uses the simplescenegraph3d package to implement a scene graph that includes two AvatarNodes. These nodes represent the view of the scene from two viewers that are embedded in the scene as part of the scene graph. The program also uses several classes from the glutil package.

• MovingLightDemo.java and MovingLightDemoApplet.java, from Subsection 4.4.2. Uses the scenegraph3d package, a more advanced version of simplescenegraph3d, to implement a scene graph that uses two LightNodes to implement moving lights. The program uses several shape classes from the same package, as well as classes from glutil.

• There are four examples in Section 4.5 that deal with textures: TextureAnimation.java, TextureFromColorBuffer.java, TextureLoading.java, and TextureCoordinateGeneration.java (and their associated applet classes). All of these examples use the glutil package.

• There are four examples in Chapter 5, a section at the end of the book that describes a few topics that are not covered in the book proper. StereoGrapher.java demonstrates a form of stereo rendering that requires red/green 3D glasses; this program uses modified versions of the TrackBall and Camera classes, which can be found in the stereoGraph3d package. SelectionDemo.java demonstrates using OpenGL's GL_SELECT render mode to let the user "pick" or "select" items in a 3D scene using the mouse; this demo is a modification of WalkThroughDemo.java and requires the packages glutil and simplescenegraph3d. Finally, MovingLightDemoGLSL.java and IcosphereIFS_GLSL.java demonstrate the use of "shaders" written in the OpenGL Shading Language (GLSL); these examples are modifications of MovingLightDemo.java and IcosphereIFS.java. The GLSL programs in these examples are very simple. The examples use a class, GLSLProgram.java, that is meant to coordinate the use of GLSL programs. MovingLightDemoGLSL requires the packages glutil and scenegraph3d.