
Texture Mapping

- Why texture map?
- How to do it
- How to do it right
- Spilling the beans
- A couple of tricks
- Difficulties with texture mapping
- Projective mapping
- Shadow mapping
- Environment mapping

Most slides courtesy of Leonard McMillan and Jovan Popovic
Lecture 15 Slide 1 6.837 Fall 2002
Office hours:
- Durand & Teller: by appointment
- Ngan: Thursday 4-7 in W20-575
- Yu: Friday 2-5 in NE43-256

Deadline for proposal: Friday Nov 1. Meetings with faculty & staff about the proposal will be held next week; see the web page for appointments.


The Quest for Visual Realism

(For more info, see the computer artwork of Jeremy Birn.)



The concept is very simple!


Texture Coordinates
- Specify a texture coordinate at each vertex: (s, t) or (u, v)
- Canonical coordinates, where u and v are between 0 and 1 (the unit square with corners (0,0), (1,0), and (0,1))
- Simple modifications to the triangle rasterizer:

    void EdgeRec::init() {
        ...                 // Note: here we use w = 1/z
        wstart = 1./z1; wend = 1./z2; dw = wend - wstart; wcurr = wstart;
        sstart = s1;    send = s2;    ds = send - sstart; scurr = sstart;
        tstart = t1;    tend = t2;    dt = tend - tstart; tcurr = tstart;
    }

    void EdgeRec::update() {
        ycurr += 1; xcurr += dx; wcurr += dw;
        scurr += ds; tcurr += dt;
    }

    static void RenderScanLine( ... ) {
        for (e1 = AEL->ToFront(); e1 != NULL; e1 = AEL->Next()) {
            x1 = e1->xcurr; x2 = e2->xcurr; dx = x2 - x1;
            w1 = e1->wcurr; w2 = e2->wcurr; dw = (w2 - w1)/dx;
            s1 = e1->scurr; s2 = e2->scurr; ds = (s2 - s1)/dx;
            t1 = e1->tcurr; t2 = e2->tcurr; dt = (t2 - t1)/dx;
            w = w1; s = s1; t = t1;
            for (int x = x1; x < x2; x++) {
                if (w < wbuffer[x])
                    raster.setPixel(x, texturemap[s, t]);
                s += ds; t += dt; w += dw;
            }
        }
    }

The Result

Let's try that out... (texture mapping applet, image texture)

Wait a minute... that doesn't look right. What's going on here?

Let's try again with a simpler texture... (texture mapping applet, simple texture)

Notice how the texture seems to bend and warp along the diagonal triangle edges. Let's take a closer look at what is going on.


Looking at One Edge
First, let's consider one edge from a given triangle. This
edge and its projection onto our viewport lie in a single
common plane. For the moment, let's look only at that
plane, which is illustrated below:


Visualizing the Problem

Notice that uniform steps on the image plane do not correspond to uniform steps along the edge.

Let's assume that the viewport is located 1 unit away from the center of projection.


Linear Interpolation in Screen Space

Compare linear interpolation in screen space


Linear Interpolation in 3-Space

to interpolation in 3-space


How to Make Them Mesh
Still need to scan convert in screen space... so we need a mapping from t values to s values.

We know that all points on the 3-space edge project onto our screen-space line. Thus we can set up the following equality (a point at fraction s along the 3-space edge must project to the point at fraction t along the screen-space segment):

    t = s·z2 / ((1 − s)·z1 + s·z2)

and solve for s in terms of t, giving:

    s = t·z1 / ((1 − t)·z2 + t·z1)

Unfortunately, at this point in the pipeline (after projection) we no longer have z1 lingering around (why?). However, we do have w1 = 1/z1 and w2 = 1/z2, which lets us rewrite this as:

    s = t·w2 / ((1 − t)·w1 + t·w2)
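A quick numeric check of this mapping (a sketch: the edge endpoints and numbers are illustrative, and the viewport is taken 1 unit from the center of projection as above):

```python
def screen_to_edge(t, z1, z2):
    # s = t*w2 / ((1-t)*w1 + t*w2), with w = 1/z
    w1, w2 = 1.0 / z1, 1.0 / z2
    return t * w2 / ((1 - t) * w1 + t * w2)

def project(X, z):
    # viewport at distance 1 from the center of projection
    return X / z

P1, P2 = (0.0, 1.0), (4.0, 3.0)         # (X, z) endpoints of a 3-space edge
x1, x2 = project(*P1), project(*P2)     # their projected screen positions

t = 0.75                                # a fraction along the screen segment
s = screen_to_edge(t, P1[1], P2[1])     # matching fraction along the 3-space edge

# The 3-space point at fraction s projects exactly to the screen point at t.
X = P1[0] + s * (P2[0] - P1[0])
z = P1[1] + s * (P2[1] - P1[1])
assert abs(project(X, z) - (x1 + t * (x2 - x1))) < 1e-12
```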


Interpolating Parameters
We can now use this expression for s to interpolate arbitrary parameters, such
as texture indices (u, v), over our 3-space triangle. This is accomplished by
substituting our solution for s given t into the parameter interpolation.

Therefore, if we premultiply all parameters that we wish to interpolate in

3-space by their corresponding w value and add a new plane equation to
interpolate the w values themselves, we can interpolate the numerators and
denominator in screen-space.
We then need to perform a divide at each step to map the screen-space
interpolants to their corresponding 3-space values.
Once more, this is a simple modification to our existing triangle rasterizer.


Modified Rasterizer
    void EdgeRec::init() {
        wstart = 1./z1;      wend = 1./z2;    dw = wend - wstart;    wcurr = wstart;
        swstart = s1*wstart; swend = s2*wend; dsw = swend - swstart; swcurr = swstart;
        twstart = t1*wstart; twend = t2*wend; dtw = twend - twstart; twcurr = twstart;
    }

    void EdgeRec::update() {
        ycurr += 1; xcurr += dx; wcurr += dw;
        swcurr += dsw; twcurr += dtw;
    }

    static void RenderScanLine( ... ) {
        for (e1 = AEL->ToFront(); e1 != NULL; e1 = AEL->Next()) {
            x1 = e1->xcurr; x2 = e2->xcurr; dx = x2 - x1;
            w1 = e1->wcurr; w2 = e2->wcurr; dw = (w2 - w1)/dx;
            sw1 = e1->swcurr; sw2 = e2->swcurr; dsw = (sw2 - sw1)/dx;
            tw1 = e1->twcurr; tw2 = e2->twcurr; dtw = (tw2 - tw1)/dx;
            w = w1; sw = sw1; tw = tw1;
            for (int x = x1; x < x2; x++) {
                float denom = 1.0f / w;    // one divide per pixel
                correct_s = sw*denom; correct_t = tw*denom;
                if (w < wbuffer[x])
                    raster.setPixel(x, texturemap[correct_s, correct_t]);
                sw += dsw; tw += dtw; w += dw;
            }
        }
    }


For obvious reasons this method of interpolation is called perspective-correct interpolation. In fact, the name could be shortened to simply correct interpolation. You should be aware that not all 3-D graphics APIs implement perspective-correct interpolation.

You can reduce the perceived artifacts of non-perspective-correct interpolation by subdividing the texture-mapped triangles into smaller triangles (why does this work?). But fundamentally, the screen-space interpolation of projected parameters is inherently flawed.

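To see numerically why subdividing helps, here is a small sketch (function names and the z values are illustrative): piecewise-linear screen-space interpolation agrees with the correct mapping at the subdivision points, so the maximum error shrinks as the pieces get smaller.

```python
def correct_s(t, w1, w2):
    # perspective-correct mapping from screen fraction t to edge fraction s
    return t * w2 / ((1 - t) * w1 + t * w2)

def max_error(segments, w1, w2, samples=1000):
    # worst-case gap between a piecewise-linear approximation (linear in
    # screen space within each of `segments` pieces) and the correct mapping
    err = 0.0
    for i in range(samples + 1):
        t = i / samples
        k = min(int(t * segments), segments - 1)
        t0, t1 = k / segments, (k + 1) / segments
        s0, s1 = correct_s(t0, w1, w2), correct_s(t1, w1, w2)
        lin = s0 + (t - t0) / (t1 - t0) * (s1 - s0)
        err = max(err, abs(lin - correct_s(t, w1, w2)))
    return err

w1, w2 = 1.0, 0.1   # near vertex at z = 1, far vertex at z = 10
# one big triangle edge is much worse than four subdivided pieces
assert max_error(4, w1, w2) < max_error(1, w1, w2)
```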


Remind you of something?
When we did Gouraud shading, didn't we interpolate illumination values that we found at each vertex using screen-space interpolation?

Didn't I just say that screen-space interpolation is wrong (I believe "inherently flawed" were my exact words)?

Does that mean that Gouraud shading is wrong?


Gouraud is a big simplification
Gouraud shading is wrong. However, you usually will not notice, because the transition in colors is very smooth (and we don't know what the right color should be anyway; all we care about is a pretty picture).

There are some cases where the errors in Gouraud shading become obvious:

- When switching between different levels of detail
- At "T" joints

(Applet showing errors in Gouraud shading.)


Texture Tiling
Often it is useful to repeat or tile a texture over the surface of a model:

    if (w < wbuffer[x]) {
        raster.setPixel(x, texturemap[tile(correct_s, max), tile(correct_t, max)]);
    }

    int tile(int val, int size) {
        while (val >= size) { val -= size; }
        while (val < 0)     { val += size; }
        return val;
    }

(Tiling applet.)

Can also use symmetries…
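A shorter equivalent is a plain modulo (in Python, `%` already returns a non-negative result for a positive modulus), and a `mirror` variant sketches the "symmetries" idea, reflecting every other repeat; both helper names are ours:

```python
def tile(val, size):
    # wrap an integer texture index into [0, size)
    return val % size

def mirror(val, size):
    # mirrored repeat: indices run 0..size-1 then back size-1..0
    p = val % (2 * size)
    return p if p < size else 2 * size - 1 - p

assert tile(7, 4) == 3 and tile(-1, 4) == 3
assert mirror(5, 4) == 2 and mirror(-1, 4) == 0
```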


Texture Transparency
There is also a little code snippet to handle texture transparency:

    if (w < wbuffer[x]) {
        textureColor = texturemap[tile(correct_s, max), tile(correct_t, max)];
        float t = textureColor.transparency();
        newColor = t*textureColor + (1-t)*oldColor;
        raster.setPixel(x, newColor);
    }
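The blend itself is just a per-channel weighted average; a minimal sketch (function name and colors are illustrative):

```python
def blend(texture_rgb, alpha, old_rgb):
    # newColor = alpha*textureColor + (1-alpha)*oldColor, per channel
    return tuple(alpha * tc + (1 - alpha) * oc
                 for tc, oc in zip(texture_rgb, old_rgb))

# a 25%-opaque red texel over a blue framebuffer pixel
assert blend((1.0, 0.0, 0.0), 0.25, (0.0, 0.0, 1.0)) == (0.25, 0.0, 0.75)
```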




Summary of Label Textures
- Increases the apparent complexity of simple geometry
- Must specify texture coordinates for each vertex
- Requires projective correction (can't linearly interpolate in screen space)
- Specifies variations in shading within a primitive
- Two aspects of shading:
  - Illumination
  - Surface reflectance
- Label textures can handle both kinds of shading effects, but it gets tedious
- Acquiring label textures is surprisingly difficult


Difficulties with Label Textures
- Tedious to specify texture coordinates for every triangle
- Textures are attached to the geometry
- Easier to model variations in reflectance than in illumination
- Can't use just any image as a label texture: the "texture" can't have projective distortion.
  Reminder: linear interpolation in image space is not equivalent to linear interpolation in 3-space (this is why we need "perspective-correct" texturing). The converse is also true.
- Makes it hard to use pictures as textures


Projective Textures

- Treat the texture as a light source (like a slide projector)
- No need to specify texture coordinates explicitly
- A good model for shading variations due to illumination
- A fair model for reflectance (can use pictures)


Projective Textures

Texture projected onto the interior of a cube [Segal et al. 1992].


The Mapping Process
During the illumination process (in world or lighting space), for each vertex of the triangle:

- Compute the ray from the projective texture's origin to the point
- Compute the homogeneous texture coordinate [ti, tj, t] (using the projective matrix)

During scan conversion (in projected screen space):

- Interpolate all three texture coordinates in 3-space (premultiply by the w of the vertex)
- Do the normalization at each rendered pixel: i = ti/t, j = tj/t
- Access the projected texture
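The per-vertex step can be sketched as a matrix-vector product followed by the per-pixel divide. The 3×4 matrix below is a hypothetical projector sitting at the origin, looking down −z with a 90-degree field of view, with rows chosen so (i, j) land in [0, 1] on its image plane:

```python
M = [
    [0.5, 0.0, -0.5, 0.0],   # numerator row for i (the "ti" coordinate)
    [0.0, 0.5, -0.5, 0.0],   # numerator row for j (the "tj" coordinate)
    [0.0, 0.0, -1.0, 0.0],   # homogeneous "t" row
]

def project_texture(P):
    # apply the projective texture matrix to a world-space point,
    # then normalize: i = ti/t, j = tj/t
    x, y, z = P
    ti, tj, t = (r[0]*x + r[1]*y + r[2]*z + r[3] for r in M)
    return (ti / t, tj / t)

assert project_texture((0.0, 0.0, -2.0)) == (0.5, 0.5)    # on-axis -> center
assert project_texture((-2.0, 0.0, -2.0))[0] == 0.0       # left frustum edge
```

In a rasterizer, [ti, tj, t] would additionally be premultiplied by the vertex's w and interpolated perspective-correctly, exactly as in the modified rasterizer above.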


Projective Texture Examples
Modeling from photographs, using the input photos as textures [Debevec et al. 1996].


Adding Texture Mapping to Illumination
Texture mapping can be used to alter some or all of the constants in the
illumination equation. We can simply use the texture as the final color for the pixel,
or we can just use it as diffuse color, or we can use the texture to alter the normal,
or... the possibilities are endless!

Phong's Illumination Model

(Figures: constant diffuse color; diffuse texture color; texture used as label; texture used as diffuse color.)


Texture-Mapping Tricks

- Textures and Shading
- Combining Textures
- Bump Mapping
- Solid Textures


Shadow Maps
Textures can also be used to generate shadows (a variant of projective texturing). This will be discussed in a later lecture.


Environment Maps
If, instead of using the ray from the surface point to the projected texture's center, we use the direction of the reflected ray to index a texture map, we can simulate reflections. This approach is not completely accurate: it assumes that all reflected rays begin from the same point.
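The reflected direction used to index the map is the standard mirror reflection r = d − 2(d·n)n; a minimal sketch (function name is ours, and n is assumed unit length):

```python
def reflect(d, n):
    # mirror-reflect direction d about unit surface normal n:
    # r = d - 2*(d . n)*n
    k = 2.0 * sum(di * ni for di, ni in zip(d, n))
    return tuple(di - k * ni for di, ni in zip(d, n))

# a ray sliding down onto a floor with normal +y bounces back up
assert reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)) == (1.0, 1.0, 0.0)
```

The resulting direction would then be converted to texture coordinates by whatever parameterization the environment map uses (e.g. sphere map or cube map).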


What's the Best Chart?


Reflection Mapping


Image computed using the Dali ray tracer by Henrik Wann Jensen. Environment map by Paul Debevec.

Texture Mapping Modes
- Label textures
- Projective textures
- Environment maps
- Shadow maps
- …


The Best of All Worlds
All these texture-mapping modes are great! The problem is that none of them does everything well. Suppose we allowed several textures to be applied to each primitive during rasterization.


Multipass vs. Multitexture
Multipass (the old way): render the image in multiple passes and "add" the results.

Multitexture: make multiple texture accesses within the rasterizing loop and "blend" the results.

Blending approaches:
- Texture modulation
- Alpha attenuation
- Additive textures
- Weird modes


Texture Mapping in Quake
Quake uses light maps in addition to texture maps. Texture maps are used
to add detail to surfaces, and light maps are used to store pre-computed
illumination. The two are multiplied together at run-time, and cached for
efficiency.

                  Texture Maps    Light Maps
    Data          RGB             Intensity
    Instanced     Yes             No
    Resolution    High            Low

(Light map image by Nick Chirkov.)
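The run-time combine is a per-texel modulation: the RGB texel is scaled by the pre-computed light-map intensity. A minimal sketch (function name and values are illustrative):

```python
def shade(texel_rgb, light_intensity):
    # modulate: multiply each RGB channel by the scalar light-map value
    return tuple(c * light_intensity for c in texel_rgb)

# a texel at half illumination
assert shade((0.8, 0.4, 0.2), 0.5) == (0.4, 0.2, 0.1)
```

Because the light map varies slowly, it can be stored at much lower resolution than the texture map, which is what makes the scheme cheap.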

Bump Mapping
Textures can be used to alter the surface normal of an object. This does not change the actual
shape of the surface -- we are only shading it as if it were a different shape! This technique is
called bump mapping. The texture map is treated as a single-valued height function. The
value of the function is not actually used, just its partial derivatives. The partial derivatives tell
how to alter the true surface normal at each point on the surface to make the object appear
as if it were deformed by the height function.

Since the actual shape of the object does not change, the silhouette edge of the object will
not change. Bump Mapping also assumes that the Illumination model is applied at every pixel
(as in Phong Shading or ray tracing).

(Figures: swirly bump map; sphere with diffuse texture; sphere with diffuse texture & bump map.)


Bump Mapping (Special Case)

If P_u and P_v are orthogonal and N is normalized:

    P(u,v) = [x(u,v), y(u,v), z(u,v)]^T       (initial point)
    N = P_u × P_v                              (normal)
    P' = P + B(u,v) N                          (simulated elevated point after the bump)

    N' ≈ N + D,   where D = B_u P_u + B_v P_v

The variations of the bump map in the u and v directions are computed by numerical differentiation:

    B_u = (B(s−Δ, t) − B(s+Δ, t)) / (2Δ)
    B_v = (B(s, t−Δ) − B(s, t+Δ)) / (2Δ)

Bump Mapping (General Case)

    P(u,v) = [x(u,v), y(u,v), z(u,v)]^T       (initial point)
    N = P_u × P_v                              (normal)
    P' = P + B(u,v) N / |N|                    (simulated elevated point after the bump)

    N' ≈ N + D,   where D = (B_u (N × P_v) − B_v (N × P_u)) / |N|

As before, compute the bump-map partials by numerical differentiation:

    B_u = (B(s−Δ, t) − B(s+Δ, t)) / (2Δ)      (variation in the u direction)
    B_v = (B(s, t−Δ) − B(s, t+Δ)) / (2Δ)      (variation in the v direction)


Bump Mapping Derivation

    P' = P + B(u,v) N / |N|

Differentiating:

    P'_u = P_u + B_u N/|N| + B N_u/|N|
    P'_v = P_v + B_v N/|N| + B N_v/|N|

Assume B is very small, so the B N_u/|N| and B N_v/|N| terms are ≈ 0. Then:

    N' = P'_u × P'_v
       ≈ P_u × P_v + B_u (N × P_v)/|N| + B_v (P_u × N)/|N| + B_u B_v (N × N)/|N|²

But P_u × P_v = N,  P_u × N = −(N × P_u),  and N × N = 0, so:

    N' ≈ N + B_u (N × P_v)/|N| − B_v (N × P_u)/|N|
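The derivation's formula can be checked numerically. This sketch (helper names and the sample height function are ours) uses standard ∂B/∂u-style central differences; over the plane P(u,v) = (u, v, 0), the perturbed normal should come out proportional to (−B_u, −B_v, 1), the exact normal of the displaced surface z = B(u,v):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def bump_normal(Pu, Pv, B, u, v, d=1e-4):
    # N' = N + (Bu*(N x Pv) - Bv*(N x Pu)) / |N|
    N = cross(Pu, Pv)
    mag = sum(c * c for c in N) ** 0.5
    Bu = (B(u + d, v) - B(u - d, v)) / (2 * d)   # standard central differences
    Bv = (B(u, v + d) - B(u, v - d)) / (2 * d)
    NxPv, NxPu = cross(N, Pv), cross(N, Pu)
    return tuple(n + (Bu * a - Bv * b) / mag
                 for n, a, b in zip(N, NxPv, NxPu))

# plane with tangents x-hat, y-hat and a linear height field B = 0.1u + 0.2v
n = bump_normal((1, 0, 0), (0, 1, 0), lambda u, v: 0.1*u + 0.2*v, 0.0, 0.0)
assert all(abs(a - b) < 1e-6 for a, b in zip(n, (-0.1, -0.2, 1.0)))
```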
More Bump Map Examples

(Figures: bump map; cylinder with diffuse texture map; cylinder with texture map & bump map.)


One More Bump Map Example

Notice that the shadow boundaries remain unchanged.


Displacement Mapping
We use the texture map to actually move the surface point. This is called displacement mapping. How is this fundamentally different from bump mapping?

The geometry must be displaced before visibility is determined. Is this easily done in the graphics pipeline? In a ray tracer?


Displacement Mapping Example
It is possible to use displacement maps when ray tracing.

(Image from "Geometry Caching for Ray-Tracing Displacement Maps" by Matt Pharr and Pat Hanrahan.)


Another Displacement Mapping Example

Image from Ken Musgrave


Three Dimensional or Solid Textures
The textures that we have discussed to this point are two-dimensional functions mapped onto two-dimensional surfaces. Another approach is to consider a texture as a function defined over three-dimensional space. Textures of this type are called solid textures.

Solid textures are very effective at representing some types of materials, such as marble and wood. Generally, solid textures are defined as procedural functions rather than the tabularized or sampled functions used in 2-D (any guesses why?).

The approach that we will explore is based on "An Image Synthesizer" by Ken Perlin. The vase to the right is from this paper.


Noise and Turbulence
When we say we want to create an "interesting" texture, we usually don't care exactly what it looks like -- we're only concerned with the overall appearance. We want to add random variations to our texture, but in a controlled way. Noise and turbulence are very useful tools for doing just that.

A noise function is a continuous function that varies throughout space at a uniform frequency. To create a simple noise function, consider a 3D lattice with a random value assigned to each triple of integer coordinates.

Noise is a good start, but it looks pretty ugly all by itself. We can use noise to make a more interesting function called turbulence. A simple turbulence function can be computed by summing many different frequencies of noise.

(Figures: one frequency; two frequencies; three frequencies; four frequencies.)

Now we're getting somewhere. But even turbulence is rarely used all by itself. We can use turbulence to build even more fancy 3D textures, such as marble.

Marble Example

We can use turbulence to generate beautiful 3D marble textures, such as the marble vase created by Ken Perlin. The idea is simple. We fill space with black and white stripes, using a sine wave function. Then we use turbulence at each point to distort those stripes. By varying the frequency of the sine function, you get a few thick veins or many thin veins. Varying the amplitude of the turbulence function controls how distorted the veins will be.

    Marble = sin(f * (x + A*Turb(x,y,z)))
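A minimal sketch of the whole pipeline: a hash-based value noise on the integer lattice stands in for Perlin's gradient noise (the hash constants are arbitrary), turbulence sums attenuated octaves, and the marble function follows the formula above, remapped to [0, 1]:

```python
import math

def _lattice(i, j, k):
    # arbitrary hash of integer lattice coordinates to a value in [0, 1]
    h = (i * 73856093 ^ j * 19349663 ^ k * 83492791) & 0xFFFFFFFF
    return (h % 1024) / 1023.0

def noise(x, y, z):
    # trilinear interpolation of lattice values: a simple stand-in for
    # Perlin noise (continuous, varies at a uniform frequency)
    xi, yi, zi = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - xi, y - yi, z - zi
    n = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                n += w * _lattice(xi + dx, yi + dy, zi + dz)
    return n

def turbulence(x, y, z, octaves=4):
    # sum several frequencies of noise, each attenuated by its frequency
    t, f = 0.0, 1.0
    for _ in range(octaves):
        t += noise(x * f, y * f, z * f) / f
        f *= 2.0
    return t

def marble(x, y, z, f=4.0, A=1.0):
    # Marble = sin(f * (x + A*Turb(x,y,z))), remapped from [-1,1] to [0,1]
    return 0.5 + 0.5 * math.sin(f * (x + A * turbulence(x, y, z)))
```

Raising `f` gives more, thinner veins; raising `A` makes them more distorted, matching the description above.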
