Digital modeling of the appearance of materials

Article · January 2005 · DOI: 10.1145/1198555.1198694



Digital Modeling of the Appearance of Materials

Julie Dorsey
Holly Rushmeier
Yale University
Course Description

We present an introduction to the digital modeling of materials for realistic image synthesis. We begin with a visual tour of images of real materials, and consider how they are
classified by the effects that need to be modeled to realistically render them. Essential
appearance concepts such as diffuse, specular, subsurface scattering and wave effects will
be defined and illustrated. We will then discuss popularly used numerical models such as
Ward, Lafortune and Cook-Torrance. We will discuss these in terms of the effects they
capture and the visual impact of the parameters of each model, and will not cover their
mathematical derivations. We will conclude with the consideration of models that
simulate the processing or aging of materials to predict their variation with time. The goal
of the course is to provide an introduction to translating observation of materials in the
real world into model parameters and/or code for synthesizing realistic images.

Course Prerequisites

The course requires only an introductory level of familiarity with computer graphics from
either a previous course or practical experience. We will assume that the students
understand basic terms and ideas such as setting a pixel color by specifying values of red,
green and blue, and projecting a triangle onto a set of pixels given the specification of a
virtual pinhole camera.
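As a concrete reminder of the first prerequisite idea, a digital image can be thought of as a grid of pixels, each holding red, green and blue values. The representation sketched below is an illustrative choice of our own, not anything prescribed by the course:

```python
# Toy image representation: a grid of (r, g, b) triples.
# The nested-list layout and function names are illustrative assumptions.

def make_image(width, height):
    """A width x height image, every pixel initialized to black."""
    return [[(0, 0, 0) for _ in range(width)] for _ in range(height)]

def set_pixel(image, col, row, r, g, b):
    """Set one pixel's color by specifying its red, green and blue values."""
    image[row][col] = (r, g, b)

img = make_image(4, 3)
set_pixel(img, 2, 1, 255, 200, 0)  # an orange-yellow pixel
print(img[1][2])  # (255, 200, 0)
```

Rendering, in these terms, is the process of deciding which (r, g, b) values each pixel should receive.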

Instructors

Holly Rushmeier is a Professor of Computer Science at Yale University. Since receiving her Ph.D. from Cornell in 1988, she has conducted research in global illumination,
data visualization, applications of perception, 3D scanning, and applications of computer
graphics in cultural heritage. She has published in SIGGRAPH, ACM TOG, IEEE CG&A
and IEEE TVCG. Over the past 15 years, she has organized SIGGRAPH courses on
radiosity, global illumination and a scanning case study, and has lectured in
SIGGRAPH courses on capturing surface properties and applying perceptual principles to
rendering.

Julie Dorsey is a Professor of Computer Science at Yale University, where she teaches
computer graphics. Before joining the Yale faculty, she was a tenured faculty member at
MIT. She received undergraduate (BS, BArch 1987) and graduate (MS 1990, PhD 1993)
degrees from Cornell University. Her research interests include photorealistic image
synthesis, material and texture models, illustration techniques, and interactive
visualization of complex scenes. In addition to serving on numerous conference program
committees, she is a member of the editorial board of IEEE Transactions on Visualization
and Computer Graphics and is an area editor for The Visual Computer.

Additional information about our teaching and research activities can be found at:
http://graphics.cs.yale.edu/
Course Schedule and Notes Organization

1. Introduction – 10 mins
Digital Modeling of the Appearance of Materials: Art or Science?
2. A Visual Tour of Materials and Their Classification – 20 mins
3. (Really Brief) Terminology and Mathematical Descriptions – 10 mins
4. Models for Reflectance – 20 mins
Empirical
First Principles
Measured
5. Spatial Variations – 15 mins
6. Processing of Materials – 30 mins

7. Additional Materials

DORSEY, J., AND HANRAHAN, P. Modeling and rendering of metallic patinas. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (1996), ACM Press, pp. 387–396.

DORSEY, J., PEDERSEN, H. K., AND HANRAHAN, P. Flow and changes in appearance. In
Proceedings of the 23rd annual conference on Computer graphics and interactive techniques
(1996), ACM Press, pp. 411–420.

DORSEY, J., EDELMAN, A., JENSEN, H.W., LEGAKIS, J., AND PEDERSEN, H. K. Modeling and
rendering of weathered stone. In Proceedings of the 26th annual conference on Computer
graphics and interactive techniques (1999), ACM Press/Addison-Wesley Publishing Co., pp.
225–234.

8. References for further reading


Slide 1

Digital Modeling of the Appearance of Materials

Julie Dorsey
Holly Rushmeier
Yale University

In these course notes we present basic principles of defining numerical models to be used in rendering realistic imagery of physical materials. Additional information on research in rendering and materials can be found at our group website at http://graphics.cs.yale.edu/. Updates or corrections to these notes will be posted at this site.
Slide 2

1. INTRODUCTION

Digital Modeling of the Appearance of Materials: Art or Science?

The materials here are rendered with models. Artists conceived the shapes, and selected materials to construct such objects in the physical world. A purely artistic approach could be used to digitally paint the shades of light and dark on the digital shapes to give the illusion of translucent stone or shiny gold metal. To generate these images, however, the material models are expressed numerically and rendered using lighting simulations. That is, their appearance (the colors and shades of light and dark) was computed, rather than being digitally painted on the model.

There are many ways in which compelling visual images can be generated of the
world. Different approaches can be used to create images, all of which are valid
in some set of circumstances. To define our approach in this course we
distinguish between the use of digital models to define materials, and the use of
techniques. Broadly, by digital models we mean a scientific or engineering
approach to rendering, and by technique we mean an artistic or craft-oriented
approach. We are focused here on digital models of materials.
Slide 3

Digital Models: Predictable control parameters, consistent across view and lighting conditions

We define a model as taking a physically measurable input and producing a predictive output that can be verified by physical measurement. A model of a material makes possible the reliable rendering of the appearance of that material in any geometric and lighting conditions. We define a technique as taking an input which is not necessarily measurable, and producing an output that may or may not reproduce the appearance of an object under arbitrary circumstances. Human judgment is required to set the input of a technique, and to evaluate its success.
Models are typically used in science or engineering for their predictive reliability. If a bridge is
being designed, a model is used to predict the load that a particular beam can support given the
material and dimensions of the beam. Similarly, if a lighting system is being designed for a
building, the appearance of objects in the building with that system is predicted by modeling the
materials illuminated – carpet, paint, etc. Assurance that the system meets specifications is gained
by the ability to compute specific values of incident light that can be measured in the
environment. Material models may be used outside of design applications which must meet
quantitative specifications. Models may be used as elements in a technique. In designing the visual
look of a scene, an artist working in a digital medium may use a digital model of a material to
produce a visual effect drawn from their visual experience with the everyday world without the
demands of manual adjustments. Digital models may also be used by artists as a starting point to
identify parameters that can be manipulated to meaningfully exaggerate or suppress aspects of
appearance we normally encounter.

Techniques have been used successfully by artists for centuries to produce visual impact.
Techniques traditionally used to represent things like the effect of a shiny metal or matte painted
surface often serve as the inspiration for the development of a model. The line between a model
and a technique is not absolute. An example is the use of techniques and models to render a
drawing. Many image processing techniques can be applied to produce the illusion of different
materials used to make a drawing. A series of filters for blurring, edge detection and color
manipulation can be applied to a digital photograph to give the appearance of a drawing made
with charcoal or crayon. A user can adjust these filters to generate variations until they judge that
the effect of either the charcoal or crayon material has been achieved. That effect would be valid
for the direct view the user has of the image in the editing program. There would be no guarantee
that the image could, say, be used on a page lying on a desk in a digital scene and still look realistic. A digital model of charcoal or crayon, however, would take into account the physics of how the carbon or wax reflects light from various directions, and how the small-scale particles are typically deposited on the paper being used for a particular physical application of the
medium. The input to the model might be the density of carbon particles, the spectral reflectance
of the wax, and/or the geometric distribution of the material on the page as a function of rate of
application of the drawing medium. The measurable output would be the light distribution from
the page under the specified lighting conditions. The model of the drawing on paper could be
inserted into a digital desk scene, and would appear reliably dull or shiny depending on the
specific lighting environment. Clearly a mix is possible, and generally desirable, between the
technique and the model. The digital model of wax or charcoal reflectance could be applied to a
spatial distribution of the medium defined by the user using image processing techniques.

An artist seeks to produce visual impact through the use of color and texture. In computer
graphics this can be achieved by painting an image, or by painting or assigning acquired images
as textures to 3D objects. Many techniques have been developed for creating various types of effects with this approach, as described in works such as the book Digital Texturing and Painting by Owen Demers (New Riders Publishing, Indianapolis, IN, 2001). The artist may be
interested in creating a specific mood, or drawing attention to certain places in a scene. Our
purpose here is not to teach art or design to create moods or direct attention. We don’t aim to
present techniques, say, to make a piece of fruit look fresh and sweet. However by presenting
models that predict appearance in the physical world, we provide tools for the artist to draw on.
We present models for reflection of surfaces such as from a pear or apple, and for the refraction
of light through small drops of water on the surface of the fruit. If an apple covered with dew
produces the visual effect sought by the artists, the models we present here allow the construction
of a compelling image of the apple.
Slide 4

Digital Models: Goal is to produce images that appear the same as seeing a scene or object in person

Our goal is to make predictive images that give a view of a scene or object that is the same as if a person were viewing it directly. Material modeling is one
aspect of this. We need to consider the object’s shape, and the light incident on
it. We also need to take into account how the incident light is perceived by
humans. Perception is a complex phenomenon, but some simple understanding
of it makes modeling easier. The complexity of human perception is what makes
digital imaging even possible. It is a remarkable fact that vastly different arrays of
incident light, if adjusted properly, give us the same visual impression.
Our goal in this course is not to explain why things look the way they do to human beings, but to
simulate the mechanisms of appearance that are relevant to the construction of digital models.
Clearly there are libraries full of information of various aspects of human vision which ultimately
contribute to how people perceive and evaluate materials. However, it is also clear that the
appearance of an object depends on the light that reaches the viewer from the object, and the
nature of human visual response.

One approach would be to simulate with phenomenal accuracy the light that would reach a
viewer from an object, and then build a device that would produce that same light distribution for
the purpose of display. Such lighting simulations would be computationally infeasible, and such
display devices do not exist. To effectively model the appearance of materials and display the
results, we need to exploit the limitations of human vision. While there is no single
comprehensive model of vision, it is reliably known that there are limitations to human vision.
Many works discuss vision and perception in great depth, such as Seeing, edited by K. K. De Valois (Academic Press, San Diego, CA, 2000). The input to the visual system is completely
defined by specifying the spatial, temporal and wavelength variations of incident light. The limits
of the variations that need to be specified are defined by a “window of visibility”. There is a
relatively narrow band of wavelengths and spatial and temporal variations to which people are
sensitive. We exploit loose bounds on these limits in defining the accuracy and resolution
required of appearance models.

Because there is no ideal display device, existing devices further bound the detail that can be
rendered for a particular model. Many graphics methods have been developed that exploit device
limitations. Here we focus on extensible models that can be adjusted to take advantage of device
limitations for computational efficiency, but which ultimately are only bounded in their
application by the limits of human vision.
Slide 5

Light leaving an object that reaches our eye is the physical stimulus that allows
us to see it. The physics of light has been studied intensely for centuries. Our
understanding of light has moved from the Greeks' corpuscular theory, to Huygens's waves, to Einstein's theory of relativity, and continues to expand. In
modeling appearance our goal is not to incorporate the entire body of knowledge
that has been accumulated. It is generally not appropriate or useful to use the
most low level, detailed models of light that have been developed in physics.
What we want to do is extract from what is known about light the models that are
relevant to the human process of seeing an object.

Most of the effects of light relevant to image synthesis are well modeled by
geometric optics. In geometric optics, in a typical scene, we model light as traveling along straight paths between surfaces. The surfaces may either absorb the light or redirect it into a new direction.
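The simplest redirection, perfect mirror reflection, can be written in a few lines. The vector representation and function names here are illustrative choices of our own:

```python
# Sketch of redirecting a ray at a mirror-like surface under geometric optics.
# Vectors are plain tuples; names are illustrative, not from the course notes.

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Mirror-reflect incoming direction d about the unit surface normal n:
    r = d - 2 (d . n) n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# A ray heading down-right hits a horizontal surface (normal pointing up):
r = reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(r)  # (1.0, 1.0, 0.0): the vertical component flips, the horizontal is kept
```

The reflected direction makes the same angle with the normal as the incident direction, which is exactly the "mirror-like" behavior discussed in these notes.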
Slide 6

[Slide diagram: a light source illuminating a shape with a material model, viewed through a view frustum that projects onto the image plane.]

An environment consists of a set of objects, each defined by a shape and material description, and at least one light source. An infinite number of images
could be created of such an environment, and to specify a particular image a
viewpoint, view direction and view frustum (i.e. field of view) need to be specified.
The image is formed by projecting the objects in the environment seen through
the frustum onto an image plane that spans the field of view and is perpendicular
to the view direction. In a digital image, the image is discretized into pixels, and
the display values for that pixel are set by determining the light that would arrive
at the viewer from the object visible through that pixel.
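The projection step can be sketched for a single point. The camera conventions below (eye at the origin, looking down the negative z axis, symmetric frustum, square pixels) are our own illustrative assumptions, not the only possible setup:

```python
# Minimal sketch of projecting a 3D point through a virtual pinhole camera
# onto a discretized image plane. Conventions here are illustrative assumptions.
import math

def project_to_pixel(p, fov_deg, width, height):
    """Return the (col, row) pixel covering point p = (x, y, z) with z < 0,
    for a camera at the origin looking down -z with the given horizontal FOV."""
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal length in pixels
    u = f * p[0] / -p[2] + width / 2.0    # image plane perpendicular to view direction
    v = -f * p[1] / -p[2] + height / 2.0  # image rows grow downward
    return int(u), int(v)

# A point straight ahead of the camera lands in the image center:
print(project_to_pixel((0.0, 0.0, -5.0), 60.0, 640, 480))  # (320, 240)
```

Setting the display value of that pixel then requires determining the light arriving from whatever surface is visible through it, which is where the material model enters.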
Slide 7

Components of an Object’s Appearance

Shape · Global Illumination · Material Model

By shape we mean the large scale form or geometry of the object. The shape is needed to place
the image of the object correctly with respect to other objects in the scene, to determine which
other objects are occluded by the object, and what areas are cast into shadow by the object. Fine
scale geometric variations in the object we define as part of the object’s material from the point of
view of creating digital models in computer graphics. For a close view of a tree branch, a leaf is
defined by a flat shape, with the number of lobes or points depending on the type of tree. In an
aerial photograph, a leaf is a small facet in a tree canopy material that covers the terrain. Many
methods can be used to represent shape. The area of computer-aided geometry is devoted to the study of shape representation, and extensive descriptions of representations such as NURBs (non-uniform rational B-splines), triangle meshes, subdivision surfaces and implicit surfaces are documented at length in references such as Farin, Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide. Academic Press, Inc., 1996.

Many methods can be used to compute the interreflections of light between objects in an
environment. These methods, referred to as “global illumination” methods, include ray tracing,
radiosity, photon mapping and hybrids of these various approaches. Thorough discussions of
these methods can be found in Dutre, Bekaert and Bala, Advanced Global Illumination. AK
Peters Limited, Wellesley, MA, 2003. For rendering appearance, the essential feature of a
global illumination method is that for a given ray direction the quantity of light from that direction
at a particular point can be efficiently computed.
Slide 8

Components of Material Model:

Spectral (color)

Directional (shiny, matte, glossy, hazy)

Spatial variation (texture)

There are three important components of a material model that allow us to recognize a material: spectral, directional and spatial. We notice the color of an object (resulting from the spectral composition of light), its directionality (hazy, glossy, shiny), and small spatial variations (textures formed by light and dark, or little bumps).
COLOR

A lot of work in studying appearance has focused almost entirely on color. Color perception is consistent across populations and cultures, as evidenced by the basic color names for similar color groupings that are found in nearly all languages (BERLIN, B., AND KAY, P. Basic Color Terms: Their Universality and Evolution. 1969.) A significant proportion of the population (around 8 to 10 percent) is "color-blind"; however, for most this means that they have difficulty discriminating between particular colors (generally red and green), rather than that they have no color perception at all. When we are asked to describe an object, naming its predominant color is likely to be our response. For example, to the question "Which dress?" a likely response is "The green one" rather than "the one with a sheen." Reproducing the color of an object is key in modeling its appearance.

Our eyes are not sensitive to all possible variations in spectral distribution. A simple description of our visual system is that at normal light levels incident light is processed by three types of sensors, the cones in our eyes, each producing one signal. The signals are combined in various ways to produce our perception of color. The physiology of the receptors that produce color is relatively well understood, and their sensitivities as a function of wavelength have been measured.

At low light levels the cones are not sensitive to light, and our vision is the result of signals from
the rods in our eyes. There is only one type of rod, rather than three types for different wavelength bands, and as a result at low light levels we do not perceive color.

We don't use the sensitivity functions of the cones explicitly in image synthesis, but we do use the very powerful observation that the color we perceive is based on three independent signals, meaning that all the colors we perceive can be represented by a vector of three values. Basis functions that can be used to unambiguously compute colors from spectra have been defined by the International Commission on Illumination (known as the CIE). These basis functions are referred to as X, Y and Z, and were determined by psychophysical experiments. They are not the sensitivities of the eye's cones, but they are constructed so that any two spectra that map to the same values of X, Y and Z will be perceived as the same color. This mapping, that is, the projection of the spectrum onto the X, Y, Z basis functions, is analogous to the calculation of luminance by weighting the spectral distribution with the photopic luminosity curve (indeed, the Y basis function is the photopic luminosity curve). Formally, the XYZ coordinates are found by convolving the spectrum with the basis functions. Spectra that map to the same values of XYZ appear to be the same color:

[Figure: examples of metameric spectra, generated using http://www.cs.brown.edu/exploratories/home.html]
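The projection of a sampled spectrum onto the X, Y, Z basis functions is just a weighted sum over wavelength. A minimal sketch follows; the five-sample "matching functions" below are toy placeholders of our own, not the real CIE tables, which are sampled far more finely and have different values:

```python
# Discrete sketch of X = integral of S(lambda) * xbar(lambda) d(lambda),
# and likewise for Y and Z. Sample values below are illustrative placeholders,
# NOT real CIE data.

def tristimulus(spectrum, xbar, ybar, zbar, dlam):
    """Project a sampled spectrum onto three basis functions."""
    X = sum(s * x for s, x in zip(spectrum, xbar)) * dlam
    Y = sum(s * y for s, y in zip(spectrum, ybar)) * dlam
    Z = sum(s * z for s, z in zip(spectrum, zbar)) * dlam
    return X, Y, Z

# Toy basis functions sampled at five wavelengths:
xbar = [0.2, 0.1, 0.3, 0.8, 0.4]
ybar = [0.0, 0.3, 0.9, 0.6, 0.1]
zbar = [1.0, 0.5, 0.1, 0.0, 0.0]

flat = [1.0, 1.0, 1.0, 1.0, 1.0]  # an equal-energy spectrum
X, Y, Z = tristimulus(flat, xbar, ybar, zbar, 1.0)
```

Any two spectra that happen to produce the same (X, Y, Z) under this projection are metamers: they would be perceived, and displayed, as the same color.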


Color displays work by emitting light from red, green and blue elements. Each of these elements emits light across the spectrum, and the light from each element can be characterized by XYZ coordinates. The red elements have a higher value of X, the green a higher value of Y, and the blue a higher value of Z. To display an arbitrary color XYZ, the relative intensities of the red, green and blue elements are adjusted so that their sum matches the color to be displayed. For example, the color yellow has high values of X and Y and a relatively low value of Z. The elements would be adjusted so that red and green have high values, and the blue element a low value.
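Finding the element intensities amounts to solving a 3x3 linear system: each primary's XYZ coordinates at full intensity form one column of a matrix, and the required intensities are the solution for a given target. A sketch follows; the primary values are made-up numbers for illustration, not measurements of any real display, which would use its own calibrated matrix:

```python
# Sketch: find display intensities (r, g, b) so the mixed light matches a
# target XYZ. Primary values here are toy numbers, not real display data.

def solve3(m, v):
    """Solve the 3x3 linear system m * x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    def col_replaced(j):
        return [[v[i] if k == j else m[i][k] for k in range(3)] for i in range(3)]
    return tuple(det(col_replaced(j)) / d for j in range(3))

# Columns: toy XYZ of the red, green and blue primaries at full intensity.
M = [[0.6, 0.2, 0.2],   # X row: the red primary contributes most X
     [0.3, 0.6, 0.1],   # Y row: the green primary contributes most Y
     [0.0, 0.1, 0.9]]   # Z row: the blue primary contributes most Z

r, g, b = solve3(M, (0.5, 0.5, 0.1))  # a yellowish target: high X and Y, low Z
# As the notes describe for yellow: red and green come out high, blue low.
```

Targets outside the gamut of the primaries would yield negative intensities, which a real display cannot produce; handling that case is beyond this sketch.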
The three-signal characteristic of color perception does not in itself guarantee that it is adequate to simply sample three wavelengths to compute the color perceived from a particular spectral distribution. The same three spectral samples could come from wildly different spectra, which map to very different colors.

This is an extreme example. In practice, the rather smooth variations in spectral distributions
make it possible to use sparse sampling. Furthermore, the spectral samples often used in graphics
are rarely obtained by sampling in a narrow wavelength band, but rather are recorded over
broader bands, such as by the red, green and blue values recorded by a digital camera.

The smooth variation of spectral distributions, and the tendency to sample over ranges rather than at discrete wavelength values, coupled with the perception of color being the result of sensors which process the incident light into three signals, often make it practical to estimate object appearance with only three channels used to record wavelength-dependent variations.

The perceptions of color and brightness are low-level effects with common terminology and well-defined scales. In many applications concerned with appearance, such as photography, print and television, reproducing color and brightness is the only issue. These applications start with
nature providing the light from a physical object arriving at a camera sensor or film plane, and the
task is to produce a print or television display that provides the same stimulus to the viewer as
observing the object directly. Appearance is reproduced by ensuring that brightness and color
response stimulated by the display is the same as a direct view of the object. This in part accounts
for why color has been the main aspect of appearance that has been studied.

Unlike photography or television, we aren’t interested in just reproducing one image of the
object. We aren’t given a description of the light reaching the viewer for an existing object, we
need to predict it. The spatial and directional distribution of the light, as well as the spectral
distribution affects our impression of object appearance. If we want to generate an image that
looks like a red carpet, it is not enough to light up an array of red pixels. We need to account for
directional and spatial variations on the object.

For further reading on color in digital media see:

A Field Guide to Digital Color, by Maureen C. Stone. Natick, MA: A. K. Peters, 2003.
DIRECTIONAL EFFECTS

Directional effects are impressions of the object’s appearance that result from the directionality of
the scattering of light by the object. We can observe directional effects by noting whether we see
changes in an object as we change view. The directional effects we classify here are due to structures at a length scale smaller than that visible to the eye. Directional effects can be observed on completely flat surfaces. Descriptions that we attach to these impressions include shiny, hazy, glossy, matte, dull, translucent or transparent. These attributes are attached to the object, even when the specific pattern of light leaving the object varies with different incident illumination (see Fleming, Dror and Adelson, "Real-world illumination and the perception of surfaces." Journal of Vision 3, 5, 347–368.)

Unlike color, we don’t have a universally accepted name for this type of effect, or specific
definitions for when something is “shiny” versus “glossy” or “translucent” versus “transparent”.
This is probably related to the absence of specific detectors in our vision system for these effects
analogous to the cones that detect variations in the light spectrum.

Despite the absence of universal definitions, directional effects are clearly important. Terms and
classifications can be found in a variety of contexts and fields of study. In home decorating the
term “surface finish” is often used. Paints of the same color are designated as matte or gloss
finish, metal is weathered or polished, ceramics are glazed or unglazed. Within the term gloss,
various sorts of paints may be gloss, semi-gloss, eggshell, satin or low-luster. Surfaces may be
categorized by their sheen, or how reflective they are.

The physical stimulus for these visual impressions is clearly related to a material’s light scattering
function, but the relationship is not as clear as the relationship between wavelengths and color.
Because of the commercial impact of consumer judgments of these effects, though, there has been some work in correlating directional light scattering measurements with human judgments. In the effort to establish these correlations, Hunter (Hunter and Harold, The Measurement of Appearance. John Wiley and Sons, New York, NY, 1987) defined different types of gloss: specular ("brilliance of highlights"), sheen ("shininess at grazing angles"), luster ("contrast between specularly reflecting and other areas"), absence-of-bloom ("absence of haze, or milky appearance") and distinctness-of-image ("sharpness of mirror images"). A wide variety of standardized tests for these types of gloss have been defined by different industries. They involve measuring light scattering at particular angles and establishing a scale based on judgments relative to a "perfect" material. For example, sheen might be rated relative to the appearance of light reflected from a black glass.

In the study of human vision, the nature of directional effects has been studied in a variety of ways, including the effect of directional scattering on the perception of overall three-dimensional object shape. There has not been a focused effort, though, on relating specific perceptions to the physical scattering function. Only very early work has been done in computer graphics modeling to examine these effects. Westlund and Meyer ("Applying appearance standards to light reflection models." In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (2001), ACM Press, pp. 501–51.) explored how to convert industrial measurements into light scattering models. Pellacini, Ferwerda and Greenberg, in "Toward a psychophysically-based light reflection model for image synthesis." In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (2000), ACM Press/Addison-Wesley Publishing Co., pp. 55–64, performed psychophysical tests to relate perceptual gloss dimensions to a light scattering model. For the types of materials tested, they
found that gloss perception had two perceptual dimensions. However, there is no general measure
yet to predict when two scattering functions will produce the same impression of shininess or
gloss. The directionality of physical light scatter can be viewed as a very high dimensional space
(Matusik, Pfister, Brand and McMillan, "A data-driven reflectance model." ACM Trans. Graph. 22, 3 (2003), 759–769). It is unknown how many perceptual dimensions there are for
general material types. Clearly though, modeling the quantity of light scattered in a preferred
direction, the mirror direction, and the quantity scattered in all directions (diffusely) is important.

SPATIAL VARIATIONS:
Textures and Patterns

Another factor in judging appearance is whether there are visual variations on the object surface that are on a much smaller scale than the size of the object, but much larger than the wavelength of light. The perceived variations may be due to spatial variations in directional or spectral scattering characteristics, or to small-scale geometric structures such as bumps or pores. Regardless of the cause, people are able to judge surfaces as either uniform or not, that is, whether there is a texture or pattern. Descriptions of texture are even more varied than those used for directional variations. Even the word "texture" does not have a uniform meaning; for example, it is applied to microscopic phenomena in materials science and to tactile phenomena in foods.

Even by the definition of visual variations given above, many different terms may be used in different areas. Painted surfaces may be described with terms such as crackled, stippled or swirled. Different types of wood veneer are described by their "grain figures" or typical textures, such as cathedral, oval, flakes, burl, flame or fiddleback. Carpet textures are sometimes referred to as plush, shag or sculpted. More specific classifications include the textures of igneous rocks as phaneritic, aphanitic, porphyritic, glassy, vesicular and fragmental ("Igneous Rock Identification: Nature's Fiery Cauldron", by Dave Jessey and Don Tarman, http://geology.csupomona.edu/alert/igneous/texture.htm).

Despite the wide variation in physical origin and the terms used to describe it, the perception of texture has been studied extensively in both the human and computer vision literature. A great deal of work has been inspired by Bela Julesz, who in a seminal 1962 paper, "Visual pattern discrimination," IRE Trans. Inf. Theory IT-8 (1962), 84–92, used the then-new tool of computer-generated patterns to conduct experiments in which observers saw textures only, without any other physical context. His results launched a search for perceptual channels for texture that are analogous to the role of cones in color. The problem posed was whether there were features of textures that would predict when a person would judge two textures to be different. While no such channels have been found, there is evidence that texture perception is related to the vision system's mechanism for tuning to features of different orientations and spatial frequencies. This
observation has been exploited in computer graphics to synthesize large areas of texture formed
by spatial variations of spectral reflectance from small spatial samples (Heeger and Bergen,
“Pyramid-based texture analysis/synthesis.” In Proceedings of the 22nd annual conference
on Computer graphics and interactive techniques (1995), ACM Press, pp. 229–238.)
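A core operation in such synthesis methods is forcing one image to take on the value histogram of another, which can be sketched as a rank-for-rank swap. This shows only a single level of what Heeger and Bergen apply across the levels of an image pyramid, and the names and flat-list image representation are our own illustrative choices:

```python
# Sketch of the histogram-matching step at the heart of pyramid-based texture
# synthesis: pixel values of a noise image are replaced, rank for rank, by the
# sorted values of an exemplar texture of the same size. A single level only.

def match_histogram(noise, exemplar):
    """Give `noise` the value histogram of `exemplar`, keeping its spatial layout."""
    order = sorted(range(len(noise)), key=lambda i: noise[i])  # indices by rank
    target = sorted(exemplar)                                  # values to hand out
    out = [0] * len(noise)
    for rank, i in enumerate(order):
        out[i] = target[rank]   # the k-th smallest noise pixel gets the
    return out                  # k-th smallest exemplar value

noise = [0.9, 0.1, 0.5, 0.3]
exemplar = [10, 20, 20, 40]
print(match_histogram(noise, exemplar))  # [40, 10, 20, 20]
```

The output has exactly the exemplar's distribution of values but the spatial arrangement of the noise; repeating this across pyramid levels is what lets the full method reproduce features at several orientations and spatial frequencies.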
Slide 9

2. A Visual Tour of Materials and their Classification

In this section we look at a number of everyday objects and consider why they
look the way they do. To classify the material we need to discount the other
factors that contribute to appearance – shape and illumination. To discount these
factors in observation, it helps to view the object from different angles and in
different positions. In some cases it is impossible to completely decouple our
perception of material from shape, because the two are closely associated for some
materials (e.g., we aren’t used to seeing cows made out of leaf material).

We will use images of real objects, and a diagram of how light interacts with
them. Light interactions can be viewed in two ways. We need to consider how
individual light rays incident on an object are scattered or redirected. Then, we
consider the rays that we need to follow to form an image, and how we can follow
the scattered rays “backwards” to see how the simple single ray interaction leads
to the formation of an image.
Slide 10

Observe or measure → determine parameters to render

To model materials, we begin by observing and/or measuring the materials we
wish to image in the real world. For some common materials, approximate
parameters that give a reasonable estimate for rendering can be determined just
from observing and classifying the material. For highly accurate rendering, more
formal measurements are needed. Even if a measurement is going to be made,
observation and classification are needed to determine what kind of
measurement is required – materials vary in the quantity of data that must be
measured or estimated as a function of position and direction.
Slide 11

The most familiar and basic light scattering is regular or “mirror-like” reflection, as
shown in the photo at the top. Light rays reflect into a single direction, and that
direction forms the same angle with the surface normal as the incident direction, as
shown on the lower left. Because the reflected rays stay organized as they were
when they left the previous objects, a sharp image is formed just as though you
were looking directly at the objects. This regular or mirror-like reflection is
referred to as pure or ideal specular reflection.
Slide 12

In a plane mirror, reflected objects are as sharp and as correctly proportioned as
when viewed directly because the mirror is flat. This image shows an object with
the same mirror-like specular reflection. The shapes of objects reflected in this
object are distorted not because of a different material property, but because the
shape of the specular object is not a plane.

Pure specular reflection occurs at smooth surfaces. For a surface to reflect a high
percentage of the incident light, very little light can enter the surface to be
scattered or absorbed. One classification of materials is into metals and dielectrics.
Metals conduct electricity easily, and dielectrics do not. An implication of
conducting electricity easily is that light, which is electromagnetic energy, does
not enter and scatter through a metal. Pure specular surfaces that reflect a high
percentage of the incident light are metals. As illustrated later, pure specular
reflection can also occur with only a small percentage of the incident light reflected.

We think of mirrors as being glass, a dielectric. However, the pure specular reflection
we see is from a thin metal coating on either the front or back of a clear glass plate.
The glass is used to keep the metal flat and dimensionally stable.
Slide 13

Pure specular reflection may be spectrally selective. We generally think of a
mirror as colorless, or possibly silver; this is just a way of describing a material
that reflects all wavelengths the same way. The image shown above is a pure
specular object that is spectrally selective – it reflects a higher fraction of the
incident light in the range of mid and longer wavelengths that we associate with a
yellow color than in the shorter wavelengths that we associate with blue colors. Note
that even though we would generally refer to this faucet as gold or brass
colored, red and blue colors are visible in the reflection. The colors of the blue
and red objects appear slightly different in their reflection than when viewed
directly, because less short-wavelength light is reflected from the faucet than is
reflected directly from the objects themselves. The reflection of the white sink
looks like quite a different color, since a lot of the light leaving the sink has not
been reflected. The effect of the spectral selectivity of the faucet on the light
coming from the white sink and the red plastic cap is shown in the spectral plots.
Slide 14

Light may also go through an object along a regular path; this is the regular
transmission we are used to in glass, as shown on the upper left. Most of the light
goes through the glass, although a small percentage is specularly
reflected. The path of the light is bent at the boundaries between air and glass. In
the images above, the effect of this bending is difficult to see at the edge of the
glass, since the glass is thin and the redirection of the light ray compared to
traveling freely in air is very small.

While the percentage of light reflected is very small, when there is very little light
coming through the glass it appears more mirror-like. In the lower image, less light
has been directed at the black cards, so there is very little light coming through
the glass. As a result the reflection of the outdoor light (with the quantity of light
from the window being the same in both images) is much more noticeable.
Slide 15

For metals, the fraction of light reflected by pure specular reflection is nearly the
same for all angles of incidence. For dielectrics like glass, though, the fraction of
light reflected is higher at grazing angles, an effect often referred to as “Fresnel”
reflection, diagrammed in the lower figures and illustrated in the images above. Unlike
the images in the previous slide, the light from the white document behind the
glass is not changing. In this case, the relative visibility of the document behind
the glass changes because a higher percentage of the light from the vase and the
can is reflected at the near-grazing viewing angle at the right than at the near-normal
viewing angle on the left.
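The rise in reflectance toward grazing angles can be sketched numerically. The fragment below (a sketch, not part of the course materials) uses Schlick's well-known approximation to the dielectric Fresnel reflectance; the refractive index n = 1.5 is an assumed, typical value for glass.

```python
import math

def fresnel_schlick(cos_theta, n=1.5):
    """Schlick's approximation to the Fresnel reflectance of a smooth
    dielectric with refractive index n (n = 1.5 is typical of glass).
    cos_theta is the cosine of the angle between the viewing direction
    and the surface normal: 1.0 at normal incidence, near 0 at grazing."""
    f0 = ((n - 1.0) / (n + 1.0)) ** 2       # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Only about 4% is reflected looking straight on, but most of the light
# is reflected at a grazing view:
print(fresnel_schlick(1.0))                          # ~0.04
print(fresnel_schlick(math.cos(math.radians(85))))   # ~0.65
```

This is why the document behind the glass nearly disappears at the near-grazing view: the reflected room light overwhelms the small amount of transmitted light.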
Slide 16

The appearance of objects reflected in and transmitted through a thin piece of flat
glass is the result of relatively simple ray paths that generally preserve
the objects’ appearance. As in the fourth slide, the different ray paths formed by
the changing surface orientation of smoothly shaped or faceted glass objects
form distorted images of the surroundings, as illustrated in the images and
diagram above. There is no difference between the material on the left and right; the shape
of the object makes the difference in visual impact between the disjointed
images in the faceted object and the distorted but continuous images in the
smooth object. While there are simple light interactions at each surface, either a
mirror-like reflection or a simple refraction, the combination of multiple reflections
and redirections of each ray as it passes through the object and reaches the eye
results in a complex appearance.
Slide 17

Many materials are shiny or glossy, but not purely specular. In these materials,
incident beams of light are distributed into a cone or lobe of directions centered
around the specular, or mirror, direction. As a result, when you look
at such materials the light reflected from each point of the surface includes light
from a range of surfaces in the environment, instead of just one
point. Instead of seeing sharp edges reflected, everything looks blurred. By
observing the reflections in the paint can in the image, you can see that how
blurred things look depends on how close the reflected objects are to the
glossy surface. If they are relatively close, the cross section of the cone from
which a point is reflecting light is relatively small, and lines like that between the
yellow and blue surfaces above are only blurred a bit. As the objects get further
away, the cross section of the cone becomes large, and can include entire
objects, which then do not appear with any detail when reflected in the glossy
surface.
Slide 18

Objects that appear to have the same pattern of light and dark regardless of how
you view them (as long as you don’t block a significant source of light from the
environment as you move to another view) are diffuse. An ideal diffuse (also
referred to as Lambertian) object reflects an incident beam of light as light rays of
much lower magnitude in all directions. The light coming from any point on the
object in any direction is a product of light coming from many different sources in
the environment. The contribution of each source in the environment varies very
slowly from point to point on the object, so the amount of light varies slowly from
point to point, and no clear images of the environment can be seen in the
object.
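A minimal numerical sketch of the Lambertian idea (the albedo and irradiance values below are made up for illustration): an ideal diffuse surface reflects radiance ρE/π, with no dependence on the viewing direction.

```python
import math

def lambertian_radiance(irradiance, albedo, view_theta):
    """Radiance reflected by an ideal diffuse (Lambertian) surface.

    view_theta is accepted only to emphasize that it does not matter:
    an ideal diffuse surface reflects the same radiance in every
    direction, so the argument is deliberately unused."""
    return albedo * irradiance / math.pi

E = 100.0   # irradiance in w/m^2 (an illustrative, made-up value)
rho = 0.6   # diffuse albedo (also made up)
head_on = lambertian_radiance(E, rho, 0.0)
glancing = lambertian_radiance(E, rho, math.radians(80))
print(head_on == glancing)   # True: the radiance is view-independent
```

The 1/π factor is what makes a Lambertian surface with albedo 1 conserve energy when the reflected radiance is integrated over the hemisphere.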
Slide 19

Spatially varying reflectance just means that the reflectance varies from point to
point on the object surface. On the left, the spatial variation is in the spectral
variation of the reflectance. For many materials, modeling the spatial variation is
the most critical for the material to be recognized. For example, different pieces
of wood may have different characteristics in how they reflect light directionally,
but we recognize them as wood by characteristic spatial variations of higher and
lower levels of reflectance, as seen in the image on the right.
Slide 20

Spatially uniform, diffuse + specular

Objects may show a combination of specular and diffuse reflection. In dielectric
objects, such as the plastic bag at the left and the ceramic teapot at the right, there is
a difference in the spectral selectivity of the different components of reflection.
The diffuse reflectance can have spectral variation, such as the pink color of the
diffusely reflected light on the left, or the brownish-red, green, white and dark
brown diffusely reflected light on the right. The specular component of
reflectance is white – that is, the wavelength distribution of the light that is
reflected specularly is the same as that of the incident light.
Slide 21

The separation of diffuse and specular components can be observed by moving
around the object without changing or blocking the main light sources. The
spatial distribution of specularly reflected light – here most noticeable in the
round white highlights – changes in position as you change your view, while the
spatial distribution of the diffuse light does not. The fraction of light energy
reflected specularly is very small in this example – on the order of a couple of
percent. Only the white highlights corresponding to the reflection of the light
source are obvious, although all the incident light from the room is reflected
specularly. Looking closely, a dim distorted reflection of the environment can be
seen in these images. Because the fraction of light reflected specularly is so low,
only the specular reflection of the light sources is noticeable compared to the
light that is reflected diffusely.
Slide 22

Analogous to diffuse reflection, thin surfaces may diffusely transmit light. In this
example, the cylinder is made of an opaque material, while the letter A is made
from a thin translucent material. When lit from the front, as shown in the top
image, the surfaces look similar. When lit from the back, however, the material in
the letter A transmits the light so it appears brighter. Because the transmission is
diffuse, it is not possible to see objects clearly through the letter A.
Slide 23

In addition to the reflectance that depends on material microstructure and
chemical composition, appearance depends on small-scale geometric
structure. Just as some materials are characterized primarily by spatial
variations in reflectance, other materials are characterized primarily by their small-scale
geometric structure. “Small” is defined as orders of magnitude smaller than
the overall object. The image above shows a piece of plastic with a pattern
pressed into it that changes the surface from smooth to bumpy. The small-scale
geometric structure shown here is characteristic of leather, and this fact
is used in the production of physical materials to make a plastic look like leather.

The variation of light and dark in the image of the plastic is not due to spatial
changes in reflectance, but to the change of surface orientation caused by the
small-scale geometry. Even small indentations can cause large changes in the
surface normal. The surface normal, rather than the absolute surface position,
determines in which direction incident light will be reflected.
Slide 24

Some materials don’t just reflect light from the surface, or just transmit it.
In some cases light penetrates the material and scatters in the interior. This is
referred to as subsurface scattering, and it can occur in dielectrics but not in metals.
Under normal room illumination, surfaces that allow subsurface scattering often
do not look dramatically different from completely opaque surfaces. The image
on the right, though, shows an extreme example of illumination: a green laser is
directed at the top of a small piece of stone. Besides being reflected from the top,
the light is scattered in the material and is visible emerging from the sides of the
stone.
Slide 25

On some objects vibrant hues are observed, which change as the view changes.
The images above show typical examples, where the basic rainbow of colors is
visible. These types of colors are explained by considering
the wave nature of light, and require special models to predict this behavior.
Although the color effects are similar, different phenomena are responsible for
the colors in these two examples. On the left, on the CDs, the colors appear as
the result of small grooves, on the same scale as the wavelength of light, causing
diffraction. On the right, on the wrapping paper, the colors appear as the result of
the transmission and reflection of light through very thin layers of material that
cause interference.
Slide 26

Anisotropic specular highlights: spread depends on material orientation

For some materials like brushed steel, or objects covered by rows of fine thread
or grooves, the specular highlights of round light sources appear stretched.
These stretched highlights are indicative of anisotropic reflection – that is, the
material has a particular orientation.
Slide 27

Backscatter/Retroreflection: the moon

http://ares.jsc.nasa.gov/HumanExplore/Exploration/EXLibrary/images/Pics/Apollo/05Moon.GIF

“Full moon as photographed from 10,000 nautical miles away by the crew of Apollo 11 during
their transearth journey homeward.”

If the moon were diffuse, a full moon would get dark at the edges. If it were
specular, we would see highlights in it. The relatively “flat” appearance of the
moon is due to backscattering. Backscattering, or retroreflection, is sometimes
designed into materials, such as the retroreflective strips on clothing worn by
people to be visible to cars at night. Apart from extreme examples such as the
moon or safety reflectors, backscattering is difficult to observe, since its effect is
obscured by the effects of ambient illumination.

Ref.: How Retroreflectivity Makes Our Roads Safer,
http://safety.fhwa.dot.gov/media/nightlights.htm
Slide 28

Observing objects like these leaves, we can see the components that need to be
represented in a digital model. Modeling the appearance of these leaves of
course includes modeling the overall leaf shape. The leaf material then is
composed of diffuse reflectance (with spatially varying spectral distribution),
glossy specular reflectance, diffuse transmittance (also with spatially varying
spectral distribution) and small scale geometric variations modeling the
indentations at the vein locations.
Slide 29

3. (Really Brief) Terminology and Mathematical Descriptions

Key quantities:

- Radiance L: the amount of light we compute in generating an image
- Bidirectional Reflectance Distribution Function (BRDF) fr: a description of how light is redirected at a surface
- λ: wavelength dependence => color
- θ, φ: direction
- x, y: position

An explanation of the mathematics of light transport isn’t possible in a brief
lecture. However, a couple of important points are:

- A lot of the notation in light transport just denotes that quantities vary with color (spectral dependence λ), direction (given by the angles θ and φ) and position (x, y).
- There are two quantities that are key, but which take some getting used to. One is the quantity of light we want to compute, the radiance L. The other is the function telling how a surface scatters light, the BRDF fr.
Slide 30

A bit about calculus: why we use derivatives and integrals. How do you add up
what happens at a bunch of points to find what happens in an area? Think of
tiny, “differential” areas dA, not dimensionless points.

In lighting calculations the idea of an infinitesimal or differential quantity is used
to identify a quantity at a point. From calculus we use the idea that by
integration we can add the effects of all the differential elements to find the effect
of the whole. For example, in considering the energy leaving a surface we don’t
consider energy leaving a dimensionless point, but a differential area dA. By
integrating over all dA (or digitally adding small elements ∆A) we can find the
energy leaving an entire surface in terms of the energy leaving at each position.
Slide 31

Planar vs. solid angle: a small arc on a circle vs. a patch on a sphere.

Just as we can’t add up dimensionless points to get an area, we can’t add up
infinitely thin rays to get all directions. All directions from a point in 3D are defined
by a sphere containing the point, just as all directions from a point in a plane are
defined by a circle around the point.

A planar angle is associated with a small arc on a circle in the plane. In three
dimensions, a solid angle is associated with a small area on a sphere
surrounding a point.
Slide 32

Direction from dA is given by two angles.

We need two planar angles to specify a direction from a surface in 3D. One is
measured from the surface normal. The other is found by projecting the
direction into the plane of the surface; this second angle is measured in that
plane from an axis defined in the plane. The angles θ and φ on the sphere of
directions are directly analogous to latitude and longitude on the
globe. In that case, the plane is the plane containing the Equator, and the
arbitrary axis for φ is the intersection of the plane containing the Equator and the
plane containing the great circle through Greenwich, England.
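The latitude/longitude analogy can be made concrete with a small helper (a generic sketch, not code from the course) that converts the two angles into a 3D unit vector, taking the surface normal as the z axis and an arbitrary x axis in the surface plane:

```python
import math

def direction_from_angles(theta, phi):
    """Unit direction vector for polar angle theta and azimuthal angle
    phi, taking the surface normal as the z axis and an arbitrary x
    axis in the surface plane."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# theta = 0 is the normal itself; theta = pi/2 lies in the surface plane.
print(direction_from_angles(0.0, 0.0))          # (0.0, 0.0, 1.0)
print(direction_from_angles(math.pi / 2, 0.0))  # essentially (1, 0, 0)
```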
Slide 33

Angles indicating direction and solid angle

Differential steps in the directional angles θ and φ do not all correspond to the same solid
angle – the solid angle depends on the particular value of θ. This effect accounts for the factor of
sin θ arising in various formulae for light transfer.
Slide 34

The radiance is defined as

L(Θ, x, y, λ) = d²Φ(Θ, x, y, λ) / (cos θ dA dω)

where d²Φ is the energy per unit time and cos θ dA is the projected area.

The key quantity we want to compute to make a picture is the radiance, which is
energy per unit time, area, and solid angle. The unit area we are interested in is
the area perpendicular to the direction the light is traveling. When we
consider light leaving a surface at an arbitrary angle θ, the area perpendicular to
the line of travel is cos θ times the surface area dA.
Slide 35

Bidirectional reflectance distribution function, BRDF:

fr(Θi, Θr, x, y, λ) = dLr(Θr, x, y, λ) / (Li(Θi, x, y, λ) cos θi dωi)

The key quantity we use to define how a surface redirects light is the BRDF,
which relates incident and reflected radiance for two given directions. The BRDF
is a distribution function, not a fraction from zero to one; it can take on values
from zero to infinity. To conserve energy, the integral of the BRDF times the cosine
of the reflected angle over all reflected directions must be less than or equal to one.
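This energy-conservation statement can be checked numerically for the simplest case, the ideal diffuse (Lambertian) BRDF, which is the constant ρ/π. The cosine-weighted integral of this BRDF over the hemisphere of reflected directions evaluates to the albedo ρ, which is at most one. A quick numerical sketch (the albedo value is illustrative, not course code):

```python
import math

def hemisphere_integral(brdf, n_theta=200, n_phi=200):
    """Numerically integrate brdf(theta, phi) * cos(theta) over the
    hemisphere of reflected directions, using the solid angle element
    sin(theta) * dtheta * dphi.  For an energy-conserving BRDF the
    result must be <= 1 for every incident direction."""
    d_theta = (math.pi / 2.0) / n_theta
    d_phi = (2.0 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta                 # midpoint rule
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            total += (brdf(theta, phi) * math.cos(theta)
                      * math.sin(theta) * d_theta * d_phi)
    return total

rho = 0.8                                       # illustrative diffuse albedo
lambertian = lambda theta, phi: rho / math.pi   # constant BRDF
print(hemisphere_integral(lambertian))          # ~0.8, i.e. the albedo, <= 1
```

Note that the BRDF value ρ/π itself is about 0.25 here, yet nothing stops a BRDF from exceeding one in a narrow lobe; only the integral is bounded.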
Terms and Notation: Detailed Discussion

We observe the appearance of materials resulting from the combined effects of light, shape, and the proper-
ties of the materials themselves, filtered through the properties of the human visual system. Observation
provides guidance in how to model materials, but doesn’t give us the precision needed for digital modeling.
The interaction of light and materials occurs in physical, not perceptual, terms, so we need to consider
characterizing materials in quantitative physical terms, rather than qualitative perceptual terms. We could
devise completely perceptually inspired models for individual materials, and define parameters to adjust
them. However, that would not guarantee that objects look correct when viewed in combination, or as
conditions change. Perceptual observations let us categorize materials as dielectric versus metallic, dull
versus shiny. Within these categories, though, we need physically based models with parameters that have phys-
ical meaning, so that we can measure materials, adjust them, and reuse them in scenes to produce effects
consistent with what we observe in the real world.
To make physically based numerical definitions, we need to define various expressions for how incident
light energy is distributed by a material with respect to position, direction and wavelength. In the previous
section we saw how material interactions depend on the material characteristics at a particular position
or in a particular direction. We also saw an instance in which the light reflected in a particular direction
depended on the light from all incident directions. We need a mathematical mechanism that allows us to
specify values for a particular direction, position or wavelength, and then lets us add up all of the particular
values over all directions, positions or wavelengths. The mechanism that lets us do this is the calculus. For
our purposes, a complete calculus course is not required. We rely only on the basic concept from calculus
that we can divide the quantities we are interested in into infinitesimally small pieces that approach zero
in size. These infinitesimally small quantities still have meaning: we can take ratios of them to specify
values at particular points, and we can add up the small pieces at all points to find the quantity for the object as
a whole.
Expressing directional variation introduces complications beyond the concept of using infinites-
imals. However, directional variations are essential to modeling appearance. The key quantity in defining
light transfer in a particular direction is radiance. The key quantity for expressing the directional effect of
materials on the incident radiance is the bidirectional reflectance distribution function (BRDF). There are
many materials that require an extended definition of scattering beyond the BRDF, but these extensions are readily
understood once radiance and the BRDF are defined. We will introduce some foundations for the definitions
of these quantities. Throughout, when units are used, the SI (Système International d’Unités) system will be
used.
We denote energy as Q. The SI unit for energy is the joule. Since we are interested in light transfer and not
in its storage, we will only be dealing with the rate at which it moves through an environment per unit time.
The rate of energy transfer per unit time is power, expressed in the SI system in watts. Wattage ratings are
familiar on light bulbs: light bulbs aren’t rated to produce or absorb a particular quantity of energy, but to
consume electrical energy at some rate. The power radiated is referred to as the radiant flux. To express the
average flux Φ over a period of time we would measure the energy ∆Q transferred through some time period
∆t and find:
Φ = ∆Q/∆t (1)
To find the flux Φ(t) at a particular instant we consider a differential period of time dt, that is, the quantity ∆t
as it approaches zero, and the differential amount of energy dQ transferred in dt:

Φ(t) = dQ/dt (2)

Light travels at nearly 300 million meters per second. We are not going to consider generating images at
that time scale, so we will not carry the dependence of flux on time any further in our definitions.
Next, we need to consider how the flux of light Φ varies with position. We examine how to do this in detail
to illustrate how taking the ratio of quantities that are approaching zero in magnitude allows us to express
the energy transfer at a particular point. First let’s consider the radiant flux Φ leaving a surface A as
shown on the left of Fig. 1.

Figure 1: A finite area A on the left can be considered as a set of small surfaces ∆A, as shown in the center, or
as an infinite number of differential surfaces dA, as shown on the right.

The average flux leaving per unit area, or radiant exitance M, is the total flux leaving divided by the surface
area A, or
M = Φ/A (3)
To express the radiant exitance from a particular point (x, y), first consider breaking the surface A into
small pieces ∆A as shown in the center image of Fig. 1. Each of the small pieces ∆Ai will have a flux ∆Φi
associated with it. The sum of the fluxes ∆Φi will be the total flux Φ. For example, suppose we had a flux
of 10 watts (w) leaving an area of one square meter (m2 ), for an average exitance M of 10 w/m2 . We
could divide that area into four areas, ∆A1 , ∆A2 , ∆A3 and ∆A4 , each of 0.25 m2 . We might find that a flux ∆Φ1 of
2 w leaves area ∆A1 , and 3 w, 4 w and 1 w leave ∆A2 , ∆A3 and ∆A4 respectively. Obviously, the flux
from each area is lower than from the whole. If we compute the radiant exitance for each of the four areas,
though, we find that M1 is (2 w/0.25 m2 ) or 8 w/m2 , and M2 , M3 and M4 are 12, 16 and 4 w/m2 respectively.
These radiant exitances are not all smaller than the original, and generally are the same order of magnitude.
We have started to characterize how the radiant exitance varies with position, as M is
different for each of the four smaller areas. We can continue to divide up the area into smaller pieces, as
illustrated in the center image of Fig. 1 for N smaller areas. The flux from each piece will continue to get
smaller. However, the ratio of flux per unit area will stay the same magnitude, and we will have a better and
better estimate of the spatial variation of radiant exitance from the surface. The limit of this process is to
divide the area up into infinitesimally small areas dA, and consider the flux dΦ from each of these areas.
The infinitely small area dA is located at a single point (x, y), rather than covering some range of values of
x and y on the surface. By considering this infinitesimally small area dA we can define the radiant exitance
M(x, y) at a particular position:
M(x, y) = dΦ(x, y)/dA (4)
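The four-piece example above can be reproduced numerically; the per-piece fluxes are smaller than the whole, but the local exitances are the same order of magnitude and their area-weighted average recovers M:

```python
# Fluxes and areas from the worked example: 10 w leaving 1 m^2,
# divided into four quarters of 0.25 m^2 each.
fluxes = [2.0, 3.0, 4.0, 1.0]      # watts from each quarter
areas = [0.25, 0.25, 0.25, 0.25]   # m^2

exitances = [phi / a for phi, a in zip(fluxes, areas)]
print(exitances)         # [8.0, 12.0, 16.0, 4.0] w/m^2

# Weighting each local exitance by its area recovers the average M:
M_avg = sum(m * a for m, a in zip(exitances, areas)) / sum(areas)
print(M_avg)             # 10.0 w/m^2
```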
The radiant energy per unit time and area arriving at a surface is called the irradiance E. It is defined in the
same manner as M, with the only difference being whether the radiant energy is approaching or leaving the
surface.
When we consider very small objects with light leaving them, such as a small light source, we are interested
in the flux in a particular direction. Similar to considering a full surface when defining radiant exitance, let’s
begin by considering the set of all directions. In the case of a small object, we can think of all of the
directions around the object as a unit sphere around the object, as shown on the left of Fig. 2. Any direction

Figure 2: The set of all directions around a small object is described by a sphere of radius one surrounding
it, as shown on the left. This sphere of all directions ω subtends 4π steradians. The sphere of all directions
can be broken up into a set of smaller sets of directions ∆ω, as shown in the center. Finally, a particular
direction is associated with a differential set of directions dω, as shown on the right.

from the object is specified by a point on the unit sphere. Any set of directions from the small light emitting
object can be indicated as a surface on the unit sphere. A set of directions is referred to as a solid angle. The
area of a unit sphere is 4π , and the set of all directions from a point in the center of the sphere is defined
to be 4π steradians. Steradians are the three dimensional analog of radians as a measure of an angle on the
plane. On the plane, the set of all directions around a point is 2π radians.
The radiant flux per unit solid angle is called the radiant intensity. For a small object with total radiant flux
Φ the average radiant intensity I over the solid angle of all directions, that is over ω of 4π steradians is :

I = Φ/4π (5)

Similar to the treatment of radiant exitance, we can localize the radiant intensity to sets of directions by
considering the flux leaving through each of the sets of directions defined by dividing up the sphere
of all directions into small areas on the sphere, as shown in the center of Fig. 2. Finally, we can express the
intensity for a particular direction by considering the infinitesimally small flux dΦ through an infinitesimally
small set of directions dω, represented as a differential area on the unit sphere.
Besides defining the meaning of a differential set of directions, we need coordinates to specify a particular
direction. We specified a particular position using the (x, y) coordinates on the surface A. We can specify
a direction from the small object by setting up a spherical coordinate system around the small object as
shown in Fig. 3. The angle a direction makes with “straight up” we denote as θ, which is referred to as
the polar angle. We choose an arbitrary direction perpendicular to “straight up”, in this case to the right,
as a second reference. A plane is defined by passing through the center of the object perpendicular to the up
direction. The projection of a direction onto this plane makes an angle φ with the second reference direction,
and φ is referred to as the azimuthal angle. Together θ and φ specify a direction. We will use Θ
to compactly denote the direction (θ, φ). To specify the intensity I in a particular direction, rather than an
average intensity, we define:
I(Θ) = dΦ(Θ)/dω (6)

We can express the differential solid angle dω in terms of the differential angles dθ and dφ by considering the
area swept out on the unit sphere by these angles. As shown in Fig. 4, the height of the small patch swept
out is ∆θ (or the differential dθ in the limit as ∆θ becomes infinitesimal). The width of the patch depends
on the value of θ, with the patch being very thin near θ equal to zero and widest at θ equal to π/2. In fact, the
width of the patch is given by sin θ dφ. The intensity I (in watts per steradian, w/sr) can then also be expressed as:

Figure 3: To specify directions, we define a spherical coordinate system using a polar angle θ and an azimuthal
angle φ.

I(Θ) = d²Φ / (sin θ dθ dφ) (7)
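That the differential solid angle is sin θ dθ dφ can be checked by summing these patches over the entire sphere, which must total 4π steradians. A quick numerical sketch (not course code):

```python
import math

def total_solid_angle(n_theta=1000, n_phi=1000):
    """Sum the solid angle elements sin(theta) * dtheta * dphi over the
    whole sphere (theta from 0 to pi, phi from 0 to 2*pi).  The result
    should approach 4*pi steradians."""
    d_theta = math.pi / n_theta
    d_phi = 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta       # midpoint of this band of patches
        # The patch area does not depend on phi, so the inner loop over
        # the n_phi azimuthal patches collapses to a multiplication:
        total += math.sin(theta) * d_theta * (n_phi * d_phi)
    return total

print(total_solid_angle())   # ~12.566, i.e. 4*pi
```

Restricting θ to [0, π/2] instead gives 2π steradians, the solid angle of the hemisphere above a surface, which is the domain used for the BRDF energy check.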

In describing light transfer, we are concerned with the flux per unit area and per unit solid angle simultaneously.
As with the other quantities, we will form this by taking a ratio involving differentials of Φ, A and ω. A
differential flux dΦ leaves in a solid angle dω. An infinitesimally small portion d²Φ of that dΦ leaves
from the differential surface dA. A question, though, is what area should be used to define dA? In
understanding light transfer between objects, and from an object to the eye, we want to define a quantity of
light transfer that is independent of a particular object, but has meaning anywhere in space. A surface dA
that is uniquely defined for a particular direction is a differential area perpendicular to the direction of travel
Θ.
The impact of defining light transfer per unit time, direction and area in the direction of travel is to somewhat
complicate the definition of the light leaving a particular surface. This is because unlike the small object
radiating in all directions that we considered in defining intensity, a surface radiating energy has a particular
orientation. In Fig. 2 the direction of “straight up” is arbitrary. For a surface however, the surface normal
defines a unique direction. We will consider the surface normal as the axis from which the polar angle θ
is measured. If another object is in a direction for which θ is less than π /2, it will receive radiant energy
from the surface. However, if the direction of an object has a value of θ greater than π /2 it cannot receive
radiant energy – it is behind the surface. A surface looks largest, that is has the largest “projected area”
when viewed along the surface normal. The surface decreases in apparent size with the cosine of the view
direction θ .
To account for this effect of the apparent area decreasing with view direction, the key quantity radiance
in a particular direction Θ is defined as the radiant flux per unit solid angle and unit area projected in the
direction θ . The radiance L is defined as:

L(Θ) = d²Φ(Θ)/(cosθ dA dω) (8)

Understanding the cosθ in the denominator of the definition of radiance is one of the two difficult concepts
to get over in understanding the mathematical description of how light interacts with materials. One way to

Figure 4: The area on the sphere swept out by small polar and azimuthal angles increases as the polar angle
of the center of that area increases from zero to π/2.

think of its convenience is to think of a small patch of diffuse material, such as the right ear of the dog used
as an example of diffuse reflection. Looking straight at the ear in the right image, the ear appears to have
a certain brightness, and a certain amount of light energy from it reaches our eye. Looking at the ear at a
more glancing angle in the image on the left, the patch is just as bright, but overall less light from it reaches
our eye because the area the light is coming from is smaller from our new point of view. The quantity that
affects our perception of brightness is the radiance. The radiance of the patch for a diffuse material is the
same for all directions even though the energy per unit time depends on the orientation of the view relative
to the surface.
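This behavior can be illustrated with a small numerical sketch (the radiance, area, and detector values below are arbitrary, chosen only for illustration): the flux collected from a diffuse patch falls off with the cosine of the view angle even though the radiance stays the same.

```python
import math

def flux_to_detector(L, patch_area, det_solid_angle, view_theta):
    """Power reaching a small, distant detector from a diffuse patch of
    radiance L viewed at polar angle view_theta: the projected area
    shrinks by cos(theta), so the collected flux does too, even though
    the radiance (perceived brightness) is unchanged."""
    return L * patch_area * math.cos(view_theta) * det_solid_angle

L = 10.0           # W/(m^2 sr), the same in all directions for a diffuse patch
A = 1e-4           # a 1 cm^2 patch
w = 1e-3           # solid angle subtended by the detector, sr
head_on = flux_to_detector(L, A, w, 0.0)
glancing = flux_to_detector(L, A, w, math.radians(60.0))
print(head_on, glancing)   # the glancing flux is half the head-on flux
```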
The additional variable that we want to account for as well as time, position and direction is wavelength.
Implicitly, since we are concerned with visible light in the span of 400 to 700 nm, all of the quantities we
have discussed so far are for energy in that band. To express flux, irradiance, radiant exitance, intensity
or radiance as a function of wavelength, we consider the quantity at a value of λ within a small band of
wavelengths between λ and λ + dλ. By associating a dλ with each value we can integrate spectral
values over the whole spectrum. We express the spectral quantities such as the spectral radiance as:

L(x, y, Θ, λ) = d³Φ(x, y, Θ, λ)/(cosθ dA dω dλ) (9)

We can describe how materials affect the path of light now by expressing how the radiance leaving a surface
is related to the radiance incident on a surface. For a pure specularly reflecting material, there is only
one possible direction that light leaves the surface for a given incident direction. In terms of the spherical
coordinate system we are using, light that is incident within a differential solid angle dωi from direction
(θi, φi) is reflected in a differential solid angle dωr in direction (θi, φi + π). For these types of materials all
we need to do then is specify how much, if at all, the radiance is reduced after it is reflected. We can express
this as a simple fraction which we will call reflectance ρs . For a pure specular material such as a mirror we
can express this as:

ρs = Lr /Li (10)

As demonstrated in the image of the brass faucet earlier this may vary with wavelength, and as demonstrated

Figure 5: The bidirectional reflectance distribution function (BRDF) relates the radiance reflected to the
radiance incident on a surface.

with the reflection from the glass this may also vary with the angle of incidence. Therefore, we define the
specular reflectance more generally as

ρs(θi, φi, λ) = Lr(θi, φi + π, λ)/Li(θi, φi, λ) (11)

For pure specular reflection, the solid angle that light is reflected into is the same as the solid angle of the
incident light, and the projected area of the surface in the direction of incidence is the same as the area
projected in the direction of reflection. This means that for specular reflection the ratio of the exitant flux to
the irradiance is the same as the ratio of the reflected radiance to incident radiance. This is a unique situation
for pure specular reflectance. In general we use the term reflectance to express the ratio of the reflected
energy flux per unit area to the irradiance. This ratio must always be between zero and one. In general this
ratio only expresses the fraction of the energy that is not absorbed by the material; it doesn't express how
light is redirected.
To describe how a surface redirects light we consider light incident on the surface with radiance L(λ , Θi , x, y)
within a differential solid angle d ωi . The irradiance dEi on the surface that will be either absorbed or
redirected is:

dEi (λ , Θi , x, y) = L(λ , Θi , x, y)cosθi d ωi (12)

The cosθi term appears because the radiance L measures energy per unit area dA in the direction of travel
Θi and that direction projected into the orientation of the surface we are considering is cosθi dA. The d ωi
term enters in because we want to know the effect of energy coming from a single direction representing
a differential section of all the possible directions above the surface. The light reflected by the surface in
each direction can be described by the radiance in each direction. The effect of a material redirecting light
is then given by a function that is the ratio of the radiance reflected in a particular direction Θr as a result of
the total incident flux per unit area from another direction Θi. This ratio is referred to as the bidirectional
reflectance distribution function (BRDF), fr :

fr (λ , x, y, Θi , Θr ) = dLr (λ , x, y, Θr )/Li (λ , x, y, Θi )cosθi d ωi (13)

The BRDF is a distribution function, not a reflectance. It describes how the radiance is distributed in different
directions, rather than expressing the fraction of energy reflected. For a general surface, some finite amount
of incident energy from direction Θi is redirected into the hemisphere of all directions about the surface. If
we look at any one reflected direction we are considering only an infinitesimally small solid angle d ωr . If
we consider the redistribution of energy to be roughly even over all directions, the magnitude of the reflected
radiance must be infinitesimally small relative to the magnitude of the incident radiance. Consider an
incident radiance of 100 W/(m²·sr) incident normal to the surface in a solid angle subtending one percent
of the hemisphere of 2π sr above the surface. The irradiance is 100 · cos 0 · 0.01 · 2π, or about 6.3 W/m². If all
of this incident light is reflected evenly in all directions, the reflected radiance in the direction of the surface
normal would be 6.3/π, or about 2 W/(m²·sr). If we consider a smaller incident solid angle, one tenth this
size (as we approach estimating the effect of light incident from a single direction), the reflected radiance
by the same analysis would be 0.2 W/(m²·sr) for the same incident radiance of 100 W/(m²·sr). As we let
the solid angle approach zero, the reflected radiance would approach zero. However, if we characterize the
distribution of light as the ratio of the reflected radiance and a differential solid angle that we reflect into,
and let both incident and reflected solid angles go to zero at the same rate, we get a quantity that does not go
to zero. For both of the incident solid angles considered above (one percent and one tenth of a percent of the
hemisphere), we would calculate the ratio of reflected radiance to solid angle (∆Lr/∆ωr) as about
32 W/(m²·sr²). This is why, rather than expressing the light distribution as a ratio of radiances, we express
the distribution as a ratio of a differential radiance per differential solid angle and a radiance. Understanding
that the BRDF is a distribution function rather than a fraction is the second difficult concept in understanding
how light is described mathematically (the first being the projected area term in the radiance definition).
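The worked example above can be reproduced in a few lines of Python (a sketch assuming a lossless ideal diffuse surface lit at normal incidence):

```python
import math

def reflected_radiance(L_in, omega_in, rho=1.0):
    """Radiance reflected by an ideal diffuse surface lit at normal
    incidence by radiance L_in arriving through solid angle omega_in.
    The irradiance is E = L_in * cos(0) * omega_in, and a lossless
    diffuse surface reflects radiance E / pi in every direction."""
    E = L_in * omega_in              # cos(0) = 1
    return rho * E / math.pi

L_in = 100.0                         # W/(m^2 sr)
for fraction in (0.01, 0.001):       # fraction of the 2*pi sr hemisphere
    omega = fraction * 2.0 * math.pi
    L_r = reflected_radiance(L_in, omega)
    # L_r shrinks with omega, but the ratio stays near 32 W/(m^2 sr^2)
    print(L_r, L_r / omega)
```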
We can compute a reflectance value using the BRDF by adding together the light reflected in all directions.
The fraction of energy reflected into all directions from light incident from one direction must be between zero
and one. This reflectance value is referred to as the directional-hemispherical reflectance ρdh, and it puts a
constraint on possible values of the BRDF. The total exitant flux per unit area from the surface for the incident
beam of light with radiance Li is the integral of the reflected radiance dLr over the whole hemisphere of directions:

∫ωr dLr(λ, x, y, Θr) cosθr dωr

In terms of the BRDF:



fr (λ , x, y, Θi , Θr )Li cosθi d ωi cosθr d ωr
ωr

Since the incident radiance and direction do not depend on reflected direction, these terms can be treated as
a constant outside of the integral:

Li cosθi dωi ∫ωr fr(λ, x, y, Θi, Θr) cosθr dωr

The directional hemispherical reflectance then is this quantity divided by the incident flux per unit area,
Li cosθi dωi, giving:

ρdh(λ, x, y, Θi) = ∫ωr fr(λ, x, y, Θi, Θr) cosθr dωr (14)

The restriction that ρdh be between zero and one is the only bound on the possible values that fr(λ, x, y, Θi, Θr)
can take on. For a diffuse surface that has the same reflected radiance in every direction, the value of fr is the
same in each direction. With fr a constant we can find a relationship between the BRDF of a diffuse surface
and its directional hemispherical reflectance, which is that fr is equal to ρdh/π. For a specular surface the
directional hemispherical reflectance is just equal to the specular reflectance ρs. In the integral on the right
of Eq. 14, since there is nonzero reflected radiance in only one direction for a given incident direction, the
value of fr is nonzero for a single direction. Since the solid angle for a single direction is a differential
approaching zero, to make the product in the integral on the right equal to a finite value, the value of fr for a
specular surface approaches infinity.
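The relation fr = ρdh/π for a diffuse surface can be verified by numerically evaluating the integral in Eq. 14 with a constant BRDF (an illustrative sketch; the chosen ρdh = 0.6 is arbitrary):

```python
import math

def directional_hemispherical(f_r_const, n_theta=1000):
    """Eq. 14 for a constant BRDF: rho_dh is the hemisphere integral of
    f_r * cos(theta_r) d omega_r, with d omega_r = sin(theta) dtheta dphi."""
    d_theta = (math.pi / 2.0) / n_theta
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # an isotropic integrand contributes a factor 2*pi from the phi integral
        total += f_r_const * math.cos(theta) * math.sin(theta) * d_theta * 2.0 * math.pi
    return total

f_r = 0.6 / math.pi                       # diffuse BRDF with rho_dh = 0.6
print(directional_hemispherical(f_r))     # recovers approximately 0.6
```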
The definition of BRDF provides a framework to describe how different materials interact with light. In
general, surfaces are not pure specular, ideal diffuse, or a combination of these two. Next, we will consider
general models that have been developed to describe the BRDF of surfaces.

Slide 36

4. Models for Reflectance

Empirical
First Principles :
Mathematical
Simulation
Measured
Slide 37

Reflection/transmission at a smooth surface

Mirror reflection: θi = θr
Snell's law for transmission: sinθi/sinθt = n2/n1
(n1 and n2 are the indices of refraction on either side of the boundary; θt is the transmitted angle)

The equations for the angles for reflection and transmission at a smooth surface
are derived from a solution of Maxwell’s equations for the electromagnetic fields
before and after the reflection and transmission. Continuity across the boundary,
and the electromagnetic properties of the materials, given by the “index of
refraction” n determine what we refer to as the mirror reflection angle, and Snell’s
law for the transmitted direction. Details of this solution can be found in radiation
heat transfer texts such as Siegel and Howell, Thermal Radiation Heat Transfer
(Hemisphere, 1981). For dielectrics, the equations shown above hold. The value
of the index of refraction is 1 for air, and about 1.33 for water, 1.5 for glass and
2.42 for diamond. For glass, the index of refraction is about 1.54 on the blue end of the
spectrum, and 1.5 on the red end. This spectral variation of the index of
refraction is what causes different wavelengths of light to bend a different amount
when they pass through glass, and causes white light to split into a rainbow of
colors in a prism. This type of optical effect in prisms and lenses is discussed in
optical texts such as Jenkins and White, Fundamentals of Optics, (McGraw-Hill,
1976).
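Snell's law is easy to evaluate directly. The sketch below (illustrative code, using the air and glass indices quoted above) computes the transmitted angle:

```python
import math

def snell_transmitted_angle(theta_i_deg, n1, n2):
    """Snell's law: n1 sin(theta_i) = n2 sin(theta_t).  Returns the
    transmitted angle in degrees, or None past total internal
    reflection (possible only when n1 > n2)."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# light entering glass (n ~ 1.5) from air (n = 1) at 45 degrees bends
# toward the normal
print(snell_transmitted_angle(45.0, 1.0, 1.5))  # about 28.1 degrees
```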
Slide 38

Reflection at a smooth surface (dielectric)

(Plot: reflectance versus angle of incidence θi, from 0° to 90°, for light passing from a medium with index
n1 to one with index n2; the reflectance rises toward 1 as θi nears grazing.)

The fraction of light reflected can also be calculated from the solution of
Maxwell’s equations, and the results are referred to as Fresnel reflectance. For
dielectrics, the fraction of light reflected depends on the angle of incidence. For
metals, there is not this dramatic change as the angle of incidence nears grazing.
Details of this solution also can be found in radiation heat transfer texts such as
Siegel and Howell, Thermal Radiation Heat Transfer (Hemisphere, 1981).
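For reference, the unpolarized Fresnel reflectance of a dielectric can be computed from the classical equations (this sketch averages the s- and p-polarized terms; it is the standard textbook result, not code from the texts cited):

```python
import math

def fresnel_dielectric(theta_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance at a smooth dielectric boundary,
    the average of the s- and p-polarized reflectances."""
    ci = math.cos(theta_i)
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return 1.0                      # total internal reflection
    ct = math.sqrt(1.0 - s * s)
    r_s = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)) ** 2
    r_p = ((n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)) ** 2
    return 0.5 * (r_s + r_p)

print(fresnel_dielectric(0.0))                 # about 0.04 for glass at normal incidence
print(fresnel_dielectric(math.radians(89.0)))  # approaches 1 near grazing
```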
Slide 39

Empirical Models for Directional Effects

Phong
Ward
Lafortune

Models composed of functions scaled by a small number of parameters to fit a BRDF

Empirical models don’t attempt to simulate reflection or scattering from the basic
laws of physics. They define a set of mathematical functions that can be
controlled by a small number of parameters. The functions are chosen to best
match the shapes of known BRDFs with the smallest number of functions
possible. The goal of empirical models is to have a small number of parameters
that are easy to relate to the observation of a material. The early classic model
was developed by Phong, and subsequent models can be viewed as variations
and improvements on this basic model.
Slide 40

Empirical Models

Phong
α

Lobe width determined by n

3D view of Phong reflectance from

http://math.nist.gov/~FHunt/appearance/obl.html

The original Phong reflectance model is described in the classic paper: Bui
Tuong Phong “Illumination for computer generated pictures”
Communications of the ACM, v.18, n.6, p.311-317, June 1975. It was
expressed as reflectance function for light intensity, rather than as a BRDF for
computing radiance. However, it was inspired by physical observation. The effect
of the model in rendering a sphere is compared to a photograph of a real sphere
in the paper. The fuzziness of the specular reflection is computed as a function of
the angle α between the reflected direction and the mirror reflection angle.
kd and ks give the fractions of diffusely and specularly reflected light.

fr(Θi, Θr) = kd + ks cos^n α
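A direct evaluation of this reflectance function (the values of kd, ks and n below are arbitrary, chosen only for illustration):

```python
import math

def phong_brdf(alpha, k_d=0.3, k_s=0.5, n=20):
    """Phong reflectance: k_d + k_s * cos(alpha)^n, with alpha the angle
    between the viewing direction and the mirror reflection direction."""
    return k_d + k_s * max(math.cos(alpha), 0.0) ** n

for deg in (0, 10, 30):
    # the specular term falls off quickly as alpha grows; a larger n
    # would tighten the highlight further
    print(deg, phong_brdf(math.radians(deg)))
```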
Slide 41

Phong

Specular, n=.5
Ideal Diffuse

In contrast to diffuse reflection, the specular component concentrates the
reflected light. The larger the value of n, the smaller the specular highlights
formed by the reflection of the light source. (Images generated using GLView,
http://home.snafu.de/hg/)
Slide 42

Phong

Colored highlight
White highlight

Using the Phong model, different colors can be assigned to the specular portion
of reflection. As seen in the tour of materials, white highlights are characteristic of
dielectrics (like the plastic penguin), colored highlights are characteristic of
metals.

Slide 43

Phong

Specular only
Combined
Slide 44

Phong

Increasing n →
(colored specular)

Slide 45

Phong

Increasing n →
(white specular)
Slide 46

Ward

Lobe described by an exponential function; may be flatter in one dimension (anisotropic reflection)

Applies to transmission also

The Ward reflectance model is similar to the Phong model except it is expressed in
physical terms – it expresses the relationship between incident and reflected radiance
and conserves energy. Rather than using the cosine term to a power, it uses an
exponential function, parameterized by an average slope, to express the shape of the
specular lobe. Furthermore, the specular lobe can be anisotropic – by expressing
different slopes for different directions on a surface (e.g. for a set of grooves the slope is
zero along the grooves, and potentially steep perpendicular to the grooves). This results
in the width of the lobe being different in the direction out of the page than it is in the
page. Further, the model can be applied to regular and diffuse transmission through a
thin surface. The model is fully described in Ward Larson and
Shakespeare, Rendering with Radiance: The Art and Science of Lighting
Visualization (Morgan Kaufmann, 1998). ρd and ρs are the fraction diffuse and
specular, θh the angle of the vector half way between incident and reflected directions,
and σ is a roughness parameter.

fr(Θi, Θr) = ρd/π + ρs · (1/√(cosθi cosθr)) · exp(−tan²θh/σ²)/(4πσ²)
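The isotropic form of the model can be evaluated directly (the parameter values below are arbitrary illustrations, not fits to any material):

```python
import math

def ward_isotropic(theta_i, theta_r, theta_h, rho_d=0.2, rho_s=0.1, sigma=0.15):
    """Isotropic Ward BRDF: a diffuse term rho_d/pi plus a specular lobe
    whose width is controlled by the roughness sigma; theta_h is the
    polar angle of the half vector between the incident and reflected
    directions."""
    diffuse = rho_d / math.pi
    specular = (rho_s
                / math.sqrt(math.cos(theta_i) * math.cos(theta_r))
                * math.exp(-math.tan(theta_h) ** 2 / sigma ** 2)
                / (4.0 * math.pi * sigma ** 2))
    return diffuse + specular

# the mirror configuration puts the half vector on the normal (theta_h = 0)
peak = ward_isotropic(math.radians(30), math.radians(30), 0.0)
off = ward_isotropic(math.radians(30), math.radians(30), math.radians(10))
print(peak, off)   # the lobe decays quickly away from theta_h = 0
```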
Slide 47

Ward

Ideal diffuse; diffuse plus white glossy specular; diffuse plus white anisotropic glossy specular

Since the Ward model is developed in physical terms of incident and reflected
radiance, it works (by design) in a system that simulates physically accurate
global illumination. These variations were rendered using the Radiance software
system. A point to remember is that physically accurate material models only
create realistic appearance when used in the context of a physically accurate
global illumination system.
Slide 48

Ward

Metal, pure specular Metal, glossy

The darker orange in the model is due to the fact that this is a global illumination solution
– rays are reflected from the object to itself, altering the reflected spectrum and
intensifying the color on each reflection.
Slide 49

Ward

Metal, isotropic glossy; metal, anisotropic (horizontal) glossy; metal, anisotropic (vertical) glossy

Anisotropic reflection has a significant impact on appearance, but for a
complicated object its effect is only clear when compared with isotropic reflection,
or with anisotropic reflection at a different orientation.
Slide 50

Ward

Ideal diffuse transmission with glossy specular reflection; combined diffuse and regular transmission with
glossy specular reflection

These transmitting models are rendered with the same surface spectrum as the
others. They appear darker and a different color though, because the light is not
primarily from the bright white light source, but from the dimmer grayish green
wall and floor. Observing the shadow of the object on the right also demonstrates
the effect of a material on its surroundings when a global illumination solution is
computed.
Slide 51

Lafortune
Generalized Cosine Lobe

Backscatter; generalized diffuse; off-specular peak

The generalized cosine lobe model described in Lafortune, Foo, Torrance, and
Greenberg, “Non-linear approximation of reflectance functions,” In
Proceedings of the 24th annual conference on Computer graphics and
interactive techniques (1997, ACM Press/Addison-Wesley Publishing Co.,
pp. 117–126) gives a different generalization of the Phong model. Like the
Ward model, it is formulated in physical terms. It conserves energy. Instead of
just describing peaks of reflection around the specular direction, it allows the
definition of lobes (possibly anisotropic) around any axis defined with respect to
the surface. Important other axes are just off the specular direction, the normal
direction and the direction of the source (for backscatter). The general form of the

reflectance is:

fr(Θi, Θr) = (Cx ux vx + Cy uy vy + Cz uz vz)^n

where u and v are vectors in the incident and reflected directions expressed in a
coordinate system where the z axis is the surface normal, Cx, Cy and Cz are
coefficients determining the direction and shape of the lobe, and n defines how
narrow it is. Sets of functions of this form can be summed to form the BRDF for a
single material.
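A single lobe of this form can be evaluated as follows (a sketch; the choice Cx = Cy = −1, Cz = 1, which yields a Phong-like lobe about the mirror direction, is one common setting, and the exponent is arbitrary):

```python
import math

def lafortune_lobe(u, v, C=(-1.0, -1.0, 1.0), n=10.0):
    """One generalized cosine lobe: (Cx ux vx + Cy uy vy + Cz uz vz)^n,
    with u and v unit vectors for the incident and reflected directions
    in a frame whose z axis is the surface normal.  Cx = Cy = -1, Cz = 1
    gives a Phong-like lobe about the mirror direction."""
    dot = C[0] * u[0] * v[0] + C[1] * u[1] * v[1] + C[2] * u[2] * v[2]
    return max(dot, 0.0) ** n

# incident direction 45 degrees off normal in the x-z plane; the mirror
# direction flips the x component
s, c = math.sin(math.radians(45)), math.cos(math.radians(45))
u = (-s, 0.0, c)
mirror = (s, 0.0, c)
print(lafortune_lobe(u, mirror))           # 1.0: the lobe peaks here
print(lafortune_lobe(u, (0.0, 0.0, 1.0)))  # much smaller off the peak
```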
Slide 52

Lafortune

An example of a BRDF that the Lafortune model can represent that previous
models could not is generalized diffuse reflectance. In general, even surfaces
that appear matte or diffuse don’t reflect radiance evenly in all directions – the
reflection may peak in the direction of the surface normal and fall off at near
grazing viewing angles. The effects shown here are found using Cx = Cy = 0, Cz = 1,
and n equal to 0, 0.5 and 2 respectively. (Rendered by a customized local
illumination implementation of Lafortune model.)
Slide 53

Lafortune

The Lafortune model, unlike Phong or Ward, also provides a mechanism for
defining back scatter. In this case a sum of two Lafortune lobes is used. With
summing functions, there become a large number of parameters Cx,Cy,Cz and n
to be defined for specifying reflectance. This makes the model inconvenient for
user interfaces. The Lafortune model is popular though for fitting masses of
measured BRDF data into a compact representation.
Slide 54

First Principles Reflectance models

Model interaction of light with material at microscopic scale

~ 10 µm

In contrast to empirical methods that look for convenient functional forms, first
principles methods model the interaction of light with a mathematical model of the
material defined at a microscopic scale. The most frequently used first principles
models use as a mathematical model a statistical distribution of surface facets to
describe the details of the boundary between a material and air. The most
popular methods model this interaction with geometric optics, which requires that
the surface being modeled be “large” with respect to the wavelength of light
(which is 0.4 to 0.7 microns). Some more complex models use wave optics to
capture the effects of diffraction at the surface.
Slide 55

First Principles Reflectance models

Statistical surface models

Smooth: average slope is zero

Sample rough: average slope ~ .3

General surface has irregular shape, but characteristic mean slope

A statistical model of a surface describes how many facets of various slopes and
sizes are on the surface, rather than explicitly defining the surface geometry.
A smooth surface has an average slope of zero.
Slide 56

First Principles Reflectance models

First principles models account for the effects that facets can have on one
another – they may block light incident on another facet, making it appear darker,
or they may block light leaving the facet before it reaches a viewer, again
resulting in a darker appearance. Even unblocked, the orientation of the facets
results in light being scattered in a much different directional pattern than from a
smooth surface.
Slide 57

First Principles Reflectance models

Cook-Torrance

Oren-Nayar

Two popular first principles models are Cook-Torrance (Cook and Torrance, “A
reflectance model for computer graphics,” ACM Transactions
on Graphics 1, 1 (Jan. 1982), 7–24) and Oren-Nayar (Oren and Nayar,
“Generalization of Lambert's reflectance model,” In Proceedings of
the 21st annual conference on Computer graphics and interactive
techniques (1994), ACM Press, pp. 239–246). The principal difference in the
analysis in these papers is the type of reflectance assumed at each micro-facet.
In Cook-Torrance, this reflectance is assumed to be specular; in Oren-Nayar,
diffuse.
Slide 58

First Principles Reflectance models

Cook-Torrance
Predicts off specular peaks

Oren-Nayar

Predicts back-scatter

3D views: http://math.nist.gov/~FHunt/appearance/obl.html

The principal feature of the Cook-Torrance model is the prediction of off-specular
peaks, which are the consequence of shadowing and masking causing
asymmetries. The principal feature of the Oren-Nayar model is the prediction of
back scattering, which is a consequence of facets oriented towards the light source
diffusely reflecting some light back to the source. The result in each case is BRDF
functions with lobes that have more complicated structure than those used in the
empirical models. The BRDF for these models is specified by giving parameters
for the microscopic surface geometry. However, since the microstructure is rarely
known, the facet distribution parameters are normally treated as parameters
similar to n in the Phong and Lafortune models for controlling the shape of these
complicated distributions.
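As one concrete example of the kind of facet distribution these models use, the Beckmann distribution (the distribution recommended in the Cook-Torrance paper) can be evaluated as follows (the roughness value is an arbitrary illustration, and the 1/π normalization is one common convention; published forms differ by constant factors):

```python
import math

def beckmann_distribution(theta_h, m=0.2):
    """Beckmann facet-slope distribution: the density of microfacets
    whose normals lie at polar angle theta_h, for RMS slope m.  This is
    the normalization with a 1/pi factor; conventions in the literature
    differ by constant factors."""
    c = math.cos(theta_h)
    return math.exp(-(math.tan(theta_h) / m) ** 2) / (math.pi * m * m * c ** 4)

for deg in (0, 10, 20, 40):
    print(deg, beckmann_distribution(math.radians(deg)))
# a larger m (a rougher surface) spreads facet normals over wider angles
```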
Slide 59

First Principles Simulation

Microstructure
models

From Gondek et al., 1994

An alternative to mathematical first principles solutions is to create the small
scale microstructure, and simulate its scattering effects by shooting rays at it and
saving the results in a data structure designed specifically for the BRDF. Two
examples of this are:
examples of this are:
Gondek, Meyer, and Newman, “Wavelength dependent reflectance
functions,” In Proceedings of the 21st annual conference on Computer
graphics and interactive techniques (1994), ACM Press, pp. 213–220;
and Westin, Arvo and Torrance, “Predicting reflectance functions from
complex surfaces,” In Proceedings of the 19th annual conference on
Computer graphics and interactive techniques (1992), ACM Press, pp. 255–264.
Slide 60

First Principles Simulation

From Gondek et al., 1994

An advantage of simulation is that it can be used to explore the effects of
subsurface structure, and the effects of interference in thin surface layers. Gondek et
al. used a simulation of thin titanium dioxide mica flakes to simulate iridescent
paints such as those used on automobiles.
Slide 61

First Principles Simulation

From Gondek et al., 1994

By controlling the geometry and properties of the subsurface and surface structure,
simulation can show the interplay between geometry and color. The desaturation
of color here is not due to different base pigment spectra, but to their geometric
structure.
Slide 62

Measurement

From Dana et al. 1999

Rather than attempting to model materials, many efforts have been made to
develop methods to measure them. An example of a measured database that is
publicly available is the CUReT database,
http://www1.cs.columbia.edu/CAVE/curet/,
with the acquisition method described in Dana, Van Ginneken, Nayar and
Koenderink, “Reflectance and texture of real-world surfaces,” ACM
Transactions on Graphics 18, 1 (Jan. 1999), 1–34.
Slide 63

Measurement
Nonconventional Exploitation Factors Data System (NEFDS) database

cement
lumber

http://math.nist.gov/~FHunt/appearance/nefds.html

There are a few pre-existing databases for BRDF, but they were built for specialized
purposes and don't always include the full spectral data needed for rendering.

A separate course at SIGGRAPH 2005 is devoted to the measurement of
materials: 10. Realistic Materials in Computer Graphics, Full-Day, Sunday, 31
July, 8:30 am - 5:30 pm, Level: Intermediate. Lecturers: Hendrik P. A. Lensch,
Michael Goesele, Yung-Yu Chuang, Tim Hawkins, Steve Marschner, Wojciech
Matusik, Gero Mueller.
Slide 64

5. Spatial Variations

Image mapped on the geometry:

TextureCoordinate2 {
  point [
    0.552825 0.647972,
    0.551232 0.64847,
    0.55521 0.644493,
    ...

A basic mechanism for representing the spatial variations that characterize a
material is to use images mapped to the surface. Each pixel in the map may
store simply a diffuse color, or a BRDF, a normal, a displacement, or a BTF (a
virtual BRDF that includes the effect of small scale geometry). The mapping is
done by storing the coordinates of a location in an image with each vertex used to
define the geometric model.
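A minimal sketch of such a lookup (nearest-neighbor only; real systems filter the map, and the texel contents here are hypothetical diffuse colors):

```python
def texture_lookup(image, u, v):
    """Nearest-neighbor texture fetch: (u, v) in [0, 1]^2 selects a texel
    from a row-major image (a list of rows).  This just shows how
    per-vertex texture coordinates index into the map."""
    h = len(image)
    w = len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

# a 2x2 "diffuse color" map; each texel could equally hold a normal or a BRDF
tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(texture_lookup(tex, 0.25, 0.25))  # (255, 0, 0)
print(texture_lookup(tex, 0.75, 0.75))  # (255, 255, 255)
```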

Mapping to a geometry requires that the geometry be parameterized (i.e. a
two-dimensional coordinate system must be defined on the surface), a topic which is
studied extensively in computer aided geometric design. Parameterization is one
of the topics considered in the SIGGRAPH 2005 course 14. Discrete
Differential Geometry: An Applied Introduction, Full-Day, Sunday, 31 July,
8:30 am - 5:30 pm. Lecturers: Eitan Grinspun, Mathieu Desbrun, Peter Schröder.
Slide 65

Images processed to remove lighting effects in the final texture.

Photographs can’t be used as is to represent the spatial variation of reflectance,


since they include the effect of the lighting conditions in effect when the
photograph was taken. A series of images under different lighting conditions can
be processed to get an illumination-free texture map.
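One simple way such processing can work (a sketch only, not the specific method used for these images): if each observed pixel value is modeled as a diffuse albedo times a known per-lighting shading factor, a least-squares fit over the image set yields the lighting-free albedo.

```python
def diffuse_albedo(intensities, shading):
    """Least-squares fit of a per-pixel diffuse albedo from several
    photographs of the same surface point: each observed intensity is
    modeled as albedo * s_k, where s_k is the known shading (cosine)
    factor under lighting condition k.  The fitted albedo is the
    illumination-free texel value."""
    num = sum(i * s for i, s in zip(intensities, shading))
    den = sum(s * s for s in shading)
    return num / den

# three exposures of one surface point under different known lightings
shading = [1.0, 0.5, 0.25]
observed = [0.6 * s for s in shading]     # noiseless case: albedo is 0.6
print(diffuse_albedo(observed, shading))  # recovers 0.6
```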

Refer to 10. Realistic Materials in Computer Graphics for more details on
converting lit images to texture maps.
Slide 66

Slide 67

normals

Base geometry

Diffuse reflectance

Maps may contain other values, such as surface normals. Normal maps and
maps of the spatially varying diffuse reflectance can be used in combination.
Slide 68

Spatially varying diffuse reflectance; spatially varying surface normals

The two renderings above were performed with Radiance, with procedural
textures rather than image maps used to define spatial variations on the surface (e.g.
see D. Ebert, Ed., Texturing and Modeling: A Procedural Approach, Third
Edition, Morgan Kaufmann, San Francisco, CA, 2002). The same spatial
frequency has different visual impact depending on whether the fraction of
reflected light or the direction of the surface normals is modulated.
Slide 69

Some of the texture maps; the BRDF (specular highlight shape) is defined as another layer for each
material: gold, lapis, etc.

Maps may also define a different BRDF at each point on the model. On this
throne model, not only the color, but also the shape and color of the specular highlight
are defined by maps.

(Interactive views of this model showing appearance changes with view available
at www.eternalegypt.org)
Slide 70

Appearance Modeling of Leaves

two-sided capture

Real leaf LLS Capturing Device

Diffuse map Specular map Roughness map Translucency map

Returning to the example of leaves earlier, the spatial variation of reflectance and
transmittance can be measured.

Slide 71

Appearance Modeling of Leaves


(Wang et al. SIGGRAPH 2005)

BRDF+BTDF model: the BRDF and BTDF are each composed of a diffuse term, a specular term, and a
translucency term.

Diffuse map Specular map Roughness map Translucency map

The observations can be encoded in terms of models.


Slide 72

Slide 73

Leaf images synthesized from the models.


Slide 74

6. Processing of Materials

1908 1969

So far we have considered materials that are in pristine condition, without
concern for how they might be processed or vary in state. In this section we look
at a number of different techniques for applying processing effects to materials.

We will begin with a discussion of real objects to motivate a number of different
models and techniques that have been developed for computer graphics
applications.
Slide 75

An important aspect of material appearance is the state of the material.
Common states include etched, scratched, polished, and wet.
Slide 76

Wet Materials

The presence of water or other liquids on or within a material is a commonly
observed example of how the state of a material is related to variations in
appearance. Most porous materials, such as sand or clay, become darker when
wet. Other materials, such as paper or cloth, become more transparent. These
changes are due to the presence of water within a material.

The presence of water on a surface, such as puddles on an asphalt road, also
leads to characteristic changes in appearance. Surface water generally causes a
material to appear more specular, due to the air-water interface.
Slide 77

Another aspect of processing is the evolution of a material as a function of time.
Such time-varying, or aging, effects can develop either naturally, when a material
is exposed to the elements, or artificially, by treating a material so as to induce
and accelerate the weathering process.
Slide 78

Metallic Patinas

natural artificial

Metals often develop a characteristic patina over time. The patination process,
which develops in a series of thin surface layers, is due to the chemical alteration
of a surface and results in changes in color. Patination may be the result of
deliberately applied craft processes or natural corrosion.
Slide 79

copper micrograph abstraction

To represent surface patinas, we have developed a layered-surface model. We
have also developed a simple scripting language, consisting of operators such as
“coat” and “erode,” which can be used to specify the evolution of a surface over
time.
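A toy sketch of what such operators can look like (the operator names follow the slide, but the layer representation and the material names here are hypothetical, not the actual system):

```python
def coat(layers, material, thickness):
    """Add a new layer of the given material on top of the layer stack."""
    return layers + [[material, thickness]]

def erode(layers, amount):
    """Remove a given thickness from the top of the stack, consuming
    whole layers as needed."""
    out = [list(layer) for layer in layers]
    while out and amount > 0:
        take = min(out[-1][1], amount)
        out[-1][1] -= take
        amount -= take
        if out[-1][1] == 0:
            out.pop()
    return out

surface = [["copper", 5.0]]
surface = coat(surface, "cuprite", 1.0)      # a patina layer forms
surface = coat(surface, "brochantite", 0.5)  # then another on top
surface = erode(surface, 1.2)                # weathering strips it back
print(surface)  # copper base intact; a thin cuprite layer remains
```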
Slide 80

Here is an application of the scripting language and the layered surface model to
the development of a patina on a small Buddha statue.
Slide 81

Flow and Changes in Appearance

In the previous example, we focused on the development of a layered surface
model that can evolve over time. Next, we will consider a representative physical
process – the flow of water over complex surfaces – and the resulting time-varying
effects which this surface model can support.
Slide 82

Slide 83

Rendering without flows; rendering with flows


Slide 84

Captured

Transferred

While simulating aging effects produces good results, first-principles simulations
are time consuming or, in some cases, impossible because the underlying
physics and chemistry are not completely understood.

In recent work, we have explored a new approach to producing weathering
effects. Rather than simulating aging effects, we instead capture time variations
from real shapes to generate synthetic objects. Using this approach, shapes can
be rendered at different times in their histories.

This example shows a time series captured from a copper bowl. We applied an
artificial patination treatment to the bowl over a two week period and captured the
shape and texture variations at frequent time intervals. The bottom row shows the
application of this appearance history to a synthetic sea horse model. Note how
the appearance variations are linked to the shape.
Slide 85

Captured

Transferred

(with
different
thickness)

Captured and transferred paint crackling.
