
Illumination Model

&
Surface-Rendering Methods
• When we view an opaque nonluminous object, we see reflected light from the
surfaces of the object. Total reflected light is the sum of the contributions from
light sources and other reflecting surfaces in the scene.

[Figure: light traveling from a light source to the viewpoint after reflecting from an object's surfaces]

• Light sources are referred to as light-emitting sources.
• Reflecting surfaces are called light-reflecting sources.
• The term light source is used to mean an object that is emitting radiant energy,
such as a light bulb or the sun.
• A light source can be defined with a number of properties: its position, the color of
emitted light, and the emission direction.
• Type of Light source:
– Point light source
– Distributed light source
• The simplest model for an object that is emitting radiant energy is a point light
source with a single color.
• It is defined with its position, color of emitted light.
• The light rays are generated along radially diverging paths from a single-color
source position.
• Sources, such as the sun, that are sufficiently far from the scene can be
accurately modeled as point sources.
• A nearby source, such as a long fluorescent light, is more accurately modeled as a
distributed light source.

[Figure: an object illuminated with a distributed source]

• When light is incident on an opaque surface, part of it is reflected and part is
absorbed.
• The amount of incident light reflected by a surface depends on the type of the
material. Shiny materials reflect more of the incident light, and dull surfaces
absorb more of the incident light. Similarly, for an illuminated transparent surface,
some of the incident light will be reflected and some will be transmitted through
the material.
• Surfaces that are rough or grainy tend to scatter the reflected light in all directions.
The scattered light is called diffuse reflection. A very rough matte surface
produces primarily diffuse reflections, so the surface appears equally bright
from any viewing angle.
Diffuse reflection

• Incident light can also be reflected in a concentrated region, producing a bright spot
called specular reflection. This reflection is more pronounced on objects with smooth,
shiny surfaces (e.g. metal, polished objects) than on dull surfaces.

Specular reflection
Ambient Light
• A surface that is not directly exposed to a light source may still be visible due to
the reflected light from nearby objects that are illuminated. The visible effect due to
the reflected light from various surfaces in the scene is denoted as ambient light or
background light.
• Ambient light has no spatial or directional characteristics.
• The amount of ambient light incident on each object is a constant for all surfaces
and over all directions.
• We can set the level for the ambient light in a scene with the parameter Ia, and
each surface is illuminated with this constant value.

[Figure: ambient light arriving at an object's surface from all directions]
• The total reflected light from a surface is the sum of the contribution from light
sources and from the light reflected by other illuminated objects.

[Figure: a surface illuminated by a point source, showing specular reflection, diffuse reflection, and ambient light at the surface of the object]
Scene Example: Only Ambient Light

In this model only ambient light is used to simulate the background light, and no
other light source is present.
Diffuse reflection

• Diffuse reflections are constant over each surface in a scene, independent of the
viewing direction.
• We set the diffuse reflectivity or diffuse-reflection coefficient parameter as Kd.
• Parameter Kd is assigned a constant value in the interval 0 to 1, according to the
reflecting properties of the surface.
• For a highly reflective surface, we set Kd to a value near 1. This produces a bright
surface with the intensity of the reflected light near that of the incident light.
• To simulate a surface that absorbs most of the incident light, we set Kd to a
value near 0.
• If a surface is exposed only to ambient light, we can express the intensity of the
diffuse reflection at any point on the surface as
Iambdiff = Kd Ia
where Ia is the intensity of the ambient light.
• Now assume that we have a point light source along with the ambient light. If the
diffuse reflections from the surface are scattered with equal intensity in all
directions, independent of the viewing direction, the surface is referred to as an
ideal diffuse reflector.
• Brightness of a surface does depend on the orientation of the surface relative to
the light source.
• A surface oriented perpendicular to the light source tends to be brighter than a
surface tilted at an oblique angle to the direction of the incoming light.
– Less incident light falls on the surface when it is rotated at an angle away from
the direct light source.

• Denote the angle of incidence between the incoming light direction and the
surface normal as θ. The amount of incident light on a surface from a source with
intensity Il is then
Il,incident = Il cos θ
[Figure: incident light direction making angle θ with the surface normal]
• Ideal diffuse reflectors are also called Lambertian reflectors, since radiated
light energy from any point on the surface is governed by Lambert's cosine law.
• This law states that the radiant energy from any small surface area dA in any
direction θ relative to the surface normal is proportional to cos θ.
• According to Lambert's cosine law, the diffuse reflection can be modeled as
Il,diff = Kd Il cos θ   or   Il,diff = Kd Il (N·L)
[Figure: unit vectors N and L at a surface point, with L directed toward the point source at angle θ from N]


where,
Il is the intensity of the point light source
Kd is the diffuse-reflection coefficient
N is the unit normal vector to the surface
L is unit direction vector to the point light source from a position on the surface.
• If the incoming light from the source is perpendicular to the surface at a particular
point, that point is fully illuminated, because the angle with the normal, θ, is 0.
• As the angle of illumination moves away from the normal, brightness of the point
drops off.
• When cosθ is negative, that means the light source is behind the surface.

• We can combine the ambient and point-source intensity calculations to obtain an
expression for the total diffuse reflection:
Idiff = Ka Ia + Kd Il (N·L)
in which Ka is an ambient-reflection coefficient that can be assigned to each
surface to modify the ambient-light intensity for that surface.
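The ambient-plus-diffuse calculation above can be sketched in Python (a minimal illustration; the function name and tuple-based vectors are my own, and N and L are assumed to be unit vectors):

```python
def diffuse_intensity(Ka, Ia, Kd, Il, N, L):
    """I_diff = Ka*Ia + Kd*Il*(N . L), with N . L clamped at zero
    so a light behind the surface contributes nothing."""
    n_dot_l = sum(n * l for n, l in zip(N, L))
    return Ka * Ia + Kd * Il * max(0.0, n_dot_l)
```

Clamping N·L at zero handles the case noted earlier where a negative cos θ means the light source is behind the surface.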
Scene Example: Diffuse Reflection
• Specular reflection is the result of total, or near-total, reflection of the incident
light in a concentrated region around the specular-reflection angle.

[Figure: unit vectors N, L, R, and V at a surface point; L and R each make angle θ with N, and V makes angle ø with R]

• R is the unit vector in the direction of ideal specular reflection.


• L is the unit vector directed toward the light source.
• V is the unit vector pointing to the viewer from the surface position.
• Ø is the viewing angle relative to the specular-reflection direction R.
• For an ideal reflector (perfect mirror), incident light is reflected only in the
specular-reflection direction. In this case, we would only see reflected light when
vectors V and R coincide (Ø = 0)
• Shiny surfaces have a narrow specular-reflection range, and dull surfaces have a
wider reflection range.
Scene Example: Specular Reflection
• The Phong specular-reflection model (or Phong model) sets the intensity of specular
reflection proportional to cos^ns ø.
• Angle ø can be assigned values in the range 0° to 90°, so cos ø varies from
0 to 1.
• The value assigned to specular-reflection parameter ns is determined by the type
of the surface that we want to display.
• A very shiny surface is modeled with a large value of ns, (say, 100 or more), and
smaller values (down to 1) are used for duller surfaces.
• For a perfect reflector ns is infinite, and for a rough surface, ns is assigned a value
near to 1.

• We can calculate the specular intensity variation using a specular-reflection
coefficient W(θ) over the range θ = 0° to 90°.
• In general, W(θ) tends to increase as the angle of incidence increases.

• The variation of specular intensity with the angle of incidence is described by
Fresnel's laws of reflection:
Ispec = W(θ) Il cos^ns ø
where Il is the intensity of the light source
and ø is the viewing angle relative to the specular-reflection direction R.
• We can replace W(θ) with a constant specular-reflection coefficient ks. The value
of this coefficient varies from 0 to 1 for each surface.
Ispec = ks Il (V·R)^ns
• Vector R in this expression can be calculated in terms of L and N:
R + L = (2 N·L) N
⟹ R = (2 N·L) N − L
[Figure: L and R on either side of N, each at angle θ]
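The relation R = (2 N·L)N − L can be sketched as a small helper (illustrative names; N and L are assumed to be unit vectors):

```python
def reflect(N, L):
    """Ideal specular-reflection direction: R = (2 N.L) N - L."""
    n_dot_l = sum(n * l for n, l in zip(N, L))
    return tuple(2 * n_dot_l * n - l for n, l in zip(N, L))
```

For example, with L parallel to N the reflection points back along L; at grazing incidence (N·L = 0) it simply reverses L.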
• A somewhat simplified Phong model is obtained by using the halfway vector H
between L and V to calculate the range of specular reflections.
• If we replace V·R in the Phong model with the dot product N·H, this simply
replaces the calculation of cos ø with the calculation of cos α.
[Figure: vectors L, N, H, R, and V at a surface point; H makes angle α with N]
• The halfway vector is calculated as:
H = (L + V) / |L + V|
• For a single light source, we can model the combined diffuse and specular
reflections from a point on an illuminated surface as
I = Idiff + Ispec = Ka Ia + Kd Il (N·L) + ks Il (N·H)^ns
• For multiple (n) light sources, we obtain the intensity by summing the contributions
from the individual sources:
I = Ka Ia + Σi=1..n Ili [Kd (N·Li) + ks (N·Hi)^ns]
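A hedged sketch of the combined model for n light sources, using the halfway-vector form of the specular term (the function name and the (Il, L) pair representation are my own; all vectors assumed unit length):

```python
import math

def phong_intensity(Ka, Ia, Kd, Ks, ns, N, V, lights):
    """I = Ka*Ia + sum over lights of Il*[Kd*(N.L) + Ks*(N.H)^ns],
    where lights is a list of (Il, L) pairs and H = (L+V)/|L+V|."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def normalize(v):
        m = math.sqrt(dot(v, v))
        return tuple(x / m for x in v)
    I = Ka * Ia
    for Il, L in lights:
        H = normalize(tuple(l + v for l, v in zip(L, V)))  # halfway vector
        I += Il * (Kd * max(0.0, dot(N, L)) + Ks * max(0.0, dot(N, H)) ** ns)
    return I
```

With the viewer and a single light both along the normal, the diffuse and specular terms each reach their maximum.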
Warn Model

• Warn model provides a method for simulating studio lighting effects by controlling
light intensity in different directions.
• Light sources are modeled as points on a reflecting surface, using the Phong
model for the surface points.
• Then the intensity in different directions is controlled by selecting values for the
Phong exponent.
• In addition, light controls, spotlighting, used by studio photographers can be
simulated in the Warn Model.
• Flaps are used to control the amount of light emitted by a source in various
directions. Two flaps are provided for each of the x, y, and z directions.
• Spotlights are used to control the amount of light emitted within a cone with apex
at a point-source position.
Intensity Attenuation
• As radiant energy from a point light source travels through space, its amplitude is
attenuated (decreased) by the factor 1/d², where d is the distance that the light
has traveled.
• This means that a surface close to the light source (small d) receives a higher
incident intensity from the source than a distant surface (large d)

• Using the factor 1/d² to attenuate intensities does not always produce realistic
pictures, however.
• Graphics packages have compensated for this problem by using inverse linear or
inverse quadratic functions of d to attenuate intensities.
• For example, an inverse quadratic attenuation function can be set up as:
f(d) = 1 / (a0 + a1d + a2d²)
• A user can adjust the coefficients a0, a1, and a2 to obtain a variety of lighting
effects for a scene. The value of a0 can be adjusted to prevent f(d) from becoming
too large when d is very small.
• This is an effective method for limiting intensity values when a single light source
is used to illuminate a scene.
• With a given set of attenuation coefficients, we can limit the magnitude of the
attenuation function to 1 with the calculation
f(d) = min(1, 1/(a0 + a1d + a2d²))

• Using this function, we can then write our basic illumination model with n light
sources as:
I = Ka Ia + Σi=1..n f(di) Ili [Kd (N·Li) + ks (N·Hi)^ns]
where di is the distance the light has traveled from light source i.
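The clamped attenuation factor is a one-line function:

```python
def attenuation(d, a0, a1, a2):
    """f(d) = min(1, 1/(a0 + a1*d + a2*d^2)); a0 keeps the factor
    finite when d is very small."""
    return min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))
```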
Color Considerations
• One way to set surface color is to specify the reflectivity coefficients as
three-element vectors.
• The diffuse reflection-coefficient vector, for example, would have RGB components
(KdR, KdG, KdB).
• If we want an object to have blue color surface, we select a nonzero value in the
range (0,1) for the blue reflectivity component, KdB, while the red and green
components are set to 0.

• Another possibility is to keep the diffuse and specular reflectivity coefficients as
scalars and, in addition, specify the components of diffuse and specular color
vectors for each surface.
• For instance, surface color vector for diffuse component can be (SdR ,SdG ,SdB), and
for specular component it can be (SsR ,SsG ,SsB).
• We multiply the Idiff with (SdR ,SdG ,SdB) and Ispec with (SsR ,SsG ,SsB).
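A small sketch of the per-channel color scaling described above (names are hypothetical; only the diffuse term is shown, and the N·L value is passed in precomputed):

```python
def rgb_diffuse(Kd, Sd, Il, n_dot_l):
    """Per-channel diffuse term: Kd * Sd_c * Il * (N . L)
    for each component c of the surface color vector Sd."""
    return tuple(Kd * s * Il * max(0.0, n_dot_l) for s in Sd)
```

A pure blue surface, Sd = (0, 0, 1), reflects only the blue component of the incident light.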
Transparency
• A transparent surface, in general, produces both reflected and transmitted light.
• The relative contribution of the transmitted light depends on the degree of
transparency of the surface and on whether any light sources or illuminated
surfaces are behind the transparent surface.
• Realistic transparency effects are modeled by considering light refraction. When
light is incident upon a transparent surface, part of it is reflected and part of it is
refracted.
• Because the speed of light is different in different materials, the path of the
refracted light is different from that of the incident light direction. The direction of
the refracted light, specified by angle of refraction, is a function of the index of
refraction of each material and the direction of the incident light.
• Index of refraction for a material is defined as the ratio of the speed of light in
vacuum to the speed of light in the material.
• The angle of refraction θr is calculated from the angle of incidence θi, the index of
refraction ηi of the "incident" material (usually air), and the index of refraction ηr of
the refracting material, according to Snell's law:
sin θr = (ηi / ηr) sin θi


[Figure: refraction at the boundary between two materials with indices of refraction ηi and ηr; the incident ray at angle θi splits into a reflected ray R and a refracted ray T at angle θr]
• R is the direction of reflection.
• T is the direction of refraction.
• L is the direction to the light source.
• N is the outward surface normal.
• Using Snell's law and the diagram above, we can obtain the unit transmission
vector T in the refraction direction θr as
T = [(ηi / ηr) cos θi − cos θr] N − (ηi / ηr) L
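A sketch of the transmission-vector calculation, with a guard for total internal reflection (the guard and the names are my additions; N and L are assumed to be unit vectors, with L pointing toward the light):

```python
import math

def refract(N, L, eta_i, eta_r):
    """T = (eta*cos_i - cos_r) N - eta*L, with eta = eta_i/eta_r.
    cos_r comes from Snell's law; returns None when sin^2(theta_r) > 1
    (total internal reflection)."""
    eta = eta_i / eta_r
    cos_i = sum(n * l for n, l in zip(N, L))
    sin2_r = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_r > 1.0:
        return None  # no transmitted ray
    cos_r = math.sqrt(1.0 - sin2_r)
    return tuple((eta * cos_i - cos_r) * n - eta * l for n, l in zip(N, L))
```

At normal incidence the transmitted ray continues straight through the material, opposite to L.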
• A simpler technique for modeling transparency is to ignore the path shifts
altogether. In effect, this approach assumes there is no change in the index of
refraction from one material to another, so that the angle of refraction is always
the same as angle of incidence.
• We combine the transmitted intensity Itrans with the reflected intensity Irefl from the
transparent object using a transparency coefficient kt.
• We assign parameter kt a value between 0 and 1 to specify how much of the
background light is to be transmitted.
• Total surface intensity is then calculated as:
I = (1 – kt ) Irefl + kt Itrans
where (1 – kt ) is the opacity factor.
• For highly transparent objects, we assign kt a value near 1.
• Nearly opaque objects transmit very little light from the background, and we can
set kt to value near 0 (opacity becomes near 1)
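The blend of reflected and transmitted intensity is a one-liner:

```python
def blend(kt, I_refl, I_trans):
    """I = (1 - kt) * I_refl + kt * I_trans, where (1 - kt) is the
    opacity factor."""
    return (1.0 - kt) * I_refl + kt * I_trans
```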
• Transparency effects are often implemented with modified depth-buffer(z-buffer)
algorithm.
– A simple way to do this is to process opaque objects first to determine depths
for the visible opaque surfaces.
– Then the depth positions of the transparent objects are compared to the values
previously stored in the depth buffer.
– If any transparent object is visible, its reflected intensity is calculated and
combined with the opaque surface intensity previously stored in the frame
buffer.

• Accurate display of transparency and antialiasing can be obtained with the A-buffer
algorithm. For each pixel position, surface patches for all overlapping surfaces are
saved and sorted in depth order. Then, intensities for the transparent and opaque
surface patches that overlap in depth are combined in the proper visibility order to
produce the final averaged intensity for the pixel.
Halftone Patterns & Dithering Techniques
• When an output device has a limited intensity range, we can create an apparent
increase in the number of available intensities by incorporating multiple pixel
positions into the display of each pixel value; i.e., for each pixel in the picture we
map several pixels on the screen, say a 2×2 or 3×3 matrix.
• When we view a small region consisting of several pixel positions, our eyes tend
to integrate or average the fine detail into an overall intensity.
• Continuous tone photographs are reproduced for publication in newspapers,
magazines and books with a process called halftoning, and the reproduced
pixels are called halftones.
• Each intensity area is reproduced as a series of black circles on a white
background. The diameter of each circle is proportional to the darkness required
for that intensity region. Darker regions are printed with larger circles and lighter
regions are printed with smaller circles.
• In graphics, halftone reproductions are approximated using rectangular pixel
regions, called halftone patterns or pixel patterns.
• The number of intensity levels that we can display with this method depends on
how many pixels we include in the rectangular grids and how many levels a
system can display.
• With n by n pixels for each grid on a bilevel system, we can represent n² + 1
intensity levels.
Halftone patterns for a 2×2 pixel pattern
[Figure: the five 2×2 patterns, turning on 0 to 4 pixels]

Halftone patterns for a 3×3 pixel pattern
[Figure: the ten 3×3 patterns, turning on 0 to 9 pixels]
• The "on" pixel positions are near the center of the grid for lower intensity levels and
expand outward as the intensity level increases.
• We can represent the pixel patterns for the various possible intensities with a “mask”
of the pixel position numbers as:
8 3 7

5 1 2

4 9 6

• To display a particular intensity level k, we turn on each pixel whose position
number is less than or equal to k.
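Using the 3×3 mask above, selecting the pixels for an intensity level k can be sketched as:

```python
# The 3x3 mask of position numbers from the text; pixel (r, c) is on
# when its position number is <= the requested intensity level k.
MASK = [[8, 3, 7],
        [5, 1, 2],
        [4, 9, 6]]

def halftone_pattern(k):
    """Return a 3x3 grid of 0/1 values for intensity level k (0..9)."""
    return [[1 if MASK[r][c] <= k else 0 for c in range(3)] for r in range(3)]
```

Level 0 leaves every pixel off; level 9 turns all nine on, growing outward from the center as k increases.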
• Although the use of n by n pixel patterns increases the number of intensities that can
be displayed, it reduces the resolution of the displayed picture by a factor of 1/n
along each of the x and y axes.
• For example, with a 3×3 pattern, a 512 by 512 screen area is reduced to about 170
intensity positions along each side.
• Another problem with pixel grids is that subgrid patterns become apparent as the
grid size increases.
• Also, high-quality displays need at least 64 intensity levels, which means an 8×8 grid;
and to match the halftones in books, we must display at least 60 dots per centimeter.
Thus we would need 60 × 8 = 480 dots per centimeter!
• Halftone approximations also can be used to increase the number of intensity
options on systems that are capable of displaying more than two intensities per
pixel.
• On systems that can display four intensity levels per pixel, we can use 2×2 pixel
grids to extend the available intensity levels from 4 to 13.
[Figure: the thirteen 2×2 patterns, with per-pixel values from 0 to 3; each successive pattern raises one pixel's value by 1, from all 0s up to all 3s]

• Similarly, we can use pixel-grid patterns to increase the number of intensities that
can be displayed on a color system.
• As an example, suppose we have a three-bit-per-pixel RGB system. Using 2×2
pixel-grid patterns, we have 12 phosphor dots that can be used to represent a
particular color value.
• A 2×2 bilevel pattern gives 5 intensity settings per color gun, and each pixel has 3
color dots, so in all we get 5³ = 125 different color combinations.
Dithering Techniques
• The term Dithering is used in various contexts. Primarily, it refers to techniques
for approximating halftones without reducing resolution, as pixel-grid patterns do.
• The term is also applied to halftone-approximation methods using pixel grids, and
sometimes it is used to refer to color halftone approximations only.
• Random values added to pixel intensities to break up contours are often referred
to as dither noise. Various algorithms have been used to generate the random
distributions. The effect is to add noise over an entire picture, which tends to
soften intensity boundaries.
• Ordered-dither methods generate intensity variations with a one-to-one
mapping of points in a scene to the display pixels. To obtain n² intensity levels, we
set up an n by n dither matrix Dn whose elements are distinct positive integers in
the range 0 to n² − 1.
• For example:

D2 = | 3  1 |
     | 0  2 |

D3 = | 7  2  6 |
     | 4  0  1 |
     | 3  8  5 |
• Each input intensity is first scaled to the range 0 ≤ I ≤ n².
• If the intensity I is to be applied to the screen position (x,y), we calculate row and
column numbers for the dither matrix as
i = (x mod n) + 1, j = (y mod n) + 1
• If I > Dn(i, j), we turn on the pixel at position (x, y). Otherwise, the pixel is not
turned on.
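The ordered-dither decision for one pixel follows the rule above (0-based indexing here, whereas the text adds 1 for 1-based matrix indexing):

```python
# The 2x2 dither matrix from the text.
D2 = [[3, 1],
      [0, 2]]

def dither_pixel(I, x, y, D=D2):
    """Turn pixel (x, y) on iff the scaled intensity I exceeds the
    dither-matrix entry at (x mod n, y mod n)."""
    n = len(D)
    return I > D[x % n][y % n]
```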

• Another method for mapping a picture with m by n points to a display area with m
by n pixels is error diffusion. Here the error between an input intensity value and
the displayed pixel intensity level at a given position is dispersed, or diffused, to
pixel positions to the right and below the current pixel position.
• Assume that we are at pixel position (i, j). We pass the α part of the error to
position (i, j+1), the β part to position (i+1, j−1), the γ part to position (i+1, j),
and the δ part to position (i+1, j+1), where
α + β + γ + δ ≤ 1

[Figure: error-distribution weights relative to the pixel in row i, column j: 7/16 goes to (i, j+1); 3/16, 5/16, and 1/16 go to (i+1, j−1), (i+1, j), and (i+1, j+1)]

• A variation on the error diffusion method is dot diffusion.
• In this method, the m by n array of intensity values is divided into 64 classes
numbered from 0 to 63, as shown in the figure.
• The error between a matrix value and the displayed intensity is then distributed
only to those neighboring matrix elements that have a larger class number.
• Distribution of the 64 class numbers is based on minimizing the number of
elements that are completely surrounded by elements with a lower class number,
since this would tend to direct all errors of the surrounding elements to that one
position.

34 48 40 32 29 15 23 31
42 58 56 53 21 5 7 10
50 62 61 45 13 1 2 18
38 46 54 37 25 17 9 26
28 14 22 30 35 49 41 33
20 4 6 11 43 59 57 52
12 0 3 19 51 63 60 44
24 16 8 27 39 47 55 36
Polygon Rendering Methods

• Some surface rendering methods:


– Constant-intensity surface rendering or flat surface rendering
– Gouraud surface rendering
– Phong surface rendering
• Constant-intensity shading – In this method, a single intensity is calculated for
each polygon. All points over the surface of the polygon are then displayed with
the same intensity value.
• Advantage: useful for quickly displaying the general appearance of a curved
surface; this is the fastest and simplest method for rendering a polygon surface.
• Problem: the surface looks faceted. This is acceptable if the model really is
polygonal, but not if it is a sampled approximation to a curved surface.

• The following assumptions must be valid for constant-intensity shading to provide
an accurate display of the surface:
1. The polygon is one face of a polyhedron and is not a section of a
curved-surface approximation mesh.
2. All light sources illuminating the polygon are sufficiently far from the
surface that N·L and the attenuation function are constant over the
area of the polygon.
3. The viewing position is sufficiently far from the polygon that V·R is
constant over the area of the polygon.
Flat and Smooth Shading
• Flat shading: the entire polygon is drawn with the same shade or color.
• Gouraud shading: the shades at the vertices are interpolated to determine the
shade at an interior point.
• Gouraud shading provides a much smoother appearance of surfaces.
[Figure: the same model rendered with flat shading and with Gouraud shading]
Gouraud Shading
• This intensity-interpolation scheme renders a polygon surface by linearly
interpolating intensity values across the surface.
• The Gouraud surface-rendering method uses the following procedure:
1. Determine the average unit normal vector at each vertex of the polygon.
2. Apply an illumination model at each polygon vertex to obtain the light
intensity at that position.
3. Linearly interpolate the vertex intensities over the projected area of the
polygon.
[Figure: vertex normals N1, N2, N3, N4 and the averaged normal Nv; a scan line crosses the projected polygon, intersecting its edges at points 4 and 5 with interior point p between them]
• We want to find the intensity at point p using Gouraud shading.
• The intensities at points 4 and 5 are obtained by linear interpolation from the
intensities at vertices 1, 2 and vertices 2, 3, respectively.
• The interior point p is then assigned an intensity value that is linearly interpolated
from the intensities at points 4 and 5.
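The three linear interpolations can be sketched with edge and span parameters t in [0, 1] (a simplification of the scan-line bookkeeping; the names are illustrative):

```python
def lerp(Ia, Ib, t):
    """Linear interpolation between two intensities."""
    return Ia + t * (Ib - Ia)

def gouraud_scanline_intensity(I1, I2, I3, t_edge_a, t_edge_b, t_span):
    """Intensity at interior point p: interpolate along edge 1-2 to get
    point 4, along edge 2-3 to get point 5, then between 4 and 5."""
    I4 = lerp(I1, I2, t_edge_a)
    I5 = lerp(I2, I3, t_edge_b)
    return lerp(I4, I5, t_span)
```

If all three vertex intensities are equal, every interior point receives that same value, as expected.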
• Gouraud shading removes the intensity discontinuities associated with the
constant-shading model, but it has some deficiencies:
– Highlights on the surface are sometimes displayed with anomalous shapes.
– Linear intensity interpolation can cause bright or dark intensity streaks, called
Mach bands, to appear on the surface.
– These effects can be reduced by dividing the surface into a greater number of
polygon faces or by using other methods, such as Phong shading, that
require more calculations.
• The Phong shading method is a more accurate method for rendering a polygon
surface.
• This method is a normal-vector interpolation technique.
• It displays more realistic highlights on a surface and greatly reduces the Mach band
effect.
• A polygon surface is rendered using Phong shading by carrying out the following
steps:
1. Determine the average unit normal vector at each vertex of the polygon.
2. Linearly interpolate the vertex normals over the projected area of the polygon.
3. Apply an illumination model at positions along scan lines to calculate pixel
intensities
N = [(y − y2) / (y1 − y2)] N1 + [(y1 − y) / (y1 − y2)] N2
[Figure: the normal N at a scan-line position is interpolated between edge normals N1 and N2, with N3 at the opposite vertex]
Advantages:
- More accurate calculation of intensity values
- More realistic display of surface highlights
- Great reduction in the Mach-band effect
Disadvantages:
- Requires more computation than the Gouraud method
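Step 2 of Phong shading, interpolating a normal between two edge normals, can be sketched directly from the formula above (the renormalization to unit length is my addition, since the weighted sum of unit vectors is generally not unit length):

```python
import math

def interp_normal(N1, N2, y, y1, y2):
    """N = [(y - y2)/(y1 - y2)] N1 + [(y1 - y)/(y1 - y2)] N2,
    renormalized to unit length."""
    t1 = (y - y2) / (y1 - y2)
    t2 = (y1 - y) / (y1 - y2)
    N = tuple(t1 * a + t2 * b for a, b in zip(N1, N2))
    m = math.sqrt(sum(c * c for c in N))
    return tuple(c / m for c in N)
```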
Flat vs. Gouraud vs. Phong
[Figure: the same object rendered with flat, Gouraud, and Phong shading]

Fast Phong Shading
• Surface rendering with Phong shading can be speeded up by using
approximations in the illumination model calculation of normal vectors.
• Fast phong shading approximates the intensity calculation using a Taylor-series
expansion and triangular surface patches.
• Since Phong shading interpolates normal vectors from vertex normals, we
express the surface normal N at any given point (x,y) over a triangle as
N = Ax + By + C
where A, B, C are determined from the three vertex equations:
Nk = Axk + Byk + C, k=1,2,3
with (xk ,yk) denoting a vertex position
• Omitting the reflectivity and attenuation parameters, we can write the calculation
of light-source diffuse reflection from a surface point (x, y) as
Idiff(x, y) = (L·N) / (|L| |N|)
            = L·(Ax + By + C) / (|L| |Ax + By + C|)
            = [(L·A)x + (L·B)y + L·C] / (|L| |Ax + By + C|)
• We can rewrite this expression in the form
Idiff(x, y) = (ax + by + c) / (dx² + exy + fy² + gx + hy + i)^1/2
where the parameters a, b, c, d, and so forth are used to represent the various dot
products; for example,
a = (L·A) / |L|
• Finally, we approximate the denominator of the above expression using a
Taylor-series expansion, retaining terms up to second degree in x and y. This yields
Idiff(x, y) ≈ T5x² + T4xy + T3y² + T2x + T1y + T0
where each Tk is a function of the parameters a, b, c, and so forth.
