
Introduction to Computer Graphics, Final Cheat Sheet

Recap Projection
• Transformation factorization: A = T L (translation T times linear part L)
• Object to eye coordinates:
  \begin{pmatrix} x_e \\ y_e \\ z_e \\ 1 \end{pmatrix} = E^{-1} O \begin{pmatrix} x_o \\ y_o \\ z_o \\ 1 \end{pmatrix}
• p̃ = f⃗^t c = a⃗^t A^{-1} c ⇒ a⃗^t S A^{-1} c = f⃗^t A S A^{-1} c:
  p̃ is transformed by S with respect to a⃗^t.
• Rotation by angle θ about unit axis k̂ (c = cos θ, s = sin θ, v = 1 − c):
  \begin{pmatrix} k_x^2 v + c & k_x k_y v - k_z s & k_x k_z v + k_y s \\ k_y k_x v + k_z s & k_y^2 v + c & k_y k_z v - k_x s \\ k_z k_x v - k_y s & k_z k_y v + k_x s & k_z^2 v + c \end{pmatrix}
• Quaternion multiplication:
  \begin{pmatrix} \omega_1 \\ \hat{c}_1 \end{pmatrix} \begin{pmatrix} \omega_2 \\ \hat{c}_2 \end{pmatrix} = \begin{pmatrix} \omega_1 \omega_2 - \hat{c}_1 \cdot \hat{c}_2 \\ \omega_1 \hat{c}_2 + \omega_2 \hat{c}_1 + \hat{c}_1 \times \hat{c}_2 \end{pmatrix}
• Projection, eye → clip → normalized device coordinates:
  \begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = \begin{pmatrix} -\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & -\frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} x_e \\ y_e \\ z_e \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix} = \frac{1}{w_c} \begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix}
• Window coordinates:
  \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = \begin{pmatrix} W/2 & 0 & 0 & (W-1)/2 \\ 0 & H/2 & 0 & (H-1)/2 \\ 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_n \\ y_n \\ z_n \\ 1 \end{pmatrix}
  (pipeline sketch below)
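A minimal numeric sketch of the pipeline above (eye → clip → normalized device → window), written in Python/NumPy for illustration only; the frustum values, the test point, and the helper names frustum and viewport are made-up, and n, f are taken as negative eye-space depths to match the matrix as written.

import numpy as np

def frustum(l, r, b, t, n, f):
    # Projection matrix as written above (n, f are negative eye-space z values).
    return np.array([
        [-2*n/(r-l), 0,          (r+l)/(r-l), 0],
        [0,          -2*n/(t-b), (t+b)/(t-b), 0],
        [0,          0,          (f+n)/(f-n), -2*f*n/(f-n)],
        [0,          0,          -1,          0],
    ])

def viewport(W, H):
    # Window-coordinates matrix as written above.
    return np.array([
        [W/2, 0,   0,   (W-1)/2],
        [0,   H/2, 0,   (H-1)/2],
        [0,   0,   0.5, 0.5],
        [0,   0,   0,   1],
    ])

# Made-up example: symmetric frustum, near plane at z = -1, far plane at z = -100.
P = frustum(l=-1, r=1, b=-1, t=1, n=-1, f=-100)
p_eye = np.array([0.5, 0.25, -10.0, 1.0])         # point in eye coordinates
p_clip = P @ p_eye                                 # clip coordinates
p_ndc = p_clip[:3] / p_clip[3]                     # divide by w_c
p_win = viewport(640, 480) @ np.append(p_ndc, 1)   # window coordinates
print(p_clip, p_ndc, p_win[:3])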
• Clipping: needed because vertices behind the eye get their sign flipped and project to the wrong place.
  We do it after the projection multiplication but before the division (in clip coordinates), because
  in normalized device coordinates we would already hit a divide-by-zero, and
  in eye coordinates we would need the camera parameters.
• Backface Culling: calculate the cross product of two triangle edges and test the sign of its z-component to decide whether the face points away from the viewer (sketch below).
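A small Python sketch of that test; the function name and the counter-clockwise-means-front convention are assumptions for illustration.

def is_backfacing(p0, p1, p2):
    """Each p is (x, y) in window/NDC coordinates, listed counter-clockwise
    for a front-facing triangle. Returns True if the face points away."""
    # z-component of the cross product of edges (p1 - p0) and (p2 - p0)
    area2 = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return area2 <= 0.0  # zero or negative signed area -> cull

print(is_backfacing((0, 0), (1, 0), (0, 1)))  # False: counter-clockwise, front-facing
print(is_backfacing((0, 0), (0, 1), (1, 0)))  # True: clockwise, back-facing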
• Varying Variables
  v is affine in object ⇔ eye ⇔ clip coordinates
  ⇔ v/w_n is affine in normalized device coordinates.
  OpenGL interpolates v/w_n and 1/w_n to get v.
• Rotation of ĉ by angle θ about unit axis k̂ (quaternion sandwich, sketch below):
  \begin{pmatrix} \cos(\theta/2) \\ \sin(\theta/2)\hat{k} \end{pmatrix} \begin{pmatrix} 0 \\ \hat{c} \end{pmatrix} \begin{pmatrix} \cos(\theta/2) \\ \sin(\theta/2)\hat{k} \end{pmatrix}^{-1}
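A Python sketch of the quaternion product and the rotation sandwich above; qmul and rotate are illustrative names, and quaternions are stored as (w, x, y, z).

import math

def qmul(q1, q2):
    # Quaternion product, matching the formula above:
    # (w1*w2 - c1.c2, w1*c2 + w2*c1 + c1 x c2)
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + w2*x1 + y1*z2 - z1*y2,
            w1*y2 + w2*y1 + z1*x2 - x1*z2,
            w1*z2 + w2*z1 + x1*y2 - y1*x2)

def rotate(point, axis, theta):
    # Rotate 'point' by angle theta about the unit 'axis' via q [0; c] q^-1.
    s = math.sin(theta / 2)
    q = (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    q_inv = (q[0], -q[1], -q[2], -q[3])          # inverse of a unit quaternion
    p = (0.0, point[0], point[1], point[2])
    return qmul(qmul(q, p), q_inv)[1:]

print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))  # ~ (0, 1, 0)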
• Finding M s.t. q_i = M p_i
 
• F(t) = Σ_{i=0}^{n} \binom{n}{i} t^i (1 − t)^{n−i} P_i  (sketch below)
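A Python sketch evaluating that Bernstein sum; the bezier helper and the cubic control points are made-up for illustration.

from math import comb

def bezier(t, P):
    # F(t) = sum_i C(n, i) t^i (1 - t)^(n - i) P_i, with P a list of (x, y) control points
    n = len(P) - 1
    x = sum(comb(n, i) * t**i * (1 - t)**(n - i) * px for i, (px, _) in enumerate(P))
    y = sum(comb(n, i) * t**i * (1 - t)**(n - i) * py for i, (_, py) in enumerate(P))
    return (x, y)

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]   # made-up cubic control polygon
print(bezier(0.0, ctrl), bezier(0.5, ctrl), bezier(1.0, ctrl))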

• d_i = c_i + (1/6)(c_{i+1} − c_{i−1}),   e_i = c_{i+1} − (1/6)(c_{i+2} − c_i)
• v_f = (1/n_f) Σ v,   v_e = (1/4)(Σ v + Σ v_f),   v_v = (1/n²)(n(n−2) v + Σ v_f + Σ v_e)

Materials
• Diffuse: equal brightness regardless of viewing angle: ⃗n · ⃗l
float diffuse = max(0.0, dot(normal, tolight));
• Specular: (⃗n · ⃗h)^pow, where ⃗h = normalize(⃗v + ⃗l)
vec3 refl = normalize(2.0 * normal * nDotL - tolight);
float specular = pow(max(0.0, dot(refl, viewDir)), 32.0);
• Anisotropic: ⃗h · (⃗n × ⃗t), preferred tangent vector ⃗t
float v = dot(h, cross(normal, tangent));
float anisotropic = pow(1.0 - v*v, 16.0);

Texture Mapping

Ray Tracing
Cast a ray and test intersection against triangles, spheres, or implicit surfaces. Speed up using bounding volumes, grids, or other spatial partitioning such as a kd-tree, which recursively splits cells with axis-aligned planes, or using object partitioning. (Ray-sphere intersection sketch below.)
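A Python sketch of one such intersection test (ray vs. sphere); the quadratic-formula approach is standard, but the function name and example numbers are illustrative, and the ray direction is assumed normalized.

import math

def ray_sphere(origin, direction, center, radius):
    """Return the smallest positive t with origin + t*direction on the sphere,
    or None if the ray misses. 'direction' is assumed normalized."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c                 # a = 1 for a normalized direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0       # nearer root
    if t < 0:
        t = (-b + math.sqrt(disc)) / 2.0   # origin may be inside the sphere
    return t if t >= 0 else None

print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0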

Sampling
• Filter: I[i][j] ← ∬_{Ω_{i,j}} I(x, y) F_{i,j}(x, y) dx dy
  in practice: super- and multi-sampling (sketch below)
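A Python sketch approximating the pixel-filter integral by box-filter supersampling; shade stands in for an arbitrary scene function, and the 4×4 sample count is arbitrary.

def pixel_value(shade, i, j, n=4):
    # Approximate I[i][j] = integral of I(x, y) F_ij(x, y) dx dy with a box filter:
    # average n*n sample locations spread over the pixel's footprint.
    total = 0.0
    for a in range(n):
        for b in range(n):
            x = i + (a + 0.5) / n      # sample positions inside pixel (i, j)
            y = j + (b + 0.5) / n
            total += shade(x, y)       # 'shade' is a placeholder scene function
    return total / (n * n)

# Made-up example: a diagonal edge; the average gives a partially covered pixel.
print(pixel_value(lambda x, y: 1.0 if x > y else 0.0, 0, 0))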

• Alpha blending: I^c[i][j] ← I^f[i][j] + I^b[i][j] (1 − α^f[i][j]), and α^c ← α^f + α^b (1 − α^f),
  where I[i][j] ← ∬ I C dx dy.
  Continuous composition is approximated:
  ideally: I^c ← ∬ I^f C^f dx dy + ∬ I^b C^b (1 − C^f) dx dy
  in practice: ∬ I^f C^f dx dy + ∬ I^b C^b dx dy (1 − ∬ C^f dx dy)
  (over-operator sketch below)
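A Python sketch of the discrete over operation above, assuming premultiplied colors; the two example layers are made-up.

def over(front, back):
    # front, back: (premultiplied_rgb, alpha); returns the composite (I^c, alpha^c)
    (rf, gf, bf), af = front
    (rb, gb, bb), ab = back
    out_rgb = (rf + rb * (1 - af), gf + gb * (1 - af), bf + bb * (1 - af))
    out_a = af + ab * (1 - af)
    return out_rgb, out_a

# Made-up layers: a half-transparent red over an opaque blue background.
print(over(((0.5, 0.0, 0.0), 0.5), ((0.0, 0.0, 1.0), 1.0)))
# -> ((0.5, 0.0, 0.5), 1.0)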
• Convolution: f(x) ∗ g(x) = ∫ f(τ) g(x − τ) dτ
  convolution theorem: f ∗ g = F · G
  symmetry: f(x) · g(x) = F ∗ G
  Nyquist frequency = 1/2 sampling frequency
  (discrete sketch below)
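A discrete Python version of the convolution integral above; the signal and the 3-tap box kernel are made-up.

def convolve(f, g):
    # Discrete analogue of (f * g)(x) = integral f(tau) g(x - tau) dtau
    out = [0.0] * (len(f) + len(g) - 1)
    for x in range(len(out)):
        for tau in range(len(f)):
            if 0 <= x - tau < len(g):
                out[x] += f[tau] * g[x - tau]
    return out

# Made-up example: box-blur a 1D impulse with a normalized 3-tap kernel.
print(convolve([0, 0, 1, 0, 0], [1/3, 1/3, 1/3]))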

Radiometry
Radiometry: power vs. Photometry: perceived brightness
• steradians: ∫ dω = ∫_0^{2π} ∫_0^{π} (sin θ dθ) dϕ = 4π (numeric check below)
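A quick numerical check of these quantities in Python: the 4π solid angle of the full sphere, and the Lπ irradiance from a uniform hemispherical source quoted in the irradiance bullet below; the quadrature resolution and the radiance value are arbitrary.

import math

def integrate_sphere(f, n=400):
    # Midpoint quadrature of integral f(theta, phi) sin(theta) dtheta dphi over the sphere
    total = 0.0
    dtheta, dphi = math.pi / n, 2 * math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        for j in range(n):
            phi = (j + 0.5) * dphi
            total += f(theta, phi) * math.sin(theta) * dtheta * dphi
    return total

print(integrate_sphere(lambda t, p: 1.0))   # ~ 4*pi steradians
L = 2.0                                     # arbitrary uniform radiance
print(integrate_sphere(lambda t, p: L * math.cos(t) if t < math.pi / 2 else 0.0))  # ~ pi*L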

• Irradiance E(p) = ∫ dE(p, ω) = ∫ L_i(p, ω) cos θ dω
  from a uniform hemispherical light: ∬ L sin θ cos θ dθ dϕ = Lπ
• BRDF f_r(p, ω_i → ω_r) = dL_r(p, ω_r) / dE_i(p, ω_i) = dL_r(p, ω_r) / (L_i(p, ω_i) cos θ_i dω_i)
  the BRDF is nonnegative, reciprocal, and energy-conserving
  L_r(p, ω_r) = L_e(p, ω_r) + ∫ f_r(ω_i → ω_r) L_i(p, ω_i) cos θ_i dω_i
  Diffuse: f_r(ω_i → ω_r) = ρ/π
  Phong: f_r(ω_i → ω_r) = k_s (ω_r · mirror(ω_i, ⃗n))^α

Reconstruction
I(x, y) ← Σ_{i,j} B_{i,j}(x, y) I[i][j] using some basis functions

Color
Retinal, perceived, and object color. Cone cells in the retina have sensitivity functions.
• Response to mixed light l(λ): L = ∫ l(λ) k_L(λ) dλ, and likewise for M and S (discrete sketch below)
• ⃗c(l(λ)) = ( ⃗c_436  ⃗c_546  ⃗c_700 ) ( ∫ k_436(λ) l(λ) dλ,  ∫ k_546(λ) l(λ) dλ,  ∫ k_700(λ) l(λ) dλ )^t
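A discretized Python version of the response integral L = ∫ l(λ) k_L(λ) dλ; the wavelength grid and both curves are made-up toy spectra, not real cone sensitivities.

import math

lams = range(400, 701, 10)   # wavelength samples in nm (toy grid)
dlam = 10.0

def gaussian(lam, center, width):
    return math.exp(-((lam - center) / width) ** 2)

# Made-up illustrative curves: a light spectrum l and one sensitivity function k_L.
l = {lam: gaussian(lam, 550, 80) for lam in lams}
k_L = {lam: gaussian(lam, 565, 50) for lam in lams}

# L = integral l(lambda) k_L(lambda) dlambda, approximated as a Riemann sum.
L = sum(l[lam] * k_L[lam] * dlam for lam in lams)
print(L)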
