
Graphics Review

CHAPTER 6 RASTERIZATION
Line Segments Clipping
Cohen-Sutherland Algorithm
Outcode = b0 b1 b2 b3
b0 = 1 if y > ymax, 0 otherwise
b1 = 1 if y < ymin, 0 otherwise
b2 = 1 if x > xmax, 0 otherwise
b3 = 1 if x < xmin, 0 otherwise
outcode(A) = outcode(B) = 0 → accept line segment
outcode(C) = 0, outcode(D) ≠ 0 → the location of the 1 bit in outcode(D) determines which edge to intersect with
outcode(E) bitwise ANDed with outcode(F) ≠ 0 → both endpoints lie outside the same edge: reject
GH and IJ: both outcodes nonzero, but their bitwise AND is zero → cannot decide; shorten the line segment by intersecting it with one of the sides of the window and retest
Cohen-Sutherland in 3D: use 6-bit outcodes
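
A minimal C sketch of the outcode computation and the trivial accept/reject tests above (the bit values and helper names are illustrative, following the b0..b3 assignment):

/* b0 = above, b1 = below, b2 = right, b3 = left, packed as b0 b1 b2 b3 */
#define TOP    8   /* b0: y > ymax */
#define BOTTOM 4   /* b1: y < ymin */
#define RIGHT  2   /* b2: x > xmax */
#define LEFT   1   /* b3: x < xmin */

int outcode(float x, float y, float xmin, float xmax, float ymin, float ymax)
{
    int code = 0;
    if (y > ymax) code |= TOP;
    if (y < ymin) code |= BOTTOM;
    if (x > xmax) code |= RIGHT;
    if (x < xmin) code |= LEFT;
    return code;
}

/* Trivial accept: both outcodes are zero.  Trivial reject: bitwise AND is nonzero.
   Otherwise shorten the segment against one window edge and repeat the test. */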

Liang-Barsky Clipping
p(α) = (1-α)p1 + αp2, 0 ≤ α ≤ 1
1 > α4 > α3 > α2 > α1 > 0: intersections in the order right, top, left, bottom → shorten
1 > α4 > α2 > α3 > α1 > 0: intersections in the order right, left, top, bottom → reject
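
A hedged C sketch of the per-edge Liang-Barsky test in the usual p·α ≤ q form (names are illustrative; this is the standard formulation, not the notes' exact ordering argument):

/* One edge test; start with a_in = 0, a_out = 1 and call once per window edge. */
int clip_test(float p, float q, float *a_in, float *a_out)
{
    if (p == 0.0f)                  /* segment parallel to this edge          */
        return q >= 0.0f;           /* keep only if it lies inside the edge   */
    float a = q / p;
    if (p < 0.0f) {                 /* entering intersection                  */
        if (a > *a_out) return 0;
        if (a > *a_in)  *a_in = a;
    } else {                        /* exiting intersection                   */
        if (a < *a_in)  return 0;
        if (a < *a_out) *a_out = a;
    }
    return 1;
}
/* Apply with p = -(x2-x1), q = x1-xmin (left); p = x2-x1, q = xmax-x1 (right);
   similarly for y.  If all four tests pass, keep α in [a_in, a_out]. */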

Plane-Line Intersections
p(α) = (1-α)p1 + αp2
n · (p(α) - p0) = 0

α = n · (p0 - p1) / n · (p2 - p1)
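
A small C sketch of the α computation above; the vector type and dot-product helper are assumed for illustration:

typedef struct { float x, y, z; } vec3f;

float dot3(vec3f a, vec3f b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
vec3f sub3(vec3f a, vec3f b) { vec3f r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

/* α = n·(p0 - p1) / n·(p2 - p1); the caller should check that the denominator
   is not (near) zero, i.e. the segment is not parallel to the plane. */
float plane_line_alpha(vec3f n, vec3f p0, vec3f p1, vec3f p2)
{
    return dot3(n, sub3(p0, p1)) / dot3(n, sub3(p2, p1));
}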

Polygon Clipping
Clipping as a Black Box
Nonconvextessellation

Bounding Boxes (complex polygons)


max(x), max(y), min(x), min(y)
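
A minimal C sketch of computing the bounding box in one pass over the vertex list (the array layout is illustrative):

/* Axis-aligned bounding box of a polygon with n vertices. */
void bounding_box(const float x[], const float y[], int n,
                  float *xmin, float *xmax, float *ymin, float *ymax)
{
    *xmin = *xmax = x[0];
    *ymin = *ymax = y[0];
    for (int i = 1; i < n; i++) {
        if (x[i] < *xmin) *xmin = x[i];
        if (x[i] > *xmax) *xmax = x[i];
        if (y[i] < *ymin) *ymin = y[i];
        if (y[i] > *ymax) *ymax = y[i];
    }
}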

Hidden Surface Removal: object-space approach uses pair-wise testing between polygons (objects)



Painter’s Algorithm:
Sort the polygons by depth
Draw back to front
A complete depth sort is generally impossible (and incompatible with the pipeline), e.g. for cyclically overlapping or interpenetrating polygons
Back-Face Removal (Culling)
z-Buffer Algorithm (see the sketch after this list)
Scan-Line Algorithm
BSP Tree
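
A minimal C sketch of the z-buffer depth test for one fragment, assuming smaller z means nearer; the function and buffer names are illustrative, not a full rasterizer:

/* zbuf is initialized to the far-plane depth, frame to the background color. */
void zbuffer_write(float *zbuf, unsigned int *frame, int width,
                   int x, int y, float z, unsigned int c)
{
    int idx = y * width + x;
    if (z < zbuf[idx]) {      /* closer than what is stored: keep this fragment */
        zbuf[idx] = z;
        frame[idx] = c;
    }
}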
Line segment rasterization
DDA
m = (y2-y1)/(x2-x1); // m = dy/dx = ∆y/∆x, assumes 0 ≤ m ≤ 1
y = y1;
for(x = x1; x <= x2; x++) {
    write_pixel(x, round(y), line_color);
    y = y + m;
}

Bresenham
precompute ∆x, ∆y, 2∆y, 2∆y - 2∆x // assumes 0 ≤ m ≤ 1
p0 = 2∆y - ∆x
plot(x0, y0)
for(k = 0; k < ∆x; k++) {
    if(pk < 0) {
        plot(xk+1, yk) // stay on the same scan line
        pk+1 = pk + 2∆y
    } else {
        plot(xk+1, yk+1) // step up one scan line
        pk+1 = pk + 2∆y - 2∆x
    }
}

Polygon rasterization



y = yfirst;
initialize-left-segment();
initialize-right-segment();
while(y≤ ylast){
if(y == end-of-left-segment)
switch-to-next-left-segment();
if(y == end-of-right-segment)
switch-to-next-right-segment();
xleft = evaluate-left-segment();
xright = evaluate-right-segment();
for(x = xleft;x ≤ xright; x ++)
fill(x,y);
y++;
}

CHAPTER 7 TEXTURE MAPPING


Buffers

spatial resolution (n x m)
depth k

Learn to read and write buffers

We read and write rectangular blocks of pixels


Write: Moving pixels from processor memory to the frame buffer
Format conversions
Mapping, Lookups, Tests
Read: Frame buffer to processor memory

Raster Position: geometric entity


glRasterPos3f(x, y, z);
OpenGL can draw into or read from any of the color buffers (front, back, auxiliary)
Change with glDrawBuffer and glReadBuffer
Bitmaps are useful for raster text
GLUT font: GLUT_BITMAP_8_BY_13
Drawing Bitmaps
glBitmap(width, height, x0, y0, xi, yi, bitmap)
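
A small usage sketch of raster text with glRasterPos and the GLUT bitmap font above (the helper name is illustrative):

#include <GL/glut.h>

/* Draw a string with a GLUT bitmap font starting at the given raster position. */
void draw_raster_text(float x, float y, float z, const char *s)
{
    glRasterPos3f(x, y, z);
    while (*s)
        glutBitmapCharacter(GLUT_BITMAP_8_BY_13, *s++);
}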
Reading and Drawing Pixels
glReadPixels(x,y,width,height,format,type,myimage)



glDrawPixels(width,height,format,type,myimage)
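
A hedged usage sketch of the two calls above, reading a pixel block back to processor memory and drawing it again; GL_RGBA / GL_UNSIGNED_BYTE is one common format/type choice, and the names are illustrative:

#include <GL/glut.h>
#include <stdlib.h>

/* Read a width x height block from the framebuffer and redraw it at the
   current raster position (4 bytes per pixel for GL_RGBA / GL_UNSIGNED_BYTE). */
void copy_block(int x, int y, int width, int height)
{
    GLubyte *myimage = (GLubyte *) malloc((size_t)width * height * 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, myimage);
    glRasterPos2i(0, 0);                       /* destination of the redraw */
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, myimage);
    free(myimage);
}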

Image Formats
Reading the Header
Reading the Data
Scaling the Image Data

Texture Mapping
Introduce Mapping Methods
Texture Mapping
Environment Mapping
Bump Mapping

Coordinate Systems
Parametric coordinates
Texture coordinates
Object or World Coordinates
Window Coordinates

Forward vs backward mapping


x = x(s,t)
y = y(s,t)
z = z(s,t)
vs
s = s(x,y,z)
t = t(x,y,z)

Two-part mapping
Map the texture to a simple intermediate surface
Map from intermediate object to actual object
Normals from intermediate to actual
Normals from actual to intermediate
Vectors from center of intermediate
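
A small C sketch of the "vectors from center of intermediate" idea for a sphere intermediate, using a longitude/latitude parametrization (one assumed choice among several possible mappings):

#include <math.h>
#define PI 3.14159265f

/* Map a direction d from the center of an intermediate sphere to (s, t). */
void sphere_map(float dx, float dy, float dz, float *s, float *t)
{
    float len = sqrtf(dx*dx + dy*dy + dz*dz);
    dx /= len; dy /= len; dz /= len;
    *s = 0.5f + atan2f(dz, dx) / (2.0f * PI);  /* longitude mapped to [0,1] */
    *t = 0.5f - asinf(dy) / PI;                /* latitude  mapped to [0,1] */
}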

Point sampling vs area averaging


Point sampling of the texture can lead to aliasing errors → use area averaging
Texture Mapping in OpenGL

Applying Textures I
1. specify the texture
read or generate image
assign to texture
GLubyte my_texels[512][512];// Define a texture image from an array of texels in CPU memory
enable texturing
glEnable(GL_TEXTURE_2D)
glTexImage2D( target, level, components,w, h, border, format, type, texels );
// Define Image as a Texture
gluScaleImage( format, w_in, h_in,type_in, *data_in, w_out, h_out,type_out, *data_out );
// If dimensions of image are not powers of 2
2. assign texture coordinates to vertices
Proper mapping function is left to application
3. specify texture parameters
Wrapping parameters determine what happens if s and t are outside the (0,1) range
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP )
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT )
Filter modes allow us to use area averaging instead of point samples
Magnification and Minification

point sampling (nearest texel)


linear filtering( 2 x 2 filter) to obtain texture values
glTexParameteri( target, type, mode )
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// linear filtering requires a border of an extra texel for filtering at edges (border = 1)
Mipmapping allows us to use textures at multiple resolutions
// Lessens interpolation errors for smaller textured objects
glTexImage2D( GL_TEXTURE_*D, level, … )// Declare mipmap level
gluBuild*DMipmaps( … )
mipmapped point sampling
mipmapped linear filtering
Environment parameters determine how texture mapping interacts with shading
glTexEnv{fi}[v]( GL_TEXTURE_ENV, prop, param )
GL_TEXTURE_ENV_MODE modes
GL_MODULATE: modulates with computed shade
GL_BLEND: blends with an environmental color
GL_REPLACE: use only texture color
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
Perspective Correction Hint
glHint( GL_PERSPECTIVE_CORRECTION_HINT, hint )
hint is one of
GL_DONT_CARE
GL_NICEST
GL_FASTEST
Generating Texture Coordinates
glTexGen{ifd}[v]()
generation modes
GL_OBJECT_LINEAR
GL_EYE_LINEAR
GL_SPHERE_MAP (used for environmental maps)



Applying Textures II (see the combined sketch after the steps below)
1. specify textures in texture objects
2. set texture filter
3. set texture function
4. set texture wrap mode
5. set optional perspective correction hint
6. bind texture object
7. enable texturing
8. supply texture coordinates for vertex
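
A combined C sketch of steps 1-8 using a texture object; the image data, sizes, and the quad's coordinates are illustrative:

#include <GL/glut.h>

GLuint texName;
GLubyte my_texels[512][512][3];                    /* 1. texture image (filled elsewhere) */

void init_texture(void)
{
    glGenTextures(1, &texName);                        /* create a texture object  */
    glBindTexture(GL_TEXTURE_2D, texName);             /* 6. bind it               */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, my_texels); /* 1. specify the texture   */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); /* 2. filter */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);       /* 3. function */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);      /* 4. wrap  */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);                 /* 5. hint  */
    glEnable(GL_TEXTURE_2D);                                           /* 7. enable */
}

/* 8. supply texture coordinates per vertex, e.g. in the display callback */
void draw_textured_quad(void)
{
    glBegin(GL_QUADS);
      glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 0.0);
      glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, 0.0);
      glTexCoord2f(1.0, 1.0); glVertex3f( 1.0,  1.0, 0.0);
      glTexCoord2f(0.0, 1.0); glVertex3f(-1.0,  1.0, 0.0);
    glEnd();
}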

Compositing and Blending


Blending for translucent surfaces
Opaque surfaces permit no light to pass through
Transparent surfaces permit all light to pass
Translucent surfaces pass some light
translucency = 1 – opacity (α)
Opaque polygons block all polygons behind them and affect the depth buffer
Translucent polygons should not affect depth buffer
Render with glDepthMask(GL_FALSE), which makes the depth buffer read-only
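
A short C sketch of the ordering implied above: opaque geometry first with normal depth writes, then translucent geometry with the depth buffer read-only (the draw_* helpers are hypothetical):

/* In the display callback: */
glEnable(GL_DEPTH_TEST);
draw_opaque_objects();                       /* hypothetical: depth writes on        */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                       /* depth buffer becomes read-only       */
draw_translucent_objects();                  /* hypothetical: ideally back to front  */
glDepthMask(GL_TRUE);                        /* restore depth writes                 */
glDisable(GL_BLEND);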

Blending Equation
Define source and destination blending factors for each RGBA component
s = [sr, sg, sb, sα]
d = [dr, dg, db, dα]
Suppose that the source and destination colors are
b = [br, bg, bb, bα]
c = [cr, cg, cb, cα]
Blend as
c’ = [br sr+ cr dr, bg sg+ cg dg , bb sb+ cb db , bα sα+ cα dα ]

glEnable(GL_BLEND)
glBlendFunc(source_factor, destination_factor)
//GL_ZERO, GL_ONE
//GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
//GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA
Ex:
1. opaque background color (R0, G0, B0, 1)
2. want to blend in a translucent polygon with color (R1, G1, B1, α1)
3. select GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blending factors:
R'1 = α1 R1 + (1 - α1) R0, and similarly for G and B

Fog:
Blend source color Cs and fog color Cf by Cs’=f Cs + (1-f) Cf
GLfloat fcolor[4] = {……};
glEnable(GL_FOG);
glFogf(GL_FOG_MODE, GL_EXP);



glFogf(GL_FOG_DENSITY, 0.5);
glFogfv(GL_FOG_COLOR, fcolor);

Antialiasing
Use average area α1+α2-α1α2 as blending factor
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

CHAPTER 8 ADVANCED TOPICS


Programmable Pipelines
Vertex shaders
Fragment shaders

GLSL
Use:

gl_Position
gl_ProjectionMatrix
gl_ModelViewMatrix
gl_Vertex
gl_FrontColor

Data Types

C types: int, float, bool


Vectors:
float: vec2, vec3, vec4
Also int (ivec) and boolean (bvec)
Matrices: mat2, mat3, mat4
Stored by columns
Referencing: m[column][row] (m[i] selects the i-th column)
C++ style constructors
vec3 a = vec3(1.0, 2.0, 3.0)
vec2 b = vec2(a)
There are no pointers in GLSL
Variables can change: once per primitive, once per vertex, once per fragment, at any time in the application



const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
void main(void)
{
gl_Position = gl_ProjectionMatrix*gl_ModelViewMatrix*gl_Vertex;
gl_FrontColor = red;
}

void main(void)
{
gl_FragColor = gl_Color;
}

//Vertex Shader
const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
varying vec4 color_out;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
color_out = red;
}

//Required Fragment Shader


varying vec4 color_out;
void main(void)
{
gl_FragColor = color_out; //in, out, inout
}

GLuint myProgObj;
myProgObj = glCreateProgram();
/* define shader objects here */
glLinkProgram(myProgObj);
glUseProgram(myProgObj);

//Shader Reader
#include <stdio.h>
#include <stdlib.h> /* malloc, free */
char* readShaderSource(const char* shaderFile)
{
FILE* fp = fopen(shaderFile, "r");
char* buf;
long size;
if(fp == NULL) return(NULL);
fseek(fp, 0L, SEEK_END); /* end of file */
size = ftell(fp);
fseek(fp, 0L, SEEK_SET); /* start of file */
buf = (char*) malloc((size + 1) * sizeof(char));



fread(buf, 1, size, fp);
buf[size] = '\0'; /* null termination */
fclose(fp);
return buf;
}

//Adding a Vertex Shader


GLint vShader;
GLuint myVertexObj;
GLchar vShaderFile[] = "my_vertex_shader.glsl";
GLchar* vSource = readShaderSource(vShaderFile);
myVertexObj= glCreateShader(GL_VERTEX_SHADER);
glShaderSource(myVertexObj, 1, &vSource, NULL);
glCompileShader(myVertexObj);
glAttachShader(myProgObj, myVertexObj);

//Vertex Attribute Example


GLint colorAttr;
colorAttr = glGetAttribLocation(myProgObj, "myColor");
/* myColor is name in shader */
GLfloat color[4];
glVertexAttrib4fv(colorAttr, color);
/* color is variable in application */

//Uniform Variable Example


GLint angleParam;
angleParam = glGetUniformLocation(myProgObj, "angle");
/* angle defined in shader */
/* my_angle set in application */
GLfloat my_angle;
my_angle = 5.0; /* or some other value */
glUniform1f(angleParam, my_angle);

//Wave Motion Vertex Shader


uniform float time;
uniform float xs, zs; // frequencies
uniform float h; // height scale
void main()
{
vec4 t = gl_Vertex;
t.y = gl_Vertex.y+ h*sin(time + xs*gl_Vertex.x)+ h*sin(time + zs*gl_Vertex.z);
gl_Position = gl_ModelViewProjectionMatrix*t;
}

//Particle System
uniform vec3 init_vel;
uniform float g, m, t;



void main()
{
vec3 object_pos;
object_pos.x = gl_Vertex.x + init_vel.x*t;
object_pos.y = gl_Vertex.y + init_vel.y*t + g/(2.0*m)*t*t;
object_pos.z = gl_Vertex.z + init_vel.z*t;
gl_Position = gl_ModelViewProjectionMatrix*vec4(object_pos,1);
}

//Modified Phong Vertex Shader I


void main(void)
/* modified Phong vertex shader (without distance term) */
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vec4 ambient, diffuse, specular;
vec4 eyePosition = gl_ModelViewMatrix * gl_Vertex;
vec4 eyeLightPos = gl_LightSource[0].position;
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
vec3 L = normalize(eyeLightPos.xyz - eyePosition.xyz);
vec3 E = -normalize(eyePosition.xyz);
vec3 H = normalize(L + E);
/* compute diffuse, ambient, and specular contributions */
float Kd = max(dot(L, N), 0.0);
float Ks = pow(max(dot(N, H), 0.0), gl_FrontMaterial.shininess);
float Ka = 1.0;
ambient = Ka*gl_FrontLightProduct[0].ambient;
diffuse = Kd*gl_FrontLightProduct[0].diffuse;
specular = Ks*gl_FrontLightProduct[0].specular;
gl_FrontColor = ambient+diffuse+specular;
}

//Pass Through Fragment Shader


/* pass-through fragment shader */
void main(void)
{
gl_FragColor = gl_Color;
}

//Vertex Shader for per Fragment Lighting


varying vec3 N, L, E, H;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vec4 eyePosition = gl_ModelViewMatrix * gl_Vertex;
vec4 eyeLightPos = gl_LightSource[0].position;
N = normalize(gl_NormalMatrix * gl_Normal);
L = normalize(eyeLightPos.xyz - eyePosition.xyz);
E = -normalize(eyePosition.xyz);
H = normalize(L + E);



}

//Fragment Shader for Modified Phong Lighting


varying vec3 N;
varying vec3 L;
varying vec3 E;
varying vec3 H;
void main()
{
vec3 Normal = normalize(N);
vec3 Light = normalize(L);
vec3 Eye = normalize(E);
vec3 Half = normalize(H);
float Kd = max(dot(Normal, Light), 0.0);
float Ks = pow(max(dot(Half, Normal), 0.0),gl_FrontMaterial.shininess);
float Ka = 0.0;
vec4 diffuse = Kd * gl_FrontLightProduct[0].diffuse;
vec4 specular = Ks * gl_FrontLightProduct[0].specular;
vec4 ambient = Ka * gl_FrontLightProduct[0].ambient;
gl_FragColor = ambient + diffuse + specular;
}

Samplers
Provides access to a texture object
Defined for 1, 2, and 3 dimensional textures and for cube maps
//In shader:
uniform sampler2D myTexture;
vec2 texcoord;
vec4 texcolor = texture2D(myTexture, texcoord);

//In application:
GLint texMapLocation;
texMapLocation = glGetUniformLocation(myProg, "myTexture");
glUniform1i(texMapLocation, 0);
/* assigns to texture unit 0 */

//Reflection Map Vertex Shader


varying vec3 R;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
vec3 N = normalize(gl_NormalMatrix*gl_Normal);
vec4 eyePos = gl_ModelViewMatrix*gl_Vertex;
R = reflect(-eyePos.xyz, N);
}



//Reflection Map Fragment Shader
varying vec3 R;
uniform samplerCube texMap;
void main(void)
{
gl_FragColor = textureCube(texMap, R);
}

//Recursive Ray Tracer


color c = trace(point p, vector d, int step)
{
color local, reflected, transmitted;
point q;
normal n;
vector r, t;
if(step > max)
return(background_color);
q = intersect(p, d, status);
if(status==light_source)
return(light_source_color);
if(status==no_intersection)
return(background_color);
n = normal(q);
r = reflect(q, n);
t = transmit(q,n);
local = phong(q, n, r);
reflected = trace(q, r, step+1);
transmitted = trace(q,t, step+1);
return(local+reflected+transmitted);
}
