
Graphics for VEs

Ruth Aylett

Overview

VE Software
Graphics for VEs
The graphics pipeline
Projections
Lighting
Shading

VR software

Two main types of software used:
- off-line authoring or modelling packages
- runtime systems

Some runtime systems support some authoring:
- sometimes limited; e.g. Java3D
- sometimes part of a large-scale toolkit; e.g. WorldViz, WorldToolkit
- toolkits are generally expensive and have had trouble sustaining a market

Runtime VR systems

Two major parts:
- initialisation
- update loop

Initialisation:
- executed once at program start-up time
- generates or loads a run-time world database containing a description of all entities in the virtual world
  - usually read from a file, generated by a separate authoring or modelling program
  - import formats supported must be known
- also loads lighting models, checks input/output devices are working

Software architecture

[Diagram: cyclic executive — check and initialise hardware → import or create 3D world → create viewpoints → loop: render world, check input devices]

A main loop can be used to implement simple runtime VR systems:
- the simulation update loop continues until the program is halted
- each loop iteration represents one time step of the simulation
- renders the current state of the virtual world
- checks input devices and implements changes to the world

Tasks within an update loop

Updating state:
- reading values from input devices
- applying those values to the objects being controlled by each device (including the participant's viewpoint or body positions)

For entities with autonomous behaviours:
- execute algorithms to calculate behaviours for the current time step
- e.g. computing the new position and orientation of a falling or tumbling object, a self-propelled object, an intelligent object

If there is support for collision detection, all possible collisions must be checked and object movements adjusted as needed.
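The update loop described above can be sketched in a few lines of Python. All names here (`World`, `Ball`, `run`) are illustrative placeholders, not any particular VR toolkit's API; rendering and device polling are stubbed out.

```python
# A minimal sketch of the simulation update loop: one iteration = one
# time step; each entity updates its own state (an autonomous behaviour).

class World:
    def __init__(self):
        self.entities = []          # run-time world database: all entities

    def update(self, dt, inputs):
        for e in self.entities:
            e.update(dt, inputs)    # autonomous / device-driven behaviour


class Ball:
    """A falling object with a trivial autonomous behaviour."""
    def __init__(self):
        self.y = 10.0
        self.vy = 0.0

    def update(self, dt, inputs):
        self.vy -= 9.81 * dt        # gravity for this time step
        self.y += self.vy * dt
        if self.y < 0.0:            # crude collision with the ground plane
            self.y, self.vy = 0.0, -0.5 * self.vy


def run(world, steps, dt=1.0 / 60.0):
    for _ in range(steps):
        inputs = {}                 # a real system would poll trackers here
        world.update(dt, inputs)    # advance simulation by one time step
        # render(world)             # draw current state (omitted)


world = World()
world.entities.append(Ball())
run(world, 120)                     # two simulated seconds at 60 Hz
```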

Game Engines

Increasing recent use of 3D game engines:
- Doom, Quake
- Unreal Tournament now the most popular
- Half-Life and NeverWinter Nights growing

Game engines have optimised 3D engine features:
- also large user communities for support

Supply of generic toolkits helps cut development time:
- supports collision detection, physics-based models, AI behaviours, multiplayer client/server modes etc.

Downsides include licensing problems:
- also turning off the shooting
- proprietary formats for models and scripting

Free Runtime Software

A number of open source systems:
- VR Juggler: with a separate scenegraph system, e.g.:
  - Virtual Rendering System (VRS3D)
  - OpenSG
  - OpenSceneGraph
- open source game engines, e.g. Ogre3D, 3dml
- Java 3D:
  - extension API to the core Java SDK
  - probably fundamentally too slow to be a serious tool for the development of professional VEs
  - on the web, badly impacted by extension status and lack of browser functionality (especially MS IE)

Graphics Pipeline

[Diagram: 3D data → transform → 3D data in camera coords → transform → 2D data in screen coords → hidden surface removal & render → data fully on screen]

Scenegraph and pipelines

Scene Description

Cartesian coordinates:
- 2 dimensions (X, Y)
- 3 dimensions (X, Y, Z)

Objects:
- shape
- colour
- location

Object Representation

- POINTS: (X, Y, Z)
- LINES: (X1, Y1, Z1), (X2, Y2, Z2)
- POLYGONS: (X1, Y1, Z1) … (Xn, Yn, Zn)
- MESHES:
  - set of vertices V1 = (X1, Y1, Z1) … Vn = (Xn, Yn, Zn)
  - set of polygons P1, P2, … Pm
  - each polygon Pm is represented as an ordered list of vertices
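The mesh representation above — a shared vertex list plus polygons given as ordered lists of vertices — can be sketched directly as data. This example (a unit square split into two triangles) is illustrative, not taken from any file format.

```python
# An indexed mesh: vertices stored once, polygons reference them by index.

vertices = [
    (0.0, 0.0, 0.0),   # V1
    (1.0, 0.0, 0.0),   # V2
    (1.0, 1.0, 0.0),   # V3
    (0.0, 1.0, 0.0),   # V4
]

# Each polygon is an ordered list of indices into `vertices`, so a vertex
# shared by several polygons is stored only once.
polygons = [
    [0, 1, 2],         # P1
    [0, 2, 3],         # P2
]

def polygon_vertices(p):
    """Expand polygon p back into its ordered coordinate list."""
    return [vertices[i] for i in polygons[p]]
```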

Object Colour

RGB system:
- Red - Green - Blue
- additive light system
- colour represented as (R, G, B)

Examples:
- (1, 1, 0) yellow
- (0, 0, 0) black
- (0, 1, 0) green

Building a World

Built from a set of objects:
- objects translated into position
- objects rotated as appropriate

Include light sources:
- ambient
- non-directional
- spotlight

Transforming objects

Translation:
- add an offset to every vertex of an object

Rotation:
- use trigonometry to calculate the new position of each vertex
- allow rotation around each Cartesian axis
- may be represented as matrices

Scale:
- multiply each vertex by a constant scale factor
- always apply before the object is put into the scene

Translation

An object is translated by adding an offset (Tx, Ty) to every vertex:

  (x, y) → (x + Tx, y + Ty)

Matrix Rotation: 2 dimensions

Rotating (X, Y) by angle a about the origin gives (X1, Y1):

  (X1, Y1) = | cos(a)  -sin(a) | (X)
             | sin(a)   cos(a) | (Y)

Matrix Rotation: 3 dimensions

X Rotation:
                 | 1     0        0     |
  (X1, Y1, Z1) = | 0   cos(a)  -sin(a)  | (X, Y, Z)
                 | 0   sin(a)   cos(a)  |

Y Rotation:
                 |  cos(a)  0   sin(a)  |
  (X1, Y1, Z1) = |    0     1     0     | (X, Y, Z)
                 | -sin(a)  0   cos(a)  |

Z Rotation:
                 | cos(a)  -sin(a)  0 |
  (X1, Y1, Z1) = | sin(a)   cos(a)  0 | (X, Y, Z)
                 |   0        0     1 |
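The three axis rotations above translate directly into code. A minimal sketch using only the standard library (function names are illustrative):

```python
# The standard 3x3 rotation matrices about the X, Y and Z axes.
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0,  0],
            [0, c, -s],
            [0, s,  c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[ c, 0, s],
            [ 0, 1, 0],
            [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def apply(m, v):
    """Multiply 3x3 matrix m by column vector v."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotating (1, 0, 0) by 90 degrees about Z should give (0, 1, 0).
p = apply(rot_z(math.pi / 2), (1.0, 0.0, 0.0))
```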

Projection

Objects exist in 3 dimensions; the final image is in 2 dimensions:
- projection reduces the dimensions from three to two
- perspective projection mimics the scaling of objects with distance from the eye
- isometric projection does not scale with distance

Camera & Projection

[Diagram: camera viewing the world through a projection plane]

The camera moves according to the head movement.
The projection plane is fixed in relation to either:
- the camera
- the world

Camera & Projection

Projection plane fixed to the camera. Characteristics:
- constant field of view
- projection plane normal to the viewing direction

Camera & Projection

Projection plane fixed to the world. Characteristics:
- field of view depends on camera position
- viewing direction not always normal to the projection plane

Clipping

Projected objects may lie:
- totally on screen
- partially on screen
- completely off screen

Polygons are clipped to the sides of the screen:
- must make sure new polygons are complete
- remaining polygons may obscure each other

Polygon Sorting & Painting

Aim: identify which parts of a polygon are visible.
- many solutions developed for special cases
- consider two methods:
  - painter's algorithm
  - Z-buffer algorithm

Screen

A picture is drawn in a frame buffer:
- the frame buffer is an area of memory
- think of the screen as a grid of squares
- each square represents a pixel
- every pixel can hold a colour (R, G, B)
- drawing a picture is just colouring the squares

Painter's algorithm

Method:
- generate all the polygons
- order them with respect to distance from the camera
- draw the polygons furthest away first and move towards the viewer
- visible polygons are drawn over distant polygons

Works when polygons are well-ordered:
- some cases cannot be solved
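The method above can be sketched in a few lines. This example sorts polygons by average vertex depth (one common choice; larger z means farther here) and "draws" by recording the order rather than rasterising:

```python
# Painter's algorithm sketch: sort back to front, draw in that order so
# nearer polygons overwrite farther ones.

def average_depth(polygon):
    """Average z of a polygon's vertices, used as its sort key."""
    return sum(z for (_, _, z) in polygon) / len(polygon)

def painters(polygons):
    # farthest first (back to front)
    ordered = sorted(polygons, key=average_depth, reverse=True)
    drawn = []
    for poly in ordered:
        drawn.append(poly)         # a real renderer would rasterise here
    return drawn

far  = [(0, 0, 9.0), (1, 0, 9.0), (0, 1, 9.0)]
near = [(0, 0, 2.0), (1, 0, 2.0), (0, 1, 2.0)]
order = painters([near, far])      # the far polygon must be drawn first
```

Sorting by average depth is exactly where the "some cases cannot be solved" caveat bites: intersecting or cyclically overlapping polygons have no correct single ordering.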

Z-buffer algorithm

Method:
- retain distance information for each vertex
- the frame buffer is extended to have a depth buffer
- whilst drawing the polygon, calculate the depth of each pixel:

  FOR each pixel in polygon
      calculate depth of pixel in polygon
      compare depth with current frame-buffer depth
      IF polygon pixel depth is less than the one stored in frame buffer
      THEN save polygon colour and depth in frame buffer
  NEXT pixel
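The loop above can be sketched on a tiny frame buffer. Buffer sizes and colours here are illustrative; a real renderer would compute per-pixel depths by interpolation rather than using one depth per polygon:

```python
# Z-buffer sketch: each pixel keeps a colour and a depth; a fragment is
# written only when it is nearer than the depth already stored.

W, H = 4, 4
INF = float("inf")
colour = [[(0, 0, 0)] * W for _ in range(H)]   # frame buffer (R, G, B)
depth  = [[INF] * W for _ in range(H)]         # depth buffer

def plot(x, y, z, rgb):
    """Write a fragment only if it is nearer than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        colour[y][x] = rgb

# A far red square drawn first, then a nearer green one overlapping it;
# draw order no longer matters, unlike the painter's algorithm.
for y in range(4):
    for x in range(4):
        plot(x, y, 9.0, (1, 0, 0))
for y in range(2):
    for x in range(2):
        plot(x, y, 2.0, (0, 1, 0))
```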

Illumination Model

The illumination model allows the colour to be calculated at a surface:
- approximates the behaviour of light within a scene
- approximate since it only considers individual surfaces
- does not model reflections between them

The Phong model is typically used.

Phong Model

A simple model that can be computed rapidly.

Three lighting components:
- diffuse
- specular
- ambient

Uses four vectors:
- to source (light)
- to viewer
- normal
- perfect reflector

Phong Illumination Model

The 3 types of lighting:
- Ambient: constant background illumination
  - same value for all objects in the scene
  - gives constant shading
- Diffuse: light widely scattered by the surface
  - depends only on the angle between light and surface, and on the surface material
- Specular: highlights on the object
  - depends both on the position of the light wrt the surface and the position of the viewer wrt the surface
  - as well as the surface material

Diffuse Reflection

Mathematical description:
- N: normal to the object at the intersection
- Li: vector from the intersection to light i
- Ii: intensity of light from light i
- kd: coefficient of diffuse reflection
- ai: angle between Li and N

The diffuse component is given by:

  kd ( I1 cos(a1) + I2 cos(a2) + I3 cos(a3) + … )

Specular Reflection

Mathematical description:
- ks: coefficient of specular reflection
- Li: vector from the intersection to light i
- Ii: intensity of light from light i
- R: direction of reflection of the incident light
- n: level of specularity

The specular component is given by:

  ks ( I1 (L1·R)^n + I2 (L2·R)^n + … )

Light Sources

In the Phong model we add the results from each light source:
- each light source has separate diffuse, specular, and ambient terms to allow maximum flexibility, even though this form does not have a physical justification
- separate red, green, and blue components
- hence 9 coefficients for each point source: Idr, Idg, Idb, Isr, Isg, Isb, Iar, Iag, Iab

Illumination Model

Partial illumination model:
- ambient light only
- ambient light + diffuse reflections (single source, ks = 0)

Full illumination model (ambient light + diffuse + specular reflections):

  Ir = Ia kar + Ii ( kdr (Li·N) + ks (Li·R)^n )
  Ig = Ia kag + Ii ( kdg (Li·N) + ks (Li·R)^n )
  Ib = Ia kab + Ii ( kdb (Li·N) + ks (Li·R)^n )
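One channel of the full model above can be computed in a few lines. This sketch follows the slides' notation, where the specular term uses (Li·R); all vectors are normalised, and the dot products are clamped at zero so surfaces facing away from the light receive no contribution (an assumption added here, not stated on the slide):

```python
# One colour channel of the Phong illumination model for a single light.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalise(v):
    m = math.sqrt(dot(v, v))
    return tuple(a / m for a in v)

def phong_channel(Ia, ka, Ii, kd, ks, n, L, N, R):
    """I = Ia*ka + Ii*(kd*(L.N) + ks*(L.R)^n), per the slides."""
    L, N, R = normalise(L), normalise(N), normalise(R)
    diffuse  = kd * max(dot(L, N), 0.0)
    specular = ks * max(dot(L, R), 0.0) ** n
    return Ia * ka + Ii * (diffuse + specular)

# Light straight along the normal: both terms at full strength,
# so I = 0.2*1.0 + 1.0*(0.5 + 0.3) = 1.0.
I = phong_channel(Ia=0.2, ka=1.0, Ii=1.0, kd=0.5, ks=0.3, n=10,
                  L=(0, 0, 1), N=(0, 0, 1), R=(0, 0, 1))
```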

Object Shading

Flat Shading
Gouraud Shading
Phong Shading

Object Shading

Flat shading:
- simplest way to shade a polygon
- apply the Phong illumination model once per polygon
- all pixels in the polygon are shaded the same
- quick, but limited realism for meshes

[Diagram: triangle with vertex colours (R1,G1,B1), (R2,G2,B2), (R3,G3,B3) shaded as one flat colour]

Object Shading

Gouraud shading:
- normal at each polygon vertex
- an (r, g, b) colour is computed for each vertex
- pixel colour is calculated by linear interpolation of the vertex colours (Ri, Gi, Bi)
- slower, smooth appearance across the polygon

Object Shading

Phong shading:
- normal at each polygon vertex
- normals interpolated along edges and across the polygon
- an (r, g, b) colour is computed for each interpolated normal N using the illumination model
- very slow, smooth appearance, good quality
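The difference between Gouraud and Phong shading is what gets interpolated. A minimal sketch (linear interpolation along one edge stands in for the full per-pixel case; names are illustrative):

```python
# Gouraud interpolates vertex *colours*; Phong interpolates the *normals*
# and re-runs the illumination model per pixel.

def lerp(a, b, t):
    """Linear interpolation between two tuples, t in [0, 1]."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

c1 = (1.0, 0.0, 0.0)    # colour computed at vertex 1
c2 = (0.0, 0.0, 1.0)    # colour computed at vertex 2

# Gouraud: the colour halfway along the edge is the average of the two.
mid = lerp(c1, c2, 0.5)

# Phong would instead do: n = lerp(n1, n2, 0.5), renormalise, then run
# the illumination model with n to get the pixel colour.
```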

Projections: plane map

[Diagram: image in (U, V) texture space, U and V in [0, 1], projected onto the object along a plane]

Projections: cylindrical map

Theory:

  (u, v) → ( r cos(2πu), r sin(2πu), h v )

[Diagram: image in (U, V) space, U and V in [0, 1], wrapped onto a cylinder of radius r and height h]

Projections: spherical map

Theory:

  (u, v) → ( r cos(2πu) cos(πv), r sin(2πu) cos(πv), r sin(πv) )

[Diagram: image in (U, V) space wrapped onto a sphere of radius r]
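The cylindrical and spherical mapping formulas above translate directly into functions from texture coordinates to surface points. Note the v range for the sphere: taking v in [-0.5, 0.5] keeps the latitude term in [-π/2, π/2] and covers the whole sphere; this range is an assumption, since the slide does not state it.

```python
# Texture projections: map (u, v) to a point on the 3D surface.
import math

def cylindrical(u, v, r=1.0, h=1.0):
    """Wrap u around the cylinder's circumference, v along its height."""
    return (r * math.cos(2 * math.pi * u),
            r * math.sin(2 * math.pi * u),
            h * v)

def spherical(u, v, r=1.0):
    """u is longitude in [0, 1]; v is latitude, assumed in [-0.5, 0.5]."""
    return (r * math.cos(2 * math.pi * u) * math.cos(math.pi * v),
            r * math.sin(2 * math.pi * u) * math.cos(math.pi * v),
            r * math.sin(math.pi * v))

# A quarter of the way round and half way up the cylinder:
p = cylindrical(0.25, 0.5)
```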

3D Texturing

- 2-dimensional textures only cover the surface of an object
- 3-dimensional textures are defined for all points in space
- objects are cut from the texture space, much like sculpting a statue from marble

Alternative rendering techniques

Raytracing:
- aims to improve the realism of the image
- slower than polygon rendering
- suitable for producing still images and animations

Radiosity:
- a further improvement of the shading model
- attempts to improve the treatment of ambient light
- slow, but can be pre-computed and used with polygon rendering or raytracing

Problems of the graphics pipeline

- difficult to achieve realism:
  - reflections and transparency are not accurate
  - real shadows increase rendering time
- polygonal objects are not smooth:
  - Gouraud/Phong shading needed to increase quality
- difficult to render shapes defined mathematically
- the pipeline is not a natural approach

Ray Tracing - the concept

[Diagram: painter viewing a scene through a wire grid, copying each square onto gridded paper]

- draw a grid on a piece of paper
- cut a square hole in a card and connect some wires across the hole, again forming a grid
- hold the card in front of the scene to be painted and draw the image seen through each grid square on the corresponding square on the paper
- if the image through a grid square is too complex, increase the number of squares on paper and card
- when the grid is suitably fine, just paint the average colour seen through a grid hole in the corresponding position on the paper

Ray Tracing

1. Trace a ray from the eye through a pixel, into the scene
2. Find the nearest object intersected by the ray
3. Cast a new reflected ray and a refracted ray
4. Calculate the light arriving from the reflected and refracted rays and the direct illumination from lights in the scene
5. Apply the shading model to get the colour of the object at the intersection
6. Paint the pixel with the computed colour
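Steps 1 and 2 above — casting a ray and finding the nearest hit — are the core operation, and for a sphere they reduce to solving a quadratic. A minimal sketch (reflection, refraction, and shading omitted; the ray direction is assumed to be unit length):

```python
# Ray-sphere intersection: solve |origin + t*direction - centre|^2 = r^2
# for t, a quadratic t^2 + b*t + c = 0 when direction is unit length.
import math

def ray_sphere(origin, direction, centre, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # discriminant; direction assumed unit
    if disc < 0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None     # nearest intersection in front of eye

# Eye at the origin looking down +z at a unit sphere centred at z = 5:
t = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A full ray tracer would call this for every object, keep the smallest t, then recurse with reflected and refracted rays as in steps 3-5.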

Radiosity

- Radiosity is a physically based model of global diffuse illumination
- developed at Cornell University in 1984 from radiative heat transfer
- assumes that all surfaces are ideal Lambertian (diffuse) - ray tracing assumes ideal specular
- Radiosity discretises the scene and produces data independent of the viewer
- in general, Radiosity takes longer than ray tracing

Pipeline: build the environment → determine the form factors → solve the radiosity equation → render the environment

Radiosity

Radiosity is not a rendering technique:
- independent of the viewer's position
- produces data to be rendered later
- needs additional rendering to produce an image
- rendering can be done through a graphics pipeline or using ray tracing methods
- best results when both ray tracing and radiosity are used
- a characteristic radiosity feature is colour bleeding
