Salier-Hellendag, Mar, Lee, Kurnia 1
Non-photorealistic Renderer
Animation: "The Entertainer"
CS184 Final Project · Fall 2008
Isaac Salier-Hellendag (cs), Richard Mar (an), Trevor Lee (ci), Shendy Kurnia (cq)
About the Project
Project Goal
Our objective in this project was to produce a short animation with frames rendered in an NPR (non-photorealistic) renderer. The renderer would have two major features, cel-shading and contour outlining, to provide a "toon" shading look for each frame. Further, we decided to create original models and animations for the project: a player piano with keys moving in sync with music, on display via a dynamic camera. We would then export each frame of animation into .obj and .mtl files, and combine the individually rendered frames into a complete animation.

The finished animation (with sound) is available in high definition (1280 x 720) at:
http://www.vimeo.com/2486431

Additionally, we have included it on the disc submitted with this paper, in .mov format. The images within the paper are also included, in high resolution, on the disc.
Project Approach
We discussed implementing our renderer using an OpenGL shader, with contour outlines created using either OpenGL wireframe lines or the line drawing algorithm described in Shirley 9.3.1. However, this approach seemed too simplistic for the scope of the course and the project. Merely rendering our animation by letting OpenGL do all the work would be too easy.

With that in mind, we decided to use a modified raytracer to perform the rendering work. We decided to render each frame individually and compile the frames into the animation after rendering. First, to create a better "cartoonish" look for our models, we eliminated reflections and shadows, as they would both negatively impact the appearance of the scene and significantly increase rendering time. With post-processing contour outlining and rendering efficiency in mind, we also chose not to use multi-sample anti-aliasing.

Our approach was inspired by several online articles on NPR shading that recommended the following basic method for frame-by-frame rendering:

1. Calculate cel-shading based on material color and an array of grayscale values using the modified raytracer
2. Calculate edges for contour outlines
3. Overlay the edges onto the cel-shaded image to produce the final image
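The final step of this method, overlaying the detected edges onto the cel-shaded frame, can be sketched as follows. The `Image` struct and `overlayEdges` function here are our illustration, not the renderer's actual code; we assume a row-major 8-bit RGB buffer and a per-pixel boolean edge mask:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical image type: row-major RGB, 8 bits per channel.
struct Image {
    int width, height;
    std::vector<uint8_t> rgb; // width * height * 3 bytes
};

// Overlay detected edges (a per-pixel boolean mask) as black contour
// lines on top of the cel-shaded frame.
Image overlayEdges(const Image& celShaded, const std::vector<bool>& edgeMask) {
    Image out = celShaded;
    for (int i = 0; i < out.width * out.height; ++i) {
        if (edgeMask[i]) {
            // Paint edge pixels solid black.
            out.rgb[3 * i] = out.rgb[3 * i + 1] = out.rgb[3 * i + 2] = 0;
        }
    }
    return out;
}
```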
We then proceeded to gut and rebuild the raytracer from Assignment 4 into a raytracer optimized for NPR rendering, throwing out the old code for calculating Phong shading, reflections, and shadows. There were three things that needed to be done: 1) the NPR shader to produce the "cartoonish" cel-shaded look, 2) post-process edge detection for the contour outlining, and 3) a new acceleration structure to cut down on render time.
The NPR Renderer
Commonly found in comic books and hand-drawn animations, cel-shading -- or "toon" shading -- is a form of shading intended to make objects look hand-drawn. As with our Phong-shaded raytracer, we would calculate the following for each eye ray (in our case, 1280 x 720 = ~920,000 rays to produce an HD-resolution image):
- The closest polygon intersection, using hierarchical bounding boxes to accelerate tracing
- The normalized direction vectors to light sources from that point (note: we opted to use a single white point light source for our scene)
- The diffuse RGB value (Kd) at the intersection point
- The surface normal of the intersected polygon

As we would when calculating the diffuse component for Phong shading, we need the dot product of the light vector and the surface normal to obtain the cosine of the angle between the two, a value between 0 and 1. At this point, we depart from Phong shading.

Instead of using the cosine value itself, we use an array of 16 grayscale values. In our case, we used values recommended by a cel-shading article on GameDev.net; the entries take one of three grayscale values: 0.5, 0.75, and 1.0. Our polygon's diffuse material value, Kd, will be multiplied by one of these values. Determining which is simple:
int P_grayscale = int ((l * n) * 16)
Then the cel-shaded value is simply:
vec3 cel = Kd * grayscaleArray[P_grayscale]
Note that given the grayscale array described above, this means that any cosine value greater than 0.5 would result in the pixel being colored with cel-shading equal to Kd, since the grayscale value would be 1.0.
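The lookup can be sketched as below. The exact split of the 16 entries among the bands 0.5, 0.75, and 1.0 is our assumption (chosen so that cosines above 0.5 map to 1.0, as the text states); note also that the index must be clamped, since a cosine of exactly 1.0 would otherwise index past the end of the array:

```cpp
#include <algorithm>
#include <array>

// 16-entry grayscale lookup table. The band boundaries here are an
// assumption consistent with the text: cosines above 0.5 map to 1.0.
const std::array<float, 16> grayscaleArray = {
    0.5f,  0.5f,  0.5f,  0.5f,
    0.75f, 0.75f, 0.75f, 0.75f,
    1.0f,  1.0f,  1.0f,  1.0f,  1.0f,  1.0f,  1.0f,  1.0f
};

// Quantize the diffuse cosine term (l . n) into one of the grayscale
// bands. Clamping guards the l . n == 1.0 edge case.
float celShade(float lDotN) {
    int idx = static_cast<int>(std::max(0.0f, lDotN) * 16.0f);
    idx = std::min(idx, 15);
    return grayscaleArray[idx];
}
```

The final pixel color is then `Kd * celShade(lDotN)`, matching the formula above.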
We save the resulting RGB value for this pixel and move on to the next pixel. We do not use ambient or specular components, as they are unnecessary for cel-shading. We continue this process through every pixel. See Figure 1 for a sample resulting cel-shaded image. Afterward, the only task remaining is to add in contour outlines.
Figure 1. Cel-shaded piano without contour lines
Contour Outlines
In order to enhance the hand-drawn appearance of the images produced by our renderer, our program traces the major figures and edges in the rendered images with bold black contour lines before it outputs the final image. Our program finds where to draw the contour lines by applying an edge detection algorithm that uses differential analysis of the depth and surface orientation features of the raytraced scene. Overall, the contour outlining process has three main phases:

1. The generation of feature maps describing the sampled depth and surface normal attributes of the scene.
2. Edge detection by applying a two-dimensional Sobel filter to both feature maps.
3. Overlaying the detected edges as black contour lines on the color image.
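Phase 2, the Sobel filter over a feature map, can be sketched as follows. This is our illustration rather than the renderer's actual code, and the `threshold` parameter is an assumption (the text does not give a value); it operates on a single-channel map such as the normalized depth buffer:

```cpp
#include <cmath>
#include <vector>

// Apply the two 3x3 Sobel kernels (horizontal and vertical gradients)
// to a single-channel feature map of size w x h, and threshold the
// gradient magnitude to produce a binary edge mask.
std::vector<bool> sobelEdges(const std::vector<float>& map,
                             int w, int h, float threshold) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    std::vector<bool> edges(w * h, false);
    // Skip the one-pixel border so the 3x3 window stays in bounds.
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float sx = 0.0f, sy = 0.0f;
            for (int j = -1; j <= 1; ++j) {
                for (int i = -1; i <= 1; ++i) {
                    float v = map[(y + j) * w + (x + i)];
                    sx += gx[j + 1][i + 1] * v;
                    sy += gy[j + 1][i + 1] * v;
                }
            }
            // Mark an edge where the gradient magnitude is large.
            edges[y * w + x] = std::sqrt(sx * sx + sy * sy) > threshold;
        }
    }
    return edges;
}
```

Running the same filter over the normal-vector map (e.g. per component, or on the angle between neighboring normals) catches creases that the depth map alone misses.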
Depth Value and Normal Vector Maps
During raytracing, for each eye ray that collides with an object in the scene, our program records in separate arrays both the distance the ray traveled before its collision, i.e. the depth value, and the normal vector of the object surface at the point of collision. After all of the eye rays have been traced, the depth value map is normalized and converted to an 8-bit grayscale value by dividing each entry by the maximum depth value and multiplying by 255. The depth value map
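The depth-map normalization described above can be sketched as below. The function name is our illustration; it maps raw ray-travel distances to 8-bit grayscale exactly as the text describes (divide by the maximum depth, scale by 255):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Normalize a raw depth buffer to an 8-bit grayscale map: divide each
// entry by the maximum recorded depth and scale to [0, 255].
std::vector<uint8_t> depthToGrayscale(const std::vector<float>& depth) {
    std::vector<uint8_t> gray(depth.size(), 0);
    if (depth.empty()) return gray;
    float maxDepth = *std::max_element(depth.begin(), depth.end());
    if (maxDepth <= 0.0f) return gray; // avoid division by zero
    for (std::size_t i = 0; i < depth.size(); ++i)
        gray[i] = static_cast<uint8_t>((depth[i] / maxDepth) * 255.0f);
    return gray;
}
```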