Real-Time Screen-Space Reflections in OpenGL

Ian Lilley (lilleyia@seas.upenn.edu)
University of Pennsylvania

Abstract
This paper covers real-time screen-space reflections in OpenGL. Specifically, I discuss what screen-space techniques are and how they can be used to accelerate reflections and other visual phenomena. Along the way I also cover shadow mapping, deferred shading, and raytracing, all important components of my project.

Keywords: reflections, refractions, screen space, framebuffer, deferred shading, raytracing, shadow mapping

1 Introduction
Improvements in GPU performance have made it possible to create visual effects that were formerly reserved for offline renderers. Although these effects tend to be less accurate than those produced by commercial renderers, they have the advantage of running in real time. One effect that can add a lot of realism to dynamic scenes is reflections. For something that seems so simple, it was not until a few years ago that reflections became practical in real time. A description of reflections in a raytracer should reveal why: to reflect something in a raytracer, you shoot a ray from the eye and march it forward until it collides with some geometry, then reflect the ray off the object's surface and begin the whole process again. Reflection is a recursive algorithm, because achieving high visual fidelity means letting the ray bounce off several objects in a row, accumulating color values as it goes. Clearly, there are compromises to be made when doing this in real time. First, I will discuss various real-time reflection techniques. Next, I will give an overview of my approach and the technical challenges it created. Finally, I will explore potential future techniques for real-time reflections.
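Since shading languages do not allow recursion, the recursion is usually expressed as a loop. A minimal GLSL sketch of that loop follows; the Hit struct and intersectScene are hypothetical placeholders for whatever intersection routine the raytracer provides.

struct Hit { bool found; vec3 position; vec3 normal; vec3 color; float reflectivity; };
Hit intersectScene(vec3 origin, vec3 dir);  // provided elsewhere by the raytracer

vec3 traceReflections(vec3 origin, vec3 dir, int maxBounces)
{
    vec3 accumulated = vec3(0.0);
    float weight = 1.0;
    for (int i = 0; i < maxBounces; ++i) {
        Hit hit = intersectScene(origin, dir);
        if (!hit.found) break;
        accumulated += weight * hit.color;    // accumulate color at each bounce
        weight *= hit.reflectivity;           // later bounces contribute less
        dir = reflect(dir, hit.normal);       // bounce off the surface
        origin = hit.position + 0.001 * dir;  // nudge forward to avoid self-hits
    }
    return accumulated;
}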

2 Previous Work

One of the most common and earliest techniques for simulating reflections is to put a reflective object in the center of a cube map. A cube map is a six-sided texture that, for conceptual purposes, is infinitely large. The fragment shader simply takes the eye vector and reflects it off of the fragment's surface normal. Next, it finds the texture coordinate where the reflected ray intersects the cube map and draws that color onto the fragment. This approach creates a mostly realistic visual effect, but it cannot reflect arbitrary objects in a dynamic scene.
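The lookup is only a few lines of GLSL. The sketch below assumes world-space varyings and an environment cube map bound by the application; the names are illustrative.

#version 330 core

// Cube-map reflection: reflect the eye vector off the surface normal
// and sample the cube map in that direction.
uniform samplerCube uEnvironmentMap;
uniform vec3 uEyePositionWorld;

in vec3 vPositionWorld;
in vec3 vNormalWorld;
out vec4 fragColor;

void main()
{
    vec3 viewDir = normalize(vPositionWorld - uEyePositionWorld);
    vec3 reflected = reflect(viewDir, normalize(vNormalWorld));
    fragColor = texture(uEnvironmentMap, reflected);
}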

Another interesting approach uses billboard impostors to simulate reflected geometry. This technique involves projecting an object onto a texture and intersecting rays with that texture during the reflection process, akin to the cube map lookup above. It has obvious speed limitations, especially for scenes with numerous objects; see [Hayward 2008] for more on billboard impostors. The basic ray/impostor test is sketched below.

Finally, we arrive at screen-space reflection techniques, which have been rising in popularity over the past few years due to their speed. Companies like Crytek have done a good deal of research in this area [Crytek 2011]. This is the technique I focus on.
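A rough GLSL sketch of a single ray/impostor test in the spirit of [Hayward 2008], under the assumption that the impostor is a textured quad with a known center, normal, axes, and half-size (all names here are illustrative):

// Intersect the reflected ray with the billboard's plane, convert the
// hit point to texture coordinates, and sample the impostor texture.
vec4 sampleImpostor(vec3 rayOrigin, vec3 rayDir,
                    vec3 planeCenter, vec3 planeNormal,
                    vec3 right, vec3 up, vec2 halfSize,
                    sampler2D impostorTex)
{
    float denom = dot(rayDir, planeNormal);
    if (abs(denom) < 1e-5) return vec4(0.0);              // parallel: no hit
    float t = dot(planeCenter - rayOrigin, planeNormal) / denom;
    if (t < 0.0) return vec4(0.0);                        // plane is behind the ray
    vec3 local = rayOrigin + t * rayDir - planeCenter;
    vec2 uv = vec2(dot(local, right), dot(local, up)) / halfSize * 0.5 + 0.5;
    if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
        return vec4(0.0);                                 // missed the quad
    return texture(impostorTex, uv);                      // alpha marks coverage
}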

3 Approach

First, what is screen space? Screen space can be thought of as a 2D plane onto which OpenGL draws all of the fragments. It extends from [0,1] in the X and Y directions, and we can think of the Z direction as depth (also [0,1]). Getting into screen space from camera space takes three steps:

vec4 clipSpace = cameraToClipMatrix * vec4(cameraSpace, 1);
vec3 NDCSpace = clipSpace.xyz / clipSpace.w;
vec3 screenSpace = 0.5 * NDCSpace + 0.5;

The reflection vector is calculated by reflecting the view ray off of the fragment's surface normal and then converting two points on the resulting ray into screen space; their difference gives the screen-space reflection vector. Once we have that vector, we slowly march the ray forward until we detect a collision.
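Concretely, the construction looks roughly like this in GLSL (the camera-space inputs and helper names are assumptions, but cameraToClipMatrix is the same matrix used above):

uniform mat4 cameraToClipMatrix;

// Project a camera-space point into screen space, as in the three
// steps above.
vec3 toScreenSpace(vec3 cameraSpace)
{
    vec4 clipSpace = cameraToClipMatrix * vec4(cameraSpace, 1.0);
    vec3 NDCSpace = clipSpace.xyz / clipSpace.w;
    return 0.5 * NDCSpace + 0.5;
}

// Reflect the view ray and project two points on the reflected ray;
// their normalized difference is the screen-space reflection direction.
vec3 screenSpaceReflectionDir(vec3 positionCamera, vec3 normalCamera)
{
    vec3 viewDir = normalize(positionCamera);  // the eye sits at the origin
    vec3 reflected = reflect(viewDir, normalize(normalCamera));
    vec3 p0 = toScreenSpace(positionCamera);
    vec3 p1 = toScreenSpace(positionCamera + reflected);
    return normalize(p1 - p0);
}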

Before going any further, I need to mention the framebuffer. One popular rendering technique is called deferred shading. The main idea behind deferred shading is that on the first render pass you write positions, normals, colors, and other material properties to textures using a framebuffer object (FBO); nothing is drawn to the screen yet. Subsequent render passes can access these textures to perform special calculations. Deferred shading is great for scenes with complex lighting because it ensures we only perform shading operations on pixels that will end up being rendered; as a result, scene complexity makes almost no difference to render speed. Initially I did not have support for framebuffers, so I was constantly doing expensive calculations on objects that were not even being drawn.
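A minimal sketch of the first-pass fragment shader that fills such a set of textures (the attachment layout and names are illustrative, not necessarily those of my project):

#version 330 core

// First pass of deferred shading: write material properties to the
// FBO's color attachments instead of shading immediately.
in vec3 vPositionCamera;
in vec3 vNormalCamera;
in vec3 vDiffuseColor;

layout(location = 0) out vec4 gPosition;  // GL_COLOR_ATTACHMENT0
layout(location = 1) out vec4 gNormal;    // GL_COLOR_ATTACHMENT1
layout(location = 2) out vec4 gColor;     // GL_COLOR_ATTACHMENT2

void main()
{
    gPosition = vec4(vPositionCamera, 1.0);
    gNormal = vec4(normalize(vNormalCamera), 0.0);
    gColor = vec4(vDiffuseColor, 1.0);
}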
To detect a collision between the screen-space reflection ray and the rest of the scene, we compare the depth of the marching ray with the depth value saved in the texture from the framebuffer. If the scene's depth falls between the old and new positions of the ray, there has been an intersection. Unfortunately, it is not that easy: this screen-space raytracer is very inefficient if the ray step size is small, but making the step too big causes false collisions. Accordingly, I employ a technique called "linear + binary search" in my ray march: I start with a large step size, and once I detect a collision I move the ray backwards and decrease the step size, which makes the collision point more precise. Once the collision point is found, I take the color value stored in the framebuffer texture and draw it on the original fragment. Refractions follow in a similar manner; instead of reflecting the vector we refract it, which is just a matter of calling a different GLSL function (refract) and providing a refractive index for the new medium.
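Put together, the ray march looks roughly like the following sketch; the texture names and step constants are placeholders to be tuned, as discussed below.

uniform sampler2D uDepthTex;  // screen-space depth from the first pass
uniform sampler2D uColorTex;  // shaded color from the first pass

const int MAX_LINEAR_STEPS = 64;  // coarse march budget
const int MAX_REFINEMENTS = 8;    // binary-search budget
const float INITIAL_STEP = 0.02;  // large initial step size

// origin and dir are the screen-space ray start and direction
// (x and y in [0,1], z is depth in [0,1]).
vec4 traceScreenSpaceRay(vec3 origin, vec3 dir)
{
    vec3 pos = origin;
    float stepSize = INITIAL_STEP;
    for (int i = 0; i < MAX_LINEAR_STEPS; ++i) {
        pos += stepSize * dir;  // linear search: coarse steps
        if (any(lessThan(pos.xy, vec2(0.0))) ||
            any(greaterThan(pos.xy, vec2(1.0))))
            return vec4(0.0);   // the ray left the screen
        float sceneDepth = texture(uDepthTex, pos.xy).r;
        if (pos.z > sceneDepth) {  // the ray passed behind the scene surface
            for (int j = 0; j < MAX_REFINEMENTS; ++j) {
                stepSize *= 0.5;   // binary search: halve, step back or forward
                sceneDepth = texture(uDepthTex, pos.xy).r;
                pos += (pos.z > sceneDepth) ? -stepSize * dir : stepSize * dir;
            }
            return texture(uColorTex, pos.xy);  // reflected color
        }
    }
    return vec4(0.0);  // no collision found
}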

Now we have basic reflections and refractions. They should look fairly decent, but there may be some noise and trouble reflecting the back surfaces of objects. The main problem with screen-space techniques is that they do not account for any geometry occluded from the screen. In other words, if you have a ball in front of a mirror and you move the camera past the ball so it points at the mirror, you will not see the ball in the reflection: it is off-screen, so it cannot be found in any of the textures we generated from the FBO. Although this is a real disadvantage, it is also why screen space is fast; there is a fixed amount of information to read from.

Issues aside, there are a few small techniques I use to increase visual quality. First, I tune the amount by which the ray step decreases, the number of times the ray can be refined, and the initial step size; finding the right combination of values makes the simulation more efficient and less noisy. Next, I disallow collisions where the reflected ray and the surface normal point in the same direction. This situation occurs when looking down at an object standing on a reflective floor: the screen contains very little information about the bottom of the object, so the reflected ray passes through the bottom and collides with a fragment at the top. Since this is physically inaccurate, I check the dot product between the reflected ray and the surface normal; if it is greater than zero, the two vectors point the same way and I stop the ray march. I also fade the reflection as the angle between the camera and the surface normal approaches zero, which gives cleaner transitions when parts of an object never get written to screen space. Finally, I use a fuzzy reflection technique for softer reflections, jittering the initial position or direction of the ray by some roughness value, and I implemented shadow mapping to make everything look more grounded in reality.

In all, efficient ray marching, carefully selected constants, and framebuffers together yield decent, fast reflections. Although there are occasional artifacts and jagged lines, the overall effect is quite convincing and could be used in any interactive application that needs better visuals.
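The last three heuristics can be sketched as small GLSL helpers (the constants and the rand hash are illustrative):

// 1. Reject hits where the reflected ray and the hit surface's normal
//    point in the same direction.
bool invalidHit(vec3 reflectedDir, vec3 hitNormal)
{
    return dot(reflectedDir, hitNormal) > 0.0;
}

// 2. Fade the reflection as the angle between the vector toward the
//    camera and the surface normal approaches zero (dot approaches 1).
float reflectionFade(vec3 toCamera, vec3 normal)
{
    return 1.0 - smoothstep(0.8, 1.0, dot(normalize(toCamera), normalize(normal)));
}

// 3. Fuzzy reflections: jitter the ray direction by a roughness value.
float rand(vec2 co)  // a common one-line hash; any noise source works
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

vec3 jitterDirection(vec3 dir, vec2 seed, float roughness)
{
    vec3 noise = vec3(rand(seed), rand(seed + 1.0), rand(seed + 2.0)) - 0.5;
    return normalize(dir + roughness * noise);
}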

References
HAYWARD, K., 2008. Reflections with billboard impostors. http://graphicsrunner.blogspot.com/2008/04/reflections-with-billboard-impostors.html
LITHEON, 2012. Real-time local reflections. http://www.gamedev.net/blog/1323/entry-2254101-real-time-local-reflections/
CRYTEK, 2011. Advances in Real-Time Rendering in Games, SIGGRAPH 2011 course. http://advances.realtimerendering.com/s2011/index.html

4 Results

My program rarely drops below 60 fps on an NVIDIA GTX 460. It is slowest when the screen is full of intense physics interactions involving complicated meshes, and it commonly runs at around 100 fps with three or four reflective objects in the scene. Everything would be even faster if I removed the shadows and some other unrelated effects. Overall, my approach is fast enough to be used in most applications, though there is still room to make things more efficient and better looking. For a better idea of the results, watch the project video: [http://youtu.be/4tWV36s4gpY]. There are two other, older videos that show my improvement over time.

5 Future Work

I do not know if this is feasible for a couple of reasons, but I would like to try out a linked-list fragment structure for extremely accurate reflections. This would get rid of the ugly artifacts that appear when reflective objects cover the walls behind them. Unfortunately, this technique has high memory consumption and could be slow. More details on this technique can be found on Sean Lilley's blog: [http://gamerendering.blogspot.com/]. Any continued work on this project will likely go to: [http://ianlilleycis565.blogspot.com/].
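The idea, often called per-pixel linked lists or an A-buffer, stores every fragment rather than just the front-most one, so a later pass could ray-march through geometry that the front surfaces occlude. A rough sketch of the build pass in GLSL 4.3 follows; the buffer layout and names are illustrative.

#version 430 core

// Build pass for per-pixel linked lists: every fragment appends a node
// to its pixel's list. Color writes are assumed disabled via glColorMask
// during this pass, and uHeadPointers cleared each frame.
layout(binding = 0) uniform atomic_uint uNodeCounter;
layout(binding = 0, r32ui) coherent uniform uimage2D uHeadPointers;

struct Node { vec4 color; float depth; uint next; };
layout(std430, binding = 1) buffer NodeBuffer { Node nodes[]; };

in vec4 vColor;

void main()
{
    uint idx = atomicCounterIncrement(uNodeCounter);  // claim a fresh node
    // (a real version must bounds-check idx against the buffer size)
    uint prev = imageAtomicExchange(uHeadPointers, ivec2(gl_FragCoord.xy), idx);
    nodes[idx].color = vColor;
    nodes[idx].depth = gl_FragCoord.z;
    nodes[idx].next = prev;  // push-front onto this pixel's list
}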
