Procedural Clouds in Real Time
Figure 1A
The sky behind the clouds has some color to it: usually blue, though it can be black at night (see figure 1). Sunsets and sunrises have gradients that go from yellow/red to blue, as we'll see in a later photo.
Thin clouds are white. As they get thicker, they turn gray. This isn't just srcalpha/invsrcalpha transparency; rather, there are two things that have to be modeled: the amount the background is being obscured, and the amount of light the clouds emit in the direction of the viewer (which is light reflected/refracted from all directions).
Clouds are, for the most part, randomly turbulent. The shape of the patterns can change greatly, often with altitude: low-lying clouds tend to be thick and billowing, while higher-level clouds tend to be thin and uniform.
Clouds at lower altitudes tend to obscure light from above more than they reflect light from below, and are also usually thicker, and thus usually darker. Clouds at higher altitudes are nearly always whiter.
Figure 1B
As you turn toward the sun, the clouds tend to be brighter.
There are sometimes visible transitions in cloud patterns, such as along a weather front.
Atmospheric haze makes the sky and clouds in the distance fade out to a similar color.
Figure 1C
Clouds have thickness, and thus take on light. Those away from the sun tend to be brighter on one side and darker on the other.
Clouds at sunrise and sunset tend to reflect light from below more than transmit light from above.
At sunrise and sunset, more colors of the spectrum are reflected, thus the sky color tends to be a
gradient from blue to yellow or red, and the clouds tend to be lit with light of orange or red.
The sky's cloud layer isn't a plane (or a cube, for that matter), but rather a sphere. We just look at it from close to its circumference, and so often mistake it for a plane.
We're certainly a long way from modeling all of these things. Also, this doesn't begin to list the
observations we'd make if we were able to fly up, into and through the clouds. Limiting ourselves to a view
from the ground, we'll see how many we can model in a real time application later. First, some background
on some of the techniques we'll use.
Making Noise
If you've been reading this publication and others on a regular basis, you've undoubtedly heard talk of
procedural textures, and in particular, procedural textures based on Perlin noise. Perlin noise refers to the
technique devised by Ken Perlin of mimicking various natural phenomena by adding together noise
(random numbers) of different frequencies and amplitudes. The basic idea is to generate a bunch of random
numbers using a seeded random number generator (seeded in order to be able to reproduce the same results
given the same seed), do some stuff to them, and make them look like everything from smoke to marble
to wood grain. Sounds like it shouldn't work, but it does.
This is best illustrated with an example. Consider a rocky landscape. When looking at its altitude variation, low frequencies exist (rolling hills), as well as medium frequencies (boulders, rocks) and very high frequencies (pebbles). By creating a random pattern at each of these frequencies and specifying their amplitudes (e.g. mountains between 0 and 10,000 feet, boulders between 0 and 100 feet, pebbles under 2 inches), we can add them together to get the landscape. (See figure 2 for a one-dimensional example of this.)
Figure 2 - Waves of different frequencies and amplitudes being summed together. In this case the result is a regular function because the input functions are regular.
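The summation of octaves described above can be sketched in code. This is a minimal one-dimensional value-noise example, not the article's implementation; the function names are mine, and the integer-hash lattice noise is just one of many generators that would work.

```cpp
#include <cmath>
#include <cstdint>

// Seeded hash -> repeatable "random" value in [0,1) for integer lattice point x.
static float LatticeNoise(uint32_t seed, int x)
{
    uint32_t n = seed ^ ((uint32_t)x * 2654435761u);
    n ^= n >> 13; n *= 0x5bd1e995u; n ^= n >> 15;
    return (n & 0xFFFFFF) / 16777216.0f;
}

// Value noise at continuous position t: lerp between the two nearest lattice values.
static float ValueNoise(uint32_t seed, float t)
{
    int   i = (int)std::floor(t);
    float f = t - (float)i;
    return LatticeNoise(seed, i) * (1.0f - f) + LatticeNoise(seed, i + 1) * f;
}

// Sum several octaves: each octave doubles the frequency and halves the amplitude.
float FractalNoise1D(uint32_t seed, float t, int octaves)
{
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int o = 0; o < octaves; ++o)
    {
        sum       += amplitude * ValueNoise(seed + (uint32_t)o, t * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum; // stays in [0,1) for amplitudes 1/2 + 1/4 + ...
}
```

Extending the same loop to two dimensions (lattice points on a grid, bilinear instead of linear interpolation) gives the heightmap/cloud-thickness case discussed next.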
Taking the one dimensional example to two dimensions, we'd get a bitmap that could represent a
heightmap, or in our case cloud thickness. Taking it to three dimensions, we could either be representing a
volume texture, or the same two dimensional example animated over time.
Aside from the random number generator, the other thing necessary is a way to interpolate points between
sample values. Ideally, we'd want to use a cubic interpolation to get curves like those in the graph, but we
are going to use a simple linear interpolation. This won't look as good, but will let us use the hardware to do
it.
Summary of the procedural cloud technique
The idea behind the technique is to generate a number of octaves of noise (an octave being an interval
between frequencies having a 2:1 ratio) and combine them together to make some turbulent looking noise
that resembles smoke or clouds. Each of the octaves is updated at a specific rate, different for each octave,
and then smoothed. To generate the turbulent noise for a given frame, we interpolate between the different
updates for each octave, and then combine the different snapshots for each octave together to create the
turbulence. At that point, some texture blending tricks need to be done to clamp off some ranges of values,
and map it onto a sky dome, box, or whatever surface you are working on.
As I mentioned earlier, what intrigued me about the original software-rendered demo was that many of the steps involved (smoothing noise, interpolating between noise updates, combining octaves) carried a lot of per-pixel cost; cost that could instead be borne by the graphics card using alpha blending, bilinear filtering, and rendering to texture surfaces. A remaining question was whether the simple four-tap sampling of the bilinear filter would be adequate for the smoothing. I figured I'd attempt it, and as I think you'll see, the results are acceptable.
Background on rendering to texture surfaces
In the synopsis of the technique above, I mentioned rendering to texture surfaces, something possible on a lot of modern-day 3D hardware and exposed by the DirectX7 API. I wrote an article for gamasutra.com on this subject (a link is included at the end of this article) that goes into more detail, but I will summarize the technique here for those who haven't read it.
In order to render to a texture surface, one has to create a surface that can be both used as a render target
and as a texture (there's a DirectDraw surface caps flag for each). If the application is using a Z buffer, it
must attach one to the texture surface as well.
Then for each frame, the application does one BeginScene/EndScene pair per render-target. After rendering
to the texture and switching back to the back buffer, the application is free to use the texture in that scene.
I use this technique pretty extensively in this demo, but there's no reason it can't be done by rendering to the
back buffer and then blitting to a texture surface for later use. In fact, OpenGL doesn't expose the ability to
render to a texture, so you'll have to blit to the textures if this is your API of choice. This is also the workaround used on hardware that doesn't support render-to-texture.
Enough already! Let's render some clouds.
The technique involves a number of steps:
Generating the noise for a given octave at a point in time
Smoothing the noise
Interpolating the smoothed noise with the previous update, for the current frame
Compositing all the current update octaves into a single turbulent noise function for the
current frame
Doing some blending tricks to the composite noise to make it look a little more like clouds
Mapping the texture to the sky geometry
Generating the noise for a given octave at a point in time
Generating the noise is fairly simple. What we want is a simple seeded pseudo-random number generator to produce 'salt & pepper' noise, one that yields the same result for a given seed.
We need to generate noise at several different frequencies. I chose to do four, though fewer might suffice on lower-end systems. By representing the four octaves as four textures of different resolutions (say 32x32
through 256x256) which eventually will all be upsampled to the same size (by mapping them to textures of
larger sizes), I can achieve the desired result automatically. The bilinear filtering does the interpolation for
me, and a small size texture just ends up being a lower frequency than a larger size texture. A cubic filter
would be better at approximating the curve I should get, but the results from the bilinear are acceptable.
The noise textures are updated at different periods, with the lowest frequency, stored in the 32x32 texture,
being updated the least frequently (I used an interval of 7 seconds). The higher the frequency, the more
frequently it is updated. This makes sense if you think about it. The lowest frequency represents large,
general cloud formations which change infrequently. The highest frequency represents small, wispy bits in
the clouds which change more rapidly. You can see this in action if you ever see time-lapse photography of
clouds.
I used frequencies that were multiples of two, both for the sake of simplicity and because the results were pretty good. However, it's not necessary to do so; interesting results can be achieved by using frequency combinations other than multiples of two times the input frequency.
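To make the per-octave update rates concrete: the 7-second interval for the lowest frequency comes from the text above, but halving the period for each higher octave is my assumption, one simple choice consistent with the frequencies being multiples of two.

```cpp
// Hypothetical schedule: each octave updates twice as often as the one below it.
// Octave 0 is the lowest frequency (the 32x32 texture).
float UpdatePeriodSeconds(int octave)
{
    const float basePeriod = 7.0f;             // lowest-frequency texture: every 7 s
    return basePeriod / (float)(1 << octave);  // higher octaves update more often
}
```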
Figure 3: The source noise texture for a single octave (32x32), a version that has been smoothed, and the
smoothed version upsampled to a 256x256 texture with filtering.
Interpolating the smoothed noise with the previous update
As we are periodically updating each octave of noise (as opposed to recreating it per frame), we need to
interpolate between updates to come up with a texture representing the noise at the current time. My
implementation uses a simple linear interpolation according to how much time has elapsed between the last
update and the next one. A cubic interpolation might give a better result, but would probably be overkill.
The formula for interpolating between the two updates is simply:
Interpolant = TimeSinceLastUpdate/UpdatePeriod
CurrentOctave = PreviousOctave*(1-Interpolant) + LatestOctave*Interpolant
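The two formulas above translate directly to code. In the demo this blend would be done per-texel by the hardware's alpha blending; the CPU loop below is just an illustration, with names of my choosing.

```cpp
#include <cstddef>
#include <vector>

// Blend the previous and latest noise updates for one octave into the texture
// used this frame. interpolant = timeSinceLastUpdate / updatePeriod, in [0,1].
void InterpolateOctave(const std::vector<float>& previousOctave,
                       const std::vector<float>& latestOctave,
                       float interpolant,
                       std::vector<float>& currentOctave)
{
    currentOctave.resize(previousOctave.size());
    for (size_t i = 0; i < previousOctave.size(); ++i)
        currentOctave[i] = previousOctave[i] * (1.0f - interpolant)
                         + latestOctave[i]   * interpolant;
}
```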
Compositing all the octaves into a single turbulent noise function
As I mentioned earlier, the lower frequencies represent the larger shapes and patterns that occur in the
clouds. It would follow then, that these are also more significant in their contribution to the cloud texture's
luminance and color.
As I was using a series of frequencies that are multiples of two, I chose to make their contributions powers of two as well, such that each octave's contribution was half that of the next-lowest frequency. This can be expressed as:
Color = 1/2 Octave0 + 1/4 Octave1 + 1/8 Octave2 + 1/16 Octave3 + ...
This was easy to code and produced good results. However, the weighting could certainly be changed to
change the look of the end result.
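A sketch of this weighted sum, assuming all octaves have already been upsampled to a common resolution (in the demo the combining is done with texture blending rather than a CPU loop):

```cpp
#include <cstddef>
#include <vector>

// Sum the octaves, lowest frequency first, each weighted half as much as the
// previous one: 1/2, 1/4, 1/8, ... All octaves must share one resolution.
void CompositeOctaves(const std::vector<std::vector<float>>& octaves,
                      std::vector<float>& turbulence)
{
    turbulence.assign(octaves[0].size(), 0.0f);
    float weight = 0.5f;
    for (const auto& octave : octaves)
    {
        for (size_t i = 0; i < octave.size(); ++i)
            turbulence[i] += weight * octave[i];
        weight *= 0.5f;
    }
}
```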
Figure 4 shows the interpolation and compositing steps.
working in the 0 to 1 range works out in our favor). By having our final color equal to some factor minus
the cloud texture squared, we get a roughly similar end result to the exponential function (the color is
inverted, but that won't matter since the noise is fairly evenly distributed). Varying the intensity of the
factor color lets us vary how isolated the clouds are. This is shown in figure 6.
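The blending trick above can be sketched as a per-texel function: the final shade is some factor minus the squared noise value, clamped to the 0-to-1 range. The name and the explicit clamp are mine; the demo performs this with texture-blending stages rather than on the CPU.

```cpp
#include <algorithm>

// Approximate an exponential falloff with factor - noise^2, clamped to [0,1].
// Larger 'factor' -> fewer, more isolated clouds (per the discussion above).
float CloudShade(float noise, float factor)
{
    float c = factor - noise * noise;
    return std::min(std::max(c, 0.0f), 1.0f);
}
```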
Another problem is the fact that the dynamic range of textures is limited to the 0 to 1 range. This makes it
difficult when dealing with anything that gets amplified over multiple stages. For example, the higher
frequency noise textures ended up only contributing one or two bits to the end result due to this problem.
Making it Scale
If you're writing games for the PC platform, being able to implement a snazzy effect isn't enough. You also
have to be able to make it scale down on basic machines or on video cards with less available memory.
Fortunately, this procedural cloud technique lends itself well to scaling in several respects.
On the lowest-end systems, you can either use a static texture or generate the texture only once. Other areas for scalability include updating the noise less frequently, using fewer octaves, or using lower-resolution or lower color-depth textures.
Doing it Yourself
Generating procedural clouds can allow for unique skies that change over time and with other factors in the
environment. This can improve the look of skies over static cloud textures that never change with time.
Additionally, they can save on storage, or download size for Internet applications.
Hopefully, some of the techniques presented here will allow you to implement similar things in your
applications.
Additional Resources
Texturing and Modeling: A Procedural Approach, second edition. David S. Ebert, editor. AP Professional, 1994. ISBN 0-12-228730-4
Hugo Elias & Matt Fairclough's procedural cloud demo http://freespace.virgin.net/hugo.elias/models/m_clouds.htm
Haim Barad's Procedural Texture Using MMX article http://www.gamasutra.com/features/programming/19980501/mmxtexturing_01.htm
Ken Perlin's Noise Machine website - http://www.noisemachine.com/
Kim Pallister's article, Rendering to Texture Surfaces Using DirectX7
http://www.gamasutra.com/features/19991112/pallister_01.htm