
# The Fractal Flame Algorithm

Scott Draves
Spotworks, NYC, USA
Erik Reckase
Berthoud, CO, USA
September 2003, Last revised November 2008
Abstract
The Fractal Flame algorithm is a member of the Iterated Function
System (IFS) class of fractal algorithms. A two-dimensional IFS creates
images by plotting the output of a chaotic attractor directly on the image
plane. The fractal flame algorithm is distinguished by three innovations over textbook IFS: non-linear functions, log-density display, and structural coloring. In combination with standard techniques of anti-aliasing and motion blur the result is striking image variety and quality.
The guiding principle of the design of the algorithm is to expose and
preserve as much of the information content of the attractor as possible.
We found that preserving information maximizes aesthetics.
1 Overview
Some examples appear in Figure 1. The paper begins by defining classic, linear iterated function systems and hence grounding our notation and terminology. The classic formulation is then extended with non-linear variations in Section 3, with post transforms in Section 3.1, and with final transforms in Section 3.2. Section 4 describes how log-density display works and its importance, and Section 5 covers the coloring algorithm. These three sections cover the core innovations. Sections 6 and 8 explain other important properties of the implementation, and Section 7 shows how to create symmetric flames. The appendix is a catalog of the variations including formulas and examples.
2 Classic Iterated Function Systems
A two-dimensional Iterated Function System (IFS) is a finite collection of n functions $F_i$ from $\mathbb{R}^2$ to $\mathbb{R}^2$. The solution of the system is the set S in $\mathbb{R}^2$ (and hence an image) that is the fixed point of Hutchinson's recursive set equation [3]:

$$S = \bigcup_{i=0}^{n-1} F_i(S)$$
Figure 1: Example fractal flame images. The names of these flames are: a) 206, b) 191, c) 4000, and d) 29140. These images were selected for their aesthetic properties.
Figure 2: Sierpinski's Gasket, a simple IFS. x and y increase towards the lower
right. The recursive structure is visible as the whole image is made up of three
copies of itself, one for each function.
As implemented and popularized by Barnsley [1] the functions $F_i$ are linear (technically they are affine, as each is a two-by-three matrix capable of expressing scale, rotation, translation, and shear):

$$F_i(x, y) = (a_i x + b_i y + c_i,\; d_i x + e_i y + f_i)$$
For example, if the functions are

$$F_0(x, y) = \left(\frac{x}{2}, \frac{y}{2}\right) \qquad F_1(x, y) = \left(\frac{x+1}{2}, \frac{y}{2}\right) \qquad F_2(x, y) = \left(\frac{x}{2}, \frac{y+1}{2}\right)$$

then the fixed point S is Sierpinski's Gasket, as seen in Figure 2.
In order to facilitate the proofs and guarantee convergence of the algorithms,
the functions are normally constrained to be contractive, that is, to bring points
closer together. In fact, the normal algorithm works under the much weaker
condition that the whole system is contractive on average. Useful guarantees
of this become diﬃcult to provide when the functions are non-linear. Instead
we recommend using a numerically robust implementation, and simply accept
that some parameter sets result in better images than others, and some result
in degenerate images.
The normal algorithm for solving for S is called the chaos game. In pseudocode it is:

(x, y) = a random point in the bi-unit square
iterate {
    i = a random integer from 0 to n − 1 inclusive
    (x, y) = F_i(x, y)
    plot(x, y) except during the first 20 iterations
}
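As a concrete sketch, the chaos game for the three Sierpinski functions above can be written in Python (the function names and sample count here are illustrative, not part of the algorithm):

```python
import random

# The three affine functions from the Sierpinski example above.
def f0(x, y): return (x / 2, y / 2)
def f1(x, y): return ((x + 1) / 2, y / 2)
def f2(x, y): return (x / 2, (y + 1) / 2)

def chaos_game(functions, samples, skip=20):
    """Iterate a randomly chosen function, discarding the first `skip`
    points while the orbit converges onto the attractor."""
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)  # bi-unit square
    points = []
    for k in range(samples):
        x, y = random.choice(functions)(x, y)
        if k >= skip:
            points.append((x, y))
    return points

pts = chaos_game([f0, f1, f2], 10000)
```

Plotting `pts` as dots reproduces Figure 2's gasket.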
The bi-unit square is the set of points where x and y are both in [−1, 1]. The chaos game works because if $(x, y) \in S$ then $F_i(x, y) \in S$ too. Though we start out with a random point, because the functions are on average contractive the
distance between the solution set and the point decreases exponentially. After 20 iterations with a typical contraction factor of 0.5 the point will be within $10^{-6}$ of the solution, much less than a pixel's width. Every point of the solution set will be generated eventually because an infinite string of random symbols (the choices for i) contains every finite substring of symbols. This is explained more formally in Section 4.8 of [1].
The algorithm gives no sufficient number of iterations. Because the chaos game operates by stochastic sampling, the more iterations one makes the closer the result will be to the exact solution. The judgement of how close is close enough remains with the user.
In fractal ﬂames, the number of samples is speciﬁed with the more abstract
parameter quality, or samples per output pixel. That way the image quality
(in the sense of lack of noise) remains constant in face of changes to the image
resolution and the camera.
It is useful to be able to weight the functions so they are not chosen with equal frequency in line 3 of the chaos game. We assign a weight, or relative probability $w_i$, to each function $F_i$. This allows interpolation between function systems with different numbers of functions: the additional functions can be phased in by starting their weights at zero. Differently weighted functions are also necessary to draw symmetric flames as shown in Section 7.
Some implementations of the chaos game make each function's weight proportional to its contraction factor. When drawing one-bit-per-pixel images this has the advantage of converging faster than equal weighting because it avoids redrawing pixels. But in our context, with multiple bits per pixel and non-linear functions (where the contraction factor is not constant), this device is best avoided.
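A minimal sketch of weighted selection, assuming hypothetical weights for a three-function system whose third function is still phased out at weight zero:

```python
import random

# Hypothetical weights w_i; the third function is being "phased in"
# during interpolation and still has weight zero.
weights = [1.0, 2.0, 0.0]

def pick_function(weights):
    """Pick an index i with probability proportional to w_i."""
    return random.choices(range(len(weights)), weights=weights)[0]

counts = [0, 0, 0]
for _ in range(30000):
    counts[pick_function(weights)] += 1
```

After the loop, `counts` is close to the 1:2:0 ratio of the weights.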
3 Variations
We now generalize this algorithm. The first step is to use a larger class of functions than just affine functions. Start by composing a non-linear function $V_j$ from $\mathbb{R}^2$ to $\mathbb{R}^2$ with the affine functions:

$$F_i(x, y) = V_j(a_i x + b_i y + c_i,\; d_i x + e_i y + f_i)$$
We call each such function $V_j$ a variation, and each one changes the shape and character of the solution in a recognizable way. The initial variations were simple remappings of the plane. They were followed by dependent variations, where the coefficients of the affine transform define the behaviour of the variation. More recently, parametric variations have been introduced, which are variations controlled by additional parameters independent of the affine transform.
Appendix A documents 49 variations. Variation 0 is the identity function.
It and six more examples are:
$$\begin{aligned}
V_0(x, y) &= (x, y) && \text{linear} \\
V_1(x, y) &= (\sin x,\; \sin y) && \text{sinusoidal} \\
V_2(x, y) &= \tfrac{1}{r^2} \cdot (x, y) && \text{spherical} \\
V_3(x, y) &= \left(x \sin(r^2) - y \cos(r^2),\; x \cos(r^2) + y \sin(r^2)\right) && \text{swirl} \\
V_4(x, y) &= \tfrac{1}{r} \cdot \left((x - y)(x + y),\; 2xy\right) && \text{horseshoe} \\
V_{17}(x, y) &= (x + c \sin(\tan 3y),\; y + f \sin(\tan 3x)) && \text{popcorn} \\
V_{24}(x, y) &= (\sin(p_1 y) - \cos(p_2 x),\; \sin(p_3 x) - \cos(p_4 y)) && \text{pdj}
\end{aligned}$$

where

$$r = \sqrt{x^2 + y^2}$$
An example of a dependent variation is Popcorn, $V_{17}$, which is dependent on the c and f coefficients of the affine transform.
The PDJ variation, $V_{24}$, is an example of a parametric variation. PDJ relies on four external parameters ($p_1$, $p_2$, $p_3$, $p_4$) to fully characterize its behaviour.
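As a sketch, the sinusoidal, spherical, and pdj variations above translate directly into Python (function names are ours; $r^2$ is computed inline rather than passed in):

```python
import math

def sinusoidal(x, y):
    """V1: remap each coordinate through sine."""
    return (math.sin(x), math.sin(y))

def spherical(x, y):
    """V2: scale by 1/r^2 where r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    return (x / r2, y / r2)

def pdj(x, y, p1, p2, p3, p4):
    """V24: parametric, controlled by four external parameters."""
    return (math.sin(p1 * y) - math.cos(p2 * x),
            math.sin(p3 * x) - math.cos(p4 * y))
```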
Variations can be further generalized by replacing the integer parameter j with a blending vector $v_{ij}$ with one coefficient per variation. Then

$$F_i(x, y) = \sum_j v_{ij}\, V_j(a_i x + b_i y + c_i,\; d_i x + e_i y + f_i)$$
With this generalization, if there are n functions, then there are 87n parameters that specify the system. The parameters consist of 49n variational coefficients $v_{ij}$, 31n parametric coefficients, 6n matrix coefficients $a_i$ through $f_i$, and n weights $w_i$. Significantly fewer parameters are required, however, if unused parameters are assumed to be 0.
3.1 Post Transforms
To this point, applying a transform to a set of coordinates involves first applying an affine transformation to the coordinates, and then applying a non-linear variation function to the result of the affine transformation. We further generalize this with the addition of a secondary affine transformation, a post transform, to be applied after the non-linear function. This provides the ability to change the coordinate systems of the variations. If the post transform is

$$P_i(x, y) = (\alpha_i x + \beta_i y + \gamma_i,\; \delta_i x + \epsilon_i y + \zeta_i)$$
then we redefine the $F_i$ as follows:

$$F_i(x, y) = P_i\Big(\sum_j v_{ij}\, V_j(a_i x + b_i y + c_i,\; d_i x + e_i y + f_i)\Big)$$
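Combining the pieces so far, one $F_i$ (pre-affine, blended variations, post transform) can be sketched as below; the `(pre, post, blend)` parameter layout is our own illustrative encoding, not the reference implementation's:

```python
import math

def apply_affine(coef, x, y):
    """coef = (a, b, c, d, e, f), the six coefficients of an affine map."""
    a, b, c, d, e, f = coef
    return (a * x + b * y + c, d * x + e * y + f)

def transform(x, y, pre, post, blend):
    """One F_i: pre-affine, then a weighted sum of variations (the
    blending vector v_ij), then the post transform P_i."""
    px, py = apply_affine(pre, x, y)
    vx = sum(v * fn(px, py)[0] for v, fn in blend)
    vy = sum(v * fn(px, py)[1] for v, fn in blend)
    return apply_affine(post, vx, vy)

# Identity pre/post transforms, blending linear with sinusoidal 50/50.
identity = (1, 0, 0, 0, 1, 0)
blend = [(0.5, lambda x, y: (x, y)),
         (0.5, lambda x, y: (math.sin(x), math.sin(y)))]
```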
3.2 Final Transforms
We can now introduce the concept of a final transform, which is an additional function $F_{\text{final}}(x, y)$ that is always applied regardless of the value of i in the iteration loop. The final transform is like a non-linear camera. The result of the application of the final transform function is not 'in the loop', and there can be only one final transform per flame. The final transform can have a post transform associated with it. In pseudocode, we now have:

(x, y) = a random point in the bi-unit square
iterate {
    i = a random integer from 0 to n − 1 inclusive
    (x, y) = F_i(x, y)
    (x_f, y_f) = F_final(x, y)
    plot(x_f, y_f) except during the first 20 iterations
}
Adding the post transforms to the earlier parameter count, we now have 93n
parameters necessary to fully specify n functions. If a ﬁnal transform is desired,
then the total number of parameters necessary is 93(n + 1).
4 Log-Density Display
The chaos game produces a series of (x, y) points which are plotted on the image
plane. The collection of these points approximates the solution S to the iterated
function system. S is a subset of the plane, and hence membership is a binary
function, and the image is therefore black and white, lacking even shades of
gray. See Figure 3a for an example.
Information is lost every time we plot a point that has already been plotted.
A more interesting image can be produced if we render a histogram of the
chaotic process, that is, increment a counter at each pixel instead of merely
plotting. These counters can be visualized by mapping them to shades of gray
or by using a color-map that represents diﬀerent densities (larger count values
are more dense) with diﬀerent colors. A linear mapping of counters to gray
values results in Figure 3b.
The result is unsatisfying because of the large dynamic range of densities.
Like many natural systems, the densities are often distributed according to a
power law (frequency is proportional to an exponent of the value). Figure 4 has
two histograms demonstrating this. The densest points are much denser than
the average density, hence with a linear map most of the image is very dark,
and information is lost.
The ﬂame algorithm addresses the problem by using a logarithmic map from
density to brightness. See Figure 3c for the result. The logarithm allows one
to visually distinguish between, for example, densities of 3,000 and 5,000 in one
part of the image and 30 and 50 in another part.
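A minimal sketch of histogram accumulation with log-density display, assuming points over the bi-unit square and a hypothetical normalization by the peak density:

```python
import math

def render(points, size):
    """Accumulate a histogram of chaos-game points in [-1,1]^2, then map
    densities to brightness with a logarithm (log-density display)."""
    hist = [[0] * size for _ in range(size)]
    for x, y in points:
        if -1 <= x < 1 and -1 <= y < 1:
            i = int((x + 1) / 2 * size)
            j = int((y + 1) / 2 * size)
            hist[j][i] += 1
    peak = max(max(row) for row in hist) or 1
    # Brightness in [0,1]: log of the count, normalized by the peak density.
    return [[math.log(1 + c) / math.log(1 + peak) for c in row]
            for row in hist]

img = render([(-0.5, -0.5)] * 99 + [(0.5, 0.5)], 2)
```

Even with a 99:1 density ratio, the sparse cell keeps a visible brightness under the log map, where a linear map would render it nearly black.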
Figure 3: Successive refinements of the rendering technique starting with a) binary membership, then b) linear, c) logarithmic, d) with color, e) with gamma factor, and finally f) with vibrant colors. The parameters to create this image are given in Appendix B.
Figure 4: Plots showing that the distribution of densities in an IFS follows a power law. The density (on the horizontal axis) is the number of hits by the system in a pixel; the frequency (on the vertical axis) is the number of pixels with that density (or up to the next power of two). One line is from the image in Figure 3; the other is the composite of 19 old, favorite systems including Figure 3. In each case, after a plateau the graph is nearly a straight line in log-log space. This is the definitive characteristic of a power law distribution. Each image was computed with 9.2e6 samples on a 900x900 grid.
The display of high dynamic range images like these by tone mapping is
studied in computer graphics [4]. The logarithm used here (combined with the
gamma factor, described below) is just an ad-hoc tone-map.
This log-density mapping is the source of a 3D illusion. At first sight people often guess that fractal flames are rendered in 3D, but as just described the algorithm works strictly with a 2D buffer. However, where one branch of the fractal crosses another, one may appear to occlude the other if their densities are different enough, because the lesser density is inconsequential in sum. For example, branches of densities 1000 and 100 might have brightnesses of 30 and 20. Where they cross the density is 1100, whose brightness is 30.4, which is hardly distinguishable from 30.
5 Coloring
There is more information to be wrung from the attractor. In particular, which
parts of the attractor come from which functions? The ﬂame algorithm uses
color to convey this. The result is a substantial aesthetic improvement. Color
could be assigned according to the density map and although the result is increased visibility of the densities relative to grayscale, the internal structure of
the fractal remains opaque. Furthermore, for animation the eye prefers that the
color of each part of the attractor remain unchanged over time, otherwise the
illusion of an object in motion is compromised. The fractal ﬂame algorithm uses
an original means to accomplish this: adding a third coordinate to the iteration.
Naturally we want to use a palette or color-map which we deﬁne as a function
from [0,1] to (r,g,b) where r, g, and b are in [0,1]. A palette is classically speciﬁed
with an array of 256 triples of bytes.
To achieve this we assign a color $c_i$ to each function $F_i$ (and a corresponding $c_{\text{final}}$ if a final transform is present) and add an independent coordinate to the chaos game:

(x, y) = a random point in the bi-unit square
c = a random point in [0, 1]
iterate {
    i = a random integer from 0 to n − 1 inclusive
    (x, y) = F_i(x, y)
    c = (c + c_i)/2
    (x_f, y_f) = F_final(x, y)
    c_f = (c + c_final)/2
    plot(x_f, y_f, c_f) except during the first 20 iterations
}
This has the important property that the most recently applied function
makes the largest diﬀerence in the color index and also in spatial location.
Indices make less diﬀerence as they recede in time. Hence colors are continuous
in the ﬁnal image.
Color plotting is naturally implemented by keeping three counters per pixel instead of incrementing a single density counter. That is not enough information for proper log-density display, however. Taking the logarithm of each channel independently grossly alters the colors. Instead one must add a fourth channel of so-called alpha (α), or transparency values.
So then to plot a point the color is added to the three color channels, and 1 is added to the alpha channel. After all the samples have been accumulated each channel is scaled by $\log\alpha/\alpha$. See Figure 3d for the result. The resulting alpha values can be output with the image if the file format supports them, or they can be used for compositing the fractal with a background immediately.
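The per-pixel scaling just described can be sketched as follows (the function name is illustrative):

```python
import math

def log_scale_pixel(r, g, b, alpha):
    """Scale accumulated color channels by log(alpha)/alpha so the pixel
    keeps its hue while its brightness follows the log of its density."""
    if alpha <= 0:
        return (0.0, 0.0, 0.0)
    s = math.log(alpha) / alpha
    return (r * s, g * s, b * s)
```

Because all three channels are scaled by the same factor, their ratios, and hence the plotted color, are preserved.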
6 The Gamma Factor
Accurate display of any digital image on a Cathode Ray Tube (CRT) requires
gamma correction to account for the non-linear response of screen phosphors to
voltage. Without correction the darker parts of an image appear too dark. If
the brightness b of a pixel is a value between 0 and 1, the corrected brightness is simply $b_{\text{corrected}} = b^{1/\gamma}$.
But depending on the specific image, gamma values as large as 4 improve visibility of fine structure in the attractor. Large gamma values also substantially increase the visible noise, and so require longer rendering times to compensate.
The parameter is therefore left to the user to set according to taste and circumstance. See Figure 3e for the result of applying gamma 4 to our running example.
Because the red, green, and blue phosphors respond independently, gamma
correction is normally applied to each color channel independently. However
if the gamma value is unnaturally large this has the eﬀect of washing out the
colors of the image (it becomes pastel or ghostly oﬀ-white). This is because
saturated colors occur due to a large difference between channel values. But
gamma correction boosts any small values towards one, leaving less diﬀerence,
and hence less saturation.
While one may desire this effect, to preserve bright colors the gamma correction may be applied the same way the logarithm is: by calculating a scale factor on the alpha channel, and then applying it to the three color channels. The parameter that selects this is called vibrancy, and it can take any value from 0, meaning to apply gamma to each channel independently, to 1, meaning to apply gamma from the alpha channel to each channel.
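A sketch of gamma correction with a vibrancy blend, under the assumption that vibrancy linearly interpolates between the two schemes (the exact blending in the reference implementation may differ):

```python
def gamma_correct(r, g, b, alpha_brightness, gamma, vibrancy):
    """Blend per-channel gamma (vibrancy = 0) with gamma derived from the
    alpha channel's brightness (vibrancy = 1), preserving saturation."""
    inv = 1.0 / gamma
    # Scale factor taken from the alpha channel's brightness.
    if alpha_brightness > 0:
        alpha_scale = alpha_brightness ** inv / alpha_brightness
    else:
        alpha_scale = 0.0
    def channel(c):
        independent = c ** inv          # washes out saturated colors
        from_alpha = c * alpha_scale    # keeps channel ratios intact
        return vibrancy * from_alpha + (1 - vibrancy) * independent
    return (channel(r), channel(g), channel(b))
```

With a saturated pixel and a large gamma, the vibrancy = 1 result keeps the dim channels dim, while vibrancy = 0 boosts them towards one.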
7 Symmetry
The human mind responds to symmetric designs at a fundamental level. When
the matrix coeﬃcients are chosen at random, the chances of a symmetric design
appearing are vanishingly small. We can easily inject such functions intentionally, however. The result appears in Figure 5.
There are two kinds of symmetries: rotational and dihedral. First we cover
rotations. Adding a function to the system that rotates by 180 degrees makes a
2-way rotational symmetry appear. The weight of this function should be equal
to the sum of the weights of all other functions in the system. That way half
of the jumps in the chaos game are between the two halves, and hence the two
halves will have equal density. If the rotation function is given the same weight
as the other functions, then one of the two halves will only be a shadow of the
other.
Adding a rotation by 120 degrees makes a 3-way symmetry appear.
The three branches do not have equal density however, and no weighting would
balance them. That’s because in order to get into the 240-degree branch in the
chaos game, one has to pick the rotational function twice in a row, which is only
25% probable, but the 120-degree branch is 50% probable. Instead one must
introduce two transforms, one by 120 degrees and one by 240, and give them
both weight equal to the sum of the others. Then all three branches will have
the same probability. In general, to produce n-way symmetry, n-1 additional
transforms are necessary to balance the densities.
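The weighting rule for n-way rotational symmetry can be sketched as follows (the helper name is ours):

```python
import math

def add_rotational_symmetry(functions, weights, n):
    """Add n-1 rotation functions, one for each multiple of 360/n degrees,
    each weighted equal to the sum of the original weights so that all
    n branches receive the same density."""
    total = sum(weights)
    out_fns, out_wts = list(functions), list(weights)
    for k in range(1, n):
        a = 2 * math.pi * k / n
        ca, sa = math.cos(a), math.sin(a)
        # Bind ca/sa per-iteration via default arguments.
        out_fns.append(lambda x, y, ca=ca, sa=sa: (ca * x - sa * y,
                                                   sa * x + ca * y))
        out_wts.append(total)
    return out_fns, out_wts
```

For n = 2 this reduces to the single 180-degree rotation carrying half the total weight, as described above.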
A dihedral symmetry is created by adding a function that inverts the x coor-
dinate only (producing bilateral symmetry). Again it is given weight equal to the
sum of all the other weights combined. Combining this function with rotation
functions gives all the dihedral symmetries. Dihedral symmetries are named
with negative integers, so the simple bilateral symmetry is -1, and snowﬂake
symmetry is -6. This just follows the isomorphism between multiplication on
integers and composition of symmetries.
It is also possible to introduce symmetries by modifying the chaos game itself: one can just add an integer symmetry parameter, and then after picking a function at random, pick a rotation at random. But then how to interpolate between symmetries without discontinuities is problematic.
Getting good colors with the symmetry functions requires special treatment.
The problem is the symmetry functions can bring a point back onto itself after
two or more applications. If the color is modiﬁed by these functions and then
plotted at its original location, diﬀerent colors get averaged, and the image loses
color diversity. The solution is to not change the color coordinate when applying
symmetry functions. This also makes the colors symmetric as well as the shape.
8 Filtering
Aliasing in spatial and temporal directions is visually disturbing and also indicates information loss because one cannot tell if an artifact in the image is an original or an alias. With anti-aliasing, there is no ambiguity.
The chaos game lends itself to anti-aliasing. Consider spatial aliasing first, that is, the elimination of jagged edges. The normal technique is to draw
Figure 5: Examples of symmetry. Image d shows how the colors wash out without special treatment of the color coordinate for symmetry transforms.
the desired image at high resolution, and then filter it down to display resolution. This is known as supersampling, and its cost is linear in time and memory (though the sampling is normally applied in two dimensions, so 3 by 3 supersampling means 9x). With the chaos game however we can achieve this effect
by just increasing the number of buckets used in the histogram without increasing the number of iterations. There is a small, sublinear cost in time though because the time spent filtering is significant, and the increased memory usage also means increased cache misses during iteration. The effect on visual quality, however, is dramatic. Gamma correction should be done at this filtering step, when the maximum number of bits of precision is still available.
Despite this ﬁltering, the logarithm and gamma factor may cause low-density
parts of an image to appear dotted or noisy. A wider ﬁlter would solve this, but
at the expense of making the well-sampled parts of the image blurry. This can
be addressed with a form of Density Estimation [5]. We have implemented a
dynamic ﬁlter where a blur kernel of width inversely proportional to the density
of points in the histogram is applied to the samples [6]. The blur kernel is scaled
based on the supersampling level, and is applied after the log density scaling.
This variable width ﬁlter allows higher density areas to remain in focus, while
lower density areas are signiﬁcantly smoothed. Figure 6 illustrates the eﬀect of
density estimation.
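The variable kernel width can be sketched as below, with hypothetical constants (the paper does not specify the exact mapping):

```python
def kernel_width(density, max_width=9.0, min_width=1.0):
    """Density-estimation filter sketch: blur kernel width inversely
    proportional to local density. The constants are illustrative,
    not the reference implementation's."""
    if density <= 0:
        return max_width
    return max(min_width, min(max_width, max_width / density))

# Sparse buckets get wide kernels (heavy smoothing); dense buckets stay sharp.
row = [0, 1, 3, 100]
widths = [kernel_width(d) for d in row]
```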
Figure 6: Demonstration of density estimation: (a) a low-resolution, low-quality, zoomed image rendered without density estimation, (b) with density estimation, and (c) a high quality render at high resolution with density estimation.
9 Motion Blur
Motion blur, or temporal anti-aliasing, is not so easy to do correctly. Supersampling can be achieved by varying the parameters over time while running the chaos game. That is, if 5M samples are used to draw the frame at time t, instead use 1M samples at time t−0.5, 1M samples at t−0.25, 1M at t, 1M at t+0.25 and 1M at t+0.5, all the while accumulating into the same buffer. That would be 5x supersampling for free! With linear density display, this would work exactly.
The non-linearity of the logarithm complicates things. Consider a pixel
with density 8. Assume for simplicity the logarithm is base 2, so its assigned
brightness is 3. Now put the fractal in motion so it blurs across two pixels, each
should get brightness 1.5. But if the motion takes place before the logarithm
then the 8 samples would be divided into two pixels of density 4 each, whose
logarithm is 2, not 1.5. Objects in motion would appear unnaturally bright.
A proper solution requires the use of an extra buﬀer: the ﬁrst buﬀer is
linear and accumulates the histogram. After each temporal sample, take the
logarithm of this buﬀer and accumulate it into the second one, applying the
density estimation ﬁlter in the process. After all samples are completed, the
second buﬀer is ﬁltered down into the ﬁnal image.
The drawback of this approach, however, is the computational eﬀort required
to apply the density estimation ﬁlter repeatedly to the linear buﬀer; multiple
applications of the ﬁlter can easily double the rendering time.
After experiments with a single buﬀer yielded acceptable results, we decided
to default to single buﬀer renders, while retaining the option of using the extra
buﬀer.
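The two-buffer scheme can be sketched as follows (bucket indexing is one-dimensional for brevity, and the density-estimation filtering step between the buffers is omitted):

```python
import math

def render_motion_blurred(sample_batches, size):
    """Two-buffer motion blur sketch: a linear histogram accumulates each
    temporal sample's hits; after each temporal sample the logarithm of
    the linear buffer is accumulated into a second buffer."""
    linear = [0.0] * size
    log_buffer = [0.0] * size
    for batch in sample_batches:      # one batch per temporal sample
        for i in batch:               # i: histogram bucket hit
            linear[i] += 1
        for i in range(size):
            log_buffer[i] += math.log(1 + linear[i])
    # Average over temporal samples before filtering down to the image.
    n = len(sample_batches)
    return [v / n for v in log_buffer]

out = render_motion_blurred([[0, 0], [0, 1]], 2)
```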
9.1 Directional Motion Blur
Uniformly distributing the samples among time steps works well for an animated series of frames, but the illusion of motion in an individual frame may be improved by providing a sense of direction. Early attempts at directional
Figure 7: (a) motion blur and (b) directional motion blur.
motion varied the number of samples used at each time step, but since the
density estimation ﬁlter was more aggressive during less dense time steps, the
result appeared unnatural.
Instead, the colors of points accumulated during earlier time steps are scaled in intensity, with the scaling constant approaching 1.0 as the time steps progress.
A rendering parameter can be supplied to the renderer to change how aggres-
sively the blending varies throughout the time steps. See the eﬀects of directional
motion blur in Figure 7.
10 History and Acknowledgements
In 1987 at Brown University, Bill Poirier showed Scott Draves what he called ‘Recursive Pictures’, a kind of two-dimensional IFS. Poirier used a formulation that included perspective transforms, but lacked a software implementation. In
response Draves created the ﬁrst of many IFS algorithms. It was written in
Postscript and ran on the Laserwriter, producing high resolution line drawings.
Draves reimplemented this idea in a variety of ways over the three years he spent in the Computer Graphics Research Group while still at Brown.
The ﬁrst implementation to include all three deﬁnitive characteristics of
fractal ﬂames (non-linear variations, log-density display, and structural coloring)
was created in the summer of 1991 while Draves was an intern at the NTT-Data
Corporation in Tokyo, Japan and was generously allowed to pursue his own
projects.
That version was released on the then-nascent world wide web in 1992 under
the General Public License (GPL), making it the ﬁrst open-source art. It has
since been incorporated into and ported to many environments, including the
Gimp, Photoshop (as Kai's Power Tools FraxFlame), After Effects, Digital Fusion, Ultra Fractal 3, screensavers for Macintosh, Windows, and Linux, as well as stand-alone programs (Apophysis, Oxidizer, and Qosmic).
The combined-channel gamma feature and the vibrancy parameter that controls it were introduced in 2001. The symmetries were introduced in 2003. Variations 7 to 12 were developed by Ronald Hordijk for his screen-saver version of the flame algorithm, then ported to the Ultra Fractal version by Erik Reckase, and adopted into the original version (with some modifications) by Draves in 2003.
In 2004, Mark Townsend released Apophysis, a translation of Draves’ C
code into Delphi Pascal, with the addition of a GUI for interactive design. Erik
Reckase became more involved with the ﬂame algorithm after the release of
Apophysis, and became an oﬃcial developer and maintainer in 2005. Reckase’s
contributions have focused on improved image quality, code optimization, and
keeping up with the wealth of variations being developed in the Apophysis
community.
In 2006 Peter Sdobnov added ﬁnal transforms to Apophysis and they were
soon after adopted by our implementation to maintain compatibility.
Thanks to Hector Yee for suggesting tone mapping as the general solution to
the dynamic range problem, and David Hart for suggesting density estimation
as an improved ﬁlter technique.
The fractal ﬂame algorithm is also the seed that spawned the Electric Sheep
distributed screen-saver [2], a follow-on art project by Draves. In this system,
thousands of idle computers from all over the world are harnessed into rendering
(and evolving) fractal ﬂames. The work of all participating clients is shared
alike.
References
[1] Michael Barnsley. Fractals Everywhere. Academic Press, San Diego, 1988.
[2] Scott Draves. The Electric Sheep Screen-Saver: A Case Study in Aesthetic
Evolution. Applications of Evolutionary Computing, LNCS 3449, 2005.
[3] John Hutchinson. Fractals and Self-Similarity. Indiana University Mathematics Journal, 30, 713-747, 1981.
[4] Jack Tumblin, Holly Rushmeier. Tone Reproduction for Realistic Images.
IEEE Computer Graphics and Applications. November/December 1993
(Vol. 13, No. 6) pp. 42-48.
[5] B. W. Silverman, Density Estimation for Statistics and Data Analysis,
Chapman and Hall, London, 1986.
[6] F. Suykens and Y. D. Willems. Adaptive filtering for progressive Monte Carlo image rendering. In Eighth International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media (WSCG 2000), Plzen, Czech Republic, February 2000.
Appendix: Catalog of Variations
For each variation we give its formula and its name, and provide a list of parameters for parametric variations. Variables used in these formulas are:
$$r = \sqrt{x^2 + y^2} \qquad \theta = \arctan(x/y) \qquad \phi = \arctan(y/x)$$
(a, b, c, d, e, f) are considered to be the affine transform coefficients for a variation, and are used in dependent variations. Ω is a random variable that is either 0 or π. Λ is a random variable that is either -1 or 1. Ψ is a random variable uniformly distributed on the interval [0, 1]. The 'trunc' function returns the integer part of a floating-point value.
A visualization of the distorted coordinate grid and a representative ﬂame
using this variation are also supplied. As in Figure 2, x and y increase towards
the lower right, to match the coordinate system of the output image. Note that
these sample ﬂames are selected based on their characteristic shape and not
for any aesthetic qualities. The representative images attempt to use a single
variation, but in general, variations can be mixed when creating ﬂames. For
random-based variations, a scatterplot is substituted for the coordinate grid.
Linear (Variation 0)

$$V_0(x, y) = (x, y)$$
Sinusoidal (Variation 1)

$$V_1(x, y) = (\sin x,\; \sin y)$$
Spherical (Variation 2)

$$V_2(x, y) = \frac{1}{r^2} \cdot (x, y)$$
Swirl (Variation 3)

$$V_3(x, y) = \left(x \sin(r^2) - y \cos(r^2),\; x \cos(r^2) + y \sin(r^2)\right)$$

Horseshoe (Variation 4)

$$V_4(x, y) = \frac{1}{r} \cdot \left((x - y)(x + y),\; 2xy\right)$$
Polar (Variation 5)

$$V_5(x, y) = \left(\frac{\theta}{\pi},\; r - 1\right)$$

Handkerchief (Variation 6)

$$V_6(x, y) = r \cdot (\sin(\theta + r),\; \cos(\theta - r))$$
Heart (Variation 7)

$$V_7(x, y) = r \cdot (\sin(\theta r),\; -\cos(\theta r))$$

Disc (Variation 8)

$$V_8(x, y) = \frac{\theta}{\pi} \cdot (\sin(\pi r),\; \cos(\pi r))$$
Spiral (Variation 9)

$$V_9(x, y) = \frac{1}{r} \cdot (\cos\theta + \sin r,\; \sin\theta - \cos r)$$

Hyperbolic (Variation 10)

$$V_{10}(x, y) = \left(\frac{\sin\theta}{r},\; r \cos\theta\right)$$
Diamond (Variation 11)

$$V_{11}(x, y) = (\sin\theta \cos r,\; \cos\theta \sin r)$$

Ex (Variation 12)

$$p_0 = \sin(\theta + r), \qquad p_1 = \cos(\theta - r)$$

$$V_{12}(x, y) = r \cdot (p_0^3 + p_1^3,\; p_0^3 - p_1^3)$$
Julia (Variation 13)

$$V_{13}(x, y) = \sqrt{r} \cdot (\cos(\theta/2 + \Omega),\; \sin(\theta/2 + \Omega))$$

(Note: The grid visualization for the julia variation only includes data for Ω = 0.)

Bent (Variation 14)

$$V_{14}(x, y) = \begin{cases} (x, y) & x \ge 0,\; y \ge 0 \\ (2x, y) & x < 0,\; y \ge 0 \\ (x, y/2) & x \ge 0,\; y < 0 \\ (2x, y/2) & x < 0,\; y < 0 \end{cases}$$
Waves (Variation 15) - dependent

$$V_{15}(x, y) = \left(x + b \sin\!\left(\frac{y}{c^2}\right),\; y + e \sin\!\left(\frac{x}{f^2}\right)\right)$$

Fisheye (Variation 16)

Note the reversed order of x and y in the formula.

$$V_{16}(x, y) = \frac{2}{r + 1} \cdot (y, x)$$
Popcorn (Variation 17) - dependent

$$V_{17}(x, y) = (x + c \sin(\tan 3y),\; y + f \sin(\tan 3x))$$

Exponential (Variation 18)

$$V_{18}(x, y) = \exp(x - 1) \cdot (\cos(\pi y),\; \sin(\pi y))$$
Power (Variation 19)

$$V_{19}(x, y) = r^{\sin\theta} \cdot (\cos\theta,\; \sin\theta)$$

Cosine (Variation 20)

$$V_{20}(x, y) = (\cos(\pi x) \cosh y,\; -\sin(\pi x) \sinh y)$$
Rings (Variation 21) - dependent

$$V_{21}(x, y) = \left((r + c^2) \bmod (2c^2) - c^2 + r(1 - c^2)\right) \cdot (\cos\theta,\; \sin\theta)$$

Fan (Variation 22) - dependent

$$t = \pi c^2$$

$$V_{22}(x, y) = \begin{cases} r \cdot (\cos(\theta - t/2),\; \sin(\theta - t/2)) & (\theta + f) \bmod t > t/2 \\ r \cdot (\cos(\theta + t/2),\; \sin(\theta + t/2)) & (\theta + f) \bmod t \le t/2 \end{cases}$$
Blob (Variation 23) - parametric

$$p_1 = \text{blob.high}, \qquad p_2 = \text{blob.low}, \qquad p_3 = \text{blob.waves}$$

$$V_{23}(x, y) = r \cdot \left(p_2 + \frac{p_1 - p_2}{2}\left(\sin(p_3 \theta) + 1\right)\right) \cdot (\cos\theta,\; \sin\theta)$$

PDJ (Variation 24) - parametric

$$p_1 = \text{pdj.a}, \qquad p_2 = \text{pdj.b}, \qquad p_3 = \text{pdj.c}, \qquad p_4 = \text{pdj.d}$$

$$V_{24}(x, y) = (\sin(p_1 y) - \cos(p_2 x),\; \sin(p_3 x) - \cos(p_4 y))$$
Fan2 (Variation 25) - parametric

Fan2 was created as a parametric alternative to Fan.

$$p_1 = \pi (\text{fan2.x})^2, \qquad p_2 = \text{fan2.y}$$

$$t = \theta + p_2 - p_1 \operatorname{trunc}\!\left(\frac{2 \theta p_2}{p_1}\right)$$

$$V_{25}(x, y) = \begin{cases} r \cdot (\sin(\theta - p_1/2),\; \cos(\theta - p_1/2)) & t > p_1/2 \\ r \cdot (\sin(\theta + p_1/2),\; \cos(\theta + p_1/2)) & t \le p_1/2 \end{cases}$$
Rings2 (Variation 26) - parametric

Rings2 was created as a parametric alternative to Rings.

$$p = (\text{rings2.val})^2$$

$$t = r - 2p \operatorname{trunc}\!\left(\frac{r + p}{2p}\right) + r(1 - p)$$

$$V_{26}(x, y) = t \cdot (\sin\theta,\; \cos\theta)$$

Eyefish (Variation 27)

Eyefish was created to correct the order of x and y in Fisheye.

$$V_{27}(x, y) = \frac{2}{r + 1} \cdot (x, y)$$
Bubble (Variation 28)

$$V_{28}(x, y) = \frac{4}{r^2 + 4} \cdot (x, y)$$

Cylinder (Variation 29)

$$V_{29}(x, y) = (\sin x,\; y)$$
Perspective (Variation 30) - parametric

$$p_1 = \text{perspective.angle}, \qquad p_2 = \text{perspective.dist}$$

$$V_{30}(x, y) = \frac{p_2}{p_2 - y \sin p_1} \cdot (x,\; y \cos p_1)$$

Noise (Variation 31)

$$V_{31}(x, y) = \Psi_1 \cdot (x \cos(2\pi \Psi_2),\; y \sin(2\pi \Psi_2))$$
JuliaN (Variation 32) - parametric

p1 = juliaN.power, p2 = juliaN.dist

p3 = trunc(|p1|Ψ)

t = (φ + 2πp3)/p1

V32(x, y) = r^(p2/p1) · (cos t, sin t)
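A sketch of JuliaN (helper name ours). The random integer p3 selects one of |power| angular branches; with power = dist = 1 there is a single branch and the map reduces to the identity, which makes a convenient sanity check.

```python
import math
import random

def julian(x, y, power, dist, rng=random):
    # Parametric variation; Psi is uniform on [0, 1).
    # phi = arctan(y/x) per the paper's variable definitions.
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    p3 = math.trunc(abs(power) * rng.random())  # random branch index
    t = (phi + 2.0 * math.pi * p3) / power
    k = r ** (dist / power)
    return (k * math.cos(t), k * math.sin(t))
```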
JuliaScope (Variation 33) - parametric

p1 = juliaScope.power, p2 = juliaScope.dist

p3 = trunc(|p1|Ψ)

t = (Λφ + 2πp3)/p1

V33(x, y) = r^(p2/p1) · (cos t, sin t)
Blur (Variation 34)

V34(x, y) = Ψ1 · (cos(2πΨ2), sin(2πΨ2))
Gaussian (Variation 35)
Summing 4 random numbers and subtracting 2 is an attempt at approximating
a Gaussian distribution.

V35(x, y) = (Σ_{k=1}^{4} Ψk − 2) · (cos(2πΨ5), sin(2πΨ5))
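The sum-of-four-uniforms trick above is an Irwin-Hall approximation: the magnitude has mean 0 and is confined to [−2, 2]. A sketch (helper name ours):

```python
import math
import random

def gaussian_blur(rng=random):
    # Magnitude: sum of 4 uniforms on [0,1) minus 2, a rough Gaussian.
    # Direction: an independent uniform angle. Input (x, y) is ignored,
    # so this variation ignores the point and scatters around the origin.
    mag = sum(rng.random() for _ in range(4)) - 2.0
    ang = 2.0 * math.pi * rng.random()
    return (mag * math.cos(ang), mag * math.sin(ang))
```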
RadialBlur (Variation 36) - parametric

p1 = (radialBlur.angle) · (π/2)

t1 = v36 (Σ_{k=1}^{4} Ψk − 2), t2 = φ + t1 sin p1, t3 = t1 cos p1 − 1

V36(x, y) = (1/v36) · (r cos t2 + t3 x, r sin t2 + t3 y)
Pie (Variation 37) - parametric

p1 = pie.slices, p2 = pie.rotation, p3 = pie.thickness

t1 = trunc(Ψ1 p1 + 0.5)

t2 = p2 + (2π/p1)(t1 + Ψ2 p3)

V37(x, y) = Ψ3 · (cos t2, sin t2)
Ngon (Variation 38) - parametric

p1 = ngon.power, p2 = 2π/ngon.sides, p3 = ngon.corners, p4 = ngon.circle

t3 = φ − p2 ⌊φ/p2⌋

t4 = t3          t3 > p2/2
     t3 − p2     t3 ≤ p2/2

k = (p3 (1/cos t4 − 1) + p4) / r^(p1)

V38(x, y) = k · (x, y)
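A sketch of Ngon (helper name ours), assuming r ≠ 0 so the division by r^power is defined. With corners = 0 and circle = 1 and power = 1 the scale factor reduces to 1/r, projecting every point onto the unit circle, which gives a simple check:

```python
import math

def ngon(x, y, power, sides, corners, circle):
    # Parametric variation that pulls space toward a regular polygon.
    r = math.hypot(x, y)          # assumes r != 0
    phi = math.atan2(y, x)        # paper convention: phi = arctan(y/x)
    p2 = 2.0 * math.pi / sides
    t3 = phi - p2 * math.floor(phi / p2)
    t4 = t3 if t3 > p2 / 2 else t3 - p2
    k = (corners * (1.0 / math.cos(t4) - 1.0) + circle) / (r ** power)
    return (k * x, k * y)
```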
Curl (Variation 39) - parametric

p1 = curl.c1, p2 = curl.c2

t1 = 1 + p1 x + p2 (x² − y²), t2 = p1 y + 2p2 xy

V39(x, y) = (1/(t1² + t2²)) · (x t1 + y t2, y t1 − x t2)
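A sketch of Curl (helper name ours). Note that with c1 = c2 = 0 the intermediate terms become t1 = 1, t2 = 0 and the variation reduces to the identity:

```python
def curl(x, y, c1, c2):
    # Parametric variation; c1 = c2 = 0 yields the identity map.
    t1 = 1.0 + c1 * x + c2 * (x * x - y * y)
    t2 = c1 * y + 2.0 * c2 * x * y
    d = t1 * t1 + t2 * t2
    return ((x * t1 + y * t2) / d, (y * t1 - x * t2) / d)
```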
Rectangles (Variation 40) - parametric

p1 = rectangles.x, p2 = rectangles.y

V40(x, y) = ((2⌊x/p1⌋ + 1) p1 − x, (2⌊y/p2⌋ + 1) p2 − y)
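A sketch of Rectangles (helper name ours), assuming nonzero cell sizes. Each coordinate is reflected about the center of its own cell of width p1 (resp. p2): within the first cell [0, p1) the x coordinate maps to p1 − x.

```python
import math

def rectangles(x, y, px, py):
    # Reflects each coordinate within its grid cell; assumes px, py != 0.
    return ((2 * math.floor(x / px) + 1) * px - x,
            (2 * math.floor(y / py) + 1) * py - y)
```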
Arch (Variation 41)

V41(x, y) = (sin(Ψπv41), sin²(Ψπv41)/cos(Ψπv41))

Tangent (Variation 42)

V42(x, y) = (sin x / cos y, tan y)
Square (Variation 43)

V43(x, y) = (Ψ1 − 0.5, Ψ2 − 0.5)

Rays (Variation 44)

V44(x, y) = (v44 tan(Ψπv44)/r²) · (cos x, sin y)
Blade (Variation 45)

V45(x, y) = x · (cos(Ψrv45) + sin(Ψrv45), cos(Ψrv45) − sin(Ψrv45))
Secant (Variation 46)

V46(x, y) = (x, 1/(v46 cos(v46 r)))
Twintrian (Variation 47)

t = log10(sin²(Ψrv47)) + cos(Ψrv47)

V47(x, y) = x · (t, t − π sin(Ψrv47))
Cross (Variation 48)

V48(x, y) = √(1/(x² − y²)²) · (x, y)
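In practice the catalog's variations are rarely used alone: per Section 3, each transform applies a blending vector vij across all variations. A minimal sketch of that blending with two variations from the catalog (the weights here are arbitrary examples, not values from the paper):

```python
import math

def linear(x, y):               # Variation 0
    return (x, y)

def sinusoidal(x, y):           # Variation 1
    return (math.sin(x), math.sin(y))

def blend(x, y, weighted=((0.7, linear), (0.3, sinusoidal))):
    # F_i(x, y) = sum_j v_ij * V_j(x, y): a weighted sum of variation outputs.
    bx = by = 0.0
    for w, v in weighted:
        vx, vy = v(x, y)
        bx += w * vx
        by += w * vy
    return (bx, by)
```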
41

a

b

c

d

Figure 1: Example fractal ﬂame images. The names of these ﬂames are: a) 206, b) 191, c) 4000, and d) 29140. These images were selected for their aesthetic properties.

2

Figure 2: Sierpinski’s Gasket, a simple IFS. X and y increase towards the lower right. The recursive structure is visible as the whole image is made up of three copies of itself, one for each function. As implemented and popularized by Barnsley [1] the functions Fi are linear (technically they are aﬃne as each is a two by three matrix capable of expressing scale, rotation, translation, and shear): Fi (x, y) = (ai x + bi y + ci , di x + ei y + fi ) For example, if the functions are F0 (x, y) = ( x , y ) 2 2 F1 (x, y) = ( x+1 , y ) 2 2 F2 (x, y) = ( x , y+1 ) 2 2

then the ﬁxed-point S is Sierpinski’s Gasket, as seen in Figure 2. In order to facilitate the proofs and guarantee convergence of the algorithms, the functions are normally constrained to be contractive, that is, to bring points closer together. In fact, the normal algorithm works under the much weaker condition that the whole system is contractive on average. Useful guarantees of this become diﬃcult to provide when the functions are non-linear. Instead we recommend using a numerically robust implementation, and simply accept that some parameter sets result in better images than others, and some result in degenerate images. The normal algorithm for solving for S is called the chaos game. In pseudocode it is: (x, y)= a random point in the bi-unit square iterate { i = a random integer from 0 to n − 1 inclusive (x, y) = Fi (x, y) plot(x, y) except during the ﬁrst 20 iterations } The bi-unit square are those points where x and y are both in [-1,1]. The chaos game works because if (x, y) ∈ S then Fi (x, y) ∈ S too. Though we start 3

out with a random point, because the functions are on average contractive the distance between the solution set and the point decreases exponentially. After 20 iterations with a typical contraction factor of 0.5 the point will be within 10−6 of the solution, much less than a pixel’s width. Every point of the solution set will be generated eventually because an inﬁnite string of random symbols (the choices for i) contains every ﬁnite substring of symbols. This is explained in more formally in Section 4.8 of [1]. No suﬃcient number of iterations is given by the algorithm. Because the chaos game operates by stochastic sampling, the more iterations one makes the closer the result will be to the exact solution. The judgement of how close is close enough remains for the user. In fractal ﬂames, the number of samples is speciﬁed with the more abstract parameter quality, or samples per output pixel. That way the image quality (in the sense of lack of noise) remains constant in face of changes to the image resolution and the camera. It is useful to be able to weight the functions so they are not chosen with equal frequency in line 3 of the chaos game. We assign a weight, or relative probability wi to each function Fi . This allows interpolation between function systems with diﬀerent numbers of functions: the additional functions can be phased in by starting their weights at zero. Diﬀerently weighted functions are also necessary to draw symmetric ﬂames as shown in Section 7. Some implementations of the chaos game make each function’s weight proportional to its contraction factor. When drawing one-bit per pixel images this has the advantage of converging faster than equal weighting because it avoids redrawing pixels. But in our context with mutiple bits per pixel and non-linear functions (where the contraction factor is not constant) this device is best avoided.

3

Variations

We now generalize this algorithm. The ﬁrst step is to use a larger class of functions than just aﬃne functions. Start by composing a non-linear function Vj from R2 to R2 with the aﬃne functions: Fi (x, y) = Vj (ai x + bi y + ci , di x + ei y + fi ) We call each such function Vj a variation, and each one changes the shape and character of the solution in a recognizable way. The initial variations were simple remappings of the plane. They were followed by dependent variations, where the coeﬃcients of the aﬃne transform deﬁne the behaviour of the variation. More recently, parametric variations have been introduced, which are variations controlled by additional parameters independent of the aﬃne transform. Appendix A documents 49 variations. Variation 0 is the identity function. It and six more examples are:

4

PDJ relies on four external parameters (p1 . Then Fi (x. y) where = = = = = = = (x. if are n functions. y + f sin(tan 3x)) (sin(p1 y) − cos(p2 x). to be applied after the non-linear function. y) V4 (x. δi x + ǫi y + ζi ) then we redeﬁne the Fi as follows: Fi (x. and then applying a non-linear variation function to the result of the aﬃne transformation. y) V24 (x. sin y) 1 r 2 · (x. di x + ei y + fi ) With this generalization. Variations can be further generalized by replacing the integer parameter j with a blending vector vij with one coeﬃcient per variation. if unused parameters are assumed to be 0. 2xy) (x + c sin(tan 3y). di x + ei y + fi )) 5 . If the post transform is Pi (x. y) x sin(r2 ) − y cos(r2 ). 3. p4 ) to fully characterize its behaviour. p3 . y) V3 (x. y) V1 (x. is an example of a parametric variation. y) (sin x. applying a transform to a set of coordinates involves ﬁrst applying an aﬃne transformation to the coordinates. V24 . Signiﬁcantly fewer parameters are required. 6n matrix coeﬃcients ai through fi .1 Post Transforms To this point. p2 . y) = j vij Vj (ai x + bi y + ci .V0 (x. The PDJ variation. which is dependent on the c and f coeﬃcients of the aﬃne transform. however. then there are 87n parameters that specify it. y) = (αi x + βi y + γi . y) V17 (x. and n weights wi . V17 . a post transform. sin(p3 x) − cos(p4 y)) r = x2 + y 2 linear sinusoidal spherical swirl horseshoe popcorn pdj An example of a dependent variation is Popcorn. We further generalize this with the addition of a secondary aﬃne transformation. y) V2 (x. y) = Pi ( j vij Vj (ai x + bi y + ci . The parameters consist of 49n variational coeﬃcients vij . 31n parametric coeﬃcients. This provides the ability to change the coordinate systems of the variations. x cos(r2 ) + y sin(r2 ) 1 r · ((x − y)(x + y).

See Figure 3a for an example. The logarithm allows one to visually distinguish between. y) plot (xf . The result of the application of the ﬁnal transform function is not ‘in the loop’. In pseudocode. and hence membership is a binary function. that is. The ﬁnal transform is like a non-linear camera. 6 . If a ﬁnal transform is desired. y)= a random point in the bi-unit square iterate { i = a random integer from 0 to n − 1 inclusive (x. the densities are often distributed according to a power law (frequency is proportional to an exponent of the value). Like many natural systems. The ﬂame algorithm addresses the problem by using a logarithmic map from density to brightness. The ﬁnal transform can have a post transform associated with it. These counters can be visualized by mapping them to shades of gray or by using a color-map that represents diﬀerent densities (larger count values are more dense) with diﬀerent colors. then the total number of parameters necessary is 93(n + 1). The densest points are much denser than the average density. y) points which are plotted on the image plane. y) that is always applied regardless of the value of i in the iteration loop. See Figure 3c for the result. A more interesting image can be produced if we render a histogram of the chaotic process. The result is unsatisfying because of the large dynamic range of densities. y) (xf . Figure 4 has two histograms demonstrating this.000 and 5. hence with a linear map most of the image is very dark. which is an additional function Ff inal (x. 4 Log-Density Display The chaos game produces a series of (x. yf ) except during the ﬁrst 20 iterations } Adding the post transforms to the earlier parameter count. increment a counter at each pixel instead of merely plotting.3. and the image is therefore black and white. Information is lost every time we plot a point that has already been plotted. yf ) = Ff inal (x. y) = Fi (x. 
for example.2 Final Transforms We can now introduce the concept of a ﬁnal transform.000 in one part of the image and 30 and 50 in another part. we now have: (x. A linear mapping of counters to gray values results in Figure 3b. S is a subset of the plane. and information is lost. The collection of these points approximates the solution S to the iterated function system. we now have 93n parameters necessary to fully specify n functions. and there can be only one ﬁnal transform per ﬂame. lacking even shades of gray. densities of 3.

then b) linear. e) with gamma factor. d) with color.a b c d e f Figure 3: Successive reﬁnements of the rendering technique starting with a) binary membership. 7 . The parameters to create this image are given in Appendix B. c) logarithmic. and ﬁnally f) with vibrant colors.

the frequency (on the vertical axis) is the number of pixels with that density (or up to the next power of two). In each case. The density (on the horizontal axis) is the number of hits by the system in a pixel. after a plateau the graph is nearly a straight line in log-log space. the other is the composite of 19 old. The line is from the image in Figure 3.1e+07 composite single 1e+06 100000 10000 1000 100 10 1 1 10 100 1000 10000 100000 Figure 4: Plots showing that the distribution of densities in an IFS follows the power law. favorite systems including Figure 3. This is the deﬁnitive characteristic of a power law distribution. 8 .2e6 samples on a 900x900 grid. Each image was computed with 9.

For example branches of densities 1000 and 100 might have brightnesses of 30 and 20. which parts of the attractor come from which functions? The ﬂame algorithm uses color to convey this. which is hardly distinguishable from 30. y)= a random point in the bi-unit square c = a random point in [0.1]. but as just described the algorithm works strictly with a 2D buﬀer. The fractal ﬂame algorithm uses an original means to accomplish this: adding a third coordinate to the iteration. whose brightness is 30.1] iterate { i = a random integer from 0 to n − 1 inclusive (x. Indices make less diﬀerence as they recede in time. described below) is just an ad-hoc tone-map. In particular.1] to (r. for animation the eye prefers that the color of each part of the attractor remain unchanged over time. g. otherwise the illusion of an object in motion is compromised. one may appear to occlude the other if their densities are diﬀerent enough because the lesser density is inconsequential in sum. Naturally we want to use a palette or color-map which we deﬁne as a function from [0. On sight people often guess that fractal ﬂames are rendered in 3D. Hence colors are continuous in the ﬁnal image. The result is a substantial aesthetic improvement.4. y) cf = (c + cf inal )/2 plot (xf .b) where r. 9 . However. cf ) except during the ﬁrst 20 iterations } This has the important property that the most recently applied function makes the largest diﬀerence in the color index and also in spatial location. The logarithm used here (combined with the gamma factor. To achieve this we assign a color ci to each function Fi (and a corresponding cf inal if a ﬁnal transform is present) and add an independent coordinate to the chaos game: (x. This log-density mapping is the source of a 3D illusion. y) = Fi (x. yf ) = Ff inal (x. where one branch of the fractal crosses another. A palette is classically speciﬁed with an array of 256 triples of bytes. Furthermore. 
Color could be assigned according to the density map and although the result is increased visibility of the densities relative to grayscale. the internal structure of the fractal remains opaque. Where they cross the density is 1100.g. and b are in [0.The display of high dynamic range images like these by tone mapping is studied in computer graphics [4]. 5 Coloring There is more information to be wrung from the attractor. yf . y) c = (c + ci )/2 (xf .

the chances of a symmetric design 10 . to 1. or they can be used for compositing the fractal with a background immediately. the corrected brightness is simply: bcorrected = b1/γ where γ normally about 2. gamma values as large as 4 improve visibility of ﬁne structure in the attractor.2. Without correction the darker parts of an image appear too dark. leaving less diﬀerence. So then to plot a point the color is added to the three color channels. Instead one must add a fourth channel of so-called alpha (α). The parameter is therefor left to the user to set according to taste and circumstance. But depending on the speciﬁc image. See Figure 3d for the result. and hence less saturation. Because the red. When the matrix coeﬃcients are chosen at random. 7 Symmetry The human mind responds to symmetric designs at a fundamental level. and adding the current color to the three of them instead of incrementing a single density counter. or transparency values. This is because saturated colors occur due to large diﬀerence between channel values. While one may desire this eﬀect. gamma correction is normally applied to each color channel independently. However if the gamma value is unnaturally large this has the eﬀect of washing out the colors of the image (it becomes pastel or ghostly oﬀ-white). But gamma correction boosts any small values towards one. See Figure 3e for the result of applying gamma 4 to our running example. Taking the logarithm of each channel independently grossly alters the colors.Color plotting is naturally implemented by keeping three counters per pixel instead of one. The resulting alpha values can be output with the image if the ﬁle format supports them. however. and so require longer rendering times to compensate. green. The name of the parameter that selects this is called vibrancy. and it can take any value from 0. or they can be discarded. meaning to apply gamma to channels independently. 
6 The Gamma Factor Accurate display of any digital image on a Cathode Ray Tube (CRT) requires gamma correction to account for the non-linear response of screen phosphors to voltage. After all the samples have been accumulated each channel is scaled by log α/α. Large gamma values also substantially increase the visible noise. meaning to apply gamma from the alpha channel to each channel. to preserve bright colors the gamma correction may be applied the same way the logarithm is: by calculating a scale factor on the alpha channel. That is not enough information for proper logdensity display. and then applying it to the three color channels. and 1 is added to the alpha channel. If the brightness b of a pixel is a value between 0 and 1. and blue phosphors respond independently.

diﬀerent colors get averaged. and hence the two halves will have equal density. The result appears in Figure 5. In general. Dihedral symmetries are named with negative integers. and then after picking a function at random. which is only 25% probable. First we cover rotations. The solution is to not change the color coordinate when applying symmetry functions. however. and no weighting would balance them. That way half of the jumps in the chaos game are between the two halves. Getting good colors with the symmetry functions requires special treatment. This just follows the isomorphism between multiplication on integers and composition of symmetries. If the rotation function is given the same weight as the other functions. Again it is given weight equal to the sum of all the other weights combined.appearing are vanishingly small. pick a rotation at random. one by 120 degrees and one by 240. We can easily inject such functions intentionally. The chaos game lends itself to anti-aliasing. It is also possible to introduce symmetries by modifying the chaos game to support them directly instead of adding symmetric functions. The weight of this function should be equal to the sum of the weights of all other functions in the system. one has to pick the rotational function twice in a row. Combining this function with rotation functions gives all the dihedral symmetries. There are two kinds of symmetries: rotational and dihedral. but the 120-degree branch is 50% probable. one can just add an integer symmetry parameter. then one of the two halves will only be a shadow of the other. Then all three branches will have the same probability. and snowﬂake symmetry is -6. A dihedral symmetry is created by adding a function that inverts the x coordinate only (producing bilateral symmetry). and the image loses color diversity. 
8 Filtering Aliasing in spatial and temporal directions is visually disturbing and also indicates information loss because one cannot tell if an artifact in the image is an original or an alias. This also makes the colors symmetric as well as the shape. The problem is the symmetry functions can bring a point back onto itself after two or more applications. With anti-aliasing. If the color is modiﬁed by these functions and then plotted at its original location. n-1 additional transforms are necessary to balance the densities. that is the elimination of the jaggie edges. The normal technique is to draw 11 . to produce n-way symmetry. But then how to interpolate between symmetries without discontinuities is problematic. The three branches do not have equal density however. and give them both weight equal to the sum of the others. so the simple bilateral symmetry is -1. Adding a function to the system that rotates by 180 degrees makes a 2-way rotational symmetry appear. Adding a rotation by 120 degrees does make a 3-way symmetry appear. For example. Consider spatial aliasing ﬁrst. there is no ambiguity. Instead one must introduce two transforms. That’s because in order to get into the 240-degree branch in the chaos game.

while lower density areas are signiﬁcantly smoothed. The eﬀect on visual quality. With the chaos game however we can achieve this eﬀect by just increasing the number of buckets used in the histogram without increasing the number of iterations. the desired image at high resolution. Gamma correction should be done at this ﬁltering step. however. so 3 by 3 supersampling means 9x). and is applied after the log density scaling. and its cost is linear in time and memory (though the sampling is normally applied in two dimensions. This is known as supersampling. This can be addressed with a form of Density Estimation [5]. the logarithm and gamma factor may cause low-density parts of an image to appear dotted or noisy. Figure 6 illustrates the eﬀect of density estimation. We have implemented a dynamic ﬁlter where a blur kernel of width inversely proportional to the density of points in the histogram is applied to the samples [6].a b c d Figure 5: Examples of symmetry. Despite this ﬁltering. is dramatic. sublinear cost in time though because the time spent ﬁltering is signiﬁcant. and the increased memory usage also means increased cache misses during iteration. when the maximum number of bits of precision is still available. and then ﬁlter it down to display resolution. 12 . This variable width ﬁlter allows higher density areas to remain in focus. There is a small. Image d shows how the colors wash out without special treatment of the color coordinate for symmetry transforms. The blur kernel is scaled based on the supersampling level. A wider ﬁlter would solve this. but at the expense of making the well-sampled parts of the image blurry.

applying the density estimation ﬁlter in the process.5. we decided to default to single buﬀer renders. That is. 9. 1M samples at t-0.5. low-quality. however. each should get brightness 1.5. so its assigned brightness is 3. The drawback of this approach. this would work exactly. and (b) with density estimation.1 Directional Motion Blur Uniformly distributing the samples among time steps works well for an animated series of frames. zoomed image rendered without density estimation. is not so easy to do correctly. if 5M samples are used to draw the frame at time t. or temporal anti-aliasing. all the while accumulating into the same buﬀer. The non-linearity of the logarithm complicates things. That would be 5x supersampling for free! With linear density display. the second buﬀer is ﬁltered down into the ﬁnal image. Early attempts at directional 13 . Supersampling can be achieved by varying the parameters over time while running the chaos game. 1M at t+0. while retaining the option of using the extra buﬀer. is the computational eﬀort required to apply the density estimation ﬁlter repeatedly to the linear buﬀer.25 and 1M at t+0. not 1.25.a b c Figure 6: Demonstration of density estimation: (a) a low-resolution.5. Consider a pixel with density 8. multiple applications of the ﬁlter can easily double the rendering time. Objects in motion would appear unnaturally bright. But if the motion takes place before the logarithm then the 8 samples would be divided into two pixels of density 4 each. Now put the fractal in motion so it blurs across two pixels. After experiments with a single buﬀer yielded acceptable results. After each temporal sample. but the illusion of motion of an individual frame animation may be improved by providing a sense of direction. 9 Motion Blur Motion blur. to instead use 1M samples at time t-0. After all samples are completed. Assume for simplicity the logarithm is base 2. and (c) a high quality render at high resolution with density estimation. 
take the logarithm of this buﬀer and accumulate it into the second one. 1M at t. whose logarithm is 2. A proper solution requires the use of an extra buﬀer: the ﬁrst buﬀer is linear and accumulates the histogram.

The ﬁrst implementation to include all three deﬁnitive characteristics of fractal ﬂames (non-linear variations. making it the ﬁrst open-source art. Photoshop (as Kai’s Power Tools FraxFlame). Digital Fusion. Japan and was generously allowed to pursue his own projects. It was written in Postscript and ran on the Laserwriter. A rendering parameter can be supplied to the renderer to change how aggressively the blending varies throughout the time steps. After Eﬀects. and structural coloring) was created in the summer of 1991 while Draves was an intern at the NTT-Data Corporation in Tokyo. motion varied the number of samples used at each time step. In response Draves created the ﬁrst of many IFS algorithms. That version was released on the then-nascent world wide web in 1992 under the General Public License (GPL). with the scaling constant approaching 1. Ultra Fractal 3. producing high resolution line drawings. the result appeared unnatural. but lacked a software implementation.0 as the time steps progress. screensavers for Macintosh.a b Figure 7: (a) motion blur and (b) directional motion blur. Windows. Poirier used a formulation that included perspective transforms. but since the density estimation ﬁlter was more aggressive during less dense time steps. See the eﬀects of directional motion blur in Figure 7. log-density display. Draves reimplemented this idea in a variety of ways over the three years he spent in the Computer Graphics Research Group at while still at Brown. as well 14 . including the Gimp. It has since been incorporated into and ported to many environments. and Linux. Instead. 10 History and Acknowledgements In 1987 at Brown University Bill Poirier showed Scott Draves what he called ‘Recursive Pictures’ a kind of two-dimensional IFS. the color of points accumulated during earlier time steps are scaled in intensity.

Indiana University Journal of Mathematics. Reckase’s contributions have focused on improved image quality. 42-48. The Electric Sheep Screen-Saver: A Case Study in Aesthetic Evolution. No. a follow-on art project by Draves. Willems. Mark Townsend released Apophysis. and David Hart for suggesting density estimation as an improved ﬁlter technique. with the addition of a GUI for interactive design. Erik Reckase became more involved with the ﬂame algorithm after the release of Apophysis. 1986. The combined-channel gamma feature and the vibrancy parameter that controls it were introduced in 2001. and became an oﬃcial developer and maintainer in 2005. thousands of idle computers from all over the world are harnessed into rendering (and evolving) fractal ﬂames. [6] F. Visualization and Interactive Digital Media (WSCG 2000). In 2004. IEEE Computer Graphics and Applications. Thanks to Hector Yee for suggesting tone mapping as the general solution to the dynamic range problem. Czech Republic. Silverman. [2] Scott Draves. November/December 1993 (Vol. Tone Reproduction for Realistic Images. [4] Jack Tumblin. Suykens and Y. D. Applications of Evolutionary Computing. code optimization. 6) pp. In Eighth International Conference in Central Europe on Computer Graphics. and keeping up with the wealth of variations being developed in the Apophysis community. February 2000. a translation of Draves’ C code into Delphi Pascal. Oxidizer. W. 1988. then ported to the Ultra Fractal version by Erik Reckase. The symmetries were introduced in 2003. J. 2005. 15 . Chapman and Hall. [3] Hutchinson. Fractals and Self-Similarity. The fractal ﬂame algorithm is also the seed that spawned the Electric Sheep distributed screen-saver [2]. References [1] Michael Barnsley: Fractals Everywhere. 13. Variations 7 to 12 were developed by Ronald Hordijk for his screen-saver version of the ﬂame algorithm. LNCS 3449. Holly Rushmeier.as stand-alone programs (Apophysis. Academic Press. 713-747. San Diego. 30. 
Adaptive ﬁltering for progressive Monte Carlo image rendering. [5] B. London. Density Estimation for Statistics and Data Analysis. In 2006 Peter Sdobnov added ﬁnal transforms to Apophysis and they were soon after adopted by our implementation to maintain compatibility. The work of all participating clients is shared alike. Plzen. and adopted into the original version (with some modiﬁcations) by Draves in 2003. and Qosmic). 1981. In this system.

to match the coordinate system of the output image. and are used in dependent variations. y) 16 . Ω is a random variable that is either 0 or π. d. c. but in general. 1]. A visualization of the distorted coordinate grid and a representative ﬂame using this variation are also supplied. Note that these sample ﬂames are selected based on their characteristic shape and not for any aesthetic qualities. Ψ is a random variable uniformally distributed on the interval [0. The ’trunc’ function returns the integer part of a ﬂoating-point value. a scatterplot is substituted for the coordinate grid. and provide a list of parameters for parametric variations. y) = (x. f ) are considered to be the aﬃne transform coeﬃcients for a variation. b. e. variations can be mixed when creating ﬂames. Variables used in these formulas are: r θ φ = = = x2 + y 2 arctan(x/y) arctan(y/x) (a. The representative images attempt to use a single variation. x and y increase towards the lower right.Appendix: Catalog of Variations For each variation we give its formula and its name. For random-based variations. Linear (Variation 0) V0 (x. As in Figure 2. Λ is a random variable that is either -1 or 1.

sin y) Spherical (Variation 2) V2 (x. y) r2 17 . y) = 1 · (x. y) = (sin x.Sinusoidal (Variation 1) V1 (x.

y) = x sin(r2 ) − y cos(r2 ). x cos(r2 ) + y sin(r2 ) Horseshoe (Variation 4) V4 (x.Swirl (Variation 3) V3 (x. 2xy) r 18 . y) = 1 · ((x − y)(x + y).

Polar (Variation 5) V5 (x.r − 1 π Handkerchief (Variation 6) V6 (x. y) = r · (sin(θ + r). y) = θ . cos(θ − r)) 19 .

y) = r · (sin(θr). cos(πr)) π 20 . y) = θ · (sin(πr).Heart (Variation 7) V7 (x. − cos(θr)) Disc (Variation 8) V8 (x.

y) = 1 (cos θ + sin r. y) = sin θ . r cos θ r 21 .Spiral (Variation 9) V9 (x. sin θ − cos r) r Hyperbolic (Variation 10) V10 (x.

p1 = cos(θ − r) 22 . y) = r · (p0 + p3 . cos θ sin r) Ex (Variation 12) 3 V12 (x. p3 − p3 ) 1 0 1 p0 = sin(θ + r).Diamond (Variation 11) V11 (x. y) = (sin θ cos r.

y/2)   (2x. y x ≥ 0.Julia (Variation 13) V13 (x. sin(θ/2 + Ω) (Note: The grid visualization for the julia variation only includes data for Ω = 0. y x < 0. y/2) x ≥ 0. y x < 0. y ≥0 ≥0 <0 <0 23 .) Bent (Variation 14)   (x. y) =  (x. y)   (2x. y) V14 (x. y) = √ r · (cos(θ/2 + Ω).

V16 (x.Waves (Variation 15) . x) r+1 24 . y) = 2 · (y. y) = x + b sin y . y + e sin c2 x f2 Fisheye (Variation 16) Note the reversed order of x and y in the formula.dependent V15 (x.

Popcorn (Variation 17) . y) = exp(x − 1) · (cos(πy).dependent V17 (x. sin(πy)) 25 . y + f sin(tan 3x)) Exponential (Variation 18) V18 (x. y) = (x + c sin(tan 3y).

Power (Variation 19) V19 (x. y) = rsin θ · (cos θ. − sin(πx) sinh(y)) 26 . y) = (cos(πx) cosh(y). sin θ) Cosine (Variation 20) V20 (x.

dependent t = πc2 V22 (x. y) = r · (cos(θ − t/2).Rings (Variation 21) . sin θ) Fan (Variation 22) .dependent V21 (x. sin(θ − t/2)) r · (cos(θ + t/2). sin(θ + t/2)) (θ + f ) mod t > t/2 (θ + f ) mod t ≤ t/2 27 . y) = (r + c2 ) mod (2c2 ) − c2 + r(1 − c2 ) · (cos θ.

sin θ) 2 PDJ (Variation 24) .Blob (Variation 23) .low.waves V23 (x.parametric p1 = pdj.a.parametric p1 = blob.d V24 (x. p4 = pdj. y) = (sin(p1 y) − cos(p2 x). p3 = blob. y) = r · p2 + p1 − p2 (sin(p3 θ) + 1) · (cos θ.c. sin(p3 x) − cos(p4 y)) 28 . p3 = pdj.b. p2 = blob.high. p2 = pdj.

p2 = fan2. p1 = π(fan2.Fan2 (Variation 25) . cos (θ + p1 /2)) 29 .x)2 .parametric Fan2 was created as a parametric alternative to Fan.y t = θ + p2 − p1 trunc( V25 (x. y) = 2θp2 ) p1 t > p1 /2 t ≤ p1 /2 r · (sin (θ − p1 /2) . cos (θ − p1 /2)) r · (sin (θ + p1 /2) .

V27 (x.parametric Rings2 was created as a parametric alternative to Rings.Rings2 (Variation 26) . y) = 2 · (x. y) r+1 30 .val)2 t = r − 2ptrunc r+p 2p + r(1 − p) V26 (x. y) = t · (sin θ. cos θ) Eyeﬁsh (Variation 27) Eyeﬁsh was created to correct the order of x and y in Fisheye. p = (rings2.

Bubble (Variation 28)

V28(x, y) = (4 / (r² + 4)) · (x, y)

Cylinder (Variation 29)

V29(x, y) = (sin x, y)

Perspective (Variation 30) (parametric)

p1 = perspective.angle, p2 = perspective.dist

V30(x, y) = (p2 / (p2 − y sin p1)) · (x, y cos p1)

Noise (Variation 31)

V31(x, y) = Ψ1 · (x cos(2πΨ2), y sin(2πΨ2))
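A Python sketch of the perspective variation (illustrative helper; `angle` and `dist` stand in for perspective.angle and perspective.dist):

```python
import math

def perspective(x, y, angle, dist):
    # V30 (parametric): tilt the image plane by `angle` and project it
    # from a viewpoint at distance `dist`
    k = dist / (dist - y * math.sin(angle))
    return (k * x, k * y * math.cos(angle))
```

With angle = 0 the projection factor k is 1 and the variation is the identity.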

JuliaN (Variation 32) (parametric)

p1 = juliaN.power, p2 = juliaN.dist

p3 = trunc(|p1|Ψ), t = (φ + 2πp3) / p1

V32(x, y) = r^(p2/p1) · (cos t, sin t)

JuliaScope (Variation 33) (parametric)

p1 = juliaScope.power, p2 = juliaScope.dist

p3 = trunc(|p1|Ψ), t = (Λφ + 2πp3) / p1

V33(x, y) = r^(p2/p1) · (cos t, sin t)
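JuliaN can be sketched in Python as follows (illustrative helper; φ = arctan(y/x) is the paper's convention, and Ψ is a uniform random number in [0, 1)):

```python
import math
import random

def julian(x, y, power, dist):
    # V32 (parametric): a generalized Julia; Psi selects one of
    # |power| rotational branches at random
    r = math.hypot(x, y)
    phi = math.atan2(y, x)  # paper convention: phi = arctan(y/x)
    p3 = math.trunc(abs(power) * random.random())
    t = (phi + 2.0 * math.pi * p3) / power
    k = r ** (dist / power)
    return (k * math.cos(t), k * math.sin(t))
```

JuliaScope differs only in multiplying φ by Λ, a random sign of ±1, before the branch rotation.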

Blur (Variation 34)

V34(x, y) = Ψ1 · (cos(2πΨ2), sin(2πΨ2))

Gaussian (Variation 35)

V35(x, y) = (Ψ1 + Ψ2 + Ψ3 + Ψ4 − 2) · (cos(2πΨ5), sin(2πΨ5))

Summing 4 random numbers and subtracting 2 is an attempt at approximating a Gaussian distribution.
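The Gaussian variation ignores its input point entirely; each call emits a fresh random sample. A Python sketch (function name is ours):

```python
import math
import random

def gaussian_sample():
    # V35: the sum of 4 uniform samples minus 2 approximates a Gaussian
    # radius (Irwin-Hall, mean 0); a fifth uniform picks the angle
    radius = sum(random.random() for _ in range(4)) - 2.0
    angle = 2.0 * math.pi * random.random()
    return (radius * math.cos(angle), radius * math.sin(angle))
```

By the central limit theorem the sum of n uniform variables approaches a normal distribution; n = 4 is cheap and close enough for a blur.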

RadialBlur (Variation 36) (parametric)

p1 = radialBlur.angle · (π/2)

t1 = v36 · (Ψ1 + Ψ2 + Ψ3 + Ψ4 − 2), t2 = φ + t1 sin p1, t3 = t1 cos p1 − 1

V36(x, y) = (1/v36) · (r cos t2 + t3 x, r sin t2 + t3 y)

Pie (Variation 37) (parametric)

p1 = pie.slices, p2 = pie.rotation, p3 = pie.thickness

t1 = trunc(Ψ1 p1 + 0.5)

t2 = p2 + (2π/p1) · (t1 + Ψ2 p3)

V37(x, y) = Ψ3 · (cos t2, sin t2)
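Like Gaussian, Pie ignores the input point; it scatters samples into angular wedges. A Python sketch with illustrative parameter names mirroring pie.slices, pie.rotation, and pie.thickness:

```python
import math
import random

def pie(slices, rotation, thickness):
    # V37 (parametric): pick a random slice, a random angle within the
    # slice's thickness, and a random radius in [0, 1)
    t1 = math.trunc(random.random() * slices + 0.5)
    t2 = rotation + (2.0 * math.pi / slices) * (t1 + random.random() * thickness)
    r = random.random()
    return (r * math.cos(t2), r * math.sin(t2))
```

Lowering `thickness` narrows each wedge, leaving visible gaps between the slices.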

Ngon (Variation 38) (parametric)

p1 = ngon.power, p2 = 2π/ngon.sides, p3 = ngon.corners, p4 = ngon.circle

t3 = φ − p2⌊φ/p2⌋

t4 = t3         if t3 > p2/2
     t3 − p2    if t3 ≤ p2/2

k = (p3 · (1/cos t4 − 1) + p4) / r^p1

V38(x, y) = k · (x, y)
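A Python sketch of ngon (illustrative helper; the parameters follow ngon.power, ngon.sides, ngon.corners, and ngon.circle, and φ = arctan(y/x)):

```python
import math

def ngon(x, y, power, sides, corners, circle):
    # V38 (parametric): scale each point so the image emphasizes the
    # outline of a regular polygon with `sides` sides
    r = math.hypot(x, y)
    phi = math.atan2(y, x)  # paper convention: phi = arctan(y/x)
    p2 = 2.0 * math.pi / sides
    t3 = phi - p2 * math.floor(phi / p2)
    t4 = t3 if t3 > p2 / 2.0 else t3 - p2
    k = (corners * (1.0 / math.cos(t4) - 1.0) + circle) / (r ** power)
    return (k * x, k * y)
```

Note that t4 can land on ±p2/2 where cos t4 approaches zero, so a robust implementation would guard that division.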

Curl (Variation 39) (parametric)

p1 = curl.c1, p2 = curl.c2

t1 = 1 + p1 x + p2 (x² − y²), t2 = p1 y + 2 p2 xy

V39(x, y) = (1 / (t1² + t2²)) · (xt1 + yt2, yt1 − xt2)

Rectangles (Variation 40) (parametric)

p1 = rectangles.x, p2 = rectangles.y

V40(x, y) = ((2⌊x/p1⌋ + 1)p1 − x, (2⌊y/p2⌋ + 1)p2 − y)
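The rectangles variation tiles the plane into a grid and reflects each point within its cell; a Python sketch (function and parameter names are ours):

```python
import math

def rectangles(x, y, px, py):
    # V40 (parametric): reflect each point about the center of the
    # px-by-py grid cell that contains it
    return ((2.0 * math.floor(x / px) + 1.0) * px - x,
            (2.0 * math.floor(y / py) + 1.0) * py - y)
```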

Arch (Variation 41)

V41(x, y) = (sin(Ψπv41), sin²(Ψπv41) / cos(Ψπv41))

Tangent (Variation 42)

V42(x, y) = (sin x / cos y, tan y)

Square (Variation 43)

V43(x, y) = (Ψ1 − 0.5, Ψ2 − 0.5)

Rays (Variation 44)

V44(x, y) = (v44 tan(Ψπv44) / r²) · (cos x, sin y)

Blade (Variation 45)

V45(x, y) = x · (cos(Ψrv45) + sin(Ψrv45), cos(Ψrv45) − sin(Ψrv45))

Secant (Variation 46)

V46(x, y) = (x, 1 / (v46 cos(v46 r)))

Twintrian (Variation 47)

t = log₁₀(sin²(Ψrv47)) + cos(Ψrv47)

V47(x, y) = x · (t, t − π sin(Ψrv47))

Cross (Variation 48)

V48(x, y) = √(1/(x² − y²)²) · (x, y)