Geometry developed as a collection of tools for understanding the shapes of nature.
For millennia, symmetry has been recognized as a powerful principle in geometry
and in art.
We begin by reviewing the familiar forms of symmetry, then show that fractals
reveal a new kind of symmetry, symmetry under magnification.
Here we introduce some basic geometry of fractals, with emphasis on the Iterated
Function System (IFS) formalism for generating fractals.
In addition, we explore the application of IFS to detect patterns, and also several
examples of architectural fractals.
First, though, we review familiar symmetries of nature, preparing us for the new
kind of symmetry that fractals exhibit.
The simplest fractals are constructed by iteration. For example, start with a filled-in triangle and iterate this process:
For every filled-in triangle, connect the midpoints of the sides and remove the
middle triangle. Iterating this process produces, in the limit, the Sierpinski
Gasket.
We can describe the gasket as made of three copies, each 1/2 as tall and 1/2 as
wide as the original. But note a consequence of self-similarity:
each of these copies is made of three still smaller copies, so we can say the
gasket is made of nine copies each 1/4 by 1/4 of the original, or 27 copies
each 1/8 by 1/8, or ... . Usually, we prefer the simplest description.
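The counting in the paragraph above can be checked in a few lines; the helper name below is ours, not the course's.

```python
# Sketch: counting the self-similar pieces of the Sierpinski gasket.
# At stage n the gasket decomposes into 3**n copies, each scaled by (1/2)**n.

def gasket_decomposition(n):
    """Return (number of copies, linear scale of each copy) at stage n."""
    return 3 ** n, 0.5 ** n

# The three descriptions mentioned in the text:
print(gasket_decomposition(1))  # (3, 0.5): three copies, each 1/2 the size
print(gasket_decomposition(2))  # (9, 0.25): nine copies, each 1/4 the size
print(gasket_decomposition(3))  # (27, 0.125): 27 copies, each 1/8 the size
```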
2/22/2013
The Mandelbrot set: a different nonlinear transformation gives the most famous of
all fractals.
The gasket is made of three copies of itself, each scaled by 1/2, and two copies
translated. With slightly more complicated rules, we can build fractals that are
reasonable, if crude, approximations of natural objects.
Later we will find the rules to make these fractals.
For now, to help train your eye to find fractal decompositions of objects, try to
find smaller copies of each shape within the shape.
Fractals found in nature differ from our first mathematical examples in two
important ways:
the self-similarity of natural fractals is approximate or statistical, and
it extends over only a limited range of sizes.
For the second point, the forces responsible for a natural fractal structure are
effective over only a limited range of distances.
The waves carving a fractal coastline are altogether different from the forces
holding together the atoms of the coastline.
The rule is this: in the generator, replace each copy of the initiator with a scaled
copy of the generator (specifying orientations where necessary).
Koch curve
Take as initiator the line segment of length 1, and as generator the shape on
the right.
Tents upon tents upon tents ... makes a shape we shall see is very strange: a
curve enclosed in a small box and yet infinitely long.
Though its construction is so simple, the Koch curve has some properties that
appear counterintuitive.
For example, we shall see that it is infinitely long, and that every piece of it, no
matter how small it appears, also is infinitely long.

Cantor set
Again, take as initiator the line segment of length 1, but now the generator
is the shape shown below.
Cut all the tents out of the Koch curve and we are left with something that
appears to be little more than holes. But we can be fooled by appearances.
Here is a picture of the Cantor set resolved to the level of single pixels.
Although so much has been removed that the Cantor set is hardly present at
all, we shall find this fractal in many mathematical, and some physical and even
literary, applications.
As -------- cook, the boiling batter forms bubbles of many different sizes, giving rise
to a fractal distribution of rings.
Some big rings, more middle-size rings, still more smaller rings, and so on.
Here is a picture of a cauliflower and a piece broken from it.
Pieces of the pieces look like the whole cauliflower, and so on for several more
subdivisions.
Some breads are natural fractals. Bread dough rises because yeast produces
bubbles of carbon dioxide.
Many bubbles are small, some are middle-size, a few are large, typical of the
distribution of gaps in a fractal.
So bread dough is a foam; bread is that foam baked solid.
Kneading the dough too much breaks up the larger bubbles and gives
bread of much more uniform (non-fractal) texture.
www.fractenna.com
Now down to work. We learn to grow fractal images, but first must build up the
mechanics of plane transformations.
Geometry of plane transformations is the mechanics of transformations that
produce more general fractals by Iterated Function Systems.
To generate all but the simplest fractals, we need to understand the geometry
of plane transformations. Here we describe and illustrate the four features of
plane transformations.

Scalings
The scaling factor in the x-direction is denoted r.
The scaling factor in the y-direction is denoted s.
Assume there are no rotations. Then if r = s, the
transformation is a similarity; otherwise it is an affinity.

Rotations
The angle θ measures rotations of horizontal lines; the angle φ measures
rotations of vertical lines.

Reflections
Reflection across both the x- and y-axes is equivalent to rotation by 180° about the origin.

Translations
Horizontal translation is measured by e; vertical translation is measured by f.
With this encoding of transformations of the plane, we can make fractals using
the method called Iterated Function Systems (IFS)
Starting from any initial picture P0 and applying all the transformations at each
step, the iterates converge to a unique shape P. Because of this convergence
property, P is called the attractor of the IFS {T1, ..., Tn}. That is,

P = T1(P) ∪ ... ∪ Tn(P).

For concreteness we illustrate this convergence using the gasket rules. Because
all the transformations are applied at each iteration, this is called
the deterministic algorithm.
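The deterministic algorithm can be sketched directly; the starting set, iteration count, and helper names below are our choices, made for illustration.

```python
# A minimal sketch of the deterministic algorithm applied to the gasket rules:
# at every iteration, apply ALL transformations to the current set of points.

def gasket_maps():
    # The three gasket transformations, each scaling by 1/2.
    return [
        lambda p: (p[0] / 2, p[1] / 2),
        lambda p: (p[0] / 2 + 0.5, p[1] / 2),
        lambda p: (p[0] / 2, p[1] / 2 + 0.5),
    ]

def deterministic(points, maps, iterations):
    for _ in range(iterations):
        points = {T(p) for p in points for T in maps}
    return points

# Starting from the single point (0, 0), the three image triangles are
# disjoint, so after n steps there are exactly 3**n points on the gasket.
pts = deterministic({(0.0, 0.0)}, gasket_maps(), 5)
print(len(pts))  # 243 = 3**5
```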
Inverse problems:
finding the transformations to produce a given fractal.
Given a fractal F, the inverse problem is to find affine
transformations T1, ..., Tn for which
F = T1(F) ∪ ... ∪ Tn(F)
1. Decompose F into pieces F1, ..., Fn.
2. For each piece Fi, find an affine transformation Ti for which Ti(F) = Fi. By "find an affine
transformation" we mean find the r, s, θ, φ, e, and f values.
Decomposition
Because the transformations can involve rotations, reflections, and scalings
by different factors in different directions, decomposition is not always as
simple a task as it may seem at first. Here are some examples of more
complicated decompositions.
This fractal is an instructive example for people who have
seen the gasket and a few of its relatives.
The primacy of the gasket in early examples of fractals
makes this shape one of the easiest to recognize.
The most common response to first seeing this picture is,
"It's half a gasket."
Note the bottom left piece is a reflected copy of the whole shape.
Examples
2. For each piece Fi, find an affine transformation Ti for which Ti(F) = Fi. By "find an
affine transformation" we mean find the r, s, θ, φ, e, and f values.
(a) Trace the main features of the fractal and cut out
smaller copies of the tracing.
Keeping in mind that our transformation rules allow only reflections across the
x- and y-axes, some care must be taken with the translation after the reflection.
(b) To allow for reflections, flip the small copies and on the
back trace over the lines on the front. Label the front
image with a small F, to distinguish it from its reflection,
and to indicate the original orientation.
(c) Place the small copies, perhaps rotating or reflecting
them, to make a copy of the original fractal.
[Table of the r, s, θ, φ, e, f values for this decomposition.]
Note the top and bottom left pieces have the same orientation as the entire
fractal, while the bottom right piece is rotated.
Keeping in mind that our transformation rules allow only rotations fixing the
origin, some care must be taken with the translation after the rotation.
[Tables of the r, s, θ, φ, e, f values for these example decompositions.]
When the pieces are not scaled by such obvious amounts, we can find scalings
and rotations by measuring distances and angles.
[Table of the measured scaling factors r (±0.5) and rotation angles (±90°, 180°) for each piece.]
With a bit of thought, now we can find an IFS to generate the tree
Here are the IFS rules, color coded to match each transformation to the
corresponding piece of the tree.
First, it is easy to see the four main branches of the tree are scaled copies of the
whole tree.
The pieces have been pulled apart slightly to emphasize the decomposition.
The trunk is more complicated.
Simply shrinking the tree a lot horizontally works for the top of the trunk, but
makes the bottom of the trunk too thin.
Two shrunken copies of the tree are needed to make the trunk.
The decomposition of this picture into two pieces, the two fists, does not
continue to even one more level.
The fists are not split into smaller pieces. The more levels of the pattern,
the more convincing the fractality of the picture.
Here is an analogous example based on
the Sierpinski tetrahedron.
This is not plausibly fractal: it is a shape made of four
tetrahedra, but the tetrahedra have no substructure.
... and so on. The limit of this process is a single point, not a fractal.
... and so on. The limit of the cow pictures is a Cantor set.
Here we study the random IFS algorithm, another way to render IFS images. This
includes a careful look at what random means.
The Chaos Game is played by specifying a number of vertices (a1, b1), (a2, b2),
..., and (aN, bN), and a scaling factor r < 1.
To play the game, start with the point (x0, y0) and pick one of the vertices,
say (ai, bi), randomly.
The point (x1, y1) is the fraction r of the distance between (ai, bi) and (x0, y0).
That is,
(x1, y1) = r(x0, y0) + (1 - r)(ai, bi)
For example, with four vertices, r = 1/3, and (a2, b2) as the first randomly
selected vertex, we obtain
(x1, y1) = (1/3)(x0, y0) + (2/3)(a2, b2).
(If r = 1, the point (x1, y1) is the same as the initial point (x0, y0); if r = 0, the
point (x1, y1) is the same as selected vertex (ai, bi).)
Now pick another vertex, (aj, bj), randomly.
The point (x2, y2) is given by
(x2, y2) = r(x1, y1) + (1 - r)(aj, bj)
and so on.
The Chaos Game Plot is the sequence of points (x0, y0), (x1, y1), ...
generated this way.
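The steps above can be sketched directly. The vertex list, scaling factor r, starting point, and seed below are illustrative choices, not fixed by the text; the helper name is ours.

```python
# A minimal sketch of the Chaos Game as described above.
import random

def chaos_game(vertices, r, n_points, start=(0.0, 0.0), seed=0):
    rng = random.Random(seed)
    x, y = start
    points = []
    for _ in range(n_points):
        a, b = rng.choice(vertices)          # pick a vertex randomly
        # (x1, y1) = r(x0, y0) + (1 - r)(ai, bi)
        x, y = r * x + (1 - r) * a, r * y + (1 - r) * b
        points.append((x, y))
    return points

# With three vertices and r = 1/2, the plot is the Sierpinski gasket;
# every move stays inside the triangle spanned by the vertices.
pts = chaos_game([(0, 0), (1, 0), (0, 1)], 0.5, 1000)
```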
What would happen if we used just three vertices (a1, b1), (a2, b2), and (a3, b3)?
As with the square, we start with a point in the triangle. (In this example, it's on the
edge of the triangle, but that's still in the triangle.)
Each move is half-way between where we are and a corner of the triangle, so we
never leave the triangle.
Because we select the corners randomly, no part of the triangle is preferred over
any other. So since some parts of the triangle fill in, all parts must fill in.
Thus played with three vertices of a triangle, the chaos game should fill in the
triangle. Right?
Here are more Chaos Game examples. Try to determine the shape
The chaos game often is used as an introduction to the more general Random IFS.
To illustrate its simplicity, frequently the chaos game is performed manually.
While this does convince us of the simplicity of the chaos game, it is less effective in
showing that the chaos game will generate fractals. For example, generating 30 points
manually requires some patience, but does the picture give much hint of a gasket?
Below on the left is the aggregate of these 10 pictures; on the right is the picture of
300 iterates of one point. Both are reasonable representations of the gasket.
The Random Algorithm is another method of rendering the fractal determined by a given
set of rules, T1, ..., TN.
Let {n1, n2, ... } be a random sequence of numbers, each from {1, ..., N}.
Generate a sequence of points

(x1, y1) = Tn1(x0, y0), (x2, y2) = Tn2(x1, y1), ...

We shall see this sequence of points eventually will fill up the fractal to any
prescribed accuracy. For example, here are pictures of the Random Algorithm
applied to the gasket rules, with 500 points and with 5000 points.
If all the transformations of an IFS are contractions, then iterating the IFS is guaranteed to
converge to a unique shape. (A similarity reduces all distances by the same number, r < 1.)
Example: the fixed point of T(x, y) = (x/2, y/2) + (1/2, 0) satisfies x = x/2 + 1/2 and
y = y/2, hence x = 1 and y = 0.
In case you're interested, here are the first 1000 decimal digits of pi
141592653589793238462643383279502884197169399375105820974
944592307816406286208998628034825342117067982148086513282
306647093844609550582231725359408128481117450284102701938
521105559644622948954930381964428810975665933446128475648
233786783165271201909145648566923460348610454326648213393
607260249141273724587006606315588174881520920962829254091
715364367892590360011330530548820466521384146951941511609
433057270365759591953092186117381932611793105118548074462
379962749567351885752724891227938183011949129833673362440
656643086021394946395224737190702179860943702770539217176
293176752384674818467669405132000568127145263560827785771
342757789609173637178721468440901224953430146549585371050
792279689258923542019956112129021960864034418159813629774
771309960518707211349999998372978049951059731732816096318
595024459455346908302642522308253344685035261931188171010
003137838752886587533208381420617177669147303598253490428
Whenever we get to a 00 pair, we don't have to say what the next number is.
It MUST be 1, because otherwise the infinite sequence would contain 000.
Consequently, we don't have to list the entire infinite sequence to specify it completely. We say
only once that the sequence does not contain the triple 000, and then whenever the pair 00
occurs, we know the next number must be 1.
Similar arguments show that all finite sequences must occur somewhere (in fact, infinitely
often) in an infinite random sequence. If any one is missing, we can use this missing sequence
to describe the infinite sequence without listing it in its entirety.
One method starts with the time on your computer's clock, multiplies by a large
number, divides by another large number, and takes the remainder.
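The multiply-and-take-remainder idea described above is essentially a linear congruential generator. Here is a sketch using the classic Park-Miller constants, which are an assumption on our part; the text names no particular values.

```python
# Sketch of a pseudorandom generator of the kind described: repeatedly
# multiply by a large number and take the remainder by another.
# a = 16807 and m = 2**31 - 1 are the Park-Miller "minimal standard"
# constants (our choice, not specified in the text).
import itertools

def lcg(seed, a=16807, m=2**31 - 1):
    """Yield pseudorandom numbers in (0, 1)."""
    state = seed
    while True:
        state = (a * state) % m
        yield state / m

gen = lcg(seed=12345)
sample = list(itertools.islice(gen, 5))
print(sample)   # five values in (0, 1), reproducible from the seed
```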
Addresses in fractals
To understand why the Random and Deterministic algorithms generate the
same pictures, we also need to understand the notion of the address of parts
of a fractal.
Addresses are the main tool for relating fractals and
dynamics.
The order of the elements of an address is important, and
to some people counterintuitive.
The notion of addresses is familiar in one dimension from the decimal expansion of
real numbers in the unit interval, [0, 1].
The left-most digit of the decimal expansion of x tells into which 10th of [0, 1] x falls.
The second digit tells into which hundredth - that is, which 10th of the 10th - x falls.
And so on. Here is an illustration
To relate this to IFS, we need IFS rules to generate the unit interval. There are
(infinitely) many families of such rules, but for ease of interpretation with the
decimal expansion, we use
Ti(x) = x/10 + i/10
for i = 0, ..., 9. Then
Ti(I) is the ith 10th,
TiTj(I) is the jth 100th of the ith 10th,
and so on. Note the order of the subscripts. This is the tricky part of
understanding addresses. We say
the digit i is the address of the ith 10th,
the pair ij is the address of the jth 100th of the ith 10th,
and so on. Notice from left to right the address digits specify smaller intervals.
Addresses are unique.
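The one-dimensional address rules can be checked with exact rational arithmetic; the helper names `T` and `address` below are ours.

```python
# Sketch: the digits of the decimal expansion of x in [0, 1) are exactly
# its address under the IFS rules T_i(x) = x/10 + i/10, i = 0, ..., 9.
# Fractions keep the arithmetic exact (floats would risk off-by-one digits).
from fractions import Fraction

def T(i, x):
    """The ith IFS rule for the unit interval."""
    return x / 10 + Fraction(i, 10)

def address(x, length):
    """Return the first `length` address digits of x: its decimal digits."""
    digits = []
    for _ in range(length):
        x *= 10
        d = int(x)        # which 10th of the current interval x falls into
        digits.append(d)
        x -= d
    return digits

x = Fraction(437, 1000)
print(address(x, 3))      # [4, 3, 7]: x is in T4(I), in T4T3(I), in T4T3T7(I)
# Note the order of the subscripts: applying the rules in address order
# rebuilds a point whose expansion begins with those digits.
assert T(4, T(3, T(7, Fraction(0)))) == x
```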
Addresses of a square
For concreteness in the two-dimensional case, we
consider the transformations
T1(x, y) = (x/2, y/2)
T2(x, y) = (x/2, y/2) + (1/2, 0)
T3(x, y) = (x/2, y/2) + (0, 1/2)
T4(x, y) = (x/2, y/2) + (1/2, 1/2)
In order of application, addresses are read right to left: the left-most digit is the
index of most recent transformation applied.
Because this seems confusing sometimes, we emphasize the order of
addresses is consistent with the order of composition of functions:
ij is the address of TiTj(S).
To each of the 1/2 × 1/2 squares Ti(S) we associate the length 1 address i.
Each of these squares can be subdivided by iterating this decomposition process. For
example,
T1(S) = T1T1(S) ∪ T1T2(S) ∪ T1T3(S) ∪ T1T4(S).
To each of the 1/4 × 1/4 squares TiTj(S) we associate the length 2 address ij,
and so on.
Longer addresses
Because each Ti is a contraction, longer addresses specify smaller portions of S.
For example, here are the length 3 addresses for the square transformations.
That longer addresses specify locations with greater accuracy is part of our
common experience. Let's abandon geometrical abstraction and turn to our own
sense of place.
Where are you?
At some fairly crude level, you are on the earth.
More precisely, you are in Asia, on the earth.
To see how address 21 winds up in the indicated position, start with the solid triangle S,
then apply transformation T1, obtaining T1(S).
For example, the diameter of a circle is just the common notion of diameter; the
diameter of a square is the diagonal length of the square.
Some diameters
First, fix a resolution, usually one pixel, to which the picture is to be
rendered.
Then we show
long enough addresses specify regions smaller than a
pixel,
that randomness guarantees all finite addresses are visited
by the points generated by the random IFS algorithm, and
that consequently every pixel of the attractor is visited.
Because all the IFS rules are contractions, the diameter of a region of
address length N goes to 0 as N goes to infinity. We illustrate this with the
four transformations

T1(x, y) = (x/2, y/2)
T2(x, y) = (x/2, y/2) + (1/2, 0)
T3(x, y) = (x/2, y/2) + (0, 1/2)
T4(x, y) = (x/2, y/2) + (1/2, 1/2)

Observe diam(S) = √2, diam(Ti(S)) = √2/2, diam(TjTi(S)) = √2/4, and in general

diam(TiN...Ti1(S)) = √2/2^N.

Consequently, diam(TiN...Ti1(S)) → 0 as N → ∞.
Because the diameters of these squares are shrinking to 0, the address 111... =
1^∞ corresponds to a single point (x0, y0).
Alternately, note that the only point left unchanged by repeated application of
T1 is the point with address 111... = 1^∞.
Because

diam(TiN...Ti1(S)) = √2/2^N,

if we take N large enough that

√2/2^N < resolution,

then the Random Algorithm will fill in the picture to the desired resolution if
all regions of address length N are visited.
Suppose the first transformation applied is Ti1, the next Ti2, and so on.
What is the effect of these transformations on the address of the point, and on
the address length N region in which the point lies?
For definiteness, say we start with the fixed point (x0, y0) of T1.
The address of this fixed point is 111... = 1.
Each new transformation applied has this effect on the N-digit address:
discard the right-most digit, shift the remaining N-1 digits one place to
the right, and insert the new index on the left.

point (x0, y0): address 1^∞, length-N region 1^N
point Ti1(x0, y0): address i1 1^∞, length-N region i1 1^(N-1)
point Ti2Ti1(x0, y0): address i2 i1 1^∞, length-N region i2 i1 1^(N-2)
point Ti3Ti2Ti1(x0, y0): address i3 i2 i1 1^∞, length-N region i3 i2 i1 1^(N-3)
...
point TiN...Ti1(x0, y0): address iN ... i2 i1 1^∞, length-N region iN ... i2 i1
...

Because the sequence i1, i2, i3, ... is random, every string of N indices eventually
occurs. Consequently, every region with address length N will be visited by the
points (xk, yk). To the specified resolution, the Random Algorithm will generate
the same picture as the Deterministic Algorithm.
Example
For example, suppose we specify the resolution corresponding to addresses of length N =
3 and we start with the point (x0, y0) with address 1^∞.
To the specified resolution, (x0, y0) lies in the region with address 111.
If T2 is the first transformation applied, then the resulting point (x1, y1) = T2(x0, y0) lies in the
region with address 211.
If T3 is the next transformation applied, then the resulting point (x2, y2) = T3(x1, y1) lies in the
region with address 321.
If T4 is the next transformation applied, then the resulting point (x3, y3) = T4(x2, y2) lies in the
region with address 432.
Continuing will fill in all 4^3 = 64 regions of address length 3.
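The digit-shifting bookkeeping above can be simulated; the helper names, seed, and step count below are our choices. With a few thousand random steps, every length-3 region should be visited.

```python
# Sketch: run the Random Algorithm on the four square transformations and
# track which length-3 address regions are visited.
import random

def square_map(i, p):
    # T1..T4 from the text: scale by 1/2, translate toward one of four corners.
    x, y = p
    e, f = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)][i - 1]
    return (x / 2 + e, y / 2 + f)

rng = random.Random(1)
point = (0.0, 0.0)              # fixed point of T1, address 1^infinity
window = (1, 1, 1)              # its length-3 address region, 111
visited = set()
for _ in range(5000):
    i = rng.randint(1, 4)
    point = square_map(i, point)
    # discard the right-most digit, shift right, insert the new index left:
    window = (i,) + window[:-1]
    visited.add(window)

print(len(visited))             # expect 64: all 4**3 length-3 addresses
```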
Here is a way to find the probabilities that give approximately uniform fill of the
attractor.
We take p4 to range from 0 to 1 in steps of .05, and p1 = p2 = p3 = (1 - p4)/3.
Starting with p4 = 0, the first picture is the gasket. Do you see why?
Driven IFS
What happens if we use a non-random sequence in the random IFS algorithm?
In particular, can we run the random IFS algorithm with a sequence of data:
daily closing prices of a stock, or the intervals between your heartbeats, for
example?
Will patterns in the IFS picture reveal patterns in the data?
Because we use a data sequence to select the order in which the
transformations are applied, we call this approach driven IFS. The
data drive the order in which the IFS rules are applied.
Stewart's experiments
For most of his tests, Stewart used as chaos game fixed points the
vertices (0, 0), (√3/2, 1/2), and (0, 1) of an equilateral triangle. The
corresponding IFS rules are
T1(x, y) = (x/2, y/2)
T2(x, y) = (x/2, y/2) + (0, 1/2)
T3(x, y) = (x/2, y/2) + (√3/4, 1/4)
For reference, a random number generator (1000 points) gives
The data values yk often are measured as decimals, and because we are
converting these to only four values, the process of turning the yk into ik is
called coarse-graining.
The range of y values corresponding to a symbol is the bin of that symbol.
Pictures are generated from sequences of numbers, coarse-grained into three
equal-size bins. This picture is produced from sin(t) + sin(t²), t = 1, 2, ..., 1000.
Here is the time series y1, y2, ..., y1000 generated by 1000 iterates of the logistic
map with equal-size bin lines drawn, and the corresponding driven IFS
equal-size bins Divide the range of values into four intervals of equal length.
equal weight bins Arrange the bin boundaries so (approximately) the same number of points lie in each
bin.
zero-centered bins For data whose sign is important, take 0 as the boundary between bins 2 and 3; place
the other boundaries symmetrically above and below 0. Unlike the first two cases, this is a family of
coarse-grainings depending on the placements of the other two bin boundaries.
mean-centered bins Take the mean of the data to be the boundary between bins 2 and 3; place the other
boundaries symmetrically above and below the mean, usually expressed as a multiple of the standard
deviation.
median-centered bins Take the median of the data to be the boundary between bins 2 and 3; place the
other boundaries symmetrically above and below the median, usually expressed as a multiple of the
range. Note the equal-weight bins are a special case of this.
Here is the time series y2 - y1, y3 - y2, ..., y1000 - y999 generated by successive
differences of 1000 iterates of the logistic map with equal-size bin lines drawn,
and the corresponding driven IFS
Here is the time series y1, y2, ..., y1000 generated by 1000 iterates of the logistic
map with equal-weight bin lines drawn, and the corresponding driven IFS
This IFS generates the filled-in unit square. Consequently, any departure from
uniform randomness will be visible through departures from uniform fill of the
square.
Here are some examples, all with 10000 points.
Here is the time series y2 - y1, y3 - y2, ..., y1000 - y999 generated by successive
differences of 1000 iterates of the logistic map with equal-weight bin lines
drawn, and the corresponding driven IFS
uniform random
sequence
p1 = p4 = 0.1;
p2 = p3 = 0.4
p1 = 0.1;
p2 = p3 = p4 = 0.3
p1 = p3 = 0.1;
p2 = p4 = 0.4
How can we convert a DNA sequence into an IFS picture?

[Driving data: several hundred bases of a sodium channel DNA sequence,
beginning T G A A T T C A A G T T T G G T G C A A A A C T T G G C A C A G
T T A T C C G ... and ending ... T G T A C A T A T G T G T A C A T A T A C
A A T.]
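One way to do this, sketched below, drives the square IFS with the bases: each base selects one corner of the square. The corner assignment is a common convention but an assumption on our part, as is the helper name `cgr`.

```python
# A sketch of driving an IFS with a DNA sequence (often called the chaos
# game representation): each base moves the point halfway toward one
# corner of the unit square. The corner assignment below is one common
# convention, not fixed by the text.

CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr(sequence):
    """Return the driven-IFS points for a DNA string."""
    x, y = 0.5, 0.5                       # neutral starting point
    points = []
    for base in sequence:
        a, b = CORNERS[base]
        x, y = (x + a) / 2, (y + b) / 2   # move halfway toward the corner
        points.append((x, y))
    return points

# The opening bases of the sequence above:
pts = cgr("TGAATTCAAG")
print(pts[0])   # (0.75, 0.25): halfway from (0.5, 0.5) toward corner T
```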
Here we will learn to recognize the visual signature of IFS driven by
cyclic data, that is, numbers that repeat a particular pattern.
The simplest repeated sequence is constant, just repeat the same
number,
for example, 11111... = 1.
Starting with (1/2, 1/2), applying T1(x, y) = (x/2, y/2) repeatedly produces a
sequence of points converging to the point with address 1111... = 1^∞.
Because it is gotten by applying T1 infinitely many times, the address of this
limiting point is 1^∞.
We show this is the fixed point of T1, and find its coordinates.
Fixed point. Say (x*, y*) is the point with address 1^∞. Then
T1(x*, y*) has address 1(1^∞) = 1^∞.
Because T1(x*, y*) and (x*, y*) have the same (infinite) address, they must be the same point.
That is,
T1(x*, y*) = (x*, y*)
and (x*, y*) is the fixed point of T1.
Coordinates. We see
(x*, y*) = T1(x*, y*) = (x*/2, y*/2),
and so (x*, y*) = (0, 0).
Similar arguments show 2^∞, 3^∞, and 4^∞ are the addresses of the fixed points of T2, T3,
and T4, respectively. These points have coordinates (1, 0), (0, 1), and (1, 1), respectively.
For example, the fixed point of T2 satisfies x = x/2 + 1/2 and y = y/2, so (x, y) = (1, 0).
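These fixed points can be confirmed numerically; the helper name and iteration count below are our choices.

```python
# Sketch: repeated application of each square transformation T_i converges
# to the claimed fixed point.

def T(i, p):
    """Scale by 1/2 and translate toward corner i of the unit square."""
    x, y = p
    e, f = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)][i - 1]
    return (x / 2 + e, y / 2 + f)

fixed = {1: (0, 0), 2: (1, 0), 3: (0, 1), 4: (1, 1)}
for i, (fx, fy) in fixed.items():
    p = (0.5, 0.5)
    for _ in range(60):           # 60 halvings: error far below tolerance
        p = T(i, p)
    assert abs(p[0] - fx) < 1e-12 and abs(p[1] - fy) < 1e-12
print("fixed points confirmed:", fixed)
```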
2-Cycle Coordinates
The limiting points in the last example have coordinates (1/3, 0) and (2/3, 0).

2-Cycle Addresses
The limiting points in the last example [121212...] have addresses (12)^∞ and (21)^∞. To
see this, recall the relation between the address and the order in which the transformations
are applied. The sequence 121212... gives points in regions with addresses

1, 21, 121, 2121, 12121, 212121, 1212121, 21212121, ...

Alternate entries in the sequence are 1, 121, 12121, ... and 21, 2121, 212121, ...
From the first we obtain (12)^∞, and from the second (21)^∞.

To see this another way, say (x1, y1) is the point with address (12)^∞ and (x2, y2) is the
point with address (21)^∞. Then notice

T2(x1, y1) has address 2(12)^∞ = 2(12)(12)(12)(12)... = (21)(21)(21)(21)... = (21)^∞.

Because T2(x1, y1) and (x2, y2) have the same (infinite) address, T2(x1, y1) = (x2, y2).
By a similar argument, T1(x2, y2) = (x1, y1).
And

(x1, y1) = T1T2(x1, y1) = T1(x1/2 + 1/2, y1/2) = (x1/4 + 1/4, y1/4),

so

x1 = x1/4 + 1/4 and y1 = y1/4.

Solving for x1 and y1, we obtain (x1, y1) = (1/3, 0). A similar argument gives (x2, y2) = (2/3, 0).
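The 2-cycle limits (1/3, 0) and (2/3, 0) can be confirmed numerically with the same two maps; the iteration count below is our choice.

```python
# Sketch: iterate the composed maps of the 2-cycle 121212... and check
# convergence to the two limiting points computed above.

def T1(p):
    return (p[0] / 2, p[1] / 2)

def T2(p):
    return (p[0] / 2 + 0.5, p[1] / 2)

p = (0.5, 0.5)   # converges to the fixed point of T1T2, address (12)^inf
q = (0.5, 0.5)   # converges to the fixed point of T2T1, address (21)^inf
for _ in range(60):
    p = T1(T2(p))
    q = T2(T1(q))
assert abs(p[0] - 1/3) < 1e-12 and abs(p[1]) < 1e-12
assert abs(q[0] - 2/3) < 1e-12 and abs(q[1]) < 1e-12
print(p, q)      # approximately (1/3, 0) and (2/3, 0)
```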
3-Cycles
3-Cycle Addresses
The repeating pattern T1 followed by T2 followed by T3 produces a sequence
converging to the three points with addresses (321)^∞, (132)^∞, and (213)^∞.

3-Cycle Coordinates
Solving the fixed point equations gives the coordinates of each point. For
example, because (x, y) and T3T2T1(x, y) have the same (infinite) addresses,
they are the same point. Similarly for the other two points; for example,
(4/7, 1/7) is the fixed point of T2T1T3.

What happens if we apply T3 followed by T1 followed by T2 instead of
T1 followed by T2 followed by T3? Do we obtain different limiting points?

Not at all. To understand why this is so, write the first several terms of both sequences:

123123123123123123123123123123...
312312312312312312312312312312...

The second sequence is just the first shifted two terms to the left. The first
sequence is the same as the second, but starting from T2T1(0.5, 0.5) instead of
from (0.5, 0.5).
As with the fractals generated by regular IFS, here the final pattern does not
depend on the starting point. We prefer to start with (0.5, 0.5) because this
is the most neutral choice.
This should be no surprise: recall the attractor of an IFS does not depend on
the initial picture. (That is a bit different from the situation at hand, but it
certainly makes the result plausible.)

Can you find an order of cycling through 1, 2, and 3 that converges
to a different triple of points? What about the sequence
T2 followed by T1 followed by T3?

N-Cycles
The same points are produced by any cyclic permutation of the original sequence.
For example, the cyclic permutations of
T2 followed by T1 followed by T3
are
T1 followed by T3 followed by T2, and T3 followed by T2 followed by T1.
Continuing, the subsquares T1(S14) = S114, T1(S24) = S124, T1(S34) = S134, ...,
and T3(S34) = S334 contain no points. That is, if i = 4, j = 4, or k = 4, the
subsquare Sijk contains no points (below left).
Similarly, the subsquare Sijkm contains no points if any of i, j, k, or m is 4 (below right).
The result of continuing this process is clear: if T4 is never applied, every
square whose address contains a 4 is empty.
With this restriction, we see the IFS generates a right isosceles Sierpinski gasket.
This is no surprise, because the IFS {T1, T2, T3} generates a right isosceles
Sierpinski gasket.
First, note these restrictions imply no points land in the squares with addresses
14, 23, 32, and 41. That is, the shaded squares will contain no points.
Because S41 contains no points, the subsquares T1(S41) = S141, T2(S41) = S241, T3(S41) = S341,
and T4(S41) = S441 contain no points.
Continuing, here are the pictures showing the subsquares of address length 4
(left) and 5 (right) containing no points.
Here is a movie showing the first few iterates of the driven IFS with these
restrictions. In contrast to the diagrams above, here the nonempty regions
are shaded.
Graphical Representation
For those Driven IFS determined completely by forbidden pairs, a compact
representation of the IFS can be given by a graph showing the allowed pairs.
The graph has four vertices, one for each Ti,
and an edge from vertex i to vertex j if Ti can be followed immediately by Tj.
For example, the Driven IFS with the single forbidden pair 41 has this graph:
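The graph can be represented as a set of allowed edges. The helper below, with names of our own choosing, converts forbidden address pairs into removed edges, using the convention from the addresses section that the pair ij means Ti applied after Tj.

```python
# Sketch of the graph representation of a driven IFS with forbidden pairs.
# Vertices are the four transformations; an edge i -> j means T_i may be
# followed immediately by T_j.

def allowed_edges(forbidden_addresses):
    """Address pair 'ji' means T_i followed by T_j (most recent index on
    the left), so forbidding address ji removes the edge i -> j."""
    removed = {(int(a[1]), int(a[0])) for a in forbidden_addresses}
    return {(i, j) for i in range(1, 5) for j in range(1, 5)
            if (i, j) not in removed}

# The single forbidden pair 41 removes exactly the edge 1 -> 4:
edges = allowed_edges({"41"})
print(len(edges))   # 15: all 16 possible edges except 1 -> 4
```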
Equal-Size Bins
The numbers separating the bins are called bin boundaries, B1, B2, and B3:
One method of converting the measured data y1, y2, ..., yN into a symbol string i1, i2,
..., iN is first to find the range of the values, that is, the interval between the maximum
(max) and the minimum (min) of the yk.
Next, divide the range (min, max) into four equal-size bins:
yk lies in bin1 if min ≤ yk < min + .25(max - min)
and so on for bin2, bin3, and bin4. Then set
ik = 1 if yk lies in bin1
ik = 2 if yk lies in bin2
ik = 3 if yk lies in bin3
ik = 4 if yk lies in bin4
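The equal-size binning above can be sketched as a small function; the name and the tie-breaking choice for the maximum value are ours.

```python
# Sketch of equal-size-bin coarse-graining: convert data values y_k into
# symbols i_k in {1, 2, 3, 4} by quartering the range [min, max].

def coarse_grain(ys):
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / 4
    symbols = []
    for y in ys:
        # bin index 1..4; the maximum value itself is assigned to bin 4
        i = min(int((y - lo) / width) + 1, 4) if width > 0 else 1
        symbols.append(i)
    return symbols

data = [0.0, 0.2, 0.3, 0.55, 0.8, 1.0]
print(coarse_grain(data))  # [1, 1, 2, 3, 4, 4]
```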
Zero-Centered Bins
While there is no reason to think 0 holds any special importance for the data set
y1, y2, ..., yN,
it may for the first differences
For instance, a positive first difference means the data values are increasing, a negative
first difference means the data values are decreasing. This has clear significance for
financial data.
Equal-Weight Bins
Sort the data values and take
B3 to be the element one-quarter of the way from the top of the sorted list,
B2 the element one-half of the way from the top of the sorted list, and
B1 the element three-quarters of the way from the top of the sorted list.
Equal-weight bins can be called a maximum entropy partition.
Mean-Centered Bins
Another way to bin the data is to set B2 = m, the mean of the data values, and set B1 and
B3 to some fraction or multiple of the standard deviation.
Median-Centered Bins
Another way to bin the data is to set B2 equal to the median of the data values, and set
B1 and B3 to some fraction of the range.
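Both rules can be sketched together; the fraction or multiple (c of the standard deviation, f of the range) is a free choice, defaulted arbitrarily here:

```python
import statistics

def mean_centered_boundaries(y, c=0.5):
    """B2 at the mean; B1 and B3 at c standard deviations below and above it."""
    m = statistics.mean(y)
    s = statistics.pstdev(y)
    return m - c * s, m, m + c * s

def median_centered_boundaries(y, f=0.25):
    """B2 at the median; B1 and B3 at a fraction f of the range below and above it."""
    med = statistics.median(y)
    r = max(y) - min(y)
    return med - f * r, med, med + f * r
```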
[Driven IFS plots: Citigroup, American International, General Electric]
Another feature, especially strong in the GE graph, is the absence of points along the 1-4 diagonal. Although there are points in addresses 1 and 4 (even in addresses 11 and 44), there are
very few consecutive moves of > +2.5% followed by < -2.5%, and vice versa.
Contrast this with Sonus, where most of the activity is along the 1-4 diagonal, indicating relatively
wild swings in closing price. The heavy cluster of points in corner 1 does not suggest a successful
stock.
[Driven IFS plots: Dell, Sonus Networks, Qwest, Tyson Foods, Colgate-Palmolive, Lucent]
Roughly speaking, the older economy companies - Citigroup, Tyson, Colgate-Palmolive, GE, and
AIG - have stronger 2-3 diagonals, indicating less volatile behavior.
β Scaling
Next we experiment with uniformizing the driven IFS by scaling the bin
boundaries with the stock's β.
Here are the rescaled driven IFS (left), each grouped with its original (right) for
comparison.
The β factor of a stock is the volatility of the stock relative to that of the market.
β > 1 means the stock is more volatile than the market.
β = 1 means the stock and the market are equally volatile.
β < 1 means the stock is less volatile than the market.
Citigroup: β = 1.33
The 1-4 diagonals of the tech stocks reflect larger daily percentage changes, so
we would expect higher volatility.
As a quantitative test of this, Thornton scaled the bin boundaries with each stock's
β. For example, Qwest has β = 2.15, so the first and third bin boundaries are set
at 2.15 x 2.5% = 5.38% above and below 0.
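This β-rescaled coarse-graining can be sketched as follows (the same zero-centered rule, with the outer boundaries stretched by β; the function name is illustrative):

```python
def beta_scaled_symbols(pct_changes, beta, base=2.5):
    """Zero-centered symbols with outer boundaries at +/- beta * base percent."""
    d = beta * base   # e.g. Qwest: 2.15 * 2.5% = 5.375%, about 5.38%
    symbols = []
    for z in pct_changes:
        if z < -d:
            symbols.append(1)
        elif z < 0:
            symbols.append(2)
        elif z < d:
            symbols.append(3)
        else:
            symbols.append(4)
    return symbols
```

With beta = 1 this reduces to the unscaled ±2.5% binning; a volatile stock like Sonus (β = 4.95) then needs a move of about ±12.4% to register a 1 or a 4.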
American International: β = 0.85
Dell: β = 1.83
A text is a string, but in an alphabet of more than four symbols. How can we
convert this into a string in an alphabet of four symbols?
Sonus: β = 4.95
Tyson: β = 0.47
Note that in general rescaling the bin boundaries by the stock's β makes the
driven IFS look much more alike. Note particularly the change in the Sonus IFS.
The obvious exception is Tyson, whose β is so small that the rescaling puts many
more points into bins 1 and 4.
Phonological analysis

Do these poems exhibit similar phonological patterns? What about patterns in Lewis
Carroll's Jabberwocky, which contains many fabricated words?
Will Eliot's texts, Tradition and the Individual Talent for example, reveal different patterns?

Category            Examples    Description
vowels              a, e        no vocal tract friction, most sonorous
glides              y, w        consonantal forms of vowels i and u, slightly less sonorous
liquids             l, r        friction caused by the tongue
nasals              m, n, ng    slightly more friction, articulated through the nose
obstruents                      varying degrees of vocal tract constriction
fricatives          s, f, th    partial vocal tract constriction
plosives            p, t, k     air buildup with complete vocal tract closure, then released in a short burst
affricatives        ch, dg      fricative and plosive sound combined
syllabic boundary
word boundary                   word boundaries supersede syllable boundaries
First she analyzed these pieces using a four-bin IFS with the familiar transformations
T1: vowels
T2: glides
Some analysis
In all four the squares 22, 32, and 42 are empty, indicating only a vowel can follow a glide.
In all four the squares 111, 234, 334, and 434 are empty.
[Driven IFS plots: Hollow Men, Jabberwocky]
Some analysis
In all four the squares 22, 32, 42, 52, 62, 72, 82, and 92 are empty: only a vowel can follow a glide.
In all four, 23 and 26 are empty: a glide cannot follow a liquid or an affricate.
In all four, among *8 (for * = 1, ..., 7) the most filled in are 58 and 78: syllables begin with fricatives or
plosives more often than with other phonemes.
Among *9 the most filled are 19, 59, and 79: words begin with vowels as often as with fricatives or plosives.
In all four, among 8* the most filled is 81: syllables are more likely to end with vowels.
T1: vowels, T2: glides, T3: liquids, T4: nasals, T5: fricatives, T6: affricatives,
T7: plosives, T8: syllabic boundary, T9: word boundary

[Driven IFS plot: Love Song of J. Alfred Prufrock]
The common sonority profile is an arc: often a syllable begins with a low sonority phoneme, followed by one
of higher sonority, then lower.
[Driven IFS plot: Hollow Men]
831 is more densely filled than 871: after a vowel a syllable is more likely to end in a nasal than in a plosive.
781 is densely filled: after a syllable ends with a vowel the next syllable can start with a plosive.
The rules governing nasals and liquids are similar: in each of the four plots, the pattern in square 3 is similar
to that in square 4.
In each of the four plots, the pattern in 5 is similar to that in 7 (6 also is similar, though much less filled),
indicating that the three subcategories of obstruents behave similarly.
Although 57 and 75 are fairly populated, 55 and 77 are not, and in general the pairs ii are (nearly) empty, indicating
that only rarely do two phonemes of the same phonemic category occur adjacently within a syllable.
[Driven IFS plot: Jabberwocky]
Finally, the patterns in 8 and 9 are similar, though 9 is more filled, indicating many one-syllable words.
Although the sonority profile refers to a syllable rather than to a word, this shows the same pattern of sonority
near a syllable boundary is repeated at word boundaries.
T5(x,y) = (x/3, y/3) + (1/3, 1/3)    T6(x,y) = (x/3, y/3) + (2/3, 1/3)
T8(x,y) = (x/3, y/3) + (1/3, 2/3)    T9(x,y) = (x/3, y/3) + (2/3, 2/3)
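With the nine maps arranged on a 3x3 grid (T1 at the lower left through T9 at the upper right), driving the IFS with a symbol string takes only a few lines. A minimal sketch:

```python
def driven_ifs(symbols):
    """Driven IFS points: for each symbol s in turn, apply
    Ts(x, y) = (x/3 + a, y/3 + b), where the translation (a, b)
    places T1..T9 on a 3x3 grid, left to right, bottom to top."""
    offsets = {s: ((s - 1) % 3 / 3.0, (s - 1) // 3 / 3.0) for s in range(1, 10)}
    x, y = 0.5, 0.5   # starting point; its influence shrinks by 1/3 each step
    points = []
    for s in symbols:
        a, b = offsets[s]
        x, y = x / 3.0 + a, y / 3.0 + b
        points.append((x, y))
    return points
```

Plotting the resulting points (with matplotlib, say) reproduces the driven IFS pictures: empty subsquares correspond to pairs of symbols that never occur in sequence.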