
Karma Dajani and Sjoerd Dirksin

December 18, 2008


Contents

**1 Introduction and preliminaries 5**

1.1 What is Ergodic Theory? . . . . . . . . . . . . . . . . . . . . . 5

1.2 Measure Preserving Transformations . . . . . . . . . . . . . . 6

1.3 Basic Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.4 Recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1.5 Induced and Integral Transformations . . . . . . . . . . . . . . 15

1.5.1 Induced Transformations . . . . . . . . . . . . . . . . . 15

1.5.2 Integral Transformations . . . . . . . . . . . . . . . . . 18

1.6 Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.7 Other Characterizations of Ergodicity . . . . . . . . . . . . . . 22

1.8 Examples of Ergodic Transformations . . . . . . . . . . . . . . 24

**2 The Ergodic Theorem 29**

2.1 The Ergodic Theorem and its consequences . . . . . . . . . . . 29

2.2 Characterization of Irreducible Markov Chains . . . . . . . . . 41

2.3 Mixing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

**3 Measure Preserving Isomorphisms and Factor Maps 47**

3.1 Measure Preserving Isomorphisms . . . . . . . . . . . . . . . . 47

3.2 Factor Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.3 Natural Extensions . . . . . . . . . . . . . . . . . . . . . . . . 52

**4 Entropy 55**

4.1 Randomness and Information . . . . . . . . . . . . . . . . . . 55

4.2 Definitions and Properties . . . . . . . . . . . . . . . . . . . . 56

4.3 Calculation of Entropy and Examples . . . . . . . . . . . . . . 63

4.4 The Shannon-McMillan-Breiman Theorem . . . . . . . . . . . 66

4.5 Lochs’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 72


**5 Hurewicz Ergodic Theorem 79**

5.1 Equivalent measures . . . . . . . . . . . . . . . . . . . . . . . 79

5.2 Non-singular and conservative transformations . . . . . . . . . 80

5.3 Hurewicz Ergodic Theorem . . . . . . . . . . . . . . . . . . . . 83

**6 Invariant Measures for Continuous Transformations 89**

6.1 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

6.2 Unique Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . 96

**7 Topological Dynamics 101**

7.1 Basic Notions . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

7.2 Topological Entropy . . . . . . . . . . . . . . . . . . . . . . . 108

7.2.1 Two Definitions . . . . . . . . . . . . . . . . . . . . . . 108

7.2.2 Equivalence of the two Definitions . . . . . . . . . . . . 114

7.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

**8 The Variational Principle 127**

8.1 Main Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 127

8.2 Measures of Maximal Entropy . . . . . . . . . . . . . . . . . . 136

Chapter 1

Introduction and preliminaries

1.1 What is Ergodic Theory?

It is not easy to give a simple definition of Ergodic Theory because it uses techniques and examples from many fields such as probability theory, statistical mechanics, number theory, vector fields on manifolds, group actions of homogeneous spaces and many more. The word ergodic is a mixture of two Greek words: ergon (work) and odos (path). The word was introduced by Boltzmann (in statistical mechanics) regarding his hypothesis: for large systems of interacting particles in equilibrium, the time average along a single trajectory equals the space average. The hypothesis as it was stated was false, and the investigation of the conditions under which these two quantities are equal led to the birth of ergodic theory as it is known nowadays.

A modern description of what ergodic theory is would be: it is the study of the long term average behavior of systems evolving in time. The collection of all states of the system forms a space X, and the evolution is represented by either

– a transformation T : X → X, where T x is the state of the system at time t = 1 when the system (i.e., at time t = 0) was initially in state x (this is analogous to the setup of discrete time stochastic processes);

– if the evolution is continuous or if the configurations have spatial structure, a group of transformations G (like Z^2, R, R^2) acting on X, i.e., every g ∈ G is identified with a transformation T_g : X → X, with T_{gg'} = T_g ◦ T_{g'}.

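Boltzmann's "time average equals space average" can be seen concretely on a computer. The sketch below (the helper names are ad hoc, not from these notes) iterates a circle rotation T x = x + θ (mod 1), a map treated in detail later in this chapter, and compares the average of an observable along a single orbit with its integral over the whole space.

```python
import math

def rotation(theta):
    """The translation T x = x + theta (mod 1); see example (a) in Section 1.3."""
    return lambda x: (x + theta) % 1.0

def time_average(T, f, x, n):
    """Average of f along the first n points of the orbit x, Tx, T^2 x, ..."""
    total = 0.0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

# Observable f(x) = x; its space average over [0, 1) is 1/2.
T = rotation(math.sqrt(2) - 1)            # an irrational rotation angle
avg = time_average(T, lambda x: x, 0.3, 100_000)
print(avg)                                # ≈ 0.5
```

For irrational θ this map turns out to be ergodic (Example 1 in Section 1.8), which is exactly the property that makes one orbit representative of the whole space.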
The space X usually has a special structure, and we want T to preserve the basic structure on X. For example:

– if X is a measure space, then T must be measurable;

– if X is a topological space, then T must be continuous;

– if X has a differentiable structure, then T is a diffeomorphism.

In this course our space is a probability space (X, B, µ), and our time is discrete. So the evolution is described by a measurable map T : X → X, so that T^{-1}A ∈ B for all A ∈ B. For each x ∈ X, the orbit of x is the sequence x, T x, T^2 x, . . . If T is invertible, then one speaks of the two sided orbit . . . , T^{-1}x, x, T x, . . . We want also that the evolution is in steady state, i.e., stationary. In the language of ergodic theory, we want T to be measure preserving.

1.2 Measure Preserving Transformations

Definition 1.2.1 Let (X, B, µ) be a probability space, and T : X → X measurable. The map T is said to be measure preserving with respect to µ if µ(T^{-1}A) = µ(A) for all A ∈ B.

This definition implies that for any measurable function f : X → R, the process f, f ◦ T, f ◦ T^2, . . . is stationary. This means that for all Borel sets B_1, . . . , B_n, all integers r_1 < r_2 < . . . < r_n, and any k ≥ 1, one has

µ({x : f(T^{r_1}x) ∈ B_1, . . . , f(T^{r_n}x) ∈ B_n}) = µ({x : f(T^{r_1+k}x) ∈ B_1, . . . , f(T^{r_n+k}x) ∈ B_n}).

In case T is invertible, then T is measure preserving if and only if µ(T A) = µ(A) for all A ∈ B. We can generalize the definition of measure preserving to the following case. Let T : (X_1, B_1, µ_1) → (X_2, B_2, µ_2) be measurable; then T is measure preserving if µ_1(T^{-1}A) = µ_2(A) for all A ∈ B_2. The following gives a useful tool for verifying that a transformation is measure preserving. For this we need the notions of algebra and semi-algebra.

Recall that a collection S of subsets of X is said to be a semi-algebra if (i) ∅ ∈ S, (ii) A ∩ B ∈ S whenever A, B ∈ S, and (iii) if A ∈ S, then X \ A = ∪_{i=1}^n E_i is a disjoint union of elements of S. For example, if X = [0, 1) and S is the collection of all subintervals, then S is a semi-algebra. Or if X = {0, 1}^Z, then the collection of all cylinder sets {x : x_i = a_i, . . . , x_j = a_j} is a semi-algebra. An algebra A is a collection of subsets of X satisfying: (i) ∅ ∈ A, (ii) if A, B ∈ A, then A ∩ B ∈ A, and finally (iii) if A ∈ A, then X \ A ∈ A. Clearly an algebra is a semi-algebra. Furthermore, given a semi-algebra S one can form an algebra by taking all finite disjoint unions of elements of S. We denote this algebra by A(S), and we call it the algebra generated by S. It is in fact the smallest algebra containing S. Likewise, given a semi-algebra S (or an algebra A), the σ-algebra generated by S (respectively A) is denoted by B(S) (respectively B(A)), and is the smallest σ-algebra containing S (or A).

A monotone class C is a collection of subsets of X with the following two properties:

– if E_1 ⊆ E_2 ⊆ . . . are elements of C, then ∪_{i=1}^∞ E_i ∈ C;

– if F_1 ⊇ F_2 ⊇ . . . are elements of C, then ∩_{i=1}^∞ F_i ∈ C.

The monotone class generated by a collection S of subsets of X is the smallest monotone class containing S.

Theorem 1.2.1 Let A be an algebra of X. Then the σ-algebra B(A) generated by A equals the monotone class generated by A.

Using the above theorem, one can get an easier criterion for checking that a transformation is measure preserving.

Theorem 1.2.2 Let (X_i, B_i, µ_i), i = 1, 2, be probability spaces, and T : X_1 → X_2 a transformation. Suppose S_2 is a generating semi-algebra of B_2. Then T is measurable and measure preserving if and only if for each A ∈ S_2, we have T^{-1}A ∈ B_1 and µ_1(T^{-1}A) = µ_2(A).

Proof. Let C = {B ∈ B_2 : T^{-1}B ∈ B_1, and µ_1(T^{-1}B) = µ_2(B)}. Then, S_2 ⊆ C ⊆ B_2, and hence A(S_2) ⊆ C. We show that C is a monotone class. Let E_1 ⊆ E_2 ⊆ . . . be elements of C, and let E = ∪_{i=1}^∞ E_i. Then, T^{-1}E = ∪_{i=1}^∞ T^{-1}E_i ∈ B_1, and

µ_1(T^{-1}E) = µ_1(∪_{n=1}^∞ T^{-1}E_n) = lim_{n→∞} µ_1(T^{-1}E_n) = lim_{n→∞} µ_2(E_n) = µ_2(∪_{n=1}^∞ E_n) = µ_2(E).

Thus, E ∈ C. A similar proof shows that if F_1 ⊇ F_2 ⊇ . . . are elements of C, then ∩_{i=1}^∞ F_i ∈ C. Hence, C is a monotone class containing the algebra A(S_2). By the monotone class theorem, B_2 is the smallest monotone class containing A(S_2), hence B_2 ⊆ C. This shows that B_2 = C; therefore T is measurable and measure preserving.

For example, if

– X = [0, 1) with the Borel σ-algebra B, and µ a probability measure on B, then a transformation T : X → X is measurable and measure preserving if and only if T^{-1}[a, b) ∈ B and µ(T^{-1}[a, b)) = µ([a, b)) for any interval [a, b);

– X = {0, 1}^N with product σ-algebra and product measure µ, then a transformation T : X → X is measurable and measure preserving if and only if T^{-1}({x : x_0 = a_0, . . . , x_n = a_n}) ∈ B and µ(T^{-1}{x : x_0 = a_0, . . . , x_n = a_n}) = µ({x : x_0 = a_0, . . . , x_n = a_n}) for any cylinder set.

Exercise 1.2.1 Recall that if A and B are measurable sets, then A∆B = (A ∪ B) \ (A ∩ B) = (A \ B) ∪ (B \ A). Show that for any measurable sets A, B, C one has µ(A∆B) ≤ µ(A∆C) + µ(C∆B).

Another useful lemma is the following (see also [KT]).

Lemma 1.2.1 Let (X, B, µ) be a probability space, and A an algebra generating B. Then, for any A ∈ B and any ε > 0, there exists C ∈ A such that µ(A∆C) < ε.

Proof. Let D = {A ∈ B : for any ε > 0, there exists C ∈ A such that µ(A∆C) < ε}. Clearly, A ⊆ D ⊆ B. By the Monotone Class Theorem (Theorem 1.2.1), we need to show that D is a monotone class. To this end, let A_1 ⊆ A_2 ⊆ · · · be a sequence in D, and let A = ∪_{n=1}^∞ A_n. Notice that µ(A) = lim_{n→∞} µ(A_n). Let ε > 0. Then, there exists an N such that µ(A∆A_N) = |µ(A) − µ(A_N)| < ε/2. Since A_N ∈ D, there exists C ∈ A such that µ(A_N∆C) < ε/2. Then,

µ(A∆C) ≤ µ(A∆A_N) + µ(A_N∆C) < ε.

Hence, A ∈ D. Similarly, one can show that D is closed under decreasing intersections, so that D is a monotone class containing A. Hence, by the Monotone Class Theorem, B ⊆ D. Therefore, B = D, and the lemma is proved.

1.3 Basic Examples

(a) Translations – Let X = [0, 1) with the Lebesgue σ-algebra B and Lebesgue measure λ. Let 0 < θ < 1, and define T : X → X by T x = x + θ mod 1 = x + θ − ⌊x + θ⌋. Then, by considering intervals it is easy to see that T is measurable and measure preserving.

(b) Multiplication by 2 modulo 1 – Let (X, B, λ) be as in example (a), and let T : X → X be given by

T x = 2x mod 1 = 2x if 0 ≤ x < 1/2, and 2x − 1 if 1/2 ≤ x < 1.

For any interval [a, b),

T^{-1}[a, b) = [a/2, b/2) ∪ [(a + 1)/2, (b + 1)/2),

and

λ(T^{-1}[a, b)) = b − a = λ([a, b)).

Thus, T is measurable and measure preserving. Although this map is very simple, it has in fact many facets. For example, using T one can associate with each point in [0, 1) an infinite sequence of 0's and 1's: iterations of this map yield the binary expansion of points in [0, 1). To do so, we define the function a_1 by

a_1(x) = 0 if 0 ≤ x < 1/2, and 1 if 1/2 ≤ x < 1.

Fix x ∈ X; then T x = 2x − a_1(x). Now, for n ≥ 1, set a_n(x) = a_1(T^{n−1}x); for simplicity, we write a_n instead of a_n(x). Rewriting, we get x = a_1/2 + T x/2. Similarly, T x = a_2/2 + T^2 x/2. Continuing in this manner, we see that for each n ≥ 1,

x = a_1/2 + a_2/2^2 + · · · + a_n/2^n + T^n x/2^n.

Since 0 ≤ T^n x < 1, we get

x − Σ_{i=1}^n a_i/2^i = T^n x/2^n → 0 as n → ∞.

Thus, x = Σ_{i=1}^∞ a_i/2^i. We shall later see that the sequence of digits a_1, a_2, . . . forms an i.i.d. sequence of Bernoulli random variables.

(c) Baker's Transformation – This example is the two-dimensional version of example (b). The underlying probability space is [0, 1)^2 with product Lebesgue σ-algebra B × B and product Lebesgue measure λ × λ. Define T : [0, 1)^2 → [0, 1)^2 by

T(x, y) = (2x, y/2) if 0 ≤ x < 1/2, and (2x − 1, (y + 1)/2) if 1/2 ≤ x < 1.

Exercise 1.3.1 Verify that T is invertible, measurable and measure preserving.

(d) β-transformations – Let X = [0, 1) with the Lebesgue σ-algebra B. Let β = (1 + √5)/2, the golden mean; notice that β^2 = β + 1. Define a transformation T : X → X by

T x = βx mod 1 = βx if 0 ≤ x < 1/β, and βx − 1 if 1/β ≤ x < 1.

Then, T is not measure preserving with respect to Lebesgue measure (give a counterexample), but is measure preserving with respect to the measure µ given by

µ(B) = ∫_B g(x) dx, where g(x) = (5 + 3√5)/10 if 0 ≤ x < 1/β, and (5 + √5)/10 if 1/β ≤ x < 1.

Exercise 1.3.2 Verify that T is measure preserving with respect to µ, and show that (similar to example (b)) iterations of this map generate expansions for points x ∈ [0, 1) (known as β-expansions) of the form

x = Σ_{i=1}^∞ b_i/β^i,

where b_i ∈ {0, 1} and b_i b_{i+1} = 0 for all i ≥ 1.

(e) Bernoulli Shifts – Let X = {0, 1, . . . , k − 1}^Z (or X = {0, 1, . . . , k − 1}^N), F the σ-algebra generated by the cylinders, and p = (p_0, p_1, . . . , p_{k−1}) a positive probability vector. Define a measure µ on F by specifying it on the cylinder sets as follows:

µ({x : x_{−n} = a_{−n}, . . . , x_n = a_n}) = p_{a_{−n}} · · · p_{a_n}.

Let T : X → X be defined by T x = y, where y_n = x_{n+1}. The map T, called the left shift, is measurable and measure preserving, since

T^{-1}{x : x_{−n} = a_{−n}, . . . , x_n = a_n} = {x : x_{−n+1} = a_{−n}, . . . , x_{n+1} = a_n},

and µ({x : x_{−n+1} = a_{−n}, . . . , x_{n+1} = a_n}) = p_{a_{−n}} · · · p_{a_n}.

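The digit scheme of example (b) also makes the link with example (e) visible: the doubling map reads off the binary digits of x one at a time, and these digits behave like fair coin flips. A small sketch (the helper names here are ours, not from the notes):

```python
def doubling_digits(x, n):
    """First n binary digits a_1, ..., a_n of x, read off by T x = 2x mod 1."""
    digits = []
    for _ in range(n):
        a = 0 if x < 0.5 else 1          # a_1(x) as defined in example (b)
        digits.append(a)
        x = 2 * x - a                    # exactly T x = 2x mod 1
    return digits

digits = doubling_digits(0.6, 40)
print(digits[:8])                        # [1, 0, 0, 1, 1, 0, 0, 1]: 0.6 = 0.10011001... in binary
# Reconstruct x from its digits: x = sum a_i / 2^i up to a 2^{-40} tail.
approx = sum(a / 2 ** (i + 1) for i, a in enumerate(digits))
print(abs(approx - 0.6) < 1e-9)          # True
```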
Notice that in case X = {0, 1, . . . , k − 1}^N, one should consider cylinder sets of the form {x : x_0 = a_0, . . . , x_n = a_n}. In this case

T^{-1}{x : x_0 = a_0, . . . , x_n = a_n} = ∪_{j=0}^{k−1} {x : x_0 = j, x_1 = a_0, . . . , x_{n+1} = a_n},

and it is easy to see that T is measurable and measure preserving.

(f) Markov Shifts – Let (X, F, T) be as in example (e). Let P = (p_{ij}) be a stochastic k × k matrix, and q = (q_0, q_1, . . . , q_{k−1}) a positive probability vector such that qP = q. We define a measure ν on F as follows. Define ν on cylinders by

ν({x : x_{−n} = a_{−n}, . . . , x_n = a_n}) = q_{a_{−n}} p_{a_{−n}a_{−n+1}} · · · p_{a_{n−1}a_n}.

Just as in example (e), one sees that T is measurable and measure preserving.

(g) Stationary Stochastic Processes – Let (Ω, F, IP) be a probability space, and . . . , Y_{−2}, Y_{−1}, Y_0, Y_1, Y_2, . . . a stationary stochastic process on Ω with values in R, i.e., for each k ∈ Z,

IP(Y_{n_1} ∈ B_1, . . . , Y_{n_r} ∈ B_r) = IP(Y_{n_1+k} ∈ B_1, . . . , Y_{n_r+k} ∈ B_r)

for any n_1 < n_2 < · · · < n_r and any Lebesgue sets B_1, . . . , B_r. We want to see this process as coming from a measure preserving transformation. Let X = R^Z = {x = (. . . , x_{−1}, x_0, x_1, . . .) : x_i ∈ R} with the product σ-algebra (i.e., generated by the cylinder sets). Let T : X → X be the left shift, i.e., T x = z where z_n = x_{n+1}. Define φ : Ω → X by

φ(ω) = (. . . , Y_{−2}(ω), Y_{−1}(ω), Y_0(ω), Y_1(ω), Y_2(ω), . . .).

Then, φ is measurable, since if B_1, . . . , B_r are Lebesgue sets in R, then

φ^{-1}({x ∈ X : x_{n_1} ∈ B_1, . . . , x_{n_r} ∈ B_r}) = Y_{n_1}^{-1}(B_1) ∩ . . . ∩ Y_{n_r}^{-1}(B_r) ∈ F.

Define a measure µ on X by µ(E) = IP(φ^{-1}(E)). On cylinder sets µ has the form

µ({x ∈ X : x_{n_1} ∈ B_1, . . . , x_{n_r} ∈ B_r}) = IP(Y_{n_1} ∈ B_1, . . . , Y_{n_r} ∈ B_r).

Since

T^{-1}({x : x_{n_1} ∈ B_1, . . . , x_{n_r} ∈ B_r}) = {x : x_{n_1+1} ∈ B_1, . . . , x_{n_r+1} ∈ B_r},

stationarity of the process Y_n implies that T is measure preserving. Furthermore, if we let π_i : X → R be the natural projection onto the i-th coordinate, then Y_i(ω) = π_i(φ(ω)) = π_0 ◦ T^i(φ(ω)).

(h) Random Shifts – Let (X, B, µ) be a probability space, and T : X → X an invertible measure preserving transformation. Then, T^{-1} is measurable and measure preserving with respect to µ. Suppose now that at each moment, instead of moving forward by T (x → T x), we first flip a fair coin to decide whether we will use T or T^{-1}. We can describe this random system by means of a measure preserving transformation in the following way. Let Ω = {−1, 1}^Z with product σ-algebra F (i.e., the σ-algebra generated by the cylinder sets) and the uniform product measure IP, and let σ : Ω → Ω be the left shift. As in example (e), the map σ is measure preserving. Now, let Y = Ω × X with the product σ-algebra and product measure IP × µ. Define S : Y → Y by S(ω, x) = (σω, T^{ω_0}x). Then, S is invertible (why?), and measurable and measure preserving with respect to IP × µ. To see the latter, for any set C ∈ F and any A ∈ B, we have

(IP × µ)(S^{-1}(C × A)) = (IP × µ)({(ω, x) : S(ω, x) ∈ (C × A)})
= (IP × µ)({(ω, x) : ω_0 = 1, σω ∈ C, T x ∈ A}) + (IP × µ)({(ω, x) : ω_0 = −1, σω ∈ C, T^{-1}x ∈ A})
= (IP × µ)(({ω_0 = 1} ∩ σ^{-1}C) × T^{-1}A) + (IP × µ)(({ω_0 = −1} ∩ σ^{-1}C) × T A)
= IP({ω_0 = 1} ∩ σ^{-1}C) µ(T^{-1}A) + IP({ω_0 = −1} ∩ σ^{-1}C) µ(T A)
= IP({ω_0 = 1} ∩ σ^{-1}C) µ(A) + IP({ω_0 = −1} ∩ σ^{-1}C) µ(A)
= IP(σ^{-1}C) µ(A) = IP(C) µ(A) = (IP × µ)(C × A).

(i) Continued Fractions – Consider ([0, 1), B), where B is the Lebesgue σ-algebra. Define a transformation T : [0, 1) → [0, 1) by T 0 = 0 and, for x ≠ 0,

T x = 1/x − ⌊1/x⌋.

Exercise 1.3.3 Show that T is not measure preserving with respect to Lebesgue measure, but is measure preserving with respect to the so-called Gauss probability measure µ given by

µ(B) = (1/log 2) ∫_B 1/(1 + x) dx.

An interesting feature of this map is that its iterations generate the continued fraction expansion for points in (0, 1). For if we define

a_1 = a_1(x) = 1 if x ∈ (1/2, 1), and n if x ∈ (1/(n + 1), 1/n], n ≥ 2,

then T x = 1/x − a_1 and hence x = 1/(a_1 + T x). For n ≥ 1, let a_n = a_n(x) = a_1(T^{n−1}x). Then, after n iterations we see that

x = 1/(a_1 + T x) = · · · = 1/(a_1 + 1/(a_2 + · · · + 1/(a_n + T^n x))).

In fact, if p_n/q_n = 1/(a_1 + 1/(a_2 + · · · + 1/a_n)), then one can show that {q_n} are monotonically increasing, and

|x − p_n/q_n| < 1/q_n^2 → 0 as n → ∞.

The last statement implies that

x = 1/(a_1 + 1/(a_2 + 1/(a_3 + · · · ))).

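The digit recursion for the continued fraction map is easy to check numerically. The sketch below (helper names are ad hoc) extracts a_1, a_2, . . . by iterating the Gauss map, and verifies the bound |x − p_n/q_n| < 1/q_n^2 for x = π − 3, whose expansion begins 7, 15, 1, 292, . . .

```python
import math
from fractions import Fraction

def cf_digits(x, n):
    """Digits a_1, ..., a_n obtained by iterating T x = 1/x - floor(1/x)."""
    digits = []
    for _ in range(n):
        a = math.floor(1 / x)
        digits.append(a)
        x = 1 / x - a
    return digits

def convergent(digits):
    """p_n/q_n = 1/(a_1 + 1/(a_2 + ... + 1/a_n)) as an exact fraction."""
    value = Fraction(0)
    for a in reversed(digits):
        value = 1 / (a + value)
    return value

x = math.pi - 3
digits = cf_digits(x, 4)
print(digits)                                   # [7, 15, 1, 292]
pq = convergent(digits)
print(abs(x - pq) < 1 / pq.denominator ** 2)    # True: |x - p_n/q_n| < 1/q_n^2
```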
1.4 Recurrence

Let T be a measure preserving transformation on a probability space (X, F, µ), and let B ∈ F. A point x ∈ B is said to be B-recurrent if there exists k ≥ 1 such that T^k x ∈ B.

Theorem 1.4.1 (Poincaré Recurrence Theorem) If µ(B) > 0, then a.e. x ∈ B is B-recurrent.

Proof. Let F be the subset of B consisting of all elements that are not B-recurrent. Then,

F = {x ∈ B : T^k x ∉ B for all k ≥ 1}.

We want to show that µ(F) = 0. First notice that F ∩ T^{-k}F = ∅ for all k ≥ 1, hence T^{-l}F ∩ T^{-m}F = ∅ for all l ≠ m. Thus, the sets F, T^{-1}F, T^{-2}F, . . . are pairwise disjoint, and µ(T^{-n}F) = µ(F) for all n ≥ 1 (T is measure preserving). If µ(F) > 0, then

1 = µ(X) ≥ µ(∪_{k≥0} T^{-k}F) = Σ_{k≥0} µ(F) = ∞,

a contradiction.

The proof of the above theorem implies that almost every x ∈ B returns to B infinitely often; in other words, there exist infinitely many integers n_1 < n_2 < . . . such that T^{n_i}x ∈ B. To see this, let

D = {x ∈ B : T^k x ∈ B for finitely many k ≥ 1}.

Then, D = {x ∈ B : T^k x ∈ F for some k ≥ 0} ⊆ ∪_{k=0}^∞ T^{-k}F. Thus, µ(D) = 0 since µ(F) = 0 and T is measure preserving.

1.5 Induced and Integral Transformations

1.5.1 Induced Transformations

Let T be a measure preserving transformation on the probability space (X, F, µ), and let A ⊂ X with µ(A) > 0. By Poincaré's Recurrence Theorem almost every x ∈ A returns to A infinitely often under the action of T.

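Return times can be explored numerically. The sketch below (helper names are ad hoc) measures how long points of A = [0, 0.1) take to come back to A under an irrational rotation. The observed average, roughly 1/λ(A), is the content of Kac's lemma, which is not proved at this point in the notes; here it is only an empirical observation.

```python
import math

def first_return_time(T, in_A, x, max_iter=10**6):
    """n(x) = inf{n >= 1 : T^n x in A}, computed by brute force."""
    y = T(x)
    n = 1
    while not in_A(y):
        y = T(y)
        n += 1
        if n > max_iter:
            raise RuntimeError("no return observed")
    return n

theta = math.sqrt(2) - 1                 # an irrational rotation angle
T = lambda x: (x + theta) % 1.0
A_len = 0.1
in_A = lambda x: x < A_len               # A = [0, 0.1)

# Average n(x) over a grid of points of A.
N = 10_000
mean_rt = sum(first_return_time(T, in_A, (i + 0.5) * A_len / N) for i in range(N)) / N
print(mean_rt)                           # ≈ 1/λ(A) = 10
```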
For x ∈ A, let n(x) := inf{n ≥ 1 : T^n x ∈ A}. We call n(x) the first return time of x to A. By Poincaré's Theorem, n(x) is finite a.e. on A. In the sequel we remove from A the set of measure zero on which n(x) = ∞, and we denote the new set again by A.

Exercise 1.5.1 Show that n is measurable with respect to the σ-algebra F ∩ A on A.

Consider the σ-algebra F ∩ A on A, which is the restriction of F to A. Furthermore, let µ_A be the probability measure on A defined by

µ_A(B) = µ(B)/µ(A) for B ∈ F ∩ A,

so that (A, F ∩ A, µ_A) is a probability space. Finally, define the induced map T_A : A → A by

T_A x = T^{n(x)} x for x ∈ A.

From the above we see that T_A is defined on A. What kind of a transformation is T_A?

Exercise 1.5.2 Show that T_A is measurable with respect to the σ-algebra F ∩ A.

Proposition 1.5.1 T_A is measure preserving with respect to µ_A.

Proof. For k ≥ 1, let

A_k = {x ∈ A : n(x) = k} and B_k = {x ∈ X \ A : T x, . . . , T^{k−1}x ∉ A, T^k x ∈ A}.

Notice that A = ∪_{k=1}^∞ A_k, and

T^{-1}A = A_1 ∪ B_1 and T^{-1}B_n = A_{n+1} ∪ B_{n+1}. (1.1)

Figure 1.1: A tower.

Let C ∈ F ∩ A. To show that µ_A(C) = µ_A(T_A^{-1}C), we show that µ(T_A^{-1}C) = µ(T^{-1}C); since T is measure preserving, it follows that µ(C) = µ(T^{-1}C). Now, using (1.1) repeatedly, one gets for any n ≥ 1,

µ(T^{-1}C) = µ(A_1 ∩ T^{-1}C) + µ(B_1 ∩ T^{-1}C)
= µ(A_1 ∩ T^{-1}C) + µ(T^{-1}(B_1 ∩ T^{-1}C))
= µ(A_1 ∩ T^{-1}C) + µ(A_2 ∩ T^{-2}C) + µ(B_2 ∩ T^{-2}C)
. . .
= Σ_{k=1}^n µ(A_k ∩ T^{-k}C) + µ(B_n ∩ T^{-n}C).

On the other hand,

T_A^{-1}(C) = ∪_{k=1}^∞ (A_k ∩ T_A^{-1}C) = ∪_{k=1}^∞ (A_k ∩ T^{-k}C),

hence

µ(T_A^{-1}C) = Σ_{k=1}^∞ µ(A_k ∩ T^{-k}C).

Since

1 ≥ µ(∪_{n=1}^∞ (B_n ∩ T^{-n}C)) = Σ_{n=1}^∞ µ(B_n ∩ T^{-n}C),

it follows that lim_{n→∞} µ(B_n ∩ T^{-n}C) = 0. Thus,

µ(C) = µ(T^{-1}C) = Σ_{k=1}^∞ µ(A_k ∩ T^{-k}C) = µ(T_A^{-1}C).

This shows that µ_A(C) = µ_A(T_A^{-1}C), which implies that T_A is measure preserving with respect to µ_A.

Exercise 1.5.3 Assume T is invertible. Without using Proposition 1.5.1, show that µ_A(C) = µ_A(T_A C) for all C ∈ F ∩ A.

Exercise 1.5.4 Let G = (1 + √5)/2, so that G^2 = G + 1. Consider the set

X = ([0, 1) × [0, 1/G)) ∪ ([0, 1/G) × [1/G, 1)),

endowed with the product Borel σ-algebra and the normalized Lebesgue measure λ × λ. Define the transformation

T(x, y) = (Gx, y/G) for (x, y) ∈ [0, 1/G) × [0, 1), and (Gx − 1, (1 + y)/G) for (x, y) ∈ [1/G, 1) × [0, 1/G).

(a) Show that T is measure preserving with respect to λ × λ.

(b) Determine explicitly the induced transformation of T on the set [0, 1) × [0, 1/G).

1.5.2 Integral Transformations

Let S be a measure preserving transformation on a probability space (A, F, ν), and let f ∈ L^1(A, F, ν) be positive and integer valued. We now construct a measure preserving transformation T on a probability space (X, C, µ), such that the original transformation S can be seen as the induced transformation on X with return time f:

(1) X = {(y, i) : y ∈ A and 1 ≤ i ≤ f(y), i ∈ N}.

(2) C is generated by sets of the form

(B, i) = {(y, i) : y ∈ B and f(y) ≥ i},

where B ⊂ A, B ∈ F and i ∈ N.

(3) µ(B, i) = ν(B) / ∫_A f(y) dν(y), and then extend µ to all of X.

(4) Define T : X → X as follows:

T(y, i) = (y, i + 1) if i + 1 ≤ f(y), and (Sy, 1) if i + 1 > f(y).

Now, (X, C, µ, T) is called an integral system of (A, F, ν, S) under f. We now show that T is µ-measure preserving; in fact, it suffices to check this on the generators. Let B ⊂ A be F-measurable, and let i ≥ 1. We have to discern the following two cases:

(1) If i > 1, then T^{-1}(B, i) = (B, i − 1) and clearly

µ(T^{-1}(B, i)) = µ(B, i − 1) = ν(B) / ∫_A f(y) dν(y) = µ(B, i).

(2) If i = 1, we write A_n = {y ∈ A : f(y) = n}, and we have

T^{-1}(B, 1) = ∪_{n=1}^∞ (A_n ∩ S^{-1}B, n) (disjoint union).

Since ∪_{n=1}^∞ A_n = A, we therefore find that

µ(T^{-1}(B, 1)) = Σ_{n=1}^∞ ν(A_n ∩ S^{-1}B) / ∫_A f(y) dν(y) = ν(S^{-1}B) / ∫_A f(y) dν(y) = ν(B) / ∫_A f(y) dν(y) = µ(B, 1).

This shows that T is measure preserving. Moreover, if we consider the induced transformation of T on the set (A, 1) = {(y, 1) : y ∈ A}, then the first return time

n(x, 1) = inf{k ≥ 1 : T^k(x, 1) ∈ (A, 1)}

is given by n(x, 1) = f(x), and T_{(A,1)}(x, 1) = (Sx, 1).

1.6 Ergodicity

Definition 1.6.1 Let T be a measure preserving transformation on a probability space (X, F, µ). The map T is said to be ergodic if for every measurable set A satisfying T^{-1}A = A, we have µ(A) = 0 or 1.

Theorem 1.6.1 Let (X, F, µ) be a probability space and T : X → X measure preserving. The following are equivalent:

(i) T is ergodic.

(ii) If B ∈ F with µ(T^{-1}B∆B) = 0, then µ(B) = 0 or 1.

(iii) If A ∈ F with µ(A) > 0, then µ(∪_{n=1}^∞ T^{-n}A) = 1.

(iv) If A, B ∈ F with µ(A) > 0 and µ(B) > 0, then there exists n > 0 such that µ(T^{-n}A ∩ B) > 0.

Remark 1.6.1

1. In case T is invertible, then in the above characterization one can replace T^{-n} by T^n.

2. Note that if µ(B∆T^{-1}B) = 0, then µ(B \ T^{-1}B) = µ(T^{-1}B \ B) = 0. Since

B = (B \ T^{-1}B) ∪ (B ∩ T^{-1}B), and T^{-1}B = (T^{-1}B \ B) ∪ (B ∩ T^{-1}B),

we see that after removing a set of measure 0 from B and a set of measure 0 from T^{-1}B, the remaining parts are equal. In this case we say that B equals T^{-1}B modulo sets of measure 0.

3. In words, (iii) says that if A is a set of positive measure, then almost every x ∈ X eventually (in fact infinitely often) will visit A.

4. (iv) says that elements of B will eventually enter A.

Proof of Theorem 1.6.1. (i)⇒(ii) Let B ∈ F be such that µ(B∆T^{-1}B) = 0. We shall define a measurable set C with C = T^{-1}C and µ(C∆B) = 0. Let

C = {x ∈ X : T^n x ∈ B i.o.} = ∩_{n=1}^∞ ∪_{k=n}^∞ T^{-k}B.

Then, T^{-1}C = C, hence by (i) µ(C) = 0 or 1. Furthermore,

µ(C∆B) = µ((∩_{n=1}^∞ ∪_{k=n}^∞ T^{-k}B) ∩ B^c) + µ((∪_{n=1}^∞ ∩_{k=n}^∞ T^{-k}B^c) ∩ B)
≤ µ((∪_{k=1}^∞ T^{-k}B) ∩ B^c) + µ((∪_{k=1}^∞ T^{-k}B^c) ∩ B)
≤ Σ_{k=1}^∞ µ(T^{-k}B∆B).

Using induction (and the fact that µ(E∆F) ≤ µ(E∆G) + µ(G∆F)), one can show that for each k ≥ 1 one has µ(T^{-k}B∆B) = 0. Hence, µ(C∆B) = 0, which implies that µ(C) = µ(B). Therefore, µ(B) = 0 or 1.

(ii)⇒(iii) Let µ(A) > 0 and let B = ∪_{n=1}^∞ T^{-n}A. Then, T^{-1}B ⊂ B. Since T is measure preserving, µ(B) > 0 and

µ(T^{-1}B∆B) = µ(B \ T^{-1}B) = µ(B) − µ(T^{-1}B) = 0.

Thus, by (ii), µ(B) = 1.

(iii)⇒(iv) Suppose µ(A)µ(B) > 0. By (iii),

µ(B) = µ(B ∩ ∪_{n=1}^∞ T^{-n}A) = µ(∪_{n=1}^∞ (B ∩ T^{-n}A)) > 0.

Hence, there exists k ≥ 1 such that µ(B ∩ T^{-k}A) > 0.

(iv)⇒(i) Suppose T^{-1}A = A with µ(A) > 0. If µ(A^c) > 0, then by (iv) there exists k ≥ 1 such that µ(A^c ∩ T^{-k}A) > 0. Since T^{-k}A = A, it follows that µ(A^c ∩ A) > 0, a contradiction. Hence, µ(A) = 1 and T is ergodic.

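Condition (iv) of Theorem 1.6.1 can be probed numerically for a concrete map. For the doubling map T x = 2x mod 1 of example (b), dyadic intervals even satisfy the stronger independence property µ(T^{-n}A ∩ B) = µ(A)µ(B) once n is large enough (a form of mixing, a notion treated in Chapter 2). A sketch on a dyadic grid, chosen so that every floating point step below is exact:

```python
def T_iter(x, n):
    """n-fold iterate of the doubling map T x = 2x mod 1."""
    for _ in range(n):
        x = (2 * x) % 1.0
    return x

# A = [0, 1/4), B = [1/2, 3/4); measure lambda(B ∩ T^{-n}A) on a dyadic grid.
N = 2 ** 15
n = 10
hits = sum(1 for i in range(N)
           if 0.5 <= (i + 0.5) / N < 0.75 and T_iter((i + 0.5) / N, n) < 0.25)
est = hits / N
print(est)          # 0.0625 = lambda(A) * lambda(B), exactly, on this grid
```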
1.7 Other Characterizations of Ergodicity

We denote by L^0(X, F, µ) the space of all complex valued measurable functions on the probability space (X, F, µ). Let

L^p(X, F, µ) = {f ∈ L^0(X, F, µ) : ∫_X |f|^p dµ(x) < ∞}.

We use the subscript R whenever we are dealing only with real-valued functions.

Let (X_i, F_i, µ_i), i = 1, 2, be two probability spaces, and T : X_1 → X_2 a measure preserving transformation, i.e., µ_2(A) = µ_1(T^{-1}A). Define the induced operator U_T : L^0(X_2, F_2, µ_2) → L^0(X_1, F_1, µ_1) by

U_T f = f ◦ T.

The following properties of U_T are easy to prove.

Proposition 1.7.1 The operator U_T has the following properties:

(i) U_T is linear.

(ii) U_T(fg) = U_T(f)U_T(g).

(iii) U_T c = c for any constant c.

(iv) U_T is a positive linear operator.

(v) U_T 1_B = 1_B ◦ T = 1_{T^{-1}B} for all B ∈ F_2.

(vi) ∫_{X_1} U_T f dµ_1 = ∫_{X_2} f dµ_2 for all f ∈ L^0(X_2, F_2, µ_2) (where if one side doesn't exist or is infinite, then the other side has the same property).

(vii) Let p ≥ 1. Then, U_T L^p(X_2, F_2, µ_2) ⊂ L^p(X_1, F_1, µ_1), and ||U_T f||_p = ||f||_p for all f ∈ L^p(X_2, F_2, µ_2).

Exercise 1.7.1 Prove Proposition 1.7.1.

Using the above properties, we can give the following characterization of ergodicity.

Theorem 1.7.1 Let (X, F, µ) be a probability space, and T : X → X measure preserving. The following are equivalent:

(i) T is ergodic.

(ii) If f ∈ L^0(X, F, µ), with f(T x) = f(x) for all x, then f is a constant a.e.

(iii) If f ∈ L^0(X, F, µ), with f(T x) = f(x) for a.e. x, then f is a constant a.e.

(iv) If f ∈ L^2(X, F, µ), with f(T x) = f(x) for all x, then f is a constant a.e.

(v) If f ∈ L^2(X, F, µ), with f(T x) = f(x) for a.e. x, then f is a constant a.e.

Proof. The implications (iii)⇒(ii), (ii)⇒(iv), (v)⇒(iv), and (iii)⇒(v) are all clear. It remains to show (i)⇒(iii) and (iv)⇒(i).

(i)⇒(iii) Suppose f(T x) = f(x) a.e., and assume without any loss of generality that f is real (otherwise we consider separately the real and imaginary parts of f). For each n ≥ 1 and k ∈ Z, let

X_{(k,n)} = {x ∈ X : k/2^n ≤ f(x) < (k + 1)/2^n}.

Then,

T^{-1}X_{(k,n)} ∆ X_{(k,n)} ⊆ {x : f(T x) ≠ f(x)},

which implies that µ(T^{-1}X_{(k,n)} ∆ X_{(k,n)}) = 0. By ergodicity of T, µ(X_{(k,n)}) = 0 or 1, for each n ≥ 1 and k ∈ Z. On the other hand, for each n ≥ 1,

X = ∪_{k∈Z} X_{(k,n)} (disjoint union).

Hence, for each n ≥ 1, there exists a unique integer k_n such that µ(X_{(k_n,n)}) = 1. In fact, X_{(k_1,1)} ⊇ X_{(k_2,2)} ⊇ . . . , and {k_n/2^n} is a bounded increasing sequence,

hence lim_{n→∞} k_n/2^n exists. Let Y = ∩_{n≥1} X_{(k_n,n)}; then µ(Y) = 1. Now, if x ∈ Y, then 0 ≤ |f(x) − k_n/2^n| < 1/2^n for all n. Hence, f(x) = lim_{n→∞} k_n/2^n, and f is a constant on Y, i.e., f is a constant a.e.

(iv)⇒(i) Suppose T^{-1}A = A and µ(A) > 0. We want to show that µ(A) = 1. Consider 1_A, the indicator function of A. We have 1_A ∈ L^2(X, F, µ), and 1_A ◦ T = 1_{T^{-1}A} = 1_A. Hence, by (iv), 1_A is a constant a.e.; hence 1_A = 1 a.e., and therefore µ(A) = 1.

1.8 Examples of Ergodic Transformations

Example 1 – Irrational Rotations. Consider ([0, 1), B, λ), where B is the Lebesgue σ-algebra, and λ Lebesgue measure. For θ ∈ (0, 1), consider the transformation T_θ : [0, 1) → [0, 1) defined by T_θ x = x + θ (mod 1). We have seen in example (a) that T_θ is measure preserving with respect to λ. When is T_θ ergodic?

If θ is rational, then T_θ is not ergodic. Consider for example θ = 1/4; then the set

A = [0, 1/8) ∪ [1/4, 3/8) ∪ [1/2, 5/8) ∪ [3/4, 7/8)

is T_θ-invariant but λ(A) = 1/2.

Exercise 1.8.1 Suppose θ = p/q with gcd(p, q) = 1. Find a non-trivial T_θ-invariant set. Conclude that T_θ is not ergodic if θ is rational.

Claim. T_θ is ergodic if and only if θ is irrational.

Proof of Claim. (⇒) The contrapositive statement is given in Exercise 1.8.1, i.e., if θ is rational, then T_θ is not ergodic.

(⇐) Suppose θ is irrational, and let f ∈ L^2(X, B, λ) be T_θ-invariant. Write f in its Fourier series

f(x) = Σ_{n∈Z} a_n e^{2πinx}.

Since f(T_θ x) = f(x), we get

f(T_θ x) = Σ_{n∈Z} a_n e^{2πin(x+θ)} = Σ_{n∈Z} a_n e^{2πinθ} e^{2πinx} = f(x) = Σ_{n∈Z} a_n e^{2πinx}.

Hence, Σ_{n∈Z} a_n(1 − e^{2πinθ})e^{2πinx} = 0. By the uniqueness of the Fourier coefficients, we have a_n(1 − e^{2πinθ}) = 0 for all n ∈ Z. If n ≠ 0, then, since θ is irrational, we have 1 − e^{2πinθ} ≠ 0. Thus, a_n = 0 for all n ≠ 0, and therefore f(x) = a_0 is a constant. By Theorem 1.7.1, T_θ is ergodic.

Exercise 1.8.2 Consider the probability space ([0, 1) × [0, 1), B × B, λ × λ), where as above B is the Lebesgue σ-algebra on [0, 1), and λ normalized Lebesgue measure. Suppose θ ∈ (0, 1) is irrational, and define T_θ × T_θ : [0, 1) × [0, 1) → [0, 1) × [0, 1) by

T_θ × T_θ (x, y) = (x + θ mod (1), y + θ mod (1)).

Show that T_θ × T_θ is measure preserving, but is not ergodic.

Example 2 – One (or Two) sided shift. Let X = {0, 1, . . . , k − 1}^N, F the σ-algebra generated by the cylinders, and µ the product measure defined on cylinder sets by

µ({x : x_0 = a_0, . . . , x_n = a_n}) = p_{a_0} · · · p_{a_n},

where p = (p_0, p_1, . . . , p_{k−1}) is a positive probability vector. Consider the left shift T defined on X by T x = y, where y_n = x_{n+1} (see Example (e) in Subsection 1.3). We show that T is ergodic. Let E be a measurable subset of X which is T-invariant, i.e., T^{-1}E = E. For any ε > 0, there exists by Lemma 1.2.1 (see Subsection 1.2) a set A ∈ F which is a finite disjoint union of cylinders such that µ(E∆A) < ε. Then,

|µ(E) − µ(A)| = |µ(E \ A) − µ(A \ E)| ≤ µ(E \ A) + µ(A \ E) = µ(E∆A) < ε.

Since A depends on finitely many coordinates only, there exists n_0 > 0 such that T^{-n_0}A depends on different coordinates than A. Since µ is a product measure, we have

µ(A ∩ T^{-n_0}A) = µ(A)µ(T^{-n_0}A) = µ(A)^2.

.1 a set Eε that is a finite disjoint union of open intervals such that λ(B c 4Eε ) < ε.1 (Knopp’s Lemma) . Given ε > 0 there exists by Lemma 1.e.. |µ(E) − µ(E)2 | ≤ |µ(E) − µ(A)2 | + |µ(A)2 − µ(E)2 | = |µ(E) − µ((A ∩ T −n0 A))| + (µ(A) + µ(E))|µ(A) − µ(E)| < 4. µ(A) = 0 if and only if λ(A) = 0). where B is the Lebesgue σ-algebra. a useful tool to verify that a measure preserving transformation defined on ([0. Therefore. in some cases. it follows that µ(E) = µ(E)2 . |µ(E) − µ((A ∩ T −n0 A))| ≤ µ E∆(A ∩ T −n0 A) < 2. and µ E∆(A ∩ T −n0 A) ≤ µ(E∆A) + µ(E∆T −n0 A) < 2. and µ is a probability measure equivalent to Lebesgue measure λ (i. (b) ∀A ∈ C . hence µ(E) = 0 or 1. B. Lemma 1. The following lemma provides. If B is a Lebesgue set and C is a class of subintervals of [0. Now by conditions (a) and (b) (that is. where γ > 0 is independent of A. Thus.8. 1). Suppose λ(B c ) > 0. T is ergodic. 1) is at most a countable union of disjoint elements from C. writing Eε as a countable union of disjoint elements of C) one gets that λ(B ∩ Eε ) ≥ γλ(Eε ). Since > 0 is arbitrary. then λ(B) = 1. Hence.2. µ(E∆T −n0 A) = µ(T −n0 E∆T −n0 A) = µ(E∆A) < . λ(A ∩ B) ≥ γλ(A). 1) satisfying (a) every open subinterval of [0. Proof The proof is done by contradiction.26 Introduction and preliminaries Further. µ) is ergodic.

i.8. λ(B) = 1. subsection 1. 1) → [0.1 to show that Tβ is ergodic with respect to Lebesgue measure λ.1 to show that T is ergodic. Therefore T is ergodic. We will use Lemma 1. (k + 1)/2n ) linearly onto [0. T n A = [0.3 Let β > 1 be a non-integer. 1). we get a contradiction.Examples of Ergodic Transformations 27 Also from our choice of Eε and the fact that λ(B c 4Eε ) ≥ λ(B ∩ Eε ) ≥ γλ(Eε ) ≥ γλ(B c ∩ Eε ) > γ(λ(B c ) − ε). Hence. . and since ε > 0 is arbitrary. the second hypothesis of Knopp’s Lemma is satisfied with γ = λ(B) > 0.3).e. we have that γ(λ(B c ) − ε) < λ(B c 4Eε ) < ε . Use Lemma 1. 2n Thus. then λ(A) = 0 or 1. Hence. and assume λ(B) > 0.8. 1) and 1 λ(A ∩ B) = λ(A ∩ T −n B) = λ(T n A ∩ B) λ(A) 1 = λ(B) = λ(A)λ(B). B. Let A ∈ C. (we call such an interval dyadic of order n). in fact. T n x = 2n x mod(1). and consider the transformation Tβ : [0. Then. λ) be as in Ex- ample (1) above. We have seen that T is measure preserving. Let C be the collection of all intervals of the form [k/2n . Exercise 1. 0 ≤ k < 2n − 1} of dyadic rationals is dense in [0. C satisfies the first hypothesis of Knopp’s Lemma. and assume that A is dyadic of order n. 1). Example 3–Multiplication by 2 modulo 1–Consider ([0. if Tβ−1 A = A. (see Example (b). hence each open interval is at most a countable union of disjoint elements of C. Hence γλ(B c ) < ε + γε. Notice that the the set {k/2n : n ≥ 1. T n maps each dyadic interval of the form [k/2n . 1). and let T : X → X be given by 2x 0 ≤ x < 1/2 T x = 2x mod 1 = 2x − 1 1/2 ≤ x < 1. (k + 1)/2n ) with n ≥ 1 and 0 ≤ k ≤ 2n − 1. Let B ∈ B be T -invariant. Now. 1) given by Tβ x = βx mod(1) = βx − bβxc.8.
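Ergodicity of Example 3 can be checked empirically, with one caveat: iterating T x = 2x (mod 1) in floating point collapses to 0 after about 53 steps, since T shifts the binary expansion one place. The sketch below (not from the text) therefore works with the binary digits of a "typical" point directly, drawn as random bits:

```python
import random

random.seed(1)

n = 200000
bits = [random.randint(0, 1) for _ in range(n + 3)]

# T^i x lies in the dyadic interval [k/8, (k+1)/8) exactly when bits i, i+1, i+2
# of x spell out k in binary.
k_target = 5  # the interval [5/8, 6/8)
hits = sum(1 for i in range(n)
           if 4 * bits[i] + 2 * bits[i + 1] + bits[i + 2] == k_target)
freq = hits / n
print(freq)  # close to 1/8 = lambda([5/8, 6/8)), as ergodicity predicts
```

The orbit visits the dyadic interval with frequency close to its Lebesgue measure, which is exactly the conclusion of the Ergodic Theorem for ergodic T.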

F∩ A. F is T -invariant.5). . Consider the induced transformation TA on (A. T S Exercise 1. and S S F = E ∪C (disjoint union). and T −1 Bk = Ak+1 ∪ Bk+1 . . µ(C) = 0 or µ(C) = µ(A). Recall that TA x = T n(x) x.e. Recall that (see subsection 1. µA ) of T (see subsection 1. µ). –If µ(F ) = 1. F. i. equivalently. µA (C) = 1. we have µ(A) = µ(C). it follows that µ(A \ C) ≤ µ(X \ F ) = 0. .4 Show that if TA is ergodic and µ k≥1 is ergodic. . –If µ(F ) = 0. then. where n(x) := inf{n ≥ 1 : T n x ∈ A}. Let E = k≥1 Bk ∩ T −k C. Hence. and by ergodicity of T we have µ(F ) = 0 or 1. µ). then µ(X \ F ) = 0. T k x ∈ A}. T −1 F = T −1 E ∪ T −1 C [ (Ak+1 ∪ Bk+1 ) ∩ T −(k+1) C ∪ (A1 ∪ B1 ) ∩ T −1 C = k≥1 [ [ = (Ak ∩ T −k C) ∪ (Bk ∩ T −k C) k≥1 k≥1 = C ∪ E = F. T −k A = 1. Let (as before) Ak = {x ∈ A : n(x) = k} Bk = {x ∈ X \ A : T x. then TA is ergodic on (A. then µ(C) = 0. Since A = k≥1 Ak .. . F ∩ A.5) T −1 A = A1 ∪B1 . and hence µA (C) = 0. we have C = TA−1 C = k≥1 Ak ∩ T −k C. Proposition 1. Hence. and A ∈ F with µ(A) > 0.8.1 If T is ergodic on (X.8. T k−1 x 6∈ A. Since X \ F = (A \ C) ∪ ((X \ A) \ E) ⊇ A \ C. µA ). We want to show S that µA (C) = 0 or 1.28 Introduction and preliminaries Example 4–Induced transformations of ergodic transformations– Let T be an ergodic measure preserving transformation on the probability space (X. F. Proof Let C ∈ F ∩ A be such that TA−1 C = C. Since µ(A \ C) = µ(A) − µ(C).
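The induced transformation of Example 4 can be explored numerically. The sketch below (an illustration, not from the text) computes the first-return map of the irrational rotation to A = [0, 1/2) and averages the return time n(x) along the induced orbit; the value 2 = 1/µ(A) it approaches is the content of Kac's Lemma (Exercise 2.1.1):

```python
import math

theta = math.sqrt(2) - 1  # irrational rotation angle

def first_return(x):
    """Return (n(x), T_A x) for x in A = [0, 1/2), T x = x + theta (mod 1)."""
    n = 0
    while True:
        x = (x + theta) % 1.0
        n += 1
        if x < 0.5:
            return n, x

# Birkhoff average of n along the induced orbit of T_A.
m = 20000
total = 0
x = 0.1
for _ in range(m):
    n, x = first_return(x)
    total += n
mean = total / m
print(mean)  # close to 1/mu(A) = 2
```

Here the return time only takes the values 1, 2, 3, and their empirical mixture averages out to 1/µ(A), as Kac's formula requires.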

Chapter 2

The Ergodic Theorem

2.1 The Ergodic Theorem and its consequences

The Ergodic Theorem is also known as Birkhoff's Ergodic Theorem or the Individual Ergodic Theorem (1931). This theorem is in fact a generalization of the Strong Law of Large Numbers (SLLN), which states that for a sequence Y1, Y2, ... of i.i.d. random variables on a probability space (X, F, µ) with E|Yi| < ∞, one has

lim_{n→∞} (1/n) Σ_{i=1}^{n} Yi = E Y1 (a.e.).

For example, consider X = {0, 1}^N, F the σ-algebra generated by the cylinder sets, and µ the uniform product measure, i.e., µ({x : x1 = a1, x2 = a2, ..., xn = an}) = 1/2^n. Suppose one is interested in finding the frequency of the digit 1, i.e., for a.e. x we would like to find

lim_{n→∞} (1/n) #{1 ≤ i ≤ n : xi = 1}.

Using the Strong Law of Large Numbers one can answer this question easily. Define

Yi(x) := 1 if xi = 1, and 0 otherwise.

Since µ is product measure, it is easy to see that Y1, Y2, ... form an i.i.d. Bernoulli process, and EYi = E|Yi| = 1/2. Further, #{1 ≤ i ≤ n : xi = 1} = Σ_{i=1}^{n} Yi(x). Hence, by SLLN one has

lim_{n→∞} (1/n) #{1 ≤ i ≤ n : xi = 1} = 1/2.

Suppose now we are interested in the frequency of the block 011, i.e., we would like to find

lim_{n→∞} (1/n) #{1 ≤ i ≤ n : xi = 0, xi+1 = 1, xi+2 = 1}.

We can start as above by defining random variables

Zi(x) := 1 if xi = 0, xi+1 = 1, xi+2 = 1, and 0 otherwise.

Then,

(1/n) #{1 ≤ i ≤ n : xi = 0, xi+1 = 1, xi+2 = 1} = (1/n) Σ_{i=1}^{n} Zi(x).

It is not hard to see that this sequence is stationary but not independent, so one cannot directly apply the strong law of large numbers. Notice that if T is the left shift on X, then Yn = Y1 ∘ T^{n−1} and Zn = Z1 ∘ T^{n−1}.

In general, suppose (X, F, µ) is a probability space and T : X → X a measure preserving transformation. For f ∈ L1(X, F, µ), we would like to know under what conditions the limit lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(T^i x) exists a.e. If it does exist, what is its value? This is answered by the Ergodic Theorem, which was originally proved by G.D. Birkhoff in 1931. Since then, several proofs of this important theorem have been obtained; here we present a recent proof given by T. Kamae and M.S. Keane in [KK].

Theorem 2.1.1 (The Ergodic Theorem) Let (X, F, µ) be a probability space and T : X → X a measure preserving transformation. Then, for any f in L1(µ),

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(T^i x) = f*(x)

exists a.e., is T-invariant and ∫_X f* dµ = ∫_X f dµ. If moreover T is ergodic, then f* is a constant a.e. and f* = ∫_X f dµ.
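The 011-block question can be simulated directly. A sketch (not part of the notes) samples a µ-typical point by fair coin flips; the stationary but dependent indicators Zi still average to E Z1 = 1/8, as the Ergodic Theorem guarantees:

```python
import random

random.seed(7)

n = 300000
x = [random.randint(0, 1) for _ in range(n + 2)]  # a typical point of {0,1}^N

# Z_i = 1 exactly when the block 011 starts at position i.
count = sum(1 for i in range(n) if x[i] == 0 and x[i + 1] == 1 and x[i + 2] == 1)
freq = count / n
print(freq)  # close to 1/8
```

Note that consecutive Zi overlap in two coordinates and are therefore not independent; only stationarity and ergodicity of the shift are used.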

. n→∞ n+1 . Proof of Theorem 2. . 1. . amk−1 + . .1.. and N − mk < M. . . and f (x) = n fn (x) lim inf n→∞ . f (x) = lim supn→∞ . there exists an integer 1 ≤ m ≤ M with an + · · · + an+m−1 ≥ bn + · · · + bn+m−1 . Then f and f are T -invariant. + amk −1 ≥ b0 + . . . . and we consider each part separately). . + am0 −1 ≥ b0 + . . This follows from n fn (T x) f (T x) = lim sup n→∞ n fn+1 (x) n + 1 f (x) = lim sup · − n→∞ n+1 n n fn+1 (x) = lim sup = f (x). . for each positive integer N > M . . am0 + . . bN −M −1 .The Ergodic Theorem and its consequences 31 For the proof of the above theorem. < mk < N with the following properties m0 ≤ M. . . a0 + . Then. Then. + amk −1 ≥ bmk−1 + . a0 + . . . .1 Assume with no loss of generality that f ≥ 0 (otherwise we write f = f + − f − . we need the following simple lemma. k − 1. . + am1 −1 ≥ bm0 + . . + aN −1 ≥ a0 + . .1. . . mi+1 − mi ≤ M for i = 0. . + bmk −1 ≥ b0 + . 2. . .1. . Proof of Lemma 2. + f (T n−1 x). fn (x) Let fn (x) = f (x) + . Lemma 2. .1 Let M > 0 be an integer. + bm0 −1 .1 Using the hypothesis we recursively find integers m0 < m1 < . + bmk −1 . and suppose {an }n≥0 . . {bn }n≥0 are sequences of non-negative real numbers such that for each n = 0. one has a0 + · · · + aN −1 ≥ b0 + · · · + bN −M −1 . . . . + bm1 −1 .

L)(1 − ). an + . for any δ > 0 there exists an integer M > 0 such that the set X0 = {x ∈ X : ∃ 1 ≤ m ≤ M with fm (x) ≥ m min(f (x). –if T n x ∈ X0 .1. . . . then there exists 1 ≤ m ≤ M such that fm (T n x) ≥ m min(f (T n x). X X X For since f − f ≥ 0. this would imply that f = f = f ∗ .32 The Ergodic Theorem (Similarly f is T -invariant). L)(1 − ) = m min(f (x). . to prove that f ∗ exists. + bn+m−1 .We now show that {an } and {bn } satisfy the hypothesis of Lemma 2. L)(1 − ) = bn + . Define F on X by f (x) x ∈ X0 F (x) = L x∈/ X0 . Hence. . then take m = 1 since an = F (T n x) = L ≥ min(f (x). By definition of f . . is integrable and T -invariant. For any x ∈ X. and let L > 0 be any real number. . . R R We first prove that X f dµ ≤ X f dµ. L)(1 − ) = bn . 2. L)(1 − )} has measure at least 1 − δ. Notice that f ≤ F (why?). let an = an (x) = F (T n x). L)(1 − ) (so bn is independent of n). . . for any x ∈ X. For any n = 0. a. . + F (T n+m−1 x) ≥ f (T n x) + . Now.1 with M > 0 as above. . + f (T n+m−1 x) = fm (T n x) ≥ bn + . . and bn = bn (x) = min(f (x). m Now. Fix any 0 < < 1. there exists an integer m > 0 such that fm (x) ≥ min(f (x). + bn+m−1 . –If T n x ∈ / X0 . 1.e. + an+m−1 = F (T n x) + . it is enough to show that Z Z Z f dµ ≥ f dµ ≥ f dµ. .

and lastly L → ∞ one gets together with the monotone convergence theorem that f is integrable. . L)(1 − ). then → 0.1 for all integers N > M one has F (x) + .The Ergodic Theorem and its consequences 33 Hence by Lemma 2. Integrating both sides. N X Now letting first N → ∞.1. . X X Since Z Z F (x) dµ(x) = f (x) dµ(x) + Lµ(X \ X0 ). for any x ∈ X there exists an integer m such that fm (x) ≤ (f (x) + ). X X Fix > 0. m For any δ > 0 there exists an integer M > 0 such that the set Y0 = {x ∈ X : ∃ 1 ≤ m ≤ M with fm (x) ≤ m (f (x) + )} . and using the fact that T is measure preserving one gets Z Z N F (x) dµ(x) ≥ (N − M ) min(f (x). X X0 one has Z Z f (x) dµ(x) ≥ f (x) dµ(x) X X0 Z = F (x) dµ(x) − Lµ(X \ X0 ) X (N − M ) Z ≥ min(f (x). L)(1 − ) dµ(x) − Lδ. + F (T N −1 x) ≥ (N − M ) min(f (x). L)(1 − ) dµ(x). and Z Z f (x) dµ(x) ≥ f (x) dµ(x). then δ → 0. X X We now prove that Z Z f (x) dµ(x) ≤ f (x) dµ(x).

1..e. one gets Z Z f (x) dµ(x) ≤ f (x) dµ(x). In case T is ergodic. and finally → 0.34 The Ergodic Theorem has measure at least 1 − δ. Hence. f = f = f ∗ a. + G(T N −M −1 x) ≤ N (f (x) + ). the measure ν defined by ν(A) = A f (x) dµ(x) is absolutely continuous with respect to the measure µ. there exists δ0 > 0 such that R if µ(A) < δ. X X . N −M X Now. Integrating both sides yields Z Z (N − M ) G(x)dµ(x) ≤ N ( f (x)dµ(x) + ).e. then ν(A) < δ0 .1 with M > 0 as above. Therefore. Hence for any M > N . X X X hence. then ν(X \ Y0 ) = X\Y0 f (x)dµ(x) < δ0 . Let bn = G(T n x). Define G on X by f (x) x ∈ Y0 G(x) = 0 x∈/ Y0 . and an = f (x)+ (so an is independent of n). . Z Z ∗ ∗ f (x) = f (y)dµ(y) = f (y) dµ(y). let first N → ∞. X X R Since f ≥ 0. . Notice that G ≤ f . X X This shows that Z Z Z f dµ ≥ f dµ ≥ f dµ. and f ∗ is T -invariant. one has G(x) + . Since µ(X \ Y0 ) < δ. Z Z Z f (x) dµ(x) = G(x) dµ(x) + f (x) dµ(x) X X X\Y0 Z N ≤ (f (x) + ) dµ(x) + δ0 . Hence. then δ → 0 (and hence δ0 → 0). then the T -invariance of f ∗ implies that f ∗ is a constant a. One can easily check that the sequences {an } and {bn } satisfy the hypothesis of Lemma 2.

µ). Notice that if f ∈ L1 (µ). n−1 n−1 1X 1X lim (f 1A )(T i x) = 1A (x) lim f (T i x) = 1A (x)f ∗ (x) a. Furthermore. Let I be the sub-σ-algebra of F consisting of all T -invariant subsets A ∈ F. then the conditional expectation of f given I (denoted by Eµ (f |I)). (2) Suppose T is ergodic and measure preserving with respect to µ.e. I-measurable L1 (µ) function with the property that Z Z f (x) dµ(x) = Eµ (f |I)(x) dµ(x) A A for all A ∈ I i.The Ergodic Theorem and its consequences 35 Remarks (1) Let us study further the limit f ∗ in the case that T is not ergodic. Exercise 2.1. µ and ν have the same sets of measure zero so µ(A) = 0 if and only if ν(A) = 0). X X This shows that f ∗ = Eµ (f |I). and let ν be a probability measure which is equivalent to µ (i.e. then for every f ∈ L1 (µ) one has n−1 Z 1X lim f (T i (x)) = f dµ n→∞ n X i=0 ν a. A . is the unique a. F.e. it follows that f ∗ is I-measurable.e. Let A be a measurable subset of X of positive µ measure. and denote by n the first return time map and let TA be the induced transformation of T on A (see section 1. We claim that f ∗ = Eµ (f |I).e. for any A ∈ I. n→∞ n n→∞ n i=0 i=0 and Z Z f 1A (x) dµ(x) = f ∗ 1A (x) dµ(x). T −1 A = A.5). by the ergodic theorem and the T -invariance of 1A .1 (Kac’s Lemma) Let T be a measure preserving and ergodic transformation on a probability space (X. Prove that Z n(x) dµ = 1. Since the limit function f ∗ is T -invariant..

F. 1) by 0 if 0 ≤ x < 1/β b1 (x) = 1 if 1/β ≤ x < 1. B ∈ F.1. Corollary 2. . Since the indicator function 1A ∈ L1 (X. 1) → [0. Fix k ≥ 0. one has n−1 1X lim µ(T −i A ∩ B) = µ(A)µ(B). . and consider the transformation Tβ : 2 [0. µA ).1 Let (X.e. value (with respect to Lebesgue measure) of the following limit 1 lim #{1 ≤ i ≤ n : bi = 0. n→∞ n Using the Ergodic Theorem.2 Let β = . and T : X → X a measure preserving transformation. . Then. by the ergodic theorem one has n−1 Z 1X lim 1A (T i x) = 1A (x) dµ(x) = µ(A) a. B ∈ F. (2.e. and let A. .36 The Ergodic Theorem Conclude that n(x) ∈ L1 (A. 1) given by Tβ x = βx mod(1) = βx − bβxc. Find the a. bi+k = 0}.1. n→∞ n X i=0 .1) n→∞ n i=0 Proof Suppose T is ergodic. n→∞ n µ(A) i=0 almost everywhere on A. µ). µ) be a probability space. bi+1 = 0. Define b1 on [0. one can give yet another characterization of ergodicity. F. T is ergodic if and only if for all A. and that n−1 1X 1 lim n(TAi (x)) = . √ 1+ 5 Exercise 2.
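The setting of Exercise 2.1.2 can be explored numerically. The sketch below (an illustration only; floating-point orbits of Tβ are reliable only statistically, so we average over many λ-random starting points) estimates the a.e. frequency of the digit 1 for the golden mean; the exercise asks for the analogous frequency of blocks of 0's:

```python
import random

random.seed(3)
beta = (1 + 5 ** 0.5) / 2  # the golden mean

def digits(x, n):
    """First n digits b_i generated by T_beta x = beta*x mod 1."""
    out = []
    for _ in range(n):
        out.append(0 if x < 1 / beta else 1)
        x = beta * x % 1.0
    return out

# Aggregate digit-1 frequency over many short orbits.
ones = total = 0
for _ in range(2000):
    d = digits(random.random(), 200)
    ones += sum(d)
    total += len(d)
freq1 = ones / total
print(freq1)
```

Empirically the frequency settles near 0.28; by the Ergodic Theorem the limit exists a.e. and is given by the measure of [1/β, 1) under the Tβ-invariant measure equivalent to λ.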

Pn−1 Since for each n.1. Let T : X → X be a measure preserving transformation.1) holds for every A. Then. suppose (2. Therefore. T is ergodic. X Conversely. B ∈ F. T is ergodic if and only if for all A. µ) be a probability space. the function limn→∞ n1 i=0 1T −i A∩B is dominated by the constant function 1. µ(E) = µ(E)2 . n→∞ n i=0 Hence. it follows by the dominated convergence theorem that n Z n−1 1X −i 1X lim µ(T A ∩ B) = lim 1T −i A∩B (x) dµ(x) n→∞ n X n→∞ n i=0 i=0 Z = 1B µ(A) dµ(x) = µ(A)µ(B). hence n−1 1X lim µ(T −i E ∩ E) = µ(E). By invariance of E. Since µ(E) > 0.1) for sets A and B belonging to a generating semi-algebra only as the next proposition shows. we have µ(T −i E ∩ E) = µ(E).1) n−1 1X lim µ(T −i E ∩ E) = µ(E)2 . n→∞ n i=0 On the other hand.1 Let (X. n−1 n−1 1X 1X lim 1T −i A∩B (x) = lim 1T −i A (x)1B (x) n→∞ n n→∞ n i=0 i=0 n−1 1X = 1B (x) lim 1A (T i x) n→∞ n i=0 = 1B (x)µ(A) a.e. (2. Proposition 2. Let E ∈ F be such that T −1 E = E and µ(E) > 0. this implies µ(E) = 1. one has n−1 1X lim µ(T −i A ∩ B) = µ(A)µ(B). by (2.The Ergodic Theorem and its consequences 37 Then.2) n→∞ n i=0 . To show ergodicity one needs to verify equation (2. B ∈ S. F. and S a generating semi-algebra of F.

by Lemma 1. |µ(A)µ(B) − µ(A0 )µ(B0 )| ≤ µ(A)|µ(B) − µ(B0 )| + µ(B0 )|µ(A) − µ(A0 )| ≤ |µ(B) − µ(B0 )| + |µ(A) − µ(A0 )| ≤ µ(B∆B0 ) + µ(A∆A0 ) < 2. . Further. B ∈ F.38 The Ergodic Theorem Proof We only need to show that if (2. Then. (T −i A ∩ B)∆(T −i A0 ∩ B0 ) ⊆ (T −i A∆T −i A0 ) ∪ (B∆B0 ). Since. and µ(B∆B0 ) < . B ∈ S. then it holds for all A. it follows that |µ(T −i A ∩ B) − µ(T −i A0 ∩ B0 )| ≤ µ (T −i A ∩ B)∆(T −i A0 ∩ B0 ) ≤ µ(T −i A∆T −i A0 ) + µ(B∆B0 ) < 2. B ∈ F. Let > 0.2) there exist sets A0 .1 (subsection 1. Hence. B0 each of which is a finite disjoint union of elements of S such that µ(A∆A0 ) < . and A.2.2) holds for all A.

Hence, for every n,

| [ (1/n) Σ_{i=0}^{n−1} µ(T^{−i}A ∩ B) − µ(A)µ(B) ] − [ (1/n) Σ_{i=0}^{n−1} µ(T^{−i}A0 ∩ B0) − µ(A0)µ(B0) ] |

≤ (1/n) Σ_{i=0}^{n−1} | µ(T^{−i}A ∩ B) − µ(T^{−i}A0 ∩ B0) | + | µ(A)µ(B) − µ(A0)µ(B0) | < 4ε.

Since (2.2) holds for the sets A0 and B0, and ε > 0 is arbitrary, it follows that

lim_{n→∞} [ (1/n) Σ_{i=0}^{n−1} µ(T^{−i}A ∩ B) − µ(A)µ(B) ] = 0.

Therefore, (2.1) holds for all A, B ∈ F, and T is ergodic.

µx (A ∩ B) = Eµ (1A 1B |I)(x) = 1A (x)Eµ (1B |I)(x) = µx (A)µx (B). x µx (T −1 A) = Eµ (1A ◦ T |I)(x) = Eµ (IA |I)(T x) = Eµ (IA |I)(x) = µx (A). for a. for any f ∈ L1 (X. such that for any x ∈ X \ N . Thus µ1 and µ2 are supported on disjoint sets.e. µ). if A = B. the above analysis is just a rough sketch of the proof of what is called the ergodic decomposition of measure preserving transformations. define a measure µx on F by µx (A) = Eµ (1A |I)(x). and hence I-measurable. x.e. µ). So let A ∈ F. 1A is T -invariant. Let T be a transformation on the probability space (X. In particular.e. µx (A) = 0 or 1.). then C1 ∩ C2 = ∅. We assume that X is a complete separable metric space. x ∈ X. (One in fact needs to work a little harder to show that one can find a set N of µ-measure zero. We end this subsection with a short discussion that the assumption of er- godicity is not very restrictive. let A ∈ F be such that T −1 A = A.) . set depended on the choice of A. and hence µ1 and µ2 are mutually singular. one has µx (A) = 0 or 1. Therefore. x and for any B ∈ F. Then. and suppose T is measure preserving but not necessarily ergodic. µx is ergodic. Hence. and any T -invariant set A. Now. then for a. 2. Z f (y) dµx (y) = Eµ (f |I)(x). For x ∈ X. We can decompose µ into T -invariant ergodic components in the following way.e. Then. In the above analysis the a. X Note that Z Z µ(A) = Eµ (1A |I)(x) dµ(x) = µx (A) dµ(x).e. Hence. F. then the latter equality yields µx (A) = µx (A)2 which implies that for a. Let I be the sub-σ-algebra of T -invariant measurable sets. We show that µx is T -invariant and ergodic for a. F. X X and that Eµ (1A |I)(x) is T -invariant.40 The Ergodic Theorem By the ergodic theorem µi (Ci ) = 1 for i = 1. and F the corresponding Borel σ-algebra (in order to make sure that the conditional expectation is well-defined a. Then.e. µx (A) = Eµ (1A |I)(x) = 1A (x) a. Since µ1 (A) 6= µ2 (A).e.

π1 .e. i.2 Characterization of Irreducible Markov Chains Consider the Markov Chain in Example(f) subsection 1. 1. N − 1}Z . T : X → X the left shift. 1.xk =j} (x) = 1{x:x0 =i} (x) lim 1{x:x0 =j} (T k x) = f ∗ (x)1{x:x0 =i} (x). by the ergodic theorem. . That is µ({x : x0 = i0 . qij is well-defined. j ∈ {0. and µ the Markov measure defined by the stochastic N × N matrix P = (pij ). and the positive probability vector π = (π0 .2. n−1 n−1 1X 1X lim 1{x:xk =j} (x) = lim 1{x:x0 =j} (T k x) = f ∗ (x). . We want to find necessary and sufficient conditions for T to be ergodic. and P 0 is the k × k identity matrix. Q = (qij ). the limit limn→∞ n k=0 pij exists. . F the σ-algebra generated by the cylinders. To achieve this.1 For each i. That is X = {0. . . x1 = i1 . n−1 n−1 1X 1X lim 1{x:x0 =i. N − 1}. More precisely. Proof For each n. n→∞ n n→∞ n k=0 k=0 where f ∗ is T -invariant and integrable. . xn = in }) = πi0 pi0 i1 pi1 i2 . . we first set n−1 1X k Q = lim P . n→∞ n k=0 (k) where P k = (pij ) is the k th power of the matrix P . n→∞ n k=0 1 Pn−1 (k) Lemma 2. pin−1 in . πN −1 ) satisfying πP = π. . .3.. . n→∞ n n→∞ n k=0 k=0 . Then. xk = j}). . . . where n−1 1 X (k) qij = lim pij . . n k=0 πi n k=0 Since T is measure preserving.Characterization of Irreducible Markov Chains 41 2. . n−1 n−1 1 X (k) 1 1X pij = µ({x ∈ X : x0 = i.

xk =j} (x) dµ(x) πi X n→∞ n k=0 Z 1 = f ∗ (x)1{x:x0 =i} (x) dµ(x) πi X Z 1 = f ∗ (x) dµ(x) πi {x:x0 =i} which is finite since f ∗ is integrable. n−1 1 1X qij = lim µ({x ∈ X : x0 = i. Proof (i) ⇒ (ii) By the ergodic theorem for each i. 1. (iii) qij > 0 for all i. xk = j}) πi n→∞ n k=0 Z n−1 1 1X = lim 1{x:x0 =i. . N − 1}. n−1 1X lim 1{x:x0 =i. by the dominated convergence theorem.xk =j} (x) ≤ 1 for all n.42 The Ergodic Theorem Since n1 n−1 P k=0 1{x:x0 =i. Recall that the matrix P is said to be irreducible if for every i. j ∈ {0. . We now give a characterization for the ergodicity of T .2. j. (b) Q = QP = P Q = Q2 .1 Show that the matrix Q has the following properties: (a) Q is stochastic. Hence qij exists. (v) 1 is a simple eigenvalue of P . (iv) P is irreducible. Exercise 2. (i) T is ergodic. n→∞ n k=0 . j.1 The following are equivalent. (c) πQ = π. there (n) exists n ≥ 1 such that pij > 0. . (ii) All rows of Q are identical.xk =j} (x) = 1{x:x0 =i} (x)πj . Theorem 2.2.

(iii) ⇒ (iv) For any i. 1. Since πQ = π. n−1 1 1X qij = lim µ({x ∈ X : x0 = i. and let qj = max0≤i≤N −1 qij . N − 1}. 1. . . therefore P is irreducible. it follows that Si 6= ∅. j. Since P is irreducible. . N − 1} such that qkj < qj . Hence. . . Therefore. . l=0 l=0 . N − 1} we have. N X −1 N X −1 qij = qil qlj < qil qj = qj . n→∞ n k=0 Hence. (iv) ⇒ (iii) Suppose P is irreducible. 1. For any state i ∈ {0. for each j there exists a constant cj such that qij = cj for all i. xk = j}) = πj . (ii) ⇒ (iii) If all the rows of Q are identical. . it follows that qij = cj = πj > 0 for all i. Since Q is stochastic and Q2 = Q. . all rows of Q are identical.e. then for any i ∈ {0. Let l ∈ Si . there exists n such that pij > 0. . . j n−1 1 X (k) lim pij = qij > 0. πi n→∞ n k=0 i.. let Si = {j : qij > 0}. Since Q = QP = QP n for all n.Characterization of Irreducible Markov Chains 43 By the dominated convergence theorem. Suppose that not all the qij ’s are the same. 1. (iii) ⇒ (ii) Suppose qij > 0 for all j = 0. Thus. there exists n such that plj > 0. Since Q is a stochastic matrix. qij > 0 for all i. then all the columns of Q are constants. n→∞ n k=0 (n) Hence. . Fix any state j. . N − 1. . . then qil > 0. qij is independent of i. . Then there exists k ∈ {0. then for any state j N X −1 (n) (n) qij = qim pmj ≥ qil plj m=0 (n) for any n. n−1 1X lim µ({x ∈ X : x0 = i. . j. xk = j}) = πi πj .

. pjm−1 jm pjm i0 pi0 i1 . vj = ( N i=0 vi )πj . Hence πj = (k) limn→∞ n1 n−1 P k=0 pij . hence π = qi∗ . (ii) ⇒ (i) Suppose all the rows of Q are identical. . then qij = πj for all i.1 in Section 2. . let qi∗ = (qi0 . . or all the rows are identical. pil−1 il ) = µ(B)µ(A). . . pil−1 il lim pjm i0 n→∞ n k=0 = (πj0 pj0 j1 . This implies that for all j. n−1 1X lim µ(T −k A ∩ B) n→∞ n k=0 n−1 1 X (k) = πj0 pj0 j1 . then vQ = v. pjm−1 jm )(πi0 pi0 i1 . j ∈ {0. . and all the rows of Q are identical. Hence. . . 1 is a simple eigenvalue. . . . If P −1 vP = v. qi∗ is a probability vector. . For any i. By hypothesis π is the only probability vector satisfying πP = P . . Thus. Let A = {x : xr = i0 . we must show that n−1 1X lim µ(T −i A ∩ B) = µ(A)µ(B). for n sufficiently large. We have shown above that this implies qij = πj for all i. . . . From Q = QP . the columns of Q are constants. qi(N −1) ) denote the ith row of Q then. xr+l = il }. . . xs+m = jm } be any two cylinder sets of X. a contradiction. the cylinders T −n A and B depend on different coordinates. j. for all n sufficiently large. (ii) ⇒ (v) If all the rows of Q are identical. . Hence. . .1. (n+r−s−m) µ(T −n A ∩ B) = πj0 pj0 j1 . . T is ergodic. Therefore. Therefore. . . By Proposition 2. Thus. N − 1}. n→∞ n i=0 Since T is the left shift. (v) ⇒ (ii) Suppose 1 is a simple eigenvalue. 1. pjm−1 jm pi0 i1 . and B = {x : xs = j0 . . pil−1 il . .44 The Ergodic Theorem This implies that qj = max0≤i≤N −1 qij < qj . we get qi∗ = qi∗ P . v is a multiple of π. . .

Definition 2. F.1 Let (X.3 Mixing As a corollary to the ergodic theorem we found a new definition of ergodicity. asymptotic average independence. and T : X → X a measure preserving transformation. Based on the same idea. B ∈ F. Then.3.Characterization of Irreducible Markov Chains 45 2. we now define other notions of weak independence that are stronger than ergodicity. (i) T is weakly mixing if for all A. µ) be a probability space. namely. one has n−1 1 X .

.

µ(T −i A ∩ B) − µ(A)µ(B).

= 0. .

This follows from the simple fact that if {an } is a sequence n−1 1X of real numbers such that limn→∞ an = 0. if {an } is a bounded sequence. and weakly mixing im- plies ergodicity.e. n→∞ n such that limn→∞. then limn→∞ |ai | = 0. n − 1} ∩ J) = 0. . . .n∈J / an = 0. Furthermore. n i=0 then the following are equivalent: n−1 1X (i) limn→∞ |ai | = 0 n i=0 n−1 1X (ii) limn→∞ |ai |2 = 0 n i=0 (iii) there exists a subset J of the integers of density zero. and n i=0 n−1 1X hence limn→∞ ai = 0.3) n→∞ n i=0 (ii) T is strongly mixing if for all A. . 1. B ∈ F. . lim (2. one has lim µ(T −i A ∩ B) = µ(A)µ(B). (2. i.4) n→∞ Notice that strongly mixing implies weakly mixing. 1 lim # ({0.

4) holds for all A. Let S be a generating semi-algebra of F. and Example (2) in subsection 1. and T : X → X a measure preserving transformation. (a) Show that if equation (2.3.8. F.3. if and only if T is weakly mixing. Show that T is strongly mixing. (a) Show that T × T is measure preserving with respect to µ × µ. (b) Show that if equation (2. µ × µ) by T × T (x.3. Consider the transformation T × T de- fined on (X × X. Exercise 2. B ∈ S. µ) be a probability space. B ∈ S. µ) be a probability space. y) = (T x. T y). can you state them? Exercise 2. Exercise 2.3.46 The Ergodic Theorem Using this one can give three equivalent characterizations of weakly mixing transformations.3 Let (X. then T is strongly mixing.1 Let (X. then T is weakly mixing.2 Consider the one or two-sided Bernoulli shift T as given in Example (e) in subsection 1. (b) Show that T × T is ergodic. F × F. . F.3) holds for all A. and T : X → X a measure preserving transformation.

µ). Now. S) satisfying (i) ψ is one-to-one and onto a. Note. By this we mean. So our notion of being the same must mean that we have a map ψ : (X. (ii) ψ is measurable. F. ν. C. T ) a dynamical system . F. F. sets of measure zero can be ignored. we call the quadruple (X.e. µ. T ) → (Y. that in this context. µ. and a (suitable) set NY of measure 0 in Y . (2) The dynamical structure. C.Chapter 3 Measure Preserving Isomorphisms and Factor Maps 3.. the map ψ : X \ NX → Y \ NY is a bijection. for all C ∈ C.1 Measure Preserving Isomorphisms Given a measure preserving transformation T on a probability space (X. what should we mean by: these systems are the same? On each space there are two important structures: (1) The measure structure given by the σ-algebra and the probability mea- sure. T ) and (Y. µ. that if we remove a (suitable) set NX of measure 0 in X. S). 47 . ψ −1 (C) ∈ F. given by a measure preserving transforma- tion. i. given two dy- namical systems (X. ν.e. F.

.... normalized Lebeque measure on the unit circle)...... (c) T is strongly mixing if and only if S is strongly mixing......... ...... ........ . ..... which is the same as saying that the following diagram commutes.............. Examples (1) Let K = {z ∈ C : |z| = 1} be equipped with the Borel σ-algebra B on K.. ψ ◦ T = S ◦ ψ. .1 Two dynamical systems (X. C. F....... . ψ ... S) are isomorphic if there exist measurable sets N ⊂ X and M ⊂ Y with µ(X\N ) = ν(Y \ M ) = 0 and T (N ) ⊂ N....e. i........... we should have that (iv) ψ preserves the dynamics of T and S.1........ Show that (a) T is ergodic if and only if S is ergodic.....1....... ....... . ψ ... ν..e... . .. T ) and (Y....... i..N 0 This means that T -orbits are mapped to S-orbits: x→ Tx → T 2x → ··· → T nx → ↓ ↓ ↓ ↓ ↓ 2 n ψ(x) → S (ψ(x)) → S (ψ(x)) → · · · → S (ψ(x)) → Definition 3. .... F. .. ν....... .... .... and Haar measure (i. .. .. ν(C) = µ (ψ −1 (C)) for all C ∈ C.. ............ Exercise 3. and finally if there exists a measurable map ψ : N → M such that (i)–(iv) are satisfied.. S(M ) ⊂ M .. S N 0... (b) T is weakly mixing if and only if S is weakly mixing.48 Measure Preserving Isomorphism and Factor Maps (iii) ψ preserves the measures: ν = µ ◦ ψ −1 ............ ... Finally.. C....... ......... µ... S) are two isomorphic dy- namical systems..e. .. .. T N. . .. µ......1 Suppose (X. N... T ) and (Y.. ......

(iii) ψ preserves the measures: ν = µ ∘ ψ^{−1}, i.e., ν(C) = µ(ψ^{−1}(C)) for all C ∈ C.

Finally, we should have that

(iv) ψ preserves the dynamics of T and S, i.e., ψ ∘ T = S ∘ ψ, which is the same as saying that the following diagram commutes.

[commutative diagram: T acts on X, S acts on Y, and ψ : X → Y intertwines them, ψ ∘ T = S ∘ ψ]

This means that T-orbits are mapped to S-orbits:

x    → Tx      → T²x      → ··· → Tⁿx      → ···
↓      ↓         ↓               ↓
ψ(x) → S(ψ(x)) → S²(ψ(x)) → ··· → Sⁿ(ψ(x)) → ···

Definition 3.1.1 Two dynamical systems (X, F, µ, T) and (Y, C, ν, S) are isomorphic if there exist measurable sets N ⊂ X and M ⊂ Y with µ(X \ N) = ν(Y \ M) = 0 and T(N) ⊂ N, S(M) ⊂ M, and finally if there exists a measurable map ψ : N → M such that (i)–(iv) are satisfied.

Exercise 3.1.1 Suppose (X, F, µ, T) and (Y, C, ν, S) are two isomorphic dynamical systems. Show that

(a) T is ergodic if and only if S is ergodic.
(b) T is weakly mixing if and only if S is weakly mixing.
(c) T is strongly mixing if and only if S is strongly mixing.

Examples (1) Let K = {z ∈ C : |z| = 1} be equipped with the Borel σ-algebra B on K, and Haar measure (i.e., normalized Lebesgue measure on the unit circle).

where B × B is the product Lebesgue σ-algebra. i = −k. F. 1) has a unique N -adic expansion generated by T . . 1) × [0. ) × [0.1. 2 y). . y) = (Gx − 1.2 Consider ([0. 1}. µ . 0 ≤ x < 21 T (x. 1) → Ỹ is a bijection. ). Setting Ỹ = Y \ N . 1) × [0. G G2 . λ × λ). 1) [ . 12 ≤ x < 1. 12 ) (so µ(∆) = ( 12 )k+`+1 ). (x. y) ∈ [ 1 . (b) Now let S : [0. G1 ) × [0. Define the transformation y (Gx. since every x ∈ [0. so that G2 = G + 1.50 Measure Preserving Isomorphism and Factor Maps is a subset of Y of measure 0. y) = 1 (2x − 1. then ψ : [0. 1) → [0. G + y ). 1). 1)2 → [0. 1) × [0. 1) be given by y (Gx. G G G endowed with the product Borel σ-algebra. k. (x. ). G G G (a) Show that T is measure preserving with respect to normalized Lebesgue measure on X.1. `}. 2 Show that T is isomorphic to the two-sided Bernoulli shift S on {0. 1) × [0. . where F is the σ-algebra generated by cylinders of the form ∆ = {x−k = a−k . 1}Z . and λ × λ is the product Lebesgue measure Let T : [0. y) = (G2 x − G. x` = a` : ai ∈ {0. . Exercise 3. 1] G S(x. Consider the set 2 1 [ 1 1 X = [0.3 Let G = . 1 ). (y + 1)). G1 ) × [0. Finally. y) ∈ [0. 1 + y ). 1)2 be given by 1 (2x. . it is easy to see that ψ ◦ T = S ◦ ψ. 1] T (x. y) ∈ [0. ` ≥ 0. . (x. (x. G ). √ 1+ 5 Exercise 3. . y) ∈ [ 1 . and µ the product measure with weights ( 21 . B × B. . 1)2 . 1) × [0.

Show that S is measure preserving with respect to normalized Lebesgue measure on [0,1)².

(c) Let Y = [0, 1) × [0, 1/G), where Y has the induced measure structure (see Section 1.5), and let U be the induced transformation of T on Y, i.e.,

   U(x, y) = T^{n(x,y)}(x, y), where n(x, y) = inf{n ≥ 1 : Tⁿ(x, y) ∈ Y},

for (x, y) ∈ Y. Show that the map φ : [0,1)² → Y given by

   φ(x, y) = (x, y/G)

defines an isomorphism from S to U.

3.2 Factor Maps

In the above section, we discussed the notion of isomorphism, which describes when two dynamical systems are considered the same. Now, we give a precise definition of what it means for a dynamical system to be a subsystem of another one.

Definition 3.2.1 Let (X, F, µ, T) and (Y, C, ν, S) be two dynamical systems. We say that S is a factor of T if there exist measurable sets M₁ ∈ F and M₂ ∈ C, such that µ(M₁) = ν(M₂) = 1, T(M₁) ⊂ M₁ and S(M₂) ⊂ M₂, and finally if there exists a measurable and measure preserving map ψ : M₁ → M₂ which is surjective, and satisfies ψ(T(x)) = S(ψ(x)) for all x ∈ M₁. We call ψ a factor map.

Remark Notice that if ψ is a factor map, then G = ψ^{−1}C is a T-invariant sub-σ-algebra of F, since

   T^{−1}G = T^{−1}ψ^{−1}C = ψ^{−1}S^{−1}C ⊆ ψ^{−1}C = G.

Examples Let T be the Baker's transformation on ([0,1)², B × B, λ × λ), given by

   T(x, y) = (2x, y/2) if 0 ≤ x < 1/2,
   T(x, y) = (2x − 1, (y + 1)/2) if 1/2 ≤ x < 1,

and let S be the left shift on X = {0,1}^ℕ with the σ-algebra F generated by the cylinders and the uniform product measure µ. Define ψ : [0,1) × [0,1) → X by

   ψ(x, y) = (a_1, a_2, ...), where x = Σ_{n=1}^∞ a_n/2ⁿ is the binary expansion of x.

It is easy to check that ψ is a factor map.

Exercise 3.2.1 Let T be the left shift on X = {0, 1, 2}^ℕ, endowed with the σ-algebra F generated by the cylinder sets, and the uniform product measure µ giving each symbol probability 1/3, i.e.,

   µ({x ∈ X : x_1 = i_1, x_2 = i_2, ..., x_n = i_n}) = 1/3ⁿ,

where i_1, ..., i_n ∈ {0, 1, 2}. Let S be the left shift on Y = {0,1}^ℕ, endowed with the σ-algebra G generated by the cylinder sets, and the product measure ν giving the symbol 0 probability 1/3 and the symbol 1 probability 2/3, i.e.,

   ν({y ∈ Y : y_1 = j_1, y_2 = j_2, ..., y_n = j_n}) = (2/3)^{j_1+j_2+···+j_n} (1/3)^{n−(j_1+j_2+···+j_n)},

where j_1, ..., j_n ∈ {0, 1}. Show that S is a factor of T.

Exercise 3.2.2 Show that a factor of an ergodic (weakly mixing/strongly mixing) transformation is also ergodic (weakly mixing/strongly mixing).

3.3 Natural Extensions

Suppose (Y, G, ν, S) is a non-invertible measure-preserving dynamical system. An invertible measure-preserving dynamical system (X, F, µ, T) is called a natural extension of (Y, G, ν, S) if S is a factor of T and the factor map ψ satisfies

   ⋁_{m=0}^∞ Tᵐ ψ^{−1}G = F,

where ⋁_{k=0}^∞ Tᵏ ψ^{−1}G is the smallest σ-algebra containing the σ-algebras Tᵏ ψ^{−1}G for all k ≥ 0.

Example Let T on (X, F, µ) be the two-sided Bernoulli shift, where X = {0,1}^ℤ, and let S on (Y, G, ν) be the one-sided Bernoulli shift, where Y = {0,1}^{ℕ∪{0}}; both spaces are endowed with the uniform product measure. Notice that T is invertible, while S is not. Define ψ : X → Y by

   ψ(..., x_{−1}, x_0, x_1, ...) = (x_0, x_1, ...).

Then, ψ is a factor map. We claim that

   ⋁_{k=0}^∞ Tᵏ ψ^{−1}G = F.

To prove this, we show that ⋁_{k=0}^∞ Tᵏ ψ^{−1}G contains all cylinders generating F. Let

   ∆ = {x ∈ X : x_{−k} = a_{−k}, ..., x_ℓ = a_ℓ}

be an arbitrary cylinder in F, and let

   D = {y ∈ Y : y_0 = a_{−k}, ..., y_{k+ℓ} = a_ℓ},

which is a cylinder in G. Then,

   ψ^{−1}D = {x ∈ X : x_0 = a_{−k}, ..., x_{k+ℓ} = a_ℓ}

and Tᵏ ψ^{−1}D = ∆. This shows that

   ⋁_{k=0}^∞ Tᵏ ψ^{−1}G = F.

Thus, T is the natural extension of S.
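The isomorphism in Example (2) of Section 3.1 — between T x = N x (mod 1) and the left shift on N symbols — can be observed numerically: iterating T and reading off ⌊N x⌋ at each step reproduces the N-adic digits, and applying T once shifts the digit string. The helper below is our own illustrative sketch, not part of the text; it uses exact rational arithmetic to avoid floating-point drift.

```python
from fractions import Fraction

def nadic_digits(x, N, n):
    """First n digits of the N-adic expansion of x in [0, 1), generated by
    iterating T x = N x - floor(N x); the digit emitted at step k is
    a_k = floor(N * T^(k-1) x), exactly as in Example (2)."""
    digits = []
    for _ in range(n):
        x = N * x
        a = int(x)        # floor, since x >= 0
        digits.append(a)
        x = x - a         # apply T
    return digits

print(nadic_digits(Fraction(3, 8), 2, 5))    # binary digits of 3/8 = 0.011
print(nadic_digits(Fraction(1, 7), 10, 6))   # decimal digits 1,4,2,8,5,7

# The conjugacy psi o T = S o psi: the digit string of T x is the left
# shift of the digit string of x.
x = Fraction(3, 8)
Tx = 2 * x - int(2 * x)
assert nadic_digits(Tx, 2, 4) == nadic_digits(x, 2, 5)[1:]
```

The same loop with N = 10 generates decimal expansions, which is the form used again in Section 4.5 on Lochs' theorem.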


Chapter 4

Entropy

4.1 Randomness and Information

Given a measure preserving transformation T on a probability space (X, F, µ), we want to define a nonnegative quantity h(T) which measures the average uncertainty about where T moves the points of X. That is, the value of h(T) reflects the amount of 'randomness' generated by T. We want to define h(T) in such a way that (i) the amount of information gained by an application of T is proportional to the amount of uncertainty removed, and (ii) h(T) is an isomorphism invariant, so that isomorphic transformations have equal entropy.

The connection between entropy (that is, randomness, uncertainty) and the transmission of information was first studied by Claude Shannon in 1948. As a motivation, let us look at the following simple example. Consider a source (for example a ticker-tape) that produces a string of symbols ··· x_{−1}x_0x_1 ··· from the alphabet {a_1, a_2, ..., a_n}. Suppose that the probability of receiving symbol a_i at any given time is p_i, and that each symbol is transmitted independently of what has been transmitted earlier. Of course we must have here that each p_i ≥ 0 and that Σ_i p_i = 1. In ergodic theory we view this process as the dynamical system (X, F, µ, T), where X = {a_1, a_2, ..., a_n}^ℕ, F is the σ-algebra generated by cylinder sets of the form

   ∆_n(a_{i_1}, ..., a_{i_n}) := {x ∈ X : x_{i_1} = a_{i_1}, ..., x_{i_n} = a_{i_n}},

µ is the product measure assigning to each coordinate probability p_i of seeing the symbol a_i, and T is the left shift. We define the entropy of this system by

   H(p_1, ..., p_n) = h(T) := − Σ_{i=1}^n p_i log₂ p_i.   (4.1)

If we define − log₂ p_i as the amount of uncertainty in transmitting the symbol a_i, then H is the average amount of information gained (or uncertainty removed) per symbol (notice that H is in fact an expected value). To see why this is an appropriate definition, notice that if the source is degenerate, that is, p_i = 1 for some i (i.e., the source only transmits the symbol a_i), then H = 0. In this case we indeed have no randomness.

The above definition can be extended to define the information transmitted by the occurrence of an event E as − log₂ P(E). This definition has the property that the information transmitted by E ∩ F for independent events E and F is the sum of the information transmitted by each one individually, i.e.,

   − log₂ P(E ∩ F) = − log₂ P(E) − log₂ P(F).

The only function with this property is the logarithm function to any base. We choose base 2 because information is usually measured in bits.

Another reason why this definition is appropriate is that H is maximal if p_i = 1/n for all i, and this agrees with the fact that the source is most random when all the symbols are equiprobable. To see this maximum, consider the function f : [0,1] → ℝ⁺ defined by

   f(t) = 0 if t = 0, and f(t) = −t log₂ t if 0 < t ≤ 1.

Then f is continuous and concave downward, and Jensen's Inequality implies that for any p_1, ..., p_n with p_i ≥ 0 and p_1 + ··· + p_n = 1,

   (1/n) H(p_1, ..., p_n) = (1/n) Σ_{i=1}^n f(p_i) ≤ f( (1/n) Σ_{i=1}^n p_i ) = f(1/n) = (1/n) log₂ n,

so H(p_1, ..., p_n) ≤ log₂ n for all probability vectors (p_1, ..., p_n). But

   H(1/n, ..., 1/n) = log₂ n,

so the maximum value is attained at (1/n, ..., 1/n).

4.2 Definitions and Properties

So far H is defined as the average information per symbol. In the above example of the ticker-tape the symbols were transmitted independently. In general, the symbol generated might depend on what has been received before. In fact, these dependencies are often 'built-in' to be able to check the transmitted sequence of symbols for errors (think here of the Morse sequence, sequences on compact discs, etc.).

Such dependencies must be taken into consideration in the calculation of the average information per symbol. This can be achieved if one replaces the symbols a_i by blocks of symbols of a particular size. More precisely, for every n, let C_n be the collection of all possible n-blocks (or cylinder sets) of length n, and define

   H_n := − Σ_{C ∈ C_n} P(C) log P(C).

Then (1/n) H_n can be seen as the average information per symbol when a block of length n is transmitted. The entropy of the source is now defined by

   h := lim_{n→∞} H_n/n.   (4.2)

The existence of the limit in (4.2) follows from the fact that H_n is a subadditive sequence, i.e., H_{n+m} ≤ H_n + H_m for every n and m, and Proposition (4.2.2) below.

Now replace the source by a measure preserving system (X, B, µ, T). How can one define the entropy of this system similarly to the case of a source? The symbols {a_1, a_2, ..., a_n} can now be viewed as a partition A = {A_1, A_2, ..., A_n} of X, so that X is the disjoint union (up to sets of measure zero) of A_1, A_2, ..., A_n. The source can be seen as follows: with each point x ∈ X, we associate an infinite sequence ··· x_{−1}, x_0, x_1, ···, where x_i is a_j if and only if Tⁱx ∈ A_j. We define the entropy of the partition α by

   H(α) = H_µ(α) := − Σ_{i=1}^n µ(A_i) log µ(A_i).

Our aim is to define the entropy of the transformation T in a way that is independent of the partition we choose. In fact, h(T) must be the maximal entropy over all possible finite partitions. But first we need a few facts about partitions.
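The two quantities just introduced — the entropy H of a probability vector and the block entropy H_n — are easy to experiment with. The sketch below (illustrative code, not from the text) checks the Jensen bound H ≤ log₂ n, with equality at the uniform vector, and the fact that for an independent source H_n/n is constant in n and equal to H.

```python
import math
from itertools import product

def H(p):
    """Entropy (in bits) of a probability vector, formula (4.1)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = (0.5, 0.25, 0.25)
n = len(p)
assert H(p) <= math.log2(n) + 1e-12              # H <= log2 n (Jensen)
assert abs(H([1 / n] * n) - math.log2(n)) < 1e-12  # equality at uniform

def block_entropy(p, m):
    """H_m for an independent source: entropy of all m-blocks."""
    probs = [math.prod(block) for block in product(p, repeat=m)]
    return -sum(q * math.log2(q) for q in probs)

# For an independent source H_m = m * H, so H_m / m equals H.
print(block_entropy(p, 3) / 3, H(p))   # both approximately 1.5 bits
```

For dependent sources (a Markov source, say) H_m/m genuinely decreases with m, which is why the entropy of the source is defined through the limit (4.2).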

The members of a partition are called the atoms of the partition. We say that the partition β = {B_1, ..., B_m} is a refinement of the partition α = {A_1, ..., A_n}, and write α ≤ β, if for every 1 ≤ j ≤ m there exists an 1 ≤ i ≤ n such that B_j ⊂ A_i (up to sets of measure zero).

Exercise 4.2.1 Let α = {A_1, ..., A_n} and β = {B_1, ..., B_m} be two partitions of X. Show that

   T^{−1}α := {T^{−1}A_1, ..., T^{−1}A_n} and α ∨ β := {A_i ∩ B_j : A_i ∈ α, B_j ∈ β}

are both partitions of X. The partition α ∨ β is called the common refinement of α and β.

Exercise 4.2.2 Show that if β is a refinement of α, then each atom of α is a finite (disjoint) union of atoms of β.

Given two partitions α = {A_1, ..., A_n} and β = {B_1, ..., B_m} of X, we define the conditional entropy of α given β by

   H(α|β) := − Σ_{A∈α} Σ_{B∈β} µ(A ∩ B) log ( µ(A ∩ B)/µ(B) )

(under the convention that 0 log 0 := 0). The quantity H(α|β) is interpreted as the average uncertainty about which element of the partition α the point x will enter (under T) if we already know which element of β the point x will enter.

Proposition 4.2.1 Let α, β and γ be partitions of X. Then,

(a) H(T^{−1}α) = H(α);
(b) H(α ∨ β) = H(α) + H(β|α);
(c) H(β|α) ≤ H(β);
(d) H(α ∨ β) ≤ H(α) + H(β);
(e) if α ≤ β, then H(α) ≤ H(β);
(f) H(α ∨ β|γ) = H(α|γ) + H(β|α ∨ γ);
(g) if β ≤ α, then H(γ|α) ≤ H(γ|β);
(h) if β ≤ α, then H(β|α) = 0;
(i) we call two partitions α and β independent if µ(A ∩ B) = µ(A)µ(B) for all A ∈ α, B ∈ β. If α and β are independent partitions, then

   H(α ∨ β) = H(α) + H(β).

Proof We prove properties (b) and (c); the rest are left as an exercise. For (b),

   H(α ∨ β) = − Σ_{A∈α} Σ_{B∈β} µ(A ∩ B) log µ(A ∩ B)
            = − Σ_{A∈α} Σ_{B∈β} µ(A ∩ B) log ( µ(A ∩ B)/µ(A) ) − Σ_{A∈α} Σ_{B∈β} µ(A ∩ B) log µ(A)
            = H(β|α) + H(α).

We now show that H(β|α) ≤ H(β). Recall that the function f(t) = −t log t for 0 < t ≤ 1 is concave down. Thus,

   H(β|α) = − Σ_{B∈β} Σ_{A∈α} µ(A ∩ B) log ( µ(A ∩ B)/µ(A) )
          = − Σ_{B∈β} Σ_{A∈α} µ(A) ( µ(A ∩ B)/µ(A) ) log ( µ(A ∩ B)/µ(A) )
          = Σ_{B∈β} Σ_{A∈α} µ(A) f( µ(A ∩ B)/µ(A) )
          ≤ Σ_{B∈β} f( Σ_{A∈α} µ(A) · µ(A ∩ B)/µ(A) )
          = Σ_{B∈β} f(µ(B)) = H(β).
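Properties (b) and (c) of Proposition 4.2.1 can be checked numerically on a small finite probability space. The toy space and partitions below are our own illustrative choices, not from the text.

```python
import math

# A toy probability space: X = {0,...,5} with the given weights (sum = 1).
mu = {0: 0.1, 1: 0.2, 2: 0.15, 3: 0.05, 4: 0.3, 5: 0.2}

def m(S):
    """Measure of a subset of X."""
    return sum(mu[x] for x in S)

def H(partition):
    """Entropy of a partition, H(alpha) = -sum mu(A) log2 mu(A)."""
    return -sum(m(A) * math.log2(m(A)) for A in partition if m(A) > 0)

def Hcond(alpha, beta):
    """Conditional entropy H(alpha | beta)."""
    s = 0.0
    for A in alpha:
        for B in beta:
            pAB = m(A & B)
            if pAB > 0:
                s -= pAB * math.log2(pAB / m(B))
    return s

def join(alpha, beta):
    """Common refinement alpha v beta."""
    return [A & B for A in alpha for B in beta if A & B]

alpha = [{0, 1, 2}, {3, 4, 5}]
beta = [{0, 3}, {1, 4}, {2, 5}]

# (b): H(alpha v beta) = H(alpha) + H(beta | alpha)
assert abs(H(join(alpha, beta)) - (H(alpha) + Hcond(beta, alpha))) < 1e-12
# (c): conditioning never increases entropy
assert Hcond(beta, alpha) <= H(beta) + 1e-12
print(H(join(alpha, beta)), H(alpha) + Hcond(beta, alpha))
```

The chain rule (b) holds as an exact identity, so the two printed numbers agree up to floating-point rounding.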

Exercise 4.2.3 Prove the rest of the properties of Proposition 4.2.1.

Exercise 4.2.4 Show that if α is a finite partition of (X, F, µ), where T is a measure preserving transformation, then

   H( ⋁_{i=0}^{n−1} T^{−i}α ) = H(α) + Σ_{j=1}^{n−1} H( α | ⋁_{i=1}^{j} T^{−i}α ).

Now consider the partition ⋁_{i=0}^{n−1} T^{−i}α, whose atoms are of the form A_{i_0} ∩ T^{−1}A_{i_1} ∩ ··· ∩ T^{−(n−1)}A_{i_{n−1}}, consisting of all points x ∈ X with the property that x ∈ A_{i_0}, Tx ∈ A_{i_1}, ..., T^{n−1}x ∈ A_{i_{n−1}}.

To define the notion of the entropy of a transformation with respect to a partition, we need the following two propositions.

Proposition 4.2.2 If {a_n} is a subadditive sequence of real numbers, i.e., a_{n+p} ≤ a_n + a_p for all n, p, then

   lim_{n→∞} a_n/n

exists, and equals inf_n a_n/n.

Proof Fix any m > 0. For any n ≥ 0 one has n = km + i for some i with 0 ≤ i ≤ m − 1. By subadditivity it follows that

   a_n/n = a_{km+i}/(km + i) ≤ a_{km}/(km) + a_i/(km) ≤ k a_m/(km) + a_i/(km).

Note that if n → ∞, then k → ∞, and so lim sup_{n→∞} a_n/n ≤ a_m/m. Since m is arbitrary, one has

   lim sup_{n→∞} a_n/n ≤ inf_m a_m/m ≤ lim inf_{n→∞} a_n/n.

Therefore lim_{n→∞} a_n/n exists, and equals inf_n a_n/n.

Proposition 4.2.3 Let α be a finite partition of (X, B, µ), where T is a measure preserving transformation. Then, lim_{n→∞} (1/n) H( ⋁_{i=0}^{n−1} T^{−i}α ) exists.

Proof Let a_n = H( ⋁_{i=0}^{n−1} T^{−i}α ) ≥ 0. Then, by Proposition 4.2.1, we have

   a_{n+p} = H( ⋁_{i=0}^{n+p−1} T^{−i}α )
           ≤ H( ⋁_{i=0}^{n−1} T^{−i}α ) + H( ⋁_{i=n}^{n+p−1} T^{−i}α )
           = a_n + H( ⋁_{i=0}^{p−1} T^{−i}α )
           = a_n + a_p.

Hence, by Proposition 4.2.2,

   lim_{n→∞} a_n/n = lim_{n→∞} (1/n) H( ⋁_{i=0}^{n−1} T^{−i}α )

exists.

We are now in a position to give the definition of the entropy of the transformation T.

Definition 4.2.1 The entropy of the measure preserving transformation T with respect to the partition α is given by

   h(α, T) = h_µ(α, T) := lim_{n→∞} (1/n) H( ⋁_{i=0}^{n−1} T^{−i}α ),

where

   H( ⋁_{i=0}^{n−1} T^{−i}α ) = − Σ_{D ∈ ⋁_{i=0}^{n−1} T^{−i}α} µ(D) log µ(D).

Finally, the entropy of the transformation T is given by

   h(T) = h_µ(T) := sup_α h(α, T),

where the supremum is taken over all finite partitions α of X.
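Proposition 4.2.2 (Fekete's subadditivity lemma) can be illustrated concretely. In the sketch below — our own example, not from the text — we take a_n = log₂ F_n, where F_n counts binary strings of length n with no two consecutive 1s. Since an admissible (n+p)-string splits into an admissible n-prefix and an admissible p-suffix, F_{n+p} ≤ F_n·F_p, so a_n is subadditive; the ratios a_n/n decrease to their infimum, which here is log₂ of the golden ratio.

```python
import math

def count_no_11(n):
    """Number of binary strings of length n with no two consecutive 1s.
    Satisfies the Fibonacci recursion F_n = F_{n-1} + F_{n-2}."""
    a, b = 1, 2          # F_0 = 1, F_1 = 2
    for _ in range(n - 1):
        a, b = b, a + b
    return b

a = [math.log2(count_no_11(n)) for n in range(1, 400)]   # a[n-1] = a_n

# Subadditivity a_{n+p} <= a_n + a_p (checked on a sample of pairs):
assert all(a[n + p - 1] <= a[n - 1] + a[p - 1] + 1e-9
           for n in range(1, 30) for p in range(1, 30))

ratios = [a[n - 1] / n for n in range(1, 400)]
golden = math.log2((1 + math.sqrt(5)) / 2)
print(ratios[-1], golden)   # a_n/n converges down to inf a_n/n = log2(G)
```

This particular a_n/n is, in fact, the topological entropy of the golden-mean shift, a theme taken up again in Chapter 7.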

The following theorem gives an equivalent definition of entropy.

Theorem 4.2.1 The entropy of the measure preserving transformation T with respect to the partition α is also given by

   h(α, T) = lim_{n→∞} H( α | ⋁_{i=1}^{n} T^{−i}α ).

Proof Notice that the sequence {H( α | ⋁_{i=1}^{n} T^{−i}α )} is bounded from below and is non-increasing, hence lim_{n→∞} H( α | ⋁_{i=1}^{n} T^{−i}α ) exists. Furthermore, by Exercise 4.2.4 we have

   H( ⋁_{i=0}^{n−1} T^{−i}α ) = H(α) + Σ_{j=1}^{n−1} H( α | ⋁_{i=1}^{j} T^{−i}α ).

Now, dividing by n, taking the limit as n → ∞, and using the fact that

   lim_{n→∞} (1/n) Σ_{j=1}^{n} H( α | ⋁_{i=1}^{j} T^{−i}α ) = lim_{n→∞} H( α | ⋁_{i=1}^{n} T^{−i}α ),

one gets the desired result.

Theorem 4.2.2 Entropy is an isomorphism invariant.

Proof Let (X, F, µ, T) and (Y, C, ν, S) be two isomorphic measure preserving systems (see Section 3.1 for a definition), with ψ : X → Y the corresponding isomorphism. We need to show that h_µ(T) = h_ν(S). Since ψ : X → Y is an isomorphism, we have that ν = µψ^{−1} and ψT = Sψ. Let β = {B_1, ..., B_n} be any partition of Y; then ψ^{−1}β = {ψ^{−1}B_1, ..., ψ^{−1}B_n} is a partition of X. Set A_i = ψ^{−1}B_i for 1 ≤ i ≤ n, so that for any n ≥ 0 and B_{i_0}, ..., B_{i_{n−1}} ∈ β,

   ν( B_{i_0} ∩ S^{−1}B_{i_1} ∩ ··· ∩ S^{−(n−1)}B_{i_{n−1}} )
 = µ( ψ^{−1}B_{i_0} ∩ ψ^{−1}S^{−1}B_{i_1} ∩ ··· ∩ ψ^{−1}S^{−(n−1)}B_{i_{n−1}} )
 = µ( ψ^{−1}B_{i_0} ∩ T^{−1}ψ^{−1}B_{i_1} ∩ ··· ∩ T^{−(n−1)}ψ^{−1}B_{i_{n−1}} )
 = µ( A_{i_0} ∩ T^{−1}A_{i_1} ∩ ··· ∩ T^{−(n−1)}A_{i_{n−1}} ).

Setting A(n) = A_{i_0} ∩ ··· ∩ T^{−(n−1)}A_{i_{n−1}} and B(n) = B_{i_0} ∩ ··· ∩ S^{−(n−1)}B_{i_{n−1}}, we thus find that

   h_ν(S) = sup_β h_ν(β, S)
          = sup_β lim_{n→∞} (1/n) H_ν( ⋁_{i=0}^{n−1} S^{−i}β )
          = sup_β lim_{n→∞} −(1/n) Σ_{B(n) ∈ ⋁ S^{−i}β} ν(B(n)) log ν(B(n))
          = sup_{ψ^{−1}β} lim_{n→∞} −(1/n) Σ_{A(n) ∈ ⋁ T^{−i}ψ^{−1}β} µ(A(n)) log µ(A(n))
          = sup_{ψ^{−1}β} h_µ(ψ^{−1}β, T)
          ≤ sup_α h_µ(α, T) = h_µ(T),

where in the last inequality the supremum is taken over all possible finite partitions α of X. Thus h_ν(S) ≤ h_µ(T). The proof of h_µ(T) ≤ h_ν(S) is done similarly. Therefore h_ν(S) = h_µ(T), and the proof is complete.

4.3 Calculation of Entropy and Examples

Calculating the entropy of a transformation directly from the definition does not seem feasible, for one needs to take the supremum over all finite partitions, which is practically impossible. However, the entropy of a partition is relatively easier to calculate if one has full information about the partition under consideration. So the question is whether it is possible to find a partition α of X with h(α, T) = h(T). Naturally, such a partition contains all the information 'transmitted' by T. To answer this question we need some notations and definitions.

For α = {A_1, ..., A_N} and all m ≥ n, let

   σ( ⋁_{i=n}^{m} T^{−i}α ) and σ( ⋁_{i=−m}^{−n} T^{−i}α )

be the smallest σ-algebras containing the partitions ⋁_{i=n}^{m} T^{−i}α and ⋁_{i=−m}^{−n} T^{−i}α respectively. Furthermore, let σ( ⋁_{i=−∞}^{∞} T^{−i}α ) be the smallest σ-algebra containing all the partitions ⋁_{i=n}^{m} T^{−i}α and ⋁_{i=−m}^{−n} T^{−i}α for all n and m. We call a partition α a generator with respect to T if σ( ⋁_{i=−∞}^{∞} T^{−i}α ) = F. If T is non-invertible, then α is said to be a generator if σ( ⋁_{i=0}^{∞} T^{−i}α ) = F,

where F is the σ-algebra on X. Naturally, this equality is modulo sets of measure zero.

We now state (without proofs) two famous theorems, known as the Kolmogorov-Sinai Theorem and Krieger's Generator Theorem. For the proofs, we refer the interested reader to the books of Karl Petersen or Peter Walters. See also [W] for more details and proofs.

Theorem 4.3.1 (Kolmogorov and Sinai, 1958) If α is a generator with respect to T and H(α) < ∞, then h(T) = h(α, T).

Theorem 4.3.2 (Krieger, 1970) If T is an ergodic measure preserving transformation with h(T) < ∞, then T has a finite generator.

One also has the following characterization of generators, saying, basically, that each measurable set in X can be approximated by a finite disjoint union of cylinder sets.

Proposition 4.3.1 The partition α is a generator of F if for each A ∈ F and for each ε > 0 there exists a finite disjoint union C of elements of {α_n^m} such that µ(A △ C) < ε.

We will use these two theorems to calculate the entropy of a Bernoulli shift.

Example (Entropy of a Bernoulli shift) Let T be the left shift on X = {1, 2, ..., n}^ℤ, endowed with the σ-algebra F generated by the cylinder sets, and the product measure µ giving symbol i probability p_i, where p_1 + p_2 + ··· + p_n = 1. Our aim is to calculate h(T). To this end we need to find a partition α which generates the σ-algebra F under the action of T. The natural choice of α is what is known as the time-zero partition

   α = {A_1, ..., A_n}, where A_i := {x ∈ X : x_0 = i}, i = 1, 2, ..., n.

Notice that for all m ∈ ℤ,

   T^{−m}A_i = {x ∈ X : x_m = i},

and

   A_{i_0} ∩ T^{−1}A_{i_1} ∩ ··· ∩ T^{−m}A_{i_m} = {x ∈ X : x_0 = i_0, ..., x_m = i_m}.

In other words, ⋁_{i=0}^{m} T^{−i}α is precisely the collection of cylinders of length m (i.e., the collection of all m-blocks), and these by definition generate F. Hence, α is a generating partition, so that

   h(T) = h(α, T) = lim_{m→∞} (1/m) H( ⋁_{i=0}^{m−1} T^{−i}α ).

First notice that − since µ is product measure here − the partitions α, T^{−1}α, ..., T^{−(m−1)}α are all independent, since each specifies a different coordinate. Hence,

   H( α ∨ T^{−1}α ∨ ··· ∨ T^{−(m−1)}α ) = H(α) + H(T^{−1}α) + ··· + H(T^{−(m−1)}α) = m H(α) = −m Σ_{i=1}^{n} p_i log p_i.

Thus,

   h(T) = lim_{m→∞} (1/m) ( −m Σ_{i=1}^{n} p_i log p_i ) = − Σ_{i=1}^{n} p_i log p_i.

Exercise 4.3.1 Let T be the left shift on X = {1, 2, ..., n}^ℤ, endowed with the σ-algebra F generated by the cylinder sets, and the Markov measure µ given by the stochastic matrix P = (p_{ij}) and the probability vector π = (π_1, ..., π_n) with πP = π. Show that

   h(T) = − Σ_{j=1}^{n} Σ_{i=1}^{n} π_i p_{ij} log p_{ij}.

Exercise 4.3.2 Suppose (X_1, F_1, µ_1, T_1) and (X_2, F_2, µ_2, T_2) are two dynamical systems. Show that

   h_{µ_1×µ_2}(T_1 × T_2) = h_{µ_1}(T_1) + h_{µ_2}(T_2).
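The Bernoulli formula just derived, and the Markov formula of Exercise 4.3.1, take one line each to evaluate. The sketch below is illustrative; the golden-mean-type Markov chain is our own choice of example, with stationary vector computed by hand from πP = π.

```python
import math

def bernoulli_entropy(p):
    """h(T) = -sum p_i log2 p_i for the Bernoulli shift with weights p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def markov_entropy(P, pi):
    """h(T) = -sum_i pi_i sum_j p_ij log2 p_ij for the Markov shift with
    transition matrix P and stationary vector pi (pi P = pi)."""
    k = len(P)
    return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                for i in range(k) for j in range(k) if P[i][j] > 0)

print(bernoulli_entropy([0.5, 0.5]))     # 1 bit: the full 2-shift

P = [[0.5, 0.5], [1.0, 0.0]]             # from state 1 one must return to 0
pi = [2/3, 1/3]                          # stationary: pi P = pi
print(markov_entropy(P, pi))             # 2/3 bit
```

When every row of P equals π, the Markov measure degenerates to the product measure and the two formulas agree, which is a quick sanity check on the implementation.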

4.4 The Shannon-McMillan-Breiman Theorem

In the previous sections we have considered only finite partitions on X; however, all the definitions and results hold if we were to consider countable partitions of finite entropy. Let (X, F, µ) be a probability space, and let α = {A_1, A_2, ...} be a finite or a countable partition of X into measurable sets. Before we state and prove the Shannon-McMillan-Breiman Theorem, we need to introduce the information function associated with a partition. For each x ∈ X, let α(x) be the element of α to which x belongs. Then, the information function associated to α is defined to be

   I_α(x) = − log µ(α(x)) = − Σ_{A∈α} 1_A(x) log µ(A).

For two finite or countable partitions α and β of X, we define the conditional information function of α given β by

   I_{α|β}(x) = − Σ_{B∈β} Σ_{A∈α} 1_{A∩B}(x) log ( µ(A ∩ B)/µ(B) ).

Clearly, H(α|β) = ∫_X I_{α|β}(x) dµ(x). We claim that

   I_{α|β}(x) = − log E_µ(1_{α(x)} | σ(β)) = − Σ_{A∈α} 1_A(x) log E(1_A | σ(β)),   (4.3)

where σ(β) is the σ-algebra generated by the finite or countable partition β. This follows from the fact (which is easy to prove using the definition of conditional expectations) that if β is finite or countable, then for any f ∈ L¹(µ),

   E_µ(f | σ(β)) = Σ_{B∈β} 1_B (1/µ(B)) ∫_B f dµ

(see the remark following the proof of Theorem (2.1.1)).

Exercise 4.4.1 Let α and β be finite or countable partitions of X. Show that

   I_{α∨β} = I_α + I_{β|α}.

Now suppose T : X → X is a measure preserving transformation on (X, F, µ), and let α = {A_1, A_2, ...} be any countable partition. Then T^{−1}α = {T^{−1}A_1, T^{−1}A_2, ...} is also a countable partition. Since T is measure preserving, one has

   I_{T^{−1}α}(x) = − Σ_{A_i∈α} 1_{T^{−1}A_i}(x) log µ(T^{−1}A_i) = − Σ_{A_i∈α} 1_{A_i}(Tx) log µ(A_i) = I_α(Tx).

Furthermore,

   lim_{n→∞} (1/(n+1)) H( ⋁_{i=0}^{n} T^{−i}α ) = lim_{n→∞} (1/(n+1)) ∫_X I_{⋁_{i=0}^{n} T^{−i}α}(x) dµ(x) = h(α, T).

Notice that the integrand can be written as

   (1/(n+1)) I_{⋁_{i=0}^{n} T^{−i}α}(x) = −(1/(n+1)) log µ( ( ⋁_{i=0}^{n} T^{−i}α )(x) ),

where ( ⋁_{i=0}^{n} T^{−i}α )(x) is the element of ⋁_{i=0}^{n} T^{−i}α containing x (often referred to as the α-cylinder of order n containing x). The Shannon-McMillan-Breiman theorem says that if T is ergodic and if α has finite entropy, then in fact the integrand (1/(n+1)) I_{⋁_{i=0}^{n} T^{−i}α}(x) converges a.e. to h(α, T). Before we proceed we need the following proposition.

Proposition 4.4.1 Let α = {A_1, A_2, ...} be a countable partition with finite entropy. For n ≥ 1, let f_n(x) = I_{α | ⋁_{i=1}^{n} T^{−i}α}(x), and let f* = sup_{n≥1} f_n. Then, for each λ ≥ 0 and for each A ∈ α,

   µ({x ∈ A : f*(x) > λ}) ≤ 2^{−λ}.

Furthermore, f* ∈ L¹(X, F, µ).

Proof Let t ≥ 0 and A ∈ α. For each n = 1, 2, 3, ..., let

   f_n^A(x) = − log E_µ( 1_A | ⋁_{i=1}^{n} T^{−i}α )(x),

and

   B_n = {x ∈ X : f_1^A(x) ≤ t, ..., f_{n−1}^A(x) ≤ t, f_n^A(x) > t}.

Notice that for x ∈ A one has f_n(x) = f_n^A(x), and for x ∈ B_n one has E_µ(1_A | ⋁_{i=1}^{n} T^{−i}α)(x) < 2^{−t}. Since B_n ∈ σ( ⋁_{i=1}^{n} T^{−i}α ), then

   µ(B_n ∩ A) = ∫_{B_n} 1_A(x) dµ(x) = ∫_{B_n} E_µ( 1_A | ⋁_{i=1}^{n} T^{−i}α )(x) dµ(x) ≤ ∫_{B_n} 2^{−t} dµ(x) = 2^{−t} µ(B_n).

Thus,

   µ({x ∈ A : f*(x) > t}) = µ({x ∈ A : f_n(x) > t for some n})
                           = µ({x ∈ A : f_n^A(x) > t for some n})
                           = µ( ∪_{n=1}^∞ (A ∩ B_n) )
                           = Σ_{n=1}^∞ µ(A ∩ B_n)
                           ≤ 2^{−t} Σ_{n=1}^∞ µ(B_n) ≤ 2^{−t}.

We now show that f* ∈ L¹(X, F, µ). First notice that µ({x ∈ A : f*(x) > t}) ≤ µ(A); hence,

   µ({x ∈ A : f*(x) > t}) ≤ min(µ(A), 2^{−t}).

Using Fubini's Theorem and the fact that f* ≥ 0, one has

   ∫_X f*(x) dµ(x) = ∫_0^∞ µ({x ∈ X : f*(x) > t}) dt
                   = ∫_0^∞ Σ_{A∈α} µ({x ∈ A : f*(x) > t}) dt
                   = Σ_{A∈α} ∫_0^∞ µ({x ∈ A : f*(x) > t}) dt
                   ≤ Σ_{A∈α} ∫_0^∞ min(µ(A), 2^{−t}) dt
                   = Σ_{A∈α} ∫_0^{−log µ(A)} µ(A) dt + Σ_{A∈α} ∫_{−log µ(A)}^∞ 2^{−t} dt
                   = − Σ_{A∈α} µ(A) log µ(A) + Σ_{A∈α} µ(A)/log_e 2
                   = H_µ(α) + 1/log_e 2 < ∞.

So far we have defined the notion of the conditional information I_{α|β} when α and β are countable partitions. We can generalize the definition to the case where α is a countable partition and G is a σ-algebra as follows (see equation (4.3)):

   I_{α|G}(x) = − log E_µ(1_{α(x)} | G).

If we denote by ⋁_{i=1}^∞ T^{−i}α = σ( ∪_n ⋁_{i=1}^{n} T^{−i}α ), then

   I_{α | ⋁_{i=1}^∞ T^{−i}α}(x) = lim_{n→∞} I_{α | ⋁_{i=1}^{n} T^{−i}α}(x).   (4.4)

Exercise 4.4.2 Give a proof of equation (4.4) using the following important theorem, known as the Martingale Convergence Theorem (and stated for our setting).

Theorem 4.4.1 (Martingale Convergence Theorem) Let C_1 ⊆ C_2 ⊆ ··· be a sequence of increasing σ-algebras, and let C = σ(∪_n C_n). If f ∈ L¹(µ), then

   E_µ(f | C) = lim_{n→∞} E_µ(f | C_n)   µ-a.e. and in L¹(µ).

Exercise 4.4.3 Show that if T is measure preserving on the probability space (X, F, µ) and f ∈ L¹(µ), then

   lim_{n→∞} f(Tⁿx)/n = 0   µ-a.e.

Theorem 4.4.2 (The Shannon-McMillan-Breiman Theorem) Suppose T is an ergodic measure preserving transformation on a probability space (X, F, µ), and let α be a countable partition with H(α) < ∞. Then,

   lim_{n→∞} (1/(n+1)) I_{⋁_{i=0}^{n} T^{−i}α}(x) = h(α, T)   a.e.

Proof For each n = 1, 2, 3, ..., let f_n(x) = I_{α | ⋁_{i=1}^{n} T^{−i}α}(x). Let f(x) = I_{α | ⋁_{i=1}^∞ T^{−i}α}(x) = lim_{n→∞} f_n(x). Notice that f ∈ L¹(X, F, µ) since

   ∫_X f(x) dµ(x) = h(α, T).

Now letting f_0 = I_α, we have

   I_{⋁_{i=0}^{n} T^{−i}α}(x) = I_{⋁_{i=1}^{n} T^{−i}α}(x) + I_{α | ⋁_{i=1}^{n} T^{−i}α}(x)
                              = I_{⋁_{i=0}^{n−1} T^{−i}α}(Tx) + f_n(x)
                              = I_{⋁_{i=1}^{n−1} T^{−i}α}(Tx) + I_{α | ⋁_{i=1}^{n−1} T^{−i}α}(Tx) + f_n(x)
                              = I_{⋁_{i=0}^{n−2} T^{−i}α}(T²x) + f_{n−1}(Tx) + f_n(x)
                              ⋮
                              = I_α(Tⁿx) + f_1(T^{n−1}x) + ··· + f_{n−1}(Tx) + f_n(x).

Then, we have

   (1/(n+1)) I_{⋁_{i=0}^{n} T^{−i}α}(x) = (1/(n+1)) Σ_{k=0}^{n} f_{n−k}(Tᵏx)
                                        = (1/(n+1)) Σ_{k=0}^{n} f(Tᵏx) + (1/(n+1)) Σ_{k=0}^{n} (f_{n−k} − f)(Tᵏx).

By the Ergodic Theorem,

   lim_{n→∞} (1/(n+1)) Σ_{k=0}^{n} f(Tᵏx) = ∫_X f(x) dµ(x) = h(α, T)   a.e.

We now study the sequence { (1/(n+1)) Σ_{k=0}^{n} (f_{n−k} − f)(Tᵏx) }. Let

   F_N = sup_{k≥N} |f_k − f|, and f* = sup_{n≥1} f_n.

Notice that 0 ≤ F_N ≤ f* + f; hence F_N ∈ L¹(X, F, µ), and lim_{N→∞} F_N(x) = 0 a.e. Also, for any k, |f_{n−k} − f| ≤ f* + f, so that |f_{n−k} − f| ∈ L¹(X, F, µ) and lim_{n→∞} |f_{n−k} − f| = 0 a.e. For any N ≤ n,

   (1/(n+1)) Σ_{k=0}^{n} |f_{n−k} − f|(Tᵏx)
      = (1/(n+1)) Σ_{k=0}^{n−N} |f_{n−k} − f|(Tᵏx) + (1/(n+1)) Σ_{k=n−N+1}^{n} |f_{n−k} − f|(Tᵏx)
      ≤ (1/(n+1)) Σ_{k=0}^{n−N} F_N(Tᵏx) + (1/(n+1)) Σ_{k=0}^{N−1} |f_k − f|(T^{n−k}x).

If we take the limit as n → ∞, then by Exercise (4.4.3) the second term tends to 0 a.e., and by the Ergodic Theorem the first term tends a.e. to ∫_X F_N dµ, which in turn tends to zero as N → ∞ by the Dominated Convergence Theorem. Hence,

   lim_{n→∞} (1/(n+1)) I_{⋁_{i=0}^{n} T^{−i}α}(x) = h(α, T)   a.e.

The above theorem can be interpreted as providing an estimate of the size of the atoms of ⋁_{i=0}^{n} T^{−i}α. For n sufficiently large, a typical element A_n ∈ ⋁_{i=0}^{n} T^{−i}α satisfies

   −(1/(n+1)) log µ(A_n) ≈ h(α, T),

or

   µ(A_n) ≈ 2^{−(n+1)h(α,T)}.

Furthermore, if α is a generating partition (i.e., ⋁_{i=0}^∞ T^{−i}α = F), then in the conclusion of the Shannon-McMillan-Breiman Theorem one can replace h(α, T) by h(T).
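The estimate µ(A_n) ≈ 2^{−(n+1)h(α,T)} can be watched numerically for a Bernoulli shift, where the measure of the α-cylinder of order n containing x is simply the product of the symbol probabilities along the orbit. A Monte Carlo sketch (the weight p and sample size are our own illustrative choices):

```python
import math
import random

random.seed(1)
p = 0.3                  # Bernoulli(p, 1-p) shift; alpha = time-zero partition
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))   # h(alpha, T)

n = 200_000
x = [1 if random.random() < p else 0 for _ in range(n + 1)]

# log2 of the measure of the alpha-cylinder of order n containing x is the
# sum of log-probabilities of the symbols, so -log2(mu)/ (n+1) -> h a.e.
log_mu = sum(math.log2(p if s == 1 else 1 - p) for s in x)
print(-log_mu / (n + 1), h)   # the two numbers should be close
```

This is exactly the "asymptotic equipartition" reading of the theorem: a typical long cylinder has measure close to 2^{−(n+1)h}, even though individual cylinder measures vary wildly.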

4.5 Lochs' Theorem

In 1964, G. Lochs compared the decimal and the continued fraction expansions. Let x ∈ [0, 1) be an irrational number, and suppose x = .d_1d_2··· is the decimal expansion of x (which is generated by iterating the map Sx = 10x (mod 1)). Suppose further that

   x = [0; a_1, a_2, a_3, ···] = 1/(a_1 + 1/(a_2 + 1/(a_3 + ···)))   (4.5)

is its regular continued fraction (RCF) expansion (generated by the map Tx = 1/x − ⌊1/x⌋). Let y = .d_1d_2···d_n be the rational number determined by the first n decimal digits of x, and let z = y + 10^{−n}. Then [y, z) is the decimal cylinder of order n containing x, which we also denote by B_n(x); i.e., B_n(x) is the decimal cylinder consisting of all points y in [0, 1) such that the first n decimal digits of y agree with those of x. Similarly, if C_j(x) denotes the continued fraction cylinder of order j containing x, then C_j(x) is the set of all points in [0, 1) such that the first j digits in their continued fraction expansion are the same as those of x. Now let

   y = 1/(b_1 + 1/(b_2 + ··· + 1/b_l)) and z = 1/(c_1 + 1/(c_2 + ··· + 1/c_k))

be the continued fraction expansions of y and z, and let

   m(n, x) = max{ i ≤ min(l, k) : for all j ≤ i, b_j = c_j }.   (4.6)

In other words, m(n, x) is the largest integer such that B_n(x) ⊂ C_{m(n,x)}(x). Lochs proved the following theorem:

Theorem 4.5.1 Let λ denote Lebesgue measure on [0, 1). Then for a.e. x ∈ [0, 1),

   lim_{n→∞} m(n, x)/n = (6 log 2 log 10)/π².

In this section, we will prove a generalization of Lochs' theorem that allows one to compare any two known expansions of numbers. We show that Lochs' theorem is true for any two sequences of interval partitions¹ on [0, 1) satisfying the conclusion of the Shannon-McMillan-Breiman theorem. The content of this section as well as the proofs can be found in [DF]. We begin with a few definitions that will be used in the arguments to follow.

¹By an interval partition we mean a finite or countable partition of [0, 1) into subintervals.

Let λ denote Lebesgue probability measure on [0, 1). If P is an interval partition and x ∈ [0, 1), we let P(x) denote the interval of P containing x.

Definition 4.5.1 Let P = {P_n}_{n=1}^∞ be a sequence of interval partitions. We say that P has entropy c a.e. with respect to λ if

   − log λ(P_n(x))/n → c a.e. with respect to λ.

Definition 4.5.2 Let c ≥ 0, and suppose that P = {P_n}_{n=1}^∞ and Q = {Q_n}_{n=1}^∞ are sequences of interval partitions. For each n ∈ ℕ and x ∈ [0, 1), define

   m_{P,Q}(n, x) = sup{ m | P_n(x) ⊂ Q_m(x) }.

Note that we do not assume that each P_n is refined by P_{n+1}.

Theorem 4.5.2 Let P = {P_n}_{n=1}^∞ and Q = {Q_n}_{n=1}^∞ be sequences of interval partitions and λ Lebesgue probability measure on [0, 1). Suppose that for some constants c > 0 and d > 0, P has entropy c a.e. with respect to λ and Q has entropy d a.e. with respect to λ. Then

   m_{P,Q}(n, x)/n → c/d   a.e.

Proof First we show that

   lim sup_{n→∞} m_{P,Q}(n, x)/n ≤ c/d   a.e.

Fix ε ∈ (0, 1), and fix η > 0 so that (c + η)/(d − η) < (1 + ε) c/d. Let x ∈ [0, 1) be a point at which the convergence conditions of the hypotheses are met. Choose N so that for all n ≥ N,

   λ(P_n(x)) > 2^{−n(c+η)} and λ(Q_n(x)) < 2^{−n(d−η)}.

Fix n so that min{ n, ⌈(1 + ε)(c/d)n⌉ } ≥ N, and let m′ denote any integer greater than (1 + ε)(c/d)n. By the choice of η,

   λ(P_n(x)) > λ(Q_{m′}(x)),

so that P_n(x) is not contained in Q_{m′}(x). Therefore

   m_{P,Q}(n, x) ≤ (1 + ε)(c/d)n,

and so

   lim sup_{n→∞} m_{P,Q}(n, x)/n ≤ (1 + ε) c/d   a.e.

Since ε > 0 was arbitrary,

   lim sup_{n→∞} m_{P,Q}(n, x)/n ≤ c/d   a.e.

Now we show that

   lim inf_{n→∞} m_{P,Q}(n, x)/n ≥ c/d   a.e.

Fix ε > 0. Choose η > 0 so that

   ζ := εc − η( 1 + (1 − ε)(c/d) ) > 0.

For each n ∈ ℕ, let m̄(n) = ⌊(1 − ε)(c/d)n⌋. For brevity, for each n ∈ ℕ we call an element of P_n (respectively Q_n) (n, η)-good if λ(P_n(x)) < 2^{−n(c−η)} (respectively λ(Q_n(x)) > 2^{−n(d+η)}). For each n ∈ ℕ, let

   D_n(η) = { x : P_n(x) is (n, η)-good, Q_{m̄(n)}(x) is (m̄(n), η)-good, and P_n(x) ⊄ Q_{m̄(n)}(x) }.

If x ∈ D_n(η), then P_n(x) contains an endpoint of the (m̄(n), η)-good interval Q_{m̄(n)}(x). Since no more than one atom of P_n can contain a particular endpoint of an atom of Q_{m̄(n)}, we see that

   λ(D_n(η)) < 2 · 2^{−nζ},

and so

   Σ_{n=1}^∞ λ(D_n(η)) < ∞,

which implies that

   λ{ x | x ∈ D_n(η) i.o. } = 0.

In other words, for almost every x ∈ [0, 1), there exists N ∈ ℕ so that for all n ≥ N, if P_n(x) is (n, η)-good and Q_{m̄(n)}(x) is (m̄(n), η)-good, then x ∉ D_n(η), i.e., P_n(x) ⊂ Q_{m̄(n)}(x). Since m̄(n) goes to infinity as n does, we have shown that for almost every x ∈ [0, 1), there exists N ∈ ℕ so that for all n ≥ N, P_n(x) is (n, η)-good, Q_{m̄(n)}(x) is (m̄(n), η)-good, and P_n(x) ⊂ Q_{m̄(n)}(x). By the definition of D_n(η) and m̄(n), for all such n we have m_{P,Q}(n, x) ≥ m̄(n). Thus, for almost every x ∈ [0, 1),

   m_{P,Q}(n, x)/n ≥ ⌊(1 − ε)(c/d)n⌋/n,

so that

   lim inf_{n→∞} m_{P,Q}(n, x)/n ≥ (1 − ε) c/d   a.e.

Since ε > 0 was arbitrary, we have established the theorem.

where β > 1 is a real number). We refer to the resulting expansion as the T -expansion of x. For n ≥ 1. continuous and injective. 1) → [0. α is a generating partition. Let L. continued fraction expansions (T x = x1 − b x1 c). and many others (see the book Ergodic Theory of Numbers). . If Hµ (α) < ∞. and µ(A) = 0 if and only if λ(A) = 0 dλ dµ for all Lebesgue sets A. with respect to µ. and there exists a T invariant probability measure µ equivalent to λ with bounded density.1 Show that the conclusion of the Shannon-McMillan-Breiman Theorem holds if we replace µ by λ. log λ(Pn (x)) lim − = hµ (T ) a. with respect to λ.3 A surjective map T : [0. dµ dλ (Both and are bounded. (b) T is ergodic with respect to Lebesgue measure λ. where n is a positive integer). Almost all known expansions on [0.e. Let T be an NTFM with corresponding partition α.e. 1) are generated by a NTFM. 1) with digits in D.76 Entropy Definition 4.5. i. let Pn = n−1 −i W i=0 T α. Pn is an interval partition. and T -invariant measure µ equivalent to λ. β expansions (T x = βx (mod 1). Furthermore. n→∞ n Iterations of T generate expansions of points x ∈ [0.). then by property (a). then Shannon- McMillan-Breiman Theorem gives log µ(Pn (x)) lim − = hµ (T ) a. M > 0 be such that Lλ(A) ≤ µ(A) < M λ(A) for all Lebesgue sets A (property (b)). Among them are the n-adic expansions (T x = nx (mod 1).5. n→∞ n Exercise 4.e. i ∈ D} such that T restricted to each atom of α (cylinder set of order 0) is monotone. 1) is called a number theoretic fibered map (NTFM) if it satisfies the following conditions: (a) there exists a finite or countable partition of intervals α = {Ai .

Exercise 4.5.2 Prove Theorem (4.5.1) using Theorem (4.5.2). Use the fact that the continued fraction map T is ergodic with respect to the Gauss measure µ, given by

µ(B) = (1 / log 2) ∫_B 1/(1 + x) dx,

and has entropy equal to h_µ(T) = π² / (6 log 2).

Exercise 4.5.3 Reformulate and prove Lochs' Theorem for any two NTFM maps S and T on [0, 1).
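The entropy value quoted in the exercise can be checked numerically. By Rokhlin's entropy formula (not proved in these notes) the entropy of the continued fraction map equals ∫ log|T′| dµ = ∫₀¹ (−2 log x) dµ(x) against the Gauss measure; the midpoint-rule computation below (an illustrative check, not part of the text) reproduces π²/(6 log 2) ≈ 2.3731:

```python
import math

# Midpoint rule for h = ∫_0^1 (-2 ln x) / (ln 2 · (1 + x)) dx, where
# 1 / (ln 2 · (1 + x)) is the density of the Gauss measure.  The log
# singularity at 0 is integrable, so the midpoint rule converges.
N = 1_000_000
h = 0.0
for k in range(N):
    x = (k + 0.5) / N
    h += (-2.0 * math.log(x)) / (math.log(2) * (1.0 + x))
h /= N

exact = math.pi ** 2 / (6 * math.log(2))
print(h, exact)  # both ≈ 2.3731
assert abs(h - exact) < 1e-3
```

This is the natural-logarithm value; dividing by log 2 once more converts to bits.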


Chapter 5

Hurewicz Ergodic Theorem

In this chapter we consider a class of non-measure preserving transformations. In particular, we study invertible, non-singular and conservative transformations on a probability space. We first start with a quick review of equivalent measures; we then define non-singular and conservative transformations, and state some of their properties. We end this chapter by giving a new proof of the Hurewicz Ergodic Theorem, which is a generalization of the Birkhoff Ergodic Theorem to non-singular conservative transformations.

5.1 Equivalent measures

Recall that two measures µ and ν on a measure space (Y, F) are equivalent if µ and ν have the same null sets, i.e., µ(A) = 0 ⇔ ν(A) = 0. The theorem of Radon-Nikodym says that if µ, ν are σ-finite and equivalent, then there exist measurable functions f, g ≥ 0 such that

µ(A) = ∫_A f dν  and  ν(A) = ∫_A g dµ,  A ∈ F.

Furthermore, for all h ∈ L¹(µ) (or L¹(ν)),

∫ h dµ = ∫ hf dν  and  ∫ h dν = ∫ hg dµ.
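On a finite space the Radon-Nikodym densities are just ratios of point masses, which makes the identities above easy to verify directly. The following toy example (weights chosen arbitrarily for illustration, not taken from the text) does so:

```python
# Two equivalent measures on a 4-point space and their densities
# f = dµ/dν and g = dν/dµ.
mu = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
nu = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

f = {x: mu[x] / nu[x] for x in mu}   # dµ/dν
g = {x: nu[x] / mu[x] for x in mu}   # dν/dµ

def integral(h, m):
    """∫ h dm on a finite space: a weighted sum over the atoms."""
    return sum(h[x] * m[x] for x in m)

h = {0: 1.0, 1: -2.0, 2: 0.5, 3: 3.0}   # an arbitrary test function

# ∫ h dµ = ∫ h·f dν  and  ∫ h dν = ∫ h·g dµ
assert abs(integral(h, mu) - integral({x: h[x] * f[x] for x in h}, nu)) < 1e-12
assert abs(integral(h, nu) - integral({x: h[x] * g[x] for x in h}, mu)) < 1e-12
print("change-of-density identities hold")
```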

Usually the function f is denoted by dµ/dν and the function g by dν/dµ.

Now suppose that (X, B, µ) is a probability space, and T : X → X a measurable transformation. One can define a new measure µ ∘ T⁻¹ on (X, B) by µ ∘ T⁻¹(A) = µ(T⁻¹A) for A ∈ B. It is not hard to prove that for f ∈ L¹(µ),

∫ f d(µ ∘ T⁻¹) = ∫ f ∘ T dµ.  (5.1)

Exercise 5.1.1 Starting with indicator functions, give a proof of (5.1).

Note that if T is invertible, then one has

∫ f d(µ ∘ T) = ∫ f ∘ T⁻¹ dµ.  (5.2)

5.2 Non-singular and conservative transformations

Definition 5.2.1 Let (X, B, µ) be a probability space and T : X → X an invertible measurable function. T is said to be non-singular if for any A ∈ B, µ(A) = 0 if and only if µ(T⁻¹A) = 0.

Since T is invertible, non-singularity implies that µ(A) = 0 if and only if µ(TⁿA) = 0 for all n ≠ 0. This implies that the measures µ ∘ Tⁿ defined by µ ∘ Tⁿ(A) = µ(TⁿA), n ≠ 0, are equivalent to µ (and hence equivalent to each other). By the theorem of Radon-Nikodym, there exists for each n ≠ 0 a non-negative measurable function ω_n(x) = d(µ ∘ Tⁿ)/dµ (x) such that

µ(TⁿA) = ∫_A ω_n(x) dµ(x).

We have the following propositions.

Proposition 5.2.1 Let (X, B, µ) be a probability space, and T : X → X an invertible non-singular transformation. Then for every f ∈ L¹(µ),

∫_X f(x) dµ(x) = ∫_X f(Tx)ω₁(x) dµ(x) = ∫_X f(Tⁿx)ω_n(x) dµ(x).

Proof We show the result for indicator functions only; the rest of the proof is left to the reader. For any A ∈ B,

∫_X 1_A(x) dµ(x) = µ(A) = µ(T(T⁻¹A)) = ∫_{T⁻¹A} ω₁(x) dµ(x) = ∫ 1_A(Tx)ω₁(x) dµ(x).

Proposition 5.2.2 Under the assumptions of Proposition 5.2.1, one has for all n, m ≥ 1 that

ω_{n+m}(x) = ω_n(x)ω_m(Tⁿx),  µ-a.e.

Proof For any A ∈ B,

∫_A ω_n(x)ω_m(Tⁿx) dµ(x) = ∫_X 1_A(x)ω_m(Tⁿx) d(µ ∘ Tⁿ)(x)
= ∫_X 1_A(T⁻ⁿx)ω_m(x) dµ(x)
= ∫_X 1_{TⁿA}(x) d(µ ∘ Tᵐ)(x)
= µ ∘ Tᵐ(TⁿA) = µ(Tᵐ⁺ⁿA) = ∫_A ω_{n+m}(x) dµ(x).

Hence, ω_{n+m}(x) = ω_n(x)ω_m(Tⁿx), µ-a.e.

Exercise 5.2.1 Suppose (X, B, µ) is a probability space, and T : X → X is invertible and non-singular. For any measurable function f, set f_n(x) = Σ_{i=0}^{n−1} f(Tⁱx)ω_i(x), where ω₀(x) = 1. Show that for all n, m ≥ 1,

f_{n+m}(x) = f_n(x) + ω_n(x)f_m(Tⁿx).
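The cocycle identity of Proposition 5.2.2 can be seen concretely on a finite example: for an invertible map on finitely many atoms, ω_n(x) is simply µ({Tⁿx})/µ({x}), and the identity reduces to a telescoping product. The following sketch (a toy model, not from the text) checks it for a cyclic shift on five points with a non-invariant measure:

```python
# Invertible non-singular map: the cyclic shift T(x) = x + 1 (mod 5)
# on a 5-point space with a measure µ that is NOT T-invariant.
mu = [0.05, 0.1, 0.15, 0.2, 0.5]

def T(x, n=1):
    return (x + n) % 5

def omega(n, x):
    # density of µ∘T^n w.r.t. µ at the atom {x}: µ({T^n x}) / µ({x})
    return mu[T(x, n)] / mu[x]

for x in range(5):
    for n in range(1, 4):
        for m in range(1, 4):
            lhs = omega(n + m, x)
            rhs = omega(n, x) * omega(m, T(x, n))
            assert abs(lhs - rhs) < 1e-12
print("ω_{n+m}(x) = ω_n(x) ω_m(T^n x) verified on all atoms")
```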

Definition 5.2.2 Let (X, B, µ) be a probability space, and T : X → X a measurable transformation. We say that T is conservative if for any A ∈ B with µ(A) > 0, there exists an n ≥ 1 such that µ(A ∩ T⁻ⁿA) > 0.

Note that if T is invertible, non-singular and conservative, then T⁻¹ is also non-singular and conservative. In this case, for any A ∈ B with µ(A) > 0, there exists an n ≠ 0 such that µ(A ∩ TⁿA) > 0.

Proposition 5.2.3 Suppose T is invertible, non-singular and conservative on the probability space (X, B, µ), and let A ∈ B with µ(A) > 0. Then for µ-a.e. x ∈ A there exist infinitely many positive and negative integers n such that Tⁿx ∈ A.

Proof Let B = {x ∈ A : Tⁿx ∉ A for all n ≥ 1}. Note that B ∩ T⁻ⁿB = ∅ for any n ≥ 1. If µ(B) > 0, then by conservativity there exists an n ≥ 1 such that µ(B ∩ T⁻ⁿB) is positive, which is a contradiction. Hence, µ(B) = 0, and by non-singularity we have µ(T⁻ⁿB) = 0 for all n ≥ 1. Now, let C = {x ∈ A : Tⁿx ∈ A for only finitely many n ≥ 1}; then C ⊂ ⋃_{n=1}^∞ T⁻ⁿB, implying that

µ(C) ≤ Σ_{n=1}^∞ µ(T⁻ⁿB) = 0.

Therefore, for almost every x ∈ A there exist infinitely many n ≥ 1 such that Tⁿx ∈ A. Replacing T by T⁻¹ yields the result for n ≤ −1.

Proposition 5.2.4 Suppose T is invertible, non-singular and conservative. Then

Σ_{n=1}^∞ ω_n(x) = ∞,  µ-a.e.

Proof Let A = {x ∈ X : Σ_{n=1}^∞ ω_n(x) < ∞}. Note that

A = ⋃_{M=1}^∞ {x ∈ X : Σ_{n=1}^∞ ω_n(x) < M}.

If µ(A) > 0, then there exists an M ≥ 1 such that the set

B = {x ∈ X : Σ_{n=1}^∞ ω_n(x) < M}

has positive measure. Then ∫_B Σ_{n=1}^∞ ω_n(x) dµ(x) < Mµ(B) < ∞. However,

∫_B Σ_{n=1}^∞ ω_n(x) dµ(x) = Σ_{n=1}^∞ ∫_B ω_n(x) dµ(x)
= Σ_{n=1}^∞ µ(TⁿB)
= Σ_{n=1}^∞ ∫_X 1_{TⁿB}(x) dµ(x)
= Σ_{n=1}^∞ ∫_X 1_B(T⁻ⁿx) dµ(x).

Hence, ∫_X Σ_{n=1}^∞ 1_B(T⁻ⁿx) dµ(x) < ∞, which implies that

Σ_{n=1}^∞ 1_B(T⁻ⁿx) < ∞  µ-a.e.

Thus, for µ-a.e. x one has T⁻ⁿx ∈ B for only finitely many n ≥ 1, contradicting Proposition 5.2.3. Thus µ(A) = 0, and

Σ_{n=1}^∞ ω_n(x) = ∞,  µ-a.e.

5.3 Hurewicz Ergodic Theorem

The following theorem by Hurewicz is a generalization of Birkhoff's Ergodic Theorem to our setting. We give a new proof, similar to the proof of Birkhoff's Theorem in Section 2.1 and [KK]; see also Hurewicz' original paper [H].

Theorem 5.3.1 Let (X, B, µ) be a probability space, and T : X → X an invertible, non-singular and conservative transformation. If f ∈ L¹(µ), then

lim_{n→∞} ( Σ_{i=0}^{n−1} f(Tⁱx)ω_i(x) ) / ( Σ_{i=0}^{n−1} ω_i(x) ) = f*(x)

exists µ-a.e., f* is T-invariant, and

∫_X f(x) dµ(x) = ∫_X f*(x) dµ(x).

Proof Assume with no loss of generality that f ≥ 0 (otherwise we write f = f⁺ − f⁻ and consider each part separately). Let

f_n(x) = f(x) + f(Tx)ω₁(x) + ⋯ + f(Tⁿ⁻¹x)ω_{n−1}(x),
g_n(x) = ω₀(x) + ω₁(x) + ⋯ + ω_{n−1}(x),

where ω₀(x) = g₁(x) = 1. By Exercise (5.2.1) and Proposition (5.2.2), one has for all n, m ≥ 1

f_{n+m}(x) = f_n(x) + ω_n(x)f_m(Tⁿx)  and  g_{n+m}(x) = g_n(x) + ω_n(x)g_m(Tⁿx).

Define

f̄(x) = lim sup_{n→∞} f_n(x)/g_n(x)  and  f̲(x) = lim inf_{n→∞} f_n(x)/g_n(x).

We will show that f̄ and f̲ are T-invariant. Indeed,

f̄(Tx) = lim sup_{n→∞} f_n(Tx)/g_n(Tx)
= lim sup_{n→∞} [ (f_{n+1}(x) − f(x)) / ω₁(x) ] / [ (g_{n+1}(x) − g₁(x)) / ω₁(x) ]
= lim sup_{n→∞} (f_{n+1}(x) − f(x)) / (g_{n+1}(x) − g₁(x))
= lim sup_{n→∞} [ f_{n+1}(x)/g_{n+1}(x) · g_{n+1}(x)/(g_{n+1}(x) − g₁(x)) − f(x)/(g_{n+1}(x) − g₁(x)) ]
= lim sup_{n→∞} f_{n+1}(x)/g_{n+1}(x) = f̄(x),

since g_{n+1}(x) → ∞ by Proposition (5.2.4).

(Similarly, f̲ is T-invariant.) Now, to prove that f* exists, is integrable and T-invariant, it is enough to show that

∫_X f̄ dµ ≤ ∫_X f dµ ≤ ∫_X f̲ dµ.

For since f̄ − f̲ ≥ 0, this would imply that f̄ = f̲ = f* a.e.

We first prove that ∫_X f̄ dµ ≤ ∫_X f dµ. Fix any 0 < ε < 1, and let L > 0 be any real number. By definition of f̄, for any x ∈ X there exists an integer m > 0 such that

f_m(x)/g_m(x) ≥ min(f̄(x), L)(1 − ε).

Hence, for any δ > 0 there exists an integer M > 0 such that the set

X₀ = {x ∈ X : ∃ 1 ≤ m ≤ M with f_m(x) ≥ g_m(x) min(f̄(x), L)(1 − ε)}

has measure at least 1 − δ. Define F on X by

F(x) = f(x) if x ∈ X₀,  F(x) = L if x ∉ X₀.

Notice that f ≤ F (why?). For any n = 0, 1, 2, …, let

a_n = a_n(x) = F(Tⁿx)ω_n(x)  and  b_n = b_n(x) = min(f̄(x), L)(1 − ε)ω_n(x).

We now show that {a_n} and {b_n} satisfy the hypothesis of Lemma 2.1.1 with M > 0 as above.

– If Tⁿx ∈ X₀, then there exists 1 ≤ m ≤ M such that f_m(Tⁿx) ≥ min(f̄(x), L)(1 − ε)g_m(Tⁿx), using the T-invariance of f̄. Hence

ω_n(x)f_m(Tⁿx) ≥ min(f̄(x), L)(1 − ε)g_m(Tⁿx)ω_n(x),

so that

b_n + b_{n+1} + ⋯ + b_{n+m−1} = min(f̄(x), L)(1 − ε)g_m(Tⁿx)ω_n(x)
≤ ω_n(x)f_m(Tⁿx)
= f(Tⁿx)ω_n(x) + f(Tⁿ⁺¹x)ω_{n+1}(x) + ⋯ + f(Tⁿ⁺ᵐ⁻¹x)ω_{n+m−1}(x)
≤ F(Tⁿx)ω_n(x) + F(Tⁿ⁺¹x)ω_{n+1}(x) + ⋯ + F(Tⁿ⁺ᵐ⁻¹x)ω_{n+m−1}(x)
= a_n + a_{n+1} + ⋯ + a_{n+m−1}.

– If Tⁿx ∉ X₀, then take m = 1, since

a_n = F(Tⁿx)ω_n(x) = Lω_n(x) ≥ min(f̄(x), L)(1 − ε)ω_n(x) = b_n.

Hence, by Lemma 2.1.1, for all integers N > M one has

F(x) + F(Tx)ω₁(x) + ⋯ + F(T^{N−1}x)ω_{N−1}(x) ≥ min(f̄(x), L)(1 − ε)g_{N−M}(x).

Integrating both sides, and using Proposition (5.2.1) together with the T-invariance of f̄, one gets

N ∫_X F(x) dµ(x) ≥ ∫_X min(f̄(x), L)(1 − ε)g_{N−M}(x) dµ(x) = (N − M) ∫_X min(f̄(x), L)(1 − ε) dµ(x).

Since

∫_X F(x) dµ(x) = ∫_{X₀} f(x) dµ(x) + Lµ(X \ X₀),

one has

∫_X f(x) dµ(x) ≥ ∫_{X₀} f(x) dµ(x) = ∫_X F(x) dµ(x) − Lµ(X \ X₀)
≥ ((N − M)/N) ∫_X min(f̄(x), L)(1 − ε) dµ(x) − Lδ.

Now letting first N → ∞, then δ → 0, then ε → 0, and lastly L → ∞, one gets together with the monotone convergence theorem that f̄ is integrable, and

∫_X f(x) dµ(x) ≥ ∫_X f̄(x) dµ(x).

We now prove that

∫_X f(x) dµ(x) ≤ ∫_X f̲(x) dµ(x).

Fix ε > 0. By definition of f̲, for any x ∈ X there exists an integer m such that

f_m(x)/g_m(x) ≤ f̲(x) + ε.

For any δ > 0 there exists an integer M > 0 such that the set

Y₀ = {x ∈ X : ∃ 1 ≤ m ≤ M with f_m(x) ≤ (f̲(x) + ε)g_m(x)}

has measure at least 1 − δ. Define G on X by

G(x) = f(x) if x ∈ Y₀,  G(x) = 0 if x ∉ Y₀.

Notice that G ≤ f. Let

b_n = G(Tⁿx)ω_n(x)  and  a_n = (f̲(x) + ε)ω_n(x).

We now check that the sequences {a_n} and {b_n} satisfy the hypothesis of Lemma 2.1.1 with M > 0 as above.

– If Tⁿx ∈ Y₀, then there exists 1 ≤ m ≤ M such that f_m(Tⁿx) ≤ (f̲(x) + ε)g_m(Tⁿx). Hence, by Proposition (5.2.2),

ω_n(x)f_m(Tⁿx) ≤ (f̲(x) + ε)g_m(Tⁿx)ω_n(x) = (f̲(x) + ε)(ω_n(x) + ⋯ + ω_{n+m−1}(x)),

and using the fact that G ≤ f, one gets

b_n + ⋯ + b_{n+m−1} = G(Tⁿx)ω_n(x) + ⋯ + G(Tⁿ⁺ᵐ⁻¹x)ω_{n+m−1}(x)
≤ f(Tⁿx)ω_n(x) + ⋯ + f(Tⁿ⁺ᵐ⁻¹x)ω_{n+m−1}(x) = ω_n(x)f_m(Tⁿx)
≤ (f̲(x) + ε)(ω_n(x) + ⋯ + ω_{n+m−1}(x)) = a_n + ⋯ + a_{n+m−1}.

– If Tⁿx ∉ Y₀, then take m = 1, since b_n = G(Tⁿx)ω_n(x) = 0 ≤ (f̲(x) + ε)ω_n(x) = a_n.

Hence, by Lemma 2.1.1, one has for all integers N > M

G(x) + G(Tx)ω₁(x) + ⋯ + G(T^{N−M−1}x)ω_{N−M−1}(x) ≤ (f̲(x) + ε)g_N(x).

Integrating both sides yields

(N − M) ∫_X G(x) dµ(x) ≤ N ( ∫_X f̲(x) dµ(x) + ε ).

Since f ≥ 0, the measure ν defined by ν(A) = ∫_A f(x) dµ(x) is absolutely continuous with respect to the measure µ. Hence, for every δ₀ > 0 there exists a δ > 0 such that if µ(A) < δ, then ν(A) < δ₀. Since µ(X \ Y₀) < δ, we get ν(X \ Y₀) = ∫_{X\Y₀} f(x) dµ(x) < δ₀. Hence,

∫_X f(x) dµ(x) = ∫_X G(x) dµ(x) + ∫_{X\Y₀} f(x) dµ(x) ≤ (N/(N − M)) ∫_X (f̲(x) + ε) dµ(x) + δ₀.

Now, letting first N → ∞, then δ → 0 (and hence δ₀ → 0), and finally ε → 0, one gets

∫_X f(x) dµ(x) ≤ ∫_X f̲(x) dµ(x).

This shows that

∫_X f̄ dµ ≤ ∫_X f dµ ≤ ∫_X f̲ dµ;

hence, f̄ = f̲ = f* a.e., f* is T-invariant, and ∫_X f dµ = ∫_X f* dµ.

Remark We can extend the notion of ergodicity to our setting: if T is non-singular and conservative, we say that T is ergodic if for any measurable set A satisfying µ(A∆T⁻¹A) = 0, one has µ(A) = 0 or 1. It is easy to check that the proof of Proposition (1.7.1) holds in this case, so that T ergodic implies that each T-invariant function is a constant µ-a.e. Hence, if T is invertible, non-singular, conservative and ergodic, then by the Hurewicz Ergodic Theorem one has, for any f ∈ L¹(µ),

lim_{n→∞} ( Σ_{i=0}^{n−1} f(Tⁱx)ω_i(x) ) / ( Σ_{i=0}^{n−1} ω_i(x) ) = ∫_X f dµ,  µ-a.e.
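For a finite toy model (a cyclic shift on five points with a non-invariant measure µ, which is invertible, non-singular, conservative and ergodic; illustrative only, not from the text), the Hurewicz averages can be computed directly. Over full cycles they equal ∫ f dµ exactly, illustrating the ergodic case above:

```python
# Cyclic shift T(x) = x + 1 (mod 5); µ is a probability measure that is
# not T-invariant, and ω_i(x) = µ({T^i x}) / µ({x}).
mu = [0.05, 0.1, 0.15, 0.2, 0.5]
f  = [1.0, -1.0, 2.0, 0.0, 3.0]

def hurewicz_average(x, n):
    num = den = 0.0
    for i in range(n):
        w = mu[(x + i) % 5] / mu[x]       # ω_i(x)
        num += f[(x + i) % 5] * w
        den += w
    return num / den

space_average = sum(fx * mx for fx, mx in zip(f, mu))  # ∫ f dµ
print(hurewicz_average(2, 5000), space_average)
assert abs(hurewicz_average(2, 5000) - space_average) < 1e-9
```

When n is a multiple of the cycle length, both numerator and denominator telescope to full sums over the space and the average equals ∫ f dµ exactly (up to rounding).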

Chapter 6

Invariant Measures for Continuous Transformations

6.1 Existence

Suppose X is a compact metric space, and let B be the Borel σ-algebra, i.e. the σ-algebra generated by the open sets. We denote by C(X) the Banach space of all complex valued continuous functions on X under the supremum norm. Let M(X) be the collection of all Borel probability measures on X. M(X) is a convex set, i.e. pµ + (1 − p)ν ∈ M(X) whenever µ, ν ∈ M(X) and 0 ≤ p ≤ 1. There is a natural embedding of the space X in M(X) via the map x → δ_x, where δ_x is the Dirac measure concentrated at x (δ_x(A) = 1 if x ∈ A, and is zero otherwise). Theorem 6.1.2 below shows that a member of M(X) is determined by how it integrates continuous functions.

Theorem 6.1.1 Every member of M(X) is regular, i.e., for all B ∈ B, every ε > 0 and any µ ∈ M(X), there exist an open set U_ε and a closed set C_ε such that C_ε ⊆ B ⊆ U_ε and µ(U_ε \ C_ε) < ε.

Idea of proof Call a set B ∈ B with the above property a regular set. Let R = {B ∈ B : B is regular}. Show that R is a σ-algebra containing all the closed sets.

Corollary 6.1.1 For any B ∈ B,

µ(B) = sup_{C⊆B: C closed} µ(C) = inf_{B⊆U: U open} µ(U).

Theorem 6.1.2 Let µ, m ∈ M(X). If

∫_X f dµ = ∫_X f dm

for all f ∈ C(X), then µ = m.

Proof From the above corollary, it suffices to show that µ(C) = m(C) for all closed subsets C of X, and hence for all Borel sets. Let ε > 0. By regularity of the measure m there exists an open set U such that C ⊆ U and m(U \ C) < ε. Define f ∈ C(X) as follows:

f(x) = 0 if x ∉ U,  f(x) = d(x, X\U) / ( d(x, X\U) + d(x, C) ) if x ∈ U.

Notice that 1_C ≤ f ≤ 1_U, thus

µ(C) ≤ ∫_X f dµ = ∫_X f dm ≤ m(U) ≤ m(C) + ε.

Using a similar argument, one can show that m(C) ≤ µ(C) + ε. Therefore, µ(C) = m(C) for all closed sets.

A sequence {µ_n} in M(X) converges to µ ∈ M(X) if and only if

lim_{n→∞} ∫_X f(x) dµ_n(x) = ∫_X f(x) dµ(x)

for all f ∈ C(X). This allows us to define a metric structure on M(X). We will show that under this notion of convergence the space M(X) is compact, but first we need the Riesz Representation Theorem.

Theorem 6.1.3 (The Riesz Representation Theorem) Let X be a compact metric space and J : C(X) → C a continuous linear map such that J is a positive operator and J(1) = 1. Then there exists a µ ∈ M(X) such that J(f) = ∫_X f(x) dµ(x).

Theorem 6.1.4 The space M(X) is compact.

Idea of proof Let {µ_n} be a sequence in M(X). Choose a countable dense subset {f_n} of C(X). The sequence {∫_X f₁ dµ_n} is a bounded sequence of complex numbers, hence has a convergent subsequence {∫_X f₁ dµ_n^{(1)}}. The sequence {∫_X f₂ dµ_n^{(1)}} is bounded, and hence has a convergent subsequence {∫_X f₂ dµ_n^{(2)}}. Notice that {∫_X f₁ dµ_n^{(2)}} is also convergent. We continue in this manner, to get for each i a subsequence {µ_n^{(i)}} of {µ_n^{(i−1)}} such that {∫_X f_j dµ_n^{(i)}} converges for all j ≤ i. Consider the diagonal sequence {µ_n^{(n)}}; then {∫_X f_j dµ_n^{(n)}} converges for all j, and hence {∫_X f dµ_n^{(n)}} converges for all f ∈ C(X). Now define J : C(X) → C by J(f) = lim_{n→∞} ∫_X f dµ_n^{(n)}. Then J is linear, continuous (|J(f)| ≤ sup_{x∈X} |f(x)|), positive and J(1) = 1. Hence, by the Riesz Representation Theorem, there exists a µ ∈ M(X) such that J(f) = lim_{n→∞} ∫_X f dµ_n^{(n)} = ∫_X f dµ. Thus, lim_{n→∞} µ_n^{(n)} = µ, and M(X) is compact.

Let T : X → X be a continuous transformation; then T is measurable with respect to B, since B is generated by the open sets. T induces in a natural way an operator T : M(X) → M(X) given by

(Tµ)(A) = µ(T⁻¹A)

for all A ∈ B; then (Tⁱµ)(A) = µ(T⁻ⁱA). Using a standard argument, one can easily show that

∫_X f(x) d(Tµ)(x) = ∫_X f(Tx) dµ(x)

for all continuous functions f on X. Note that T is measure preserving with respect to µ ∈ M(X) if and only if Tµ = µ. Equivalently, µ is measure preserving if and only if

∫_X f(x) dµ(x) = ∫_X f(Tx) dµ(x)

for all continuous functions f on X. Let

M(X, T) = {µ ∈ M(X) : Tµ = µ}

be the collection of all probability measures under which T is measure preserving.

Theorem 6.1.5 Let T : X → X be continuous, and {σ_n} a sequence in M(X). Define a sequence {µ_n} in M(X) by

µ_n = (1/n) Σ_{i=0}^{n−1} Tⁱσ_n.

Then, any limit point µ of {µ_n} is a member of M(X, T).

Proof We need to show that for any continuous function f on X, one has ∫_X f(x) dµ(x) = ∫_X f(Tx) dµ(x). Since M(X) is compact, there exist a µ ∈ M(X) and a subsequence {µ_{n_j}} such that µ_{n_j} → µ in M(X). Now, for any f continuous, we have

| ∫_X f(Tx) dµ(x) − ∫_X f(x) dµ(x) |
= lim_{j→∞} | ∫_X f(Tx) dµ_{n_j}(x) − ∫_X f(x) dµ_{n_j}(x) |
= lim_{j→∞} (1/n_j) | ∫_X Σ_{i=0}^{n_j−1} ( f(Tⁱ⁺¹x) − f(Tⁱx) ) dσ_{n_j}(x) |
= lim_{j→∞} (1/n_j) | ∫_X ( f(T^{n_j}x) − f(x) ) dσ_{n_j}(x) |
≤ lim_{j→∞} ( 2 sup_{x∈X} |f(x)| ) / n_j = 0.

Therefore, µ ∈ M(X, T).

Theorem 6.1.6 Let T be a continuous transformation on a compact metric space. Then:

(i) M(X, T) is a compact convex subset of M(X).

(ii) µ ∈ M(X, T) is an extreme point (i.e. µ cannot be written in a non-trivial way as a convex combination of elements of M(X, T)) if and only if T is ergodic with respect to µ.

Proof (i) Clearly M(X, T) is convex. Now let {µ_n} be a sequence in M(X, T) converging to µ in M(X). We need to show that µ ∈ M(X, T).

Since T is continuous, for any continuous function f on X the function f ∘ T is also continuous. Hence,

∫_X f(Tx) dµ(x) = lim_{n→∞} ∫_X f(Tx) dµ_n(x) = lim_{n→∞} ∫_X f(x) dµ_n(x) = ∫_X f(x) dµ(x).

Therefore, T is measure preserving with respect to µ, and µ ∈ M(X, T). Thus, M(X, T) is a compact convex subset of M(X).

(ii) Suppose T is ergodic with respect to µ, and assume that µ = pµ₁ + (1 − p)µ₂ for some µ₁, µ₂ ∈ M(X, T), and some 0 < p ≤ 1. We will show that µ = µ₁. Notice that the measure µ₁ is absolutely continuous with respect to µ, and T is ergodic with respect to µ; hence by Theorem (2.2) we have µ₁ = µ. Thus, µ cannot be written in a non-trivial way as a convex combination of elements of M(X, T), and µ is an extreme point of M(X, T).

Conversely (we prove the contrapositive), suppose that T is not ergodic with respect to µ. Then there exists a measurable set E such that T⁻¹E = E, and 0 < µ(E) < 1. Define measures µ₁, µ₂ on X by

µ₁(B) = µ(B ∩ E)/µ(E)  and  µ₂(B) = µ(B ∩ (X \ E))/µ(X \ E).

Since E and X \ E are T-invariant sets, µ₁, µ₂ ∈ M(X, T), and µ₁ ≠ µ₂ since µ₁(E) = 1 while µ₂(E) = 0. Furthermore, for any measurable set B,

µ(B) = µ(E)µ₁(B) + (1 − µ(E))µ₂(B).

Thus, µ is a non-trivial convex combination of elements of M(X, T), hence µ is not an extreme point of M(X, T).

Since the Banach space C(X) of all continuous functions on X (under the sup norm) is separable, i.e. C(X) has a countable dense subset, one gets the following strengthening of the Ergodic Theorem.

Theorem 6.1.7 If T : X → X is continuous, µ ∈ M(X, T), and T is ergodic with respect to µ, then there exists a measurable set Y such that µ(Y) = 1, and

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(x) dµ(x)

for all x ∈ Y and all continuous functions f on X.

Proof Choose a countable dense subset {f_k} in C(X). By the Ergodic Theorem, for each k there exists a subset X_k with µ(X_k) = 1 and

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f_k(Tⁱx) = ∫_X f_k(x) dµ(x)

for all x ∈ X_k. Let Y = ⋂_{k=1}^∞ X_k; then µ(Y) = 1, and the above limit holds for all k and all x ∈ Y. Now, let f ∈ C(X); then there exists a subsequence {f_{k_j}} converging to f in the supremum norm, and hence uniformly. For any x ∈ Y, using uniform convergence and the dominated convergence theorem, one gets

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = lim_{n→∞} lim_{j→∞} (1/n) Σ_{i=0}^{n−1} f_{k_j}(Tⁱx)
= lim_{j→∞} lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f_{k_j}(Tⁱx)
= lim_{j→∞} ∫_X f_{k_j} dµ = ∫_X f dµ.

Theorem 6.1.8 Let T : X → X be continuous, and µ ∈ M(X, T). Then T is ergodic with respect to µ if and only if

(1/n) Σ_{i=0}^{n−1} δ_{Tⁱx} → µ  for µ-a.e. x.

Notice that for any f ∈ C(X),

∫_X f(y) d(δ_{Tⁱx})(y) = f(Tⁱx).

Proof Suppose T is ergodic with respect to µ. Hence, by Theorem 6.1.7, there exists a measurable set Y with µ(Y) = 1 such that

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} ∫_X f(y) d(δ_{Tⁱx})(y) = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(y) dµ(y)

for all x ∈ Y and all f ∈ C(X). Thus, (1/n) Σ_{i=0}^{n−1} δ_{Tⁱx} → µ for all x ∈ Y.

Conversely, suppose (1/n) Σ_{i=0}^{n−1} δ_{Tⁱx} → µ for all x ∈ Y, where µ(Y) = 1. Then for any f ∈ C(X) and any g ∈ L¹(X, B, µ) one has

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx)g(x) = g(x) ∫_X f(y) dµ(y),  µ-a.e.

By the dominated convergence theorem,

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} ∫_X f(Tⁱx)g(x) dµ(x) = ∫_X g(x) dµ(x) ∫_X f(y) dµ(y).

Now, let F, G ∈ L²(X, B, µ); we want the same conclusion with f replaced by F and g by G. Let ε > 0. There exists f ∈ C(X) such that ‖F − f‖₂ < ε, which implies that |∫ F dµ − ∫ f dµ| < ε. Furthermore, there exists N so that for n ≥ N one has

| (1/n) Σ_{i=0}^{n−1} ∫_X f(Tⁱx)G(x) dµ(x) − ∫_X G dµ ∫_X f dµ | < ε.

Thus, for n ≥ N one has

| (1/n) Σ_{i=0}^{n−1} ∫_X F(Tⁱx)G(x) dµ(x) − ∫_X G dµ ∫_X F dµ |
≤ ∫_X (1/n) Σ_{i=0}^{n−1} | F(Tⁱx) − f(Tⁱx) | |G(x)| dµ(x)
+ | (1/n) Σ_{i=0}^{n−1} ∫_X f(Tⁱx)G(x) dµ(x) − ∫_X G dµ ∫_X f dµ |
+ | ∫_X f dµ ∫_X G dµ − ∫_X F dµ ∫_X G dµ |
< ‖G‖₂ε + ε + ‖G‖₂ε,

where the first term is bounded using the Cauchy-Schwarz inequality and the T-invariance of µ. Thus,

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} ∫_X F(Tⁱx)G(x) dµ(x) = ∫_X G(x) dµ(x) ∫_X F(y) dµ(y)

for all F, G ∈ L²(X, B, µ). Taking F and G to be indicator functions, one gets that T is ergodic.

Exercise 6.1.1 Let X be a compact metric space and T : X → X be a continuous homeomorphism. Let x ∈ X be a periodic point of T of period n, i.e. Tⁿx = x and Tʲx ≠ x for j < n. Show that if µ ∈ M(X, T) is ergodic and µ({x}) > 0, then

µ = (1/n) Σ_{i=0}^{n−1} δ_{Tⁱx}.

6.2 Unique Ergodicity

A continuous transformation T : X → X on a compact metric space is uniquely ergodic if there is only one T-invariant probability measure µ on X, i.e. M(X, T) = {µ}. In this case, µ is necessarily ergodic, since µ is an extreme point of M(X, T). Recall that if ν ∈ M(X, T) is ergodic, then there exists a measurable subset Y such that ν(Y) = 1 and

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(y) dν(y)

for all x ∈ Y and all f ∈ C(X). When T is uniquely ergodic we will see that we have a much stronger result.

Theorem 6.2.1 Let T : X → X be a continuous transformation on a compact metric space X. Then the following are equivalent:

(i) For every f ∈ C(X), the sequence { (1/n) Σ_{j=0}^{n−1} f(Tʲx) } converges uniformly to a constant.

(ii) For every f ∈ C(X), the sequence { (1/n) Σ_{j=0}^{n−1} f(Tʲx) } converges pointwise to a constant.

(iii) There exists a µ ∈ M(X, T) such that for every f ∈ C(X) and all x ∈ X,

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(y) dµ(y).

(iv) T is uniquely ergodic.

Proof (i) ⇒ (ii): immediate.

(ii) ⇒ (iii): Define L : C(X) → C by

L(f) = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx).

By assumption L(f) is independent of x, hence L is well defined. It is easy to see that L is linear, continuous (|L(f)| ≤ sup_{x∈X} |f(x)|), positive and L(1) = 1. Thus, by the Riesz Representation Theorem there exists a probability measure µ ∈ M(X) such that

L(f) = ∫_X f(x) dµ(x)

for all f ∈ C(X). But

L(f ∘ T) = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱ⁺¹x) = L(f).

Hence,

∫_X f(Tx) dµ(x) = ∫_X f(x) dµ(x),

so µ ∈ M(X, T), and for every f ∈ C(X),

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(x) dµ(x)

for all x ∈ X.

(iii) ⇒ (iv): Suppose µ ∈ M(X, T) is such that for every f ∈ C(X),

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(Tⁱx) = ∫_X f(x) dµ(x)

for all x ∈ X. Assume ν ∈ M(X, T); we will show that µ = ν. For any f ∈ C(X), the sequence { (1/n) Σ_{j=0}^{n−1} f(Tʲx) } converges pointwise to the constant function ∫_X f(x) dµ(x), and since each term of the sequence is bounded in absolute value by the constant sup_{x∈X} |f(x)|, it follows by the Dominated Convergence Theorem that

lim_{n→∞} ∫_X (1/n) Σ_{i=0}^{n−1} f(Tⁱx) dν(x) = ∫_X ∫_X f(x) dµ(x) dν(y) = ∫_X f(x) dµ(x).

But for each n, since ν is T-invariant,

∫_X (1/n) Σ_{i=0}^{n−1} f(Tⁱx) dν(x) = ∫_X f(x) dν(x).

Thus, ∫_X f(x) dµ(x) = ∫_X f(x) dν(x) for all f ∈ C(X), and µ = ν.

(iv) ⇒ (i): The proof is done by contradiction. Assume M(X, T) = {µ} and suppose g ∈ C(X) is such that the sequence { (1/n) Σ_{j=0}^{n−1} g ∘ Tʲ } does not converge uniformly on X. Then there exists ε > 0 such that for each N there exist n > N and x_n ∈ X such that

| (1/n) Σ_{j=0}^{n−1} g(Tʲx_n) − ∫_X g dµ | ≥ ε.

Let

µ_n = (1/n) Σ_{j=0}^{n−1} δ_{Tʲx_n} = (1/n) Σ_{j=0}^{n−1} Tʲδ_{x_n}.

Then,

| ∫_X g dµ_n − ∫_X g dµ | ≥ ε.

Since M(X) is compact, there exists a subsequence µ_{n_i} converging to ν ∈ M(X). By Theorem (6.1.5), ν ∈ M(X, T), and by unique ergodicity µ = ν. Hence,

| ∫_X g dν − ∫_X g dµ | ≥ ε,

which is a contradiction.

Example If T_θ is an irrational rotation, then T_θ is uniquely ergodic. This is a consequence of the above theorem and Weyl's Theorem on uniform distribution: for any Riemann integrable function f on [0, 1), and any x ∈ [0, 1),

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} f(x + iθ − ⌊x + iθ⌋) = ∫ f(y) dy.

As an application of this, let us consider the following question. Consider the sequence of first digits {1, 2, 4, 8, 1, 3, 6, 1, 2, 5, 1, 2, …} obtained by writing the first decimal digit of each term in the sequence {2ⁿ : n ≥ 0} = {1, 2, 4, 8, 16, 32, 64, 128, …}. For each k = 1, 2, …, 9, let p_k(n) be the number of times the digit k appears in the first n terms of the first digit sequence. The asymptotic relative frequency of the digit k is then lim_{n→∞} p_k(n)/n. We want to identify this limit for each k ∈ {1, 2, …, 9}. To do this, let θ = log₁₀ 2; then θ is irrational, and T_θ is uniquely ergodic. For k = 1, 2, …, 9, let J_k = [log₁₀ k, log₁₀(k+1)). By unique ergodicity of T_θ, we have for each k = 1, 2, …, 9,

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} 1_{J_k}(T_θⁱ(0)) = λ(J_k) = log₁₀ ((k+1)/k).
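The unique ergodicity of an irrational rotation is easy to observe numerically: Birkhoff averages of a continuous function converge to its space average for every starting point, as Theorem 6.2.1 guarantees. A quick check (illustrative code, not part of the text) with the golden-ratio rotation:

```python
import math

# Birkhoff averages of f(x) = cos(2πx) under T(x) = x + θ (mod 1) with
# θ = (√5 − 1)/2.  By unique ergodicity they tend to the space average
# ∫_0^1 cos(2πy) dy = 0 for EVERY starting point.
theta = (math.sqrt(5) - 1) / 2
n = 100_000

def birkhoff_average(x, n):
    s = 0.0
    for _ in range(n):
        s += math.cos(2 * math.pi * x)
        x = (x + theta) % 1.0
    return s / n

avgs = [birkhoff_average(x0, n) for x0 in (0.0, 0.3, 0.77)]
print(avgs)  # all very close to 0, independently of the starting point
```

The independence of the limit from the starting point is exactly what fails for maps with several ergodic invariant measures.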

Returning to our original problem, notice that the first digit of 2ⁱ is k if and only if

k · 10ʳ ≤ 2ⁱ < (k+1) · 10ʳ

for some r ≥ 0. Taking logarithms, this happens if and only if

r + log₁₀ k ≤ i log₁₀ 2 = iθ < r + log₁₀(k+1).

This shows that r = ⌊iθ⌋, and

log₁₀ k ≤ iθ − ⌊iθ⌋ < log₁₀(k+1).

But T_θⁱ(0) = iθ − ⌊iθ⌋, so that T_θⁱ(0) ∈ J_k. In other words, we see that the first digit of 2ⁱ is k if and only if T_θⁱ(0) ∈ J_k. Summarizing,

lim_{n→∞} p_k(n)/n = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} 1_{J_k}(T_θⁱ(0)) = log₁₀ ((k+1)/k).
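The computation above is easily confirmed by brute force, counting first digits of 2ⁿ with exact integer arithmetic (an illustrative check, not part of the text):

```python
import math

# First-digit frequencies of 2^n for n = 0 .. 9999, compared with the
# predicted limits log10((k+1)/k).
N = 10_000
counts = [0] * 10
p = 1                          # p = 2^n, exact big integer
for _ in range(N):
    counts[int(str(p)[0])] += 1
    p *= 2

for k in range(1, 10):
    freq = counts[k] / N
    predicted = math.log10((k + 1) / k)
    assert abs(freq - predicted) < 0.01
print("digit 1 frequency:", counts[1] / N)  # ≈ log10(2) ≈ 0.3010
```

Roughly 30.1% of the powers of 2 begin with a 1, the leading-digit law usually attributed to Benford.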

Chapter 7

Topological Dynamics

In this chapter we will shift our attention from the theory of measure preserving transformations and ergodic theory to the study of topological dynamics. The field of topological dynamics also studies dynamical systems, but in a different setting: where ergodic theory is set in a measure space with a measure preserving transformation, topological dynamics focuses on a topological space with a continuous transformation. The two fields bear great similarity and seem to co-evolve: whenever a new notion has proven its value in one field, analogies will be sought in the other. The two fields are, however, also complementary. Indeed, we shall encounter a most striking example of a map which exhibits its most interesting topological dynamics in a set of measure zero, right in the blind spot of the measure theoretic eye.

The chapter is structured as follows. We will first introduce several basic notions from topological dynamics and discuss how they are related to each other and to analogous notions from ergodic theory. We will then devote a separate section to topological entropy. To conclude the chapter, we will discuss some examples and applications. In order to illuminate the interplay between the two fields we will be merely interested in compact metric spaces. The compactness assumption will play the role of the assumption of a finite measure space, and the assumption of the existence of a metric is sufficient for applications and will greatly simplify all of our proofs.

7.1 Basic Notions

Throughout this chapter X will denote a compact metric space. Unless stated otherwise, we will always denote the metric by d, and occasionally write (X, d) to denote a metric space if there is any risk of confusion. We will assume that the reader has some familiarity with basic (analytic) topology; to jog the reader's memory, a brief outline of these basics is included as an appendix. Here we will only summarize, for future reference, the properties of the space X.

Theorem 7.1.1 Let X be a compact metric space, Y a topological space and f : X −→ Y continuous. Then:

• X is a Hausdorff space
• Every closed subspace of X is compact
• Every compact subspace of X is closed
• X has a countable basis for its topology
• X is normal, i.e. for any pair of disjoint closed sets A and B of X there are disjoint open sets U, V containing A and B, respectively
• X is a Baire space
• f is uniformly continuous
• If Y is an ordered space, then f attains a maximum and minimum on X
• If A and B are closed sets in X and [a, b] ⊂ R, then there exists a continuous map g : X −→ [a, b] such that g(x) = a for all x ∈ A and g(x) = b for all x ∈ B
• If O is an open cover of X, then there is a δ > 0 such that every subset A of X with diam(A) < δ is contained in an element of O. We call δ > 0 a Lebesgue number for O.

The reader may recognize that this theorem contains some of the most important results in analytic topology, most notably the Lebesgue number lemma, Baire's category theorem, the extreme value theorem and Urysohn's lemma.

Let us now introduce some concepts of topological dynamics: topological transitivity, minimality, topological conjugacy, periodicity and expansiveness.

Definition 7.1.1 Let T : X −→ X be a continuous transformation. If T is invertible, the orbit of x ∈ X is defined as O_T(x) = {Tⁿ(x) | n ∈ Z}; the forward orbit of x is the set FO_T(x) = {Tⁿ(x) | n ∈ Z≥0}. T is called topologically transitive if for some x ∈ X, O_T(x) is dense in X. T is called one-sided topologically transitive if for some x ∈ X, FO_T(x) is dense in X.

In the literature sometimes a stronger version of topological transitivity is introduced, called minimality.

Definition 7.1.2 A homeomorphism T : X −→ X is called minimal if every x ∈ X has a dense orbit in X.

Example 7.1.1 Let X be the topological space X = {1, 2, …, 1000} with the discrete topology, with the discrete metric

d_disc(x, y) = 0 if x = y,  1 if x ≠ y,

which induces the discrete topology on X. X is a finite space, hence compact; thus X is a compact metric space. If we now define T : X −→ X by the permutation (1 2 ⋯ 1000), i.e. T(1) = 2, T(2) = 3, …, T(1000) = 1, then it is easy to see that T is a homeomorphism and O_T(x) = X for all x ∈ X, so T is minimal.

Example 7.1.2 Fix n ∈ Z>0. Let X be the topological space X = {0} ∪ {1/nᵏ | k ∈ Z≥0} equipped with the subspace topology inherited from R. Then X is a compact metric space. Indeed, if O is an open cover of X, then O contains an open set U which contains 0, and U contains all but finitely many points of X; so by adding, for each y not in U, an open set U_y from O with y ∈ U_y, we obtain a finite subcover. Define T : X −→ X by T(x) = x/n.
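A non-trivial minimal homeomorphism is the irrational rotation T_θ(x) = x + θ (mod 1) on the circle, whose orbits are all dense (this follows, for instance, from the unique ergodicity established in the previous chapter). A crude numerical illustration (not part of the text) of orbit density:

```python
import math

# Orbit of the golden-ratio rotation: the first 10^4 orbit points come
# within 10^-3 of any given target point of the circle.
theta = (math.sqrt(5) - 1) / 2
orbit = []
x = 0.123
for _ in range(10_000):
    orbit.append(x)
    x = (x + theta) % 1.0

dists = []
for target in (0.0, 0.25, 0.5, 0.9):
    # distance measured on the circle, not on the interval
    d = min(min(abs(p - target), 1 - abs(p - target)) for p in orbit)
    dists.append(d)
print(dists)  # all below 10^-3
```

By the three-distance theorem the gaps between consecutive orbit points shrink like a constant over the orbit length, which is why a fixed finite segment already looks dense at this resolution.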

Note that T is continuous, but not surjective, as T⁻¹({1}) = ∅. Observe that FO_T(1) = X − {0} and the closure of X − {0} is X, so T is one-sided topologically transitive, as x = 1 has a dense forward orbit.

The following theorem summarizes some equivalent definitions of topological transitivity.

Theorem 7.1.2 Let T : X −→ X be a homeomorphism of a compact metric space. Then the following are equivalent:

1. T is topologically transitive
2. If A is a closed set in X satisfying TA = A, then either A = X or A has empty interior (i.e. X − A is dense in X)
3. For any pair of non-empty open sets U, V in X there exists an n ∈ Z such that TⁿU ∩ V ≠ ∅

Proof Recall that O_T(x) is dense in X if and only if for every U open in X, U ∩ O_T(x) ≠ ∅.

(1) ⇒ (2): Suppose that the closure of O_T(x) is X, and let A be as stated, i.e. A is closed, TA = A and A ≠ X. Suppose that A does not have empty interior. Then there exists an open set U such that U is non-empty and U ⊂ A. Hence, we can find a p ∈ Z such that Tᵖ(x) ∈ U ⊂ A. Since TⁿA = A for all n ∈ Z, O_T(x) ⊂ A, and taking closures we obtain X ⊂ A, i.e. A = X, contradicting A ≠ X. Hence, either A has empty interior, or A = X.

(2) ⇒ (3): Suppose U, V are open and non-empty. Then W = ⋃_{n=−∞}^∞ TⁿU is open, non-empty and invariant under T. Hence, A = X − W is closed with TA = A and A ≠ X. By (2), A has empty interior and therefore W is dense in X. Thus W ∩ V ≠ ∅, which implies (3).

(3) ⇒ (1): By Theorem 7.1.1 there exists a countable basis U = {U_n}_{n=1}^∞ for the topology of X. For an arbitrary y ∈ X we have: y is in the closure of O_T(x) if and only if every open neighborhood V of y intersects O_T(x). Since every open neighborhood of y contains a basis element U_i such that y ∈ U_i, we see that O_T(x) is dense in X if and only if for every n ∈ Z>0 there is some m ∈ Z such that Tᵐ(x) ∈ U_n. Hence,

{x ∈ X | O_T(x) is dense in X} = ⋂_{n=1}^∞ ⋃_{m=−∞}^∞ TᵐU_n.

But ⋃_{m=−∞}^∞ TᵐU_n is T-invariant and therefore intersects every open set in X by (3); hence it is dense in X for every n. Thus, since X is a Baire space (cf. Theorem 7.1.1), we see that {x ∈ X | O_T(x) is dense in X} is itself dense in X, and thus certainly non-empty (as X ≠ ∅). Hence, our proof is complete.

An analogue of this theorem exists for one-sided topological transitivity; see [W]. Note that Theorem 7.1.2 clearly resembles Theorem (1.7.1), and (2) implies that we can view topological transitivity as an analogue of ergodicity (in some sense). This is also reflected in the following (partial) analogue of Theorem (1.7.1).

Theorem 7.1.3 Let T : X −→ X be continuous and one-sided topologically transitive or a topologically transitive homeomorphism. Then every continuous T-invariant function is constant.

Proof Let f : X −→ Y be continuous on X and suppose f ∘ T = f. Then f ∘ Tⁿ = f for all n, so f is constant on (forward) orbits of points. Let x₀ ∈ X have a dense (forward) orbit in X and suppose f(x₀) = c for some c ∈ Y. Fix ε > 0 and let x ∈ X be arbitrary. By continuity of f at x, we can find a δ > 0 such that d(f(x), f(x̃)) < ε for any x̃ ∈ X with d(x, x̃) < δ. Now, since x₀ has a dense (forward) orbit in X, there is some n ∈ Z (respectively n ∈ Z≥0) such that d(x, Tⁿ(x₀)) < δ. But then

d(f(x), c) = d(f(x), f(Tⁿ(x₀))) < ε.

Since ε > 0 was arbitrary, our proof is complete.

Exercise 7.1.1 Let X be a compact metric space with metric d and let T : X −→ X be a topologically transitive homeomorphism. Show that if T is an isometry (i.e. d(T(x), T(y)) = d(x, y) for all x, y ∈ X), then T is minimal.

Definition 7.1.3 Let X be a topological space, T : X −→ X a transformation and x ∈ X. Then x is called a periodic point of T of period n (n ∈ Z>0) if Tⁿ(x) = x and Tᵐ(x) ≠ x for m < n. A periodic point of period 1 is called a fixed point.

Example 7.1.3 The map T : R −→ R defined by T(x) = −x has one fixed point, namely x = 0. All other points in the domain are periodic points of period 2.

Definition 7.1.4 Let X be a compact metric space and T : X −→ X a homeomorphism. T is said to be expansive if there exists a δ > 0 such that: if x ≠ y, then there exists an n ∈ Z such that d(Tⁿ(x), Tⁿ(y)) > δ. In this case, δ is called an expansive constant for T.

Example 7.1.4 Let X be a finite space with the discrete topology. Then, since the discrete metric d_disc induces the topology on X, any bijective map T : X −→ X is an expansive homeomorphism, and any 0 < δ < 1 is an expansive constant for T. For a non-trivial example of an expansive homeomorphism, see Exercise 7.1.3.

Expansiveness is closely related to the concept of a generator.

Definition 7.1.5 Let X be a compact metric space and T : X −→ X a homeomorphism. A finite open cover α of X is called a generator for T if, for any collection {A_n}_{n=−∞}^∞ with A_i ∈ α, the set ⋂_{n=−∞}^∞ T⁻ⁿA_n contains at most one point of X.

Theorem 7.1.4 Let X be a compact metric space and T : X −→ X a homeomorphism. Then T is expansive if and only if T has a generator.

Proof Suppose that T is expansive and let δ > 0 be an expansive constant for T. Let α be the open cover of X defined by α = {B(a, δ/4) : a ∈ X}. By compactness, there exists a finite subcover β of α. Now, suppose x, y ∈ ⋂_{n=−∞}^∞ T⁻ⁿB_n, with B_i ∈ β for all i. Then for all m ∈ Z, Tᵐ(x) and Tᵐ(y) lie in the same ball B_m of radius δ/4, so d(Tᵐ(x), Tᵐ(y)) ≤ δ/2 < δ. But by expansiveness, if x ≠ y, there exists an l ∈ Z such that d(Tˡ(x), Tˡ(y)) > δ, a contradiction. Hence x = y. We conclude that β is a generator for T.

Conversely, suppose that α is a generator for T. By Theorem 7.1.1, there exists a Lebesgue number δ > 0 for the open cover α. We claim that δ/4 is an expansive constant for T. Suppose x, y ∈ X are such that d(Tᵐ(x), Tᵐ(y)) ≤ δ/4 for all m ∈ Z. Then, for all m ∈ Z, the closed ball B(Tᵐ(x), δ/4) contains Tᵐ(y) and has diameter δ/2 < δ. Hence, for all m ∈ Z, there is an A_m ∈ α with B(Tᵐ(x), δ/4) ⊂ A_m. Thus x, y ∈ ⋂_{n=−∞}^∞ T⁻ⁿA_n, and since α is a generator, x = y. Hence, if x ≠ y, there exists an l ∈ Z with d(Tˡ(x), Tˡ(y)) > δ/4, i.e. T is expansive.
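One standard non-trivial expansive homeomorphism is the two-sided shift σ on {0,1}^Z with the metric d(x, y) = Σ_{n∈Z} |x_n − y_n| / 2^{|n|}: if x ≠ y, shifting the first discrepancy to coordinate 0 forces d(σᵏx, σᵏy) ≥ 1, so any 0 < δ < 1 is an expansive constant. The sketch below (illustrative code; sequences are modelled on a finite window, which suffices for sequences agreeing outside it) demonstrates the separation:

```python
WINDOW = 20  # coordinates -20 .. 20; outside, both sequences are 0

def d(x, y):
    """Truncated shift metric; x, y are dicts coordinate -> symbol."""
    return sum(abs(x.get(n, 0) - y.get(n, 0)) / 2 ** abs(n)
               for n in range(-WINDOW, WINDOW + 1))

def shift(x, k):
    """k-fold shift: (σ^k x)_n = x_{n+k}."""
    return {n - k: v for n, v in x.items()}

x = {3: 1, 5: 1}          # the two sequences differ only at coordinate 5
y = {3: 1}
print(d(x, y))            # 1/2^5 = 0.03125: the points are close
k = 5                     # move the discrepancy to coordinate 0
print(d(shift(x, k), shift(y, k)))  # now at least 1: the orbits separate
```

The finite open cover by the two cylinder sets {x : x₀ = 0} and {x : x₀ = 1} is the corresponding generator: knowing which cylinder every σⁿx lies in pins down every coordinate of x.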

But ∪_{m=−∞}^∞ T^m U_n is T-invariant and therefore, by (3), intersects every open set in X. Hence, ∪_{m=−∞}^∞ T^m U_n is dense in X for every n. Since X is a Baire space (c.f. theorem 7.1.1), we see that {x ∈ X | O_T(x) is dense in X} is itself dense in X and thus certainly non-empty (as X ≠ ∅).

An analogue of this theorem exists for one-sided topological transitivity, see [W].

Note that theorem 7.1.2 clearly resembles the characterizations of ergodicity given in Chapter 1: the equivalence of (1) and (2) implies that we can view topological transitivity as an analogue of ergodicity (in some sense). This is also reflected in the following (partial) analogue of the fact that a measure preserving transformation is ergodic if and only if every invariant measurable function is constant almost everywhere.

Theorem 7.1.3 Let T : X → X be continuous and one-sided topologically transitive or a topologically transitive homeomorphism. Then every continuous T-invariant function is constant.

Proof Let f : X → Y be continuous on X and suppose f ∘ T = f. Then f ∘ T^n = f for all n ∈ Z (n ∈ Z≥0 in the one-sided case), so f is constant on (forward) orbits of points. Let x₀ ∈ X have a dense (forward) orbit in X and suppose f(x₀) = c for some c ∈ Y. Fix ε > 0 and let x ∈ X be arbitrary. By continuity of f at x, we can find a δ > 0 such that d(f(x), f(x̃)) < ε for any x̃ ∈ X with d(x, x̃) < δ. Now, since x₀ has a dense (forward) orbit in X, there is some n ∈ Z (n ∈ Z≥0) such that d(x, T^n(x₀)) < δ. But then,

d(f(x), c) = d(f(x), f(T^n(x₀))) < ε.

Since ε > 0 was arbitrary, f(x) = c. Hence, our proof is complete.

Exercise 7.1.1 Let X be a compact metric space with metric d and let T : X → X be a topologically transitive homeomorphism. Show that if T is an isometry (i.e. d(T(x), T(y)) = d(x, y) for all x, y ∈ X) then T is minimal.

Definition 7.1.3 Let X be a topological space, T : X → X a transformation and x ∈ X. Then x is called a periodic point of T of period n (n ∈ Z>0) if T^n(x) = x and T^m(x) ≠ x for 0 < m < n. A periodic point of period 1 is called a fixed point.

Example 7.1.3 The map T : R → R defined by T(x) = −x has one fixed point, namely x = 0. All other points in the domain are periodic points of period 2.

Definition 7.1.4 Let X be a compact metric space and T : X → X a homeomorphism. T is said to be expansive if there exists a δ > 0 such that: if x ≠ y then there exists an n ∈ Z such that d(T^n(x), T^n(y)) > δ. δ is called an expansive constant for T.

Example 7.1.4 Let X be a finite space with the discrete topology. Then X is a compact metric space, since the discrete metric ddisc induces the topology on X. Any bijective map T : X → X is an expansive homeomorphism, and any 0 < δ < 1 is an expansive constant for T. For a non-trivial example of an expansive homeomorphism, see exercise 7.3.1.

Expansiveness is closely related to the concept of a generator.

Definition 7.1.5 Let X be a compact metric space and T : X → X a homeomorphism. A finite open cover α of X is called a generator for T if, for any collection {A_n}_{n=−∞}^∞ with A_i ∈ α, the set ∩_{n=−∞}^∞ T^{−n} A_n contains at most one point of X.

Theorem 7.1.4 Let X be a compact metric space and T : X → X a homeomorphism. Then T is expansive if and only if T has a generator.

Proof Suppose that T is expansive and let δ > 0 be an expansive constant for T. Let α be the open cover of X defined by α = {B(a, δ/2) | a ∈ X}. By compactness, there exists a finite subcover β of α. Now, suppose that x, y ∈ ∩_{n=−∞}^∞ T^{−n} B_n, with B_n ∈ β for all n. Then d(T^n(x), T^n(y)) ≤ δ for all n ∈ Z, since each B_n has diameter at most δ. But by expansiveness, if x ≠ y there exists an l ∈ Z such that d(T^l(x), T^l(y)) > δ, a contradiction. Hence x = y. We conclude that β is a generator for T.

Conversely, suppose that α is a generator for T. By theorem 7.1.1 there exists a Lebesgue number δ > 0 for the open cover α. We claim that δ/4 is an expansive constant for T. Indeed, suppose x, y ∈ X are such that d(T^m(x), T^m(y)) ≤ δ/4 for all m ∈ Z. Then, for all m ∈ Z, the closed ball B(T^m(x), δ/4) contains T^m(y) and has diameter δ/2 < δ. Hence, for all m ∈ Z, B(T^m(x), δ/4) ⊂ A_m

for some A_m ∈ α. It follows that x, y ∈ ∩_{m=−∞}^∞ T^{−m} A_m. But α is a generator, so ∩_{m=−∞}^∞ T^{−m} A_m contains at most one point. We conclude that x = y. Hence T is expansive.

Exercise 7.1.2 In the literature, our definition of a generator is sometimes called a weak generator. A generator is then defined by replacing ∩_{n=−∞}^∞ T^{−n} A_n by ∩_{n=−∞}^∞ T^{−n} A̅_n in definition 7.1.5. Show that in a compact metric space both concepts are in fact equivalent.

Exercise 7.1.3 Prove the following basic properties of an expansive homeomorphism T : X → X:

a. T is expansive if and only if T^n is expansive (n ≠ 0);
b. if A is a closed subset of X and T(A) = A, then the restriction of T to A, T|A, is expansive;
c. suppose S : Y → Y is an expansive homeomorphism; then the product map T × S : X × Y → X × Y is expansive with respect to the metric d = max{dX, dY}.

Definition 7.1.6 Let X, Y be compact spaces and let T : X → X, S : Y → Y be homeomorphisms. T is said to be topologically conjugate to S if there exists a homeomorphism φ : X → Y such that φ ∘ T = S ∘ φ. φ is called a (topological) conjugacy. Note that topological conjugacy defines an equivalence relation on the space of all homeomorphisms.

The term 'topological conjugacy' is, in a sense, a misnomer. The following theorem shows that topological conjugacy can be considered as the counterpart of a measure preserving isomorphism.

Theorem 7.1.5 Let X, Y be compact spaces and let T : X → X, S : Y → Y be topologically conjugate homeomorphisms. Then:

1. T is topologically transitive if and only if S is topologically transitive;
2. T is minimal if and only if S is minimal;
3. T is expansive if and only if S is expansive.
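To make topological conjugacy concrete, here is a small numerical aside (our own illustration, not from the notes; all names are ours): the classical homeomorphism φ(x) = sin²(πx/2) of [0, 1] conjugates the tent map T (discussed later in this chapter) to the quadratic map Q₄(x) = 4x(1 − x), i.e. φ ∘ T = Q₄ ∘ φ, which we verify on a grid.

```python
import math

def tent(x):
    # Tent map on [0, 1]
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def q4(x):
    # Quadratic map with parameter 4
    return 4 * x * (1 - x)

def phi(x):
    # Conjugating homeomorphism of [0, 1]: phi(x) = sin^2(pi x / 2)
    return math.sin(math.pi * x / 2) ** 2

# Check phi(T(x)) == Q4(phi(x)) on a grid, up to floating point error.
for i in range(1001):
    x = i / 1000
    assert abs(phi(tent(x)) - q4(phi(x))) < 1e-12
```

Both identities phi(tent(x)) and q4(phi(x)) reduce to sin²(πx), which is why the agreement is exact up to rounding.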

Proof The proof of the first two statements is trivial. For the third statement we note that

∩_{n=−∞}^∞ T^{−n}(φ^{−1} A_n) = ∩_{n=−∞}^∞ φ^{−1}(S^{−n} A_n) = φ^{−1}( ∩_{n=−∞}^∞ S^{−n} A_n ).

The left hand side of the equation contains at most one point if and only if the right hand side does, so a finite open cover α is a generator for S if and only if φ^{−1}α = {φ^{−1}A | A ∈ α} is a generator for T. Hence the result follows from theorem 7.1.4.

7.2 Topological Entropy

Topological entropy was first introduced as an analogue of the successful concept of measure theoretic entropy. It will turn out to be a conjugacy invariant and is therefore a useful tool for distinguishing between topological dynamical systems.

The definition of topological entropy comes in two flavors. The first is in terms of open covers and is very similar to the definition of measure theoretic entropy in terms of partitions. The second (and chronologically later) definition uses (n, ε)-separated and spanning sets. Interestingly, the latter definition was a topological dynamical discovery, which was later engineered into a similar definition of measure theoretic entropy. To spark the reader's recognition of the similarities between the two, we use analogous notation and terminology.

Before kicking off, let us first make a remark on the assumptions about our space X. The first definition presented will only require X to be compact. The second, on the other hand, only requires X to be a metric space. The reader should be aware that both definitions represent generalizations of topological entropy in different directions and are therefore interesting in their own right. We will abstract away from these subtleties and simply stick to the context of a compact metric space, in which the two definitions will turn out to be equivalent.

7.2.1 Two Definitions

We will start with a definition of topological entropy similar to the definition of measure theoretic entropy introduced earlier.

Definition 7.2.1 Let α be an open cover of X and let N(α) be the number of sets in a finite subcover of α of minimal cardinality. We define the entropy of α to be H(α) = log(N(α)).

We say that β is a refinement of α, and write α ≤ β, if for every B ∈ β there is an A ∈ α such that B ⊂ A. The common refinement of α and β is defined to be α ∨ β = {A ∩ B | A ∈ α, B ∈ β}. For a finite collection {α_i}_{i=1}^n of open covers we may define ∨_{i=1}^n α_i = {∩_{i=1}^n A_{j_i} | A_{j_i} ∈ α_i}. For a continuous transformation T : X → X we define T^{−1}α = {T^{−1}A | A ∈ α}. Note that these are all again open covers of X. Finally, we define the diameter of an open cover α as diam(α) := sup_{A∈α} diam(A), where diam(A) = sup_{x,y∈A} d(x, y).

The following proposition summarizes some easy properties of H(α). The proof is left as an exercise.

Proposition 7.2.1 Let X be a compact metric space and let α, β be open covers of X. Then:

1. H(α) ≥ 0;
2. H(α) = 0 if and only if N(α) = 1 if and only if X ∈ α;
3. if α ≤ β, then H(α) ≤ H(β);
4. H(α ∨ β) ≤ H(α) + H(β);
5. for T : X → X continuous we have H(T^{−1}α) ≤ H(α). If T is surjective then H(T^{−1}α) = H(α).

Exercise 7.2.1 Show that T^{−1}(α ∨ β) = T^{−1}(α) ∨ T^{−1}(β) and show that α ≤ β implies T^{−1}α ≤ T^{−1}β.

Exercise 7.2.2 Prove proposition 7.2.1. (Hint: if T is surjective then T(T^{−1}A) = A.)

We will now move on to the definition of topological entropy for a continuous transformation with respect to an open cover and subsequently make this definition independent of open covers.
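For small finite examples, N(α) can be computed by exhaustive search, which makes the properties in proposition 7.2.1 easy to experiment with. The sketch below (our own illustration, not from the notes; finding minimal subcovers is in general a hard set-cover problem) checks H(α ∨ β) ≤ H(α) + H(β) on a toy six-point space in which every subset is open.

```python
import math
from itertools import combinations

def N(cover, X):
    # Minimal cardinality of a subcover of X, found by brute force.
    for r in range(1, len(cover) + 1):
        for sub in combinations(cover, r):
            if set().union(*sub) >= X:
                return r
    raise ValueError("not a cover of X")

def H(cover, X):
    # Entropy of the cover: log of the minimal subcover cardinality.
    return math.log(N(cover, X))

def refine(a, b):
    # Common refinement: all non-empty intersections A ∩ B.
    return [A & B for A in a for B in b if A & B]

X = set(range(6))
alpha = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {1, 4}]
beta = [{0, 3}, {1, 2, 4, 5}, {0, 5}]

assert H(alpha, X) >= 0
# Subadditivity under common refinement (property 4).
assert H(refine(alpha, beta), X) <= H(alpha, X) + H(beta, X) + 1e-12
```

In this particular example the inequality is attained with equality: both covers need two sets, and the refinement needs four.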

Definition 7.2.2 Let α be an open cover of X and let T : X → X be continuous. We define the topological entropy of T with respect to α to be:

h(T, α) = lim_{n→∞} (1/n) H( ∨_{i=0}^{n−1} T^{−i}α ).

We must show that h(T, α) is well-defined, i.e. that the limit on the right hand side exists.

Theorem 7.2.1 lim_{n→∞} (1/n) H( ∨_{i=0}^{n−1} T^{−i}α ) exists.

Proof Define

a_n = H( ∨_{i=0}^{n−1} T^{−i}α ).

By the lemma on subadditive sequences used earlier in the calculation of measure theoretic entropy, it suffices to show that {a_n} is subadditive. Now, by (4) and (5) of proposition 7.2.1 and exercise 7.2.1,

a_{n+p} = H( ∨_{i=0}^{n+p−1} T^{−i}α )
        ≤ H( ∨_{i=0}^{n−1} T^{−i}α ) + H( T^{−n} ∨_{j=0}^{p−1} T^{−j}α )
        ≤ a_n + a_p.

This completes our proof.

Proposition 7.2.2 h(T, α) satisfies the following:

1. h(T, α) ≥ 0;
2. if α ≤ β, then h(T, α) ≤ h(T, β);
3. h(T, α) ≤ H(α).

Proof These are easy consequences of proposition 7.2.1. We will only prove the third statement. We have:

H( ∨_{i=0}^{n−1} T^{−i}α ) ≤ Σ_{i=0}^{n−1} H(T^{−i}α) ≤ nH(α),

where we have subsequently used (4) and (5) of the proposition. Dividing by n and letting n → ∞ gives the result.

Finally, we arrive at our sought definition:

Definition 7.2.3 (I) Let T : X → X be continuous. We define the topological entropy of T to be:

h₁(T) = sup_α h(T, α),

where the supremum is taken over all open covers of X. We will defer a discussion of the properties of topological entropy till the end of this chapter.

Let us now turn to the second approach to defining topological entropy. This approach was first taken by E.I. Dinaburg. We shall first define the main ingredients. Let d be the metric on the compact metric space X and for each n ∈ Z≥0 define a new metric by:

d_n(x, y) = max_{0≤i≤n−1} d(T^i(x), T^i(y)).

Definition 7.2.4 Let n ∈ Z≥0, ε > 0 and A ⊂ X. A is called (n, ε)-separated if for any x, y ∈ A with x ≠ y we have d_n(x, y) > ε. A is called (n, ε)-spanning for X if for all x ∈ X there exists a y ∈ A such that d_n(x, y) < ε. We define sep(n, ε, T) to be the maximum cardinality of an (n, ε)-separated set and span(n, ε, T) to be the minimum cardinality of an (n, ε)-spanning set.

We can also formulate the above in terms of open balls. If B(x, r) = {y ∈ X | d(x, y) < r} is an open ball in the metric d, then the open ball with centre x and radius r in the metric d_n is given by:

B_n(x, r) = ∩_{i=0}^{n−1} T^{−i} B(T^i(x), r).

Hence, A is (n, ε)-spanning for X if:

X = ∪_{a∈A} B_n(a, ε),

and A is (n, ε)-separated if:

(A − {a}) ∩ B_n(a, ε) = ∅ for all a ∈ A.

Figure 7.1: An (n, ε)-separated set (left) and an (n, ε)-spanning set (right). The dotted circles represent open balls of d_n-radius ε/2 (left) and of d_n-radius ε (right), respectively.

Definition 7.2.5 For n ∈ Z≥0 and ε > 0 we define cov(n, ε, T) to be the minimum cardinality of a covering of X by open sets of d_n-diameter less than ε.

Note that we can extend the definition of a spanning set by letting A ⊂ K, where K is a compact subset of X. This generalization to the context of non-compact metric spaces was devised by R.E. Bowen; see [W] for an outline of this extended framework.

The following theorem shows that the above notions are actually two sides of the same coin.

Theorem 7.2.2 For all n, ε > 0 and continuous T,

cov(n, 3ε, T) ≤ span(n, ε, T) ≤ sep(n, ε, T) ≤ cov(n, ε, T).

Furthermore, span(n, ε, T), sep(n, ε, T) and cov(n, ε, T) are finite.

Proof We will only prove the last two inequalities; the first is left as an exercise to the reader. Fix ε > 0.

Let A be an (n, ε)-separated set of cardinality sep(n, ε, T). Suppose A is not (n, ε)-spanning for X. Then there is some x ∈ X such that d_n(x, a) ≥ ε, for all a ∈ A. But then A ∪ {x} is an (n, ε)-separated set of cardinality larger than sep(n, ε, T). This contradiction shows that A is an (n, ε)-spanning set for X. The second inequality now follows, since the cardinality of A is at least as large as span(n, ε, T).

To prove the third inequality, let A be an (n, ε)-separated set of cardinality sep(n, ε, T). Note that if α is an open cover of d_n-diameter less than ε, then no element of α can contain more than one element of A. This holds in particular for such an open cover of minimal cardinality, so the third inequality is proved.

The final statement follows for cov(n, ε, T) from the compactness of X, and subsequently for span(n, ε, T) and sep(n, ε, T) by the preceding inequalities.

Exercise 7.2.3 Finish the proof of the above theorem by proving the first inequality.

Lemma 7.2.1 The limit cov(ε, T) := lim_{n→∞} (1/n) log(cov(n, ε, T)) exists and is finite.

Proof Let α, β be open covers of X consisting of sets with d_m-diameter and d_n-diameter, respectively, smaller than ε and with cardinalities cov(m, ε, T) and cov(n, ε, T). Pick any A ∈ α and B ∈ β. Then, for x, y ∈ A ∩ T^{−m}B,

d_{m+n}(x, y) = max_{0≤i≤m+n−1} d(T^i(x), T^i(y))
             ≤ max{ max_{0≤i≤m−1} d(T^i(x), T^i(y)), max_{m≤j≤m+n−1} d(T^j(x), T^j(y)) }
             < ε.
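The quantities sep(n, ε, T) and span(n, ε, T) can be explored numerically. The sketch below (our own illustration, not from the notes, with the tent map T(x) = min(2x, 2 − 2x) from the examples section standing in as a concrete system) greedily builds an (n, ε)-separated set over a grid; exactly as in the proof of the second inequality, maximality forces the resulting set to be ε-spanning as well.

```python
def tent(x):
    return min(2 * x, 2 - 2 * x)

def dn(x, y, n):
    # Bowen metric d_n(x, y) = max_{0 <= i < n} |T^i x - T^i y|
    m = 0.0
    for _ in range(n):
        m = max(m, abs(x - y))
        x, y = tent(x), tent(y)
    return m

def greedy_separated(n, eps, grid):
    # Greedily pick grid points that are pairwise (n, eps)-separated.
    chosen = []
    for x in grid:
        if all(dn(x, a, n) > eps for a in chosen):
            chosen.append(x)
    return chosen

grid = [i / 2000 for i in range(2001)]
eps = 0.1
A = greedy_separated(4, eps, grid)

# Pairwise separated by construction...
assert all(dn(a, b, 4) > eps for i, a in enumerate(A) for b in A[:i])
# ...and, by maximality, every grid point lies within eps of some a in A.
assert all(any(dn(x, a, 4) <= eps for a in A) for x in grid)
```

The size of the returned set is a lower bound for sep(n, ε, T); watching it roughly double as n increases by one already hints at positive entropy for the tent map.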

Thus, A ∩ T^{−m}B has d_{m+n}-diameter less than ε, and the sets {A ∩ T^{−m}B | A ∈ α, B ∈ β} form a cover of X. Hence,

cov(m + n, ε, T) ≤ cov(m, ε, T) · cov(n, ε, T).

Now, since log is an increasing function, the sequence {a_n}_{n=1}^∞ defined by a_n = log cov(n, ε, T) is subadditive, so the result follows from the lemma on subadditive sequences.

Motivated by the above lemma and theorem, we have the following definition.

Definition 7.2.6 (II) The topological entropy of a continuous transformation T : X → X is given by:

h₂(T) = lim_{ε↓0} lim_{n→∞} (1/n) log(cov(n, ε, T))
      = lim_{ε↓0} lim_{n→∞} (1/n) log(span(n, ε, T))
      = lim_{ε↓0} lim_{n→∞} (1/n) log(sep(n, ε, T)).

Note that there seems to be a concealed ambiguity in this definition, since h₂(T) still depends on the chosen metric d. As we will see later on, in compact metric spaces h₂(T) is independent of d, as long as d induces the topology of X. However, there is definitely an issue at this point if we drop our assumption of compactness of the space X. In the meantime, we will continue to write h₁(T) for topological entropy in the definition using open covers and h₂(T) for the second definition of entropy.

7.2.2 Equivalence of the two Definitions

We will now show that definitions (I) and (II) of topological entropy coincide on compact metric spaces. This will justify the use of the notation h(T) for both definitions of topological entropy. After all the work done in the last section, our only missing ingredient is the following lemma:
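The subadditivity driving lemma 7.2.1 can be watched in a concrete case. For a subshift, the number of admissible words of length n plays the role of cov(n, ε, T) for small ε. The sketch below (our own aside — the 'golden mean' shift of binary sequences with no two consecutive 1s is not treated in the text) checks that the log of the word count is subadditive and that (1/n)·log(count) settles near its limit log((1 + √5)/2).

```python
import math

def word_count(n):
    # Number of binary words of length n with no "11" factor,
    # via a Fibonacci-type recursion on (words ending in 0, ending in 1).
    end0, end1 = 1, 1  # words of length 1
    for _ in range(n - 1):
        end0, end1 = end0 + end1, end0
    return end0 + end1

a = lambda n: math.log(word_count(n))

# log of the counts is subadditive: a(m + n) <= a(m) + a(n),
# because every admissible word of length m + n splits into two
# admissible words of lengths m and n.
assert all(a(m + n) <= a(m) + a(n) + 1e-9
           for m in range(1, 10) for n in range(1, 10))

# a(n)/n settles towards log of the golden ratio.
phi = (1 + math.sqrt(5)) / 2
assert abs(a(30) / 30 - math.log(phi)) < 0.02
```

This is exactly the shape of argument behind lemma 7.2.1: submultiplicative counts give a subadditive log-sequence, and the normalized limit exists.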

Lemma 7.2.2 Let X be a compact metric space. Suppose {α_n}_{n=1}^∞ is a sequence of open covers of X such that lim_{n→∞} diam(α_n) = 0. Then lim_{n→∞} h₁(T, α_n) = h₁(T).

Proof We will only prove the lemma for the case h₁(T) < ∞. Let ε > 0 be arbitrary and let β be an open cover of X such that h₁(T, β) > h₁(T) − ε. By theorem 7.1.1 there exists a Lebesgue number δ > 0 for β. By assumption, there exists an N > 0 such that for n ≥ N we have diam(α_n) < δ, where diam is the diameter under the metric d. Then, for such n, we find that for any A ∈ α_n there exists a B ∈ β such that A ⊂ B, i.e. β ≤ α_n. Thus, by proposition 7.2.2,

h₁(T) − ε < h₁(T, β) ≤ h₁(T, α_n) ≤ h₁(T) for all n ≥ N.

Hence lim_{n→∞} h₁(T, α_n) = h₁(T).

Exercise 7.2.4 Finish the proof of lemma 7.2.2. That is, show that if h₁(T) = ∞, then lim_{n→∞} h₁(T, α_n) = ∞.

Theorem 7.2.3 For a continuous transformation T : X → X of a compact metric space X, definitions (I) and (II) of topological entropy coincide.

Proof Step 1: h₂(T) ≤ h₁(T). For each k, let {α_k}_{k=1}^∞ be the sequence of open covers of X defined by α_k = {B(x, 1/(3k)) | x ∈ X}. For any n, ∨_{i=0}^{n−1} T^{−i}α_k is an open cover of X by sets of d_n-diameter smaller than 1/k. Hence,

cov(n, 1/k, T) ≤ N( ∨_{i=0}^{n−1} T^{−i}α_k ).

Now we can subsequently take the log, divide by n, take the limit for n → ∞ and the limit for k → ∞ (using lemma 7.2.2, since lim_{k→∞} diam(α_k) = 0) to obtain the desired inequality.

Step 2: h₁(T) ≤ h₂(T). Let {β_k}_{k=1}^∞ be defined by β_k = {B(x, 1/k) | x ∈ X}, and note that δ_k = 1/(2k) is a Lebesgue number for β_k. Let A be an (n, δ_k/2)-spanning set for X of cardinality span(n, δ_k/2, T). Then, for each a ∈ A and each 0 ≤ i < n, the ball B(T^i(a), δ_k/2) is contained in the member B(T^i(a), 1/k) of β_k; hence B_n(a, δ_k/2) is contained in a member of ∨_{i=0}^{n−1} T^{−i}β_k. Since A is (n, δ_k/2)-spanning, these balls cover X, so

N( ∨_{i=0}^{n−1} T^{−i}β_k ) ≤ span(n, δ_k/2, T).

Proceeding in the same way as in step 1, we obtain the desired inequality.

Properties of Entropy

We will now list some elementary properties of topological entropy.

Theorem 7.2.4 Let X be a compact metric space and let T : X → X be continuous. Then h(T) satisfies the following properties:

1. If the metrics d and d̃ both generate the topology of X, then h(T) is the same under both metrics.
2. h(T) is a conjugacy invariant.
3. h(T^n) = n·h(T), for all n ∈ Z>0.
4. If T is a homeomorphism, then h(T^{−1}) = h(T). In this case, h(T^n) = |n|·h(T), for all n ∈ Z.

Proof (1): Let (X, d) and (X, d̃) denote the two metric spaces. Since both induce the same topology, the identity maps i : (X, d) → (X, d̃) and j : (X, d̃) → (X, d) are continuous and hence, by theorem 7.1.1, uniformly continuous. Fix ε₁ > 0. Then, by uniform continuity, we can subsequently find an ε₂ > 0 and an ε₃ > 0 such that for all x, y ∈ X:

d̃(x, y) < ε₂ ⇒ d(x, y) < ε₁ and d(x, y) < ε₃ ⇒ d̃(x, y) < ε₂.

Let A be an (n, ε₂)-spanning set for T under d̃. Then A is also (n, ε₁)-spanning for T under d. Hence,

span(n, ε₁, T, d) ≤ span(n, ε₂, T, d̃) ≤ span(n, ε₃, T, d),

where the fourth argument emphasizes the metric. By subsequently taking the log, dividing by n, taking the limit for n → ∞ and the limit for ε₁ ↓ 0 (so that ε₂, ε₃ ↓ 0 in these inequalities), we obtain the desired result.

(2): Suppose that S : Y → Y is topologically conjugate to T, where Y is a compact metric space, with conjugacy ψ : Y → X.

Let d_X be the metric on X. Then the map d̃_Y defined by d̃_Y(y₁, y₂) = d_X(ψ(y₁), ψ(y₂)), for y₁, y₂ ∈ Y, defines a metric on Y which induces the topology of Y. Indeed, let B_{d_Y}(y₀, η) be a basis element of the topology of Y. Then, as ψ is an open map, ψ(B_{d_Y}(y₀, η)) is open in X, so, since the open balls are a basis for the topology of X, there is an open ball B_{d_X}(ψ(y₀), η̃) ⊂ ψ(B_{d_Y}(y₀, η)). Now,

y ∈ ψ^{−1}(B_{d_X}(ψ(y₀), η̃)) ⇔ d_X(ψ(y), ψ(y₀)) < η̃ ⇔ d̃_Y(y, y₀) < η̃,

hence B_{d̃_Y}(y₀, η̃) = ψ^{−1}(B_{d_X}(ψ(y₀), η̃)) ⊂ B_{d_Y}(y₀, η). Analogously, for any y₀ ∈ Y and δ̃ > 0, there is a δ > 0 such that B_{d_Y}(y₀, δ) ⊂ B_{d̃_Y}(y₀, δ̃). Thus, d̃_Y induces the topology of Y.

Now, we have:

d_X(T(x₁), T(x₂)) = d_X(T ∘ ψ(y₁), T ∘ ψ(y₂)) = d_X(ψ ∘ S(y₁), ψ ∘ S(y₂)) = d̃_Y(S(y₁), S(y₂)),

where we have used that x₁ = ψ(y₁), x₂ = ψ(y₂) for some y₁, y₂ ∈ Y. Hence, since ψ is a bijection, we see that the d_n-diameters (in the metrics d_X and d̃_Y) remain the same under ψ. Therefore,

cov(n, ε, S, d̃_Y) = cov(n, ε, T, d_X),

for all ε > 0 and n ∈ Z≥0. By (1), h(S) is the same under d_Y and d̃_Y, so it now easily follows that h(S) = h(T).

(3): Observe that

max_{0≤i≤m−1} d(T^{ni}(x), T^{ni}(y)) ≤ max_{0≤j≤nm−1} d(T^j(x), T^j(y)),

so every (nm, ε)-spanning set for T is an (m, ε)-spanning set for T^n. This implies

(1/m) log span(m, ε, T^n) ≤ n · (1/(nm)) log span(nm, ε, T),

thus h(T^n) ≤ n·h(T). Conversely, fix ε > 0. Then, by uniform continuity of T^i on X (c.f. theorem 7.1.1), we can find a δ > 0 such that d(T^i(x), T^i(y)) < ε for all x, y ∈ X satisfying d(x, y) < δ and i = 0, 1, ..., n − 1. Let A be an (m, δ)-spanning set for T^n. Then, for all x ∈ X there exists some a ∈ A such that

max_{0≤i≤m−1} d(T^{ni}(x), T^{ni}(a)) < δ.

But then, by the choice of δ,

max_{0≤k≤n−1} max_{0≤i≤m−1} d(T^{ni+k}(x), T^{ni+k}(a)) = max_{0≤j≤nm−1} d(T^j(x), T^j(a)) < ε.

Thus, span(nm, ε, T) ≤ span(m, δ, T^n), and it follows that h(T^n) ≥ n·h(T).

(4): Let A be an (n, ε)-separated set for T, i.e. d_n(x, y) > ε for any distinct x, y ∈ A. Then,

max_{0≤i≤n−1} d(T^{−i}(T^{n−1}(x)), T^{−i}(T^{n−1}(y))) = max_{0≤i≤n−1} d(T^i(x), T^i(y)) > ε,

so T^{n−1}A is an (n, ε)-separated set for T^{−1} of the same cardinality. Conversely, every (n, ε)-separated set B for T^{−1} gives the (n, ε)-separated set T^{−(n−1)}B for T. Thus, sep(n, ε, T) = sep(n, ε, T^{−1}) and the first statement readily follows. The second statement is a consequence of (3) and the first statement.

Exercise 7.2.5 Show that assertion (4) of theorem 7.2.4 still holds if X is merely assumed to be compact.

Exercise 7.2.6 Let (X, d_X), (Y, d_Y) be compact metric spaces and T : X → X, S : Y → Y continuous. Show that h(T × S) = h(T) + h(S). (Hint: first show that the metric d = max{d_X, d_Y} induces the product topology on X × Y.)

7.3 Examples

In this section we provide a few detailed examples to get some familiarity with the material discussed in this chapter. We derive some simple properties of the dynamical systems presented below, but leave most of the good stuff for you to discover in the exercises. After all, the proof of the pudding is in the eating!

Example. (The Quadratic Map) Consider the following biological population model. Suppose we are studying a colony of penguins and wish to model the evolution of the population through time. We presume that there is a certain limiting population level P*, which cannot be exceeded. Whenever the population level is low, there is plenty of fish to go around, so the population is supposed to increase. If, on the other hand, the population is near the upper limit level P*, it is expected to decrease as a result of overcrowding. We arrive at the following simple model for the population at time n, denoted by P_n:

P_{n+1} = λP_n(P* − P_n),

where λ is a 'speed' parameter.

If we now set P* = 1, we can think of P_n as the population at time n as a percentage of the limiting population level. Then, writing x = P₀, the population dynamics are described by iteration of the following map:

Definition 7.3.1 The Quadratic Map with parameter λ, Q_λ : R → R, is defined to be

Q_λ(x) = λx(1 − x).

Of course, in the context of the biological model formulated above, our attention is restricted to the interval [0, 1], since the population percentage cannot lie outside this interval. The quadratic map is deceivingly simple, as it in fact displays a wide range of interesting dynamics, such as periodicity, topological transitivity, bifurcations and chaos.

Figure 7.2 shows a phase portrait for Q_λ on [0, 1], which is nothing more than the graph of Q_λ together with the graph of the line y = x. In a phase portrait the trajectory of a point under iteration of the map Q_λ is easily visualized by iteratively projecting from the line y = x to the graph and vice versa.

Let us take a closer look at the case λ > 4. For x ∈ (−∞, 0) ∪ (1, ∞) it can be shown that Q_λ^n(x) → −∞ as n → ∞. The interesting dynamics occur in the interval [0, 1]. From our phase portrait for Q_λ it is immediately visible that there is a 'leak' in our interval: the subset A₁ = {x ∈ [0, 1] | Q_λ(x) ∉ [0, 1]} is non-empty. Define

A_n = {x ∈ [0, 1] | Q_λ^i(x) ∈ [0, 1] for 0 ≤ i ≤ n − 1, Q_λ^n(x) ∉ [0, 1]}

and set

C = [0, 1] − ∪_{n=1}^∞ A_n.

This procedure is reminiscent of the construction of the 'classical' Cantor set, obtained by removing middle-thirds in subsequent steps. In the exercises below it is shown that C is indeed a Cantor set.
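The 'leak' for λ > 4 is easy to observe numerically. In the sketch below (our own illustration, not from the notes; the parameter λ = 4.5 is an arbitrary choice), the midpoint leaves [0, 1] after one step and its orbit then runs off to −∞, while the fraction of grid points still inside [0, 1] shrinks with each extra iteration, hinting at the Cantor-like structure of C.

```python
def Q(x, lam=4.5):
    # Quadratic map with parameter lam > 4
    return lam * x * (1 - x)

# The midpoint x = 0.5 lies in A_1: it leaves [0, 1] after one step
# and its orbit then tends to -infinity.
x = 0.5
orbit = [x]
for _ in range(10):
    x = Q(x)
    orbit.append(x)

assert orbit[1] > 1        # Q(0.5) = 4.5 * 0.25 = 1.125 leaves [0, 1]
assert orbit[-1] < -1e6    # and the orbit escapes towards -infinity

# Estimate the fraction of a fine grid surviving n iterations inside [0, 1];
# it shrinks as n grows, as more and more sets A_n are removed.
def survives(x, n, lam=4.5):
    for _ in range(n):
        if not 0 <= x <= 1:
            return False
        x = Q(x, lam)
    return 0 <= x <= 1

grid = [i / 10000 for i in range(10001)]
frac = [sum(survives(x, n) for x in grid) / len(grid) for n in (1, 3, 5)]
assert frac[0] > frac[1] > frac[2]
```

The surviving grid fraction keeps decreasing with n; the set C of points that never leave is exactly the intersection of all these survivor sets.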

Figure 7.2: Phase portrait of the quadratic map Q_λ for λ > 4 on [0, 1]. The dotted lines divide [0, 1] in three parts: I₀, A₁ and I₁ (from left to right). The arrows indicate the (partial) trajectory of a point in [0, 1].

Example. (Circle Rotations) Let S¹ denote the unit circle in C. We define the rotation map R_θ : S¹ → S¹ with parameter θ ∈ [0, 2π) by

R_θ(z) = e^{iθ}z = e^{i(θ+ω)},

where z = e^{iω} for some ω ∈ [0, 2π). It is easily seen that R_θ is a homeomorphism for every θ. The rotation map is very similar to the earlier introduced shift map T_θ. This is not surprising, considering the fact that the interval [0, 1] is homeomorphic to S¹ after identification of the endpoints 0 and 1.

Proposition 7.3.1 Let θ ∈ [0, 2π) and R_θ : S¹ → S¹ be the rotation map.

1. If θ is a rational multiple of 2π, say θ = 2π·(a/b) in lowest terms, then every z ∈ S¹ is periodic with period b.
2. R_θ is minimal if and only if θ is an irrational multiple of 2π.

Proof The first statement follows trivially from the fact that e^{2nπi} = 1 for n ∈ Z.

Suppose that θ is an irrational multiple of 2π and let z ∈ S¹, so z = e^{iω} for some ω ∈ [0, 2π). Then

R_θ^n(z) = R_θ^m(z) ⇔ e^{i(ω+nθ)} = e^{i(ω+mθ)} ⇔ e^{i(n−m)θ} = 1 ⇔ (n − m)θ ∈ 2πZ.

Thus, the points {R_θ^n(z) | n ∈ Z} are all distinct, and it follows that {R_θ^n(z)}_{n=1}^∞ is an infinite sequence. By compactness, this sequence has a convergent subsequence, which is Cauchy. Fix ε > 0. Then we can find integers n > m such that d(R_θ^n(z), R_θ^m(z)) < ε, where d is the arc length distance function. Since R_θ is distance preserving with respect to this metric, we can set l = n − m to obtain d(R_θ^l(z), z) < ε. Now, by continuity of R_θ^l, R_θ^l maps the connected, closed arc from z to R_θ^l(z) onto the connected closed arc from R_θ^l(z) to R_θ^{2l}(z), and this one onto the arc connecting R_θ^{2l}(z) to R_θ^{3l}(z), etc. Since these arcs have positive and equal length, they cover S¹; since the arcs have length smaller than ε, and ε > 0 was arbitrary, the orbit of z is dense in S¹. The result now follows.

Moreover, for any θ ∈ [0, 2π), R_θ is an isometry with respect to the arc length distance function, and this metric induces the topology on S¹. Therefore, we see that span(n, ε, R_θ) = span(1, ε, R_θ) for all n ∈ Z≥0, and it follows that h(R_θ) = 0.

Example. (Bernoulli Shift Revisited) In this example we will reintroduce the Bernoulli shift in a topological context. Let X_k = {0, 1, ..., k − 1}^Z (or X_k⁺ = {0, 1, ..., k − 1}^{N∪{0}}) and recall that the left shift on X_k is defined by L : X_k → X_k, L(x) = y, where y_n = x_{n+1}. We can define a topology on X_k (or X_k⁺) which makes the shift map continuous (in fact, a homeomorphism in the case of X_k) as follows: we equip {0, 1, ..., k − 1} with the discrete topology and subsequently give X_k the corresponding product topology. It is easy to see that the cylinder sets form a basis for this topology. By the Tychonoff theorem (see e.g. [Mu]), which asserts that an arbitrary product of compact
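The minimality of an irrational rotation can be watched numerically. The following sketch (our own illustration, not from the notes; θ is chosen as 2π times the golden-ratio conjugate) measures the largest gap left on the circle by the first N orbit points; as minimality predicts, the gap shrinks as N grows.

```python
import math

def max_gap(theta, N):
    # Largest gap (in radians) between consecutive points of the
    # orbit {0, theta, 2*theta, ...} (mod 2*pi) on the circle.
    pts = sorted((n * theta) % (2 * math.pi) for n in range(N))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 2 * math.pi - pts[-1])  # wrap-around gap
    return max(gaps)

theta = 2 * math.pi * (math.sqrt(5) - 1) / 2  # irrational multiple of 2*pi

g = [max_gap(theta, N) for N in (10, 100, 1000)]
assert g[0] > g[1] > g[2]        # the orbit fills in the circle
assert g[2] < 2 * math.pi / 300  # 1000 points leave only tiny gaps
```

Contrast this with a rational multiple of 2π, where the orbit is a finite set and the largest gap stops shrinking once the orbit closes up.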

spaces is compact in the product topology, our space X_k is compact. In the exercises you are asked to show that the metric

d(x, y) = 2^{−l}, where l = min{|j| : x_j ≠ y_j},

and d(x, y) = 0 if x = y (this corresponds to the case l = ∞), induces the product topology. Explicitly, the open balls B(x, r) with respect to this metric form a basis for the product topology.

We will end this example by calculating the topological entropy of the full shift L.

Proposition 7.3.2 The topological entropy of the full shift map is equal to h(L) = log(k).

Proof We will prove the proposition for the left shift on X_k⁺; the case for the left shift on X_k is similar. Fix 0 < ε < 1 and pick any x = {x_i}_{i=0}^∞, y = {y_i}_{i=0}^∞ ∈ X_k⁺. Notice that if at least one of the first n symbols in the itineraries of x and y differ, then

d_n(x, y) = max_{0≤i≤n−1} d(L^i(x), L^i(y)) = 1 > ε,

where we have used the metric d on X_k⁺ defined above. Thus, the set A_n consisting of sequences a ∈ X_k⁺ of the form a = a₀a₁ ... a_{n−1}000..., such that a_i ∈ {0, ..., k − 1} for 0 ≤ i ≤ n − 1 and a_j = 0 for j ≥ n, is (n, ε)-separated. Since there are k^n possibilities to choose the first n symbols of an itinerary, we obtain sep(n, ε, L) ≥ k^n. Hence,

h(L) = lim_{ε↓0} lim_{n→∞} (1/n) log(sep(n, ε, L)) ≥ log(k).

To prove the reverse inequality, take l ∈ Z>0 such that 2^{−l} < ε. Then A_{n+l} is an (n, ε)-spanning set, since for every x ∈ X_k⁺ there is some a ∈ A_{n+l} for which the first n + l symbols coincide, so that d_n(x, a) ≤ 2^{−l} < ε. In other words, span(n, ε, L) ≤ k^{n+l}. Hence,

h(L) = lim_{ε↓0} lim_{n→∞} (1/n) log(span(n, ε, L)) ≤ lim_{ε↓0} lim_{n→∞} (1/n) log(k^{n+l}) = log(k),

and our proof is complete.
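The counting argument in the proof can be replayed in code. In this sketch (our own illustration, not from the notes), one-sided sequences are represented by finite tuples that are implicitly padded with zeros, matching the set A_n from the proof; we check that A_n is (n, ε)-separated and that A_{n+l} is (n, ε)-spanning.

```python
from itertools import product

def d(x, y):
    # Metric on one-sided sequences (finite tuples, implicitly 0-padded):
    # d(x, y) = 2^(-l), l = first index where x and y differ; 0 if equal.
    L = max(len(x), len(y))
    x = x + (0,) * (L - len(x))
    y = y + (0,) * (L - len(y))
    for j in range(L):
        if x[j] != y[j]:
            return 2.0 ** (-j)
    return 0.0

def dn(x, y, n):
    # d_n under the left shift: shifting is dropping leading symbols.
    return max(d(x[i:], y[i:]) for i in range(n))

k, n, eps = 2, 4, 0.3
An = list(product(range(k), repeat=n))  # A_n: free first n symbols, then 0s

# A_n is (n, eps)-separated: distinct elements differ among the first
# n symbols, so some shift brings the difference to the front (d = 1).
assert all(dn(a, b, n) > eps for a in An for b in An if a != b)

# With l chosen so that 2^(-l) < eps, A_(n+l) is (n, eps)-spanning.
l = 2
Anl = list(product(range(k), repeat=n + l))
for x in product(range(k), repeat=10):  # test points (truncated sequences)
    assert any(dn(x, a, n) < eps for a in Anl)
```

Since |A_n| = k^n and |A_{n+l}| = k^{n+l}, these two checks are exactly the bounds k^n ≤ sep(n, ε, L) and span(n, ε, L) ≤ k^{n+l} from the proof.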

Example. (Symbolic Dynamics) The Bernoulli shift on the bisequence space X_k is a topological dynamical system whose dynamics are quite well understood. The subject of symbolic dynamics is devoted to using this knowledge to unravel the dynamical properties of more difficult systems. The scheme works as follows. Let T : X → X be a topological dynamical system and let α = {A₀, ..., A_{k−1}} be a partition of X such that A_i ∩ A_j = ∅ for i ≠ j (note the difference with the earlier introduced notion of a partition). We assume that T is invertible. Now, for x ∈ X, we let φ_i(x) be the index of the element of α which contains T^i(x), i ∈ Z. This defines a map φ : X → X_k,

φ(x) = {φ_i(x)}_{i=−∞}^∞,

called the itinerary map generated by T on α. The image of x under φ is called the itinerary of x. Note that φ satisfies φ ∘ T = L ∘ φ. Of course, if T is not invertible, we may apply the same procedure by replacing X_k by X_k⁺.

We will now apply the above procedure to determine the number of periodic points of the teepee (or tent) map T : [0, 1] → [0, 1], defined by (see figure 7.3)

T(x) = 2x        if 0 ≤ x ≤ 1/2,
T(x) = 2(1 − x)  if 1/2 ≤ x ≤ 1.

Let I₀ = [0, 1/2] and I₁ = [1/2, 1].

Figure 7.3: Phase portrait of the teepee map T on [0, 1].

ψ : [0, 1] −→ X2+ generated by T on {I0, I1}. Unfortunately, the fact that I0 ∩ I1 ≠ ∅ causes some complications: we have an ambiguous choice of the itinerary for x = 1/2, or any other point in [0, 1] which will end up in x = 1/2 upon iteration of T, since such a point can be described by either a sequence of the form (a0 a1 . . . al 01000 . . .) or one of the form (a0 a1 . . . al 11000 . . .). We will therefore identify any pair of sequences of this form and replace the two sequences by a single equivalence class; if a is one of the aforementioned equivalence classes, we use the '0' element to represent a. The resulting space will be denoted by X̃2+ and the resulting itinerary map again by ψ : [0, 1] −→ X̃2+, with L : X̃2+ −→ X̃2+ the (induced) left shift. Note that ψ ◦ T = L ◦ ψ.

We will first show that ψ is injective. Suppose that there are x, y ∈ [0, 1] with x ≠ y and ψ(x) = ψ(y). Then T^n(x) and T^n(y) lie on the same side of x = 1/2 for all n ∈ Z≥0, so T^n([x, y]) does not contain the point x = 1/2 for any n ∈ Z≥0. But then no rational numbers of the form p/2^n, p ∈ {1, 2, 3, . . . , 2^n − 1}, are in [x, y]. Since these form a dense set in [0, 1], we obtain a contradiction. We conclude that x = y. Hence, ψ is one-to-one.

Now let us show that ψ is also surjective. Let a = {ai}_{i=0}^{∞} ∈ X̃2+. For n ∈ Z≥0, define

    An = {x ∈ [0, 1] | x ∈ Ia0, T(x) ∈ Ia1, . . . , T^n(x) ∈ Ian} = Ia0 ∩ T^{−1}Ia1 ∩ . . . ∩ T^{−n}Ian.

Since T is continuous, T^{−i}Iai is closed for all i and therefore An is closed for all n ∈ Z≥0. As [0, 1] is a compact metric space, it follows from theorem 7.1.1 that An is compact. Moreover, {An}_{n=0}^{∞} forms a decreasing sequence, in the sense that An+1 ⊂ An. Therefore, the intersection ∩_{n=0}^{∞} An is not empty, and a is precisely the itinerary of any x ∈ ∩_{n=0}^{∞} An. We conclude that ψ is onto.

Now we can count the periodic points. First observe that there are no periodic points of L in the earlier defined equivalence classes in X̃2+, since these points are eventually mapped to 0. Moreover, any periodic point a of L of period n in X2+ must have an itinerary of the form (a0 a1 . . . an−1 a0 a1 . . . an−1 . . .). Since ψ ◦ T = L ◦ ψ, we have found a bijection between the periodic points of L on X̃2+ and those of T on [0, 1]. Therefore, T has 2^n periodic points of period n in [0, 1].
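The count of 2^n periodic points can be verified by brute force. Since T^n is affine with slope ±2^n on each dyadic interval [k/2^n, (k + 1)/2^n] and maps it onto [0, 1], the fixed-point equation T^n(x) = x can be solved exactly on each piece. A minimal sketch of this check (helper names are ours):

```python
from fractions import Fraction

def tent_iter(x, n):
    # n-fold iterate of the tent map, in exact rational arithmetic
    for _ in range(n):
        x = 2 * x if x <= Fraction(1, 2) else 2 * (1 - x)
    return x

def count_periodic(n):
    # T^n is affine on each [k/2^n, (k+1)/2^n] and maps it onto [0, 1],
    # so T^n(x) = x has exactly one solution per piece.  Counting roots
    # with a <= x < b avoids double counting; x = 1 is not fixed.
    count = 0
    for k in range(2 ** n):
        a = Fraction(k, 2 ** n)
        b = Fraction(k + 1, 2 ** n)
        fa, fb = tent_iter(a, n), tent_iter(b, n)
        s = (fb - fa) / (b - a)      # slope of T^n on [a, b]: +-2^n
        x = (s * a - fa) / (s - 1)   # solve fa + s*(x - a) = x
        if a <= x < b:
            count += 1
    return count

print([count_periodic(n) for n in range(1, 6)])  # [2, 4, 8, 16, 32]
```

For n = 1 the two roots found are the fixed points 0 and 2/3, matching the phase portrait of the tent map.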

Exercise 7.3.1 In this exercise you are asked to prove some properties of the shift map. Let Xk = {0, 1, . . . , k − 1}^Z and Xk+ = {0, 1, . . . , k − 1}^{N∪{0}}.

a. Show that the map d : Xk × Xk −→ R, defined by

    d(x, y) = Σ_{n=−∞}^{∞} |xn − yn| / 2^{|n|},

is a metric on Xk that induces the product topology on Xk.

b. Show that the map d* : Xk × Xk −→ R, defined by d*(x, y) = 2^{−l} if l = min{|j| : xj ≠ yj} (and d*(x, y) = 0 if x = y), is a metric on Xk that induces the product topology on Xk.

c. Show that the two-sided shift L : Xk −→ Xk is a homeomorphism and that the one-sided shift on Xk+ is continuous.

d. Show that the two-sided shift on Xk is expansive.

Exercise 7.3.2 Let Qλ be the quadratic map and let C, A1 be as in the first example. Set [0, 1] − A1 = I0 ∪ I1, where I0 is the interval left of A1 and I1 the interval to the right. We assume that λ > 2 + √5, so that |Q′λ(x)| > 1 for all x ∈ I0 ∪ I1 (check this!). Let X2+ = {0, 1}^N and let φ : C −→ X2+ be the itinerary map generated by Qλ on {I0, I1}. This exercise serves as another demonstration of the usefulness of symbolic dynamics.

a. Show that φ is a homeomorphism.

b. Show that φ is a topological conjugacy between Qλ and the one-sided shift map on X2+.

c. Prove that Qλ has exactly 2^n periodic points of period n in C.

d. Show that the periodic points of Qλ are dense in C. Finally, prove that Qλ is topologically transitive on C.

Exercise 7.3.3 Let Qλ be the quadratic map and let C be as in the first example. As in the previous exercise, we will assume that λ > 2 + √5. This exercise shows that C is a Cantor set, i.e. a closed, totally disconnected and perfect subset of [0, 1].

a. Prove that C is closed.

b. Show that C is totally disconnected, i.e. the only connected subspaces of C are one-point sets.

c. Demonstrate that C is perfect, i.e. every point is a limit point of C.
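The two metrics of exercise 7.3.1 are easy to experiment with numerically. In the sketch below (our own illustration, with bisequences represented as Python functions from Z to symbols and sums truncated to a finite window) sequences agreeing on a large central block come out close in both d and d*, which is the heart of why both induce the product topology:

```python
def d(x, y, window=60):
    # d(x, y) = sum over n in Z of |x_n - y_n| / 2^{|n|}, truncated to
    # |n| <= window; the neglected tail is geometrically small.
    return sum(abs(x(n) - y(n)) / 2 ** abs(n) for n in range(-window, window + 1))

def dstar(x, y, window=60):
    # d*(x, y) = 2^{-l} with l = min{|j| : x_j != y_j}, and 0 if the
    # sequences agree on the whole window.
    diffs = [abs(n) for n in range(-window, window + 1) if x(n) != y(n)]
    return 2.0 ** -min(diffs) if diffs else 0.0

zero = lambda n: 0                      # the constant 0 bisequence
y = lambda n: 1 if n == 3 else 0        # differs from zero only at n = 3
z = lambda n: 1 if n in (-2, 3) else 0  # differs at n = -2 and n = 3

print(d(zero, y), dstar(zero, y))   # 0.125 0.125
print(d(zero, z), dstar(zero, z))   # 0.375 0.25
```

In both metrics, a ball of radius 2^{−l} around a sequence consists of sequences that match it on the central block of indices |n| < l, up to constants; this is the cylinder-set picture of the product topology.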

Chapter 8

The Variational Principle

In this chapter we will establish a powerful relationship between measure-theoretic and topological entropy, known as the Variational Principle. It asserts that for a continuous transformation T of a compact metric space X the topological entropy is given by

    h(T) = sup{hµ(T) | µ ∈ M(X, T)}.

8.1 Main Theorem

To prove this statement, we will proceed along the shortest and most popular route to victory, provided by [M].¹ The proof is at times quite technical and is therefore divided into several digestible pieces. For the first part of the proof of the Variational Principle we will only use some properties of measure-theoretic entropy. All the necessary ingredients are listed in the following lemma:

Lemma 8.1.1 Let α, β, γ be finite partitions of X and T a measure preserving transformation of the probability space (X, F, µ). Then:

1. Hµ(α) ≤ log(N(α)), where N(α) is the number of sets in the partition of non-zero measure. Equality holds only if µ(A) = 1/N(α), for all A ∈ α with µ(A) > 0.

2. If α ≤ β, then hµ(T, α) ≤ hµ(T, β).

¹Our exposition of Misiurewicz's proof follows [W] to a certain extent.

3. hµ(T, α) ≤ hµ(T, γ) + Hµ(α|γ).

4. hµ(T^n) = n · hµ(T), for all n ∈ Z>0.

Proofs (1): This is a straightforward consequence of the concavity of the function f : [0, ∞) −→ R defined by f(t) = −t log(t) (t > 0), f(0) = 0. In this notation, Hµ(α) = Σ_{A∈α} f(µ(A)).

(2): α ≤ β implies that for every B ∈ β there is some A ∈ α such that B ⊂ A, and hence also T^{−i}B ⊂ T^{−i}A, i ∈ Z≥0. Therefore, for n ≥ 1,

    ⋁_{i=0}^{n−1} T^{−i}α ≤ ⋁_{i=0}^{n−1} T^{−i}β.

The result now follows from proposition (4.1).

(3): By proposition (4.1)(e) and (b),

    H(⋁_{i=0}^{n−1} T^{−i}α) ≤ H(⋁_{i=0}^{n−1} T^{−i}γ ∨ ⋁_{i=0}^{n−1} T^{−i}α)
                             = H(⋁_{i=0}^{n−1} T^{−i}γ) + H(⋁_{i=0}^{n−1} T^{−i}α | ⋁_{i=0}^{n−1} T^{−i}γ).

Now, by successively applying proposition (4.1)(d), (g) and finally using the fact that T is measure preserving, we get

    H(⋁_{i=0}^{n−1} T^{−i}α | ⋁_{i=0}^{n−1} T^{−i}γ) ≤ Σ_{i=0}^{n−1} H(T^{−i}α | ⋁_{i=0}^{n−1} T^{−i}γ)
                                                     ≤ Σ_{i=0}^{n−1} H(T^{−i}α | T^{−i}γ) = nH(α|γ).

Combining the inequalities, dividing both sides by n and taking the limit for n → ∞ gives the desired result.

(4): Note that

    h(T^n, ⋁_{i=0}^{n−1} T^{−i}α) = lim_{k→∞} (1/k) H(⋁_{j=0}^{k−1} T^{−nj}(⋁_{i=0}^{n−1} T^{−i}α))
                                  = lim_{k→∞} (n/kn) H(⋁_{i=0}^{kn−1} T^{−i}α)
                                  = lim_{m→∞} (n/m) H(⋁_{i=0}^{m−1} T^{−i}α) = nh(T, α).

Hence,

    nh(T) = n sup_α h(T, α) = sup_α h(T^n, ⋁_{i=0}^{n−1} T^{−i}α) ≤ sup_α h(T^n, α) = h(T^n),

where the inequality holds because {⋁_{i=0}^{n−1} T^{−i}α | α a partition of X} ⊂ {β | β a partition of X}. Finally, since α ≤ ⋁_{i=0}^{n−1} T^{−i}α, we obtain by (2)

    h(T^n, α) ≤ h(T^n, ⋁_{i=0}^{n−1} T^{−i}α) = nh(T, α) ≤ nh(T).

So h(T^n) ≤ nh(T), and we conclude that h(T^n) = nh(T). This concludes our proof.

Exercise 8.1.1 Use lemma 8.1.1 to show that for an invertible measure preserving transformation T on (X, F, µ): h(T^n) = |n|h(T), for all n ∈ Z.

We are now ready to prove the first part of the theorem.
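Part (1) of lemma 8.1.1 can be checked numerically: with f(t) = −t log t, the entropy Σ f(pi) of a probability vector never exceeds log N, with equality exactly at the uniform vector. A quick sketch (helper names are ours):

```python
import math
import random

def f(t):
    # f(t) = -t * log(t) for t > 0, and f(0) = 0
    return -t * math.log(t) if t > 0 else 0.0

def H(p):
    # entropy of a partition whose cells carry the masses listed in p
    return sum(f(t) for t in p)

N = 5
uniform = [1 / N] * N
print(H(uniform), math.log(N))   # both equal log 5

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(N)]
    p = [t / sum(w) for t in w]
    assert H(p) <= math.log(N) + 1e-12   # lemma 8.1.1(1)
```

A thousand random probability vectors on 5 cells all stay below log 5, and only the uniform vector attains it, as the concavity argument in the proof of (1) predicts.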

Theorem 8.1.1 Let T : X −→ X be a continuous transformation of a compact metric space. Then

    h(T) ≥ sup{hµ(T) | µ ∈ M(X, T)}.

Proof Fix µ ∈ M(X, T). Let α = {A1, . . . , An} be a finite partition of X. Pick ε > 0 such that ε < 1/(n log n). Since µ is regular, we can find closed sets Bi ⊂ Ai such that µ(Ai − Bi) < ε, i = 1, . . . , n. Define B0 = X − ∪_{i=1}^{n} Bi and let β be the partition β = {B0, B1, . . . , Bn}. We can define a σ-algebra on B0 by F ∩ B0 = {F ∩ B0 | F ∈ F} and define the conditional measure µ(·|B0) : F ∩ B0 −→ [0, 1] by

    µ(A|B0) = µ(A ∩ B0)/µ(B0).

It is easy to check that (B0, F ∩ B0, µ(·|B0)) is a probability space and µ(·|B0) ∈ M(B0, T|B0). Now, writing f(t) = −t log(t), we get

    Hµ(α|β) = Σ_{i=0}^{n} µ(Bi) Σ_{j=1}^{n} f(µ(Bi ∩ Aj)/µ(Bi)) = µ(B0) Σ_{j=1}^{n} f(µ(B0 ∩ Aj)/µ(B0));

in the final step we used the fact that for i ≥ 1, Bi ∩ Ai = Bi and Bi ∩ Aj = ∅ if i ≠ j. Hence, noting that α0 := {A ∩ B0 | A ∈ α} is a partition of B0 under µ(·|B0), we get

    Hµ(α|β) = µ(B0) Hµ(·|B0)(α0).

Now we can apply (1) of lemma 8.1.1 and use the fact that

    µ(B0) = µ(X − ∪_{i=1}^{n} Bi) = µ(∪_{i=1}^{n} Ai − ∪_{i=1}^{n} Bi) = µ(∪_{i=1}^{n} (Ai − Bi)) < nε

to obtain

    Hµ(α|β) ≤ µ(B0) log(n) < nε log(n) < 1.

Define for each i = 1, . . . , n the open set Ci by Ci = B0 ∪ Bi = X − ∪_{j≠i} Bj. Then we can define an open cover of X by γ = {C1, . . . , Cn}. By (1) of lemma 8.1.1 we have, for m ≥ 1,

    Hµ(⋁_{i=0}^{m−1} T^{−i}β) ≤ log(N(⋁_{i=0}^{m−1} T^{−i}β)).

Note that γ is not necessarily a partition, but since β contains (at most) one more set of non-zero measure, the proof of (1) of lemma 8.1.1 still gives

    Hµ(⋁_{i=0}^{m−1} T^{−i}β) ≤ log(2^m N(⋁_{i=0}^{m−1} T^{−i}γ)).

Using (3) of theorem 7.1.1, it follows that hµ(T, β) ≤ h(T, γ) + log 2 ≤ h(T) + log 2, and by (3) of lemma 8.1.1,

    hµ(T, α) ≤ hµ(T, β) + Hµ(α|β) ≤ h(T) + log 2 + 1.

Note that if µ ∈ M(X, T), then also µ ∈ M(X, T^m), so the above inequality holds for T^m as well. Applying (4) of lemma 8.1.1 then leads us to mhµ(T) ≤ mh(T) + log 2 + 1. By dividing by m, taking the limit for m → ∞ and taking the supremum over all µ ∈ M(X, T), we obtain the desired result.

We will now finish the proof of the Variational Principle by proving the opposite inequality. This is the hard part of the proof, since it does not only require more machinery, but also a clever trick.

Lemma 8.1.2 Let X be a compact metric space and α a finite partition of X. Then, for any µ, ν ∈ M(X, T) and p ∈ [0, 1] we have

    H_{pµ+(1−p)ν}(α) ≥ pHµ(α) + (1 − p)Hν(α).

Proof Define the function f : [0, ∞) −→ R by f(t) = −t log(t) (t > 0), f(0) = 0. Then f is concave, so for every measurable A we have

    0 ≤ f(pµ(A) + (1 − p)ν(A)) − pf(µ(A)) − (1 − p)f(ν(A)).

The result now follows easily.
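Both the conditional-entropy computation used in the proof of theorem 8.1.1 and the concavity inequality of lemma 8.1.2 can be sanity-checked on finite examples. In the sketch below (our own, with illustrative helper names) `joint[i][j]` plays the role of µ(Bi ∩ Aj):

```python
import math
import random

def f(t):
    # the concave function f(t) = -t log t behind all the entropy bounds
    return -t * math.log(t) if t > 0 else 0.0

def cond_entropy(joint):
    # H(alpha | beta) = sum_i mu(B_i) * sum_j f(mu(B_i & A_j) / mu(B_i)),
    # where joint[i][j] stands for mu(B_i & A_j).
    total = 0.0
    for row in joint:
        mb = sum(row)
        if mb > 0:
            total += mb * sum(f(m / mb) for m in row)
    return total

# A set B_i contained in a single A_j (the case i >= 1 in the proof of
# theorem 8.1.1) contributes nothing to H(alpha | beta):
print(cond_entropy([[0.0, 0.7], [0.3, 0.0]]))      # 0.0
print(cond_entropy([[0.25, 0.25], [0.25, 0.25]]))  # log 2

# Concavity of f gives lemma 8.1.2: the entropy of a mixture dominates
# the mixture of the entropies.
def H(p):
    return sum(f(t) for t in p)

random.seed(1)
for _ in range(1000):
    mu = [random.random() for _ in range(4)]
    mu = [t / sum(mu) for t in mu]
    nu = [random.random() for _ in range(4)]
    nu = [t / sum(nu) for t in nu]
    p = random.random()
    mix = [p * a + (1 - p) * b for a, b in zip(mu, nu)]
    assert H(mix) >= p * H(mu) + (1 - p) * H(nu) - 1e-12
```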

Exercise 8.1.2 Suppose X is a compact metric space and T : X −→ X a continuous transformation. Use (the proof of) lemma 8.1.2 to show that for any µ, ν ∈ M(X, T) and p ∈ [0, 1] we have

    h_{pµ+(1−p)ν}(T) ≥ phµ(T) + (1 − p)hν(T).

Exercise 8.1.3 Improve the result in the exercise above by showing that we can replace the inequality by an equality sign: for any µ, ν ∈ M(X, T) and p ∈ [0, 1] we have h_{pµ+(1−p)ν}(T) = phµ(T) + (1 − p)hν(T).

Lemma 8.1.3 Let X be a compact metric space and µ ∈ M(X). Recall that the boundary of a set A is defined by ∂A = Ā − A°. Then:

1. For each x ∈ X and δ > 0, there is a 0 < η < δ such that µ(∂B(x, η)) = 0.

2. For any δ > 0, there is a finite partition α = {A1, . . . , An} of X such that diam(Aj) < δ and µ(∂Aj) = 0, for all j.

3. If T : X −→ X is continuous, µ ∈ M(X, T) and the sets Aj ⊂ X are measurable with µ(∂Aj) = 0, j = 0, 1, . . . , n − 1, then µ(∂(∩_{j=0}^{n−1} T^{−j}Aj)) = 0.

4. If µn → µ in M(X) and A is a measurable set such that µ(∂A) = 0, then lim_{n→∞} µn(A) = µ(A).

Proof (1): Suppose the statement is false. Then there exist an x ∈ X and a δ > 0 such that µ(∂B(x, η)) > 0 for all 0 < η < δ. Let {ηi}_{i=1}^{∞} be a sequence of distinct real numbers satisfying 0 < ηi < δ and ηi ↑ δ for i → ∞. Since the boundaries ∂B(x, ηi) are pairwise disjoint,

    µ(∪_{i=1}^{∞} ∂B(x, ηi)) = Σ_{i=1}^{∞} µ(∂B(x, ηi)) = ∞,

contradicting the fact that µ is a probability measure on X.

(2): Fix δ > 0. By (1), for each x ∈ X we can find an 0 < ηx < δ/2 such that µ(∂B(x, ηx)) = 0. The collection {B(x, ηx) | x ∈ X} forms an open cover of X, so by compactness there exists a finite subcover, which we denote by β = {B1, . . . , Bn}. Define α by letting A1 = B1 and Aj = Bj − (∪_{k=1}^{j−1} Bk) for 1 < j ≤ n. Then α is a partition of X, diam(Aj) ≤ diam(Bj) < δ and, since ∂Aj ⊂ ∪_{i=1}^{n} ∂Bi, also µ(∂Aj) ≤ µ(∪_{i=1}^{n} ∂Bi) = 0.

(3): Let x ∈ ∂(∩_{j=0}^{n−1} T^{−j}Aj). Then every open neighborhood of x intersects every T^{−j}Aj, so x lies in the closure of every T^{−j}Aj, but x does not lie in the interior of every T^{−j}Aj. Hence, for some 0 ≤ k ≤ n − 1, x belongs to the closure of T^{−k}Ak but not to its interior, and by continuity of T^k,

    T^k(x) ∈ Āk − Ak° = ∂Ak.

Thus ∂(∩_{j=0}^{n−1} T^{−j}Aj) ⊂ ∪_{j=0}^{n−1} T^{−j}∂Aj, and since µ is T-invariant and µ(∂Aj) = 0 for all j, the statement readily follows.

(4): Recall that µn → µ in M(X) for n → ∞ if and only if

    lim_{n→∞} ∫_X f(x) dµn(x) = ∫_X f(x) dµ(x), for all f ∈ C(X).

Let A be as stated and define, for k ∈ Z>0, Ak = {x ∈ X | d(x, Ā) < 1/k}. Since Ā and X − Ak are closed and disjoint, for every k there exists a function fk ∈ C(X) such that 0 ≤ fk ≤ 1, fk(x) = 1 for all x ∈ Ā and fk(x) = 0 for all x ∈ X − Ak. Then, for each k,

    lim_{n→∞} µn(A) = lim_{n→∞} ∫_X 1_A(x) dµn(x) ≤ lim_{n→∞} ∫_X fk(x) dµn(x) = ∫_X fk(x) dµ(x).

Note that fk ↓ 1_Ā for k → ∞, so by the Dominated Convergence Theorem,

    lim_{n→∞} µn(A) ≤ lim_{k→∞} ∫_X fk(x) dµ(x) = µ(Ā) = µ(A),

where the final equality follows from the assumption µ(∂A) = 0. The proof of the opposite inequality is similar.

Lemma 8.1.4 Let q, n be integers such that 1 < q < n. Define, for 0 ≤ j ≤ q − 1, a(j) = ⌊(n − j)/q⌋, where ⌊·⌋ means taking the integer part, and

    Sj = {0, 1, . . . , j − 1} ∪ {j + a(j)q, j + a(j)q + 1, . . . , n − 1}.

Then we have the following:

1. a(0) ≥ a(1) ≥ · · · ≥ a(q − 1).

2. For each fixed 0 ≤ j ≤ q − 1, {0, 1, . . . , n − 1} = {j + rq + i | 0 ≤ r ≤ a(j) − 1, 0 ≤ i ≤ q − 1} ∪ Sj and card(Sj) ≤ 2q.

3. The numbers {j + rq | 0 ≤ j ≤ q − 1, 0 ≤ r ≤ a(j) − 1} are distinct and no greater than n − q.
One can readily check all properties stated in the lemma; the three statements are clear after a little thought. For instance, (a(j) − 1)q + j ≤ (⌊(n − j)/q⌋ − 1)q + j ≤ n − q, which gives the bound in the third statement.

Example 8.1.1 Let us work out what is going on in lemma 8.1.4 for n = 10, q = 4. Then a(0) = ⌊(10 − 0)/4⌋ = 2 and similarly a(1) = 2, a(2) = 2 and a(3) = 1. This gives us S0 = {8, 9}, S1 = {0, 9}, S2 = {0, 1} and S3 = {0, 1, 2, 7, 8, 9}. For example, S3 is obtained by taking all nonnegative integers smaller than 3 and adding the numbers 3 + a(3)q = 7, 3 + a(3)q + 1 = 8 and 3 + a(3)q + 2 = 9. Note that card(Sj) ≤ 8 = 2q and, e.g., (a(3) − 1)q + 3 = (1 − 1) · 4 + 3 = 3 ≤ 6 = n − q.

We shall now finish the proof of our main theorem.

Theorem 8.1.2 Let T : X −→ X be a continuous transformation of a compact metric space. Then

    h(T) ≤ sup{hµ(T) | µ ∈ M(X, T)}.

Before plunging into a formal proof, we will briefly discuss the main ideas; the proof is presented in a logical order, which is in this case not quite the same as thinking order. Fix ε > 0. We would like to construct a Borel probability measure µ with measure-theoretic entropy hµ(T) ≥ sep(ε, T) := lim_{n→∞} (1/n) log(sep(n, ε, T)). To do this, we first find, for each n, a measure σn with Hσn(⋁_{i=0}^{n−1} T^{−i}α) equal to log(sep(n, ε, T)), where α is a suitably chosen partition. This is not too difficult. The problem is to find an element of M(X, T) with this property; theorem (6.5) suggests a suitable measure µ which can be obtained from the σn. Lemma 8.1.4 plays a crucial role in getting from an estimate for Hσn to one for Hµ: the idea is to first remove the 'tails' Sj of the partition ⋁_{i=0}^{n−1} T^{−i}α, make a crude estimate for these tails and later add them back on again. Lemma 8.1.3 fills in the remaining technicalities.
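Lemma 8.1.4 and example 8.1.1 are purely combinatorial, so they can be checked mechanically. A minimal sketch (function names are ours):

```python
def a(j, n, q):
    # a(j) = floor((n - j) / q)
    return (n - j) // q

def S(j, n, q):
    # S_j = {0, ..., j-1}  union  {j + a(j)q, ..., n-1}
    return set(range(j)) | set(range(j + a(j, n, q) * q, n))

n, q = 10, 4
print([a(j, n, q) for j in range(q)])  # [2, 2, 2, 1]
print(sorted(S(3, n, q)))              # [0, 1, 2, 7, 8, 9]

for j in range(q):
    # the decomposition {0, ..., n-1} = {j + rq + i} union S_j ...
    block = {j + r * q + i for r in range(a(j, n, q)) for i in range(q)}
    assert block | S(j, n, q) == set(range(n))
    # ... and the cardinality bound card(S_j) <= 2q
    assert len(S(j, n, q)) <= 2 * q
```

Running this for other admissible pairs (n, q) confirms the decomposition and the bound in full generality, since card(Sj) = j + ((n − j) mod q) ≤ 2(q − 1).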

Proof Fix ε > 0. For each n, let En be an (n, ε)-separated set of cardinality sep(n, ε, T). Define σn ∈ M(X) by

    σn = (1/sep(n, ε, T)) Σ_{x∈En} δx,

where δx is the Dirac measure concentrated at x, and define µn ∈ M(X) by µn = (1/n) Σ_{i=0}^{n−1} σn ◦ T^{−i}. Since M(X) is compact, there exists a subsequence {nj}_{j=1}^{∞} such that {µnj} converges in M(X) to some µ ∈ M(X). By theorem (6.5), µ ∈ M(X, T). We will show that hµ(T) ≥ sep(ε, T), from which the result clearly follows.

By (2) of lemma 8.1.3, we can find a µ-measurable partition α = {A1, . . . , Ak} of X such that diam(Aj) < ε and µ(∂Aj) = 0, j = 1, . . . , k. We may assume that every x ∈ En is contained in some Aj. Now, if A ∈ ⋁_{i=0}^{n−1} T^{−i}α, then A cannot contain more than one element of En, so σn(A) = 0 or σn(A) = 1/sep(n, ε, T). Since ∪_{j=1}^{k} Aj contains all of En, we see that

    Hσn(⋁_{i=0}^{n−1} T^{−i}α) = log(sep(n, ε, T)).

Fix integers q, n with 1 < q < n and define for each 0 ≤ j ≤ q − 1 a(j) as in lemma 8.1.4. Fix 0 ≤ j ≤ q − 1. Since

    ⋁_{i=0}^{n−1} T^{−i}α = (⋁_{r=0}^{a(j)−1} T^{−(rq+j)}(⋁_{i=0}^{q−1} T^{−i}α)) ∨ (⋁_{i∈Sj} T^{−i}α),

we find that

    log(sep(n, ε, T)) = Hσn(⋁_{i=0}^{n−1} T^{−i}α)
                      ≤ Σ_{r=0}^{a(j)−1} Hσn(T^{−(rq+j)}(⋁_{i=0}^{q−1} T^{−i}α)) + Σ_{i∈Sj} Hσn(T^{−i}α)
                      ≤ Σ_{r=0}^{a(j)−1} H_{σn◦T^{−(rq+j)}}(⋁_{i=0}^{q−1} T^{−i}α) + 2q log k.

Here we used proposition (4.1)(d) and lemma 8.1.4. Now, if we sum the above inequality over j and divide both sides by n, we obtain by (3) and (1) of lemma 8.1.4

    (q/n) log(sep(n, ε, T)) ≤ (1/n) Σ_{l=0}^{n−1} H_{σn◦T^{−l}}(⋁_{i=0}^{q−1} T^{−i}α) + (2q²/n) log k
                            ≤ H_{µn}(⋁_{i=0}^{q−1} T^{−i}α) + (2q²/n) log k,

where the last inequality follows from lemma 8.1.2.

By (3) of lemma 8.1.3, each atom A of ⋁_{i=0}^{q−1} T^{−i}α has boundary of µ-measure zero, so by (4) of the same lemma, lim_{j→∞} µnj(A) = µ(A). Hence, if we replace n by nj in the above inequality and take the limit for j → ∞, we obtain

    q sep(ε, T) ≤ Hµ(⋁_{i=0}^{q−1} T^{−i}α).

If we now divide both sides by q and take the limit for q → ∞, we get the desired inequality sep(ε, T) ≤ hµ(T, α) ≤ hµ(T).

Combining theorems 8.1.1 and 8.1.2, we obtain:

Corollary 8.1.1 (The Variational Principle) The topological entropy of a continuous transformation T : X −→ X of a compact metric space X is given by

    h(T) = sup{hµ(T) | µ ∈ M(X, T)}.

To get a taste of the power of this statement, let us recast our proof of the invariance of topological entropy under conjugacy ((2) of theorem 7.4). Let φ : X1 → X2 denote the conjugacy. We note that µ ∈ M(X1, T1) if and only if µ ◦ φ^{−1} ∈ M(X2, T2), and we observe that hµ(T1) = h_{µ◦φ^{−1}}(T2). The result now follows from the Variational Principle. It is that simple.

8.2 Measures of Maximal Entropy

The Variational Principle suggests an educated way of choosing a Borel probability measure on X, namely one that maximizes the entropy of T.

Definition 8.2.1 Let X be a compact metric space and T : X −→ X be continuous. A measure µ ∈ M(X, T) is called a measure of maximal entropy if hµ(T) = h(T). Let Mmax(X, T) = {µ ∈ M(X, T) | hµ(T) = h(T)}. If Mmax(X, T) = {µ}, then µ is called a unique measure of maximal entropy.

Example. Recall that the topological entropy of the circle rotation Rθ is given by h(Rθ) = 0. Since hµ(Rθ) ≥ 0 for all µ ∈ M(X, Rθ), we have Mmax(X, Rθ) = M(X, Rθ), i.e. every µ ∈ M(X, Rθ) is a measure of maximal entropy. More generally, Mmax(X, T) = M(X, T) for any continuous isometry T : X −→ X.

Measures of maximal entropy are closely connected to (uniquely) ergodic measures, as will become apparent from the following theorem.

Theorem 8.2.1 Let X be a compact metric space and T : X −→ X be continuous. Then:

1. Mmax(X, T) is a convex set.

2. If h(T) < ∞, then the extreme points of Mmax(X, T) are precisely the ergodic members of Mmax(X, T).

3. If h(T) = ∞, then Mmax(X, T) ≠ ∅.

4. A unique measure of maximal entropy is ergodic. If, moreover, T is uniquely ergodic, then T has a unique measure of maximal entropy.

Proofs (1): Let p ∈ [0, 1] and µ, ν ∈ Mmax(X, T). By exercise 8.1.3,

    h_{pµ+(1−p)ν}(T) = phµ(T) + (1 − p)hν(T) = ph(T) + (1 − p)h(T) = h(T).

Hence, pµ + (1 − p)ν ∈ Mmax(X, T). In other words, Mmax(X, T) is a convex set.

(2): Suppose µ is an extreme point of Mmax(X, T), and suppose there are a p ∈ (0, 1) and ν1, ν2 ∈ M(X, T) such that µ = pν1 + (1 − p)ν2. By exercise 8.1.3, h(T) = hµ(T) = phν1(T) + (1 − p)hν2(T), and since hν1(T), hν2(T) ≤ h(T) by the Variational Principle, it follows that h(T) = hν1(T) = hν2(T). Therefore, ν1, ν2 ∈ Mmax(X, T), and since µ is an extreme point of Mmax(X, T), µ cannot be written as a non-trivial convex combination of elements of Mmax(X, T); thus µ = ν1 = ν2. Hence, µ is also an extreme point of M(X, T), and we conclude that µ is ergodic. Conversely, if µ ∈ Mmax(X, T) is ergodic, then µ is an extreme point of M(X, T); since Mmax(X, T) ⊂ M(X, T), µ is then certainly an extreme point of Mmax(X, T).

(3): By the Variational Principle, h(T) = sup{hµ(T) | µ ∈ M(X, T)}, so for any n ∈ Z>0 we can find a µn ∈ M(X, T) such that hµn(T) > 2^n. Define µ ∈ M(X, T) by

    µ = Σ_{n=1}^{∞} µn/2^n = Σ_{n=1}^{N} µn/2^n + Σ_{n=N+1}^{∞} µn/2^n.

But then, by exercise 8.1.3, for every N,

    hµ(T) ≥ Σ_{n=1}^{N} hµn(T)/2^n > N.

This holds for arbitrary N ∈ Z>0, so hµ(T) = ∞ = h(T) and µ is a measure of maximal entropy. Hence, Mmax(X, T) ≠ ∅.

(4): For h(T) < ∞, the first statement follows from (2): if Mmax(X, T) = {µ}, then µ is trivially an extreme point of Mmax(X, T), hence ergodic. For h(T) = ∞, let ν ∈ M(X, T). Then, by exercise 8.1.3, h_{µ/2+ν/2}(T) = (1/2)hµ(T) + (1/2)hν(T) = ∞, so µ/2 + ν/2 ∈ Mmax(X, T) = {µ}. Hence µ = µ/2 + ν/2, and therefore µ = ν. Thus M(X, T) = {µ}, and µ is ergodic. Now suppose that T is uniquely ergodic, say M(X, T) = {µ}. By the Variational Principle, hµ(T) = sup{hν(T) | ν ∈ M(X, T)} = h(T), so Mmax(X, T) = {µ} and T has a unique measure of maximal entropy.

Example 8.2.1 For θ irrational, Rθ is uniquely ergodic with respect to Lebesgue measure. By the theorem above, it follows that Lebesgue measure is also a unique measure of maximal entropy for Rθ.

Exercise 8.2.1 Let X = {0, 1, . . . , k − 1}^Z and let T : X −→ X be the full two-sided shift. Use proposition 7.2 to show that the uniform product measure is a unique measure of maximal entropy for T.

We end this section with a generalization of the above exercise.

Proposition 8.2.1 Every expansive homeomorphism of a compact metric space has a measure of maximal entropy.

Proof Let T : X −→ X be an expansive homeomorphism and let δ > 0 be an expansive constant for T. Fix 0 < ε < δ. We will show that h(T) = sep(ε, T). Define µ ∈ M(X, T) as in the proof of theorem 8.1.2; then, by that proof, hµ(T) ≥ sep(ε, T), and it then immediately follows that µ ∈ Mmax(X, T).

Pick any 0 < η < ε and let A be an (n, η)-separated set of cardinality sep(n, η, T); A is finite. By expansiveness, we can find for any x, y ∈ A some k = k_{x,y} ∈ Z such that d(T^k(x), T^k(y)) > ε, so l := max{|k_{x,y}| : x, y ∈ A} is in Z>0. Then, by our choice of l, for x, y ∈ T^{−l}A we have

    max_{0≤i≤2l+n−1} d(T^i(x), T^i(y)) > ε.

So T^{−l}A is a (2l + n, ε)-separated set of cardinality sep(n, η, T). Hence,

    sep(η, T) = lim_{n→∞} (1/n) log(sep(n, η, T))
              ≤ lim_{n→∞} (1/n) log(sep(2l + n, ε, T))
              = lim_{n→∞} (1/(2l + n)) log(sep(2l + n, ε, T)) = sep(ε, T).

Conversely, if A is (n, ε)-separated, then A is also (n, η)-separated, so sep(ε, T) ≤ sep(η, T). We conclude that sep(ε, T) = sep(η, T). Since 0 < η < ε was arbitrary, this shows that h(T) = lim_{η↓0} sep(η, T) = sep(ε, T). This completes our proof.

From the proof we extract the following corollary.

Corollary 8.2.1 Let T : X −→ X be an expansive homeomorphism and let δ be an expansive constant for T. Then h(T) = sep(ε, T), for any 0 < ε < δ.

At this point we would like to make a remark on the proof of the above proposition. One might be inclined to believe that the measure µ constructed in the proof of theorem 8.1.2 is a measure of maximal entropy for any continuous transformation T. The reader should note, though, that µ still depends on ε, and it is therefore only because of the above corollary that µ is a measure of maximal entropy for expansive homeomorphisms.
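Exercise 8.2.1 can be made plausible numerically. Assuming the standard fact (established in chapter 4) that the Bernoulli product measure with marginal weights (p0, . . . , pk−1) has entropy −Σ pi log pi, no product measure beats the topological entropy log k of the full shift, and only the uniform weights attain it. A sketch of that comparison (helper names are ours):

```python
import math
import random

def bernoulli_entropy(p):
    # h_mu(shift) = -sum p_i log p_i for the product measure with
    # marginal weights p (the entropy formula from chapter 4)
    return -sum(t * math.log(t) for t in p if t > 0)

k = 3
top = math.log(k)   # topological entropy of the full shift on k symbols

random.seed(2)
best = 0.0
for _ in range(2000):
    w = [random.random() for _ in range(k)]
    p = [t / sum(w) for t in w]
    best = max(best, bernoulli_entropy(p))

print(best <= top)                          # True
print(bernoulli_entropy([1 / k] * k), top)  # both equal log 3
```

Of course this only explores product measures; the actual exercise asks for uniqueness within all of M(X, T), which needs the cited proposition.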


Bibliography

[B] M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, 2002.

[D] K. Dajani and C. Kraaikamp, Ergodic Theory of Numbers, Carus Mathematical Monographs 29, Mathematical Association of America, Washington, DC, 2002.

[DF] K. Dajani and A. Fieldsteel, Equipartition of interval partitions and an application to number theory, Proc. AMS 129 (2001), no. 12, 3453-3460.

[De] R. L. Devaney, An Introduction to Chaotic Dynamical Systems, Second Edition, Addison-Wesley Publishing Company, 1989.

[G] J. Gleick, Chaos: Making a New Science, Vintage, 1998.

[H] W. Hurewicz, Ergodic theorem without invariant measure, Annals of Math. 45 (1944), 192-206.

[Ho] F. C. Hoppensteadt, Analysis and Simulation of Chaotic Systems, Springer-Verlag New York, Inc., 1993.

[KK] T. Kamae and M. Keane, A simple proof of the ratio ergodic theorem, Osaka J. Math. 34 (1997), 653-657.

[KT] J. F. C. Kingman and S. J. Taylor, Introduction to Measure and Probability, Cambridge University Press, 1966.

[M] M. Misiurewicz, A short proof of the variational principle for a Z^n_+ action on a compact space, Asterisque 40 (1976), 147-187.

[Mu] J. R. Munkres, Topology, Second Edition, Prentice Hall, 2000.

[P] K. Petersen, Ergodic Theory, Cambridge Studies in Advanced Mathematics 2, Cambridge University Press, Cambridge, 1989.

[Pa] W. Parry, Topics in Ergodic Theory, Cambridge Tracts in Mathematics 75, Cambridge University Press, Cambridge, 2004. Reprint of the 1981 original.

[W] P. Walters, An Introduction to Ergodic Theory, Graduate Texts in Mathematics 79, Springer-Verlag, New York-Berlin, 1982.

Index

algebra; algebra generated; atoms of a partition; Baker's transformation; Bernoulli shifts; binary expansion; Birkhoff's Ergodic Theorem; β-transformations; Cantor set; circle rotation Rθ; common refinement α ∨ β; conditional expectation; conditional information function; conservative transformation; continued fractions; diameter of a set, of a partition; dynamical system; entropy of a partition; entropy of a transformation; equivalent measures; ergodic; ergodic decomposition; expansive homeomorphism, expansive constant; extreme point; factor map; first return time; fixed point; generator, weak generator; homeomorphism; Hurewicz Ergodic Theorem; induced map; induced operator; information function; integral system; irreducible Markov chain; isometry; isomorphic; Kac's Lemma; Knopp's Lemma; Lebesgue number; Lochs' Theorem; Markov measure; Markov shifts; measure of maximal entropy, unique; measure preserving; minimal; mixing, strong and weak; monotone class; natural extension; (n, ε)-separated set; (n, ε)-spanning set; non-singular transformation; orbit, forward orbit; periodic point; phase portrait; Poincaré Recurrence Theorem; quadratic map Qλ; Radon-Nikodym Theorem; random shifts; refinement of a partition, of an open cover; regular measure; Riesz Representation Theorem; semi-algebra; Shannon-McMillan-Breiman Theorem; shift map, one-sided, two-sided, full; stationary stochastic processes; subadditive sequence; symbolic dynamics; topological conjugacy; topological entropy; topologically transitive; translations; uniquely ergodic; Variational Principle; weakly mixing
