
Gábor Pete
Technical University of Budapest
http://www.math.bme.hu/~gabor
April 9, 2013
Work in progress. Comments are welcome.

Abstract

These notes have grown (and are still growing) out of two graduate courses I gave at the University of Toronto. The main goal is to give a self-contained introduction to several interrelated topics of current research interest: the connections between 1) coarse geometric properties of Cayley graphs of infinite groups; 2) the algebraic properties of these groups; and 3) the behaviour of probabilistic processes (most importantly, random walks, harmonic functions, and percolation) on these Cayley graphs. I try to be as little abstract as possible, emphasizing examples rather than presenting theorems in their most general forms. I also try to provide guidance to recent research literature. In particular, there are presently over 150 exercises and many open problems that might be accessible to PhD students. It is also hoped that researchers working either in probability or in geometric group theory will find these notes useful to enter the other field.

Contents

Preface  4
1 Basic examples of random walks  6
    1.1 Z^d and T_d, recurrence and transience, Green's function and spectral radius  6
    1.2 A probability proposition: Azuma-Hoeffding  10
2 Free groups and presentations  12
    2.1 Introduction  12
    2.2 Digression to topology: the fundamental group and covering spaces  13
    2.3 The main results on free groups  16
    2.4 Presentations and Cayley graphs  18
3 The asymptotic geometry of groups  20
    3.1 Quasi-isometries. Ends. The fundamental observation of geometric group theory  20
    3.2 Gromov-hyperbolic spaces and groups  25
    3.3 Asymptotic cones  25
4 Nilpotent and solvable groups  25
    4.1 The basics  25
    4.2 Semidirect products  27
    4.3 The volume growth of nilpotent and solvable groups  31
    4.4 Expanding maps. Polynomial and intermediate volume growth  34
5 Isoperimetric inequalities  37
    5.1 Basic definitions and examples  37
    5.2 Amenability, invariant means, wobbling paradoxical decompositions  39
    5.3 Isoperimetry in Z^d  42
    5.4 From growth to isoperimetry in groups  43
6 Random walks, discrete potential theory, martingales  44
    6.1 Markov chains, electric networks and the discrete Laplacian  45
    6.2 Dirichlet energy and transience  50
    6.3 Martingales  52
7 Cheeger constant and spectral gap  57
    7.1 Spectral radius and the Markov operator norm  57
    7.2 The infinite case: the Kesten-Cheeger-Dodziuk-Mohar theorem  60
    7.3 The finite case: expanders and mixing  62
    7.4 From infinite to finite: Kazhdan groups and expanders  71
8 Isoperimetric inequalities and return probabilities in general  74
    8.1 Poincaré and Nash inequalities  74
    8.2 Evolving sets: the results  76
    8.3 Evolving sets: the proof  78
9 Speed, entropy, Liouville property, Poisson boundary  82
    9.1 Speed of random walks  82
    9.2 The Liouville property for harmonic functions  86
    9.3 Entropy, and the main equivalence theorem  88
    9.4 Liouville and Poisson  92
    9.5 The Poisson boundary and entropy. The importance of group-invariance  95
    9.6 Unbounded measures  98
10 Growth of groups, of harmonic functions and of random walks  100
    10.1 A proof of Gromov's theorem  100
    10.2 Random walks on groups are at least diffusive  105
11 Harmonic Dirichlet functions and Uniform Spanning Forests  106
    11.1 Harmonic Dirichlet functions  106
    11.2 Loop-erased random walk, uniform spanning trees and forests  108
12 Percolation theory  110
    12.1 Percolation on infinite groups: p_c, p_u, unimodularity, and general invariant percolations  111
    12.2 Percolation on finite graphs. Threshold phenomena  129
    12.3 Critical percolation: the plane, scaling limits, critical exponents, mean field theory  134
    12.4 Geometry and random walks on percolation clusters  143
13 Further spatial models  148
    13.1 Ising, Potts, and the FK random cluster models  148
    13.2 Bootstrap percolation and zero temperature Glauber dynamics  157
    13.3 Minimal Spanning Forests  159
    13.4 Measurable group theory and orbit equivalence  161
14 Local approximations to Cayley graphs  166
    14.1 Unimodularity and soficity  166
    14.2 Spectral measures and other probabilistic questions  172
15 Some more exotic groups  184
    15.1 Self-similar groups of finite automata  184
    15.2 Constructing monsters using hyperbolicity  190
    15.3 Thompson's group F  190
16 Quasi-isometric rigidity and embeddings  190
References  192

Preface

These notes have grown (and are still growing) out of two graduate courses I gave at the University of Toronto: Probability and Geometry on Groups in the Fall of 2009, and Percolation in the plane, on Z^d, and beyond in the Spring of 2011. I am still adding material and polishing the existing parts, so at the end I expect it to be enough for two semesters, or even more. Large portions of the first drafts were written up by the nine students who took the first course for credit: Eric Hart, Siyu Liu, Kostya Matveev, Jim McGarva, Ben Rifkind, Andrew Stewart, Kyle Thompson, Lluís Vena, and Jeremy Voltz; I am very grateful to them. That first course was completely introductory: some students had not really seen probability before this, and only a few had seen geometric group theory. Here is the course description:

Probability is one of the fastest developing areas of mathematics today, finding new connections to other branches constantly. One example is the rich interplay between large-scale geometric properties of a space and the behaviour of stochastic processes (like random walks and percolation) on the space. The obvious best source of discrete metric spaces is the Cayley graphs of finitely generated groups, especially since their large-scale geometric (and hence, probabilistic) properties reflect the algebraic properties. A famous example is the construction of expander graphs using group representations; another one is Gromov's theorem on the equivalence between a group being almost nilpotent and the polynomial volume growth of its Cayley graphs. The course will contain a large variety of interrelated topics in this area, with an emphasis on open problems.

What I had originally planned to cover turned out to be ridiculously much, so a lot had to be dropped, which is also visible in the present state of these notes. The main topics that are still missing are Gromov-hyperbolic groups and their applications to the construction of interesting groups, metric embeddings of groups in Hilbert spaces, more on the construction and applications of expander graphs, more on critical spatial processes in the plane and their scaling limits, and a more thorough study of Uniform Spanning Forests and ℓ^2-Betti numbers; I am planning to improve the notes regarding these issues soon. Besides research papers I like, my primary sources were [DrK09], [dlHar00] for geometric group theory and [LyPer10], [Per04], [Woe00] for probability. I did not use more of [HooLW06], [Lub94], [Wil09] only because of the time constraints. There are proofs or even sections that follow rather closely one of these books, but there are always differences in the details, and the devil might be in those. Also, since I was a graduate student of Yuval Peres not too long ago, several parts of these notes are strongly influenced by his lectures. In particular, Chapter 9 contains paragraphs that are almost just copied from some unpublished notes of his that I was once editing. There is one more recent book, [Gri10], whose first few chapters have considerable overlap with the more introductory parts of these notes, although I did not look at that book before having finished most of these notes. Anyway, the group theoretical point of view is missing from that book entirely. With all these books available, what is the point in writing these notes? An obvious reason is that it is rather uncomfortable for the students to go to several different books and start reading them somewhere from their middle. Moreover, these books are usually for a bit more specialized

audience, so either nilpotent groups or martingales are not explained carefully. So, I wanted to add my favourite explanations and examples to everything, and include proofs I have not seen elsewhere in the literature. And there was a very important goal I had: presenting the material in constant conversation between the probabilistic and geometric group theoretical ideas. I hope this will help not only students, but also researchers from either field get interested and enter the other territory.

There are presently over 150 exercises, in several categories of difficulty: the ones without any stars should be doable by everyone who follows the notes, though they are often not quite trivial; * means it is a challenge for the reader; ** means that I think I would be able to do it, but it would be a challenge for me; *** means it is an open problem. Part of the grading scheme was to submit exercise solutions worth 8 points, where each exercise was worth 2^{# of stars} points. There are also conjectures and questions in the notes; the difference compared to the *** exercises is that, according to my knowledge or feeling, the *** exercises have not been worked on yet thoroughly enough, so I want to encourage the reader to try and attack them. Of course, this does not necessarily mean that all conjectures are hard, neither that any of the *** exercises are doable. But I would personally suggest starting with the *** exercises, e.g., for a PhD topic.

Besides my students and the books mentioned above, I am grateful to Alex Bloemendal, Damien Gaboriau, Gady Kozma, Russ Lyons, Péter Mester, Yuval Peres, Mark Sapir, Andreas Thom, Ádám Timár, Todor Tsankov and Bálint Virág for conversations and comments.

1 Basic examples of random walks

1.1 Z^d and T_d, recurrence and transience, Green's function and spectral radius

In simple random walk on a connected, bounded degree infinite graph G = (V, E), we take a starting vertex, and then each step in the walk is taken to one of the vertices directly adjacent to the one currently occupied, uniformly at random, independently from all previous steps. Denote the positions in this walk by the sequence X_0, X_1, X_2, ..., each X_i ∈ V(G).

Definition 1.1. A random walk on a graph is called recurrent if the starting vertex is visited infinitely often with probability one. That is:

$$ P_o[\, X_n = o \text{ infinitely often} \,] = P[\, X_n = o \text{ infinitely often} \mid X_0 = o \,] = 1. $$

Otherwise, the walk is called transient.

The result that started the area of random walks on groups is the following:

Theorem 1.1 (Pólya 1920). Simple random walk on Z^d is recurrent for d = 1, 2 and transient for d ≥ 3. In fact, the so-called "on-diagonal heat-kernel decay" is

$$ p_n(o, o) = P_o[\, X_n = o \,] \asymp C_d\, n^{-d/2}. $$

For the proof, we will need the following two lemmas.

Lemma 1.2. Given a 1-dimensional simple random walk Y_0, Y_1, ..., for n even,

$$ P[\, Y_n = 0 \,] = \binom{n}{n/2} \frac{1}{2^n} \asymp \frac{1}{\sqrt{n}}. $$

Lemma 1.3. The following estimate holds for the walk on the d-dimensional lattice:

$$ P\Big[\, \#\text{ of steps among the first } n \text{ that are in the } i\text{th coordinate} \notin \big[\tfrac{n}{2d}, \tfrac{3n}{2d}\big] \,\Big] < C \exp(-c_d\, n). $$

The proof of the first one is trivial from Stirling's formula; the proof of the second one will be discussed later.

Proof of Theorem 1.1. Let o = 0 ∈ Z^d denote the origin, write X_n = (X_n^1, ..., X_n^d) in coordinates, and let n = (n_1, ..., n_d) be a multi-index with |n| := n_1 + ... + n_d. For i = 1, ..., d, let S^i = S^i(X_1, ..., X_n) be the number of steps taken in the ith coordinate, and let Y_0^i, Y_1^i, ... be the components of the sequence X_0, X_1, ..., X_n, with the null moves deleted: if in the random walk we move from one node to one of the two adjacent nodes in the ith coordinate, then Y^i changes by one correspondingly, and n_i increases by one. Then

$$ P_o[\, X_n = o \,] = P_o[\, X_n^i = 0 \ \forall i \,] = \sum_{|\mathbf{n}|=n} P_o\big[\, Y_{n_i}^i = 0 \ \forall i \,\big|\, S^i = n_i \ \forall i \,\big]\; P[\, S^i = n_i \ \forall i \,], $$

which we got by using the Law of Total Probability and Bayes' Theorem.

Using the independence of the steps taken in the different coordinates and Lemma 1.2, the last formula becomes

$$ P_o[\, X_n = o \,] \asymp \sum_{|\mathbf{n}|=n} (n_1 \cdots n_d)^{-1/2}\, P[\, S^i = n_i \ \forall i \,] \asymp \sum_{\substack{|\mathbf{n}|=n \\ \exists n_i \notin [\frac{n}{2d}, \frac{3n}{2d}]}} \epsilon(\mathbf{n})\, P[\, S^i = n_i \ \forall i \,] \;+\; \sum_{\substack{|\mathbf{n}|=n \\ \forall n_i \in [\frac{n}{2d}, \frac{3n}{2d}]}} C_d\, n^{-d/2}\, P[\, S^i = n_i \ \forall i \,], \quad (1.1) $$

where ε(n) ∈ [0, 1]. Now, by Lemma 1.3,

$$ \sum_{\substack{|\mathbf{n}|=n \\ \exists n_i \notin [\frac{n}{2d}, \frac{3n}{2d}]}} P[\, S^i = n_i \ \forall i \,] \le C \cdot d \cdot \exp(-c_d\, n), $$

hence the first term in (1.1) is exponentially small, while the second term is polynomially large. So we get

$$ p_n(o, o) = P_o[\, X_n = o \,] \asymp C_d\, n^{-d/2}, \quad (1.2) $$

as claimed.

Now notice that

$$ E_o[\, \#\text{ of visits to } o \,] = \sum_{n=0}^{\infty} p_n(o, o). $$

Thus, from (1.2) we get for d ≥ 3 that E_o[# of visits to o] < ∞, hence P_o[# of visits to o is ∞] = 0, so the random walk is transient. Also, for d = 1, 2 we get that E_o[# of visits to o] = ∞. However, this is not yet quite enough to claim that the random walk is recurrent. Here is one way to finish the proof; it may be somewhat too fancy, but the techniques introduced will also be used later.

Definition 1.2. For simple random walk on a graph G = (V, E), let Green's function be defined as

$$ G(x, y|z) = \sum_{n=0}^{\infty} p_n(x, y)\, z^n, \qquad x, y \in V(G),\ z \in \mathbb{C}. $$

In particular, G(x, y|1) = E_x[# of visits to y]. Let us also define

$$ U(x, y|z) = \sum_{n=1}^{\infty} P_x[\, \tau^y = n \,]\, z^n, $$

where τ^y is the first positive time hitting y, so that P_x[τ^x = 0] = 0.

For n ≥ 1, we have

$$ p_n(x, x) = \sum_{k=1}^{n} P_x[\, \tau^x = k \,]\, p_{n-k}(x, x), $$

from which we get

$$ \sum_{n=1}^{\infty} p_n(x, x)\, z^n = \sum_{n=1}^{\infty} \sum_{k=1}^{n} P_x[\, \tau^x = k \,]\, z^k\, p_{n-k}(x, x)\, z^{n-k}, $$

i.e., G(x, x|z) − 1 = U(x, x|z) G(x, x|z). Therefore,

$$ G(x, x|z) = \frac{1}{1 - U(x, x|z)}. \quad (1.3) $$

Thus (1.3) says that G(x, x|1) = ∞ if and only if U(x, x|1) = 1, which means that P_x[τ^x < ∞] = 1, which clearly implies recurrence: if we come back once almost surely, then we will come back again and again. (To make this rigorous, we need the so-called Strong Markov property for simple random walk.) As we proved above, for d = 1, 2 we have E_o[# of visits to o] = G(o, o|1) = ∞, hence the walk is recurrent, finishing the proof of Theorem 1.1. □

Instead of using Green's function, here is a simpler proof of recurrence (though the real math content is probably the same). Assume P_x[τ^x < ∞] = q < 1. Then, by the Strong Markov property, the number of returns to x is a geometric random variable: P[# of returns = k] = q^k (1 − q) for k = 0, 1, 2, ..., and hence

$$ E[\, \#\text{ of returns} \,] = q/(1 - q) < \infty, $$

contradicting the infinite expectation we had for d = 1, 2.

Once we have encoded our sequence p_n(x, y) into the power series G(x, y|z), it is probably worth looking at its radius of convergence, denoted by rad(x, y). By the Cauchy-Hadamard criterion,

$$ \mathrm{rad}(x, y) = \frac{1}{\limsup_{n\to\infty} p_n(x, y)^{1/n}}. $$

⊲ Exercise 1.1. Prove that for simple random walk on a connected graph, rad(x, y) is independent of x, y. I.e., for real z > 0, G(x, y|z) < ∞ ⇔ G(v, w|z) < ∞.

By this exercise, we can define

$$ \rho := \limsup_{n\to\infty} p_n(x, y)^{1/n} = \frac{1}{\mathrm{rad}(x, y)}, \quad (1.4) $$

which is independent of x, y, and is called the spectral radius of the simple random walk on the graph. We will see later in the course the reason for this name.

A useful classical theorem is the following:

Theorem 1.4 (Pringsheim's theorem). If f(z) = Σ_n a_n z^n with a_n ≥ 0, then the radius rad(f) of convergence is the smallest positive singularity of f(z).

Now, an obvious fundamental question is when ρ is smaller than 1, i.e., on what graphs are the return probabilities exponentially small. We have seen that they are polynomial on Z^d. The walk has a very different behaviour on regular trees. The simplest difference is the rate of escape: in Z, or any Z^d, E[dist(X_n, X_0)] ≍ √n, a homework from the Central Limit Theorem, while, as we will see now, on the k-regular tree the escape is linear.
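Though not part of the original notes, Pólya's recurrence/transience dichotomy is easy to observe in a quick Monte Carlo experiment; the following sketch (the function name is mine, purely for illustration) counts returns to the origin of simple random walk on Z^d:

```python
import random

def count_returns(d, steps, trials, seed=0):
    """Average number of returns to the origin of SRW on Z^d within `steps` steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = [0] * d
        returns = 0
        for _ in range(steps):
            pos[rng.randrange(d)] += rng.choice((-1, 1))  # one SRW step on Z^d
            if not any(pos):                              # back at the origin
                returns += 1
        total += returns
    return total / trials

for d in (1, 2, 3):
    print(d, count_returns(d, steps=1000, trials=100))
```

For d = 1 the average grows like √n, for d = 2 like log n, while for d = 3 it stays bounded (around E[# of returns] = q/(1 − q) < ∞), matching the dichotomy in Theorem 1.1.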

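The renewal identity G(x, x|z) − 1 = U(x, x|z) G(x, x|z) behind (1.3) can also be sanity-checked numerically for SRW on Z. The following small computation (an illustration, not from the notes) recovers the first-return probabilities from the return probabilities and evaluates both truncated power series:

```python
from math import comb

N = 40  # truncation degree for the power series

# p_n(0,0) for SRW on Z: nonzero only for even n
p = [comb(n, n // 2) / 2 ** n if n % 2 == 0 else 0.0 for n in range(N + 1)]

# first-return probabilities u_n = P_0[tau = n], solved from the renewal equation
# p_n = sum_{k=1}^{n} u_k p_{n-k}  for n >= 1
u = [0.0] * (N + 1)
for n in range(1, N + 1):
    u[n] = p[n] - sum(u[k] * p[n - k] for k in range(1, n))

# check G(z) - 1 = U(z) G(z) at a point inside the radius of convergence
z = 0.5
G = sum(p[n] * z ** n for n in range(N + 1))
U = sum(u[n] * z ** n for n in range(N + 1))
print(abs((G - 1) - U * G))  # only a tiny truncation error remains
```

One recognizes the classical values u_2 = 1/2 and u_4 = 1/8 among the computed first-return probabilities.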
This big difference will also be visible in the return probabilities.

Theorem 1.5. The spectral radius of the k-regular tree T_k, k ≥ 3, is

$$ \rho(T_k) = \frac{2\sqrt{k-1}}{k}. $$

Proof. We first give a proof using generating functions, then a completely probabilistic one. By the symmetries of T_k, we may consider U(z) = U(x, y|z) for any pair of neighbours x and y (which we will often write as x ∼ y); note that

$$ U(z) = P_x[\, \text{ever reaching } y \text{ if we die at each step with probability } 1 - z \,]. $$

By taking a step on T_k, we see that either we hit y, with probability 1/k, or we move to another neighbour of x, with probability (k−1)/k, in which case, in order to hit y, we have to first return to x and then hit y. Altogether,

$$ U(z) = \frac{1}{k}\, z + \frac{k-1}{k}\, z\, U(z)^2, $$

which gives

$$ U(z) = \frac{k \pm \sqrt{k^2 - 4(k-1)z^2}}{2(k-1)z}. $$

Note that if 0 ≤ z ≤ 1 then U(z) ∈ [0, 1], with U(0) = 0, so we must take the root with the minus sign. Moreover, U(x, x|z) = z U(z), since to return to x we take one step to some neighbour y and then have to hit x from there. With the generating function of Definition 1.2,

$$ G(x, x|z) = \frac{1}{1 - U(x, x|z)} = \frac{2(k-1)}{k - 2 + \sqrt{k^2 - 4(k-1)z^2}}. \quad (1.5) $$

The smallest positive singularity is z = k / (2√(k−1)), where the square root vanishes, so Pringsheim's Theorem 1.4 gives ρ(T_k) = 2√(k−1)/k.

For the probabilistic approach, let D_n = dist(X_n, X_0). Then

$$ P[\, D_{n+1} = D_n + 1 \mid D_n \ne 0 \,] = \frac{k-1}{k} \quad \text{and} \quad P[\, D_{n+1} = D_n - 1 \mid D_n \ne 0 \,] = \frac{1}{k}, $$

hence E[D_{n+1} − D_n | D_n ≠ 0] = (k−2)/k. On the other hand, E[D_{n+1} − D_n | D_n = 0] = 1. Altogether, E[D_n] ≥ ((k−2)/k) n, so the random walk escapes much faster on T_k than on any Z^d. This gives the linear speed; the exercises below upgrade this approach to a complete probabilistic proof of Theorem 1.5. □

⊲ Exercise 1.2. Compute ρ(T_{k,ℓ}), where T_{k,ℓ} is a tree such that if v_n ∈ T_{k,ℓ} is a vertex at distance n from the root, then deg v_n = k for n even and deg v_n = ℓ for n odd.

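As a numerical check of Theorem 1.5 (again just an illustration, not part of the notes), one can compute p_n(x, x) on T_k exactly via the distance-from-start chain used in the probabilistic argument, and compare p_n(x, x)^{1/n} with ρ(T_k). The convergence is slow because of the polynomial correction factor n^{−3/2} appearing in Exercise 1.5:

```python
import math

def return_prob_tree(k, n):
    """p_n(x, x) for SRW on the k-regular tree, computed exactly via the
    distance-from-start chain: from distance d >= 1 the walk moves to d+1
    with probability (k-1)/k and to d-1 with probability 1/k."""
    dist = [0.0] * (n + 2)   # dist[d] = P[walk is at distance d]
    dist[0] = 1.0
    for _ in range(n):
        new = [0.0] * (n + 2)
        for d, p in enumerate(dist):
            if p == 0.0:
                continue
            if d == 0:
                new[1] += p                    # from the start, every step moves away
            else:
                new[d + 1] += p * (k - 1) / k  # away from the start
                new[d - 1] += p / k            # back towards the start
        dist = new
    return dist[0]

k = 4
rho = 2 * math.sqrt(k - 1) / k  # the value from Theorem 1.5
print(return_prob_tree(k, 200) ** (1 / 200), rho)  # converges (slowly) to rho
```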
The next three exercises provide a probabilistic proof of Theorem 1.5. This probabilistic strategy might be known to a lot of people, but I do not know any reference.

⊲ Exercise 1.3. Show that for biased SRW on Z, with P[X_{n+1} = j + 1 | X_n = j] = p,

$$ P_0[\, X_i > 0 \text{ for } 0 < i < n \mid X_n = 0 \,] \asymp \frac{1}{n}, $$

with constants independent of p ∈ (0, 1). (Hint: first show, in a more precise form, using the reflection principle [Dur96, Section 3.3], that for symmetric simple random walk, P_0[X_i > 0 for all 0 < i ≤ 2n] ≍ P_0[X_{2n} = 0].)

⊲ Exercise 1.4.* Show that for biased SRW on Z there is a subexponentially growing function g(m) = exp(o(m)) such that

$$ P_0\big[\, \#\{i : X_i = 0 \text{ for } 0 < i < n\} = m \mid X_n = 0 \,\big] \le g(m)\, \frac{1}{n}, $$

with constants independent of p ∈ (0, 1). (Hint: count all possible m-element zero sets, together with a good bound on the occurrence of each.)

⊲ Exercise 1.5. Note that for SRW on the k-regular tree T_k, the distance process D_n = dist(X_0, X_n) is a biased SRW on Z reflected at 0, with p = (k−1)/k.
(a) Using this and Exercise 1.3, prove that the return probabilities on T_k satisfy c_1 n^{−3/2} ρ^n ≤ p_n(x, x) ≤ c_2 n^{−1/2} ρ^n, where ρ = ρ(T_k) is given by Theorem 1.5, for some constants c_i depending only on k.
(b) Using Exercise 1.4, improve this to p_n(x, x) ≍ n^{−3/2} ρ^n, with constants depending only on k.

(But note that the correction factor n^{−3/2} of Exercise 1.5 can also be obtained by analyzing the singularities of the generating functions.)

1.2 A probability proposition: Azuma-Hoeffding

We discuss now a result needed for the random walk estimates in the previous section.

Proposition 1.6 (Azuma-Hoeffding). Let X_1, X_2, ... be random variables satisfying the following criteria:
• E[X_i] = 0 ∀i;
• ||X_i||_∞ < ∞ ∀i;
• more generally, E[X_{i_1} · · · X_{i_k}] = 0 for any k ∈ Z_+ and i_1 < i_2 < · · · < i_k.
Then, for any L > 0,

$$ P[\, X_1 + \dots + X_n > L \,] \le e^{-L^2 / (2 \sum_{i=1}^n \|X_i\|_\infty^2)}. $$

Proof. Choose any t > 0. We have

$$ P[\, S_n > L \,] \le P[\, e^{tS_n} > e^{tL} \,] \le e^{-tL}\, E[\, e^{tS_n} \,] $$

by Markov's inequality. By the convexity of e^x, for |x| ≤ a we have e^{tx} ≤ cosh(at) + (x/a) sinh(at). If we set a_i := ||X_i||_∞, then, expanding the product of these bounds over i = 1, ..., n and using that all the mixed moments E[X_{i_1} · · · X_{i_k}] vanish, we get

$$ E[\, e^{tS_n} \,] \le \prod_{i=1}^{n} \cosh(a_i t). $$

Since

$$ \cosh(a_i t) = \sum_{k=0}^{\infty} \frac{(a_i t)^{2k}}{(2k)!} \le \sum_{k=0}^{\infty} \frac{(a_i t)^{2k}}{2^k\, k!} = e^{a_i^2 t^2 / 2}, $$

we see that

$$ P[\, S_n > L \,] \le e^{-tL}\, e^{t^2 \sum_{i=1}^{n} a_i^2 / 2}. $$

We optimize the bound by setting t = L / Σ_{i=1}^n a_i^2, proving the proposition. □

The main point of Proposition 1.6 is its generality: the uncorrelatedness conditions for the X_i's, besides i.i.d. sequences, are also fulfilled by martingale differences X_i = M_{i+1} − M_i, where {M_i}_{i=0}^∞ is a martingale sequence, not even necessarily bounded: instead of boundedness, it is enough that the moment generating function E[e^{tX}] is finite for some t > 0. See Section 6.3 for the definition and examples.

For an i.i.d. sequence, e.g., X_i ∼ Ber(p), this exponential bound is among the most basic tools in discrete probability, usually called Chernoff's inequality. See, e.g., [AloS00, Appendix], [Dur96, Section 1.9], [Bil86] for these results. In fact, for general i.i.d. sequences, not even necessarily bounded, with E[X_i] = µ, it is enough that E[e^{tX}] is finite for some t > 0 in order to ensure γ(α) > 0, and the limit

$$ \lim_{n\to\infty} \frac{\log P[\, S_n > \alpha n \,]}{n} = -\gamma(\alpha) $$

always exists for any α > µ. The function γ(α) is called the large deviation rate function (associated with the distribution of X_i); it can be computed in terms of the moment generating function. For X_i ∼ Ber(p) it is

$$ \gamma_p(\alpha) = \alpha \log \frac{\alpha}{p} + (1 - \alpha) \log \frac{1-\alpha}{1-p}, \quad (1.6) $$

which immediately implies Lemma 1.3. Moreover, applying Proposition 1.6 to the centered variables X_i − µ, with γ := ||X_i − µ||_∞, for any α > µ we get

$$ P[\, S_n > \alpha n \,] \le \exp\Big( -\frac{(\alpha - \mu)^2}{2\gamma^2}\, n \Big); $$

for Lemma 1.3 we have γ ≤ 1 and µ = 1/d.

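To see what the Azuma-Hoeffding bound looks like in practice, here is a small simulation (illustrative only, not from the notes) comparing the empirical tail of a sum of independent ±1 variables, where all a_i = 1, with the bound of Proposition 1.6:

```python
import math
import random

def azuma_bound(L, n):
    """The Azuma-Hoeffding bound exp(-L^2 / (2n)) for n summands with ||X_i||_inf = 1."""
    return math.exp(-L * L / (2 * n))

def empirical_tail(L, n, trials=20000, seed=1):
    """Empirical estimate of P[X_1 + ... + X_n > L] for i.i.d. uniform +-1 steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if s > L:
            hits += 1
    return hits / trials

n, L = 100, 30
print(empirical_tail(L, n), "<=", azuma_bound(L, n))
```

For centered ±1 steps the true tail is of order exp(−L²/(2n)) up to polynomial factors (the sum is approximately Gaussian with variance n), so the bound captures the right exponential scale.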
2 Free groups and presentations

2.1 Introduction

Definition 2.1 (F_k, the free group on k generators). Let a_1, a_2, ..., a_k, a_1^{-1}, a_2^{-1}, ..., a_k^{-1} be symbols. The elements of the free group generated by {a_1, a_2, ..., a_k} are the finite reduced words: remove any occurrence of a_i a_i^{-1} or a_i^{-1} a_i. Every word has a unique reduced word (proof by induction), and group multiplication is concatenation (then reduction, if needed).

Proposition 2.2. If S is a set, F_S is the free group generated by S, and Γ is any group, then for any map f : S −→ Γ there is a group homomorphism f̂ : F_S −→ Γ extending f.

Proof. Define f̂(s_1^{i_1} · · · s_k^{i_k}) = f(s_1)^{i_1} · · · f(s_k)^{i_k}, then check that this is a homomorphism. □

Corollary 2.3. Every group is a quotient of a free group.

Proof. Lazy solution: take S = Γ. Then, by the proposition, there is an onto map f̂ : F_S −→ Γ, hence F_S / ker(f̂) ≃ Γ. A less lazy solution is to take a generating set S, Γ = ⟨S⟩. □

Definition 2.2. Given a generating set S and a set of relations R of elements of S, a presentation of Γ is given by S mod the relations in R: Γ ≃ F_S / ⟨⟨R⟩⟩, where R ⊂ F_S and ⟨⟨R⟩⟩ ⊳ F_S is the smallest normal subgroup containing R. This is written Γ = ⟨S | R⟩. A group is called finitely presented if both R and S can be chosen to be finite sets.

Definition 2.3. Let Γ be a finitely generated group, Γ = ⟨S⟩. Then the right Cayley graph G(Γ, S) is the graph with vertex set V(G) = Γ and edge set E(G) = {(g, gs) : g ∈ Γ, s ∈ S}. These graphs are often considered to have directed edges, labeled with the generators, and then they are sometimes called Cayley diagrams. However, if S is symmetric (∀s ∈ S, s^{-1} ∈ S), then G is naturally undirected and |S|-regular (even if S has order 2 elements). The left Cayley graph is defined using left multiplications by the generators.

Γ acts on the right Cayley graph G by multiplication from the left, as follows: for h ∈ Γ, an element g ∈ V(G) maps to hg, and an edge (g, gs) ∈ E(G) maps to (hg, hgs). This shows that every Cayley graph is transitive.

Example: The Z^d lattice is the Cayley graph of Z^d with the 2d generators {e_i, −e_i}_{i=1}^{d}.

Example: The Cayley graph of F_2 with generators {a, b, a^{-1}, b^{-1}} is a 4-regular tree, see Figure 2.1.

Example: Consider the group Γ = ⟨a_1, ..., a_d | [a_i, a_j] ∀i, j⟩, where [a_i, a_j] = a_i a_j a_i^{-1} a_j^{-1}. We wish to show that this is isomorphic to the group Z^d. It is clear that Γ is commutative: if we have a_i a_j in a word,
we can insert the commutator [a_i, a_j] to reverse the order, so every word in Γ can be written in the form

$$ v = a_1^{n_1} a_2^{n_2} \cdots a_d^{n_d}. $$

Define φ : Γ −→ Z^d by φ(v) = (n_1, n_2, ..., n_d), and check that φ is a homomorphism. It is now obvious that φ is an isomorphism.

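The reduced-word arithmetic of Definition 2.1 is easy to implement; the following sketch (the naming conventions are mine, not the notes') represents a generator by a lowercase letter and its inverse by the corresponding uppercase letter:

```python
def reduce_word(word):
    """Freely reduce a word: generators are lowercase letters, inverses the
    corresponding uppercase letters. A stack cancels adjacent inverse pairs."""
    stack = []
    for ch in word:
        if stack and stack[-1] == ch.swapcase():
            stack.pop()        # 'aA', 'Aa', 'bB', ... cancel
        else:
            stack.append(ch)
    return "".join(stack)

def multiply(u, v):
    """Multiplication in the free group: concatenation followed by reduction."""
    return reduce_word(u + v)

print(reduce_word("abBAa"))  # -> "a"
print(multiply("ab", "BA"))  # -> "" (the empty word, i.e. the identity)
```

The single left-to-right pass with a stack also shows constructively that the reduced form is reached no matter in which order cancellations are performed.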
Figure 2.1: The Cayley graph of F_2.

We have seen two effects of T_k being much more "spacious" than Z^d on the behaviour of simple random walk: the escape speed is much larger, and the return probabilities are much smaller. It looks intuitively clear that Z and T_k should be the extremes, and that there should be a large variety of possible behaviours in between. It is indeed relatively easy to show that T_k is one extreme among 2k-regular Cayley graphs: see Subsections 2.4 and 3.1, and the discussion around Proposition 12. But even on groups with a Z subgroup, there does not seem to be an easy proof from the other end; it is only a recent theorem that the expected rate of escape is at least E[dist(X_0, X_n)] ≥ c√n on any group: see Section 10.2. One reason for this not being obvious is that not every infinite group contains Z as a subgroup: there are finitely generated infinite groups with a finite exponent n: the nth power of any element is the identity. Groups with such strange properties are called Tarski monsters.

We will also see that constructing groups with intermediate behaviours is not always easy. One reason for this is that the only general way that we have seen so far to construct groups is via presentations, but there are famous undecidability results here: there is no general algorithm to decide whether a word can be reduced in a given presentation to the empty word, and there is no general algorithm to decide if a group given by a presentation is isomorphic to another group, even to the trivial group. So, we will need other means of constructing groups. Finitely presentedness has important consequences for the geometry of the Cayley graphs. Classical examples of finitely generated non-finitely presented groups are the lamplighter groups Z_2 ≀ Z^d, which will be defined in Section 5.1 and studied in Chapter 9 from a random walk point of view; the non-finitely presentedness of Z_2 ≀ Z is proved in Exercise 12.

2.2 Digression to topology: the fundamental group and covering spaces

Several results on free groups and presentations become much simpler in a topological language. The present section discusses the necessary background. We will need the concept of a CW-complex. The simplest way to define an n-dimensional CW-complex is to do it recursively:
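The two Cayley graph examples above, Z^d and the 4-regular tree of F_2, already have very different volume growth, polynomial versus exponential; as a quick numerical illustration (not from the notes, and with hypothetical helper names), compare ball sizes in the two graphs:

```python
def ball_size_Z2(r):
    """|B(r)| in the Cayley graph of Z^2 with generators (+-1,0),(0,+-1):
    the l1-ball, of size 2r^2 + 2r + 1."""
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1) if abs(x) + abs(y) <= r)

def ball_size_F2(r):
    """|B(r)| in the 4-regular tree (Cayley graph of F_2): spheres have sizes
    1, 4, 4*3, 4*3^2, ..., so |B(r)| = 1 + 2(3^r - 1)."""
    return 1 + sum(4 * 3 ** (i - 1) for i in range(1, r + 1))

for r in (1, 2, 5, 10):
    print(r, ball_size_Z2(r), ball_size_F2(r))
```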
and there is no general algorithm to decide if a group given by a presentation is isomorphic to another group.topology} Several results on free groups and presentations become much simpler in a topological language. We have seen two eﬀects of Tk being much more “spacious” than Zd on the behaviour of simple random walk: the escape speed is much larger.

• A 0-complex is a countable (finite or infinite) union of points, with the discrete topology.
• To get an n-complex, we can glue n-cells to an (n−1)-complex: we add homeomorphic images of the n-balls such that each boundary is mapped continuously onto a union of (n−1)-cells.

We will always assume that our topological spaces are connected CW-complexes.

Consider a space X and a fixed point x ∈ X, and consider two loops α : [0, 1] −→ X and β : [0, 1] −→ X starting at x, i.e., α(0) = α(1) = x = β(0) = β(1). We say that α and β are homotopic, denoted by α ∼ β, if there is a continuous function f : [0, 1] × [0, 1] −→ X satisfying f(t, 0) = α(t), f(t, 1) = β(t), and f(0, s) = f(1, s) = x ∀s ∈ [0, 1].

We are ready to define the fundamental group of X. Let π_1(X, x) be the set of equivalence classes of paths starting and ending at x. The group operation on π_1 is induced by concatenation of paths:

$$ \alpha\beta(t) = \begin{cases} \alpha(2t) & t \in [0, \tfrac{1}{2}] \\ \beta(2t - 1) & t \in [\tfrac{1}{2}, 1]. \end{cases} $$

While it seems from the definition that the fundamental group would depend on the point x, this is not true: to find an isomorphism between π_1(X, x) and π_1(X, y), map any loop γ starting at x to a path from y to x, concatenated with γ, and then the same path from x back to y.

⊲ Exercise 2.1. If Γ is a topological group then π_1(Γ) is commutative. (Recall that a group Γ is a topological group if it is also a topological space such that the functions Γ × Γ −→ Γ : (x, y) → xy and Γ −→ Γ : x → x^{-1} are continuous.)

The spaces X and Y are homotopy equivalent, denoted by X ∼ Y, if there exist continuous functions f : X −→ Y and g : Y −→ X such that f ◦ g ∼ id_Y and g ◦ f ∼ id_X.

A basic result (with a simple proof that we omit) is the following:

Theorem 2.4. If X ∼ Y then π_1(X) ≅ π_1(Y).

Consider the CW-complex with a single point and k loops from this point, a rose with k petals, and denote it by Rose_k.

Theorem 2.5. π_1(Rose_k) = F_k.

The proof of this theorem uses the Seifert-van Kampen Theorem. We do not discuss this here, but we believe the statement is intuitively obvious.

Corollary 2.6. The fundamental group of any (connected) graph is free.

Proof. For any finite connected graph with n vertices and l edges, consider a spanning tree T. (T has n − 1 edges.) Contract T to a point x; contraction of a spanning tree to a point is a homotopy equivalence. There are k = l − n + 1 edges left over, each of which begins and ends at x after the contraction, so the graph is homotopy equivalent to Rose_k. Hence the fundamental group is free, by Theorems 2.4 and 2.5. □
x) and π1 (X. 0) = α(t). Hence. 1] × [0. 1] 2 .fundRose} A basic result (with a simple proof that we omit) is the following: 14 . we can glue n-cells to an n − 1-complex. 1] −→ X satisfying f (t.fundgraph} β (2t − 1) t ∈ [ 1 . Contraction of a spanning tree to a point is a homotopy equivalence. The group operation on π1 is induced by concatenation of paths: αβ (t) = α(2t) t ∈ [0. Consider the CW-complex with a single point and k loops from this point. Denote this CW-complex by Rosek . We say that α and β are homotopic. 1) = β (t). The spaces X and Y are homotopy equivalent. a rose with k petals.

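The count k = l − n + 1 from the proof above is easy to compute for any finite connected graph; here is a minimal sketch (the helper name is hypothetical, not from the notes) that singles out the non-tree edges with union-find:

```python
def free_rank_of_pi1(num_vertices, edges):
    """Rank of the (free) fundamental group of a finite connected graph:
    k = l - n + 1, i.e. the number of edges left over after removing a
    spanning tree. `edges` is a list of (u, v) pairs; loops and multiple
    edges are allowed."""
    parent = list(range(num_vertices))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    rank = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            rank += 1          # edge closes a cycle: one more petal of the rose
        else:
            parent[ru] = rv    # spanning tree edge
    return rank

# Rose with two petals: one vertex, two loops -> F_2
print(free_rank_of_pi1(1, [(0, 0), (0, 0)]))                # -> 2
# A 4-cycle: n = 4, l = 4 -> rank 1
print(free_rank_of_pi1(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1
```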
We now introduce another basic notion of geometry:

Definition 2.7. Let X′ and X be topological spaces, with a surjective map p : X′ ։ X. We say that X′ ։ X is a covering map if for every x ∈ X, there is an open neighbourhood U ∋ x such that each connected component of p^{-1}(U) is homeomorphic to U by p.

Theorem 2.8 (Monodromy Theorem). Let p be the surjective map defined by a covering X′ ։ X, and let γ ⊂ X be any path starting at x. Then for every x∗ ∈ p^{-1}(x) there is a unique lift γ∗ starting at x∗, i.e., a path with p(γ∗) = γ. Moreover, if γ and δ are homotopic with fixed endpoints x and y, then the lifts γ∗ and δ∗ starting from the same x∗ have the same endpoints, and γ∗ ∼ δ∗.

Sketch of proof. Because of the local homeomorphisms, there is always a unique way to continue the lifted path, and the homotopies can also be lifted through the local homeomorphisms. □

The following results can be easily proved using Theorem 2.8:

Lemma 2.9. Let x ∈ X and U ∋ x a neighbourhood as in Definition 2.7. Then the number k of connected components of p^{-1}(U) is independent of x and U, and the covering is called k-fold. (In fact, the lifts of any path γ between x, y ∈ X give a pairing between the preimages of x and y.)

Lemma 2.10. Any covering space of a graph is a graph.

We say that X̃ is a universal cover of X if:
• X̃ is a cover of X;
• X̃ is connected;
• X̃ is simply connected, i.e., π_1(X̃) = 1.

The existence of a universal cover is guaranteed by the following theorem:

Theorem 2.11. Every connected CW complex X has a universal cover X̃.

Sketch of proof. Let the set of points of X̃ be the fixed endpoint homotopy classes of paths starting from a fixed x ∈ X. The topology on X̃ is defined by thinking of a class of paths [γ] to be close to [δ] if there are representatives γ, δ such that δ is just γ concatenated with a short piece of path. □

⊲ Exercise 2.2. Write down the above definition of the topology on X̃ properly, and the proof that, with this topology, π_1(X̃) = 1.

The fundamental group π_1(X) acts on X̃ with continuous maps, as follows. Let γ ∈ π_1(X, x) be a loop, and x∗ ∈ p^{-1}(x) a point above x, where p is the covering map given by Theorem 2.11. By Theorem 2.8, there exists a unique lift γ∗ (whose endpoint does not depend on the representative chosen for γ) with starting point x∗ and some ending point y∗, which also belongs to p^{-1}(x). The action f_γ on x∗ is now defined by f_γ(x∗) = y∗.
Let the set of points of X be the ﬁxed endpoint homotopy classes of paths starting from a ﬁxed x ∈ X .univcover} Theorem 2.8 (Monodromy Theorem).covering} Proposition 2. (In {t. • X is simply connected.4. Write down the above deﬁnition of the topology on X properly and the proof that. {t. and x∗ ∈ p−1 (x) a .k-fold} Lemma 2. π1 (X ) = 1. The homotopies can be lifted through the local homeomorphisms. there is always a unique way to continue the lifted path. Every connected CW complex X has a universal cover X .8: {l.2.) Let X be a topological space. The following results can be easily proved using Proposition 2..
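To make the universal cover of Rosek concrete, here is a small computational sketch (with ad hoc helper names, not from the notes) that builds a ball in the cover by tracking reduced words in the k loop-labels and their inverses; the resulting graph is the 2k-regular tree discussed in the next subsection.

```python
# Sketch: the universal cover of Rose_k, built as reduced words in the
# k loop-labels and their inverses.  Vertices at distance <= n from the
# root form a ball in the 2k-regular tree T_{2k}.

def tree_ball_sizes(k, n):
    """Sizes of the spheres of radius 0..n in T_{2k}."""
    letters = [(i, +1) for i in range(k)] + [(i, -1) for i in range(k)]
    sphere = [()]          # radius 0: the empty word (the root)
    sizes = [1]
    for _ in range(n):
        new = []
        for word in sphere:
            for s in letters:
                # appending the inverse of the last letter would backtrack
                if word and word[-1] == (s[0], -s[1]):
                    continue
                new.append(word + (s,))
        sphere = new
        sizes.append(len(sphere))
    return sizes

# In T_4 (k = 2), sphere sizes are 1, 4, 12, 36, ...: exponential growth.
```

For k = 1 this degenerates to the two-ended line Z, the universal cover of a single loop.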

We need to define the action also on any y ∗ ∈ p^{−1}(y ) for the other points y ∈ X. Take a path δ ∗ in X̃ from y ∗ to x∗ ∈ p^{−1}(x); then δ = p(δ ∗ ) is a path from y to x, and δγδ^{−1} is a path from y to itself. By unique lifting, there is a unique lift of δγδ^{−1} starting from y ∗, and its endpoint does not depend on the choice of δ ∗: all possible choices of δ ∗ are homotopic to each other, hence all resulting δ and all δγδ^{−1} curves are homotopic, and the Monodromy Theorem applies. Again by the Monodromy Theorem, this action does not depend on the representative for γ. Hence we indeed get an action of π1 (X, x) on the entire X̃. This action permutes points inside each fibre, it is clear that fδ ◦ fγ = fγδ for γ, δ ∈ π1 (X, x), and it is easy to see that the action is free (i.e., only the identity has fixed points). If we make the quotient space of X̃ by this group action (i.e., each point of X̃ is mapped to its orbit under the group action), we will obtain X back, and the quotient map is exactly the covering map p : X̃/π1 (X, x) = X.

For each subgroup H ≤ π1 (X ), we can consider XH = X̃/H, by taking the H-orbit containing any given point. This XH is still a covering space of X: there is a natural surjective map from XH to X, and we have π1 (XH ) ≅ H. In fact, X̃ is the universal cover of XH.

Example: The fundamental group of the torus T2 is Z2, and we have R2 /Z2 = T2, where R2 is the usual covering made out of copies of the square [0, 1]^2: the plane is the universal cover of the torus.

2.3 The main results on free groups

As discussed at the end of the previous section, Fk ≅ π1 (Rosek ). Let G be the universal cover of Rosek. Since π1 (G ) = 1 and any covering space of a graph is a graph, G is a tree: it is the 2k-regular tree T2k, and Rosek is covered by T2k.

A probably unsurprising result is the following:

Theorem 2.12. Fk ≅ Fl only if k = l.

⊲ Exercise 2.5. Prove Theorem 2.12. (Hint 1: how many index 2 subgroups are there? Or, hint 2: what is Fk /[Fk , Fk ]?)

The next two results can be proved also with combinatorial arguments, but the topological language of the previous subsection makes things much more transparent.

Theorem 2.13 (Nielsen-Schreier). Every subgroup of a free group is free.

Proof of Theorem 2.13. Let the free group be Fk, and take H ≤ Fk ≅ π1 (Rosek ). Then GH = T2k /H is a covering of Rosek, with π1 (GH ) = H. Since GH is a covering space of a graph, it is a graph, and the fundamental group of a graph is free; hence H is free.

Theorem 2.14 (Schreier’s index formula). If Fk is free and Fl ≤ Fk such that [Fk : Fl ] = r < ∞, then

l − 1 = (k − 1)r .     (2.1)
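The Euler-characteristic bookkeeping behind Schreier's formula can be checked mechanically: an r-fold cover of Rosek has r vertices and kr edges, and the rank of its (free) fundamental group is 1 − χ. A hedged sketch, where the encoding of the cover by one petal-permutation per generator is an assumption of this illustration:

```python
# Sketch: an r-fold cover of Rose_k is determined by k permutations of
# {0, ..., r-1} (how each petal permutes the fibre).  The (connected) cover
# is a graph with r vertices and k*r edges, so pi_1 is free of rank 1 - chi.

def rank_of_cover(perms, r):
    k = len(perms)
    vertices, edges = r, k * r
    chi = vertices - edges
    return 1 - chi        # rank of a free group = 1 - Euler characteristic

# Double cover of Rose_2: petals acting by the swap (0 1) and the identity.
perms = [[1, 0], [0, 1]]
l = rank_of_cover(perms, r=2)
# Schreier's formula: l - 1 = (k - 1) * r, so l = 3 here.
```

The same count for k = 3, r = 3 gives rank 7, again matching l − 1 = (k − 1)r.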

Proof of Theorem 2.14. The Euler characteristic of a graph is the difference between the number of vertices and the number of edges: χ(G) = |V (G)| − |E (G)|. Homotopies of graphs increase or decrease the vertex and the edge sets by the same amount, hence χ is a homotopy invariant of graphs. Since the graph Rosek has one vertex and k edges, χ(Rosek ) = 1 − k. Furthermore, if G′ is an r-fold covering of G, then rχ(G) = χ(G′ ). As the index of H = Fl in Fk is r, the graph T2k /H is an r-fold covering of Rosek, and thus χ(T2k /H ) = rχ(Rosek ) = r(1 − k ). On the other hand, T2k /H is a graph with π1 (T2k /H ) = H = Fl. Since any graph is homotopic to a rose, we get that T2k /H must be homotopic to Rosel, and thus χ(T2k /H ) = χ(Rosel ) = 1 − l. Therefore, we obtain r(1 − k ) = 1 − l, which is exactly (2.1).

In particular, if k ≥ 2 and 2 ≤ r < ∞, then r(1 − k ) = 1 − k is impossible, so a proper subgroup isomorphic to Fk must have infinite index:

⊲ Exercise. Prove that Fk has Fk as a subgroup with infinite index.

⊲ Exercise. * A finitely generated group acts on a tree freely if and only if the group is free. (The action is by graph automorphisms of the tree T, and a free action means that StabΓ (x) = {1} for any x ∈ V (T ) ∪ E (T ).) Hint: separate the cases where there is an element of order 2 in Γ, and where there are no such elements (in which case there is a fixed vertex, as it turns out).

The product ∗ denotes the free product: if Γ1 = ⟨S1 | R1 ⟩ and Γ2 = ⟨S2 | R2 ⟩, then Γ1 ∗ Γ2 = ⟨S1 , S2 | R1 , R2 ⟩. For instance, Fk = Z ∗ Z ∗ · · · ∗ Z, k times.

Lemma 2.15 (Ping-Pong Lemma). Let Γ be a group acting on some set X, let Γ1 , Γ2 be subgroups of Γ with |Γ1 | ≥ 3, and let Γ∗ = ⟨Γ1 , Γ2 ⟩. Assume that there exist non-empty sets X1 , X2 ⊆ X with X2 ⊈ X1 and

γ (X2 ) ⊆ X1 ∀γ ∈ Γ1 \ {1} ,    γ (X1 ) ⊆ X2 ∀γ ∈ Γ2 \ {1} .

Then Γ1 ∗ Γ2 ≅ Γ∗.

⊲ Exercise. Prove the Ping-Pong Lemma.

We will now show that a finitely generated free group is linear: F2 ≤ SL2 (Z), the group of integer 2 × 2 matrices with determinant 1. Let

Γ1 = { (1 2n; 0 1) : n ∈ Z } ,    Γ2 = { (1 0; 2n 1) : n ∈ Z } ,

acting on column vectors (x, y) ∈ R2, with X1 = { (x, y ) : |x| > |y | } and X2 = { (x, y ) : |x| < |y | }.
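The ping-pong setup above suggests that A = (1 2; 0 1) and B = (1 0; 2 1) generate a free group; this can be sanity-checked by brute force, verifying that no short nonempty reduced word in A, B and their inverses equals the identity. A small verification sketch:

```python
# Brute-force evidence that <A, B> <= SL_2(Z) is free: no nonempty reduced
# word of length <= 6 in A, B, A^-1, B^-1 equals the identity matrix.

def mat_mul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A  = [[1, 2], [0, 1]]
Ai = [[1, -2], [0, 1]]
B  = [[1, 0], [2, 1]]
Bi = [[1, 0], [-2, 1]]
gens = {0: A, 1: Ai, 2: B, 3: Bi}
inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}
I = [[1, 0], [0, 1]]

def reduced_words_hit_identity(max_len):
    words = [((g,), gens[g]) for g in gens]
    for _ in range(max_len - 1):
        new = []
        for word, M in words:
            if M == I:
                return True
            for g in gens:
                if g != inverse_of[word[-1]]:      # keep the word reduced
                    new.append((word + (g,), mat_mul(M, gens[g])))
        words = new
    return any(M == I for _, M in words)
```

Of course this is only evidence up to a finite length; the actual proof is the Ping-Pong Lemma.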

Observe that the hypotheses of the Ping-Pong Lemma are fulfilled: every non-identity element of Γ1 maps X2 into X1, and every non-identity element of Γ2 maps X1 into X2. Therefore, the matrices

(1 2; 0 1) and (1 0; 2 1)

generate a free subgroup of SL2 (Z), and they generate it freely.

The groups Zn, SLn (Z) and GLn (Z) have the property that for every non-identity element there is a homomorphism onto a finite group under which the image of that element is still non-identity: this is witnessed by the natural (mod p) homomorphisms onto (Z/pZ)n, SLn (Z/pZ) and GLn (Z/pZ), for suitable primes p. Such groups are called residually finite.
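The mod-p separation argument can be illustrated directly: a non-identity integer matrix survives reduction modulo some prime. A minimal sketch (the list of candidate primes is ad hoc):

```python
# Sketch of residual finiteness of SL_n(Z): if M != I, then M - I has a
# nonzero entry, and any prime p not dividing that entry gives a
# homomorphism SL_n(Z) -> SL_n(Z/pZ) with pi(M) != I.

def separating_prime(M, primes=(2, 3, 5, 7, 11)):
    n = len(M)
    for p in primes:
        reduced = [[M[i][j] % p for j in range(n)] for i in range(n)]
        identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        if reduced != identity:
            return p
    return None

M = [[1, 2], [0, 1]]       # a generator of the free group from above
p = separating_prime(M)
# p = 3 works here: M mod 2 is the identity, but M mod 3 is not.
```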
⊲ Exercise 2.7. Show that the following properties of a group Γ are equivalent (each may serve as the definition of residual finiteness):

(i) For every γ ≠ 1 ∈ Γ there is a homomorphism π onto a finite group F such that π (γ ) ≠ 1.

(ii) The intersection of all its subgroups of finite index is trivial.

(iii) The intersection of all its normal subgroups of finite index is trivial.

⊲ Exercise 2.8. Show that any subgroup of a residually finite group is also residually finite. Conclude that the free groups Fk are residually finite. (Alternatively, show that the number of index k subgroups of a finitely generated group is finite, and use part (iii) above.)

⊲ Exercise 2.9. (i) Show that if Γ is residually finite and φ : Γ −→ Γ is a surjective homomorphism, then it is a bijection. (Hint: assume that ker φ is non-trivial, use Γ ≃ Γ/ ker φ to show that any homomorphism π onto a finite group F must annihilate ker φ, then arrive at a contradiction.) (ii) Conclude that if k distinct elements generate the free group Fk, then they generate it freely.

2.4 Presentations and Cayley graphs

Fix some subset of (oriented) loops in a directed Cayley graph G(Γ, S ) of a group Γ = ⟨S |R⟩; these will be called the basic loops. To any loop ℓ = (g, gs1 , gs1 s2 , . . . , gs1 s2 · · · sk = g ), with each si ∈ S, we can associate the word

w(ℓ) = s1 s2 · · · sk .

Note that translated copies of the same loop have the same word associated to them. A geometric conjugate of a loop can be: “rotate” the loop, i.e., choose another starting point in the same loop; or translate the loop in G by a group element. A combination of basic loops is a loop made of a sequence of loops (ℓi ), where the i-th is a geometric conjugate of a basic loop, and starts in a vertex contained in some ℓj , j ∈ [1, i − 1].

Proposition 2.16. Any loop in G(Γ, S ) is homotopy equivalent to a combination of basic loops if and only if the words given by the basic loops generate ⟨⟨R⟩⟩ as a normal subgroup, where ⟨⟨R⟩⟩ denotes the normal subgroup of the relations, i.e., Γ = FS /⟨⟨R⟩⟩.

Proof. Let the set of all loops in G be L, the set of basic loops B, and the set of loops produced from B by combining them and applying homotopies C. The statement of the proposition is that C = L if and only if ⟨⟨w(B )⟩⟩ = ⟨⟨R⟩⟩.

We first show that ⟨⟨w(B )⟩⟩ = ⟨⟨w(C )⟩⟩; let us start with the direction ⊇. What happens to the associated word when we rotate a loop? From the word s1 s2 · · · sk ∈ w(B ), rotating by one edge, we get s2 · · · sk s1 = s1^{−1}(s1 s2 · · · sk )s1. This new word is a conjugate of the old word, hence it is an element of ⟨⟨w(B )⟩⟩. Next, when we combine two loops, say (g, gs1 , . . . , gs1 · · · sn = g ) and (h, ht1 , . . . , ht1 · · · tm = h), into (g, gS1 , gS1 t1 , . . . , gS1 T, gS1 T sk+1 , . . . , gS1 T S2 = g ), where S1 = s1 · · · sk , S2 = sk+1 · · · sn and T = t1 · · · tm , then the new associated word S1 T S2 equals (S1 S2 )(S2^{−1} T S2 ), the product of the old word S1 S2 and a conjugate of the old word T, which again does not take us out of ⟨⟨w(B )⟩⟩. Finally, if two loops are homotopic to each other in G, then we can get one from the other by combining in a sequence of contractible loops, which are just “contour paths” of subtrees in G; the effect of these geometric combinations on the corresponding word is just plugging in trivially reducible words. Thus, combining loops and applying homotopies does not take us out of ⟨⟨w(B )⟩⟩, and we have proved ⟨⟨w(B )⟩⟩ ⊇ ⟨⟨w(C )⟩⟩. But ⊆ is now also clear: we have encountered the geometric counterpart of all the operations on the words in w(B ) that together produce ⟨⟨w(B )⟩⟩.

So, the last step we need is that L = C if and only if ⟨⟨w(L)⟩⟩ = ⟨⟨w(C )⟩⟩. Since both L and C are closed under translations, this is clear: a word on S is a loop in G iff the word represents 1 in Γ, iff the word is in the kernel ⟨⟨R⟩⟩ of the factorization from the free group FS. Since we obviously have ⟨⟨w(L)⟩⟩ = ⟨⟨R⟩⟩, we are done.

Figure 2.2: (a) Combining two loops. (b) Homotopic loops inside Z2.

We now state an elegant topological definition of the Cayley graph G(Γ, S ) of Γ = ⟨S |R⟩. Take the 1-dimensional CW-complex Rose|S|, and for each word in R, glue a 2-cell on it with boundary given by that word. This is called the Cayley complex corresponding to S and R; let X (S, R) denote the resulting 2-dimensional CW complex. For instance, when S = {a, b} and R = {aba^{−1}b^{−1}}, we glue a single 2-cell, resulting in a torus, whose universal cover will be homeomorphic to R2, with a Z2 lattice as its 1-skeleton. In general, take the universal cover X̃ (S, R); the 1-skeleton of X̃ (S, R) is the 2|S |-regular Cayley graph G(Γ, S ). The proof that this definition of the Cayley graph coincides with the usual one has roughly the same math content as our previous proposition.
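The correspondence between loops in the Cayley graph and words in the kernel of FS −→ Γ is easy to test in the simplest example, Γ = Z2 = ⟨a, b | [a, b]⟩: a word traces a loop iff both exponent sums vanish. A toy sketch, with an ad hoc string encoding:

```python
# Sketch: in G(Z^2, {a, b}), a word traces a loop iff it evaluates to the
# identity, i.e. iff the exponent sums of a and of b are both zero.
# Encoding: 'a', 'b' for generators, 'A', 'B' for their inverses.

def is_loop(word):
    x = word.count('a') - word.count('A')
    y = word.count('b') - word.count('B')
    return (x, y) == (0, 0)

# The relator aba^{-1}b^{-1} ("abAB") is a loop, a basic loop bounding one
# 2-cell of the Cayley complex; "ab" is just a path, not a loop.
```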

Here is a sketch of the same story told in the language of covering spaces. It is intuitively clear, and can be proved using the Seifert–van Kampen theorem (Theorem 3.1 below), that π1 (X (S, R)) = ⟨S |R⟩, while X̃ (S, R)/Γ = X (S, R), since the action of Γ on the universal cover is free. So, if we factorize Fk by normal subgroups, we get all possible groups Γ generated by k elements, and the Schreier graph of the action of Γ on the Cayley complex will be a 2k-regular Cayley graph.

Besides the Schreier graph of a group action, there is another usual meaning of a Schreier graph, which is just a special case: if H ≤ Γ, and S is a generating set of Γ, then the set H \Γ of right cosets {Hg } supports a natural graph structure, with edges Hg ∼ Hgs for s ∈ S. This graph is denoted by G(Γ, H, S ). Clearly, for H = {1}, this Schreier graph is just the Cayley graph G(Γ, S ).

⊲ Exercise 2.10. * Show that any 2k-regular graph is the Schreier graph of Fk with respect to some subgroup H. On the other hand, show that the 3-regular Petersen graph is not a Schreier graph.

The above results show that being finitely presented is a property with a strong topological flavour. If X is a CW-complex and r > 0, then the Rips complex Ripsr (X ) is given by adding all simplices (of arbitrary dimension) of diameter at most r. For a finitely generated group Γ, being finitely presented is clearly equivalent to the existence of some r < ∞ such that the Ripsr (G(Γ, S )) complex is simply connected.

3 The asymptotic geometry of groups

3.1 Quasi-isometries. Ends. The fundamental observation of geometric group theory

We now define what we mean by two metric spaces having the same geometry on the large scale. It is a weakening of bi-Lipschitz equivalence:

Definition 3.1. Suppose (X1 , d1 ) and (X2 , d2 ) are metric spaces. A map f : X1 −→ X2 is called a quasi-isometry if ∃ C > 0 such that the following two conditions are met:

1. For all p, q ∈ X1 , we have d1 (p, q )/C − C ≤ d2 (f (p), f (q )) ≤ C d1 (p, q ) + C.

2. For each x2 ∈ X2 , there is some x1 ∈ X1 with d2 (x2 , f (x1 )) < C.

We also say that X2 is quasi-isometric to X1 if there exists a quasi-isometry f : X1 −→ X2. Informally speaking, the first condition means that f does not distort the metric too much (it is coarsely bi-Lipschitz), while the second states that f (X1 ) is relatively dense in the target space X2.

⊲ Exercise 3.1. Verify that being quasi-isometric is an equivalence relation.

For example, Z2 with the graph metric (the ℓ1 or taxi-cab metric) is quasi-isometric to R2 with the Euclidean metric. Also, Z × Z2 with the graph metric is quasi-isometric to Z: the map (n, i) → 2n + i is an injective quasi-isometry with Lipschitz constant 2, while (n, i) → n, for i ∈ {0, 1}, is a non-injective quasi-isometry that is “almost an isometry on the large scale”.
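The quasi-isometry conditions for the map (n, i) → 2n + i can be checked against Definition 3.1 by brute force over a finite window; C = 3 suffices there. A sketch (a finite window is of course only evidence, not a proof):

```python
# Check the quasi-isometry conditions for f(n, i) = 2n + i from the Cayley
# graph of Z x Z_2 (generators (+-1, 0) and (0, 1)) to Z, with C = 3.

def d1(p, q):
    # graph distance in Z x Z_2: horizontal steps plus one flip if needed
    return abs(p[0] - q[0]) + (p[1] != q[1])

def f(p):
    return 2 * p[0] + p[1]

def check_quasi_isometry(C=3, window=30):
    pts = [(n, i) for n in range(-window, window + 1) for i in (0, 1)]
    for p in pts:
        for q in pts:
            d2 = abs(f(p) - f(q))
            if not (d1(p, q) / C - C <= d2 <= C * d1(p, q) + C):
                return False
    # relative density: every m in Z is within C of some f(n, i)
    return all(min(abs(m - f(p)) for p in pts) <= C
               for m in range(-window, window))
```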

⊲ Exercise 3.2. Define in a reasonable sense the space of ends of a graph as a topological space, knowing that Z has two ends, Zd has one end for d ≥ 2, while the k-regular tree Tk , k ≥ 3, has a continuum of ends.

⊲ Exercise 3.3. Prove that any quasi-isometry of graphs induces naturally a homeomorphism of their spaces of ends. (For this, note that for any quasi-isometry of locally finite graphs, the number of preimages of any point in the target space is bounded from above by some constant.) Thus, the number of ends is a quasi-isometry invariant of the graph.

In particular, if Γ is a finitely generated group, we can define the space of ends of Γ to be the space of ends of any of its Cayley graphs: its Cayley graph depends on choosing the symmetric finite generating set, but the good thing is that any two such graphs are quasi-isometric.

⊲ Exercise 3.4 (Hopf 1944). Show that a finitely generated group has 0, 1, 2 or a continuum of ends: (a) show that a group has two ends iff it has Z as a finite index subgroup; (b) show that if a f.g. group has at least 3 ends, then it has continuum many.

⊲ Exercise 3.5. (a) Show that if G1 , G2 are two infinite graphs, then the direct product graph G1 × G2 has one end. (b) Show that if |Γ1 | ≥ 2 and |Γ2 | ≥ 3 are two finitely generated groups, then the free product Γ1 ∗ Γ2 has a continuum number of ends.

⊲ Exercise 3.6. Show that the regular trees Tk and Tℓ for k, ℓ ≥ 3 are quasi-isometric to each other, by giving explicit quasi-isometries.

Another quasi-isometry invariant of groups is the property of being finitely presented, as follows easily from our earlier remark on Rips complexes just before Definition 3.1.

For a transitive connected graph G with finite degrees of vertices, we can define the volume growth function vG (n) = |Bn (o)|, where o is some vertex of G and Bn (o) is the closed ball of radius n (in the graph metric on G) with center o. We will sometimes call two functions v1 , v2 from N to N equivalent if ∃ C > 0 such that

v1 (r/C )/C < v2 (r) < C v1 (Cr)     (3.1)

for all r > 0. It is almost obvious that quasi-isometric transitive graphs have equivalent volume growth functions.

An extension of the notion of free product is the amalgamated free product: if Γi = ⟨Si | Ri ⟩, i ∈ {1, 2}, are finitely generated groups, each with a subgroup isomorphic to some H, say with embeddings ϕi : H −→ Γi , then

Γ1 ∗H Γ2 = Γ1 ∗H,ϕ1 ,ϕ2 Γ2 := ⟨ S1 ∪ S2 | R1 ∪ R2 ∪ {ϕ1 (h)ϕ2 (h)^{−1} : h ∈ H } ⟩ .

If both Γi are finitely presented, then Γ1 ∗H Γ2 is of course also finitely presentable. There is an important topological way of how such products arise:
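The growth function vG (n) introduced above already separates groups quite well: polynomial for Zd, exponential for free groups. A quick sketch computing both:

```python
# Volume growth: |B_n(o)| in Z^2 (polynomial) versus the 4-regular tree
# T_4, the Cayley graph of F_2 (exponential).

def ball_z2(n):
    # |B_n| in Z^2 with the taxi-cab metric: 2n^2 + 2n + 1
    return sum(1 for x in range(-n, n + 1)
                 for y in range(-n, n + 1) if abs(x) + abs(y) <= n)

def ball_t4(n):
    # the root has 4 neighbours, every other vertex has 3 fresh ones
    total, sphere = 1, 1
    for r in range(1, n + 1):
        sphere = 4 if r == 1 else sphere * 3
        total += sphere
    return total

# ball_z2 grows like n^2 while ball_t4 grows like 3^n; since equivalence
# of growth functions is a quasi-isometry invariant, no Cayley graph of
# Z^2 is quasi-isometric to one of F_2.
```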

Theorem 3.1 (Seifert-van Kampen). If X = X1 ∪ X2 is a decomposition of a CW-complex into connected subcomplexes with Y = X1 ∩ X2 connected, then, for any base point y ∈ Y,

π1 (X, y ) = π1 (X1 , y ) ∗π1 (Y,y) π1 (X2 , y ) ,

amalgamated over the natural embeddings of π1 (Y, y ) into π1 (Xi , y ).

⊲ Exercise 3.7. (a) What is the Cayley graph of Z ∗2Z Z (with the obvious embedding 2Z < Z), with one generator for each Z factor? Identify the group as a semidirect product Z ⋊ Z. (b) Do there exist CW-complexes realizing this amalgamated free product as an application of Seifert-van Kampen?

Recall from Exercise 3.4 that a group has 0, 1, 2 or a continuum of ends. The reason for talking about amalgamated free products here is the following theorem:

Theorem 3.2 (Stallings [Sta68], [Bergm68]). The last case, a continuum of ends, occurs iff the group is a free product amalgamated over a finite subgroup.

See the references in [DrK09, Section 2] for proofs. (There is one using harmonic functions [Kap07]; I might say something about that in a later version of these notes.)

The following result is also called “the fundamental observation of geometric group theory”, since it connects the two usual “definitions” of geometric group theory: 1. it is the study of groups using their actions on geometric spaces; 2. it is the study of group theoretical properties that are invariant under quasi-isometries. We start with some definitions:

Definition 3.2. An action of a group Γ on a metric space X by isometries is called properly discontinuous if for every compact K ⊂ X, the set {g ∈ Γ : g (K ) ∩ K ≠ ∅} is finite. Any group action defines an equivalence relation on X: the decomposition into orbits. The set of equivalence classes, equipped with the factor topology coming from X, is denoted by X/Γ. The action is called co-compact if X/Γ is compact.

Definition 3.3. A metric space X is called geodesic if for all p, q ∈ X there exist a, b ∈ R and an isometry g : [a, b] −→ X with g (a) = p and g (b) = q. A metric space is called proper if all closed balls of finite radius are compact.

Lemma 3.3 (Milnor-Schwarz). Let X be a proper geodesic metric space, and suppose that a group Γ acts on it by isometries properly discontinuously and co-compactly. Then Γ is finitely generated, and for any fixed x ∈ X, the map Jx : Γ −→ X defined by Jx (g ) = g (x) is a quasi-isometry (on each Cayley graph of Γ).

Proof. Consider the projection π : X −→ X/Γ. For any open U ⊂ X and g ∈ Γ, the sets g (U ) and π^{−1}(π (U )) = ∪g∈Γ g (U ) are open in X. (Why exactly? Since each element of Γ is invertible, its action on X is a bijective isometry, hence a homeomorphism.) Therefore, π (U ) is open in X/Γ, and the images of open balls are open. Pick an arbitrary point x ∈ X. Since Br (x) ր X as r → ∞, we also have π (Br (x)) ր X/Γ, and, by compactness of X/Γ, there is an R < ∞ such that X/Γ = π (BR (x)), i.e., the Γ-translates of the closed ball B = BR (x) cover X: π (B ) = X/Γ.

Since the action of Γ is properly discontinuous, there are only finitely many elements si ∈ Γ \ {1} such that B ∩ si (B ) ≠ ∅; let S be the subset of Γ consisting of these elements si. Note that si^{−1} belongs to S iff si does. Let

r := inf { d(B, g (B )) : g ∈ Γ \ (S ∪ {1}) } .

Observe that r > 0. Indeed, if we denote by B ′ the closed ball with center x and radius R + 1, then for all but finitely many g ∈ Γ we have B ′ ∩ g (B ) = ∅, and hence d(B, g (B )) ≥ 1; therefore, the infimum above is a minimum of finitely many positive numbers, so it is positive.

The claim now is that S generates Γ, and that, in the right Cayley graph w.r.t. S (so that dS (h, hg ) = dS (1, g ) for any g, h ∈ Γ),

‖g‖S := dS (1, g ) ≤ d(x, g (x))/r + 1 ,     (3.2)

where ‖g‖S is the word norm on Γ with respect to the generating set S. Indeed, let g ∈ Γ, and connect x to g (x) by a geodesic γ. Let m be the unique non-negative integer with (m − 1)r + R ≤ d(x, g (x)) < mr + R. Choose points x0 = x, x1 , . . . , xm+1 = g (x) on γ such that x1 ∈ B and d(xj , xj+1 ) < r for 1 ≤ j ≤ m. Each xj belongs to some gj (B ) for some gj ∈ Γ (take g0 = g1 = 1 and gm+1 = g ). Observe that

d(gj (B ), gj+1 (B )) = d(B, gj^{−1} gj+1 (B )) ≤ d(xj , xj+1 ) < r ,

hence the balls B and gj^{−1} gj+1 (B ) intersect, and therefore gj+1 = gj si(j) for some si(j) ∈ S ∪ {1}. Thus g = si(1) si(2) · · · si(m), so S is indeed a generating set for Γ, and ‖g‖S ≤ m ≤ (d(x, g (x)) − R)/r + 1, proving (3.2).

On the other hand, for any s ∈ S we have B ∩ s(B ) ≠ ∅, hence d(x, s(x)) ≤ 2R, and the triangle inequality implies that d(x, g (x)) ≤ 2R ‖g‖S for every g ∈ Γ. Together with (3.2), this means that Jx (g ) := g (x) is coarsely bi-Lipschitz. Finally, since the Γ-translates of B cover X, for each y ∈ X there is some g ∈ Γ with d(y, g (x)) ≤ R, so the image of Jx is relatively dense in X. Hence Jx is indeed a quasi-isometry from Γ to X. This finishes the proof.
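A baby case of the Milnor-Schwarz conclusion, namely that different Cayley graphs of the same group are quasi-isometric, can be checked directly for Z with two generating sets; the word metrics are bi-Lipschitz equivalent. A sketch:

```python
# Word norms on Z with respect to S1 = {+-1} and S2 = {+-2, +-3}; the two
# word metrics are bi-Lipschitz equivalent (here with constant C = 3).

from collections import deque

def word_norm(n, gens):
    """BFS distance from 0 to n in the Cayley graph (Z, gens)."""
    seen, queue = {0}, deque([(0, 0)])
    while queue:
        v, d = queue.popleft()
        if v == n:
            return d
        for s in gens:
            w = v + s
            if w not in seen and abs(w) <= abs(n) + 3:   # safe search window
                seen.add(w)
                queue.append((w, d + 1))
    return None

def bilipschitz(C=3, N=40):
    return all(word_norm(n, (1, -1)) <= C * word_norm(n, (2, 3, -2, -3))
               and word_norm(n, (2, 3, -2, -3)) <= C * word_norm(n, (1, -1))
               for n in range(1, N))
```

The search window abs(n) + 3 is enough here because a shortest S2-path never needs to overshoot by more than one generator step.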

This lemma implies that if Γ1 and Γ2 both act nicely (by isometries, properly discontinuously and cocompactly) on a nice (proper geodesic) metric space, then they are quasi-isometric to each other. An important consequence is the following:

Corollary 3.4. A finite index subgroup H of a finitely generated group Γ is itself finitely generated and quasi-isometric to Γ. The same conclusions hold if H is a factor of Γ with a finite kernel.

Proof. For the case H ≤ Γ, we consider an “extended” Cayley graph of Γ with respect to some generating set: a metric space which contains not only vertices of the Cayley graph but also points on edges, so that each edge is a geodesic of length 1. Then H naturally acts on it by isometries, the action satisfies the conditions of the Milnor-Schwarz lemma, and we can just apply it; in particular, H is finitely generated. For the case when H is a factor of Γ, the group Γ acts on the extended Cayley graph of H (the image of any generating set of Γ generates H, so H is finitely generated), and clearly this action satisfies all properties in the statement of the Milnor-Schwarz lemma, so again we are done.

A small generalization and a converse are provided by the following characterization noted in [Gro93]:

Proposition 3.5. Two f.g. groups Γ1 and Γ2 are quasi-isometric iff there is a locally compact topological space X where they both act properly discontinuously and cocompactly, and moreover, the two actions commute with each other.

⊲ Exercise 3.8. Prove the above proposition. (Hint: given the quasi-isometric groups, the space X can be constructed using the set of all quasi-isometries between Γ1 and Γ2 .)

Based on this corollary, Gromov proposed in [Gro93] the long-term project of quasi-isometric classification of all f.g. groups. This is a huge research area, with connections to a lot of parts of mathematics, including probability theory, as we will see. Correspondingly, we will usually be interested in group properties that hold up to moving to a finite index subgroup or factorgroup or group extension. We say that two groups, Γ1 and Γ2 , are virtually or almost isomorphic if there are finite index subgroups Hi ≤ Γi and finite normal subgroups Fi ⊳ Hi such that H1 /F1 ≃ H2 /F2 . Similarly, if a group Γ is virtually isomorphic to a group that has some property P, then we will say that Γ almost or virtually has P.

Corollary 3.4 above is the geometric analogue of the following characterization of virtual isomorphisms:

⊲ Exercise 3.9. Show that two f.g. groups Γ1 and Γ2 are virtually isomorphic iff they admit commuting actions on a set X such that the factors X/Γi and all the stabilizers StΓi (x), x ∈ X, are finite.

Here is one more instance of realizing a group theoretical notion via geometric properties of group actions. Recall the notion of residually finite groups from Exercise 2.7.

⊲ Exercise 3.10. * (a) Show that a group Γ is residually finite iff it has a faithful chaotic action on a Hausdorff topological space X by homeomorphisms: (i) the union of all finite orbits is dense in X; (ii) the action is topologically transitive: for any nonempty open U, V ⊆ X there is γ ∈ Γ such that γ (U ) ∩ V ≠ ∅. (A rather obvious hint: start by looking at finite groups.) (b) Construct such a chaotic action of SL(n, Z) on the torus Tn .

3.2 Gromov-hyperbolic spaces and groups

3.3 Asymptotic cones

4 Nilpotent and solvable groups

4.1 The basics

We have studied the free groups, which are the “largest groups” from most points of view. We have also seen that it is relatively hard to produce new groups from them: their subgroups are all free, while taking quotients, i.e., defining groups using presentations, has the disadvantage that we might not know what the group is that we have just defined. On the other hand, the “smallest” infinite group is certainly Z, and it is more than natural to start building new groups from it. Recall that a finitely generated group is Abelian if and only if it is a direct product of cyclic groups, and the number of (free) cyclic factors is called the (free) rank of the group. We understand these examples very well, so we should now go beyond commutativity. Recall that the commutator is defined by [g, h] = ghg^{−1}h^{−1}.

Definition 4.1. A group Γ is called nilpotent if the lower central series Γ0 = Γ, Γn+1 = [Γn , Γ] terminates at Γs = {1} in finitely many steps; if s is the smallest such index, Γ is called s-step nilpotent. A group Γ is called solvable if the derived series Γ0 = Γ, Γn+1 = [Γn , Γn ] terminates at Γs = {1} in finitely many steps; if s is the smallest such index, Γ is called s-step solvable.

Clearly, nilpotent implies solvable. Before discussing any further properties of such groups, let us give the simplest non-Abelian nilpotent example. The 3-dimensional discrete Heisenberg group is the matrix group

Γ = { (1 x z ; 0 1 y ; 0 0 1) : x, y, z ∈ Z } .
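Writing X, Y, Z for the three matrices of the Heisenberg group with a single nonzero off-diagonal entry equal to 1, their commutator relations can be verified directly on 3 × 3 matrices; a quick sketch:

```python
# Heisenberg commutator relations, checked on 3x3 unitriangular matrices
# heis(x, y, z) = [[1, x, z], [0, 1, y], [0, 0, 1]].

def mul(P, Q):
    n = len(P)
    return [[sum(P[i][t] * Q[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def heis(x, y, z):
    return [[1, x, z], [0, 1, y], [0, 0, 1]]

def inv(M):
    # inverse of heis(x, y, z) is heis(-x, -y, x*y - z)
    x, y, z = M[0][1], M[1][2], M[0][2]
    return heis(-x, -y, x * y - z)

def comm(P, Q):
    return mul(mul(P, Q), mul(inv(P), inv(Q)))

def power(M, m):
    R = heis(0, 0, 0)
    for _ in range(m):
        R = mul(R, M)
    return R

X, Y, Z = heis(1, 0, 0), heis(0, 1, 0), heis(0, 0, 1)
# comm(X, Y) equals Z, while Z commutes with both X and Y:
# the group is 2-step nilpotent, and [X^m, Y^n] = Z^(m*n).
```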

If we denote by X, Y, Z the matrices given by the three permutations of the entries 1, 0, 0 for x, y, z, then [X, Y ] = Z, [X, Z ] = [Y, Z ] = 1, and the center is Z (Γ) = ⟨Z⟩ = [Γ, Γ]; hence Γ is 2-step nilpotent. The group is given by the presentation

Γ = ⟨ X, Y, Z | [X, Y ] = Z, [X, Z ] = [Y, Z ] = 1 ⟩ ;

in particular, it is also generated by just X and Y. Note that [X^m , Y^n ] = Z^{mn}.

Figure 4.1: The Cayley graph of the Heisenberg group with generators X, Y, Z.

⊲ Exercise 4.1. Show that the Heisenberg group has 4-dimensional volume growth.

Clearly, Γn+1 ≤ Γn and Γn ⊳ Γ, by induction. Whenever Γ∗ ⊳ Γ, we have that [Γ∗ , Γ] is a subgroup of Γ∗ that is normal in Γ: if g, h ∈ Γ and γ ∈ Γ∗ , then [γ, g ] = γ (gγ^{−1}g^{−1}) ∈ Γ∗ and h^{−1}[γ, g ]h = (h^{−1}γh)(h^{−1}gh)(h^{−1}γ^{−1}h)(h^{−1}g^{−1}h) ∈ [Γ∗ , Γ].

Furthermore, if Γ is nilpotent, then, by simple word manipulations, one can show that each Γn is finitely generated, generated by iterated commutators of the generators of Γ, as follows. In any commutator of two words on the generators of Γ, there is an equal number of x and x^{−1} letters for each generator x. By introducing commutators, we can move such pairs of letters towards each other until they become neighbours and annihilate each other, so that, at the end, we are left with a product of iterated commutators. For instance,

[a, bc] = abca^{−1}c^{−1}b^{−1} = ab[c, a^{−1}]a^{−1}c c^{−1}b^{−1} = ab[c, a^{−1}]a^{−1}b^{−1} = a[b, [c, a^{−1}]][c, a^{−1}] b a^{−1}b^{−1} = a [b, [c, a^{−1}]][c, a^{−1}][b, a^{−1}] a^{−1} ,

which, after conjugating each commutator factor by a, is a product of iterated commutators. The iteration depths appearing here depend on the two words we started with, but since Γ is s-step nilpotent, we can just delete from the word any iteration of depth larger than s. This way we get a finite set of generators for [Γ, Γ], and then for each Γn .

The factor ΓAb := Γ/[Γ, Γ] is called the Abelianization of Γ; it is the largest Abelian factor of Γ: any Abelian factor must factor through ΓAb . However, it is not true that for any finitely generated group Γ the subgroup [Γ, Γ] is also finitely generated: for the lamplighter group Γ = Z2 ≀ Z, a finitely generated solvable group which we will define in Section 5.1, the commutator subgroup [Γ, Γ] = ⊕Z Z2 is not finitely generated.

4.2 Semidirect products

We do not yet have any general procedure to build non-commutative nilpotent and solvable groups; such a procedure is given by the semidirect product. For any group N, let Aut(N ) be the group of its group-automorphisms, with the product being just composition, acting on the right: f acts by a^f = f (a), and f g acts by a^{f g} = (a^f )^g = g (f (a)). Now let ϕ : H −→ Aut(N ) be some group homomorphism. Then define the semidirect product Γ = N ⋊ϕ H by

(a, g ) (b, h) = (a^{ϕ(h)} b, gh) ,    with inverse    (a, f )^{−1} = ((a^{−1})^{ϕ(f^{−1})} , f^{−1}) .

It is easy to check that this is indeed a group, that N ⊳ Γ with the inclusion a → (a, id), that H ≤ Γ with the inclusion f → (1, f ), and that HN = Γ and H ∩ N = {1}. Since H acts on N on the right, one might prefer writing Γ = H ϕ ⋉ N = {(h, a) : h ∈ H, a ∈ N }. Of course, for the trivial homomorphism ϕ : H −→ {id} ⊂ Aut(N ) we get the direct product of N and H.

Conversely, whenever we have a group Γ with subgroups N ⊳ Γ and H ≤ Γ satisfying HN = Γ and H ∩ N = {1}, then H acts on N by conjugations, a^{ϕ(h)} := h^{−1}ah, and it is easy to check that Γ = H ϕ ⋉ N = N ⋊ϕ H.

The archetypical non-trivial examples are the affine group

Rd ⋊ GLd (R) = {v → vA + b : A ∈ GLd (R), b ∈ Rd } ,    v → vA1 + b1 → vA1 A2 + b1 A2 + b2 ,

and the group of Euclidean isometries Rn ⋊ O(n). Similar examples are possible with Zd instead of Rd, but note that Aut(Zd ) = GLd (Z) is just ±SLd (Z). One more small example of a semidirect product is in Exercise 3.7.

⊲ Exercise 4.2. (a) Assume that for some H ⊳ Γ, both H and Γ/H are finitely presented. Show that Γ is also finitely presented. (b) Show that any finitely generated almost-nilpotent group is finitely presented. (Hint: use the Z factor from the end of Section 4.1.)

We do not give the proof here, only one small idea that will be needed later. Any finitely generated nilpotent group is polycyclic: there is a subgroup sequence H0 = Γ, Hi+1 ⊳ Hi , such that each Hi /Hi+1 is cyclic, and there is some n such that Hn = {1}. In fact, a solvable group is polycyclic iff all of its subgroups are finitely generated (and this includes all finitely generated nilpotent groups). Indeed, if Γ is finitely generated solvable, then the factor Γ/[Γ, Γ] is also finitely generated and Abelian, hence a finite direct product of cyclic groups. If it is infinite, then one of the factors must be Z, and composing the surjection Γ −→ Γ/[Γ, Γ] with the projection to this Z factor, we get a surjection of Γ onto Z; in the nilpotent case, iterating this produces a polycyclic sequence with H0 /H1 ≃ Z.

A fancy way of saying that N ⊳ Γ and Γ/N ≃ F is to write down the short exact sequence

1 −→ N −→ Γ −→ F −→ 1 ,

which just means that the image of each map indicated by an arrow is exactly the kernel of the next map.
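The semidirect multiplication rule above can be coded once and tested against the group axioms; here for N = Z2 and H = Z, with ϕ(h) acting by the h-th power of an integer matrix M (a sketch with hypothetical helper names):

```python
# Z^2 semidirect_M Z with the rule (a, g)(b, h) = (a M^h + b, g + h),
# i.e. phi(h) acts on row vectors of Z^2 by the matrix power M^h.

M = [[1, 1], [0, 1]]          # a unipotent element of GL_2(Z)
Minv = [[1, -1], [0, 1]]

def apply_pow(a, h):
    x, y = a
    B = M if h >= 0 else Minv
    for _ in range(abs(h)):
        x, y = (B[0][0] * x + B[1][0] * y, B[0][1] * x + B[1][1] * y)
    return (x, y)

def sd_mul(p, q):
    (a, g), (b, h) = p, q
    ah = apply_pow(a, h)
    return ((ah[0] + b[0], ah[1] + b[1]), g + h)

def sd_inv(p):
    (a, g) = p
    return (apply_pow((-a[0], -a[1]), -g), -g)

e = ((0, 0), 0)
```

The inverse formula matches the general one: ((a^{−1})^{ϕ(g^{−1})}, g^{−1}) becomes (−a · M^{−g}, −g) in additive notation.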

any subgroup or factor group of a nilpotent group . Therefore. It will also be important to us that any sequence 1 −→ N −→ Γ −→ Z −→ 1 hence for H := x ≃ Z we have H ∩ N = {1} and HN = Γ. UK. Sir Swinnerton-Dyer.1. If M has only absolute value 1 eigenvalues. then Γ is almost nilpotent. show that if N and H are solvable. and using this. ghg −1 h−1 . g ∈ Aut(N ). (a) Show that (a. then Γ has is also nilpotent.1). 1 −→ [Γ. 2}. in addition. we will prove the following: Proposition 4. Nevertheless. Also. M ∈ GLd (Z). since Aut(Z2 ) = {id} means that all semidirect products (4.) Now.1 has a ﬁnite index nilpotent subgroup.semidirect} ⊲ Exercise 4. h)(a. given Γ = Zd ⋊M Z. 2. γ (F ) ∩ α(N ) = {1} and γ (F )α(N ) = Γ. then we say that the short exact sequence splits.) If. hence there is no good γ . (Hint: use the Z factor from the end of Section 4. 28 {p. g )−1 (b.1 and the splitting in (4. hence Γ ≃ N ⋊ϕ F is a semidirect product with ϕ : F −→ Aut(N ) given by the exact sequence 1 −→ Z2 −→ Z4 −→ Z2 −→ 1 are actually direct products.NilpOrExp} k ∈ Z \ {0} {ex. π fashioned and old. If not. show that N ⋊ gk is a ﬁnite index subgroup of N ⋊ g . (b) Given N ⋊ g . an almost nilpotent group in the sense given at the end of Section 3.4. then Γ has exponential volume growth.) Similarly.3. a lot of solvable and nilpotent groups can be produced using semidirect products. for instance. obviously. Show that if Γ/K ≃ N with K a ﬁnite normal subgroup and N nilpotent. there is an injective homomorphism γ : F −→ Γ with conjugation: aϕ(f ) := α−1 γ (f )−1 α(a)γ (f ) . then π (xk ) = k = 0 for all k ∈ Z \ {0}. while ker(π ) = N .which just means that the image of each map indicated by an arrow is exactly the kernel of the next map. he said. but Z2 × Z2 ≃ Z4 . (Or. a famous number theory professor. g )(b.almostnilp} a ﬁnite index nilpotent subgroup. 2} ⊳ {0. {ex.1) {e.NGZ} does split. then N ⋊ϕ H is also. deﬁned by {0. (When I was a grad student in Cambridge. This is because if x ∈ π −1 (1). 
In this case, a direct computation in N ⋊_ϕ H gives the commutator formula (a, g)(b, h)(a, g)^{-1}(b, h)^{-1} = ( a^{ϕ(hg^{-1}h^{-1})} b^{ϕ(g^{-1}h^{-1})} (a^{-1})^{ϕ(g^{-1}h^{-1})} (b^{-1})^{ϕ(h^{-1})} , ghg^{-1}h^{-1} ).
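The dichotomy stated above for Γ = Z^d ⋊_M Z — all eigenvalues of M of absolute value 1 implies almost nilpotent, otherwise exponential volume growth — is mechanical to check from the spectrum of M. A minimal sketch for d = 2, in pure Python; the function names are ours, not from the notes:

```python
import cmath

def eigenvalues_2x2(M):
    """Eigenvalues of a 2x2 matrix via the quadratic formula."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def growth_type(M, eps=1e-9):
    """Z^2 semidirect-product-by-M Z is almost nilpotent iff every
    eigenvalue of M lies on the unit circle; otherwise exponential."""
    if all(abs(abs(lam) - 1) < eps for lam in eigenvalues_2x2(M)):
        return "almost nilpotent (polynomial growth)"
    return "exponential growth"

print(growth_type([[1, 1], [0, 1]]))   # unipotent shear -> almost nilpotent
print(growth_type([[0, -1], [1, 0]]))  # order-4 rotation, eigenvalues +-i -> almost nilpotent
print(growth_type([[2, 1], [1, 1]]))   # eigenvalue (3+sqrt(5))/2 > 1 -> exponential growth
```

The third example is the hyperbolic case that will reappear below when exponential growth is proved from an eigenvalue off the unit circle.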

5. . . The proof of the following theorem will be given in the next section (with a few details omitted): Theorem 4.1. for the Heisenberg group we have d0 = 2 and d1 = 1. by Exercise 4. hence d(Γ) = 4. for any m ∈ Z+ . λd be the set of eigenvalues of M .2 (Milnor-Wolf [Mil68a. Moreover.unipotent} unipotent. . If M ∈ GLd (Z) is unipotent. for the ﬁrst part of Proposition 4. To prove Proposition 4. Note that a matrix M is quasi-unipotent if and only if there exists m < ∞ such that M m is Thus. A matrix M is unipotent if all eigenvalues are equal to 1 and quasi-unipotent if all eigenvalues are roots of unity. λd ) ∈ (S ) as k = 0. there exists a convergent subsequence {vkℓ } of {vk }.rootsofunity} of unity. Since (S1 ) is a compact mℓ ℓ mℓ = kℓ+1 − kℓ . Let λ1 . If M ∈ GLd (Z) has only eigenvalues with absolute value 1. .3.3 part (b). 1 M m = S −1 (N m + 29 {ex. But λm i ℓ |λm i | −1 group. .2. Z. we shall ﬁrst show the following lemma: Lemma 4. . then all of them are roots {l. ⊲ Exercise 4. Mil68b]). Note that N d = 0 in the exercise. d mℓ ∈ i=1 λi mℓ λi = 1∀i. and hence vkℓ+1 vk → 1 as ℓ → ∞. . . . the Bass-Guivarch formula (which we will not prove) states that if di := freerank(Γi /Γi+1 ) for the lower central series. . But = 1∀i. then the volume growth of a nilpotent Γ is d(Γ) = i {t. This Deﬁnition 4.1. .MilnorWolf} idi . Proof. For example. . . A ﬁnitely generated almost-solvable group is of polynomial growth if and only if it is almost-nilpotent. k 1 d d Now consider the sequence vk = (λk 1 . this is true in larger generality.In fact. Let ℓ implies that there exists mℓ such that d i=1 ℓ λm i = d. . λd ) → 1 and d i=1 ℓ → d as ℓ → ∞. . then M = I + S −1 N S = S −1 (I + N )S for some strictly upper-triangular integer matrix N .1 it is enough to show that if M is unipotent. then Γ = Zd ⋊M Z is nilpotent. then (λm 1 . m N m−1 + . Then. + I )S 1 m M m − I = S −1 (N m + N m−1 + . . + N )S. 1. 2. . Then we have d Tr(M k ) = i=1 λk i ∈ Z. agreeing with Exercise 4. . . 
Wol68. and is of exponential growth otherwise. . Thus.
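The equivalence noted above — M is quasi-unipotent if and only if M^m is unipotent for some m < ∞, i.e. (M^m − I)^d = 0 — can be tested with exact integer arithmetic. A small sketch (our own code, not from the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_unipotent(M):
    """All eigenvalues equal 1 iff N = M - I is nilpotent: N^d = 0."""
    d = len(M)
    N = [[M[i][j] - (i == j) for j in range(d)] for i in range(d)]
    P = N
    for _ in range(d - 1):
        P = matmul(P, N)
    return all(x == 0 for row in P for x in row)

def quasi_unipotent_order(M, max_m=24):
    """Smallest m with M^m unipotent, or None if none is found below max_m."""
    P = M
    for m in range(1, max_m + 1):
        if is_unipotent(P):
            return m
        P = matmul(P, M)
    return None

assert quasi_unipotent_order([[0, -1], [1, 0]]) == 4   # eigenvalues +-i
assert quasi_unipotent_order([[0, -1], [1, 1]]) == 6   # primitive 6th roots of unity
assert quasi_unipotent_order([[2, 1], [1, 1]]) is None # an eigenvalue off the unit circle
```

The first two matrices illustrate the lemma: all eigenvalues on the unit circle forces them to be roots of unity, so some power is unipotent.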

⊲ Exercise 4. Let b ∈ Cd a corresponding eigenvector.4. M i )−1 (w. in fact. bM T = λb. M j )−1 = (vM −i + wM −i−j − vM −i−j − wM −j . we have m− 1 i=0 ηi λi = |ηm λm | = |λ|m . M j )(v. which implies that (M m − I )d = 0. Let ηi = ǫi − δi .alldifferent} ǫi ∈ {0.6. Show that Z 2 {ex.1. hence Γ is at most d-step nilpotent. I ) = vM −i−j (M j − I ) + wM −i−j (I − M i ).last result holds. . while.3 part (a). Now suppose that k k k k δi λi β (v ) = β i=0 i=0 δi vM i = β i=0 ǫi vM i = i=0 ǫi λi β (v ) with ǫm − δm = 0 for some m ≤ k . but ǫi = δi for all m < i ≤ k . hence there exists v ∈ Zd such that β (v ) = 0. For the second half of Proposition 4. If M ∈ GLd (Z) has an eigenvalue |λ| > 2. or M bT = λbT . Then we have: β (vM k ) = vM k bT = vλk bT = λk β (v ) Since β : Cd −→ C is linear and non-zero.HeisenSemi} ⋊0 B1 @ 0 1 1C A 1 Z is isomorphic to the Heisenberg group. M i )(w. then ∃v ∈ Z such that for any k ∈ N and ǫ0 v + ǫ1 vM + . for any m ∈ Z. I . Lemma 4. i=0 Now consider m− 1 i i=0 ηi λ . (v. . Then m ηi λi β (v ) = 0 . On one hand. the What is now Γ1 = [Γ. the matrix part has got annihilated. the key step is the following. the Γ1 ⊂ (d − 1) dim subspace Γ2 ⊂ (d − 2) dim subspace . Since the inverse of a unipotent matrix is again unipotent. Γd = { 1 } . since M i − I lowers dimension for any i ∈ Z. “vector part” has a smaller and smaller support: So. proving the ﬁrst half of Proposition 4. 30 . . Γ]? By Exercise 4. 1}. the 2k+1 vectors are all diﬀerent. Proof. + ǫk vM k d {l.1. Deﬁne the linear form β (v ) = vbT . . Note that M T also has the eigenvalue λ. its kernel is (d − 1)-dimensional.
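The lemma above can be watched in action: for M with an eigenvalue |λ| > 2, the 2^{k+1} vectors ε_0 v + ε_1 vM + · · · + ε_k vM^k are pairwise distinct, which is exactly the source of exponential growth. A sketch (our own check; for this symmetric M the vector v = (1, 0) satisfies β(v) ≠ 0):

```python
from itertools import product

M = [[2, 1], [1, 1]]  # largest eigenvalue (3 + sqrt(5))/2 > 2

def vecmat(v, M):
    """Row vector times matrix over the integers."""
    return tuple(sum(v[i] * M[i][j] for i in range(2)) for j in range(2))

def count_distinct_sums(v, k):
    """Number of distinct vectors eps_0 v + eps_1 vM + ... + eps_k vM^k."""
    powers = [v]
    for _ in range(k):
        powers.append(vecmat(powers[-1], M))
    sums = {tuple(sum(e * p[j] for e, p in zip(eps, powers)) for j in range(2))
            for eps in product((0, 1), repeat=k + 1)}
    return len(sums)

for k in range(9):
    assert count_distinct_sums((1, 0), k) == 2 ** (k + 1)
```

Any coincidence among these sums would give a relation Σ η_i λ^i = 0 with η_i ∈ {−1, 0, 1}, impossible when |λ| > 2 — which is the content of the proof above.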

5. The proposition has the following generalization: {e.7 can be reformulated as follows: {p. We will prove the ﬁrst one. there is an m < ∞ such that M m has the eigenvalue |λm | > 2. but not the second. The ﬁrst step of this is for free: we have seen that an exact sequence (4. there is an eigenvalue |λ| > 1. with ǫi ∈ {0. to ﬁnish the proof of Proposition 4.subtfg} Proposition 4. M m ) = (ǫk vM mk + ǫk−1 vM m(k−1) + · · · + ǫ0 v. 4. then N has growth O(Rd−1 ). hence Γ has exponential growth. 1} and the vector v ∈ Zd given by Lemma 4. See [DrK09. |λ| − 1 since |λ| > 2. if Γ has growth O(Rd ).volnilpsolv} As promised in the previous section. Assume we have a short exact sequence 1 −→ N −→ Γ −→ Z −→ 1 . Propositions 4. Then N ⋊ϕ Z is almost nilpotent or of exponential growth (and this can be easily detected from ϕ). Theorem 5. Consider the products (ǫk v. M m ) · · · (ǫ0 v.4. m− 1 i=0 ηi λi ≤ |λ|m − 1 ≤ |λ|m − 1 . hence Exercise 4.1) always splits. although that is not hard. and prove the Milnor-Wolf Theorem 4. then N is ﬁnitely generated and also has sub-exponential growth. assume that M has an eigenvalue with absolute value not 1. This is a contradiction. Since | det(M )| = 1. then it has exponential growth. Now. {p.nore} elements are all diﬀerent.1 to prove that the subgroups Γn in the lower central series are ﬁnitely generated. We will need two more ingredients. if Γ is not almost nilpotent. for a ﬁxed k these (0. M m(k+1) ).On the other hand. hence we have found the desired v . I ) and ⊲ Exercise 4. we want to go beyond semidirect products. either: the key idea is just to analyze carefully the procedure we used in Section 4. these elements are inside the ball of radius 2(k + 1).3 The volume growth of nilpotent and solvable groups {ss. By that lemma. (i) If Γ has sub-exponential growth. But. M m )(ǫk−1 v.1.2. π 31 . Then. (ii) Moreover.7.6. * Let N be a ﬁnitely generated almost nilpotent group.gnore} Proposition 4. Then. 
Assume that 1 −→ N −→ Γ −→ Z −→ 1 is a short exact sequence with N being a ﬁnitely generated almost nilpotent group. M m ).12] for the details. in the Cayley graph given by any generating set containing (v.7.6 and 4.

we have the short exact sequence 1 −→ [Γ. h) =: h h H Deﬁnition 4.8.{d. then h 2R}| = O(R have S ∪T | |BR Dd S ∪T ≤ 2R. a−n ban = bm . Then Γ is generated by S ∪ T . Theorem 4. as we showed in Section 4. by the ≤ O(RD ). . and then BS ≃ H ⋊ Z. . where Γ1 = [Γ. Let the Abelian rank of ΓAb be r′ and its free-rank be r ≤ r′ . * Without looking into [DrK09]. prove the last proposition. Take a generating set e1 .7. but its length might be much longer than the original number of S -letters in w. Move all letters of T to the front of w: since π Γ1 ⊳ Γ. then [Γ. In fact. any subgroup has polynomial distortion. BS(1. Γ] such that wT wS = t1 · · · tr ′ h in Γ. tr′ }. Γ] has polynomial distortion.5. we can assume that Γ is nilpotent. it is ﬁnitely generated. Let H ≤ Γ.nilpoly} So b as a subgroup of BS(1. Proof.3. hence the group is two-step solvable. So. Now.polydist} SΓ ⊃ SH . so. So bm n n b = mn but bm n BS = 2n + 1. Take a ﬁnite generating set S of it. h S ∪T r′ 1 ≤ R. and SH and SΓ ﬁnite generating sets of H and Γ respectively. by Corollary 3. hence |{h ∈ Γ1 : h ).7.polydist} Proposition 4. x. Finitely generated almost nilpotent groups have polynomial growth. . ⊲ Exercise 4. First of all. Γ1 has polynomial growth of some degree d. but of the form wT wS . while wS is a word on S . . Γ] −→ Γ −→ ΓAb −→ 1 . er′ for ΓAb . for any g ∈ Γ1 and t ∈ T there is some g ′ ∈ Γ1 with gt = tg ′ . . 32 k · · · tr′r′ is O(R ). But. One can check that [BS. A concrete realization of this presentation is to take the additive group H of the rationals of the form x/my . We say that H has polynomial distortion if there is a polynomial p(x) so that p( h ∀h ∈ H . we can take the generators a : u → um and b : u → u + 1. k k Since w triangle inequality. where wT is a word on T of length equal to the number of T -letters in the original w. by Proposition 4. where t ∈ Z acts on H by multiplication by mt . We get a word w′ representing the same group element.1. y ∈ Z. 
Anti-example: Consider the solvable Baumslag-Solitar group. Consider any word w of length at most R in S ∪ T . The proof will use induction on the nilpotent rank. Since the number of diﬀerent possible words dD+r 1 tk 1 = O(R ). so Γ has polynomial growth. Indeed. m) does not have polynomial distortion. and let the π -preimages be T = {t1 . with r free generators. Note that for the distance in the corresponding Cayley graphs. Γ] is nilpotent of strictly smaller rank. . we also have tk 1 · · · tr ′ k is the degree of this polynomial for the generating sets S and S ∪ T . and hence. BS] = b . m) := a. but using the hint given in the paragraph before Proposition 4. {t. ≥ h Γ G) ≥ {p.6. but we will not need that result. If Γ is a ﬁnitely generated nilpotent group. dH (e. Γ1 has polynomial distortion. By ≤ the induction hypothesis. with H ∀h ∈ H . . there is some element h ∈ [Γ.8. since π (wT ) in ΓAb can be written in the form k1 1 r′ r′ ek 1 · · · er ′ . we altogether r . if D S S ∪T S ∪T ≤ k1 + · · · + kr′ ≤ R. Moreover. b | a−1 ba = bm . Furthermore.
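The exponential distortion of ⟨b⟩ ≤ BS(1, m) can be verified directly in the affine realization mentioned above, here with m = 2: a acts by u ↦ 2u and b by u ↦ u + 1. (With the composition convention (f ∘ g)(u) = f(g(u)), the defining relation comes out as a b a^{-1} = b^2.) A sketch:

```python
from fractions import Fraction

# Affine maps u -> s*u + t over Q, encoded as pairs (s, t).
A  = (Fraction(2), Fraction(0))      # a
Ai = (Fraction(1, 2), Fraction(0))   # a^{-1}
B  = (Fraction(1), Fraction(1))      # b

def compose(f, g):
    """(f o g)(u) = f(g(u)) for affine maps f = (s, t)."""
    return (f[0] * g[0], f[0] * g[1] + f[1])

def word(letters):
    m = (Fraction(1), Fraction(0))
    for x in letters:
        m = compose(m, x)
    return m

assert word([A, B, Ai]) == word([B, B])  # a b a^{-1} = b^2

n = 10
# The (2n+1)-letter word a^n b a^{-n} equals b^(2^n):
assert word([A] * n + [B] + [Ai] * n) == word([B] * 2 ** n)
```

So b^{2^n} has word length at most 2n + 1 in the generators {a, b}, while it has length 2^n inside the subgroup ⟨b⟩: the distortion is exponential, not polynomial.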

then we can continue splitting oﬀ a Z factor ad inﬁnitum. so. when Γ is ﬁnite the statement is trivial.2). .e. Γ]. which suggests that subexponential growth should crucially be used. it is not obvious that the splitting procedure (4. we know only that Γ has subexponential growth. .1) that this exact sequence splits.6. . and we are done. we may assume that Γ is inﬁnite and solvable.6 says that N has growth O(Rd−1 ). f2 . . gk . in the ﬁrst step we can have N = [Γ. f = fi1 · · · fil = gi1 γ −si1 · · · gil γ −sil = (gi1 ) · (γ −si1 gi2 γ si1 ) · (γ −si1 −si2 gi3 γ si1 +si2 ) · · · (γ − P l −1 fi pick si ∈ Z so that π (fi γ si ) = 0. Then. . i = 1. Since [Γ : Γj ] < ∞.2) terminates. .6. . Moreover.2) {e. (4. we know from the argument at (4. * Assume that Γ is ﬁnitely generated and has inﬁnitely many subgroups Γ1 . If we let k=1 sik gil γ −sil ) . with N in place of Γ. Now.1KGZ1} Corollary 3. and Γi ∩ Γi+1 . . . . in both cases.9. then S = N . there is a short exact sequence It is also clear from that argument that [N. then Proposition 4. The beginning and the overall strategy of the proof of the two cases are the same. we can iterate (4. (ii) The statement remains true if only subexponential growth is assumed.5. Let Γ = f1 .5 says that Γ is also almost nilpotent. also N does. Take γ ∈ Γ so that π (γ ) = 1 ∈ Z.6. {ex.. Furthermore. Γi ≃ Z. for each S = γm. . By its derived series such that Γj /[Γj . for the ﬁnitely generated solvable lamplighter group Γ = Z ≀ Z that we mentioned earlier. Proof of Proposition 4. .5 iteratively to the short exact sequences we got. For degree 0. then in the last N is a ﬁnite group. . since for any f ∈ ker(π ). We now prove (i) by induction on the degree of the polynomial growth of Γ. In case (ii). we can further assume that 1 −→ N −→ Γ −→ Z −→ 1 . there is a ﬁrst index j ≥ 0 in j = 0. So. 33 . by Proposition 4. By Proposition 4. Now.i := γ m gi γ −m m ∈ Z.inftysplit} ⊲ Exercise 4. .solvnilorexp} Theorem 4. 
This is done in the next exercise. Γ] is not ﬁnitely generated. so N is solvable. If this procedure stops after ﬁnitely many steps. Γj ] is inﬁnite. it is almost nilpotent. and then Proposition 4. Then Γ = g1 . (i) and (ii). by the argument at the end of Section 4. For instance. Of course. . Γi+2 . by induction. . = {1}. . if Γ has volume growth O(Rd ). which is trivially almost nilpotent. with the properties that for each i ≥ 1. Γ2 . N ] = [Γ. and ﬁnd that Γ at the top was also almost nilpotent. Γ has exponential growth and N = [Γ. Let gi = fi γ si ∈ N . k . Γ] = ⊕Z Z.9. fk . Proof. hence we can apply Proposition 4. However.{t. i. Show that Γ has exponential growth. (i) Any ﬁnitely generated almost solvable group of polynomial growth is almost nilpotent. So. γ . it is ﬁnitely generated. ﬁnishing the proof.1. .
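The polynomial growth guaranteed for nilpotent groups — with Bass–Guivarc'h degree 1·2 + 2·1 = 4 for the Heisenberg group, as noted earlier — can be probed numerically by breadth-first search. This is only a sanity check of ours, with deliberately crude bounds:

```python
from collections import deque

def heis_ball_sizes(nmax):
    """|B_n| in H3(Z) with generators X = (1,0,0), Y = (0,1,0) and inverses;
    (x,y,z)(a,b,c) = (x+a, y+b, z+c+x*b) encodes the upper triangular matrices."""
    def mul(g, h):
        return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0] * h[1])
    gens = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
    dist = {(0, 0, 0): 0}
    q = deque([(0, 0, 0)])
    while q:
        g = q.popleft()
        if dist[g] == nmax:
            continue
        for s in gens:
            h = mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                q.append(h)
    sizes = [0] * (nmax + 1)
    for d in dist.values():
        sizes[d] += 1
    return [sum(sizes[:k + 1]) for k in range(nmax + 1)]

sizes = heis_ball_sizes(12)
assert sizes[1] == 5 and sizes[2] == 17
# consistent with volume growth of order n^4 (a very crude sandwich):
assert 12 ** 4 / 1000 < sizes[12] < 1000 * 12 ** 4
```

The constants here are generous on purpose; the point is only that the ball sizes are compatible with degree-4 polynomial growth, not a numerical proof of the exponent.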

i ⊂ γ0.i . we also get γm+1. Recall the deﬁnition of virtually isomorphic groups from the end of Section 3. so l = k and i = j .i . and [Γ2 : ϕ(Γ1 )] < ∞.franks} {d. . Let BR be the ball of Y radius R in N with this generating set.i = γ · γm. and consider the elements { hi γ k | − R ≤ k ≤ R } . · · · γgi = γgi · · · γgi γgi Now notice that. So Y |BR |=K ≤ Y ∪{γ } |BR+1 | . hK }. y ) ≤ dY (ϕ(x). γm. Then Γ = Y ∪ {γ } . each of length at most m on the generators gi and γgi . . . So.i . 1 If hi γ k = hj γ l .expanding} Let us start with some deﬁnitions. then γ l−k = h− / N . but γ ∈ distinct and belong to BR+1 .i ∈ γ0. So Now consider a ﬁxed i and the collection of 2m words (for m > 0) ǫ2 ǫ1 ǫm · · · γgm γgi γgi ǫj ∈ { 0 . . . since indeed f ∈ S . l k=1 sik = 0. .i .i = γ1. There are K (2R + 1) of them.where the last factor is actually also a conjugated gil . . γm−1.i . due to f ∈ ker(π ). By ǫm − δm = 0. .4. y ∈ X . . Suppose |BR | = {h1 .i γ .i · γ −1 . γm−1.4 Expanding maps. . these elements are j hi . and so get that γn. Then ϕ : X −→ Y is an expanding map if ∃ λ > 1 such that λdX (x. 2R + 1 Y ∪{γ } This shows both parts (i) and (ii) of the proposition. Deﬁnition 4.i . Polynomial and intermediate volume growth {ss.i · · · γm. ǫm m ǫ1 ǫm ǫ1 = γ1 · · · γgi γgi . this yields γm.i · · · γm. Thus our relation becomes δm δ1 ǫm ǫ1 γ1 . a group homomorphism ϕ : Γ1 −→ Γ2 is an expanding virtual isomorphism if it is expanding 34 . (hence injective). That is. . 1 } .i ∈ γ1. the subexponential growth of Γ implies that there must exist some m and ǫm = δm such that δm δ1 ǫm ǫ1 . .i · · · γm. Since γm+1. Let X and Y be metric spaces. .1. ϕ(y )) ∀ x. In particular. somewhat miraculously. . . and doing this for every i gives that N is ﬁnitely generated. . We can do the same argument for m < 0.i | n ∈ Z is ﬁnitely generated. Y Let Y be this particular ﬁnite generating set of N . 4.

11. This implies polynomial growth by v (r) = v (λj ) ≤ v (λ⌈j ⌉ ) ≤ Kv (λ⌈j ⌉−1 ) ≤ · · · ≤ K ⌈j ⌉ v (1). S2 ) for some k . the map x → k · x is an expanding virtual isomorphism.11. where the bounded Jacobian is analogous to the ﬁnite index of the subgroup. If {l. Given two diﬀerent generating sets S1 and S2 for Γ. ** If Zd ⋊M Z is nilpotent. K ) and K j = K logλ r = rlogλ K . (And groups having no isomorphic factors with non-trivial kernel are called Hopﬁan. prove that if ϕ is expanding in G(Γ.12. Then taking the supremum over x on both sides gives Kv (r) ≥ v (λr). ⊲ Exercise 4. let us explore a geometric analogue: Lemma 4.4 of expanding. Proof. The standard examples of expanding virtual automorphisms are the following: 1. Let M be a Riemannian manifold. K ). ⊲ Exercise 4. Assume v (r) := supx∈M vol(Br (x)) < ∞. In Zd .10 (Franks’ lemma 1970). S1 ) then ϕk is expanding in G(Γ.10.10.n 0 1 y ϕ −→ 0 0 1 1 0 0 mx mnz 1 0 is an expanding virtual automorphism. set j = logλ r. (Hint: Emulate the ideas of the proof of the topological version. we have ϕ(Br (x)) ⊇ Bλr (ϕ(x)). Prove the group version of Franks’ Lemma. Groups with this property are called co-Hopﬁan. 2. we have that Kv (r) ≥ vol(ϕ(Br (x)) ≥ vol(Bλr (ϕ(x))). since [Zd : k Zd ] = k d . which gives vol(ϕ(Br (x))) ≥ vol(Bλr (ϕ(x))). we ﬁnally have v (r) ≤ Crd . For the Heisenberg group H3 (Z).) 35 . with index [H3 (Z) : ϕm. then M has polynomial volume growth. but in a discrete setting.n (H3 (Z))] = m2 n2 .) Instead of proving Franks’ Lemma 4. where d = logλ K and C = v (1) max(1. Since K ⌈j ⌉ ≤ K j max(1. If a ﬁnitely generated group has an expanding virtual automorphism. ny 1 ⊲ Exercise 4.{l. then it has polynomial growth.franks2} ϕ : M → M is an expanding homeomorphism with Jacobian Det(Dϕ) < K . From Deﬁnition 4. and the following: For a given r. By the bound on the Jacobian of ϕ.franks} Lemma 4. does it have an expanding virtual automorphism? 
There exist nilpotent groups with no isomorphic subgroup of ﬁnite index greater than one [Bel03]. the map 1 x z m.
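The Heisenberg example can be double-checked in coordinates: writing the matrix with rows (1, x, z), (0, 1, y), (0, 0, 1) as the triple (x, y, z), the map ϕ_{m,n}(x, y, z) = (mx, ny, mnz) is an injective endomorphism. A sketch (the random sampling is ours; the index computation [H3(Z) : ϕ_{m,n}(H3(Z))] = m^2 n^2 is not verified here):

```python
import random

def mult(g, h):
    """(x,y,z) coordinates of the Heisenberg matrices: the z-entry of a
    product picks up the cross term x1 * y2."""
    (x1, y1, z1), (x2, y2, z2) = g, h
    return (x1 + x2, y1 + y2, z1 + z2 + x1 * y2)

def phi(g, m, n):
    x, y, z = g
    return (m * x, n * y, m * n * z)

random.seed(0)
for _ in range(200):
    g = tuple(random.randint(-9, 9) for _ in range(3))
    h = tuple(random.randint(-9, 9) for _ in range(3))
    m, n = random.randint(1, 5), random.randint(1, 5)
    assert phi(mult(g, h), m, n) == mult(phi(g, m, n), phi(h, m, n))
```

The homomorphism identity works because the cross term transforms as m x1 · n y2 = mn · x1 y2, matching the scaling of the z-coordinate.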

16 (Higher order Franks’ lemma). deﬁned in Section 4. There are no groups with superpolynomial growth but with growth of order √ exp(o( n)). The following groups of exponential growth are scale-invariant: • the lamplighter group Z2 ≀ Z. The proof of Grigorchuk’s group being of intermediate growth relies on the following observation: Lemma 4. (This is the case. e. on “scale-invariant tilings” in transitive graphs. this was disproved in [NekP09]. For an introduction to groups of intermediate growth. which is still relevant to percolation renormalization and is still open.3. torsion-free Gromov-hyperbolic groups are not scale-invariant. BarGN03]. Γ has either exponential growth or polynomial growth.14 (Grigorchuk [Gri83]). However. see Section 15.13. see [dlHar00. Theorem 4.⊲ Exercise 4.) Does this imply that Γ has polynomial growth? A condition weaker than in the last exercise is the following: a group Γ is called scale-invariant This notion was introduced by Itai Benjamini.. Z). In [NekP09]. • the solvable Baumslag-Solitar group BS(1. when ϕ is expanding. and for more details.1. *** Assume Γ is a ﬁnitely generated group and has a virtual isomorphism ϕ such that n≥1 ϕn (Γ) = {1}.12. A linear group (subgroup of a matrix group) Γ either has F2 ≤ Γ or it is solvable. described below in Section 5. There exist ﬁnitely generated groups with intermediate growth.1. • the aﬃne group Zd ⋊ GL(d. Benjamini’s question was motivated by the “renormalization method” of percolation theory. The proofs use the self-similar actions of these groups on rooted trees. 36 . In particular.scaleinv} {l. {t. see Question 12. m).26.g. by the following examples. a result that can be a little bit motivated by the well-known fact that hyperbolic spaces do not have homotheties. we formulated a more geometric version of this question. Going back to volume growth. and he had conjectured that it implies polynomial growth of Γ. On the other hand. Conjecture 4.13 (Tits’ alternative [Tit72]). Theorem 4. 
If Γ is a group with growth function vΓ (n) and there exists an expanding virtual isomorphism Γ × Γ × · · · × Γ −→ Γ. see [GriP08]. m≥ 2 if it has a chain of subgroups Γ = Γ0 ≥ Γ1 ≥ Γ2 ≥ · · · such that [Γ : Γn ] < ∞ and Γn = { 1 } .highfranks} then exp(nα1 ) vΓ (n) exp(nα2 ) for some 0 < α1 ≤ α2 < 1.15. here are some big theorems: Theorem 4.

e. Prove the higher order Franks’ lemma.) α1 .1. such that |∂Sn | → 0. where the boundary ∂E S is deﬁned to be the set of edges which are adjacent to a vertex in S and a vertex outside of S . and the inner vertex boundary in ∂V S . i. a linear isoperimetric constant ι∞. as the example after the next exercise shows. Sn ⊆ Sn+1 ∀ n and n inequality. often denoted IPd ..) If a group satisﬁes IP∞ .3 and 5. |Sn | ﬁnitely generated Cayley graphs is. any of these could have been used in the deﬁnition. Sn ⊆ V (G). The expanding 5 5. Not every amenable graph (b) Show that a bounded degree tree is amenable iff there is no bound on the length of “hanging chains”. (Hint: Γm ֒→ Γ implies the existence of virtual isomorphism gives the existence of α2 . Deﬁnition 5. i. in addition to connectedness. then it is called nonamenable. See Subsections 5.e. then the sequence {Sn } is called a Følner exhaustion. we can also consider the outer vertex boundary out ∂V S ..⊲ Exercise 4. the set of vertices outside of S with at least one neighbour in S . the Sn s also satisfy Sn ր V (G). denoted by ιψ. Let ψ be an increasing. Deﬁnition 5.amenable} has a Følner exhaustion. chains of vertices with degree 2.isop} Basic deﬁnitions and examples {ss. Then we say that G satisﬁes the ψ -isoperimetric inequality IPψ if ∃ κ > 0 such that |∂E S | ≥ κ ψ (|S |) for any ﬁnite subset of vertices S ⊆ V (G). We say that a group is Følner amenable if any of its If.14. Also note that satisfaction of an isoperimetric inequality is a quasi-isometry invariant. as described in the next deﬁnition.E .e. ⊲ Exercise 5. i.E in this case is usually called the Cheeger constant..isoperimetric} We start with a coarse geometric deﬁnition that is even more important than volume growth. 37 .E of the inﬁnite binary tree. Such an {Sn } is called a Følner sequence.1. since v (n)m ≤ C v (kn) for all n implies that v (n) has superpolynomial growth.1 Isoperimetric inequalities {s. 
positive function and let G be a graph of bounded degree. (This is so well-known that one might forget that it needs a proof. Since G has bounded degrees. Zd satisﬁes IPn1−1/d . The isoperimetric {d. the set of vertices inside S with at least one neighbour outside S .4. The supremum of all κ’s with this property is usually called the ψ -isoperimetric constant. A bounded degree graph G is amenable if there exists a sequence {Sn } of connected subsets of vertices. Besides the edge boundary ∂E deﬁned above. (a) Find the edge Cheeger constant ι∞.isopbasic} {d. Sn = V (G).2.
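The inequality IP_{n^{1−1/d}} for Z^d (see Theorem 5.8 below, with constant 2d) can be spot-checked in Z^2, where boxes achieve equality in |∂_E S| ≥ 4|S|^{1/2}. A sketch of ours:

```python
import random

def edge_boundary(S):
    """Number of Z^2-edges with exactly one endpoint in S."""
    S = set(S)
    return sum(1 for (x, y) in S
               for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if nb not in S)

for k in (1, 2, 5, 20):
    box = {(i, j) for i in range(k) for j in range(k)}
    assert edge_boundary(box) == 4 * k  # equality: |bd S| = 4 |S|^{1/2}

random.seed(2)
for _ in range(50):
    S = {tuple(random.randrange(8) for _ in range(2)) for _ in range(25)}
    assert edge_boundary(S) >= 4 * len(S) ** 0.5 - 1e-9
```

Note that |∂_E S|/|S| = 4/k → 0 for the boxes, so the same sets also witness the amenability of Z^2 as a Følner exhaustion.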

tilings in the Euclidean versus hyperbolic plane. r Proof. it might not be obvious at ﬁrst sight. “hyperbolic”. since Γ acts on G by r r Sn ⊆ Sn . but there are also important diﬀerences. centered at the origin. then {Sr (i) }i ր G is a Følner exhaustion. Without loss of generality we can assume e ∈ Sn . g∈Br gSn .2. ∗ ∗ r .. In Deﬁnition 5. Lemma 5. “negative curvature” are very much related to each other.1. (a) Consider the standard hexagonal lattice. and can group the hexagons into countries. Given a Følner sequence Sn . ∗ a Følner sequence. ⊲ Exercise 5. The notions “non-amenable”. Here is a down-to-earth exercise to practice these notions. since e ∈ Br . Now.hexsept} . you can achieve Sn ր V (G) in the case of amenable Cayley graphs. and e ∈ Sn implies that Br ⊆ Sr . however. Also. for each even k ∈ Z.g. set Sn := Consider now the following tree. If we take now a rapidly growing sequence 1 r |Br | . and.root a binary tree at every vertex whose distance from the origin is between 2k and 2k+1 .2. Figure 5. then it is not 38 {f. e. This tree has unbounded hanging chains and is thus amenable. hence |Sn | ≤ |Sn |. it is clear that you cannot both have Sn connected and Sn ⊆ Sn+1 in a Følner sequence. and thus choosing any gn ∈ Sn we can consider gn Sn . choose nr such that |∂Snr |/|Snr | ≤ ∗ ∗ {r(i)}i such that Sr (i) ⊆ Br (i+1) . Then {Sr } is and set Sr := Sn r The archetypical examples for the diﬀerence between amenable and non-amenable graphs are the Euclidean versus hyperbolic lattices. r ∂Sn ≤ g∈Br |∂ (gSn )| ≤ |B (r)| |∂Sn |. Show that if you are given a bound B < ∞. for each r ∈ N.1: Trying to create at least 7 neighbours for each country. each being a connected set of at most B hexagons. Thus we have r |∂Sn | |∂Sn | ≤ |Br | . but part (a) is a special case of part (b). r |Sn | |Sn | Now. where Br is the ball in G of radius r −1 graph automorphisms. causing them to have a large boundary. Take a bi-inﬁnite path with an origin. 
because otherwise the Sn s would contain larger and larger portions of binary trees.

m) = (f. it is not hard to see that this letf Cayley graph is amenable: set Sn = {(f. m) : −n ≤ m ≤ n and supp(f ) ⊆ [−n. deﬁne the combinatorial curvature at a vertex x by curvG (x) := 2π − (Li − 2) π . m) = (em + f. then it is not {ex. On the other hand. that if there exists some δ > 0 such that curvature is less than −δπ at each vertex. and so they can be interpreted as “switch”. 39 . The lamplighter group is deﬁned to be the wreath product Z2 ≀ Z. “Right” and “Left”. we need to multiply from the left with these nice generators to get the nice interpretation. Let ek ∈ ⊕Z Z2 denote the function that has a 1 in the k th place and zeroes everwhere else. (b) In a locally ﬁnite planar graph G. where f : Z −→ Z2 has |supp(f )| < ∞.possible to have at least 7 neighbours for each country. We now look at an important example of a Følner amenable group with exponential growth. Show that a group with a continuum number of ends must be non-amenable. m) = The volume of a ball in this generating set has the bound |Bn (id)| ≥ 2n/2 . m). m + 1) and L · (f.2 Amenability. wobbling paradoxical decompositions {ss. 0) · (f. or not.3.2. The R := (0.amen} The algebraic counterpart of amenability is the following. see Section 4. and m ∈ Z is the lamplighter or marker. 5. Show possible that both G and its planar dual G∗ are edge-amenable. Thus a general element of the lamplighter group looks like (f. 0). and σ is the left shift automorphism on this group. 1). (Unfortunately. invariant means. while R · (f. −1). and L := (0. because of the way we deﬁned semidirect multiplication. Li i where the sum runs over the faces adjacent to x. Multiplication by these generators gives s · (f. Such pairs multiply according to the semidirect product rules. interpreted as a conﬁguration of Z2 -lamps on a Z-street. n]} in and observe that |Sn | = 22n+1 (2n + 1) and |∂V Sn | = 22n+1 · 2. Z where the left group consists of all bi-inﬁnite binary sequences with only ﬁnitely many nonzero terms. 
group is generated by the following three elements: s := (e0 . m). and Li is the number of sides of the ith face. This bound is clear because at each step you can either “switch” (apply s).) exponential growth. therefore it has correspond to m = −n or m = n. m) = (e0 . since the points on the boundary (f. which is deﬁned to be Z2 ⋊σ Z.endsnonamen} ⊲ Exercise 5. m − 1).
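The bound |B_n(id)| ≥ 2^{n/2} can be checked by breadth-first search over the states (f, m), with the three generators acting as "switch the lamp at the marker" and "move the marker right/left". A sketch of ours:

```python
from collections import deque

def lamplighter_ball_sizes(nmax):
    """BFS from the identity; a state is (frozenset of lamps that are on, marker)."""
    def neighbours(state):
        lamps, m = state
        yield (lamps ^ {m}, m)   # s: switch the lamp at the marker
        yield (lamps, m + 1)     # R: move the marker right
        yield (lamps, m - 1)     # L: move the marker left
    start = (frozenset(), 0)
    dist = {start: 0}
    q = deque([start])
    while q:
        v = q.popleft()
        if dist[v] == nmax:
            continue
        for w in neighbours(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    sizes = [0] * (nmax + 1)
    for d in dist.values():
        sizes[d] += 1
    return [sum(sizes[:k + 1]) for k in range(nmax + 1)]

sizes = lamplighter_ball_sizes(12)
assert sizes[1] == 4  # id, s, R, L
for n, b in enumerate(sizes):
    assert 2 * b * b >= 2 ** n  # |B_n| >= 2^(floor(n/2))
```

The lower bound comes from alternating s and R: in 2k steps one reaches every lamp configuration supported on {0, …, k − 1}, which is 2^k distinct elements.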

show that if A1 and A2 are amenable. ⊲ Exercise 5. Section 7.F2vN} {t. there exists a linear map m : L∞ (Γ) −→ R with the following {p.3 (Følner [Føl55]). A ﬁnitely generated group Γ is von Neumann amenable if it two properties for every bounded f : Γ −→ R: 1. m(f ) ∈ [inf(f ). More generally. Theorem 5. Moreover. where fγ (x) := f (γx).4. identify µ(A) and m(1A ) and approximate general bounded functions by step functions. m(fγ ) = m(f ). a group is amenable if 40 . From the amenability of Z.measurevN} |[−n. then Γ is as well. If Γ is a ﬁnitely generated group.4). as it fails ﬁnite additivity. which requires the Axiom of Choice.5. and prove Kesten’s theorem in a future lecture. i. We will sketch a proof of Følner’s theorem below. but the obvious candidate µ(A) = lim sup n→∞ has an invariant mean on L∞ (Γ). (a) Prove that subgroups of amenable groups are amenable.kestenorig} {t. Example: Z is amenable. A ﬁnitely generated group Γ is von Neumann amenable if and only if there exists a ﬁnitely additive invariant (with respect to translation by group elements) probability measure on all subsets of Γ.2.3 (von Neumann 1929). Proposition 5.{d. {pr. equals 1.e. sup(f )] 2. and hence the exercise implies that any solvable group is amenable. n] ∩ A| 2n + 1 is not good enough.vNamenable} Deﬁnition 5. To prove the proposition.2. Theorem 5.4 (Kesten [Kes59]). as deﬁned in (1. Proposition 5. with [Γ. it can be proved that an inﬁnite direct sum of amenable groups is also amenable. F2 is nonamenable in the von Neumann sense. hence the lamplighter group Γ is and only if all ﬁnitely generated subgroups of it are amenable. Γ] = ⊕Z Z2 . For all γ ∈ Γ. A ﬁnitely generated group Γ is amenable if and only if the spectral radius ρ of any of its Cayley graphs.folner} also amenable: it is two-step solvable. this exercise gives that Zd is amenable. But there is some sort of limit argument to ﬁnd an appropriate measure. (b) Given a short exact sequence 1 −→ A1 −→ Γ −→ A2 −→ 1. 
it is von Neumann amenable if and only if any of its Cayley graphs is Følner amenable..
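The Følner property of S_n = {−n, …, n} ⊆ Z behind the amenability of Z is a one-liner to verify: |S_n g △ S_n| / |S_n| = 2|g| / (2n + 1) → 0 for any fixed g. A sketch of ours:

```python
def folner_ratio(n, g=1):
    """|S_n + g symmetric-difference S_n| / |S_n| for S_n = {-n, ..., n} in Z."""
    S = set(range(-n, n + 1))
    return len(S ^ {x + g for x in S}) / len(S)

assert folner_ratio(10) == 2 / 21
assert folner_ratio(1000) == 2 / 2001
assert folner_ratio(5, g=3) == 6 / 11
```

This is exactly the quantity that makes µ_n(A) := |A ∩ S_n| / |S_n| asymptotically translation invariant in the sketched proof of Følner's theorem.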

4.. using wobbling paradoxicity. but ﬁrst we deﬁne some notions and state an exercise we will use. Certainly µ({e}) = 0.olshanski} and odd. ϕ : X −→ X is wobbling if supx d(x.Proof. this time for ﬁnitely presented groups. Let X be a metric space. Let A− denote the set of words beginning with a−1 . and again by Olshanski and Sapir in 2002. |Sn g −1 △ Sn | < ǫ|Sn | for g a generator. because {Sn } is a Følner |Ag∩Sn | | Sn | = same limit as µn (A). although I am not sure that this proof has appeared anywhere before. B − . n) = g1 ..wobbling} {d. * SO(3) ≥ F2 (Use the Ping Pong Lemma. giving invariance of µ. and B similarly. (Hint: State.wobbling} if G = (V. ⊲ Exercise 5.2. See [Lub94] or [Wag93] for more on this. deﬁne µn (A) := sequence. An example of this is the Burnside group B (m. and let A = A+ ∪ A− . This is a contradiction. So µn (Ag ) = | A∩ S n | | Sn | . The following theorem was proved by Olshanski in 1980. and let A+ denote the set of words in F2 beginning with a.5 form the basis of the Banach-Tarski paradox: the 3dimensional solid ball can be decomposed into ﬁnitely many pieces that can be rearranged (using rigid motions of R3 ) to give two copies of the original ball (same size!).5. Theorem 2. Gromov in 1987. E ) is a graph. Deﬁne B + . Deﬁnition 5. This exercise and Proposition 5. Lemma 2. . and so we have = µ(A+ ) + µ(A− ) + µ(B + ) + µ(B − ) = µ(A+ ) + µ(aA− ) + µ(B + ) + µ(bB − ) = µ(A+ ⊔ aA− ) + µ(B + ⊔ bB − ) = 2µ(F2 ). and show that some sort of limit exists. Notice that F2 = A ∪ B ∪ {e}. Denote F2 as a.1. called the Hall-Rado theorem.) Sketch of proof of Theorem 5. I learned about this approach. There exist nonamenable groups without F2 as a subgroup. Adian in 1982. see [ElTS05] and the references there. Theorem 5. gm | g n = 1∀g for m ≥ 2 and n ≥ 665 {t. For the reverse direction. Now.6. prove and use the locally ﬁnite inﬁnite bipartite graph version of the Hall marriage theorem [Die00. from G´ abor Elek. as well as B + ⊔ bB − .2]. 
suppose that we have µ as in Proposition 5. b .15). * A bounded degree graph is nonamenable if and only if there exists a wobbling paradoxical decomposition. | ( A ∩ S n g −1 ) g | | Sn | will have the 41 . ⊲ Exercise 5.6 (Olshanski 1980). µ(F2 ) = µ(A) + µ(B ) + µ({e}) Now. ϕ(x)) < ∞. {ex. Further. and that F2 also equals A+ ⊔ aA− .. and thus no such measure exists. if there exists a Følner sequence {Sn }. Now we sketch the proof of the Følner theorem. then the maps α and β are a paradoxical decomposition of G if they are wobbling injections such that α(V ) ⊔ β (V ) = V .3.
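The decomposition F2 = {e} ⊔ A⁺ ⊔ A⁻ ⊔ B⁺ ⊔ B⁻ with F2 = A⁺ ⊔ aA⁻ = B⁺ ⊔ bB⁻ can be verified on balls of the free group, encoding words over a, A = a^{-1}, b, B = b^{-1}. A sketch of ours:

```python
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def ball(n):
    """All reduced words of length <= n in F2."""
    words, out = [""], {""}
    for _ in range(n):
        words = [w + x for w in words for x in "aAbB" if not w or INV[w[-1]] != x]
        out.update(words)
    return out

def left_mult(letter, w):
    """Reduced form of letter * w."""
    return w[1:] if w and w[0] == INV[letter] else letter + w

n = 6
B = ball(n)
assert len(B) == 2 * 3 ** n - 1

Aplus = {w for w in B if w.startswith("a")}
Aminus = {w for w in B if w.startswith("A")}
aAminus = {left_mult("a", w) for w in Aminus}

# A+ and a*A- are disjoint, and together they cover the ball of radius n-1:
assert not (Aplus & aAminus)
assert Aplus | aAminus >= ball(n - 1)
```

This is the finite-ball shadow of the identity F2 = A⁺ ⊔ aA⁻ used above to contradict the existence of an invariant mean.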

where α|Ai is translation by some gi ∈ Br . then.6. we have in ρ ∂V K ≥ g in ∂V K ≥ |K |/2 . S ). so assume that G is nonamenable. µ(α(Ci. Then we in ∂V K 1 ≥ . Let Γ be a ﬁnitely generated group. we prove the contrapositive.j )) + µ(β (Ci. if g Proof. then since µ(α(Ci.CSC} in generators. More generally. Suppose that both of these maps move a vertex a distance of at most r. Combining our upper and lower bounds on the number of elements of K moved out of K by g . Thus |K \ Ks−1 | ≤ ∂V K ..e. it has a paradoxical decomposition α and β . S and we are done. Theorem 5. there is a g that moves at least |K |/2 elements out of K . we have µ(V ) = µ(α(V ) ⊔ β (V )) = Hence Γ is von Neumann nonamenable. by iterating the above argument. have in Let K be any ﬁnite subset of Γ.j ) = µ(β (Ci.j )) = 2µ(V ) . with a right Cayley graph G(Γ.For the forward direction. β |Bi is a translation by some hi ∈ Br .j )). Therefore. i.7 ([CouSC93]). Take any s ∈ S .j we let Ci.growthisop} The following theorem was proved by Gromov for groups. which implies E number of x’s moved out of K ≥ |K |/2. Then we can decompose V = A1 ⊔ A2 ⊔ · · · ⊔ Ak = B1 ⊔ B2 ⊔ · · · ⊔ Bℓ . if we pick g ∈ Bρ (o) uniformly at random. let ρ = ρ(2|K |).j )) = µ(Ci. then P g moves x out of K ≥ 1/2 ≥ P g leaves x in K . x → xs moves x out of K . Hence. only if S = r. Deﬁne the inverse growth rate by ρ(n) := min{r : |Br (o)| ≥ n} . and if we assume for contradiction that there exists some invariant probability 5. Then by Exercise 5. we see that |K \ Kg −1 | ≤ r ∂V K . ℓ ≤ |Br |. i. Recall the deﬁnition ∂V K = {v ∈ K : ∃γ ∈ S vγ ∈ / K }.j := Ai ∩ Bj . and generalized by Coulhon and SaloﬀCoste in 1993 for any transitive graph.e. If measure µ on Γ as in Deﬁnition 5. in in x ∈ ∂V K . It is clear that x ∈ K \ Ks−1 . and observe that for any x ∈ K . and k..2. 42 . i. g ∈ Γ is a product of r On the other hand.3 From growth to isoperimetry in groups {ss. xg : g ∈ Bρ (o) \ K ≥ |K | ≥ xg : g ∈ Bρ (o) ∩ K . 
since the size of {xg : g ∈ B_ρ(o)} is greater than 2|K|. Combining the upper and lower bounds on the number of elements of K moved out of K, we arrive at |∂_V^in K| / |K| ≥ 1 / ( 2 ρ(2|K|) ), which is exactly the claim of the theorem.
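Theorem 5.7's conclusion |∂_V^in K| / |K| ≥ 1 / (2 ρ(2|K|)) can be sanity-checked in Z^2, where |B_r| = 2r^2 + 2r + 1 makes ρ(n) roughly √(n/2). A sketch of ours, on boxes:

```python
def rho(n):
    """Inverse growth in Z^2: least r with |B_r| = 2r^2 + 2r + 1 >= n."""
    r = 0
    while 2 * r * r + 2 * r + 1 < n:
        r += 1
    return r

def inner_vertex_boundary(K):
    """Vertices of K with at least one neighbour outside K."""
    K = set(K)
    return sum(1 for (x, y) in K
               if any(nb not in K
                      for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))))

for k in range(1, 40):
    box = {(i, j) for i in range(k) for j in range(k)}
    assert 2 * rho(2 * len(box)) * inner_vertex_boundary(box) >= len(box)
```

For a k × k box the two sides are of order 4k · 2k versus k^2, so the inequality holds with plenty of room, matching the intuition that polynomial growth forces a matching isoperimetric inequality.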

Related isoperimetric inequalities were proved in [BabSz92] and [Żuk00], with applications to combinatorial number theory.

⊲ Exercise 5.7.*** For any f.g. group Γ, does there exist C1 < ∞ such that for every finite A ⊂ Γ there is some g ∈ Γ with 0 < d(A, gA) ≤ C1? And does there exist C2 < ∞ such that for all finite A, B ⊂ Γ there is some g ∈ Γ with 0 < d(A, gB) ≤ C2?

⊲ Exercise 5.8 (Timár). Give an example of a group Γ where C2 > 1 is needed.

One reason for these exercises to appear here is the important role that translations played also in Theorem 5.7. A simple application of an affirmative answer to Exercise 5.7 would be that the boundary-to-volume ratio gets worse for larger sets, i.e., the isoperimetric profile φ(r) := inf{ |∂S|/|S| : |S| ≤ r } would be roughly decreasing, since we could glue translated copies of any small set to get larger sets whose boundary-to-volume ratio is no larger.

Examples: For Γ = Z^d, we have ρ(n) ≍ n^{1/d}, hence Theorem 5.7 gives that Z^d satisfies IP_d, and Z^d also shows that the inequality of Theorem 5.7 is sharp, at least in the regime of polynomial growth. The lamplighter group shows that the inequality is also sharp for groups of exponential growth.

5.4 Isoperimetry in Z^d

On Z^d, the following sharp result is known:

Theorem 5.8. For any finite S ⊂ Z^d,

    |∂E S| ≥ 2d |S|^{1 − 1/d},

where ∂E S is the set of edges with one vertex in S and one in S^c.

One proof of Theorem 5.8, using some very natural compression methods, was given by Bollobás and Leader [BolL91]. A beautiful alternative method can be seen in [LyPer10, Section 6.7]: it goes through the following theorem, which is proved using conditional entropy inequalities. (We will see another proof strategy in the next section.) See [BaliBo] for a concise treatment and a mixture of both the entropy and compression methods.

Theorem 5.9 (Discrete Loomis-Whitney Inequality). For any finite S ⊆ Z^d,

    |S|^{d−1} ≤ ∏_{i=1}^{d} |Pi(S)|,

where Pi(S) is the projection of S onto the hyperplane perpendicular to the ith coordinate direction.
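The Loomis-Whitney inequality is easy to sanity-check computationally. Here is a minimal Python sketch (mine, not from the text; the two test sets are arbitrary illustrations), checking the inequality for d = 3 on a box, where it holds with equality, and on a diagonal segment:

```python
from itertools import product

# Sketch (not from the text): checking the discrete Loomis-Whitney inequality
# |S|^(d-1) <= prod_i |P_i(S)| for d = 3 on two illustrative subsets of Z^3.

def loomis_whitney_holds(S, d=3):
    # P_i(S): projection of S along the i-th coordinate direction
    projections = [{tuple(x[j] for j in range(d) if j != i) for x in S}
                   for i in range(d)]
    rhs = 1
    for P in projections:
        rhs *= len(P)
    return len(S) ** (d - 1) <= rhs

box = set(product(range(2), range(3), range(4)))   # a 2x3x4 box: equality case
diag = {(k, k, k) for k in range(5)}               # a diagonal segment
assert loomis_whitney_holds(box)
assert loomis_whitney_holds(diag)
```

For the box, both sides equal 576, illustrating that boxes are the extremal sets.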

The Loomis-Whitney inequality gives

    |S|^{(d−1)/d} ≤ ( ∏_{i=1}^{d} |Pi(S)| )^{1/d} ≤ (1/d) ∑_{i=1}^{d} |Pi(S)| ≤ |∂E S| / (2d),

using the arithmetic-geometric mean inequality in the middle step, and, in the last step, the fact that each point of Pi(S) accounts for at least two edges of ∂E S in the ith direction. This is exactly Theorem 5.8, hence we get that Z^d satisfies IP_d.

Let me give my personal endorsement for the conditional entropy proof of Theorem 5.9. In [Pet08], I needed to prove an isoperimetric inequality in the wedge Wh ⊂ Z³, the subgraph of the lattice induced by the vertices

    V(Wh) = { (x, y, z) : x ≥ 0 and |z| ≤ h(x) },

where h(x) is some increasing function. It was shown in [LyT83] that Wh is transient iff

    ∑_{j=1}^{∞} 1 / (j h(j)) < ∞.   (5.1)

E.g., h(v) = v^α with any α > 0 gives transience, while h(j) = log^r j gives transience iff r > 1. I wanted to show that this implies Thomassen's condition for transience [Tho92], which is basically an isoperimetric inequality IPψ with

    ∑_{k=1}^{∞} ψ(k)^{−2} < ∞,   (5.2)

proved to imply transience using the flow criterion of transience. (The reason for this goal will be clear in Section 12.) In such a subgraph Wh, the Bollobás-Leader compression methods seem completely useless, but I managed to prove the result using conditional entropy inequalities for projections. What I proved was that Wh satisfies IPψ with

    ψ(v) := √( v · h( √( v / h(√v) ) ) );

for h(v) = v^α, we get ψ(v) = v^{1/2 + α/4 − α²/8}, which is close to the easily conjectured isoperimetric function v^{(1+α)/(2+α)} only for α close to 0, but that is exactly the interesting regime here. As can be guessed from its peculiar form, this ψ is not likely to be sharp, but it is good enough to deduce (5.2) from (5.1), hence this strange ψ(v) suffices.

⊲ Exercise 5.9. Show that the wedge Wh with h(v) = v^α, α > 0, does not satisfy IPψ whenever ψ(v)/v^{(1+α)/(2+α)} → ∞.

6 Random walks, discrete potential theory, martingales

Probability theory began with the study of sums of i.i.d. random variables: LLN, CLT, large deviations. One can look at this as the theory of 1-dimensional random walks. One obvious generalization is to consider random walks on graphs with more interesting geometries, or on arbitrary graphs in general. The first two sections here will introduce a basic and very useful technic for this: electric networks and discrete potential theory. Another obvious direction of generalization is to study stochastic processes that still take values in R, and resemble random walks in some sense, but whose increments are not i.i.d. any more: these will be the so-called martingales, the subject of the third section. Discrete harmonic functions will connect martingales to the first two sections.

xn = x0 .1.. Show that a Markov chain (V. we have n−1 i=0 n−1 i=0 p(xi . y ) = P Xn = y | X0 = x . where it is usually assumed that x ∈ V . On the other hand.” ⊲ Exercise 6. 45 . Consider the n-cycle Z (mod n) with transition probabilities p(i. y ∈ V there is some n with pn (x. . we get the measure πP (y ) = x ∈V y ∈V p(x. i) = p > 1/2 for all i. y ) = π (y )p(y. That is. y ). which we now deﬁne. . A Markov chain is a sequence of random variables X1 . The above intuition with reversing the movie goes wrong now because π is not a ﬁnite measure.1 Markov chains. a movie of the evolving chain looks diﬀerent from the reversed movie: it moves more in the + direction. For an inﬁnite state space. or several. . A measure π is called stationary if the chain leaves it invariant: πP = π . X3 . not identically zero. y ) are called the transition probabilities. there could be no stationary measure. intuitively. We say that a Markov chain is reversible if there exists a reversible measure. the behaviour of the future states is governed only by the current state.networks} Simple random walks on groups (for which we saw examples in Section 1. . X2 . We will not discuss these matters in this generality. We will also use the notation pn (x. just as in Section 1. for instance. since π (x)/π (y ) is given by the Markov chain. But not every stationary measure is reversible: it is good to keep in mind the following simple examples. Note that this reversible measure is unique up to a global factor. P ) has a reversible measure if and only if for all oriented cycles x0 . See. are special cases of reversible Markov chains. the measure π ′ (i) = p/(1 − p) ′ i is reversible. which is unique if the chain is irreducible (which means that for any x. y ) = 1 for all Given any measure π (ﬁnite or inﬁnite) on the state-space V . X2 = x2 . x1 . y. . . the chain with the same formula for the transition probabilities on Z is already reversible. non-negative function π (x). 
the Markov chain tells us how it π (x)p(x. electric networks and the discrete Laplacian {ss. It is easy to see that this is a non-reversible chain. evolves in time: after one step.6. y ) > 0). There is also a probabilistic proof. Example.e. xi ). But the uniform distribution is a stationary measure.4]. ∈ V such that P Xn+1 = y | X1 = x1 . y ). x) ∀x. that satisﬁes π (x)p(x. i + 1) = 1 − p(i + 1. i. . constructing a stationary measure using the expected number of returns to any given vertex. Xn = xn = P Xn+1 = y | Xn = xn = p(xn .1). . Although the uniform measure π (i) = 1 ∀ i ∈ Z is still stationary but nonreversible. . [Dur96. The values p(x. These facts follow from the Perron-Frobenius theorem for the matrix of transition probabilities.1. An important basic theorem is that any ﬁnite Markov chain has a stationary distribution. . xi+1 ) = p(xi+1 . which we will not state here. Note also that reversible measures are also stationary. . A simple real-life example of an inﬁnite chain having some unexpected stationary measures is the following joking complaint of my former advisor Yuval Peres: “Each day is of average busyness: busier than yesterday but less busy than tomorrow. Section 5. hence looking at a typical realization of the movie is simply meaningless. but see the example below.
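The n-cycle example can be checked mechanically. The following is a small illustrative sketch (the values n = 5, p = 0.7 are arbitrary choices of mine): the uniform measure passes the stationarity test πP = π, but fails detailed balance.

```python
# Illustrative sketch: biased walk on the n-cycle, p(i, i+1) = p, p(i, i-1) = 1-p.
# The uniform measure is stationary, but detailed balance fails for p != 1/2.

n, p = 5, 0.7

def trans(i, j):
    if j == (i + 1) % n:
        return p
    if j == (i - 1) % n:
        return 1 - p
    return 0.0

pi = [1.0 / n] * n

# stationarity: (pi P)(j) = sum_i pi(i) trans(i, j) should equal pi(j)
piP = [sum(pi[i] * trans(i, j) for i in range(n)) for j in range(n)]
assert all(abs(piP[j] - pi[j]) < 1e-12 for j in range(n))

# detailed balance pi(i) p(i, i+1) = pi(i+1) p(i+1, i) fails here:
balanced = all(abs(pi[i] * trans(i, (i + 1) % n)
                   - pi[(i + 1) % n] * trans((i + 1) % n, i)) < 1e-12
               for i in range(n))
assert not balanced
```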

Now, all Markov chains on a countable state space are in fact random walks on weighted graphs:

Definition 6.1. Consider a directed graph with weights on the directed edges: for any two vertices x and y, let c(x, y) be any non-negative number, with c(x, y) > 0 only if x and y are neighbours. Let Cx = ∑_y c(x, y), and define the weighted random walk to be the Markov chain with transition probabilities

    p(x, y) = c(x, y) / Cx.

An important special case is when c(x, y) = c(y, x) for every x and y, i.e., we have weights on the undirected edges. Such weighted graphs are usually called electric networks, the edge weights c(e) are called conductances, and the inverses r(e) = 1/c(e) ∈ (0, ∞] are called resistances. The weighted random walk associated to an electric network is always reversible: Cx is a reversible measure. On the other hand, any reversible Markov chain on a countable state space comes from an electric network, since we can define c(x, y) := π(x) p(x, y) = π(y) p(y, x).

An electric network is like a discrete geometric space: we clearly have some notion of closeness coming from the neighbouring relation, where a large resistance should mean a larger distance. Now we will define discrete versions of some of the notions of differential and Riemannian geometry, which will turn out to be very relevant for studying the random walk associated to the network.

If e is an edge with vertices e+ and e−, decompose it into two directed edges e→ and e←, such that e→ runs from e− to e+, while e← runs from e+ to e−. Denote by E↔ the set of directed edges so formed.

Take f, g : V −→ R. Define the inner product

    (f, g)_C = ∑_{x ∈ V} f(x) g(x) Cx,

and define the gradient of f by

    ∇f(e) = [f(e+) − f(e−)] c(e).

In a cohomological language, this is a coboundary operator: from functions on zero-dimensional objects (the vertices), it produces functions on one-dimensional objects (the edges).

Take θ, η : E↔ −→ R such that θ(ē) = −θ(e), where ē denotes the reversal of e. Define the inner product

    (θ, η)_r = (1/2) ∑_{e ∈ E↔} θ(e) η(e) r(e),

and define the boundary operator

    ∇*θ(x) = (1/Cx) ∑_{e : e+ = x} θ(e).

Some works, e.g., [LyPer10], omit the "vertex conductances" Cx from the inner product on V and the boundary operator ∇*, while our definition agrees with [Woe00]. This is an inessential difference, but it is good to watch out for it.

Definition 6.2. The Markov operator P is the operator acting on functions f : V −→ R by taking the one-step average by the Markov chain:

    (Pf)(x) = ∑_y p(x, y) f(y).

The Laplacian operator is ∆ := I − P.

⊲ Exercise 6.2. Show that the Markov operator P is self-adjoint with respect to the inner product (·, ·)_π, i.e., (Pf, g)_π = (f, Pg)_π, with some π : V −→ R≥0, if and only if π is a reversible measure for the Markov chain.

A function f : V −→ R is harmonic at some x ∈ V if ∆f(x) = 0, i.e., if the mean value property holds: the average of the function values after one step of the chain equals the value at the starting point x. A function is harmonic if it is harmonic at every vertex. Note that a direct consequence of harmonicity is the maximum principle: if f : V −→ R is harmonic for all x ∈ V \ U, then there are no strict local maxima in V \ U, and if there is a global maximum over V, then it must be achieved also at some point of U.

Now, how is the Laplacian related to the boundary and coboundary operators? For any f : V −→ R,

    ∇*∇f(x) = ∑_{e+ = x} ∇f(e) / Cx
            = ∑_{e+ = x} [f(e+) − f(e−)] c(e) / Cx
            = f(x) ∑_{e+ = x} c(e)/Cx − ∑_{e+ = x} f(e−) c(e)/Cx
            = f(x) − ∑_y f(y) p(x, y)
            = f(x) − (Pf)(x) = ∆f(x).

Proposition 6.1. ∇ and ∇* are the adjoints of each other (hence the notation): (∇f, θ)_r = (f, ∇*θ)_C.

Proof. The right hand side is

    ∑_{x ∈ V} ( (1/Cx) ∑_{e+ = x} θ(e) ) f(x) Cx = ∑_{x ∈ V} ∑_{e+ = x} θ(e) f(x).

The left hand side is

    (1/2) ∑_{e ∈ E↔} [f(e+) − f(e−)] c(e) θ(e) r(e) = (1/2) ∑_{e ∈ E↔} [f(e+) − f(e−)] θ(e).

In this sum, since θ(ē) = −θ(e), the term −f(e−)θ(e), summed over all directed edges, equals the sum of f(e+)θ(e) after the substitution e ↦ ē. Hence the left hand side equals ∑_{e ∈ E↔} f(e+)θ(e) = ∑_{x ∈ V} ∑_{e+ = x} f(x)θ(e), and the two sums are equal.
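To make the mean value property concrete, here is a small illustrative computation (my own sketch, with SRW on the path {0, ..., n} as the assumed example): repeatedly replacing interior values by one-step averages converges to the harmonic extension, which for boundary values f(0) = 0, f(n) = 1 is the linear function f(k) = k/n.

```python
# Illustrative sketch: harmonic extension on the path {0, ..., n} for SRW,
# computed by iterating the mean value property; the limit is f(k) = k/n.

n = 10
f = [0.0] * (n + 1)
f[n] = 1.0                     # boundary values: f(0) = 0, f(n) = 1
for _ in range(20000):         # Gauss-Seidel-style averaging sweeps
    for k in range(1, n):
        f[k] = 0.5 * (f[k - 1] + f[k + 1])

assert all(abs(f[k] - k / n) < 1e-9 for k in range(n + 1))
```

The same iteration works on any finite network once the averages are weighted by the conductances.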

Then f is harmonic on V \ U : the exact same argument as above works. Then GZ (x. Let Z ⊂ V be such that V \ Z is ﬁnite. ← → • A ﬂow from A to Z is a function θ : E −→ R with θ(e) = −θ(e) such that ∇∗ θ(x) = 0 for node law: the inﬂow equals the outﬂow at every inner vertex. Then. • The strength of a ﬂow is θ := ﬂowing from A to Z . for x ∈ V \ U . it follows that f is harmonic on this set. the total net amount z ∈Z • A main example of a ﬂow is the current ﬂow θ := ∇f associated to a voltage function f between A and Z .) Quite similarly to the previous examples. deﬁne f (x) = E f (Xτ ) | X0 = x . where the τ ’s are the x ∈ V . Let A and Z be disjoint subsets of V (G). hence ∆f |A ≥ 0 and ∆f |Z ≤ 0. and let f (x) = Px [ τA < τZ ]. y ). GZ is harmonic 48 . is called Green’s function killed at Z . while f (x) ∈ [0. y ) := Ex number of times the walk goes through y before reaching Z . and let x ∈ / Z . From this. It is obvious that f |A = 1 and f |Z = 0. the particle is killed instead of moving. (This is a subMarkovian chain.3... y ) = p(x. Z ⊂ V (G). while ∇∗ θ|A ≥ 0 and ∇∗ θ|Z ≤ 0. y )1x∈Z . Example 2. then we have a ﬂow on the graph from U1 to U0 .e. every x ∈ V \ (A ∪ Z ). ∗ a∈A ∇ θ (a) Ca while ∆f |A ≥ 0 and ∆f |Z ≤ 0. A slightly more general example is the following: let U be any subset of V . For x ∈ V \ (A ∪ Z ). Example 3. If we now deﬁne U0 = {u ∈ U | ∆f (u) < 0} and U1 = {u ∈ U | ∆f (u) > 0}. 1] for all y f (y )p(x. with transition probabilities pZ (x. we have f (x) = hitting times on A. In words. i. and let f : U −→ R be any real-valued function. It is the standard Green’s function corresponding to the Markov chain killed at Z .{d.) Example 1.e. it satisﬁes Kirchhoﬀ ’s =− ∇∗ θ(z ) Cz . f is a voltage function from A to Z . where Xτ is the ﬁrst vertex visited in U (we assume that τ < ∞ almost surely). i. • A voltage between A and Z is a function f : V −→ R that is harmonic at every x ∈ V \ (A ∪ Z ). 
Definition 6.3. Let A and Z be disjoint subsets of V(G).

• A voltage between A and Z is a function f : V −→ R that is harmonic at every x ∈ V \ (A ∪ Z), while ∆f|A ≥ 0 and ∆f|Z ≤ 0.

• A flow from A to Z is a function θ : E↔ −→ R with θ(ē) = −θ(e) such that ∇*θ(x) = 0 for every x ∈ V \ (A ∪ Z), while ∇*θ|A ≥ 0 and ∇*θ|Z ≤ 0. In words, it satisfies Kirchhoff's node law at every inner vertex: the inflow equals the outflow.

• The strength of a flow is

    ‖θ‖ := ∑_{a ∈ A} ∇*θ(a) Ca = − ∑_{z ∈ Z} ∇*θ(z) Cz,

the total net amount flowing from A to Z.

• A main example of a flow is the current flow θ := ∇f associated to a voltage function f between A and Z. (It is a flow precisely because of the harmonicity of f.)

Example 1. Let A and Z be disjoint subsets of V(G), and let f(x) = Px[ τA < τZ ], where the τ's are the hitting times on A and Z. It is obvious that f|A = 1 and f|Z = 0, while f(y) ∈ [0, 1] for all y. For x ∈ V \ (A ∪ Z), we have f(x) = ∑_y f(y) p(x, y), i.e., f is harmonic there, while ∆f|A ≥ 0 and ∆f|Z ≤ 0. So f is a voltage function from A to Z.

Example 2. A slightly more general example is the following: let U be any subset of V, and let f : U −→ R be any real-valued function. Quite similarly to the previous example, for x ∈ V \ U, define f(x) = E[ f(Xτ) | X0 = x ], where Xτ is the first vertex visited in U (we assume that τ < ∞ almost surely). Then f is harmonic on V \ U: the exact same argument as above works. If we now define U0 = {u ∈ U : ∆f(u) < 0} and U1 = {u ∈ U : ∆f(u) > 0}, then ∇f is a flow on the graph from U1 to U0.

Example 3. Let G be a recurrent network, and let Z ⊂ V(G) be nonempty (alternatively, let Z ⊂ V be such that V \ Z is finite). Consider the Markov chain killed at Z, with transition probabilities

    pZ(x, y) = p(x, y) 1{x ∉ Z}.

(This is a subMarkovian chain: the sum of transition probabilities from certain vertices is less than 1, since, once in Z, the particle is killed instead of moving.) Then

    GZ(x, y) := Ex[ number of times the walk goes through y before reaching Z ]

is called Green's function killed at Z. It is the standard Green's function corresponding to the Markov chain killed at Z. GZ is harmonic in its first coordinate outside of Z ∪ {y}: for x ∉ Z ∪ {y},

    GZ(x, y) = ∑_{n≥0} pZ^n(x, y) = ∑_{n≥1} pZ^n(x, y)   (since x ∉ Z and x ≠ y)
             = ∑_{x′} pZ(x, x′) ∑_{n≥1} pZ^{n−1}(x′, y)
             = ∑_{x′} p(x, x′) GZ(x′, y) = (P GZ(·, y))(x).

Notice that Cx GZ(x, y) = Cy GZ(y, x) by reversibility. To get harmonicity in the second coordinate, we need to change GZ a little bit. Namely,

    f(x) := GZ(o, x)/Cx = GZ(x, o)/Co

is now harmonic in x ∉ Z ∪ {o}. It is clear that f|Z = 0 and f(o) > 0; moreover, ∆f(o) > 0 and ∆f|Z ≤ 0. Therefore, f is a voltage function from o to Z, and, since the associated current flow has unit strength (Exercise 6.4 below),

    R(o ↔ Z) = GZ(o, o)/Co.   (6.1)

Lemma 6.2. Given a finite network G(V, E, c), a subset U ⊂ V, and any real-valued function f on U, there exists a unique extension of f to V that is harmonic on V \ U.

Proof. Existence: The extension f(x) := E[ f(Xτ) | X0 = x ], given in Example 2 above, works. Uniqueness: Suppose that f1 and f2 are two extensions of f, and let g = f1 − f2, which is again harmonic on V \ U, with g ≡ 0 on U. Since V is finite, the global maximum and minimum of g are attained, so, by the maximum principle, they must also be attained on U, where g ≡ 0. Hence g ≡ 0 on V.

Given this lemma, we can define a certain electrical distance between disjoint nonempty subsets A and Z of V(G). Namely, consider the unique voltage function v from A to Z with v|A ≡ α and v|Z ≡ β, where α > β. The associated current flow i = ∇v has strength ‖i‖ > 0, and it is easy to see that

    R(A ↔ Z) := (α − β) / ‖i‖

is independent of the values α > β. It is called the effective resistance between A and Z. Its inverse C(A ↔ Z) := 1/R(A ↔ Z) is the effective conductance.

⊲ Exercise 6.3. Show that effective resistances add up when combining networks in series, while effective conductances add up when combining networks in parallel.

⊲ Exercise 6.4. (a) Show that for the voltage function f(x) of Example 3 above, the associated current flow ∇f indeed has unit strength, hence (6.1) holds.
π ).+ + (b) Using part (a).e. Thus. If A and Z are disjoint subsets of V (G). for any antisymmetric θ : E −→ R we can deﬁne E (θ) := (θ. i.. Pu [ τv < ∞ ] = Pv [ τu < ∞ ] .4 (Thomson’s principle).4. π ). Let (V. the one that minimizes the Dirichlet energy E (θ) also satisﬁes Kirchhoﬀ’s node law in V \ U (i. ⊲ Exercise 6. u ∈ U ).5.7. Assume that the function equation ∆u = f . Solve the 6. where τa is the ﬁrst positive hitting time on a. ← → With a slight abuse of notation. (Hint: for a quadratic function f (x) = solution of f ′ (x) = 0?) 2 i (x − ai ) .2 minimizes Dirichlet energy. Show that. ﬂows θ with given values ∇∗ θ|A∪Z .. E.hitsym} ⊲ Exercise 6. Let f : V −→ R be an arbitrary function in L2 (V. deﬁne the Dirichlet energy by E (f ) := (∇f.. the current ﬂow has the smallest energy (i. ∇f )r = 1 2 ← → e∈ E (G) {d. The same phenomenon holds in the discrete setting. show that C (a ↔ Z ) = Ca Pa [ τZ < τa ]. it is a ﬂow).6 (“Green’s function is the inverse of the Laplacian”). for any u.harmmin} {ex.dirichlet} |f (e+ ) − f (e− )|2 c(e) . For any f : V (G) −→ R. {e. c) be a transitive network (i.3.harmmin} V (G) −→ R if and only it satisﬁes Kirchhoﬀ ’s cycle law. y )/πy is in L2 (V. with a given ﬂux along the boundary (the values ∇∗ θ(u). then among all the Kirchhoﬀ ’s cycle law). θ)r .e. Let G(V.3 can be reformulated as follows: among all antisymmetric θ satisfying Kirchhoﬀ’s cycle law. the group of graph automorphisms preserving the edge weights have a single orbit on V ). Deﬁnition 6.e.e. y → G(x. v ∈ V . P ) be a transient Markov chain with a stationary measure π and associated Laplacian ∆ = I − P . is circulation-free: θ(e) r(e) = 0 e∈C ← → Note that an antisymmetric function θ : E (G) −→ R is the gradient θ = ∇f of some f : ← → for all directed cycles C ⊂ E . what is the {l. There is a dual statement: {l. Lemma 6. Lemma 6. The unique harmonic extension in Lemma 6.Dirichlet} In PDE theory. Prove this lemma. 
the Laplace equation arises as the Euler-Lagrange variational PDE for minimizing the L2 norm of the gradient.2 Dirichlet energy and transience {ss..Thomson} Lemma 6. the one that satisﬁes 50 . ⊲ Exercise 6.

take an exhaustion of G by subgraphs Fn . and.DReff} {t. it has unit strength. x) is ﬁnite for all o. So. Then ∇f is a non-zero ﬂow to inﬁnity E (f ) = (f. where α > β .3.9.transientflow} a∈A Ca ∇∗ ∇v (a) + Theorem 6. If G is transient. Thus.7.6 (Kanai. (Hint: what would a ﬂow started from the origin do in R3 ? Mimic this in Z3 . ∇∗ ∇f ) = f (o) ∇f = f (o) = G(o.e. assuming recurrence. is For the other direction. By Thomson’s principle (Lemma 6. if the voltage diﬀerence α − β is adjusted so that we E (v ) = β − α = R(A ↔ Z ) . ∇v = 1. then consider the ﬂow γ that is Cz ∇∗ ∇v (z ) = α ∇v − β ∇v .e. So. a ﬂow with A = {o} and Z = ∅).Proof. contradicting the previous sentence. the eﬀective resistance blows up. hence we can consider f (x) := G(o. ⊲ Exercise 6. Its energy. On the other hand. ﬁnd an explicit ﬂow of ﬁnite energy on Z3 . just need to perturb ﬂows instead of functions: constant 1 on C .1).. i. ∇∗ ∇v ) = α β have a unit ﬂow from A to Z . and from that we could produce a unit strength ﬂow of smaller energy from o to each ⊲ Exercise 6. transience is the same as positive eﬀective conductance to inﬁnity. by Exercise 6. then. Let us compute this minimum Dirichlet energy for the unique harmonic extension (the voltage v ) of v |A = α and v |Z = β . Here E (v ) = (v. then ∃ C < ∞ such that E1 (f ◦ φ) ≤ C E2 (f ) . by (6. sense that can be read oﬀ from the above inequality). by the above calculation. z ∈Z if θ is a ﬂow with minimal energy E (θ). unit strength. In other words. (6. by Exercise 6. then there would also be one with G \ Fn .4 (a). from o ∈ G. if there was a ﬁnite energy ﬂow on G from o to inﬁnity. The strategy is the same as in Exercise 6. Fill in the gaps (if there is any) in the proof of the theorem above.8. then Green’s function G(o. 1986). as in Example 3 after Deﬁnition 6. A graph G is transient if and only if there exists a non-zero ﬂow of ﬁnite energy from some vertex o ∈ V (G) to inﬁnity (i. and C is an oriented cycle.4 (b). 
Proof of Lemma 6.4. The strategy is the same as in Exercise 6.7; we just need to perturb flows instead of functions. If θ is a flow with minimal energy E(θ), and C is an oriented cycle, consider the flow γ that is constant 1 on C. A simple quadratic computation shows that E(θ + εγ) ≥ E(θ) can hold for all ε only if θ satisfies the cycle law along C.

Let us compute this minimum Dirichlet energy for the unique harmonic extension (the voltage v) of v|A = α and v|Z = β, where α > β. Here

    E(v) = (v, ∇*∇v)_C = ∑_{a ∈ A} α Ca ∇*∇v(a) + ∑_{z ∈ Z} β Cz ∇*∇v(z) = α ‖∇v‖ − β ‖∇v‖.

So, if the voltage difference α − β is adjusted so that we have a unit flow from A to Z, then

    E(v) = α − β = R(A ↔ Z).   (6.2)

Theorem 6.4. A graph G is transient if and only if there exists a non-zero flow of finite energy from some vertex o ∈ V(G) to infinity (i.e., a flow with A = {o} and Z = ∅).

Proof. If G is transient, then Green's function G(o, x) is finite for all o, x, hence we can consider f(x) := G(o, x)/Cx, as in Example 3 after Definition 6.3. Then ∇f is a non-zero flow to infinity from o ∈ G, of unit strength, and its energy is

    E(f) = (f, ∇*∇f)_C = f(o) ‖∇f‖ = f(o) = G(o, o)/Co < ∞,

which is indeed finite.

For the other direction, we prove the contrapositive, assuming recurrence. Take an exhaustion of G by finite subgraphs Fn. Recurrence of G implies, by Exercise 6.4 (b), that C(o ↔ G \ Fn) → 0 as n → ∞, i.e., the effective resistance blows up. By Thomson's principle (Lemma 6.4) and (6.2), this means that there are no unit strength flows from o to G \ Fn whose energies remain bounded as n → ∞. On the other hand, if there was a finite energy flow on G from o to infinity, then there would also be one with unit strength, and from that we could produce a unit strength flow of smaller energy from o to each G \ Fn, contradicting the previous sentence. In other words, transience is the same as positive effective conductance to infinity.

⊲ Exercise 6.8. Fill in the gaps (if there are any) in the proof of the theorem above.

⊲ Exercise 6.9.* Without consulting Lyons (Terry or Russ), find an explicit flow of finite energy on Z³. (Hint: what would a flow started from the origin do in R³? Mimic this in Z³.)

Theorem 6.5 (Terry Lyons [LyT83]). If G1 ≃q G2 are bounded degree graphs, then the Gi's are transient at the same time, and E1 ≍ E2 (in the obvious sense that can be read off from the inequality in the next theorem).

Theorem 6.6 (Kanai, 1986). If φ : G1 −→ G2 is a quasi-isometric embedding between bounded degree graphs, then there exists C < ∞ such that E1(f ◦ φ) ≤ C E2(f); furthermore, if G1 is transient, then so is G2.

since: |f2 (e2 )|2 ≤ β f1 (e2 )2 . with the Xk ’s being independent. we choose one of the shortest paths in G2 going from the vertex φ(e− ) to φ(e+ ). where the sign of f1 (e1 ) depends on the orientation of the path φ(e1 ) with respect to the orientation of e2 . We will prove only the transience statement. then α would also have to be inﬁnite. say. Deﬁnition 6. We will use them in the evolving sets method of Chapter 8. . For any e in G1 . F1 ⊆ F2 ⊆ . if a gambler chooses based on his history of winnings what game to play next and in what value. M2 . and we are done. and Mn is measurable w. the other being very similar. since the contribution of each path φ(e) to each ∇∗ f2 (x) is zero unless x ∈ φ(e± ). Now f2 is a ﬂow from φ(a) to ∞. 6. φ(e− )) e∈G1 and β := sup #{e1 ∈ G1 : e2 ∈ φ(e1 )} . given an increasing sequence of σ -algebras. More generally. . Deﬁne α := sup d2 (φ(e+ ). Example 2: A classical source of martingales is gambling. in an arbitrary way.t. . . . .5. Example 1: We start with a trivial example: if Mn = n k=1 Xk with E[ Xk ] = 0 for all k .3 Martingales {ss. In a completely fair casino. A sequence M1 . from a to inﬁnity. since φ is a quasi-isometry. . . . . then the 52 . e1 :e2 ∈φ(e1 ) e2 ∈G2 |f2 (e2 )|2 ≤ αβ e1 ∈G1 f1 (e1 )2 < ∞ . Mn ] = E[ Xn+1 ] + Mn = Mn . and in several other results related to random walks. in the study of bounded harmonic functions in Chapter 9.r.MG} We now deﬁne one of the most fundamental notions of probability theory: martingales.Proof. of R-valued random variables is called a martingale if E[ Mn+1 | M1 . and we call this the image of e under φ. Mn ] = Mn . then E[ Mn+1 | M1 . (called a ﬁltration). e2 ∈G2 α is ﬁnite. . we want that E[ Mn+1 | Fn ] = Mn . Suppose f1 is a ﬂow of ﬁnite energy on G1 . since the size of a neighbourhood of radius C is bounded by a function of C and the maximum degree of the graph. . Fn . Deﬁne f2 as follows: f2 (e2 ) = e1 :e2 ∈φ(e1 ) ±f1 (e1 ) . 
β is ﬁnite since G1 and G2 are bounded degree and φ is a quasi-isometry: if β were inﬁnite. . The energy of f2 is ﬁnite.

and E [ M k ] = E [ M k −1 ] = · · · = E [ M 0 ] .increments in his fortune will not be i. . but his fortune will be a martingale: he cannot make and cannot lose money in expectation. whatever he does. Give an example of a random sequence (Mn )∞ n=0 such that E[ Mn+1 | Mn ] = Mn for all n ≥ 0. n < τ ] = f (Xn ). Fn = σ (X0 . . by Fubini (6. In particular. while E Mk Mk−1 = −1 − 2 − · · · − 2k−1 = Mk−1 + 1/2 · 2k − 1/2 · 2k = Mk−1 . (6. = x:f (x)=Mn This averaging argument can be used in general to show that it is easier to be a martingale w. E ) is a network. . ⊲ Exercise 6. .. . . . On the event τ < ∞.. . in the random τ th round. . so this is an almost certain way to make money in a fair game — assuming one has a friend with inﬁnite fortune to borrow from! Now. . we have Mk = 1 = Mk−1 a. how does EMk = EM0 = 0 square with EMτ = 1? Well. and then pay back all the debts. n ] E[ f (Xn+1 ) | Xn = xn ] P[ Xi = xi i = 0. . Example 3: G = (V. . under Optional Stopping Theorems. .. .s.3) {e. M0 ] = = x0 . by being a martingale A famous gambling example is the “double the stake until you win” strategy: 0..MGE2} M0 .. Deﬁne Mn = f (Xn ). then borrow two more dollars. Now. and then the net payoﬀ is −1 − 2 −· · ·− 2τ −1 +2τ = 1. . . 3. .. and so on. . thus Mn is a martingale x0 .r. if the second round is also lost. τ is a random time. E[ Mn+1 | Mn . Xn . if the ﬁrst round is lost. . Xn )..d. .. we have E[ Mn+1 | X1 . M 0 = · · · = M0 . double or lose it with probability 1/2 each.. where Xn is a random walk. and f : V −→ R is harmonic on V \ U .xn : f (xi )=Mi ∀i E[ f (Xn+1 ) | Xi = xi i = 0. U ⊆ V is given. borrow one dollar. arriving at a fortune M1 = 2 − 1 = 1 or M1 = 0 − 1 = −1. and let τ ∈ N be the time of hitting U . .. 1. until ﬁrst winning a round. This will eventually happen almost surely.xn : f (xi )=Mi ∀i and the same holds trivially also on the event {n ≥ τ } instead of {n < τ }. 53 . double or lose it with probability 1/2 each. so why would (6.4) apply? 
In particular, for any martingale, by iterating the defining property and using Fubini,

    E[ Mk | M0 ] = E[ E[ Mk | Mk−1, Mk−2, ..., M0 ] | M0 ] = E[ Mk−1 | M0 ] = · · · = M0,   (6.3)

and

    E[ Mk ] = E[ Mk−1 ] = · · · = E[ M0 ].   (6.4)

A famous gambling example is the "double the stake until you win" strategy: start with fortune M0 = 0, borrow one dollar, and double or lose it with probability 1/2 each, arriving at a fortune M1 = 2 − 1 = 1 or M1 = 0 − 1 = −1. If the first round is lost, then borrow two more dollars, and double or lose them with probability 1/2 each; if the second round is also lost, then borrow four more dollars, and so on, until first winning a round, in the random τth round. This will eventually happen almost surely, and then the net payoff is −1 − 2 − · · · − 2^{τ−1} + 2^τ = 1, so this is an almost certain way to make money in a fair game, assuming one has a friend with infinite fortune to borrow from! It is easy to see that the fortune Mk (set to be constant from time τ on) is a martingale: on the event {Mk−1 = 1}, we have Mk = 1 = Mk−1 a.s., while on the complement, Mk−1 = −1 − 2 − · · · − 2^{k−2} = −(2^{k−1} − 1), and

    E[ Mk | Mk−1 ] = Mk−1 + 1/2 · 2^{k−1} − 1/2 · 2^{k−1} = Mk−1.

Now, how does E[ Mk ] = E[ M0 ] = 0 square with E[ Mτ ] = 1? Well, τ is a random time, so why would (6.4) apply? We will further discuss this issue a few paragraphs below, under the Optional Stopping Theorems.

⊲ Exercise 6.10. Give an example of a random sequence (Mn)∞_{n=0} such that E[ Mn+1 | Mn ] = Mn for all n ≥ 0, but which is not a martingale.

Example 3: Let G = (V, E) be a network, let U ⊆ V be given, and let f : V −→ R be harmonic on V \ U. Let Xn be the associated random walk, let τ ∈ N be the time of hitting U, and, on the event {τ < ∞}, set Xτ+i = Xτ for all i ∈ N. Define Mn = f(Xn). Then, taking Fn = σ(X0, ..., Xn), on the event {n < τ}, we have, by the Markov property of Xn and the harmonicity of f,

    E[ Mn+1 | X1, ..., Xn ] = ∑_y f(y) p(Xn, y) = f(Xn) = Mn,

and the same holds trivially also on the event {n ≥ τ}; thus Mn is a martingale w.r.t. the filtration Fn. It is also a martingale in the more restricted sense, given the sequence (Mn)∞_{n=0} only:

    E[ Mn+1 | Mn, ..., M0 ] = ∑_{x0,...,xn : f(xi)=Mi ∀i} E[ f(Xn+1) | Xi = xi, i = 0, ..., n ] · P[ Xi = xi, i = 0, ..., n | Mi, i = 0, ..., n ] = Mn,

since each conditional expectation in the sum equals f(xn) = Mn. This averaging argument can be used in general to show that it is easier to be a martingale w.r.t. a filtration of smaller σ-algebras.
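The tension between E[Mn] = 0 for every fixed n and E[Mτ] = 1 can be seen by brute-force enumeration. The sketch below (illustrative; the horizon n = 12 is an arbitrary choice of mine) computes the fortune of the doubling strategy exactly, with rational arithmetic, over all 2^n outcomes of n fair rounds:

```python
from fractions import Fraction
from itertools import product

# Illustrative sketch of the "double the stake until you win" strategy.
# fortune(bits) is the net fortune after n rounds, frozen at the first win.

n = 12

def fortune(bits):              # bits[i] = 1 iff round i+1 is won
    m, stake = Fraction(0), Fraction(1)
    for b in bits:
        m += stake if b else -stake
        if b:
            return m            # first win: the net fortune is always 1
        stake *= 2
    return m                    # all n rounds lost: -(2^n - 1)

weight = Fraction(1, 2) ** n
EMn = sum(fortune(bits) * weight for bits in product((0, 1), repeat=n))
assert EMn == 0                                  # martingale at the fixed time n
assert fortune((0,) * n) == -(2 ** n - 1)        # the single catastrophic path
assert all(fortune(bits) == 1
           for bits in product((0, 1), repeat=n) if any(bits))
```

The single all-loss path, of probability 2^{-n}, carries exactly enough negative fortune to cancel the almost-certain gain of 1.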

given a variable Y on this probability space.. we get a set of possible outcomes whose Y -values diﬀer by at most 1. . we can write vi+1 and {v1 . one can ﬁx an ordering v1 . . . . Now. vn of the vertices. this is a far-reaching idea. . Fix an ordering e1 . . This is the edge This is the vertex exposure martingale. One reason for the interest in these martingales is edges spanned by v1 . . . . Xn ) and F∞ = σ (X0 . form a martingale. and Y is an integrable variable in F∞ . vi ] for i = 0. let G [v1 . when we reveal the states of the edges between vi+1 and {v1 .ei } be the states of the G [e1 . for √ P |χ(G ) − Eχ(G )| > λ n ≤ 2 exp(−λ2 /2) . will be Theorem 9.18. then we have the Lipschitz property |MiV +1 − Mi | ≤ 1 almost surely. . . vi ] be the states i 2 E exposure martingale associated to Y . . with M0 = E[ Y ] and edges e1 . vi . then over the edges between vi+1 and {v1 . vi . where m = n 2 of the E Mm = Y . Similarly. So. . m. as follows. two graphs on the same vertex set whose symmetric diﬀerence is a set of edges incident to a single MiV = E MiV +1 G [v1 . vi+1 } and varying the states of the edges between edges not spanned by {v1 . A generalization of this correspondence between possibility. As we will see. for the case when τ = ∞ is a measurable. Let G [e1 .. the probability space is all subgraphs of Kn . Example 3 is a special case of Example 4. . . . then Mn := E[ Y | Fn ] is a martingale: E[ Mn+1 | Fn ] = E E[ Y | Fn+1 ] Fn = E[ Y | Fn ] = Mn . . Proposition 1. . . e. As we saw in Section 6. Proposition 1. and Mn = E[ Y | Fn ] equals f (Xn ). and think of the right hand side as a double averaging: ﬁrst over the The reason for the Lipschitz property is that we clearly have |χ(H ) − χ(H ′ )| ≤ 1 if H and H ′ are vertex. ). . . X1 . . ei ] ∈ {0. . given f : U −→ R. can be applied to prove the concentration of certain variables Y around their mean (even if the value of the mean is unknown). 
Example 4: Given a filtration Fn ↑ F∞ (possibly F∞ = FN for some finite N), and Y an integrable variable in F∞, the sequence Mn := E[ Y | Fn ] is a martingale:

    E[ Mn+1 | Fn ] = E[ E[ Y | Fn+1 ] | Fn ] = E[ Y | Fn ] = Mn.

In words: our best guesses about a random variable Y, as we learn more and more information about it, form a martingale. As we will see, this is a far-reaching idea.

As we saw in Section 6.1, Example 3 is a special case of Example 4: given f : U −→ R, one harmonic extension to V \ U is given by f(x) = Ex[ f(Xτ) ], at least in the case when τ < ∞ almost surely; taking Fn = σ(X0, ..., Xn) and F∞ = σ(X0, X1, ...), we have that Y = f(Xτ) is F∞-measurable, and Mn = E[ Y | Fn ] equals f(Xn). A generalization of this correspondence between harmonic functions and limiting values of random walk martingales, for the case when τ = ∞ is a possibility, will be Theorem 9.18.

Another instance of Example 4, very different from random walks but typical in probabilistic combinatorics, is the edge- and vertex-exposure martingales for the Erdős-Rényi random graph model G(n, p). An instance G of this random graph is generated by having each edge of the complete graph Kn on n vertices be present with probability p and missing with probability 1 − p, independently from each other. In other words, the probability space is the set of all subgraphs of Kn, with measure

    P[ G = H ] = p^{|E(H)|} (1 − p)^{C(n,2) − |E(H)|}

for any H, where C(n,2) = n(n−1)/2. Now, given a variable Y on this probability space, e.g., the chromatic number χ(G), we can associate to it two martingales, as follows. Fix an ordering e1, ..., em of the edges of Kn, where m = C(n,2). Let G[e1, ..., ei] ∈ {0, 1}^{e1,...,ei} be the states of the edges e1, ..., ei in G, and let

    Mi^E(G) := E[ Y | G[e1, ..., ei] ]   for i = 0, 1, ..., m.

This is the edge exposure martingale associated to Y, with M0^E = E[ Y ] and Mm^E = Y. Similarly, one can fix an ordering v1, ..., vn of the vertices, let G[v1, ..., vi] be the states of the edges spanned by v1, ..., vi, and let

    Mi^V(G) := E[ Y | G[v1, ..., vi] ]   for i = 0, 1, ..., n.

This is the vertex exposure martingale.

One reason for the interest in these martingales is that the Azuma-Hoeffding inequality, Proposition 1.6, can be applied to prove the concentration of certain variables Y around their mean (even if the value of the mean is unknown). For instance, when Y = χ(G), Proposition 1.6 gives

    P[ |χ(G) − Eχ(G)| > λ√n ] ≤ 2 exp(−λ²/2),

once we check that the vertex exposure martingale has the Lipschitz property |Mi+1^V − Mi^V| ≤ 1 almost surely. The reason for the Lipschitz property is that we clearly have |χ(H) − χ(H′)| ≤ 1 if H and H′ are two graphs on the same vertex set whose symmetric difference is a set of edges incident to a single vertex. Now, given G[v1, ..., vi+1], we can write Mi+1^V as an average over the states of the edges not spanned by {v1, ..., vi+1}; and we can think of

    Mi^V = E[ Mi+1^V | G[v1, ..., vi] ]

as a double averaging: first over the edges between vi+1 and {v1, ..., vi}, then over the edges not spanned by {v1, ..., vi+1}. Fixing the states of the edges not spanned by {v1, ..., vi+1} and varying the states of the edges between vi+1 and {v1, ..., vi}, we get a set of possible outcomes whose Y-values differ by at most 1. Hence, given G[v1, ..., vi], the possible

see [Dur96. uniform random k -element subset of V (G). The second important group of results about martingales is the Optional Stopping Theorems: given a stopping time τ for a martingale (i. |A| ≥ ǫ 2n =⇒ √ B (A.12.g..s. Theorem 4. τ < ∞ a. 1 ≤ k ≤ |V | an integer.. k ) := E[ χ(G[K]) ]. 2λ n) ≥ (1 − ǫ) 2n . in the case when f is bounded and τ < ∞ almost surely. almost surely and in L1 . with τ being the ﬁrst hitting time on 0. and K a G spanned by K is concentrated: for the number c(G. Chapter 4].. we have a natural candidate for the limit: Mn → Y . An example of this was Example 3.2. in Example 4. as claimed. Let G = (V. as in (6. any bounded martingale Mn converges to some limiting variable M∞ . But their average is Mi . ⊲ Exercise 6. We will state but not prove a general version of the Martingale Convergence Theorem as Theorem 9.s. 1}n. a thorough source is [Dur96. Then the chromatic number χ(G[K]) of the subgraph of {ex.7]. started from 1. if Mn is a uniformly integrable martingale. 1}n : dist(x. Section 4.s. 1}n.e. in Example 2. even small sets become huge if we enlarge them by a little. known as L´ evy’s 0-1 law (see Theorem 9. One version of the Martingale Convergence Theorem implies that Example 4 is not at all that special: the class of martingales arising there coincides with the uniformly integrable ones: K →∞ n≥0 lim sup E Mn 1{|Mn |>K } = 0 . One is known as Martingale Convergence Theorems: e. but E[ Mτ ] = 0 = 1 = M0 . we had E[ Mτ ] = 1 = 0 = M0 . hence they can all be at distance at most 1 from MiV . (6.5) {e. By recurrence. and L1 -limit of Mn . as n → ∞. assuming that τ < ∞ a. k ) > ǫk ≤ exp(−ǫ2 k/2) .5.. all diﬀer by at most 1.. the event {τ > k } is Fk -measurable for all k ∈ N).11. we had Ex [ Mτ ] = Ex [ Y ] = f (x) = M0 almost by deﬁnition. and τ < ∞ a. then E[ Mτ ] = E[ M0 ] does hold. let B (A.7 in Section 9. A) ≤ t .14 in Section 9.V V the MiV +1 -values. There are two basic and very useful groups of results regarding martingales. 
Mi+1^V-values all differ by at most 1. But their average is Mi^V, hence they can all be at distance at most 1 from Mi^V, as claimed.

The concentration of measure phenomenon shown by the next exercises is strongly related to isoperimetric inequalities in high-dimensional spaces. We give two more examples of this kind:

⊲ Exercise 6.11. Let G = (V, E) be an arbitrary finite graph, 1 ≤ k ≤ |V| an integer, and K a uniform random k-element subset of V(G). Then the chromatic number χ(G[K]) of the subgraph of G spanned by K is concentrated: for c(G, k) := E[ χ(G[K]) ], we have

    P[ χ(G[K]) − c(G, k) > εk ] ≤ exp(−ε²k/2).

⊲ Exercise 6.12. For a subset A of the hypercube {0, 1}^n, let B(A, t) := { x ∈ {0, 1}^n : dist(x, A) ≤ t }. Let ε, λ > 0 be constants satisfying exp(−λ²/2) = ε. Then

    |A| ≥ ε 2^n   ⟹   |B(A, 2λ√n)| ≥ (1 − ε) 2^n.

That is, even small sets become huge if we enlarge them by a little.

There are two basic and very useful groups of results regarding martingales. One is known as Martingale Convergence Theorems: e.g., any bounded martingale Mn converges, as n → ∞, to some limiting variable M∞, almost surely and in L¹. We will state but not prove a general version of the Martingale Convergence Theorem as Theorem 9.14 in Section 9.2; a thorough source is [Dur96, Chapter 4]. In Example 4, we have a natural candidate for the limit: Mn → Y, as n → ∞. This convergence follows from Fn ↑ F∞, but in a non-trivial way; it is known as Lévy's 0-1 law (see Theorem 9.7 in Section 9.1). One version of the Martingale Convergence Theorem implies that Example 4 is not at all that special: the class of martingales arising there coincides with the uniformly integrable ones,

    lim_{K→∞} sup_{n≥0} E[ |Mn| 1{|Mn|>K} ] = 0,   (6.5)

where the corresponding Y is the a.s. and L¹-limit of Mn.

The second important group of results about martingales is the Optional Stopping Theorems: given a stopping time τ for a martingale (i.e., the event {τ > k} is Fk-measurable for all k ∈ N), when do we have E[ Mτ ] = E[ M0 ]? In Example 3, in the case when f is bounded and τ < ∞ almost surely, we had Ex[ Mτ ] = Ex[ Y ] = f(x) = M0 almost by definition. On the other hand, in Example 2, we had E[ Mτ ] = 1 ≠ 0 = M0, even though τ < ∞ a.s. An even simpler counterexample is SRW on Z, started from 1, viewed as a martingale {Mn}∞_{n=0}, with τ being the first hitting time on 0: by recurrence, τ < ∞ a.s., but E[ Mτ ] = 0 ≠ 1 = M0. More generally, if Mn is a uniformly integrable martingale, as in (6.5), and τ < ∞ a.s., then E[ Mτ ] = E[ M0 ] does hold.

and calculate Ek [ τ0 ∧ τn ]. = Pk [ A ] 56 . 1. at time τ0 ∧ τn . says that E[ Mτ ] = limn→∞ E[ M We have already seen applications of martingales to concentration results and to harmonic functions deﬁned on general graphs. τ < ∞ ˜ n = Mτ almost surely.13. Pk [ A ] {τA < τZ } for some A. P[ Xi+1 = ℓ | Xi = k. On the other hand. A ] = P[ Xi+1 = ℓ. Then. and A := chain. This is of course also the harmonic extension of h(0) = 0 and h(n) = 1. which can be considered less elegant than using the Optional Stopping Theorem. A ] = Pℓ [ A ] P[ Xi+1 = ℓ | Xi = k ] . M ˜ ˜ is a martingale again. and stop it when ﬁrst hit Ek [ Xτ ] = k by the Optional Stopping Theorem. This conditioning concerns the entire trajectory. . and we will see more later.s. Z ⊂ N (more generally. Proof. . Then ﬁnd a martingale of the form Xi − µ i for some µ > 0.7. hence it might happen. A simple but beautiful result is that. Note that P[ A | Xi+1 = ℓ. Let (Xi )∞ i=0 be any time-homogeneous Markov chain on the state space N. .) Start a symmetric simple random walk X0 . Ek [ Xτ ] = h(k ) · n +(1 − h(k )) · 0. at k ∈ {0. . Xi = k ] P[ Xi+1 = ℓ. . Consider asymmetric simple random walk (Xi ) on Z. we have thus h(k ) = k/n. This construction is a version of Doob’s h-transform [Doo59]. and calculate Pk [ τ0 > τn ]. Xi = k. A ] P[ Xi = k. one can use similar ideas in the asymmetric case: ⊲ Exercise 6. . . but (at equations. that we get a complicated nonMarkovian process. in fact. Xi = k ] = P[ A | Xi+1 = ℓ ] = Pℓ [ A ]. Find a martingale of the form rXi for some r > 0. a priori. we get a nice Markov process. A ] P[ A | Xi+1 = ℓ. Hence the Dominated Convergence Theorem (a.˜ n := Mn∧τ Let us sketch the proof of the Optional Stopping Theorem for bounded martingales.condMC} where Pk [ A ] = P[ A | X0 = k ] is supposed to be positive. as desired. ﬁrst show that τ0 ∧ τn has an Now. random walk on Z. it could be any event in the invariant σ -ﬁeld of the {l. On the other hand. 
least in principle) one would get this discrete harmonic extension by solving a system of linear (Hint: to prove that the second martingale is uniformly integrable. Also. Xi = k ] = P[ A | Xi = k ] P[ Xi = k ] Pℓ [ A ] P[ Xi+1 = ℓ | Xi = k ] . but let us demonstrate now that martingale techniques are useful even in the simplest example.4 in Section 9. n}. see Deﬁnition 9. with transition probabilities P[ Xi+1 = ℓ | Xi = k. 0 or n. condition the symmetric simple random walk (Xi ) to reach n before 0.) implies that limn→∞ M ˜ n ] = limn→∞ E[ M0 ] = E[ M0 ]. hence E[ Mn ] = E[ M0 ] = E[ M0 ] for any n ∈ N.4). X1 . with h being the harmonic function h(k ) := Pk [ A ] below: Lemma 6. exponential tail. with probability p > 1/2 for a right step and 1 − p for a left step. Then (Xi ) conditioned on A is again a Markov chain. What is h(k ) := Pk [ τ0 > τn ]? Since (Xi ) is a bounded martingale.
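The gambler's-ruin computations above can be checked mechanically. The following Python sketch (function names are hypothetical; exact arithmetic via `fractions`) solves the linear system for h(k) = P_k[τ_n < τ_0] by forward elimination, and compares it with the closed form (1 − r^k)/(1 − r^n), r = (1 − p)/p, that optional stopping applied to the martingale r^{X_i} gives for the asymmetric walk; the symmetric case p = 1/2 recovers h(k) = k/n.

```python
from fractions import Fraction

def hit_top_prob_exact(p, n):
    """Solve h(k) = p*h(k+1) + (1-p)*h(k-1), h(0)=0, h(n)=1,
    by forward elimination: h(k) = a[k] * h(k+1)."""
    q = 1 - p
    a = [Fraction(0)] * n            # a[0] = 0 encodes the boundary value h(0) = 0
    for k in range(1, n):
        a[k] = p / (1 - q * a[k - 1])
    h = [Fraction(0)] * (n + 1)
    h[n] = Fraction(1)
    for k in range(n - 1, 0, -1):
        h[k] = a[k] * h[k + 1]
    return h

def hit_top_prob_martingale(p, n):
    """Optional stopping applied to the martingale r^{X_i}, r = (1-p)/p:
    P_k[tau_n < tau_0] = (1 - r^k) / (1 - r^n).  (Needs p != 1/2.)"""
    r = (1 - p) / p
    return [(1 - r**k) / (1 - r**n) for k in range(n + 1)]

p, n = Fraction(2, 3), 10
by_system = hit_top_prob_exact(p, n)
by_martingale = hit_top_prob_martingale(p, n)
symmetric = hit_top_prob_exact(Fraction(1, 2), n)   # should be h(k) = k/n
```

Both routes give the same exact rational answers, which is one way to "solve the system of linear equations" the text alludes to.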

. 2.6) for all k = 1. and k+1 k−1 . . . The reason for the n−1 2Xt dt index 3 is that the Bessel(n) process. Note the consistency property that these values do not depend on n.1 and/or 6. Back to our example. stopped at 0 and n. n − 1. ·)C on ℓ0 (V ) and (·. we can consider the associated electric network c(x. then Pk [ A ] = k/n. (6. (But I do not think that anyone knows a direct link between Brownian motion in R3 and the conditioned one in R. E ).6). . . the conditional measures have a weak limit as n → ∞: the Markov chain with transition conditioned not to ever hit zero.specrad} Consider some Markov chain P on the graph (V. Cx = y c(x. if (Xi ) is simple random walk with X0 = k . (P f. Show that the conditioned random walk (6. x). Moreover.e.) 7 Cheeger constant and spectral gap {s.1 Spectral radius and the Markov operator norm {ss. It is the discrete analogue of the Bessel(3) process dXt = + dBt . We will assume in this section that P is reversible.as claimed. the extension P : ℓ2 (V ) −→ ℓ2 (V ) is self-adjoint with respect to π . i. the new transition probabilities are particular. ·)r on ← → ℓ0 ( E ).) ⊲ Exercise 6. 7. then use Subsections 6. Therefore. . P g )π . p(k. This chain can naturally be called SRW on Z 1 Xt dt A = {τn < τ0 }. . is the Euclidean distance of an n-dimensional Brownian motion from the origin. Green’s functions) turned out to have good encodings as harmonic functions over the associated electric network. y ). We will now make these connections even richer: isoperimetric inequalities satisﬁed by the underlying graph (the electric network) will be expressed as linear algebraic or functional analytic properties of the Markov operator acting on functions over the state space. 57 . . g )π = (f. Note that Cx = π (x) now.Cheeger} The previous section introduced a certain geometric view on reversible Markov chains: many natural dynamically deﬁned objects (hitting probabilities. Given any reversible measure π . given by dXt = + dBt . In p(k. 
k − 1) = probabilities given in (6. (Hint: construct an electric network on N that gives rise to this random walk. y ) = π (x)p(x. for those who have or will see stochastic diﬀerential equations. which can then be translated into probabilistic behaviour of the Markov chain itself.6) is transient. and a basic probabilistic property (recurrence versus transience) turned out to have a useful reformulation via the Dirichlet energy of ﬂows (the usefulness having been demonstrated by Kanai’s quasi-isometry invariance Theorem 6.14. k + 1) = . y ) = c(y. and the usual inner products (·..2. The Markov operator P : ℓ0 (V ) −→ ℓ0 (V ) clearly satisﬁes P = sup f ∈ ℓ 0 (V ) Pf f ≤ 1.6) {e.Bessel3} 2k 2k for k = 1.
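The transition probabilities (6.6) drop out mechanically from the Doob h-transform: with h(k) = k harmonic for SRW on Z killed at 0, the conditioned chain has p̂(k, l) = p(k, l) h(l)/h(k). A minimal sketch (hypothetical function name; exact rational arithmetic):

```python
from fractions import Fraction

def conditioned_step_probs(k):
    """Doob h-transform of SRW killed at 0, with the harmonic function h(k) = k:
    p_hat(k, k+1) = (1/2) * h(k+1)/h(k),  p_hat(k, k-1) = (1/2) * h(k-1)/h(k)."""
    h = Fraction                      # h(k) = k, as an exact rational
    up = Fraction(1, 2) * h(k + 1) / h(k)
    down = Fraction(1, 2) * h(k - 1) / h(k)
    return up, down
```

The rows sum to 1 precisely because h is harmonic, and one reads off p̂(k, k+1) = (k+1)/(2k), p̂(k, k−1) = (k−1)/(2k).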

n→∞ 2 − (f.Observe furthermore that the Dirichlet energy can be written as E (f ) = EP.2) implies that lim inf n→∞ P n+1 f P nf so. to show that the limits exist. P = ρ(P ).4). (7. for any function f ∈ ℓ0 (V ) with norm 1. it is suﬃcient to show that any of these are ﬁnite. P n+2 f ) ≤ P n f · P n+2 f . ∇∗ ∇f ) = (f. and the ﬁnal inequality is by the Cauchy-Schwarz inequality. P n δ0 )1/2n = P n δ0 ≤( P n 1/n {p.1. So. Using self-adjointness. ρ(P ) = lim sup pn (o.specrad} δ0 )1/n ≤ P . P n+1 f 2 = (P n+1 f.π (f ) = (∇f. (P 2n δ0 . P n+1 f ) = (P n f. n n→∞ n an an n→∞ n→∞ P n+1 f . 0))1/2n ≤ P . P f ) . (I − P )f ) = f Recall the deﬁnition of the spectral radius from (1. P n+1 f (7. o)1/n . 58 . For the other direction. if both limits lim lim P nf 1/n n→∞ n→∞ exist. we get P n+1 f P nf ≤ P n+2 f . Proof. ∇f ) = (f. But (7. = lim supn→∞ P n+1 f P nf . This second equality holds since P is self-adjoint. P nf So.1) {e.ratioineq} For any non-negative sequence (an )∞ n=1 we know that: lim inf n→∞ an+1 an+1 /n /n ≤ lim inf a1 ≤ lim sup a1 ≤ lim sup . δ0 )1/2n = (P n δ0 .DIP} Now here is the reason for calling ρ(P ) the spectral radius: Proposition 7.2) {e. Thus we have found that (C0 p2n (0. hence ρ(P ) ≤ P . then they must be equal.

Lemma 7. The ﬁrst uses “theory”. f and we are done.f ) f 2 . the second uses a “trick”.hilbertnorm} = supf ∈ℓ0 (V ) Proof. Starting the chain of inequalities (7. f ) |(P f. We have = (f. Taking 2nth roots. So for every ǫ > 0 there is some N > 0 such < x. f )|2 .25].. [Rud73. f ) ≥ |(P f.g. people would not ﬁnd or even recognize the right deﬁnitions. For the inﬁnite dimensional case.2) from n = 0 Pf ≤ ρ. since f is ﬁnitely supported. P f )(f. one has to use the spectral theorem.y f (x)f (y )1{f (x)f (y)>0} p2n (x.f ) f 2 are expressions for the note that it is enough to consider the supremum over the dense subset ℓ0 (V ).y This is a ﬁnite sum since f has ﬁnite support. y )Cx x. since P is self-adjoint. since the radius of f (x)f (y )1{f (x)f (y)>0} (ρ + ǫ)2n Cx < C (ρ + ǫ)2n for some ﬁnite constant C > 0.xyfxfy} that for all n > N . y )1/2n < ρ + ǫ. which I learnt from [LyPer10. 59 .6]. For a self-adjoint operator P on a Hilbert space H = ℓ (V ).3). Thus. (7. y )Cx . Theorem 12.) In the ﬁnite-dimensional case (i. (Grothendieck had the program of doing all of math without tricks: the right abstract deﬁnitions should lead to solutions automatically. when V is ﬁnite. For the details. y |z ) does not depend on x and y . and and supf ∈ℓ0 (V ) (P f. Exercise 6.We will now show that lim supn→∞ P n f P nf 2 1/n ≤ ρ(P ). by (7. see. P := supf ∈H (P f. For the tricky proof. hence ℓ2 (V ) = ℓ0 (V )).2.. y . We give two proofs.3) {e. and then both supf ∈H Pf f largest eigenvalue of P . P 2n f ) = x f (x)P 2n f (x)Cx f (x)f (y )p2n (x. without understanding the tricks ﬁrst.y = ≤ x. I think the problem with this is that. ﬁrst notice that sup f ∈ ℓ 0 (V ) Pf (P f. lim supn→∞ p2n (x. P nf 2 convergence of G(x. its eigenvalues are real. 2 Pf f {l. and for every pair x. y )1/2n = ρ. e. we get that limn→∞ P n+1 f P nf = limn→∞ P n f 1/n implies that ≤ ρ. p2n (x. f )| ≥ sup .e. which is just Cauchy-Schwarz. 
≥ sup_{f ∈ ℓ0(V)} (P f, f)/‖f‖² ≥ sup_{f ∈ ℓ0(V)} |(P f, f)|/‖f‖² , where the first inequality follows from (P f, f) ≤ ‖P f‖ · ‖f‖,
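In the finite-dimensional case Lemma 7.2 is easy to sanity-check numerically: for a symmetric matrix, the operator 2-norm, the largest absolute eigenvalue, and the Rayleigh quotient at the corresponding eigenvector all coincide. An illustrative sketch (the matrix is an arbitrary seeded example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = (A + A.T) / 2                     # a self-adjoint operator on R^6

op_norm = np.linalg.norm(P, 2)        # sup ||Pf|| / ||f||
vals, vecs = np.linalg.eigh(P)
spec_radius = np.abs(vals).max()

f = vecs[:, np.abs(vals).argmax()]    # eigenvector of the largest |eigenvalue|
rayleigh = abs(f @ (P @ f)) / (f @ f)
```

For the infinite-dimensional statement one needs the spectral theorem or the trick in the text, but the numerical identity is the content of the lemma.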

For the other direction. this
the Cheeger constant of the chain. then the Cheeger contant of the associated electric network does not depend on which reversible measure π we take. using that P is self-adjoint.π (f ) is satisﬁed for some κ ¯ > 0. this ﬁnishes 7.A and 10. κ and κ ¯ are related by κ(P )2 /2 ≤ 1 − ρ(P ) ≤ κ(P ). respectively.e.E (P ) = κ(P ). P f ) into (f. i. if the rightmost supremum is C . g ) ≤C =C . Cheeger. Satisfying the (edge) isoperimetric inequality IP∞ will mean that there exists a κ > 0 such that C (∂E S ) ≥ κπ (S ) for any ﬁnite connected subset S . Let (V. f ) = ≤ EP. where the spectral radius satisﬁes ρ(P ) = P .

. P ) To show that the ﬁrst statement implies the second. 1 Proof.f (y))(t)dt c(x. which will be ﬁnite since f ∈ ℓ0 .sobolevIP} of “almost invariant” sets.Proof of (2) ⇒ (1).t.π (1S ) = C (∂E S ). y ) 0 ∞ 1[f (x). Take any ﬁnite connected set S .π (f ) = = = = 0 ∞ 0 d For d = ∞ ( d− 1 = 1). and |1S (x) − 1S (y )|2 c(x. Proposition 7. note that 1S and SP. SP. P ) satisﬁes IPd (κ) if and only if the Sobolev inequality κ f d −1 d d d −1 ≤ SP. and f =( x |f (x)|p Cx ) p . ∞].y |f (x) − f (y )|c(x. ∞. SP. we are going to look at the super-level sets St = {f > t}.e. d d −1 = π (S ) d −1 d may assume f ≥ 0. i. f (x)≤t<f (y ) x y :f (y )>f (x) To prove the other direction.f (y)) (t)dt c(x. For any d ∈ [1.π (f ) := 1 2 x.π (f ) ≥ SP.y x ∈S Cx = 1S 2 2. To prove that the Sobolev inequality implies good isoperimetry. Deﬁnition 7. ﬁrst note that SP.1. f ) is close to f 2 . which are exactly the Følner sets.y s.π (f ) holds for all f ∈ ℓ0 (V ). y ) = ∇f 1. where IPd (κ) means C (∂E S ) ≥ p κπ (S ) for any ﬁnite connected subset S .π (f ) = 0 ∞ C (∂E {f > t})dt π ({f > t})dt 1. For t > 0.π (|f |) by the triangle inequality.π (1S ) ≥ κ ¯ 1S =κ ¯ π (S ). The case 1 < d < ∞ is just slightly more complicated than the d = ∞ case. y ) c(x. y )dt = C (∂E {f > t})dt. we get x y :f (y )>f (x) ∞ 0 ∞ x. or functions f such that (P f.π (f ) ≥ κ dt = κ f ∞.π (1S ) = 1 2 x. gives the existence {d. 2 2 satisﬁes IP∞ with κ = κ ¯.sobolev} {p. we want to show that the existence of “almost invariant” functions. f 0 ∞ ≥κ ∞ 0 = κ E[f ] = κ f For d = 1. and so (V. and is left as an we get that SP. since ∂ ({f > t}) = 0 ⇔ 0 < t ≤ f exercise. a reversible chain (V. y ) = C (∂S ). 61 . The Sobolev norm of f is SP.4. Then π (S ) = EP. y )1[f (x). so we x y :f (y )>f (x) (f (y ) − f (x))c(x. Now. Now apply the Dirichlet inequality to 1S to get C (∂S ) = EP. sets S with P 1S close to 1S .

using the inequality f 4 2 The ﬁrst sum above is precisely EP. Proof of (1) ⇒ (2) of Theorem 7.y ≤ ≤ 11 κ2 x. y ) 1 2 x. which gives x + y . ptp−1 F (t)p dt ≤ ( ∞ 0 F (t)dt)p for F ≥ 0 decreasing.e. (b) Using part (a). but we are dealing in this section only with reversible chains. from which we will borrow several things in this section.e.e. We now use the proposition to complete the proof of the Kesten theorem. Let d ∈ (1. f 2 2 = f2 1 1 ≤ SP. κ2 κ2 2 . The existence of such a π is much less obvious than the case of the above constant eigenfunction. with κ ¯= 7.3. after squaring the entire inequality. So. for which the reversible distribution is also stationary. ∞).π (f ) √ 2 f 2. P 1 = 1. y ) 2 1/2 . with the Markov operator P f (x) = 2 y p(x.. we have f 2 2 2 2 ≤ 2 EP. i.. everything is a Markov chain.. y ) 1/2 1 1 κ 2 x. 62 .e.⊲ Exercise 7. i. In most chains.y |f (x) − f (y )|2 c(x. The second sum can be upper bounded by ≤ 2 κ2 EP. when I was still in high school: “With the right glasses.”) A great introductory textbook to Markov chain mixing is [LevPW09]. Therefore. the ﬁrst statement in the Kesten theorem implies the second.π (f ). the 1 < d < ∞ case. and let p = (a) Show that ∞ 0 d d −1 .e. where the last inequality follows by Cauchy-Schwarz. i. if it is a “left eigenfunction” with eigenvalue 1.y |f (x)| + |f (y )| c(x. P ) be a ﬁnite Markov chain.3 The ﬁnite case: expanders and mixing {ss.π (f 2 ) by the proposition κ 11 = |f 2 (x) − f 2 (y )| c(x. i. and the speed of this mixing is a central topic in probability theory and theoretical computer science. Let (V.mixing} The decay of the return probabilities has a very important analogue for ﬁnite Markov chains: convergence to the stationary distribution.. it gets mixed. with applications to almost all branches of sciences and technology. the random walk will gradually forget its starting distribution.. On the other hand.π (f )1/2 .y f (x) − f (y ) |f (x)| + |f (y )| c(x. 
recall that a probability measure π on V is called stationary if πP = π . y ) κ 2 x. y )f (y ) = E X1 X0 = x acting on ℓ (V ). The constant 1 function is obviously an eigenfunction with eigenvalue 1. y )π (x) = π (y ). ﬁnish the proof of the proposition. (As I heard once from L´ aszl´ o Lov´ asz. if πP (y ) = x p(x. i. f 1 2 2 (|x| + |y |) ≤ 2 2 .1.

this says that 1 − λ2 = inf E (f ) : f 2 f (x)π (x) = 0 x f (x)π (x) = 0} (Raleigh.{ex. (7. Fisher). Two. then λ2 < 1 if and only if (V. with two diﬀerences. start with a function f attaining the inﬁmum in (7. For this. we get SP. (b) If we write −1 ≤ λn ≤ · · · ≤ λ1 = 1. One.2. Courant. ≤ 2 f+ 2 f 2 By symmetry. set h(V.AlonMilman} Theorem 7. These two inequalities 2 E (f+ ) h2 E (f 2 ) ≤ = 1 − λ2 . 2 2 2 and we get S (f+ ) ≥ h f+ 1 .) κ2 2 x ∈V f (x)g (x)π (x). so we are interested in the spectral gap. π (S ) ∧ π (S c ) ≤ 1 − λ2 ≤ 2h. recall that λ2 = sup{ x Pf f : constant functions. with the complement of the set also appearing in the denominator.π (f+ ) ≥ h ∞ 0 π ({f > t}) ∧ π ({f ≤ t}) dt . Sketch of proof. Show: Recall from the Kesten Theorem 7. we have S (f+ ) ≤ 2E (f+ )1/2 f+ 2 . g ) = (a) All eigenvalues λi satisfy −1 ≤ λi ≤ 1. just as in the √ 2 proof of (1) ⇒ (2) of Theorem 7. Let (V. what are the eﬀects of these modiﬁcations on the proofs? When we have some isoperimetric constant h(V. {t. P ) is connected (the chain is irreducible). Dodziuk [Alo86. for f+ := f ∨ 0. Similarly. P ) is not bipartite. not in the spectral radius. P ) and want to bound the spectral gap from below. y ). AloM85. P ) = inf the Cheeger constant of the ﬁnite chain. y ) = π (x)p(x.4) {e. with the stationary distribution π (x). From the argument of Proposition 7. Now. Ritz.2 and by (7. For a ﬁnite reversible Markov C (∂E S ) . ﬁnite Markov chain. we get S (f+ ) ≥ h f+ 1 = h f+ 2 . (c) λn > −1 if and only if (V. the eigenspace corresponding to the eigenvalue 1. chain (V. we have to deal with the fact that 1 is an eigenvalue. we have the following analogue. By an obvious modiﬁcation of Lemma 7. the deﬁnition of the Cheeger constant is slightly diﬀerent now.genspec} ⊲ Exercise 7. Then S ⊆V h2 2 ≤ 1 − ρ ≤ κ for inﬁnite reversible Markov chains.1). (Recall here the easy lemma that a graph is bipartite if and only if all cycles are even. P ) be a reversible.3.4. 
P ) with stationary distribution π and conductances c(x. Milman. and consider its super-level sets {x : f (x) > t}. since this is the subspace orthogonal to the . Note that P is self-adjoint with respect to (f.5 (Alon.gapDir} Now. 63 . where 1 − λ2 is usually called the spectral gap. hence π ({f > t}) ≤ π ({f ≤ t}) for all t ≥ 0.4). combined.3 that For ﬁnite chains. Dod84]). we may assume that π ({f > 0}) ≤ 1/2. The proof is almost identical to the inﬁnite case.
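The finite Cheeger inequality h²/2 ≤ 1 − λ₂ ≤ 2h can be verified by brute force on a small chain. The sketch below takes SRW on the cycle C₈ and uses the convention h = min over S with π(S) ≤ 1/2 of C(∂_E S)/π(S) (conventions differ in whether one normalizes by π(S) ∧ π(S^c); this one-sided version is an assumption of the sketch):

```python
import itertools
import numpy as np

n = 8
P = np.zeros((n, n))
for x in range(n):                    # SRW on the cycle C_8
    P[x, (x + 1) % n] = P[x, (x - 1) % n] = 0.5
pi = np.full(n, 1.0 / n)              # stationary (uniform) distribution

# Cheeger constant: minimize boundary flow / pi(S) over sets with pi(S) <= 1/2
h = min(
    sum(pi[x] * P[x, y] for x in S for y in range(n) if y not in S) / pi[list(S)].sum()
    for r in range(1, n // 2 + 1)
    for S in itertools.combinations(range(n), r)
)

gap = 1 - np.sort(np.linalg.eigvalsh(P))[-2]   # 1 - lambda_2
```

On C₈ the minimizing sets are arcs of length 4, giving h = 1/4, while 1 − λ₂ = 1 − cos(2π/8) ≈ 0.293, comfortably between h²/2 and 2h.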

3. ··· . then the chain has a small mixing time: starting from any vertex or any initial distribution. π∗ For the other direction. the distribution will be close to the stationary measure. if we have a subset A with small boundary. Set gabs := 1 − maxi≥2 |λi |. . . For simple random walk on a complete n-vertex graph with loops. If n is even. and compute the Dirichlet norm. .6. the quite obvious inequality ρ(P ) ≤ P {t. .y |≤ π (y ) (1−gabs )t . . y ) is the probability of being at y after t steps starting from x. graph is bipartite (Exercise 7. called the absolute spectral gap. and indeed −1 is an eigenvalue. ﬁnite Markov chain. Let (V. whose transition matrix 1 n 1 n 1 n 1 n The reason is that λn being close to −1 is an obvious obstacle for mixing: say. Note also that we use gabs . 0.2 (c)). Cn is bipartite. If n is odd. Fill in the missing details of the above proof. compute the spectrum of simple random walk on the n-cycle Cn . 1. ..e. .1). . after not very many steps. y ) is 1/n for every t ≥ 1.specCn} . instead of f = 1A . The converse direction (does fast mixing require a uniformly positive spectral ´ am Tim´ gap?) has some subtleties (pointed out to me by Ad´ ar).gapmix} where pt (x. 0. then the absolute spectral gap is 1 − cos(2π/n) = Θ(n−2 ). The speed of convergence to stationarity is basically the ﬁnite analogue of the heat kernel decay in the inﬁnite case. | pt (x. P ) be a reversible. and the theorem is the ﬁnite analogue of the result that a spectral radius in Proposition 7. For simple random walk on the n-cycle Cn .2. Example 2. the upper in Exercises 7. that is. Before proving the theorem. 7. Then for all x and y . Example 1. Verifying the above example. . . Theorem 7. n − 1. and )−π (y ) π∗ := minx∈V π (x). and the distribution at time t depends strongly on the parity of t. .11 (b).6 is 0. . The result is empty for a chain that is not irreducible: there gabs = 0. So. which will be discussed after the proof of the theorem. the bound in Theorem 7. 
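The cycle spectrum quoted above is easy to confirm numerically: the eigenvalues of SRW on C_n are cos(2πj/n), j = 0, …, n − 1, and −1 occurs exactly when n is even (the bipartite case). A small sketch:

```python
import numpy as np

def cycle_walk_spectrum(n):
    """Eigenvalues of SRW on the n-cycle C_n; they should be cos(2*pi*j/n)."""
    P = np.zeros((n, n))
    for x in range(n):
        P[x, (x + 1) % n] = P[x, (x - 1) % n] = 0.5
    return np.sort(np.linalg.eigvalsh(P))

spec_even = cycle_walk_spectrum(12)   # bipartite: -1 is an eigenvalue
spec_odd = cycle_walk_spectrum(9)     # odd cycle: bottom eigenvalue > -1
```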
if λn = −1.. . then the ··· has eigenvalues 1. The true mixing time is actually Cn2 . then take f = 1A − β 1Ac smaller than 1 implies exponential heat kernel decay (i. with a β chosen to make the average 0. The main reason for the interest in the spectral gap of a chain is the following result. ··· 1 n 1 n 1 n 1 n 1 n .where the second inequality is left as a simple exercise. . bound in the theorem gets small at Cn2 log n. let us see four examples. ⊲ Exercise 7. and in Section 8. pt (x. .14 (d). saying that if the gap is large. together with proper deﬁnitions of the diﬀerent notions of mixing time. the exercise below tells us that the eigenvalues are cos(2πj/n). as we will see {ex. j = 0. . and the upper bound 1/π∗ is trivial. 64 . instead of 1 − λ2 . ⊲ Exercise 7.4.

6. So | pt (x. the upper bound in the theorem gets {ex. and the last inequality is by the Pythagorean theorem. By For t = C (d.y π (y ) note that any d-regular graph with n vertices has diameter at least logd−1 n. ϕx ) = (P t (ϕy − 1). 1}k in the above example. See the next section for more on this. the walk stays in the same place for that step. and Exercise 8. Now. small for t = Ck . up to a constant factor. d. an expander mixes basically as fast as possible maxx. and only moving if the coin turns up heads (otherwise.y π (y ) t√ 1 λ∗ . )−π (y ) |≤ where λ∗ := maxi≥2 |λi |. see Exercise 7. c)-expander is a d-regular graph on n vertices.hyperspec} {ndcExpander} regular expander sequence is a sequence of (nk .y | pt (x. In fact. which implies that for a constant degree graph. (The converse is false. the existence of this c > 0 is equivalent to having a uniformly positive spectral gap. c) expander. but the true mixing time is actually C k log k .6. Then P 2 hypercube. (Hint: think of this as ⊲ Exercise 7. but whose existence is far from clear. A random walk can be made “lazy” by ﬂipping a fair coin before each step.2.) Our last example concerns graphs that are easy to deﬁne. Example 4. y ) − π (y ) = (P t ϕy − 1. .) If {0. d. d. and π∗ = 1 n. consider the simple random walk on this hypercube and let ¯ = I +P is the Markov operator for the lazy random walk on the P be its Markov operator. 1) = 0. with h(V ) ≥ c. π (x )π (y ) 65 . Deﬁne ϕx (z ) := Then (P t ϕy )(x) = P[ Xt =y |X0 =x ] π (y ) = pt (x. since (ϕy − 1. and π∗ = So. So pt (x. ϕx ) π (y ) ≤ ϕx · P t (ϕy − 1) = 1 π (x) P t (ϕy − 1) . where f1 . c) log n. So n n P t (ϕy − 1) 2 = i=2 ai λt i fi 2 = i=2 t ≤ λ2 ∗ 2 |λt i | ai f i n 2 ai f i i=2 2 t = λ2 ∗ ϕy − 1 2 t ≤ λ2 ∗ ϕy 2 . π is uniform. .11 (b). This is sharp. the bound in the theorem is small. Proof of Theorem 7. For simple random walk on an (n. An (n. . 7. )−π (y ) | = 1 for t ≤ logd−1 n.16. A dTheorem 7. Therefore. 
where the inequality is from Cauchy-Schwarz. . ¯ on {0. fn is an orthonormal basis of eigenvectors.5. see Exercises 7. Deﬁnition 7. and we are done. Compute the spectrum of P a product chain of random walks.) 1 { x= z } π (x ) .5. in this case. we can write ϕy − 1 = n i=2 ai fi . 1}k is the k -dimensional hypercube. c)-expanders with nk → ∞ and c > 0. and it has eigenvalues It turns out that gabs = 2 1 k 1+λi 2 .14 (b). 1 2k .y ) π (y ) .Example 3.
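The proved bound can be tested on a small reversible chain. The sketch below uses a hypothetical birth-and-death chain chosen only for illustration, computes λ* = max_{i≥2} |λ_i| from the symmetrized (self-adjoint) operator D^{1/2} P D^{−1/2}, and checks the standard form of the bound, |p_t(x, y)/π(y) − 1| ≤ λ*^t / √(π(x)π(y)), for a range of t (the exact normalization is an assumption consistent with the proof above):

```python
import numpy as np

# A small reversible birth-and-death chain on {0,1,2,3}
n, up, down = 4, 0.3, 0.2
P = np.zeros((n, n))
for i in range(n):
    if i + 1 < n: P[i, i + 1] = up
    if i - 1 >= 0: P[i, i - 1] = down
    P[i, i] = 1 - P[i].sum()

w = np.cumprod(np.r_[1.0, np.full(n - 1, up / down)])  # detailed-balance weights
pi = w / w.sum()

# lambda_*: second largest |eigenvalue|, via the symmetrized form
S = np.diag(np.sqrt(pi)) @ P @ np.diag(1 / np.sqrt(pi))
lam_star = np.sort(np.abs(np.linalg.eigvalsh(S)))[-2]

def bound_holds(t):
    Pt = np.linalg.matrix_power(P, t)
    lhs = np.abs(Pt / pi - 1)                      # |p_t(x,y)/pi(y) - 1|
    rhs = lam_star**t / np.sqrt(np.outer(pi, pi))  # lambda_*^t / sqrt(pi(x)pi(y))
    return bool(np.all(lhs <= rhs + 1e-12))
```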

pt (y. so tmix (1/4) captures well the magnitude of the time needed for the chain to get close to stationarity.TV} ⊲ Exercise 7. say 1/4. ν (x) (7. the L2 -distance P t ϕy − 1 2 = pt (·. Let d(t) := sup dTV pt (x.7.8. ·)/π (·) and 1(·). d(ℓ tmix (1/4)) ≤ d For instance. Show that dTV (µ. ν ) : = max |µ(A) − ν (A)| : A ⊆ V = 1 2 x ∈V |µ(x) − ν (x)| = 1 2 x ∈V µ(x) − 1 ν (x) . in the L∞ -norm between the functions pt (x. The most popular notion of distance uses the L1 -norm. and so on. called the uniform distance. where the middle equality uses that pt (x. In any case. (1 − ǫ)-factor of their share at stationarity. ⊲ Exercise 7. ν ) := x ∈V 2 µ(x) − 1 ν (x) . ¯(ℓ tmix (1/4)) ≤ d ¯(tmix (1/4))ℓ ≤ (2d(tmix (1/4))ℓ ≤ 2−ℓ .5) {e. deﬁned as follows: for any two probability measures µ and ν on V .y ) π (y ) . This time will be denoted by tmix (1/4). The following two exercises explain why we introduced d ¯(t) ≤ 2d(t).7. ν (x) (7. dTV (µ. y )/π (y ) = pt (y. Deﬁne the separation distance at time t by s(t) := supx∈V 1 − pt (x. the mixing time of a chain is usually deﬁned constant. ν ) = min P[ X = Y ] : (X. as an intermediate step. Using Exercise 7. x)/π (x) for reversible chains.9.This theorem measures closeness to stationarity in a very strong sense. and χ2 is the chi-square distance.y ∈V ¯(t). The proof itself used. π (·) . show that d Therefore. Why is this is a good deﬁnition? Let us discuss the case of the total variation distance. π (·) x ∈V {ex. Note that this is a one-sided version of the uniform distance: s(t) < ǫ means that all states have collected at least a likely to be.TVcoupling} to be the smallest time t when the distance between pt (x. ·) − 1(·) π (·) 2 = χ2 pt (y. tmix (2−100 ) ≤ 100 tmix(1/4). given a notion of distance. the following asymmetric distance between two measures: χ2 (µ. y ) − 1(·) π (y ) = 2 pt (y. for tmix (1/4) = tTV mix (1/4). Prove the equality between the two lines of (7. This also shows dTV to be a metric. ·). one can use any Lp norm. 
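The two expressions for d_TV in (7.5), the maximum over events and half the L¹-distance, can be checked against each other by brute force over all events of a small state space (the particular measures below are arbitrary illustrations):

```python
import itertools

def tv_half_l1(mu, nu):
    """d_TV as half the L^1 distance."""
    return 0.5 * sum(abs(m - v) for m, v in zip(mu, nu))

def tv_max_event(mu, nu):
    """d_TV as max_A |mu(A) - nu(A)|, brute-forced over all subsets A."""
    n = len(mu)
    return max(
        abs(sum(mu[i] for i in S) - sum(nu[i] for i in S))
        for r in range(n + 1)
        for S in itertools.combinations(range(n), r)
    )

mu = [0.1, 0.4, 0.2, 0.3]
nu = [0.25, 0.25, 0.25, 0.25]
```

The maximum is attained at A = {x : µ(x) > ν(x)}, which is exactly why the two formulas agree.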
⊲ Exercise 7. ⊲ Exercise 7. Y ) is a coupling of µ and ν .chi2dist} which we will again use in Section 8.6) {e. Of course. but there still could be states where the walk is very 66 . Show that d(t) ≤ d ¯(t + s) ≤ d ¯(t) d ¯(s).3. or more precisely the total variation distance.6. ·). ·) and π (·) becomes less than some small ¯(t) := sup dTV pt (x. BobT06]. ·). see [SaC97.6). or quantities related to the entropy. ·) . and d x.

C < ∞.6 implies that t∞ mix (1/e) ≤ 1 + ln 1 trelax . let f = 0. d(t) ≤ s(t) ≤ 4d(t/2). Moreover. For the distance in total variation. for some absolute constants 0 < c. Show that with equality at the eigenfunction corresponding to the λi giving gabs = 1 − |λi |. be an eigenfunction corresponding to the λi giving g = 1 − |λi |. π∗ {ex. 67 (7.7) {e.inftyrelax} It also has an interpretation as a genuine temporal quantity. or in other words.14 below. The direction ≤ follows almost immediately from Theorem 7. assuming the answer to Exercise 7.6: ⊲ Exercise 7. trelax ≤ C tTV mix (1/4) . one can get that tTV mix (Cn ) = O(n ) for the n-cycle. This is certainly related to mixing: Theorem 7. π (·) ≤ 2 f ∞ d(t) . then taking tth roots gives the result. The relaxation time of the chain is deﬁned to be trelax := 1/gabs . ∞ 2 dTV pt (x.11.⊲ Exercise 7. one can easily get the bound d 1/2 k ln k + c k ≤ e−2c /2 for c > 1 on the TV distance for the lazy walk on the hypercube {0.7.1 gives the following: Proposition 7. Varπ [P t f ] ≤ (1 − gabs )2t Varπ [f ] . P ) is transitive. This is orthogonal to the constant functions (the eigenfunctions for λ1 = 1).6. Furthermore. g > 0 implies that limt→∞ P f (x) = Eπ f for all x ∈ V . π (·) 2 2 {ex. The above inequality (1 − gabs )t ≤ 2d(t) easily gives that gabs ≥ c/tTV mix (1/4). For instance. hence t |λt i f (x)| = |P f (x)| = x f (x)π (x) Proof. limt→∞ d(t) ≤ ρ(P ) inequality in Proposi{pr.relaxTV} . * Show that. y pt (x. Taking x ∈ V with f (x) = f we get (1 − gabs)t ≤ 2d(t). shown by the following exercise that is just little more than a reformulation of the second part of the proof of Theorem 7. For the other direction.separation} (7. assuming the answer 2 to Exercise 7. let Varπ [f ] := Eπ [f 2 ] − (Eπ f )2 = t x f (x)2 π (x) − x f (x)π (x) .4. (b) Show that if the chain (V.8) {e.convspeed} 1/t = 1 − gabs . ·) − 1(·) π (·) 2 n = 2 i=2 t λ2 i .) Also. Therefore. in any ﬁnite reversible chain. where gabs is the absolute spectral gap. 
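The variance-decay inequality Var_π[P^t f] ≤ (1 − g_abs)^{2t} Var_π[f], which is what lets one read t_relax as the time to shrink standard deviations by a factor of e, can be verified directly on a small chain. The sketch uses SRW on C₅ (an illustrative choice) and an arbitrary seeded test function:

```python
import numpy as np

n = 5
P = np.zeros((n, n))
for x in range(n):                    # SRW on the odd cycle C_5
    P[x, (x + 1) % n] = P[x, (x - 1) % n] = 0.5
pi = np.full(n, 1.0 / n)

g_abs = 1 - np.sort(np.abs(np.linalg.eigvalsh(P)))[-2]   # absolute spectral gap

def var_pi(f):
    return float(pi @ f**2 - (pi @ f) ** 2)

rng = np.random.default_rng(1)
f = rng.standard_normal(n)
decay_ok = all(
    var_pi(np.linalg.matrix_power(P, t) @ f)
    <= (1 - g_abs) ** (2 * t) * var_pi(f) + 1e-12
    for t in range(20)
)
```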
y ) − π (y ) f (y ) ≤ f ∞. then 4 dTV pt (x. ·).5.L2mixing} (a) For f : V −→ R. (This is sharp even regarding the constant 1/2 in front of k ln k — see Exercise 7. 1}k .10. Hence trelax is the ≤ pt (x. ·). the ﬁnite chain analogue of the proof of the P tion 7. time needed to reduce the standard deviation of any function to 1/e of its original standard deviation.

·). for rapidly mixing chains. Yet another method to bound total variation mixing times from above is by Exercise 7. then y = Y0 can be chosen uniformly. we reach Xn = Yn when all the h coordinates in which X0 and Y0 diﬀered have already been sampled. and when and how could we eliminate that? In some cases. a simple coupling is that we ﬁrst the ith coordinate. or for chains on spin systems. but the expected time to couple is still ∼ k ln k . |Vn | base box Zd n . . while we still would like to have polynomial mixing in n. then the two walks move or stay put choose a coordinate i ∈ {1.12. The above Exercise 7. and it is certainly impossible for a ﬁxed y and several x’s. If we want the distribution of pn (x. For the lazy walk on the hypercube {0. This suggests that if we do not want mixing in L∞ . because of the independence of stages in collecting the coupons. this sharp concentration around this expectation (see the next exercise). k } uniformly. in the example of simple random walk on the cycle Cn .. average. then we should be able to avoid this wasteful Cauchy-Schwarz. on V (G). where trelax = O(log |V |). instead of the true mixing time ≍ n2 . it gives a bound O(n2 log n). which is a classical coupon-collector problem: in expectation. In some other cases. . this takes is k ln k . such as the Glauber is exponential in the parameter n that is the dimension of the hypercube or the linear size of the dynamics of the Ising model in a box Zd n on critical or supercritical temperature. Here is an example.7).11 (b) gave two examples where both sources of waste can be eliminated. 2. To exclude this. or on a lamplighter group. then either Xn or Yn moves in it seems very unlikely that P t ϕy − 1 is almost collinear with ϕx . Use the above coupling for the lazy walk on {0. each with probability half. one needs to understand the spectrum very well. ·) to get close to π (·). C < ∞ and all t > 0. In the worst case h = k . this factor comes from two sources. 
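The coupon-collector analysis of the coordinate coupling is easy to simulate. In the sketch below (hypothetical names; in the worst case the coupling is complete once every one of the k coordinates has been picked at least once), the empirical mean matches k·H_k ≈ k ln k:

```python
import math
import random

def collector_time(k, rng):
    """Steps until every coordinate of {0,1}^k has been picked at least once --
    the coupon-collector time bounding the coupling of two antipodal walks."""
    seen, t = set(), 0
    while len(seen) < k:
        seen.add(rng.randrange(k))
        t += 1
    return t

k, trials = 100, 300
rng = random.Random(0)
avg = sum(collector_time(k, rng) for _ in range(trials)) / trials
expected = k * sum(1 / i for i in range(1, k + 1))   # k * H_k, slightly above k ln k
```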
.ask: going from the relaxation time to the uniform mixing time.11 (b) 68 . one clearly expects time k/h + k/(h − 1) + · · · + k/1 ∼ k ln h if both k and h are large. 1}k .g. . Looking at the proof of Theorem 7. how bad is the ln 1/π∗ factor in ∞ Besides (7. ·) ≤ P[ Xn = Yn ] . on Z2 ≀ Zd n . . around k/2.6. and.. 1}n. X1 . which makes h concentrated worst case.7) is basically sharp. . this factor will turn out to be a big loss: e. and y = Y0 . The ﬁrst one is an application of Cauchy-Schwarz that looks indeed awfully wasteful: this happens for a lot of y ’s and a ﬁxed x. . pn (y. this factor is not very important: e. and the factor ln 1/π∗ might ruin the exponent of the polynomial. In these cases. not better than what we got from the ⊲ Exercise 7. 1}k to show the total variation disshows that this k ln k is suboptimal at least by a factor of 2. .g. this is a serious factor. only in some together. . and (7. say. tance bound d(k ln k + tk ) ≤ Ce−ct for some 0 < c.7 above: for any coupling of two simple random walks x = X0 . . dTV pn (x. it is certainly essential. for SRW on the hypercube {0. X2 . For the case of SRW on an expander. Y2 . However. The second source is the possibility that all the non-unit eigenvalues are close to λ2 . In this coupling. If Xn (i) = Yn (i). if Xn (i) = Yn (i). Note that Exercise 7. It is therefore more than natural to (7.8). Y1 . then. it is clear that tTV mix (ǫ) ≤ tmix (ǫ) for any ǫ. it seems even more unlikely that since diﬀerent ϕx ’s are orthogonal.

69 . where σ 2 = (Varµ f + Varν f )/2 and r > 0.5).15.TVfunctions} ⊲ Exercise 7. by Exercise 7. for help.4. on the cycle.. but we want to point out here that ﬁnding one good observable is typically much easier than determining the entire spectrum. the variance Varπ [P t F ] does not start decaying before t reaches order n2 . Show that if tTV mix (Vn ) ≍ trelax (Vn ) for a sequence of ﬁnite reversible Markov chains on n vertices. then the sequence cannot exhibit cutoﬀ for the total variation mixing time. we already knew this from computing the spectrum exactly in Exercise 7. (We already had this lower bound from part (c) and (7. Observe that although this F gives only an Ω(n) lower bound on the relaxation time if the variational formula (7.14 (Lower bounds from eigenfunctions and similar observables). consider the total Hamming weight W (x) := W − Eπ W 2 {ex. ν ) ≥ 1 − 4 .3]: ⊲ Exercise 7. but we wanted to point out that the same observable might fail or succeed.11 (b).8). consider the function F (i) := 1{n/4 ≤ i < 3n/4} for i ∈ Cn .13. See [LevPW09. 1}k .4) is used.4) that the spectral gap of the walk is at most of order 1/k (which we already knew from the exact computation in Exercise 7. This was not an accident: ⊲ Exercise 7. (Again. deduce that d 1/2 k ln k − c k ≥ 1 − 8e−2c+1.11 (b) does imply a quadratic lower bound on the relaxation time. Section 7. (b)* For the lazy random walk {Xt }t∈N on {0. and deduce by the variational formula (7. 2 Similarly.) (d) For SRW on the n-cycle Cn . Comparing with Exercise 7. show that tTV mix (Cn ) = Ω(n ). . 4 t Using Exercise 7. we did not draw such conclusions. In part (d).) In the previous exercise. Explain how this is possible in terms of the L2 -decomposition of F into eigenfunctions of the Markov operator.mixlower} xi . Then ﬁnd the L2 -decomposition of W into the eigenfunctions of the Markov operator. we see a sharp transition in the TV distance (c) Take SRW on the n-cycle Cn . 
part (b) showed that the TV mixing time on the hypercube is signiﬁcantly larger than the relaxation time. let us quote the following very natural result from [LevPW09. using the strategy of part (b).13] at time 1/2 k ln k : this is a standard example of the cutoﬀ phenomenon [LevPW09.e.Before giving some lower bounds on mixing times. Find a function that shows by (7. hence Exercise 7. Estimate its Dirichlet energy E (W ) and variance Varπ [W ] = (a) For the lazy random walk on {0. if needed. i. compute E W (Xt ) X0 = 0 = and Var W (Xt ) X0 = 0 ≤ 1 k 1− 1− 2 k k .11 (b). 1}k and W as in part (a).4) that the spectral gap of the chain is a most of order 1/n2 . and deduce the decay of Varπ [P t W ] as a function of t. Proposition 7. depending on the method. then dTV (µ. the relaxation time is at least of order k . and showed the cutoﬀ phenomenon. Chapter 18]. Show that if f is a function on the state space such that Eµ f − Eν f ≥ r σ . which is the right order. 4 + r2 k i=1 {ex.13.
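The relaxation of the Hamming-weight observable can be computed exactly from the per-coordinate recursion. The sketch below assumes the variant of the lazy walk in which each step re-randomizes a uniformly chosen coordinate, so each E[x_i] contracts by the factor 1 − 1/k per step (the text's lazy-walk convention may differ by a constant in the base of the exponent):

```python
# Exact evolution of E[W(X_t) | X_0 = all-zeros] for the coordinate-resampling
# walk on {0,1}^k: each step picks a uniform coordinate and sets it to a fair bit.
k, T = 10, 60
q = 0.0                               # P[a fixed coordinate equals 1] at time 0
expected_W = []
for t in range(T):
    expected_W.append(k * q)          # by symmetry, E[W] = k * q
    q = (1 - 1 / k) * q + (1 / k) * 0.5

closed_form = [k / 2 * (1 - (1 - 1 / k) ** t) for t in range(T)]
```

The closed form shows E[W] approaching its stationary value k/2 on a timescale of order k, consistent with the relaxation time of the walk being of order k.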

g.8) does not imply a uniformly positive absolute spectral gap. that the relaxation time is given by the maximal hitting time for SRW on the base graph torus Z2 n . derandomization of algorithms. You may accept here that transitive expanders exist. . the relaxation and the mixing times are on the same polynomial order of magnitude. tTV mix t∞ mix . the TV mixing time is given by the expected cover time of the base graph. but the walk can be generated using much less randomness. First of all. I cannot formulate a general result or even conjecture on this. {pr. but they still allow for certain states to carry very high measures. α α (b) In a similar manner. e. are in fact of diﬀerent orders: e. very brieﬂy. etc.LLmix} The reason is. On the other hand. where γ = γ (λ2 . En ) with |Vn | → ∞ that mix rapidly. with some 0 < α < 1. tTV mix (1/4) = O(log |V |).d. However. because of ⊲ Exercise 7.9).16 (b). with a natural set of generators. which is log log |Vn |. but do not form an expander sequence. trelax ≍ n2 log n . that does not hold in general: {ex. (7.. This will depend on the exact deﬁnition of “fast mixing”.7 implies that in a sequence of ﬁnite chains.i. we have convergence to stationarity with a uniformly exponential speed if and only if the absolute spectral gaps are uniformly positive. . To conclude this subsection.e.qexpander} not hold. En ) satisfying trelax ≍ tTV mix (1/4) ≍ log |Vn |. P ) be a reversible Markov chain with stationary measure π and spectral of the chain in stationarity (i. See [PerR04].8. In this example.. (a) Give a sequence of d-regular transitive graphs Gn = (Vn .g. . simple random walk on the lamplighter groups Z2 ≀ Zd n . This is useful in pseudorandom generators. .. Proposition 7. X0 ∼ π ). then (7.There are natural chains where the three mixing times. It is especially interesting to notice the diﬀerence between the separation and uniform mixing times: by time C n2 log2 n with large C . 
I do not know examples of Markov chains over spin systems where this does counterexamples like expanders. sequence. Let us now discuss whether a uniformly positive spectral gap is required for fast mixing. Let (Xi )∞ i=0 be the trajectory P Xi ∈ A for all i = 0. β ) > 0 and C is an absolute constant.9) {e. ·)-measure of any speciﬁc state too small. Let (V. or the ones in Exercise 7. tTV mix (1/4) = O(log |Vn |). Then gap 1 − λ2 > 0. the small pieces left out by the SRW on the torus are not enough to make the pt (x.AKSz} 70 . i. (7.e. This seems to be a rather general phenomenon. if we deﬁne “fast mixing” by the mixing time being small. 2 2 tTV mix (ǫ) ≍ n log n . and let A ⊂ V have stationary measure at most β < 1.. 1.16. while the uniform mixing time is given by the time when the set St of unvisited sites on the base graph satisﬁes E[ 2|St | ] < 1 + ǫ. Indeed. trelax For d = 2. t ≤ C (1 − γ )t . 4 t∞ mix (ǫ) ≍ n . their ratio is just log n. A nice property of simple random walk on an expander is that the consecutive steps behave similarly in several ways to an i. here is an exact statement of this sort: Proposition 7. give a sequence Gn = (Vn .

Komlós and Szemerédi [AjKSz87].
then the spectral radii satisfy ρ(G′ ) ≤ ρ(G). P ) is a reversible Markov chain with stationary measure π and spectral gap 1 − λ2 > 0.17.

the free group with two generators. are Kazhdan. See [BekdHV08]. The maximal κ will be denoted by κ(Γ. If a group is Kazhdan (i. then it is a ﬁnite group. has Kazhdan’s property (T )) and it is also amenable. S1 ) > 0 then κ(Γ. for d ≥ 3.20. S2 ) > 0 for any pair of ﬁnite generating set for Γ.[Pip77]). that v is a vector √ in H with norm 1 with ρ(g )v − v ≤ 2.9]: Theorem 7. we can produce invariant vectors in L2 (Γ). see [Lub94. for all g ∈ Γ. there exist a generator s ∈ S with ρ(s)v − v ≥ κ v . √ Example: If Γ is ﬁnite. the deﬁnition says that having no invariant vectors in a representation always means there are no almost invariant vectors either. so it is non-zero. although there are many expanders. Lemma 1. w) > 0} for all g . Proposition 1. hence the Kazhdan property will apply to real actions.Kaka} Recall that a vector ﬁxed by ρ would be an element v such that ρ(g )v = v for all g ∈ Γ. as with some other combinatorial problems. then we can also consider it as a unitary action on a complex Hilbert space. Then ρ(g )v is in the open half-space + Hv := {w ∈ H : (v. ⊲ Exercise 7. Asymptotically almost every d-regular graph is an expander.19.e.9 ([Pin73]. too.3. is not Kazhdan since there exists a surjection F2 −→ Z It is not obvious to produce inﬁnite Kazhdan groups. g + we will obtain an invariant factor which is in the interior of Hv . S ) ≥ 0. ⊲ Exercise 7. The main example is that SLd (Z). then κ(Γ.10) {e. Let us assume. However. (7. So. The ﬁrst reason for property (T) entering probability theory was the construction of expanders.kazhdan} Deﬁnition 7. (Hint: From Følner sets (which are almost invariant). v0 := 1 |G| ρ(g )v . it is hard to ﬁnd an inﬁnite family explicitly. and ∀v . If we average over all g . 72 . and then F2 acts on L2 (Z). We say that Γ has Kazhdan’s property (T ) if there exists a κ > 0 such that for any unitary representation ρ : Γ −→ U (H ) on a complex Hilbert space H without ﬁxed (non-zero) vectors. 
Note that if we have an action of Γ on a real Hilbert space by orthogonal transformations. on the contrary.1] or [HooLW06.2. S2 . On one hand.. A group having property (T) is “well-deﬁned”: if κ(Γ. S1 .) F2 . we have the following not very diﬃcult theorem. (the obvious projection) to an amenable group.{d. Γ) ≥ 2.

ﬁrst done by Lubotzky. An early example that uses only some simple harmonic analysis is [GaGa81]. for any ǫ > 0. [Dei07] on the random 73 . a completely elementary construction was found by Reingold. What is the asymptotic distribution of λ2 if we choose k -regular graphs uniformly at random? The limiting distribution should be λ2 − √ 2 k −1 k Again. the real symmetric Gaussian random matrices.21 (Margulis 1973). Section 5. Philips and Sarnak in 1988. for β = 1.10 (Question on k -regular graphs). S ) be the Cayley graph of Γn with a ﬁxed generating set S of G. Then the Gn are expanders. this is due to Joel Friedman. the typical eigenvalues of large random k -regular graphs in the k → ∞ limit also matrix ensembles and their universal appearance throughout science. See. More recently. ﬁnite k -regular graph on n vertices has second largest eigenvalue λ2 (G) ≥ A sequence of k -regular graphs Gn are called Ramanujan graphs if √ 2 k−1 . This was the ﬁrst explicit construction of an expander family.⊲ Exercise 7. let Gn = G(Γn . Section 9]. There are also explicit constructions. and is much harder to prove than just being expanders.2. − o(1) as n → ∞. with appropriate generating sets S are Ramanujan graphs. for some unknown normalizing functions f. where T Wβ stands for the Tracy-Widom distribution for the ﬂuctuations of the largest eigenvalue of the β -ensemble of random matrices. asymptotically almost every k -regular graph has λ2 (G) ≤ √ 2 k −1 k +ǫ — + f (n) g (n) → T Wβ =1 . from an inﬁnite Kazhdan group with inﬁnitely many diﬀerent ﬁnite factors. S ). see [HooLW06. for instance. k and any see [HooLW06. lim inf λ2 (Gn ) = n→∞ k So. in particular. much easier constructions have been found. Any inﬁnite k -regular transitive graph G has spectral radius ρ(G) ≥ ρ(Tk ) = √ 2 k −1 k √ 2 k −1 . the Ramanujan graphs are the ones with largest spectral gap. g . As we will see in Exercise ?? and Section 14. 
The motivation behind this conjecture is that the adjacency matrix of a random k -regular graph is a sparse version of a random real symmetric Gaussian matrix. look like the typical eigenvalues of large random matrices. Vadhan and Wigderson (2002). Since then.2]. Conjecture 7. it is the GOE ensemble. They showed that G(SL3 (Fp ). using the so-called zig-zag product. See [HooLW06] and [Lub94]. If Γ is Kazhdan and inﬁnite. with ﬁnite factor groups Γn .

For instance. for the non-amenable case (d = ∞). y ) ≤ Cn −d 2 {t. an n−d/2 heat kernel decay implies a so-called d-dimensional Faber-Krahn inequality. in (8. a beautiful probabilistic approach developed by Ben Morris and Yuval Peres [MorP05]. volume growth in a bounded degree graph implies recurrence. IPd ⇒ pn (x. An important special case can be formulated as follows: Theorem 8. For transitive graphs.2.1). As can be seen from the second statement of the theorem. So. show that a |Bn (x)| = o(n2 / log n) (b) Construct a recurrent tree with exponential volume growth. developed further with his students [VarSCC92]. depending on the isoperimetry of the space. In particular. We will not study these questions.2 below. do not imply upper bounds on return probabilities. we already have 74 .Sobolev} To compare this inequality for diﬀerent d values.1 Poincar´ e and Nash inequalities {ss. let us give two quick exercises: ⊲ Exercise 8.PoincareNash} Recall the Sobolev inequality from Proposition 7. hence the integral ∇f less mass compared to f f 1 1 d/(d−1) ≍ n(d−1)/d .8 Isoperimetric inequalities and return probabilities in general {s. one way of thinking of this inequality is that loses So. and [CouGP01] for a geometric approach for groups. This was ﬁrst discovered by Varopoulos [Var85a].4: for f ∈ ℓ0 (V ). but if the space has better isoperimetry.varopoulos} . (8. then from each point we have more directions. there are also bounds going in the other direction. Then fn f d/(d−1) ≤ C (κ) ∇f 1 . this is an equivalence. this is strong enough to deduce an at least d-dimensional volume growth. without any regularity assumption. take fn to be roughly constant with a support the function f becomes smaller by taking the derivative in any direction. to make up for the loss caused by taking the derivative. proving and using the so-called Nash inequalities. lower bounds on the volume growth. 8. (a) Using the Carne-Varopolous bound. 
but before diving into Nash inequalities and evolving sets. V satisﬁes IPd (κ) =⇒ of π -measure n.1.1) {e. Theorem 9. This approach is very much functional analytic. then will study in more depth the method of evolving sets. but not IPd . We will sketch this approach in the next section.return} The main topic of this section is a big generalization of the deduction of exponential heat kernel decay from non-amenability (or fast mixing from positive Cheeger constant): general isoperimetric inequalities also give sharp upper bounds on the return probabilities. However. For general graphs. we are taking diﬀerent norms on the two sides. a continuation of the methods encountered in Subsections 7.1 and 7. Another natural way of compensation ≤ C (κ) ∇f 1. see [Cou00] for a nice overview. 1 .1.

with f = implies the following Nash inequality: f 2 2 t F(t) increasing. The ﬁrst work in this direction was by Varopoulos [Var85a].8]: If U ⊆ Rd with Lipschitz boundary.2) can be spared.2). S ) > 0 such that for any harmonic function f on the Cayley graph G(Γ. . then f (yg ) − f (y ) ≤ f (ys1 . well-known from PDEs [Eva98. The following exercise says that harmonic functions show that the Poincar´ e inequality (8.6) {e.Poincare} RU f (x) dx .SC} ≤2 vol(B2R ) R ∇f vol(BR ) ℓ2 (B3R ) .Hint b} Theorem 7.3) is essentially sharp.) down too much: whenever there is an edge contributing something to ∇f . He proved sharp heat kernel upper bounds by ﬁrst proving that an isoperimetric inequality IPF (κ). So. then f − fBR Hints: we have f (y ) − fBR ≤ z ∈BR ℓ2 (BR ) {ex. the harmonicity carries {ex. sm with m ≤ 2R and generators si ∈ S . the following holds: ⊲ Exercise 8. (8. . we have the Dirichlet inequality f case of Sobolev inequalities. i. (8.. .Hint a} and if g = s1 . c R ∇f ℓ2 (BR ) ≤ f ℓ2 (B2R ) . just like in the ≤ C ∇f see ⊲ Exercise 8.3. Section 5. no compensation is needed.would be to multiply the right hand side by a factor depending on the size of the support. S ). ≤g f 2 1/ f 2 2 EP. .Nash} .e.2) {e. Show that there is a constant c = c(Γ. sm−1 ) + · · · + f (ys1 ) − f (y ) . Note that for non-amenable groups. the factor R on the right hand side of (8. . (8. vol(RU ) Since the quality of the isoperimetry now does not appear in (8. Indeed.5) 2. Show that if f : Γ −→ R on any group Γ. {e. (8. (Such a function cannot go up and this contribution to a large distance. .π (f ). This is done in the Poincar´ e inequalities. one might hope that this can be generalized to almost arbitrary spaces.2 (Saloﬀ-Coste’s Poincar´ e inequality [Kle10]).3) {e. (8. then f (x) − fRU with fRU = Lp (RU ) ≤ CU R ∇f Lp (RU ) . 75 f ∈ ℓ0 (V ) . sm ) − f (ys1 .4) {e. 
The statement can be regarded as a discrete analogue of the classical theorem that functions satisfying the Mean Value Property are smooth.reverse} There are several ways how Poincar´ e or similar inequalities can be used for proving heat kernel estimates. and BR is a ball of radius R in a Cayley graph.SC} |f (z ) − f (y )| ≤ vol(BR ) g∈B2R vol(BR ) |f (yg ) − f (y )| .revPoincare} 2 (8.7) {e.3 (Reverse Poincar´ e inequality).
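A quick numerical sanity check of a Poincaré inequality of this type can be run on the simplest Cayley graph, Z. The constant 2 (vol(B_2R)/vol(B_R)) R and the random test function below are choices of this sketch, not part of the notes:

```python
import random

# Numerical sanity check of a Saloff-Coste-type Poincare inequality on the
# Cayley graph Z, with B_R = {-R, ..., R}:
#   || f - f_{B_R} ||_{l2(B_R)}  <=  2 (vol(B_2R)/vol(B_R)) R || grad f ||_{l2(B_3R)}
# where f_{B_R} is the average of f over B_R and the gradient norm sums
# |f(y+1) - f(y)|^2 over the edges inside B_3R.

def poincare_ratio(f, R):
    ball = range(-R, R + 1)
    avg = sum(f[y] for y in ball) / (2 * R + 1)
    lhs = sum((f[y] - avg) ** 2 for y in ball) ** 0.5
    grad = sum((f[y + 1] - f[y]) ** 2 for y in range(-3 * R, 3 * R)) ** 0.5
    rhs = 2 * ((4 * R + 1) / (2 * R + 1)) * R * grad
    return lhs / rhs

random.seed(0)
R = 20
f = {y: random.random() for y in range(-3 * R, 3 * R + 1)}
print(poincare_ratio(f, R))   # the inequality predicts a value <= 1
```

Note that any f works here; harmonicity of f is only needed for the reverse inequality of the exercise above.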

where g(t) = C_κ (4t/F(4t))². The proof is quite similar to the proofs of Proposition 7.4 and Theorem 7.3. To make the result more readable, consider the case of IP_d(κ), i.e., F(t) = t^{(d−1)/d}. Then we have g(t) = C_κ t^{2/d}, and (8.7) reads as

    ‖f‖_2^2 ≤ C_κ ( ‖f‖_1^2 / ‖f‖_2^2 )^{2/d} ‖∇f‖_2^2 ,

some kind of mixture of the Sobolev and Poincaré inequalities (8.1) and (8.3). For instance, when f = 1_{B_R}, we get the usual ‖f‖_2 ≤ C_κ R ‖∇f‖_2. But we will apply (8.7) to some other functions.

The next step is to note that

    E_{P,π}(f) ≤ 2 ( ‖f‖_2^2 − ‖P̃f‖_2^2 ) ,

where P̃ = (I + P)/2 is the Markov operator for the walk made lazy. Therefore, applying the Nash inequality to f = P̃^n δ_x for each n, we arrive at a difference inequality on the sequence u(n) := ‖P̃^n δ_x‖_2^2 = (P̃^{2n} δ_x, δ_x) = p̃_{2n}(x,x) C_x, namely

    u(n) ≤ 2 g(1/u(n)) ( u(n) − u(n+1) ) .

This gives an upper bound on the decay rate of u(n) as a solution of a simple differential equation. Then we can use that p_{2n}(x,x) ≤ 2 p̃_{2n}(x,x). See [Woe00, Section 14.A] for more detail.

A different approach, with stronger results, is given in [Del99]. For Markov chains on graphs with nice regular d-dimensional volume growth conditions and Poincaré inequalities, he proves that

    c_1 n^{−d/2} exp(−C_1 d(x,y)²/n) ≤ p_n(x,y) ≤ c_2 n^{−d/2} exp(−C_2 d(x,y)²/n) .

Let us note here that a very general Gaussian estimate on the off-diagonal heat kernel decay is given by the Carne-Varopoulos Theorem 9.2 below.
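The difference inequality for u(n) above can be solved numerically by iterating its extremal version; the following sketch (the constant C_κ = 1 and the starting value u(0) = 1 are arbitrary choices) recovers the predicted n^{−d/2} decay of the return probabilities:

```python
import math

# Iterate the extremal version of the difference inequality
#   u(n) <= 2 g(1/u(n)) (u(n) - u(n+1)),  with g(t) = C * t**(2/d),
# i.e. u(n+1) = u(n) - u(n) / (2 g(1/u(n))), and estimate the decay
# exponent of the solution, which should be -d/2.

def decay_exponent(d, C=1.0, n1=10**4, n2=10**5):
    u, vals = 1.0, {}
    for n in range(n2 + 1):
        if n in (n1, n2):
            vals[n] = u
        g = C * u ** (-2.0 / d)          # g evaluated at 1/u(n)
        u -= u / (2 * g)
    return math.log(vals[n2] / vals[n1]) / math.log(n2 / n1)

for d in (2, 3, 4):
    print(d, decay_exponent(d))          # slope close to -d/2
```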

8.2 Evolving sets: the results

Recall that a Markov chain is reversible if there is a measure π with π(x)p(x,y) = π(y)p(y,x); such a π is always stationary. This happens iff the transition probabilities can be given by symmetric conductances: c(x,y) = c(y,x) with p(x,y) = c(x,y)/C_x, and then π(x) = C_x is a good choice. Even for the non-reversible case, when the stationary measure π(x) is not C_x, we define Q(x,y) := π(x)p(x,y), and the isoperimetric profile is

    φ(r) := inf { Q(S, S^c)/π(S) : π(S) ≤ r } .

For instance, IP_d(κ) implies φ(r) ≥ κ r^{−1/d}. On a finite chain, we take π(V) = 1, and let φ(r) = φ(1/2) for r > 1/2.

Theorem 8.2 (Morris-Peres [MorP05]). Suppose 0 < γ < 1/2 and p(x,x) > γ for all x ∈ V. If

    n ≥ 1 + ((1 − γ)²/γ²) ∫_{4 min(π(x),π(y))}^{4/ǫ} 4 du / (u φ(u)²) ,        (8.8)

then p_n(x,y)/π(y) < ǫ or |p_n(x,y)/π(y) − 1| < ǫ, depending on whether the chain is infinite or finite.

For us, the most important special cases are the following:

1) By the Coulhon-Saloff-Coste isoperimetric inequality, Theorem 5.7, any group of polynomial growth d satisfies IP_d. Then the integral in (8.8) becomes

    ∫_{C_γ}^{4/ǫ} 4 du / u^{1−2/d} ≍ (1/ǫ)^{2/d} .

That is, the return probability will be less than ǫ after n ≍ ǫ^{−2/d} steps, so p_n(x,x) < C n^{−d/2}.

2) IP_∞ turns the integral into C_γ log(1/ǫ), hence p_n(x,x) < exp(−cn), as we know from the Kesten-Cheeger-Dodziuk-Mohar Theorem 7.3.

3) For groups of exponential growth, the CSC isoperimetric inequality implies φ(r) ≥ c/log r. Then the integral becomes

    ∫_1^{1/ǫ} log²u du / u ≍ log³(1/ǫ) ,

thus p_n(x,y) ≤ exp(−c n^{1/3}). This is again the best possible, by the following exercise.

⊲ Exercise 8.4. * Show that on the lamplighter group Z2 ≀ Z we have p_n(x,x) ≥ exp(−c n^{1/3}).

As we discussed in the introduction to this section, these bounds were first proved by Varopoulos, using Nash inequalities. The beauty of the Morris-Peres approach is that it is completely probabilistic, defining and using the so-called evolving set process. Moreover, it also works for the non-reversible case, unlike the functional analytic tools. The integral (8.8) using the isoperimetric profile was first found by Lovász and Kannan [LovKa99], for finite Markov chains, but they deduced mixing only in total variation distance, not uniformly as Morris and Peres do. To demystify the formula a bit, note that for a finite Markov chain on V with uniform stationary distribution, using the bound φ(r) ≥ φ(1/2) = h for all r (since r = 1/2 is an infimum over a larger set), where h is the Cheeger constant, the Lovász-Kannan integral bound implies the upper bound

    ≍ ∫_{1/|V_n|}^{1} dr / (r h²) = log|V_n| / h²

on the mixing time, as was shown earlier in Theorem 7.6. The idea for the improvement via the integral is that, especially in geometric or transitive settings, small subsets often have better isoperimetry than the large ones. (Recall here the end of Section 5.3, and also see the next exercise.)
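To see where the polynomial-growth special case of (8.8) comes from, one can evaluate the Morris-Peres integral numerically for the profile φ(u) = u^{−1/d}. Taking κ = 1 and a constant lower limit are illustrative assumptions of this sketch:

```python
import math

# Evaluate the Morris-Peres integral  int_a^{4/eps} 4 du / (u * phi(u)**2)
# numerically for the profile phi(u) = u**(-1/d) coming from IP_d (kappa = 1),
# and check that it grows like eps**(-2/d), as in special case 1).

def mp_integral(eps, d, a=1.0, steps=50000):
    b = 4.0 / eps
    log_a, log_b = math.log(a), math.log(b)
    h = (log_b - log_a) / steps
    total = 0.0
    for i in range(steps):
        u = math.exp(log_a + (i + 0.5) * h)     # midpoint rule in log u
        phi = u ** (-1.0 / d)
        total += 4.0 / (u * phi ** 2) * u * h   # du = u d(log u)
    return total

d = 3
r1 = mp_integral(0.1, d) / 0.1 ** (-2.0 / d)
r2 = mp_integral(0.01, d) / 0.01 ** (-2.0 / d)
print(r1, r2)    # ratios stabilize: the integral is of order eps**(-2/d)
```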

Example: Take an n-box in Z^d. A sub-box of side-length t has stationary measure r = t^d/n^d and boundary ≍ t^{d−1}/n^d, hence the isoperimetric profile is φ(r) ≍ 1/t = r^{−1/d}/n. (Of course, one would need to show that sub-boxes are at least roughly optimal for the isoperimetric problem. This can be done similarly to the infinite case Z^d, see Section 5.4, though it is not at all obvious from that.) This is clearly decreasing as r grows. Therefore, the Cheeger constant is h ≍ 1/n, achieved at r ≍ 1. Using Theorem 7.5, we get that the spectral gap is at least of order 1/n², which is still sharp, but then Theorem 7.6 gives only a uniform mixing time ≤ C_d n² log n. However, using the Lovász-Kannan integral, the mixing time comes to C_d n².
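The Lovász-Kannan computation for the n-box can be checked numerically, pretending that the profile is exactly φ(r) = r^{−1/d}/n (an idealization of the ≍ above, assumed only for this sketch):

```python
import math

# Lovasz-Kannan bound for the n-box in Z^d with profile phi(r) = r**(-1/d)/n:
#   int_{1/n^d}^{1} dr / (r * phi(r)**2) = n**2 * (d/2) * (1 - n**(-2)),
# i.e. of order n^2, compared with (log|V| / h^2) ~ n^2 log n from using
# only the worst bound phi(1/2) = h ~ 1/n.

def lk_integral(n, d, steps=50000):
    log_a, log_b = math.log(n ** (-d)), 0.0
    h = (log_b - log_a) / steps
    total = 0.0
    for i in range(steps):
        r = math.exp(log_a + (i + 0.5) * h)
        phi = r ** (-1.0 / d) / n
        total += (1.0 / (r * phi ** 2)) * r * h   # dr = r d(log r)
    return total

for n in (10, 20, 40):
    print(n, lk_integral(n, 2) / n ** 2)   # tends to d/2 = 1.0 for d = 2
```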

Another standard example for random walk mixing is the hypercube {0,1}^d with the usual edges.

⊲ Exercise 8.5. For 0 ≤ m ≤ d, show that the minimal edge-boundary for a subset of 2^m vertices in the hypercube {0,1}^d is achieved by the m-dimensional sub-cubes. (Hint: use induction.)

The isoperimetric profile of {0,1}^d can also be computed, but it is rather irregular and hard to work with in the Lovász-Kannan integral, and it does not yield the optimal bound. But, as we will see, with the evolving sets approach, one needs the isoperimetry only for sets that do arise in the evolving sets process:

⊲ Exercise 8.6. * Prove O(n log n) mixing time for {0,1}^n using evolving sets and the standard one-dimensional Central Limit Theorem.
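The O(n log n) order in the last exercise can be made plausible by simulation: the lazy walk on {0,1}^n can be viewed as "pick a uniform coordinate, replace it by a uniform bit", so the first time every coordinate has been resampled is a strong stationary time, of coupon-collector order n log n. A sketch (seed and parameters are arbitrary):

```python
import math
import random

# For the lazy walk on {0,1}^n, viewed as picking a uniform coordinate and
# replacing it by a uniform bit, the state is exactly stationary once every
# coordinate has been resampled.  That refresh time is a coupon-collector
# time, of mean n * H_n ~ n log n.

def refresh_time(n, rng):
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        t += 1
    return t

rng = random.Random(42)
n, trials = 100, 500
avg = sum(refresh_time(n, rng) for _ in range(trials)) / trials
print(avg / (n * math.log(n)))   # close to 1 (in fact to H_n / log n)
```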


8.3 Evolving sets: the proof

We first need to give the necessary definitions for the evolving sets approach, and state (with or without formal proofs) a few basic lemmas. Then, before embarking on the proof of the full Theorem 8.2, we will show how the method works in the simplest case, by giving a quick proof of Kesten's Theorem 7.3, the exponential decay of return probabilities in the non-amenable case, even for non-reversible chains.

Let S ⊆ V. Define

    S̃ := { y : Q(S,y) ≥ U π(y) } ,  where U ∼ Unif[0,1] .

Remember: Q(S,y) = Σ_{x∈S} π(x) p(x,y), with πP = π (a stationary measure). This is one step of the evolving set process. Thus

    P[ y ∈ S̃ | S ] = P[ Q(S,y) ≥ U π(y) ] = Q(S,y)/π(y) ,

which leads to

    E[ π(S̃) | S ] = Σ_y π(y) P[ y ∈ S̃ | S ] = Σ_y Q(S,y) = π(S) .

Therefore, for the evolving set process S_{n+1} = S̃_n, the sequence {π(S_n)} is a martingale. Now take S_0 = {x}; then E[π(S_n)] = π(x). Moreover,

Lemma 8.3. P[ y ∈ S_n | S_0 = {x} ] π(y)/π(x) = p_n(x,y).


Proof. We use induction on n:

    p_n(x,y) = Σ_z p_{n−1}(x,z) p(z,y) = Σ_z P[ z ∈ S_{n−1} | S_0 = {x} ] (π(z)/π(x)) p(z,y) .

Note that

    Σ_z P[ z ∈ S_{n−1} | S_0 = {x} ] Q(z,y) = Σ_z E[ χ_{z∈S_{n−1}} | S_0 = {x} ] Q(z,y) = E[ Q(S_{n−1},y) | S_0 = {x} ] .

But from before, P[ y ∈ S_n | S_{n−1} ] = Q(S_{n−1},y)/π(y). Thus

    p_n(x,y) = P[ y ∈ S_n | S_0 = {x} ] π(y)/π(x) .

These two properties, that π(S_n) is a martingale and that P_{x}[ y ∈ S_n ] π(y) = p_n(x,y) π(x), are shared by the Markov chain trajectory {X_n}, provided the chain is reversible. However, since P[ y ∈ S̃ | S ] = Q(S,y)/π(y), the size of S̃ will have a conditional variance depending on the size of the boundary of S: the larger the boundary, the larger the conditional variance that the evolving set has. This makes it extremely useful if we want to study how the isoperimetric profile affects the random walk.
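Both the martingale property and the duality of Lemma 8.3 can be verified exactly on a small chain, since S̃ is a level set of a single uniform threshold. A sketch for lazy SRW on the cycle Z_5 (our own illustration, not from the notes):

```python
from itertools import combinations

# Exact check, for lazy SRW on the cycle Z_5, of two evolving set facts:
#   (i)  E[ pi(S~) | S ] = pi(S)                     (martingale property)
#   (ii) P_{x}[ y in S_n ] pi(y)/pi(x) = p_n(x,y)    (Lemma 8.3)
# Given S, the next set S~ = {y : Q(S,y) >= U pi(y)} is determined by one
# U ~ Unif[0,1], so its possible values are nested level sets of the
# thresholds Q(S,y)/pi(y), with probabilities given by threshold gaps.

N = 5
pi = [1.0 / N] * N

def p(x, y):                      # lazy SRW transition probabilities on Z_5
    if x == y:
        return 0.5
    return 0.25 if (x - y) % N in (1, N - 1) else 0.0

def step_dist(S):
    """Exact distribution of S~ given S, as {frozenset: probability}."""
    thr = {y: sum(pi[x] * p(x, y) for x in S) / pi[y] for y in range(N)}
    cuts = sorted(set(thr.values()) | {0.0, 1.0})
    out = {}
    for lo, hi in zip(cuts, cuts[1:]):            # U falls in (lo, hi]
        A = frozenset(y for y in range(N) if thr[y] >= hi)
        out[A] = out.get(A, 0.0) + hi - lo
    return out

# (i) martingale property, for every subset S
for k in range(N + 1):
    for S in combinations(range(N), k):
        ev = sum(pr * sum(pi[y] for y in A)
                 for A, pr in step_dist(frozenset(S)).items())
        assert abs(ev - k / N) < 1e-12

# (ii) duality: propagate the subset distribution from S_0 = {0} and compare
dist = {frozenset([0]): 1.0}
pn = [1.0 if y == 0 else 0.0 for y in range(N)]
for _ in range(4):
    new = {}
    for S, pr in dist.items():
        for A, q in step_dist(S).items():
            new[A] = new.get(A, 0.0) + pr * q
    dist = new
    pn = [sum(pn[z] * p(z, y) for z in range(N)) for y in range(N)]
    for y in range(N):
        hit = sum(pr for A, pr in dist.items() if y in A)
        assert abs(hit - pn[y]) < 1e-12           # pi uniform, so ratio is 1
print("martingale and duality identities verified")
```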

Recall that, if f is a concave function, then by Jensen's inequality E f(X) ≤ f(E X). Moreover, if there is a variance, then the inequality is strict. For example, if f is the square root:

⊲ Exercise 8.7. If Var[X] ≥ c (E X)², then E √X ≤ (1 − c′) √(E X), where c′ > 0 depends only on c > 0.

Recall that Q(x,y) = π(x) p(x,y), thus Q(A,B) = Σ_{a∈A, b∈B} π(a) p(a,b).

Proof of Theorem 8.2. We will focus on the case of an infinite state space. Let us first state a lemma that formalizes the above observations: a large boundary for S means a large conditional variance for S̃, and that in turn means a definite decrease in E √π(S̃) compared to √π(S). We omit the proof, because it is slightly technical, and we have already explained the main ideas anyway.

Lemma 8.4. Assume 0 < γ ≤ 1/2 and p(x,x) ≥ γ for all x ∈ V. If

    Ψ(S) := 1 − E_S [ √π(S̃) / √π(S) ] ,

then, for each S ⊂ V, we have that

    Ψ(S) ≥ (γ² / (2(1−γ)²)) φ(S)² ,  where φ(S) = Q(S, S^c)/π(S) .
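Lemma 8.4 can be checked exactly on a small example in the same way, again using that S̃ is a level set of a single uniform threshold. Here we take lazy SRW on Z_6, where the holding probability 1/2 lets us take γ = 1/2, so the constant in the lemma is 1/2 (the example and parameters are our own choices):

```python
from itertools import combinations

# Check Lemma 8.4, Psi(S) >= gamma^2/(2(1-gamma)^2) * phi(S)^2, for lazy SRW
# on the cycle Z_6.  Holding probability 1/2 gives gamma = 1/2, so the
# constant is 1/2.  Psi(S) = 1 - E[sqrt(pi(S~))]/sqrt(pi(S)) is computed
# exactly by integrating over the single uniform threshold U.

N = 6
pi = [1.0 / N] * N

def p(x, y):
    if x == y:
        return 0.5
    return 0.25 if (x - y) % N in (1, N - 1) else 0.0

def psi_and_phi(S):
    thr = {y: sum(pi[x] * p(x, y) for x in S) / pi[y] for y in range(N)}
    cuts = sorted(set(thr.values()) | {0.0, 1.0})
    e_sqrt = 0.0
    for lo, hi in zip(cuts, cuts[1:]):
        A = [y for y in range(N) if thr[y] >= hi]
        e_sqrt += (hi - lo) * sum(pi[y] for y in A) ** 0.5
    piS = sum(pi[y] for y in S)
    psi = 1.0 - e_sqrt / piS ** 0.5
    phi = sum(pi[x] * p(x, y) for x in S for y in range(N) if y not in S) / piS
    return psi, phi

for k in range(1, N):
    for S in combinations(range(N), k):
        psi, phi = psi_and_phi(set(S))
        assert psi >= 0.5 * phi ** 2 - 1e-12
print("Lemma 8.4 bound holds for every proper nonempty subset of Z_6")
```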


As promised, an example of the usefulness of this method is that we can immediately get exponential decay for non-amenable chains. If φ(r) ≥ h > 0 for all r, then, defining Ψ(r) := inf{ Ψ(S) : π(S) ≤ r }, Lemma 8.4 gives Ψ(r) ≥ h′ > 0, i.e., E[ √π(S̃) | π(S) ] ≤ (1 − h′) √π(S). So, by iterating from S_0 = {x} to S_n, we obtain E_{x}[ √π(S_n) ] ≤ exp(−Cn) √π(x). This implies that

    p_n(x,y) = P_{x}[ y ∈ S_n ] π(y)/π(x) ≤ E_{x}[ √π(S_n) ] √π(y)/π(x) ≤ exp(−Cn) √(π(y)/π(x)) ,

where the middle inequality comes from the ridiculous bound P_{x}[ y ∈ S_n ] ≤ P_{x}[ π(S_n) ≥ π(y) ] and Markov's inequality.

However, for the general case we will have to work harder. If, at some point, π(S_n) happens to be big (which is not very likely for a martingale at any given time n, but still may happen at some random times), then our bound from Lemma 8.4 becomes weak, and we lose control. Recall that usually (e.g., on the lattice Z^d) the boundary-to-volume ratio is smaller for larger sets, so this is a real danger: it would be much better to have a stronger downward push for larger sets. A more serious issue is the non-uniformity of the isoperimetric bound. The replacement for the last ridiculous bound will be (8.12) below.

For this reason, we introduce some kind of dual process. Consider a non-negative function h on the state space X = 2^V of the evolving set process, which equals 1 on some Ω ⊂ X, equals 0 on another subset Λ ⊂ X, and is harmonic on X \ (Ω ∪ Λ). Denote the evolving set transition kernel by K(S, A) = P_S[ S̃ = A ]. Then we can take Doob's h-transform:

    K̂(S, A) := h(A) K(S, A) / h(S) ,

which is indeed a transition kernel: by the harmonicity of h, Σ_A K̂(S, A) = Σ_A h(A) K(S, A)/h(S) = 1. The usual reason for introducing the h-transform is that the process given by the kernel K̂ is exactly the original process conditioned to hit Ω before Λ (in particular, it cannot be started in Λ, and once it reaches Ω it is killed).

⊲ Exercise 8.8. Show that SRW on Z, conditioned to hit some n ≥ 1 before 0, becomes a biased random walk with transition probabilities p(r, r ± 1) = (r ± 1)/(2r) for r = 1, . . . , n − 1.

For the evolving set process, we will apply the Doob transform with h(S) := π(S). Recall that π(S_n) is a martingale on X \ {∅} exactly because π(S) is harmonic for S ≠ ∅. (We have harmonicity at S = V because that is an absorbing state of the original evolving sets chain.)

·) and π (y ) = 1 then χ(pn (x. π ) = y {d. ∞) −→ [0.lem-tech2} variables with Z0 ﬁxed.10) that we have now for Zn in the K -process actually leads to a fast decay in E. K is the process conditioned on never becoming empty. More generally. δ {l. y ) π (y ) π (y ) π (y ) (8.5.chi2dist} π (y ) µ(y )2 . Let us state another technical lemma (without proof) that formalizes how a control like (8.1 (Non-symmetric distance between two measures). since we will not use this fact.11) {e. ES f (Sn ) = ES In the hat mean: E Zn+1 Zn Sn = E π (Sn+1 ) π (Sn ) Sn < 1 − Ψ(π (Sn )). When µ(y ) = pn (x. because. Lemma 8. We now deﬁne Zn := √ 1 . (8.5). We deﬁne χ as: χ2 (µ. This sounds like black magic. for the inﬁnite state space evolving set process. We have to ﬁnd now a good way to deduce the smallness of return probabilities from the smallness of (8. ·). y ) = ǫ for y ∈ An . where ← p n2 (x.at S = V because that is an absorbing state of the original evolving sets chain. 1) = 1 ǫ y |An | = then χ = ǫ which is a reasonable measure of similarity. Suppose that E [Zn+1 |Sn ] ≤ Zn (1 − f0 (Zn )) ∀n. π (S n ) π (S ) f (Sn ) (8. ·). pn1 +n2 (x.e-psi} with the last inequality provided by the deﬁnition of Ψ(r) after Lemma 8. It is a version of the technique of taking a “strong stationary dual” in [DiaF90]. as it is easy to check from the deﬁnitions. Then π (S n ) 1 E {x } π (x) π (Sn ) = E{x} Zn . We do not prove this carefully. y ) = π (y ) π (x) p(x. If for some δ . with f (Z ) = f0 (Z/2)/2. m) ≤ χ(pn1 (x. pn1 (x. then EZn ≤ δ .4. Assume f0 : (0. but we will not discuss ﬁnite chains here in the proof. When Zn is large then Sn is small and Ψ is good enough to get from (8.10) {e. ·). y ) pn2 (x. 1] is increasing and Zn > 0 is a sequence of random n≥ Z0 dz zf (z ) . Deﬁnition 8.9).) So. but is nevertheless correct.ESEZ} for any function f .9) {e.10) a signiﬁcant decrease for Zn in the new process. y ) p n2 (x. we had the version (7. y ) stands for the stationary reversal. 
m) χ(← − using Cauchy-Schwartz. y ) y ← − pn1 (x. If pn (x. y ) 1 = π (z ) π (z ) = y pn (x. π (y )2 For ﬁnite chains.p-chi} − p n2 (z. y )2 . 81 .

Poisson boundary {s. we ﬁnish the proof of Theorem 8. P n δy ) ≤ δx = P n δy π (y ) . we pn+n π (z ) can retrieve the number of steps n by the necessary number of steps in Lemma 8.5 with f0 (Zn ) = Ψ(π (Sn )).offdiag} π (y ) n ρ .p-esp} π (Sn ) = E{x} [Zn ].11) and (8.y {l. modiﬁed by the ≤ √ √ ǫ ǫ.1. We 9 9.5.12).. y ) p2n (x.12) {e. we obtain appropriate change of variables. Thus. Given a reversible Markov chain with a stationary measure π (x). then: a) pn (x. entropy. x). (8. using Lemma 8. we can compare pn with m using evolving sets: χ2 (pn (x. π (x)pn (x. Lemma 9. 1 π ( x ) E {x } π (Sn )π (Tn ) . π (x)2 where {Sn } and {Tn } are two independent evolving set processes. We will start by proving part a). x) ≤ sup .Also.1 Speed. ·).e. i. m) = = = π (y ) y P{x} [ y ∈ Sn ] π (y )2 π (x)2 π (y )2 π (y )P{x} [ y ∈ Sn . π (x) P n 82 . π (y ) π (x) x Proof. Recall obtain that E{x} [Zn ] is bounded by δ .2 with the appropriate number of steps n. π (x) p2n (x.4. y ∈ Tn ] 2 1 π (x)2 y 1 E{x} [ π (Sn ∩ Tn ) ].Speed} We have seen many results about the return probabilities pn (x. We now say a few words about oﬀ-diagonal behaviour. Then. about the on-diagonal heat kernel decay. Thus 1 E {x } π (x) π (Sn ) . m) ≤ which is better than what we had before. 1 1 E{x} π (Sn ∩ Tn ) ≤ E {x } π (x)2 π (x)2 χ(pn (x. Using the relation between Ψ and φ from Lemma 8.speedetc} Speed of random walks {ss. √ By setting δ = ǫ and using the inequalities (8. Liouville property. applying Cauchy-Schwartz. y ) ≤ b) sup x. y ) = (δx . ·).

there is an edge between x and y if and only if p(x. P n δy )1/2 = (δx . P n δy ) ≤ (P n δx . Then.speed} dist(o. as usually. Given a reversible Markov chain. y ) ≤ sup π (y ) π (x)1/2 π (y )1/2 x. Varopoulos’ result was a bit weaker. P n δx )1/2 (P n δy . we use the fact that P is self-adjoint: π (x)p2n (x. that π (o) > 0. pn (x.y p2n (x.y p2n (x. see [Pey08]. x) = sup . Var85b]). Now divide both sides by π (y )π (x) and take the supremum over all x and y : sup x.Dividing by π (x). and that ρ < 1. y ) ≤ ρn For part b). Xn ) > 0 a.3. Then. 83 . x)1/2 p2n (y. P 2n δy ) = (P n δx .carne} Theorem 9. This form is due to Carne. y ) ≤ 2 π (y ) n dist(x. P 2n δx )(δy . The inequality here. y )2 ρ exp − π (x) 2n . y ) = (δx . x)π (y )p2n (y. Proposition 9. y )) 1/2 . n Proof. y ) > 0. pn (x. There is now also a probabilistic proof. and a reversible Markov chain on its vertex set. π (x) x {t. Choose α > 0 small enough that Dα < 1/ρ.. P 2n δy ) 1/2 π (y ) . Xn ) ≤ αn] = pn (o. lim inf n→∞ {p. with not necessarily nearest-neighbour transitions.s. the Reader is strongly encouraged to read it in [LyPer10]. is from Cauchy-Schwarz.e. Consider a graph G with degrees bounded by D. where the distance is measured in the graph metric given by the graph of the Markov chain. in the graph metric. x) x∈Bαn (o) ≤ |Bαn (o)|Cρn ≤ Dαn Cρn ≤ exp(−cα n) . y )1/2 p2n (x. Assume that the reversible measure π (x) is bounded by B . with a miraculous proof using Chebyshev polynomials. i.2 (Carne-Varopoulos [Car85. π (x) = (π (x)p2n (x. Po [dist(o. with a more complicated proof.

which gives us that limn→∞ an /n exists. plus the distance from the origin of the farthest lamp on. The second inequality holds because of the choice of α. For a random walk on a group (with not necessarily nearest neighbour jumps in the Cayley graph that gives the metric). Xn ).4. Xn ) + E dist(Xn .1 and the fact that π (x) is bounded. Is it true that positive speed never depends on the ﬁnite symmetric generating set? The case for Z2 ≀ Zd is known [Ers04a]. φn ). and moving to one of the 2d neighbours with probability 1/(4d) each. we will consider the walk given by staying put with probability 1/4. For the lamplighter groups Z2 ≀ Zd . then it is easy to give some crude bounds on dn : it is at least | supp φn |. the following bounds hold for dn = dist(o. where Mn is the position of the marker (the lamplighter) and φn is the conﬁguration of the lamps. we will prove here the statement only for some speciﬁc generating set. d = 2 has E[dn ] ≍ √ n n log n 3.speedinv} 84 .LLspeed} Theorem 9. Xn ) ≤ αn only ﬁnitely many times. We have seen that non-amenability implies positive speed. d ≥ 3 has E[dn ] ≍ n.5. Question 9. n n→∞ Proof. Xn ) + E dist(o. case of the following theorem shows. n {l. We can see that this sequence is subadditive: E dist(o. In each case. Then. d = 1 has E[dn ] ≍ 2. the number of lamps on. Xn+m ) ≤ E dist(o. Xn+m ) ≤ E dist(o. dist(o. In each case.6. switching the lamp at the present location with probability 1/4. So we may apply Fekete’s lemma. Xn ): 1. Xn ) > α. The converse is false.Such a ﬁnite constant C exists because of part a) of Lemma 9. Now we know that n Po [dist(o. Xn ) exists. as the d ≥ 3 {t.speedexists} Lemma 9. So: lim inf n→∞ dist(o. First we deﬁne the sequence (an )∞ n=1 by an = E dist(o. Xn ) ≤ αn] < ∞. Xm ) . and it is at {q. if Xn = (Mn . Proof. Thus by Borel-Cantelli. lim E dist(o.

[Figure 9.1: The generators for the walk on Z2 ≀ Zd.] {f.LLd}

In the d = 1 case, the crude bounds read

|min supp φ_n| ∨ |max supp φ_n| ≤ d_n ≤ |M_n| + 3 |min supp φ_n| + 3 |max supp φ_n| ,   (9.1) {e.LLd1}

using that the number of lamps on is at most |min supp φ_n| + |max supp φ_n| + 1.

The expected size of supp φ_n can be estimated based on the expected size of the range R_n of the movement of the lamplighter, so estimating E|R_n| will certainly be important. More precisely, consider the following exercise:

⊲ Exercise 9.1. {ex.lastvisit} Show that P[ after the last visit to x up to time n, the lamp at x is on | x ∈ R_n ] ≥ 1/4.

Therefore,

E|R_n| / 4 ≤ E|supp φ_n| ≤ E|R_n| .   (9.2) {e.litrange}

Now, in the d = 1 case, by the Central Limit Theorem, P[ |M_n| > ǫ√n ] > ǫ for some absolute constant ǫ > 0. Then, by Exercise 9.1, P[ supp φ_n ⊄ [−ǫ√n, ǫ√n] ] > ǫ/4, and hence, by the left side of (9.1),

E[d_n] ≥ E[ |min supp φ_n| ∨ |max supp φ_n| ] ≥ P[ supp φ_n ⊄ [−ǫ√n, ǫ√n] ] ǫ√n ≥ ǫ²√n / 4 ,

which is a suitable lower bound.

For an upper bound, let M_n* = max_{k≤n} |M_k|. Clearly, |min supp φ_n| ∨ |max supp φ_n| ≤ M_n*. From the CLT, it looks likely that not only M_n, but even M_n* cannot be much larger than √n; but if we do not want to lose log n-like factors, we have to be slightly clever. In fact, we claim that

P[ M_n* ≥ t ] ≤ 4 P[ |M_n| ≥ t ] .   (9.3) {e.maxreflection}

This follows from a version of the reflection principle. If T_t is the first time for M_n to hit t ∈ Z+, then, by the strong Markov property and the symmetry of the walk restarted from t,

P[ T_t ≤ n, M_n ≥ t ] ≥ (1/2) P[ T_t ≤ n ] ;

we don't have exact equality because of the possibility of M_n = t. Since {M_n ≥ t} ⊂ {T_t ≤ n}, this inequality can be rewritten as

2 P[ M_n ≥ t ] ≥ P[ T_t ≤ n ] = P[ max_{k≤n} M_k ≥ t ] .

Since P[ M_n* ≥ t ] ≤ 2 P[ max_{k≤n} M_k ≥ t ], we have proved (9.3).
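The one-sided reflection inequality P[ max_{k≤n} S_k ≥ t ] ≤ 2 P[ S_n ≥ t ] behind (9.3) can be verified exactly for small n, by computing the joint law of the endpoint and the running maximum with rational arithmetic (an illustrative Python sketch, not part of the notes):

```python
# Exact check of the reflection-principle bound P[max <= n S_k >= t] <= 2 P[S_n >= t]
# for SRW on Z, via dynamic programming over (endpoint, running maximum).
from fractions import Fraction

def max_and_endpoint(n):
    # joint law of (S_n, max_{k<=n} S_k), exactly, with Fractions
    dist = {(0, 0): Fraction(1)}
    for _ in range(n):
        new = {}
        for (x, m), p in dist.items():
            for dx in (-1, 1):
                y = x + dx
                key = (y, max(m, y))
                new[key] = new.get(key, Fraction(0)) + p / 2
        dist = new
    return dist

n = 16
dist = max_and_endpoint(n)
for t in range(1, n + 1):
    p_max = sum(p for (x, m), p in dist.items() if m >= t)
    p_end = sum(p for (x, m), p in dist.items() if x >= t)
    assert p_max <= 2 * p_end   # one-sided version of (9.3), exactly
```

The classical identity behind this is P[ max ≥ t ] = 2 P[ S_n > t ] + P[ S_n = t ]; the two-sided bound with constant 4 used in (9.3) follows by symmetry.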

Summing up (9.3) over t, we get E[M_n*] ≤ 2 E[|M_n|] ≤ C√n for some large C < ∞. Thus, by (9.1), we see that E[d_n] ≤ C′√n, and we are done with the case d = 1.

For the case d ≥ 3, by transience, we have

lim_{n→∞} E|R_n| / n = q := P_o[ never return to o ] > 0 .

⊲ Exercise 9.2. Prove the above statement, for simple random walk on any transitive graph.

From this exercise and (9.2), by (9.1) we can deduce that

n ≥ E[d_n] ≥ E|supp φ_n| ≥ E|R_n| / 4 ≥ q n / 4 ,

thus proving the case d ≥ 3.

Now consider the case d = 2. A classical theorem of Dvoretzky and Erdős says that

E|R_n| ≍ n / log n .   (9.4) {e.2dRn}

This comes from the fact that Σ_{k=1}^n p_k(0,0) is exactly the expected number of visits to 0 by time n, and that p_k(0,0) ≍ 1/k for simple random walk on Z² (a special case of the general fact that p_k(0,0) ≍ k^{−d/2} on Z^d). This suggests that once a point is visited, it is typically visited roughly log n times in expectation, and thus the walk visited roughly n/log n different points.

⊲ Exercise 9.3. Prove (9.4), using Σ_{k=1}^n p_k(0,0) ≍ log n.

On the other hand, R_n is a connected subset of Z² containing the origin, so we can take a spanning tree in it, and wherever M_n ∈ R_n is, the lamplighter can go around the tree to switch off all the lamps and return to the origin. This can be done in less than 3|R_n| steps, while the switches take at most |R_n| steps, hence d_n ≤ 4|R_n|. Since, by (9.1) and (9.2), we also have E|R_n|/4 ≤ E d_n, altogether E d_n ≍ E|R_n|, so the d = 2 case follows from (9.4), and we are done.

The precise asymptotics in the above statement is also known. And, of course, there are also finer questions about the rate of escape than just being linear or sublinear, which will be addressed in later sections. The distinction between positive and zero speed will be a central topic of the present section.

9.2 The Liouville property for harmonic functions {ss.Liouville}

As we saw in the lecture on evolving sets, harmonic functions with respect to a Markov chain are intimately related to martingales.

Definition 9.1. {d.suMG} A submartingale is a sequence (M_n)_{n=1}^∞ such that the following two conditions hold:

∀n  E|M_n| < ∞ ;
∀n  E[ M_{n+1} | M_1, ..., M_n ] ≥ M_n .
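The fact that Σ_k p_k(0,0) ≍ log n on Z² can be checked numerically (an illustrative Python sketch, not part of the notes), using that SRW on Z² decomposes into two independent SRWs along the diagonals, so p_{2j}(0,0) = (C(2j, j)/4^j)².

```python
# The summed return probabilities for SRW on Z^2: p_{2j}(0,0) = (C(2j,j)/4^j)^2,
# since the walk is two independent SRWs along the diagonals; partial sums
# grow like (1/pi) log n, matching p_k(0,0) ~ 1/(pi k).
import math

a = 1.0        # a_j = C(2j, j) / 4^j, starting from a_0 = 1
s = 1.0        # running sum of p_k(0,0) over k = 0, 2, 4, ...
sums = {}
for j in range(1, 100001):
    a *= (2 * j - 1) / (2 * j)   # recurrence for central binomial / 4^j
    s += a * a
    if j in (1000, 100000):
        sums[j] = s

diff = sums[100000] - sums[1000]
print(diff, math.log(100) / math.pi)   # both close to 1.466
```

Between n = 10³ and n = 10⁵ the sum grows by almost exactly (1/π) log 100, which is the logarithmic growth used in the d = 2 argument above.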

A supermartingale is a similar sequence, except that the second condition is replaced by ∀n E[ M_{n+1} | M_1, ..., M_n ] ≤ M_n.

{t.MGconverge} Theorem 9.7 (Martingale Convergence Theorem). Given a submartingale (M_n)_{n=1}^∞ and some constant B with M_n ≤ B almost surely, there exists a random variable M_∞ with M_n → M_∞ a.s. The theorem also holds with a condition relaxing boundedness; for a proof, see [Dur96, Theorem 4.2.10].

{t.nobddharm} Theorem 9.8. A recurrent Markov chain has no nonconstant bounded harmonic functions.

Proof. Suppose f is a bounded harmonic function, say |f| < B. If f is nonconstant, then there are states x and y such that f(x) ≠ f(y). Since f is harmonic, M_n = f(X_n) is a bounded martingale. The walk is recurrent, so it returns to these states infinitely often. So ∀N ∃m, n > N such that M_m = f(x) and M_n = f(y). So M_n cannot converge to any value, which contradicts Theorem 9.7. So no such f can exist.

{t.LiouProp} Theorem 9.9. For any Markov chain, if for all x, y ∈ V there is a coupling (X_n, Y_n) of random walks, starting from (x, y), such that P[ X_n ≠ Y_n ] → 0, then the chain has the Liouville property: every bounded harmonic function is constant.

Proof. Suppose f is a bounded harmonic function, say |f| < B. Then f(X_n) and f(Y_n) are both martingales, with E[f(X_n)] = f(x) and E[f(Y_n)] = f(y). Therefore,

∀n  |f(x) − f(y)| = |E f(X_n) − E f(Y_n)| ≤ E|f(X_n) − f(Y_n)| ≤ P[ X_n ≠ Y_n ] 2B → 0 ,

so f(x) = f(y), and f must be constant.

{c.Blackwell} Corollary 9.10 (Blackwell 1955). Z^d has the Liouville property.

Proof. Consider the lazy random walk on Z^d given by flipping a d-sided coin to determine which coordinate to move in, and then using the lazy simple random walk on Z in that coordinate. Clearly, f is harmonic with respect to P if and only if it is harmonic with respect to the lazy version (I + P)/2.

Now consider the following coupling: X_n and Y_n always move in the same coordinate as one another. If their distance in that coordinate is zero, then they move (or remain still) together in that coordinate. If not, then when X moves, Y stays still, and when X stays still, Y moves. Each of X and Y, when considered independently, is still performing the lazy random walk described. Thus, considering one coordinate at a time: if X_n and Y_n have not yet come together in that coordinate, then whenever that coordinate is chosen, the distance goes up by one with probability 1/2 and down by one with probability 1/2.

The distance in each coordinate is thus equivalent to a simple random walk on Z, which is recurrent, so with probability 1, X_n and Y_n will eventually have distance zero in that coordinate, and then will remain together in it for the rest of the coupling. This happens in each coordinate, so with probability 1, ∃N such that X_n = Y_n ∀n ≥ N, i.e., P[ X_n ≠ Y_n ] → 0. So, by Theorem 9.9, this chain has the Liouville property.

In fact, the classical Choquet-Deny theorem (1960) says that any measure on any Abelian group has the Liouville property.

⊲ Exercise 9.4. {ex.sublin} * Show that any harmonic function f on Z^d with sublinear growth, i.e., satisfying lim_{‖x‖₂→∞} f(x)/‖x‖₂ = 0, must be constant.

⊲ Exercise 9.5. Show that any random walk on Z^d with symmetric bounded jumps has the Liouville property.

We now give an example that does not have the Liouville property: the d-regular tree T_d with d ≥ 3.

[Figure 9.2: Portion of a d-regular tree (d = 3).] {f.dreg}

Consider Figure 9.2, and let A be the subtree hanging off one fixed neighbour of the root o. By transience and by any vertex being a cutpoint of the tree, lim_{n→∞} 1_{x_n ∈ A} exists a.s., so the following makes sense: let f(y) = P_y[ {x_n} ends up in A ]. It is obviously a bounded harmonic function. Let p = P_x[ never hit o ] ∈ (0, 1) for a neighbour x of o. If x ∉ A, then f(x) = (1 − p) f(o) < f(o). This shows that f is nonconstant.

⊲ Exercise 9.6. Show that the lamplighter group Z2 ≀ Zd with d ≤ 2 has the Liouville property.

⊲ Exercise 9.7. Show that the lamplighter group with d ≥ 3 does not have the Liouville property.

A hint for these two exercises is that positive speed and the existence of non-trivial bounded harmonic functions will turn out to be intimately related issues, so you may look at the proof of Theorem 9.12 below.

9.3 Entropy, and the main equivalence theorem {ss.SpEnt}

As we have seen in the previous two sections, on the lamplighter groups Z2 ≀ Zd there is a strong relationship between positive speed and the existence of non-trivial bounded harmonic functions. The main result of the current section is that this equivalence holds on all groups.
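The coupling in the proof of Blackwell's corollary above can be checked numerically (an illustrative Python sketch, not from the notes): once time-changed to the steps in which a fixed uncoupled coordinate is chosen, the coordinate-difference performs SRW on Z absorbed at 0, and P[ not yet coupled by time n ] → 0.

```python
# The coordinate-difference of the coupled lazy walks performs SRW on Z,
# absorbed at 0 (after a time change); we check P[T_0 > n] -> 0 exactly, by DP.
def uncoupled_prob(k, n):
    # difference starts at k > 0; each step +-1 w.p. 1/2, absorbed at 0
    dist = {k: 1.0}
    for _ in range(n):
        new = {0: dist.get(0, 0.0)}      # 0 is absorbing
        for x, p in dist.items():
            if x == 0:
                continue
            for dx in (-1, 1):
                y = abs(x + dx)          # the distance chain is symmetric
                new[y] = new.get(y, 0.0) + p / 2
        dist = new
    return 1.0 - dist.get(0, 0.0)

probs = [uncoupled_prob(3, n) for n in (10, 100, 1000)]
print(probs)   # decays like const / sqrt(n), in line with recurrence of Z
```

The decay is only polynomial, of order 1/√n, but that is all the Liouville argument needs: P[ X_n ≠ Y_n ] → 0 in each coordinate.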

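For the tree example above, the escape probability p = P_x[ never hit o ] can be made explicit: the distance of SRW on T_d from the root is a birth-death chain moving up with probability (d−1)/d and down with probability 1/d, and gambler's ruin gives p = 1 − 1/(d−1), which is 1/2 for d = 3. A short exact verification (a Python sketch, not from the notes):

```python
# SRW on the d-regular tree: its distance from the root is a birth-death chain,
# up w.p. (d-1)/d, down w.p. 1/d. Gambler's ruin: with r = 1/(d-1),
#   P_k[hit N before 0] = (1 - r^k) / (1 - r^N),
# so P_1[never hit the root] -> 1 - 1/(d-1)  (= 1/2 for d = 3).
from fractions import Fraction

d, N = 3, 12
r = Fraction(1, d - 1)
h = [(1 - r**k) / (1 - r**N) for k in range(N + 1)]   # h[k] = P_k[hit N before 0]

up, down = Fraction(d - 1, d), Fraction(1, d)
for k in range(1, N):
    assert h[k] == up * h[k + 1] + down * h[k - 1]    # h is harmonic for the chain
assert h[0] == 0 and h[N] == 1
print(float(h[1]))   # 0.500... , approaching p = 1/2 as N grows
```

The exact harmonicity check with Fractions confirms the gambler's-ruin formula, and letting N → ∞ gives the escape probability p used in the nonconstant harmonic function f above.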
We start by discussing entropy. A quote from Claude Shannon (from Wikipedia): "My greatest concern was what to call it. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'"

Definition 9.2. The Shannon entropy H(µ) of a probability measure µ is defined by

H(µ) := − Σ_x µ(x) log µ(x) .

Similarly, for a random variable X with values in a countable set, H(X) is the entropy of the measure µ(x) = P[X = x]. Note that H(µ) ≤ log |supp µ|, by Jensen's inequality, with equality for the uniform measure. A rough interpretation of H(µ) is the number of bits needed to encode the amount of randomness in µ, or in other words, the amount of information contained in a random sample from µ. (The interpretation with bits works the best, of course, if we are using log₂.)

Definition 9.3. Let µ be a finitely supported probability measure on a group Γ such that supp µ generates Γ. Then we get a nearest neighbour random walk on the directed right Cayley graph of Γ given by the generating set supp µ:

P[ x_{n+1} = h | x_n = g ] = µ(g^{-1} h) ,  g, h ∈ Γ .

All our previous random walks on groups were examples of such walks. This walk is reversible iff µ is symmetric, i.e., µ(g^{-1}) = µ(g) for all g ∈ Γ. The law of X_n is the n-fold convolution µ_n = µ^{*n}, where (µ ∗ ν)(g) := Σ_{h ∈ Γ} µ(h) ν(h^{-1} g). The asymptotic entropy h(µ) of the random walk generated by µ is defined by

h(µ) := lim_{n→∞} H(X_n)/n = lim_{n→∞} H(µ_n)/n .

⊲ Exercise 9.8. Show that h(µ) exists for any µ on a group Γ. As a corollary to the fact H(µ_n) ≤ log |supp µ_n|, show that if Γ has sub-exponential volume growth (in the directed Cayley graph given by the not necessarily symmetric supp µ), then h(µ) = 0.

Here is a fundamental theorem that helps comprehend what asymptotic entropy means:

{t.SMB} Theorem 9.11 (Shannon-McMillan-Breiman; version by Kaimanovich-Vershik). For almost every random walk trajectory {X_n},

lim_{n→∞} − log µ_n(X_n) / n = h(µ) .
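The contrast between zero and positive asymptotic entropy can be computed exactly in two basic examples (a Python sketch, not part of the notes; the sphere-decomposition used for the tree is a standard symmetry argument): for SRW on Z, H(µ_n) grows only logarithmically, while for SRW on the 4-regular tree T4 (the Cayley graph of the free group F2) it grows linearly, with rate h = (1/2) log 3.

```python
# H(X_n)/n for SRW on the 4-regular tree T4: |X_n| is a birth-death chain
# (+1 w.p. 3/4, -1 w.p. 1/4, and from 0 always to 1), and given |X_n| = k the
# walk is uniform on the sphere of 4 * 3^(k-1) vertices, by symmetry. Hence
#   H(X_n) = H(|X_n|) + E[ log #sphere(|X_n|) ].
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def tree_entropy(n):
    dist = [1.0] + [0.0] * n                 # law of |X_0|
    for _ in range(n):
        new = [0.0] * (n + 1)
        new[1] += dist[0]                    # from the root, always to distance 1
        for k in range(1, n):
            new[k + 1] += 0.75 * dist[k]
            new[k - 1] += 0.25 * dist[k]
        dist = new
    sphere = lambda k: 1 if k == 0 else 4 * 3 ** (k - 1)
    return entropy(dist) + sum(p * math.log(sphere(k)) for k, p in enumerate(dist))

n = 400
print(tree_entropy(n) / n)   # close to h = (1/2) log 3 = 0.549...
```

For comparison, the exact entropy of the n-step SRW distribution on Z is of order (1/2) log n, so H(µ_n)/n → 0 there, matching the sub-exponential-growth corollary of Exercise 9.8.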

To see why this theorem should be expected to hold, note that the expectation of − log µ_n(X_n) is exactly H(µ_n), hence the sequence converges in expectation to h(µ) by definition. In words, this theorem is an analogue of the Law of Large Numbers. For the general case, the proof goes through more ergodic/information theoretic notions, like entropy and tail-σ-fields.

⊲ Exercise 9.9. Show that h(µ) = 0 iff for every ǫ > 0 there exists a sequence {A_n} with µ_n(A_n) > 1 − ǫ and log |A_n| = o(n). In words, zero entropy means that the random walk is mostly confined to a sub-exponentially growing set.

We now state a central theorem of the theory of random walks on groups.

{t.SEHI} Theorem 9.12. For any symmetric finitely supported random walk on a group, the following are equivalent:

(S) positive speed σ(µ) > 0;
(E) positive asymptotic entropy h(µ) > 0;
(H) existence of non-constant bounded harmonic functions (the non-Liouville property);
(I) non-triviality of the invariant σ-field (Poisson boundary).

We will define the invariant σ-field (also called the Poisson boundary) only in the next section, so let us give here an intuitive meaning: it is the set of different places where infinite random walk trajectories can escape, with the natural measure induced by the random walk, called the harmonic measure at infinity. A great introduction to the Poisson boundary is the seminal paper [KaiV83].

Here are some one-sentence summaries of the proofs. The meaning of (S) ⟺ (E) is that positive speed for a reversible walk is equivalent to the walk being very much spread out; this will be proved in this section. (E) ⟺ (I) will be proved in Section 9.5, by expressing the asymptotic entropy as the amount of information gained about the first step by knowing the limit of the random walk trajectory; hence positive entropy means that non-trivial information is contained in the boundary. This equivalence holds also for non-symmetric measures µ, see [KaiV83]. The equivalence (I) ⟺ (H) holds for any Markov chain and will be proved in Section 9.4: a bounded harmonic function evaluated along a random walk trajectory started at some vertex x is a bounded martingale, which has an almost sure limit whose expectation is the value at x; so, from a bounded harmonic function we can construct a function "at infinity", and vice versa.

Note that the reversibility of the walk (in other words, the symmetry of µ) is important in (S) ⟺ (E): consider, e.g., biased random walk on Z, which has positive speed but zero entropy. It is also important for the walk to be group-invariant, as will be shown by an example of a non-amenable bounded degree graph that has positive speed and entropy, but has the Liouville property and trivial Poisson boundary.

⊲ Exercise 9.10. *** Find a direct probabilistic proof of (S) ⟺ (H) for symmetric bounded walks on any group. Beyond the lamplighter groups discussed in the previous sections, a transparent probabilistic reason is currently not known.

Finally, the following is open: is there a quantitative relationship between the speed and the amount of bounded harmonic functions, e.g., their von Neumann dimension, defined later?

First, we prove (S) ⟺ (E) by giving the following quantitative version:

{t.funda} Theorem 9.13 (Varopoulos [Var85b] and Vershik [Ver00]). For any symmetric finitely supported measure µ on a group,

σ(µ)² / 2 ≤ h(µ) ≤ ν(µ) σ(µ) ,

where σ(µ) is the speed of the walk and ν(µ) := lim_n (log |B_n|)/n is the exponential rate of volume growth of the graph, both in the generating set given by supp µ. (The latter limit exists because |B_{n+m}| ≤ |B_n| · |B_m|.)

⊲ Exercise 9.11. Prove the theorem: for the lower bound, use Carne-Varopoulos (Theorem 9.2) and Shannon-McMillan-Breiman (Theorem 9.11); for the upper bound, use that entropy on a finite set is maximized by the uniform measure.

The right hand side inequality is called "the fundamental inequality" by Vershik.

⊲ Exercise 9.12 (Vershik). {ex.VVSE} *** Does there exist, for any finitely generated group, a finitely supported µ with h(µ) = ν(µ) σ(µ)? The trouble with a strict inequality is that then the random walk measure is far from uniform on the sphere where the walk is typically located, hence sampling from the group using the random walk is not a good idea.

As we mentioned in Question 9.5, it is not known if the positivity of speed on a group is independent of the Cayley graph, or, more generally, invariant under quasi-isometries. On general graphs, the speed does not necessarily exist, but the Liouville property can clearly be defined. However, for the class of bounded degree graphs it is known that the Liouville property is not quasi-isometry invariant [LyT87, Ben91]. A simple example is given in [BenS96a], which we now describe.

The set {0,1}* of finite 0-1-words can naturally be viewed as an infinite binary tree Y. Let A be the subset of words in which the ratio of 1's among the letters is more than 2/3. Now consider the lattice Z⁴, take a bijection between the vertices along the x-axis of Z⁴ and the vertices in A, and put an edge between each pair of vertices given by this bijection. This graph will be G. It easily follows from the law of large numbers that SRW on Y spends only finite time in A, and hence SRW on G, started from any y ∈ Y, will ever visit the Z⁴ part only with a probability bounded away from 1. Whenever the walk visits Z⁴, with positive probability it will eventually stay in Z⁴, by the transience of the Z³ direction orthogonal to the x-axis. So, the function

v ↦ P_v[ the walk ends up in Z⁴ ]

for v ∈ G is a non-trivial bounded harmonic function. From Proposition 9.3 we also know that SRW on G has positive speed.

Now, let Y′ be the tree where each edge (w, w0) of Y is replaced by a path of length k, for some large but fixed k ∈ N, and let G′ be the same join of Y′ and Z⁴ as before. If k is large enough, then SRW on Y′ will visit A infinitely often, hence it will enter Z⁴ infinitely often, and hence it will almost surely end up in Z⁴. By the coupling results Theorem 9.9 and Corollary 9.10, this means that G′ has the Liouville property. But G and G′ are obviously quasi-isometric to each other.

We will see a characterization of amenability using the Poisson boundary in Theorem 9.22 below. Nevertheless, it is worth comparing non-amenability and non-trivial Poisson boundary already now: the first means that µ_n(1) decays exponentially (Theorem 7.3), while the second means that µ_n(X_n) decays exponentially (by Theorem 9.11). The importance of the lamplighter group examples was pointed out first by [KaiV83]: Z2 ≀ Zd shows that exponential volume growth is not enough for a non-trivial boundary (d ≤ 2), and that non-amenability is not needed (d ≥ 3).
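As a companion to Theorem 9.13 and Exercise 9.12, one can plug in the standard known values for SRW on the d-regular tree T_d (a Python sketch, not part of the notes; the formulas σ = (d−2)/d, ν = log(d−1), h = ((d−2)/d) log(d−1) are the classical ones for these trees): the fundamental inequality then holds with equality on the right, so trees realize the case h = νσ asked about in Vershik's question.

```python
# Check the fundamental inequality  sigma^2/2 <= h <= nu*sigma  for SRW on
# d-regular trees, using the known values of speed, growth, and entropy.
import math

for d in (3, 4, 5, 10):
    sigma = (d - 2) / d                   # speed
    nu = math.log(d - 1)                  # exponential volume growth rate
    h = ((d - 2) / d) * math.log(d - 1)   # asymptotic entropy (known value)
    assert sigma ** 2 / 2 <= h + 1e-12
    assert h <= nu * sigma + 1e-12
    assert abs(h - nu * sigma) < 1e-12    # equality: h = nu * sigma on trees
print("fundamental inequality checked for trees")
```

Heuristically, the equality reflects that the n-step distribution on the tree is spread almost uniformly over the sphere of radius σn, whose size is about e^{νσn}.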

.Kol01} converges to 1A almost surely. it is quite powerful: for example. Proof. for some large but ﬁxed k ∈ N. hence the conditional probabilities P[ A | Fn ] converge to 0 or 1. yk = xk }.s. let Y ′ be the tree where each edge join of Y ′ and Z4 as before. the left-hand side P[ A ] ∈ {0. yj +1 ) > 0. . A is independent of Fn . . X2 .9 and Theorem 9. measure space Let P be a transition matrix for a Markov chain on a countable state space S . {t. . w0) of Y is replaced by a path of length k . . To see this. and then apply the theorem. and hence will almost surely end up in Z4 . ∀j . If k is large enough. However.for v ∈ G is a non-trivial bounded harmonic function. but it is probabilistically independent of each ﬁnite subset of them).. This is simply a special case of the Martingale Convergence Theorem 9. with the usual Borel σ -ﬁeld (the minimal σ -ﬁeld that contains all the cylinders {{yj }∞ j =0 : y0 = any x ∈ S .15 (Kolmogorov’s 0-1 law). If X1 . for Ω(x) = {yj }∞ j =0 : y0 = x. . and it follows that 92 . Let us start with a basic result of probability theory: {t.7. . . and the natural measure generated by P . Let Ω be the Ω = { yj } ∞ j =0 : P (yj . G and G′ are obviously quasi-isometric to each other. For each n..Levy01} Theorem 9. . then P[ A ] = 0 or 1. its occcurence or failure is determined by the values of these random variables. and hence it will enter Z4 inﬁnitely often. . Also. we now turn to harmonic functions and the invariant sigma-ﬁeld.s. ∀j . then SRW on Y ′ will visit A inﬁnitely often.14 (L´ evy’s Zero-One Law). Theorem 9. so E 1A Fn = P[ A ]. are independent random variables and A is a tail event (i. this means that G′ has the Liouville property. This is called a zero-one law since. As n → ∞.LioPoi} Having proved (S) ⇐⇒ (E).3 that Mn := E[X |Fn ] is a bounded martingale. Proof.e. and G′ is the same 9. Given a ﬁltration Fn ↑ F∞ . we have that: E[ X | Fn ] → E[ X | F∞ ] a. recall from Section 6. it implies the next theorem. by L´ evy’s theorem. 
1}. and X almost surely bounded. it says P[ A | Fn ] → 1A a. yj +1 ) > 0. xk ∈ S ).4 Liouville and Poisson {ss. By the coupling results Theorem 9.10 for the triviality of bounded harmonic functions. for the case X = 1A with A ∈ F∞ . (w. Now. So P[ A ] = 1A almost surely. . for all k and x0 . deﬁne the measure space x0 . P (yj . Although this result might seem obvious.

with the Borel σ-field and the probability measure P_x generated by P: the Markov chain trajectories started from x.

It is also important to notice that, although the jumps of the Markov chain are independent, the y_n's themselves are not independent, hence Kolmogorov's 0-1 law does not say that tail or invariant functions are always trivial. We now define the tail σ-field T and the invariant σ-field I on Ω (and identically for Ω(x) for any x ∈ S). For y, z ∈ Ω let

y ∼_T z ⟺ ∃n ∀m ≥ n : y_m = z_m ,   and   y ∼_I z ⟺ ∃k, n ∀m ≥ n : y_m = z_{m+k} .

Definition 9.4. {d.tailinv} A ∈ T if and only if (i) A is Borel-measurable, and (ii) y ∈ A, y ∼_T z ⟹ z ∈ A, i.e., A is a union of tail equivalence classes. Identically, A ∈ I if and only if (i) A is Borel-measurable, and (ii) y ∈ A, y ∼_I z ⟹ z ∈ A, i.e., A is a union of invariant equivalence classes.

Definition 9.5. A function F : Ω → R is called a tail function if it is T-measurable (in particular, Borel-measurable, and y ∼_T z implies F(y) = F(z)). Similarly, F is called an invariant function if it is I-measurable. Call two tail or invariant functions f, g : Ω → R equivalent if P_x[ f = g ] = 1 for any x ∈ S. Accordingly, we say that T or I is trivial if, for any x ∈ S and any event A in the σ-field, P_x[A] ∈ {0, 1}.

Of course, being an invariant function is a stronger condition than being a tail function. Note that if we consider only the measures P_x, then the distinction between T and I may disappear: a P_x-measure zero set can be added to or subtracted from A to put it into I. A key example of strict inequality is simple random walk on a bipartite graph G(V, E) with parts V = V₁ ∪ V₂: the event A = { y_{2n} ∈ V₁ for all n } is in the tail field but is not invariant; for x ∈ V₁ we have P_x[A] = 1, while for x ∈ V₂ we have P_x[A] = 0.

Triviality can also fail. For instance, if the chain has three states {0, 1, 2} with p(0, i) = 1/3 for all i, where 1 and 2 are absorbing, then B = { y_n = 1 eventually } is an invariant event with P_0[B] = 1/2.
z ∈ Ω let y ∼ z ⇐⇒ ∃n and y ∼ z ⇐⇒ ∃k.4. We now deﬁne the tail σ -ﬁeld T and {d. A ∈ T if and only if (i) A is Borel-measurable. 1.e. F is called an invariant function if it is I -measureable. For y.5.. hence a Px -measure zero set can be subtracted from A to put it into I . Borel measureable and y ∼ z implies F (y ) = F (z )). A is a union of tail equivalence classes. 1}. i. {0.

then we f (y0 .14. For simple random walk on a ﬁnitely generated group.D02} = 2 dTV (µn . . X1 .s.16. . the σ -ﬁelds T and I are not the same up to Px -measure zero sets.) . need to prove that for any x ∈ S the following equality holds Px -a. Theorem 9.17 . X1 .) . y1 .16 relies on the following result.harmonicinv} functions for arbitrary Markov chains on a state space S .18. . with starting point x ∈ Γ. Give an example of a graph G = (V. then the correu(x) = Ex f (X0 . or as n → ∞ . ⊲ Exercise 9. µn − µn+k 1 {t.13. sponding harmonic function u : S −→ R is Proof. and so by the Martingale Convergence Theorem 9. let f : Ω −→ R be invariant and represent an equivalence class. .16 from Theorem 9. .17 (Derriennic’s 0-2 law [Der76]). . . For one direction. This follows easily from the fact that u(Xn ) is a bounded martingale. then the corresponding equivalence class is the one represented by the invariant function f : Ω −→ R deﬁned by f (y0 . E ) and a vertex x ∈ V such that for simple random walk started at x. Let f : Ω −→ R be an invariant function representing an equivalence class.: n→∞ For the other direction. then there are only trivial bounded harmonic functions. although the Reader is invited to think about it a little bit: Theorem 9. . . .) = lim sup Eyn f (X0 . the tail and invariant σ -ﬁelds up to T and I coincide up to Px -measure zero sets. In some sense.TI} Theorem 9. The proof of Theorem 9. for any k ∈ N. 94 . o(1) We now have the following connection between the invariant σ -ﬁeld and bounded harmonic Theorem 9. Prove Theorem 9. we have that Ex lim sup u(Xn ) = Ex (u(X0 )) = u(x).9. µn+k ) = 2 for all n . ⊲ Exercise 9. harmonic. it is a generalization of and equivalence classes of bounded invariant functions on Ω .{t.7 and the Dominated Convergence Theorem. which roughly said that if there is one possible limiting behavior of a Markov chain.) = lim sup u(yn ) . we need to show u(x) = Ex lim sup u(Xn ) . 
The following theorem shows that for SRW on groups, the distinction between the two σ-fields collapses:

{t.TI} Theorem 9.16. For simple random walk on a finitely generated group, the tail and invariant σ-fields T and I coincide up to P_x-measure zero sets.

⊲ Exercise 9.13. Give an example of a graph G = (V, E) and a vertex x ∈ V such that, for simple random walk started at x, the σ-fields T and I are not the same up to P_x-measure zero sets.

The proof of Theorem 9.16 relies on the following result, whose proof we omit, although the Reader is invited to think about it a little bit. In some sense, it says that if there is one possible limiting behaviour of the chain, then a similar collapse always happens:

{t.D02} Theorem 9.17 (Derriennic's 0-2 law [Der76]). For SRW on a finitely generated group, with starting point x ∈ Γ, for any k ∈ N, either

‖µ_n − µ_{n+k}‖_1 = 2 d_TV(µ_n, µ_{n+k}) = 2 for all n ,   or it is o(1) as n → ∞ .

⊲ Exercise 9.14. Prove Theorem 9.16 from Theorem 9.17.

We now establish the connection between the invariant σ-field and bounded harmonic functions, for arbitrary Markov chains on a state space S.

{t.harmonicinv} Theorem 9.18. There is an invertible correspondence between bounded harmonic functions on S and equivalence classes of bounded invariant functions on Ω: if f : Ω → R is invariant and bounded, then the corresponding harmonic function u : S → R is

u(x) = E_x f(X_0, X_1, ...) ,

and, conversely, if u is a bounded harmonic function, then the corresponding equivalence class is the one represented by the invariant function

f(y_0, y_1, ...) = lim sup_{n→∞} u(y_n) .

Proof. For one direction, let f : Ω → R be invariant and bounded, representing an equivalence class, and let u(x) = E_x f(X_0, X_1, ...). That u is harmonic follows easily from the fact that f is an invariant function. For the other direction, let u : S → R be bounded harmonic. Then u(X_n) is a bounded martingale, so, by the Martingale Convergence Theorem 9.7 and the Dominated Convergence Theorem,

E_x[ lim sup_n u(X_n) ] = lim_n E_x[ u(X_n) ] = E_x[ u(X_0) ] = u(x) ,

so f(y_0, y_1, ...) := lim sup_n u(y_n) is a bounded invariant function with E_x f = u(x). It remains to check that the two constructions are inverse to each other.

n→∞ In words. So. the end-result is well- but since f is an invariant function. for almost every random walk trajectory {yn } started from y0 = x. and we are done.12.. That is.r. .. By looking at indicators of invariant events. then we can deﬁne its harmonic extension f : U −→ R by the Poisson formula 1 {c. for any x ∈ S we have by L´ evy’s 0-1 law Theorem 9. |e2πiθ − z |2 f (z ) = 0 which is nothing else but integration against the harmonic measure from x. . Xn = yn = Eyn f (X0 . . . we get the following immediate important consequence.g.5 The Poisson boundary and entropy.) = lim Ex f (y0 . . . . in a more general form: for arbitrary ﬁnite entropy measures instead of just symmetric ﬁnitely supported ones. yn . The name Poisson comes from the following analogue. hyperbolic Brownian motion is the same as w.s. R) is the group of the orientation-preserving hyperbolic isometries (the M¨ obius transformations) acting on U transitively.12. the space of bounded harmonic functions) is called the Poisson boundary. The invariant σ -ﬁeld of Ω is trivial if and only if all bounded harmonic functions on S are constant. . the Poisson boundary of the inﬁnite group P SL(2. Xn+1 . . Now. Corollary 9. and SO(2) is the stabilizer of 0 ∈ U. . i. . a much more general version of the (I) ⇐⇒ (H) equivalence in Theorem 9. yn . R) can be realized geometrically as ∂ U.) . Xn+1 . .approximated by the average end-result of a new random walk started at a large yn . .e. . the Euclidean one.14 that Px -a. . 95 .t. y1 . harmonic measure on ∂U w. only converges to it. . The importance of group-invariance {ss. The invariant σ -ﬁeld of a Markov chain (or equivalently. . . and then writing U = P SL(2. . Xn = yn . and f : ∂ U −→ R is a bounded Lebesgue-measurable function. f (y0 . .harmbound} If U ⊂ C is the open unit disk. .) X0 = y0 . where τ is the ﬁrst hitting time of ∂ U by Brownian motion in U. ∂ U plays the role of the Poisson boundary for Brownian motion on U. 
For invertibility, we need to prove that, for any x ∈ S and P_x-almost every trajectory {y_n},

f(y_0, y_1, ...) = lim_{n→∞} E_x[ f(X_0, X_1, ...) | X_0 = y_0, ..., X_n = y_n ] = lim_{n→∞} E_{y_n} f(X_0, X_1, ...) = lim sup_{n→∞} u(y_n) ,

where the first equality is Lévy's 0-1 law (Theorem 9.14), the second uses the Markov property together with the invariance of f, and the third is the definition of u. In words, the end-result is well-approximated by the average end-result of a new random walk started at a large y_n, and we are done.

By looking at indicators of invariant events, we get the following immediate important consequence:

Corollary 9.19. The invariant σ-field of Ω is trivial if and only if all bounded harmonic functions on S are constant.

The invariant σ-field of a Markov chain (or, equivalently, the space of bounded harmonic functions) is called the Poisson boundary. We can already see why it is a boundary: it is the space of possible different behaviours of the Markov chain trajectories at infinity.

The name Poisson comes from the following analogue. If U ⊂ C is the open unit disk, and f : ∂U → R is a bounded Lebesgue-measurable function, then we can define its harmonic extension f : U → R by the Poisson formula

f(z) = ∫_0^1 [ (1 − |z|²) / |e^{2πiθ} − z|² ] f(θ) dθ ,

which is nothing else but integration against the harmonic measure from z, i.e., E_z f(B_τ), where τ is the first hitting time of ∂U by Brownian motion in U. So, ∂U plays the role of the Poisson boundary for Brownian motion on U.

This analogy between random walks on groups and complex analysis can be made even closer by equipping U with the hyperbolic metric. Then PSL(2, R) is the group of the orientation-preserving hyperbolic isometries (the Möbius transformations), acting on U transitively, and SO(2) is the stabilizer of 0 ∈ U, so we can write U = PSL(2, R)/SO(2). The Poisson boundary of the infinite group PSL(2, R) can be realized geometrically as ∂U. Indeed, hyperbolic Brownian motion is the same as the Euclidean one, up to a time change, except that the hyperbolic BM never hits the ideal boundary ∂U, only converges to it; and the harmonic measure on ∂U w.r.t. hyperbolic Brownian motion is the same as w.r.t. the Euclidean one. This is true in much greater generality: e.g., the Poisson boundary of Gromov-hyperbolic groups can be naturally identified with their usual topological boundary [Kai00].

9.5 The Poisson boundary and entropy. The importance of group-invariance {ss.PoiEnt}

We now prove the (E) ⟺ (I) part of Theorem 9.12, in a more general form: for arbitrary finite entropy measures, instead of just symmetric finitely supported ones.
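The Poisson formula on the unit disk discussed above is easy to test numerically (a Python sketch, not part of the notes): taking the boundary function f(θ) = cos(2πθ) = Re(e^{2πiθ}), its harmonic extension must be the harmonic function Re(z).

```python
# Numerical check of the Poisson formula on the unit disk: the harmonic
# extension of f(theta) = cos(2*pi*theta) at z equals Re(z).
import cmath
import math

def poisson_extension(f, z, m=20000):
    # midpoint-rule quadrature of the Poisson integral over theta in [0, 1)
    total = 0.0
    for j in range(m):
        theta = (j + 0.5) / m
        e = cmath.exp(2 * math.pi * 1j * theta)
        kernel = (1 - abs(z) ** 2) / abs(e - z) ** 2
        total += kernel * f(theta)
    return total / m

z = 0.3 + 0.4j
val = poisson_extension(lambda t: math.cos(2 * math.pi * t), z)
assert abs(val - z.real) < 1e-6
```

Since the integrand is smooth and periodic, the midpoint rule converges extremely fast here; the same quadrature with f ≡ 1 also confirms that the Poisson kernel integrates to 1, i.e., that it really is the density of a harmonic measure.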

. . From this and the deﬁnition of h(µ). T = I up to Px -measure zero sets. .9) {e. we have by L´ is independent of (X1 . . Proof.Hjoint} Py0 Xi = yi for i = 1. . . . . . Now notice that Markovianity implies that H (X1 . Vice versa. Y ) − H (Y ) . . . conditioned on Xn .9) as follows: h(µ) = 0 iff (X1 . the tail event A A is independent of (X1 . . . Xk | Xn+1 ). then H (X | Y ) := − In particular. . H (X.13} By (9.16 that for SRW on groups. .15). .8) to get n→∞ where hi := H (µi ) = H (Xi ). .12} where H (X1 .8) {e. the asymptotic independence of (X1 . . . the tail σ -ﬁeld T is trivial. Xk | Xn ). k Xn = yn = we easily get −1 µ(x1 ) · · · µ(xk ) µn−k (yk yn ) n µ (yn ) (9. Recall now from Theorem 9. . in particular. . .{t.Hcond} (9. Xn+1 ) ≤ H (X1 . . Xk ). . n→∞ (9. . . we can reinterpret (9.7). . for any event A ∈ T . then (X1 . (9. and (9.y P[ X = x. Now. Xk ) from Xn implies that . Note that. . . .7) {e. with equality if and only if X and Y are independent. Y ) ≤ H (X ) + H (Y ) . . we get the following formula for the asymptotic entropy: h(µ) = H (X1 ) − lim H (X1 | Xn ) . Y = y ] log P[ X = x | Y = y ] = H (X. .9} H (X1 . we get that hn+1 − hn decreases monotonically to h(µ). from x. . For any countable group Γ and a measure µ on it with ﬁnite entropy H (µ) < ∞. . (9. Xk | Xn ) = kh1 + hn−k − hn .6) implies that H (X1 .5) {e. Xk | Xn . Note that. . h(µ) = 0. Combined with the k = 1 case of (9. using the notation xi = yi −1 yi for i ≥ 1 on the trajectory space Ω(y0 ). under any Py0 . 96 evy’s 0-1 law dent of Xn for all k ≥ 1. . So. .20 ([KaiV83]).7} H (X1 . . Xk | Xn . .8). Xn+1 ) = lim H (X1 . . Xk ) = kH (µ) = kh1 is similar to (9. (9. . . by Kolmogorov’s 0-1 law (Theorem 9. . Xk ) − kh(µ) . . Xk ) must be asymptotically independent of Xn . .6) {e. . the Poisson boundary of µ is trivial iff the asymptotic entropy vanishes. we get that hn − hn−1 ≥ hn+1 − hn . . Since. .PoiEnt} Theorem 9.10) {e. take the limit n → ∞ in (9. −1 With this deﬁnition. . . 
We will need the notion of conditional entropy: if X and Y are random variables taking values in some countable sets, then

H(X | Y) := − Σ_{x,y} P[X = x, Y = y] log P[X = x | Y = y] = H(X, Y) − H(Y) ,   (9.5) {e.Hcond}

where H(X, Y) is the entropy of the joint distribution. With this definition,

H(X, Y) ≤ H(X) + H(Y) ,   (9.6)

with equality if and only if X and Y are independent.

{t.PoiEnt} Theorem 9.20 ([KaiV83]). For any countable group Γ and a measure µ on it with finite entropy H(µ) < ∞, the Poisson boundary of µ is trivial iff the asymptotic entropy vanishes, h(µ) = 0.

Proof. Using the notation x_i = y_{i−1}^{-1} y_i for i ≥ 1 on the trajectory space Ω(y_0), from

P_{y_0}[ X_i = y_i for i = 1, ..., k, and X_n = y_n ] = µ(x_1) ··· µ(x_k) µ_{n−k}(y_k^{-1} y_n) ,   (9.7) {e.Hjoint}

we easily get

H(X_1, ..., X_k | X_n) = k h_1 + h_{n−k} − h_n ,   (9.8)

where h_i := H(µ_i) = H(X_i); note that H(X_1, ..., X_k) = k H(µ) = k h_1 is proved similarly. Now, Markovianity implies that H(X_1, ..., X_k | X_n, X_{n+1}) = H(X_1, ..., X_k | X_n), while (9.6) implies H(X_1, ..., X_k | X_n, X_{n+1}) ≤ H(X_1, ..., X_k | X_{n+1}). Combined with the k = 1 case of (9.8), we get that h_{n+1} − h_n ≤ h_n − h_{n−1}, so the sequence h_{n+1} − h_n decreases monotonically, and hence converges to h(µ) by the definition of h(µ). Therefore, we can take the limit n → ∞ in (9.8):

lim_{n→∞} H(X_1, ..., X_k | X_n) = k h_1 − k h(µ) = H(X_1, ..., X_k) − k h(µ) .   (9.9)

From this, with k = 1, we also get the following formula for the asymptotic entropy:

h(µ) = H(X_1) − lim_{n→∞} H(X_1 | X_n) .   (9.10)

We can reinterpret (9.9) as follows: h(µ) = 0 iff (X_1, ..., X_k) is asymptotically independent of X_n, for all k ≥ 1.

Now, if h(µ) = 0, then, for any tail event A, Lévy's 0-1 law (Theorem 9.14) gives lim_{n→∞} P_{y_0}[ A | X_n ] = 1_A, while the asymptotic independence of (X_1, ..., X_k) from X_n implies that A is independent of (X_1, ..., X_k), for every k. So, as in the proof of Kolmogorov's 0-1 law (Theorem 9.15), the tail σ-field T is trivial. Recall now from Theorem 9.16 that for SRW on groups, T = I up to P_x-measure zero sets, so the Poisson boundary is trivial. Vice versa, if T is trivial, then, again by Lévy's 0-1 law, conditioning on X_n asymptotically carries no information about (X_1, ..., X_k), i.e., they are asymptotically independent, hence h(µ) = 0 by (9.9), and we are done.
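The conditional entropy identity (9.5) and the subadditivity (9.6) are elementary to verify on a concrete joint distribution (a Python sketch, not part of the notes):

```python
# Check H(X|Y) = H(X,Y) - H(Y) and H(X,Y) <= H(X) + H(Y) on a small joint law.
import math

joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def H(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

HXY = H(joint.values())
px = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

# conditional entropy computed directly from the definition in (9.5)
Hcond = -sum(p * math.log(p / py[b]) for (a, b), p in joint.items())

assert abs(Hcond - (HXY - H(py.values()))) < 1e-12   # identity (9.5)
assert HXY <= H(px.values()) + H(py.values()) + 1e-12  # subadditivity (9.6)
```

The identity is purely algebraic (log of a ratio splits into a difference), which is why it holds to machine precision; the subadditivity is strict here, since this X and Y are dependent.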

where u ∈ Lk . in fact.3). but has the Liouville property and trivial Poisson boundary. this graph is Liouville. we get that for any function φ ∈ L2 (Ln ) with This implies that u fn ≤ 1.. Formally.e.8). for n larger than the level of u. νn ). E ) has positive speed (namely. and given that this level is k . we have Tn φ k 2 ≤ (1 − c) φ 2 with some c > 0. 2 → 0 as u v u 2 dTV (νn . satisﬁes fn x ∈L n φ(x) = 0. and on the vertex set of each level Ln . 2/6 − 1/6 = 1/6) and positive entropy (after n steps. also Tn 2→2 ≤ 1. However.) It is also not hard to show that moving in the other directions are at least non-expanding. a non-amenable bounded degree graph that has positive speed and entropy. The idea for this is that when ﬁrst getting from Ln to Ln+1 . The main place where the above proof breaks down is in (9. and hence got hn−k instead of H (Xn | Xk ) in (9. simple random walk on this graph G = (V. and hence in the equivalence between positive speed and the non-Liouville property. it is important for the walk to be group-invariant. for all large n. h(u) − h(v ) = u v u v h(w)(νn (w) − νn (w)) ≤ max h(x) 2 dTV (νn . while the one encoding the step within Ln is a strict L2 -contraction on functions orthogonal to constants. deﬁne the operator Tn : L2 (Ln ) −→ L2 (Ln+1 ) by Tn φ(y ) := 2 x ∈L n x φ(x)νn +1 (y ) . we need to show that the total variation distance between the harmonic measures converges to zero. The main idea is that the expanders mix the random walk fast enough within the levels. x ∈G u Let h : V −→ R be a bounded harmonic function. where the factor 2 = |Ln+1 |/|Ln | ensures that Tn 1 = 1. Take an inﬁnite binary tree. Then. we can write Tn as a weighted sum of composition of other operators. v ∈ V . and. A key motivation for [BenK10] was the following conjecture: 97 . by the Optional Stopping Theorem (see Section 6. Then. by 1→1 ≤1 := u 2n νn n → ∞. Clearly. Example: a non-amenable Liouville graph [BenK10]. as shown by the following example.7). 
The contraction comes from the fact that, when first getting from L_n to L_{n+1}, the walk with positive probability makes at least one step inside the expander on L_n, delivering some strict contractive effect in the L²-norm. Formally, we can write T_n as a weighted sum of compositions of other operators, all with L²-norm at most 1, where the operator encoding a step within L_n is a strict L²-contraction on functions orthogonal to constants, and it is not hard to show that the operators encoding the moves in the other directions are at least non-expanding. Moreover, by looking at the possible first steps of the walk before hitting L_{n+1}, one can show that ‖T_n‖_{1→1} ≤ 1 and ‖T_n‖_{∞→∞} ≤ 1, so, by the Riesz-Thorin interpolation theorem, also ‖T_n‖_{2→2} ≤ 1.

Altogether, ‖f_n^u − 1‖_2 → 0 as n → ∞, and hence, for any u, v ∈ V,

2 d_TV(ν_n^u, ν_n^v) ≤ ‖f_n^u − 1‖_1 + ‖f_n^v − 1‖_1 ≤ ‖f_n^u − 1‖_2 + ‖f_n^v − 1‖_2 → 0 ,

and we are done.

A key motivation for [BenK10] was the following conjecture:
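The contraction mechanism behind the T_n operators can be illustrated with a toy computation (a Python sketch, not from the notes; the random pairwise-averaging operator is our stand-in for an expander step, chosen only because it is doubly stochastic and strictly contracts the component orthogonal to constants): iterating such operators drives the density of a point mass to the uniform density, i.e., d_TV → 0.

```python
# Toy version of the contraction argument: compose random doubly stochastic
# averaging maps (each fixes constants and L^2-contracts everything orthogonal
# to them) and watch the density of delta_u approach the uniform density.
import random

def mix_step(f, rng):
    # one sweep of random pairwise averaging; doubly stochastic
    f = f[:]
    idx = list(range(len(f)))
    rng.shuffle(idx)
    for i in range(0, len(idx) - 1, 2):
        a, b = idx[i], idx[i + 1]
        m = (f[a] + f[b]) / 2
        f[a] = f[b] = m
    return f

rng = random.Random(1)
n = 64
f = [n if i == 0 else 0.0 for i in range(n)]   # density of delta_u w.r.t. uniform
for _ in range(30):
    f = mix_step(f, rng)
tv = sum(abs(x - 1) for x in f) / (2 * n)      # dTV between the two measures
assert tv < 1e-2                               # mixed essentially to uniform
assert abs(sum(f) / n - 1) < 1e-9              # "T 1 = 1": mean is preserved
```

Each sweep roughly halves the variance of the density, the discrete analogue of the (1 − c) contraction of T_n on functions orthogonal to constants.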

It is important in this proof that the harmonic measure from the root on the levels Ln is uniform: this is used, for instance, in proving ||Tn||_{∞→∞} ≤ 1, via the fact that if we start a random walk uniformly at x ∈ Ln, then the averaged harmonic measure on Ln+1 is uniform again. Note also that the graph in the above example has expander balls around the root of the binary tree.

⊲ Exercise 9.15 (Itai Benjamini). * Consider an imbalanced binary tree: from each vertex, put a double edge to the right child and a single edge to the left child. Then place a regular expander on each level in a way that the resulting graph has a non-trivial Poisson boundary; in particular, such a graph cannot be a Cayley graph.

One may hope that the above proof could be generalized to show that simple random walk on an infinite expander would always have trivial Poisson boundary. However, such a generalization seems problematic. The expanders on the levels are expanders w.r.t. the uniform measures on the levels, and if the harmonic measure on a level is concentrated on a very small subset, then the walk might not feel the contractive effect in the horizontal direction. On the other hand, one could work with non-uniform measures on the levels in the definition of the operators Tn or in the operator norms, and still could have ||Tn||_2 ≤ 1 w.r.t. those measures. This issue calls for the following question:

⊲ Exercise 9.16 ([BenK10]). *** Does every finitely generated group have a generating set in which the harmonic measures νn^o on the spheres Ln := ∂V Bn(o) are roughly uniform, in the sense that there exist 0 < c, C < ∞ such that for each n there is Un ⊂ Ln with νn^o(Un) > c and c < νn^o(x)/νn^o(y) < C for all x, y ∈ Un? This is very similar to Vershik's question. Show at least that this property is invariant under quasi-isometries, or at least independent of the choice of a finite generating set of the group.

⊲ Exercise 9.17. *** Show that there exists no infinite expander: this would be a bounded degree infinite graph such that every ball Bn(x) around every vertex x is itself an expander.

An affirmative answer to Exercise 9.16, together with an affirmative answer to the question of invariance of the infinite expander property under a change of generators, would give a proof of Benjamini's conjecture. On the other hand, one could try to find counterexamples to Exercises 9.17 and 9.12 among non-amenable torsion groups.

9.6 Unbounded measures

There are a lot of nice results showing that there are good reasons for sometimes leaving our usual world of finitely supported measures.

The following two theorems from [KaiV83], whose proofs we omit, make a direct connection between non-trivial Poisson boundary and non-amenability. Recall that a measure µ on a group Γ is called aperiodic if the largest common divisor of the set {n : µ^n(1) > 0} is 1.

Theorem 9.21. For any aperiodic measure µ whose (not necessarily finite) support generates the group Γ, the Poisson boundary of µ is trivial if and only if the convolutions µ^n converge weakly to a left-invariant mean on L∞(Γ), or equivalently, dTV(µ^n(·), µ^n(g^{−1}·)) → 0 for every g ∈ Γ. This immediately implies that on a non-amenable group any non-degenerate µ has non-trivial Poisson boundary.

Theorem 9.22 (conjectured by Furstenberg). A group Γ is amenable iff there is a measure µ supported on the entire Γ whose Poisson boundary is trivial. (The idea is to take an average of the uniform measures on larger and larger Følner sets.)

In this theorem, the support of the measure µ that shows amenability might indeed need to be large: it is shown in [Ers04a] that, on the lamplighter groups Z2 ≀ Z^d with d ≥ 3, any measure with finite entropy has non-trivial Poisson boundary. (For symmetric finitely supported measures we knew this already.) Conversely, Theorem 9.22 can also be used to produce a symmetric measure with trivial Poisson boundary in any amenable group. For non-amenable groups, the theorem says that positive speed cannot be ruined by strange large generating sets.

Can the spectral radius being less than 1 be ruined? It turns out that there exist non-amenable groups where the spectral radius of finite symmetric walks can be arbitrarily close to 1 [Osi02, ArzBLRSV05], hence I expect that an infinite support can produce a spectral radius 1:

⊲ Exercise 9.18. ** Can there exist a symmetric measure µ whose infinite support generates a finitely generated non-amenable group Γ such that the spectral radius is ρ(µ) = 1?

We have already mentioned the Choquet-Deny theorem (1960), saying that any aperiodic measure on any Abelian group has the Liouville property. How small does a group have to be for this to remain true? Raugi found a very simple proof of the Choquet-Deny theorem, and extended it to nilpotent groups:

Theorem 9.23 ([Rau04]). For any aperiodic probability measure µ on a countable nilpotent group Γ, the associated Poisson boundary is trivial.

Proof for the Abelian case. Let h : Γ −→ R be a bounded harmonic function. For independent steps T1, T2, ... with distribution µ, and Xn = X0 + T1 + ··· + Tn the random walk, define, for n ≥ 1,

    un(x) := Ex[ (h(Xn) − h(Xn−1))^2 ] = Σ_{t1,...,tn ∈ Γ} ( h(x + t1 + ··· + tn) − h(x + t1 + ··· + tn−1) )^2 µ(t1) ··· µ(tn).

On one hand, un(x) = Ex[h(Xn)^2] − Ex[h(Xn−1)^2] by the orthogonality of martingale increments (Pythagoras for martingales), hence

    Σ_{n=1}^N un(x) = Ex[h(XN)^2] − h(x)^2,

a sum of non-negative terms that remains bounded as N → ∞. On the other hand, for n ≥ 2,

    un(x) = E[ Ex[ (h(Xn) − h(Xn−1))^2 | T2, ..., Tn ] ]
          ≥ E[ ( Ex[ h(Xn) − h(Xn−1) | T2, ..., Tn ] )^2 ]                               (by Jensen or Cauchy-Schwarz)
          = E[ ( h(x + T2 + ··· + Tn) − h(x + T2 + ··· + Tn−1) )^2 ]                     (by harmonicity and commutativity)
          = un−1(x),

so the terms un(x) are non-decreasing. A sum of non-decreasing non-negative terms that remains bounded must have all terms equal to zero, which obviously implies that h is constant.

We know from the entropy criterion for the Poisson boundary that a finitely supported measure on any group with subexponential growth has trivial boundary. The following question from [KaiV83] is still open: Is it true that on any group of exponential growth there is a µ with non-trivial Poisson boundary? For solvable groups, this was shown (with a symmetric µ with finite entropy) in [Ers04a, Theorem 4.1]. Anna Erschler also proved in [Ers04b] that there exist a group with subexponential growth (an example of Grigorchuk, see Section 15.1) and some infinitely supported measure on it with finite entropy that has a non-trivial Poisson boundary. On the other hand, Bartholdi and Erschler have recently shown that there exists a group of exponential growth on which any finitely supported µ has a trivial boundary [BarE11].

10 Growth of groups, of harmonic functions and of random walks

10.1 A proof of Gromov's theorem

We now switch to the theorem characterizing groups of polynomial growth, due to Gromov [Gro81], and give a brief sketch of the new proof due to Kleiner [Kle10], also using ingredients and insights from [LeeP09] and [Tao10]. In fact, [Tao10] contains a self-contained and mostly elementary proof of Gromov's theorem; we will borrow some parts of the presentation there.

Theorem 10.1 (Gromov's theorem [Gro81]). A finitely generated group has polynomial growth if and only if it is virtually nilpotent.

We have already proved the "if" direction. For the harder "only if" direction, we will need several facts which we provide here without proper proofs, but at least with a sketch of their main ideas. The first ingredient is the following:

Theorem 10.2 ([ColM97], [Kle10]). For fixed positive integers d and ℓ there exists some constant f(d, ℓ) such that for any Cayley graph G of any finitely generated group Γ of polynomial growth with degree ≤ d, the space of harmonic functions on G with growth degree ≤ ℓ has dimension ≤ f(d, ℓ) (so in particular it is finite-dimensional).

The Colding-Minicozzi proof used Gromov's theorem, so it was not good enough for Kleiner's purposes, but it did motivate his proof. For a tiny bit of intuition to start with, recall the earlier exercise saying that sublinear harmonic functions on Z^d are constant. But there, in the coupling proof, we used the product structure of Z^d heavily, while here we do not have any algebraic information; this is exactly the point. The idea is that if we take a vector space spanned by many independent harmonic functions, then the lemma below implies that some of the functions in this space grow quickly; in other words, the space of harmonic functions with moderate growth cannot be large.

We give Kleiner's proof for the case when the Cayley graph satisfies the so-called doubling condition: there is an absolute constant D < ∞ such that |B(2R)| ≤ D |B(R)| for any radius R. This does not follow easily from being a group of polynomial growth, so this is an annoying but important technicality all along Kleiner's proof.

The key lemma is the following; it implies that by imposing relatively few constraints on a harmonic function we can make it grow quite rapidly.

Lemma 10.3 ([Kle10]). There is a family B of balls of radius ǫR covering BR, with the concentric balls B^i of half the radius pairwise disjoint, such that if a harmonic function f : G −→ R has mean zero on each ball of B, then ||f||_{ℓ2(BR)} ≤ Cǫ ||f||_{ℓ2(B4R)}.

Proof. Cover the ball BR by balls of radius ǫR, shrink each by a factor of two, and take a maximal subset B of the shrunk balls B^i in which all of them are disjoint. Then the corresponding original balls of radius ǫR still cover BR. Enlarge now each B^i ∈ B by a factor of three. We claim that each point of B2R is covered by at most some D' of these 3B^i's. This is because of the doubling property: if there were more than D' of them covering a point x, then B_{3.5ǫR}(x) would contain D' disjoint balls B^i/2, so we would have D' |B_{ǫR/2}| ≤ |B_{3.5ǫR}|, which cannot hold for large enough D'. Now apply Saloff-Coste's Poincaré inequality (Exercise 8.2) to each B^i, using that f has mean zero there:

    Σ_{x ∈ B^i} |f(x)|^2 ≤ O(1) ǫ^2 R^2 Σ_{x ∈ 3B^i} |∇f(x)|^2.

Summing over the B^i ∈ B and using the existence of D',

    Σ_{x ∈ BR} |f(x)|^2 ≤ Σ_{B^i ∈ B} Σ_{x ∈ B^i} |f(x)|^2 ≤ O(1) ǫ^2 R^2 Σ_{x ∈ B2R} |∇f(x)|^2,

then apply the reverse Poincaré inequality (Exercise 8.3) to B2R to get

    Σ_{x ∈ B2R} |∇f(x)|^2 ≤ O(1) R^{−2} Σ_{x ∈ B4R} |f(x)|^2.

Altogether, Σ_{x ∈ BR} |f(x)|^2 ≤ O(1) ǫ^2 Σ_{x ∈ B4R} |f(x)|^2, which proves the lemma.

To finish the proof of Theorem 10.2, the trick is to consider the Gram determinant det( (ui, uj)_{ℓ2(BR)} )_{i,j=1}^N, where {ui : i = 1, ..., N} is a basis for the space of harmonic functions of growth degree ≤ ℓ, chosen such that {ui : 1 ≤ i ≤ M} span the subspace of functions whose mean on each B^i ∈ B is zero, with M = N − |B|.

This determinant is the squared volume of the parallelepiped spanned by the ui's in ℓ2(BR). We may assume that the basis is orthonormal w.r.t. ℓ2(B4R). By Lemma 10.3, for 1 ≤ i ≤ M = N − |B| we then have ||ui||_{ℓ2(BR)} ≤ Cǫ ||ui||_{ℓ2(B4R)} = Cǫ, while, obviously, ||ui||_{ℓ2(BR)} ≤ 1 for all i. Hence, on one hand,

    det( (ui, uj)_{ℓ2(BR)} )_{i,j=1}^N ≤ Π_{i=1}^N ||ui||^2_{ℓ2(BR)} ≤ O(ǫ^2)^{N − |B|},

and, by the construction of B and the doubling property, |B| = ǫ^{−O(1)}. On the other hand, by the growth conditions, one can choose the radius R such that

    1 = det( (ui, uj)_{ℓ2(B4R)} )_{i,j=1}^N ≤ ( O(1) 4^{d+2ℓ} )^N det( (ui, uj)_{ℓ2(BR)} )_{i,j=1}^N.

If ǫ is small and N is large compared to |B|, these two bounds contradict each other unless the determinant vanishes, i.e., unless the ui fail to be a basis. This means that N cannot be larger than some f(d, ℓ), which proves Theorem 10.2.

The second ingredient for the proof of Gromov's theorem is complementary to the previous one: it will imply that for groups of polynomial growth there do exist non-trivial harmonic functions of moderate growth.

Theorem 10.4 ([Mok95], [KorS97], [Kle10], [LeeP09], [ShaT09]). Let Γ be a finitely generated group without Kazhdan's property (T). Then there exists an isometric (linear) action of Γ on some real Hilbert space H without fixed points, and a non-constant Γ-equivariant harmonic function Ψ : Γ −→ H. Here equivariance means that Ψ(gh) = g(Ψ(h)) for all g, h ∈ Γ, and harmonicity is understood w.r.t. some symmetric finite generating set.

The first three of the five proofs listed above use ultrafilters to obtain a limit of almost harmonic functions into different Hilbert spaces. (The trick for the general non-Kazhdan case is to consider the right Hilbert space action instead of the regular representation on ℓ2(Γ) that is so closely related to random walks.) The last two proofs are constructive and quite similar to each other (done independently), inspired by random walks on amenable groups. We will briefly discuss now the proof of Lee and Peres [LeeP09]; it works for all amenable transitive graphs, not only for groups. Since amenable groups do not have property (T), anything amenable is fine, so let us focus on the amenable case.

The key lemma is the following:

Lemma 10.5 ([LeeP09]). For simple random walk on any transitive amenable graph G(V, E), we have

    inf { ||(I − P)φ||_2^2 / ⟨φ, (I − P)φ⟩ : φ ∈ ℓ2(V) } = 0.

As we have seen earlier, both ||(I − P)φ||_2 / ||φ||_2 and ⟨φ, (I − P)φ⟩ / ||φ||_2^2 are small if we take φ to be the indicator function 1_A of a large Følner set A ⊂ V. But the lemma concerns the ratio of these two quantities, which would be of constant order for typical Følner indicator functions 1_A. Instead, the right functions to take will be the smoothened out versions

    φ_k(1_A)(x) := Σ_{i=0}^{k−1} P^i 1_A(x) = Ex #{ 0 ≤ i ≤ k − 1 : Xi ∈ A }.

⊲ Exercise 10.1. (a) Show that if (V, P) is transient or null-recurrent, then P^i f → 0 pointwise for any f ∈ ℓ2(V). (b) Let φ_k = φ_k(f) := Σ_{i=0}^{k−1} P^i f. Show that ||(I − P)φ_k||_2^2 ≤ 4 ||f||_2^2 and (φ_k, (I − P)φ_k) = (2φ_k − φ_{2k}, f). (c) Show that (φ_k(f), f)/k → 0 as k → ∞.

⊲ Exercise 10.2. * Show that if ||(I − P)f||_2 / ||f||_2 < θ is small (e.g., for the indicator function of a large Følner set), then there is a k ∈ N such that (2φ_k(f) − φ_{2k}(f), f) is at least some large L(θ). (Hint: first show that there is an ℓ such that (φ_ℓ(f), f) is large, then use part (b) of the previous exercise.)

The combination of Exercise 10.1 (c) and Exercise 10.2 proves Lemma 10.5.

We now want to prove Theorem 10.4 for the amenable case. Choose a sequence ψ_j ∈ ℓ2(Γ) realizing the infimum 0 in Lemma 10.5, and define Ψ_j : Γ −→ ℓ2(Γ) by

    Ψ_j(x) : g ↦ ψ_j(g^{−1}x) / sqrt( 2 ⟨ψ_j, (I − P)ψ_j⟩ ).

These are clearly equivariant functions on Γ, and the normalization ensures that

    Σ_{y ∈ Γ} p(x, y) ||Ψ_j(x) − Ψ_j(y)||^2 = 1                                   (10.1)

for every x ∈ Γ; in particular, the Ψ_j's are uniformly Lipschitz. Moreover, they are almost harmonic:

    || Ψ_j(x) − Σ_{y ∈ Γ} p(x, y) Ψ_j(y) ||^2 = ||(I − P)ψ_j||^2 / ( 2 ⟨ψ_j, (I − P)ψ_j⟩ ) → 0    (10.2)

as j → ∞. Then one can use a compactness argument to extract a limit: a harmonic equivariant function that is non-constant because of (10.1). This finishes the construction.

Let us note that the proof would have looked much simpler if we had worked with φ = φ_∞(f) = Σ_{i=0}^∞ P^i f, with f = 1_A for a large Følner set A; these are some truncated Green's functions, and we would have (I − P)φ = 1_A. It is easy to see that ⟨φ, (I − P)φ⟩ = Σ_{x ∈ A} φ(x) ≥ r (|A| − d^r |∂_V A|) for any given r, where G is d-regular, since we stay in A for at least r steps of the walk if we start at least distance r away from the boundary. However, we would need to prove here that this φ exists (which is just transience) and is in ℓ2(V), which is in fact true whenever the volume growth of G is at least degree 5, but it is not clear how to show that without relying on Gromov's polynomial growth theorem, which we obviously want to avoid here.

As the final ingredient for Gromov's theorem, here is what we mean by linear groups being better understood:

Theorem 10.6 (Tits' alternative [Tit72]). Any finitely generated linear group (i.e., a subgroup of some GL(n, F)) is either almost solvable or contains a subgroup isomorphic to F2.
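The identity in part (b) of Exercise 10.1 is purely algebraic: (I − P)φ_k telescopes to f − P^k f, and symmetry of P does the rest. As a sanity check, here is a minimal numerical sketch, using the symmetric walk on a cycle; the chain, the set A and the parameters are arbitrary test data, not taken from the text.

```python
import numpy as np

# Symmetric random walk on the n-cycle: P is a symmetric matrix, so the
# chain is reversible with respect to the uniform (counting) measure.
n, k = 12, 5
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.5

f = np.zeros(n)
f[:4] = 1.0                       # indicator of an "interval" A

def phi(m):                       # phi_m(f) = sum_{i < m} P^i f
    return sum(np.linalg.matrix_power(P, i) @ f for i in range(m))

phik, phi2k = phi(k), phi(2 * k)
lhs = phik @ (phik - P @ phik)    # (phi_k, (I - P) phi_k)
rhs = (2 * phik - phi2k) @ f      # (2 phi_k - phi_2k, f)
assert np.allclose(lhs, rhs)

# (I - P) phi_k telescopes to f - P^k f, so its norm is at most 2 ||f||:
assert np.allclose(phik - P @ phik, f - np.linalg.matrix_power(P, k) @ f)
assert np.linalg.norm(phik - P @ phik) <= 2 * np.linalg.norm(f)
print("identities verified")
```

The same check runs unchanged on any symmetric stochastic matrix P, which is the only structural assumption the identity needs.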

In fact, the special case of the Tits alternative for finitely generated subgroups of a compact linear Lie group H ⊂ GL(n, C) will suffice for us. First of all, such an H is isomorphic to a subgroup of U(n): any inner product on C^n can be averaged by the Haar measure of H to get an H-invariant inner product, and H becomes a subgroup of the unitary group associated to this inner product. Now, the key observation is that if g, h ∈ U(n) are close to the identity (in the operator norm), then their commutator is even closer:

    ||[g, h] − 1||_op = ||gh − hg||_op = ||(g − 1)(h − 1) − (h − 1)(g − 1)||_op ≤ 2 ||g − 1||_op ||h − 1||_op,   (10.3)

where the first equality used that g, h are unitary, and the last inequality is by the triangle inequality. Using this, one can already easily believe Jordan's theorem: any finite subgroup Γ of U(n) contains an Abelian subgroup Γ* of index at most C_n. The elements Γ_ǫ of Γ lying in a small enough ǫ-neighbourhood of the identity generate a good candidate for Γ*: the finite index comes from the compactness of U(n) and the positivity of ǫ, while a good amount of commutativity comes from taking the element g of Γ_ǫ closest to the identity and then using (10.3) to show that [g, h] = 1 for any h ∈ Γ_ǫ. Similarly to this, one can prove that any finitely generated subgroup of U(n) of subexponential growth is almost Abelian, as I learnt from [Tao10]; this is already a statement that can be proved in a quite elementary way.

We now use these facts to give a proof of the "only if" direction, by induction on the growth degree of Γ.

Proof of Gromov's Theorem 10.1. The base case is clear: groups of growth degree 0 are precisely the finite ones, and they are almost nilpotent, since the trivial group is nilpotent. Suppose we have already proved the result for groups of degree ≤ d − 1, and let Γ be finitely generated, infinite, and of polynomial growth with degree ≤ d.

First of all, Γ is amenable, hence from Theorem 10.4 we have a non-trivial equivariant harmonic embedding ψ into a real Hilbert space H. It is Lipschitz, because ψ(gs) − ψ(g) = g(ψ(s) − ψ(e)). Let V be the vector space of harmonic real-valued Lipschitz functions on Γ; it is finite dimensional by Theorem 10.2. Since ψ : Γ −→ H is non-constant, there is a bounded linear functional π : H −→ R such that ψ_0 := π ∘ ψ ∈ V is a non-constant harmonic function on Γ. This ψ_0 cannot attain its maximum, by the maximum principle, so it takes infinitely many values.

Now, Γ acts on V via g : u ↦ u^g, where u^g(x) = u(g^{−1}x), and, by applying elements of Γ, we can get infinitely many different functions from ψ_0. Γ also acts on the vector space W = V modulo constants, where u(x) ∼ u(x) + c for any constant c. On W, the Lipschitz norm is a genuine norm, and the action of Γ preserves it. Since on a finite dimensional vector space any two norms are equivalent (up to constant factor bounds), the action of Γ preserves a Euclidean structure up to constant factors. Thus we get a representation ρ : Γ −→ GL(W) with a precompact image. This image is infinite. (Why?)

So, the group B := Im ρ is finitely generated, infinite, of polynomial growth, and lies inside a compact Lie group. Hence, by the compact Lie case of the Tits alternative, Theorem 10.6, it is almost solvable (and, in fact, almost Abelian, but let us use only solvability). Let A_0 = A be the finite index solvable subgroup of B, with A_k := [A_{k−1}, A_{k−1}].

We now apply the results of Section 4.3. Since A is infinite and solvable, there is a smallest index ℓ such that A_ℓ has infinite index in A_{ℓ−1}. Then A_{ℓ−1} has finite index in B. The group A_{ℓ−1}/A_ℓ is Abelian, infinite and finitely generated, hence it can be projected onto Z; so we get a projection ψ : A_{ℓ−1} −→ Z. The subgroup Γ_1 := ρ^{−1}(A_{ℓ−1}) has finite index in Γ, hence it is also finitely generated of polynomial growth with degree ≤ d, and it is enough to show that Γ_1 is almost nilpotent. Composing, we get a projection ψ ∘ ρ : Γ_1 −→ Z, and then an exact sequence

    1 −→ N −→ Γ_1 −→ Z −→ 1.

By the results of Section 4.3, N is finitely generated and of polynomial growth with degree ≤ d − 1, hence almost nilpotent by the inductive assumption. Therefore, Γ_1, and hence Γ itself, is almost nilpotent.

We will see in Chapter 16 that there are transitive graphs that are very different from Cayley graphs: not quasi-isometric to any of them. But this does not happen in the polynomial growth regime: there is an extension of Gromov's theorem, by [Tro85] and [Los87], saying that any quasi-transitive graph of polynomial growth is quasi-isometric to some Cayley graph (of an almost nilpotent group, of course); see also [Woe00, Theorem 5.11].

This proof of Gromov's theorem was made as quantitative as possible in [ShaT09]: they showed that there is some small c > 0 such that a volume growth bound |B(R)| ≤ R^{c(log log R)} already implies almost nilpotency, and hence polynomial growth. This is the best result to date towards verifying the conjectured gap between polynomial and exp(c√n) volume growth.

10.2 Random walks on groups are at least diffusive

We have seen that non-amenability implies positive speed of escape. Beyond that, we may ask for other rates of escape. Although it may seem that a random walk on a group with exponential growth should be further away from the origin than on a group with polynomial growth, since it is more spread out, the walk with exponential growth also has more places to go which are not actually further away. In particular, there can be many dead ends in the Cayley graph: vertices from which all steps take us closer to the origin. So it is not at all obvious that the rate of escape is at least c√n on any group, and this was in fact unknown for quite a while.

⊲ Exercise 10.3. Using the results seen for return probabilities in Chapter 8, show that:
(a) Any group with polynomial growth satisfies E d(X_0, X_n) ≥ c√n. (Notice that, by the Central Limit Theorem, this is sharp for Z^d, d ≥ 1.)
(b) If the group has exponential growth, then E d(X_0, X_n) ≥ c n^{1/3}.
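For Z^d itself, the diffusive behaviour in Exercise 10.3(a) is easy to observe empirically. A small Monte Carlo sketch (sample sizes are arbitrary): for SRW on Z^2, the graph (L1) distance satisfies E d(X_0, X_n) ≈ 2√(n/π) ≈ 1.13 √n.

```python
import random

# Monte Carlo estimate of E d(X_0, X_n) for simple random walk on Z^2,
# where d is the graph (L1) distance.  The true value grows like
# 2*sqrt(n/pi) ~ 1.13*sqrt(n), i.e. the walk is diffusive.
random.seed(1)
n, trials = 400, 2000
total = 0.0
for _ in range(trials):
    x = y = 0
    for _ in range(n):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    total += abs(x) + abs(y)
mean_dist = total / trials
print(mean_dist / n ** 0.5)       # close to 1.13
```

The interesting content of the exercise is, of course, that the lower bound c√n persists on groups without any commutative structure to average over.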

Using the equivariant harmonic function Ψ into a Hilbert space H, claimed to exist for any amenable group in Theorem 10.4, one can show that SRW on any amenable transitive graph is at least diffusive, with quantitative bounds:

⊲ Exercise 10.4. * Show that E[ d(X_0, X_n)^2 ] ≥ n/d for SRW on any amenable transitive d-regular graph, using the orthogonality of martingale increments. (Hint: if M_n is a martingale in H, then E||M_n − M_0||^2 = Σ_{k=0}^{n−1} E||M_{k+1} − M_k||^2, a Pythagorean Theorem for martingales in Hilbert spaces; apply this to M_n = Ψ(X_n).) There is a similar bound for finite transitive graphs: E[ d(X_0, X_n)^2 ] ≥ n/(2d) for all n < 1/(1 − λ_2), where λ_2 is the second largest eigenvalue of G.

Let us point out that one does not really need an actual harmonic Lipschitz embedding of the transitive amenable graph G into Hilbert space to bound the rate of escape from below: one can use the approximate harmonic functions φ_∞(1_A) or φ_k(1_A) for large Følner sets A directly. (This improvement is due to Bálint Virág.)

If we want to improve the linear bound on the second moment to a square root bound on the first moment, E d(X_0, X_n) ≥ c√n, we need to show that d(X_0, X_n) is somewhat concentrated. One way to do that is to show that higher moments do not grow rapidly:

⊲ Exercise 10.5 (Anna Erschler). * Show that E[ d(X_0, X_n)^4 ] ≤ C n^2. Then deduce that E d(X_0, X_n) ≥ c√n on amenable transitive graphs.

Even stronger concentration is true:

Theorem 10.7 (James Lee and Yuval Peres [LeeP09]). There is an absolute constant C < ∞ such that, for SRW on any amenable transitive d-regular graph, for any ǫ > 0 and all n ≥ 1/ǫ^2,

    P[ (1/n) Σ_{k=0}^n d(X_0, X_k) < ǫ √(n/d) ] ≤ C ǫ.

This concerns the average distance along the path, hence it is slightly weaker than the rate c√n for the expectation in the previous exercise, but the concentration it provides is much stronger. See [LeeP09].

⊲ Exercise 10.6. ** The tail of the first return time: is it true that P_x[ τ_x^+ > n ] ≥ c/√n for all groups? (This was a question from Alex Bloemendal in class, and I did not know how hard it was. Later this turned out to be a recent theorem of Ori Gurel-Gurevich and Asaf Nachmias.)

11 Harmonic Dirichlet functions and Uniform Spanning Forests

11.1 Harmonic Dirichlet functions

Recall that the Dirichlet energy of a function f : V(G) −→ R is defined as

    E(f) := Σ_{x∼y} |f(x) − f(y)|^2.

We have seen that the question whether there are non-constant bounded harmonic functions is interesting. It is equally interesting whether a graph has non-constant harmonic Dirichlet functions, i.e., harmonic functions with finite Dirichlet energy; the space of such functions will be denoted by HD(G) from now on. This is not to be confused with our earlier result that there is a finite energy flow from x to ∞ iff the network is transient: such a flow gives a harmonic function only off x. Note, however, that the difference of two different finite energy flows from x with the same inflow at x is a non-constant harmonic Dirichlet function.
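On a finite network, the relation between harmonicity and Dirichlet energy is the discrete Dirichlet principle: among all functions with given boundary values, the harmonic extension minimizes E(f). Here is a minimal numerical sketch with unit conductances; the graph and the boundary data are arbitrary test choices.

```python
import numpy as np

# A path 0-1-2-3-4 plus a chord {1,3}; fix f on the "boundary" {0,4}
# and solve the discrete Dirichlet problem on the interior {1,2,3}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
n, boundary = 5, {0: 0.0, 4: 1.0}
L = np.zeros((n, n))
for u, v in edges:                       # graph Laplacian, unit conductances
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1

inner = [i for i in range(n) if i not in boundary]
b = -L[np.ix_(inner, list(boundary))] @ np.array(list(boundary.values()))
f = np.zeros(n)
for i, val in boundary.items():
    f[i] = val
f[inner] = np.linalg.solve(L[np.ix_(inner, inner)], b)

def energy(g):
    return sum((g[u] - g[v]) ** 2 for u, v in edges)

# Harmonicity at interior vertices: (L f)(x) = 0 there.
assert np.allclose((L @ f)[inner], 0)
# Dirichlet principle: perturbing f inside only increases the energy.
rng = np.random.default_rng(0)
for _ in range(100):
    g = f.copy(); g[inner] += 0.1 * rng.standard_normal(len(inner))
    assert energy(g) >= energy(f) - 1e-12
print("harmonic extension minimizes the Dirichlet energy")
```

On an infinite graph the "boundary at infinity" replaces the finite boundary, which is exactly why the existence of non-constant HD functions becomes a non-trivial, geometry-dependent question.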

In amenable groups, all harmonic Dirichlet functions are constants; one proof is by [BLPS01], using uniform spanning forests, see the next section. The property of having non-constant HD functions (i.e., HD > R) is a quasi-isometry invariant; this is due to Soardi (1993), and can be proved somewhat similarly to Kanai's theorems in Chapter 6. Thus we can talk about a group having HD > R or not.

⊲ Exercise 11.2. * Show that any non-amenable group with at least two ends (in fact, by Stallings' Theorem, such a group has a continuum of ends) has HD > R.

The non-trivial harmonic Dirichlet functions lie between the "extremal" currents, the free and wired ones; hence the non-existence of non-constant HD functions is equivalent to a certain uniqueness of currents in the graph, giving a way of measuring the difference between free and wired currents. This connection was first noted by [Doy88]. See [LyPer10, Chapter 9] for a complete account.

Theorem 11.1 (He-Schramm [HeS95], Benjamini-Schramm [BenS96a, BenS96b]). For any infinite triangulated planar graph, there exists a circle packing representation: the vertices are represented by disks, and edges are represented by two disks touching. This circle packing representation either fills the plane, in which case the graph is recurrent, or it fills a domain conformally equivalent to a disk, in which case the graph is transient; in the latter case, moreover, HD > R.

For the circle packing representation, see the original paper [HeS95] or [Woe00, Section 6.4]; for the HD results, see [BenS96a, BenS96b].

It is worth mentioning another theorem, which shows where Kazhdan groups fit:

Theorem 11.2 ([BekV97]). If Γ is an infinite Kazhdan group, then HD = R for any Cayley graph. If F is a non-commutative free group, then HD > R.

As HD is a vector space, we can try to compute its dimension. Of course, when HD > R, this dimension is infinite. But there is a notion, the von Neumann dimension of Hilbert spaces with group action, which measures dimensions relative to L2(Γ): that is, dim_Γ(L2(Γ)) = 1, and in general the values can be anywhere in R≥0. Due to the action of Γ on its Cayley graph, this notion applies to HD, and we have:

Theorem 11.3 ([BekV97]). dim_Γ HD = β1^(2)(Γ) for any Cayley graph of Γ, where β1^(2)(Γ) is the first Betti number of the L2-cohomology of the group. In particular, dim_Γ HD does not depend on the Cayley graph.

We will see a probabilistic interpretation of β1^(2)(Γ) in the next section, using uniform spanning forests. We will explain this more in a later version of these notes.

11.2 Loop-erased random walk, uniform spanning trees and forests

For this section, [LyPer10, Chapters 4 and 10] and the fantastic [BLPS01] are our main references.

On a finite graph, we can use loop-erased random walk to produce a spanning tree, with Wilson's algorithm: In a connected finite graph G, we choose an order of the vertices, x_0, x_1, ..., x_n. We produce a path from x_1 to x_0 by starting a random walk at x_1, stopping it when we hit x_0; if the random walk produces some loops, we erase them in the order of appearance, and keep the loop-erasure of the walk. Then we start a walk from x_2 till we hit the path between x_0 and x_1, take the loop-erasure again, and so on, until all vertices are included.

Theorem 11.4 (Wilson's algorithm). If G is a finite graph, then Wilson's algorithm (defined above) samples a spanning tree with the uniform distribution (from now on, UST, for uniform spanning tree).

One corollary to this beautiful algorithm (but it was proved first by Kirchhoff in 1847) is that edge marginals in the UST have random walk and electric network interpretations. Moreover, the joint probability for k given edges to lie in the UST is given by a k × k determinant involving currents; this is the Transfer-Current Theorem of Burton and Pemantle [BurtP93] that we will include in a later version of these notes.

In an infinite graph G, we can take an exhaustion G_n ↗ G by finite connected subgraphs, and pick a UST in each G_n, denoted by UST_n. Using the electric interpretations, it can be shown that if S ⊂ E(G_n) is a finite set of edges, then P[S ⊂ UST_n] ≥ P[S ⊂ UST_{n+1}], hence the limit lim_n P[S ⊂ UST_n] exists for all finite S. These limit probabilities form a consistent family, so the Kolmogorov extension theorem gives us a measure, the free uniform spanning forest, denoted by FUSF. If we have two exhaustions, G_n and G'_n, then for any n there is an m such that G_n ⊂ G'_m, and vice versa, hence the limit does not depend on the exhaustion. The limit object cannot have cycles, since a UST_n restricted to a finite subgraph has no cycles, and any such consistency condition will already be satisfied in some large enough G_n; however, the limit is not necessarily connected, hence it is a forest. The independence of the exhaustion also implies that the FUSF of a Cayley graph has a group-invariant law: if we translate a finite edge-set S by some g ∈ Γ, and want to show that P[S ⊂ FUSF] = P[g(S) ⊂ FUSF], we can just translate the entire exhaustion we used in the definition.

In another version, at each step of the exhaustion we "wire" all the boundary vertices of G_n into a single vertex, obtaining G*_n. (We can keep or delete the resulting multiple edges and loops; this will not matter.) In this wired G*_n we pick a UST and call it UST*_n. Since G*_n can be considered as having the same edge set as G_n, but fewer vertices, it can now be shown that if S ⊂ E(G*_n), then P[S ⊂ UST*_n] ≤ P[S ⊂ UST*_{n+1}], hence we again have a limit, called the WUSF: the wired uniform spanning forest.

It is intuitively clear (and can be proved using electric networks again) that UST_n stochastically dominates UST*_n, i.e., they can be coupled so that UST*_n ⊆ UST_n. Hence, in the limit, the FUSF stochastically dominates the WUSF.
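Wilson's algorithm is short enough to implement directly. The sketch below uses the standard "next-pointer" bookkeeping for loop-erasure (overwriting the pointer at a revisited vertex automatically pops the loop), and empirically checks uniformity on a 4-cycle, which has exactly four spanning trees; all names and parameters are ad hoc.

```python
import random

def wilson_ust(adj, vertices):
    """Sample a uniform spanning tree via loop-erased random walks."""
    root = vertices[0]
    in_tree = {root}
    tree_edges = set()
    for start in vertices[1:]:
        if start in in_tree:
            continue
        # Walk until the current tree is hit; overwriting nxt[u] on a
        # revisit erases the loop just closed.
        nxt = {}
        u = start
        while u not in in_tree:
            nxt[u] = random.choice(adj[u])
            u = nxt[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            tree_edges.add(frozenset((u, nxt[u])))
            in_tree.add(u)
            u = nxt[u]
    return tree_edges

# The 4-cycle has exactly 4 spanning trees (omit one edge each), so
# every tree should appear with frequency about 1/4.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
random.seed(2)
counts = {}
for _ in range(4000):
    t = frozenset(wilson_ust(adj, [0, 1, 2, 3]))
    counts[t] = counts.get(t, 0) + 1
print(sorted(c / 4000 for c in counts.values()))  # all near 0.25
```

The remarkable point of Theorem 11.4, invisible in the code, is that the output distribution does not depend on the chosen order of the vertices.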
Another possible approach to a limit object could be to take, at each step of the exhaustion G_n ⊂ G, a coupling UST*_n ⊆ UST_n as above. However, there seems to be no unique canonical coupling on the finite level, hence there does not seem to be a unique coupling in the limit, either.
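The electric interpretation of edge marginals mentioned above goes back to Kirchhoff (1847): with unit conductances, P[e ∈ UST] equals the effective resistance across e. On a small graph this can be verified exactly, counting spanning trees via the Matrix-Tree theorem; a sketch, with K4 as an arbitrary test graph.

```python
import numpy as np

# K4 with unit conductances (arbitrary test graph).
n = 4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def laplacian(edge_list):
    M = np.zeros((n, n))
    for u, v in edge_list:
        M[u, u] += 1; M[v, v] += 1; M[u, v] -= 1; M[v, u] -= 1
    return M

def tree_count(edge_list):
    # Matrix-Tree theorem: delete one row and column, take the determinant.
    return round(np.linalg.det(laplacian(edge_list)[1:, 1:]))

L = laplacian(edges)
Lplus = np.linalg.pinv(L)          # pseudoinverse, for effective resistances
total = tree_count(edges)          # K4 has 16 spanning trees

for e in edges:
    u, v = e
    # Trees containing e = total minus trees of G - e:
    p = 1 - tree_count([f for f in edges if f != e]) / total
    x = np.zeros(n); x[u], x[v] = 1.0, -1.0
    r_eff = x @ Lplus @ x          # effective resistance across e
    assert abs(p - r_eff) < 1e-9
print(total, p)                    # 16, and every edge marginal is 0.5
```

On K4 every edge has marginal 1/2, matching the effective resistance 1/2 between adjacent vertices; the same identity is what survives, through the free and wired currents, in the infinite-volume FUSF and WUSF.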

The group invariance of the WUSF also follows from Wilson's algorithm rooted at infinity: on transient graphs, the WUSF can be generated as follows. Order the vertices of the network as {x_1, x_2, ...}. Start a loop-erased random walk at x_1; by transience, this walk will escape to infinity. Now start a loop-erased random walk at x_2: either this second walk will also go to infinity, or it will intersect the first branch at some point (possibly at x_2 itself), and we end the walk there. Repeat ad infinitum. It can be shown that the resulting forest has the law of the WUSF, regardless of the chosen ordering, and on a Cayley graph this immediately gives a group-invariant law.

No similar algorithm is known for generating the FUSF, and, partly for this reason, the WUSF is much better understood than the FUSF. On the other hand, the connection of the FUSF to β1^(2)(Γ) makes the free version more interesting, as shown below by a few nice results.

Although we have the stochastic domination of the WUSF by the FUSF, the following theorem shows that the situation with invariant monotone couplings in general is not simple:

Theorem 11.5 ([Mes10]). There are two Aut(G)-invariant processes F, G ⊆ E(G) on G = T_3 × Z such that G stochastically dominates F, but there exists no Aut(G)-invariant monotone coupling F ⊆ G.

⊲ Exercise 11.3. *** Does there exist a group-invariant monotone coupling between the FUSF and the WUSF? A recent result of R. Lyons and A. Thom is that this is the case for sofic groups.

Both the FUSF and the WUSF consist of infinite trees only. For instance, this can be used to show that the two coincide on all amenable transitive graphs [H¨ag95]:

Proposition 11.6. On any amenable transitive graph, FUSF = WUSF almost surely.

Proof. Due to Wilson's algorithm rooted at infinity, E deg_WUSF(x) = 2. On the other hand, the FUSF is an invariant spanning forest all of whose components are infinite, and for any such forest F, if F_n is a Følner sequence, then the average degree along F_n is 2, by Exercise 13.2 (b). Hence we also have E deg_WUSF(x) = E deg_FUSF(x) = 2 for all x. Since, moreover, we have the stochastic domination, we must have equality.

More quantitatively, on any Cayley graph:

Theorem 11.7 ([BLPS01], [Lyo09]). (1) E deg_WUSF(x) = 2. (2) E deg_FUSF(x) = 2 + 2 β1^(2)(Γ).

The Transfer-Current Theorem mentioned above implies that both the FUSF and the WUSF are determinantal processes: the edge marginals of the WUSF are given by wired currents, while those of the FUSF are given by free currents. Since the difference between free and wired currents is given by the non-trivial harmonic Dirichlet functions, we get the following result, with credits going to [Doy88, H¨ag95, BLPS01, Lyo09]:

Theorem 11.8. WUSF = FUSF on a Cayley graph if and only if HD = R.

When is the WUSF a single tree? The answer, for any network, is the following:

Theorem 11.9 ([BLPS01]). Let G be any network. The WUSF is a single tree a.s. if and only if two independent random walks started at any two different states intersect with probability 1.

Theorem 11.10 ([LyPS03]). Consider the USF := WUSF = FUSF on Zd. If d ≤ 4, then it is a single tree a.s.; if d > 4, then it contains infinitely many trees.

Even when there are infinitely many trees, they are not at all far from each other, as shown by the following beautiful result:

Theorem 11.11 ([BenKPS04]). Consider the USF on Zd, d > 4. If 4 < d ≤ 8, then, given any two of the infinitely many trees, there is a vertex in the first and a vertex in the second which are neighbours. If 8 < d ≤ 12, then it may be the case that there exist two trees which are not neighbours, but then there is a chain of trees connecting the first to the second, each the neighbour of the next. In general, if 4n < d ≤ 4(n + 1), then any two trees are connected by such a chain containing at most n − 1 trees, not including the original two trees.

⊲ Exercise 11.4 (Gaboriau).*** Let Γ be a group, G a Cayley graph of it, and ǫ > 0 be given. Does there exist an invariant percolation Pǫ with edge marginal at most ǫ such that FUSF ∪ Pǫ is connected? If the answer is yes, it would imply that cost(Γ) = 1 + β1(Γ). The corresponding question for the WUSF has the following answer: WUSF ∪ PBer(ǫ) is connected for all ǫ > 0 if and only if Γ is amenable.

12 Percolation theory

Percolation theory discusses the connected components (clusters) in random graphs. Frequently, we construct such a graph by taking a non-random graph and erasing some of the edges based on some probability model. One example of this is Bernoulli(p) bond percolation, where each edge of a graph is erased (gets closed) with probability 1 − p, and kept (remains open) with probability p, independently. (We will usually denote the Bernoulli distribution by Ber(p).) Another version is site percolation, where we keep and delete vertices, and look at the components induced by the kept vertices. In fact, bond percolation is just site percolation on the "line graph" of the original graph, hence site percolation is more general; but the results are always very similar in the two cases, and we will usually consider bond percolation.

In a slightly different direction, Ber(p) bond percolation on the complete graph Kn on n vertices is the classical Erdős–Rényi (1960) model G(n, p) of probabilistic combinatorics, see [AloS00]. Bernoulli percolation on Zd is a classical subject of statistical mechanics, one of the main examples of the study of critical phenomena, see [Gri99]. Percolation in the plane is a key example in modern probability, see [Wer07]. Percolation on groups beyond Zd was initiated by Benjamini and Schramm in 1996 [BenS96c], for which the standard reference is [LyPer10].
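Ber(p) bond percolation is also trivial to simulate. Here is a small sketch (our own illustration, not from the text) on a finite n × n box of Z², with clusters tracked by a basic union-find structure; at p = 0 every cluster is a single site, at p = 1 the whole box is one cluster.

```python
import random

class DisjointSets:
    """Union-find structure to track percolation clusters."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def bernoulli_bond_percolation(n, p, rng=random):
    """Ber(p) bond percolation on the n x n box of Z^2; returns sorted cluster sizes."""
    sites = [(i, j) for i in range(n) for j in range(n)]
    ds = DisjointSets(sites)
    for i, j in sites:
        for nb in ((i + 1, j), (i, j + 1)):        # each edge of the box exactly once
            if nb in ds.parent and rng.random() < p:  # edge kept (open) with prob p
                ds.union((i, j), nb)
    sizes = {}
    for s in sites:
        r = ds.find(s)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```

Running this for growing n and p near 1/2 is a quick way to "see" the phase transition on Z² discussed below.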

12.1 Percolation on infinite groups: pc, pu, unimodularity, and general invariant percolations

The most important feature of Bernoulli percolation on most infinite graphs is a simple phase transition: the existence of some critical pc ∈ (0, 1), above which there is an infinite connected component, and below which there is not. The strict definition is as follows:

Definition 12.1. Consider Ber(p) percolation on an infinite connected graph G(V, E). Define pc := inf{ p : Pp[ ∃ an infinite cluster ] = 1 }. We can also write pc = inf{ p : θx(p) > 0 ∀x ∈ V }, where θx(p) := Pp[ |C(x)| = ∞ ] and C(x) is the cluster of the vertex x.

It is not quite obvious that these two definitions are equivalent to each other, though pc(Z) = 1 is clear with either one. We will later see that for non-1-dimensional graphs one expects pc < 1, and will prove, among other things, that pc(Z2, bond) = 1/2 and pc(Td+1, site) = pc(Td+1, bond) = 1/d.

But first some explanations are in order. First of all, is {∃ an infinite cluster} really an event, i.e., a Borel measurable subset of {0, 1}^{E(G)} with the product topology? Yes, since it is equal to ∪_{x∈V} ∩_{n≥1} {x ←→ ∂Bn(x)}. Here is an exercise clarifying what the measurability of having an infinite cluster means:

⊲ Exercise 12.1. Let G(V, E) be any bounded degree infinite graph, and Sn ր V an exhaustion by finite connected subsets. Is it true that, for p > pc(G),
lim_{n→∞} Pp[ the largest cluster for percolation inside Sn is the subset of an infinite cluster ] = 1 ?

Next, is it obvious that Pp[ ∃ an infinite cluster ] is monotone in p? Well, a simple proof is by the standard coupling of all Ber(p) percolation measures for p ∈ [0, 1]: to each edge e (in the case of bond percolation) assign an independent label U(e) ∼ Unif[0, 1]; then ωp := {e ∈ E(G) : U(e) ≤ p} is a Ber(p) bond percolation configuration, simultaneously for each p. In this coupling, ωp ⊆ ωp′ for p ≤ p′, hence {∃ ∞ cluster in ωp} ⊆ {∃ ∞ cluster in ωp′}, so the probability is indeed monotone, and the first definition makes perfect sense. Moreover, whether or not an infinite cluster exists does not depend on the states of any finite set of edges, so this is a tail event, and Kolmogorov's 0-1 law (see Theorem 9.15) implies that Pp[ ∃ an infinite cluster ] is either 0 or 1.

Regarding the equivalence of the two definitions: if the probability that an infinite cluster exists is 0 at some p, then the probability that any given x is part of an infinite cluster must also be 0; conversely, if there is an infinite cluster with positive probability, then there must be some x with Pp[ |C(x)| = ∞ ] > 0 (by the simplest union bound). But why does this happen for all x at the same p values in a non-transitive connected graph? We are going to show this in three different ways, just to introduce some basic techniques that will often be useful.

First proof of Pp[ ∃ ∞ cluster ] = 1 implying Pp[ |Cx| = ∞ ] > 0 for any x ∈ V(G). The events En := {Bn(x) intersects an infinite cluster} increase to {∃ ∞ cluster} as n → ∞, hence Pp[ ∃ ∞ cluster ] = 1 implies that there exists some n = n(x) such that Pp[ ∂V Bn(x) ←→ ∞ ] > 0.
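The standard coupling is essentially one line of code: give every edge an independent Unif[0,1] label once, and read off ω_p for every p from the same labels. A sketch (the function name is ours):

```python
import random

def standard_coupling(edges, ps, rng=random):
    """One Unif[0,1] label per edge yields Ber(p) configurations
    omega_p = {e : U(e) <= p} simultaneously for all p, monotone in p."""
    u = {e: rng.random() for e in edges}
    return {p: {e for e in edges if u[e] <= p} for p in sorted(ps)}
```

By construction ω_p ⊆ ω_{p′} whenever p ≤ p′, so any increasing event, such as {∃ ∞ cluster}, automatically has a monotone probability in p.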

On the other hand, we also have Pp[ x ←→ y for all y ∈ ∂V Bn(x) ] > 0, since this requires just finitely many bits to be open. These two events of positive probability depend on different bits, hence are independent of each other, so their intersection also has positive probability. Since the intersection implies {x ←→ ∞}, we are done.

In fact, Bernoulli percolation has a basic property called finite energy, or insertion and deletion tolerance. For any event A and any e ∈ E(G), let A ∪ {e} := {ω ∪ {e} : ω ∈ A} and A \ {e} := {ω \ {e} : ω ∈ A}. Then the property is that P[ A ] > 0 implies that P[ A ∪ {e} ] > 0 and P[ A \ {e} ] > 0. For Ber(p) percolation, we have the quantitative bounds Pp[ A ∪ {e} ] ≥ p Pp[ A ] and Pp[ A \ {e} ] ≥ (1 − p) Pp[ A ], independently of A. Why? We can focus only on finite cylinder events A = {ω : ω(ei) = 1, i = 1, . . . , k, ω(fj) = 0, j = 1, . . . , ℓ}, since any measurable event can be approximated by a finite union of such events. For such an A, we have Pp[ A ∪ {e} ] = Pp[ A ] if e ∈ {ei}; Pp[ A ∪ {e} ] = p/(1 − p) Pp[ A ] if e ∈ {fj}; and Pp[ A ∪ {e} ] = p Pp[ A ] if e is not in {ei} ∪ {fj}. In each case, Pp[ A ∪ {e} ] ≥ p Pp[ A ], and similarly Pp[ A \ {e} ] ≥ (1 − p) Pp[ A ].

Why is this property called finite energy? One often wants to look at not just Bernoulli percolation, but more general percolation processes: random subsets of edges or vertices. For instance, the Uniform Spanning Tree and Forests from Section 11.2 are bond percolation processes, while the Ising model of magnetization, where the states of the vertices (spins) have a tendency to agree with their neighbours (with a stronger tendency if the temperature is lower), is a site percolation process, see Section 13.1. Such models are often defined by assigning some energy to each (finite) configuration, and then giving larger probabilities to smaller energy configurations in some way (typically using a negative exponential of the energy). The finite energy property then says that any finite modification of a configuration changes the energy by some finite additive constant, and hence the probability by a positive factor. If changing one edge or site changes the probability by a uniformly positive factor, then the model has uniformly finite energy, or uniform deletion/insertion tolerance. Bernoulli percolation and the Ising model are such examples, while the UST is neither deletion nor insertion tolerant.

We can now easily establish that if Pp[ |C(x)| = ∞ ] > 0 holds at some p for some x ∈ V(G), then the same holds for any y ∈ V(G) in place of x.

Second proof. Take a finite path γ between x and y. Then, by repeated applications of the insertion tolerance shown above,
Pp[ y ←→ ∞ ] ≥ Pp[ γ is open, x ←→ ∞ ] ≥ p^{|γ|} Pp[ x ←→ ∞ ],
and we are done.

Third proof. The events {x ←→ y} and {x ←→ ∞} are both monotone increasing, hence, by the Harris inequality (Theorem 12.1 below),
Pp[ y ←→ ∞ ] ≥ Pp[ x ←→ y, x ←→ ∞ ] ≥ Pp[ x ←→ y ] Pp[ x ←→ ∞ ],
and we are done again.

In other words: given a partially ordered set Ω, we say that an event A is increasing if 1A(ω1) ≤ 1A(ω2) whenever ω1 ≤ ω2, where 1A is the indicator of the event A. For bond percolation on G, the natural partially ordered set Ω is {0, 1}^{E(G)} with coordinate-wise ordering, where ω(e) = 1 represents the edge e being kept; i.e., Ω is the set of all subsets of E(G), ordered by inclusion.

Theorem 12.1 (Harris-FKG inequality). Increasing events in the Bernoulli product measure are positively correlated: P[ A ∩ B ] ≥ P[ A ] P[ B ]. More generally, E[ fg ] ≥ E[ f ] E[ g ] if f and g are increasing functions with finite second moments.

Theorem 12.1 is in fact classical for the case when we have only one bit, Ω = {0, 1} with the Ber(p) measure. Even more generally, one can consider two increasing functions on R with any probability measure µ, and then the statement is one of Chebyshev's inequalities:
∫_R f(x)g(x) dµ(x) ≥ ∫_R f(x) dµ(x) ∫_R g(x) dµ(x). (12.1)
The proof of this is very easy:
∫_R ∫_R [f(x) − f(y)] [g(x) − g(y)] dµ(x) dµ(y) ≥ 0,
since the integrand is always nonnegative, and expanding and rearranging gives (12.1). The general case of Theorem 12.1, namely a product probability measure µ1 × · · · × µd on Rd, can then be proved by induction, coordinate-by-coordinate. Prove it yourself. There is also a generalization of Theorem 12.1 for dependent percolation processes; we will do this in Theorem 13.1.

The explanation for the name Harris-FKG is that Harris proved it for product measures (just what we stated) in [Har60], a fundamental paper for percolation theory. The general case, due to Fortuin, Kasteleyn and Ginibre (1971), concerns measures where the energy of a configuration is given by certain nice local functions favouring agreement: this is the FKG inequality. For instance, the Ising model does satisfy this inequality. The proof of this must be very different from what is sketched above, of course, since the measure does not have the product structure anymore; it goes via a beautiful Markov chain coupling argument. See [Wik10b] for a very short introduction to the general FKG inequality, and [GeHM01] for a proper one. At the opposite end from the FKG inequality, in the UST, increasing events supported on disjoint sets are negatively correlated, as usual for determinantal measures [LyPer10, Section 4.2].

When looking at general percolation processes on groups and transitive graphs G, one usually wants an invariant percolation process: a random subset of edges or vertices whose law is invariant under the automorphism group of G.
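The Harris inequality can be checked by brute force on small configuration spaces. The sketch below (our illustration; `ber_expectation` and `harris_gap` are hypothetical names) computes exact expectations under the Ber(p) product measure on {0,1}^n and verifies E[fg] ≥ E[f]E[g] for a pair of increasing indicator functions.

```python
from itertools import product

def ber_expectation(h, p, n):
    """Exact expectation of h under the Ber(p) product measure on {0,1}^n."""
    total = 0.0
    for omega in product((0, 1), repeat=n):
        prob = 1.0
        for bit in omega:
            prob *= p if bit else 1 - p
        total += prob * h(omega)
    return total

def harris_gap(f, g, p, n):
    """E[f g] - E[f] E[g]; nonnegative for increasing f, g by Harris-FKG."""
    return (ber_expectation(lambda w: f(w) * g(w), p, n)
            - ber_expectation(f, p, n) * ber_expectation(g, p, n))
```

For example, with f = 1{both of the first two bits are open} and g = 1{one of the last two bits is open} at p = 0.4, one gets E[f] = 0.16, E[g] = 0.64, E[fg] = 0.16, so the gap is 0.0576 > 0.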

How does the automorphism group act on configurations and events? Functions can always be pulled back, even via non-invertible maps of the underlying spaces: for any γ ∈ Aut(G) and x ∈ V(G), we let ω^γ(x) := ω(γ(x)). Note that if δ is another automorphism, then ω^{δ◦γ}(x) = ω(δ(γ(x))) = (ω^δ)(γ(x)) = (ω^δ)^γ(x); hence, if Aut(G) acts from the left on G, then it acts from the right on configurations. An event A can be considered as a set of configurations, hence A^γ := {ω^γ : ω ∈ A}, and Aut(G) acts from the right again. For instance, for the event A = {|Cx| > 100}, we have A^γ = {|C_{γ^{-1}(x)}| > 100}, which is a different event; for the event B = {∃ x : |Cx| > 100}, we have B^γ = B.

Besides Bernoulli percolation, the Uniform Spanning Forests from Section 11.2 and the Ising model from Section 13.1 are examples of invariant percolations. We will be concerned here mainly with ergodic measures. There are natural invariant percolations that are not ergodic: a trivial example is taking the empty set or the full vertex set with probability 1/2 each; a less trivial one is the Ising model on Z2 at subcritical temperature with a free boundary condition, where there are more than one phases, see again Section 13.1. Most results and conjectures below will concern percolation on transitive graphs, but let us point out that there are always natural modifications (with almost identical proofs) for quasi-transitive graphs, i.e., when V(G) has finitely many orbits under Aut(G).

Lemma 12.2. Ber(p) bond (or site) percolation on any infinite transitive graph G is ergodic: any invariant event has probability 0 or 1. Instead of transitivity, it is enough that there is an edge with an infinite orbit (and then it is easy to see that every edge has an infinite orbit).

Proof. Any measurable event A can be approximated by a finite union of finite cylinder events: for any ǫ > 0 there is an event Aǫ depending only on finitely many coordinates such that Pp[ A ∆ Aǫ ] < ǫ. Now, there exists a "large enough" γ ∈ Aut(G) such that the supports of Aǫ and γ(Aǫ) are disjoint, hence Pp[ Aǫ ∩ γ(Aǫ) ] = Pp[ Aǫ ]^2 by independence. If A is invariant, then Pp[ A ∩ γ(A) ] = Pp[ A ], while, by choosing ǫ small enough, Pp[ A ∩ γ(A) ] can be arbitrarily well approximated by Pp[ Aǫ ∩ γ(Aǫ) ] = Pp[ Aǫ ]^2. Altogether, Pp[ A ] = Pp[ A ]^2, hence Pp[ A ] ∈ {0, 1}.

Here is a basic application of ergodicity and insertion tolerance:

Lemma 12.3. In any ergodic insertion tolerant invariant percolation process on any infinite transitive graph, the number of infinite clusters is an almost sure constant, namely 0, 1, or ∞.

Proof. For any k ∈ {0, 1, 2, . . . , ∞}, the event {the number of infinite clusters is k} is translation invariant, hence it has probability 0 or 1 by ergodicity; i.e., the number of infinite clusters is an almost sure constant. Now assume, for contradiction, that this constant is k with 1 < k < ∞. Then, for any c < 1, there exists an integer r such that the probability that the ball Br(o) intersects at least two infinite clusters is at least c.

But then, by insertion tolerance, we can change everything in Br(o) to open, resulting in an event with positive probability on which the number of infinite clusters is at most k − 1: a contradiction.

Now, back to the question of pc ∈ (0, 1). For percolation on Z2, it is easy to prove that 1/3 ≤ pc(Z2) ≤ 2/3. For the lower bound, note that if C(o) is infinite, then it contains an infinite self-avoiding path starting from o, while the number of such paths of length n in Z2 is at most 4 · 3^{n−1}; hence
Ep[ number of open self-avoiding paths of length n from o ] ≤ 4(3p)^n ≤ exp(−cn) for p < 1/3,
so Pp[ ∃ open self-avoiding path of length n from o ] ≤ exp(−cn). This is summable in n, which implies by Borel-Cantelli that there is no infinite cluster almost surely.

For the upper bound, we count closed circuits in the dual graph in a similar way (i.e., circuits formed by dual edges that correspond to primal edges that were closed in the percolation):
Pp[ ∃ closed self-avoiding dual circuit of length n surrounding o ] ≤ n 4(3(1 − p))^n ≤ exp(−cn) for p > 2/3,
where the factor n comes from the fact that any such dual circuit must intersect the segment [0, n] on the real axis of the plane. Hence, for N large enough, with positive probability there is no closed dual circuit longer than N surrounding the origin. But if the union U of the open clusters intersecting BN/8(o) is finite, then take its exterior edge boundary ∂E U, i.e., the boundary edges with an endpoint in the infinite component of Z2 \ U; this boundary has length larger than N, and the edges dual to these edges contain a closed circuit around o of length larger than N. This implies that, with positive probability, ∂V BN/8(o) must be connected to infinity, i.e., there is an infinite cluster with positive probability. (This counting of dual circuits and using the first moment method is called the Peierls argument.)

Figure 12.1: Counting primal self-avoiding paths and dual circuits.

Now, the upper bound relied on planar duality, hence it is certainly less robust. For other graphs, the lower bound generalizes easily: if G has maximal degree d, then pc(G) ≥ 1/(d − 1). The straightforward generalization of the upper bound that does hold is the following:

⊲ Exercise 12.2. Show that if in a graph G the number of minimal edge cutsets (a subset of edges whose removal disconnects a given vertex from infinity, minimal w.r.t. containment) of size n is at most exp(Cn) for some C < ∞, then pc(G) ≤ 1 − ǫ(C) < 1.
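The counting bound behind the lower bound, cn ≤ 4·3^{n−1} for the number of self-avoiding walks of length n on Z², is easy to check for small n by exhaustive enumeration (the true counts are 4, 12, 36, 100, 284, …). A brute-force sketch:

```python
def count_saws(n):
    """Count self-avoiding walks of length n in Z^2 starting at the origin."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def extend(path, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        x, y = path[-1]
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                path.append(nxt)
                total += extend(path, visited, remaining - 1)
                path.pop()
                visited.remove(nxt)
        return total
    return extend([(0, 0)], {(0, 0)}, n)
```

The exponential growth rate of these counts (the "connective constant" of the lattice) is what makes the first moment bound work for p < 1/3.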

As we will see, pc < 1 is closely related to having such an exponential bound on the number of minimal cutsets; for all known groups, such a bound holds.

⊲ Exercise 12.3. Show that Zd, d ≥ 2, has such an exponential bound on the number of minimal edge cutsets.

Conjecture 12.4 ([BenS96c]). If G is a non-one-dimensional graph, i.e., if it satisfies an isoperimetric inequality IP_{1+ǫ} for some ǫ > 0 (defined in Subsection 5.1), then pc(G) < 1.

In particular, any finitely generated Cayley graph G(Γ, S) of a group that is not a finite extension of Z should have pc < 1; for instance, pc(Zd) < 1 for d ≥ 2, although we know that already from Z2 ⊆ Zd.

This has been verified in many cases. We are going to prove that if Γ is a finitely presented group with one end, then pc(G(Γ, S)) < 1. The first proof [BabB99] used cohomology groups, but there is a few-line linear algebra proof by Ádám Timár [Tim07], which we now present. Timár also proved that CutCon(G) < ∞ (defined presently) and having an exponential bound on the number of minimal cutsets are both quasi-isometry invariants.

Let G(V, E) be a bounded degree graph, and let ∂G be the set of its ends (see Section 3.1). Let CutCon(G) be the cutset-connectivity of G: the smallest t ∈ Z+ such that any minimal edge cutset Π between any two elements of V(G) ∪ ∂G is t-connected, in the sense that in any non-trivial partition of it into two subsets, Π = Π1 ∪ Π2, there are ei ∈ Πi whose distance in G is at most t. For instance, CutCon(Z2) = 1, CutCon(hexagonal lattice) = 2, and CutCon(Td) = 1 for all d — despite the fact that cutsets separating a vertex from infinity (i.e., from the set of all ends of Td) do not have bounded connectivity. The lamplighter group below gives a Cayley graph with CutCon = ∞.

Proposition 12.5. Assume that the cycles of length at most t generate the entire cycle space of G over F2. (This is obviously the case, with some finite t, if G is a Cayley graph of a finitely presented group Γ.) Then CutCon(G) ≤ t/2.

Proof. Note that each cycle in G can be viewed as a configuration of edges, i.e., an element of {0, 1}^{E(G)}, or even as an element of the vector space F2^{E(G)}. The cycle space of G over F2 is then the linear subspace spanned by all the cycles. Let x, y ∈ V(G) ∪ ∂G, and let Π = Π1 ∪ Π2 be a minimal edge cutset separating them, with a nontrivial partition; note that Π must be finite. If x (or y) is an end, define x′ (resp. y′) to be a vertex such that there is a path between x and x′ (resp. y and y′) in G \ Π; otherwise, let x′ := x (y′ := y). Because of the minimality of Π, for i = 1, 2, there is a path Pi between x′ and y′ that avoids Π_{3−i}. Now look at P1 + P2 ∈ F2^{E(G)}. This is clearly in the cycle space, so we can write P1 + P2 = Σ_{c∈K} c for some set K of cycles of length at most t. Let K1 ⊆ K be the subset of cycles that are disjoint from Π2, and write
θ := P1 + Σ_{c∈K1} c = P2 + Σ_{c∈K\K1} c.
We see from the first sum that this θ ⊆ E(G) is disjoint from Π2. But the only odd degree vertices in θ are x′ and y′, so θ must contain a path from x′ to y′. That path must intersect Π, hence it intersects Π1. But in the second sum, P2 is disjoint from Π1, so there must be some cycle in K \ K1 that intersects Π1. By the definition of K1, this cycle also intersects Π2, and its length is at most t; hence there are e1 ∈ Π1 and e2 ∈ Π2 at distance at most t/2 from each other, which proves the claim.

So, suppose Γ is a finitely presented group with one end. Then the proposition says that minimal cutsets between a vertex o and infinity have bounded connectivity. This implies that, once we fix an edge in a minimal cutset of size n, there are only exponentially many possibilities for the cutset. So, if we show that there is a set Sn of edges, with size at most exponential in n, such that each such cutset must intersect it, then we get an exponential upper bound on the number of such cutsets, and the Peierls argument of Exercise 12.2 gives pc < 1.

Now, Γ must have volume growth at least quadratic: it cannot have linear growth, because then Γ would be a finite extension of Z and it would have two ends; and if it has polynomial growth, then by Gromov's Theorem 10.1 it is almost nilpotent, hence it has an integer growth rate, which then cannot be 1. We claim that the edge set of the ball BAn(o) is a suitable choice for Sn, for A > 0 large enough. Indeed, let Π be a minimal cutset of size n around o, and let K be the component of o in G \ Π, so that Π = ∂E K. If Π does not intersect BAn(o), then K ⊇ BAn(o), hence |K| ≥ |BAn(o)| ≥ c (An)^2 by the at least quadratic growth. On the other hand, the Coulhon-Saloff-Coste isoperimetric inequality says that |∂E K| ≥ |K| / (2 ρ(2|K|)) for any finite K ⊂ V(G), where ρ is the inverse growth function; at least quadratic growth gives ρ(v) ≤ C′ √v, so n = |Π| ≥ c′ √|K| ≥ c′′ A n. Choosing A large enough, this is a contradiction, so every cutset of size n around o must intersect BAn(o); and |E(BAn(o))| is indeed at most exponential in n. Therefore, if Γ is a finitely presented group with one end, then pc(G(Γ, S)) < 1. Recall here that groups of polynomial growth are all almost nilpotent and hence finitely presented, so this argument applies to all of them.

Furthermore, if an infinite group does not have 1 or 2 ends, then it has continuum many, and it is then non-amenable. In that case, the following result applies; it works even without finite presentedness or transitivity, as proved by [BenS96c] (or see [LyPer10, Theorem 6.24]):

Proposition 12.6. For bond percolation on any graph G with edge Cheeger constant h > 0, we have pc(G) ≤ 1/(h + 1) < 1.

Proof. Fix an arbitrary ordering of the edges, E(G) = {e1, e2, . . .}. Explore the cluster of a fixed vertex o by taking the first ei with an endpoint in o and examining its state, then taking the first unexamined ei with one endpoint in the current cluster of o and the other endpoint outside, extending the cluster of o by this edge if it is open, and so on. If the full C(o) is finite, then this process stops after exploring an open spanning tree of the cluster, plus its closed boundary, and possibly other closed edges between vertices of the spanning tree. So, if |C(o)| = n, then we have found exactly n − 1 open edges and at least |∂E C(o)| ≥ hn closed edges in this process of examining i.i.d. Ber(p) variables. If p > 1/(h + 1), then, by a standard large deviation estimate, seeing such a small proportion of open edges is exponentially unlikely, and we get Pp[ n < |C(o)| < ∞ ] < exp(−cn) for some c = c(p) > 0. Hence, with positive probability, the exploration never terminates — just like a biased random walk on Z might never get back to the origin — i.e., C(o) is infinite with positive probability. So, for any p > 1/(h + 1), we have θo(p) > 0, and pc(G) ≤ 1/(h + 1), as desired.

It is not very hard to show (but needs some "stochastic domination" techniques that we have not discussed yet) that pc(G(Γ, S)) < 1 is independent of the generating set; moreover, it is invariant under quasi-isometries. It is also known that any group of exponential growth has pc < 1 [LyPer10]. This means that, among groups, Conjecture 12.4 remains open only for groups of intermediate growth; however, all known examples of such groups have Cayley graphs containing a copy of Z2, hence the conjecture holds also for them [MuP01]. Conjecture 12.4 is also known to hold for planar graphs of polynomial growth that have an embedding into the plane without vertex accumulation points [Koz07].

⊲ Exercise 12.4.* Show that the Cayley graph of the lamplighter group Γ = Z2 ≀ Z with generating set S = {R, Rs, L, sL} is the Diestel-Leader graph DL(2, 2), and that it satisfies CutCon(G(Γ, S)) = ∞.

⊲ Exercise 12.5. Show that the lamplighter group with generating set S = {L, R, s} has pc(G(Γ, S)) < 1, by finding a subgraph in it isomorphic to the so-called Fibonacci tree F: a directed universal cover of the directed graph with vertices {1, 2} and edges {(12), (21), (22)}. (There are two directed covers, with root either 1 or 2.) Find the exponential volume growth of F, lim inf_n (log |Bn|)/n, and just quote the result (due to Russ Lyons) that this being positive implies pc(F) < 1.

As may be guessed from the previous exercise, the simplest case beyond Zd is percolation on a regular tree, which turns out to be a special case of a classical object: the cluster of a fixed vertex in Ber(p) percolation on the (k + 1)-regular tree Tk+1 is basically a Galton-Watson process with offspring distribution ξ = Binom(k, p). A usual GW process is a random tree where we start with a root in the zeroth generation; then each individual in the nth generation gives birth to an independent number of offspring with distribution ξ, together giving the (n + 1)th generation. (In the percolation cluster, the root has the special offspring distribution Binom(k + 1, p), but this difference does not affect questions like the existence of infinite clusters.) So, to find pc(Tk+1), it is enough to understand when a GW process survives with positive probability. A standard method for this is the following.

Consider the probability generating function of the offspring distribution,
f(s) := E[ s^ξ ] = Σ_{k≥0} P[ ξ = k ] s^k. (12.2)
Notice that if Zn is the size of the nth generation, with Z0 = 1, then E[ s^{Zn} ] = f^{◦n}(s), the n-fold iterate of f. The extinction events {Zn = 0} increase in n, therefore we have
q := P[ extinction ] = lim_{n→∞} P[ Zn = 0 ] = lim_{n→∞} f^{◦n}(0). (12.3)
Assuming that P[ ξ = 1 ] ≠ 1, the function f(s) is strictly increasing and strictly convex on [0, 1], and (12.3) easily implies that q is the smallest root of s = f(s) in [0, 1]. (Just draw a figure of how f(s) may look and what the iteration f^{◦n}(0) does!)

⊲ Exercise 12.6. Using the above considerations, show that if E[ ξ ] ≤ 1 but P[ ξ = 1 ] ≠ 1, then the GW process almost surely dies out. Deduce that pc(Tk+1) = 1/k and θ(pc) = 0.
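The fixed-point characterization q = lim f^{◦n}(0) is directly computable. A sketch (our illustration), using ξ = Binom(2, p), i.e., percolation on T3, so that f(s) = (1 − p + ps)² and criticality is at p = 1/2: for p = 3/4 the smallest root of s = f(s) is q = 1/9, while any p ≤ 1/2 gives q = 1.

```python
def extinction_probability(f, iterations=300):
    """q = lim_n f^{(n)}(0): iterate the offspring generating function from 0."""
    s = 0.0
    for _ in range(iterations):
        s = f(s)
    return s

# offspring xi = Binom(2, p), i.e. the cluster of Ber(p) percolation on T_3
p = 0.75
q = extinction_probability(lambda s: (1 - p + p * s) ** 2)  # smallest root of s = f(s): 1/9
```

The iteration converges monotonically to the smallest fixed point precisely because f is increasing and convex, which is the picture the text asks the reader to draw.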

Let us describe yet another strategy, which is robust enough to be used in different finite random graph models (see, e.g., [vdHof13]). Consider the following exploration process of any rooted tree, with the children of each vertex being ordered. During the process, vertices will be active, inactive, or neutral, and the active vertices will be ordered. In the 0th step, start with the root as the only active vertex; all other vertices are neutral. In the (i + 1)th step, examine the children of the first vertex v in the active list after the ith step, put these children at the beginning of the active list, and turn v inactive. If the tree is finite, the process ends up with all vertices inactive; if the tree is infinite, the process runs forever, with the vertices of the first infinite ray all being put into the active list eventually. See Figure 12.2.

Figure 12.2: On the left, the exploration of a finite tree: the labels on the vertices show the order in which they turn inactive, while the shades of vertex colours show the order in which the sets of children are put into the active list. On the right, the height shows the current number of active vertices after each step.

Now run this exploration on a GW tree with offspring distribution ξ, and let Si be the size of the active list after the ith step. Then S0 = 1 and S_{i+1} = S_i + X_{i+1} − 1, where {Xi : i ≥ 1} is a sequence of iid variables distributed as ξ: the number of children put into the active list at each step. In other words, the sizes Si form a random walk with iid increments distributed as ξ − 1, with the steps of the walk indexed by the vertices as they turn inactive, and the process dies out exactly when the walk first hits zero. If Eξ > 1, then the walk has a positive drift, hence with positive probability it will go to infinity without ever reaching zero, so the GW process survives with positive probability. If Eξ ≤ 1 but P[ ξ = 1 ] ≠ 1, then the walk is either recurrent or has a negative drift, hence Sn = 0 will eventually happen almost surely, and the GW process dies out. So we have reached the same conclusions as in the exercises above. The above exploration process is also well-suited to constructing a subcritical GW tree fully, or a supercritical GW tree partially.

⊲ Exercise 12.7.* (a) Consider a GW process with offspring distribution ξ, Eξ = µ, assuming P[ ξ = 1 ] ≠ 1, and let Zn be the size of the nth generation, with Z0 = 1. Show that Zn/µ^n is a martingale, and, using this, that µ ≤ 1 implies that the GW process dies out almost surely.
(b) On the other hand, if µ > 1 and E[ ξ^2 ] < ∞, then E[ Zn^2 ] ≤ C (E Zn)^2, and, by the Second Moment Method (if X ≥ 0 a.s., then P[ X > 0 ] ≥ (EX)^2 / E[ X^2 ] — prove this), deduce that the GW process survives with positive probability.
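The walk reformulation gives a very short simulation (our sketch, not from the text). Below, for ξ = Binom(2, p), the exploration walk is run until it hits 0 or until a truncation `max_steps` that we introduce: a supercritical process still alive after that many explored vertices is counted as surviving. For p = 3/4 the true survival probability is 1 − q = 8/9 by the generating function computation, and a modest Monte Carlo run lands close to it.

```python
import random

def survives(p, max_steps=1000, rng=random):
    """Exploration walk for xi = Binom(2, p): S_{i+1} = S_i + X_{i+1} - 1,
    where S_i is the size of the active list; extinction = hitting 0."""
    active = 1
    for _ in range(max_steps):
        if active == 0:
            return False
        # the explored vertex leaves the active list; its children enter
        active += (rng.random() < p) + (rng.random() < p) - 1
    return True  # still alive after max_steps explored vertices

rng = random.Random(12345)
estimate = sum(survives(0.75, rng=rng) for _ in range(1000)) / 1000
```

The truncation bias is negligible: conditioned on eventual extinction, a supercritical process behaves subcritically and dies quickly.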

key observables like θ(p) are continuous but non-diﬀerentiable at pc . Consider a spherically symmetric tree T where each vertex on the nth level Tn has moment method. i.. A main conjecture in percolation theory is that the critical behaviour θ(pc ) = 0 should hold in general: Conjecture 12. show that pc = 1/k and θ(pc ) > 0. (2) computer simulations show it on Euclidean lattices (Z3 being one of the most famous problems of statistical physics). For instance. the q = 1 case is just percolation) on the lattice Z2 conjecturally have a ﬁrst order (i.7 is quite classical also on nice planar lattices. q ) random cluster models (discussed in Section 13. exponents such as θ(p) = (p − pc )5/36+o(1) as p ց pc should always hold. This is known 120 .. e. it should look the same and be conformally invariant on any planar lattice. [LyPer10].Hausdorﬀ dimension of the boundary of the tree with a natural metric. [Lyo92]. where even the critical value is known in some cases. even though criticality happens at diﬀerent densities. the phase transition is of “second order”. Again. and showed that pc (T ) = 1/br(T ). and probably most groups “in between”. where TG is the triangular grid — these are Kesten’s theorems from 1980.. due to the work of Schramm.7 is also known for Zd with d ≥ 19. it should be noted that the FK(p. statistical mechanics at criticality in the plane is indeed a miraculous world. This. this method is good enough to calculate. (It can be shown that the only possible discontinuity of θ(p) is at p = pc . As an appetizer: although the value of pc for percolation is a lattice-dependent local quantity (see Conjecture 14.7 ([BenS96c]).1.) That transitivity is needed can be seen from the case of general trees: ⊲ Exercise 12.11). this should include Euclidean θ(p) = (p − pc )1+o(1) as p ց pc . 
Using the second The main reasons to conjecture continuity at pc are the following: (1) it is known to hold in the extreme cases (Z2 and regular trees) and in some other important examples.e.. Besides regular trees. Percolation. and others. Smirnov. pc (Z2 . by many-many transitive graphs. dn ∈ {k. and.thetapc} k n /|Tn | < ∞. (3) simple models of statistical physics tend to have a continuous phase transition. rather than a philosophical one. site) = 1/2. (In physics language. He has also found very close connections between percolation and random walks on T . critical values of such exponents are proved only for site percolation on the triangular lattice. On any transitive graph with pc < 1. bond) = pc (TG. θ(pc ) = 0. hence the conjecture says that θ(p) is continuous everywhere.e. such that limn→∞ |Tn |1/n = k . see Section 12. all non-amenable groups. using a perturbative Fourier-type expansion method called the Hara-Slade lace expansion. k + 1} children.3 for a bit more details. together with other critical exponents.g. and more generally. e. of course.) However. though the existence and Conjecture 12. but ∞ n=0 {c. namely all mean-ﬁeld graphs.. having a ﬁrst versus second order transition should be thought of as a quantitative question. This ﬁeld has seen amazing progress in the past few years. discontinuous) phase transition for q > 4. as we will discuss below. critical percolation itself should be universal: “viewed from far”. are conjecturally shared lattices for d > 6. Conjecture 12. exhibiting conformal invariance. see.7. e.g.g. For this and some other reasons to be discussed in a later version of these notes.
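The active-list reduction of the GW process to a random walk, described earlier in this section, can be checked by simulation. A minimal sketch (the offspring law supported on {0, 2}, the escape level, and the sample sizes are my choices, not from the text):

```python
import random

def gw_dies_out(p_two, rng, max_steps=100_000, escape=60):
    """Run the active-list walk for a GW tree with offspring in {0, 2},
    P[2 children] = p_two. The walk starts at 1 and steps by X_i - 1;
    the process dies out iff the walk hits 0. Reaching a high level is
    taken as (near-certain) survival."""
    active = 1
    for _ in range(max_steps):
        active += 1 if rng.random() < p_two else -1   # X_i - 1 is +1 or -1
        if active == 0:
            return True
        if active >= escape:
            return False
    return False

def extinction_freq(p_two, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(gw_dies_out(p_two, rng) for _ in range(trials)) / trials

# E[xi] = 2 * p_two. Subcritical mean 0.8: dies out almost surely.
# Supercritical mean 1.6: extinction prob q solves q = 0.2 + 0.8 q^2, so q = 1/4.
print(extinction_freq(0.4), extinction_freq(0.8))
```

The supercritical extinction frequency should come out near the fixed-point value 1/4, matching the generating-function computation.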

On the other hand, the cases of Zd with 3 ≤ d ≤ 18, and of all non-Abelian amenable groups, remain wide open. The following important general theorem settles Conjecture 12.7 for all non-amenable groups:

Theorem 12.8 (Benjamini-Lyons-Peres-Schramm [BLPS99a, BLPS99b]). For any non-amenable Cayley (or more generally, unimodular transitive) graph, percolation at pc dies out: θ(pc) = 0.

We will give a rough sketch of a proof of this theorem, but first we need to define when a transitive graph G is called unimodular. Let Γ be the automorphism group of G, and let Γx be the stabilizer of the vertex x. Then the condition is that |Γx y| = |Γy x| for all (x, y) ∈ E(G). So, a unimodular transitive graph looks the same from all vertices not only qualitatively, but also on a quantitative level. A simple example of a non-unimodular transitive graph is the grandparent graph: take a 3-regular tree, pick an end of it, and add an edge from each vertex to its grandparent towards the fixed end. This graph is transitive, but it has + and − directions in which it looks different. Another class of examples are the Diestel-Leader graphs DL(k, ℓ) with k ≠ ℓ; again, see Section 12.3 for more information, and [Tim06b] for a more complicated example. (But different directions are still possible in a finer sense even in the unimodular case: see the next exercise.)

⊲ Exercise 12.8.
(a) Give an example of a unimodular transitive graph G such that there exist neighbours x, y ∈ V(G) such that there is no graph-automorphism interchanging them.
(b)* Can you give an example with a Cayley graph?

The importance of unimodularity is the Mass Transport Principle, discovered by Ølle Häggström: G is unimodular iff for any random function f(x, y, ω), where x, y ∈ V(G) and ω ∈ Ω is the randomness (e.g., the percolation configuration), that is diagonally invariant, i.e., f(x, y, ω) has the same distribution as f(γx, γy, γω) for any γ ∈ Aut(G), we have

\sum_{y ∈ V} E f(x, y, ω) = \sum_{y ∈ V} E f(y, x, ω).   (12.4)

We think of f(x, y, ω) as the mass sent from x to y when the situation is given by ω; then the MTP means the conservation of mass on average. In the grandparent graph, if every vertex sends mass 1 to each grandchild, then the outgoing mass is 4, but the incoming mass is only 1. In the other direction, given a non-unimodular transitive graph, one can always construct such a deterministic mass transport rule that does not satisfy (12.4).

As the simplest case, Cayley graphs do satisfy the Mass Transport Principle (and hence are unimodular): using F(x, y) := E f(x, y, ω), a simple resummation argument gives (12.4):

\sum_{x ∈ G} F(o, x) = \sum_{x ∈ G} F(x^{-1}, o) = \sum_{y ∈ G} F(y, o),

where in the first equality we used that multiplying from the left by a group element is a graph-automorphism, and in the second equality we used that x ↦ x^{-1} is a self-bijection of Γ. See [LyPer10, Sections 8.1, 8.2] for more details on MTP and unimodularity.
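The conservation of mass in (12.4) can be sanity-checked numerically on the simplest Cayley graph, a cycle of length n (the transport rule and all parameters below are my choices): every vertex of a finite percolation cluster sends unit mass to each of the two endpoints of its cluster, and the expected in- and out-flows at a fixed vertex must agree.

```python
import random

def clusters_on_cycle(open_edge):
    """Split the n-cycle into percolation clusters (maximal runs of open
    edges); assumes at least one closed edge. Edge i joins i and i+1 mod n."""
    n = len(open_edge)
    cut = next(i for i in range(n) if not open_edge[i])
    clusters, current = [], []
    for k in range(1, n + 1):
        v = (cut + k) % n
        current.append(v)
        if not open_edge[v]:          # edge v -- v+1 is closed: cluster ends
            clusters.append(current)
            current = []
    return clusters

def mtp_on_cycle(n=16, p=0.5, samples=20000, seed=1):
    """Every vertex of a finite cluster sends mass 1 to each endpoint of its
    cluster. Returns the empirical expected mass sent by vertex 0 and mass
    received by vertex 0; by the Mass Transport Principle they agree."""
    rng = random.Random(seed)
    out_mass = in_mass = 0.0
    for _ in range(samples):
        open_edge = [rng.random() < p for _ in range(n)]
        if all(open_edge):            # single wrap-around cluster: send nothing
            continue
        for cl in clusters_on_cycle(open_edge):
            if 0 in cl:
                out_mass += 2.0       # vertex 0 sends 1 to each endpoint
            in_mass += len(cl) * ((cl[0] == 0) + (cl[-1] == 0))
    return out_mass / samples, in_mass / samples
```

Both averages come out near 2(1 − p^n) ≈ 2; on the grandparent graph the analogous grandchild transport would give 4 versus 1.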

⊲ Exercise 12.9 (Soardi-Woess 1990). Show that amenable transitive graphs are unimodular.

A typical way of using the MTP is to show that whenever there exist some invariantly defined special points of infinite clusters in an invariant percolation process, then there must be infinitely many in any infinite cluster; Exercise 12.11 below is a good example of this. Given a percolation configuration ω ⊆ E(G), a trifurcation point is a vertex in an infinite cluster C whose removal from C would break it into at least three infinite connected components.

⊲ Exercise 12.10.
(a) In any invariant percolation process on any transitive graph G, show that the number of trifurcation points is either almost surely 0 or almost surely ∞.
(b) In an invariant percolation process on a unimodular transitive graph G, show that almost surely the number of trifurcation points in each infinite cluster is 0 or ∞.
(c) Give an invariant percolation on a non-unimodular transitive graph with infinitely many trifurcation points a.s., but only finitely many in each infinite cluster.

A more quantitative use of the MTP is the following. Let αK be the average degree inside a finite subgraph K ⊂ G, and let α(G) be the supremum of αK over all finite K. Then clearly α(G) + ιE(G) = degG(o), where ιE is the Cheeger constant.

⊲ Exercise 12.11. Let ω be any invariant bond percolation on a transitive unimodular graph G with E[degω(o)] > α(G). Show that ω has an infinite cluster with positive probability.

In some sense, MTP is a weak form of averaging: since α(G) is the supremum of the average degrees in finite subgraphs, it is not surprising that a mean degree larger than α(G) implies the existence of an infinite cluster; the MTP is needed to pass from spatial averages to means w.r.t. invariant measures. The bound of Exercise 12.11 is, of course, vacuous if G is amenable. And it is indeed weaker than usual averaging, while non-amenability is in fact essential for the conclusion that a large edge-marginal implies the existence of an infinite cluster, as can be seen from the following two exercises:

⊲ Exercise 12.12. For any ǫ > 0, give an example of an invariant bond percolation process ω on Zd with only finite clusters a.s., but E[degω(o)] > 2d − ǫ.

⊲ Exercise 12.13. Show that the bound of Exercise 12.11 is tight: for the set of invariant bond percolations on the 3-regular tree T3 without an infinite cluster, the supremum of edge-marginals is 2/3. (Hint: the complement of a perfect matching has density 2/3 and consists of Z components.)
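A finite-torus sketch of the kind of construction asked for in Exercise 12.12 (sizes N and m are my choices): delete exactly the edges crossing the boundaries of m × m boxes in a translated box-grid. Every cluster is a finite box, yet averaging over the translations gives edge-marginal 1 − 1/m, i.e., mean degree 2d(1 − 1/m).

```python
def box_edges(N, m, shift):
    """Edges of the N x N torus kept after deleting every edge that crosses
    the boundary of an m x m box in the grid anchored at `shift`.
    N must be a multiple of m."""
    sx, sy = shift
    def box(x, y):
        return ((x - sx) % N) // m, ((y - sy) % N) // m
    kept = []
    for x in range(N):
        for y in range(N):
            for dx, dy in ((1, 0), (0, 1)):
                nx, ny = (x + dx) % N, (y + dy) % N
                if box(x, y) == box(nx, ny):
                    kept.append(((x, y), (nx, ny)))
    return kept

def component_sizes(N, edges):
    """Cluster sizes via union-find; clusters should be whole m x m boxes."""
    parent = {(x, y): (x, y) for x in range(N) for y in range(N)}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
    sizes = {}
    for v in parent:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values())

N, m = 12, 4
degs = []
for sx in range(m):                # average the degree of the origin over all
    for sy in range(m):            # m*m translations of the box-grid
        edges = box_edges(N, m, (sx, sy))
        degs.append(sum((0, 0) in e for e in edges))
avg_deg = sum(degs) / len(degs)    # equals 4 * (1 - 1/m) = 3.0 here
```

With a uniformly random translation this becomes a genuinely invariant process on the torus, and the same idea with randomly translated Følner sets works on Zd.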

2) and by Lemma 12.8. represents the structure of the inﬁnite clusters of ω well.s. But this is just Ber(pc − ǫ) percolation. Choose one of them uniformly at random.3: Proposition 12. Theorem 7.17.10. we deﬁne a new invariant percolation ξǫ on E (G).) Note that Exercises 12. we need to rule out the case of a unique and the case of inﬁnitely many inﬁnite clusters. Let γǫ be an independent Ber(ǫ) bond percolation. it is also easy to check that such an inﬁnite cluster sense and sparser in another. but low enough to ensure high edge marginals. (Hint: close the boundaries of a set of randomly translated Følner sets. A unimodular transitive graph is amenable iff for any ǫ > 0 there is an invariant bond percolation ω with ﬁnite clusters only and edge-marginal P[ e ∈ ω ] > 1 − ǫ. for any two vertices in a Cayley graph. A related characterization can be found in Exercise 13. almost surely. so it has no inﬁnite clusters a. by Exercise 12. y ) ∈ E (G) be open in ξǫ if limǫ→0 P[ (x. see Figure 12.{ex. but. there is a unique natural automorphism moving one to the other. with a lot of branching. dist(y. For each x ∈ V (G). so we can hope to derive a contradiction. There is an obvious graph structure G on V as vertices. there is a random ﬁnite set {x∗ i } of vertices that implies the existence of an inﬁnite cluster in ω \ γǫ .3 (a). maybe surprisingly. and assume it has a unique inﬁnite cluster {pr. These resulting inﬁnite clusters still has inﬁnitely many trifurcation points.14 together give a percolation characterization of amenability. a very similar condition characterizes not non-amenability but a stronger property. although not at all a subgraph of G. and x∗ and y ∗ are connected in ω \ γǫ . with a “density” high enough to ensure having only ﬁnite clusters. Generalize the previous exercise to all amenable Cayley graphs. Kazhdan’s (T).11 and 12. they can be glued to get inﬁnitely many trifurcation points as in Exercise 12. denoted by x∗ . However. 
which makes the proof a bit easier. The statement holds not only for Cayley graphs. for some small enough ǫ > 0. Using insertion tolerance. C∞ ) < 1/ǫ.marginamen} ⊲ Exercise 12. see Theorem 12. y ) ∈ ξǫ ] = 1. there is an inﬁnite dist(x. Then let the edge (x. On the other hand. which is a critical percolation two properties appear to point in diﬀerent directions. on the other hand.15 below.marginal} are the closest points of C∞ to x. Moreover. For structure. Let V ⊂ V (G) denote the set 123 . This graph. Assume now that there are inﬁnitely many inﬁnite clusters in ω . each component of G is kind of tree-like. if a trifurcation point is removed. Sketch of proof of Theorem 12.9 ([BLPS99a]). We will base our sketch on [BLPS99b]. C∞ ) < 1/ǫ. by the ergodicity of Bernoulli percolation (Lemma 12.11. — a contradiction. thicker than ω in some Let ω ⊆ E (G) be the percolation conﬁguration at pc . Hence. the same application of MTP shows that.3 (b). First of all. C∞ . see Figure 12. It is easy to see that cluster in ξǫ with positive probability. similar to Kesten’s random walk characterization. For any ǫ > 0. then each of the of trifurcation points. but also for all amenable transitive graphs. with an edge between two trifurcation points if there is a path connecting them in ω that does not go through any other trifurcation point.3.14.

y ) ∈ F \ Fǫ ] = 0. Now again let γǫ be an independent Ber(ǫ) bond percolation.07 . the number of inﬁnite clusters is almost surely 0 or 1. jv } be the set of inﬁnite clusters of G \ {v } neighbouring ℓ v . there is the following very elegant theorem: Theorem 12. let {wi : 1 ≤ ℓ ≤ ki } be the set of G -neighbours of v in Ci (v ). .34 .d. since F itself is not a transitive unimodular graph. Unif [0. . Conjecture 12. 1). Proof.3: Constructing the graph G and forest F of trifurcation points in an inﬁnite cluster.3 (c).52 . as in the proof of Lemma 12. inﬁnitely many inﬁnite clusters on the k -regular tree.43 .3 above. We proved in Lemma 12. then (x. . ℓ and 1 ≤ i ≤ jv . then. while limǫ→∞ P[ (x. It is pretty obvious that for all p ∈ (pc (Tk ). or ∞. there are a.31 .1.trifurtree} Let Fǫ be the following subgraph of F : if (x. but a similar MTP argument on the entire V (G) can be set up to ﬁnd the contradiction.11. 0.i. we will exhibit a nonamenable spanning forest F inside each component of G in an invariant way. For each v ∈ V .7 has been established for most such known graphs in the union of [Tim06b] and [PerPS06]. using insertion tolerance. The above proof strategy clearly breaks down for non-unimodular transitive graphs. For each 1 ≤ i ≤ jv . See Figure 12.27 .88 . We cannot just use Exercise 12. We omit the details.10 (Burton-Keane [BurtK89]).45 .the actual proof. . It is not very hard to show that F has no cycles. the Reader is invited to write a proof or look it up in [BLPS99b]. they could be glued.s. It has only ﬁnite clusters. Fǫ is an invariant bond percolation process on V (G). where jv ≥ 3.3 that it is an almost sure constant. with edges usually is a non-amenable tree with minimum degree at least 3. 1] label to each v ∈ V . draw an edge from v to that element of {wi : 1 ≤ ℓ ≤ ki } that has the smallest We now use some extra randomness: assign an i. If there were inﬁnitely many inﬁnite clusters. Then ω \ γǫ has only ﬁnite clusters. 
But each tree of F cluster of ω \ γǫ .09 . {f. let {Ci (v ) : i = 1. Nevertheless. and for each v label. We then forget the orientations of the edges to get F . The number of inﬁnite clusters in Bernoulli percolation is also a basic and interesting topic.71 Figure 12. On the other hand.BurtonKeane} . y ) ∈ Fǫ if x and y are in the same not in E (G). Clearly. y ) ∈ E (F ). suggesting that this cannot happen. to get that a given vertex is a trifurcation point with 124 {t. For any insertion tolerant ergodic invariant percolation on any amenable transitive graph. .

(pc . For transitive graphs. On the other hand. So. then each inﬁnite cluster of ωp′ contains an inﬁnite cluster of ωp .s. but new ones can also appear by ﬁnite clusters merging on the other hand.s. hence P[ x ←→ y ] ≥ P[ x. Before discussing what is known about the non-triviality of these intervals. where having inﬁnitely many inﬁnite clusters is a possibility. what does it really mean that “clusters merge as we raise p”? Recall the standard monotone coupling of Ber(p) percolations if p is such that ωp has an inﬁnite cluster a.y∈Z2 P x ←→ y = 0. (Hint: you can use the ideas of ω 125 .y∈V (G) P x ←→ y ω {ωp : p ∈ [0. y ∈ C∞ ] ≥ p2 . the expected number of trifurcation points Xn grows linearly with |Fn |. * Give an example of an ergodic uniformly insertion tolerant invariant percolation [H¨ aM09]. Hence. pu = 1. For transitive non-amenable graphs with one end.. then inf x. Note that there is no monotonicity that would make this obvious: as we raise p. if an ergodic invariant ω satisfying the FKG inequality has a unique inﬁnite cluster. increasing event with a positive probability p > 0 that is independent of x. modular transitive graph G satisfying inf x. (This requires a little thought. Conjecture 12.positive probability. and p′ ≥ p. 1] there is uniqueness a. but more importantly.) on Z2 with a unique inﬁnite cluster but inf x.y∈V (G) P x ←→ y > 0. And then the real result is that a.12 ([BenS96c]).16. for all p ∈ (pu . how is this result proved? But. pu < 1. ⊲ Exercise 12. At pu the situation could be either way. The second part is obvious: if C∞ is the unique inﬁnite cluster of ω . reducing their number on one hand. deterministically. Give an example of a Ber(p) percolation on a Cayley graph G that has nonuniqueness.17. Show that in a transitive graph with inﬁnitely many ends. in this standard coupling. pu ) and (pu . the inner vertex boundary of Fn has |∂Fn | should grow linearly with |Fn |. then {x ∈ C∞ } is an ω ω ⊲ Exercise 12. See Figure 12.15.13 ([LySch99]). 
then ω has a unique inﬁnite Conversely. pu ) there is non-uniqueness [H¨ aPS99]. For the non-amenable transitive case. If ω is an ergodic insertion-tolerant invariant percolation on a unicluster.uniconn} > 0. merge. pu (G) := inf {p : Pp [∃! ∞ cluster] > 0}. but there is a sequence xn ∈ V (G) with dist(x0 .) Combining these two facts. 1]} using Unif [0.11 ([BenS96c]).s.3. pc < pu iff G is non-amenable. ﬁrst things ﬁrst. 1) are non-empty: Conjecture 12. ⊲ Exercise 12. in a large Følner set Fn . xn ) → ∞ and inf n Pp [ x0 ←→ xn ] > 0.. contradicting the deﬁnition of a Følner sequence. it is not known when the intervals {c. and for all p ∈ (pc . It is known that to be at least Xn + 2. 1] labels.pu} {t.pcpu} {c. inﬁnite clusters can one can deﬁne a second critical point.3. here is an important characterization of uniqueness: Theorem 12. The proof of this uses Invasion Percolation. see Section 13.

The proof of the first part of Theorem 12.13 in [LySch99] uses a fundamental result from the same paper:

Theorem 12.14 (Cluster indistinguishability [LySch99]). If ω is an ergodic insertion-tolerant invariant percolation on a unimodular transitive graph G, and A is a Borel-measurable translation-invariant set of subgraphs of G, then either all infinite clusters of ω are in A a.s., or none.

A very rough intuitive explanation of how Theorem 12.14 implies Theorem 12.13 is the following. If inf_{x,y} P[x ←→ y in ω] > 0, then each infinite cluster must have a positive "density", which may be measured by the frequency of visits by a simple random walk, independently of the starting point of the walk. This density must be the same for each cluster by indistinguishability, while the sum of the densities of the infinite clusters must be at most 1. Hence there are only finitely many infinite clusters, and, by insertion tolerance and ergodicity, there must be a unique one.

Regarding pc < pu, it was shown by Pak and Smirnova-Nagniebeda [PaSN00] that every non-amenable group has a generating set satisfying this; of course, this property should be independent of the generating set taken (probably also a quasi-isometry invariant). Here is a brief explanation of what kind of Cayley graphs make pc < pu easier. As usually, we will consider bond percolation. On the one hand,

pc(G) ≤ 1 / (hE(G) + 1) < 1 / hE(G) = 1 / (dG ιE(G)),   (12.5)

On the other hand, in a transitive graph G, let an be the number of simple loops of length n starting (and ending) at a given vertex o, and let γ(G) := lim sup_n an^{1/n}. The smaller this is, the more treelike the graph is, hence there is hope that pu will be larger. Indeed, as proved by Schramm,

1/γ(G) ≤ pu(G).   (12.6)

The proof is a nice counting argument, which we outline briefly. Take p < p+ < 1/γ(G). By the easy direction of Theorem 12.13, it is enough to prove that inf_x Pp[o ←→ x] = 0. If o ←→ x at level p (in the standard coupling), then, conditionally on the connection at level p+, either there are many cut-edges between o and x already at level p+, or there are many p+-open simple loops. Keeping many (say, k) cut-edges open even at level p is exponentially costly: the probability is (p/p+)^k, hence in the first scenario the contributions to Pp[o ←→ x] will be small. But the second scenario is unlikely because there are not enough simple loops in G: if uk(r) denotes the probability that there is some x ∈ Br(o) such that o and x are p+-connected with at most k cut-edges, then u0(r) → 0 as r → ∞ because of p+ < 1/γ, and the same can be shown for each uk(r) by induction on k, noticing that

u_{k+1}(r) ≤ u0(s) + |Bs(o)| uk(r − s).

Hence, for o and x far away from each other, by choosing k and then r large enough, Pp[o ←→ x] must be small, proving (12.6). Finally, if G is d-regular, then an/d^n ≤ pn(o, o) for the SRW on G, hence

1 / (dG ρ(G)) ≤ 1/γ(G),   (12.7)

where ρ is the spectral radius of the SRW.
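As a numerical sanity check on the spectral radius appearing here: for the 3-regular tree, Kesten's formula gives ρ(T3) = 2√2/3 ≈ 0.9428, and this can be recovered from return probabilities pn(o, o) via the distance-from-the-root chain (the step count below is my choice):

```python
from math import sqrt

def return_probs_T3(steps):
    """Distribution of the distance from the root for SRW on the 3-regular
    tree: from 0 move to 1; from k >= 1 move to k+1 w.p. 2/3 and to k-1
    w.p. 1/3. Being at distance 0 means being back at the root, so this
    returns p_n(o, o) for n = 0..steps."""
    dist = {0: 1.0}
    probs = []
    for _ in range(steps + 1):
        probs.append(dist.get(0, 0.0))
        new = {}
        for k, pr in dist.items():
            if k == 0:
                new[1] = new.get(1, 0.0) + pr
            else:
                new[k + 1] = new.get(k + 1, 0.0) + pr * 2 / 3
                new[k - 1] = new.get(k - 1, 0.0) + pr * 1 / 3
        dist = new
    return probs

p = return_probs_T3(400)
est = sqrt(p[400] / p[398])     # ratio estimate of rho^2, then square root
rho = 2 * sqrt(2) / 3           # Kesten's spectral radius of SRW on T_3
```

The ratio estimator converges much faster than p_n(o, o)^{1/n}, because the polynomial correction n^{-3/2} in p_n(o, o) nearly cancels.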

e. then take keep the multiplicities with which group elements occur as k -wise products). by increasing the generating set). then we indeed can push hE close to the degree (or in other words. respectively.12 on pu < 1 is known under some additional assumptions (besides being nonamenable and having one end): CutCon(G) < ∞ (for instance. Section 7. for which we have some ρ(G(Γ.e. S )) = ρ0 < 1. the aim becomes to ﬁnd a Cayley graph G for which ιE is close to 1 and ρ is close to 0. and we are done.. Here is how being Kazhdan plays a role: {t.5 below. or having the so-called Rapid Decay property. Fortunately.18. can push ιE close to 1) without using multiple edges. ⊲ Exercise 12. while ιE is the edge Cheeger constant of the Markov chain (the SRW).29. see [BabB99.2.e. which is of out course stronger. given by any ﬁnite generating set S ? The answer is “yes” for groups having a free subgroup F2 . since |∂V S | ≤ |∂E S | ≤ (d − 1)|S | in a d-regular graph. i.3. The Mass Transport Principle shows that this proof cannot work in a group-invariant way. being a ﬁnitely presented Cayley graph) or being a Kazhdan group are suﬃcient. Bk ))/|Bk | → 1 for any group Γ and any ﬁnite generating set S ? Possibly the best attempt so far at proving pc < pu in general is an unpublished argument of Oded Schramm. See [PaSN00] for the (easy) proofs of these statements. it uses the ratios C (∂E S )/π (S ). by the quantitive bound ι2 E /2 ≤ 1 − ρ ≤ ιE from Kesten’s Theorem 7.. The transition matrix for SRW on the resulting multigraph is just the k th power of the original transition matrix.4 below. these two aims are really the is easy: take any ﬁnite generating set S . For G(Γ. S k ) for some large k . such that the new graph G∗ will be d∗ -regular. as in Section 7.6. see Theorem 13. and [LySch99]. but still want to add them only “locally”. Bk ))/|Bk | → 1 as k → ∞ for any nonamenable group Γ and the ball of S S (b) Is it true that ιV (G(Γ. which we discuss in Section 12. 
out Even the outer vertex Cheeger constant hV := inf |∂V S |/|S | can be close to the degree. (Hint: use the wobbling paradoxical decomposition from Exercise 5. Show that for any d-regular non-amenable graph G and any ǫ > 0. see Theorem 12. where S k = S · · · S is the multiset of all possible k -wise products (i. would the ball Bk (1) work.6].. Conjecture 12. and ιV (G∗ ) := hV (G∗ )/d∗ will be larger than 1 − ǫ for the outer vertex Cheeger constant. which includes all Gromov-hyperbolic groups. hence ρ(G(Γ. It is not known if this ρ → 0 can be achieved with generating sets without multiplicities.5. Tim07] or [LyPer10.where hE is the edge Cheeger constant deﬁned using the ratios |∂E S |/|S |.) ⊲ Exercise 12. no multiple edges. S k )) = ρk 0 → 0 as k → ∞. 12. we same. And making ρ → 0 S instance. The following exercise shows that if we do not insist on adding edges in a group-invariant way (i. After comparing the three displayed inequalities (12.19.Kazhdanclosure} 127 .7). *** radius k in any ﬁnite generating set S ? S S (a) Is it true that ιE (G(Γ. 12. There is an interpretation of pc < pu in terms of the Free and Wired Minimal Spanning Forests.6. there exists K < ∞ such that we can add edges connecting vertices at distance at most K .

2 for a bit more on these issues..g. In other words.16 ([LySch99]). let δ erg (G) := sup Eµ {(o.g.) (b) Show that δ erg (T3 ) = 1. not that clearly. group Γ is Kazhdan iff any (or one) of its Cayley graphs G has δ erg (G) < 1. Assume pu (G) = 1. ⊲ Exercise 12. it is clear that Let us also sketch a non-probabilistic proof that uses directly Deﬁnition 7. process.d. and let ωp be Ber(p) percolation at some p < 1.20. Let ηp be the invariant site percolation where the vertex set of each cluster of ωp is completely deleted with 0. instead of the characterization Theorem 12.Theorem 12.. (Hint: free groups are not Kazhdan e. We come back now to the question of pu < 1: Corollary 12. all tail events have probability 0 or 1).15 ([GlW97]).i. we have inf x. partly following a suggestion made in [LySch99].e.agree} are denoted by δ tt and δ ﬁid .). Clearly.13. then pu (G) < 1. There are some natural variants of δ erg (G): instead of all ergodic measures. it is only conjectured that δ tt (G) < 1 is again equivalent to nonamenability.15 says that Γ could not be Kazhdan. or only those that are factors of an i. but ηp converges to µhalf in the weak* topology. as p → 1. Unif [0.15.i. we can take only the tail-trivial ones (i. at pu there is non-uniqueness. By Theorem 12. probability 1/2. (b) Show that δ tt (T3 ) < 1.3 of being Kazhdan. for Cayley graphs. *** (a) Find the value of δ ﬁid (T3 ). we get σ as a measurable function f of the i.i. process ω .20 (a) below.d.. a f. It is not surprising that this implies that ηp is ergodic.y Pp [ x ←→ y ] = {c. See [LyNaz11] and Subsection 14.e.) On the other hand.g.d. E ) and any o ∈ V . Then. The corresponding suprema are they really diﬀerent? An unpublished result of Benjy Weiss and Russ Lyons is that. Sketch of proof of pu < 1. so Theorem 12.) ⊲ Exercise 12. If G is a Cayley graph of an inﬁnite Kazhdan group Γ. δ ﬁid (G) < 1 is equivalent to nonamenability. 
inﬁnite group Γ is Kazhdan iff the measure µhalf on 2Γ that gives probability half to the emptyset and probability half to all of Γ is not in the weak* closure of the Γ-invariant ergodic probability measures on 2Γ . In general. Moreover. for a transitive d-regular graph G(V.21.Kazhdanpu} {ex.i. 1] process on V or E (i. A f. where f commutes with the action of Γ on the conﬁgurations σ and ω ). coin ﬂips in large Følner neighbourhoods vote on the σ -value of each vertex.d. (a) Show that δ ﬁid (G) = 1 for any amenable transitive graph G. It is due to [IKT09]. independently from other clusters. it is a hard task to ﬁnd out what tail trivial processes are factors of some i.. (Hint: have i. δ erg (G) ≥ δ tt (G) ≥ δ ﬁid (G) (wait. x) ∈ E : σ (x) = σ (o)} ergodic invariant measures µ on σ ∈ {±1}V : dG with Eµ σ (o) = 0 . On the other hand. 128 . because they surject onto Z. (One direction is easy: see Exercise 12.

hence its maximum is 12. then. ρs (ι)) = 2Pp [ 1 ←→ s ] < ǫ for all s ∈ S . by the way. *** Prove pc < pu for Kazhdan groups by ﬁnding an appropriate representation.23 (Todor Tsankov). It is easy to see that a vector ξ is invariant under the representation ρ iff ξ (ω. Let Ω = {0. ρgh−1 (ι))µp . If there are inﬁnitely many inﬁnite clusters in µp . for each ω . then ι µp ωg ω Γ has a natural unitary (in fact. Fill in the missing details in either of the above proof sketches for pu < 1 for Kazhdan groups. ⊲ Exercise 12. for ϕ. S ) for a ﬁnite generating set S . ρh (ι))µp = (ι. On the other hand. Hence. or in other words. Cω (g ) be the cluster of g . Threshold phenomema {ss. Ω Sketch of another proof of pu < 1. all of these clusters must be inﬁnite. S ) > 0. C ) dµp . ι − ρs (ι) µp Take an ǫ > 0 smaller than the Kazhdan constant κ(Γ. p). and [KalS06] for a nice survey of inﬂuences and threshold phenomena that we will deﬁne in a second. if ι ∈ H is the vector that is 1 at Cω (1) and 0 at the other clusters of ω . for all g ∈ Γ. If p is close enough to 1. By a simple application of the Mass Transport Principle. and a site percolation conﬁguration ω ∈ Ω. orthogonal) representation ρ on H. there is no invariant way to choose ﬁnitely many of them. Taking these clusters.22. we get an invariant choice of ﬁnitely many clusters. ψ ∈ H . hence we must have a unique inﬁnite cluster instead. and Pp [ g ←→ h ] = Pp [ C (g ) = C (h) ] = (ρg (ι). C )ψ (ω. by the Kazhdan property. for each ω . there is an invariant vector ξ ∈ H. [AloS00] for probabilistic combinatorics in general. H := ℓ2 (Cω ) . that Pp [ g ←→ h ] is a positive deﬁnite function. 1}V ×V that is invariant under 129 . see [JaLR00] for more on random graphs. ρg (ϕ)(ω. that does not care about A graph property A over some vertex set V is a subset of {0. 1}Γ. similarly to the present notes. the diagonal action of the permutation group Sym(V ). by the cluster indistinguishability Theorem 12. ⊲ Exercise 12. 
with the product Bernoulli measure µp with Hilbert space of square summable real-valued sequences deﬁned on Cω . which is just Ber(p) percolation on the complete graph Kn . g −1 C ) for C ∈ Cω . ·) ∈ ℓ2 (Cω ). This implies. We will give a very brief introduction.density p. g −1 C ) attained at ﬁnitely many clusters. C ) = ξ (ω g . we have ξ (ω. ψ )µp := Ω C ∈C ω ϕ(ω.percfin} The best-known example is the Erd˝ os-R´ enyi random graph model G(n.14. Given the right Cayley graph G(Γ. and ℓ2 (Cω ) be the integral of these Hilbert spaces over all ω . let Cω be the set of its clusters. note here that g −1 C ∈ Cωg . where ω g (h) := ω (gh). since g −1 x ←→ g −1 y iff x ←→ y . Consider now the direct (ϕ. translating vectors by = 1. C ) := ϕ(ω g .2 Percolation on ﬁnite graphs. [Gri10] contains a bit of everything. Now. then = 2 − 2(ι.

the labelling of the vertices. Examples are "containing a triangle", "being connected", "being 3-colourable", and so on. Such properties are most often monotone increasing or decreasing. It was noticed by Erdős and Rényi [ErdR60] that, in the G(n, p) model, monotone graph properties have a relatively sharp threshold: there is a short interval of p values in which they become extremely likely or unlikely. Here is a simple example: Let X be the number of triangles contained in G(n, p) as a subgraph. Clearly,

E_p X = \binom{n}{3} p^3,

hence, if p = p(n) = o(1/n), then Pp[X ≥ 1] ≤ E_p X → 0. What can we say if p(n) n → ∞? We have E_p X → ∞, but, in order to conclude that Pp[X ≥ 1] → 1, we also need that X is somewhat concentrated. This is the easiest to do via the Second Moment Method: for any random variable X ≥ 0, applying Cauchy-Schwarz to E[X] = E[1_{X>0} X] gives

P[X > 0] ≥ (EX)^2 / E[X^2].   (12.8)

Now, back to the number of triangles,

E_p[X^2] = \sum_{∆, ∆′ ⊆ E(K_n) triangles} Pp[∆, ∆′ are both open] = \sum_{∆ = ∆′} + \sum_{|∆ ∩ ∆′| = 1} + \sum_{∆ ∩ ∆′ = ∅}
         = \binom{n}{3} p^3 + \binom{n}{3} \binom{3}{2} \binom{n-3}{1} p^5 + \binom{n}{3} \left( \binom{3}{1} \binom{n-3}{2} + \binom{n-3}{3} \right) p^6.

Thus, for p > c/n, we have E_p[X^2] ≤ C (E_p X)^2, and (12.8) yields that Pp[X ≥ 1] ≥ c′ > 0. Moreover, if pn → ∞, then E_p[X^2] ≤ (1 + o(1))(E_p X)^2, hence Pp[X ≥ 1] → 1.
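The two regimes around the 1/n threshold can be seen in a small simulation (the sizes and densities below are my choices):

```python
import random
from itertools import combinations
from math import comb

def has_triangle(n, p, rng):
    """Sample G(n, p) and check whether some triangle is fully open."""
    edge = {e: rng.random() < p for e in combinations(range(n), 2)}
    return any(edge[(a, b)] and edge[(b, c)] and edge[(a, c)]
               for a, b, c in combinations(range(n), 3))

rng = random.Random(2)
n, trials = 30, 300
# First moment: E_p X = C(n,3) p^3 is about 1.5e-4 at p = 0.1/n,
# so triangles almost never appear.
below = sum(has_triangle(n, 0.1 / n, rng) for _ in range(trials))
# Above the threshold the second moment method kicks in and X concentrates.
above = sum(has_triangle(n, 10 / n, rng) for _ in range(trials))
print(below, above)
```

With 300 samples each, the low-density count stays essentially zero while the high-density count is essentially all trials, matching the first and second moment bounds.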

Here are now the exact general definitions for threshold functions:

Definition 12.2. Consider the Ber(p) product measure on the base sets [n] = {1, . . . , n}, and let A_n ⊆ {0, 1}^{[n]} be a sequence of increasing events (not the empty and not the full). For t ∈ (0, 1), let p^t_A(n) be the p for which Pp[A_n] = t, and call p_A(n) := p^{1/2}_A(n) the critical probability for A. (These exist since p ↦ Pp[A_n] is strictly increasing and continuous, equalling 0 and 1 at p = 0 and p = 1, respectively.) The sequence A = A_n is said to have a threshold if

P_{p(n)}[A_n] → 1  if  p(n)/p_A(n) ∨ (1 − p_A(n))/(1 − p(n)) → ∞,
P_{p(n)}[A_n] → 0  if  p(n)/p_A(n) ∧ (1 − p_A(n))/(1 − p(n)) → 0.

Furthermore, the threshold is sharp if for any ǫ > 0, we have

( p^{1−ǫ}_A(n) − p^{ǫ}_A(n) ) / ( p_A(n) ∧ (1 − p_A(n)) ) → 0  as n → ∞,

and it is coarse if there are ǫ, c > 0 such that the above ratio is larger than c for all n. Similar definitions can be made for decreasing events.

A sequence A_n of events will often be defined only for some subsequence n_k → ∞: for instance, the base set [n_k] may stand for the vertex or the edge set of some graph G_k(V_k, E_k).

A short threshold interval means that Pp[A] changes rapidly with p, hence the following result, called the Margulis-Russo formula, is fundamental in the study of threshold phenomena: for any event A,

d/dp Pp[A] = \sum_{i ∈ [n]} Ī^A_p(i), which equals \sum_{i ∈ [n]} Pp[i is pivotal for A] for A increasing,   (12.9)

where Ī^A_p(i) := Pp[Ψ_i A] − Pp[Ψ_{¬i} A] is the signed influence of the variable i on A, with Ψ_i A := {ω : ω ∪ {i} ∈ A} and Ψ_{¬i} A := {ω : ω \ {i} ∈ A}, while pivotal means that changing the variable in a given configuration ω changes the outcome of the event, i.e., that ω ∈ Ψ_i A △ Ψ_{¬i} A. The ordinary (unsigned) influence is just I^A_p(i) := Pp[i is pivotal for A], equalling Ī^A_p(i) for A increasing.

The proof of (12.9) is simple: write Pp[A] = \sum_ω 1_A(ω) Pp[ω], compute the derivative for each term,

d/dp Pp[A] = \sum_ω 1_A(ω) Pp[ω] ( |ω|/p − (n − |ω|)/(1 − p) ),   (12.10)

then, by monitoring for what configurations η a given configuration ω ∈ A appears as η ∪ {i} = ω (hence η ∈ Ψ_i A), notice that

\sum_{i=1}^n Pp[Ψ_i A] = \sum_ω 1_A(ω) Pp[ω] ( |ω| + |ω| (1 − p)/p ),

and similarly,

\sum_{i=1}^n Pp[Ψ_{¬i} A] = \sum_ω 1_A(ω) Pp[ω] ( (n − |ω|) + (n − |ω|) p/(1 − p) ),

and the difference of the last two equations is indeed equal to (12.10).

One intuition behind the formula (which could be turned into a second proof) is the following. Take an increasing A, for simplicity. In the standard coupling of the Ber(p) measures for p ∈ [0, 1], when we gradually raise the density from p to p + ǫ, then the increase in the probability of the event is exactly the probability that there is a newly opened variable that is pivotal at that moment. The expected number of these pivotal openings is \int_p^{p+ǫ} E_q[ number of pivotals for A ] dq. For very small ǫ (depending even on n, the number of bits), it is reasonable to guess that (1) this expectation is about ǫ E_p[ number of pivotals for A ], and (2) the probability of having a pivotal opening is close to this expected number. Dividing by ǫ and taking lim_{ǫ→0} gives (12.9).

So, one could prove a sharp threshold for some increasing A = A_n by showing that the total influence I^A_p := \sum_i I^A_p(i) is large for all p around the critical density p_A(n). But note that I^A_p is the size of the edge boundary of A in {0, 1}^{[n]}, measured using Pp, hence we are back at proving isoperimetric inequalities in the hypercube. For instance, for the uniform measure p = 1/2, the precise relationship between total influence and edge boundary is I^A_{1/2} = |∂_E A| / 2^{n−1}, and the standard edge-isoperimetric inequality for the hypercube (which can be proved, e.g., using a version of Theorem 5.9) can be written as

I^A_{1/2} ≥ 2 P_{1/2}[A] log_2 ( 1 / P_{1/2}[A] ).   (12.11)

**An easier inequality is the following Poincar´ e inequality (connecting isoperimetry and variance, just like in Section 8.1):
**

A I1 /2 ≥ 4 P1/2 [ A ] (1 − P1/2 [ A ]) .

(12.12) {e.InfPoin}
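The Margulis-Russo formula (12.9) is easy to check numerically by brute-force enumeration. Here is a short Python sketch (my own illustration, not part of the notes; the 3-bit majority event and the helper names are mine), comparing the numerical derivative of Pp[A] with the total pivotal probability:

```python
from itertools import product

def maj3(omega):                       # increasing event: at least two of three bits open
    return sum(omega) >= 2

def prob(event, p, n):                 # P_p[A] by enumerating {0,1}^n
    return sum(p**sum(w) * (1 - p)**(n - sum(w))
               for w in product((0, 1), repeat=n) if event(w))

def total_pivotal(event, p, n):        # sum_i P_p[ i is pivotal for A ] = E_p[ |Piv| ]
    tot = 0.0
    for w in product((0, 1), repeat=n):
        pw = p**sum(w) * (1 - p)**(n - sum(w))
        for i in range(n):
            up = w[:i] + (1,) + w[i + 1:]
            dn = w[:i] + (0,) + w[i + 1:]
            if event(up) != event(dn):  # flipping bit i changes the outcome
                tot += pw
    return tot

p, h = 0.3, 1e-6
deriv = (prob(maj3, p + h, 3) - prob(maj3, p - h, 3)) / (2 * h)
pivot = total_pivotal(maj3, p, 3)      # for Maj3 this is 6p(1-p) = 1.26 at p = 0.3
```

For Maj3, bit i is pivotal exactly when the other two bits disagree, so the total pivotal probability is 3 · 2p(1 − p), matching the derivative of Pp[Maj3] = 3p² − 2p³.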

⊲ Exercise 12.24.   {ex.InfIsop}
(a) Prove the identity I_{1/2}^A = |∂_E A| / 2^{n−1}.
(b) Show that, among all monotone events A on [n], the total influence I_{1/2}^A is maximized by the majority Maj_n, and find the value. (Therefore, Maj_n has the sharpest possible threshold at p = 1/2. For general p, but still bounded away from 0 and 1, the optimum remains similar: see (12.15).)

⊲ Exercise 12.25. Prove the Poincaré inequality (12.12). Hint: Define a map from the set of pairs (ω, ω′) ∈ A × A^c into ∂_E A. Alternatively, use discrete Fourier analysis, defined very briefly as follows. For any function f : {−1, 1}^n −→ R of n bits, define the Fourier-Walsh coefficients f̂(S) := E_{1/2}[ f χ_S ], where { χ_S(ω) := Π_{i∈S} ω(i) : S ⊆ [n] } is an orthonormal basis w.r.t. (f, g) := E_{1/2}[ f g ]. (In a slightly different language, these are the characters of the group Z_2^n.) Determine the variance Var f, and, for Boolean functions f = 1_A, the total influence I_{1/2}^A, in terms of the squared coefficients f̂(S)^2.   {ex.InfPoin}
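As a numerical sanity check of these definitions (my own illustration, using the standard identities that the exercise asks you to derive: Var f = Σ_{S≠∅} f̂(S)² and, for 0/1-valued f, I_{1/2}^A = 4 Σ_S |S| f̂(S)²), here is a brute-force computation on the 3-bit majority:

```python
from itertools import product, combinations

n = 3
cube = list(product((-1, 1), repeat=n))

def maj(omega):                        # Boolean 0/1 majority of three +-1 bits
    return 1.0 if sum(omega) > 0 else 0.0

def chi(S, omega):                     # character chi_S(omega) = prod_{i in S} omega_i
    out = 1
    for i in S:
        out *= omega[i]
    return out

def fourier(f):                        # hat f(S) = E_{1/2}[ f * chi_S ], all S in [n]
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            coeffs[S] = sum(f(w) * chi(S, w) for w in cube) / len(cube)
    return coeffs

fhat = fourier(maj)
mean = sum(maj(w) for w in cube) / len(cube)
var = sum(maj(w)**2 for w in cube) / len(cube) - mean**2
parseval = sum(c**2 for S, c in fhat.items() if S)        # sum over nonempty S
influence = 4 * sum(len(S) * c**2 for S, c in fhat.items())
```

For Maj3, the nonzero coefficients are f̂(∅) = 1/2, f̂({i}) = 1/4, f̂({1,2,3}) = −1/4, giving Var f = 1/4 and total influence 3/2, in agreement with each bit being pivotal with probability 1/2.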

For balanced events, i.e., when P_{1/2}[ A ] = 1/2, the stronger (12.11) gives I_{1/2}(A) ≥ 1, which is sharp, as shown by the dictator: D_i(ω) := ω(i) as a Boolean function, or D_i := {ω : i ∈ ω} as an event, for some fixed i ∈ [n]. For this event, Pp[ D_i ] = p, hence it has a threshold, but a very coarse one. In general, we have the following basic result, due to Bollobás and Thomason [BolT87], who used isoperimetric considerations to prove it.

⊲ Exercise 12.26. Prove that for any sequence of monotone events A = An and any ǫ there is Cǫ < ∞ such that p_A^{1−ǫ}(n) − p_A^ǫ(n) < Cǫ ( p_A^ǫ(n) ∧ (1 − p_A^{1−ǫ}(n)) ). Conclude that every sequence of monotone events has a threshold. (Hint: take many independent copies of a low density percolation to get success with good probability at a larger density.)   {ex.threshold}

Now, what properties have sharp thresholds? There is an ultimate answer by Friedgut and Bourgain [FriB99]: an increasing property has a coarse threshold iff it is local, i.e., its probability can be significantly increased by conditioning on a bounded set of bits to be present. Typical examples are the events of containing a triangle or some other fixed subgraph, either anywhere in the graph, or on a fixed subset of the bits as in a dictator event. Sharp thresholds correspond to global properties, such as connectivity, and k-colorability for k ≥ 3. The exact results are slightly different in the case of graph and hypergraph properties (Friedgut) and general events (Bourgain), and we omit them. Note that although it is easy to show locality of events that are "obviously" local, it might be much harder to prove that something is global and hence has a sharp threshold. Therefore, it is still useful to have more robust conditions for the quantification of how sharp a threshold is. The following is a key theorem, which is a generalization of the p = 1/2 case proved in [KahKL88] that strengthens (12.12):
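The contrast between a coarse and a sharp threshold can be computed exactly, without simulation: the dictator has Pp[D_i] = p, so moving p from 0.45 to 0.55 moves its probability by only 0.1, while the majority's probability jumps across p = 1/2 as n grows. A quick Python computation (my own illustration) with the binomial law:

```python
from math import comb

def maj_prob(n, p):                    # P_p[ Maj_n ] for odd n: at least (n+1)/2 open bits
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# dictator: the window [0.45, 0.55] always captures probability mass exactly 0.1;
# majority: the same window captures almost the whole 0-to-1 jump once n is large
window = {n: (maj_prob(n, 0.45), maj_prob(n, 0.55)) for n in (11, 101, 1001)}
```

Already at n = 1001, the window (0.45, 0.55) takes Maj_n from below 0.01 to above 0.99, a concrete instance of a C/√n-size threshold interval for majority versus the coarse dictator threshold.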

Theorem 12.17 ([BouKKKL92]). For the Ber(p) product measure on [n], and any nontrivial event A ⊂ {0,1}^[n], we have   {t.BKKKL}

  I_p^A ≥ c Pp[ A ] (1 − Pp[ A ]) log ( 1 / (2 m_p^A) ),   (12.13) {e.totalInf}

where m_p^A := max_i I_p^A(i). Furthermore,

  m_p^A ≥ c Pp[ A ] (1 − Pp[ A ]) (log n) / n.   (12.14) {e.maxInf}

In both inequalities, c > 0 is an absolute constant.

For instance, if we can prove for some sequence of monotone events A = An that m_p^A → 0 as n → ∞, uniformly in a large enough interval of p values around pA(n), then (12.13) and the Margulis-Russo formula (12.9) show that the threshold interval is small: p_A^{1−ǫ}(n) − p_A^ǫ(n) → 0 for any ǫ > 0. Note that this condition is going in the direction of excluding locality: a bounded set of small-influence bits usually do not have a noticeable influence even together. Furthermore, if there is a transitive group on [n] under which An is invariant (such as a graph property), then all the individual influences are the same, so I_p^A = n m_p^A, and (12.14) implies that the threshold interval is at most Cǫ / log n. We will see some applications of these ideas in Subsections 12.3 and 13.2. The proofs of these influence results, including the Friedgut-Bourgain theorem, use Fourier analysis on the hypercube Z_2^n, as defined briefly in Exercise 12.25.

From the viewpoint of percolation, the most relevant properties are related to connectivity and sizes of clusters. On a finite graph, instead of infinite clusters, we talk about giant clusters, i.e., clusters that occupy a positive fraction of the vertices of Gn = (Vn, En). For the Erdős-Rényi model G(n, p), the threshold for the appearance of a giant cluster is pA(n) = 1/n: below that, the largest cluster has size O(log n), while above it there is a unique cluster of macroscopic volume, and all other clusters are O(log n). The threshold window is of size n^{−4/3}, and within this window, the largest, second largest, etc. clusters all have sizes around n^{2/3}, comparable to each other. A beautiful description of this critical regime is done in [Ald97], using Brownian excursions.

But there is much beyond percolation on the complete graph Kn. The first question is the analogue of pc < 1 for non-1-dimensional graphs:

Conjecture 12.18 (Benjamini). Let Gn = (Vn, En) be a sequence of connected finite transitive graphs with |Vn| → ∞ and diameter diam(Gn) = o(|Vn| / log |Vn|). Then there are a, ǫ > 0 such that   {c.finitepc}

  P_{1−ǫ}[ there is a connected component of size at least a|Vn| ] > ǫ

for all large enough n.

⊲ Exercise 12.27. Show by example that the o(|Vn| / log |Vn|) assumption is sharp.

The next question is the uniqueness of giant clusters.
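The subcritical/supercritical contrast for G(n, p) around p = 1/n is easy to see in a small simulation. A minimal Python sketch (my own, not from the notes; union-find over all vertex pairs, with constants chosen purely for illustration):

```python
import random

def largest_cluster(n, c, seed):
    # G(n, p) with p = c/n: sample each of the n(n-1)/2 edges, track components
    random.seed(seed)
    p = c / n
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

giant = largest_cluster(800, 3.0, 1)   # supercritical: a macroscopic cluster
small = largest_cluster(800, 0.5, 1)   # subcritical: only O(log n)-size clusters
```

At c = 3 the giant cluster occupies roughly the fraction s solving s = 1 − e^{−3s} (about 0.94 of the vertices), while at c = 0.5 the largest cluster stays logarithmically small.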

Conjecture 12.19 ([AloBS04]). If Gn is a sequence of connected finite transitive graphs with |Vn| → ∞, then for any a > 0 and ǫ > 0,   {c.finiteunique}

  sup_{ǫ<p<1−ǫ} Pp[ there is more than one connected component of size at least a|Vn| ] → 0 as n → ∞,

where Pp denotes the probability with respect to Ber(p) percolation.

⊲ Exercise 12.28. Show by example that the 1 − ǫ cutoff is needed.

This conjecture is known to hold for expanders, even without transitivity [AloBS04]. The proof in this case proceeds by first showing a simple general upper bound on the average influence, complementing (12.14): for any increasing event A ⊂ {0,1}^[n], if p ∈ (ǫ, 1 − ǫ), then ∃ α = α(ǫ) such that

  Pp[ i is pivotal for A ] ≤ α / √n   (12.15) {e.pmaxInf}

for a uniformly chosen random i ∈ [n]; in other words, the total influence is always I_p^A = O(√n). (This generalizes Exercise 12.24 (b) from the case p = 1/2.) This bound is applied to proving that there cannot be many edges whose insertion would connect two large clusters. On the other hand, using the insertion tolerance and the FKG property, two macroscopic clusters in an expander would necessarily produce many such pivotal edges, hence there is uniqueness.

Besides the Erdős-Rényi G(n, p), another classical example where these conjectures are known to hold is the hypercube {0,1}^n, again with critical value (1 + o(1))/n [AjKSz82]. Despite the uniqueness of the giant cluster, forced due to the finiteness of the graph itself, there is a possible analogue of the nonuniqueness phase of the non-amenable case: in the intermediate regime, the identity embedding of the giant cluster into the original graph should have large metric distortion. (Locally we see many large clusters; only later do they hook up.) For the hypercube, the conjectured second critical value is around 1/√n [AngB07].

12.3 Critical percolation: the plane, scaling limits, critical exponents, mean field theory   {ss.critperc}

Statistical mechanics systems with phase transitions are typically the most interesting at criticality, and Bernoulli percolation is a key example for the study of critical phenomena, where the latter is understood broadly and vaguely: e.g., scaling limits and critical exponents. Critical percolation is best understood in the plane and on tree-like (so-called mean field) graphs: Zd for high d is locally tree-like enough and the global structure is simple enough so that critical percolation can be understood quite well, while Gromov-hyperbolic groups are very much tree-like globally, but that presently is not enough. This section will concentrate on critical planar percolation, with some discussion on more general ideas and the mean field theory.

The planar self-duality of the lattice Z2 was apparent in our proof of the upper bound in 1/3 ≤ pc(Z2) ≤ 2/3. A more striking (but still very simple) consequence of this self-duality is the following:

Lemma 12.20. The events of having an open left-right crossing in an n × (n + 1) rectangle in Ber(1/2) bond percolation on Z2, and in an n × n rhombus in Ber(1/2) site percolation on TG, both have a probability exactly 1/2.   {l.half}

[Figure 12.4: The self-duality of percolation on TG and Z2.   {f.duality}]

For the proof, we will need some basic deterministic planar topological results. First of all, opening/closing the sites of TG is the same as colouring the faces of the hexagonal lattice white/black, so, for better visualization, we will use the latter. Now, it is intuitively quite clear that, in any two-colouring of the rhombus, exactly one of the two possibilities occurs, regardless of n: either there is a left-right white crossing, or a top-bottom black crossing. (In other words, a hex game will always end with a winner.) For bond percolation on Z2, a similar statement holds, by colouring the faces of Z2: if being neighbours requires a common edge, then it can happen that neither crossing is present, while, if it requires only a common corner, then both crossings might be present at the same time. The fact that there cannot be both types of crossings is a discrete version of Jordan's curve theorem, and the fact that at least one of the crossings must occur is a discrete version of Brouwer's fixed point theorem in two dimensions: any continuous map from the closed disk to itself must have a fixed point. However, to actually prove the discrete versions (even assuming the topology theorems), some combinatorial hacking is inevitable. I am not going to do the discrete hacking in full detail, but let me mention two approaches. The most elegant one I have seen is an inductive proof via Shannon's and Schensted's game of Y, see [PerW10, Section 1.3]. A more natural approach is to use the exploration interface, as in the left hand picture of Figure 12.4. Add an extra row of hexagons (an outer boundary) to each of the four sides of the rhombus, the left and right rows coloured white, the top and bottom rows coloured black; the colours of the four corner hexagons will not matter. Now start a path on the edges of the hexagonal lattice in the lower right corner, with black hexagons on the right, white ones on the left, as shown. One can show that this path exploring the percolation configuration cannot get stuck, and will end either at the upper left or the lower right corner. Moreover, one can show that in the first case the right boundary of the set of explored hexagons will form a black path from the top to the bottom side of the rhombus, while, in the second case, the left boundary will form a white path from left to right.
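Lemma 12.20 invites a quick simulation: for bond percolation at p = 1/2 on a rectangle that is one site longer than it is tall, the left-right crossing probability should be exactly 1/2 for every n. A Monte Carlo sketch in Python (my own illustration, not from the notes; the (n+1) × n vertex rectangle below is one concrete realization of the self-dual shape):

```python
import random

def has_lr_crossing(n, p, rng):
    # bond percolation on the vertex rectangle (x, y), 0 <= x <= n, 0 <= y <= n-1;
    # union-find over open edges, then test whether x=0 and x=n columns connect
    idx = lambda x, y: x * n + y
    parent = list(range((n + 1) * n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for x in range(n + 1):
        for y in range(n):
            if x < n and rng.random() < p:      # horizontal edge (x,y)-(x+1,y)
                union(idx(x, y), idx(x + 1, y))
            if y < n - 1 and rng.random() < p:  # vertical edge (x,y)-(x,y+1)
                union(idx(x, y), idx(x, y + 1))
    left = {find(idx(0, y)) for y in range(n)}
    right = {find(idx(n, y)) for y in range(n)}
    return bool(left & right)

rng = random.Random(0)
trials = 2000
estimate = sum(has_lr_crossing(12, 0.5, rng) for _ in range(trials)) / trials
```

With 2000 trials the standard error is about 0.011, so the estimate should land very close to the exact value 1/2, independently of the rectangle size.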

Proof of Lemma 12.20. Consider Ber(1/2) site percolation on TG, represented as a black-and-white colouring of the hexagonal lattice, so that connections are understood as white paths. By the above discussion, in any two-colouring of the n × n rhombus, exactly one of a left-right white crossing and a top-bottom black crossing occurs; by colour-flipping and planar symmetry, the two events have equal probability, hence each has probability exactly 1/2.

For bond percolation on Z2, given a percolation configuration in the n × (n + 1) rectangle (the red bonds on the right hand picture of Figure 12.4), consider the dual (n + 1) × n rectangle on the dual lattice, with the dual configuration (the blue bonds): a dual edge is open iff the corresponding primal edge was closed. Then, as one can easily convince themselves, exactly one of the two possibilities occurs: either there is a left-right crossing on the primal graph, or a top-bottom crossing on the dual graph. (The exploration interface now goes with primal bonds on its left and dual bonds on its right, with the left and right sides of the rectangle fixed to be present in the primal configuration, and the top and bottom sides fixed to be in the dual configuration.) Hence, if A is the event of a left-right primal crossing in Ber(p) percolation, then the complement Ac is the event of a top-bottom dual crossing (on TG, primal/dual simply mean open/closed), and we have Pp[ Ac ] = P_{1−p}[ A ]. This says that Pp[ A ] + P_{1−p}[ A ] = 1, so, for p = 1 − p = 1/2, we have P_{1/2}[ A ] = 1/2, as desired.

⊲ Exercise 12.29. Assuming the fact that at least one type of crossing is present in any two-colouring of the n × n rhombus, prove Brouwer's fixed point theorem in two dimensions.

For a physicist, the fact that at p = 1/2 there is a non-trivial event with a non-trivial probability that is independent of the size of the system suggests that this should be the critical density pc, with any reasonable definition of "open crossing of a domain". This intuition becomes slightly more grounded once we know that the special domains considered in Lemma 12.20 are actually not that special:

Proposition 12.21 (Russo-Seymour-Welsh estimates). Consider Ber(1/2) site percolation on TGη, the triangular grid with mesh size η, where connections are understood as white paths. We will use the notation P = P^η_{1/2}.

(i) Let D ⊂ C be homeomorphic to [0, 1]^2, with piecewise smooth boundary, and let a, b, c, d ∈ ∂D be the images of the corners of [0, 1]^2. Then

  0 < c0 < P[ ab ←→ cd in percolation on TGη inside D ] < c1 < 1   (12.16) {e.RSWquad}

for some ci(D, a, b, c, d) and all 0 < η < η0(D, a, b, c, d) small enough.

(ii) Let A ⊂ C be homeomorphic to an annulus, with piecewise smooth inner and outer boundary pieces ∂1 A and ∂2 A. Then

  0 < c0 < P[ ∂1 A ←→ ∂2 A in percolation on TGη inside A ] < c1 < 1   (12.17) {e.RSWannu}

for some ci(A) and all 0 < η < η0(A) small enough.

Similar statements hold for Ber(1/2) bond percolation on Z2.

The key special case is the existence of some s > r such that crossing an r × s rectangle in the harder (length s) direction has a uniformly positive probability, depending only on r/s. Such bounds, together with a lot of applications of the Harris-FKG inequality, imply all the claims. The key step is the inequality

  P[ LR(r, 2s) ] ≥ P[ LR(r, s) ]^2 / 4,   (12.18) {e.RSWkey}

which will follow from Lemma 12.20 fed into the following argument.

Proof of (12.18). Here LR(r, s) denotes the left-to-right crossing event in the rectangle r × s, in the horizontal (length s) direction. The following very simple proof is due to Stas Smirnov. (Anecdote: when Oded Schramm received the proof from Smirnov, in the form of a one-page fax containing basically just (12.18) and Figure 12.5, he almost posted the fax to the arXiv under Smirnov's name, to make sure that everyone gets to know this beautiful proof.)

Let D be the r × 2s lattice rectangle (i.e., the union of lattice hexagons intersecting the rectangle), with left and right halves A and B, where r and s are arbitrary, except that they are chosen relative to the mesh η in a way that the vertical midline of the r × 2s rectangle is an axis of symmetry of the "lattice rectangle", and both halves "basically" agree with the r × s lattice rectangle (the hexagons cut into two by the midline are considered to be part of both halves). See Figure 12.5.a.

[Figure 12.5: Smirnov's proof of RSW, parts (a), (b), (c). Open/white denoted by yellow, closed/black denoted by blue.   {f.RSW}]

Fixing the outer boundary black along the top side of A and white along its left side, start an exploration interface in the upper left corner until it hits one of the two other sides of A. The event of hitting the midline between A and B is equivalent to LR(A): the right (white) boundary of the stopped interface will be the uppermost left-right crossing of A (provided that a crossing exists). Condition now on this event, and also on the right (white) boundary γ of the interface. See Figure 12.5.b. Note that the configuration in the part of A below γ and in the entire B is independent of γ.

Now let the reflection of γ across the midline between A and B be γ̃. (If γ ended at a hexagon cut into two by the midline, just below the midline, then this hexagon will be in γ and not in γ̃.) In the subdomain of D below γ ∪ γ̃, with boundary arcs γ, γ̃, β, α in a clockwise order, where α is the bottom side of A together with the part of the left side below γ, and similarly for β in B (the possible middle hexagon on the bottom side will be in β and not in α), we can repeat the argument of Lemma 12.20: exactly one of the two possibilities occurs, there is either a white crossing between γ and β, or a black crossing between γ̃ and α. By (almost-)symmetry, since β and its reflection intersect only in at most one hexagon, we have P[ γ ←→ β ] ≥ 1/2. If the white crossing between γ and β occurs, then, together with the white γ, we get a white crossing from the left side of A to the bottom or right side of B. See Figure 12.5.c.

So, denoting by LR(|A, B) the event of a white crossing from the left side of A to the bottom or right side of B, after averaging the conditional probability lower bound 1/2 over all possible choices of γ, we get

  P[ LR(|A, B) ] ≥ P[ LR(A) ] · 1/2,

and similarly, for the mirror event LR(A, B|), we have P[ LR(A, B|) ] ≥ P[ LR(B) ] · 1/2. Since the intersection of the events LR(|A, B) and LR(A, B|) implies LR(D), the Harris-FKG inequality gives

  P[ LR(D) ] ≥ P[ LR(|A, B) ∩ LR(A, B|) ] ≥ P[ LR(A) ] P[ LR(B) ] / 4,

which is just (12.18).

⊲ Exercise 12.30. Using (12.18), complete the proofs of (i) and (ii) of the proposition, and explain what to change so that the proof works for bond percolation on Z2.

The above proof used, in a crucial way, the symmetry between primal and dual percolation. There are several other proofs of Proposition 12.21, which is important, since they generalize in different ways. One of the most general such results is [GriM11], where a bound of the type (12.18) is proved for all p, even in a stronger form, see [Gri99, Lemma 11.73]:

  Pp[ LR(r, ℓr) ] ≥ fℓ( Pp[ LR(r) ] ), with fℓ(x) ≥ c(ℓ) x and lim_{x→1} fℓ(x) = 1.   (12.19) {e.RSWpkey}

One symmetry is still needed there: there is no difference between horizontal and vertical primal crossings; for instance, for critical site percolation on Z2, since the lattice has that non-trivial symmetry, the result is also known. Some people (including Grimmett) call Proposition 12.21 the "box-crossing lemma", and (12.18) or (12.19) the "RSW-inequality".

These RSW estimates are the sign of criticality, which means that p = 1/2 is neither very subcritical nor supercritical. In particular, they imply the following polynomial decay:

⊲ Exercise 12.31. Show that for p = 1/2 site percolation on TG or bond percolation on Z2, there exist constants ci, αi such that c1 n^{−α1} ≤ P[ 0 ←→ ∂Bn(0) ] ≤ c2 n^{−α2}.   {ex.1armpoly}

Nevertheless, more work was needed to fully establish the natural conjecture:

Theorem 12.22 (Harris-Kesten theorem [Har60], [Kes80]). pc(Z2, bond) = pc(TG, site) = 1/2. Moreover, θ(pc) = 0.   {t.HarrisKesten}

Sketch of proof. The simpler direction is pc ≥ 1/2: the RSW estimates in annuli, Proposition 12.21 (ii), imply the polynomial decay of Exercise 12.31, hence θ(1/2) = 0, and therefore pc ≥ 1/2. Harris did not have the RSW estimates available, hence followed a different route. If there is an infinite white cluster on TG at p = 1/2 a.s., then, by symmetry, there is also an infinite black cluster; and we know from Theorem 12.10 that there is a unique white and a unique black infinite cluster.

However, it is pretty hard to imagine that a unique infinite white cluster can exist and still leave space for the infinite black cluster, so this really should not happen. The simplest realization of this idea is due to Yu Zhang (1988, unpublished), as follows. Assume θ(1/2) > 0. Let Ci(n) for i ∈ {N, S, E, W} be the event that the North, South, East, West side, respectively, of the n × n box B(n) is connected to infinity within R2 \ B(n). The box B(n) will intersect the infinite cluster for n large enough, hence

  P[ ∩_{i∈{N,E,S,W}} Ci(n)^c ] → 0 as n → ∞,

and then, using the FKG-inequality and symmetry,

  P[ Ci(n)^c ]^4 ≤ P[ ∩_{i∈{N,E,S,W}} Ci(n)^c ] → 0 as n → ∞.

Also, let Di(n) be the same events in the dual percolation, for which the same holds. Therefore,

  P[ ∩_{i∈{N,E,S,W}} Ci(n) ∩ Di(n) ] > 0 for n large enough.

But this event, together with the uniqueness of both the primal and the dual infinite cluster, is impossible due to planar topology. See Figure 12.6. Hence θ(1/2) = 0.

[Figure 12.6: Zhang's argument for the impossibility of θ(1/2) > 0.   {f.Zhang}]

For the direction pc ≤ 1/2, the idea is that at p = 1/2 we already have that large boxes are crossed with positive probability (the RSW lemma), and, with a bit of experience with the sharp threshold phenomena in critical systems, discussed in Section 12.2, it seems plausible that by raising the density to any 1/2 + ǫ, the probability of crossings in very large boxes will be very close to 1. This can be proved in several ways. Kesten first of all proved that E_{1/2}[ |Piv(n)| ] ≥ c log n, where Piv(n) is the set of pivotal bits for the left-right crossing of the n × n square, in order to use the Margulis-Russo formula (12.9). This is not surprising: it is easy to prove using two exploration paths that there is a pivotal point with positive probability, say at distance at least n/4 from the boundary of the square, and then one can use RSW in dyadic annuli around that first pivotal point to produce logarithmically many pivotals with high conditional probability. See Figure 12.7.

[Figure 12.7: Producing c log n pivotals in an n-box with positive probability.   {f.epsdelta}]

Therefore, by the Margulis-Russo formula, d/dp Pp[ LR(n) ] ≥ c log n at p = 1/2. In fact, the same proof shows that, assuming that p satisfies 1 − ǫ > Pp[ LR(n) ] > ǫ, we still have Ep[ |Piv(n)| ] ≥ c(ǫ) log n, hence, by the Margulis-Russo formula (12.9),

  d/dp Pp[ LR(n) ] ≥ c(ǫ) log n, for all p with 1 − ǫ > Pp[ LR(n) ] > ǫ,   (12.20) {e.logpiv}

but that means that the size of this ǫ-threshold interval for LR(n) must be at most C/log n. Therefore, for all ǫ, δ > 0, if n > nǫ,δ, then P_{1/2+ǫ}[ LR(n) ] > 1 − δ. Moreover, either from the RSW-inequality (12.19) for general p, or from doing the above Margulis-Russo argument in the n × 2n rectangle, for large enough n we have P_{1/2+ǫ}[ LR(n, 2n) ] > 1 − δ.

A slightly different route to arrive at the same conclusion, with less percolation-specific arguments but with more sharp threshold technology, is to use the BKKKL Theorem 12.17 instead of the Margulis-Russo formula. Then we need to bound only the maximal influence m_p^{LR(n)} of any bit on the event LR(n). Wherever the bit is located, if it is pivotal, then it has the alternating 4-arm event to the sides of the n × n square; hence at least one of these primal and one of these dual arms is of length n/2, and thus, by Exercise 12.31,

  m_p^{LR(n)} ≤ Pp[ 0 ←→ ∂B(n/2) ] ∧ P_{1−p}[ 0 ←→ ∂B(n/2) ] ≤ c2 n^{−α2} for all p.

Then Theorem 12.17 shows that the ǫ-threshold interval for LR(n) is at most Cǫ/log n, and we get (12.20) again.

Now, the highly probable large crossings given by (12.20) can be combined, using the FKG property and a simple renormalization idea, to give long connections. Namely, take a tiling of the infinite lattice by n × n boxes, giving a grid Gn isomorphic to Z2 (with the n-boxes as vertices and with the obvious neighbouring relation), and define the following dependent bond percolation process on Gn: declare the edge between two boxes open if the n × 2n or 2n × n rectangle given by the union of these two boxes is crossed in the long direction and each box is crossed in the orthogonal direction. The probability of each edge of Gn to be open is larger than 1 − δ if n is large enough, and if two edges do not share an endpoint, then their states are independent of each other. Therefore, a Peierls

argument establishes the existence of an infinite cluster in this renormalized percolation process, which implies the existence of an infinite cluster also in the original percolation. This finishes the proof of θ(1/2 + ǫ) > 0.

[Figure 12.8: Supercritical renormalization: bond percolation on the grid Gn of n-boxes.   {f.renorm}]

At criticality, there are large finite clusters, but there is no infinite one. It is tempting to try to define a "just-infinite cluster" at pc. A natural idea is to hope that the conditional measures Pp[ · | 0 ←→ ∂Bn(o) ] have a weak limit as n → ∞. This limit indeed exists for p = pc(Z2) [Kes86], and is called Kesten's Incipient Infinite Cluster, the IIC. It was later shown by Kesten's PhD student Antal Járai that many other natural definitions yield the same measure [Jár03]. See also [HamPS12].

⊲ Exercise 12.32.
(a) Show that the "conditional FKG-inequality" does not hold: find three increasing events A, B, C in some Ber(p) product measure space such that Pp[ AB | C ] < Pp[ A | C ] Pp[ B | C ].
(b) Show that conditional FKG would imply that Pp[ · | 0 ←→ ∂Bn+1(o) ] stochastically dominates Pp[ · | 0 ←→ ∂Bn(o) ], restricted to any box Bm(0) with m < n. (However, this monotonicity is not known and might be false.)

(Here will come a very short intro to the miraculous world of statistical mechanics in the plane at criticality.) Very briefly, the main point is that many such models have conformally invariant scaling limits: the classical example is that simple random walk on any planar lattice converges after suitable rescaling to the conformally invariant planar Brownian motion (Paul Lévy, 1948). Similar results hold or should hold for critical percolation, the critical Ising model, the uniform spanning tree, the loop-erased random walk, the self-avoiding walk (the n → ∞ limit of the uniform measures on self-avoiding walks of length n), domino tilings (the uniform measure on perfect matchings), the Gaussian Free Field (the "canonical" random height function), and the FK random-cluster models; see [Wer07, Wer03, Schr07]. For instance, although the value of pc is a lattice-dependent local quantity (see Conjecture 14.11), critical percolation itself should be universal: "viewed from far", it should look the same and be conformally invariant on any planar lattice, even though criticality happens at different densities.

The methods developed in the last decade are good enough to attack the popular percolation questions regarding critical exponents: for instance, the near-critical percolation probability satisfies θ(p) = (p − pc)^{5/36+o(1)} as p ց pc (proved for site percolation on the triangular lattice). Oded Schramm noticed in 1999 that, using the conformal invariance and the Markov property inherent in such random models, many questions can be translated (via the Stochastic Löwner Evolution) to Itô calculus questions driven by a one-dimensional Brownian motion. However, the existence and conformal invariance of a critical percolation scaling limit has been proved so far only for site percolation on the triangular lattice, by Stas Smirnov (2001); there is some small combinatorial miracle there that makes the proof work.

On the other hand, for tree-like graphs, even pc can be computed in many cases, e.g., using multi-type branching processes [Špa09], and if we know everything about Green's function, we can understand how percolation behaves.

⊲ Exercise 12.33. Show that on a regular tree Tk, k ≥ 3, we have θ(p) ≍ p − pc as p ց pc.   {ex.treebeta}

For Zd with d ≥ 19, a perturbative Fourier-type expansion method called the Hara-Slade lace expansion works, and, e.g., θ(p) ≍ p − pc as p ց pc. These mean field critical exponents are conjecturally shared by many-many transitive graphs, namely all mean-field graphs, which should include Euclidean lattices for d > 6, all non-amenable groups, and probably most groups "in between"; the conjecture is open for Zd with 3 ≤ d ≤ 18. The method relates Fourier expansions of percolation (or self-avoiding walk, etc.) quantities to those of simple random walk quantities, see [Sla06]. (One paper where it is done entirely in "x-space", without Fourier, is [HarvdHS03].) The method also identifies that the scaling limit should be the Integrated Super-Brownian Excursion on Rd, but does not actually prove the existence of any scaling limit. A readable simple introduction to this scaling limit object is [Sla02].

Mean-field criticality has been proved without lace expansion in some non-amenable cases other than regular trees: for highly non-amenable graphs [PaSN00, Scho01], for groups with infinitely many ends and for planar transitive non-amenable graphs with one end [Scho02], and for a direct product of two trees [Koz11]. Extending the lace expansion is problematic partly because it works best with Fourier analysis, which does not really exist outside Zd. Moreover, for these non-amenable cases it is not clear what the scaling limit should be: it still probably should be the Integrated Super-Brownian Excursion, but on what object? Some sort of scaling limit of Cayley graphs, a bit similarly to how Rd arises from Zd, is the construction of asymptotic cones. Mark Sapir conjectures [Sap07] that if two groups have isometric asymptotic cones, then they have the same critical exponents. An issue regarding the use of asymptotic cones in probability theory was pointed out by Itai Benjamini [Ben08]: the asymptotic cone of Zd is Rd with the ℓ1-metric, while the scaling limits of interesting stochastic processes are usually rotationally invariant, hence the ℓ2-distance should be more relevant. For instance, a uniform random ℓ1-geodesic in Z2 between (0, 0) and (n, n) is very likely to be O(√n)-close to the ℓ2 geodesic, the straight diagonal line. A possible solution Benjamini suggested was to consider somehow random geodesics in the definition of the asymptotic cone.
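On a tree, θ(p) is computable from a generating-function fixed point, which makes the θ(p) ≍ p − pc behaviour of Exercise 12.33 easy to see numerically. A Python sketch (my own illustration; for simplicity it works on the rooted binary tree, where every vertex has two children and pc = 1/2, which exhibits the same behaviour as T3):

```python
def theta(p, iters=20000):
    # survival probability of the root's cluster under Ber(p) percolation on the
    # rooted binary tree: the largest root of s = 1 - (1 - p*s)^2, found by
    # iterating the recursion downwards from s = 1
    s = 1.0
    for _ in range(iters):
        s = 1.0 - (1.0 - p * s) ** 2
    return s

# for p > 1/2 the nonzero fixed point is (2p - 1) / p^2, which is
# asymptotically 8 * (p - 1/2) as p decreases to pc = 1/2
ratio = theta(0.51) / 0.01             # should be close to 8
```

The closed form (2p − 1)/p² makes the linear vanishing θ(p) ≍ p − pc at pc = 1/2 explicit, and the iteration reproduces it.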

s. if needed). show that any supercritical GW tree (you may assume E[ ξ 2 ] < ∞ for the oﬀspring distribution. the straight diagonal line. conditioned on survival.11 on the locality of pc below. (i) If G is a transient transitive graph.25 ([Pet08]). and see [Pet08] for more details. These anchored isoperimetric inequalities are enough to prove. ⊲ Exercise 12. * Without consulting [LyPer10]. it is far from clear that this idea would work in other groups. the conjecture is proved for p > n−1/2+ǫ . A simple discovery I made (in 2003. where e(C1 . Instead. and Conjecture 14.BLS} . (ii) Similarly. The result implies that a weaker (the so-called anchored) version of any isoperimetric inequality satisﬁed by the original graph is still satisﬁed by any inﬁnite cluster. 143 {c.4 Geometry and random walks on percolation clusters {ss.g.repulsion} {c. or could satisfy any IPd .percgeom} When we take an inﬁnite cluster C∞ at p > pc (G) on a transitive graph. are the large-scale geometric and random walks properties of G inherited? Of course. My favourite formulation is the following. positive speed and zero speed also survive. Part (ii) and hence also (i) are known for non-amenable Cayley graphs for all p > pc [BenLS99]. possibly Cǫ n log n. Conjecture 12. is a. transient. However. Part (i) is known for graphs with only exponentially many minimal cutsets (such as Cayley graphs of ﬁnitely presented groups) for p close enough to 1 [Pet08]. no percolation cluster (at p < 1) can be non-amenable. There are relaxed versions of isoperimetric inequalities that are conjectured to survive (known in some cases) but I don’t want to discuss them right now. As we will see in a minute. C∞ ) > n < Cp exp(−cp n). let me just give some of my favourite conjectures.34. for any ǫ > 0. I proved this result for Zd . 
literally on the margin of a paper by my advisor) was that good isoperimetry inside C∞ would follow from certain large deviation results saying that it is unlikely for the cluster Co of the origin to be large but ﬁnite. e. The giant cluster at p = (1+ ǫ)/n on the hypercube {0. all p > pc (Zd ). C2 ) is the number of edges with one endpoint in C1 another in C2 . In [Pet08]. Conjecture 12..24 (folklore). 12.23 ([BenLS99]). since arbitrarily bad pieces occur because of the randomness. 1}n has poly(n) mixing time.2. and for any inﬁnite graph with only exponentially many minimal cutsets for p close enough to 1. Conjecture 12. called exponential cluster repulsion.√ is very likely to be O( n)-close to the ℓ2 geodesic. Part (ii) is known also for p close enough to 1 on the lamplighter groups Z2 ≀ Zd .11 on pc < pu above. By the result of [AngB07] mentioned in the last paragraph of Section 12. then for all p > pc (G). d ≥ 3. these questions and ideas are related also to such fundamental questions as Conjecture 12. and for all p > pc for d ≤ 2 [ChPP04]. then all inﬁnite percolation clusters are also transient. Pp |Co | < ∞ but ∃ an inﬁnite C∞ with e(Co . If G is a transitive (unimodular?) graph.

hence. It is unclear on what groups {q.25 (but very far from the exponential decay) is a theorem by Tim´ ar: in a non-amenable unimodular transitive graph. a. is it necessarily of polynomial growth? From the expanding endomorphism of Zd it is trivial to construct a strongly scale-invariant tiling. let a connected Følner sequence Fn ր G such that for almost all percolation conﬁgurations. already mentioned in Section 4.transience (using Thomassen’s criterion (5. A nice result related to Conjecture 12. See [NekP09] for more on this. (3) for each n ≥ 1.26 ([NekP09]). However. using the evolving sets theorem. percolation theory). called renormalization. there is such a tiling graph Gn+1 on Gn in such a way that the resulting nested sequence of tiles T n (x) ∈ Gn containing any ﬁxed vertex x of G exhausts G. with density θ(p).23. whose proof appears to be a huge challenge: Question 12. This was known before [Pet08]. or at least the most conceptual. Conjecture 12. but this is the simplest proof by far.unique} ci (W ) denote the number of vertices in the ith largest connected component of W . the return probabilities inside C∞ are pn (x. n→∞ {q. therefore. x) ≤ Cd n−d/2 . there is a key probabilistic ingredient missing.2)). it is actually not that important for the presently existing forms of the method. not only the survival of the anchored d-dimensional isoperimetric inequality follows. and (i. on Zd .25 would imply part (i) of Conjecture 12. j ) is an edge of G iff there is an edge of G connecting Ti with Tj . But. For a ﬁnite vertex set W ⊂ G. This technique uses that Zd has a tiling with large boxes such that the tiling graph again looks like Zd itself. (2) the following tiling graph G is isomorphic to G: the vertex set is I . [Tim06a] The reason that Conjecture 12. One algebraic reason for this tiling is the subgroup sequence (2k Zd )∞ k=0 .27 ([NekP09]).s.4. but this uses the commutativity of the group very much. 
given by the such tilings are possible: expanding endomorphism x → 2x. presently available only on Zd . A harder result proved in [NekP09] is that the Heisenberg group also has Cayley graphs with strongly scale-ivariant tilings. Does there exist lim c2 (Fn ∩ C∞ ) = 0. Furthermore. they are only conjecturally strong enough for return probabilities. Let G be an amenable transitive graph. and let C∞ be its unique inﬁnite percolation cluster at some p > pc (G). c1 (Fn ∩ C∞ ) 144 . any two inﬁnite clusters touch each other only ﬁnitely many times. See [Pet08] for a short description and [Gri99] for a thorough one. G has a strongly scale-invariant tiling if each T n is isomorphic to T n+1 . If G has a scale-invariant tiling. On the other hand.2). A scale-invariant tiling of a transitive graph G is a decomposition of its vertex set into ﬁnite sets {Ti : i ∈ I } such that (1) the subgraphs induced by these tiles Ti are connected and all isomorphic to each other.tiling} Question 12. Although this geometric property of the existence of scale-invariant tilings looks like a main motivation and ingredient for percolation renormalization.25 is known on Zd for p arbitrarily close to pc is due to a fundamental method of statistical physics (in particular. but also the survival of almost the entire isoperimetric proﬁle (as deﬁned in Section 8.
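The Zd case of such a tiling is easy to check by machine as well as by hand: tiling Z² into 2×2 blocks, coming from the index-4 subgroup 2Z², each tile induces a connected subgraph, and two tiles are joined by an edge of Z² exactly when their index vectors are neighbours in Z², so the tiling graph is again the square grid. A toy verification on a finite window (all names here are my own):

```python
from itertools import product

def tile(i, j):
    """The 2x2 tile of Z^2 with index (i, j), coming from the index-4
    subgroup 2Z^2, i.e. from the expanding endomorphism x -> 2x."""
    return {(2 * i + a, 2 * j + b) for a in (0, 1) for b in (0, 1)}

def adjacent_tiles(i, j, k, l):
    """Is there an edge of Z^2 connecting tile T(i,j) with T(k,l)?"""
    return any(abs(x - u) + abs(y - v) == 1
               for (x, y) in tile(i, j) for (u, v) in tile(k, l))

# Property (2) of a scale-invariant tiling, on an N x N window of tile
# indices: distinct tiles are connected by an edge of Z^2 exactly when
# their index vectors are at L^1-distance 1 -- the tiling graph is the
# square grid again.
N = 5
grid_ok = all(
    adjacent_tiles(i, j, k, l) == (abs(i - k) + abs(j - l) == 1)
    for (i, j), (k, l) in product(product(range(N), repeat=2), repeat=2)
    if (i, j) != (k, l)
)
```

Iterating the same doubling produces the nested tiles T^n(x) of property (3), each isomorphic to the next after rescaling, which is the "strongly scale-invariant" feature used above.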

lim_{n→∞} |Fn ∩ C∞| / |Fn| = lim_{n→∞} c1(Fn ∩ C∞) / |Fn| = θ(p); moreover, lim_{n→∞} c2(Fn ∩ C∞) / c1(Fn ∩ C∞) = 0 ?

Why would this be true? The main idea is that the intersection of the unique percolation cluster with a large Følner set should not fall apart into small pieces, because if two points are connected to each other in the percolation configuration, then the shortest connection should not be very long. Let ω denote the percolation configuration, and distω(x, y) the chemical distance, i.e., the distance measured inside the percolation clusters (infinite if x and y are not connected). An affirmative answer would follow from an affirmative answer to the following question, which can be asked for any transitive graph, not only amenable ones; it would clearly imply Oded Schramm's conjecture on the locality of pc in the amenable setting, and it is closely related to Conjecture 12.19 on the uniqueness of the giant cluster in finite transitive graphs.

Question 12.28. {q.chemical} Is it true that for any transitive (unimodular?) graph, for any p > pu(G) there is a K(p) < ∞ such that for any x, y ∈ V(G),

Kx,y(p) := E[ distω(x, y) | distω(x, y) < ∞ ] ≤ K(p) < ∞;   (12.21) {e.chemical}

moreover, there is a κ < 1 with K(p) < O(1) (p − pu)^{−κ+o(1)} as p ց pu ?

The existence of K(p) < ∞ is known on Zd [AntP96], which, however, does not decide the finiteness of the mean at pc, and the p ց pc behaviour is not known. The finiteness K(p) < ∞ might hold for almost all values p > pc; however, there could exist special values (such as p = pu on certain graphs) with K(p) = ∞. I do not presently see a conceptual reason for κ < 1 to hold: this condition is there only because we will need it below, and I do not know any example where some κ > 0 is actually needed.

A non-trivial example where one might hope for explicit calculations is site percolation on the hexagonal lattice [Wer07]. If x and y are neighbours in G, then the event that they are connected in ω and the shortest path between them leaves the r-ball but does not leave the 2r-ball around them is comparable to the largest radius of an alternating 4-arm event around them being between r and 2r. (This rough equivalence uses the so-called separation of arms phenomenon.) Moreover, the outer boundary of a closed cluster of radius roughly r is an open path having length r^{4/3+o(1)} with large probability. (This follows from the dimension being 4/3 and from [GarPS10b].) These, together with the 4-arm exponent 5/4 and the obvious observation that P[ distω(x, y) < ∞ ] > c > 0, imply that

n^{−5/4+o(1)} = c1 α4(n) ≤ P[ distω(x, y) > n, distω(x, y) < ∞ ] ≤ c2 α4(n^{3/4}) = n^{−15/16+o(1)} ;

using the theory of near-critical percolation and the Antal-Pisztora chemical distance results (which are easy in 2 dimensions), it implies that κ ≤ 1/9 in (12.21). An affirmative answer to Question 12.

28 would also prove that pc < pu on non-amenable transitive graphs, via the following unpublished gem of Oded Schramm (which can now be watched in [Per09] or read in [Koz11]); the theorem and its proof are given below. But first, where does the bound κ ≤ 1/9 come from? (1) At p = pc + ǫ, percolation looks critical up to scales ǫ^{−4/3} and supercritical at larger scales; (2) the probability that the radius of the 4-arm event is about ǫ^{−4/3} is ǫ^{4/3·5/4} = ǫ^{5/3}; and (3) at this scale, the length of the path is at most ǫ^{−4/3·4/3} = ǫ^{−16/9}. Altogether, ignoring smaller order effects, we get an expectation at most ǫ^{5/3−16/9} = ǫ^{−1/9}. However, this is not known to imply that lim_{pցpc} K(p) < ∞, because continuity is far from clear here. Further examples to try are the groups where critical percolation has the mean-field behaviour, whenever the mean-field behaviour is actually proved (such as Zd with d ≥ 19, or planar non-amenable unimodular transitive graphs); there, many things can be computed, based (though barely) on [KoN09b].

On the other hand, a natural candidate to kill Question 12.28 for all p > pc could be percolation near pu on graphs where there is already a unique infinite cluster at pu. The canonical example of such a graph is a non-amenable planar graph G with one end: here pu(G) is equal to 1 − pc(G∗), where G∗ is the planar dual to G, and percolation at pu(G) is understood using the mean-field theory at pc(G∗), where we know from Theorem 12.8 that there are only finite clusters. As Gady Kozma pointed out to me, at pu(G) the expected chemical distance between neighbours is known to be finite [KoN09a], but our back-of-the-envelope arguments suggest that Ppu[ ∞ > distω(x, y) > r ] ≍ 1/r for neighbours x, y, and hence K(pu) = ∞. Probably any κ > 0 would do for p ց pu, but proving this seems hard.

⊲ Exercise 12.35. *** Is it true that the chemical distance in percolation at pu between neighbours in a planar unimodular transitive non-amenable graph with one end satisfies Kx,y(pu) = ∞ ?

Theorem 12.29 (O. Schramm). {t.odedpcpu} Let G be a transitive unimodular non-amenable graph, with spectral radius ρ < 1. Consider Ber(pc) percolation and a SRW (Xn)_{n≥0} on G, independent from the percolation. Then, in the joint probability space of the percolation and the SRW,

P[ X0 and Xn are in the same percolation cluster at pc ] ≤ 2ρ^n.

Proof. Consider the following branching random walk on G. Start m + 1 particles from o, doing independent simple random walks for n steps. Then each particle of this first generation branches into m particles, and all the (m + 1)m particles (the second generation) continue with independent SRWs for n more steps, then each particle branches into m particles again, and so on. The expected number of particles at o at the end of the t-th generation, i.e., after tn SRW steps, is

p_{tn}(o, o) (m + 1) m^{t−1} ≈ ρ^{tn} m^t,

where "≈" means equal exponential growth rates. If m < ρ^{−n}, then this expectation decays exponentially fast in t. Since p_{2n}(o, x) ≤ p_{2n}(o, o) for any x ∈ V(G), we get that the branching random walk is transient: for any fixed radius r > 0, the expected number of total visits to Br(o) by all the particles is finite.

Now, using this branching walk and the Ber(pc) percolation on G, independent from each other, we define a bond percolation process ξ on the rooted (m + 1)-regular tree Tm+1 that indexes the branching random walk: if v is

a child of u in Tm+1, then the edge (u, v) will be open in ξ if the branching random walk particles corresponding to u and v are at two vertices of G that are connected in the Ber(pc) percolation. That is, the probability for any given edge of Tm+1 to be open in ξ is P[X0 ←→ Xn at pc]. This ξ is, of course, not a Bernoulli percolation, and at first sight it does not even look invariant, since the tree is rooted and the flow of time for the random walk has a direction. However, using that G is unimodular and SRW is reversible, it can be shown that ξ is in fact Aut(Tm+1)-invariant:

⊲ Exercise 12.36. Show that the above percolation process ξ on Tm+1 is automorphism-invariant.

We have now all ingredients ready for the proof. We know from Theorem 12.8 that we have only finite clusters in Ber(pc); translating this, together with the transience of the branching random walk for m < ρ^{−n}, into the bond percolation configuration ξ, we get that ξ has only finite clusters. So, since ξ is an automorphism-invariant percolation on a tree with only finite clusters, its average degree is at most 2: (m + 1) P[X0 ←→ Xn at pc] ≤ 2. Taking m := ⌈ρ^{−n}⌉ − 1, we get that P[X0 ←→ Xn at pc] ≤ 2/⌈ρ^{−n}⌉ ≤ 2ρ^n, as desired.

The way this theorem is related to pc < pu is the following. (1) The theorem implies that we have a definite exponential decay of connectivity at pc in certain directions; (2) it seems reasonable that we still should have a decay at pc + ǫ for some small enough ǫ > 0 (we will explain this in a second); (3) but then pc + ǫ < pu, since from the FKG-inequality it is clear that at p > pu any two points are connected with a uniformly positive probability: at least with θ(p)^2. Indeed, assume that pc = pu, and take p = pu + ǫ with small ǫ > 0. By (12.21) and Markov's inequality, Pp[ distω(X0, Xn) > 2K(p)n | distω(X0, Xn) < ∞ ] < 1/2. So, when lowering the level of percolation from p to pu = pc, the probability that we still have a connection between X0 and Xn is at least θ(p)^2/2 · (1 − ǫ)^{2K(p)n} ≍ exp(−ǫ^{1−κ+o(1)} n), which, if κ < 1 and ǫ is chosen sufficiently small, is larger than 2ρ^n; this contradicts Theorem 12.29.

Finally, as pointed out in [Tim06a], good bounds on the chemical distance (see Question 12.28) can imply good cluster repulsion (see Conjecture 12.25), not only through the renormalization method (see Question 12.27), but also directly. The idea is that, for neighbours x, y ∈ V(G), if Cx ≠ Cy, then a large touching set e(Cx, Cy) = {ei : i ∈ I} would mean that, after inserting the edge {x, y}, the endpoints of the edges ei would be connected but with large chemical distances:

⊲ Exercise 12.37 (Ádám Timár). Let G be a unimodular transitive graph. Assume that, for some p, we have Ep[ distω(x, y) | Cx = Cy ] < ∞. Show that Ep[ e(Cx, Cy) | Cx ≠ Cy ] < ∞. (Hint: use the Ber(p) bond percolation idea described above, implemented using a suitable Mass Transport.)
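To get a feeling for the chemical distance, one can measure distω between opposite corners of a box in Z² by breadth-first search inside a Ber(p) bond configuration. This is only a toy Monte Carlo on a finite box (all names and parameters here are my own ad hoc choices), but it shows distω staying comparable to the graph distance well above pc:

```python
import random
from collections import deque

def chemical_distance(n, p, seed=0):
    """Ber(p) bond percolation on an n x n box of Z^2; returns the
    chemical distance dist_omega between (0,0) and (n-1,n-1) measured
    by BFS inside the open subgraph, or None if they are not connected
    within the box.  (Toy model: boundary effects are ignored.)"""
    rng = random.Random(seed)
    open_edge = {}
    def is_open(e):              # lazily sample each edge exactly once
        if e not in open_edge:
            open_edge[e] = rng.random() < p
        return open_edge[e]
    src, tgt = (0, 0), (n - 1, n - 1)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x, y = queue.popleft()
        if (x, y) == tgt:
            return dist[(x, y)]
        for u, v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= u < n and 0 <= v < n and (u, v) not in dist:
                if is_open(frozenset({(x, y), (u, v)})):
                    dist[(u, v)] = dist[(x, y)] + 1
                    queue.append((u, v))
    return None

# Well above p_c(Z^2, bond) = 1/2 the chemical distance stays close to
# the graph distance 2(n-1); near p_c it would stretch badly.
samples = [chemical_distance(20, 0.9, seed=s) for s in range(40)]
hits = [d for d in samples if d is not None]
```

The chemical distance can never be smaller than the graph distance, so every sample is at least 2(n−1) = 38; at p = 0.9 it is typically not much larger.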

13 Further spatial models

13.1 Ising, Potts, and the FK random cluster models {ss.Ising}

The Ising and the more general Potts models have a huge literature, partly because they are more important for physics than Bernoulli percolation is. The FK random cluster models, in some sense, form a joint generalization of Potts, percolation, and the Uniform Spanning Trees and Forests, and hence are truly fundamental. The Ising model is the most natural site percolation model with correlations between the sites. Since percolation and the USF already exhibit many of their main features, we will discuss them rather briefly. See [GeHM01] for a great introduction and survey of these models (and further ones, e.g., the hard core model), [Lyo00] specifically for their phase transitions on non-amenable graphs, and [BerKMP05, Sly10] for relationships between phase transitions in spatial, dynamical, and computational complexity behaviour. A future version of our notes will hopefully expand on these phase transitions a bit more.

For the definition of these models, we first need to focus on finite graphs G(V, E), with a possibly empty subset ∂V ⊂ V of so-called boundary vertices. For spin configurations σ : V −→ {0, 1, . . . , q − 1}, consider the Hamiltonian (or energy function)

H(σ) := Σ_{(x,y)∈E(G)} 1{σ(x) ≠ σ(y)},   (13.1) {e.Hamq}

then fix β ≥ 0 and define the following probability measure (called Gibbs measure) on configurations that agree with some given boundary configuration η on ∂V:

Pη_β[σ] := exp(−2βH(σ)) / Zη_β, where Zη_β := Σ_{σ : σ|∂V = η} exp(−2βH(σ)).   (13.2) {e.Gibbsq}

(The reason for the factor 2 in the exponent will become clear in the next paragraph.) This Zβ is called the partition function. The more disagreements between spins there are, the larger the Hamiltonian and the smaller the probability of the configuration is. The interpretation of β is the inverse temperature 1/T: at large β, the system prefers order, disagreements are punished more, while at small β the system does not care that much: thermal noise takes over. In particular, β = 0 gives the uniform measure on all configurations. The Potts(q) model is the obvious generalization with q possible states for each vertex instead of the Ising case q = 2.

From now on, we will focus on the Ising case, q = 2, and therefore switch to the usual Ising setup: instead of σ(x) ∈ {0, 1}, we take σ : V(G) −→ {−1, +1}, to be interpreted as + and − magnetic spins. A natural extension is to add an external field with which spins like to agree. So, we will consider

H(σ, h) := −h Σ_{x∈V(G)} σ(x) − Σ_{(x,y)∈E(G)} σ(x)σ(y),   (13.3) {e.Hamh}

where h > 0 means spins like to be +1, while h < 0 means they like to be −1. Note that H(σ, 0) = 2H(σ) − |E(G)| with the definition of (13.1); hence, if we now define the corresponding measure and partition function without the factor 2, Pη β,

h [σ ] := exp(−βH (σ.h is just a boring normalization factor. Show that for the total x ∈V So far. y ) and vertices x have their own “coupling strengths” neighbouring spins like to agree.3). and Zβ.h is already determined by η |∂U . and hence Pη β. See. instead of constant 1 and h. for boundary condition η and have Eµ [ H (σ ) ] = E . then it is called antiferromagnetic.partition function without the factor 2. it out is a Markov random ﬁeld: if U ⊂ V (G) and we set ∂V = V \ U and ∂U = ∂V U ⊂ ∂V . A further generalization is where edges (x. See [Gri10.0 [σ ] = Pβ [σ ]. it clearly satisﬁes the spatial Markov property. it is not obvious what the measure on an inﬁnite graph should be. the Hammersley-Cliﬀord theorem says that for any graph G(V.h [ H ] . or in other words. the measure Pη β. Jx.y < 0 for all (x. We 149 . A colouring of the vertices of a graph with q colours is called proper if no neighbours share their colours. while if Jx. In fact. the model is called ferromagnetic. If Jx. The ﬁrst signs of this are the following: ⊲ Exercise 13. these are not at all true.h using a product over edges of G.1]. as Pη β.Gibbsh} η then exp(−βH (σ. only. h)) η Zβ.. Similarly to generating functions in combinatorics. Another good reason for deﬁning the measure from the Hamiltonian via exp(−βH ) is that.h .1.h [ H ] = − ∂ ∂ ln Zβ. The role of the positive energy condition will be clear from the following example.y > 0 for all (x. our Gibbs measures Pη β. since we deﬁned the measure Pη β.h from the Hamiltonian was a bit arbitrary. with variance Varβ. e. The uniform distribution on proper q -colourings of a graph can be considered as the zero temperature antiferromagnetic Potts(q ) model.g. h)) . h).h := σ:σ|∂V =η exp(−βH (σ. among all probability measures µ on {±1}V (G) that satisfy the {ex. h) := −(β |V |)−1 ln Zβ. So. As opposed to Bernoulli percolation.y and Jx . However.h [ M ] = − ∂h f (β. 
and it is not a Gibbs measure.partition} average magnetization M (σ ) := |V |−1 (b) The average free energy is deﬁned by f (β. But we will stick to (13. 0)) = Kβ exp(−2βH (σ )) with a constant Kβ .2]. y ). i. only in a certain β → ∞ limit. (13.h [H ] = − Eβ. Section 7. ∂β ∂β ∂ σ (x). any given energy level E ∈ R. (a) Show that the expected total energy is Eβ. Section 12.4) {e. then we are “forced” to consider these Gibbs measures. But it does not have the ﬁnite energy property. proved using Lagrange multipliers. This is probably due to Boltzmann. y ) ∈ E (G). and it is certainly a Markov ﬁeld. E ).h maximize the entropy. if we accept the Second Law of Thermodynamics. First of all. they might think that the way of deﬁning the measure Pη β. we have Eβ. the Markov random ﬁelds satisfying the positive energy condition are exactly the Gibbs random ﬁelds: measures that are given by an exponential of a Hamiltonian that is a sum over cliques (complete subgraphs) of G. we have been talking about the Ising (and Potts) model on ﬁnite graphs. then for any boundary condition η on ∂V . respectively.e. the partition function contains a lot of information about the model.h η and Zβ.h . [CovT06. If this is the ﬁrst time the Reader sees a model like this..
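The identities of Exercise 13.1 are easy to check numerically on a small graph, say the 4-cycle, by brute-force enumeration of the 2^4 spin configurations. A toy sketch (all names here are my own), checking part (a), that the expected total energy is −∂/∂β ln Zβ,h:

```python
from itertools import product
from math import exp, log

# The 4-cycle C_4: vertices 0..3, free boundary (no boundary vertices).
E4 = [(0, 1), (1, 2), (2, 3), (3, 0)]

def H(sigma, h=0.0):
    """Ising Hamiltonian (13.3): H(sigma, h) = -h*sum_x sigma_x - sum_xy sigma_x sigma_y."""
    return -h * sum(sigma) - sum(sigma[x] * sigma[y] for x, y in E4)

def Z(beta, h=0.0):
    """Partition function, brute force over the 16 spin configurations."""
    return sum(exp(-beta * H(s, h)) for s in product((-1, 1), repeat=4))

def expected_H(beta, h=0.0):
    """E_{beta,h}[H] under the Gibbs measure exp(-beta*H)/Z."""
    return sum(H(s, h) * exp(-beta * H(s, h))
               for s in product((-1, 1), repeat=4)) / Z(beta, h)

# Exercise 13.1(a):  E_beta[H] = -(d/d beta) ln Z_beta,
# verified by a central finite difference at beta = 0.7:
beta, d = 0.7, 1e-6
numeric = -(log(Z(beta + d)) - log(Z(beta - d))) / (2 * d)
```

At β = 0 this gives the uniform measure (Z = 16), and as β → ∞ the measure concentrates on the two ground states ±1, where H = −|E| = −4.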

a non-trivial critical βc ∈ (0. −1) := µ(−1) (13. say. Note that the inﬁnite graph G(V. −1) := µ(+1) − ν (+1). φ(+1. and. which is the most natural Markov chain with the Ising model at a given temperature as stationary measure: each vertex has an independent exponential clock of rate 1. let us prove the FKG inequality for the Ising model. +1}. i. which probably overrides the eﬀect of even a completely negative boundary condition. as in percolation? Actually. The simplest possible example. at least for amenable graphs. a coupling such that x ≥ y for φ-almost every pair (x. hence there are no correlations and there is a single limit measure. φ(+1.h will be a suitable Markov ﬁeld. Strassen’s theorem says that this domination µ ≥ ν is equivalent to the existence of a monotone coupling φ of the measures µ and ν on P × P . φ(−1. Is there a phase transition in β . Take the Ising model on any ﬁnite graph G(V. En ). do diﬀerent boundary conditions have an eﬀect even in the limit? Intuitively. E ). +1}V .5) {e. and when a clock rings.FKGIsing} on ∂V ⊂ V . and consider the natural partial order on the conﬁguration space {−1. and Vn ր V is an exhaustion by ﬁnite connected subsets. as promised in Section 12. That limit points exist is clear from the Banach-Alaoglu theorem. y ) ∈ P × P . Proof. we have µ ≥ ν iff µ(+1) ≥ ν (+1).1. is this question of a phase transition in the correlation decay related in any way to the connectivity phase transition in percolation? Before starting to answer these questions. any weak limit point of a sequence of measures Pη β.h . and ηn are random boundary conditions on ∂V Vn sampled according to ηn P∞ β. the spin at that vertex is updated according Now. 1. or several? In particular. do we expect a phase transition for all larger-than-one-dimensional graphs. q − 1}V with the product topology is compact. Recall that a probability measure µ on a poset (P . for any ﬁnite this requirement already suggests how to construct such measures: if P∞ β. 
since {−1. with any boundary condition η η any two increasing events A and B are positively correlated: Pη β.e. n Vice versa. then the laws E Pβ.4).e.) using the heat-bath dynamics or Gibbs sampler.1: Theorem 13. In the case h = 0. Strassen’s coupling need not be unique.h out U ⊂ V (G) and boundary condition η on ∂V U .certainly want any inﬁnite volume measure to satisfy the spatial Markov property. Therefore. +1) := ν (+1). {t. a larger β increases correlations and hence helps the eﬀect travel farther. This is one reason for the issues around Exercise 11. 1}V or {0.pm1example} is the unique monotone coupling φ. (In general. aﬀects every single spin directly. .h [A]... is the following: for P = {−1. while β = 0 is just the product measure.3.h . the question is: what is the set of (the convex combinations of ) the limit points given by ﬁnite exhaustions? Is there only one limit point. since setting h > 0. the distribution should follow (13.h is such a Markov ﬁeld on out subgraphs Gn (Vn . . We will prove this 150 . the FKG inequality says that P[ · | B ] stochastically dominates P[ · ]. ∞)? This certainly seems easier when h = 0. and in this case. i. ≥) is said to stochastically dominate another probability measure ν if µ(A) ≥ ν (A) for any increasing measurable set A ⊆ P . .h [A | B ] ≥ Pβ. . Then. E ). which we will actually need in a minute. with induced (with the expectation taken over ηn ) will converge weakly to P∞ β.

h . converging to their stationary η out η ≥ Pη U . given by + ηn ≡ +1∂ out Vn .to the Ising measure conditioned on the current spins of the neighbours. Now. replaced by +1 update). Consider now an exhaustion Vn ր V (G) with η+ U ′ ⊂ V (G) and η and η ′ are the all-plus conﬁgurations on ∂ out U and ∂ out U ′ . It will be denoted by P+ β. Similarly. which is the class We will run two Markov chains. while {Xi }i≥0 is a modiﬁed heat-bath dynamics. all i ≥ 0. stochastic domination) sequence of ′ ′ ′ any sequence Vn ր V (G) and any ηn on ∂ out Vn .t. − we can still have Xi+ +1 ≥ Xi+1 after the update.h σ (x) = Eβ. ≥). the limit measures P+ β.h ≥ Pβ. and we are done. Hence. {Xi+ }i≥0 and {Xi− }i≥0 .h . all the marginals coincide).h [ · | B ]. and ﬁxed to equal η on ∂V forever. another example is the Metropolis algorithm. µ ≤ ν . Therefore (see Exercise 13. and we couple the updates such that Xi+ ≥ Xi− is maintained for as many +1 neighbours in Xi+ as in Xi− . there is {ex.5). denoted by P− β.h ≥ Pβ. then they are equal. with stationary measure Pη β. conﬁgurations on V \ ∂V .. we are not running these two Markov chains independently. then µ = ν . ⊲ Exercise 13. this limit point for (Vn . then µ = ν . giving rise to the monotone decreasing (w. respectively. respectively. and µ|x = ν |x for 151 .e.h on U . This is an example of Glauber dynamics. but coupled in the following way: the clocks on the vertices ring at the same time in the two chains. then n measures Pβ. started from the all-plus and all-minus where steps in which a +1 would change into a −1 such that the resulting conﬁguration would cease to satisfy B are simply suppressed (or in other words. and it is even more so if we take into account that some of the −1 moves in {Xi+ }i≥0 are suppressed. which we do not deﬁne here. − (c) On any transitive inﬁnite graph. +1}V with coordinatewise ordering. then Pη β. Since Xi+ ≥ Xi− holds in the coupling for i.h . 
Any weak limit point of this sequence dominates all other possible limits. η An immediate corollary is that if η ≥ η ′ on ∂V . if U ⊂ ′ ′ ′ The Markov chains {Xi+ }i≥0 and {Xi− }i≥0 are clearly ergodic. − They are equal iff E+ β. {Xi− }i≥0 is just standard + heat-bath dynamics. all x ∈ V (i. One can easily check that of Markov chains that use independent local updates and keep Ising stationary. ηn ) must be unique.h and Pβ.) (b) Let P = {−1. for β < ∞ the chain is reversible. and hence Pη β.h σ (x) for one or any x ∈ V (G). In particular.r.h . A simple general claim about reversible Markov chains (immediate from the electric network representation) is that the stationary measure of the latter chain is simply the stationary measure of the original chain conditioned on the event that we are keeping: Pη β. hence the probability of the outcome +1 in a standard heat-bath dynamics update would be at least as big for Xi+ as for Xi− . since for any Vn there exists an m0 (n) such that ′ + Vn ⊆ Vm for all m ≥ m0 (n).h are translation invariant.2). this stochastic domination also holds for the stationary measures. Show that if µ ≤ ν on P . by example (13.h on ∂ β.2. and both µ ≥ ν and graph dominate each other.h . Conclude that if two inﬁnite volume limits of Ising measures on an inﬁnite (a) Show that if µ and ν are probability measures on a ﬁnite poset (P . then v has at least distributions. Why is this possible? If Xi+ ≥ Xi− and the clock of a vertex v rings.stochdom} a unique minimal measure. and cannot depend even on the exhaustion. (Hint: use Strassen’s coupling.
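The monotone coupling of the two heat-bath chains is easy to simulate directly. The following toy version on a cycle (free boundary; the parameters are my own ad hoc choices) runs the chains from the all-plus and all-minus starts with shared clock rings and shared uniforms, and checks that the coordinatewise ordering indeed never breaks:

```python
import random
from math import exp

n, beta = 8, 0.8
nbrs = [((i - 1) % n, (i + 1) % n) for i in range(n)]   # the cycle C_n

def prob_plus(sigma, v):
    """Heat-bath probability of updating v to +1 given its neighbours:
    exp(beta*S) / (exp(beta*S) + exp(-beta*S)), S = sum of neighbour spins.
    This is monotone increasing in the neighbouring spins."""
    s = sum(sigma[u] for u in nbrs[v])
    return exp(beta * s) / (exp(beta * s) + exp(-beta * s))

rng = random.Random(1)
hi = [+1] * n          # chain started from all-plus
lo = [-1] * n          # chain started from all-minus
ordered = True
for _ in range(5000):
    v = rng.randrange(n)   # the vertex whose clock rings (same in both chains)
    u = rng.random()       # the shared uniform: this is the monotone coupling
    hi[v] = +1 if u < prob_plus(hi, v) else -1
    lo[v] = +1 if u < prob_plus(lo, v) else -1
    ordered = ordered and all(a >= b for a, b in zip(hi, lo))
```

Since prob_plus is monotone in the neighbouring spins, hi's update threshold is always at least lo's, so the shared uniform preserves hi ≥ lo at every step, exactly as in the proof.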

152 . A In 1944.3. n→∞ n lim Eη β. at the critical and a slightly supercritical inverse temperature. the limit measures P+ β. βc (Z2 ) = 2 n while there is non-uniqueness for β > βc .h = Pβ.h and P+ β.6) {e. Ernst Ising proved in 1924 in his PhD thesis that the Ising model on Z has no phase transition: there is a unique inﬁnite volume limit for any given h ∈ R and β ∈ R≥0 .440687 β = 0. for β > β+ (d). On any transitive inﬁnite graph.IsingPeierls} for β < β− . directly from 1/3 ≤ pc (Z2 . we “just” need to + decide if P− β. So. in one the most fundamental works of statistical mechanics.h . Based on this. We will prove (13. with the gaps ﬁlled in during the next few decades) that √ 1 ln(1 + 2) ≈ 0. More precisely. d ≥ 2. bond) ≤ 2/3 once we have deﬁned the FK random cluster measures. all inﬁnite volume measures are sandwiched between P− β. he turned out to be wrong: using a variant of the contour method that we saw in the elementary percolation result in β on Zd .0 σ (0) > 0 1/3 ≤ pc (Z2 . (13. n] .440687 for h = 0: for β ≤ βc . to answer the question of the uniqueness of inﬁnite volume measures. He also computed critical exponents like Eη βc [σ (0)] = n−1/8+o(1) . β = 0.2 (c)).h and Pβ. In summary. bond) ≤ 2/3. Rudolph Peierls showed in 1933 that for h = 0 there is a phase transition n→∞ for β > β+ . then n lim inf Eη β. at given β and h. However. while for β < β− (d) there is uniqueness (see Exercise 13. Lars Onsager showed (employing partly non-rigorous math. as well. there is a unique inﬁnite volume measure.0 σ (0) = 0 In particular.1: The Ising model with Dobrushin boundary conditions (black on the right side of the box and white on the left). he proved the existence of some values 0 < β− (d) < β+ (d) < ∞ out d such that if ηn is the all-plus spin conﬁguration on ∂Z d [−n. there are at least two ergodic translation-invariant inﬁnite volume measures on Zd . 
he guessed that there is no phase transition in any dimension.6) similar result holds for the Potts(q ) models.h are ergodic.h .45 Figure 13.− ⊲ Exercise 13.
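As a quick numerical aside (using the relation 1 − p = e^{−2β} between the Ising and the FK random cluster representations, discussed in the coming pages), Onsager's critical value lands exactly on the self-dual point of the q = 2 random cluster model on Z², where p/(1 − p) = √q:

```python
from math import log, sqrt, exp

# Onsager's critical inverse temperature for the Ising model on Z^2:
beta_c = log(1 + sqrt(2)) / 2          # ~ 0.440687

# FK(q = 2) representation: p = 1 - exp(-2*beta).  At beta_c this is
# p = 2 - sqrt(2) ~ 0.5858, the self-dual point, where p/(1-p) = sqrt(2).
p_c_FK = 1 - exp(-2 * beta_c)
```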

L¨ of and by Ruelle.q) := ω ⊆E p|ω| (1 − p)|E \ω| q kπ (ω) . for any h = 0. then taking in ∂Vn := ∅ gives FUSF in the limit. and the critical behaviour must be encoded in the analytic properties of the singularities. the FK(p. If G(V. h) is diﬀerentiable in β . for any ω ⊂ E . the contribution of the boundary to these “global” quantities turns out to be negligible. due to Fortuin and Kasteleyn [ForK72]. h) describe the critical points. we can see that a big change of the free energy corresponds to big changes of quantities like the total energy and the average magnetization. using that Zd is amenable. We have hinted a couple of times at the Ising correlation and uniqueness of measure questions being analogous to the phase transition in the existence of inﬁnite clusters in Bernoulli percolation.Onsager proved his results by looking at the partition function: the critical points need to occur at the singularities of the limiting average free energy f∞ (β. These quantities are “global”. using the Lee-Yang circle theorem: if A = (ai. while taking ∂Vn := ∂V Vn with πn := {∂Vn } gives WUSF.. but otherwise all conﬁgurations will have the same probability.j lie on the unit circle. we punish a larger number of components more and more. The q = 1 case is clearly just Bernoulli(p) bond percolation. Furthermore. like E[ σ (0) ] or even the average magnetization in the inﬁnite limit measure. Indeed. A diﬀerent approach (by Preston in 1974) is to use the so-called GHS concavity inequalities [GHS70] to prove the same diﬀerentiability. as a polynomial in h. This works for any inﬁnite graph G(V.e. the Ising partition function Zβ. and it can be proved that the singularities of f∞ (β. E ).q) [ω ] := On a ﬁnite graph G(V.h .1. That there is no phase transition on Zd for h = 0 was ﬁrst proved in 1972 by Lebowitz and Martinz |S | ai. including the history. E ) is inﬁnite. hence it is not immediately clear that “local” quantities that behave well under taking weak limits. 
if we let q → 0. Here is the deﬁnition of the model.j ∈S pointers to the above discussion. see Section 11. we recover the Uniform Spanning Tree. then all the roots of the polynomial P (z ) := Therefore. Why is this so? From Exercise 13.h = Pβ. the clusters given by ω in the graph where the vertices in each part of π are collapsed into a single vertex. will also have interesting changes in their behaviour. q ) random cluster model with q = 2. then we will have as few edges as possible. See [JoS99] and the references there for S ⊆[n] i∈S. involving the entire ﬁnite domain Vn . E ).7) {e. then 153 .q) π with ZFK( p. correlations between Ising spins can be interpreted as connectivity in a diﬀerent model. any limit f∞ (β. Nevertheless. the connection between diﬀerentiability and uniqueness works only on amenable transitive graphs: Jonasson and Steif proved that a transitive graph is nonamenable iff − there is an h = 0 such that P+ β. can have roots only at purely imaginary values of h.j =1 is a real symmetric matrix. see [Gri06] for a thorough treatment of the model. (13. h) = − lim|Vn |→∞ (β |Vn |)−1 log Zβ. with a so-called boundary ∂V ⊂ V together with a partition π of it into p|ω| (1 − p)|E \ω| q kπ (ω) π ZFK( p. let Pπ FK(p. and Vn ր V is an exhaustion by ﬁnite subsets. that is.FK} where kπ (ω ) is the number of clusters of ω/π .j )n i. not just a ﬁxed window inside the domain.h for some β < ∞.h on any ﬁnite graph. and then if we let p → 0. This can be used to prove that. i. far from the boundary. so we get a single spanning cluster in the limit. disjoint subsets. However.2.
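To make the definition (13.7) concrete, here is a small brute-force sketch (an illustration in Python, not part of the original notes; the triangle graph and parameter values are arbitrary choices) that sums the FK weights over all subsets ω ⊆ E of a finite graph, with empty boundary. For q = 1 the weights are exactly the Bernoulli(p) probabilities, so the partition function is 1.

```python
from itertools import combinations

def num_clusters(n, edges):
    """Number of connected components of the subgraph ([n], edges), via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

def fk_partition(n, E, p, q):
    """Z_{FK(p,q)} = sum over omega in E of p^|omega| (1-p)^|E minus omega| q^k(omega),
    with empty boundary, so k_pi = k."""
    m = len(E)
    return sum(p**len(om) * (1 - p)**(m - len(om)) * q**num_clusters(n, om)
               for r in range(m + 1) for om in combinations(E, r))

# Triangle graph: for q = 1 the weights are Bernoulli(p), so Z = 1 (up to rounding).
E = [(0, 1), (1, 2), (0, 2)]
print(fk_partition(3, E, 0.3, 1))
```

Since q^{k(ω)} ≥ 1 for q > 1, the unnormalized mass only grows with q, which is why the normalization Z^π_{FK(p,q)} is needed.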

For q ∈ {2, 3, . . . }, the connection to the Potts(q) model is given by the Edwards-Sokal coupling, which was introduced somewhat implicitly in [SwW87] and explicitly in [EdS88]. Given G(V, E) with a boundary partition π, color each cluster of ω/π in the FK(p, q) model independently with one of q colors. (In the Edwards-Sokal coupling, the boundary condition, instead of a function η : ∂V −→ {0, 1, . . . , q − 1}, will be only a partition telling which spins in ∂V have to agree with each other.) The interpretation is that if two vertices are connected in ω, then they get the same color in the coupling, and if they are not, then their colors will be independent. Let us denote the resulting joint distribution of the FK(p, q) configuration ω and the q-coloring σ by

P^π_{ES(p,q)}[ω, σ] = (1 / Z^π_{FK(p,q)}) p^{|ω|} (1 − p)^{|E∖ω|} Π_{(x,y)∈ω∪π} 1{σ(x)=σ(y)} ,   (13.8) {e.ES}

where the formula is clear from the fact that, for any given ω, the number of compatible q-colorings is q^{k_π(ω)}; in particular, the marginal on ω is the FK(p, q) measure. Now, we need to prove that the marginal on σ is the Potts(q) model with β = β(p) := −(1/2) ln(1 − p) and boundary partition π. Note that, since we are talking about probability measures, the normalizations then actually have to agree:

Z^π_{FK(p,q)} = Z^π_{β(p),q} .   (13.9) {e.PottsFKpart}

So, fix a q-coloring σ; we may assume that it is compatible with π (i.e., σ(x) = σ(y) for all (x, y) ∈ π), since otherwise P^π_{ES(p,q)}[ω, σ] = 0 for all ω. If (x, y) ∈ E and σ(x) = σ(y), then (x, y) can be kept in ω or deleted without changing the indicator factors, contributing p + (1 − p) in total; if σ(x) ≠ σ(y), then (x, y) must be absent from ω, and always contributes a factor 1 − p. Therefore,

Σ_ω P^π_{ES(p,q)}[ω, σ] = (1 / Z^π_{FK(p,q)}) Π_{(x,y)∈E, σ(x)=σ(y)} (p + 1 − p) Π_{(x,y)∈E∖π, σ(x)≠σ(y)} (1 − p) = (1 / Z^π_{FK(p,q)}) exp( −2β(p) Σ_{(x,y)∈E} 1{σ(x)≠σ(y)} ) ,

using (1 − p) = exp(−2β). This is exactly the Potts(q)-measure (13.2) with β = β(p) and boundary partition π, and we indeed arrive at (13.7) after summing over the colorings σ. This concludes the verification of the Edwards-Sokal coupling. (In the other direction, given the Potts(q) coloring σ, one keeps each edge (x, y) ∈ E with σ(x) = σ(y) with probability p, independently, leaving the other edges out; then one can forget ω and just look at the q-coloring of the vertices.)

The interpretation of correlations as connections is now transparent: the correlation between the spins σ(x) and σ(y) is exactly the probability P^π_{FK(p,q)}[x ←→ y], since connected vertices get the same color, while vertices in different clusters get independent colors. This also shows that it is more than natural that a larger p gives a higher β in the formula β(p) = −(1/2) ln(1 − p). In particular, the Ising expectation E^+_{β(p),2}[σ(0)] is exactly the probability of the connection {0 ←→ ∂[−n, n]^d} in the wired FK(p, 2) measure on [−n, n]^d: to get the Ising + measure, we need to condition the measure to have the +1 spin on ∂[−n, n]^d, which, by symmetry, corresponds to the wired boundary partition.
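The marginal computation above can be checked mechanically on a tiny graph. The following sketch (an illustration, not from the notes; graph and parameters are arbitrary choices) sums the Edwards-Sokal weights over ω for each coloring σ, with empty boundary partition, and compares the resulting marginal to the Potts weights exp(−2β · #{disagreeing edges}) with β = −½ ln(1 − p).

```python
import math
from itertools import combinations, product

def es_sigma_marginal(n, E, p, q):
    """Marginal on colorings sigma of the Edwards-Sokal coupling (empty boundary):
    weight(sigma) = sum over omega contained in the monochromatic edges of
    p^|omega| (1-p)^(|E|-|omega|)."""
    m = len(E)
    weight = {}
    for sigma in product(range(q), repeat=n):
        mono = [e for e in E if sigma[e[0]] == sigma[e[1]]]
        w = sum(p**len(om) * (1 - p)**(m - len(om))
                for r in range(len(mono) + 1) for om in combinations(mono, r))
        weight[sigma] = w
    Z = sum(weight.values())
    return {s: w / Z for s, w in weight.items()}

def potts_measure(n, E, beta, q):
    w = {s: math.exp(-2 * beta * sum(1 for u, v in E if s[u] != s[v]))
         for s in product(range(q), repeat=n)}
    Z = sum(w.values())
    return {s: x / Z for s, x in w.items()}

p, q = 0.4, 3
beta = -0.5 * math.log(1 - p)   # the beta(p) of the coupling
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
es = es_sigma_marginal(4, E, p, q)
pt = potts_measure(4, E, beta, q)
print(max(abs(es[s] - pt[s]) for s in es))  # essentially 0
```

The inner sum factorizes exactly as in the derivation: each monochromatic edge contributes p + (1 − p) = 1 and each disagreeing edge contributes 1 − p = e^{−2β}.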

The FK(p, q) model satisfies the FKG-inequality for q ≥ 1. The key is the conditional probability of a single edge e = (x, y):

P[ e ∈ ω | ω|_{E∖{e}} ] = p, if {x ←→ y} in ω|_{E∖{e}}; and = p / (p + (1 − p)q) otherwise,   (13.10) {e.FKFKG}

i.e., an existing connection increases the conditional probability of an edge iff q > 1. Having noted this, the proof of the FKG inequality becomes very similar to the Ising case: consider the FK heat-bath dynamics with independent exponential clocks on the edges, and updates following (13.10). For q ≥ 1, this dynamics {ω_i}_{i≥0} is attractive in the sense that if ω_i ≥ ω′_i in the natural partial order on {0, 1}^E, then P[ e ∈ ω_{i+1} | ω_i ] ≥ P[ e ∈ ω′_{i+1} | ω′_i ], and hence we can maintain the monotone coupling of the proof of Theorem 13.1 for all i ≥ 0, proving the FKG inequality. For q < 1, there should be negative correlations instead, but this is proved only for the UST, which is a determinantal process; see [BorBL09] for recent results on negative correlations.

⊲ Exercise 13.4. For the FK(p, q) model with q ≥ 1 on a finite graph G(V, E) with boundary ∂V ⊂ V, show the following two types of stochastic domination:
(a) If π ≤ π′ on ∂V, then P^π_{FK(p,q)} ≤ P^{π′}_{FK(p,q)} on V.
(b) Given any π on ∂V, if p ≤ p′, then P^π_{FK(p,q)} ≤ P^π_{FK(p′,q)} on V.
(c) Conclude for the + limit Ising measure on any infinite graph G(V, E) that if E^+_{β,h}[σ(x)] > 0 for some x ∈ V, then the same holds for any β′ > β. In particular, the uniqueness of the Ising limit measures is monotone in β.

⊲ Exercise 13.5. Show a third type of stochastic domination for the FK(p, q) model on a finite graph: if p ∈ (0, 1) and 1 ≤ q ≤ q′, then P^π_{FK(p,q′)} ≤ P^π_{FK(p,q)} on V.

For q ≥ 1, the FKG inequality implies that the fully wired boundary condition (where π has just one part, ∂V) dominates all other boundary conditions, and the free boundary (where π consists of singletons) is dominated by all other conditions. Consequently, the limits of free and wired FK measures along any finite exhaustion exist and are unique, denoted by FFK(p, q) and WFK(p, q). On a transitive graph, they are translation invariant and ergodic; see [Gri06, Chapter 4] for more information.

However, regarding limit measures, there is an important difference compared to the Ising model. In formulating the spatial Markov property over finite domains U ⊂ V(G) for an infinite volume measure, the boundary conditioning is on all the connections in V ∖ U (just like in (13.10)), which is not as local as it was for the Potts(q) model. Because of this, in any infinite graph, it is not clear that any infinite volume limit measure actually satisfies the spatial Markov property, neither that any Markov measure is a convex combination of limit measures. Although these statements are expected to hold, they have not been proved; this is an open problem that has been bugging quite a few people for quite a long time. Nevertheless, it is at least known that all Markov measures and all infinite volume measures are sandwiched between FFK(p, q) and WFK(p, q) in stochastic domination.

Since we are interested in connections in the FK model, it is natural to define the critical point pc(q), in any infinite volume limit measure, as the infimum of p values with an infinite cluster.
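The single-edge formula (13.10) can be verified by brute force on a small example. The sketch below (an illustrative check, not from the notes; parameters arbitrary) computes P[e ∈ ω | rest] directly from the unnormalized FK weights on a triangle and compares it with the two cases of the formula.

```python
def components(n, edges):
    """Connected components of ([n], edges) by depth-first search."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, count = set(), 0
    for s in range(n):
        if s not in seen:
            count += 1
            stack = [s]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x])
    return count

def cond_prob_edge(n, E, e, rest, p, q):
    """P[e in omega | omega restricted to E minus {e} equals `rest`], by brute force
    from the weights p^|omega| (1-p)^(|E|-|omega|) q^k(omega)."""
    w_in = p**(len(rest) + 1) * (1 - p)**(len(E) - len(rest) - 1) \
        * q**components(n, list(rest) + [e])
    w_out = p**len(rest) * (1 - p)**(len(E) - len(rest)) * q**components(n, list(rest))
    return w_in / (w_in + w_out)

p, q = 0.35, 2.5
E = [(0, 1), (1, 2), (0, 2)]
e = (0, 1)
both = cond_prob_edge(3, E, e, [(1, 2), (0, 2)], p, q)  # endpoints already connected
one = cond_prob_edge(3, E, e, [(1, 2)], p, q)           # endpoints not connected
print(both, one)  # p  vs  p / (p + (1-p) q)
```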

Fortunately, on any amenable transitive graph, there is actually only one pc(q) for q ≥ 1, independently of which limit measure is taken, because of the following argument. Using convexity, any of the limiting average free energy functions has only a countable number of singularities in p, which implies that there is only a countable number of p values where FFK(p, q) ≠ WFK(p, q); see [Gri06]. It is clear that p^F_c(q) ≥ p^W_c(q), but if this was a strict inequality, then the free and wired measures would differ for the entire interval p ∈ (p^W_c(q), p^F_c(q)), contradicting countability.

We can also easily show that 0 < pc(q) < 1 on Zd, for any d ≥ 2 and q ≥ 1. The key is to notice that (13.10) implies that P^π_{FK(p,q)} stochastically dominates Bernoulli(p̃) bond percolation for p̃ = p / (p + (1 − p)q), and is stochastically dominated by Bernoulli(p) bond percolation, for any π. Then the claim follows from 0 < pc(Zd) < 1 in Bernoulli percolation. The proof of (13.6) is also clear now: as we noticed above, the Ising expectation E^+_{β(p),2}[σ(0)] is exactly the probability of the connection {0 ←→ ∂[−n, n]^d} in the wired FK(p, 2) measure on [−n, n]^d, and this probability is bounded from above and below by the probabilities of the same event in Bernoulli(p) and Bernoulli(p̃) percolation, respectively.

The random cluster model also provides us with an explanation where Onsager’s value βc(Z2) = (1/2) ln(1 + √2) ≈ 0.440687 comes from. Recall that the percolation critical values pc(Z2, bond) = 1/2 and pc(TG, site) = 1/2 came from planar self-duality. So, to start with: is there some planar self-duality in FK(p, q)? Consider the planar dual to a configuration ω on a box with free boundary, say: we get a configuration ω∗ on a box with wired boundary. See Figure 13.2.

Figure 13.2: The planar dual of an FK configuration on Z2.

What is the law of ω∗? Well, |ω∗| = |E| − |ω|, and k(ω∗) equals the number of faces in ω (which is 2 in the figure), so that, by Euler’s formula, |V| − |ω| + k(ω∗) = 1 + k(ω). If we now let y = p/(1 − p), then

P_{FK(p,q)}[ω] ∝ y^{|ω|} q^{k(ω)} ∝ y^{−|ω∗|} q^{k(ω∗)+|ω∗|} = (q/y)^{|ω∗|} q^{k(ω∗)} ,

so this is a random cluster model for ω∗, with the same q and with y∗ = q/y!
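The duality computation can be sanity-checked on the smallest planar example (an illustrative sketch, not from the notes): the plane dual of a triangle has two vertices (the bounded face and the outer face) joined by three parallel edges, with e∗ open iff e is closed. The code verifies that y^{|ω|} q^{k(ω)} is a constant multiple of (q/y)^{|ω∗|} q^{k(ω∗)} over all 2³ configurations, and that the self-dual point p = √q/(1+√q) gives Onsager's βc(Z2) for q = 2.

```python
import math
from itertools import combinations

def components(n, edges):
    """Connected components; parallel edges are fine (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

q, p = 2.0, 0.37
y = p / (1 - p)
ystar = q / y

primal_edges = [(0, 1), (1, 2), (0, 2)]  # triangle on 3 vertices
ratios = []
for r in range(4):
    for om in combinations(range(3), r):
        om_edges = [primal_edges[i] for i in om]
        # every dual edge joins the two face-vertices 0 and 1:
        dual_edges = [(0, 1) for i in range(3) if i not in om]
        w = y**len(om_edges) * q**components(3, om_edges)
        ws = ystar**len(dual_edges) * q**components(2, dual_edges)
        ratios.append(w / ws)
print(max(ratios) - min(ratios))  # 0: primal and dual weights are proportional

p_sd = math.sqrt(q) / (1 + math.sqrt(q))
beta_c = -0.5 * math.log(1 - p_sd)
print(beta_c)  # ~0.440687, Onsager's value for q = 2
```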

Or, in terms of p and p∗, the duality relation reads

p p∗ / ( (1 − p)(1 − p∗) ) = q .

Therefore, p = p_sd(q) := √q / (1 + √q) is the self-dual point on Z2: the planar dual of the free measure becomes the wired measure at the same value p_sd. Just as in the case of percolation with p = 1/2, one naturally expects that this is also the critical point pc(q). Substituting q = 2 and β(p) = −(1/2) ln(1 − p), we get Onsager’s value. This was proved for all q ≥ 1 only recently, by Vincent Beffara and Hugo Duminil-Copin [BefDC10]; it had been known earlier for q > 25.72. Their proof uses the self-duality, plus two main probabilistic ingredients: RSW bounds proved using the FKG-inequality, and the Margulis-Russo formula.

The main obstacle Beffara and Duminil-Copin needed to overcome was that the previously known RSW proofs for percolation (e.g., the one we presented in Proposition 12.26) do not work in the presence of dependencies: an exploration path has two sides, one having positive, the other having negative influence on increasing events, so we cannot use FKG by simply exploring crossings and gluing them. Nevertheless, they found an only slightly more complicated argument, where the information revealed can be easily compared with symmetric domains with free-wired-free-wired boundary conditions. However, here comes another issue: the dual of the free measure is wired, so what measure should we work in to have exact self-duality and the symmetries needed? The solution is to use periodic boundary conditions, i.e., to work on a large torus, and draw the rectangle that we want to cross inside there, and hence one can use symmetry. Finally, the sharp threshold results for product measures via the Margulis-Russo formula or the BKKKL theorem (see the threshold result Exercise 12.17) also break down in the presence of dependencies; the replacement is provided by [GraG06].

Finally, what about the uniqueness of infinite volume measures for FK(p, q)? On Z2, uniqueness is known for q = 2 and all p; non-uniqueness at the single point pc(q) is expected for q > 4. See [DumCS11] and [Gri06]. Some further results and questions on the FK model will be mentioned in Chapter 14.

13.2 Bootstrap percolation and zero temperature Glauber dynamics {ss.bootstrap}

Bootstrap percolation on an arbitrary graph has a Ber(p) initial configuration of occupied sites, and a deterministic spreading rule with a fixed parameter k: if a vacant site has at least k occupied neighbors at a certain time step, then it becomes occupied in the next step. Complete occupation is the event that every vertex becomes occupied during the process. The main problem is to determine the critical probability p(G, k) for complete occupation: the infimum of the initial probabilities p that make P_p[complete occupation] > 0.

⊲ Exercise 13.6 ([vEnt87]). Show that p(Z2, 2) = 0 and p(Z2, 3) = 1. (Hint for k = 2: show that a single large enough completely occupied box (a “seed”) has a positive chance to occupy everything.)

⊲ Exercise 13.7 ([Scho92]). * Show that p(Zd, k) = 0 for k ≤ d and = 1 for k ≥ d + 1. (Hint: use the d = 2 idea, and induction on the dimension.)
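A direct way to experiment with these exercises is to compute the bootstrap closure of a configuration. The sketch below (illustrative, not from the notes; it assumes the k-neighbor rule on a finite n×n grid without wrap-around) occupies any vacant site with at least k occupied neighbors until nothing changes. With k = 2, a fully occupied diagonal is a seed that spreads to the whole box, while a single occupied site spreads nowhere.

```python
def bootstrap_closure(n, occupied, k):
    """k-neighbor bootstrap percolation closure on the n x n grid graph."""
    occ = set(occupied)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                if (x, y) in occ:
                    continue
                nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                alive = sum((u, v) in occ for u, v in nbrs if 0 <= u < n and 0 <= v < n)
                if alive >= k:
                    occ.add((x, y))
                    changed = True
    return occ

n = 8
diag = {(i, i) for i in range(n)}
print(len(bootstrap_closure(n, diag, 2)))      # 64: the diagonal seed fills the box
print(len(bootstrap_closure(n, {(3, 3)}, 2)))  # 1: a single site spreads nowhere
```

With k = 3 the diagonal does not grow at all, in line with p(Z2, 3) = 1: finite seeds cannot spread.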

⊲ Exercise 13.8 ([BalPP06]). * {ex.BPTd}
(a) Show that the 3-regular tree has p(T3, 2) = 1/2. More generally, show that for 2 ≤ k ≤ d, the critical value p(T_{d+1}, k) is the supremum of all p for which the equation P[ Binom(d, (1 − x)(1 − p)) ≤ d − k ] = x has a real root x ∈ (0, 1).
(b) Deduce from part (a) that for any constant γ ∈ [0, 1] and a sequence of integers k_d with lim_{d→∞} k_d / d = γ, we have lim_{d→∞} p(T_d, k_d) = γ.

The previous exercises show a clear difference between the behaviour of the critical probability on Zd and on Td: on Zd, once k is small enough so that there are no local obstacles that obviously make p(Zd, k) = 1, we already have p(Zd, k) = 0, i.e., p(Zd, k) ∈ {0, 1}. So, one can ask the usual question:

Question. Is a group amenable if and only if, for any finite generating set, the resulting r-regular Cayley graph G_r has p(G_r, k) ∈ {0, 1} for any k-neighbor rule?

The answer is known to be affirmative for symmetric generating sets of Zd and for any finitely generated non-amenable group that contains a free subgroup on two elements.

⊲ Exercise 13.9. *** Find the truth for at least one more group (that is not a finite extension of Zd, of course).

See [Scho92, Hol07, BalP07, BalBM10, BalPP06, BalBDCM10] and the references there for more on this model.

Bootstrap percolation results have often been applied to the study of the zero temperature Glauber dynamics of the Ising model, both on infinite and finite graphs. This Glauber dynamics was already defined in Section 13.1, but the zero temperature case can actually be described without mentioning the Ising model at all. Given a locally finite infinite graph G(V, E) with an initial spin configuration ω0 ∈ {+, −}^V, the dynamics is that each site has an independent Poissonian clock, and if the clock of some site x ∈ V rings at some time t > 0, then ω_t(x) becomes the majority of the spins of the neighbours of x; if there is an equal number of neighbours in each state, then the new state of x is chosen uniformly at random.

Now let pfix(G) be the infimum of p values for which this dynamics, started from a Ber(p) initial configuration of “+” spins, fixates at “+” (i.e., the spin of each site x will be “+” from a finite time T(x) onwards) almost surely. From the symmetry of the two competing colours, it is clear that pfix(G) ≥ 1/2. For what graphs is it actually equal to 1/2? It is non-trivial to prove that pfix(Z) = 1, see [Arr83]. On the other hand, pfix(T3) > 1/2 [How00]; the reason is that a density 1/2 − ǫ for the “−” phase is just barely subcritical for producing a bi-infinite path, while the “+” phase is just barely supercritical, and bi-infinite “−” paths will form somewhere in the huge non-amenable tree, making “+” fixation impossible. Using very refined knowledge of bootstrap percolation, [Mor10] proved that lim_{d→∞} pfix(Zd) = 1/2.

Question. Is it true that pfix(Zd) = 1/2 for all d ≥ 2? And pfix(Td) = 1/2 for d ≥ 4?

Here is what is known. For p = 1/2 on Z2, it is known that every site changes its state infinitely often [NaNS00]. In general, non-fixation can happen in two ways: either some sites fixate at “+” while some other sites fixate at “−”, or every site changes its state infinitely often. For p = 1/2 on the hexagonal lattice, it is known that some sites fixate at “+” while all other sites fixate at “−” [HowN03]. The reason for the difference is the odd degree on the hexagonal lattice.

⊲ Exercise 13.10. * Show that lim_{d→∞} pfix(Td) = 1/2. (This follows from [CapM06], but here is a hint for a simpler proof, coming from Rob Morris. Exercise 13.8 (b) says that for p > 1/2 + ǫ, if d is large enough, then the probability of everything becoming “−” in the ⌈d/2⌉-neighbour “−”-bootstrap is zero. Prove that, moreover, the probability that an initially “+” site ever becomes “−” is tending to 0 as d → ∞. This implies that the probability that a given vertex fixates at “+” is tending to 1; in particular, the majority of the neighbours of any given vertex fixate at “+”, so in a short time in the dynamics they fix the state of that vertex.)

Of course, these results do not imply that pfix = 1/2 on these graphs. What happens on Td for d ≥ 4 at p = 1/2 is not known, either.

13.3 Minimal Spanning Forests {ss.MSF}

Our main references here will be [LyPS06] and [LyPer10]. While the Uniform Spanning Forests are related to random walks and harmonic functions, the Minimal Spanning Forests are related to percolation.

The Minimal Spanning Tree (MST) on a finite graph is constructed by taking i.i.d. Unif[0, 1] labels U(e) on the edges, then taking the spanning tree with the minimal sum of labels. Note that this is naturally coupled to Ber(p) bond percolation for all p ∈ [0, 1] at once.

⊲ Exercise 13.11. Give a finite graph on which MST ≠ UST.

For an infinite graph G, we again have two options: we can try to take the weak limit of the MST along any finite exhaustion, with free or wired boundary conditions, or the limiting measures can be directly constructed. For any e ∈ E(G), define

Z_F(e) := inf_γ max{ U(f) : f ∈ γ } ,

where the infimum is taken over paths γ in G ∖ {e} that connect the endpoints of e, and define

Z_W(e) := inf_γ sup{ U(f) : f ∈ γ } ,

where the infimum is now taken over “generalized paths” γ in G ∖ {e} that connect the endpoints of e: here γ can also be a disjoint union of two half-infinite paths, one emanating from each endpoint of e. Then, the Free and the Wired Minimal Spanning Forests are

FMSF := {e : U(e) ≤ Z_F(e)}   and   WMSF := {e : U(e) ≤ Z_W(e)} .

By definition, WMSF ⊆ FMSF.
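On a finite connected graph, generalized paths are just paths, Z_F = Z_W, and both forests reduce to the MST; the label criterion e ∈ MST ⇔ U(e) ≤ Z_F(e) is the classical cycle property. The sketch below (illustrative, not from the notes) checks this against Kruskal's algorithm on a random labelling of K5.

```python
import random
from itertools import combinations

def kruskal_mst(n, labels):
    """labels: dict edge -> U(e), labels assumed distinct. Returns the MST edge set."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = set()
    for e in sorted(labels, key=labels.get):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            mst.add(e)
    return mst

def z_f(labels, e):
    """Z_F(e): min over simple paths in G minus {e} between the endpoints of e
    of the maximal label on the path (infinity if no such path); brute-force DFS."""
    s, t = e
    best = [float('inf')]
    def dfs(x, seen, mx):
        if x == t:
            best[0] = min(best[0], mx)
            return
        for f in labels:
            if f != e and x in f:
                y = f[0] if f[1] == x else f[1]
                if y not in seen:
                    dfs(y, seen | {y}, max(mx, labels[f]))
    dfs(s, {s}, 0.0)
    return best[0]

random.seed(1)
n = 5
edges = list(combinations(range(n), 2))  # complete graph K5
labels = {e: random.random() for e in edges}
mst = kruskal_mst(n, labels)
minimax = {e for e in edges if labels[e] <= z_f(labels, e)}
print(mst == minimax)  # True
```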

The connection between WMSF and critical percolation becomes clear through invasion percolation, which is a “self-organized criticality” version of critical percolation. For a vertex v and the labels {U(e)}, let T0 = {v}, and then, inductively, given Tn, let T_{n+1} = Tn ∪ {e_{n+1}}, where e_{n+1} is the edge in ∂_E Tn with the smallest label U. The Invasion Tree of v is then IT(v) := ∪_{n≥0} Tn.

⊲ Exercise 13.12. Prove that if U : E(G) −→ R is an injective labelling of a locally finite graph, then WMSF = ∪_{v∈V(G)} IT(v). {ex.IT}

Once the invasion tree enters an infinite p-percolation cluster C ⊆ ω_p := {e : U(e) ≤ p}, it will not use edges outside it. Hence it is not surprising (though non-trivial to prove, see [H¨aPS99]) that for any transitive graph G and any p > pc(G), the invasion tree eventually enters an infinite p-cluster; moreover, θ(pc) = 0 implies that a.s. lim sup{U(e) : e ∈ IT(v)} = pc for any v ∈ V(G). Non-percolation at pc also has an interpretation as the sparseness of the invasion tree:

⊲ Exercise 13.13. ** Show that for G transitive amenable, θ(pc) = 0 is equivalent to IT(v) having density zero, measured along any Følner exhaustion of G. {ex.ITsparse}

This already suggests that θ(pc) = 0 should also be reflected in the smallness of the WMSF trees. Indeed, we have the following, which we state without a proof:

Theorem 13.4 ([LyPS06]). {t.WMSFoneend} On a transitive unimodular graph, θ(pc) = 0 implies that a.s. each tree of WMSF has one end.

By Benjamini-Lyons-Peres-Schramm’s Theorem 12.8, in the non-amenable case this gives that each tree of WMSF has one end. For the amenable case, the following two exercises, combined with the fact (obvious from Exercise 13.12) that all the trees of WMSF are infinite, almost give this “one end” result:

⊲ Exercise 13.14. {ex.deg2} Show that for any invariant spanning forest F on a transitive amenable G, the expected degree is at most 2. Moreover, if all the trees of F are infinite a.s., then the expected degree is exactly 2.

⊲ Exercise 13.15. {ex.end1} {ex.end2} (a) Show that for any invariant spanning forest F on a transitive amenable G, it is not possible that a.s. all trees of F have at least 3 ends. (b) Show that on a transitive unimodular graph, if all the trees of an invariant spanning forest are infinite, then each has 1 or 2 ends.

⊲ Exercise 13.16. *** Show that all the trees in the WMSF on any transitive graph have one end almost surely.

On the other hand, FMSF is more related to percolation at pu: each tree of FMSF can intersect at most one infinite cluster of pu-percolation in the standard coupling, and if pu > pc, then each tree intersects exactly one infinite pu-cluster. Furthermore, adding an independent Ber(ǫ) bond percolation to FMSF makes it connected.
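On a finite connected graph with distinct labels, the invasion tree of any vertex is exactly Prim's algorithm run from that vertex, so IT(v) is the same tree — the MST — for every v, a finite shadow of Exercise 13.12. A sketch (illustrative, not from the notes):

```python
import random
from itertools import combinations

def invasion_tree(n, labels, v):
    """Greedy invasion from v: repeatedly add the boundary edge of smallest label."""
    tree, verts = set(), {v}
    while len(verts) < n:
        boundary = [e for e in labels if (e[0] in verts) != (e[1] in verts)]
        e = min(boundary, key=labels.get)
        tree.add(e)
        verts.update(e)
    return tree

random.seed(7)
n = 6
labels = {e: random.random() for e in combinations(range(n), 2)}
trees = [invasion_tree(n, labels, v) for v in range(n)]
print(all(t == trees[0] for t in trees))  # True: every invasion tree is the MST
```

On an infinite graph this coincidence breaks down: invasion from different vertices can get stuck in different infinite clusters, which is exactly why WMSF is the union, not a single invasion tree.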

Theorem 13.5 ([LyPS06]). {t.MSFpcpu} On any connected graph G, we have FMSF = WMSF a.s. if and only if, for almost all p ∈ [0, 1], there is a.s. at most one infinite cluster in Bernoulli(p) percolation. In particular, for transitive unimodular graphs, pc = pu is equivalent to FMSF = WMSF a.s.

Proof. FMSF ≠ WMSF is equivalent to having some e ∈ E(G) with the property that P[ Z_W(e) < U(e) ≤ Z_F(e) ] > 0. For an edge e, let A(e) be the event that the two endpoints of e are in different infinite clusters of U(e)-percolation on G ∖ {e}; unwrapping the definitions of Z_W and Z_F shows that {Z_W(e) < U(e) ≤ Z_F(e)} = A(e). Now, if there is a positive measure set of p values for which there are more than one infinite p-clusters with positive probability, then, since G is countable, by insertion tolerance there is an edge e whose endpoints are in different infinite p-clusters with positive probability, for a positive measure set of p values; since there is then a positive Lebesgue measure set of possibilities for U(e), and U(e) is independent of the labels of the other edges, we get a positive probability for A(e). Conversely, if P[A(e)] > 0 for some e, then, by the independence of U(e) from the other edges again, there is a positive measure set of p values for which we have at least two infinite clusters with positive probability.

There are many open questions here. For instance, must the number of trees in the FMSF and the WMSF in a transitive graph be either 1 or ∞ a.s.? In which Zd is the MSF a single tree? The answer is yes for d = 2, using planarity, while for large d one definitely expects infinitely many trees (recall that for d ≥ 19, the lace expansion shows θ(pc) = 0 and many other results). Regarding the critical dimension, where the change from one to infinity happens, contradictory conjectures have been made: d = 6 [JaR09] and d = 8 [NewS96]. It might also be that both answers are right in some sense: for the number of trees on Zd itself, the critical dimension might be 6 [AiBNW99] — in d = 7, there could be quite long connections between some points of Zd, escaping to infinity in the scaling limit — while for the scaling limit of the MSF, the critical dimension might be 8. On the triangular grid, the scaling limit of a version of the MST (adapted to site percolation) is known to exist: it is a spanning tree of R2 in a well-defined sense (very roughly, for any finite collection of points there is a tree connecting them, with some natural compatibility relations between the different trees), it is rotationally and scale invariant, but it is conjectured not to be conformally invariant [GarPS10a] — a behaviour that goes against the physicists’ rule of thumb about conformal invariance.

13.4 Measurable group theory and orbit equivalence {ss.MeGrTh}

Consider a (right) action x → x^g of a discrete group Γ on some probability space (X, B, µ) by measure-preserving transformations. We will usually assume that the action is ergodic (i.e., if U ⊆ X satisfies g(U) = U for all g ∈ Γ, then µ(U) ∈ {0, 1}) and essentially free (i.e., µ{x ∈ X : g(x) = x} = 0 for any g ≠ 1 ∈ Γ). These conditions are satisfied for the natural translation action on bond or site percolation configurations under an ergodic probability measure, say Ber(p): ω^g(h) := ω(gh).

This action of a group Γ on {0, 1}^Γ, or on S^Γ with a countable S, or on [0, 1]^Γ, with the product of Ber(p) or {p_s}_{s∈S} or Leb[0, 1] measures, respectively, is usually called a Bernoulli shift. There is a famous theorem of Ornstein and Weiss [OW87] that two Bernoulli shifts of a given f.g. amenable group (in most cases) are equivalent in the usual sense iff the entropies (generalizing −Σ_{s∈S} p_s log(p_s) from the case of Z-actions suitably) are equal. Here, the obvious notion for two p.m.p. (probability measure preserving) actions of groups Γi (i = 1, 2) on (Xi, Bi, µi) being the same is that there is a group isomorphism ι : Γ1 −→ Γ2 and a measure-preserving map ϕ : X1 −→ X2 such that ϕ(x^g) = ϕ(x)^{ι(g)} for almost all x ∈ X1. We will consider here a cruder equivalence relation:

Definition 13.7. {d.orbiteq} Two p.m.p. actions as above are called orbit equivalent if there is a measure-preserving map ϕ : X1 −→ X2 such that ϕ(x^{Γ1}) = ϕ(x)^{Γ2} for almost all x ∈ X1. A small relaxation is that the actions are stably orbit equivalent: for each i = 1, 2 there exists a Borel subset Yi ⊆ Xi that meets each orbit of Γi, and there is a measure-scaling isomorphism ϕ : Y1 −→ Y2 such that ϕ(x^{Γ1} ∩ Y1) = ϕ(x)^{Γ2} ∩ Y2 for a.e. x ∈ Y1.

Another natural notion is the following:

Definition 13.8. {d.measequiv} Two f.g. groups Γ1 and Γ2 are measure equivalent if they admit commuting (not necessarily probability) measure preserving essentially free actions on some measure space (X, B, µ), each with a positive finite measure fundamental domain.

Measure equivalence is the natural measure-theoretical analogue of the virtual isomorphism of groups, exactly as quasi-isometry was the geometric analogue; the point is that two f.g. groups are measure equivalent iff they admit stably orbit equivalent actions. The proof can be found in [Gab02], which is a great introduction to orbit equivalence.

Of course, if two actions are equivalent in the usual sense, then they are also orbit equivalent. But orbit equivalence is much more flexible, as shown, for instance, by the following actions of Z and Z2.

Consider the set X = {0, 1}^N of infinite binary sequences, with the Ber(1/2) product measure. The adding machine action of Z on X is defined by the following recursive rule: for the generator a of Z, and any word w,

(0w)^a = 1w ,   (1w)^a = 0 w^a .   (13.11) {e.adding}

Note that this definition can also be made for finite words w, and the actions on the starting finite segments of an infinite word are compatible with each other, hence the definition indeed makes sense for infinite words. (Note also that we can apply a or a^{−1} only finitely many times.) For a finite word w = w0 w1 . . . wk, if we write β(w) := Σ_{i=0}^k w_i 2^i, then β(w^a) = β(w) + 1, so this action has the interpretation of adding 1 in binary expansion, hence the name. (One thing to be careful about is that for a finite word w of all 1’s this β-interpretation breaks down: one needs to add at least one zero at the end of w to get it right.) It is easy to check that the action of Z on X is measure-preserving.

The orbit of a word w ∈ X under the Z-action is the set of all words with the same tail as w. Why? We can similarly define an action of Z2 on X × X, simply doing the Z-action coordinate-wise; the orbit of a pair (w, w′) ∈ X × X is then the set of all pairs in which each coordinate has the correct tail. Now, the interlacing map ϕ : X × X −→ X defined by ϕ(w0 w1 . . . , w0′ w1′ . . .) = w0 w0′ w1 w1′ . . . is clearly measure-preserving and establishes an orbit-equivalence between these actions of Z and Z2.

An extreme generalization of the previous example is another famous result of Ornstein and Weiss from 1980: any two ergodic free p.m.p. actions of amenable groups are orbit equivalent to each other; see [KecM04]. A closely related probabilistic statement is the following:

⊲ Exercise 13.17. Show that a Cayley graph G(Γ, S) is amenable iff it has a Γ-invariant random spanning Z subgraph. (Hint: for one direction, produce an invariant mean from the invariant Z; for the other direction, produce the invariant Z using coarser and coarser “quasi-tilings” that come from Chapter 12.)

On the other hand, as a combination of the work of several people, a recent result is that any non-amenable group has continuum many orbit-inequivalent actions. A key ingredient, which has a percolation-theoretical proof, is the following theorem, to be discussed in a future version of these notes:

Theorem 13.6 ([GabL09]). {t.GaboLyo} For any non-amenable countable group Γ, the orbit equivalence relation of the Bernoulli shift action on ([0, 1], Leb)^Γ contains a subrelation generated by a free ergodic p.m.p. action of the free group F2.

That is, the orbits of the Bernoulli shift can be decomposed into the orbits of an ergodic F2 action; in the proof, the trees of an invariant forest are shown to be indistinguishable. A nice interpretation of this result is that any non-amenable group has an F2 randosubgroup. A closely related probabilistic statement:

⊲ Exercise 13.18. Show that for any non-amenable countable group Γ, there is an invariant spanning forest of 4-regular trees on Γ that is a factor of i.i.d. Unif[0, 1] labels on the vertices of Γ.

What is a randosubgroup? Consider the set S_{G,H} := {f : G −→ H, f(1) = 1}. The group G acts on this set by

f^g(t) := f(g)^{−1} f(gt) ;   (13.12) {e.SGH^G}

it is easy to check that this is indeed an action from the right. A group-homomorphism is just a fixed point of this G-action. A randomorphism is a G-invariant probability distribution on S_{G,H}; similarly, Γ is a randosubgroup of H if there is a Γ-invariant distribution on injective maps in S_{Γ,H}. (It cannot be called a “random subgroup”, since it is not a distribution on actual subgroups.) A randosubgroup of an amenable group is also amenable; see [Gab10, Section 10].

Now, what is the connection to orbit equivalence? If the orbit equivalence relation on X generated by Γ is a subrelation of the one on Y generated by H — that is, there is a p.m.p. map ϕ : X −→ Y such that ϕ(x^Γ) ⊆ ϕ(x)^H for almost all x ∈ X — then, for a.e. x ∈ X and every g ∈ Γ, there is an element of H connecting ϕ(x) to ϕ(x^g), as follows.
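That (13.12) is a right action, i.e., (f^g)^h = f^{gh}, is a two-line computation. The sketch below (an illustrative check; taking G = H = Z_12 written additively is an arbitrary choice) verifies it exhaustively for a random pointed map f with f(0) = 0, and also that each f^g is again pointed.

```python
import random

N = 12  # both groups are Z_12, written additively

def act(f, g):
    """f^g(t) = f(g)^{-1} f(g t); additively, f^g(t) = -f(g) + f(g + t) mod N."""
    return tuple((-f[g % N] + f[(g + t) % N]) % N for t in range(N))

random.seed(0)
f = tuple([0] + [random.randrange(N) for _ in range(N - 1)])  # f(0) = 0

checks = []
for g in range(N):
    for h in range(N):
        checks.append(act(act(f, g), h) == act(f, (g + h) % N))
print(all(checks))  # True
```

Indeed, (f^g)^h(t) = f^g(h)^{−1} f^g(ht) = f(gh)^{−1} f(ght) = f^{gh}(t), and f^g(1) = f(g)^{−1}f(g) = 1.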

Namely, if the H-action is free, then for a.e. x ∈ X and every g ∈ Γ there is a uniquely determined α(x, g) ∈ H such that

ϕ(x^g) = ϕ(x)^{α(x,g)} ,   (13.13) {e.cocycle}

and it satisfies the so-called cocycle equation

α(x, gh) = α(x, g) α(x^g, h) .

By writing α_x(g) := α(x, g), the cocycle equation says exactly that α_{x^g}(t) = α_x(g)^{−1} α_x(gt), i.e., α_{x^g} = (α_x)^g in the action (13.12). So, the map x → α_x from X to S_{Γ,H} is Γ-equivariant, and if we take a random point x ∈ X w.r.t. the Γ-invariant probability measure µ, then we get a Γ-invariant measure on {α_x : x ∈ X}: a randomorphism, and, when the maps α_x are injective, a randoembedding of Γ into H.

Given that all amenable groups are measure equivalent, in order to distinguish non-amenable groups from each other, one needs some non-trivial invariants. The ℓ2-Betti numbers and the cost of groups mentioned in Chapter 11 are such examples. A future version of these notes will hopefully discuss them in a bit more detail, but see [Gab02, Gab10] for now.

Given a probability space (X, B, µ), a graphing on X is simply a measurable oriented graph: a countable set of “edges” Φ = {ϕi : Ai −→ Bi}_{i∈I}, which are measure-preserving isomorphisms between measurable subsets Ai and Bi of X. The cost of a graphing is

cost(Φ) := Σ_{i∈I} µ(Ai) = ∫_X Σ_{i∈I} 1_{Ai}(x) dµ(x) ,

which we can also think of as the average out-degree of a random vertex in X. Any graphing Φ generates a measurable equivalence relation R_Φ on X: the equivalence classes are the connected components of the graph obtained from Φ by forgetting the orientations. A usual way of obtaining a graphing is to consider the “Schreier graph” of a p.m.p. action of a group Γ, with Φ = {ϕi : X −→ X}_{i∈I} given by a generating set {γi}_{i∈I} of Γ; then the equivalence relation generated by Φ is the orbit equivalence relation of the action. The cost of an equivalence relation R ⊆ X × X is

cost(R) := inf { cost(Φ) : Φ generates R } ,

and the cost of a group Γ is defined as

cost(Γ) := inf { cost(R) : R is the orbit equivalence relation of some free p.m.p. action of Γ } .   (13.14) {e.cost1}

A group is said to have a fixed price if the orbit equivalence relations of all its free p.m.p. actions have the same cost; it is not known whether all groups have a fixed price.

Here is a more tangible definition of the cost of a group Γ for probabilists:

cost(Γ) = (1/2) inf { E_µ[deg(o)] : µ is the law of a Γ-invariant random spanning graph on Γ } .   (13.15) {e.cost2}

How is this the same cost as before? A Γ-invariant random graph is a probability measure µ on Ω = {0, 1}^{Γ×Γ} that is concentrated on symmetric functions on Γ × Γ and is invariant under the diagonal action of Γ. A corresponding graphing, which may be called the cluster graphing, is the following.

η ∈ Ω be connected by γ ∈ Γ if ω γ = η and the edge from o to γo is open in the graph ω . the right coset tree T has the root Γ = Γ0 . γo) is open. µ). Thus. and is called Let us now assume the so-called Farber condition on the sequence (Γn )n≥0 : the natural action of Γ on the boundary ∂T (Γ. and which is just the Schreier graphing of the action of Γ on Ω.p. µ) is a free p. we considered here only some special p. then ∂ T can be equipped with a group structure. in T is the boundary ∂ T of the tree. see [KecM04. hence the cost of this graphing (measured in µ) is the µ-expected to ω ..g. because some of these actions on (Ω. Fix an element o ∈ Γ. actions Γ (Ω. ν ). then for each x ∈ X we can consider ωx ∈ Ω = {0. see e. equipped with the usual metrizable topology.5]. * (a) Show that the Farber condition is satisﬁed if each Γn is normal in Γ and n≥1 n→∞ d(Γn ) − 1 . This shows that (13. with the natural Borel probability measure.following. . µ) with a ﬁxed free action (say.p.m. etc. to Ω × Y . 1}Γ×Γ given by (g. The number of children of Γn x is [Γn : Γn+1 ].19. If we the proﬁnite completion of Γ with respect to the series (Γn )n≥0 . since ω = η γ −1 out-degree of o.16) {e. or one half of the expected total degree. The set of rays Γ = Γ0 x0 ⊃ Γ1 x1 ⊃ have normal subgroups. [Wil98].. (Γn )) := lim ⊲ Exercise 13. is essentially free. µ) might not be essentially free. be a chain of ﬁnite index subgroups. see [AbN07]. then let ω. h) is an open edge in ωx ⇔ (xg . Corresponding to such a subgroup Γ2 x2 ⊃ .) This is a sub-graphing of the full graphing. (Γn )) of the coset tree. γo) = η (γ −1 o. For the details. γ −1 o). action by taking a direct product of (Ω. however. a Bernoulli shift) (Y.m. (13. And we do not even get that (13. The notion of cost resembles the rank (i.m. for hyperbolic groups. we can produce a free p. The rank gradient of the chain is deﬁned by RG(Γ. sequence. in which ω.14). . 
the cluster graphing µ-almost surely generates the orbit equivalence relation of the action iff µ-almost surely the graph spans the entire Γ.p.e. . η are connected by γ iff ω γ = η . But all these issues can be easily solved: If Γ (X. Clearly. |Γ : Γn | (13.14).rankgrad} Γn = { 1 } . Γn+1 y ⊂ Γn x.15) is at least as big as (13. For the other direction. The domain Aγ of this measurable edge consists of those ω ∈ Ω in which the edge (o. natural extension of the cluster graphing that we had before. action. and Φ is a graphing generating its orbit equivalence relation. Γn ⊳ Γ ∀n. Here is an explicit formulation of this idea.15) is indeed the inﬁmum of some costs. given an invariant random spanning graph µ on Ω. which has several nice applications. o) = η (o. and EµΦ [deg o]/2 = cost(Φ). and only some special graphings generating the corresponding orbit equivalence relations. and considering the Proposition 29. Then the pushforward µΦ of µ under this x → ωx gives an invariant spanning graph on Γ. 165 .15) is not larger than (13. (Note here that γ −1 is an edge from η and ω (o. the minimal number of generators) d(Γ) of a group. . and a coset Γn+1 y is a child of Γn x if Let Γ = Γ0 ≥ Γ1 ≥ . xh ) is an edge in Φ .

(b) Show that the limit in the deﬁnition of RG always exists. (c) Show that, for the free group on k generators, RG(Fk , (Γn )) = k − 1, regardless of Γn . RG(Γ, (Γn )) = cost(R) − 1. Theorem 13.7 ([AbN07]). Let R denote the orbit equivalence relation of Γ ∂T (Γ, (Γn )). Then
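A hedged numerical illustration of part (c): by the Nielsen-Schreier index formula, a subgroup of index m in the free group F_k is itself free, of rank m(k − 1) + 1, so every term of the limit in (13.16) already equals k − 1. The helper names below are invented for this sketch.

```python
# Nielsen-Schreier index formula: a subgroup of index m in the free group F_k
# is itself free, of rank m*(k - 1) + 1.
def subgroup_rank(k, index):
    return index * (k - 1) + 1

def rank_gradient_terms(k, indices):
    # the terms (d(Gamma_n) - 1) / [Gamma : Gamma_n] of the limit in (13.16)
    return [(subgroup_rank(k, m) - 1) / m for m in indices]

# along any chain of finite index subgroups of F_3, each term is exactly 2
print(rank_gradient_terms(3, [1, 2, 6, 24, 120]))
```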

We can also obtain a random rooted graph from a graphing: pick a random root x ∈ X according to µ, and take its connected component Φ(x) in Φ. This random rooted graph will be unimodular (with a definition more general than what we gave before, which applies to non-regular graphs): by the ϕ_i's in Φ being measure-preserving, the equivalence relation R_Φ is also measure-preserving, i.e., for any measurable F : X × X −→ R, we have

    ∫_X Σ_{y ∈ R_Φ[x]} F(x, y) dµ(x) = ∫_X Σ_{y ∈ R_Φ[x]} F(y, x) dµ(x) ,

where R_Φ[x] = Φ(x) is the equivalence class or connected component of x. Now, this can be taken as the definition of the Mass Transport Principle, hence unimodularity, for the random rooted graph Φ(x). See also Definition 14.1 and the MTP (14.1) in Section 14.1.
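The displayed identity can be sanity-checked in a toy discrete setting: take X finite with the uniform counting measure, let Φ be an arbitrary finite graph, and compare the two sums over connected components. This is only a finite analogue of the symmetry, not the measure-theoretic statement, and all function names are invented for the sketch.

```python
import random

# Toy finite analogue: X = {0,...,n-1} with uniform measure, Phi a finite graph;
# R_Phi[x] is the connected component of x.  The identity then reads
#   sum_x sum_{y in R[x]} F(x, y)  =  sum_x sum_{y in R[x]} F(y, x).
def components(n, edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in edges:
        parent[find(a)] = find(b)
    comp = {}
    for x in range(n):
        comp.setdefault(find(x), []).append(x)
    return {x: comp[find(x)] for x in range(n)}

def mtp_sides(n, edges, F):
    R = components(n, edges)
    lhs = sum(F(x, y) for x in range(n) for y in R[x])
    rhs = sum(F(y, x) for x in range(n) for y in R[x])
    return lhs, rhs

random.seed(0)
edges = [(random.randrange(12), random.randrange(12)) for _ in range(10)]
F = lambda x, y: (x + 1) * (3 * y + 2)   # an arbitrary "mass transport"
lhs, rhs = mtp_sides(12, edges, F)
assert lhs == rhs  # each ordered pair inside a component is counted once on both sides
```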

14 Local approximations to Cayley graphs

{s.local}

For many probabilistic models, it is easy to think that understanding the model in a large box of Zd is basically the same as understanding it on the inﬁnite lattice. Okay, sometimes the ﬁnite problem is actually harder (for instance, compare Conjecture 12.19 with Lemma 12.3), but they are certainly closely related. Why exactly is this so? In what sense do the boxes [n]d converge to Zd ?

14.1 Unimodularity and soficity

{ss.sofic}

A sequence of finite graphs G_n is said to converge to a transitive graph G in the Benjamini-Schramm sense [BenS01] (also called local weak convergence [AldS04]) if for any ǫ > 0 and r ∈ N+ there is an n₀(ǫ, r) such that for all n > n₀, at least a (1 − ǫ)-proportion of the vertices of G_n have an r-neighbourhood isomorphic to the r-ball of G.

For instance, the cubes {1, . . . , n}^d converge to Z^d. On the other hand, if we take the balls B_n(o) in the d-regular tree T_d, then the proportion of leaves in B_n(o) converges to (d − 2)/(d − 1) as n → ∞, and more generally, the proportion of vertices at distance k ∈ N from the set of leaves (i.e., on the (n − k)th level L_{n−k}) converges to p_{−k} := (d − 2)/(d − 1)^{k+1}. And, for a vertex in L_{n−k}, the sequence of its r-neighbourhoods in B_n(o) (for r = 1, 2, . . . ) is not at all the same as in an infinite regular tree, and depends on the value of k. Therefore, the limit of the balls B_n(o) is certainly not T_d, or any other transitive graph. More generally, we have the following exercise:
**

{ex.amensofic}

⊲ Exercise 14.1. Show that a transitive graph G has a sequence G_n of finite subgraphs converging to it in the local weak sense iff it is amenable.
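The level proportions behind the T_d example above can be checked directly: in the ball B_n(o) the level sizes are 1, d, d(d − 1), . . . , d(d − 1)^{n−1}, and the fraction at distance k from the leaves approaches p_{−k} = (d − 2)/(d − 1)^{k+1}. A small sketch with invented helper names:

```python
def ball_level_sizes(d, n):
    # |L_j| in the ball B_n(o) of the d-regular tree: the root, then d, d(d-1), ...
    sizes = [1]
    for j in range(1, n + 1):
        sizes.append(d * (d - 1) ** (j - 1))
    return sizes

def leaf_distance_proportion(d, n, k):
    # proportion of vertices of B_n(o) at distance k from the leaves (level n - k)
    sizes = ball_level_sizes(d, n)
    return sizes[n - k] / sum(sizes)

d = 3
for k in range(4):
    # approaches (d - 2) / (d - 1)**(k + 1) as n grows
    print(k, leaf_distance_proportion(d, 30, k), (d - 2) / (d - 1) ** (k + 1))
```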


The sequence of balls in a regular tree does not converge to any transitive graph, but there is still a meaningful limit structure, a random rooted graph. Namely, generalizing our previous deﬁnition, we say that a sequence of ﬁnite graphs Gn converges in the local weak sense to a probability distribution on rooted bounded degree graphs (G, ρ), where ρ ∈ V (G) is the root, if for any r ∈ N, taking a uniform random root ρn ∈ V (Gn ), the distribution we get on the r-neighbourhoods around ρn in Gn converges to the distribution of r-neighbourhoods around ρ in G. There is a further obvious generalization, for graphs whose edges and/or vertices are labelled by elements of some ﬁnite set, or more generally, of some compact metric space: two such labelled rooted graphs are close if the graphs agree in a large neighbourhood of the root and all the corresponding labels are close to each other. These structures are usually called random rooted networks. So, for instance, we can talk about the local weak convergence of edge-labeled digraphs (i.e., directed graphs) to Cayley diagrams of inﬁnite groups (see Deﬁnition 2.3). Continuing the previous example, the balls in Td converge to the random rooted graph depicted

on Figure 14.1, which is a fixed infinite tree T_d^∗ with infinitely many leaves (denoted by level L₀) and one end, together with a random root that is on level L_{−k} with probability p_{−k} = (d − 2)/(d − 1)^{k+1}, the weights that we computed above. It does not matter how this probability is distributed among the vertices of the level; say, all of it could be given to a single vertex. The reason for this is that the vertices on a given level lie in a single orbit of Aut(T_d^∗), and our notion of convergence looks only at isomorphism classes of r-neighbourhoods. So, in fact, the right abstract definition for our limiting "random rooted graph" is a Borel probability measure on rooted isomorphism classes of rooted graphs, denoted by G_∗, equipped with the obvious topology.

[Figure 14.1: The tree T_d^∗ with a random root ρ (now on level L_{−1}), which is the local weak limit of the balls in the d-regular tree T_d, for d = 3. The root lies on level L_{−k} with probability p_{−k} = (d − 2)/(d − 1)^{k+1}: p₀ = (d−2)/(d−1), p_{−1} = (d−2)/(d−1)², p_{−2} = (d−2)/(d−1)³, p_{−3} = (d−2)/(d−1)⁴, and so on.]

{f.2ary1end}

It is clear how the probabilities p_{−k} for the root ρ in T_d^∗ arise from the sequence of finite balls in T_d. However, there is also a tempting interpretation in terms of T_d^∗ itself: if ρ was chosen "uniformly at random among all vertices of T_d^∗", then it should have probability p_{−k} to be on level L_{−k}, since "evidently" there are p_{−k}/p_{−(k+1)} = d − 1 times "more" vertices on level L_{−k} than on L_{−(k+1)}. Now,

even though with the counting measure on the vertices this argument does not make sense, there are ways to make it work: one should be reminded of the definition of unimodularity in Section 12.1, just after Theorem 12.8. In fact, one can make the following more general definition:

{d.urn}
Definition 14.1. (a) Given a d-regular random rooted graph (or network) (G, ρ) sampled from a measure µ, choose a neighbour of ρ uniformly at random, call it ρ′, and consider the joint distribution of (G, ρ, ρ′) on G_∗∗, which is the set of all (double-rooted) isomorphism-classes of triples (G, x, y), where G is a bounded degree graph and x, y ∈ V(G) are neighbours, equipped with the natural topology. Now "take the step to ρ′ and look back", i.e., take (G, ρ′, ρ), which is again a random rooted graph, with root ρ′, plus a neighbour ρ. If the two laws are the same, then the random rooted graph (G, ρ), or rather the measure µ, is called unimodular.

(b) For non-regular random graphs, if the degrees are µ-a.s. bounded by some d, one can add d − deg(x) "half-loops" to each vertex x to make the graph d-regular. In other words, in the above definition of the step from ρ to ρ′, consider the delayed random walk that stays put with probability (d − deg(x))/d. (This is a natural definition for random subgraphs of a given transitive graph.)

(c) If the degrees are not bounded, one still can take the following limiting procedure: take a large d, add the half-loops to each vertex with degree less than d to get G_d, and then require that the total variation distance between (G_d, ρ, ρ′) and (G_d, ρ′, ρ), divided by P[ρ′ ≠ ρ] (with the randomness given by both sampling (G, ρ) and then making the SRW step; we divide to make up for the laziness we introduced), tends to 0 as d → ∞. Another possibility in the unbounded case is to truncate (G, ρ) in any reasonable way to have maximal degree d, and require that the resulting random graph be unimodular, for any d. One reasonable truncation is to remove uniformly at random the excess number of incident edges for each vertex with a degree larger than d. (Some edges might get deleted twice, which is fine, and we might get several components, which is also fine.)

(d) For the case E_µ deg(ρ) < ∞, there is an alternative to using the delayed SRW. Let µ̂ be the probability measure µ on (G, ρ) size-biased by deg(ρ), i.e., with Radon-Nikodym derivative dµ̂/dµ (G, ρ) = deg(ρ)/E_µ deg(ρ). Now, if we sample (G, ρ) from µ̂, and then take a non-delayed SRW step to a neighbour ρ′, then (G, ρ′, ρ) is required to have the same distribution as (G, ρ, ρ′).

It is easy to see that if we take a transitive graph G, then it will be unimodular in our old sense (|Γ_x y| = |Γ_y x| for all x ∼ y, where Γ = Aut(G)) if and only if the Dirac measure on (G, o), with an arbitrary o ∈ V(G), is unimodular in the sense of Definition 14.1. Indeed, for a double-rooted equivalence class (G, x, y), with (x, y) ∈ E(G), we will have P[ (G, ρ, ρ′) ≃ (G, x, y) ] = |Γ_x y|/d, where d is the degree of the graph, while P[ (G, ρ′, ρ) ≃ (G, x, y) ] = |Γ_y x|/d, and these are equal for all pairs (x, y) ∈ E(G) iff G is unimodular.


The simplest possible example of a unimodular random rooted graph is any fixed finite graph G with a uniformly chosen root ρ. Here, unimodularity is the most transparent by part (d) of Definition 14.1: if we size-bias the root ρ by its degree, then the edge (ρ, ρ′) given by non-delayed SRW is simply a uniform random element of E(G), obviously invariant under ρ ↔ ρ′.

The example of finite graphs is in fact a crucial one. It is not hard to see that the Benjamini-Schramm limit of finite graphs is still a unimodular random rooted graph. For instance, the rooted tree (T_d^∗, ρ) of Figure 14.1 was a local limit of finite graphs. Rather famous examples are the Uniform Infinite Planar Triangulation and Uniform Infinite Planar Quadrangulation, which are the local weak limits of uniform planar maps with n triangle or quadrangle faces, respectively; see [Ang03] and [BenC10], and the references there. Note that here the sequence of finite graphs G_n is random, so we want that in the joint distribution of this randomness and of picking a uniform random vertex of G_n, the r-ball around this vertex converges in law to the r-ball of G.

{ex.limisunimod}
⊲ Exercise 14.2. If G is a transitive graph with a sequence of finite G_n converging to it locally, then it must be unimodular. Same for random rooted networks that are local weak limits.

Unimodularity is clearly a strengthening of the stationarity of the delayed random walk, i.e., of the fact that the Markov chain (G, ρ) → (G, ρ′) given by the delayed SRW is stationary on (G_∗, µ). This strengthening is very similar to reversibility, and instead of "looking back" from ρ′ and working with G_∗∗, one could try an alternative description: namely, one could require that the Markov chain (G, ρ) → (G, ρ′) given by the delayed SRW be reversible on (G_∗, µ). This does not always work: e.g., if (G, ρ) is a deterministic transitive graph, then reversibility holds even without unimodularity. Nevertheless, we have the following simple equivalences:

{ex.percunimod}
⊲ Exercise 14.3. Consider Ber(p) percolation with 0 < p < 1 on a transitive graph G. Show that the following are equivalent: (a) G is unimodular; (b) the cluster of a fixed vertex ρ is a unimodular random graph with ρ as its root; (c) the delayed SRW on the cluster generates a reversible chain on G_∗. Formulate a version for general invariant percolations on G.

Let us remark that a random rooted graph (G, ρ) is called stationary in [BenC10] if (non-delayed) SRW generates a stationary chain on G_∗, and is called reversible if the chain generated on G_∗∗ by the "looking back" procedure is stationary. In other words, using the inverse of Definition 14.1 (d): if we bias the distribution of a reversible random rooted graph by 1/deg(ρ), then we get a unimodular random rooted graph.

{ex.PGW}
⊲ Exercise 14.4.
(a) Show that the Galton-Watson tree with offspring distribution Poisson(λ), denoted by PGW(λ), rooted as normally, is unimodular.
(b) Show that if we size-bias PGW(λ) by deg(ρ), we get a rooted tree given by connecting the roots of two i.i.d. copies of PGW(λ) by an edge, then choosing the root uniformly from the two.
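Part (b) can be sanity-checked at the level of the root degree: size-biasing Poisson(λ) by its value gives exactly 1 + Poisson(λ), which is the root degree of two PGW(λ) roots joined by an edge. A minimal sketch, with function names invented here:

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    return exp(-lam) * lam ** k / factorial(k)

def size_biased_pmf(lam, k):
    # pmf of Poisson(lam) size-biased by the value: k * P(k) / E[Poisson(lam)]
    return k * poisson_pmf(lam, k) / lam

lam = 1.7
for k in range(1, 12):
    # size-biased Poisson(lam) equals 1 + Poisson(lam): P_hat(k) = P(k - 1)
    assert abs(size_biased_pmf(lam, k) - poisson_pmf(lam, k - 1)) < 1e-12
```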


We have seen so far two large classes of unimodular random rooted networks: Cayley graphs (and their percolation clusters) and local weak limits of finite graphs and networks. Before going further, let us state a last equivalent version of Definition 14.1: we may require that the Mass Transport Principle holds, in the form that the probability measure µ on rooted networks has the property that, for any Borel-measurable function F on G_∗∗,

    ∫_{G_∗} Σ_{x∈V(G)} F(G, ρ, x) dµ(G, ρ) = ∫_{G_∗} Σ_{x∈V(G)} F(G, x, ρ) dµ(G, ρ) .    (14.1) {e.MTPgen}

How do we get back our old Mass Transport Principle (12.4) for transitive graphs? We can take µ to be a Dirac mass on (G, o), with an arbitrary o ∈ V(G); the transitivity of G and the diagonal invariance of f are now replaced by considering rooted networks and functions on double-rooted isomorphism-classes. Namely, if f is a diagonally invariant random function on a transitive graph G, then we can take F(G, x, y) := E f(x, y) in (14.1).

Here comes an obvious fundamental question: can we get all Cayley graphs and unimodular random rooted networks as local weak limits of finite graphs and networks? We have seen that amenable Cayley graphs are actually local weak limits of finite graphs (Exercise 14.1). To start with, can we get regular trees as limits?

{ex.G(n.d/n)girth}
⊲ Exercise 14.5. Show that for all λ ∈ R+ and k ∈ Z+, the probability that the smallest cycle in the Erdős-Rényi random graph G(n, λ/n) is shorter than k tends to 0 as n → ∞.

{ex.bipgirth}
⊲ Exercise 14.6. Fix d ∈ Z+, take d independent uniformly random permutations π₁, . . . , π_d on [n], and consider the bipartite graph V = [2n], E = {(v, n + π_i(v)) : 1 ≤ i ≤ d, 1 ≤ v ≤ n}.
(a) Show that the number of multiple edges remains tight as n → ∞.
(b) Show that the local weak limit of these random bipartite graphs is the d-regular tree T_d.

It is also true that the sequence of uniformly chosen random d-regular graphs converges to the d-regular tree; in some sense, this follows from Corollary 2.19 of [Bol01]. This model is usually handled by proving things for the so-called configuration model (which is rather close to a union of d independent perfect matchings), and then verifying that the two measures are not far from each other from the point of view of what we are proving.

⊲ Exercise 14.7. Show that for all λ ∈ R+, the local weak limit of the Erdős-Rényi random graphs G(n, λ/n) is the PGW(λ) tree of Exercise 14.4.

Now that we know that amenable transitive graphs and regular trees are local weak approximable (Exercises 14.1 and 14.6), it is not completely ridiculous to ask the converse to Exercise 14.2: does every Cayley diagram have a sequence of labelled finite digraphs G_n converging to it in the Benjamini-Schramm sense? In particular, is every Cayley graph a local weak limit of finite graphs? And more generally, is every unimodular random rooted graph or network a local weak limit [AldL07]?

We need to clarify a few things about these three questions. First of all, a group being sofic has a definition that is independent of its Cayley diagram, i.e., of the generating set considered (see Exercise 14.8 below), so the first question can be phrased as follows:

{q.sofic}
Question 14.1 ([Gro99, Wei00]). Is every f.g. group sofic?

This definition is that the group Γ has a sequence of almost-faithful almost-actions on finite permutation groups: a sequence {σ_i}_{i=1}^∞ of maps σ_i : Γ −→ Sym(n_i) with n_i → ∞ such that

    ∀ f, g ∈ Γ :      lim_{i→∞} (1/n_i) |{1 ≤ p ≤ n_i : p^{σ_i(f)σ_i(g)} = p^{σ_i(fg)}}| = 1 ,
    ∀ f ≠ g ∈ Γ :    lim_{i→∞} (1/n_i) |{1 ≤ p ≤ n_i : p^{σ_i(f)} ≠ p^{σ_i(g)}}| = 1 .    (14.2) {e.soficperm}

{ex.sofic}
⊲ Exercise 14.8. Show that a f.g. group is sofic in the "almost-action by permutations" sense (14.2) if and only if one or any Cayley diagram of it is local weak approximable by finite labelled digraphs.

If we have a convergent sequence of labelled digraphs, we can just forget the edge orientations and labels, and get a convergent sequence of graphs. The converse is false: a result of Ádám Timár [Tim11] says that there is a sequence of graphs G_n converging to a Cayley graph G of the group (Z ∗ Z₂) × Z₄ for which the edges of G_n cannot be oriented and labelled in such a way that the resulting networks converge to the Cayley diagram that gave G. The construction uses a result of Bollobás: there exists a sequence {H_n} of finite 3-regular graphs with girth going to infinity (hence locally converging to T₃), for which the density of any independent set is less than 1/2 − ǫ for some absolute constant ǫ > 0. Although the vertex set of the 3-regular tree T₃, the standard Cayley graph of Z ∗ Z₂, can trivially be decomposed into two independent sets in an alternating manner, the graphs H_n cannot see this decomposition. Timár constructed a Cayley diagram of (Z ∗ Z₂) × Z₄ that "sees" the alternating decomposition of T₃, hence it can be locally approximated by the graphs H_n × C₄ only if one forgets the labels; for a local approximation of the Cayley diagram, one needs to use finite 3-regular bipartite graphs with high girth, for which the alternating decomposition into two independent sets does exist.

This example of Timár shows that going from the local approximability of graphs to the soficity of groups might be a complicated issue. Indeed, answering both of the following questions with "yes" is expected to be hard:

{q.GraphsToGroups}
Question 14.2.
(a) Does the local weak approximability of one Cayley graph of a group imply the local weak approximability of all its Cayley graphs? (Note that the answer to the same question for approximations of Cayley diagrams by labelled digraphs is an easy "yes", because, by Exercise 14.8, a group being sofic has a definition that is independent of its Cayley diagram.)
(b) Is it true that if all finitely generated Cayley graphs of a group are local weak approximable, then the group is sofic?

A probably simpler version of this question is the following:

⊲ Exercise 14.9. *** By encoding edge labels by finite graphs, is it possible to show that every Cayley graph of every group being local weak approximable would imply that every group is sofic?

It is also important to be aware of the fact that local approximability by Cayley diagrams of finite groups is a strictly stronger notion than soficity. This is called the LEF property (locally embeddable into finite groups); it can be formalized as follows: for any finite subset F ⊆ Γ, there is a finite symmetric group Sym(n) and an injective map σ_n : F −→ Sym(n) such that σ_n(fg) = σ_n(f)σ_n(g) whenever f, g, fg ∈ F.

⊲ Exercise 14.10.
(a) Show that any residually finite group (defined in Exercise 2.7) has the LEF property.
(b) Show that a finitely presented LEF group is residually finite. Conclude that a finitely presented infinite simple group cannot have the LEF property.

There are groups that are known to be sofic but not LEF: there exist solvable non-LEF groups [GoV97]. Amenability and residual finiteness are two very different reasons for soficity. One can also combine them easily:

⊲ Exercise 14.11.
(a) Define a good notion of residual amenability, and show that residually amenable groups are sofic.
(b) Using the residual finiteness of F₂ (see Exercise 2.7), give a sequence of finite transitive graphs converging locally to the tree T₄.

There are also examples of sofic groups that are not residually amenable [ElSz06]. Further sources of provably sofic groups can be found in [Ele12]. Most people think that the answer to Question 14.1 is "no" (maybe everyone except for Russ Lyons, but he also has some good reasons). For instance, the Burger-Mozes group (a nice non-trivial lattice in Aut(T_m) × Aut(T_n) [BurgM00]) is a finitely presented infinite simple group, hence cannot be LEF by Exercise 14.10; it might even be a non-sofic group. One reason that Question 14.1 is important is that there are many results known for all sofic groups. Gromov says that if a property is true for all groups then it must be trivial. (A counterexample to this meta-statement is the Ershler-Lee-Peres theorem about the c√n escape rate. But maybe probability does not count? Or maybe quantitative results do not count?) The θ(p_c) = 0 percolation conjecture is certainly a serious candidate.

⊲ Exercise 14.12. Let Γ = Γ₀ > Γ₁ > . . . be a sequence of finite index subgroups, and S a finite generating set of Γ. Consider the Schreier graphs G_n := G(Γ, Γ_n, S) of the actions of Γ on the levels of the corresponding coset tree T(Γ, (Γ_n)), as defined just before (13.16). Show that (G_n)_{n≥0} converges in the Benjamini-Schramm sense to the Cayley graph G(Γ, S) iff the sequence (Γ_n)_{n≥0} satisfies the Farber condition.

In the next subsection, we will discuss the relevance of local weak convergence to probability on groups. See [AldL07] for more on local weak convergence and unimodularity, and [Pes08] for soficity.

14.2 Spectral measures and other probabilistic questions

{ss.spectral}

Although the question of local weak convergence is orthogonal to being quasi-isometric (in the sense that the former cares about local structure only, while the latter cares about global structure), it still preserves a lot of information about probabilistic behaviour.

A simple but important example is the set of simple random walk return probabilities p_k(o, o) = P[X_k = o | X_0 = o]: if the r-neighbourhood of a vertex o in a graph G is isomorphic to the r-neighbourhood of o′ in G′, and r is at least k/2, then obviously p_k^G(o, o) = p_k^{G′}(o′, o′) in the two graphs.

Now let us turn to the question of what properties local weak convergence preserves.

{d.testable}
Definition 14.2. Let p(G) be a graph parameter: a number or some more complicated object assigned to isomorphism classes of finite (or sometimes also infinite) graphs. It is said to be locally approximable, or testable, or simply local, if it is continuous w.r.t. local weak convergence: whenever {G_n} is a convergent sequence of finite graphs, {p(G_n)} also converges.

For instance, the Benjamini-Schramm paper where local weak convergence was introduced is about such an example:

{t.BSrec}
Theorem 14.3 ([BenS01]). Let G_n be a local weak convergent sequence of finite planar graphs with uniformly bounded degrees. Then the limiting unimodular random rooted graph (G, ρ) is almost surely recurrent.

For instance, the balls in a hyperbolic planar tiling converge locally to a recurrent unimodular random graph that is somewhat similar to the one-ended tree of Figure 14.1, even though the tiling itself is transient.

The proof of the theorem is based on two theorems on circle packings. The first is Koebe's theorem that any planar graph can be represented as a circle packing; moreover, for triangulated graphs the representation is unique up to Möbius transformations. This helps normalize the circle packing representations of G_n so that we get a circle packing for the limit graph G. Then one needs to show that this limiting circle packing has at most one accumulation point of centres, which implies, by a result of He and Schramm, that the graph is recurrent.

A generalization is that if there exists a finite graph such that none of the G_n contains it as a minor (e.g., planar graphs are characterized by not having either K₅ or K₃,₃ as a minor), then the limit is almost surely recurrent [AngSz]. An extension of Theorem 14.3 in a different direction is to relax the condition of uniformly bounded degrees to sequences where the degree of the uniform random root has at most an exponential tail [GuGN13]. The importance of this extension is that it shows that the Uniform Infinite Planar Triangulation and Quadrangulation (UIPT and UIPQ, mentioned shortly after Definition 14.1) are recurrent. The converse of Theorem 14.3 is false:

⊲ Exercise 14.13. Construct a local weak convergent sequence of uniformly bounded degree finite graphs G_n such that for any finite graph F there is an n_F with G_n containing F as a minor whenever n > n_F, but the limit is almost surely recurrent. (Hint 1: Show that any finite graph can be embedded into R³ without edge intersections, hence any finite graph is contained in a large enough lattice box [0, k]³ as a minor, and then construct the required sequence using boxes of random size. Hint 2: You get a probabilistically trivial but graph theoretically slightly harder example by showing that the Z² lattice with the diagonals added as edges contains any finite graph as a minor.)
More precisely: ⊲ Exercise 14. E ). {ex.g. the Markov operator P is self-adjoint w. A fancy reformulation of this observation is that the spectral measure of the random walk is locally approximable.localspectral} 174 . as deﬁned in Exercise 14.r.graphs.) Now consider the unit vectors ϕx := δx / π (x) for each x ∈ V . Then the spectral measure of Gn . ρ). these are probability measures on [−1. Let Gn be ﬁnite graphs converging to (G. (b) ** One may also consider taking the weighted average σ ˆG := x ∈V σx. averaged w. However. then P has n eigenvalues −1 ≤ λi ≤ 1.1 (d). it is natural to take the average of the above spectral measures: σG := 1 n {ex. converges to ∞ n a unimodular random rooted graph (G. with orthogonal eigenvectors λi Ei . 2 x ∈V σx. for x = y . then. For x = y . ·)π . while. (·.3) {e. and P = n i=1 G is a ﬁnite graph on n vertices.t. and the local approximability of the return probabilities (and of the ratio π (x)/π (y ) = deg(x)/ deg(y )) implies that the spectral measures are also locally approximable.15. I = 2 1 −1 1 −1 t dE (t). Is there a nice way to express σ ˆG using the eigenvalues {λi }? Now notice that π (x) π (y ) pn (x. simple random walk on a graph G(V. ρn )}n=1 converges in distribution to pG k (ρ. for any k ∈ Z+ ﬁxed. dE (t). for any measurable S ⊂ [−1. e. converges weakly (weak convergence of measures) to the Kesten spectral measure Eσρ. similarly to the size-biasing in Deﬁnition 14. Here is what we mean by this. it is also natural to take the normalized counting 1 n n i=1 δλi . ρ) in the local weak sense.x . are signed measures with zero total mass. {pG k (ρn . E ) is a ﬁnite graph on n vertices.t. with a uniform random root ρn . and call that the spectral measure of G. P n ϕy )π = 1 −1 tn dσx. For any reversible Markov chain.r. In particular. ρ). and deﬁne.y (t) . ρ).Kestenmom} hence the return probabilities are given by the moments of the Kesten spectral measures σx.. each ﬁnite Gn has 1 in its spectrum. 
A fancy reformulation of the observation about return probabilities is that the spectral measure of the random walk is locally approximable. Here is what we mean by this.

For any reversible Markov chain, the Markov operator P is self-adjoint w.r.t. (·, ·)_π, and hence we can take its spectral decomposition P = ∫_{−1}^{1} t dE(t), where E is a projection-valued measure on ℓ²(V, π), a resolution of the identity, I = ∫_{−1}^{1} dE(t). (If G(V, E) is a finite graph on n vertices, then P has n eigenvalues −1 ≤ λ_i ≤ 1, with orthogonal eigenvectors f_i of ℓ²(V, π)-norm 1, and for each of them we have the projection E_i(f) := (f, f_i)_π f_i on the eigenline spanned by f_i, and P = Σ_{i=1}^{n} λ_i E_i.) Now consider the unit vectors ϕ_x := δ_x/√π(x) for each x ∈ V, and define, for any measurable S ⊂ [−1, 1],

    σ_{x,y}(S) := (ϕ_x, E(S) ϕ_y)_π = ∫_S d(ϕ_x, E(t) ϕ_y)_π ,    (14.3) {e.Kestenmeas}

sometimes called the Kesten spectral measures or Plancherel measures. For x = y, these are probability measures on [−1, 1], while, for x ≠ y, they are signed measures with zero total mass. Now notice that

    √(π(x)/π(y)) · p_n(x, y) = (ϕ_x, P^n ϕ_y)_π = ∫_{−1}^{1} t^n dσ_{x,y}(t) ,    (14.4) {e.Kestenmom}

hence the return probabilities are given by the moments of the Kesten spectral measures σ_{x,x}. Since these measures have compact support, they are determined by their moments, and the local approximability of the return probabilities (and of the ratios π(x)/π(y) = deg(x)/deg(y)) implies that the spectral measures are also locally approximable.

⊲ Exercise 14.14.
(a) If G(V, E) is a finite graph on n vertices, show that σ_{x,x}({λ_i}) = f_i(x)² π(x), with the unit eigenvectors f_i as above. It is natural to take the average of these spectral measures, σ_G := (1/n) Σ_{x∈V} σ_{x,x}; on the other hand, it is also natural to take the normalized counting measure (1/n) Σ_{i=1}^{n} δ_{λ_i} on the eigenvalues of the Markov operator, and call that the spectral measure of G. Deduce that the two definitions give the same measure σ_G. {e.specmeas}
(b) ** One may also consider taking the weighted average σ̂_G := Σ_{x∈V} π(x) σ_{x,x} / Σ_{x∈V} π(x). Is there a nice way to express σ̂_G using the eigenvalues {λ_i}?

More precisely, the local approximability of the spectral measure is the following:

{ex.localspectral}
⊲ Exercise 14.15. Assume that a sequence of finite graphs G_n, with a uniform random root ρ_n, converges to a unimodular random rooted graph (G, ρ) in the local weak sense; in particular, for any k ∈ Z+ fixed, {p_k^{G_n}(ρ_n, ρ_n)}_{n=1}^∞ converges in distribution to p_k^G(ρ, ρ). Show that the spectral measure of G_n, as defined in Exercise 14.14 (a), converges weakly (weak convergence of measures) to the Kesten spectral measure E σ_{ρ,ρ} of (G, ρ), averaged w.r.t. the randomness in the limit (G, ρ).

However, note that this does not mean that the supports of these measures also converge: for instance, each finite G_n has 1 in its spectrum, while, if G is a non-amenable transitive graph, then its spectral measure is bounded away from 1.
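The identity of the two finite-graph spectral measures can be checked numerically on a small example: for SRW on the cycle C_m, the eigenvalues of P are cos(2πj/m), so the k-th moment of the normalized eigenvalue counting measure must equal the averaged return probability tr(P^k)/m. A self-contained sketch (helper names invented):

```python
from math import cos, pi

def srw_matrix_cycle(m):
    # transition matrix of simple random walk on the cycle C_m
    P = [[0.0] * m for _ in range(m)]
    for i in range(m):
        P[i][(i + 1) % m] = 0.5
        P[i][(i - 1) % m] = 0.5
    return P

def matpow(P, k):
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = [[sum(R[i][l] * P[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
    return R

m, k = 7, 6
avg_return = sum(matpow(srw_matrix_cycle(m), k)[i][i] for i in range(m)) / m
# eigenvalues of P on C_m are cos(2*pi*j/m), j = 0, ..., m-1
kth_moment = sum(cos(2 * pi * j / m) ** k for j in range(m)) / m
assert abs(avg_return - kth_moment) < 1e-12
```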

The spectral measure of random walks on infinite graphs is a widely studied area [MohW89]. A basic result is that the Kesten spectral measure σ_{x,x} of SRW on the k-regular tree T_k is supported on [−2√(k−1)/k, 2√(k−1)/k]; suitably rescaled, it converges, as k → ∞, to Wigner's semicircle law, which is also the n → ∞ limit of the normalized counting measure on the eigenvalues of the n × n classical β = 1, 2, 4 random matrix ensembles.

{ex.Zspec}
⊲ Exercise 14.16. Show that for SRW on Z, the Kesten spectral measure is dσ_{x,x}(t) = (1/(π√(1 − t²))) 1_{[−1,1]}(t) dt. (Hint: you could do this in at least two ways: either from the spectrum of the cycle C_n, using Exercise 14.15, or from (14.4), arguing that the spectral measure is determined by its moments.)

{ex.prodspec}
⊲ Exercise 14.17. Consider the spectral measures σ_{x,y} associated to the adjacency matrices (as opposed to the Markov operators) of two graphs G_i, i = 1, 2. Show that if u = (u₁, u₂) and v = (v₁, v₂) are two vertices in the direct product G₁ × G₂, then σ_{u,v}^{G₁×G₂} = σ_{u₁,v₁}^{G₁} ∗ σ_{u₂,v₂}^{G₂}, a convolution of measures. Note that this can be easily translated to the spectral measures of the Markov operators only when both G_i are regular graphs. Deduce, for instance, that the direct product of two amenable groups is amenable.

⊲ Exercise 14.18. * Give a sequence of finite non-expander Cayley graphs converging locally to the free group. (Hint: give first a sequence of infinite solvable groups converging locally to the free group.)

For an amenable group, the associated random walk has not only ρ(P) = 1, but 1 also lies in the support of the spectral measure dσ_{x,x}:

{ex.amenspecmeas}
⊲ Exercise 14.19 (Kaimanovich). * Show that for any symmetric finitely supported µ on an amenable group, for any h < 1 and ǫ > 0,

    σ_{x,x}[1 − h, 1] ≥ (1 − 2ǫ/h²) / |A_ǫ| ,

where A_ǫ is any Følner set with |A_ǫ g △ A_ǫ| < ǫ|A_ǫ| for all g ∈ supp µ.

A corollary to Exercise 14.15, which has several nice applications, is the following:

{t.Luck}
Theorem 14.4 (Lück approximation for combinatorialists [Lüc94]). Let {G_n}_{n=1}^∞ be a sequence of finite graphs with degrees at most D, with edges labelled by integers from {−D, . . . , D}, and let A_n = Adj(G_n) be the adjacency matrices, with the labels being the entries. Assume that {G_n} converges in the local weak sense (with the obvious generalization handling the labels). Then dim ker_Q A_n / |V(G_n)| converges.

Proof. First of all, note that dim ker_Q A_n / |V(G_n)| = σ_n({0}), where σ_n is the spectral measure of A_n. By Exercise 14.15, the measures σ_n converge weakly; but weak convergence alone does not control the point mass at 0, so we need to know that σ_n has little mass near, but not at, 0. Consider all the nonzero eigenvalues λ_i ∈ R of A_n. Their product is a coefficient of the characteristic polynomial, hence is a nonzero integer. On the other hand, all the absolute values |λ_i| are at most D², hence not too many of them can be very small: σ_n([−ǫ, ǫ] \ {0}) < D′/log(1/ǫ) for any ǫ > 0, with some D′ depending on D. Together with the weak convergence of the spectral measures, this implies that σ_n({0}) also converges.
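For SRW on Z, the moment identity (14.4) can be checked numerically: the 2n-step return probability is C(2n, n)/4^n, and this must match the 2n-th moment of the arcsine density dt/(π√(1 − t²)). A sketch, not a proof (math.comb requires Python 3.8+; the quadrature routine is a simple midpoint rule invented for this check):

```python
from math import comb, cos, pi

def return_prob_Z(n2):
    # p_{2n}(0, 0) for SRW on Z, with n2 = 2n even
    n = n2 // 2
    return comb(2 * n, n) / 4 ** n

def arcsine_moment(n2, steps=50000):
    # 2n-th moment of dsigma(t) = dt / (pi * sqrt(1 - t^2)); substitute t = cos(theta)
    h = pi / steps
    return sum(cos((j + 0.5) * h) ** n2 for j in range(steps)) * h / pi

for n2 in (2, 4, 6, 8):
    assert abs(return_prob_Z(n2) - arcsine_moment(n2)) < 1e-6
```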
A corollary to Exercise 14. if u = (u1 . with the weak convergence of the spectral measures. Consider now all the nonzero eigenvalues λi ∈ R of An . a convolution of measures. Assume that Adj(Gn ) be the adjacency matrices with the labels being the entries. Then dim kerQ An /|V (Gn )| {Gn } converges in the local weak sense (with the obvious generalization handling the labels.16. it converges to Wigner’s semicircle law. the associated random walk has not only ρ(P ) = 1.v = σu ∗ σu .4 (L¨ uck approximation for combinatorialists [L¨ uc94]). . * Show that for any symmetric ﬁnitely supported µ on an amenable group..) Gi ⊲ Exercise 14. Deduce. or from (14.x [1 − h. where σn is the spectral measure of characteristic polynomial. .prodspec} opposed to the Markov operators) of two graphs Gi . this implies that σn ({0}) also converges. Together basic result is that the spectral measure σx. * Give a sequence of ﬁnite non-expander Cayley graphs converging locally to the free group.

For a long while, it was not known whether simple random walk on a group can have a spectral measure that is not absolutely continuous with respect to Lebesgue measure. Discrete spectral measures are usually associated with random-walk-like operators on random underlying structures, e.g., with the adjacency or the transition matrices of random trees [BhES09], or with random Schrödinger operators ∆ + diag(V_i) on Zd or Td, with V_i being i.i.d. random potentials on the vertices [Kir07]. Note that the latter is a special case of the former in some sense, since the V_i can be considered as loops with random weights added to the graph. Here, Anderson localization is the phenomenon that for large enough randomness in the potentials, the usual picture of an absolutely continuous limiting spectral measure, with repulsing (or even lattice-like) eigenvalues and spatially extended eigenfunctions in the finite approximations, disappears, and Poisson statistics for the eigenvalues appears, with localized eigenfunctions, giving L2-eigenfunctions, hence atoms, in the limiting spectral measure. However, the exact relationship between the behaviour of the limiting spectral measure and the eigenvalue statistics of the finite approximations has only been partially established, and it is also unclear when exactly localization and delocalization happen. Assuming that the random potentials have a nice distribution, localization is known for arbitrarily small (but fixed) variance on Z, and for very large variance on other graphs, while delocalization is conjectured for small variance on Zd, d ≥ 3, and proved on Td. The case of Z2 is unclear even conjecturally. In the context of our present notes, it looks like a strange handicap for the subject that the role of Benjamini-Schramm convergence was discovered, as a big surprise, only in [AiW06], still without realizing that this local convergence has a well-established theory, e.g., for random walks.
Nevertheless, deterministic groups can also exhibit discrete spectrum: Grigorchuk and Żuk showed that the lamplighter group Z2 ≀ Z has a self-similar action on the infinite binary tree (see Subsection 15.1, around (15.6)), and, with the corresponding generating set, the resulting Cayley graph G has a pure discrete spectral measure. They used the finite Schreier graphs G_n for the action on the nth level of the tree, converging locally to G. See [GriZ01]. An alternative proof was found in [DicS02], later extended by [LeNW07]; these papers interpret the return probabilities of SRW on the lamplighter group F ≀ Zd as the averaged return probabilities of a SRW on a p-percolation configuration on Zd, with parameter p = 1/|F|, where the walk is killed when it wants to exit the cluster of the starting vertex. (So, in a certain sense, the picture of a random walk on a random underlying structure is somehow still there.)

On the other hand, the following is still open. For a rough intuitive "definition" of the von Neumann dimension dimΓ, see the paragraph before Theorem 11.3.

Conjecture 14.5 (Atiyah 1976). {c.Atiyah} Simple random walk on a torsion-free group Γ cannot have atoms in its spectral measure dσ_{x,x}, and more generally, any operator on ℓ2(Γ) given by multiplication by a non-zero element of the group algebra CΓ has a trivial kernel. Even more generally, the kernel of any matrix A ∈ M_{n×k}(CΓ), as a linear operator ℓ2(Γ)^n −→ ℓ2(Γ)^k, has integer Γ-dimension

    dimΓ(ker A) := Σ_{i=1}^{n} (π_{ker A} e_i, e_i)_{ℓ2(Γ)^n} ,

where π_H is the orthogonal projection onto the subspace H ⊆ ℓ2(Γ)^n and e_i is the standard basis vector of ℓ2(Γ)^n, with a Kronecker δ_e(g) function (where e ∈ Γ is the identity) in the ith coordinate.

How is the claim about the trivial kernel a generalization of the random walk spectral measure being atomless? The Markov operator P, generated by a finitely supported symmetric measure µ, can be represented as multiplication of elements ϕ = Σ_{g∈Γ} ϕ(g) g ∈ ℓ2(Γ) by µ = Σ_{g∈Γ} µ(g) g ∈ R≥0 Γ. Now, because of group-invariance, having an atom in dσ_{x,x}(t) at t_0 ∈ [−1, 1] is equivalent to having an atom at dE(t_0), and that means there is a non-trivial eigenspace in ℓ2(Γ) for P with eigenvalue t_0. Indeed, dimΓ(ker(µ − t_0 e)) is exactly the size σ_{x,x}({t_0}) of the atom.

The conjecture used to have a part predicting the sizes of atoms for groups with torsion; a strong version was disproved by the lamplighter result [GriZ01], while all possible weaker versions have recently been disproved by Austin and Grabowski [Aus09, Gra10]. The proof of Austin is motivated by the above percolation picture.

Here is a nice proof of the Atiyah conjecture for Z that I learnt from Andreas Thom (which might well be the standard proof). The Fourier transform

    a = (a_n)_{n∈Z} → â(t) = Σ_{n∈Z} a_n exp(2πitn),  t ∈ S^1,

is a Hilbert space isomorphism between ℓ2(Z) and L2(S^1), and also identifies multiplication in CZ (which is a convolution of the coefficients) with pointwise multiplication of functions on S^1. So, the kernel H_a for multiplication by a ∈ CZ in ℓ2(Z) is identified with Ĥ_a = { f(t) ∈ L2(S^1) : â(t) f(t) = 0 }, and the projection on Ĥ_a is just multiplication by 1_{Z_a}(t), the indicator function of the zero set of â(t). Then, by the Hilbert space isomorphism,

    dim_Z(H_a) = (π_{H_a} e, e)_{ℓ2(Z)} = ∫_{S^1} 1_{Z_a}(t) dt = Leb(Z_a) = 0 ,    (14.5) {e.LebZa}

since â(t) is a nonzero trigonometric polynomial, and we are done.

⊲ Exercise 14.20. {ex.arcsine} ** A variable X ∈ [0, 1] follows the arcsine law if P[X < x] = (2/π) arcsin(√x), or, in other words, has density (π √(x(1−x)))^{−1}. This distribution comes up in several ways for Brownian motion B_t on R, the scaling limit of SRW on Z: the location of the maximum of {B_t : t ∈ [0, 1]}, the location of the last zero in [0, 1], and the Lebesgue measure of {t ∈ [0, 1] : B_t > 0} all have this distribution. Is there a direct relation to the spectral measure density in Exercise 14.17? A possibly related question: is there a quantitative version of (14.5) relating the return probabilities ≍ n^{−1/2} on Z to the dimension 1/2 of the zeroes of Brownian motion? This relationship is classical, but can you formulate it using projections? See [MöP10] for background on Brownian motion.

To see a probabilistic interpretation of atoms in the spectral measure, note that for SRW on a group Γ, by (14.4), there is no atom at the spectral radius ±ρ(P) iff p_n(x, x) = o(ρ^n). And this estimate has been proved using random walks and harmonic functions, even in a stronger form: Theorem 7.8 of [Woe00], a result originated in the work of Guivarc'h and worked out by Woess, says that whenever a group is transient (i.e., not quasi-isometric to Z or Z2), then it is also ρ-transient, meaning that G(x, y | 1/ρ) < ∞ for Green's function evaluated at the spectral radius ρ = ρ(P). This clearly implies that there is no atom at ±ρ.

Let us turn for a second to the question of what types of spectral behaviour are robust under quasi-isometries, or just under a change of generators. The spectral radius ρ being less than 1 is of course robust, since it is the same as non-amenability. On the other hand, the polynomial correction in p_n(x, x) = o(ρ^n) can already be sensitive: for the standard generators in the free product Zd ∗ Zd, for d ≥ 5, we have p_n(x, x) ≍ n^{−5/2} ρ^n, while, if we take a very large weight for one of the generators in each factor Zd, then the random walk in the free product will behave like random walk on a regular tree, giving p_n(x, x) ≍ n^{−3/2} ρ^n, see Exercise 1.5. This instability of the exponent was discovered by Cartwright, see [Woe00, Section 17.B].
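The arcsine law of Exercise 14.20 is easy to observe by simulation; here is a small sketch (an ad hoc illustration), using the location of the maximum of a long SRW path as a discrete stand-in for Brownian motion:

```python
import math
import random

def argmax_location_fraction(n_steps, rng):
    # Position (as a fraction of time) at which an n-step simple random walk
    # first attains its maximum; the scaling limit follows the arcsine law.
    pos, best, best_t = 0, 0, 0
    for t in range(1, n_steps + 1):
        pos += rng.choice((-1, 1))
        if pos > best:
            best, best_t = pos, t
    return best_t / n_steps

rng = random.Random(0)
samples = [argmax_location_fraction(400, rng) for _ in range(4000)]
# Empirical CDF at x = 1/4 versus the arcsine CDF (2/pi) arcsin(sqrt(1/4)) = 1/3.
empirical = sum(s < 0.25 for s in samples) / len(samples)
print(empirical, 2 / math.pi * math.asin(0.5))
```

The two printed numbers should be within a few percent of each other; the same experiment with the last zero of the walk gives the same limiting distribution.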

⊲ Exercise 14.21. Explain why the strategy of Exercises 1.3, 1.4, 1.5 to prove the exponent 3/2 in the free group does not necessarily yield the same 3/2 in Zd ∗ Zd.

In a work in progress, Grabowski and Virág show that the discreteness of the spectral measure can be completely ruined by a change of generators: in the lamplighter group, by changing the generators, they can set the spectrum to be basically anything, from purely discrete to absolutely continuous.

The locality of the spectral measure suggests that if a graph parameter can be expressed via simple random walk return probabilities on the graph, then it might also be local. We have seen in Subsection 11.2 that the Uniform Spanning Tree measure of a finite graph is closely related to random walks. This motivates the following result of Russ Lyons:

Theorem 14.6 (Locality of the tree entropy [Lyo05]). {t.treeent} For any finite graph G(V, E), let τ(G) be the number of its spanning trees, and let h_tree(G) := log τ(G) / |V(G)| be its tree entropy. If G_n converges in the Benjamini-Schramm sense to the unimodular random rooted graph (G, ρ), then, under mild conditions on the unimodular limit graph (e.g., having bounded degrees suffices),

    lim_{n→∞} h_tree(G_n) = E [ log deg(ρ) − Σ_{k≥1} p_k^G(ρ, ρ)/k ] ,    (14.6) {e.treeentlim}

where p_k^G(ρ, ρ) is the SRW return probability on G.

Sketch of the proof for a special case. Let L_G = D_G − A_G be the graph Laplacian matrix: D_G is the diagonal matrix of degrees and A_G is the adjacency matrix. (For a d-regular graph, this is just d times our usual Markov Laplacian I − P.) The Matrix-Tree Theorem says that τ(G) equals det(L_G^{ii}), where the superscript ii means that the ith row and column are erased. By looking at the characteristic polynomial of L_G, it is easy to see that this truncated determinant is the same as (1/n) Π_{i=2}^{n} κ_i, where |V(G)| = n and 0 = κ_1 ≤ · · · ≤ κ_n are the eigenvalues of L_G.

Assume for easier notation that |V(G_n)| = n. A less trivial simplification is that we will assume that each G_n is d-regular. Then κ_i = d(1 − λ_i), where −1 ≤ λ_n ≤ · · · ≤ λ_1 = 1 are the eigenvalues of the Markov operator P. Thus

    log τ(G_n)/n = −(log n)/n + ((n−1)/n) log d + (1/n) Σ_{i=2}^{n} log(1 − λ_i) .    (14.7) {e.treeentn}
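The Matrix-Tree step above can be tested on small examples; a short sketch (illustrative ad hoc code), using exact rational elimination to avoid floating point issues:

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    # Matrix-Tree Theorem: tau(G) = det of the Laplacian with row and column 0 erased.
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    M = [row[1:] for row in L[1:]]   # truncated Laplacian
    det = Fraction(1)
    for i in range(n - 1):           # exact Gaussian elimination
        pivot = next((r for r in range(i, n - 1) if M[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n - 1):
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    return int(det)

# K4 has 4^{4-2} = 16 spanning trees (Cayley's formula); the cycle C5 has 5.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(spanning_tree_count(4, k4), spanning_tree_count(5, c5))  # -> 16 5
```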

Consider the Taylor series log(1 − λ) = −Σ_{k≥1} λ^k/k, valid for λ bounded away from 1. Recall that, for the uniform random root ρ in G_n, the invariance of the trace w.r.t. the choice of basis implies that E p_k^{G_n}(ρ, ρ) = (1/n) Σ_{i=1}^{n} λ_i^k. Putting these ingredients together,

    (1/n) Σ_{i=2}^{n} log(1 − λ_i) = −Σ_{k≥1} (1/k) ( E p_k^{G_n}(ρ, ρ) − 1/n ) .    (14.8) {e.treeentm}

We are on the right track towards formula (14.6); we just have to address how to interchange the infinite sum over k and the limit n → ∞.

For lazy SRW in a fixed graph, the distribution after a large number k of steps converges to the stationary one, which is constant 1/n in a d-regular graph with n vertices. Let us therefore consider the lazy walk in G_n, with Markov operator P̃ = (I + P)/2 and return probabilities p̃_k^{G_n}(·, ·). Any connected graph is at least 1-dimensional in the sense that any finite subset of the vertices that is not the entire vertex set has at least one boundary edge. Thus, Theorem 8.2 implies that

    | E p̃_k^{G_n}(ρ, ρ) − 1/n | ≤ C_d / √k ,    (14.9) {e.pkspeed}

for all n. So, if we had p̃_k instead of p_k on the RHS of (14.8), then we could use the summability of k^{−3/2} to get a control in (14.8) that is uniform in n. But how could we relate the lazy SRW to the original walk?

If we add d half-loops at each vertex, so that we get a 2d-regular graph G̃_n, then SRW on this graph is the same as the lazy SRW on G_n, hence we can again write down the identities (14.7) and (14.8), now with G̃_n and p̃_k. On the other hand, we obviously have τ(G̃_n) = τ(G_n). Thus,

    log τ(G_n)/n = −(log n)/n + ((n−1)/n) log(2d) − Σ_{k≥1} (1/k) ( E p̃_k^{G_n}(ρ, ρ) − 1/n ) .    (14.10) {e.treeentlazy}

Now (14.9) implies that for any ε > 0, if K and n are large enough, then

    | log τ(G_n)/n − log(2d) + Σ_{k=1}^{K} (1/k) ( E p̃_k^{G_n}(ρ, ρ) − 1/n ) | < ε .

Take n → ∞ and then K → ∞. For each fixed k, we have E p̃_k^{G_n}(ρ, ρ) → E p̃_k^G(ρ, ρ), which yields

    lim_{n→∞} log τ(G_n)/n = log(2d) − Σ_{k=1}^{∞} (1/k) E p̃_k^G(ρ, ρ) .

Note here that the infinite sum is finite either by taking the limit n → ∞ in (14.9) or by the infinite chain version of the same Theorem 8.2. This already shows the locality of the tree entropy, but we still would like to prove formula (14.6). This is accomplished by Exercise 14.22 below, the infinite graph version of the identity

    log 2 − Σ_{k≥1} (1/k) ( E p̃_k^{G_n}(ρ, ρ) − 1/n ) = −Σ_{k≥1} (1/k) ( E p_k^{G_n}(ρ, ρ) − 1/n )

that we get for any finite graph G_n by comparing (14.7, 14.8) with (14.10).
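As a quick sanity check of (14.6) on the simplest example (again just an illustration): the cycles C_n converge to Z in the Benjamini-Schramm sense, τ(C_n) = n gives h_tree(C_n) = (log n)/n → 0, and on the right-hand side log deg(ρ) = log 2 exactly cancels Σ_{k≥1} p_k^Z(0,0)/k = Σ_{m≥1} binom(2m, m) 4^{−m}/(2m) = log 2:

```python
import math

# RHS of (14.6) for SRW on Z: log deg(rho) - sum_{k>=1} p_k(0,0)/k.
# Only even k contribute: p_{2m}(0,0) = binom(2m, m) / 4^m, computed iteratively.
total, t = 0.0, 1.0   # t = binom(2m, m) / 4^m, starting at m = 0
for m in range(1, 200_000):
    t *= (2 * m - 1) / (2 * m)
    total += t / (2 * m)
rhs = math.log(2) - total
# LHS along the approximating cycles: h_tree(C_n) = log(n)/n at n = 10^6.
lhs = math.log(10 ** 6) / 10 ** 6
print(lhs, rhs)  # both close to 0
```

The partial sum converges slowly (the tail is of order k^{−1/2}, matching the √k speed in (14.9)), but both sides are visibly near 0.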

⊲ Exercise 14.22. {ex.pkqk} Let P be the transition matrix of any infinite Markov chain, and for α ∈ [0, 1), let Q := αI + (1 − α)P be the lazy transition matrix. For a state x, let p_k(x) and q_k(x) denote the return probabilities to x after k steps in the two chains. Show that

    Σ_{k≥1} q_k(x)/k = −log(1 − α) + Σ_{k≥1} p_k(x)/k .

(Hint: For any z ∈ (0, 1), write Σ_{k≥1} q_k(x) z^k/k as an inner product using the operator log(I − zQ), then let z ↗ 1.)

The tree entropy lim_n h_tree(G_n) = h_tree(G, ρ) can actually be calculated sometimes. For the 4-regular tree, from Theorem 14.6 and the tree's Green function (1.5), Lyons deduced h_tree(T_4) = 3 log(3/2) in [Lyo05]. From the connection between spanning trees and domino tilings in planar bipartite graphs (see the examples after Theorem 6.1 of [BurtP93]), one can show that h_tree(Z2) = 4G/π ≈ 1.166, where G := Σ_{k≥0} (−1)^k/(2k+1)² is Catalan's constant.

A similarly defined notion of entropy is the q-colouring entropy h_{q-col}(G) := log ch(G, q) / |V(G)|, where ch(G, q) is the number of proper colourings of G with q colours (i.e., colourings of V(G) such that neighbours never get the same colour).

⊲ Exercise 14.23. {ex.chpoly}
(a) Show that, for any q ∈ Z+ and any e ∈ E(G), we have ch(G, q) = ch(G \ e, q) − ch(G/e, q), where G \ e is the graph obtained from G by deleting e, and G/e is obtained by gluing the endpoints of e and erasing the resulting loops.
(b) Deduce that ch(G, q) is a polynomial in q, of the form q^n + a_{n−1}(G) q^{n−1} + · · · + a_1(G) q. This ch(G, q) is called the chromatic polynomial of G.
(c) Show that ch(G, x + y) = Σ_{S ⊆ V(G)} ch(G[S], x) ch(G[V \ S], y), where G[S] is the subgraph of G induced by S.

What about locality? It was proved in [BorChKL13] that h_{q-col}(G) is local for graphs with degrees bounded by d, whenever q > 2d. They used the Dobrushin uniqueness theorem: with this many colours, the effect of any boundary condition decays exponentially fast with the distance, and hence there is only one Gibbs measure in any infinite volume limit (i.e., on any limiting unimodular random rooted graph). Note that the 4-colour theorem says that any planar graph G satisfies ch(G, q) > 0 for all integers q ≥ 4. A certain generalization has been proved by Alan Sokal [Sok01]: there exists an absolute constant C < 8 such that if G has maximal degree d, then all roots of ch(G, q) in C are contained in the ball of radius Cd. There is a huge body of work on the chromatic polynomial by Sokal and coauthors (just search the arXiv).

The roots λ_1, ..., λ_n of the chromatic polynomial ch(G, q) = Π_{i=1}^{n} (q − λ_i) of G(V, E), |V(G)| = n, are called the chromatic roots of G, and (analogously to the spectral measure) the counting measure on the roots, normalized to have total mass 1, is called the chromatic measure µcol_G. We can express the q-colouring entropy using this measure:

    h_{q-col}(G) = (1/|V(G)|) log ch(G, q) = ∫_C log(q − z) dµcol_G(z) .    (14.11) {e.qcolorint}

Now, in light of (14.11) and the locality of the random walk spectral measure (Exercise 14.15), it is natural to ask about the locality of the chromatic measure. It should not be surprising that this has to do with the locality of how many different colourings are possible. This was addressed by Abért and Hubai [AbH12]; then Csikvári and Frenkel [CsF12] found a simpler argument that generalizes to a wide class of graph polynomials. We start with the definitions needed.

A graph polynomial f(G, z) = Σ_{k=0}^{n} a_k(G) z^k is called isomorphism invariant if it depends only on the isomorphism class of G. It is called multiplicative if f(G_1 ⊔ G_2, z) = f(G_1, z) f(G_2, z), where ⊔ denotes disjoint union. It is of exponential type if it satisfies the identity of Exercise 14.23 (c). Besides the chromatic polynomial, here are some further examples that satisfy all these properties (see [CsF12] for proofs and references):

(1) The Tutte polynomial of G(V, E) is defined as

    T(G, x, y) := Σ_{ω ⊆ E} (x − 1)^{k(ω)−k(E)} (y − 1)^{k(ω)+|ω|−|V|} ,    (14.12) {e.Tutte}

where k(ω) is the number of connected components of (V, ω). This is almost the same as the partition function Z_FK(p, q) of the FK model in (13.12); the exact relation is

    T(G, x, y) = (x − 1)^{−k(E)} (y − 1)^{−|V|} y^{|E|} Z_FK(1 − 1/y, (x − 1)(y − 1)) .    (14.13) {e.TutteFK}

A third common version is F(G, q, v) := Σ_{ω ⊆ E} q^{k(ω)} v^{|ω|}, where the variable v corresponds to p/(1 − p) in FK(p, q). Note that ch(G, q) = F(G, q, −1), and F(G, q, v), as a polynomial in q for any fixed v, satisfies the above properties, since its degree in q is |V|. This third form is the best for us now, but the proof is somewhat tricky.

(2) The Laplacian characteristic polynomial L(G, z) is the characteristic polynomial of the Laplacian matrix L_G = D_G − A_G, featured in the proof of Theorem 14.6.

(3) The (modified) matching polynomial is M(G, z) := z^n − m_1(G) z^{n−1} + m_2(G) z^{n−2} − m_3(G) z^{n−3} + ..., where m_k(G) is the number of matchings of size k.

A graph polynomial is of bounded exponential type if there is a function R : N −→ R≥0, not depending on G, such that, for any graph G with all degrees at most d, any v ∈ V(G), and any t ≥ 1,

    Σ_{v ∈ S ⊆ V(G), |S| = t} |a_1(G[S])| ≤ R(d)^{t−1} .    (14.14) {e.boundedexp}

Sokal's theorem above about the chromatic roots of bounded degree graphs generalizes rather easily to graph polynomials of bounded exponential type: Theorem 1.6 of [CsF12] says that if f(G, z) has a bounding function R in (14.14), then the polynomial has bounded roots: for graphs with all degrees at most d, the absolute value of any root is less than c R(d), where c < 7.04. Now, the main result of [CsF12], generalizing the case of chromatic polynomials from [AbH12], is the following:
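Exercise 14.23 (a) is also an (exponential-time) algorithm for computing ch(G, q); here is a minimal sketch of it, together with a brute-force check (both functions are ad hoc illustrations):

```python
from itertools import product

def chromatic_count(n, edges, q):
    # Brute force: count proper q-colourings of a graph on vertices 0..n-1.
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(q), repeat=n))

def chromatic_dc(vertices, edges, q):
    # ch(G, q) via deletion-contraction, Exercise 14.23 (a).
    edges = set(frozenset(e) for e in edges)
    if any(len(e) == 1 for e in edges):
        return 0                       # a loop admits no proper colouring
    if not edges:
        return q ** len(vertices)      # edgeless graph: q choices per vertex
    e = next(iter(edges))
    u, v = tuple(e)
    rest = edges - {e}
    deleted = chromatic_dc(vertices, rest, q)
    # contract e: relabel v as u (parallel edges collapse, loops are caught above)
    merged = set(frozenset(u if x == v else x for x in f) for f in rest)
    contracted = chromatic_dc(vertices - {v}, merged, q)
    return deleted - contracted

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
# ch(C4, 3) = (3-1)^4 + (3-1) = 18, agreeing with the brute-force count.
print(chromatic_dc(frozenset(range(4)), c4, 3), chromatic_count(4, c4, 3))
```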

Theorem 14.7 (Csikvári-Frenkel [CsF12]). Let f(G, z) be an isomorphism-invariant monic multiplicative graph polynomial of bounded exponential type, and assume that it has bounded roots. Let G_n be a sequence of bounded-degree graphs that converges in the Benjamini-Schramm sense, and K ⊂ C a compact domain that contains all the roots of f(G_n, z). Let µf_G denote the uniform distribution on the roots of f(G, z). Then the holomorphic moments ∫_C z^k dµf_{G_n}(z) converge for all k ∈ N. Moreover, if we define the f-entropy or free energy at ξ ∈ C \ K by

    h_{f,ξ}(G) := (1/|V(G)|) log |f(G, ξ)| = ∫_C log |ξ − z| dµf_G(z) ,    (14.15) {e.fentint}

then the Taylor series of log |z| shows that the convergence of the holomorphic moments implies that h_{f,ξ}(G_n) converges to a harmonic function locally uniformly on C \ K; i.e., the f-entropy at ξ is local.

As opposed to moments of a compactly supported measure on R, the holomorphic moments do not characterize uniquely a compactly supported measure on C. And, somewhat surprisingly, the chromatic measure itself is in fact not local, as shown by the following exercise:

⊲ Exercise 14.24. {ex.chcycle}
(a) Show that the chromatic polynomial of the path on n vertices is ch(P_n, q) = q(q−1)^{n−1}, while the chromatic polynomial of the cycle on n vertices is ch(C_n, q) = (q−1)^n + (−1)^n (q−1).
(b) Deduce that the chromatic measure of C_n converges weakly to the uniform distribution on the circle of radius 1 around z = 1, while the chromatic measure of P_n converges weakly to the point mass at z = 1.
(c) *** Find a good definition according to which one of the limiting measures is the canonical one.

The independence ratio of a finite graph G(V, E) is the size of the largest independent set (i.e., a subset of the vertices such that no two of them are neighbours) divided by |V|. Non-trivially, it follows from the locality of the matching polynomial root moments that the matching ratio, i.e., the maximal size of a disjoint set of edges divided by the number of vertices, is a local parameter. See also [ElL10]. Surprisingly at first sight, the analogous notion with independent subsets of vertices instead of edges behaves very differently:

Theorem 14.8 (The independence ratio is not local [Bol81]). {t.indepratio} The independence ratio of a balanced bipartite graph (i.e., the two parts have the same size) is at least 1/2, while, for any d ≥ 3, there exists an ε > 0 and a sequence of d-regular graphs with girth tending to infinity (i.e., converging locally to T_d) for which the independence ratio is less than 1/2 − ε. Basically, random d-regular graphs will do. Since uniformly random d-regular balanced bipartite graphs also converge to T_d in the local weak sense, this shows that the independence ratio of d-regular graphs is not local.

What is then the independence ratio of random d-regular graphs? This seems to be intimately related to the question of how dense an invariant independent set can be defined on the d-regular tree T_d as a factor of an i.i.d. process. One direction is clear: given any measurable function of an i.i.d. process on T_d that produces an independent set, we can approximate it by functions depending only on a bounded neighbourhood, and then we can apply the same local functions on any
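Exercise 14.24 (b) can be checked by hand from the closed form: if q = 1 + ζ with ζ^{n−1} = (−1)^{n+1}, then ch(C_n, q) = ζ(ζ^{n−1} + (−1)^n) = 0, so the chromatic roots of C_n fill out the circle of radius 1 around z = 1. A tiny numerical confirmation (illustrative only):

```python
import cmath

def ch_cycle(n, q):
    # Closed form from Exercise 14.24 (a): ch(C_n, q) = (q-1)^n + (-1)^n (q-1).
    return (q - 1) ** n + (-1) ** n * (q - 1)

n = 7
# For odd n, (-1)^(n+1) = 1, so the candidate roots are q = 1 + zeta
# with zeta a (n-1)st root of unity.
roots = [1 + cmath.exp(2j * cmath.pi * j / (n - 1)) for j in range(n - 1)]
for q in roots:
    assert abs(ch_cycle(n, q)) < 1e-9
    assert abs(abs(q - 1) - 1) < 1e-12   # on the circle of radius 1 around 1
print("all", n - 1, "candidate roots of ch(C7, q) lie on the circle around 1")
```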

sequence of finite graphs with girth going to infinity. The ideology is that random d-regular graphs have no global structure, hence all limit processes can be constructed locally (i.e., as factors of an i.i.d. process) on the local limit graph (the regular tree). Several people (Balázs Szegedy, Endre Csóka, maybe David Aldous) have independently arrived at the conjecture that on random d-regular graphs, the optimal density (in fact, all possible limit densities) can be achieved by such a local construction:

Conjecture 14.9 (Balázs Szegedy). The possible values for the densities of independent sets in random d-regular graphs coincide with the possible densities of independent sets as i.i.d. factor processes on the d-regular tree T_d. (This is part of a much more general conjecture on the so-called local-global limits that I plan to discuss in a later version; see [HatLSz12] for now.)

It is an important question what kind of processes can be realized as a factor of an i.i.d. process. See [LyNaz11, HatLSz12, Mes11]. We have mentioned in Subsection 13.4 that Ornstein-Weiss have developed a very good entropy theory for amenable groups. But there is also the following:

Conjecture 14.10 (Abért-Szegedy). {c.AbSzeg} Let G_n be a Benjamini-Schramm-convergent sequence of finite graphs, ω_n an i.i.d. process on G_n, and F(G_n, ω_n) a factor process. (For simplicity, we can assume that there are finitely many possible configurations of F(G_n, ω_n).) Let h(F, G_n) be the entropy of the resulting measure. Then lim_{n→∞} h(F, G_n)/|G_n| exists.

An affirmative answer to Conjecture 14.10 would say that there is a good entropy notion also for i.i.d. factor processes on all sofic groups. Another interesting corollary would be the (mod p) version of the Lück Approximation Theorem 14.4: if the A_n are the adjacency matrices of a convergent sequence of finite graphs G_n, then dim ker_{F_p} A_n / |V(G_n)| converges. Indeed, this is equal to 1 − dim Im_{F_p} A_n / |V(G_n)|, and the normalized dimension of the image, times log p, is the normalized entropy of the uniform measure on the image space. And we can easily get this uniform distribution as a factor of an i.i.d. process: assign i.i.d. uniform random labels to the vertices of G_n from {0, 1, ..., p − 1}, then write on each vertex the (mod p) sum of its neighbouring labels.

Let us turn to a locality question that is quite different from spectral measures and entropies: the critical parameter in Bernoulli percolation.

Conjecture 14.11 (Locality of p_c, O. Schramm). {c.pcloc} If G_n are infinite transitive graphs locally converging to a transitive graph G, with sup_n p_c(G_n) < 1, then p_c(G_n) → p_c(G).

Of course, it would follow from Questions 12.27 or 12.28. See [BenNP11] for partial results.

⊲ Exercise 14.25. Show that the sup_n p_c(G_n) < 1 condition in Conjecture 14.11 is necessary.

One can also ask about the locality of p_c(q) in the FK(p, q) random cluster measures. But there is an even more basic question:
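Here is a minimal concrete example of an independent set that is a factor of an i.i.d. process (an illustration of the notion, not a construction from the literature): put i.i.d. uniform labels on the vertices and select the local maxima. On a large cycle, which locally looks like Z, this radius-1 rule gives density 1/3:

```python
import random

def local_maxima_independent_set(n, rng):
    # Factor of i.i.d.: vertex i is selected iff its uniform label beats
    # both neighbours on the cycle C_n, a radius-1 local rule.
    labels = [rng.random() for _ in range(n)]
    return [i for i in range(n)
            if labels[i] > labels[i - 1] and labels[i] > labels[(i + 1) % n]]

rng = random.Random(1)
n = 100_000
chosen = local_maxima_independent_set(n, rng)
print(len(chosen) / n)  # about 1/3 = P(the middle of 3 i.i.d. labels is largest)
```

Local maxima are never adjacent, so this is indeed an independent set; the interesting question above is how much better than such simple rules a factor of i.i.d. can do on T_d.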

. . with the Schreier graphs of the ﬁrst few levels. and also the q = 2 Ising case is known when G is a regular tree [MonMS12]. 1. In the amenable case.adding2} By representing ﬁnite and inﬁnite binary sequences as vertices and rays in a binary tree. . by the automaton.1) {e.automata} This section shows some natural ways to produce group actions on rooted trees.4 we already encountered the adding machine action of Z: the action of the group on ﬁnite and inﬁnite binary sequences was (0w)a = 1w (1w)a = 0wa .12. The ﬁrst picture of Figure 15. which have played a central role in geometric group theory in the last two decades or so. there 184 Deﬁnition 15.1 shows this action. The action of a group Γ on the b-ary tree Tb (b ≥ 1) is called self-similar if for . E ). . Given any initial state s0 ∈ V . .exotic} We now present some less classical constructions of groups. towards the root. b − 1} instead of binary {d. it is clear that both the free and the wired measures can be achieved by local limits. A key source of interest in this ﬁeld is Grigorchuk’s construction of groups of intermediate volume growth (1984). If Gn converges to a non-amenable transitive G in the local weak sense. the picture should be self-explanatory once one knows that the switches of the subtrees are to be read oﬀ from bottom to top. which come up independently in complex and symbolic dynamical systems and computer science. BarGN03. 1. we clearly get an action by tree automorphisms. the third picture of Figure 15. whose vertices for any inﬁnite binary sequence x1 x2 . q ) random cluster models (especially for the much more accessible q > 1 case) that the limit measure from the Gn ’s is the wired measure on G? This is easy to see for the WUSF (the q = p = 0 case) using Wilson’s algorithm. we output the second label. 15. The second picture is called the “proﬁle” of the action of the generator a. is it true for all FK(p. .1 Self-similar groups of ﬁnite automata {ss. as follows. this will be y1 . 
we follow the arrow whose ﬁrst label is x1 . and any ﬁnite or inﬁnite word w on this alphabet. and then we continue with the target state of the arrow (call it s1 ) and the next letter x2 . . the automaton produces a new sequence y1 y2 . generating an action on the b-ary tree Tb . There is of course a version with labels from {0. 15 Some more exotic groups {s. . Nek05]. and so on.ss} any g ∈ Γ. b − 1}. : from s0 (the states) correspond to some tree automorphisms. . Finally. . (15.Question 14. and may provide or have already provided exciting examples for probability on groups.1 is called the Moore diagram of the automaton generating the group action. This automaton is a directed labeled graph G(V.1. . any letter x ∈ {0. The group generated by the tree automorphisms given by the states V is called the group generated sequences. The main references are [GriNS00. . In Section 13.

2) {e. Consider ϕ : x → 2x. the isomorphism (15. (But. ǫ is switching these two subtrees. (15. an expanding automorphism of the Lie group (R. and we get an action of Γ on the subtree starting at v . h′ ) = (gg ′ . we can write the action (13. then S is called a self-similar generating set.1: The adding machine action of Z: (a) the Schreier graphs on the levels (b) the proﬁle of the generator a’s action (c) the Moore diagram of the automaton is a letter y and h ∈ Γ such that (xw)g = y (wh ).treathA} corresponding to the restriction actions inside the b subtrees at the root and then permuting them. and the order of the multiplication is dictated by having a right action on the tree. where (g. a)ǫ. If there is a ﬁnite such S . Now. and a loop γ : [0. a twofold self-covering of the circle S 1 . for any g ∈ Γ and ﬁnite word v there is a word u of the same length and h ∈ Γ such that (vw)g = u(wh ) for any word w. For a self-similar action by Γ. h)(g ′ . Since ϕ(Z) = 2Z ⊆ Z. Then the group can clearly be generated by an automaton with states S . of Tb is of course self-similar. This h is called the restriction h = g |v . +). (g. In particular.adding} and word xw there is a letter y and t ∈ S such that (xw)s = y (wt ). and ∀s ∈ S {f.) For a general self-similar action by Γ ≤ Aut(Tb ). going around S 1 once. Pick a base point x ∈ S 1 . There is also a nice geometric way of arriving at the adding machine action of Z.0/0 0 1 0 1 a−1 00 01 11 1/1 1/0 id 0/1 a 1/0 000 010 111 Figure 15. Aut(Tb ) is not ﬁnitely generated. and there is the obvious wreath product decomposition Aut(Tb ) ≃ Aut(Tb ) ≀ Symb . we can consider ϕ : R/Z −→ R/Z.11) very concisely as a = (1. ϕ−1 (x) consists of two 185 . then Γ is called a ﬁnite-state self-similar group. of course.2) gives an embedding Γ ֒→ Γ ≀ Symb . h) is the tree-automorphism acting like g on the 0-subtree and like h on the 1-subtree. g ). h)ǫ = ǫ(h. If S ⊆ Γ generates Γ as a semigroup. 
The action of the full automorphism group (15.3) {e. 1] −→ S 1 starting and ending at x. hh′ ) and (g.treathG} Using this embedding.
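The automaton above is tiny to implement; the following sketch (ad hoc code) checks that the state a of the adding machine acts on least-significant-bit-first binary words exactly as +1:

```python
def adding_machine(word):
    # The action (15.1) on words, least significant bit first:
    # (0w)^a = 1w, (1w)^a = 0(w^a).
    if not word:
        return word
    if word[0] == '0':
        return '1' + word[1:]
    return '0' + adding_machine(word[1:])

def word_to_int(word):
    # Interpret an LSB-first binary word as an integer.
    return sum(int(b) << i for i, b in enumerate(word))

# On level 4 of the binary tree, a acts as +1 mod 16, so the Schreier graph
# of this level is the 16-cycle.
for n in range(16):
    w = format(n, '04b')[::-1]  # 4-bit word, least significant bit first
    assert word_to_int(adding_machine(w)) == (n + 1) % 16
print("the state a acts on level 4 as +1 mod 16")
```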

) A similar notion is the limit solenoid SΓ . . . . if in addition. . y−1 . respectively. At this point. Then the resulting action of Γ = IMG(ϕ) every g ∈ Γ there is a k ∈ N such that the restriction g |v is in N for all vertices v ∈ Tb of depth at on Tb is ﬁnite state self-similar. . b − 1}Z that is product topology on the left tail but given a ﬁnite word w = xk xk+1 . ) in ∂ Tb . . we can denote this space by SΓ it will be the image of the topology on {0. i. . x1 . Following these γi ’s we get a permutation on ϕ−1 (x). . from the action itself.. . possibly branched. See [Nek03] or [Nek05] for proofs.e. their appearance in the story is not accidental: it is possible to reconstruct X and ϕ. For a general b-fold π1 (S 1 ) = Z on the entire binary tree. ⊲ Exercise 15. γ1 . . etc. so IMG(ϕ) = π1 (X ). x10 . . Clearly. . and ϕ : X1 −→ X is a locally branch points from X in the case of a branched covering). . the transposition (x0 x1 ) in the present case. . x01 . Now. minimal set N ⊆ Γ giving the contraction property. b−1}N being (y−k . . . and we get an action of representations of the edges in the Schreier graph of the adding machine action. x−1 . . .7. ) ∼ (. . . ) iff there is a ﬁnite K ⊂ Γ ⊳ . call them x0 and x1 . . will be geometric covering. (For a general. y1 . discrete on the right. x−k+1 . y0 ) with the action of Γ on Tb . . expanding partial b-fold self-covering map for some X1 ⊆ X (we typically get X1 by removing the A particularly nice case is when X is a Riemannian manifold. )gk = (y−k . . i ≥ 1 . we should start with one γ for each generator of π1 (X ). . This is called the Iterated Monodromy Group IMG(ϕ) of the covering map. so the actual group of tree-automorphisms that we get will be a factor of π1 (X ). x11 }. . we can consider the tile Tw := . 1. On JΓ we take the topology to be the image of the product topology under the equivalence quotient map. . . this equivalence is very diﬀerent from two rays in ∂ Tb = {0. 
b − 1}−N by the following asymptotic equivalence relation: the sequences (.1. . b-fold get an action of π1 (X ) on ϕ−1 (x). while on SΓ level. x0 ) and (. 1. . . x−k+1 ) on the second the equivalence relation that (. to emphasize the asymmetric topology. . γ00 etc. . like X = S 1 in our case. so where do these continuum objects R and S 1 and Riemannian manifolds come from? Well. it is possible that the resulting action on the b-ary tree is not faithful. . y−1 . . contracting: there is a ﬁnite set N ⊂ Γ such that for least k . . with k ∈ −N. . y0 . taking covering ϕ : X −→ X . . . at least topologically. and then would the preimage set ϕ−2 (x) = ϕ−1 (x0 ) ∪ ϕ−1 (x1 ) = {x00 . . . It is easy to see that the lifts γ0 . getting γ0 and γ1 . .. Give an example where N = Γ. . etc. 186 . . i = 0. x−1 . x0 )gk = {ex. the Reader might feel a bit uneasy: we have been talking about countable groups acting on trees. Given any contracting action by Γ on Tb . . . Using Proposition 2. . as in Section 2. x0 . y−k+1 .2. . . (x−k . Show that. (In particular. there is a unique Show that N is a self-similar generating set of N . moreover. one can deﬁne the limit space JΓ as the quotient of the set of left-inﬁnite sequences {0. . It is called the nucleus of the self-similar action. . we can lift γ starting from either point. for any contracting self-similar action of some Γ on Tb . . . . with x−k on the ﬁrst level. .) We can now iterate this procedure. then the action of π1 (X ) is faithful. .nucleus} in the same Γ-orbit. b − 1}. x0 .points. γi ends at γ1−i . the quotient of {0. Moreover. X1 = X is a compact Riemannian manifold. b − 1}Z by such that ∀k ∈ N ∃gk ∈ K with (x−k . . xk−2 xk−1 w : xk−i ∈ {0. y0 ) are equivalent iff there exists a ﬁnite subset K ⊂ Γ such that for all k ∈ N there is some gk ∈ K with (x−k .
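To make the contraction property concrete, here is a small illustrative sketch, not from the notes: for the binary adding machine (the action of Z = ⟨a⟩ on T₂ by "add 1 with carry"), the section of a^k at a first letter x ∈ {0, 1} is a^⌊(x+k)/2⌋. Iterating sections shows that every power of a eventually restricts into the finite set {a⁻¹, 1, a}, which is the nucleus.

```python
# Sketch: sections of powers of the binary adding machine a.
# a^k acts on a word (least significant tree level first) by binary
# addition of k; at a first letter x it outputs (x + k) mod 2 and
# continues with the section a^((x + k) // 2).

def section(k, x):
    """Exponent j such that a^k restricted at the letter x acts as a^j."""
    return (x + k) // 2

def deep_sections(k, depth):
    """All exponents j with a^j a section of a^k at the given depth."""
    exps = {k}
    for _ in range(depth):
        exps = {section(j, x) for j in exps for x in (0, 1)}
    return exps

# every a^k contracts into the candidate nucleus {a^-1, 1, a}:
for k in range(-50, 51):
    d = 0
    while not deep_sections(k, d) <= {-1, 0, 1}:
        d += 1
    assert deep_sections(k, d + 1) <= {-1, 0, 1}   # and it stays there

# the nucleus itself is a self-similar set, closed under sections:
assert {section(j, x) for j in (-1, 0, 1) for x in (0, 1)} == {-1, 0, 1}
```

Since |⌊(x+k)/2⌋| is roughly |k|/2, the depth needed is about log₂|k|, matching the definition of a contracting action.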

Consider now the shift map s on {0, 1, . . . , b − 1}^Z that moves the origin one step to the left, or, on {0, 1, . . . , b − 1}^{−N}, the map that deletes the last (the 0th) letter. Clearly, s preserves the asymptotic equivalence relation, hence descends to the quotients (where it is b-to-1), and thus we get the dynamical systems (J_Γ, s) and (S_Γ, s).

Given a finite word w = x_k x_{k+1} . . . x₀, with k ∈ −N, we can consider the tile T_w := { . . . x_{k−2} x_{k−1} w : x_{k−i} ∈ {0, 1, . . . , b − 1}, i ≥ 1 }, a subset of J_Γ after the factorization; similarly, given an infinite word w = x_k x_{k+1} . . . in ∂T_b, the tile T_w will be a subset of S_Γ. As we take the starting level k = k(w) → −∞, the tiles are getting smaller and smaller, and hence the Schreier graphs of the action, drawn on the tiles, approximate the structure of J_Γ more and more. Because of the factorization, these tiles are not at all disjoint for different w's with the same starting level: the intersections are governed by the nucleus N, namely T_{wg} ∩ T_{wh} ≠ ∅ iff g⁻¹h ∈ N, so the adjacencies between the tiles of a given level are given by the Schreier graph on that level. Moreover, if the action of Γ is nice enough, then the interiors of the tiles are disjoint. The situation in S_Γ is a bit more complicated: it is a highly disconnected space, so we need to restrict our attention to the leaves L_{O(w)} := ⋃_{g∈Γ} T_{wg} ⊆ S_Γ, corresponding to Γ-orbits O(w) in ∂T_b.

⊲ Exercise 15.2. For the adding machine action of Z on T₂, show that for any w ∈ ∂T₂, the leaf L_{O(w)} is homeomorphic to R.

⊲ Exercise 15.3. For the adding machine action, give a homeomorphism between J_Z and S¹ that interchanges the actions x → 2x on S¹ and s on J_Z.

⊲ Exercise 15.4. Show that the limit set J_Γ for the self-similar action a = (a, 1, 1)(12), b = (1, b, 1)(02), c = (1, 1, c)(01) on the alphabet {0, 1, 2} is homeomorphic to the Sierpiński gasket.

The upshot, proved by Nekrashevych, is that when the contracting action is obtained from a locally expanding map ϕ : X₁ −→ X, then (J_Γ, s) is topologically conjugate to (J(X, ϕ), ϕ), i.e., there is a homeomorphism between the spaces that interchanges the actions. Here J(X, ϕ), the Julia set of ϕ, is the set of accumulation points of ⋃_{n=0}^∞ ϕ^{−n}(x) ⊆ X, easily shown to be independent of x ∈ X. In particular, if X₁ = X is a compact Riemannian manifold, then J(X, ϕ) = X, and recall that IMG(ϕ) = π₁(X) in this case; so, the constructions Γ = IMG(ϕ) and X = J_Γ are true inverses of each other.

Expanding endomorphisms of the real and integer Heisenberg groups H₃(R) and H₃(Z) were used in [NekP09] to produce nice contracting self-similar actions, with the tiles and the shift map s leading to scale-invariant tilings in some Cayley graphs of H₃(Z). The same paper used the self-similar actions of several groups of exponential growth to show that they are scale-invariant: the lamplighter group Z₂ ≀ Z, the solvable Baumslag-Solitar groups BS(1, m), and the affine groups Z^d ⋊ GL(d, Z).
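The homeomorphism J_Z ≅ S¹ asked for in the exercise above can be checked numerically at finite depth. The following sketch (a truncation chosen by me, not from the text) reads a left-infinite word . . . x₋₁x₀ as the binary expansion 0.x₀x₋₁. . . ∈ [0, 1) ≅ S¹: asymptotically equivalent words map to the same point, and deleting the 0th letter becomes the doubling map t → 2t mod 1.

```python
from math import isclose

def odometer(w):
    """Apply the adding machine a to a finite word (first tree level first)."""
    w = w[:]                       # add 1 with carry from the root level
    for i in range(len(w)):
        if w[i] == 0:
            w[i] = 1
            return w
        w[i] = 0
    return w                       # overflow: all digits were 1

def phi(w):
    """Truncated map to S^1 = R/Z: the word ...x_{-1}x_0 -> 0.x_0 x_{-1}..."""
    return sum(x / 2 ** (i + 1) for i, x in enumerate(reversed(w)))

n = 40
wA = [0] * n + [1]     # truncation of (..., 0, 0, 1): the point 1/2
wB = [1] * n + [0]     # truncation of (..., 1, 1, 0): the dyadic twin of wA
assert odometer(wB) == wA                  # K = {a} witnesses the equivalence
assert isclose(phi(wA), phi(wB), abs_tol=2 ** -n)   # same point on the circle

# the shift s (deleting the 0th letter) is the doubling map on [0, 1):
w = [1, 0, 1, 1, 0, 1]
assert isclose(phi(w[:-1]), (2 * phi(w)) % 1)
```

This is exactly the classical identification 0.0111. . . = 0.1000. . . of binary expansions, which is why the quotient of the Cantor set becomes a circle.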
The lamplighter group G = Z₂ ≀ Z is a famous unexpected example, whose self-similarity was first noticed and proved by Grigorchuk and Żuk in [GriŻ01]. When a group turns out to have a finite state self-similar action, a huge box of great tools opens up — but it is far from obvious if a given group has such an action. The original argument was somewhat involved, but there is a much simpler proof, which we will present below.

This group G = ⊕_Z Z₂ ⋊ Z = Z₂ ≀ Z is the standard lamplighter group: for an element (m, f), one can think of m ∈ Z as the position of the lamplighter, while f : Z −→ Z₂ is the configuration of the lamps; we can represent f by the finite set supp f ⊂ Z. The usual wreath product generators are s and R, representing “switch” and “Right”; we will also use L = R⁻¹. Instead, consider the following new generators of the lamplighter group: a = Rs, b := R. Note that s = b⁻¹a = a⁻¹b.

Let H be the additive subgroup of the group Z₂[[t]] of formal power series over Z₂ consisting of finite Laurent polynomials of (1 + t), and consider the injective endomorphism ψ(F(t)) := tF(t) for F(t) ∈ Z₂[[t]]. Since tF(t) = (1 + t)F(t) − F(t), we have that ψ(H) ⊆ H. Observe that ψ(H) is exactly the subgroup of H of power series divisible by t, with index [H : H₁] = 2, as (1 + t)^k − 1 ∈ ψ(H) for any k ∈ Z. We then let H_n := ψ∘ⁿ(H), a nested sequence of finite index isomorphic subgroups.

Consider now the right coset tree T corresponding to the subgroup sequence (H_n)_{n≥0}, as defined before. We have T = T₂, and the boundary ∂T is a topological group: the profinite additive group Z₂[[t]], via the identification

Φ : x₁x₂ . . . → Σ_{i≥1} xᵢ t^{i−1},   (15.4) {e.Phi}

where x = x₁x₂ . . . is the shorthand notation for the ray H = H₀x₀ ⊃ H₁x₁ ⊃ H₂x₂ ⊃ . . . in T.

Let A be the cyclic group Z acting on H by multiplication by (1 + t). Thus the semidirect product G = A ⋉ H is the group of the following transformations of Z₂[[t]]:

F(t) → (1 + t)^m F(t) + Σ_{k∈Z} f(k)(1 + t)^k,   (15.5) {e.trafo}

where m ∈ Z and f : Z −→ Z₂ is any function with finitely many non-zero values. In terms of the representation (15.5), the action of s is F(t) → F(t) + 1, while the action of R is F(t) → (1 + t)F(t).

The action of G on the infinite binary tree T can now be described by the combination of (15.4) and (15.5), and it turns out to be a finite state self-similar action. Namely, the action of the generators a and b on the binary tree T can be easily checked to be

(0w)^a = 1w^b,  (1w)^a = 0w^a,  (0w)^b = 0w^b,  (1w)^b = 1w^a,   (15.6) {e.GZ}

for any finite or infinite {0, 1} word w. Hence {a, b} is a finite self-similar generating set. The self-similar action of G was used in [GriŻ01] to show that the spectrum of the simple random walk on the Cayley graph generated by the self-similar generators a and b is discrete — see Section 14.2 for a bit more on this. The scale-invariance of G proved in [NekP09] is closely related to the way the spectrum was computed, even though the way we found it followed an orthogonal direction of thought. One can also deduce, for example, that the Diestel-Leader graph DL(2, 2) is the Cayley graph of G with the generators R, Rs on one hand, or R, sR on the other.
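The automaton rules can be verified directly against the power series action; the sketch below (my own illustration) uses the convention that a = Rs acts by F → (1+t)F + 1 (first R, then s) and b = R by F → (1+t)F, with a word x₁x₂. . . corresponding to F = Σ xᵢ t^{i−1} via Φ.

```python
# Check the Grigorchuk-Zuk self-similar action of the lamplighter group:
# the automaton (0w)^a = 1w^b, (1w)^a = 0w^a, (0w)^b = 0w^b, (1w)^b = 1w^a
# versus the action on truncated power series over Z_2.
from itertools import product

def automaton(state, word):
    """Apply the two-state automaton {a, b} to a binary word."""
    out = []
    for x in word:
        out.append(1 - x if state == 'a' else x)   # a flips, b keeps
        state = 'b' if x == 0 else 'a'             # same transitions
    return out

def poly_action(gen, coeffs):
    """F -> (1+t)F (plus 1 for gen 'a') over Z_2, truncated to len(coeffs)."""
    out = [coeffs[i] ^ (coeffs[i - 1] if i > 0 else 0)
           for i in range(len(coeffs))]
    if gen == 'a':
        out[0] ^= 1
    return out

for n in range(1, 10):
    for word in product([0, 1], repeat=n):
        w = list(word)
        for g in ('a', 'b'):
            assert automaton(g, w) == poly_action(g, w)
        # s = b^(-1) a is F -> F + 1: it just flips the root letter
        wa, wb = automaton('a', w), automaton('b', w)
        assert wa[0] == 1 - wb[0] and wa[1:] == wb[1:]
```

Note how the finite-state property is visible in the code: the sections of a and b are again in {a, b}, exactly as in (15.6).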

Another usual notation for this self-similar action, see e.g. [GriŻ01], is

a = (b, a)ǫ,  b = (b, a),   (15.7) {e.LLselfsim}

where ǫ denotes swapping the two subtrees of the root. We note that in the literature there are a few slightly different versions of (15.7) to describe the lamplighter group. This is partly due to the fact that interchanging the generators a and b induces an automorphism ι of G.

Most of our course has been about how algebraic and geometric properties of a group influence the behaviour of SRW on it. There are also results in the other direction, probabilistic ideas giving algebraic results — though Kleiner's proof of Gromov's theorem was already somewhat in the other direction. The first example of SRW applied to a group theory problem was by Bartholdi and Virág [BarV05]: they showed that the so-called Basilica group is amenable, by finding a finite generating system on it for which they were able to compute that the speed is zero. At that point, this was the furthest known example of an amenable group from abelian groups: namely, it cannot be built from groups of subexponential growth via group extensions.

We will briefly discuss two famous examples of groups generated by finite automata: Grigorchuk's group of intermediate growth and the Basilica group.

Grigorchuk's first group G is defined by the following self-similar action on the binary tree:

a = ǫ,  b = (a, c),  c = (a, d),  d = (1, b).   (15.8) {e.chuk}

If this looks a bit ad hoc, writing down the profiles of the generators will make it clearer; see Figure 15.2.

Figure 15.2: The profiles of the generators b, c, d in Grigorchuk's group. {f.chuk}

For Grigorchuk's group (15.8), we have the following easy exercise:

⊲ Exercise 15.5. Check that the first level stabilizer, an index 2 subgroup of G, embeds into G × G via the restrictions, and that the stabilizer G_v of any vertex v of the binary tree is isomorphic to the original group.

Moreover, using the third level stabilizers, one can show that there is an expanding virtual isomorphism from the direct product of eight copies of G to itself; hence G has intermediate growth. See [GriP08] for more details.
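The wreath recursion of Grigorchuk's group is easy to experiment with. The sketch below (my own illustration, not from the notes) implements the action on finite binary words and checks that a, b, c, d are involutions and that {1, b, c, d} forms a Klein four-group, with bc acting as d and cd as b.

```python
# Grigorchuk's first group on the binary tree:
# a swaps the two subtrees; b = (a, c), c = (a, d), d = (1, b).
from itertools import product

SECTIONS = {'b': ('a', 'c'), 'c': ('a', 'd'), 'd': ('1', 'b')}

def act(g, w):
    """Action of a generator (or the identity '1') on a binary word w."""
    if not w or g == '1':
        return w
    x, rest = w[0], w[1:]
    if g == 'a':
        return (1 - x,) + rest          # a flips the first letter
    return (x,) + act(SECTIONS[g][x], rest)

def act_word(gs, w):
    """Right action of a product of generators: apply them in order."""
    for g in gs:
        w = act(g, w)
    return w

for n in range(1, 12):
    for w in map(tuple, product([0, 1], repeat=n)):
        for g in 'abcd':
            assert act_word([g, g], w) == w          # involutions
        assert act_word(['b', 'c'], w) == act('d', w)   # bc = d
        assert act_word(['c', 'd'], w) == act('b', w)   # cd = b
```

Of course, such finite-depth checks only verify the relations on tree levels up to 11; the point is that the recursion makes the full verification an easy induction.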

The Basilica group is again generated by a finite automaton:

a = (1, b),  b = (1, a)ǫ.   (15.9) {e.Basil}

Continuations of this Basilica work include [BarKN10] and [AmAV09].

Definition 15.1. The activity growth Act_Γ(n) of a finite state self-similar group Γ is the number of length n words w such that the section g|_w is not the identity for some of the self-similar generators g.

⊲ Exercise 15.6. Show that the activity growth Act_Γ(n) is either polynomial or exponential.

⊲ Exercise 15.7. Show that any finite state self-similar group Γ with bounded activity growth is contracting (as defined just before Exercise 15.1).

Sidki [Sid00] showed that a polynomial activity self-similar group cannot contain a free subgroup F₂.

Question 15.1. Are polynomial activity finite state self-similar groups always amenable? [BarKN10] showed this for bounded activity groups, while [AmAV09] for linear activity groups. For at most quadratic activity, the Poisson boundary is conjectured to be trivial (proved for at most linear growth), which implies amenability, but this is expected to fail for larger activity. This is quite analogous to the case of the lamplighter groups Z₂ ≀ Z^d. More generally, it is not known if all contracting groups are amenable.

15.2 Thompson's group F {ss.ThompF}

A very famous example of a group whose amenability is not known is the following. Consider the set F of orientation preserving piecewise linear homeomorphisms of [0, 1] to itself whose graphs satisfy the following conditions: {e.Thompson}

a) All slopes are of the form 2^a, a ∈ Z.
b) All break points have first coordinate of the form k/2^n, k, n ∈ N.

Clearly, F is a group with composition of maps as a multiplication operation; this group is called Thompson's group F. See [CanFP96] for some background, and [Cal09] for more recent stories.

Question 15.2. Determine whether Thompson's group F is amenable or not.

Kaimanovich has proved that SRW has positive speed on Thompson's group F for some generating set, hence this probabilistic direction of attack is not available here.

15.3 Constructing monsters using hyperbolicity {ss.monsters}

16 Quasi-isometric rigidity and embeddings {s.qirigid}

It is a huge project posed by Gromov (1981) to classify groups up to quasi-isometries. One step towards such a classification would be to describe all self-quasi-isometries of a given group.
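As a small illustration of conditions a) and b), one can play with the standard generators of F, usually denoted A and B; their explicit piecewise linear formulas below are a standard choice not introduced in the text above, so treat them as an assumption of this sketch. Exact rational arithmetic confirms that a composition again has power-of-2 slopes, with break points only at dyadic rationals.

```python
# Two piecewise linear maps satisfying Thompson's conditions a) and b),
# and a check that their composition does too.
from fractions import Fraction as Fr

def A(t):            # slopes 1/2, 1, 2; break points 1/2, 3/4
    if t <= Fr(1, 2):
        return t / 2
    if t <= Fr(3, 4):
        return t - Fr(1, 4)
    return 2 * t - 1

def B(t):            # identity on [0, 1/2], a shrunken copy of A above it
    if t <= Fr(1, 2):
        return t
    return Fr(1, 2) + A(2 * t - 1) / 2

def slopes(f, denom=64):
    """Slopes of f between consecutive points of the dyadic grid 1/denom."""
    pts = [Fr(k, denom) for k in range(denom + 1)]
    return {(f(q) - f(p)) / (q - p) for p, q in zip(pts, pts[1:])}

comp = lambda t: A(B(t))          # the composition A o B
for s in slopes(comp):
    num = s.numerator * s.denominator   # s = 2^a  iff  this is a power of 2
    assert num & (num - 1) == 0
```

The break points of A∘B sit at 1/2, 3/4, 7/8 — all dyadic — so the 1/64 grid is fine enough that every difference quotient above is an actual slope.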

Certain groups (e.g., fundamental groups of compact hyperbolic manifolds of dimension n ≥ 3) are quite rigid: all quasi-isometries come in some sense from group automorphisms, while, for instance, the quasi-isometry group of Z is huge and not known. A similar, more classical, result is:

Theorem 16.1 (Mostow rigidity 1968). {t.Mostow} If two complete finite volume hyperbolic manifolds M, N of dimension n ≥ 3 have π₁(M) ≃ π₁(N), then they are isometric; moreover, the group isomorphism is induced by an isometry of H^n.

Let us give here the rough strategy of the proof. The group isomorphism induces a quasi-isometry of H^n, which then induces a quasi-conformal map on the ideal boundary S^{n−1}. This turns out (because of what?) to be a Möbius map, i.e., it comes from an isometry of H^n.

An application of the Mostow rigidity theorem that is interesting from a probabilistic point of view was by Thurston, who proved that any finite triangulated planar graph has an essentially unique circle packing representation. There is also a simple elementary proof, due to Oded Schramm, using a maximal principle argument [Wik10a]. See [Pel07] for details.

Here is a completely probabilistic problem of the same flavor:

Question 16.2 (Balázs Szegedy). Take two independent Poisson point processes on R. Are they quasi-isometric to each other a.s.? This is motivated by the probably even harder question of Miklós Abért (2003): are two independent infinite clusters of Ber(p) percolation on the same transitive graph quasi-isometric almost surely? This is an interesting question from the percolation point of view: since we expect most properties to be quasi-isometry invariant, an affirmative answer would mean that percolation theory is somehow on the wrong track.

Gromov conjectured that if two groups are quasi-isometric to each other, then they are also bi-Lipschitz equivalent. However, the conjecture turned out to be false for solvable groups [Dym10]. But it is very much open for nilpotent groups; so, a favourite question of mine:

Question 16.3. Are quasi-isometric nilpotent groups also bi-Lipschitz equivalent? I think the answer is yes; Bruce Kleiner thinks it's no. It is known to be yes if one of the groups is Z^d, see [Sha04].

On the other end, for non-amenable groups the answer is positive. The following proof of a special case I learnt from Gábor Elek:

⊲ Exercise 16.1.* Using wobbling paradoxical decompositions, show that if two non-amenable groups are quasi-isometric to each other, then they are also bi-Lipschitz equivalent, i.e., there is a bijective quasi-isometry.

Transitive graphs of polynomial growth are always quasi-isometric to a nilpotent group, but for a long while it was not known if all transitive graphs are quasi-isometric to some Cayley graph; note that a key tool, the Mass Transport Principle, works only for unimodular transitive graphs (including Cayley graphs).

Fortunately, Eskin-Fisher-Whyte proved, as a byproduct of their work [EsFW07] on the quasi-isometric rigidity of the lamplighter groups F ≀ Z, where F is a finite Abelian group, that the non-unimodular Diestel-Leader graphs DL(k, ℓ) with k ≠ ℓ (see [Woe05] for their definition) are counterexamples: transitive graphs that are not quasi-isometric to any Cayley graph. So, the following remains:

Question 16.4. Is every unimodular transitive graph quasi-isometric to a Cayley graph?

The Eskin-Fisher-Whyte proof introduces something called “coarse metric differentiation”, a technique similar to the one used by Cheeger and Kleiner to prove that the Heisenberg group does not have a Lipschitz embedding into L¹, see [ChKN09]. In general, it is a huge subject what finite and infinite metric spaces embed into what L^p space with how much metric distortion. One motivation is from theoretical computer science: analyzing large data sets (equipped with a natural metric, like the number of disagreements in two DNA sequences) is much easier if the data set is a subset of some nice space; see [NaoP08]. There are also a lot of connections between random walks and embeddings: we have used nice harmonic embeddings into L² to gain algebraic information (in Kleiner's proof of Gromov's theorem) and to analyze random walks (in the Ershler-Lee-Peres results). The target case of L² is easier; L¹ is more mysterious.

⊲ Exercise 16.2. Show that any finite subset of L² embeds isometrically into L¹.

It is conjectured, by Lee and Raghavendra, that a universal Lipschitz constant suffices for L¹-embeddings of all planar graphs; there are finite planar graphs needing at least a Lipschitz constant 2 − ǫ [LeeRa10].

References

[AbH12] M. Abért and T. Hubai. Benjamini-Schramm convergence and the distribution of chromatic roots for sparse graphs. Preprint, arXiv:1201.3861 [math.CO].

[AbN07] M. Abért and N. Nikolov. Rank gradient, cost of groups and the rank versus Heegaard genus problem. J. Eur. Math. Soc., to appear. [arXiv:math.GR/0701361]

[AiBNW99] M. Aizenman, A. Burchard, C. Newman and D. Wilson. Scaling limits for minimal and random spanning trees in two dimensions. Random Structures Algorithms 15 (1999), 319–367. [arXiv:math.PR/9809145]

[AiW06] M. Aizenman and S. Warzel. The canopy graph and level statistics for random operators on trees. Math. Phys. Anal. Geom. 9 (2006), 291–333. [arXiv:math-ph/0607021]

[AjKSz82] M. Ajtai, J. Komlós and E. Szemerédi. Largest random component of a k-cube. Combinatorica 2 (1982), 1–7.

[AjKSz87] M. Ajtai, J. Komlós and E. Szemerédi. Deterministic simulation in LOGSPACE. In Proc. of the 19th Ann. ACM Symp. on Theory of Computing, pp. 132–140, 1987.
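For the exercise on embedding L² into L¹, the standard argument uses that E|⟨g, v⟩| = √(2/π)·‖v‖₂ for a standard Gaussian vector g, so averaging many independent one-dimensional projections gives an (approximate, finite-sample) isometric embedding of a finite point set into ℓ¹. A Monte Carlo sketch, with the point set, the dimensions and the tolerance all being my choices:

```python
# Random-projection embedding of a finite subset of l^2 into l^1:
# x -> (<g_i, x>)_{i<N} / (N * sqrt(2/pi)) nearly preserves distances.
import math, random

random.seed(0)
d, N = 5, 40000
pts = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(6)]
G = [[random.gauss(0, 1) for _ in range(d)] for _ in range(N)]
scale = N * math.sqrt(2 / math.pi)

def embed(x):
    return [sum(gi * xi for gi, xi in zip(g, x)) / scale for g in G]

def l2(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

images = [embed(x) for x in pts]
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        ratio = l1(images[i], images[j]) / l2(pts[i], pts[j])
        assert abs(ratio - 1) < 0.05      # Monte Carlo error ~ 1/sqrt(N)
```

Letting N → ∞ (or arguing directly with the Gaussian measure as the index set) turns this approximation into the exact isometric embedding asked for.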

[Ald97] D. Aldous. Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25 (1997), 812–854. http://www.stat.berkeley.edu/~aldous/Papers/me73.ps.Z

[AldL07] D. Aldous and R. Lyons. Processes on unimodular random networks. Electron. J. Probab. 12 (2007), 1454–1508. http://128.208.128.142/~ejpecp/viewarticle.php?id=1754

[AldS04] D. Aldous and J. M. Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on Discrete Structures, vol. 110 of Encyclopaedia Math. Sci., pages 1–72. Springer, Berlin, 2004. http://www.stat.berkeley.edu/~aldous/Papers/me101.pdf

[Alo86] N. Alon. Eigenvalues and expanders. Combinatorica 6 (1986), 83–96.

[AloBS04] N. Alon, I. Benjamini and A. Stacey. Percolation on finite graphs and isoperimetric inequalities. Ann. Probab. 32 (2004), 1727–1745.

[AloM85] N. Alon and V. Milman. λ₁, isoperimetric inequalities for graphs, and superconcentrators. J. Combin. Theory Ser. B 38 (1985), 73–88.

[AloS00] N. Alon and J. Spencer. The Probabilistic Method. 2nd edition. John Wiley, New York, 2000.

[AmAV09] G. Amir, O. Angel and B. Virág. Amenability of linear-activity automaton groups. Preprint, arXiv:0905.2007 [math.GR].

[Ang03] O. Angel. Growth and percolation on the uniform infinite planar triangulation. Geom. Funct. Anal. 13 (2003), 935–974. [arXiv:math.PR/0208123]

[AngB07] O. Angel and I. Benjamini. A phase transition for the metric distortion of percolation on the hypercube. Combinatorica 27 (2007), 645–658. [arXiv:math.PR/0306355]

[AngSz] O. Angel and B. Szegedy. On the recurrence of the weak limits of discrete structures. In preparation.

[AntP96] P. Antal and A. Pisztora. On the chemical distance in supercritical Bernoulli percolation. Ann. Probab. 24 (1996), 1036–1048.

[Arr83] R. Arratia. Site recurrence for annihilating random walks on Z^d. Ann. Probab. 11 (1983), 706–713.

[ArzBLRSV05] G. Arzhantseva, J. Burillo, M. Lustig, L. Reeves, H. Short and E. Ventura. Uniform non-amenability. Adv. Math. 197 (2005), 499–522.

[Aus09] T. Austin. Rational group ring elements with kernels having irrational dimension. Preprint, arXiv:0909.2360 [math.GR].

[BabB99] E. Babson and I. Benjamini. Cut sets and normed cohomology with applications to percolation. Proc. Amer. Math. Soc. 127 (1999), 589–597.

[BabSz92] L. Babai and M. Szegedy. Local expansion of symmetrical graphs. Combin. Probab. & Computing 1 (1992), 1–11.

[BaliBo] P. Balister and B. Bollobás. Projections, entropy and sumsets. Preprint.

[BalBDCM10] J. Balogh, B. Bollobás, H. Duminil-Copin and R. Morris. The sharp threshold for bootstrap percolation in all dimensions. Trans. Amer. Math. Soc., to appear. arXiv:1010.3326 [math.PR]

[BalBM10] J. Balogh, B. Bollobás and R. Morris. Bootstrap percolation in high dimensions. Combin. Probab. & Computing 19 (2010), 643–692. arXiv:0907.3097 [math.PR]

[BalP07] J. Balogh and B. Pittel. Bootstrap percolation on random regular graphs. Random Struc. & Alg. 30 (2007), 257–286.

[BalPP06] J. Balogh, Y. Peres and G. Pete. Bootstrap percolation on infinite trees and non-amenable groups. Combin. Probab. & Computing 15 (2006), 715–730. [arXiv:math.PR/0311125]

[BarE11] L. Bartholdi and A. Erschler. Poisson-Furstenberg boundary and growth of groups. Preprint, arXiv:1107.5499 [math.GR].

[BarGN03] L. Bartholdi, R. Grigorchuk and V. Nekrashevych. From fractal groups to fractal sets. In: Fractals in Graz (P. Grabner and W. Woess, eds.), Trends in Mathematics, pp. 25–118. Birkhäuser Verlag, Basel, 2003. [arXiv:math.GR/0202001v4]

[BarKN10] L. Bartholdi, V. Kaimanovich and V. Nekrashevych. On amenability of automata groups. Duke Math. J. 154 (2010), 575–598. arXiv:0802.2837 [math.GR]

[BarV05] L. Bartholdi and B. Virág. Amenability via random walks. Duke Math. J. 130 (2005), 39–56. [arXiv:math.GR/0305262]

[BefDC10] V. Beffara and H. Duminil-Copin. The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1. Probab. Theory Related Fields, to appear. arXiv:1006.5073 [math.PR]

[BekdHV08] B. Bekka, P. de la Harpe and A. Valette. Kazhdan's property (T). Cambridge University Press, 2008. http://perso.univ-rennes1.fr/bachir.bekka/KazhdanTotal.pdf

[BekV97] M. Bekka and A. Valette. Group cohomology, harmonic functions and the first L²-Betti number. Potential Anal. 6 (1997), 313–326.

[Bel03] I. Belegradek. On co-Hopfian nilpotent groups. Bull. London Math. Soc. 35 (2003), 805–811.

[Ben91] I. Benjamini. Instability of the Liouville property for quasi-isometric graphs and manifolds of polynomial volume growth. J. Theoret. Probab. 4 (1991), 631–637.

[Ben08] I. Benjamini. Discussion at an American Institute of Mathematics workshop, Palo Alto, May 2008.

[BenC10] I. Benjamini and N. Curien. Ergodic theory on stationary random graphs. Preprint, arXiv:1011.4616 [math.PR].

[BenK10] I. Benjamini and G. Kozma. Nonamenable Liouville graphs. Preprint, arXiv:1010.3365 [math.MG].

[BenKPS04] I. Benjamini, H. Kesten, Y. Peres and O. Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, . . . Ann. of Math. (2) 160 (2004), 465–491.

[BenLS99] I. Benjamini, R. Lyons and O. Schramm. Percolation perturbations in potential theory and random walks. In: Random walks and discrete potential theory (Cortona, 1997), Sympos. Math. XXXIX (M. Picardello and W. Woess, eds.), pp. 56–84. Cambridge Univ. Press, Cambridge, 1999. [arXiv:math.PR/9804010]

[BLPS99a] I. Benjamini, R. Lyons, Y. Peres and O. Schramm. Group-invariant percolation on graphs. Geom. Funct. Anal. 9 (1999), 29–66.

[BLPS99b] I. Benjamini, R. Lyons, Y. Peres and O. Schramm. Critical percolation on any nonamenable group has no infinite clusters. Ann. Probab. 27 (1999), 1347–1356.

[BLPS01] I. Benjamini, R. Lyons, Y. Peres and O. Schramm. Uniform spanning forests. Ann. Probab. 29 (2001), 1–65. [arXiv:math.PR/0011019]

[BenNP11] I. Benjamini, A. Nachmias and Y. Peres. Is the critical percolation probability local? Probab. Theory Related Fields 149 (2011), 261–269. arXiv:0901.4616 [math.PR]

[BenS96a] I. Benjamini and O. Schramm. Percolation beyond Z^d, many questions and a few answers. Electron. Commun. Probab. 1 (1996), 71–82.

[BenS96b] I. Benjamini and O. Schramm. Harmonic functions on planar and almost planar graphs and manifolds, via circle packings. Invent. Math. 126 (1996), 565–587.

[BenS96c] I. Benjamini and O. Schramm. Random walks and harmonic functions on infinite planar graphs using square tilings. Ann. Probab. 24 (1996), 1219–1238.

[BenS01] I. Benjamini and O. Schramm. Recurrence of distributional limits of finite planar graphs. Electron. J. Probab. 6 (2001), no. 23, 13 pp. http://www.math.washington.edu/~ejpecp/ECP/viewarticle.php?id=1561&layout=abstract

[BerKMP05] N. Berger, C. Kenyon, E. Mossel and Y. Peres. Glauber dynamics on trees and hyperbolic graphs. Probab. Theory Related Fields 131 (2005), 311–340. [arXiv:math.PR/0308284]

[Bergm68] G. Bergman. On groups acting on locally finite graphs. Ann. of Math. 88 (1968), 335–340.

[BhES09] S. Bhamidi, S. N. Evans and A. Sen. Spectra of large random trees. J. Theoret. Probab., to appear. arXiv:0903.3589 [math.PR]

[Bil86] P. Billingsley. Probability and measure. Second edition. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, New York, 1986.

[BobT06] S. G. Bobkov and P. Tetali. Modified logarithmic Sobolev inequalities in discrete settings. J. Theoret. Probab. 19 (2006), 289–336. http://people.math.gatech.edu/~tetali/PUBLIS/BT_ENT.pdf

[Bol81] B. Bollobás. The independence ratio of regular graphs. Proc. Amer. Math. Soc. 83 (1981), 433–436. http://www.jstor.org/stable/2043545

[Bol01] B. Bollobás. Random graphs. Second edition. Cambridge University Press, Cambridge, 2001.

[BolL91] B. Bollobás and I. Leader. Edge-isoperimetric inequalities in the grid. Combinatorica 11 (1991), 299–314.

[BolT87] B. Bollobás and A. Thomason. Threshold functions. Combinatorica 7 (1987), 35–38.

[BorBL09] J. Borcea, P. Brändén and T. Liggett. Negative dependence and the geometry of polynomials. J. Amer. Math. Soc. 22 (2009), 521–567. arXiv:0707.2340 [math.PR]

[BorChKL13] C. Borgs, J. Chayes, J. Kahn and L. Lovász. Left and right convergence of graphs with bounded degree. Random Structures & Algorithms 42 (2013), 1–28. arXiv:1002.0115 [math.CO]

[BouKKKL92] J. Bourgain, J. Kahn, G. Kalai, Y. Katznelson and N. Linial. The influence of variables in product spaces. Israel J. Math. 77 (1992), 55–64.

[BurgM00] M. Burger and S. Mozes. Lattices in products of trees. Publ. Math. I.H.E.S. 92 (2000), 151–194.

[BurtK89] R. Burton and M. Keane. Density and uniqueness in percolation. Comm. Math. Phys. 121 (1989), 501–505.

[BurtP93] R. Burton and R. Pemantle. Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances. Ann. Probab. 21 (1993), 1329–1371. [arXiv:math.PR/0404048]

[Cal09] D. Calegari. Amenability of Thompson's group F? Geometry and the imagination, weblog, 2009. http://lamington.wordpress.com/2009/07/06/amenability-of-thompsons-group-f/

[CanFP96] J. W. Cannon, W. J. Floyd and W. R. Parry. Introductory notes on Richard Thompson's groups. Enseign. Math. 42 (1996), 215–256.

[CapM06] P. Caputo and F. Martinelli. Phase ordering after a deep quench: the stochastic Ising and hard core gas models on a tree. Probab. Theory Relat. Fields 136 (2006), 37–80. [arXiv:math.PR/0412450]

[Car85] T. K. Carne. A transmutation formula for Markov chains. Bull. Sci. Math. 109 (1985), 399–405.

[Che70] J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In: Problems in Analysis (A symposium in honor of Salomon Bochner, Princeton University, 1–3 April 1969; R. C. Gunning, ed.), pp. 195–199. Princeton Univ. Press, Princeton, NJ, 1970.

[ChKN09] J. Cheeger, B. Kleiner and A. Naor. Compression bounds for Lipschitz maps from the Heisenberg group to L₁. Preprint, arXiv:0910.2026 [math.MG].

[ChPP04] D. Chen and Y. Peres, with an appendix by G. Pete. Anchored expansion, percolation and speed. Ann. Probab. 32 (2004), 2978–2995. [arXiv:math.PR/0303321]

[ColM97] T. Colding and W. Minicozzi II. Harmonic functions on manifolds. Ann. of Math. (2) 146 (1997), 725–747.

[Cou00] T. Coulhon. Random walks and geometry on infinite graphs. In: Lecture notes on analysis on metric spaces (Trento, 1999; L. Ambrosio and F. Serra Cassano, eds.), pp. 5–30. Scuola Normale Superiore di Pisa, 2000. http://www.u-cergy.fr/rech/pages/coulhon/trento.ps

[CouGP01] T. Coulhon, A. Grigor'yan and C. Pittet. A geometric approach to on-diagonal heat kernel lower bounds on groups. Ann. Inst. Fourier (Grenoble) 51 (2001), 1763–1827.

[CouSC93] T. Coulhon and L. Saloff-Coste. Isopérimétrie pour les groupes et les variétés. Rev. Mat. Iberoamericana 9 (1993), 293–314.

[CovT06] T. Cover and J. Thomas. Elements of information theory. Second edition. John Wiley and Sons, New York, 2006.

[CsF12] P. Csikvári and P. Frenkel. Benjamini–Schramm continuity of root moments of graph polynomials. Preprint, arXiv:1204.0463 [math.CO].

[Dei07] P. Deift. Universality for mathematical and physical systems. In: International Congress of Mathematicians (Madrid, 2006), Vol. I, pp. 125–152. Eur. Math. Soc., Zürich, 2007. http://www.icm2006.org/proceedings/Vol_I/11.pdf

[Del99] T. Delmotte. Parabolic Harnack inequality and estimates of Markov chains on graphs. Revista Matemática Iberoamericana 15 (1999), 181–232.

[Der76] Y. Derriennic. Lois “zéro ou deux” pour les processus de Markov. Applications aux marches aléatoires. Ann. Inst. H. Poincaré Sect. B (N.S.) 12 (1976), 111–129.

[DiaF90] P. Diaconis and J. Fill. Strong stationary times via a new form of duality. Ann. Probab. 18 (1990), 1483–1522.

[DicS02] W. Dicks and T. Schick. The spectral measure of certain elements of the complex group ring of a wreath product. Geom. Dedicata 93 (2002), 121–137.

[Die00] R. Diestel. Graph Theory. 2nd Edition. Graduate Texts in Mathematics 173. Springer, New York, 2000.

[DieL01] R. Diestel and I. Leader. A conjecture concerning a limit of non-Cayley graphs. J. Algebraic Combin. 14 (2001), 17–25.

[Dod84] J. Dodziuk. Difference equations, isoperimetric inequality and transience of certain random walks. Trans. Amer. Math. Soc. 284 (1984), 787–794.

[Doo59] J. L. Doob. Discrete potential theory and boundaries. J. Math. Mech. 8 (1959), 433–458; erratum 993.

[Doy88] P. Doyle. Electric currents in infinite networks (preliminary version). Unpublished manuscript.

[DrK09] C. Druţu and M. Kapovich. Lectures on geometric group theory. http://www.math.ucdavis.edu/~kapovich/EPR/ggt.pdf

[DumCS11] H. Duminil-Copin and S. Smirnov. Conformal invariance of lattice models. Lecture notes for the Clay Mathematical Institute Summer School in Buzios, 2010. arXiv:1109.1549 [math.PR]

[Dur96] R. Durrett. Probability: theory and examples. Second edition. Duxbury Press, 1996.

[Dym10] T. Dymarz. Bilipschitz equivalence is not equivalent to quasi-isometric equivalence for finitely generated groups. Duke Math. J. 154 (2010), 509–526. arXiv:0904.3764 [math.GR]

[EdS88] R. G. Edwards and A. D. Sokal. Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm. Phys. Rev. D (3) 38 (1988), 2009–2012.

[Ele12] G. Elek. Full groups and soficity. Preprint, arXiv:1211.0621 [math.GR].

[ElL10] G. Elek and G. Lippner. Borel oracles. An analytical approach to constant-time algorithms. Proc. Amer. Math. Soc. 138 (2010), 2939–2947. arXiv:0907.1805 [math.CO]

[ElSz06] G. Elek and E. Szabó. On sofic groups. J. of Group Theory 9 (2006), 161–171. [arXiv:math.GR/0305352]

[ElTS05] G. Elek and V. T. Sós. Paradoxical decompositions and growth conditions. Combin. Probab. & Computing 14 (2005), 81–105.

[ErdR60] P. Erdős and A. Rényi. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17–61.

[Ers04a] A. Erschler. Boundary behavior for groups of subexponential growth. Ann. of Math. (2) 160 (2004), 1183–1210.

[Ers04b] A. Erschler. Liouville property for groups and manifolds. Invent. Math. 155 (2004), 55–80.

[EsFW07] A. Eskin, D. Fisher and K. Whyte. Quasi-isometries and rigidity of solvable groups. Pure and Applied Mathematics Quarterly 3 (2007), 927–947. [arXiv:math.GR/0511647]

[Eva98] L. C. Evans. Partial differential equations. Graduate Studies in Mathematics 19. American Mathematical Society, Providence, RI, 1998.

[Føl55] E. Følner. On groups with full Banach mean values. Math. Scand. 3 (1955), 243–254.

[ForK72] C. M. Fortuin and P. W. Kasteleyn. On the random-cluster model. I. Introduction and relation to other models. Physica 57 (1972), 536–564.

[FriB99] E. Friedgut, with an appendix by J. Bourgain. Sharp thresholds of graph properties, and the k-sat problem. J. Amer. Math. Soc. 12 (1999), 1017–1054. http://www.ma.huji.ac.il/~ehudf/docs/thre.ps

[Gab02] D. Gaboriau. On orbit equivalence of measure preserving actions. In: Rigidity in dynamics and geometry (Cambridge, 2000), pp. 167–186. Springer, Berlin, 2002. http://www.umpa.ens-lyon.fr/~gaboriau/Travaux-Publi/Cambridge/Cambridge.ps

[Gab05] D. Gaboriau. Invariant percolation and harmonic Dirichlet functions. Geom. Funct. Anal. 15 (2005), 1004–1051. [arXiv:math.PR/0405458]

[Gab10] D. Gaboriau. Orbit equivalence and measured group theory. In: Proceedings of the 2010 Hyderabad ICM, to appear. arXiv:1009.0132v1 [math.GR]

[GaGa81] O. Gabber and Z. Galil. Explicit constructions of linear-sized superconcentrators. J. Comput. System Sci. 22 (1981), 407–420.

[vEnt87] A. van Enter. Proof of Straley's argument for bootstrap percolation. J. Stat. Phys. 48 (1987), 943–945.

[GabL09] D. Gaboriau and R. Lyons. A measurable-group-theoretic solution to von Neumann's problem. Invent. Math. 177 (2009), 533–540. arXiv:0711.1643 [math.GR]

[GarPS10b] C. Garban, G. Pete and O. Schramm. The scaling limit of the Minimal Spanning Tree — a preliminary report. In: XVIth International Congress on Mathematical Physics (Prague 2009; P. Exner, ed.), pp. 475–480. World Scientific, Singapore, 2010. arXiv:0909.3138 [math.PR]

[GeHM01] H.-O. Georgii, O. Häggström and C. Maes. The random geometry of equilibrium phases. In: Phase transitions and critical phenomena, Vol. 18, pp. 1–142. Academic Press, San Diego, CA, 2001. [arXiv:math.PR/9905031v1]

[GHS70] R. B. Griffiths, C. A. Hurst and S. Sherman. Concavity of magnetization of an Ising ferromagnet in a positive external field. J. Math. Phys. 11 (1970), 790–795.

[GlW97] E. Glasner and B. Weiss. Kazhdan's property T and the geometry of the collection of invariant measures. Geom. Funct. Anal. 7 (1997), 917–935.

[GoV97] E. Gordon and A. M. Vershik. Groups that are locally embeddable in the class of finite groups. Algebra i Analiz 9 (1997), no. 1, 71–97. English translation: St. Petersburg Math. J. 9 (1998), 49–67.

[Gra10] Ł. Grabowski. On Turing dynamical systems and the Atiyah problem. Preprint, arXiv:1004.2030 [math.GR].

[GraG06] B. T. Graham and G. R. Grimmett. Influence and sharp threshold theorems for monotonic measures. Ann. Probab. 34 (2006), 1726–1745.

[Gri83] R. I. Grigorchuk. On the Milnor problem of group growth. Soviet Math. Dokl. 28 (1983), 23–26.

[GriNS00] R. I. Grigorchuk, V. V. Nekrashevich and V. I. Sushchanskii. Automata, dynamical systems and groups. Proc. Steklov Inst. Math. 231 (2000), 128–203. http://www.math.tamu.edu/~grigorch/publications/PSIM128.PS

[GriP08] R. Grigorchuk and I. Pak. Groups of intermediate growth: an introduction. Enseign. Math. (2) 54 (2008), 251–272.

[GriZ01] R. I. Grigorchuk and A. Żuk. The lamplighter group as a group generated by a 2-state automaton, and its spectrum. Geom. Dedicata 87 (2001), 209–244.
T.pdf ˙ [GriZ01] ˙ R.edu/~grigorch/publications/grigorchuk_pak_intermediate_growth. Geom. 790–795. Grigorchuk and A. Proceedings of the Steklov Institute of Mathematics 231 (2000). Garban.GR]. P. 3-4. J.ru/~vershik/gordon. Gordon and A. I. 533–540. G. no. Groups that are locally embeddable in the class of ﬁnite groups.GR] [GarPS10a] C. arXiv:1008. http://www. Georgii. Probab. B. On Turing machines.

Pete. Probab.math. [H¨ aM09] Ø. [GuGN13] O. 53 (1981). [HamPS12] A. Nachmias. and hexagonal lattices. 53–73.PR] [H¨ aPS99] Ø. 1999. H¨ aggstr¨ om and P. Ann. Electr. Appl. Springer-Verlag. triangular. arXiv:1208. Festschrift in honor of Harry Kesten. R. Bramson and R. 761–781. Article no. 59 (1995). Hammond. Percolation on transitive graphs as a coalescent process: Relentless merging followed by simultaneous uniqueness. and O. H. Gromov.5535 [math. Gromov. Grimmett. arXiv:1011. http://www. Grundlehren der Mathematischen Wissenschaften. Gromov. Birkh¨ auser. pages 69–90. Local time on the exceptional set of dynamical percolation.html [GriM11] G. Math. Hammond. Math. 17 (2012). J. Grimmett. 1. 1999. with applications to dynamical percolation. Appendix by J. pages 1–295. Grimmett. Random-cluster measures and uniform spanning trees. ed. Pete. Tits. [Gro93] M. Asymptotic invariants of innite groups. J. arXiv:1206. Vol. Schonmann. 333. 31 (2003). H¨ aggstr¨ om. Recurrence of planar graph limits. Cambridge University Press. Mester. Vol. Inhomogeneous bond percolation on square. Geometric group theory. Exit time tails from pairwise decorrelation in hidden Markov chains. and the Incipient Inﬁnite Cluster.cam. Grundlehren der mathematischen Wissenschaften. [Gro99] M. 1991). Some two-dimensional ﬁnite energy percolation processes. Soc. van der Hofstad and G. arXiv:1111. Springer-Verlag. 2. Hara. 42–54. Slade. Publ.ca/~slade/xspace_AOP128. 1. Grimmett and I. Berlin. no. http://www.6618 [math. [Gri06] G. Preprint. 74–78. Durrett.S. 1–16. [HamMP12] A.statslab. Percolation. 321. Berlin.ac. 2 (Sussex.0707 [math. Y. M. Elect. 2006. Preprint. Boston. Probability on Graphs.E. Mossel and G.[Gri99] G.pdf 201 . European Math. 2010. R.PR] [HarvdHS03] T.uk/~grg/books/pgs. Stochastic Process. Comm. 1993. Probab. Gurel-Gurevich and A. 349–408. Groups of polynomial growth and expanding maps. 177 (2013). 109–197. H¨ aggstr¨ om. 
Critical two-point functions and the lace expansion for spread-out high-dimensional percolation and related models. Probab. E. The random-cluster model. Perplexing Problems in Probability.PR]. 267–275. arXiv:1105.2872 [math.PR]. no. I.ubc.H. Cambridge Univ. Peres and R. Second edition. 1 (1999). Schramm. Cambridge. 14 (2009). [Gro81] M. [Gri10] G.3826 [math.PR] [H¨ ag95] Ø. G. 68. Manolescu. Ann. IMS Textbook Series. Endomorphisms of symbolic algebraic varieties. Press.

Book in preparation.tue.0430 [math. 2000.il/~kalai/kkl.ac. G.pdf [KahKL88] J.ubc. Proc. 68–80. Preprint. 10–13. T. Tsankov. Theory of minimum spanning trees I: Mean-ﬁeld theory and strongly disordered spin-glass model. Random graphs. 14 (1995). Wiley-Interscience. Schramm. 736–747.win.math. Random graphs and complex networks. M.il/~nati/PAPERS/expander_survey. Hoory. Harris. 549–559. http://www. Ioana. Howard and C. 37 (2000). Hatami. [Har60] T. Amenability and phase transition in the Ising model. E. de la Harpe. 111 (2003). 3 (2009). 12 (1999).stat-mech] [JaLR00] S. http://www. Luczak and A. Lov´ asz and B. Kechris and T. Jackson and N. The inﬂuence of variables on boolean functions. E. 439–561. Expander graphs and their applications.chalmers. Soc. Probab.huji.3651v3 [cond-mat.[dlHar00] P.pdf [Hol07] A. no. Steif. D. arXiv:0806. Read. Dyn.ps 202 . 444–485. Bulletin of the American Mathematical Society 43 (2006). A lower bound for the critical probability in a certain percolation process. Groups Geom. 31 (2003) no. A. Bul- letin du Centre de Recherches Mathematiques (2007). Stat. Szegedy. 123–149. Cambridge Philos. S. University of Chicago Press. 56 (1960).ma. The percolation transition for the zero-temperature stochastic Ising model on the hexagonal lattice. arXiv:0902. http://www. 13–20.CO].DS] [JaR09] T.huji. Zero-temperature Ising spin dynamics on the homogeneous tree of degree three. He and O. J´ arai.ca/~holroyd/papers/cell. New York. J. Rucinski. Incipient inﬁnite percolation clusters in 2D. Kalai and N. Topics in geometric group theory. Newman. 4. Ann. D.nl/~rhofstad/NotesRGCN. L. Probab.math. Appl. Subequivalence relations and positive-deﬁnite functions. Hyperbolic and parabolic packings. Howard. [JoS99] J. arXiv:1205. No 1. Vol 13. Geom.pdf [How00] C. 1. Linial and A.4356 [math. E. Prob. van der Hofstad. Janson. 2000. Chicago Lectures in Mathematics. Wigderson. [IKT09] A. N. 29th Annual Symposium on Foundations of Computer Science.pdf [HooLW06] S.ac. 
Physics. 4. Linial. 579–625. 1988. Kahn.cs. X. Limits of local-global convergent graph sequences.se/~steif/p25. [HeS95] Z. 57–72. [HatLSz12] H. Theoret. Discrete Comput. No. Holroyd: Astonishing cellular automata (expository article). http://www. J. [HowN03] C. J. S. Jonasson and J. http://www. [vdHof13] R. [J´ ar03] A.

pdf [Kap07] M. 39 (2011). Full Banach mean values on countable groups. Percolation. [Koz07] G. 635–654. Percolation on a product of two trees. 203 . Korevaar and R. 369–394. Kozma. Probab. 74 (1980). Global existence theorems for harmonic maps to non-locally compact spaces. Probab. The critical probability of bond percolation on the square lattice equals 1/2.PR]. 671– 676. Kleiner. Kozma and A. arXiv:0707. Complex. M. 41–59. Topics in orbit equivalence. Vershik. Ann. 11 (1983). 3. 457–490. Probab. Soc. In Computational complexity and statistical physics. The incipient inﬁnite cluster in two-dimensional percolation. 1. Special issue dedicated to the memory of Oded Schramm. Kirsch. New York. Kaimanovich. 7 (1959). Springer. An invitation to random Schr¨ odinger operators. no. Random walks on discrete groups: boundary and entropy. Comm.4231 [math. Kozma. computer science.ma. planarity. Math 178 (2009). Nachmias. [Kir07] W.. to appear. Ann. D. Oxford Univ. perimetry. Rel. 333–387. of Math. Kesten. http://www. 3.il/~kalai/ML. [KalS06] G. Iberoam. Math. arXiv:0710.PR] [KoN09b] G. Kapovich. arXiv:1003. Amer. Ann. Safra. no. Fields 73 (1986). Kesten. Geom. (2) 152 (2000). 2.1442 [math. Math. 1864–1895. Energy of harmonic functions and Gromovs proof of the Stallings theorem. Math. A new proof of Gromov’s theorem on groups of polynomial growth. Miller. Scand.GR]. Sci. Nachmias.0871v1 [math. Soc.. Threshold phenomena and inﬂuence: perspectives from mathematics. Comm. 659–692. [Kle10] B. no. J. 2004. no. pages 25–60.3707 [math-ph]. Kozma and A. Preprint. A. [KecM04] A.[Kai00] V. J. arXiv:arXiv:0911. 146– 156. Phys. Lecture notes.PR]. Kalai and S.GR] [KorS97] N. Math. [Kes86] H.4593 [math. Rev. St. [KoN09a] G. Fe Inst. Th. no. [arXiv:math. Anal. The Poisson formula for groups with hyperbolic properties.huji. 2006. [KaiV83] V. Math. Kechris and B. Kesten. [Kes59] H. Arm exponents in high dimensional percolation. The Alexander-Orbach conjecture holds in high dimensions. M. 
and economics.PR/0509235] [Koz11] G. 23 (2007). 2. Kaimanovich and A. 2007. Stud. S. J. Press. 815–829. [Kes80] H. arXiv:0709. Invent.5240 [math. 5 (1997).ac. arXiv:0806. Amer. Schoen. 23 (2010).

4. [arXiv:math. 491–522.PR/9908177] [Lyo05] R. expanding graphs and invariant measures. Math. Random walks. RI. 455–481. Lyons.PR] [LyNaz11] R.CO/0212165] [Lyo09] R. Random complexes and ℓ2 -Betti numbers. 43 (2010).edu/~rdlyons/ps/cap11. Inst. arXiv:0911.uoregon. Probab. American Mathematical Society. Coarse diﬀerentiation and multi-ﬂows in planar graphs. 32 (2011). arXiv:0712. Math.2933 [math. Peres.MG] [LeNW07] F. Book in preparation. & Comput. Markov chains and mixing times. Probab. 14 (2005). Phase transitions on nonamenable graphs.PR] [LyPer10] R. Losert. With an appendix by J. Lyons and F. arXiv:0911. G. [Lub94] A. 4 (1994). Levin. Top. Math. Providence. Peres and O. Neuhauser and W. no. 2009. Lehner. Y. B. Raghavendra. Progress in Mathematics. 195 (1987). Anal.0092v2 [math. Europ. 125.[LeeP09] J. Probab. Ann. J. Y. Lyons. Probability on trees and networks.edu/~dlevin/MARKOV/ [Los87] V. Proceedings of the 27th Annual ACM Symposium on theory of computing. [arXiv:math. D. Analysis 1 (2009). 20 (1992).3135v2 [math. Funct. Wilmer. Lov´ asz and R. Basel. Lyons.edu/~rdlyons. Statist. http://www. present version is at http://mypage..PR] [LeeRa10] J. Ann. Probab. Harmonic maps on amenable groups and a diﬀusive lower bound for random walks.ps [Lyo00] R. Propp and D. Lyons. Peres and E. with Y.. Schramm. 153–175. Birkh¨ auser Verlag. arXiv:0811. [L¨ uc94] W. 2043–2088. 41 (2000). to appear. Ann. Peres. Combin. Markov chain intersections and the loop-erased walk. Lyons. On the structure of groups with polynomial growth. Geom.FA] [LevPW09] D. capacity. Poincar´ e. Woess. 39 (2003). 1994. M. Faster mixing via average conductance. Discrete Comput Geom. to appear. Ann. http://mypage. Lyons. Lee and P.1573v4 [math. A.PR/0107055] 204 . Z. H. Approximating L2 -invariants by their ﬁnite-dimensional analogues. Nazarov. On the spectrum of lamplighter groups and percolation clusters. [arXiv:math. Rogawski. Combin. J. L¨ uck. 779–791. [LovKa99] L. L. 346–362. 
Asymptotic enumeration of spanning trees.iu. 109–117. 1999. Lee and Y. [LyPS03] R. 1115–1125. and percolation on trees. Perfect matchings as IID factors on non-amenable groups. Discrete groups. [Lyo92] R. Phys. Lubotzky. arXiv:0804. Wilson.0274 [math. Kannan. With a chapter by J. 1099– 1126. J.iu.

M¨ orters and Y. Diﬀerential Geom.0353v2 [math-ph] [M¨ oP10] P. arXiv:0809. arXiv:1111. Geom. Fields (2010). no.3067 [math. Lyons and O. Schramm. [MohW89] B.[LyPS06] R. London Math. Morris. Morris and Y. Y.uk/maspm/book. with uniform marginals and inﬁnite clusters spanned by equal labels. Mester. J. J. 447–449. Bull.PR] [MorP05] B. Comm. [arXiv:math.edu/~pak/papers/perc. Montanari. Indistinguishability of percolation clusters. no. Minimal spanning forests. [LyT87] T. 26 (1987). Prob. [arXiv:math. 1993). Lyons. 1. 2. Peres. Mester.0719 [math. 21 (1989). Isoperimetric inequalities.PR/9811170v2] [LyT83] T. Rel.PR/0305349] [Mor10] R. mixing and heat kernel bounds. Mossel and A. Zero-temperature Glauber dynamics on Zd . Invariant monotone coupling need not exist. 103 (1988). J. Ann. Mohar and W. Cambridge University Press. Lyons. 2 (1968). [MonMS12] A. Diﬀ. Woess. 2 (1968).ac. Pak. and the spectrum of graphs. no.math. 30. Preprint.. Vol.ps 205 . 661–671. Probab. Ann. Instability of the Liouville property for quasi-isometric Riemannian manifolds and reversible Markov chains. Harmonic forms with values in locally constant Hilbert bundles. Ann. Peres and O. Mohar. Percolation on Grigorchuk groups. Fields 133 (2005). http://www. Algebra 29 (2001).i. Mok. Probab. Cambridge Series in Statistical and Probabilistic Mathematics.2283 [math.pdf. arXiv:1011. growth. Appl. Growth of ﬁnitely generated solvable groups. [Mok95] N. 33–66. 209–234. Evolving sets. Schramm. Soc. 31–51.d. (1995). Peres. 3. Geom. [arXiv:math. 1–7. E. 11 (1983). Special Issue. Muchnik and I. Milnor. Th. 433–453. Diﬀ. A survey on spectra of inﬁnite graphs. Rel. arXiv:0912.ucla. 1809–1836. [Mes10] P. 27 (1999). Theory Related Fields 152 (2012). Probab. Milnor. [MuP01] R. Ann. The weak limit of Ising models on locally tree-like graphs. A note on curvature and fundamental group. A simple criterion for transience of a reversible Markov chain. Linear Algebra Appl. Lyons. [Mil68a] [Mil68b] [Moh88] J. 2010. 
Probab.PR/0412263v5] [LySch99] R. Prob. 119–131. B.PR] [Mes11] P. J. 393–402. 1665–1692. 245–266. Th. 34 (2006).PR]. 5. J. http://people. Fourier Anal. Proceedings of the Conference in Honor of Jean-Pierre Kahane (Orsay. A factor of i. Brownian motion.bath. to appear. no. Probab. Sly.

2009. New York J. Iterated monodromy groups. [arXiv:math. R. Shlosman and Y. Am. 12 (2006). Scale-invariant groups. Peres. 825–845. Suhov. Scolnicov. Math. On non-uniqueness of percolation on nonamenable Cayley graphs Comptes Rendus de l’Academie des Sciences Series I Mathematics 330 (2000). Phys. Peres and D. S. Connectivity probability in critical percolation: An unpublished gem from Oded. On rough isometries of Poisson processes on the line. Self-similar groups. Course notes for Probability on Trees and Networks.ucla. 26 (2004). Ornstein and B. 6. vol. arXiv:0811. Embeddings of discrete groups and the speed of random walks. 495–500. article ID rnn076. Geometry and Dynamics.PR/0501532] [PerR04] Y. Providence. C.pdf [Per09] Y. NV/Hoboken. [Osi02] D. Mathematical Surveys and Monographs. Transl.digitalwell.PR/0404190] 206 .edu/~pak/papers/psn. [PaSN00] I. 1–141. Minlos. Weiss. Newman and D. arXiv:0708. Critical percolation on certain non-unimodular graphs. Peres.pdf [Pel07] R. G. Probab. Dynamics of Ising spin systems at zero temperature. pp. 48 (1987). to appear. M.2383v2 [math. NJ. Nekrashevych.V.washington. Computational and statistical group theory (Las Vegas. Nekrashevych and G.0220v4 [math.DS/0312306] V. Pete. Analyse Math. 34 pages. UC Berkeley. Nanda. 298. 1–18..PR] [Per04] Y. [arXiv:math. J. S. http://www. [NekP09] V. Osin. V. American Mathematical Society. J. Contemporary Mathematics.math. V. to appear.edu/~peres/notes1. http://stat-www. 105–113. arXiv:0709. Electron. Microsoft Research. 117. Fall 2004. Ground-state structure in a highly disordered spin-glass model. M. (2) 198 (2000). [OW87] D. 183–194. Annals of Applied Probability. RI. Peled. [NaoP08] A. 1113–1132. Nekrashevych. 2001). 82 (1996). Edited by Asaf Nachmias. RI. L. International Mathematics Research Notices (2008). of Math. 2002. Providence. Viewable with Internet Explorer and Windows Media Player at http://content. No. Soc.GR] [NewS96] C. Stein. eds. Weakly amenable groups. Soc. Groups. 
Naor and Y. Entropy and isomorphism theorems for actions of amenable groups. Newman and D. Stein.0853 [math. Mixing times for random walks on ﬁnite lamplighter groups. August 30-31. J. Pak and T.edu/msr/external_release_talks_12_05_2005/17504/lectu [PerPS06] Y. [arXiv:math. Revelle. Amer.berkeley.MG] [Nek03] [Nek05] V. Lecture at the Oded Schramm Memorial Conference. 2005. Smirnova-Nagnibeda.[NaNS00] S. Peres. In On Dobrushins way (From Probability Theory to Statistical Mechanics). Vol. Pete and A. Peres. Math. Statist.

[Pes08] V. 1. Tao. Probab. 377–392. V. H. http://www. [Rud73] [SaC97] W. Phys. Poincar´ e Probab. SIAM J. Harmonic analysis. I. 13 (2008). 2. [Sap07] M. Inst.GR] [Scho92] R. 1997. no. Schonmann.GR].. H. Rudin. [Pip77] [Rau04] N. Statist. 301–413. 1973. L.normalesup. H. Z¨ urich. 2. cohomology. On the complexity of a concentrator. 219 (2001). Conformally invariant scaling limits: an overview and a collection of problems. 6. [ShaT09] Y. Soc.PR/0602151] [Sha04] Y. 119–185. 298–304. Lectures on ﬁnite Markov chains. 6 (1977). Potential Analysis 29 (2008).PR/0702474v4] [Pey08] R. and the large-scale geometry of amenable groups Acta Math. Hyperlinear and soﬁc groups: a brief guide. Some group theory problems. pages 513–543. 20 (1992). LNM 1665. In International Congress of Mathematicians (Madrid. of Algebra and Comput.GR] [Pet08] G. no. A general Choquet-Deny theorem for nilpotent groups.pdf [Pin73] M. Peres. no. 271–322. Pete. Pestov. 1189–1214. Schonmann. Berlin. Comm. Mean-ﬁeld criticality for percolation on planar non-amenable graphs. H. Ann. 17– 36. Vol. Math. 453–463. arXiv:0910. [arXiv:math.org/~rpeyre/pro/research/carne. Internat. 225 (2002). 207 . arXiv:0804. Shalom and T. 677–683.4148 [math. Schonmann. Elect. 2006). arXiv:0704. Sapir. no. Comm. 3. Peyre. 7th International Teletraﬃc Conference. Superconcentrators. 174–193. [Scho01] R. On the behavior of some cellular automata related to bootstrap percolation. Phys. Bulletin of Symbolic Logic 14 (2008). available at http://dbwilson.[PerW10] Y. 1996).3968v8 [math. 40 (2004). 1973. [Scho02] R. Preprint. G. Stockholm. Wilson. Springer. Comm. no. with D. Saloﬀ-Coste. A note on percolation on Zd : isoperimetric proﬁle via exponential cluster repulsion. Multiplicity of phase transitions and mean-ﬁeld criticality on highly non-amenable graphs. A. [Schr07] Oded Schramm. Functional Analysis. Comput. 17 (2007). Math. A probabilistic approach to Carne’s bound. Game Theory: Alive. McGraw-Hill. 318/1-4. J. Ann. Shalom. 
A ﬁnitary version of Gromov’s polynomial growth theorem. Eur. [arXiv:math. Pinsker. Raugi. Math. Book in preparation. Lectures on probability theory and statistics (Saint-Flour. 2007.com/games/. 192 (2004).2899v1 [math. 449–480. Pippenger. Probab.

no. of Math. 16 (2007). Ann. ar. 6. Sokal. Probab. Phys Rev Lett. 2006. Algebra 20 (1972). 1.5584 [cs. [Tit72] [Tro85] J. D. circuit structure. 208 . 49 (2002). Combinatorics. The lace expansion and its applications. no. [Sly10] A. no. Springer. Math. http://terrytao. Lecture Notes in Mathematics 1879.PR/0801. 34 (2006).PR/0702875] [Tim07] ´ Tim´ A. Tits. Math. J. Thomassen. V. 88 (1968). ´ Tim´ [Tim06a] A. Extended abstract to appear in the Proceedings of FOCS 2010. 159–166.CO] [Tim11] ´ A. J. Isoperimetric inequalities and transient random walks on graphs. Bounds on the complex zeros of (di)chromatic polynomials and Pottsmodel partition functions. Cutsets in inﬁnite graphs. Probab. Troﬁmov. Math. 41–77.CC]. Neighboring clusters in Bernoulli percolation. 6. 1. no. On torsion-free groups with inﬁnitely many ends. I. 37 (2009). Ann.PR/0702873] ´ Tim´ [Tim06b] A. arXiv:1103.4968 [math. ar. Soc. Critical percolation of virtually free groups and other tree-like graphs. [arXiv:math.com/2010/02/18/a-proof-of-gromovs-theorem/ [Tho92] C. 6. A proof of Gromov’s theorem. 1056–1067. Sly. Probab.1711 [math. Tim´ ar. Nonuniversal critical dynamics in Monte Carlo simulations. 250–270. February 2010. 2344–2364. Graphs with polynomial growth.[Sid00] S. and acyclicity. 1592–1600. 58 (1987). [arXiv:math. Spakulova. Probab.html [Sla06] G. 1925–1943. Sidki. Slade: Scaling limits and super-Brownian motion. 312–334. Ann. Swendsen and J. [SwW87] R. Tao. Notices of Amer. Stallings.CO]. Free subgroups in linear groups.4153] [Sta68] J. [arXiv:cond-mat/9904146] ˇ [Spa09] ˇ I. xiv + 232 pages. Slade. Ann. [Sok01] A. Preprint. arXiv:0711.ams. 405–417. Combin. 2262–2296. H. & Comput. [Sla02] G. What’s new blog entry.wordpress. 9. 20 (1992). ar. no. Preprint at arXiv:1005. Wang. http://www. 34 (2006). Sci.org/notices/200209/index. Percolation on nonunimodular transitive graphs. Automorphisms of one-rooted trees: growth. S. USSR Sbornik 51 (1985). [Tao10] T. no. [arXiv:math. Probab. 
Probability and Computing 10 (2001). 2. 86–88. 2332–2343. Berlin. Approximating Cayley diagrams versus Cayley graphs. Computational transition at the uniqueness threshold. (New York) 100 (2000) no. Ann.

Funct. Anal. Zuk. Proﬁnite groups. Saloﬀ-Coste and T. Ser. Sci. 225–252. Werner. IAS Park City Graduate Summer School.wordpress. 1993. Coulhon. Version of May 7. Weiss. Circle packing theorem. L. Random walks on inﬁnite graphs and groups. Th. Random planar curves and Schramm-Loewner evolutions. Spring 2009. Geom. of Stat. J. Cambridge University Press. 1999). 215–239. 667–733. http://392c. (2) 109 (1985). Isoperimetric inequalities and Markov chains. Bull. [Wil09] H.[Var85a] N. Diﬀ. Wolf. [VarSCC92] N. M. Oxford University Press. no.org/wiki/Circle_packing_theorem#A_uniqueness_statement [Wik10b] English Wikipedia. Lecture notes from the 2002 Saint-Flour Summer School. taught at the University of Texas at Austin. [Ver00] A. 14 (2005). Werner. Th. 415–433. 1998. Probab. New York. Surveys 55 (2000). Growth of ﬁnitely generated solvable groups and curvature of Riemannian manifolds. FKG inequality. Woess. [Wer07] W. The Clarendon Press. 421–446. Sankhy¯ a: The Indian J. & Comput. [Wei00] B. J. Lamplighters. Diestel-Leader graphs. Topology 39 (2000). [Woe05] W. and harmonic functions. Math. ˙ [Zuk00] ˙ A. Wilson. [Wag93] S. Cambridge Tracts in Mathematics Vol. 947–956. Combin. Cambridge. On an isoperimetric inequality for inﬁnite ﬁnitely generated groups. New Series. Cambridge University Press. Soﬁc groups and dynamical systems. S. 209 . London Mathematical Society Monographs.wikipedia. The Banach-Tarski paradox. [Wol68] J. Wilton. boundaries. [Wer03] W. Corrected reprint of the 1985 original. arXiv:0710. http://en. 3. Wagon. Varopoulos. Math.org/wiki/FKG_inequality [Wil98] J. http://en. Analysis and geometry on groups. Th.PR] [Wik10a] English Wikipedia. 2 (1968). 2010. 2007. 138. 392C Geometric Group Theory. Ergodic theory and harmonic analysis (Mumbai. 5. 2010. 19.. Varopoulos. Woess. 2000. 1992. [arXiv:math. Russ. 63 (1985). Version of May 7. examples. A 62 (2000).PR/0303354]. Cambridge University Press. no.0856v2 [math. With a foreword by Jan Mycielski. 
Lectures on two-dimensional critical percolation. 350–359. random walks.wikipedia. Varopoulos. Vershik. [Var85b] N. Cambridge. Long range estimates for Markov chains. Dynamic theory of growth in groups: Entropy.com/2009/01/ [Woe00] W. 3. no.

- algtopUploaded bycopasder
- Comprative Study Kirchoff’s and Tutte Matrix TheoremUploaded byEditor IJRITCC
- Quasi PlanesUploaded byadorooee
- Nr210501 Discrete Structures Graph Theory Set1Uploaded bySrinivasa Rao G
- hw3Uploaded byspamspamspans
- ProblemsUploaded bycalifauna
- lecture24.pptUploaded byAnonymous niE5VQOH
- 5-1Uploaded byksr131
- IOSR JournalsUploaded byInternational Organization of Scientific Research (IOSR)
- b_a_a_copy.pdfUploaded byBrian Precious
- uhsUploaded byGrothendieck Langlands Shtukas
- CatalanUploaded bypabloN_1991
- BAB 2 GRAPHUploaded byRoyan Bakhtiar Rifa'i
- report v1.2Uploaded bythisismeyouknw
- (a40508) Design and Analysis of AlgorithmsUploaded byAnonymous ZntoXci
- FialaUploaded byAndrew Raspwald
- Split Block Domination in GraphsUploaded byesatjournals
- 1801101__24Uploaded byhuevonomar05
- seltenUploaded byvahabibrahim6434
- CS70 Midterm Exam 1 Fall 2014.pdfUploaded byfara
- Spanning TreeUploaded byapi-3839714
- AntonioMaschiettiUploaded bymaschiet4225
- The Role of Lie AlgebrasUploaded byMani Pillai
- ch1.pdfUploaded bySunilkumar
- GD Topics of DAAUploaded byankurpachauri

- ej03Uploaded byChristian Bazalar Salas
- GRAFICOS ESTADISTICOS_RM.pdfUploaded byChristian Bazalar Salas
- constru2Uploaded byChristian Bazalar Salas
- Serial.txtUploaded byChristian Bazalar Salas
- 2016-II.docxUploaded byChristian Bazalar Salas
- acv_2015_a_08.pdfUploaded byChristian Bazalar Salas
- refuerzo_ampliacion.mates.loscaminos 52.pdfUploaded byChristian Bazalar Salas
- BIO-ANA.docxUploaded byChristian Bazalar Salas
- viomcc.pdfUploaded byChristian Bazalar Salas
- A-153P_InstJosefaCapdevila.docUploaded byChristian Bazalar Salas
- 2016-II.docxUploaded byChristian Bazalar Salas
- Ejercicios Tema 1.pdfUploaded byChristian Bazalar Salas
- 2016-IIUploaded byChristian Bazalar Salas
- BIO-ANAUploaded byChristian Bazalar Salas
- numeros primos-3ero.pdfUploaded byChristian Bazalar Salas
- PrimosUploaded byChristian Bazalar Salas
- familia.docxUploaded byChristian Bazalar Salas
- Numeros Primos 3eroUploaded byChristian Bazalar Salas
- CadeteUploaded byChristian Bazalar Salas
- torneo18prob.pdfUploaded byChristian Bazalar Salas
- OSCEUploaded byChristian Bazalar Salas
- Xxi ProvincialUploaded byChristian Bazalar Salas
- viomcc.pdfUploaded byChristian Bazalar Salas
- estdiv1-03Uploaded byChristian Bazalar Salas
- Monografia Mantenim Industrial Reingenieria y Globalizaciãn2Uploaded byChristian Bazalar Salas
- exadep1111.docUploaded byChristian Bazalar Salas
- Xxi RegionalUploaded byChristian Bazalar Salas
- 1609Uploaded byChristian Bazalar Salas
- McdUploaded byChristian Bazalar Salas
- MCM-5TOUploaded byChristian Bazalar Salas