
UNIVERSITY LECTURE SERIES VOLUME 71

Introduction to
Analysis on Graphs
Alexander Grigor’yan

EDITORIAL COMMITTEE
Jordan S. Ellenberg Robert Guralnick
William P. Minicozzi II (Chair) Tatiana Toro

2010 Mathematics Subject Classification. Primary 05C50, 05C63, 05C76, 05C81, 60J10.

For additional information and updates on this book, visit


www.ams.org/bookpages/ulect-71

Library of Congress Cataloging-in-Publication Data


Names: Grigoryan, A. (Alexander), author.
Title: Introduction to analysis on graphs / Alexander Grigor’yan.
Description: Providence, Rhode Island : American Mathematical Society, [2018] | Series: University lecture series ; volume 71 | Includes bibliographical references and index.
Identifiers: LCCN 2018001105 | ISBN 9781470443979 (alk. paper)
Subjects: LCSH: Graph theory. | Laplace transformation. | Finite groups. | AMS: Combinatorics – Graph theory – Graphs and linear algebra (matrices, eigenvalues, etc.). msc | Combinatorics – Graph theory – Infinite graphs. msc | Combinatorics – Graph theory – Graph operations (line graphs, products, etc.). msc | Combinatorics – Graph theory – Random walks on graphs. msc | Probability theory and stochastic processes – Markov processes – Markov chains (discrete-time Markov processes on discrete state spaces). msc
Classification: LCC QA166.G7485 2018 | DDC 511/.5–dc23
LC record available at https://lccn.loc.gov/2018001105

Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting
for them, are permitted to make fair use of the material, such as to copy select pages for use
in teaching or research. Permission is granted to quote brief passages from this publication in
reviews, provided the customary acknowledgment of the source is given.
Republication, systematic copying, or multiple reproduction of any material in this publication
is permitted only under license from the American Mathematical Society. Requests for permission
to reuse portions of AMS publication content are handled by the Copyright Clearance Center. For
more information, please visit www.ams.org/publications/pubpermissions.
Send requests for translation rights and licensed reprints to reprint-permission@ams.org.

© 2018 by the American Mathematical Society. All rights reserved.
The American Mathematical Society retains all rights
except those granted to the United States Government.
Printed in the United States of America.

∞ The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at https://www.ams.org/
10 9 8 7 6 5 4 3 2 1 23 22 21 20 19 18

Contents

Preface vii
Chapter 1. The Laplace operator on graphs 1
1.1. The notion of a graph 1
1.2. Cayley graphs 5
1.3. Random walks 8
1.4. The Laplace operator 19
1.5. The Dirichlet problem 22

Chapter 2. Spectral properties of the Laplace operator 27


2.1. Green’s formula 27
2.2. Eigenvalues of the Laplace operator 28
2.3. Convergence to equilibrium 34
2.4. More about the eigenvalues 39
2.5. Convergence to equilibrium for bipartite graphs 42
2.6. Eigenvalues of Zm 43
2.7. Products of graphs 45
2.8. Eigenvalues and mixing time in Z_m^n, m odd 49
2.9. Eigenvalues and mixing time in a binary cube 51
Chapter 3. Geometric bounds for the eigenvalues 53
3.1. Cheeger’s inequality 53
3.2. Eigenvalues on a path graph 58
3.3. Estimating λ1 via diameter 61
3.4. Expansion rate 63
Chapter 4. Eigenvalues on infinite graphs 73
4.1. Dirichlet Laplace operator 73
4.2. Cheeger’s inequality 76
4.3. Isoperimetric and Faber-Krahn inequalities 78
4.4. Estimating λ1 (Ω) via inradius 79
4.5. Isoperimetric inequalities on Cayley graphs 82
4.6. Solving the Dirichlet problem by iterations 86
Chapter 5. Estimates of the heat kernel 89
5.1. The notion and basic properties of the heat kernel 89
5.2. One-dimensional simple random walk 91
5.3. Carne-Varopoulos estimate 96
5.4. On-diagonal upper estimates of the heat kernel 99
5.5. On-diagonal lower bound via the Dirichlet eigenvalues 107
5.6. On-diagonal lower bound via volume growth 112

5.7. Escape rate of random walk 114


Chapter 6. The type problem 117
6.1. Recurrence and transience 117
6.2. Recurrence and transience on Cayley graphs 122
6.3. Volume tests for recurrence 123
6.4. Isoperimetric tests for transience 128
Chapter 7. Exercises 131
Bibliography 143

Index 149

Preface

This book is based on a semester lecture course Analysis on Graphs that I


taught a number of years ago at the Department of Mathematics of the University
of Bielefeld. The purpose of the book is to provide an introduction to the subject
of the discrete Laplace operator on locally finite graphs. It should be accessible to
undergraduate and graduate students with enough background in linear algebra,
analysis and elementary probability theory.
The book starts with elementary material at the level of first-semester mathematics students and concludes with results proved in the mathematical literature in the 1990s. However, the book covers only some selected topics about the discrete Laplacian and is complementary to many existing books on similar subjects.
Let us briefly describe the contents of the book.
In Chapter 1 we give the definition and prove some basic properties of the dis-
crete Laplace operator such as solvability of the Dirichlet problem and the existence
of the associated random walk (= a reversible Markov chain).
In Chapter 2 we are concerned with the eigenvalues of the Laplace operator on
finite graphs and their relation to the rate of convergence to the equilibrium of the
corresponding random walk.
Chapter 3 contains some estimates of the eigenvalues on finite graphs, in par-
ticular, Cheeger’s inequality, as well as the relation of eigenvalues to the expansion
rate of subsets of graphs [37], [39].
In Chapter 4 we deal with the Laplace operator on infinite graphs and its
restriction to finite domains – the Dirichlet Laplacian. The central topic is the
relation between the eigenvalues of the Dirichlet Laplacian and the isoperimetric
properties of the graph, which is based on a version of Cheeger’s inequality. In
Section 4.5 we prove a beautiful theorem of Coulhon and Saloff-Coste [51] about
isoperimetric inequalities on Cayley graphs.
Chapter 5 is devoted to heat kernel estimates on infinite graphs, where the heat
kernel is the density of the transition probability of the random walk with respect to
the underlying measure (for example, the degree measure). In the case of a simple
random walk in Z we obtain the estimates directly from the definition by means
of Stirling’s formula. In Section 5.3 we prove a universal Gaussian upper bound of
the heat kernel that is due to Carne [32] and Varopoulos [136] . In Section 5.4 we
prove the on-diagonal upper bound of the heat kernel of [47], [81] assuming that
the graph satisfies a Faber-Krahn inequality.
In Sections 5.5-5.6 we prove some lower bounds of the heat kernel of [49], [113].
In Section 5.7 we use heat kernel techniques to prove a universal upper bound for the escape rate of the random walk on graphs with polynomial volume growth. This can be regarded as a far-reaching generalization of the Hardy-Littlewood $\sqrt{n \log n}$-estimate for the escape rate of a simple random walk in $\mathbb{Z}^m$, which was obtained in


1914 (ten years before Khinchin’s law of the iterated logarithm). For graphs with polynomial volume growth the $\sqrt{n \log n}$-estimate is sharp, as was shown in [15].
In Chapter 6 we are concerned with the problem of deciding whether the ran-
dom walk is recurrent or transient. Here we give a number of analytic conditions for
recurrence and transience. In particular, the heat kernel bounds from the previous
chapter lead immediately to the celebrated theorem of Polya: the simple random
walk in Zm is recurrent if and only if m ≤ 2 (Section 6.1). A far-reaching generaliza-
tion of Polya’s theorem is the Varopoulos criterion [135] for recurrence on Cayley
graphs that is presented in Section 6.2. In the remaining part of this chapter, we
prove for general graphs Nash-Williams’ [115] and volume tests for recurrence (the
latter being a discrete version of a theorem of Cheng-Yau [33] about parabolicity
of Riemannian manifolds), as well as an isoperimetric test for transience [72].
Chapter 7 contains exercises that were actually used for homework in the afore-
mentioned lecture course. Solutions to all exercises are available on my home page.
Some remarks are due concerning the bibliography. Initially I planned to limit
myself to a minimal bibliography list containing only necessary references from the
text. However, it was suggested by an anonymous referee that the bibliography should also contain references to sources in adjacent areas, thus providing a broader coverage of topics of analysis on graphs. Furthermore, the referee kindly offered a long list of such references, which greatly facilitated my work on the bibliography. Hence, here is a list of sources for further reading.
• Classical (combinatorial) graph theory: [31], [38], [59], [122], [123].
• Various aspects of analysis on graphs: [29], [45], [46], [52], [53], [64],
[65], [66], [129], [130].
• Spectral theory on graphs: [6], [19], [21], [22], [24], [27], [28], [30], [35],
[39], [40], [42], [43], [44], [45], [52], [53], [67], [70], [90], [92], [93], [105],
[106], [114], [125], [132], [133], [134].
• Potential-theoretic aspects of graphs: [63], [96], [97], [127], [137], [139].
• Analysis on Cayley and Schreier graphs: [16], [17], [49], [51], [54], [65],
[66], [71], [83], [84], [85], [112], [118], [121], [119].
• Random processes on graphs: [9], [10], [15], [86], [120], [124], [131],
[139], [140].
• Heat kernels on graphs: [10], [11], [12], [13], [14], [47], [48], [50], [55], [56], [68], [69], [81], [82], [87], [88], [100], [101], [102], [116], [126].
• Curvature on graphs: [20], [23], [41], [94], [95], [107], [108], [109].
• Homology theory on graphs: [4], [7], [60], [75], [76], [77], [78], [79], [80].
• Analysis on metric/quantum graphs: [26], [27], [64], [67], [98].
• Analysis on fractals and ultra-metric spaces: [3], [8], [25], [64], [73], [99],
[128].

Alexander Grigor’yan, March 2018


CHAPTER 1

The Laplace operator on graphs

1.1. The notion of a graph


A graph Γ is a pair (V, E) where V is a set of vertices, that is, an arbitrary set
whose elements are called vertices, and E is a set of edges, that is, the elements of
E are unordered pairs (x, y) of vertices x, y ∈ V . We write x ∼ y if (x, y) ∈ E and
say in this case that x is connected to y, or x is adjacent to y, or x is a neighbor of
y. By definition, the relation x ∼ y is symmetric.
A graph can be represented graphically as a set of points on a plane, and if
x ∼ y, then one connects the corresponding points on the plane by a line.
The edge (x, y) will also be denoted by xy, and x, y are called the endpoints of this edge. An edge (x, x) with coinciding endpoints (should it exist) is called a loop. A graph Γ is called simple if it has no loops.
A graph Γ is called locally finite if, for any vertex x, the number of adjacent
vertices to x is finite. A graph Γ is called finite if the number of its vertices is finite.
Of course, a finite graph is locally finite.
Close relatives of the notion of a graph are digraphs, quivers and multigraphs.
In a digraph, the edges are directed. That is, the relation x ∼ y does not have to
be symmetric. In a quiver, two vertices x, y may be connected by multiple directed
edges. In a multigraph, two vertices x, y may be connected by multiple undirected
edges.
Let us emphasize that in this book we deal only with graphs as defined above.
For any set S, denote by |S| its cardinality. For each vertex x of a graph (V, E),
define its degree by
deg (x) = card {y ∈ V : x ∼ y} ,
that is, deg (x) is the number of the vertices adjacent to x. We start with a simple
observation.
Lemma 1.1. (Double counting of edges) On any simple finite graph (V, E), the following identity holds:
\[
\sum_{x \in V} \deg(x) = 2\,|E| .
\]


Proof. Denote by $\vec{E}$ the set of ordered pairs $(x, y)$ with $x \sim y$. Let us count $|\vec{E}|$ in two ways. Since there are no loops, we have
\[
|\vec{E}| = 2\,|E| .
\]
On the other hand, each vertex $x \in V$ gives rise to $\deg(x)$ ordered pairs $(x, y)$ with $x \sim y$, so that
\[
|\vec{E}| = \sum_{x \in V} \deg(x) ,
\]


whence the claim follows. 
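As an aside (not from the book), the double counting identity of Lemma 1.1 is easy to check mechanically; the following minimal Python sketch does so for an arbitrary edge list, with the graph and the function name being illustrative choices.

    # Verify the handshake lemma: the sum of degrees equals twice the number of edges.
    from collections import defaultdict

    def degree_sum_equals_twice_edges(edges):
        # 'edges' is a list of unordered pairs (x, y) with x != y (no loops).
        deg = defaultdict(int)
        for x, y in edges:
            deg[x] += 1
            deg[y] += 1
        return sum(deg.values()) == 2 * len(edges)

    # Example: the complete graph K_4 on vertices 1, 2, 3, 4 has 6 edges.
    K4_edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    print(degree_sum_equals_twice_edges(K4_edges))  # True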

Consider some examples of graphs.


Example 1.2. A complete graph Kn . The set of vertices is V = {1, 2, ..., n},
and the edges are defined as follows: i ∼ j for any two distinct i, j ∈ V . That is,
any two distinct points in V are connected by an edge. Here are some examples of
complete graphs.

K2 = K3 = K4 =

By Lemma 1.1, the number of edges in $K_n$ is equal to
\[
\frac{1}{2} \sum_{i=1}^{n} \deg(i) = \frac{1}{2}\, n\,(n-1) .
\]

Example 1.3. A complete bipartite graph $K_{n,m}$. The set of vertices of $K_{n,m}$ is
\[
V = \{1, \dots, n, n+1, \dots, n+m\} ,
\]
and the edges are defined as follows: $i \sim j$ if and only if either $i \le n$ and $j > n$, or $i > n$ and $j \le n$. That is, the set of vertices is split into two groups, $S_1 = \{1, \dots, n\}$ and $S_2 = \{n+1, \dots, n+m\}$, and two vertices are connected if and only if they belong to different groups. The number of edges in $K_{n,m}$ is equal to $nm$. Here are some examples.

K1,1 = K2,2 = K3,3 =

Example 1.4. A lattice graph Z. The set of vertices V consists of all integers,
and the integers x, y are connected if and only if |x − y| = 1. The graph Z is shown
on Figure 1.1.

Figure 1.1. Graph Z

Example 1.5. A lattice graph $\mathbb{Z}^n$. The set of vertices consists of all n-tuples $(x_1, \dots, x_n)$ where the $x_i$ are integers, and
\[
(x_1, \dots, x_n) \sim (y_1, \dots, y_n)
\]
if and only if
\[
\sum_{i=1}^{n} |x_i - y_i| = 1 .
\]
That is, $x_i$ is different from $y_i$ for exactly one value of the index i, and $|x_i - y_i| = 1$ for this value of i. For example, the lattice graph $\mathbb{Z}^2$ is shown on Figure 1.2.


Figure 1.2. Graph Z2

Definition 1.6. A weighted graph is a pair (Γ, μ) where Γ = (V, E) is a graph


and μ is a weight on Γ, that is, μ : (x, y) → μxy is a non-negative function on V × V
such that
(1) μxy = μyx ;
(2) μxy > 0 if and only if x ∼ y.
Since the weight μ contains full information about the set of edges E, the
weighted graph can also be denoted by (V, μ). Alternatively, μ can be considered
as a positive function on the set E of edges, that is extended to be 0 on non-edge
pairs (x, y).
Example 1.7. Set μxy = 1 if x ∼ y and μxy = 0 otherwise. Then μ is a weight.
Such a weight is called simple.
Any weight $\mu_{xy}$ gives rise to a function on vertices as follows:
\[
\mu(x) = \sum_{y:\, y \sim x} \mu_{xy} . \qquad (1.1)
\]

Then μ (x) is called the weight of a vertex x. For example, if the weight μxy is
simple then μ (x) = deg (x). The following lemma extends Lemma 1.1.
Lemma 1.8. If Γ = (V, E) is a simple graph, then for any weight μ on Γ,
\[
\sum_{x \in V} \mu(x) = 2 \sum_{\xi \in E} \mu_{\xi} .
\]

Proof. Rewrite (1.1) in the form
\[
\mu(x) = \sum_{y \in V} \mu_{xy} ,
\]
where the summation is extended to all y ∈ V. This does not change the sum in (1.1) because we add only non-edges (x, y), where $\mu_{xy} = 0$. Therefore, we obtain
\[
\sum_{x \in V} \mu(x) = \sum_{x \in V} \sum_{y \in V} \mu_{xy} = \sum_{x, y \in V} \mu_{xy} = \sum_{x, y:\, x \sim y} \mu_{xy} = 2 \sum_{\xi \in E} \mu_{\xi} .
\]


Definition 1.9. A finite sequence $\{x_k\}_{k=0}^{n}$ of vertices on a graph is called a path if $x_k \sim x_{k+1}$ for all $k = 0, 1, \dots, n-1$. The number n of edges in the path is referred to as the length of the path.
Definition 1.10. A graph (V, E) is called connected if for any two vertices x, y ∈ V there is a path connecting x and y, i.e., a path $\{x_k\}_{k=0}^{n}$ such that $x_0 = x$ and $x_n = y$. If (V, E) is connected, then define the graph distance d(x, y) between any two vertices x, y as follows: if $x \ne y$ then d(x, y) is the minimal length of a path that connects x and y, and if $x = y$ then d(x, y) = 0.
The connectedness here is needed to ensure that d (x, y) < ∞ for any two
points.
Lemma 1.11. On any connected graph, the graph distance is a metric, so that (V, d) is a metric space.
Proof. We need to check the following axioms of a metric.
(1) Positivity: 0 ≤ d(x, y) < ∞, and d(x, y) = 0 if and only if x = y.
(2) Symmetry: d(x, y) = d(y, x).
(3) The triangle inequality:
\[
d(x, y) \le d(x, z) + d(z, y) .
\]
The first two properties are obvious for the graph distance. To prove the triangle inequality, choose a shortest path $\{x_k\}_{k=0}^{n}$ connecting x and z, and a shortest path $\{y_k\}_{k=0}^{m}$ connecting z and y, so that
\[
d(x, z) = n \quad\text{and}\quad d(z, y) = m .
\]
Then the sequence
\[
x = x_0,\ x_1, \dots, x_{n-1},\ z,\ y_1, \dots, y_m = y
\]
is a path connecting x and y, and it has length n + m, which implies that
\[
d(x, y) \le n + m = d(x, z) + d(z, y) . \qquad\square
\]

Lemma 1.12. If (V, E) is a connected locally finite graph, then the set of vertices V is either finite or countable.
Proof. Fix a reference point x ∈ V and consider the set
\[
B_n := \{ y \in V : d(x, y) \le n \} ,
\]
that is, the ball of radius n with respect to the distance d. Let us prove by induction in n that $|B_n| < \infty$.
The inductive basis for n = 0 is trivial because $B_0 = \{x\}$. Inductive step: assuming that $B_n$ is finite, let us prove that $B_{n+1}$ is finite. It suffices to prove that $B_{n+1} \setminus B_n$ is finite. For any vertex $y \in B_{n+1} \setminus B_n$, we have d(x, y) = n + 1, so that there is a path $\{x_k\}_{k=0}^{n+1}$ from x to y of length n + 1. Consider the vertex $z = x_n$. Clearly, the path $\{x_k\}_{k=0}^{n}$ connects x and z and has length n, which implies that $d(x, z) \le n$ and, hence, $z \in B_n$. On the other hand, we have by construction $z \sim y$. Hence, we have shown that every vertex $y \in B_{n+1} \setminus B_n$ is connected to one of the vertices in $B_n$. However, the number of vertices in $B_n$ is finite, and each of them has finitely many neighbors. Therefore, the total number of neighbors of vertices in $B_n$ is finite, which implies $|B_{n+1} \setminus B_n| < \infty$ and $|B_{n+1}| < \infty$.



Finally, observe that $V = \bigcup_{n=1}^{\infty} B_n$ because for any y ∈ V we have d(x, y) < ∞, so that y belongs to some $B_n$. Then V is either finite or countable as a countable union of finite sets. $\square$
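The balls $B_n$ and the graph distance from Lemmas 1.11 and 1.12 are easy to compute by breadth-first search. The following Python sketch (an illustration, not part of the book; the adjacency-list representation is an assumed convention) returns d(x, ·) for all vertices reachable from x, so that $B_n$ is just the set of vertices whose distance is at most n.

    from collections import deque

    def graph_distances(adj, x):
        # adj: dict mapping each vertex to the list of its neighbors.
        # Returns a dict y -> d(x, y) for all y in the connected component of x.
        dist = {x: 0}
        queue = deque([x])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        return dist

    # Example: the path graph 0 - 1 - 2 - 3.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    d = graph_distances(adj, 0)
    B2 = {y for y, r in d.items() if r <= 2}   # the ball B_2 = {0, 1, 2}
    print(d[3], B2)  # 3 {0, 1, 2}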

1.2. Cayley graphs


Here we discuss a large class of graphs that originate from groups. Recall that a group (G, ∗) is a set G equipped with a binary operation ∗ that satisfies the following properties:
(1) for all x, y ∈ G, x ∗ y is an element of G;
(2) the operation ∗ satisfies the associative law:
\[
x * (y * z) = (x * y) * z
\]
for all x, y, z ∈ G;
(3) there exists a neutral element e such that x ∗ e = e ∗ x = x for all x ∈ G;
(4) for any x ∈ G there exists an inverse element $x^{-1}$ such that $x * x^{-1} = x^{-1} * x = e$.
If the operation ∗ is also commutative, that is, x ∗ y = y ∗ x, then the group G is called abelian or commutative. In the case of an abelian group, one uses additive notation: the group operation is denoted by + instead of ∗, the neutral element is denoted by 0 instead of e, and the inverse element is denoted by −x rather than $x^{-1}$.
Example 1.13. Consider the set Z of all integers with the operation +. Then
(Z, +) is an abelian group where the neutral element is the number 0 and the inverse
of x is the negative of x.
Example 1.14. Fix an integer q ≥ 2 and consider the set Zq of all residues
modulo q, with the operation +. Namely, one says that two integers x, y are
congruent modulo q and writes
x = y mod q
if x − y is divisible by q. This relation is an equivalence relation and gives rise
to q equivalence classes that are called the residues modulo q, and are denoted by
0, 1, ..., q − 1 as integers. The residue class of an integer k is denoted by k mod q.
The addition in Zq is inherited from Z as follows:
x + y = z in Zq ⇔ x + y = z mod q in Z.
Then (Zq , +) is an abelian group, the neutral element is 0, and the inverse of x is
q − x (except for x = 0).
For example, consider $\mathbb{Z}_2 = \{0, 1\}$. Apart from the trivial sums x + 0 = x, we have the following rules in this group: 1 + 1 = 0 and −1 = 1. In $\mathbb{Z}_3 = \{0, 1, 2\}$ we have
\[
1 + 1 = 2,\quad 1 + 2 = 0,\quad 2 + 2 = 1,\quad -1 = 2,\quad -2 = 1 .
\]
Definition 1.15. Define the direct product of two groups (A, +), (B, +) as the group (A × B, +) that consists of pairs (a, b) where a ∈ A and b ∈ B, with the operation
\[
(a, b) + (a', b') = (a + a', b + b') .
\]
The neutral element of A × B is $(0_A, 0_B)$, and the inverse of (a, b) is (−a, −b).


More generally, given n groups $(A_k, +)$ where k = 1, ..., n, define their direct product
\[
(A_1 \times A_2 \times \dots \times A_n, +)
\]
as the set of all sequences $(a_k)_{k=1}^{n}$ where $a_k \in A_k$, with the operation
\[
(a_1, \dots, a_n) + (a_1', \dots, a_n') = (a_1 + a_1', \dots, a_n + a_n') .
\]
The neutral element is $(0_{A_1}, \dots, 0_{A_n})$ and the inverse is
\[
-(a_1, \dots, a_n) = (-a_1, \dots, -a_n) .
\]
If the groups are abelian then their product is also abelian.
Example 1.16. The group $\mathbb{Z}^n$ is defined as the direct product
\[
\mathbb{Z}^n = \underbrace{\mathbb{Z} \times \mathbb{Z} \times \dots \times \mathbb{Z}}_{n}
\]
of n copies of the group $\mathbb{Z}$.
Example 1.17. The group (Z2 × Z3 , +) consists of pairs (a, b) where a is a
residue mod 2 and b is a residue mod 3. For example, we have the following sum in
this group:
(1, 1) + (1, 2) = (0, 0) ,
whence it follows that − (1, 1) = (1, 2) .

Groups give rise to a class of graphs that are called Cayley graphs. Let (G, ∗) be a group and let S be a subset of G with the following property: if x ∈ S then $x^{-1} \in S$, and $e \notin S$. Such a set S will be called symmetric.
A group G and a symmetric set S ⊂ G determine a graph (V, E) as follows: the set V of vertices coincides with G, and the set E of edges is defined by the relation ∼ as follows:
\[
x \sim y \;\Leftrightarrow\; x^{-1} * y \in S , \qquad (1.2)
\]
or, equivalently,
\[
x \sim y \;\Leftrightarrow\; y = x * s \ \text{ for some } s \in S .
\]
Note that the relation x ∼ y is symmetric in x, y. That is, x ∼ y implies y ∼ x, because, by the symmetry of S,
\[
y^{-1} * x = \left( x^{-1} * y \right)^{-1} \in S .
\]
Hence, (V, E) is indeed a graph. The fact that $e \notin S$ implies that $x \nsim x$ because $x^{-1} * x = e \notin S$. Hence, the graph (V, E) contains no loops.
Definition 1.18. The graph (V, E) defined as above is denoted by (G, S) and
is called the Cayley graph of the group G with the edge generating set S.
There may be many different Cayley graphs based on the same group since they
depend also on the choice of S. It follows from the construction that deg (x) = |S|
for any x ∈ V. In particular, if S is finite then the graph (V, E) is locally finite. In
what follows, we will consider only locally finite Cayley graphs and always assume
that they are equipped with a simple weight.
If the group operation is + then (1.2) becomes
\[
x \sim y \;\Leftrightarrow\; y - x \in S \;\Leftrightarrow\; x - y \in S .
\]
In this case, the symmetry of S means that $0 \notin S$ and that if x ∈ S then also −x ∈ S.
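To make the construction concrete, here is a small Python sketch (an illustration, not from the book) that builds the edge set of a Cayley graph (G, S) for a finite abelian group given by coordinatewise addition modulo fixed moduli; the representation of group elements as tuples and the function names are assumptions made for the example.

    from itertools import product

    def cayley_graph_edges(moduli, S):
        # Vertices: the group Z_{q1} x ... x Z_{qk}, elements stored as tuples.
        # S must be symmetric: s in S implies -s in S, and 0 is not in S.
        vertices = list(product(*(range(q) for q in moduli)))

        def add(x, s):
            return tuple((xi + si) % q for xi, si, q in zip(x, s, moduli))

        edges = set()
        for x in vertices:
            for s in S:
                y = add(x, s)
                edges.add(frozenset((x, y)))   # x ~ y  iff  y = x + s for some s in S
        return vertices, edges

    # Example: the cycle C_5 = (Z_5, {+1, -1}); every vertex has degree |S| = 2.
    V, E = cayley_graph_edges((5,), [(1,), (4,)])
    print(len(V), len(E))  # 5 5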


Example 1.19. G = (Z, +) and S = {1, −1} . Then x ∼ y if x − y = 1 or


x − y = −1. Hence, (G, S) coincides with the lattice graph Z (cf. Example 1.4).
If S = {±1, ±2} then x ∼ y if |x − y| = 1 or |x − y| = 2 so that we obtain a
different graph.
Example 1.20. Let $G = (\mathbb{Z}^n, +)$. Let S consist of the points $(x_1, \dots, x_n) \in \mathbb{Z}^n$ such that exactly one of the $x_i$ is equal to ±1 and the others are 0; that is,
\[
S = \Big\{ (x_1, \dots, x_n) \in \mathbb{Z}^n : \sum_{i=1}^{n} |x_i| = 1 \Big\} .
\]
For example, in the case n = 2 we have
\[
S = \{ (1, 0), (-1, 0), (0, 1), (0, -1) \} .
\]
The connection x ∼ y means that x − y has exactly one coordinate equal to ±1, and all others are 0; equivalently, this means that
\[
\sum_{i=1}^{n} |x_i - y_i| = 1 .
\]
Hence, the Cayley graph $(\mathbb{Z}^n, S)$ is exactly the standard lattice graph $\mathbb{Z}^n$ (cf. Example 1.5).
Consider now another edge generating set S on $\mathbb{Z}^2$ with two more elements:
\[
S = \{ (1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1) \} .
\]
The corresponding graph $(\mathbb{Z}^2, S)$ is shown on Figure 1.3.

Figure 1.3. Graph $(\mathbb{Z}^2, S)$

Example 1.21. Let $G = \mathbb{Z}_2 = \{0, 1\}$. The only possibility for S is S = {1} (note that −1 = 1). The graph $(\mathbb{Z}_2, S)$ coincides with $K_2$.
Example 1.22. Let G = Zq where q > 2, and S = {±1}. That is, each residue
k = 0, 1, ..., q − 1 has two neighbors: k − 1 and k + 1. For example, 0 has the
neighbors 1 and q − 1. The graph (Zq , S) is called the q-cycle and is denoted by
Cq . Here are the graphs C3 and C4 :

C3 = C4 =


Example 1.23. Consider Zq with the symmetric set S = Zq \ {0}. That is,
every two distinct elements x, y ∈ Zq are connected by an edge. Hence, the resulting
Cayley graph is the complete graph Kq .
Example 1.24. The group
\[
G = \mathbb{Z}_2^n := \underbrace{\mathbb{Z}_2 \times \mathbb{Z}_2 \times \dots \times \mathbb{Z}_2}_{n}
\]
consists of n-tuples $(x_1, \dots, x_n)$ of residues mod 2, that is, each $x_i$ is 0 or 1. Let S consist of all elements $(x_1, \dots, x_n)$ such that exactly one $x_i$ is equal to 1 and all others are 0. Then the graph $(\mathbb{Z}_2^n, S)$ is called the n-dimensional binary cube and is denoted also by $\{0,1\}^n$, analogously to the geometric n-dimensional cube $[0,1]^n$. Clearly, $\{0,1\}^1 = K_2$ and $\{0,1\}^2 = C_4$. The hexahedron graph $\mathbb{Z}_2^3 = \{0,1\}^3$ is shown on Figure 1.4.

Figure 1.4. Hexahedron

Example 1.25. Let $G = \mathbb{Z}_q \times \mathbb{Z}_2$. Then G consists of pairs (x, y) where $x \in \mathbb{Z}_q$ and $y \in \mathbb{Z}_2$. The set G can be split into two disjoint subsets
\[
G_0 = \mathbb{Z}_q \times \{0\} = \{ (x, 0) : x \in \mathbb{Z}_q \} ,
\qquad
G_1 = \mathbb{Z}_q \times \{1\} = \{ (x, 1) : x \in \mathbb{Z}_q \} ,
\]
each having q elements. Set $S = G_1$. Then
\[
(x, a) \sim (y, b) \;\Leftrightarrow\; a - b = 1 \;\Leftrightarrow\; a \ne b .
\]
In other words, (x, a) ∼ (y, b) if and only if the points (x, a) and (y, b) belong to different subsets $G_0$, $G_1$. Hence, the graph $(\mathbb{Z}_q \times \mathbb{Z}_2, S)$ coincides with the complete bipartite graph $K_{q,q}$ with the partition $G_0$, $G_1$.
Definition 1.26. A graph (V, E) is called D-regular, if all vertices x ∈ V have
the same degree D. A graph is called regular if it is D-regular for some D.
Clearly, all Cayley graphs are regular. All the regular graphs that we have discussed above are Cayley graphs. However, there exist regular graphs that are not Cayley graphs (cf. Exercises 9 and 13). Of course, there are plenty of examples of non-regular graphs.

1.3. Random walks



Consider a classical problem from probability theory. Let $\{x_k\}_{k=1}^{\infty}$ be a sequence of independent random variables taking values 1 and −1 with probability 1/2 each; that is,
\[
\mathbb{P}(x_k = 1) = \mathbb{P}(x_k = -1) = 1/2 .
\]


Consider the sum
\[
X_n = x_1 + \dots + x_n
\]
and ask: what is the likely behavior of $X_n$ for large n?
Historically, this type of problem came from game theory (and gambling practice): at each integer value of time, a player either wins 1 with probability 1/2 or loses 1 with the same probability. The games at different times are independent. Then $X_n$ represents the gain at time n if $X_n > 0$, and the loss if $X_n < 0$.
Of course, the mean value of $X_n$, that is, the expectation, is 0, because $\mathbb{E}(x_k) = \frac{1}{2}(1 - 1) = 0$ and
\[
\mathbb{E}(X_n) = \sum_{k=1}^{n} \mathbb{E}(x_k) = 0 .
\]
Games with this property are called fair games or martingales. However, the deviation of $X_n$ from the average value 0 can still be significant at any particular time.
We will adopt a geometric point of view on Xn as follows. Note that Xn ∈ Z
and Xn is defined inductively as follows: X0 = 0 and Xn+1 − Xn is equal to 1 or
−1 with equal probabilities 1/2. Hence, we can consider Xn as a position on Z
of a walker that jumps at any time n from its current position to a neighboring
integer, either right or left, with equal probabilities 1/2, and independently of the
previous movements. Such a random process is called a random walk. Note that
this random walk is related to the graph structure of Z: namely, a walker moves at
each step along the edges of this graph. Hence, Xn can be regarded as a random
walk on the graph Z.
Similarly, one can define a random walk on $\mathbb{Z}^N$: at any time n = 0, 1, 2, ..., let $X_n$ be the position of a walker in $\mathbb{Z}^N$. It starts at time 0 at the origin, and at time n + 1 it moves with equal probability 1/(2N) along one of the vectors $\pm e_1, \dots, \pm e_N$, where $e_1, \dots, e_N$ is the canonical basis in $\mathbb{R}^N$. That is, $X_0 = 0$ and
\[
\mathbb{P}(X_{n+1} - X_n = \pm e_k) = \frac{1}{2N} .
\]
We always assume that the random walk in question has the Markov property: the choice of the move at any time n is independent of the previous movements.
More generally, one can define a random walk on any locally finite graph (V, E). Namely, imagine a walker that at any time n = 0, 1, 2, ... has a random position $X_n$ at one of the vertices of V, defined as follows: $X_0 = x_0$ is a given vertex, and $X_{n+1}$ is obtained from $X_n$ by moving with equal probabilities along one of the edges of $X_n$, that is,
\[
\mathbb{P}(X_{n+1} = y \mid X_n = x) =
\begin{cases}
1/\deg(x), & y \sim x, \\
0, & y \nsim x .
\end{cases}
\qquad (1.3)
\]
The random walk $\{X_n\}$ defined in this way is called a simple random walk on (V, E). The adjective “simple” refers to the fact that the walker moves to the neighboring vertices with equal probabilities.
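As an aside (not from the book), the rule (1.3) translates directly into a few lines of Python; the adjacency-list format and the use of the standard random module are assumptions made for this sketch.

    import random

    def simple_random_walk(adj, x0, n_steps, rng=random):
        # adj: dict vertex -> list of neighbors; x0: starting vertex.
        # At each step the walker jumps to a uniformly chosen neighbor, as in (1.3).
        path = [x0]
        x = x0
        for _ in range(n_steps):
            x = rng.choice(adj[x])
            path.append(x)
        return path

    # Example: 10 steps of the simple random walk on the cycle C_5.
    C5 = {k: [(k - 1) % 5, (k + 1) % 5] for k in range(5)}
    print(simple_random_walk(C5, 0, 10))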
A simple random walk is a particular case of a Markov chain. Given a finite or countable set V (called a state space), a Markov kernel on V is any function $P(x, y): V \times V \to [0, +\infty)$ with the property that
\[
\sum_{y \in V} P(x, y) = 1 \quad\text{for all } x \in V . \qquad (1.4)
\]


Any Markov kernel defines a Markov chain $\{X_n\}_{n=0}^{\infty}$ as a sequence of random variables with values in V such that the following identity holds:
\[
\mathbb{P}(X_{n+1} = y \mid X_n = x) = P(x, y) , \qquad (1.5)
\]
and such that the behavior of the process from any time n onwards is independent of the past. The latter requirement is called the Markov property, and it will be considered in detail below.
Observe that the rule (1.3) defining the random walk on a graph (V, E) can also be written in the form (1.5) where
\[
P(x, y) =
\begin{cases}
\frac{1}{\deg(x)}, & y \sim x, \\
0, & y \nsim x .
\end{cases}
\qquad (1.6)
\]
The condition (1.4) is obviously satisfied because
\[
\sum_{y \in V} P(x, y) = \sum_{y \sim x} P(x, y) = \sum_{y \sim x} \frac{1}{\deg(x)} = 1 .
\]
Hence, the simple random walk on a graph is a particular case of a Markov chain, with the specific Markov kernel (1.6).
Let us discuss the Markov property. Its exact meaning is given by the following identity:
\[
\mathbb{P}(X_1 = x_1, \dots, X_{n+1} = x_{n+1} \mid X_0 = x) \qquad (1.7)
\]
\[
= \mathbb{P}(X_1 = x_1, \dots, X_n = x_n \mid X_0 = x)\, \mathbb{P}(X_{n+1} = x_{n+1} \mid X_n = x_n) ,
\]
which postulates the independence of the jump from $x_n$ to $x_{n+1}$ from the previous path. Using (1.5) and (1.7), one obtains by induction that
\[
\mathbb{P}(X_1 = x_1, \dots, X_n = x_n \mid X_0 = x) = P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) . \qquad (1.8)
\]
Obviously, (1.8) implies (1.7) back. In fact, (1.8) can be used to actually construct the Markov chain. Indeed, it is not entirely obvious that there exists a sequence of random variables satisfying (1.5) and (1.7).
Proposition 1.27. The Markov chain exists for any Markov kernel.
Proof. Let us first construct a family of probability spaces $(\Omega, \mathbb{P}_x)_{x \in V}$. Define $\Omega$ as the set $V^{\infty}$ of all sequences $\{x_k\}_{k=1}^{\infty}$ of points of V (that represent the final outcome of the process). In order to construct a probability measure $\mathbb{P}_x$ on $\Omega$, we first construct a probability measure $\mathbb{P}_x^{(n)}$ on the set of finite sequences $\{x_k\}_{k=1}^{n}$. Note that the set of sequences of n points of V is nothing other than the product
\[
V^n = \underbrace{V \times \dots \times V}_{n} .
\]
Hence, our strategy is as follows: first construct a probability measure $\mathbb{P}_x^{(1)}$ on V, then $\mathbb{P}_x^{(n)}$ on $V^n$, and then extend it to a measure $\mathbb{P}_x$ on $V^{\infty}$. Hence, $\mathbb{P}_x$ will be associated with a Markov chain starting from the point $X_0 = x$.
Fix a point x ∈ V and observe that $P(x, \cdot)$ determines a probability measure $\mathbb{P}_x^{(1)}$ on V as follows: for any subset A ⊂ V, set
\[
\mathbb{P}_x^{(1)}(A) = \sum_{y \in A} P(x, y) .
\]


Clearly, $\mathbb{P}_x^{(1)}$ is σ-additive, that is,
\[
\mathbb{P}_x^{(1)}\Big( \bigsqcup_{k=1}^{\infty} A_k \Big) = \sum_{k=1}^{\infty} \mathbb{P}_x^{(1)}(A_k) ,
\]
and $\mathbb{P}_x^{(1)}(V) = 1$ by (1.4).
Next, define a probability measure $\mathbb{P}_x^{(n)}$ on $V^n$ as follows. Firstly, define the measure of any point $(x_1, \dots, x_n) \in V^n$ by¹
\[
\mathbb{P}_x^{(n)}(x_1, \dots, x_n) = P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) , \qquad (1.9)
\]
and then extend it to all sets $A \subset V^n$ by
\[
\mathbb{P}_x^{(n)}(A) = \sum_{(x_1, \dots, x_n) \in A} \mathbb{P}_x^{(n)}(x_1, \dots, x_n) .
\]
Let us verify that it is indeed a probability measure, that is, $\mathbb{P}_x^{(n)}(V^n) = 1$. The inductive basis was proved above; let us make the inductive step from n to n + 1:
\[
\sum_{x_1, \dots, x_{n+1} \in V} \mathbb{P}_x^{(n+1)}(x_1, \dots, x_{n+1})
= \sum_{x_1, \dots, x_{n+1} \in V} P(x, x_1) \cdots P(x_{n-1}, x_n)\, P(x_n, x_{n+1})
\]
\[
= \sum_{x_1, \dots, x_n \in V} P(x, x_1) \cdots P(x_{n-1}, x_n) \sum_{x_{n+1} \in V} P(x_n, x_{n+1})
= 1 \cdot 1 = 1 ,
\]
where we have used (1.4) and the inductive hypothesis.
The sequence of measures $\{\mathbb{P}_x^{(n)}\}_{n=1}^{\infty}$ constructed above is consistent in the following sense. Fix two positive integers n < m. Then every point $(x_1, \dots, x_n) \in V^n$ can be regarded as a subset of $V^m$ that consists of all sequences whose first n components are exactly $x_1, \dots, x_n$, while the remaining components are arbitrary. Then we have
\[
\mathbb{P}_x^{(n)}(x_1, \dots, x_n) = \mathbb{P}_x^{(m)}(x_1, \dots, x_n) , \qquad (1.10)
\]
which is proved as follows:
\[
\mathbb{P}_x^{(m)}(x_1, \dots, x_n)
= \sum_{x_{n+1}, \dots, x_m \in V} \mathbb{P}_x^{(m)}(x_1, \dots, x_n, x_{n+1}, \dots, x_m)
\]
\[
= \sum_{x_{n+1}, \dots, x_m \in V} P(x, x_1) \cdots P(x_{n-1}, x_n)\, P(x_n, x_{n+1}) \cdots P(x_{m-1}, x_m)
\]
\[
= P(x, x_1) \cdots P(x_{n-1}, x_n) \sum_{x_{n+1}, \dots, x_m \in V} P(x_n, x_{n+1}) \cdots P(x_{m-1}, x_m)
\]
\[
= \mathbb{P}_x^{(n)}(x_1, \dots, x_n)\, \mathbb{P}_{x_n}^{(m-n)}\big(V^{m-n}\big)
= \mathbb{P}_x^{(n)}(x_1, \dots, x_n) .
\]

¹ Note that the measure $\mathbb{P}_x^{(n)}$ is not a product of n copies of $\mathbb{P}_x^{(1)}$, since the latter is $P(x, x_1) P(x, x_2) \cdots P(x, x_n)$.


In the same way, any subset $A \subset V^n$ admits a cylindrical extension $A'$ to a subset of $V^m$ as follows: a sequence $(x_1, \dots, x_m)$ belongs to $A'$ if $(x_1, \dots, x_n) \in A$. It follows from (1.10) that
\[
\mathbb{P}_x^{(n)}(A) = \mathbb{P}_x^{(m)}(A') . \qquad (1.11)
\]
This is exactly the Kolmogorov consistency condition that allows us to extend the sequence of measures $\mathbb{P}_x^{(n)}$ on $V^n$ to a measure on $V^{\infty}$. Consider first cylindrical subsets of $V^{\infty}$, that is, sets of the form
\[
A' = \big\{ \{x_k\}_{k=1}^{\infty} : (x_1, \dots, x_n) \in A \big\} , \qquad (1.12)
\]
where A is a subset of $V^n$ for some n, and set
\[
\mathbb{P}_x(A') = \mathbb{P}_x^{(n)}(A) . \qquad (1.13)
\]
Due to the consistency condition (1.11), this definition does not depend on the choice of n. Kolmogorov’s extension theorem says that the functional $\mathbb{P}_x$ defined in this way on the cylindrical subsets of $V^{\infty}$ extends uniquely to a probability measure on the minimal σ-algebra containing all the cylindrical sets.
Now we define the probability space Ω as $V^{\infty}$ endowed with the family $\{\mathbb{P}_x\}$ of probability measures. The random variable $X_n$ is a function on Ω with values in V defined by
\[
X_n\big(\{x_k\}_{k=1}^{\infty}\big) = x_n .
\]

Then (1.9) can be rewritten in the form
\[
\mathbb{P}_x(X_1 = x_1, \dots, X_n = x_n) = P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) . \qquad (1.14)
\]
The identity (1.14) together with (1.4) are the only properties of Markov chains that we need here and in what follows. Let us use (1.14) to prove that the sequence $\{X_n\}$ is indeed a Markov chain with the Markov kernel P(x, y). We need to verify (1.5) and (1.8). The latter is obviously equivalent to (1.14). To prove the former, write
\[
\mathbb{P}_x(X_n = y) = \sum_{x_1, \dots, x_{n-1} \in V} \mathbb{P}_x(X_1 = x_1, \dots, X_{n-1} = x_{n-1}, X_n = y)
= \sum_{x_1, \dots, x_{n-1} \in V} P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, y)
\]
and
\[
\mathbb{P}_x(X_n = y, X_{n+1} = z)
= \sum_{x_1, \dots, x_{n-1} \in V} \mathbb{P}_x(X_1 = x_1, \dots, X_{n-1} = x_{n-1}, X_n = y, X_{n+1} = z)
\]
\[
= \sum_{x_1, \dots, x_{n-1} \in V} P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, y)\, P(y, z) ,
\]
whence
\[
\mathbb{P}_x(X_{n+1} = z \mid X_n = y) = \frac{\mathbb{P}_x(X_n = y, X_{n+1} = z)}{\mathbb{P}_x(X_n = y)} = P(y, z) ,
\]
which is equivalent to (1.5). $\square$


Given a Markov chain $\{X_n\}$ with a Markov kernel P(x, y), note that by (1.14)
\[
P(x, y) = \mathbb{P}_x(X_1 = y) ,
\]
so that $P(x, \cdot)$ is the distribution of $X_1$. Denote by $P_n(x, \cdot)$ the distribution of $X_n$, that is,
\[
P_n(x, y) = \mathbb{P}_x(X_n = y) .
\]
The function $P_n(x, y)$ is called the transition function or the transition probability of the Markov chain. Indeed, it fully describes what happens to the random walk at time n. For a fixed n, the function $P_n(x, y)$ is also called the n-step transition function. It is easy to deduce a recurrence relation between $P_n$ and $P_{n+1}$.
Proposition 1.28. For any Markov chain, we have
\[
P_{n+1}(x, y) = \sum_{z \in V} P_n(x, z)\, P(z, y) . \qquad (1.15)
\]

Proof. By (1.14), we have
\[
P_n(x, z) = \mathbb{P}_x(X_n = z)
= \sum_{x_1, \dots, x_{n-1} \in V} \mathbb{P}_x(X_1 = x_1, \dots, X_{n-1} = x_{n-1}, X_n = z)
= \sum_{x_1, \dots, x_{n-1} \in V} P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, z) .
\]
Applying the same argument to $P_{n+1}(x, y)$, we obtain
\[
P_{n+1}(x, y) = \sum_{x_1, \dots, x_n \in V} P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n)\, P(x_n, y)
\]
\[
= \sum_{x_n \in V} \Big( \sum_{x_1, \dots, x_{n-1} \in V} P(x, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) \Big) P(x_n, y)
= \sum_{x_n \in V} P_n(x, x_n)\, P(x_n, y) ,
\]
which is equivalent to (1.15). $\square$
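The recurrence (1.15) is also an effective way to compute $P_n$ numerically. Below is a minimal Python sketch (an illustration, not from the book) that iterates (1.15) on a finite state space; storing the kernel as a nested dictionary is an assumption made for the example.

    def n_step_kernel(P, n):
        # P: dict x -> dict y -> P(x, y), a Markov kernel on a finite set V.
        # Returns P_n computed by iterating the recurrence (1.15).
        V = list(P)
        Pn = {x: dict(P[x]) for x in V}                  # P_1 = P
        for _ in range(n - 1):
            Pn = {x: {y: sum(Pn[x].get(z, 0.0) * P[z].get(y, 0.0) for z in V)
                      for y in V}
                  for x in V}
        return Pn

    # Example: the simple random walk on the cycle C_3, with the kernel (1.6).
    P = {0: {1: 0.5, 2: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {0: 0.5, 1: 0.5}}
    P2 = n_step_kernel(P, 2)
    print(P2[0])   # {0: 0.5, 1: 0.25, 2: 0.25}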


Corollary 1.29. For any fixed n, $P_n(x, y)$ is also a Markov kernel on V.
Proof. Since $P_n(x, y) \ge 0$, we only need to verify that
\[
\sum_{y \in V} P_n(x, y) = 1 .
\]
For n = 1 this is given; the inductive step from n to n + 1 follows from (1.15):
\[
\sum_{y \in V} P_{n+1}(x, y) = \sum_{y \in V} \sum_{z \in V} P_n(x, z)\, P(z, y)
= \sum_{z \in V} P_n(x, z) \sum_{y \in V} P(z, y)
= \sum_{z \in V} P_n(x, z) = 1 . \qquad\square
\]


Corollary 1.30. We have, for all positive integers n, k,
\[
P_{n+k}(x, y) = \sum_{z \in V} P_n(x, z)\, P_k(z, y) . \qquad (1.16)
\]
Proof. Induction in k. The case k = 1 is covered by (1.15). For the inductive step from k to k + 1, we have
\[
P_{n+(k+1)}(x, y) = \sum_{w \in V} P_{n+k}(x, w)\, P(w, y)
= \sum_{w \in V} \Big( \sum_{z \in V} P_n(x, z)\, P_k(z, w) \Big) P(w, y)
\]
\[
= \sum_{z \in V} P_n(x, z) \sum_{w \in V} P_k(z, w)\, P(w, y)
= \sum_{z \in V} P_n(x, z)\, P_{k+1}(z, y) ,
\]
which was to be proved. $\square$

Now we impose one restriction on a Markov chain.
Definition 1.31. A Markov kernel P(x, y) is called reversible if there exists a positive function μ(x) on V such that
\[
P(x, y)\, \mu(x) = P(y, x)\, \mu(y) . \qquad (1.17)
\]
A Markov chain is called reversible if its Markov kernel is reversible.
It follows by induction from (1.17) and (1.15) that $P_n(x, y)$ is also a reversible Markov kernel (cf. Exercise 10).
The condition (1.17) means that the function
\[
\mu_{xy} := P(x, y)\, \mu(x)
\]
is symmetric in x, y. For example, this is the case when P(x, y) is symmetric in x, y and μ(x) ≡ 1. However, the reversibility condition can be satisfied for non-symmetric Markov kernels as well. For example, in the case of a simple random walk on a graph (V, E), we have by (1.6)
\[
\mu_{xy} = P(x, y)\, \deg(x) =
\begin{cases}
1, & x \sim y, \\
0, & x \nsim y,
\end{cases}
\]
which is symmetric. Hence, a simple random walk is a reversible Markov chain, while P(x, y) is not symmetric if $\deg(x) \ne \deg(y)$.
Any reversible Markov chain on V gives rise to a graph structure on V as follows. Define the set E of edges on V by the condition
\[
x \sim y \;\Leftrightarrow\; \mu_{xy} > 0 .
\]
Then $\mu_{xy}$ can be considered as a weight on (V, E) (cf. Section 1.1). Note that the function μ(x) can be recovered from $\mu_{xy}$ by the identity
\[
\sum_{y:\, y \sim x} \mu_{xy} = \sum_{y \in V} P(x, y)\, \mu(x) = \mu(x) ,
\]
which matches (1.1).


Let V be a finite or countable set. As we have seen above, any reversible Markov kernel on V determines a weighted graph (V, μ). Conversely, a weighted graph (V, μ) determines a reversible Markov kernel on V provided
\[
0 < \sum_{y \in V} \mu_{xy} < \infty \quad\text{for all } x \in V . \qquad (1.18)
\]
For example, the positivity condition in (1.18) holds if the underlying graph (V, E) (where the set of edges E is determined by the weight $\mu_{xy}$) has no isolated vertices, that is, vertices without neighbors; and the finiteness condition holds if the graph (V, E) is locally finite, so that the summation in (1.18) has finitely many positive terms. Hence, the condition (1.18) is satisfied for locally finite graphs without isolated vertices.
If (1.18) holds, then the weight on vertices
\[
\mu(x) = \sum_{y \in V} \mu_{xy}
\]
is finite and positive for all x, and we can set
\[
P(x, y) = \frac{\mu_{xy}}{\mu(x)} , \qquad (1.19)
\]
so that the reversibility condition (1.17) is obviously satisfied. In this context, a reversible Markov chain is also referred to as a random walk on a weighted graph.
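For concreteness, here is a short Python sketch (not from the book) that builds the kernel (1.19) from a symmetric weight $\mu_{xy}$ and checks the reversibility condition (1.17); storing the weight as a dictionary of dictionaries is an assumption made for the example.

    def kernel_from_weight(mu):
        # mu: dict x -> dict y -> mu_xy, symmetric, satisfying (1.18).
        mu_vertex = {x: sum(mu[x].values()) for x in mu}          # mu(x), as in (1.1)
        P = {x: {y: mu[x][y] / mu_vertex[x] for y in mu[x]} for x in mu}
        return P, mu_vertex

    def is_reversible(P, mu_vertex, tol=1e-12):
        # Check P(x, y) mu(x) = P(y, x) mu(y) for all edges, i.e., condition (1.17).
        return all(abs(P[x][y] * mu_vertex[x] - P[y].get(x, 0.0) * mu_vertex[y]) < tol
                   for x in P for y in P[x])

    # Example: the path graph 0 - 1 - 2 with simple weight (mu_xy = 1 on edges).
    mu = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
    P, m = kernel_from_weight(mu)
    print(P[1], is_reversible(P, m))   # {0: 0.5, 2: 0.5} True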
From now on, we stay in the following setting: we have a weighted graph (V, μ) satisfying (1.18), the associated reversible Markov kernel P(x, y), and the corresponding random walk (= Markov chain) $\{X_n\}$. Fix a point $x_0 \in V$ and consider the functions
\[
v_n(x) = \mathbb{P}_{x_0}(X_n = x) = P_n(x_0, x)
\]
and
\[
u_n(x) = \mathbb{P}_x(X_n = x_0) = P_n(x, x_0) .
\]
The function $v_n(x)$ is the distribution of $X_n$ at time $n \ge 1$. By Corollary 1.29, we have $\sum_{x \in V} v_n(x) = 1$.
The function $u_n(x)$ is somewhat more convenient to deal with. The functions $v_n$ and $u_n$ are related as follows:
\[
v_n(x)\, \mu(x_0) = P_n(x_0, x)\, \mu(x_0) = P_n(x, x_0)\, \mu(x) = u_n(x)\, \mu(x) ,
\]
where we have used the reversibility of $P_n$. Hence, we have the identity
\[
v_n(x) = \frac{u_n(x)\, \mu(x)}{\mu(x_0)} . \qquad (1.20)
\]
Extend $u_n$ and $v_n$ to n = 0 by setting $u_0 = v_0 = 1_{\{x_0\}}$, where $1_A$ denotes the indicator function of a set A ⊂ V, that is, the function that has value 1 at any point of A and value 0 outside A.
Corollary 1.32. For any reversible Markov chain, we have, for all x ∈ V and n = 0, 1, 2, ...,
\[
v_{n+1}(x) = \sum_{y} \frac{1}{\mu(y)}\, v_n(y)\, \mu_{xy} \qquad (1.21)
\]


(the forward equation) and
\[
u_{n+1}(x) = \frac{1}{\mu(x)} \sum_{y} u_n(y)\, \mu_{xy} \qquad (1.22)
\]
(the backward equation).
Proof. If n = 0 then (1.21) becomes
\[
P(x_0, x) = \frac{1}{\mu(x_0)}\, \mu_{x x_0} ,
\]
which is a defining identity for P. For $n \ge 1$, we obtain, using (1.19) and (1.15),
\[
\sum_{y} \frac{1}{\mu(y)}\, v_n(y)\, \mu_{yx} = \sum_{y} P_n(x_0, y)\, P(y, x) = P_{n+1}(x_0, x) = v_{n+1}(x) ,
\]
which proves (1.21). Substituting (1.20) into (1.21), we obtain (1.22). $\square$

In particular, for a simple random walk we have $\mu_{xy} = 1$ for x ∼ y and $\mu(x) = \deg(x)$, so that we obtain the following identities:
\[
v_{n+1}(x) = \sum_{y \sim x} \frac{1}{\deg(y)}\, v_n(y) ,
\qquad
u_{n+1}(x) = \frac{1}{\deg(x)} \sum_{y \sim x} u_n(y) .
\]
The last identity means that $u_{n+1}(x)$ is the mean value of $u_n(y)$ taken over the points y ∼ x. Note that in the case of a regular graph, when $\deg(x) \equiv \mathrm{const}$, we have $u_n \equiv v_n$ by (1.20).
Example 1.33. Let us compute the function $u_n(x)$ on the lattice graph $\mathbb{Z}$. Since $\mathbb{Z}$ is regular, $u_n = v_n$, so that $u_n(x)$ is the distribution of $X_n$ at time n provided $X_0 = 0$. We evaluate $u_n$ inductively using the initial condition $u_0 = 1_{\{0\}}$ and the recurrence relation
\[
u_{n+1}(x) = \frac{1}{2}\big( u_n(x + 1) + u_n(x - 1) \big) . \qquad (1.23)
\]
The computation of $u_n(x)$ for n = 1, 2, 3, 4, 10 and $|x| \le 4$ is shown here (all empty fields are 0):

    x in Z :   -4      -3      -2      -1       0       1       2       3       4
    u_0  :                                      1
    u_1  :                             1/2             1/2
    u_2  :                     1/4              1/2             1/4
    u_3  :             1/8              3/8             3/8             1/8
    u_4  :    1/16             1/4              3/8             1/4             1/16
    u_10 :    0.117            0.205            0.246           0.205           0.117

One can observe (and prove) that $u_n(x) \to 0$ as $n \to \infty$.


Example 1.34. Consider $u_n(x)$ on the graph $C_3 = (\mathbb{Z}_3, \{\pm 1\})$. The formula (1.23) is still true provided that one understands x as a residue mod 3. The computation of $u_n(x)$ for n = 1, ..., 6 is shown here:

    x in Z_3 :    0       1       2
    u_0      :    0       1       0
    u_1      :   1/2      0      1/2
    u_2      :   1/4     1/2     1/4
    u_3      :   3/8     1/4     3/8
    u_4      :   5/16    3/8     5/16
    u_5      :   11/32   5/16    11/32
    u_6      :   21/64   11/32   21/64

Here one can observe that $u_n(x)$ converges to the constant 1/3 as $n \to \infty$, which will be proved later. Hence, for large n, the probability that $X_n$ visits any given point is nearly 1/3, as expected.
The following table contains a similar computation of $u_n$ on $C_5 = (\mathbb{Z}_5, \{\pm 1\})$:

    x in Z_5 :    0       1       2       3       4
    u_0      :    0       0       1       0       0
    u_1      :    0      1/2      0      1/2      0
    u_2      :   1/4      0      1/2      0      1/4
    u_3      :   1/8     3/8      0      3/8     1/8
    u_4      :   1/4     1/16    3/8     1/16    1/4
    ...
    u_25     :   0.199   0.202   0.198   0.202   0.199

Here $u_n(x)$ approaches 1/5, but the convergence is much slower than in the case of $C_3$.
Consider one more example: the complete graph $K_5$. In this case, the function $u_n(x)$ satisfies the identity
\[
u_{n+1}(x) = \frac{1}{4} \sum_{y \ne x} u_n(y) .
\]
The following table contains the computation of $u_n(x)$ on $K_5$:

    x in K_5 :    0       1       2       3       4
    u_0      :    0       0       1       0       0
    u_1      :   1/4     1/4      0      1/4     1/4
    u_2      :   3/16    3/16    1/4     3/16    3/16
    u_3      :   13/64   13/64   3/16    13/64   13/64
    u_4      :   0.199   0.199   0.199   0.199   0.199

Here the convergence of $u_n$ to the constant 1/5 occurs much faster than in the previous example, although $C_5$ and $K_5$ have the same number of vertices. The extra edges in $K_5$ allow for faster mixing than in $C_5$.
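These tables are easy to reproduce by iterating the backward equation (1.22); the short Python sketch below (an illustration, not from the book; the helper names are chosen for the example) does this for a simple random walk and prints how far $u_n$ is from the constant $1/|V|$, which makes the faster mixing of $K_5$ compared with $C_5$ visible.

    def iterate_u(adj, x0, n):
        # Backward equation for a simple random walk:
        # u_{n+1}(x) is the mean of u_n over the neighbors of x.
        u = {x: 1.0 if x == x0 else 0.0 for x in adj}     # u_0 = indicator of x0
        for _ in range(n):
            u = {x: sum(u[y] for y in adj[x]) / len(adj[x]) for x in adj}
        return u

    C5 = {k: [(k - 1) % 5, (k + 1) % 5] for k in range(5)}
    K5 = {k: [j for j in range(5) if j != k] for k in range(5)}
    for name, adj in (("C5", C5), ("K5", K5)):
        u = iterate_u(adj, 2, 4)
        print(name, max(abs(u[x] - 1 / 5) for x in adj))
    # After 4 steps K5 is already much closer to the uniform value 1/5 than C5.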
As we will see, for finite graphs it is typically the case that the transition function $u_n(x)$ converges to a constant as $n \to \infty$. For the function $v_n$ this means that
\[
v_n(x) = \frac{u_n(x)\, \mu(x)}{\mu(x_0)} \to c\, \mu(x) \quad\text{as } n \to \infty
\]
for some constant c. The constant c is determined by the requirement that $c\,\mu(x)$ is a probability measure on V, that is, from the identity
\[
c \sum_{x \in V} \mu(x) = 1 .
\]
Hence, $c\,\mu(x)$ is the asymptotic distribution of $X_n$ as $n \to \infty$. The function $c\,\mu(x)$ on V is called the stationary measure or the equilibrium measure of the Markov chain.
One of the problems for finite graphs that will be discussed here is the rate of convergence of $v_n(x)$ to the equilibrium measure. The point is that, for large n, $X_n$ can be considered as a random variable with the distribution function $c\,\mu(x)$, so that we obtain a natural generator of a random variable with a prescribed law. However, in order to be able to use this, one should know for which n the distribution of $X_n$ is close enough to the equilibrium measure. The value of n for which this is the case is called a mixing time.
For infinite graphs the transition functions $u_n(x)$ and $v_n(x)$ typically converge to 0 as $n \to \infty$, and an interesting question is to determine the rate of convergence to 0. For example, we will prove in Chapter 5 that, for a simple random walk in $\mathbb{Z}^N$,
\[
v_n(x) \simeq n^{-N/2} \quad\text{as } n \to \infty ,
\]
where the relation $\simeq$ means that the ratio of the left- and right-hand sides remains bounded between two positive constants. The long-time behavior of the distribution function $v_n(x)$ is very sensitive to the geometry of the underlying graph.
Another interesting question that arises on infinite graphs, is to distinguish the
following two alternatives in the behavior of the random walk Xn :
(1) Xn returns infinitely often to a given point x0 with probability 1;
(2) Xn visits x0 finitely many times and then never comes back, also with
probability 1.


In the first case, the random walk is called recurrent, and in the second case transient. By a theorem of Polya, a simple random walk in $\mathbb{Z}^N$ is recurrent if and only if $N \le 2$ (see Chapter 6).

1.4. The Laplace operator


Let f(x) be a function on $\mathbb{R}$. Recall that the derivative $f'$ of f is defined by
\[
f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} ,
\]
so that for small h
\[
f'(x) \approx \frac{f(x + h) - f(x)}{h} \approx \frac{f(x) - f(x - h)}{h} .
\]
The operators $\frac{f(x+h)-f(x)}{h}$ and $\frac{f(x)-f(x-h)}{h}$ are called difference operators and can be considered as numerical approximations of the derivative.
Let us determine a similar approximation for the second derivative. We have
\[
f''(x) \approx \frac{f'(x + h) - f'(x)}{h}
\approx \frac{1}{h}\left( \frac{f(x + h) - f(x)}{h} - \frac{f(x) - f(x - h)}{h} \right)
\]
\[
= \frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}
= \frac{2}{h^2}\left( \frac{f(x + h) + f(x - h)}{2} - f(x) \right) .
\]
Hence, $f''$ is approximately determined by the average value of f at the neighboring points x + h and x − h, minus f(x).
For a function f(x, y) on $\mathbb{R}^2$, one can similarly obtain numerical approximations for the second order partial derivatives $\frac{\partial^2 f}{\partial x^2}$ and $\frac{\partial^2 f}{\partial y^2}$, and then for the Laplace operator
\[
\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} ,
\]
which yields
\[
\Delta f(x, y) \approx \frac{4}{h^2}\left( \frac{f(x+h, y) + f(x-h, y) + f(x, y+h) + f(x, y-h)}{4} - f(x, y) \right) .
\]
That is, $\Delta f(x, y)$ is approximately determined by the average value of f at the neighboring points
\[
(x + h, y),\ (x - h, y),\ (x, y + h),\ (x, y - h) ,
\]
minus the value at (x, y).
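As a quick numerical sanity check (not part of the book), the Python sketch below evaluates the five-point expression above for $f(x, y) = x^2 + y^2$, whose exact Laplacian is 4; the function names are illustrative.

    def five_point_laplacian(f, x, y, h):
        # Average of f over the four neighboring points minus f(x, y), scaled by 4 / h^2.
        avg = (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)) / 4.0
        return 4.0 / h**2 * (avg - f(x, y))

    f = lambda x, y: x**2 + y**2        # Delta f = 4 everywhere
    print(five_point_laplacian(f, 1.3, -0.7, 0.01))   # prints 4.0 (up to rounding)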
This observation motivates us to define a discrete version of the Laplace oper-
ator on any graph as follows.
Definition 1.35. Let (V, E) be a locally finite graph without isolated points (so that $0 < \deg(x) < \infty$ for all x ∈ V). For any function $f: V \to \mathbb{R}$, define the function Δf by
\[
\Delta f(x) = \frac{1}{\deg(x)} \sum_{y \sim x} f(y) - f(x) . \qquad (1.24)
\]


The operator Δ on functions on V is called the Laplace operator on the graph (V, E).
In other words, $\Delta f(x)$ is the average value of f(y) over all the vertices y ∼ x minus f(x). Note that the set $\mathbb{R}$ of values of f can be replaced by any vector space over $\mathbb{R}$, for example, by $\mathbb{C}$.
For example, on the lattice graph $\mathbb{Z}$ we have
\[
\Delta f(x) = \frac{f(x + 1) + f(x - 1)}{2} - f(x) ,
\]
while on $\mathbb{Z}^2$
\[
\Delta f(x, y) = \frac{f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1)}{4} - f(x, y) .
\]
The notion of the Laplace operator can be extended to weighted graphs as follows.
Definition 1.36. Let (V, μ) be a locally finite weighted graph without isolated points. For any function $f: V \to \mathbb{R}$, define the function $\Delta_\mu f$ by
\[
\Delta_\mu f(x) = \frac{1}{\mu(x)} \sum_{y} f(y)\, \mu_{xy} - f(x) . \qquad (1.25)
\]
The operator $\Delta_\mu$, acting on functions on V, is called the (weighted) Laplace operator on (V, μ).
Note that the summation in (1.25) can be restricted to y ∼ x because otherwise
μxy = 0. Hence, Δμ f (x) is the difference between the weighted average of f (y) at
the vertices y ∼ x and f (x). The Laplace operator Δ in (1.24) is a particular case
of the weighted Laplace operator Δμ from (1.25) when the weight μ is simple, that
is, when μxy = 1 for all x ∼ y.
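For illustration (not from the book), the definition (1.25) can be implemented in a few lines of Python; the weight is stored as a dictionary of dictionaries, which is an assumption made for this sketch, and for a simple weight the function reduces to the operator (1.24).

    def weighted_laplacian(mu, f):
        # mu: dict x -> dict y -> mu_xy (symmetric weight); f: dict x -> value.
        # Returns Delta_mu f as a dict, following (1.25).
        return {x: sum(f[y] * w for y, w in mu[x].items()) / sum(mu[x].values()) - f[x]
                for x in mu}

    # Example: simple weight on the path 0 - 1 - 2 and f(x) = x^2.
    mu = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
    f = {x: x**2 for x in mu}
    print(weighted_laplacian(mu, f))   # {0: 1.0, 1: 1.0, 2: -3.0}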
Denote by $\mathcal{F}$ the set of all real-valued functions on V. Then $\mathcal{F}$ is obviously a linear space with respect to addition of functions and multiplication by a real constant. Hence $\Delta_\mu$ can be regarded as an operator in $\mathcal{F}$, that is, $\Delta_\mu: \mathcal{F} \to \mathcal{F}$. Note that $\Delta_\mu$ is a linear operator on $\mathcal{F}$, that is,
\[
\Delta_\mu(\lambda f + g) = \lambda\, \Delta_\mu f + \Delta_\mu g
\]
for all functions $f, g \in \mathcal{F}$ and $\lambda \in \mathbb{R}$, which is obvious from (1.25).
Another useful property to mention is that
\[
\Delta_\mu\, \mathrm{const} = 0
\]
(a similar property holds for the differential Laplace operator). Indeed, if $f(x) \equiv c$ then
\[
\frac{1}{\mu(x)} \sum_{y} f(y)\, \mu_{xy} = c\, \frac{1}{\mu(x)} \sum_{y} \mu_{xy} = c ,
\]
whence the claim follows.
Recall that the corresponding reversible Markov kernel is given by
\[
P(x, y) = \frac{\mu_{xy}}{\mu(x)} ,
\]
so that we can write
\[
\Delta_\mu f(x) = \sum_{y} P(x, y)\, f(y) - f(x) = \sum_{y} P(x, y)\, \big( f(y) - f(x) \big) .
\]
Consider the Markov kernel also as an operator on functions as follows:
\[
P f(x) = \sum_{y} P(x, y)\, f(y) .
\]
This operator P is called the Markov operator. Hence, the Laplace operator $\Delta_\mu$ and the Markov operator P are related by the simple identity
\[
\Delta_\mu = P - \mathrm{id} ,
\]
where id is the identity operator in $\mathcal{F}$.

Example 1.37. Let us approximate $f''(x)$ on $\mathbb{R}$ using different values $h_1$ and $h_2$ for the steps of x:
\[
f''(x) \approx \frac{f'(x + h_1) - f'(x)}{h_1}
\approx \frac{1}{h_1}\left( \frac{f(x + h_1) - f(x)}{h_1} - \frac{f(x) - f(x - h_2)}{h_2} \right)
\]
\[
= \frac{1}{h_1}\left( \frac{1}{h_1} f(x + h_1) + \frac{1}{h_2} f(x - h_2) - \Big( \frac{1}{h_1} + \frac{1}{h_2} \Big) f(x) \right)
\]
\[
= \frac{1}{h_1}\left( \frac{1}{h_1} + \frac{1}{h_2} \right)
\left( \frac{\frac{1}{h_1} f(x + h_1) + \frac{1}{h_2} f(x - h_2)}{\frac{1}{h_1} + \frac{1}{h_2}} - f(x) \right) .
\]
Hence, we obtain the weighted average of $f(x + h_1)$ and $f(x - h_2)$ with the weights $\frac{1}{h_1}$ and $\frac{1}{h_2}$, respectively. This average can be realized in a weighted Laplace operator as follows. Consider a sequence of reals $\{x_k\}_{k \in \mathbb{Z}}$ defined by the rules
\[
x_0 = 0,
\qquad
x_{k+1} =
\begin{cases}
x_k + h_1, & k \text{ is even}, \\
x_k + h_2, & k \text{ is odd}.
\end{cases}
\]
For example, $x_1 = h_1$, $x_2 = h_1 + h_2$, $x_{-1} = -h_2$, $x_{-2} = -h_2 - h_1$, etc. Set $V = \{x_k\}_{k \in \mathbb{Z}}$ and define the edge set E on V by $x_k \sim x_{k+1}$. Now define the weight $\mu_{xy}$ on edges by
\[
\mu_{x_k x_{k+1}} =
\begin{cases}
1/h_1, & k \text{ is even}, \\
1/h_2, & k \text{ is odd}.
\end{cases}
\]
Then we have
\[
\mu(x_k) = \mu_{x_k x_{k+1}} + \mu_{x_k x_{k-1}} = 1/h_1 + 1/h_2 ,
\]
and, for any function f on V,
\[
\Delta_\mu f(x_k) = \frac{1}{\mu(x_k)}\Big( f(x_{k+1})\, \mu_{x_k x_{k+1}} + f(x_{k-1})\, \mu_{x_k x_{k-1}} \Big) - f(x_k)
\]
\[
= \frac{1}{1/h_1 + 1/h_2}
\begin{cases}
\frac{1}{h_1} f(x_k + h_1) + \frac{1}{h_2} f(x_k - h_2), & k \text{ is even}, \\[3pt]
\frac{1}{h_2} f(x_k + h_2) + \frac{1}{h_1} f(x_k - h_1), & k \text{ is odd},
\end{cases}
\;-\; f(x_k) .
\]


1.5. The Dirichlet problem


Broadly speaking, the Dirichlet problem is a boundary value problem of the following type: find a function u in a domain Ω assuming that Δu is known in Ω and u is known at the boundary ∂Ω. For example, if Ω is the interval (0, 1) then this problem becomes the following: find a function u(x) on [0, 1] such that
\[
\begin{cases}
u''(x) = f(x) & \text{for all } x \in (0, 1), \\
u(0) = a \ \text{ and } \ u(1) = b,
\end{cases}
\]
where the function f and the reals a, b are given. This problem can be solved by repeated integration, provided f is continuous. A similar problem for the n-dimensional Laplace operator $\Delta = \sum_{k=1}^{n} \frac{\partial^2}{\partial x_k^2}$ is stated as follows: given a bounded open domain $\Omega \subset \mathbb{R}^n$, find a function u in the closure $\overline{\Omega}$ that satisfies the conditions
\[
\begin{cases}
\Delta u(x) = f(x) & \text{for all } x \in \Omega, \\
u(x) = g(x) & \text{for all } x \in \partial\Omega,
\end{cases}
\qquad (1.26)
\]
where f and g are given functions. Under certain natural hypotheses, this problem can be solved, and the solution is unique.
One of the sources of the Dirichlet problem is electrical engineering. If u (x) is
the potential of an electrostatic field in Ω ⊂ R3 , then u satisfies in Ω the equation
Δu = f where −f (x) is the density of a charge inside Ω, while the values of u at
the boundary are determined by the exterior conditions. For example, if the surface
∂Ω is a metal then it is equipotential so that u (x) = const on ∂Ω.
Another source of the Dirichlet problem is thermodynamics. If u (x) is a sta-
tionary temperature at a point x in a domain Ω, then u satisfies the equation
Δu = f where −f (x) is the heat source at the point x. Again the values of u at
∂Ω are determined by the exterior conditions.
Let us consider an analogous problem on a graph, which, in particular, arises from a discretization of the problem (1.26) for numerical purposes. For any set Ω ⊂ V, denote by $\Omega^c$ its complement, that is, $\Omega^c = V \setminus \Omega$.
Theorem 1.38. Let (V, μ) be a connected locally finite weighted graph, and let Ω be a subset of V. Consider the following Dirichlet problem:
\[
\begin{cases}
\Delta_\mu u(x) = f(x) & \text{for all } x \in \Omega, \\
u(x) = g(x) & \text{for all } x \in \Omega^c,
\end{cases}
\qquad (1.27)
\]
where $u: V \to \mathbb{R}$ is an unknown function while the functions $f: \Omega \to \mathbb{R}$ and $g: \Omega^c \to \mathbb{R}$ are given. If Ω is finite and $\Omega^c$ is non-empty then, for all functions f, g as above, the Dirichlet problem (1.27) has a unique solution.
Here Ωc := V \ Ω. Note that, by the second condition in (1.27), the function u
is already defined outside Ω, so the problem is to construct an extension of u to Ω
that should satisfy the equation Δμ u = f in Ω.
Define the vertex boundary of Ω as follows:
∂v Ω = {y ∈ Ωc : y ∼ x for some x ∈ Ω} .
Observe that the Laplace equation Δμ u (x) = f (x) for x ∈ Ω involves the values
u (y) at neighboring vertices y of x, and any neighboring point y belongs to either
Ω or to ∂v Ω. Hence, the equation Δμ u (x) = f (x) uses the prescribed values of u

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
1.5. THE DIRICHLET PROBLEM 23

only at the boundary ∂v Ω, which means that the second condition in (1.27) can be restricted to ∂v Ω as follows:
u(x) = g(x) for all x ∈ ∂v Ω.
This condition (as well as the second condition in (1.27)) is called the boundary condition.
If Ωc is empty, then the statement of Theorem 1.38 is not true. For example,
in this case any constant function u satisfies the same equation Δμ u = 0 so that
there is no uniqueness. The existence also fails in this case (cf. Exercise 16).
The proof of Theorem 1.38 is based on the following maximum principle. A
function u : V → R is called subharmonic in Ω if Δμ u (x) ≥ 0 for all x ∈ Ω, and
superharmonic in Ω if Δμ u (x) ≤ 0 for all x ∈ Ω. A function u is called harmonic
in Ω if it is both subharmonic and superharmonic, that is, if it satisfies the Laplace
equation Δμ u = 0. For example, the constant function is harmonic on all sets.
Lemma 1.39. (A maximum/minimum principle) Let (V, μ) be a connected lo-
cally finite weighted graph, and let Ω be a non-empty finite subset of V such that
Ωc is non-empty. Then, for any function u : V → R, that is subharmonic in Ω, we
have
max_Ω u ≤ sup_{Ωc} u,
and for any function u : V → R, that is superharmonic in Ω, we have
min_Ω u ≥ inf_{Ωc} u.

Proof. It suffices to prove the first claim. If sup_{Ωc} u = +∞, then there is nothing to prove. If sup_{Ωc} u < ∞, then, by replacing u by u + const, we can assume that sup_{Ωc} u = 0. Set
M = max_Ω u
and show that M ≤ 0, which will settle the claim. Assume from the contrary that
M > 0 and consider the set
S := {x ∈ V : u (x) = M } . (1.28)
Clearly, S ⊂ Ω and S is non-empty.
Claim 1. If x ∈ S then all neighbors of x also belong to S.
Indeed, we have Δμ u(x) ≥ 0, which can be rewritten in the form
u(x) ≤ Σ_{y∼x} P(x, y) u(y).
Since u(y) ≤ M for all y ∈ V, we have
Σ_{y∼x} P(x, y) u(y) ≤ M Σ_{y∼x} P(x, y) = M.

Since u (x) = M , all inequalities in the above two lines must be equalities, whence
it follows that u (y) = M for all y ∼ x. This implies that all such y belong to S.
Claim 2. Let S be a non-empty set of vertices of a connected graph (V, E), such
that x ∈ S implies that all neighbors of x belong to S. Then S = V .
Indeed, let x ∈ S and y be any other vertex. Then there is a path {xk}_{k=0}^{n} between x and y, that is,
x = x0 ∼ x1 ∼ x2 ∼ ... ∼ xn = y.


Since x0 ∈ S and x1 ∼ x0 , we obtain x1 ∈ S. Since x2 ∼ x1 , we obtain x2 ∈ S. By


induction, we conclude that all xk ∈ S, whence y ∈ S.
It follows from the two claims that the set (1.28) must coincide with V , which
is not possible since u (x) ≤ 0 in Ωc . This contradiction shows that M ≤ 0. 

Proof of Theorem 1.38. Let us first prove the uniqueness. If we have two
solutions u1 and u2 of (1.27) then the difference u = u1 − u2 satisfies the conditions

Δμ u (x) = 0 for all x ∈ Ω,


u (x) = 0 for all x ∈ Ωc .
We need to prove that u ≡ 0. Since u is both subharmonic and superharmonic in
Ω, Lemma 1.39 yields
0 = inf_{Ωc} u ≤ min_Ω u ≤ max_Ω u ≤ sup_{Ωc} u = 0,

whence u ≡ 0.
Let us now prove the existence of a solution to (1.27) for all f, g. For any x ∈ Ω,
rewrite the equation Δμ u(x) = f(x) in the form
Σ_{y∼x, y∈Ω} P(x, y) u(y) − u(x) = f(x) − Σ_{y∼x, y∈Ωc} P(x, y) g(y),     (1.29)

where we have moved to the right hand side the terms with y ∈ Ωc and used that
u(y) = g(y). Denote by FΩ the set of all real-valued functions u on Ω and observe that the left hand side of (1.29) can be regarded as an operator in this space; denote it by Lu, that is,
Lu(x) = Σ_{y∼x, y∈Ω} P(x, y) u(y) − u(x),

for all x ∈ Ω. Rewrite the equation (1.29) in the form

Lu = h,

where h is the right hand side of (1.29), that is, a given function on Ω. Note that FΩ is a linear space. Since the family {1{x}}_{x∈Ω} of indicator functions forms a basis in FΩ, we obtain that dim FΩ = |Ω| < ∞. Hence, the operator L : FΩ → FΩ is a linear operator in a finite-dimensional space, and the first part of the proof shows that Lu = 0 implies u = 0 (indeed, just set f = 0 and g = 0 in (1.29)); that is, the operator L is injective. By linear algebra, any injective operator acting between spaces of equal finite dimension must be bijective (alternatively, one can say that the injectivity of L implies that det L ≠ 0, whence it follows that L is invertible and, hence, bijective). Hence, for any h ∈ FΩ, there is a solution u = L−1 h ∈ FΩ, which
finishes the proof. 

Let us discuss how to calculate numerically the solution of the Dirichlet prob-
lem. Set N = |Ω| and observe that solving the Dirichlet problem amounts to solving
a linear system Lu = h where L is an N × N matrix. If N is very large then the
usual elimination method (not to say about inversion of matrices) may require too
many operations. A more economical method, the Jacobi method, uses an approximating sequence {un} that is constructed as follows. Using that Δμ = P − id, rewrite the

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
1.5. THE DIRICHLET PROBLEM 25

equation Δμ u = f in the form u = P u − f and consider a sequence of functions {un} given by the recurrence relation
un+1 = P un − f in Ω,   un+1 = g in Ωc.
The initial function u0 can be chosen arbitrarily to satisfy the boundary condition;
for example, take u0 = 0 in Ω and u0 = g in Ωc . In the case f = 0, we obtain the
same recurrence relation un+1 = P un as for the distribution of the random walk,
although now we have in addition some boundary values.
Let us estimate the amount of computation for this method. Assuming that deg(x) is uniformly bounded, computation of P un(x) − f(x) for all x ∈ Ω requires ≃ N operations, and this should be multiplied by the number of iterations. As we will see later (see Section 4.6), if Ω is a subset of Zm of a cubic shape then the number of iterations should be ≃ N^{2/m}. Hence, the Jacobi method requires ≃ N^{1+2/m} operations. For comparison, the row reduction requires ≃ N^3 operations.² If m = 1 then the Jacobi method also requires ≃ N^3 operations, but for higher dimensions m ≥ 2 the Jacobi method is more economical.
Example 1.40. Let us look at a numerical example in the lattice graph Z for
the set Ω = {1, 2, ..., 9}, for the boundary value problem
Δu (x) = 0 in Ω
u (0) = 0, u (10) = 1.
The exact solution is a linear function u (x) = x/10. Using an explicit expression
for Δ, write the approximating sequence in the form
un+1(x) = (un(x + 1) + un(x − 1))/2,   x ∈ {1, 2, ..., 9},
while un (0) = 0 and un (10) = 1 for all n. Set u0 (x) = 0 for x ∈ {1, 2, ..., 9}. Then
computation yields the following:
x ∈ Z   0     1      2     3     4     5     6      7     8     9      10
u0      0     0      0     0     0     0     0      0     0     0      1
u1      0     0      0     0     0     0     0      0     0     1/2    1
u2      0     0      0     0     0     0     0      0     1/4   1/2    1
u3      0     0      0     0     0     0     0      1/8   1/4   5/8    1
u4      0     0      0     0     0     0     1/16   1/8   3/8   5/8    1
...     ...   ...    ...   ...   ...   ...   ...    ...   ...   ...    ...
u50     0.00  0.084  0.17  0.26  0.35  0.45  0.55   0.68  0.77  0.88   1.00
...     ...   ...    ...   ...   ...   ...   ...    ...   ...   ...    ...
u81     0.00  0.097  0.19  0.29  0.39  0.49  0.59   0.69  0.79  0.897  1.00

so that u81 is rather close to the exact solution. Here N = 9 and, indeed, one needs ≃ N^2 iterations to approach the solution.

² For the row reduction method, one needs ≃ N^2 row operations, and each row operation requires ≃ N elementary operations. Hence, one needs ≃ N^3 elementary operations.
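As an illustration of the scheme above, here is a minimal numerical sketch of the Jacobi iteration for the Dirichlet problem of Example 1.40 (Python with NumPy; the array layout and the choice of 81 iterations are illustrative and not part of the text). Running it reproduces the rows u1, ..., u4, u50, u81 of the table above up to rounding.

import numpy as np

# Dirichlet problem on Z with Omega = {1,...,9}:
# Delta u = 0 in Omega, u(0) = 0, u(10) = 1; exact solution u(x) = x/10.
u = np.zeros(11)
u[0], u[10] = 0.0, 1.0            # boundary condition on Omega^c = {0, 10}
exact = np.arange(11) / 10.0

for n in range(1, 82):
    v = u.copy()
    # Jacobi step u_{n+1} = P u_n (here f = 0), with P u(x) = (u(x-1) + u(x+1)) / 2
    v[1:10] = 0.5 * (u[0:9] + u[2:11])
    u = v
    if n in (1, 2, 3, 4, 50, 81):
        print(n, np.round(u, 3))

print("max error after 81 iterations:", np.max(np.abs(u - exact)))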


CHAPTER 2

Spectral properties of the Laplace operator

Let (V, μ) be a locally finite weighted graph without isolated points and Δμ be the weighted Laplace operator on (V, μ). As above, denote by F the space of all real-valued functions on V.

2.1. Green’s formula


Let us consider the difference operator ∇xy that is defined for any two vertices
x, y ∈ V and maps F to R as follows:
∇xy f = f (y) − f (x) for any f ∈ F.
The function ∇f (x, y) := ∇xy f , defined on V × V , is called the discrete gradient
of f .
The relation between the Laplace operator Δμ and the gradient is given by
Δμ f(x) = (1/μ(x)) Σ_y (∇xy f) μxy = Σ_y P(x, y)(∇xy f),
since
Σ_y P(x, y)(f(y) − f(x)) = Σ_y P(x, y) f(y) − Σ_y P(x, y) f(x)
= Σ_y P(x, y) f(y) − f(x) = Δμ f(x).

The following theorem is one of the main tools when working with the discrete
Laplace operator.
Theorem 2.1. (Green’s formula) Let (V, μ) be a locally finite weighted graph
without isolated points, and let Ω be a non-empty finite subset of V . Then, for any
two functions f, g on V,
Σ_{x∈Ω} Δμ f(x) g(x) μ(x) = −(1/2) Σ_{x,y∈Ω} (∇xy f)(∇xy g) μxy
                           + Σ_{x∈Ω} Σ_{y∈Ωc} (∇xy f) g(x) μxy.     (2.1)

The formula (2.1) is analogous to the integration by parts formula
∫_a^b f″(x) g(x) dx = −∫_a^b f′(x) g′(x) dx + [f′(x) g(x)]_a^b,
where f and g are smooth enough functions on a bounded interval [a, b]. A similar
formula holds also for the differential Laplace operator in bounded domains of Rn .

If V is finite and Ω = V, then Ωc is empty so that the "boundary" term in (2.1) vanishes, and we obtain
Σ_{x∈V} Δμ f(x) g(x) μ(x) = −(1/2) Σ_{x,y∈V} (∇xy f)(∇xy g) μxy.     (2.2)

Proof. We have
Σ_{x∈Ω} Δμ f(x) g(x) μ(x) = Σ_{x∈Ω} ( (1/μ(x)) Σ_{y∈V} (f(y) − f(x)) μxy ) g(x) μ(x)
= Σ_{x∈Ω} Σ_{y∈V} (f(y) − f(x)) g(x) μxy
= Σ_{x∈Ω} Σ_{y∈Ω} (f(y) − f(x)) g(x) μxy     (2.3)
  + Σ_{x∈Ω} Σ_{y∈Ωc} (f(y) − f(x)) g(x) μxy
= Σ_{y∈Ω} Σ_{x∈Ω} (f(x) − f(y)) g(y) μxy     (2.4)
  + Σ_{x∈Ω} Σ_{y∈Ωc} (∇xy f) g(x) μxy,
where in (2.4) we have switched x and y. Adding together the identities in (2.3) and (2.4), we obtain
Σ_{x∈Ω} Δμ f(x) g(x) μ(x) = (1/2) Σ_{x,y∈Ω} (f(y) − f(x))(g(x) − g(y)) μxy
  + Σ_{x∈Ω} Σ_{y∈Ωc} (∇xy f) g(x) μxy,
which is equivalent to (2.1). □
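The identity (2.2) is easy to test numerically. The following sketch (Python/NumPy, added here for illustration; the random weighted graph is an arbitrary choice, not one used in the text) compares the two sides of (2.2) for random functions f and g.

import numpy as np

rng = np.random.default_rng(0)
N = 6
# random symmetric weights mu_xy on a complete graph without loops
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
mu = W.sum(axis=1)                     # mu(x) = sum_y mu_xy

f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Delta_mu f(x) = (1/mu(x)) * sum_y (f(y) - f(x)) mu_xy
lap_f = (W @ f - mu * f) / mu

lhs = np.sum(lap_f * g * mu)
# -(1/2) * sum_{x,y} (f(y) - f(x)) (g(y) - g(x)) mu_xy
df = f[None, :] - f[:, None]
dg = g[None, :] - g[:, None]
rhs = -0.5 * np.sum(df * dg * W)

print(lhs, rhs)    # the two numbers agree up to rounding errors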

2.2. Eigenvalues of the Laplace operator


Let (V, μ) be a finite connected weighted graph where N := |V | > 1. Then the
vector space F of all real-valued functions on V has dimension N , and the Laplace
operator Δμ : F → F is a linear operator in an N -dimensional vector space. We
will investigate the spectral properties of this operator.
Recall a few facts from linear algebra. Let A be a linear operator in an N-dimensional vector space V over R. A vector v ≠ 0 is called an eigenvector of A if Av = λv for some constant λ that is called an eigenvalue of A. In general, one allows complex-valued eigenvalues λ by considering a complexification of V and A. The set of all complex eigenvalues of A is called the spectrum of A and is denoted by spec A.
All the eigenvalues of A can be found from the characteristic equation
det (A − λ id) = 0.
Here A can be represented as an N × N matrix in any basis of V; therefore,
det (A − λ id) is a polynomial of λ of degree N (that is called the characteristic


polynomial of A), and it has N complex roots, counted with multiplicities. Hence,
in general the spectrum of A consists of N complex eigenvalues.
In the case A = Δμ, the eigenvectors of Δμ are also referred to as eigenfunctions. It will be more convenient to work with the operator L = −Δμ so that
Lf(x) = f(x) − (1/μ(x)) Σ_{y∼x} f(y) μxy = f(x) − Σ_{y∈V} f(y) P(x, y).
In the case of a simple weight, we have
Lf(x) = f(x) − (1/deg(x)) Σ_{y∼x} f(y).

Example 2.2. 1. Let (V, E) = K2, that is, V = {1, 2} and 1 ∼ 2. Then
Lf(1) = f(1) − f(2),
Lf(2) = f(2) − f(1),
so that the equation Lf = λf becomes
(1 − λ) f(1) = f(2),
(1 − λ) f(2) = f(1),
whence (1 − λ)² f(k) = f(k) for both k = 1, 2. Since f ≢ 0, we obtain the equation (1 − λ)² = 1, whence we find two eigenvalues λ = 0 and λ = 2. Alternatively, considering a function f as a column-vector (f(1), f(2))ᵀ, we can represent the action of L as a matrix multiplication:
( Lf(1) )   (  1  −1 ) ( f(1) )
( Lf(2) ) = ( −1   1 ) ( f(2) ),
so that the eigenvalues of L coincide with those of the matrix
(  1  −1 )
( −1   1 ).
Its characteristic equation is (1 − λ)² − 1 = 0, whence we obtain again the same two eigenvalues λ = 0 and λ = 2.
2. Let (V, E) = C3 = K3, that is, V = {1, 2, 3} and 1 ∼ 2 ∼ 3 ∼ 1. We have then
Lf(1) = f(1) − (1/2)(f(2) + f(3)),
and similar identities for Lf(2) and Lf(3). The action of L can be written as a matrix multiplication:
( Lf(1) )   (   1   −1/2  −1/2 ) ( f(1) )
( Lf(2) ) = ( −1/2    1   −1/2 ) ( f(2) )
( Lf(3) )   ( −1/2  −1/2    1  ) ( f(3) ).
The characteristic polynomial of the above 3×3 matrix is −(λ³ − 3λ² + (9/4)λ). Evaluating its roots, we obtain the following eigenvalues of L: λ = 0 (simple) and λ = 3/2 with multiplicity 2.


3. Consider a graph with vertices V = {1, 2, 3} and edges 1 ∼ 2 ∼ 3. Then
Lf(1) = f(1) − f(2),
Lf(2) = f(2) − (1/2)(f(1) + f(3)),
Lf(3) = f(3) − f(2),
so that the matrix of L is
(   1    −1     0   )
( −1/2    1   −1/2  )
(   0    −1     1   ).
The characteristic polynomial is −(λ³ − 3λ² + 2λ), and the eigenvalues are λ = 0, λ = 1, and λ = 2.
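As a quick cross-check of Example 2.2, one can compute the spectra of the three matrices above numerically. The sketch below (Python/NumPy, not part of the text) prints the eigenvalues of L for K2, C3 and the path graph on three vertices.

import numpy as np

L_K2   = np.array([[ 1., -1. ],
                   [-1.,  1. ]])
L_C3   = np.array([[ 1. , -0.5, -0.5],
                   [-0.5,  1. , -0.5],
                   [-0.5, -0.5,  1. ]])
L_path = np.array([[ 1. , -1. ,  0. ],
                   [-0.5,  1. , -0.5],
                   [ 0. , -1. ,  1. ]])

for name, L in [("K2", L_K2), ("C3", L_C3), ("path 1~2~3", L_path)]:
    # L_path is not a symmetric matrix, but it is symmetric with respect to the
    # inner product weighted by deg(x); its eigenvalues are still real, so we
    # use a general eigenvalue solver and keep the real parts.
    ev = np.sort(np.linalg.eigvals(L).real)
    print(name, np.round(ev, 6))
# expected output: [0, 2], [0, 1.5, 1.5], [0, 1, 2]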
Coming back to the general theory, assume now that V is an inner product
space. Recall that an inner product (u, v) (where u, v ∈ V) is a bilinear, symmetric,
positive definite function on V × V. An operator A is called symmetric (or self-
adjoint) with respect to the inner product if (Au, v) = (u, Av) for all u, v ∈ V. The
following theorem collects important results from linear algebra about symmetric
operators.
Theorem 2.3. Let A be a symmetric operator in an N-dimensional inner product space V over R.
(a) All the eigenvalues of A are real. Hence, they can be enumerated in an increasing order as λ1, ..., λN where each eigenvalue is counted with multiplicity.
(b) (Diagonalization of symmetric operators) There is an orthonormal¹ basis {vk}_{k=1}^{N} in V such that each vk is an eigenvector of A with the eigenvalue λk, that is, Avk = λk vk. Equivalently, the matrix of A in the basis {vk} is diag(λ1, ..., λN).
(c) (The variational principle) For any non-zero vector v ∈ V, define its Rayleigh quotient by
R(v) = (Av, v)/(v, v).
Then the following identities are true for all k = 1, ..., N:
λk = R(vk) = inf_{v⊥v1,...,vk−1} R(v) = sup_{v⊥vk+1,...,vN} R(v).
In particular,
λ1 = inf_{v≠0} R(v) and λN = sup_{v≠0} R(v).

Here, v⊥u means that u and v are orthogonal, that is, (v, u) = 0.
One of the consequences of part (b) is that the algebraic multiplicity of any
eigenvalue λ, that is, its multiplicity as a root of the characteristic polynomial,
coincides with its geometric multiplicity dim ker (A − λ id), that is, the maximal
number of linearly independent eigenvectors with the same eigenvalue λ.
¹ A sequence {vk} of vectors is called orthonormal if (vk, vj) = 1 for k = j and (vk, vj) = 0 for k ≠ j.


An eigenvalue λ is called simple if its multiplicity is 1. That means, on the


one hand, that this eigenvalue occurs in the list λ1 , ..., λN exactly once, and on the
other hand, that the equation Av = λv has exactly one non-zero solution v up to
multiplication by const .
We will apply Theorem 2.3 to the Laplace operator Δμ in the vector space F of real-valued functions on V. Consider in F the following inner product: for any two functions f, g ∈ F, set
(f, g) := Σ_{x∈V} f(x) g(x) μ(x),
which can be considered as the integration of fg against the measure μ on V. Obviously, all the axioms of an inner product are satisfied: (f, g) is bilinear, symmetric, and positive definite (the latter means that (f, f) > 0 for all f ≠ 0).
Lemma 2.4. The operator Δμ is symmetric with respect to the above inner
product, that is,
(Δμ f, g) = (f, Δμ g)
for all f, g ∈ F.
Proof. Indeed, by the Green formula (2.2), we have
(Δμ f, g) = Σ_{x∈V} Δμ f(x) g(x) μ(x) = −(1/2) Σ_{x,y∈V} (∇xy f)(∇xy g) μxy,
and the last expression is symmetric in f, g so that it is equal also to (Δμ g, f). □


Hence, Theorem 2.3 applies to the operator Δμ. In Theorem 2.7 below, we will give more detailed information about the spectrum of Δμ. It will be more convenient to work with the operator L = −Δμ that is called a positive definite Laplacian. By definition, we have
Lf(x) = f(x) − (1/μ(x)) Σ_{y∈V} f(y) μxy.

By the Green formula of Theorem 2.1, the Rayleigh quotient of L is given by
R(f) = (Lf, f)/(f, f) = −(Δμ f, f)/(f, f) = ( (1/2) Σ_{x,y∈V} (∇xy f)² μxy ) / ( Σ_{x∈V} f(x)² μ(x) ).
The quantity Σ_{x∈V} f²(x) μ(x) here is just the square norm of f in L²(V, μ), while (1/2) Σ_{x,y∈V} (∇xy f)² μxy is the energy of f, that is, a discrete analogue of the Dirichlet integral ∫_{Rn} |∇f|² dx.
Before we can state the main result of this section – Theorem 2.7, we need also
the notion of a bipartite graph.
Definition 2.5. A graph (V, E) is called bipartite if V admits a partition into two non-empty disjoint subsets V1, V2 such that if x and y belong to the same set Vi then x ≁ y. The pair (V1, V2) is called a bipartition of the graph (V, E).
In terms of coloring, one can say that V is bipartite if its vertices can be colored
by two colors, say black and white, so that the vertices of the same color are not
connected by an edge.


Example 2.6. Here are some examples of bipartite graphs.


1. Z is bipartite with the following partition: V1 is the set of all odd integers
and V2 is the set of all even integers.
2. Zn is bipartite with the following partition: V1 is the set of all points
(x1 , ..., xn ) ∈ Zn with an odd sum x1 + ... + xn , and V2 is the set of all
points from Zn with an even sum x1 + ... + xn (see Figure 2.1).

Figure 2.1. Bipartition of Z2

3. A cycle Cn is bipartite provided n is even.


4. A complete bipartite graph Kn,m is bipartite.
5. A binary cube Z_2^n = {0, 1}^n is bipartite with the following bipartition: V1 is the set of all points (x1, ..., xn) ∈ Z_2^n with an odd number of 1's, and V2 is the set of all points (x1, ..., xn) ∈ Z_2^n with an even number of 1's.
Theorem 2.7. For any finite, connected, weighted graph (V, μ) with N :=
|V | > 1, the following is true.
(a) Zero is a simple eigenvalue of L.
(b) All the eigenvalues of L are contained in [0, 2].
(c) If (V, μ) is not bipartite then all the eigenvalues of L are in [0, 2).
(d) If (V, μ) is bipartite and λ is an eigenvalue of L then 2 − λ is also an eigenvalue of L, with the same multiplicity as λ. In particular, 2 is a simple eigenvalue of L.
Since dim F = N, the operator L has N real eigenvalues counted with multiplicity. Denote by {λk}_{k=0}^{N−1} the sequence of the eigenvalues of L in an increasing order. Note that the smallest eigenvalue is denoted by λ0 rather than by λ1. By
Theorem 2.7 we have
0 = λ0 < λ1 ≤ λ2 ≤ ... ≤ λN −1 ≤ 2.
Also, (V, μ) is bipartite if and only if λN −1 = 2.
Proof of Theorem 2.7. (a) Since L1 = 0, the constant function is an eigen-
function with the eigenvalue 0. Assume now that f is an eigenfunction of the
eigenvalue 0 and prove that f ≡ const, which will imply that 0 is a simple eigen-
value. If Lf = 0 then it follows from (2.2) with g = f that
Σ_{x,y∈V: x∼y} (f(y) − f(x))² μxy = 0.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
2.2. EIGENVALUES OF THE LAPLACE OPERATOR 33

In particular, f(x) = f(y) for any two neighboring vertices x, y. The connectedness of the graph means that any two vertices x, y ∈ V can be connected to each other by a path {xk}_{k=0}^{m} where
x = x0 ∼ x1 ∼ ... ∼ xm = y,
whence it follows that f(x0) = f(x1) = ... = f(xm) and f(x) = f(y). Since this
is true for all x, y ∈ V , we obtain f ≡ const.
Alternatively, one can prove this using the maximum principle of Lemma 1.39.
Indeed, choose a point x0 ∈ V and consider the set Ω = V \ {x0 }. Since this set is
finite and function f is harmonic, we obtain by Lemma 1.39 that
inf_{Ωc} f ≤ inf_Ω f ≤ sup_Ω f ≤ sup_{Ωc} f.
However, inf_{Ωc} f = sup_{Ωc} f = f(x0), whence it follows that f ≡ f(x0) = const.
(b) Let λ be an eigenvalue of L with an eigenfunction f. Using Lf = λf and (2.2), we obtain
λ Σ_{x∈V} f²(x) μ(x) = Σ_{x∈V} Lf(x) f(x) μ(x)
= (1/2) Σ_{x,y∈V: x∼y} (f(y) − f(x))² μxy.     (2.5)
It follows from (2.5) that λ ≥ 0. Using the elementary inequality
(a + b)² ≤ 2(a² + b²),
we obtain
λ Σ_{x∈V} f²(x) μ(x) ≤ Σ_{x,y∈V: x∼y} (f(y)² + f(x)²) μxy
= Σ_{x,y∈V} f(y)² μxy + Σ_{x,y∈V} f(x)² μxy
= Σ_{y∈V} f(y)² μ(y) + Σ_{x∈V} f(x)² μ(x)
= 2 Σ_{x∈V} f(x)² μ(x).     (2.6)

It follows from (2.6) that λ ≤ 2.


(c) We need to prove that λ = 2 is not an eigenvalue. Assume on the contrary that λ = 2 is an eigenvalue with an eigenfunction f, and prove that (V, μ) is bipartite. Since λ = 2, all the inequalities in the above calculation (2.6) must become equalities. In particular, we must have for all x ∼ y that
(f(x) − f(y))² = 2(f(x)² + f(y)²),
which is equivalent to
f(x) + f(y) = 0.
If f(x0) = 0 for some x0 then it follows that f(x) = 0 for all neighbors of x0. Since the graph is connected, we obtain that f(x) ≡ 0, which is not possible for an eigenfunction. Hence, f(x) ≠ 0 for all x ∈ V. Then V splits into a disjoint union of two sets:
V+ = {x ∈ V : f(x) > 0} and V− = {x ∈ V : f(x) < 0}.


The above argument shows that if x ∈ V + , then all neighbors of x are in V − , and
vice versa. Hence, (V, μ) is bipartite, which finishes the proof.
(d) Let V + , V − be a bipartition of V . Then P (x, y) = 0 if x and y belong to
the same subset V + or V − . Given an eigenfunction f of L with the eigenvalue λ,
consider the function
g(x) = f(x) for x ∈ V+,   g(x) = −f(x) for x ∈ V−.     (2.7)
Let us show that g is an eigenfunction of L with the eigenvalue 2 − λ. For all x ∈ V+, we have
Lg(x) = g(x) − Σ_{y∈V} P(x, y) g(y)
= g(x) − Σ_{y∈V−} P(x, y) g(y)
= f(x) + Σ_{y∈V−} P(x, y) f(y)
= 2f(x) − Lf(x) = 2f(x) − λf(x) = (2 − λ) g(x),
that is,
Lg (x) = (2 − λ) g (x) ,
and similarly one proves this identity also for x ∈ V − . Hence, 2 − λ is an eigenvalue
of L with the eigenfunction g.
Let m be the multiplicity of λ as an eigenvalue of L, and m′ be the multiplicity of 2 − λ. Let us prove that m′ = m. There exist m linearly independent eigenfunctions f1, ..., fm of the eigenvalue λ. Using (2.7), we construct m eigenfunctions g1, ..., gm of the eigenvalue 2 − λ, that are obviously linearly independent, whence we conclude that m′ ≥ m. Applying the same argument to the eigenvalue 2 − λ instead of λ, we obtain the opposite inequality m ≥ m′, whence m = m′.
Finally, since 0 is a simple eigenvalue of L, it follows that 2 is also a simple
eigenvalue of L. It follows from the proof that the eigenfunction g (x) with the
eigenvalue 2 is as follows: g (x) = c on V + and g (x) = −c on V − , for any non-zero
constant c. 

2.3. Convergence to equilibrium


Let P be the Markov operator associated with a weighted graph (V, μ). We
consider P as a linear operator from F to F. Recall that it is related to the Laplace
operator L by the identity P = id −L. It follows that all the eigenvalues of P have
the form 1 − λ where λ is an eigenvalue of L, and the eigenfunctions of P and L
are the same.
Denote αk = 1 − λk so that {αk}_{k=0}^{N−1} is the sequence of all the eigenvalues of P in a decreasing order, counted with multiplicities. By Theorem 2.7, we have
1. α0 = 1 is a simple eigenvalue whose eigenfunction is const;
2. αk ∈ [−1, 1] ;
3. αN −1 = −1 if and only if the graph is bipartite (and in this case the
eigenvalue −1 is simple).
We can write
−1 ≤ αN −1 ≤ ... ≤ α1 < α0 = 1.


Since spec P ⊂ [−1, 1], it follows from the general theory of symmetric operators that ∥P∥ ≤ 1.
In the next statement, we consider the powers P n of P for any positive integer
n, using composition of operators.
Lemma 2.8. For any f ∈ F and any positive integer n, we have the following
identity:

P^n f(x) = Σ_{y∈V} Pn(x, y) f(y),     (2.8)
where Pn is the n-step transition function.


Proof. For n = 1, (2.8) becomes the definition of the Markov operator. For the inductive step from n to n + 1, we have
P^{n+1} f(x) = P^n (P f)(x) = Σ_y Pn(x, y) P f(y)
= Σ_y Pn(x, y) Σ_z P(y, z) f(z)
= Σ_z ( Σ_y Pn(x, y) P(y, z) ) f(z)
= Σ_z P_{n+1}(x, z) f(z),

which finishes the proof. In the last line we have used the identity (1.15) of Propo-
sition 1.28. 

The next theorem is one of the main results of this chapter. We use the notation
∥f∥ = √(f, f)
for any function f ∈ F.
Theorem 2.9. Let (V, μ) be a finite, connected, weighted graph and P be its Markov kernel. For any function f ∈ F, set
f̄ = (1/μ(V)) Σ_{x∈V} f(x) μ(x).
Then, for any positive integer n, we have
∥P^n f − f̄∥ ≤ ρ^n ∥f∥,     (2.9)
where
ρ = max(|1 − λ1|, |1 − λ_{N−1}|).     (2.10)
Consequently, if the graph (V, μ) is non-bipartite then
∥P^n f − f̄∥ → 0 as n → ∞,     (2.11)
that is, P^n f converges to the constant f̄ as n → ∞.
The estimate (2.9) gives the rate of convergence of P^n f to the constant f̄: it is decreasing exponentially in n provided ρ < 1. The constant ρ is called the spectral


radius of the Markov operator P . Indeed, in terms of the eigenvalues αk of P , we


have
ρ = max (|α1 | , |αN −1 |) .
Equivalently, ρ is the minimal positive number such that
spec P \ {1} ⊂ [−ρ, ρ] ,
because the eigenvalues of P except for 1 are contained in [αN −1 , α1] .

Proof of Theorem 2.9. If the graph (V, μ) is non-bipartite then by Theorem 2.7 we have
−1 < α_{N−1} ≤ α1 < 1,
which implies that ρ < 1. Therefore, ρ^n → 0 as n → ∞ and (2.9) implies (2.11).
To prove (2.9), choose an orthonormal basis {vk}_{k=0}^{N−1} of the eigenfunctions of P so that P vk = αk vk and, hence,
P^n vk = αk^n vk.
Any function f ∈ F can be expanded in the basis vk as follows:
f = Σ_{k=0}^{N−1} ak vk,
where ak = (f, vk). By the Parseval identity, we have
∥f∥² = Σ_{k=0}^{N−1} ak².
We have
P f = Σ_{k=0}^{N−1} ak P vk = Σ_{k=0}^{N−1} αk ak vk,
whence, by induction in n,
P^n f = Σ_{k=0}^{N−1} αk^n ak vk.

On the other hand, recall that v0 ≡ c for some constant c. It can be determined from the normalization condition ∥v0∥ = 1, that is,
Σ_{x∈V} c² μ(x) = 1,
whence c = 1/√μ(V). It follows that
a0 = (f, v0) = (1/√μ(V)) Σ_{x∈V} f(x) μ(x),
and
a0 v0 = (1/μ(V)) Σ_{x∈V} f(x) μ(x) = f̄.


Hence, we obtain
P^n f − f̄ = Σ_{k=0}^{N−1} αk^n ak vk − a0 v0
= α0^n a0 v0 + Σ_{k=1}^{N−1} αk^n ak vk − a0 v0
= Σ_{k=1}^{N−1} αk^n ak vk.
By the Parseval identity, we have
∥P^n f − f̄∥² = Σ_{k=1}^{N−1} αk^{2n} ak²
≤ max_{1≤k≤N−1} |αk|^{2n} Σ_{k=1}^{N−1} ak².
As was already explained before the proof, all the eigenvalues αk of P with k ≥ 1 are contained in the interval [−ρ, ρ], so that |αk| ≤ ρ. Observing also that
Σ_{k=1}^{N−1} ak² ≤ ∥f∥²,
we obtain
∥P^n f − f̄∥² ≤ ρ^{2n} ∥f∥²,
which finishes the proof. □

Corollary 2.10. Let (V, μ) be a finite, connected, weighted graph that is non-bipartite, and let {Xn} be the associated random walk. Fix a vertex x0 ∈ V and consider the distribution function of Xn:
vn(x) = Px0(Xn = x).
Then
vn(x) → μ(x)/μ(V) as n → ∞,     (2.12)
where μ(V) = Σ_{x∈V} μ(x). Moreover, we have
Σ_{x∈V} ( vn(x) − μ(x)/μ(V) )² · μ(x0)/μ(x) ≤ ρ^{2n}.     (2.13)
It follows from (2.13) that, for any x ∈ V,
|vn(x) − μ(x)/μ(V)| ≤ ρ^n √(μ(x)/μ(x0)).     (2.14)

Proof. Since the graph is not bipartite, we have ρ ∈ (0, 1), so that (2.12)
follows from (2.13) (or from (2.14)). To prove (2.13), consider also the backward
distribution function
un (x) = Px (Xn = x0 ) ,


and recall that, by (1.20),
un(x) = vn(x) μ(x0)/μ(x).
Since
un(x) = Pn(x, x0) = Σ_{y∈V} Pn(x, y) 1{x0}(y) = P^n 1{x0}(x),
we obtain by Theorem 2.9 with f = 1{x0} that
∥un − f̄∥² ≤ ρ^{2n} ∥1{x0}∥².
Since for this function f
f̄ = μ(x0)/μ(V) and ∥f∥² = μ(x0),
we obtain that
Σ_x ( vn(x) μ(x0)/μ(x) − μ(x0)/μ(V) )² μ(x) ≤ ρ^{2n} μ(x0),
whence (2.13) follows. □

A random walk is called ergodic if (2.12) is satisfied, which is equivalent to


(2.11). We have seen that a random walk on a finite, connected, non-bipartite
graph is ergodic.
Let us show that if the graph is bipartite, that is, if λN −1 = 2, then this is not
the case. Indeed, if f is an eigenfunction of L with the eigenvalue 2, then f is the
eigenfunction of P with the eigenvalue −1, that is, P f = −f . Then we obtain that
P n f = (−1)n f so that P n f does not converge to any function as n → ∞. The
case of bipartite graphs will be considered in more detail in Section 2.5.
In the case of a non-bipartite graph, the rate of convergence of the distribution function vn(x) to the equilibrium measure μ(x)/μ(V) is determined by ρ^n where
ρ = max(|α1|, |α_{N−1}|).


Given a small number ε > 0, define the mixing time T = T(ε) by the condition
T = min{n : ρ^n ≤ ε},
which gives
T ≈ ln(1/ε) / ln(1/ρ).     (2.15)
With this T and n ≥ T, we obtain from (2.14) that
|vn(x) − μ(x)/μ(V)| ≤ ε √(μ(x)/μ(x0)).
The value of ε should be chosen so that
ε √(μ(x)/μ(x0)) << μ(x)/μ(V),


which is equivalent to
ε << min_{x∈V} μ(x)/μ(V).

In many examples of large graphs, λ1 is close to 0 and λN −1 is close to 2 as on


Figure 2.2.

Figure 2.2. Eigenvalues λ1 and λN −1

In this case, we have
ln(1/|α1|) = ln(1/|1 − λ1|) = ln(1/(1 − λ1)) ≈ λ1,
and
ln(1/|α_{N−1}|) = ln(1/|1 − λ_{N−1}|) = ln(1/(1 − (2 − λ_{N−1}))) ≈ 2 − λ_{N−1},
whence
T ≈ ln(1/ε) max( 1/λ1, 1/(2 − λ_{N−1}) ).     (2.16)
In the next sections, we will estimate the eigenvalues on specific graphs and, con-
sequently, provide some explicit values for the mixing time.
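Before turning to finer estimates, here is a small simulation sketch illustrating Theorem 2.9 and Corollary 2.10 (Python/NumPy, not from the text; the triangle with a pendant vertex is just a convenient non-bipartite example). It builds the Markov kernel P, iterates the distribution vn of the walk started at x0, and prints the deviation from μ(x)/μ(V) next to ρ^n for comparison.

import numpy as np

# a small non-bipartite graph: triangle 0-1-2 plus a pendant vertex 3 attached to 0
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]
N = 4
W = np.zeros((N, N))
for x, y in edges:
    W[x, y] = W[y, x] = 1.0          # simple weight
mu = W.sum(axis=1)                   # mu(x) = deg(x)
P = W / mu[:, None]                  # Markov kernel P(x, y) = mu_xy / mu(x)

# spectral radius rho = max(|alpha_1|, |alpha_{N-1}|), alphas = eigenvalues of P
alphas = np.sort(np.linalg.eigvals(P).real)[::-1]
rho = max(abs(alphas[1]), abs(alphas[-1]))

pi = mu / mu.sum()                   # equilibrium measure mu(x)/mu(V)
v = np.zeros(N); v[3] = 1.0          # start the walk at x0 = 3
for n in range(1, 31):
    v = v @ P                        # v_n(x) = P_{x0}(X_n = x)
    if n % 5 == 0:
        # the deviation decays at the rate ~ rho^n, cf. (2.14)
        print(n, np.max(np.abs(v - pi)), rho**n)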

2.4. More about the eigenvalues


The next statement contains additional information about the spectrum of L.
Theorem 2.11. Let (V, μ) be a finite, connected, weighted graph with N :=
|V | > 1.
(a) We have the following inequality:
λ1 + ... + λ_{N−1} ≤ N.     (2.17)
Consequently,
λ1 ≤ N/(N − 1).     (2.18)
If, in addition, (V, μ) has no loops, then
λ1 + ... + λ_{N−1} = N,     (2.19)
and
λ_{N−1} ≥ N/(N − 1).     (2.20)
(b) If (V, μ) = KN, that is, (V, μ) is a complete graph with a simple weight, then
λ1 = ... = λ_{N−1} = N/(N − 1).
(c) If (V, μ) is non-complete, then λ1 ≤ 1. Consequently, a graph with a simple weight is complete if and only if λ1 > 1.


Figure 2.3. Tetrahedron

For example, the tetrahedron graph on Figure 2.3 is isomorphic to K4, which implies that the eigenvalues of the Laplace operator on the tetrahedron are 0 and 4/3 (with multiplicity 3).
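This can be confirmed numerically; a short sketch (Python/NumPy, added for illustration and not part of the text) computes the spectrum of L = id − P for K4:

import numpy as np

N = 4
W = np.ones((N, N)) - np.eye(N)          # K4 with simple weight
P = W / W.sum(axis=1, keepdims=True)
L = np.eye(N) - P                        # positive definite Laplacian L = id - P
print(np.round(np.sort(np.linalg.eigvalsh(L)), 6))
# expected: [0, 4/3, 4/3, 4/3]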

Proof. (a) Let {vk}_{k=0}^{N−1} be an orthonormal basis in F that consists of the eigenfunctions of L, so that Lvk = λk vk. In the basis {vk}, the matrix of L is diag(λ0, λ1, ..., λ_{N−1}). Since λ0 = 0, we obtain
trace L = λ0 + λ1 + ... + λ_{N−1} = λ1 + ... + λ_{N−1}.     (2.21)
Note that trace L does not depend on the choice of a basis. Let us choose another basis as follows: enumerate all the vertices of V by 0, 1, ..., N − 1 and consider the indicator functions 1{k} (where k = 0, 1, ..., N − 1) that obviously form a basis in F. The components of any function f ∈ F in this basis are the values f(k). Rewrite the definition of L in the form
Lf(i) = f(i) − Σ_j P(i, j) f(j)
= (1 − P(i, i)) f(i) − Σ_{j≠i} P(i, j) f(j).

We see that the matrix of L in this basis has the values 1 − P (i, i) on the diagonal
(the coefficients in front of f (i)) and −P (i, j) in the intersection of the column i
and the row j off the diagonal. It follows that

trace L = Σ_{i=0}^{N−1} (1 − P(i, i)) = N − Σ_{i=0}^{N−1} P(i, i) ≤ N.     (2.22)

Comparison with (2.21) proves (2.17). Since λ1 is the minimum of the sequence
{λ1 , ..., λN −1 } of N − 1 numbers, (2.18) follows from (2.17).
If the graph has no loops, then P (i, i) = 0, and (2.22) implies (2.19). Since
λN −1 is the maximum of the sequence {λ1 , ..., λN −1 }, we obtain (2.20).
(b) We need to construct N − 1 linearly independent eigenfunctions with the
eigenvalue N/(N − 1). As above, set V = {0, 1, ..., N − 1} and consider the following N − 1 functions fk for k = 1, 2, ..., N − 1:
fk(i) = 1 for i = 0,   fk(i) = −1 for i = k,   fk(i) = 0 otherwise.


We have
Lfk(i) = fk(i) − (1/(N − 1)) Σ_{j≠i} fk(j).
If i = 0, then fk(0) = 1 and in the sum Σ_{j≠0} fk(j) there is exactly one non-zero term, equal to −1 (for j = k), whence
Lfk(0) = fk(0) − (1/(N − 1)) Σ_{j≠0} fk(j) = 1 + 1/(N − 1) = (N/(N − 1)) fk(0).
If i = k, then fk(k) = −1 and in the sum Σ_{j≠k} fk(j) there is exactly one non-zero term, equal to 1 (for j = 0), whence
Lfk(k) = fk(k) − (1/(N − 1)) Σ_{j≠k} fk(j) = −1 − 1/(N − 1) = (N/(N − 1)) fk(k).
If i ≠ 0, k, then fk(i) = 0, while in the sum Σ_{j≠i} fk(j) there are the terms 1 and −1, and all others are 0, whence
Lfk(i) = 0 = (N/(N − 1)) fk(i).
Hence, Lfk = (N/(N − 1)) fk. Since the sequence {fk}_{k=1}^{N−1} is linearly independent, we see that N/(N − 1) is an eigenvalue of multiplicity N − 1, which finishes the proof of (b).
(c) By the variational principle, we have
λ1 = inf_{f⊥1} R(f),
where R(f) is the Rayleigh quotient and the condition f ⊥ 1 comes from the fact that the eigenfunction of λ0 is constant. Hence, to prove that λ1 ≤ 1, it suffices to construct a function f ⊥ 1 such that R(f) ≤ 1.
Claim 1. Fix z ∈ V and consider the indicator function f = 1{z} . Then R (f ) ≤ 1.
We have
(f, f) = Σ_{x∈V} f(x)² μ(x) = μ(z),
and, by the Green formula,
(Lf, f) = (1/2) Σ_{x,y∈V} (f(x) − f(y))² μxy
= (1/2) ( Σ_{x=z, y≠z} + Σ_{x≠z, y=z} ) (f(x) − f(y))² μxy
= Σ_{y≠z} (f(z) − f(y))² μzy = Σ_{y≠z} μzy ≤ μ(z),
whence R(f) ≤ 1. (Note that if the graph has no loops, then we obtain the identity R(f) = 1.)
Clearly, we have also R (cf ) ≤ 1 for any constant c.


Claim 2. Let f, g be two functions on V such that
R(f) ≤ 1, R(g) ≤ 1,
and their supports
A = {x ∈ V : f(x) ≠ 0} and B = {x ∈ V : g(x) ≠ 0}
are disjoint and not connected, that is, x ∈ A and y ∈ B implies that x ≠ y and x ≁ y. Then R(f + g) ≤ 1.
It is obvious that fg ≡ 0. Let us show that also (Lf)g ≡ 0. Indeed, if g(x) = 0, then (Lf)g(x) = 0. If g(x) ≠ 0, then x ∈ B. It follows that f(x) = 0 and f(y) = 0 for any y ∼ x, whence
Lf(x) = f(x) − Σ_{y∼x} P(x, y) f(y) = 0,
whence (Lf ) g (x) = 0. Using the identities f g = (Lf ) g = (Lg) f = 0, we obtain


(f + g, f + g) = (f, f ) + 2 (f, g) + (g, g) = (f, f ) + (g, g) ,
and
(L (f + g) , f + g) = (Lf, f ) + (Lg, f ) + (Lf, g) + (Lg, g)
= (Lf, f ) + (Lg, g) .
Since by hypothesis
(Lf, f ) ≤ (f, f ) and (Lg, g) ≤ (g, g) ,
it follows that
R(f + g) = ((Lf, f) + (Lg, g)) / ((f, f) + (g, g)) ≤ 1.
Now we construct a function f ⊥ 1 such that R(f) ≤ 1. Since the graph is non-complete, there are two distinct vertices, say z1 and z2, such that z1 ≁ z2. Consider the function f of the form
f(x) = c1 1{z1} + c2 1{z2},
where the coefficients c1 and c2 are chosen so that f ⊥ 1 (for example, c1 = 1/μ(z1) and c2 = −1/μ(z2)). Since R(ci 1{zi}) ≤ 1 and the supports of 1{z1} and 1{z2} are disjoint and not connected, we obtain that also R(f) ≤ 1, which finishes the proof. □

2.5. Convergence to equilibrium for bipartite graphs


We state here without proof analogues of Theorem 2.9 and Corollary 2.10 for
bipartite graphs (see Exercise 22 for proofs).
Theorem 2.12. Let (V, μ) be a finite connected weighted graph. Assume that (V, μ) is bipartite, and let V+, V− be a bipartition of V. For any function f on V, consider the function f̂ on V that takes two values as follows:
f̂(x) = (2/μ(V)) Σ_{y∈V+} f(y) μ(y) for x ∈ V+,
f̂(x) = (2/μ(V)) Σ_{y∈V−} f(y) μ(y) for x ∈ V−.
Then, for all even n,
∥P^n f − f̂∥ ≤ ρ^n ∥f∥,


where
ρ = 1 − λ1.
Consequently, for all x ∈ V, we have P^n f(x) → f̂(x) as n → ∞ along even n.
Note that by Theorem 2.11 λ1 ≤ 1. Unlike Theorem 2.9, here we do not use
λN −1 , which is not surprising because λN −1 = 2 by Theorem 2.7. In fact, the proof
of Theorem 2.12 requires use of λN −2 , but the latter amounts again to λ1 because
by Theorem 2.7, λN −2 = 2 − λ1 .
Corollary 2.13. Under the hypotheses of Theorem 2.12, consider the distribution vn(x) = Px0(Xn = x) of the random walk on (V, μ) and the function
v̂(x) := 2μ(x)/μ(V) if x lies in the same part of the bipartition as x0, and v̂(x) := 0 otherwise.
Then, for all x ∈ V and even n,
|vn(x) − v̂(x)| ≤ ρ^n √(μ(x)/μ(x0)).
Consequently, for all x ∈ V, we have vn(x) → v̂(x) as n → ∞ along even n.
It follows that the mixing time (assuming that n is even) is estimated by
T ≈ ln(1/ε) / ln(1/ρ) = ln(1/ε) / ln(1/(1 − λ1)) ≈ ln(1/ε) / λ1     (2.23)
assuming λ1 ≈ 0. Here ε must be chosen so that ε << min_x μ(x)/μ(V).

2.6. Eigenvalues of Zm
Let us give an example of computation of the eigenvalues λk of L. It will be
more convenient to compute the eigenvalues αk = 1 − λk of the Markov operator
P = id −L.
Let us compute the eigenvalues of the Markov operator on the cycle graph Cm
with simple weight. Recall that Cm = Zm = {0, 1, ..., m − 1} and the edges are
0 ∼ 1 ∼ 2 ∼ ... ∼ m − 1 ∼ 0.
The Markov operator is given by
P f(k) = (1/2)(f(k + 1) + f(k − 1)),
where k is a residue mod m. The eigenfunction equation P f = αf becomes
f (k + 1) − 2αf (k) + f (k − 1) = 0. (2.24)
We know already that α = 1 is always a simple eigenvalue of P , and α = −1
is a (simple) eigenvalue if and only if Zm is bipartite, that is, if m is even. Assume
in what follows that α ∈ (−1, 1) .
Consider first the difference equation (2.24) on Z, that is, for all k ∈ Z, and
find all solutions f as functions on Z. Observe first that the set of all solutions of
(2.24) is a linear space (the sum of two solutions is a solution, and a multiple of a
solution is a solution), and the dimension of this space is 2, because function f is
uniquely determined by (2.24) and by two initial conditions f (0) = a and f (1) = b.


Therefore, to find all solutions of (2.24), it suffices to find two linearly independent
solutions.
Let us search for a specific solution of (2.24) in the form f(k) = r^k where r is a complex number yet to be found. Substituting into (2.24) and cancelling by r^k, we obtain the equation for r:
r² − 2αr + 1 = 0.
It has two complex roots
r = α ± i√(1 − α²) = e^{±iθ},
where θ ∈ (0, π) is determined by the condition
cos θ = α (and sin θ = √(1 − α²)).
Hence, we obtain two independent complex-valued solutions of (2.24): f1(k) = e^{ikθ} and f2(k) = e^{−ikθ}. Taking their linear combinations and using the Euler formula, we arrive at the following real-valued independent solutions:
f1(k) = cos kθ and f2(k) = sin kθ.     (2.25)
In order to be able to consider a function f (k) on Z as a function on Zm , it must
be m-periodic, that is,
f (k + m) = f (k) for all k ∈ Z.
The functions (2.25) are m-periodic provided mθ is a multiple of 2π, that is,
θ = 2πl/m,
for some integer l. The restriction θ ∈ (0, π) is equivalent to
l ∈ (0, m/2).
Hence, for each l from this range we obtain an eigenvalue α = cos θ of multiplicity
2 (with eigenfunctions cos kθ and sin kθ).
Let us summarize this result in the following statement.
Lemma 2.14. The eigenvalues of the Markov operator P on Zm are as follows:
(a) If m is odd, then the eigenvalues are α = 1 (simple) and α = cos(2πl/m) for all l = 1, ..., (m − 1)/2 (double).
(b) If m is even, then the eigenvalues are α = ±1 (simple) and α = cos(2πl/m) for all l = 1, ..., m/2 − 1 (double).
Note that the sum of the multiplicities of all the listed above eigenvalues is m
so that we have found indeed all the eigenvalues of P .
For example, in the case m = 3 we obtain the Markov eigenvalues α = 1 and α = cos(2π/3) = −1/2 (double). The eigenvalues of L are as follows: λ = 0 and λ = 3/2 (double). If m = 4 then the Markov eigenvalues are α = ±1 and α = cos(2π/4) = 0 (double). The eigenvalues of L are as follows: λ = 0, λ = 1 (double), λ = 2.
In the following lemma we give an alternative description of the eigenvalues of
Zm .
Lemma 2.15. All the eigenvalues of the Markov operator P on Zm (with multiplicities) are given by the sequence {cos(2πj/m)}_{j=0}^{m−1}.


Proof. Let m be odd. The value j = 0 gives the eigenvalue α = 1, while for j = 1, ..., m − 1, we have
cos(2πj/m) = cos(2πl/m),
where
l = j for 1 ≤ j ≤ (m − 1)/2,   l = m − j for (m + 1)/2 ≤ j ≤ m − 1,
because
cos(2πj/m) = cos(2π − 2πj/m) = cos(2πl/m).
Hence, every value of {cos(2πl/m)}_{l=1}^{(m−1)/2} occurs in the sequence {cos(2πj/m)}_{j=1}^{m−1} exactly twice, so that we obtain all the eigenvalues of P by Lemma 2.14.
Let m be even. Then j = 0 and j = m/2 give the values 1 and −1, respectively, while for j ∈ [1, m − 1] \ {m/2} we have
cos(2πj/m) = cos(2πl/m),
where
l = j for 1 ≤ j ≤ m/2 − 1,   l = m − j for m/2 + 1 ≤ j ≤ m − 1,
so that each value of {cos(2πl/m)}_{l=1}^{m/2−1} is counted exactly twice. □
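Lemma 2.15 is also easy to verify numerically. The sketch below (Python/NumPy, not part of the text) compares the spectrum of the Markov operator on Zm with the sequence {cos(2πj/m)}_{j=0}^{m−1} for a few values of m.

import numpy as np

def markov_cycle(m):
    # Markov kernel of the random walk on Z_m (the cycle) with simple weight
    P = np.zeros((m, m))
    for k in range(m):
        P[k, (k + 1) % m] = P[k, (k - 1) % m] = 0.5
    return P

for m in (3, 4, 7, 10):
    spec = np.sort(np.linalg.eigvalsh(markov_cycle(m)))
    predicted = np.sort(np.cos(2 * np.pi * np.arange(m) / m))
    print(m, np.allclose(spec, predicted))    # True for every m >= 3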

2.7. Products of graphs


Definition 2.16. The Cartesian product of two graphs (X, E1) and (Y, E2) is a graph
(V, E) = (X, E1) □ (Y, E2),
where V = X × Y, that is, V is the set of pairs (x, y) with x ∈ X and y ∈ Y, and the set E of edges is defined by
(x, y) ∼ (x′, y′) if either x ∼ x′ and y = y′, or y ∼ y′ and x = x′.     (2.26)
A fragment of the graph (V, E) is shown on Figure 2.4.
Figure 2.4. Graph X □ Y


Clearly, we have |V| = |X| |Y| and
deg(x, y) = deg(x) + deg(y)
for all x ∈ X and y ∈ Y.
For example, we have Z □ Z = Z² and, more generally, Z^n □ Z^m = Z^{n+m}. Also, Z_2 □ Z_2 = Z_4, while Z_2 □ Z_3 is a triangular prism.

Let us define a more general notion of the product of weighted graphs.


Definition 2.17. Let (X, a) and (Y, b) be two locally finite weighted graphs.
Fix two numbers p, q > 0 and define the weighted product
(V, μ) = (X, a) □_{p,q} (Y, b)
as follows: V = X × Y and the weight μ on V is defined by
μ_{(x,y),(x′,y′)} = p b(y) a_{xx′} if y = y′,
μ_{(x,y),(x′,y′)} = q a(x) b_{yy′} if x = x′,
μ_{(x,y),(x′,y′)} = 0 otherwise.
The numbers p, q are called the parameters of the product.
Clearly, the product weight μ_{(x,y),(x′,y′)} is symmetric, and the edges of the graph (V, μ) are exactly those from (2.26). The weight on the vertices of V is given by
μ(x, y) = Σ_{x′,y′} μ_{(x,y),(x′,y′)} = p Σ_{x′} a_{xx′} b(y) + q a(x) Σ_{y′} b_{yy′} = (p + q) a(x) b(y).
Lemma 2.18. If A and B are the Markov kernels on (X, a) and (Y, b), then the Markov kernel P on the product (V, μ) is given by
P((x, y), (x′, y′)) = (p/(p + q)) A(x, x′) if y = y′,
P((x, y), (x′, y′)) = (q/(p + q)) B(y, y′) if x = x′,
P((x, y), (x′, y′)) = 0 otherwise.     (2.27)
Proof. Indeed, we have in the case y = y′
P((x, y), (x′, y′)) = μ_{(x,y),(x′,y′)} / μ(x, y) = p a_{xx′} b(y) / ((p + q) a(x) b(y)) = (p/(p + q)) · a_{xx′}/a(x) = (p/(p + q)) A(x, x′),
and the case x = x′ is treated similarly. □

For the random walk on (V, μ), the identity (2.27) means the following: the random walk at (x, y) first chooses between the directions X and Y with probabilities p/(p + q) and q/(p + q), respectively, and then chooses a vertex in the chosen direction according to the Markov kernel there.


In particular, if a and b are simple weights, then we obtain
μ_{(x,y),(x′,y′)} = p deg(y) if x ∼ x′ and y = y′,
μ_{(x,y),(x′,y′)} = q deg(x) if y ∼ y′ and x = x′,
μ_{(x,y),(x′,y′)} = 0 otherwise.

If in addition the graphs A and B are regular, that is,
deg(x) = const =: deg(A) and deg(y) = const =: deg(B),
then the most natural choice of the parameters p and q is as follows:
p = 1/deg(B) and q = 1/deg(A),

so that the weight μ is also simple. We obtain the following statement.

Lemma 2.19. If (X, a) and (Y, b) are regular graphs with simple weights, then the weighted product
(X, a) □_{1/deg(B), 1/deg(A)} (Y, b)
is again a regular graph with a simple weight and the degree deg(A) + deg(B). Moreover, it coincides with the Cartesian product (X, a) □ (Y, b).

Example 2.20. Consider the graphs Z^n and Z^m with simple weights. Since their degrees are equal to 2n and 2m, respectively, we obtain
Z^n □_{1/(2m), 1/(2n)} Z^m = Z^n □ Z^m = Z^{n+m}.

Theorem 2.21. Let (X, a) and (Y, b) be finite weighted graphs without isolated vertices, and let {αk}_{k=0}^{n−1} and {βl}_{l=0}^{m−1} be the sequences of the eigenvalues of the Markov operators A and B respectively, counted with multiplicities. Then all the eigenvalues of the Markov operator P on the product (V, μ) = (X, a) □_{p,q} (Y, b) are given by the sequence
{ (p αk + q βl)/(p + q) }
where k = 0, ..., n − 1 and l = 0, ..., m − 1.
In other words, the eigenvalues of P are the convex combinations of eigenvalues of A and B, with the coefficients p/(p + q) and q/(p + q). Note that the same relation holds for the eigenvalues of the Laplace operators: since those on (X, a) and (Y, b) are 1 − αk and 1 − βl, respectively, we see that the eigenvalues of the Laplace operator on (V, μ) are given by
1 − (p αk + q βl)/(p + q) = (p(1 − αk) + q(1 − βl))/(p + q),
that is, the same convex combination of 1 − αk and 1 − βl.

Proof. Let f be an eigenfunction of A with the eigenvalue α and g be an eigenfunction of B with the eigenvalue β. Let us show that the function h(x, y) =


f(x) g(y) is an eigenfunction of P with the eigenvalue (pα + qβ)/(p + q). We have
P h(x, y) = Σ_{x′,y′} P((x, y), (x′, y′)) h(x′, y′)
= Σ_{x′} P((x, y), (x′, y)) h(x′, y) + Σ_{y′} P((x, y), (x, y′)) h(x, y′)
= (p/(p + q)) Σ_{x′} A(x, x′) f(x′) g(y) + (q/(p + q)) Σ_{y′} B(y, y′) f(x) g(y′)
= (p/(p + q)) Af(x) g(y) + (q/(p + q)) f(x) Bg(y)
= (p/(p + q)) α f(x) g(y) + (q/(p + q)) β f(x) g(y)
= ((pα + qβ)/(p + q)) h(x, y),
which was to be proved.
Let {fk } be a basis in the space of functions on X such that Afk = αk fk ,
and {gl } be a basis in the space of functions on Y , such that Bgl = βl gl . Then
hkl (x, y) = fk (x) gl (y) is a linearly independent sequence of functions on V =
X × Y . Since the number of such functions is nm = |V |, we see that {hkl } is a basis
in the space of functions on V. Since hkl is the eigenfunction with the eigenvalue (p αk + q βl)/(p + q), we conclude that the sequence { (p αk + q βl)/(p + q) } exhausts all the eigenvalues of P. □
Corollary 2.22. Let (V, E) be a finite connected regular graph with N > 1 vertices, and set
(V^n, E_n) = (V, E)^{□ n} = (V, E) □ ... □ (V, E)   (n times).
Let μ be a simple weight on (V, E) and {αk}_{k=0}^{N−1} be the sequence of the eigenvalues of the Markov operator on (V, μ), counted with multiplicity. Let μn be a simple weight on (V^n, E_n). Then the eigenvalues of the Markov operator on (V^n, μn) are given by the sequence
{ (α_{k1} + α_{k2} + ... + α_{kn})/n }     (2.28)
for all ki ∈ {0, 1, ..., N − 1}, where each eigenvalue is counted with multiplicity.
It follows that if {λk}_{k=0}^{N−1} is the sequence of the eigenvalues of the Laplace operator on (V, μ), then the eigenvalues of the Laplace operator on (V^n, μn) are given by the sequence
{ (λ_{k1} + λ_{k2} + ... + λ_{kn})/n }.     (2.29)
Proof. Induction in n. If n = 1, then there is nothing to prove. Let us
make the inductive step from n to n + 1. Let the degree of (V, E) be D, then
deg (V n ) = nD. Note that
(V^{n+1}, E_{n+1}) = (V^n, E_n) □ (V, E).
It follows from Lemma 2.19 that
(V^{n+1}, μ_{n+1}) = (V^n, μn) □_{1/D, 1/(nD)} (V, μ).


By the inductive hypothesis, the eigenvalues of the Markov operator on (V^n, μn) are given by the sequence (2.28). Hence, by Theorem 2.21, the eigenvalues on (V^{n+1}, μ_{n+1}) are given by
( (1/D) / (1/D + 1/(nD)) ) · (α_{k1} + α_{k2} + ... + α_{kn})/n + ( (1/(nD)) / (1/D + 1/(nD)) ) · α_k
= (n/(n + 1)) · (α_{k1} + α_{k2} + ... + α_{kn})/n + (1/(n + 1)) · α_k
= (α_{k1} + α_{k2} + ... + α_{kn} + α_k)/(n + 1),
which was to be proved. □
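Theorem 2.21 and Corollary 2.22 can be tested numerically as well. The sketch below (Python/NumPy, added for illustration) forms the Cartesian product of the cycles Z_3 and Z_5 with simple weights, which by Lemma 2.19 corresponds to the parameters p = q = 1/2, and checks that the spectrum of its Markov operator consists of the averages (αk + βl)/2.

import numpy as np

def cycle_markov(m):
    P = np.zeros((m, m))
    for k in range(m):
        P[k, (k + 1) % m] = P[k, (k - 1) % m] = 0.5
    return P

A, B = cycle_markov(3), cycle_markov(5)   # Z_3 and Z_5, both 2-regular
# Cartesian product with simple weight: P = (1/2) (A x I) + (1/2) (I x B)
P = 0.5 * np.kron(A, np.eye(5)) + 0.5 * np.kron(np.eye(3), B)

spec = np.sort(np.linalg.eigvalsh(P))
alphas = np.linalg.eigvalsh(A)
betas  = np.linalg.eigvalsh(B)
predicted = np.sort([(a + b) / 2 for a in alphas for b in betas])
print(np.allclose(spec, predicted))       # True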

2.8. Eigenvalues and mixing time in Z_m^n, m odd.
Eigenvalues. Consider Z_m^n with an odd m so that the graph Z_m^n is not bipartite. By Corollary 2.10, the distribution of the random walk on Z_m^n converges to the equilibrium measure μ(x)/μ(V) = 1/N, where N = |V| = m^n, and the rate of convergence is determined by the spectral radius ρ:
ρ = max(|αmin|, |αmax|),     (2.30)
where αmin is the minimal eigenvalue of P and αmax is the second maximal eigenvalue of P.
By Lemma 2.14, the eigenvalues of the Markov operator on Zm are listed in the sequence (without multiplicity):
{ cos(2πl/m) }_{l=0}^{(m−1)/2}.
This sequence is obviously decreasing in l. Its minimal value corresponds to l = (m − 1)/2 and is equal to
cos( (2π/m) · (m − 1)/2 ) = −cos(π/m),
and its second maximal value corresponds to l = 1 and is equal to
cos(2π/m).
By Corollary 2.22, the eigenvalues of the Markov operator on Z_m^n have the form
(α_{k1} + α_{k2} + ... + α_{kn})/n,     (2.31)
where the α_{ki} are the Markov eigenvalues of Zm. The minimal value αmin of (2.31) is equal to the minimal value of αk, that is,
αmin = −cos(π/m).     (2.32)
The maximal value of (2.31) is, of course, 1 when all α_{ki} = 1, and the second maximal value is obtained when one of the α_{ki} is equal to cos(2π/m) and the rest n − 1 values are 1. Hence, we have
αmax = (n − 1 + cos(2π/m))/n = 1 − (1 − cos(2π/m))/n.     (2.33)
Consider some explicit examples.


Case m = 3. If m = 3, then αmin = −cos(π/3) = −1/2 and
αmax = 1 − (1 − cos(2π/3))/n = 1 − 3/(2n),
whence
ρ = max( 1/2, |1 − 3/(2n)| ) = 1/2 for n ≤ 3,   ρ = 1 − 3/(2n) for n ≥ 4.     (2.34)

Case of a large m. If m is large, then using the approximation 1 − cos θ ≈ θ²/2 for small θ, we obtain from (2.32) and (2.33)
αmin ≈ −(1 − π²/(2m²)),   αmax ≈ 1 − 2π²/(n m²).
Using further ln(1/(1 − θ)) ≈ θ, we obtain
ln(1/|αmin|) ≈ π²/(2m²),   ln(1/|αmax|) ≈ 2π²/(n m²).
Finally, by (2.15) and (2.30), we obtain the following estimate of the mixing time in Z_m^n with error ε:
T ≈ ln(1/ε) / ln(1/ρ) ≈ ln(1/ε) max( 2m²/π², n m²/(2π²) ) = n m² ln(1/ε) / (2π²),     (2.35)

assuming for simplicity that n ≥ 4. Choosing ε = 1/N² (recall that ε has to be chosen significantly smaller than the limit value 1/N of the distribution) and using N = m^n, we obtain
T ≈ (2 ln m^n)/(2π²) · n m² = (1/π²) n² m² ln m.     (2.36)
For example, in Z_5^10 the mixing time is T ≈ 400, which is relatively short given the fact that the number of vertices in this graph is N = 5^10 ≈ 10^6. In fact, the actual mixing time is even smaller than the above value of T since the latter is an upper bound for the mixing time.
For comparison, if we choose n = 1, then (2.35) changes slightly to become
T ≈ ln(1/ε) · 2m²/π² = (4/π²) m² ln m,
where we have chosen ε = 1/m². For example, in Zm with m = 10^6 we have T ∼ 10^{12}, which is huge in comparison with the previous graph Z_5^10 with (almost) the same number of vertices as Z_{10^6}!

Minimizing the mixing time with a fixed number of vertices. Let us try to spot those Z_m^n with the minimal mixing time assuming that the number of vertices N = m^n is (approximately) fixed. The formula (2.36) can be rewritten in terms of m and N = m^n as follows:
T ≈ ( m² / (π² ln m) ) ln² N,
where we have used n = ln N / ln m. Therefore, to minimize T with a given N, the value of m should be taken minimal, that is, m = 3. In this case, we can


use the exact value of ρ given by (2.34), and obtain that, for Z_3^n with large n and ε = 1/N² = 3^{−2n}:
T ≈ ln(1/ε) / ln( 1/(1 − 3/(2n)) ) ≈ (2n ln 3) / (3/(2n)) = (4/3)(ln 3) n² = (4/(3 ln 3)) (ln N)² ≈ 1.2 (ln N)².
For example, for Z_3^13 with N = 3^13 > 10^6 vertices, we obtain a very short mixing time T ≈ 250.
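The quantities in this section can also be computed directly from (2.30), (2.32), (2.33) and (2.15), without the asymptotic simplifications. The sketch below (Python, not from the text; the choice ε = 1/N² follows the discussion above) evaluates the resulting bound on the mixing time for Z_5^10 and Z_3^13.

import numpy as np

def mixing_time_bound(m, n):
    # eigenvalue extremes of the Markov operator on Z_m^n (m odd), by (2.32)-(2.33)
    alpha_min = -np.cos(np.pi / m)
    alpha_max = 1 - (1 - np.cos(2 * np.pi / m)) / n
    rho = max(abs(alpha_min), abs(alpha_max))        # (2.30)
    N = m ** n
    eps = 1.0 / N**2
    return np.log(1 / eps) / np.log(1 / rho)         # (2.15)

print(mixing_time_bound(5, 10))   # roughly 4.5e2; compare with T ~ 400 for Z_5^10
print(mixing_time_bound(3, 13))   # roughly 2.3e2; compare with T ~ 250 for Z_3^13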

2.9. Eigenvalues and mixing time in a binary cube


Eigenvalues of the Laplacian. The eigenvalues of the Laplace operator on Z_2 = {0, 1} are λ0 = 0 and λ1 = 2. Then the eigenvalues of the Laplace operator on Z_2^n = {0, 1}^n are given by (2.29), that is, by
{ (λ_{k1} + λ_{k2} + ... + λ_{kn})/n }
where each ki = 0 or 1. Hence, each eigenvalue of Z_2^n is equal to 2j/n where j ∈ [0, n] is the number of 1's in the sequence k1, ..., kn. The multiplicity of the eigenvalue 2j/n is equal to the number of binary sequences {k1, ..., kn} where 1 occurs exactly j times. This number is given by the binomial coefficient (n choose j). Hence, all the eigenvalues of the Laplace operator on {0, 1}^n are given by the sequence {2j/n}_{j=0}^{n}, and the multiplicity of 2j/n is equal to (n choose j).

A curious identity. Note that the total sum of all multiplicities of the eigenvalues of Z_2^n is equal to
Σ_{j=0}^{n} (n choose j) = 2^n,
that is, the number of vertices in Z_2^n, as expected. The trace of the Laplace operator on Z_2^n is equal to
sum of the eigenvalues = Σ_{j=0}^{n} (2j/n) (n choose j),
while by Theorem 2.11 the trace is equal to 2^n. Hence, we obtain the identity
Σ_{j=0}^{n} j (n choose j) = n 2^{n−1}.
Of course, one can prove this identity independently by induction in n.

Case n = 3. The eigenvalues of the hexahedron Z_2^3 are 0, 2/3, 4/3, 2, where 0 and 2 are simple, while 2/3 and 4/3 have multiplicities (3 choose 1) = (3 choose 2) = 3. For comparison, let us evaluate the eigenvalues of Z_2^3 directly. For that, enumerate the vertices of Z_2^3 as on Figure 2.5.


Figure 2.5. Hexahedron with numbered vertices

The matrix of the Laplace operator L is then as follows:
(   1   −1/3    0   −1/3  −1/3    0     0     0   )
( −1/3    1   −1/3    0     0   −1/3    0     0   )
(   0   −1/3    1   −1/3    0     0   −1/3    0   )
( −1/3    0   −1/3    1     0     0     0   −1/3  )
( −1/3    0     0     0     1   −1/3    0   −1/3  )
(   0   −1/3    0     0   −1/3    1   −1/3    0   )
(   0     0   −1/3    0     0   −1/3    1   −1/3  )
(   0     0     0   −1/3  −1/3    0   −1/3    1   )
Using computer packages such as Maple, one obtains that the eigenvalues of this
matrix are indeed as listed above.
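The same check can be automated for any n. The following sketch (Python/NumPy, added for illustration and not part of the text) assembles the Laplace matrix of Z_2^n by flipping bits and counts the multiplicities of the eigenvalues 2j/n.

import numpy as np
from collections import Counter

def cube_laplacian(n):
    N = 2 ** n
    W = np.zeros((N, N))
    for x in range(N):
        for i in range(n):
            y = x ^ (1 << i)           # flip the i-th bit: neighbours in Z_2^n
            W[x, y] = 1.0
    P = W / n                           # simple weight, deg = n
    return np.eye(N) - P

for n in (3, 4):
    ev = np.round(np.linalg.eigvalsh(cube_laplacian(n)), 6)
    print(n, sorted(Counter(ev).items()))
# for n = 3: eigenvalue 2j/3 with multiplicity (3 choose j), i.e. 1, 3, 3, 1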

Convergence rate in a binary cube. By (2.23), the convergence rate of the random walk on Z_2^n is determined by λ1 = 2/n. Assume that n is large and let N = 2^n. Taking ε = 1/N² = 2^{−2n}, we obtain by (2.23) the following estimate of the mixing time:
T ≈ ln(1/ε) / λ1 = (2n ln 2) / (2/n) = (ln 2) n² = (ln N)² / ln 2 ≈ 1.4 (ln N)².
For example, for Z_2^20 with N = 2^20 ≈ 10^6 vertices we obtain T ≈ 280.
Remark 2.23. Let us emphasize again that T is, in fact, an upper bound for the mixing time, based on spectral properties of the Markov operator. Using more subtle methods, based on the log-Sobolev inequality, it is possible to prove a better estimate for the mixing time in the binary cube Z_2^n:
T ≃ ln N ln ln N,
see [57] and [124] for details. Further results on mixing times can also be found in [111].
For explicit computation of eigenvalues on various graphs see [105].


CHAPTER 3

Geometric bounds for the eigenvalues

In this chapter, (V, μ) is always a finite, connected weighted graph with N =


−1
|V | > 1. As above, let {λk }N
k=0 be the increasing sequence of the eigenvalues of the
Laplace operator L on (V, μ). Our purpose is to obtain estimates for the smallest
positive eigenvalue λ1 .

3.1. Cheeger’s inequality


Let E be the set of edges of (V, μ). Recall that, for any vertex subset Ω ⊂ V ,
its measure μ (Ω) is defined by

μ (Ω) = μ (x) .
x∈Ω

Similarly, for any edge subset S ⊂ E, define its measure μ (S) by



μ (S) = μξ ,
ξ∈S

where μξ := μxy for any edge ξ = xy.


For any set Ω ⊂ V , define its edge boundary ∂Ω by
∂Ω = {xy ∈ E : x ∈ Ω, y ∈
/ Ω} .
Definition 3.1. Given a finite weighted graph (V, μ), define its Cheeger con-
stant by
μ (∂Ω)
h = h (V, μ) = inf . (3.1)
Ω⊂V
1
μ (Ω)
μ(Ω)≤ 2 μ(V )
In other words, h is the largest constant such that the following inequality is true:
μ (∂Ω) ≥ hμ (Ω) (3.2)
for any subset Ω of V with measure μ (Ω) ≤ 1
2 μ (V ).
Lemma 3.2. We have λ1 ≤ 2h.
Proof. Let Ω be a set at which the infimum in (3.1) is attained. Consider the
following function
1, x ∈ Ω,
f (x) =
−a, x ∈ / Ωc ,
where a constant a is chosen so that f ⊥1, that is, μ (Ω) = aμ (Ωc ) whence
μ (Ω)
a= ≤ 1.
μ (Ωc )
Since by Theorem 2.3
(Lf, f )
λ1 ≤ R (f ) = ,
(f, f )
53

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
54 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

it suffices to prove that R (f ) ≤ 2h. We have


 2
(f, f ) = f (x) μ (x) = μ (Ω) + a2 μ (Ωc ) = (1 + a) μ (Ω)
x∈V
and
1
(Lf, f ) = (f (x) − f (y))2 μxy
2 x,y
 2
= (f (x) − f (y)) μxy
x∈Ω,y∈Ωc

= (1 + a)2 μxy
x∈Ω,y∈Ωc
2
= (1 + a) μ (∂Ω) .
Hence,
2
(1 + a) μ (∂Ω)
R (f ) ≤ = (1 + a) h ≤ 2h,
(1 + a) μ (Ω)
which was to be proved. 
The following lower bound of λ1 via h is most useful and is frequently used.
Theorem 3.3. (Cheeger’s inequality) We have
h2
. λ1 ≥ (3.3)
2
This theorem was proved in [2], [61] and [62]. We precede the proof Theorem
3.3 by two lemmas. Given a function f : V → R and an edge ξ = xy, let us use the
following notation1 :
|∇ξ f | := |∇xy f | = |f (y) − f (x)| .

Lemma 3.4. (Co-area formula) Given any real-valued function f on V , set for
any t ∈ R
Ωt = {x ∈ V : f (x) > t}.
Then the following identity holds:
  ∞
|∇ξ f | μξ = μ(∂Ωt ) dt. (3.4)
ξ∈E −∞

A similar formula holds for differentiable functions on R:


 b  ∞
|f  (x)| = card {x : f (x) = t} dt,
a −∞
and the common value of the both sides is called the full variation of f .
Proof. For any edge ξ = xy, there corresponds an interval Iξ ⊂ R that is
defined as follows:
Iξ = [f (x), f (y)),
where we assume that f (x) ≤ f (y) (otherwise, switch the notations x and y).
Denoting by |Iξ | the Euclidean length of the interval Iξ , we see that |∇ξ f | = |Iξ | .
 
1 Note that ∇ξ f is undefined unless the edge ξ is directed, whereas ∇ξ f  makes always sense.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.1. CHEEGER’S INEQUALITY 55

Let us verify that


ξ ∈ ∂Ωt ⇔ t ∈ Iξ . (3.5)
Indeed, the boundary ∂Ωt consists of edges ξ = xy such that x ∈ Ωct and y ∈ Ωt ,
that is, f (x) ≤ t and f (y) > t; which is equivalent to t ∈ [f (x) , f (y)) = Iξ .
Using (2.26), we obtain
  
μ(∂Ωt ) = μξ = μξ = μξ 1Iξ (t) ,
ξ∈∂Ωt ξ∈E:t∈Iξ ξ∈E

whence
 +∞  +∞ 
μ(∂Ωt ) dt = μξ 1Iξ (t)dt
−∞ −∞ ξ∈E

 +∞
= μξ 1Iξ (t)dt
ξ∈E −∞
 
= μξ |Iξ | = μξ |∇ξ f | ,
ξ∈E ξ∈E

which finishes the proof. 


Lemma 3.5. For any non-negative function f on V , such that
1
μ {x ∈ V : f (x) > 0} ≤ μ (V ) , (3.6)
2
the following is true:
 
|∇ξ f | μξ ≥ h f (x) μ (x) , (3.7)
ξ∈E x∈V

where h is the Cheeger constant of (V, μ) .


Note that for the function f = 1Ω the condition (3.6) means that μ (Ω) ≤
1
2 μ (V ), and the inequality (3.7) is equivalent to
μ (∂Ω) ≥ hμ (Ω) ,
because  
f (x) μ (x) = μ (x) = μ (Ω) ,
x∈V x∈Ω
and
  
|∇ξ f | μξ = |f (y) − f (x)| μxy = μxy = μ (∂Ω) .
ξ∈E x∈Ω,y∈Ωc x∈Ω,y∈Ωc

Hence, the meaning of Lemma 3.5 is that the inequality (3.7) for the indicator
functions implies the same inequality for arbitrary functions.
Proof. By the co-area formula, we have
  ∞  ∞
|∇ξ f | μξ = μ(∂Ωt ) dt ≥ μ(∂Ωt ) dt.
ξ∈E −∞ 0

By (3.6), the set Ωt = {x ∈ V : f (x) > t} has measure ≤ 1


2 μ (V ) for any t ≥ 0.
Therefore, by (3.2)
μ (∂Ωt ) ≥ hμ (Ωt ) ,

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
56 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

whence 
 ∞
|∇ξ f | μξ ≥ h μ (Ωt ) dt.
ξ∈E 0

On the other hand, noticing that x ∈ Ωt for a non-negative t is equivalent to


t ∈ [0, f (x)), we obtain
 ∞  ∞ 
μ (Ωt ) dt = μ (x) dt
0 0 x∈Ωt
 ∞ 
= μ (x) 1[0,f (x)) (t) dt
0 x∈V
  ∞
= μ (x) 1[0,f (x)) (t) dt
x∈V 0

= μ (x) f (x) , (3.8)
x∈V

which finishes the proof. 


Proof of Theorem 3.3. Let f be the eigenfunction of λ1 . Consider two sets
V + = {x ∈ V : f (x) ≥ 0} and V − = {x ∈ V : f (x) < 0} .
Without loss of generality, we can assume that μ (V + ) ≤ μ (V − ) (if not, then
replace f by −f ). It follows that μ (V + ) ≤ 12 μ (V ) . Consider the function
f, f ≥ 0,
g = f+ :=
0, f < 0.
Applying the Green formula (2.2),
1 
(Lf, g) = (∇xy f ) (∇xy g) μxy
2
x,y∈V

and using Lf = λ1 f , we obtain


 1 
λ1 f (x)g(x)μ(x) = (∇xy f ) (∇xy g) μxy .
2
x∈V x,y∈V
2 2
Observing that f g = g and
2 2
(∇xy f ) (∇xy g) = (f (y) − f (x)) (g (y) − g (x)) ≥ (g (y) − g (x)) = |∇xy g| .
we obtain 
ξ∈E |∇ξ g|2 μξ
λ1 ≥  .
x∈V g 2 (x) μ (x)
Note that g ≡ 0 because otherwise f+ ≡ 0 and (f, 1) = 0 imply that f− ≡ 0,
whereas f is not identical 0.

2 Here we use the obvious identity a+ a = a2+ and inequality


|a+ − b+ | ≤ |a − b| ,
that implies
(a+ − b+ )2 ≤ (a − b) (a+ − b+ ) ,
for arbitrary real a, b.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.1. CHEEGER’S INEQUALITY 57

Hence, to prove (3.3) it suffices to verify that


 2 h2  2
|∇ξ g| μξ ≥ g (x) μ (x) . (3.9)
2
ξ∈E x∈V

Since
 1
μ (x ∈ V : g (x) > 0) ≤ μ V + ≤ μ (V ) ,
2
we can apply (3.7) to function g 2 , which yields
"  " 
"∇ξ g 2 " μξ ≥ h g 2 (x) μ (x) . (3.10)
ξ∈E x∈V

Let us estimate from above the left hand side as follows:


"  "  " 2 "
"∇ξ g 2 " μξ = 1 "g (x) − g 2 (y)" μxy
2
ξ∈E x,y∈V
1
= |g(x) − g(y)|μ1/2
xy |g(x) + g(y)|μxy
1/2
2 x,y
 12
1  1 
≤ ( (g(x) − g(y))2 μxy ) ( (g(x) + g(y))2 μxy ) ,
2 x,y 2 x,y

where we have used the Cauchy-Schwarz inequality


1/2 1/2
  
ak bk ≤ 2
ak 2
bk
k k k

that is true for arbitrary sequences of non-negative reals ak , bk . Next, using the
inequality 12 (a + b)2 ≤ a2 + b2 , we obtain
⎛ ⎞1/2
"  "   
"∇ξ g 2 " μξ ≤ ⎝ 2
|∇ξ g| μξ g 2 (x) + g 2 (y) μxy ⎠
ξ∈E ξ∈E x,y
⎛ ⎞1/2
 
= ⎝2 |∇ξ g|2 μξ g 2 (x)μxy ⎠
ξ∈E x,y
⎛ ⎞1/2
 
= ⎝2 |∇ξ g| μξ2
g 2 (x)μ (x)⎠ ,
ξ∈E x∈V

which together with (3.10) yields


⎛ ⎞1/2 1/2
  
g 2 (x) μ (x) ≤ ⎝2 |∇ξ g| μξ ⎠
2 2
h g (x)μ (x) .
x∈V ξ∈E x∈V
 1/2
Dividing by x∈V g 2 (x) μ (x) and taking square, we obtain (3.9). 

Remark 3.6. Cheeger’s inequality (3.3) can be improved as follows:


λ1 ≥ 1 − 1 − h2 .

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
58 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

See [2], [22], [35], [58] for details. Further extensions of Cheeger’s inequality can
be found in [21], [103], [104], [110].

3.2. Eigenvalues on a path graph


Here we consider a weighted path graph (V, μ) where V = {0, 1, ...N − 1}, the
edges are
0 ∼ 1 ∼ 2 ∼ ... ∼ N − 1,
N −1
and the weights are μk−1,k = mk , where {mk }k=1 is a given sequence of positive
numbers.
For 1 ≤ k ≤ N − 2, we have
μ (k) = μk−1,k + μk,k+1 = mk + mk+1 ,
and the same is also true for k = 0, N − 1 if we set m−1 = mN = 0. The Markov
kernel is then
μk,k+1 mk+1
P (k, k + 1) = = .
μ (k) mk + mk+1
−1
Lemma 3.7. Assume that the sequence {mk }N k=1 is increasing, that is, mk ≤
mk+1 . Then, for the weighted path graph, h ≥ 2N . Consequently, we have
1

1
λ1 ≥ . (3.11)
8N 2
Proof. Let Ω be a subset of V with μ (Ω) ≤ 12 μ (V ), and let (k − 1, k) be an
edge of the boundary ∂Ω with the largest possible k. We claim that either Ω or
Ωc is contained in [0, k − 1]. Indeed, if there were vertices from both sets Ω and
Ωc outside [0, k − 1], that is, in [k, N − 1], then there would have been an edge
(j − 1, j) ∈ ∂Ω with j > k, which contradicts the choice of k. It follows that either
μ (Ω) ≤ μ ([0, k − 1]) or μ (Ωc ) ≤ μ ([0, k − 1]) . However, since μ (Ω) ≤ μ (Ωc ), we
obtain that in the both cases μ (Ω) ≤ μ ([0, k − 1]) . We have

k−1 
k−1
μ ([0, k − 1]) = μ (j) = (μj−1,j + μj,j+1 )
j=0 j=0


k−1
= (mj + mj+1 )
j=0
≤ 2kmk , (3.12)
where we have used that mj ≤ mj+1 ≤ mk . Therefore
μ (Ω) ≤ 2kmk .
On the other hand, we have
μ (∂Ω) ≥ μk−1,k = mk ,
whence it follows that
μ (∂Ω) mk 1 1
≥ = ≥ ,
μ (Ω) 2kmk 2k 2N
which proves that h ≥ 2N
1
.
Consequently, Theorem 3.3 yields (3.11). 

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.2. EIGENVALUES ON A PATH GRAPH 59

For comparison, in the case of a simple weight, the exact value of λ1 is


π
λ1 = 1 − cos
N −1
(see Exercise 28). For large N , we obtain
π2 5
λ1 ≈ 2 ≈ ,
2 (N − 1) N2
which is of the same order in N as the estimate (3.11).
Lemma 3.8. Assume that the weights mk satisfy a stronger condition
mk+1 ≥ cmk ,
for some constant c > 1 and all k = 0, ..., N − 2. Then
c−1
h≥
c+1
and, consequently,
 2
1 c−1
λ1 ≥ . (3.13)
2 c+1
Proof. Clearly, we have for all k ≥ j,
mk ≥ ck−j mj ,
which allows to improve (3.12) as follows:

k−1
μ ([0, k − 1]) = (mj + mj+1 )
j=0


k−1
 j−k
≤ c mk + cj+1−k mk
j=0
 
= mk c−k + c1−k 1 + c + ...ck−1
 ck − 1
= mk c−k + c1−k
c−1
c+1
≤ mk .
c−1
As in the previous proof, we conclude that
μ (∂Ω) c−1

μ (Ω) c+1
whence
c−1
h≥ .
c+1
Theorem 3.3 yields then (3.13). 

Let us estimate the mixing time on the path graph (V, μ). Since it is bipartite,
the mixing time is given by (2.23):
ln 1ε ln 1ε
T ≈ 1 ≤ ,
ln 1−λ1 λ1

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
60 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

μ(k)
where ε << mink μ(V ) . Observe that


N −1 
N −1
μ (V ) = (mj + mj+1 ) ≤ 2 mj ,
j=0 j=1

where we put m0 = mN = 0, whence


μ (k) m1 1
min ≥ N −1 = ,
k μ (V ) 2 j=1 mj 2M

where
N −1
j=1 mj
M := .
m1
Setting ε = 1
M2 (note that M ≥ N and N can be assumed large) we obtain
2 ln M
T ≤ .
λ1
Hence, for an arbitrary increasing sequence {mk }, we obtain using (3.11) that
T ≤ 8N 2 ln M. (3.14)
Example 3.9. Consider the weights mk = ck where c > 1. Then we have

N −1
cN −1 − 1
M= cj−1 = ,
j=1
c−1

whence ln M ≈ N ln c. Using also (3.13), we obtain


 2
c+1
T ≤ 4N ln c.
c−1
Note that in this case T is linear in N !

Example 3.10. Consider one more example: mk = kp for some p > 1. Then
 p  p
mk+1 1 1
= 1+ ≥ 1+ =: c.
mk k N
If N >> p, then c ≈ 1 + p
N whence
 2
1 c−1 1 p2
λ1 ≥ ≈ .
2 c+1 8 N2
In this case,

N −1
M= j p ≤ N p+1 ,
j=1

whence ln M ≤ (p + 1) ln N and
16 (p + 1) 2
T ≤ N ln N.
p2

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.3. ESTIMATING λ1 VIA DIAMETER 61

3.3. Estimating λ1 via diameter


The diameter of a connected graph (V, E) is defined by
diam (V, E) = max {d (x, y) : x, y ∈ V } .
Theorem 3.11. For any finite connected weighted graph (V, μ) of diameter D,
we have
c
λ1 ≥ ,
Dμ (V )
where c = minx∼y μxy (for example, c = 1 for a simple weight).
Proof. Let f be the eigenfunction of the eigenvalue λ1 . Let us normalize f
to have
max f = max |f | = 1,
and let x0 be a vertex where f (x0 ) = 1. Since (f, 1) = 0, there is also a vertex y0
n
where f (y0 ) < 0. Let {xk }k=0 be a path connecting x0 and y0 = xn where n ≤ D.
Using the identity
 2
1 x,y (f (x) − f (y)) μxy
λ1 = R (f ) =  2 ,
2 x f (x) μ (x)
let us estimate the denominator and numerator as follows:

f (x)2 μ (x) ≤ μ (V ) ,
x∈V

and
1 
n−1
(f (x) − f (y))2 μxy ≥ c (f (xk ) − f (xk+1 ))2
2 x,y
k=0
2
c 
n−1
≥ f (xk ) − f (xk+1 )
n
k=0
c
= (f (x0 ) − f (xn ))2
n
c

n
because f (x0 ) = 1 and f (xn ) ≤ 0. Since n ≤ D, we obtain
c c
λ1 ≥ ≥ ,
nμ (V ) Dμ (V )
which was to be proved. 
Example 3.12. Let us recall that, by (2.33), the eigenvalue λ1 on Znm is equal
to
1 − cos 2π 2π 2
m
≈ λ1 = , (3.15)
n nm2
assuming that m is large enough. Since μ (Znm ) = 2nmn and diam (Znm ) = nm, the
estimate of Theorem 3.11 gives
1
λ1 ≥ 2 n+1 .
2n m
Comparison with (3.15) shows that the latter estimate is good enough only if n = 1.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
62 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

Figure 3.1. Graph (V, E)

Example 3.13. Let graph (V, E) consist of a graph (V+ , E+ ) (that is, a finite
connected graph), its disjoint copy (V− , E− ), and a path graph of 2m + 1 vertices
z−m ∼ z−(m−1) ∼ ...z−1 ∼ z0 ∼ z1 ∼ ... ∼ zm−1 ∼ zm
so that zm ∈ V+ and z−m is a copy of zm in V− (see Figure 3.1). Set
D = diam (V, E) , D+ = diam (V+ , E+ ) , M = |E+ |
and assume that
D+ << m << M.
For example, these conditions are satisfied if (V+ , E+ ) is a square in Z2 of a side s
so that D+ = 2s and M = 2s (s + 1), where s is large enough.
Let μ be a simple weight on (V, E). Then we have μ (V+ ) = 2M ,
μ (V ) = 2μ (V+ ) + 2m ≈ 4M,
and
D ≤ 2D+ + 2m ≈ 2m,
so that the estimate of Theorem 3.11 becomes
1
λ1  .
8mM
Let us prove the following matching upper bound:
1
λ1 ≤ . (3.16)
2mM
Indeed, consider a function

⎨ m, if x ∈ V+ ,
f (x) = k, if x = zk ,

−m, if x ∈ V− .
Clearly, f ⊥1 whence λ1 ≤ R (f ). We have
 2
(f, f ) = f (x) μ (x) ≥ 2m2 μ (V+ ) = 4m2 M
x∈V

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.4. EXPANSION RATE 63

and
1 2
(Lf, f ) = (f (x) − f (y)) μxy
2 x,y

m−1
2
= (f (zk ) − f (zk+1 )) = 2m
k=−m

whence
(Lf, f ) 2m 1
R (f ) = ≤ = ,
(f, f ) 4m2 M 2mM
which proves (3.16).

3.4. Expansion rate


Let (V, μ) be a finite connected weighted graph with N > 1 vertices. For any
two non-empty subsets X, Y ⊂ V , set
d (X, Y ) = min d (x, y) .
x∈X,y∈Y

Note that d (X, Y ) ≥ 0 and d (X, Y ) = 0 if and only if X and Y have a non-empty
intersection.
We will need also another quantity:
1 μ (X c ) μ (Y c )
l (X, Y ) = ln ,
2 μ (X) μ (Y )
that will be applied for disjoint sets X, Y . Then X ⊂ Y c and Y ⊂ X c whence it
follows that l (X, Y ) ≥ 0. Furthermore, l (X, Y ) = 0 if and only if X = Y c . To
understand better l (X, Y ), let us express it in terms of the set Z = V \ (X ∪ Y ) so
that   
1 μ (Z) μ (Z)
l (X, Y ) = ln 1 + 1+ .
2 μ (X) μ (Y )
Hence, the quantity l (X, Y ) measures “space” between X and Y in terms of the
measure of Z.
−1
Let {λk }N
k=0 be the increasing sequence of the eigenvalues of the Laplace op-
erator L on (V, μ). Set
λN −1 − λ1 2
δ := = 1 − λN −1 . (3.17)
λN −1 + λ1 +1 λ1
Clearly, δ ∈ [0, 1), and δ = 0 can occur only on complete graphs (cf. Theorem
2.11). It is useful to observe that
λ1 1−δ 2
= =1− 1 .
λN −1 1+δ δ + 1
The relation of δ to the spectral radius
ρ = max (|1 − λ1 | , |λN −1 − 1|)
is as follows: since λ1 ≥ 1 − ρ and λN −1 ≤ 1 + ρ, it follows that
λ1 1−ρ
≥ ,
λN −1 1+ρ
whence
δ ≤ ρ.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
64 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

Theorem 3.14. For any two disjoint sets X, Y ⊂ V , we have


l (X, Y )
d (X, Y ) ≤ 1 + . (3.18)
ln 1δ
l(X,Y )
If δ = 0, then we set by definition ln 1δ
= 0.

Remark 3.15. By the definitions of l (X, Y ) and δ, the estimate (3.18) is


equivalent to - c
)μ(Y c )
ln μ(X μ(X)μ(Y )
d (X, Y ) ≤ 1 + .
ln λλN
N −1 +λ1
−1 −λ1

This estimate was proved in [39] and [40]. An earlier version of this inequality
for regular graphs was proved in [34]. The paper [40] contains also the following
modification of (3.18):
- c
)μ(Y c )
cosh−1 μ(X μ(X)μ(Y )
d (X, Y ) ≤ 1 + .
cosh−1 λλN
N −1 +λ1
−1 −λ 1

Before the proof of Theorem 3.14, let us discuss some examples and conse-
quences.
Example 3.16. Consider the path graph (V, E) with the vertex set V =
{0, 1, 2} and with two edges 0 ∼ 1 ∼ 2. Let μ be a simple weight. The eigenvalues
of the Laplacian L are equal to 0, 1, 2 with the eigenfunctions {1, 1, 1}, {1, 0, −1},
{1, −1, 1} respectively (cf. Exercise 28). Therefore,
1 λN −1 + λ1 2+1
= = = 3.
δ λN −1 − λ1 2−1
For the sets X = {0} and Y = {2}, we have μ (X) = μ (Y ) = 1 and μ (X c ) =
μ (Y c ) = 3 whence
1 μ (X c ) μ (Y c )
l (X, Y ) = ln = ln 3.
2 μ (X) μ (Y )
Since d (X, Y ) = d (0, 2) = 2, we obtain the equality
l (X, Y )
d (X, Y ) = 1 + .
ln 1δ
Hence, in this case the estimate (3.18) is sharp.
Example 3.17. If D = d (X, Y ) > 1, then the estimate (3.18) implies that
    1
1 l (X, Y ) μ (X c ) μ (Y c ) 2(D−1)
≤ exp = =: A
δ D−1 μ (X) μ (Y )
whence δ ≥ A−1 and ρ ≥ A−1 . In terms of the eigenvalues we obtain
λ1 A−1
≤ .
λN −1 A+1
Since λN −1 ≤ 2, this yields an upper bound for λ1
A−1
λ1 ≤ 2 . (3.19)
A+1

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.4. EXPANSION RATE 65

Example 3.18. Setting D = diam (V, E), let us prove that


1 μ (V )
D ≤1+ 1 ln m , (3.20)
ln δ
where m = minx∈V μ (x). Indeed, set in (3.18) X = {x}, Y = {y} where x, y are
two distinct vertices. Then
2
1 μ (V ) μ (V )
l (X, Y ) ≤ ln ≤ ln ,
2 μ (x) μ (y) m
whence
1 μ (V )
d (x, y) ≤ 1 + 1 ln .
ln δ m
Taking in the left-hand side the supremum in all x, y ∈ V , we obtain (3.20). In
a particular case of a simple weight, we have m = minx deg (x) and μ (V ) = 2 |E|
whence
1 2 |E|
D ≤ 1 + 1 ln .
ln δ m
Definition 3.19. For any non-empty set X ⊂ V and any r ≥ 0, denote by
Ur (X) the r-neighborhood of X, that is,
Ur (X) = {y ∈ V : d (y, X) ≤ r} .
Corollary 3.20. For any non-empty set X ⊂ V and any integer r ≥ 0, we
have
μ (V )
μ (Ur (X)) ≥ c) . (3.21)
1 + μ(X
μ(X) δ
2r

Proof. Indeed, take Y = V \ Ur (X) so that Ur (X) = Y c . Then d (X, Y ) =


r + 1, and (3.18) yields
1 1 μ (X c ) μ (Y c )
r≤ ln ,
2 ln 1δ μ (X) μ (Y )
whence  2r
μ (Y c ) 1 μ (X)
≥ .
μ (Y ) δ μ (X c )
Since μ (Y ) = μ (V ) − μ (Y c ), we obtain
μ (V ) − μ (Y c ) μ (X c )
c
≤ δ 2r ,
μ (Y ) μ (X)
whence (3.21) follows 
Example 3.21. Given a set X ⊂ V , define the expansion rate of X to be the
minimal positive integer R such that
1
μ (UR (X)) ≥ μ (V ) .
2
Imagine a communication network as a graph where the vertices are the commu-
nication centers (like computer servers) and the edges are direct links between the
centers. If X is a set of selected centers, then it is reasonable to ask, how many
direct links from X are required to reach the majority (at least 50%) of all centers.
This is exactly the expansion rate of X, and the networks with a shorter expansion
rate provide a better connectivity.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
66 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

The inequality (3.21) implies the following upper bound for the expansion rate:
μ(X c )
1 ln μ(X)
R≤ , (3.22)
2 ln 1δ
where · is the ceiling function. Indeed, if r is any integer such that
μ(X c )
1 ln μ(X)
r≥ ,
2 ln 1δ
then
μ (X c ) 2r
δ ≤ 1,
μ (X)
and (3.21) implies that μ (Ur (X)) ≥ 12 μ (V ). It follows that R ≤ r, which implies
(3.22).
Hence, a good communication network should have the number δ as small as
possible (which is similar to the requirement that ρ should be as small as possible
for a fast convergence rates to the equilibrium). For many large practical networks,
one has the following estimate for the spectral radius:
1
ρ≈ ,
ln N
where N  1 is the number of vertices. Since δ ≤ ρ, it follows that
1
 ln N.
δ
c
Assuming that X consists of a single vertex and that μ(X )
μ(X) ≈ N , we obtain the
following estimate of the expansion rate of a single vertex:
ln N
. R
ln ln N
For example, if N = 108 , which is a typical figure for the internet graph, then R  7
(although neglecting the constant factors). This very fast expansion rate is called
“a small world” phenomenon, and it is actually observed in large communication
networks (see [42]).
The same phenomenon occurs in the coauthor network : two mathematicians
are connected by an edge if they have a joint publication. Although the number of
recorded mathematicians is quite high (≈ 105 ), a few links are normally enough to
get from one mathematician to a substantial portion of the whole network.
Proof of Theorem 3.14. As before, denote by F the space of all real-valued
functions on V . Let w0 , w1 , ..., wN −1 be an orthonormal basis in F that consists
of the eigenfunctions of L, and let their eigenvalues be λ0 = 0, λ1 , ..., λN −1 . Any
function u ∈ F admits an expansion in the basis {wl } as follows:

N −1
u= al wl (3.23)
l=0

with some coefficients al . We know already that


1 
a0 w0 = u = u (x) μ (x)
μ (V )
x∈V

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.4. EXPANSION RATE 67

(see the proof of Theorem 2.9). Denote



N −1
u = u − u = al wl ,
l=1

so that u = u + u and u ⊥u.


Let Φ (λ) be a polynomial with real coefficient. We have

N −1
Φ (L) u = al Φ (L) wl
l=0

N −1
= al Φ (λl ) wl
l=0

N −1
= Φ (0) u + al Φ (λl ) wl .
l=1

If v is another function from F with expansion



N −1 
N −1
v= bl wl = v + bl wl = v + v  ,
l=0 l=1

then

N −1
(Φ (L) u, v) = (Φ (0) u, v) + al bl Φ (λl )
l=1

N −1
≥ Φ (0) uvμ (V ) − max |Φ (λl )| |al | |bl |
1≤l≤N −1
l=1
≥ Φ (0) uvμ (V ) − max |Φ (λl )| u  v   .

(3.24)
1≤l≤N −1

Assume now that supp u ⊂ X, supp v ⊂ Y and that


D = d (X, Y ) ≥ 2
(if D ≤ 1, then (3.18) is trivially satisfied). Let Φ (λ) be a polynomial of λ of degree
D − 1. Let us verify that
(Φ (L) u, v) = 0. (3.25)
Indeed, the function Lk u is supported in the k-neighborhood of supp u, whence it
follows that Φ (L) u is supported in the (D − 1)-neighborhood of X. Since UD−1 (X)
is disjoint with Y , we obtain (3.25). Comparing (3.25) and (3.24), we obtain
uvμ (V )
max |Φ (λl )| ≥ Φ (0) . (3.26)
1≤l≤N −1 u  v  
Let us take now u = 1X and v = 1Y . We have
μ (X) μ (X)2
u= , u2 = , u2 = μ (X) ,
μ (V ) μ (V )
whence
# #
 μ (X)2 μ (X) μ (X c )
u  = u2 − u2 = μ (X) − = .
μ (V ) μ (V )

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
68 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

Using similar identities for v and substituting into (3.26), we obtain


#
μ (X) μ (Y )
max |Φ (λl )| ≥ Φ (0) .
1≤l≤N −1 μ (X c ) μ (Y c )
Finally, let us specify Φ (λ) as follows:
 D−1
λ1 + λN −1
Φ (λ) = −λ .
2
Since max |Φ (λ)| on the set λ ∈ [λ1 , λN −1 ] is attained at λ = λ1 and λ = λN −1
and  D−1
λN −1 − λ1
max |Φ (λ)| = ,
[λ1 ,λN −1 ] 2
it follows from (3.34) that
 D−1  D−1 #
λN −1 − λ1 λN −1 + λ1 μ (X) μ (Y )
≥ .
2 2 μ (X c ) μ (Y c )
Rewriting this inequality in the form
 D−1
1
exp (l (X, Y )) ≥
δ
and taking ln, we obtain (3.18). 
The next result is a generalization of Theorem 3.14. Set
λN −1 − λk
δk = .
λN −1 + λk
Theorem 3.22. Fix an integer 1 ≤ k ≤ N − 1 and let X1 , ..., Xk+1 be non-
empty disjoint subsets of V . Set
D = min d (Xi , Xj ) .
i=j

Then the following estimate is true:


1
D ≤1+ max l (Xi , Xj ) . (3.27)
ln δ1k i=j
Remark 3.23. The estimate (3.27) was proved in [39]. The following modifi-
cation of (3.27)
# 
1 −1
μ (Xic ) μ Xjc
D ≤1+ max cosh
cosh−1 δ1 i=j μ (Xi ) μ (Xj )
k

was proved in [40].


Example 3.24. For any positive integer k, define the k-diameter Dk of a graph
(V, E) by
Dk = max min d (xi , xj ) .
x1 ,...,xk+1 ∈V i=j
In particular, D1 is the usual diameter of the graph. Applying Theorem 3.22 to
sets Xi = {xi } and maximizing over all xi , we obtain
ln μ(V )
Dk ≤ 1 + m
, (3.28)
ln δ1k

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.4. EXPANSION RATE 69

where m = inf x∈V μ (x) .


We precede the proof of Theorem 3.22 by a lemma.
Lemma 3.25. In any sequence of n + 2 vectors in n-dimensional Euclidean
space, there are two vectors with a non-negative inner product.
Note that n + 2 is the smallest number for which the statement of Lemma 3.25
is true. Indeed, if e1 , e2 , ..., en denote an orthonormal basis in the given space, let
us set v := −e1 − e2 − ... − en . Then any two of the following n + 1 vectors
e1 + εv, e2 + εv, ..., en + εv, v
have a negative inner product, provided ε > 0 is small enough.
Proof. Induction in n. The inductive basis for n = 1 is obvious. The inductive
step from n − 1 to n is shown on Figure 3.2. Indeed, assume that there are n + 2
vectors v1 , v2 , ..., vn+2 such that (vi , vj ) < 0 for all distinct i, j. Denote by E the
orthogonal complement of vn+2 , and by vi the orthogonal projection of vi onto E.

Figure 3.2. Subspace E

The difference vi − vi is orthogonal to E and, hence, colinear to vn+2 so that


vi − vi = εi vn+2 (3.29)
for some constant εi . Multiplying this identity by vn+2 we obtain
(vi , vn+2 ) − (vi , vn+2 ) = εi (vn+2 , vn+2 ) .
Since (vi , vn+2 ) = 0 and (vi , vn+2 ) < 0, it follows that
(vi , vn+2 )
εi = − > 0.
(vn+2 , vn+2 )
Rewriting the identity (3.29) in the form
vi = vi − εi vn+2 ,
we obtain, for all distinct i, j = 1, ..., n + 1

(vi , vj ) = vi , vj + εi εj (vn+2 , vn+2 ) . (3.30)
By the inductive hypothesis, out of n + 1 vectors in (n − 1)-dimensional v1 , ..., vn+1


Euclidean space E, there are two vectors with non-negative inner product, say,

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
70 3. GEOMETRIC BOUNDS FOR THE EIGENVALUES

  
vi , vj ≥ 0. It follows from (3.30) that also (vi , vj ) ≥ 0, which finishes the proof.


Proof of Theorem 3.22. We use the same notation as in the proof of The-
orem 3.14. If D ≤ 1, then (3.27) is trivially satisfied. Assume in the sequel that
D ≥ 2. Let ui be a function on V with supp ui ⊂ Xi . Let Φ (λ) be a polynomial
with real coefficients of degree ≤ D−1, which is non-negative for λ ∈ {λ1 , ..., λk−1 } .
Let us prove the following estimate
ui uj μ (V )
max |Φ (λl )| ≥ Φ (0) min . (3.31)
k≤l≤N −1 i=j ui uj 
Expand a function ui in the basis {wl } as follows:

N −1 
k−1 
N −1
(i) (i) (i)
ui = a l w l = ui + al wl + al wl .
l=0 l=1 l=k

It follows that

k−1 
N −1
(i) (j) (i) (j)
(Φ (L) ui , uj ) = Φ (0) ui uj μ (V ) + Φ (λl ) al al + Φ (λl ) al al
l=1 l=k

k−1
(i) (j)
≥ Φ (0) ui uj μ (V ) + Φ (λl ) al al
l=1
! !
− max |Φ (λl )| ui  !uj ! .
k≤l≤N −1

Since also (Φ (L) ui , uj ) = 0, it follows that

! ! 
k−1
|Φ (λl )| ui  !uj ! ≥ Φ (0) ui uj μ (V ) +
(i) (j)
max Φ (λl ) al al . (3.32)
k≤l≤N −1
l=1

In order to be able to obtain (3.31), we would like to have



k−1
(i) (j)
Φ (λl ) al al ≥ 0. (3.33)
l=1

In general, (3.33) cannot be guaranteed for any couple i, j but we claim that there
exists a couple i, j of distinct indices such that (3.33) holds (this is the reason why
in (3.31) we have min in all i, j). To prove that, consider the inner product in Rk−1
given by3

k−1
(a, b) = Φ (λi ) ai bi ,
i=1

for any two vectors


 a = (a1 , ..., ak−1 ) and b = (b1 , ..., bk−1 ). Also, consider the
(i) (i) (i)
vectors a = a1 , ..., ak−1 for i = 1, ..., k + 1. Hence, we have k + 1 vectors in
(k − 1)-dimensional Euclidean space. By Lemma 3.25, there are two vectors, say

3 By hypothesis, we have Φ (λ ) ≥ 0. If Φ (λ ) vanishes for some i, then use only those i for
i i
which Φ (λi ) > 0 and consider the inner product in a space of a smaller dimension.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
3.4. EXPANSION RATE 71


a(i) and a(j) such that a(i) , a(j) ≥ 0, which exactly means (3.33). For these i, j,
we obtain from (3.32)
ui uj μ (V )
max |Φ (λl )| ≥ Φ (0) ! !,
k≤l≤N −1 ui  !uj !
whence (3.31) follows.
In particular, taking ui = 1Xi and using that
μ (Xi )
ui =
μ (V )
and #
μ (Xi ) μ (Xic )
ui  = ,
μ (V )
we obtain #
μ (Xi ) μ (Xj )
max |Φ (λl )| ≥ Φ (0) min  . (3.34)
k≤l≤N −1 i=j μ (Xic ) μ Xjc
Consider the following polynomial of degree D − 1
 D−1
λk + λN −1
Φ (λ) = −λ ,
2
which is clearly non-negative for λ ≤ λk . Since max |Φ (λ)| on the set λ ∈ [λk , λN −1 ]
is attained at λ = λk and λ = λN −1 and
 D−1
λN −1 − λk
max |Φ (λ)| = ,
λ∈[λk ,λN −1 ] 2
it follows from (3.34) that
 D−1  D−1 #
λN −1 − λk λN −1 + λk μ (Xi ) μ (Xj )
≥ min  ,
2 2 i=j μ (Xic ) μ Xjc
which is equivalent to (3.27). 
Further results about eigenvalues of graphs and their applications can be found
in [5], [22], [23], [34], [35], [36], [91], [109], [138] and in many other sources. A
good source for expander graphs is [112].

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
10.1090/ulect/071/04

CHAPTER 4

Eigenvalues on infinite graphs

In this chapter, (V, μ) is a locally finite connected weighted graph with an


infinite (countable) set of vertices V .

4.1. Dirichlet Laplace operator


Given a finite subset Ω ⊂ V , denote by FΩ the set of functions Ω → R. Then
FΩ is a linear space of the dimension N = |Ω|. Define the operator LΩ on FΩ as
follows: first extend f to the whole V by setting f = 0 outside Ω and then set
LΩ f = (Lf ) |Ω .
In other words, for any x ∈ Ω, we have

LΩ f (x) = f (x) − P (x, y) f (y) ,
y∼x

where f (y) is set to be 0 whenever y ∈


/ Ω.

Definition 4.1. The operator LΩ is called the Dirichlet Laplace operator in


Ω.
Example 4.2. Recall that the Laplace operator in the lattice graph Z2 is
defined by
1
Lf (x) = f (x) − f (y) .
4 y∼x

Let Ω be the subset of Z2 that consists of three vertices a = (0, 0), b = (1, 0),
c = (2, 0) , so that a ∼ b ∼ c. Then we obtain for LΩ the following formulas:
1
LΩ f (a) = f (a) − f (b)
4
1
LΩ f (b) = f (b) − (f (a) + f (c))
4
1
LΩ f (c) = f (c) − f (b) .
4
Consequently, the matrix of LΩ is
⎛ ⎞
1 −1/4 0
⎝ −1/4 1 −1/4 ⎠
0 −1/4 1

and the eigenvalues are 1, 1 ± 14 2.
73

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
74 4. EIGENVALUES ON INFINITE GRAPHS

For comparison, consider Ω as a finite graph itself. Then deg (a) = deg (c) = 1
and deg (b) = 2 so that the Laplace operator on Ω as a finite graph is defined by
Lf (a) = f (a) − f (b)
1
Lf (b) = f (b) − (f (a) + f (c))
2
Lf (c) = f (c) − f (b) .
The matrix of L is ⎛ ⎞
1 −1 0
⎝ −1/2 1 −1/2 ⎠ ,
0 −1 1
the eigenvalues are 0, 1, 2. As we see, the Dirichlet Laplace operator of Ω as a subset
of Z2 and the Laplace operator of Ω as a finite graph are different operators with
different spectra.
Returning to the general setting, let us introduce in FΩ the inner product

(f, g) = f (x) g (x) μ (x) .
x∈Ω

Lemma 4.3 (Green’s formula). For any two functions f, g ∈ FΩ , we have


1 
(LΩ f, g) = (∇xy f ) (∇xy g) μxy , (4.1)
2
x,y∈Ω1

where Ω1 = U1 (Ω) .
Proof. Extend both functions f and g to all V as above. Applying Green’s
formula of Theorem 2.1 in Ω1 and using that g = 0 outside Ω, we obtain

(LΩ f, g) = Lf (x) g (x) μ (x)
x∈Ω1
1  
= (∇xy f ) (∇xy g) μxy − (∇xy f ) g(x)μxy (4.2)
2
x,y∈Ω1 x∈Ω1 ,y∈Ωc1
1 
= (∇xy f ) (∇xy g) μxy .
2
x,y∈Ω1

We have used the fact that the last sum in (4.2) vanishes. Indeed, the summation
can be restricted to neighboring x, y. Therefore, if y ∈ Ωc1 , then necessarily x ∈ Ωc
and g (x) = 0. 

Since the right hand side of (4.1) is symmetric in f, g, we obtain the following
consequence.
Corollary 4.4. Operator LΩ is symmetric in FΩ .
Hence, the spectrum of LΩ is real. Denote the eigenvalues of LΩ in an increasing
order1 by
λ1 (Ω) ≤ λ2 (Ω) ≤ ... ≤ λN (Ω) ,

1 Unlike the case of a finite graph, the enumeration of the eigenvalues starts here from 1 rather

than from 0.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.1. DIRICHLET LAPLACE OPERATOR 75

where N = |Ω|. As for every symmetric operator (cf. Theorem 2.3), the smallest
eigenvalue λ1 (Ω) admits the variational definition:
λ1 (Ω) = inf R (f ) , (4.3)
f ∈FΩ \{0}

where the Rayleigh quotient R (f ) is defined by


1
 2
(LΩ f, f ) x,y∈Ω1 (∇xy f ) μxy
R (f ) = = 2
 2
. (4.4)
(f, f ) x∈Ω f (x)μ(x)
Here the second equality is true by Lemma 4.3. Note that the ranges x ∈ Ω and
x, y ∈ Ω1 of the summations in (4.4) can be extended to x ∈ V and x, y ∈ V
respectively, because the contributions of each additional term is 0. Indeed, for
the denominator it is obvious because f (x) = 0 outside Ω. For the numerator, if
x∈ / Ω1 and y ∼ x, then y ∈
/ Ω, so that f (x) = f (y) = 0 and ∇xy f = 0.
Theorem 4.5. Let Ω be a finite non-empty subset of V . Then the following is
true.
(a) 0 < λ1 (Ω) ≤ 1.
(b) λ1 (Ω) + λN (Ω) ≤ 2. Consequently,
spec LΩ ⊂ [λ1 (Ω) , 2 − λ1 (Ω)] ⊂ (0, 2) . (4.5)
(c) λ1 (Ω) decreases when Ω increases.
Remark 4.6. Consider the Markov operator PΩ := id −LΩ and observe that
its eigenvalues are αk (Ω) := 1 − λk (Ω). Then Theorem 4.5 can be restated as
follows:
• 0 ≤ α1 (Ω) < 1,
• αN (Ω) ∈ [−α1 (Ω) , α1 (Ω)],
• α1 (Ω) increases when Ω increases.
Proof. (a) Let f be the eigenfunction of λ1 (Ω). Then we have
 2
(LΩ f, f )
1
|∇xy f | μxy
λ1 (Ω) = = 2
x,y∈Ω1 2 , (4.6)
(f, f ) x∈Ω f (x)μ(x)
which implies λ1 (Ω) ≥ 0. Let us show that λ1 (Ω) > 0. Assume from the contrary
that λ1 (Ω) = 0. It follows from (4.6) that ∇xy f = 0 for all neighboring vertices
x, y ∈ Ω1 , that is, for such vertices we have f (x) = f (y). Fix a vertex x ∈ Ω.
Since Ω is finite and V is infinite, the complement Ωc is non-empty. Choose y ∈ Ωc .
Since (V, μ) is connected, there is a path connecting x and y, say {xi }ni=0 , where
x0 = x and xn = y. Then let k be the minimal index such that xk ∈ Ωc (such k
exists because x0 ∈ Ω, while xn ∈ Ωc ). Since xk−1 ∈ Ω and xk−1 ∼ xk , it follows
that xk ∈ Ω1 . Hence, all the vertices in the path x0 ∼ x1 ∼ ... ∼ xk−1 ∼ xk belong
to Ω1 , whence we conclude by the above argument that
f (x0 ) = f (x1 ) = ... = f (xk ) .
Since f (xk ) = 0 it follows that f (x) = f (x0 ) = 0. Hence, f ≡ 0 in Ω, which
contradicts the definition of an eigenfunction. This proves that λ1 (Ω) > 0.
To prove that λ1 (Ω) ≤ 1, we use the trace of the operator LΩ . On the one
hand, we have
trace (LΩ ) = λ1 (Ω) + ... + λN (Ω) ≥ N λ1 (Ω) . (4.7)

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
76 4. EIGENVALUES ON INFINITE GRAPHS

On the other hand, since



LΩ f (x) = f (x) − P (x, y)f (y),
y
 
the matrix of the operator LΩ in the basis 1{x} x∈Ω has on the diagonal the values
1 − P (x, x) . It follows that

trace (LΩ ) = (1 − P (x, x)) ≤ N. (4.8)
x∈Ω

Comparing (4.7) and (4.8), we obtain λ1 (Ω) ≤ 1.


(b) Let f be an eigenfunction with the eigenvalue λN (Ω). Then we have simi-
larly to (4.6)
1
 2
x,y∈V (∇xy f ) μxy
λN (Ω) = R (f ) = 2
 2
.
x∈V f (x)μ(x)
Applying (4.3) to the function |f | , we obtain

1
x,y∈V (∇xy |f |)2 μxy
λ1 (Ω) ≤ R (|f |) = 2
 .
x∈V f 2 (x)μ(x)
Since

(∇xy f )2 + (∇xy |f |)2 = (f (x) − f (y))2 + (|f (x)| − |f (y)|)2 ≤ 2 f 2 (x) + f (y)2
it follows that
 
x,y∈V f 2 (x) + f (y)2 μxy
λ1 (Ω) + λN (Ω) ≤ 
x∈V f 2 (x)μ(x)
  2
2 y∈V f (x)μxy
= 
x∈V

x∈V f 2 (x)μ(x)

2 x∈V f 2 (x)μ (x)
=  2
= 2.
x∈V f (x)μ(x)

This together with part (a) implies (4.5).


(c) If Ω increases, then the space FΩ also increases. Clearly, the infimum of
the functional R (f ) over a larger set must be smaller. Hence, by (4.3), λ1 (Ω)
decreases. 

4.2. Cheeger’s inequality


Recall that, for any subset Ω of V , the edge boundary ∂Ω is defined by
∂Ω = {xy ∈ E : x ∈ Ω, y ∈ Ωc } .
Also, for any subset S of the edge set E, its measure is defined by

μ (S) = μξ .
ξ∈S

Definition 4.7. For any finite subset Ω ⊂ V , define its Cheeger constant by
μ (∂U )
h (Ω) = inf ,
U⊂Ω μ (U )
where the infimum is taken over all non-empty subsets U of Ω.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.2. CHEEGER’S INEQUALITY 77

In other words, h (Ω) is the largest constant such that the following inequality
is true
μ (∂U ) ≥ h (Ω) μ (U ) (4.9)
for any non-empty subset U of Ω.
Theorem 4.8. (Cheeger’s inequality) We have
1 2
λ1 (Ω) ≥ h (Ω) .
2
The proof is similar to the case of finite graphs (cf. Theorem 3.3). We start
with the following lemma.
Lemma 4.9. For any non-negative function f ∈ FΩ , the following is true:
 
|∇ξ f | μξ ≥ h (Ω) f (x) μ (x) . (4.10)
ξ∈E x∈V

Proof. By the co-area formula of Lemma 3.4, we have


  ∞
|∇ξ f | μξ ≥ μ(∂Ut ) dt,
ξ∈E 0

where Ut = {x ∈ V : f (x) > t} . Since Ut ⊂ Ω for non-negative t, we obtain by


(3.2)
μ (∂Ut ) ≥ h (Ω) μ (Ut ) ,
whence
  ∞
|∇ξ f | μξ ≥ h (Ω) μ (Ut ) dt.
ξ∈E 0

On the other hand, by the identity (3.8) from the proof of Lemma 3.5, we have
 ∞ 
μ (Ut ) dt = f (x) μ (x) ,
0 x∈V

which implies (4.10). 


Proof of Theorem 4.8. Let f be the eigenfunction of λ1 (Ω). Rewrite (4.6)
in the form  2
ξ∈E |∇ξ f | μξ
λ1 (Ω) =  2
.
x∈V f (x) μ (x)
Hence, to prove (4.10), it suffices to verify that
 2 h (Ω)2  2
|∇ξ f | μξ ≥ f (x) μ (x) . (4.11)
2
ξ∈E x∈V

Applying (4.10) to function f 2 , we obtain


"  " 
"∇ξ f 2 " μξ ≥ h (Ω) f 2 (x) μ (x) . (4.12)
ξ∈E x∈V

The same computation as in the proof of Theorem 3.3 shows that


⎛ ⎞1/2
"  "  
"∇ξ f 2 " μξ ≤ ⎝2 |∇ξ f |2 μξ f 2 (x)μ (x)⎠ .
ξ∈E ξ∈E x∈V

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
78 4. EIGENVALUES ON INFINITE GRAPHS

Combining this with (4.12) yields


⎛ ⎞1/2 1/2
  
f 2 (x) μ (x) ≤ ⎝2 |∇ξ f | μξ ⎠
2
h (Ω) f 2 (x)μ (x) .
x∈V ξ∈E x∈V

 1/2
Dividing by x∈V f 2 (x) μ (x) and taking square, we obtain (4.11). 

4.3. Isoperimetric and Faber-Krahn inequalities


Definition 4.10. We say that a weighted graph (V, μ) satisfies the isoperimet-
ric inequality with a function Φ (s) if, for any finite non-empty subset Ω ⊂ V ,
μ (∂Ω) ≥ Φ (μ (Ω)) . (4.13)
We always assume that the function Φ (s) is non-negative and is defined for all
s ≥ inf μ (x) , (4.14)
x∈V

so that the value μ (Ω) is in the domain of Φ for all non-empty subsets Ω ⊂ V .
Example 4.11. If (V, E) is an infinite connected graph and μ is a simple weight
on (V, E), then (V, μ) satisfies the isoperimetric inequality with function Φ (s) ≡ 1,
because any finite subset Ω of V has at least one edge connecting Ω with Ωc (because
of the connectedness of (V, E)).
Note that, for the graph Z, the sharp isoperimetric function is Φ (s) ≡ 2. As
we will show below in Section 4.5, Zm satisfies the isoperimetric inequality with
m−1
function Φ (s) = cm s m with some constant cm > 0.
The relation between isoperimetric inequalities and the Dirichlet eigenvalues is
given by the following theorem. As before, we assume that (V, μ) is a locally finite
connected weighted graph with an infinite set of vertices.
Theorem 4.12. Assume that (V, μ) satisfies the isoperimetric inequality with
function Φ (s) such that Φ (s) /s is decreasing in s. Then, for any finite non-empty
subset Ω ⊂ V ,
λ1 (Ω) ≥ Λ (μ (Ω)) , (4.15)
where
 2
1 Φ (s)
Λ (s) = .
2 s
Definition 4.13. Let Λ be a non-negative function defined on the domain
(4.14). We say that (V, μ) satisfies the Faber-Krahn inequality with function Λ (s)
if (4.15) holds with this function for any finite non-empty subset Ω ⊂ V .
Hence, the isoperimetric inequality with function Φ (s) implies the Faber-Krahn
 2
inequality with function Λ (s) = 12 Φ(s)
s .

Example 4.14. As it follows from Example 4.11 and Theorem 4.12, any con-
nected infinite graph with a simple weight satisfies the Faber-Krahn inequality with
function Λ (s) = 2s12 . The lattice graph Zm satisfies the Faber-Krahn inequality
with function Λ (s) = cm s−2/m for some cm > 0.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.4. ESTIMATING λ1 (Ω) VIA INRADIUS 79

Proof of Theorem 4.12. We have, for any non-empty subset U ⊂ Ω,


Φ (μ (U )) Φ (μ (Ω))
μ (∂U ) ≥ Φ (μ (U )) = μ (U ) ≥ μ (U ) .
μ (U ) μ (Ω)
It follows that
Φ (μ (Ω))
h (Ω) ≥ ,
μ (Ω)
whence by Theorem 4.8
 2
1 2 1 Φ (μ (Ω))
λ1 (Ω) ≥ h (Ω) ≥ = Λ (μ (Ω)) .
2 2 μ (Ω)


4.4. Estimating λ1 (Ω) via inradius


For any x ∈ V and r ≥ 0, define the ball Br (x) by
Br (x) = {y ∈ V : d (x, y) ≤ r} .
For any finite non-empty set Ω ⊂ V , denote by r(Ω) its inradius, that is,
r(Ω) = max {r ≥ 0 : Br (x) ⊂ Ω for some x ∈ Ω} .
The following theorem is an analogue of Theorem 3.11 for infinite graphs.
Theorem 4.15. For any finite non-empty set Ω ⊂ V , we have
c
λ1 (Ω) ≥ , (4.16)
(1 + r (Ω)) μ (Ω)
where c = inf x∼y μxy (for example, c = 1 for a simple weight).
Proof. By (4.3), it suffices to prove that, for any non-zero function f ∈ FΩ ,
(LΩ f, f ) c
≥ .
(f, f ) (1 + r (Ω)) μ (Ω)
Without loss of generality, we can assume that max |f | = 1. Then we have

(f, f ) = f 2 (x)μ(x) ≤ μ(Ω). (4.17)
x∈V

In order to estimate (LΩ f, f ), consider a point x0 ∈ Ω such that |f (x0 )| = 1 and


let n be the largest integer such that the ball Bn (x0 ) is a subset of Ω. The ball
Bn+1 (x0 ) contains a vertex that is not in Ω. Connecting it to x0 by a shortest
path, we obtain a path
x0 ∼ x1 ∼ ... ∼ xn ∼ xn+1
such that xn+1 ∈ Ωc (see Figure 4.1). Therefore, we have
 n
(LΩ f, f ) ≥ (f (xi ) − f (xi+1 ))2 μxi xi+1
i=0
2
c 
n
c
≥ |f (xi ) − f (xi+1 )| ≥ , (4.18)
n+1 i=0
n+1
where we have used the inequality

n−1
|f (xi ) − f (xi+1 )| ≥ |f (x0 ) − f (xn )| = 1.
i=0

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
80 4. EIGENVALUES ON INFINITE GRAPHS

Figure 4.1. The path {xk }n+1


k=0

Since n ≤ r(Ω), it follows from (4.17) and (4.18) that


(LΩ f, f ) c c
≥ ≥ ,
(f, f ) (n + 1) μ(Ω) (r(Ω) + 1) μ(Ω)
which finishes the proof. 
Theorem 4.16. Assume that
c := inf μxy > 0. (4.19)
x∼y

Suppose that, for all points x ∈ V and all integers r ≥ 0, we have


μ (Br (x)) ≥ V(r), (4.20)
where V is a continuous positive strictly increasing function on [0, +∞). Then, for
any non-empty finite set Ω ⊂ V ,
c
λ1 (Ω) ≥ −1 . (4.21)
(V (μ(Ω)) + 1) μ(Ω)
That is, (V, μ) satisfies the with the function
c
Λ (s) = −1 . (4.22)
(V (s) + 1) s
Proof. Fix a finite set Ω ⊂ V and denote s = μ (Ω), r = r(Ω). For some
point x ∈ Ω, we have Br (x) ⊂ Ω, whence we obtain by (4.20) that
V(r) ≤ μ (Br (x)) ≤ μ(Ω) = s,
−1
and, hence, r ≤ V (s). By Theorem 4.15, we obtain
c c
λ1 (Ω) ≥ ≥ ,
(1 + r (Ω)) μ (Ω) (1 + V −1 (s)) s
which was to be proved. 
Corollary 4.17. Under the hypothesis (4.19) assume that, for all x ∈ V and
all integers r ≥ 1,
μ (Br (x)) ≥ ar m (4.23)
with some a, m > 0. Then the Faber-Krahn inequality is satisfied with the function
Λ (s) = bs−
m+1
m , (4.24)

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.4. ESTIMATING λ1 (Ω) VIA INRADIUS 81

where b = b (a, c, m) > 0.


Proof. Indeed, the hypothesis (4.20) is satisfied with the function
V (r) = ar m . (4.25)
Since V −1 (s) = (s/a)
1/m
, we obtain by Theorem 4.16 that the Faber-Krahn in-
equality is satisfied with the function
c
Λ (s) =   .
1/m
(s/a) +1 s
Finally, since s = μ (Ω) ≥ c, it follows that the Faber-Krahn inequality holds also
with the function (4.24) for an appropriate b > 0. 
Example 4.18. Let us verify that any infinite connected graph (V, μ) with
condition (4.19) satisfies the Faber-Krahn inequality with function
Λ (s) = bs−2 (4.26)
2
with b = b (c) > 0 (in fact, b = c /2; see also Example 4.14). Indeed, for any ball
Br (x) of an integer radius r ≥ 1, there is a point y ∈ V such that d (x, y) = r + 1.
Let {xk }r+1
k=0 be a shortest path connecting x = x0 and y = xr+1 . Then all the
edges xk−1 xk with k = 1, ..., r belong to Br (x) whence

r
μ (Br (x)) ≥ μxk−1 xk ≥ cr.
k=1

Hence, the hypothesis (4.23) of Corollary 4.17 is satisfied with m = 1 and a = c.


We conclude that the Faber-Krahn inequality is satisfied with the function (4.24)
with m = 1, that is, with the function (4.26).
Example 4.19. Let us show that, in general, the Faber-Krahn inequality with
function (4.24) cannot be obtained from the isoperimetric inequality by means of
Theorem 4.12. Indeed, consider a Vicsek tree that is an infinite graph obtained by
a self-similar fractal construction as on Figure 4.2. More precisely, the Vicsek tree

is the union of an infinite sequence of finite graphs {Ωk }k=0 , where Ω0 is the cross
of 5 vertices as on Figure 4.2, and Ωk+1 is obtained by taking the union of Ωk with
its four copies, again as on Figure 4.2.
Let μ be a simple weight on the Vicsek tree. It is easy to prove that, for all
vertices x and for all integers r ≥ 1,
μ (Br (x))  r m ,
where
ln 5
m=
ln 3
(indeed, each step of construction increases the diameter by factor 3 whereas the
measure increases by factor 5). Hence, the hypothesis (4.23) of Corollary 4.17 is
satisfied with this value of m. It follows that the Faber-Krahn inequality is satisfied
with the function (4.24), that is, with
Λ (s) = bs− ln 5 .
ln 8
(4.27)
Let us mention for comparison that the isoperimetric inequality
μ (∂Ω) ≥ Φ (μ (Ω))

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
82 4. EIGENVALUES ON INFINITE GRAPHS

Figure 4.2. Construction of the Vicsek tree

can be satisfied on the Vicsek tree only with the function


Φ (s) = const,
because there are subsets Ω with arbitrarily large measure and with μ (∂Ω) = 4.
Hence, Theorem 4.12 can only ensure the Faber-Krahn inequality with the function
Λ (s) = const s−2 that is worse than (4.27).
For further properties of the Vicsek tree see Examples 5.13 and 5.22 below (as
well as [9], [13], [81], [82]).

4.5. Isoperimetric inequalities on Cayley graphs


Let G be an infinite group and S be a finite symmetric edge generating subset
of G. Let (V, E) be the Cayley graph (G, S). We always assume that the graph
(G, S) is connected, that is, the set S generates the entire group G.
Let μ be a simple weight on (V, E). Recall that the degree of every vertex in
(V, E) is |S| so that μ (x) = |S| ≥ 2.
Denote by e the neutral element of G and define the balls centered at e:
Br = {x ∈ V : d (x, e) ≤ r} (4.28)
for any r ≥ 0.
Theorem 4.20. Let V (r) be a continuous non-negative strictly increasing func-
tion on [0, +∞) such that V (r) → ∞ as r → ∞. Assume that, for all non-negative
integers r,
μ (Br ) ≥ V (r) . (4.29)
Then the Cayley graph (G, S) satisfies the isoperimetric inequality with function
s
Φ (s) = c0 −1 , s ≥ |S| , (4.30)
V (2s)
 1 .
where c0 = 1
4|S| +1
V −1 (2|S|)

Theorem 4.20 was proved by Coulhon and Saloff-Coste in [51].

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.5. ISOPERIMETRIC INEQUALITIES ON CAYLEY GRAPHS 83

Remark 4.21. The domain of the inverse function V −1 (s) is [V (0) , +∞). Since
V (0) ≤ μ (B0 ) = μ (e) = |S|, the function V −1 (s) is defined on [|S| , +∞). Hence,
the function V −1 (2s) is defined on [ 12 |S| , +∞) and, thus, is strictly positive on
[|S| , +∞). It follows that Φ (s) is defined on [|S| , +∞) and, consequently, Φ (μ (Ω))
is defined for all non-empty finite subsets Ω ⊂ V .
Example 4.22. In Zm we have μ (Br )  r m for r ≥ 1 so that we can take
V (r) = cr m . Then V −1 (s) = (s/c)1/m , and we conclude that Zm satisfies the
isoperimetric inequality with function
s m−1
Φ (s) = cm 1/m = cm s m ,
s
where cm is a positive constant depending on m (as it was already mentioned
above).
Example 4.23. There is a large class of groups with an exponential volume
growth, that is, the groups where (4.29) holds with V (r) = exp (c r). For such
groups Theorem 4.20 guarantees the isoperimetric inequality with function Φ (s) =
c lnss .
Combining Theorems 4.12 and 4.20, we obtain the following:
Corollary 4.24. Under the conditions of Theorem 4.20, the Cayley graph
(G, S) satisfies the Faber-Krahn inequality with the function
 2
1
Λ (s) = c
V −1 (2s)
with some positive constant c > 0.
Example 4.25. In Zm we obtain the Faber-Krahn inequality with function
Λ (s) = cm s−2/m with some cm > 0. On the groups with an exponential volume
growth, we obtain Λ (s) = c (ln s)−2 .
Proof of Theorem 4.20. Let us introduce some notation that will be used
only in this proof. For any function f on V with finite support, set

f  := |f (x)| .
x∈V

Also, consider |∇ξ f | as a function of ξ ∈ E, and set


 1 
∇f  := |∇ξ f | = |f (x) − f (y)| .
2
ξ∈E x,y∈V : x∼y

We will obtain certain lower bounds for ∇f  which will then imply a lower estimate
for μ (∂Ω). Indeed, for f = 1Ω we have
1, ξ ∈ ∂Ω
|∇ξ f | = .
0, ξ ∈
/ ∂Ω
It follows that
∇f  = |∂Ω| = μ (Ω) , (4.31)
where we have used that μξ = 1 for any edge ξ.
For any z ∈ G, consider the function fz on G that is defined by
fz (x) = f (xz) .

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
84 4. EIGENVALUES ON INFINITE GRAPHS

That is, fz is a spatial shift of f by the right multiplication by z.


Claim 1. If s ∈ S, then f − fs  ≤ 2 ∇f  .
Recall that the edges x ∼ y are determined by the relation x−1 y ∈ S that is
equivalent to y = xs for some s ∈ S. Hence, for any s ∈ S, we have x ∼ xs and
 
f − fs  = |f (x) − f (xs)| ≤ |f (x) − f (y)| = 2 ∇f  . (4.32)
x∈V x,y∈V : x∼y

Claim 2. If z ∈ Bn , then f − fz  ≤ 2n ∇f .


Any z ∈ Bn can be represented in the form z = s1 s2 ...sk where si ∈ S and
k ≤ n. Then we have

f − fz  = |f (x) − f (xz)|
x∈V

≤ |f (x) − f (xs1 )|
x∈V

+ |f (xs1 ) − f (xs1 s2 )|
x∈V
+...

+ |f (xs1 ...sk−1 ) − f (xs1 ...sk )| .
x∈V

The first sum on the right hand side is bounded by 2 ∇f  as in (4.32). In the
second sum make change y = xs1 so that it becomes

|f (y) − f (ys2 )| ,
y∈V

which is bounded by 2 ∇f  as in (4.32). In the same way, we have, for any
i = 1, ..., k,
 
|f (xs1 ...si−1 ) − f (xs1 ...si )| = |f (y) − f (ysi )| ≤ 2 ∇f  .
x∈V y∈V

Hence, it follows that


f − fz  ≤ 2k ∇f  ≤ 2n ∇f  .

Claim 3. For any positive integer n and any function f on V with finite support,
define a new function An f (averaging over n-balls) on V by
1 
An f (x) = f (y) .
|Bn |
{y:d(x,y)≤n}

Then the following inequality is true:


f − An f  ≤ 2n ∇f  . (4.33)

The condition d (x, y) ≤ n means that there is a path between x and y of at


most n edges. Since moving along an edge means the right multiplication by some
s ∈ S, we obtain that
y = xs1 ...sk

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.5. ISOPERIMETRIC INEQUALITIES ON CAYLEY GRAPHS 85

for some k ≤ n and s1 , ..., sk ∈ S. Setting z = s1 ...sk we obtain that d (x, y) ≤ n is


equivalent to y = xz for some z ∈ Bn . Hence, we have

f − An f  = |f (x) − An f (x)|
x∈V
" "
 "" 1  "
"
= "f (x) − f (xz)"
" |Bn | "
x∈V z∈Bn
" "
 "" 1  "
"
= " (f (x) − f (xz))"
" |Bn | "
x∈V z∈Bn
 1 
≤ |f (x) − f (xz)|
|Bn |
x∈V z∈Bn
1  
= |f (x) − f (xz)|
|Bn |
z∈Bn x∈V
1 
= f − fz 
|Bn |
z∈Bn
≤ 2n ∇f  ,
where in the last line we have used Claim 2.
Claim 4. Let Ω be a non-empty finite subset of V , and n be a positive integer such
that2 |Bn | ≥ 2 |Ω|. Then we have
1
μ (∂Ω) ≥ μ (Ω) . (4.34)
4 |S| n

Set f = 1Ω . Then we have, for any x ∈ V ,


1 
An f (x) = f (y)
|Bn |
{y:d(x,y)≤n}
1 
≤ f (y)
|Bn |
y∈V
1 1
= |Ω| ≤ .
|Bn | 2
It follows that
 1
f − An f  ≥ |f (x) − An f (x)| ≥ |Ω| ,
2
x∈Ω
while ∇f  = μ (∂Ω) (cf. (4.31). Comparing the above two lines and using (4.33)
we obtain
1
μ (∂Ω) ≥ |Ω| .
4n
Noticing that μ (Ω) = |S| |Ω| (because the degree of any vertex is |S|), we obtain
(4.34).
Claim 5. For any non-empty finite set Ω ⊂ V , we have μ (∂Ω) ≥ Φ (μ (Ω)) where
Φ is defined by ( 4.30).

2 Note that such n always exists because |Bn | → ∞ as n → ∞.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
86 4. EIGENVALUES ON INFINITE GRAPHS

Choose n to be a minimal positive integer with the property that


V (n) ≥ 2μ (Ω) .
This implies μ (Bn ) ≥ 2μ (Ω) which is equivalent to |Bn | ≥ 2 |Ω| so that (4.34)
holds. The minimality of n implies that either n = 1or n > 1 and
V (n − 1) < 2μ (Ω) .
In both cases, we obtain that
n ≤ 1 + V −1 (2μ (Ω)) .
Since Ω contains at least one vertex and the measure of this vertex is |S|, we have
V −1 (2μ (Ω)) ≥ V −1 (2 |S|) =: C > 0,
whence  
1
n≤ + 1 V −1 (2μ (Ω)) .
C
Substituting into (4.34), we obtain
1 μ (Ω)
μ (∂Ω) ≥ 1 = Φ (μ (Ω)) ,
4 |S| C + 1 V −1 (2μ (Ω))
which was to be proved. 
Further results on isoperimetric inequalities on graphs can be found in [35],
[36], [118], [119], [124], [139].

4.6. Solving the Dirichlet problem by iterations


Let Ω be a finite non-empty subset of V . Consider the Dirichlet problem
LΩ u = f, (4.35)
where f ∈ FΩ is a given function and u ∈ FΩ is to be found. This problem was
considered in Section 1.5 and it was shown in Theorem 1.38 that it has a unique
solution. Now we can prove it much easier. Indeed, the operator LΩ has the
spectrum in (0, 2). In particular, 0 is not in the spectrum, which means that this
operator is invertible, which is equivalent to the unique solvability of (4.35).
In what follows, we will use the notion of the norm of a linear operator. Let
V be a finite dimensional linear space with an inner product (·, ·). Recall that the
norm of any vector v ∈ V is defined then by v = (v, v). Given a linear operator
A : V → V, define its norm by
Av
A = sup .
v∈V\{0} v

In other words, A is the smallest value of a constant C that satisfies the inequality
Av ≤ C v
for all v ∈ V. It is easy to verify the following properties of the norm (although we
will not use them):
(1) A + B ≤ A + B.
(2) λA = |λ| A.
(3) AB ≤ A B.
For symmetric operators, there is the following relation to the spectrum:

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
4.6. SOLVING THE DIRICHLET PROBLEM BY ITERATIONS 87

Lemma 4.26. If the operator A is symmetric, then


A = max |α| . (4.36)
α∈spec A

Proof. We have
 2
2 (Av, Av) A v, v
A = sup = sup .
v∈V\{0} (v, v) v∈V\{0} (v, v)

The expression  2
A v, v
R (v) =
(v, v)
is the Rayleigh quotient of the operator A2 , and sup R (v) is equal to the maximal
eigenvalue of A2 . The eigenvalues of A2 have the form 2
 2 α where α is2 an eigenvalue
of A. Hence, the maximal eigenvalue of A is max α = (max |α|) where α runs
2

over all the eigenvalues of A, whence (4.36) follows. 


Let us investigate the rate of convergence in Jacobi’s method of solving (4.35).
Set PΩ = id −LΩ and rewrite (4.35) in the form
u = PΩ u + f. (4.37)
Let us construct successive approximations un ∈ FΩ to the solution as follows3 :
u0 = 0 and
un+1 = PΩ un + f. (4.38)
Theorem 4.27. The following inequality is true for all positive integers n:
un − u ≤ αn u ,
where
α = 1 − λ1 (Ω) .
Consequently, un → u as n → ∞.
Proof. Indeed, subtracting (4.37) from (4.38), we obtain
un+1 − u = PΩ (un − u) ,
whence
un+1 − u ≤ PΩ  un − u . (4.39)
The eigenvalues of PΩ are αk = 1 − λk (Ω) . The sequence {αk } is decreasing, and
by Theorem 4.5 we have that
• 0 ≤ α1 < 1 (follows from 0 < λ1 (Ω) ≤ 1)
• α1 + αN ≥ 0 (follows from λ1 (Ω) + λN (Ω) ≤ 2), that is, αN ≥ −α1 .
Hence, we have
spec PΩ ⊂ [αN , α1 ] ⊂ [−α1 , α1 ]
which implies max |αk | = α1 . Denoting α1 by α, we obtain from (4.39) that
un+1 − un  ≤ α un − u ,
whence it follows by induction that
un − u ≤ αn u0 − u = αn u ,
3 Subtracting un from the both sides of (4.38), we obtain a discrete heat equation:
un+1 − un = Δμ un + f.

Licensed to AMS.
License or copyright restrictions may apply to redistribution; see https://www.ams.org/publications/ebooks/terms
88 4. EIGENVALUES ON INFINITE GRAPHS

which was to be proved. Since α < 1, we obtain that indeed un − u → 0 as


n → ∞, that is, un → u. 
Definition 4.28. Given ε > 0, define the rate of convergence of {un } to u by
T = min {n : αn ≤ ε} ,
that is,
ln 1ε
T ≈ .
ln α1
In other words, after T iterations in Jacobi’s method, we obtain that
un − u ≤ ε u .
The value of ε should be chosen to have ε << 1 so that un − u is a small fraction
of u and un can be considered as a good approximation for u. It is standard to
set ε = e−1 . With this convention, we obtain
1 1
T ≈ 1 = .
ln α ln 1−λ1 (Ω)
1

Since ln ≥ λ1 (Ω) (indeed, it follows from e−λ ≥ 1−λ), we obtain the upper
1
1−λ1 (Ω)
bound for the convergence rate
1
T ≤ . (4.40)
λ1 (Ω)
Example 4.29. Let Ω be a non-empty finite subset of Z^m and N = |Ω|. Since Z^m satisfies the Faber-Krahn inequality with function Λ (s) = c_m s^{−2/m}, it follows that
λ1 (Ω) ≥ c N^{−2/m},
for some positive constant c = c (m). Therefore, we obtain from (4.40)
T ≤ C N^{2/m}.
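As an illustration of the iteration (4.38) and of the bound (4.40), here is a minimal numerical sketch (ours, not part of the original text; it assumes Ω = {1, ..., N} in Z with a simple weight, so that PΩ is the tridiagonal matrix with entries 1/2, and it uses the Euclidean norm, which on this graph differs from the norm of L²(Ω, μ) only by a constant factor).

import numpy as np

N = 50
P = np.zeros((N, N))                    # P_Omega for Omega = {1,...,N} in Z, simple weight
for i in range(N - 1):
    P[i, i + 1] = P[i + 1, i] = 0.5

f = np.ones(N)                          # any right-hand side of u = P_Omega u + f
u = np.linalg.solve(np.eye(N) - P, f)   # exact solution of (4.35)

alpha = np.max(np.abs(np.linalg.eigvalsh(P)))   # alpha = 1 - lambda_1(Omega)
un = np.zeros(N)                        # u_0 = 0
n = 0
while np.linalg.norm(un - u) > np.exp(-1) * np.linalg.norm(u):
    un = P @ un + f                     # the Jacobi iteration (4.38)
    n += 1
print("T =", n, "  bound 1/lambda_1(Omega) =", 1 / (1 - alpha))

The observed number of iterations T stays below the bound 1/λ1 (Ω) predicted by (4.40).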


CHAPTER 5

Estimates of the heat kernel

In this Chapter, (V, μ) is a locally finite weighted graph such that μ (x) > 0 for
all x ∈ V .

5.1. The notion and basic properties of the heat kernel


Recall that Pn (x, y) is the n-step transition function of the random walk on
(V, μ), that can be determined inductively as follows: P1 (x, y) = P (x, y) and

Pn (x, y) = Σ_{z∈V} Pn−1 (x, z) P (z, y)

(cf. Proposition 1.28). Furthermore, Pn (x, y) is reversible with respect to measure


μ (x), that is
Pn (x, y) μ (x) = Pn (y, x) μ (y) .
Definition 5.1. The function
pn (x, y) := Pn (x, y) / μ (y)
is called the heat kernel of (V, μ) (or the transition density of the random walk).
The reversibility condition clearly implies that pn (x, y) is symmetric in x, y, that is,
pn (x, y) = pn (y, x).   (5.1)
Other useful properties of the heat kernel that follow from the corresponding properties of the transition function are as follows:
P^n f (x) = Σ_{y∈V} pn (x, y) f (y) μ (y),
Σ_{y∈V} pn (x, y) μ (y) ≡ 1,
pn+m (x, y) = Σ_{z∈V} pn (x, z) pm (z, y) μ (z).   (5.2)
For n = m and x = y, the latter identity implies that
p2n (x, x) = Σ_{z∈V} pn (x, z)² μ (z).   (5.3)

Lemma 5.2. We have, for all x, y ∈ V and n, m ∈ N,
pn+m (x, y) ≤ (p2n (x, x) p2m (y, y))^{1/2}.   (5.4)

Proof. Indeed, it follows from the symmetry (5.1), the semigroup identities (5.2), (5.3) and the Cauchy-Schwarz inequality that
pn+m (x, y) = Σ_{z∈V} pn (x, z) pm (z, y) μ (z) ≤ ( Σ_{z∈V} pn (x, z)² μ (z) )^{1/2} ( Σ_{z∈V} pm (z, y)² μ (z) )^{1/2}
  = (p2n (x, x) p2m (y, y))^{1/2}. □

Lemma 5.3. On any Cayley graph (G, S) with a simple weight, the value of
pn (x, x) does not depend on x, that is, pn (x, x) = pn (y, y) for all x, y.

Proof. Let us show that the heat kernel is invariant under the left multipli-
cation, that is,
pn (x, y) = pn (zx, zy) (5.5)

for all x, y, z ∈ G, which will imply for y = x and z = x−1 that pn (x, x) = pn (e, e) .
Recall that x ∼ y is equivalent to x^{−1} y ∈ S. Since
(zx)^{−1} (zy) = x^{−1} z^{−1} z y = x^{−1} y,
we see that x ∼ y if and only if zx ∼ zy.


For n = 1 we have
p1 (x, y) = P (x, y) / μ (y) = μxy / (μ (x) μ (y)).
If x, y are not neighbors, then p1 (x, y) = 0. If x ∼ y, then it follows that
p1 (x, y) = 1 / (deg (x) deg (y)) = 1 / |S|².

Since the right hand side is the same for all couples x ∼ y, we obtain (5.5) for
n = 1. Let us make the inductive step from n − 1 to n:

pn (zx, zy) = Σ_{w∈G} pn−1 (zx, w) p1 (w, zy) μ (w)
  = Σ_{u∈G} pn−1 (zx, zu) p1 (zu, zy) μ (u)
  = Σ_{u∈G} pn−1 (x, u) p1 (u, y) μ (u)
  = pn (x, y). □

One of the most interesting problems on infinite graphs is the rate of conver-
gence of pn (x, y) to 0 as n → ∞. The question amounts to obtaining upper and
lower estimates of pn (x, y) for large n, that will be considered in this Chapter.


5.2. One-dimensional simple random walk


Let (V, μ) be Z with a simple weight. Since μ (x) = 2, we have pn (x, y) = ½ Pn (x, y). Since Z is shift invariant, pn (x, y) is also shift invariant, that is, pn (x, y) = pn (x − z, y − z) for any integer z. In particular, for z = x, we obtain pn (x, y) = pn (0, y − x).
In this section we investigate pn (0, x) as a function of x and n. A first obser-
vation is that
pn (0, x) = 0 if |x| > n,
since there is no path of length n from 0 to x. It follows from Exercise 17 that
pn (0, x) = 2^{−(n+1)} C(n, (x+n)/2)  if |x| ≤ n and x ≡ n mod 2, and pn (0, x) = 0 otherwise,   (5.6)
where C(n, m) denotes the binomial coefficient "n choose m". Using Stirling’s formula
n! ∼ √(2πn) (n/e)^n  as n → ∞,   (5.7)
and assuming that n is even, we obtain
pn (0, 0) = 2^{−(n+1)} C(n, n/2)
  = 2^{−(n+1)} n! / ((n/2)!)²
  ∼ 2^{−(n+1)} √(2πn) (n/e)^n / ( πn (n/(2e))^n )
  = 1/√(2πn),
so that
pn (0, 0) ∼ 1/√(2πn)  as n → ∞, n even.   (5.8)
In particular, pn (0, 0) → 0 as n → ∞. Since pn (x, x) = pn (0, 0) for any x ∈ Z, it
follows from (5.4) that also pn (x, y) → 0 as n → ∞ for all x, y ∈ Z. Below we get
a more precise information about behavior of pn (0, x).
Given two sequences {an} and {bn} of positive numbers, we write an ≃ bn (and say that an is comparable to bn) if there exists a constant C ≥ 1 such that
C^{−1} ≤ an/bn ≤ C  for all n.
Equivalently, an ≃ bn means that ln an − ln bn remains bounded as n → ∞. Clearly, the equivalence an ∼ bn implies an ≃ bn. For example, (5.7) implies that, for all n ≥ 1,
n! ≃ n^{n+1/2} e^{−n}.   (5.9)
In fact, for all integers n ≥ 1, we have
√(2π) n^{n+1/2} e^{−n} ≤ n! ≤ C √(2π) n^{n+1/2} e^{−n},   (5.10)
where C = e^{1/12} ≈ 1.0869. Indeed, it is known that
n! = √(2πn) n^n e^{−n} e^{ξ_n}

where
1/(12n + 1) ≤ ξ_n ≤ 1/(12n),
whence (5.10) follows.
Theorem 5.4. For all positive integers n and for all x ∈ Z such that |x| ≤ n and x ≡ n mod 2, the following inequalities hold:
(C2/√n) e^{−(ln 2) x²/n} ≤ pn (0, x) ≤ (C1/√n) e^{−x²/(2n)},   (5.11)
where C1, C2 are some positive constants.
Note that ln 2 ≈ 0.69315 > 1/2.
Proof. Stirling’s formula (5.9) implies, for any integer n ≥ 0,
n! = (n + 1)! / (n + 1) ≃ (n + 1)^{n+1/2} e^{−n}.   (5.12)
Assuming that m is an even non-negative integer and applying (5.12) to n = m/2, we obtain
(m/2)! ≃ ((m + 2)/2)^{(m+1)/2} e^{−m/2} ≃ (m + 2)^{(m+1)/2} (2e)^{−m/2}.
We would like to replace here m + 2 by m + 1. For that observe that
((m + 2)/(m + 1))^{m+1} = (1 + 1/(m + 1))^{m+1} ≤ e,
whence
(m + 2)^{m+1} ≃ (m + 1)^{m+1}
and
(m/2)! ≃ (m + 1)^{(m+1)/2} (2e)^{−m/2}.   (5.13)
Using (5.12) to estimate n! and (5.13) with m = n ± x to estimate ((n ± x)/2)!, we obtain from (5.6)
pn (0, x) = 2^{−(n+1)} C(n, (x+n)/2)
  = 2^{−(n+1)} n! / ( ((n+x)/2)! ((n−x)/2)! )
  ≃ 2^{−n} (n + 1)^{n+1/2} e^{−n} / ( (n + x + 1)^{(n+x+1)/2} (2e)^{−(n+x)/2} (n − x + 1)^{(n−x+1)/2} (2e)^{−(n−x)/2} )   (5.14)
  = 1 / ( √(n + 1) (1 + x/(n+1))^{(n+x+1)/2} (1 − x/(n+1))^{(n−x+1)/2} )
  = (1/√N) · 1 / ( (1 + x/N)^{(N+x)/2} (1 − x/N)^{(N−x)/2} ),   (5.15)
where N = n + 1. Using the Taylor formula for logarithm
ln (1 + α) = α − α²/2 + α³/3 − ...,  −1 < α ≤ 1,


and the fact that |x|/N < 1, we obtain
ln (1 + x/N)^{N+x}
  = (N + x) ln (1 + x/N)
  = (N + x) ( x/N − x²/(2N²) + x³/(3N³) − x⁴/(4N⁴) + x⁵/(5N⁵) − x⁶/(6N⁶) + ... )
  = ( x − x²/(2N) + x³/(3N²) − x⁴/(4N³) + x⁵/(5N⁴) − x⁶/(6N⁵) + ... )
    + ( x²/N − x³/(2N²) + x⁴/(3N³) − x⁵/(4N⁴) + x⁶/(5N⁵) − ... )
  = x + x²/(2N) − x³/(2·3N²) + x⁴/(3·4N³) − x⁵/(4·5N⁴) + x⁶/(5·6N⁵) − ....
Changing here x to −x, we obtain
ln (1 − x/N)^{N−x} = −x + x²/(2N) + x³/(2·3N²) + x⁴/(3·4N³) + x⁵/(4·5N⁴) + x⁶/(5·6N⁵) + ....
Adding up the two expressions and observing that all the odd powers of x cancel out, we obtain
ln [ (1 + x/N)^{(N+x)/2} (1 − x/N)^{(N−x)/2} ]
  = ½ [ ln (1 + x/N)^{N+x} + ln (1 − x/N)^{N−x} ]
  = x²/(2N) + x⁴/(3·4N³) + x⁶/(5·6N⁵) + ...
  = Σ_{k even, k≥0} x^{k+2} / ((k + 1)(k + 2) N^{k+1})
  = (x²/N) Σ_{k even, k≥0} (1/((k + 1)(k + 2))) (x/N)^k.

Substituting into (5.15), we obtain
pn (0, x) ≃ (1/√N) exp( − (x²/N) [ 1/2 + (1/(3·4)) (x/N)² + (1/(5·6)) (x/N)⁴ + ... ] ).   (5.16)
Clearly, this implies the upper bound
pn (0, x) ≤ (C1/√N) exp( − x²/(2N) ).   (5.17)
Let us show that N = n + 1 can be replaced here by n. That √N ≃ √n is obvious, while the comparison
exp( − x²/(2N) ) ≃ exp( − x²/(2n) )   (5.18)
follows from
x²/(2n) − x²/(2N) = x²/(2n(n + 1)) ≤ x²/(2n²) ≤ 1/2.   (5.19)


Hence, (5.17) yields the upper bound in (5.11). To prove the lower bound, observe that by |x|/N < 1,
1/2 + (1/(3·4)) (x/N)² + (1/(5·6)) (x/N)⁴ + ... < 1/(1·2) + 1/(3·4) + 1/(5·6) + ...
  = 1 − 1/2 + 1/3 − 1/4 + 1/5 − 1/6 + ...
  = ln 2,
whence by (5.16)
pn (0, x) ≥ (C2/√N) exp( − (ln 2) x²/N ).
Arguing as above, we can replace here N by n, thus proving the lower bound in (5.11). □
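The two-sided bound (5.11) is easy to test numerically. The following sketch (ours, not from the text) computes pn (0, x) exactly via (5.6) and prints the ratios that (5.11) asserts to be bounded above, respectively bounded away from 0.

from math import comb, exp, sqrt, log

def p(n, x):
    # exact heat kernel of the simple random walk on Z, formula (5.6)
    if abs(x) > n or (x - n) % 2 != 0:
        return 0.0
    return comb(n, (x + n) // 2) / 2 ** (n + 1)

n = 1000
for x in range(0, n + 1, 100):                       # n and x are even, so x = n mod 2
    upper_ratio = p(n, x) * sqrt(n) * exp(x * x / (2 * n))
    lower_ratio = p(n, x) * sqrt(n) * exp(log(2) * x * x / n)
    print(x, round(lower_ratio, 4), round(upper_ratio, 4))

By (5.11), the last column stays bounded above (by C1) and the middle column stays bounded away from 0 (by C2), uniformly in x and n.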

Corollary 5.5. In the domain where |x|/n^{3/4} is bounded, we have the following estimate:
pn (0, x) ≃ (1/√n) exp( − x²/(2n) ).   (5.20)
Moreover, if |x| = o(n^{3/4}) as n → ∞, then
pn (0, x) ∼ (1/√(2πn)) exp( − x²/(2n) ).   (5.21)
Let us recall for comparison that the fundamental solution pt (x, y) of the heat equation ∂u/∂t = ½ ∂²u/∂x² in R admits the following explicit formula:
pt (0, x) = (1/√(2πt)) exp( − x²/(2t) ).
The similarity of this formula and (5.21) is obvious and can be explained by the
fact that pt (x, y) is the transition density of Brownian motion in R that can be
obtained as a scaled limit of the simple random walk.
Proof. The upper bound in (5.20) is the same as in (5.11), so that we need only to prove the lower bound. The expression under the exponential function in (5.16) can be estimated from above as follows:
(x²/N) [ 1/2 + (1/(3·4)) (x/N)² + (1/(5·6)) (x/N)⁴ + ... ]
  = x²/(2N) + x⁴/(3·4N³) + x⁶/(5·6N⁵) + x⁸/(7·8N⁷) + ...
  = x²/(2N) + (x⁴/N³) [ 1/(3·4) + (1/(5·6)) x²/N² + (1/(7·8)) x⁴/N⁴ + ... ]   (5.22)
  ≤ x²/(2N) + c [ 1/(3·4) + 1/(5·6) + 1/(7·8) + ... ]
  < x²/(2N) + c/3,
where c is a constant that bounds x⁴/N³. Substituting this into (5.16) and replacing as before N by n, we obtain the lower bound in (5.20).


To prove (5.21), let us trace the proof of Theorem 5.4 and see in which places the comparison ≃ can be replaced by the asymptotic equivalence ∼. The condition |x| = o(n^{3/4}) implies that n − x → ∞, so that in the Stirling formula for ((n−x)/2)! we can use the equivalence. The same always applies to ((n+x)/2)! and n!. Hence, in (5.14) we obtain asymptotic equivalence, provided we use a correct constant multiple in the right hand side. Therefore, we obtain also asymptotic equivalence in (5.16):
pn (0, x) ∼ (1/√(2πn)) exp( − (x²/N) [ 1/2 + (1/(3·4)) (x/N)² + (1/(5·6)) (x/N)⁴ + ... ] ),   (5.23)
where we use √N ∼ √n, and the constant 1/√(2π) is chosen to match (5.8) for x = 0. Note that this asymptotic expansion takes place whenever n − x → ∞.
As |x|/n^{3/4} → 0, the second term in the right hand side of (5.22) goes to 0. Also, from (5.19) we get x²/n − x²/N → 0. Substituting these estimates into (5.23), we obtain (5.21). □

The following lemma gives another estimate of Pn (0, x), that we will use in the
next section.
Lemma 5.6. We have, for all positive integers r, n,
Σ_{x≥r} Pn (0, x) ≤ exp( − r²/(2n) ).   (5.24)

Proof. Let {Zn}_{n=1}^∞ be a sequence of independent random variables each taking values ±1 with probability 1/2. Then
Xn = Z1 + ... + Zn
is a simple random walk on Z started at 0. Recall that
Pn (0, x) = P (Xn = x),
which implies that
Σ_{x≥r} Pn (0, x) = P (Xn ≥ r).
We have, for any α > 0,
P (Xn ≥ r) = P (e^{αXn} ≥ e^{αr}) ≤ e^{−αr} E e^{αXn}.
Using the independence of the Zk and
E e^{αZk} = ½ (e^α + e^{−α}) = cosh α,
we obtain
E e^{αXn} = E (e^{αZ1} ... e^{αZn}) = E e^{αZ1} ... E e^{αZn} = (cosh α)^n.
Using also that
cosh α = 1 + α²/2! + α⁴/4! + ... ≤ exp( α²/2 ),
we obtain
P (Xn ≥ r) ≤ e^{−αr} (cosh α)^n ≤ exp( −αr + ½ α² n ).


Finally, choosing α to minimize the expression under the exponential, that is, α = r/n, we obtain
P (Xn ≥ r) ≤ exp( − r²/(2n) ),   (5.25)
which was to be proved. □
Let us compare the estimate (5.25) with the central limit theorem. Setting in (5.25) r = s√n, where s is a new positive variable, we obtain
P (Xn ≥ s√n) ≤ exp( − s²/2 ).
Since E Zk = 0 and Var Zk = 1, the central limit theorem yields the following:
lim_{n→∞} P (Xn ≥ s√n) = (1/√(2π)) ∫_s^∞ exp( − x²/2 ) dx.
One can verify that, for large s,
(1/√(2π)) ∫_s^∞ exp( − x²/2 ) dx ≈ (1/(√(2π) s)) exp( − s²/2 ),
which shows that (5.25) cannot be essentially improved.
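A quick numerical comparison (our sketch, not from the text) of the exact tail P (Xn ≥ r) with the bound (5.24):

from math import comb, exp

def tail(n, r):
    # P(X_n >= r) = sum over x >= r with x = n mod 2 of C(n,(x+n)/2)/2^n, cf. (5.6)
    return sum(comb(n, (x + n) // 2) for x in range(r, n + 1) if (x - n) % 2 == 0) / 2 ** n

n = 1000
for r in (50, 100, 200, 400):
    print(r, tail(n, r), exp(-r * r / (2 * n)))

The bound always dominates the exact tail, and for r of order √n the two differ only by a moderate factor, in agreement with the comparison with the central limit theorem made above.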

5.3. Carne-Varopoulos estimate


The main result of this section is the following theorem and its consequences. Consider the Markov operator P as an operator in the Hilbert space
L² (V, μ) := { f : V → R | Σ_{x∈V} f² (x) μ (x) < ∞ },
and observe that P is a symmetric operator and ‖P‖ ≤ 1 (cf. Exercise 18).


Theorem 5.7. Let f, g be two functions from L² (V, μ) and let r be the distance between supp f and supp g. Then
|(P^n f, g)| ≤ 2 ‖f‖ ‖g‖ exp( − r²/(2n) ).   (5.26)
The proof of Theorem 5.7 is based on the technique of Carne [32]. It will be
given below after Lemma 5.9.
Corollary 5.8 (A Carne-Varopoulos estimate). For all x, y ∈ V and positive integers n,
pn (x, y) ≤ (2/√(μ (x) μ (y))) exp( − d² (x, y)/(2n) ).   (5.27)
Proof. Setting in (5.26) f = 1_{x} and g = 1_{y} and noticing that r = d (x, y),
‖f‖ = √μ (x),  ‖g‖ = √μ (y),
and
(P^n f, g) = Σ_{x,y∈V} pn (x, y) f (x) g (y) μ (x) μ (y) = pn (x, y) μ (x) μ (y),
we obtain (5.27). □


Figure 5.1. The Chebyshev polynomial T9 (λ)

The estimate (5.27) was proved by Varopoulos [136] and, by a simpler method,
by Carne [32]. The comparison with the one-dimensional estimate (5.11) shows that
the estimate (5.27) lacks the on-diagonal term that would give the rate of decay
of pn (x, x) as n → ∞. The reason for that is that we do not use in (5.27) any
additional information about the graph, without which the decay of pn (x, x) in n
cannot be obtained (cf. Section 5.4). On the contrary, the decay of pn (x, y) with
respect to d (x, y) is a universal phenomenon that takes place on an arbitrary graph.
For the proof of Theorem 5.7 we use the Chebyshev polynomials Tk that are
defined by the identity
Tk (λ) = cos (k arccos λ) ,
where k is an integer parameter and λ ∈ [−1, 1]. Since Tk ≡ T−k , we restrict so far
our consideration to non-negative k. Setting θ = arccos λ, we obtain

Tk (λ) = cos kθ = Re eikθ = Re (cos θ + i sin θ)k


   
k k
= cosk θ − cosk−2 θ sin2 θ + cosk−4 θ sin4 θ − ...
2 4
   
k k−2  k k−4  2
= λ −
k
λ 1−λ +2
λ 1 − λ2 − ...,
2 4
whence we see that Tk (λ) is indeed a polynomial of λ of degree k. Note that the
leading coefficient in front of λk is equal to
   
k k
1+ + + ... = 2k−1 .
2 4
A distinguished property of Chebyshev polynomials to be used below is that |Tk (λ)|
≤ 1 for all λ ∈ [−1, 1] that is obvious from the definition (see also Figure 5.1).

Lemma 5.9. The following identity holds for all λ ∈ [−1, 1] and all non-negative integers n:
λ^n = Σ_{k=−n}^{n} qn (k) Tk (λ),


where
qn (k) = 2^{−n} C(n, (k+n)/2)  if k ≡ n mod 2, and qn (k) = 0 otherwise.
Proof. As above, let θ be such that λ = cos θ. Setting z = cos θ + i sin θ and observing that z̄ = 1/z, we obtain, for any k ∈ Z,
Tk (λ) = Re z^k = ½ (z^k + z^{−k}).
On the other hand, λ = Re z = ½ (z + 1/z), whence
λ^n = 2^{−n} (z + 1/z)^n = 2^{−n} Σ_{k=0}^{n} C(n, k) z^{n−k} (1/z)^k = 2^{−n} Σ_{k=0}^{n} C(n, k) z^{n−2k}.
It follows that also
λ^n = 2^{−n} Σ_{k=0}^{n} C(n, n−k) z^{−(n−2(n−k))} = 2^{−n} Σ_{l=0}^{n} C(n, l) z^{−(n−2l)}.
Taking the half-sum of the two expressions for λ^n, we obtain
λ^n = 2^{−n} Σ_{k=0}^{n} C(n, k) ( z^{n−2k} + z^{−(n−2k)} ) / 2
  = 2^{−n} Σ_{k=0}^{n} C(n, k) T_{n−2k} (λ)
  = 2^{−n} Σ_{l=−n}^{n} C(n, (n−l)/2) T_l (λ)
  = 2^{−n} Σ_{l=−n}^{n} C(n, (n+l)/2) T_l (λ),
which was to be proved. □
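Lemma 5.9 can also be checked numerically in a few lines (a sketch of ours, using numpy's Chebyshev evaluation; the weights qn (k) are as in the lemma):

import numpy as np
from math import comb
from numpy.polynomial import chebyshev

n = 7
coeffs = np.zeros(n + 1)                 # coefficient of T_j for j = 0,...,n (using T_{-k} = T_k)
for k in range(-n, n + 1):
    if (k + n) % 2 == 0:
        coeffs[abs(k)] += comb(n, (k + n) // 2) / 2 ** n    # q_n(k)

lam = np.linspace(-1, 1, 5)
print(chebyshev.chebval(lam, coeffs))    # sum_k q_n(k) T_k(lambda)
print(lam ** n)                          # agrees up to rounding error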


Proof of Theorem 5.7. Applying the identity of polynomials of Lemma 5.9 to the operator P, we obtain
P^n = Σ_{k=−n}^{n} qn (k) Tk (P).
That ‖P‖ ≤ 1 implies spec P ⊂ [−1, 1]. Since also sup_{[−1,1]} |Tk| ≤ 1, it follows by the spectral mapping theorem that
spec Tk (P) ⊂ Tk (spec P) ⊂ Tk ([−1, 1]) ⊂ [−1, 1].
Hence, we have ‖Tk (P)‖ ≤ 1.
It follows from the above identity that
(P^n f, g) = Σ_{k=−n}^{n} qn (k) (Tk (P) f, g).
Observe that (Tk (P) f, g) = 0 for |k| < r, because Tk is a polynomial of degree |k| and r is the distance between the supports of f and g (cf. the proof of Theorem 3.14). On the other hand, for any k we have
|(Tk (P) f, g)| ≤ ‖Tk (P)‖ ‖f‖ ‖g‖ ≤ ‖f‖ ‖g‖.


Therefore, we obtain
|(P^n f, g)| = | Σ_{r≤|k|≤n} qn (k) (Tk (P) f, g) |
  ≤ Σ_{r≤|k|≤n} qn (k) |(Tk (P) f, g)|
  ≤ ( Σ_{|k|≥r} qn (k) ) ‖f‖ ‖g‖
  = 2 ( Σ_{k≥r} qn (k) ) ‖f‖ ‖g‖.
Observe that, by (5.6),
Σ_{k≥r} qn (k) = Σ_{k≥r} P (Xn = k),
where Xn is the simple random walk on Z. By Lemma 5.6 we obtain
Σ_{k≥r} qn (k) ≤ exp( − r²/(2n) ),
whence
|(P^n f, g)| ≤ 2 ‖f‖ ‖g‖ exp( − r²/(2n) ). □


5.4. On-diagonal upper estimates of the heat kernel


In this section (V, μ) is an infinite locally finite connected weighted graph that
satisfies in addition the conditions
1 ≤ μxy ≤ M for all x ∼ y,  and  deg (x) ≤ D for all x ∈ V,   (5.28)
for some constants M and D. The first condition is trivially satisfied for a sim-
ple weight, the second condition is always satisfied for regular graphs; for Cayley
graphs, in particular.
Lemma 5.10. The conditions (5.28) imply that, for any non-empty finite set
A⊂V,
μ (U1 (A)) ≤ C0 μ (A) , (5.29)
where C0 = C0 (D, M ) .

Proof. Since μ (x) = y∼x μxy , it follows from (5.28) that
1 ≤ μ (x) ≤ M D.
Therefore, for any finite set A, we have
|A| ≤ μ (A) ≤ M D |A| .
Recall that the the r-neighborhood of A is defined by
Ur (A) = {y ∈ V : d (x, y) ≤ r for some x ∈ A} ,


and the balls of radius r are defined by


Br (x) = {y ∈ V : d (x, y) ≤ r} .
It follows that 
Ur (A) = Br (x) .
x∈A
The ball B1 (x) consists of the vertex x and of the vertices y ∼ x so that |B1 (x)| ≤
D + 1. Hence, 
|U1 (A)| ≤ |B1 (x)| ≤ (D + 1) |A| ,
x∈A
whence it follows that
μ (U1 (A)) ≤ M D (D + 1) μ (A) .

The next theorem is the main result of this section.
Theorem 5.11. If (V, μ) satisfies (5.28) and the Faber-Krahn inequality with function
Λ (s) = c s^{−1/α},
for some α, c > 0, then the heat kernel satisfies the following estimate:
pn (x, y) ≤ C n^{−α}   (5.30)
for all x, y ∈ V, n ≥ 1, and some C = C (α, c, C0).
Theorem 5.11 was proved in [81].
Example 5.12. If (V, μ) is a Cayley graph satisfying the volume growth con-
dition
μ (Br ) ≥ ar m , (5.31)
then by Corollary 4.24, (V, μ) satisfies the Faber-Krahn inequality with the function
Λ (s) = bs−2/m ,
where a, b, m > 0. By Theorem 5.11 we conclude that
pn (x, y) ≤ C n^{−m/2}.   (5.32)
Since (5.31) is satisfied in Z^m, we see that the estimate (5.32) holds in Z^m. As we will see later on, the power n^{−m/2} is sharp here (cf. Example 5.20).
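The decay (5.32) can be observed directly. The following sketch (ours; it simply iterates the transition operator of the simple random walk on Z², restricted to a finite window large enough that no mass leaves it) prints pn (0, 0) together with pn (0, 0) · n, which should stabilize for m = 2.

import numpy as np

n_max = 200
size = 2 * n_max + 1                     # window {-n_max,...,n_max}^2; the walk cannot leave it
dist = np.zeros((size, size))
dist[n_max, n_max] = 1.0                 # the walk starts at the origin

for n in range(1, n_max + 1):
    new = np.zeros_like(dist)
    new[1:, :] += 0.25 * dist[:-1, :]    # step +e1
    new[:-1, :] += 0.25 * dist[1:, :]    # step -e1
    new[:, 1:] += 0.25 * dist[:, :-1]    # step +e2
    new[:, :-1] += 0.25 * dist[:, 1:]    # step -e2
    dist = new
    if n % 50 == 0:                      # even n only, since p_n(0,0) = 0 for odd n
        p_n = dist[n_max, n_max] / 4.0   # p_n(0,0) = P_n(0,0)/mu(0), mu(0) = deg(0) = 4
        print(n, p_n, p_n * n)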
Example 5.13. Assume in addition to (5.28) that, for all x ∈ V and integers r ≥ 1,
μ (Br (x)) ≥ a r^m,   (5.33)
for some a, m > 0. Then, by Corollary 4.17, we have the Faber-Krahn inequality with function
Λ (s) = b s^{−(m+1)/m}.   (5.34)
Theorem 5.11 then yields the heat kernel upper bound
pn (x, y) ≤ C n^{−m/(m+1)}.   (5.35)
Although this upper bound is not sharp in Z^m with m > 1, in general it cannot be improved under the hypothesis (5.33). Indeed, for the Vicsek tree from Example 4.19, all the conditions (5.33), (5.34) and (5.35) are satisfied with m = ln 5 / ln 3, that is,
pn (x, y) ≤ C n^{−ln 5 / ln 15}.   (5.36)


As we will see later in Example 5.22, this upper bound is sharp in this case. Further
examples of this type can be found in [13].
Example 5.14. If the weight μ is simple, then we always have the Faber-Krahn inequality with function Λ (s) = ½ s^{−2}, that is, with α = 1/2 (cf. Example 4.14). Assuming that the degree is uniformly bounded, we obtain by Theorem 5.11 that
pn (x, y) ≤ C n^{−1/2}.
Note that this estimate is sharp in Z (up to a constant multiple).
Proof of Theorem 5.11. We use the notation
(f, g) = Σ_{x∈V} f (x) g (x) μ (x),   (5.37)
whenever the right hand side is well-defined. Let F be the set of all functions f on V with a finite support
supp f = {x ∈ V : f (x) ≠ 0}.
Clearly, F is a linear space of infinite dimension. Observe that f ∈ F implies that Lf and P f belong to F, because
supp (P f) ⊂ U1 (supp f).
Note that (f, g) is well-defined provided one of the functions f, g belongs to F.
The approach to the proof is as follows. For a fixed z ∈ V, denote fn (x) = pn (x, z). We will show that fn+1 = P fn. Set
bn := (fn, fn) = Σ_{x∈V} pn (x, z)² μ (x) = p2n (z, z).

We will show that {bn } is a decreasing sequence and will estimate the difference
bn − bn+1 = (fn , fn ) − (P fn , P fn ) ,
which will imply an upper bound for bn and, hence, for p2n (z, z) . Then Lemma 5.2
will allow to estimate pn (x, y) for all x, y ∈ V.
The technical implementation of this approach is quite long and will be split
into a series of claims.
Claim 0. If f ∈ F, then (P f, 1) = (f, 1) .
Note that
(f, 1) = Σ_{x∈V} f (x) μ (x).
Using Green’s formula of Theorem 2.1 in the domain Ω = U1 (supp f), we obtain
(f, 1) − (P f, 1) = (Lf, 1)
  = Σ_{x∈Ω} Lf (x) 1 (x) μ (x)
  = ½ Σ_{x,y∈Ω} (∇xy f) (∇xy 1) μxy − Σ_{x∈Ω} Σ_{y∈Ω^c} (∇xy f) μxy.
The first sum is 0 because ∇xy 1 = 0. In the second sum, y ∉ Ω and x ∼ y imply that x ∉ supp f, whence ∇xy f = 0, so that the second sum is also 0, which proves the claim.


Consider now the following functional

Q (f, g) = (f, g) − (P f, P g) ,

that is defined for all f, g ∈ F. Also, we write Q (f ) = Q (f, f ) .


Claim 1. If Ω is a finite non-empty subset of V , f ∈ F and U1 (supp f ) ⊂ Ω, then

Q (f ) ≥ λ1 (Ω) (f, f ) . (5.38)

In particular, Q (f ) ≥ 0 for any f ∈ F.


Clearly, supp (P f ) ⊂ Ω so that P f = PΩ f where PΩ = id −LΩ . Let α1 =
1 − λ1 (Ω) so that spec PΩ ⊂ [−α1 , α1 ] and PΩ  ≤ α1 . Then

Q (f ) = (f, f ) − (PΩ f, PΩ f )
2 2
≥ f  − α12 f 
= (1 − α1 ) (1 + α1 ) f 2
2
≥ λ1 (Ω) f  .

Claim 2. For all f ∈ F we have
Q (f) = ½ Σ_{x,y∈V} (f (x) − f (y))² P2 (x, y) μ (x).   (5.39)
Using the symmetry of the Markov operator P, we obtain
(P f, P f) = (P² f, f) = Σ_{x,y∈V} P2 (x, y) f (x) f (y) μ (x),
whence
Q (f) = Σ_{x∈V} f² (x) μ (x) − Σ_{x,y∈V} P2 (x, y) f (x) f (y) μ (x)
  = Σ_{x,y∈V} P2 (x, y) f² (x) μ (x) − Σ_{x,y∈V} P2 (x, y) f (x) f (y) μ (x)
  = Σ_{x,y∈V} P2 (x, y) f (x) (f (x) − f (y)) μ (x).   (5.40)
Interchanging x and y, we obtain also
Q (f) = Σ_{x,y∈V} P2 (x, y) f (y) (f (y) − f (x)) μ (x).   (5.41)
Adding up (5.40) and (5.41), we obtain (5.39).
Claim 3. If f ∈ F and c is a positive constant, then
Q ((f − c)_+) ≤ Q (f).   (5.42)
Define a function ϕ : R → R by
ϕ (t) = (t − c)_+.


Since ϕ is a Lipschitz function with Lipschitz constant 1, we obtain by (5.39)
Q ((f − c)_+) = Q (ϕ ∘ f)
  = ½ Σ_{x,y∈V} (ϕ (f (x)) − ϕ (f (y)))² P2 (x, y) μ (x)
  ≤ ½ Σ_{x,y∈V} (f (x) − f (y))² P2 (x, y) μ (x) = Q (f).
Claim 4. Let f be a non-negative function from F. For any s ≥ 0 define the set Ωs by
Ωs = U1 (supp (f − s)_+).
Then¹
Q (f) ≥ λ1 (Ωs) ((f, f) − 2s (f, 1)).   (5.43)
In particular, for
s = (f, f) / (4 (f, 1)),
we obtain
Q (f) ≥ ½ λ1 (Ωs) (f, f).
Set g = (f − s)_+. By (5.38) and (5.42), we have
Q (f) ≥ Q (g) ≥ λ1 (Ωs) (g, g).
On the other hand, we have
g² ≥ f² − 2sf.   (5.44)
Indeed, if f ≥ s, then g = f − s and
g² = f² − 2sf + s² ≥ f² − 2sf,
and if f < s, then g = 0 and f² − 2sf = (f − 2s) f ≤ 0. Summing up (5.44) against the measure μ (x), we obtain
(g, g) ≥ (f, f) − 2s (f, 1),
whence (5.43) follows.
Claim 5. Let {fn}_{n=0}^∞ be a sequence of non-negative functions on V such that f0 ∈ F, (f0, 1) = 1, and fn+1 = P fn. Set
bn = (fn, fn).
Then
bn − bn+1 ≥ c′ bn^{1+1/α},   (5.45)
where c′ = ½ c (4C0)^{−1/α}.
By induction, we obtain that fn ∈ F and (fn, 1) = 1 (by Claim 0). Note that
bn − bn+1 = (fn, fn) − (P fn, P fn) = Q (fn).
Estimating Q (fn) by Claim 4 and choosing
s = (fn, fn) / (4 (fn, 1)) = ¼ bn,
1 Note that, for s = 0, (5.43) coincides with (5.38).


we obtain
bn − bn+1 ≥ ½ λ1 (Ωs) bn,   (5.46)
where
Ωs = U1 (supp (fn − s)_+).
On the other hand, we have
μ (supp (fn − s)_+) = μ ({x ∈ V : fn (x) > s})
  ≤ (1/s) Σ_{x∈V} fn (x) μ (x) = (1/s) (fn, 1) = 1/s.
By Lemma 5.10, we obtain that
μ (Ωs) ≤ C0/s = 4C0/bn.
Hence, by the Faber-Krahn inequality,
λ1 (Ωs) ≥ c μ (Ωs)^{−1/α} ≥ c (4C0)^{−1/α} bn^{1/α},   (5.47)
which together with (5.46) yields (5.45).

Claim 6. If {bn}_{n=0}^∞ is a sequence of positive real numbers satisfying (5.45), then bn ≤ C′ n^{−α}, where C′ = (α/c′)^α.

We use an elementary inequality
y^{−β} − x^{−β} ≥ β (x − y) / x^{β+1},   (5.48)
which is true for all β > 0 and x > y > 0. Indeed, by the mean-value theorem, we have
(y^{−β} − x^{−β}) / (x − y) = −(y^{−β} − x^{−β}) / (y − x) = β ξ^{−β−1},
where ξ ∈ (y, x), whence (5.48) follows. Applying (5.48) with β = 1/α, x = bn, y = bn+1, we obtain
bn+1^{−1/α} − bn^{−1/α} ≥ (bn − bn+1) / (α bn^{1+1/α}) ≥ c′ bn^{1+1/α} / (α bn^{1+1/α}) = c′/α.
Summing up this inequality from 0 to n, we conclude that bn^{−1/α} ≥ (c′/α) n and bn ≤ C′ n^{−α}.
Now we can finish the proof as follows. Fix a vertex z ∈ V and set f0 = (1/μ(z)) 1_{z}. Then f0 ∈ F and (f0, 1) = 1. Define the sequence {fn} inductively by fn+1 = P fn and show that, in fact,
fn (x) = pn (x, z) for any n ≥ 1.
We have
f1 (x) = P f0 (x) = Σ_{y∈V} P (x, y) f0 (y) = P (x, z)/μ (z) = p1 (x, z)
and
fn+1 (x) = Σ_{y∈V} P (x, y) fn (y)
  = Σ_{y∈V} p1 (x, y) pn (y, z) μ (y)
  = pn+1 (x, z).


The sequence {fn} satisfies the hypotheses of Claim 5. Setting
bn = (fn, fn) = p2n (z, z),
we obtain by Claims 5 and 6 that
p2n (z, z) ≤ C′ n^{−α},   (5.49)
for all z ∈ V. Using Lemma 5.2 and (5.49), we obtain that
pk+l (x, y) ≤ (p2k (x, x) p2l (y, y))^{1/2} ≤ C′ (kl)^{−α/2},   (5.50)
for all x, y ∈ V and positive integers k, l. Given an integer n ≥ 2, represent it in the form n = k + l, where l = k for even n and l = k + 1 for odd n. In both cases, we have
l ≥ k ≥ (n − 1)/2 ≥ n/4,
whence by (5.50)
pn (x, y) ≤ C n^{−α}.
Finally, for n = 1 we obtain p1 (x, y) = P (x, y)/μ (y) ≤ 1 because P (x, y) ≤ 1 and μ (y) ≥ 1 by (5.28). □
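The arithmetic behind Claim 6 is easy to visualize: any positive sequence obeying the recursion (5.45) is forced to decay at least like n^{−α}. A toy check (ours, with arbitrary values of α and c′):

alpha, c_prime = 1.5, 0.3
b = 1.0
for n in range(1, 10001):
    b -= c_prime * b ** (1 + 1 / alpha)      # extremal case of the recursion (5.45)
    if n in (10, 100, 1000, 10000):
        print(n, b, b * n ** alpha)          # b * n^alpha stays bounded by (alpha/c')^alpha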
Remark 5.15. As we have seen in the last part of the proof, the estimate (5.30)
is equivalent to the on-diagonal estimate
pn (x, x) ≤ Cn−α .
For that reason, (5.30) is also frequently referred to as an on-diagonal estimate of the
heat kernel. The point is that this estimate does not take into account the distance
between points x, y, which could improve the estimate. Indeed, if d (x, y) > n,
then obviously pn (x, y) = 0. Combining the on-diagonal estimate (5.30) with the
Carne-Varopoulos estimate (5.27), it is easy to show that, for any 0 < ε < α,
 
C d2 (x, y)
pn (x, y) ≤ α−ε exp −cε (5.51)
n n
with some cε > 0. Using much more complicated methods, one can show that (5.51)
holds also for ε = 0 (see [10], [48], [50], [56], [89]).
Theorem 5.11 can be extended to a general Faber-Krahn function Λ (s) as
follows.
Theorem 5.16. If (V, μ) satisfies (5.28) and the Faber-Krahn inequality with a
positive decreasing function Λ (s) on (0, +∞), then, for all positive integers n and
all x, y ∈ V ,
pn (x, y) ≤ C / γ^{−1} (n/8),
where C is a constant and the function γ is defined by
γ (s) = ∫_1^s dv / (v Λ(v)).
This theorem was proved in [47].
Example 5.17. In the case Λ (s) = c s^{−1/α}, we obtain
γ (s) = (1/c) ∫_1^s v^{1/α − 1} dv = (α/c) (s^{1/α} − 1) ≤ (α/c) s^{1/α}
and γ^{−1} (t) ≥ c′ t^α, which gives the previous theorem.


Example 5.18. Let Λ (s) = c / ln² (2s), as is the case on Cayley graphs with exponential volume growth. Then
γ (s) = (1/c) ∫_1^s ln² (2v) dv/v = (1/c) ∫_2^{2s} ln² u du/u ≤ (1/(3c)) ln³ (2s),
whence γ^{−1} (t) ≥ ½ exp( c′ t^{1/3} ) and
pn (x, y) ≤ C exp( − c n^{1/3} ).   (5.52)
One can show that, for a large family of Cayley graphs, there is a similar lower
bound, with different values of constants C, c (see Example 5.23).
Proof. The proof goes along the same lines as that of Theorem 5.11. The only place where we used the Faber-Krahn inequality was the estimate (5.47) in Claim 5, which now becomes
λ1 (Ωs) ≥ Λ (μ (Ωs)) ≥ Λ (4C0/bn).
Put together with (5.46) and setting C = 4C0, we obtain
bn − bn+1 ≥ ½ Λ (C/bn) bn.   (5.53)
Using (5.53), we estimate bn from above as follows. Consider the function
ϕ (b) = 1 / (Λ (C/b) b),
which is monotone decreasing in b ∈ (0, +∞). It follows that
∫_{bn+1}^{bn} ϕ (b) db ≥ (bn − bn+1) ϕ (bn) = (bn − bn+1) / (Λ (C/bn) bn) ≥ 1/2,
where in the last inequality we have used (5.53). Summing up this inequality from 0 to n, we obtain
∫_{bn}^{b0} db / (Λ (C/b) b) ≥ n/2,
which implies by the change v = C/b that
∫_{C/b0}^{C/bn} dv / (Λ (v) v) ≥ n/2.   (5.54)
Recall that
b0 = (f0, f0) = Σ_x f0² (x) μ (x).
Since
Σ_x f0 (x) μ (x) = (f0, 1) = 1,
it follows that
Σ_x f0² (x) μ² (x) ≤ 1.
Since μ (x) ≥ 1, we obtain that
b0 = Σ_x f0² (x) μ (x) ≤ 1 < C,


because C = 4C0 > 4. Hence, C/b0 > 1, and (5.54) implies that
γ (C/bn) = ∫_1^{C/bn} dv / (Λ (v) v) ≥ n/2,
whence
C/bn ≥ γ^{−1} (n/2)
and
bn ≤ C / γ^{−1} (n/2).
As in the previous proof, choosing f0 = (1/μ(z)) 1_{z}, we obtain bn = p2n (z, z) and
p2n (z, z) ≤ C / γ^{−1} (n/2).
For any integer n ≥ 2, using (5.50) and choosing k, l as in the previous proof, we obtain
pn (x, y) ≤ C / (γ^{−1} (k/2) γ^{−1} (l/2))^{1/2} ≤ C / γ^{−1} (n/8).   (5.55)
For n = 1 we have p1 (x, y) ≤ 1. By increasing if necessary the value of C, we can have C / γ^{−1} (1/8) ≥ 1, so that (5.55) is satisfied also for n = 1. □

5.5. On-diagonal lower bound via the Dirichlet eigenvalues


Theorem 5.19. For any even positive integer n and for any non-empty finite set Ω ⊂ V, the heat kernel satisfies the following estimate:
sup_{x∈V} pn (x, x) ≥ (1 − λ1 (Ω))^n / μ(Ω).   (5.56)
In particular, if λ1 (Ω) ≤ 1/2, then
sup_{x∈V} pn (x, x) ≥ exp (−2λ1 (Ω) n) / μ(Ω).   (5.57)
On a Cayley graph with a simple weight μ, we have instead of (5.57)
pn (x, x) ≥ exp (−2λ1 (Ω) n) / μ(Ω)   (5.58)
for all x ∈ V and even n.
This theorem was proved in [47] and [49]. Since the set Ω is arbitrary, one can rewrite (5.57) in the form
sup_{x∈V} pn (x, x) ≥ sup_{Ω⊂V} exp (−2λ1 (Ω) n) / μ(Ω),
where the sup on the right is taken over all non-empty finite subsets Ω. In this form, it appears as an inequality between two functions of n.
Proof. The estimate (5.57) follows from (5.56) using the inequality
1 − λ ≥ exp (−2λ),   (5.59)
which is true for 0 ≤ λ ≤ 1/2. Indeed, it is obviously true for λ = 0 and λ = 1/2 and, hence, is true for all λ ∈ [0, 1/2] because the function 1 − λ is linear and exp (−2λ) is convex.


Let us prove (5.56). We use again the Markov operator P = id − L, which acts on functions f ∈ F as follows:
P f (x) = Σ_{y∈V} P (x, y) f (y).
Recall that the powers P^n of the Markov operator are given by
P^n f (x) = Σ_{y∈V} Pn (x, y) f (y),
where Pn (x, y) is the transition function that can be defined inductively by P1 (x, y) = P (x, y) and
Pn (x, y) = Σ_{z∈V} Pn−1 (x, z) P (z, y).   (5.60)
Fix a non-empty finite set Ω ⊂ V and consider the operator Q = PΩ = id − LΩ in FΩ, that is,
Q f (x) = Σ_{y∈Ω} P (x, y) f (y).
The distinction between Q and P is that the range of summation of the former is restricted to y ∈ Ω. For any positive integer n, consider the powers Q^n. By induction, one obtains
Q^n f (x) = Σ_{y∈Ω} Qn (x, y) f (y),   (5.61)
where Q1 (x, y) = P (x, y) and
Qn (x, y) = Σ_{z∈Ω} Qn−1 (x, z) P (z, y).   (5.62)
The function Qn (x, y) can be regarded as the transition function of the random walk with the killing condition outside Ω. The comparison of (5.60) and (5.62) shows that
Qn (x, y) ≤ Pn (x, y).
Consider trace Q^n. On the one hand, (5.61) means that the matrix of this operator in the basis {1_{x}}_{x∈Ω} has on the diagonal the values Qn (x, x), so that
trace Q^n = Σ_{x∈Ω} Qn (x, x) ≤ Σ_{x∈Ω} Pn (x, x).
On the other hand, the operator Q has the eigenvalues 1 − λk (Ω), k = 1, ..., N, where N = |Ω|. The operator Q^n has the eigenvalues (1 − λk (Ω))^n, whence
trace Q^n = Σ_{k=1}^{N} (1 − λk (Ω))^n ≥ (1 − λ1 (Ω))^n.
We have used that all the terms in the above sum are non-negative, which is true because n is even. Comparing the two expressions for the trace, we obtain
(1 − λ1 (Ω))^n ≤ Σ_{x∈Ω} Pn (x, x)
  = Σ_{x∈Ω} pn (x, x) μ(x)
  ≤ sup_{x∈Ω} pn (x, x) μ(Ω),


whence (5.56) follows.
In the case of a Cayley graph, we have by Lemma 5.3 that, for any x ∈ V,
pn (x, x) = sup_{x∈V} pn (x, x),
so that (5.58) follows from (5.57). □


Example 5.20. We claim that in Z^m
pn (x, x) ≥ c n^{−m/2},   (5.63)
for all even positive integers n and x ∈ Z^m. Indeed, fix r ∈ N and take Ω = Br. One can show that
λ1 (Br) ≤ C/r²
(see Exercise 44). Since Z^m is a Cayley graph, we obtain by Theorem 5.19 that, for large enough r,
pn (x, x) ≥ exp (−2λ1 (Br) n) / μ(Br) ≥ c exp( − C n/r² ) / r^m.
Choosing r ≃ √n, we obtain (5.63). Combining (5.63) with the on-diagonal upper bound (5.32) for the heat kernel in Z^m, we obtain that, for all x ∈ Z^m and all even n,
pn (x, x) ≃ n^{−m/2}.
Note that, for odd n, pn (x, x) = 0.
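For Z one can check (5.58) directly: λ1 (Br) is the smallest eigenvalue of LΩ on the path {−r, ..., r}, and pn (0, 0) is given by (5.6). A sketch (ours):

import numpy as np
from math import comb

r = 10
size = 2 * r + 1                             # vertices of B_r = {-r,...,r} in Z
P_Omega = np.zeros((size, size))
for i in range(size - 1):
    P_Omega[i, i + 1] = P_Omega[i + 1, i] = 0.5
lam1 = 1 - np.max(np.linalg.eigvalsh(P_Omega))   # lambda_1(B_r), here about 0.01 <= 1/2
mu_Br = 2 * size                                 # mu(x) = deg(x) = 2 on Z

for n in (20, 100, 500):                         # even n
    lower = np.exp(-2 * lam1 * n) / mu_Br        # right-hand side of (5.58)
    exact = comb(n, n // 2) / 2 ** (n + 1)       # p_n(0,0) by (5.6)
    print(n, lower, exact)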
Corollary 5.21. Assume that there exists a sequence {Ωk}_{k=1}^∞ of subsets of V such that
μ (Ωk) ≤ C a^k and λ1 (Ωk) ≤ C b^{−k}
for some a, b, C > 1. Then, for all even n,
sup_{x∈V} pn (x, x) ≥ c n^{−α},   (5.64)
where c > 0 and α = ln a / ln b.
Proof. Applying Theorem 5.19 with Ω = Ωk, we obtain, for any even n and all large enough k,
sup_{x∈V} pn (x, x) ≥ exp (−2C b^{−k} n) / (C a^k).
If n is large enough, then we can choose k so that n ≃ b^k. Then
a^k = b^{k ln a / ln b} ≃ n^α,
and we obtain (5.64) for large enough n. For a bounded range of n this estimate is trivial. □
Example 5.22. Let (V, μ) be the Vicsek tree from Example 4.19, with a simple
weight μ. Denote by Ωk the finite graph at the step k of construction of the Vicsek
tree, considered as a subgraph of (V, μ) (see Figure 4.2). It is easy to see that
|Ωk| = 5 |Ωk−1| − 4,
which implies that
μ (Ωk) ≃ |Ωk| ≃ 5^k.


Let us show that
λ1 (Ωk) ≤ C · 15^{−k}.
Denote by z0 the center of Ωk and by z1, ..., z4 its corners. Define a function f on Ωk as follows. First set
f (z0) = 1 and f (zi) = 0 for i = 1, ..., 4,
then extend f linearly on each path of length 3^k connecting z0 with zi, and by a constant on any transversal path (see Figure 5.2).

Figure 5.2. The values of f on the diagonal z0 zi of Ω2. The function f remains constant on all paths transversal to this diagonal (except for the other diagonals z0 zj, j ≠ i).

Since f ≥ 2/3 on Ωk−1, it follows that
(f, f) ≃ μ (Ωk) ≃ 5^k.
Also, since |∇xy f| = 3^{−k} for any two neighboring points x, y on each of the diagonals connecting z0 and zi, and |∇xy f| = 0 otherwise, we obtain
(LΩk f, f) = ½ Σ_{x,y∈Ωk} |∇xy f|² μxy = 3^{−2k} Σ_{i=1}^{4} d(z0, zi) = 4 · 3^{−k}.
It follows that
λ1 (Ωk) ≤ R (f) = (LΩk f, f) / (f, f) ≤ C · 15^{−k},
as was claimed. Applying Corollary 5.21, we obtain the lower bound
sup_{x∈V} pn (x, x) ≥ c n^{−ln 5 / ln 15},
which matches the upper bound (5.36) of Example 5.13. In fact, it is possible to prove that, for all even n and all x ∈ V,
pn (x, x) ≃ n^{−ln 5 / ln 15}


(cf. [8], [82]).


Example 5.23. The method of the proof of Corollary 5.21 can be extended
to a more general setting covering a superpolynomial decay of the heat kernel (see
[49]). In this way one can prove the following lower bound on the Cayley graphs
of polycyclic groups with exponential volume growth:
pn (x, x) ≥ c exp( − C n^{1/3} ),
for even n, which matches the upper bound (5.52) of Example 5.18. In this case, the construction of a sequence {Ωk} is more complicated and requires the use of the group structure (see [1], [49], [117]).
As an example, consider a semi-direct product Z² ⋊ Z that consists of couples (x, a), where x ∈ Z² and a ∈ Z, and the group operation is defined by
(x, a) ∗ (y, b) = (x + M^a y, a + b),
where M is a 2 × 2 matrix with integer coefficients and with det M = 1 (then also M^{−1} has integer coefficients). If M = id, then we obtain just Z³. For a less trivial M, for example, for M = (2 1; 1 1), the group Z² ⋊ Z has an exponential volume growth, and its heat kernel satisfies the estimates
c2 exp( − C2 n^{1/3} ) ≤ pn (x, x) ≤ C1 exp( − c1 n^{1/3} ),
for even n, where c1, c2, C1, C2 > 0.
Using Theorem 5.19, we can now prove a converse to Theorem 5.11.
Corollary 5.24. Assume that
inf_{x∈V} μ (x) > 0   (5.65)
and that the heat kernel on (V, μ) satisfies the upper bound
pn (x, x) ≤ C n^{−α}   (5.66)
for all x ∈ V and n ≥ 1. Then (V, μ) satisfies the Faber-Krahn inequality with the function
Λ (s) = c s^{−1/α},   (5.67)
with some c > 0.
Of course, the hypothesis (5.65) can be replaced by a stronger condition (5.28).
Hence, under (5.28), the heat kernel bound (5.66) and the Faber-Krahn inequality
with the function (5.67) are equivalent.
Proof. Fix a finite non-empty subset Ω of V and prove that
λ1 (Ω) ≥ c μ (Ω)^{−1/α}.
If λ1 (Ω) ≥ 1/2, then this is true by (5.65).
Assume that λ1 (Ω) ≤ 1/2. Then, by Theorem 5.19 and (5.66), we obtain, for all even integers n > 0,
exp (−2λ1 (Ω) n) ≤ C n^{−α} μ (Ω).   (5.68)
Let n be the minimal positive even integer such that
C n^{−α} μ (Ω) ≤ e^{−2},

that is,
n^α ≥ C e² μ (Ω).
Since by (5.65) μ (Ω) is separated from 0 and C can be chosen large enough, it follows that
n^α ≃ μ (Ω).
Then we obtain from (5.68) that
λ1 (Ω) n ≥ 1,
whence
λ1 (Ω) ≥ n^{−1} ≃ μ (Ω)^{−1/α},
which was to be proved. □
We will not cover off-diagonal Gaussian and sub-Gaussian estimates of the heat
kernel on graphs in this volume. Such results can be found in [9], [10], [14], [20],
[48], [50], [41], [56], [81], [82] and in many other sources.

5.6. On-diagonal lower bound via volume growth


Here we obtain another lower bound for the heat kernel. On Cayley graphs it
may be not as sharp as the one of Theorem 5.19, but on general graphs it may be
satisfactory.
Theorem 5.25. Assume that
μ0 := inf_{x∈V} μ (x) > 0.
Fix a vertex x0 ∈ V, set for all r > 0
Br = {x ∈ V : d (x, x0) ≤ r}
and
V (r) = μ (Br).
Assume that, for all r large enough,
V (r) ≤ C r^α   (5.69)
for some constants C and α. Then, for any c > α/2 and for all large enough positive even integers n, the heat kernel satisfies the following estimate:
pn (x0, x0) ≥ (1/4) / V (√(c n ln n)).   (5.70)

This theorem was proved in [113] (see also [47]). In general, the term √(n ln n) cannot be replaced by √n – see [74].
Proof. In fact, we will prove the following inequality
p2n (x0, x0) ≥ (1/4) / V (√(c n ln n)),   (5.71)
which holds for any c > α and for all large enough positive integers n. Clearly, (5.71) implies
p2n (x0, x0) ≥ (1/4) / V (√((c/2) · 2n · ln (2n))),
whence (5.70) follows upon renaming 2n by n and c/2 by c.


To prove (5.71) we use (5.3) and the Cauchy-Schwarz inequality in a ball Br as follows:
p2n (x0, x0) = Σ_{x∈V} pn (x0, x)² μ (x)
  ≥ Σ_{x∈Br} pn (x0, x)² μ (x)
  ≥ (1/μ (Br)) ( Σ_{x∈Br} pn (x0, x) μ (x) )²
  = (1/V (r)) ( 1 − Σ_{x∈Br^c} pn (x0, x) μ (x) )².

Suppose that, for a given large enough n, we can find r = r (n) so that
Σ_{x∈Br^c} pn (x0, x) μ (x) ≤ 1/2.   (5.72)
Then the previous estimate implies
p2n (x0, x0) ≥ (1/4) / V (r),
which will allow us to obtain (5.71), if r = √(c n ln n).
To prove (5.72) with this r, let us apply Theorem 5.7. By (5.27) we have
pn (x0, x) ≤ (2/√(μ (x0) μ (x))) exp( − d² (x0, x)/(2n) ) ≤ (2/μ0) exp( − d² (x0, x)/(2n) ),
whence, for large enough r,
Σ_{x∈Br^c} pn (x0, x) μ (x) ≤ (2/μ0) Σ_{x∈Br^c} exp( − d² (x0, x)/(2n) ) μ (x)
  = (2/μ0) Σ_{k=0}^{∞} Σ_{x∈B_{2^{k+1}r} \ B_{2^k r}} exp( − d² (x0, x)/(2n) ) μ (x)
  ≤ (2/μ0) Σ_{k=0}^{∞} Σ_{x∈B_{2^{k+1}r} \ B_{2^k r}} exp( − (2^k r)²/(2n) ) μ (x)
  ≤ (2/μ0) Σ_{k=0}^{∞} exp( − (2^k r)²/(2n) ) μ (B_{2^{k+1}r})
  ≤ (2C/μ0) Σ_{k=0}^{∞} exp( − 4^k r²/(2n) ) (2^{k+1} r)^α,
where we have used μ (B_{2^{k+1}r}) ≤ C (2^{k+1} r)^α. Setting
ak = exp( − 4^k r²/(2n) ) (2^{k+1} r)^α,


we see that
ak+1/ak = exp( − (4^{k+1} − 4^k) r²/(2n) ) 2^α ≤ exp( − r²/n ) 2^α.
If r²/n ≥ α, then
ak+1/ak ≤ e^{−α} 2^α =: q < 1,
so that the sequence {ak} decays faster than the decreasing geometric sequence with ratio q, whence
Σ_{k=0}^{∞} ak ≤ a0 Σ_{k=0}^{∞} q^k = a0/(1 − q).

It follows that
Σ_{x∈Br^c} pn (x0, x) μ (x) ≤ C′ exp( − r²/(2n) ) r^α,   (5.73)
where C′ = 2^{α+1} C / (μ0 (1 − q)). Choose here r = √(c n ln n) with c > α, so that the condition r²/n ≥ α is satisfied, at least for large n. Then we obtain
Σ_{x∈Br^c} pn (x0, x) μ (x) ≤ C′ e^{−(c/2) ln n} (c n ln n)^{α/2} = C′ c^{α/2} (ln n)^{α/2} / n^{(c−α)/2}.   (5.74)
Since c/2 − α/2 > 0, the right hand side here goes to 0 as n → ∞. In particular, it becomes < 1/2 provided n is large enough, which finishes the proof. □

5.7. Escape rate of random walk


Using the computations from the proof of Theorem 5.25, we can prove the
following property of the random walk.
Theorem 5.26. Let {Xn} be the random walk on (V, μ). Under the hypothesis of Theorem 5.25, we have
Px0 ( d (X0, Xn) ≤ √(c n ln n) for all large enough n ) = 1,
where c is any constant that is larger than α + 2.
This theorem was proved in [15].
Proof. Let {rn } be an increasing sequence of positive real numbers such that
rn → ∞ as n → ∞. Let us investigate the conditions under which
Px0 (d (X0 , Xn ) ≤ rn for all large enough n) = 1. (5.75)
For that, consider the events
An = {d (X0 , Xn ) > rn }
and observe that, by the lemma of Borel and Cantelli, if
Σ_n Px0 (An) < ∞,   (5.76)
then the events An occur finitely often with probability 1. In other words, the probability
that An does not occur for large enough n is equal to 1, which is exactly what we


need for (5.75). We are left to verify (5.76), more precisely, to see under what conditions on rn (5.76) is satisfied. Observe that
Px0 (An) = Px0 (Xn ∈ B_{rn}^c) = Σ_{x∈B_{rn}^c} Pn (x0, x) = Σ_{x∈B_{rn}^c} pn (x0, x) μ (x).
Assuming that n is large enough and rn = √(c n ln n) with c ≥ α, we obtain by (5.74)
Px0 (An) = Σ_{x∈B_{rn}^c} pn (x0, x) μ (x) ≤ C′ c^{α/2} (ln n)^{α/2} / n^{(c−α)/2}.
Clearly, if c > α + 2, then c/2 − α/2 > 1, whence it follows that the series (5.76) converges, which finishes the proof. □
Remark 5.27. Any function f (n) such that
Px0 ( d (X0, Xn) ≤ f (n) for all large enough n ) = 1
is called an upper rate function (or escape rate) for the random walk. Theorem 5.26 can then be restated as follows: the function f1 (n) = √(c n ln n) is an upper rate function. For a simple random walk in Z^m, Khinchin’s law of the iterated logarithm says that
lim sup_{n→∞} |Xn − X0| / √(2n ln ln n) = 1.
It follows that the function f2 (n) = √(C n ln ln n) with any constant C > 2 is an upper rate function, too. Clearly, f2 is a sharper upper rate function than f1. However, in the general context of Theorem 5.26, the function f1 is optimal and cannot be replaced by f2, as was shown in [15] (see also [74]).


CHAPTER 6

The type problem

In this Chapter, (V, μ) is a locally finite weighted graph such that μ (x) > 0 for
all x ∈ V . Most of the results are non-trivial only for infinite graphs.

6.1. Recurrence and transience


We say that an event An (where n ∈ N) occurs infinitely often if there is a
sequence nk → ∞ of indices such that Ank takes place for all k.
Definition 6.1. We say that the random walk {Xn } on (V, μ) is recurrent if,
for any x ∈ V ,
Px (Xn = x infinitely often) = 1,
and transient otherwise, that is, if there is x ∈ V such that
Px (Xn = x infinitely often) < 1.
The type problem is the problem of deciding whether the random walk is re-
current or transient.
Theorem 6.2. The random walk is transient if and only if
Σ_{n=1}^{∞} pn (x, x) < ∞   (6.1)
for some/all x ∈ V.
Corollary 6.3 (Polya’s theorem). In Zm the random walk is transient if and
only if m > 2.
Proof. Indeed, in Z^m we have
Σ_n pn (x, x) ≃ Σ_n 1/n^{m/2},
and the latter series converges if and only if m > 2. □
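Polya's theorem is easy to observe in a simulation (a rough Monte Carlo sketch of ours; the number of returns to the origin keeps growing with the time horizon for m = 1, 2 and stabilizes for m = 3):

import random

def visits_to_origin(m, n):
    # count the visits to 0 of a simple random walk on Z^m within n steps
    x = [0] * m
    count = 0
    for _ in range(n):
        i = random.randrange(m)
        x[i] += random.choice((-1, 1))
        if all(c == 0 for c in x):
            count += 1
    return count

random.seed(1)
for m in (1, 2, 3):
    for n in (10**3, 10**4, 10**5):
        avg = sum(visits_to_origin(m, n) for _ in range(10)) / 10
        print(m, n, avg)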

We start the proof of Theorem 6.2 with the following lemma.


Lemma 6.4. If the condition
Σ_{n=1}^{∞} pn (x, y) < ∞   (6.2)
holds for some x, y ∈ V, then it holds for all x, y ∈ V. In particular, if (6.1) holds for some x ∈ V, then it holds for all x ∈ V and, moreover, (6.2) holds for all x, y ∈ V.

Proof. Let us show that if (6.2) holds for some x, y ∈ V, then the vertex x can be replaced by any of its neighbors, and (6.2) will still be true. Since the graph (V, μ) is connected, in a finite number of steps the initial point x can then be replaced by any other point. By symmetry, the same applies to y, so that in the end both x and y can take arbitrary values.
Fix a vertex x′ ∼ x and prove that
Σ_{n=1}^{∞} pn (x′, y) < ∞.
We have
Pn+1 (x, y) = Σ_z P (x, z) Pn (z, y) ≥ P (x, x′) Pn (x′, y),
whence
pn (x′, y) = Pn (x′, y) / μ (y) ≤ Pn+1 (x, y) / (P (x, x′) μ (y)) = pn+1 (x, y) / P (x, x′).
It follows that
Σ_{n=1}^{∞} pn (x′, y) ≤ (1/P (x, x′)) Σ_{n=1}^{∞} pn+1 (x, y) < ∞,
which was to be proved. □

Proof of Theorem 6.2: The sufficiency of (6.1). Fix a vertex x0 ∈ V and denote by An the event {Xn = x0}, so that, for any x ∈ V,
Px (An) = Px (Xn = x0) = Pn (x, x0) = pn (x, x0) μ (x0).
By Lemma 6.4, the condition (6.1) implies Σ_n pn (x, x0) < ∞, whence
Σ_n Px (An) < ∞.   (6.3)
We have
Px (Xn = x0 infinitely often) = Px (∀m ∃n ≥ m  Xn = x0)
  = Px ( ∩_m ∪_{n≥m} An )
  = Px ( ∩_m Bm ),
where Bm = ∪_{n≥m} An. It follows from (6.3) that
Px (Bm) ≤ Σ_{n≥m} Px (An) → 0 as m → ∞.
The sequence {Bm} is decreasing in m, which implies that
Px ( ∩_m Bm ) = lim_{m→∞} Px (Bm) = 0.
Therefore,
Px (Xn = x0 infinitely often) = 0,   (6.4)


and the random walk is transient1 . 

Note that the condition (6.4) that we have proved, is in fact stronger than the
definition of the transience as the latter is
Px0 (Xn = x0 infinitely often) < 1
for some x0 ∈ V . We will take advantage of (6.4) later on.
The proof of the necessity of condition (6.1) in Theorem 6.2 will be preceded
by some lemmas.

Lemma 6.5 (Strong maximum principle). Let u be a subharmonic function on


V , that is, such that Lu ≤ 0 on V . If, for some point x ∈ V ,
u (x) = sup u,
then u ≡ const . In other words, a subharmonic function on V cannot attain its
supremum unless it is a constant.

Proof. Set M = sup u and let x be a vertex where u (x) = M. Since Lu (x) ≤ 0, it follows that
M = u (x) ≤ P u (x) = Σ_{y∼x} P (x, y) u (y).

The right hand side here is bounded by M because u (y) ≤ M for all y. If u (y) <
M for some y ∼ x, then we obtain that the right hand side < M , which is a
contradiction. Hence, u (y) = M for all y ∼ x. Hence, the set
S = {x ∈ V : u (x) = M }
has the property that if x ∈ S, then all neighbors of x also belong to S. Since S is
non-empty and the graph V is connected, it follows that S = V , that is, u ≡ M . 

Definition 6.6. Fix a finite non-empty set K ⊂ V and consider the function
vK (x) = Px (∃n ≥ 0  Xn ∈ K).
The function vK (x) is called the hitting (or visiting) probability of K. Consider also the function
hK (x) = Px (Xn ∈ K infinitely often),
which is called the recurring probability of K.

Clearly, we have v ≡ 1 on K and 0 ≤ hK (x) ≤ vK (x) ≤ 1 for all x ∈ V .


In the next two lemmas, the set K will be fixed so that we write v (x) and h (x)
instead of vK (x) and hK (x), respectively.

Lemma 6.7. We have Lv (x) = 0 if x ∉ K (that is, v is harmonic outside K), and Lv (x) ≥ 0 for any x ∈ K.

1 Alternatively, one can use a lemma of Borel-Cantelli that says the following: if (6.3) is

satisfied, then the probability that the events An occur infinitely often, is equal to 0 (which is
exactly (6.4)). In fact, the above argument contains the proof of that lemma.


Proof. If x ∉ K, then we have by the Markov property (see Figure 6.1)
v (x) = Px (∃n ≥ 0  Xn ∈ K)
  = Px (∃n ≥ 1  Xn ∈ K)
  = Σ_y P (x, y) Py (∃n ≥ 1  Xn−1 ∈ K)
  = Σ_y P (x, y) Py (∃n ≥ 0  Xn ∈ K)
  = Σ_y P (x, y) v (y),
so that v (x) = P v (x) and Lv (x) = 0. If x ∈ K, then
Lv (x) = v (x) − P v (x) = 1 − P v (x) ≥ 0,
because P v (x) ≤ P 1 (x) = 1. □

Figure 6.1. A sample path hits K

Lemma 6.8. The sequence of functions {P^n v} is decreasing in n and
lim_{n→∞} P^n v (x) = h (x)   (6.5)
for any x ∈ V.
Proof. Since Lv ≥ 0, we obtain
P^n v − P^{n+1} v = P^n (v − P v) = P^n (Lv) ≥ 0,
so that {P^n v} is decreasing. Hence, the limit in (6.5) exists.
Consider the events
Bm = {∃n ≥ m  Xn ∈ K}.
Obviously, the sequence {Bm} is decreasing and the event
∩_m Bm = {∀m ∃n ≥ m  Xn ∈ K}
is identical to the event that Xn ∈ K infinitely often. Hence, we have
h (x) = Px ( ∩_m Bm ) = lim_{m→∞} Px (Bm).   (6.6)
We claim that
Px (Bm) = P^m v (x).   (6.7)


Indeed, for m = 0 this is the definition of v (x). Here is the inductive step from m − 1 to m, using the Markov property:
Px (∃n ≥ m  Xn ∈ K) = Σ_y P (x, y) Py (∃n ≥ m  Xn−1 ∈ K)
  = Σ_y P (x, y) Py (∃n ≥ m − 1  Xn ∈ K)
  = Σ_y P (x, y) P^{m−1} v (y)
  = P^m v (x).
Combining (6.6) with (6.7), we obtain (6.5). □

Proof of Theorem 6.2: The necessity of (6.1). Assume that the random walk is transient and show that (6.1) is true. Let x0 ∈ V be a point where
Px0 (Xn = x0 infinitely often) < 1.
Consider the hitting and recurring probabilities v (x) and h (x) with respect to the set K = {x0}. The above condition means that h (x0) < 1. It follows that v ≢ 1, because otherwise P^n v ≡ 1 for all n and, by Lemma 6.8, h ≡ 1. As we know,
Lv (x) = 0 for x ≠ x0 and Lv (x0) ≥ 0.
Claim 1. Lv (x0) > 0.
Assume on the contrary that Lv (x0) = 0, that is, Lv (x) = 0 for all x ∈ V. Since v takes its maximal value 1 at some point (namely, at x0), we obtain by the strong maximum principle that v ≡ 1, which contradicts the assumption of transience.
Denote f = Lv, so that f (x) = 0 for x ≠ x0 and f (x0) > 0.
Claim 2. We have, for all x ∈ V,
Σ_{n=0}^{∞} P^n f (x) ≤ v (x).   (6.8)

Fix a positive integer m and observe that
(id − P)(id + P + P² + ... + P^{m−1}) = id − P^m,
whence it follows that
L ( Σ_{n=0}^{m−1} P^n f ) = (id − P^m) f = f − P^m f ≤ f.
Set
vm = Σ_{n=0}^{m−1} P^n f.
Obviously, vm has a finite support and Lvm ≤ f. For comparison, we have Lv = f and v ≥ 0 everywhere. We claim that vm ≤ v in V. Indeed, let Ω = supp vm, so that outside Ω the inequality vm ≤ v is trivially satisfied. In Ω we have L (v − vm) ≥ 0. By the minimum principle of Lemma 1.39, we have
min_Ω (v − vm) = inf_{Ω^c} (v − vm).


Since the right hand side is ≥ 0, it follows that v − vm ≥ 0 in Ω, which was claimed. Hence, we have
Σ_{n=0}^{m−1} P^n f ≤ v,
whence (6.8) follows by letting m → ∞.
Using that supp f = {x0}, we rewrite (6.8) in the form
Σ_{n=0}^{∞} pn (x, x0) f (x0) μ (x0) ≤ v (x),
whence it follows that
Σ_{n=0}^{∞} pn (x, x0) < ∞.
Setting here x = x0, we finish the proof. □
Corollary 6.9. Let K be a non-empty finite subset of V. If the random walk is recurrent, then vK ≡ hK ≡ 1. If the random walk is transient, then vK ≢ 1 and hK ≡ 0.
Hence, we obtain a 0-1 law for the recurring probability: either hK ≡ 1 or hK ≡ 0.
Proof. Let x0 be a vertex from K. Obviously, we have
v_{x0} (x) ≤ vK (x).
Therefore, if the random walk is recurrent and, hence, v_{x0} ≡ 1, then also vK ≡ 1. Since
hK = lim_{m→∞} P^m vK,   (6.9)
it follows that hK ≡ 1.
Let the random walk be transient. Then by Theorem 6.2 and Lemma 6.4, we have
Σ_{n=1}^{∞} pn (x0, x) < ∞
for all x0, x ∈ V. It follows from the proof of Theorem 6.2 that h_{x0} (x) = 0 (cf. (6.4)). If {Xn} visits K infinitely often, then {Xn} visits infinitely often at least one of the vertices in K. Hence, we have
hK ≤ Σ_{x0∈K} h_{x0}.
Since h_{x0} ≡ 0, we conclude that hK ≡ 0. Finally, (6.9) implies that vK ≢ 1. □

6.2. Recurrence and transience on Cayley graphs


Now we can completely solve the type problem for Cayley graphs.
Theorem 6.10. Let (V, E) be a connected Cayley graph and μ be a simple
weight on it. Let Br = {x ∈ V : d (x, e) ≤ r}.
(a) If |Br | ≤ Cr 2 for large enough r with some constant C, then (V, μ) is
recurrent.
(b) If |Br | ≥ cr α for large enough r with some constants c > 0 and α > 2,
then (V, μ) is transient.


This theorem was proved by Varopoulos [135].


Remark 6.11. It is known from group theory [18] that for Cayley graphs the
following two alternatives take place:
(1) either μ (Br) ≃ r^m for some positive integer m (a power volume growth),
(2) or, for any N, we have μ (Br) ≥ r^N for large enough r (a superpolynomial volume growth).
It follows from Theorem 6.10 that, in the first case, the random walk is recurrent
if and only if m ≤ 2, while in the second case the random walk is always transient.
Of course, Theorem 6.10 contains Polya’s theorem as a particular case.
Proof. (a) Set V (r) = μ (Br). By the hypothesis, we have V (r) ≤ C r² for large r. By Theorem 5.25 we obtain, for large enough n,
p2n (e, e) ≥ (1/4) / V (√(2 · 2n ln (2n))) ≥ (1/4) / (4C n ln (2n)) ≥ const / (n ln n).
It follows that
Σ_n p2n (e, e) = ∞,
whence the recurrence follows by Theorem 6.2.
(b) By Corollary 4.24, the graph (V, μ) has the Faber-Krahn function Λ (s) = c s^{−2/α}. By Theorem 5.16, we obtain
pn (x, x) ≤ C / n^{α/2}.
Since α > 2, it follows that
Σ_n pn (x, x) < ∞,
so that the graph is transient by Theorem 6.2. □

6.3. Volume tests for recurrence


In this section, let us fix an integer-valued function ρ (x) on V with the following
two properties:
• for any non-negative integer r, the set
Br = {x ∈ V : ρ (x) ≤ r}
is finite and non-empty.
• if x ∼ y, then |∇xy ρ| ≤ 1.
For example, ρ (x) can be the distance function to any finite non-empty subset
of V .
Theorem 6.12 (The Nash-Williams test). If
Σ_{r=0}^{∞} 1/μ (∂Br) = ∞,   (6.10)
then the random walk on (V, μ) is recurrent.


This theorem was proved by Nash-Williams [115]. Note that the edge boundary
∂Br is non-empty because otherwise the graph (V, μ) would be disconnected.
An alternative way of stating this theorem is the following. Assume that V is a

disjoint union of a sequence {Ak }k=0 of non-empty finite subsets with the following
property: if x ∈ Ak and y ∈ Am with |k − m| ≥ 2, then x and y are not neighbors.
Denote by Ek the set of edges between Ak and Ak+1 and assume that

 1
= ∞. (6.11)
μ (Ek )
k=0

Then the random walk ron (V, μ) is recurrent. Indeed, defining ρ (x) = k if x ∈ Ak
we obtain that Br = k=0 Ak and ∂Br = Er . Hence, (6.11) is equivalent to (6.10).
Let us give two simple examples when (6.11) is satisfied:
(1) if μ(E_k) ≤ Ck for all large enough k;
(2) if μ(E_{k_j}) ≤ C for a sequence k_j → ∞ (in this case, μ(E_k) for k ≠ k_j
may take arbitrarily big values).
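For a concrete weighted graph the sum in (6.10) is straightforward to evaluate by machine. Here is a minimal sketch (the function and variable names are mine, not the book's): it takes a table of edge weights together with a level function ρ satisfying |∇_{xy} ρ| ≤ 1 and accumulates μ(∂B_r) level by level.

def nash_williams_sum(edges, rho, r_max):
    # edges: dict {(x, y): mu_xy}, each undirected edge listed once
    # rho:   dict {vertex: level}, with |rho(x) - rho(y)| <= 1 along edges
    boundary_weight = [0.0] * (r_max + 1)
    for (x, y), mu in edges.items():
        r, s = sorted((rho[x], rho[y]))
        if s == r + 1 and r <= r_max:          # an edge from S_r to S_{r+1}, i.e. in ∂B_r
            boundary_weight[r] += mu
    return sum(1.0 / w for w in boundary_weight if w > 0)

# Example: the path 0 - 1 - ... - n with unit weights and rho(k) = k.
# Every boundary ∂B_r consists of a single edge, so the sum equals r_max + 1
# and diverges as r_max grows, in agreement with recurrence.
n = 1000
edges = {(k, k + 1): 1.0 for k in range(n)}
rho = {k: k for k in range(n + 1)}
print(nash_williams_sum(edges, rho, r_max=n - 1))    # prints 1000.0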
Proof of Theorem 6.12. Consider the hitting probability of B_0:
    v(x) = v_{B_0}(x) = P_x(∃ n ≥ 0 : X_n ∈ B_0).
Recall that 0 ≤ v ≤ 1, v = 1 on B_0, and Lv = 0 outside B_0 (cf. Lemma 6.7). Our
purpose is to show that v ≡ 1, which will imply the recurrence by Corollary 6.9.
We will compare v(x) to the sequence of functions {u_k}_{k=1}^∞ that is constructed
as follows. Define u_k(x) as the solution to the following Dirichlet problem in
Ω_k = B_k \ B_0:
    L u_k = 0 in Ω_k,
    u_k = f in Ω_k^c,    (6.12)
where f = 1_{B_0}. In other words, u_k = 1 on B_0 and u_k = 0 outside B_k, while u_k
is harmonic in B_k \ B_0 (see Figure 6.2).

Figure 6.2. Function u_k

By Theorem 1.38, the problem (6.12) has a unique solution. By the maximum/minimum
principle of Lemma 1.39, we have 0 ≤ u_k ≤ 1.
Since u_{k+1} = u_k on B_0 and u_{k+1} ≥ 0 = u_k in B_k^c, we obtain by the maximum
principle that u_{k+1} ≥ u_k in Ω_k. Therefore, the sequence {u_k} increases and con-
verges to a function u_∞ as k → ∞. The function u_∞ has the following properties:
0 ≤ u_∞ ≤ 1, u_∞ = 1 on B_0, and Lu_∞ = 0 outside B_0 (note that Lu_k → Lu_∞ as
k → ∞). Comparing v with u_k in Ω_k and using the maximum principle, we obtain
that v ≥ u_k, whence it follows that v ≥ u_∞. Hence, in order to prove that v ≡ 1,
it suffices to prove that u_∞ ≡ 1, which will be done in the rest of the proof.


By Exercise 25, the solution of the Dirichlet problem (6.12) has the minimal
value of the Dirichlet integral
    D(u) = (1/2) ∑_{x,y∈U_1(Ω_k)} (∇_{xy} u)^2 μ_{xy}
among all functions u that satisfy the boundary condition u = f in Ω_k^c. Since u ≡ 1
in B_0, u ≡ 0 in B_k^c, and U_1(B_k) ⊂ B_{k+1}, we have
    D(u) = (1/2) ∑_{x,y∈B_{k+1}} (∇_{xy} u)^2 μ_{xy}.

Choose a function u with the above boundary condition in the form
    u(x) = ϕ(ρ(x)),
where ϕ(s) is a function on Z such that ϕ(s) = 1 for s ≤ 0 and ϕ(s) = 0 for
s ≥ k + 1. Set S_0 = B_0 and
    S_r = {x ∈ V : ρ(x) = r}
for positive integers r. Clearly, B_r is a disjoint union of S_0, S_1, ..., S_r. Observe also
that if x ∼ y then x, y belong either to the same S_r (and in this case ∇_{xy} u = 0) or
one to S_r and the other to S_{r+1}, because |ρ(x) − ρ(y)| ≤ 1. Having this in mind,
we obtain
    D(u) = ∑_{r=0}^k ∑_{x∈S_r, y∈S_{r+1}} (∇_{xy} u)^2 μ_{xy}
         = ∑_{r=0}^k ∑_{x∈S_r, y∈S_{r+1}} (ϕ(r) − ϕ(r+1))^2 μ_{xy}
         = ∑_{r=0}^k (ϕ(r) − ϕ(r+1))^2 μ(∂B_r).
Denote
    m(r) := μ(∂B_r)
and define ϕ(r) for r = 0, ..., k from the following conditions: ϕ(0) = 1 and
    ϕ(r) − ϕ(r+1) = c_k/m(r),   r = 0, ..., k,    (6.13)
where the constant c_k is to be found. Indeed, we still have the condition ϕ(k + 1) = 0
to be satisfied. Summing up (6.13), we obtain
    ϕ(0) − ϕ(k+1) = c_k ∑_{r=0}^k 1/m(r),
so that ϕ(k + 1) = 0 is equivalent to
    c_k = (∑_{r=0}^k 1/m(r))^{−1}.    (6.14)
Hence, assuming (6.14), we obtain
    D(u) = ∑_{r=0}^k (c_k^2/m(r)^2) m(r) = c_k^2 ∑_{r=0}^k 1/m(r) = c_k.


By the Dirichlet principle, we have D(u_k) ≤ D(u), whence
    D(u_k) ≤ c_k.    (6.15)
On the other hand, by Green's formula,
    ∑_{x∈B_{k+1}} Lu_k(x) u_k(x) μ(x) = (1/2) ∑_{x,y∈B_{k+1}} (∇_{xy} u_k)^2 μ_{xy}
        − ∑_{x∈B_{k+1}} ∑_{y∈B_{k+1}^c} (∇_{xy} u_k) u_k(x) μ_{xy}.
The last sum vanishes because if y ∈ B_{k+1}^c and x ∼ y, then x ∈ B_k^c and u_k(x) = 0.
The range of summation in the first sum can be reduced to B_k because u_k =
0 outside B_k, and then further to B_0 because Lu_k = 0 in B_k \ B_0. Finally, since
u_k ≡ 1 in B_0, we obtain the identity
    ∑_{x∈B_0} Lu_k(x) μ(x) = (1/2) ∑_{x,y∈B_{k+1}} (∇_{xy} u_k)^2 μ_{xy} = D(u_k).

It follows from (6.15) that
    ∑_{x∈B_0} Lu_k(x) μ(x) ≤ c_k.
Since u_k takes the maximal value 1 at any point of B_0, we have at any point x ∈ B_0
that P u_k(x) ≤ 1 and
    Lu_k(x) = u_k(x) − P u_k(x) ≥ 0.
Hence, at any point x ∈ B_0, we have
    0 ≤ Lu_k(x) μ(x) ≤ c_k.
By (6.10) and (6.14), we have c_k → 0 as k → ∞, whence it follows that
Lu_k(x) → 0 for all x ∈ B_0.
Hence, Lu_∞(x) = 0 for all x ∈ B_0. Since Lu_∞(x) = 0 also for all x ∉ B_0, we see
that u_∞ is harmonic on the whole graph V. Since u_∞ takes its supremum value
1 at any point of B_0, we conclude by the strong maximum principle that u_∞ ≡ 1,
which finishes the proof. □
The following theorem provides a convenient volume test for the recurrence.
Theorem 6.13. If
    ∑_{r=0}^∞ r/μ(B_r) = ∞,    (6.16)
then the random walk is recurrent. In particular, (6.16) holds provided
    μ(B_{r_k}) ≤ C r_k^2    (6.17)
for a sequence r_k → ∞.
The second statement here is an analogue of the recurrence test for Brownian
motion on Riemannian manifolds proved by Cheng and Yau [33]. Both conditions
(6.16) and (6.17) hold in Z^m with m ≤ 2 for the function ρ(x) = d(x, 0). Hence,
we obtain one more proof of the recurrence of Z^m for m ≤ 2 (cf. Corollary 6.3).
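As a quick illustration (mine, not the author's) one can compute partial sums of the series (6.16) for Z^2 and Z^3, using the counting measure |B_r| in place of μ(B_r); for the simple weight the two differ only by the constant factor 2m, which does not affect convergence.

import numpy as np

def ball_volumes(m, r_max):
    axes = np.arange(-r_max, r_max + 1)
    grids = np.meshgrid(*([axes] * m), indexing="ij")
    dist = sum(np.abs(g) for g in grids)              # graph distance to the origin in Z^m
    spheres = np.bincount(dist[dist <= r_max], minlength=r_max + 1)
    return np.cumsum(spheres)                         # |B_0|, |B_1|, ..., |B_{r_max}|

for m, r_max in [(2, 200), (3, 40)]:
    vol = ball_volumes(m, r_max)
    s = sum(r / vol[r] for r in range(r_max + 1))
    print(f"Z^{m}: partial sum of r/|B_r| up to r = {r_max} is {s:.3f}")
# The Z^2 sum keeps growing (logarithmically in r_max), while the Z^3 sum levels off.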
We need the following lemma for the proof of Theorem 6.13.


Lemma 6.14. Let {σ_r}_{r=0}^n be a sequence of positive reals and let
    v_r = ∑_{i=0}^r σ_i.    (6.18)
Then
    ∑_{r=0}^n 1/σ_r ≥ (1/4) ∑_{r=0}^n r/v_r.

Proof. Assume first that the sequence {σ_r} is monotone increasing. If 0 ≤
k ≤ (n−1)/2, then
    v_{2k+1} ≥ ∑_{i=k+1}^{2k+1} σ_i ≥ (k + 1) σ_k,
whence
    1/σ_k ≥ (k+1)/v_{2k+1} ≥ (1/2) · (2k+1)/v_{2k+1}.
Similarly, if 0 ≤ k ≤ n/2, then
    v_{2k} ≥ ∑_{i=k+1}^{2k} σ_i ≥ k σ_k
and
    1/σ_k ≥ k/v_{2k} = (1/2) · (2k)/v_{2k}.
It follows that
    4 ∑_{k=0}^n 1/σ_k ≥ ∑_{k=0}^{[(n−1)/2]} (2k+1)/v_{2k+1} + ∑_{k=0}^{[n/2]} 2k/v_{2k} = ∑_{r=0}^n r/v_r,
which was claimed. Now consider the general case when the sequence {σ_r} is not
necessarily increasing. Let {σ̃_r}_{r=0}^n be an increasing permutation of {σ_r}_{r=0}^n and
set
    ṽ_r = ∑_{i=0}^r σ̃_i.
Note that ṽ_r ≤ v_r because ṽ_r is the sum of the r + 1 smallest terms of the sequence {σ_i}
whereas v_r is the sum of some r + 1 terms of the same sequence. Applying the first
part of the proof to the sequence {σ̃_i}, we obtain
    ∑_{r=0}^n 1/σ_r = ∑_{r=0}^n 1/σ̃_r ≥ (1/4) ∑_{r=0}^n r/ṽ_r ≥ (1/4) ∑_{r=0}^n r/v_r,
which finishes the proof. □
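Lemma 6.14 is elementary but easy to misremember, so here is a throwaway script (an illustration, of course not a proof) that checks the inequality on randomly generated positive sequences.

import random

def check_lemma_6_14(n, trials=1000):
    for _ in range(trials):
        sigma = [random.uniform(0.1, 10.0) for _ in range(n + 1)]
        v, s = [], 0.0
        for x in sigma:                    # v_r = sigma_0 + ... + sigma_r
            s += x
            v.append(s)
        lhs = sum(1.0 / x for x in sigma)
        rhs = 0.25 * sum(r / v[r] for r in range(n + 1))
        assert lhs >= rhs
    return True

print(check_lemma_6_14(50))    # prints True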
Proof of Theorem 6.13. Set for any r ≥ 1
    S_r = {x ∈ V : ρ(x) = r} = B_r \ B_{r−1}
and S_0 = B_0. Then we have
    μ(∂B_r) = ∑_{x∈B_r, y∉B_r} μ_{xy} = ∑_{x∈S_r, y∈S_{r+1}} μ_{xy}
            ≤ ∑_{x∈S_r, y∈V} μ_{xy} = ∑_{x∈S_r} μ(x) = μ(S_r).


Denoting v_r = μ(B_r) and σ_r = μ(S_r) and observing that the sequences {v_r} and
{σ_r} satisfy (6.18), we obtain by Lemma 6.14 and (6.16) that
    ∑_{r=0}^∞ 1/μ(∂B_r) ≥ ∑_{r=0}^∞ 1/σ_r ≥ (1/4) ∑_{r=0}^∞ r/v_r = ∞.
Hence, (6.10) is satisfied, and we conclude by Theorem 6.12 that the random walk
on (V, μ) is recurrent.
We are left to show that (6.17) implies (6.16). Given positive integers a < b,
we have
    ∑_{r=a+1}^b r = ∑_{r=1}^b r − ∑_{r=1}^a r = b(b+1)/2 − a(a+1)/2 ≥ (b^2 − a^2)/2,
whence it follows that
    ∑_{r=a+1}^b r/v_r ≥ (1/v_b) · (b^2 − a^2)/2.
By choosing a subsequence of {r_k}, we can assume that r_k ≥ 2r_{k−1}. Then we have,
using (6.17),
    ∑_{r=0}^∞ r/v_r ≥ ∑_k ∑_{r=r_{k−1}+1}^{r_k} r/v_r
                   ≥ ∑_k (1/v_{r_k}) · (r_k^2 − r_{k−1}^2)/2
                   ≥ (1/(2C)) ∑_k (r_k^2 − r_{k−1}^2)/r_k^2
                   = (1/(2C)) ∑_k (1 − r_{k−1}^2/r_k^2)
                   ≥ (1/(2C)) ∑_k 3/4 = ∞,
which was to be proved. □

6.4. Isoperimetric tests for transience


Theorem 6.15. Let the graph (V, μ) satisfy the hypothesis (5.28).
(a) If (V, μ) satisfies the isoperimetric inequality with function Φ(s) such that
    ∫^∞ ds/Φ^2(s) < ∞,    (6.19)
then the random walk on (V, μ) is transient.
(b) If (V, μ) satisfies the Faber-Krahn inequality with function Λ(s) such that
    ∫^∞ ds/(s^2 Λ(s)) < ∞,    (6.20)
then the random walk on (V, μ) is transient.


For example, in Z^m we have Φ(s) = c s^{(m−1)/m} and (6.19) becomes
    ∫^∞ ds/s^{2(m−1)/m} < ∞,
which is satisfied provided 2(m−1)/m > 1, that is, m > 2.
Proof. It suffices to prove (b) because (a) follows from (b) with
    Λ(s) = (1/2) (Φ(s)/s)^2
(cf. Theorem 4.12). By Theorem 5.16, the Faber-Krahn inequality implies that
    p_n(x, x) ≤ C/γ^{−1}(cn),
where
    γ(r) = ∫_1^r ds/(s Λ(s)).
Hence, it suffices to prove that ∑_n 1/γ^{−1}(cn) < ∞, which is equivalent to
    ∫^∞ dt/γ^{−1}(t) < ∞.
Changing s = γ^{−1}(t), we arrive at
    ∫^∞ (γ'(s)/s) ds < ∞.
Since γ'(s) = 1/(s Λ(s)), the latter condition is equivalent to (6.20). □
Further results for recurrence/transience can be found in [63] and [139].


CHAPTER 7

Exercises

(1) Let Γ = (V, E) be a simple, connected, locally finite graph. The diameter
of Γ = (V, E) is defined by
diam Γ = sup_{x,y∈V} d(x, y).

(a) Prove that Γ is finite if and only if diam Γ < ∞.


(b) Prove that Γ is a complete graph if and only if diam Γ = 1.
(c) Prove that, for any vertex x and for any positive integer n ≤ (1/2) diam Γ,
there is a vertex y ∈ V such that d (x, y) = n.
(2) Let (V, E) be a simple finite graph.
(a) Assume that there exists a vertex path on (V, E) that contains every
edge exactly once (an Euler walk). Prove that the set
M = {deg (x) : x ∈ V }
either contains no odd number or contains exactly two odd numbers.
(b) Show that the graph on Figure 7.1 has no Euler walk.

Figure 7.1. The graph of Königsberg bridges

(3) A simple connected graph is called bipartite if it admits a coloring of its


vertices into two colors, say, black and white, so that the vertices of the
same color are not connected by an edge (for example, the set of fields of
a chessboard is a bipartite graph). Prove that a simple connected graph
is bipartite if and only if it contains no cycle Cn with odd n.
Hint: Choose the color of a vertex x depending on the distance d (x, x0 )
where x0 is a fixed vertex.
(4) A graph (V1 , E1 ) is said to be a subgraph of (V, E) if V1 ⊂ V and E1 ⊂ E.
Two graphs (V1 , E1 ) and (V2 , E2 ) are said to be isomorphic if there is
a bijection ϕ : V1 → V2 that preserves edges, that is, (ϕ (x) , ϕ (y)) ∈
E2 if and only if (x, y) ∈ E1 . Given two graphs Γ = (V, E) and Γ1 =
(V1 , E1 ), denote by N (Γ, Γ1 ) the number of distinct subgraphs of Γ that
are isomorphic to Γ1 . For example, let Kn be a complete graph with n
vertices and Cn be a cycle with n vertices. Then N (Γ, K1 ) is the number
of vertices of graph Γ, N (Γ, K2 ) is the number of edges of Γ, N (Γ, K3 )

is the number of complete triangles to be found in Γ, N (Γ, C4 ) is

the number of 4-cycles to be found in Γ.


Evaluate N (Kn , Ck ) for all n ≥ k ≥ 3.
(5) Let Kn,m be a complete bipartite graph. Evaluate N (Kn,m , Ck ) for all
k ≥ 3.
(6) A complete m-partite graph Kn1 ,...,nm is defined as follows. It has n1 +
...+nm vertices that are split into m groups V1 , ..., Vm such that |Vk | = nk ,
and two vertices x, y are connected if and only if they belong to different
groups Vk . Prove that if n1 = ... = nm = n, then the graph Kn,...,n is a
Cayley graph.
(7) Prove that the numbers a_j = N(K_{n_1,...,n_m}, K_j) satisfy the following identity:
    ∏_{i=1}^m (z − n_i) = z^m − a_1 z^{m−1} + a_2 z^{m−2} − ... + (−1)^m a_m
for all complex z.
(8) A subset S of a group G is called generating if any element x ∈ G can be
represented in the form
x = s1 ∗ s2 ∗ ... ∗ sn
for some positive integer n and with some sk ∈ S. Prove that if S is a sym-
metric generating subset of G, then the Cayley graph (G, S) is connected.
(A graph (V, E) is called connected if, for any two vertices x, y ∈ V , there
is an edge path in (V, E) connecting x and y.)
(9) A graph (V, E) is called regular if deg (x) is the same for all x ∈ V . The
following graphs are obviously regular: all Cayley graphs, cycles Cn , com-
plete graphs Kn , complete multipartite graphs Kn,...,n , and their products.
(a) List all connected regular graphs with at most 6 vertices. Show that
every such graph is a Cayley graph. Show that every such graph
belongs to one of the families
Cn , Kn , Kn Km , Kn,..,n . (7.1)
(b) Give an example of a connected regular graph with 7 vertices that is
non-Cayley and that does not belong to any of the families (7.1).
(10) Let P, Q be Markov kernels on a finite or countable set V . Consider a
function P ◦ Q on V × V that is defined by

(P ◦ Q)(x, y) = ∑_{z∈V} P(x, z) Q(z, y).

(a) Prove that P ◦ Q is a Markov kernel.


(b) Prove that if R is also a Markov kernel, then
(P ◦ Q) ◦ R = P ◦ (Q ◦ R) .
(c) Prove that if P and Q are reversible with respect to the same function
μ (x) and P ◦ Q = Q ◦ P then P ◦ Q is also reversible with respect to
μ (x).
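On a finite set a Markov kernel is simply a row-stochastic matrix, and the composition P ∘ Q is matrix multiplication, so parts (a) and (b) can be sanity-checked numerically. A small sketch (the helper names are mine):

import numpy as np

rng = np.random.default_rng(0)

def random_kernel(n):
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)       # normalize rows to sum to 1

P, Q, R = (random_kernel(5) for _ in range(3))
PQ = P @ Q                                        # (P ◦ Q)(x, y) = sum_z P(x, z) Q(z, y)
print(np.allclose(PQ.sum(axis=1), 1.0))           # (a): P ◦ Q is again a Markov kernel
print(np.allclose((P @ Q) @ R, P @ (Q @ R)))      # (b): composition is associative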
(11) Let (V, E) be a finite graph without loops and let μ be a simple weight
on (V, E). Prove that trace Δμ = − |V |.
(12) Let Γ be a simple graph with m ≥ 3 vertices and n edges.


(a) Prove that if n ≥ ⌊m^2/4⌋ + 1, then N(Γ, K_3) ≥ 1.
(b) For any m ≥ 3, give an example of a graph Γ with n = ⌊m^2/4⌋ edges
such that N(Γ, K_3) = 0.
(c) Prove that if n ≥ ⌊m^2/4⌋ + 1 then, in fact, N(Γ, K_3) ≥ ⌊m/2⌋.
(13) Consider the Petersen graph as on Fig 7.2. Prove that this graph is not a
Cayley graph.

Figure 7.2. Petersen graph

(14) Let A4 be the group of even permutations of the sequence {1, 2, 3, 4}.
Consider the cyclic permutations a = (234) and b = (123) as well as the
product of two transpositions c = (12) (34).
(a) Verify that a3 = b3 = c2 = id and ba = a2 b2 = c.
(b) Prove that A4 is isomorphic to the group of rotations of a regular
tetrahedron in R^3.
(c) Consider a symmetric set S = {a, a^{−1}, b, b^{−1}, c} and prove that the
Cayley graph (A4 , S) is isomorphic to the icosahedron graph on Fig-
ure 7.3.

Figure 7.3. Icosahedron

(15) Prove the following identities for arbitrary functions f, g on a weighted


graph (V, μ):
(a) ∇_{xy}(fg) = (∇_{xy} f) g + (∇_{xy} g) f + (∇_{xy} f)(∇_{xy} g).
(b) Δ_μ(fg) = (Δ_μ f) g + (Δ_μ g) f + (1/μ(x)) ∑_{y∼x} (∇_{xy} f)(∇_{xy} g) μ_{xy}.
(16) Consider the equation Δμ u = f on a finite connected weighted graph
(V, μ). Here f is a given function whereas u is an unknown function.
(a) Prove that if one solution u exists, then all other solutions are u +
const .


(b) Prove that if a solution u exists, then
    ∑_{x∈V} f(x) μ(x) = 0.    (7.2)

(c) Prove that if (7.2) is satisfied, then a solution u exists.


(17) Let {Xn } be a simple random walk on Z, and set
vn (x) = P0 (Xn = x) .
(a) Prove that
    v_n(x) = 2^{−n} (n choose (x+n)/2) for x ≡ n (mod 2), and v_n(x) = 0 otherwise,
where (n choose m) is the binomial coefficient that is defined by
    (n choose m) = n!/(m!(n−m)!) if 0 ≤ m ≤ n, and (n choose m) = 0 otherwise.
(b) Prove that, for even n,
    v_n(0) ∼ √(2/(πn))  as n → ∞.
Hint: Use the Stirling formula n! ∼ √(2πn) (n/e)^n as n → ∞.
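A quick numerical check of part (b), not required by the exercise: for even n one has v_n(0) = 2^{−n} (n choose n/2), which can be compared directly with √(2/(πn)).

import math

for n in [10, 100, 1000]:
    exact = math.comb(n, n // 2) / 2 ** n          # v_n(0) for even n
    approx = math.sqrt(2 / (math.pi * n))
    print(n, exact, approx)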
(18) Let F be the space of real-valued functions on a finite weighted graph
(V, μ), endowed with the inner product
    (f, g) = ∑_{x∈V} f(x) g(x) μ(x).
Set ‖f‖ = √(f, f). Let P be the Markov operator of (V, μ) and L be the
positive definite Laplace operator of (V, μ).
(a) Prove that, for any f ∈ F, (P f)^2 ≤ P(f^2).
(b) Prove that, for any f ∈ F, ‖P f‖ ≤ ‖f‖.
(c) Use (b) to show that spec P ⊂ [−1, 1].
(d) Conclude that spec L ⊂ [0, 2].
Remark : This gives an alternative proof of the fact that all the eigenvalues
of L are contained in [0, 2].
(19) Prove that if (X, E1 ) and (Y, E2 ) are two connected graphs, then the graph
(V, E) = (X, E1 )  (Y, E2 ) is also connected.
(20) Let (X, E1 ) and (Y, E2 ) be two finite connected graphs with more than
one vertex. Prove that their product (X, E1 )  (Y, E2 ) is bipartite if and
only if both (X, E1 ) and (Y, E2 ) are bipartite.
(21) Let P be the Markov kernel of a locally finite weighted graph (V, μ) and
E be the corresponding set of edges.
(a) Fix a positive integer n, and consider two vertices x, y ∈ V . Prove
that Pn (x, y) > 0 if and only if there is a path of length n in the
graph (V, E) that connects x and y.
(b) Define a new set of edges E_n on V as follows: (x, y) ∈ E_n if P_n(x, y) >
0. Prove that if (V, E) is bipartite then (V, E_2) is disconnected.
(c) Let (V, E) be finite, connected and non-bipartite. Prove that (V, En )
is complete for some n.
(22) Let (V, μ) be a bipartite finite connected weighted graph. Let V + , V − be
a bipartition of V .


(a) For any function f on V, consider the function f̂ on V that takes
two values as follows:
    f̂(x) = (2/μ(V)) ∑_{y∈V^+} f(y) μ(y)  for x ∈ V^+,
    f̂(x) = (2/μ(V)) ∑_{y∈V^−} f(y) μ(y)  for x ∈ V^−.
Prove that if n is even and n → ∞, then P^n f(x) → f̂(x) for all
x ∈ V.
(b) Consider the distribution v_n(x) = P_{x_0}(X_n = x) of the random walk
on (V, μ). Prove that if x_0 ∈ V^+ and n is even, then as n → ∞
    v_n(x) → 2μ(x)/μ(V)  for x ∈ V^+,
    v_n(x) → 0  for x ∈ V^−.
(23) Prove that the positive definite Laplace operator L on a complete bipartite
graph Kn,m (where n + m > 2) with a simple weight has the following
eigenvalues: 0, 1, 2. What are their multiplicities?
(24) Fix integers m, n ≥ 2. Prove that the positive definite Laplace operator
L on a complete m-partite graph K_{n,n,...,n} with m parts (cf. Exercises 6 and 9) and
simple weight has the following eigenvalues: 0, 1, m/(m−1). What are their
multiplicities?
Observing that K2,2,2 is isomorphic to the octahedron graph (Figure
7.4), evaluate the eigenvalues of the Laplacian on the octahedron.

Figure 7.4. Octahedron
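The eigenvalues claimed in Exercises 23 and 24 are easy to cross-check numerically. The sketch below is mine (it assumes the simple weight, so that L = I − D^{−1}A, and the helper names are not from the book); it prints the spectra of K_{3,4} and of the octahedron K_{2,2,2}.

import numpy as np

def laplace_eigenvalues(adjacency):
    A = np.array(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)           # Markov kernel of the simple weight
    L = np.eye(len(A)) - P
    return np.round(np.sort(np.linalg.eigvals(L).real), 6)

def complete_multipartite(sizes):
    labels = [i for i, s in enumerate(sizes) for _ in range(s)]
    N = len(labels)
    return [[1 if labels[i] != labels[j] else 0 for j in range(N)] for i in range(N)]

print(laplace_eigenvalues(complete_multipartite([3, 4])))      # K_{3,4}
print(laplace_eigenvalues(complete_multipartite([2, 2, 2])))   # octahedron K_{2,2,2}

Reading off the multiplicities from the printed spectra answers the two questions above.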

(25) (The Dirichlet principle) Let Ω be a finite set of vertices on a connected


weighted graph (V, μ) such that Ωc is non-empty. Consider the Dirichlet
problem
    Δ_μ u(x) = 0 for all x ∈ Ω,
    u(x) = g(x) for all x ∈ Ω^c,    (7.3)
where g is a given function on Ω^c.
(a) Prove that a solution u of the Dirichlet problem (7.3) has the smallest
value of the Dirichlet integral
    D(u) = (1/2) ∑_{x,y∈Ω̄} (∇_{xy} u)^2 μ_{xy},


among all other functions u that satisfy the same boundary condition
u = g in Ω^c. Here Ω̄ is the union of Ω with all its neighbors.
(b) Prove that if u minimizes the Dirichlet integral among all functions
with the boundary condition u = g in Ωc , then u solves (7.3).
(c) Prove that there exists a function u that minimizes the Dirichlet
integral among all functions with the boundary condition u = g in
Ωc . Remark: This provides an alternative proof of the existence of
solution of (7.3).
(26) Let (V, μ) be a finite connected weighted graph without loops, and let
λ0 = 0 < λ1 ≤ ... ≤ λN −1 be the eigenvalues of the Laplace operator L on
(V, μ). Assume that, for some positive integer k, there are k + 1 functions
f_1, f_2, ..., f_{k+1} on V such that:
(i) their supports A_i = {x ∈ V : f_i(x) ≠ 0} are disjoint and not con-
nected, that is, if x ∈ A_i and y ∈ A_j with i ≠ j, then x ≠ y and x ≁ y.
(ii) R(fi ) ≤ a for some real a and for all i = 1, 2, ..., k + 1.
Prove that λk ≤ a.
(27) Let D be the diameter of the graph (V, μ), that is,
D = max_{x,y∈V} d(x, y).

Prove that, for any k ≤ [D/2], we have λk ≤ 1.


Hint: Use Exercise 26.
(28) Evaluate the eigenvalues and eigenfunctions of the Markov operator on a
path graph (V, E) with simple weight, that is, V = {0, 1, ..., N − 1} and
the edges are defined by
0 ∼ 1 ∼ ... ∼ N − 1.
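For small N the spectrum of the Markov operator of the path graph can be computed numerically and compared with the closed-form answer of Exercise 28. A minimal sketch (mine):

import numpy as np

N = 6
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # adjacency of the path
P = A / A.sum(axis=1, keepdims=True)                           # Markov operator, simple weight
print(np.round(np.sort(np.linalg.eigvals(P).real), 6))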
(29) Let (V, μ) be a finite connected weighted graph with N > 1 vertices. Let
P be the Markov operator on (V, μ), {v_k}_{k=0}^{N−1} be an orthonormal basis of
eigenfunctions of P with eigenvalues αk , where 1 = α0 > α1 ≥ α2 ≥ ... ≥
αN −1 . Fix a point x0 ∈ V and set f = 1{x0 } .
(a) Assume that there is a constant c such that |vk (x)| ≤ c for all x ∈ V
and k = 1, ..., N − 1. Prove that
    |P^n f(x) − μ(x_0)/μ(V)| ≤ c^2 μ(x_0) ∑_{k=1}^{N−1} |α_k|^n
for all x ∈ V and positive integers n.
(b) Prove that if (V, μ) is a cycle graph C_N = Z_N with an odd N and
with a simple weight μ, then
    |P^n f(x) − 1/N| ≤ (4/N) · 1/(e^{4n/N^2} − 1)
for all x ∈ V and positive integers n. Conclude that the mixing time
T admits the estimate T ≲ N^2.
Hint: Use the explicit eigenvalues and eigenfunctions of Z_N and the
inequality 0 ≤ cos z ≤ e^{−z^2/2} for z ∈ (0, π/2).
(30) Let (V, μ) be a finite connected weighted graph with N > 1 vertices. Let
P be the Markov operator on (V, μ), {v_k}_{k=0}^{N−1} be an orthonormal basis of


eigenfunctions of P with eigenvalues αk , where 1 = α0 > α1 ≥ α2 ≥ ... ≥


αN −1 . Fix a point x0 ∈ V and set f = 1{x0 } .
(a) Prove that for any positive integer n,
    ‖P^n f − μ(x_0)/μ(V)‖^2 = ∑_{k=1}^{N−1} α_k^{2n} v_k^2(x_0) μ^2(x_0).    (7.4)

(b) Assume in addition that (V, μ) is vertex transitive, that is, for any
two vertices x, y ∈ V, there is a graph isomorphism ϕ : V → V, that
is, a bijection that preserves the weight μ, such that ϕ(x) = y. Prove
that
    ∑_{x∈V} (P^n f(x) − 1/N)^2 = (1/N) ∑_{k=1}^{N−1} α_k^{2n}.
(31) Let (V, E) be a finite connected k-regular graph. Let a, b ∈ V be two
distinct vertices of V such that x ∼ a implies x ∼ b. Prove that the
following function on V:
    f(x) = 1 for x = a,  f(x) = −1 for x = b,  and f(x) = 0 otherwise
is an eigenfunction of the Laplace operator L on (V, E) (with a simple
weight). What is its eigenvalue?
(32) Let (V, μ) be a finite connected weighted graph and i be a weight preserv-
ing involution of (V, μ), that is, a non-identical mapping i : V → V such
that i2 = id and μi(x)i(y) = μxy for all x, y ∈ V .
(a) Prove that there exists a non-constant eigenfunction f (x) of the
Laplace operator L on (V, μ) such that f ◦ i = −f .
(b) Prove that if there exist vertices x1 , x2 ∈ V such that the four vertices
x1 , x2 , i (x1 ) , i (x2 ) are all distinct then there exists a non-constant
eigenfunction f (x) of the Laplace operator L on (V, μ) such that
f ◦ i = f.
(33) Evaluate the eigenvalues of the Laplace operator of the graph on Figure
7.5 with a simple weight. Hint: Use Exercises 31 and 32 to build various
eigenfunctions of the Laplace operator.

Figure 7.5. A 4-regular graph with 7 vertices


(34) Evaluate the eigenvalues of the Laplace operator on the Petersen graph
from Exercise 13.
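A numerical aid for Exercise 34 (my own construction: the Petersen graph realized as the Kneser graph whose vertices are the 2-element subsets of a 5-element set, two subsets being adjacent when they are disjoint):

import itertools
import numpy as np

verts = list(itertools.combinations(range(5), 2))
A = np.array([[1 if set(u).isdisjoint(v) else 0 for v in verts] for u in verts], dtype=float)
P = A / A.sum(axis=1, keepdims=True)                 # simple weight on a 3-regular graph
L = np.eye(len(verts)) - P
vals, counts = np.unique(np.round(np.linalg.eigvals(L).real, 6), return_counts=True)
print(dict(zip(vals.tolist(), counts.tolist())))     # eigenvalues with multiplicities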
(35) Let (V, μ) be a finite connected weighted graph. Prove that if the diameter
of the graph is D ≥ 1, then there exist at least D + 1 distinct eigenvalues
of the Laplace operator.
(36) Let (V, μ) be a finite connected weighted graph. Prove that, for any subset
Ω ⊂ V,
    μ(∂Ω) ≥ λ_1 μ(Ω) μ(Ω^c)/μ(V),
where λ1 is the smallest positive eigenvalue of the Laplace operator on
(V, μ).
(37) Let (V, μ) be a finite connected weighted graph with N > 1 vertices. Fix
a positive integer r and define the expansion factor Fr of the graph by
    F_r = inf_{X⊂V} [μ(X_r \ X) μ(V)] / [μ(X) μ(X^c)],
where X_r = {x ∈ V : d(x, X) ≤ r}. Prove that
    F_r ≥ 1 − ((λ_{N−1} − λ_1)/(λ_{N−1} + λ_1))^{2r},
where λ1 and λN −1 are the eigenvalues of the Laplace operator on (V, μ).
(38) Let (V, E) be a simple connected graph of diameter D ≥ 2, and let μ be
a simple weight on (V, E) . Set
k = max_{x∈V} deg(x).

(a) Fix x0 ∈ V and define for any non-negative integer r a set Er of


edges as follows:
Er = {xy ∈ E : d (x, x0 ) = r, d (y, x0 ) = r + 1} .
Prove that
|E_R| ≤ (k − 1)^{R−r} |E_r|
for all non-negative integers r ≤ R.
(b) Fix a positive integer R and consider the following function on V :

f(x) = 1 for x = x_0,   f(x) = e^{−a(r−1)} if r := d(x, x_0) ∈ [1, R],   f(x) = 0 if r > R,
where a = ln(k − 1). Prove that
    R(f) ≤ 1 − (2√(k−1)/k) (1 − 1/R − 1/(kR)).
(c) Let {λ_m}_{m=0}^{N−1} be an increasing sequence of the eigenvalues of the
Laplace operator L on (V, μ) counted with multiplicities, where N =
|V |. Prove that if 4m ≤ D, then
    λ_m ≤ 1 − (2√(k−1)/k) (1 − 1/R − 1/(kR)),


where R = ⌊D/(2m)⌋ − 1.
Remark: If R is large, then the main part of this estimate is given by
the term 1 − 2√(k−1)/k. There are k-regular graphs with arbitrarily large
diameters and with
    λ_1 ≥ 1 − 2√(k−1)/k.
Such graphs are called Ramanujan graphs.
(39) Fix a positive integer N and consider the following subset Ω of Z2 :
Ω = {(j, 0) : j = 1, 2, ..., N } .
Evaluate all the eigenvalues and eigenfunctions of the Dirichlet Laplace
operator LΩ .
(40) Let (V, E) be a connected locally finite infinite graph without loops. A
finite or infinite sequence {xk } of vertices on V is called a geodesic if
d (xk , xn ) = |k − n| for all indices k, n. Prove that there is an infinite
geodesic starting at any given vertex x ∈ V .

In all the remaining questions, (V, μ) is an infinite, locally finite, con-


nected weighted graph, P is the corresponding Markov kernel, pn (x, y) is
the heat kernel, and Ω is a finite non-empty subset of V .
(41) Prove that if
(L_Ω f, f)/(f, f) = λ_1(Ω)
for some non-zero function f ∈ FΩ , then f is an eigenfunction of LΩ with
the eigenvalue λ1 (Ω).
(42) Let the weight μ be simple and (V, E) be m-regular. Let Ω be a finite non-
empty subset of V such that every vertex of Ω has at most k neighbors in
Ω where 2 ≤ k < m (for example, if Ω is a path or a cycle, then k = 2).
Prove that
    h(Ω) ≥ (m − k)/m   and   λ_1(Ω) ≥ (1/2) ((m − k)/m)^2.
(43) For any positive integer r set Ωr = Ur (Ω). Prove that
    λ_1(Ω_r) ≤ μ(Ω_{r+1})/(r^2 μ(Ω)).
(44) Let B_r = {x ∈ Z^m : d(x, 0) ≤ r} be the ball of radius r in Z^m. Prove that
    λ_1(B_r) ≤ C/r^2
and
    λ_1(B_r) ≤ C′ μ(B_r)^{−2/m},
where C and C′ are constants depending only on m.
(45) Consider the Dirichlet problem LΩ u = f where f and u are functions from
FΩ . Prove the following inequalities:
(a) ‖u‖ ≥ (1/2) ‖f‖.
(b) ‖u‖ ≤ (1/λ_1(Ω)) ‖f‖.


(c) Let Ω_1 = U_1(Ω). Then
    (1/2) ∑_{x,y∈Ω_1} (∇_{xy} u)^2 μ_{xy} ≤ (1/λ_1(Ω)) ‖f‖^2.

(46) Set
    c := inf_{x,y∈V : x∼y} μ_{xy}.
Prove that, for any function f ∈ F_Ω and for any point x_0 ∈ Ω, the
following inequality holds:
    (L_Ω f, f) ≥ (c/d(x_0, Ω^c)) f^2(x_0).
Hint: Consider a path {x_k}_{k=0}^n connecting x_0 to the nearest point of
Ω^c and use the sum ∑_{k=1}^n (f(x_{k−1}) − f(x_k))^2 μ_{x_{k−1} x_k}.
(47) Let Ω be connected and f be an eigenfunction of LΩ with the eigenvalue
λ1 (Ω) .
(a) Prove that if f ≥ 0, then f > 0 in Ω.
(b) Prove that f+ and f− are also the eigenfunctions of λ1 (Ω), pro-
vided they do not vanish identically. Here f+ = max (f, 0) and
f− = − min (f, 0) so that f = f+ − f− . Hint: Use Exercise 41.
(c) Prove that either f > 0 in Ω or f < 0 in Ω. Hint: Assume the
contrary and use (b).
(d) Prove that λ1 (Ω) is a simple eigenvalue. Hint: Assuming that there
exist two linearly independent eigenfunctions, consider their linear
combination that vanishes at some vertex, and use (c) .
(48) Let f be a function on V with a finite support. Set
un (x) = P n f (x) .
(a) Prove that supx |un (x)| is a decreasing function of n.
(b) Prove that un  is a decreasing function of n.
(c) Prove that the heat kernel p2n (x, x) is a decreasing function of n, for
any fixed vertex x.
(49) Assume that μ (x) ≥ 1 for all x ∈ V and that the heat kernel on (V, μ)
satisfies
    p_n(x, x) ≤ C n^{−α}
for all x ∈ V and all positive integers n, where C, α > 0. Prove that, for
any 0 < ε < α, for all x, y ∈ V and all positive integers n,
    p_n(x, y) ≤ (C′/n^{α−ε}) exp(−c d^2(x, y)/n),
where C′, c > 0.
Hint: Prove first that pn (x, y) ≤ const n−α for all x, y and n, and then
combine this estimate with the estimate of Carne-Varopoulos.
(50) Prove that if (V, μ) is a Cayley graph with the exponential volume growth,
then the heat kernel of (V, μ) admits the following estimate:
    p_n(x, y) ≤ C exp(−d^2(x, y)/(4n) − c n^{1/3}),
for all x, y ∈ V , positive integers n, and some constants C, c > 0.


(51) Assume that there exists a constant p0 > 0 such that


P (x, y) ≥ p0 for all x ∼ y.
(a) Prove that deg (x) ≤ 1/p0 for any vertex x ∈ V .
(b) Prove that, for all x, y ∈ V ,
    μ(x) ≥ p_0^{d(x,y)} μ(y).
(c) Prove that any ball B(x, r) contains at most C^r vertices, where C =
C(p_0).
(d) Prove that, for any finite set A ⊂ V and for any positive integer r,
    μ(U_r(A)) ≤ K^r μ(A),
where K = K (p0 ).
(52) Assume that μ (x) ≥ 1 for all x ∈ V ,
    μ(B_r(x)) ≤ C r^α
for all x ∈ V and positive integers r, and that
    p_n(x, y) ≤ (C/n^{α/2}) exp(−c d^2(x, y)/n)
for all x, y ∈ V and positive integers n, where C, c, α > 0 (for example,
all these hypotheses hold on Z^m with m = α or, more generally, on any
Cayley graph of polynomial volume growth). Prove that, for all x ∈ V
and for all positive even integers n,
    p_n(x, x) ≥ c/n^{α/2},
with some constant c > 0.
(53) Let μ∗xy be a weight on V that is associated with the Markov kernel
P2 (x, y), that is,
μ∗xy = P2 (x, y) μ (x) .
Then (V, μ∗ ) is a weighted graph. Let us mark by ∗ all quantities related
to (V, μ∗ ) as opposed to those of (V, μ).
(a) Prove that for all x ∈ V , μ∗ (x) = μ (x) .
(b) Prove that, for any finite subset Ω ⊂ V,
    P^*_Ω = (P_U)^2 |_Ω,
where U = U1 (Ω) .
(c) Prove that
λ∗1 (Ω) ≥ λ1 (U ) .

Bibliography

[1] Alexopoulos G.K., A lower estimate for central probabilities on polycyclic groups, Can. J.
Math., 44 (1992) 897-910.
[2] Alon N., Milman V.D., λ1 isoperimetric inequalities for graphs and superconcentrators, J.
Comb. Theory B, 38 (1985) 73-88.
[3] Auscher P., Coulhon T., Grigor’yan A., ed., “Heat kernels and analysis on manifolds, graphs,
and metric spaces”, Contem. Math. 338, AMS, Providence, RI, 2003.
[4] Babson E., Barcelo H., de Longueville M., Laubenbacher R., Homotopy theory of graphs,
J. Algebr. Comb., 24 (2006) 31–44.
[5] Bachoc C., DeCorte E., de Oliveira Filho F.M., Vallentin F., Spectral bounds for the in-
dependence ratio and the chromatic number of an operator, Israel J. Math, 202 (2014)
227–254.
[6] Band R., Oren I., Smilansky U., Nodal domains on graphs - how to count them and why?, in:
“Analysis on graphs and its applications”, Proc. Sympos. Pure Math. 77, AMS, Providence,
RI, 2008. 5–27.
[7] Barcelo H., Capraro V., White J.A., Discrete homology theory for metric spaces, Bull.
London Math. Soc., 46 (2014) 889–905.
[8] Barlow M.T., Diffusions on fractals, in: “Lectures on Probability Theory and Statistics, Ecole
d’été de Probabilités de Saint-Flour XXV - 1995”, Lecture Notes Math. 1690, Springer,
1998. 1-121.
[9] Barlow M.T., Which values of the volume growth and escape time exponent are possible for
graphs?, Rev. Mat. Iberoam., 40 (2004) 1-31.
[10] Barlow M.T., “Random walks and heat kernels on graphs”, LMS Lecture Note Series 438,
Cambridge Univ. Press, 2017.
[11] Barlow M.T., Bass R.F., Stability of parabolic Harnack inequalities, Trans. Amer. Math.
Soc., 356 (2004) 1501–1533.
[12] Barlow M.T., Bass R.F., Kumagai T., Parabolic Harnack inequality and heat kernel esti-
mates for random walks with long range jumps, Math. Z., 261 (2009) 297-320.
[13] Barlow M.T., Coulhon T., Grigor’yan A., Manifolds and graphs with slow heat kernel decay,
Invent. Math., 144 (2001) 609-649.
[14] Barlow M.T., Coulhon T., Kumagai T., Characterization of sub-Gaussian heat kernel esti-
mates on strongly recurrent graphs, Comm. Pure Appl. Math., 58 (2005) 1642–1677.
[15] Barlow M.T., Perkins E.A., Symmetric Markov chains in Zd : how fast can they move?,
Probab. Th. Rel. Fields, 82 (1989) 95–108.
[16] Bartholdi L., Grigorchuk R., Spectra of non-commutative dynamical systems and graphs
related to fractal groups, C. R. Acad. Sci. Paris Ser. I Math. 331 (2000) 429–434.
[17] Bartholdi L., Grigorchuk R., Nekrashevych V., From fractal groups to fractal sets, in: “Frac-
tals in Graz 2001”, Trends Math., Birkhäuser, Basel, 2003. 25–119.
[18] Bass H., The degree of polynomial growth of finitely generated groups, Proc. London Math.
Soc., 25 (1972) 603–614.
[19] Bass H., The Ihara-Selberg zeta function of a tree lattice, Internat. J. Math., 3 (1992)
717–797.
[20] Bauer F., Horn P., Lin Y., Lippner G., Mangoubi D., Yau S.-T., Li-Yau inequality on graphs,
J. Diff. Geom., 99 (2015) 359–405.
[21] Bauer F., Hua B., Jost J., The dual Cheeger constant and spectra of infinite graphs, Adv.
Math., 251 (2014) 147–194.
[22] Bauer F., Jost J., Bipartite and neighborhood graphs and the spectrum of the normalized
graph Laplace operator, Comm. Anal. Geom., 21 (2013) 787–845.


[23] Bauer F., Jost J., Liu S., Ollivier-Ricci curvature and the spectrum of the normalized graph
Laplace operator, Math. Res. Lett., 19 (2012) 1185–1205.
[24] von Below J., Can one hear the shape of a network?, in: “Partial differential equations on
multistructures (Lumini 1999)”, Lecture Notes in Pure and Appl. Math. 219, Dekker, New
York, 2001. 19–36.
[25] Bendikov A., Grigor’yan A., Pittet Ch., Woess W., Isotropic Markov semigroups on ultra-
metric spaces, Russian Math. Surveys, 69 (2014) 589–680.
[26] Berkolaiko G., Carlson R., Fulling S., Kuchment P. ed., “Quantum graphs and their appli-
cations”, Contemp. Math. 415, AMS, Providence, RI, 2006.
[27] Berkolaiko G., Kuchment P., “Introduction to quantum graphs”, Mathematical Surveys and
Monographs 186, AMS, Providence, RI, 2013.
[28] Berkolaiko G., A lower bound for nodal count on discrete and metric graphs, to appear in
Commun. Math. Phys.
[29] Biggs N., “Algebraic graph theory”, Cambridge Univ. Press, 2001.
[30] Bıyıkoğlu T., Leydold J., Stadler P.F., “Laplacian eigenvectors of graphs. Perron-Frobenius
and Faber-Krahn type theorems”, Lecture Notes in Mathematics 1915, Springer, 2007.
[31] Bollobás, B., “Modern graph theory”, Graduate Texts in Mathematics 184, Springer, 1998.
[32] Carne K., A transmutation formula for Markov chains, Bull. Sci. Math., 109 (1985) 399-403.
[33] Cheng S.Y., Yau S.-T., Differential equations on Riemannian manifolds and their geometric
applications, Comm. Pure Appl. Math., 28 (1975) 333-354.
[34] Chung F.R.K., Diameters and eigenvalues, J. Amer. Math. Soc., 2 (1989) 187-196.
[35] Chung F.R.K., “Spectral Graph Theory”, CBMS Regional Conference Series in Mathematics
92, AMS, Providence, RI, 1997.
[36] Chung F.R.K., Discrete isoperimetric inequalities, in: “Eigenvalues of Laplacians and other
geometric operators”, Surveys in Differential Geometry IX, (2004) 53-82.
[37] Chung F.R.K., Faber V., Manteuffel Th. A., An upper bound on the diameter of a graph
from eigenvalues associated with its Laplacian, SIAM J. of Discrete Math., 7 (1994) 443–457.
[38] Chung F.R.K., Graham R., “Erdös on graphs. His legacy of unsolved problems”, A K Peters,
Ltd., Wellesley, MA, 1998.
[39] Chung F.R.K., Grigor’yan A., Yau S.-T., Upper bounds for eigenvalues of the discrete and
continuous Laplace operators, Advances in Math., 117 (1996) 165-178.
[40] Chung F.R.K., Grigor’yan A., Yau S.-T., Eigenvalues and diameters for manifolds and
graphs, in: “Tsinghua Lectures on Geometry and Analysis”, International Press, 1997. 79-
105.
[41] Chung F.R.K., Lin Y., Yau S.-T., Harnack inequalities for graphs with non-negative Ricci
curvature, J. Math. Anal. Appl, 415 (2014) 25–32.
[42] Chung F.R.K., Lu L., “Complex Graphs and Networks”, CBMS Regional Conference Series
in Mathematics 107, AMS, Providence, RI, 2006.
[43] Chung F.R.K., Yau S.-T., Eigenvalue inequalities for graphs and convex subgraphs, Comm.
in Analysis and Geom., 2 (1994) 628-639.
[44] Chung F.R.K., Yau, S.-T., Eigenvalues of graphs and Sobolev inequalities, Combinatorics,
Probability and Computing, 4 (1995) 11-26.
[45] Colin de Verdière Y., “Spectres de graphes”, Cours Spécialisés 4, Société Mathématique de
France, Paris, 1998.
[46] Coulhon T., Sobolev inequalities on graphs and manifolds, in: “Harmonic Analysis and
Discrete Potential Theory”, Plenum Press, New York and London, 1992.
[47] Coulhon T., Grigor’yan A., On-diagonal lower bounds for heat kernels on non-compact
manifolds and Markov chains, Duke Math. J., 89 (1997) 133-199.
[48] Coulhon T., Grigor’yan A., Random walks on graphs with regular volume growth, Geom.
Funct. Anal., 8 (1998) 656-701.
[49] Coulhon T., Grigor’yan A., Pittet Ch., A geometric approach to on-diagonal heat kernel
lower bounds on groups, Ann. Inst. Fourier, Grenoble, 51 (2001) 1763-1827.
[50] Coulhon T., Grigor’yan A., Zucca F., The discrete integral maximum principle and its
applications, Tohoku Math. J., 57 (2005) 559-587.
[51] Coulhon T., Saloff-Coste L., Isopérimétrie pour les groupes et les variétés, Rev. Mat.
Iberoam., 9 (1993) 293-314.
[52] Cvetkovic D., Doob M., Gutman I., Targasev A., “Recent results in the theory of graph
spectra”, Ann. Disc. Math. 36, North Holland, 1988.


[53] Cvetkovic D., Doob M., Sachs H., “Spectra of graphs”, Acad. Press., NY, 1979.
[54] Davidoff G., Sarnak P., Valette A., “Elementary number theory, group theory and Ramanu-
jan graphs”, Cambridge Univ. Press, 2003.
[55] Davies E.B., Large deviations for heat kernels on graphs, J. London Math. Soc. (2), 47
(1993) 65-72.
[56] Delmotte T., Parabolic Harnack inequality and estimates of Markov chains on graphs, Rev.
Mat. Iberoam., 15 (1999) 181-232.
[57] Diaconis P., Saloff-Coste L., What do we know about the Metropolis Algorithm?, J. of
Computer and System Sciences, 57 (1998) 20-36.
[58] Diaconis P., Stroock D., Geometric bounds for eigenvalues of Markov chains, Ann. Appl.
Prob., 1 (1991) 36-61.
[59] Diestel R., “Graph theory”, Graduate Texts in Mathematics 173, Springer, Berlin, 2017.
[60] Dimakis A., Müller-Hoissen F., Discrete differential calculus: graphs, topologies, and gauge
theory, J. Math. Phys., 35 (1994) 6703-6735.
[61] Dodziuk J., Difference equations, isoperimetric inequalities and transience of certain random
walks, Trans. Amer. Math. Soc., 284 (1984) 787-794.
[62] Dodziuk J., Kendall W.S., Combinatorial Laplacians and isoperimetric inequality, in: “From
local times to global geometry, control and physics (Coventry, 1984/85)”, Pitman Res. Notes
Math. Ser. 150, 1986. 68–74.
[63] Doyle P.G., Snell J.L., “Random walks and electric networks”, Carus Mathematical Mono-
graphs 22, Mathematical Association of America, Washington, DC, 1984.
[64] Exner P., Keating J.P., Kuchment P., Sunada T., Teplyaev A., ed., “Analysis on graphs and
its applications”, Proc. Sympos. Pure Math. 77, AMS, Providence, RI, 2008.
[65] Figa-Talamanca A., “Harmonic analysis on free groups”, CRC, 1983.
[66] Figa-Talamanca A., Nebbia C., “Harmonic analysis and representation theory for groups
acting on homogenous trees”, Cambridge Univ. Press, 1991.
[67] Friedlander L., Genericity of simple eigenvalues for a metric graph, Israel J. Math., 146
(2005) 149–156.
[68] Gaveau B., Okada M., Differential forms and heat diffusion on one-dimensional singular
varieties, Bull. Sci. Math., 115 (1991) 61–80.
[69] Gaveau B., Okada M., Okada T., Explicit heat kernels on graphs and spectral analysis, in:
“Several complex variables (Proceedings of the Mittag-Leffler Institute, Stockholm, 1987-
88)”, Princeton Math. Notes 38, Princeton University Press, 1993. 364–388.
[70] Gieseker D., Knörrer H., Trubowitz E., “The geometry of algebraic Fermi curves”, Academic
Press, Boston, 1992.
[71] Godsil C., Royle G., “Algebraic graph theory”, Graduate Texts in Mathematics 207,
Springer, New York, 2001.
[72] Grigor’yan A., On the existence of positive fundamental solution of the Laplace equation
on Riemannian manifolds, (in Russian) Matem. Sbornik, 128 (1985) 354-363. Engl. transl.:
Math. USSR Sb., 56 (1987) 349-358.
[73] Grigor’yan A., Heat kernels on metric measure spaces, in: “Handbook of Geometric Analysis
Vol. 2”, Ed. L. Ji, P. Li, R. Schoen, L. Simon, Advanced Lectures in Math. 13, International
Press, 2010. 1-60.
[74] Grigor’yan A., Kelbert M., On Hardy-Littlewood inequality for Brownian motion on Rie-
mannian manifolds, J. London Math. Soc. (2), 62 (2000) 625-639.
[75] Grigor’yan A., Lin Y., Muranov Yu., Yau S.-T., Homotopy theory for digraphs, Pure Appl.
Math. Quaterly, 10 (2014) 619-674.
[76] Grigor’yan A., Lin Y., Muranov Yu., Yau S.-T., Path complexes and their homologies, to
appear in Int. J. Math.
[77] Grigor’yan A., Muranov Yu., Yau S.-T., Graphs associated with simplicial complexes, Ho-
mology, Homotopy and Appl., 16 (2014) 295–311.
[78] Grigor’yan A., Muranov Yu., Yau S.-T., Cohomology of digraphs and (undirected) graphs,
Asian J. Math., 19 (2015) 887-932.
[79] Grigor’yan A., Muranov Yu., Yau S.-T., On a cohomology of digraphs and Hochschild co-
homology, J. Homotopy Relat. Struct., 11 (2016) 209–230.
[80] Grigor’yan A., Muranov Yu., Yau S.-T., Homologies of digraphs and Künneth formulas,
Comm. Anal. Geom., 25 (2017) 969–1018.


[81] Grigor’yan A., Telcs A., Sub-Gaussian estimates of heat kernels on infinite graphs, Duke
Math. J., 109 (2001) 451-510.
[82] Grigor’yan A., Telcs A., Harnack inequalities and sub-Gaussian estimates for random walks,
Math. Ann., 324 (2002) 521-556.
[83] Grigorchuk R., Nekrashevych V., Self-similar groups, operator algebras and Schur comple-
ment, J. Modern Dynamics, 1 (2007) 323–370.
[84] Grigorchuk R., Sunik Z., Asymptotic aspects of Schreier graphs and Hanoi towers groups,
C. R. Acad. Sci. Paris Ser. I Math. 342 (2006) 545–550.
[85] Grigorchuk R., Zuk A., The Ihara zeta function of infinite graphs, the KNS spectral measure
and integrable maps, in: “Random walks and geometry”, Walter de Gruyter, Berlin, 2004.
141–180.
[86] Grimmett G., “Probability on graphs: random processes on graphs and lattices, 2nd edition”,
Cambridge Univ. Press, 2017.
[87] Hambly B.M., Kumagai T., Heat kernel estimates and law of the iterated logarithm for
symmetric random walks on fractal graphs, in: “Discrete geometric analysis”, Contemp.
Math. 347, Amer. Math. Soc., Providence, RI, 2004. 153–172.
[88] Hambly B.M., Kumagai T., Heat kernel estimates for symmetric random walks on a class of
fractal graphs and stability under rough isometries, in: “Fractal geometry and applications:
a jubilee of Benoı̂t Mandelbrot, Part 2” Proc. Sympos. Pure Math. 72 Part 2, Amer. Math.
Soc., Providence, RI, 2004. 233–259.
[89] Hebisch W., Saloff-Coste, L., Gaussian estimates for Markov chains and random walks on
groups, Ann. Prob., 21 (1993) 673–709.
[90] Higuchi Y., Nomura Y., Spectral structure of the Laplacian on a covering graph, European
J. Combin., 30 (2009) 570–585.
[91] Hoffman A.J., On eigenvalues and colorings of graphs, in: “Graph Theory and its Applica-
tions”, Academic Press, New York, 1970. 79–91.
[92] Horton M.D., Stark H.M., Terras A.A., What are zeta functions of graphs and what are
they good for?, in: “Quantum graphs and their applications”, Contemp. Math. 415, AMS,
Providence, RI, 2006. 173–189.
[93] Horton M.D., Stark H.M., Terras A.A., Zeta functions of weighted and covering graphs, in:
“Analysis on graphs and its applications”, Proc. Sympos. Pure Math. 77, AMS, Providence,
RI, 2008. 29–50.
[94] Hua B., Lin Y., Curvature notions on graphs, Front. Math. China, 11 (2016) 1275–1290.
[95] Hua B., Lin Y., Stochastic completeness for graphs with curvature dimension conditions,
Adv. Math., 306 (2017) 279–302.
[96] Kaimanovich V.A., Vershik A.M., Random walks on discrete groups: boundary and entropy,
Ann. Prob., 11 (1983) 457-490.
[97] Kaimanovich V.A., Woess W., The Dirichlet problem at infinity for random walks on graphs
with a strong isoperimetric inequality, Probab. Th. Rel. Fields, 91 (1992) 445-466.
[98] Keating J.P., Quantum graphs and quantum chaos, in: “Analysis on graphs and its appli-
cations”, Proc. Sympos. Pure Math. 77, AMS, Providence, RI, 2008. 279–290.
[99] Kigami J., “Analysis on fractals”, Cambridge Tracts in Mathematics 143, Cambridge Uni-
versity Press, 2001.
[100] Kotani M., Sunada T., Albanese maps and off diagonal long time asymptotics for the heat
kernel, Comm. Math. Phys., 209 (2000) 633–670.
[101] Kotani M., Sunada T., Spectral geometry of crystal lattices, in: “Heat kernels and analysis
on manifolds, graphs, and metric spaces”, Contem. Math. 338, AMS, Providence, RI, 2003.
271–305.
[102] Kumagai T., Heat kernel estimates and parabolic Harnack inequalities on graphs and resis-
tance forms, Publ. RIMS, Kyoto Univ., 40 (2004) 793–818.
[103] Lawler G.F., Sokal A.D., Bounds on the L2 spectrum for Markov chains and Markov pro-
cesses: a generalization of Cheeger’s inequality, Trans. Amer. Math. Soc, 309 (1988) 557–
580.
[104] Lee J.R., Oveis Gharan S., Trevisan L., Multi-way spectral partitioning and higher-order
Cheeger inequalities, in: “STOC12 Proceedings of the 2012 ACM Symposium on Theory of
Computing”, ACM, New York, 2012. 1117–1130.
[105] Lee S.-L., Luo Y.-L., Yeh Y.-N., Topological analysis of some special graphs: III. Regular
polyhedra, J. Cluster Science, 2 (1991) 219–229.


[106] Lenz D., Teplyaev A., Expansion in generalized eigenfunctions for Laplacians on graphs and
metric measure spaces, Trans. Amer. Math. Soc., 368 (2016) 4933–4956.
[107] Lin Y., Lu L., Yau S.-T., Ricci curvature of graphs, Tohoku Math. J., 63 (2011) 605–627.
[108] Lin Y., Lu L., Yau S.-T., Ricci-flat graphs with girth at least five, Comm. Anal. Geom., 22
(2014) 671–687.
[109] Lin Y., Yau S.-T., Ricci curvature and eigenvalue estimate on locally finite graphs, Math.
Res. Lett., 17 (2010) 343–356.
[110] Liu S., Multi-way dual Cheeger constants and spectral bounds of graphs, Adv. Math., 268
(2015) 306–338.
[111] Lovász L., Simonovits M., The mixing rate of Markov chains, an isoperimetric inequality, and
computing the volume, in: “Annual Symposium on Foundations of Computer Science, Vol.
I, II (St. Louis, MO, 1990)”, IEEE Comput. Soc. Press, Los Alamitos, CA, 1990. 346–354.
[112] Lubotzky A., “Discrete groups, expanding graphs and invariant measures”, Progr. Math.,
125, Birkhäuser Verlag, Basel, 1994.
[113] Lust-Piquard F., Lower bounds on ‖K^n‖_{1→∞} for some contractions K of L^2(μ), with some
applications to Markov operators, Math. Ann., 303 (1995) 699-712.
[114] Mohar B., Woess W., A survey on spectra of infinite graphs, Bull. London Math. Soc., 21
(1989) 209–234.
[115] Nash-Williams C. St. J. A., Random walks and electric current in networks, Proc. Cambridge
Phil. Soc., 55 (1959) 181-194.
[116] Pang M.M.H., Heat kernels on graphs, J. London Math. Soc. (2), 47 (1993) 50–64.
[117] Pittet Ch., Følner sequences on polycyclic groups, Rev. Mat. Iberoam., 11 (1995) 675-686.
[118] Pittet Ch., Saloff-Coste L., Amenable groups, isoperimetric profiles and random walks, in:
“Geometric group theory down under. Proceedings of a special year in geometric group
theory, Canberra, Australia, 1996”, Walter De Gruyter, Berlin, 1999. 293–316.
[119] Pittet Ch., Saloff-Coste L., On the stability of the behavior of random walks on groups, J.
Geom. Anal., 10 (2000) 713-737.
[120] Pittet Ch., Saloff-Coste L., On random walks on wreath products, Ann. Prob, 30 (2002)
948–977.
[121] Pittet Ch., Saloff-Coste L., Random walks on abelian-by-cyclic groups, Proc. Amer. Math.
Soc., 131 (2003) 1071–1079.
[122] Rahman Md.S., “Basic graph theory”, Undergraduate Topics in Computer Science, Springer,
Cham, 2017.
[123] Rigo M., “Advanced graph theory and combinatorics”, Computer Engineering Series, ISTE,
London; John Wiley and Sons, Inc., Hoboken, NJ, 2016.
[124] Saloff-Coste L., Lectures on finite Markov chains, in: “Lectures on probability theory and
statistics”, Lecture Notes Math. 1665, Springer, 1997. 301-413.
[125] Schenker J., Aizenman M., The creation of spectral gaps by graph decoration, Lett. Math.
Phys., 53 (2000) 253–262.
[126] Shubin M., Sunada T., Geometric theory of lattice vibrations and specific heat, arXiv:math-
ph/051288
[127] Soardi P.M., “Potential theory on infinite networks”, Lecture Notes in Mathematics 1590,
Springer, 1994.
[128] Strichartz R.S., “Differential equations on fractals. A tutorial”, Princeton University Press,
Princeton, NJ, 2006.
[129] Sunada T., Discrete geometric analysis, in: “Analysis on graphs and its applications”, Proc.
Sympos. Pure Math. 77, AMS, Providence, RI, 2008. 51–83.
[130] Sunada, T, Topological crystallography. With a view towards discrete geometric analysis,
Surveys and Tutorials in the Applied Mathematical Sciences 6, Springer, Tokyo, 2013.
[131] Telcs A., “The art of random walks”, Lecture Notes in Mathematics 1885, Springer, 2006.
[132] Terras A., Survey of spectra of Laplacians on finite symmetric spaces, Experiment Math, 5
(1996) 15–32.
[133] Terras A., “Fourier analysis on finite groups and applications”, Cambridge Univ. Press,
1999.
[134] Terras A., A survey of discrete trace formulas, IMP Vol. Math. and Appl., 109 (1999)
643–681.
[135] Varopoulos N.Th., Random walks on soluble groups, Bull. Sci. Math., 107 (1983) 337-344.


[136] Varopoulos N.Th., Long range estimates for Markov chains, Bull. Sci. Math., 109 (1985)
113-119.
[137] Varopoulos N.Th., Saloff-Coste L., Coulhon T., “Analysis and geometry on groups”, Cam-
bridge Tracts in Mathematics 100, Cambridge University Press, 1992.
[138] Wang F.-Y., Criteria of spectral gap for Markov operators, J. Funct. Anal., 266 (2014)
2137–2152.
[139] Woess W., “Random walks on infinite graphs and groups”, Cambridge Tracts in Mathemat-
ics 138, Cambridge Univ. Press., 2000.
[140] Woess W., “Denumerable Markov chains”, EMS Textbooks in Mathematics, EMS Publishing
House, 2009.

Index

backward equation, 16 complete, 39


binary cube, 8, 32 complete bipartite Kn,m , 2
boundary condition, 23 complete Kn , 2
connected, 4
Carne-Varopoulos estimate, 96 cycle Cq , 7
Chebyshev polynomials, 97 finite, 1
Cheeger infinite, 73
constant on finite graphs, 53 locally finite, 1
constant on infinite graphs, 76 path graph, 58
inequality on finite graphs, 54 regular, 8
inequality on infinite graphs, 77 simple, 1
co-area formula, 54 weighted, 3
complement Ωc , 22 Zn , 2
convergence rate, 88 graph distance, 4
Green’s formula
degree, 1
for finite graphs, 27
diameter, 61
for infinite graphs, 74
Dirichlet Laplace operator LΩ , 73
group, 5
Dirichlet problem, 22, 86
Zn , 6
double counting, 1
Zq , 5
edge, 1
edge boundary, 53 heat kernel, 89
edge generating set, 6 lower bound for sup pn (x, x), 107
eigenfunction, 29 on-diagonal lower bound
eigenvalues, 28 on Cayley graphs, 107
of a binary cube, 51 on polycyclic groups , 111
of the Dirichlet Laplace operator, 75 on Vicsek tree, 110
of the Laplace operator, 32 on Zm , 109
of the Markov operator, 75 via volume, 112
of Z^m, 44
of Z_m^n, 49
on products, 47, 48 on Vicsek tree, 100
equilibrium measure, 18, 38 on Zm , 100
escape rate, 114
expansion rate, 65 inner product, 31
inradius, 79
Faber-Krahn inequality, 78, 80 isoperimetric inequality, 78
on Cayley graphs, 83 on Cayley graphs, 82
on Zm , 83 on Zm , 83
forward equation, 16
Laplace operator
graph, 1 on a graph, 20
bipartite, 31, 32, 42 positive definite, 31
Cayley, 6 weighted, 20


Markov chain, 10
Markov kernel, 9, 20
on products, 46
reversible, 14
Markov operator, 21, 34
Markov property, 10
maximum principle, 23
strong, 119
minimum principle, 23
mixing time, 18, 38
on a binary cube, 52
on Z_m^n, 50

path, 4
Polya’s theorem, 19, 117
product
of graphs, 45
of regular graphs, 47
weighted, 46
product of groups, 5

random walk
on Z, 9, 91
recurrent, 19, 117
simple, 9
transient, 19, 117
rate of convergence, 35
Rayleigh quotient, 30
recurrence
Nash-Williams test, 123
on Cayley graphs, 122
volume test, 126
residue, 5

spectral radius, 36, 63


spectrum, 28
subharmonic, 23
superharmonic, 23

trace, 40
transience
isoperimetric test, 128
on Cayley graphs, 122
transition function, 13, 35
type problem, 117

vertex, 1
Vicsek tree, 81

weight
of a vertex, 15
of edges, 3
of vertices, 3
simple, 3

Anybody who has ever read a mathematical text of the author would agree that his way of
presenting complex material is nothing short of marvelous. This new book showcases again the
author’s unique ability of presenting challenging topics in a clear and accessible manner, and of
guiding the reader with ease to a deep understanding of the subject.
—Matthias Keller, University of Potsdam
A central object of this book is the discrete Laplace operator on finite and infinite graphs.
The eigenvalues of the discrete Laplace operator have long been used in graph theory
as a convenient tool for understanding the structure of complex graphs. They can also
be used in order to estimate the rate of convergence to equilibrium of a random walk
(Markov chain) on finite graphs. For infinite graphs, a study of the heat kernel allows one
to solve the type problem—a problem of deciding whether the random walk is recurrent or
transient.
This book starts with elementary properties of the eigenvalues on finite graphs,
continues with their estimates and applications, and concludes with heat kernel esti-
mates on infinite graphs and their application to the type problem.
The book is suitable for beginners in the subject and accessible to undergraduate and
graduate students with a background in linear algebra I and analysis I. It is based on a
lecture course taught by the author and includes a wide variety of exercises. The book
will help the reader to reach a level of understanding sufficient to start pursuing research
in this exciting area.
