with Applications
Oliver Knill
Contents

Preface

1 Introduction
1.1 What is probability theory?
1.2 Some paradoxes in probability theory
1.3 Some applications of probability theory

2 Limit theorems
2.1 Probability spaces, random variables, independence
2.2 Kolmogorov's 0-1 law, Borel-Cantelli lemma
2.3 Integration, Expectation, Variance
2.4 Results from real analysis
2.5 Some inequalities
2.6 The weak law of large numbers
2.7 The probability distribution function
2.8 Convergence of random variables
2.9 The strong law of large numbers
2.10 The Birkhoff ergodic theorem
2.11 More convergence results
2.12 Classes of random variables
2.13 Weak convergence
2.14 The central limit theorem
2.15 Entropy of distributions
2.16 Markov operators
2.17 Characteristic functions
2.18 The law of the iterated logarithm

3 Discrete Stochastic Processes
3.1 Conditional Expectation
3.2 Martingales
3.3 Doob's convergence theorem
3.4 Lévy's upward and downward theorems
3.5 Doob's decomposition of a stochastic process
3.6 Doob's submartingale inequality
3.7 Doob's L^p inequality
3.8 Random walks
3.9 The arcsin law for the 1D random walk
3.10 The random walk on the free group
3.11 The free Laplacian on a discrete group
3.12 A discrete Feynman-Kac formula
3.13 Discrete Dirichlet problem
3.14 Markov processes

4 Continuous Stochastic Processes
4.1 Brownian motion
4.2 Some properties of Brownian motion
4.3 The Wiener measure
4.4 Lévy's modulus of continuity
4.5 Stopping times
4.6 Continuous time martingales
4.7 Doob inequalities
4.8 Khintchine's law of the iterated logarithm
4.9 The theorem of Dynkin-Hunt
4.10 Self-intersection of Brownian motion
4.11 Recurrence of Brownian motion
4.12 Feynman-Kac formula
4.13 The quantum mechanical oscillator
4.14 Feynman-Kac for the oscillator
4.15 Neighborhood of Brownian motion
4.16 The Ito integral for Brownian motion
4.17 Processes of bounded quadratic variation
4.18 The Ito integral for martingales
4.19 Stochastic differential equations

5 Selected Topics
5.1 Percolation
5.2 Random Jacobi matrices
5.3 Estimation theory
5.4 Vlasov dynamics
5.5 Multidimensional distributions
5.6 Poisson processes
5.7 Random maps
5.8 Circular random variables
5.9 Lattice points near Brownian paths
5.10 Arithmetic random variables
5.11 Symmetric Diophantine Equations
5.12 Continuity of random variables
Preface
These notes grew from an introduction to probability theory taught during
the first and second term of 1994 at Caltech. There was a mixed audience of
undergraduates and graduate students in the first half of the course which
covered Chapters 2 and 3, and mostly graduate students in the second part
which covered Chapter 4 and two sections of Chapter 5.
Having been online for many years on my personal web sites, the text got reviewed, corrected and indexed in the summer of 2006. It obtained some enhancements which benefited from other teaching notes and research I wrote while teaching probability theory at the University of Arizona in Tucson, or when incorporating probability into calculus courses at Caltech and Harvard University.
Most of Chapter 2 is standard material and the subject of virtually any course on probability theory. Chapters 3 and 4 are also well covered by the literature, but not in this combination.
The last chapter, "Selected Topics", got considerably extended in the summer of 2006. While in the original course only localization and percolation problems were included, I added other topics like estimation theory, Vlasov dynamics, multidimensional moment problems, random maps, circle-valued random variables, the geometry of numbers, Diophantine equations and harmonic analysis. Some of this material is related to research I got interested in over time.
While the text assumes no prerequisites in probability, a basic exposure to
calculus and linear algebra is necessary. Some real analysis as well as some
background in topology and functional analysis can be helpful.
I would like to get feedback from readers. I plan to keep this text alive and update it in the future. You can email feedback to knill@math.harvard.edu; please indicate in the email if you do not want your feedback to be acknowledged in a future edition of these notes.
To get a more detailed and analytic exposure to probability, the students
of the original course have consulted the book [109] which contains much
more material than covered in class. Since my course was taught, many other books have appeared. Examples are [21, 35].
For a less analytic approach, see [41, 95, 101] or the still excellent classic
[26]. For an introduction to martingales, we recommend [113] and [48] from
both of which these notes have benefited a lot and to which the students
of the original course had access too.
For Brownian motion, we refer to [75, 68], for stochastic processes to [17], for stochastic differential equations to [2, 56, 78, 68, 47], for random walks to [104], for Markov chains to [27, 91], and for entropy and Markov operators to [63]. For applications in physics and chemistry, see [111].
For the selected topics, we followed [33] in the percolation section. The books [105, 31] contain introductions to Vlasov dynamics. The book [1] gives an introduction to the moment problem, and [77, 66] to circle-valued random variables; for Poisson processes, see [50, 9]. For the geometry of numbers and for Fourier series on fractals, see [46].
The book [114] contains examples which challenge the theory with counterexamples. [34, 96, 72] are sources for problems with solutions.
Probability theory can be developed using nonstandard analysis on finite
probability spaces [76]. The book [43] breaks some of the material of the
first chapter into attractive stories. Also texts like [93, 80] are not only for
mathematical tourists.
We live in a time in which more and more content is available online. Knowledge diffuses from papers and books to online websites and databases, which also ease the digging for knowledge in the fascinating field of probability theory.
Oliver Knill, March 20, 2008
Acknowledgements and thanks:
• Sep 3, 2007: Thanks to Csaba Szepesvari for pointing out that in Theorem 2.16.1, the condition P1 = 1 was missing.
• Jun 29, 2011: Thanks to Jim Rulla for pointing out a typo in the preface.
• Csaba Szepesvari contributed a clarification in Theorem 2.16.1.
• Victor Moll mentioned a connection of the graph on page 337 with a paper in Journal of Number Theory 128 (2008), 1807-1846. (September 2013: thanks also for pointing out some typos.)
• March and April 2011: numerous valuable corrections and suggestions to the first and second chapters were submitted by Shiqing Yao. More corrections to the third chapter were contributed by Shiqing in May 2011. Some of them were proof clarifications which were hard to spot. They are all implemented in the current document. Thanks!
• April 2013: Thanks to Jun Luo for helping to clarify the proof of Lemma 3.14.2.
Updates:
• June 2, 2011: Foshee's variant of Martin Gardner's boy-girl problem.
• June 2, 2011: page rank in the section on Markov processes.
Chapter 1

Introduction

1.1 What is probability theory?

Probability theory is a fundamental pillar of modern mathematics with relations to other mathematical areas like algebra, topology, analysis, geometry or dynamical systems. As with any fundamental mathematical construction, the theory starts by adding more structure to a set Ω. In a similar way as introducing algebraic operations, a topology, or a time evolution on a set, probability theory adds a measure theoretical structure to Ω which generalizes "counting" on finite sets: in order to measure the probability of a subset A ⊂ Ω, one singles out a class of subsets A, on which one can hope to do so. This leads to the notion of a σ-algebra A. It is a set of subsets of Ω in which one can perform finitely or countably many operations like taking unions, complements or intersections. The elements in A are called events. If a point ω in the "laboratory" Ω denotes an "experiment", an "event" A ∈ A is a subset of Ω, for which one can assign a probability P[A] ∈ [0, 1]. For example, if P[A] = 1/3, the event happens with probability 1/3. If P[A] = 1, the event takes place almost certainly. The probability measure P has to satisfy obvious properties like that the union A ∪ B of two disjoint events A, B satisfies P[A ∪ B] = P[A] + P[B], or that the complement A^c of an event A has the probability P[A^c] = 1 − P[A]. With a probability space (Ω, A, P) alone, there is already some interesting mathematics: one has for example the combinatorial problem to find the probabilities of events like the event to get a "royal flush" in poker. If Ω is a subset of a Euclidean space like the plane and P[A] = ∫_A f(x, y) dxdy for a suitable nonnegative function f, we are led to integration problems in calculus. Actually, in many applications, the probability space is part of Euclidean space and the σ-algebra is the smallest which contains all open sets. It is called the Borel σ-algebra. An important example is the Borel σ-algebra on the real line.

Given a probability space (Ω, A, P), one can define random variables X. A random variable is a function X from Ω to the real line R which is measurable in the sense that the inverse of a measurable Borel set B in R is in A. The interpretation is that if ω is an experiment, then X(ω) measures an observable quantity of the experiment. The technical condition of measurability resembles the notion of continuity for a function f from a topological space (Ω, O) to the topological space (R, U): a function is continuous if f^{-1}(U) ∈ O for all open sets U ∈ U. In probability theory, a random variable X is measurable if X^{-1}(B) ∈ A for all Borel sets B ∈ B. Any continuous function is measurable for the Borel σ-algebra.

As in calculus, where one does not have to worry about continuity most of the time, also in probability theory, one often does not have to sweat about measurability issues. Indeed, one could suspect that notions like σ-algebras or measurability were introduced by mathematicians to scare normal folks away from their realms. This is not the case. Serious issues are avoided with those constructions. Mathematics is eternal: a once established result will be true also in thousands of years. A theory in which one could prove a theorem as well as its negation would be worthless: it would formally allow to prove any other result, whether true or false. So, these notions are not only introduced to keep the theory "clean"; they are essential for the "survival" of the theory. We give some examples of "paradoxes" to illustrate the need for building a careful theory.

Back to the fundamental notion of random variables: because they are just functions, one can add and multiply them by defining (X + Y)(ω) = X(ω) + Y(ω) or (XY)(ω) = X(ω)Y(ω). Random variables so form an algebra L. The expectation of a random variable X is denoted by E[X] if it exists. It is a real number which indicates the "mean" or "average" of the observation X. It is the value one would expect to measure in the experiment. If X = 1_B is the random variable which has the value 1 if ω is in the event B and 0 if ω is not in the event B, then the expectation of X is just the probability of B. The constant random variable X(ω) = a has the expectation E[X] = a. These two basic examples, as well as the linearity requirement E[aX + bY] = aE[X] + bE[Y], determine the expectation for all random variables in the algebra L: first one defines expectation for finite sums ∑_{i=1}^n a_i 1_{B_i}, called elementary random variables, which approximate general measurable functions. Extending the expectation to a subset L^1 of the entire algebra is part of integration theory. While in calculus one can live with the Riemann integral on the real line, which defines the integral by Riemann sums ∫_a^b f(x) dx ∼ (1/n) ∑_{i/n ∈ [a,b]} f(i/n), the integral defined in measure theory is the Lebesgue integral. The latter is more fundamental, and probability theory is a major motivator for using it. It allows to make statements like that the set of real numbers with periodic decimal expansion has probability 0. In general, the probability of A is the expectation of the random variable X(x) = f(x) = 1_A(x). In calculus, the integral ∫_0^1 f(x) dx would not be defined because a Riemann integral can give 1 or 0 depending on how the Riemann approximation is done. Probability theory allows to introduce the Lebesgue integral by defining ∫_a^b f(x) dx as the limit of (1/n) ∑_{i=1}^n f(x_i) for n → ∞, where x_i are random uniformly distributed points in the interval [a, b]. This Monte Carlo definition of the Lebesgue integral is based on the law of large numbers and is as intuitive to state as the Riemann integral, which is the limit of (1/n) ∑_{x_j = j/n ∈ [a,b]} f(x_j) for n → ∞.
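The Monte Carlo definition just described can be put side by side with the Riemann sum in a few lines of code; a minimal sketch (the test function x ↦ x², the sample sizes and the seed are my own illustrative choices, not from the text):

```python
import random

def riemann_integral(f, a, b, n):
    # Riemann sum: average f over the regular grid points a + (b-a)j/n.
    xs = [a + (b - a) * j / n for j in range(n)]
    return (b - a) * sum(f(x) for x in xs) / n

def monte_carlo_integral(f, a, b, n, rng):
    # Lebesgue integral via the law of large numbers: average f over
    # n points drawn uniformly at random from [a, b].
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

rng = random.Random(0)      # fixed seed so the run is reproducible
f = lambda x: x * x         # the integral of x^2 over [0, 1] is 1/3
approx = monte_carlo_integral(f, 0.0, 1.0, 100_000, rng)
```

For a Riemann-integrable f, both definitions agree in the limit; the point of the Monte Carlo definition is that it still makes sense for functions such as the indicator of the rationals, where the Riemann sums fail.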
With the fundamental notion of expectation one can define the variance, Var[X] = E[X^2] − E[X]^2, and the standard deviation σ[X] = √Var[X] of a random variable X for which X^2 ∈ L^1. One can also look at the covariance Cov[X, Y] = E[XY] − E[X]E[Y] of two random variables X, Y for which X^2, Y^2 ∈ L^1. The correlation Corr[X, Y] = Cov[X, Y]/(σ[X]σ[Y]) of two random variables with positive variance is a number which tells how much the random variable X is related to the random variable Y. If E[XY] is interpreted as an inner product, then the standard deviation is the length of X − E[X], and the correlation has the geometric interpretation as cos(α), where α is the angle between the centered random variables X − E[X] and Y − E[Y]. For example, if Corr[X, Y] = 1, then Y = λX for some λ > 0; if Corr[X, Y] = −1, they are antiparallel. If the correlation is zero, the geometric interpretation is that the two random variables are perpendicular.

A random variable X can be described well by its distribution function F_X. This is a real-valued function defined as F_X(s) = P[X ≤ s] on R, where {X ≤ s} is the event of all experiments ω satisfying X(ω) ≤ s. The distribution function does not encode the internal structure of the random variable X; it does not reveal the structure of the probability space, for example. But the function F_X allows the construction of a probability space with exactly this distribution function. There are two important types of distributions: continuous distributions, with a probability density function f_X = F'_X, and discrete distributions, for which F is piecewise constant. An example of a continuous distribution is the standard normal distribution, where f_X(x) = e^{−x^2/2}/√(2π). One can characterize it as the distribution with maximal entropy I(f) = −∫ log(f(x)) f(x) dx among all distributions which have zero mean and variance 1. An example of a discrete distribution is the Poisson distribution P[X = k] = e^{−λ} λ^k/k! on N = {0, 1, 2, . . . }. One can describe random variables by their moment generating functions M_X(t) = E[e^{Xt}] or by their characteristic functions φ_X(t) = E[e^{iXt}]. The latter is the Fourier transform of the law µ_X = F'_X, which is a measure on the real line R. The law µ_X of the random variable is a probability measure on the real line satisfying µ_X((a, b]) = F_X(b) − F_X(a). By the Lebesgue decomposition theorem, one can decompose any measure µ into a discrete part µ_pp, an absolutely continuous part µ_ac and a singular continuous part µ_sc. Random variables X for which µ_X is a discrete measure are called discrete random variables; traditionally, random variables with a continuous law are called continuous random variables. In general, the law of a random variable X does not need to be pure: it can mix the three types. A random variable can be mixed discrete and continuous, for example. Of course, these two types of random variables are the most important ones, but singular continuous random variables appear too: in spectral theory, dynamical systems or fractal geometry.
Decorrelated random variables can still have relations to each other, but if for any measurable real functions f and g the random variables f(X) and g(Y) are uncorrelated, then the random variables X, Y are independent. Independence is a central notion in probability theory. Two events A, B are called independent if P[A ∩ B] = P[A] · P[B]. Two σ-algebras A, B are called independent if for any pair A ∈ A, B ∈ B, the events A, B are independent. Two random variables X, Y are called independent if they generate independent σ-algebras. It is enough to check that the events A = {X ∈ (a, b)} and B = {Y ∈ (c, d)} are independent for all intervals (a, b) and (c, d). One should think of independent random variables as two aspects of the laboratory Ω which do not influence each other: each event A = {a < X(ω) < b} is independent of the event B = {c < Y(ω) < d}. An arbitrary set of events A_i is called independent if for any finite subset of them, the probability of their intersection is the product of their probabilities. While the distribution function F_{X+Y} of the sum of two independent random variables is a convolution ∫_R F_X(t − s) dF_Y(s), the moment generating functions and characteristic functions satisfy the formulas M_{X+Y}(t) = M_X(t)M_Y(t) and φ_{X+Y}(t) = φ_X(t)φ_Y(t). These identities make M_X, φ_X valuable tools to compute the distribution of an arbitrary finite sum of independent random variables.

Independence can also be explained using conditional probability with respect to an event B of positive probability: the conditional probability P[A|B] = P[A ∩ B]/P[B] of A is the probability that A happens when we know that B takes place. If B is independent of A, then P[A|B] = P[A], but in general the conditional probability is larger. The notion of conditional probability leads to the important notion of the conditional expectation E[X|B] of a random variable X with respect to some sub-σ-algebra B of the σ-algebra A; it is a new random variable which is B-measurable. For B = A, it is the random variable itself; for the trivial algebra B = {∅, Ω}, we obtain the usual expectation E[X] = E[X|{∅, Ω}]. If B is the σ-algebra of an independent random variable Y, then E[X|Y] = E[X|B] = E[X]. In general, the conditional expectation with respect to B is a new random variable obtained by averaging on the elements of B. If B is generated by a finite partition B_1, . . . , B_n of Ω of pairwise disjoint sets covering Ω, then E[X|B] is piecewise constant on the sets B_i, and the value on B_i is the average value of X on B_i. One has E[X|Y] = h(Y) for some function h, extreme cases being E[X|1] = E[X] and E[X|X] = X. An illustrative example is the situation where X(x, y) is a continuous function on the unit square with P = dxdy as a probability measure and where Y(x, y) = x. In that case, E[X|Y] is a function of x alone, given by E[X|Y](x) = ∫_0^1 f(x, y) dy. This is called a conditional integral.

Inequalities play an important role in probability theory. The Chebychev inequality P[|X − E[X]| ≥ c] ≤ Var[X]/c^2 is used very often. It is a special case of the Chebychev-Markov inequality h(c) · P[X ≥ c] ≤ E[h(X)] for monotone nonnegative functions h. Other inequalities are the Jensen inequality E[h(X)] ≥ h(E[X]) for convex functions h, the Minkowski inequality ||X + Y||_p ≤ ||X||_p + ||Y||_p, or the Hölder inequality ||XY||_1 ≤ ||X||_p ||Y||_q, 1/p + 1/q = 1, for random variables X, Y for which ||X||_p = E[|X|^p]^{1/p} and ||Y||_q = E[|Y|^q]^{1/q} are finite. Any inequality which appears in analysis can be useful in the toolbox of probability theory.

A set {X_t}_{t∈T} of random variables defines a stochastic process. The variable t ∈ T is a parameter called "time". Stochastic processes are to probability theory what differential equations are to calculus. An example is a family X_n of random variables which evolve with discrete time n ∈ N. Deterministic dynamical system theory branches into discrete time systems, the iteration of maps, and continuous time systems, the theory of ordinary and partial differential equations. Similarly, in probability theory, one distinguishes between discrete time stochastic processes and continuous time stochastic processes. A discrete time stochastic process is a sequence of random variables X_n with certain properties. An important example is when the X_n are independent, identically distributed random variables. A continuous time stochastic process is given by a family of random variables X_t, where t is real time. An example is a solution of a stochastic differential equation. With more general time like Z^d or R^d, random variables are called random fields, which play a role in statistical physics. Examples of such processes are percolation processes.

While one can realize every discrete time stochastic process X_n by a measure-preserving transformation T : Ω → Ω and X_n(ω) = X(T^n(ω)), probability theory often focuses on a special subclass of systems called martingales, where one has a filtration A_n ⊂ A_{n+1} of σ-algebras such that X_n is A_n-measurable and E[X_n|A_{n−1}] = X_{n−1}, where E[X_n|A_{n−1}] is the conditional expectation with respect to the subalgebra A_{n−1}. Martingales are a powerful generalization of the random walk, the process of summing up IID random variables with zero mean. Similarly as ergodic theory, martingale theory is a natural extension of probability theory and has many applications.

The language of probability fits well into the classical theory of dynamical systems. For example, the ergodic theorem of Birkhoff for measure-preserving transformations has as a special case the law of large numbers, which describes the average of partial sums of random variables (1/n) ∑_{k=1}^n X_k. There are different versions of the law of large numbers. "Weak laws" make statements about convergence in probability; "strong laws" make statements about almost everywhere convergence. There are versions of the law of large numbers for which the random variables do not need to have a common distribution and which go beyond Birkhoff's theorem. Another important theorem is the central limit theorem, which shows that S_n = X_1 + X_2 + · · · + X_n normalized to have zero mean and variance 1 converges in law to the normal distribution, or the law of the iterated logarithm, which says that for centered independent and identically distributed X_k, the scaled sum S_n/Λ_n has accumulation points in the interval [−σ, σ] if Λ_n = √(2n log log n) and σ is the standard deviation of X_k. While stating the weak and strong law of large numbers and the central limit theorem, different convergence notions for random variables appear: almost sure convergence is the strongest; it implies convergence in probability, and the latter implies convergence in law. There is also L^1 convergence, which is stronger than convergence in probability.

The most important continuous time stochastic process definitely is Brownian motion B_t. Standard Brownian motion is a stochastic process which satisfies B_0 = 0, E[B_t] = 0, Cov[B_s, B_t] = s for s ≤ t, and for any sequence of times 0 = t_0 < t_1 < · · · < t_i < t_{i+1}, the increments B_{t_{i+1}} − B_{t_i} are all independent random vectors with normal distribution. Brownian motion B_t is a solution of the stochastic differential equation (d/dt) B_t = ζ(t), where ζ(t) is called white noise. Because white noise is only defined as a generalized function and is not a stochastic process by itself, this stochastic differential equation has to be understood in its integrated form B_t = ∫_0^t dB_s = ∫_0^t ζ(s) ds. More generally, a solution to a stochastic differential equation (d/dt) X_t = f(X_t)ζ(t) + g(X_t) is defined as the solution to the integral equation X_t = X_0 + ∫_0^t f(X_s) dB_s + ∫_0^t g(X_s) ds. Similarly as for differential equations, one first has to prove the existence of the objects. The expression ∫_0^t f(X_s) dB_s can either be defined as an Ito integral, which leads to martingale solutions, or as the Stratonovich integral, which has similar integration rules to classical differential equations. Examples of stochastic differential equations are (d/dt) X_t = X_t ζ(t), which has the solution X_t = e^{B_t − t/2}, or (d/dt) X_t = B_t^4 ζ(t), which has as the solution the process X_t = B_t^5 − 10B_t^3 + 15B_t. The key tool to solve stochastic differential equations is Ito's formula f(B_t) − f(B_0) = ∫_0^t f'(B_s) dB_s + (1/2) ∫_0^t f''(B_s) ds, which is the stochastic analog of the fundamental theorem of calculus. Solutions to stochastic differential equations are examples of Markov processes which show diffusion. Especially, the solutions can be used to solve classical partial differential equations like the Dirichlet problem ∆u = 0 in a bounded domain D with u = f on the boundary δD. One can get the solution by computing the expectation of f at the end points of Brownian motion starting at x and ending at the boundary: u = E_x[f(B_T)]. On a discrete graph, if Brownian motion is replaced by random walk, the same formula holds too. Stochastic calculus is also useful to interpret quantum mechanics as a diffusion process [75, 73], or as a tool to compute solutions to quantum mechanical problems using Feynman-Kac formulas.

Some features of stochastic processes can be described using the language of Markov operators P, which are positive and expectation-preserving transformations on L^1. Examples of such operators are Perron-Frobenius operators X → X(T) for a measure preserving transformation T defining a
discrete time evolution, or stochastic matrices describing a random walk on a finite graph. Markov operators can be defined by transition probability functions, which are measure-valued random variables. The interpretation is that from a given point ω, there are different possibilities to go to. A transition probability measure P(ω, ·) gives the distribution of the target. The relation with Markov operators is assured by the Chapman-Kolmogorov equation P^{n+m} = P^n ◦ P^m. Markov processes can be obtained from random transformations, random walks or by stochastic differential equations. In the case of a finite or countable target space S, one obtains Markov chains, which can be described by probability matrices P; these are the simplest Markov operators. For Markov operators, there is an arrow of time: the relative entropy with respect to a background measure is non-increasing. Markov processes often are attracted by fixed points of the Markov operator. Such fixed points are called stationary states. They describe equilibria, and often they are measures with maximal entropy. An example is the Markov operator P which assigns to a probability density f_Y the probability density of f_{Y+X}, where Y + X is the random variable Y + X normalized so that it has mean 0 and variance 1. For the initial function f = 1, the function P^n(f_X) is the distribution of S_n^*, the normalized sum of n IID random variables X_i. This Markov operator has a unique equilibrium point, the standard normal distribution. It has maximal entropy among all distributions on the real line with variance 1 and mean 0. The central limit theorem tells that the Markov operator P has the normal distribution as a unique attracting fixed point if one takes the weaker topology of convergence in distribution on L^1. This works in other situations too. For circle-valued random variables, for example, the uniform distribution maximizes entropy. It is not surprising therefore that there is a central limit theorem for circle-valued random variables with the uniform distribution as the limiting distribution.
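The attraction of normalized sums towards the normal distribution can be watched numerically; a minimal simulation (the uniform summands, the sample counts and the seed are my own choices, not from the text):

```python
import random

def normalized_sum(n, rng):
    # S_n* = (S_n - E[S_n]) / sigma[S_n] for S_n a sum of n independent
    # uniforms on [0, 1]; each uniform has mean 1/2 and variance 1/12.
    s = sum(rng.random() for _ in range(n))
    return (s - n * 0.5) / (n / 12.0) ** 0.5

rng = random.Random(0)
samples = [normalized_sum(30, rng) for _ in range(20_000)]

# Empirical mean, variance and median-probability of the normalized sums:
# for a standard normal these are 0, 1 and P[X <= 0] = 1/2.
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean * mean
p_neg = sum(x <= 0.0 for x in samples) / len(samples)
```

Already with 30 summands the empirical statistics are close to those of the standard normal distribution, the fixed point described above.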
In the same way as mathematics reaches out into other scientific areas, probability theory has connections with many other branches of mathematics. The last chapter of these notes gives some examples. The section on percolation shows how probability theory can help to understand critical phenomena. In solid state physics, one considers operator-valued random variables. The spectra of random operators are random objects too.
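As a toy illustration of an operator-valued random variable, consider a 2×2 Jacobi matrix [[v1, 1], [1, v2]] whose diagonal potential (v1, v2) is random; its spectrum is then a random object. The matrix size, the ±1 coin-flip potential and the closed-form eigenvalues below are my own illustrative choices, not from the text:

```python
import math
import random

def jacobi_spectrum(v1, v2):
    # Eigenvalues of the symmetric matrix [[v1, 1], [1, v2]]:
    # midpoint of the diagonal +/- sqrt((half the diagonal gap)^2 + 1).
    m = (v1 + v2) / 2.0
    d = math.sqrt(((v1 - v2) / 2.0) ** 2 + 1.0)
    return (m - d, m + d)

rng = random.Random(1)
# The operator is random: its potential (v1, v2) is a random variable.
v1, v2 = rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])
spec = jacobi_spectrum(v1, v2)
```

The spectrum inherits the randomness of the potential, but some statements hold for every realization, i.e. with probability one: here the two eigenvalues are always real and distinct, since the matrix is symmetric with nonzero off-diagonal entry.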
One is interested in what happens with probability one. Localization is the phenomenon in solid state physics that sufficiently random operators often have pure point spectrum. The section on estimation theory gives a glimpse of what mathematical statistics is about. In statistics, one often does not know the probability space itself, so that one has to make a statistical model and look at a parameterization of probability spaces. The goal is to give maximum likelihood estimates for the parameters from data and to understand how small the quadratic estimation error can be made. A section on Vlasov dynamics shows how probability theory appears in problems of geometric evolution. Vlasov dynamics is a generalization of the n-body problem to the evolution of probability measures. One can look at the evolution of smooth measures or measures located on surfaces. This
deterministic stochastic system produces an evolution of densities which
can form singularities without doing harm to the formalism. It also defines
the evolution of surfaces. The section on moment problems is part of multivariate statistics. As for random variables, random vectors can be described by their moments. Since moments define the law of the random variable, the question arises how one can see from the moments whether we have a continuous random variable. The section on random maps is another part of dynamical systems theory. Randomized versions of diffeomorphisms can be considered idealizations of their undisturbed versions. They often can be understood better than their deterministic versions. For example, many random diffeomorphisms have only finitely many ergodic components. In the section on circular random variables, we see that the von Mises distribution has extremal entropy among all circle-valued random variables with
given circular mean and variance. There is also a central limit theorem
on the circle: the sum of IID circular random variables converges in law
to the uniform distribution. We then look at a problem in the geometry
of numbers: how many lattice points are there in a neighborhood of the
graph of one-dimensional Brownian motion? The analysis of this problem
needs a law of large numbers for independent random variables Xk with
uniform distribution on [0, 1]: for 0 ≤ δ < 1 and An = [0, 1/n^δ], one has

lim_{n→∞} (1/n) Σ_{k=1}^{n} 1_{An}(Xk) n^δ = 1 .

Probability theory also matters in complexity theory, as a section on arithmetic random variables shows. It turns out
that random variables like Xn(k) = k, Yn(k) = k^2 + 3 mod n defined on
finite probability spaces become independent in the limit n → ∞. Such
considerations matter in complexity theory: arithmetic functions defined
on large but finite sets behave very much like random functions. This is
reflected by the fact that the inverse of arithmetic functions is in general difficult to compute and belongs to the complexity class NP. Indeed, if
one could invert arithmetic functions easily, one could solve problems like
factoring integers fast. A short section on Diophantine equations indicates
how the distribution of random variables can shed light on the solution
of Diophantine equations. Finally, we look at a topic in harmonic analysis which was initiated by Norbert Wiener. It deals with the relation between the characteristic function φX and the continuity properties of the random
variable X.
1.2 Some paradoxes in probability theory
Colloquial language is not always precise enough to tackle problems in probability theory. Paradoxes appear when definitions allow different interpretations. Ambiguous language can lead to wrong conclusions or contradictory solutions. To illustrate this, we mention a few problems. For many more, see [110]. The following four examples should serve as a motivation to introduce probability theory on a rigorous mathematical footing.
1) Bertrand’s paradox (Bertrand 1889)
We throw random lines onto the unit disc. What is the probability that the line intersects the disc with a length ≥ √3, the side length of the inscribed equilateral triangle?
First answer: take an arbitrary point P on the boundary of the disc. The set of all lines through that point are parameterized by an angle φ. In order that the chord is longer than √3, the line has to lie within a sector of 60° within a range of 180°. The probability is 1/3.
Second answer: take all lines perpendicular to a fixed diameter. The chord is longer than √3 if the point of intersection lies on the middle half of the diameter. The probability is 1/2.
Third answer: if the midpoint of the chord lies in a disc of radius 1/2, the chord is longer than √3. Because this disc has a radius which is half the radius of the unit disc, the probability is 1/4.
Figure. Random angle.
Figure. Random translation.
Figure. Random area.
Like most paradoxes in mathematics, a part of the question in Bertrand’s
problem is not well defined. Here it is the term ”random line”. The solution of the paradox lies in the fact that the three answers depend on the
chosen probability distribution. There are several ”natural” distributions.
The actual answer depends on how the experiment is performed.
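The dependence on the chosen distribution can be made visible numerically. Below is a minimal Monte Carlo sketch of the three answers; the function names are ours, and the chord-length formulas follow from elementary geometry for the unit circle:

```python
import math
import random

def chord_random_endpoints():
    # First answer: chord through two uniform points on the circle,
    # equivalent to a uniform angle at a fixed boundary point.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def chord_random_offset():
    # Second answer: lines perpendicular to a fixed diameter, with the
    # intersection point uniform on the diameter.
    y = random.uniform(-1, 1)
    return 2 * math.sqrt(1 - y * y)

def chord_random_midpoint():
    # Third answer: the midpoint of the chord uniform in the unit disc.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - x * x - y * y)

def estimate(sampler, n=100_000):
    # Fraction of chords at least as long as the inscribed triangle side.
    return sum(sampler() >= math.sqrt(3) for _ in range(n)) / n

random.seed(0)
print(estimate(chord_random_endpoints))  # near 1/3
print(estimate(chord_random_offset))     # near 1/2
print(estimate(chord_random_midpoint))   # near 1/4
```

Each sampler is a different, equally "natural" way to perform the experiment, and the three estimates reproduce the three answers.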
2) Petersburg paradox (D. Bernoulli, 1738)
In the Petersburg casino, you pay an entrance fee c and you get the prize 2^T, where T is the number of times the casino flips a coin until ”head” appears. For example, if the sequence of coin experiments would give ”tail, tail, head”, you would win 2^3 − c = 8 − c, the win minus the entrance fee. Fair would be an entrance fee which is equal to the expectation of the win, which is

Σ_{k=1}^{∞} 2^k P[T = k] = Σ_{k=1}^{∞} 1 = ∞ .
The paradox is that nobody would agree to pay even an entrance fee c = 10.
The problem with this casino is that it is not quite clear what is ”fair”. For example, the situation T = 20 is so improbable that it never occurs in the lifetime of a person. Therefore, for any practical reason, one does not have to worry about large values of T. This, as well as the finiteness of money resources, is the reason why casinos do not have to worry about the following bulletproof martingale strategy in roulette: bet c dollars on red. If you win, stop; if you lose, bet 2c dollars on red. If you win, stop. If you lose, bet 4c dollars on red. Keep doubling the bet. Eventually after n steps, red will occur and you will win 2^n c − (c + 2c + · · · + 2^{n−1} c) = c dollars.
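The doubling argument can be traced in a few lines of code. This is a sketch under the same idealized assumptions as the text (unbounded bankroll, no table limit); the function name and the roulette probability 18/37 for a single-zero wheel are ours:

```python
import random

def martingale_roulette(c=1.0, p_red=18/37, rng=None):
    """Keep doubling the bet on red until red occurs; return the net profit.
    Assumes an unbounded bankroll and no table limit, as in the text."""
    rng = rng or random.Random(0)
    bet, spent = c, 0.0
    while True:
        spent += bet
        if rng.random() < p_red:    # red occurs: the bet pays back twice
            return 2 * bet - spent  # 2^n c - (c + 2c + ... + 2^(n-1) c) = c
        bet *= 2                    # otherwise double and try again

profits = [martingale_roulette(rng=random.Random(s)) for s in range(1000)]
print(min(profits), max(profits))  # every run nets exactly the stake c = 1.0
```

The catch, as the text explains, is the unbounded amount of money the strategy may need before red finally shows.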
This example motivates the concept of martingales. Theorem (3.2.7) or
proposition (3.2.9) will shed some light on this. Back to the Petersburg
paradox. How does one resolve it? What would be a reasonable entrance
fee in ”real life”? Bernoulli proposed to replace the expectation E[G] of the profit G = 2^T with the expectation (E[√G])², where u(x) = √x is called a utility function. This would lead to a fair entrance fee

(E[√G])² = ( Σ_{k=1}^{∞} 2^{k/2} 2^{−k} )² = 1/(√2 − 1)² ∼ 5.828... .
It is not so clear if that is a way out of the paradox because for any proposed utility function u(k), one can modify the casino rule so that the paradox reappears: pay (2^k)² if the utility function is u(k) = √k, or pay e^{2^k} dollars if the utility function is u(k) = log(k). Such reasoning plays a role in
economics and social sciences.
Figure. The picture to the right
shows the average profit development during a typical tournament of 4000 Petersburg games.
After these 4000 games, the
player would have lost about 10
thousand dollars, when paying a
10 dollar entrance fee each game.
The player would have to play a
very, very long time to catch up.
Mathematically, the player will
do so and have a profit in the
long run, but it is unlikely that
it will happen in his or her lifetime.
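A tournament like the one in the figure is easy to reproduce. The sketch below uses the same conventions as the text (prize 2^T, entrance fee 10, 4000 games); function names and the seed are ours:

```python
import random

def petersburg_win(rng):
    """Flip a fair coin until head appears; with T flips in total, pay 2**T."""
    t = 1
    while rng.random() < 0.5:  # tail with probability 1/2
        t += 1
    return 2 ** t

def average_profit(n_games=4000, fee=10, seed=0):
    rng = random.Random(seed)
    return sum(petersburg_win(rng) - fee for _ in range(n_games)) / n_games

# The win is heavy-tailed, so the sample mean converges only extremely
# slowly; different seeds give very different tournament outcomes.
print(average_profit())
```

Rerunning with different seeds illustrates the point of the figure: catching up with the infinite expectation takes unrealistically long.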
3) The three door problem (1991). Suppose you're on a game show and you are given the choice of three doors. Behind one door is a car and behind the others are goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. (In all games, he opens a door to reveal a goat.) He then says to you, ”Do
you want to pick door No. 2?” (In all games he always offers an option to
switch). Is it to your advantage to switch your choice?
The problem is also called the ”Monty Hall problem” and was discussed by Marilyn vos Savant in a ”Parade” column in 1991, where it provoked a big controversy. (See [102] for pointers and similar examples and [90] for much more background.) The problem is that intuitive argumentation can easily lead to the conclusion that it does not matter whether to change the door or not. In fact, switching the door doubles the chances to win:

No switching: you choose a door and win with probability 1/3. The door opened by the host does not affect your choice any more.

Switching: when choosing the door with the car, you lose since you switch. If you choose a door with a goat, the host opens the other door with a goat and you win. There are two such cases, where you win. The probability to win is 2/3.
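The two strategies can be compared empirically. A minimal simulation sketch (our encoding of doors as 0, 1, 2; when the pick is the car, the host's choice between the two goat doors does not affect the win probability):

```python
import random

def monty_hall(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a goat door that is neither the pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, n=100_000, seed=0):
    rng = random.Random(seed)
    return sum(monty_hall(switch, rng) for _ in range(n)) / n

print(win_rate(switch=False))  # near 1/3
print(win_rate(switch=True))   # near 2/3
```

The simulation mirrors the counting argument above: a switcher wins exactly when the first pick was a goat.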
4) The Banach-Tarski paradox (1924)
It is possible to cut the standard unit ball Ω = {x ∈ R^3 | |x| ≤ 1} into 5 disjoint pieces Ω = Y1 ∪ Y2 ∪ Y3 ∪ Y4 ∪ Y5 and rotate and translate the pieces with transformations Ti so that T1(Y1) ∪ T2(Y2) = Ω and T3(Y3) ∪ T4(Y4) ∪ T5(Y5) = Ω′ is a second unit ball Ω′ = {x ∈ R^3 | |x − (3, 0, 0)| ≤ 1} and all the transformed sets again don't intersect.

While this example of Banach-Tarski is spectacular, the existence of bounded subsets A of the circle for which one can not assign a translation-invariant probability P[A] can already be achieved in one dimension. The Italian mathematician Giuseppe Vitali gave in 1905 the following example: define an equivalence relation on the circle T = [0, 2π) by saying that two angles are equivalent, x ∼ y, if (x − y)/π is a rational angle. Let A be a subset of the circle which contains exactly one number from each equivalence class. The axiom of choice assures the existence of A. If x1, x2, . . . is an enumeration of the set of rational angles in the circle, then the sets Ai = A + xi are pairwise disjoint and satisfy ⋃_{i=1}^{∞} Ai = T. If we could assign a translation-invariant probability P[Ai] to A, then the basic rules of probability would give

1 = P[T] = P[ ⋃_{i=1}^{∞} Ai ] = Σ_{i=1}^{∞} P[Ai] = Σ_{i=1}^{∞} p .
But there is no real number p = P[A] = P[Ai ] which makes this possible.
Both the Banach-Tarski as well as the Vitali result show that one can not hope to define a probability space on the algebra A of all subsets of the unit ball or the unit circle such that the probability measure is translation and rotation invariant. The natural concepts of ”length” or ”volume”, which are rotation and translation invariant, only make sense for a smaller algebra. This will lead to the notion of σ-algebra. In the context
smaller algebra. This will lead to the notion of σalgebra. In the context
of topological spaces like Euclidean spaces, it leads to Borel σalgebras,
algebras of sets generated by the compact sets of the topological space.
This language will be developed in the next chapter.
1.3 Some applications of probability theory
Probability theory is a central topic in mathematics. There are close relations and intersections with other fields like computer science, ergodic theory and dynamical systems, cryptology, game theory, analysis, partial differential equations, mathematical physics, economic sciences, statistical mechanics and even number theory. As a motivation, we give some problems and topics which can be treated with probabilistic methods.
1) Random walks. (statistical mechanics, gambling, stock markets, quantum field theory)
Assume you walk through a lattice. At each vertex, you choose a direction at random. What is the probability that you return to your starting point? Polya's theorem (3.8.1) says that in two dimensions, a random walker almost certainly returns to the origin arbitrarily often, while in three dimensions, the walker with probability 1 only returns a finite number of times and then escapes forever.
Figure. A random walk in one dimension displayed as a graph (t, Bt).
Figure. A piece of a
random walk in two
dimensions.
Figure. A piece of a
random walk in three
dimensions.
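The qualitative difference between two and three dimensions already shows up in a finite-horizon simulation. A sketch under illustrative parameters (the step budget and trial count are ours; the 2d fraction creeps toward 1 only logarithmically slowly in the number of steps):

```python
import random

def returns_to_origin(dim, steps, rng):
    """Simple random walk on Z^dim; True if it revisits the origin in time."""
    pos = [0] * dim
    for _ in range(steps):
        axis = rng.randrange(dim)
        pos[axis] += rng.choice((-1, 1))
        if all(c == 0 for c in pos):
            return True
    return False

def return_fraction(dim, steps=2000, trials=800, seed=0):
    rng = random.Random(seed)
    return sum(returns_to_origin(dim, steps, rng) for _ in range(trials)) / trials

print(return_fraction(2))  # well above the 3d value; tends to 1 as steps grow
print(return_fraction(3))  # stays near Polya's 3d return probability ~0.34
```

The 3d fraction stabilizes because almost every non-returning walk has already escaped for good.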
2) Percolation problems (model of a porous medium, statistical mechanics,
critical phenomena).
Each bond of a rectangular lattice in the plane is connected with probability
p and disconnected with probability 1 − p. Two lattice points x, y in the
lattice are in the same cluster, if there is a path from x to y. One says that
”percolation occurs” if there is a positive probability that an infinite cluster
appears. One problem is to find the critical probability pc , the infimum of all
p, for which percolation occurs. The problem can be extended to situations where the switch probabilities are not independent of each other. Some
random variables like the size of the largest cluster are of interest near the
critical probability pc .
Figure. Bond percolation with p=0.2.
Figure. Bond percolation with p=0.4.
Figure. Bond percolation with p=0.6.
A variant of bond percolation is site percolation where the nodes of the
lattice are switched on with probability p.
Figure. Site percolation with p=0.2.
Figure. Site percolation with p=0.4.
Figure. Site percolation with p=0.6.
Generalized percolation problems are obtained, when the independence
of the individual nodes is relaxed. A class of such dependent percolation problems can be obtained by choosing two irrational numbers α, β like α = √2 − 1 and β = √3 − 1 and switching the node (n, m) on if (nα + mβ) mod 1 ∈ [0, p). The probability of switching a node on is again p, but the random variables

X_{n,m} = 1_{(nα+mβ) mod 1 ∈ [0,p)}

are no longer independent.
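These dependent random variables are easy to generate. A small sketch (grid size and p are illustrative); by equidistribution of nα + mβ mod 1, the empirical fraction of switched-on nodes still tends to p even though the X_{n,m} are not independent:

```python
import math

def dependent_sites(n_max, m_max, p,
                    alpha=math.sqrt(2) - 1, beta=math.sqrt(3) - 1):
    """X[n][m] = 1 if (n*alpha + m*beta) mod 1 lies in [0, p), as in the text."""
    return [[(n * alpha + m * beta) % 1.0 < p for m in range(m_max)]
            for n in range(n_max)]

grid = dependent_sites(200, 200, p=0.3)
frac = sum(sum(row) for row in grid) / (200 * 200)
print(frac)  # close to 0.3
```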
Figure. Dependent site percolation with p=0.2.
Figure. Dependent site percolation with p=0.4.
Figure. Dependent site percolation with p=0.6.

Even more general percolation problems are obtained, if also the distribution of the random variables X_{n,m} can depend on the position (n, m).

3) Random Schrödinger operators. (quantum mechanics, functional analysis, disordered systems, solid state physics)

Consider the linear map Lu(n) = Σ_{|m−n|=1} u(m) + V(n)u(n) on the space of sequences u = (. . . , u_{−2}, u_{−1}, u_0, u_1, u_2, . . . ). We assume that V(n) takes random values in {0, 1}. The function V is called the potential. The problem is to determine the spectrum or spectral type of the infinite matrix L on the Hilbert space l^2 of all sequences u with finite ||u||_2^2 = Σ_{n=−∞}^{∞} u_n^2. The operator L is the Hamiltonian of an electron in a one-dimensional disordered crystal. The spectral properties of L have a relation with the conductivity properties of the crystal. Of special interest is the situation where the values V(n) are all independent random variables. It turns out that if V(n) are IID random variables with a continuous distribution, there are many eigenvalues for the infinite dimensional matrix L, at least with probability 1. This phenomenon is called localization.
More general operators are obtained by allowing V(n) to be random variables with the same distribution, but where one does not insist on independence any more. A well studied example is the almost Mathieu operator, where V(n) = λ cos(θ + nα) and for which α/(2π) is irrational.

Figure. A wave ψ(t) = e^{iLt} ψ(0) evolving in a random potential at t = 0. Shown are both the potential Vn and the wave ψ(0).
Figure. A wave ψ(t) = e^{iLt} ψ(0) evolving in a random potential at t = 1. Shown are both the potential Vn and the wave ψ(1).
Figure. A wave ψ(t) = e^{iLt} ψ(0) evolving in a random potential at t = 2. Shown are both the potential Vn and the wave ψ(2).

4) Classical dynamical systems. (celestial mechanics, fluid dynamics, mechanics, population models)

The study of deterministic dynamical systems like the logistic map x ↦ 4x(1 − x) on the interval [0, 1] or the three body problem in celestial mechanics has shown that such systems or subsets of them can behave like random systems. Many effects can be described by ergodic theory, which can be seen as a brother of probability theory. Many results in probability theory generalize to the more general setup of ergodic theory. An example is Birkhoff's ergodic theorem, which generalizes the law of large numbers.
Given a dynamical system given by a map T or a flow Tt on a subset Ω of some Euclidean space, one obtains for every invariant probability measure P a probability space (Ω, A, P). An observed quantity like a coordinate of an individual particle is a random variable X and defines a stochastic process Xn(ω) = X(T^n ω). For many dynamical systems, including also some 3 body problems, there are invariant measures and observables X for which Xn are IID random variables. Probability theory is therefore intrinsically relevant also in classical dynamical systems.

Figure. Iterating the logistic map T(x) = 4x(1 − x) on [0, 1] produces independent random variables. The invariant measure P is continuous.
Figure. A short time evolution of the Newtonian three body problem. There are energies and subsets of the energy surface which are invariant and on which there is an invariant probability measure.
Figure. The simple mechanical system of a double pendulum exhibits complicated dynamics. The differential equation defines a measure preserving flow Tt on a probability space.

5) Cryptology. (computer science, coding theory, data encryption)

Coding theory deals with the mathematics of encrypting codes or with the design of error correcting codes. Both aspects of coding theory have important applications. A good code can repair loss of information due to bad channels and hide the information in an encrypted way. While many aspects of coding theory are based in discrete mathematics, number theory, algebra and algebraic geometry, there are also probabilistic and combinatorial aspects to the problem. We illustrate this with the example of a public key encryption algorithm whose security is based on the fact that it is hard to factor a large integer N = pq into its prime factors p, q but easy to verify that p, q are factors, if one knows them. The number N can be public but only the person who knows the factors p, q can read the message. Assume we want to crack the code and find the factors p and q. The simplest method is to try to find the factors by trial and error, but this is impractical already if N has 50 digits: we would have to search through 10^25 numbers to find the factor p. This corresponds to probing 100 million times
every second over a time span of 15 billion years. There are better methods known and we want to illustrate one of them now: assume we want to find the factors of N = 11111111111111111111111111111111111111111111111. The method goes as follows: start with an integer a and iterate the quadratic map T(x) = x^2 + c mod N on {0, 1, . . . , N − 1}. If we assume the numbers x0 = a, x1 = T(a), x2 = T(T(a)), . . . to be random, how many such numbers do we have to generate, until two of them are the same modulo one of the prime factors p? The answer is surprisingly small and based on the birthday paradox: the probability that in a group of 23 students two of them have the same birthday is larger than 1/2: the probability of the event that we have no birthday match is 1 · (364/365)(363/365) · · · (343/365) = 0.492703, so that the probability of a birthday match is 1 − 0.492703 = 0.507297. This is larger than 1/2. If we apply this thinking to the sequence of numbers xi generated by the pseudo random number generator T, then we expect to have a chance of 1/2 for finding a match modulo p in √p iterations. Because p ≤ √N, we have to try N^{1/4} numbers to get a factor: if xn and xm are the same modulo p, then gcd(xn − xm, N) produces the factor p of N. In the above example of the 46 digit number N, there is a prime factor p = 35121409. The Pollard algorithm finds this factor with probability 1/2 in √p = 5926 steps. This is an estimate only, which gives the order of magnitude. With the above N, if we start with a = 17 and take c = 3, then we have a match x27720 = x13860. It can be found very fast.

This probabilistic argument would give a rigorous probabilistic estimate if we would pick truly random numbers. The algorithm of course generates such numbers in a deterministic way and they are not truly random. The generator is called a pseudo random number generator. It produces numbers which are random in the sense that many statistical tests can not distinguish them from true random numbers. Actually, many random number generators built into computer operating systems and programming languages are pseudo random number generators. Probabilistic thinking is often involved in designing, investigating and attacking data encryption codes or random number generators.

6) Numerical methods. (integration, Monte Carlo experiments, algorithms)

In applied situations, it is often very difficult to find integrals directly. This happens for example in statistical mechanics or quantum electrodynamics, where one wants to find integrals in spaces with a large number of dimensions. One can nevertheless compute numerical values using Monte Carlo methods with a manageable amount of effort. Limit theorems assure that these numerical values are reasonable. Let us illustrate this with a very simple but famous example, the Buffon needle problem.

A stick of length 2 is thrown onto the plane filled with parallel lines, all of which are distance d = 2 apart. If the center of the stick falls within distance y of a line, then the interval of angles leading to an intersection with a grid line has length 2 arccos(y) among a possible range of angles [0, π]. The probability of hitting a line is therefore ∫_0^1 2 arccos(y)/π dy = 2/π. This leads to a Monte Carlo method to compute π: just throw randomly n sticks onto the plane and count the number k of times it hits a line. The number 2n/k is an approximation of π. This is of course not an effective way to compute π, but it illustrates the principle.

Figure. The Buffon needle problem is a Monte Carlo method to compute π. By counting the number of hits in a sequence of experiments, one can get random approximations of π. The law of large numbers assures that the approximations will converge to the expected limit. All Monte Carlo computations are theoretically based on limit theorems.
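The factor-finding idea described in the cryptology section fits in a few lines. Below is a minimal sketch of the Pollard method (the Floyd cycle-finding variant); the small N = 10403 = 101 · 103 is our stand-in for the large repunit of the text, and the starting values are illustrative:

```python
from math import gcd

def pollard_rho(N, a=2, c=1):
    """Return a nontrivial factor of composite N by iterating T(x) = x^2 + c
    mod N and waiting for a "birthday" match modulo an unknown prime factor:
    if x_n = x_m mod p, then gcd(x_n - x_m, N) is divisible by p."""
    while True:
        x = y = a
        d = 1
        while d == 1:
            x = (x * x + c) % N                # tortoise: one step of T
            y = ((y * y + c) ** 2 + c) % N     # hare: two steps of T
            d = gcd(abs(x - y), N)
        if d != N:
            return d
        c += 1  # rare failure: the whole cycle collapsed at once; retry

p = pollard_rho(10403)
print(p, 10403 // p)
```

The expected number of iterations grows like √p, which is exactly the birthday-paradox estimate given above.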
Chapter 2
Limit theorems

2.1 Probability spaces, random variables, independence

Let Ω be an arbitrary set.

Definition. A set A of subsets of Ω is called a σ-algebra if the following three properties are satisfied:
(i) Ω ∈ A,
(ii) A ∈ A ⇒ A^c = Ω \ A ∈ A,
(iii) An ∈ A ⇒ ⋃_{n∈N} An ∈ A.
A pair (Ω, A) for which A is a σ-algebra in Ω is called a measurable space.

Properties. If A is a σ-algebra and An is a sequence in A, then the following properties follow immediately by checking the axioms:
1) ⋂_{n∈N} An ∈ A.
2) lim sup_n An := ⋂_{n=1}^{∞} ⋃_{m=n}^{∞} Am ∈ A.
3) lim inf_n An := ⋃_{n=1}^{∞} ⋂_{m=n}^{∞} Am ∈ A.
4) If A, B are σ-algebras, then A ∩ B is a σ-algebra.
5) If {Ai}_{i∈I} is a family of σ-subalgebras of A, then ⋂_{i∈I} Ai is a σ-algebra.

Example. For an arbitrary set Ω, A = {∅, Ω} is a σ-algebra. It is called the trivial σ-algebra.

Example. If Ω is an arbitrary set, then A = {A | A ⊂ Ω} is a σ-algebra. The set of all subsets of Ω is the largest σ-algebra one can define on a set.
Definition. For any set C of subsets of Ω, we can define σ(C), the smallest σ-algebra A which contains C. The σ-algebra A is the intersection of all σ-algebras which contain C. It is again a σ-algebra.

Example. For Ω = {1, 2, 3}, the set C = {{1, 2}, {2, 3}} generates the σ-algebra A which consists of all 8 subsets of Ω.

Definition. A finite set of subsets A1, . . . , An of Ω which are pairwise disjoint and whose union is Ω is called a partition of Ω. It generates the σ-algebra A = {A = ⋃_{j∈J} Aj}, where J runs over all subsets of {1, . . . , n}. This σ-algebra has 2^n elements. Every finite σ-algebra is of this form. The smallest nonempty elements {A1, . . . , An} of this algebra are called atoms.

Definition. If (E, O) is a topological space, where O is the set of open sets in E, then σ(O) is called the Borel σ-algebra of the topological space. If A ⊂ B, then A is called a subalgebra of B. A set B in B is also called a Borel set.

Remark. One sometimes defines the Borel σ-algebra as the σ-algebra generated by the set of compact sets C of a topological space. Compact sets in a topological space are sets for which every open cover has a finite subcover. In Euclidean spaces R^n, where compact sets coincide with the sets which are both bounded and closed, the Borel σ-algebra generated by the compact sets is the same as the one generated by the open sets. The two definitions agree for a large class of topological spaces like ”locally compact separable metric spaces”. (See [114].)

Example. The σ-algebra generated by the open balls C = {A = Br(x)} of a metric space (X, d) need not agree with the family of Borel subsets. Take the metric space (R, d) where d(x, y) = 1_{x≠y} is the discrete metric. Because any subset of R is open, the Borel σ-algebra is the set of all subsets of R. The open balls in R are either single points or the whole space. The σ-algebra generated by the open balls is the set of countable subsets of R together with their complements.

Remark. Often, the Borel σ-algebra is enlarged to the σ-algebra of all Lebesgue measurable sets, which includes all sets B which are a subset of a Borel set A of measure 0. The smallest σ-algebra B which contains all these sets is called the completion of B. The completion of the Borel σ-algebra is the σ-algebra of all Lebesgue measurable sets. It is in general strictly larger than the Borel σ-algebra. But it can also have pathological features, like that the composition of a Lebesgue measurable function with a continuous function does not need to be Lebesgue measurable any more. (See [114], Example 2.4.)
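The counting in the partition example (2^n elements, one per subset of atoms) can be made concrete. A small sketch; the helper name and the set encodings are ours:

```python
from itertools import chain, combinations

def sigma_algebra_from_partition(atoms):
    """All unions of atoms: the sigma-algebra generated by a finite partition."""
    sets = set()
    for r in range(len(atoms) + 1):
        for pick in combinations(atoms, r):
            sets.add(frozenset(chain.from_iterable(pick)))
    return sets

# The partition {{1}, {2}, {3}} of Omega = {1, 2, 3} generates all 2^3 = 8 subsets.
A = sigma_algebra_from_partition([{1}, {2}, {3}])
print(len(A))  # 8
```

Coarser partitions generate smaller σ-algebras: the partition [{1, 2}, {3}] gives only 2^2 = 4 sets.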
Example. If Ω = [0, 1] × [0, 1] is the unit square and C is the set of all sets of the form [0, 1] × [a, b] with 0 < a < b < 1, then σ(C) is the σ-algebra of all sets of the form [0, 1] × A, where A is in the Borel σ-algebra of [0, 1].

Definition. Given a measurable space (Ω, A). A function P : A → R is called a probability measure and (Ω, A, P) is called a probability space if the following three properties, called the Kolmogorov axioms, are satisfied:
(i) P[A] ≥ 0 for all A ∈ A,
(ii) P[Ω] = 1,
(iii) An ∈ A disjoint ⇒ P[⋃_n An] = Σ_n P[An].
The last property is called σ-additivity.

Properties. Here are some basic properties of the probability measure which immediately follow from the definition:
1) P[∅] = 0.
2) A ⊂ B ⇒ P[A] ≤ P[B].
3) P[⋃_n An] ≤ Σ_n P[An].
4) P[A^c] = 1 − P[A].
5) 0 ≤ P[A] ≤ 1.
6) A1 ⊂ A2 ⊂ · · · with An ∈ A, then P[⋃_{n=1}^{∞} An] = lim_{n→∞} P[An].

Remark. There are different ways to build the axioms for a probability space. One could for example replace (i) and (ii) with properties 4), 5) in the above list. Statement 6) is equivalent to σ-additivity if P is only assumed to be additive.

Remark. The name ”Kolmogorov axioms” honors a monograph of Kolmogorov from 1933 [54] in which an axiomatization appeared. Other mathematicians have formulated similar axiomatizations at the same time, like Hans Reichenbach in 1932. According to Doob, axioms (i)-(iii) were first proposed by G. Bohlmann in 1908 [22].

Definition. A map X from a measure space (Ω, A) to an other measure space (∆, B) is called measurable, if X^{−1}(B) ∈ A for all B ∈ B. The set X^{−1}(B) consists of all points x ∈ Ω for which X(x) ∈ B. This pull back set X^{−1}(B) is defined even if X is non-invertible. For example, for X(x) = x^2 on (R, B) one has X^{−1}([1, 4]) = [1, 2] ∪ [−2, −1].

Definition. A function X : Ω → R is called a random variable, if it is a measurable map from (Ω, A) to (R, B), where B is the Borel σ-algebra of R. Denote by L the set of all real random variables. The set L is an algebra under addition and multiplication: one can add and multiply random variables and get new random variables. More generally, one can consider random variables taking values in a second measurable space (E, B). If E = R^d, then the random variable X is called a random vector. For a random vector X = (X1, . . . , Xd), each component Xi is a random variable.

Example. Let Ω = R^2 with Borel σ-algebra A and let

P[A] = (1/2π) ∫∫_A e^{−(x^2+y^2)/2} dx dy .

Any continuous function X of two variables is a random variable on Ω. For example, X(x, y) = xy(x + y) is a random variable. But also X(x, y) = 1/(x + y) is a random variable, even so it is not continuous. The vector-valued function X(x, y) = (x, y, x^3) is an example of a random vector.

Example. The map X(x, y) = x from the square Ω = [0, 1] × [0, 1] to the real line R defines the algebra B = {A × [0, 1]}, where A is in the Borel σ-algebra of the interval [0, 1].

Example. The map X from Z6 = {0, 1, 2, 3, 4, 5} to {0, 1} ⊂ R defined by X(x) = x mod 2 has the value X(x) = 0 if x is even and X(x) = 1 if x is odd. The σ-algebra generated by X is A = {∅, {1, 3, 5}, {0, 2, 4}, Ω}.

Definition. Every random variable X defines a σ-algebra X^{−1}(B) = {X^{−1}(B) | B ∈ B}. We denote this algebra by σ(X) and call it the σ-algebra generated by X. A constant map X(x) = c defines the trivial algebra A = {∅, Ω}.

Definition. Given a set B ∈ A with P[B] > 0, we define

P[A|B] = P[A ∩ B] / P[B] ,

the conditional probability of A with respect to B. It is the probability of the event A, under the condition that the event B happens.

Example. We throw two fair dice. Let A be the event that the first dice is 6 and let B be the event that the sum of the two dice is 11. Because P[B] = 2/36 = 1/18 and P[A ∩ B] = 1/36 (we need to throw a 6 and then a 5), we have P[A|B] = (1/36)/(1/18) = 1/2. The interpretation is that since we know that the event B happens, we have only two possibilities: (5, 6) or (6, 5). Only the second is compatible with the event A.
Exercise. a) Verify that the Sicherman dice with faces (1, 2, 2, 3, 3, 4) and (1, 3, 4, 5, 6, 8) have the property that the probability of getting the value k is the same as with a pair of standard dice. For example, the probability to get 5 with the Sicherman dice is 4/36 because the four cases (1, 4), (2, 3), (2, 3), (4, 1) lead to a sum 5. Also for the standard dice, we have four cases (1, 4), (2, 3), (3, 2), (4, 1).
b) Three dice A, B, C are called nontransitive, if the probability that A > B is larger than 1/2, the probability that B > C is larger than 1/2 and the probability that C > A is larger than 1/2. Verify the nontransitivity property for A = (1, 4, 4, 4, 4, 4), B = (3, 3, 3, 3, 6, 6) and C = (2, 2, 2, 5, 5, 5).

Exercise. In [28], Martin Gardner writes: ”Ask someone to name two faces of a die. Let him throw a pair of dice as often as he wishes. Each time you bet at even odds that either the 2 or the 5 or both will show. Suppose he names 2 and 5.” Is this a good bet?

Definition. The following properties of conditional probability are called the Keynes postulates. While they follow immediately from the definition of conditional probability, they are historically interesting because they appeared already in 1921 as part of an axiomatization of probability theory:
1) P[A|B] ≥ 0.
2) P[A|A] = 1.
3) P[A|B] + P[A^c|B] = 1.
4) P[A ∩ B|C] = P[A|C] · P[B|A ∩ C].

Definition. A finite set {A1, . . . , An} ⊂ A is called a finite partition of Ω if ⋃_{j=1}^{n} Aj = Ω and Aj ∩ Ai = ∅ for i ≠ j. A finite partition covers the entire space with finitely many, pairwise disjoint sets.

If all possible experiments are partitioned into different events Aj and the probabilities that B occurs under the condition Aj are known, then one can compute the probability that Ai occurs knowing that B happens:

Theorem 2.1.1 (Bayes rule). Given a finite partition {A1, . . , An} in A and B ∈ A with P[B] > 0, one has

P[Ai|B] = P[B|Ai] P[Ai] / ( Σ_{j=1}^{n} P[B|Aj] P[Aj] ) .
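The rule can be sanity-checked by brute-force enumeration on the dice-and-coins example of this section (a dice score k followed by k coin flips); the encoding below is ours, and exact rationals avoid rounding:

```python
from fractions import Fraction
from itertools import product

# Enumerate (dice score k, sequence of k coin flips) with its probability.
outcomes = {}
for k in range(1, 7):
    for flips in product("HT", repeat=k):
        outcomes[(k, flips)] = Fraction(1, 6) * Fraction(1, 2) ** k

# Condition on B = "all coins show heads" and read off P[dice = 5 | B].
p_B = sum(p for (k, f), p in outcomes.items() if set(f) == {"H"})
p_A5_and_B = sum(p for (k, f), p in outcomes.items()
                 if set(f) == {"H"} and k == 5)
print(p_B, p_A5_and_B / p_B)  # 21/128 2/63
```

The enumeration reproduces both the denominator P[B] = 21/128 and the Bayes answer 2/63 computed in the example.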
4. Here is the solution: first introduce the probability space of all possible events Ω = {bg. we have P[B] = j=1 P[B ∩ Aj ] = j=1 P[BAj ]P[Aj ] = P6 −j j=1 2 /6 = (1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64)(1/6) = 21/128. a fair coin is tossed k times. A fair dice is rolled first. We know P[BAj ] = 2−j . Let B = {bg. Because the denominator is P[B] = j=1 P[BAj ]P[Aj ]. Because the events Aj . But these are by definition both P[Ai ∩ B]. 6 }. The GirlBoy problem has been popularized by Martin Gardner: ”Dave has two children. P[AB] = P[B] (3/4) 3 . Assume. However. We have (1/2) 2 P[A ∩ B] = = . 6 form a parP6 P6 tition of Ω. What is the probability that the other child is a girl”? Most people would intuitively say 1/2 because the second event looks independent of the first. Next. Let B be the event that all coins are heads and let Aj be the event that the dice showed the number j. we know that all coins show heads. gb. . The problem is to find P[A5 B].30 Chapter 2. 5. j = 1. .5 0. 2. gg } with P[{bg }] = P[{gb }] = P[{bb }] = P[{gg }] = 1/4.4 0. the Bayes rule just says P[Ai B]P[B] = P[BAi ]P[Ai ]. bg. (1/32)(1/6) P[BA5 ]P[A5 ] 2 = P[A5 B] = P6 = . . Example. what is the probability that the score of the dice was equal to 5? Solution. The probabilities P[Aj B] in the last problem 0.2 0.1 1 2 3 4 5 6 Example. it is not and the initial intuition is misleading. gb. gg } be the event that there is at least one girl. Limit theorems Pn Proof. It gives a random number k from {1. . bb } be the event that there is at least one boy and A = {gb. 21/128 63 ( j=1 P[BAj ]P[Aj ]) 0.3 Figure. bb. One child is a boy. By Bayes rule. 3.
Example. A variant of the Boy-Girl problem is due to Gary Foshee [84]. We formulate it in a simplified form: "Dave has two children, one of whom is a boy born at night. What is the probability that Dave has two boys?" It is assumed of course that the probability to have a boy (b) or girl (g) is 1/2 and that the probability to be born at night (n) or day (d) is 1/2 too. One would think that the additional information "to be born at night" does not influence the probability and that the overall answer is still 1/3 like in the Boy-Girl problem. But this is not the case. The probability space of all events has 16 elements

Ω = {(bd)(bd), (bd)(bn), (bd)(gd), (bd)(gn), (bn)(bd), (bn)(bn), (bn)(gd), (bn)(gn), (gd)(bd), (gd)(bn), (gd)(gd), (gd)(gn), (gn)(bd), (gn)(bn), (gn)(gd), (gn)(gn)}.

The information that one of the kids is a boy born at night only allows pairings which contain a (bn) and eliminates all cases in which no (bn) appears. We are left with an event B containing 7 cases which encodes this information:

B = {(bd)(bn), (bn)(bd), (bn)(bn), (bn)(gd), (bn)(gn), (gd)(bn), (gn)(bn)}.

The event A that Dave has two boys is A = {(bd)(bn), (bn)(bd), (bn)(bn)}. The answer is the conditional probability P[A|B] = P[A ∩ B]/P[B] = 3/7. This is bigger than 1/3, the probability without the knowledge of being born at night.

Exercise. Solve the original Foshee problem: "Dave has two children, one of whom is a boy born on a Tuesday. What is the probability that Dave has two boys?"

Exercise. This version is close to the original Gardner paradox:
a) I throw two coins onto the floor. A friend who stands nearby looks at them and tells me: "At least one of them is head". What is the probability that the other is head?
b) I throw two coins onto the floor. One rolls under a bookshelf and is invisible. My friend who stands near the coin tells me: "At least one of them is head". What is the probability that the hidden one is head?
Explain why in a) the probability is 1/3 and in b) the probability is 1/2.
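Both the 3/7 answer and the answer to the original Foshee problem can be checked by brute-force enumeration; the weekday index used for "Tuesday" below is an arbitrary labeling:

```python
from fractions import Fraction
from itertools import product

# Each child is a pair (sex, time of birth); 16 equally likely ordered outcomes.
children = list(product("bg", "dn"))
omega = list(product(children, children))
assert len(omega) == 16

B = [w for w in omega if ("b", "n") in w]                  # at least one boy born at night
A = [w for w in B if all(child[0] == "b" for child in w)]  # two boys among those
assert Fraction(len(A), len(B)) == Fraction(3, 7)

# Original Foshee version: 7 weekdays instead of day/night; index 2 stands for "Tuesday".
children7 = list(product("bg", range(7)))
omega7 = list(product(children7, children7))
B7 = [w for w in omega7 if ("b", 2) in w]
A7 = [w for w in B7 if all(child[0] == "b" for child in w)]
assert Fraction(len(A7), len(B7)) == Fraction(13, 27)
```

The Tuesday information raises the answer from 1/3 to 13/27, for the same reason the night/day information raised it to 3/7.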
Definition. Two events A, B in a probability space (Ω, A, P) are called independent, if

P[A ∩ B] = P[A] · P[B].

Example. The probability space Ω = {1, 2, 3, 4, 5, 6} with p_i = P[{i}] = 1/6 describes a fair dice which is thrown once. The set A = {1, 3, 5} is the event that "the dice produces an odd number". It has the probability 1/2. The event B = {1, 2} is the event that the dice shows a number smaller than 3. It has probability 1/3. The two events are independent because P[A ∩ B] = P[{1}] = 1/6 = P[A] · P[B].

Definition. Write J ⊂_f I if J is a finite subset of I. A family {A_i}_{i∈I} of σ-subalgebras of A is called independent, if for every J ⊂_f I and every choice A_j ∈ A_j one has P[⋂_{j∈J} A_j] = ∏_{j∈J} P[A_j]. A family of sets {A_j}_{j∈I} is called independent, if the σ-algebras A_j = {∅, A_j, A_j^c, Ω} are independent. A family {X_j}_{j∈J} of random variables is called independent, if {σ(X_j)}_{j∈J} are independent σ-algebras.

Example. On Ω = {1, 2, 3, 4}, the two σ-algebras A = {∅, {1, 3}, {2, 4}, Ω} and B = {∅, {1, 2}, {3, 4}, Ω} are independent.

Properties.
(1) If a σ-algebra F ⊂ A is independent of itself, then P[A ∩ A] = P[A] = P[A]^2, so that for every A ∈ F, P[A] ∈ {0, 1}. Such a σ-algebra is called P-trivial.
(2) Two sets A, B ∈ A are independent if and only if P[A ∩ B] = P[A] · P[B].
(3) If A, B are independent, then A, B^c are independent too.
(4) If P[B] > 0 and A, B are independent, then P[A|B] = P[A], because P[A|B] = (P[A] · P[B])/P[B] = P[A].
(5) For independent sets A, B, the σ-algebras A = {∅, A, A^c, Ω} and B = {∅, B, B^c, Ω} are independent.

Definition. Let Ω be a set. A family I of subsets of Ω is called a π-system, if I is closed under intersections: if A, B are in I, then A ∩ B is in I. A σ-additive map from a π-system I to [0, ∞) is called a measure.

Example. 1) The family I = {∅, {1}, {1, 2}, Ω} is a π-system on Ω = {1, 2, 3}.
2) The set I = {[a, b) | 0 ≤ a < b ≤ 1} ∪ {∅} of all half closed intervals is a π-system on Ω = [0, 1], because the intersection of two such intervals [a, b) and [c, d) is either empty or again such an interval.

Definition. (Ω, D) is called a Dynkin system if D is a set of subsets of Ω satisfying
(i) Ω ∈ D,
(ii) A, B ∈ D, A ⊂ B ⇒ B \ A ∈ D,
(iii) A_n ∈ D, A_n ր A ⇒ A ∈ D.
We use the notation A_n ր A if A_n ⊂ A_{n+1} and ⋃_n A_n = A.
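The independence of the two dice events A = {1, 3, 5} and B = {1, 2} above can be confirmed with exact arithmetic; `prob` is a helper of our own:

```python
from fractions import Fraction

omega = range(1, 7)                      # fair die, thrown once
P = {w: Fraction(1, 6) for w in omega}

def prob(event):
    return sum(P[w] for w in event)

A = {1, 3, 5}   # "odd number", probability 1/2
B = {1, 2}      # "smaller than 3", probability 1/3

assert prob(A) == Fraction(1, 2) and prob(B) == Fraction(1, 3)
assert prob(A & B) == prob(A) * prob(B)   # P[{1}] = 1/6: independent
# The complement is then independent as well (property (3) above):
Ac = set(omega) - A
assert prob(Ac & B) == prob(Ac) * prob(B)
```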
If I is any set of subsets of Ω, we denote by d(I) the smallest Dynkin system which contains I and call it the Dynkin system generated by I.

Lemma 2.1.2. (Ω, A) is a σ-algebra if and only if it is a π-system and a Dynkin system.

Proof. If A is a σ-algebra, then it certainly is both a π-system and a Dynkin system. Assume now, A is both a π-system and a Dynkin system. Given A, B ∈ A. The Dynkin property implies that A^c = Ω \ A and B^c = Ω \ B are in A, and by the π-system property also A ∪ B = Ω \ (A^c ∩ B^c) ∈ A. Given a sequence A_n ∈ A, define B_n = ⋃_{k=1}^n A_k ∈ A and A = ⋃_n A_n. Then B_n ր A and by the Dynkin property A ∈ A. We see that A is a σ-algebra.

Lemma 2.1.3. If I is a π-system, then d(I) = σ(I).

Proof. By the previous lemma, we need only to show that d(I) is a π-system.
(i) Define D_1 = {B ∈ d(I) | B ∩ C ∈ d(I), ∀C ∈ I}. Because I is a π-system, we have I ⊂ D_1. Claim: D_1 is a Dynkin system. Clearly Ω ∈ D_1. Given A, B ∈ D_1 with A ⊂ B. For C ∈ I we compute (B \ A) ∩ C = (B ∩ C) \ (A ∩ C), which is in d(I). Therefore B \ A ∈ D_1. Given A_n ր A with A_n ∈ D_1 and C ∈ I. Then A_n ∩ C ր A ∩ C, so that A ∩ C ∈ d(I) and A ∈ D_1.
(ii) Define D_2 = {A ∈ d(I) | B ∩ A ∈ d(I), ∀B ∈ d(I)}. From (i) we know that I ⊂ D_2. Like in (i), one shows that D_2 is a Dynkin system. Therefore D_2 = d(I), which means that d(I) is a π-system.

Lemma 2.1.4 (Extension lemma). Given a π-system I. If two measures µ, ν on σ(I) satisfy µ(Ω) = ν(Ω) < ∞ and µ(A) = ν(A) for A ∈ I, then µ = ν.

Proof. The set D = {A ∈ σ(I) | µ(A) = ν(A)} is a Dynkin system: first of all, Ω ∈ D. Given A, B ∈ D with A ⊂ B. Then µ(B \ A) = µ(B) − µ(A) = ν(B) − ν(A) = ν(B \ A), so that B \ A ∈ D. Given A_n ∈ D with A_n ր A, then the σ-additivity gives µ(A) = lim_n µ(A_n) = lim_n ν(A_n) = ν(A), so
that A ∈ D. Since D is a Dynkin system containing the π-system I, we know that σ(I) = d(I) ⊂ D, which means that µ = ν on σ(I).

Definition. Given a probability space (Ω, A, P). Two π-systems I, J ⊂ A are called P-independent, if for all A ∈ I and B ∈ J, P[A ∩ B] = P[A] · P[B].

Lemma 2.1.5. Given a probability space (Ω, A, P). Let G, H be two σ-subalgebras of A and I and J be two π-systems satisfying σ(I) = G, σ(J) = H. Then G and H are independent if and only if I and J are P-independent.

Proof. (i) Fix I ∈ I and define on (Ω, H) the measures µ(H) = P[I ∩ H], ν(H) = P[I] P[H] of total probability P[I]. By the P-independence of I and J, they coincide on J and by the extension lemma (2.1.4), they agree on H, and we have P[I ∩ H] = P[I] P[H] for all I ∈ I and H ∈ H.
(ii) Define for fixed H ∈ H the measures µ(G) = P[G ∩ H] and ν(G) = P[G] P[H] of total probability P[H] on (Ω, G). They agree on I and so on G. We have shown that P[G ∩ H] = P[G] P[H] for all G ∈ G and all H ∈ H.

Definition. A is an algebra if A is a set of subsets of Ω satisfying
(i) Ω ∈ A,
(ii) A ∈ A ⇒ A^c ∈ A,
(iii) A, B ∈ A ⇒ A ∩ B ∈ A.

Remark. We see that A^c ∩ B = B \ A and A ∩ B^c = A \ B are also in the algebra A. The relation A ∪ B = (A^c ∩ B^c)^c shows that the union A ∪ B is in the algebra. Therefore also the symmetric difference A∆B = (A \ B) ∪ (B \ A) is in the algebra. The operation ∩ is the "multiplication" and the operation ∆ the "addition" in the algebra, explaining the name algebra. It is up to you to find the zero element 0∆A = A for all A and the one element 1 ∩ A = A in this algebra.

Definition. A σ-additive map from A to [0, ∞) is called a measure.

Theorem 2.1.6 (Carathéodory continuation theorem). Any measure on an algebra R has a unique continuation to a measure on σ(R).

Before we launch into the proof of this theorem, we need two lemmas:

Definition. Let A be an algebra and λ : A → [0, ∞] with λ(∅) = 0. A set A ∈ A is called a λ-set, if λ(A ∩ G) + λ(A^c ∩ G) = λ(G) for all G ∈ A.
Lemma 2.1.7. The set A_λ of λ-sets of an algebra A is again an algebra and satisfies ∑_{k=1}^n λ(A_k ∩ G) = λ((⋃_{k=1}^n A_k) ∩ G) for all finite disjoint families {A_k}_{k=1}^n in A_λ and all G ∈ A.

Proof. From the definition it is clear that Ω ∈ A_λ and that if B ∈ A_λ, then B^c ∈ A_λ. Given B, C ∈ A_λ. Then A = B ∩ C ∈ A_λ. Indeed: since C ∈ A_λ, we have

λ(C ∩ G) + λ(C^c ∩ G) = λ(G). (2.1)

Because B is a λ-set, we get, using B ∩ C = A,

λ(A ∩ G) + λ(B^c ∩ C ∩ G) = λ(C ∩ G). (2.2)

Since C is a λ-set, we get λ(C ∩ A^c ∩ G) + λ(C^c ∩ A^c ∩ G) = λ(A^c ∩ G). This can be rewritten with C ∩ A^c = C ∩ B^c and C^c ∩ A^c = C^c as

λ(A^c ∩ G) = λ(C ∩ B^c ∩ G) + λ(C^c ∩ G). (2.3)

Adding up these three equations and cancelling common terms shows λ(A ∩ G) + λ(A^c ∩ G) = λ(G), so that B ∩ C is a λ-set. We have so verified that A_λ is an algebra.
If B and C are disjoint in A_λ, we deduce from the fact that B is a λ-set

λ(B ∩ (B ∪ C) ∩ G) + λ(B^c ∩ (B ∪ C) ∩ G) = λ((B ∪ C) ∩ G).

This can be rewritten as λ(B ∩ G) + λ(C ∩ G) = λ((B ∪ C) ∩ G). The analog statement for finitely many sets is obtained by induction.

Definition. Let A be a σ-algebra. A map λ : A → [0, ∞] is called an outer measure, if
λ(∅) = 0,
A, B ∈ A with A ⊂ B ⇒ λ(A) ≤ λ(B) (monotonicity),
A_n ∈ A ⇒ λ(⋃_n A_n) ≤ ∑_n λ(A_n) (σ-subadditivity).

Lemma 2.1.8 (Carathéodory's lemma). If λ is an outer measure on a measurable space (Ω, A), then A_λ ⊂ A defines a σ-algebra on which λ is countably additive.

Proof.
Given a disjoint sequence A_n ∈ A_λ with ⋃_n A_n = A, we have to show that A ∈ A_λ and λ(A) = ∑_n λ(A_n). With B_n = ⋃_{k=1}^n A_k, the previous lemma gives

λ(G) = λ(B_n ∩ G) + λ(B_n^c ∩ G) ≥ λ(B_n ∩ G) + λ(A^c ∩ G) = ∑_{k=1}^n λ(A_k ∩ G) + λ(A^c ∩ G),

and letting n → ∞, the σ-subadditivity applied to A ∩ G = ⋃_k (A_k ∩ G) gives λ(G) ≥ λ(A ∩ G) + λ(A^c ∩ G). Subadditivity for λ gives on the other hand λ(G) ≤ λ(A ∩ G) + λ(A^c ∩ G). We conclude that A ∈ A_λ. All the inequalities in this proof are therefore equalities. Finally we show that λ is σ-additive on A_λ: for any n ≥ 1 we have

∑_{k=1}^n λ(A_k) ≤ λ(⋃_{k=1}^∞ A_k) ≤ ∑_{k=1}^∞ λ(A_k).

Taking the limit n → ∞ shows that the right hand side and left hand side agree, verifying the σ-additivity.

We now prove Carathéodory's continuation theorem (2.1.6):

Proof. Define A = σ(R) and the σ-algebra P consisting of all subsets of Ω. Define on P the function

λ(A) = inf{∑_{n∈N} µ(A_n) | {A_n}_{n∈N} sequence in R satisfying A ⊂ ⋃_{n∈N} A_n}.

(i) λ is an outer measure on P. λ(∅) = 0 and λ(A) ≤ λ(B) for A ⊂ B are obvious. To see the σ-subadditivity, take a sequence A_n ∈ P with λ(A_n) < ∞ and fix ε > 0. For all n ∈ N, one can (by the definition of λ) find a sequence {B_{n,k}}_{k∈N} in R such that A_n ⊂ ⋃_{k∈N} B_{n,k} and ∑_{k∈N} µ(B_{n,k}) ≤ λ(A_n) + ε 2^{−n}. Suppose that A ⊂ ⋃_n A_n. Then A ⊂ ⋃_{n,k} B_{n,k}, so that λ(A) ≤ ∑_{n,k} µ(B_{n,k}) ≤ ∑_n λ(A_n) + ε. Since the choice of ε is arbitrary, the σ-subadditivity is proven.

(ii) λ = µ on R. Given A ∈ R. Clearly λ(A) ≤ µ(A). Suppose that A ⊂ ⋃_n A_n with A_n ∈ R. Define a sequence {B_n}_{n∈N} of disjoint sets in R inductively: B_1 = A_1 and B_n = A_n ∩ (⋃_{k<n} A_k)^c, so that B_n ⊂ A_n and ⋃_n B_n = ⋃_n A_n ⊃ A. From the σ-additivity of µ on R follows

µ(A) = ∑_n µ(B_n ∩ A) ≤ ∑_n µ(B_n) ≤ ∑_n µ(A_n).

Since the choice of A_n is arbitrary, this gives µ(A) ≤ λ(A).

(iii) Let P_λ be the set of λ-sets in P. Then R ⊂ P_λ. Given A ∈ R and G ∈ P, fix ε > 0. There exists a sequence {B_n}_{n∈N} in R such that G ⊂ ⋃_n B_n and ∑_n µ(B_n) ≤ λ(G) + ε. By the additivity of µ on R,

∑_n µ(B_n) = ∑_n µ(A ∩ B_n) + ∑_n µ(A^c ∩ B_n) ≥ λ(A ∩ G) + λ(A^c ∩ G),

because A ∩ G ⊂ ⋃_n A ∩ B_n and A^c ∩ G ⊂ ⋃_n A^c ∩ B_n. Since ε is arbitrary, we get λ(G) ≥ λ(A ∩ G) + λ(A^c ∩ G). On the other hand, since λ is subadditive, we have also λ(G) ≤ λ(A ∩ G) + λ(A^c ∩ G), and A is a λ-set. We conclude that R ⊂ P_λ.
(iv) By (i), λ is an outer measure on (Ω, P). By Carathéodory's lemma (2.1.8), P_λ is a σ-algebra on which λ is countably additive. Since by step (iii), R ⊂ P_λ, we know A = σ(R) ⊂ P_λ, so that we can define µ on A as the restriction of λ to A. By step (ii), this is an extension of the measure µ on R.
(v) The uniqueness follows from Lemma (2.1.4).

Here is an overview over the possible sets of subsets of Ω we have considered. We also include the notions of ring and σ-ring, which are often used in measure theory and which differ from the notions of algebra or σ-algebra in that Ω does not have to be in it. In probability theory, those notions are not needed at first. For an introduction into measure theory, see [3, 18].

Set of subsets of Ω — contains — closed under:
topology — ∅, Ω — arbitrary unions, finite intersections
π-system — ∅ — finite intersections
Dynkin system — Ω — increasing countable unions, differences
ring — ∅ — differences and finite unions
σ-ring — ∅ — differences and countably many unions
algebra — ∅, Ω — complements and finite unions
σ-algebra — ∅, Ω — complements and countably many unions
Borel σ-algebra — ∅, Ω — the σ-algebra generated by the topology

Remark. The name "ring" has its origin in the fact that with the "addition" A + B = A∆B = (A ∪ B) \ (A ∩ B) and the "multiplication" A · B = A ∩ B, a ring of sets becomes an algebraic ring like the set of integers, in which rules like A · (B + C) = A · B + A · C hold. The empty set ∅ is the zero element, because A∆∅ = A for every set A. If the set Ω is also in the ring, one has a ring with 1, because the identity A ∩ Ω = A shows that Ω is the 1-element in the ring.

Let's add some definitions, which will occur later:

Definition. A nonzero measure µ on a measurable space (Ω, A) is called positive, if µ(A) ≥ 0 for all A ∈ A. If µ+, µ− are two positive measures so that µ = µ+ − µ−, then this is called the Hahn decomposition of µ. A measure is called finite if it has a Hahn decomposition and the positive measure |µ| defined by |µ|(A) = µ+(A) + µ−(A) satisfies |µ|(Ω) < ∞.

Definition. Let ν, µ be two measures on the measurable space (Ω, A). We write ν ≪ µ if for every A in the σ-algebra A, the condition µ(A) = 0 implies ν(A) = 0. One says then that ν is absolutely continuous with respect to µ.

Example. If µ = dx is the Lebesgue measure on (Ω, A) = ([0, 1], B) and ν([a, b]) = ∫_a^b x² dx for every interval, then ν ≪ µ.
Example. If µ = dx is the Lebesgue measure on ([0, 1], B) and ν = δ_{1/2} is the point measure which satisfies ν(A) = 1 if 1/2 ∈ A and ν(A) = 0 else, then ν is not absolutely continuous with respect to µ. Indeed, for the set A = {1/2}, we have µ(A) = 0 but ν(A) = 1.

2.2 Kolmogorov's 0 − 1 law, Borel-Cantelli lemma

Definition. Given a family {A_i}_{i∈I} of σ-subalgebras of A, let A_J := ⋁_{j∈J} A_j denote, for any nonempty set J ⊂ I, the σ-algebra generated by ⋃_{j∈J} A_j. Define also A_∅ = {∅, Ω}. The tail σ-algebra T of {A_i}_{i∈I} is defined as T = ⋂_{J⊂_f I} A_{J^c}, where J^c = I \ J.

Theorem 2.2.1 (Kolmogorov's 0 − 1 law). If {A_i}_{i∈I} are independent σ-algebras, then the tail σ-algebra T is P-trivial: P[A] = 0 or P[A] = 1 for every A ∈ T.

Proof. (i) Define for H ⊂ I the π-system I_H = {A ∈ A | A = ⋂_{i∈K} A_i, A_i ∈ A_i, K ⊂_f H}. For disjoint F, G ⊂ I, the π-systems I_F and I_G are independent and generate the σ-algebras A_F and A_G. By Lemma (2.1.5), the algebras A_F and A_G are independent whenever F, G ⊂ I are disjoint.
(ii) Especially: A_J is independent of A_{J^c} for every J ⊂_f I.
(iii) T = ⋂_{J⊂_f I} A_{J^c} is independent of A_K for every K ⊂_f I. It is therefore independent of the π-system I_I which generates A_I. Use again Lemma (2.1.5): T is independent of A_I.
(iv) T is a sub-σ-algebra of A_I. Therefore T is independent of itself, which implies that it is P-trivial: P[A] = P[A ∩ A] = P[A]^2 forces P[A] ∈ {0, 1}.

Example. Let X_n be a sequence of independent random variables and let

A = {ω ∈ Ω | ∑_{n=1}^∞ X_n(ω) converges}.

Then P[A] = 0 or P[A] = 1. Proof: Because ∑_{n=1}^∞ X_n converges if and only if Y_n = ∑_{k=n}^∞ X_k converges, we have A ∈ σ(A_n, A_{n+1}, . . .) for every n, so that A is in the tail σ-algebra T defined by the independent σ-algebras A_n = σ(X_n). The decision whether P[A] = 0 or P[A] = 1 is related to the convergence or divergence of a series and will be discussed later again. If for example X_n takes values ±1/n² each with probability 1/2, then P[A] = 1; if X_n takes values ±1 each with probability 1/2, then P[A] = 0.
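A small simulation is consistent with the convergence claim for the random series ∑ ±1/n²; note that the tail bound used in the assertion, |S_N − S_M| ≤ ∑_{n>M} 1/n² < 1/M, holds for every choice of signs, so this check is deterministic despite the random input:

```python
import random

def partial_sums(seed, N):
    """Partial sums of sum_n eps_n / n^2 with random signs eps_n = +/-1."""
    rng = random.Random(seed)
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += rng.choice([-1.0, 1.0]) / (n * n)
        out.append(s)
    return out

# For any sign choice, |S_100000 - S_10000| <= sum_{n > 10000} 1/n^2 < 1/10000:
for seed in range(5):
    s = partial_sums(seed, 100000)
    assert abs(s[-1] - s[9999]) < 1.0 / 9999
```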
Definition. Given a sequence of events A_n ∈ A, the set

A_∞ := lim sup_{n→∞} A_n = ⋂_{m=1}^∞ ⋃_{n≥m} A_n

consists of the set of ω ∈ Ω such that ω ∈ A_n for infinitely many n ∈ N. The set A_∞ is contained in the tail σ-algebra of A_n = {∅, A_n, A_n^c, Ω}. It follows from Kolmogorov's 0 − 1 law that P[A_∞] ∈ {0, 1} if A_n ∈ A and the {A_n} are P-independent.

Remark. In the theory of dynamical systems, a measurable map T : Ω → Ω of a probability space (Ω, A, P) onto itself is called a K-system, if there exists a σ-subalgebra F ⊂ A which satisfies F ⊂ σ(T(F)), for which the sequence F_n = σ(T^n(F)) satisfies F_N = A, and which has a trivial tail σ-algebra T = {∅, Ω}. An example of such a system is a shift map T(x)_n = x_{n+1} on Ω = ∆^Z, where ∆ is a compact topological space. The K-system property follows from Kolmogorov's 0 − 1 law: take F = ⋁_{k=1}^∞ T^k(F_0), with F_0 = {x ∈ Ω = ∆^Z | x_0 = r ∈ ∆}.

Theorem 2.2.2 (First Borel-Cantelli lemma). Given a sequence of events A_n ∈ A. Then

∑_{n∈N} P[A_n] < ∞ ⇒ P[A_∞] = 0.

Proof. P[A_∞] = lim_{n→∞} P[⋃_{k≥n} A_k] ≤ lim_{n→∞} ∑_{k≥n} P[A_k] = 0.

Theorem 2.2.3 (Second Borel-Cantelli lemma). For a sequence A_n ∈ A of independent events,

∑_{n∈N} P[A_n] = ∞ ⇒ P[A_∞] = 1.

Proof. By independence,

P[⋂_{k≥n} A_k^c] = ∏_{k≥n} P[A_k^c] = ∏_{k≥n} (1 − P[A_k]) ≤ ∏_{k≥n} exp(−P[A_k]) = exp(−∑_{k≥n} P[A_k]).
The right hand side converges to 0 for n → ∞. From

P[A_∞^c] = P[⋃_{n∈N} ⋂_{k≥n} A_k^c] ≤ ∑_{n∈N} P[⋂_{k≥n} A_k^c] = 0

follows P[A_∞^c] = 0.

Example. The following example illustrates that independence is necessary in the second Borel-Cantelli lemma: take the probability space ([0, 1], B, P), where P = dx is the Lebesgue measure on the Borel σ-algebra B of [0, 1]. For A_n = [0, 1/n] we get A_∞ = ∅ and so P[A_∞] = 0. But because P[A_n] = 1/n, we have ∑_{n=1}^∞ P[A_n] = ∑_{n=1}^∞ 1/n = ∞, because the harmonic series diverges:

∑_{n=1}^R 1/n ≥ ∫_1^R (1/x) dx = log(R).

Example. ("Monkey typing Shakespeare") Writing a novel amounts to entering a sequence of N symbols into a computer. For example, to write "Hamlet", Shakespeare had to enter N = 180'000 characters. A monkey is placed in front of a terminal and types symbols at random, one per unit time, producing a random sequence X_n of identically distributed random variables in the set of all possible symbols. There are 30^N possibilities to write a word of length N with 26 letters together with a minimal set of punctuation: a space, a comma, a dash and a period sign. If each letter occurs with probability at least ε, then the probability that Hamlet appears when typing the first N letters is ε^N. Call A_1 this event and call A_k the event that this happens when typing the ((k − 1)N + 1)'th until the kN'th letter. These sets A_k are all independent and have all equal probability. By the second Borel-Cantelli lemma, the events occur infinitely often. This means that Shakespeare's work is not only written once, but infinitely many times.

Before we model this precisely, let's look at the odds for random typing. The chance to write "To be, or not to be — that is the question" with 43 random hits onto the keyboard is 1/10^63. Note that the life time of a monkey is bounded above by 131'400'000 ∼ 10^8 seconds, so that it is even unlikely that this single sentence will ever be typed. To compare the probability, it is helpful to put the result into a list of known large numbers [10, 39]:

10^4 — One "myriad". The largest numbers, the Greeks were considering.
10^5 — The largest number considered by the Romans.
10^10 — The age of the universe in years.
10^17 — The age of the universe in seconds.
10^22 — Distance to our neighbor galaxy Andromeda in meters.
10^23 — Number of atoms in two gram Carbon, which is 1 Avogadro.
10^27 — Estimated size of universe in meters.
10^30 — Mass of the sun in kilograms.
10^41 — Mass of our home galaxy "milky way" in kilograms.
10^51 — Archimedes's estimate of number of sand grains in universe.
10^80 — The number of protons in the universe.
10^100 — One "googol". (Name coined by the 9 year old nephew of E. Kasner.)
10^153 — Number mentioned in a myth about Buddha.
10^155 — Size of ninth Fermat number (factored in 1990).
10^(10^6) — Size of large prime number (Mersenne number, Nov 1996).
10^(10^7) — Years, ape needs to write "hound of Baskerville" (random typing).
10^(10^33) — Inverse is probability that a mouse survives on the sun for a week.
10^(10^42) — Inverse is chance that a can of beer tips by quantum fluctuation.
10^(10^50) — Inverse is chance to find yourself on Mars by quantum fluctuations.
10^(10^51) — Estimated number of possible games of chess.
10^(10^100) — One "Googolplex".

Lemma 2.2.4. Given a random variable X on a finite probability space ∆, there exists a sequence X_1, X_2, . . . of independent random variables for which all random variables X_i have the same distribution as X.

Proof. Let Q denote the probability measure on ∆. The product space Ω = ∆^N is compact by Tychonov's theorem. Let A be the Borel σ-algebra on Ω. The probability measure P = Q^N is defined on (Ω, A); it has the property that for any cylinder set

Z(w) = {ω ∈ Ω | ω_k = r_k, ω_{k+1} = r_{k+1}, . . . , ω_n = r_n}

defined by a "word" w = [r_k, . . . , r_n],

P[Z(w)] = ∏_{i=k}^n P[ω_i = r_i] = ∏_{i=k}^n Q({r_i}).

Finite unions of cylinder sets form an algebra R which generates σ(R) = A. The measure P is σ-additive on this algebra. By Carathéodory's continuation theorem (2.1.6), there exists a measure P on (Ω, A). For this probability space (Ω, A, P), the random variables X_i(ω) = ω_i are independent and have the same distribution as X.

Remark. The proof made use of Tychonov's theorem, which tells that the product of compact topological spaces is compact. Tychonov's theorem is known to be equivalent to the Axiom of choice, one of the fundamental assumptions of mathematics, so we can assume it to be a fundamental axiom itself.
The compactness of a countable product of compact metric spaces, which was the case needed in the proof, could actually have been proven without the axiom of choice using a diagonal argument; it was easier, however, to just refer to a fundamental assumption of mathematics.
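For a finite word length, the cylinder-set formula P[Z(w)] = ∏ Q({r_i}) in the proof of Lemma 2.2.4 can be verified by direct enumeration; the distribution Q below is an arbitrary example choice:

```python
from fractions import Fraction
from itertools import product

# A distribution Q on the finite set Delta = {'a','b','c'} (an arbitrary example).
Q = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

# Product measure on Delta^3: each word gets the product of its letter weights.
P = {w: Q[w[0]] * Q[w[1]] * Q[w[2]] for w in product(Q, repeat=3)}
assert sum(P.values()) == 1

def prob(event):
    return sum(P[w] for w in event)

# Cylinder set fixing the first two coordinates: P[Z(w)] = Q({r_1}) Q({r_2}).
Z = [w for w in P if w[0] == "a" and w[1] == "b"]
assert prob(Z) == Q["a"] * Q["b"]

# The coordinate random variables X_i(omega) = omega_i are independent:
assert prob([w for w in P if w[0] == "a" and w[2] == "c"]) == \
       prob([w for w in P if w[0] == "a"]) * prob([w for w in P if w[2] == "c"])
```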
Example. In the example of the monkey writing a novel, the process of authoring is given by a sequence of independent random variables X_n(ω) = ω_n. The event that Hamlet is written during the time [Nk + 1, N(k + 1)] is given by a cylinder set A_k. These events are independent and have all the same probability. By the second Borel-Cantelli lemma, the set A_∞, the event that the Monkey types this novel arbitrarily often, has probability 1: P[A_∞] = 1.

Remark. Lemma (2.2.4) can be generalized: given any sequence of probability spaces (R, P_i), one can form the product space (Ω, A, P). The random variables X_i(ω) = ω_i are independent and have the law P_i. An other construction of independent random variables is given in [109].

Exercise. In this exercise, we experiment with some measures on Ω = N [113].
a) The distance d(n, m) = |n − m| defines a topology O on Ω = N. What is the Borel σ-algebra A generated by this topology?
b) Show that for every λ > 0,
P[A] = ∑_{n∈A} e^{−λ} λ^n/n!
is a probability measure on the measurable space (Ω, A).
c) The function s ↦ ζ(s) = ∑_{n∈Ω} 1/n^s is called the Riemann zeta function. Show that for every s > 1,
P[A] = ∑_{n∈A} ζ(s)^{−1} n^{−s}
is a probability measure on the measurable space (Ω, A) considered in a).
d) Show that the sets A_p = {n ∈ Ω | p divides n} with prime p are independent. What happens if p is not prime?
e) Give a probabilistic proof of Euler's formula
1/ζ(s) = ∏_{p prime} (1 − 1/p^s).
f) Let A be the set of natural numbers which are not divisible by a square different from 1. Prove P[A] = 1/ζ(2s).
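Parts d) and f) of the exercise can be explored numerically with a truncated zeta sum; the cutoff N and the tolerance are arbitrary choices, so this is a plausibility check rather than a proof:

```python
def zeta(s, N=20000):
    """Truncated Riemann zeta sum; N is an arbitrary cutoff."""
    return sum(n ** -s for n in range(1, N + 1))

def prob(pred, s, N=20000):
    """P[A] = zeta(s)^(-1) * sum_{n in A} n^(-s), truncated at N."""
    return sum(n ** -s for n in range(1, N + 1) if pred(n)) / zeta(s, N)

s = 2.0
p2 = prob(lambda n: n % 2 == 0, s)
p3 = prob(lambda n: n % 3 == 0, s)
p6 = prob(lambda n: n % 6 == 0, s)
assert abs(p2 - 2 ** -s) < 1e-3          # P[A_p] = p^{-s}
assert abs(p6 - p2 * p3) < 1e-3          # A_2 and A_3 are independent

def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

assert abs(prob(squarefree, s) - 1 / zeta(2 * s)) < 1e-3   # part f): P[A] = 1/zeta(2s)
```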
2.3 Integration, Expectation, Variance

In this entire section, (Ω, A, P) will denote a fixed probability space. The algebra of all random variables is denoted by L. It is a vector space over the field R of the real numbers in which one can also multiply.

Definition. An elementary function or step function is an element of L which is of the form
X = ∑_{i=1}^n α_i · 1_{A_i}
with α_i ∈ R and where A_i ∈ A are disjoint sets. Denote by S the algebra of step functions. For X ∈ S we can define the integral
E[X] := ∫_Ω X dP = ∑_{i=1}^n α_i P[A_i].

Definition. Define L^1 ⊂ L as the set of random variables X, for which
sup_{Y∈S, |Y|≤|X|} ∫ Y dP
is finite. For X ∈ L^1, we can define the integral or expectation
E[X] := ∫ X dP = sup_{Y∈S, Y≤X^+} ∫ Y dP − sup_{Y∈S, Y≤X^−} ∫ Y dP,
where X^+ = X ∨ 0 = max(X, 0) and X^− = −X ∨ 0 = max(−X, 0). The vector space L^1 is called the space of integrable random variables. It is custom to write L¹ for the space L^1 in which random variables X, Y with E[|X − Y|] = 0 are identified. Similarly, for p ≥ 1, write L^p for the set of random variables X for which E[|X|^p] < ∞. Unlike the spaces L^p, the identified spaces L¹, L^p, . . . are Banach spaces. We will come back to this later.

Definition. For X ∈ L², we can define the variance
Var[X] := E[(X − E[X])²] = E[X²] − E[X]².
The nonnegative number σ[X] = Var[X]^{1/2} is called the standard deviation of X.

Definition. A statement S about points ω ∈ Ω is a map from Ω to {true, false}. A statement is said to hold almost everywhere, if P[{ω | S(ω) = false}] = 0. For example, the statement "X_n → X almost everywhere" is a short hand notation for the statement that the set {ω ∈ Ω | X_n(ω) → X(ω)} is measurable and has measure 1.
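On a finite probability space, the step-function formula E[X] = ∑_i α_i P[A_i] and the two expressions for the variance can be checked directly; the space and the step function below are arbitrary illustrations:

```python
from fractions import Fraction

# Uniform probability space Omega = {0,...,5}.
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}

def E(X):
    """Expectation of a random variable X given as a function on Omega;
    for a step function this reduces to sum_i alpha_i * P[A_i]."""
    return sum(X(w) * P[w] for w in omega)

X = lambda w: 2 if w < 3 else 5     # step function: 2 on A_1 = {0,1,2}, 5 on A_2 = {3,4,5}
assert E(X) == 2 * Fraction(1, 2) + 5 * Fraction(1, 2)

# Both expressions for the variance agree: E[(X - E[X])^2] = E[X^2] - E[X]^2.
assert E(lambda w: (X(w) - E(X)) ** 2) == E(lambda w: X(w) ** 2) - E(X) ** 2
```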
The names expectation and standard deviation pretty much describe already the meaning of these numbers. The expectation is the "average", "mean" or "expected" value of the variable, and the standard deviation measures how much we can expect the variable to deviate from the mean.

Definition. If X is a random variable, then E[X^m] is called the m'th moment of X. The m'th central moment of X is defined as E[(X − E[X])^m].

Example. The m'th power random variable X(x) = x^m on ([0, 1], B, P) has the expectation
E[X] = ∫_0^1 x^m dx = 1/(m + 1),
the variance
Var[X] = E[X²] − E[X]² = 1/(2m + 1) − 1/(m + 1)² = m²/((1 + m)²(1 + 2m))
and the standard deviation σ[X] = m/((1 + m)√(1 + 2m)). Both the expectation as well as the standard deviation converge to 0 for m → ∞.

Definition. The moment generating function of X is defined as M_X(t) = E[e^{tX}]. The function κ_X(t) = log(M_X(t)) is called the cumulant generating function. The moment generating function often allows a fast simultaneous computation of all the moments.

Example. For X(x) = x on [0, 1] we have both
M_X(t) = ∫_0^1 e^{tx} dx = (e^t − 1)/t = ∑_{m=1}^∞ t^{m−1}/m! = ∑_{m=0}^∞ t^m/(m + 1)!
and
M_X(t) = E[e^{tX}] = E[∑_{m=0}^∞ t^m X^m/m!] = ∑_{m=0}^∞ t^m E[X^m]/m!.
Comparing coefficients shows E[X^m] = 1/(m + 1).

Example. Let Ω = R. For given m ∈ R, σ > 0, define the probability measure P[[a, b]] = ∫_a^b f(x) dx with
f(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)}.
This is a probability measure, because after the change of variables y = (x − m)/(√2 σ), the integral ∫_{−∞}^∞ f(x) dx becomes (1/√π) ∫_{−∞}^∞ e^{−y²} dy = 1. The random variable X(x) = x on (Ω, B, P) is a random variable with Gaussian
distribution with mean m and standard deviation σ. One simply calls it a Gaussian random variable or random variable with normal distribution. Let us justify the constants m and σ: the expectation of X is E[X] = ∫ X dP = ∫_{−∞}^∞ x f(x) dx = m. The variance is E[(X − m)²] = ∫_{−∞}^∞ (x − m)² f(x) dx = σ², so that the constant σ is indeed the standard deviation. The moment generating function of X is M_X(t) = e^{mt+σ²t²/2}. The cumulant generating function is therefore κ_X(t) = mt + σ²t²/2.

Example. If X is a Gaussian random variable with mean m = 0 and standard deviation σ, then the random variable Y = e^X has the mean
E[Y] = E[e^X] = e^{σ²/2}.
Proof:
(1/√(2πσ²)) ∫_{−∞}^∞ e^{y − y²/(2σ²)} dy = e^{σ²/2} (1/√(2πσ²)) ∫_{−∞}^∞ e^{−(y−σ²)²/(2σ²)} dy = e^{σ²/2}.
The random variable Y has the log normal distribution.

Example. A random variable X ∈ L² with standard deviation σ = 0 is a constant random variable. It satisfies X(ω) = m = E[X] for almost all ω ∈ Ω.

Definition. If X ∈ L² is a random variable with mean m and standard deviation σ > 0, then the random variable Y = (X − m)/σ has mean 0 and standard deviation 1. Such a random variable is called normalized. One often only adjusts the mean and calls X − E[X] the centered random variable.

Definition. The Rademacher functions r_n(x) are real-valued functions on [0, 1] defined by
r_n(x) = 1 if 2k/2^n ≤ x < (2k + 1)/2^n, and r_n(x) = −1 if (2k + 1)/2^n ≤ x < (2k + 2)/2^n.
They are random variables on the Lebesgue space ([0, 1], A, P = dx).

Exercise.
a) Show that 1 − 2x = ∑_{n=1}^∞ r_n(x)/2^n. This means that for fixed x, the sequence r_n(x) encodes the binary expansion of 1 − 2x.
b) Verify that r_n(x) = sign(sin(2π 2^{n−1} x)) for almost all x.
c) Show that the random variables r_n(x) on [0, 1] are IID random variables with uniform distribution on {−1, 1}.
d) Each r_n(x) has the mean E[r_n] = 0 and the variance Var[r_n] = 1.
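Parts a) and d) of the Rademacher exercise can be checked numerically; the implementation of r_n via binary digits is our own reading of the definition above, valid almost everywhere:

```python
def r(n, x):
    """Rademacher function: +1 on [2k/2^n, (2k+1)/2^n), -1 otherwise (a.e.)."""
    return 1 if int(x * 2 ** n) % 2 == 0 else -1

# a) For a non-dyadic point, 1 - 2x = sum_n r_n(x)/2^n (checked to 39 terms):
x = 0.3
assert abs(sum(r(n, x) / 2 ** n for n in range(1, 40)) - (1 - 2 * x)) < 1e-9

# d) Mean 0 and variance 1, via a Riemann sum on a grid avoiding the jump points:
N = 10000
vals = [r(3, (i + 0.5) / N) for i in range(N)]
mean = sum(vals) / N
assert abs(mean) < 1e-12
assert abs(sum(v * v for v in vals) / N - mean ** 2 - 1.0) < 1e-12
```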
Figure. The Rademacher functions r_1(x), r_2(x) and r_3(x).

Exercise. Given any 0 − 1 data of length n, let k be the number of ones and p = k/n the mean. Verify that we can compute the variance of the data as p(1 − p). A statistician would prove it as follows:

(1/n) ∑_{i=1}^n (x_i − p)² = (1/n) (k(1 − p)² + (n − k)(0 − p)²) = (k − 2kp + np²)/n = p − 2p² + p² = p − p² = p(1 − p).

Give a shorter proof of this using E[X²] = E[X] and the formulas for Var[X].

2.4 Results from real analysis

In this section we recall some results of real analysis with their proofs. In the measure theory or real analysis literature, it is custom to write ∫ f(x) dµ(x) instead of E[X] and f, g, h, . . . instead of X, Y, Z, . . .. What is special about probability theory is that the measures µ are probability measures and so finite, but this is just a change of vocabulary.

Theorem 2.4.1 (Monotone convergence theorem, Beppo Levi 1906). Let X_n be a sequence of random variables in L¹ with 0 ≤ X_1 ≤ X_2 ≤ . . ., and assume X = lim_{n→∞} X_n converges pointwise. If sup_n E[X_n] < ∞, then X ∈ L¹ and

E[X] = lim_{n→∞} E[X_n].
Proof. Find for each n a monotone sequence of step functions X_{n,m} ∈ S with X_n = sup_m X_{n,m}. Consider the sequence of step functions Y_n := sup_{1≤k≤n} X_{k,n}. It is monotone, since
Y_n = sup_{1≤k≤n} X_{k,n} ≤ sup_{1≤k≤n} X_{k,n+1} ≤ sup_{1≤k≤n+1} X_{k,n+1} = Y_{n+1}.
Since Y_n ≤ sup_{m=1}^n X_m = X_n, also E[Y_n] ≤ E[X_n]. One checks that sup_n Y_n = X implies sup_n E[Y_n] = sup_{Y∈S, Y≤X} E[Y] and concludes
E[X] = sup_{Y∈S, Y≤X} E[Y] = sup_n E[Y_n] ≤ sup_n E[X_n] ≤ E[sup_n X_n] = E[X].
We have used the monotonicity E[X_n] ≤ E[X_{n+1}] in sup_n E[X_n] = lim_n E[X_n].

Theorem 2.4.2 (Fatou lemma, 1906). Let X_n be a sequence of random variables in L¹ with |X_n| ≤ X for some X ∈ L¹. Then

E[lim inf_n X_n] ≤ lim inf_n E[X_n] ≤ lim sup_n E[X_n] ≤ E[lim sup_n X_n].

Proof. Because we can replace X_n by X_n + X, we can assume X_n ≥ 0. For p ≥ n, we have
inf_{m≥n} X_m ≤ X_p ≤ sup_{m≥n} X_m.
Therefore
E[inf_{m≥n} X_m] ≤ E[X_p] ≤ E[sup_{m≥n} X_m].
Because p ≥ n was arbitrary, we have also
E[inf_{m≥n} X_m] ≤ inf_{p≥n} E[X_p] ≤ sup_{p≥n} E[X_p] ≤ E[sup_{m≥n} X_m].
Since Y_n = inf_{m≥n} X_m is increasing with sup_n E[Y_n] < ∞ and Z_n = sup_{m≥n} X_m is decreasing with inf_n E[Z_n] > −∞, we get from the Beppo Levi theorem (2.4.1) that Y = sup_n Y_n = lim inf_n X_n and Z = inf_n Z_n = lim sup_n X_n are in L¹ and

E[lim inf_n X_n] = sup_n E[inf_{m≥n} X_m] ≤ sup_n inf_{m≥n} E[X_m] = lim inf_n E[X_n] ≤ lim sup_n E[X_n] = inf_n sup_{m≥n} E[X_m] ≤ inf_n E[sup_{m≥n} X_m] = E[lim sup_n X_n].
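A concrete instance of the monotone convergence theorem can be checked by hand: X_n(x) = x^{1/n} on [0, 1] increases pointwise to X = 1 (for x > 0), and E[X_n] = n/(n + 1) increases to E[X] = 1. The grid size of the cross-check quadrature is an arbitrary choice:

```python
import math
from fractions import Fraction

# E[X_n] = integral of x^(1/n) over [0,1] = n/(n+1), an increasing sequence with limit 1.
expectations = [Fraction(n, n + 1) for n in range(1, 200)]
assert all(a < b for a, b in zip(expectations, expectations[1:]))
assert 1 - expectations[-1] < Fraction(1, 100)

# Cross-check one of the integrals with a crude midpoint rule:
def integral(f, steps=20000):
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

assert abs(integral(math.sqrt) - 2 / 3) < 1e-4   # E[X_2] = 2/3
```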
Theorem 2.4.3 (Lebesgue's dominated convergence theorem, 1902). Let Xn be a sequence in L^1 with |Xn| ≤ Y for some Y ∈ L^1. If Xn → X almost everywhere, then E[Xn] → E[X].

Proof. Since X = lim inf_n Xn = lim sup_n Xn, we know that X ∈ L^1 and from the Fatou lemma (2.4.2)

    E[X] = E[lim inf_n Xn] ≤ lim inf_n E[Xn] ≤ lim sup_n E[Xn] ≤ E[lim sup_n Xn] = E[X] .

A special case of Lebesgue's dominated convergence theorem is when Y = K is constant. The theorem is then called the bounded convergence theorem. It says that E[Xn] → E[X] if |Xn| ≤ K and Xn → X almost everywhere.

Definition. Define for p ∈ [1, ∞) the vector spaces L^p = {X ∈ L | |X|^p ∈ L^1} and L^∞ = {X ∈ L | ∃K ∈ R, |X| ≤ K almost everywhere}.

Example. For Ω = [0, 1] with the Lebesgue measure P = dx and Borel σ-algebra A, look at the random variable X(x) = x^α, where α is a real number. Because X is bounded for α > 0, we have then X ∈ L^∞. For α < 0, the integral E[|X|^p] = ∫_0^1 x^{αp} dx is finite if and only if |α| p < 1, so that X is in L^p whenever p < 1/|α|.

2.5 Some inequalities

Definition. A function h : R → R is called convex, if there exists for all x0 ∈ R a linear map l(x) = ax + b such that l(x0) = h(x0) and for all x ∈ R the inequality l(x) ≤ h(x) holds.

Example. h(x) = x^2 is convex, h(x) = e^x is convex, h(x) = |x| is convex. h(x) = −x^2 is not convex. h(x) = x^3 is not convex on R but convex on R^+ = [0, ∞).
Figure. The Jensen inequality in the case Ω = {u, v}, P[{u}] = P[{v}] = 1/2 and with X(u) = a, X(v) = b. The function h in this picture is a quadratic function of the form h(x) = (x − s)^2 + t: one has E[X] = (a + b)/2, h(E[X]) = h((a + b)/2) and E[h(X)] = (h(a) + h(b))/2.

Theorem 2.5.1 (Jensen inequality). For any convex function h : R → R and X ∈ L^1, we have

    E[h(X)] ≥ h(E[X]) ,

where the left hand side can also be infinite.

Proof. Let l be the linear map defined at x0 = E[X]. By the linearity and monotonicity of the expectation, we get

    h(E[X]) = l(E[X]) = E[l(X)] ≤ E[h(X)] .

Exercise. Assume X is a nonnegative random variable for which X and 1/X are both in L^1. Show that E[X + 1/X] ≥ 2.

Definition. The vector space L^p has the seminorm ||X||_p = E[|X|^p]^{1/p} for p ∈ [1, ∞), resp. ||X||_∞ = inf{K ∈ R | |X| ≤ K almost everywhere} for p = ∞. We have defined L^p as the set of random variables which satisfy E[|X|^p] < ∞ for p ∈ [1, ∞) and |X| ≤ K almost everywhere for p = ∞.

Example. Given p ≤ q. Define h(x) = x^{q/p}. Jensen's inequality gives

    E[|X|^q] = E[h(|X|^p)] ≥ h(E[|X|^p]) = E[|X|^p]^{q/p} .

This implies that ||X||_q := E[|X|^q]^{1/q} ≥ E[|X|^p]^{1/p} = ||X||_p for p ≤ q and so

    L^∞ ⊂ L^q ⊂ L^p ⊂ L^1 for p ≤ q .

The smallest space is L^∞, which is the space of all bounded random variables.
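Jensen's inequality can be illustrated on a finite sample, where the expectation is a plain average. A small sketch with the convex function h(x) = x² (variable names are ours); for this particular h, the gap E[h(X)] − h(E[X]) is exactly the sample variance:

```python
import random

random.seed(0)
sample = [random.uniform(-1.0, 2.0) for _ in range(10000)]

h = lambda x: x * x                        # a convex function
E_X = sum(sample) / len(sample)
E_hX = sum(h(x) for x in sample) / len(sample)

# Jensen: E[h(X)] >= h(E[X]).  For h(x) = x^2 the difference is Var[X] >= 0.
assert E_hX >= h(E_X)
```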
Definition. Especially, for p = 2, the space L^2 is a real Hilbert space with inner product <X, Y> = E[XY]. The random variables X, Y are defined over a probability space (Ω, A, P). The finiteness of the inner product follows from the following inequality:

Remark. One can construct from L^p a real Banach space L^p = L^p / N which is the quotient of L^p with N = {X ∈ L^p | ||X||_p = 0}. Without this identification, one only has a pre-Banach space in which the property that only the zero element has norm zero is not necessarily true.

Example. The function f(x) = 1_Q(x), which assigns the value 1 to rational numbers x on [0, 1] and the value 0 to irrational numbers, is different from the constant function g(x) = 0 in L^p. But in L^p, we have f = g.

Theorem 2.5.2 (Hölder inequality, Hölder 1889). Given p, q ∈ [1, ∞] with p^{-1} + q^{-1} = 1 and X ∈ L^p and Y ∈ L^q. Then XY ∈ L^1 and

    ||XY||_1 ≤ ||X||_p ||Y||_q .

Proof. Without loss of generality we can restrict us to X, Y ≥ 0, because replacing X with |X| and Y with |Y| does not change anything. We can also assume ||X||_p > 0 because otherwise X = 0, where both sides are zero. We can write therefore X instead of |X| and assume X is not zero. The key idea of the proof is to introduce a new probability measure

    Q = X^p P / E[X^p] .

If P[A] = ∫_A 1 dP(x), then Q[A] = [∫_A X^p(x) dP(x)] / E[X^p], so that Q[Ω] = E[X^p]/E[X^p] = 1 and Q is a probability measure. Let us denote the expectation with respect to this new measure with E_Q. We define the new random variable U = 1_{X>0} Y / X^{p-1}. We will use that p^{-1} + q^{-1} = 1 is equivalent to q + p = pq or q(p − 1) = p. Jensen's inequality applied to the convex function h(x) = x^q gives

    E_Q[U]^q ≤ E_Q[U^q] .

Using

    E_Q[U] = E_Q[Y / X^{p-1}] = E[XY] / E[X^p]

and

    E_Q[U^q] = E_Q[Y^q / X^{q(p-1)}] = E_Q[Y^q / X^p] = E[Y^q] / E[X^p] ,

this can be rewritten as

    E[XY]^q / E[X^p]^q ≤ E[Y^q] / E[X^p]     (2.4)
which implies

    E[XY] ≤ E[Y^q]^{1/q} E[X^p]^{1−1/q} = E[Y^q]^{1/q} E[X^p]^{1/p} .

The last equation rewrites the claim ||XY||_1 ≤ ||X||_p ||Y||_q in different notation.

A special case of Hölder's inequality is the Cauchy-Schwarz inequality

    ||XY||_1 ≤ ||X||_2 · ||Y||_2 .

The seminorm property of ||·||_p follows from the following fact:

Theorem 2.5.3 (Minkowski inequality (1896)). Given p ∈ [1, ∞] and X, Y ∈ L^p. Then

    ||X + Y||_p ≤ ||X||_p + ||Y||_p .

Proof. We use Hölder's inequality from below to get

    E[|X + Y|^p] ≤ E[|X| |X + Y|^{p-1}] + E[|Y| |X + Y|^{p-1}] ≤ ||X||_p C + ||Y||_p C ,

where C = || |X + Y|^{p-1} ||_q = E[|X + Y|^p]^{1/q}, which leads to the claim.

Definition. We use the shorthand notation P[X ≥ c] for P[{ω ∈ Ω | X(ω) ≥ c}].

Theorem 2.5.4 (Chebychev-Markov inequality). Let h be a monotone function on R with h ≥ 0. For every c > 0 and h(X) ∈ L^1 we have

    h(c) · P[X ≥ c] ≤ E[h(X)] .

Proof. Integrate the inequality h(c) 1_{X≥c} ≤ h(X) and use the monotonicity and linearity of the expectation.
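The Chebychev-Markov inequality also holds exactly for the empirical measure of any finite sample, since the pointwise bound h(c) 1_{X≥c} ≤ h(X) is averaged term by term. A quick sketch (our own names, with h(x) = x² on a nonnegative sample):

```python
import random

random.seed(1)
sample = [random.expovariate(1.0) for _ in range(100000)]

h = lambda x: x * x                        # monotone and >= 0 on [0, oo)
c = 2.0
lhs = h(c) * sum(1 for x in sample if x >= c) / len(sample)
rhs = sum(h(x) for x in sample) / len(sample)

# h(c) * P[X >= c] <= E[h(X)] holds exactly for the sample averages.
assert lhs <= rhs
```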
Figure. The proof of the Chebychev-Markov inequality in the case h(x) = x. The left hand side h(c) · P[X ≥ c] is the area of the rectangle {X ≥ c} × [0, h(c)], and E[h(X)] = E[X] is the area under the graph of X.

Example. h(x) = |x| leads to P[|X| ≥ c] ≤ ||X||_1 / c, which implies for example the statement E[|X|] = 0 ⇒ P[X = 0] = 1.

An important special case of the Chebychev-Markov inequality is the Chebychev inequality:

Theorem 2.5.5 (Chebychev inequality). If X ∈ L^2, then

    P[|X − E[X]| ≥ c] ≤ Var[X] / c^2 .

Proof. Take h(x) = x^2 and apply the Chebychev-Markov inequality to the random variable Y = X − E[X] ∈ L^2 satisfying h(Y) ∈ L^1.

Exercise. Prove the Chernoff bound P[X ≥ c] ≤ inf_{t≥0} e^{−tc} M_X(t), where M_X(t) = E[e^{Xt}] is the moment generating function of X.

Definition. For X, Y ∈ L^2 define the covariance

    Cov[X, Y] := E[(X − E[X])(Y − E[Y])] = E[XY] − E[X] E[Y] .

Example. We have Cov[X, X] = Var[X] = E[(X − E[X])^2] for a random variable X ∈ L^2.

Definition. Two random variables in L^2 are called uncorrelated if Cov[X, Y] = 0.

Remark. The Cauchy-Schwarz inequality can be restated in the form

    |Cov[X, Y]| ≤ σ[X] σ[Y] .

Definition. The regression line of two random variables X, Y is defined as y = ax + b, where

    a = Cov[X, Y] / Var[X] ,   b = E[Y] − a E[X] .

If Ω = {1, . . . , n} is a finite set, then the random variables X, Y define the vectors X = (X(1), . . . , X(n)), Y = (Y(1), . . . , Y(n)) or n data points (X(i), Y(i)) in the plane. As will follow from the proposition below, the regression line has the property that it minimizes the sum of the squares of the distances from these points to the line.

Figure. Regression line computed from a finite set of data points (X(i), Y(i)).

Proposition 2.5.6. If X, Y ∈ L^2, then the random variable Ỹ = aX + b minimizes Var[Y − Ỹ] under the constraint E[Y] = E[Ỹ] and is the best guess for Y, when knowing only E[Y] and Cov[X, Y]. We check Cov[X, Y] = Cov[X, Ỹ].

Example. If X = Y, then a = 1 and b = 0: the best guess for Y is X itself.

Example. If X, Y are independent, then a = 0. It follows that b = E[Y].

Proof. To minimize Var[aX + b − Y] under the constraint E[aX + b − Y] = 0 is equivalent to finding (a, b) which minimizes f(a, b) = E[(aX + b − Y)^2] under the constraint g(a, b) = E[aX + b − Y] = 0. This least square solution can be obtained with the Lagrange multiplier method, or by solving b = E[Y] − a E[X] and minimizing

    h(a) = E[(aX − Y − E[aX − Y])^2] = a^2 Var[X] − 2a Cov[X, Y] + Var[Y] .

Setting h′(a) = 0 gives a = Cov[X, Y] / Var[X].

Definition. If the standard deviations σ[X], σ[Y] are both different from zero, then one can define the correlation coefficient

    Corr[X, Y] = Cov[X, Y] / (σ[X] σ[Y]) ,

which is a number in [−1, 1].

Remark. If Ω is a finite set, then the covariance Cov[X, Y] is the dot product between the centered random variables X − E[X] and Y − E[Y], and σ[X] is the length of the vector X − E[X]. The correlation coefficient Corr[X, Y] is the cosine of the angle α between X − E[X] and Y − E[Y], because the dot product satisfies v · w = |v| |w| cos(α). So, uncorrelated random variables X, Y have the property that X − E[X] is perpendicular to Y − E[Y]. This geometric interpretation explains why the next theorem (2.5.7) is called the Pythagoras theorem. The other extreme is |Corr[X, Y]| = 1: if the standard deviations are nonzero and Corr[X, Y] = 1, then Y = aX + b by the Cauchy-Schwarz inequality.

Theorem 2.5.7 (Pythagoras). If two random variables X, Y ∈ L^2 are independent, then they are uncorrelated. If X and Y are uncorrelated, then Var[X + Y] = Var[X] + Var[Y].

Proof. We can find monotone sequences of step functions

    Xn = Σ_{i=1}^n α_i 1_{A_i} → X ,   Yn = Σ_{j=1}^n β_j 1_{B_j} → Y .

We can choose these functions in such a way that A_i ∈ A = σ(X) and B_j ∈ B = σ(Y). By the Lebesgue dominated convergence theorem (2.4.3), E[Xn] → E[X] and E[Yn] → E[Y]. Compute

    Xn · Yn = Σ_{i,j=1}^n α_i β_j 1_{A_i ∩ B_j} .

By the Lebesgue dominated convergence theorem (2.4.3) again, E[Xn Yn] → E[XY]. By the independence of X, Y we have E[Xn Yn] = E[Xn] · E[Yn] and so E[XY] = E[X] E[Y], which implies

    Cov[X, Y] = E[XY] − E[X] · E[Y] = 0 .

The second statement follows from Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y].
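For a finite data set, the regression-line formulas a = Cov[X, Y]/Var[X] and b = E[Y] − a E[X] can be computed directly. A minimal sketch (our own helper function):

```python
# Regression line y = a x + b from a = Cov[X,Y]/Var[X], b = E[Y] - a E[X],
# computed for a finite data set of points (X(i), Y(i)).
def regression_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    a = cov / var
    return a, my - a * mx

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                  # data lying exactly on y = 2x + 1
a, b = regression_line(xs, ys)
assert abs(a - 2.0) < 1e-12 and abs(b - 1.0) < 1e-12
```

For data lying exactly on a line, the formulas recover that line, consistent with Proposition 2.5.6.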
Remark. The statement Var[X − Y] = Var[X] + Var[Y] − 2 Cov[X, Y] is the law of cosines c^2 = a^2 + b^2 − 2ab cos(α) in disguise, if a, b, c are the lengths of the sides of the triangle with vertices 0, X − E[X], Y − E[Y].

Remark. For more inequalities in analysis, see the classic [30, 60].

We end this section with a list of properties of variance and covariance:

    Var[X] ≥ 0 ,
    Var[X] = E[X^2] − E[X]^2 ,
    Var[λX] = λ^2 Var[X] ,
    Cov[X, Y] = E[XY] − E[X] E[Y] ,
    |Cov[X, Y]| ≤ σ[X] σ[Y] ,
    Corr[X, Y] ∈ [−1, 1] ,
    Corr[X, Y] = 1 if X − E[X] = Y − E[Y] ,
    Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] .

Exercise. Show that if two random variables X, Y ∈ L^2 have nonzero variance and satisfy Corr(X, Y) = 1, then Y = aX + b for some real numbers a, b.

2.6 The weak law of large numbers

Consider a sequence X1, X2, ... of random variables on a probability space (Ω, A, P). We are interested in the asymptotic behavior of the sums

    Sn = X1 + X2 + ··· + Xn

for n → ∞ and especially in the convergence of the averages Sn/n. The limiting behavior is described by "laws of large numbers". Depending on the definition of convergence, one speaks of "weak" and "strong" laws of large numbers. We first prove the weak law of large numbers. There exist different versions of this theorem since more assumptions on Xn can allow stronger statements.

Definition. A sequence of random variables Yn converges in probability to a random variable Y, if for all ε > 0,

    lim_{n→∞} P[|Yn − Y| ≥ ε] = 0 .

One calls convergence in probability also stochastic convergence.

Remark. If ||Xn − X||_p → 0 for some p ∈ [1, ∞), then Xn → X in probability, since by the Chebychev-Markov inequality (2.5.4),

    P[|Xn − X| ≥ ε] ≤ ||X − Xn||_p^p / ε^p .
Theorem 2.6.1 (Weak law of large numbers for uncorrelated random variables). Assume Xi ∈ L^2 are pairwise uncorrelated, have common expectation E[Xi] = m and satisfy sup_n (1/n) Σ_{i=1}^n Var[Xi] < ∞. Then Sn/n → m in probability.

Proof. Since Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] and the Xn are pairwise uncorrelated, we get Var[Xn + Xm] = Var[Xn] + Var[Xm] and by induction Var[Sn] = Σ_{i=1}^n Var[Xi]. Using linearity, we obtain E[Sn/n] = m and

    Var[Sn/n] = E[Sn^2/n^2] − E[Sn]^2/n^2 = Var[Sn]/n^2 = (1/n^2) Σ_{i=1}^n Var[Xi] .

The right hand side converges to zero for n → ∞. With Chebychev's inequality (2.5.5), we obtain

    P[|Sn/n − m| ≥ ε] ≤ Var[Sn/n] / ε^2 .

As an application in analysis, this leads to a constructive proof of a theorem of Weierstrass which states that polynomials are dense in the space C[0, 1] of all continuous functions on the interval [0, 1]. Unlike the abstract Weierstrass theorem, the construction with specific polynomials is constructive and gives explicit formulas.

Figure. Approximation of a function f(x) by the Bernstein polynomials B2, B5, B10, B20, B30.
Theorem 2.6.2 (Weierstrass theorem). For every f ∈ C[0, 1], the Bernstein polynomials

    Bn(x) = Σ_{k=0}^n f(k/n) (n choose k) x^k (1 − x)^{n−k}

converge uniformly to f. If f(x) ≥ 0, then also Bn(x) ≥ 0.

Proof. For x ∈ [0, 1], let Xn be a sequence of independent {0, 1}-valued random variables with mean value x. In other words, we take the probability space ({0, 1}^N, A, P) defined by P[ωn = 1] = x. Since P[Sn = k] = (n choose k) x^k (1 − x)^{n−k}, we can write Bn(x) = E[f(Sn/n)]. We estimate with ||f|| = max_{0≤x≤1} |f(x)|:

    |Bn(x) − f(x)| = |E[f(Sn/n)] − f(x)| ≤ E[|f(Sn/n) − f(x)|]
    ≤ 2||f|| · P[|Sn/n − x| ≥ δ] + sup_{|x−y|≤δ} |f(x) − f(y)| · P[|Sn/n − x| < δ]
    ≤ 2||f|| · P[|Sn/n − x| ≥ δ] + sup_{|x−y|≤δ} |f(x) − f(y)| .

The second term in the last line is called the continuity module of f. It converges to zero for δ → 0. By the Chebychev inequality (2.5.5) and the proof of the weak law of large numbers, the first term can be estimated from above by

    2||f|| Var[Xi] / (n δ^2) ,

a bound which goes to zero for n → ∞ because the variance satisfies Var[Xi] = x(1 − x) ≤ 1/4.

In the first version of the weak law of large numbers, theorem (2.6.1), we only assumed the random variables to be uncorrelated. Under the stronger condition of independence and a stronger condition on the moments (X^4 ∈ L^1), the convergence can be accelerated:

Theorem 2.6.3 (Weak law of large numbers for independent L^4 random variables). Assume Xi ∈ L^4 are independent, have common expectation E[Xi] = m and satisfy M = sup_n ||Xn||_4^4 < ∞. Then Sn/n → m in probability. Even Σ_{n=1}^∞ P[|Sn/n − m| ≥ ε] converges for all ε > 0.
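The Bernstein polynomial from the Weierstrass theorem is easy to evaluate directly. A short sketch (our own function names), checking that the sup-error over a grid shrinks as n grows:

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate the Bernstein polynomial B_n(x) of f at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)                 # continuous but not smooth
grid = [i / 50 for i in range(51)]
err = lambda n: max(abs(bernstein(f, n, x) - f(x)) for x in grid)

# Uniform convergence: the error for large n is smaller than for small n.
assert err(80) < err(10)
```

Since Bn(x) = E[f(Sn/n)] for a binomial Sn, the identity function is reproduced exactly: Bn(x) = x when f(x) = x.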
Proof. We can assume without loss of generality that m = 0. Expanding the fourth power gives

    E[Sn^4] = Σ_{i1,i2,i3,i4=1}^n E[X_{i1} X_{i2} X_{i3} X_{i4}] .

Because the Xi are independent, a summand E[X_{i1} X_{i2} X_{i3} X_{i4}] is zero if an index i = i_k occurs alone; it is E[Xi^4] if all indices are the same and E[Xi^2] E[Xj^2] if there are two pairwise equal indices. Since by Jensen's inequality E[Xi^2]^2 ≤ E[Xi^4] ≤ M, we get

    E[Sn^4] ≤ n M + 3 n(n − 1) M .

Use now the Chebychev-Markov inequality (2.5.4) with h(x) = x^4 to get

    P[|Sn/n| ≥ ε] ≤ E[(Sn/n)^4] / ε^4 ≤ M (3n^2 − 2n) / (ε^4 n^4) ≤ 3M / (ε^4 n^2) .

We can weaken the moment assumption in order to deal with L^1 random variables. An other assumption needs to become stronger:

Definition. A family {Xi}_{i∈I} of random variables is called uniformly integrable, if sup_{i∈I} E[|Xi| 1_{|Xi|≥R}] → 0 for R → ∞. A convenient notation which we will use again in the future is E[1_A X] = E[X; A] for X ∈ L^1 and A ∈ A. Uniform integrability can then be written as sup_{i∈I} E[|Xi|; |Xi| ≥ R] → 0.

Theorem 2.6.4 (Weak law for uniformly integrable, independent L^1 random variables). Assume Xi ∈ L^1 are independent and uniformly integrable. Then (1/n) Σ_{m=1}^n (Xm − E[Xm]) → 0 in L^1 and therefore in probability.

Proof. Without loss of generality, we can assume that E[Xn] = 0 for all n ∈ N, because otherwise Xn can be replaced by Yn = Xn − E[Xn]. Define f_R(t) = t · 1_{[−R,R]}(t), the random variables

    Xn^(R) = f_R(Xn) − E[f_R(Xn)] ,   Yn^(R) = Xn − Xn^(R)

as well as the random variables

    Sn^(R) = (1/n) Σ_{i=1}^n Xi^(R) ,   Tn^(R) = (1/n) Σ_{i=1}^n Yi^(R) .
We estimate, using the Minkowski and Cauchy-Schwarz inequalities,

    ||Sn||_1 ≤ ||Sn^(R)||_1 + ||Tn^(R)||_1 ≤ ||Sn^(R)||_2 + 2 sup_{1≤l≤n} E[|Xl|; |Xl| ≥ R]
             ≤ R/√n + 2 sup_{l∈N} E[|Xl|; |Xl| ≥ R] .

In the last step we have used the independence of the random variables and E[Xn^(R)] = 0 to get

    ||Sn^(R)||_2^2 = E[(Sn^(R))^2] = (1/n^2) Σ_{i=1}^n E[(Xi^(R))^2] ≤ R^2/n .

The claim follows from the uniform integrability assumption sup_{l∈N} E[|Xl|; |Xl| ≥ R] → 0 for R → ∞.

A special case of the weak law of large numbers is the situation where all the random variables are IID:

Theorem 2.6.5 (Weak law of large numbers for IID L^1 random variables). Assume Xi ∈ L^1 are IID random variables with mean m. Then Sn/n → m in L^1 and so in probability.

Proof. We show that a set of IID L^1 random variables is uniformly integrable: given X ∈ L^1, we have K · P[|X| > K] ≤ ||X||_1, so that P[|X| > K] → 0 for K → ∞. Because the random variables Xi are identically distributed, the probabilities P[|Xi| ≥ R] = E[1_{|Xi|≥R}] are independent of i. Consequently any set of IID random variables in L^1 is also uniformly integrable. We can now use theorem (2.6.4).

Example. The random variable X(x) = x^2 on [0, 1] has the expectation m = E[X] = ∫_0^1 x^2 dx = 1/3. For every n, we can form the sum Sn/n = (x1^2 + x2^2 + ··· + xn^2)/n. The weak law of large numbers tells us that P[|Sn/n − 1/3| ≥ ε] → 0 for n → ∞. Geometrically, this means that for every ε > 0, the volume of the set of points in the n-dimensional cube for which the distance r(x1, ..., xn) = √(x1^2 + ··· + xn^2) to the origin satisfies √(n(1/3 − ε)) ≤ r ≤ √(n(1/3 + ε)) converges to 1 for n → ∞. In colloquial language, one could rephrase this as: asymptotically, as the number n of dimensions goes to infinity, most of the weight of an n-dimensional cube is concentrated near a shell of radius 1/√3 ∼ 0.577 times the length √n of the longest diagonal in the cube.
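This concentration effect is easy to observe numerically: for one random point of a high-dimensional unit cube, the mean of the squared coordinates is already very close to E[x²] = ∫₀¹ x² dx = 1/3. A small simulation sketch (dimension and seed are arbitrary choices of ours):

```python
import random

random.seed(2)
n = 10000                                   # dimension of the cube
# One uniform random point in [0,1]^n; Sn/n is the mean squared coordinate.
point = [random.random() for _ in range(n)]
mean_sq = sum(x * x for x in point) / n     # concentrates near 1/3
r_over_diag = mean_sq ** 0.5                # = r / sqrt(n), near 1/sqrt(3)

assert abs(mean_sq - 1 / 3) < 0.02
assert abs(r_over_diag - 3 ** -0.5) < 0.02
```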
Exercise. a) Given a sequence pn ∈ [0, 1] and a sequence Xn of independent random variables taking values in {−1, 1} such that P[Xn = 1] = pn and P[Xn = −1] = 1 − pn. Show that

    (1/n) Σ_{k=1}^n (Xk − mk) → 0

in probability, where mk = 2pk − 1.
b) We assume the same set up as in a), but this time the sequence pn depends on a parameter: pn = (1 + cos[θ + nα])/2, where θ is a parameter. Prove that (1/n) Σ_{k=1}^n Xk → 0 in L^1 for almost all θ. You can take for granted the fact that (1/n) Σ_{k=1}^n pk → 1/2 for almost all real parameters θ ∈ [0, 2π].

Exercise. Find an example of two random variables X, Y ∈ L^1 for which XY ∉ L^1.

Exercise. Show that if X, Y ∈ L^1 are independent random variables, then XY ∈ L^1.

Exercise. Give an example of a sequence of random variables Xn which converges almost everywhere, but not completely.

Exercise. Given a sequence of random variables Xn. Show that Xn converges to X in probability if and only if

    E[ |Xn − X| / (1 + |Xn − X|) ] → 0

for n → ∞.

Exercise. Prove that if Xn → X in L^1, then there exists a subsequence Yn = X_{n_k} satisfying Yn → X almost everywhere.

Exercise. Use the weak law of large numbers to verify that the volume of an n-dimensional ball of radius 1 satisfies Vn → 0 for n → ∞. Estimate how fast the volume goes to 0. (See example (2.6).)
2.7 The probability distribution function

Definition. The law of a random variable X is the probability measure µ on R defined by µ(B) = P[X^{-1}(B)] for all B in the Borel σ-algebra of R. The measure µ is also called the push-forward measure under the measurable map X : Ω → R.

Definition. The distribution function of a random variable X is defined as

    F_X(s) = µ((−∞, s]) = P[X ≤ s] .

Remark. The distribution function is sometimes also called cumulative density function (CDF), but we do not use this name here in order not to confuse it with the probability density function (PDF) f_X(s) = F_X′(s) for continuous random variables.

Definition. A set of random variables is called identically distributed, if each random variable in the set has the same distribution function. It is called independent and identically distributed if the random variables are independent and identically distributed. A common abbreviation for independent identically distributed random variables is IID.

Example. Let Ω = [0, 1] be the unit interval with the Lebesgue measure µ and let m be an integer. Define the random variable X(x) = x^m. The distribution function of X is F_X(s) = s^{1/m} on [0, 1], with F_X(s) = 0 for s < 0 and F_X(s) = 1 for s ≥ 1. The random variable is continuous in the sense that it has a probability density function f_X(s) = F_X′(s) = s^{1/m−1}/m, so that F_X(s) = ∫_{−∞}^s f_X(t) dt. One calls its distribution a power distribution. It is in L^1 and has the expectation E[X] = 1/(m + 1).

The distribution function F is very useful. For example, if X is a continuous random variable with distribution function F, then Y = F(X) has the uniform distribution on [0, 1]. We can reverse this: if we want to produce random variables with a distribution function F, just take a random variable Y with uniform distribution on [0, 1] and define X = F^{-1}(Y). This random variable has the distribution function F because

    {X ∈ [a, b]} = {F^{-1}(Y) ∈ [a, b]} = {Y ∈ F([a, b])} = {Y ∈ [F(a), F(b)]} ,

an event which has probability F(b) − F(a). We see that we need only to have a random number generator which produces uniformly distributed random variables in [0, 1] to produce random variables with a given continuous distribution.
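The inverse-transform construction X = F⁻¹(Y) can be tried out for the power distribution F(s) = s^{1/m}, where F⁻¹(y) = y^m. A sketch comparing the empirical distribution of such samples with F (seed and sample size are arbitrary):

```python
import random

random.seed(3)
m = 2
# Inverse transform sampling: F(s) = s^(1/m) on [0,1], so F^{-1}(y) = y^m.
sample = [random.random() ** m for _ in range(100000)]

s = 0.4
empirical = sum(1 for x in sample if x <= s) / len(sample)
# The empirical fraction P[X <= 0.4] should approximate F(0.4) = sqrt(0.4).
assert abs(empirical - s ** (1 / m)) < 0.01
```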
Figure. The distribution function F_X(s) of X(x) = x^m in the case m = 2.

Figure. The density function f_X(s) of X(x) = x^m in the case m = 2.

Given two IID random variables X, Y with the m'th power distribution as above, we can look at the random variables V = X + Y, W = X − Y. One can realize V and W on the unit square Ω = [0, 1] × [0, 1] by V(x, y) = x^m + y^m and W(x, y) = x^m − y^m. The distribution functions F_V(s) = P[V ≤ s] and F_W(s) = P[W ≤ s] are the areas of the sets A(s) = {(x, y) | x^m + y^m ≤ s} and B(s) = {(x, y) | x^m − y^m ≤ s}. From the area interpretation, we see in this case

    F_V(s) = ∫_0^{s^{1/m}} (s − x^m)^{1/m} dx ,                          s ∈ [0, 1] ,
    F_V(s) = 1 − ∫_{(s−1)^{1/m}}^1 (1 − (s − x^m)^{1/m}) dx ,           s ∈ [1, 2] ,

Figure. F_V(s) is the area of the set A(s), shown here in the case m = 4.

Figure. F_W(s) is the area of the set B(s), shown here in the case m = 4.

We will later see how to compute the distribution function of a sum of independent random variables algebraically from the probability distribution function F_X.
and

    F_W(s) = ∫_0^{(s+1)^{1/m}} (1 − (x^m − s)^{1/m}) dx ,               s ∈ [−1, 0] ,
    F_W(s) = s^{1/m} + ∫_{s^{1/m}}^1 (1 − (x^m − s)^{1/m}) dx ,         s ∈ [0, 1] .

Figure. The function F_V(s) with density (dashed) f_V(s) of the sum of two power distributed random variables with m = 2.

Figure. The function F_W(s) with density (dashed) f_W(s) of the difference of two power distributed random variables with m = 2.

Exercise. a) Verify that for θ > 0 the Rayleigh distribution

    f(x) = 2θ x e^{−θ x^2}

is a probability distribution on R^+ = [0, ∞). This distribution can model the speed distribution √(X^2 + Y^2) of a two dimensional wind velocity (X, Y), where both X, Y are normal random variables.

Exercise. a) Verify that for θ > 0 the Maxwell distribution

    f(x) = (4/√π) θ^{3/2} x^2 e^{−θ x^2}

is a probability distribution on R^+ = [0, ∞). This distribution can model the speed distribution of molecules in thermal equilibrium.

2.8 Convergence of random variables

In order to formulate the strong law of large numbers, we need some other notions of convergence.
Definition. A sequence of random variables Xn converges in probability to a random variable X, if

    P[|Xn − X| ≥ ε] → 0

for all ε > 0.

Definition. A sequence of random variables Xn converges almost everywhere or almost surely to a random variable X, if P[Xn → X] = 1.

Definition. A sequence of L^p random variables Xn converges in L^p to a random variable X, if ||Xn − X||_p → 0 for n → ∞.

Definition. A sequence of random variables Xn converges fast in probability, or completely, if

    Σ_n P[|Xn − X| ≥ ε] < ∞

for all ε > 0.

We have so four notions of convergence of random variables Xn → X, if the random variables are defined on the same probability space (Ω, A, P). We will later see the two equivalent but weaker notions of convergence in distribution and weak convergence, which not necessarily assume Xn and X to be defined on the same probability space. Let us nevertheless add these two definitions also here. We will see later, in theorem (2.13.2), that the following definitions are equivalent:

Definition. A sequence of random variables Xn converges in distribution, if F_{Xn}(s) → F_X(s) for all points s where F_X is continuous.

Definition. A sequence of random variables Xn converges in law to a random variable X, if the laws µn of Xn converge weakly to the law µ of X. Here µn converges weakly to µ if for every continuous function f on R of compact support, one has

    ∫ f(x) dµn(x) → ∫ f(x) dµ(x) .

Example. Let Ωn = {1, 2, ..., n} with the uniform distribution P[{k}] = 1/n and let Xn be the random variable Xn(x) = x/n. Let X(x) = x on the probability space [0, 1] with probability P[[a, b)] = b − a. The random variables Xn and X are defined on different probability spaces, but Xn converges to X in distribution for n → ∞.
Proposition 2.8.1. The following implications hold between the different convergence types:

    4) complete: Σ_n P[|Xn − X| ≥ ε] < ∞, ∀ε > 0
       ⇒ 2) almost everywhere: P[Xn → X] = 1
       ⇒ 1) in probability: P[|Xn − X| ≥ ε] → 0, ∀ε > 0
       ⇒ 0) in distribution = in law: F_{Xn}(s) → F_X(s) at every s where F_X is continuous,

    3) in L^p: ||Xn − X||_p → 0  ⇒  1) in probability.

Proof. 4) ⇒ 2): The first Borel-Cantelli lemma implies that for all ε > 0,

    P[|Xn − X| ≥ ε, infinitely often] = 0 .

2) ⇒ 1): Since

    {Xn → X} = ∩_k ∪_m ∩_{n≥m} {|Xn − X| ≤ 1/k} ,

"almost everywhere convergence" is equivalent to

    1 = P[∪_m ∩_{n≥m} {|Xn − X| ≤ 1/k}] = lim_{m→∞} P[∩_{n≥m} {|Xn − X| ≤ 1/k}]

for all k, and so

    0 = lim_{m→∞} P[∪_{n≥m} {|Xn − X| ≥ 1/k}]

for all k. Therefore

    P[|Xm − X| ≥ ε] ≤ P[∪_{n≥m} {|Xn − X| ≥ ε}] → 0

for all ε > 0.
Proof.2 . we get a sequence Xn satisfying lim inf Xn (ω) = 0. . infinitely often] = 0 n n from which we obtain P[Xn → X] = 1.2−n ] on the probability space ([0. . Given a sequence Xn ∈ L∞ with Xn ∞ ≤ K for all n.3 . 1].k = 1[k2−n . . to get P[Xn − X ≥ ǫ] ≤ E[Xn − Xp ] . Define the random variables Xn. Here is an example of convergence in probability but not almost everywhere convergence.2.1 . . With more assumptions other implications can hold. lim sup Xn (ω) = 1 n→∞ n→∞ but P[Xn. 1]. A. X2 = X2. . P[X > K + 1 1 ] ≤ P[X − Xn  > ] → 0. . Proposition 2. 1]. 3) ⇒ 1): Use the ChebychevMarkov inequality (2. then Xn → X in probability if and only if Xn → X in L1 .66 Chapter 2. X3 = X2.1 . 2. k = 0. We give two examples. . Example. P) converge almost everywhere to the constant random variable X = 0 but not in Lp because Xn p = 2n(p−1)/p . By lexicographical ordering X1 = X1. infinitely often] ≤ P[Xn −X ≥ ǫk .8. X4 = X2. A.k ≥ ǫ] ≤ 2−n . . Limit theorems We get so for ǫn → 0 [ X P[ Xn −X ≥ ǫk . .4). P) be the Lebesgue measure space. . Therefore P[X > K] = P[ [ 1 {X > K + }] = 0 .(k+1)2−n ] . Let ([0. . where A is the Borel σalgebra on [0. (i) P[X ≤ K) = 1. And here is an example of almost everywhere but not Lp convergence: the random variables Xn = 2n 1[0. For k ∈ N. n = 1. n → ∞ k k so that P[X > K + k1 ] = 0.5. ǫp Example. 2n − 1 . k k . Proof.
. using (i) and the notation E[X. b) Xn converges in L1 to X. A] < ǫ. Lemma 2. X > R] = 0 X∈C R→∞ for all X ∈ C. If X ∈ L1 . we can choose K such that P[X > K] < δ. Convergence of random variables (ii) Given ǫ > 0. A] = E[X · 1A ] E[Xn − X] = E[(Xn − X. Proof.67 2. then E[X. For any random variable X and K ≥ 0 define the bounded variable X (K) = X · 1{−K≤X≤K} + K · 1{X>K} − K · 1{X<−K} . Assume we are given ǫ > 0. The following is equivalent: a) Xn converges in probability to X and {Xn }n∈N is uniformly integrable.8. Xn − X ≤ ] 3 3 ǫ ǫ ]+ ≤ǫ. there exists K ≥ 0 with E[X. Therefore E[X. Given a sequence random variables Xn ∈ L1 and X ∈ L1 .8. X > K] < ǫ.4. 3 3K Then. Choose m such that for all n > m P[Xn − X > ǫ ǫ ]< . Then. a) ⇒ b).3. we can find δ > 0 such that if P[A] < δ. if lim sup E[X1X>R] = E[X. Given X ∈ L1 and ǫ > 0. Recall that a family C ⊂ L1 of random variables is called uniformly integrable. 3 3 Definition. Since KP[X > K] ≤ E[X].8. The next proposition gives a necessary and sufficient condition for L1 convergence. Xn − X > ≤ 2KP[Xn − X > ǫ ǫ ] + E[(Xn − X. X > K] < ǫ. Proof. The next lemma was already been used in the proof of the weak law of large numbers for IID random variables. Proposition 2.
    E[|Xn^(K) − Xn|] < ε/3 ,   E[|X^(K) − X|] < ε/3 .

Since |Xn^(K) − X^(K)| ≤ |Xn − X|, we have Xn^(K) → X^(K) in probability. By the last proposition (2.8.2), we know Xn^(K) → X^(K) in L^1, so that there is m with E[|Xn^(K) − X^(K)|] ≤ ε/3 for n > m. Therefore, for n > m,

    E[|Xn − X|] ≤ E[|Xn − Xn^(K)|] + E[|Xn^(K) − X^(K)|] + E[|X^(K) − X|] ≤ ε .

b) ⇒ a). We have seen already that Xn → X in probability if ||Xn − X||_1 → 0. We have to show that Xn → X in L^1 implies that {Xn} is uniformly integrable. Given ε > 0, there exists m such that E[|Xn − X|] < ε/2 for n > m. By the absolute continuity property, we can choose δ > 0 such that P[A] < δ implies E[|Xn|; A] < ε for 1 ≤ n ≤ m as well as E[|X|; A] < ε/2. Because the sequence Xn is bounded in L^1, we can choose K such that K^{−1} sup_n E[|Xn|] < δ, which implies P[|Xn| > K] < δ. For n > m, we have therefore

    E[|Xn|; |Xn| > K] ≤ E[|X|; |Xn| > K] + E[|X − Xn|] < ε .

Exercise. Show:
a) P[sup_{k≥n} |Xk − X| > ε] → 0 for n → ∞ and all ε > 0 if and only if Xn → X almost everywhere.
b) A sequence Xn converges almost surely if and only if

    lim_{n→∞} P[sup_{k≥1} |X_{n+k} − Xn| > ε] = 0

for all ε > 0.

2.9 The strong law of large numbers

The weak law of large numbers makes a statement about the stochastic convergence of the sums

    Sn/n = (X1 + X2 + ··· + Xn)/n

of random variables Xn. The strong laws of large numbers make analog statements about almost everywhere convergence.
The first version of the strong law does not assume the random variables to
have the same distribution. They are assumed to have the same expectation
and have to be bounded in L4 .
Theorem 2.9.1 (Strong law for independent L4 random variables). Assume
Xn are independent random variables in L4 with common expectation
E[Xn] = m and for which M = sup_n ||Xn||_4^4 < ∞. Then Sn/n → m almost
everywhere.
Proof. In the proof of theorem (2.6.3), we derived
    P[|Sn/n − m| ≥ ε] ≤ 3M / (ε^4 n^2) .
This means that Sn/n → m completely. By proposition (2.8.1) we
have almost everywhere convergence.
Here is an application of the strong law:
Definition. A real number x ∈ [0, 1] is called normal to the base 10, if its
decimal expansion x = x1 x2 . . . has the property that each digit appears
with the same frequency 1/10.
Corollary 2.9.2. (Normality of numbers) On the probability space
([0, 1], B, Q = dx), Lebesgue almost all numbers x are normal.
Proof. Define the random variables Xn (x) = xn , where xn is the n’th
decimal digit. We have only to verify that Xn are IID random variables. The
strong law of large numbers will assure that almost all x are normal. Let Ω =
{0, 1, . . . , 9 }N be the space of all infinite sequences ω = (ω1 , ω2 , ω3 , . . . ).
Define on Ω the product σ-algebra A and the product probability measure P.
Define the measurable map S(ω) = Σ_{k=1}^∞ ωk/10^k = x from Ω to [0, 1].
It produces for every sequence in Ω a real number x ∈ [0, 1]. The integers
ωk are just the decimal digits of x. The map S is measure preserving and
can be inverted on a set of measure 1 because almost all real numbers have
a unique decimal expansion.
Because Xn (x) = Xn (S(ω)) = Yn (ω) = ωn , if S(ω) = x. We see that Xn
are the same random variables than Yn . The later are by construction IID
with uniform distribution on {0, 1, . . . , 9 }.
Remark. While almost all numbers are normal, it is difficult to decide normality for specific real numbers. One does not know, for example, whether π − 3 = 0.1415926 . . . or √2 − 1 = 0.41421 . . . are normal.
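The digit statistics can at least be examined empirically. The following sketch (an illustration, not part of the text) counts the first 10000 decimal digits of √2 − 1, computed exactly with integer arithmetic; the observed frequencies are all close to 1/10, consistent with — though of course no proof of — normality:

```python
from collections import Counter
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) - 1, computed exactly
    via an integer square root (no floating point involved)."""
    # isqrt(2 * 10**(2n)) is the integer part of sqrt(2) * 10**n
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[1:]  # drop the leading digit "1" of sqrt(2)

digits = sqrt2_digits(10000)
freq = Counter(digits)
for d in sorted(freq):
    print(d, freq[d] / len(digits))  # each frequency is close to 1/10
```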
The strong law for IID random variables was first proven by Kolmogorov in 1930. Only much later, in 1981, was it observed that the weaker notion of pairwise independence is sufficient [25]:
Theorem 2.9.3 (Strong law for pairwise independent L1 random variables).
Assume Xn ∈ L1 are pairwise independent and identically distributed random variables. Then Sn /n → E[X1 ] almost everywhere.
Proof. We can assume without loss of generality that Xn ≥ 0, because we can split Xn = Xn⁺ − Xn⁻ into its positive part Xn⁺ = Xn ∨ 0 = max(Xn, 0) and negative part Xn⁻ = (−Xn) ∨ 0 = max(−Xn, 0); knowing the result for Xn± implies the result for Xn.
Define fR(t) = t · 1_{[−R,R]}(t), the random variables Xn^(R) = fR(Xn) and Yn = Xn^(n), as well as

    Sn = (1/n) Σ_{i=1}^n Xi ,  Tn = (1/n) Σ_{i=1}^n Yi .
(i) It is enough to show that Tn − E[Tn] → 0 almost everywhere.

Proof. Since E[Yn] → E[X1] = m, we get E[Tn] → m. Because

    Σ_{n≥1} P[Yn ≠ Xn] ≤ Σ_{n≥1} P[Xn ≥ n] = Σ_{n≥1} P[X1 ≥ n]
                       = Σ_{n≥1} Σ_{k≥n} P[X1 ∈ [k, k + 1)]
                       = Σ_{k≥1} k · P[X1 ∈ [k, k + 1)] ≤ E[X1] < ∞ ,

we get by the first Borel-Cantelli lemma that P[Yn ≠ Xn infinitely often] = 0. This means Tn − Sn → 0 almost everywhere, so that Tn − E[Tn] → 0 almost everywhere implies Sn → m almost everywhere.
(ii) Fix a real number α > 1 and define the exponentially growing subsequence kn = [α^n], the integer part of α^n. Denote by µ the law of the random variables Xn. For every ǫ > 0, we get, using the Chebychev inequality (2.5.5), pairwise independence, and constants C which can
vary from line to line:
    Σ_{n=1}^∞ P[|T_{kn} − E[T_{kn}]| ≥ ǫ]
      ≤ Σ_{n=1}^∞ Var[T_{kn}]/ǫ²
      = Σ_{n=1}^∞ (1/(ǫ² kn²)) Σ_{m=1}^{kn} Var[Ym]
      = (1/ǫ²) Σ_{m=1}^∞ Var[Ym] Σ_{n: kn ≥ m} 1/kn²
      ≤(1) (1/ǫ²) Σ_{m=1}^∞ Var[Ym] · C/m²
      ≤ C Σ_{m=1}^∞ E[Ym²]/m² .

In (1) we used that with kn = [α^n] one has Σ_{n: kn ≥ m} kn^{−2} ≤ C · m^{−2}. In the last step we used that Var[Ym] = E[Ym²] − E[Ym]² ≤ E[Ym²].
Let's take a breath and continue where we just left off:
    Σ_{n=1}^∞ P[|T_{kn} − E[T_{kn}]| ≥ ǫ]
      ≤ C Σ_{m=1}^∞ E[Ym²]/m²
      ≤ C Σ_{m=1}^∞ (1/m²) Σ_{l=0}^{m−1} ∫_l^{l+1} x² dµ(x)
      = C Σ_{l=0}^∞ ( Σ_{m=l+1}^∞ 1/m² ) ∫_l^{l+1} x² dµ(x)
      ≤ C Σ_{l=0}^∞ ( Σ_{m=l+1}^∞ 1/m² ) (l + 1) ∫_l^{l+1} x dµ(x)
      ≤(2) C Σ_{l=0}^∞ ∫_l^{l+1} x dµ(x)
      = C · E[X1] < ∞ .

In (2) we used that Σ_{m=l+1}^∞ m^{−2} ≤ C · (l + 1)^{−1}.
We have now proved complete (= fast stochastic) convergence. This implies the almost everywhere convergence of T_{kn} − E[T_{kn}] → 0.

(iii) So far, the convergence has only been verified along the subsequence kn. Because we assumed Xn ≥ 0, the sequence Un = Σ_{i=1}^n Yi = nTn is monotonically increasing. For n ∈ [km, k_{m+1}] we therefore get
    (km/k_{m+1}) · (U_{km}/km) = U_{km}/k_{m+1} ≤ Un/n ≤ U_{k_{m+1}}/km = (k_{m+1}/km) · (U_{k_{m+1}}/k_{m+1})
and from lim_{m→∞} T_{km} = E[X1] almost everywhere, the statement

    (1/α) E[X1] ≤ lim inf_n Tn ≤ lim sup_n Tn ≤ α E[X1]

follows. Since α > 1 was arbitrary, Tn → E[X1] almost everywhere.
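A quick simulation illustrates the theorem. This sketch (an illustration, not part of the proof) tracks the running averages of IID uniform random variables on [0, 1], whose common expectation is 1/2:

```python
import random

def running_means(n, rng):
    """Running averages S_k / k of k IID uniform [0, 1] samples."""
    total = 0.0
    means = []
    for k in range(1, n + 1):
        total += rng.random()
        means.append(total / k)
    return means

rng = random.Random(0)
means = running_means(100_000, rng)
print(means[9], means[999], means[99_999])  # settles down towards 1/2
```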
Remark. The strong law of large numbers can be interpreted as a statement about the growth of the sequence Σ_{k=1}^n Xk. For E[X1] = 0, the convergence (1/n) Σ_{k=1}^n Xk → 0 means that for all ǫ > 0 there exists m such that for n > m

    |Σ_{k=1}^n Xk| ≤ ǫn .

This means that the trajectory n ↦ Σ_{k=1}^n Xk is eventually contained in any arbitrarily small cone; in other words, it grows slower than linearly. The exact description of the growth of Σ_{k=1}^n Xk is given by the law of the iterated logarithm of Khinchin, which says that a sequence of IID random variables Xn with E[Xn] = m and σ(Xn) = σ ≠ 0 satisfies

    lim sup_{n→∞} (Sn − nm)/Λn = +1 ,  lim inf_{n→∞} (Sn − nm)/Λn = −1 ,

with Λn = √(2σ²n log log n). We will prove this theorem later in a special case in theorem (2.18.2).
Remark. The IID assumption on the random variables cannot be weakened without further restrictions. Take for example a sequence Xn of independent random variables satisfying P[Xn = ±2^n] = 1/2. Then E[Xn] = 0, but Sn/n does not even converge.
Exercise. Let Xi be IID random variables in L². Define Yk = (1/k) Σ_{i=1}^k Xi. What can you say about Sn = (1/n) Σ_{k=1}^n Yk?
2.10 The Birkhoff ergodic theorem
In this section we fix a probability space (Ω, A, P) and consider sequences
of random variables Xn which are defined dynamically by a map T on Ω
by
Xn (ω) = X(T n (ω)) ,
where T^n(ω) = T(T(. . . T(ω))) is the n'th iterate of T applied to ω. This includes as a special case the situation where the random variables are independent, but it can be much more general. Like martingale theory, which is covered later in these notes, ergodic theory is not only a generalization of classical probability theory; it is a considerable extension of it, both in language and in scope.
Definition. A measurable map T : Ω → Ω from the probability space onto
itself is called measure preserving, if P[T −1 (A)] = P[A] for all A ∈ A. The
map T is called ergodic if T (A) = A implies P[A] = 0 or P[A] = 1. A
measure preserving map T is called invertible, if there exists a measurable,
measure preserving inverse T −1 of T . An invertible measure preserving map
T is also called an automorphism of the probability space.
Example. Let Ω = {|z| = 1} ⊂ C be the unit circle in the complex plane with the measure P[Arg(z) ∈ [a, b]] = (b − a)/(2π) for 0 < a < b < 2π and the Borel σ-algebra A. If w = e^{2πiα} is a complex number of length 1, then the rotation T(z) = wz defines a measure preserving transformation on (Ω, A, P). It is invertible with inverse T^{−1}(z) = z/w.
Example. The transformation T(z) = z² on the same probability space as in the previous example is also measure preserving. Note that P[T(A)] = 2P[A] but P[T^{−1}(A)] = P[A] for all A ∈ A. The map is measure preserving but it is not invertible.
Remark. T is ergodic if and only if for any X ∈ L1 the condition X(T ) = X
implies that X is constant almost everywhere.
Example. The rotation on the circle is ergodic if α is irrational. Proof: with z = e^{2πix} one can write a random variable X on Ω as a Fourier series

    f(z) = Σ_{n=−∞}^∞ an z^n ,

which is the sum f− + f0 + f+, where f+ = Σ_{n=1}^∞ an z^n is analytic in |z| < 1, f− = Σ_{n=1}^∞ a_{−n} z^{−n} is analytic in |z| > 1, and f0 = a0 is constant. Doing the same decomposition for f(T(z)) = Σ_{n=−∞}^∞ an w^n z^n, we see that invariance gives f+ = Σ_{n=1}^∞ an z^n = Σ_{n=1}^∞ an w^n z^n. But these are the Taylor expansions of f+ and f+(T), and so an = an w^n. Because w^n ≠ 1 for irrational α, we deduce an = 0 for n ≥ 1. Similarly, one derives an = 0 for n ≤ −1. Therefore f(z) = a0 is constant.
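Ergodicity of the irrational rotation can also be watched numerically: time averages of a function along an orbit approach its space average. A small sketch (illustrative only; we take f(x) = cos(2πx), whose integral over the circle is 0):

```python
import math

def birkhoff_average(f, alpha, x0, n):
    """Time average (1/n) * sum_{k<n} f(x0 + k*alpha mod 1) of the rotation."""
    s, x = 0.0, x0
    for _ in range(n):
        s += f(x)
        x = (x + alpha) % 1.0
    return s / n

alpha = math.sqrt(2) - 1   # an irrational rotation number
avg = birkhoff_average(lambda x: math.cos(2 * math.pi * x), alpha, 0.1, 100_000)
print(avg)   # close to the space average, which is 0
```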
Example. Also the non-invertible squaring transformation T(z) = z² on the circle is ergodic, as a Fourier argument shows again: T preserves the decomposition of f into the three analytic parts f = f− + f0 + f+, so that f+(T(z)) = Σ_{n=1}^∞ an z^{2n}. The invariance f+(T) = f+ implies

    Σ_{n=1}^∞ an z^{2n} = Σ_{n=1}^∞ an z^n .

Comparing Taylor coefficients of this identity for analytic functions shows an = 0 for odd n, because the left hand side has zero Taylor coefficients for odd powers of z. And for even n = 2^l k with odd k, we have an = a_{2^l k} = a_{2^{l−1} k} = · · · = ak = 0. So all coefficients ak vanish for k ≥ 1. Similarly, one sees ak = 0 for k ≤ −1.
Definition. Given a random variable X ∈ L¹ and a measure preserving transformation T, one obtains a sequence of random variables Xn = X(T^n) ∈ L¹ by X(T^n)(ω) = X(T^n ω). They all have the same distribution. Define S0 = 0 and Sn = Σ_{k=0}^{n−1} X(T^k).
Theorem 2.10.1 (Maximal ergodic theorem of Hopf). Given X ∈ L1 and
a measure preserving transformation T , the event A = {supn Sn > 0 }
satisfies
E[X; A] = E[1A X] ≥ 0 .
Proof. Define Zn = max_{0≤k≤n} Sk and the sets An = {Zn > 0} ⊂ A_{n+1}. Then A = ∪_n An. Clearly Zn ∈ L¹. For 0 ≤ k ≤ n, we have Zn ≥ Sk and so Zn(T) ≥ Sk(T), hence

    Zn(T) + X ≥ Sk(T) + X = S_{k+1} .

By taking the maximum on both sides over 0 ≤ k ≤ n, we get

    Zn(T) + X ≥ max_{1≤k≤n+1} Sk .

On An = {Zn > 0}, we can extend this to Zn(T) + X ≥ max_{1≤k≤n+1} Sk = max_{0≤k≤n+1} Sk = Z_{n+1} ≥ Zn, so that on An

    X ≥ Zn − Zn(T) .
Using (1) this inequality, the fact (2) that Zn = 0 on Ω \ An, the inequality (3) Zn(T) ≥ 0, and finally (4) that T is measure preserving, leads to

    E[X; An] ≥(1) E[Zn; An] − E[Zn(T); An]
            =(2) E[Zn] − E[Zn(T); An]
            ≥(3) E[Zn] − E[Zn(T)]
            =(4) 0

for every n, and so to E[X; A] ≥ 0.
A special case is if A is the entire set:
Corollary 2.10.2. Given X ∈ L1 and a measure preserving transformation
T . If supn Sn > 0 almost everywhere then E[X] ≥ 0.
Theorem 2.10.3 (Birkhoff ergodic theorem, 1931). For any X ∈ L¹ the time average

    Sn/n = (1/n) Σ_{i=0}^{n−1} X(T^i x)

converges almost everywhere to a T-invariant random variable X̄ satisfying E[X̄] = E[X]. If T is ergodic, then X̄ is constant E[X] almost everywhere and Sn/n converges to E[X].
Proof. Define X̄ = lim sup_{n→∞} Sn/n and X̲ = lim inf_{n→∞} Sn/n. We get X̄ = X̄(T) and X̲ = X̲(T) because

    ((n + 1)/n) · (S_{n+1}/(n + 1)) − X/n = Sn(T)/n .
(i) X̄ = X̲.

Define for β < α ∈ R the set Aα,β = {X̲ < β < α < X̄}. It is T-invariant because X̄, X̲ are T-invariant, as mentioned at the beginning of the proof. Because {X̲ < X̄} = ∪_{β<α, α,β∈Q} Aα,β, it is enough to show that P[Aα,β] = 0 for rational β < α. The rest of the proof establishes this. In order to use the maximal ergodic theorem, we also define

    Bα,β = {sup_n (Sn − nα) > 0, inf_n (Sn − nβ) < 0}
         = {sup_n (Sn/n − α) > 0, inf_n (Sn/n − β) < 0}
         ⊃ {lim sup_n (Sn/n − α) > 0, lim inf_n (Sn/n − β) < 0}
         = {X̄ − α > 0, X̲ − β < 0} = Aα,β .
Because Aα,β ⊂ Bα,β and Aα,β is T-invariant, we get from the maximal ergodic theorem, applied to the random variable X − α for the system T restricted to Aα,β, that E[X − α; Aα,β] ≥ 0 and so

    E[X; Aα,β] ≥ α · P[Aα,β] .   (2.5)

Replacing X, α, β with −X, −β, −α and using that the lim sup of the averages of −X is −X̲ shows in exactly the same way that

    E[X; Aα,β] ≤ β · P[Aα,β] .   (2.6)

The two equations (2.5), (2.6) imply βP[Aα,β] ≥ αP[Aα,β], which together with β < α only leaves us to conclude P[Aα,β] = 0.
(ii) X̄ ∈ L¹.

By (i), Sn/n converges pointwise almost everywhere to X̄ = X̲. Since E[|Sn/n|] ≤ E[|X|] for all n, Fatou's lemma gives E[|X̄|] ≤ E[|X|] < ∞, so X̄ ∈ L¹.
(iii) E[X̄] = E[X].

Define the T-invariant sets Bk,n = {X̄ ∈ [k/n, (k+1)/n)} for k ∈ Z, n ≥ 1. Define for ǫ > 0 the random variable Y = X − k/n + ǫ, and call S̃m the sums where X is replaced by Y. On Bk,n, the averages S̃m/m converge to X̄ − k/n + ǫ ≥ ǫ > 0, so sup_m S̃m > 0 on Bk,n. Applying the maximal ergodic theorem to the random variable Y on Bk,n, we get E[Y; Bk,n] ≥ 0, that is E[X; Bk,n] ≥ (k/n − ǫ) P[Bk,n]. Because ǫ > 0 was arbitrary,

    E[X; Bk,n] ≥ (k/n) P[Bk,n] .

With this inequality,

    E[X̄; Bk,n] ≤ ((k+1)/n) P[Bk,n] ≤ (1/n) P[Bk,n] + (k/n) P[Bk,n] ≤ (1/n) P[Bk,n] + E[X; Bk,n] .

Summing over k gives

    E[X̄] ≤ 1/n + E[X]

and, because n was arbitrary, E[X̄] ≤ E[X]. Doing the same with −X, and using X̄ = X̲ from (i), we get E[−X̄] ≤ E[−X], hence E[X̄] ≥ E[X]. Together, E[X̄] = E[X].
Corollary 2.10.4. The strong law of large numbers holds for IID random
variables Xn ∈ L1 .
Proof. Given a sequence of IID random variables Xn ∈ L¹, let µ be the law of Xn. Define the probability space Ω = (R^Z, A, P), where P = µ^Z is the product measure. If T : Ω → Ω, T(ω)n = ω_{n+1}, denotes the shift on Ω, then Xn = X(T^n) with X(ω) = ω0. The shift T is ergodic, so every T-invariant function is constant almost everywhere; we must therefore have X̄ = E[X] almost everywhere, so that Sn/n → E[X] almost everywhere.
Remark. While ergodic theory is closely related to probability theory, the
notation in the two fields is often different. The reason is that the origins of the theories are different. Ergodic theorists usually write (X, A, m) for
a probability space, not (Ω, A, P). Of course an ergodic theorists looks at
probability theory as a special case of her field and a probabilist looks at
ergodic theory as a special case of his field. Another example of different language is that ergodic theorists do not use the term "random variable" X but speak of a "function" f. This sounds different but is the same.
The two subjects can hardly be separated. Good introductions to ergodic
theory are [37, 13, 8, 79, 55, 112].
2.11 More convergence results
We mention now some results about the almost everywhere convergence of sums of random variables, in contrast to the weak and strong laws, which dealt with averaged sums.
Theorem 2.11.1 (Kolmogorov's inequalities). a) Assume Xk ∈ L² are independent random variables. Then

    P[sup_{1≤k≤n} |Sk − E[Sk]| ≥ ǫ] ≤ Var[Sn]/ǫ² .

b) Assume Xk ∈ L^∞ are independent random variables with ||Xk||_∞ ≤ R. Then

    P[sup_{1≤k≤n} |Sk − E[Sk]| ≥ ǫ] ≥ 1 − (R + ǫ)²/Σ_{k=1}^n Var[Xk] .
Proof. We can assume E[Xk] = 0 without loss of generality.
a) For 1 ≤ k ≤ n we have

    Sn² − Sk² = (Sn − Sk)² + 2(Sn − Sk)Sk ≥ 2(Sn − Sk)Sk

and therefore E[Sn²; Ak] ≥ E[Sk²; Ak] for all Ak ∈ σ(X1, . . . , Xk), by the independence of Sn − Sk and Sk. The sets A1 = {|S1| ≥ ǫ}, A_{k+1} = {|S_{k+1}| ≥ ǫ, max_{1≤l≤k} |Sl| < ǫ} are mutually disjoint. We have to estimate the probability of the events

    Bn = {max_{1≤k≤n} |Sk| ≥ ǫ} = ∪_{k=1}^n Ak .

We get

    E[Sn²] ≥ E[Sn²; Bn] = Σ_{k=1}^n E[Sn²; Ak] ≥ Σ_{k=1}^n E[Sk²; Ak] ≥ ǫ² Σ_{k=1}^n P[Ak] = ǫ² P[Bn] .

b) Since |Sn| < ǫ on Bn^c,

    E[Sn²; Bn] = E[Sn²] − E[Sn²; Bn^c] ≥ E[Sn²] − ǫ²(1 − P[Bn]) .

On Ak, |S_{k−1}| ≤ ǫ and |Sk| ≤ |S_{k−1}| + |Xk| ≤ ǫ + R holds. We use that in
the estimate
    E[Sn²; Bn] = Σ_{k=1}^n E[Sk² + (Sn − Sk)²; Ak]
              = Σ_{k=1}^n E[Sk²; Ak] + Σ_{k=1}^n E[(Sn − Sk)²; Ak]
              ≤ (R + ǫ)² Σ_{k=1}^n P[Ak] + Σ_{k=1}^n P[Ak] Σ_{j=k+1}^n Var[Xj]
              ≤ P[Bn]((ǫ + R)² + E[Sn²])

so that

    E[Sn²] ≤ P[Bn]((ǫ + R)² + E[Sn²]) + ǫ² − ǫ² P[Bn]

and so

    P[Bn] ≥ (E[Sn²] − ǫ²)/((ǫ + R)² + E[Sn²] − ǫ²)
          ≥ 1 − (ǫ + R)²/((ǫ + R)² + E[Sn²] − ǫ²)
          ≥ 1 − (ǫ + R)²/E[Sn²] .

Since E[Sn²] = Σ_{k=1}^n Var[Xk], this is the claim.
Remark. The inequalities remain true in the limit n → ∞. The first inequality becomes

    P[sup_k |Sk − E[Sk]| ≥ ǫ] ≤ (1/ǫ²) Σ_{k=1}^∞ Var[Xk] .

Of course, the statement in a) is void if the right hand side is infinite. In this case, however, the inequality in b) states that sup_k |Sk − E[Sk]| ≥ ǫ almost surely for every ǫ > 0.
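Kolmogorov's inequality a) can be checked by simulation. The sketch below (illustrative only; the parameter choices are arbitrary) estimates P[max_{k≤n} |Sk| ≥ ǫ] for fair ±1 steps, where E[Sk] = 0 and Var[Sn] = n:

```python
import random

def max_exceeds(eps, n, rng):
    """True if max_{1<=k<=n} |S_k| >= eps for one fresh ±1 random walk."""
    s = 0
    for _ in range(n):
        s += rng.choice((-1, 1))
        if abs(s) >= eps:
            return True
    return False

rng = random.Random(3)
n, eps, trials = 100, 25, 2000
p_hat = sum(max_exceeds(eps, n, rng) for _ in range(trials)) / trials
bound = n / eps ** 2   # Kolmogorov's bound Var[S_n] / eps^2
print(p_hat, "<=", bound)
```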
Remark. For n = 1, Kolmogorov's inequality a) reduces to Chebychev's inequality (2.5.5).
Lemma 2.11.2. A sequence Xn of random variables converges almost everywhere if and only if

    lim_{n→∞} P[sup_{k≥1} |X_{n+k} − Xn| > ǫ] = 0

for all ǫ > 0.
Proof. This is an exercise.
Theorem 2.11.3 (Kolmogorov). Assume Xn ∈ L² are independent and Σ_{n=1}^∞ Var[Xn] < ∞. Then

    Σ_{n=1}^∞ (Xn − E[Xn])

converges almost everywhere.

Proof. Define Yn = Xn − E[Xn] and Sn = Σ_{k=1}^n Yk. Given m ∈ N, apply Kolmogorov's inequality to the sequence Y_{m+k} to get

    P[sup_{n≥m} |Sn − Sm| ≥ ǫ] ≤ (1/ǫ²) Σ_{k=m+1}^∞ E[Yk²] → 0

for m → ∞. The above lemma implies that Sn(ω) converges almost everywhere.
Figure. We sum up independent random variables Xk which take the values ±1/k^α with equal probability. According to theorem (2.11.3), the process

    Sn = Σ_{k=1}^n (Xk − E[Xk]) = Σ_{k=1}^n Xk

converges almost everywhere if Σ_{k=1}^∞ E[Xk²] = Σ_{k=1}^∞ 1/k^{2α} converges, which is the case if α > 1/2. The picture shows some experiments in the case α = 0.6.
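Such an experiment can be reproduced in a few lines. This is a sketch of the kind of simulation shown in the figure (the seed and checkpoints are arbitrary choices, not from the text):

```python
import random

def signed_series(alpha, checkpoints, rng):
    """Partial sums S_n of sum_k ±1/k^alpha with independent fair signs,
    recorded at the given checkpoint indices."""
    s, out = 0.0, []
    for k in range(1, max(checkpoints) + 1):
        s += rng.choice((-1.0, 1.0)) / k ** alpha
        if k in checkpoints:
            out.append(s)
    return out

rng = random.Random(1)
sums = signed_series(0.6, {10**3, 10**4, 10**5}, rng)
print(sums)   # for alpha > 1/2 the partial sums settle down
```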
The following theorem gives a necessary and sufficient condition for a sum Sn = Σ_{k=1}^n Xk of independent random variables Xn to converge.

Definition. Given R ∈ R and a random variable X, we define the bounded random variable

    X^(R) = 1_{|X|<R} · X .
Theorem 2.11.4 (Three series theorem). Assume the random variables Xn ∈ L are independent. Then Σ_{n=1}^∞ Xn converges almost everywhere if and only if for some R > 0 all of the following three series converge:

    Σ_{k=1}^∞ P[|Xk| > R] < ∞ ,       (2.7)
    Σ_{k=1}^∞ E[Xk^(R)] converges ,    (2.8)
    Σ_{k=1}^∞ Var[Xk^(R)] < ∞ .       (2.9)
Proof. Assume first that the three series all converge. By (2.9) and Kolmogorov's theorem (2.11.3), the sum Σ_{k=1}^∞ (Xk^(R) − E[Xk^(R)]) converges almost surely. Therefore, by (2.8), Σ_{k=1}^∞ Xk^(R) converges almost surely. By (2.7) and Borel-Cantelli, P[Xk^(R) ≠ Xk infinitely often] = 0. Since for almost all ω we have Xk^(R)(ω) = Xk(ω) for sufficiently large k, and for almost all ω the sum Σ_{k=1}^∞ Xk^(R)(ω) converges, we get a set of measure one on which Σ_{k=1}^∞ Xk converges.

Assume now that Σ_{n=1}^∞ Xn converges almost everywhere. Then Xk → 0 almost everywhere and P[|Xk| > R infinitely often] = 0 for every R > 0. By the second Borel-Cantelli lemma and independence, the series (2.7) converges. The almost sure convergence of Σ Xn implies the almost sure convergence of Σ Xn^(R), since P[|Xk| > R infinitely often] = 0.

Let R > 0 be fixed. Let Yk be a sequence of independent random variables such that Yk and Xk^(R) have the same distribution and such that all the random variables Xk^(R), Yk are independent. The sequence (Yk) has the same joint distribution as (Xk^(R)), so Σ Yk converges almost surely as well, and hence Tn = Σ_{k=1}^n (Xk^(R) − Yk) converges almost surely. Since E[Xk^(R) − Yk] = 0 and P[|Xk^(R) − Yk| ≤ 2R] = 1, Kolmogorov's inequality b) gives for all ǫ > 0

    P[sup_{k≥1} |T_{n+k} − Tn| > ǫ] ≥ 1 − (2R + ǫ)²/Σ_{k=n+1}^∞ Var[Xk^(R) − Yk] .

Claim: Σ_{k=1}^∞ Var[Xk^(R) − Yk] < ∞. Assume the sum were infinite. Then the above inequality gives P[sup_{k≥1} |T_{n+k} − Tn| ≥ ǫ] = 1 for every n. But this contradicts the almost sure convergence of Σ_{k=1}^∞ (Xk^(R) − Yk), which by lemma (2.11.2) implies P[sup_{k≥1} |T_{n+k} − Tn| > ǫ] < 1/2 for large enough n. Having shown Σ_{k=1}^∞ Var[Xk^(R) − Yk] < ∞, we get series (2.9), because Var[Xk^(R) − Yk] = 2 Var[Xk^(R)]. By Kolmogorov's theorem (2.11.3), the sum Σ_{k=1}^∞ (Xk^(R) − E[Xk^(R)]) then converges almost surely; together with the almost sure convergence of Σ Xk^(R), this shows that Σ E[Xk^(R)] converges, so that (2.8) holds.
Figure. A special case of the three series theorem is when the Xk are uniformly bounded, |Xk| ≤ R, and have zero expectation E[Xk] = 0. In that case, almost everywhere convergence of Sn = Σ_{k=1}^n Xk is equivalent to the convergence of Σ_{k=1}^∞ Var[Xk]. For example, in the case

    Xk = ±1/k^α with probability 1/2 each

and α = 1/2, we do not have almost everywhere convergence of Sn, because Σ_{k=1}^∞ Var[Xk] = Σ_{k=1}^∞ 1/k = ∞.
Definition. A real number α ∈ R is called a median of X ∈ L if P[X ≤ α] ≥ 1/2 and P[X ≥ α] ≥ 1/2. We denote by med(X) the set of medians of X.

Remark. The median is not unique and is in general different from the mean. It is also defined for random variables for which the mean does not exist. The median differs from the mean by at most a multiple of the standard deviation:
Proposition 2.11.5 (Comparing median and mean). For Y ∈ L², every α ∈ med(Y) satisfies

    |α − E[Y]| ≤ √2 σ[Y] .

Proof. For every β ∈ R, one has

    |α − β|²/2 ≤ |α − β|² min(P[Y ≥ α], P[Y ≤ α]) ≤ E[(Y − β)²] .

Now put β = E[Y].
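For a concrete check of this bound, consider the exponential distribution with λ = 1, where the mean is 1, the standard deviation is 1, and the median is log 2 (a deterministic sketch, not part of the text):

```python
import math

# Exponential distribution with λ = 1: F(x) = 1 - e^{-x},
# so the median solves 1 - e^{-α} = 1/2, i.e. α = log 2.
mean, sigma = 1.0, 1.0
median = math.log(2)
gap = abs(median - mean)          # |α - E[Y]| ≈ 0.307
print(gap, "<=", math.sqrt(2) * sigma)
```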
Theorem 2.11.6 (Lévy). Given a sequence of independent random variables Xn ∈ L, choose α_{l,k} ∈ med(Sl − Sk). Then, for all n ∈ N and all ǫ > 0,

    P[max_{1≤k≤n} |Sk + α_{n,k}| ≥ ǫ] ≤ 2 P[|Sn| ≥ ǫ] .
Proof. Fix n ∈ N and ǫ > 0. The sets

    A1 = {S1 + α_{n,1} ≥ ǫ} ,  A_{k+1} = {max_{1≤l≤k} (Sl + α_{n,l}) < ǫ, S_{k+1} + α_{n,k+1} ≥ ǫ}

for 1 ≤ k < n are disjoint and ∪_{k=1}^n Ak = {max_{1≤k≤n} (Sk + α_{n,k}) ≥ ǫ}. Because {Sn ≥ ǫ} contains each intersection Ak ∩ {Sn − Sk ≥ α_{n,k}} for 1 ≤ k ≤ n, we obtain, using the independence of σ(Ak) and σ(Sn − Sk),

    P[Sn ≥ ǫ] ≥ Σ_{k=1}^n P[{Sn − Sk ≥ α_{n,k}} ∩ Ak]
             = Σ_{k=1}^n P[Sn − Sk ≥ α_{n,k}] P[Ak]
             ≥ (1/2) Σ_{k=1}^n P[Ak]
             = (1/2) P[∪_{k=1}^n Ak]
             = (1/2) P[max_{1≤k≤n} (Sk + α_{n,k}) ≥ ǫ] .

Applying this inequality to −Xn, we also get P[max_{1≤k≤n} (−Sk − α_{n,k}) ≥ ǫ] ≤ 2 P[−Sn ≥ ǫ], and adding the two bounds yields

    P[max_{1≤k≤n} |Sk + α_{n,k}| ≥ ǫ] ≤ 2 P[|Sn| ≥ ǫ] .
Corollary 2.11.7 (Lévy). Given a sequence Xn ∈ L of independent random variables: if the partial sums Sn converge in probability to S, then Sn converges almost everywhere to S.

Proof. Take α_{l,k} ∈ med(Sl − Sk). Since Sn converges in probability, there exists m1 ∈ N such that |α_{l,k}| ≤ ǫ/2 for all m1 ≤ k ≤ l. In addition, there exists m2 ∈ N such that sup_{n≥1} P[|S_{n+m} − Sm| ≥ ǫ/2] < ǫ/2 for all m ≥ m2. For m = max{m1, m2} we have for n ≥ 1

    P[max_{1≤l≤n} |S_{l+m} − Sm| ≥ ǫ] ≤ P[max_{1≤l≤n} |S_{l+m} − Sm + α_{n+m,l+m}| ≥ ǫ/2] .

The right hand side can be estimated by theorem (2.11.6), applied to the sequence X_{n+m}, by

    ≤ 2 P[|S_{n+m} − Sm| ≥ ǫ/2] < ǫ .

Now apply the convergence lemma (2.11.2).
Exercise. Prove the strong law of large numbers for independent, but not necessarily identically distributed random variables: given a sequence of independent random variables Xn ∈ L² satisfying E[Xn] = m, if

    Σ_{k=1}^∞ Var[Xk]/k² < ∞ ,

then Sn/n → m almost everywhere. Hint: Use Kolmogorov's theorem (2.11.3) for Yk = Xk/k.

Exercise. Let Xn be an IID sequence of random variables with uniform distribution on [0, 1]. Prove that almost surely

    Σ_{n=1}^∞ Π_{i=1}^n Xi < ∞ .

Hint: Use Var[Π_i Xi] = Π_i E[Xi²] − Π_i E[Xi]² and use the three series theorem.

2.12 Classes of random variables

The probability distribution function FX : R → [0, 1] of a random variable X was defined as

    FX(x) = P[X ≤ x] ,

where P[X ≤ x] is a short hand notation for P[{ω ∈ Ω | X(ω) ≤ x}]. One reason to introduce distribution functions is that one can replace integrals on the probability space Ω by integrals on the real line R, which is more convenient. The distribution function FX determines the law µX, because the measure ν((−∞, a]) = FX(a) on the π-system I given by the intervals {(−∞, a]} determines a unique measure on R. With the law µX = X*P of X on R, one has FX(x) = ∫_{−∞}^x dµX, so that FX is the "anti-derivative" of µX.

Remark. There are many different random variables, defined on different probability spaces, which have the same distribution. Of course, the distribution function does not determine the random variable itself.
Proposition 2.12.1. The distribution function FX of a random variable is
a) nondecreasing,
b) satisfies FX(−∞) = 0, FX(∞) = 1,
c) continuous from the right: FX(x + h) → FX(x) for h ↓ 0.

Proof. a) follows from {X ≤ x} ⊂ {X ≤ y} for x ≤ y. b) P[{X ≤ −n}] → 0 and P[{X ≤ n}] → 1. c) FX(x + h) − FX(x) = P[x < X ≤ x + h] → 0 for h ↓ 0.

Furthermore, given a function F with the properties a), b), c), there exists a random variable X on a probability space (Ω, A, P) which satisfies FX = F: define Ω = R and A as the Borel σ-algebra on R; the measure P[(−∞, a]] = F(a) on the π-system I defines a unique measure on (Ω, A). The proposition therefore also tells us that one can identify the class of distribution functions with the set of real functions F which satisfy properties a), b), c). In the same way, every Borel probability measure µ on R determines a distribution function F of some random variable X by ∫_{−∞}^x dµ = F(x).

Remark. Bertrand's paradox mentioned in the introduction shows that the choice of the distribution function is important. In any of the three cases there is a radially symmetric density f(x, y) on the unit disc:
- The density f(x, y) = 1/(2πr) with r = √(x² + y²) is obtained when throwing parallel lines. The radial density is then constant and the disc Ar of radius r has probability P[Ar] = r, bigger than the value r² given by the uniform distribution: this puts more weight to the center.
- The constant density f(x, y) = 1/π is obtained when we throw the center of the line into the disc. The density in the r direction is 2r and the disc Ar of radius r has probability P[Ar] = r².
- When we rotate the line around a point on the boundary, the density in the r direction is proportional to 1/√(1 − r²) and the disc Ar of radius r has probability P[Ar] = (2/π) arcsin(r).
So, what happens if we really do an experiment and throw random lines onto a disc? The punch line of the story is that the outcome of the experiment very much depends on how the experiment is performed. If we did the experiment by hand, we would probably try to throw the center of the stick into the middle of the disc. Since we would aim at the center, the distribution would be different from any of the three solutions given in Bertrand's paradox.

Figure. A plot of the radial density function f(r) for the three different interpretations of the Bertrand paradox.

Figure. A plot of the radial distribution function F(r) = P[Ar]. The three interpretations give different values at F(1/2).

Definition. A distribution function F is called absolutely continuous (ac) if there exists a Borel measurable function f satisfying F(x) = ∫_{−∞}^x f(y) dy. One calls a random variable with an absolutely continuous distribution function a continuous random variable.

Definition. A distribution function F is called pure point (pp) or atomic if there exists a countable sequence of real numbers xn and a sequence of positive numbers pn with Σn pn = 1 such that F(x) = Σ_{n: xn ≤ x} pn. One calls a random variable with a discrete distribution function a discrete random variable.

Definition. A distribution function F is called singular continuous (sc) if F is continuous and if there exists a Borel set S of zero Lebesgue measure such that µF(S) = 1. One calls a random variable with a singular continuous distribution function a singular continuous random variable.

Remark. The definition of (ac), (pp) and (sc) distribution functions is compatible with the definition of (ac), (pp) and (sc) Borel measures µ on R: the measure µ is (pp) if µ(A) = Σ_{x∈A} µ({x}) for every Borel set A; it is continuous if it contains no atoms, that is no points with positive measure; it is (ac) if there exists a measurable
function f such that µ = f dx; it is (sc) if it is continuous and if µ(S) = 1 for some Borel set S of zero Lebesgue measure.

The following decomposition theorem shows that these three classes are natural:

Theorem 2.12.2 (Lebesgue decomposition theorem). Every Borel measure µ on (R, B) can be decomposed in a unique way as µ = µpp + µac + µsc, where µpp is pure point, µsc is singular continuous and µac is absolutely continuous with respect to the Lebesgue measure λ.

Proof. Denote by λ the Lebesgue measure on (R, B), for which λ([a, b]) = b − a. We first show that any measure µ can be decomposed as µ = µac + µs, where µac is absolutely continuous with respect to λ and µs is singular. To get existence, define c = sup_{A∈B, λ(A)=0} µ(A). If c = 0, then µ is absolutely continuous and we are done. If c > 0, take an increasing sequence An ∈ B with λ(An) = 0 and µ(An) → c, define A = ∪_{n≥1} An and set µs(B) = µ(A ∩ B). The decomposition is unique: µ = µac⁽¹⁾ + µs⁽¹⁾ = µac⁽²⁾ + µs⁽²⁾ implies that µac⁽¹⁾ − µac⁽²⁾ = µs⁽²⁾ − µs⁽¹⁾ is both absolutely continuous and singular with respect to λ, which is only possible if both are zero. To split the singular part µs into a singular continuous and a pure point part, define the finite or countable set D = {x | µs({x}) > 0} and µpp(B) = µs(D ∩ B). We again have uniqueness, because µs = µsc⁽¹⁾ + µpp⁽¹⁾ = µsc⁽²⁾ + µpp⁽²⁾ implies that ν = µsc⁽¹⁾ − µsc⁽²⁾ = µpp⁽²⁾ − µpp⁽¹⁾ is both singular continuous and pure point, which forces ν = 0.

Definition. The Gamma function is defined for x > 0 as

    Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt .

It satisfies Γ(n) = (n − 1)! for n ∈ N. Define also the Beta function

    B(p, q) = ∫_0^1 x^{p−1} (1 − x)^{q−1} dx .

Here are some examples of absolutely continuous distributions:

ac1) The normal distribution N(m, σ²) on Ω = R has the probability density function

    f(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)} .
ac2) The Cauchy distribution on Ω = R has the probability density function

    f(x) = (1/π) · b/(b² + (x − m)²) .

ac3) The uniform distribution on Ω = [a, b] has the probability density function

    f(x) = 1/(b − a) .

ac4) The exponential distribution with parameter λ > 0 on Ω = [0, ∞) has the density function

    f(x) = λ e^{−λx} .

ac5) The log normal distribution on Ω = [0, ∞) has the density function

    f(x) = (1/√(2πx²σ²)) e^{−(log(x)−m)²/(2σ²)} .

ac6) The beta distribution on Ω = [0, 1] with parameters p, q > 0 has the density

    f(x) = x^{p−1} (1 − x)^{q−1}/B(p, q) .

ac7) The Gamma distribution on Ω = [0, ∞) with parameters α > 0, β > 0 has the density

    f(x) = x^{α−1} e^{−x/β}/(β^α Γ(α)) .

Figure. The probability density and the CDF of the normal distribution.

Figure. The probability density and the CDF of the Cauchy distribution.
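Such densities are easy to sanity-check numerically. The sketch below (illustrative only; the parameter values α = 2.5, β = 1.5 are arbitrary choices) integrates the Gamma density and its first two moments with a midpoint rule:

```python
import math

a, b = 2.5, 1.5   # shape α and scale β of the Gamma distribution

def gamma_pdf(x):
    """Gamma(α, β) density x^{α-1} e^{-x/β} / (β^α Γ(α))."""
    return x ** (a - 1) * math.exp(-x / b) / (b ** a * math.gamma(a))

h, hi = 0.001, 60.0
m0 = m1 = m2 = 0.0
for i in range(int(hi / h)):
    x = i * h + h / 2          # midpoint rule on [0, 60]
    w = gamma_pdf(x) * h
    m0 += w                    # total mass, should be ≈ 1
    m1 += x * w                # mean, should be ≈ αβ
    m2 += x * x * w            # second moment
print(m0, m1, m2 - m1 ** 2)   # ≈ 1, αβ = 3.75, αβ² = 5.625
```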
Figure. The probability density and the CDF of the uniform distribution.

Figure. The probability density and the CDF of the exponential distribution.

Definition. We use the notation

    n!/((n − k)! k!)

for the binomial coefficient, where k! = k(k − 1)(k − 2) · · · 2 · 1 is the factorial of k, with the convention 0! = 1. For example, for n = 10 and k = 3,

    10!/(7! 3!) = (10 · 9 · 8)/6 = 120 .

Examples of discrete distributions:

pp1) The binomial distribution on Ω = {0, 1, . . . , n}:

    P[X = k] = n!/((n − k)! k!) · p^k (1 − p)^{n−k} .

pp2) The Poisson distribution on Ω = N:

    P[X = k] = e^{−λ} λ^k/k! .

pp3) The discrete uniform distribution on Ω = {1, . . . , n}:

    P[X = k] = 1/n .

pp4) The geometric distribution on Ω = N = {0, 1, 2, . . . }:

    P[X = k] = p(1 − p)^k .

pp5) The distribution of first success on Ω = N \ {0} = {1, 2, 3, . . . }:

    P[X = k] = p(1 − p)^{k−1} .
Figure. The probabilities and the CDF of the binomial distribution.

Figure. The probabilities and the CDF of the Poisson distribution.

Figure. The probabilities and the CDF of the discrete uniform distribution.

Figure. The probabilities and the CDF of the geometric distribution.

An example of a singular continuous distribution:

sc1) The Cantor distribution. Let E0 = [0, 1] and E1 = [0, 1/3] ∪ [2/3, 1], where En is inductively obtained by cutting away the middle third of each interval in E_{n−1}; the Cantor set is C = ∩_{n=0}^∞ En. Define

    F(x) = lim_{n→∞} Fn(x) ,

where Fn(x) is the distribution function with density (3/2)^n · 1_{En}. One can realize a random variable with the Cantor distribution as a sum of IID random variables as follows:

    X = Σ_{n=1}^∞ Xn/3^n ,

where the Xn take the values 0 and 2 with probability 1/2 each.
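The series representation makes the Cantor distribution easy to sample. The sketch below (an illustration, truncating the series at 40 terms, with an arbitrary seed) estimates its mean and variance empirically:

```python
import random

def cantor_sample(rng, depth=40):
    """One sample X = sum_{n<=depth} X_n / 3^n with X_n in {0, 2} fair."""
    return sum(rng.choice((0, 2)) / 3 ** n for n in range(1, depth + 1))

rng = random.Random(7)
xs = [cantor_sample(rng) for _ in range(20_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)   # the Cantor distribution has mean 1/2 and variance 1/8
```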
Figure. The CDF of the Cantor distribution is continuous but not absolutely continuous. Its graph is also called a Devil's staircase. The function FX(x) is in this case called the Cantor function.

Lemma 2.12.3. Given X ∈ L with law µ. For any measurable map h : R → [0, ∞), one has

    E[h(X)] = ∫_R h(x) dµ(x) .

Especially, if µ = µac = f dx, then

    E[h(X)] = ∫_R h(x) f(x) dx ,

and if µ = µpp, then

    E[h(X)] = Σ_{x: µ({x})≠0} h(x) µ({x}) .

If h(X) is integrable, the same formulas hold with E[h(X)] = E[h⁺(X)] − E[h⁻(X)].

Proof. If the function h is nonnegative, prove the formula first for indicator functions h(X) = c · 1_{x∈A}, then for step functions h(X) ∈ S, and then, by the monotone convergence theorem, for any X ∈ L for which h(X) ∈ L¹.

Proposition 2.12.4. The absolutely continuous distributions above have the following means and variances:

    Distribution      Parameters         Mean            Variance
    ac1) Normal       m ∈ R, σ² > 0      m               σ²
    ac2) Cauchy       m ∈ R, b > 0       "m"             ∞
    ac3) Uniform      a < b              (a + b)/2       (b − a)²/12
    ac4) Exponential  λ > 0              1/λ             1/λ²
    ac5) Log-Normal   m ∈ R, σ² > 0      e^{m+σ²/2}      (e^{σ²} − 1) e^{2m+σ²}
    ac6) Beta         p, q > 0           p/(p + q)       pq/((p + q)²(p + q + 1))
    ac7) Gamma        α, β > 0           αβ              αβ²
The discrete distributions and the Cantor distribution have the following means and variances:

    Distribution        Parameters          Mean          Variance
    pp1) Binomial       n ∈ N, p ∈ [0, 1]   np            np(1 − p)
    pp2) Poisson        λ > 0               λ             λ
    pp3) Uniform        n ∈ N               (1 + n)/2     (n² − 1)/12
    pp4) Geometric      p ∈ (0, 1)          (1 − p)/p     (1 − p)/p²
    pp5) First Success  p ∈ (0, 1)          1/p           (1 − p)/p²
    sc1) Cantor         —                   1/2           1/8

Proposition 2.12.5. The means and variances in the two tables above hold.

Proof. These are direct computations, which we do in some of the examples:

Exponential distribution:

    E[X^p] = ∫_0^∞ x^p λ e^{−λx} dx = (p/λ) E[X^{p−1}] = p!/λ^p ,

so that E[X] = 1/λ and Var[X] = 2/λ² − 1/λ² = 1/λ².

Poisson distribution:

    E[X] = Σ_{k=0}^∞ k e^{−λ} λ^k/k! = λ e^{−λ} Σ_{k=1}^∞ λ^{k−1}/(k − 1)! = λ .

For calculating higher moments, one can also use the probability generating function

    E[z^X] = e^{−λ} Σ_{k=0}^∞ (λz)^k/k! = e^{−λ(1−z)}

and differentiate this identity with respect to z at the place z = 1. This gives E[X(X − 1)] = λ², so that E[X²] = λ + λ² and Var[X] = λ; for the third moment one can use E[X(X − 1)(X − 2)].

Geometric distribution: differentiating the identity for the geometric series

    Σ_{k=0}^∞ x^k = 1/(1 − x)

gives

    Σ_{k=0}^∞ k x^{k−1} = 1/(1 − x)² .

Therefore

    E[X] = Σ_{k=0}^∞ k(1 − p)^k p = p(1 − p) Σ_{k=1}^∞ k(1 − p)^{k−1} = p(1 − p)/p² = (1 − p)/p .

For calculating the higher moments one can proceed as in the Poisson case or use the moment generating function.

Cantor distribution: because one can realize a random variable with the Cantor distribution as X = Σ_{n=1}^∞ Xn/3^n,
Cantor distribution as X = Σ_{n=1}^∞ X_n/3^n, where the IID random variables X_n take the values 0 and 2 with probability p = 1/2 each. We have E[X_n] = 1 and Var[X_n] = 1, so that
E[X] = Σ_{n=1}^∞ E[X_n]/3^n = Σ_{n=1}^∞ 1/3^n = 1/(1 − 1/3) − 1 = 1/2
and
Var[X] = Σ_{n=1}^∞ Var[X_n]/9^n = Σ_{n=1}^∞ 1/9^n = 1/(1 − 1/9) − 1 = 9/8 − 1 = 1/8 .
See also corollary (3.6) for another computation.

Remark. Computations can sometimes be done in an elegant way using characteristic functions φ_X(t) = E[e^{itX}] or moment generating functions M_X(t) = E[e^{tX}]. With the moment generating function one can get the moments with the moment formula
E[X^n] = ∫_R x^n dµ = (d^n/dt^n) M_X(t)|_{t=0} .
For the characteristic function one obtains
E[X^n] = ∫_R x^n dµ = (−i)^n (d^n φ_X/dt^n)(t)|_{t=0} .

Example. The random variable X(x) = x has the uniform distribution on [0, 1]. Its moment generating function is
M_X(t) = ∫₀¹ e^{tx} dx = (e^t − 1)/t = 1 + t/2! + t²/3! + … .
A comparison of coefficients gives the moments E[X^m] = 1/(m + 1), which agrees with the moment formula.

Example. A random variable X which has the Normal distribution N(m, σ²) has the moment generating function M_X(t) = e^{tm + σ²t²/2}. For example, E[X] = M_X′(0) = m and E[X²] = M_X″(0) = m² + σ². All the moments can be obtained with the moment formula.

Example. For a Poisson distributed random variable X on Ω = N = {0, 1, 2, …} with P[X = k] = e^{−λ} λ^k/k!, the moment generating function is
M_X(t) = Σ_{k=0}^∞ P[X = k] e^{tk} = e^{λ(e^t − 1)} .

Example. A random variable X on Ω = N = {0, 1, 2, …} with the geometric distribution P[X = k] = p(1 − p)^k has the moment generating function
M_X(t) = Σ_{k=0}^∞ e^{kt} p(1 − p)^k = p Σ_{k=0}^∞ ((1 − p) e^t)^k = p/(1 − (1 − p) e^t) .
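The Cantor moment computation and the coefficient comparison for the uniform distribution can be confirmed numerically. The following Python sketch is our own illustration (all function names are ours, not from the text); it simulates X = Σ X_n/3^n by truncating the series and approximates E[X^m] for the uniform distribution by a Riemann sum.

```python
import random

random.seed(0)

def cantor_sample(depth=40):
    # X = sum_{n>=1} X_n / 3^n with IID X_n in {0, 2}, each with probability 1/2
    return sum(random.choice((0, 2)) / 3**n for n in range(1, depth + 1))

N = 50_000
xs = [cantor_sample() for _ in range(N)]
cantor_mean = sum(xs) / N                                # close to 1/2
cantor_var = sum((x - cantor_mean)**2 for x in xs) / N   # close to 1/8

def uniform_moment(m, n=100_000):
    # midpoint Riemann sum for E[X^m] = \int_0^1 x^m dx; closed form is 1/(m+1)
    return sum(((i + 0.5) / n)**m for i in range(n)) / n
```

With the seed fixed as above, the sample mean and variance land within a fraction of a percent of 1/2 and 1/8.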
A random variable X on Ω = {1, 2, 3, …} with the distribution of first success P[X = k] = p(1 − p)^{k−1} has the moment generating function
M_X(t) = Σ_{k=1}^∞ e^{kt} p(1 − p)^{k−1} = p e^t Σ_{k=0}^∞ ((1 − p) e^t)^k = p e^t/(1 − (1 − p) e^t) .

Exercise. Compute the mean and variance of the Erlang distribution with density
f(x) = λ^k x^{k−1} e^{−λx}/(k − 1)!
on the positive real line Ω = [0, ∞) with the help of the moment generating function. If k is allowed to be an arbitrary positive real number, then the Erlang distribution is called the Gamma distribution.

Definition. The kurtosis of a random variable X is defined as Kurt[X] = E[(X − E[X])⁴]/σ(X)⁴. The excess kurtosis is defined as Kurt[X] − 3. Excess kurtosis is often abbreviated by kurtosis. A distribution with positive excess kurtosis appears more peaked, a distribution with negative excess kurtosis appears more flat.

Exercise. 1. Show that the standard normal distribution has zero excess kurtosis.
2. Prove that for any a, b the random variable Y = aX + b has the same kurtosis Kurt[Y] = Kurt[X]. Now use the previous exercise to see that every normally distributed random variable has zero excess kurtosis.
3. Verify that if X, Y are independent random variables of the same distribution, then the kurtosis of the sum is the average of the kurtoses, Kurt[X + Y] = (Kurt[X] + Kurt[Y])/2.

Lemma 2.12.6. If X, Y are independent random variables, then their moment generating functions satisfy
M_{X+Y}(t) = M_X(t) · M_Y(t) .
Proof. If X and Y are independent, then e^{tX} and e^{tY} are also independent. Therefore
E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}] E[e^{tY}] = M_X(t) · M_Y(t) .

Example. The lemma can be used to compute the moment generating function of the binomial distribution. A random variable X with binomial distribution can be written as a sum of IID random variables X_i taking values 0 and 1 with probability 1 − p and p. Because we have M_{X_i}(t) = (1 − p) + p e^t, the moment generating function of X is
M_X(t) = [(1 − p) + p e^t]^n .
The moment formula allows us to compute moments E[X^n] and central moments E[(X − E[X])^n] of X. Examples:
E[X] = np
E[X²] = np(1 − p + np)
Var[X] = E[(X − E[X])²] = E[X²] − E[X]² = np(1 − p)
E[X³] = np(1 + 3(n − 1)p + (n − 1)(n − 2)p²)
E[X⁴] = np(1 + 7(n − 1)p + 6(n − 1)(n − 2)p² + (n − 1)(n − 2)(n − 3)p³)
E[(X − E[X])⁴] = E[X⁴] − 4 E[X³] E[X] + 6 E[X²] E[X]² − 3 E[X]⁴ = np(1 − p)(1 + 3(n − 2)p − 3(n − 2)p²)

Example. The sum X + Y of a Poisson distributed random variable X with parameter λ and a Poisson distributed random variable Y with parameter µ is Poisson distributed with parameter λ + µ, as can be seen by multiplying their moment generating functions.

Definition. An interesting quantity for a random variable with a continuous distribution with probability density f_X is the Shannon entropy or simply entropy
H(X) = − ∫_R f(x) log(f(x)) dx .
H(X) is allowed to be −∞ or ∞. The entropy allows to distinguish several distributions from others by asking for the distribution with the largest entropy. For example, among all distribution functions on the positive real line [0, ∞) with fixed expectation m = 1/λ, the exponential distribution λ e^{−λx} is the one with maximal entropy. We will return to these interesting entropy extremization questions later.

Example. Let us compute the entropy of the random variable X(x) = x^m on ([0, 1], B, dx). We have seen earlier that the density of X is f_X(x) = x^{1/m − 1}/m, so that
H(X) = − ∫₀¹ (x^{1/m − 1}/m) log(x^{1/m − 1}/m) dx .
To compute this integral, note first that f(x) = x^a log(x^a) = a x^a log(x) has the antiderivative a x^{1+a}((1 + a) log(x) − 1)/(1 + a)², so that ∫₀¹ x^a log(x^a) dx = −a/(1 + a)² and H(X) = 1 − m + log(m). Because (d/dm) H(X_m) = (1/m) − 1 and (d²/dm²) H(X_m) = −1/m², the entropy has its maximum at m = 1, which is the uniform distribution. Among all random variables X(x) = x^m, the random variable X(x) = x has maximal entropy, and the entropy decreases for m → ∞.

Figure. The entropy of the random variables X(x) = x^m on [0, 1] as a function of m. The maximum is attained for m = 1, where the density is uniform.

2.13 Weak convergence

Definition. Denote by C_b(R) the vector space of bounded continuous functions on R. This means that ‖f‖_∞ = sup_{x ∈ R} |f(x)| < ∞ for every f ∈ C_b(R). A sequence of Borel probability measures µ_n on R converges weakly to a probability measure µ on R if for every f ∈ C_b(R) one has
∫_R f dµ_n → ∫_R f dµ
in the limit n → ∞.

Remark. Weak convergence defines a topology on the set M₁(R) of all Borel probability measures on R. Similarly, one has a topology for M₁([a, b]). For weak convergence, it is enough to test ∫_R f dµ_n → ∫_R f dµ for a dense set in C_b(R). This dense set can consist of the space P(R) of polynomials or the space C_b^∞(R) of bounded, smooth functions. An important fact is that a sequence of random variables X_n converges in distribution to X if and only if E[h(X_n)] → E[h(X)] for all smooth functions h on the real line. This will be used in the proof of the central limit theorem.
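A concrete illustration of this definition (our own sketch; the names are ours): the discrete measures µ_n = (1/n) Σ_{k=0}^{n−1} δ_{k/n} converge weakly to the Lebesgue measure on [0, 1], since ∫ f dµ_n is a Riemann sum of ∫₀¹ f dx for every bounded continuous f.

```python
def integral_mu_n(f, n):
    # ∫ f dmu_n for mu_n = (1/n) * (delta_{0/n} + ... + delta_{(n-1)/n})
    return sum(f(k / n) for k in range(n)) / n

# test function f(x) = x^2 with ∫_0^1 f dx = 1/3
f = lambda x: x * x
gap_10 = abs(integral_mu_n(f, 10) - 1 / 3)
gap_1000 = abs(integral_mu_n(f, 1000) - 1 / 3)
```

The gap shrinks as n grows, exactly as weak convergence demands for each fixed test function.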
Lemma 2.13.1. The set M₁(I) of all probability measures on an interval I = [a, b] is a compact topological space.

Proof. We need to show that any sequence µ_n of probability measures on I has an accumulation point. The set of functions f_k(x) = x^k on [a, b] spans all polynomials and so a dense set in C_b([a, b]). The sequence µ_n converges to µ if and only if all the moments ∫_a^b x^k dµ_n converge for n → ∞ and for all k ∈ N. The compactness of M₁([a, b]) is therefore equivalent to the compactness of the product space I^N with the product topology, which is Tychonov's theorem.

Remark. In functional analysis, a more general theorem called the Banach-Alaoglu theorem is known: a closed and bounded set in the dual space X* of a Banach space X is compact with respect to the weak* topology, where the functionals µ_n converge to µ if and only if µ_n(f) converges to µ(f) for all f ∈ X. In the present case, X = C_b[a, b] and the dual space X* is the space of all signed measures on [a, b] (see [7]).

Remark. The compactness of probability measures can also be seen by looking at the distribution functions F_µ(s) = µ((−∞, s]). Given a sequence F_n of monotonically increasing functions, there is a subsequence F_{n_k} which converges to another monotonically increasing function F, which is again a distribution function. This fact generalizes to distribution functions on the line, where the limiting function F is still a right-continuous and nondecreasing function (Helly's selection theorem); but if the interval [a, b] is replaced by the real line R, the function F does not need to be a distribution function any more.

Definition. Given a distribution function F, we denote by Cont(F) the set of continuity points of F.

Definition. We say that a sequence of random variables X_n converges in distribution to a random variable X, if F_{X_n}(x) → F_X(x) pointwise for all x ∈ Cont(F).
Remark. There can be only countably many discontinuities of a distribution function F. Because F is nondecreasing and takes values in [0, 1], the only possible discontinuities are jump discontinuities. They happen at points t_i with a_i = µ({t_i}) > 0, and for every rational number p/q > 0 there are only finitely many a_i with a_i > p/q, because Σ_i a_i ≤ 1.

Definition. A sequence of random variables X_n converges weakly or in law to a random variable X, if the laws µ_{X_n} of X_n converge weakly to the law µ_X of X.
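The restriction to continuity points in the definition of convergence in distribution is necessary, as the following sketch (ours, not from the text) illustrates with the point masses δ_{1/n}, which converge in distribution to δ_0:

```python
def F_n(n, x):
    # distribution function of the point mass at 1/n
    return 1.0 if x >= 1.0 / n else 0.0

def F(x):
    # distribution function of the limiting point mass at 0
    return 1.0 if x >= 0.0 else 0.0

# at the continuity point x = 0.5 the values F_n(0.5) are eventually 1 = F(0.5);
# at the discontinuity x = 0 they do not converge to F(0): F_n(0) = 0, F(0) = 1
at_half = [F_n(n, 0.5) for n in (2, 10, 100)]
at_zero = [F_n(n, 0.0) for n in (2, 10, 100)]
```

So F_n → F pointwise on Cont(F) = R \ {0} even though F_n(0) never approaches F(0).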
Theorem 2.13.2 (Weak convergence = convergence in distribution). A sequence X_n of random variables converges in law to a random variable X if and only if X_n converges in distribution to X.

Proof. (i) Assume we have convergence in law. We want to show that we have convergence in distribution. Given s ∈ Cont(F) and δ > 0, define a continuous function f with 1_{(−∞, s]} ≤ f ≤ 1_{(−∞, s+δ]}. Then
F_n(s) = ∫ 1_{(−∞, s]} dµ_n ≤ ∫ f dµ_n ≤ ∫ 1_{(−∞, s+δ]} dµ_n = F_n(s + δ) .
This gives
limsup_{n→∞} F_n(s) ≤ lim_{n→∞} ∫ f dµ_n = ∫ f dµ ≤ F(s + δ) .
Similarly, we obtain with a continuous function satisfying 1_{(−∞, s−δ]} ≤ f ≤ 1_{(−∞, s]}
liminf_{n→∞} F_n(s) ≥ lim_{n→∞} ∫ f dµ_n = ∫ f dµ ≥ F(s − δ) .
Since F is continuous at s, we have for δ → 0
F(s) = lim_{δ→0} F(s − δ) ≤ liminf_{n→∞} F_n(s) ≤ limsup_{n→∞} F_n(s) ≤ F(s) .
That is, we have established convergence in distribution.

(ii) Assume now we have no convergence in law. There exists then a continuous function f so that ∫ f dµ_n → ∫ f dµ fails. That is, there is a subsequence and an ε > 0 such that |∫ f dµ_{n_k} − ∫ f dµ| ≥ ε > 0. There exists a compact interval I such that |∫_I f dµ_{n_k} − ∫_I f dµ| ≥ ε/2 > 0, and we can assume that µ_{n_k} and µ have support on I. The set of all probability measures on I is compact in the weak topology. Therefore, a subsequence of µ_{n_k} converges weakly to a measure ν with |ν(f) − µ(f)| ≥ ε/2. Assuming convergence in distribution, we have µ_{n_k}((−∞, s]) = F_{X_{n_k}}(s) → F_X(s) = µ((−∞, s]) for every continuity point s, so that ν((−∞, s]) = µ((−∞, s]) for all such s. Define the π-system I of all intervals {(−∞, s] | s a continuity point of F}. The measures µ and ν agree on I, and since this π-system generates the Borel σ-algebra, we know that µ = ν. This contradicts |ν(f) − µ(f)| ≥ ε/2. So, the initial assumption of having no convergence in law was wrong.

2.14 The central limit theorem

Definition. For any random variable X with nonzero variance, we denote by
X* = (X − E[X])/σ(X)
the normalized random variable, which has mean E[X*] = 0 and variance σ(X*)² = Var[X*] = 1. Given a sequence of random variables X_k, we again use the notation S_n = Σ_{k=1}^n X_k.

Theorem 2.14.1 (Central limit theorem for independent L³ random variables). Assume X_i ∈ L³ are independent and satisfy
M = sup_i ‖X_i‖₃ < ∞ ,   δ = liminf_{n→∞} (1/n) Σ_{i=1}^n Var[X_i] > 0 .
Then S_n* converges in distribution to a random variable with standard normal distribution N(0, 1):
lim_{n→∞} P[S_n* ≤ x] = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy ,  ∀x ∈ R .

Figure. The probability density function f_{S₁*} of the random variable X(x) = x on [−1, 1].
Figure. The probability density function f_{S₂*} of the random variable X(x) = x on [−1, 1].
Figure. The probability density function f_{S₃*} of the random variable X(x) = x on [−1, 1].

Lemma 2.14.2. A N(0, σ²)-distributed random variable X satisfies
E[|X|^p] = (1/√π) 2^{p/2} σ^p Γ((p + 1)/2) .
Especially E[|X|³] = √(8/π) σ³.
Proof. With the density function f(x) = (2πσ²)^{−1/2} e^{−x²/(2σ²)} we have E[|X|^p] = 2 ∫₀^∞ x^p f(x) dx, which is after a substitution z = x²/(2σ²) equal to
(1/√π) 2^{p/2} σ^p ∫₀^∞ z^{(p+1)/2 − 1} e^{−z} dz .
The integral to the right is by definition equal to Γ((p + 1)/2).

After this preliminary computation, we turn to the proof of the central limit theorem.

Proof. In order to show the theorem, we have to prove E[f(S_n*)] − E[f(S̃_n)] → 0 for any f ∈ C_b(R). It is enough to verify it for smooth f of compact support. Define for fixed n ≥ 1 the random variables
Y_i = (X_i − E[X_i])/σ(S_n) ,  1 ≤ i ≤ n ,
so that S_n* = Σ_{i=1}^n Y_i. Define N(0, σ_i²)-distributed random variables Ỹ_i with σ_i² = Var[Y_i], having the property that the set of random variables {Y₁, …, Y_n, Ỹ₁, …, Ỹ_n} is independent. The distribution of S̃_n = Σ_{i=1}^n Ỹ_i is just the normal distribution N(0, 1). Define
Z_k = Ỹ₁ + … + Ỹ_{k−1} + Y_{k+1} + … + Y_n .
Note that Z₁ + Y₁ = S_n* and Z_n + Ỹ_n = S̃_n. Using first a telescopic sum and then Taylor's theorem, we can write
f(S_n*) − f(S̃_n) = Σ_{k=1}^n [f(Z_k + Y_k) − f(Z_k + Ỹ_k)]
= Σ_{k=1}^n [f′(Z_k)(Y_k − Ỹ_k)] + Σ_{k=1}^n (1/2)[f″(Z_k)(Y_k² − Ỹ_k²)] + Σ_{k=1}^n [R(Z_k, Y_k) − R(Z_k, Ỹ_k)]
with a Taylor rest term R(Z, Y), which can depend on f. Since Z_k is independent of Y_k and of Ỹ_k, and since E[Y_k] = E[Ỹ_k] = 0 and E[Y_k²] = E[Ỹ_k²], the expectations of the first two sums vanish. We get therefore
|E[f(S_n*)] − E[f(S̃_n)]| ≤ Σ_{k=1}^n E[|R(Z_k, Y_k)|] + E[|R(Z_k, Ỹ_k)|] .
Because the Ỹ_k are N(0, σ_k²)-distributed, we get by lemma (2.14.2) and the Jensen inequality
E[|Ỹ_k|³] = √(8/π) σ_k³ = √(8/π) E[Y_k²]^{3/2} ≤ √(8/π) E[|Y_k|³] .
Taylor's theorem gives |R(Z_k, Y_k)| ≤ const · |Y_k|³, so that
Σ_{k=1}^n E[|R(Z_k, Y_k)|] + E[|R(Z_k, Ỹ_k)|] ≤ const · Σ_{k=1}^n E[|Y_k|³]
≤ const · n · sup_i E[|X_i|³]/Var[S_n]^{3/2}
= const · (sup_i ‖X_i‖₃³/(Var[S_n]/n)^{3/2}) · (1/√n)
≤ const · (M³/δ^{3/2}) · (1/√n) → 0 .
We have seen that for every smooth f ∈ C_b(R) there exists a constant C(f) such that |E[f(S_n*)] − E[f(S̃_n)]| ≤ C(f)/√n.

If we assume the X_i to be identically distributed, we can relax the condition X_i ∈ L³ to X_i ∈ L²:

Theorem 2.14.3 (Central limit theorem for IID L² random variables). If X_i ∈ L² are IID and satisfy 0 < Var[X_i], then S_n* converges weakly to a random variable with standard normal distribution N(0, 1).

Proof. The previous proof can be modified. We change the estimation of the Taylor rest term to |R(z, y)| ≤ δ(y) · y² with δ(y) → 0 for y → 0. Using the IID property we can estimate
R = Σ_{k=1}^n E[|R(Z_k, Y_k)|] + E[|R(Z_k, Ỹ_k)|]
≤ Σ_{k=1}^n E[δ(Y_k) Y_k²] + E[δ(Ỹ_k) Ỹ_k²]
= n · E[δ(X₁/(σ√n)) X₁²/(σ² n)] + n · E[δ(X̃₁/(σ√n)) X̃₁²/(σ² n)]
= E[δ(X₁/(σ√n)) X₁²/σ²] + E[δ(X̃₁/(σ√n)) X̃₁²/σ²] .
Both terms converge to zero for n → ∞ because of the dominated convergence theorem: one has δ(X₁/(σ√n)) X₁²/σ² → 0 pointwise almost everywhere, because δ(y) → 0 and X₁ ∈ L², so the expectation goes to zero as n → ∞. Note that the function δ, which depends on the test function f, is bounded, so that the dominating function needed in the dominated convergence theorem exists: it is C X₁² for some constant C.

For more general versions of the central limit theorem,
see [109].

The central limit theorem can be interpreted as a solution to a fixed point problem:

Definition. Let P^{0,1} be the set of probability measures µ on (R, B) which have the properties that ∫_R x² dµ(x) = 1 and ∫_R x dµ(x) = 0. Define the map
T µ(A) = ∫_R ∫_R 1_A((x + y)/√2) dµ(x) dµ(y)
on P^{0,1}. If µ is the law of a random variable X, then T(µ) is the law of the normalized random variable (X + Y)/√2, because independent random variables X, Y with Var[X] = Var[Y] = 1 and E[X] = E[Y] = 0 can be realized on the probability space (R², B, µ × µ) as the coordinate functions X((x, y)) = x, Y((x, y)) = y.

Corollary 2.14.4. The only attractive fixed point of T on P^{0,1} is the law of the standard normal distribution.

Proof. If µ is the law of X, then T^n(µ) is the law of (S_{2^n})*, which converges in distribution to N(0, 1) by the central limit theorem.

For independent 0-1 experiments with win probability p ∈ (0, 1), denote by B(n, p) the binomial distribution on {0, 1, …, n}, and by P_α the Poisson distribution on N. In this case, the central limit theorem is quite old:

Corollary 2.14.5 (De Moivre-Laplace limit theorem). The distribution of X_n* converges to the normal distribution if X_n has the binomial distribution B(n, p). In this case
lim_{n→∞} P[(S_n − np)/√(np(1 − p)) ≤ x] = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy ,
as had been shown by de Moivre in 1730 in the case p = 1/2 and for general p ∈ (0, 1) by Laplace in 1812.

Proof. It is a direct consequence of the central limit theorem.

The next limit theorem for discrete random variables illustrates why the Poisson distribution on N is natural.
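The de Moivre-Laplace statement, and the central limit theorem generally, are easy to test by simulation. The sketch below is our own (not from the text); it sums n uniform random variables, normalizes, and compares the empirical distribution function with Φ.

```python
import math
import random

random.seed(1)

def phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, trials, x = 50, 20_000, 1.0
count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))       # S_n for uniform [0,1] summands
    sn_star = (s - n * 0.5) / math.sqrt(n / 12.0)    # S_n^*: mean 0, variance 1
    if sn_star <= x:
        count += 1
empirical = count / trials                           # estimates P[S_n^* <= 1]
```

With this seed, `empirical` lies within sampling error of Φ(1) ≈ 0.8413.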
Theorem 2.14.6 (Poisson limit theorem). Let X_n be B(n, p_n)-distributed and suppose np_n → α. Then X_n converges in distribution to a random variable X with Poisson distribution with parameter α.

Proof. We have to show that P[X_n = k] → P[X = k] for each fixed k ∈ N. Indeed,
P[X_n = k] = (n choose k) p_n^k (1 − p_n)^{n−k} = (n(n − 1)(n − 2) ⋯ (n − k + 1)/k!) p_n^k (1 − p_n)^{n−k}
∼ ((n p_n)^k/k!) (1 − n p_n/n)^{n−k} → (α^k/k!) e^{−α} .

Figure. The binomial distribution B(2, 1/2) has its support on {0, 1, 2}.
Figure. The binomial distribution B(5, 1/5) has its support on {0, 1, …, 5}.
Figure. The Poisson distribution with α = 1 on N = {0, 1, 2, …}.

Definition. It is custom to use the notation
Φ(s) = F_X(s) = (1/√(2π)) ∫_{−∞}^s e^{−y²/2} dy
for the distribution function of a random variable X which has the standard normal distribution N(0, 1).

Exercise. a) Justify that one can estimate for large n probabilities
P[a ≤ S_n* ≤ b] ∼ Φ(b) − Φ(a) .
b) Assume the X_i are all uniformly distributed random variables in [0, 1]. Estimate for large n the probability P[|S_n/n − 0.5| ≥ ε] in terms of Φ, ε and n.
c) Compare the result in b) with the estimate obtained in the weak law of large numbers.

Exercise. Define for λ > 0 the transformation
T_λ(µ)(A) = ∫_R ∫_R 1_A((x + y)/λ) dµ(x) dµ(y)
in P = M₁(R), the set of all Borel probability measures on R. For which λ can you describe the limit?

2.15 Entropy of distributions

Denote by ν a (not necessarily finite) measure on a measure space (Ω, A). An example is the Lebesgue measure on R or the counting measure on N. Note that the measure needs to be defined only on a δ-subring of A, since we did not assume that ν is finite.

Definition. A probability measure µ on R is called ν-absolutely continuous, if there exists a density f ∈ L¹(ν) such that µ = f ν. One writes µ ≪ ν. The fact that the relation µ ≪ ν defined earlier is equivalent to this is called the Radon-Nykodym theorem (3.1); the function f is therefore called the Radon-Nykodym derivative of µ with respect to ν.

Definition. Call P(ν) the set of all ν-absolutely continuous probability measures. In other words, the set P(ν) is the set of functions f ∈ L¹(ν) satisfying f ≥ 0 and ∫ f(x) dν(x) = 1.

Example. If ν is the counting measure on N = {0, 1, 2, …} and µ is the law of the geometric distribution with parameter p, then the density is f(k) = p(1 − p)^k.

Example. If ν is the Lebesgue measure on (−∞, ∞) and µ is the law of the standard normal distribution, then the density is f(x) = e^{−x²/2}/√(2π). There is a multivariable calculus trick using polar coordinates, which immediately shows that f is a density:
∫∫_{R²} e^{−(x² + y²)/2} dx dy = ∫₀^∞ ∫₀^{2π} e^{−r²/2} r dθ dr = 2π .
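The polar coordinate identity can be double-checked by brute force: since the integrand factorizes, the double integral over R² is the square of a one-dimensional integral. A small numerical sketch (our own, not from the text):

```python
import math

def gauss_double_integral(L=8.0, n=4000):
    # midpoint rule for ∫ e^{-x^2/2} dx over [-L, L]; the double integral
    # over the square [-L, L]^2 is its square, and should be close to 2*pi
    h = 2 * L / n
    one_d = sum(math.exp(-(-L + (i + 0.5) * h)**2 / 2.0) for i in range(n)) * h
    return one_d**2

value = gauss_double_integral()
```

The truncation to [−8, 8] costs only about e^{−32}, far below the discretization error.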
Definition. For any probability measure µ = f ν ∈ P(ν) define the entropy
H(µ) = ∫_Ω −f(ω) log(f(ω)) dν(ω) .
It generalizes the earlier defined Shannon entropy, where the assumption had been dν = dx. It has first been considered by Gibbs in 1902.

Example. Let ν be the Lebesgue measure on R. If µ = f dx has a density function f, we have
H(µ) = ∫_R −f(x) log(f(x)) dx .
For example, for the standard normal distribution µ with probability density function f(x) = (1/√(2π)) e^{−x²/2}, the entropy is H(f) = (1 + log(2π))/2.

Example. Let ν be the Lebesgue measure dx on Ω = R⁺ = [0, ∞). A random variable on Ω with probability density function f(x) = λ e^{−λx} is called the exponential distribution. It has the mean 1/λ. The entropy of this distribution is 1 − log(λ).

Example. Let ν be the counting measure on a countable set Ω, where A is the σ-algebra of all subsets of Ω and the measure ν is defined on the δ-ring of all finite subsets of Ω. In this case
H(µ) = Σ_{ω ∈ Ω} −f(ω) log(f(ω)) .
For example, for Ω = N = {0, 1, 2, 3, …} with counting measure ν, the geometric distribution P[{k}] = p(1 − p)^k has the entropy
Σ_{k=0}^∞ −(1 − p)^k p log((1 − p)^k p) = −log(p) − ((1 − p)/p) log(1 − p) .

Example. Let ν be a probability measure on R, f a density and A = {A₁, …, A_n} a partition of R. For the step function
f̃ = Σ_{i=1}^n (∫_{A_i} f dx) 1_{A_i} ∈ S(ν) ,
the entropy H(f̃ ν) is equal to
H({A_i}) = Σ_i −ν(A_i) log(ν(A_i)) ,
which is called the entropy of the partition {A_i}. The approximation of the density f by step functions f̃ is called coarse graining, and the entropy of f̃ is called the coarse grained entropy.
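The entropy values above can be verified numerically. The sketch below is our own (names ours): it truncates the geometric series, integrates the normal density, and checks on a four-point set that no distribution beats the uniform one — anticipating the maximum entropy results that follow.

```python
import math

def geometric_entropy_series(p, terms=1000):
    # -sum_k (1-p)^k p log((1-p)^k p), truncated; compare the closed form below
    return -sum((1 - p)**k * p * math.log((1 - p)**k * p) for k in range(terms))

def geometric_entropy_closed(p):
    return -math.log(p) - (1 - p) / p * math.log(1 - p)

def normal_entropy_numeric(n=200_000, L=10.0):
    # midpoint rule for -∫ f log f dx with the standard normal density f
    h = 2 * L / n
    c = 1.0 / math.sqrt(2 * math.pi)
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        dens = c * math.exp(-x * x / 2.0)
        total -= dens * math.log(dens)
    return total * h

def finite_entropy(weights):
    # H for a distribution on a finite set with strictly positive weights
    return -sum(w * math.log(w) for w in weights)
```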
Remark. In ergodic theory, where one studies measure preserving transformations T of probability spaces, one is interested in the growth rate of the entropy of partitions generated by A, T(A), …, T^n(A). This leads to the notion of an entropy of a measure preserving transformation, the Kolmogorov-Sinai entropy.

Interpretation. Assume that Ω is finite, that ν is the counting measure and that µ({ω}) = f(ω) is the probability distribution of a random variable describing the measurement of an experiment. If the event {ω} happens, then −log(f(ω)) is a measure for the information or "surprise" that the event happens. The averaged information or surprise is
H(µ) = Σ_ω −f(ω) log(f(ω)) .
If f takes only the values 0 or 1, which means that µ is deterministic, then H(µ) = 0: there is no surprise then, and the measurements show a unique value. On the other hand, if f is the uniform distribution on Ω, then H(µ) = log(|Ω|), which is larger than 0 if Ω has more than one element. We will see in a moment that the uniform distribution is the one with maximal entropy.

Definition. Given two probability measures µ = f ν and µ̃ = f̃ ν which are both absolutely continuous with respect to ν, define the relative entropy
H(µ̃|µ) = ∫_Ω f̃(ω) log(f̃(ω)/f(ω)) dν(ω) ∈ [0, ∞] .
It is the expectation E_µ̃[l] of the likelihood coefficient l = log(f̃(x)/f(x)). We write also H(f̃|f) instead of H(µ̃|µ). The negative relative entropy −H(µ̃|µ) is also called the conditional entropy.

Theorem 2.15.1 (Gibbs inequality). 0 ≤ H(µ̃|µ) ≤ +∞, and H(µ̃|µ) = 0 if and only if µ = µ̃.

Proof. We can assume H(µ̃|µ) < ∞. The function u(x) = x log(x) is convex
on R⁺ = [0, ∞) and satisfies u(x) ≥ x − 1. Since both f and f̃ are densities,
H(µ̃|µ) = ∫_Ω f̃(ω) log(f̃(ω)/f(ω)) dν(ω)
= ∫_Ω f(ω) (f̃(ω)/f(ω)) log(f̃(ω)/f(ω)) dν(ω)
= ∫_Ω f(ω) u(f̃(ω)/f(ω)) dν(ω)
≥ ∫_Ω f(ω) (f̃(ω)/f(ω) − 1) dν(ω)
= ∫_Ω f̃(ω) − f(ω) dν(ω) = 1 − 1 = 0 .
If µ = µ̃, then f̃(ω)/f(ω) = 1 almost everywhere and H(µ̃|µ) = 0. On the other hand, if H(µ̃|µ) = 0, then by the Jensen inequality
0 = E_µ[u(f̃/f)] ≥ u(E_µ[f̃/f]) = u(1) = 0 .
The strict convexity of u implies that f̃/f must be a constant almost everywhere, so the equality f = f̃ must hold almost everywhere.

Remark. The relative entropy can be used to measure the distance between two distributions; it is not a metric, however. The relative entropy is also known under the name Kullback-Leibler divergence or Kullback-Leibler metric, if ν = dx [88].

Theorem 2.15.2 (Distributions with maximal entropy). The following distributions have maximal entropy; each is unique with this property.
a) If Ω is finite with counting measure ν, the uniform distribution has maximal entropy among all distributions on Ω.
b) On Ω = N with counting measure ν, the geometric distribution with parameter p = (1 + c)^{−1} has maximal entropy among all distributions on N = {0, 1, 2, 3, …} with fixed mean c.
c) On Ω = {0, 1}^N with counting measure ν, the product distribution η^N, where η(1) = p, η(0) = 1 − p with p = c/N, has maximal entropy among all distributions satisfying E[S_N] = c, where S_N(ω) = Σ_{i=1}^N ω_i.
d) On Ω = [0, ∞) with Lebesgue measure ν, the exponential distribution with density f(x) = λ e^{−λx} has maximal entropy among all distributions with fixed mean c = 1/λ.
e) On Ω = R with Lebesgue measure ν, the normal distribution N(m, σ²) has maximal entropy among all distributions with fixed mean m and fixed variance σ².
f) Finite measures. Let (Ω, A) be an arbitrary measure space for which 0 < ν(Ω) < ∞. Then the measure with uniform distribution f = 1/ν(Ω) has maximal entropy among all other measures on Ω.

Proof. Let µ = f ν be the measure of the distribution for which we want to prove maximal entropy and let µ̃ = f̃ ν be any other measure. The aim is to show
H(µ̃|µ) = H(µ) − H(µ̃) ,
which implies the maximality, since by the Gibbs inequality lemma (2.15.1), H(µ̃|µ) ≥ 0. Since
H(µ̃|µ) = −H(µ̃) − ∫_Ω f̃(ω) log(f(ω)) dν ,
we have to show in each case
H(µ) = − ∫_Ω f̃(ω) log(f(ω)) dν .   (2.11)
With H(µ̃|µ) = H(µ) − H(µ̃) we also have uniqueness: if two measures µ̃, µ have maximal entropy, then H(µ̃|µ) = 0, so that by the Gibbs inequality lemma (2.15.1), µ = µ̃.

a) The density f = 1/|Ω| is constant. Therefore H(µ) = log(|Ω|) = −∫_Ω f̃ log(f) dν, and equation (2.11) holds.
b) The geometric distribution on N = {0, 1, 2, …} satisfies P[{k}] = f(k) = p(1 − p)^k. We have computed its entropy before as
−log(p) − ((1 − p)/p) log(1 − p) .
Because log(f(k)) = log(p) + k log(1 − p), the integral −∫ f̃ log(f) dν depends only on the mean; the claim follows since E[X] = ∫ x dµ̃(x) was assumed to be fixed for all distributions.
c) The discrete density is f(ω) = p^{S_N}(1 − p)^{N − S_N}, so that log(f(ω)) = S_N log(p) + (N − S_N) log(1 − p) and
Σ_ω f̃(ω) log(f(ω)) = E_µ̃[S_N] log(p) + (N − E_µ̃[S_N]) log(1 − p) .
The claim follows since we fixed E[S_N].
d) The density is f(x) = α e^{−αx}, so that log(f(x)) = log(α) − αx. The claim follows since E[X] = ∫ x dµ̃(x) was assumed to be fixed for all distributions.
e) For the normal distribution, log(f(x)) = a + b(x − m)² with two real numbers a, b depending only on m and σ. The claim follows since we fixed Var[X] = E[(x − m)²] for all distributions.
f) After normalizing ν to a probability measure, the density f = 1 is constant. Therefore H(µ) = 0, which is also the right hand side of equation (2.11).

Remark. This result has relations to the foundations of thermodynamics, where one considers the phase space of N particles moving in a finite region in Euclidean space. The energy surface is then a compact surface Ω, and the motion on this surface leaves a measure ν invariant, which is induced from the flow-invariant Lebesgue measure. The measure ν is called the microcanonical ensemble. According to f) in the above, it is the measure which maximizes entropy.

Remark. Let us try to get the maximal distribution using calculus of variations. In order to find the maximum of the functional
H(f) = − ∫ f log(f) dν
on L¹(ν) under the constraints
F(f) = ∫ f dν = 1 ,  G(f) = ∫ X f dν = c ,
we have to find the critical points of H̃ = H − λF − µG. In infinite dimensions, constrained critical points are points where the Lagrange equations
(∂/∂f) H(f) = λ (∂/∂f) F(f) + µ (∂/∂f) G(f) ,  F(f) = 1 ,  G(f) = c
are satisfied. The derivative ∂/∂f is the functional derivative, and λ, µ are the Lagrange multipliers. We find (f, λ, µ) as a solution of the system of equations
−1 − log(f(x)) = λ + µx ,
∫_Ω f(x) dν(x) = 1 ,
∫_Ω x f(x) dν(x) = c
by solving the first equation for f: f = e^{−1 − λ − µx}. Dividing the third equation by the second, we can get µ from the equation
∫ x e^{−µx} dν(x) = c ∫ e^{−µx} dν(x)
and λ from the equation e^{1 + λ} = ∫ e^{−µx} dν(x). Because the Hessian D²(H̃) = −1/f is negative definite, it is also negative definite when restricted to the surface in L¹ determined by the restrictions F = 1, G = c. This indicates that we have found a global maximum. This variational approach produces critical points of the entropy. Distributions which maximize the entropy, possibly under some constraint, are mathematically natural because they are critical points of a variational principle. Physically, they are natural, because nature prefers them.

Example. For Ω = R and X(x) = x², we get the normal distribution N(0, 1).

Example. For Ω = N and X(n) = ε_n, we get f(n) = e^{−ε_n λ₁}/Z(f) with Z(f) = Σ_n e^{−ε_n λ₁}, where λ₁ is determined by Σ_n ε_n e^{−ε_n λ₁}/Z(f) = c. This is called the discrete Maxwell-Boltzmann distribution. In physics, one writes λ₁^{−1} = kT with the Boltzmann constant k, determining T, the temperature.

Here is a dictionary matching some notions in probability theory with corresponding terms in statistical physics. The statistical physics jargon is often more intuitive.

Probability theory            | Statistical mechanics
Set Ω                         | Phase space
Measure space                 | Thermodynamic system
Random variable               | Observable (for example energy)
Probability density           | Thermodynamic state
Entropy                       | Boltzmann-Gibbs entropy
Densities of maximal entropy  | Thermodynamic equilibria
Central limit theorem         | Maximal entropy principle

From the statistical mechanical point of view, the extremal properties of entropy
A) is called the exponential family defined by ν and X. we have H(µλ µ) = − log(Z(λ)) + λEµλ [X] . Theorem 2. For all probability measures µ ˜ which are absolutely continuous with respect to ν. Consider the moment generating function Z(λ) = Eµ [eλX ] and define the interval Λ = {λ ∈ R  Z(λ) < ∞ } in R. The minimum is obtained for µ ˜ = µλ . Corollary 2. we have for all λ ∈ Λ H(˜ µµ) − λEµ˜ [X] ≥ − log Z(λ) . b) If we fix λ by requiring Eµλ [X] = c. For every µ ˜ = f˜ν. Therefore H(˜ µµ) − λEµ˜ [X] = H(˜ µµλ ) − log(Z(λ)) ≥ − log Z(λ) . then µλ maximizes the entropy H(˜ µ) among all measures µ ˜ satisfying Eµ˜ [X] = c. Definition.15. . Given f ∈ L1 leading to the probability measure µ = f ν. where large systems are modeled with statistical methods. The set {µλ  λ ∈ Λ } of measures on (Ω.4. For every λ ∈ Λ we can define a new probability measure µλ = f λ ν = eλX µ Z(λ) on Ω. (Minimizers for relative entropy) a) µλ minimizes the relative entropy µ ˜ 7→ H(˜ µµ) among all νabsolutely continuous measures µ ˜ with fixed Eµ˜ [X].3 (Minimizing relative entropy).15. we have Z f˜ f˜λ H(˜ µµ) = f˜ log( · ) dν f˜λ f Ω = H(˜ µµλ ) + (− log(Z(λ)) + λEµ˜ [X]) . Given a measure space (Ω. Thermodyanamic equilibria often extremize variational problems in a given set of measures. A) with a not necessarily finite measure ν and a random variable X ∈ L. Limit theorems offer insight into thermodynamics. For µ ˜ = µλ .110 Chapter 2. The minimum − log Z(λ) is obtained for µλ . Proof.
µλ ) = −H(˜ µ) + (− log(Z)) − λEµ [X] = −H(˜ µ) + H(µλ ) . Corollary 2. the measure µλ is called the Gibbs distribution or Gibbs canonical ensemble for the observable X and Z(λ) is called the partition function. the Gaussian distribution maximizes the entropy. This corollary can also be proved R by calculus of variations. Entropy of distributions 111 Proof. In physics. The above theorem shows that µλ is minimizing that. 1. Take on the real line Hamiltonian X(x) = x2 and a measure R the 2 µ = f dx. b) If µ = f ν.15. We illustrated two physical principles: nature maximizes entropy when the energy is fixed and minimizes the free energy. Remark. and to determine the Lagrange multiplier λ by Eµλ [X] = c.5. H(˜ µµ) = −H(˜ µ). . Example.2. Example. then µλ maximizes F (µ) = H(µ) + λEµ [X] among all measures µ ˜ which are absolutely continuous with respect to µ. .15. . Adding the restriction that X has a specific expectation value c = Eµ [X] leads to the probability measure µλ . 2. The . one uses the notation λ = −(kT )−1 . In statistical mechanics. } and X(k) = k and let ν be the counting measure on Ω and µ the Poisson measure with parameter 1. the micro canonical ensemble. we get the energy x dµ. namely by finding the minimum of F (f ) = f log(f ) + Xf dν under the constraint R f dν = 1. The claim follows from the theorem since a minimum of H(˜ µµ) − λEµ˜ [X] corresponds to a maximum of F (µ). then 0 ≤ H(˜ µ. Among all symmetric distributions fixing the energy. Let Ω = N = {0. If ν = µ is a probability measure. The measure µ is the a priori model. the canonical ensemble. Proof. Since then f = 1. Maximizing H(µ) − (kT )−1 Eµ [X] is the same as minimizing Eµ [X] − kT H(µ) which is called the free energy if X is the Hamiltonian and Eµ [X] is the energy. Take µ = ν. a) Minimizing µ ˜ 7→ H(˜ µµ) under the constraint Eµ˜ [X] = c is equivalent to minimize H(˜ µµ) − λEµ˜ [X]. when energy is not fixed. µλ = e−λX f /Z. where T is the temperature.
The partition function is

  Z(λ) = Σ_k e^{λk} e^{−1}/k! = exp(e^λ − 1) ,

so that Λ = R and µλ is given by the weights

  µλ(k) = exp(−e^λ + 1) · e^{λk} e^{−1}/k! = e^{−α} α^k/k! , where α = e^λ .

The exponential family of the Poisson measure is therefore the family of all Poisson measures.

Example. The geometric distribution on N = {0, 1, 2, ...} is an exponential family.

Example. The product measure on Ω = {0, 1}^N with win probability p is an exponential family with respect to X(k) = k.

Example. Take Ω = {1, ..., N}, let ν be the counting measure and let µp be the binomial distribution with parameter p. Take µ = µ1/2 and X(k) = k. Since µp is obtained from µ1/2 by an exponential tilt in k with λ = log(p/(1 − p)), and since 0 ≤ H(µ̃|µp) = −H(µ̃) + H(µp) for measures µ̃ with fixed E[X], the family {µp} is an exponential family.

Remark. There is an obvious generalization of the maximum entropy principle to the case when we have finitely many random variables {Xi}, i = 1, ..., n. Given µ = fν we define the (n-dimensional) exponential family

  µλ = fλ ν = e^{Σ_i λi Xi} µ / Z(λ) ,

where Z(λ) = Eµ[e^{Σ_i λi Xi}] is the partition function defined on a subset Λ of Rⁿ.

Theorem 2.15.6. For all probability measures µ̃ which are absolutely continuous with respect to ν, we have for all λ ∈ Λ

  H(µ̃|µ) − Σ_i λi Eµ̃[Xi] ≥ − log Z(λ) .

The minimum − log Z(λ) is obtained for µλ. If we fix λi by requiring Eµλ[Xi] = ci, then µλ maximizes the entropy H(µ̃) among all measures µ̃ satisfying Eµ̃[Xi] = ci. Assume ν = µ is a probability measure; then µλ maximizes F(µ̃) = H(µ̃) + Σ_i λi Eµ̃[Xi].
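The Poisson computation above can be confirmed numerically: tilting the Poisson(1) weights by e^{λk} should reproduce the Poisson(α) weights with α = e^λ, and the normalization should equal exp(e^λ − 1). A short sketch (the truncation level N = 60 and λ = 0.5 are our own choices; the tails beyond N are negligible here):

```python
import math

lam = 0.5
alpha = math.exp(lam)
N = 60  # truncation of the state space {0, 1, 2, ...}

# weights of the Poisson(1) measure mu
mu = [math.exp(-1.0) / math.factorial(k) for k in range(N)]
# normalization of the tilted measure
Zlam = sum(math.exp(lam * k) * mu[k] for k in range(N))
tilted = [math.exp(lam * k) * mu[k] / Zlam for k in range(N)]

# the tilted measure is Poisson(alpha) with alpha = e^lam
poisson_alpha = [math.exp(-alpha) * alpha**k / math.factorial(k) for k in range(N)]
assert max(abs(a - b) for a, b in zip(tilted, poisson_alpha)) < 1e-9
# and Z(lam) = exp(e^lam - 1), up to the truncation error
assert abs(Zlam - math.exp(math.exp(lam) - 1.0)) < 1e-9
```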
Proof. Take the same proofs as before, replacing λX with λ · X = Σ_i λi Xi.

2.16 Markov operators

Definition. Given a not necessarily finite probability space (Ω, A, ν), a linear operator P : L¹(Ω) → L¹(Ω) is called a Markov operator if

  f ≥ 0 ⇒ Pf ≥ 0 ,  f ≥ 0 ⇒ ||Pf||₁ = ||f||₁ .

Remark. In other words, a Markov operator P has to leave the closed positive cone L¹₊ = {f ∈ L¹ | f ≥ 0} invariant and preserve the norm on that cone. A Markov operator on (Ω, A, ν) leaves invariant the set D(ν) = {f ∈ L¹ | f ≥ 0, ||f||₁ = 1} of probability densities. These densities correspond bijectively to the set P(ν) of probability measures which are absolutely continuous with respect to ν. A Markov operator is therefore also called a stochastic operator.

Example. Let T be a measure preserving transformation on (Ω, A, ν). It is called nonsingular if T*ν is absolutely continuous with respect to ν. The unique operator P : L¹ → L¹ satisfying

  ∫_A Pf dν = ∫_{T⁻¹A} f dν

is called the Perron-Frobenius operator associated to T. It is a Markov operator.

Remark. Closely related is the Koopman operator Pf(x) = f(Tx) for measure preserving invertible transformations. This Koopman operator is often studied on L², but it becomes a Markov operator when considered as a transformation on L¹.

Exercise. Verify that the Perron-Frobenius operator for the tent map

  T(x) = 2x for x ∈ [0, 1/2] ,  T(x) = 2(1 − x) for x ∈ [1/2, 1]

on Ω = [0, 1] with the Lebesgue measure µ is Pf(x) = (1/2)(f(x/2) + f(1 − x/2)).

Here is an abstract version of the Jensen inequality (2.5.1). It is due to M. Kuczma; see [63].
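The defining identity of the Perron-Frobenius operator can be checked numerically for the tent map: for A = [0, a] one has T⁻¹(A) = [0, a/2] ∪ [1 − a/2, 1]. The following sketch (test function f(x) = x², a = 0.3 and the midpoint quadrature are our own choices) verifies ∫_A Pf dν = ∫_{T⁻¹A} f dν and the mass preservation of a Markov operator:

```python
def P(f):
    # Perron-Frobenius operator of the tent map on [0,1]
    return lambda x: 0.5 * (f(x / 2) + f(1 - x / 2))

def integral(g, a, b, n=20000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
a = 0.3
# T^{-1}([0, a]) = [0, a/2] ∪ [1 - a/2, 1]
lhs = integral(P(f), 0.0, a)
rhs = integral(f, 0.0, a / 2) + integral(f, 1 - a / 2, 1.0)
assert abs(lhs - rhs) < 1e-6
# P preserves the total mass of nonnegative densities
assert abs(integral(P(f), 0.0, 1.0) - integral(f, 0.0, 1.0)) < 1e-6
```

Since P1 = 1 here, the Lebesgue measure is an invariant (stationary) density of the tent map.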
Theorem 2.16.1 (Jensen inequality for positive operators). Given a convex function u and an operator P : L¹ → L¹ mapping positive functions into positive functions and satisfying P1 = 1, then u(Pf) ≤ P u(f) for all f ∈ L¹₊ for which P u(f) exists.

Proof. We have to show u(Pf)(ω) ≤ P u(f)(ω) for almost all ω ∈ Ω. Given x = (Pf)(ω), there exists by definition of convexity a linear function y ↦ ay + b such that u(x) = ax + b and u(y) ≥ ay + b for all y ∈ R. Since af + b ≤ u(f) and P is positive,

  u(Pf)(ω) = a(Pf)(ω) + b = P(af + b)(ω) ≤ P(u(f))(ω) .

The following theorem states that relative entropy does not increase along orbits of Markov operators.

Theorem 2.16.2 (Voigt, 1981). Given a Markov operator P which maps {f > 0} into itself. For all f, g ∈ L¹₊,

  H(Pf|Pg) ≤ H(f|g) .

Especially, since H(f|1) = −H(f) is the entropy, a Markov operator does not decrease entropy: H(Pf) ≥ H(f).

Remark. The assumption that {f > 0} is mapped into itself is actually not necessary, but it simplifies the proof.

Proof. We can assume that {g(ω) = 0} ⊂ A = {f(ω) = 0}, because nothing is to show in the case H(f|g) = ∞. By restriction to the measure space (Aᶜ, A ∩ Aᶜ, ν(· ∩ Aᶜ)), we can assume f > 0, g > 0, so that by our assumption also Pf > 0 and Pg > 0.
(i) Assume first (f/g)(ω) ≤ c for some constant c ∈ R. For fixed g, the linear operator Rh = P(hg)/P(g) maps positive functions into positive functions. Take the convex function u(x) = x log(x) and put h = f/g. Using Jensen's inequality, we get

  (Pf/Pg) log(Pf/Pg) = u(Rh) ≤ R u(h) = P(f log(f/g))/Pg ,
which is equivalent to Pf log(Pf/Pg) ≤ P(f log(f/g)). Integration gives

  H(Pf|Pg) = ∫ Pf log(Pf/Pg) dν ≤ ∫ P(f log(f/g)) dν = ∫ f log(f/g) dν = H(f|g) .

(ii) In general, define fk = inf(f, kg), so that fk/g ≤ k. We have fk ≤ fk+1 and fk → f in L¹. On B = {f ≤ g} we have fk log(fk/g) = f log(f/g), and on Ω \ B we have fk log(fk/g) ≤ fk+1 log(fk+1/g) → f log(f/g), so that by the Lebesgue dominated convergence theorem (2.4.3),

  H(f|g) = lim_{k→∞} H(fk|g) .

We can assume H(f|g) < ∞ because the result is trivially true in the other case. From (i) we know that H(Pfk|Pg) ≤ H(fk|g). As an increasing sequence, Pfk converges to Pf almost everywhere. The elementary inequality x log(x) − x ≥ x log(y) − y for all x ≥ y ≥ 0 gives

  (Pfk) log(Pfk) − (Pfk) log(Pg) − (Pfk) + (Pg) ≥ 0 .

Integration gives with Fatou's lemma (2.4.2)

  H(Pf|Pg) − ||Pf||₁ + ||Pg||₁ ≤ lim inf_{k→∞} ( H(Pfk|Pg) − ||Pfk||₁ + ||Pg||₁ )

and so H(Pf|Pg) ≤ lim inf_{k→∞} H(Pfk|Pg) ≤ H(f|g).

Corollary 2.16.3. For an invertible Markov operator P, the relative entropy is constant: H(Pf|Pg) = H(f|g).

Proof. Because P and P⁻¹ are both Markov operators,

  H(f|g) = H(PP⁻¹f | PP⁻¹g) ≤ H(P⁻¹f | P⁻¹g) ≤ H(f|g) .

Corollary 2.16.4. If a measure preserving transformation T is invertible, then the corresponding Koopman and Perron-Frobenius operators preserve relative entropy.

Example. The operator T(µ)(A) = ∫∫_{R²} 1_A((x + y)/√2) dµ(x) dµ(y) does not decrease entropy.
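On a finite state space with the counting measure, a Markov operator is simply a column-stochastic matrix acting on density vectors, and Voigt's inequality becomes the data-processing inequality for relative entropy. A minimal numerical sketch (the particular matrix and densities are our own illustrative choices):

```python
import math

# column-stochastic matrix: preserves positivity and total mass of densities
M = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.1],
     [0.2, 0.2, 0.6]]   # each column sums to 1

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def H(f, g):
    # relative entropy H(f|g) = sum f log(f/g)
    return sum(a * math.log(a / b) for a, b in zip(f, g))

f = [0.7, 0.2, 0.1]
g = [0.2, 0.3, 0.5]
# Voigt: relative entropy does not increase under a Markov operator
assert H(apply(M, f), apply(M, g)) <= H(f, g) + 1e-12
```

Iterating `apply` shows the relative entropy decreasing monotonically along the orbit.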
Proof. Denote by Xµ a random variable having the law µ and by µ(X) the law of a random variable X. For a fixed random variable Y, define the operator

  PY(µ) = µ( (Xµ + Y)/√2 ) .

It is a Markov operator. By Voigt's theorem (2.16.2), the operator PY does not decrease entropy. Since every PY has this property, also the nonlinear map T(µ) = P_{Xµ}(µ) shares this property. More generally, given a sequence Xn of IID random variables: for every n, the map Pn which maps the law of S*n into the law of S*n+1 is a Markov operator which does not decrease entropy. We have shown as a corollary of the central limit theorem that T has a unique fixed point attracting all measures with mean 0 and variance 1. Since the entropy converges to that of the fixed point with maximal entropy, the entropy is strictly increasing at infinitely many points of the orbit Tⁿ(µ). It follows that T is not invertible. We can summarize: summing up IID random variables tends to increase the entropy of the distributions.

Definition. A fixed point of a Markov operator is called a stationary state or, in more physical language, a thermodynamic equilibrium. Important questions are: is there a thermodynamic equilibrium for a given Markov operator P, and if yes, how many are there? It is an important topic by itself [62].

2.17 Characteristic functions

Distribution functions are in general not so easy to deal with, as for example when summing up independent random variables. It is therefore convenient to deal with their Fourier transforms, the characteristic functions.

Definition. Given a random variable X, its characteristic function is the complex-valued function on R defined as φX(u) = E[e^{iuX}].

Remark. If FX is the distribution function of X and µX its law, characteristic functions are Fourier transforms of probability measures: if µ is the law of X, then φX = µ̂. If FX is a continuous distribution function with dFX(x) = fX(x) dx, then φX is the Fourier transform of the density function fX:

  φX(t) = ∫_R e^{itx} fX(x) dx .
In general, the characteristic function of X is the Fourier-Stieltjes transform

  φX(t) = ∫_R e^{itx} dFX(x) = ∫_R e^{itx} µX(dx) .
Example. For a random variable with density fX(x) = (m + 1)x^m on Ω = [0, 1], the characteristic function is

  φX(t) = ∫₀¹ e^{itx} (m + 1) x^m dx = (m + 1)! (1 − e^{it} e_m(−it)) / (−it)^{m+1} ,

where e_n(x) = Σ_{k=0}^n x^k/k! is the n-th partial exponential function.

Theorem 2.17.1 (Lévy formula). The characteristic function φX determines the distribution of X. If a, b are points of continuity of FX, then

  FX(b) − FX(a) = (1/2π) ∫_{−∞}^{∞} (e^{−ita} − e^{−itb})/(it) · φX(t) dt .  (2.12)

In general, one has

  (1/2π) ∫_{−∞}^{∞} (e^{−ita} − e^{−itb})/(it) · φX(t) dt = µ[(a, b)] + (1/2) µ[{a}] + (1/2) µ[{b}] .

Proof. Because a distribution function F has only countably many points of discontinuity, it is enough to determine F(b) − F(a) in terms of φ when a and b are continuity points of F. The general formula needs only to be verified when µ is a point measure at the boundary of the interval: by linearity, one can assume that µ is located on a single point b with p = P[X = b] > 0. The Fourier transform of the Dirac measure pδ_b is φX(t) = p e^{itb}. The claim reduces to

  (1/2π) ∫_{−∞}^{∞} (e^{−ita} − e^{−itb})/(it) · p e^{itb} dt = p/2 ,

which is equivalent to the claim lim_{R→∞} ∫_{−R}^{R} (e^{itc} − 1)/(it) dt = π for c = b − a > 0. Because the imaginary part is zero for every R by symmetry, only

  lim_{R→∞} ∫_{−R}^{R} sin(tc)/t dt = π

remains. The verification of this integral is a prototype computation in residue calculus. This proves the inversion formula at points of continuity.

Remark. For continuous distributions with density fX = F′X, the formula is the inversion formula for the Fourier transform: fX(a) = (1/2π) ∫_{−∞}^{∞} e^{−ita} φX(t) dt.
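The Lévy formula can be tested numerically. The sketch below (our own choices: the exponential distribution with λ = 1, the truncation T = 300 of the improper integral, and a midpoint grid that avoids t = 0) recovers F(b) − F(a) from φX(t) = λ/(λ − it) up to truncation error:

```python
import cmath
import math

phi = lambda t: 1.0 / (1.0 - 1j * t)   # CF of Exp(1): λ/(λ - it) with λ = 1

def levy(a, b, T=300.0, n=120_000):
    # midpoint rule for (1/2π) ∫_{-T}^{T} (e^{-ita} - e^{-itb})/(it) φ(t) dt;
    # midpoints never hit t = 0, and the integrand decays like 1/t²
    h = 2 * T / n
    s = 0.0
    for i in range(n):
        t = -T + (i + 0.5) * h
        s += ((cmath.exp(-1j * t * a) - cmath.exp(-1j * t * b)) / (1j * t) * phi(t)).real
    return s * h / (2 * math.pi)

a, b = 0.5, 1.5
exact = math.exp(-a) - math.exp(-b)   # F(b) - F(a) for Exp(1)
assert abs(levy(a, b) - exact) < 0.01
```

The tolerance 0.01 reflects the 1/T truncation error of the tail, not the quadrature.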
Theorem 2.17.2 (Characterization of weak convergence). A sequence Xn of random variables converges weakly to X if and only if its characteristic functions converge pointwise: φXn(t) → φX(t).

Proof. We have learned in lemma (2.13.2) that weak convergence is equivalent to convergence in distribution. Because the exponential function e^{itx} is continuous for each t, it follows from the definition that weak convergence implies the pointwise convergence of the characteristic functions. From formula (2.12) follows that if the characteristic functions converge pointwise, then convergence in distribution takes place.

Example. Here is a table of characteristic functions (CF) φX(t) = E[e^{itX}] and moment generating functions (MGF) MX(t) = E[e^{tX}] for some familiar random variables:

  Distribution    Parameters          CF φX(t)                      MGF MX(t)
  Normal          m ∈ R, σ² > 0       e^{imt − σ²t²/2}              e^{mt + σ²t²/2}
  Normal N(0,1)   -                   e^{−t²/2}                     e^{t²/2}
  Uniform         [−a, a], a > 0      sin(at)/(at)                  sinh(at)/(at)
  Exponential     λ > 0               λ/(λ − it)                    λ/(λ − t)
  Binomial        n ≥ 1, p ∈ [0, 1]   (1 − p + pe^{it})ⁿ            (1 − p + pe^t)ⁿ
  Poisson         λ > 0               e^{λ(e^{it} − 1)}             e^{λ(e^t − 1)}
  Geometric       p ∈ (0, 1)          p/(1 − (1 − p)e^{it})         p/(1 − (1 − p)e^t)
  First success   p ∈ (0, 1)          pe^{it}/(1 − (1 − p)e^{it})   pe^t/(1 − (1 − p)e^t)
  Cauchy          m ∈ R               e^{imt − |t|}                 e^{mt − |t|} (formal; the expectation diverges for t ≠ 0)

Definition. Let F and G be two probability distribution functions. Their convolution F ⋆ G is defined as

  F ⋆ G(x) = ∫_R F(x − y) dG(y) .

Lemma 2.17.3. If F and G are distribution functions, then F ⋆ G is again a distribution function.

Proof. We have to verify the three properties which characterize distribution functions among real-valued functions as in proposition (2.12.1):
a) Since F is nondecreasing, also F ⋆ G is nondecreasing.
b) Because F(−∞) = 0, we have also F ⋆ G(−∞) = 0. Since F(∞) = 1 and dG is a probability measure, also F ⋆ G(∞) = 1.
c) Given a sequence hn → 0 with hn > 0, define Fn(x) = F(x + hn). Because F is continuous from the right, Fn(x) converges pointwise to F(x). The Lebesgue dominated convergence theorem (2.4.3) implies that Fn ⋆ G(x) = F ⋆ G(x + hn) converges to F ⋆ G(x).
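Any entry of the table can be checked by computing E[e^{itX}] directly. As a sketch (the parameter values n = 7, p = 0.3, t = 1.1 are arbitrary), we compare the binomial characteristic function with its closed form:

```python
import cmath
from math import comb

n, p, t = 7, 0.3, 1.1
# direct expectation E[e^{itX}] for X ~ Binomial(n, p)
direct = sum(comb(n, k) * p**k * (1 - p)**(n - k) * cmath.exp(1j * t * k)
             for k in range(n + 1))
# closed form from the table
table = (1 - p + p * cmath.exp(1j * t))**n
assert abs(direct - table) < 1e-12
```

The identity is exact; the tolerance only absorbs floating-point rounding.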
Lemma 2.17.4. If F and G are distribution functions with characteristic functions φ and ψ, then F ⋆ G has the characteristic function φ · ψ.

Proof. While one can deduce this fact directly from Fourier theory, we prove it by hand, using an approximation of the integral by step functions:

  ∫_R e^{iux} d(F ⋆ G)(x)
   = lim_{N,n→∞} Σ_{k=−N2ⁿ+1}^{N2ⁿ} e^{iuk2⁻ⁿ} ∫_R [F(k2⁻ⁿ − y) − F((k−1)2⁻ⁿ − y)] dG(y)
   = lim_{N,n→∞} Σ_{k=−N2ⁿ+1}^{N2ⁿ} ∫_R e^{iu(k2⁻ⁿ−y)} [F(k2⁻ⁿ − y) − F((k−1)2⁻ⁿ − y)] · e^{iuy} dG(y)
   = ∫_R [ lim_{N→∞} ∫_{−N−y}^{N−y} e^{iux} dF(x) ] e^{iuy} dG(y)
   = ∫_R φ(u) e^{iuy} dG(y) = φ(u) ψ(u) .

Example. Given two continuous distributions F, G with densities h and k, the density of F ⋆ G is given by the convolution

  h ⋆ k(x) = ∫_R h(x − y) k(y) dy ,

because (F ⋆ G)′(x) = d/dx ∫_R F(x − y) k(y) dy = ∫_R h(x − y) k(y) dy.

Example. Given two discrete distributions F(x) = Σ_{n≤x} pn, G(x) = Σ_{n≤x} qn, then F ⋆ G(x) = Σ_{n≤x} (p ⋆ q)n, where p ⋆ q is the convolution of the sequences p, q defined by (p ⋆ q)n = Σ_{k=0}^n pk qn−k. We see that the convolution of discrete distributions gives again a discrete distribution.
It follows that the set of distribution functions forms a commutative semigroup with respect to the convolution product, with the Dirac distribution at 0 as neutral element. Characteristic functions become especially useful if one deals with independent random variables, because their characteristic functions multiply with the pointwise product:

Proposition 2.17.5. Given a finite set of independent random variables Xj, j = 1, ..., n, with characteristic functions φj. The characteristic function of Σ_{j=1}^n Xj is φ = Π_{j=1}^n φj.

Proof. Since the Xj are independent, we get for any set of complex valued measurable functions gj for which E[gj(Xj)] exists:

  E[ Π_{j=1}^n gj(Xj) ] = Π_{j=1}^n E[gj(Xj)] .

This follows almost immediately from the definition of independence, since one can check it first for indicator functions gj = 1Aj, where the Aj are σ(Xj)-measurable sets, for which gj(Xj)gk(Xk) = 1_{Aj ∩ Ak} and E[gj(Xj)gk(Xk)] = m(Aj)m(Ak) = E[gj(Xj)]E[gk(Xk)], then for step functions by linearity, and then for arbitrary measurable functions. If we put gj(x) = e^{itx}, the proposition is proved.

Example. If Xn are IID random variables which take the values 0 and 2 with probability 1/2 each, then the random variable X = Σ_{n=1}^∞ Xn/3ⁿ has the Cantor distribution. Because the characteristic function of Xn/3ⁿ is

  φ_{Xn/3ⁿ}(t) = E[e^{itXn/3ⁿ}] = (e^{i2t/3ⁿ} + 1)/2 ,

we see that the characteristic function of X is

  φX(t) = Π_{n=1}^∞ (e^{i2t/3ⁿ} + 1)/2 .

The centered random variable Y = X − 1/2 can be written as Y = Σ_{n=1}^∞ Yn/3ⁿ, where Yn takes the values −1, 1 with probability 1/2. So

  φY(t) = Π_{n=1}^∞ E[e^{itYn/3ⁿ}] = Π_{n=1}^∞ (e^{it/3ⁿ} + e^{−it/3ⁿ})/2 = Π_{n=1}^∞ cos(t/3ⁿ) .

This formula for the Fourier transform of a singular continuous measure µ has already been derived by Wiener. The Fourier theory of fractal measures has been developed much more since then.
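The product formula for the centered Cantor distribution can be checked at finite truncation level: with finitely many signs Yn the expectation E[e^{itY}] over all sign patterns equals the finite product of cosines exactly. A sketch (truncation N = 10 and t = 5.0 are our own choices):

```python
import cmath
import math
from itertools import product

N, t = 10, 5.0
# direct expectation over all 2^N sign patterns of Y = sum_{n=1}^N Yn / 3^n
vals = []
for signs in product((-1, 1), repeat=N):
    y = sum(s / 3**(n + 1) for n, s in enumerate(signs))
    vals.append(cmath.exp(1j * t * y))
direct = sum(vals) / len(vals)

# product formula: prod_{n=1}^N cos(t / 3^n)
prod = 1.0
for n in range(1, N + 1):
    prod *= math.cos(t / 3**n)

assert abs(direct - prod) < 1e-12
```

By symmetry of the signs, `direct` is real up to rounding, matching the real product.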
Example. Let Yk be IID random variables taking the values −1, 1 with probability 1/2 and let Xk = λ^k Yk with 0 < λ < 1. The process Sn = Σ_{k=1}^n Xk is called the random walk with variable step size, or the branching random walk with exponentially decreasing steps. Let µ be the law of the random sum X = Σ_{k=1}^∞ Xk. The measure µ is then called a Bernoulli convolution. If φY(t) = cos(t) is the characteristic function of Y, then the characteristic function of X is

  φX(t) = Π_{n=1}^∞ φY(tλⁿ) = Π_{n=1}^∞ cos(tλⁿ) .

For example, for λ = 1/3, the measure is supported on the Cantor set, as we have seen above. For more information on this stochastic process and the properties of the measure µ, which in a subtle way depend on λ, see [42].

Figure. [Graph of φY(t) for t ∈ [−200, 200].] The characteristic function φY(t) of a random variable Y with a centered Cantor distribution supported on [−1/2, 1/2] has the explicit formula φY(t) = Π_{n=1}^∞ cos(t/3ⁿ), already derived by Wiener in the early 20th century.

Exercise. The formula can also be used to compute moments of Y with the moment formula E[X^m] = (−i)^m (d^m/dt^m) φX(t)|_{t=0}.

Corollary 2.17.6. The probability density of the sum of independent random variables Σ_{j=1}^n Xj is f1 ⋆ f2 ⋆ ⋯ ⋆ fn, if Xj has the density fj.

Proof. This follows immediately from proposition (2.17.5) and the algebraic isomorphism between the algebra of distribution functions with the convolution product and the algebra of characteristic functions with pointwise multiplication.

Definition. The characteristic function of a vector valued random variable X = (X1, ..., Xk) is the function φX(t) = E[e^{it·X}]
µ) be a probability space and let U. . µ) be a probability space and Xi ∈ L random variables. T is the temperature. . 1). Y mean in terms of (i) the Laplace transform. A. Let (Ω.122 Chapter 2. is taking its minimum for the exponential family. tk ). What does independence of two random variables X. . where B is the Borel σalgebra on Rk . where p is the pressure. We have seen that the Helmholtz free energy Eµ˜ [U ] − kT H[˜ µ] (k is a physical constant). . . Limit theorems k on R . The moment generating function is defined as M (t) = E[etX ] provided that the expectation exists in a neighborhood of 0. if the σalgebras X −1 (B) and Y −1 (B) are independent. . Exercise. V ∈ X be random variables (describing the energy density and the mass density of a thermodynamical system). Show that the sum X + Y of two Gaussian distributed random variables is again Gaussian distributed. . Exercise. where we wrote t = (t1 . Y are independent. c) Find the probability density of a Gaussian distributed random variable X with covariance matrix A and mean m. . Compute Eµ [Xi ] and the entropy of µλ in terms of the partition function Z(λ). a vector valued random variable X has a Gaussian distribution with covariance A and mean m. (ii) the moment generating function or (iii) the generating function? Exercise. The Laplace transform of a positive random variable X ≥ 0 is defined as lX (t) = E[e−tX ]. Two such random variables X. . Let (Ω. if 1 φX (t) = eim·t− 2 (t·At) . Find the measure minimizing the free enthalpy or Gibbs potential Eµ˜ [U ] − kT H[˜ µ] − pEµ [V ] . mk ) called the mean of X. b) Given a real nonsingular k × k matrix A called the covariance matrix and a vector m = (m1 . a) Show that if X and Y are independent then φX+Y = φX · φY . We say. A. The generating function of an integervalued random variable is defined as ζ(X) = E[uX ] for u ∈ (0.
Exercise. a) Given the discrete measure space (Ω = {ε0 + nδ}_{n≥0}, ν) with ε0 ∈ R and δ > 0, where ν is the counting measure, and let X(k) = k be the Hamiltonian, so that E[X] is the energy. Find the distribution f maximizing the entropy H(f) among all measures µ̃ = fν fixing Eµ̃[X] = ε. In the answer there appears a parameter λ, the Lagrange multiplier of the variational problem. Put λ = −1/(kT), where T is the temperature; since we can fix the temperature T instead of the energy ε, the distribution in a) maximizing the entropy is determined by ω and T.
b) The physical interpretation is as follows: Ω is the discrete set of energies of a harmonic oscillator, ε0 is the ground state energy, and δ = ℏω is the incremental energy. Compute the spectrum

  ε(ω, T) = (E[X] − ε0) · ω²/(π²c³) ,

where c is the velocity of light. You have then deduced Planck's blackbody radiation formula.

2.18 The law of the iterated logarithm

We will give a proof of the law of the iterated logarithm only in the special case when the random variables Xn are independent and all have the standard normal distribution. The proof of the theorem for general IID random variables Xn can be found for example in [109]. The central limit theorem makes the general result plausible when knowing this special case.

Definition. Define for n ≥ 2 the constants Λn = √(2n log log n). The factor √(log log n) grows extremely slowly: for example, in order that this factor equals 3, one needs log log n = 9, that is n = exp(exp(9)) > 10^3519.

Definition. A random variable X ∈ L is called symmetric if its law µX satisfies µ((−b, −a)) = µ((a, b)) for all a < b. A symmetric random variable X ∈ L¹ has zero mean. We again use the notation Sn = Σ_{k=1}^n Xk in this section.

Lemma 2.18.1. Let the Xn be symmetric and independent. For every ε > 0,

  P[ max_{1≤k≤n} Sk > ε ] ≤ 2 P[Sn > ε] .

Proof. This is a direct consequence of Lévy's theorem, because we can take m = 0 as the median of a symmetric distribution.
(In the exercise above, ω is the frequency of the oscillation and ℏ is Planck's constant.) The sequence Λn grows only slightly faster than √(2n).
Theorem 2.18.2 (Law of iterated logarithm for N(0,1)). Let Xn be a sequence of IID N(0,1)-distributed random variables. Then

  lim sup_{n→∞} Sn/Λn = 1 ,  lim inf_{n→∞} Sn/Λn = −1 .

Proof. We follow [48]. Because the second statement follows from the first one by replacing Xn with −Xn, we have only to prove lim sup_{n→∞} Sn/Λn = 1. It suffices to show that for all ε > 0,
(i) P[Sn > (1 + ε)Λn, infinitely often] = 0,
(ii) P[Sn > (1 − ε)Λn, infinitely often] = 1, where it is enough that for some subsequence nk, P[Snk > (1 − ε)Λnk, infinitely often] = 1.

(i) Define nk = [(1 + ε)^k] ∈ N, where [x] is the integer part of x, and the events

  Ak = {Sn > (1 + ε)Λn for some n ∈ (nk, nk+1] } .

Clearly lim sup_k Ak = {Sn > (1 + ε)Λn infinitely often}. By the first Borel-Cantelli lemma (2.2.2), it is enough to show Σk P[Ak] < ∞. For each large enough k, we get with the above lemma

  P[Ak] ≤ P[ max_{nk < n ≤ nk+1} Sn > (1 + ε)Λnk ] ≤ P[ max_{1≤n≤nk+1} Sn > (1 + ε)Λnk ] ≤ 2 P[Snk+1 > (1 + ε)Λnk ] .

The right-hand side can be estimated further using that Snk+1/√nk+1 is N(0,1)-distributed and that for a N(0,1)-distributed random variable X one has P[X > t] ≤ const · e^{−t²/2}:

  2 P[Snk+1 > (1 + ε)Λnk] = 2 P[ Snk+1/√nk+1 > (1 + ε) √(2 nk log log nk) / √nk+1 ]
   ≤ C exp( −(1 + ε)² (nk/nk+1) log log nk )
   ≤ C1 exp( −(1 + ε) log log nk ) = C1 (log nk)^{−(1+ε)} ≤ C2 k^{−(1+ε)} .

Having shown that P[Ak] ≤ const · k^{−(1+ε)} for large enough k proves the claim Σk P[Ak] < ∞.
where we have used assumption (2. we use the fact that t e−x /2 dx ≥ 2 C · e−t /2 for some constant C. Choose N > 1 large enough and c < 1 near enough to 1 such that p √ (2.3 (Central limit theorem for IID random variables). The law of iterated logarithm is true for T (X) implies that it is true for X. We would need however sharper estimates.18. For such k. We present a second proof of the central limit theorem in the IID case. we know that p Snk > −2 2nk log log nk for sufficiently large k. We know that N (0. .13) c 1 − 1/N − 2/ N > 1 − ǫ . Both inequalities hold therefore for infinitely many values of k. This shows that it would be enough to prove the theorem in the case when X has distribution in an arbitrary small neighborhood of N (0.18. The sets p Ak = {Snk − Snk−1 > c 2∆nk log log ∆nk } R∞ 2 are independent. Define nk = N k and ∆nk = nk − nk−1 . From (i). 1). In the following estimate. 1) in distribution. Given Xn ∈√L2 which are IID with mean 0 and finite variance σ 2 . 1) is the unique fixed point of the map T by the central limit theorem. The law of the iterated logarithm Given ǫ > 0. p P[Ak ] = P[{Snk − Snk−1 > c 2∆nk log log ∆nk }] √ Snk − Snk−1 2∆nk log log ∆nk √ >c }] = P[{ √ ∆nk ∆nk ≥ C · exp(−c2 log log ∆nk ) ≤ C · exp(−c2 log(k log N )) = C1 · exp(−c2 log k) = C1 k −c 2 P so that k P[Ak ] = ∞. We have therefore by BorelCantelli a set A of full measure so that for ω ∈ A p Snk − Snk−1 > c 2∆nk log log ∆nk for infinitely many k. Then Sn /(σ n) → N (0. Theorem 2.13) in the last inequality. p Snk (ω) > Snk−1 (ω) + c 2∆nk log log ∆nk p p ≥ −2 2nk−1 log log nk−1 + c 2∆nk log log ∆nk p p √ ≥ (−2/ N + c 1 − 1/N ) 2nk log log nk p ≥ (1 − ǫ) 2nk log log nk .125 2. to illustrate the use of characteristic functions.
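A simulation makes the normalization Λn plausible: along a single Gaussian random walk, the running maximum of Sn/Λn should be of order 1, neither drifting to 0 (as Sn/n would) nor diverging (as Sn/√n's running maximum does in the limit). This is only a heuristic check, not a proof; the seed, horizon and the generous bracket are our own choices:

```python
import math
import random

random.seed(7)
n_max = 100_000
S = 0.0
peak = 0.0
for n in range(1, n_max + 1):
    S += random.gauss(0.0, 1.0)
    if n >= 10:
        # Λn = sqrt(2 n log log n)
        peak = max(peak, S / math.sqrt(2 * n * math.log(math.log(n))))
# the lim sup is 1; over a finite horizon the running maximum is of order 1
assert 0.1 < peak < 2.5
```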
4. The characteristic function of N (0.126 Chapter 2. Since by assumption E[Xn ] = 0 and E[Xn2 ] = σ 2 . This method can be adapted to other situations as the following example shows. k k=1 where γ = limn→∞ Pn Var[Sn ] = 1 k=1 k − log(n) is the Euler constant.18. we have φXn (t) = 1 − σ2 2 t + o(t2 ) . Denote by φXn the characteristic function of Xn . n X 1 1 π2 (1 − ) = log(n) + γ − + o(1) . 2 Therefore E[e it σS√nn ] = = = t φXn ( √ )n σ n 1 t2 1 (1 − + o( ))n 2n n 2 e−t /2 + o(1) . We have to . Proposition 2. k k 6 k=1 it satisfy E[Tn ] → 0 and Var[Tn ] → 1. E[Sn ] = n X 1 = log(n) + γ + o(1) . Limit theorems 2 Proof. Compute φXn = 1 − n1 + en so that Q it φSn (t) = nk=1 (1 − k1 + ek ) and φTn (t) = φSn (s(t))e−is log(n) . Then Sn − log(n) Tn = p log(n) converges to N (0. Given a sequence of independent events An P ⊂ Ω with n P[An ] = 1/n. Proof. 1) in distribution. Define the random variables Xn = 1An and Sn = k=1 Xk . 1) is φ(t) = e−t show that for all t ∈ R E[e it σS√nn 2 ] → e−t /2 /2 . where s = .
2. 2 2 We see that Tn converges in law to the standard normal distribution.18. The law of the iterated logarithm p t/ log(n). For n → ∞. . we compute log φTn (t) = 127 n X p 1 log(1 + (eis − 1)) −it log(n) + k k=1 = n X p 1 1 −it log(n) + log(1 + (is − s2 + o(s2 ))) k 2 k=1 = = = n n X X p 1 1 2 s2 2 −it log(n) + ) (is + s + o(s )) + O( k 2 k2 k=1 k=1 p 1 −it log(n) + (is − s2 + o(s2 ))(log(n) + O(1)) + t2 O(1) 2 −1 2 1 2 t + o(1) → − t .
Chapter 3

Discrete Stochastic Processes

3.1 Conditional Expectation

Definition. Given a probability space (Ω, A, P), a second measure P′ on (Ω, A) is called absolutely continuous with respect to P if P[A] = 0 implies P′[A] = 0 for all A ∈ A. One writes P′ ≪ P.

Example. If P[a, b] = b − a is the uniform distribution on Ω = [0, 1], where A is the Borel σ-algebra, and Y ∈ L¹ satisfies Y(x) ≥ 0 for all x ∈ Ω, then P′[a, b] = ∫_a^b Y(x) dx is absolutely continuous with respect to P.

Example. If Y(x) = 1B(x), then P′[A] = P[A ∩ B] for all A ∈ A. If P[B] < 1, then P is not absolutely continuous with respect to P′: we have P′[Bᶜ] = 0 but P[Bᶜ] = 1 − P[B] > 0.

Example. Assume P is again the Lebesgue measure on [0, 1] and define

  P′[A] = 1 if 1/2 ∈ A ,  P′[A] = 0 if 1/2 ∉ A .

For B = {1/2}, we have P[B] = 0 but P′[B] = 1 ≠ 0, so P′ is not absolutely continuous with respect to P.

The next theorem is a reformulation of a classical theorem of Radon-Nikodym of 1913 and 1930.

Theorem 3.1.1 (Radon-Nikodym). Given a measure P′ which is absolutely continuous with respect to P, there exists a unique Y ∈ L¹(P) with P′ = Y P. It is unique in L¹. The function Y is called the Radon-Nikodym derivative of P′ with respect to P.

Proof. We can assume without loss of generality that P′ is a positive measure (otherwise use the Hahn decomposition P′ = P′₊ − P′₋, where P′₊ and P′₋ are positive measures).
B). R (i) Construction: We recall the notation E[Y . A similar argument gives Y ′ ≤ Y almost everywhere. The random variable Y ∈ L1 (B) is unique in L1 (B). One has then E[Y − Y ′ . Definition. A] = E[1A Y ] = A Y dP . If B = A. so that Y = Y ′ almost everywhere. . (ii) Uniqueness: assume there exist two derivatives Y. for ω ∈ Y c . In other words. A] = ≤ E[Y1 . Radon1 Nykodym’s theorem (3. Y ′ . Given X ∈ L1 (A) and aR sub σalgebra B ⊂ A. Theorem 3. The random variable Y in this theorem is denoted with E[XB] and called the conditional expectation of X with respect to B. Ω}. P′ would be singular with respect to P according to the definition given in section (2. If B is the trivial σalgebra B = {∅.4. Kolmogorov 1933). R ˜ Proof. then E[XB] = X. ∀A ∈ A } is closed under formation of suprema E[Y1 ∨ Y2 . A] ≤ P′ [A]. A] on the probability space (Ω. then E[X{Z}I ] is defined as E[Xσ({Z}I )].130 Chapter 3.2 (Existence of conditional expectation.1. Discrete Stochastic Processes are positive measures). If {Z}I is a family of random variables.1) shows that the supremum of Γ is in Γ. Given a set B ∈ B with P˜ [B] = 0. Y c . If B = {∅. then P′ [B] = 0 so that P ′ is absolutely continuous with respect to P˜ .1) R provides us with a random variable Y ∈ L (B) R ′ with P [A] = A X dP = A Y dP . Ω} then ( 1 R E[XB](ω) = m(Y ) Y R X dP Yc m(Y c ) X dP for ω ∈ Y . Define the measures P[A] = P[A] and P′ [A] = A X dP = E[X. Y = Y ′ in L1 . Example. Y. Example. Example. The set Γ = {Y ≥ 0  E[Y .1. then E[XB] = E[X]. We claim that the supremum Y of all functions Γ satisfies Y P = P′ : an application of BeppoL´evi’s theorem (2. If Z is a random variable. The measure P′′ = P′ − Y P is the zero measure since we could do the same argument with a new set Γ for the absolutely continuous part of P′′ . There exists a random R variable Y ∈ L1 (B) with A Y dP = A X dP for all A ∈ B. 
A ∩ {Y2 ≥ Y1 }] P′ [A ∩ {Y1 > Y2 }] + P′ [A ∩ {Y2 ≥ Y1 }] = P′ [A] and contains a function Y different from 0 since else. {Y ≥ Y ′ }] = 0 and so Y ≥ Y ′ almost everywhere. A ∩ {Y1 > Y2 }] + E[Y2 .12) of absolute continuity. then E[XZ] is defined as E[Xσ(Z)].
1. y) dy . Let (Ω. 1]. Conditional Expectation Example. dxdy). 3. The orthogonal complement L2 (B)⊥ is defined as L2 (B)⊥ = {Z ∈ L2 (A)  (Z. conditioned to the σalgebra B which contains the events singled out by data from Xi . y) is a random variable on Ω. Remark. the possible outcomes are modeled by a probability space (Ω. 2. then L2 (B) is a Hilbert space which is a linear subspace of the Hilbert space L2 (A). then Y = E[XB] is the random variable Z 1 Y (x. 0 This conditional integral only depends on x. there exists a unique projection p(X) ∈ L2 (B). 4 } and A the σalgebra of all subsets of Ω. the map q is a projection which must agree with p. {3. 1A ) = E[X − E[XB]. our best knowledge of the random variable X is the conditional expectation E[XB]. y) = X(x. Example.3. A. 4}. we have for A ∈ B (X − E[XB]. Proof. When identifying functions which agree almost everywhere. It models the ”knowledge” obtained from some measurements we can do in the laboratory and B is generated by a set of random variables {Zi }i∈I obtained from some measuring devices. 2}. A) which is our ”laboratory”. Here is a possible interpretation of conditional expectation: for an experiment. With respect to these measurements. {1. For a specific ”experiment ω. It is a random variable which is a function of the measurements Zi . P) = ([0.1. A.131 3. Ω}. A] = 0 . the conditional expectation E[XB](ω) is the expected value of X(ω). 1]. If X(x. What is the conditional expectation Y = E[XB] . Therefore X − E[XB] ∈ L2 (B)⊥ . 1]. By the definition of the conditional expectation. Assume that the only information about the experiment are the events in a subalgebra B of A. 1] × [0. Let Ω = {1. it is linear and has the property that (1 − q)(X) is perpendicular to L2 (B). Proposition 3. This notion of conditional expectation will be important later. Let B be the σalgebra of sets A × [0. Let B = {∅. The conditional expectation X 7→ E[XB] is the projection from L2 (A) onto L2 (B). 
The space L2 (B) of square integrable Bmeasurable functions is a linear subspace of L2 (A). where A is the Borel σalgebra defined by the Euclidean distance metric on the square Ω. Y ) := E[Z · Y ] = 0 for all Y ∈ L2 (B) } . where A is in the Borel σalgebra of the interval [0. For any X ∈ L2 (A). Because the map q(X) = E[XB] satisfies q 2 = q.
The Hilbert space L²(B) is the set of all vectors v = (v1, v2, v3, v4) for which v1 = v2 and v3 = v4, because a function which is not constant on the cells {1, 2} and {3, 4} would generate a finer algebra. The conditional expectation projects onto that plane: the first two components of X are averaged, and likewise the last two, so that

  E[X|B] = ( (X(1)+X(2))/2, (X(1)+X(2))/2, (X(3)+X(4))/2, (X(3)+X(4))/2 ) .

Remark. If B is the σ-algebra describing the knowledge about a process (like for example the data which a pilot knows about a plane) and X is the random variable (which could be the actual data of the flying plane), then Y = E[X|B] is the best guess about this random variable which we can make with our knowledge: it is the least-squares best B-measurable square integrable predictor. This makes conditional expectation important for controlling processes.

Exercise. Given two independent random variables X, Y ∈ L² such that X has the Poisson distribution Pλ and Y has the Poisson distribution Pµ. The random variable Z = X + Y has the Poisson distribution Pλ+µ, as can be seen with the help of characteristic functions. Let B be the σ-algebra generated by Z. Show that

  E[X|B] = λ/(λ + µ) · Z .

Hint: It is enough to show E[X; {Z = k}] = λ/(λ + µ) · k · P[Z = k].

Remark. Even if random variables are only in L¹, the next list of properties of conditional expectation can be remembered better with proposition 3.1.3 in mind, which identifies conditional expectation as a projection when they are in L².
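The projection picture can be verified directly on the four-point space: E[X|B] averages X over each cell of the partition, and the residual X − E[X|B] is orthogonal to every B-measurable function. A minimal sketch (uniform P and X(k) = k² as in the example; the helper names are ours):

```python
# Omega = {1,2,3,4} with uniform P; B is generated by the partition {1,2},{3,4}
P = {w: 0.25 for w in (1, 2, 3, 4)}
cells = [(1, 2), (3, 4)]
X = {w: w * w for w in P}   # X(k) = k^2

def cond_exp(X):
    # average X over each cell of the partition, weighted by P
    Y = {}
    for cell in cells:
        mass = sum(P[w] for w in cell)
        avg = sum(X[w] * P[w] for w in cell) / mass
        for w in cell:
            Y[w] = avg
    return Y

Y = cond_exp(X)
assert Y == {1: 2.5, 2: 2.5, 3: 12.5, 4: 12.5}
# orthogonality: E[(X - Y) Z] = 0 for every B-measurable Z
for Z in ({1: 1, 2: 1, 3: 0, 4: 0}, {1: 0, 2: 0, 3: 1, 4: 1}):
    assert abs(sum((X[w] - Y[w]) * Z[w] * P[w] for w in P)) < 1e-12
```

The two indicator functions of the cells span L²(B), so checking orthogonality against them suffices.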
Theorem 3.1.4 (Properties of conditional expectation). For given random variables X, Y, Xn ∈ L^1, the following properties hold:
(1) Linearity: the map X ↦ E[X|B] is linear.
(2) Positivity: X ≥ 0 ⇒ E[X|B] ≥ 0.
(3) Tower property: C ⊂ B ⊂ A ⇒ E[E[X|B]|C] = E[X|C].
(4) Conditional Fatou: |Xn| ≤ X ⇒ E[lim inf_{n→∞} Xn | B] ≤ lim inf_{n→∞} E[Xn|B].
(5) Conditional dominated convergence: |Xn| ≤ X, Xn → X a.e. ⇒ E[Xn|B] → E[X|B] a.e.
(6) Conditional Jensen: if h is convex, then E[h(X)|B] ≥ h(E[X|B]). Especially ||E[X|B]||_p ≤ ||X||_p.
(7) Extracting knowledge: for Z ∈ L^∞(B), one has E[ZX|B] = Z E[X|B].
(8) Independence: if X is independent of C, then E[X|C] = E[X].

Proof.
(1) The conditional expectation is a projection and so linear.
(2) For positivity, note that if Y = E[X|B] were negative on a set of positive measure, then A = Y^{-1}((−∞, −1/n]) ∈ B would have positive probability for some n. This would lead to the contradiction 0 ≤ E[1_A X] = E[1_A Y] ≤ −n^{-1} P[A] < 0.
(3) Use that P'' ≪ P' ≪ P implies P'' = Y' P' = Y' Y P, while P'' ≪ P gives P'' = Z P, so that Z = Y' Y almost everywhere.
(4),(5) For a set B' ∈ B, consider the algebra C_{B'} = {∅, B', B'^c, Ω}. Because X ≤ Z almost everywhere if and only if E[X|C_{B'}] ≤ E[Z|C_{B'}] for every B' ∈ B, the statements (4) and (5) are true if they are true conditioned with C_{B'} for each B' ∈ B. The tower property reduces these statements to versions with B = C_{B'}, which are then, on each of the sets B', B'^c, the usual Fatou lemma and dominated convergence theorem.
(6) Choose a sequence (an, bn) ∈ R^2 such that h(x) = sup_n (an x + bn) for all x ∈ R. We get from h(X) ≥ an X + bn that almost surely E[h(X)|B] ≥ an E[X|B] + bn. These inequalities hold therefore simultaneously for all n and we obtain almost surely
  E[h(X)|B] ≥ sup_n (an E[X|B] + bn) = h(E[X|B]) .
The corollary ||E[X|B]||_p ≤ ||X||_p is obtained with h(x) = |x|^p.
(7) It is enough to condition on each algebra C_{B'} for B' ∈ B; the tower property then reduces the statement to linearity.
(8) By linearity, we can assume X ≥ 0. More is true: if σ(X, B) is independent of C, then E[X|σ(B,C)] = E[X|B]. Indeed, for B ∈ B and C ∈ C, the random variables X 1_B and 1_C are independent, so that
  E[X 1_{B∩C}] = E[X 1_B 1_C] = E[X 1_B] P[C] .
The random variable Y = E[X|B] is B-measurable, and because Y 1_B is independent of C, we get E[(Y 1_B) 1_C] = E[Y 1_B] P[C], so that E[1_{B∩C} X] = E[1_{B∩C} Y]. The measures µ: A ↦ E[1_A X] and ν: A ↦ E[1_A Y] agree therefore on the π-system of sets of the form B ∩ C with B ∈ B and C ∈ C, and consequently everywhere on σ(B,C). The claim (8) is the special case where B = {∅, Ω} is trivial, so that Y = E[X].

Remark. The properties Conditional Fatou, Lebesgue and Jensen are statements about functions in L^1(B), and not about numbers, as the usual theorems of Fatou, Lebesgue or Jensen are. From the conditional Jensen property in theorem (3.1.4), it follows that the operation of conditional expectation is a positive and continuous operation on L^p for any p ≥ 1.

Exercise. This exercise deals with conditional expectation.
a) What is E[Y|Y]?
b) Show that if E[X|A] = 0 and E[X|B] = 0, then E[X|σ(A,B)] = 0.
c) Given X, Y ∈ L^1 satisfying E[X|Y] = Y and E[Y|X] = X. Verify that X = Y almost everywhere.

Definition. We add a notation which is commonly used. The conditional probability space (Ω, A, P[·|B]) is defined by
  P[B'|B] = E[1_{B'}|B] .

Remark. Is there, for almost all ω ∈ Ω, a probability measure P_ω such that
  E[X|B](ω) = ∫_Ω X dP_ω ?
If such a map from Ω to M^1(Ω) exists, and if it is B-measurable, it is called a regular conditional probability given B. In general such a map ω ↦ P_ω does not exist. However, it is known that for a probability space (Ω, A, P) for which Ω is a complete separable metric space with Borel σ-algebra A, there exists a regular conditional probability given B for any sub-σ-algebra B of A.
Definition. For X ∈ L^p, one has the conditional moment E[X^p|B], if B is a σ-subalgebra of A. Conditional moments are B-measurable random variables and generalize the usual moments, which are numbers. Of special interest is the conditional variance:

Definition. For X ∈ L^2, the conditional variance Var[X|B] is the random variable E[X^2|B] − E[X|B]^2. Especially, if B is the σ-algebra generated by a random variable Y, one writes
  Var[X|Y] = E[X^2|Y] − E[X|Y]^2 .

Lemma 3.1.5 (Law of total variance). For X ∈ L^2 and an arbitrary random variable Y, one has
  Var[X] = E[Var[X|Y]] + Var[E[X|Y]] .

Proof. By the definition of the conditional variance as well as the properties of conditional expectation:
  Var[X] = E[X^2] − E[X]^2
         = E[E[X^2|Y]] − E[E[X|Y]]^2
         = E[Var[X|Y]] + E[E[X|Y]^2] − E[E[X|Y]]^2
         = E[Var[X|Y]] + Var[E[X|Y]] .

Remark. In an earlier version, the statement that for two independent random variables X and Z, the conditional variance given Y of their sum is equal to the sum of their conditional variances, was mentioned in a remark. Luo Jun found the following counterexample, which we include as an exercise.

Exercise (Due to Luo Jun). Let Ω = {1, 2, 3, 4} with the normalized counting measure. Define the events A = {3, 4}, B = {2, 3}, C = {2, 4} and the random variables X = 1_A, Y = 1_B, Z = 1_C.
a) Verify that E[X|Y] E[Z|Y] ≠ E[XZ|Y].
b) Verify that Var[X + Z|Y] ≠ Var[X|Y] + Var[Z|Y].

Here is an application which illustrates how one can use the conditional variance: the Cantor distribution is the singular continuous distribution whose law µ has its support on the standard Cantor set.

Corollary 3.1.6 (Variance of the Cantor distribution). The standard Cantor distribution for the Cantor set on [0,1] has the expectation 1/2 and the variance 1/8.
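Both the law of total variance and Luo Jun's counterexample can be verified mechanically on the four-point space of the exercise. The sketch below (illustrative code, not part of the text) uses exact rational arithmetic; `cond` computes E[ · |Y] by averaging over the level sets of Y.

```python
from fractions import Fraction

Omega = [1, 2, 3, 4]                      # uniform probability 1/4 each
X = {1: 0, 2: 0, 3: 1, 4: 1}              # X = 1_{3,4}
Y = {1: 0, 2: 1, 3: 1, 4: 0}              # Y = 1_{2,3}
Z = {1: 0, 2: 1, 3: 0, 4: 1}              # Z = 1_{2,4}

def E(f):
    """Expectation on the uniform four-point space."""
    return sum(Fraction(f[w]) for w in Omega) / 4

def cond(f):
    """E[f|Y]: average f over each level set of Y."""
    out = {}
    for y in {Y[w] for w in Omega}:
        block = [w for w in Omega if Y[w] == y]
        avg = sum(Fraction(f[w]) for w in block) / len(block)
        for w in block:
            out[w] = avg
    return out

def cond_var(f):
    """Var[f|Y] = E[f^2|Y] - E[f|Y]^2."""
    c, c2 = cond(f), cond({w: f[w] * f[w] for w in Omega})
    return {w: c2[w] - c[w] ** 2 for w in Omega}

# Law of total variance: Var[X] = E[Var[X|Y]] + Var[E[X|Y]]
var_X = E({w: X[w] ** 2 for w in Omega}) - E(X) ** 2
cv, ce = cond_var(X), cond(X)
total = E(cv) + (E({w: ce[w] ** 2 for w in Omega}) - E(ce) ** 2)
print(var_X, total)  # both equal 1/4
```

Running the same helpers on X, Z and X + Z exhibits the two inequalities of the exercise.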
Proof. Let X be a random variable with the Cantor distribution. By the symmetry x ↦ 1 − x of the Cantor set, E[X] = ∫_0^1 x dµ(x) = 1/2. Define the σ-algebra generated by {[0,1/3), [1/3,1]} on Ω = [0,1]; it is generated by the random variable Y = 1_{[0,1/3)}. Define Z = E[X|Y]. It is a random variable which is constant 1/6 on [0,1/3) and equal to 5/6 on [1/3,1]. It has the expectation
  E[Z] = (1/6) P[Y=1] + (5/6) P[Y=0] = 1/12 + 5/12 = 1/2
and the variance
  Var[Z] = E[Z^2] − E[Z]^2 = (1/36) P[Y=1] + (25/36) P[Y=0] − 1/4 = 1/9 .
Define the random variable W = Var[X|Y] = E[X^2|Y] − E[X|Y]^2. It is equal to ∫ (x − 1/6)^2 dµ(x), conditioned to [0,1/3), on that interval, and to ∫ (x − 5/6)^2 dµ(x), conditioned to [1/3,1], on the other. By the self-similarity of the Cantor set, each half is a copy of the whole set scaled by 1/3, so that W = Var[X|Y] is actually constant and equal to Var[X]/9. The identity E[Var[X|Y]] = Var[X]/9 implies
  Var[X] = E[Var[X|Y]] + Var[E[X|Y]] = E[W] + Var[Z] = Var[X]/9 + 1/9 .
Solving for Var[X] gives Var[X] = 1/8.

Exercise. Given a probability space (Ω, A, P) and a σ-algebra B ⊂ A. Denote by Q the conditional probability measure on (Ω, B) defined by Q[A] = P[A] for A ∈ B.
a) Show that the map P: X ∈ L^1 ↦ E[X|B] is a Markov operator from L^1(A, P) to L^1(B, Q).
b) Given a measure preserving invertible map T: Ω → Ω. The map T can also be viewed as a map on the new probability space (Ω, B, Q). Denote this new map by S. Show that S is again measure preserving and invertible.

Exercise. a) Given a measure preserving invertible map T: Ω → Ω on a probability space (Ω, A, P), we call (Ω, A, T, P) a dynamical system. A measure preserving dynamical system (∆, B, S, ν) is called a factor of a measure preserving dynamical system (Ω, A, T, µ) if there exists a measure preserving map U: Ω → ∆ such that S ∘ U(x) = U ∘ T(x) for all x ∈ Ω. Examples of factors are the system itself and the trivial system.
b) A complex number λ is called an eigenvalue of T if there exists X ∈ L^2 such that X(T) = λX. The map T is said to have pure point spectrum if there exists a countable set of eigenvalues λi such that their eigenfunctions Xi span L^2. Show that if T has pure point spectrum and S is a factor of T, then also S has pure point spectrum.
c) It is known that if a measure preserving transformation T on a probability space has pure point spectrum, then the system is isomorphic to a translation on the compact Abelian group Ĝ, which is the dual group of the discrete group G formed by the spectrum σ(T) ⊂ T. Describe the possible factors of T and their spectra.
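A concrete way to see the values 1/2 and 1/8: the Cantor distribution is the law of Σ_{k≥1} a_k 3^{-k} with IID digits a_k uniform on {0, 2}, so mean and variance are sums of geometric series. A short sketch (illustrative code, not part of the text) evaluates the partial sums exactly.

```python
from fractions import Fraction

# The Cantor distribution is the law of X = sum_{k>=1} a_k 3^{-k}
# with IID digits a_k uniform on {0, 2}; E[a_k] = 1, Var[a_k] = 1.
K = 60  # truncation depth; the neglected tails are of size ~ 3^-K
mean = sum(Fraction(1, 3 ** k) for k in range(1, K))  # E[a_k] * 3^-k = 3^-k
var = sum(Fraction(1, 9 ** k) for k in range(1, K))   # Var[a_k] * 9^-k = 9^-k
print(float(mean), float(var))  # -> 0.5 0.125
```

The variance series is Σ 9^{-k} = 1/8, exactly the value delivered by the law of total variance above.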
Exercise. If S is a factor of T and T is a factor of S, then the two systems are called isomorphic. Verify that every factor of a dynamical system (Ω, A, T, µ) can be realized as (Ω, B, T, µ), where B is a σ-subalgebra of A.

Exercise. Let Ω = T^1 = R/(2πZ) be the one-dimensional circle. Let A be the Borel σ-algebra on T^1 and P = dx the Lebesgue measure. Given k ∈ N, denote by B_k the σ-algebra consisting of all A ∈ A such that A + n2π/k = A (mod 2π) for all 1 ≤ n ≤ k. What is the conditional expectation E[X|B_k] for a random variable X ∈ L^1?

3.2 Martingales

It is typical in probability theory that one considers several σ-algebras on a probability space (Ω, A, P). These algebras are often defined by a set of random variables, especially in the case of stochastic processes. Martingales are discrete stochastic processes which generalize the process of summing up IID random variables. They are a powerful tool with many applications. In this section we follow largely [113].

Definition. A sequence {An}n∈N of sub-σ-algebras of A is called a filtration if A0 ⊂ A1 ⊂ · · · ⊂ A. Given a filtration {An}n∈N, one calls (Ω, A, {An}n∈N, P) a filtered space.

Example. If Ω = {0,1}^N is the space of all 0-1 sequences with the Borel σ-algebra generated by the product topology, and An is the σ-algebra generated by the cylinder sets A = {x1 = a1, ..., xn = an} with ai ∈ {0,1}, of which there are 2^n, then {An}n∈N is a filtration.

Definition. A sequence X = {Xn}n∈N of random variables is called a discrete stochastic process or simply process. It is an L^p process if each Xn is in L^p. A process is called adapted to the filtration {An} if Xn is An-measurable for all n ∈ N.

Example. For Ω = {0,1}^N as above, the process Xn(x) = Π_{i=1}^n xi is a stochastic process adapted to the filtration. Also Sn(x) = Σ_{i=1}^n xi is adapted to the filtration.

Definition. An L^1 process which is adapted to a filtration {An} is called a martingale if
  E[Xn|An−1] = Xn−1 for all n ≥ 1.
It is called a supermartingale if E[Xn|An−1] ≤ Xn−1 and a submartingale if E[Xn|An−1] ≥ Xn−1. If we mean either submartingale or supermartingale (or martingale), we speak of a semimartingale. If X is a supermartingale, then −X is a submartingale and vice versa; a supermartingale which is also a submartingale is a martingale.

Remark. From the tower property of conditional expectation it immediately follows that for a martingale, E[Xn|Am] = Xm if m < n:
  E[Xn|Am] = E[E[Xn|An−1]|Am] = E[Xn−1|Am] = · · · = Xm ,
and that E[Xn] is constant.

Remark. Allan Gut mentions in [35] that a martingale is an allegory for "life" itself: the expected state of the future, given the past history, is equal to the present state, and on average, nothing happens. The word "martingale" means a gambling system in which losing bets are doubled. It is also the name of a part of a horse's harness, and of a belt on the back of a man's coat.

Figure. Joseph Leo Doob (1910-2004), who developed basic martingale theory and many applications.

Example. A random variable X on the unit square Ω = [0,1] × [0,1] defines a gray scale picture if we interpret X(x,y) as the gray value at the point (x,y). The partitions An = { [k/2^n, (k+1)/2^n) × [j/2^n, (j+1)/2^n) } define a filtration of Ω, and the sequence of pictures E[X|An] shows the picture at finer and finer resolution. It is a martingale.

Exercise. Determine from a sequence of such pictures, in which the images get brighter and brighter on average as the resolution becomes better, whether it is a supermartingale or a submartingale.

Remark. Since we can change X to X − X0 without destroying any of the martingale properties, we could assume the process is null at 0, which means X0 = 0.

Example. Conditional expectation. Given a random variable X ∈ L^1 on a filtered space (Ω, A, {An}n∈N, P). Then Xn = E[X|An] is a martingale: by the tower property,
  E[Xn|An−1] = E[E[X|An]|An−1] = E[X|An−1] = Xn−1 .

Example. Sum of independent random variables. Let Xi ∈ L^1 be a sequence of independent random variables with mean E[Xi] = 0. Define S0 = 0, Sn = Σ_{k=1}^n Xk and An = σ(X1, ..., Xn) with A0 = {∅, Ω}. Then Sn is a martingale, since Sn is an {An}-adapted L^1 process and
  E[Sn|An−1] = E[Sn−1|An−1] + E[Xn|An−1] = Sn−1 + E[Xn] = Sn−1 .
We have used linearity and the independence property of the conditional expectation.

Definition. If a martingale Xn is given with respect to a filtered space An = σ(Y0, ..., Yn), where Yn is a given process, X is called a martingale with respect to Y. Note that because Xn is by definition σ(Y0, ..., Yn)-measurable, there exist Borel measurable functions hn: R^{n+1} → R such that Xn = hn(Y0, ..., Yn). Especially: given a sequence Yn of random variables, An = σ(Y0, ..., Yn) is a filtered space and Xn = E[X|Y0, ..., Yn] is a martingale. Proof: by the tower property,
  E[Xn|An−1] = E[E[X|Y0, ..., Yn] | Y0, ..., Yn−1] = E[X|Y0, ..., Yn−1] = Xn−1 .

Exercise. a) Verify that if Xn, Yn are two submartingales, then sup(X, Y) is a submartingale. b) If Xn is a supermartingale, then E[Xn] ≤ E[Xn−1]. c) If Xn is a martingale, then E[Xn] = E[Xn−1].
Example. Product of positive variables. Given a sequence Yn of independent random variables Yn ≥ 0 satisfying E[Yn] = 1. Define X0 = 1, Xn = Π_{i=1}^n Yi and An = σ(Y1, ..., Yn). Then Xn is a martingale. This is an exercise. Note that the martingale property does not follow directly by taking logarithms.

Example. Product of matrix-valued random variables. Given a sequence of independent random variables Zn with values in the group GL(N,R) of invertible N × N matrices, let An = σ(Z1, ..., Zn). We can look at the sequence of random variables Yn = Z1 · Z2 · · · Zn, where · denotes matrix multiplication, and define the real-valued random variables Xn = log ||Z1 · Z2 · · · Zn||, where ||Zn|| denotes the norm of the matrix (the square root of the maximal eigenvalue of Zn · Zn*, where Zn* is the adjoint). Assume E[log ||Zn||] ≤ 0. Because Xn ≤ log ||Zn|| + Xn−1, we get
  E[Xn|An−1] ≤ E[log ||Zn|| | An−1] + E[Xn−1|An−1] = E[log ||Zn||] + Xn−1 ≤ Xn−1 ,
so that Xn is a supermartingale. In ergodic theory, such a matrix-valued process Xn is called subadditive.

Example. Polya's urn scheme. An urn contains initially a red and a black ball. At each time n ≥ 1, a ball is taken randomly, its color noted, and both this ball and another ball of the same color are placed back into the urn. Like this, after n draws, the urn contains n + 2 balls. Define Yn as the number of black balls after n moves and Xn = Yn/(n + 2), the fraction of black balls. We claim that X is a martingale with respect to Y. The random variables Yn take values in {1, ..., n+1}, and clearly
  P[Yn+1 = k+1 | Yn = k] = k/(n+2),  P[Yn+1 = k | Yn = k] = 1 − k/(n+2) .
Therefore
  E[Xn+1|Y1, ..., Yn] = E[Yn+1|Y1, ..., Yn]/(n+3)
   = (1/(n+3)) [ (Yn + 1) · Yn/(n+2) + Yn · (1 − Yn/(n+2)) ]
   = Yn/(n+2) = Xn .
Note that Xn is not independent of Xn−1. The process "learns" in the sense that if there are more black balls, then the chances to draw a black ball are better.
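Polya's urn can be followed exactly: evolving the distribution of Yn with the transition probabilities above shows that E[Xn] stays at 1/2, as the martingale property demands, and also that Yn is in fact uniformly distributed on {1, ..., n+1}. A sketch (illustrative code, not part of the text):

```python
from fractions import Fraction

def step(dist, n):
    """One draw: dist maps k = number of black balls to P[Y_n = k];
    the urn holds n + 2 balls before the draw."""
    new = {}
    for k, p in dist.items():
        up = Fraction(k, n + 2)                 # probability of drawing black
        new[k + 1] = new.get(k + 1, 0) + p * up
        new[k] = new.get(k, 0) + p * (1 - up)
    return new

dist = {1: Fraction(1)}                         # start: one black, one red
means = []
for n in range(10):
    means.append(sum(k * p for k, p in dist.items()) / (n + 2))  # E[X_n]
    dist = step(dist, n)
print(means)  # every entry is 1/2, as forced by the martingale property
```

The final distribution `dist` of Y_10 comes out uniform on {1, ..., 11}, a classical feature of this urn.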
Figure. A typical run of 30 experiments with Polya's urn scheme.

Example. Branching processes. Let Zni, i ≥ 1, be IID, integer-valued random variables with positive finite mean m. Define Y0 = 1 and
  Yn+1 = Σ_{k=1}^{Yn} Znk ,
with the convention that for Yn = 0, the sum is zero. The branching process can be used to model population growth, disease epidemics or nuclear reactions: think of Yn as the size of a population at time n and of Zni as the number of progenies of the i-th member of the n-th generation. It is possible that the process dies out, but often it grows exponentially. We claim that Xn = Yn/m^n is a martingale with respect to Y. By the independence of Yn and the Zni, we have for every n
  E[Yn+1|Y0, ..., Yn] = E[ Σ_{k=1}^{Yn} Znk | Y0, ..., Yn ] = m Yn ,
so that
  E[Xn+1|Y0, ..., Yn] = E[Yn+1|Y0, ..., Yn]/m^{n+1} = m Yn/m^{n+1} = Xn .

Figure. A typical growth of Yn of a branching process. In this example, the random variables Zni had a Poisson distribution with mean m = 1.

Proposition 3.2.1. a) If X is a martingale and u is convex such that u(Xn) ∈ L^1, then Y = u(X) is a submartingale. b) If u is monotone and convex and X is a submartingale such that u(Xn) ∈ L^1, then u(X) is a submartingale. Especially, if X is a martingale, then |X| is a submartingale.

Proof. a) We have by the conditional Jensen property (3.1.4)
  Yn = u(Xn) = u(E[Xn+1|An]) ≤ E[u(Xn+1)|An] = E[Yn+1|An] .
b) Use the conditional Jensen property again and the monotonicity of u to get
  Yn = u(Xn) ≤ u(E[Xn+1|An]) ≤ E[u(Xn+1)|An] = E[Yn+1|An] .

Proposition 3.2.2. Let An be a fixed filtered sequence of σ-algebras. Linear combinations of martingales over An are again martingales over An. Submartingales and supermartingales form cones: if for example X, Y are submartingales and a, b > 0, then aX + bY is a submartingale.

Proof. Use the linearity and positivity of the conditional expectation.

Definition. A process X is called bounded if Xn ∈ L^∞ and there exists K ∈ R such that ||Xn||_∞ ≤ K for all n ∈ N. A stochastic process C = {Cn}n≥1 is called previsible if Cn is An−1-measurable. Previsible processes can only see the past and not the future; in some sense, we can predict them.

Definition. Given a semimartingale X and a previsible process C, the process
  (∫C dX)_n = Σ_{k=1}^n Ck (Xk − Xk−1)
is called a discrete stochastic integral or a martingale transform.

Theorem 3.2.3 (The system can't be beaten). If C is a bounded nonnegative previsible process and X is a supermartingale, then ∫C dX is a supermartingale. The same statement is true for submartingales and martingales.
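The content of the "system can't be beaten" theorem can be checked by brute force on a finite coin-flip space: for the symmetric ±1 walk, any previsible stake process leaves the expected winnings at zero. The sketch below (illustrative code, not part of the text) enumerates all sign sequences; the doubling-flavored stake rule is an arbitrary hypothetical choice.

```python
from fractions import Fraction
from itertools import product

N = 6  # horizon

def winnings(signs):
    """(int C dX)_N along one +-1 path of increments `signs`, with the
    previsible stake C_n = 2 if the walk is currently negative, else 1."""
    x, total = 0, Fraction(0)
    for s in signs:
        c = 2 if x < 0 else 1   # previsible: depends only on X_0, ..., X_{n-1}
        total += c * s          # the stake is fixed before the step s is seen
        x += s
    return total

paths = list(product([1, -1], repeat=N))
mean = sum(winnings(p) for p in paths) / Fraction(len(paths))
print(mean)  # -> 0: no previsible strategy changes the expected winnings
```

For each prefix of coin flips, the two equally likely continuations ±1 carry the same stake, so their contributions cancel exactly; that is the combinatorial core of the theorem.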
Proof. Let Y = ∫C dX. From the property of "extracting knowledge" in theorem (3.1.4), we get
  E[Yn − Yn−1|An−1] = E[Cn (Xn − Xn−1)|An−1] = Cn · E[Xn − Xn−1|An−1] ≤ 0 ,
because Cn is nonnegative and X is a supermartingale.

Remark. Here is an interpretation: if Xn represents your capital in a game, then Xn − Xn−1 are the net winnings per unit stake in game n. If Cn is the stake on game n, then
  (∫C dX)_n = Σ_{k=1}^n Ck (Xk − Xk−1)
are the total winnings up to time n. A martingale represents a fair game, since E[Xn − Xn−1|An−1] = 0, whereas a supermartingale is a game which is unfavorable to you. The above theorem tells that you cannot find a strategy for placing your stakes which makes an unfavorable game fair.

Remark. If one wants to relax the boundedness of C, then one has to strengthen the condition on X: the theorem stays true if both C and X are L^2 processes.

Figure. In this example, the increments Xn − Xn−1 take the values ±1 with probability 1/2, and Cn = 1 if Xn−1 is even, Cn = 0 if Xn−1 is odd. The original process Xn is a symmetric random walk and so a martingale. The new process ∫C dX is again a martingale.

Exercise. a) Let Y1, Y2, ... be a sequence of independent nonnegative random variables satisfying E[Yk] = 1 for all k ∈ N. Define X0 = 1, Xn = Y1 · · · Yn and An = σ(Y1, ..., Yn). Show that Xn is a martingale. b) Let Zn be a sequence of independent random variables taking values in the set of n × n matrices satisfying E[||Zn||] = 1. Define X0 = 1, Xn = ||Z1 · · · Zn||. Show that Xn is a supermartingale.

Definition. A random variable T with values in N̄ = N ∪ {∞} is called a random time. A random time T is called a stopping time with respect to a filtration An if {T ≤ n} ∈ An for all n ∈ N. Define also A∞ = σ(∪_{n≥0} An).

Remark. A random time T is a stopping time if and only if {T = n} ∈ An for all n ∈ N, since {T ≤ n} = ∪_{0≤k≤n} {T = k} ∈ An.

Remark. Here is an interpretation: stopping times are random times whose occurrence can be determined without pre-knowledge of the future. The term comes from gambling: whether or not you stop after the n-th game depends only on the history up to and including time n. A gambler is forced to stop playing if his capital is zero.

Example. First entry time. Let Xn be an An-adapted process with values in R^d and let B ∈ B be a Borel set in R^d. Define
  T(ω) = inf{ n ≥ 0 | Xn(ω) ∈ B } ,
the time of first entry of Xn into B. Obviously
  {T ≤ n} = ∪_{k=0}^n {Xk ∈ B} ∈ An ,
so that T is a stopping time. The set {T = ∞} is the set which never enters B.

Example. "Continuous BlackJack": let Xi be IID random variables with uniform distribution in [0,1]. Define Sn = Σ_{k=1}^n Xk and let T(ω) be the smallest integer so that Sn(ω) > 1. This is a stopping time. A popular problem asks for the expectation of this random variable T: how many "cards" Xi do we have to draw until we get busted and the sum is larger than 1? We obviously have P[T = 1] = 0, and P[T = 2] = P[X2 > 1 − X1] is the area of the region {(x,y) ∈ [0,1]^2 | y > 1 − x}, which is 1/2. In general, {T > k} = {Sk ≤ 1}, and P[Sk ≤ 1] is the volume of the simplex {x1 + · · · + xk ≤ 1} in the unit cube; for k = 3 this is the volume of the solid {(x,y,z) ∈ [0,1]^3 | x + y + z ≤ 1}, which is 1/6 = 1/3!. Inductively, P[T > k] = 1/k!, so that the expectation of T is
  E[T] = Σ_{k≥0} P[T > k] = Σ_{k≥0} 1/k! = e .
This means that if we play BlackJack with uniformly distributed random variables and threshold 1, we expect to get busted with more than 2, but less than 3 "cards".
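The answer E[T] = e in the continuous BlackJack example invites a quick check. The sketch below (illustrative code, not part of the text) compares a seeded Monte Carlo estimate with the series Σ_k P[T > k] = Σ_k 1/k!.

```python
import math
import random

random.seed(7)

def draws_to_bust():
    """Number of uniform [0,1] draws until the running sum exceeds 1."""
    s, n = 0.0, 0
    while s <= 1.0:
        s += random.random()
        n += 1
    return n

trials = 200_000
est = sum(draws_to_bust() for _ in range(trials)) / trials
series = sum(1.0 / math.factorial(k) for k in range(20))  # sum_k P[T > k]
print(est, series)  # both close to e = 2.71828...
```

The Monte Carlo estimate fluctuates on the order of 1/sqrt(trials), while the truncated series agrees with e to machine precision.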
Example. Last exit time. Assume the same setup as in the first entry time example, but define this time
  T(ω) = sup{ n ≥ 0 | Xn(ω) ∈ B } .
This is not a stopping time, since it is impossible to know that X will return to B after some time k without knowing the whole future.

Proposition 3.2.4. Let T1, T2 be two stopping times. The infimum T1 ∧ T2, the maximum T1 ∨ T2, as well as the sum T1 + T2 are stopping times.
Proof. This is obvious from the definition, because An-measurable functions are closed under taking minima, maxima and sums.

Definition. Given a stochastic process Xn which is adapted to a filtration An, and a stopping time T with respect to An, define the random variable
  XT(ω) = X_{T(ω)}(ω) if T(ω) < ∞, and XT(ω) = 0 else,
or equivalently XT = Σ_{n=0}^∞ Xn 1_{T=n}.

Definition. The process X^T_n = X_{T∧n} is called the stopped process. It is equal to XT for times n ≥ T and equal to Xn for times n < T.

Proposition 3.2.5. If X is a supermartingale and T is a stopping time, then the stopped process X^T is a supermartingale; in particular E[X^T_n] ≤ E[X0]. The same statement is true if supermartingale is replaced by martingale, in which case E[X^T_n] = E[X0].

Proof. Define the "stake process" C^(T) by C^(T)_n = 1_{n≤T}. The process C^(T) is previsible, since it can only take the values 0 and 1 and {C^(T)_n = 0} = {T ≤ n−1} ∈ An−1. You can think of it as betting one unit and quitting the game immediately after time T. Define then the "winning process"
  (∫C^(T) dX)_n = Σ_{k=1}^n C^(T)_k (Xk − Xk−1) = X_{T∧n} − X0 ,
or shortly ∫C^(T) dX = X^T − X0. The claim follows from the "system can't be beaten" theorem.

Remark. It is important that we take the stopped process X^T and not the random variable XT: for the random walk X on Z starting at 0, let T be the stopping time T = inf{ n | Xn = 1 }. As we will see later on, when we treat the random walk in more detail, the random walk is recurrent in one dimension, so that P[T < ∞] = 1. The above proposition gives E[X^T_n] = E[X0] = 0. However,
  1 = E[XT] ≠ E[X0] = 0 .
This is the martingale strategy in the casino which gave the name to these processes. When can we say E[XT] = E[X0]? The answer is given by Doob's optimal stopping time theorem:
Theorem 3.2.6 (Doob's optimal stopping time theorem). Let X be a supermartingale and T be a stopping time. If one of the five following conditions is true:
(i) T is bounded,
(ii) X is bounded and T is almost everywhere finite,
(iii) T ∈ L^1 and |Xn − Xn−1| ≤ K for some K > 0,
(iv) XT ∈ L^1 and lim_{k→∞} E[Xk; {T > k}] = 0,
(v) X is uniformly integrable and T is almost everywhere finite,
then E[XT] ≤ E[X0]. If X is a martingale and any of the five conditions is true, then E[XT] = E[X0].

Proof. (i) Because T is bounded, we can take n = sup_ω T(ω) < ∞ and get
  E[XT − X0] = E[X_{T∧n} − X0] ≤ 0 .
(ii) We know that E[X_{T∧n} − X0] ≤ 0 because X is a supermartingale. Use the dominated convergence theorem to get
  lim_{n→∞} E[X_{T∧n} − X0] ≤ 0 .
(iii) We estimate
  |X_{T∧n} − X0| = | Σ_{k=1}^{T∧n} (Xk − Xk−1) | ≤ Σ_{k=1}^{T∧n} |Xk − Xk−1| ≤ T K .
Because T ∈ L^1, the result follows from the dominated convergence theorem: since for each n we have E[X_{T∧n} − X0] ≤ 0, this remains true in the limit n → ∞.
(iv) By (i), we get
  E[X0] ≥ E[X_{T∧k}] = E[XT; {T ≤ k}] + E[Xk; {T > k}] .
Since E[XT; {T ≤ k}] → E[XT] by the dominated convergence theorem, and E[Xk; {T > k}] → 0 by assumption, taking the limit gives E[X0] ≥ E[XT].
(v) The uniform integrability E[Xn; |Xn| > R] → 0 for R → ∞ assures first that XT ∈ L^1, and second that E[Xk; {T > k}] ≤ sup_n E[Xn; {T > k}] → 0, since P[T > k] → 0. We can therefore apply (iv).
If X is a martingale, we use the supermartingale case for both X and −X.

Remark. The interpretation of this result is that a fair game cannot be made unfair by sampling it with bounded stopping times. The martingale strategy mentioned in the introduction shows that for unbounded stopping times, there is a "winning strategy": the player always wins, she just has to double the bet until the coin changes sign. But it assumes an "infinitely thick wallet". With the martingale strategy one has T = n with probability 1/2^n. With a finite but large initial capital, there is a very small risk to lose, but if they lose, their loss can be huge. You see that in the real world: players with large capital in the stock market mostly win, but when they lose, the loss is large.

Theorem 3.2.7 (No winning strategy). Assume X is a martingale and suppose |Xn − Xn−1| is bounded. Given a previsible process C which is bounded, and a stopping time T ∈ L^1, then E[(∫C dX)_T] = 0.

Proof. We know that ∫C dX is a martingale, and since (∫C dX)_0 = 0 and its increments Ck(Xk − Xk−1) are bounded, the claim follows from the optimal stopping time theorem, part (iii).

Martingales can be characterized involving stopping times:

Theorem 3.2.8 (Komatsu's lemma). Let X be an An-adapted sequence of random variables in L^1 such that for every bounded stopping time T,
  E[XT] = E[X0] .
Then X is a martingale with respect to An.

Proof. Fix n ∈ N and A ∈ An. The map
  T = n + 1 − 1_A = n on A, n + 1 on A^c
is a stopping time, because {T ≤ k} is ∅ for k < n, equal to A ∈ An for k = n, and equal to Ω for k ≥ n + 1. Apply E[XT] = E[X0] and E[XT'] = E[X0] for the bounded constant stopping time T' = n + 1 to get
  E[Xn; A] + E[Xn+1; A^c] = E[XT] = E[X0] = E[XT'] = E[Xn+1] = E[Xn+1; A] + E[Xn+1; A^c] ,
so that E[Xn+1; A] = E[Xn; A]. Since this is true for any A ∈ An, we know that E[Xn+1|An] = E[Xn|An] = Xn, and X is a martingale.

Example. The gambler's ruin problem is the following question: let Yi be IID with P[Yi = ±1] = 1/2 and let Xn = Σ_{k=1}^n Yk be the random walk
with X0 = 0. Given a, b > 0, we define the stopping time
  T = min{ n ≥ 0 | Xn = b, or Xn = −a } .
If the Yi are the outcomes of a series of fair gambles between two players A and B, and the random variables Xn are the net change in the fortune of the gamblers after n independent games, then T is the time at which one of the gamblers is broke: at the beginning, A has fortune a and B has fortune b. If Xn is the winning of the first gambler, which is the loss of the second gambler, then P[XT = −a] is the ruin probability of A and P[XT = b] is the ruin probability of B. We want to compute P[XT = −a] and P[XT = b] in dependence of a, b.

Figure. Three samples of a process Xn starting at X0 = 0. The process is stopped with the stopping time T, when Xn hits the lower bound −a or the upper bound b.

Remark. The stopping time T is finite almost everywhere. One can see this from the law of the iterated logarithm,
  lim sup_n Xn/Λn = 1, lim inf_n Xn/Λn = −1 .
(We will give later a direct proof of the finiteness of T, when we treat the random walk in more detail.)

Proposition 3.2.9.
  P[XT = −a] = 1 − P[XT = b] = b/(a + b) .

Proof. We know that X is a martingale with respect to Y. We check that X satisfies condition (iv) in Doob's stopping time theorem: since XT takes values in {−a, b}, it is in L^1, and because on the set {T > k} the value of Xk is in (−a, b), we have
  |E[Xk; {T > k}]| ≤ max{a, b} P[T > k] → 0 .
It follows that P[XT = −a] = 1 − P[XT = b] and
  0 = E[X0] = E[XT] = −a P[XT = −a] + b P[XT = b] ,
which gives P[XT = −a] = b/(a + b).
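The ruin probability b/(a + b) is easy to check by simulation. A sketch (illustrative code, not part of the text), with the hypothetical choice a = 2, b = 3, for which the predicted ruin probability of A is 3/5:

```python
import random

random.seed(1)

def A_is_ruined(a, b):
    """Play fair +-1 rounds until the net change hits -a (A broke) or +b."""
    x = 0
    while -a < x < b:
        x += random.choice((-1, 1))
    return x == -a

a, b, trials = 2, 3, 20_000
p = sum(A_is_ruined(a, b) for _ in range(trials)) / trials
print(p)  # the proposition predicts b/(a+b) = 0.6
```

Each game ends in expected time a·b (here 6 steps), so the simulation is cheap; the estimate should land within a few standard errors of 0.6.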
Remark. This fact leads to the "martingale" gambling strategy defined by doubling the bet when losing. If you are A and the casino is B, with b = 1 and a = ∞, then P[XT = b] = 1, which means that the casino is ruined with probability 1. If the casinos would not impose a bound on the possible stakes, this gambling strategy would lead to wins. But you have to go there with enough money.

Theorem 3.2.10 (Wald's identity). Let Yi be L^∞ IID random variables with expectation E[Yi] = m, and let T ∈ L^1 be a stopping time. The process Sn = Σ_{k=1}^n Yk satisfies
  E[ST] = m E[T] .

Proof. The process Xn = Sn − n E[Y1] is a martingale satisfying condition (iii) in Doob's stopping time theorem: T ∈ L^1, and the increments |Xn − Xn−1| = |Yn − m| are bounded because the Yi are in L^∞. Therefore
  0 = E[X0] = E[XT] = E[ST − T E[Y1]] .
Now solve for E[ST] = E[T] E[Y1] = m E[T].

Remark. In other words, if we play a game where the expected gain in each step is m, and the game is stopped with a random time T which has expectation t = E[T], then we expect to win mt. One could also assume Y to be an L^2 process and T to be in L^2.

Remark. Some condition in Doob's stopping time theorem is necessary: for the random walk with T = inf{ n | Xn = 1 }, one has E[XT] = 1 but E[X0] = 0, which shows that some condition on T or X has to be imposed.

3.3 Doob's convergence theorem

Definition. Given a stochastic process X and two real numbers a < b, we define the random variable
  Un[a,b](ω) = max{ k ∈ N | ∃ 0 ≤ s1 < t1 < · · · < sk < tk ≤ n with X_{si}(ω) < a, X_{ti}(ω) > b for 1 ≤ i ≤ k } .
It is called the number of upcrossings of [a,b] up to time n. Because n ↦ Un[a,b] is monotone, the limit
  U∞[a,b] = lim_{n→∞} Un[a,b]
exists in N ∪ {∞}.
The random variable Un[a,b] with values in N measures the number of upcrossings in the time interval [0,n]: an upcrossing begins at a time s with Xs < a and is completed at the first later time t with Xt > b.

Figure. A random walk crossing the two values a < b, with upcrossing intervals [s1, t1], [s2, t2].

Theorem 3.3.1 (Doob's upcrossing inequality). If X is a supermartingale, then
  (b − a) E[Un[a,b]] ≤ E[(Xn − a)^−] .

Proof. The proof uses the following strategy for placing your stakes C: wait until X gets below a; then play unit stakes until X gets above b, and stop playing; wait again until X gets below a, and so on. Formally, define C1 = 1_{X0 < a} and inductively for n ≥ 2
  Cn := 1_{Cn−1 = 1} 1_{Xn−1 ≤ b} + 1_{Cn−1 = 0} 1_{Xn−1 < a} .
It is a previsible process. Define the winning process Y = ∫C dX, which satisfies by definition Y0 = 0. We have the winning inequality
  Yn(ω) ≥ (b − a) Un[a,b](ω) − (Xn(ω) − a)^− :
every upcrossing of [a,b] increases the Y value (the winning) by at least (b − a), while (Xn − a)^− is essentially the loss during the last interval of play. Since C is previsible, bounded and nonnegative, we know that Yn is also a supermartingale (see "the system can't be beaten"), and we have therefore E[Yn] ≤ 0. Taking expectations of the winning inequality leads to the claim.

Definition. We say a stochastic process Xn is bounded in L^p if there exists M ∈ R such that ||Xn||_p ≤ M for all n ∈ N.

Corollary 3.3.2. If X is a supermartingale which is bounded in L^1, then for all a < b,
  P[U∞[a,b] = ∞] = 0 .

Proof. By the upcrossing inequality, we have for each n ∈ N
  (b − a) E[Un[a,b]] ≤ |a| + E[|Xn|] ≤ |a| + sup_n ||Xn||_1 < ∞ .
By the monotone convergence theorem,
  (b − a) E[U∞[a,b]] < ∞ ,
which gives the claim.

Theorem 3.3.3 (Doob's convergence theorem). Let Xn be a supermartingale which is bounded in L^1. Then
  X∞ = lim_{n→∞} Xn
exists almost everywhere, and is almost everywhere finite.

Proof. The set where Xn fails to converge is
  Λ = {ω ∈ Ω | Xn has no limit in [−∞, ∞]}
    = {ω ∈ Ω | lim inf Xn < lim sup Xn}
    = ∪_{a<b, a,b∈Q} {ω ∈ Ω | lim inf Xn < a < b < lim sup Xn} = ∪_{a<b, a,b∈Q} Λ_{a,b} .
Since Λ_{a,b} ⊂ {U∞[a,b] = ∞}, we have P[Λ_{a,b}] = 0 and therefore also P[Λ] = 0. Therefore X∞ = lim_{n→∞} Xn exists almost surely. By Fatou's lemma,
  E[|X∞|] = E[lim inf_{n→∞} |Xn|] ≤ lim inf_{n→∞} E[|Xn|] ≤ sup_n E[|Xn|] < ∞ ,
so that P[|X∞| < ∞] = 1.

Example. If Sn = Σ_{k=1}^n Xk is the one-dimensional random walk, then it is a martingale which is unbounded in L^1, and the theorem does not apply.

Example. Let X be a random variable on ([0,1), A, P), where P is the Lebesgue measure. The finite σ-algebra An generated by the intervals A_k = [k/2^n, (k+1)/2^n) defines a filtration, and Xn = E[X|An] is a martingale which converges. We will see below, with Lévy's upward theorem, that the limit actually is the random variable X itself.
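The upcrossing number Un[a,b] mirrors the stake process in the proof of the upcrossing inequality: arm once the path drops below a, score once it then exceeds b. A sketch of the counter (illustrative code, not part of the text):

```python
def upcrossings(path, a, b):
    """Count U_n[a,b]: completed excursions from below a to above b."""
    count, armed = 0, False  # armed: the path has visited (-inf, a) and
    for x in path:           # has not yet climbed above b since then
        if not armed and x < a:
            armed = True
        elif armed and x > b:
            count += 1
            armed = False
    return count

path = [0, -1, 2, 3, -2, 1, 4, -1, 0, 5]
print(upcrossings(path, 0, 2))  # -> 3
```

On the sample path, the excursions -1 to 3, -2 to 4 and -1 to 5 are the three upcrossings of [0,2].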
then X∞ = limn→∞ Xn exists almost everywhere and is finite. Sn 1 ≤ 1−λ The martingale converges by Doob’s convergence theorem almost surely.3. which was defined earlier. Since the process Y giving the fraction of black balls is a martingale and bounded 0 ≤ Y ≤ 1. the Pn variables k branching random walk Sn = k=0 λ Xk is a martingale which is bounded in L1 because 1 X0 1 .11. Remark. We prove this by induction. this leads to E[θYn+1 ] = E[f (θ)Zn ] . Doobs convergence theorem (3. This corollary is also true for nonpositive submartingales or martingales. Remark. Since the supermartingale property gives E[Xn ] = E[Xn ] ≤ E[X0 ]. We n saw that the process Xn = Yn /m is nonnegative and a martingale. . For n = 1 this is trivial. Let Xk be IID random in L1 . Proof. (i) Yn has the generating function f n (θ) = f (f n−1 )(θ). We look again at Polya’s urn scheme.3) if Xk ∈ L2 . Write α = f (θ) and use induction to simplify the right hand side to E[f (θ)Yn ] = E[αYn ] = f n (α) = f n (f (θ)) = f n+1 (θ) . Apply Doob’s convergence theorem. the process Xn is bounded in L1 . Of course.3. For 0 < λ < 1. It is an interesting problem to find the distribution of X∞ : Assume Zni have the generating function f (θ) = E[θZni ].152 Chapter 3. we had IID randomPvariables Zni Yn with positive finite mean m and defined Y0 = 0. If X is a nonnegative supermartingale. we can apply the convergence theorem: there exists Y∞ with Yn → Y∞ . By the tower property. According to the above corollary. Yn+1 = k=1 Znk . Example. which are either nonnegative or nonpositive. Example. For the Branching process. the limit X∞ exists almost everywhere. Discrete Stochastic Processes Example.3) assures convergence for Xk ∈ L1 . we can replace supermartingale by submartingale or martingale in the theorem. Corollary 3. Using the independence of Znk we have E[θYn+1 Yn = k] = f (θ)k and so E[θYn+1 Yn ] = f (θ)Zn .4. One can also deduce this from Kolmogorov’s theorem (2.
Xn ) ∈ L1 (X) and which is subadditive with respect to a measure preserving transformation T . Remark. Related to Doob’s convergence theorem for supermartingales is Kingman’s subadditive ergodic theorem.3.3. Since Xn → X∞ almost everywhere. which generalizes Birkhoff ergodic theorem and which we state without proof. which Xn : X → R ∪ {−∞} with Xn+ := max(0. For the branching process defined by IID random variables Zni having the generating function f . Neither of the two theorems are however corollaries of each other. A sequence of random variables Xn is called subadditive with respect to a measure preserving transformation T . Given a sequence of random variables. the Fourier transform L(λ) = E[eiλX∞ ] of the distribution of the limit martingale X∞ can be computed by solving the functional equation L(λ · m) = f (L(λ)) . Theorem 3.153 3. we have n L(Xn )(λ) = f n (eiλ/m ) so that L satisfies the functional equation L(λm) = f (L(λ)) . we have L(Xn )(λ) → L(X∞ )(λ). .3. if Xm+n ≤ Xm +Xn (T m ) almost everywhere. Furthermore n1 E[Xn ] → E[X]. If f has no analytic extension to the complex plane. we have to replace the Fourier transform with the Laplace transform L(λ) = E[e−λX∞ ] . Definition. Remark.6 (The subadditive ergodic theorem of Kingmann).5 (Limit distribution of the branching process). Doob’s convergence theorem (ii) In order to find the distribution of X∞ we calculate instead the characteristic function L(λ) = L(X∞ )(λ) = E[exp(iλX∞ )] . Then there exists a T invariant integrable measurable function X : Ω → R ∪ {−∞} such that n1 Xn (x) → X(x) for almost all x ∈ X. Theorem 3. Since Xn = Yn /mn and E[θYn ] = f n (θ).
By Jensen’s inequality E[Xn ] = E[E[XAn ]] ≤ E[E[XAn ]] ≤ E[X] . Choose δ > 0 such that for all A ∈ A. What we have to show is that it is uniformly integrable. Discrete Stochastic Processes If the condition of boundedness of the process in Doob’s convergence theorem is strengthened a bit by assuming that Xn is uniformly integrable.3. By Doob’s convergence theorem (3. On the other hand. Xn  > K] < ǫ . Choose further K ∈ R such that K −1 · E[X] < δ. then Xn is bounded in L1 and Doob’s convergence theorem gives Xn → X almost everywhere. the condition P[A] < δ implies E[X.7). Proof. . Therefore K · P[Xn  > K] ≤ E[Xn ] ≤ E[X] ≤ δ · K so that P[Xn  > K] < δ. To prove the ”if” part.3. Remark. An An adapted process is an uniformly integrable martingale if and only if Xn → X in L1 and Xn = E[XAn ]. Proof. As a summary we can say that supermartingale Xn which is either bounded in L1 or nonnegative or uniformly integrable converges almost everywhere. then one can reverse in some sense the convergence theorem: Theorem 3. Theorem 3. Given ǫ > 0. A supermartingale Xn is uniformly integrable if and only if there exists X such that Xn → X in L1 . a sequence Xn ∈ L1 converging to X ∈ L1 is uniformly integrable.7 (Doob’s convergence theorem for uniformly integrable supermartingales). By definition of conditional expectation. We already know that Xn = E[XAn ] is a martingale. assume Xn = E[XAn ] → X.154 Chapter 3. Xn  ≤ E[XAn ] and {Xn  > K} ∈ An E[Xn .8 (Characterization of uniformly integrable martingales). A] < ǫ. we know the ”only if”part.3. Xn  > K] ≤ E[X. But a uniformly integrable family Xn which converges almost everywhere converges in L1 . If Xn is uniformly integrable.
155 3. b) Design a martingale Xn . let Yn be the number of black balls after n steps. We assume that the IID random variables Znk have the geometric distribution P[Z = k] = p(1 − p)k = pq k with parameter 0 < p < 1. The probability generating function of this distribution is Z f (θ) = E[θ ] = ∞ X k=1 pq k θk = p . where the iteration of polynomials P ∈ P plays a role. Doob’s convergence theorem Exercise. a) Prove that P[Yn = k] = 1/(n + 1) for every 1 ≤ k ≤ n + 1. We have seen that the Laplace transform L(λ) = E[e−λX∞ ] of the limit X∞ satisfies the functional equation L(mλ) = f (L(λ)) .3. In Polya’s urn process. 1 − qθ . Exercise. b) Show that for every supermartingale X and stopping times S ≤ T the inequality E[XT ] ≤ E[XS ] holds. P Yn Example. b) Compute the distribution of the limit X∞ . Exercise. We have seen that X is a martingale. a) Which polynomials f can you realize as generating functions of a probability distribution? Denote this class of polynomials with P. Let S and T be stopping times satisfying S ≤ T . a) Show that the process Cn (ω) = 1{S(ω)<n≤T (ω)} is previsible. The branching process Yn+1 = k=1 Znk defined by random variables Znk having generating function f and mean m defines a martingale Xn = Yn /mn . c) Use one of the consequences of Doob’s convergence theorem to show that the dynamics of every polynomial P ∈ P on the positive axis can be conjugated to a linear map T : z 7→ mz: there exists a map L such that L ◦ T (z) = P ◦ L(z) for every z ∈ R+ . Let Xn = Yn /(n + 2) be the fraction of black balls.
As we have seen in proposition (2.5),

E[Z] = Σ_{k=1}^∞ k p q^k = q/p .

The function f^n(θ) can be computed as

f^n(θ) = (p m^n (1 − θ) + qθ − p) / (q m^n (1 − θ) + qθ − p) .

This is because f is a Möbius transformation, and iterating f corresponds to taking the power A^n of the matrix A = ( 0 p ; −q 1 ). This power can be computed by diagonalizing A:

A^n = (q − p)^{−1} ( 1 p ; 1 q ) ( p^n 0 ; 0 q^n ) ( q −p ; −1 1 ) .

We get therefore

L(λ) = E[e^{−λX_∞}] = lim_{n→∞} E[e^{−λY_n/m^n}] = lim_{n→∞} f^n(e^{−λ/m^n}) = (pλ + q − p)/(qλ + q − p) .

If m ≤ 1, then the law of X_∞ is a Dirac mass at 0. In the case m > 1, the law of X_∞ has a point mass at 0 of weight p/q = 1/m and an absolutely continuous part (1/m − 1)² e^{(1/m−1)x} dx. This can be seen by performing a "look up" in a table of Laplace transforms:

L(λ) = (p/q) e^{−λ·0} + ∫_0^∞ (1 − p/q)² e^{(p/q−1)x} · e^{−λx} dx .

Definition. Define p_n = P[Y_n = 0], the probability that the process has died out by time n. Since p_n = f^n(0), we have p_{n+1} = f(p_n), and the limit p of the nondecreasing sequence p_n is called the extinction probability. It satisfies f(p) = p.

Proposition 3.3.9. For a branching process with E[Z] ≤ 1, the extinction probability is 1. For E[Z] > 1, the extinction probability is the unique solution of f(x) = x in (0, 1).

Proof. The generating function f(θ) = E[θ^Z] = Σ_{n=0}^∞ P[Z = n] θ^n is analytic in [0, 1]. It is nondecreasing and satisfies f(1) = 1. If we assume that P[Z = 0] > 0, then f(0) > 0. The value of f′(1) = E[Z] decides whether there exists an attractive fixed point in the interval (0, 1) or not. For E[Z] > 1, there exists a unique solution of f(x) = x in (0, 1), and it satisfies f′(x) < 1. The orbit f^n(u) converges to this fixed point for every u ∈ (0, 1), and this fixed point is the extinction probability of the process. For E[Z] ≤ 1, we see directly that lim_{n→∞} f^n(θ) = 1, which means that the process dies out almost surely.
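The extinction probability for the geometric offspring law can be checked numerically: iterating p_{n+1} = f(p_n) with f(θ) = p/(1 − qθ) from p_0 = 0 converges to p/q when m = q/p > 1 and to 1 otherwise. The parameter values below are illustrative choices:

```python
# Sketch: extinction probability of the branching process with geometric
# offspring law P[Z = k] = p q^k (q = 1 - p), generating function
# f(theta) = p / (1 - q theta) and mean m = q/p.  The extinction
# probability is the limit of p_n = f^n(0).
def extinction_probability(p, iterations=10_000):
    q = 1 - p
    f = lambda theta: p / (1 - q * theta)
    x = 0.0
    for _ in range(iterations):        # p_{n+1} = f(p_n), starting from p_0 = 0
        x = f(x)
    return x

# Supercritical case m = q/p > 1: extinction probability is p/q < 1.
print(extinction_probability(0.3))     # close to 0.3/0.7
# Subcritical case m <= 1: extinction is certain.
print(extinction_probability(0.6))     # close to 1.0
```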
Therefore X∞ exists almost everywhere by Doob’s convergence theorem for uniformly integrable martingales. By Jensen’s inequality. the convergence is in L1 . L´evy’s upward and downward theorems 3. Since E[X S ∞ An ] = E[XAn ]. A] .4. By proving the claim for the positive and negative part.2 (L´evy’s upward theorem). Y = E[XB] satisfies E[Y ] = E[E[XB]] ≤ E[E[XB]] ≤ E[X] . Denote by A∞ the σalgebra generated by S n An . Definition. A]. Given ǫ > 0. XB  > K] < ǫ . Choose δ > 0 such that for all A ∈ A. we can assume that X ≥ 0 (and so Y ≥ 0). B is σ − algebra } is uniformly integrable. XB  > K] ≤ E[X. Given X ∈ L1 . Choose further K ∈ R such that K −1 · E[X] < δ. and since the family Xn is uniformly integrable. Then Xn = E[XAn ] is a uniformly integrable martingale and Xn converges in L1 to X∞ = E[XA∞ ]. Consider the two measures Q1 (A) = E[X. The process X is a martingale. Proof. Theorem 3. They agree therefore everywhere on A∞ . A] < ǫ. By definition of conditional expectation. Proof. Therefore K · P[Y  > K] ≤ E[Y ] ≤ E[X] ≤ δ · K so that P[Y  > K] ≤ δ. Given X ∈ L1 .4 L´evy’s upward and downward theorems Lemma 3. The sequence Xn is uniformly integrable by the above lemma. Then the class of random variables {Y = E[XB]  B ⊂ A.1.157 3. P[A] < δ implies E[X. We have to show that X∞ = Y := E[XA∞ ].4. Define the event . Y  ≤ E[XB] and {Y  > K } ∈ B E[XB . we know that Q1 and Q2 agree on the πsystem n An . Q2 (A) = E[X∞ .4.
A = {E[X | A_∞] > X_∞} ∈ A_∞ .

Since Q_1(A) − Q_2(A) = E[E[X | A_∞] − X_∞, A] = 0, we have E[X | A_∞] ≤ X_∞ almost everywhere. Similarly also X_∞ ≤ E[X | A_∞] almost everywhere.

As an application, we see a martingale proof of Kolmogorov's 0 − 1 law:

Corollary 3.4.3. For any sequence A_n of independent σ-algebras, the tail σ-algebra T = ∩_n B^n with B^n the σ-algebra generated by ∪_{m>n} A_m is trivial.

Proof. Given A ∈ T, define X = 1_A ∈ L^∞(T) and the σ-algebras C^n = σ(A_1, ..., A_n). By Lévy's upward theorem (3.4.2),

X = E[X | C^∞] = lim_{n→∞} E[X | C^n] .

But since C^n is independent of T, we have E[X | C^n] = E[X] = P[A], so that X = P[A] almost everywhere. Because X is 0 − 1 valued and X = P[A], it must be constant, and so P[A] = 1 or P[A] = 0.

Definition. A sequence A_{−n} of σ-algebras satisfying

··· ⊂ A_{−n} ⊂ A_{−(n−1)} ⊂ ··· ⊂ A_{−1}

is called a downward filtration.

Theorem 3.4.4 (Lévy's downward theorem). Given a downward filtration A_{−n} and X ∈ L^1, define X_{−n} = E[X | A_{−n}] and A_{−∞} = ∩_n A_{−n}. Then X_{−∞} = lim_{n→∞} X_{−n} converges in L^1 and X_{−∞} = E[X | A_{−∞}].

Proof. Apply Doob's upcrossing lemma to the uniformly integrable martingale X_k, −n ≤ k ≤ −1: for all a < b, the number of upcrossings is bounded,

U_k[a, b] ≤ (|a| + ||X||_1)/(b − a) .

This implies, in the same way as in the proof of Doob's convergence theorem, that lim_{n→∞} X_{−n} converges almost everywhere. The same argument as before shows that X_{−∞} = E[X | A_{−∞}]: given A ∈ A_{−∞}, we have

E[X, A] = E[X_{−n}, A] = E[X_{−∞}, A] .
Let us also look at a martingale proof of the strong law of large numbers:

Corollary 3.4.5. Given X_n ∈ L^1 which are IID and have mean m. Then S_n/n → m in L^1.

Proof. Define the downward filtration A_{−n} = σ(S_n, S_{n+1}, ...). By symmetry, E[X_1 | A_{−n}] = E[X_i | A_{−n}] = E[X_i | S_n, S_{n+1}, ...] for every i ≤ n, and summing over i ≤ n gives E[X_1 | A_{−n}] = S_n/n. We can apply Lévy's downward theorem to see that S_n/n converges in L^1. Since the limit X is in the tail σ-algebra T, it is by Kolmogorov's 0 − 1 law a constant c, and c = E[X] = lim_{n→∞} E[S_n/n] = m.

3.5 Doob's decomposition of a stochastic process

Definition. A process X_n is increasing if P[X_n ≤ X_{n+1}] = 1.

Theorem 3.5.1 (Doob's decomposition). Let X_n be an A_n-adapted L^1 process. Then

X = X_0 + N + A ,

where N is a martingale null at 0 and A is a previsible process null at 0. This decomposition is unique in L^1. X is a submartingale if and only if A is increasing.

Proof. If X has a Doob decomposition X = X_0 + N + A, then

E[X_n − X_{n−1} | A_{n−1}] = E[N_n − N_{n−1} | A_{n−1}] + E[A_n − A_{n−1} | A_{n−1}] = A_n − A_{n−1} ,

which means that

A_n = Σ_{k=1}^n E[X_k − X_{k−1} | A_{k−1}] .

If we define A like this, we get the required decomposition, and the submartingale characterization is also obvious.

Remark. The corresponding result for continuous time processes is deeper and is called the Doob-Meyer decomposition theorem.
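A concrete instance of the decomposition, chosen here for illustration, is the submartingale X_n = S_n² built from the simple symmetric random walk: E[X_k − X_{k−1} | A_{k−1}] = E[2 S_{k−1} ε_k + 1 | A_{k−1}] = 1, so A_n = n and N_n = S_n² − n is the martingale part. The sketch below confirms E[X_n] = E[A_n] = n by exact enumeration of all paths:

```python
# Sketch: Doob decomposition of the submartingale X_n = S_n^2 for the
# simple symmetric random walk.  The previsible increasing part is
# A_n = n, so N_n = S_n^2 - n is a martingale; in particular
# E[S_n^2] = E[A_n] = n, which we verify exactly over all 2^n paths.
from itertools import product

def expected_Sn_squared(n):
    total = 0
    for steps in product((-1, 1), repeat=n):   # all 2^n equally likely paths
        total += sum(steps) ** 2
    return total / 2 ** n

for n in range(1, 10):
    assert expected_Sn_squared(n) == n
print("E[S_n^2] = n for n = 1..9, so A_n = n is the increasing part")
```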
4 (Doob’s convergence theorem for L2 martingales).5. Because E[Xv − Xu Au ] = Xu − Xu = 0. Proof. Proof. Xn → X almost everywhere for some X.160 Chapter 3.3.3). Let Xn be a L2 martingale which is bounded in L2 . E[Xn2 ] = E[X02 ]+ n X k=1 E[(Xk −Xk−1 )2 ] ≤ E[X02 ]+ ∞ X k=1 E[(Xk −Xk−1 )2 ] < ∞ . it is bounded in L1 so that by Doob’s convergence theorem. v ∈ N with s ≤ t ≤ u ≤ v. then. we have X E[(X − Xn )2 ] ≤ E[(Xk − Xk−1 )2 ] → 0 k≥n+1 .5. Given s. t. The formula n X Xn = X0 + (Xk − Xk−1 ) k=1 expresses Xn as a sum of orthogonal terms and Pythagoras theorem gives the second claim. Proof. then Xn 2 ≤ K < ∞ and Pon the other hand. If X is bounded in L2 .2. Discrete Stochastic Processes Lemma 3. then there exists X ∈ L2 such that Xn → X in L2 . The first claim follows since Xt − Xx ∈ L2 (Au ). Corollary 3. 2 2 k E[(Xk − Xk−1 ) ] ≤ K + E[X0 ].5.5. we know that Xv − Xu is orthogonal to L2 (Au ). Theorem 3. If Xn is bounded in L2 . then E[(Xt − Xs )(Xv − Xu )] = 0 and E[Xn2 ] = E[X02 ] + n X k=1 E[(Xk − Xk−1 )2 ] . By Pythagoras and the previous corollary (3. A L2 martingale X is bounded in L2 if and only if P ∞ 2 k=1 E[(Xk − Xk−1 ) ] < ∞. If Xn is a L2 martingale. by monotonicity of the norm X1 ≤ X2 . u.
From X 2 = N + A. Proof.5. Lemma 3. {An∧S(k) ∈ B } = C1 ∪ C2 with C1 = n−1 [ i=0 C2 {S(k) = i.1. X is in L2 if and only if E[A∞ ] < ∞ since An is increasing. where N is a martingale and A is a previsible increasing process. 161 Definition. Ai ∈ B} ∈ An−1 = {An ∈ B} ∩ {S(k) ≤ n − 1}c ∈ An−1 . One writes also hXi for A so that X 2 = N + hXi . Doob’s decomposition of a stochastic process 2 so that Xn → X in L . Assume Xn − Xn−1 ∞ ≤ K for all n. Assume X is a L2 martingale. Doob’s decomposition theorem allows to write X 2 = N + A. since (X S(k) )2 − ASk = (X 2 − A)S(k) is a martingale. Define A∞ = limn→∞ An point wise. where the limit is allowed to take the value ∞ also.6.5. we get E[Xn2 ] = E[An ] since for a martingale N . Let Xn be a martingale in L2 which is null at 0. The assumption shows that for almost all ω there is a k such that S(k) = ∞. we see that hX S(k) i = AS(k) . Proof. a) We first show that A∞ (ω) < ∞ implies that limn→∞ Xn (ω) converges.4) shows that Xn2 is a submartingale. The conditional Jensen’s inequality (3.3. We can now relate the convergence of the process Xn to the finiteness of A∞ = hXi∞ : Proposition 3. The later process AS(k) is bounded by k so that by the above lemma X S(k) is bounded in L2 . The stopped process AS(k) is also previsible because for B ∈ B R and n ∈ N. Then limn→∞ Xn (ω) converges if and only if A∞ < ∞. Therefore.5. we can define for every k a stopping time S(k) = inf{n ∈ N  An+1 > k }. X is bounded in L2 if and only if E[hXi∞ ] < ∞. the equality E[Nn ] = E[N0 ] holds and N is null at 0. Now.5. Because the process A is previsible.
7 (A strong law for martingales). we also know that limn Xn (ω) exists for almost all ω. Thus E[AT (c)∧n ] ≤ (c + K)2 for all n. This is a contradiction to P[A∞ = ∞. Example. supn Xn  < ∞] > 0. . Discrete Stochastic Processes S(k) and limn X (ω) = limn Xn∧S(k) (ω) exists almost everywhere. A∞ = ∞ ] > 0 . sup Xn  < ∞ ] > 0 . If Yk is a sequence of independent random variables of zero mean and standard deviation σk .162 Chapter 3. Let X be a L2 martingale zero at 0 and let A = hXi. . then b1n k=1 (bk − bk−1 )vk → v∞ . . n Then. Theorem 3.5. (i) C´esaro’s lemma: Given 0 = b0 < b1 ≤ . where T (c) is the stopping time T (c) = inf{n  Xn  > c } . P[T (c) = ∞. Suppose the claim is wrong and that P[A∞ = ∞. . Pn Define2 the Pn 2 = N + A with A = process X = Y . The last proposition implies that X converges almost n Pn everywhere if and only if k=1 σk2 converges. b) Now we prove that the existence of limn→∞ Xn (ω) implies that A∞ (ω) < ∞ almost everywhere. In this case A is a numerical sequence and not n n n n k=1 k a random variable. Proof. Now E[XT2 (c)∧n − AT (c)∧n ] = 0 and X T (c) is bounded by c + K. But since S(k) = ∞ almost everywhere. Write S n n n n k n k=1 E[Yk ] = k=1 Pn 2 2 σ and N = S −A . OfP course we know P this also n n from Pythagoras which assures that Var[Xn ] = k=1 Var[Yk ] = k=1 σk2 2 and implies that Xn converges in L . P bn ≤ bn+1 → ∞ and a n sequence vn ∈ R which converges vn → v∞ . Assume Yk ∞ ≤ K are bounded. Then Xn →0 An almost surely on {A∞ = ∞ }.
Proof. (i) Cesàro's lemma: Given 0 = b_0 < b_1 ≤ b_2 ≤ ··· with b_n → ∞ and a sequence v_n ∈ R which converges, v_n → v_∞, then

(1/b_n) Σ_{k=1}^n (b_k − b_{k−1}) v_k → v_∞ .

Let ε > 0 and choose m such that v_k > v_∞ − ε for k ≥ m. Then

lim inf_{n→∞} (1/b_n) Σ_{k=1}^n (b_k − b_{k−1}) v_k ≥ lim inf_{n→∞} [ (1/b_n) Σ_{k=1}^m (b_k − b_{k−1}) v_k + ((b_n − b_m)/b_n)(v_∞ − ε) ] ≥ 0 + v_∞ − ε .

Since this is true for every ε > 0, we have lim inf ≥ v_∞. By a similar argument, lim sup ≤ v_∞.

(ii) Kronecker's lemma: Given 0 = b_0 < b_1 ≤ b_2 ≤ ··· with b_n → ∞ and a sequence x_n of real numbers, define s_n = x_1 + ··· + x_n. Then the convergence of u_n = Σ_{k=1}^n x_k/b_k implies that s_n/b_n → 0. Indeed, u_n − u_{n−1} = x_n/b_n and

s_n = Σ_{k=1}^n b_k (u_k − u_{k−1}) = b_n u_n − Σ_{k=1}^n (b_k − b_{k−1}) u_{k−1} .

Cesàro's lemma (i) implies that s_n/b_n converges to u_∞ − u_∞ = 0.

(iii) Proof of the claim: since A is increasing and null at 0, the process 1/(1 + A_n) is bounded. Since A is previsible, also 1/(1 + A_n) is previsible, and we can define the martingale

W_n = ( ∫ (1 + A)^{−1} dX )_n = Σ_{k=1}^n (X_k − X_{k−1})/(1 + A_k) .

Moreover, since (1 + A_n) is A_{n−1}-measurable, we have

E[(W_n − W_{n−1})² | A_{n−1}] = (1 + A_n)^{−2} (A_n − A_{n−1}) ≤ (1 + A_{n−1})^{−1} − (1 + A_n)^{−1}

almost surely. This implies that ⟨W⟩_∞ ≤ 1, so that lim_{n→∞} W_n exists almost surely. Kronecker's lemma (ii), applied pointwise with b_n = 1 + A_n, implies that on {A_∞ = ∞}

lim_{n→∞} X_n/(1 + A_n) = lim_{n→∞} X_n/A_n = 0 .

3.6 Doob's submartingale inequality

We still follow closely [113]:

Theorem 3.6.1 (Doob's submartingale inequality). For any nonnegative submartingale X and every ε > 0,

ε · P[sup_{1≤k≤n} X_k ≥ ε] ≤ E[X_n , {sup_{1≤k≤n} X_k ≥ ε}] ≤ E[X_n] .
Proof. The set A = {sup_{1≤k≤n} X_k ≥ ε} is a disjoint union of the sets

A_0 = {X_0 ≥ ε} ∈ A_0 ,
A_k = {X_k ≥ ε} ∩ ( ∩_{i=0}^{k−1} A_i^c ) ∈ A_k .

Since X is a submartingale and X_k ≥ ε on A_k, we have for k ≤ n

E[X_n , A_k] ≥ E[X_k , A_k] ≥ ε P[A_k] .

Summing up from k = 0 to n gives the result.

We have seen the following result already as part of theorem (2.11.1). Here it appears as a special case of the submartingale inequality:

Theorem 3.6.2 (Kolmogorov's inequality). Given X_n ∈ L^2 IID with E[X_i] = 0 and S_n = Σ_{k=1}^n X_k. Then for ε > 0,

P[sup_{1≤k≤n} |S_k| ≥ ε] ≤ Var[S_n]/ε² .

Proof. S_n is a martingale with respect to A_n = σ(X_1, ..., X_n). Because u(x) = x² is convex, S_n² is a submartingale. Now apply the submartingale inequality (3.6.1).

Here is an other proof of the law of the iterated logarithm for independent N(0, 1) random variables:

Theorem 3.6.3 (Special case of law of iterated logarithm). Given X_n IID with standard normal distribution N(0, 1). Then lim sup_{n→∞} S_n/Λ(n) = 1.

Proof. We will use for

1 − Φ(x) = ∫_x^∞ φ(y) dy = ∫_x^∞ (2π)^{−1/2} exp(−y²/2) dy

the elementary estimates

(x + x^{−1})^{−1} φ(x) ≤ 1 − Φ(x) ≤ x^{−1} φ(x) .
Choose ǫn = KΛ(K n−1 ). The submartingale inequality (3. The BorelCantelli lemma assures that for large enough n and K n−1 ≤ k ≤ Kn Sk ≤ sup Sk ≤ ǫn = KΛ(K n−1 ) ≤ KΛ(k) 1≤k≤K n which means for K > 1 almost surely lim sup k→∞ Sk ≤K .6.6.1) gives P[ sup Sk ≥ ǫ] = P[ sup eθSk ≥ eθǫ ] ≤ e−θǫE[eθSn ] = e−θǫ eθ 1≤k≤n 2 ·n/2 . we get the best estimate for θ = ǫ/n and obtain P[ sup Sk > ǫ] ≤ e−ǫ 2 /(2n) . . Doob’s submartingale inequality (i) Sn is a martingale relative to An = σ(X1 . S(N n ) > −2Λ(N n ) for large n so that for infinitely many n. The function x 7→ eθx is convex on R so that eθSn is a submartingale. Λ(k) By taking a sequence of K’s converging down to 1. . we have S(N n+1 ) > (1 − δ)Λ(N n+1 − N n ) − 2Λ(N n ) . Since P[An ] is up to logarithmic P 2 terms equal to (n log N )−(1−δ) . Then P[An ] = 1 − Φ(y) = (2π)−1/2 (y + y −1 )−1 e−y 2 /2 with y = (1 − δ)(2 log log(N n−1 − N n ))1/2 . The last inequality in (i) gives P[ sup 1≤k≤K n Sk ≥ ǫn ] ≤ exp(−ǫ2n /(2K n )) = (n − 1)−K (log K)−K . Define the independent sets An = {S(N n+1 ) − S(N n ) > (1 − δ)Λ(N n+1 − N n )} . BorelCantelli shows that P[lim supn An ] = 1 so that S(N n+1 ) > (1 − δ)Λ(N n+1 − N n ) + S(N n ) .165 3. 1≤k≤n For given ǫ > 0. n+1 Λn Λ(N ) N n . Xn ). . we obtain almost surely lim sup k→∞ Sk ≤1. Λ(k) (iii) Given N > 1 (large) and δ > 0 (small). By (ii). It follows that lim sup n Sn 1 S(N n+1 ) ≥ lim sup ≥ (1 − δ)(1 − )1/2 − 2N −1/2 . 1≤k≤n (ii) Given K > 1 (close to 1). . we have n P[An ] = ∞.
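Kolmogorov's inequality above is easy to probe by simulation. The sketch below estimates P[max_{k≤n} |S_k| ≥ ε] for the ±1 walk, where Var[S_n] = n; the horizon, threshold, and trial counts are illustrative choices:

```python
# Sketch: Monte Carlo check of Kolmogorov's inequality
# P[max_{k<=n} |S_k| >= eps] <= Var[S_n] / eps^2 for +-1 steps,
# where Var[S_n] = n.  Parameters are arbitrary illustrative values.
import random

def max_exceed_probability(n, eps, trials, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        s, m = 0, 0
        for _ in range(n):
            s += rng.choice((-1, 1))
            m = max(m, abs(s))         # running maximum of |S_k|
        hits += m >= eps
    return hits / trials

n, eps = 100, 25
p_hat = max_exceed_probability(n, eps, trials=2000)
print("estimate:", p_hat, " bound Var[S_n]/eps^2 =", n / eps ** 2)
```

The empirical frequency sits well below the bound n/ε² = 0.16; the inequality is far from tight here, as Chebyshev-type bounds usually are.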
By Fubini’s theorem. (Corollary of H¨ older inequality) Fix p > 1 and q satisfying p−1 + q −1 = 1. n Proof. X ≥ ǫ] ∀ǫ > 0. the the left hand side is L= Z 0 ∞ E[pǫ p−1 1{X≥ǫ} ] dǫ = E[ Z 0 ∞ pǫp−1 1{X≥ǫ} dǫ] = E[Xp ] . Proof. Then X ∗ = supn Xn is in Lp and satisfies X ∗ p ≤ q · sup Xn p . the right hand side is R = E[q · Xp−1 Y ]. Since (p − 1)q = p. From Doob’s submartingale inequality (3. X ≥ ǫ] dǫ =: R . Similarly. n . Define Xn∗ = sup1≤k≤n Xk for n ∈ N. Given a nonnegative submartingale X which is bounded in Lp . Theorem 3. which gives the claim. Discrete Stochastic Processes p 3. we can substitute Xp−1 q = E[Xp ]1/q on the right hand side. Y ∈ Lp satisfying ǫP[X ≥ ǫ] ≤ E[Y .7 Doob’s L inequality Lemma 3.6.1. Integrating the assumption multiplied with pǫp−2 gives L= Z ∞ 0 pǫp−1 P[X ≥ ǫ] dǫ ≤ Z ∞ 0 pǫp−2 E[Y .7. we see that Xn∗ p ≤ qXn p ≤ q sup Xn p .1) and the above lemma (3. With H¨ older’s inequality.2 (Doob’s Lp inequality).7.166 Chapter 3.7. Given X. then Xp ≤ q · Y p .1). we get E[Xp ] ≤ E[qXp−1 Y ] ≤ qY p · Xp−1 q .
3.7. Corollary 3. Doob’s Lp inequality 167 Corollary 3.3) we deduce Xn → X∞ in Lp . The process 1/2 Tn = 1/2 1/2 X1 X2 Xn ··· a1 a2 an Q is a martingale. By Doob’s L2 inequality E[sup Sn ] ≤ E[sup Tn 2 ] ≤ 4 sup E[Tn 2 ] < ∞ n n n so that S is dominated by S ∗ = supn Sn  ∈ L1 . If Sn is uniformly integrable. Then X∞ = limn→∞ Xn exists in Lp and X∞ p = limn→∞ Xn p .7. Given a nonnegative submartingale X which is bounded in Lp . Then S∞ = limn Sn exists. we assume that . We have Q∞ Q to show that n an = 0. The n=1 an > 0. Let Xn be a nonnegativeQindependent L1 process with E[Xn ] = 1 for all n.4. then Sn → S∞ in L1 . Then X∞ = lim Xn n→∞ exists in Lp and X∞ p = limn→∞ Xn p . Aiming to a contradiction.3. 1/2 Proof. From Xn − X∞ pp ≤ (2X ∗ )p ∈ Lp and the dominated convergence theorem (2. Theorem 3. Use the above corollary for the submartingale X = Y . This implies that S is uniformly integrable. We know therefore that X∞ = limn→∞ Xn exists almost everywhere. Define S0 = 1 and Sn = nk=1 Xk . because Sn is a nonnegative L1 martingale.4.7.7. We have E[Tn2 ] = (a1 a2 · · · an )−2 ≤ ( n an )−1 < ∞ so that T is bounded in L2 . The submartingale X is dominated by the element X ∗ in the Lp inequality. Define an = E[Xn ]. Q∞ 1/2 Then Sn is uniformly integrable if and only if n=1 E[Xn ] > 0. Proof. Given a martingale Y bounded in Lp and X = Y . Proof.5 (Kakutani’s theorem). The supermartingale −X is bounded in Lp and so bounded in L1 .
This example is a primitive model for the Stock and Bond market.168 Chapter 3. Given a < r < b < ∞ real numbers. This is not possible because E[Sn ] = 1 by the independence of the Xn . P . Xn ) and the martingale Mn = E[XAn ] . . we know that Mn converges in L2 to a random variable M∞ . So. S0 = 1 . σk2 ). Sn = (1 + Rn )Sn−1 . If the noise grows too much. Let X.5. But since n an = 0 we must then have that S∞ = 0 and so Sn → 0 in L1 . We define the random variables Yk = X + Xk which we consider as a noisy observation of the random variable X. . for example for σn = n. One can show that E[(X − Mn )2 ] = (σ −2 + n X σk−2 )−1 . The process Yn = (1 + r)−n Xn satisfies then (1 + r)−n An Sn−1 (Rn − r) 1 (b − a)(1 + r)−n An Sn−1 (Zn − Zn−1 ) = 2 = Cn (Zn − Zn−1 ) R showing that Y is the stochastic integral C dZ. We can write Rn − r = 21 (b − a)(Zn − Zn−1 ) with the martingale Zn = n X k=1 (ǫk − 2p + 1) . B0 = 1 . X1 . Discrete Stochastic Processes martingale T defined above is a nonnegative martingale which has a limit Q T∞ . where martingales occur in applications: Example. Let ǫn be IID random variables taking values 1. your fortune is Xn and satisfies Xn = (1 + r)Xn−1 + An Sn−1 (Rn − r) .4). with Rn = (a + b)/2 + ǫn (a − b)/2. Define p = (r − a)/(b − a). be independent random variables satisfying that the law of X is N (0. . then we can not recover X from the observations Yk . By Doob’s martingale convergence theorem (3. σ 2 ) and the law of Xk is N (0. Define An = σ(X1 . k=1 This implies that X = M∞ if and only if n σn−2 = ∞. We define a process Bn modeling bonds with fixed interest rate f and a process Sn representing stocks with fluctuating interest rates as follows: Bn = (1 + r)n Bn−1 . Given a sequence An . Here are examples. Yn − Yn−1 = Example. then Y is a martingale. . if the portfolio An is previsible which means by definition that it is An−1 measurable. X2 . your portfolio. . −1 with probability p respectively 1 − p. .
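The stock-and-bond example can be verified numerically: for a previsible portfolio the discounted wealth Y_n = (1 + r)^{−n} X_n is a martingale, so E[Y_n] = X_0. The sketch below checks this by exact enumeration over all coin sequences; the numbers a, b, r and the constant portfolio A_n = 1 are illustrative assumptions, not values from the text:

```python
# Sketch: in the stock/bond model, R_n = (a+b)/2 + eps_n (b-a)/2 with
# P[eps_n = 1] = p = (r-a)/(b-a), and the discounted wealth
# Y_n = (1+r)^{-n} X_n is a martingale for a previsible portfolio A_n.
# We verify E[Y_n] = X_0 exactly by enumerating all coin sequences.
from itertools import product

a, b, r = -0.1, 0.3, 0.1               # illustrative rates with a < r < b
p = (r - a) / (b - a)                  # probability of the up-move eps = +1

def discounted_mean_wealth(n, X0=1.0):
    total = 0.0
    for eps in product((1, -1), repeat=n):
        X, S = X0, 1.0                 # wealth and stock price
        prob = 1.0
        for e in eps:
            A = 1.0                    # previsible portfolio: fixed before e is seen
            R = (a + b) / 2 + e * (b - a) / 2
            X = (1 + r) * X + A * S * (R - r)
            S = (1 + R) * S
            prob *= p if e == 1 else 1 - p
        total += prob * X / (1 + r) ** n
    return total

print(discounted_mean_wealth(6))       # equals X0 = 1 up to rounding
```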
which assigns to each point e the probability ν({e}) = (2d)−1 .8. In this case. Figure. . . . Define a sequence of IID random variables Xn which take values in d X ei  = 1 } I = {e ∈ Zd  e = i=1 and which have the uniform distribution defined by P[Xn = e] = (2d)−1 Pn for all e ∈ I. . The random variable Sn = i=1 Xi with S0 = 0 describes the position of the walker at time n.169 3. Sn (ω) in the lattice Z2 after 2000 steps. Random walks 3. As a probability space. A random walk sample path S1 (ω). What is the probability that the walker returns back to the origin? Definition. d If the walker has returned to position Pn 0 ∈ Z at time n. A walker starts at the origin 0 ∈ Zd and makes in each time step a random step into one of the 2d directions. The discrete stochastic process Sn is called the random walk on the lattice Zd . . the walker returns back to the origin infinitely many times. The sum Bn = k=0 Yk counts the number of visits of P∞ the origin 0 of the walker up to time n and B = k=0 Yk counts the total number of visits at the origin. otherwise Yn = 0.8 Random walks We consider the ddimensional lattice Zd where each point has 2d neighbors. The random variables Xn are then defined by Xn (ω) = ωn . where ν is the measure on E. We write E[B] = ∞ if the sum diverges. Bn (ω) is the number of revisits of the starting points 0. Define the sets An = {Sn = 0 } and the random variables Yn = 1An . then Yn = 1. The expectation E[B] = ∞ X P[Sn = 0] n=0 tells us how many times the walker is expected to return to the origin. we can take Ω = I N with product measure ν N .
Theorem 3.8.1 (Polya). E[B] = ∞ for d = 1, 2 and E[B] < ∞ for d > 2.

Proof. Fix n ∈ N and define a^(n)(k) = P[S_n = k] for k ∈ Z^d. Because the walker can reach in time n only a bounded region, the function a^(n): Z^d → R is zero outside a bounded set. We can therefore define its Fourier transform

φ_{S_n}(x) = Σ_{k∈Z^d} a^(n)(k) e^{2πik·x} ,

which is a smooth function on T^d = R^d/Z^d. It is the characteristic function of S_n because

E[e^{2πix·S_n}] = Σ_{k∈Z^d} P[S_n = k] e^{2πik·x} .

The characteristic function φ_X of X_k is

φ_X(x) = (1/2d) Σ_{j=1}^d (e^{2πix_j} + e^{−2πix_j}) = (1/d) Σ_{j=1}^d cos(2πx_j) .

Because S_n is a sum of n independent random variables X_j,

φ_{S_n}(x) = φ_{X_1}(x) ··· φ_{X_n}(x) = ( (1/d) Σ_{j=1}^d cos(2πx_j) )^n .

Note that P[S_n = 0] = a^(n)(0) is the zeroth Fourier coefficient of φ_{S_n}. The Fourier inversion formula, using the normalized volume measure dx on T^d, gives

E[B] = Σ_{n=0}^∞ P[S_n = 0] = Σ_{n=0}^∞ ∫_{T^d} φ_X^n(x) dx = ∫_{T^d} dx/(1 − φ_X(x)) .

We now show that this is finite if and only if d ≥ 3. A Taylor expansion

φ_X(x) = 1 − Σ_j (2πx_j)²/(2d) + ···

shows that, for small |x|,

(1/2) ((2π)²/(2d)) |x|² ≤ 1 − φ_X(x) ≤ 2 ((2π)²/(2d)) |x|² .

The claim of the theorem follows because the integral

∫_{|x|<ε} dx/|x|²

over the ball of radius ε in R^d is finite if and only if d ≥ 3.
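Polya's theorem can be illustrated by a Monte Carlo experiment: estimate the probability of returning to the origin within a finite horizon in dimensions 1, 2, 3. The horizon and trial counts below are arbitrary illustrative choices:

```python
# Sketch: Monte Carlo estimate of the probability that the lattice walk
# returns to the origin within n_steps.  Consistent with Polya's theorem,
# the estimate approaches 1 for d = 1, 2 and stays near 0.34 for d = 3.
import random

def return_probability(d, n_steps, trials, seed=1):
    rng = random.Random(seed)          # fixed seed for reproducibility
    returns = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(n_steps):
            axis = rng.randrange(d)    # pick one of the d coordinate axes
            pos[axis] += rng.choice((-1, 1))
            if not any(pos):           # back at the origin
                returns += 1
                break
    return returns / trials

for d in (1, 2, 3):
    print("d =", d, " return probability within 1000 steps ~",
          return_probability(d, n_steps=1000, trials=400))
```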
3. If we know the probability P[Sn = 0] that a path returns to 0 in n step.2. If d ≤ 2. for which the P∞ particles returns to 0 infinitely many times. 1−p m≥1 Because E[B] = ∞. Proof. let p be the probability that the random walk returns to 0: [ p = P[ An ] . Random walks Corollary 3.8. There are (2d)n Z d X ( cos(2πxk ))n dx1 · · · dxd Td k=1 closed paths of length n which start at the origin in the lattice Zd . The particle returns therefore back to 0 only finitely many times and in the same way it visits each lattice point only finitely many times. If d > 2. then (2d)n P[Sn = 0] is the number of closed paths in Zd of length n. We can write X 1 E[B] = mpm−1 (1 − p) = . Proof. The walker returns to the origin infinitely often almost surely if d ≤ 2. For d ≥ 3. where dx = dx1 · · · dxd .8. almost surely. Z d X ( cos(2πxk ))n dx Td k=1 . This means that the particle eventually leaves every bounded set and converges to infinity. n Then pm−1 is the probability that there are at least m visits in 0 and the probability is pm−1 − pm = pm−1 (1 − p) that there are exactly m visits. the walker or rather bird returns only finitely many times to zero and P[limn→∞ Sn  = ∞] = 1. But P[Sn = 0] is the zero’th Fourier coefficient Z Td φSn (x) dx = of φSn . we know that p = 1. then A∞ = lim supn An is the subset of Ω. Since E[B] = n=0 P[An ].8.171 3. The use of characteristic functions allows also to solve combinatorial problems like to count the number of closed paths starting at zero in the graph: Proposition 3. the BorelCantelli lemma gives P[A∞ ] = 0 for d > 2.
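The path-counting formula can be sanity-checked in the case d = 1, where the number of closed paths of length 2n starting at 0 is the central binomial coefficient C(2n, n). The brute-force enumeration below confirms this:

```python
# Sketch: check that the number of closed paths of length 2n in Z^1
# equals the central binomial coefficient C(2n, n), the d = 1 case of
# the path-counting proposition.
from itertools import product
from math import comb

def closed_paths_1d(length):
    """Count +-1 step sequences of the given length that end at 0."""
    return sum(1 for steps in product((-1, 1), repeat=length) if sum(steps) == 0)

for n in range(1, 8):
    assert closed_paths_1d(2 * n) == comb(2 * n, n)
print("closed path counts:", [comb(2 * n, n) for n in range(1, 5)])  # 2, 6, 20, 70
```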
The random walk can also be studied on a general graph. . an Abelian group G is isomorphic to Zk × Zn1 × . P[S2n = 0] = n (2d)n For n = 2 for example. If the degree is d at a point x. The recurrence properties on nonAbelian groups is more subtle.8.172 Chapter 3. We know that also because 1 2n . In this case X pj e2πixj . R1 0 cos(2πx)2 dx = 2 closed paths of The lattice Zd can be generalized to an arbitrary graph G which is a regular graph that is a graph. Corollary 3. An other generalization is to add a drift P by changing the probability distribution ν on I. where each vertex has the same number of neighbors. Example. Discrete Stochastic Processes Example. In the case d = 1. 1) with j=1 pj = 1. Znd . . Given pj ∈ (0.4. we have Z 1 2n 22n cos2n (2πx) dx = n 0 closed paths of length 2n starting at 0. then the walker choses a random direction with probability 1/d. The characteristic function of Xn is a ˆ function on the dual group G Z ∞ ∞ Z ∞ Z X X X 1 dx P[Sn = 0] = φSn (x) dx = φnX (x) dx = ˆ ˆ 1 − φX (x) ˆ G n=0 n=0 G n=0 G ˆ contains a three dimensional torus which means is finite if and only if G k > 2. Proof. . we have 22 length 2 which start at 0 in Z. φX (x) = j=1 We have recurrence if and only if Z Td 1 dx = ∞ . If G is the Cayley graph of an Abelian group G then the random walk on G is recurrent if and only at most two of the generators have infinite order. By the structure theorem for Abelian groups. . ad . . . 1 − φX (x) . because characteristic functions loose then some of their good properties. A convenient way is to take as the graph the Cayley graph of a discrete group G with generators a1 .
The a function f which takes each value in I on an interval [ 2d random variables Xn (ω) = f (ω + nα) define an ergodic discrete stochastic process P but the random variables are not independent. Then φX (x) = pe2πix + (1 − p)e−2πix = cos(2πx) + i(2p − 1) sin(2πx) . where α is an irrational number. 1) if Zk ∈ [1/2. A random walk with drift on Zd will almost certainly not return to 0 infinitely often. Example. 1). an irrational number α and k k+1 . 3/4) and Xk = (0. Define on the probability space Ω = ([0. Example. there can belong intervals. where Sn (x) grows like n. 1/4).173 3. where Xk is the same because Zk stays in the same quarter interval. An example which appears in number theory in the case d = 1 is to take the probability space Ω = T1 = R/Z. Random walks Take for example the case d = 1 with drift parameterized by p ∈ (0.78 ≤ Sn (0) ≤ a · log(n) + 1 . Xk = (0. 1]. 1/2). The picture shows a P typical path of n the process Sn = k=1 Xk . For small a. 1). one sees 2 a much slower growth Sn (0) ≤ √ log(n) for almost all α and √ for special numbers like the golden ratio ( 5 + 1)/2 or the silver ratio 2 + 1 one has for infinitely many n the relation a · log(n) + 0. 0) if Zk ∈ [0. Also Xk are no more independent. 0) if Zk ∈ [1/4. a]. which need not to be independent. dx) the random variables Xn (x) = 21[0. which shows that Z Td 1 dx < ∞ 1 − φX (x) if p 6= 1/2. An other generalization of the random walk is to take identically distributed random variables Xn with values in I. 2d ). Define Xk = (1.1/2](x + nα) − 1. −1) if Zk ∈ [3/4. A random walk Sn = nk=1 Xk with random variables Xk which are dependent is called a dependent random walk. then Zn = P n k=1 Yk mod 1 are dependent. Xk = (−1.8. An example of a onedimensional dependent random walk is the problem of ”almost alternating sums” [53]. √ but unlike for the usual random walk. This produces a symmetric random walk. If Yk are IID random variables with uniform distribution in [0. A. Figure.
with a = 1/(2 log(1 + √2)).

Figure. An almost periodic random walk in one dimension. While for periodic α the growth of Sn is either linear (like for α = 0) or zero (like for α = 1/2), the growth for most irrational α seems to be logarithmic. It is not known whether Sn(0) grows like log(n) for almost all α. Instead of flipping coins to decide whether to go up or down, one turns a wheel by an angle α after each step and goes up if the wheel position is in the right half and goes down if the wheel position is in the left half.

3.9 The arcsin law for the 1D random walk

Definition. Let Xn denote independent {−1, 1}-valued random variables with P[Xn = ±1] = 1/2 and let Sn = Σ_{k=1}^n Xk be the random walk. We have seen that it is a martingale with respect to Xn. Given a ∈ Z, we define the stopping time

  Ta = min{n ∈ N | Sn = a}.

Theorem 3.9.1 (Reflection principle). For integers a, b > 0, one has

  P[a + Sn = b, T−a ≤ n] = P[Sn = a + b].

Proof. The number of paths from a to b passing zero is equal to the number of paths from −a to b, which in turn is the number of paths from zero to a + b.
Figure. The proof of the reflection principle: reflect the part of the path before the first visit of 0 at the line 0. To every path which goes from a to b and touches 0 there corresponds a path from −a to b.

The reflection principle allows to compute the distribution of the random variable T−a:

Theorem 3.9.2 (Ruin time). We have the following distribution of the stopping time:
a) P[T−a ≤ n] = P[Sn ≤ −a] + P[Sn > a].
b) P[T−a = n] = (a/n) P[Sn = a].

Proof. a) Use the reflection principle in the third equality:

  P[T−a ≤ n] = Σ_{b∈Z} P[T−a ≤ n, a + Sn = b]
             = Σ_{b≤0} P[a + Sn = b] + Σ_{b>0} P[T−a ≤ n, a + Sn = b]
             = Σ_{b≤0} P[a + Sn = b] + Σ_{b>0} P[Sn = a + b]
             = P[Sn ≤ −a] + P[Sn > a].

b) From P[Sn = a] = C(n, (a+n)/2) / 2^n we get

  (a/n) P[Sn = a] = (1/2) (P[S_{n−1} = a − 1] − P[S_{n−1} = a + 1]).

Also

  P[Sn > a] − P[S_{n−1} > a] = P[Sn > a, S_{n−1} ≤ a] + P[Sn > a, S_{n−1} > a] − P[S_{n−1} > a]
                             = (1/2) (P[S_{n−1} = a] − P[S_{n−1} = a + 1])
and analogously

  P[Sn ≤ −a] − P[S_{n−1} ≤ −a] = (1/2) (P[S_{n−1} = a − 1] − P[S_{n−1} = a]).

Therefore, using a),

  P[T−a = n] = P[T−a ≤ n] − P[T−a ≤ n − 1]
             = P[Sn ≤ −a] − P[S_{n−1} ≤ −a] + P[Sn > a] − P[S_{n−1} > a]
             = (1/2) (P[S_{n−1} = a − 1] − P[S_{n−1} = a]) + (1/2) (P[S_{n−1} = a] − P[S_{n−1} = a + 1])
             = (1/2) (P[S_{n−1} = a − 1] − P[S_{n−1} = a + 1]) = (a/n) P[Sn = a].

Theorem 3.9.3 (Ballot theorem).

  P[Sn = a, S1 > 0, . . . , S_{n−1} > 0] = (a/n) · P[Sn = a].

Proof. When reversing time, the number of paths from 0 to a of length n which do no more hit 0 is the number of paths of length n which start in a and for which T−a = n. Now use the previous theorem,

  P[T−a = n] = (a/n) P[Sn = a].

Corollary 3.9.4. The distribution of the first return time is

  P[T0 > 2n] = P[S_{2n} = 0].

Proof.

  P[T0 > 2n] = (1/2) P[T−1 > 2n − 1] + (1/2) P[T1 > 2n − 1]
             = P[T−1 > 2n − 1]   (by symmetry)
             = P[S_{2n−1} > −1 and S_{2n−1} ≤ 1]
             = P[S_{2n−1} ∈ {0, 1}] = P[S_{2n−1} = 1] = P[S_{2n} = 0].
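Corollary 3.9.4 can be checked by brute force for small n, enumerating all 2^{2n} sign sequences and comparing the number of paths that never return to 0 with the number of paths that end at 0. A small sketch, not part of the text's argument:

```python
from itertools import product

def check_first_return(n):
    """Count paths of length 2n that never revisit 0 and paths with S_2n = 0.

    By Corollary 3.9.4 both counts must agree."""
    never_zero = 0
    end_zero = 0
    for steps in product((-1, 1), repeat=2 * n):
        s, hit = 0, False
        for x in steps:
            s += x
            if s == 0:
                hit = True
        if not hit:
            never_zero += 1
        if s == 0:
            end_zero += 1
    return never_zero, end_zero
```

For 2n = 6 both counts are 20 = C(6, 3), in agreement with P[T0 > 2n] = P[S_{2n} = 0].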
Remark. We see that lim_{n→∞} P[T0 > 2n] = 0. This restates that the random walk is recurrent. However, the expected return time is very long:

  E[T0] = Σ_{n=0}^∞ n P[T0 = n] = Σ_{n=0}^∞ P[T0 > n] = Σ_{n=0}^∞ P[Sn = 0] = ∞,

because by the Stirling formula n! ∼ n^n e^{−n} √(2πn), one has C(2n, n) ∼ 2^{2n}/√(πn) and so

  P[S_{2n} = 0] = C(2n, n) / 2^{2n} ∼ (πn)^{−1/2}.

Definition. We are interested now in the random variable

  L(ω) = max{0 ≤ n ≤ 2N | Sn(ω) = 0},

which describes the last visit of the random walk in 0 before time 2N. If the random walk describes a game between two players who play over a time 2N, then L is the time after which one of the two players does no more give up his leadership.

Theorem 3.9.5 (Arc sin law). L has the discrete arcsin distribution

  P[L = 2n] = (1/2^{2N}) C(2n, n) C(2N − 2n, N − n)

and for N → ∞, we have

  P[L/(2N) ≤ z] → (2/π) arcsin(√z).

Proof. We have

  P[L = 2n] = P[S_{2n} = 0] · P[T0 > 2N − 2n] = P[S_{2n} = 0] · P[S_{2N−2n} = 0],

which gives the first formula. The Stirling formula gives P[S_{2k} = 0] ∼ 1/√(πk), so that

  P[L = 2k] = (1/π) · 1/√(k(N − k)) = (1/N) f(k/N)

with f(x) = 1/(π √(x(1 − x))).
It follows that

  P[L/(2N) ≤ z] → ∫_0^z f(x) dx = (2/π) arcsin(√z).

Figure. The density function of this distribution in the limit N → ∞ is called the arcsin distribution.

Figure. The distribution function P[L/(2N) ≤ z] converges in the limit N → ∞ to the function (2/π) arcsin(√z).

Remark. From the shape of the arcsin distribution, one has to expect that the winner takes the final leading position either early or late.

Remark. The arcsin distribution is a natural distribution on the interval [0, 1] from different points of view. It is the measure µ on the interval [0, 1] which minimizes the energy

  I(µ) = − ∫_0^1 ∫_0^1 log |E − E′| dµ(E) dµ(E′).

One calls such measures also potential theoretical equilibrium measures. It is also the Gibbs measure of the quadratic map x → 4 · x(1 − x) on the unit interval maximizing the Boltzmann-Gibbs entropy; it is a thermodynamic equilibrium measure for this quadratic map.

3.10 The random walk on the free group

Definition. The free group Fd with d generators is the set of finite words w written in the 2d letters

  A = {a1, a2, . . . , ad, a1^{−1}, a2^{−1}, . . . , ad^{−1}}
modulo the identifications ai ai^{−1} = ai^{−1} ai = 1. The group operation is concatenating words, v ∘ w = vw. The inverse of w = w1 w2 · · · wn is w^{−1} = wn^{−1} · · · w2^{−1} w1^{−1}. The identity e in the group Fd is the empty word. Elements w in the group Fd can be uniquely represented by reduced words obtained by deleting all words v v^{−1} in w. We denote by l(w) the length of the reduced word of w.

Definition. Given a free group G with generators A, let Xk be uniformly distributed random variables with values in A. The stochastic process Sn = X1 · · · Xn is called the random walk on the group G. Note that the group operation needs not be commutative.

Figure. Part of the Cayley graph of the free group F2 with two generators a, b. At every point, one can go into 4 different directions; going into one of these directions corresponds to multiplying with a, a^{−1}, b or b^{−1}. It is a tree, because the Cayley graph of the group Fd with generators A contains no noncontractible closed circles. The random walk on the free group can therefore be interpreted as a walk on a tree.

Definition. Define for n ∈ N

  rn = P[Sn = e, S1 ≠ e, . . . , S_{n−1} ≠ e],

which is the probability of returning for the first time to e if one starts at e. Define also for n ∈ N the numbers mn = P[Sn = e] with the convention m0 = 1. Let r and m be the probability generating functions of the sequences rn and mn:

  m(x) = Σ_{n=0}^∞ mn x^n,  r(x) = Σ_{n=0}^∞ rn x^n.

These sums converge for |x| < 1.

Lemma 3.10.1. (Feller)

  m(x) = 1/(1 − r(x)).
Proof. Let T be the stopping time T = min{n ∈ N | Sn = e}. With P[T = n] = rn, the function r(x) = Σ_{n=1}^∞ rn x^n is the probability generating function of T. The probability generating function of a sum of independent random variables is the product of the probability generating functions. Therefore, if Ti are independent random variables with distribution T, then Σ_{i=1}^k Ti has the probability generating function x → r^k(x). We have

  Σ_{n=0}^∞ mn x^n = Σ_{n=0}^∞ P[Sn = e] x^n
  = Σ_{n=0}^∞ Σ_k Σ_{0≤n1<n2<···<nk=n} P[S_{n1} = e, S_{n2} = e, . . . , S_{nk} = e, Sm ≠ e for m ∉ {n1, . . . , nk}] x^n
  = Σ_{k=0}^∞ Σ_{n=0}^∞ P[T1 + · · · + Tk = n] x^n = Σ_{k=0}^∞ r^k(x) = 1/(1 − r(x)).

Remark. This lemma is true for the random walk on a Cayley graph of any finitely presented group.

The values of r_{2n} can be computed by using basic combinatorics:

Lemma 3.10.2. (Kesten)

  r_{2n} = (1/(2d)^{2n}) · (1/n) C(2n − 2, n − 1) · 2d (2d − 1)^{n−1}.

The numbers r_{2n+1} are zero, because an even number of steps is needed to come back to e.

Proof. We have

  r_{2n} = (1/(2d)^{2n}) · |{words w1 w2 . . . w_{2n} in the letters A with w1 w2 · · · w_{2n} = e and w1 w2 · · · wk ≠ e for k < 2n}|.

To count the number of such words, map every word with 2n letters into a path in Z^2 going from (0, 0) to (n, n). The map is constructed in the following way: for every letter, we record a horizontal or vertical step of length 1. If l(wk) = l(w_{k−1}) + 1, we record a horizontal step. In the other case, if l(wk) = l(w_{k−1}) − 1, we record a vertical step. The first step is horizontal independent of the word, and the path stays away from the diagonal except at the beginning and at the end. There are (1/n) C(2n − 2, n − 1)
such paths, since by the distribution of the stopping time in the one-dimensional random walk,

  P[T−1 = 2n − 1] = (1/(2n − 1)) P[S_{2n−1} = 1] = (1/(2n − 1)) C(2n − 1, n) 2^{−(2n−1)} = (1/n) C(2n − 2, n − 1) 2^{−(2n−1)}.

Counting the number of words which are mapped into the same path, we see that we have in the first step 2d possibilities and later (2d − 1) possibilities in each of the n − 1 remaining horizontal steps and only 1 possibility in a vertical step. We have therefore to multiply the number of paths by 2d (2d − 1)^{n−1}.

Corollary 3.10.3. (Kesten) For the free group Fd, we have

  m(x) = (2d − 1) / ((d − 1) + √(d^2 − (2d − 1) x^2)).

Proof. Since we know the terms r_{2n}, we can compute

  r(x) = (d − √(d^2 − (2d − 1) x^2)) / (2d − 1)

and get the claim with Feller's lemma m(x) = 1/(1 − r(x)).

Remark. The Cayley graph of the free group is also called the Bethe lattice. One can read off from this formula that the spectrum of the free Laplacian L : l^2(Fd) → l^2(Fd) on the Bethe lattice, given by

  Lu(g) = Σ_{a∈A} u(g + a),

is the whole interval [−a, a] with a = 2√(2d − 1).

Theorem 3.10.4. The random walk on the free group Fd with d generators is recurrent if and only if d = 1.

Proof. Denote, as in the case of the random walk on Zd, with B the random variable counting the total number of visits of the origin. We have then again

  E[B] = Σ_n P[Sn = e] = Σ_n mn = m(1).

We see that for d = 1 we
have m(1) = ∞ and that m(1) < ∞ for d > 1. This establishes the analog of Polya's result on Zd and leads in the same way to the recurrence:

(i) d = 1: We know that Z^1 = F1 and that the walk in Z^1 is recurrent.

(ii) d ≥ 2: Define the events An = {Sn = e}. Since

  Σ_{n=0}^∞ P[An] = E[B] = m(1) < ∞,

the Borel-Cantelli lemma gives P[A∞] = 0 for d ≥ 2, where A∞ = lim sup_n An is the subset of Ω for which the walk returns to e infinitely many times. The particle returns therefore to 0 only finitely many times and similarly it visits each vertex in Fd only finitely many times. This means that the particle eventually leaves every bounded set and escapes to infinity.

Remark. We could say that the problem of the random walk on a discrete group G is solvable if one can give an algebraic formula for the function m(x). We have seen that the classes of Abelian finitely generated and free groups are solvable. Trying to extend the class of solvable random walks seems to be an interesting problem. It would also be interesting to know
whether there exists a group such that the function m(x) is transcendental.

3.11 The free Laplacian on a discrete group

Definition. Let G be a countable discrete group and A ⊂ G a finite set which generates G. We write the composition in G additively, even so we do not assume that G is Abelian. We allow A to contain also the identity e ∈ G. The Cayley graph Γ of (G, A) is the graph with sites G and edges (i, j) satisfying i − j ∈ A or j − i ∈ A. If e ∈ A, the Cayley graph contains a closed loop of length 1 at each site.

Definition. The symmetric random walk on Γ(G, A) is the process obtained by summing up independent uniformly distributed (A ∪ A^{−1})-valued random variables Xn. More generally, we can allow the random variables Xn to be independent but have any distribution on A ∪ A^{−1}. This distribution is given by numbers pa = p_{a^{−1}} ∈ [0, 1] satisfying Σ_{a∈A∪A^{−1}} pa = 1.

Definition. The free Laplacian for the random walk given by (G, A, p) is the linear operator on l^2(G) defined by L_{gh} = p_{g−h}. Since we assumed pa = p_{a^{−1}}, the matrix L is symmetric, L_{gh} = L_{hg}, and the spectrum

  σ(L) = {E ∈ C | (L − E) is not invertible}

is a compact subset of the real line.
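Kesten's formula and Feller's lemma from the previous section can be tested numerically. The word length l(Sn) of the walk on Fd is itself a birth-death chain on N, stepping up with probability (2d − 1)/(2d) and down with probability 1/(2d) from every nonzero length, which yields the return probabilities mn directly; the renewal equation mn = Σ_k rk m_{n−k} behind Feller's lemma, fed with Kesten's r_{2n}, must reproduce them. A sketch (the function names are ours):

```python
from math import comb

def return_probs_chain(d, nmax):
    """m_n = P[S_n = e] on the free group F_d, computed from the
    birth-death chain of the word length l(S_n)."""
    p_up, p_down = (2 * d - 1) / (2 * d), 1 / (2 * d)
    dist = {0: 1.0}                      # distribution of l(S_n)
    m = [1.0]
    for _ in range(nmax):
        new = {}
        for k, p in dist.items():
            if k == 0:                   # from e the length always increases
                new[1] = new.get(1, 0.0) + p
            else:
                new[k + 1] = new.get(k + 1, 0.0) + p * p_up
                new[k - 1] = new.get(k - 1, 0.0) + p * p_down
        dist = new
        m.append(dist.get(0, 0.0))
    return m

def return_probs_renewal(d, nmax):
    """The same sequence from Kesten's first-return probabilities r_2n
    and the renewal equation m_n = sum_k r_k m_{n-k}."""
    r = [0.0] * (nmax + 1)
    for n in range(1, nmax // 2 + 1):
        r[2 * n] = (comb(2 * n - 2, n - 1) / n) * 2 * d \
                   * (2 * d - 1) ** (n - 1) / (2 * d) ** (2 * n)
    m = [1.0]
    for n in range(1, nmax + 1):
        m.append(sum(r[k] * m[n - k] for k in range(1, n + 1)))
    return m
```

For d = 2 both give m4 = 7/64, and for d = 1 they reproduce m_{2n} = C(2n, n)/2^{2n}, the one-dimensional return probabilities.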
Remark. One can interpret L as the transition probability matrix of the
random walk which is a ”Markov chain”. We will come back to this interpretation later.
Example. G = Z, A = {1}, p = pa = 1/2 for a = 1, −1 and pa = 0 for a ∉ {1, −1}. The matrix

      ( . . .                )
      (  0   p               )
  L = (  p   0   p           )
      (      p   0   p       )
      (          p   0  . . .)

is also called a Jacobi matrix. It acts on the Hilbert space l^2(Z) by (Lu)_n = p(u_{n+1} + u_{n−1}).
Example. Let G = D3 be the dihedral group which has the presentation G = ⟨a, b | a^3 = b^2 = (ab)^2 = 1⟩. The group is the symmetry group of the equilateral triangle. It has 6 elements and it is the smallest non-Abelian group. Let us number the group elements with integers {1, 2 = a, 3 = a^2, 4 = b, 5 = ab, 6 = a^2 b}. We have for example 3 ⋆ 4 = a^2 b = 6 or 3 ⋆ 5 = a^2 ab = a^3 b = b = 4. In this case A = {a, b}, A^{−1} = {a^{−1}, b}, so that A ∪ A^{−1} = {a, a^{−1}, b}. The Cayley graph of the group is a graph with 6 vertices. We could take the uniform distribution pa = pb = p_{a^{−1}} = 1/3 on A ∪ A^{−1}, but let us instead choose the distribution pa = p_{a^{−1}} = 1/4, pb = 1/2, which is natural if we consider multiplication by b and multiplication by b^{−1} as different.
Example. The free Laplacian on D3 with the random walk transition probabilities pa = p_{a^{−1}} = 1/4, pb = 1/2 is the matrix

      (  0   1/4  1/4  1/2   0    0  )
      ( 1/4   0   1/4   0   1/2   0  )
  L = ( 1/4  1/4   0    0    0   1/2 )
      ( 1/2   0    0    0   1/4  1/4 )
      (  0   1/2   0   1/4   0   1/4 )
      (  0    0   1/2  1/4  1/4   0  )

which has the eigenvalues 1, 1/4, 1/4, 0, −3/4, −3/4.
Figure. The Cayley graph of the
dihedral group G = D3 is a regular graph with 6 vertices and 9
edges.
A basic question is: what is the relation between the spectrum of L, the
structure of the group G and the properties of the random walk on G.
Definition. As before, let mn be the probability that the random walk starting in e returns in n steps to e and let

  m(x) = Σ_{n∈N} mn x^n

be the generating function of the sequence mn.
Proposition 3.11.1. The norm of L is equal to lim sup_{n→∞} (mn)^{1/n}, the inverse of the radius of convergence of m(x).

Proof. Because L is symmetric and real, it is self-adjoint; the spectrum of L is a subset of the real line R and the spectral radius of L is equal to its norm ||L||. We have [L^n]_{ee} = mn, since [L^n]_{ee} is the sum of products Π_{j=1}^n p_{a_j}, each of which is the probability that a specific path of length n starting and landing at e occurs. It remains therefore to verify that

  lim sup_{n→∞} ||L^n||^{1/n} = lim sup_{n→∞} ([L^n]_{ee})^{1/n}

and since the ≥ direction is trivial, we have only to show the ≤ direction.
Denote by E(λ) the spectral projection matrix of L, so that dE(λ) is a projection-valued measure on the spectrum, and the spectral theorem says that L can be written as L = ∫ λ dE(λ). The measure µe = dE_{ee} is called a spectral measure of L. The real number E(λ) − E(µ) is nonzero if and only if there exists some spectrum of L in the interval [λ, µ). Since

  −(1/λ) Σ_n [L^n]_{ee} / λ^n = ∫_R (E − λ)^{−1} dk(E)

can't be analytic in λ in a point λ0 of the support of dk, which is the spectrum of L, the claim follows.
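Both quantities in Proposition 3.11.1 can be approximated numerically for the walk on Z with p = 1/2 treated below: the spectral radius of a large truncated Jacobi matrix and (mn)^{1/n} with m_{2n} = C(2n, n)/4^n both approach the norm ||L|| = 1. A rough sketch, with truncation size and n chosen arbitrarily:

```python
import numpy as np
from math import comb

# truncation of the Jacobi matrix for the random walk on Z with p = 1/2
N = 500
L = np.diag(np.full(N - 1, 0.5), 1) + np.diag(np.full(N - 1, 0.5), -1)
spec_radius = max(abs(np.linalg.eigvalsh(L)))

# (m_n)^(1/n) with m_2n = [L^2n]_ee = C(2n, n)/4^n
n = 500
mn_root = (comb(2 * n, n) / 4 ** n) ** (1 / (2 * n))
```

Both numbers come out close to 1, the norm of L.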
Remark. We have seen that the matrix L defines a spectral measure µe on the real line. It can be defined for any group element g, not only g = e, and is the same measure. It is therefore also the so called density of states of L. If we think of µ as playing the role of the law for random variables, then the integrated density of states E(λ) = F_L(λ) = ∫_{−∞}^{λ} dµ(s) plays the role of the distribution function for real-valued random variables.
Example. The Fourier transform U : l^2(Z^1) → L^2(T^1),

  û(x) = (Uu)(x) = Σ_{n∈Z} un e^{inx},

diagonalises the matrix L for the random walk on Z^1:

  (ULU*)û(x) = ((UL)(un))(x) = p U(u_{n+1} + u_{n−1})(x)
             = p Σ_{n∈Z} (u_{n+1} + u_{n−1}) e^{inx}
             = p Σ_{n∈Z} un (e^{i(n−1)x} + e^{i(n+1)x})
             = p Σ_{n∈Z} un (e^{ix} + e^{−ix}) e^{inx}
             = p Σ_{n∈Z} un 2 cos(x) e^{inx}
             = 2p cos(x) · û(x).

This shows that the spectrum of ULU* is [−1, 1] and, because U is a unitary transformation, also the spectrum of L is in [−1, 1].
Example. Let G = Zd and A = {ei}_{i=1}^d, where {ei} is the standard basis. Assume p = pa = 1/(2d). The analogous Fourier transform F : l^2(Zd) → L^2(Td) shows that F L F* is the multiplication with (1/d) Σ_{j=1}^d cos(xj). The spectrum is again the interval [−1, 1].
Example. The Fourier diagonalisation works for any discrete Abelian group
with finitely many generators.
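On the finite cyclic group Z_N the diagonalisation becomes finite-dimensional linear algebra: the walk matrix is circulant and its eigenvalues sample the symbol 2p cos(x) at the points x = 2πk/N. A quick numerical sketch:

```python
import numpy as np

Nc = 16
L = np.zeros((Nc, Nc))
for j in range(Nc):                     # p = 1/2 hops to both neighbours on Z_Nc
    L[j, (j + 1) % Nc] = 0.5
    L[j, (j - 1) % Nc] = 0.5
eig = np.sort(np.linalg.eigvalsh(L))
expected = np.sort(np.cos(2 * np.pi * np.arange(Nc) / Nc))
```

The computed spectrum fills out a discrete sample of [−1, 1], as the Fourier picture predicts.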
Example. G = Fd, the free group with the natural d generators. The spectrum of L is

  [−√(2d − 1)/d, √(2d − 1)/d],

which is strictly contained in [−1, 1] if d > 1.
Remark. Kesten has shown that the spectral radius of L is equal to 1 if
and only if the group G has an invariant mean. For example, for a finite
graph, where L is a stochastic matrix, a matrix for which each column is a
probability vector, the spectral radius is 1 because LT has the eigenvector
(1, . . . , 1) with eigenvalue 1.
Random walks and Laplacian can be defined on any graph. The spectrum
of the Laplacian on a finite graph is an invariant of the graph but there are
nonisomorphic graphs with the same spectrum. There are known infinite
selfsimilar graphs, for which the Laplacian has pure point spectrum [65].
There are also known infinite graphs, such that the Laplacian has purely
singular continuous spectrum [99]. For more on spectral theory on graphs,
start with [6].
3.12 A discrete Feynman-Kac formula

Definition. A discrete Schrödinger operator is a bounded linear operator L on the Hilbert space l^2(Zd) of the form

  (Lu)(n) = Σ_{i=1}^d u(n + ei) − 2u(n) + u(n − ei) + V(n) u(n),

where V is a bounded function on Zd. They are discrete versions of operators L = −∆ + V(x) on L^2(Rd), where ∆ is the free Laplacian. Such operators are also called Jacobi matrices.
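In one dimension and restricted to finitely many sites with zero boundary conditions, such an operator is simply a tridiagonal symmetric matrix. A minimal sketch (the truncation is our choice, not part of the definition):

```python
import numpy as np

def schroedinger_1d(V):
    """Truncation of (Lu)(n) = u(n+1) - 2u(n) + u(n-1) + V(n)u(n)
    to sites n = 0, ..., N-1 with Dirichlet boundary conditions."""
    V = np.asarray(V, dtype=float)
    N = len(V)
    return (np.diag(V - 2.0)
            + np.diag(np.ones(N - 1), 1)
            + np.diag(np.ones(N - 1), -1))
```

For V = 0 this is the discrete free Laplacian, with eigenvalues −2 + 2 cos(kπ/(N + 1)), k = 1, . . . , N.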
Definition. The Schrödinger equation

  iℏ u̇ = Lu,  u(0) = u0

is a differential equation in l^2(Zd, C) which describes the motion of a complex valued wave function u of a classical quantum mechanical system. The constant ℏ is called the Planck constant and i = √−1 is the imaginary unit. Let us assume to have units where ℏ = 1 for simplicity.

Remark. The solution of the Schrödinger equation is

  ut = e^{(t/i) L} u0.

The solution exists for all times because the von Neumann series

  e^{tL} = 1 + tL + t^2 L^2 / 2! + t^3 L^3 / 3! + · · ·

converges in the space of bounded operators.
Remark. It is an achievement of the physicist Richard Feynman to see the evolution as a path integral. In the case of differential operators L, this idea can be made rigorous by going to imaginary time, and one can write for L = −∆ + V

  e^{−tL} u0(x) = E_x[e^{−∫_0^t V(γ(s)) ds} u0(γ(t))],

where E_x is the expectation value with respect to the measure P_x on the Wiener space of Brownian motion starting at x.
Here is a discrete version of the Feynman-Kac formula:

Definition. The Schrödinger equation with discrete time is defined as

  i(u_{t+ǫ} − ut) = ǫ L ut,

where ǫ > 0 is fixed. We get the evolution

  u_{t+nǫ} = (1 − iǫL)^n ut

and we denote the right hand side with L̃^n ut.
Definition. Denote by Γn(i, j) the set of paths of length n from i to j in the graph G which has as sites Zd and as edges the pairs [i, j] with |i − j| ≤ 1. The graph G is the Cayley graph of the group Zd with the generators A ∪ A^{−1} ∪ {e}, where A = {e1, . . . , ed} is the set of natural generators and where e is the identity.

Definition. Given a path γ of finite length n, we use the notation

  exp(∫_γ L) = Π_{i=1}^n L_{γ(i),γ(i+1)}.

Let Ω be the set of all paths on G and let E denote the expectation with respect to a measure P of the random walk on G starting at 0.
Theorem 3.12.1 (Discrete Feynman-Kac formula). Given a discrete Schrödinger operator L, then

  (L^n u)(0) = E_0[exp(∫_0^n L) u(γ(n))].

Proof.

  (L^n u)(0) = Σ_j (L^n)_{0j} u(j)
             = Σ_j Σ_{γ∈Γn(0,j)} exp(∫_0^n L) u(j)
             = Σ_{γ∈Γn} exp(∫_0^n L) u(γ(n)).
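The proof is nothing but the expansion of a matrix power into paths, and that can be verified directly for a small one-dimensional operator: summing the products Π L_{γ(i),γ(i+1)} u(γ(n)) over all length-n paths with steps in {−1, 0, 1} reproduces (L^n u)(0). A sketch with an arbitrarily chosen potential and test vector:

```python
import numpy as np
from itertools import product

R, n = 4, 3                               # sites -R..R, n steps (n < R: no boundary effects)
sites = list(range(-R, R + 1))
idx = {s: i for i, s in enumerate(sites)}
L = np.zeros((len(sites), len(sites)))
for s in sites:
    L[idx[s], idx[s]] = 0.3 * s * s       # potential V(s), an arbitrary choice
    for t in (s - 1, s + 1):
        if t in idx:
            L[idx[s], idx[t]] = 1.0       # hopping part
u = np.cos(np.arange(len(sites)))         # arbitrary test vector

lhs = np.linalg.matrix_power(L, n).dot(u)[idx[0]]     # (L^n u)(0)

rhs = 0.0                                 # path sum over steps in {-1, 0, +1}
for steps in product((-1, 0, 1), repeat=n):
    gamma = [0]
    for st in steps:
        gamma.append(gamma[-1] + st)
    w = 1.0
    for a, b in zip(gamma, gamma[1:]):
        w *= L[idx[a], idx[b]]
    rhs += w * u[idx[gamma[-1]]]
```

The path with a step 0 at site k picks up the diagonal factor V(k), exactly as in the remark following the theorem.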
Remark. This discrete random walk expansion corresponds to the Feynman-Kac formula in the continuum. If we extend the potential to all the sites of the Cayley graph by putting V([k, k]) = V(k) and V([k, l]) = 0 for k ≠ l, we can define exp(∫_γ V) as the product Π_{i=1}^n V([γ(i), γ(i + 1)]). Then

  (L^n u)(0) = E[exp(∫_0^n V) u(γ(n))],

which is formally the Feynman-Kac formula.
In order to compute (L̃^n u)(k) with L̃ = (1 − iǫL), we have to take the potential ṽ defined by

  ṽ([k, k]) = 1 − iǫ v(k).
Remark. The Schrödinger equation with discrete time has the disadvantage that the time evolution of the quantum mechanical system is no more unitary. This drawback could be overcome by considering also iℏ(ut − u_{t−ǫ}) = ǫ L ut, so that the propagator from u_{t−ǫ} to u_{t+ǫ} is given by the unitary operator

  U = (1 − (iǫ/ℏ) L)(1 + (iǫ/ℏ) L)^{−1},

which is a Cayley transform of L. See also [51], where the idea is discussed to use L̃ = arccos(aL), where L has been rescaled such that aL has norm smaller or equal to 1. The time evolution can then be computed by iterating the map A : (ψ, φ) → (2aLψ − φ, ψ) on H ⊕ H.
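That the Cayley transform of a self-adjoint L is unitary can be confirmed numerically for a random symmetric matrix (matrix size and ǫ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
L = (B + B.T) / 2                         # a random self-adjoint operator
eps = 0.1                                 # units with hbar = 1
I = np.eye(8)
U = (I - 1j * eps * L) @ np.linalg.inv(I + 1j * eps * L)
```

Since (1 − iǫL) and (1 + iǫL)^{−1} commute and are adjoints of each other's inverses, U*U = 1 holds up to rounding.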
3.13 Discrete Dirichlet problem
Also for other partial differential equations, solutions can be described probabilistically. We look here at the Dirichlet problem in a bounded discrete
region. The formula which we derive in this situation holds also in the
continuum limit, where the random walk is replaced by Brownian motion.
Definition. The discrete Laplacian on Z^2 is defined as

  ∆f(n, m) = f(n+1, m) + f(n−1, m) + f(n, m+1) + f(n, m−1) − 4f(n, m).

With the discrete partial derivatives

  δx+ f(n, m) = (1/2)(f(n+1, m) − f(n, m)),  δx− f(n, m) = (1/2)(f(n, m) − f(n−1, m)),
  δy+ f(n, m) = (1/2)(f(n, m+1) − f(n, m)),  δy− f(n, m) = (1/2)(f(n, m) − f(n, m−1)),

the Laplacian is the sum of the second derivatives as in the continuous case, where ∆ = fxx + fyy:

  ∆ = δx+ δx− + δy+ δy−.

The discrete Laplacian in Z^3 is defined in the same way as a discretisation of ∆ = fxx + fyy + fzz. The setup is analogue in higher dimensions:

  (∆u)(n) = (1/(2d)) Σ_{i=1}^d (u(n + ei) + u(n − ei) − 2u(n)),

where e1, . . . , ed is the standard basis in Z^d.
Definition. A bounded region D in Z^d is a finite subset of Z^d. Two points are connected in D if they are connected in Z^d. The boundary δD of D consists of all lattice points in D which have a neighboring lattice point which is outside D. Given a function f on the boundary δD, the discrete Dirichlet problem asks for a function u on D which satisfies the discrete Laplace equation ∆u = 0 in the interior int(D) and for which u = f on the boundary δD.
Figure. The discrete Dirichlet
problem is a problem in linear algebra. One algorithm to
solve the problem can be restated
as a probabilistic ”path integral
method”. To find the value of u
at a point x, look at the ”discrete Wiener space” of all paths
γ starting at x and ending at
some boundary point ST (ω) ∈
δD of D. The solution is u(x) =
Ex [f (ST )].
Definition. Let Ω_{x,n} denote the set of all paths of length n in D which start at a point x ∈ D and end up at a point in the boundary δD. It is a subset of Γ_{x,n}, the set of all paths of length n in Z^d starting at x. Let us call it the discrete Wiener space of order n defined by x and D. The set Γ_{x,n} has (2d)^n elements. We take the uniform distribution on this finite set, so that P_{x,n}[{γ}] = 1/(2d)^n.
Definition. Let L be the matrix for which L_{x,y} = 1/(2d) if x, y ∈ Z^d are connected by an edge and x is in the interior of D. The matrix L is a bounded linear operator on l^2(D) and satisfies L_{x,z} = L_{z,x} for x, z ∈ int(D) = D \ δD. Given f : δD → R, we extend f to a function F with F(x) = 0 on D \ δD and F(x) = f(x) for x ∈ δD. The discrete Dirichlet problem can be restated as the problem to find the solution u to the system of linear equations

  (1 − L) u = F.

Lemma 3.13.1. The number of paths in Ω_{x,n} starting at x ∈ D and ending at a different point y ∈ D is equal to (2d)^n L^n_{xy}.
Proof. Use induction. By definition, L_{xz} is 1/(2d) if there is an edge from x to z and x is an interior point; the integer (2d)^n L^n_{x,y} therefore counts the paths of length n from x to y which leave the interior of D at most at the last step.

Figure. Here is an example of a problem where D ⊂ Z^2 has 10 points. The matrix 4L is a 10 × 10 matrix with entries 0 and 1; only the rows corresponding to interior points are nonzero.

Definition. For a function f on the boundary δD, define

  E_{x,n}[f] = Σ_{y∈δD} f(y) L^n_{x,y}  and  E_x[f] = Σ_{n=0}^∞ E_{x,n}[f].

This functional defines for every point x ∈ D a probability measure µx on the boundary δD. It is the discrete analog of the harmonic measure in the continuum. The measure Px on the set of paths satisfies E_x[1] = 1, as we will just see. Let Sn be the random walk on Z^d and let T be the stopping time which is the first exit time of S from D.

Proposition 3.13.2. The solution to the discrete Dirichlet problem is

  u(x) = E_x[f(S_T)].

Proof. Because (1 − L)u = F and E_{x,n}[f] = (L^n F)_x,
we have from the geometric series formula

  (1 − L)^{−1} = Σ_{k=0}^∞ L^k

the result

  u(x) = (1 − L)^{−1} F(x) = Σ_{n=0}^∞ [L^n F]_x = Σ_{n=0}^∞ E_{x,n}[f] = E_x[f(S_T)].

Because the spectral radius of L is smaller than 1, the series converges. If f = 1 on the boundary, then u = 1 everywhere; from E_x[1] = 1 follows that the discrete Wiener measure is a probability measure on the set of all paths starting at x.

Remark. In the example above, the matrix K defined by K_{jj} = 1 for j ∈ δD and K_{ij} = L_{ji} else is a stochastic matrix: its column vectors are probability vectors. The matrix K has a maximal eigenvalue 1 and so norm 1 (K^T has the maximal eigenvector (1, . . . , 1) with eigenvalue 1, and the eigenvalues of K agree with the eigenvalues of K^T). The random walk defines a diffusion process.

Figure. The diffusion process after time t = 2.
Figure. The diffusion process after time t = 3.

The path integral result can be generalized, and the increased generality makes it even simpler to describe:

Definition. Let (D, E) be an arbitrary finite directed graph, where D is a finite set of n vertices and E ⊂ D × D is the set of edges. Denote an edge connecting i with j with e_{ij}. Let K be a stochastic matrix on l^2(D): the entries satisfy K_{ij} ≥ 0 and its column vectors are probability vectors, Σ_{i∈D} K_{ij} = 1 for all j ∈ D. Let us call a point j ∈ δD a boundary point if K_{jj} = 1. The complement int D = D \ δD consists of interior points. Define the matrix L by L_{jj} = 0 if j is a boundary point and L_{ij} = K_{ji} otherwise. The stochastic matrix encodes the graph and additionally defines a random walk on D if K_{ij} is interpreted as the transition probability to hop from j to i.
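The linear algebra formulation can be carried out concretely. For the box D = {0, . . . , 4}^2 in Z^2 with the boundary data f(i, j) = i + j (our choice for illustration), solving (1 − L)u = F must return u(i, j) = i + j exactly, since this f is discrete harmonic:

```python
import numpy as np

N = 5                                     # D is the N x N box in Z^2

def is_boundary(i, j):
    return i in (0, N - 1) or j in (0, N - 1)

sites = [(i, j) for i in range(N) for j in range(N)]
idx = {s: k for k, s in enumerate(sites)}
L = np.zeros((len(sites), len(sites)))
F = np.zeros(len(sites))
for (i, j) in sites:
    k = idx[(i, j)]
    if is_boundary(i, j):
        F[k] = i + j                      # boundary data f(i, j) = i + j
    else:                                 # interior rows: average of the 4 neighbours
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            L[k, idx[nb]] = 0.25

u = np.linalg.solve(np.eye(len(sites)) - L, F)
```

The computed u agrees with the harmonic extension u(i, j) = i + j, and every interior value equals the average of its four neighbours.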
Definition. The discrete Wiener space Ωx on D is the set of all finite paths γ = (x = x0, x1, x2, . . . , xn) starting at a point x ∈ D for which K_{x_i x_{i+1}} > 0. The discrete Wiener measure on this countable set is defined as

  Px[{γ}] = Π_{j=0}^{n−1} K_{x_j, x_{j+1}}.

Definition. A function u on D is called harmonic if (Lu)_x = 0 for all x ∈ int D. The discrete Dirichlet problem on the graph is to find a function u on D which is harmonic and which satisfies u = f on the boundary δD of D. It is the system of linear equations (1 − L)u = f.

Theorem 3.13.3 (The Dirichlet problem on graphs). Assume D is a directed graph. If Sn is the random walk starting at x and T is the stopping time to reach the boundary of D, then the solution u = E_x[f(S_T)] is the expected value of f(S_T) on the discrete Wiener space of all paths starting at x and ending at the boundary of D.

Proof. Let F be the function on D which agrees with f on the boundary of D and which is 0 in the interior of D. Because the matrix L has spectral radius smaller than 1, the solution of the problem is given by the geometric series

  u = Σ_{n=0}^∞ L^n F.

But this is the sum E_x[f(S_T)] over all paths γ starting at x and ending at the boundary of D.

Example. Let us look at a directed graph (D, E) with 5 vertices and 2 boundary points. The Laplacian on D is defined by the stochastic matrix

      (  0   1/3  0  0  0 )
      ( 1/2   0   1  0  0 )
  K = ( 1/4  1/2  0  0  0 )
      ( 1/8  1/6  0  1  0 )
      ( 1/8   0   0  0  1 )

or the Laplacian

      (  0   1/2  1/4  1/8  1/8 )
      ( 1/3   0   1/2  1/6   0  )
  L = (  0    1    0    0    0  )
      (  0    0    0    0    0  )
      (  0    0    0    0    0  )

Given a function f on the two boundary points, the solution u of the discrete Dirichlet problem (1 − L)u = f on this graph can be written as a path
integral Σ_{n=0}^∞ L^n F = E_x[f(S_T)] for the random walk Sn on D stopped at the boundary δD. The interplay of random walks on graphs and discrete partial differential equations is relevant in electric networks. For mathematical treatments, see [19, 103].

3.14 Markov processes

Definition. Given a measurable space (S, B), where S is a set and B is a σ-algebra on S, called the state space. A function P : S × B → R is called a transition probability function if P(x, ·) is a probability measure on (S, B) for all x ∈ S and if for every B ∈ B, the map s → P(s, B) is B-measurable. Define P^1(x, B) = P(x, B) and inductively the measures

  P^{n+1}(x, B) = ∫_S P(y, B) P^n(x, dy),

where we write P^n(x, dy) for the integration on S with respect to the measure P^n(x, ·).

Remark. Any network can be described by a graph. A set of nodes with connections is a graph. The link structure of the web forms a graph, where the individual websites are the nodes, and there is an arrow from site ai to site aj if ai links to aj. The graduate students and later entrepreneurs Sergey Brin and Lawrence Page had in 1996 the following "one billion dollar idea":

Definition. The adjacency matrix A of this graph is called the web graph. If there are n sites, then the adjacency matrix is a n × n matrix with entries A_{ij} = 1 if there exists a link from aj to ai. If we divide each column by the number of 1s in that column, we obtain a Markov matrix A which is called the normalized web matrix. Define also the matrix E which satisfies E_{ij} = 1/n for all i, j. A Google matrix is the matrix

  G = dA + (1 − d)E,

where 0 < d < 1 is a parameter called damping factor and A is the stochastic
matrix obtained from the adjacency matrix of the web graph by scaling its columns to probability vectors. This is a stochastic n × n matrix with eigenvalue 1. The corresponding eigenvector v, scaled so that the largest value is 10, is called the page rank of the damping factor d. [57]

Remark. Page rank is probably the world's largest matrix computation. In 2006, one had n = 8.1 billion.

Remark. With the multiplication

  (P ∘ Q)(x, B) = ∫_S P(y, B) Q(x, dy)

we get a commutative semigroup. The relation P^{n+m} = P^n ∘ P^m is also called the Chapman-Kolmogorov equation. The transition probability functions are elements in L(S, M1(S)), where M1(S) is the set of Borel probability measures on S.

Definition. Given a probability space (Ω, A, P) with a filtration An of σ-algebras. An An-adapted process Xn with values in S is called a discrete time Markov process if there exists a transition probability function P such that

  P[Xn ∈ B | Ak](ω) = P^{n−k}(Xk(ω), B).

Remark. It follows from the definition of a Markov process that Xn satisfies the elementary Markov property: for n > k,

  P[Xn ∈ B | X1, . . . , Xk] = P[Xn ∈ B | Xk].

The future depends only on the present and not on the past. This means that the probability distribution of Xn is determined by knowing the probability distribution of X_{n−1}.

Theorem 3.14.1 (Markov processes exist). For any state space (S, B) and any transition probability function P, there exists a corresponding Markov process X.

Proof. Choose a probability measure µ on (S, B). Define on the product space (Ω, A) = (S^N, B^N) the π-system C consisting of cylinder sets Π_{n∈N} Bn given by a sequence Bn ∈ B such that Bn = S except for finitely many n. Define a measure P = Pµ on (Ω, C) by requiring

  P[ωk ∈ Bk, k = 1, . . . , n] = ∫_{B0} ∫_{B1} · · · ∫_{Bn} P(x_{n−1}, dxn) · · · P(x0, dx1) µ(dx0).
Theorem 3.14.1 (Markov processes exist). There exists a corresponding Markov process X; this holds in particular when the state space S is a discrete space, a finite or countable set.
Call this measure P. If X0 is constant and equal to i and Xn is a Markov process with transition probability P . Every measure on S can be given by a vector π ∈ l2 (S) and P π is again a measure. Given a Markov process with finite or countable state space S. Q n Define the increasing sequence of σalgebras An = B n × i=1 {∅. The transitionPmatrix Pij is a stochastic matrix: each column is a probability vector: j Pij = 1 with Pij ≥ 0. B) dπ(x) . Independent Svalued random variables Assume the measures P (x. The random variables Xn (ω) = xn are An adapted. 1. Sum of independent Svalued random variables Let S be a countable Abelian group and let π be a probability distribution on S assigning to each j ∈ S the weight πj . Ω} containing cylinder sets. The transition probability function P acts also on measures π of S by Z P (x. . In order to see that it is a Markov process. Bn ) which is a special case of the above requirement by taking Bk = S for k 6= n. Define Pij = πj−i . Example. The matrix P transports the law of Xn into the law of Xn+1 . Every sequence of IID random variables is a Markov process. P(π)(B) = S A probability measure π is called invariant if Pπ = π.14. Now Xn is the sum of n independent random variables with law π. Define (i) the matrix Pij = πj . If X is a Svalued random variable with distriPn bution π then k=1 Xk has a distribution which we denote by π (n) . We define the transition matrix Pij on the Hilbert space l2 (S) by Pij = P (i. . we have to check that P[Xn ∈ Bn  An−1 ](ω) = P (Xn−1 (ω). The Markov chain with this transition probability matrix on S is called a branching process. Definition. ·) are independent of x. Example. Example. Branching processes Given S = {0. then Pijn = P[Xn = j]. Markov processes This measure has a unique extension to the σalgebra A. Countable and finite state Markov chains. 2 . The sum changes from i to j with probability Pij = pi−j . } = N with fixed probability distribution π. 
An invariant measure π on S is called a stationary measure of the Markov process.

Example. The S-valued random variables Xn are independent and have identical distribution, and P is the law of Xn. In this case P[Xn ∈ Bn | An−1](ω) = P[Bn], which means that P[Xn ∈ Bn | An−1] = P[Xn ∈ Bn].
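For a finite chain, an invariant measure can be approximated by iterating the transition matrix. The 3-state matrix below is an arbitrary illustration (rows as probability vectors), not one taken from the text.

```python
# Arbitrary 3-state stochastic matrix; rows are probability vectors.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.8, 0.1],
     [0.2, 0.2, 0.6]]

# Iterate pi <- pi P; since all entries of P are positive, this converges
# to the unique invariant (stationary) measure of the chain.
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
```

After the iteration, pi satisfies pi P = pi up to floating-point error, which is exactly the invariance condition above.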
If we show that µ± = f ± ν are both absolutely continuous also µ = µ+ − µ− is absolutely continuous. For any x ∈ S define the measure ν(B) = ∞ X 1 n P (x. But this is obvious R P (1B ν) = B P (x. It is enough to show this for elementary functions f = j aj 1Bj with aj > 0 with Bj ∈ B P satisfying j aj ν(Bj ) = 1 satisfies P (1B ν) = ν(B). . B)f (x) dν(x) ≥ 0 . Choose ν as above and define Pf1 = f2 if Pµ1 = µ2 with µi = fi ν. B) = 0 for all n and so f (x)ν(B) = 0 implying ν(B) = 0. Proof. S We also have to show that Pf 1 =P1 if f 1 = 1.14. B) has the property that if µ is absolutely continuous with respect to ν. Corollary 3. Given µ = f · ν with f ∈ L1 (S). Proof. To each transition probability function can be assigned a Markov operator P : L1 (S. To check that P is a Markov operator. Z Pµ = P (x. B)f (x) dν(x) S is absolutely continuous with respect to ν because Pµ(B) = 0 implies P (x.3. which follows from Z Pf ν(B) = P (x.14. ν).2. We can so assign a Markov operator to a transition probability function: Lemma 3.196 Chapter 3. B) 2n n=0 on (S. ·) dν(x) = ν(B). ν) → L1 (S. then also Pµ is absolutely continuous with respect to ν. Now. Discrete Stochastic Processes This operator on measures leaves a subclass of measures with densities with respect to some measure ν invariant. where f ± are both nonnegative. Lets assume that f ≥ 0 because in general we can write f = f + − f − . B) = 0 for almost all x with f (x) > 0 and therefore f (x)P n (x. we have to check Pf ≥ 0 if f ≥ 0.
We see that the abstract approach of studying Markov operators on L1(S) is more general than looking at transition probability measures when dealing with discrete time Markov processes. This point of view can reduce some of the complexity.
1 Brownian motion Definition. This means that for almost all ω ∈ Ω. one says X is continuous in probability if for any t ∈ R the limit Xt+h → Xt takes place in probability for h → 0. If time is an interval. It is called a sample function of the stochastic process. R+ or R. Xt is called a right continuous stochastic process. one can regard Xt (ω) as a function of t. Definition. A. it is called a stochastic process with continuous time. A stochastic process is called measurable. it is a sample path. the sample functions coincide Xt (ω) = Yt (ω). If the sample function Xt (ω) is a continuous function of t for almost all ω. If the time T can be a discrete subset of R. Definition. In the case of a realvalued process (S = R). it is called a vectorvalued stochastic process but one often abbreviates this by the name stochastic process too. P) be a probability space and let T ⊂ R be time. A Rn valued random vector X is called Gaussian. if it has the multidimensional characteristic function φX (s) = E[eis·X ] = e−(s. Two stochastic process Xt and Yt satisfying P[Xt − Yt = 0] = 1 for all t ∈ T are called modifications of each other or indistinguishable. For any fixed ω ∈ Ω. if X : T × Ω → S is measurable with respect to the product σalgebra B(T ) × A. If Xt takes values in S = Rd . a curve in Rd . A collection of random variables Xt .Chapter 4 Continuous Stochastic Processes 4. then Xt is called a discrete time stochastic process.V s)/2+i(m. then Xt is called a continuous stochastic process.s) 199 . t ∈ T with values in R is called a stochastic process. If the sample function is a right continuous function in t for almost all ω ∈ Ω. In the case of a vectorvalued process. Let (Ω.
. A normal distributed random variable X is a Gaussian random variable. . For a jointly Gaussian set of of random variables Xj . Therefore. 0). Example. Example. 0) is standard Brownian motion. where r = min(s. The covariance matrix is in this case the scalar Var[X]. Xt ] = E[(Xs − ms )·(Xt −mt )∗ ] is called Brownian motion if for any 0 ≤ t0 < t1 < · · · < tn . the vector X = (X1 . t) = Cov[Xs . Xn ) is a Gaussian random vector. A Gaussian process is a Rd valued stochastic process with continuous time such that (Xt0 . Xn are called jointly Gaussian if any linear combination ni=1 ai Xi is a Gaussian random variable too. . t) = V (r. one can diagonalize it. Note that because V is symmetric. Figure. Example. Xti+1 − Xti are independent and the covariance matrix V satisfies V (s. It is called centered if mt = E[Xt ] = 0 for all t. . s). the computation can be done in a bases. It is called the standard Brownian motion if mt = 0 for all t and V (s. t) and s 7→ V (s. Xtn ) is jointly Gaussian for any t0 ≤ t1 < · · · < tn . A set of random P variables X1 . . . The matrix V is called covariance matrix and the vector m is called the mean vector. . Xt1 . the random vectors Xt0 . where V is diagonal. . An Rd valued continuous Gaussian process Xt with mean vector mt = E[Xt ] and the covariance matrix V (s. A path Xt (ω1 ) of Brownian motion in the plane S = R2 with a drift mt = E[Xt ] = (t. then the random variable −1 1 p e−(x−m. .V (x−m))/2 X(x) = n/2 (2π) det(V ) on Ω = Rn is a Gaussian random variable with covariance matrix V . r). t) = min{s.200 Chapter 4. . This reduces the situation to characteristic functions for normal random variables. Continuous Stochastic Processes for some nonsingular symmetric n × n matrix V and vector m = E[X]. To see that it has the required multidimensional characteristic function φX (u). . Definition. . Example. The process Yt = Xt − (t. t}. This is not standard Brownian motion. . If V is a symmetric matrix with determinant det(V ) 6= 0.
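A Gaussian vector with prescribed mean m and covariance V can be sampled by the standard factorization trick X = m + LZ with V = L Lᵀ and Z standard normal. This is a generic sketch of that idea, not the text's construction; the 2x2 covariance is an arbitrary example.

```python
import random

def chol(V):
    """Cholesky factor L with V = L L^T (V symmetric positive definite)."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((V[i][i] - s) ** 0.5 if i == j
                       else (V[i][j] - s) / L[j][j])
    return L

def gaussian_vector(m, V):
    """Sample X = m + L Z with Z standard normal, so that Cov[X] = V."""
    L, n = chol(V), len(m)
    z = [random.gauss(0, 1) for _ in range(n)]
    return [m[i] + sum(L[i][k] * z[k] for k in range(n)) for i in range(n)]

random.seed(0)
V = [[2.0, 1.0], [1.0, 2.0]]
samples = [gaussian_vector([0.0, 0.0], V) for _ in range(50_000)]
emp01 = sum(x * y for x, y in samples) / len(samples)  # approaches V[0][1] = 1
```

Since Cov[LZ] = L Cov[Z] Lᵀ = L Lᵀ = V, any factorization of V produces the required covariance; Cholesky is just the cheapest one for positive definite V.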
Let X be Gaussian on R and define for α > 0 the variable Y (ω) = −X(ω).2. Y ) where each of the two component is a onedimensional random Gaussian variable. Y ). t). Y ) with random vectors X.1. . This example shows why Gaussian vectors (X. s). Y with mean vectors m. Lemma 4. Y ] U 0 U= = Cov[Y. If Xt is a Gaussian process with covariance V (s.1. Proof.Y ) (s. t ∈ Rn Indeed. Example.1. the covariance matrix is Cov[X. if ω > α and Y = X else.1. r) with r = min(s. Also Y is Gaussian and there exists α such that E[XY ] = 0. Y are centered. then U Cov[X. But X and Y are not independent and X +Y = 0 on [−α. Y ] = 0 if this matrix is the zero matrix. then it is Brownian motion. In the context of this lemma.Y ) (r) 1 = E[eir·(X. Brownian motion Recall that for two random vectors X. Y which are not independent [114]: Proof. With r = (t. A Gaussian random vector (X. Y ) are defined directly as R2 valued random variables with some properties and not as a vector (X. α] shows that X +Y is not Gaussian. n. We can assume without loss of generality that the random variables X. Proposition 4. X] V 0 V is the covariance matrix of the random vector (X. Y satisfying Cov[X. t) = φX (s) · φY (t).201 4. ∀s. if V is the covariance matrix of the random vector X and W is the covariance matrix of the random vector Y . We say Cov[X. Two Rn valued Gaussian random vectors X and Y are independent if and only if φ(X. Y ]ij = E[(Xi − mi )(Yj − nj )]. t) = V (r. we have therefore φ(X.Y ) ] = e− 2 (r·Ur) 1 1 = e− 2 (s·V s)− 2 (t·W t) 1 1 = e− 2 (s·V s) e− 2 (t·W t) = φX (s)φY (t) . Y ] = 0 has the property that X and Y are independent. one should mention that there exist uncorrelated normal distributed random variables X.
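The example of uncorrelated but dependent normal variables can be simulated. The threshold alpha ≈ 1.54 below is a numerical approximation (an assumption, not a value from the text) of the level at which Cov[X, Y] vanishes for standard normal X.

```python
import random

random.seed(0)
alpha = 1.54   # approximate root of E[X^2 ; |X| <= a] = 1/2 (assumption)
N = 200_000
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [-x if abs(x) > alpha else x for x in xs]  # Y = -X on {|X| > alpha}, Y = X else

cov = sum(x * y for x, y in zip(xs, ys)) / N    # nearly zero by the choice of alpha
var_y = sum(y * y for y in ys) / N              # Y is again standard normal
# X + Y vanishes exactly on {|X| > alpha}, so X + Y has an atom at 0 and
# cannot be Gaussian: hence (X, Y) is not a Gaussian random vector.
frac_zero = sum(1 for x, y in zip(xs, ys) if x + y == 0.0) / N
```

The simulation exhibits the point of the example: each marginal is Gaussian and the covariance is (numerically) zero, yet a positive fraction of samples of X + Y equals 0 exactly, ruling out joint Gaussianity.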
Because Xn are independent. Given a separable real Hilbert space (H. and X(h) is Gaussian. Xtj+1 − Xtj ] = 0 . define X X(h) = hn X n n 2 which converges in L . ti ) + V (ti . Remark. There exists a probability space (Ω. But by assumption Cov[Xt0 . Xtj+1 − Xtj ] = V (t0 . Pick an orthonormal basis {en } in H and attach to each en a centered GaussianPIID random variable Xn ∈ L2 satisfying Xn 2 = 1. irregular motion having its origin in the particles themselves and not in the surrounding fluid. The construction of Brownian motion happens in two steps: one first constructs a Gaussian process which has the desired properties and then shows that it has a modification which is continuous. tj ) −V (ti . Xtj+1 − Xtj ] = = V (ti+1 . ti+1 ) −V (ti .1. Brown’s explanation was that matter is composed of small ”active molecules”. Brown’s contribution was to establish Brownian motion as an important phenomenon. h ∈ H of realvalued random variables on Ω such that h 7→ X(h) is linear. t0 ) − V (t0 . Continuous Stochastic Processes Proof. The book [75] includes more on the history of Brownian motion. Proposition 4. tj+1 ) − V (ti+1 . Cov[Xti+1 − Xti .3. tj ) V (ti+1 . centered and E[X(h)2 ] = h2 . By the above lemma (4. we only have to check that for all i < j Cov[Xt0 . While previous studies concluded that these particles were alive. tj+1 ) + V (ti . n n . tj ) = V (t0 . to demonstrate its presence in inorganic as well as organic matter and to refute by experiment incorrect mechanical or biological explanations of the phenomenon.1). A.  · ). which exhibit a rapid. Botanist Robert Brown was studying the fertilization process in a species of flowers in 1828. Xtj+1 − Xtj ] = 0. he observed small particles in ”rapid oscillatory motion”. Proof. they are orthonormal in L2 so that X X X(h)22 = h2n Xn 2 = h2n = h22 . While watching pollen particles in water through a microscope.202 Chapter 4. ti+1 ) − V (ti+1 . P) and a family X(h). tj+1 ) − V (t0 . ti ) = 0 . 
Given a general h = hn en ∈ H. t0 ) = 0 and Cov[Xti+1 − Xti .1.
203 4. t + h ∈ [a. Define ǫ = r − αp. By the ChebychevMarkov inequality (2. . b] for which there exist three constants p > r. X(B)] = (1A . We can assume without loss of generality that a = 0. Define the process Bt = X([0. One also has n X(∅) = 0. the map X : H 7→ L2 is also called a Gaussian measure. Given a stochastic process Xt with t ∈ [a.1. dx). For any sequence t1 . this process has independent increments Bti − Bti−1 and is a Gaussian process. This model of Brownian motion has everything except continuity. 1B ) = A ∩ B . K such that E[Xt+h − Xt p ] ≤ K · h1+r for every t. b] Yt (ω) − Ys (ω) ≤ C(ω) t − sα . then Xt has a modification Yt which is almost everywhere continuous: for all s. Brownian motion Definition. For a Borel set A ⊂ R+ we define then X(A) = X(1A ). If we choose H = L2 (R+ . The term ”measure” is warranted by the fact that X(A) = P X(A n ) if A is a countable disjoint union of Borel sets An . b]. t ∈ [a. · · · ∈ T .4) P[Xt+h − Xt ] ≥ hα ] ≤ h−αp E[Xt+h − Xt p ] ≤ Kh1+ǫ so that P[X(k+1)/2n − Xk/2n  ≥ 2−nα ] ≤ K2−n(1+ǫ) . Definition. p Proof. 0 < α < r .4 (Kolmogorov’s lemma). the increment Bt − Bs has variance t − s so that E[Bs Bt ] = E[Bs2 ] + E[Bs (Bt − Bs )] = E[Bs2 ] = s .1. We know from the above lemma that h and h′ are orthogonal if and only if X(h) and X(h′ ) are independent and that E[X(A)X(B)] = Cov[X(A).5. h′ ) . The space X(H) ⊂ L2 is a Hilbert space isomorphic to H and in particular E[X(h)X(h′ )] = (h. b = 1 because we can translate and rescale the time variable to be in this situation. Theorem 4. Remark. t]). For each t. we have E[Bt2 ] = t and for s < t. t2 . Especially X(A) and X(B) are independent if and only if A and B are disjoint.
b]. Given t. Y satisfies Yt (ω) − Ys (ω) ≤ C(ω) t − sα for all s.204 Chapter 4. Brownian motion exists. Then Xt (ω) − Xk2−n (ω) ≤ m X i=1 Pm i=1 γi /2n+i γi 2−α(n+i) ≤ d 2−nα with d = (1 − 2−α )−1 . Let n ≥ n(ω) and t ∈ [k/2n . For almost all ω. . Moreover. . The process Y is therefore a modification of X. t + h ∈ D = {k2−n  n ∈ N. .1. there exists n(ω) < ∞ almost everywhere such that for all n ≥ n(ω) and k = 0. Similarly Xt − X(k+1)2−n  ≤ d 2−nα . s∈D→t Because the inequality in the assumption of the theorem implies E[Xt (ω) − lims∈D→t Xs (ω)] = 0 and by Fatou’s lemma E[Yt (ω)−lims∈D→t Xs (ω)] = 0 we know that Xt = Yt almost everywhere. Then (k + 1)/2n+1 ≤ t + h ≤ (k + 3)/2n+1 and Xt+h − Xt  ≤ 2d2−(n+1)α ≤ 2dhα . . (k+1)/2n ] of the form t = k/2n + with γi ∈ {0. Corollary 4. . . n − 1}. Such a function can be extended to a continuous function on [0. By the first BorelCantelli’s lemma (2.2). Take n so that 2−n−1 ≤ h < 2−n and k so that k/2n+1 ≤ t < (k + 1)/2n+1 . k = 0. 1] by defining Yt (ω) = lim Xs (ω) . the path Xt (ω) is uniformly continuous on the dense set of dyadic numbers D = {k/2n}.2.5. Continuous Stochastic Processes Therefore n ∞ 2X −1 X n=1 k=0 P[X(k+1)/2n − Xk/2n  ≥ 2−nα ] < ∞ . We know now that for almost all ω. t ∈ [a. . 2n − 1 X(k+1)/2n (ω) − Xk/2n (ω) < 2−nα . 1}. . this holds for sufficiently small h.
.s] 1[0. k < 2n } and f0. Bt ) of n independent one. Let Bt be the standard Brownian motion. One has then the process Xt (ω) = ω(t). 0) } of independent Gaussian random variables. A general construction of such measures is possible given a Markov transition probability function [108]. Because that proof gives an explicit formula for the Brownian motion process Bt and is so constructive. By construction of a Wiener measure on C[0. It can be found in Simon’s book on functional integration [97] or in the book of Revuz and Yor [86] about continuous martingales and Brownian motion. where the probability space is directly given by the set of paths. The construction given here is due to Neveu and goes back to Kakutani. k < 2n .n)∈I 0 s Z 0 t f(k. so that E[(Bt+h − Bt )4 ] = 3h2 . .n) fι = 0 1[0. one has a construction of Brownian motion. n)n ≥ 1. .4) assures the existence of a continuous modification of B. k odd } ∪ {(0. To define standard Brownian motion in n dimension.0 = 1. 5) Check E[Bs Bt ] = X Z (k. The first rigorous construction of Brownian motion was given by Norbert Wiener in 1923. 2) Take a family Xk. .205 4. the process Xtx = x + Bt is called Brownian motion started at x. In one dimension. we take the joint (1) (n) motion Bt = (Bt . take the process Bt from above.k2−n ] − 1[k2−n . For any x ∈ Rn . t } . 1] the Haar functions fk. Since Xh = Bt+h − Bt is centered with variance h. the fourth moment is E[Xh4 ] = d4 2 2 dx4 exp(−x h/2)x=0 = 3h . In McKean’s book ”Stochastic integrals” [68] one can find L´evy’s direct proof of the existence of Brownian motion.n .t] = inf{s. 1]. This construction has the advantage that it can be applied to more general situations. Kolmogorov’s lemma (4. 0 4) Prove convergence of the above series. .n (k. n) ∈ I = {(k.n)∈I Z t Z 1 fk. Brownian motion Proof.n for (k. n)  n ≥ 1. 3) Define X Bt = Xk.n := 2(n−1)/2 (1[(k−1)2−n .dimensional Brownian motions. Definition.1. We will come to this later.(k+1)2−n ] ) for {(k. 
we outline it shortly: 1) Take as a basis in L2 ([0.1.
Proposition 4. To do so. A. The continuity of Xt and Yt implies then that for almost all ω. the process B nian motion. Proof. Brownian motion is unique in the sense that two standard Brownian motions are indistinguishable. A′ . Xt (ω) = Yt (U ω). u ≤ s). ˜t = −Bt is a Brownian motion.(iii) In each case.1. we first have to say.2 Some properties of Brownian motion We first want to establish that Brownian motion is unique. ˜t is a continuous centered Gaussian proProof. B cess with continuous paths. they are indistinguishable. ˜0 = 0.(ii). . We are now ready to list some symmetries of Brownian motion. (i). 1] to t ∈ [0. The following symmetries exist: ˜t = Bt+s − Bs is a (i) Timehomogeneity: For any s > 0. In other words. A. Continuous Stochastic Processes 6) Extend the definition from t ∈ [0. The construction of the map H → L2 was unique in the sense that if we construct two different processes X(h) and Y (h). A special case is if the two processes are defined on the same probability space (Ω. such that Xt′ (U ω) = Xt (ω). P) and Xt′ on (Ω′ . then there exists an isomorphism U of the probability space such that X(h) = Y (U (h)). t > 0 is a Brownian (iv) Time inversion: The process B motion. Two processes Xt on (Ω. B ˜t = tB1/t .2 (Properties of Brownian motion). if there exists an isomorphism U : Ω → Ω′ of probability spaces. Indistinguishable processes are considered the same. independent increments and variance t. Theorem 4. 4. ∞) by taking independent P (n) (i) Brownian motions Bt and defining Bt = n<[t] Bt−n .206 Chapter 4.2. P′ ) are called indistinguishable. the process B Brownian motion independent of σ(Bu . (ii) Reflection symmetry: The process B ˜t = cBt/c2 is a Brow(iii) Brownian scaling: For every c > 0. P) and Xt (ω) = Yt (ω) for almost all ω.2. when two processes are the same: Definition. where [t] is the largest integer smaller or equal to t.
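The series construction of Brownian motion sketched in steps 1)-6) can be mimicked numerically by Lévy's midpoint refinement, which fills in dyadic points level by level; each new level corresponds to one generation of Haar coefficients. A minimal sketch on [0, 1]:

```python
import random

random.seed(1)

def brownian_path(levels):
    """Dyadic midpoint refinement of a Brownian sample path on [0, 1]."""
    path = {0.0: 0.0, 1.0: random.gauss(0, 1)}   # B_0 = 0, B_1 ~ N(0, 1)
    dt = 1.0
    for _ in range(levels):
        dt /= 2
        for k in range(1, int(1 / dt), 2):       # new dyadic points at this level
            t = k * dt
            left, right = path[t - dt], path[t + dt]
            # Brownian bridge interpolation: given the endpoints, the midpoint
            # value has conditional mean (left+right)/2 and variance dt/2.
            path[t] = (left + right) / 2 + random.gauss(0, (dt / 2) ** 0.5)
    return path

B = brownian_path(10)   # one sample path evaluated at all k / 2^10
```

The increments B_{(k+1)/2^n} - B_{k/2^n} produced this way are independent N(0, 2^{-n}) variables, which is exactly the finite-dimensional structure the Haar-series construction delivers.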
∞) 7→ Xt ∈ Rn is called H¨older continuous of order α if there exists a constant C such that Xt+h − Xt  ≤ C · hα for all h > 0 and all t. From the time inversion property (iv). .2. we know that B ˜s → 0 almost t = 0. because of the almost everywhere continuity of Bt . but since E[B everywhere. The curve is called locally H¨ older continuous of order α if there exists for each t a constant C = C(t) such that Xt+h − Xt  ≤ C · hα for all small enough h. For every α < 1/2. Definition. We have to check continuity only for Continuity of B ˜s2 ] = s → 0 for s → 0.3 (SLLN for Brownian motion). Some properties of Brownian motion 207 ˜ is a centered Gaussian process with covariance (iv) B ˜s . we see that t−1 Bt = B1/t which converges for t → ∞ to 0 almost everywhere. A parameterized curve t ∈ [0. B ˜t ] = E[B ˜s . t) .4.4. 1 ) = inf(s. B ˜t ] = st · E[B1/s .2. Proof. A curve which is H¨ older continuous of order α = 1 is called Lipshitz continuous. (local) H¨ older continuity holds if for almost all ω ∈ Ω the sample path Xt (ω) is (local) H¨ older continuous for almost all ω ∈ Ω. If Bt is Brownian motion. B1/t ] = st · inf( 1 . Brownian motion has a modification which is locally H¨ older continuous of order α. Proposition 4. then 1 lim Bt = 0 t→∞ t almost surely. Cov[B s t ˜t is obvious for t > 0. It follows the strong law of large numbers for Brownian motion: Theorem 4.2. For a Rd valued stochastic process.
Since increments of Brownian motion are Gaussian. the result follows. We follow [68]. A continuous path Xt = (Xt . 2p Because p can be chosen arbitrary large. n .5 (Wiener). Continuous Stochastic Processes Proof.208 Chapter 4. It is enough to show it in one dimensions. if for all t. Xt ) is called nowhere (i) differentiable. . Because of this proposition. . 0 < α < p−1 . . we have E[(Bt − Bs )2p ] = Cp · t − sp for some constant Cp .2. Suppose Bt is differentiable at some point 0 ≤ s ≤ 1. the path t 7→ Xt (ω) is nowhere differentiable. Theorem 4. we can assume from now on that all the paths of Brownian motion are locally H¨ older continuous of order α < 1/2. But this means that l Bj/n − B(j−1)/n  ≤ 7 n for all j satisfying i = [ns] + 1 ≤ j ≤ [ns] + 4 = i + 3 and sufficiently large n so that the set of differentiable paths is included in the set B= [ [ \ [ \ l≥1 m≥1 n≥m 0<i≤n+1 i<j≤i+3 {Bj/n − B(j−1)/n  < 7 l }. . It is enough to show it in one dimension because a vector function with locally H¨ older continuous component functions is locally H¨ older continuous. (1) (n) Definition. each coordinate function Xt is not differentiable at t. Proof. Brownian motion is nowhere differentiable: for almost all ω. Kolmogorov’s lemma assures the existence of a modification satisfying locally Bt − Bs  ≤ C t − sα . There exists then an integer l such that Bt − Bs  ≤ l(t − s) for t − s > 0 small enough.
Consider the vector space of finite sums i=1 ai δti with inner product d d X X X ai bj V (ti .j n X ui Xti )2 ] ≥ 0 . if for all finite sets {t1 . . n→∞ n Remark. A. bj δtj ) = ai δti . .2. One has just to do the same trick with k instead of 3 steps. ǫ s −Bt  More precisely. The actual modulus of continuity is very near to α = 1/2: Bt − Bt+ǫ  is of the order r 1 h(ǫ) = 2ǫ log( ) . . ( i=1 j=1 i. Any positive semidefinite function V on T ×T is the covariance of a centered Gaussian process Xt . . .2). we show that P[B] = 0 as follows \ [ \ l P[ {Bj/n − B(j−1)/n  < 7 }] n n≥m 0<i≤n+1 i<j≤i+3 ≤ = ≤ l lim inf nP[B1/n  < 7 ]3 n→∞ n l 3 lim inf nP[B1  < 7 √ ] n→∞ n C lim √ = 0 . P[lim supǫ→0 sups−t≤ǫ Bh(ǫ) = 1] = 1. . The covariance of standard Brownian motion was given by E[Bs Bt ] = min{s. . We look now at a more general class of Gaussian processes. This proposition shows especially that we have no Lipshitz continuity of Brownian paths. P). ∞)) as a Gaussian subspace of L2 (Ω.6. where k(α − 1/2) > 1. . A function V : T × T → R is called positive semidefinite. Definition. V (ti . . un ). The covariance of a centered Gaussian process is positive semidefinite. td } ⊂ T . Proof.4. tj ) . the matrix Vij = V (ti . We constructed it by implementing the Hilbert space L2 ([0. Some properties of Brownian motion 209 Using Brownian scaling.j .4. as we will see later in theorem (4. t}.2. . un ) X i. . tj ) satisfies (u. Proposition 4. V u) ≥ 0 for all vectors u = (u1 . tj )ui uj = E[( i=1 We introduce Pnfor t ∈ T a formal symbol δt . A slight generalization shows that Brownian motion is not H¨ older continuous for any α ≥ 1/2. . The first statement follows from the fact that for all u = (u1 .
δt ) = V (s. t) = 21 e−t−s on T × T . eikt e−t dt = 2 + 1) 2π(k R By Fourier inversion. Xs ] = (δs . Lets look at some examples of Gaussian processes: Example. . We first show that V is positive semidefinite: The Fourier transform of f (t) = e−t is Z 1 .k=1 (k 2 + 1)−1 X j uj eiktj 2 dk 1 uj uk e−tj −tk  .2. 2π R 2 and so 0 −1 ≤ (2π) = n X Z R j. Proposition 4. The OrnsteinUhlenbeck is also called the oscillatory process. Let T = R+ and take the function V (s. Define now as in the construction of Brownian motion the process Xt = X(δt ). we get Z 1 1 (k 2 + 1)−1 eik(t−s) dk = e−t−s . we have E[Xt . Multiplying out the null vectors {v = 0 } and doing a completion gives a separable Hilbert space H.7. 2 This process has a continuous modification because E[(Xt − Xs )2 ] = (e−t−t + e−s−s − 2e−t−s )/2 = (1 − e−t−s ) ≤ t − s and Kolmogorov’s criterion. t) . Brownian motion Bt and the OrnsteinUhlenbeck process Ot are for t ≥ 0 related by 1 Ot = √ e−t Be2t . Because the map X : H → L2 preserves the inner product. Denote by O the OrnsteinUhlenbeck process and let Xt = 2−1/2 e−t Be2t . Continuous Stochastic Processes This is a positive semidefinite inner product.210 Chapter 4. 2 Proof. The OrnsteinUhlenbeck oscillator process Xt is a onedimensional process which is used to describe the quantum mechanical oscillator as we will see later.
4. one observes that Xt = Bs − sB1 is a Gaussian process. It is also called tied down process. Brownian bridge is a onedimensional process with time T = [0. e2s } = es−t /2 . which has the covariance E[Xs Xt ] = E[(Bs − sB1 )(Bt − tB1 )] = s + st − 2st = s(1 − t) . The case V (s. Three paths of the OrnsteinUhlenbeck process. ThereRare also generalized Ornsteinˆ(t − s) Uhlenbeck processes. continuous processes with independent increments. To verify that they are the same. t) = V (t. we have to show that they have the same covariance. t) = µ ˆ(t − s) is positive semidefinite. we have X1 = 0 which means that all paths start from 0 at time 0 and end at 1 at time 1. t) = s(1 − t) for 1 ≤ s ≤ t ≤ 1 and V (s. t) = R e−ik(t−s) dµ(k) = µ with the Cauchy measure µ = 2π(k12 +1) dx on R can be generalized to take symmetric measure µ on R and let µ ˆ denote its Fourier transform Rany −ikt e dµ(k). This is a computation: E[Ot Os ] = 1 −t −s e e min{e2t . Figure. Since E[X12 ] = 0. The realization Xt = Bs − sB1 shows also that Xt has a continuous realization. s) else. In order to show that V is positive semidefinite. 1] and V (s. Both X and O are centered Gaussian. 2 It follows from this relation that also the OrnsteinUhlenbeck process is not differentiable almost everywhere. Example. Some properties of Brownian motion 211 We want to show that X = Y . The same calculation as above shows that the function R V (s. .2.
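The time-change relation O_t = e^{-t} B_{e^{2t}} / sqrt(2) can be checked by Monte Carlo against the Ornstein-Uhlenbeck covariance e^{-|t-s|}/2. A sketch with arbitrarily chosen times s = 0.3 and t = 0.8:

```python
import random, math

random.seed(2)

def ou_cov_estimate(s, t, n_samples):
    """Estimate E[O_s O_t] using O_t = e^{-t} B_{e^{2t}} / sqrt(2), for s <= t."""
    a, b = math.exp(2 * s), math.exp(2 * t)
    acc = 0.0
    for _ in range(n_samples):
        B_a = random.gauss(0, math.sqrt(a))              # B at time e^{2s}
        B_b = B_a + random.gauss(0, math.sqrt(b - a))    # independent increment
        acc += (math.exp(-s) * B_a / math.sqrt(2)) * (math.exp(-t) * B_b / math.sqrt(2))
    return acc / n_samples

est = ou_cov_estimate(0.3, 0.8, 200_000)   # theory predicts e^{-0.5} / 2
```

Only the two Brownian values B_{e^{2s}} and B_{e^{2t}} are needed, and they are generated exactly from the independent-increments property, so the estimate tests the covariance identity and nothing else.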
The process Y has however no more zero mean. Example.212 Chapter 4. (t + 1) (s + 1) s t (1 − t)(1 − s) min{ . 0 0 0 In a distributional sense. if s 6= t. If V (s. Z t Z t Z t t = E[ Xs2 ] = E[ Xs Xs ds] = E[ Xs dµ(s)] = 0 . there is no autocorrelation between different Xs and Xt . We can consider the Gaussian process Yt = ty + Xt which describes paths going from 0 at time 0 to y at time 1. we get a Gaussian process which has the property that Xs and Xt are independent. But then Z t Z t Z t′ 2 2 E[Yt ] = E[( Xs ) ] = E[Xs′ Xs′ ] ds′ ds = 0 0 0 0 which implies Yt = 0 almost everywhere so that the measure dµ(ω) = Xs (ω) ds is zero for almost all ω. } = min{s. Bt = B These identities follow from the fact that both are continuous centered Gaussian processes with the right covariance: ˜t ] = ˜s B E[B ˜t] = ˜sX E[X s t . Three paths of Brownian bridge. t) = 1{s=t } . We will look at stochastic differential equations later. . ω) 7→ Xt (ω) were measurable. Especially. This process is called white noise or ”great disorder”. It can not be modified so that (t. t} = E[Bs Bt ] . one can see Brownian motion as a solution of the stochastic differential equation and white noise as a generalized meansquare derivative of Brownian motion. Let Xt be the Brownian bridge and let y be a point in Rd . Continuous Stochastic Processes Figure. ω) 7→ Xt (ω) is Rt measurable: if (t. Xt = X ˜ t := (1 − t)Bt/(1−t) . then Yt = 0 Xs ds would be measurable too. Brownian motion B and Brownian bridge X are related to each other by the formulas: ˜t := (t + 1)Xt/(t+1) . } = s(1 − t) = E[Xs Xt ] (1 − s) (1 − t) (t + 1)(s + 1) min{ and uniqueness of Brownian motion.
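The covariance s(1 - t) of the Brownian bridge X_t = B_t - t B_1 can likewise be checked by simulation; the times 0.25 and 0.5 below are arbitrary choices.

```python
import random

random.seed(3)

def bridge_cov_estimate(s, t, n_samples):
    """Estimate Cov[X_s, X_t] for the bridge X_t = B_t - t B_1, with s <= t <= 1."""
    acc = 0.0
    for _ in range(n_samples):
        # build B_s, B_t, B_1 from independent Gaussian increments
        B_s = random.gauss(0, s ** 0.5)
        B_t = B_s + random.gauss(0, (t - s) ** 0.5)
        B_1 = B_t + random.gauss(0, (1 - t) ** 0.5)
        acc += (B_s - s * B_1) * (B_t - t * B_1)
    return acc / n_samples

est = bridge_cov_estimate(0.25, 0.5, 200_000)  # theory: 0.25 * (1 - 0.5) = 0.125
```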
A. Brownian sheet is not a stochastic process with one dimensional time but a random field: time T = R2+ is two dimensional.s0 is standard Brownian motion. E) be a measurable space and let T be a set called ”time”.3.. A.s0 or Bt = Bt. Denote by Yt (w) = w(t) the coordinate maps on ET . the time T can be quite arbitrary and proposition (4. . s2 ). Figure.6) stated at the beginning of this section holds true. also φ is measurable. as long as we deal only with Gaussian random variables and do not want to tackle regularity questions. P) to (E T . t2 )) = min(s1 .. t2 ) is called Brownian sheet. Illustrating a sample of a Brownian sheet Bt. . . (t1 . Every trace Bt = Bt. n] . which is the smallest algebra for which all the functions Xt are measurable which is the σalgebra generated by the πsystem { n Y t1 .3 The Wiener measure Let (E.s . Because Yt ◦ φ is measurable for all t. xti ∈ Ati }  Ati ∈ E} consisting of cylinder sets. . tn ) ⊂ T and all sets Ai ∈ E. . . It has similar scaling properties as Brownian motion. . 1 = 1. Denote by PX the pushforward measure of φ from (Ω. E T ) defined by PX [A] = P[X −1 (A)]. t1 ) · min(s2 . 4. One says. Time is two dimensional. The Wiener measure 213 Example.. .4. A stochastic process on a probability space (Ω.. P) indexed by T and with values in E defines a map φ : Ω → E T . The product space ET is equipped with the product σalgebra E T . . we have P[Xti ∈ Ai . the two processes X and Y are versions of each other. . Actually. n] = PX [Yti ∈ Ai . . i = 1.2. For any finite set (t1 . ω 7→ Xt (ω) .tn Ati = {x ∈ E T . The Gaussian process with V ((s1 . .
E). Obviously. E)  f − f0  ≤ r} ∈ B are in B ′ . the process Y defined on (D. If D is right continuous with left hand limits. The Wiener measure can also be constructed without Brownian motion and can be used to define Brownian motion. Proof. The other relation B ′ ⊂ B is clear so that B = B ′ . Y is called the coordinate process of X and the probability measure PX is called the law of X. Definition. Let D = C(T. which is the Borel σalgebra restricted to C(T. E). Define the probability space (D. E) is called the Wiener measure. Definition. X ′ possibly defined on different probability spaces are called versions of each other if they have the same law PX = PX ′ . W ′ and let Y. Many processes have versions which are right continuous and have left hand limits at every point.system and are therefore the same. E) ⊂ E T . The probability space (C(T. We have B ⊂ B ′ . W ) is called the Wiener space. Two processes X. where Q is the measure Q = φ∗ P where φ : Ω → D has the property that φ(ω) is the version of ω in D. E T ∩ D. We sketch the idea. E). Define Ω = S [0.t] . If E = Rd and T = [0. Let D be a measurable subset of ET and assume the process X has a version X such that almost all paths X(ω) are in D. Corollary 4. Q). Y ′ be the coordinate processes of B on D with respect to W and W ′ . Define the measure W = φ∗ PX and let Y be the coordinate process of B. Let S = R˙n denote the one point compactification of Rn . Let E = Rd and T = R+ . Uniqueness: assume we have two such measures W.1. they are also versions of each other.3. E). Since both Y and Y ′ are versions of X and ”being a version” is an equivalence relation. namely the Borel σalgebra B generated by its own topology. Definition. Let B ′ be the σalgebra E T ∩ C(T. since all closed balls {f ∈ C(T. the measure W on C(T. The space C(T. the process is called the canonical version of X. There exists a unique probability measure W on C(T. Continuous Stochastic Processes Definition. E) carries an other σalgebra. 
Q) is another version of X. E T ∩ C(T. The Wiener measure is therefore a Borel measure. Remark. E) for which the coordinate process Y is the Brownian motion B. E T ∩ D.214 Chapter 4. This means that W and W ′ coincide on a π. One usually does not work with the coordinate process but prefers to work with processes which have some continuity properties. ∞).
. It is by Tychonov a compact space with the product topology. . . ω(tn ))} . . x2 . there exists a unique measure µ on C(Ω) such that R L(φ) = φ(ω) dµ(ω). y. .1. 1 −a2 /2 e > a Z ∞ e−x 2 /2 dx > a 2 a e−a /2 . φ(ω) = F (ω(t1 ). x2 . L´evy’s modulus of continuity n be the set of functions from [0. Define also the Gauss kernel p(x. 4. xm )p(0. By the Riesz representation theorem. . . Z ∞ e−x 2 /2 dx < a Z ∞ e−x 2 /2 (x/a) dx = a 1 −a2 /2 e . . s2 ) (Rn )m · · · p(xm−1 . s1 )p(x1 . This is the Wiener measure on Ω. R)  ∃F : Rn → R. Define on Cf in (Ω) the functional (Lφ)(s1 .215 4. a . a2 + 1 Proof. it is a bounded linear functional on the dense linear subspace Cf in (Ω) ⊂ C(Ω). Define Cf in (Ω) = {φ ∈ C(Ω. it extends uniquely to a bounded linear functional on C(Ω). . . . a For the right inequality consider Z a ∞ 1 1 −b2 /2 e db < 2 b2 a Z ∞ e−x 2 /2 dx . It is nonnegative and L(1) = 1.4. t) = (4πt)−n/2 exp(−x−y2 /4t). sm ) dx1 · · · dxm with s1 = t1 and sk = tk − tk−1 for k ≥ 2. . Since L(φ) ≤ φ(ω)∞ . x1 .4 L´evy’s modulus of continuity We start with an elementary estimate Lemma 4. sm ) Z = F (x1 . t] to S which is also the set of paths in R . a Integrating by parts of the left hand side of this gives 1 −a2 /2 e − a Z a ∞ e−x 2 /2 dx < 1 a2 Z ∞ e−x 2 /2 dx . xm . By the Hahn Banach theorem.4.
4. n→∞ 1≤k≤2n (ii) Proof of the inequality ”≤ 1”. we compute. √ Take 0 < δ < 1. 1≤k≤2 Because Bk/2n − B(k−1)/2n are independent Gaussian random variables. Using the above lemma. If B is standard Brownian motion.2 (L´evy’s modulus of continuity). Define P[An ] = P[ max Bj2−n − Bi2−n /h(k2−n ) ≥ (1 + ǫ)] k=j−i∈K [ = P[ {Bj2−n − Bi2−n ] ≥ an.k } . Proof.k = h(k2−n )(1 + ǫ). Since n P[An ] < ∞. we get with some constants C which may vary .4. Continuous Stochastic Processes Theorem 4. We follow [86]: (i) Proof of the inequality ”≥ 1”. P[lim sup sup h(ǫ) ǫ→0 s−t≤ǫ p where h(ǫ) = 2ǫ log(1/ǫ).1) and 1 − s < e−s P[An ] ≤ ≤ ≤ (1 − 2 Z ∞ an 2 n 1 √ e−x /2 dx)2 2π 2 n an e−an /2 )2 a2n + 1 √ 2 2an −a2n /2 exp(−2n 2 e ) ≤ e−C exp(n(1−(1−δ) )/ n ) . Define an = (1 − δ)h(2−n ) = (1 − δ) n2 log 2. Consider P[An ] = P[ max n Bk2−n − B(k−1)2−n  ≤ an ] . k=j−i∈K where K = {0 < k ≤ 2nδ } and an. using the above lemma (4. then Bs − Bt  = 1] = 1 . Take again 0 < δ < 1 and pick ǫ > 0 such that (1 + ǫ)(1 − δ) > (1 + δ). an + 1 (1 − 2 P where C is a constant independent of n. we get by the first BorelCantelli that P[lim supn An ] = 0 so that P[ lim max Bk2−n − B(k−1)2n  ≥ h(2−n )] = 1 .216 Chapter 4.
A good source for stopping time results and stochastic process in general is [86]. .k /2 P[An ] ≤ a−1 n. A) is an increasing family (At )t≥0 of subσalgebras of A. . . We see that n P[An ] converges. In the last step was used that there are at most 2nδ points in K and for each of themPlog(k −1 2n ) > log(2n (1 − δ)). with t1 ≤ i2−n < j2−n ≤ t2 and 0 < k = j − i ≤ t2n < 2nδ . t2 = j2−n + 2−q1 + 2−q2 . Because ǫ > 0 was arbitrary. We get Bt1 (ω) − Bt2 (ω) ≤ ≤ ≤ Bt1 − Bi2−n (ω) +Bi2−n (ω) − Bj2−n (ω) +Bj2−n (ω) − Bt2  X 2 (1 + ǫ)h(2−p ) + (1 + ǫ)h(k2−n ) p>n (1 + 3ǫ + 2ǫ2 )h(t) .5 Stopping times Stopping times are useful for the construction of new processes.217 4.k e k∈K X log(k −1 2n )−1/2 e−(1+ǫ) ≤ C· ≤ C · 2−n(1−δ)(1+ǫ) ≤ 2 log(k−1 2n ) k∈K C·n 2 X (log(k −1 2n ))−1/2 ( since k −1 > 2−nδ ) k∈K −1/2 n(δ−(1−δ)(1+ǫ)2 ) 2 . where k = j − i ∈ K. A measurable space endowed with a filtration (At )t≥0 is called a filtered space. m>n Pick 0 ≤ t1 < t2 ≤ 1 such that t = t2 − t1 < 2−n(ω)(1−δ) . the proof is complete. 4. . A process X is called adapted to the filtration At . By BorelCantelli we get for almost every ω an integer n(ω) such that for n > n(ω) Bj2−n − Bi2−n  < (1 + ǫ) · h(k2−n ) . Increase possibly n(ω) so that for n > n(ω) X h(2−m ) < ǫ · h(2−(n+1)(1−δ) ) . if Xt is At measurable for all t. t2 : t1 = i2−n − 2−p1 − 2−p2 . in proofs of inequalities and convergence theorems as well as in the study of return time results.5. A filtration of a measurable space (Ω. Definition. Stopping times from line to line: X −a2n. . . Take next n > n(ω) such that 2−(n+1)(1−δ) ≤ t < 2−n(1−δ) and write the dyadic development of t1 .
where E is a metric space. Proof.1. Let A be a closed set in E. if T = s is constant. We give examples of stopping times. Heuristically. Definition. A stopping time relative to a filtration At is a map T : Ω → [0. ∀t} . X is then a left continuous adapted process. A. Definition. We always have At− ⊂ At ⊂ At+ and both inclusions can be strict. Then the so called entry time TA (ω) = inf{t ≥ 0  Xt (ω) ∈ A } is a stopping time relative to the filtration At = σ({Xs }s≤t ). Let d be the metric on E. If At = At− . Definition.s≤t which is in At = σ(Xs . x ≤ t). define AT = {A ∈ A∞  A ∩ {T ≤ t} ∈ At . ∞] such that {T ≤ t } ∈ At . If T is a stopping time. Let X be the coordinate process on C(R+ . s>t For t = 0 we can still define A0+ = also A∞ = σ(As . With a filtration we can associate two other filtration by setting for t > 0 \ At− = σ(As . the filtration At+ of any filtration is right continuous. Proposition 4.218 Chapter 4. A process X on (Ω. Continuous Stochastic Processes Definition. Note also that AT + = {A ∈ A∞  A ∩ {T < t} ∈ At . E). As an example. then At is left continuous. s ≥ 0). T s>0 As and define A0− = A0 .T ] (t) is adapted. Remark. d(Xs (ω). s < t). Define Remark. As an example. then T is a stopping time if and only if {T < t } ∈ At . A) = 0 } . At is the set of events. If At = At+ then the filtration At is called right continuous. At+ = As . P) defines a natural filtration At = σ(Xs  s ≤ t). Also T is a stopping time if and only if Xt = 1(0. It is a σalgebra. If At is right continuous. which may occur up to time t.5. then AT = As . Definition. We have {TA ≤ t} = { inf s∈Q. ∀t} . the minimal filtration of X for which X is adapted.
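The entry time T_A of the proposition can be illustrated on a discretized path. In this sketch a scaled random walk stands in for a Brownian path (an assumption made only for illustration), and the closed half-line A = [level, ∞) plays the role of the closed set:

```python
import random

random.seed(0)

def entry_time(path, level):
    # T_A: first index s with X_s in the closed set A = [level, infinity);
    # returns None if the path never enters A
    for s, x in enumerate(path):
        if x >= level:
            return s
    return None

# scaled random walk as a stand-in for a Brownian path on [0, 1]:
# n steps of standard deviation sqrt(1/n)
n = 10000
path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, (1.0 / n) ** 0.5))

T = entry_time(path, 0.5)
```

The defining property of the entry time is visible directly: every value of the path strictly before T lies outside A, and the path is in A at T (if T is finite).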
Given a closed ball U ∈ E. If A is open and Xs (ω) ∈ A. ω) 7→ Xs (ω) from ([0.s<t Xs ∈ A } ∈ A t . ω)  Y (s. E) is measurable.U (ω) < s < t. t]. Then the hitting time TA (ω) = inf{t > 0  Xt (ω) ∈ A } is a stopping time with respect to the filtration At+ . Proof. s + ǫ) for some ǫ > 0.5. For some processes.3. Proof. We have to show that Y −1 (U ) = {(s. it is not clear whether XT is measurable.2. For a process X. E). Xs ∈ U } . ω) 7→ Xs (ω) with Y = Y (s.U = 0 and inductively for k ≥ 1 the k’th hitting time (a stopping time) Hk. Definition. Remark. where E is a metric space.U (ω) = inf{s ∈ Q  Ek−1. the inverse holds: Lemma 4. Let At be a filtration on (Ω. A) and let T be a stopping time. we define E0. t]×At ) into (E. Therefore {TA < t} = { inf s∈Q. An adapted process with right or left continuous paths is progressively measurable. the space of right continuous functions.5. Let A be an open subset of E. t]) × At . We have met this definition already in the case of discrete time but in the present situation. the map (s. we know by the rightcontinuity of the paths that Xt (ω) ∈ A for every t ∈ [s. TA is a At+ stopping time if and only if {TA < t} ∈ At for all t. we define a new random variable XT on the set {T < ∞ } by XT (ω) = XT (ω) (ω) . B([0. ω) ∈ U } ∈ B([0. Stopping times Proposition 4. It turns out that this is true for many processes.5.219 4. Write X as the coordinate process D([0. t]×Ω. Assume right continuity (the argument is similar in the case of left continuity). ω). Let X be the coordinate process on D(R+ . A progressively measurable process is adapted. Denote the map (s. Given k = N. E). A process X is called progressively measurable with respect to a filtration At if for all t. Definition.
t] × At measurable by hypothesis. Definition. Proposition 4. then XT is AT measurable on the set {T < ∞}. The set {T < ∞ } is itself in AT . t] × Ω.5. Corollary 4.4. Proof. t]) × At ) is measurable and XT is the composition of this map with X which is B[0. B([0. t]. Given a stopping time T and a process X. Continuous Stochastic Processes as well as the k’th exit time (not necessarily a stopping time) Ek.U (ω) ≤ s ≤ Ek. These are countably many measurable maps from D([0. But the map S : ({T ≤ t}. t]) × At . ω) 7→ XT (ω) = Xψ(s. t].measurable on this set is equivalent with XT · 1{T ≤t} ∈ At for every t. Since also ψ : (s.U (ω) = inf{s ∈ QHk.5. Because t ∧ T is a stopping time. If X is progressively measurable and T is a stopping time. B[0. If At is a filtration then At∧T is a filtration since if T1 and T2 are stopping times. ω) 7→ Xs (ω) is measurable.220 Chapter 4. Remark. This means that the map ω 7→ (T (ω). we have from the previous proposition that X T is At∧T measurable. ω). E) to [0. If X is progressively measurable with respect to At and T is a stopping time.5. t]. To say that XT is AT . ω) from (Ω. Proof.U (ω) < s < t. ω)  Hk. Xs ∈ / U} . we define the stopped process (X T )t (ω) = XT ∧t (ω). we know also that the composition (s.U (ω)} k=1 which is in B([0. then (X T )t = Xt∧T is progressively measurable with respect to At∧T .ω) (ω) = φ(ψ(s. then T1 ∧ T2 is a stopping time. . We know by assumption that φ : (s. Then by the rightcontinuity Y −1 (U ) = ∞ [ {(s. t]) is measurable because T is a stopping time. ω) 7→ (s ∧ T )(ω) is measurable. At ∩ {T ≤ t}) → ([0. ω) is measurable. At ) to ([0.
Proof. x. Having the fundamental solution G. TδD > t] and Px is the Wiener measure of Bt starting at the point x. 0 where for x ∈ D. To determine G(x. a solution u ∈ C(D) such that ∆u = 0 inside D and lim x→y. where T is the hitting time of the boundary. of the particles described by the Brownian motion. The classical Dirichlet problem for a bounded Green domain D ∈ Rd with boundary δD is to find for a given function f ∈ C(δ(D)). we can solve the Poisson equation ∆u = v for a given function v by Z G(x. Given a stopping time T . x. where δ(x−y) is the Dirac measure at y ∈ D. Definition. y) · v(y) dy . Such a domain is then called a Green domain. y) dy = Px [Bt ∈ C. We can interpret that as follows. define the discretisation Tk = +∞ if T ≥ k and Tk = q2−k if (q − 1)2−k ≤ T < q 2−k . Every stopping time is the decreasing limit of a sequence of stopping times taking only finitely many values. y) = δ(x−y). y) = g(t. Z C g(t. Each Tk is a stopping time and Tk decreases to T . Stopping times Proposition 4. We give very briefly some examples without proofs.221 4.6. y). Let Bt be Brownian motion in Rd and TA the hitting time of a set A ⊂ Rd . but some hints to the literature. y) dt .x∈D for every y ∈ δD. u= D The Green function can be computed using Brownian motion as follows: Z ∞ G(x. Let D be a domain in Rd with boundary δ(D) such that the Green function G(x.5. Definition. G(x. y) exists in D. q < 2k k. Many concepts of classical potential theory can be expressed in an elegant form in a probabilistic language. y) is then the probability density. The Green function of a domain D is defined as the fundamental solution satisfying ∆G(x. consider the killed Brownian motion Bt starting at x.5. u(x) = f (y) .
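The dyadic discretisation of a stopping time in the proposition above, T_k = +∞ if T ≥ k and T_k = q 2^{−k} if (q − 1) 2^{−k} ≤ T < q 2^{−k}, can be written down directly. A sketch operating on a single stopping-time value T:

```python
import math

def discretize(T, k):
    # T_k = +infinity if T >= k, and otherwise T_k = q / 2^k where
    # (q - 1) / 2^k <= T < q / 2^k, i.e. q = floor(T * 2^k) + 1
    if T >= k:
        return math.inf
    q = math.floor(T * 2 ** k) + 1
    return q / 2 ** k

values = [discretize(0.3, k) for k in range(1, 9)]
```

Each T_k takes only finitely many values below k, satisfies T < T_k ≤ T + 2^{−k}, and the sequence decreases to T, exactly as the proposition asserts.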
Theorem 4. Continuous Stochastic Processes This problem can not be solved in general even for domains with piecewise smooth boundaries if d ≥ 3. In words. But the heat radiated from the thorn is proportional to its surface area.x∈D u(x) = f (y) for all nonsingular points y in the boundary δD. one has to modify the question and declares u is a solution of a modified Dirichlet problem.5. 47]). see [44. If the tip is sharp enough. The solution of the regularized Dirichlet problem can be expressed with Brownian motion Bt and the hitting time T of the boundary: u(x) = Ex [f (BT )] . no matter how close to the heater. This means lim inf x→y. if u satisfies ∆D u = 0 inside D and limx→y. Let D be the inside of a spherical chamber in which a thorn is punched in.x∈D u(x) < 1 = f (y). Definition. The temperature u inside D is a solution of the Dirichlet problem ∆D u = 0 satisfying the boundary condition u = f on the boundary δD. a person sitting in the chamber will be cold. the solution u(x) of the Dirichlet problem is the expected value of the boundary function f at the exit point BT of Brownian motion Bt starting at x. which means that almost every Brownian particle starting at y ∈ δD will return to δD after positive time. Figure.7 (Kakutani 1944). We have seen in the previous chapter that the discretized version of this result on a graph is quite easy to prove.222 Chapter 4. The boundary δD is held on constant temperature f . The following example is called Lebesgue thorn or Lebesgue spine has been suggested by Lebesgue in 1913. Irregularity of a point y can be defined analytically but it is equivalent with Py [TDc > 0] = 1. then compute f (BT ) and average this random variable over all paths ω. . Because of this problem. To solve the Dirichlet problem in a bounded domain with Brownian motion. (For more details. start the process at the point x and run it until it reaches the boundary BT . where f = 1 at the tip of the thorn y and zero except in a small neighborhood of y.
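Kakutani's formula u(x) = E_x[f(B_T)] suggests a Monte Carlo solver for the Dirichlet problem: run Brownian paths from x until they exit the domain and average the boundary data at the exit points. A sketch on the unit disk (step size, path count and the boundary data f(x, y) = x are illustrative choices; the harmonic extension of this f is u(x, y) = x, so the estimate at (0.3, 0) should be near 0.3):

```python
import math
import random

random.seed(1)

def dirichlet_mc(x0, f, radius=1.0, dt=1e-3, n_paths=2000):
    # u(x0) ~ average of f at the exit point B_T of Brownian paths from the disk
    total = 0.0
    s = math.sqrt(dt)
    for _ in range(n_paths):
        px, py = x0
        while px * px + py * py < radius * radius:
            px += random.gauss(0.0, s)
            py += random.gauss(0.0, s)
        r = math.hypot(px, py)  # project the (slightly overshot) exit point
        total += f(px * radius / r, py * radius / r)
    return total / n_paths

u = dirichlet_mc((0.3, 0.0), lambda a, b: a)
```

The discretization of the path and the projection of the overshot exit point introduce a small bias of order sqrt(dt); the sampling error shrinks like the inverse square root of the path count.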
Since by the ”extracting knowledge” property E[Bt Bs  As ] = Bs · E[Bt  As ] = 0 . Then Bt . One can compute the heat flow e−t∆ u by the following formula (e−t∆ u)(x) = Ex [u(Bt ). Let Bt be standard Brownian motion. If additionally Xt ∈ Lp for all t. Continuous time martingales Remark.6 Continuous time martingales Definition. it is called a supermartingale if −X is a submartingale and a martingale. Proof. if it is both a super and submartingale. Let K be a compact subset of a Green domain D. We give a definition of the equilibrium potential later. Bt2 − t 2 and eαBt −α t/2 are martingales. Therefore E[Bt  As ] − Bs = E[Bt − Bs As ] = E[Bt − Bs ] = 0 . Given a filtration At of the probability space (Ω.223 4. if one is grounding the conducting boundary and charging the conducting set B with a unit amount of charge.6. we speak of Lp super or submartingales. Remark.1. The hitting probability p(x) = Px [TK < TδD ] is the equilibrium potential of K relative to D. but the process of reflected Brownian motion.6. if E[Xt As ] ≥ Xs . t < T ] . Ikeda has discovered that there exists also a probabilistic method for solving the classical von Neumann problem in the case d = 2. Remark. A. . Given the Dirichlet Laplacian ∆ of a bounded domain D. 4. the equilibrium potential is obtained by measuring the electrostatic potential. Physically. where T is the hitting time of δD for Brownian motion Bt starting at x. For more information about this. Proposition 4. Brownian motion gives examples with continuous time. 81]. The process for the von Neumann problem is not the process of killed Brownian motion. We have seen martingales for discrete time already in the last chapter. Bt − Bs is independent of Bs . A realvalued process Xt ∈ L1 which is At adapted is called a submartingale. P). one can consult [44. we get E[Bt2 − t  As ] − (Bs2 − s) = = E[Bt2 − Bs2  As ] − (t − s) E[(Bt − Bs )2  As ] − (t − s) = 0 .
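That B_t, B_t² − t and e^{αB_t − α²t/2} all have constant expectation can be spot-checked by sampling B_t ~ N(0, t) directly (sample size and the parameters t, α are illustrative):

```python
import math
import random

random.seed(2)
t, alpha, n = 2.0, 0.5, 200000
m1 = m2 = m3 = 0.0
for _ in range(n):
    b = random.gauss(0.0, math.sqrt(t))                # B_t ~ N(0, t)
    m1 += b                                            # E[B_t] = 0
    m2 += b * b - t                                    # E[B_t^2 - t] = 0
    m3 += math.exp(alpha * b - alpha * alpha * t / 2)  # E[e^{aB_t - a^2 t/2}] = 1
m1, m2, m3 = m1 / n, m2 / n, m3 / n
```

This only checks the one-dimensional marginals, of course; the martingale property itself concerns the conditional expectations established in the proof above.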
Since Brownian motion begins afresh at any time s, we have

E[e^{α(B_t − B_s)} | A_s] = E[e^{αB_{t−s}}] = e^{α²(t−s)/2},

from which follows

E[e^{αB_t} | A_s] e^{−α²t/2} = e^{αB_s} e^{−α²s/2}.

As in the discrete case, we remark:

Proposition 4.6.2. If X_t is an L^p martingale, then |X_t|^p is a submartingale for p ≥ 1.

Proof. The conditional Jensen inequality gives

E[|X_t|^p | A_s] ≥ |E[X_t | A_s]|^p = |X_s|^p.

Example. Let X_n be a sequence of IID exponentially distributed random variables with probability density f_X(x) = c e^{−cx}. Let S_n = Σ_{k=1}^n X_k. The Poisson process N_t with time T = R^+ = [0, ∞) is defined as

N_t = Σ_{k=1}^∞ 1_{S_k ≤ t}.

This process takes values in N and measures how many jumps are needed to reach time t. It could model, for example, the number N_t of hits onto a website. It is a right continuous process; we know therefore that it is progressively measurable and that for each stopping time T, the stopped process N^T is also progressively measurable. Since E[N_t] = ct, it follows that N_t − ct is a martingale with respect to the filtration A_t = σ(N_s, s ≤ t). It is an example of a martingale which is not continuous. See [50] or the last chapter for more information about Poisson processes.

Figure. The Poisson point process on the line: jump times S_1 < S_2 < · · · separated by the interarrival times X_k = S_k − S_{k−1}.
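The construction of N_t from exponential interarrival times translates directly into code. A sketch checking E[N_t] = ct (the rate c, horizon t and run count are illustrative choices):

```python
import random

random.seed(3)

def poisson_count(c, t):
    # N_t: number of partial sums S_k = X_1 + ... + X_k with S_k <= t,
    # where the X_k are IID exponential with density c * e^(-c x)
    s, n = 0.0, 0
    while True:
        s += random.expovariate(c)
        if s > t:
            return n
        n += 1

c, t, runs = 2.0, 5.0, 20000
mean_N = sum(poisson_count(c, t) for _ in range(runs)) / runs
```

The empirical mean of N_t over many runs should be close to ct = 10, consistent with N_t − ct being a (mean-zero at time 0) martingale.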
4. Remark. k! Proof.6. Nt−0 = n − 1 } and show that Xn = Sn − Sn−1 are independent random variables with exponential distribution. { sup Xt ≥ ǫ}] ≤ E[Xb ] . The process is defined as follows: take Xt as above and define ∞ X Zk 1Sk ≤t . Let X be a nonnegative right continuous submartingale with time T = [a. (Interval theorem) The Poisson process has independent increments ∞ X Nt − Ns = 1s<Sn ≤t . Nt is Poisson distributed with parameter tc: P[Nt = k] = (tc)k −tc e .225 4.7. Let Pn be the analog of the Wiener measure on right continuous paths on the lattice and denote with En the expectation. n=1 Moreover. The FeynmanKac formula for discrete Schr¨odinger operators H = H0 + V is (e−itH u)(n) = e2dt En [u(Xt )iNt e−i Rt 0 V (Xs ) ds ]. Define then Sn (ω) = {t  Nt = n. a≤t≤b a≤t≤b .3. Poisson processes on the lattice Zd are also called Brownian motion on the lattice and can be used to describe FeynmanKac formulas for discrete Schr¨ odinger operators. they hold also for rightcontinuous submartingales. This means that a particle stays at a lattice site for an exponential time and jumps then to one of the neighbors of n with equal probability. For any ǫ > 0 ǫ · P[ sup Xt ≥ ǫ] ≤ E[Xb . b]. Doob inequalities Proposition 4. By a limiting argument.7.1 (Doob’s submartingale inequality).7 Doob inequalities We have already established inequalities of Doob for discrete times T = N. Yt = k=1 where Zn are IID random variables taking values in {m ∈ Zd m = 1}. The proof is done by starting with a Poisson distributed process Nt . Theorem 4.
x). we get the claim for T = [a.3 (Exponential inequality). t∈Dn t∈Dn since E[Xt ] is nondecreasing in t. Theorem 4. Continuous Stochastic Processes Proof.7. Since exp(αSt − α2 t 2 is a mar α2 t α2 t α2 s ) ≤ exp(sup Bs − ) ≤ sup exp(Bs − ) = sup Ms . It is nonnegative. b]. One often applies this inequality to the nonnegative submartingale X if X is a martingale. 2 2 2 s≤t s≤t s≤t .226 Chapter 4. t∈D t∈D Since D is dense and X is right continuous we can replace D by T . The following inequality measures. Then X ∗ = supt∈T Xt is in Lp and satisfies X ∗ p ≤ q · sup Xt p . x ≤ a · t}. Theorem 4.6. how big is the probability that onedimensional Brownian motion will leave the cone {(t.7. { sup Xt ≥ ǫ}] ≤ E[Xb ] .2 (Doob’s Lp inequality). Since X is right continuous. Fix p > 1 and q satisfying p−1 + q −1 = 1. St = sup0≤s≤t Bs satisfies for any a>0 2 P[St ≥ a · t] ≤ e−a t/2 . We know now that for all n ǫ · P[ sup Xt ≥ ǫ] ≤ E[Xb . We had  sup Xt  ≤ q · sup Xt p . Take a countable subset S D of T and choose an increasing sequence Dn of finite sets such that n Dn = D.1) that Mt = eαBt − tingale. We have seen in proposition (4. t∈T Proof. Proof. Take a countable subset S D of T and choose an increasing sequence Dn of finite sets such that n Dn = D. Given a nonnegative rightcontinuous submartingale X with time T = [a. t∈Dn t∈Dn Going to the limit gives  sup Xt  ≤ q · sup Xt p . b] which is bounded in Lp . Going to the limit n → ∞ gives the claim with T = D.
For a.1] αt ) ≥ β] 2 = P[ sup (eαBs − s∈[0. 2 The result follows from E[Bt ] = E[B0 ] = 1 and inf α>0 exp(−αat + 2 exp(− a2 t ).8.1] ≤ e −βα sup E[Ms ] = e−βα s∈[0.1 (Law of iterated logarithm for Brownian motion).1] s∈[0. 4.227 4. b > 0. P[ sup (Bs − s∈[0. P[ sup (Bs − αs ) ≥ β] ≤ e−αβ .7.7.8 Khintchine’s law of the iterated logarithm Khinchine’s law of the iterated logarithm for Brownian motion gives a precise statement about how onedimensional Brownian motion oscillates in a neighborhood of the origin.8. Khintchine’s law of the iterated logarithm we get with Doob’s submartingale inequality (4. Theorem 4.1] since E[Ms ] = 1 for all s. As in the law of the iterated logarithm. P[lim sup t→0 Bt Bt = 1] = 1. P[lim inf = −1] = 1 t→0 Λ(t) Λ(t) .1] Proof. define p Λ(t) = 2t log  log t . α2 t 2 ) = An other corollary of Doob’s maximal inequality will also be useful.4.1) P[St ≥ at] ≤ ≤ P[sup Ms ≥ eαat− α2 t 2 ] s≤t exp(−αat + α2 t )E[Mt ] . 2 αs ) ≥ β] 2 ≤ P[ sup (Bs − s∈[0.1] α2 t 2 ) ≥ eβα ] = P[ sup Ms ≥ eβα ] s∈[0. Corollary 4.
1) and (4. we get the claim.228 Chapter 4. θn−1 ). βn = Λ(θn ) . Therefore n P[An ] = ∞ and by the second BorelCantelli lemma. we know from (i) that −Bθn+1 < 2Λ(θn+1 ) (4. 1) and define αn = (1 + δ)θ−n Λ(θn ). √ Bθn ≥ (1 − θ)Λ(θn ) + Bθn+1 (4. 2θ 2 In the limit θ → 1 and δ → 0.2) and Λ(θn+1 ) ≤ 2 θΛ(θn ) for large enough n. δ ∈ (0.2) for sufficiently √ large n. Using these two inequalities (4. P[sup(Bs − 2 s≤1 The BorelCantelli lemma assures P[lim inf sup(Bs − n→∞ s≤1 αn s ) < βn ] = 1 2 which means that for almost every ω. we have for sufficiently large n and s ∈ (θn . Bs ≤ 1 almost everywhere: (i) lim sups→0 Λ(s) Take θ. we get αn s ) ≥ βn ] ≤ e−αn βn = Kn(−1+δ) . For θ ∈ (0.7. the sets An = {Bθn − Bθn+1 ≥ (1 − √ θ)Λ(θn )} are independent and since Bθn − Bθn+1 is Gaussian we have Z ∞ 2 2 a du > 2 e−u /2 √ P[An ] = e−a /2 a + 1 2π a √ with a = P (1 − θ)Λ(θn ) ≤ Kn−α with some constants K and α < 1.4). 2 2 2θ 2 Since Λ is increasing on a sufficiently small interval [0.1) for infinitely many n. a). The second statement follows from the first by changing Bt to −Bt . we get √ √ √ Bθn > (1 − θ)Λ(θn ) − 4Λ(θn+1 ) > Λ(θn )(1 − θ − 4 θ) . θn−1 ] Bs (ω) ≤ ( (1 + δ) 1 + )Λ(s) . From corollary (4. Bs (ii) lim sups→0 Λ(s) ≥ 1 almost everywhere. 2 We have αn βn = log log(θn )(1 + δ) = log(n) log(θ). Bs (ω) ≤ αn s θn−1 (1 + δ) 1 + βn ≤ αn + βn = ( + )Λ(θn ) . there is n0 (ω) such that for n > n0 (ω) and s ∈ [0. 1). Continuous Stochastic Processes Proof. Since −B is also Brownian motion.
Then Bt · e is a 1dimensional Brownian motion since Bt was defined as the product of d orthogonal Brownian motions. we have with Proof. Since B s = 1/t ˜s B1/s B = lim sup s Λ(s) s→0 s→0 Λ(s) Bt Bt = lim sup . one gets the law of iterated logarithm near infinity: Corollary 4. P[lim sup t→∞ Bt Bt = 1] = 1.8.2. For ddimensional Brownian motion. = lim sup t→∞ Λ(t) t→∞ tΛ(1/t) 1 = lim sup The other statement follows again by reflection. This statement shows also that Bt changes sign infinitely often for t → 0 and that Brownian motion is recurrent in one dimension. P[lim inf = −1] = 1 . n Λ(t) n→∞ Λ(θ ) The claim follows for θ → 0.8. Let e be a unit vector in Rd .3.4. Remark. This is true for all unit vectors and we can even get it simultaneously for a dense set {en }n∈N . we have P[lim sup t→0 Bt · e = 1] = 1 . namely that the set {Bt = 0 } is a nonempty perfect set with Hausdorff dimension 1/2 which is in particularly uncountable. From the previous theorem. Khintchine’s law of the iterated logarithm 229 for infinitely many n and therefore lim inf t→0 √ Bθ n Bt ≥ lim sup >1−5 θ . Corollary 4. By time inversion. we know that the lim sup is ≥ 1.8. P[lim inf = −1] = 1 t→0 Λ(t) Λ(t) Proof. t→∞ Λ(t) Λ(t) ˜t = tB1/t (with B ˜0 = 0) is a Brownian motion. One could show more. one has P[lim sup t→0 Bt Bt = 1] = 1. Λ(t) Since Bt · e ≤ Bt .
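The time-inversion step in the proof, that B~_s = s B_{1/s} is again Brownian motion, can at least be checked in distribution at a fixed time: Var(s B_{1/s}) = s² · (1/s) = s. A sampling sketch (the time s and sample size are illustrative):

```python
import random

random.seed(5)
s, n = 4.0, 200000
# if B is Brownian motion, time inversion gives B~_s = s * B_{1/s};
# sample B_{1/s} ~ N(0, 1/s) and check that Var(B~_s) comes out as s
samples = [s * random.gauss(0.0, (1.0 / s) ** 0.5) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
```

This is only the fixed-time marginal; the full statement also needs the covariance structure and continuity at 0, which the proof handles.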
Bt+t2 . This two statements are equivalent to the statement that for every C ∈ AT + E[f (B˜t ) · 1A∩C ] = E[f (Bt )] · P[A ∩ C] . Then.n) (T (ω)) k=1 k 2n which is again a stopping time. ∀t } .1 (DynkinHunt). 2n ). . Theorem 4. {T < ∞} and B Proof. It follows that in d dimensions. By reflection symmetry. we have lim sup = 1. Therefore. 4. . Let A be the set {T < ∞}. The next theorem tells that Brownian motion starts afresh at stopping times. . . then B ˜t is independent of AT + when conditioned to {T < ∞}. Define also: AT + = {A ∈ A∞  A ∩ {T < t } ∈ At . Let T be a stopping time for Brownian ˜t = Bt+T − BT is Brownian motion when conditioned to motion. n) the interval [ k−1 2n .9 The theorem of DynkinHunt k Definition.9. Bt+tn ) with g ∈ C(Rn ) E[f (B˜t )1A ] = E[f (Bt )] · P[A] and that for every set C ∈ AT + E[f (B˜t )1A∩C ] · P[A] = E[f (B˜t )1A ] · P[A ∩ C] . If T is a stopping time. Remark. The theorem says that for every function f (Bt ) = g(Bt+t1 . (n) then T denotes its discretisation T (n) (ω) = ∞ X 1I(k. Assume the lim sup is 1 + ǫ > 1. . Continuous Stochastic Processes of unit vectors in the unit sphere.230 Chapter 4. Denote by I(k. there exists en such that P[lim sup t→0 ǫ Bt · e n ≥1+ ]=1 Λ(t) 2 in contradiction to the law of iterated logarithm for Brownian motion. lim inf = −1. the set of limit points of Bt /Λ(t) for t → 0 is the entire unit ball {v ≤ 1}.
k ∩C ] E[f (B0 )] · P[An. Remark. no conditioning is necessary and Bt+T − BT is again Brownian motion. we know that A0+ is independent to itself. By DynkinHunt’s result. For every set A ∈ A0+ we have P[A] = 0 or P[A] = 1. This definition turns out to be equivalent to the classical definition in potential theory: a point y ∈ δD is irregular if and only if there exists a barrier function f : N → R in a neighborhood N of y. Selfintersection of Brownian motion (n) (n) Let T be the discretisation of the stopping time T and A < ∞} Sn∞= {T as well as An. We say it is regular. ˜ = Bt+T − Proof. 4. Given a point y ∈ δD. A barrier function is defined as a negative subharmonic function on int(N ∩ D) satisfying f (x) → 0 for x → y within D. Remark. This zeroone law can be used to define regular points on the boundary of a domain D ∈ Rd . if Py [TδD > 0] = 0 and irregular Py [TδD > 0] = 1. If T < ∞ almost everywhere. Using A = {T < ∞}.9. P[ k=1 An.k = {T (n) = k/2n }.k ∩ C] E[f (B0 )] lim P[ n→∞ ∞ [ k=1 An. we compute E[f (B˜t )1A∩C ] = = = = lim E[f (BT (n) )1An ∩C ] n→∞ lim n→∞ lim n→∞ ∞ X k=0 ∞ X k=0 E[f (Bk/2n )1An.10 Selfintersection of Brownian motion Our aim is to prove the following theorem: . Since every C ∈ A0+ is {Bs . we know that B = B is independent of B T + = A0+ . s > 0} measurable.231 4.k ∩ C] = E[f (B0 )1A∩C ] = = E[f (B0 )] · P[A ∩ C] E[f (Bt )] · P[A ∩ C] .2 (Blumental’s zeroone law).10.k ∩C] → P[A ∩ C] for n → ∞. Now B ˜ Bt = B. Theorem 4. Take the stopping time T which is identically 0.
232 Chapter 4. Proof. then y + Bs ∈ / K for t ≤ Tδ . By the law of iterated logarithm. T ≤ s < ∞] is a harmonic function on Rd \ K. there are no triple points. TK ≤ s] = P[cy + Bs/c2 ∈ cK. Proof. By DynkinHunt. Dvoretsky and Erd¨ os have shown that for d > 3. Proposition 4. K) = P[y + Bs ∈ K.10.10. TcK ≤ s/c2 ] = P[cy + Bs˜. s ≥ Tδ ] ˜ ∈ K] = P[y + BTδ + B Z h(y + x) dµ(x) . Then h(K. we know that B δ If δ is small enough. cK) . Let Tδ be the hitting time of Sδ = {x − y = δ}. We have therefore h(y) = P[y + Bs ∈ K. Proposition 4.10. The random variable BTδ ∈ Sδ has a uniform distribution on Sδ because Brownian motion is rotational symmetric. ˜t = Bt+T − Bt is again Brownian motion. It is known that for d ≤ 2. T ≤ s] = P[cy + Bs/c2 ∈ cK. Brownian motion has infinitely many self intersections with probability 1.3. This equality for small enough δ is the definition of harmonicity. = Sδ where µ is the normalized Lebesgue measure on Sδ . there are infinitely many n−fold points and for d ≥ 3. The hitting probability of K can only be a function f (r/R) of r/R: h(y. there are no selfintersections with probability 1. . Let K be a countable union of closed balls. (i) We show the claim first for one ball K = Br (z) and let R = z−y. For d ≤ 3. The hitting probability h(y) = P[y + Bs ∈ K. Kakutani. y) → 1 for y → K.1 (Self intersections of random walk). TcK ≤ s˜] = h(cy. Let K be a compact subset of Rd and T the hitting time of K with respect to Brownian motion starting at y. we have Tδ < ∞ almost everywhere. Remark.2. Continuous Stochastic Processes Theorem 4. By Brownian scaling Bt ∼ c · Bt/c2 .
R3 R3 Given a compact set K ⊂ R3 . The set of all probability measures on a compact subset E of Rd is known to be compact. there R exists an equilibriumRmeasure µ on K and the equilibrium potential x − y−(d−2) dµ(y) rsp. ). we can fix y = y0 = (1. f dµn → f dµ. The capacity is for d = 2 defined as exp(− inf µ I(µ)) and for d ≥ 3 as (inf µ I(µ))−(d−2) . . K0 ) = 1 n→∞ n→∞ by (i). 82]. . K) ≥ lim h(yn .233 4. Remark.10. The next proposition is part of Frostman’s fundamental theorem of potential theory. By translation invariance. α→∞ x→1 E[inf (Bs )1 < −1] = 1 = s≥0 [ Kα ) because of the law of iterated logarithm. . . . In the case d = 2. 0. . Kα ) = h(y0 . one takes x − y−(d−2) . A measure on K minimizing the energy is called an equilibrium measure. We have h(y0 . Let µ be a probability measure on R3 . log(x − y−1 ) dµ(y) takes the value C(K)−1 on the support K ∗ of µ. Kα ) = f (α/(1 + α)) and take therefore the limit α → ∞ lim f (x) = lim h(y0 . which is a ball of radius α around (−α. . Selfintersection of Brownian motion We have to show therefore that f (x) → 1 as x → 1. . we refer to [40.10. one replaces x − y−1 by log x − y−1 . 0. For every compact set K ⊂ Rd . Then y0 ∈ K0 for some ball K0 . where M (K) is the set of probability measures on K. the capacity of K is defined as ( inf µ∈M(K) I(µ))−1 . Proposition 4. For detailed proofs.4. if for all continuous functions f . Definition. In the case d ≥ 3. This definitions can be done in any dimension. lim inf h(yn . Definition. Define the potential theoretical energy of µ as Z Z I(µ) = x − y−1 dµ(x) dµ(y) . We say a Rmeasure µnRon Rd converges weakly to µ. 0) and change Kα . (ii) Given yn → y0 ∈ K.
(iii) (Value of capacity) If the potential φ(x) belonging to µ is constant on K. For S a general compact set K.2) and proposition (4. Both h(y.10. then it must take the value C(K)−1 since Z φ(x) dµ(x) = I(µ) . let {yn } be a dense set in K and let Kǫ = n Bǫ (yn ).5. Corollary 4. They must therefore be equal.234 Chapter 4. both functions h and φµ · C(K) are harmonic. then C(Gn ) → C(E).3). Kǫ ) → h(y. Then µn has an accumulation point µ and I(µ) ≤ inf µ∈M(K) I(µ). By constructing a new measure on K ∗ one shows then that one can strictly decrease the energy. Let µ be the equilibrium distribution on K. zero at ∞ and equal to 1 on δ(K). The statement C(Kǫ ) → C(K) follows from the upper semicontinuity of the capacity: if Gn is a sequence of open sets with ∩Gn = E. This is physically evident if we think of φ as the potential of a charge distribution µ on the set K. n→∞ (ii) (Existence of equilibrium measure) The existence of an equilibrium measure µ follows from the compactness of the set of probability measures on K and the lower semicontinuity of the energy since a lower semicontinuous function takes a minimum on a compact space. Assume first K is a countable union of balls. . (iv) (Constancy of capacity) Assume the potential is not constant C(K)−1 on K ∗ .10. then lim inf I(µn ) ≥ I(µ) . K) and inf x∈Kǫ x − y−1 → inf x∈K x − y−1 are clear. Proof. Take a sequence µn such that I(µn ) → inf µ∈M(K) I(µ) . According to proposition (4. K) ≥ C(K) · inf x∈K x − y−1 . (i) (Lower semicontinuity of energy) If µn converges to µ. The upper semicontinuity of the capacity follows from the lower semicontinuity of the energy. K) = φµ · C(K) and therefore h(y. Then h(y. Continuous Stochastic Processes Proof.10. One can pass to the limit ǫ → 0.
We have only to show that in the case d = 3. S (i) α = P[ t∈[0. .235 4. nT + 1]. We know that it has positive capacity almost everywhere and that therefore h(Bs . K) > 0 almost everywhere. P They are independent and by the strong law of large numbers n Xn = ∞ almost everywhere. (b − a) Then I(µ) = Z Z x − y −1 dµ(x)dµ(y) = Z a b Z a b (b − a)−1 Bs − Bt −1 dsdt .10.1] Bt . We have to find a probability measure µ(ω) on BI (ω) such that its energy I(µ(ω)) is finite almost everywhere. (n + 1)T ] } . b]. Proof. Now we prove the theorem Proof. S (ii) αT = P[ t∈[0. Proof. (iii) Proof of the claim.10. the result in smaller dimensions follow. To see the claim we have to show that this is finite almost everywhere. Because Brownian motion projected to the plane is two dimensional Brownian and to the line is one dimensional Brownian motion.2≤T Bt = Bs ] > 0 for some T > 0.1]. But h(Bs . we integrate over Ω which is by Fubini Z bZ b E[I(µ)] = (b − a)−1 E[Bs − Bt −1 ] dsdt a a √ which is finite since Bs − Bt has the same distribution as s − tB1 by R 2 Brownian scaling and since E[B1 −1 ] = x−1 e−x /2 dx < ∞ in dimenRbRb√ sion d ≥ 2 and a a s − t ds dt < ∞. Let K be the set t∈[0. Define the random variables Xn = 1Cn with Cn = {ω  Bt = Bs . s ∈ [nT + 2. 0 ≤ s ≤ 1.1]. Define such a measure by dµ(A) =  {s ∈ [a. for some t ∈ [nT. the set BJ (ω) = {Bt (ω)  t ∈ [a. b]  Bs ∈ A} . K) = α since Bs+2 − Bs is Brownian motion independent of Bs .s≥2 Bt = Bs ] > 0. S Proof. Clear since αT → α for T → ∞. Selfintersection of Brownian motion Proposition 4. For any interval J = [a. b]} has positive capacity for almost all ω. Assume. the dimension d = 3.6.
B and independent of AT .11.1. .7. Again. .2≤T in the proof of the theorem. Kr ) = (r/y)d−2 . also R(x)Bt is Brownian motion. we have only to treat the three dimensional case. the claim follows. Then B ˜t = Bt+T − BT is Brownian motion Proof.10.236 Chapter 4. We have thus selfintersections of the random walk in any interval [0.11.2. P[Bt = Bs  t ∈ [0. b]. Since RT is AT ˜t is independent of AT . 0. By scaling. if R(x) is a random rotation on Rd independent of Bt . 4. We have for y ∈ / Kr h(y. Continuous Stochastic Processes Corollary 4. By the DynkinHunt theorem.11 Recurrence of Brownian motion We show in this section that like its discrete brother. the random walk. Let Kr be the ball of radius r centered at 0 ∈ Rd with d ≥ 3. it follows that if B is Brownian motion. Let T be a finite stopping time and RT (ω) be a rotation in Rd which turns BT (ω) onto the first coordinate axis RT (ω)BT (ω) = (BT (ω). Proof. ˜t = RT (Bt+T − BT ) is again Brownian motion. β]. By checking the definitions of Brownian motion. Brownian motion is transient in dimensions d ≥ 3 and recurrent in dimensions d ≤ 2. measurable and B Lemma 4. T β] ] is independent of β. Any point Bs (ω) is an accumulation point of selfcrossings of {Bt (ω)}t≥0 .1]. b] and by translation in any interval [a. s ∈ [2β. Lemma 4. . . Let T > 0 be such that [ Bt = Bs ] > 0 αT = P[ t∈[0. 0) .
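The hitting probability h(y, K_r) = (r/|y|)^{d−2} of the lemma can be estimated by simulation in d = 3, where it equals r/|y|. A Monte Carlo sketch (the outer cutoff radius, step size and path count are illustrative; the finite cutoff biases the estimate slightly downward from the exact value 1/2 for |y| = 2, r = 1):

```python
import math
import random

random.seed(6)

def hits_ball(y0, r, R_far, dt):
    # one 3d Brownian path from y0; True if it enters {|x| < r}
    # before passing the artificial cutoff {|x| > R_far}
    x = list(y0)
    s = math.sqrt(dt)
    while True:
        for i in range(3):
            x[i] += random.gauss(0.0, s)
        n2 = x[0] ** 2 + x[1] ** 2 + x[2] ** 2
        if n2 < r * r:
            return True
        if n2 > R_far * R_far:
            return False

p = sum(hits_ball((2.0, 0.0, 0.0), 1.0, 10.0, 0.05) for _ in range(600)) / 600
```

With the cutoff at radius 10 the exact annulus value is (1/2 − 1/10)/(1 − 1/10) ≈ 0.44, so the estimate should land well below 1 but clearly above 0: the ball is hit with positive probability, but not almost surely, in d = 3.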
Proof. Both h(y, K_r) and (r/|y|)^{d-2} are harmonic functions which are 1 on δK_r and zero at infinity. They are the same.

Theorem 4.11.3 (Escape of Brownian motion in three dimensions). For d ≥ 3, we have lim_{t→∞} |B_t| = ∞ almost surely.

Proof. Define a sequence of stopping times T_n by

   T_n = inf{ s > 0 | |B_s| = 2^n } ,

which are finite almost everywhere because of the law of the iterated logarithm. Clearly |B_{T_n}| = 2^n. We know from lemma (4.11.1) that

   B̃_t = R_{T_n} (B_{t+T_n} − B_{T_n})

is a copy of Brownian motion. We have B_s ∈ K_r(0) = { |x| < r } for some s > T_n if and only if B̃_t ∈ (−2^n, 0, ..., 0) + K_r(0) for some t > 0. Therefore, using the previous lemma,

   P[ B_s ∈ K_r(0) for some s > T_n ] = P[ B̃_t ∈ (−2^n, 0, ..., 0) + K_r(0) for some t > 0 ] = (r/2^n)^{d-2} ,

which implies in the case r 2^{-n} < 1, by the Borel-Cantelli lemma, that for almost all ω one has |B_s(ω)| ≥ r for s > T_n eventually. Since T_n is finite almost everywhere, we get lim inf_{s→∞} |B_s| ≥ r. Since r is arbitrary, the claim follows.

Theorem 4.11.4. In dimension d = 2, almost every path of Brownian motion hits a ball K_r with r > 0: one has h(y, K) = 1.

Proof. We know that h(y) = h(y, K) is harmonic and equal to 1 on δK. It is also rotationally invariant, and therefore h(y) = a + b log |y|. Since h ∈ [0, 1], we must have b = 0 and h(y) = a, so a = 1. In dimension d = 1, the claim follows readily from the law of the iterated logarithm.

Theorem 4.11.5 (Recurrence of Brownian motion in 1 or 2 dimensions). Let d ≤ 2 and let S be an open nonempty set in R^d. Then the Lebesgue measure of { t | B_t ∈ S } is infinite.
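The escape theorem can be watched numerically. The following sketch (assuming numpy; the time step, horizon and the radius 1/2 are arbitrary choices) simulates one three-dimensional Brownian path and checks that the late part of the path essentially never returns near the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_norms(d, n_steps=20000, dt=1e-3):
    """Simulate one path of d-dimensional Brownian motion with time step dt
    and return the distances |B_t| from the origin along the path."""
    steps = rng.normal(scale=np.sqrt(dt), size=(n_steps, d))
    return np.linalg.norm(np.cumsum(steps, axis=0), axis=1)

# In d = 3 the path is transient: the second half of the path
# spends a negligible fraction of time inside the ball of radius 1/2.
norms = brownian_norms(3)
late = norms[len(norms) // 2:]
print(np.mean(late < 0.5))
```

For d = 1 or d = 2, the analogous fraction does not tend to zero, in line with the recurrence theorem.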
Proof. It suffices to take S = K_r(x_0), a ball of radius r around x_0; by translation we can assume x_0 = 0 and by Brownian scaling that r = 1. Since by the previous lemma Brownian motion hits every ball almost surely, we can define inductively a sequence of finite hitting and leaving times T_n, S_n of the annulus { 1/2 < |x| < 2 }:

   T_1 = inf{ t | |B_t| = 2 } ,
   S_n = inf{ t > T_n | |B_t| = 1/2 } ,
   T_n = inf{ t > S_{n-1} | |B_t| = 2 } .

The Dynkin-Hunt theorem shows that S_n − T_n and T_n − S_{n-1} are two mutually independent families of IID random variables. The Lebesgue measures Y_n = |I_n| of the time intervals

   I_n = { t | |B_t| ≤ 1, T_n ≤ t ≤ T_{n+1} }

are therefore independent random variables, and X_n = min(1, Y_n) are independent bounded IID random variables. By the law of large numbers, Σ_n X_n = ∞ almost surely, so that

   |{ t ∈ [0, ∞) | |B_t| ≤ 1 }| ≥ Σ_n Y_n ≥ Σ_n X_n = ∞ ,

and the claim follows.

Remark. Brownian motion in R^d can be defined as a diffusion on R^d with generator ∆/2, where ∆ is the Laplacian on R^d. A generalization of Brownian motion to manifolds can be done using diffusion processes with respect to the Laplace-Beltrami operator. In this way one can define Brownian motion on the torus or on the sphere, for example. See [59].

4.12 Feynman-Kac formula

In quantum mechanics, the Schrödinger equation iℏ u̇ = Hu defines the evolution of the wave function u(t) = e^{-itH/ℏ} u(0) in a Hilbert space H. The operator H is the Hamiltonian of the system. Often it is a Schrödinger operator H = H_0 + V, where H_0 = −∆/2 is the Hamiltonian of a free particle and V : R^d → R is the potential. The free operator H_0 is already not defined on the whole Hilbert space H = L²(R^d), and one restricts H to a vector space D(H), called the domain, which contains the dense set C_0^∞(R^d) of smooth functions of compact support.
Definition. A linear operator A : D(A) ⊂ H → H is called symmetric if (Au, v) = (u, Av) for all u, v ∈ D(A). Define

   D(A*) = { u ∈ H | v ↦ (Av, u) is a bounded linear functional on D(A) } .

If u ∈ D(A*), then there exists a unique function w = A*u ∈ H such that (Av, u) = (v, w) for all v ∈ D(A). This defines the adjoint A* of A with domain D(A*). The operator A is called self-adjoint if it is symmetric and D(A) = D(A*).
Definition. A sequence of bounded linear operators A_n converges strongly to A, written A = s-lim_{n→∞} A_n, if A_n u → A u for all u ∈ H.

Theorem 4.12.1 (Trotter product formula). Given self-adjoint operators A, B defined on D(A), D(B) ⊂ H such that A + B is self-adjoint on D = D(A) ∩ D(B). Then

   e^{it(A+B)} = s-lim_{n→∞} ( e^{itA/n} e^{itB/n} )^n .

If A, B are bounded from below, then also

   e^{-t(A+B)} = s-lim_{n→∞} ( e^{-tA/n} e^{-tB/n} )^n .

Proof. Define S_t = e^{it(A+B)}, V_t = e^{itA}, W_t = e^{itB}, U_t = V_t W_t and v_t = S_t v for v ∈ D. Because A + B is self-adjoint on D, one has v_t ∈ D; moreover, e^{itA} leaves the domain D(A) of A invariant. Use a telescopic sum to estimate

   ||(S_t − U_{t/n}^n) v|| = || Σ_{j=0}^{n-1} U_{t/n}^j (S_{t/n} − U_{t/n}) S_{(n-j-1)t/n} v || ≤ n · sup_{0 ≤ s ≤ t} ||(S_{t/n} − U_{t/n}) v_s || .

We have to show that this goes to zero for n → ∞. We will use the fact that a self-adjoint operator defines a one-parameter family of unitary operators t ↦ e^{itA} which is strongly continuous. From

   lim_{s→0} (S_s − 1) u / s = i(A + B) u = lim_{s→0} (U_s − 1) u / s

for u ∈ D, it follows that for each u ∈ D

   lim_{n→∞} n (S_{t/n} − U_{t/n}) u = 0 .        (4.3)

The linear space D with norm ||u||_D = ||(A + B) u|| + ||u|| is a Banach space, since A + B is self-adjoint on D and therefore closed. We have thus a family { n (S_{t/n} − U_{t/n}) }_{n ∈ N} of bounded operators from D to H, and the principle of uniform boundedness gives a constant C with

   || n (S_{t/n} − U_{t/n}) u || ≤ C · ||u||_D .

For more details, see [83, 7].
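In the finite dimensional special case, the Lie product formula, the Trotter limit is easy to check numerically. A minimal sketch assuming numpy; the symmetric matrices A and B are arbitrary examples, and the error of the n-th approximation decays roughly like 1/n:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def trotter(A, B, n):
    """n-th Trotter approximation (exp(A/n) exp(B/n))^n."""
    step = expm_sym(A / n) @ expm_sym(B / n)
    return np.linalg.matrix_power(step, n)

A = np.array([[1.0, 0.5], [0.5, -1.0]])   # arbitrary symmetric examples
B = np.array([[0.0, 1.0], [1.0, 2.0]])
exact = expm_sym(A + B)

for n in (1, 10, 100, 1000):
    print(n, np.linalg.norm(trotter(A, B, n) - exact))   # shrinks roughly like 1/n
```

The first-order error is governed by the commutator [A, B]; when A and B commute, the product formula is exact for every n.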
An ε/3 argument now shows that the limit (4.3) exists uniformly on compact subsets of D, especially on { v_s }_{s ∈ [0,t]} ⊂ D, so that n · sup_{0 ≤ s ≤ t} ||(S_{t/n} − U_{t/n}) v_s || → 0. The second statement is proved in exactly the same way.

Remark. Trotter's product formula generalizes the Lie product formula

   lim_{n→∞} ( exp(A/n) exp(B/n) )^n = exp(A + B)

for finite dimensional matrices A, B, which is a special case.

Remark. We did not specify the set of potentials for which H_0 + V can be made self-adjoint. For example, V ∈ C_0^∞(R^ν) is enough, or V ∈ L²(R³) ∩ L^∞(R³) in three dimensions.

Corollary 4.12.2 (Feynman 1948). Assume H = H_0 + V is self-adjoint on D(H). Then

   e^{-itH} u(x_0) = lim_{n→∞} (2πit/n)^{-dn/2} ∫_{(R^d)^n} e^{i S_n(x_0, x_1, ..., x_n, t)} u(x_n) dx_1 ... dx_n ,

where

   S_n(x_0, ..., x_n, t) = (t/n) Σ_{i=1}^n [ (1/2) ( |x_i − x_{i-1}| / (t/n) )² − V(x_i) ] .

Proof. (Nelson) From u̇ = −iH_0 u we get by Fourier transform (d/dt) û = −i (|k|²/2) û, which gives û_t(k) = exp(−i (|k|²/2) t) û_0(k), and by inverse Fourier transform

   e^{-itH_0} u(x) = u_t(x) = (2πit)^{-d/2} ∫ e^{i |x − y|²/(2t)} u(y) dy .

This computation shows that e^{-itH_0} has the integral kernel

   P̃_t(x, y) = (2πit)^{-d/2} e^{i |x − y|²/(2t)} .

The Trotter product formula

   e^{-it(H_0+V)} = s-lim_{n→∞} ( e^{-itH_0/n} e^{-itV/n} )^n

now gives the claim.

Remark. The same Fourier calculation shows that e^{-tH_0} has the integral kernel

   P_t(x, y) = (2πt)^{-d/2} e^{-|x − y|²/(2t)} = g_t(x − y) ,

where g_t is the density of a Gaussian random variable with variance t. Note that even if u ∈ L²(R^d) is only defined almost everywhere, the function u_t(x) = e^{-tH_0} u(x) = ∫ P_t(x − y) u(y) dy is continuous and defined everywhere.
Lemma 4.12.3. Given f_1, ..., f_n ∈ L^∞(R^d) ∩ L²(R^d) and 0 < s_1 < ... < s_n. Then

   ( e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n )(0) = ∫ f_1(B_{s_1}) ··· f_n(B_{s_n}) dB ,

where t_1 = s_1 and t_i = s_i − s_{i-1} for i ≥ 2, and the f_i on the left hand side are understood as multiplication operators on L²(R^d).

Proof. Since B_{s_1}, B_{s_2} − B_{s_1}, ..., B_{s_n} − B_{s_{n-1}} are mutually independent Gaussian random variables of variance t_1, t_2, ..., t_n, their joint distribution is

   P_{t_1}(0, y_1) P_{t_2}(0, y_2) ··· P_{t_n}(0, y_n) dy .

Therefore

   ∫ f_1(B_{s_1}) ··· f_n(B_{s_n}) dB = ∫_{(R^d)^n} P_{t_1}(0, y_1) ··· P_{t_n}(0, y_n) f_1(y_1) f_2(y_1 + y_2) ··· f_n(y_1 + ··· + y_n) dy ,

which is, after the change of variables y_1 = x_1, y_i = x_i − x_{i-1},

   ∫_{(R^d)^n} P_{t_1}(0, x_1) P_{t_2}(x_1, x_2) ··· P_{t_n}(x_{n-1}, x_n) f_1(x_1) ··· f_n(x_n) dx = ( e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n )(0) .

Definition. Denote by dB the Wiener measure on C([0, ∞), R^d). We define also an extended Wiener measure dW = dx × dB on R^d × C([0, ∞), R^d) on all paths s ↦ W_s = x + B_s starting at x ∈ R^d, where dx is the Lebesgue measure on R^d.

Corollary 4.12.4. Given f_0, f_1, ..., f_n ∈ L^∞(R^d) ∩ L²(R^d) and 0 ≤ s_0 < s_1 < ··· < s_n. Then

   ∫ f_0(W_{s_0}) ··· f_n(W_{s_n}) dW = ( f̄_0, e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n ) .
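The Gaussian kernel P_t of e^{-tH_0} used in the lemma can be verified in a case where the convolution is explicit: applying the heat semigroup to a Gaussian initial condition yields another Gaussian with enlarged variance. A sketch assuming numpy, evaluating e^{-tH_0} spectrally via the Fourier transform (grid sizes are arbitrary choices):

```python
import numpy as np

x = np.linspace(-20, 20, 2048, endpoint=False)
dx = x[1] - x[0]
t = 0.7
u0 = np.exp(-x**2)                      # Gaussian initial condition

# apply e^{-t H_0}: multiplication by exp(-t k^2 / 2) in Fourier space
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
ut = np.fft.ifft(np.exp(-t * k**2 / 2) * np.fft.fft(u0)).real

# convolving exp(-x^2) with the heat kernel P_t gives another Gaussian:
u_exact = (1 + 2 * t) ** -0.5 * np.exp(-x**2 / (1 + 2 * t))
print(np.max(np.abs(ut - u_exact)))     # should be tiny
```

The spectral evaluation is exact up to discretization, since multiplication by e^{-t|k|²/2} in Fourier space is precisely convolution with g_t.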
Proof. (i) Case s_0 = 0. From the above lemma, we get after the dB integration

   ∫ f_0(W_{s_0}) ··· f_n(W_{s_n}) dW = ∫_{R^d} f̄_0(x) ( e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n )(x) dx = ( f̄_0, e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n ) .

(ii) In the case s_0 > 0, we have from (i) and the dominated convergence theorem

   ∫ f_0(W_{s_0}) ··· f_n(W_{s_n}) dW = lim_{R→∞} ∫ 1_{{|x|<R}}(W_0) f_0(W_{s_0}) ··· f_n(W_{s_n}) dW
      = lim_{R→∞} ( f̄_0 e^{-s_0 H_0} 1_{{|x|<R}}, e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n )
      = ( f̄_0, e^{-t_1 H_0} f_1 ··· e^{-t_n H_0} f_n ) .

We prove now the Feynman-Kac formula for Schrödinger operators of the form H = H_0 + V with V ∈ C_0^∞(R^d).

Theorem 4.12.5 (Feynman-Kac formula). Given H = H_0 + V with V ∈ C_0^∞(R^d), then

   ( f, e^{-tH} g ) = ∫ f̄(W_0) g(W_t) e^{-∫_0^t V(W_s) ds} dW .      (4.4)

Proof. (Nelson) By the Trotter product formula,

   ( f, e^{-tH} g ) = lim_{n→∞} ( f, ( e^{-tH_0/n} e^{-tV/n} )^n g ) ,

so that by corollary (4.12.4)

   ( f, e^{-tH} g ) = lim_{n→∞} ∫ f̄(W_0) g(W_t) exp( −(t/n) Σ_{j=0}^{n-1} V(W_{tj/n}) ) dW .

Since s ↦ W_s is continuous, we have almost everywhere

   (t/n) Σ_{j=0}^{n-1} V(W_{tj/n}) → ∫_0^t V(W_s) ds .

Because V is continuous, the integral ∫_0^t V(W_s(ω)) ds can be taken for each ω as a limit of Riemann sums, and ∫_0^t V(W_s) ds certainly is a random variable.
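As a numerical sanity check of a Feynman-Kac type expectation, one can estimate the Wiener integral by Monte Carlo in a case where a closed form is known. For V(x) = x²/2, the classical Cameron-Martin formula gives E[exp(−(1/2)∫_0^t B_s² ds)] = cosh(t)^{-1/2}; this closed form is not derived in the text and is used here only as an external reference value. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)

def feynman_kac_mc(t=1.0, n_paths=20000, n_steps=200):
    """Monte Carlo estimate of E[exp(-1/2 * int_0^t B_s^2 ds)] for
    Brownian motion started at 0, with a Riemann sum for the time integral."""
    dt = t / n_steps
    incr = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(incr, axis=1)
    integral = 0.5 * np.sum(B**2, axis=1) * dt
    return np.mean(np.exp(-integral))

est = feynman_kac_mc()
exact = np.cosh(1.0) ** -0.5
print(est, exact)   # the two values agree to about two decimals
```

This is exactly the structure of the right hand side of (4.4): average, over sampled paths, the Riemann-sum approximation of exp(−∫ V(W_s) ds).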
Why is the Feynman-Kac formula useful?

• One can use Brownian motion to study Schrödinger semigroups. It allows, for example, an easy proof of the arc-sine law for Brownian motion.
• Functional integration is a way of quantization which generalizes to more situations.
• One can study the classical limit ℏ → 0.
• It is useful for studying ground states and ground state energies under perturbations.
• One can treat operators with magnetic fields in a unified way.

Remark. The formula can be extended to larger classes of potentials, like potentials V which are locally in L¹. The self-adjointness of H, which is needed in Trotter's product formula, is assured if V ∈ L² ∩ L^p with p > d/2. Trotter's product formula also allows further generalizations [97, 32].

4.13 The quantum mechanical oscillator

The one-dimensional Schrödinger operator

   H = H_0 + U = −(1/2) d²/dx² + (1/2) x² − 1/2

is the Hamiltonian of the quantum mechanical oscillator. It is a quantum mechanical system which can be solved explicitly, like its classical analog, which has the Hamiltonian H(x, p) = p²/2 + x²/2 − 1/2. One can write H = AA* − 1 = A*A with

   A = (1/√2) ( x + d/dx ) ,   A* = (1/√2) ( x − d/dx ) .

The first order operator A* is called the particle creation operator, and A the particle annihilation operator. The space C_0^∞ of smooth functions of compact support is dense in L²(R), and because

   (Au, v) = (u, A*v)

for all u, v ∈ C_0^∞(R), the two operators are adjoint to each other.
The vector

   Ω_0 = π^{-1/4} e^{-x²/2}

is a unit vector, because Ω_0² is the density of a N(0, 1/√2)-distributed random variable. Because AΩ_0 = 0, it is an eigenvector of H = A*A with eigenvalue 0. It is called the ground state or vacuum state, describing the system with no particle. Define inductively the n-particle states

   Ω_n = (1/√n) A* Ω_{n-1}

by creating an additional particle from the (n−1)-particle state Ω_{n-1}. They are unit vectors in L²(R), given by

   Ω_n(x) = H_n(x) Ω_0(x) / √(2^n n!) ,

where H_n(x) are the Hermite polynomials H_0(x) = 1, H_1(x) = 2x, H_2(x) = 4x² − 2, H_3(x) = 8x³ − 12x, ... .

Theorem 4.13.1 (Quantum mechanical oscillator). The following properties hold:
a) The functions are orthonormal: (Ω_n, Ω_m) = δ_{n,m}.
b) AΩ_n = √n Ω_{n-1} and A*Ω_n = √(n+1) Ω_{n+1}.
c) HΩ_n = A*A Ω_n = n Ω_n, so the eigenvalues of H are the nonnegative integers.
d) The functions Ω_n form a basis in L²(R).

Figure. The first Hermite functions Ω_n.

Proof. Denote by [A, B] = AB − BA the commutator of two operators A and B. We check first by induction the formula [A, (A*)^n] = n (A*)^{n-1}.
For n = 1 this means [A, A*] = 1. The induction step is

   [A, (A*)^n] = [A, (A*)^{n-1}] A* + (A*)^{n-1} [A, A*] = (n−1)(A*)^{n-1} + (A*)^{n-1} = n (A*)^{n-1} .

a) The relation ((A*)^n Ω_0, (A*)^m Ω_0) = n! δ_{nm} can be proven by induction. For n = 0 it follows from the fact that Ω_0 is normalized. Using [A, (A*)^n] = n (A*)^{n-1} and AΩ_0 = 0, we obtain

   ((A*)^n Ω_0, (A*)^m Ω_0) = (A (A*)^n Ω_0, (A*)^{m-1} Ω_0) = ([A, (A*)^n] Ω_0, (A*)^{m-1} Ω_0) = n ((A*)^{n-1} Ω_0, (A*)^{m-1} Ω_0) ,

which is by induction n (n−1)! δ_{n-1,m-1}. If n < m, then we get 0 after n such steps, while in the case n = m we get ((A*)^n Ω_0, (A*)^n Ω_0) = n!.

b) A*Ω_n = √(n+1) Ω_{n+1} is the definition of Ω_{n+1}. Using [A, (A*)^n] = n (A*)^{n-1} and AΩ_0 = 0,

   AΩ_n = (1/√(n!)) A (A*)^n Ω_0 = (1/√(n!)) n (A*)^{n-1} Ω_0 = √n Ω_{n-1} .

c) This follows from b) and the definition Ω_n = (1/√n) A* Ω_{n-1}.

d) Part a) shows that {Ω_n}_{n=0}^∞ is an orthonormal set in L²(R). In order to show that it spans L²(R), we have to verify that it spans the dense set

   S = { f ∈ C^∞(R) | x^m f^{(n)}(x) → 0 for |x| → ∞, for all m, n ∈ N } ,

called the Schwartz space. The reason is that, by the Hahn-Banach theorem, a function f must be zero in L²(R) if it is orthogonal to a dense set. So let us assume (f, Ω_n) = 0 for all n, that is (f, (A*)^n Ω_0) = 0 for all n. Because A* + A = √2 x and (A* + A)^n Ω_0 is a linear combination of the vectors (A*)^k Ω_0, k ≤ n, we get

   0 = (f, (A* + A)^n Ω_0) = 2^{n/2} (f, x^n Ω_0) ,

so that (f, x^n Ω_0) = 0 for all n. Therefore the Fourier transform of f Ω_0 satisfies

   (f Ω_0)^(k) = ∫_{-∞}^∞ f(x) Ω_0(x) e^{ikx} dx = Σ_{n≥0} ((ik)^n / n!) (f, x^n Ω_0) = 0 ,

and so f Ω_0 = 0. Since Ω_0(x) is positive for all x, we must have f = 0. This finishes the proof that we have a complete basis.
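The statements of the theorem can be checked numerically. The sketch below (assuming numpy; the grid and the finite difference scheme are arbitrary choices) constructs the Hermite functions Ω_n and verifies their normalization, orthogonality, and that the oscillator −(1/2) d²/dx² + x²/2 (without the constant shift −1/2) has Rayleigh quotient n + 1/2 on Ω_n:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def omega(n):
    """Hermite function Omega_n(x) = H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(x, c) * np.exp(-x**2 / 2) / norm

def oscillator(psi):
    """Apply -1/2 psi'' + (x^2/2) psi, with a centered finite difference."""
    d2 = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
    return -0.5 * d2 + 0.5 * x**2 * psi

for n in range(4):
    On = omega(n)
    print(n,
          np.sum(On * On) * dx,                # norm, close to 1
          np.sum(On * oscillator(On)) * dx)    # Rayleigh quotient, close to n + 1/2
```

Subtracting the constant 1/2 recovers HΩ_n = nΩ_n for H = A*A as in part c).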
Remark. With the eigenvalues {λ_n}_{n=0}^∞ and the complete set of eigenvectors Ω_n, one can solve the Schrödinger equation

   iℏ (d/dt) u = Hu

by writing u(x) = Σ_{n=0}^∞ u_n Ω_n(x) as a sum of eigenfunctions, where u_n = (u, Ω_n). The solution of the Schrödinger equation is

   u(t, x) = Σ_{n=0}^∞ u_n e^{-iλ_n t/ℏ} Ω_n(x) .

This gives a complete solution to the quantum mechanical harmonic oscillator.

Remark. In quantum field theory, where a quantum mechanical system is extended to a quantum field, there exists a process called canonical quantization; particle annihilation and creation operators play an important role there.

Remark. The Bäcklund transformation H = A*A ↦ H̃ = AA* is, in the case of the harmonic oscillator, the map H ↦ H + 1. It has the effect of replacing U with Ũ = U − ∂_x² log Ω_0, where Ω_0 is the eigenvector to the lowest eigenvalue. The new operator H̃ has the same spectrum as H except that the lowest eigenvalue is removed. This procedure can be reversed to create "soliton potentials" out of the vacuum. See [12].

Remark. It is also natural to use the language of supersymmetry as introduced by Witten: take two copies H_f ⊕ H_b of the Hilbert space, where "f" stands for Fermion and "b" for Boson. With

   Q = ( 0  A* ; A  0 ) ,   P = ( 1  0 ; 0  −1 ) ,

one has H̃ = Q², P² = 1, QP + PQ = 0, and one says that (H̃, P, Q) has supersymmetry. The operator Q is also called a Dirac operator. A supersymmetric system has the property that nonzero eigenvalues have the same number of bosonic and fermionic eigenstates.

4.14 Feynman-Kac for the oscillator

We want to treat perturbations L = L_0 + V of the harmonic oscillator L_0 with a similar Feynman-Kac formula.
Remark. The formalism of particle creation and annihilation operators can be extended to some potentials of the form U(x) = q²(x) − q′(x): the operator H = −D²/2 + U/2 can then be written as H = A*A with

   A* = (1/√2) ( q(x) − d/dx ) ,   A = (1/√2) ( q(x) + d/dx ) .

The oscillator is the special case q(x) = x. As before, H̃ = AA* has the same spectrum as H except that the lowest eigenvalue can disappear.
The calculation of the integral kernel p_t(x, y) of e^{-tL_0}, satisfying

   ( e^{-tL_0} f )(x) = ∫_R p_t(x, y) f(y) dy ,

is slightly more involved than in the case of the free Laplacian.

Lemma 4.14.1 (Mehler formula). The kernel p_t(x, y) of e^{-tL_0} is given by

   p_t(x, y) = (π σ²)^{-1/2} exp( − [ (x² + y²)(1 + e^{-2t}) − 4 x y e^{-t} ] / (2σ²) )

with σ² = 1 − e^{-2t}.

Proof. The Trotter product formula for L_0 = H_0 + U gives

   ( Ω_0, f_0 e^{-t_1 L_0} f_1 ··· e^{-t_n L_0} f_n Ω_0 ) = lim_{m_1, ..., m_n → ∞} ( Ω_0, f_0 ( e^{-t_1 H_0/m_1} e^{-t_1 U/m_1} )^{m_1} f_1 ··· f_n Ω_0 ) = lim_m ∫ f_0(x_0) ··· f_n(x_n) dG_m(x) ,

where G_m is a measure. Since e^{-tH_0} has a Gaussian kernel and e^{-tU} is a multiple of a Gaussian density, and since integrals over Gaussian kernels are again Gaussian, the measure dG_m is Gaussian, converging to a Gaussian measure dG. Mehler's formula follows by computing the covariance of dG for two times and inverting it, as carried out below.

Let Ω_0 be the ground state of L_0 as in the last section.

Theorem 4.14.2. Given f_0, f_1, ..., f_n ∈ L^∞(R) and −∞ < s_0 < s_1 < ··· < s_n < ∞. Then

   ( Ω_0, f_0 e^{-t_1 L_0} f_1 ··· e^{-t_n L_0} f_n Ω_0 ) = ∫ f_0(Q_{s_0}) ··· f_n(Q_{s_n}) dQ ,

where t_i = s_i − s_{i-1} for i ≥ 1 and Q_t is the oscillator process.

Proof. With dG as above: since L_0 (x Ω_0) = x Ω_0 and (x Ω_0, x Ω_0) = 1/2, we have

   ∫ x_i x_j dG = ( x Ω_0, e^{-(s_j − s_i) L_0} x Ω_0 ) = e^{-(s_j − s_i)} / 2 ,

which shows that dG is the joint probability distribution of Q_{s_0}, ..., Q_{s_n}. The claim follows.
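The oscillator process can be sampled directly: it is the stationary Gaussian (Ornstein-Uhlenbeck) process with mean zero and covariance E[Q_s Q_t] = e^{-|t-s|}/2, matching the covariance of dG above. A sketch assuming numpy, using the exact one-step Gaussian transition (path count and step size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def oscillator_process(n_paths=40000, dt=0.1, n_steps=10):
    """Sample the stationary oscillator process: Gaussian, mean 0,
    covariance E[Q_s Q_t] = exp(-|t-s|)/2, via its exact transition."""
    q = rng.normal(scale=np.sqrt(0.5), size=n_paths)   # stationary marginal
    path = [q]
    a = np.exp(-dt)
    s = np.sqrt((1 - a**2) / 2)
    for _ in range(n_steps):
        q = a * q + rng.normal(scale=s, size=n_paths)
        path.append(q)
    return np.array(path)

Q = oscillator_process()
# empirical lag-1.0 covariance (10 steps of dt = 0.1) versus e^{-1}/2
print(np.mean(Q[0] * Q[10]), np.exp(-1.0) / 2)
```

The exact transition (mean e^{-dt} Q_t, variance (1 − e^{-2dt})/2) avoids any time-discretization bias, so only Monte Carlo noise remains.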
We have

   ( f, e^{-tL_0} g ) = ∫ f(y) Ω_0^{-1}(y) g(x) Ω_0^{-1}(x) dG(x, y)

with the Gaussian measure dG having the covariance matrix

   A = (1/2) ( 1  e^{-t} ; e^{-t}  1 ) .

We get Mehler's formula by inverting this matrix and using that the density of dG is (2π)^{-1} det(A)^{-1/2} e^{-((x,y), A^{-1}(x,y))/2}.

Definition. Let dQ be the Wiener measure on C(R) belonging to the oscillator process Q_t.

Theorem 4.14.3 (Feynman-Kac for the oscillator process). Given L = L_0 + V with V ∈ C_0^∞(R), then

   ( f Ω_0, e^{-tL} g Ω_0 ) = ∫ f(Q_0) g(Q_t) e^{-∫_0^t V(Q_s) ds} dQ

for all f, g ∈ L²(R, Ω_0² dx).

Proof. By the Trotter product formula,

   ( f Ω_0, e^{-tL} g Ω_0 ) = lim_{n→∞} ( f Ω_0, ( e^{-tL_0/n} e^{-tV/n} )^n g Ω_0 ) ,

so that

   ( f Ω_0, e^{-tL} g Ω_0 ) = lim_{n→∞} ∫ f(Q_0) g(Q_t) exp( −(t/n) Σ_{j=0}^{n-1} V(Q_{tj/n}) ) dQ ,     (4.5)

and since Q is continuous, we have almost everywhere

   (t/n) Σ_{j=0}^{n-1} V(Q_{tj/n}) → ∫_0^t V(Q_s) ds .

The integrand on the right hand side of (4.5) is dominated by |f(Q_0) g(Q_t)| e^{t ||V||_∞}, which is in L¹(dQ) since

   ∫ |f(Q_0) g(Q_t)| dQ = ( Ω_0 |f|, e^{-tL_0} Ω_0 |g| ) < ∞ .

The dominated convergence theorem (2.4.3) gives the claim.
4.15 Neighborhood of Brownian motion

The Feynman-Kac formula can be used to understand the Dirichlet Laplacian of a domain D ⊂ R^d.

Definition. Let D be an open set in R^d such that the Lebesgue measure |D| is finite and the Lebesgue measure of the boundary δD is zero. Denote by H_D the Dirichlet Laplacian −∆/2 on D and by k_D(E) the number of eigenvalues of H_D below E. This function is also called the integrated density of states. Denote by K_d the unit ball in R^d and by |K_d| = Vol(K_d) = π^{d/2} Γ(d/2 + 1)^{-1} its volume.

Example. Weyl's formula describes the asymptotic behavior of k_D(E) for large E:

   lim_{E→∞} k_D(E) / E^{d/2} = |K_d| · |D| / (2^{d/2} π^d) .

It shows that one can read off the volume of D from the spectrum of the Laplacian.

Definition. Let W^δ(t) be the set

   { x ∈ R^d | |x − B_s(ω)| ≤ δ for some s ∈ [0, t] } .

This set is called the Wiener sausage. It is of course dependent on ω and is just a δ-neighborhood of the Brownian path B_{[0,t]}(ω). One is interested in the expected volume E[ |W^δ(t)| ] of this set as δ → 0.

Example ("crushed ice"). Put n ice balls K_{j,n}, 1 ≤ j ≤ n, of radius r_n into a glass of water so that n · r_n = α. Mathematically, this is described as follows: let D be an open bounded domain in R^d. Given a sequence x = (x_1, x_2, ...) ∈ D^N and a sequence of radii r_1, r_2, ..., define

   D_n = D \ ∪_{i=1}^n { |x − x_i| ≤ r_n } ,

the domain D with n balls of centers x_1, ..., x_n and radius r_n removed. Let H(x, n) be the Dirichlet Laplacian on D_n and E_k(x, n) the k-th eigenvalue of H(x, n); these are random variables E_k(n) in x if D^N is equipped with the product Lebesgue measure. One can show that in the case n r_n → α,

   E_k(n) → E_k(0) + 2πα |D|^{-1}

in probability: random impurities produce a constant shift in the spectrum. When the crushing makes n r_n → ∞, there is much better cooling than one might expect. We will look at this problem a bit more closely in the rest of this section. In order to know
how well this ice cools the water, it is good to know the lowest eigenvalue E_1 of the Dirichlet Laplacian H_D, since the decay of the temperature distribution u under the heat equation is dominated by e^{-tE_1}. This motivates computing the lowest eigenvalue of the domain D \ ∪_{j=1}^n K_{j,n}. This can be done exactly in the limit n → ∞, when the ice K_{j,n} is randomly distributed in the glass.
Figure. A finite path of Brownian motion with its neighborhood W^δ.

Figure. A sample of a Wiener sausage in the plane d = 2.

Lemma 4.15.1. Let D be a bounded domain in R^d containing 0, and let p_D(x, y, t) be the integral kernel of e^{-tH}, where H is the Dirichlet Laplacian on D. Then

   P[ B_s ∈ D, 0 ≤ s ≤ t ] = ∫ p_D(0, x, t) dx .

Proof. (i) It is known that the Dirichlet Laplacian can be approximated in the strong resolvent sense by the operators H_0 + λV, where V = 1_{D^c} is the characteristic function of the exterior D^c of D: for λ → ∞,

   (H_0 + λV − z)^{-1} u → (H_D − z)^{-1} u

for z outside [0, ∞) and all u ∈ C_c^∞(R^d).

(ii) Since Brownian paths are continuous, we have ∫_0^t V(B_s) ds > 0 if and only if B_s ∈ D^c for some s ∈ [0, t]. We get therefore, pointwise almost everywhere as λ → ∞,

   e^{-λ ∫_0^t V(B_s) ds} → 1_{{ B_s ∈ D, 0 ≤ s ≤ t }} .

(iii) Let u_n be a sequence in C_c^∞ converging pointwise to 1. We get, with the dominated convergence theorem (2.4.3), using (i) and (ii) and Feynman-
Kac:

   P[ B_s ∈ D, 0 ≤ s ≤ t ] = lim_{n→∞} lim_{λ→∞} E[ e^{-λ ∫_0^t V(B_s) ds} u_n(B_t) ]
      = lim_{n→∞} lim_{λ→∞} ( e^{-t(H_0 + λV)} u_n )(0)
      = lim_{n→∞} ( e^{-tH_D} u_n )(0)
      = lim_{n→∞} ∫ p_D(0, x, t) u_n(x) dx = ∫ p_D(0, x, t) dx .

Theorem 4.15.2 (Spitzer). In three dimensions d = 3,

   E[ |W^δ(t)| ] = 2πδt + 4δ² √(2πt) + (4π/3) δ³ .

Proof. By Brownian scaling, B_{λ²s} has the same distribution as λ B_s, so that

   E[ |W^{λδ}(λ²t)| ] = E[ |{ x : |x − B_s| ≤ λδ for some 0 ≤ s ≤ λ²t }| ] = λ³ · E[ |W^δ(t)| ] .

One can therefore assume without loss of generality that δ = 1: knowing E[ |W^1(t)| ], we get the general case with the formula E[ |W^δ(t)| ] = δ³ · E[ |W^1(δ^{-2} t)| ]. Let K be the closed unit ball in R³ and define the hitting probability

   f(x, t) = P[ x + B_s ∈ K for some 0 ≤ s ≤ t ] .

We have

   E[ |W^1(t)| ] = ∫∫ P[ x ∈ W^1(t) ] dx dB = ∫∫ P[ |B_s − x| ≤ 1 for some 0 ≤ s ≤ t ] dx dB = ∫_{R³} f(x, t) dx .
The hitting probability is radially symmetric and can be computed explicitly in terms of r = |x|: for |x| ≥ 1 one has

   f(x, t) = (2 / (r √(2πt))) ∫_0^∞ e^{-(r + z − 1)²/(2t)} dz .

Indeed, from the previous lemma it follows that the kernel of e^{-tH} satisfies the heat equation ∂_t p(x, t) = (∆/2) p(x, t), so that the function g(r, t) = r f(x, t) satisfies the one-dimensional heat equation ġ = (1/2) ∂_r² g(r, t) with boundary conditions g(1, t) = 1 and g(r, 0) = 0 for r > 1, whose solution is the above integral. We compute

   ∫_{|x| ≥ 1} f(x, t) dx = 2πt + 4 √(2πt)

and ∫_{|x| ≤ 1} f(x, t) dx = 4π/3, so that

   E[ |W^1(t)| ] = 2πt + 4 √(2πt) + 4π/3 ,

and the theorem follows by scaling.

Corollary 4.15.3. In three dimensions, one has

   lim_{δ→0} (1/δ) E[ |W^δ(t)| ] = 2πt   and   lim_{t→∞} (1/t) E[ |W^δ(t)| ] = 2πδ .

Proof. This follows immediately from Spitzer's theorem (4.15.2).

Remark. The corollary shows that the Wiener sausage is quite "fat": if Brownian motion were one-dimensional, δ^{-2} E[ |W^δ(t)| ] would stay bounded as δ → 0. Brownian motion is rather "two-dimensional".

Remark. Kesten, Spitzer and Wightman have obtained stronger results: the corresponding limits even hold for W^δ(t) itself, for almost all paths, not only in expectation.
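Spitzer's formula makes both limits in the corollary transparent; a small numerical sketch assuming numpy:

```python
import numpy as np

def sausage_mean(delta, t):
    """Spitzer's formula for E[ |W^delta(t)| ] in dimension d = 3."""
    return 2*np.pi*delta*t + 4*delta**2*np.sqrt(2*np.pi*t) + (4*np.pi/3)*delta**3

t = 5.0
for delta in (1.0, 0.1, 0.01, 0.001):
    print(delta, sausage_mean(delta, t) / delta)   # tends to 2*pi*t as delta -> 0

delta = 0.1
for t in (1e2, 1e4, 1e6):
    print(t, sausage_mean(delta, t) / t)           # tends to 2*pi*delta as t -> infinity
```

The linear term 2πδt dominates both limits; the δ² and δ³ terms are the corrections visible at moderate δ and t.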
4.16 The Ito integral for Brownian motion

We start now to develop stochastic integration, first for Brownian motion and then more generally for continuous martingales. Let B_t be the one-dimensional Brownian motion process and let f be a function f : R → R. Let us start with a motivation. We know from earlier in this chapter that almost all paths of Brownian motion are not differentiable. The usual Lebesgue-Stieltjes integral

   ∫_0^t f(B_s) Ḃ_s ds

can therefore not be defined. We are first going to see how a stochastic integral can still be constructed. Actually, we were already dealing with a special case of stochastic integrals, namely with Wiener integrals ∫ f(B_s) dB_s, where f is a function on C([0, ∞), R^d), which can contain for example ∫_0^t V(B_s) ds as in the Feynman-Kac formula. But the result of that integral was a number, while the stochastic integral we are going to define will be a random variable.

We have earlier defined the discrete stochastic integral for a previsible process C and a martingale X,

   ( ∫ C dX )_n = Σ_{m=1}^n C_m (X_m − X_{m-1}) .

If we want to take for C a function of X, then we have to take C_m = f(X_{m-1}). This is the reason why the differentials δ_n B_{t_m} below have to "stick out into the future".

Definition. Define for n ∈ N the random variable

   J_n(f) = Σ_{m=1}^{2^n} f(B_{(m-1)2^{-n}}) ( B_{m 2^{-n}} − B_{(m-1)2^{-n}} ) =: Σ_{m=1}^{2^n} J_{n,m}(f) .

We will also use for J_{n,m}(f) the notation f(B_{t_{m-1}}) δ_n B_{t_m}, where δ_n B_t = B_t − B_{t − 2^{-n}}. The stochastic integral is a limit of discrete stochastic integrals:

Lemma 4.16.1. If f ∈ C¹(R) is such that f and f′ are bounded on R, then J_n(f) converges in L² to a random variable

   ∫_0^1 f(B_s) dB = lim_{n→∞} J_n

satisfying

   || ∫_0^1 f(B_s) dB ||_2² = E[ ∫_0^1 f(B_s)² ds ] .
Proof. (i) For i ≠ j we have E[J_{n,i}(f) J_{n,j}(f)] = 0: for j > i, there is a factor B_{j2^{-n}} − B_{(j-1)2^{-n}} of J_{n,j}(f) independent of the rest of J_{n,i}(f) J_{n,j}(f), and the claim follows from E[B_{j2^{-n}} − B_{(j-1)2^{-n}}] = 0.

(ii) E[J_{n,m}(f)²] = E[f(B_{(m-1)2^{-n}})²] 2^{-n}: the factor f(B_{(m-1)2^{-n}}) is independent of (B_{m2^{-n}} − B_{(m-1)2^{-n}})², which has expectation 2^{-n}.

(iii) From (i) and (ii) follows

   ||J_n(f)||² = Σ_{m=1}^{2^n} E[ f(B_{(m-1)2^{-n}})² ] 2^{-n} .

(iv) Claim: J_n converges in L². Since f ∈ C¹, there exists C = ||f′||_∞² with |f(x) − f(y)|² ≤ C |x − y|². We get

   ||J_{n+1}(f) − J_n(f)||_2² = Σ_m E[ ( f(B_{(2m+1)2^{-(n+1)}}) − f(B_{(2m)2^{-(n+1)}}) )² ] 2^{-(n+1)}
      ≤ C Σ_m E[ ( B_{(2m+1)2^{-(n+1)}} − B_{(2m)2^{-(n+1)}} )² ] 2^{-(n+1)} = C · 2^{-n-2} ,

where the last equality follows from E[(B_{(2m+1)2^{-(n+1)}} − B_{(2m)2^{-(n+1)}})²] = 2^{-(n+1)}, since B is Gaussian. We see that J_n is a Cauchy sequence in L² and has therefore a limit.

(v) Claim: || ∫_0^1 f(B_s) dB ||_2² = E[ ∫_0^1 f(B_s)² ds ]. Since Σ_m f(B_{(m-1)2^{-n}})² 2^{-n} converges pointwise to ∫_0^1 f(B_s)² ds (which exists because f and B_s are continuous) and is dominated by ||f||_∞², the claim follows from (iii), since J_n converges in L².

Definition. We write L^p_loc(R) for functions f which are in L^p(I) when restricted to any finite interval I on the real line. We can extend the integral to functions f which are locally L¹ and bounded near 0.

Corollary 4.16.2. ∫_0^1 f(B_s) dB exists as an L² random variable for f ∈ L¹_loc(R) ∩ L^∞(−ε, ε) and any ε > 0.
Proof. (i) If f ∈ L¹_loc(R) ∩ L^∞(−ε, ε), then

   E[ ∫_0^1 f(B_s)² ds ] = ∫_0^1 ∫_R ( f(x)² / √(2πs) ) e^{-x²/(2s)} dx ds < ∞ .

(ii) If f ∈ L¹_loc(R) ∩ L^∞(−ε, ε), then for almost every B(ω) the limit

   lim_{a→∞} ∫_0^1 1_{[-a,a]}(B_s) f(B_s)² ds

exists pointwise and is finite: B_s is continuous for almost all ω, so that 1_{[-a,a]}(B_s) f(B_s) is independent of a for large a.

(iii) Given f_n ∈ C¹(R) with 1_{[-a,a]} f_n → f in L²(R), we have

   ∫ 1_{[-a,a]} f_n(B_s) dB → ∫ 1_{[-a,a]} f(B_s) dB

in L². The integral E[ ∫_0^1 1_{[-a,a]}(B_s) f(B_s)² ds ] is bounded by E[ ∫ f(B_s)² ds ] < ∞ by (i). Since by (ii) this L² bound is independent of a, we can also pass to the limit a → ∞, and the claim follows.

Lemma 4.16.3. For n → ∞,

   Σ_{j=1}^{2^n} J_{n,j}(1)² = Σ_{j=1}^{2^n} ( B_{j/2^n} − B_{(j-1)/2^n} )² → 1 .

Proof. By definition of Brownian motion, the increments B_{j/2^n} − B_{(j-1)/2^n} are N(0, 2^{-n})-distributed, so that

   E[ Σ_{j=1}^{2^n} J_{n,j}(1)² ] = 2^n · Var[ B_{j/2^n} − B_{(j-1)/2^n} ] = 2^n 2^{-n} = 1 .

For fixed n, the random variables X_j = 2^n J_{n,j}(1)² are IID squares of standard Gaussian random variables, with mean 1, so that by the law of large numbers

   (1/2^n) Σ_{j=1}^{2^n} X_j → 1

for n → ∞.

Definition. Having the one-dimensional integral allows one also to set up the integral in higher dimensions: with Brownian motion in R^d and f ∈ L²_loc(R^d), define the integral ∫_0^1 f(B_s) dB_s component-wise. This integral is called an Ito integral.
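Lemma 4.16.3 says that the quadratic variation of Brownian motion on [0, 1] is 1; the concentration is easy to observe numerically (sketch assuming numpy; dyadic grids as in the lemma):

```python
import numpy as np

rng = np.random.default_rng(3)

def quadratic_variation(n):
    """Sum of squared increments of Brownian motion over the dyadic grid
    {j 2^-n : j = 1, ..., 2^n} of [0, 1]."""
    incr = rng.normal(scale=2.0 ** (-n / 2), size=2**n)
    return float(np.sum(incr**2))

for n in (4, 8, 12, 16):
    print(n, quadratic_variation(n))   # concentrates near 1 as n grows
```

The standard deviation of the sum is √(2 · 2^{-n}), which explains the visible 2^{-n/2} narrowing around 1.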
The formal rules of integration do not hold for this integral. We have for example in one dimension

   ∫_0^1 B_s dB = (1/2)(B_1² − 1) ≠ (1/2)(B_1² − B_0²) .

Proof. Define

   J_n^- = Σ_{m=1}^{2^n} B_{(m-1)2^{-n}} ( B_{m2^{-n}} − B_{(m-1)2^{-n}} ) ,
   J_n^+ = Σ_{m=1}^{2^n} B_{m2^{-n}} ( B_{m2^{-n}} − B_{(m-1)2^{-n}} ) .

The above lemma implies that J_n^+ − J_n^- → 1 almost everywhere for n → ∞, and we check also J_n^+ + J_n^- = B_1². Both of these identities come from cancellations in the sum, and together they imply the claim.

We mention now some basic properties of the stochastic integral.

Theorem 4.16.4 (Properties of the Ito integral).
(1) ∫_0^t f(B_s) + g(B_s) dB_s = ∫_0^t f(B_s) dB_s + ∫_0^t g(B_s) dB_s.
(2) ∫_0^t λ · f(B_s) dB_s = λ · ∫_0^t f(B_s) dB_s.
(3) t ↦ ∫_0^t f(B_s) dB_s is a continuous map from R^+ to L².
(4) E[ ∫_0^t f(B_s) dB_s ] = 0.
(5) ∫_0^t f(B_s) dB_s is A_t-measurable.

Proof. (1) and (2) follow from the definition of the integral. For (3) define X_t = ∫_0^t f(B_s) dB. Since

   ||X_t − X_{t+ε}||_2² = E[ ∫_t^{t+ε} f(B_s)² ds ] = ∫_t^{t+ε} ∫_R ( f(x)² / √(2πs) ) e^{-x²/(2s)} dx ds → 0

for ε → 0, the claim follows. (4) and (5) can be seen by verifying them first for elementary functions f.

Definition. It will be useful to consider another generalization of the integral. If dW = dx dB is the Wiener measure on R^d × C([0, ∞)), define

   ∫_0^t f(W_s) dW_s = ∫_{R^d} ∫_0^t f(x + B_s) dB_s dx .
The following formula is useful for understanding and calculating stochastic integrals. It is the "fundamental theorem for stochastic integrals" and allows one to do "change of variables" in stochastic calculus, similarly as the fundamental theorem of calculus does for usual calculus.

Theorem 4.16.5 (Ito's formula). For a C² function f(x) on R^d,

   f(B_t) − f(B_0) = ∫_0^t ∇f(B_s) · dB_s + (1/2) ∫_0^t ∆f(B_s) ds .

If B_s were an ordinary path in R^d with velocity vector dB_s = Ḃ_s ds, then we would have f(B_t) − f(B_0) = ∫_0^t ∇f(B_s) · Ḃ_s ds by the fundamental theorem of line integrals in calculus. It is a bit surprising that in the stochastic setup a second derivative ∆f appears in a first order differential. One sometimes writes the formula also in the differential form

   df = ∇f dB + (1/2) ∆f dt .

Remark. We cite [11]: "Ito's formula is now the bread and butter of the 'quant' department of several major financial institutions." Ito's formula provides the link between various stochastic quantities and differential equations of which those quantities are the solution. Models like that of Black-Scholes constitute the basis on which a modern business makes decisions about how everything from stocks and bonds to pork belly futures should be priced. For more information on the Black-Scholes model and the famous Black-Scholes formula, see [16].

It is not much more work to prove a more general formula for functions f(x, t) which can be time-dependent too. As long as E[ ∫_0^t f(B_s, s)² ds ] < ∞, we can also define the integral ∫_0^t f(B_s, s) · dB_s.

Theorem 4.16.6 (Generalized Ito formula). Given a function f(x, t) on R^d × [0, t] which is twice differentiable in x and differentiable in t. Then

   f(B_t, t) − f(B_0, 0) = ∫_0^t ∇f(B_s, s) · dB_s + (1/2) ∫_0^t ∆f(B_s, s) ds + ∫_0^t ḟ(B_s, s) ds .
In differential notation, this means

  df = ∇f dB + ((1/2) Δf + ḟ) dt .

Proof. By a change of variables, we can assume t = 1. For each n, we discretize time {0 < 2^{−n} < 2·2^{−n} < … < 1}, t_k = k·2^{−n}, and define δ_n B_{t_k} = B_{t_k} − B_{t_{k−1}}. We write

  f(B_1, 1) − f(B_0, 0) = Σ_{k=1}^{2^n} (∇f)(B_{t_{k−1}}, t_{k−1}) δ_n B_{t_k}
   + Σ_{k=1}^{2^n} [ f(B_{t_k}, t_{k−1}) − f(B_{t_{k−1}}, t_{k−1}) − (∇f)(B_{t_{k−1}}, t_{k−1}) δ_n B_{t_k} ]
   + Σ_{k=1}^{2^n} [ f(B_{t_k}, t_k) − f(B_{t_k}, t_{k−1}) ]
  = I_n + II_n + III_n .

(i) By definition of the Ito integral, the first sum I_n converges in L^2 to ∫_0^1 (∇f)(B_s, s) dB_s.

(ii) If p > 2, we have Σ_{k=1}^{2^n} |δ_n B_{t_k}|^p → 0 for n → ∞.
Proof. δ_n B_{t_k} is a N(0, 2^{−n})-distributed random variable, so that

  E[|δ_n B_{t_k}|^p] = (2π)^{−1/2} 2^{−(np)/2} ∫_{−∞}^{∞} |x|^p e^{−x^2/2} dx = C 2^{−(np)/2} .

This means

  E[Σ_{k=1}^{2^n} |δ_n B_{t_k}|^p] = C 2^n 2^{−(np)/2} ,

which goes to zero for n → ∞ and p > 2.

(iii) Σ_{k=1}^{2^n} E[(B_{t_k} − B_{t_{k−1}})^4] → 0 follows from (ii). We have therefore

  E[( Σ_{k=1}^{2^n} g(B_{t_{k−1}}, t_{k−1}) ((B_{t_k} − B_{t_{k−1}})^2 − 2^{−n}) )^2]
  ≤ C Σ_{k=1}^{2^n} Var[(B_{t_k} − B_{t_{k−1}})^2] ≤ C Σ_{k=1}^{2^n} E[(B_{t_k} − B_{t_{k−1}})^4] → 0 .

(iv) Using the Taylor expansion

  f(x) = f(y) + ∇f(y)(x − y) + (1/2) Σ_{i,j} ∂_{x_i} ∂_{x_j} f(y) (x − y)_i (x − y)_j + O(|x − y|^3) ,
we get for n → ∞

  II_n − Σ_{k=1}^{2^n} (1/2) Σ_{i,j} ∂_{x_i} ∂_{x_j} f(B_{t_{k−1}}, t_{k−1}) (δ_n B_{t_k})_i (δ_n B_{t_k})_j → 0

in L^2. Since

  Σ_{k=1}^{2^n} (1/2) ∂_{x_i} ∂_{x_j} f(B_{t_{k−1}}, t_{k−1}) [ (δ_n B_{t_k})_i (δ_n B_{t_k})_j − δ_{ij} 2^{−n} ]

goes to zero in L^2 (applying (iii) for g = ∂_{x_i} ∂_{x_j} f and noting that (δ_n B_{t_k})_i and (δ_n B_{t_k})_j are independent for i ≠ j), we have therefore

  II_n → (1/2) ∫_0^t Δf(B_s, s) ds

in L^2.

(v) A Taylor expansion with respect to t,

  f(x, t) − f(x, s) = ḟ(x, s)(t − s) + O((t − s)^2) ,

gives

  III_n → ∫_0^t ḟ(B_s, s) ds

in L^1, because s ↦ ḟ(B_s, s) is continuous and III_n is a Riemann sum approximation.

Example. Consider the function f(x, t) = e^{αx − α^2 t/2}. Because this function satisfies the heat equation ḟ + f''/2 = 0, we get from Ito's formula

  f(B_t, t) − f(B_0, 0) = α ∫_0^t f(B_s, s) dB_s .

We see that for functions satisfying the heat equation ḟ + f''/2 = 0, Ito's formula reduces to the usual rule of calculus. If we make a power expansion in α of

  ∫_0^t e^{αB_s − α^2 s/2} dB_s = (1/α) e^{αB_t − α^2 t/2} − (1/α) ,

we get other formulas like

  ∫_0^t B_s dB_s = (1/2)(B_t^2 − t) .
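The exponential identity can be checked on one simulated path. A minimal sketch (not from the text; α = 0.7 and the resolution are arbitrary choices), using a left-endpoint sum for the integral on [0, 1]:

```python
import math
import random

# Sketch: check  int_0^1 e^{a B_s - a^2 s/2} dB_s = (e^{a B_1 - a^2/2} - 1)/a
# on one sampled Brownian path, with a left-endpoint (Ito) sum.
random.seed(5)
a = 0.7
N = 2 ** 14
dt = 1.0 / N
B = [0.0]
for _ in range(N):
    B.append(B[-1] + random.gauss(0.0, dt ** 0.5))

lhs = sum(math.exp(a * B[k] - a * a * k * dt / 2) * (B[k + 1] - B[k])
          for k in range(N))
rhs = (math.exp(a * B[-1] - a * a / 2) - 1.0) / a
print(lhs - rhs)   # ~ 0
```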
Wick ordering. There is a notation used in quantum field theory, developed by Gian-Carlo Wick at about the same time as Ito invented the integral. This Wick ordering is a linear map on polynomials Σ_i a_i x^i which sends each monomial x^n to a monic polynomial :x^n: = x^n + a_{n−1} x^{n−1} + ··· of the same degree.

Definition. Let

  Ω_n(x) = H_n(x) Ω_0(x) / √(2^n n!)

be the n-th eigenfunction of the quantum mechanical oscillator. Define

  :x^n: = 2^{−n/2} H_n(x/√2)

and extend the definition to all polynomials by linearity.

Example. Here are the first Wick powers:

  :x: = x
  :x^2: = x^2 − 1
  :x^3: = x^3 − 3x
  :x^4: = x^4 − 6x^2 + 3
  :x^5: = x^5 − 10x^3 + 15x .

Remark. The polynomials :x^n: are orthogonal with respect to the measure Ω_0^2 dy = π^{−1/2} e^{−y^2} dy, because we have seen that the eigenfunctions Ω_n are orthonormal.

Definition. The multiplication operator Q : f ↦ xf is called the position operator. By definition of the creation and annihilation operators, one has Q = (A + A^*)/√2.

Definition. Define L = Σ_{j=0}^n (n choose j) (A^*)^j A^{n−j}.

The following formula indicates why Wick ordering has its name and why it is useful in quantum mechanics:

Proposition 4.16.7. As operators, we have the identity

  :Q^n: = 2^{−n/2} :(A + A^*)^n: = 2^{−n/2} Σ_{j=0}^n (n choose j) (A^*)^j A^{n−j} = 2^{−n/2} L .
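The Wick powers :x^n: = 2^{−n/2} H_n(x/√2) coincide with the probabilists' Hermite polynomials He_n, which obey the three-term recursion He_0 = 1, He_1 = x, He_{n+1}(x) = x He_n(x) − n He_{n−1}(x). A minimal sketch (not from the text) computing the table above from that recursion, with polynomials stored as coefficient lists [c_0, c_1, …]:

```python
# Sketch: Wick powers via the probabilists' Hermite recursion
#   He_0 = 1, He_1 = x, He_{n+1}(x) = x He_n(x) - n He_{n-1}(x).
# A polynomial is a list of coefficients; entry i is the x^i coefficient.

def wick_power(n):
    """Return the coefficient list of :x^n: = He_n(x)."""
    p_prev, p = [1], [0, 1]        # He_0 and He_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        nxt = [0] + p              # multiply He_k by x
        for i, c in enumerate(p_prev):
            nxt[i] -= k * c        # subtract k * He_{k-1}
        p_prev, p = p, nxt
    return p

print(wick_power(3))   # [0, -3, 0, 1]         -> :x^3: = x^3 - 3x
print(wick_power(4))   # [3, 0, -6, 0, 1]      -> :x^4: = x^4 - 6x^2 + 3
print(wick_power(5))   # [0, 15, 0, -10, 0, 1] -> :x^5: = x^5 - 10x^3 + 15x
```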
Proof. Since we know that the Ω_n form a basis in L^2, we only have to verify that :Q^n: Ω_k = 2^{−n/2} L Ω_k for all k. Because

  :Q^n: Ω_0 = 2^{−n/2} (n!)^{1/2} Ω_n = 2^{−n/2} (A^*)^n Ω_0 = 2^{−n/2} L Ω_0 ,

the claim holds for k = 0. From

  [Q, Σ_{j=0}^n (n choose j) (A^*)^j A^{n−j}] = 2^{−1/2} Σ_{j=0}^n (n choose j) ( j (A^*)^{j−1} A^{n−j} − (n − j) (A^*)^j A^{n−j−1} ) = 0

we obtain by linearity [H_k(√2 Q), L] = 0, and we get

  0 = (:Q^n: − 2^{−n/2} L) Ω_0
  = (k!)^{−1/2} H_k(√2 Q) (:Q^n: − 2^{−n/2} L) Ω_0
  = (:Q^n: − 2^{−n/2} L) (k!)^{−1/2} H_k(√2 Q) Ω_0
  = (:Q^n: − 2^{−n/2} L) Ω_k .

Remark. The new ordering makes the operators A, A^* behave as if they would commute, even though they don't: they satisfy the commutation relation [A, A^*] = 1.

Remark. Notation can be important to make a concept appear natural. An other example, where an adaptation of notation helps, is quantum calculus, "calculus without taking limits" [45], where the derivative is defined as D_q f(x) = d_q f(x)/d_q(x) with d_q f(x) = f(qx) − f(x). One can see that D_q x^n = [n] x^{n−1}, where [n] = (q^n − 1)/(q − 1). The limit q → 1 corresponds to the classical limit case ℏ → 0 of quantum mechanics.

The fact that stochastic integration is relevant to quantum mechanics can be seen from the following formula for the Ito integral: Wick ordering makes the Ito integral behave like an ordinary integral.

Theorem 4.16.8 (Ito integral of B^n).

  ∫_0^t :B_s^n: dB_s = :B_t^{n+1}:/(n + 1) .

Proof. By rescaling, we can assume that t = 1. We prove all these equalities simultaneously by showing

  ∫_0^1 :e^{αB_s}: dB_s = α^{−1} :e^{αB_1}: − α^{−1} .
The generating function for the Hermite polynomials is known to be

  Σ_{n=0}^∞ H_n(x) α^n / n! = e^{α√2 x − α^2/2} .

(We can check this formula by multiplying it with Ω_0 and replacing x with x/√2, so that we have

  Σ_{n=0}^∞ Ω_n(x) α^n / (n!)^{1/2} = e^{αx − α^2/2 − x^2/2} .)

This means

  Σ_{n=0}^∞ :x^n: α^n / n! = e^{αx − α^2/2} .

Since the right hand side satisfies ḟ + f''/2 = 0, the claim follows from the Ito formula for such functions. If we apply A^* on both sides, the equation goes onto itself, and we get after k such applications of A^* that the inner product with Ω_k is the same on both sides. Therefore the functions must be the same.

We can now determine all the integrals ∫ B_s^n dB_s. Here :B_s^n: denotes the Wick power taken with variance parameter s, so that :B_s: = B_s, :B_s^2: = B_s^2 − s, :B_s^3: = B_s^3 − 3sB_s:

  ∫_0^t 1 dB_s = B_t
  ∫_0^t B_s dB_s = ∫_0^t :B_s: dB_s = (1/2) :B_t^2: = (1/2)(B_t^2 − t)
  ∫_0^t :B_s^2: dB_s = (1/3) :B_t^3: = (1/3)(B_t^3 − 3tB_t)

and so on.

Stochastic integrals for the oscillator and the Brownian bridge process.

Definition. Let Q_t = e^{−t} B_{e^{2t}} / √2 be the oscillator process and A_t = (1 − t) B_{t/(1−t)} the Brownian bridge. If we define the new discrete differentials

  δ_n Q_{t_k} = Q_{t_{k+1}} − e^{−(t_{k+1} − t_k)} Q_{t_k} ,
  δ_n A_{t_k} = A_{t_{k+1}} − A_{t_k} + A_{t_k} (t_{k+1} − t_k)/(1 − t_k) ,

the stochastic integrals can be defined as in the case of Brownian motion as a limit of discrete integrals.

Feynman-Kac formula for Schrödinger operators with magnetic fields. Stochastic integrals appear in the Feynman-Kac formula for particles moving in a magnetic field. Let A(x) be a vector potential in R^3 which gives
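The case n = 2 of Theorem 4.16.8 can be checked on a simulated path. A minimal sketch (not from the text; it reads :B_s^2: = B_s^2 − s and :B_1^3: = B_1^3 − 3B_1, the Wick powers with variance parameter s, resp. 1):

```python
import random

# Sketch: check  int_0^1 :B_s^2: dB_s = :B_1^3:/3  numerically, with
# :B_s^2: = B_s^2 - s and a left-endpoint sum for the integral.
random.seed(2)
N = 2 ** 14
dt = 1.0 / N
B = [0.0]
for _ in range(N):
    B.append(B[-1] + random.gauss(0.0, dt ** 0.5))

lhs = sum((B[k] ** 2 - k * dt) * (B[k + 1] - B[k]) for k in range(N))
rhs = (B[-1] ** 3 - 3 * B[-1]) / 3.0
print(lhs - rhs)   # ~ 0
```

Without the Wick correction −s, the same sum would not reproduce a polynomial in B_1 alone, which is the point of the ordering.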
the magnetic field B(x) = curl(A). Quantum mechanically, a particle moving in a magnetic field together with an external field is described by the Hamiltonian

  H = (i∇ + A)^2 + V .

In the case A = 0, we get the usual Schrödinger operator. The Feynman-Kac formula is the Wiener integral

  (e^{−tH} u)(0) = ∫ e^{−F(B,t)} u(B_t) dB ,

where F(B, t) is a stochastic integral:

  F(B, t) = i ∫_0^t A(B_s) · dB_s + (i/2) ∫_0^t div(A)(B_s) ds + ∫_0^t V(B_s) ds .

4.17 Processes of bounded quadratic variation

We now develop the stochastic Ito integral with respect to general martingales. Brownian motion B will be replaced by a martingale M, which is assumed to be in L^2. The aim will be to define an integral

  ∫_0^t K_s dM_s ,

where K is a progressively measurable process which satisfies some boundedness condition.

Definition. Given a right-continuous function f : [0,∞) → R. For each finite subdivision Δ = {0 = t_0, t_1, …, t_n = t} of the interval [0, t], we define the modulus |Δ| = sup_i |t_{i+1} − t_i| and

  ‖f‖_Δ = Σ_{i=0}^{n−1} |f_{t_{i+1}} − f_{t_i}| .

A function with finite total variation ‖f‖_t = sup_Δ ‖f‖_Δ < ∞ is called a function of finite variation. If sup_t ‖f‖_t < ∞, then f is called of bounded variation; one abbreviates bounded variation with BV. Note that for a function of finite variation, V_t = ‖f‖_t can go to ∞ for t → ∞, but if V_t stays bounded, we have a function of bounded variation.

Remark. Monotone and bounded functions are of finite variation. Sums of functions of bounded variation are of bounded variation. Differentiable C^1 functions are of finite variation. Every function of finite variation can be written as f = f^+ − f^−, where f^± are both positive and increasing. Proof: define f^± = (±f_t + ‖f‖_t)/2.
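The total variation and the decomposition f = f^+ − f^− can be illustrated on a sampled function. A minimal sketch (not from the text; the example sin(2πt) on [0, 1], whose total variation is 4, is an arbitrary choice):

```python
import math

# Sketch: total variation of a sampled path as the sum of |increments|,
# plus the decomposition into two increasing parts f+ and f-
# (positive and negative increments accumulated separately).

def variation_parts(values):
    """Return (total_variation, f_plus, f_minus) for a sampled path."""
    tv, fp, fm = 0.0, [0.0], [0.0]
    for a, b in zip(values, values[1:]):
        d = b - a
        tv += abs(d)
        fp.append(fp[-1] + max(d, 0.0))
        fm.append(fm[-1] + max(-d, 0.0))
    return tv, fp, fm

N = 10_000
f = [math.sin(2 * math.pi * k / N) for k in range(N + 1)]
tv, fp, fm = variation_parts(f)
print(tv)   # ~ 4, the total variation of sin(2 pi t) on [0, 1]

# f is recovered as f(0) + f+ - f-, with both parts increasing
err = max(abs(f[k] - (f[0] + fp[k] - fm[k])) for k in range(N + 1))
print(err)  # ~ 0
```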
Remark. Functions of bounded variation are in one-to-one correspondence with Borel measures on [0,∞) by the Stieltjes integral ∫_0^t |df| = f_t^+ + f_t^−.

Definition. A process X_t is called increasing if the paths X_t(ω) are finite, right-continuous and increasing for almost all ω ∈ Ω. A process X_t is called of finite variation if the paths X_t(ω) are finite, right-continuous and of finite variation for almost all ω ∈ Ω.

Remark. Every finite variation process A can be written as A_t = A_t^+ − A_t^−, where A_t^± are increasing. The process V_t = ∫_0^t |dA_s| = A_t^+ + A_t^− is increasing, and we get for almost all ω ∈ Ω a measure, called the variation of A.

Definition. If X_t is a bounded A_t-adapted process and A is a process of bounded variation, we can form the Lebesgue-Stieltjes integral

  (X · A)_t(ω) = ∫_0^t X_s(ω) dA_s(ω) .

We would like to define such an integral for martingales. The problem is:

Proposition 4.17.1. A continuous martingale M is never of finite variation, unless it is constant.

Proof. Assume M is of finite variation. We show that it is constant.
(i) We can assume without loss of generality that M is of bounded variation: otherwise, we can look at the martingale M^{S_n}, where S_n is the stopping time S_n = inf{s | V_s ≥ n} and V_t is the variation of M on [0, t].
(ii) We can also assume without loss of generality that M_0 = 0.
(iii) Let Δ = {t_0 = 0, t_1, …, t_k = t} be a subdivision of [0, t]. Since M is a martingale, we have by Pythagoras

  E[M_t^2] = E[Σ_{i=0}^{k−1} (M_{t_{i+1}}^2 − M_{t_i}^2)]
  = E[Σ_{i=0}^{k−1} (M_{t_{i+1}} − M_{t_i})(M_{t_{i+1}} + M_{t_i})]
  = E[Σ_{i=0}^{k−1} (M_{t_{i+1}} − M_{t_i})^2]
and so

  E[M_t^2] ≤ E[V_t · sup_i |M_{t_{i+1}} − M_{t_i}|] ≤ K · E[sup_i |M_{t_{i+1}} − M_{t_i}|] .

If the modulus |Δ| goes to zero, then the right hand side goes to zero, since M is continuous. Therefore M = 0.

Remark. This proposition applies especially for Brownian motion and underlines the fact that the stochastic integral can not be defined path wise by a Lebesgue-Stieltjes integral.

Definition. If Δ = {t_0 = 0 < t_1 < …} is a subdivision of R^+ = [0,∞) with only finitely many points {t_0, …, t_k} in each interval [0, t], we define for a process X

  T_t^Δ = T_t^Δ(X) = Σ_{i=0}^{k−1} (X_{t_{i+1}} − X_{t_i})^2 + (X_t − X_{t_k})^2 .

The process X is called of finite quadratic variation if there exists a process ⟨X, X⟩ such that for each t, the random variable T_t^Δ converges in probability to ⟨X, X⟩_t as |Δ| → 0.

Theorem 4.17.2 (Doob-Meyer decomposition). Given a continuous and bounded martingale M of finite quadratic variation. Then ⟨M, M⟩ is the unique continuous increasing adapted process vanishing at zero such that M^2 − ⟨M, M⟩ is a martingale.

Remark. Before we enter the not so easy proof given in [86], let us mention the corresponding result in the discrete case (see theorem (3.5.1)), where M^2 was a submartingale, so that M^2 could be written uniquely as a sum of a martingale and an increasing previsible process.

Proof. Uniqueness follows from the previous proposition: if there were two such continuous increasing processes A, B, then A − B would be a continuous martingale of bounded variation (if A and B are increasing, they are of bounded variation) which vanishes at zero. Therefore A = B.

(i) M_t^2 − T_t^Δ(M) is a continuous martingale.
Proof. For t_i < s < t_{i+1}, we have from the martingale property

  E[(M_{t_{i+1}} − M_{t_i})^2 | A_s] = E[(M_{t_{i+1}} − M_s)^2 | A_s] + (M_s − M_{t_i})^2 .
This implies with 0 = t_0 < t_1 < … < t_l < s < t_{l+1} < … < t_k < t, using orthogonality of the increments,

  E[T_t^Δ(M) − T_s^Δ(M) | A_s]
  = E[Σ_{j=l}^{k−1} (M_{t_{j+1}} − M_{t_j})^2 | A_s] + E[(M_t − M_{t_k})^2 | A_s] − E[(M_s − M_{t_l})^2 | A_s]
  = E[(M_t − M_s)^2 | A_s] = E[M_t^2 − M_s^2 | A_s] .

This implies that M_t^2 − T_t^Δ(M) is a continuous martingale.

(ii) Let C be a constant such that |M| ≤ C on [0, a]. Then E[T_a^Δ] ≤ 4C^2.
Proof. The computation in (i) gives for s = 0, using T_0^Δ(M) = 0,

  E[T_t^Δ(M) | A_0] = E[M_t^2 − M_0^2 | A_0] ≤ E[(M_t − M_0)(M_t + M_0)] ≤ 4C^2 .

(iii) For any subdivision Δ = {t_0, …, t_n} of [0, a], one has E[(T_a^Δ)^2] ≤ 48C^4.
Proof. We can assume t_n = a. Then

  (T_a^Δ(M))^2 = (Σ_{k=1}^n (M_{t_k} − M_{t_{k−1}})^2)^2
  = 2 Σ_{k=1}^n (T_a^Δ − T_{t_k}^Δ)(T_{t_k}^Δ − T_{t_{k−1}}^Δ) + Σ_{k=1}^n (M_{t_k} − M_{t_{k−1}})^4 .

By (i), we have E[T_a^Δ − T_{t_k}^Δ | A_{t_k}] = E[(M_a − M_{t_k})^2 | A_{t_k}] and consequently, using (ii),

  E[(T_a^Δ)^2] = 2 Σ_{k=1}^n E[(M_a − M_{t_k})^2 (T_{t_k}^Δ − T_{t_{k−1}}^Δ)] + Σ_{k=1}^n E[(M_{t_k} − M_{t_{k−1}})^4]
  ≤ E[(2 sup_k |M_a − M_{t_k}|^2 + sup_k |M_{t_k} − M_{t_{k−1}}|^2) T_a^Δ]
  ≤ 12C^2 E[T_a^Δ] ≤ 48C^4 .

(iv) For fixed a > 0 and subdivisions Δ_n of [0, a] satisfying |Δ_n| → 0, the sequence T_a^{Δ_n} has a limit in L^2.
Proof. Given two subdivisions Δ' and Δ'' of [0, a], let Δ be the subdivision obtained by taking the union of the points of Δ' and Δ''. By (i), the process X = T^{Δ'} − T^{Δ''} is a martingale, and by (i) again, applied to the martingale X instead of M, we have, using (x + y)^2 ≤ 2(x^2 + y^2),

  E[X_a^2] = E[(T_a^{Δ'} − T_a^{Δ''})^2] = E[T_a^Δ(X)] ≤ 2(E[T_a^Δ(T^{Δ'})] + E[T_a^Δ(T^{Δ''})]) .

We therefore only have to show that E[T_a^Δ(T^{Δ'})] → 0 for |Δ'| + |Δ''| → 0. Let s_k be in Δ and t_m the rightmost point in Δ' such that t_m ≤ s_k < s_{k+1} ≤ t_{m+1}. We have

  T_{s_{k+1}}^{Δ'} − T_{s_k}^{Δ'} = (M_{s_{k+1}} − M_{t_m})^2 − (M_{s_k} − M_{t_m})^2
  = (M_{s_{k+1}} − M_{s_k})(M_{s_{k+1}} + M_{s_k} − 2M_{t_m})
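The dichotomy behind Proposition 4.17.1 and the definition of T_t^Δ can be seen numerically: for a C^1 path the discrete quadratic variation vanishes as the subdivision refines, while for a Brownian path it converges to t. A minimal sketch (not from the text; paths and resolution are illustrative):

```python
import math
import random

# Sketch: T_t^Delta = sum (X_{t_{i+1}} - X_{t_i})^2 on [0, 1] for
# a smooth path (-> 0 as |Delta| -> 0) versus a Brownian path (-> t = 1).

def quad_var(values):
    return sum((b - a) ** 2 for a, b in zip(values, values[1:]))

random.seed(3)
N = 2 ** 14
dt = 1.0 / N

smooth = [math.sin(2 * math.pi * k * dt) for k in range(N + 1)]
brown = [0.0]
for _ in range(N):
    brown.append(brown[-1] + random.gauss(0.0, dt ** 0.5))

print(quad_var(smooth))  # ~ 0  (order 1/N)
print(quad_var(brown))   # ~ 1  (the interval length)
```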
and so

  T_a^Δ(T^{Δ'}) ≤ (sup_k |M_{s_{k+1}} + M_{s_k} − 2M_{t_m}|^2) T_a^Δ .

By the Cauchy-Schwarz inequality,

  E[T_a^Δ(T^{Δ'})] ≤ E[sup_k |M_{s_{k+1}} + M_{s_k} − 2M_{t_m}|^4]^{1/2} E[(T_a^Δ)^2]^{1/2} ,

and the first factor goes to 0 as |Δ| → 0, while the second factor is bounded because of (iii).

(v) There exists a sequence Δ_n ⊂ Δ_{n+1} such that T_t^{Δ_n}(M) converges uniformly on [0, a] to a limit ⟨M, M⟩.
Proof. Doob's inequality, applied to the discrete time martingale T^{Δ_n} − T^{Δ_m}, gives

  E[sup_{t≤a} |T_t^{Δ_n} − T_t^{Δ_m}|^2] ≤ 4 E[(T_a^{Δ_n} − T_a^{Δ_m})^2] .

Choose the sequence Δ_n such that Δ_{n+1} is a refinement of Δ_n and such that ⋃_n Δ_n is dense in [0, a]. By passing to a subsequence, we can achieve that the convergence is uniform.

(vi) ⟨M, M⟩ is increasing.
Proof. Take Δ_n ⊂ Δ_{n+1}, which can be chosen dense. For any pair s < t in ⋃_n Δ_n, we have T_s^{Δ_n}(M) ≤ T_t^{Δ_n}(M) if n is so large that Δ_n contains both s and t. Therefore ⟨M, M⟩ is increasing on ⋃_n Δ_n. The limit ⟨M, M⟩ is continuous, and therefore ⟨M, M⟩ is increasing everywhere.

Remark. The assumption of boundedness for the martingale is not essential. The result holds for general martingales and even more generally for so called local martingales: stochastic processes X for which there exists a sequence of bounded stopping times T_n increasing to ∞ for which the X^{T_n} are martingales.

Corollary 4.17.4. Let M, N be two continuous martingales with the same filtration. There exists a unique continuous adapted process ⟨M, N⟩ of finite variation, vanishing at zero, such that MN − ⟨M, N⟩ is a martingale.

Proof. Uniqueness follows again from the fact that a finite variation martingale vanishing at zero must be zero. To get existence, use the parallelogram law

  ⟨M, N⟩ = (1/4) (⟨M + N, M + N⟩ − ⟨M − N, M − N⟩) .
This process is vanishing at zero and of finite variation, since it is a sum of two processes with these properties. Since the (M ± N)^2 − ⟨M ± N, M ± N⟩ are martingales,

  (M + N)^2 − ⟨M + N, M + N⟩ − ((M − N)^2 − ⟨M − N, M − N⟩) = 4MN − 4⟨M, N⟩

is a martingale, and therefore MN − ⟨M, N⟩ is a martingale.

Definition. The process ⟨M, N⟩ is called the bracket of M and N, and ⟨M, M⟩ the increasing process of M. It defines a random measure d⟨M, N⟩.

Remark. Since we have obtained ⟨M, M⟩ as a limit of processes T_t^Δ, we could also write ⟨M, N⟩ as such a limit.

Example. If B = (B^(1), …, B^(d)) is Brownian motion, then ⟨B^(i), B^(j)⟩_t = δ_{ij} t, as we have computed in the proof of the Ito formula in the case t = 1.

Remark. It can be shown that every martingale M which has the property that ⟨M^(i), M^(j)⟩_t = δ_{ij} t must be Brownian motion. This is Lévy's characterization of Brownian motion.

Remark. If M is a martingale vanishing at zero with ⟨M, M⟩ = 0, then M = 0: since M^2 − ⟨M, M⟩ is a martingale, we have E[M_t^2] = E[⟨M, M⟩_t].

4.18 The Ito integral for martingales

In the last section, we have defined for two continuous martingales M, N the bracket process ⟨M, N⟩.

Theorem 4.18.1 (Kunita-Watanabe inequality). Let M, N be two continuous martingales and H, K two measurable processes. Then for all p, q ≥ 1 satisfying 1/p + 1/q = 1, we have for all t ≤ ∞

  E[∫_0^t |H_s| |K_s| |d⟨M, N⟩_s|] ≤ ‖(∫_0^t H_s^2 d⟨M, M⟩_s)^{1/2}‖_p · ‖(∫_0^t K_s^2 d⟨N, N⟩_s)^{1/2}‖_q .
Proof. (i) Define ⟨M, N⟩_s^t = ⟨M, N⟩_t − ⟨M, N⟩_s. Claim: almost surely,

  |⟨M, N⟩_s^t| ≤ (⟨M, M⟩_s^t)^{1/2} (⟨N, N⟩_s^t)^{1/2} .

For fixed r, the random variable

  ⟨M + rN, M + rN⟩_s^t = ⟨M, M⟩_s^t + 2r ⟨M, N⟩_s^t + r^2 ⟨N, N⟩_s^t

is positive almost everywhere, and this stays true simultaneously for a dense set of r ∈ R. Since M, N are continuous, it holds for all r. Since a + 2rb + cr^2 ≥ 0 for all r, with nonnegative a, c, implies |b| ≤ √a √c, the claim follows.

(ii) It is enough to prove the theorem for t < ∞ and bounded K, H, and it is enough to show that almost everywhere

  ∫_0^t |H_s| |K_s| |d⟨M, N⟩_s| ≤ (∫_0^t H_s^2 d⟨M, M⟩_s)^{1/2} · (∫_0^t K_s^2 d⟨N, N⟩_s)^{1/2} ,

since the theorem then follows using Hölder's inequality. By a density argument, we can assume that both K and H are step functions H = Σ_{i=1}^n H_i 1_{J_i} and K = Σ_{i=1}^n K_i 1_{J_i}, where J_i = [t_i, t_{i+1}).

(iii) We get from (i), for step functions H, K as in (ii), using the Cauchy-Schwarz inequality for the summation over i,

  |∫_0^t H_s K_s d⟨M, N⟩_s| ≤ Σ_i |H_i| |K_i| |⟨M, N⟩_{t_i}^{t_{i+1}}|
  ≤ Σ_i |H_i| |K_i| (⟨M, M⟩_{t_i}^{t_{i+1}})^{1/2} (⟨N, N⟩_{t_i}^{t_{i+1}})^{1/2}
  ≤ (Σ_i H_i^2 ⟨M, M⟩_{t_i}^{t_{i+1}})^{1/2} (Σ_i K_i^2 ⟨N, N⟩_{t_i}^{t_{i+1}})^{1/2}
  = (∫_0^t H_s^2 d⟨M, M⟩_s)^{1/2} (∫_0^t K_s^2 d⟨N, N⟩_s)^{1/2} .

The claim follows by taking limits.

Definition. Denote by H^2 the set of L^2 martingales which are A_t-adapted and satisfy

  ‖M‖_{H^2} = (sup_t E[M_t^2])^{1/2} < ∞ .

Call H_c^2 the subset of continuous martingales in H^2, and H_0^2 the subset of continuous martingales which are vanishing at zero.

Definition. Given a martingale M ∈ H^2, define L^2(M) as the space of progressively measurable processes K such that

  ‖K‖_{L^2(M)}^2 = E[∫_0^∞ K_s^2 d⟨M, M⟩_s] < ∞ .

Both H^2 and L^2(M) are Hilbert spaces.
Lemma 4.18.2. The space H_c^2 of continuous L^2 martingales is closed in H^2 and so a Hilbert space. Also H_0^2 is closed in H_c^2 and is therefore a Hilbert space.

Proof. Take a sequence M^(n) in H_c^2 converging to M ∈ H^2. By Doob's inequality,

  E[(sup_t |M_t^(n) − M_t|)^2] ≤ 4 ‖M^(n) − M‖_{H^2}^2 .

We can extract a subsequence M^(n_k) for which sup_t |M_t^(n_k) − M_t| converges point wise to zero almost everywhere. Therefore M ∈ H_c^2. The same argument shows also that H_0^2 is closed.

Proposition 4.18.3. Given M ∈ H_c^2 and K ∈ L^2(M). There exists a unique element ∫_0^t K dM ∈ H_0^2 such that

  ⟨∫_0^t K dM, N⟩ = ∫_0^t K d⟨M, N⟩

for every N ∈ H_c^2. The map K ↦ ∫_0^t K dM is an isometry from L^2(M) to H_0^2.

Proof. We can assume M ∈ H_0^2, since in general we define ∫_0^t K dM = ∫_0^t K d(M − M_0).

(i) By the Kunita-Watanabe inequality, we have for every N ∈ H_0^2

  E[∫_0^t K_s d⟨M, N⟩_s] ≤ ‖N‖_{H^2} · ‖K‖_{L^2(M)} .

The map

  N ↦ E[∫_0^t K_s d⟨M, N⟩_s]

is therefore a linear continuous functional on the Hilbert space H_0^2. By the Riesz representation theorem, there is an element ∫_0^t K dM ∈ H_0^2 such that

  E[(∫_0^t K_s dM_s) N_t] = E[∫_0^t K_s d⟨M, N⟩_s]

for every N ∈ H_0^2.
(ii) Uniqueness. Assume there exist two martingales L, L' ∈ H_0^2 such that ⟨L, N⟩ = ⟨L', N⟩ for all N ∈ H_0^2. Then ⟨L − L', N⟩ = 0 for all N ∈ H_0^2, in particular ⟨L − L', L − L'⟩ = 0, from which L = L' follows.

(iii) The map K ↦ ∫_0^t K dM is an isometry, because

  ‖∫_0 K dM‖_{H^2}^2 = E[(∫_0^∞ K_s dM_s)^2] = E[∫_0^∞ K_s^2 d⟨M, M⟩_s] = ‖K‖_{L^2(M)}^2 .

Definition. The martingale ∫_0^t K_s dM_s is called the Ito integral of the progressively measurable process K with respect to the martingale M. If we take M = B, Brownian motion, we get the already familiar Ito integral. We can especially take K = f(M), since continuous processes are progressively measurable.

Definition. An A_t-adapted right-continuous process X is called a local martingale if there exists a sequence T_n of increasing stopping times with T_n → ∞ almost everywhere, such that for every n, the process X^{T_n} 1_{{T_n > 0}} is a uniformly integrable A_t-martingale. Local martingales are more general than martingales, and stochastic integration can be defined more generally for local martingales.

We show now that Ito's formula holds also for general martingales. First, a special case, the integration by parts formula:

Theorem 4.18.4 (Integration by parts). Let X, Y be two continuous martingales. Then

  X_t Y_t − X_0 Y_0 = ∫_0^t X_s dY_s + ∫_0^t Y_s dX_s + ⟨X, Y⟩_t

and especially

  X_t^2 − X_0^2 = 2 ∫_0^t X_s dX_s + ⟨X, X⟩_t .

Proof. First, the special case. It is proved by discretisation: let Δ = {t_0, t_1, …, t_n} be a finite discretisation of [0, t]. Then

  Σ_{i=0}^{n−1} (X_{t_{i+1}} − X_{t_i})^2 = X_t^2 − X_0^2 − 2 Σ_{i=0}^{n−1} X_{t_i} (X_{t_{i+1}} − X_{t_i}) .

The general case follows from the special case by polarization: use the special case for X ± Y as well as for X and Y.
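The discretisation identity used in this proof is purely algebraic: each term satisfies (b − a)^2 = b^2 − a^2 − 2a(b − a), and the squares telescope. A minimal sketch (not from the text) checking it on arbitrary numbers:

```python
import random

# Sketch: the identity behind the integration by parts formula,
#   sum (X_{i+1} - X_i)^2 = X_n^2 - X_0^2 - 2 sum X_i (X_{i+1} - X_i),
# holds exactly for any sequence of reals, not just martingale samples.
random.seed(6)
X = [random.gauss(0.0, 1.0) for _ in range(101)]

lhs = sum((b - a) ** 2 for a, b in zip(X, X[1:]))
rhs = X[-1] ** 2 - X[0] ** 2 - 2 * sum(a * (b - a) for a, b in zip(X, X[1:]))
print(lhs - rhs)   # 0 up to rounding
```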
Letting |Δ| go to zero, we get the claim.

Theorem 4.18.5 (Ito formula for martingales). Given a vector martingale M = (M^(1), …, M^(d)) and a function f ∈ C^2(R^d, R). Then

  f(M_t) − f(M_0) = ∫_0^t ∇f(M_s) · dM_s + (1/2) Σ_{i,j} ∫_0^t ∂_{x_i} ∂_{x_j} f(M_s) d⟨M^(i), M^(j)⟩_s .

Proof. It is enough to prove the formula for polynomials. By the integration by parts formula, if the formula is established for a function g, we get it for the functions f(x) = x_i g(x). Since it is true for constant functions, we are done by induction.

Remark. The usual Ito formula in one dimension is a special case: for X_t = B_t Brownian motion,

  f(B_t) − f(B_0) = ∫_0^t f'(B_s) dB_s + (1/2) ∫_0^t f''(B_s) ds ,

because ⟨B, B⟩_t = t, so that d⟨B, B⟩_t = dt.

Example. If f(x) = x^2, this formula gives

  B_t^2/2 = ∫_0^t B_s dB_s + t/2 ,

which integrates the stochastic integral: ∫_0^t B_s dB_s = B_t^2/2 − t/2.

Example. If f(x) = log(x), the formula gives for a positive process X

  log(X_t/X_0) = ∫_0^t dX_s/X_s − (1/2) ∫_0^t d⟨X, X⟩_s/X_s^2 .

We will use this later, when dealing with stochastic differential equations.

4.19 Stochastic differential equations

We have seen earlier that if B_t is Brownian motion, then X = f(B, t) = e^{αB_t − α^2 t/2} is a martingale. In the last section we learned, using Ito's formula and (1/2)Δf + ḟ = 0, that

  ∫_0^t αX_s dB_s = X_t − 1 .

We can write this in differential form as

  dX_t = αX_t dB_t ,  X_0 = 1 .
This is an example of a stochastic differential equation (SDE), and one would use the notation dX = αX dB if it would not lead to confusion with the corresponding ordinary differential equation, where B is not a stochastic process but a variable and where the solution would be X = e^{αB}.

Definition. Let B_t be Brownian motion in R^d. A solution of a stochastic differential equation

  dX_t = f(X_t, B_t) · dB_t + g(X_t) dt

is an R^d-valued process X_t satisfying

  X_t = ∫_0^t f(X_s, B_s) · dB_s + ∫_0^t g(X_s) ds ,

where f : R^d × R^d → R^d and g : R^d × R^+ → R^d.

As for ordinary differential equations, where one can easily solve separable differential equations dx/dt = f(x) + g(t) by integration, this works for stochastic differential equations too. The key is Ito's formula (4.18.5), which holds for martingales and so for solutions of stochastic differential equations, and which reads in one dimension

  f(X_t) − f(X_0) = ∫_0^t f'(X_s) dX_s + (1/2) ∫_0^t f''(X_s) d⟨X, X⟩_s .

The following "multiplication table" for the product ⟨·,·⟩ of the differentials dt, dB_t can be found in many books on stochastic differential equations [2, 47, 68] and is useful to have in mind when solving actual stochastic differential equations:

  ⟨·,·⟩ | dt | dB_t
  dt    | 0  | 0
  dB_t  | 0  | dt

Example. The linear ordinary differential equation dX/dt = rX with solution X_t = e^{rt} X_0 has a stochastic analog, called the stochastic population model. We look for a stochastic process X_t which solves the SDE

  dX_t/dt = rX_t + αX_t ζ_t .

Separation of variables gives

  dX_t/X_t = r dt + αζ_t dt
and integration with respect to t

  ∫_0^t dX_s/X_s = rt + αB_t .

In order to compute the stochastic integral on the left hand side, we have to do a change of variables with f(x) = log(x). Ito's formula in one dimension,

  f(X_t) − f(X_0) = ∫_0^t f'(X_s) dX_s + (1/2) ∫_0^t f''(X_s) d⟨X, X⟩_s ,

gives therefore

  log(X_t/X_0) = ∫_0^t dX_s/X_s − (1/2) ∫_0^t d⟨X, X⟩_s/X_s^2 .

Looking up the multiplication table,

  ⟨dX_t, dX_t⟩ = ⟨rX_t dt + αX_t dB_t, rX_t dt + αX_t dB_t⟩ = α^2 X_t^2 dt ,

so that

  ∫_0^t dX_s/X_s = α^2 t/2 + log(X_t/X_0) .

Therefore

  α^2 t/2 + log(X_t/X_0) = rt + αB_t

and so

  X_t = X_0 e^{rt − α^2 t/2 + αB_t} .

This process is called geometric Brownian motion. We see especially that Ẋ = X/2 + Xζ has the solution X_t = e^{B_t}.

Figure. Solutions to the stochastic population model for r > 0.
Figure. Solutions to the stochastic population model for r < 0.

Remark. The stochastic population model is also important when modeling financial markets. The Black-Scholes model makes the assumption that stock prices evolve according to geometric Brownian motion. In that area, the constant r is called the percentage drift or expected gain, and α is called the percentage volatility.
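The closed form can be compared with a direct discretisation of the SDE. A minimal sketch (not from the text; the Euler-Maruyama scheme and the parameter values r = 1, α = 0.5, X_0 = 1 are illustrative choices):

```python
import math
import random

# Sketch: Euler-Maruyama steps X <- X + r X dt + alpha X dB for
# dX = r X dt + alpha X dB, compared at t = 1 with the closed form
# X_1 = X_0 exp(r - alpha^2/2 + alpha B_1) derived above.
random.seed(4)
r, alpha, X0 = 1.0, 0.5, 1.0
N = 2 ** 14
dt = 1.0 / N

X, B1 = X0, 0.0
for _ in range(N):
    dB = random.gauss(0.0, dt ** 0.5)
    X += r * X * dt + alpha * X * dB   # Euler-Maruyama step
    B1 += dB

exact = X0 * math.exp(r - alpha ** 2 / 2 + alpha * B1)
print(X, exact)   # close for small dt
```

The discrepancy shrinks like √dt, the strong convergence order of the Euler-Maruyama scheme.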
Example. An example from physics is a particle moving in a possibly time dependent force field F(x, t) with friction constant b, for which the equation without noise is

  ẍ = −bẋ + F(x, t) .

If we add white noise, we get the stochastic differential equation

  ẍ = −bẋ + F(x, t) + αζ(t) .

In principle, already the differential equation without noise can not be given closed solutions in general. For example, with X = ẋ and F = 0, the velocity X satisfies the stochastic differential equation

  dX_t = −bX_t dt + α dB_t ,

which has the solution

  X_t = e^{−bt} X_0 + α ∫_0^t e^{−b(t−s)} dB_s .

If the friction constant b is noisy, we obtain instead

  dX_t = (−b + αζ_t) X_t dt ,

which is the stochastic population model treated in the previous example.

Example. Because the Ito integral can be defined for any continuous martingale, one can study stochastic versions of any differential equation: Brownian motion can be replaced by an other continuous martingale M, leading to other classes of stochastic differential equations. A solution must then satisfy

  X_t = ∫_0^t f(X_s, M_s, s) · dM_s + ∫_0^t g(X_s, s) ds .

For example,

  X_t = e^{αM_t − α^2 ⟨M, M⟩_t/2}

is a solution of dX_t = αX_t dM_t, X_0 = 1.

Here is a list of stochastic differential equations with solutions. We again use the notation of white noise ζ(t) = dB_t/dt, which is a generalized function, and :B_t^n: denotes the Wick power with variance parameter t, so that for example :B_t^2: = B_t^2 − t:

  Stochastic differential equation      Solution
  d/dt X_t = ζ(t)                       X_t = B_t
  d/dt X_t = B_t ζ(t)                   X_t = :B_t^2:/2 = (B_t^2 − t)/2
  d/dt X_t = :B_t^2: ζ(t)               X_t = :B_t^3:/3 = (B_t^3 − 3tB_t)/3
  d/dt X_t = :B_t^3: ζ(t)               X_t = :B_t^4:/4 = (B_t^4 − 6tB_t^2 + 3t^2)/4
  d/dt X_t = :B_t^4: ζ(t)               X_t = :B_t^5:/5 = (B_t^5 − 10tB_t^3 + 15t^2 B_t)/5
  d/dt X_t = αX_t ζ(t)                  X_t = e^{αB_t − α^2 t/2}
  d/dt X_t = rX_t + αX_t ζ(t)           X_t = e^{rt + αB_t − α^2 t/2}

Remark. The notational replacement dB_t = ζ_t dt is quite popular in more applied sciences like engineering or finance.
Remark. Stochastic differential equations were introduced by Ito in 1951. Differential equations with a different integral came from Stratonovich, but there are formulas relating the two with each other. Both versions of stochastic integration have advantages and disadvantages, so it is enough to consider the Ito integral. Kunita shows in his book [56] that one can view solutions of stochastic differential equations as stochastic flows of diffeomorphisms. This brings the topic into the framework of ergodic theory.

Remark. For ordinary differential equations ẋ = f(x, t), one knows that unique solutions exist locally if f is Lipschitz continuous in x and continuous in t. For stochastic differential equations, one can do the same; the proof given for one-dimensional systems generalizes to differential equations in arbitrary Banach spaces. In the appendix below, we give a detailed proof of this existence theorem for ordinary differential equations. The idea of the proof is a Picard iteration of an operator which is a contraction. We will do such an iteration on the space H^2_{[0,T]} of L^2 martingales X having finite norm

  ‖X‖_T = E[sup_{t≤T} X_t^2] .

We will need the following version of Doob's inequality:

Lemma 4.19.1. Let X be an L^p martingale with p > 1. Then

  E[sup_{s≤t} |X_s|^p] ≤ (p/(p − 1))^p · E[|X_t|^p] .

Proof. We can assume without loss of generality that X is bounded; the general result follows by approximating X by X ∧ k with k → ∞. Define X^* = sup_{s≤t} |X_s|. From Doob's inequality λ P[X^* ≥ λ] ≤ E[|X_t| · 1_{X^* ≥ λ}] we get

  E[|X^*|^p] = E[∫_0^{X^*} pλ^{p−1} dλ] = E[∫_0^∞ pλ^{p−1} 1_{X^* ≥ λ} dλ]
  = ∫_0^∞ pλ^{p−1} P[X^* ≥ λ] dλ ≤ ∫_0^∞ pλ^{p−2} E[|X_t| · 1_{X^* ≥ λ}] dλ
  = p E[|X_t| ∫_0^{X^*} λ^{p−2} dλ] = (p/(p − 1)) E[|X_t| · (X^*)^{p−1}] .
Hölder's inequality gives

  E[|X^*|^p] ≤ (p/(p − 1)) E[(X^*)^p]^{(p−1)/p} E[|X_t|^p]^{1/p}

and the claim follows.

Theorem 4.19.2 (Local existence and uniqueness of solutions). Let M be a continuous martingale. Assume f(x, t) and g(x, t) are continuous in t and Lipschitz continuous in x. Then there exists T > 0 and a unique solution X_t of the SDE

  dX = f(X, t) dM + g(X, t) dt

with given initial condition X_0.

Proof. Define the operator

  S(X) = ∫_0^t f(s, X_s) dM_s + ∫_0^t g(s, X_s) ds

on L^2 processes. We will show that on some time interval (0, T], the map S is a contraction in the metric

  ‖X − Y‖_T = E[sup_{s≤T} (X_s − Y_s)^2] ,

so that S^n(X) converges, if T is small enough, to a unique fixed point. By assumption, there exists a constant K such that

  |f(t, w) − f(t, w')| ≤ K · sup_s |w_s − w'_s| .

Write S(X) = S^1(X) + S^2(X), where S^1 is the stochastic integral and S^2 the drift integral. It is enough that for i = 1, 2

  ‖S^i(X) − S^i(Y)‖_T ≤ (1/4) · ‖X − Y‖_T ;

then S is a contraction, ‖S(X) − S(Y)‖_T ≤ (1/2) · ‖X − Y‖_T.

(i) ‖S^1(X) − S^1(Y)‖_T ≤ (1/4) · ‖X − Y‖_T for T small enough.
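Doob's inequality from Lemma 4.19.1, which drives the estimate in step (i) below, can be illustrated by a Monte Carlo experiment. A minimal sketch (not from the text; the simple random walk and sample sizes are illustrative) for p = 2, where the constant (p/(p − 1))^p is 4:

```python
import random

# Sketch: Monte Carlo check of Doob's L^2 inequality
#   E[sup_{s<=n} X_s^2] <= 4 E[X_n^2]
# for the simple random walk martingale.
random.seed(7)
trials, n = 2000, 200
sup_sq = 0.0
end_sq = 0.0
for _ in range(trials):
    x, m = 0, 0
    for _ in range(n):
        x += random.choice((-1, 1))
        m = max(m, abs(x))
    sup_sq += m * m
    end_sq += x * x

print(sup_sq / trials, 4 * end_sq / trials)   # first <= second
```

Here E[X_n^2] = n = 200, and the running maximum term stays well below the Doob bound 4n.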
Proof. By the above lemma for p = 2,

  ‖S^1(X) − S^1(Y)‖_T = E[sup_{t≤T} (∫_0^t (f(s, X_s) − f(s, Y_s)) dM_s)^2]
  ≤ 4 E[(∫_0^T (f(t, X) − f(t, Y)) dM_t)^2]
  = 4 E[∫_0^T (f(t, X) − f(t, Y))^2 d⟨M, M⟩_t]
  ≤ 4K^2 E[∫_0^T sup_{s≤t} |X_s − Y_s|^2 dt]
  = 4K^2 ∫_0^T ‖X − Y‖_s ds ≤ (1/4) · ‖X − Y‖_T ,

where the last inequality holds for T small enough.

(ii) ‖S^2(X) − S^2(Y)‖_T = ‖∫_0^t (g(s, X_s) − g(s, Y_s)) ds‖_T ≤ (1/4) · ‖X − Y‖_T for T small enough.

The two estimates (i) and (ii) prove the claim in the same way as in the classical Cauchy-Picard existence theorem.

Appendix. In this appendix, we add the existence of solutions of ordinary differential equations. This is proved for differential equations in Banach spaces. Let X be a Banach space and I an interval in R. The following lemma is useful for proving existence of fixed points of maps.

Lemma 4.19.3. Let X̄ = B_r(x_0) ⊂ X and assume φ is a differentiable map X̄ → X. If for all x ∈ X̄

  ‖Dφ(x)‖ ≤ λ < 1  and  ‖φ(x_0) − x_0‖ ≤ (1 − λ) · r ,

then φ has exactly one fixed point in X̄.

Proof. The condition ‖x − x_0‖ ≤ r implies that

  ‖φ(x) − x_0‖ ≤ ‖φ(x) − φ(x_0)‖ + ‖φ(x_0) − x_0‖ ≤ λr + (1 − λ)r = r .

The map φ therefore maps the ball X̄ into itself, and it shrinks the distance of any two points by the contraction factor λ. Banach's fixed point theorem, applied to the complete metric space X̄ and the contraction φ, implies the result.

Definition. Let f be a map from I × X to X. A differentiable map u : J → X on an open interval J ⊂ I is called a solution of the differential equation ẋ = f(t, x)
if we have for all t ∈ J the relation u̇(t) = f(t, u(t)).

Theorem 4.19.4 (Cauchy-Picard existence theorem). Let f : I × X → X be continuous in the first coordinate and locally Lipschitz continuous in the second. Then, for every (t0, x0) ∈ I × X, there exists an open interval J ⊂ I with midpoint t0, such that on J, there exists exactly one solution of the differential equation ẋ = f(t, x).

Proof. There exists an interval J(t0, a) = (t0 − a, t0 + a) ⊂ I and a ball B(x0, b) such that

M = sup{ ∥f(t, x)∥ : (t, x) ∈ J(t0, a) × B(x0, b) }

as well as

k = sup{ ∥f(t, x1) − f(t, x2)∥ / ∥x1 − x2∥ : (t, xi) ∈ J(t0, a) × B(x0, b), x1 ≠ x2 }

are finite. Define for r < a the Banach space

Xr = C(J(t0, r), X) = { y : J(t0, r) → X, y continuous }

with norm ∥y∥ = sup_{t ∈ J(t0, r)} ∥y(t)∥. Let V_{r,b} be the open ball in Xr with radius b around the constant map t 7→ x0. For every y ∈ V_{r,b} we define

φ(y) : t 7→ x0 + ∫_{t0}^{t} f(s, y(s)) ds ,

which is again an element of Xr. A fixed point of φ is then a solution of the differential equation ẋ = f(t, x), which exists on J = Jr(t0). We now prove that, for r small enough, φ is a contraction. For two points y1, y2 ∈ V_{r,b}, we have by assumption

∥f(s, y1(s)) − f(s, y2(s))∥ ≤ k · ∥y1(s) − y2(s)∥ ≤ k · ∥y1 − y2∥

for every s ∈ Jr. Thus

∥φ(y1) − φ(y2)∥ = ∥ ∫_{t0}^{t} f(s, y1(s)) − f(s, y2(s)) ds ∥ ≤ ∫_{t0}^{t} ∥f(s, y1(s)) − f(s, y2(s))∥ ds ≤ kr · ∥y1 − y2∥ .
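The contraction φ(y)(t) = x0 + ∫ f(s, y(s)) ds can be iterated on a grid. A minimal sketch (mine, not the author's), with the integral replaced by a left Riemann sum; for f(t, x) = x and x0 = 1 the fixed point is the exponential function:

```python
def picard_iterate(f, x0, t_grid, n_iter):
    """Iterate the Picard map phi(y)(t) = x0 + integral_{t0}^t f(s, y(s)) ds
    on a uniform grid, approximating the integral by a left Riemann sum."""
    h = t_grid[1] - t_grid[0]
    y = [x0] * len(t_grid)            # start from the constant map t -> x0
    for _ in range(n_iter):
        integral, new_y = 0.0, [x0]
        for i in range(1, len(t_grid)):
            integral += f(t_grid[i - 1], y[i - 1]) * h
            new_y.append(x0 + integral)
        y = new_y
    return y

# Fixed point of phi for f(t, x) = x is exp(t); check the value at t = 1.
grid = [i / 1000 for i in range(1001)]
y = picard_iterate(lambda t, x: x, 1.0, grid, n_iter=30)
```

Each iteration adds one more term of the Taylor series of exp, so a few dozen iterations suffice on [0, 1].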
On the other hand, we have for every s ∈ Jr that ∥f(s, y(s))∥ ≤ M and so

∥φ(x0) − x0∥ = ∥ ∫_{t0}^{t} f(s, x0) ds ∥ ≤ ∫_{t0}^{t} ∥f(s, x0)∥ ds ≤ M · r .

We can apply the above lemma if kr < 1 and M r < b(1 − kr). This is the case if r < b/(M + kb). By choosing r small enough, we can make the contraction rate as small as we wish.

Definition. A set X with a distance function d(x, y) for which the following properties
(i) d(x, y) ≥ 0 for all x, y ∈ X, d(x, x) = 0 and d(x, y) > 0 for x ≠ y,
(ii) d(x, y) = d(y, x) for all x, y ∈ X,
(iii) d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X
hold is called a metric space.

Example. The n-dimensional Euclidean space (R^n, d(x, y) = |x − y| = ((x1 − y1)² + · · · + (xn − yn)²)^(1/2)) is a metric space.

Example. The plane R² with the usual distance d(x, y) = |x − y|. Another metric on R² is the Manhattan or taxi metric d(x, y) = |x1 − y1| + |x2 − y2|.

Example. The set C([0, 1]) of all continuous functions x(t) on the interval [0, 1] with the distance d(x, y) = max_t |x(t) − y(t)| is a metric space.

Definition. A Cauchy sequence in a metric space (X, d) is a sequence (xn) with the property that for any ε > 0 there exists n0 such that d(xn, xm) ≤ ε for n, m ≥ n0. A metric space in which every Cauchy sequence converges to a limit is called complete.

Example. The Euclidean space R^n is complete. The set of rational numbers with the usual distance (Q, d(x, y) = |x − y|) is not complete.

Definition. A map φ : X → X is called a contraction if there exists λ < 1 such that d(φ(x), φ(y)) ≤ λ · d(x, y) for all x, y ∈ X. The map φ shrinks the distance of any two points by the contraction factor λ.

Example. The map φ(x) = x/2 + (1, 0) is a contraction on R².

Example. The map φ(x)(t) = sin(t)x(t) + t is a contraction on C([0, 1]) because |φ(x)(t) − φ(y)(t)| = |sin(t)| · |x(t) − y(t)| ≤ sin(1) · |x(t) − y(t)|.
Example. The space C[0, 1] is complete: given a Cauchy sequence xn, the numbers xn(t) form a Cauchy sequence in R for every t. Therefore xn(t) converges pointwise to a function x(t). This function is continuous: take ε > 0. For large n, |x(t) − xn(t)| ≤ ε/3 and |xn(s) − x(s)| ≤ ε/3, and |xn(t) − xn(s)| ≤ ε/3 if |t − s| is small. If s is close to t, then

|x(t) − x(s)| ≤ |x(t) − xn(t)| + |xn(t) − xn(s)| + |xn(s) − x(s)| ≤ ε

by the triangle inequality.

Theorem 4.19.5 (Banach's fixed point theorem). A contraction φ in a complete metric space (X, d) has exactly one fixed point in X.

Proof. (i) We first show by induction that

d(φ^n(x), φ^n(y)) ≤ λ^n · d(x, y)

for all n.
(ii) Using the triangle inequality and (i), we get for all x ∈ X

d(x, φ^n x) ≤ Σ_{k=0}^{n−1} d(φ^k x, φ^{k+1} x) ≤ Σ_{k=0}^{n−1} λ^k d(x, φ(x)) ≤ (1/(1 − λ)) · d(x, φ(x)) .

(iii) For all x ∈ X the sequence xn = φ^n(x) is a Cauchy sequence because by (i) and (ii),

d(xn, x_{n+k}) ≤ λ^n · d(x, xk) ≤ λ^n · (1/(1 − λ)) · d(x, x1) .

By completeness of X it has a limit x̃ which is a fixed point of φ.
(iv) There is only one fixed point. Assume there were two fixed points x̃, ỹ of φ. Then

d(x̃, ỹ) = d(φ(x̃), φ(ỹ)) ≤ λ · d(x̃, ỹ) .

This is impossible unless x̃ = ỹ.
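Banach's theorem is constructive: iterating the contraction from any starting point converges geometrically, with rate λ, to the fixed point. A small sketch using the contraction φ(x) = x/2 + (1, 0) on R² from the examples above, whose fixed point is (2, 0):

```python
def banach_iterate(phi, x, n):
    """Iterate a contraction n times; by Banach's fixed point theorem
    the orbit converges to the unique fixed point."""
    for _ in range(n):
        x = phi(x)
    return x

# phi contracts R^2 with factor 1/2; after 60 steps the error is ~2^-60.
fx, fy = banach_iterate(lambda p: (p[0] / 2 + 1, p[1] / 2), (0.0, 5.0), 60)
```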
Chapter 5

Selected Topics

5.1 Percolation

Definition. Let ei be the standard basis in the lattice Zd. Denote with Ld the Cayley graph of Zd with the generators A = {e1, . . . , ed}. This graph Ld = (V, E) has the lattice Zd as vertices. The edges or bonds in that graph are straight line segments connecting neighboring points x, y, that is points satisfying |x − y| = Σ_{i=1}^d |xi − yi| = 1.

Definition. We declare each bond of Ld to be open with probability p ∈ [0, 1] and closed otherwise. Bonds are open or closed independently of all other bonds. The product measure Pp is defined on the probability space Ω = Π_{e∈E} {0, 1} of all configurations. We denote expectation with respect to Pp with Ep[·].

Definition. A path in Ld is a sequence of vertices (x0, x1, . . . , xn) such that each (xi, xi+1) is a bond of Ld. Such a path has length n and connects x0 with xn. A path is called open if all its edges are open and closed if all its edges are closed. Two subgraphs of Ld are disjoint if they have no edges and no vertices in common.

Definition. Consider the random subgraph of Ld containing the vertex set Zd and only open edges. The connected components of this graph are called open clusters. If it is finite, an open cluster is also called a lattice animal. Call C(x) the open cluster containing the vertex x. By translation invariance, the distribution of C(x) is independent of x and we can take x = 0, for which we write C(0) = C.
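The open cluster C of the origin can be sampled directly. A minimal sketch (not from the text): grow C by breadth-first search on a finite box of L², opening each bond independently with probability p the first time it is examined; the box size and names are illustrative:

```python
import random
from collections import deque

def open_cluster(p, n, seed=0):
    """Open cluster of the origin for bond percolation on the box [-n, n]^2
    of L^2. Bonds are opened independently with probability p, sampled
    lazily; bonds leaving the box are ignored."""
    rng = random.Random(seed)
    state = {}                                   # bond -> open?

    def is_open(x, y):
        key = (min(x, y), max(x, y))             # undirected bond
        if key not in state:
            state[key] = rng.random() < p
        return state[key]

    cluster, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if max(abs(y[0]), abs(y[1])) <= n and y not in cluster and is_open(x, y):
                cluster.add(y)
                queue.append(y)
    return cluster
```

For p = 0 the cluster is just the origin; for p = 1 it fills the whole box.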
We can therefore define pc = inf{p ∈ [0. There exists a critical value pc = pc (d) such that θ(p) = 0 for p < pc and θ(p) > 0 for p > pc . If the origin is in an infinite ′ cluster of Zd . Definition. Any configuration ′ in Ld projects then to a configuration in Ld .1. The connective constant of Ld is defined as λ(d) = lim σ(n)1/n . Selected Topics Figure. Remark. 1]  θ(p) > 0 }. Definition. The value d 7→ pc (d) is nonincreasing with respect to the dimension pc (d + 1) ≤ pc (d).284 Chapter 5. m < n} . The onedimensional case d = 1 is not interesting because pc = 1 there. Let σ(n) be the number of selfavoiding paths in Ld which have length n. The function p 7→ θ(p) is nondecreasing and θ(0) = 0. Proof. Interesting phenomena are only possible in dimensions d > 1. n→∞ . Lemma 5. A lattice animal. Define the percolation probability θ(p) being the probability that a given vertex belongs to an infinite open cluster. θ(p) = P[C = ∞] = 1 − ∞ X P[C = n] . n=1 One of the goals of bond percolation theory is to study the function θ(p). θ(1) = 1. then it is also in an infinite cluster of Zd . ′ The graph Zd can be embedded into the graph Zd for d < d′ by realizing Zd ′ as a linear subspace of Zd parallel to a coordinate plane.1. The planar case d = 2 is already very interesting. A selfavoiding random walk in Ld is the process ST obtained by stopping the ordinary random walk Sn with stopping time T (ω) = inf{n ∈ N  ω(n) = ω(m).
then 0 < λ(d)−1 ≤ pc (d) ≤ pc (2) < 1 . Denote by L2∗ the dual graph of L2 which has as vertices the faces of L2 and as vertices pairs of faces which are adjacent. c4 = 100. .2 (BroadbentHammersley theorem). Consult [64] for more information on the selfavoiding random walk. Let ρ(n) denote the number of closed circuits in the dual which have length n and which contain in their interiors the origin of L2 . 2. Let N (n) ≤ σ(n) be the number of open selfavoiding paths of length n in Ln . For example. Percolation Remark. c5 = 284. .1. Each such circuit contains a selfavoiding walk of length n − 1 starting from a vertex of the form (k + 1/2. 1/2). where 0 ≤ k < n. This shows that pc (d) ≥ λ(d)−1 . (i) pc (d) ≥ λ(d)−1 . This defines bond percolation on L2∗ . c2 = 12. c7 = 2172.1. Since any such path is open with probability pn . we have ρ(n) ≤ nσ(n − 1) . If the origin is in an infinite open cluster. But one has the elementary estimate d < λ(d) < 2d − 1 because a selfavoiding walk can not reverse direction and so σ(n) ≤ 2d(2d − 1)n−1 and a walk going only forward in each direction is selfavoiding. it is known that λ(2) ∈ [2. c6 = 780. The exact value of λ(d) is not known.6381585. (ii) pc (2) < 1. We can realize the vertices as Z2 + (1/2. . Theorem 5. The fact that the origin is in the interior of a closed circuit of the dual lattice if and only if the open cluster at the origin is finite follows from the Jordan curve theorem which assures that a closed path in the plane divides the plane into two disjoint subsets. 1/2).285 5. . c3 = 36.62002. If d > 1. there must exist open paths of all lengths beginning at the origin so that θ(p) ≤ Pp [N (n) ≥ 1] ≤ Ep [N (n)] = pn σ(n) = (pλ(d) + o(1))n which goes to zero for p < λ(p)−1 . The number cn of selfavoiding walks of length n in L2 is for small values c1 = 4. we have Ep [N (n)] = pn σ(n) . Proof. Since the number of such paths γ is at most nσ(n − 1). if it crosses a closed edge. 
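The counts cn of self-avoiding walks quoted above can be reproduced by brute force for small n. A minimal sketch (mine), enumerating walks in L² by depth-first search with backtracking:

```python
def count_saw(n):
    """Number of self-avoiding walks of length n in L^2 starting at the
    origin, by depth-first enumeration (feasible only for small n)."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)    # backtrack
        return total

    return extend((0, 0), {(0, 0)}, n)
```

The growth rate count_saw(n)^(1/n) of these counts approaches the connective constant λ(2).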
Since there is a bijective relation between the edges of L2 and L2∗ and we declare an edge of L2∗ to be open if it crosses an open edge in L2 and closed.69576] and numerical estimates makes one believe that the real value is 2.
Pp ). We will see below that even pc (2) < 1 − λ(2) known that pc (2) = 1/2. . where d Ω = {0. γ We have therefore P[C = ∞] = P[no γ is closed] ≥ 1 − X γ P[γ is closed] ≥ 1/2 so that pc (2) ≤ δ < 1. It is conjectured that the following critical exponents γ = β = δ −1 = log χ(p) pրpc log p − pc  log θ(p) lim pցpc log p − pc  log Ppc [C ≥ n ].286 Chapter 5. Furthermore. this sum goes to zero if q → 0 so that we can find 0 < δ < 1 such that for p > δ X P[γ is closed] ≤ 1/2. Selected Topics and with q = 1 − p X γ P[γ is closed] ≤ ∞ X n=1 q n nσ(n − 1) = ∞ X qn(qλ(2) + o(1))n−1 n=1 which is finite if qλ(2) < 1. For p > pc . − lim n→∞ log n − lim exist. 1}L is the set of configurations with product σalgebra A and d product measure Pp = (p. The parameter set p < pc is called the subcritical phase. It is however Definition. The answer is known to be no in the case d = 2 and generally believed to be no for d ≥ 3. . It is known that χ(p) < ∞ for p < pc but only conjectured that χf (p) < ∞ for p > pc . one would like to know the mean size of the finite clusters χf (p) = Ep [C  C < ∞] . For p near pc it is believed that the percolation probability θ(p) and the mean size χ(p) behave as powers of p − pc . −1 Remark. one is also interested in the mean size of the open cluster χ(p) = Ep [C] . Percolation deals with a family of probability spaces (Ω. An interesting question is whether there exists an open cluster at the critical point p = pc . For p < pc . the set p > pc is the supercritical phase. Definition. A. 1 − p)L .
Pp ). if ω(e) ≤ ω ′ (e) for all bonds e ∈ L2 . then Ep [X] ≤ Eq [X] if p ≤ q. we prove the claim first for random variables X which depend only on n edges e1 . The following correlation inequality is named after Fortuin. A. P) increasing if ω ≤ ω ′ implies X(ω) ≤ X(ω ′ ). Percolation Definition.287 5. we can write Ep [X] = pX(1)+ d (1 − p)X(0). As usual. It is called decreasing if −X is increasing. Pq )∩L1 (Ω. There exists a natural partial ordering in Ω coming from the ordering on {0. Y ∈ L2 (Ω. If X depends only on a single bond e. . If X is a increasing random variable in L1 (Ω. . we can write it as a sum X = i=1 Xi of variables Xi which depend only on one bond and get again n X d (Xi (1) − Xi (0)) ≥ 0 .4 (FKG inequality). we have dp Ep [X] = X(1) − X(0) ≥ 0 which gives Ep [X] ≤ Eq [X] for p ≤ q. this notion can also be defined for measurable sets A ∈ A: a set A is increasing if 1A is increasing.1. We have (X(ω) − X(ω ′ )(Y (ω) − Y (ω ′ )) ≥ 0 . en and proceed by induction. the claim follows. Ep [X] = dp i=1 In general we approximate every random variable in L1 (Ω.1. Pp ) ∩ L1 (Ω. Kasterleyn and Ginibre (1971). Because X is assumed to be increasing. Lemma 5. As in the proof of the above lemma. For increasing random variables X. . . Theorem 5. we have Ep [XY ] ≥ Ep [X] · Ep [Y ] . Proof. Pp ). Since Ep [Xi ] → Ep [X] and Eq [Xi ] → Eq [X].3. (i) The claim. Proof.1. We call a random variable X on (Ω. 1}: we say ω ≤ ω ′ . If X depends only Pd on finitely many bonds. e2 . if X and Y only depend on one edge e. Pq ) by step functions which depend only on finitely many coordinates Xi .
Y depending on n edges e1 . Therefore X (X(ω) − X(ω ′ ))(Y (ω) − Y (ω ′ ))Pp [ω(e) = σ]Pp [ω(e) = σ ′ ] 0 ≤ σ. both factors are nonnegative since X. . we get also in L1 or the L2 norm in (Ω. Remark. . For fixed e1 . e2 . Since Xn = E[XAn ] and Yn = E[XAn ] are martingales which are bounded in L2 (Ω. . Pp ) Xn Yn − XY 1 ≤ ≤ ≤ (Xn − X)Yn 1 + X(Yn − Y )1 Xn − X2 Yn 2 + X2 Yn − Y 2 C(Xn − X2 + Yn − Y 2 ) → 0 where C = max(X2 . en . Doob’s convergence theorem (3. We know from (ii) that Ep [Xn Yn ] ≥ Ep [Xn ]Ep [Yn ]. . . . .4) implies that Xn → X and Yn → Y in L2 and therefore E[Xn ] → E[X] and E[Yn ] → E[Y ]. if 0 = ω(e) = ω ′ (e) = 1 both factors are nonpositive since X. we get Pp [ k \ i=1 Ai ] ≥ k Y i=1 Pp [Ai ] . By the tower property of conditional expectation. Y are increasing. The random variables Xk = Ep [XAk ].288 Chapter 5. Let Ak = A(e1 . ek ) be the σalgebra generated by functions depending only on the edges ek . Then Ai are increasing events and so after applying the inequality k times. (iii) Let X. Pp ).σ′ ∈{0. We claim that it holds also for X. . By the Schwarz inequality. then Pp [A ∩ B] ≥ Pp [A] · Pp [B].1} = 2(Ep [XY ] − Ep [X]Ep [Y ]) . (ii) Assume the claim is known for all functions which depend on k edges with k < n. en−1 . . we have (XY )n−1 ≥ Xn−1 Yn−1 and so Ep [XY ] = Ep [(XY )n−1 ] ≥ Ep [Xn−1 Yn−1 ] . By induction. . Yn = Ep [Y An ]. . This means Ep [Xn Yn ] → Ep [XY ]. Ep [Xn−1 Yn−1 ] ≥ Ep [Xn−1 ]Ep [Yn−1 ] . Selected Topics ′ since the left hand side is 0 if ω(e) = ω (e) and if 1 = ω(e) = ω ′ (e) = 0. B are increasing events in Ω. Y be arbitrary and define Xn = Ep [XAn ]. the right hand side is Ep [X]Ep [Y ]. ek and are increasing. . Example. Let Γi be families of paths in Ld∗ and let Ai be the event that some path in Γi is open. . Yk = Ep [Y Ak ] depend only on the e1 . .5. . Y are increasing. A. Y 2 ) is a constant. . It follows immediately that if A.
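For events depending on finitely many bonds, the FKG inequality can be checked by exact enumeration of all configurations. A small sketch (the names and the chosen events are mine) with two increasing indicator variables on three bonds:

```python
from itertools import product

def expectation(p, f, m):
    """Exact E_p[f] for a function f of m independent bonds,
    each open (state 1) with probability p."""
    total = 0.0
    for omega in product((0, 1), repeat=m):
        weight = 1.0
        for s in omega:
            weight *= p if s == 1 else 1 - p
        total += weight * f(omega)
    return total

# Two increasing events on three bonds:
A = lambda w: 1 if w[0] == 1 else 0                      # bond 0 is open
B = lambda w: 1 if w[0] == 1 or (w[1] and w[2]) else 0   # some path is open

p = 0.3
lhs = expectation(p, lambda w: A(w) * B(w), 3)   # P_p[A and B]
rhs = expectation(p, A, 3) * expectation(p, B, 3)
```

Here A ⊂ B, so P_p[A ∩ B] = P_p[A] = 0.3, while P_p[A] · P_p[B] = 0.3 · 0.363 is smaller, consistent with FKG.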
it follows that θp > 0. (i) We define a new probability space. can be embedded in one probability space d d ([0. Theorem 5. For N large enough. Then d Pp [A] = Ep [N (A)] . Corollary 5. Proof. Definition.6 (Russo’s formula). Since FN and GN are both increasing. We say that an edge e ∈ Ld is pivotal for the pair (A.1. 1]L ). If (1 − p)λ(2) < 1. dp where N (A) is the number of edges which are pivotal for A. Percolation We show now. B([0. A. We deduce θ(p) = Pp [C = ∞] = Pp [FN ∩ GN ] ≥ Pp [FN ] · Pp [GN ] . The family of probability spaces (Ω. if (1−p)λ(2) < 1 or p < (1 − λ(2)−1 ) which proves the claim. Given A ∈ A and ω ∈ Ω. P) . we have therefore Pp [GN ] ≥ 1/2. where ωe is the unique configuration which agrees with ω except at the edge e.5. Pp ). Let A be an increasing event depending only on finitely many edges of Ld . ω) if 1A (ω) 6= 1A (ωe ).1. We know that FN ∩ GN ⊂ {C = ∞}. how this inequality can be used to give an explicit bound for the critical percolation probability pc in L2 . the correlation inequality says Pp [FN ∩ GN ] ≥ Pp [FN ] · Pp [GN ].289 5. define the events FN = GN = {∃ no closed path of length ≤ N in Ld∗ } {∃ no closed path of length > N in Ld∗ } . 1]L . then we know that Pp [GcN ] ≤ ∞ X n=N (1 − p)n nσ(n − 1) which goes to zero for N → ∞. Since also Pp [FN ] > 0. Proof.1. The following corollary belongs still to the theorem of BroadbentHammersley. . pc (2) ≤ (1 − λ(2)−1 ) . Given any integer N ∈ N.
then Pp [A] is a function of a finite set {p(fi ) }m i=1 of edge probabilities.p) ∂p(fi ) Pp [fi pivotal for A] i=1 = Ep [N (A)] . If A depends on finitely many edges. Selected Topics Ld d where P is the product measure dx . Divide both sides by (p′ (f ) − p(f )) and let p′ (f ) → p(f ). given p ∈ [0. Since A is increasing. Assume p and p′ differ only at an edge f such that p(f ) ≤ p′ (f ). ∂p(f ) (iii) The claim. weQcan define configurations with a large class of probability measures Pp = e∈Ld (p(e). 1]L . The claim is obtained by making F larger and larger filling out E. Then {ηp ∈ A} ⊂ {ηp′ ∈ A} so that Pp′ [A] − Pp [A] = = = P[ηp′ ∈ A] − P[ηp ∈ A] / A] P[ηp′ ∈ A. if A depends on finitely many edges. 1 − p(e)) with one probability space and we have Pp [A] = P[ηp ∈ A] .p. ηp ∈ ′ (p (f ) − p(f ))Pp [f pivotal for A] . This gives ∂ Pp [A] = Pp [f pivotal for A] . (iv) The general claim. (ii) Derivative with respect to one p(f ). define for every finite set F ⊂ E pF (e) = p + 1{e∈F } δ where 0 ≤ p ≤ p + δ ≤ 1.. we have Pp+δ [A] ≥ PpF [A] and therefore X 1 1 (Pp+δ [A] − Pp [A]) ≥ (PpF [A] − Pp [A]) → Pp [e pivotal for A] δ δ e∈F as δ → 0. Like this. we get configurations ηp (e) = 1 if η(e) < p(e) and ηp = 0 else. 1]L and p ∈ [0.. In general.p. . Given a configuration η ∈ [0. we get a configuration in Ω by defining ηp (e) = 1 if η(e) < p and d ηp = 0 else..290 Chapter 5. 1].. More generally. The chain rule gives then d Pp [A] = dp = m X i=1 m X ∂ Pp [A]p=(p.
Proof. An edge e ∈ F is pivotal for A if and only if A \ {e} has exactly k − 1 open edges.1.291 5. Lemma 5. . then Pp [M < ∞] = 1. x) = {y ∈ Zd  x − y = di=1 xi  ≤ n} be the ball of radius n around x in Zd and let An be the event that there exists an open path joining the origin with some vertex in δS(n. em } ⊂ E be a finite set in of edges. (Exponential decay of radius of the open cluster) If p < pc . l l=k Remark. We get X Ep [C] ≤ Ep [C  M = n] · Pp [M = n] n ≤ ≤ X n X n S(n. S(n.1. 0). we obtain by integration m X m Pp [A] = pl (1 − p)m−1 . e2 . Let S(n.1.8. then Pp [A] need no more be differentiable for all values of p. If A does no more depend on finitely many edges. The mean size of the open cluster is χ(p) = Ep [C]. Definition. Clearly. k−1 dp e∈F Since we know P0 [A] = 0. . For p < pc . We have m−1 Pp [e is pivotal] = pk−1 (1 − p)m−k k−1 so that by Russo’s formula X d m−1 Pp [A] = Pp [e is pivotal] = m pk−1 (1 − p)m−k . 0) ≤ Cd · (n + 1)d with some constant Cd . the mean size of the open cluster is finite χ(p) < ∞. A = {the number of open edges in F is ≥ k} . Let F = {e1 . Theorem 5. . The proof of this theorem is quite involved P and we will not give the full argument. . . 0)Pp [An ] Cd (n + 1)d e−nψp < ∞ . there exists ψp such that Pp [An ] < e−nψp . if p < pc . By definition of pc . Let M = max{n  An occurs }.7 (Uniqueness). Percolation Example.
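Russo's formula can be verified numerically on this example. A sketch (mine) computing, by exact enumeration, both P_p[A] and E_p[N(A)] for A = {at least k of m bonds open}, and comparing a finite difference of p ↦ P_p[A] with the pivotal-edge expectation; m = 3 and k = 2 are illustrative choices:

```python
from itertools import product

def prob_and_pivotal(p, m, k):
    """Exact P_p[A] and E_p[N(A)] for A = {at least k of m bonds open}.
    A bond e is pivotal iff exactly k-1 of the other m-1 bonds are open."""
    prob, pivotal = 0.0, 0.0
    for omega in product((0, 1), repeat=m):
        weight = 1.0
        for s in omega:
            weight *= p if s == 1 else 1 - p
        if sum(omega) >= k:
            prob += weight
        pivotal += weight * sum(1 for e in range(m)
                                if sum(omega) - omega[e] == k - 1)
    return prob, pivotal

p, h = 0.4, 1e-6
P_minus, _ = prob_and_pivotal(p - h, 3, 2)
P_plus, _ = prob_and_pivotal(p + h, 3, 2)
_, N = prob_and_pivotal(p, 3, 2)
derivative = (P_plus - P_minus) / (2 * h)   # central difference for d/dp P_p[A]
```

For m = 3, k = 2 one has P_p[A] = 3p² − 2p³, so the derivative is 6p(1 − p), which equals E_p[N(A)] = 3 · 2p(1 − p).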
We are concerned with the probabilities gp (n) = Pp [An ]. One needs to show then that Ep [N (An ) An ] grows roughly linearly when p < pc . Selected Topics Proof. The number of open clusters per vertex is defined as κ(p) = Ep [C−1 ] = ∞ X 1 Pp [C = n] . gp (n) p By integrating up from α to β. Definition. . Sine An are increasing events. where N (An ) is the number of pivotal edges in An . n n=1 Let Bn the box with side length 2n and center at the origin and let Kn be the number of open clusters in Bn .292 Chapter 5. Russo’s formula gives gp′ (n) = Ep [N (An )] . The following proposition explains the name of κ. we get gα (n) = gβ (n) exp(− Z β 1 Ep [N (An )  An ] dp) p α ≤ gβ (n) exp(− ≤ exp(− Z Z β Ep [N (An )  An ] dp) α β α Ep [N (An )  An ] dp) . This is quite technical and we skip it. We have gp′ (n) = X Pp [e pivotal for A] e = X1 e = X1 e = Pp [A ∩ {e pivotal for A}] Pp [A ∩ {e pivotal for A}A] · Pp [A] p Ep [N (A)  A] · Pp [A] X1 p Ep [N (A)  A] · gp (n) e so that p Pp [e open and pivotal for A] X1 e = p X1 e = p gp′ (n) 1 = Ep [N (An )  An ] .
P P P (v) x∈B(n) Γn (x) ≤ x∈B(n) Γ(x) + x∼δBn Γn (x). there exists a unique infinite open cluster. Define Γ(x) = C(x)−1 . Percolation Proposition 5. We mention: The uniqueness of the infinite open cluster: For p > pc and if θ(pc ) > 0 also for p = pc . Follows from (ii) and (iii). In L1 (Ω. Kn (iv) lim inf n→∞ B ≥ κ(p) almost everywhere. the functions θ(p).9. There would be much more to say in percolation theory. θ(p) is continuous from the right.1. P (iii) B1n  x∈Bn Γ(x) → Ep [Γ(0)] = κ(p). Remark. pc ). Regularity of some functions θ(p) For p > pc . (vi) 1 Bn  P x∈Bn Γn (x) ≤ 1 Bn  P x∈Bn Γ(x) + δBn  Bn  . Proof. P (i) x∈Bn Γn (x) = Kn . χf (p).293 5. The claim follows from the ergodic theorem. It is even known that κ and the mean size of the open cluster χ(p) are real analytic functions on the interval [0. Proof. each open cluster contributes 1 to the left hand side. Let Cn (x) be the connected component of the open cluster in Bn which contains x ∈ Zd . 1]. . It is known that function κ(p) is continuously differentiable on [0. Follows from (i) and the trivial fact Γ(x) ≤ Γn (x). n Proof. κ(p) are differentiable. A.1. The critical probability in two dimensions is 1/2. P Kn (ii) B ≥ B1n  x∈Bn Γ(x) where Γ(x) = C(x)−1 . Proof. Γ(x) are bounded random variables which have a distribution which is invariant under the ergodic group of translations in Zd . If Σ is an open cluster of Bn . Thus. Pp ) we have Kn /Bn  → κ(p) . n Proof. In general. where x ∼ Y means that x is in the same cluster as one of the elements y ∈ Y ⊂ Zd . then each vertex x ∈ Σ contributes Σ−1 to the left hand side.
)  ∞ X k=−∞ x2k = 1 } of the form Lω u(n) = X u(m) + Vω (n)u(n) = (∆ + Vω )(u)(n) . x−1 . This measure is defined as the functional C(R) → R. . An important role will play its derivative Z dµ(λ) ′ F (z) = . There exists λ0 such that for λ > λ0 . the operator Lω = ∆ + λ · Vω has pure point spectrum for almost all ω.2 Random Jacobi matrices Definition. x2 . Definition. A bounded linear operator L has pure point spectrum. 1]. m−n=1 where Vω (n) are IID random variables in L∞ . E) = [(Lω − E)−1 ]mn . (y − z)2 R . . we mostly write the elements ω of the probability space (Ω. We will give a recent elegant proof of AizenmanMolchanov following [98].1 (Fr¨ohlichSpencer).294 Chapter 5. Selected Topics 5. A Jacobi matrix with IID potential Vω (n) is a bounded selfadjoint operator on the Hilbert space l2 (Z) = {(. In this section. Given E ∈ C \ R. These operators are called discrete random Schr¨ odinger operators. x1 . . Define the function Z dµ(y) F (z) = R y−z It is a function on the complex plane and called the Borel transform of the measure µ. A random operator has pure point spectrum if Lω has pure point spectrum for almost all ω ∈ Ω. Let V (n) are IID random variables with uniform distribution on [0. P) as a lower index.2. if there exists a countable set of eigenvalues λi with eigenfunctions φi such that Lφi = λi φi and φi span the Hilbert space l2 (Z). define the Green function Gω (m. We are interested in properties of L which hold for almost all ω ∈ Ω. n. Our goal is to prove the following theorem: Theorem 5. A. . Let µ = µω be the spectral measure of the vector e0 . f 7→ f (Lω )00 by f (Lω )00 = E[f (L)00 ]. . x0 . Definition.
Random Jacobi matrices Definition. 1 + αF (z) We have to show that for any continuous function f : C → C Z Z Z f (x) dµα (x) dα = f (x) dE(x) R R and it is enough to verify this for the dense set of functions {fz (x) = (x − z)−1 − (x + i)−1 z ∈ C \ R} . R Contour integration in the upper half plane (now with α) implies that R hz (α) dα = 0 for Im(z) < 0 and 2πi for Im(z) > 0. One calls Lα a rankone perturbation of L. when solved for Fα . . The second resolvent formula gives (Lα − z)−1 − (L − z)−1 = −α(Lα − z)−1 P0 (L − z)−1 .2. if ±Im(z) > 0. Given any Jacobi matrix L. where P0 is the projection onto the onedimensional space spanned by δ0 . the AronzajnKrein formula Fα (z) = F (z) .295 5. let Lα be the operator L + αP0 . This means that hz (α) has either two poles in the lower half plane if Im(z) < 0 or one in each half plane if Im(z) > 0. Theorem 5. we obtain Fα (z) − F (z) = −αFα (z)F (z) which gives.2. then ±ImF (z) > 0 so that ±ImF (z)−1 < 0. R Contour integration in the upper half plane gives R fz (x) dx = 0 for Im(z) < 0 and 2πi for Im(z) > 0.2 (Integral formula of JavrjanKotani). Looking at 00 entry of this matrix identity. On the other hand Z fz (x)dµα (x) = Fα (z) − Fα (−i) which is by the AronzajnKrain formula equal to hz (α) := 1 1 − . The average over all specral measures dµα is the Lebesgue measure: Z dµα dα = dE . α + F (z)−1 α + F (−i)−1 Now. R Proof.
Define for α 6= 0 the sets Sα = Pα = L = {x ∈ R  F (x + i0) = −α−1 . then lim ǫ ImFα (E + iǫ) = (α2 F ′ (E))−2 ǫ→0 since F (E +iǫ) = −1/α+iǫF ′(x)+o(ǫ) if F ′ (E) < ∞ and ǫ−1 Im(1+αF ) → ∞ if F ′ (E) = ∞ which means ǫ1 + αF −1 → 0 and since F → −1/α. dµ({E0 }) = limǫ→0 ImF (E0 + iǫ)ǫ. F ′ (x) = ∞ } {x ∈ R  F (x + i0) = −α−1 . Proof. (Facts about Borel transform) For ǫ → 0. . dµac (E) = π −1 ImF (E + i0) dE. Definition. One has (dµα )sc (Sα ) = 1 and (dµα )ac (L) = 1. a spectral averaging argument will then lead to pure point spectrum also for α = 0. one gets ǫF/(1 + αF ) → 0. Because Fα = F/(1 + αF ). dµsing ({E  ImF (E + i0) = ∞ }) = 1.12.296 Chapter 5.2.3. The function F is related to this decomposition in the following way: Proposition 5. F ′ (x) < ∞ } {x ∈ R  ImF (x + i0) 6= 0 } Lemma 5. we know that Fα (E + i0) = ∞ is equivalent to F (E + i0) = −1/α. The sets Pα .2. In the case of IID potentials with absolutely continuous distribution.4. Sα . Selected Topics In theorem (2. The theorem of de la Vall´ee Poussin (see [92]) states that the set {E  Fα (E + i0) = ∞ } has full (dµα )sing measure. The following criterion of SimonWolff [100] will be important.2). the measures π −1 ImF (E + iǫ) dE converges weakly to µ. we have seen that any Borel measure µ on the real line has a unique Lebesgue decomposition dµ = dµac + dµsing = dµac + dµsc + dµpp . (AronzajnDonoghue) The set Pα is the set of eigenvalues of Lα . L are mutually disjoint. If F (E + i0) = −1/α.
2.2. the sum P −1 2 n∈Z (L − E − iǫ)0n  increases monotonically as ǫ ց 0 and converges point wise to F ′ (E). because replacing a general α ∈ C with the nearest point in [0. such that for all α. the Lebesgue measure of S = {E  F ′ (E) = ∞ } is zero. we have X 2 (L − E − iǫ)−1 0n  = (L − E − iǫ)−1 δ0 2 = [(L − E − iǫ)−1 (L − E + iǫ)−1 ]00  Z dµ(x) (x − E)2 + ǫ2 R n∈Z = from which the monotonicity and the limit follow. b]) = µα (L ∩ [a.2. Lemma 5. This means by the integral formula that dµα (S) = 0 for almost all α. For any interval [a. 1] only decreases the .2.6. β ∈ C Z 0 1 x − α 1/2 x − β −1/2 dx ≥ C Z 0 1 x − β−1/2 dx . The AronzajnDonoghue lemma (5.2. the random operator L has pure point spectrum if F ′ (E) < ∞ for almost almost all E ∈ [a. b]. Lemma 5.5 (SimonWolff criterion). 1]. Proof. b] ⊂ R.4) implies µα (Sα ∩ [a. Proof. There exists a constant C.297 5. Proof. For ǫ > 0. We can assume without loss of generality that α ∈ [0. (Formula of SimonWolff) For each E ∈ R.7. b]) = 0 so that µα has only point spectrum. Random Jacobi matrices Theorem 5. By hypothesis.
we have therefore to show that X sup E[( G(n. Let f. we have by SimonWolff only to show that F ′ (E) < ∞ for almost all E. But then Z 0 1 1/2 x − α x − β 1 dx ≥ ( )1/2 4 −1/2 The function h(β) = R 1 0 R1 3/4 Z 1 3/4 x − β−1/2 dx . z)2 )1/4 ≤ G(n. 0.2. Since ∆ < 2d. Therefore C := inf h(β) > 0 . Selected Topics left hand side.2. In order to prove theorem (5. 0.1): Proof. Since [(a∆)m ]ij = 0 for m < i − j we have [(a∆)m ]ij = ∞ X m=i−j [(a∆)m ]ij ≤ ∞ X (2da)m . n n .298 Chapter 5. z)2 )1/4 ] < ∞ . 1/2]. This will be achieved by proving E[F ′ (E)1/4 ] < ∞. z)1/2 . β∈C The next lemma is an estimate for the free Laplacian.8. 0. we can also assume that α ∈ [0.2. [(1 − a∆)−1 ]ij ≤ (2da)j−i (1 − 2da)−1 . By the formula of SimonWolff. continuous and satisfies h(∞) = 1/4. m=i−j We come now to the proof of theorem (5. Lemma 5. P∞ Proof.1). x − β−1/2 dx x − α1/2 x − β−1/2 dx is nonzero. Because the symmetry α 7→ 1 − α leaves the claim invariant. (1 − a∆)f ≤ g ⇒ f ≤ (1 − a∆)−1 g . we can write (1 − a∆)−1 = m=0 (a∆)m which is preserving positivity. z∈C n Since X X ( G(n. g ∈ l∞ (Z) be nonnegative and let 0 < a < (2d)−1 .
The aim is now to give an estimate for X kz (n) n∈Z which holds uniformly for Im(z) 6= 0. z) and kz (n) = E[gz (n)1/2 ].2. The independent random Q variables V (k) can be realized over the probability space Ω = [0. Proof. 0. 1]Z = k∈Z Ω(k). (i) E[λV (n) − z1/2 gz (n)1/2 ] ≤ δn. Random Jacobi matrices we have only to control the later the term. where A. Follows directly from (i) and (ii). Define gz (n) = G(n.0 + X kz (n + j) . (L − z)gz (n) = δn0 means (λV (n) − z)gz (n) = δn0 − X gz (n + j) . j=1 Proof. kz (n + j) + δn0 . j=1 Jensen’s inequality gives E[λV (n) − z1/2 gz (n)1/2 ] ≤ δn0 + X kz (n + j) . We can write gz (n) = A/(λV (n) + B).299 5. B are functions of {V (l)}l6=n . . (iv) (1 − Cλ1/2 ∆)k ≤ δn0 . Rewriting (iii). j=1 (ii) E[λV (n) − z1/2 gz (n)1/2 ] ≥ Cλ1/2 k(n) . We average now λV (n) − z1/2 gz (n)1/2 over Ω(n) and use an elementary integral estimate: Z Ω(n) λv − z1/2 A1/2 dv λv + B1/2 = ≥ = = A1/2 CA Z 0 1/2 1/2 Cλ 1 Z v − zλ−1 v + Bλ−1 −1/2 dv 1 Z 0 1 A/(λv + B)1/2 0 1/2 E[gz (n) v + Bλ−1 −1/2 dv ] = kz (n) . Proof. (iii) kz (n) ≤ (Cλ1/2 )−1 X j=1 Proof.
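The decay of the Green function behind these estimates can be observed on a single sample. A minimal sketch (not from the text): solve (L − z)u = δ0 for a finite random Jacobi matrix with the Thomas algorithm for tridiagonal systems; with a large coupling λ and z off the real axis, |G(n, 0, z)| falls off away from the origin. The box size, the coupling λ = 10 and the choice of z are illustrative assumptions:

```python
import random

def green_column(v, lam, z):
    """Solve (L - z)u = delta_0 for L = Delta + lam*V on a finite box,
    i.e. the Green function column G(n, 0, z). Thomas algorithm for the
    complex tridiagonal system; the delta sits at the central site."""
    m = len(v)
    diag = [lam * v[i] - z for i in range(m)]
    rhs = [0.0] * m
    rhs[m // 2] = 1.0
    cp, dp = [0j] * m, [0j] * m          # forward elimination
    cp[0], dp[0] = 1 / diag[0], rhs[0] / diag[0]
    for i in range(1, m):
        denom = diag[i] - cp[i - 1]      # off-diagonal entries of Delta are 1
        cp[i] = 1 / denom
        dp[i] = (rhs[i] - dp[i - 1]) / denom
    u = [0j] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):       # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

rng = random.Random(1)
v = [rng.random() for _ in range(101)]   # IID uniform potential on [0, 1]
z = 0.5 + 2.0j
u = green_column(v, 10.0, z)
```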
We can also take the estimator T (ω) which is the median of X1 (ω). From lemma (5. Xn on the probability space (Ω. its expectation with respect to the measure Pθ is denoted by Eθ [X].300 Chapter 5. the function f (x) = i=1 (xi − x)2 is minimized by the arithmetic mean of the data. This leads to extremization problems. The parameters θ are taken from a parameter space Θ. . Definition. We start with some terminology. . we want to estimate a quantity g(θ) using an estimator T (ω) = t(X1 (ω). the arithmetic mean. By SimonWolff. Because the set of random operators of Lα and L0 coincide on a set of measure ≥ 1 − 2α. The aim is to estimate continuous or discrete parameters for models in an optimal way. we can look at the estimator T (ω) = n1 j=1 Xi (ω). Pθ ) of probability spaces is called a statistical model. . Definition. A. Xn (ω)). If X is a random variable. If X is continuous. . . . Proof.3 Estimation theory Estimation theory is a branch of mathematical statistics. which is assumed to be a subset of R or Rk . . Proof. we have kz ∈ l∞ (Z). we get the estimate from (v). (vii) Pure point spectrum. . In that case one has of course Eθ [X] = Ω fθ (x) dx. Proof. xn . α α P (vi) For λ > 4C −2 . A. The median is a natural quantity because the function f (x) = ni=1 xi − x is minimized by the median. Example.2. its variance is Varθ [X] = Eθ [(X−Eθ [X])2 ]. . For Im(z) 6= 0. The arithmetic Pnmean is natural because for any data x1 . Since Cλ1/2 < 1/2.P . Pθ ). It allows to define the global expectation E[X] = Θ Eθ [X] dµ(θ). . Selected Topics 1/2 (v) Define α = Cλ . . If the quantity g(θ) = Eθ [Xi ] is the expectation Pn of the random variables. . we get also pure point spectrum of Lω for almost all ω. Xn (ω). we have 2 2 k(n) ≤ α−1 [(1 − ∆/α)−1 ]0n ≤ α−1 ( )n (1 − )−1 . Definition. then its probability Rdensity function is denoted by fθ . Example. a − x + b − x = . . . Given n independent and identically distributed random variables X1 . . A collection (Ω. 
we have pure point spectrum for Lα for almost all α. Proof. kz (n) ≤ α−1 (2d/α)n (1 − 2d/α)−1 . B) is called an a priori Rdistribution on Θ ⊂ R.8) and (iv). A probability distribution µ = p(θ) dθ on (Θ. 5. we get a uniform bound for n kz (n).
Part b) is the reason. Eθ [T ] = Pn j=1 Pn j=1 αi Xi (ω) with αi Eθ [Xi ] = Eθ [Xi ].3. Estimation theory b − a + C(x). With R an a priori distribution on Θ. A linear estimator T (ω) = 1 is unbiased for the estimator g(θ) = Eθ [Xi ].j 1 n(n − 1) Eθ [Xi ]2 Eθ [Xi2 ] − n n2 n−1 1 Eθ [Xi ]2 = (1 − )Eθ [Xi2 ] − n n n−1 Varθ [Xi ] . − X i )2 . Proof. we have f (x) = P> m D(xj ) which is minimized for C(x )+ x −x + j i n+1−i xj >xm j=1 P Pmxj <xm x = x . P i αi = Proposition Pn 5. Proposition 5. where C(x) is zero if a ≤ x ≤ b and C(x) = x − b if x b and D(x) = aP − x if x < a. a) Eθ [T ] = b) For T = 1 n P 1 n Pn j=1 (Xi i (Xi − m)2 = Varθ [T ] = g(θ). the estimator T = n1 j=1 (Xi − m)2 is unbiased. the estimator is called unbiased. Remark. = n = Eθ [Xi2 ] − Therefore n/(n − 1)T is the correct unbiased estimate.301 5.3. If the bias is zero. . why statisticians often take the average of n 2 (n−1) (xi −x) as an estimate for the variance of n data points xi with mean m if the actual mean value m is not known. IfP n = 2m + 1 is odd. Example. Proof. For g(θ) = Varθ [Xi ] and fixed mean m. one can define the global error B(T ) = Θ B(θ) dµ(θ). the estimator Pn Pn 1 1 2 T = n−1 i=1 (Xi − X) with X = n i=1 Xi is unbiased.3. xm+1 ]. If n = 2m. we have f (x) = j=1 xi −xn+1−i + xj >xm+1 C(xj )+ P m xj <xm−1 D(xj ) which is minimized for x ∈ [xm .2. we get Eθ [T ] = Eθ [Xi2 ] − Eθ [ 1 X Xi Xj ] n2 i.1. If the mean is unknown. Define the bias of an estimator T as B(θ) = Bθ [T ] = Eθ [T ] − g(θ) . The bias is also called the systematic error.
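The proposition can be verified exactly for a small discrete model. A sketch (mine): for IID fair coin flips (values 0/1, variance 1/4), enumerate all 2^n outcomes and compare the divisors n and n − 1:

```python
from itertools import product

def expected_estimator(n, divisor):
    """Exact expectation of sum_i (X_i - Xbar)^2 / divisor when X_1,...,X_n
    are IID fair coin flips (values 0/1, variance 1/4), by enumeration."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        mean = sum(x) / n
        total += sum((xi - mean) ** 2 for xi in x) / divisor
    return total / 2 ** n

n = 4
biased = expected_estimator(n, n)        # expectation is (n-1)/n * Var
unbiased = expected_estimator(n, n - 1)  # expectation is exactly Var = 1/4
```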
Definition. The expectation of the quadratic estimation error, Err_θ[T] = E_θ[(T − g(θ))²], is called the risk function or the mean square error of the estimator T. It measures the estimator performance. We have

  Err_θ[T] = Var_θ[T] + B_θ[T]² ,

where B_θ[T] is the bias. If T is unbiased, then Err_θ[T] = Var_θ[T].

Example. With T = Σ_i α_i X_i, where Σ_i α_i = 1, the risk function is

  Err_θ[T] = Var_θ[T] = Σ_i α_i² Var_θ[X_i] .

It is by Lagrange minimal for α_i = 1/n. The arithmetic mean is the "best linear unbiased estimator".

Definition. The minimax principle is the aim to find min_T max_θ R(θ, T). The Bayes principle is the aim to find min_T ∫_Θ R(θ, T) dµ(θ), where µ = p(θ) dθ is the a priori distribution on Θ.

Definition. The maximum likelihood function is L_θ(x_1, ..., x_n) := f_θ(x_1) · ... · f_θ(x_n). For discrete random variables, L_θ(x_1, ..., x_n) would be replaced by P_θ[X_1 = x_1, ..., X_n = x_n]. The maximum likelihood estimator t(x_1, ..., x_n) is defined as the maximizer of θ ↦ L_θ(x_1, ..., x_n); the maximum likelihood estimator is the random variable T(ω) = t(X_1(ω), ..., X_n(ω)). One also looks at the maximum a posteriori estimator, which is the maximizer of θ ↦ L_θ(x_1, ..., x_n) p(θ), where p(θ) dθ was the a priori distribution on Θ.

Example. Assume f_θ(x) = (1/2) e^{−|x−θ|}. The maximum likelihood function

  L_θ(x_1, ..., x_n) = 2^{−n} e^{−Σ_j |x_j − θ|}

is maximal when Σ_j |x_j − θ| is minimal, which means that t(x_1, ..., x_n) is the median of the data x_1, ..., x_n.
Example. The maximum likelihood estimator for θ = (m, σ²) for Gaussian distributed random variables with density f_θ(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)} has the maximum likelihood function maximized for

  t(x_1, ..., x_n) = ( (1/n) Σ_i x_i , (1/n) Σ_i (x_i − x̄)² ) .

Example. Assume f_θ(x) = θ^x e^{−θ}/x! is the probability density of the Poisson distribution. The maximum likelihood function

  L_θ(x_1, ..., x_n) = e^{log(θ) Σ_i x_i − nθ} / (x_1! · · · x_n!)

is maximal for θ = Σ_i x_i / n.

Definition. Define the Fisher information of a random variable X with density f_θ as

  I(θ) = ∫ (f_θ'(x)/f_θ(x))² f_θ(x) dx .

If θ is a vector, one defines the Fisher information matrix

  I_{ij}(θ) = ∫ (∂_{θ_i} f_θ)(∂_{θ_j} f_θ) / f_θ² · f_θ dx .

Lemma 5.3.3. I(θ) = Var_θ[f_θ'/f_θ].

Proof. E_θ[f_θ'/f_θ] = ∫ f_θ' dx = 0, so that Var_θ[f_θ'/f_θ] = E_θ[(f_θ'/f_θ)²] = I(θ).

Lemma 5.3.4. I(θ) = −E_θ[(log(f_θ))''].

Proof. Integration by parts gives

  E[(log(f_θ))''] = ∫ (log(f_θ))'' f_θ dx = − ∫ (log(f_θ))' f_θ' dx = − ∫ (f_θ'/f_θ)² f_θ dx .
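For the Poisson example, the maximizer can be verified numerically: the log-likelihood Σ_i x_i log θ − nθ (constants dropped) is concave in θ and peaks at the sample mean. A sketch in Python, where the sampler, sample size and grid are illustrative choices:

```python
import math, random

random.seed(2)

def sample_poisson(lmbda):
    # Knuth's product method; adequate for small lambda
    L, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

xs = [sample_poisson(3.0) for _ in range(500)]
xbar = sum(xs) / len(xs)

def log_lik(theta):
    # log L_theta up to the constant -sum(log x_i!)
    return sum(x * math.log(theta) for x in xs) - len(xs) * theta

grid = [0.5 + 0.01 * i for i in range(500)]   # theta in [0.5, 5.49]
theta_hat = max(grid, key=log_lik)            # grid maximizer, ~ xbar
```

Since the log-likelihood is concave, the grid maximizer lies within one grid step of the true maximizer, the sample mean.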
Definition. The score function for a continuous random variable is defined as the logarithmic derivative ρ_θ = f_θ'/f_θ. One has I(θ) = E_θ[ρ_θ²] = Var_θ[ρ_θ].

Example. If X is a Gaussian random variable with mean m and variance σ², the score function ρ_θ = f'/f = −(x − m)/σ² is linear and has variance 1/σ². The Fisher information I is therefore 1/σ². We see that Var[X] = 1/I. This is a special case n = 1, T = X, θ = m of the following bound:

Theorem 5.3.5 (Rao-Cramér inequality).

  Var_θ[T] ≥ (1 + B'(θ))² / (n I(θ)) .

In the unbiased case, one has Err_θ[T] ≥ 1/(n I(θ)).

Proof. Write L_θ = L_θ(x_1, ..., x_n) for the likelihood function.
1) θ + B(θ) = E_θ[T] = ∫ t(x_1, ..., x_n) L_θ(x_1, ..., x_n) dx_1 · · · dx_n.
2) Differentiating 1) in θ gives

  1 + B'(θ) = ∫ t(x_1, ..., x_n) L_θ'(x_1, ..., x_n) dx_1 · · · dx_n = E_θ[T L_θ'/L_θ] .

3) 1 = ∫ L_θ(x_1, ..., x_n) dx_1 · · · dx_n implies

  0 = ∫ L_θ'(x_1, ..., x_n) dx_1 · · · dx_n = E_θ[L_θ'/L_θ] .

4) Using 3) and 2): Cov[T, L_θ'/L_θ] = E_θ[T L_θ'/L_θ] − 0 = 1 + B'(θ).
5) Using 4), the lemma, and L_θ'/L_θ = Σ_{i=1}^n f_θ'(x_i)/f_θ(x_i), the Cauchy-Schwarz inequality gives

  (1 + B'(θ))² = Cov²[T, L_θ'/L_θ] ≤ Var_θ[T] Var_θ[L_θ'/L_θ] = Var_θ[T] Σ_{i=1}^n E_θ[(f_θ'(x_i)/f_θ(x_i))²] = Var_θ[T] n I(θ) .
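The unbiased case of the bound can be illustrated by Monte Carlo: for the sample mean of n Gaussians, Var_θ[T] equals σ²/n = 1/(nI(θ)), so the Rao-Cramér bound is attained. A sketch, with all concrete parameters chosen for illustration only:

```python
import random

random.seed(3)
m, sigma, n, trials = 1.0, 2.0, 10, 100_000
I = 1.0 / sigma**2              # Fisher information of N(m, sigma^2) in m
bound = 1.0 / (n * I)           # Rao-Cramer bound = sigma^2 / n = 0.4

ts = []
for _ in range(trials):
    xs = [random.gauss(m, sigma) for _ in range(n)]
    ts.append(sum(xs) / n)      # unbiased estimator T = sample mean

t_mean = sum(ts) / trials
var_T = sum((t - t_mean) ** 2 for t in ts) / trials
# var_T is close to bound = 0.4: the sample mean is efficient here.
```

The empirical variance of T matches the bound, which is the equality case discussed next.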
Equality holds if and only if ρ_X is linear, that is if X is normal.

Definition. Closely related to the Fisher information is the already defined Shannon entropy of a random variable X with density f_θ,

  S(θ) = − ∫ f_θ log(f_θ) dx ,

as well as the power entropy N(θ) = e^{2S(θ)}/(2πe).

Theorem 5.3.6 (Information inequalities). If X, Y are independent random variables, then the following inequalities hold:
a) Fisher information inequality: I_{X+Y}^{−1} ≥ I_X^{−1} + I_Y^{−1}.
b) Power entropy inequality: N_{X+Y} ≥ N_X + N_Y.
c) Uncertainty property: I_X N_X ≥ 1.
In all cases, equality holds if and only if the random variables are Gaussian.

Proof. a) I_{X+Y} ≤ c² I_X + (1 − c)² I_Y is proven using the Jensen inequality (2.5.1). Take then c = I_Y/(I_X + I_Y). b) and c) are exercises.

Theorem 5.3.7 (Rao-Cramér bound). A random variable X with mean m and variance σ² satisfies I_X ≥ 1/σ². Equality holds if and only if X is normal.

Proof. This is a special case of the Rao-Cramér inequality with n = 1, T = X and θ = m; the bias is automatically zero. A direct computation gives also the uniqueness: E[(aX + b)ρ(X)] = ∫ (ax + b) f'(x) dx = −a ∫ f(x) dx = −a shows E[(X − m)ρ(X)] = −1, so that

  0 ≤ E[(ρ(X) + (X − m)/σ²)²] = E[ρ(X)²] + 2E[(X − m)ρ(X)]/σ² + E[(X − m)²]/σ⁴ = I_X − 2/σ² + 1/σ² ,

that is I_X ≥ 1/σ², with equality if and only if ρ(X) = −(X − m)/σ², the Gaussian score.

Remark. We see that the normal distribution has the smallest Fisher information among all distributions with the same variance σ².

5.4 Vlasov dynamics

Vlasov dynamics generalizes Hamiltonian n-body particle dynamics. It deals with the evolution of the law P^t of a vector-valued stochastic process X^t. Due to the overlap of this section with geometry and dynamics, the notation slightly changes in this section: we write for example X^t for the stochastic process and not X_t as before.

Definition. Let Ω = M be a 2p-dimensional Euclidean space or torus with a probability measure m and let N be a Euclidean space of dimension 2q. Given a potential V : R^q → R, the Vlasov flow X^t = (f^t, g^t) : M → N is defined by the differential equation

  ḟ = g ,   ġ(ω) = − ∫_M ∇V(f(ω) − f(η)) dm(η) .

These equations are called the Hamiltonian equations of the Vlasov flow. We can interpret X^t as a vector-valued stochastic process on the probability space (M, A, m): the probability space (M, A, m) labels the particles, which move on the target space N.

If p = 0 and M is a finite set Ω = {ω_1, ..., ω_n} with the uniform measure m, then X^t describes the evolution of n particles (f_i, g_i) = X(ω_i): it is the usual dynamics of n bodies which attract or repel each other. Vlasov dynamics is therefore a generalization of n-body dynamics. For example, if V(x) = Σ_i x_i²/2, then ∇V(x) = x and the Vlasov Hamiltonian system

  ḟ = g ,   ġ(ω) = − ∫_M (f(ω) − f(η)) dm(η)

is equivalent to the n-body evolution

  ḟ_i = g_i ,   ġ_i = − (1/n) Σ_{j=1}^n (f_i − f_j) .

In a center of mass coordinate system, where Σ_{i=1}^n f_i = 0, this simplifies to a system of coupled harmonic oscillators (d²/dt²) f_i = −f_i.

In general, the stochastic process X^t describes the evolution of densities or the evolution of surfaces. If P^t is a discrete measure located on finitely many points, then it is the n-body dynamics just described. It is an important feature of Vlasov theory that while the random variables X^t stay smooth, their laws P^t can develop singularities. This can be useful to model shocks.
Example. Let M = N = R² and assume that the measure m has its support on a smooth closed curve C. The process X^t describes the evolution of a continuum of particles on the curve: each part of the curve interacts with each other part. If r(s), s ∈ [0, 1], is a parameterization of the curve C so that m(r([a, b])) = b − a, then, for the interaction potential V(x) = e^{−x}, the deformation transformation X^t = (f^t, g^t) satisfies the differential equation

  (d/dt) f^t(x) = g^t(x) ,   (d/dt) g^t(x) = ∫_0^1 e^{−(f^t(x) − f^t(r(s)))} ds .

The evolved curve C^t at time t is parameterized by s ↦ (f^t(r(s)), g^t(r(s))). If m is a measure on the plane, the process X^t describes a volume-preserving deformation of the plane M: X^t is a one-parameter family of volume-preserving diffeomorphisms of the plane. Because the curve at time t is the image of the diffeomorphism X^t, it will never have self intersections. The curvature of the curve is expected to grow exponentially at many points. The picture sequence below shows the evolution of a particle gas with support on a closed curve in phase space; the coordinates (x, y) describe the position and the speed of the particles.

Figure. The situation at time t = 0.
Figure. The evolved curve C^t at a later time.

Example. An example with M = N = R², where the measure m is located on 2 points. The Vlasov evolution again describes a volume-preserving deformation of the plane: each point of the plane moves as a "test particle" in the force field of the 2 particles. Even though the 2-body problem is integrable, its periodic motion acts like a "mixer" for the complicated evolution of the test particles.

Figure. The situation at time t = 0.
Figure. The two particles have evolved in the phase space N.

Definition. If X^t is a stochastic process on (Ω = M, A, m) which takes values in N, then P^t is the probability measure on the phase space N defined by P^t[A] = m((X^t)^{−1} A). It is called the push-forward measure or law of the random vector X^t. The Vlasov evolution so defines a family of probability spaces (N, B, P^t). The spatial particle density ρ is the law of the random variable (x, y) ↦ x.

Example. Assume the measure P^0 is located on the curve r(s) = (s, sin(s)) and assume that there is no particle interaction at all: V = 0. Then P^t is supported on the curve s ↦ (s + t sin(s), sin(s)). While the spatial particle density is initially smooth, it becomes discontinuous after some time.

Figure. The support of the measure P^0 on N = R².
Figure. The support of the measure P^{0.2} on N = R².
Figure. The support of the measure P^{0.4} on N = R².
Figure. The support of the measure P^{1.2} on N = R².
Already for the free evolution of particles on a curve in phase space, the spatial particle density can become non-smooth after some time.

Figure. The spatial particle density at times t = 0, 1, 2, 3.

To get from the density P^t in the phase space the spatial density of particles ρ, we have to integrate out the velocity variable y, a conditional expectation.

Lemma 5.4.1 (Maxwell). If X^t = (f^t, g^t) is a solution of the Vlasov Hamiltonian flow, then the law P^t = (X^t)_* m satisfies the Vlasov equation

  Ṗ^t(x, y) + y · ∇_x P^t(x, y) − W(x) · ∇_y P^t(x, y) = 0

with W(x) = ∫_N ∇_x V(x − x') P^t(x', y') dx' dy'.

Proof. We have ∫_M ∇V(f(ω) − f(η)) dm(η) = W(f(ω)). Given a smooth function h on N of compact support, we calculate L = (d/dt) ∫_N h(x, y) P^t(x, y) dxdy as follows:

  L = ∫_N h(x, y) (d/dt) P^t(x, y) dxdy
    = (d/dt) ∫_M h(f(ω, t), g(ω, t)) dm(ω)
    = ∫_M ∇_x h(f(ω, t), g(ω, t)) g(ω, t) dm(ω) − ∫_M ∇_y h(f(ω, t), g(ω, t)) ∫_M ∇V(f(ω) − f(η)) dm(η) dm(ω)
    = ∫_N ∇_x h(x, y) y P^t(x, y) dxdy − ∫_N ∇_y h(x, y) [ ∫_N ∇V(x − x') P^t(x', y') dx' dy' ] P^t(x, y) dxdy
    = − ∫_N h(x, y) y · ∇_x P^t(x, y) dxdy + ∫_N h(x, y) W(x) · ∇_y P^t(x, y) dxdy ,

where we integrated by parts in the last step. Since h was arbitrary, Ṗ^t + y · ∇_x P^t − W · ∇_y P^t = 0.

In a short hand notation, the Vlasov equation is

  Ṗ + y · P_x − W(x) · P_y = 0 ,

where W = ∇_x V ⋆ P is the convolution of the force ∇_x V with P.

Remark. The Vlasov equation is an example of an integro-differential equation: the coefficient W is itself an integral of the unknown density P.

Example. V(x) = 0. Particles move freely. The Vlasov equation becomes the transport equation

  Ṗ^t(x, y) + y · ∇_x P^t(x, y) = 0 ,

which is in one dimension the partial differential equation u_t + y u_x = 0 with solutions u(t, x, y) = u(0, x − ty, y). It is an example of a Hamilton-Jacobi equation. Restricting this function to the diagonal y = x gives the Burgers equation u_t + x u_x = 0.

Example. For a quadratic potential V(x) = x²/2, the Hamilton equations are

  f̈(ω) = −( f(ω) − ∫_M f(η) dm(η) ) .

In center-of-mass coordinates f̃ = f − E[f], the system is a decoupled system of a continuum of oscillators ḟ̃ = g̃, ġ̃ = −f̃, with solutions

  f̃(t) = f̃(0) cos(t) + g̃(0) sin(t) ,   g̃(t) = −f̃(0) sin(t) + g̃(0) cos(t) .

The evolution for the density P is the partial differential equation

  Ṗ^t(x, y) + y · ∇_x P^t(x, y) − x · ∇_y P^t(x, y) = 0 ,

which has the explicit solution

  P^t(x, y) = P^0(x cos(t) − y sin(t), x sin(t) + y cos(t)) .

In the case of this quadratic potential, if m has for example the density ρ(x, y) = e^{−x²−2y²} (up to normalization), then P^t has the density ρ^t(x, y) = ρ(x cos(t) − y sin(t), x sin(t) + y cos(t)).
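The decoupling into oscillators can be checked numerically for the n-body version. The sketch below integrates ḟ_i = g_i, ġ_i = −(f_i − f̄) (the quadratic potential with the uniform measure giving each particle weight 1/n) with a symplectic Euler scheme and compares with the explicit cos/sin solution; the initial data are an arbitrary illustrative choice.

```python
import math

n = 8
f = [math.sin(2 * math.pi * i / n) for i in range(n)]    # arbitrary positions
g = [0.0] * n                                            # start at rest
f0 = f[:]

dt, steps = 1e-4, 10_000                                 # integrate to t = 1
for _ in range(steps):
    fbar = sum(f) / n
    g = [gi - dt * (fi - fbar) for fi, gi in zip(f, g)]  # symplectic Euler
    f = [fi + dt * gi for fi, gi in zip(f, g)]

t = dt * steps
fbar0 = sum(f0) / n
# explicit solution: center of mass is fixed (g(0) = 0), frequency 1
exact = [fbar0 + (fi - fbar0) * math.cos(t) for fi in f0]
err = max(abs(a - b) for a, b in zip(f, exact))          # O(dt), small
```

The numerical orbit agrees with the harmonic-oscillator solution up to the O(dt) integration error.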
On any Riemannian manifold with Laplace-Beltrami operator ∆, there are natural potentials: the Poisson equation ∆φ = ρ is solved by φ = V ⋆ ρ, where ⋆ is the convolution. The function G_y(x) = V(x − y) is also called the Green function of the Laplacian. This defines Newton potentials on the manifold.

For example, for N = R, the Laplacian ∆f = f'' is the second derivative. It is diagonal in Fourier space: (∆f)^(k) = −k² f̂(k), where k ∈ R. From −k² f̂(k) = ρ̂(k) we get f̂(k) = −(1/k²) ρ̂(k), so that f = V ⋆ ρ, where V is the function which has the Fourier transform V̂(k) = −1/k². But V(x) = |x|/2 has this Fourier transform:

  ∫_{−∞}^{∞} (|x|/2) e^{−ikx} dx = − 1/k² .

Also for N = T, the Laplacian ∆f = f'' is diagonal in Fourier space. Here the potential is the 2π-periodic function V(x) = x(2π − x)/(4π), which has the Fourier series V̂(k) = −1/k².

Here are some examples:
• N = R: V(x) = |x|/2.
• N = T: V(x) = x(2π − x)/(4π).
• N = R²: V(x) = (1/2π) log|x|.
• N = R³: V(x) = 1/(4π|x|).
• N = R⁴: V(x) = 1/(8π|x|²).
• N = S²: V(x) = log(1 − x · x).

Such Newton potentials model galaxy motion and appear in plasma dynamics [94, 67, 85]. Because Newton potentials V are not smooth, establishing global existence for the Vlasov dynamics is not easy, but it has been done in many cases [31]; see for example [60].

Lemma 5.4.2 (Gronwall). If a function u satisfies u'(t) ≤ g(t) u(t) for all 0 ≤ t ≤ T, then u(t) ≤ u(0) exp(∫_0^t g(s) ds) for 0 ≤ t ≤ T.

Proof. Integrating the assumption gives u(t) ≤ u(0) + ∫_0^t g(s) u(s) ds. The function h(t) = u(0) + ∫_0^t g(s) u(s) ds satisfies the differential inequality h'(t) = g(t) u(t) ≤ g(t) h(t). This leads to h(t) ≤ h(0) exp(∫_0^t g(s) ds), so that u(t) ≤ h(t) ≤ u(0) exp(∫_0^t g(s) ds). This proof for real-valued functions [20] generalizes to the case where u^t(x) evolves in a function space: one can just apply the same proof for any fixed x.
Theorem 5.4.3 (Batt-Neunzert-Brown-Hepp-Dobrushin). If ∇_x V is bounded and globally Lipschitz continuous, then the Hamiltonian Vlasov flow has a unique global solution X^t and, consequently, the Vlasov equation has a unique and global solution P^t in the space of measures.

Proof. The Hamiltonian differential equation for X = (f, g) evolves on the complete metric space of all continuous maps from M to N with the distance d(X, Y) = sup_{ω∈M} d(X(ω), Y(ω)), where d is the distance in N. We have to show that the differential equation ḟ = g, ġ = G(f) = −∫_M ∇_x V(f(ω) − f(η)) dm(η) in C(M, N) has a unique solution: because of the Lipschitz continuity

  ‖G(f) − G(f')‖_∞ ≤ 2 ‖D(∇_x V)‖_∞ · ‖f − f'‖_∞ ,

the standard Picard existence theorem for differential equations assures local existence of solutions. Gronwall's lemma assures that X(ω) can not grow faster than exponentially. This gives the global existence.

Remark. If V and P^0 are smooth, then P^t is piecewise smooth. For smooth potentials, the dynamics depends continuously on the measure m: one could approximate a smooth measure m by point measures. If m is a point measure supported on finitely many points, then one could also invoke the global existence theorem for ordinary differential equations.

Definition. The evolution of DX^t at a point ω ∈ M is called the linearized Vlasov flow. It is the differential equation

  (d²/dt²) Df(ω) = − ∫_M ∇²V(f(ω) − f(η)) dm(η) Df(ω) =: B(f^t) Df(ω)

and we can write it as a first order differential equation

  (d/dt) DX = (d/dt) (Df, Dg) = A(f^t) (Df, Dg) ,   A(f^t) = [ 0, 1 ; B(f^t), 0 ] ,

with B(f^t) = −∫_M ∇²V(f(ω) − f(η)) dm(η).

Remark. The rank of the matrix DX^t(ω) stays constant. More generally, Y_k(t) = {ω ∈ M | DX^t(ω) has rank 2q − k = dim(N) − k} is time independent. Since Df^t(ω) is a linear combination of Df^0(ω) and Dg^0(ω), critical points of f^t can only appear for ω where Df^0(ω) and Dg^0(ω) are linearly dependent; the set Y_q contains {ω | Df(ω) = λ Dg(ω), λ ∈ R ∪ {∞}}.
Proof. Differentiation of (d²/dt²) Df = B(f^t) Df at a critical point ω^t gives (d²/dt²) D²f^t(ω^t) = B(f^t) D²f^t(ω^t): the eigenvalues λ_j of the Hessian D²f satisfy λ̈_j = B(f^t) λ_j.

Definition. Time independent solutions of the Vlasov equation are called equilibrium measures or stationary solutions. One can construct some of them with a Maxwellian ansatz

  P(x, y) = C exp(−β ( y²/2 + ∫ V(x − x') Q(x') dx' )) = S(y) Q(x) ,

where the constant C is chosen such that ∫_{R^d} S(y) dy = 1. These measures are called Bernstein-Green-Kruskal (BGK) modes.

Proposition 5.4.4. If Q : N → R satisfies the integral equation

  Q(x) = exp(−β ∫_{R^d} V(x − x') Q(x') dx') = exp(−β (V ⋆ Q)(x)) ,

then the Maxwellian distribution P(x, y) = S(y) Q(x) is an equilibrium solution of the Vlasov equation to the potential V.

Proof. One has ∇_y P(x, y) = S'(y) Q(x) = −β y S(y) Q(x) and

  W(x) = ∫_N ∇_x V(x − x') P(x', y') dx' dy' = ∫ ∇_x V(x − x') Q(x') dx' ,

so that

  W(x) · ∇_y P(x, y) = Q(x) (−β S(y) y) ∫ ∇_x V(x − x') Q(x') dx' = y S(y) Q_x(x) = y · ∇_x P(x, y) .

Therefore Ṗ = −y · ∇_x P + W · ∇_y P = 0, and P is time independent.

Definition. The random variable

  λ(ω) = limsup_{t→∞} (1/t) log ‖DX^t(ω)‖ ∈ [0, ∞]

is called the maximal Lyapunov exponent of the SL(2q, R)-cocycle A^t = A(f^t) along an orbit X^t = (f^t, g^t) of the Vlasov flow. The Lyapunov exponent could be infinite.
5.5 Multidimensional distributions

Random variables which are vector-valued can be treated in an analogous way as real-valued random variables. One often adds the term "multivariate" to indicate that one has multiple dimensions.

Definition. A random vector X = (X_1, ..., X_d) is a vector-valued random variable. It is in L^p if each coordinate is in L^p.

Definition. The expectation E[X] of a random vector X = (X_1, ..., X_d) is the vector (E[X_1], ..., E[X_d]); the variance is the vector (Var[X_1], ..., Var[X_d]).

Example. The random vector X = (x³, y⁴, z⁵) on the unit cube Ω = [0, 1]³ with Lebesgue measure P has the expectation E[X] = (1/4, 1/5, 1/6).

Definition. The multidimensional distribution function of a random vector X = (X_1, ..., X_d) is defined as

  F_X(t) = F_{(X_1,...,X_d)}(t_1, ..., t_d) = P[X_1 ≤ t_1, ..., X_d ≤ t_d] .

The multidimensional distribution function is also called the multivariate distribution function. If X is a continuous random vector, there is a density f_X(t) satisfying

  F_X(t) = ∫_{−∞}^{t_1} · · · ∫_{−∞}^{t_d} f(s_1, ..., s_d) ds_1 · · · ds_d .

Definition. Assume X = (X_1, ..., X_d) is a random vector in L^∞, with law µ. The law is a measure on R^d with compact support; after some scaling and translation we can assume that µ is a bounded Borel measure on the unit cube I^d = [0, 1]^d. We use in this section the multi-index notation xⁿ = ∏_{i=1}^d x_i^{n_i}. Denote by µ_n = ∫_{I^d} xⁿ dµ the n'th moment of µ and call µ_n(X) = E[Xⁿ] = E[X_1^{n_1} X_2^{n_2} · · · X_d^{n_d}] the n'th moment of X. We call the map n ∈ N^d ↦ µ_n the moment configuration or, if d = 1, the moment sequence. We will tacitly assume µ_n = 0 if at least one coordinate n_i in n = (n_1, ..., n_d) is negative. For a continuous random vector, the moments satisfy

  µ_n(X) = ∫_{R^d} xⁿ f(x) dx ,

which is a short hand notation for ∫ · · · ∫ x_1^{n_1} · · · x_d^{n_d} f(x_1, ..., x_d) dx_1 · · · dx_d.

Example. The n = (7, 3, 4)'th moment of the random vector X = (x³, y⁴, z⁵) is

  E[X_1^{n_1} X_2^{n_2} X_3^{n_3}] = E[x^{21} y^{12} z^{20}] = (1/22)(1/13)(1/21) .

The random vector X is continuous and has the probability density

  f(x, y, z) = (x^{−2/3}/3)(y^{−3/4}/4)(z^{−4/5}/5) .

Definition. As in one dimension, one can define a multidimensional moment generating function

  M_X(t) = E[e^{t·X}] = E[e^{t_1 X_1} e^{t_2 X_2} · · · e^{t_d X_d}] ,

which contains all the information about the moments because of the multidimensional moment formula

  ∫_{R^d} xⁿ dµ = E[Xⁿ] = (dⁿ M_X/dtⁿ)(t) |_{t=0} ,

where the n'th derivative is defined as dⁿ/dtⁿ = ∂^{n_1}/∂t_1^{n_1} · ∂^{n_2}/∂t_2^{n_2} · · · ∂^{n_d}/∂t_d^{n_d}.

Example. The random variable X = (x, √y, z^{1/3}) has the moment generating function

  M(s, t, u) = ∫_0^1 ∫_0^1 ∫_0^1 e^{sx + t√y + u z^{1/3}} dx dy dz = ((e^s − 1)/s) · ((2 + 2e^t(t − 1))/t²) · ((−6 + 3e^u(2 − 2u + u²))/u³) .

Because the components X_1, X_2, X_3 in this example were independent random variables, the moment generating function is of the form M(s)M(t)M(u), where the factors are the one-dimensional moment generating functions of the one-dimensional random variables X_1, X_2 and X_3.

Remark. To improve readability, we also use notation like |n| = Σ_{i=1}^d n_i, (n choose k) = ∏_{i=1}^d (n_i choose k_i) or Σ_{k=0}^n = Σ_{k_1=0}^{n_1} · · · Σ_{k_d=0}^{n_d}. We mean n → ∞ in the sense that n_i → ∞ for all i = 1, ..., d.

Definition. Let e_i be the standard basis in Z^d. Define the partial difference (∆_i a)_n = a_{n−e_i} − a_n on configurations and write ∆^k = ∏_i ∆_i^{k_i}. Unlike the usual convention, we take this particular sign convention for ∆: it allows us to avoid many negative signs in this section. By induction in Σ_{i=1}^d k_i, one proves the relation

  (∆^k µ)_n = ∫_{I^d} x^{n−k} (1 − x)^k dµ      (5.1)

using x^{n−e_i−k}(1 − x)^k − x^{n−k}(1 − x)^k = x^{n−e_i−k}(1 − x)^{k+e_i}.
Definition. Given a continuous function f : I^d → R, define for n ∈ N^d with n_i > 0 the higher dimensional Bernstein polynomials

  B_n(f)(x) = Σ_{k=0}^n f(k_1/n_1, ..., k_d/n_d) (n choose k) x^k (1 − x)^{n−k} .

Lemma 5.5.1 (Multidimensional Bernstein). In the uniform topology in C(I^d), we have B_n(f) → f if n → ∞.

Proof. By the Weierstrass theorem, polynomials are dense in C(I^d): multidimensional polynomials separate points in C(I^d). It is therefore enough to prove the claim for f(x) = x^m = ∏_{i=1}^d x_i^{m_i}. Because B_n(y^m)(x) is the product of one dimensional Bernstein polynomials

  B_n(y^m)(x) = ∏_{i=1}^d B_{n_i}(y_i^{m_i})(x_i) ,

the claim follows from the corresponding result in one dimension.

Remark. Hildebrandt and Schoenberg refer for the proof of this lemma to Bernstein's proof in one dimension. While a higher dimensional adaptation of the probabilistic proof could be done, involving a stochastic process in Z^d with drift x_i in the i'th direction, the factorization argument is more elegant.

Theorem 5.5.2 (Hausdorff-Hildebrandt-Schoenberg). There is a bijection between signed bounded Borel measures µ on [0, 1]^d and configurations µ_n for which there exists a constant C such that

  Σ_{k=0}^n (n choose k) |(∆^k µ)_n| ≤ C ,  ∀n ∈ N^d .      (5.2)

A configuration µ_n belongs to a positive measure if and only if additionally to (5.2) one has (∆^k µ)_n ≥ 0 for all k, n ∈ N^d. In both cases there exists a unique solution to the moment problem.

Definition. Let δ(x) denote the Dirac point measure located at x ∈ I^d; it satisfies ∫_{I^d} y dδ(x)(y) = x. For n ∈ N^d, define the atomic measures µ^(n) on I^d which have weights (n choose k)(∆^k µ)_n on the ∏_{i=1}^d (n_i + 1) points ((n_1 − k_1)/n_1, ..., (n_d − k_d)/n_d) ∈ I^d with 0 ≤ k_i ≤ n_i.

Proof. We show now the existence of a measure µ under condition (5.2).
(i) By relation (5.1), a positive measure satisfies

  (n choose k)(∆^k µ)_n = (n choose k) ∫_{I^d} x^{n−k}(1 − x)^k dµ(x) ≥ 0 .

(ii) The left hand side of (5.2) is the variation ‖µ^(n)‖ of the measure µ^(n). The condition (5.2) implies that the variations of the measures µ^(n) are bounded: there exists a constant C such that ‖µ^(n)‖ ≤ C for all n. By Alaoglu's theorem, there exists an accumulation point µ.
(iii) Because

  ∫_{I^d} x^m dµ^(n)(x) = Σ_{k=0}^n ((n−k)/n)^m (n choose k) (∆^k µ)_n
    = ∫_{I^d} Σ_{k=0}^n ((n−k)/n)^m (n choose k) x^{n−k}(1 − x)^k dµ(x)
    = ∫_{I^d} Σ_{k=0}^n (k/n)^m (n choose k) x^k (1 − x)^{n−k} dµ(x)
    = ∫_{I^d} B_n(y^m)(x) dµ(x) → ∫_{I^d} x^m dµ(x) ,

we know that any signed measure µ which is an accumulation point of µ^(n), where n_i → ∞, solves the moment problem. Because polynomials are dense in C(I^d), the moments determine the measure, so the solution is unique and µ^(n) → µ.
(iv) If µ is a positive measure, then by (i) the measures µ^(n) are all positive, and therefore also the measure µ.

We extract from the proof of theorem (5.5.2) the construction:

Corollary 5.5.3. An explicit finite constructive approximation of a given measure µ on I^d is given for n ∈ N^d by the atomic measures

  µ^(n) = Σ_{0 ≤ k_i ≤ n_i} (n choose k) (∆^k µ)_n δ((n_1 − k_1)/n_1, ..., (n_d − k_d)/n_d) .

Remark. Hildebrandt and Schoenberg noted in 1933 that this result gives a constructive proof of the Riesz representation theorem, stating that the dual of C(I^d) is the space of Borel measures M(I^d).
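Corollary 5.5.3 can be tried out in exact arithmetic. For d = 1 and µ the Lebesgue measure on [0, 1] (moments µ_m = 1/(m+1)), the sketch below builds the atomic approximation µ^(n) from the differences ∆^k and checks its total mass and first moments; the helper names are our own and the moment identities at the end are computed, not quoted from the text.

```python
from fractions import Fraction
from math import comb

def mu(m):
    # moments of Lebesgue measure on [0,1]: mu_m = 1/(m+1)
    return Fraction(1, m + 1)

def delta_k(n, k):
    # (Delta^k mu)_n = sum_j (-1)^j C(k,j) mu_{n-k+j} = int x^(n-k) (1-x)^k dmu
    return sum((-1) ** j * comb(k, j) * mu(n - k + j) for j in range(k + 1))

n = 50
# atoms at (n-k)/n with weights C(n,k) (Delta^k mu)_n
atoms = [(Fraction(n - k, n), comb(n, k) * delta_k(n, k)) for k in range(n + 1)]

total = sum(w for _, w in atoms)        # total mass: exactly 1
m1 = sum(x * w for x, w in atoms)       # first moment: exactly 1/2
m2 = sum(x * x * w for x, w in atoms)   # 1/3 + 1/(6n), -> 1/3 as n grows
```

Using `fractions.Fraction` avoids the catastrophic cancellation that the alternating sums in ∆^k would cause in floating point.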
Hausdorff established a criterion for absolute continuity of a measure µ with respect to the Lebesgue measure on [0, 1]. This can be generalized to a criterion for comparing two arbitrary measures, and it works in d dimensions.

Definition. As usual, we call a measure µ on I^d uniformly absolutely continuous with respect to ν, if µ = f dν with f ∈ L^∞(I^d).

Corollary 5.5.4. A positive probability measure µ is uniformly absolutely continuous with respect to a second probability measure ν if and only if there exists a constant C such that

  (∆^k µ)_n ≤ C · (∆^k ν)_n for all k, n ∈ N^d .

Proof. If µ = f dν with f ∈ L^∞(I^d), we get, using (5.1),

  (∆^k µ)_n = ∫_{I^d} x^{n−k}(1 − x)^k dµ(x) = ∫_{I^d} x^{n−k}(1 − x)^k f dν(x) ≤ ‖f‖_∞ ∫_{I^d} x^{n−k}(1 − x)^k dν(x) = ‖f‖_∞ (∆^k ν)_n .

On the other hand, if (∆^k µ)_n ≤ C (∆^k ν)_n, then ρ_n = C ν_n − µ_n defines, by theorem (5.5.2), a positive measure ρ on I^d. Since ρ = Cν − µ, we have for any Borel set A ⊂ I^d that ρ(A) ≥ 0. This gives µ(A) ≤ C ν(A) and implies that µ is absolutely continuous with respect to ν with a density f satisfying f(x) ≤ C almost everywhere.

This leads to a higher dimensional generalization of Hausdorff's result, which allows to characterize the continuity of a multidimensional random vector from its moments:

Corollary 5.5.5. A Borel probability measure µ on I^d is uniformly absolutely continuous with respect to the Lebesgue measure on I^d if and only if there exists a constant C such that

  (n choose k) (∆^k µ)_n ∏_{i=1}^d (n_i + 1) ≤ C for all k and n .

Proof. Use corollary (5.5.4) and

  ∫_{I^d} x^{n−k}(1 − x)^k dx = [ (n choose k) ∏_{i=1}^d (n_i + 1) ]^{−1} .

There is also a characterization, due to Hausdorff, of the L^p measures on I^1 = [0, 1] [74]. This has an obvious generalization to d dimensions:
Proposition 5.5.6. Given a bounded positive probability measure µ ∈ M(I^d) and assume 1 < p < ∞. Then µ ∈ L^p(I^d) if and only if there exists a constant C such that for all n ∈ N^d,

  ∏_{i=1}^d (n_i + 1)^{p−1} Σ_{k=0}^n ( (n choose k) (∆^k µ)_n )^p ≤ C .      (5.3)

Proof. (i) Let µ^(n) be the measures of corollary (5.5.3). We construct first from the atomic measures µ^(n) absolutely continuous measures µ̃^(n) = g^(n) dx on I^d, given by the function g^(n) which takes the constant value

  (n choose k) (∆^k µ)_n ∏_{i=1}^d (n_i + 1)

on the cube of side lengths 1/(n_i + 1) centered at the point (n − k)/n ∈ I^d. Because each such cube has Lebesgue volume ∏_{i=1}^d (n_i + 1)^{−1}, it has the same measure with respect to both µ^(n) and g^(n) dx. Condition (5.3) means precisely that ‖g^(n)‖_p ≤ C.

(ii) Assume µ = f dx with f ∈ L^p. Because g^(n) dx → µ = f dx in the weak topology for measures, we have g^(n) → f weakly in L^p, so there exists a constant C such that ‖g^(n)‖_p ≤ C, which is equivalent to (5.3).

(iii) On the other hand, assume (5.3), so that ‖g^(n)‖_p ≤ C. Since the unit ball in the reflexive Banach space L^p(I^d) is weakly compact for p ∈ (1, ∞), a subsequence of g^(n) converges weakly to a function g ∈ L^p. This implies that a subsequence of the measures g^(n) dx converges to g dx, which is in L^p and which is equal to µ by the uniqueness of the moment problem (Weierstrass).

5.6 Poisson processes

Definition. A Poisson process (S, P, Π, N) over a probability space (Ω, F, Q) is given by a complete metric space S, a nonatomic finite Borel measure P on S and a function ω ↦ Π(ω) ⊂ S from Ω to the set of finite subsets of S, such that for every measurable set B ⊂ S, the map

  ω ↦ N_B(ω) = |Π(ω) ∩ B|

is a Poisson distributed random variable with parameter P[B]. Here |A| denotes the cardinality of a finite set A; it is understood that N_B(ω) = 0 if ω ∈ S^0 = {0}. For any finite partition {B_i}_{i=1}^n of S, the random variables {N_{B_i}}_{i=1}^n have to be independent. The measure P is called the mean measure of the process.
Remark. The Poisson process is an example of a point process, because we can see it as assigning a random point set Π(ω) on S. If S is part of the Euclidean space and the mean measure P is continuous, P = f dx, then the interpretation is that f(x) is the average density of points at x.

Example. We have encountered the one-dimensional Poisson process in the last chapter as a martingale. The set S is [0, t] with the Lebesgue measure P as mean measure. We started with IID exponentially distributed random variables X_k, the "waiting times", and defined N_t(ω) = Σ_{k=1}^∞ 1_{S_k(ω) ≤ t}, where S_n = X_1 + · · · + X_n. Translated into the current framework, the set Π(ω) is the discrete point set Π(ω) = {S_n(ω) | n = 1, 2, 3, ...} ∩ S, and for every Borel set B in S we have N_B(ω) = |Π(ω) ∩ B|.

Figure. A Poisson process in R² with mean density P = (e^{−x²−y²}/(2π)) dxdy.

Theorem 5.6.1 (Existence of Poisson processes). For every nonatomic finite measure P on S, there exists a Poisson process.

Proof. Define Ω = ⋃_{d=0}^∞ S^d, where S^d = S × · · · × S is the d-fold Cartesian product and S^0 = {0}. Let F be the Borel σ-algebra on Ω. The probability measure Q restricted to S^d is the product measure (P × P × · · · × P)/P[S]^d · Q[N_S = d], where

  Q[N_S = d] = Q[S^d] = e^{−P[S]} (d!)^{−1} P[S]^d .

Define Π(ω) = {ω_1, ..., ω_d} if ω ∈ S^d, and N_B as above. One readily checks that (S, P, Π, N) is a Poisson process on the probability space (Ω, F, Q): for any measurable partition {B_j}_{j=0}^m of S,

  Q[N_{B_1} = d_1, ..., N_{B_m} = d_m | N_S = d_0 + Σ_{j=1}^m d_j = d] = (d! / (d_0! · · · d_m!)) ∏_{j=0}^m P[B_j]^{d_j} / P[S]^{d_j} ,
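The construction in this proof is directly implementable: draw N_S from a Poisson distribution with parameter P[S], then place that many independent points with law P/P[S]. The sketch below does this for P = 5 · Lebesgue on the unit square and checks empirically that the count in a sub-rectangle B is Poisson with parameter P[B] (so mean ≈ variance); all concrete numbers are illustrative choices.

```python
import math, random

random.seed(4)

def sample_poisson(lmbda):
    # Knuth's product method
    L, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

total_mass = 5.0   # P = 5 * Lebesgue on S = [0,1]^2, so P[S] = 5

def sample_process():
    d = sample_poisson(total_mass)                 # N_S ~ Poisson(P[S])
    return [(random.random(), random.random()) for _ in range(d)]

# B = [0, 0.3] x [0, 0.5] has P[B] = 5 * 0.15 = 0.75
counts = []
for _ in range(50_000):
    counts.append(sum(1 for (x, y) in sample_process()
                      if x <= 0.3 and y <= 0.5))

mean_c = sum(counts) / len(counts)                           # ~ 0.75
var_c = sum((c - mean_c) ** 2 for c in counts) / len(counts)  # ~ 0.75
```

Mean and variance of the counts both come out near P[B] = 0.75, as a Poisson law requires.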
so that the independence of {N_{B_j}}_{j=1}^m follows:

  Q[N_{B_1} = d_1, ..., N_{B_m} = d_m]
   = Σ_{d = d_1 + · · · + d_m}^∞ Q[N_S = d] Q[N_{B_1} = d_1, ..., N_{B_m} = d_m | N_S = d]
   = Σ_{d_0 = 0}^∞ e^{−P[S]} (1/(d_0! · · · d_m!)) ∏_{j=0}^m P[B_j]^{d_j}
   = [ Σ_{d_0 = 0}^∞ e^{−P[B_0]} P[B_0]^{d_0}/d_0! ] ∏_{j=1}^m e^{−P[B_j]} P[B_j]^{d_j}/d_j!
   = ∏_{j=1}^m Q[N_{B_j} = d_j] .

This calculation in the case m = 1, leaving away the last step, shows that N_B is Poisson distributed with parameter P[B].

Definition. A point process is a map Π from a probability space (Ω, F, Q) to the set of finite subsets of a probability space (S, B, P) such that N_B(ω) := |Π(ω) ∩ B| is a random variable for all measurable sets B ∈ B.

Remark. The random discrete measure P(ω)[B] = N_B(ω) is a counting measure on S with support on Π(ω). The expectation of the random measure P(ω) is the measure P̃ on S defined by P̃[B] = ∫_Ω P(ω)[B] dQ(ω). But this measure is just P:

Lemma 5.6.2. P̃ = P.

Proof. Because the Poisson distributed random variable N_B(ω) = P(ω)[B] has by assumption the expectation P[B] = Σ_{k=0}^∞ k Q[N_B = k] = ∫_Ω P(ω)[B] dQ(ω) = P̃[B], one gets P = P̃.

Remark. The existence of Poisson processes can also be established by assigning to a basis {e_i} of the Hilbert space L²(S, P) some independent Poisson distributed random variables Z_i = φ(e_i) and then defining the map φ(f) = Σ_i a_i φ(e_i) if f = Σ_i a_i e_i. The image of this map is a Hilbert space of random variables with dot product Cov[φ(f), φ(g)] = (f, g). Define N_B = φ(1_B). These random variables have the correct distribution and are uncorrelated for disjoint sets B_j.
Definition. Assume Π is a point process on (S, B, P). For a function f : S → R⁺ in L¹(S, P), define the random variable

  Σ_f(ω) = Σ_{z ∈ Π(ω)} f(z) .

For f = 1_B one gets Σ_f(ω) = N_B(ω).

Definition. The moment generating function of Σ_f is defined, as for any random variable, as M_{Σ_f}(t) = E[e^{t Σ_f}].

Example. For a Poisson process and f = a 1_B, the moment generating function of Σ_f(ω) = a N_B(ω) is E[e^{a t N_B}] = e^{−P[B](1 − e^{at})}. We have computed the moment generating function of a Poisson distributed random variable in the first chapter.

Example. For a Poisson process and f = Σ_{j=1}^n a_j 1_{B_j}, where the B_j are disjoint sets, we have

  E[ ∏_{j=1}^n e^{a_j t N_{B_j}} ] = e^{−Σ_{j=1}^n P[B_j](1 − e^{a_j t})} ,

because the random variables N_{B_j} are independent. It is called the characteristic functional of the point process.

Example. For a Poisson process, the moment generating function of Σ_f is

  M_{Σ_f}(t) = exp( − ∫_S (1 − exp(t f(z))) dP(z) ) .

This is called Campbell's theorem. The proof is done by writing f = f⁺ − f⁻, where both f⁺ and f⁻ are nonnegative, and then approximating both functions with step functions f_k⁺ = Σ_j a_j⁺ 1_{B_j⁺} and f_k⁻ = Σ_j a_j⁻ 1_{B_j⁻}. Because for a Poisson process the random variables Σ_{f_{kj}^±} are independent for different j or different sign, the moment generating function of Σ_f is the product of the moment generating functions of Σ_{f_{kj}^±} = a_{kj}^± N_{B_j^±}.

Definition. A k-cube in an open subset S of R^d is a set

  ∏_{i=1}^d [ n_i/2^k , (n_i + 1)/2^k )

with integers n_i.
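Campbell's formula can be tested by simulation. For a Poisson process with mean measure P = λ · Lebesgue on the unit square and f(x, y) = x, the formula gives E[e^{tΣ_f}] = exp(−λ ∫_0^1 (1 − e^{tx}) dx) and E[Σ_f] = ∫ f dP = λ/2. The sketch below compares both with Monte Carlo estimates (λ and t are illustrative choices):

```python
import math, random

random.seed(5)

def sample_poisson(lmbda):
    L, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, t, trials = 3.0, 0.5, 50_000     # intensity and MGF argument
mgf_mc, mean_mc = 0.0, 0.0
for _ in range(trials):
    d = sample_poisson(lam)
    s = sum(random.random() for _ in range(d))   # Sigma_f with f(x, y) = x
    mean_mc += s
    mgf_mc += math.exp(t * s)
mean_mc /= trials
mgf_mc /= trials

# Campbell: MGF = exp(-lam * int_0^1 (1 - e^(t x)) dx), E[Sigma_f] = lam/2
mgf_exact = math.exp(-lam * (1.0 - (math.exp(t) - 1.0) / t))
```

The Monte Carlo moment generating function agrees with the closed-form value from Campbell's theorem up to sampling noise.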
Theorem 5.6.3 (Rényi's theorem, 1967). Let $P$ be a non-atomic probability measure on $(S,\mathcal B)$ and let $\Pi$ be a point process on $(\Omega,\mathcal F,Q)$. Assume that for any finite union of $k$-cubes $B\subset S$ one has $Q[N_B=0]=\exp(-P[B])$. Then $(S,\Pi,N)$ is a Poisson process with mean measure $P$.

Proof. (i) Define $O(B)=\{\omega\in\Omega\mid N_B(\omega)=0\}\subset\Omega$ for any measurable set $B$ in $S$. By assumption, $Q[O(B)]=\exp(-P[B])$.

(ii) For $m$ disjoint $k$-cubes $\{B_j\}_{j=1}^m$, the sets $O(B_j)\subset\Omega$ are independent. Proof:
$$Q\Big[\bigcap_{j=1}^m O(B_j)\Big]=Q[N_{\bigcup_{j=1}^m B_j}=0]=\exp\Big(-P\Big[\bigcup_{j=1}^m B_j\Big]\Big)=\prod_{j=1}^m \exp(-P[B_j])=\prod_{j=1}^m Q[O(B_j)]\;.$$

(iii) We count the number of points in an open subset $U$ of $S$ using $k$-cubes: define for $k>0$ the random variable $N_U^k(\omega)$ as the number of $k$-cubes $B$ for which $\omega\notin O(B\cap U)$. These random variables $N_U^k(\omega)$ converge to $N_U(\omega)$ for $k\to\infty$, for almost all $\omega$. Because for different $k$-cubes the sets $O(B)$ are independent, the moment generating function of $N_U^k$ is the product of the moment generating functions of its indicator summands:
$$E[e^{tN_U^k}]=\prod_{k\text{-cube } B}\big(Q[O(B)]+e^t(1-Q[O(B)])\big)=\prod_{k\text{-cube } B}\big(\exp(-P[B])+e^t(1-\exp(-P[B]))\big)\;.$$
Each factor of this product is positive, and the monotone convergence theorem shows that the moment generating function of $N_U$ is
$$E[e^{tN_U}]=\lim_{k\to\infty}\prod_{k\text{-cube } B}\big(\exp(-P[B])+e^t(1-\exp(-P[B]))\big)\;.$$

(iv) For an open set $U$, the random variable $N_U$ is Poisson distributed with parameter $P[U]$. Proof: if the measure $P$ is non-atomic, the product in (iii) converges to $\exp(P[U](e^t-1))$ for $k\to\infty$. Because the moment generating function determines the distribution, this assures that $N_U$ is Poisson distributed with parameter $P[U]$.

(v) For any disjoint open sets $U_1,\dots,U_m$, the random variables $\{N_{U_j}\}_{j=1}^m$ are independent. Proof: the random variables $\{N_{U_j}^k\}_{j=1}^m$ are independent for large enough $k$, because no $k$-cube can be in more than one of the sets $U_j$. Letting $k\to\infty$ shows that the variables $N_{U_j}$ are independent.

(vi) To extend (iv) and (v) from open sets to arbitrary Borel sets, one can use the characterization of a Poisson process by its characteristic functional of $f\in L^1(S,\mathcal B,P)$. If $f=\sum_j a_j 1_{U_j}$ for disjoint open sets $U_j$ and real numbers $a_j$, we have seen that the characteristic functional is the characteristic functional of a Poisson process. For general $f\in L^1(S,\mathcal B,P)$, the characteristic functional is the one of a Poisson process by approximation and the Lebesgue dominated convergence theorem. Use $f=1_B$ to verify that $N_B$ is Poisson distributed, and $f=\sum_j a_j 1_{B_j}$ with disjoint Borel sets $B_j$ to see that $\{N_{B_j}\}_{j=1}^m$ are independent.

5.7 Random maps

Definition. Let $(\Omega,\mathcal A,P)$ be a probability space and let $M$ be a manifold with Borel $\sigma$-algebra $\mathcal B$. A random diffeomorphism on $M$ is a measurable map from $M\times\Omega\to M$ so that $x\mapsto f(x,\omega)$ is a diffeomorphism for all $\omega\in\Omega$. Given a $P$-measure preserving transformation $T$ on $\Omega$, it defines a cocycle
$$S(x,\omega)=(f(x,\omega),T(\omega))\;,$$
which is a map on $M\times\Omega$.

Example. (Random logistic map) Here the parameter $c$ is given by IID random variables which change in each iteration. Iterating this random logistic map is done by taking IID random variables $c_n$ with law $\nu$ and then iterating $x_1=f(x_0,c_0)$, $x_2=f(x_1,c_1),\dots$. We can model this by taking $(\Omega,\mathcal A,P)=([0,1]^{\mathbb N},\mathcal B^{\mathbb N},\nu^{\mathbb N})$, where $\nu$ is a measure on $[0,1]$, taking the shift $T(x_n)=x_{n+1}$ and defining $S(x,\omega)=(f(x,\omega_0),T(\omega))$.

Example. If $M$ is the circle and $f(x,c)=x+c\sin(x)$ is a circle diffeomorphism, we can iterate this map in the same way with random parameters $c_n$.
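A random diffeomorphism is easy to experiment with. The sketch below iterates the circle diffeomorphism $f(x,c)=x+c\sin(x)$ from the example with IID parameters $c_n$; the uniform law for $\nu$ and the bound $|c|\le 1/2$ (which keeps $f'(x)=1+c\cos(x)>0$, so each map really is a diffeomorphism) are our own illustrative choices:

```python
import math
import random

def random_circle_map(x, c):
    # x -> x + c*sin(x) (mod 2*pi) is a circle diffeomorphism for |c| < 1
    return (x + c * math.sin(x)) % (2 * math.pi)

def iterate(x0, n, rng, cmax=0.5):
    # the c_n are IID with uniform law on [-cmax, cmax]; this plays the
    # role of the measure nu of the text (an arbitrary choice here)
    x = x0
    orbit = [x]
    for _ in range(n):
        x = random_circle_map(x, rng.uniform(-cmax, cmax))
        orbit.append(x)
    return orbit

rng = random.Random(2)
orbit = iterate(1.0, 100, rng)   # one sample path of the random map
```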
Example. With $M=\mathbb R^d$, if $A:\Omega\to SL(d,\mathbb R)$ is a measurable map with values in the special linear group $SL(d,\mathbb R)$ of all $d\times d$ matrices with determinant $1$, the map $S(x,v)=(T(x),A(x)v)$ is called a matrix cocycle. One often uses the notation
$$A^n(x)=A(T^{n-1}(x))\cdot A(T^{n-2}(x))\cdots A(T(x))\cdot A(x)$$
for the $n$'th iterate of this random map.

Example. If $M$ is a finite set $\{1,\dots,n\}$ and $P=P_{ij}$ is a Markov transition matrix, a matrix with entries $P_{ij}\ge 0$ and for which the sum of the column elements is $1$ in each column, then a random map for which $f(x_i,\omega)=x_j$ with probability $P_{ij}$ is called a finite Markov chain.

Random diffeomorphisms are examples of Markov chains as covered in Section (3.14) of the chapter on discrete stochastic processes:

Lemma 5.7.1. a) Any random map defines transition probability functions $\mathcal P:M\times\mathcal B\to[0,1]$:
$$\mathcal P(x,B)=P[f(x,\omega)\in B]\;.$$
b) If $\mathcal A_n$ is a filtration of $\sigma$-algebras and $X_n(\omega)=T^n(\omega)$ is $\mathcal A_n$-adapted, then the random map defines a discrete Markov process.

Proof. a) We have to check that for all $x$, the measure $\mathcal P(x,\cdot)$ is a probability measure on $M$, and that for all $B\in\mathcal B$, the map $x\to\mathcal P(x,B)$ is $\mathcal B$-measurable. This is the case because $f$ is a diffeomorphism and so continuous and especially measurable. b) is the definition of a discrete Markov process; this is easily done by checking all the axioms.

Definition. If $\Omega=(\Delta^{\mathbb N},\mathcal F^{\mathbb N},\nu^{\mathbb N})$ and $T(x)$ is the shift, we get IID $\Delta$-valued random variables $X_n(\omega)=T^n(\omega)_0$. The random map $f(x,\omega)$ then defines IID diffeomorphism-valued random variables $f_1(x)(\omega)=f(x,X_1(\omega))$, $f_2(x)=f(x,X_2(\omega))$, $\dots$. We will call a random diffeomorphism in this case an IID random diffeomorphism. If the transition probability measures are continuous, the random diffeomorphism is called a continuous IID random diffeomorphism. If $f(x,\omega)$ depends smoothly on $\omega$ and the transition probability measures are smooth, the random diffeomorphism is called a smooth IID random diffeomorphism.
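A matrix cocycle can be iterated directly. In the sketch below the IID choice between two shear matrices in $SL(2,\mathbb R)$ is an arbitrary illustration of ours; the product $A^n(x)=A(T^{n-1}x)\cdots A(x)$ stays in $SL(2,\mathbb R)$, which the determinant check confirms:

```python
import random

def matmul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def random_sl2(rng):
    # IID choice between an upper and a lower shear; both have det = 1
    t = rng.choice([0.5, -0.5])
    if rng.random() < 0.5:
        return [[1.0, t], [0.0, 1.0]]
    return [[1.0, 0.0], [t, 1.0]]

def cocycle_product(n, rng):
    # A^n = A(T^{n-1} x) ... A(T x) A(x), here with IID matrices
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        P = matmul(random_sl2(rng), P)
    return P

rng = random.Random(3)
P = cocycle_product(50, rng)
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
# det stays 1 (up to rounding): SL(2,R) is a group
```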
It is important to note that "continuous" and "smooth" in this definition refer only to the transition probabilities; with respect to $M$, we have already assumed smoothness from the beginning.

Definition. A measure $\mu$ on $M$ is called a stationary measure for the random diffeomorphism if the measure $\mu\times P$ is invariant under the map $S$. We have earlier written this as a fixed point equation for the Markov operator $\mathcal P$ acting on measures: $\mathcal P\mu=\mu$. If the random diffeomorphism defines a Markov process, the stationary measure satisfies
$$\mu(A)=\int_M \mathcal P(x,A)\,d\mu(x)$$
for every Borel set $A\in\mathcal A$. In the context of random maps, the Markov operator is also called a transfer operator.

Remark. If every diffeomorphism $x\to f(x,\omega)$, $\omega\in\Omega$, preserves a measure $\mu$, then $\mu$ is automatically a stationary measure.

Definition. A stationary measure $\mu$ of a random diffeomorphism is called ergodic if $\mu\times P$ is an ergodic invariant measure for the map $S$ on $(M\times\Omega,\mu\times P)$. Ergodicity especially means that the transformation $T$ on the "base probability space" $(\Omega,\mathcal A,P)$ is ergodic.

Example. Let $M=\mathbb T^2=\mathbb R^2/\mathbb Z^2$ denote the two-dimensional torus. It is a group with addition modulo $1$ in each coordinate. Given the IID random map
$$f_n(x)=\begin{cases} x+\alpha & \text{with probability } 1/2\;,\\ x+\beta & \text{with probability } 1/2\;,\end{cases}$$
each map either rotates the point by the vector $\alpha=(\alpha_1,\alpha_2)$ or by the vector $\beta=(\beta_1,\beta_2)$. The Lebesgue measure on $\mathbb T^2$ is invariant because it is invariant for each of the two transformations. If $\alpha$ and $\beta$ are both rational vectors, then there are infinitely many ergodic invariant measures. For example, if $\alpha=(3/7,2/7)$ and $\beta=(1/11,5/11)$, then the $77$ rectangles $[i/7,(i+1)/7]\times[j/11,(j+1)/11]$ are permuted by both transformations.

Definition. The support of a measure $\mu$ is the complement of the open set of points $x$ for which there is a neighborhood $U$ with $\mu(U)=0$. It is by definition a closed set.

Remark. The previous example shows that there can be infinitely many ergodic invariant measures of a random diffeomorphism. But for smooth IID random diffeomorphisms, one has only finitely many if the manifold is compact:
Theorem 5.7.2 (Finitely many ergodic stationary measures (Doob)). If $M$ is compact, a smooth IID random diffeomorphism has only finitely many ergodic stationary measures $\mu_i$.

Proof. (i) Let $\mu_1$ and $\mu_2$ be two different ergodic invariant measures. Denote by $\Sigma_1$ and $\Sigma_2$ their supports. Assume $\Sigma_1$ and $\Sigma_2$ are not disjoint. Then there exist points $x_i\in\Sigma_i$ and open neighborhoods $U_i$ of $x_i$ so that the transition probability $\mathcal P(x_1,U_2)$ is positive. But then there is an open set $U$ with $\mu_2(U\times\Omega)=0$ and $\mu_2(S(U\times\Omega))>0$, violating the measure preserving property of $S$: the smoothness assumption on the transition probabilities $\mathcal P(y,\cdot)$ contradicts the $S$-invariance of the measures $\mu_i$ having supports $\Sigma_i$. The supports are therefore mutually disjoint and separated by open sets.
(ii) Assume there are infinitely many ergodic invariant measures. Then there exist at least countably many and we can enumerate them as $\mu_1,\mu_2,\dots$ with supports $\Sigma_i$. Choose a point $y_i$ in $\Sigma_i$. The sequence of points has an accumulation point $y\in M$ by compactness of $M$. This implies that an arbitrary $\epsilon$-neighborhood $U$ of $y$ intersects infinitely many $\Sigma_i$, contradicting (i). This uses the assumption that the transition probabilities have smooth densities.

Remark. If $\mu_1,\mu_2$ are stationary probability measures, then $\lambda\mu_1+(1-\lambda)\mu_2$ is another stationary probability measure. The theorem therefore implies that the set of stationary probability measures forms a closed convex simplex with finitely many corners, the ergodic stationary measures. It is an example of a Choquet simplex.

5.8 Circular random variables

Definition. A measurable function from a probability space $(\Omega,\mathcal A,P)$ to the circle $(\mathbb T,\mathcal B)$ with Borel $\sigma$-algebra $\mathcal B$ is called a circle-valued random variable. It is an example of a directional random variable. We can realize the circle as $\mathbb T=[-\pi,\pi)$ or $\mathbb T=[0,2\pi)=\mathbb R/(2\pi\mathbb Z)$.

Example. If $(\Omega,\mathcal A,P)=(\mathbb R,\mathcal B,e^{-x^2/2}/\sqrt{2\pi}\,dx)$, then $X(x)=x\bmod 2\pi$ is a circle-valued random variable.

Example. For a positive integer $k$, the first significant digit random variable is $X(k)=2\pi\log_{10}(k)\bmod 2\pi$. It is a circle-valued random variable on every finite probability space $(\Omega=\{1,\dots,n\},\,P[\{k\}]=1/n)$.
Definition. The law of a circular random variable $X$ is the push-forward measure $\mu=X^*P$ on the circle $\mathbb T$. If the law is absolutely continuous, it has a probability density function $f_X$ on the circle and $\mu=f_X(x)\,dx$. As on the real line, the Lebesgue decomposition theorem assures that every measure on the circle can be decomposed as $\mu=\mu_{pp}+\mu_{ac}+\mu_{sc}$, where $\mu_{pp}$ is (pp), $\mu_{ac}$ is (ac) and $\mu_{sc}$ is (sc).

Example. The law of the random variable in the first example is a measure on the circle with the smooth density
$$f_X(x)=\sum_{k=-\infty}^{\infty}\frac{e^{-(x+2\pi k)^2/2}}{\sqrt{2\pi}}\;.$$
It is an example of a wrapped normal distribution.

Example. The law of the first significant digit random variable $X_n(k)=2\pi\log_{10}(k)\bmod 2\pi$ defined on $\{1,\dots,n\}$ is a discrete measure. It is an example of a lattice distribution.

Definition. The mean direction $m$ and the resultant length $\rho$ of a circular random variable taking values in $\{|z|=1\}\subset\mathbb C$ are defined by
$$\rho e^{im}=E[e^{iX}]\;.$$
One can write $\rho=E[\cos(X-m)]$. If $\rho=0$, there is no distinguished mean direction; we define $m=0$ just to have one in that case.

Definition. The circular variance is defined as
$$V=1-\rho=E[1-\cos(X-m)]=E\big[(X-m)^2/2-(X-m)^4/4!+\cdots\big]\;.$$
The circular variance is a number in $[0,1]$. The latter expansion shows the relation with the variance in the case of real-valued random variables.

Example. A dice takes values in $0,1,2,3,4,5$ (count $6=0$). We roll it two times, but instead of adding up the results $X$ and $Y$, we add them up modulo $6$. For example, if $X=4$ and $Y=3$, then $X+Y=1$. Note that $E[X+Y]\neq E[X]+E[Y]$. Even if $X$ is an unfair dice, if $Y$ is fair, then $X+Y$ is a fair dice.

Definition. The relative entropy for two densities $f,g$ on the circle is defined as
$$H(f|g)=\int_0^{2\pi} f(x)\log(f(x)/g(x))\,dx\;.$$
The Gibbs inequality lemma assures that $H(f|g)\ge 0$ and that $H(f|g)=0$ if and only if $f=g$ almost everywhere.

Definition. The entropy of a circle-valued random variable $X$ with probability density function $f_X$ is defined as $H(f)=-\int_0^{2\pi} f(x)\log(f(x))\,dx$.
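The mean direction, resultant length and circular variance are directly computable from samples via the empirical value of $E[e^{iX}]$. A small sketch; the sample law, a wrapped normal with $\sigma=0.3$ around angle $1$, is our own choice:

```python
import cmath
import math
import random

def circular_stats(xs):
    # rho * e^{i m} = E[e^{iX}]; circular variance V = 1 - rho
    z = sum(cmath.exp(1j * x) for x in xs) / len(xs)
    return cmath.phase(z), abs(z), 1.0 - abs(z)

rng = random.Random(4)
# wrapped normal samples with mean direction 1.0 and sigma = 0.3
xs = [(1.0 + rng.gauss(0.0, 0.3)) % (2 * math.pi) for _ in range(20_000)]
m, rho, V = circular_stats(xs)
# theory: rho = exp(-sigma^2 / 2) = exp(-0.045), m close to 1.0
```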
Example. The limiting significant-digit law is also called Benford's law, the distribution of the first significant digit. The interpretation is that if you take a large random number, the probability that the first digit is $1$ is $\log_{10}(2)$, while the probability that the first digit is $6$ is $\log_{10}(7/6)$.

Example. For the wrapped normal distribution, $m=0$, $\rho=e^{-\sigma^2/2}$ and $V=1-e^{-\sigma^2/2}$.

Example. If the distribution of $X$ is the uniform distribution on the circle, then $\rho=0$ and $V=1$. There is no particular mean direction in this case.

Example. If the distribution of $X$ is located at a single point $x_0$, then $\rho=1$, $m=x_0$ and $V=0$.

Definition. A sequence of circle-valued random variables $X_n$ converges weakly to a circle-valued random variable $X$ if the law of $X_n$ converges weakly to the law of $X$. As with real-valued random variables, weak convergence is also called convergence in law.

Example. The sequence $X_n$ of significant digit random variables converges weakly to a random variable $X$ with lattice distribution $P[X=2\pi k/10]=\log_{10}(k+1)-\log_{10}(k)$ supported on $\{2\pi k/10\mid 1\le k\le 9\}$.

The following is the analog of the Chebychev inequality on the real line:

Theorem 5.8.1 (Chebychev inequality on the circle). If $X$ is a circular random variable with circular mean $m$ and circular variance $V$, then
$$P\big[\,|\sin((X-m)/2)|\ge\epsilon\,\big]\le\frac{V}{2\epsilon^2}\;.$$

Proof. We can assume without loss of generality that $m=0$; otherwise replace $X$ with $X-m$, which does not change the variance. We take $\mathbb T=[-\pi,\pi)$. We use the trigonometric identity $1-\cos(x)=2\sin^2(x/2)$ to get
$$V=E[1-\cos(X)]=2E\Big[\sin^2\Big(\frac X2\Big)\Big]\ge 2E\Big[\sin^2\Big(\frac X2\Big);\,\Big|\sin\Big(\frac X2\Big)\Big|\ge\epsilon\Big]\ge 2\epsilon^2\,P\Big[\,\Big|\sin\Big(\frac X2\Big)\Big|\ge\epsilon\,\Big]\;.$$

Example. Let $X$ be the random variable which has a discrete distribution with a law supported on the points $x_0=0$ and $x_\pm=\pm 2\arcsin(\epsilon)$, with $P[X=x_0]=1-V/(2\epsilon^2)$ and $P[X=x_\pm]=V/(4\epsilon^2)$. Then $m=0$ and the circular variance is $V$. The equality
$$P\big[\,|\sin(X/2)|\ge\epsilon\,\big]=\frac{2V}{4\epsilon^2}=\frac{V}{2\epsilon^2}$$
shows that the Chebychev inequality on the circle is "sharp": one can not improve it without further assumptions on the distribution.
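Benford's law is easy to observe empirically. Powers of $2$ are a classical test case, because $n\log_{10}(2)\bmod 1$ is equidistributed; the sketch below counts the leading digits of $2^1,\dots,2^{1000}$:

```python
import math
from collections import Counter

def first_digit(x):
    # leading decimal digit of a positive integer
    return int(str(x)[0])

digits = Counter(first_digit(2 ** n) for n in range(1, 1001))
freq1 = digits[1] / 1000    # Benford predicts log10(2)   = 0.30103...
freq6 = digits[6] / 1000    # Benford predicts log10(7/6) = 0.06695...
```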
Definition. A circle-valued random variable with probability density function
$$f(x)=Ce^{\kappa\cos(x-\alpha)}$$
has the Mises distribution. It is also called the circular normal distribution. The constant $C$ is $1/(2\pi I_0(\kappa))$, where $I_0(\kappa)=\sum_{n=0}^{\infty}(\kappa/2)^{2n}/(n!)^2$ is a modified Bessel function. The parameter $\kappa$ is called the concentration parameter; the parameter $\alpha$ is called the mean direction. For $\kappa\to 0$, the Mises distribution approaches the uniform distribution on the circle.

Figure. The density function of the Mises distribution on $[-\pi,\pi]$.

Figure. The density function of the Mises distribution plotted as a polar graph.

Definition. The characteristic function of a circle-valued random variable $X$ is the Fourier transform $\phi_X=\hat\nu_X$ of the law $\nu_X$ of $X$. It is a sequence (that is, a function on $\mathbb Z$) given by
$$\phi_X(n)=E[e^{inX}]=\int_{\mathbb T} e^{inx}\,d\nu_X(x)\;.$$
More generally, the characteristic function of a $\mathbb T^d$-valued random variable (a circle-valued random vector) is the Fourier transform of the law of $X$. It is a function on $\mathbb Z^d$ given by
$$\phi_X(n)=E[e^{in\cdot X}]=\int_{\mathbb T^d} e^{in\cdot x}\,d\nu_X(x)\;.$$

The following lemma is the analog of corollary (2.17):

Lemma 5.8.2. A sequence $X_n$ of circle-valued random variables converges in law to a circle-valued random variable $X$ if and only if $\phi_{X_n}(k)\to\phi_X(k)$ for $n\to\infty$, for every integer $k$.
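The normalization constant of the Mises density only involves the Bessel function $I_0$, which is computable directly from its power series. A short sketch checking that $Ce^{\kappa\cos x}$ integrates to $1$; the midpoint rule and the choice $\kappa=2$ are our own:

```python
import math

def bessel_i0(kappa, terms=40):
    # I0(kappa) = sum_{n>=0} (kappa/2)^{2n} / (n!)^2
    return sum((kappa / 2.0) ** (2 * n) / math.factorial(n) ** 2
               for n in range(terms))

def mises_density(x, kappa, alpha=0.0):
    c = 1.0 / (2.0 * math.pi * bessel_i0(kappa))
    return c * math.exp(kappa * math.cos(x - alpha))

# midpoint-rule check that the density integrates to 1 over [-pi, pi]
n = 10_000
h = 2 * math.pi / n
total = sum(mises_density(-math.pi + (k + 0.5) * h, 2.0) for k in range(n)) * h
```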
Proposition 5.8.3. The Mises distribution maximizes the entropy among all circular distributions with fixed mean direction $\alpha$ and fixed circular variance $V$.

Proof. If $g$ is the density of the Mises distribution, then $\log(g)=\kappa\cos(x-\alpha)+\log(C)$. Now compute the relative entropy of a competing density $f$ with the same mean direction $\alpha$ and the same resultant length $\rho$:
$$0\le H(f|g)=\int f(x)\log(f(x))\,dx-\int f(x)\log(g(x))\,dx\;.$$
This means, with the resultant length $\rho=E[\cos(X-\alpha)]$ common to $f$ and $g$,
$$H(f)\le -E_f[\kappa\cos(x-\alpha)+\log(C)]=-\kappa\rho-\log(C)=H(g)\;.$$

Definition. A circle-valued random variable with probability density function
$$f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\sum_{k=-\infty}^{\infty}e^{-\frac{(x-\alpha-2k\pi)^2}{2\sigma^2}}$$
has the wrapped normal distribution. It is obtained by taking the normal distribution and wrapping it around the circle: if $X$ is a normal distribution with mean $\alpha$ and variance $\sigma^2$, then $X\bmod 2\pi$ has the wrapped normal distribution with those parameters.

Definition. A circle-valued random variable with values in a closed finite subgroup $H$ of the circle is called a lattice distribution.

Example. The random variable which takes the value $0$ with probability $1/2$, the value $2\pi/3$ with probability $1/4$ and the value $4\pi/3$ with probability $1/4$ is an example of a lattice distribution. The group $H$ is the finite cyclic group $\mathbb Z_3$.

Definition. A circle-valued random variable with constant density is called a random variable with the uniform distribution.

Distribution     Parameters            Characteristic function
point            $x_0$                 $\phi_X(k)=e^{ikx_0}$
uniform                                $\phi_X(k)=0$ for $k\neq 0$, $\phi_X(0)=1$
Mises            $\kappa$, $\alpha=0$  $I_k(\kappa)/I_0(\kappa)$
wrapped normal   $\sigma$, $\alpha=0$  $e^{-k^2\sigma^2/2}=\rho^{k^2}$

Remark. Why do we bother with new terminology and not just look at real-valued random variables taking values in $[0,2\pi)$? The reason to change the language is that there is a natural addition of angles given by rotations; any modeling by vector-valued random variables is kind of arbitrary. An advantage is also that the characteristic function is now a sequence and no more a function.
The functions $I_k(\kappa)$ appearing in the table are modified Bessel functions of the first kind of $k$'th order.

Definition. If $X_1,X_2,\dots$ is a sequence of circle-valued random variables, define $S_n=X_1+\cdots+X_n$.

Theorem 5.8.4 (Central limit theorem for circle-valued random variables). The sum $S_n$ of IID circle-valued random variables $X_i$ which do not have a lattice distribution converges in distribution to the uniform distribution.

Proof. We have $|\phi_X(k)|<1$ for all $k\neq 0$, because if $|\phi_X(k)|=1$ for some $k\neq 0$, then $X$ has a lattice distribution: if $|\int e^{ikX(x)}\,dx|=1$, then $e^{ikX(x)}$ is constant almost everywhere. Because $\phi_{S_n}(k)=\prod_{i=1}^n\phi_{X_i}(k)=\phi_X(k)^n$, all Fourier coefficients $\phi_{S_n}(k)$ with $k\neq 0$ converge to $0$ for $n\to\infty$, and the claim follows from lemma (5.8.2).

Remark. If $|\phi_X(k)|\le 1-\delta$ for some $\delta>0$ and all $k\neq 0$, then the convergence in the central limit theorem is exponentially fast: every Fourier mode goes to zero exponentially.

Remark. The IID property can be weakened. If the $X_n$ are independent with Fourier coefficients $|\phi_{X_n}(k)|=1-a_{nk}$, it suffices that $\sum_{n=1}^\infty a_{nk}$ diverges for every $k\neq 0$, because then $\prod_{n=1}^\infty(1-a_{nk})\to 0$. On the other hand, there are sequences of independent circle-valued random variables for which the central limit theorem does not hold: if the $X_i$ converge in law to a lattice distribution fast enough, the products $\prod_n\phi_{X_n}(k)$ need not converge to $0$.

Remark. Naturally, the usual central limit theorem still applies if one considers a circle-valued random variable as a random variable taking values in $[-\pi,\pi]$: because the classical central limit theorem shows that $\sum_{i=1}^n X_i/\sqrt n$ converges weakly to a normal distribution, $\sum_{i=1}^n X_i/\sqrt n\bmod 2\pi$ converges to the wrapped normal distribution. Note that such a restatement of the central limit theorem is not natural in the context of circular random variables, because it assumes the circle to be embedded in a particular way in the real line, and also because the operation of dividing by $\sqrt n$ is not natural on the circle: it uses the field structure of the cover $\mathbb R$.

Example. Circle-valued random variables appear as magnetic fields in mathematical physics. Assume the plane is partitioned into squares $[j,j+1)\times[k,k+1)$ called plaquettes. We can attach IID random variables $B_{jk}=e^{iX_{jk}}$ to each plaquette. The total magnetic field in a region $G$ is the product of all the magnetic fields $B_{jk}$ in the region:
$$\prod_{(j,k)\in G} B_{jk}=e^{i\sum_{(j,k)\in G} X_{jk}}\;.$$
The central limit theorem assures that the total magnetic field distribution in a large region is close to a uniform distribution.
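The mechanism behind the proof, geometric decay of every Fourier mode, is visible numerically. In the sketch below the $X_i$ are uniform on $[0,1]$ radians (an arbitrary non-lattice choice of ours), and the empirical first Fourier coefficient of $S_n$ shrinks like $|\phi_X(1)|^n$:

```python
import cmath
import math
import random

def fourier_coeff(xs, k=1):
    # empirical characteristic function phi(k) = E[e^{ikS}]
    return sum(cmath.exp(1j * k * x) for x in xs) / len(xs)

rng = random.Random(5)

def sample_sn(n):
    # S_n = X_1 + ... + X_n mod 2*pi with X_i uniform on [0, 1]
    return sum(rng.uniform(0.0, 1.0) for _ in range(n)) % (2 * math.pi)

c2 = abs(fourier_coeff([sample_sn(2) for _ in range(5_000)]))
c40 = abs(fourier_coeff([sample_sn(40) for _ in range(5_000)]))
# |phi_X(1)| = 2*sin(1/2), so c2 stays large while c40 is much smaller
```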
Example. Consider standard Brownian motion $B_t$ on the real line and its graph $\{(t,B_t)\mid t\in\mathbb R\}$ in the plane. The stochastic process produces circle-valued random variables $X_n=B_n\bmod 1$, which give the distance of the graph at time $t=n$ to the next lattice point below the graph. The distribution of $X_n$ is a wrapped normal distribution with mean direction $m=0$ and variance parameter $\sigma^2=n$.

Figure. The graph of one-dimensional Brownian motion with a grid.

Remark. For real-valued random variables, the sum of two independent random variables is never independent of one of them: if $X,Y$ are real-valued IID random variables, then $X+Y$ and $Y$ are positively correlated, because
$$\mathrm{Cov}[X+Y,Y]=\mathrm{Cov}[X,Y]+\mathrm{Cov}[Y,Y]=\mathrm{Var}[Y]>0\;.$$
The situation changes for circle-valued random variables: the sum of two independent random variables can be independent of the first random variable. Adding a random variable with uniform distribution immediately renders the sum uniform:

Theorem 5.8.5 (Stability of the uniform distribution). Assume $X,Y$ are circle-valued random variables, $Y$ has the uniform distribution and $X,Y$ are independent. Then $X+Y$ is independent of $X$ and has the uniform distribution.

Proof. By looking at the characteristic function $\phi_{X+Y}=\phi_X\phi_Y=\phi_Y$, we see that $X+Y$ has the uniform distribution. We have to show that the event $A=\{X+Y\in[c,d]\}$ is independent of the event $B=\{X\in[a,b]\}$. To do so we calculate
$$P[A\cap B]=\int_a^b\int_{c-x}^{d-x} f_X(x)f_Y(y)\,dy\,dx=\int_a^b\int_c^d f_X(x)f_Y(u-x)\,du\,dx=P[A]\,P[B]$$
after the substitution $u=y+x$, because $Y$ has the uniform distribution, so that $f_Y(u-x)$ is constant.
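Theorem 5.8.5 can be seen in simulation: a strongly concentrated circular random variable becomes uniform once independent uniform noise is added. The histogram test below, with $8$ bins and our own choice of sample sizes, is only an illustration:

```python
import math
import random

TWO_PI = 2 * math.pi

def max_bin_deviation(vals, bins=8):
    # maximal relative deviation of bin counts from the uniform count
    counts = [0] * bins
    for v in vals:
        counts[min(bins - 1, int(v / TWO_PI * bins))] += 1
    expected = len(vals) / bins
    return max(abs(c - expected) / expected for c in counts)

rng = random.Random(6)
n = 50_000
xs = [rng.gauss(1.0, 0.2) % TWO_PI for _ in range(n)]   # far from uniform
ys = [rng.uniform(0.0, TWO_PI) for _ in range(n)]       # uniform noise
s = [(x + y) % TWO_PI for x, y in zip(xs, ys)]

dev_x = max_bin_deviation(xs)   # large: X is concentrated
dev_s = max_bin_deviation(s)    # small: X + Y is close to uniform
```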
Remark. The interpretation of this theorem is that adding independent uniform random noise to any circle-valued random variable makes the sum uniform.

Example. Let $Y$ be a real-valued random variable which has the standard normal distribution. Then $X(x)=Y(x)\bmod 2\pi$ is a circle-valued random variable. If $Y_i$ are IID normal distributed random variables, then $S_n=Y_1+\cdots+Y_n\bmod 2\pi$ are circle-valued random variables.

Exercise. What is $\mathrm{Cov}[S_n,S_m]$?

On the $d$-dimensional torus $\mathbb T^d$, the uniform distribution plays the role of the normal distribution, as the following central limit theorem shows. If $X$ is a random variable with an absolutely continuous distribution on $\mathbb T^d$, then the distribution of $S_n$ converges to the uniform distribution on $\mathbb T^d$. In general:

Theorem 5.8.6 (Central limit theorem for circular random vectors). The sum $S_n$ of IID $\mathbb T^d$-valued random vectors $X_i$ converges in distribution to the uniform distribution on a closed subgroup $H$ of $G=\mathbb T^d$.

Proof. Again $|\phi_X(k)|\le 1$ for all $k\in\mathbb Z^d$. Let $\Lambda$ denote the set of $k$ such that $\phi_X(k)=1$.
(i) $\Lambda$ is a lattice: if $\lambda_1,\lambda_2$ are in $\Lambda$, then $\lambda_1+\lambda_2\in\Lambda$. Indeed, if $\int e^{ik\cdot X(x)}\,dx=1$, then $e^{ik\cdot X(x)}=1$ for all $x$.
(ii) The random variable takes values in a group $H$ which is the dual group of $\mathbb Z^d/\Lambda$.
(iii) Because $\phi_{S_n}(k)=\phi_X(k)^n$, all Fourier coefficients $\phi_{S_n}(k)$ which are not $1$ converge to $0$.
(iv) $\phi_{S_n}\to 1_\Lambda$, which is the characteristic function of the uniform distribution on $H$.

Example. If $G=\mathbb T^2$ and $\Lambda=\{\dots,(-1,0),(0,0),(1,0),(2,0),\dots\}$, then the random variable $X$ takes values in $H=\{(0,y)\mid y\in\mathbb T^1\}$, a one-dimensional circle, and there is no smaller subgroup. The limiting distribution is the uniform distribution on that circle.

Remark. The central limit theorem applies to all compact Abelian groups. Here is the setup:

Definition. A topological group $G$ is a group with a topology so that addition on this group is a continuous map from $G\times G\to G$ and such that the inverse $x\to x^{-1}$ from $G$ to $G$ is continuous.

Example. The real line $\mathbb R$ with addition, or more generally the Euclidean space $\mathbb R^d$ with addition, is a topological group when the usual Euclidean distance defines the topology.

Example. The circle $\mathbb T$ with addition, or more generally the torus $\mathbb T^d$ with addition, is a topological group. It is an example of a compact Abelian topological group.

Example. The general linear group $G=GL(n,\mathbb R)$ with matrix multiplication is a topological group if the topology is the topology inherited as a subset of the Euclidean space $\mathbb R^{n^2}$ of $n\times n$ matrices. Also subgroups of $GL(n,\mathbb R)$, like the special linear group $SL(n,\mathbb R)$ of matrices with determinant $1$ or the rotation group $SO(n,\mathbb R)$ of orthogonal matrices, are topological groups.

Example. Any finite group $G$ with the discrete topology $d(x,y)=1$ if $x\neq y$ and $d(x,y)=0$ if $x=y$ is a topological group.

Definition. A probability space $(G,\mathcal A,P)$ is obtained by taking a compact topological group $G$ with a group invariant distance $d$, the Borel $\sigma$-algebra $\mathcal A$ and the Haar measure $P$. A measurable function from a probability space $(\Omega,\mathcal A,P)$ to the group $(G,\mathcal B)$ is called a $G$-valued random variable. The law of $X$ is the push-forward measure $\mu=X^*P$ on $G$. If $(G,\mathcal A,P)$ is itself taken as the probability space, then $X(x)=x$ is a group-valued random variable; its law is called the uniform distribution on $G$.

Definition. If the group $G$ acts transitively as transformations on a space $H$, the space $H$ is called a homogeneous space. In this case, $H$ can be identified with $G/G_x$, where $G_x$ is the isotropy subgroup of $G$ consisting of all elements which fix a point $x$. A measurable function from a probability space to a homogeneous space $(H,\mathcal B)$ with Borel probability measure is called an $H$-valued random variable. The rotation group has the sphere as a homogeneous space. If $H$ is the $d$-dimensional sphere $(S^d,\mathcal B)$, then $X$ is called a spherical random variable. It is used to describe spherical data.

5.9 Lattice points near Brownian paths

The following law of large numbers deals with sums $S_n$ of $n$ random variables, where the law of the random variables depends on $n$.
Theorem 5.9.1 (Law of large numbers for random variables with shrinking support). Let $X_i$ be IID random variables with uniform distribution on $[0,1]$ and let $A_n=[0,1/n^\delta]$. Then for any $0\le\delta<1$,
$$\lim_{n\to\infty}\frac{1}{n^{1-\delta}}\sum_{k=1}^n 1_{A_n}(X_k)=1$$
in probability. For $\delta<1/2$, the convergence holds almost everywhere.

Proof. For fixed $n$, the random variables $Z_k(x)=1_{[0,1/n^\delta]}(X_k)$ are independent, identically distributed with mean $E[Z_k]=p=1/n^\delta$ and variance $p(1-p)$. The sum $S_n=\sum_{k=1}^n Z_k$ has a binomial distribution with mean $np=n^{1-\delta}$ and variance $\mathrm{Var}[S_n]=np(1-p)=n^{1-\delta}(1-p)$. Note that if $n$ changes, then the random variables in the sum $S_n$ change too, so that we can not invoke the law of large numbers directly. But the tools for the proof of the law of large numbers still work. For fixed $\epsilon>0$ and $n$, the set
$$B_n=\Big\{x\in[0,1]\;\Big|\;\Big|\frac{S_n(x)}{n^{1-\delta}}-1\Big|>\epsilon\Big\}$$
has by the Chebychev inequality the measure
$$P[B_n]\le\mathrm{Var}\Big[\frac{S_n}{n^{1-\delta}}\Big]\Big/\epsilon^2=\frac{\mathrm{Var}[S_n]}{n^{2-2\delta}\epsilon^2}=\frac{1-p}{\epsilon^2\,n^{1-\delta}}\le\frac{1}{\epsilon^2\,n^{1-\delta}}\;.$$
This proves convergence in probability, and the weak law version for all $\delta<1$ follows. In order to apply the Borel-Cantelli lemma, we need to take a subsequence so that $\sum_{k=1}^\infty P[B_{n_k}]$ converges. Take $\kappa=2$ with $\kappa(1-\delta)>1$ and define $n_k=k^\kappa=k^2$. The event $B=\limsup_k B_{n_k}$, the event that we are in infinitely many of the sets $B_{n_k}$, has measure zero. Consequently, for almost every $x$ and for large enough $k$, we are in none of the sets $B_{n_k}$: if $x\notin B$, then
$$\Big|\frac{S_{n_k}(x)}{n_k^{1-\delta}}-1\Big|\le\epsilon$$
for large enough $k$. Because for $n_k=k^2$ we have $n_{k+1}-n_k=2k+1$, we get for $0\le l\le 2k+1$
$$\Big|\frac{S_{n_k+l}(x)}{n_k^{1-\delta}}-1\Big|\le\Big|\frac{S_{n_k}(x)}{n_k^{1-\delta}}-1\Big|+\frac{S_l(T^{n_k}(x))}{n_k^{1-\delta}}\;,\qquad \frac{S_l(T^{n_k}(x))}{n_k^{1-\delta}}\le\frac{2k+1}{k^{2(1-\delta)}}\;.$$
For $\delta<1/2$, the last expression goes to zero, assuring that we have convergence not only along the subsequence $S_{n_k}$ but for $S_n$. Like this, we establish complete convergence, which implies almost everywhere convergence. We know now that $\frac{S_n(x)}{n^{1-\delta}}-1\to 0$ almost everywhere for $n\to\infty$.

Remark. For $\delta\in[1/2,1)$ we have only proven a weak law of large numbers. It seems however that a strong law should work for all $\delta<1$.

Remark. We could not conclude the proof in the same way as in the proof of the strong law of large numbers in the chapter on limit theorems, because $U_n=\sum_{k=1}^n Z_k$ is not monotonically increasing when we also change the random variables with $n$. If we sum up independent random variables $Z_k=n^\delta 1_{[0,1/n^\delta]}(X_k)$, where the $X_k$ are IID random variables, the moments $E[Z_k^m]=n^{(m-1)\delta}$ diverge for $m\ge 2$ as $n\to\infty$. The laws of large numbers do not apply, because the assumption $\sup_n\frac1n\sum_{i=1}^n\mathrm{Var}[X_i]<\infty$ fails.

Here is an application of this theorem in random geometry. Assume we place randomly $n$ discs of radius $r=1/n^{1/2-\delta/2}$ onto the plane, so that their total area without overlap is $\pi nr^2=\pi n^\delta$. The size of the discs depends on the number of discs in the plane.

Corollary 5.9.2. If $S_n$ is the number of lattice points hit by the discs, then for $\delta<1/2$,
$$\frac{S_n}{n^\delta}\to\pi$$
almost surely.

Example. If $\delta=1/3$ and $n=1'000'000$, the discs have radius $r=1/n^{1/3}=1/100$ and we expect $S_n$, the number of lattice point hits, to be about $n^\delta\pi=100\pi$.

Figure. Throwing randomly discs onto the plane and counting the number of lattice points which are hit. The size of the discs depends on the number of discs thrown.
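Theorem 5.9.1 is easy to check by simulation. The sketch below estimates $S_n/n^{1-\delta}$ for $\delta=0.4$ and $n=100000$ (the sample sizes are our own choice); by the theorem the ratio concentrates near $1$:

```python
import random

def shrinking_ratio(n, delta, rng):
    # S_n = #{k <= n : X_k in [0, n^{-delta}]}, X_k IID uniform on [0,1];
    # theorem 5.9.1 asserts S_n / n^{1-delta} -> 1
    threshold = n ** (-delta)
    s = sum(1 for _ in range(n) if rng.random() < threshold)
    return s / n ** (1 - delta)

rng = random.Random(7)
ratios = [shrinking_ratio(100_000, 0.4, rng) for _ in range(20)]
avg = sum(ratios) / len(ratios)   # should be close to 1
```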
Definition. A measure preserving transformation $T$ of $[0,1]$ has decay of correlations for a random variable $X$ satisfying $E[X]=0$ if $\mathrm{Cov}[X,X(T^n)]\to 0$ for $n\to\infty$. If $\mathrm{Cov}[X,X(T^n)]\le e^{-Cn}$ for some constant $C>0$, then $X$ has exponential decay of correlations.

Remark. The proof of theorem (5.9.1) shows that the assumption of independence can be weakened: it is enough to have asymptotically exponentially decorrelated random variables.

Remark. Similarly as with the Buffon needle problem mentioned in the introduction, we can get a limit if we make a large number of experiments. But unlike in the Buffon needle problem, where we keep the setup the same independently of the number of experiments and the radius of the disc stays the same, we adapt the experiment depending on the number of tries: when taking larger sums, we take a smaller radius of the discs.

Lemma 5.9.3. Let $B_t$ be standard Brownian motion. Then the random variables $X_n=B_n\bmod 1$ have exponential decay of correlations.

Proof. $B_n$ has the normal distribution with mean $0$ and variance $n$. The random variable $X_n$ is a circle-valued random variable with wrapped normal distribution. We have $X_{n+m}=X_n+Y_m\bmod 1$, where $X_n$ and $Y_m$ are independent circle-valued random variables. Let
$$g_n(x)=1+2\sum_{k=1}^{\infty}e^{-2\pi^2k^2n}\cos(2\pi kx)=1-\epsilon_n(x)\;,\qquad \|\epsilon_n\|_\infty\le C_0e^{-Cn}\;,$$
be the density of $X_n$, which is also the density of $Y_n$. We want to know the correlation between $f(X_{n+m})$ and $f(X_m)$ for a bounded function $f$ on the circle with $\int_0^1 f(x)\,dx=0$:
$$\int_0^1\int_0^1 f(x)f(x+y)g_m(x)g_n(y)\,dy\,dx\;.$$
With $u=x+y$, this is equal to
$$\int_0^1\int_0^1 f(x)f(u)g_m(x)g_n(u-x)\,du\,dx=\int_0^1\int_0^1 f(x)f(u)(1-\epsilon_m(x))(1-\epsilon_n(u-x))\,du\,dx\le C_1\|f\|_\infty^2\,e^{-Cn}\;,$$
because the term with $1\cdot 1$ vanishes by $\int f\,dx=0$, and all remaining terms carry a factor of size $e^{-Cn}$.

Proposition 5.9.4. If $T:[0,1]\to[0,1]$ is a measure preserving transformation which has exponential decay of correlations for the $X_j$, and $A_n=[0,1/n^\delta]$, then for any $\delta\in[0,1/2)$,
$$\lim_{n\to\infty}\frac{1}{n^{1-\delta}}\sum_{k=1}^n 1_{A_n}(T^k(x))=1$$
for almost all $x$.

Proof. The same proof works. The decorrelation assumption implies that there exists a constant $C_1$ such that
$$\sum_{i\neq j\le n}\mathrm{Cov}[X_i,X_j]\le\sum_{i\neq j\le n}e^{-C|i-j|}\le C_1\;.$$
The sum converges, and so $\mathrm{Var}[S_n]=n\,\mathrm{Var}[X_i]+\sum_{i\neq j\le n}\mathrm{Cov}[X_i,X_j]\le n\,\mathrm{Var}[X_i]+C_1$.

Remark. The assumption that the probability space $\Omega$ is the interval $[0,1]$ is not crucial. Any probability space $(\Omega,\mathcal A,P)$, where $\Omega$ is a compact metric space with Borel $\sigma$-algebra $\mathcal A$ and $P[\{x\}]=0$ for all $x\in\Omega$, is measure theoretically isomorphic to $([0,1],\mathcal B,dx)$ (see [13] proposition (2.17)). The same remark also shows that the assumption $A_n=[0,1/n^\delta]$ is not essential: one can take any nested sequence of sets $A_n\in\mathcal A$ with $P[A_n]=1/n^\delta$ and $A_{n+1}\subset A_n$.

We can apply this proposition to a lattice point problem near the graph of one-dimensional Brownian motion, where we have a probability space of paths and where we can make a statement about almost every path in that space. This is a result in the geometry of numbers for connected sets with fractal boundary.
while hold for Pnit does not 1 k Liouville α. (y +b)/2) = (k. However. A similar result can be shown for other dynamical systems with strong recurrence properties. But if (x. 1] × [−1. if the ql are sufficiently far apart. 1/n] with transition probabilities. Mixed with probability theory it is a result in the random geometry of numbers. The theorem we have proved above belongs to the research area of geometry of numbers.5.3). such that any 1/n1+δ neighborhood of the graph of B over [0. v+2l) then (x−u. By point symmetry also (a. 1]. By convexity ((x+a)/2. where fn is bounded away from 1 and where fn do not converge to 1. The random variables Xi have exponential decay of correlations as we have seen in lemma (5. we have fn = n1−δ k=1 1An (T (x)) near 1 for arbitrary large n = ql . if the lattice has a minimal spacing distance of 1/n.9. y) = (u+2k. For any 0 ≤ δ < 1/2. Proof.9. −v) is in the set M . 1] contains at least C/n1−δ lattice points. 2l). Selected Topics Corollary 5. there exists a constant C. there are two different points (x. Bt+1/n mod 1/n is not independent of Bt but the Poincar´e return map T from time t = k/n to time (k + 1)/n is a Markov process from [0. (a. Proof. l) is in M . Because the area is > 4. where pl /ql is the periodic approximation of δ. y−v) = (2k. It holds for example for irrational rotations with T (x) = x + α mod 1 with Diophantine α. Assume Bt is standard Brownian motion.6 (Minkowski theorem). Remark. One can translate all points of the set M back to the square Ω = [−1. 1/n] to [0.9. b) = (−u. This is the lattice point we were looking for. For any irrational α. b) which have the same identification in the square Ω. A prototype of many results in the geometry of numbers is Minkowski’s theorem: Theorem 5. there are arbitrary large n. .340 Chapter 5. A convex set M which is invariant under the map T (x) = −x and with area > 4 contains a lattice point different from the origin. y).
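Minkowski's theorem can be illustrated with a brute-force search: for an origin-symmetric convex set of area larger than 4, scanning a window of integer points finds a nonzero lattice point. The predicate and the sample ellipse below are illustrative choices, not from the text.

```python
def nonzero_lattice_point(inside, search_radius=10):
    """Return the first integer point (p, q) != (0, 0) inside the window
    [-search_radius, search_radius]^2 that lies in the given set."""
    for p in range(-search_radius, search_radius + 1):
        for q in range(-search_radius, search_radius + 1):
            if (p, q) != (0, 0) and inside(p, q):
                return (p, q)
    return None

# An origin-symmetric convex ellipse x^2/a^2 + y^2/b^2 <= 1 has area pi*a*b;
# with a = 3, b = 0.5 the area is about 4.71 > 4, so Minkowski's theorem
# guarantees a nonzero lattice point.
a, b = 3.0, 0.5
pt = nonzero_lattice_point(lambda x, y: x * x / a ** 2 + y * y / b ** 2 <= 1.0)
print(pt)
```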
Figure. How many 1/n-lattice points are there in a Wiener sausage, in a 1/n^(1+δ) neighborhood of the path?

Figure. A convex, symmetric set M. The symmetry and convexity allow to conclude the existence of a lattice point in M.

Figure. Translate all points back to the square [−1, 1] × [−1, 1] of area 4. One obtains overlapping points. The theorem of Minkowski assumes that the area is larger than 4; for illustration purposes, the area has been chosen smaller than 4 in this picture.

There are also open questions:
• The Gauss circle problem asks to estimate the number of 1/n-lattice points g(n) = πn² + E(n) enclosed in the unit disk. One believes that an estimate E(n) ≤ Cn^θ holds for every θ > 1/2. The smallest θ for which one knows the estimate is θ = 46/73.
• For a smooth curve of length 1 which is not a line: is there a result for δ < 1?
• If we look at Brownian motion in R^d, we have a similar result as for the random walk, but we need δ < 1/3.

5.10 Arithmetic random variables

Because large numbers are virtually infinite (we have no possibility to inspect all of the numbers from Ω_n = {1, . . . , n = 10^100} for example), functions like X_n(k) = k² + 5 mod n are accessible on a small subset only. The function X_n behaves as a random variable on an infinite probability space. If
. With this notion. Selected Topics we could find the events Un = {Xn = 0 } easily. in the case of π. n − 1 }. . n) mod q(k. Can we statistically recover the factors of n from O(log(n)) data points (kj . . we use the notion of asymptotically independence. . . . whether this sequence is an arithmetic sequence. . which can be computed with a fixed number of arithmetic operations. To deal with ”number theoretical randomness”. then Xn (k) = xk on Ωn = {0. Pn ) for which Ωn ⊂ Ωn+1 and An is the set of all subsets of Ωn . For example Xn (x) = ((x5 − 7) mod 9)3 x − x2 mod n defines a sequence of arithmetic random variables on Ωn = {0. It is unknown how much information can we get from a large integer n with finitely many computations. where xj = n mod kj for example? As an illustration of how arithmetic complexity meets randomness. It would be a surprise. Example. . 2 or the M¨obius function µ(n) appear ”random”. Asymptotically independent random variables approximate independent random variables in the limit n → ∞. the digits xn of the decimal sequence of π defines a sequence of number theoretical random variables Xn (k) = xn for k ≤ n. If there exists a constant C such that Xn on {0. . For these functions. xj ). Also other deterministic sequences √ like the decimal expansions of π. . Definition. A finite but large probability space Ωn can be explored statistically and the question is how much information we can draw from a small number of data. n − 1 }. comparisons. greatest common divisor and modular operations. An example is a sequence Xn of integer valued functions defined on Ωn = {0. . If xn is a fixed integer sequence. n } is computable with a total of less than C additions. where p and q are polynomials in two variables. Yl (k) = X(k + l) behave very much as independent random variables. For example. . . Y1 (k) = X(k + 1). even so there is no fixed probability space on which the sequences form a stochastic process. . 
we call X a sequence of arithmetic random variables. the sets X −1 (a) = {X(k) = a } are in general difficult to compute and Y0 (k) = X(k). then factorization would be easy as its factors can be determined from in Un . . An . . n) . it is not known. multiplications. . n − 1 } is a sequence of number theoretical random variables. . Example. we consider in this section examples of number theoretical random variables. if one could compute xn with a finite nindependent number of basic operations. These functions belong to a class of random variables X(k) = p(k.342 Chapter 5. Both have the property that they appear to be ”random” for large n. However. we can study fixed sequences or deterministic arithmetic functions on finite probability spaces with the language of probability. . . . A sequence of number theoretical random variables is a collection of integer valued random variables Xn defined on finite probability spaces (Ωn .
Pn ).

Remark. Unlike for discrete time stochastic processes Xn, where all random variables Xn are defined on a fixed probability space (Ω, A, P), an arithmetic sequence of random variables Xn uses different finite probability spaces (Ωn, An, Pn).

Remark. Because computing gcd(x, y) needs less than C(x + y) basic operations, we have included it too in the definition of arithmetic random variables. Arithmetic functions are a subset of the complexity class P of functions computable in polynomial time. The class of arithmetic sequences of random variables is expected to be much smaller than the class of sequences of all number theoretical random variables.

Definition. If lim_{n→∞} E[Xn] exists, then it is called the asymptotic expectation of a sequence of arithmetic random variables. If lim_{n→∞} Var[Xn] exists, it is called the asymptotic variance. If the law of Xn converges, the limiting law is called the asymptotic law.

Proposition 5.10.1. The asymptotic expectation Pn[S1] = En[X1] is 6/π². In other words, the probability that two random integers are relatively prime is 6/π².

Proof. Consider the arithmetic random variables Xd = 1_{Sd} on Ωn = [1, n]², where Sd = {(n, m) | gcd(n, m) = d}. Because there is a bijection φ between S1 on [1, n]² and Sd on [1, dn]² realized by φ(j, k) = (dj, dk), we have |S1|/n² = |Sd|/(d² n²). This shows that En[Xd]/En[X1] has the limit 1/d² for n → ∞, so that P[Sd] = P[S1]/d². To know P[S1], we note that the sets Sd form a partition of N², also when restricted to Ωn. Therefore

P[S1] · (1 + 1/2² + 1/3² + · · · ) = P[S1] · π²/6 = 1 ,

so that P[S1] = 6/π².

Example. On the probability space Ωn = [1,
n] × [1. . k) = 1 is 6/π 2 = 0. in the limit n → ∞. k) in the finite probability space [1. A large class of arithmetic random variables is defined by Xn (k) = p(n. Show that the asymptotic expectation of the arithmetic random variable Xn (x. Yn are independent. The are called asymptotically uncorrelated. . The probability that gcd(j. n]2 is infinite. . y) = gcd(x. if for all I. . y) on [1. are called uncorrelated if Cov[Xn . J. n] are called asymptotically independent. .607927 . . . Yn are defined on the same probability spaces Ωn ). if you pick two large numbers (j. . . ∈ J] − P[ ∈ I] P[ ∈ J] → 0 n n n n . k) at random. Two sequences Xn . if their asymptotic correlation is zero: Cov[Xn . . Yn Xn Yn Xn ∈ I. n] is painted black if gcd(j. So. (where Xn . . Yn ] → 0 for n → ∞. Selected Topics Figure. . Exercise. k) on Ωn = {0. The probability that two random integers are relatively prime is 6/π 2 . Yn of arithmetic random variables. We will look more closely at the following two examples: 1) Xn (k) = n2 + c mod k 2) Xn (k) = k 2 + c mod n Definition. . we have P[ for n → ∞. n − 1 } where p and q are not simultaneously linear polynomials. . A cell (j. . Y of arithmetic random variables with values in [0. Two sequences X. the random variables Xn . Definition. Y of arithmetic random variables are called independent if for every n. Example. . k) mod q(n. Yn ] = 0. Two sequences X. k) = 1. the change to have no common divisor is slightly larger than to have a common divisor. .344 Chapter 5. .
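The value 6/π² of proposition (5.10.1) is easy to probe numerically by counting coprime pairs in [1, n]². This small check is illustrative only and not part of the text.

```python
from math import gcd, pi

def coprime_fraction(n):
    """Fraction of pairs (j, k) in [1, n]^2 with gcd(j, k) = 1."""
    hits = sum(1 for j in range(1, n + 1)
                 for k in range(1, n + 1) if gcd(j, k) == 1)
    return hits / (n * n)

print(coprime_fraction(300), 6 / pi ** 2)  # both close to 0.6079...
```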
Remark. the number of solutions of (x. where c is a constant and p(n) is the n’th prime number. then X. Y are asymptotically independent. Example. then X. There is a clear correlation between the two random variables. 23]. If there exist two uncorrelated sequences of arithmetic random variables U. Yn /n ∈ Jn is In  · Jn . Lets look at the distribution of the random vector (Xn . Having asymptotic correlations between sequences of arithmetic random variables is rather exceptional. Yn (k) = 5k + 3 modulo n in the case n = 2000.10. The figure shows the points (Xn (k). Two arithmetic random variables Xn (k) = k mod n and Yn (k) = ak + b mod n are not asymptotic independent.5. p This means that the probability that Xn /n ∈ In . Arithmetic random variables 345 Remark. Find the correlation of Xn (k) = k mod n and Yn (k) = 5k + 3 mod n. Yn (k)) for Xn (k) = k. Yn ) in an example: Figure. Y are asymptotically uncorrelated. . If two random variables are asymptotically independent. Proof: by a lemma of Merel [69. Consider the two arithmetic variables Xn (k) = k and Yn (k) = ck −1 mod p(n) . If the same is true for independent sequences U. Most of the time. Exercise. we observe asymptotic independence. Here are some examples: Example. y) ∈ I × J of xy = c mod p is IJ + O(p1/2 log2 (p)) . they are asymptotically uncorrelated. V such that Un − Xn L2 (Ωn ) → 0 and Vn − Yn L2 (Ωn ) → 0. The random variables Xn and Yn are asymptotically independent. V of arithmetic random variables.
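The exercise above can be explored numerically. In the continuum limit a short computation with the five parallel linear branches of the graph of Y gives correlation 1/5, and the discrete computation below (an illustrative sketch, not from the text) lands close to that value, confirming that the two sequences are not asymptotically uncorrelated.

```python
def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

n = 2000
xs = list(range(n))
ys = [(5 * k + 3) % n for k in xs]  # Y_n(k) = 5k + 3 mod n
print(correlation(xs, ys))  # clearly away from 0, close to 1/5
```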
the multivalued function X −1 is probably hard to compute in general because if we could compute it fast. . Therefore. instead of n2 + c mod k. we could factor integers fast. Yn (k)) for Xn (k) = k. Indeed. these regions become smaller and smaller for n → ∞. Yn (k) = k 2 + 3 in the case n = 2001. we can also rescale the image. . where p is the 200’th prime number p(200) = 1223. The picture shows the points {(k. 1]. We see the points (Xn (k). Lets start with an experiment: Figure. Elements in the set X −1 (0) are the integer factors of n. . we will show that Xn . we look at Xn (k) = n n n mod k = −[ ] k k k on Ωm(n) = {1. n} is equivalent to Xn (k) = n mod k on {0. 1/k) mod p }. where [x] is the integer part of x. . Illustration of the lemma of Merel. To study the distribution of the arithmetic random variable Xn . . Nonlinear polynomial arithmetic random variables lead in general to asymptotic independence. 2 The random variable Xn (k) = (n √ + c) mod k on {1. . The random variable Yn = Xn (x · Ωn ) can be extended from the discrete set {k/Ωn )} to the interval [0. Yn are asymptotically independent random variables. Selected Topics Figure. Because factoring is a well studied NP type problem. . 1]. . . m(n) }. where m(n) = √ n − c. . After the rescaling the sequence of random variables is easier to analyze. .346 Chapter 5. . [ n − c] }. so that the range in the interval [0. Even so there are narrow regions in which some correlations are visible. .
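The near-uniformity claimed in the proposition below can be checked empirically: for a large n, the rescaled values (n mod k)/k with k up to √n spread out evenly over [0, 1). The script is an illustrative sketch, not from the text.

```python
import math

n = 10_000_000
K = math.isqrt(n)
values = [(n % k) / k for k in range(2, K + 1)]  # X_n(k) rescaled into [0, 1)

mean = sum(values) / len(values)
below_half = sum(1 for v in values if v < 0.5) / len(values)
print(mean, below_half)  # both near 1/2 if the law is close to uniform
```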
Figure. . then the law µn of the random variables Xn (x) = (x. 1]. Proof. b] × T1 ) is the Lebesgue measure of {(x. b] × [d. n].10. fn (x)) converges weakly to the Lebesgue measure µ = dxdy on [0. Arithmetic random variables 347 Proposition 5. Lemma 5. 1] × T1 . the data points appear random. t t To show the asymptotic independence of Xn with any of its translations. We can use lemma (5. the slope of the graph becomes larger and larger for n → ∞. Because µn ([a. .5.3) below. The points are located on the graph of the circle map fn (t) = n n −[ ].10. we restrict the random vectors to [1. . 1/na ] with a < 1. 1]. b] × [c. For smaller values of k. Fix an interval [a. c + dy]) and µn ([a. The rescaled arithmetic random variables Xn (k) = n n n mod k = −[ ] k k k converge in law to the uniform distribution on [0.10. 1].3. b] in [0. y) ∈ [a. When rescaling the argument [0. Data points (k. Let fn be a sequence of smooth maps from [0.10. b]} which is equal to b − a. d + dy]) . Proof.2. n mod k ) k for n = 10′ 000 and 1 ≤ k ≤ n. The functions fnr (k) = n/(k+r)−[n/(k+r)] are piecewise continuous circle maps on [0. 1]. y) Xn (x. 1] to the circle T1 = R/Z for which (fn−1 )′′ (x) → 0 uniformly on [0. . we only need to compare µn ([a.
. where k ∈ [0. Polynomial maps like T (x) = x2 + c are used as pseudo random number generators for example in the Pollard ρ method for factorization [87]. c + dy]) − µn ([a. Xn+1 (k) = T (Xn (k)). Remark. Pna Proof. c + dy]) is bounded above by (fn−1 )′ (c) − (fn−1 )′ (d) ≤ (fn−1 )′′ (x) which goes to zero by assumption. But µn ([a. na ] with a < 1 converges weakly to the Lebesgue measure. . . Figure.10. To do so. Y (k)) which is supported on the curve t 7→ (X(t). Y (t)). 1] × T1 . Proof of the lemma. . n} For every integer r > 0. n − 1} defined by X0 (k) = k. . In that case. When rescaled. the random variables X(k). this curve is the graph of the circle map fn (x) = 1/x mod 1 The result follows from lemma (5. . . one considers the random variables {0.3). X(k + r1 ). Remark.4. b] × [c. b] × [c. Y (k) = X(k + r) are asymptotically independent and uncorrelated on [0.10. Similarly. Y (k)) converge weakly to the Lebesgue measure on the torus. X(k + r2 ). . The condition f ′′ /f ′2 → 0 assures that the distribution in the y direction smooths it out. we could show that the random vectors (X(k). Already one polynomial map produces randomness asymptotically as n → ∞. . .348 Chapter 5. . . Selected Topics in the limit n → ∞. Let c be a fixed integer and Xn (k) = (n2 + c) mod k on {1. we first R 1 Pna look at the measure µn = 0 j=1 δ(X(k). d+dy d c+dy c a b Theorem 5. na ]. . X(k + rl )) are asymptotically independent. 0 < a < 1. We have to show that the discrete measures j=1 δ(X(k). The measure µn with support on the graph of fn (x) converges to the Lebesgue measure on the product space [0.
. T (x)) in {1. Figure.3) for the circle maps fn (x) = p(nx) mod n on [0. because the log log(n) bound in the . If factorization was a NP complete problem. Arithmetic random variables 349 Theorem 5.5.10. The graph (x. n} × {1. The slope of the graph of p(x) mod n becomes larger and larger as n → ∞. 1]. Remark. n]2 . The M¨obius function is a function on the positive integers defined as follows: the value of µ(n) is defined as 0. Again use lemma (5. . µ(14) = 1 and µ(18) = 0 and µ(30) = −1. p(x)) mod n converges in law to the Lebesgue measure on [0. then x2 = y 2 mod n. . It is now believed that M (n)/ p n is unbounded but it is hard to explore this numerically. Proof. The reason is that taking square roots modulo n is at least as hard as factoring is the following: if we could find two square roots x. n) of n. n] produces essentially a random value p(k) mod n. Also here. This would lead to factor gcd(x − y. For example. the push forward of the Lebesgue measure on [0. if it contains k distinct prime factors. This fact which had already been known by Fermat. n} has a large slope on most of the square.10. y of a number modulo n. . n]. then factorization would be in the complexity class P of tasks which can be computed in polynomial time. . To prove the asymptotic independence. The map can be extended to a map on the interval [0. if n has a factor p2 with a prime p and is (−1)k . then inverting those maps would be hard. If p is a polynomial of degree d ≥ 2. .5. we deal with random variables which are difficult to invert: if one could find Y −1 (c) in O(P (log(n)) times steps. Remark. n] under the map f (x) = (x. The random variables X(k) = k and Y (k) = p(k) mod n are asymptotically independent and uncorrelated.10. one has to verify that in the limit. . Choosing an integer k ∈ [0. The Mertens conjecture claimed hat √ M (n) = µ(1) + · · · + µ(n) ≤ C n √ for some constant C. . then the distribution of Y (k) = p(k) mod n is asymptotically uniform.
Why would the law of the iterated logarithm for the M¨ obius function imply the Riemann hypothesis? Here is a sketch of the argument: the Euler product formula .for example for n = 10100 . because if it were true. The question is probably very hard. the connection with the M¨ obius functions produces a convenient way to formulate the Riemann hypothesis to nonmathematicians (see for example [14]). 1}. Is this sequence asymptotically independent? Is the sequence µ(n) random enough so that the law of the iterated logarithm lim sup n→∞ n X µ(k) p ≤1 2n log log(n) k=1 holds? Nobody knows.009.says ζ(s) = ∞ Y X 1 1 = (1 − s )−1 . It is also known √ that lim sup M (n)/ n ≥ 1. the question about the randomness of µ(n) appeared in classic probability text books like Fellers. This would imply that ζ(s) has also no nontrivial zeros to the left of the axis Re(s) = 1/2 and . In any case.sometimes referred to as ”the Golden key” . one would have M (n) ≤ n1/2+ǫ . Selected Topics law of iterated logarithm is small for p the integers n we are able to compute . Actually. one can conclude from the formula ∞ X µ(n) 1 = ζ(s) n=1 ns that ζ(s) could be extended analytically from Re(s) > 1 to any of the half planes Re(s) > 1/2 + ǫ. By a result of Riemann. With M (n) ≤ n1/2+ǫ . This would prevent roots of ζ(s) to be to the right of the axis Re(s) = 1/2. The fact n M (n) 1X = µ(k) → 0 n n k=1 is known to be equivalent to the prime number √ theorem. one obtains a sequence of number theoretical random variables Xn . which take values in {−1. for all ǫ > 0 which is called the modified Mertens conjecture .06 and lim inf M (n)/ n ≤ −1. s n p n=1 p prime The function ζ(s) in the above formula is called the Riemann zeta function. the function Λ(s) = π −s/2 Γ(s/2)ζ(s) is a meromorphic function with a simple pole at s = 1 and satisfies the functional equation Λ(s) = Λ(1 − s).350 Chapter 5. 
If one restricts the function µ to the finite probability spaces Ωn of all numbers ≤ n which have no repeated prime factors, one has log log(n) less than 8/3 for the integers n we are able to compute. This conjecture is known to be equivalent to the Riemann hypothesis, probably the most notorious unsolved problem in mathematics.
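The quantities µ(n) and M(n) discussed here are easy to compute for small n. The sketch below (illustrative, not from the text) sieves the Möbius function and evaluates the Mertens sum; already M(10) = −1, and |M(10000)|/√10000 stays well below 1.

```python
def mobius_table(n):
    """mu(k) for 0 <= k <= n by sieving: flip the sign once for each prime
    factor, then set mu to zero on multiples of a square of a prime."""
    mu = [1] * (n + 1)
    mu[0] = 0
    is_composite = [False] * (n + 1)
    for p in range(2, n + 1):
        if not is_composite[p]:
            for j in range(p, n + 1, p):
                if j > p:
                    is_composite[j] = True
                mu[j] = -mu[j]
            for j in range(p * p, n + 1, p * p):
                mu[j] = 0
    return mu

mu = mobius_table(10000)
M = lambda n: sum(mu[1:n + 1])  # Mertens function M(n)
print(M(10), abs(M(10000)) / 10000 ** 0.5)  # M(10) = -1; the ratio is far below 1
```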
It has many solutions. The Diophantine equation is homogeneous.11. . It is a symmetric Diophantine equation if k = l. . . Symmetric Diophantine Equations that the Riemann hypothesis were proven. An Euler Diophantine equation is equivalent to a symmetric Diophantine equation if m is odd and k + l is even. While Xk is a deterministic sequence. yk ) is called a symmetric Diophantine equation. A homogeneous Diophantine equation is also called a form. xk and where the polynomial f has integer coefficients. } produces a Pn ”random walk” Sn = k=1 Xk . xk ) = 0. where l(k) is the k nonzero entry in the sequence {µ(1). . . . t with x = 2st. . 5] Remark. µ(2). where p is a polynomial in k integer variables x1 . the behavior of Sn resembles a typical random walk. 100 50 10000 20000 30000 40000 50000 60000 50 100 5. . The upshot is that the Riemann hypothesis could have aspects which are rooted in probability theory. The quadratic equation x2 + y 2 − z 2 = 0 is a homogeneous Diophantine equation of degree 2. The sequence Xk = µ(l(k)). . . If that were true and the law of the iterated logarithm would hold. They are called Pythagorean triples. 36. µ(3). . . . xk ) = p(y1 . as has been known since antiquity already [15]. The Diophantine equation has degree m if the polynomial has degree m. . a Diophantine equation l k X X xm xm j i = i=1 j=1 is called an Euler Diophantine equation of type (k. . A Diophantine equation of the form p(x1 .351 5. . l) and degree m. . [29. Definition. if every summand in the polynomial has the same degree.11 Symmetric Diophantine Equations Definition. More generally. Figure. A Diophantine equation is an equation f (x1 . y = s2 − t2 . . . 4. this would imply the Riemann hypothesis. One can parameterize them all with two parameters s. . z = s2 + t2 . 15. Example.
yk ). For example. m) = (2. . . xk ). This is the case if nk /k! > knm or nk−m > k! k. where more than 2 values hit. . . that the proof also gives nontrivial solutions to simultaneous equations like p(x) = p(y) = p(z) etc. The set S = {p(x) = √ m/2 m m x1 + · · · + xk ∈ [0. Proof. No solutions of x41 + y14 = x42 + y24 = x43 + y34 were known to those authors [29]. Let R be a collection of different integer multisets in the finite set [0. .11. 2p(x) = 2p(y) + 1 has no solution for any nonzero polynomial p in k variables because there are no solutions modulo 2. For general polynomials. . The following theorem was proved in [70]: Theorem 5. . . The proof generalizes to the case. xk ) = p(y1 .. again by the pigeon hole principle: there are some slots. (y1 . .2. For k > m. y for which p(x) = p(y).11. yk ) to a symmetric Diophantine equation p(x) = p(y) is called nontrivial. A solution (x1 . m) = (2. Remark. if {x1 . the Diophantine m m m equation xm 1 + · · · + xk = y1 + · · · + yk has infinitely many nontrivial solutions. It has been realized by JeanCharles Meyrignac. multiple solutions lead to so called taxicab or HardyRamanujan numbers. . By the pigeon hole principle. xk . . m). y. For an arbitrary polynomial p in k variables of degree m. xk } and {y1 . For (k. Already small deviations from the symmetric case leads to local constraints: for example. . yk } are different sets. m = 3: for every r. kn ]  x ∈ R } contains at least nk /k! numbers. .1 (Jaroslaw Wroblewski 2002). 3).. . . Remark. It contains at least nk /k! elements. . Remark. . . z) = x3 + y 3 + z 3 . Theorem 5. there are numbers which are representable as sums of two positive cubes in at least r different ways. Mahler proved that x3 + y 3 + z 3 = 1 has infinitely many solutions. the degree and number of variables alone does not decide about the existence of nontrivial solutions of p(x1 . . there are √ different multisets √ x. . . . . . n]k . .352 Chapter 5. 
53 + 73 + 33 = 33 + 73 + 53 is a trivial solution of p(x) = p(y) with p(x. nor whether there are infinitely many solutions for general (k. . the Diophantine equation p(x) = p(y) has infinitely many nontrivial solutions. There are symmetric irreducible homogeneous equations with . . Hardy and Wright [29] (theorem 412) prove that in the case k = 2. Selected Topics Definition. It is believed that x3 +y 3 +z 3 +w3 = n has solutions for all n. . where p is an arbitrary polynomial of degree m with integer coefficients in the variables x1 . .
m=7 (10. 19. 35. 19. 26.m=11 open problem? k=7. two parametric solutions by Moessner 1939. 64. 1999) k=20. 8. . 48. 14. . 28. .5. Bremner and Brudno parametric solutions) k=3. 44. For homogeneous Diophantine equations. 51. 32. 50. 54. 12. . 43)8 = (5. xk ) = p(x1 . .m=6 (3. 78. 65. 58. 35. 8. 76.m=8 (1. The reason is that (mx1 . 15. 39. 2000) . 48. . 54. which match. 30.m=8 open problem? k=5. 51.11. 52. 30)12 = (95. 16)10 = (92. 38. 14. Symmetric Diophantine Equations 353 k < m/2 for which one has a nontrivial solution. 28. 68. 36. 41. Wroblewski’s theorem covers cases like x2 +y 2 +z 2 = u2 +v 2 +w2 or x3 + y 3 + z 3 + w3 = a3 + b3 + c3 + d3 . 23. 15]: k=2. n]k is the law of the random variable defined on the finite probability space Ω. 66. 41. 16. Sources are [71. 46. then the density fX is larger than 1 at some point. 82. 71. [61]. .m=10 (95. 10.m=7 open problem? k=4. 47. mxk ) is a solution too. 67)5 ([61]. 38. 88. . 55. 28. 5)10 (Randy Ekl.m=22 (85. 3)12 (Greg Childers. 38. 42. Remark. . 90. Here are examples of solutions. . 34. 60. 149)7 = (15. 2000) k=7. 20. 18)7 ([61]) k=5. 5). 4)21 = (77. . gave algebraic solutions in 1772 and 1778) k=2. 28. 32. 43. 16. 41)8 . 73. 31. 89.m=3 (3. xk ) to obtain infinitely many. . 20. 30. 73. 35.m=5 (3. 28. 15. 91. 71. 77. 11. 54. 51. 48. 58. 45. 27)11 = (66. 17. 48. 134)4 (Euler. 129. 61. 23. So. it is enough to find a single nontrivial solution (x1 .m=9 (192. The continuum analog is that if a random variable X on a domain Ω takes values in [a. 56. 54. 47. 36.m=12 (99. 101.m=4 (59. 158)4 = (133. . 7)11 (Nuutti Kuosa. 12)9 (Randy Ekl. 23)6 ([29]. 68)10 = (17. 63. The law of a symmetric Diophantine equation p(x1 . 50. there have to be different integers. 34. 67)10 ([61]) k=7.1997) k=6.m=21 (76. 58. 61. 14. b] and b − a is smaller than the area of Ω. 55. 13. .m=13 open problem? k=8. 26)9 = (180. 11. .m=11 (67. 74.m=7 (8. 65. 49. 20. Remark. 17. 3)21 (Greg Childers. 3. 19)7 = (2. 67. 10. 
175.m=5 (open problem ([36]) all sums ≤ 1. 32. 11)22 = (83. 34. 77. 79. 64. 22)6 = (10. 25. 23. 16. 116. 11. SwinnertonDyer) k=3. 12. 11. k=5. 21. 9. . xk ) with domain Ω = [0. 23)6 (Subba Rao [61]) k=6. 4. 61. It is believed that for k > m/2. 8. 6. for any m 6= 0. An example is p(x. 6)22 (Greg Childers. 36. 70. there are infinitely many solutions and no solution for k < m/2. 85. 49. 15. 1997) k=5. 3) = p(4.Subba Rao. 77. 70. 42.m=10 (1. . 25. 32. . 37. 30.02 · 1026 have been tested) k=3. 123. . 74. 72. 146)7 (Ekl) k=4. 17. Wroblewski’s theorem holds because the random variable has an average density which is larger than the lattice spacing of the integers. Remark. Definition.m=10 open problem k=6. . 47. 62)5 = (24. 73. 22)6 = (10. y) = x5 − 4y 5 which has the nontrivial solution p(1. 60. 74. 24. 2000) k=22. 32. 29.
there should be no solutions. ~y of symmetric Diophantine equations g(~x) = g(~y) with g(~x) = m xm 1 +· · ·+xk .. xk ) ∈ [nk a. Proof. below. Above this line.b (n) = Fn (b) − Fn (a) . xk )/nk on the finite probability spaces Ωn = [0. The points above the diagonal beat Wroblewski’s theorem. . . . . P). . B. nk b] . Selected Topics Figure.b = {x ∈ [0. . . . . . . . . . The random variables Xn (x1 . . there are solutions. . . . Definition.. Wroblewski’s theorem assures that for k > m. xn ) = p(x1 . nk where Fn is the distribution function of Xn . This means Sa. Let Sa. . b) }. Given a polynomial p(x1 . . . xk ) satisfying p(x1 . . xk ) with integer coefficients of degree k. . 1]k  X(x1 . there should be nontrivial solutions.354 Chapter 5. where Aa. xk ) on the probability space ([0. .b (n)/nk is a Riemann sum approximation of the integral F (b) − F (a) = Aa. it is clearly a piecewise smooth function. xk ) ∈ (a. .b 1 dx. . The problem has a probabilistic flavor because one can look at the distribution of random variables in the limit n → ∞: Lemma 5. Known cases of (k. 1]k . . where B is the Borel σalgebra and P is the Lebesgue measure.b (n) be the number of points (x1 . m k What happens in the case k = m? There is no general result known. xk ) = p(x1 . . By the lemma. n]k converge in law to the random variable X(x1 .11.. ..3. . . The result follows from the fact that Fn (b)−Fn (a) = RSa. The steep line m = 2k is believed to be the threshold for the existence of nontrivial solutions. m) with nontrivial solutions ~x. Lets call the limiting distribution the distribution of the symmetric Diophantine equation.
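The pigeon hole argument behind Wroblewski's theorem can be turned into a direct search: hash the power sums of all multisets and report the first collision. For k = 3, m = 3 a collision already shows up among triples in [1, 12], for instance via the taxicab identity 1³ + 12³ = 9³ + 10³ padded with a common entry. The code is an illustrative sketch, not from the text.

```python
from itertools import combinations_with_replacement

def power_sum_collision(k, m, N):
    """Search multisets {x_1 <= ... <= x_k} in [1, N] for two different ones
    with the same value of x_1^m + ... + x_k^m (a pigeon hole collision)."""
    seen = {}
    for xs in combinations_with_replacement(range(1, N + 1), k):
        s = sum(x ** m for x in xs)
        if s in seen:
            return seen[s], xs, s
        seen[s] = xs
    return None

hit = power_sum_collision(3, 3, 12)
print(hit)  # two different triples of cubes with equal sum
```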
1]2 .4. Remark. The asymptotic distribution of p(x. This works often because the polynomials p modulo some integer number L do not cover all the conjugacy classes. the area xm − y m ≤ s is piecewise differentiable and the derivative stays bounded. We can assume without loss of generality that the first variable is the one with a smaller degree m. 4n4 ]. The polynomial maps Ωn to the interval [0. The pigeon hole principle shows that there are matches. n] × [0. The asymptotic distribution of p(x. then there are nontrivial solutions to p(x) = p(y).11. we use polar coordinates F (s) = {(r. In general. n] with n4+1/3 . y. n]k ⊂ Rk which contains nk−β points which is mapped by X into [0. we try to find a subsets Ω ⊂ [0. n]k/m × [0. Symmetric Diophantine Equations m 1/m Example. θ)  r2 cos(2θ)/2 ≤ s }. n]k−1 with nk+k/m−1 = nk+ǫ elements into the interval [min(p).5. where F (s) = O(s1/m ) near s = 0. Corollary 5.355 5. .11. If p is a polynomial of k variables of degree k. Integration shows that F (s) = Cs2 + f (s). we definitely have to know the density of X on subsets. y) = x2 − y 2 were plotted in the first part of these notes. subsets or points. If the variable x1 appears only in terms of degree k − 1 or smaller. . If the density fp of the random variable p on a surface Ω ⊂ [0. The density is bounded. n]k is larger than k!. Apply the pigeon hole principle. We can see geometrically that (π/4)s2 ≤ FX (s) ≤ s2 . y) = x2 − y 2 is unbounded near s = 0 Proof. We have to understand the laws of the random variables X(x. we have F (s) = P [X(x) ≤ s] = P [x ≤ s] = s /n. This includes surfaces. z. n] × [0. max(p)] ⊂ [−Cnk . Theorem 5. y) = x2 + y 2 and p(x. . Example. Much of the research in this part of Diophantine . The distribution for k = 2 for p(x. Let us illustrate this in the case p(x. For m > 2. where the density of X is large. then then there are solutions to the symmetric Diophantine equation p(x) = p(y). To decide about this. . w) = x4 + x3 + z 4 + w4 . 
(Generalized Wroblewski) Wroblewski’s result extends to polynomials p of degree k for which at least one variable appears in a term of degree smaller than k. n4/3 ] × [0. Consider the finite probability space Ωn = [0. For Y (x. The distribution function of p(x1 . where f (s) grows logarithmically as − log(s). Cnk ]. then the polynomial p maps the finite space [0.11. x2 . For k = 1. nm−α ]. y) = x2 +y 2 is bounded for all m. y) = x2 − y 2 . Proof. If the density f = F ′ of the asymptotic distribution is unbounded. xk ) is a k ′ th convolution product Fk = F ⋆ · · · ⋆ F . y) = x2 +y 2 on [0.
. xk ). y.5 Figure. y. the finite probability space Ω = {(x1 . X(x. For given n. Define the random variable m 1/m . X(x) = (xm 1 + · · · + xk ) We expect that X takes values 1/nk/m = nm/k close to an integer for large n because Y (x) = X(x) mod 1 is expected to be uniformly distributed on the interval [0. xk )  0 ≤ xi < n1/m } contains nk/m different vectors x = (x1 . X(x. . Figure. .356 Chapter 5. . How close do two values Y (x). Show that there are infinitely many integers which can be written in non trivially different ways as x4 + y 4 + z 4 − w2 . 2 3 2.5 1 0. the Fermat Diophantine equation x3 + y 3 = z 3 . z) = x3 + y 3 − z 3 Exercise. . Cases like m = 3. . 1) as n → ∞. Remark. X(y)m . k = 2.5 2 1 1.5 1. If X(y)m−1 ǫ < 1. so that Y (x) = Y (y)? Assume Y (x) = Y (y) + ǫ. Here is a heuristic argument for the ”rule of thumb” that the Euler m m Diophantine equation xm 1 + · + xk = x0 has infinitely many solutions for k ≥ m and no solutions if k < m. Y (y) have to be. With the expected ǫ = nm/k and X(y)m−1 ≤ Cn(m−1)/m we see we should have solutions if k > m − 1 and none for k < m − 1. . then it must be zero so that Y (x) = Y (y).5 0. Selected Topics equations is devoted to find such subsets and hopefully parameterize all of the solutions. Then X(x)m = X(y)m + ǫX(y)m−1 + O(ǫ2 ) with integers X(x)m . z) = x3 + y 3 + z 3 . .
1088. 258. 1999) m = 8. m = 3k = 2: xm + ym = z m impossible. 90)8 = 14098 (Scott Chase) m = 9. like 103 + 93 = 13 + 123 (Viete 1591). 5)9 = 1039 (JeanCharles Meyrignac.12. choose n so large that the n’th Fourier approximation Xn (x) = nk=−n φX (n)einx satisfies X − Xn 1 < ǫ.1 (Riemann Lebesguelemma). 748. 68. A. 266. k = 7: (525. We see nevertheless that probabilistic thinking can help to bring order into the zoo of Diophantine equations. some written in the Lander notation m xm = (x1 . 89. 13. 65. 524. we have φm (Xn ) = E[eimXn ] = 0 so that φX (m) = φX−Xn (m) ≤ X − Xn 1 ≤ ǫ . m = 4. k = 4 275 + 845 + 1105 + 1335 = 1445 (Lander Parkin 1967). 702.12 Continuity of random variables Let X be a random variable on a probability space (Ω. by Fermat’s theorem. If X ∈ L1 . k = 12. then φX (n) → 0 for n → ∞. k = 3: like x5 + y5 + z 5 = w5 is open m = 4. m = 6. 402. The decision about singular or absolute continuity is more subtle.1997) 5. 430. k = 4: 304 + 1204 + 2724 + 3154 = 3534 . (91. how can we deduce from the characteristic function whether X is absolutely continuous or not? The first question is completely answered by Wieners theorem given below. k = 6: (74. m = 7. . 478. 894. Continuity of random variables 357 are tagged as threshold cases by this reasoning. 439. . 234. m = 2k = 2: x2 + y2 = z 2 Pythagorean triples like 32 + 42 = 52 (1900 BC). This argument has still to be made rigorous by showing that the distribution of the points f (x) mod 1 is uniform enough which amounts to understand a dynamical system with multidimensional time. 91. 223. 474. . m = 3. 71. How can we see from the characteristic function φX whether X is continuous or not? If it is continuous. Norrie 1911 [36]) m = 5. Here are some known solutions. 19. P).12. xk )m = xm 1 + · · · + xk . Given P ǫ > 0. Proof. k = 5: x6 + y6 + z 6 + u6 + v6 = w6 is open. There is a necessary condition for absolute continuity: Theorem 5. 1077)6 = 11416 . 127)7 = 5687 (Mark Dodrill. . 16. 42. 43. 
k = 3: 26824404 + 153656394 + 187967604 = 206156734 (Elkies 1988 [24]) m = 5. 413. k = 3: x3 + y3 + u3 = v3 derived from taxicab numbers. k = 8: (1324. m = 6. 1190. (R.5. . For m > n.
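Identities like the ones listed above can be verified instantly with exact integer arithmetic. A small Python sketch checking three of the entries:

```python
# Verify three entries of the list of known solutions with exact integers.

# m = 5, k = 4 (Lander-Parkin 1967):
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# m = 4, k = 3 (Elkies 1988):
assert 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4

# m = 6, k = 7:
terms = [74, 234, 402, 474, 702, 894, 1077]
assert sum(t**6 for t in terms) == 1141**6
```

Because Python integers have arbitrary precision, these checks are exact and not subject to floating-point error.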
358 Chapter 5. Selected Topics

Remark. The Riemann-Lebesgue lemma cannot be reversed: there are random variables X for which φ_X(n) → 0, but which are not in L¹. This will follow from Corollary 5.12.5 given later on.

Here is an example of a criterion for the characteristic function which assures that X is absolutely continuous:

Theorem 5.12.2 (Convexity). If a_n = a_{-n} satisfies a_n → 0 for n → ∞ and a_{n+1} - 2a_n + a_{n-1} >= 0, then there exists a random variable X ∈ L¹ for which φ_X(n) = a_n.

Proof. We follow [49].
(i) b_n = a_n - a_{n+1} decreases monotonically. Proof: the convexity condition is equivalent to a_n - a_{n+1} <= a_{n-1} - a_n.
(ii) b_n = a_n - a_{n+1} is nonnegative for all n. Proof: b_n decreases monotonically and goes to 0 for n → ∞, because a_n → 0. If some b_n = c < 0, then by (i) also b_m <= c for all m >= n, contradicting b_n → 0.
(iii) Also n·b_n goes to zero. Proof: because Σ_{k=1}^{n} (a_k - a_{k+1}) = a_1 - a_{n+1} is bounded and the summands are positive and decreasing, we must have k(a_k - a_{k+1}) → 0.
(iv) Σ_{k=1}^{n} k(a_{k-1} - 2a_k + a_{k+1}) converges for n → ∞. Proof: summation by parts simplifies this sum to a_0 - a_n - n(a_n - a_{n+1}), which converges to a_0 by (iii).
(v) The random variable Y(x) = Σ_{k=1}^{∞} k(a_{k-1} - 2a_k + a_{k+1}) K_k(x) is in L¹, where K_k(x) is the Féjer kernel with Fourier coefficients K̂_k(j) = 1 - |j|/(k+1) for |j| <= k. Proof: the Féjer kernel is a positive summability kernel and satisfies ||K_k||₁ = (1/(2π)) ∫₀^{2π} K_k(x) dx = 1 for all k. The sum converges by (iv).
(vi) The density Y has the required characteristic function:

φ_Y(n) = Σ_{k=1}^{∞} k(a_{k-1} - 2a_k + a_{k+1}) K̂_k(n) = Σ_{k >= |n|} k(a_{k-1} - 2a_k + a_{k+1}) (1 - |n|/(k+1)) = a_n,

where the last equality follows again by summation by parts.

For bounded random variables, the existence of a discrete component of the random variable X is decided by the following theorem.
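Step (iv) rests on a summation-by-parts identity. The following small Python check confirms it numerically; the convex null sequence a_n = 1/(n+1) is a hypothetical example chosen only for illustration:

```python
# Check the summation-by-parts identity behind step (iv):
#   sum_{k=1}^{n} k*(a_{k-1} - 2a_k + a_{k+1}) = a_0 - a_n - n*(a_n - a_{n+1})
# for the convex sequence a_n = 1/(n+1), which tends to 0.
def a(n):
    return 1.0 / (n + 1)

n = 50
lhs = sum(k * (a(k - 1) - 2 * a(k) + a(k + 1)) for k in range(1, n + 1))
rhs = a(0) - a(n) - n * (a(n) - a(n + 1))
assert abs(lhs - rhs) < 1e-12
```

Since a_n → 0 and n(a_n - a_{n+1}) → 0 by step (iii), the right-hand side converges to a_0.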
5.12. Continuity of random variables 359

Theorem 5.12.3 (Wiener theorem). Let X ∈ L^∞ have law µ supported in [-π, π] and characteristic function φ = φ_X. Then

lim_{n→∞} (1/n) Σ_{k=1}^{n} |φ_X(k)|² = Σ_{x ∈ R} P[X = x]².

Therefore, X is continuous if and only if the Wiener averages (1/n) Σ_{k=1}^{n} |φ_X(k)|² converge to 0.

Proof. We follow [49]. The proof relies on the following lemma.

Lemma 5.12.4. If µ is a measure on the circle T with Fourier coefficients µ̂_k, then for every x ∈ T one has

µ({x}) = lim_{n→∞} (1/(2n+1)) Σ_{k=-n}^{n} µ̂_k e^{ikx}.

Proof. The Dirichlet kernel

D_n(t) = Σ_{k=-n}^{n} e^{ikt} = sin((n + 1/2)t) / sin(t/2)

satisfies D_n ⋆ f(x) = S_n(f)(x) = Σ_{k=-n}^{n} f̂(k) e^{ikx}. The functions

f_n(t) = (1/(2n+1)) D_n(t - x) = (1/(2n+1)) Σ_{k=-n}^{n} e^{-ikx} e^{ikt}

are bounded by 1 and go to zero uniformly outside any neighborhood of t = x. From

lim_{ε→0} ∫_{x-ε}^{x+ε} d(µ - µ({x})δ_x) = 0

follows lim_{n→∞} ⟨f_n, µ - µ({x})δ_x⟩ = 0, so that

⟨f_n, µ⟩ - µ({x}) = (1/(2n+1)) Σ_{k=-n}^{n} µ̂_k e^{ikx} - µ({x}) → 0.
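Wiener's theorem is easy to watch numerically. As an illustration (the example law below is our own, not from the text), take the atomic law P[X = 0] = P[X = π] = 1/2, whose characteristic function φ(k) = (1 + e^{ikπ})/2 equals 1 for even k and 0 for odd k, so the Wiener averages converge to Σ_x P[X = x]² = 1/4 + 1/4 = 1/2:

```python
import cmath

# Atomic law P[X = 0] = P[X = pi] = 1/2:
# phi(k) = (1 + e^{i k pi}) / 2, i.e. 1 for even k and 0 for odd k.
def phi(k):
    return 0.5 * (1 + cmath.exp(1j * k * cmath.pi))

n = 1000
wiener_avg = sum(abs(phi(k)) ** 2 for k in range(1, n + 1)) / n
# Wiener's theorem: the limit equals sum_x P[X = x]^2 = 1/2.
assert abs(wiener_avg - 0.5) < 1e-6
```

For the uniform law on [-π, π], by contrast, φ(k) = 0 for all k ≠ 0, so the Wiener averages vanish identically, consistent with the uniform law being continuous.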
Definition. For a measure µ on [-π, π] define µ*(A) = µ(-A). If µ and ν are two measures on (Ω = T, A), then their convolution is defined as

(µ ⋆ ν)(A) = ∫_T µ(A - x) dν(x)

for any A ∈ A. We have µ̂*(n) = conj(µ̂(n)) and (µ ⋆ ν)^(n) = µ̂(n) ν̂(n).

Remark. If µ = Σ_j a_j δ_{x_j} is a discrete measure, then µ* = Σ_j a_j δ_{-x_j}. Because then (µ ⋆ µ*)({0}) = Σ_j a_j², we have in general

(µ ⋆ µ*)({0}) = Σ_x µ({x})².

Corollary 5.12.5 (Wiener). Σ_{x ∈ T} µ({x})² = lim_{n→∞} (1/(2n+1)) Σ_{k=-n}^{n} |µ̂_k|².

Remark. For a measure on the real line we have similarly

Σ_{x ∈ R} µ({x})² = lim_{R→∞} (1/(2R)) ∫_{-R}^{R} |µ̂(t)|² dt.

Remark. For bounded random variables, we can rescale the random variable so that its values are in [-π, π], so that we can use Fourier series instead of Fourier integrals.

We turn our attention now to random variables with singular continuous distribution. For these random variables, the distribution function F_X does not have a density; nevertheless one does have P[X = c] = 0 for all c. The graph of F_X looks like a Devil's staircase. Here is a refinement of the notion of continuity for measures.

Definition. Given a function h : R → [0, ∞) satisfying lim_{x→0} h(x) = 0. A measure µ on the real line or on the circle is called uniformly h-continuous if there exists a constant C such that for all intervals I = [a, b] the inequality

µ(I) <= C h(|I|)

holds, where |I| = b - a is the length of I. For h(x) = x^α with 0 < α <= 1, the measure is called uniformly α-continuous. It is then the derivative of an α-Hölder continuous function. For general h, one calls the distribution function F uniformly lip-h continuous [89].

Remark. If µ is the law of a singular continuous random variable X with distribution function F_X, then F_X is α-Hölder continuous if and only if µ is uniformly α-continuous.

Theorem 5.12.6 (Y. Last). If there exists a constant C such that

(1/n) Σ_{k=1}^{n} |µ̂_k|² <= C h(1/n)

for all n >= 1, then µ is uniformly √h-continuous, i.e. µ(I) <= C' √(h(|I|)) for some constant C'.
Proof. We follow [58]. The Dirichlet kernel satisfies

Σ_{k=-n}^{n} |µ̂_k|² = ∫∫_{T²} D_n(y - x) dµ(x) dµ(y),

and the Féjer kernel K_n(t) satisfies

K_n(t) = (1/(n+1)) (sin((n+1)t/2) / sin(t/2))² = Σ_{k=-n}^{n} (1 - |k|/(n+1)) e^{ikt} = D_n(t) - Σ_{k=-n}^{n} (|k|/(n+1)) e^{ikt}.

Therefore

0 <= (1/(n+1)) Σ_{k=-n}^{n} |k| |µ̂_k|² = ∫∫_{T²} (D_n(y-x) - K_n(y-x)) dµ(x) dµ(y) = Σ_{k=-n}^{n} |µ̂_k|² - ∫∫_{T²} K_n(y-x) dµ(x) dµ(y),   (5.4)

so that Σ_{k=-n}^{n} |µ̂_k|² >= ∫∫_{T²} K_n(y-x) dµ(x) dµ(y).

A property of the Féjer kernel K_n(t) is that for large enough n there exists δ > 0 such that (1/n) K_n(t) >= δ > 0 if 1 <= n|t| <= π/2.

If µ is not uniformly √h-continuous, there exists a sequence of intervals I_l with |I_l| → 0 and µ(I_l) >= l √(h(|I_l|)). Choose n_l so that 1 <= n_l · |I_l| <= π/2. Using estimate (5.4), one gets

(1/n_l) Σ_{k=-n_l}^{n_l} |µ̂_k|² >= (1/n_l) ∫∫_{T²} K_{n_l}(y-x) dµ(x) dµ(y) >= δ µ(I_l)² >= δ l² h(|I_l|) >= δ l² h(1/n_l).

For large l this contradicts the existence of a constant C such that

(1/n) Σ_{k=1}^{n} |µ̂_k|² <= C h(1/n);

because |µ̂_n| = |µ̂_{-n}|, we can also sum from -n to n, changing only the constant C.
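The standard Cantor distribution gives a concrete singular continuous example (the numerical illustration below is our own): its characteristic function φ(t) = e^{it/2} ∏_{j>=1} cos(t/3^j) does not tend to zero along t = π·3^m, so X has no density by the Riemann-Lebesgue lemma, while the Wiener averages still decay, consistent with P[X = c] = 0 for every c:

```python
import math

# |phi(t)| for the Cantor distribution, X = sum_j eps_j * 2/3^j with fair
# Bernoulli digits eps_j:  |phi(t)| = prod_{j>=1} |cos(t / 3^j)|.
def abs_phi(t, terms=40):
    p = 1.0
    for j in range(1, terms + 1):
        p *= abs(math.cos(t / 3**j))
    return p

# Along t = pi * 3^m the product is bounded away from 0 (no Riemann-Lebesgue
# decay, hence no density):
vals = [abs_phi(math.pi * 3**m) for m in range(2, 8)]
assert min(vals) > 0.4

# Yet the Wiener averages (1/n) * sum_{k<=n} |phi(k)|^2 decrease towards 0,
# consistent with the Cantor distribution being continuous:
def wiener_avg(n):
    return sum(abs_phi(k) ** 2 for k in range(1, n + 1)) / n

assert wiener_avg(2000) < wiener_avg(100) < wiener_avg(10)
```

The rate of decay of these averages is governed by the h-continuity of the Cantor measure, in the spirit of the theorems of Last and Strichartz.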
Theorem 5.12.7 (Strichartz). Let µ be a uniformly h-continuous measure on the circle. Then there exists a constant C such that for all n

(1/n) Σ_{k=1}^{n} |µ̂_k|² <= C h(1/n).

Proof. (The computation of [106, 107] for the Fourier transform was adapted to Fourier series in [52].) In the following computation, we abbreviate dµ(x) with dx:

(1/n) Σ_{k=-n}^{n-1} |µ̂_k|²
 <=(1) e (1/n) Σ_{k=-n}^{n-1} ∫₀¹ e^{-(k+θ)²/n²} dθ |µ̂_k|²
 =(2) e (1/n) Σ_{k=-n}^{n-1} ∫₀¹ e^{-(k+θ)²/n²} ∫∫_{T²} e^{-i(y-x)k} dx dy dθ
 =(3) e ∫∫_{T²} ∫₀¹ (1/n) Σ_{k=-n}^{n-1} e^{-(k+θ)²/n² - i(x-y)k} dθ dx dy
 =(4) e ∫∫_{T²} ∫₀¹ e^{i(x-y)θ - (x-y)²n²/4} (1/n) Σ_{k=-n}^{n-1} e^{-((k+θ)/n + i(x-y)n/2)²} dθ dx dy

and continue

 <=(5) e ∫∫_{T²} e^{-(x-y)²n²/4} (1/n) | Σ_{k=-n}^{n-1} ∫₀¹ e^{-((k+θ)/n + i(x-y)n/2)²} dθ | dx dy
 =(6) e ∫∫_{T²} [ (1/n) ∫_{-∞}^{∞} e^{-(t/n + i(x-y)n/2)²} dt ] e^{-(x-y)²n²/4} dx dy
 =(7) e √π ∫∫_{T²} e^{-(x-y)²n²/4} dx dy
 <=(8) e √π ( ∫∫_{T²} e^{-(x-y)²n²/2} dx dy )^{1/2}
 =(9) e √π ( Σ_{k=0}^{∞} ∫∫_{k/n <= |x-y| <= (k+1)/n} e^{-(x-y)²n²/2} dx dy )^{1/2}
 <=(10) e √π C₁ h(1/n) ( Σ_{k=0}^{∞} e^{-k²/2} )^{1/2}
 <=(11) C h(1/n).

Here are some remarks about the steps done in this computation:
(1) is the trivial estimate e ∫₀¹ e^{-(k+θ)²/n²} dθ >= 1 for |k| <= n.
(2) uses |µ̂_k|² = µ̂_k conj(µ̂_k) = ∫_T e^{-iyk} dµ(y) ∫_T e^{ixk} dµ(x) = ∫∫_{T²} e^{-i(y-x)k} dµ(x) dµ(y).
(3) uses Fubini's theorem.
(4) is a completion of the square.
(5) is the Cauchy-Schwartz inequality.
(6) replaces the sum and the integral ∫₀¹ by ∫_{-∞}^{∞}.
(7) uses ∫_{-∞}^{∞} e^{-(t/n + i(x-y)n/2)²} dt = √π n, because ∫_{-∞}^{∞} e^{-(t/n + b)²} dt = √π n for all n and all complex b.
(8) is Jensen's inequality.
(9) splits the integral over a sum of small intervals, strips of width 1/n.
(10) uses the assumption that µ is h-continuous.
(11) uses that ( Σ_{k=0}^{∞} e^{-k²/2} )^{1/2} is a constant.
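The Gaussian integral used in step (7) is indeed independent of the imaginary shift. A quick Riemann-sum check (with arbitrarily chosen values n = 3 and b = i/2, picked only for illustration):

```python
import cmath
import math

# Step (7): int_{-inf}^{inf} e^{-(t/n + b)^2} dt = sqrt(pi) * n for complex b.
# Riemann sum on the window [-50, 50] with step 0.01, for n = 3, b = 0.5i:
n, b = 3.0, 0.5j
dt = 0.01
s = sum(cmath.exp(-((t * dt) / n + b) ** 2) for t in range(-5000, 5001)) * dt
assert abs(s - math.sqrt(math.pi) * n) < 1e-6
```

The Gaussian tails beyond |t| = 50 are far below machine precision, so the discretized integral matches √π · n essentially exactly.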
C. New York. Berberian. [9] D. Grillenberger. [6] F.R. Scientific American. [2] L.P. Sigmund. [5] A.V. New York. 59:291–307. [4] A. Conway. [10] R. Cornfeld. 1980.. [11] Daniel D. Berlin.K. [3] S. volume 92 of CBMS Regional Conference Series in Mathematics. Ergodic Theory on Compact Spaces. 119:329–347. volume 115 of Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen. 1997. 45(2):267–310. 2005. Akhiezer. [7] J. The classical moment problem and some related questions in analysis. Choudhry. Stroock. and K.R. 1965. Choudhry. M¨ unchen. Lecture Notes in Mathematics 527. The challenge of large numbers. Oldenbourg Verlag. Springer. Measure and Integration. Chung. Symmetric Diophantine systems revisited.K. 1976. MacMillan Company. Acta Arithmetica. London and New York. (Feb). 1991. Symmetric Diophantine systems.Bibliography [1] N. Acta Arithmetica. AMS. Applications of a commutation formula. Wien.B. Cox and V. J. [12] P. SpringerVerlag. 1973. Springer Verlag. Spectral graph theory. volume 96 of Graduate texts in Mathematics. Monographs on Applied Probability and Statistics. A course in functional analysis.Fomin. 365 . 1978. Duke Math. and Ya.Sinai.G.I. [13] M. Hafner publishing company. Chapman & Hall. Point processes. 1965. University Mathematical Monographs. 1982. Arnold. Crandall. S. Deift. 1985.E. [8] I. Stochastische Differentialgleichungen. Ergodic Theory. Denker. Isham.
1969. Wiley. Measure Theory. 1984. Berlin.L. Feller.math. Reprint of the 2003 original [J. Plume. [15] L.C. Prime obsession. Math. [30] J. [28] Martin Gardner. [21] R. 55(1):119–122. Verw. Snell. 1993. 1959. Eisen.366 Bibliography [14] John Derbyshire.. 1968. Comput. History of the theory of numbers. Durrett. 1981. Dreyer. Dickson.E.G. CRC Press. . Oxford. volume 22 of Carus Mathematical Monographs. DC. Duxburry Press. Littlewood G.P. 2004. American Mathematical Society. 1988. Wahrsch. PrenticeHall. volume 70 of Graduate Studies in Mathematics. second edition edition. An Introduction to the Theory of Numbers. Inequalities. [18] J. Modelling with Ordinary Differential equations. Markov Chains. On a4 + b4 + c4 = d4 . Etemadi. Tricks and Puzzles. [19] P. [24] Noam Elkies. Introduction to mathematical probability theory.E. An elementary proof of the strong law of large numbers.Vol. [27] D.II:Diophantine analysis.. 1953. 1959. [23] N. Washington. 2003. Wiley series in probability and mathematical statistics. [22] M. New York. 1996. [17] J. An introduction to probability theory and its applications. fourth edition edition. A mathematical Guide to the BlackScholes Formula. Stochastic processes. 2005. Springer Verlag. [26] W. Freedman. 1966. Gebiete. [29] E. Probability: Theory and Examples. Hardy and G. Inc. Hardy. New York Heidelberg. Graduate Texts in Mathematics. Doob. D. An application of Kloosterman sums. Probability Theory in Finance. 51:828–838. Cambridge at the University Press. 1983. MR1968857]. Doyle and J. Henry Press. Dineen. [20] T. Bernhard Riemann and the greatest unsolved problem in mathematics.M.pdf. [16] S. Wright G. Random walks and electric networks. New York. Springer Verlag. 1994.. New York. [25] N. Elkies. Chelsea Publishing Co. http://www. Boca Raton. Oxford University Press.02/kloos. John Wiley and Sons.edu/ elkies/M259.harvard.H. Z. AMS. Science Magic. Washington. Polya. Doob.H. Dover.
1963. SpringerVerlag. volume 113 of Graduate Texts in Mathematics. [45] V. Halmos. 2006. Philadelphia.367 Bibliography [31] R. 1989. Quantum calculus. Stirzaker. Halmos. [36] Richard K. 1940. SIAM. Hermann. New York. Twelve Lectures on Subjects by his Life and Work. Solomyak. Kac and P. Kahane and R. A graduate course in probability. New York. [39] G. Isaac. Itˆ o and H. 1974. McKean. Springer. [40] W. Berlin. Clarendon PRess. Cambridge at the University Press. [42] O. Hayman. Branching random walk with exponentially decreasing steps and stochastically selfsimilar measures. [37] P. Percolation. [33] G. Lectures on ergodic theory. volume 20 of London Mathematical Society Monographs. Berlin. Inc. Glassey. 1967. Unsolved Problems in Number Theory. [46] JP. Probability: A graduate Course. Ensembles parfaits et s´eries [47] I. [32] J. [43] R. 1987. Karatzas and S. Benjamini and B. Quantum physics. New York. Guy. arXiv PR/0608271. Probability and Random Processes. Grimmet. a functional point of view. 1956. GurelGurevich I. Glimm and A. trigonom´etriques. Springer Verlag. [41] H. Academic Press.Tucker. 1995. Springer texts in statistics. [34] G. London.K. The Pleasures of Probability. Problems and Solutions. 2002. Harcourt Brace Jovanovich. The mathematical society of Japan. Graduate Texts in Mathematics. Brownian motion and stochastic calculus. second printing edition. [44] K. 1996. 2005. second edition. volume 125 of Die Grundlehren der mathematischen Wissenschaften. Gut.G. SpringerVerlag. SpringerVerlag.II. New York. Probability and Mathematical Statistics. . 1991. Springer Verlag. 1989. Measure Theory. Publishers. Springer Verlag. second edition. Cheung. [35] A. The Cauchy Problem in Kinetic Theory. Oxford.H. 3 edition. Salem. Academic Press. Springer. Ramanujan.P. Universitext. Springer Verlag. Diffusion processes and their sample paths.R. 1974. Jaffe.T. Shreve. Hardy. [38] Paul R. 2004. Subharmonic functions I. Grimmet and D.
[62] E. Lukacs and R. Karr. second corrected edition edition. [63] M. Grundbegriffe der Wahrscheinlichkeitsrechnung. 1993. 1984). New York. [53] B. SpringerVerlag.O. [54] N. [60] E. [57] Amy Langville and Carl Meyer. volume 3 of Oxford studies in probability. Time’s arrow: the origins of thermodynamics behavior. Kolmogorov. (99):446–459.. Princeton University Press. 71:233–241. 1986. Googles PageRandk and Beyond. Probability. . 1996.Laha. Serbinowska. Griffin’s Statistical Monogrpaphs and Courses. Poisson processes. Dover publications. Lewis. Springer texts in statistics. 142:406–445. 1968.C. Chelsea. Krengel. Almost alternating sums. Discrete and continuous dynamical systems. Mathematics of Computation.. Knill. Kingman. Clarendon Press. [52] O. 1950. A remark on quantum dynamics. 1933. [56] H. Cambridge University Press. New York. SpringerVerlag. Springer.R. Singular continuous spectrum and quantitative rates of weakly mixing. 1985.F. Inc. Knill. [49] Y. American Mathematical Society. Parken. Katznelson. 2006. pages 158–167. [50] J.H. Berlin. [58] Y. An introduction to harmonic analysis. [61] J. Ergodic Theorems. Reznick K. volume 6 of De Gruyter Studies in Mathematics. Selfridge L. 1993. Mackey. Walter de Gruyter. Quantum dynamics and decompositions of singular continuous spectra. Berlin. [59] J.. volume 1158 of Lecture Notes in Math. Lieb and M. 1992. 1998. Math. Applications of Characteristic Functions.C. A survey of equal sums of like powers. Amer.368 Bibliography [48] A. New York: Oxford University Press.L. Lander. Kunita. Last. English: Foundations of Probability Theory. 4:33–42. Helvetica Physica Acta. In Stochastic processes—mathematics and physics (Bielefeld. New York. Monthly. Analysis. 1990. 1967. Berlin. Bryant and M. An elementary approach to Brownian motion on manifolds. Loss. volume 14 of Graduate Studies in Mathematics. Stochastic flows and stochastic differential equations. 1998.F. [55] U. [51] O.J. T. 1933. Func. 1996.G. Anal. J.
Records of equal sums of like powers.V. [75] E.I.N. Existence of solutions of (n. [81] S. [65] L. Peterson.fr/theorem. Fifty Challenging Problems in Probability with solutions. [78] B. Dynamical theories of Brownian motion. 1978.P. Princeton university text.n+1). Math.free.htm. . 129(2):390–405.fr/records. Schr¨ odinger equations and diffusion theory. Mardia. Bornes pour la torson des courbes elliptiques sur les corps de nombres. [80] I. Stochastic integrals. Nelson. 124:437–449.New York. John Wiley and Sons. 1995. Natanson. 1987. [76] E. inc. Universitext. Pure point spectrum of the Laplacians on fractal graphs. Academic press.P.J. Teplyaev. Princeton university press. Petersen. 1986.Lewis N. C. 1996. fourth edition edition. Academic Press (Harcourt Brace Jovanovich Publishers). Miller. 1993. Math. Mosteller. [79] K. The selfavoiding random walk. 1995. [77] T. [67] A. London and New York. Inc. Cambridge University Press.J. McKean. Embleton. http://euler. Statistical analysis of spherical data. Translation series. Nelson. volume 86 of Monographs in Mathematics. Cambridge University Press. New York.. [72] F. The Jungles of Randomness. http://euler. Merel. Radically elementary probability theory.Bibliography 369 [64] N. J.htm. Madras and G. [71] JeanCharles Meyrignac. Malozemov and A. Probability and its applications. United states atomic energy commission. Dover Publications. New York. [70] JeanCharles Meyrignac. A mathematical Safari. 1983. 1972. Anal. Academic Press. 39:191–198. Birkh 1993.. Birkh¨auser Verlag. [69] L. Ergodic theory. State Publishing House of TechnicalTheoretical Literature. Matulich and B. Fisher and B. SpringerVerlag.n+1. Cambridge. 1965. [73] M. Probability and Mathematical Statistics. Constructive theory of functions.. Stochastic Differential Equations. 1987. Inv. Funct. Brownian motion and classical potential theory. [66] K. [68] H. [74] I. Nagasawa. Slade. Gravity in one dimension: stability of a three particle system. Oksendal. 1969.free. 
Statistics of directional data. 1949. Basel. Colloq. Port and C. Stone. 1967.
web edition. volume 95 of Graduate Texts in Mathematics. [94] G. Applied Probability Models with optimization Applications. 1993. Hausdorff measures. Grundlehren der mathmatischen Wissenschaften. Miller.G. Monday June 28. Princeton University Press. Pattern Classification. [87] H. 2009. 293. Simon. Real and Complex Anslysis. DordrechtHolland. 48:4250–4256.370 Bibliography [82] T. 1980. editor. 1995. 1970. Volume I. Rybicki. pages 194–210. [86] D. Gravity in one dimension: The critical population. 1991. Lecar. D. Phys. . McGrawHill Series in Higher Mathematics. Birkh¨auser Boston Inc. Lecture Notes. RI. 1985. Rogers. Schaum’s Outlines. Spectral analysis of rank one perturbations and applications. Ross. [92] W. 1970. When intuition and math probably look wrong. II. Cambridge University Press. pages 109–149. New York. [97] B. McGrawHill. [90] Jason Rosenhouse. AMS. Academic Press.. [84] Julie Rehmeyer. Cambridge. 2010. SpringerVerlag. Duda and D. Dover Publications. BC. Oxford University Press.O. volume 57 of Progress in Mathematics. 1993). [89] C. Princeton Science Library.J. Reidl and B. 1991. In M. Rev. 1972. John Wiley and Sons. Prime numbers and computer methods for factorization. 1979.E. New York. Stork. volume 8 of CRM Proc.M. Academic Press. [95] A. [85] C. [93] D. 2010. Riesel.N. Methods of modern mathematical physics . Ruelle. volume 28 of London Mathematical Society Student Texts. Schr¨ odinger operators (Vancouver. Cambridge University Press. [91] S. Continuous Martingales and Brownian Motion.A. Shu. The Monty Hall Problem. inc. Revuz and M. In Mathematical quantum theory. Pure and applied mathematics. 1987. [98] B. Springer Verlag. June. Probability. Exact statistical mechanics of a onedimensional selfgravitating system. second edition edition. Potential theory in the complex plane. Probability. 1995. New York. Orlando. Inc. Hart R. 1997. Science News. Functional Integration and Quantum Physics. Reidel Publishing Company. Providence. 
Random variables and Random Processes. 2010. Shiryayev.B. [83] M. [88] P. Simon. Reed and B. [96] Hwei P. 1984. Simon.Yor. Chance and Chaos. Rudin. E. Ransford. Gravitational NBody Problem.
[103] P. van Kampen.B. 1976. Walters. [109] D. CRC Press. An Introductory Course. The journal of Fourier analysis and Applications. Strichartz. [113] D. Selfsimilarity in harmonic analysis. Boca Raton. 1:1–37. Stochastic processes in physics and chemistry. G.. Graduate texts in mathematics. 1987. 1982. Simon. Potential theory on infinite networks. volume 1590 of Lecture Notes in Mathematics.. Williams. [112] P. Springer Verlag. [108] D. SpringerVerlag. Cambridge University Press. 1992. 124(4):1177–1182. Cambridge University Press. Wolff. 1995.. Cambridge mathematical Texbooks. Sinai. 1996. 1993. Szekely. [110] Gabor J. Probability and Stochastics Series. [105] H. [111] N. SpringerVerlag. 1994. 1991. [102] J. Simon and T. an analytic view. Texts and monographs in physics. Singular continuous spectrum under rank one perturbations and localization for random Hamiltonians. New York. Graduate texts in mathematics 79.L. [100] B. [107] R. Berlin. Wise and E. Lectures on Stochastic Analysis and Diffusion Theory. Berlin. Oxford University Press. An introduction to ergodic theory. J. Stroock. Graph Laplacians and LaplaceBeltrami operators. Commun. 1986. 1992. Principles of Random walk. NorthHolland Personal Library. Soc.S.S.W. Snell and R. [104] F.M. New York Heidelberg Berlin. Akademiai Kiado.W. 1993. Hall. Three bewitching paradoxes. 1990. SpringerVerlag. Counterexamples in probability and real analysis. Vanderbei. Probability with Martingales. Func. Pure Appl. Budapest. Probability Theory. Math.L. 1991. New York. Math. Large scale dynamics of interacting particles. Springer Textbook. [101] Ya. [114] G. Operators with singular continuous spectrum. Spitzer. Proc.Bibliography 371 [99] B. London Mathematical Society Students Texts. [106] R. 39:75. 1994. . 89:154–187. Stroock. Soardi. Amer. Paradoxes in Probability Theory and Mathematical Statistics. SpringerVerlag. Anal. Fourier asymptotics of fractal measures. Spohn. VI. Strichartz. Probability theory.
34 algebra σ. 88 Birkhoff ergodic theorem. 25 σalgebra Ptrivial. 86 Bethe lattice. 25 Borel. 26 σring. 37 atomic random variable. 343 atom. 313 Bertrand. 295 asymptotic expectation. 343 asymptotic variance. 85 372 . 177 arithmetic random variable. 26. 37 almost alternating sums.Index Lp . 26 Borel set. 85 absolutely continuous measure. 21 birthday paradox. 144 blackbody radiation. 109 BoltzmannGibbs entropy. 37 algebra. 129 absolutely continuous distribution function. 43 Pindependent. 174 almost everywhere statement. 38 algebra trivial. 27 σalgebra. 38. 176 Banach space. 274 BlackJack. 34 πsystem. 25 algebra of random variables. 86 atomic distribution function. 87 Beta function. 57 BernsteinGreenKruskal modes. 21 angle. 316 Bernstein polynomials. 38 λset. 88 binomial distribution. 313 bias. 283 bonds. 30 Benford’s law. 32 σ ring. 26 σalgebra generated subsets. 73 axiom of choice. 39 absolutely continuous. 46 Bernoulli convolution. 34 Ptrivial. 342 AronzajnKrein formula. 168 Borel σalgebra. 50 BanachTarski paradox. 121 Bernstein polynomial. 23 Black and Scholes. 32 Borel. 43 almost Mathieu operator. 231 Boltzmann distribution. 104 bond of a graph. 43 algebraic ring. 17 Ballot theorem. 330 BeppoL´evi theorem. 294 BorelCantelli lemma. 17 Bayes rule. 181 BGK modes. 257. 301 Binomial coefficient. 123 Blumental’s zeroone law. 26 Borel transform. 85 automorphism probability space. 26 generated by a map. 75 Birkhoff’s ergodic theorem. 55 arcsin law. 28 tail. 37 σadditivity. 15 beta distribution.
96 continuous random variable. 188 CDF. 130 conditional integral. 64 convergence almost sure. 51 Cayley graph. 135 cone. 23 calculus of variations. 328 classical mechanics. 141. 64 convex function. 48 bounded process. 280 CauchyBunyakowskySchwarz inequality. 330 conditional entropy. 105 conditional expectation. 212 Brownian motion. 182 Cayley transform. 111 canonical version of a process. 31 bracket process. 274 on a lattice. 120 Cantor function. 113 cone of martingales. 225 strong law . 26 concentration parameter. 105 compact set. 96 in law. 45 central limit theorem. 152 Brownian bridge. 64. 213 Buffon needle problem. 150 bounded variation. 55 weak. 214 Cantor distribution. 280 convergence in distribution. 64 convergence complete. 87 Cauchy sequence. 172. 36 casino. 44 373 ChapmannKolmogorov equation. 48 . 204 geometry. 52 Choquet simplex. 89 Cantor distribution characteristic function. 195 branching random walk. 21 coarse grained entropy. 55 stochastic. 142 continuation theorem. 118 Chebychev inequality. 327 circular variance. 268 branching process. 64 completion of σalgebra. 51 CauchyPicard existence. 11. 34 continuity module. 120 characteristic functional point process. 64 convergence in probability. 200 centered random variable. 121 capacity. 61 celestial mechanics. 85 contraction. 334 central moment. 96 convergence almost everywhere. 263 boygirl problem. 322 characteristic functions examples. 194 characteristic function. 207 Brownian sheet. 327 circlevalued random variable. 126 central limit theorem circular random vectors.Index bounded dominated convergence. 142 bounded stochastic process. 64 convergence in Lp . 135 conditional variance. 64 convergence fast in probability. 21 centered Gaussian process. 52 ChebychevMarkov inequality. 121. 280 complete convergence. 16 Cauchy distribution. 51 Chernoff bound. 200 Brownian motion existence. 322 canonical ensemble. 26 complete. 64. 90 Cantor set. 28 conditional probability space. 279 CauchySchwarz inequality. 
96 in probability. 233 Carath´eodory lemma. 131 conditional probability. 57 continuity points. 109 Campbell’s theorem. 116 characteristic function Cantor distribution.
20 elementary function. 61 density of states. 93 exponential. 185 dependent percolation. 183 Diophantine equation. 161 Doob submartingale inequality. 351 Dirac point measure. 151 Doob’s decomposition. 118 coordinate process. 101 decimal expansion. 150 DoobMeyer decomposition. 141 distribution beta. 87 binomial. 44 cumulative density function. 93 geometric. 85 discrete. 158 dyadic numbers. 90 dice fair. 160. 86 Poisson . 222 Dirichlet problem discrete. 129 Devil staircase. 360 convolution random variable. 88 log normal. 87 first success. 159 decomposition of DoobMeyer. 54 covariance. 192 discretized stopping time. 200 crushed ice. 354 Erlang. 29 Sicherman. 360 devils staircase. 202 electron in crystal. 33 DynkinHunt theorem. 88 Gamma. 230 economics. 55 downward filtration. 41 de MoivreLaplace. 61 cylinder set. 88 distribution function. 45. 189 discrete random variable. 85 discrete Laplacian. 48 Doob convergence theorem. 230 disease epidemic. 88 uniform. 249 cumulant generating function. 279 stochastic. 173 derivative of RadonNykodym. 214 correlation coefficient. 330 dominated convergence theorem. 61 distribution function absolutely continuous. 351 Diophantine equation Euler. 43 elementary Markov property. 142 Index discrete stochastic process. 317 directed graph. 87. 222 discrete Dirichlet problem. 89 Cauchy. 85 distribution of the first significant digit. 204 Dynkin system. 289 Einstein. 265 dot product. 28 nontransitive.374 convolution. 189 discrete distribution function. 294 discrete stochastic integral. 283 edge which is pivotal. 164 Doob’s convergence theorem. 160 density function. 69 decomposition of Doob. 192 Dirichlet kernel. 87. 273 dihedral group. 85 discrete Schr¨odinger operator. 85 singular continuous. 87 Diophantine equation. 189 Kakutani solution. 29 differential equation solution. 16 edge of a graph. 137 discrete Wiener space. 87 normal. 88 Cantor. 52 Covariance matrix. 194 . 351 symmetric. 19 dependent random walk. 159 Doob’s upcrossing inequality. 
360 Dirichlet problem.
234 Vlasov dynamics. 45 functional derivative. 156 F´ejer kernel. 39 ergodic transformation. 357 FeynmanKac formula. 179 free Laplacian. 263 first entry time. 178. 311. 303 FKG inequality. A]. 37 finite quadratic variation. 105 distribution. 116 convex. 295 FeynmanKac.Index Elkies example. 86 Gaussian distribution. 217 finite Markov chain. 351 Eulers golden key. 183 function characteristic. 303 Fisher information matrix. 105 random variable. 117 formula of Russo. 93 estimator. 137. 289 Formula of SimonWolff. 23 fair game. 330 first success distribution. 105 measure preserving transformation. 137. 313 ergodic theorem. 61 Rademacher. 43 expectation E[X. 356 Euler’s golden key. 116 Fr¨ohlichSpencer theorem. 288 form. 122 generalized function. 148 game which is fair. 187 L`evy. 104 KolmogorovSinai. 297 Fortuin Kasteleyn and Ginibre. 88 factorization of numbers. 74 ergodic theory. 300 Euler Diophantine equation. 87. 122 free group. 233 equilibrium measure existence. 148 gamblers ruin problem. 211 generating function. 287. 144 first significant digit. 48 distribution. 217 375 filtration. 357 ensemble microcanonical. 108 entropy BoltzmannGibbs. 109 gamblers ruin probability. 287 Fourier series. 104 geometric distribution. 122 generating function . 93 expectation. 130 exponential distribution. 45 process. 187. 58 conditional. 351 formula AronzajnKrein. 94 equilibrium measure. 265 finite total variation. 241 extinction probability. 73 Erlang distribution. 73. 325 finite measure. 143 Fatou lemma. 105 normal distribution. 42 excess kurtosis. 104 partition. 294 free energy. 47 Fermat theorem. 104 circle valued random variable. 75 ergodic theorem of Hopf. 187 filtered space. 225 FeynmanKac in discrete case. 359 Fourier transform. 93 Gamma function. 358. 143 Gamma distribution. 200 random vector. 351. 87 extended Wiener measure. 200 vector valued random variable. 12 generalized OrnsteinUhlenbeck process. 361 factorial. 328 coarse grained. 88 Fisher information.
159. 223 Helly’s selection theorem. 61 IID random diffeomorphism. 268 information inequalities. 294 groupvalued random variable. 88 geometric series. 174 google matrix. 92 geometric Brownian motion. 300 Golden key. 335 H¨ older continuous. 221. 52 ChebychevMarkov. 207. 150 Fisher . 326 smooth. 185 invertible transformation. 38 Kolmogorov axioms. 221 Green function. 153 Kintchine’s law of the iterated logarithm. 37. 77 inequality CauchySchwarz. 31 independent identically distributed. 130 HamiltonJacobi equation. 194 great disorder. 122 Hermite polynomial. 96 Helmholtz free energy. 271 integrated density of states. 61 independent random variable. 39 Keynes postulates. 122 global error. 92 Gibbs potential. 274 geometric distribution. 164 inequality of KunitaWatanabe. 257 Jacobi matrix. 43 integrable uniformly. 40 heat equation. 50 Haar function. 360 H¨ older inequality. 114 Minkowski. 49 Jensen for operators. 260 heat flow. 67 integral. 114 jointly Gaussian. 301 global expectation. 43 integral of Ito . 227 Kolmogorov 0 − 1 law. 206 inequalities Kolmogorov. 255. 200 Ksystem.376 moment. 51 Chebychev. 29 Kingmann subadditive ergodic theorem. 326 increasing process. 50 integrable. 205 Hahn decomposition. 40 HardyRamanujan number. 149 IID. 50 homogeneous space. 34 independent events. 49 Jensen inequality for operators. 305 inner product. 294 JavrjanKotani formula. 326 IID random diffeomorphism continuous. 260 Hilbert space. 124 Ito integral. 77 . 226 inequality Kolmogorov. 50 Jensen. 32 Index independent subalgebra. 264 independent πsystem. 192 harmonic series. 61 identity of Wald. 213 Green domain. 58. 73 iterated logarithm law. 305 submartingale. 295 Jensen inequality. 352 harmonic function on finite graph. 271 Ito’s formula. 51 power entropy. 335 identically distributed. 311 Hamlet. 51 Doob’s upcrossing. 186. 305 FKG. 27 Kolmogorov inequalitities. 287 H¨ older. 32 indistinguishable process. 311 Green function of Laplacian. 351 golden ratio.
9 Lebesgue measurable. 183 Laplacian on Bethe lattice. 86 Lebesgue dominated convergence. 61 law of arcsin. 113. 144 last visit. 16 martingale transform. 69. 361 lattice animal. 357 Komatsu. 105 limit theorem de MoiveLaplace. 87 logistic map. 328 law of iterated logarithm. 48 377 Lebesgue integral. 45. 181 last exit time. 76 weak. 74 maximum likelihood estimator. 21 M¨ obius function. 164 Kolmogorov theorem. 194 martingale. 56 law of large numbers strong. 326 Markov process. 55 law of group valued random variable. 360 law of a random variable. 101 Poisson. 328. 268 kurtosis. 58 law of total variance. 301 linearized Vlasov flow. 139 matrix cocycle. 311 Laplacian. 351 Marilyn vos Savant. 177 Last’s theorem. 194 Markov operator. 113 Kronecker lemma. 38. 223 martingale inequality. 308. 70. 177 law of cosines. 122 LaplaceBeltrami operator. 335 iterated logarithm. 353 uniformly hcontinuous. 82 L`evy formula. 55 lexicographical ordering. 124 random vector. 138. 17 Markov chain. 109 mean direction. 314 symmetric Diophantine equation. 286 mean size open cluster. 207 locally H¨ older continuous. 312 liph continuous. 63 MaxwellBoltzmann distribution. 66 likelihood coefficient. 163 KullbackLeibler divergence. 164 law of large numbers. 26 Lebesgue thorn. 117 Lander notation.Index Kolmogorov inequality. 142 martingale. 357 Langevin equation. 320 mean size. 147 length. 330 mean measure Poisson process. 360 Lipshitz continuous. 222 lemma BorelCantelli. 147 Koopman operator. 130 Kolmogorov zeroone law. etymology. 194 Markov process existence. 325 maximal ergodic theorem of Hopf. 275 Laplace transform. 207 log normal distribution. 302 Maxwell distribution. 102 linear estimator. 283 lattice distribution. 36 Fatou. 291 . 328 law group valued random variable. 166 martingale strategy. 47 RiemannLebesgue. 79 Kolmogorov theorem conditional expectation. 38 Komatsu lemma. 39 Carath´eodory. 135 Lebesgue decomposition theorem. 194 Markov property. 93 L´evy theorem. 
Kunita-Watanabe inequality, 106
algebra, 103
Bertrand paradox, 194
bond percolation, 18
dependent percolation, 18, 27
generating function, 92
Koopman operator, 292
Markov operator, 113
mean square error, 44
mean vector, 302
measurable map, 219
measurable space, 28
measure, 25, 32, 92, 122
measure preserving transformation, 214
median, 73
Mehler formula, 81
Mertens conjecture, 247
metric space, 351
microcanonical ensemble, 280
minimal filtration, 111
Minkowski inequality, 45, 218
Minkowski theorem, 51
moment, 330
moment formula, 44
moments, 314
monkey, 92
monkey typing Shakespeare, 40
Monte Carlo integral, 40
Monte Carlo method, 9
multidimensional Bernstein theorem, 23
multivariate distribution function, 316
neighboring points, 314
net winning, 283
normal distribution, 143
normality of numbers, 69
normalized random variable, 69
normal number, 104
nowhere differentiable, 45
NP complete, 208
nuclear reactions, 349
null at 0, 141
number of open clusters, 139
Ornstein-Uhlenbeck process, 239
oscillator, 210
outer measure, 233, 243
page rank, 35
partial exponential function, 17
partition, 117
path integral, 29
percentage drift, 187
percentage volatility, 274
percolation, 55, 108, 274
percolation cluster, 18
percolation probability, 19
perpendicular, 284
Perron-Frobenius operator, 55, 113
perturbation of rank one, 113
Petersburg paradox, 15, 86, 295
Picard iteration, 16
pigeon hole principle, 276
pivotal edge, 352
point process, 186
Poisson distribution, 61, 320
Poisson equation, 88
Poisson limit theorem, 311
positive measure, 35
progressively measurable, 200
pushforward measure, 37
random vector, 29, 314
Schrödinger operator, 113
symmetric, 20
three door problem, 16
uniformly h-continuous measure, 123, 213
von Mises distribution, 340
Wiener measure, 360
absolutely continuous random variable, 48
arithmetic random variable, 85
bounded process, 27, 61
centered random variable, 342
circle-valued random variable, 45
conditional probability, 51
continuous random variable, 327
discrete random variable, 85
finite variation process, 142
increasing process, 264
indistinguishable process, 142
integrable random variable, 335
Keynes, postulates of, 209
Lp random variable, 28
Pollard ρ method, 23, 320
Polya urn scheme, 170
population growth, 141
portfolio, 141
position operator, 168
positive cone, 260
positive measure, 113, 213
power distribution, 29
previsible process, 61, 264
prime number theorem, 142
probability P[A ≥ c], 351
probability density function, 28
probability generating function, 61
probability space, 92
progressively measurable, 206
pseudo random number generator, 219
pull back set, 273, 348
pure point spectrum, 27
Pythagoras theorem, 308
Pythagorean triples, 55
quantum mechanical oscillator, 357
Rademacher function, 323
Radon-Nikodym theorem, 45
random circle map, 129
random diffeomorphism, 324
random field, 324
random number generator, 213
random variable, 61
random variable, independent, 67
random walk, 28
random walk, last visit, 169
rank one perturbation, 177
Rao-Cramer bound, 295
Rao-Cramer inequality, 305
Rayleigh distribution, 304
reflected Brownian motion, 63
reflection principle, 223
regression line, 174
regular conditional probability, 53
relative entropy, 134
relative entropy, circle-valued random variable, 105
resultant length, 328
Riemann hypothesis, 328
Riemann integral, 351
Riemann-Lebesgue lemma, 351
Riemann zeta function, 9, 61
ring, 218
risk function, 37
ruin probability, 302
ruin problem, 148
Schrödinger equation, 289
Schrödinger operator, 186
score function, 20
SDE, 304
singular continuous random variable, 45
spherical random variable, 85
symmetric random variable, 335
uniformly integrable, 123
Ballot theorem, 357
Banach-Alaoglu theorem, 176
Beppo Levi theorem, 96
Birkhoff ergodic theorem, 21, 46
bounded dominated convergence theorem, 75
Carathéodory continuation theorem, 48
central limit theorem, 34
dominated convergence theorem, 126
Doob's convergence theorem, 161
Dynkin-Hunt theorem, 151
ergodic theorem, 326
random map, 326
semimartingale, 226
set of continuity points, 138
Shakespeare, 96
Shannon entropy, 40
significant digit, 94
silver ratio, 327
Simon-Wolff criterion, 174
singular continuous distribution, 297
singular continuous random variable, 85
solution of a differential equation, 85
spectral measure, 279
spectrum, 185
Spitzer theorem, 335
stake of game, 252
standard Brownian motion, 143, 230
standard deviation, 200
standard normal distribution, 44
state space, 104
statement almost everywhere, 193
stationary measure, 43
stationary measure, discrete Markov process, 196
stationary state, 326
statistical model, 116
step function, 300
Stirling formula, 43
stochastic convergence, 138, 144, 177
stochastic differential equation, 55
stochastic matrix, 277
stochastic operator, 195
stochastic process, 274
stochastic process, discrete, 199
stocks, 137
stopped process, 168
stopping time, 145
stopping time for random walk, 218
Strichartz theorem, 174
strong convergence of operators, 362
strong law of large numbers for Brownian motion, 207
subalgebra, 153
subalgebra, independent, 26
subcritical phase, 326
subgraph, 32
submartingale, 283
submartingale inequality, 223
submartingale inequality for continuous martingales, 164
sum of circular random variables, 226
sum of random variables, 332
supercritical phase, 246
supermartingale, 144
supersymmetry, 73
symmetric Diophantine equation, 326
symmetric random variable, 239
symmetric random walk, 123
systematic error, 182
tail σ-algebra, 301
taxicab number, 38
taxicab numbers, 352
stochastic differential equation, existence, 273
subadditive, 286
Kolmogorov theorem, 96
Last's theorem, 82
Lebesgue decomposition theorem, 361
Lévy theorem, 38
martingale, 122
martingale convergence theorem, 86
maximal ergodic theorem, 160
Minkowski theorem, 41
monotone convergence theorem, 340
Polya theorem, 46
Pythagoras theorem, 170
Radon-Nikodym theorem, 74
Strichartz theorem, 129
thermodynamic equilibrium, 116
thermodynamic equilibrium measure, 80
three door problem, 17
three series theorem, 178
tied down process, 95
topological group, 264
total variance, law of, 58
transfer operator, 135
transition probability function, 142
tree, 193
trivial σ-algebra, 28
Tychonov theorem, 41, 80
uncorrelated, 52, 57
uniform distribution, 54
uniform distribution, circle-valued random variable, 88
uniformly h-continuous measure, 352
uniformly integrable, 96, 331
upcrossing, 67
upcrossing inequality, 150
urn scheme, 150, 170
utility function, 141
variance, 96
variance, Cantor distribution, 135
variance, conditional, 136
variation of a stochastic process, 55
vector-valued random variable, 44, 264
vertex of a graph, 28
Vitali, Giuseppe, 25
Vlasov flow, 17
Vlasov flow, Hamiltonian, 306
Voigt, 96
von Mises distribution, 306
Wald identity, 330
weak convergence, 114
weak convergence, by characteristic functions, 118
weak convergence, for measures, 118
weak law of large numbers, 57, 58
weak law of large numbers for L1, 58
Weierstrass theorem, 58, 114
Weyl formula, 57
white noise, 249
Wick ordering, 213, 260
Wick power, 260
Wiener, Norbert, 57
Wiener measure, 250
Wiener sausage, 214
Wiener space, 12, 260
Wiener theorem, 205, 214
wrapped normal distribution, 306, 360
Wroblewski theorem, 331
zero-one law of Blumenthal, 231
zero-one law of Kolmogorov, 38, 79
Zeta function, 352