You are on page 1of 45

8 Random walk

8.1 Zero-or-one laws


In this chapter we adopt the notation N for the set of strictly positive integers,
and TV0 for the set of positive integers; used as an index set, each is endowed
with the natural ordering and interpreted as a discrete time parameter.
Similarly, for each ne N, Nn denotes the ordered set of integers from 1 to n
(both inclusive); N% that of integers from 0 to n (both inclusive); and N'n
that of integers beginning with n + 1.
On the probability triple (Ω, ^, 0>), a sequence {Xn,ne N} where each
Xn is an r.v. (defined on Ω and finite a.e.), will be called a {discrete parameter)
stochastic process. Various Borel fields connected with such a process will
now be introduced. For any sub-B.F. & of ^ , we shall write
(1) Xe&

and use the expression " X belongs to ^ " or " ^ contains X" to mean that
Χ~λ(β) c <& (see Sec. 3.1 for notation): in the customary language ^ i s said
to be "measurable with respect to ^ " . For each ne N, we define two B.F.'s
as follows:

&n = the augmented B.F. generated by the family of r.v.'s {Xk9 k e Nn};
8.1 ZERO-OR-ONE LAWS | 251

that is, !Fn is the smallest B.F. ^ containing all Xk in the family
and all null sets ;
&% = the augmented B.F. generated by the family of r.v.'s {Xk9 k e N^}.

00

Recall that the union [J ^n is a field but not necessarily a B.F. The
n= l
smallest B.F. containing it, or equivalently containing every «^, ne N, is
denoted by
*- = V K\
n= l

it is the B.F. generated by the stochastic process {Xn, ne N}. On the other
00 00

hand, the intersection Π J^' is a B.F. denoted also by /\ ß^. It will be called
n=l n=l
the remote field of the stochastic process and a member of it a remote event.
Since J ^ <= J ^ ^ is defined on J ^ . For the study of the process
{Xn9 ne N} alone, it is sufficient to consider the reduced triple (Ω, ^ , ^ | j r w ) .
The following approximation theorem is fundamental.
00

Theorem 8.1.1. Given e > 0 and A e ^ , , there exists Ae e IJ J^ such


n= l
that
(2) ^ ( Λ Δ Λ € ) < e.

PROOF. Let ^ be the collection of sets Λ for which the assertion of the
theorem is true. Suppose Ak e & for each k e N and Ak f Λ or Ak j Λ.
Then Λ also belongs to &, as we can easily see by first taking k large and then
applying the asserted property to Ak. Thus ^ is a monotone class. Since it is
00

trivial that ^ contains the field (J J^ that generates «^Ό,, ^ must contain J ^
n= l
by the Corollary to Theorem 2.1.2, proving the theorem.
Without using Theorem 2.1.2, one can verify that ^ is closed with
respect to complementation (trivial), finite union (by Exercise 1 of Sec. 2.1),
and countable union (as increasing limit of finite unions). Hence ^ is a B.F.
that must contain J ^ .

It will be convenient to use a particular type of sample space Ω. In the


notation of Sec. 3.4, let
Ω = X Ωη,
71 = 1

where each Ωη is a "copy" of the real line 0tx. Thus Ω is just the space of all
infinite sequences of real numbers. A point ω will be written as {ωη, n e N}9
and ωη as a function of ω will be called the nth coordinate (function) of ω.
Each Ωη is endowed with the Euclidean B.F. &1, and the product Borel field
252 I RANDOM WALK

IF ( = ^co in the notation above) on Ω is defined to be the B.F. generated by


the finite-product sets of the form

(3) f){œ:œnjeBnj}
j= i

where (nl9..., nk) is an arbitrary finite subset of N and where each Bnj e &1.
In contrast to the case discussed in Sec. 3.3|, however, no restriction is made
on the p.m. & on IF. We shall not enter here into the matter of a general
construction of &. The Kolmogorov extension theorem (Theorem 3.3.6)
asserts that on the concrete space-field (Ω, F') just specified, there exists a
p.m. & whose projection on each finite-dimensional subspace may be
arbitrarily preassigned, subject only to consistency (where one such subspace
contains the other). Theorem 3.3.4 is a particular case of this theorem.
In this chapter we shall use the concrete probability space above, of the
so-called "function space type", to simplify the exposition, but all the results
below remain valid without this specification. This follows from a measure-
preserving homomorphism between an abstract probability space and one of
the function-space type; see Doob [16, chap. 2].
The chief advantage of this concrete representation of Ω is that it
enables us to define an important mapping on the space.

DEFINITION OF THE SHIFT. The shift τ is a mapping of Ω such that


T : ω = {ω η , n 6 N} -> τω = {ω η + 1 , Π Ε Ν} ;

in other words, the image of a point has as its nth coordinate the (n + l)st
coordinate of the original point.

Clearly τ is an oo-to-1 mapping and it is from Ω onto Ω. Its iterates are


defined as usual by composition : r° = identity, rk = τ o Tk ~x for k > 1. It
induces a direct set mapping τ and an inverse set mapping r _ 1 according to
the usual definitions. Thus

τ _ 1 Λ = {ω : τω Ε Λ}

and τ'η is the nth iterate of τ _ 1 . If Λ is the set in (3), then

(4) τ " 1 Λ = f){*>:<oni + 1eBnj}.


;
j=i

It follows from this that τ _ 1 maps iF into 1F\ more precisely,


VAeJ^ : T - n A 6 j ; , ncN,

where ^ is the Borel field generated by {oufc, k > n}. This is obvious by (4)
8.1 ZER0-0R-0NE LAWS | 253

if Λ is of the form above, and since the class of Λ for which the assertion
holds is a B.F., the result is true in general.
DEFINITION. A set in IF is called invariant {under the shift) iff Λ = τ~λΑ.
An r.v. Y on Ω is invariant iff Υ(ω) = Υ(τω) for every ω e Ω.

Observe the following general relation, valid for each point mapping τ
and the associated inverse set mapping τ" 1 , each function F on Ω and each
subset A of ^ 1 :
(5) τ- χ {ω : Υ(ω) e A} = {ω : Υ(τω) G A}.
This follows from τ " 1 o F " 1 = (Y o τ ) " 1 .
We shall need another kind of mapping of Ω. A permutation on Nn is a
1-to-l mapping of Nn to itself, denoted as usual by
/ 1, 2 , . . . , n\
\σ19σ29...,ση)
The collection of such mappings forms a group with respect to composition.
A finite permutation on TV is by definition a permutation on a certain "initial
segment" 7Vn of N. Given such a permutation as shown above, we define σω
to be the point in Ω whose coordinates are obtained from those of ω by the
corresponding permutation, namely
fœaj9 if jeNn;
(σω)Ί· = < „

As usual, σ induces a direct set mapping σ and an inverse set mapping


σ - 1 , the latter being also the direct set mapping induced by the "group
inverse" σ" 1 of σ. In analogy with the preceding definition we have the
following.

DEFINITION. A set in IF is called permutable iff Λ = σΛ for every


finite permutation σ on TV. A function Y on Ω is permutable iff Υ(ω) =
Y (aw) for every finite permutation σ and every ω e Ω.

It is fairly obvious that an invariant set is remote and a remote set is


permutable ; also that each of the collections : all invariant events, all remote
events, all permutable events, forms a sub-B.F. of <F. If each of these B.F.'s
is augmented (see Exercise 20 of Sec. 2.2), the resulting augmented B.F.'s
will be called "almost invariant", " almost remote ", and " almost permutable ",
respectively. Finally, the collection of all sets in IF of probability either
0 or 1 clearly forms a B.F., which may be called the "all-or-nothing"
field. This B.F. will also be referred to as "almost trivial".
254 I RANDOM WALK

Now that we have defined all these concepts for the general stochastic
process, it must be admitted that they are not very useful without further
specifications of the process. We proceed at once to the particular case
below.

DEFINITION. A sequence of independent r.v.'s will be called an in-


dependent process; it is called a stationary independent process iff the r.v.'s
have a common distribution.

Aspects of this type of process have been our main object of study,
usually under additional assumptions on the distributions. Having christened
it, we shall henceforth focus our attention on "the evolution of the process as
a whole"—whatever this phrase may mean. For this type of process, the
specific probability triple described above has been constructed in Sec. 3.3.
Indeed, J^ = J ^ , and the sequence of independent r.v.'s is just that of the
successive coordinate functions {ωη, n e N}, which, however, will also be
interchangeably denoted by {Xn, n e N}. If φ is any Borel measurable
function, then {<p(Xn)9 n G N} is another such process.
The following result is called Kolmogorov's "zero-or-one law".

Theorem 8.1.2. For an independent process, each remote event has


probability zero or one.
GO

PROOF. Let Λ G P | J^' and suppose that ^(A) > 0; we are going to
n= l
prove that ^ ( A) = 1. Since J^ and J^' are independent fields (see Exercise
5 of Sec. 3.3), Λ is independent of every set in J^ for each ne N; namely, if
M G Q ^ , then
n= l

(6) 0>(A n M ) = ^(Λ)^(Μ ).

If we set
^(Ar^M)
•AW - ^(Λ)

for Me«f, then ^i(-) is clearly a p.m. (the conditional probability relative
00

to A; see Chapter 9). By (6) it coincides with 0* on (J &η and consequently


n= l
also on ^ by Theorem 2.2.3. Hence we may take M to be A in (6) to conclude
that 0>(A) = ^(A) 2 or ^(A) = 1.

The usefulness of the notions of shift and permutation in a stationary


8.1 ZERO-OR-ONE LAWS | 255

independent process is based on the next result, which says that both r _ 1 and
σ are "measure-preserving".

Theorem 8.1.3. For a stationary independent process, if Λ e 3F and σ is any


finite permutation, we have
(7) ^ ( τ ^ Λ ) = 0\K)\
(8) &{pK) = 0>(A).

PROOF. Define a set function & on ^ as follows :


#(Λ) = ^ ( τ " 1 Λ ) .
Since τ " 1 maps disjoint sets into disjoint sets, it is clear that & is a p.m. For a
finite-product set Λ, such as the one in (3), it follows from (4) that

»{K) = Π μ(Βηί) = 0(A).

Hence £? and # coincide also on each set that is the union of a finite number
of such disjoint sets, and so on the B.F. ^ generated by them, according to
Theorem 2.2.3. This proves (7); (8) is proved similarly.

The following companion to Theorem 8.1.2, due to Hewitt and Savage,


is very useful.

Theorem 8.1.4. For a stationary independent process, each permutable


event has probability zero or one.
PROOF. Let Λ be a permutable event. Given € > 0, we may choose
ek > 0 so that
CO

Σ efc < €.
fc = l

By Theorem 8.1.1, there exists Ak e^nfc such that ^ ( Λ Δ Ak) < ek, and we
may suppose that nk f oo. Let
/ 1 , ...,/i fc ,/i fc + l,...,2/i f c \
\nk + l,...,2« f c , 1 , . . . , nk]
and Mfc = aAk. Then clearly Mk e^' f c . It follows from (8) that
0>(AAMk)<€k.
For any sequence of sets {Ek} in ^ we have
^(limsup^) = 0>{Ç\ U Ek) < Σ &{Ek).
k m=lk =m k=l
256 I RANDOM WALK

Applying this to Ek = Λ Δ Mfc, and observing the simple identity

lim sup (Λ Δ Mfc) = (A\lim inf Mfc) u (lim sup Mfc\A),


we deduce that
^(A Δ lim sup Mfc) < ^(lim sup (A Δ Mfc)) < e.
00 00

Since (J Mk e J ^ , the set lim sup Mk belongs to Π J ^ , which is seen to


k=m k =m
coincide with the remote field. Thus lim sup Mk has probability zero or one
by Theorem 8.1.2, and the same must be true of A, since e is arbitrary in the
inequality above.
Here is a more transparent proof of the theorem based on the metric
on the measure space (Ω, J5", &) given in Exercise 8 of Sec. 3.2. Since Ak and
Mfc are independent, we have
^(AfcnMfc) = ^(Afc)^(Mfc).
Now Afc.-> A and Mk -> A in the metric just mentioned, hence also

Afc n Mfc -> A n A

in the same sense. Since convergence of events in this metric implies con-
vergence of their probabilities, it follows that ^ ( A n Λ) = ^(A)^(A), and
the theorem is proved.

Corollary. For a stationary independent process, the B.F.'s of almost


permutable or almost remote or almost invariant sets all coincide with the
all-or-nothing field.

EXERCISES

Ω and J^ are the infinite product space and field specified above.
1. Find an example of a remote field that is not the trivial one; to make
it interesting, insist that the r.v.'s are not identical.
2. An r.v. belongs to the all-or-nothing field if and only if it is constant
a.e.
3. If A is invariant then A = rA; the converse is false.
4. An r.v. is invariant [permutable] if and only if it belongs to the
invariant [permutable] field.
5. The set of convergence of an arbitrary sequence of r.v.'s { Yn, n e N)
n
are
or of the sequence of their partial sums 2 Yj both permutable. Their
y= i
limits are permutable r.v.'s with domain the set of convergence.
*6. If an > 0, lim an exists finite or infinite, and lim (an + 1/an) = 1, then
8.2 BASIC NOTIONS | 257

the set of convergence of


K-1 i Y,}
/=i

is invariant. If an -> +00, the upper and lower limits of this sequence are
invariant r.v.'s.
*7. The set {Y2n e A i.o.}, where A e 081, is remote but not necessarily
invariant; the set
{2 y y e ^i.o.}
;=i

is permutable but not necessarily remote. Find some other essentially different
examples of these two kinds.
8. Find trivial examples of independent processes where the three
numbers ^ ( τ ^ Λ ) , ^(Λ), ^(τΛ) take the values 1, 0, 1 ; or 0, ±, 1.
9. Prove that an invariant event is remote and a remote event is
permutable.
*10. Consider the bi-infinite product space of all bi-infinite sequences of
real numbers {ωη, n e N}9 where N is the set of all integers in its natural
(algebraic) ordering. Define the shift as in the text with N replacing N, and
show that it is 1-to-l on this space. Prove the analogue of (7).
11. Show that the conclusion of Theorem 8.1.4 holds true for a sequence
of independent r.v.'s, not necessarily stationary, but satisfying the following
condition: for everyj there exists ak > jsuch that Xk has the same distribu-
tion as Xj. [This remark is due to Susan Horn.]

8-2 Basic notions


From now on we consider only a stationary independent process {Xn, neN}
on the concrete probability triple specified in the preceding section. The
common distribution of Xn will be denoted by μ (p.m.) or F(d.f.); when only
this is involved, we shall write X for a representative Xn, thus £(Χ) for
*(*n).
Our interest in such a process derives mainly from the fact that it under-
lies another process of richer content. This is obtained by forming the
successive partial sums as follows :

(1) Sn= 2 XJ9 neN.

An initial r.v. S0 = 0 is adjoined whenever this serves notational con-


venience, as in Xn = Sn — Sn.1 for neN. The sequence {5n, n e N} is then a
very familiar object in this book, but now we wish to find a proper name for
258 I RANDOM WALK

it. An officially correct one would be "stochastic process with stationary


independent differences"; the name "homogeneous additive process" can
also be used. We have, however, decided to call it a " random walk (process) ",
although the use of this term is frequently restricted to the case when μ is of
the integer lattice type or even more narrowly a Bernoullian distribution.

DEFINITION OF RANDOM WALK. A random walk is the process {Sn, ne N}


defined in (1) where {Xn, ne N} is a stationary independent process. By
convention we set also S0 = 0.
A similar definition applies in a Euclidean space of any dimension, but
we shall be concerned only with &1 except in some exercises later.
Let us observe that even for an independent process {Xn, n e N}9 its
remote field is in general different from the remote field of {Sn9 n e N}9 where
n
Sn = Σ ^0- They are almost the same, being both almost trivial, for a
3 = 1

stationary independent process by virtue of Theorem 8.1.4, since the remote


field of the random walk is clearly contained in the permutable field of the
corresponding stationary independent process.
We add that, while the notion of remoteness applies to any process,
"(shift)-invariant" and "permutable" will be used here only for the under-
lying "coordinate process" {ωη, ne N} or {Xn, n e N}.
The following relation will be much used below, for m < n:
n-m n-m
Sn-m(rma>) = Σ Χ£τ™ω) = Σ Χί + Μ(ω) = Ξη(ω) - Ξη(ω).
j = 1 j = 1

It follows from Theorem 8.1.3 that 5 n _ m and Sn — Sm have the same distri-
bution. This is obvious directly, since it is just /x(n_m)*.
As an application of the results of Sec. 8.1 to a random walk, we state
the following consequence of Theorem 8.1.4.

Theorem 8.2.1. Let Bn e &1 for each n e N. Then


&{Sn e Bn i.o.}
is equal to zero or one.
PROOF.If σ is a permutation on Nm, then Sn(aœ) = Ξη(ω) for n > m,
hence the set
00

A m = (J {SneBn}
n-m

is unchanged under σ" 1 or σ. Since Am decreases as m increases, it follows


that
00

n Am
m= l
is permutable, and the theorem is proved.
8.2 BASIC NOTIONS | 259

Even for a fixed B = Bn the result is significant, since it is by no means


evident that the set {Sn > 0 i.o.}, for instance, is even vaguely invariant or
remote with respect to {Xn9neN} (cf. Exercise 7 of Sec. 8.1). Yet the
preceding theorem implies that it is in fact almost invariant. This is the
strength of the notion of permutability as against invariance or remoteness.
For any serious study of the random walk process, it is imperative to
introduce the concept of an "optional r.v." This notion has already been
used more than once in the book (where ?) but has not yet been named. Since
the basic inferences are very simple and are supposed to be intuitively
obvious, it has been the custom in the literature until recently not to make the
formal introduction at all. However, the reader will profit by meeting these
fundamental ideas for the theory of stochastic processes at the earliest
possible time. They will be needed in the next chapter, too.

DEFINITION OF OPTIONAL r.v. An r.v. a is called optional relative to the


arbitrary stochastic process {Zn, ne N} iff it takes strictly positive integer
values or +oo and satisfies the following condition:
(2) V« G TV u {oo} : {ω : α(ω) = n}e&n.

Similarly if the process is indexed by TV0 (as in Chapter 9), then the
range of a will be N°. Thus if the index n is regarded as the time parameter,
then a effects a choice of time (an "option") for each sample point ω. One
may think of this choice as a time to "stop", whence the popular alias
" stopping time", but this is usually rather a momentary pause after which the
process proceeds again : time marches on !
Associated with each optional r.v. a there are two or three important
objects. First, the pre-a field !Fa is the collection of all sets in ^όο of the
form
(3) U K« = »}nA n ],
1 <,n<, oo

where Ane^n for each ne Nu {oo}. This collection is easily verified to be a


B.F. (how?). If A e J ^ , then we have clearly A n {a = n} e J ^ , for every n.
This property also characterizes the members of S^a (see Exercise 1 below).
Next, the post-a process is the process |{Z a + n , n e N1} defined on the trace of
the original probability triple on the set {a < oo}, where
(4) VneN : Ζα + η(ω) = Ζα(ω) + η(ω).
Each Ζ α + π is seen to be an r.v. with domain {a < oo}; indeed it is finite a.e.
there provided the original Z n 's are finite a.e. It is easy to see that Za e 3Fa.
The post-a field ^à is the B.F. generated by the post-a process: it is a sub-B.F.
of{« < o o j n J ^ .
260 I RANDOM WALK

Instead of requiring a to be defined on all Ω but possibly taking the value


oo, we may suppose it to be defined on a set Δ in « ^ . Note that a strictly
positive integer n is an optional r.v. The concepts of pre-α and post-α fields
reduce in this case to the previous 3Fn and ^ ' .
A vital example of optional r.v. is that of the first entrance time into a
given Borel set A :

(5)
elsewhere.

To see that this is optional, we need only observe that for each ne N\

which clearly belongs to J^; similarly for n = oo.


Concepts connected with optionality have everyday counterparts,
implicit in phrases such as "within thirty days of the accident (should it
occur)". Historically, they arose from "gambling systems", in which the
gambler chooses opportune times to enter his bets according to previous
observations, experiments, or whatnot. In this interpretation, a + 1 is the
time chosen to gamble and is determined by events strictly prior to it. Note
that, along with a, a + 1 is also an optional r.v., but the converse is false.
So far the notions are valid for an arbitrary process on an arbitrary
triple. We now return to a stationary independent process on the specified
triple and extend the notion of "shift" to an "a-shiff as follows: τα is a
mapping on {a < oo} such that

(6)

Thus the post-α process is just the process {Χη(ταω), n e N}. Recalling that
Xn is a mapping on Ω, we may also write

(7)

and regard Xn o Ta, ne N, as the r.v.'s of the new process. The inverse set
mapping (τ α ) _ 1 , to be written more simply as τ~ α , is defined as usual:

Let us now prove the fundamental theorem about " stopping " a stationary
independent process.

Theorem 8.2.2. For a stationary independent process and an almost


everywhere finite optional r.v. a relative to it, the pre-α and post-α fields are
8.2 BASIC NOTIONS | 261

independent. Furthermore the post-α process is a stationary independent


process with the same common distribution as the original one.
PROOF. Both assertions are summarized in the formula below. For any
Ae^keN, BjS &1, 1 < j < k, we have

(8) ^{Λ; Xa + j e Bj9 \<j<k} = 0>{A) Π μ{Β]).

To prove (8), we observe that it follows from the definition of a and ^


that
(9) A n { a = «} = A n n f a = / i } e / n ,

where Ane^n for each ne N. Consequently we have


&>{A; a = n; Xa + j e BJ9 1 < j < k} = 0>{An ; a = n\ Xn + j e Bj9 1 < j < k}
= &{A; a = n}&{Xn + j e Bj9 1 < j < k)

= 0>{Α;α = η}Πμ(Β}),
y= i

where the second equation is a consequence of (9) and the independence of


J^ and J^'. Summing over w e TV, we obtain (8).

An immediate corollary is the extension of (7) of Sec. 8.1 to an a-shift.

Corollary. For each A e « ^ we have


(10) 0>(τ~αΑ) = ^ ( Λ ) .

Just as we iterate the shift r, we can iterate τ α . Put a1 = a, and define


k
a inductively by
ak +
\œ) = ak(raoj), keN.
k
Each a is finite a.e. if a is. Next, define β0 = 0, and

ßk= I a\ keN.
J = l

We are now in a position to state the following result, which will be


needed later.

Theorem 8.2.3. Let a be an a.e. finite optional r.v. relative to a stationary


independent process. Then the random vectors {Vk, k e N}, where
Vk(œ) = (ak(œ), Χβκ_1+1{ω\ . . . , * , » ) ,

are independent and identically distributed.


262 I RANDOM WALK

PROOF. The independence follows from Theorem 8.2.2 by our showing


that Vl9..., Vk-± belong to the pre-j8fc_! field, while Vk belongs to the
post-ß,,...! field. The details are left to the reader; cf. Exercise 6 below.
To prove that Vk and Vk + 1 have the same distribution, we may suppose
that k = 1. Then for each neN, and each «-dimensional Borel set A, we
have

{ω : α 2 (ω) = n\ ( A > + 1 (ω), . . ., Α > +ΰ?(ω)) G A}


= {ω : α\τ ω) α
= n\ ( ^ ( τ α ω ) , . . ., ΧΧταω) G A],

since
ΧΧταω) = Χα^ω){ταω) = Χα*{ω)(τ«ω)
= ^α 1 (ω) + α 2 (ω)(ω) = Χαι+α*(ω)

by the quirk of notation that denotes by Xa( · ) the function whose value at ω
is given by Xam(w) and by (7) with n = α2(ω). By (5) of Sec. 8.1, the preceding
set is the r~ a -image (inverse image under τα) of the set

{ω : α\ω) = n; (Χλ(ω)9 . . ., ^ β ι ( ω ) ) 6 A},

and so by (10) has the same probability as the latter. This proves our
assertion.

Corollary. The r.v.'s {Yk, k e N}, where

η(ω) = Σ φ(ΧηΠ)
n=^fc_i+l

and φ is a Borel measurable function, are independent and identically


distributed.

For φ = 1, Yk reduces to ak. For φ(χ) = x, Yk = 5j5fc — 5 ^ , ^ The


reader is advised to get a clear picture of the quantities ak, ßki and Yk before
proceeding further, perhaps by considering a special case such as (5).
We shall now apply these considerations to obtain results on the
"global behavior" of the random walk. These will be broad qualitative
statements distinguished by their generality without any additional
assumptions.
The optional r.v. to be considered is the first entrance time into the
strictly positive half of the real line, namely A = (0, oo) in (5) above. Similar
results hold for [0, oo); and then by taking the negative of each Xn, we may
deduce the corresponding result for (— oo, 0] or (-co, 0). Results obtained
8.2 BASIC NOTIONS | 263

in this way will be labeled as "dual" below. Thus, omitting A from the
notation :

(11)
elsewhere ;
and

Define also the r.v. Mn as follows:

(12)

The inclusion of S0 above in the maximum may seem artificial, but it does not
affect the next theorem and will be essential in later developments in the next
section. Since each Xn is assumed to be finite a.e., so is each Sn and Mn.
Since Mn increases with n, it tends to a limit, finite or positive infinite, to be
denoted by
(13)

Theorem 8.2.4. The statements (a), (b), and (c) below are equivalent; the
statements (a'), (b'), and (c') are equivalent.

(a) (a')

(b) (b')

(c) (C)

PROOF. If (a) is true, we may suppose a < oo everywhere. Consider the


r.v. Sa: it is strictly positive by definition and so 0 < é?(Sa) < -foo. By the
Corollary to Theorem 8.2.3, {Sßk + x — Sßk, k > 1} is a sequence of independent
and identically distributed r.v.'s. Hence the strong law of large numbers
(Theorem 5.4.2 supplemented by Exercise 1 of Sec. 5.4) asserts that, if
a0 = Oand Sao = 0:

a.e.

This implies (b). Since lim Sn < M, (b) implies (c). It is trivial that (c)
n-* oo

implies (a). We have thus proved the equivalence of (a), (b), and (c).
264 I RANDOM WALK

If (a') is true, then (a) is false, hence (b) is false. But the set
{iim Sn = +00}
n-+ oo

is clearly permutable (it is even invariant, but this requires a little more
reflection), hence (b') is true by Theorem 8.1.4. Now any numerical sequence
with finite upper limit is bounded above, hence (b') implies (c'). Finally, if
(c') is true then (c) is false, hence (a) is false and (a') is true. Thus (a'), (b'),
and (c') are also equivalent.

Theorem 8.2.5. For the general random walk, there are four mutually
exclusive possibilities, each taking place a.e. :

(i) V « e i V : S n = 0;
(ii) Sn-> -oo ;
(iii) Sn-> +00 ;
(iv) -oo = lim Sn < lim Sn = +oo.
n-*oo n-»oo

PROOF. If X = 0 a.e., then (i) happens. Excluding this, let φλ = lim Sn.
n
Then 9?x is a permutable r.v., hence a constant c, possibly ±oo, a.e. by
Theorem 8.1.4. Since
KmSn = Xx +ïïrn"OSn - Xx)9
n n
ω
we have ψλ = Χλ + φ2, where <ρ2( ) = ψι(τω) = c a.e. Since X1 Φ 0, it
follows that c = +oo or —oo. This means that
either lim Sn = +oo or lim Sn = — oo.
n n

By symmetry we have also


either Urn Sn = —oo or lim Sn = +oo.
π π

These double alternatives yield the new possibilities (ii), (iii), or (iv), other
combinations being impossible.
This last possibility will be elaborated upon in the next section.

EXERCISES

In Exercises 1-6, the stochastic process is arbitrary.


* 1 . a is optional if and only if Wz e N : {a < n) e ^n.
8.2 BASIC NOTIONS | 265

*2. For each optional a we have a e ^ and Xa e fFa. If a and ß are both
optional and a < ß, then f a c «^.
3. If ax and a2 are both optional, then so is αλ A a2, «i V a2, αχ + a2.
If a is optional and A e ^ , then αΔ defined below is also optional:
_ (a on Δ
aA
~ \ + oo οηΩ\Δ.

*4. If a is optional and β is optional relative to the post-α process, then


a + β is optional (relative to the original process).
5. V& G TV : a 1 + · · · 4- ak is optional. [For the a in (11), this has been
called the kth ladder variable.]
*6. Prove the following relations :

7. If a and ß are any two optional r.v.'s, then

(τβ ο τα)(ω) = τ
β(ταω) + α
^\ω) φ τβ + α(ω) in general.

*8. Find an example of two optional r.v.'s a and β such that a < β but
^ φ ^ ' . However, if y is optional relative to the post-α process and
β = a + y, then indeed ^ => J^'. As a particular case, #%k is decreasing
(while ^ t e is increasing) as k increases.
9. Find an example of two optional r.v.'s a and β such that a < β but
β — a is not optional.
10. Generalize Theorem 8.2.2 to the case where the domain of definition
and finiteness of a is Δ with 0 < ^(Δ) < 1. [This leads to a useful extension
of the notion of independence. For a given Δ in & with ^(Δ) > 0, two
events Λ and M, where M c A , are said to be independent relative to Δ iff
0>{A n Δ n M} = 0>{A n Δ}^{Μ}.]
*11. Let {Xn9 n e N} be a stationary independent process and {afc, ke N}&
sequence of strictly increasing finite optional r.v.'s. Then {Xafc + 1, k e N} is a
stationary independent process with the same common distribution as the
original process. [This is the gambling-system theorem first given by Doob
in 1936.]
12. Prove the Corollary to Theorem 8.2.2.
13. State and prove the analogue of Theorem 8.2.4 with a replaced by
a
[o,oo). [The inclusion of 0 in the set of entrance causes a small difference.]
14. In an independent process where all Xn have a common bound,
£{a) < oo implies ${Sa) < oo for each optional a [cf. Theorem 5.5.3].
266 I RANDOM WALK

8.3 Recurrence

A basic question about the random walk is the range of the whole process :
00

[J Sn(oj) for a.e. w, or, "where does it ever g o ? " Theorem 8.2.5 tells us that,
n= l
ignoring the trivial case where it stays put at 0, it either goes off to —oo or
+ 00, or fluctuates between them. But how does it fluctuate? Exercise 9
below will show that the random walk can take large leaps from one end to
the other without stopping in any middle range more than a finite number of
times. On the other hand, it may revisit every neighborhood of every point
an infinite number of times. The latter circumstance calls for a definition.

DEFINITION. The number x e &1 is called a recurrent value of the random


walk {Sn, n e N}, iff for every e > 0 we have

(1) 0>{\Sn - x\ < €i.o.} = 1.

The set of all recurrent values will be denoted by 9Î.

Taking a sequence of e decreasing to zero, we see that (1) implies the


apparently stronger statement that the random walk is in each neighborhood
of x i.o. a.e.
Let us also call the number x a possible value of the random walk iff for
every e > 0, there exists ne N such that &{\Sn — x\ < e} > 0. Clearly a
recurrent value is a possible value of the random walk (see Exercise 2 below).

Theorem 8.3.1. The set 9Î is either empty or a closed additive group of real
numbers. In the latter case it reduces to the singleton {0} if and only if
X = 0 a.e. ; otherwise 9? is either the whole ^ 1 or the infinite cyclic group
generated by a nonzero number c, namely {±nc : n e N\°\}.

PROOF. Suppose 9? φ 0 throughout the proof. To prove that 9Î is a


group, let us show that if x is a possible value and y e SJÎ, then x — y E 9Î.
Suppose not; then there is a strictly positive probability that from a certain
value of n on, Sn will not be in a certain neighborhood of x — y. Let us put
for z e f :
(2) />e.m(z) = n\Sn -z\>€ for all n > m};

so that p2€,m(x — y) > 0 f° r some e > 0 and me N. Since x is a possible


value, for the same e we have a k such that &{\Sk — x\ < e} > 0. Now the
two independent events \Sk — x\ < e and \Sn — Sk — (y — x)\ > 2e
8.3 RECURRENCE | 267

together imply that \Sn — y\ > e; hence

(3) P<.k + n(y) = &{\Sn - y\ > * for all/i > k + m)


> &{\Sk - x\ < *W{\Sn - Sk-(y- x)\ > 2ε
for all n > k + m}.

The last-written probability is equal to p2<E,m(x — y\ since Sn — Sk has the


same distribution as Sn_fc. It follows that the first term in (3) is strictly
positive, contradicting the assumption that y e 9?. We have thus proved that
9Î is an additive subgroup of 0tx. It is trivial that 9Î as a subset of ^ 1 is closed
in the Euclidean topology. A well-known proposition (proof?) asserts that
the only closed additive subgroups of &1 are those mentioned in the second
sentence of the theorem. Unless X = 0 a.e., it has at least one possible value
x Φ 0, and the argument above shows that — x = 0 — xe5R and con-
sequently also x = 0 — ( — x) e 9Î, since 0 e ^R; hence 9Î is not a singleton.
The theorem is completely proved.

It is clear from the preceding theorem that the key to recurrence is the
value 0, for which we have the criterion below.

Theorem 8.3.2. If for some e > 0 we have

(4) ln\Sn\ <e}<0),


n
then

(5) n\Sn\ < ei.o.} = 0


(for the same e) so that 0 $ 9Î. If for every e > 0 we have

(6) In\Sn\ <*} = «>,


n

then
(7) n\Sn\ < €i.o.} = 1
for every € > 0 and so 0 e 9Î.

Remark. Actually if (4) or (6) holds for any e > 0, then it holds for every
€ > 0; this fact follows from Lemma 1 below but is not needed here.

PROOF. The first assertion follows at once from the convergence part of
the Borel-Cantelli lemma (Theorem 4.2.1). To prove the second part consider
F = liminf{|S n | > €};
268 I RANDOM WALK

namely F is the event that \Sn\ < e for only a finite number of values of n.
For each ω on F, there is an m(w) such that |£ η ( ω )| ^ € for all n > m(œ); it
follows that if we consider "the last time that \Sn\ < e", we have

for all

Since the two independent events |5 m | < e and | 5 n — 5 m | > 2e together


imply that | 5 n | > €, we have

by the previous notation (2), since Sn — Sm has the same distribution as


5 n _ m . Consequently (6) cannot be true unless /?2e,i(0) = 0. We proceed to
extend this to show that p2€,k = 0 for every k G N. TO this aim we fix k and
consider the event

then Am and Am> are disjoint whenever m' > m + k and consequently
(why?)

The argument above for the case k = 1 can now be repeated to yield

and so p2€,k(®) =
0 f° r every e > 0. Thus

which is equivalent to (7).


A simple sufficient condition for 0 e % or equivalently for 5R φ 0, will
now be given.

Theorem 8.3.3. If the weak law of large numbers holds for the random walk
{Sn, ne N} in the form that Sn/n -> 0 in pr., then 91 Φ 0.
PROOF. We need two lemmas, the first of which is also useful elsewhere.

Lemma 1. For any € > 0 and m e TV we have

(8)
8.3 RECURRENCE | 269

PROOF OF LEMMA 1. It is sufficient to prove that if the right member of (8)


is finite, then so is the left member and (8) is true. Put
/ = (-€,€>, /=[/€, a + 1 » ,

for a fixed j e N; and denote by φΐ9 <ρ7 the respective indicator functions.
Denote also by a the first entrance time into / , as defined in (5) of Sec. 8.2
with Z n replaced by Sn and A by / . We have
00 00 -, 00

(9) <?{ Σ <PÂSn)} = I I, Σ 9,(Sn) d0>.


n=l fc = l J{cc = k) n = l

The typical integral on the right side is by definition of a equal to

f (i+ Σ 9j{sn)}d&<\ {i + I 9ASn-sk)}d0>,


J{cc = k) n = /c + l J{a = k} n=k + l

since {a = k) c {Sfc G7} and {5fc e 7 } n {5n e J) cz {5n - 5fc e / } . Now


{a = &} and 5 n — Sk are independent, hence the last-written integral is
equal to

0>{μ = k){\ + f Σ 9^/(^n - Sk) M = 0>{* = ΛΚ{ I φ / (5 η )},


L JO. n=k +l J n =0

since 9>7(S0) = 1· Summing over A: and observing that ç>/(0) = 1 only if


j = 0, in which case J <=■ I and the inequality below is trivial, we obtain
for each j :
' { f 9ÀSn)}<<?{ï <P,(Sn)}.
n=0 n =0

Now if we write / ; for / and sum over y from — m to m — 1, the inequality (8)
ensues in disguised form.

This lemma is often demonstrated by a geometrical type of argument.


We have written out the preceding proof in some detail as an example of the
maxim : whatever can be shown by drawing pictures can also be set down in
symbols !

Lemma 2. Let the positive numbers {un(m)}, where neN and m is a real
number > 1, satisfy the following conditions:

(i) V« : un{m) is increasing in m and tends to 1 as m -> oo ;


00 00
w
(ii) 3c > 0: 2 ujjn) < cm Σ n0) for all m > 1
n =0 n=0

(iii) Υδ > 0: lim un(Sn) = 1.


270 I RANDOM WALK

Then we have
(10) f un{\)= oo.
n=0

Remark. If (ii) is true for all integer m > 1, then it is true for all real
m > 1, with c doubled.

PROOF OF LEMMA 2. Suppose not; then for every A > 0:


co i co * [Am]
M
CO > 2 «»(1) a; — 2 "»0») 2: — 2
L r n c m
"(w)
n=0 n =0 n =0
| Mm]
>
cm
Letting m->cc and applying (iii) with δ = y4 -1 , we obtain

c
n=0

since A is arbitrary, this is a contradiction, which proves the lemmp

To return to the proof of Theorem 8.3.3, we apply Lemma 2 with


Φ ) = n\Sn\ < m).
Then condition (i) is obviously satisfied and condition (ii) with c = 2 follows
from Lemma 1. The hypothesis that SJn -> 0 in pr. may be written as

un(Sn) = &. < &y->l

for every δ > 0 as n^co, hence condition (iii) is also satisfied. Thus
Lemma 2 yields
f &{\Sn\ <\}= +00.
n =0

Applying this to the "magnified" random walk with each Xn replaced by XJe,
which does not disturb the hypothesis of the theorem, we obtain (6) for every
e > 0, and so the theorem is proved.

In practice, the following criterion is more expedient (Chung and Fuchs,


1951).

Theorem 8.3.4. Suppose that at least one of £(X+) and S(X~) is finite.
The W φ 0 if and only if ê{X) = 0; otherwise case (ii) or (iii) of Theorem
8.2.5 happens according as S(X) < 0 or >0.
8.3 RECURRENCE | 271

PROOF. If -oo < ê{X) < 0 or 0 < ê{X) < +00, then by the strong
law of large numbers (as amended by Exercise 1 of Sec. 5.4), we have

^ -> ê{X) a.e.,


n
so that either (ii) or (iii) happens as asserted. If &(X) = 0, then the same law
or its weaker form Theorem 5.2.2 applies; hence Theorem 8.3.3 yields the
conclusion.

DEFINITION OF RECURRENT RANDOM WALK. A random walk will be


called recurrent iff 5H φ 0 ; it is degenerate iff 9Î = {0}; and it is of the lattice
type iff 91 is generated by a nonzero element.

The most exact kind of recurrence happens when the Xn's have a common
distribution which is concentrated on the integers, such that every integer is a
possible value (for some Sn), and which has mean zero. In this case for each
integer c we have &{Sn = c i.o.} = 1. For the symmetrical Bernoullian
random walk this was first proved by Poly a in 1921.
We shall give another proof of the recurrence part of Theorem 8.3.4,
namely that £(X) = 0 is a sufficient condition for the random walk to be
recurrent as just defined. This method is applicable in cases not covered by
the two preceding theorems (see Exercises 6-9 below), and the analytic
machinery of ch.f.'s which it employs opens the way to the considerations in
the next section.
The starting point is the integrated form of the inversion formula in Exer-
cise 3 of Sec. 6.2. Taking x = 0, u = e, and F to be the d.f. of Sn9 we have
1
(11) P(\Sn\ < e) > - Γ &(\Sn\ <u)du = I Γ ~™etm*dt.
* Jo TTC J - 0 0 t

Thus the series in (6) may be bounded below by summing the last expression
in (11). The latter does not sum well as it stands, and it is natural to resort
to a summability method. The Abelian method suits it well and leads to, for
0 < r < 1:

(12) j , r^(\Sn\ < c) * 1 £ Li™« Rf _ L ^ dt,


where R and I later denote the real and imaginary parts 01 a complex
quantity. Since
1 1- r
(13) R
1 7Π\ ^ ΤΊ 77ÄT2 > °
1 - rf(t) |1 - rf(t)\2
and(l - cos €t)lt2 > C€ 2 for|e/| < 1 and some constant C, it follows that for
272 I RANDOM WALK

η < 1/e the right member of (12) is not less than

(14)

Now the existence of <^(|ΑΊ) implies by Theorem 6.4.2 that 1 — f(t) = o{t)
as / -> 0. Hence for any given δ > Owe may choose the η above so that

The quantity in (14) is then not less than

As r f 1, the right member above tends to ττ/3δ; since δ is arbitrary, we have


proved that the right member of (12) tends to 4-oo as r \ 1. Since the series
in (6) dominates that on the left member of (12) for every r, it follows that (6)
is true and so 0 e 5R by Theorem 8.3.2.

EXERCISES

fis the ch.f. of μ.

1. Generalize Theorems 8.3.1 and 8.3.2 to 0td. (For d > 3 the generaliza-
tion is illusory; see Exercise 12 below.)
*2. If a random walk in &d is recurrent, then every possible value is a
recurrent value.
3. Prove the Remark after Theorem 8.3.2.
*4. Prove that x is a recurrent value of the random walk if and only if

for some

5. For a recurrent random walk that is neither degenerate nor of the


lattice type, the countable set of points {Sn(œ)9 n e N} is everywhere dense in
&1 for a.e.cü. Hence prove the following result in Diophantine approximation :
if y is irrational, then given any real x and € > 0 there exist integers m and n
such that \my + n — x\ < e.
*6. If there exists a δ > 0 such that (the integral below being real-valued)

then the random walk is recurrent.


8.3 RECURRENCE | 273

*7. If there exists a δ > 0 such that


rô dt
sup
1 ^ΤΆ < °°>
o<r<i J-Ö 1 — rj(t)
then the random walk is not recurrent, [HINT: Use Exercise 3 of Sec. 6.2 to
show that there exists a constant C(e) such that

0>{\Sn\ < e) < C(6) £ t * ~ C g 6 ljC


/*.(<**) < ϊ ψ £ " Λ 1^/(0" A,
where μη is the distribution of Sn, and use (13). [Exercises 6 and 7 give,
respectively, a necessary and a sufficient condition for recurrence; no
necessary and sufficient condition in terms o f / i s known in the general case.
If μ is of the integer lattice type with span one, then

1 ·7Γ\ dt = 00
J-πΙ - / ( O
is such a condition according to Kesten and Spitzer.]
*8. Prove that the random walk with/(i) = e~]tl (Cauchy distribution)
is recurrent.
*9. Prove that the random walk with/(O = e~^\ 0 < a < 1, (Stable
law) is not recurrent, but (iv) of Theorem 8.2.5 holds.
10. Generalize Exercises 6 and 7 above to 3td.
*11. Prove that in @t2 if the common distribution of the random vector
(X, Y) has mean zero and finite second moment, namely:
g(X) = 0, g(Y) = 0, 0 < £(X2 4- Y2) < oo,
then the random walk is recurrent. This implies by Exercise 5 above that
almost every Brownian motion path is everywhere dense in 0t2. [HINT: Use
the generalization of Exercise 6 and show that

R 1 ^ c
i -f(h,t2)- t? + ti
for sufficiently small 1^1 + \t2\. One can also make a direct estimate:

&(\sn\ < «) > Ç]


*12. Prove that no truly 3-dimensional random walk, namely one whose
common distribution does not have its support in a plane, is recurrent.
[HINT: There exists A > 0 such that

A 3

J7J\2 ttXtmdx)
274 I RANDOM WALK

is a strictly positive quadratic form Q in (tl912, t3). If

then

13. Generalize Lemma 1 in the proof of Theorem 8.3.3 to @td. For


d = 2 the constant 2m in the right member of (8) is to be replaced by 4m2,
and " \Sn\ < c" means 5 n is in the open square with center at the origin and
side length 2e.
14. Extend Lemma 2 in the proof of Theorem 8.3.3 as follows. Keep
condition (i) but replace (ii) and (iii) by

(ii')

(iii')

Then (10) is true.


15. Generalize Theorem 8.3.3 to &2 as follows. If the central limit
theorem applies in the form that SnjVn converges in dist. to the unit normal,
then the random walk is recurrent, [HINT: Use Exercises 13 and 14. This is
sharper than Exercise 11. No proof of Exercise 12 using a similar method
is known.]
16. Suppose ê{X) = 0, 0 < δ{X2) < oo, and μ is of the integer lattice
type, then

17. The basic argument in the proof of Theorem 8.3.2 was the "last
time in ( —e, e)". A harder but instructive argument using the "first time"
may be given as follows. Let a be the first entrance time into ( —e, e), and

Show that for 1 < m < M:

It follows by a form of Lemma 1 in this section that

now use Theorem 8.1.4.


8.4 FINE STRUCTURE | 275

18. For an arbitrary random walk, if ^{V/i e N : Sn > 0} > 0, then


l&{Sn< 0, 5 n + 1 > 0} < oo.
n

Hence if in addition &>{Vn e N : Sn < 0} > 0, then


2 \&{Sn > 0} - ^ { S n + 1 > 0}| < oo;
n

and consequently
2^^{Sn>0}<oo.
n

[HINT: For the first series, consider the last time that Sn < 0; for the third
series, apply Du Bois-Reymond's test. Cf. Theorem 8.4.4 below; this exercise
will be completed in Exercise 15 of Sec. 8.5.]

8.4 Fine structure

In this section we embark on a probing in depth of the r.v. a defined in


(11) of Sec. 8.2 and some related r.v.'s.
The r.v. a being optional, the key to its introduction is to break up the
time sequence into a pre-a and a post-« era, as already anticipated in the
terminology employed with a general optional r.v. We do this with a sort of
characteristic functional of the process which has already made its appearance
in the last section :
oo oo i

(1) £{ 2 rV'M = I rn/(0n =


'J rfitV
n=0 n=0
1A _ \* J

where 0 < r < 1, /is real, a n d / i s the ch.f. of X. Applying the principle just
enunciated, we break this up into two parts:

£{2 rneitsn) + ê{ f rneitsn}


n=0 n = cc
00

with the understanding that on the set {a = oo}, the first sum above is 2
71 = 0

while the second is empty and hence equal to zero. Now the second part may
be written as
00 OO

(2) ê{ 2 ra + neits« + n} = «f{rVis« 2 rV i(s «+ "-*«)}.


71 = 0 71 = 0

It follows from (7) of Sec. 8.2 that

Sa + n — Sec — ^n07"a
276 I RANDOM WALK

has the same distribution as Sn, and by Theorem 8.2.2 that for each n it is
independent of Sa. [Note that the same fact has been used more than once
before, but for a constant a.] Hence the right member of (2) is equal to

{ Γ ¥ ψ { f rneitsn} = £{raeits«} -—*


n=0 1 — rf(t)

where raeits<* is taken to be 0 for a = oo. Substituting this into (1), we


obtain

(3) χ _\f{t) [1 - <?{rV<M] = < f rV'M-

We have
(4) 1 - <f{/-ae"M = 1 - 2 rn[ e"s» d»\
n=l J{cc = n}
and

(5) £{2 rneitsn} = f f 2rneitsnd0> = 1+ | rn ί eitsn rf^


n=0 fc = l J(cc = k} n =0 n= l J{a>n}

by an interchange of summation. Let us record the two power series appearing


in (4) and (5) as

P(r,t) = 1 - ' { r W } = Σ rnpn(t);


n=0

Ö(M) = ^ i r V , s » } = Σ r»in(f),
n=0 n=0

where

i 0 (0-l, 9 n (0 = |(a>n)e"s"^ = / , 1 ^ K ^ ) .

Now Un( · ) = ^{a = « ; iSn G ·} is a finite measure with support in (0, oo),
while Vn{·) = 0*{a > n; Sne ·} is a finite measure with support in (— oo, 0].
Thus each pn is the Fourier transform of a finite measure in (0, oo), while
each qn is the Fourier transform of a finite measure in ( — oo, 0]. Returning to
(3), we may now write it as

(6) ! _ ' r / ( 0 ^ 0 = Q{r,t).

The next step is to observe the familiar Taylor series :


1 OO γΠ\

= exp^ 2, — f> \x\ < 1.


1 - x
8.4 FINE STRUCTURE | 277

Thus we have

(7)

where

and μη(·) = é?{Sn e ·} is the distribution of Sn. Since the convolution of two
measures both with support in (0, oo) has support in (0, oo), and the con-
volution of two measures both with support in (—oo, 0] has support in
(—oo,0], it follows by expansion of the exponential functions above and
rearrangements of the resulting double series that
00 00

Mr, 0 = 1 + Σ r"c,„(0, Mr, 0 = 1 + I rV»(0,


n=1 n=1

where each <pn is the Fourier transform of a measure in (0, oo), while each φη
is the Fourier transform of a measure in (—00, 0]. Substituting (7) into (6)
and multiplying through by/+(r, t), we obtain

(8) P(r,0/-(r,0= Q(r9t)f+(r,t).


The next theorem below supplies the basic analytic technique for this develop-
ment, known as the Wiener-Hopf technique.

Theorem 8.4.1. Let

P(r,t)= I r»pn(t), Q(r,t)= f rnqn(t)9


n=0 n=0

P*(r,t)= Σ r*p*(t), Q*(r,t)= f r*q*(t),


n=0 n=0

where p0(t) = q0(t) = pt(t) = q$(t) = 1 ; and for n > 1, pn and p* as


functions of / are Fourier transforms of measures with support in (0, 00);
qn and qt as functions of t are Fourier transforms of measures in ( — 00, 0].
Suppose that for some r0 > 0 the four power series converge for r in (0, r0)
and all real t9 and the identity
(9)
278 I RANDOM WALK

holds there. Then


ρ Ξ ρ*, ρ=ρ*.
The theorem is also true if (0, oo) and (—00, 0] are replaced by [0, 00) and
( — 00, 0), respectively.
PROOF. It follows from (9) and the identity theorem for power series
that for every n > 0 :

(10) i ΛίΟίΪ-ΛΟ = Σ Pt<J)qn-k{t).


fc=0 fc=0

Then for n = 1 equation (10) reduces to the following:

Λ ( 0 - Λ ( 0 = ?Ι(0-«Ϊ(0·
By hypothesis, the left member above is the Fourier transform of a finite
signed measure νλ with support in (0, 00), while the right member is the
Fourier transform of a finite signed measure with support in (—00, 0]. It
follows from the uniqueness theorem for such transforms (Exercise 13 of
Sec. 6.2) that we must have vx = v2, and so both must be identically zero
since they have disjoint supports. Thus px = p\ and qx = q*. To proceed
by induction on w, suppose that we have proved that/? ; = /?*and q3 = #*for
0 < j < n - 1. Then it follows from (10) that
Pn{t) + qt(t)=p*n{t) + qn(t).
Exactly the same argument as before yields that pn = /?* and qn = q*. Hence
the induction is complete and the first assertion of the theorem is proved ;
the second is proved in the same way.

Applying the preceding theorem to (8), we obtain the next theorem in


the case A = (0, 00) ; the rest is proved in exactly the same way.

Theorem 8.4.2. If a = aA is the first entrance time into A, where A is one of


the four sets: (0, 00), [0, 00), (—00, 0), (—00, 0], then we have

(11) 1 - ê{raéis«) = exp ( - V - Γ eitsn d0>\;


n
I n=i J{SneA) J

(12) ê{ "y rneitsn} = exp ( + V - Γ eitsn d&\-


n
n=o L n=l J{SneAc} J

From this result we shall deduce certain analytic expressions involving


the r.v. a. Before we do that, let us list a number of classical Abelian and
8.4 FINE STRUCTURE | 279

Tauberian theorems below for ready reference.


00

(A) If cn > 0 and 2 cnrn converges for 0 < r < 1, then


n=0
00 00

(*) lim 2 cnrn = Σ cn


rîl n= 0 n= 0

finite or infinite.
00

(B) If cn are complex numbers and 2 <Vn converges for 0 < r < 1,
n=0
then (*) is true.

(C) If cn are complex numbers such that cn = ο(η~λ) [or just 0(/z -1 )] as
n -> oo, and the limit in the left member of (*) exists and is finite, then (*) is
true.

(D) If c™ > 0, 2 c(nl)rn converges for 0 < r < 1 and diverges for
n=0

r = 1, / = 1, 2; and

Σ c ^ ~ K I c«>
k=0 fc=0
1)
[or more particularly c^ ^ Kc^] as « -> oo, where 0 < K < +oo, then

2 c™rn ~ K Σ c'n2)rn
n=0 n=0
as r f 1.

(E) If cn > 0 and

Σ c-rn 1 - r
as r f 1, then
n-l
2 cfc - « .
k=0

Observe that (C) is a partial converse of (B), and (E) is a partial converse
of (D). There is also an analogue of (D), which is actually a consequence of
(B): if cn are complex numbers converging to a finite limit c, then as r \ 1,

A
n=0 '

Proposition (A) is trivial. Proposition (B) is Abel's theorem, and


280 I RANDOM WALK

proposition (C) is Tauber's theorem in the "little 0" version and Littlewood's
theorem in the "big 0 " version; only the former will be needed below.
Proposition (D) is an Abelian theorem, and proposition (E) a Tauberian
theorem, the latter being sometimes referred to as that of Hardy-Littlewood-
Karamata. All four can be found in the admirable book by Titchmarsh,
The theory of functions (2nd ed., Oxford University Press, Inc., New York,
1939, pp. 9-10, 224 if.).

Theorem 8.4.3. The generating function of a in Theorem 8.4.2 is given by

(13) ${r"} = 1 - « P { " £ Ç nsn e A]}

= 1 - (1 - r) exp { | £ 0>\Sn e A']}-


We have
(14) 0>{a < 00} = 1 if and only if JT - 0>[Sn e A] = 00;
H
n=l
in which case
(15) ^ } = exp{JJ^[5nE^]}.
PROOF. Setting t = 0 in (11), we obtain the first equation in (13), from
which the second follows at once through
1 r 00 γη\ 00 n 00 „n
r
=cxp\2-\ = exp{2 -nSnsA]+ Σ -nSneA<}}.
1 — r \^n = i n j η=ι η η= ι η
Since
lim £{ra) = lim 2 0>{a = n}rn = Σ &{<* = n) = &{a < 00}
rîl rîl n = l n= l

by proposition (A), the middle term in (13) tends to a finite limit, hence also
the power series in r there (why?). By proposition (A), the said limit may be
obtained by setting r = 1 in the series. This establishes (14). Finally, setting
t = 0 in (12), we obtain

(16) *{% r«} = exp { + J Ç &\Pn e Ac]}·


n
n=0 { n=l )
Rewriting the left member in (16) as in (5) and letting r f 1, we obtain

(17) lim f rn0>[a > n] = f 0>[a > n] = £{a) < 00


rîl n = 0 n=0
by proposition (A). The right member of (16) tends to the right member of
(15) by the same token, proving (15).
8.4 FINE STRUCTURE | 281

When is cf{a} in (15) finite? This is answered by the next theorem.

Theorem 8.4.4. Suppose that X φ 0 and at least one of £(X+) and S(X~)
is finite; then

(18) ê{X) > 0 =>*{««>..)} < oo;


(19) *(JO<0=>^{a [ 0 i oo)} = cx).

PROOF. If δ{Χ) > 0, then 0>{Sn -> +00} = 1 by the strong law of large
numbers. Hence ^{lim Sn = —00} = 0, and this implies by the dual of
7l-> 00

Theorem 8.2.4 that ^{a(_^t0) < 00} < 1. Let us sharpen this slightly to
^{«(-oo.o] < 00} < 1. To see this, write a = ^ . « ^ j a n d consider Sß'n as in
the proof of Theorem 8.2.4. Clearly Sß'n < 0, and so if ^{a < 00} = 1, one
would have &{Sn < 0 i.o.} = 1, which is impossible. Now apply (14) to
a<-oo,o] a n d (15) to a(0,oo) to infer

'{«Co.«»} = e x p | %\&[Sn < 0 ] | < oo,

proving (18). Next if δ(Χ) = 0, then ^ { α ( . β | 0 ) < oo} = 1 by Theorem 8.3.4.


Hence, applying (14) to a(_ «,,<)) and (15) to a[0>00), we infer

^Κο,οο)} = exp I I 1 nSn < 0] I = 00.

Finally, if $(X) < 0, then ^{a[0tOQ) = 00} > 0 by an argument dual to that
given above for a( _«,,<)], and so <o{a[0iO0)) = 00 trivially.

Incidentally we have shown that the two r.v.'s a(0j00) and α[0ι00) have both
finite or both infinite expectations. Comparing this remark with (15), we
derive an analytic by-product as follows.

Corollary. We have

(20) 2 i ^ n
n = 0]<ax
n=l

This can also be shown, purely analytically, by means of Exercise 25 of


Sec. 6.4.

The astonishing part of Theorem 8.4.4 is the case when the random walk
is recurrent, which is the case if $(X) = 0 by Theorem 8.3.4. Then the
282 I RANDOM WALK

set [0, oo), which is more than half of the whole range, is revisited an infinite
number of times. Nevertheless (19) says that the expected time for even one
visit is infinite ! This phenomenon becomes more paradoxical if one reflects
that the same is true for the other half (—oo, 0], and yet in a single step one
of the two halves will certainly be visited. Thus we have:
x
(-oo,0] Λ OC [ 0 > 0 0) = 1, ^Κ-οο,Ο]} = ^Κθ,αο)} = OO.

Another curious by-product concerning the strong law of large numbers


is obtained by combining Theorem 8.4.3 with Theorem 8.2.4.

Theorem 8.4.5. SJn -» m a.e. for a finite constant m if and only if for every
e > 0 we have
00
1 Sn
(21)
l~n m
> € > < 00.

n
PROOF. Without loss of generality we may suppose m = 0. We know from
Theorem 5.4.2 that 5 > - > 0 a . e . if and only if δ(\Χ\) < oo and δ(Χ) = 0.
If this is so, consider the stationary independent process {XJ n e TV}, where
X'n = Xn — e, e > 0; and let S'n and a = a'(0f00) be the corresponding r.v.'s
for this modified process. Since δ(Χ') = — e, it follows from the strong law
of large numbers that Sn -> —oo a.e., and consequently by Theorem 8.2.4
we have ^{a < oo} < 1. Hence we have by (14) applied to a :

(22) 2 lnSn-ne>0]<œ.
U
n=l

By considering Xn 4- € instead of Xn — €, we obtain a similar result with


" 5 n - ne > 0 " in (22) replaced by "Sn + ne < 0". Combining the two, we
obtain (21) when m = 0.
Conversely, if (21) is true with m = 0, then the argument above yields
0>{a < oo} < 1, and so by Theorem 8.2.4, ^{lim S'n = -foo} = 0. A fortiori
n-» oo
we have
Ve > 0 : 0>{S'n > ne i.o.} = 0>{Sn > 2ne i.o.} = 0.
Similarly we obtain Ve > 0 : 0*{Sn < -2nelo.} = 0, and the last two
relations together mean exactly SJn -> 0 a.e. (cf. Theorem 4.2.2).

Having investigated the stopping time a, we proceed to investigate the


stopping place Sa9 where a < oo. The crucial case will be handled first.

Theorem 8.4.6. If £{X) = 0 and 0 < £(X2) = σ2 < oo, then

(23) £{Sa) = ^ = e x p | | J g - 0>(SneA)] y < oo.


8.4 FINE STRUCTURE | 283

PROOF. Observe that $(X) = 0 implies each of the four r.v.'s a is finite
a.e. by Theorem 8.3.4. We now switch from Fourier transform to Laplace
transform in (11), and suppose for the sake of definiteness that A = (0, oo).
It is readily verified that Theorem 6.6.5 is applicable, which yields

(24)

for 0 < r < 1, 0 < λ < oo. Letting r f 1 in (24), we obtain an expression
for the Laplace transform of Sa9 but we must go further by differentiating
(24) with respect to λ to obtain
(25)

the justification for term wise differentiation being easy, since ^{|S n |} <
«<f{|Z|}. If we now set λ = 0 in (25), the result is

(26)

By Exercise 2 of Sec. 6.4, we have as n -> oo,

so that the coefficients in the first power series in (26) are asymptotically
equal to σ/λ/2 times those of

since

It follows from proposition (D) above that

Substituting into (26), and observing that as r \ 1, the left member of (26)
tends to ${Sa) < oo by the monotone convergence theorem, we obtain

(27)
284 I RANDOM WALK

It remains to prove that the limit above is finite, for then the limit of the
power series is also finite (why ? it is precisely here that the Laplace transform
saves the day for us), and since the coefficients are o(\/n) by the central limit
theorem, and certainly 0(1/n) in any event, proposition (C) above will identify
it as the right member of (23) with A = (0, oo).
Now by analogy with (27), replacing (0, oo) by (—oo, 0] and writing
«(-oo.o] as ß, we have

(28) <?{Sß} = -?= lim exp { f ) £ [5 " ^ » < 0)]}·

Clearly the product of the two exponentials in (27) and (28) is just exp 0 = 1 ,
hence if the limit in (27) were +00, that in (28) would have to be 0. But since
S(X) = 0 and ê(X2) > 0, we have &{X < 0) > 0, which implies at once
&(Sß < 0) > 0 and consequently ${Sß) < 0. This contradiction proves that
the limits in (27) and (28) must both be finite, and the theorem is proved.

Theorem 8.4.7. Suppose that $X \not\equiv 0$ and at least one of $\mathscr{E}(X^+)$ and $\mathscr{E}(X^-)$ is finite; and let $\alpha = \alpha_{(0,\infty)}$, $\beta = \alpha_{(-\infty,0]}$.

(i) If $\mathscr{E}(X) > 0$ but may be $+\infty$, then $\mathscr{E}(S_{\alpha}) = \mathscr{E}(\alpha)\mathscr{E}(X)$.

(ii) If $\mathscr{E}(X) = 0$, then $\mathscr{E}(S_{\alpha})$ and $\mathscr{E}(S_{\beta})$ are both finite if and only if $\mathscr{E}(X^2) < \infty$.

PROOF. The assertion (i) is a consequence of (18) and Wald's equation (Theorem 5.5.3 and Exercise 8 of Sec. 5.5). The "if" part of assertion (ii) has been proved in the preceding theorem; indeed we have even "evaluated" $\mathscr{E}(S_{\alpha})$. To prove the "only if" part, we apply (11) to both $\alpha$ and $\beta$ in the preceding proof and multiply the results together to obtain the remarkable equation:

(29) $\left[1 - \mathscr{E}\{r^{\alpha} e^{itS_{\alpha}}\}\right]\left[1 - \mathscr{E}\{r^{\beta} e^{itS_{\beta}}\}\right] = \exp\left\{-\sum_{n=1}^{\infty} \frac{r^n f(t)^n}{n}\right\} = 1 - rf(t).$

Setting $r = 1$, we obtain for $t \ne 0$:

$\frac{1 - f(t)}{t^2} = \frac{1 - \mathscr{E}\{e^{itS_{\alpha}}\}}{-it} \cdot \frac{1 - \mathscr{E}\{e^{itS_{\beta}}\}}{+it}.$

Letting $t \downarrow 0$, the right member above tends to $\mathscr{E}\{S_{\alpha}\}\mathscr{E}\{-S_{\beta}\}$ by Theorem 6.4.2. Hence the left member has a real and finite limit. This implies $\mathscr{E}(X^2) < \infty$ by Exercise 1 of Sec. 6.4.
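Assertion (i), which is Wald's equation here, lends itself to a quick numerical check. In the sketch below the uniform step law on $(-1, 2)$, with $\mathscr{E}(X) = 1/2$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    stops, alphas = [], []
    for _ in range(20_000):
        s, n = 0.0, 0
        while s <= 0.0:              # walk until first entry into (0, infinity)
            s += rng.uniform(-1.0, 2.0)
            n += 1
        stops.append(s)
        alphas.append(n)

    # Wald: E(S_alpha) = E(alpha) E(X), with E(X) = 0.5 for this step law
    print(np.mean(stops), np.mean(alphas) * 0.5)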

8.5 Continuation

Our next study concerns the r.v.'s $M_n$ and $M$ defined in (12) and (13) of Sec. 8.2. It is convenient to introduce a new r.v. $L_n$, which is the first time (necessarily belonging to $N_n^0$) that $M_n$ is attained:

(1) $\forall n \in N^0: \quad L_n(\omega) = \min\{k \in N_n^0 : S_k(\omega) = M_n(\omega)\};$

note that $L_0 = 0$. We shall also use the abbreviation

(2) $\alpha = \alpha_{(0,\infty)}, \quad \beta = \alpha_{(-\infty,0]}$

as in Theorems 8.4.6 and 8.4.7.


For each $n$ consider the special permutation below:

(3) $p_n = \begin{pmatrix} 1 & 2 & \cdots & n \\ n & n-1 & \cdots & 1 \end{pmatrix},$

which amounts to a "reversal" of the ordered set of indices in $N_n$. Recalling that $\beta \circ p_n$ is the r.v. whose value at $\omega$ is $\beta(p_n\omega)$, and that $S_k \circ p_n = S_n - S_{n-k}$ for $1 \le k \le n$, we have

$\{\beta \circ p_n > n\} = \bigcap_{k=1}^{n} \{S_k \circ p_n > 0\} = \bigcap_{k=1}^{n} \{S_n - S_{n-k} > 0\} = \{L_n = n\}.$

It follows from (8) of Sec. 8.1 that

(4) $\int_{\{\beta > n\}} e^{itS_n}\, d\mathscr{P} = \int_{\{L_n = n\}} e^{itS_n}\, d\mathscr{P}.$

Applying (5) and (12) of Sec. 8.4 to $\beta$ and substituting (4), we obtain

(5) $\sum_{n=0}^{\infty} r^n \int_{\{L_n = n\}} e^{itS_n}\, d\mathscr{P} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n} \int_{\{S_n > 0\}} e^{itS_n}\, d\mathscr{P}\right\};$

applying (5) and (12) of Sec. 8.4 to $\alpha$ and substituting the obvious relation $\{\alpha > n\} = \{L_n = 0\}$, we obtain

(6) $\sum_{n=0}^{\infty} r^n \int_{\{L_n = 0\}} e^{itS_n}\, d\mathscr{P} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n} \int_{\{S_n \le 0\}} e^{itS_n}\, d\mathscr{P}\right\}.$

We are ready for the main results for $M_n$ and $M$, known as Spitzer's identity:

Theorem 8.5.1. We have for $0 \le r < 1$:

(7) $\sum_{n=0}^{\infty} r^n \mathscr{E}\{e^{itM_n}\} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n} \mathscr{E}(e^{itS_n^+})\right\}.$

$M$ is finite a.e. if and only if

(8) $\sum_{n=1}^{\infty} \frac{1}{n} \mathscr{P}\{S_n > 0\} < \infty,$

in which case we have

(9) $\mathscr{E}\{e^{itM}\} = \exp\left\{\sum_{n=1}^{\infty} \frac{1}{n}\left[\mathscr{E}(e^{itS_n^+}) - 1\right]\right\}.$

PROOF. Observe the basic equation that follows at once from the meaning of $L_n$:

(10) $\{L_n = k\} = \{L_k = k\} \cap \{L_{n-k} \circ \tau^k = 0\},$

where $\tau^k$ is the $k$th iterate of the shift. Since the two events on the right side of (10) are independent, we obtain for each $n \in N^0$, and real $t$ and $u$:

(11) $\mathscr{E}\{e^{itM_n} e^{iu(S_n - M_n)}\} = \sum_{k=0}^{n} \int_{\{L_n = k\}} e^{itS_k} e^{iu(S_n - S_k)}\, d\mathscr{P}$
$\quad = \sum_{k=0}^{n} \int_{\{L_k = k\}} e^{itS_k}\, d\mathscr{P} \int_{\{L_{n-k} \circ \tau^k = 0\}} e^{iu(S_n - S_k)}\, d\mathscr{P}$
$\quad = \sum_{k=0}^{n} \int_{\{L_k = k\}} e^{itS_k}\, d\mathscr{P} \int_{\{L_{n-k} = 0\}} e^{iuS_{n-k}}\, d\mathscr{P}.$

It follows that

(12) $\sum_{n=0}^{\infty} r^n \mathscr{E}\{e^{itM_n} e^{iu(S_n - M_n)}\} = \sum_{n=0}^{\infty} r^n \int_{\{L_n = n\}} e^{itS_n}\, d\mathscr{P} \cdot \sum_{n=0}^{\infty} r^n \int_{\{L_n = 0\}} e^{iuS_n}\, d\mathscr{P}.$

Setting $u = 0$ in (12) and using (5) as it stands and (6) with $t = 0$, we obtain

$\sum_{n=0}^{\infty} r^n \mathscr{E}\{e^{itM_n}\} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n}\left[\int_{\{S_n > 0\}} e^{itS_n}\, d\mathscr{P} + \int_{\{S_n \le 0\}} 1\, d\mathscr{P}\right]\right\},$

which reduces to (7).

Next, by Theorem 8.2.4, $M < \infty$ a.e. if and only if $\mathscr{P}\{\alpha < \infty\} < 1$, or equivalently by (14) of Sec. 8.4 if and only if (8) holds. In this case the convergence theorem for ch.f.'s asserts that

$\mathscr{E}\{e^{itM}\} = \lim_{n\to\infty} \mathscr{E}\{e^{itM_n}\},$

and consequently

$\mathscr{E}\{e^{itM}\} = \lim_{r\uparrow1} (1 - r) \sum_{n=0}^{\infty} r^n \mathscr{E}\{e^{itM_n}\} = \lim_{r\uparrow1} \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n}\left[\mathscr{E}(e^{itS_n^+}) - 1\right]\right\},$

where the first equation is by proposition (B) in Sec. 8.4. Since

$\sum_{n=1}^{\infty} \frac{1}{n}\left|\mathscr{E}(e^{itS_n^+}) - 1\right| \le \sum_{n=1}^{\infty} \frac{2}{n} \mathscr{P}[S_n > 0] < \infty$

by (8), the last-written limit above is equal to the right member of (9), by proposition (B). Theorem 8.5.1 is completely proved.
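The identity (7) can also be checked numerically: estimate the coefficients $\mathscr{E}(e^{itM_n})$ by Monte Carlo and compare them with the coefficients of the exponential of the series on the right, computed by the power-series recurrence $n c_n = \sum_{k=1}^{n} k\, b_k c_{n-k}$ for $C = \exp(B)$. In the minimal sketch below, the normal step law, $t = 1$, the truncation order $N$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    N, t, trials = 6, 1.0, 200_000

    steps = rng.standard_normal((trials, N))
    paths = np.cumsum(steps, axis=1)                  # S_1, ..., S_N
    run_max = np.maximum.accumulate(paths, axis=1)    # max(S_1, ..., S_n)

    # left side of (7): a_n = E exp(it M_n), with M_0 = 0 so a_0 = 1
    a = [1.0 + 0j]
    for n in range(1, N + 1):
        m_n = np.maximum(run_max[:, n - 1], 0.0)      # M_n = 0 v max_j S_j
        a.append(np.exp(1j * t * m_n).mean())

    # right side: B(r) = sum_{n>=1} (r^n / n) E exp(it S_n^+), then C = exp(B)
    b = [0.0 + 0j]
    for n in range(1, N + 1):
        s_plus = np.maximum(paths[:, n - 1], 0.0)
        b.append(np.exp(1j * t * s_plus).mean() / n)

    c = [1.0 + 0j]
    for n in range(1, N + 1):                         # n c_n = sum k b_k c_{n-k}
        c.append(sum(k * b[k] * c[n - k] for k in range(1, n + 1)) / n)

    for n in range(N + 1):
        print(n, a[n], c[n])                          # should nearly agree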

By switching to Laplace transforms in (7) as done in the proof of Theorem 8.4.6 and using proposition (E) in Sec. 8.4, it is possible to give an "analytic" derivation of the second assertion of Theorem 8.5.1 without recourse to the "probabilistic" result in Theorem 8.2.4. This kind of mathematical gambit should appeal to the curious as well as the obstinate; see Exercise 9 below. Another interesting exercise is to establish the following result:

(13) $\mathscr{E}(M_n) = \sum_{k=1}^{n} \frac{1}{k} \mathscr{E}(S_k^+)$

by differentiating (7) with respect to $t$. But a neat little formula such as (13) deserves a simpler proof, so here it is.
Proof of (13). Writing

$M_n = \bigvee_{j=1}^{n} S_j^+ = \left(\bigvee_{j=1}^{n} S_j\right)^+$

and dropping "$d\mathscr{P}$" in the integrals below:

$\mathscr{E}(M_n) = \int_{\{M_n > 0\}} \bigvee_{k=1}^{n} S_k$
$\quad = \int_{\{M_n > 0;\, S_n > 0\}} \left[X_1 + 0 \vee \bigvee_{k=2}^{n} (S_k - X_1)\right] + \int_{\{M_n > 0;\, S_n \le 0\}} \bigvee_{k=1}^{n} S_k$
$\quad = \int_{\{S_n > 0\}} X_1 + \int_{\{S_n > 0\}} \left[0 \vee \bigvee_{k=2}^{n} (S_k - X_1)\right] + \int_{\{M_{n-1} > 0;\, S_n \le 0\}} \bigvee_{k=1}^{n-1} S_k.$

Call the last three integrals $J_1$, $J_2$, and $J_3$. We have on grounds of symmetry

$J_1 = \frac{1}{n} \int_{\{S_n > 0\}} S_n.$

Apply the cyclical permutation

$\begin{pmatrix} 1 & 2 & \cdots & n \\ 2 & 3 & \cdots & 1 \end{pmatrix}$

to $J_2$ to obtain

$J_2 = \int_{\{S_n > 0\}} M_{n-1}.$

Obviously, we have

$J_3 = \int_{\{M_{n-1} > 0;\, S_n \le 0\}} M_{n-1} = \int_{\{S_n \le 0\}} M_{n-1}.$

Gathering these, we obtain

$\mathscr{E}(M_n) = \frac{1}{n} \int_{\{S_n > 0\}} S_n + \int_{\{S_n > 0\}} M_{n-1} + \int_{\{S_n \le 0\}} M_{n-1} = \frac{1}{n} \mathscr{E}(S_n^+) + \mathscr{E}(M_{n-1}),$

and (13) follows by recursion.
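A minimal Monte Carlo check of (13); the normal step law, $n = 10$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 10, 200_000

    paths = np.cumsum(rng.standard_normal((trials, n)), axis=1)
    m_n = np.maximum(paths.max(axis=1), 0.0)          # M_n = max(0, S_1, ..., S_n)

    lhs = m_n.mean()
    rhs = sum(np.maximum(paths[:, k - 1], 0.0).mean() / k
              for k in range(1, n + 1))               # sum_k E(S_k^+)/k
    print(lhs, rhs)   # the two estimates should agree within Monte Carlo error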

Another interesting quantity in the development was the "number of strictly positive terms" in the random walk. We shall treat this by combinatorial methods as an antidote to the analytic skulduggery above. Let us define

$\nu_n(\omega)$ = the number of $k$ in $N_n$ such that $S_k(\omega) > 0$;
$\nu_n'(\omega)$ = the number of $k$ in $N_n$ such that $S_k(\omega) \le 0$.

For easy reference let us repeat two previous definitions together with two new ones below, for $n \in N^0$:

$M_n(\omega) = \max_{0 \le j \le n} S_j(\omega); \quad L_n(\omega) = \min\{j \in N_n^0 : S_j(\omega) = M_n(\omega)\};$
$M_n'(\omega) = \min_{0 \le j \le n} S_j(\omega); \quad L_n'(\omega) = \max\{j \in N_n^0 : S_j(\omega) = M_n'(\omega)\}.$

Since the reversal given in (3) is a 1-to-1 measure-preserving mapping that leaves $S_n$ unchanged, it is clear that for each $\Lambda \in \mathscr{F}_{\infty}$, the measures on $\mathscr{B}^1$ below are equal:

(14) $\mathscr{P}\{\Lambda; S_n \in \cdot\} = \mathscr{P}\{p_n\Lambda; S_n \in \cdot\}.$

Lemma. For each $n \in N^0$, the two random vectors

$(L_n, S_n)$ and $(n - L_n', S_n)$

have the same distribution.

PROOF. This will follow from (14) if we show that

(15) $\forall k \in N_n^0: \quad p_n^{-1}\{L_n = k\} = \{L_n' = n - k\}.$

Now for $p_n\omega$ the first $n + 1$ partial sums $S_j(p_n\omega)$, $j \in N_n^0$, are

$0,\ \omega_n,\ \omega_n + \omega_{n-1},\ \ldots,\ \omega_n + \cdots + \omega_{n-j+1},\ \ldots,\ \omega_n + \cdots + \omega_1,$

which is the same as

$S_n(\omega) - S_n(\omega),\ S_n(\omega) - S_{n-1}(\omega),\ \ldots,\ S_n(\omega) - S_{n-j}(\omega),\ \ldots,\ S_n(\omega) - S_0(\omega),$

from which (15) follows by inspection.

Theorem 8.5.2. For each $n \in N^0$, the random vectors

(16) $(L_n, S_n)$ and $(\nu_n, S_n)$

have the same distribution; and the random vectors

(16') $(L_n', S_n)$ and $(\nu_n', S_n)$

have the same distribution.
PROOF. For $n = 0$ there is nothing to prove; for $n = 1$, the assertion about (16) is trivially true since $\{L_1 = 0\} = \{S_1 \le 0\} = \{\nu_1 = 0\}$; similarly for (16'). We shall prove the general case by simultaneous induction, supposing that both assertions have been proved when $n$ is replaced by $n - 1$. For each $k \in N_{n-1}^0$ and $y \in \mathscr{R}^1$, let us put

$G(y) = \mathscr{P}\{L_{n-1} = k; S_{n-1} \le y\}, \quad H(y) = \mathscr{P}\{\nu_{n-1} = k; S_{n-1} \le y\}.$

Then the induction hypothesis implies that $G = H$. Since $X_n$ is independent of $\mathscr{F}_{n-1}$ and so of the vector $(L_{n-1}, S_{n-1})$, we have for each $x \in \mathscr{R}^1$:

(17) $\mathscr{P}\{L_{n-1} = k; S_n \le x\} = \int_{-\infty}^{\infty} F(x - y)\, dG(y) = \int_{-\infty}^{\infty} F(x - y)\, dH(y) = \mathscr{P}\{\nu_{n-1} = k; S_n \le x\},$

where $F$ is the common d.f. of each $X_n$. Now observe that on the set $\{S_n \le 0\}$ we have $L_n = L_{n-1}$ by definition, hence if $x \le 0$:

(18) $\{\omega : L_n(\omega) = k; S_n(\omega) \le x\} = \{\omega : L_{n-1}(\omega) = k; S_n(\omega) \le x\}.$

On the other hand, on the set $\{S_n \le 0\}$ we have $\nu_{n-1} = \nu_n$, so that if $x \le 0$,

(19) $\{\omega : \nu_n(\omega) = k; S_n(\omega) \le x\} = \{\omega : \nu_{n-1}(\omega) = k; S_n(\omega) \le x\}.$

Combining (17) with (18) and (19), we obtain

(20) $\forall k \in N_n^0,\ x \le 0: \quad \mathscr{P}\{L_n = k; S_n \le x\} = \mathscr{P}\{\nu_n = k; S_n \le x\}.$

Next if $k \in N_n^0$ and $x \ge 0$, then by similar arguments:

$\{\omega : L_n'(\omega) = n - k; S_n(\omega) > x\} = \{\omega : L_{n-1}'(\omega) = n - k; S_n(\omega) > x\},$
$\{\omega : \nu_n'(\omega) = n - k; S_n(\omega) > x\} = \{\omega : \nu_{n-1}'(\omega) = n - k; S_n(\omega) > x\}.$

Using (16') when $n$ is replaced by $n - 1$, we obtain the analogue of (20):

(21) $\forall k \in N_n^0,\ x \ge 0: \quad \mathscr{P}\{L_n' = n - k; S_n > x\} = \mathscr{P}\{\nu_n' = n - k; S_n > x\}.$

The left members in (21) and (22) below are equal by the lemma above, while the right members are trivially equal since $\nu_n + \nu_n' = n$:

(22) $\forall k \in N_n^0,\ x \ge 0: \quad \mathscr{P}\{L_n = k; S_n > x\} = \mathscr{P}\{\nu_n = k; S_n > x\}.$

Combining (20) and (22) for $x = 0$, we have

(23) $\forall k \in N_n^0: \quad \mathscr{P}\{L_n = k\} = \mathscr{P}\{\nu_n = k\};$

subtracting (22) from (23), we obtain the equation in (20) for $x > 0$; hence it is true for every $x$, proving the assertion about (16). Similarly for (16'), and the induction is complete.
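The equidistribution of $L_n$ and $\nu_n$ asserted in (16) is easy to watch empirically; in the minimal sketch below the normal step law, $n = 8$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, trials = 8, 100_000

    paths = np.cumsum(rng.standard_normal((trials, n)), axis=1)
    padded = np.hstack([np.zeros((trials, 1)), paths])   # prepend S_0 = 0

    L = padded.argmax(axis=1)        # first k in {0,...,n} with S_k = M_n
    nu = (paths > 0).sum(axis=1)     # number of k in {1,...,n} with S_k > 0

    for k in range(n + 1):           # the two empirical laws should coincide
        print(k, (L == k).mean(), (nu == k).mean())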

As an immediate consequence of Theorem 8.5.2, the obvious relation (10) is translated into the by-no-means obvious relation (24) below:

Theorem 8.5.3. We have for $k \in N_n^0$:

(24) $\mathscr{P}\{\nu_n = k\} = \mathscr{P}\{\nu_k = k\}\mathscr{P}\{\nu_{n-k} = 0\}.$

If the common distribution of each $X_n$ is symmetric with no atom at zero, then

(25) $\forall k \in N_n^0: \quad \mathscr{P}\{\nu_n = k\} = (-1)^n \binom{-\frac{1}{2}}{k}\binom{-\frac{1}{2}}{n-k}.$

PROOF. Let us denote the number on the right side of (25), which is equal to

$\frac{1}{2^{2n}} \binom{2k}{k}\binom{2n - 2k}{n - k},$

by $a_n(k)$. Then for each $n \in N$, $\{a_n(k),\ k \in N_n^0\}$ is a well-known probability distribution. For $n = 1$ we have

$\mathscr{P}\{\nu_1 = 0\} = \mathscr{P}\{\nu_1 = 1\} = \tfrac{1}{2} = a_1(0) = a_1(1),$

so that (25) holds trivially. Suppose now that it holds when $n$ is replaced by $n - 1$; then for $k \in N_{n-1}$ we have by (24):

$\mathscr{P}\{\nu_n = k\} = (-1)^k \binom{-\frac{1}{2}}{k} (-1)^{n-k} \binom{-\frac{1}{2}}{n-k} = a_n(k).$

It follows that

$\mathscr{P}\{\nu_n = 0\} + \mathscr{P}\{\nu_n = n\} = 1 - \sum_{k=1}^{n-1} \mathscr{P}\{\nu_n = k\} = 1 - \sum_{k=1}^{n-1} a_n(k) = a_n(0) + a_n(n).$

Under the hypotheses of the theorem, it is clear by considering the dual random walk that the two terms in the first member above are equal; since the two terms in the last member are obviously equal, they are all equal and the theorem is proved.
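For a symmetric step distribution with a continuous d.f. the hypotheses of (25) are satisfied, and the empirical law of $\nu_n$ can be compared with $a_n(k)$ directly; in the minimal sketch below the normal step law, $n = 6$, and the sample size are illustrative assumptions.

    import numpy as np
    from math import comb

    rng = np.random.default_rng(4)
    n, trials = 6, 400_000

    paths = np.cumsum(rng.standard_normal((trials, n)), axis=1)
    nu = (paths > 0).sum(axis=1)

    for k in range(n + 1):
        # a_n(k) = C(2k, k) C(2n-2k, n-k) / 4^n from (25)
        a_nk = comb(2 * k, k) * comb(2 * n - 2 * k, n - k) / 4 ** n
        print(k, (nu == k).mean(), a_nk)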

Stirling's formula and elementary calculus now lead to the famous "arcsine law", first discovered by Paul Lévy (1939) for Brownian motion.

Theorem 8.5.4. If the common distribution of a stationary independent process is symmetric, then we have

$\forall x \in [0, 1]: \quad \lim_{n\to\infty} \mathscr{P}\left\{\frac{\nu_n}{n} \le x\right\} = \frac{2}{\pi} \arcsin \sqrt{x} = \frac{1}{\pi} \int_0^x \frac{du}{\sqrt{u(1-u)}}.$

This limit theorem also holds for an independent, not necessarily stationary process, in which each $X_n$ has mean 0 and variance 1 and such that the classical central limit theorem is applicable. This can be proved by the same method (invariance principle) as Theorem 7.3.3.
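The arcsine law is easily visualized by simulation; in the minimal sketch below the normal step law, the horizon $n$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n, trials = 1000, 10_000

    paths = np.cumsum(rng.standard_normal((trials, n)), axis=1)
    frac = (paths > 0).sum(axis=1) / n        # nu_n / n for each sample path

    for x in (0.1, 0.25, 0.5, 0.75, 0.9):
        empirical = (frac <= x).mean()
        limit = (2 / np.pi) * np.arcsin(np.sqrt(x))
        print(x, empirical, limit)            # should be close for large n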

EXERCISES

1. Derive (3) of Sec. 8.4 by considering

$\mathscr{E}\{r^{\alpha} e^{itS_{\alpha}}\} = \sum_{n=1}^{\infty} r^n \left[\int_{\{\alpha > n-1\}} e^{itS_n}\, d\mathscr{P} - \int_{\{\alpha > n\}} e^{itS_n}\, d\mathscr{P}\right].$

*2. Under the conditions of Theorem 8.4.6, show that

$\mathscr{E}(S_{\alpha}) = -\lim_{n\to\infty} \int_{\{\alpha > n\}} S_n\, d\mathscr{P}.$

3. Find an expression for the Laplace transform of $S_{\alpha}$. Is the corresponding formula for the Fourier transform valid?
*4. Prove (13) of Sec. 8.5 by differentiating (7) there; justify the steps.
5. Prove that

$\sum_{n=0}^{\infty} r^n \mathscr{P}[M_n = 0] = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n} \mathscr{P}[S_n \le 0]\right\}$

and deduce that

$\mathscr{P}[M = 0] = \exp\left\{-\sum_{n=1}^{\infty} \frac{1}{n} \mathscr{P}[S_n > 0]\right\}.$

6. If $M < \infty$ a.e., then it has an infinitely divisible distribution.


7. Prove that

$\sum_{n=0}^{\infty} r^n \mathscr{P}\{L_n = 0; S_n = 0\} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n}{n} \mathscr{P}[S_n = 0]\right\}.$

[HINT: One way to deduce this is to switch to Laplace transforms in (6) of Sec. 8.5 and let $\lambda \to \infty$.]
8. Prove that the left member of the equation in Exercise 7 is equal to

$\left[1 - \sum_{n=1}^{\infty} r^n \mathscr{P}\{\alpha = n; S_n = 0\}\right]^{-1},$

where $\alpha = \alpha_{[0,\infty)}$; hence prove its convergence.
*9. Prove that

$\sum_{n=1}^{\infty} \frac{1}{n} \mathscr{P}[S_n > 0] < \infty$

implies $\mathscr{P}(M < \infty) = 1$ as follows. From the Laplace transform version of (7) of Sec. 8.5, show that for $\lambda > 0$,

$\lim_{r\uparrow1} (1 - r) \sum_{n=0}^{\infty} r^n \mathscr{E}\{e^{-\lambda M_n}\}$

exists and is finite, say $= \chi(\lambda)$. Now use proposition (E) in Sec. 8.4 and apply the convergence theorem for Laplace transforms (Theorem 6.6.3).
*10. Define a sequence of r.v.'s $\{Y_n,\ n \in N^0\}$ as follows:

$Y_0 = 0, \quad Y_{n+1} = (Y_n + X_{n+1})^+, \quad n \in N^0,$

where $\{X_n,\ n \in N\}$ is a stationary independent sequence. Prove that for each $n$, $Y_n$ and $M_n$ have the same distribution. [This approach is useful in queuing theory; a simulation sketch is given after these exercises.]
11. If $-\infty \le \mathscr{E}(X) < 0$, then

$\mathscr{E}(X) = \mathscr{E}(-V^-), \quad \text{where } V = \sup_{1 \le j < \infty} S_j.$

[HINT: If $V_n = \max_{1 \le j \le n} S_j$, then

$\mathscr{E}(e^{itV_n^+}) + \mathscr{E}(e^{-itV_n^-}) - 1 = \mathscr{E}(e^{itV_n}) = \mathscr{E}(e^{itV_{n-1}^+}) f(t);$

let $n \to \infty$, then $t \downarrow 0$. For the case $\mathscr{E}(X) = -\infty$, truncate. This result is due to S. Port.]
*12. If $\mathscr{P}\{\alpha_{(0,\infty)} < \infty\} < 1$, then $\nu_n \to \nu$, $L_n \to L$, both limits being finite a.e. and having the generating functions:

$\mathscr{E}\{r^{\nu}\} = \mathscr{E}\{r^{L}\} = \exp\left\{\sum_{n=1}^{\infty} \frac{r^n - 1}{n} \mathscr{P}[S_n > 0]\right\}.$

[HINT: Consider $\lim_{m\to\infty} \sum_{n=0}^{\infty} \mathscr{P}\{\nu_m = n\} r^n$ and use (24) of Sec. 8.5.]

13. If $\mathscr{E}(X) = 0$, $\mathscr{E}(X^2) = \sigma^2$, $0 < \sigma^2 < \infty$, then as $n \to \infty$ we have

$\mathscr{P}[\nu_n = 0] \sim \frac{c}{\sqrt{n}},$

where

$c = \frac{1}{\sqrt{\pi}} \exp\left\{\sum_{n=1}^{\infty} \frac{1}{n}\left[\mathscr{P}(S_n \le 0) - \frac{1}{2}\right]\right\}.$
[HINT: Consider

$\lim_{r\uparrow1} (1 - r)^{1/2} \sum_{n=0}^{\infty} r^n \mathscr{P}[\nu_n = 0]$

as in the proof of Theorem 8.4.6, and use the following lemma: if $p_n$ is a decreasing sequence of positive numbers such that

$\sum_{k=1}^{n} p_k \sim 2n^{1/2},$

then $p_n \sim n^{-1/2}$.]
*14. Prove Theorem 8.5.4.
15. For an arbitrary random walk, we have

$\sum_{n=1}^{\infty} \frac{1}{n} \mathscr{P}\{S_n = 0\} < \infty.$

[HINT: Half of the result is given in Exercise 18 of Sec. 8.3. For the remaining case, apply proposition (C) in the 0-form to equation (5) of Sec. 8.5 with $L_n$ replaced by $\nu_n$ and $t = 0$. This result is due to D. L. Hanson and M. Katz.]
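Returning to Exercise 10: the recursion $Y_{n+1} = (Y_n + X_{n+1})^+$ is the Lindley recursion of queuing theory, and its agreement in distribution with $M_n$ can be watched empirically. In the minimal sketch below the uniform step law on $(-1.5, 1)$, the horizon $n = 20$, and the sample size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    n, trials = 20, 100_000

    steps = rng.uniform(-1.5, 1.0, size=(trials, n))

    y = np.zeros(trials)                     # Y_0 = 0
    for j in range(n):                       # iterate the Lindley recursion
        y = np.maximum(y + steps[:, j], 0.0)

    # M_n = max(0, S_1, ..., S_n) from the same step arrays
    m = np.maximum(np.cumsum(steps, axis=1).max(axis=1), 0.0)

    # compare a few quantiles of the two empirical distributions
    print(np.quantile(y, [0.25, 0.5, 0.75, 0.9]))
    print(np.quantile(m, [0.25, 0.5, 0.75, 0.9]))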

Bibliographical Note

For Sec. 8.3, see

K. L. Chung and W. H. J. Fuchs, On the distribution of values of sums of random variables, Mem. Am. Math. Soc. 6 (1951), 1-12.

K. L. Chung and D. Ornstein, On the recurrence of sums of random variables, Bull. Am. Math. Soc. 68 (1962), 30-32.

Theorems 8.4.1 and 8.4.2 are due to G. Baxter; see

G. Baxter, An analytical approach to finite fluctuation problems in probability, J. d'Analyse Math. 9 (1961), 31-70.

Historically, recurrence problems for Bernoullian random walks were first discussed by Pólya in 1921.

The rest of Sec. 8.4, as well as Theorem 8.5.1, is due largely to F. L. Spitzer; see

F. L. Spitzer, A combinatorial lemma and its application to probability theory, Trans. Am. Math. Soc. 82 (1956), 323-339.

F. L. Spitzer, A Tauberian theorem and its probability interpretation, Trans. Am. Math. Soc. 94 (1960), 150-169.

See also his book below (which, however, treats only the lattice case):

F. L. Spitzer, Principles of random walk, D. Van Nostrand Co., Inc., Princeton, N.J., 1964.

The latter part of Sec. 8.5 is based on

W. Feller, On combinatorial methods in fluctuation theory, Surveys in Probability and Statistics (the Harald Cramér volume), 75-91, Almquist & Wiksell, Stockholm, 1959.

Theorem 8.5.3 is due to E. S. Andersen. Feller [13] also contains much material on random walks.
