MARTINGALES AND MARKOV CHAINS
Solved Exercises and Elements of Theory

Paolo Baldi
Laurent Mazliak
Pierre Priouret

CHAPMAN & HALL/CRC
A CRC Press Company
Boca Raton  London  New York  Washington, D.C.

Library of Congress Cataloging-in-Publication Data

Baldi, Paolo, 194–
  [Martingales et chaînes de Markov. English]
  Martingales and Markov chains : solved exercises and elements of theory / Paolo Baldi, Laurent Mazliak, Pierre Priouret.
    p. cm.
  Includes bibliographical references and index.
  ISBN 1-58488-329-4 (alk. paper)
  1. Martingales (Mathematics) 2. Martingales (Mathematics), Problems, exercises, etc. 3. Markov processes. 4. Markov processes, Problems, exercises, etc. I. Mazliak, Laurent. II. Priouret, P. (Pierre), 1939– . III. Title.
  QA274.5 .B3513 2002
  519.2'87 dc21    2002018810

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying.

Direct all inquiries to CRC Press LLC, 2000 N.W.
Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.crcpress.com

© 1998 by Hermann, Éditeurs des Sciences et des Arts, Paris (French edition)
© 2002 by Chapman & Hall/CRC (English edition)

No claim to original U.S. Government works
International Standard Book Number 1-58488-329-4
Library of Congress Card Number 2002018810
Printed in the United States of America
Printed on acid-free paper

Contents

Preface

1 Conditional Expectations
  Introduction
  Definition and First Properties
  Conditional Expectations and Conditional Laws
  Exercises
  Solutions

2 Stochastic Processes
  General Facts
  Stopping Times
  Exercises
  Solutions

3 Martingales
  First Definitions
  First Properties
  The Stopping Theorem
  Maximal Inequalities
  Square Integrable Martingales
  Convergence Theorems
  Regular Martingales
  Exercises
  Problems
  Solutions

4 Markov Chains
  Transition Matrices, Markov Chains
  Construction and Existence
  Computations on the Canonical Chain
  Potential Operators
  Passage Problems
  Recurrence, Transience
  Recurrent Irreducible Chains
  Periodicity
  Exercises
  Problems
  Solutions

References

Index

Preface

This is primarily a book of exercises with solutions about discrete time stochastic processes, more precisely on martingales and Markov chains with a countable state space. These are two of the most important instances of processes, and being familiar with them is an unavoidable step toward the study of more complex situations. We are therefore concerned with notions of stochastic processes such as are treated at the end of undergraduate or at the beginning of graduate studies.
We resolved to give rather detailed solutions, mostly for the first exercises of each chapter.

In this book the reader can find:

1) A section of elementary exercises essentially requiring use of basic theory, followed by others in which the reader is prompted to more initiative.

2) A section of problems; often these come from examinations at the end of the undergraduate level (maîtrise) at the Université Pierre et Marie Curie, Paris 6. Often they open the perspective toward specific applications.

It seemed necessary to give a substantial recall of the elements of the theory in order to set precisely the context. The references to the theory are very frequent in the solution of the exercises; these are either to a theorem (whose reference is given by a chapter number followed by the statement number: "Theorem 4.25" suggests, therefore, to look at Theorem 25 of Chapter 4) or by the mark • followed by a number, pointing to the essential elements of the theory section.

It also appeared necessary, before starting the core of the subject, to write two short chapters recalling the basic notions about conditional expectations and stochastic processes. We do not claim to be thorough; the aim is only to state the essentials for the rest of the book. On the other hand we assume the reader is familiar with the basic notions of measure theory, integration, and probability.

Exercises and problems are, in principle, sorted in increasing order of difficulty, but this rule, however subjective, is not unbreakable. We found it particularly advisable to put in a sequence some exercises having strong links between them but not necessarily of the same difficulty.

Since some of the readers might choose to study Markov chains without knowing about martingales, we have marked with a beacon those exercises that make explicit use of martingales. The problems, conversely, very often need both notions and
we did not deem it necessary to give the same indication.

Paolo Baldi
Laurent Mazliak
Pierre Priouret

CHAPTER 1

Conditional Expectations

Introduction

•1.1 Let us begin with a simple, albeit typical, case of conditioning. On a probability space (Ω, 𝓐, P), let X be a bounded real random variable (r.v.) and T a r.v. with values in E = {t_1, t_2, ...} such that P(T = t_k) > 0 for every k ≥ 1. We define, for every Borel set A ⊂ R and k ≥ 1,

    n(t_k, A) = P(X ∈ A | T = t_k) = P(X ∈ A, T = t_k) / P(T = t_k).

For every fixed k, A ↦ n(t_k, A) is a probability on R which is called the conditional law of X given T = t_k. The conditional expectation of X given T = t_k is then defined as

    h(t_k) = E(X | T = t_k) = ∫_R x n(t_k, dx).

This is a key notion, as we shall see in the rest of this book. This explains why it is important to extend it to the case of a general r.v. T. This is the aim of this chapter: we are going to characterize h(t_k) = E(X | T = t_k) by a formula having a meaning in the general case, i.e., when the r.v. T takes its values in a general measurable space.

We remark that, for every B ⊂ E,

    E(h(T) 1_{T ∈ B}) = Σ_{t_k ∈ B} h(t_k) P(T = t_k) = Σ_{t_k ∈ B} E(X | T = t_k) P(T = t_k)
                      = Σ_{t_k ∈ B} E(X 1_{T = t_k}) = E(X 1_{T ∈ B}) = ∫_{{T ∈ B}} X dP.

The previous relation thus states that h(T) is a σ(T)-measurable bounded r.v. such that its integral and the integral of X coincide on events belonging to σ(T). This property characterizes the conditional expectation.

Definition and First Properties

•1.2 Let (Ω, 𝓐, P) be a probability space.

Theorem and Definition 1.1 Let 𝓑 ⊂ 𝓐 be a sub-σ-algebra and X a real integrable (resp. positive) r.v. Then there exists a real integrable (resp. positive) r.v. Y, 𝓑-measurable and unique up to an equivalence, such that

    ∫_A Y dP = ∫_A X dP    for every A ∈ 𝓑.    (1.1)

Such a r.v. Y is called the conditional expectation of X given 𝓑 and is written E(X | 𝓑) or E^𝓑(X).

•1.3 We shall often make use of the following fact. Let 𝓑 ⊂ 𝓐 be a sub-σ-algebra (thus, possibly, 𝓑 = 𝓐). If X and Y are integrable (resp.
positive) 𝓑-measurable r.v.'s such that

    ∫_A X dP ≥ ∫_A Y dP    for every A ∈ 𝓑,    (1.2)

then X ≥ Y a.s. This property already proves the uniqueness in Theorem-Definition 1.1.

•1.4 The relation (1.1) can also be written E(YZ) = E(XZ), where Z = 1_A. This equality extends to all finite linear combinations of indicators of sets in 𝓑 and, by the usual techniques of integration theory, to more general classes of functions. More precisely, if X is an integrable (resp. positive) r.v. and Y = E(X | 𝓑), we have, for every bounded (resp. positive) 𝓑-measurable r.v. Z,

    E(YZ) = E(XZ).    (1.3)

•1.5 We state now the main properties of the conditional expectation. In what follows 𝓑, 𝓒 are sub-σ-algebras of 𝓐 and X, Y, X_n r.v.'s which are positive or satisfy suitable integrability conditions.

(i) E^𝓑(1) = 1 a.s.; E[E^𝓑(X)] = E(X).
(ii) E^𝓑(aX + bY) = aE^𝓑(X) + bE^𝓑(Y) a.s.
(iii) If X ≤ Y, then E^𝓑(X) ≤ E^𝓑(Y) a.s.
(iv) If f is a real convex function, f(E^𝓑(X)) ≤ E^𝓑(f(X)) a.s. (Jensen's inequality). In particular |E^𝓑(X)|^p ≤ E^𝓑(|X|^p) a.s. for every p ≥ 1.
(v) If Y is 𝓑-measurable, E^𝓑(XY) = Y E^𝓑(X) a.s.; in particular E^𝓑(Y) = Y a.s.
(vi) If 𝓒 ⊂ 𝓑, E^𝓒(X) = E^𝓒[E^𝓑(X)] a.s.
(vii) If X_n ≥ 0 and X_n ↑ X a.s. (i.e., X_n ≤ X_{n+1} a.s. for every n and lim_{n→∞} X_n = X a.s.), then E^𝓑(X_n) ↑ E^𝓑(X) a.s. (Beppo Levi's theorem for conditional expectations).
(viii) If X_n ≥ 0, E^𝓑(lim inf_{n→∞} X_n) ≤ lim inf_{n→∞} E^𝓑(X_n) a.s. (Fatou's lemma for conditional expectations).
(ix) If X_n →_{n→∞} X a.s. and, for every n, |X_n| ≤ Z ∈ L^1, then E^𝓑(X_n) →_{n→∞} E^𝓑(X) a.s. (Lebesgue's theorem for conditional expectations).

•1.6 One can see immediately that, if X = X′ a.s., then E^𝓑(X) = E^𝓑(X′) a.s. The conditional expectation is thus actually defined on equivalence classes of r.v.'s. Having stressed this point, •1.5 (iv) states that the conditional expectation is an operator L^p → L^p, p ≥ 1, which is a contraction (recall that L^p is a space that is not formed of r.v.'s, but of equivalence classes of r.v.'s). The range of L^p through the operator X ↦ E^𝓑(X) will be written L^p(𝓑): it is the subspace of L^p
that is formed by those equivalence classes of r.v.'s that contain at least one 𝓑-measurable element.

•1.7 The case p = 2 deserves particular attention. Relation (1.3) means that, if X ∈ L^2 and Y = E^𝓑(X), then X − Y ⊥ Z for every bounded 𝓑-measurable Z. By density we have X − Y ⊥ Z for every Z ∈ L^2(𝓑). Thus Y is the orthogonal projection of X on L^2(𝓑). Therefore, by a classical characterization of the orthogonal projection,

    E[(X − Y)^2] = inf_{Z ∈ L^2(𝓑)} E[(X − Z)^2]

and, if X ∈ L^2, E^𝓑(X) is the best L^2 approximation of X by a 𝓑-measurable r.v.

•1.8 Let T be a r.v. with values in the measurable space (E, 𝓔) and σ(T) the smallest σ-algebra on Ω making T measurable. We recall that a real r.v. Z is σ(T)-measurable if and only if there exists a measurable function h on (E, 𝓔) such that Z = h(T). Thus, if 𝓑 = σ(T), there exists a function h, measurable on (E, 𝓔), such that E(X | 𝓑) = h(T) a.s. In this case the computation of the conditional expectation is reduced to the determination of h. One writes also E(X | T) instead of E(X | σ(T)) and, with a suggestive notation,

    h(t) = E(X | T = t).

One should remark that h is only defined a.s. with respect to the law of T. Thus, thanks to •1.4, for an integrable (resp. positive) r.v. X, h(T) = E(X | T) a.s. if and only if, for every bounded (resp. positive) measurable function g,

    E[h(T)g(T)] = E[Xg(T)].    (1.4)

If X ∈ L^2, the results of •1.7 allow characterizing the function h through the property

    E[(X − h(T))^2] = inf{E[(X − g(T))^2]; g such that g(T) ∈ L^2}.

•1.9 If 𝓑 ⊂ 𝓐 is a sub-σ-algebra and A ∈ 𝓐, let us set P^𝓑(A) = P(A | 𝓑) = E(1_A | 𝓑). P^𝓑(A) is called the conditional probability of A given 𝓑. It is worth pointing out that, thanks to •1.5 (ii) and (vii), it holds, for every sequence (A_n)_{n≥1} of disjoint events,

    P^𝓑(∪_n A_n) = Σ_n P^𝓑(A_n)    a.s.,

and also P^𝓑(A^c) = 1 − P^𝓑(A) a.s. Notice however that P^𝓑(A) is a r.v.
that is only defined up to a negligible set: because of this, this notion should be handled with a certain care.

•1.10 Let us state a first link between the theory developed so far and the Introduction. Let (A_n)_{n≥1} be a partition of Ω with A_n ∈ 𝓐 and 𝓑 = σ(A_n, n ≥ 1). We recall that a r.v. Y is 𝓑-measurable if and only if Y = Σ_{n≥1} a_n 1_{A_n}, a_n ∈ R. Let X be a positive (resp. integrable) r.v. and Y = E^𝓑(X). Then Y = Σ_{n≥1} α_n 1_{A_n} and, by the relation E(X 1_{A_n}) = E(1_{A_n} Y) = α_n P(A_n), we get α_n = P(A_n)^{-1} E(X 1_{A_n}) if P(A_n) > 0, whereas α_n can be chosen arbitrarily if P(A_n) = 0. In particular, if P(A_n) > 0 for every n, we have, for every A ∈ 𝓐,

    P^𝓑(A) = Σ_{n≥1} (P(A ∩ A_n) / P(A_n)) 1_{A_n}.

•1.11 Let us recall that a r.v. X is independent of the σ-algebra 𝓑 if X is independent of every 𝓑-measurable r.v. Y. We have then, for every positive or bounded function f,

    E(f(X) | 𝓑) = E(f(X))    a.s.

Conversely we can prove, using the usual methods of integration theory, the following criterion:

(i) If, for every bounded function f, E^𝓑(f(X)) = constant a.s., then X and 𝓑 are independent.
(ii) If X takes values in R^d and if, for every t ∈ R^d, E^𝓑(e^{i⟨t,X⟩}) = constant a.s., then X and 𝓑 are independent.

Let us point out that, thanks to •1.5 (i), these constants are equal, respectively, to E(f(X)) and E(e^{i⟨t,X⟩}). In particular, if X and T are independent, we have E(f(X) | T) = E(f(X)) a.s. This relation admits the following very useful extension.

Lemma 1.2 Let 𝓑 be a sub-σ-algebra of 𝓐, T a 𝓑-measurable r.v. with values in (E, 𝓔) and X a r.v. independent of 𝓑 with values in (F, 𝓕). Let h be a measurable function, positive, or such that h(X, T) is integrable. Then

    E[h(X, T) | 𝓑] = H(T),    where H(t) = E(h(X, t)).

The proof is very simple. We assume first that h is positive. Let Z be a positive 𝓑-measurable r.v. Let us note μ_X and μ_{Z,T} the laws of X and (Z, T), respectively. It holds from one side

    H(t) = ∫ h(x, t) dμ_X(x)

and, from the other, the r.v.'s X and (Z, T) being independent,

    E(Z h(X, T)) = ∫ z h(x, t) dμ_X(x) dμ_{Z,T}(z, t)
                 = ∫ z (∫ h(x, t) dμ_X(x)) dμ_{Z,T}(z, t) = ∫ z H(t) dμ_{Z,T}(z, t) = E(Z H(T)).
Then it is easy to extend the proof to the case of an integrable function h. If 𝓑 = σ(T), the lemma states that, if X and T are independent, then

    E[h(X, T) | T = t] = E[h(X, t)].    (1.5)

Conditional Expectations and Conditional Laws

•1.12 We now establish the link between Theorem-Definition 1.1 and conditional laws. Let T be a r.v. with values in (E, 𝓔), ν its law, and X a r.v. with values in (F, 𝓕). A family of probabilities on (F, 𝓕), (n(t, dx))_{t∈E}, is a conditional law of X given T if:

(i) for every A ∈ 𝓕, t ↦ n(t, A) is 𝓔-measurable;
(ii) for every A ∈ 𝓕 and B ∈ 𝓔,

    P(X ∈ A, T ∈ B) = ∫_B n(t, A) ν(dt).

We deduce that, if f and g are positive measurable functions,

    E(f(X)g(T)) = ∫_E ∫_F f(x) g(t) n(t, dx) ν(dt).

Thus, if we set h(t) = ∫_F f(x) n(t, dx), then

    E(f(X)g(T)) = ∫_E h(t) g(t) ν(dt) = E(h(T)g(T)),

which means exactly that

    E(f(X) | T) = ∫_F f(x) n(T, dx)    a.s.

In particular, for f(x) = x, it holds a.s.

    E(X | T) = ∫_F x n(T, dx).

Thus the conditional expectation appears to be the mean of the conditional law. It is easy to see that the previous formula extends to the case f(X) (resp. X) integrable.

Let us assume, in particular, that the couple (T, X) has a density h with respect to a product measure μ ⊗ ρ on E × F, μ and ρ being σ-finite. Let us set

    h̄(t) = ∫_F h(t, x) ρ(dx),

the density of T, and Q = {t; h̄(t) = 0}. Of course P(T ∈ Q) = 0. We set

    h(x; t) = h(t, x) / h̄(t)    if t ∉ Q,
    h(x; t) = any arbitrary density    if t ∈ Q.

It can be shown easily that n(t, dx) = h(x; t) ρ(dx) is a conditional law of X given T. Thus we have, for every positive Borel function f,

    E(f(X) | T) = ∫_F f(x) h(x; T) ρ(dx),

which allows computing explicitly the conditional expectation in many concrete situations.

Exercises

Exercise 1.1 Let X and Y be independent r.v.'s with a B(1, p) law, i.e., Bernoulli with parameter p. We set Z = 1_{X+Y=0} and 𝓖 = σ(Z). Compute E(X | 𝓖) and E(Y | 𝓖). Are these r.v.'s still independent?

Exercise 1.2 Let X be a square integrable real r.v. on (Ω, 𝓕, P) and 𝓖 ⊂ 𝓕 a sub-σ-algebra. Show that, if we set Var(X | 𝓖) = E[(X − E(X | 𝓖))^2
| 𝓖], then

    Var(X) = E(Var(X | 𝓖)) + Var(E(X | 𝓖)).

Exercise 1.3 Let (Ω, 𝓕, P) be a probability space and 𝓖 ⊂ 𝓕 a sub-σ-algebra. For A ∈ 𝓕, consider the event B = {E(1_A | 𝓖) = 0}. Show that B ⊂ A^c a.s.

Exercise 1.4 1) Let X, Y, Z be r.v.'s with values in (E, 𝓔) such that the couples (X, Z) and (Y, Z) have the same law (in particular X and Y have a common law μ).

1a) Show that, if f is a real positive function (resp. such that f(X) is integrable), then E(f(X) | Z) = E(f(Y) | Z) a.s.

1b) Let g be a measurable application from (E, 𝓔) to R that is positive (resp. such that g(Z) is integrable), and set h_1(X) = E(g(Z) | X), h_2(Y) = E(g(Z) | Y). Show that h_1 = h_2 μ-a.s.

2) Let T_1, ..., T_n be real integrable r.v.'s, independent and having the same law. We set T = T_1 + ... + T_n.

2a) Show that E(T_1 | T) = T/n.

2b) Compute E(T | T_1).

Exercise 1.5 (Conditional laws of Gaussian vectors) 1) Let X be a r.v. with values in R^n, of the form X = φ(Y) + Z, where Y and Z are independent. Show that the conditional law of X given Y = y coincides with the law of Z + φ(y).

2) Let X, Y be Gaussian vectors taking values in R^k and R^p, respectively. We assume that their joint law on (R^{k+p}, 𝓑(R^{k+p})) is Gaussian with mean and covariance matrix given, respectively, by

    (m_X, m_Y),    ( R_X    R_XY )
                   ( R_YX   R_Y  ),

where R_X, R_Y are the covariance matrices of X and Y, respectively, and where R_XY = E[(X − E(X)) ᵗ(Y − E(Y))] = ᵗR_YX is the k × p matrix of the covariances between the components of X and Y; we assume moreover that R_Y is positive definite (and thus invertible).

2a) Find a k × p matrix A such that the r.v.'s X − AY and Y are independent.

2b) Show that the conditional law of X given Y is Gaussian with mean

    E(X | Y) = m_X + R_XY R_Y^{-1}(Y − m_Y)    (1.7)

and covariance matrix

    R_X − R_XY R_Y^{-1} R_YX.    (1.8)

3) Let X be a signal, with normal law N(0, 1). We assume that we cannot observe the value of X; we only know an observation Y = X + W,
where W is a noise, independent of X and with law N(0, σ²).

3a) Given an observation Y = y, give an estimate of the value X of the signal.

3b) Let us assume σ² = 0.1 and that the value of the observation is Y = 0.55. What is the probability for the signal X to be in the interval [1/4, 3/4]?

Exercise 1.6 In this exercise we give an application of the conditional laws of Gaussian vectors. The reader should first try to understand thoroughly Exercise 1.5.

A) Let σ > 0, a ∈ R^d and C a positive definite (symmetric) d × d matrix. Show that

    (C + σ² a ᵗa)^{-1} = C^{-1} − (σ² / (1 + σ² ᵗa C^{-1} a)) C^{-1} a ᵗa C^{-1}

(remark that ᵗa C^{-1} a = (C^{-1}a, a)).

B) Let (Z_n)_{n≥1} be a sequence of real independent r.v.'s such that Z_n has law N(0, σ_n²), σ_n > 0, and X a real r.v. with law N(0, σ²), σ > 0, independent of the sequence (Z_n)_{n≥1}. We set, for every n ≥ 1,

    Y_n = X + Z_n,    𝓖_n = σ(Y_1, ..., Y_n),    X̂_n = E(X | 𝓖_n).

Intuitively, Y_n is a noisy observation of X. We note Y^n = (Y_1, ..., Y_n).

B1) What is the law of the vector (X, Y_1, ..., Y_n)?

B2) For every y ∈ R^n, show that the conditional law of X given Y^n = y is Gaussian and compute its mean and variance.

B3) Compute X̂_n and E[(X − X̂_n)²].

B4) Show that X̂_n →_{n→+∞} X in L² if and only if Σ_{n≥1} σ_n^{-2} = +∞.

Exercise 1.7 Let T_1, T_2, ... be independent r.v.'s having an exponential law with parameter λ. We set T = T_1 + ... + T_n.

a) Determine the conditional law of T_1 given T and then compute E(T_1 | T).

b) Compute E(T_1² | T) and E(T_1 T_2 | T).

Exercise 1.8 Let (X_n)_{n≥1} be a sequence of independent r.v.'s taking values in R^d, all defined on the same probability space (Ω, 𝓕, P). Let us define S_0 = 0, S_n = X_1 + ... + X_n and 𝓕_n = σ(S_k, k ≤ n). Show that, for every bounded Borel function f: R^d → R,

    E(f(S_n) | 𝓕_{n−1}) = E(f(S_n) | S_{n−1})    (1.9)

and express this quantity in terms of the law μ_n of X_n.

Solutions

E1.1 Observe that the σ-algebra 𝓖 is generated by the partition A_0 = {X + Y = 0}, A_1 = {X + Y ≥ 1}. By •1.10 the r.v. E(X | 𝓖) takes on A_i, i = 0, 1, the value

    α_i = E(X 1_{A_i}) / P(A_i).

But X = 0 on A_0; hence E(X 1_{A_0}) = 0 and α_0 = 0. On the other hand

    X 1_{A_1} = 1_{X=1} 1_{X+Y≥1} = 1_{X=1}.
Therefore

    α_1 = E(1_{X=1}) / P(A_1) = p / (1 − (1 − p)²) = 1 / (2 − p).

Finally

    E(X | 𝓖) = (1 / (2 − p)) 1_{X+Y≥1}.

By symmetry (the right-hand side is symmetric in X and Y) this formula also gives E(Y | 𝓖). Therefore E(X | 𝓖) = E(Y | 𝓖). As a nonconstant r.v. cannot be independent of itself, the two r.v.'s E(X | 𝓖) and E(Y | 𝓖) are not independent.

E1.2 One may write

    X − E(X) = X − E(X | 𝓖) + E(X | 𝓖) − E(X).

Now the r.v. E(X | 𝓖) − E(X) is 𝓖-measurable whereas X − E(X | 𝓖) is orthogonal to every 𝓖-measurable r.v. (•1.7). Therefore

    E[(X − E(X))²] = E[(X − E(X | 𝓖))²] + E[(E(X | 𝓖) − E(X))²] = E(Var(X | 𝓖)) + Var(E(X | 𝓖)).

E1.3 By definition E(1_A | 𝓖) is a 𝓖-measurable r.v., so that B ∈ 𝓖. Therefore, by the definition of conditional expectation,

    E(1_A 1_B) = E(E(1_A | 𝓖) 1_B).    (1.10)

As on B it holds E(1_A | 𝓖) = 0, (1.10) implies E(1_A 1_B) = 0 and thus, since 1_A 1_B is positive, 1_A 1_B = 0 a.s., which is equivalent to B ⊂ A^c a.s.

E1.4 1a) For every measurable positive (resp. bounded) application φ from (E, 𝓔) to R,

    E[φ(Z) E(f(X) | Z)] = E(φ(Z) f(X)) = E(φ(Z) f(Y)) = E[φ(Z) E(f(Y) | Z)],

which implies (using •1.3) that E(f(X) | Z) = E(f(Y) | Z) a.s.

1b) For every measurable positive (resp. bounded) application φ from (E, 𝓔) to R,

    ∫ φ h_1 dμ = E(φ(X) h_1(X)) = E[φ(X) E(g(Z) | X)] = E(φ(X) g(Z))
               = E(φ(Y) g(Z)) = E[φ(Y) E(g(Z) | Y)] = ∫ φ h_2 dμ,

which, again, implies h_1 = h_2, except at most on a μ-null set.

2a) Clearly the r.v.'s (T_1, T), (T_2, T), ..., (T_n, T) have the same law. Thanks to 1a), this produces E(T_1 | T) = ... = E(T_n | T) a.s. Therefore

    nE(T_1 | T) = E(T_1 | T) + ... + E(T_n | T) = E(T_1 + ... + T_n | T) = E(T | T) = T,

where the last relation comes by •1.5 (v). Hence E(T_1 | T) = T/n.

2b) If Z = T_2 + ... + T_n, the r.v.'s T_1 and Z are independent and T = T_1 + Z. Hence

    E(T | T_1) = E(Z + T_1 | T_1) = E(Z) + T_1 = (n − 1)E(T_1) + T_1,

where we used the properties •1.5 (ii), (v) and •1.11.

This exercise gives an example of computation of a conditional expectation without previous determination of the conditional distribution (but using instead the definition and the properties of •1.5).
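The identity E(T_1 | T) = T/n just proved uses no assumption on the common law of the T_i, so it can be sanity-checked by brute force on any small i.i.d. model. The sketch below (our own toy illustration, not from the book) takes n = 2 fair dice and computes, on each atom {T = s}, the conditional mean of T_1 as a plain average over the outcomes of the atom:

```python
from fractions import Fraction
from itertools import product

# Brute-force check of E(T1 | T) = T/n for n = 2 i.i.d. fair dice.
n = 2
for s in range(2, 13):
    # the atom {T = s} of sigma(T); all outcomes are equally likely
    outcomes = [w for w in product(range(1, 7), repeat=n) if sum(w) == s]
    cond_mean_T1 = sum(Fraction(w[0]) for w in outcomes) / len(outcomes)
    assert cond_mean_T1 == Fraction(s, n)
print("E(T1 | T = s) = s/2 for every s")
```

The check succeeds for every value of s, exactly as the symmetry argument of 2a) predicts: within an atom {T = s} the coordinates are exchangeable, so each contributes s/n on average.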
E1.5 1) If one denotes by n(y, dx) the conditional distribution of X given Y, one knows (•1.12) that, for every bounded Borel function f,

    ∫ f(x) n(y, dx) = E(f(X) | Y = y) = E(f(φ(Y) + Z) | Y = y).

By Lemma 1.2, the last quantity equals E(f(φ(y) + Z)). Therefore n(y, dx) is the distribution of φ(y) + Z. (To be precise, this last relation is true for almost every y with respect to the distribution of Y.)

2a) Let Z = X − AY. The couple (X, Y) being Gaussian, this also holds for (Z, Y), which is obtained from (X, Y) through a linear transformation. Therefore, to prove the independence of Z and Y, it is sufficient to check that Cov(Z_i, Y_j) = 0 for every i = 1, ..., k, j = 1, ..., p. Suppose first, in order to simplify the notations, that the means m_X and m_Y are equal to 0. The condition of noncorrelation between the components of Z and Y may therefore be written

    0 = E(Z ᵗY) = E((X − AY) ᵗY) = E(X ᵗY) − A E(Y ᵗY) = R_XY − A R_Y.

The matrix A is therefore unique and given by A = R_XY R_Y^{-1}. If we do not suppose any more that the means are equal to 0, it is sufficient to repeat the same calculation with X and Y replaced by X − m_X and Y − m_Y, respectively.

2b) One may write

    X = (X − AY) + AY,

where the r.v.'s X − AY and AY are independent. Thanks to 1) the conditional distribution of X given Y = y is the same as the distribution of X − AY + Ay. Since X − AY is Gaussian, this distribution is characterized by its mean

    m_X − A m_Y + A y = m_X + R_XY R_Y^{-1}(y − m_Y)

and its covariance matrix

    C_{X−AY} = R_X − R_XY ᵗA − A R_YX + A R_Y ᵗA
             = R_X − R_XY R_Y^{-1} R_YX − R_XY R_Y^{-1} R_YX + R_XY R_Y^{-1} R_Y R_Y^{-1} R_YX
             = R_X − R_XY R_Y^{-1} R_YX,

where we used the fact that R_Y is symmetric and the obvious relation R_YX = ᵗR_XY.

3a) We are looking for a function of the observation Y that is a good approximation of X. One knows (•1.7 and •1.8) that the r.v. φ(Y) that minimizes the discrepancy E[(φ(Y) − X)²] is the conditional expectation φ(Y) = E(X | Y).
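The matrix algebra of 2a)-2b) can be spot-checked numerically. The sketch below (covariance numbers of our own choosing, with k = 1 and p = 2) verifies that, for A = R_XY R_Y^{-1}, the direct expansion R_X − A R_YX − R_XY ᵗA + A R_Y ᵗA of Cov(X − AY) agrees with the Schur complement R_X − R_XY R_Y^{-1} R_YX:

```python
# Spot check of 2b): Cov(X - AY) equals R_X - R_XY R_Y^{-1} R_YX.
RX = 2.0                               # Var(X), k = 1
RXY = [0.5, 0.3]                       # 1 x 2 cross-covariance
RY = [[1.0, 0.2], [0.2, 1.5]]          # 2 x 2, symmetric positive definite

det = RY[0][0] * RY[1][1] - RY[0][1] * RY[1][0]
RYinv = [[RY[1][1] / det, -RY[0][1] / det],
         [-RY[1][0] / det, RY[0][0] / det]]

# A = R_XY R_Y^{-1}  (a 1 x 2 row vector here)
A = [sum(RXY[j] * RYinv[j][i] for j in range(2)) for i in range(2)]

# direct expansion of Cov(X - AY); note R_XY tA = A R_YX (both scalars)
ARYX = sum(A[i] * RXY[i] for i in range(2))
ARYtA = sum(A[i] * RY[i][j] * A[j] for i in range(2) for j in range(2))
direct = RX - 2 * ARYX + ARYtA

schur = RX - sum(RXY[i] * A[i] for i in range(2))   # R_X - R_XY R_Y^{-1} R_YX
assert abs(direct - schur) < 1e-12
print(round(direct, 6) == round(schur, 6))   # True
```

The agreement is exact up to floating-point roundoff, since A R_Y = R_XY by construction, which is precisely the cancellation used in the displayed computation above.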
Therefore, if one decides to measure the quality of the approximation of X by φ(Y) with this distance (other choices are also possible), the best approximation of the value of X is given by E(X | Y). From (1.7) and (1.8), with the values m_X = m_Y = 0, R_X = 1, R_Y = 1 + σ², R_XY = 1, one obtains

    E(X | Y) = Y / (1 + σ²).

3b) With the given values the conditional distribution of X given Y = y is Gaussian with mean

    y / (1 + σ²) = 0.5

and variance

    σ² / (1 + σ²) = 1/11.

The probability that a r.v. with such a Gaussian distribution takes values in the interval [1/4, 3/4] is easily computed: it is equal to the probability that a N(0, 1) r.v. takes its values in the interval (−(1/4)√11, (1/4)√11) = (−0.83, 0.83), which may be computed with the help of tables (or of specialized software), and one obtains the value 0.58.

E1.6 A) It is sufficient to compute

    (C + σ² a ᵗa)(C^{-1} − (σ² / (1 + σ²(C^{-1}a, a))) C^{-1} a ᵗa C^{-1})
    = I + σ² a ᵗa C^{-1} − (σ² / (1 + σ²(C^{-1}a, a))) (a ᵗa C^{-1} + σ² a (ᵗa C^{-1} a) ᵗa C^{-1})
    = I + σ² a ᵗa C^{-1} − (σ² (1 + σ²(C^{-1}a, a)) / (1 + σ²(C^{-1}a, a))) a ᵗa C^{-1} = I.

B1) The vector (X, Y_1, ..., Y_n) is obtained from (X, Z_1, ..., Z_n) through a linear transformation. As the r.v.'s X, Z_1, ..., Z_n are independent and each have a Gaussian distribution, (X, Y_1, ..., Y_n) is a Gaussian vector which is moreover centered. Let us look for its covariance matrix. If e ∈ R^n denotes the vector with components 1, ..., 1, then

    Var(X) = σ²,    E(X ᵗY^n) = σ² ᵗe    and    R_{Y^n} = E(ᵗY^n Y^n) = C_n + σ² e ᵗe,

where

    C_n = ( σ_1²         0   )
          (       ⋱          )
          (  0         σ_n²  ).

Thus the covariance matrix of (X, Y_1, ..., Y_n) is

    (  σ²         σ² ᵗe        )
    (  σ² e    C_n + σ² e ᵗe   ).

B2) By (1.7), (1.8) and A), the conditional distribution of X given Y^n = y is Gaussian with mean

    σ²(C_n^{-1} y, e) / (1 + σ²(C_n^{-1} e, e))

and variance

    σ²(1 − σ²(C_n^{-1} e, e) / (1 + σ²(C_n^{-1} e, e))) = σ² / (1 + σ²(C_n^{-1} e, e)).

B3) Obviously

    X̂_n = σ²(C_n^{-1} Y^n, e) / (1 + σ²(C_n^{-1} e, e))

and

    E[(X − X̂_n)²] = E[E((X − X̂_n)² | 𝓖_n)] = σ² / (1 + σ²(C_n^{-1} e, e)).

B4) It is immediate, as ρ_n² := E[(X − X̂_n)²] = (σ^{-2} + Σ_{k=1}^n σ_k^{-2})^{-1} and E[(X − X̂_n)²] →_{n→+∞} 0 if and only if Σ_{n≥1} σ_n^{-2} = +∞.

E1.7 a) Let us compute the distribution of the couple (T_1, T). If we set Z = T_2 + ... + T_n, then the r.v.
Z has a Γ(n − 1, λ) distribution, i.e., with density

    g_Z(v) = (λ^{n−1} / (n − 2)!) v^{n−2} e^{−λv} 1_{v≥0}.

Moreover T = T_1 + Z. As the r.v.'s T_1 and Z are independent, they have a joint density h̃ given by

    h̃(u, v) = (λ^n / (n − 2)!) v^{n−2} e^{−λ(u+v)} 1_{u≥0} 1_{v≥0}.

As the couple (T_1, T) is the image of (T_1, Z) through the linear transformation associated to the matrix

    A = ( 1  0 )
        ( 1  1 ),

by the change of variable theorem the density g_2 of (T_1, T) is given by

    g_2(x, y) = h̃(A^{-1}(x, y)) |det A|^{-1}.

Now A^{-1}(x, y) = (x, y − x) and det A = 1; hence

    g_2(x, y) = (λ^n / (n − 2)!) (y − x)^{n−2} e^{−λy} 1_{0≤x≤y}.

The density of T is that of a Γ(n, λ) law, g_T(y) = (λ^n / (n − 1)!) y^{n−1} e^{−λy} 1_{y≥0}. The conditional density of T_1 given T = y is therefore

    q(x) = g_2(x, y) / g_T(y) = ((n − 1)(y − x)^{n−2} / y^{n−1}) 1_{0≤x≤y}

(remark: it does not depend on λ). E(T_1 | T) is the mean of this conditional distribution: integrating by parts or using the relation

    ∫_0^1 u^{α−1} (1 − u)^{β−1} du = Γ(α)Γ(β) / Γ(α + β),

which holds for α, β > 0, we have

    E(T_1 | T) = ((n − 1) / T^{n−1}) ∫_0^T u (T − u)^{n−2} du = T/n

(of course this is the same as the result obtained in Exercise 1.4, where no hypotheses on the distribution of the r.v.'s T_i were made).

E1.8 Write S_n = S_{n−1} + X_n, where S_{n−1} is 𝓕_{n−1}-measurable and X_n is independent of 𝓕_{n−1}. By Lemma 1.2,

    E(f(S_n) | 𝓕_{n−1}) = H(S_{n−1}),    where H(z) = E(f(z + X_n)) = ∫ f(z + x) dμ_n(x).

The same computation with σ(S_{n−1}) in place of 𝓕_{n−1} gives E(f(S_n) | S_{n−1}) = H(S_{n−1}), which proves (1.9). This shows that the random walk (S_n)_{n≥0} satisfies the Markov property. Note also the role played by the very useful Lemma 1.2.

CHAPTER 2

Stochastic Processes

General Facts

•2.1 A stochastic process on a measurable space (E, 𝓔) is a term

    X = (Ω, 𝓕, (X_n)_{n≥0}, P),

where (Ω, 𝓕, P) is a probability space and where, for every n, X_n is a r.v. with values in (E, 𝓔). Let us set 𝓕_n^o = σ(X_0, X_1, ..., X_n). We know that a function φ on Ω is 𝓕_n^o-measurable if and only if φ = ψ(X_0, X_1, ..., X_n) for a 𝓔^{⊗(n+1)}-measurable function ψ. In particular, for an event A to belong to 𝓕_n^o means, intuitively, that one knows whether A has happened or not as soon as the values taken by X_0, X_1, ..., X_n are known. However, in many cases, what is known at time n is not just the values of X_0, X_1, ..., X_n, since also other information may be available, the values taken by other processes, for instance. This explains the following definition.

Definition 2.1
We call filtered probability space a term (Ω, 𝓕, (𝓕_n)_{n≥0}, P), where (Ω, 𝓕, P) is a probability space and (𝓕_n)_{n≥0} is an increasing family of sub-σ-algebras of 𝓕, called a filtration, on Ω. 𝓕_n is called the σ-algebra of the events prior to time n.

Definition 2.2 An adapted stochastic process on a measurable space (E, 𝓔) is a term X = (Ω, 𝓕, (𝓕_n)_{n≥0}, (X_n)_{n≥0}, P), where (Ω, 𝓕, (𝓕_n)_{n≥0}, P) is a filtered probability space and, for every n, X_n is a r.v. with values in (E, 𝓔) and 𝓕_n-measurable (this last property is also expressed by saying that (X_n)_{n≥0} is adapted to the filtration (𝓕_n)_{n≥0}).

•2.2 We give now two examples of those processes that will be our main concern.

(i) An adapted real stochastic process X is called a martingale if every r.v. X_n is integrable and, for every n, E(X_{n+1} | 𝓕_n) = X_n a.s. It is a supermartingale if E(X_{n+1} | 𝓕_n) ≤ X_n and a submartingale if E(X_{n+1} | 𝓕_n) ≥ X_n.

(ii) A stochastic process X on (E, 𝓔) is a Markov chain if, for every n and every A ∈ 𝓔,

    P(X_{n+1} ∈ A | 𝓕_n) = P(X_{n+1} ∈ A | X_n)    a.s.

This means, in particular, that, if one knows the position X_n of the process at time n, in order to foresee its position X_{n+1} at time n + 1, the additional knowledge of what happened before time n (and which is contained in 𝓕_n) carries no useful additional information.

•2.3 Let X be a process on (E, 𝓔). We denote by μ_n the law of the r.v. (X_0, X_1, ..., X_n), which takes its values in (E^{n+1}, 𝓔^{⊗(n+1)}). The probabilities (μ_n)_{n≥0} are called the finite dimensional distributions of the process X. Since

    μ_n(A_0 × A_1 × ... × A_n) = P(X_0 ∈ A_0, ..., X_n ∈ A_n),

it holds

    μ_{n−1}(A_0 × A_1 × ... × A_{n−1}) = μ_n(A_0 × A_1 × ... × A_{n−1} × E).    (2.1)

Conversely, given for every n ≥ 0 a probability μ_n on (E^{n+1}, 𝓔^{⊗(n+1)}) satisfying (2.1), does a process having (μ_n)_{n≥0} as its finite dimensional distributions exist?
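Before turning to the answer, one can at least convince oneself that natural families of laws do satisfy (2.1). A minimal sketch (our own illustration, not from the book): for an i.i.d. biased coin on E = {0, 1}, take μ_n to be the product law on {0, 1}^{n+1}, and check the consistency relation on atoms by summing out the last coordinate:

```python
from fractions import Fraction
from itertools import product

# Consistency (2.1) for i.i.d. product laws on E = {0, 1}:
# summing out the last coordinate of mu_n gives back mu_{n-1}.
p = {0: Fraction(1, 3), 1: Fraction(2, 3)}   # one-step law of the coin

def mu(xs):
    # mu_{len(xs)-1} evaluated at the atom (x_0, ..., x_n): product of p(x_k)
    prob = Fraction(1)
    for x in xs:
        prob *= p[x]
    return prob

n = 3
for xs in product((0, 1), repeat=n):          # atoms of E^n
    marginal = sum(mu(xs + (e,)) for e in (0, 1))
    assert marginal == mu(xs)                 # (2.1) on this atom
print("mu_n(A x E) = mu_{n-1}(A) verified on all atoms")
```

Since the atoms generate the product σ-algebra and are stable under intersection, checking (2.1) on atoms suffices here, by exactly the monotone class argument discussed below.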
In order to answer this question, we introduce the canonical space

    Ω = E^N,    ω = (ω_n)_{n≥0},    X_n(ω) = ω_n,
    𝓕_n = σ(X_k, k ≤ n),    𝓕 = σ(X_k, k ≥ 0),    (2.2)

which is going to play an important role in the chapter on Markov chains. We define a probability P_n on (Ω, 𝓕_n) by setting, for A ∈ 𝓔^{⊗(n+1)},

    P_n(A × E × ... × E × ...) = μ_n(A),

and then defining a set function P on ∪_{n≥0} 𝓕_n by P(A) = P_n(A) if A ∈ 𝓕_n. We must extend P to 𝓕 = σ(∪_n 𝓕_n). It holds

Theorem 2.3 (Kolmogorov) Let (μ_n)_{n≥0} be a family of probability laws, each μ_n being a probability on the product space (E^{n+1}, 𝓔^{⊗(n+1)}), satisfying (2.1). Then on the canonical space defined in (2.2) there exists a unique probability P such that the process (Ω, 𝓕, (𝓕_n)_{n≥0}, (X_n)_{n≥0}, P) has (μ_n)_{n≥0} as its finite dimensional distributions.

•2.4 Let us recall the following result (which is called the monotone class theorem). Let 𝓢 be a family of parts of Ω such that:

i) Ω ∈ 𝓢;
ii) if A_n ∈ 𝓢 and A_n ↑ A, then A ∈ 𝓢;
iii) if A, B ∈ 𝓢 and B ⊂ A, then A \ B ∈ 𝓢.

Then, if 𝓢 contains a class 𝓒 that is stable with respect to finite intersections, 𝓢 contains σ(𝓒). This result has two important consequences.

Theorem 2.4 Let P_1, P_2 be two probabilities on (Ω, 𝓕) such that P_1(A) = P_2(A) for every A ∈ 𝓒, where 𝓒 is a class that is stable with respect to finite intersections. Then P_1(A) = P_2(A) for every A ∈ σ(𝓒).

Theorem 2.5 Let 𝓗 be a vector space of bounded real functions defined on Ω and 𝓒 a class of subsets of Ω, which is stable with respect to finite intersections. Assume that:

i) 1 ∈ 𝓗;
Stopping Times 025 Let (2, F, (F,)nz0s P) be a filtered probability space (Definition 2.1). We set Foo = 2 (Un20- Fn) 60 Definition 2.6 (i) stopping time (of the filtration (F,)n20) is any application \ 1» N such that, for every n > 0, WwsnleF, (ii) We call o-algebra of the events prior to time v and write Fy, the 0 -algebra Fr = {he Foo: foreveryn > 0, ANY En) © Fn). Remark. In (i) and (ii), one can replace {v 0; Xy(0) € A}, 2a with the understanding that inf @ = +00, which we shall use throughout all this 12x Then r, is a stopping time since ea Sm) = {Xo eA Xt EA Xnat €ALKn © ALE Fai ta is called the passage or hitting time in A, We shall also consider (beware of the difference ...) 04 (w) = inf{n > 1; Xn(w) © Als ay a4 isa stopping time (same proof), which, of course, coincides with ta if Xo(w) % A is called the return time in A. 02.6 Let (Fq)y20 be a filtration and v;, vz two stopping times for this filtratiou. We have then the following properties (elementary to check and that are the subject of Exercise 2.1), i) vp + v2, vt V M2, vi A v2 are stopping times with respect to the same filtration, ii) If wy < va.then Fy, C Puy WF yan = Fo, VF x Is CHAPTER 2: STOCHASTIC PROCESSES iv) (vy < vg} and {vy = v2] both belong to. Fi, Fey #2.7 Let X be an adapted process: we want to define the position of the process at lime v. ie., X(w) = Xvfuy(w). There is a problem when v(or) = +00. A possible way to handle this situation is to fx & tv. Xoor Foo-tneasurable, and to set Xy = Xne on {v = +00]. If E is a topological space (and & its Borel c -algebra), there are two customary choices for Xo i) The sequence (Xn)qn20 has as. a limit X as n —» 00. One can then choose Xoo = X. ii) Otherwise one can add to the space £ an isolated point 8, sometiines referred to as the cemetery, and set Xoo = 2. One can then define Xy=Xpon{v=n}, new Let us remark that the r.v. X, is #,-measurable since [Xv € AING nl = {Xn € AN {vy =a] EF,. 
Finally the following result is very useful in order to compute the conditional expectation with respect to F_ν.

Proposition 2.7 Let X be a positive or integrable r.v. and ν a stopping time. Let us set, for every n ∈ N ∪ {+∞}, X_n = E(X | F_n). Then

E(X | F_ν) = X_ν  a.s.   (2.5)

Let us check (2.5) for X ≥ 0. If Z is ≥ 0 and F_ν-measurable, then it is easy to see that Z 1_{{ν=n}} is F_n-measurable and that

E(Z X_ν) = Σ_{n∈N∪{+∞}} E(Z X_n 1_{{ν=n}}) = Σ_{n∈N∪{+∞}} E(Z X 1_{{ν=n}}) = E(Z X),

i.e., E(X | F_ν) = X_ν.

Exercises

Exercise 2.1 Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space (note however that in this exercise P plays no role), τ and ν two stopping times of the filtration (F_n)_{n≥0}, F_τ (resp. F_ν) the σ-algebra of the events prior to τ (resp. ν). Show that
i) if τ ≡ p, p ∈ N, then F_τ = F_p;
ii) τ ∧ ν, τ ∨ ν, τ + ν are stopping times;
iii) if τ ≤ ν, then F_τ ⊂ F_ν;
iv) F_{τ∧ν} = F_τ ∩ F_ν;
v) {τ ≤ ν} and {τ = ν} belong to F_τ ∩ F_ν.

Exercise 2.2 Let Ω = E^N be the canonical space of (2.2) and θ the translation operator, defined by θ((ω_n)_{n≥0}) = (ω_{n+1})_{n≥0}, and (θ_n)_{n≥0} the sequence of its iterates (i.e., θ_0 = identity and θ_n = θ ∘ ... ∘ θ, n times).
1) Show that X_m ∘ θ_n = X_{m+n} and θ_n^{−1}(F_m) = σ(X_n, ..., X_{n+m}).
2) Let τ and ν be two stopping times of the filtration (F_n)_{n≥0} (possibly τ = ν).
2a) Show that, for every positive integer k, k + τ ∘ θ_k is a stopping time.
2b) Show that ρ = ν + τ ∘ θ_ν is a stopping time (with the understanding ρ = +∞ on {ν = +∞}). Show that, if moreover one assumes that ν and τ are finite, X_τ ∘ θ_ν = X_ρ.
2c) For A ⊂ E, let us note

τ_A(ω) = inf{n ≥ 0; X_n(ω) ∈ A},  σ_A(ω) = inf{n ≥ 1; X_n(ω) ∈ A},

the hitting and return times in A. These are stopping times, as we saw in •2.5. Let σ be a stopping time. Show that

σ + τ_A ∘ θ_σ = inf{k ≥ σ; X_k ∈ A},  σ + σ_A ∘ θ_σ = inf{k > σ; X_k ∈ A}   (2.6)

and that, if A ⊂ B, τ_A = τ_B + τ_A ∘ θ_{τ_B}.

Solutions

E2.1
i) Suppose τ ≡ p. The event {τ ≤ n} is equal to ∅ if p > n and to Ω if p ≤ n; in both cases {τ ≤ n} ∈ F_n, so τ is a stopping time. Moreover, as {τ ≤ n} = Ω for every n ≥ p,

F_τ = {A ∈ F_∞ : A ∈ F_n for every n ≥ p} = ∩_{n≥p} F_n = F_p.

ii) Let n ∈ N. One must show that the events {τ∧ν ≤ n}, {τ∨ν ≤ n} and {τ+ν ≤ n} belong to F_n. This follows from

{τ∧ν ≤ n} = {τ ≤ n} ∪ {ν ≤ n},  {τ∨ν ≤ n} = {τ ≤ n} ∩ {ν ≤ n},
{τ+ν ≤ n} = ∪_{k=0}^n ({τ = k} ∩ {ν ≤ n−k}),

all of which belong to F_n.
iii) Let A ∈ F_τ. If τ ≤ ν, then {ν ≤ n} ⊂ {τ ≤ n} and A ∩ {ν ≤ n} = (A ∩ {τ ≤ n}) ∩ {ν ≤ n} ∈ F_n; hence A ∈ F_ν.
iv) By iii), F_{τ∧ν} ⊂ F_τ ∩ F_ν. Conversely, let A ∈ F_τ ∩ F_ν; one must show that, for every k ≥ 0, A ∩ {τ∧ν ≤ k} ∈ F_k.
But one already knows that A ∩ {τ ≤ k} ∈ F_k and A ∩ {ν ≤ k} ∈ F_k. Hence, recalling that {τ∧ν ≤ k} = {τ ≤ k} ∪ {ν ≤ k},

A ∩ {τ∧ν ≤ k} = (A ∩ {τ ≤ k}) ∪ (A ∩ {ν ≤ k}) ∈ F_k,

so that A ∈ F_{τ∧ν}.
v) For 0 ≤ k ≤ n, {τ = k} ∩ {k ≤ ν} ∈ F_k ⊂ F_n, since {k ≤ ν} = {ν ≤ k−1}^c ∈ F_k. Therefore

{τ ≤ ν} ∩ {τ ≤ n} = ∪_{k=0}^n ({τ = k} ∩ {k ≤ ν}) ∈ F_n,

i.e., {τ ≤ ν} ∈ F_τ; similarly {τ ≤ ν} ∩ {ν ≤ n} = ∪_{k=0}^n ({ν = k} ∩ {τ ≤ k}) ∈ F_n, i.e., {τ ≤ ν} ∈ F_ν. Exchanging the roles of τ and ν gives {ν ≤ τ} ∈ F_τ ∩ F_ν, and finally {τ = ν} = {τ ≤ ν} ∩ {ν ≤ τ} ∈ F_τ ∩ F_ν.

E2.2
1) By the definition of θ_n, X_m(θ_n(ω)) = ω_{n+m} = X_{n+m}(ω), i.e., X_m ∘ θ_n = X_{m+n}; hence θ_n^{−1}(F_m) = θ_n^{−1}(σ(X_0, ..., X_m)) = σ(X_n, ..., X_{n+m}).
2a) For n ≥ k, {k + τ ∘ θ_k = n} = θ_k^{−1}({τ = n−k}); since {τ = n−k} ∈ F_{n−k}, by 1) this event belongs to σ(X_k, ..., X_n) ⊂ F_n. Hence k + τ ∘ θ_k is a stopping time.
2b) {ρ = n} = ∪_{k=0}^n ({ν = k} ∩ {k + τ ∘ θ_k = n}) ∈ F_n by 2a), so ρ is a stopping time. If ν and τ are finite, for every n ≥ 0, X_n(θ_ν(ω)) = X_{n+ν(ω)}(ω), and therefore

X_{τ(θ_ν(ω))}(θ_ν(ω)) = X_{ν(ω)+τ(θ_ν(ω))}(ω),

i.e., X_τ ∘ θ_ν = X_ρ.
2c) The relations are obviously true on {σ = +∞}. On {σ = k} it holds

σ + τ_A ∘ θ_σ = k + τ_A ∘ θ_k = k + inf{m; X_m ∘ θ_k ∈ A} = k + inf{m; X_{m+k} ∈ A}.   (2.7)

But if inf{m; X_{m+k} ∈ A} = r, this means that X_k ∉ A, ..., X_{k+r−1} ∉ A, X_{k+r} ∈ A, i.e., k + inf{m; X_{m+k} ∈ A} = inf{l ≥ k; X_l ∈ A}. This, with (2.7), proves the relation we are looking for on {σ = k} for every k. The proof for σ_A is identical. If A ⊂ B, the relation τ_A = τ_B + τ_A ∘ θ_{τ_B} is an immediate consequence of (2.6): actually, as X_k ∉ A for every k < τ_B,

τ_A = inf{k; X_k ∈ A} = inf{k ≥ τ_B; X_k ∈ A} = τ_B + τ_A ∘ θ_{τ_B}.

CHAPTER 3

Martingales

First Definitions

•3.1 Let (Ω, F, (F_n)_{n≥0}, P) be a fixed filtered probability space. We know that the family of sets ∪_{n≥0} F_n is not, in general, a σ-algebra. Let us define, thus, as in •2.5, F_∞ = σ(∪_{n≥0} F_n). F_∞ is a σ-algebra that is going to play an important role.

Definition 3.1 A real adapted process (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale) if it is integrable and if, for every n ∈ N,

E(X_{n+1} | F_n) = X_n  a.s.  (resp. ≤, ≥).

Thus (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale) if and only if, for every A ∈ F_n,

∫_A X_{n+1} dP = ∫_A X_n dP  (resp. ≤, ≥).

Note that (X_n)_{n≥0} is a supermartingale if and only if (−X_n)_{n≥0} is a submartingale, and that (X_n)_{n≥0} is a martingale if and only if it is both a supermartingale and a submartingale.

The definitions above still hold if "X_n integrable" is replaced by "X_n positive". We speak then of a positive martingale (resp. supermartingale, submartingale). It is understood that a martingale (resp. supermartingale, submartingale) without further specification is necessarily integrable.
Finally a process X = (Ω, F, (X_n)_{n≥0}, P) (without specification of the filtration) is a martingale (resp. a supermartingale, a submartingale) if it is a martingale (resp. a supermartingale, a submartingale) with respect to the natural filtration F_n^X = σ(X_0, ..., X_n).

•3.2 Examples. (i) Let X ∈ L¹ (resp. X ≥ 0). Thanks to •1.5 (vi), X_n = E(X | F_n) is a martingale (resp. a positive martingale, possibly nonintegrable).
(ii) Clearly a process (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale) if and only if E(X_{n+1} − X_n | F_n) = 0 (resp. ≤ 0, ≥ 0). Thus if (Y_n)_{n≥0} is an adapted integrable process, X_n = Y_0 + Y_1 + ... + Y_n defines a martingale (resp. a supermartingale, a submartingale) if and only if E(Y_{n+1} | F_n) = 0 (resp. ≤ 0, ≥ 0). In particular if (Y_n)_{n≥1} is a sequence of independent integrable r.v.'s, then (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale) with respect to the filtration F_n = σ(Y_0, Y_1, ..., Y_n) if, for every n ≥ 1, E(Y_n) = 0 (resp. ≤ 0, ≥ 0).

First Properties

•3.3 (i) If (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale), the sequence (E(X_n))_{n≥0} is constant (resp. decreasing, increasing). This follows immediately from •1.5 (i).
(ii) If (X_n)_{n≥0} is a martingale (resp. a supermartingale, a submartingale), for m ≤ n it holds E(X_n | F_m) = X_m (resp. ≤, ≥). This follows from •1.5 (vi).
(iii) If (X_n)_{n≥0} and (Y_n)_{n≥0} are supermartingales, the same is true for X_n + Y_n and X_n ∧ Y_n.
(iv) If (X_n)_{n≥0} is a martingale (resp. a supermartingale) and f is a real concave (resp. concave and increasing) function such that f(X_n) is integrable for every n ≥ 0, then Y_n = f(X_n) is a supermartingale (this follows from Jensen's inequality, •1.3). If we replace the assumption "f(X_n) integrable" with "f(X_n) positive", (f(X_n))_{n≥0} is a positive supermartingale. In particular if (X_n)_{n≥0} is a martingale, (|X_n|)_{n≥0} and (X_n⁺)_{n≥0} are positive submartingales.
(v) If (X_n)_{n≥0} is a martingale (resp.
a supermartingale, a submartingale), this also holds for the stopped process X_n^ν = X_{n∧ν}, where ν is a stopping time of the filtration (F_n)_{n≥0}. Actually, by the definition of a stopping time, {ν ≥ n+1} = {ν ≤ n}^c ∈ F_n and

X_{(n+1)∧ν} − X_{n∧ν} = 1_{{ν≥n+1}}(X_{n+1} − X_n),

so that E(X_{(n+1)∧ν} − X_{n∧ν} | F_n) = 1_{{ν≥n+1}} E(X_{n+1} − X_n | F_n) = 0 (resp. ≤ 0, ≥ 0).

•3.4 A process (A_n)_{n≥0} is said to be predictable if A_0 = 0 and A_{n+1} is F_n-measurable for every n ≥ 0. Let (X_n)_{n≥0} be a submartingale. One defines

A_0 = 0,  A_{n+1} = A_n + E(X_{n+1} − X_n | F_n).

By construction, (A_n)_{n≥0} is an integrable predictable increasing process and M_n = X_n − A_n verifies E(M_{n+1} − M_n | F_n) = 0 (since A_{n+1} is F_n-measurable!). This decomposition is moreover unique since, if X_n = M'_n + A'_n is any such decomposition, then A'_0 = 0 and

X_{n+1} − X_n = M'_{n+1} − M'_n + A'_{n+1} − A'_n,

from which, by conditioning with respect to F_n, A'_{n+1} − A'_n = E(X_{n+1} − X_n | F_n); thus A'_n = A_n and M'_n = M_n. We have thus proved (recall that, by definition, A_0 = 0):

Theorem 3.2 Every submartingale (X_n)_{n≥0} can be written, uniquely, in the form X_n = M_n + A_n, where (M_n)_{n≥0} is a martingale and (A_n)_{n≥0} is an integrable increasing predictable process.

Such an increasing process (A_n)_{n≥0} is called the compensator of the submartingale (X_n)_{n≥0}; it satisfies A_{n+1} − A_n = X_{n+1} − X_n − (M_{n+1} − M_n).

The Stopping Theorem

•3.5 Let (X_n)_{n≥0} be a supermartingale. Thus, for m ≤ n, E(X_n | F_m) ≤ X_m. The stopping theorem states that this relation still holds when the deterministic times m ≤ n are replaced by bounded stopping times.

Theorem 3.3 (Stopping theorem) Let (X_n)_{n≥0} be a martingale (resp. a supermartingale, a submartingale) and ν_1 ≤ ν_2 two bounded stopping times. Then

E(X_{ν_2} | F_{ν_1}) = X_{ν_1}  a.s.  (resp. ≤, ≥).

By taking the expectations:

Corollary 3.4 Let (X_n)_{n≥0} be a positive or integrable martingale (resp. supermartingale, submartingale) and ν a bounded stopping time; then E(X_ν) = E(X_0) (resp. ≤, ≥).

•3.6 In these statements, the assumption of boundedness for ν is essential and cannot be removed without additional hypotheses. Let us consider, for instance, the random walk S_0 = 0, S_n = X_1 + X_2 + ... + X_n, where (X_n)_{n≥1} is a sequence of independent r.v.'s such that P(X_k = 1) = P(X_k = −1) = 1/2 for every k = 1, 2, .... We shall see (Exercise 3.13) that ν = inf{n; S_n = 1} satisfies P(ν < +∞) = 1. Besides, (S_n)_{n≥0} is a martingale (•3.2). If Corollary 3.4 applied, we should have E(S_ν) = E(S_0) = 0; but S_ν = 1 a.s., thus E(S_ν) = 1. The statement of Corollary 3.4 does not hold, and obviously ν is not bounded; actually one can see easily that P(ν > n) ≥ 2^{−n} > 0.
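The Doob decomposition of •3.4 can be checked by hand on the simple random walk of •3.6, for which the submartingale S_n² has compensator A_n = n, so that M_n = S_n² − n is a martingale. The sketch below (Python, our illustration, not part of the book) verifies E(M_{n+1} | F_n) = M_n exactly, by averaging over the two equally likely continuations of every ±1 prefix:

```python
from itertools import product

def doob_check(N):
    """For the simple random walk S_n, the submartingale S_n^2 has Doob
    decomposition S_n^2 = M_n + A_n with compensator A_n = n.
    Verify E(M_{n+1} | F_n) = M_n exactly on every prefix of length n < N."""
    for n in range(N):
        for signs in product((-1, 1), repeat=n):
            s = sum(signs)
            m_n = s * s - n
            # two equally likely continuations of the path at time n + 1
            m_next_avg = (((s + 1) ** 2 - (n + 1))
                          + ((s - 1) ** 2 - (n + 1))) // 2
            if m_next_avg != m_n:
                return False
    return True
```

The check succeeds because ((s+1)² + (s−1)²)/2 − (n+1) = s² − n, which is precisely the one-step computation defining the compensator.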
Maximal Inequalities

•3.7 These are inequalities concerning the r.v.'s

max_{0≤k≤n} |X_k|,  sup_{n≥0} X_n,  sup_{n≥0} |X_n|.

Let us start with two simple but typical examples. Let (X_n)_{n≥0} be a positive supermartingale and denote, for a > 0, by ν = inf{n ≥ 0; X_n ≥ a} the passage time in [a, +∞[. Then {sup_{n≥0} X_n ≥ a} = {ν < +∞} and X_ν ≥ a on {ν < +∞}; thus X_{n∧ν} ≥ a 1_{{ν≤n}}. We deduce that E(X_0) ≥ E(X_{n∧ν}) ≥ a P(ν ≤ n) and, as n → ∞,

E(X_0) ≥ a P(ν < +∞) = a P(sup_{n≥0} X_n ≥ a).

Similarly, if (X_n)_{n≥0} is a positive submartingale, one has (with the same stopping time ν)

{max_{0≤k≤n} X_k ≥ a} = {ν ≤ n},

and a 1_{{ν≤n}} ≤ X_{n∧ν} 1_{{ν≤n}}, which gives, by the stopping theorem,

a P(ν ≤ n) ≤ E(X_{n∧ν} 1_{{ν≤n}}) ≤ E(X_n 1_{{ν≤n}}) ≤ E(X_n).

We have thus proved

Theorem 3.5 For every a > 0 it holds
(i) if (X_n)_{n≥0} is a positive supermartingale,

P(sup_{n≥0} X_n ≥ a) ≤ (1/a) E(X_0);

(ii) if (X_n)_{n≥0} is a positive submartingale,

P(max_{0≤k≤n} X_k ≥ a) ≤ (1/a) ∫_{{max_{0≤k≤n} X_k ≥ a}} X_n dP ≤ (1/a) E(X_n).

Starting from this theorem one can prove

Theorem 3.6 (Doob's inequality) Let p > 1 and (X_n)_{n≥0} a martingale (or a positive submartingale) such that sup_{n≥0} E|X_n|^p < +∞. Then the r.v. sup_{n≥0} |X_n|^p is integrable and

E[sup_{n≥0} |X_n|^p] ≤ (p/(p−1))^p sup_{n≥0} E|X_n|^p.

Note that, if p = 2, the inequality becomes E[sup_{n≥0} X_n²] ≤ 4 sup_{n≥0} E(X_n²).

The proof of Doob's inequality is actually simple. It is enough to prove the statement for a positive submartingale since, if (X_n)_{n≥0} is a martingale, (|X_n|)_{n≥0} is a positive submartingale. Note first that ‖X_n‖_p is increasing in n. Let Y_m = max_{k≤m} X_k. By Theorem 3.5 (ii),

a E(1_{{Y_m≥a}}) = a P(Y_m ≥ a) ≤ E(X_m 1_{{Y_m≥a}})

and, by multiplying by p a^{p−2} and integrating,

∫_0^{+∞} p a^{p−1} E(1_{{Y_m≥a}}) da ≤ ∫_0^{+∞} p a^{p−2} E(X_m 1_{{Y_m≥a}}) da.   (3.4)

Since everything is positive, on one side one has

∫_0^{+∞} p a^{p−1} E(1_{{Y_m≥a}}) da = E(∫_0^{Y_m} p a^{p−1} da) = E(Y_m^p) = ‖Y_m‖_p^p

and on the other one

∫_0^{+∞} p a^{p−2} E(X_m 1_{{Y_m≥a}}) da = E(X_m ∫_0^{Y_m} p a^{p−2} da) = (p/(p−1)) E(X_m Y_m^{p−1}).

Moreover, by Hölder's inequality for p and q = p/(p−1),

E(X_m Y_m^{p−1}) ≤ ‖X_m‖_p ‖Y_m^{p−1}‖_q = ‖X_m‖_p ‖Y_m‖_p^{p−1}

and, plugging this into (3.4), we get

‖Y_m‖_p^p ≤ (p/(p−1)) ‖X_m‖_p ‖Y_m‖_p^{p−1};

dividing by ‖Y_m‖_p^{p−1} (which is finite) one gets ‖Y_m‖_p ≤ (p/(p−1)) ‖X_m‖_p.
Finally just note that, as m → ∞, Y_m = max_{k≤m} X_k ↑ sup_n X_n, so that the result follows by monotone convergence. If sup_{n≥0} E|X_n|^p < +∞, (X_n)_{n≥0} is said to be bounded in L^p.

Square Integrable Martingales

•3.8 Let (M_n)_{n≥0} be a martingale such that E(M_n²) < +∞ for every n ≥ 0. Note that (M_n²)_{n≥0} is a submartingale; hence the sequence (E(M_n²))_{n≥0} is increasing. Also, for p ≥ 1, it holds E^{F_n}(M_n M_{n+p}) = M_n E^{F_n}(M_{n+p}) = M_n² a.s., from which we derive the formulas

E^{F_n}((M_{n+p} − M_n)²) = E^{F_n}(M_{n+p}² − M_n²)  a.s.,
E((M_{n+p} − M_n)²) = E(M_{n+p}² − M_n²).

Usually one writes (⟨M⟩_n)_{n≥0} for the compensator (A_n)_{n≥0} of the submartingale (M_n²)_{n≥0} (•3.4). This increasing process is characterized by: ⟨M⟩_0 = 0, ⟨M⟩_{n+1} is F_n-measurable and

E^{F_n}((M_{n+1} − M_n)²) = E^{F_n}(M_{n+1}² − M_n²) = ⟨M⟩_{n+1} − ⟨M⟩_n  a.s.

Then U_n = M_n² − ⟨M⟩_n is a martingale and, in particular, if M_0 = 0, E(M_n²) = E(⟨M⟩_n). The compensator (⟨M⟩_n)_{n≥0} is also called the associated increasing process of the square integrable martingale (M_n)_{n≥0}.

•3.9 Let us assume now that the martingale (M_n)_{n≥0} is bounded in L², and set m* = sup_{n≥0} E(M_n²) < +∞. Actually, since (E(M_n²))_{n≥0} is increasing, E(M_n²) ↑ m* < +∞. Therefore E((M_{n+p} − M_n)²) ≤ m* − E(M_n²), which gives

sup_{p≥0} E((M_{n+p} − M_n)²) →_{n→∞} 0.

This shows that (M_n)_{n≥0} converges in L², being a Cauchy sequence. Let us show that (M_n)_{n≥0} converges a.s. Let us note V_n = sup_{i,j≥n} |M_i − M_j|; obviously (V_n)_{n≥0} is decreasing and V_n ↓ V. It suffices now to show that V = 0 a.s. since, then, (M_n)_{n≥0} would be a.s. a Cauchy sequence and thus a.s. convergent. Applying Doob's inequality (Theorem 3.6) to the martingale (M_i − M_n)_{i≥n}, one gets, for every ρ > 0,

P(V_n > ρ) ≤ P(sup_{i≥n} |M_i − M_n| > ρ/2) ≤ (16/ρ²) sup_{i≥n} E((M_i − M_n)²) →_{n→∞} 0.

This implies P(V > ρ) = 0 for every ρ > 0, and V = 0 a.s. We have proved

Theorem 3.7 Let (M_n)_{n≥0} be a martingale such that sup_{n≥0} E(M_n²) < +∞.
Then (M_n)_{n≥0} converges a.s. and in L².

Convergence Theorems

•3.10 One of the reasons for the importance of martingales is the convergence theorems; they guarantee that a martingale converges a.s. under a set of assumptions that are often easy to check. Already Theorem 3.7 states that a martingale bounded in L² converges a.s.

Let (X_n)_{n≥0} be a submartingale bounded in absolute value by some number K and let X_n = M_n + A_n be its Doob decomposition (Theorem 3.2). Then E(A_n) = E(X_n) − E(M_0) ≤ 2K, so that E(A_∞) ≤ 2K, which gives A_∞ < +∞ a.s. We set τ_p = inf{n; A_{n+1} > p}; τ_p is a stopping time (A_{n+1} being F_n-measurable) and |M_{τ_p∧n}| ≤ |X_{τ_p∧n}| + A_{τ_p∧n} ≤ K + p, so that (M_{τ_p∧n})_{n≥0} is a bounded martingale and converges a.s. by Theorem 3.7. We get then the a.s. convergence of (X_{τ_p∧n})_{n≥0} and therefore (X_n)_{n≥0} converges on {τ_p = +∞}. Finally (X_n)_{n≥0} converges a.s. since, A_∞ being a.s. finite,

∪_p {τ_p = +∞} = ∪_p {A_∞ ≤ p} = {A_∞ < +∞} = Ω  a.s.

If (X_n)_{n≥0} is a positive supermartingale, since the function x ↦ e^{−x} is convex and decreasing, Y_n = e^{−X_n} is a submartingale (•3.3) satisfying 0 ≤ Y_n ≤ 1. Thus (Y_n)_{n≥0} converges a.s. and so does (X_n)_{n≥0} (with values in [0, +∞]). Moreover by Fatou's lemma (•1.5)

E(X_∞ | F_k) = E(lim_n X_n | F_k) ≤ lim inf_n E(X_n | F_k) ≤ X_k  a.s.

This implies that, for every M > 0, E(X_∞ 1_{{X_0≤M}}) ≤ E(X_0 1_{{X_0≤M}}) < +∞, so that X_∞ < +∞ a.s. on {X_0 < +∞}. We have proved

Theorem 3.8 Let (X_n)_{n≥0} be a positive supermartingale. Then it converges a.s. to a r.v. X_∞ and X_k ≥ E(X_∞ | F_k) for every k. Moreover, if P(X_0 < +∞) = 1, then P(X_∞ < +∞) = 1.

Of course, a positive martingale is a positive supermartingale and the previous theorem gives the convergence also in this case.

Theorem 3.9 Let (X_n)_{n≥0} be a submartingale such that sup_{n≥0} E(|X_n|) < +∞.
Then (X_n)_{n≥0} converges a.s.

The proof simply amounts to showing that (X_n)_{n≥0} can be written as the difference of a positive integrable martingale and a positive integrable supermartingale and then applying Theorem 3.8. Let X_n⁺ = M_n + A_n be the Doob decomposition of the submartingale (X_n⁺)_{n≥0}. Then E(A_∞) = sup_{n≥0} E(A_n) ≤ E|M_0| + sup_{n≥0} E|X_n| < +∞. If Y_n = M_n + E(A_∞ | F_n), since Y_n ≥ X_n⁺ ≥ X_n, (Y_n)_{n≥0} is a positive martingale and Z_n = Y_n − X_n is a positive supermartingale. Obviously X_n = Y_n − Z_n. Note that Theorem 3.8 states that E(Y_∞) < +∞ and E(Z_∞) < +∞, so that Y_∞ < +∞ and Z_∞ < +∞ a.s.

Let us point out that, if (X_n)_{n≥0} is a submartingale, since |x| = 2x⁺ − x, it holds E|X_n| = 2E(X_n⁺) − E(X_n) ≤ 2E(X_n⁺) − E(X_0). Thus sup_{n≥0} E|X_n| < +∞ if and only if sup_{n≥0} E(X_n⁺) < +∞. Similarly, for an (integrable) supermartingale, sup_{n≥0} E|X_n| < +∞ if and only if sup_{n≥0} E(X_n⁻) < +∞.

•3.11 We have already seen that a martingale that is bounded in L² converges a.s. and in L². More generally a martingale that is bounded in L^p, p > 1, converges a.s. and in L^p. Actually it is even bounded in L¹ and thus, thanks to Theorem 3.9, there exists a r.v. X_∞ such that X_n →_{n→∞} X_∞ a.s. Moreover, by Doob's inequality, the r.v. X* = sup_{n≥0} |X_n| belongs to L^p. Thanks to the inequality |X_n − X_∞|^p ≤ 2^p X*^p, we are allowed to apply Lebesgue's theorem and get lim_n E(|X_n − X_∞|^p) = 0. Contrary to the case L^p, p > 1, the condition sup_{n≥0} E|X_n| < +∞ does not imply the convergence in L¹, as we shall see in the exercises (Exercise 3.5, for instance).

•3.12 Let us go back to our first example of a martingale, i.e., X_n = E^{F_n}(X), X ∈ L¹ (see •3.2). This martingale is bounded in L¹:

E(|X_n|) = E(|E^{F_n}(X)|) ≤ E(E^{F_n}(|X|)) = E(|X|).

Therefore (Theorem 3.9) X_n →_{n→∞} X_∞ a.s. Let us study the convergence in L¹. Thanks to the decomposition X = X⁺ − X⁻, we can assume X ≥ 0. Let a > 0. The Lebesgue theorem gives ‖X − X∧a‖_1 → 0 as a → +∞. We just proved that E(X∧a | F_n) converges a.s.;
but, being bounded, E(X∧a | F_n) also converges in L¹. It suffices then to write

‖E(X | F_n) − E(X | F_m)‖_1 ≤ ‖E(X | F_n) − E(X∧a | F_n)‖_1 + ‖E(X∧a | F_n) − E(X∧a | F_m)‖_1 + ‖E(X∧a | F_m) − E(X | F_m)‖_1
≤ 2‖X − X∧a‖_1 + ‖E(X∧a | F_n) − E(X∧a | F_m)‖_1

in order to derive that (E^{F_n}(X))_{n≥0} is a Cauchy sequence in L¹ and thus that it converges in L¹. Let us identify its limit X_∞. Clearly X_∞ is F_∞-measurable. Moreover, by the definition of X_n,

∫_A X dP = ∫_A X_n dP  for every A ∈ F_n,

from which, since (X_n)_{n≥0} converges in L¹,

∫_A X_∞ dP = ∫_A X dP,  for every A ∈ F_n.   (3.5)

It is easy to see, by a monotone class argument (•2.4), that (3.5) holds for every A ∈ F_∞ = σ(∪_{n≥0} F_n). This shows that X_∞ = E(X | F_∞). Summarizing,

Theorem 3.10 Assume X ∈ L¹. The martingale X_n = E(X | F_n) converges a.s. and in L¹ to E(X | F_∞).

•3.13 We just mention the following

Theorem 3.11 Let X ∈ L¹ and (G_n)_{n≥0} be a decreasing sequence of σ-algebras with G_∞ = ∩_{n≥0} G_n. The sequence X_n = E(X | G_n) then converges a.s. and in L¹ to E(X | G_∞).

The sequence (X_n)_{n≥0} of Theorem 3.11 is called an inverse martingale.

Regular Martingales

•3.14 A martingale of the form X_n = E(X | F_n), X ∈ L¹, is said to be regular. We have just proved that such a martingale converges a.s. and in L¹ to X_∞ = E(X | F_∞). Conversely, let (X_n)_{n≥0} be a martingale converging in L¹ to a r.v. X. Since |X_n| ≤ |X| + |X − X_n|, sup_{n≥0} E(|X_n|) < +∞. Thus (Theorem 3.9) (X_n)_{n≥0} converges a.s. to X. We then have, for every p ≥ 0 and for every A ∈ F_n,

∫_A X_{n+p} dP = ∫_A X_n dP

and, letting p go to +∞, since X_{n+p} converges in L¹,

∫_A X dP = ∫_A X_n dP,  for every A ∈ F_n,

or equivalently, X_n = E(X | F_n). In conclusion,

Theorem 3.12 A martingale is regular if and only if it converges in L¹.

•3.15 Let (X_n)_{n≥0} be a regular martingale having the r.v. X_∞ as its limit. Then it is closed, meaning that (X_n)_{n∈N∪{+∞}} is a martingale, i.e., it verifies E(X_n | F_m) = X_m for every m ≤ n ≤ +∞.

Exercises

Exercise 3.2 Let (X_n)_{n≥0} be a supermartingale (resp. a martingale) with respect to the filtration (F_n)_{n≥0} and ν a stopping time; set Z_n = X_{n∧ν} (n ≥ 0). Prove that (Z_n)_{n≥0} is a supermartingale (resp. a martingale) with respect to (F_n)_{n≥0}.
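The supermartingale/martingale property of a stopped process (•3.3 (v)) can be tested numerically on the walk of •3.6 stopped at ν = inf{n; S_n = 1}. The sketch below (Python, our illustration, not part of the book) checks E(S_{(n+1)∧ν} | F_n) = S_{n∧ν} exactly on every ±1 prefix:

```python
from itertools import product

def stopped_martingale_check(N):
    """Simple random walk S_n stopped at nu = first hitting time of 1.
    Check E(S_{(n+1) ^ nu} | F_n) = S_{n ^ nu} exactly on every prefix
    of length n < N (the two continuations are equally likely)."""
    for n in range(N):
        for signs in product((-1, 1), repeat=n):
            s, stopped = 0, False
            for z in signs:          # reconstruct the stopped value
                if not stopped:
                    s += z
                    if s == 1:
                        stopped = True
            if stopped:
                avg = s              # frozen after nu: both continuations give s
            else:
                avg = ((s + 1) + (s - 1)) // 2
            if avg != s:
                return False
    return True
```

The conditional average equals the current stopped value in both cases, which is exactly the computation with the indicator 1_{{ν≥n+1}} carried out in •3.3 (v).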
Exercise 3.3 Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space on which we consider two square integrable martingales (X_n)_{n≥0} and (Y_n)_{n≥0}.
a) Show that E(X_m Y_n | F_m) = X_m Y_m a.s. for every m ≤ n.

Exercise 3.5 Let (Y_n)_{n≥1} be a sequence of independent r.v.'s, all having law N(0, σ²), σ > 0. We set F_n = σ(Y_1, ..., Y_n) and X_n = Y_1 + ... + Y_n. Recall that

E(e^{uY_1}) = e^{u²σ²/2}.   (3.6)

1) Let Z_n^u = exp(uX_n − ½nu²σ²). Show that, for every u ∈ R, (Z_n^u)_{n≥1} is a (F_n)_{n≥1}-martingale.
2) Show that, for every u ∈ R, (Z_n^u)_{n≥1} converges a.s. to a finite r.v. Z_∞^u. Determine Z_∞^u. For what values of u ∈ R is (Z_n^u)_{n≥1} a regular martingale?

Exercise 3.6 A process (M_n)_{n≥0} is said to be with independent increments if, for every n, the r.v. M_{n+1} − M_n is independent of the σ-algebra F_n = σ(M_0, ..., M_n).
1) Let (M_n)_{n≥0} be a square integrable martingale with independent increments. We set σ_0² = Var(M_0) and, for n ≥ 1, σ_n² = Var(M_n − M_{n−1}).
1a) Show that Var(M_n) = Σ_{k=0}^n σ_k².
1b) Let (⟨M⟩_n)_{n≥0} be the increasing process associated to (M_n)_{n≥0} (see •3.8). Compute ⟨M⟩_n.
2) Let (M_n)_{n≥0} be a Gaussian martingale (we recall that a process (M_n)_{n≥0} is Gaussian if, for every n, the vector (M_0, ..., M_n) is Gaussian).
2a) Show that (M_n)_{n≥0} has independent increments.
2b) Show that, for every fixed θ ∈ R, the process Z_n^θ = exp(θM_n − ½θ² Var(M_n)) is a martingale. Does it converge a.s.?

Exercise 3.7 Let (X_n)_{n≥0} be a sequence of r.v.'s with values in [0, 1]. We set F_n = σ(X_0, ..., X_n). Assume that X_0 = a ∈ [0, 1] and that

P(X_{n+1} = X_n/2 | F_n) = 1 − X_n,  P(X_{n+1} = (1 + X_n)/2 | F_n) = X_n.

1) Show that (X_n)_{n≥0} is a martingale converging a.s. to a r.v. Z.
2) Show that E((X_{n+1} − X_n)²) = ¼ E(X_n(1 − X_n)).
3) Compute E(Z(1 − Z)). What is the law of Z?

Exercise 3.8 At time 1 an urn contains a white ball and a red ball. A ball is drawn and replaced with two balls having the same colour as the one that is drawn. This gives the new composition of the urn at time 2, and so on following the same procedure. We denote by Y_n and X_n = Y_n/(n + 1) the number and the proportion of white balls in the urn at time n, respectively. We set F_n = σ(Y_1, ..., Y_n).
1) Show that (X_n)_{n≥1} is a martingale that converges a.s. to a r.v.
U and that, for every k ≥ 1, lim_{n→∞} E(X_n^k) = E(U^k).
2) Let k ≥ 1 be fixed. We set, for n ≥ 1,

Z_n = Y_n(Y_n + 1) ··· (Y_n + k − 1) / ((n + 1)(n + 2) ··· (n + k)).

Show that (Z_n)_{n≥1} is a (F_n)_{n≥1}-martingale. What is its limit? Derive from this the value of E(U^k).
3) Let X be a real r.v. such that |X| ≤ M < +∞ a.s. Show that its characteristic function is given by the expansion in power series

φ(t) = Σ_{n=0}^∞ (it)^n E(X^n)/n!  for every t ∈ R.

4) What is the law of U?

Exercise 3.9 (A proof of the Kolmogorov 0-1 law by martingales) Let (Y_n)_{n≥1} be a sequence of independent r.v.'s. Let us define

F_n = σ(Y_1, ..., Y_n),  F_∞ = σ(∪_n F_n),  G^n = σ(Y_n, Y_{n+1}, ...),  G^∞ = ∩_n G^n.

1) Let A ∈ G^∞. Show that P(A) = 0 or P(A) = 1 (hint: Z_n = E^{F_n}(1_A) is a martingale ...).
2) Show that if X is a real G^∞-measurable r.v., then X is constant a.s.

Exercise 3.10 (A proof of the strong law of large numbers via inverse martingales) Let (Y_n)_{n≥1} be a sequence of real independent integrable r.v.'s, having the same law. We set S_0 = 0, S_n = Y_1 + ... + Y_n and G_n = σ(S_n, Y_{n+1}, Y_{n+2}, ...).
1) Show that, for 1 ≤ m ≤ n, E(Y_m | G_n) = S_n/n.
2) Show that (S_n/n)_{n≥1} converges a.s. and in L¹ to a r.v. X.
3) Show that X = E(Y_1) a.s.

Exercise 3.11 Let (Ω, F, (F_n)_{n≥0}, (M_n)_{n≥0}, P) be a square integrable martingale and denote by (⟨M⟩_n)_{n≥0} its associated increasing process (see •3.8). We set ⟨M⟩_∞ = lim_{n→∞} ↑ ⟨M⟩_n.
1) Let τ be a stopping time. Show that (⟨M⟩_{n∧τ})_{n≥0} is the increasing process associated to the martingale (M_n^τ)_{n≥0} (recall that M_n^τ = M_{n∧τ}).
2) Let a > 0. Show that τ_a = inf{n; ⟨M⟩_{n+1} > a} is a stopping time.
3) Show that the martingale (M_n^{τ_a})_{n≥0} converges a.s. and in L².
4) Show that, on {⟨M⟩_∞ < +∞}, (M_n)_{n≥0} converges a.s.

Exercise 3.12 Let (Ω, F, (F_n)_{n≥0}, (M_n)_{n≥0}, P) be a square integrable martingale. We note A_n = ⟨M⟩_n the associated increasing process (see •3.8) and A_∞ = lim_{n→∞} ↑ A_n. We have seen in Exercise 3.11 that such a martingale converges a.s. on the event {A_∞ < +∞}. In this exercise we are going to be more precise about
what happens on {A_∞ = +∞}.
1) We set

X_0 = 0,  X_n = Σ_{k=1}^n (M_k − M_{k−1})/(1 + A_k),  n ≥ 1.

Show that (X_n)_{n≥0} is a square integrable martingale whose associated increasing process is ⟨X⟩_n = Σ_{k=1}^n (A_k − A_{k−1})/(1 + A_k)². Deduce that ⟨X⟩_n ≤ 1 for every n and that (X_n)_{n≥0} converges a.s.
2a) Let (a_n)_{n≥0} be a sequence of strictly positive numbers such that lim_{n→∞} ↑ a_n = +∞ and (u_n)_{n≥0} be a sequence of real numbers converging to u. Show that

lim_{n→∞} (1/a_n) Σ_{k=1}^n (a_k − a_{k−1}) u_k = u.

2b) (Kronecker's lemma) Let (a_n)_{n≥1} be a sequence of strictly positive numbers such that lim_{n→∞} ↑ a_n = +∞ and (x_n)_{n≥1} be a sequence of real numbers. We set s_n = x_1 + ... + x_n. Show that if the series Σ_{n≥1} x_n/a_n converges, then

lim_{n→∞} s_n/a_n = 0.

3) Show that, on {A_∞ = +∞}, M_n/A_n →_{n→∞} 0 a.s.

Exercise 3.13 Let (Z_n)_{n≥1} be a sequence of independent r.v.'s such that P(Z_i = 1) = P(Z_i = −1) = ½ for i ≥ 1. We set S_0 = 0, S_n = Z_1 + ... + Z_n, F_0 = {∅, Ω} and F_n = σ(Z_1, ..., Z_n). Let a be a strictly positive integer and τ = inf{n ≥ 0; S_n = a} the first passage time in a.
a) Show that, for θ ∈ R,

X_n^θ = e^{θS_n}/(cosh θ)^n

is a (F_n)_{n≥0}-martingale. Show that, if θ ≥ 0, (X_{n∧τ}^θ)_{n≥0} is a bounded (F_n)_{n≥0}-martingale.
b1) Show that, for every θ > 0, (X_{n∧τ}^θ)_{n≥0} converges a.s. and in L² to the r.v.

X_∞^θ = e^{θa}(cosh θ)^{−τ} 1_{{τ<+∞}}.

b2) Show that P(τ < +∞) = 1 and that, for every θ > 0, E[(cosh θ)^{−τ}] = e^{−θa}.

Exercise 3.14 Assume, as in Exercise 3.13, that (Z_n)_{n≥1} is a sequence of independent r.v.'s such that P(Z_i = 1) = P(Z_i = −1) = ½. We set S_0 = 0, S_n = Z_1 + ... + Z_n, F_0 = {∅, Ω} and F_n = σ(Z_1, ..., Z_n). Let a be a positive integer and λ a real number such that 0 < λ < π/(2a), and let τ = inf{n ≥ 0; |S_n| = a} be the exit time from ]−a, a[.
a) Show that X_n = (cos λ)^{−n} cos(λS_n) is a (F_n)_{n≥0}-martingale.
b) Show that

1 = E(X_{n∧τ}) ≥ cos(λa) E((cos λ)^{−(n∧τ)}).

c) Deduce that E((cos λ)^{−τ}) ≤ (cos(λa))^{−1} and then that τ is a.s. finite and the martingale (X_{n∧τ})_{n≥0} is regular.
d) Compute E((cos λ)^{−τ}). Does τ belong to L^p, p ≥ 1?
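In Exercise 3.13 a), the martingale property of X_n^θ reduces to the identity ½(e^{θ(s+1)} + e^{θ(s−1)}) = e^{θs} cosh θ. A quick numerical check (Python, our illustration, not part of the book):

```python
import math

def one_step_ratio(theta, s):
    """E[e^{theta * S_{n+1}} / cosh(theta)^{n+1} | S_n = s] divided by
    e^{theta * s} / cosh(theta)^n; the ratio is 1 for every theta and s."""
    cond_exp = 0.5 * (math.exp(theta * (s + 1)) + math.exp(theta * (s - 1)))
    return cond_exp / (math.exp(theta * s) * math.cosh(theta))
```

The ratio equals 1 identically because the conditional expectation factors as e^{θs} · (e^θ + e^{−θ})/2, which is e^{θs} cosh θ by definition.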
Exercise 3.15 Let (S_n)_{n≥0} be a simple random walk on Z: S_0 = 0, S_n = U_1 + ... + U_n, where the r.v.'s U_i are independent and identically distributed with P(U_i = 1) = p, P(U_i = −1) = q = 1 − p, 0 < p < 1. Show that ((q/p)^{S_n})_{n≥0} is a martingale and deduce that, if q > p,

P(sup_{n≥0} S_n ≥ k) ≤ (p/q)^k,  E(sup_{n≥0} S_n) ≤ p/(q − p).

We shall see in Problem 3.5 that actually, if q > p, in the two previous inequalities the equal sign holds and that the r.v. sup_{n≥0} S_n has a geometric law with parameter 1 − p/q:

P(sup_{n≥0} S_n = k) = (1 − p/q)(p/q)^k.

Exercise 3.16 Let (X_n)_{n≥1} be a sequence of real independent r.v.'s, all having normal distribution N(m, σ²) with m < 0. Let us set S_0 = 0, S_n = X_1 + ... + X_n and F_n = σ(S_0, ..., S_n),

W = sup_{n≥0} S_n.

The aim of this exercise is to establish certain properties of the r.v. W.
1) Show that P(W < +∞) = 1.
2) Recall that, for every real λ, E(e^{λX_1}) = e^{λm + ½λ²σ²}. Compute E(e^{λS_{n+1}} | F_n).
3) Show that there exists a unique λ_0 > 0 such that (e^{λ_0 S_n})_{n≥0} is a martingale.
4) Show that, for every a > 1, P(e^{λ_0 W} > a) ≤ 1/a and that, for t > 0,

P(W > t) ≤ e^{−λ_0 t},   (3.10)

and deduce that, for every λ < λ_0, E(e^{λW}) < +∞. In particular W has moments of all orders.

Exercise 3.17 Let (Ω, F, (F_n)_{n≥0}, (X_n)_{n≥0}, P) be a submartingale bounded in L¹, i.e., such that sup_{n≥0} E|X_n| < +∞.
1) Show that, for a fixed n, the sequence (E^{F_n}(X_p⁺))_{p≥n} is increasing in p.
2) Let us set M_n = lim_{p→∞} ↑ E^{F_n}(X_p⁺). Show that (M_n)_{n≥0} is a positive integrable martingale.
3) Let us set Y_n = M_n − X_n. Show that (Y_n)_{n≥0} is a positive integrable supermartingale.

We come to the conclusion that every submartingale that is bounded in L¹ can be written as the difference of a martingale and of a supermartingale, both positive and integrable (Krickeberg's decomposition).
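In Exercise 3.16 3), the candidate exponent solves λm + ½λ²σ² = 0, i.e., λ_0 = −2m/σ² (positive since m < 0). The sketch below (Python, our illustration, not part of the book) checks that this choice makes E(e^{λ_0 X_1}) = 1, which is the martingale condition:

```python
import math

def lambda0(m, sigma):
    """Unique positive root of lambda*m + 0.5*lambda^2*sigma^2 = 0 (m < 0):
    the exponent making (exp(lambda0 * S_n)) a martingale."""
    return -2.0 * m / sigma ** 2

def normal_mgf(lam, m, sigma):
    """E[exp(lam * X)] for X ~ N(m, sigma^2)."""
    return math.exp(lam * m + 0.5 * lam * lam * sigma * sigma)
```

Any 0 < λ < λ_0 gives normal_mgf(λ, m, σ) < 1, so (e^{λS_n}) is then a supermartingale, which is what drives the tail bound (3.10).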
Let us conside: the sign function 1 ifs>0 sign(x) 0 ifx=0 1 ifx <0 and the process defined by My = Yo sien(Ser¥ay = 1,2, et Mo =0, 1) What is the compensator (see #3.4) of the submartingale ($2),20? 2) Show that (M,),20 is a square integrable martingale and compute its associated increasing process (63.8). 3) Compute the Doob decomposition of ({SnI)nz0- Show that M, is measurable with respect to the 7-algebra a(ISil,.. Syl) Exercise 3.19 Let p,q be two probabilities on a countable set E such that p # g and q(x) > 0 for every x € E, Let (Xn)n>1 be a sequence of independent r.v,'s with values in £ and having the same law g, Show that the sequence ry Pex) Be TKR) is a positive martingale whose limit is O ass. Is it regular? ( V¥n) Exercise 3.20 Let (¥q)y>1 be a sequence of positive independent r.v'. tation equal to 1 and F = {0,2}, Fy = o(Yk 0 and deduce that (/Xn)n>0 is a supermartingale, 2) Assume that []72, E(/Ys) = 0. Investigate the convergence and the limit of (VXn)nz0. then of (Xq)x20. Is the martingale (X,)y>0 regular? 3) Assume that []22, E(/¥i) > 0, Show that (/X,,),.20 is a Cauchy sequence in 41? and deduce that (X,),20 is a regular martingale. Exercise 3.21 The aim of this exercise is to establish a version of the Borel~Cantelll lemma for a family of not necessarily independent r.v.'s, which is often useful, We recull that, for a martingale (M,)irx0, the three relationships supE(|Myl) <+00, supE(M,}) < +00, sup E(M,) < +00 30 30 no are equivalent (03,10), Let (9, ¥, (Fn),20.P) be a filtered probability space and (n)nzi @ Sequence of positive, integrable, adapted r.v.’s (not necessarily indepen- dent). 36 CHAPTER 3: MARTINGALES a) Let us set Xo = 0, Xq = Yi -+...+ ¥n, Show that (X»: and determine its compensator (Ayn 1b) Show that, for every @ > 0, te Ic) Let us set Zy = Xn (Znara)n20 CONVETBES AF, Id) Show that {littns00 t An < +00} € (limy+oo t Xn < +00) 4S. ( happens on {tq = +00]?). 
2) Assume, furthermore, that sup_{n≥1} Y_n ∈ L¹, and set σ_a = inf{n; X_n > a}. Show that, for every n, Z_{n∧σ_a} ≤ a + sup_{k≥1} Y_k. Deduce that {lim_{n→∞} ↑ X_n < +∞} = {lim_{n→∞} ↑ A_n < +∞} a.s. (hint: introduce the stopping time σ_a and consider Z_{n∧σ_a}).
3) Let (B_n)_{n≥1} be a sequence of adapted events. Show that

{Σ_{n=1}^∞ P(B_n | F_{n−1}) < +∞} = {Σ_{n=1}^∞ 1_{B_n} < +∞}  a.s.

Exercise 3.22 Let (X_n)_{n≥1} be a sequence of real, square integrable, adapted r.v.'s, defined on (Ω, F, (F_n)_{n≥0}, P), and (σ_n²)_{n≥1} a sequence of positive numbers. Assume that, a.s., for every n ≥ 1,

E^{F_{n−1}}(X_n) = 0,  E^{F_{n−1}}(X_n²) = σ_n².

We set S_0 = 0, A_0 = 0 and, for n ≥ 1, S_n = X_1 + ... + X_n, A_n = σ_1² + ... + σ_n², V_n = S_n² − A_n.
1) Show that (S_n)_{n≥0} and (V_n)_{n≥0} are martingales.
2) Show that, if Σ_{n=1}^∞ σ_n² < +∞, (S_n)_{n≥0} converges a.s. and in L².
3) Assume that (S_n)_{n≥0} converges a.s. and that there exists a constant M such that, for every n ≥ 1, |X_n| ≤ M a.s. For a > 0, we set τ_a = inf{n ≥ 0; |S_n| > a}.
3a) Show that, for every n, E(S²_{n∧τ_a}) = E(A_{n∧τ_a}).
3b) Show that there exists a > 0 such that P(τ_a = +∞) > 0.
3c) Deduce that Σ_{n=1}^∞ σ_n² < +∞ (observe that |S_{n∧τ_a}| ≤ a + M).

Let (X_n)_{n≥1} be a sequence of real centered, independent, square integrable r.v.'s. It is clear that such a sequence satisfies the hypotheses of this exercise. Thus we have just met some classical results:
(i) if Σ_{k=1}^∞ Var(X_k) < +∞, then Σ_{k=1}^∞ X_k converges a.s.;
(ii) if the r.v.'s X_k are uniformly bounded, then Σ_{k=1}^∞ X_k converges a.s. if and only if Σ_{k=1}^∞ Var(X_k) < +∞.

Exercise 3.23 Let (Ω, F, (F_n)_{n≥0}, (M_n)_{n≥0}, P) be a square integrable martingale and denote by A_n = ⟨M⟩_n the associated increasing process (see •3.8). For a > 0 we set ν_a = inf{n ≥ 0; A_{n+1} > a²}.
1) Show that ν_a is a stopping time.
2) Show that

P(sup_{n≥0} |M_{n∧ν_a}| > a) ≤ (1/a²) E(A_∞ ∧ a²).

3) Show that

P(sup_{n≥0} |M_n| > a) ≤ P(A_∞ > a²) + P(sup_{n≥0} |M_{n∧ν_a}| > a).   (3.11)

4) Let X be a positive r.v. Show, using Fubini's theorem, the two relations

∫_0^A P(X > λ) dλ = E(X ∧ A)  for every A ∈ [0, +∞],
∫_0^{+∞} (1/a²) E(X ∧ a²) da = 2E(√X).

5) Show that E(sup_{n≥0} |M_n|) ≤ 3E(√A_∞) (hint: integrate (3.11) with respect to a from 0 to +∞ ...).
6) Let (Y_n)_{n≥1} be a sequence of centered, independent,
square integrable and identically distributed r.v.'s. We set S_0 = 0, F_0 = {∅, Ω} and, for n ≥ 1, S_n = Y_1 + ... + Y_n, F_n = σ(Y_1, ..., Y_n). Show that if τ is a stopping time such that E(√τ) < +∞, then E(S_τ) = 0.

It is interesting to compare the result of question 6) with part A4) of Exercise 3.25 for m = 0. One can derive, furthermore, that the passage time τ of part A4) of Exercise 3.25 is such that E(√τ) = +∞.

Exercise 3.24 On a probability space (Ω, F, P) let us consider the r.v.'s X_n^i, n = 1, 2, ..., i = 1, 2, independent and Bernoulli B(1, ½). We set S_n^i = Σ_{k=1}^n X_k^i, ν_i = inf{n; S_n^i = a}, where a is an integer ≥ 1, and set ν = ν_1 ∧ ν_2.
1) Show that P(ν_i < +∞) = 1, i = 1, 2.
2) We set, for i, j = 1, 2 and every n ≥ 0,

M_n^i = 2S_n^i − n,  M_n^{ij} = (2S_n^i − n)(2S_n^j − n) − δ_{ij} n,

where δ_{ij} = 1 if i = j and = 0 otherwise. Show that (M_n^i)_n and (M_n^{ij})_n are martingales with respect to the filtration F_n = σ(X_k^i; k ≤ n, i = 1, 2).
3) Show that E(ν) ≤ 2a.
4) Show that E(M_ν^1 M_ν^2) = 0.
5) Show that E(|S_ν^1 − S_ν^2|) ≤ √a (hint: consider the martingale M_n^{11} − 2M_n^{12} + M_n^{22}).

Exercise 3.25 (Wald identities) Let (Y_n)_{n≥1} be a sequence of independent, integrable, identically distributed real r.v.'s. We set m = E(Y_1), S_0 = 0 and, for n ≥ 1, S_n = Y_1 + ... + Y_n, F_n = σ(Y_1, ..., Y_n). Let ν be an integrable stopping time.
A1) We set X_n = S_n − nm. Show that (X_n)_{n≥0} is a martingale.
A2) Show that, for every n, E(S_{n∧ν}) = mE(n ∧ ν).
A3) Show that S_ν is integrable and that E(S_ν) = mE(ν) (hint: consider first the case Y_n ≥ 0).
A4) Assume that, for every n, P(Y_n = −1) = P(Y_n = 1) = ½ and τ = inf{n; S_n = a}, where a ≥ 1 is an integer. In Exercise 3.13 we showed that τ < +∞ a.s. Show that τ is not integrable.
B) Assume furthermore that E(Y_1²) < +∞ and set σ² = Var(Y_1). Assume first that m = 0 and set Z_n = S_n² − nσ².
B1) Show that (Z_n)_{n≥0} is a (F_n)_{n≥0}-martingale.
