
by: Manuel Lladser

Fall 2005

**3.1 More on the Markov Property.**

Our definition of a Markov chain was (omitting the time-homogeneity part) as follows: $(X_n)_{n \ge 0}$ is a first-order Markov chain provided that

$$P(X_{n+1} = s_{n+1} \mid X_0 = s_0, \dots, X_n = s_n) = P(X_{n+1} = s_{n+1} \mid X_n = s_n),$$

for all $s_0, s_1, \dots, s_{n+1} \in S$. Does this imply, for example, that $P(X_{n+1} = s_{n+1} \mid X_0 = s_0, X_n = s_n) = P(X_{n+1} = s_{n+1} \mid X_n = s_n)$? Intuitively, the answer should be yes. Mathematically, the argument is to sum over the unobserved intermediate states $X_1, \dots, X_{n-1}$:

$$
\begin{aligned}
P(X_{n+1} = s_{n+1} \mid X_0 = s_0, X_n = s_n)
&= \frac{P(X_{n+1} = s_{n+1}, X_n = s_n, X_0 = s_0)}{P(X_n = s_n, X_0 = s_0)}\\[2pt]
&= \frac{\displaystyle\sum_{s_1, \dots, s_{n-1} \in S} P(X_{n+1} = s_{n+1}, X_n = s_n, X_{n-1} = s_{n-1}, \dots, X_1 = s_1, X_0 = s_0)}{P(X_n = s_n, X_0 = s_0)}\\[2pt]
&= \frac{P(X_{n+1} = s_{n+1} \mid X_n = s_n) \displaystyle\sum_{s_1, \dots, s_{n-1} \in S} P(X_n = s_n, X_{n-1} = s_{n-1}, \dots, X_1 = s_1, X_0 = s_0)}{P(X_n = s_n, X_0 = s_0)}\\[2pt]
&= \frac{P(X_{n+1} = s_{n+1} \mid X_n = s_n) \cdot P(X_n = s_n, X_0 = s_0)}{P(X_n = s_n, X_0 = s_0)}\\[2pt]
&= P(X_{n+1} = s_{n+1} \mid X_n = s_n).
\end{aligned}
$$

(The third equality uses the Markov property to factor each summand as $P(X_{n+1} = s_{n+1} \mid X_n = s_n) \cdot P(X_n = s_n, \dots, X_0 = s_0)$.)

The above identity is a special case of the following more general property (which recovers it by taking $A_0 := \{s_0\}$ and $A_1 = \dots = A_{n-1} = S$). Suppose that for each $i \in \{0, \dots, n-1\}$, $A_i$ is a non-empty subset of $S$. Then

$$P(X_{n+1} = s_{n+1} \mid X_0 \in A_0, \dots, X_{n-1} \in A_{n-1}, X_n = s_n) = P(X_{n+1} = s_{n+1} \mid X_n = s_n).$$

Exercise 3.1.1 – True or False? If for each $i \in \{0, \dots, n+1\}$, $A_i$ is a non-empty subset of $S$, then

$$P(X_{n+1} \in A_{n+1} \mid X_0 \in A_0, \dots, X_n \in A_n) = P(X_{n+1} \in A_{n+1} \mid X_n \in A_n).$$
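This identity can also be checked by simulation. Below is a minimal Monte Carlo sketch, not part of the original notes; the 3-state transition matrix, the horizon $n = 4$, and the chosen states are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.1, 0.5]])   # illustrative transition matrix on S = {0, 1, 2}

def run_chain(x0, steps):
    """Simulate one path X_0, ..., X_steps of the chain started at x0."""
    path = [x0]
    for _ in range(steps):
        path.append(rng.choice(3, p=p[path[-1]]))
    return path

n, s0, sn, sn1 = 4, 0, 1, 2
both = both_next = last = last_next = 0
for _ in range(100_000):
    path = run_chain(rng.integers(3), n + 1)    # random initial state
    if path[n] == sn:                           # condition on X_n = s_n
        last += 1
        last_next += (path[n + 1] == sn1)
        if path[0] == s0:                       # additionally condition on X_0 = s_0
            both += 1
            both_next += (path[n + 1] == sn1)

print("P(X_5=2 | X_0=0, X_4=1) ≈", both_next / both)
print("P(X_5=2 | X_4=1)        ≈", last_next / last)
print("p(1, 2) =", p[sn, sn1])
```

Both empirical estimates should agree with each other and with the one-step probability $p(1, 2)$, up to Monte Carlo error.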

**3.2 Markov chains with random initial states.**

Suppose that $(X_n)_{n \ge 0}$ is a first-order homogeneous Markov chain on a discrete state space $S$ and with probability transition matrix $p$. We learned in lecture that $P(X_n = j \mid X_0 = i)$ is the entry in row $i$ and column $j$ of $p^n$. What is the probability that $X_n = j$ when $X_0$ is itself a random variable? In this case things are not very different. Indeed, letting $\mu_n$ denote the distribution of $X_n$, i.e. $\mu_n(j) = P(X_n = j)$ for $j \in S$, it follows for $n \ge 1$ that

$$\mu_n(j) = P(X_n = j) = \sum_{i \in S} P(X_0 = i, X_n = j) = \sum_{i \in S} P(X_0 = i) \cdot P(X_n = j \mid X_0 = i) = \sum_{i \in S} \mu_0(i) \cdot p^n(i, j).$$

Thus, if we think of the $\mu_n$'s as row vectors, the above identity is equivalent to $\mu_n = \mu_0 \cdot p^n$.
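In code, this is a one-line matrix–vector computation. The sketch below is an assumed toy example, not part of the notes; the 2-state transition matrix and initial distribution are arbitrary.

```python
import numpy as np

p = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # illustrative transition matrix on S = {0, 1}
mu0 = np.array([0.3, 0.7])            # assumed distribution of X_0, as a row vector

n = 5
mu_n = mu0 @ np.linalg.matrix_power(p, n)   # mu_n = mu_0 * p^n
print(mu_n, mu_n.sum())                     # a probability vector that sums to 1
```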

**3.3 A non-stopping time example.**

Recall that a function $T : S^{\mathbb{N}} \to \mathbb{N} \cup \{\infty\}$ is said to be a stopping time if $T$ satisfies the following property for all finite $n \ge 0$: if $T(s_0, s_1, \dots) = n$ then $T(s_0', s_1', \dots) = n$ whenever $(s_0, \dots, s_n) = (s_0', \dots, s_n')$. In words, whether or not $T = n$ is determined by the path up to time $n$ alone. Given $x \in S$, we showed in class that $T_x(s_0, s_1, \dots) := \min\{n \ge 1 : s_n = x\}$ is a stopping time. For a Markov chain with state space $S$, $T_x$ can be interpreted as the time of the first visit to $x$ (after $n = 0$).
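As a small illustration (not from the notes) of why $T_x$ depends only on the path up to time $T_x$, the sketch below evaluates $T_x$ on finite path prefixes; the specific paths and the state labels are arbitrary assumptions.

```python
from typing import Sequence
import math

def T_x(path: Sequence[int], x: int) -> float:
    """First time n >= 1 at which the given (finite prefix of a) path visits x;
    math.inf if the prefix never visits x after time 0."""
    for n, s in enumerate(path):
        if n >= 1 and s == x:
            return n
    return math.inf

# Two prefixes that agree up to time 2 necessarily give the same value T_x = 2;
# what happens after time T_x is irrelevant, which is the stopping-time property.
print(T_x([0, 1, 3, 7, 7], x=3))   # 2
print(T_x([0, 1, 3, 0, 3], x=3))   # 2 as well
```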

As an example of a non-stopping time, consider the function $N_x(s_0, s_1, \dots) := \max\{1 \le n \le 8 : s_n = x\}$, with the understanding that the maximum of an empty set is infinite. For a Markov chain with state space $S$, $N_x$ can be interpreted as the last time the chain visits state $x$ between times $n = 1$ and $n = 8$ inclusive. $N_x$ is not a stopping time, however. To see why, consider $x, y \in S$ with $x \ne y$, and define $s := (y, x, y, y, y, y, y, \dots)$ and $s' := (y, x, y, x, y, y, y, \dots)$. Observe that $N_x(s) = 1$ and $(s_0, s_1) = (s_0', s_1')$. However, $N_x(s') = 3 \ne 1 = N_x(s)$. Thus $N_x$ cannot be a stopping time.

Exercise 3.3.1 – True or False? If $T$ and $U$ are both stopping times then $\min\{T, U\}$ is a stopping time.

**3.4 More on the strong Markov property.**

Suppose that $(U_n)_{n \ge 0}$ is a Markov chain defined in a state space $S$ and with a probability transition matrix $p$. Let $T_x$ be the first time that $(U_n)_{n \ge 0}$ visits state $x$. Observe that $U_{T_x} = x$ whenever $T_x < \infty$. Since $(U_n)_{n \ge 0}$ is a first-order homogeneous Markov process, it is intuitive that the probability that $U_{T_x + 1} = y$ is $p(x, y)$; after all, at time $T_x$ the chain was located at $x$. The strong Markov property states that this intuition is actually correct. To be more precise, this property states that

$$P(U_{T_x + 1} = y \mid T_x < \infty) = p(x, y).$$

Exercise 3.4.1 – True or False? Let $(X_n)_{n \ge 0}$ be a Markov chain defined in a certain state space $S$ and let $x, y \in S$ be distinct states. Let $T$ be the first time (after $n = 0$) that the chain visits state $x$ or $y$ (mathematically, this means that $T = \min\{T_x, T_y\}$). Then, for all $z \in S$,

$$P(X_{T+1} = z \mid T < \infty) = p(x, z) + p(y, z).$$

**3.5 Transience versus recurrence.**

Suppose that $(X_n)_{n \ge 0}$ is a first-order homogeneous Markov chain on a discrete state space $S$ and with probability transition matrix $p$. Let $N(x)$ be the total number of visits, over times $n \ge 1$, that the chain makes to state $x$. Observe that for all $y \in S$, $P_y(N(x) \in \{0, 1, 2, \dots\} \cup \{\infty\}) = 1$. In particular, $N(x)$ is a discrete random variable and

$$E_y(N(x)) = \sum_{k=0}^{\infty} k \cdot P(N(x) = k) + \infty \cdot P(N(x) = \infty),$$

with the understanding that $\infty \cdot 0 = 0$. We proved in lecture that $x$ is recurrent $\iff E_x N(x) = \infty$; equivalently, $x$ is transient $\iff E_x N(x) < \infty$. Therefore, for a transient state $x$, $P_x(N(x) = \infty) = 0$ (otherwise the term $\infty \cdot P_x(N(x) = \infty)$ would force $E_x N(x) = \infty$), i.e. $P_x(N(x) < \infty) = 1$. So Markov chains visit transient states only a finite number of times.

Using only that $E_x N(x) = \infty$ for a recurrent state $x$, we cannot conclude that $P_x(N(x) = \infty) = 1$ (why?). However, this assertion is in fact true: $P_x(N(x) = \infty) = 1$ for a recurrent state $x$. To see why, recall that by definition $x$ is recurrent if $\rho_{xx} = 1$. Then $P_x(T_x^k < \infty) = \rho_{xx}^k = 1$ for all $k \ge 0$ (here $T_x^k$ denotes the time of the $k$-th visit to $x$ after time 0), and therefore

$$P_x(N(x) < \infty) = P_x\Big(\bigcup_{k=0}^{\infty} [T_x^k = \infty]\Big) \le \sum_{k=0}^{\infty} P_x(T_x^k = \infty) = 0.$$
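The strong Markov identity above lends itself to a quick numerical check. The following is a minimal simulation sketch, not part of the original notes; the 3-state transition matrix, the starting state, the horizon, and the choice $x = 2$, $y = 0$ are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])   # illustrative transition matrix on S = {0, 1, 2}
x, y, horizon, trials = 2, 0, 50, 50_000

hits = after_is_y = 0
for _ in range(trials):
    state = 0                                # start the chain at state 0
    for _ in range(horizon):
        state = rng.choice(3, p=p[state])
        if state == x:                       # this step is T_x for this path
            nxt = rng.choice(3, p=p[x])      # one more step: U_{T_x + 1}
            hits += 1
            after_is_y += (nxt == y)
            break                            # only the first visit to x matters

print("estimate of P(U_{T_x+1} = y | T_x < inf):", after_is_y / hits)
print("p(x, y) =", p[x, y])
```

Up to Monte Carlo error, the conditional frequency of landing in $y$ immediately after the first visit to $x$ matches the one-step probability $p(x, y)$, which is exactly what the strong Markov property asserts.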
