Stochastic Models I (for first-year doctoral students)

Attribution Non-Commercial (BY-NC)


Problem 4.40 Consider a segment of a sample path beginning and ending in state $i$, with no visit to $i$ in between, i.e., the vector $(i, j_1, j_2, \ldots, j_{n-1}, j_n = i)$, where $j_k \neq i$ for the non-end states $j_k$. Going forward in time, the probability of this segment is
$$\pi_i P_{i,j_1} P_{j_1,j_2} P_{j_2,j_3} \cdots P_{j_{n-1},i}.$$
The probability, say $p$, of the reversed sequence $(i, j_{n-1}, j_{n-2}, \ldots, j_1, j_0 = i)$ under the reversed DTMC, which has transition matrix
$$P^*_{i,j} = \frac{\pi_j P_{j,i}}{\pi_i},$$
is
$$p = \pi_i P^*_{i,j_{n-1}} P^*_{j_{n-1},j_{n-2}} \cdots P^*_{j_1,i}.$$
However, successively substituting in the reversed-chain transition probabilities, we get
$$\begin{aligned}
p &= \pi_i \frac{\pi_{j_{n-1}} P_{j_{n-1},i}}{\pi_i} P^*_{j_{n-1},j_{n-2}} \cdots P^*_{j_1,i} \\
&= P_{j_{n-1},i} \, \pi_{j_{n-1}} P^*_{j_{n-1},j_{n-2}} \cdots P^*_{j_1,i} \\
&= P_{j_{n-1},i} \, \pi_{j_{n-1}} \frac{\pi_{j_{n-2}} P_{j_{n-2},j_{n-1}}}{\pi_{j_{n-1}}} P^*_{j_{n-2},j_{n-3}} \cdots P^*_{j_1,i} \\
&\;\;\vdots \\
&= P_{j_{n-1},i} P_{j_{n-2},j_{n-1}} \cdots P_{j_1,j_2} \, \pi_{j_1} P^*_{j_1,i} \\
&= P_{j_{n-1},i} P_{j_{n-2},j_{n-1}} \cdots P_{j_1,j_2} P_{i,j_1} \, \pi_i \\
&= \pi_i P_{i,j_1} P_{j_1,j_2} P_{j_2,j_3} \cdots P_{j_{n-1},i},
\end{aligned}$$
which is the forward probability, as claimed.

Problem 4.41 (a) The reversed chain has transition matrix
$$P^*_{i,j} = \frac{\pi_j P_{j,i}}{\pi_i}.$$
To find it, we need to first find the stationary vector $\pi$. By symmetry (or by noting that the chain is doubly stochastic), $\pi_j = 1/n$, $j = 1, \ldots, n$. Hence,
$$P^*_{ij} = \pi_j P_{ji}/\pi_i = P_{ji} =
\begin{cases}
p & \text{if } j = i - 1, \\
1 - p & \text{if } j = i + 1.
\end{cases}$$

(b) In general, the DTMC is not time reversible; it is time reversible in the special case $p = 1/2$. Otherwise, the reversed chain interchanges the probabilities of clockwise and counterclockwise motion.
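As a numerical sanity check (not part of the original solution), the following sketch builds the reversed chain $P^*$ for the cyclic walk of Problem 4.41 and verifies the cycle-probability identity of Problem 4.40; the values $n = 5$ and $p = 0.7$ and the sample loop are made-up illustrations.

```python
import numpy as np

# Sketch: reversed chain P*_{ij} = pi_j P_{ji} / pi_i for the random walk on a
# cycle of n states (P_{i,i+1} = p, P_{i,i-1} = 1 - p, indices mod n).
# n and p are arbitrary illustrative values.
n, p = 5, 0.7
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = p
    P[i, (i - 1) % n] = 1 - p

pi = np.full(n, 1.0 / n)                   # doubly stochastic => uniform pi
Pstar = (pi[:, None] * P).T / pi[:, None]  # entry (i, j) is pi_j P_{ji} / pi_i

# Problem 4.41(a): the reversed chain swaps clockwise and counterclockwise.
assert np.isclose(Pstar[0, n - 1], p) and np.isclose(Pstar[0, 1], 1 - p)

# Problem 4.40: a loop from 0 back to 0 (avoiding 0 in between) has the same
# probability under P going forward as its reversal has under P*.
path = [0, 1, 2, 1, 2, 3, 4, 0]
fwd = pi[0] * np.prod([P[a, b] for a, b in zip(path, path[1:])])
rev_path = path[::-1]
rev = pi[0] * np.prod([Pstar[a, b] for a, b in zip(rev_path, rev_path[1:])])
assert np.isclose(fwd, rev)
print(fwd, rev)
```

With the uniform stationary vector, $P^*$ is simply the transpose of $P$, which makes the swap of $p$ and $1-p$ visible directly.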

Problem 4.42 Imagine that there is an edge between each pair of nodes $i$ and $i+1$, $i = 0, \ldots, n-1$, and let the weight on edge $(i, i+1)$ be $w_i$, where $w_0 = 1$ and
$$w_i = \prod_{j=1}^{i} \frac{p_j}{q_j}, \qquad 1 \le i \le n-1,$$
where $q_j = 1 - p_j$. As a check, note that with these weights
$$P_{i,i+1} = \frac{w_i}{w_{i-1} + w_i} = \frac{p_i/q_i}{1 + p_i/q_i} = p_i, \qquad 0 < i < n.$$
Since the sum of the weights on the edges out of node $i$ is $w_{i-1} + w_i$, $i = 1, \ldots, n-1$, it follows that
$$\pi_i = c \left( \prod_{j=1}^{i-1} \frac{p_j}{q_j} + \prod_{j=1}^{i} \frac{p_j}{q_j} \right), \qquad 0 < i < n,$$
$$\pi_0 = c, \qquad \pi_n = c \prod_{j=1}^{n-1} \frac{p_j}{q_j},$$
where the constant $c$ is determined by the normalization $\sum_{j=0}^{n} \pi_j = 1$.
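The weighted-graph construction can be checked numerically. The sketch below uses made-up values ($n = 5$, randomly drawn interior $p_i$, reflection at the boundaries $0$ and $n$) and verifies both that the walk on the weighted graph reproduces the given $p_i$ and that the node weights $w_{i-1} + w_i$ give a stationary vector.

```python
import numpy as np

# Sketch of Problem 4.42 with illustrative numbers: random walk on {0,...,n}
# with up-probabilities p_i, realized as a random walk on a weighted graph.
n = 5
rng = np.random.default_rng(0)
p = np.zeros(n + 1)
p[1:n] = rng.uniform(0.2, 0.8, size=n - 1)  # interior up-probabilities p_1..p_{n-1}
q = 1 - p

# Edge weights: w[i] on edge (i, i+1), with w_0 = 1 and w_i = w_{i-1} p_i / q_i.
w = np.ones(n)
for i in range(1, n):
    w[i] = w[i - 1] * p[i] / q[i]

# Transition matrix of the weighted-graph walk (reflecting at 0 and n).
P = np.zeros((n + 1, n + 1))
P[0, 1], P[n, n - 1] = 1.0, 1.0
for i in range(1, n):
    P[i, i + 1] = w[i] / (w[i - 1] + w[i])
    P[i, i - 1] = w[i - 1] / (w[i - 1] + w[i])

# The interior transition probabilities reproduce the given p_i.
assert np.allclose([P[i, i + 1] for i in range(1, n)], p[1:n])

# Stationary vector proportional to the total edge weight at each node.
deg = np.empty(n + 1)
deg[0], deg[n] = w[0], w[n - 1]
deg[1:n] = w[:-1] + w[1:]
pi = deg / deg.sum()
assert np.allclose(pi @ P, pi)
print(pi)
```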

Problem 4.46 (a) Yes, it is a Markov chain. It suffices to construct the transition matrix and verify that the process has the Markov property. Let $\tilde{P}$ be the new transition matrix. Then we have, for $0 \le i \le N$ and $0 \le j \le N$,
$$\tilde{P}_{i,j} = P_{i,j} + \sum_{k=N+1}^{\infty} P_{i,k} B^{(N)}_{k,j},$$
where $B^{(N)}_{k,j}$ is the probability of absorption into the absorbing state $j$ in the absorbing Markov chain in which the states $N+1, N+2, \ldots$ are transient, while the states $0, 1, \ldots, N$ are absorbing. In other words, $B^{(N)}_{k,j}$ is the probability that the next state with index in the set $\{0, 1, \ldots, N\}$ visited by the Markov chain, starting from $k > N$, is in fact $j$. It is easy to see that the Markov property is still present.

(b) The proportion of time spent in $j$ is $\pi_j / \sum_{i=0}^{N} \pi_i$.

(c) Let $\pi_i(N)$ be the steady-state probabilities for the new chain $Y$, which records the original chain $X$ only on its visits to the subset $\{0, 1, \ldots, N\}$. (This chain is necessarily positive recurrent.) By renewal theory,
$$\pi_i(N) = \bigl( E[\text{number of $Y$-transitions between successive $Y$-visits to } i] \bigr)^{-1}$$
and
$$\pi_j(N) = \frac{E[\text{number of $Y$-transitions into $j$ between successive $Y$-visits to } i]}{E[\text{number of $Y$-transitions between successive $Y$-visits to } i]} = \frac{E[\text{number of $X$-transitions into $j$ between successive $X$-visits to } i]}{1/\pi_i(N)}.$$
Since the expected number of $X$-transitions into $j$ between successive $X$-visits to $i$ is $\pi_j/\pi_i$, this gives $\pi_j(N) = \pi_i(N)\,\pi_j/\pi_i$, and hence $\pi_j(N) = \pi_j / \sum_{i=0}^{N} \pi_i$, consistent with (b).
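Parts (a)–(c) can be checked numerically on a finite example. The sketch below uses made-up numbers (a birth-death chain on $\{0,\ldots,5\}$ watched on $\{0,\ldots,N\}$ with $N = 3$): it builds $\tilde P$ from the absorption probabilities and confirms that $\pi$ restricted to the subset and renormalized is stationary for $\tilde P$.

```python
import numpy as np

# Sketch of Problem 4.46 on a toy chain (all numbers illustrative): watch a
# birth-death chain on {0,...,5} only when it is in {0,...,N}.
m, N = 6, 3
p_up = 0.4
P = np.zeros((m, m))
P[0, 1], P[m - 1, m - 2] = 1.0, 1.0
for i in range(1, m - 1):
    P[i, i + 1], P[i, i - 1] = p_up, 1 - p_up

# Stationary vector via detailed balance (birth-death chains are reversible).
pi = np.ones(m)
for i in range(m - 1):
    pi[i + 1] = pi[i] * P[i, i + 1] / P[i + 1, i]
pi /= pi.sum()
assert np.allclose(pi @ P, pi)

# Absorption probabilities B_{kj}: states > N transient, states <= N absorbing.
D = P[N + 1:, N + 1:]                        # transient -> transient
E = P[N + 1:, :N + 1]                        # transient -> absorbing
B = np.linalg.solve(np.eye(m - N - 1) - D, E)

# Watched chain: Ptilde_{ij} = P_{ij} + sum_{k>N} P_{ik} B_{kj}.
Ptilde = P[:N + 1, :N + 1] + P[:N + 1, N + 1:] @ B
assert np.allclose(Ptilde.sum(axis=1), 1.0)  # (a): a proper transition matrix

# (b)-(c): pi restricted to {0,...,N}, renormalized, is stationary for Ptilde.
piN = pi[:N + 1] / pi[:N + 1].sum()
assert np.allclose(piN @ Ptilde, piN)
print(piN)
```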

(d) For the symmetric random walk, the new Markov chain is doubly stochastic, so $\pi_i(N) = 1/(N+1)$ for all $i$. By part (c), we have the conclusion.

(e) It suffices to show that
$$\pi_i(N) \tilde{P}_{i,j} = \pi_j(N) \tilde{P}_{j,i}.$$
Expanding,
$$\pi_i(N) \tilde{P}_{i,j} = \pi_i(N) P_{i,j} + \pi_i(N) \sum_{k=N+1}^{\infty} P_{i,k} B^{(N)}_{k,j}$$
and
$$\pi_j(N) \tilde{P}_{j,i} = \pi_j(N) P_{j,i} + \pi_j(N) \sum_{k=N+1}^{\infty} P_{j,k} B^{(N)}_{k,i}.$$
The corresponding terms on the right are equal in these two displays. First, since $\pi_i(N) \propto \pi_i$ by part (c), the reversibility of the original chain gives $\pi_i(N) P_{i,j} = \pi_j(N) P_{j,i}$. Second, by Theorem 4.7.2, we have
$$\pi_j(N) \sum_{k=N+1}^{\infty} P_{j,k} B^{(N)}_{k,i} = \pi_i(N) \sum_{k=N+1}^{\infty} P_{i,k} B^{(N)}_{k,j}.$$
We see this by expanding each side into the probabilities of the individual paths through the states above $N$, and noting that each such path on one side has a reversed path of equal probability on the other.

Problem 4.47 Intuitively, in steady state each ball is equally likely to be in any of the urns and the positions of the balls are independent. Hence it seems intuitive that
$$\pi(n) = \frac{M!}{n_1! \cdots n_m!} \left( \frac{1}{m} \right)^M.$$

To check the above and simultaneously establish time reversibility, let
$$n' = (n_1, \ldots, n_{i-1}, n_i - 1, n_{i+1}, \ldots, n_{j-1}, n_j + 1, n_{j+1}, \ldots, n_m).$$
Since a transition moves a uniformly chosen ball to a uniformly chosen one of the other $m-1$ urns, the detailed-balance condition $\pi(n) P(n, n') = \pi(n') P(n', n)$ reads
$$\frac{M!}{n_1! \cdots n_m!} \left( \frac{1}{m} \right)^M \frac{n_i}{M} \cdot \frac{1}{m-1}
= \frac{M!}{n_1! \cdots (n_i - 1)! \cdots (n_j + 1)! \cdots n_m!} \left( \frac{1}{m} \right)^M \frac{n_j + 1}{M} \cdot \frac{1}{m-1},$$
which holds since $n_i!/(n_i - 1)! = n_i$ and $(n_j + 1)!/n_j! = n_j + 1$. Hence the chain is time reversible with the claimed stationary distribution.
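The detailed-balance computation can be verified exhaustively for a small urn model. The sketch below uses made-up sizes ($M = 4$ balls, $m = 3$ urns) and the transition rule above: a uniformly chosen ball is moved to one of the other $m-1$ urns, chosen uniformly.

```python
from itertools import product
from math import factorial

# Sketch of Problem 4.47 with small illustrative sizes.
M, m = 4, 3

def pi(n):
    """Claimed stationary probability of the occupancy vector n."""
    c = factorial(M)
    for k in n:
        c //= factorial(k)
    return c * (1.0 / m) ** M

def trans(n, i, j):
    """P(n -> n'), where n' moves one ball from urn i to urn j (i != j)."""
    return (n[i] / M) * (1.0 / (m - 1))

# Enumerate all occupancy vectors and check pi(n) P(n,n') = pi(n') P(n',n).
states = [n for n in product(range(M + 1), repeat=m) if sum(n) == M]
checked = 0
for n in states:
    for i in range(m):
        for j in range(m):
            if i == j or n[i] == 0:
                continue
            n2 = list(n)
            n2[i] -= 1
            n2[j] += 1
            n2 = tuple(n2)
            assert abs(pi(n) * trans(n, i, j) - pi(n2) * trans(n2, j, i)) < 1e-12
            checked += 1
print("detailed balance verified for", checked, "transitions")
```

The multinomial probabilities also sum to 1 over all occupancy vectors, confirming that $\pi$ is a proper distribution.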

Problem 4.48 (a) Each transition into $i$ begins a new cycle. A reward of 1 is earned in a cycle if the next state visited from $i$ is $j$. Hence the average reward per unit time, which is the rate of transitions from $i$ to $j$, is $P_{ij}/\mu_{ii}$.

(b) Follows from (a), since $1/\mu_{jj}$ is the rate at which transitions into $j$ occur.

(c) Suppose a reward is earned at rate 1 per unit time when the process is in $i$ and heading for $j$, with a new cycle beginning whenever the process enters $i$. Hence the average reward per unit time, which is the long-run fraction of time in $i$ heading for $j$, is $P_{ij} \mu_{ij}/\mu_{ii}$, where $\mu_{ij}$ is the mean transition time from $i$ to $j$.

(d) Consider (c), but now only give a reward at rate 1 per unit time when, in addition, the remaining transition time from $i$ to $j$ is within $x$ time units. The average reward per unit time is
$$\frac{E[\text{Reward per cycle}]}{E[\text{Time of cycle}]} = \lim_{t \to \infty} P(S(t) = j, X(t) = i, Y(t) \le x) = \frac{P_{ij} \int_0^x \bar{F}_{ij}(y)\,dy}{\mu_{ii}}$$
by Theorem 4.8.4, where $X_{ij} \sim F_{ij}$, $\bar{F}_{ij} = 1 - F_{ij}$, $Y(t)$ is the time from $t$ until the next transition, and the numerator uses $E[\min(X_{ij}, x)] = \int_0^x \bar{F}_{ij}(y)\,dy$. Dividing by $\lim_{t} P(X(t) = i) = \mu_i/\mu_{ii}$ gives the corresponding conditional probability given $X(t) = i$; letting $x \to \infty$ there yields $\lim_t P(S(t) = j \mid X(t) = i) = P_{ij}\mu_{ij}/\mu_i$.

Problem 4.49 (b)
$$P(\text{heading for } 2) = P_1 \cdot \frac{P_{12} t_{12}}{\mu_1} = \frac{15}{38} \cdot \frac{10}{25} = \frac{3}{19}.$$
(c) The fraction of time spent going from 2 to 3 is
$$P_2 \cdot \frac{P_{23} t_{23}}{\mu_2} = \frac{8}{38} \cdot \frac{60}{80} = \frac{3}{19}.$$
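The renewal-reward formula of Problem 4.48(c) can be checked by simulation. The sketch below uses a made-up three-state semi-Markov process (the matrices `P` and `t`, the exponential sojourn times, and the seed are all illustrative choices, not from the problems above) and compares the simulated fraction of time in state $i$ heading for $j$ against $P_{ij}\mu_{ij}/\mu_{ii}$.

```python
import numpy as np

# Sketch: Monte Carlo check of Problem 4.48(c) on a made-up semi-Markov process.
rng = np.random.default_rng(42)
P = np.array([[0.0, 1.0, 0.0],
              [0.3, 0.0, 0.7],
              [1.0, 0.0, 0.0]])
t = np.array([[0.0, 2.0, 0.0],   # t[i, j] = mu_ij, mean transition time i -> j
              [1.0, 0.0, 4.0],
              [3.0, 0.0, 0.0]])

# Stationary vector nu of the embedded chain (nu P = nu, sum nu = 1).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
nu = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

mu = (P * t).sum(axis=1)      # mu_i: mean sojourn time in state i
mu_ii = (nu @ mu) / nu        # mu_ii: mean time between successive visits to i

i, j = 1, 2
predicted = P[i, j] * t[i, j] / mu_ii[i]   # Problem 4.48(c)

# Simulate with exponential transition times of the given means.
state, total, time_ij = 0, 0.0, 0.0
for _ in range(200_000):
    nxt = rng.choice(3, p=P[state])
    dur = rng.exponential(t[state, nxt])
    total += dur
    if state == i and nxt == j:
        time_ij += dur
    state = nxt
print(predicted, time_ij / total)
assert abs(predicted - time_ij / total) < 0.02
```

The simulated fraction agrees with the formula up to Monte Carlo error; any sojourn-time distributions with the given means would give the same long-run fractions.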
