
Recall our transition probability matrix, where p_ij is the probability of a transition to state j, given that we are in state i.

Suppose that the following matrix gives the transition probabilities only for the transient states, for a Markov chain with absorbing states:

Note that the rows do not necessarily sum to 1, because transitions to the absorbing states are not shown.

Q = \begin{pmatrix} q_{1,1} & \cdots & q_{1,s-m} \\ \vdots & \ddots & \vdots \\ q_{s-m,1} & \cdots & q_{s-m,s-m} \end{pmatrix}

Expected Time in a Transient State

Let's define m_ij as the expected number of periods we will spend in transient state j, given that we start in transient state i.

Then we can define a matrix M of these values:

M = \begin{pmatrix} m_{1,1} & \cdots & m_{1,s-m} \\ \vdots & \ddots & \vdots \\ m_{s-m,1} & \cdots & m_{s-m,s-m} \end{pmatrix}


Then we have:

m_{ij} = \delta_{ij} + \sum_{k=1}^{s-m} q_{ik}\, m_{kj}

for all i and j, where the Kronecker delta δ_ij = 1 if i = j, and 0 otherwise.

The above is a set of (s − m) × (s − m) equations.

In matrix form, we can write this set of equations as:

M = I + QM


We would like to solve M = I + QM.

This is equivalent to (I − Q)M = I, or M = (I − Q)^{-1}.

Thus, (I − Q)^{-1} gives us the matrix of m_ij values: element (i, j) of this matrix tells us the expected number of periods we spend in transient state j, given that we start in transient state i.
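As a quick numerical sketch of this result, consider a hypothetical chain with two transient states (the Q entries below are made up purely for illustration):

```python
import numpy as np

# Hypothetical transition probabilities among two transient states.
# Rows need not sum to 1: the remaining mass goes to absorbing states.
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])

I = np.eye(Q.shape[0])
M = np.linalg.inv(I - Q)  # M = (I - Q)^{-1}

# M[i, j] = expected number of periods spent in transient state j,
# given that we start in transient state i.
print(M)
```

One can check that the computed M indeed satisfies M = I + QM.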

Transitions to Absorbing States

Recall our definition of the R matrix: the R matrix gives the transition probabilities from transient state i to absorbing state j.

Define b_ij as the probability that we end up in absorbing state j, given that we start in transient state i.

To avoid confusion, let the matrix Q consist of the q_ij values and the matrix R consist of the r_ij values.

In canonical form, the full transition matrix P can be partitioned into these blocks, with the first s − m rows and columns corresponding to the transient states and the last m to the absorbing states:

P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}

where 0 is an m × (s − m) matrix of zeros and I is the m × m identity (absorbing states never leave).
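To make the block structure concrete, here is a sketch that assembles P from hypothetical Q and R blocks (two transient and two absorbing states, values invented for illustration):

```python
import numpy as np

Q = np.array([[0.4, 0.3],   # transient -> transient
              [0.2, 0.5]])
R = np.array([[0.2, 0.1],   # transient -> absorbing
              [0.1, 0.2]])

m = R.shape[1]  # number of absorbing states

# Canonical form: P = [[Q, R], [0, I]]
P = np.block([[Q, R],
              [np.zeros((m, Q.shape[1])), np.eye(m)]])

# Unlike Q alone, every row of the full matrix P sums to 1.
print(P)
```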

Absorbing States

If we start in transient state i, three things can happen in one transition:

• We transition to absorbing state j (with probability r_ij)

• We go to some other absorbing state k ≠ j (with probability r_ik)

• We go to transient state k (with probability q_ik)

We can collect the b_ij values into a matrix B:

B = \begin{pmatrix} b_{1,1} & \cdots & b_{1,m} \\ \vdots & \ddots & \vdots \\ b_{s-m,1} & \cdots & b_{s-m,m} \end{pmatrix}


The probability that we end up in absorbing state j, given that we start in transient state i, b_ij, is:

b_ij = Prob{we go to absorbing state j in one transition, or (we go to some transient state k and we end up in absorbing state j given that we start in transient state k)}

These are mutually exclusive events, so the above can be written as:

b_ij = Prob{we go to absorbing state j in one transition} + Prob{we go to some transient state k and we end up in absorbing state j given that we start in transient state k}


Next, note that transitions to the s − m transient states are themselves mutually exclusive events, so this can be written as:

b_ij = r_ij + the sum over all transient states k of Prob{we go to transient state k and we end up in absorbing state j given that we start in transient state k}

(The probability of going directly to absorbing state j in one transition is r_ij, from the R matrix.)

Next, note that by the Markov property the events {transitioning from transient state i to transient state k} and {ending up in absorbing state j given that we start in transient state k} are independent.


This implies that we can rewrite this as:

b_ij = r_ij + the sum over all transient states k of Prob{we go to transient state k} × Prob{we end up in absorbing state j given that we start in transient state k}

The above can then be written as:

b_{ij} = r_{ij} + \sum_{k=1}^{s-m} q_{ik}\, b_{kj}, for all i = 1, …, s − m and j = 1, …, m

This is a set of (s − m) × m equations.


In matrix form, we can write this set of equations as:

B = R + QB

Rearranging, we get B − QB = R, or (I − Q)B = R.

Thus, B = (I − Q)^{-1} R.

So b_ij, the probability of ending up in absorbing state j given that we start in transient state i, is element (i, j) of (I − Q)^{-1} R.
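A numerical sketch of this result, using a hypothetical chain with two transient and two absorbing states (all entries invented for illustration):

```python
import numpy as np

Q = np.array([[0.4, 0.3],   # transient -> transient
              [0.2, 0.5]])
R = np.array([[0.2, 0.1],   # transient -> absorbing
              [0.1, 0.2]])

# Solve (I - Q) B = R, i.e. B = (I - Q)^{-1} R.
B = np.linalg.solve(np.eye(2) - Q, R)

# B[i, j] = probability of being absorbed in absorbing state j,
# given that we start in transient state i. Since absorption is
# certain, each row of B sums to 1.
print(B)
```

Using `np.linalg.solve` rather than explicitly forming the inverse is the standard numerically preferable choice, though both give the same B here.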

Interesting Note

Consider the following matrix:

N = I + Q + Q^2 + Q^3 + … + Q^n + …

This matrix has dimension (s − m) × (s − m). The series converges, so N exists, because Q^n → 0 as n → ∞: from any transient state, the chain is eventually absorbed.

Now consider the quantity N(I − Q):

N(I − Q) = N − NQ
= (I + Q + Q^2 + Q^3 + … + Q^n + …) − (Q + Q^2 + Q^3 + … + Q^n + …)
= I

Thus, because N(I − Q) = I, this means N = (I − Q)^{-1}.

Therefore, (I − Q) always has an inverse.
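The series argument can be checked numerically: the partial sums I + Q + Q^2 + … approach (I − Q)^{-1}. A sketch with a hypothetical Q (values invented for illustration):

```python
import numpy as np

Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])

# Accumulate the partial sums N_n = I + Q + Q^2 + ... + Q^n.
N = np.eye(2)
term = np.eye(2)
for _ in range(200):
    term = term @ Q        # term is now Q^(k+1)
    N = N + term

# After enough terms, N is numerically indistinguishable
# from (I - Q)^{-1}, since Q^n -> 0 for transient states.
print(N)
```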

