
Chapter Five

Markov Processes (Markov Analysis)


Markov chain
• Markov analysis, like decision analysis, is a probabilistic technique.
• It provides probabilistic information about a decision situation that
can aid the decision maker in making a decision.
• In other words, Markov analysis is not an optimization technique;
it is a descriptive technique that results in probabilistic
information.
• Markov analysis is specifically applicable to systems that exhibit
probabilistic movement from one state (or condition) to another,
over time.
• A Markov process is a stochastic process (random process) in which
the probability distribution of the current state is conditionally
independent of the path of past states, a characteristic called the
Markov property.
• A Markov chain is a discrete-time stochastic process with the Markov
property.

• A sequence of trials of an experiment is a Markov chain if
– the outcome of each experiment is one of a set of discrete states;
– the outcome of an experiment depends only on the present state, and not on any past states.
Summary of Markov properties
• Property 1: The transition probabilities for a given beginning state of the system sum to one.
• Property 2: The probabilities apply to all participants in the system.
• Property 3: The transition probabilities are constant over time.
• Property 4: The states are independent over time.


• The state of the system is where the system is at a point in time.
• We call the vector q = [q1, q2, …, qs] the initial probability distribution for the Markov chain.
• A transition matrix includes the transition probabilities for each state of nature.
• In most applications, the transition probabilities are displayed as an s × s transition probability matrix P.
The transition probability matrix P may be written as

        | p11  p12  …  p1s |
    P = | p21  p22  …  p2s |
        |  ⋮    ⋮        ⋮ |
        | ps1  ps2  …  pss |
• The steady-state probabilities are average probabilities that
the system will be in a certain state after a large number of
transition periods.
• This does not mean the system stays in one state.

• The system will continue to move from state to state in future time periods;
• however, the average probabilities of moving from state to state for all periods will remain constant in the long run.
• In a Markov process, after a number of periods have passed, the
probabilities will approach steady state.
• For each i (i = 1, 2, …, s), the transition probabilities out of state i sum to one:

    Σ_{j=1}^{s} p_ij = 1

• We also know that each entry in the P matrix must be nonnegative.
• Hence, all entries in the transition probability matrix are
nonnegative, and the entries in each row must sum to 1.
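These two conditions can be checked directly in code; a minimal sketch using a hypothetical 3-state matrix (the values are illustrative only):

```python
import numpy as np

# Hypothetical 3-state transition matrix, used only to illustrate
# the two conditions named above.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.0, 0.6]])

# Valid if every entry is nonnegative and each row sums to one.
is_valid = bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0))
print(is_valid)  # True
```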
Example 1: One of the greatest applications of Markov
analysis is the brand-switching problem, which analyzes the
probability of a customer changing brands of a product over
time.
Direct Algebraic Determination of Steady-State Probabilities

• Steady-state probabilities can be computed by developing a set of equations, using matrix operations, and solving them simultaneously.
• Under the steady-state condition, the probabilities for Petroco (P) and National (N) can be calculated as follows.
• Performing the matrix operations results in the following set of equations.
• Recall that the transition probabilities for a row in the transition matrix (i.e., the state probabilities) must sum to one.
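The simultaneous equations can also be solved numerically; a sketch, assuming the Petroco/National transition matrix implied by the branch probabilities used later in these slides (0.6/0.4 from Petroco, 0.2/0.8 from National):

```python
import numpy as np

# Transition matrix for the Petroco / National example. The values
# are inferred from the tree-branch probabilities later in the slides,
# not stated here directly.
P = np.array([[0.6, 0.4],   # from Petroco
              [0.2, 0.8]])  # from National

# Steady state: solve pi @ P = pi together with sum(pi) = 1.
# Stack the balance equations (P.T - I) pi = 0 with the
# normalization row of ones.
s = P.shape[0]
A = np.vstack([P.T - np.eye(s), np.ones(s)])
b = np.append(np.zeros(s), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 4))  # [0.3333 0.6667]
```

In the long run, about one-third of customers trade with Petroco and two-thirds with National, regardless of the starting distribution.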
Markov Analysis Information

Determine the probability of a customer's trading with Petroco in
month 3, given that the customer initially traded with Petroco in
month 1.
Markov probabilities using a decision tree
Solution
• We must add the two branch probabilities in the figure associated with Petroco: 0.36 + 0.08 = 0.44, the probability of a customer's trading with Petroco in month 3.
• Similarly, add the two branch probabilities in the figure associated with National: 0.24 + 0.32 = 0.56, the probability of a customer's trading with National in month 3.
• The same type of analysis can be performed under the condition that a customer initially purchased gasoline from National. Given that National is the starting state in month 1, the probability of a customer's purchasing gasoline from National in month 3 is 0.08 + 0.64 = 0.72, and the probability of a customer's trading with Petroco in month 3 is 0.12 + 0.16 = 0.28.
• Notice that for each starting state, Petroco and National, the probabilities of ending up in either state in month 3 sum to one.
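The tree computation above is equivalent to squaring the transition matrix; a sketch, assuming the matrix implied by the branch products (e.g., 0.36 = 0.6 × 0.6, 0.64 = 0.8 × 0.8), since the slides show the matrix itself only as a figure:

```python
import numpy as np

# Transition matrix inferred from the branch products above.
P = np.array([[0.6, 0.4],   # from Petroco
              [0.2, 0.8]])  # from National

P2 = P @ P  # month-1 -> month-3 (two-step) probabilities
print(np.round(P2, 2))
# Starting from Petroco: 0.44 Petroco, 0.56 National.
# Starting from National: 0.28 Petroco, 0.72 National.
```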
Example 2
Representation of a Markov Chain as a Diagram

Each directed edge A→B is associated with the positive transition
probability from A to B. The diagram corresponds to the transition
matrix:

        A      B      C      D
A     0.95     0     0.05    0
B     0.2     0.5     0     0.3
C      0      0.2     0     0.8
D      0       0      1      0
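One way to read such a diagram is as a sampler: from each state, pick the next state with the probability on the outgoing edge. A minimal simulation sketch using the edge probabilities above:

```python
import random

# Outgoing-edge probabilities read off the diagram above.
transitions = {
    "A": [("A", 0.95), ("C", 0.05)],
    "B": [("A", 0.20), ("B", 0.50), ("D", 0.30)],
    "C": [("B", 0.20), ("D", 0.80)],
    "D": [("C", 1.00)],
}

def step(state):
    """Sample the next state using the outgoing edge probabilities."""
    targets, weights = zip(*transitions[state])
    return random.choices(targets, weights=weights)[0]

random.seed(42)  # fixed seed so the walk is reproducible
path = ["A"]
for _ in range(10):
    path.append(step(path[-1]))
print(" -> ".join(path))
```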
Example 3
• Problem: Consider the Markov chain with three
states, S={1,2,3}, that has the following
transition matrix
Solution
a. The state transition diagram is shown in Figure

Figure: A state transition diagram.


Example 4
Coke vs. Pepsi

• Given that a person’s last cola purchase was Coke, there is a 90%
chance that his next cola purchase will also be Coke.
• If a person’s last cola purchase was Pepsi, there is an 80% chance
that his next cola purchase will also be Pepsi.
transition matrix:

          Coke   Pepsi
Coke       0.9    0.1
Pepsi      0.2    0.8
Markov Process
Coke vs. Pepsi Example (cont)
Given that a person is currently a Pepsi purchaser, what is the probability that he
will purchase Coke two purchases from now?
Pr[Pepsi → ? → Coke] =
Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke] =
0.2 × 0.9 + 0.8 × 0.2 = 0.34

P² = | 0.9  0.1 |² = | 0.83  0.17 |
     | 0.2  0.8 |    | 0.34  0.66 |
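The same two-step probability falls out of P² directly; a quick check:

```python
import numpy as np

P = np.array([[0.9, 0.1],   # from Coke
              [0.2, 0.8]])  # from Pepsi

P2 = np.linalg.matrix_power(P, 2)
# Pr[Pepsi -> ? -> Coke] is the (Pepsi, Coke) entry of P^2.
print(round(float(P2[1, 0]), 2))  # 0.34
```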
Coke vs. Pepsi Example (cont)

Given that a person is currently a Coke purchaser, what is the
probability that he will purchase Pepsi three purchases from now?

P³ = | 0.9  0.1 | · | 0.83  0.17 | = | 0.781  0.219 |
     | 0.2  0.8 |   | 0.34  0.66 |   | 0.438  0.562 |
• Assume each person makes one cola purchase per week.
• Suppose 60% of all people now drink Coke, and 40% drink Pepsi.
• What fraction of people will be drinking Coke three weeks from now?
0.9 0.1  0.781 0.219
P  P 
3

 0.2 0 .8  0.438 0. 562 

Pr[X3=Coke] = 0.6 * 0.781 + 0.4 * 0.438 = 0.6438

• Qi – the distribution in week i
• Q0 = (0.6, 0.4) – the initial distribution
• Q3 = Q0 · P³ = (0.6438, 0.3562)
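Propagating the initial distribution through three steps reproduces these numbers:

```python
import numpy as np

P = np.array([[0.9, 0.1],   # from Coke
              [0.2, 0.8]])  # from Pepsi
Q0 = np.array([0.6, 0.4])   # initial shares: 60% Coke, 40% Pepsi

Q3 = Q0 @ np.linalg.matrix_power(P, 3)
print(np.round(Q3, 4))  # [0.6438 0.3562]
```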
Example 5
• Table shows that if an individual is in state 1 (lower-income class)
then there is a probability of 0.65 that any offspring will be in the
lower-income class, a probability of 0.28 that offspring will be in
the middle-income class, and a probability of 0.07 that offspring
will be in the upper-income class

                 New generation
State              1      2      3
Current      1   0.65   0.28   0.07
generation   2   0.15   0.67   0.18
             3   0.12   0.36   0.52
• The symbol Pij will be used for the probability of transition from state i to state j in one generation.
• For example, P23 represents the probability that a person in state 2 will have offspring in state 3; from the table above, P23 = 0.18, P31 = 0.12, P22 = 0.67, and so on.
• The information from Table 1 can be written in other
forms.
• The next figure is a transition diagram that shows the
three states and the probabilities of going from one state
to another.
• Transition Diagram

Figure: a transition diagram showing the three states, with each arrow labeled by the corresponding transition probability from the table above (e.g., 0.65 from state 1 to itself, 0.28 from state 1 to state 2, 0.07 from state 1 to state 3).
A transition matrix has several features:
• It is square, since all possible states must be used both as rows and as columns.
• All entries are between 0 and 1.
• The sum of the entries in any row must be 1, since the numbers in the row give the probability of changing from the state at the left to one of the states indicated across the top.
Transition matrix product

• P·P = P²

                 New generation
State              1      2      3
Current      1   0.47   0.39   0.13
generation   2   0.22   0.56   0.22
             3   0.19   0.46   0.34
• The entry in row 3, column 2 of P2 gives the probability
that a person in state 3 will have a grandchild in state 2;
that is, that an upper-class person will have a middle-class
grandchild.
• This number, 0.46, is the same result (rounded to two decimal
places) found by using a tree diagram.
• Row 1, column 3 of P*P= P2 gives the number 0.13, the
probability that a person in state 1 will have a grandchild
in state 3; that is, that a lower-class person will have an
upper-class grandchild. How would the entry 0.47 be
interpreted?
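These grandchild probabilities can be reproduced by squaring the matrix from Table 1:

```python
import numpy as np

# Transition matrix from Table 1 (rows: current generation's state).
P = np.array([[0.65, 0.28, 0.07],
              [0.15, 0.67, 0.18],
              [0.12, 0.36, 0.52]])

P2 = P @ P  # two-generation (grandchild) transition probabilities
print(np.round(P2, 2))
# Row 3, column 2 (upper -> middle over two generations) is 0.46;
# row 1, column 3 (lower -> upper) is 0.13; row 1, column 1, 0.47,
# is the probability that a lower-class person has a lower-class
# grandchild.
```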
Thank you
