
# Ch 04: Markov Processes

## ELEXM 621 Random Process and Queuing Theory

Assoc. Prof. Dr. Rashid A. Saeed
MSC Comp and Comm
Elex, FoE, SUST
Agenda

 Memoryless Information Processes
 Markov Processes and n-gram Models
Description
 Sometimes we are interested in how a random variable changes over time.
 The study of how a random variable evolves over time is the subject of stochastic processes.
 In particular, we examine a type of stochastic process known as a Markov chain.
 We begin by defining the concept of a stochastic process. A Markov chain may be either:
 a continuous-time Markov chain (CTMC), or
 a discrete-time Markov chain (DTMC).
What is a Markov Chain?
 One special type of discrete-time stochastic process is called a Markov chain.
 Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,

    P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t)

 Essentially, this says that the probability distribution of the state at time t+1 depends only on the state at time t (i_t) and does not depend on the states the chain passed through on the way to i_t.
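As a sketch of this definition: when simulating a Markov chain, the next state is sampled using only the current state. The two-state transition probabilities below are illustrative placeholders, not values from the text.

```python
import random

# Illustrative two-state transition matrix (each row sums to 1);
# the numbers are placeholders, not from the text.
P = [[0.9, 0.1],
     [0.2, 0.8]]

def step(state, rng):
    # The Markov property: this distribution depends only on `state`,
    # not on any earlier history of the chain.
    return 0 if rng.random() < P[state][0] else 1

def simulate(n_steps, start=0, seed=42):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path
```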
 In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(X_{t+1} = j | X_t = i) is independent of t.
 This assumption allows us to write P(X_{t+1} = j | X_t = i) = p_ij, where p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t+1.
 If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.
 We call the vector q = [q1, q2, …, qs] the initial probability distribution for the Markov chain.
 In most applications, the transition probabilities are displayed as an s × s transition probability matrix P. The transition probability matrix P may be written as

        | p11  p12  …  p1s |
        | p21  p22  …  p2s |
    P = |  ⋮    ⋮        ⋮  |
        | ps1  ps2  …  pss |
 For each i,

    Σ_{j=1}^{s} p_ij = 1

 We also know that each entry in the P matrix must be nonnegative.
 Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
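These two conditions (nonnegative entries, each row summing to 1) can be checked mechanically; a minimal sketch, with a function name of our own:

```python
def is_stochastic(P, tol=1e-9):
    """Return True if P is a valid transition probability matrix:
    every entry nonnegative and every row summing to 1."""
    for row in P:
        if any(p < 0 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True
```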
Markov processes

The Gambler’s Ruin Problem
 At time 0, I have $2. At times 1, 2, …, I play a game in which I bet $1. With probability p I win the game, and with probability 1 – p I lose the game. My goal is to increase my capital to $4, and as soon as I do, the game is over. The game is also over if my capital is reduced to $0.
 Let Xt represent my capital position after the time-t game (if any) is played.
 X0, X1, X2, … may be viewed as a discrete-time stochastic process.
The Gambler’s Ruin Problem
 \$0 \$1 \$2 \$3 \$4

 1 0 0 0 0
1  p 0 p 0 0
 
P = 0 1 p 0 p 0
 0 0 1  p 0 p

 0 0 0 0 1 

## Dr. Rashid A. Saeed 10
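The matrix above can be built for any win probability p, and the chain can be run until it hits one of the absorbing states $0 or $4; a sketch (the function names are our own):

```python
import random

def gamblers_ruin_matrix(p):
    """Transition matrix for the $0..$4 gambler's ruin chain above."""
    q = 1 - p
    return [
        [1, 0, 0, 0, 0],  # $0: ruin, absorbing
        [q, 0, p, 0, 0],
        [0, q, 0, p, 0],
        [0, 0, q, 0, p],
        [0, 0, 0, 0, 1],  # $4: goal, absorbing
    ]

def play_until_absorbed(p, start=2, seed=1):
    """Simulate one game from `start` dollars until $0 or $4 is reached."""
    rng = random.Random(seed)
    x = start
    while x not in (0, 4):
        x += 1 if rng.random() < p else -1
    return x
```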

The Cola Example
 Suppose the entire cola industry produces only two colas.
 Given that a person last purchased cola 1, there is a 90%
chance that their next purchase will be cola 1.
 Given that a person last purchased cola 2, there is an 80%
chance that their next purchase will be cola 2.
1. If a person is currently a cola 2 purchaser, what is the
probability that they will purchase cola 1 two purchases from
now?
2. If a person is currently a cola 1 purchaser, what is the probability that they will purchase cola 1 three purchases from now?
The Cola Example

[State diagram: 1 → 1 with 90%, 1 → 2 with 10%, 2 → 2 with 80%, 2 → 1 with 20%]
##  We view each person’s purchases as a Markov chain with

the state at any given time being the type of cola the
person last purchased.
 Hence, each person’s cola purchases may be represented
by a two-state Markov chain, where
 State 1 = person has last purchased cola 1
 State 2 = person has last purchased cola 2
 If we define Xn to be the type of cola purchased by a
person on her nth future cola purchase, then X0, X1, …
may be described as the Markov chain with the following
transition matrix:
The Cola Example

               Cola 1  Cola 2
    P = Cola 1 [ .90     .10 ]
        Cola 2 [ .20     .80 ]
We can now answer questions 1 and 2.

1. We seek P(X2 = 1 | X0 = 2) = p21(2), the (cola 2, cola 1) entry of P²:

         | .90  .10 | | .90  .10 |   | .83  .17 |
    P² = | .20  .80 | | .20  .80 | = | .34  .66 |

   Hence the answer is p21(2) = .34.
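The powers of P can be computed directly; the sketch below reproduces the P² entries above and also answers question 2 via P³ (the helper `matmul` is our own):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

P = [[0.90, 0.10],
     [0.20, 0.80]]

P2 = matmul(P, P)    # two purchases ahead
P3 = matmul(P2, P)   # three purchases ahead

# Question 1: P(X2 = 1 | X0 = 2) is the (cola 2, cola 1) entry of P².
q1 = P2[1][0]        # ≈ .34, matching the P² computed above
# Question 2: P(X3 = 1 | X0 = 1) is the (cola 1, cola 1) entry of P³.
q2 = P3[0][0]        # ≈ .781
```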
Markov Processes
 A Markov source consists of:
 an alphabet A
 a set of states ∑,
 a set of transitions between states,
 a set of labels for the transitions and
 two sets of probabilities.

Diagrammatic representation of a Markov source
Markov chain in Network queuing
 Markov Chain for M/M/1 system

Birth-death chain

The Poisson process

– Arrival rate of λ packets per second
– Over a small interval δ, the probability of one arrival is approximately λδ
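One standard way to realize a Poisson arrival process of rate λ is to draw independent exponential interarrival times with mean 1/λ; a minimal sketch (the function name is our own):

```python
import random

def poisson_arrivals(lam, t_end, seed=7):
    """Generate arrival times on [0, t_end) for a Poisson process of rate
    `lam` by summing exponential interarrival times (mean 1/lam)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t >= t_end:
            return times
        times.append(t)
```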
Markov model of a scalar passage of
music.

Find mth-order Markov model?
A Markov source equivalent to a 3-gram model
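The equivalence can be made concrete: a 3-gram model is an order-2 Markov source whose state is the previous two symbols. A sketch of estimating such a source from data (the names are our own):

```python
from collections import defaultdict

def build_trigram_model(symbols):
    """Order-2 Markov source equivalent to a 3-gram model: the state is
    the last two symbols; counts give the transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        counts[(a, b)][c] += 1
    model = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        model[state] = {sym: n / total for sym, n in nxt.items()}
    return model

model = build_trigram_model(list("abababc"))
```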
# Ch 05: Little’s Theorem and M/M/1
Agenda

 Introduction
 Queuing systems
 Categories, Kendall notation
 Markovian queuing systems
 Little’s result
 M/M/1
What is queuing theory?
 Queuing theory is the performance evaluation of resource-sharing systems, in particular teletraffic systems.
Performance of queuing systems

Block diagram of a queuing system

Description of queuing systems

Examples in details

Examples in details

Performance measures

Poisson process

Poisson process

Exponential distribution and
memoryless property

Poisson process and exponential
distribution

Kendall’s notation A/S/m/c/p/O

Notation Example
 M/M/3/20/1500/FCFS – single queue system with:
 Exponentially distributed arrivals
 Exponentially distributed service times
 Three servers
 Capacity 20 (queue size is 20 – 3 = 17)
 Population is 1500 total
 Service discipline is FCFS
 Often we assume an infinite queue, an infinite population, and FCFS, and write just M/M/3.

Markovian Queuing Systems

System variables

Little’s result

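Little's result L = λW can be checked on any set of arrival and departure timestamps; the timestamps below are hypothetical, chosen only to illustrate the identity:

```python
def time_average_in_system(arrivals, departures, t_end):
    """Time-average number of customers in the system over [0, t_end]."""
    area = sum(min(d, t_end) - a for a, d in zip(arrivals, departures))
    return area / t_end

# Hypothetical timestamps: four customers, each spending 2 time units.
arrivals   = [0.0, 1.0, 2.0, 3.0]
departures = [2.0, 3.0, 4.0, 5.0]
t_end = 5.0

lam = len(arrivals) / t_end                       # arrival rate lambda
W = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)
L = time_average_in_system(arrivals, departures, t_end)

# Little's result: L = lam * W
```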
Example

M/M/1 queuing systems

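The M/M/1 analysis rests on the standard steady-state formulas ρ = λ/μ, L = ρ/(1 − ρ), and W = 1/(μ − λ); a sketch collecting them (the function name is our own):

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 steady-state metrics (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: require lam < mu")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number in system
    W = 1 / (mu - lam)              # mean time in system (Little: L = lam*W)
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in queue
    Wq = rho / (mu - lam)           # mean waiting time in queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}
```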
Thank You
