
Carnegie Mellon University Apr 1 2020

Due on Apr 15, 2020


Advanced Probability & Statistics for Engineers 18-665/465
Osman Yağan

Homework 6

Instructions
• The homework is due at 11:59 PM (your local time) on April 15, 2020.
• There are 17 problems in this HW. Please submit solutions for ONLY 10 problems of your
choice. We will distribute solutions for all problems.
• Some optional exercises have been added for additional practice. Please do NOT include them in
your submission.
• Please write the problem number and page numbers (e.g., page 1/4) on each page.
• Please show and justify all the steps.
• It is recommended that you use a scanner to scan your solutions. If you are planning to use your
phone’s camera, please use a Document Scanning application, e.g., iOS Notes, CamScanner, etc.

Problem 1
A car is either working or under maintenance. If the car is working today, then with probability
0.9, it will be working tomorrow. If the car is under maintenance today, then with probability
0.5, it will be working tomorrow.

(a) Define a Markov chain to describe the car’s state and draw the corresponding transition
probability diagram.

(b) Find the stationary distribution corresponding to the above Markov chain. Is the Markov
chain aperiodic and irreducible? What is the limiting probability that the car is under
maintenance on any day?
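Illustrative sketch, not part of the required submission: a stationary distribution computed by hand can be sanity-checked numerically by repeatedly applying π ← πP. The state encoding (0 = working, 1 = under maintenance) is an assumption of this sketch.

```python
# Sketch: estimate the stationary distribution by power-iterating pi <- pi P.
# State encoding (an assumption of this sketch): 0 = working, 1 = maintenance.

def stationary_by_iteration(P, steps=1000):
    """Iterate the distribution forward, starting from uniform."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Rows index the current state; columns index the next state.
P_car = [[0.9, 0.1],
         [0.5, 0.5]]

pi_car = stationary_by_iteration(P_car)
```

Since the chain is irreducible and aperiodic, the iteration converges quickly, and the result should match whatever you obtain by solving the balance equations by hand.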

Problem 2
A professor continually gives exams to his students. He can give three possible types of exams,
and his class is graded as either having done well or badly. Let pi denote the probability that
the class does well on an exam of type i, and suppose that p1 = 0.3, p2 = 0.6, and p3 = 0.9. If the
class does well on an exam, then the next exam is equally likely to be any of the three types. If
the class does badly, then the next exam is always type 1.

(a) Define a Markov chain to describe the type of exam delivered to the students and write
the corresponding transition probability matrix.


(b) Find the stationary distribution corresponding to the above Markov chain. In the long
run, what are the proportions of exams of types 1, 2, and 3?
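A sketch of how the verbal description translates into a transition matrix (illustrative only; the row/column convention used below is an assumption of the sketch):

```python
# Sketch: assemble the transition matrix of Problem 2 from the description.
# Row i-1 corresponds to exam type i; if the class does well (probability
# p[i-1]), the next type is uniform over {1, 2, 3}; if badly, the next is type 1.

p = [0.3, 0.6, 0.9]  # P(class does well on an exam of type 1, 2, 3)

P_exam = []
for p_i in p:
    row = [p_i / 3] * 3      # "did well" branch: uniform over the 3 types
    row[0] += 1 - p_i        # "did badly" branch: next exam is type 1
    P_exam.append(row)
```

Each row should sum to 1, which is a quick way to catch bookkeeping mistakes when writing the matrix by hand.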

Problem 3 [Qualcomm Interview Question]


Consider a binary channel that can be in either one of the two states: “Good” or “Bad”, and
assume that the state of the channel forms a discrete-time Markov Chain with the following
state transition probabilities
P (Bad | Bad) = P (Good | Good) = p
P (Bad | Good) = P (Good | Bad) = 1 − p
In its “Good” state, the channel is binary symmetric with a probability of successful
transmission α.¹ In its “Bad” state, no successful transmission can occur over the channel;
i.e., the transmitted bit won’t be received at all.
Assume that you want to transmit a single bit (say, 0) over this channel and keep sending until
a successful transmission occurs; i.e., until 0 is received at the receiver. Assume that you have
perfect knowledge of what is received by the receiver and ignore any delays, etc.
What is the expected number of transmissions if the channel is initially in the Good state? What
is the expected number of transmissions if the channel is initially in the Bad state?
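The timing of state changes relative to transmission attempts is a modeling choice; the sketch below adopts one reading (the current state governs the current attempt, then the state transitions), purely as a Monte Carlo aid for checking a hand derivation. The parameter values are arbitrary samples.

```python
# Monte Carlo sketch under an assumed timing: the channel state governs the
# current attempt, then the state transitions (stay with probability p).
import random

def avg_transmissions(alpha, p, start_good, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        good = start_good
        count = 0
        while True:
            count += 1
            # success only if the channel is Good AND the bit is not flipped
            if good and rng.random() < alpha:
                break
            # state transition: remain in the current state with probability p
            if rng.random() >= p:
                good = not good
        total += count
    return total / trials
```

Running this for a few (α, p) pairs and comparing against your first-step-analysis answer is a useful consistency check.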

Problem 4
Three blue and three green balls are distributed in two urns in such a way that each urn contains
three balls. At each step, one ball is drawn uniformly at random from each urn, and the two
balls are swapped. At any given step, the system is in one of the four states {0, 1, 2, 3},
corresponding to the number of blue balls in the first urn. In other words, the system is in
state i if the first urn has i blue balls, where i = 0, 1, 2, 3.
(a) Define a Markov chain to describe the number of blue balls in the first urn and write the
corresponding transition probability matrix.
(b) Find the stationary distribution corresponding to the Markov chain. What is the limiting
probability of having no green balls in the first urn?
¹ More precisely, if the channel is in the Good state, a transmitted bit (either 0 or 1) will be observed at the
receiver unchanged with probability α. With probability 1 − α, the transmitted bit will be flipped by the channel;
i.e., a 0 will be received as 1, and vice versa.
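For Problem 4, the swap probabilities follow from counting colors in each urn; a sketch with exact arithmetic (the state encoding is the one the problem defines):

```python
# Sketch for Problem 4: with i blue balls in urn 1, urn 1 holds 3 - i green
# balls, and urn 2 holds 3 - i blue and i green balls.  Drawing one ball
# uniformly from each urn and swapping gives the probabilities below.
from fractions import Fraction

m = 3  # balls per urn
P_urn = [[Fraction(0)] * (m + 1) for _ in range(m + 1)]
for i in range(m + 1):
    if i < m:
        P_urn[i][i + 1] = Fraction((m - i) ** 2, m * m)   # green out, blue in
    if i > 0:
        P_urn[i][i - 1] = Fraction(i * i, m * m)          # blue out, green in
    P_urn[i][i] = Fraction(2 * i * (m - i), m * m)        # colors exchanged evenly
```

Exact fractions make the row-sum check unambiguous, which helps when transcribing the matrix into a hand solution.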


Problem 5
At the beginning of the nth hour, a server inspects the number Xn of unprocessed jobs in its
queue. If Xn = 0, the server remains idle for the next hour. If Xn ≥ 1, the server takes the first
job in the queue and completes it in exactly one hour. Also during the nth hour, there are Un
arrivals in the queue, where the Un are i.i.d. random variables with distribution

P (Un = 0) = 1/8, P (Un = 1) = 1/4, P (Un = 2) = 1/2, P (Un = 3) = 1/8.

(a) Derive an equation for Xn+1 in terms of Xn and Un . Hence, or otherwise, show that the
sequence {Xn } forms a time-homogeneous Markov chain.

(b) Compute the transition probability matrix P .
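The update rule and matrix are exactly what parts (a) and (b) ask you to derive; purely as an empirical aid, the queue can be simulated straight from the verbal description:

```python
# Simulation sketch for Problem 5, driven directly by the verbal description:
# serve one job per hour if any are waiting, then add the hour's arrivals.
import random

arrival_values = [0, 1, 2, 3]
arrival_probs = [1/8, 1/4, 1/2, 1/8]
rng = random.Random(42)

def next_state(x):
    if x >= 1:          # server completes the job at the head of the queue
        x -= 1
    u = rng.choices(arrival_values, weights=arrival_probs)[0]
    return x + u        # arrivals during the hour join the queue

trajectory = [0]
for _ in range(5000):
    trajectory.append(next_state(trajectory[-1]))
```

Note that E[Un] = 13/8 exceeds the service rate of one job per hour, so the simulated queue tends to grow over time.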

Problem 6
(a) Consider a finite, irreducible, aperiodic Markov chain over n states with a doubly stochastic
transition probability matrix. Here, a doubly stochastic matrix refers to a matrix in which
each row and column sum equals 1. Verify that the stationary distribution for this Markov
chain is given by

π = ( 1/n  1/n  · · ·  1/n )

(b) Suppose a DNA nucleotide can take one of four possible states. Assume that in going
from time period to time period, the nucleotide does not change state with probability α;
and if it does change, then it is equally likely to change to any of the other 3 states, where
0 < α < 1/3. Is the transition probability matrix representing the state of the nucleotide
doubly stochastic? Determine the long-run proportion of time spent by the nucleotide in
each state.
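A quick numeric illustration of part (b), with α fixed to a sample value (the problem itself keeps α symbolic):

```python
# Sketch for part (b): the nucleotide matrix for a sample alpha, with the
# doubly-stochastic property checked via column sums.
alpha = 0.2  # sample value for illustration only

P_nt = [[alpha if i == j else (1 - alpha) / 3 for j in range(4)]
        for i in range(4)]

col_sums = [sum(P_nt[i][j] for i in range(4)) for j in range(4)]
```

Since every column also sums to 1, the uniform distribution is invariant, which is the key observation part (a) asks you to verify in general.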

Problem 7
Consider a finite, connected, undirected graph G(N , E), where N is the set of nodes, and E is
the set of edges. Let d(i) denote the degree of node i, where i ∈ N . Now, consider a particle
making a random walk over the nodes of this graph, where at each time step, the particle

moves from the previous node to one of its neighbors, chosen uniformly at random. Note that
the position of the particle evolves as a Markov chain over the set of nodes N . What is the
stationary distribution corresponding to the above Markov Chain?
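One way to form a conjecture before proving anything is to iterate the distribution on a small example graph; the graph below (a path attached to a triangle) is an arbitrary choice for illustration.

```python
# Sketch: stationary distribution of the random walk on a small test graph,
# found by iterating the distribution forward.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}

def walk_stationary(adj, steps=2000):
    nodes = sorted(adj)
    pi = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(steps):
        new = {v: 0.0 for v in nodes}
        for v in nodes:
            for w in adj[v]:
                new[w] += pi[v] / len(adj[v])  # neighbor chosen uniformly
        pi = new
    return pi

pi_walk = walk_stationary(adjacency)
```

Comparing the output against the node degrees should suggest the general form of the answer, which you can then prove.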

Problem 8
Consider a time-homogeneous Markov chain with state space S = {1, 2, 3, 4, 5} and transition
probability matrix

        [ 2/5  3/5   0    0    0  ]
        [ 4/5  1/5   0    0    0  ]
    P = [ 1/5   0   2/5  1/5  1/5 ]
        [  0    0    0   3/5  2/5 ]
        [  0    0   1/5  1/5  3/5 ]
(a) Identify all classes of states, determine whether each class is transient or recurrent,
and compute the period of each class.
(b) Determine all possible stationary distributions π for P .
(c) What is the probability of never returning to state 5, given that the initial state was 5?
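A generic reachability computation is a handy cross-check for part (a); states 1–5 of the problem are re-indexed 0–4 below (an implementation convenience):

```python
# Sketch: find communicating classes by mutual reachability.
P8 = [[2/5, 3/5, 0,   0,   0  ],
      [4/5, 1/5, 0,   0,   0  ],
      [1/5, 0,   2/5, 1/5, 1/5],
      [0,   0,   0,   3/5, 2/5],
      [0,   0,   1/5, 1/5, 3/5]]

def reachable(P, start):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v, prob in enumerate(P[u]):
            if prob > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def communicating_classes(P):
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    found = []
    for i in range(n):
        cls = frozenset(j for j in range(n) if j in reach[i] and i in reach[j])
        if cls not in found:
            found.append(cls)
    return found
```

For a finite chain, a class is recurrent exactly when it is closed, i.e., when no state in the class can reach a state outside it.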

Problem 9
Two gamblers play the following game. A fair coin is flipped; if the outcome is Heads, player
A pays player B $1, and if the outcome is Tails, player B pays player A $1. The game
continues until one of the players goes broke, after which the two players’ fortunes remain
constant in time. Suppose that initially (n = 0), A has $1 and B has $2, and let Xn be A’s
fortune at time n.
(a) Find the transition probability matrix P of the Markov chain {Xn , n ≥ 0}.
(b) By considering all allowable transitions in each case, show that
p11 (2n) = 4^(−n) = p22 (2n),
p10 (2n) = (2/3) (1 − 4^(−n)) = p23 (2n).

Optional Exercise: Compute the limit of P^(2n) as n → ∞, and show that it coincides with the
limit of P^(2n+1) = P · P^(2n).

Optional Exercise: Consider a discrete-time Markov chain whose states are the vertices
a, b, c, d of a square, where a is adjacent to b and c, and d is adjacent to b and c. The initial
state X0 of the chain is a with probability 1. In each subsequent transition, the state Xn either
remains the same with probability 1/2, or becomes an adjacent vertex with probability 1/4
(per vertex).

(i) What is the expected number of visits to vertex b before returning to a?

(ii) What is the expected time of the first visit to d?

(iii) What is the probability that the chain visits d before visiting b?
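The part (b) identities of Problem 9 can be spot-checked with exact matrix powers; the matrix below encodes A’s fortune as states 0–3, with 0 and 3 absorbing.

```python
# Exact-arithmetic spot check of the Problem 9(b) identities for small n.
from fractions import Fraction as F

P9 = [[F(1),    F(0),    F(0),    F(0)],    # state 0: A broke (absorbing)
      [F(1, 2), F(0),    F(1, 2), F(0)],
      [F(0),    F(1, 2), F(0),    F(1, 2)],
      [F(0),    F(0),    F(0),    F(1)]]    # state 3: B broke (absorbing)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, m):
    R = [[F(int(i == j)) for j in range(len(A))] for i in range(len(A))]
    for _ in range(m):
        R = matmul(R, A)
    return R
```

Checking a few small even powers against the closed forms is a quick way to catch an algebra slip before writing up the induction.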

Problem 10
Consider a modified random walk on the integers such that at each hop, movement towards
the origin is twice as likely as movement away from the origin: from any state k ≥ 1, the
chain moves to k − 1 with probability 2/3 and to k + 1 with probability 1/3, and symmetrically
from any state k ≤ −1. Once at the origin, there is equal probability (1/3 each) of staying
there, moving to +1, or moving to −1.

(i) Is the chain irreducible? Explain your answer.

(ii) Carefully show that a stationary distribution of the form πk = c r^|k| exists, and determine
the values of r and c.

(iii) Is the stationary distribution shown in part (ii) unique? Explain your answer.

(Hint: Σ_{k≥0} α^k = (1 − α)^(−1) for |α| < 1.)
Problem 11
Consider a time-homogeneous Markov Chain with state space S = {1, 2, 3, 4, 5} and transition
probability matrix

        [ 4/10  2/10   0    3/10  1/10 ]
        [  0    1/2   1/2    0     0   ]
    P = [  0    1/5   4/5    0     0   ]
        [  0     0     0     0     1   ]
        [  0     0     0    2/3   1/3  ]


(a) Determine all classes of states, their periods, and whether they are recurrent or transient.
(b) Suppose that X0 = 1 with probability 1. What is the probability that state 5 is reached
before state 3?
(c) Suppose X0 = 2 with probability 1. What is the expected time of the first visit to state 3?
(d) Suppose X0 = 4 with probability 1. What is the expected number of visits to state 5
between two successive visits to state 4?
(e) Suppose now that the initial distribution (i.e., P (X0 = i)) is given by the vector

( 3/10  1/10  2/10  2/10  2/10 )

Determine limn→∞ P (Xn = i) for every state i; i.e., determine the stationary distribution
π.
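The final answer to part (e) can be sanity-checked by propagating the initial distribution numerically (the iteration count below is arbitrary but generous); states 1–5 are indexed 0–4.

```python
# Numerical check for Problem 11(e): push the initial distribution through
# the chain many times.
P11 = [[4/10, 2/10, 0,   3/10, 1/10],
       [0,    1/2,  1/2, 0,    0   ],
       [0,    1/5,  4/5, 0,    0   ],
       [0,    0,    0,   0,    1   ],
       [0,    0,    0,   2/3,  1/3 ]]

pi11 = [3/10, 1/10, 2/10, 2/10, 2/10]
for _ in range(500):
    pi11 = [sum(pi11[i] * P11[i][j] for i in range(5)) for j in range(5)]
```

Because both recurrent classes are aperiodic and the transient state's mass decays geometrically, this iteration converges to the same limit that the class-decomposition argument produces by hand.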

Problem 12
Consider a Markov chain with state space S = {0, · · · , K − 1} and transition probabilities given
by


pij = αi ,            j = i + 1;
pij = 1 − αi − βi ,   j = i;
pij = βi ,            j = i − 1,

where β0 = αK−1 = 0.
(i) Display the transition probability graph (diagram) and transition probability matrix P
for this Markov chain.
(ii) Under what conditions on αi and βi is the chain irreducible? Under what additional
conditions is it aperiodic?
(iii) Show, by induction or otherwise, that the stationary distribution in the aperiodic case is
given by

πi = (α0 α1 · · · αi−1) / (β1 β2 · · · βi) · π0 ,    i = 1, · · · , K − 1


(iv) Why is it intuitively expected that πi αi = πi+1 βi+1 for i = 0, · · · , K − 2?

(v) Determine the stationary distribution in the case K = 8, assuming αi = 1/5 and βi = 2/5
for all (applicable) values of i. Express the answer in rational form.
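The chain of part (v) is small enough to check numerically; the self-loop probabilities keep it aperiodic, so plain iteration converges.

```python
# Numerical check for Problem 12(v): K = 8, alpha_i = 1/5, beta_i = 2/5
# (with beta_0 = alpha_{K-1} = 0 as stated in the problem).
K, a, b = 8, 1/5, 2/5
P_bd = [[0.0] * K for _ in range(K)]
for i in range(K):
    up = a if i < K - 1 else 0.0
    down = b if i > 0 else 0.0
    if i < K - 1:
        P_bd[i][i + 1] = up
    if i > 0:
        P_bd[i][i - 1] = down
    P_bd[i][i] = 1.0 - up - down

pi_bd = [1.0 / K] * K
for _ in range(5000):
    pi_bd = [sum(pi_bd[i] * P_bd[i][j] for i in range(K)) for j in range(K)]
```

The output should agree, after converting to rationals, with the closed form from part (iii).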

Problem 13 [Time-Reversibility]
Consider a stationary Markov chain · · · Xn , Xn+1 , Xn+2 , · · · with transition probabilities {pij }
and limiting probabilities given by {πi }, with i, j ∈ S.

(a) Show that the time-reversed sequence · · · Xn , Xn−1 , Xn−2 , · · · is also a Markov Chain with
transition probabilities

p*ij = P (Xn = j | Xn+1 = i) = πj pji / πi
and that the limiting probabilities are still given by {πi }.
NOTE: The fact that limiting probabilities remain the same can be seen intuitively as
follows: imagine recording a video of a MC visiting various states and then rewinding
the video. Would the fraction of time MC spends in any given state change?

(b) A Markov chain is said to be time-reversible if p*ij = pij for all i, j. In other words, a MC is
time-reversible if

πi pij = πj pji ,    i ∈ S, j ∈ S    (1)

The equations given at (1) are known as Detailed Balance Equations. They capture the
notion that “the rate of transition from state i to state j” equals “the rate of transition
from state j to state i.”
Show that a necessary condition for time reversibility is that

pij pjk pki = pik pkj pji for all i, j, k

which states that the transition i → j → k → i has the same probability as the reversed
transition i → k → j → i. In fact, for a reversible chain starting at any state i, any path
back to i has the same probability as the reversed path.
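A small numeric illustration of the cycle condition (both example chains below are arbitrary choices): a tridiagonal birth-death chain is reversible and satisfies it, while a chain biased around a cycle does not.

```python
# Sketch: check p_ij p_jk p_ki = p_ik p_kj p_ji over all triples of states.
def cycle_condition_holds(P):
    n = len(P)
    return all(
        abs(P[i][j] * P[j][k] * P[k][i] - P[i][k] * P[k][j] * P[j][i]) < 1e-12
        for i in range(n) for j in range(n) for k in range(n)
    )

birth_death = [[0.5, 0.5, 0.0],
               [0.3, 0.4, 0.3],
               [0.0, 0.5, 0.5]]     # tridiagonal: reversible

biased_cycle = [[0.1, 0.6, 0.3],
                [0.3, 0.1, 0.6],
                [0.6, 0.3, 0.1]]    # favors 0 -> 1 -> 2 -> 0: not reversible
```

The biased cycle fails because the product around 0 → 1 → 2 → 0 differs from the product around the reversed loop, which is exactly the obstruction the condition detects.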


Problem 14
Consider a random walk on the unit cube (states = vertices). Transitions to any adjacent vertex
occur with probability p < 1/3, while the probability of staying at the same vertex is 1 − 3p.
Let a = (0, 0, 0), b = (1, 0, 0), c = (1, 1, 0) and d = (1, 1, 1). Assuming that the initial state is a,
compute:

(i) the expected return time to a;

(ii) the expected first visit time to d;

(iii) the expected number of visits to d before returning to a.
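Expected hitting times satisfy a linear system obtained by first-step analysis; the sketch below solves that system by fixed-point iteration for a sample value p = 0.2 (the value is an assumption for illustration, not part of the problem).

```python
# Sketch for Problem 14: expected first-visit times on the cube, found by
# iterating h(v) = 1 + (1 - 3p) h(v) + p * sum of h over neighbors, with
# h(target) = 0.  p = 0.2 is a sample value.
p = 0.2
vertices = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]

def neighbors(v):
    # flip one coordinate at a time
    return [tuple(bit ^ (idx == d) for idx, bit in enumerate(v))
            for d in range(3)]

def hitting_times(target, sweeps=5000):
    h = {v: 0.0 for v in vertices}
    for _ in range(sweeps):
        for v in vertices:
            if v != target:
                h[v] = 1 + (1 - 3 * p) * h[v] + p * sum(h[w] for w in neighbors(v))
    return h

h_d = hitting_times((1, 1, 1))
```

The iteration converges because, away from the target, the chain eventually leaks probability into the target state; the converged values can be checked against a hand solution of the linear system.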

Problem 15
A mouse is put into the maze shown in the following figure. Each time period, it chooses at
random one of the doors in the room it is in, and moves to another room through that door.
The mouse can escape to the outside (room 5), never to return to the maze, but room 3
contains a mouse trap that would put an end to its journey; i.e., if and when the mouse
enters room 3, it will stay there forever.

(a) If the mouse starts at room 2, what is the probability that it escapes?

(b) If the mouse starts at room 4, what is the expected time it will take before it escapes or
gets caught?


Problem 16
If you keep on tossing a biased coin with probability of Heads given by p, what is the expected
number of tosses before you observe HHH (i.e., three heads in a row)?

Optional Exercise: What is the expected number of tosses needed to observe the first THH
(Tails-Heads-Heads)?
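The derivation is the exercise; purely as a self-check, the first-step equations on the "current run of heads" chain (one standard setup, sketched here) can be iterated to a fixed point.

```python
# Self-check sketch for Problem 16: fixed-point iteration of the first-step
# equations, with states 0, 1, 2 = current number of trailing heads.
def expected_tosses_hhh(p, iterations=10_000):
    e0 = e1 = e2 = 0.0
    for _ in range(iterations):
        e2 = 1 + (1 - p) * e0          # from HH: a tail resets the run
        e1 = 1 + p * e2 + (1 - p) * e0
        e0 = 1 + p * e1 + (1 - p) * e0
    return e0
```

Evaluating this for a few values of p and comparing against your closed-form answer is a quick correctness check.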

Problem 17 [Markov Chains and Google’s PageRank Algorithm]


In this problem, we will walk through an example to see how Google uses Markov Chains
in its PageRank algorithm. The problem PageRank tries to solve is the following: given M
interlinked webpages S = {0, 1, . . . , M − 1}, how can we assign each page an importance score
(so that we can rank them)? Instead of resorting to fancy big-data methods, PageRank aims to
solve the problem using only the internal link structure of the pages.
The main idea of the PageRank algorithm is as follows. We consider a web surfer who starts
with an arbitrary page in S at time zero. Then, they start to navigate from one page to another
by clicking on one of the links that exists in the current page they are visiting. Let pij denote
the probability that the surfer visits page j after visiting page i. We can show that the pages
that this surfer visits form a Markov Chain with state space S and transition probability matrix
with entries {pij , i, j ∈ S}.
The PageRank algorithm calculates the limiting distribution π̄ = (π0 , π1 , . . . , πM−1 ) of this Markov
Chain. As seen in class, πj gives us the limiting probability of the Markov Chain being in state
j irrespective of the initial state that the surfer started with. We have also seen that πj gives
the long-run fraction of time that the Markov Chain visits state j. Put differently, πj equals the
expected fraction of time an arbitrary surfer will spend on page j. Thus, it can be argued that
πj is closely associated with the popularity or importance of page j.
To give a simple example, say we have M = 4 pages, and they have the following link
structure:

• page-0 has a link to page-1 and a link to page-2.

• page-1 has a link to page-0 and a link to page-3.

• page-2 has a link to page-3.

• page-3 has a link to page-2.

Suppose each link has an equal chance of being clicked; then the walk of the surfer can be mod-
eled as the Markov chain MC-A shown below. If this Markov chain has a limiting distribution
π0 , . . . , πM−1 , then we can assign πj as the importance score of page j.

Figure 1: MC-A. (From page-0, transitions to page-1 and page-2 with probability 1/2 each;
from page-1, transitions to page-0 and page-3 with probability 1/2 each; from page-2, a
transition to page-3 with probability 1; from page-3, a transition to page-2 with probability 1.)

Q1: Write down the probability transition matrix A for MC-A.


Q2: Does MC-A have a limiting distribution? Show your reasoning. If yes, also write down
the limiting distribution, which will be the importance scores we are looking for.
Now we relax the condition that the surfer can only open links from the current page. We
assume at each step that the surfer will click a link that exists on the current page they are
visiting with probability 0.5, but will open a new random page with probability 0.5. We can
see that this resembles reality more closely, as web surfing sometimes entails getting bored
with the current chain of pages and restarting the surfing process from an entirely new page.
We call the new Markov chain MC-B. The probability transition matrix of MC-B is given by
B = (1/2)A + (1/2)C, where C is an M × M matrix with every entry equal to 1/M = 1/4.
Q3: Does MC-B have a limiting distribution? Show your reasoning. If yes, also write down
the limiting distribution, which will be the importance scores we are looking for.
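A numerical sketch of both chains (state j corresponds to page-j, as in the problem); the iteration counts are arbitrary but generous.

```python
# Sketch for Problem 17: build A from the link structure, B = A/2 + C/2,
# and examine their long-run behavior numerically.
M = 4
A = [[0,   1/2, 1/2, 0  ],
     [1/2, 0,   0,   1/2],
     [0,   0,   0,   1  ],
     [0,   0,   1,   0  ]]
B = [[0.5 * A[i][j] + 0.5 / M for j in range(M)] for i in range(M)]

def push(dist, P):
    return [sum(dist[i] * P[i][j] for i in range(M)) for j in range(M)]

# MC-A from page-0: mass ends up alternating between pages 2 and 3
dA = [1.0, 0.0, 0.0, 0.0]
for _ in range(1000):
    dA = push(dA, A)

# MC-B: every entry of B is positive, so the iteration converges
dB = [1.0 / M] * M
for _ in range(10_000):
    dB = push(dB, B)
```

The oscillation visible in dA versus the convergence of dB is exactly the contrast Q2 and Q3 ask you to explain.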
