$$x(2) = x(0)A^2 = \begin{pmatrix} 0.4 & 0.6 \end{pmatrix} \begin{pmatrix} 0.4 & 0.6 \\ 0.3 & 0.7 \end{pmatrix}^2 = \begin{pmatrix} 0.334 & 0.666 \end{pmatrix}.$$
Since $p_1 + p_2 = 1$, we have $p = \begin{pmatrix} \frac{1}{3} & \frac{2}{3} \end{pmatrix}$.
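As a quick numerical sanity check (a plain-Python sketch; the helper `vec_mat` is ad hoc, not from the text), one can verify both the two-step distribution and the stationary distribution:

```python
# Two-state chain with transition matrix A = [[0.4, 0.6], [0.3, 0.7]].
A = [[0.4, 0.6], [0.3, 0.7]]

def vec_mat(x, M):
    """Multiply a row vector x by a matrix M."""
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

# Two-step distribution starting from x(0) = (0.4, 0.6).
x = [0.4, 0.6]
for _ in range(2):
    x = vec_mat(x, A)
print(x)  # close to (0.334, 0.666)

# The stationary distribution p = (1/3, 2/3) satisfies pA = p.
p = [1 / 3, 2 / 3]
print(vec_mat(p, A))  # again (1/3, 2/3), up to rounding
```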
11.31 Solving
$$0 = |A - \lambda I| = \begin{vmatrix} 0.4 - \lambda & 0.6 \\ 0.3 & 0.7 - \lambda \end{vmatrix} = \lambda^2 - 1.1\lambda + 0.1,$$
the eigenvalues of $A$ are given by
$$\lambda_1 = 1, \qquad \lambda_2 = \frac{1}{10}.$$
Hence the rate of convergence is $|\lambda_2|^n = \frac{1}{10^n}$.
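The roots of the characteristic polynomial can be checked directly with the quadratic formula (a stdlib-only sketch):

```python
import math

# Characteristic polynomial of A = [[0.4, 0.6], [0.3, 0.7]]:
# |A - lambda*I| = lambda^2 - 1.1*lambda + 0.1.
b, c = -1.1, 0.1
disc = math.sqrt(b * b - 4 * c)
lam1, lam2 = (-b + disc) / 2, (-b - disc) / 2
print(lam1, lam2)  # approximately 1 and 0.1
```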
11.32 Let $p_x$ be the probability that, starting from $x$, the process hits 2 before 3. Then
$$p_2 = 1, \qquad p_3 = 0,$$
and the first-step equation for state 1 gives $p_1 = \frac{2}{3}$.
2018 Fall MMAT 5340 2
(Transition diagram of the four-state chain, with states 1–4 and their one-step transition probabilities, omitted.)
From the graph, we can see that 1 and 2 are transient states, while 3 and 4 are
recurrent states.
11.40 Let $X = Z - 1$. Then
$$P(X = 0) = P(Z = 1) = 0.5, \qquad P(X = 1) = P(Z = 2) = 0.5^2 = 0.25,$$
$$P(X \ge 2) = 1 - P(X = 0) - P(X = 1) = 0.25.$$
Hence, the transition matrix is
$$P = \begin{pmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0.25 & 0.25 \\ 0 & 0.5 & 0.5 \end{pmatrix}.$$
We can find the stationary distribution $p = \begin{pmatrix} p_0 & p_1 & p_2 \end{pmatrix}$ by solving $pP = p$. By performing column operations,
$$P - I = \begin{pmatrix} -0.5 & 0.5 & 0 \\ 0.5 & -0.75 & 0.25 \\ 0 & 0.5 & -0.5 \end{pmatrix} \sim \begin{pmatrix} -0.5 & 0 & 0 \\ 0.5 & -0.25 & 0.25 \\ 0 & 0.5 & -0.5 \end{pmatrix} \sim \begin{pmatrix} -0.5 & 0 & 0 \\ 0.5 & -0.25 & 0 \\ 0 & 0.5 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -2 & 0 \end{pmatrix}.$$
That is, $p_0 = p_1 = 2p_2$. Hence $p = \begin{pmatrix} 0.4 & 0.4 & 0.2 \end{pmatrix}$.
The long-term average premium is
$$p_0 r_0 + p_1 r_1 + p_2 r_2 = (0.4)(0.5 \cdot 0 + 1) + (0.4)(0.5 \cdot 1 + 1) + (0.2)(0.5 \cdot 2 + 1) = 1.4 \text{ thousand dollars}.$$
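The stationary distribution and the premium can be verified exactly with rational arithmetic (a sketch using the transition matrix and premiums $r_k = 0.5k + 1$ from above):

```python
from fractions import Fraction as F

# Transition matrix of the bonus-malus chain (problem 11.40).
P = [[F(1, 2), F(1, 2), F(0)],
     [F(1, 2), F(1, 4), F(1, 4)],
     [F(0),    F(1, 2), F(1, 2)]]

# Candidate stationary distribution p = (0.4, 0.4, 0.2).
p = [F(2, 5), F(2, 5), F(1, 5)]
pP = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]
assert pP == p  # p is indeed stationary

# Long-term average premium with r_k = 0.5*k + 1 (thousands of dollars).
premium = sum(p[k] * (F(1, 2) * k + 1) for k in range(3))
print(premium)  # 7/5, i.e. 1.4
```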
13.20 Since $X_n$ is the number of Heads during the first $n$ tosses, we have
$$E(X_{n+1} \mid X_n) = \frac{1}{2}(X_n + 1) + \frac{1}{2} X_n = X_n + \frac{1}{2}.$$
If $Y_n := 3X_n - cn$ is a martingale, then
$$Y_n = E(Y_{n+1} \mid X_n) = 3\left(X_n + \frac{1}{2}\right) - c(n+1) = Y_n + \frac{3}{2} - c,$$
so that $c = \frac{3}{2}$.
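With $c = \frac{3}{2}$ (the value that makes $Y_n$ a martingale), $EY_n = 0$ for every $n$ since $Y_0 = 0$. A quick Monte Carlo sketch (not from the text) checks this numerically:

```python
import random

random.seed(0)
# Check that Y_n = 3*X_n - (3/2)*n has mean 0, where X_n counts
# Heads in n fair coin tosses and c = 3/2 is the martingale constant.
c = 1.5
trials, n = 20000, 50
total = 0.0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    total += 3 * heads - c * n
print(total / trials)  # close to 0
```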
For $x > 0$, $f(x) = x^3$ is convex, since $f''(x) = 6x > 0$. Note that $M_n > 0$. By Doob's martingale inequality, for $\lambda > 0$, we have
$$P\left(\max_{0 \le n \le 100} M_n \ge \lambda\right) \le \frac{E M_{100}^3}{\lambda^3}.$$
Now
$$M_{100}^3 = e^{-3} e^{3S_{100}} e^{-300c} = e^{-3} e^{-300c} e^{3X_1} \cdots e^{3X_{100}},$$
so that
$$E M_{100}^3 = e^{-3} e^{-300c} \left(E e^{3X_1}\right)^{100} = e^{-3} e^{-300(2)} \left(e^{\frac{(3)^2 \cdot 4}{2}}\right)^{100} = e^{1197}.$$
Hence
$$P\left(\max_{0 \le n \le 100} M_n \ge \lambda\right) \le \frac{e^{1197}}{\lambda^3}.$$
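The exponent $1197$ is pure bookkeeping; a one-line check (assuming, as in the computation above, $c = 2$ and $E e^{3X_1} = e^{(3^2 \cdot 4)/2} = e^{18}$):

```python
# Exponent of E[M_100^3] = e^{-3} * e^{-300c} * (E e^{3 X_1})^100,
# with c = 2 and E e^{3 X_1} = e^{18} as in the computation above.
exponent = -3 - 300 * 2 + 100 * (3 ** 2 * 4 // 2)
print(exponent)  # 1197
```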
14.12 Note that $\tau_4 - \tau_3$ and $\tau_3 - \tau_2$ are i.i.d. $\mathrm{Exp}(\lambda)$ random variables, with $\lambda = 3$. Therefore,
$$E(\tau_4 - \tau_2) = 2\lambda^{-1} = \frac{2}{3}, \qquad \mathrm{Var}(\tau_4 - \tau_2) = 2\lambda^{-2} = \frac{2}{9}.$$
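A simulation sketch (not from the text) of the sum of two independent $\mathrm{Exp}(3)$ interarrival times reproduces these moments:

```python
import random

random.seed(1)
# tau_4 - tau_2 is the sum of two independent Exp(3) interarrival times.
lam, trials = 3.0, 200000
samples = [random.expovariate(lam) + random.expovariate(lam)
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)  # close to 2/3 and 2/9
```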
14.13 Since N (t) − N (s) is independent of N (u), u ≤ s, and N (t) − N (s) ∼ Poi(λ(t − s)),
we have
15.26 The transition matrix for the corresponding discrete-time Markov chain is
$$P = \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 1 \\ \frac{1}{3} & \frac{2}{3} & 0 \end{pmatrix}.$$
15.27 Let $\lambda'_i$, $1 \le i \le 3$, be the intensity of exit from state $i$, that is, the negative of the diagonal entries of $A$. Then the stationary distribution $\pi$ for the corresponding discrete-time Markov chain is given by
$$\pi = \frac{\begin{pmatrix} \lambda'_1 \rho_1 & \lambda'_2 \rho_2 & \lambda'_3 \rho_3 \end{pmatrix}}{\lambda'_1 \rho_1 + \lambda'_2 \rho_2 + \lambda'_3 \rho_3} = \frac{8}{13} \begin{pmatrix} \frac{1}{4} & \frac{5}{8} & \frac{3}{4} \end{pmatrix} = \begin{pmatrix} \frac{2}{13} & \frac{5}{13} & \frac{6}{13} \end{pmatrix}.$$
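The normalization can be checked exactly (a sketch assuming the weights $\lambda'_i \rho_i = \left(\frac{1}{4}, \frac{5}{8}, \frac{3}{4}\right)$ read off above):

```python
from fractions import Fraction as F

# Normalize the weighted exit intensities lambda'_i * rho_i = (1/4, 5/8, 3/4).
w = [F(1, 4), F(5, 8), F(3, 4)]
total = sum(w)          # 13/8
pi = [x / total for x in w]
print(pi)  # the fractions 2/13, 5/13, 6/13
```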