
ECE 534 RANDOM PROCESSES FALL 2011

SOLUTIONS TO PROBLEM SET 6

1 Residual lifetime process of a Poisson process
(a) Each sample path of $R$ is a strictly positive function on $[0, \infty)$ that decreases with slope $-1$ in between upward jumps, which happen just as the function would otherwise reach zero.

(Figure: a sample path of $R$ plotted against $t$.)

Recall that one characterization of a Poisson process $N$ with rate $\lambda$ is that the times between successive jumps, $U_1, U_2, \ldots$, are independent and have the exponential distribution with parameter $\lambda$. The initial state $R_0$ is equal to $U_1$, and the successive upward jumps of $R$ have sizes $U_2, U_3, \ldots$ Thus, $R_0$ and the sizes of the upward jumps are mutually independent. To see why $R$ is a Markov process, fix $t \geq 0$; think of $t$ as the present time. The past $(R_s : 0 \leq s \leq t)$ is determined by the initial state $R_0$ and the sizes of the jumps of $R$ that occur during $(0, t]$. The future $(R_s : s \geq t)$ is determined by the present state $R_t$ and the sizes of future jumps. Since the sizes of the future jumps are independent of the past, the future given $R_t$ is conditionally independent of the past.
(b) For $c \geq 0$, $P\{R_t > c\} = P\{N_{t+c} - N_t = 0\} = e^{-\lambda c}$, because $N_{t+c} - N_t$ is a Poisson random variable with mean $\lambda c$. Thus, $R_t$ has the exponential distribution with parameter $\lambda$. This distribution is the same for all $t$, so it is the equilibrium distribution of $R$.
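As a hedged sanity check (not part of the original solution; the rate $\lambda = 1.5$, inspection time $t = 10$, and trial count are arbitrary choices), one can simulate Poisson arrival times and confirm that the residual lifetime at a fixed time behaves like an Exp($\lambda$) random variable:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5            # Poisson rate (arbitrary value for this check)
t = 10.0             # fixed observation time
n_trials = 100_000

# Interarrival times are Exp(lam); cumulative sums give the arrival times.
gaps = rng.exponential(1 / lam, size=(n_trials, 80))
arrivals = np.cumsum(gaps, axis=1)        # 80 arrivals reach far past t = 10

# Residual lifetime R_t: time from t to the first arrival after t.
first_after = (arrivals > t).argmax(axis=1)
residuals = arrivals[np.arange(n_trials), first_after] - t

print(residuals.mean())              # close to 1/lam = 0.667
print((residuals > 1.0).mean())      # close to exp(-lam) = 0.223
```

The empirical mean matches $1/\lambda$ and the empirical tail probability at $c = 1$ matches $e^{-\lambda c}$, consistent with part (b).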
(c) Let $t \geq 0$, $x > 0$, $\tau > 0$, and consider the conditional distribution of $R_{t+\tau}$ given $R_t = x$. On one hand, if $\tau < x$ then $R_{t+\tau} = x - \tau$ with conditional probability one. That is, if $\tau < x$, the conditional distribution is concentrated on the single point $x - \tau$. On the other hand, given $R_t = x$, the process starts over at time $t + x$, so that if $\tau \geq x$, the conditional distribution of $R_{t+\tau}$ given $R_t = x$ is the same as the unconditional distribution of $R_{\tau - x}$, which by part (b) is the exponential distribution with parameter $\lambda$. One way to summarize this is to use a delta function, letting $f(y, \tau | x)$ denote the conditional density of $R_{t+\tau}$ given $R_t = x$, and writing:
\[
f(y, \tau | x) = \begin{cases}
\delta(y - (x - \tau)) & \text{if } \tau < x \\
\lambda e^{-\lambda y} \text{ for } y \geq 0 & \text{if } \tau \geq x.
\end{cases}
\]
(d) $R$ is not continuous in the a.s. sample-path sense because the sample paths have jumps. The jump times of $R$ are the same as the jump times of $N$; the time of the $n$th jump has the gamma density with parameters $n$ and $\lambda$. Thus, the probability there is a jump of $R$ at any fixed time is zero, so $R$ is a.s. continuous at each $t$. It follows that $R$ is also continuous in the p. and d. senses. We claim also that $R$ is continuous in the m.s. sense. Here is a proof based on Proposition 2.1.13(c). Fix some time $t$; it suffices to show that $R$ is m.s. continuous at $t$. Let $(t_n : n \geq 1)$ be a sequence converging to $t$. Without loss of generality, suppose $|t_n - t| \leq 1$ for all $n$. As already discussed, $R_{t_n} \to R_t$ in p. as $n \to \infty$. Let $Y = R_{t+1} + 2$. Since $R_{t+1}$ has the exponential distribution with parameter $\lambda$, it follows that $E[Y^2] < \infty$. Also, due to the structure of the sample paths of $R$, $|R_s| \leq Y$ for all $s$ such that $|s - t| \leq 1$. Thus, $|R_{t_n}| \leq Y$ for all $n$. Therefore, $R_{t_n} \to R_t$ in the m.s. sense.
2 Prediction using a derivative
The derivative process $X'$ has mean zero,
\[
R_{X',X}(\tau) = R_X'(\tau) = \frac{-2\tau}{(1+\tau^2)^2}, \qquad
R_{X'}(\tau) = -R_X''(\tau) = \frac{2 - 6\tau^2}{(1+\tau^2)^3}.
\]
By the formulas for $\widehat{E}$ and the resulting MSE,
\[
\widehat{E}[X_\tau \mid X'_0] = \frac{R_{X,X'}(\tau)\,X'_0}{R_{X'}(0)} = \frac{\tau X'_0}{(1+\tau^2)^2}
\]
\[
\mathrm{MSE} = R_X(0) - \frac{R_{X,X'}(\tau)^2}{R_{X'}(0)} = 1 - \frac{2\tau^2}{(1+\tau^2)^4},
\]
which is minimized at $\tau = \frac{1}{\sqrt{3}} \approx 0.5774$.
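As a quick numerical sketch (the grid and its range are arbitrary choices), the minimizing $\tau$ can be confirmed by evaluating the MSE expression on a fine grid:

```python
import numpy as np

tau = np.linspace(0.01, 3.0, 300_000)
mse = 1 - 2 * tau**2 / (1 + tau**2) ** 4   # MSE as a function of tau
tau_star = tau[np.argmin(mse)]

print(tau_star)     # close to 1/sqrt(3) = 0.5774
print(mse.min())    # minimum MSE = 1 - 2(1/3)/(4/3)^4 = 0.7891
```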
3 Prediction of future integral of a Gaussian Markov process
(a) Since $J$ is the integral of a Gaussian process, $J$ has a Gaussian distribution. It remains to find the mean and variance. $E[J] = \int_0^\infty e^{-\alpha t} E[X_t]\,dt = 0$.
\[
\begin{aligned}
\mathrm{Var}(J) = E[J^2] &= \int_0^\infty\!\!\int_0^\infty e^{-\alpha s} e^{-\alpha t} e^{-\beta|s-t|}\,ds\,dt \\
&= 2\int_0^\infty\!\!\int_0^t e^{-\alpha s} e^{-\alpha t} e^{-\beta(t-s)}\,ds\,dt \\
&= 2\int_0^\infty e^{-(\alpha+\beta)t}\int_0^t e^{(\beta-\alpha)s}\,ds\,dt \\
&= \frac{2}{\beta-\alpha}\int_0^\infty e^{-(\alpha+\beta)t}\left(e^{(\beta-\alpha)t} - 1\right)dt \\
&= \frac{2}{\beta-\alpha}\left(\frac{1}{2\alpha} - \frac{1}{\alpha+\beta}\right) = \frac{1}{\alpha(\alpha+\beta)}
\end{aligned}
\]
(The above gives the correct answer even if $\alpha = \beta$; one can check this directly or by continuity.)
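A hedged numerical check of this formula (with arbitrary test values $\alpha = 1$, $\beta = 2$): truncating the double integral at a finite horizon and evaluating it with a midpoint Riemann sum should reproduce $1/(\alpha(\alpha+\beta)) = 1/3$:

```python
import numpy as np

alpha, beta = 1.0, 2.0           # arbitrary test values
h, horizon = 0.01, 10.0          # step size and truncation of the integrals
t = np.arange(h / 2, horizon, h)          # midpoint grid on [0, horizon]
s, u = np.meshgrid(t, t)

# Var(J) = double integral of exp(-alpha*s - alpha*u - beta*|s - u|) ds du
integrand = np.exp(-alpha * s - alpha * u - beta * np.abs(s - u))
var_numeric = integrand.sum() * h * h

var_formula = 1 / (alpha * (alpha + beta))
print(var_numeric, var_formula)   # both close to 1/3
```

The truncation at $t = 10$ is harmless here because the integrand decays like $e^{-2\alpha t}$.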
(b) Since $X_0$ and $J$ are jointly Gaussian, we can use the standard formulas for conditional expectation and MSE. Both $J$ and $X_0$ have mean zero,
\[
\mathrm{Cov}(J, X_0) = \int_0^\infty e^{-\alpha t} e^{-\beta|t-0|}\,dt = \frac{1}{\alpha+\beta},
\]
and $\mathrm{Var}(X_0) = R_X(0) = 1$. So
\[
E[J|X_0] = \widehat{E}[J|X_0] = \frac{X_0}{\alpha+\beta}
\quad\text{and}\quad
\mathrm{MSE} = \frac{1}{\alpha(\alpha+\beta)} - \left(\frac{1}{\alpha+\beta}\right)^2 = \frac{\beta}{\alpha(\alpha+\beta)^2}.
\]
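The last simplification can be spot-checked numerically for a few sample parameter values (a trivial sketch, not part of the solution):

```python
# Verify 1/(a(a+b)) - (1/(a+b))^2 = b/(a(a+b)^2) for sample values of a, b.
checks = []
for a, b in [(1.0, 2.0), (0.5, 0.5), (3.0, 0.1)]:
    lhs = 1 / (a * (a + b)) - (1 / (a + b)) ** 2
    rhs = b / (a * (a + b) ** 2)
    checks.append(abs(lhs - rhs))
print(max(checks))   # agreement to floating-point precision
```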
4 A two-state stationary Markov process
(a) Since $E[X_t] = 0$ for each $t$ and since $X_t$ takes values in $\{-1, 1\}$, the distribution of $X_t$ for any $t$ must be $\pi_i = 0.5$ for $i \in \{-1, 1\}$. Solving the Kolmogorov forward equations (see Example 4.9.3) yields the distribution of the process at any time $t \geq 0$ for any given initial distribution:
\[
\pi(t) = \pi(0)e^{-2\alpha t} + (0.5, 0.5)\left(1 - e^{-2\alpha t}\right)
\]
So the transition probability functions are given by:
\[
p_{ij}(\tau) = \begin{cases}
0.5\left(1 + e^{-2\alpha\tau}\right) & \text{if } i = j \\
0.5\left(1 - e^{-2\alpha\tau}\right) & \text{if } i \neq j
\end{cases}
\]
so that for $\tau \geq 0$,
\[
R_X(\tau) = E[X(\tau)X(0)] = \sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}} P\{X(0)=i,\, X(\tau)=j\}\,ij
= \sum_{i\in\{-1,1\}}\sum_{j\in\{-1,1\}} \pi_i\, p_{ij}(\tau)\,ij
\]
\[
= \frac{1}{4}\left[(1 + e^{-2\alpha\tau}) + (1 + e^{-2\alpha\tau}) - (1 - e^{-2\alpha\tau}) - (1 - e^{-2\alpha\tau})\right] = e^{-2\alpha\tau}.
\]
Hence for all $\tau$, $R_X(\tau) = e^{-2\alpha|\tau|}$.
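A hedged simulation check (assuming, consistent with the relaxation rate $2\alpha$ above, that the chain flips state at rate $\alpha$; the values $\alpha = 1$, $\tau = 0.7$ are arbitrary): the flip count $K$ over $[0, \tau]$ is Poisson with mean $\alpha\tau$ and $X(\tau)X(0) = (-1)^K$, so the sample mean of $(-1)^K$ should approach $e^{-2\alpha\tau}$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, tau = 1.0, 0.7            # arbitrary test values
n = 200_000

# Flip count over [0, tau] is Poisson(alpha * tau); X(tau)X(0) = (-1)**K.
k = rng.poisson(alpha * tau, size=n)
r_hat = ((-1.0) ** k).mean()

print(r_hat, np.exp(-2 * alpha * tau))   # both close to 0.2466
```

This also re-derives the answer analytically: $E[(-1)^K] = e^{-\alpha\tau}\sum_k (-\alpha\tau)^k/k! = e^{-2\alpha\tau}$.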
(b) For all $\alpha$, because $R_X$ is continuous.
(c) For $\alpha = 0$, because $R_X$ is twice continuously differentiable in a neighborhood of zero if and only if $\alpha = 0$.
(d) For $\alpha > 0$, because $\lim_{\tau\to\infty} R_X(\tau) = 0$ for $\alpha > 0$, whereas $\lim_{\tau\to\infty} R_X(\tau) = 1 \neq 0$ if $\alpha = 0$.
5 Some Fourier series representations
(a) The coordinates of $f$ are given by $c_i = (f, \phi_i)$ for $i \geq 1$. Integrating, we find $c_1 = \frac{T^{3/2}}{2}$. For $k \geq 1$, we use integration by parts to obtain: $c_{2k} = 0$ and $c_{2k+1} = -\frac{\sqrt{2T}\,T}{2\pi k}$.
(b) The best $N$ eigenfunctions to use are the ones with the largest magnitude coordinates. Thus,
\[
f^{(N)}(t) = \frac{T^{3/2}}{2}\,\phi_1(t) - \sum_{k=1}^{N-1} \frac{\sqrt{2T}\,T}{2\pi k}\,\phi_{2k+1}(t).
\]
We find $\|f\|^2 = \int_0^T |f(t)|^2\,dt = \frac{T^3}{3}$ (and we can check that $\sum_{i=1}^\infty c_i^2 = \frac{T^3}{3}$ too). Now
\[
\|f - f^{(N)}\|^2 = \sum_{k=N}^\infty c_{2k+1}^2 = \frac{T^3}{2\pi^2}\sum_{k=N}^\infty \frac{1}{k^2}
\]
so
\[
\frac{\|f - f^{(N)}\|^2}{\|f\|^2} = \frac{3}{2\pi^2}\sum_{k=N}^\infty \frac{1}{k^2}
\leq \frac{3}{2\pi^2}\int_{N-1}^\infty \frac{dx}{x^2} = \frac{3}{2\pi^2(N-1)} \leq 0.01
\]
if $N \geq 1 + \frac{3}{2\pi^2(0.01)} \approx 16.2$, so $N = 17$ suffices.
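Summing the tail series directly (a sketch; the exact tail sum is slightly smaller than the integral bound used above) confirms that $N = 17$ meets the 1% target:

```python
import math

N = 17
# Relative energy error: (3 / (2 pi^2)) * sum over k >= N of 1/k^2
tail = sum(1.0 / k**2 for k in range(N, 200_000))
rel_err = 3 / (2 * math.pi**2) * tail

print(rel_err)   # about 0.0092, under the 0.01 target
```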
(c) Without loss of generality, we assume the parameter $\sigma^2$ of the Brownian motion is one. The $N$-dimensional random process closest to $W$ in the mean squared $L^2$ norm sense is obtained by using the $N$ terms of the KL expansion of $W$ with the largest eigenvalues. Note $E[\|W\|^2] = E\left[\int_0^T W_t^2\,dt\right] = \int_0^T t\,dt = \frac{T^2}{2}$. The eigenvalues for the KL expansion of $W$ are given by $\lambda_n = \frac{4T^2}{(2n+1)^2\pi^2}$ for $n \geq 0$. Thus,
\[
W^{(N)}(t) = \sum_{n=0}^{N-1} (W, \phi_n)\,\phi_n(t).
\]
\[
E[\|W - W^{(N)}\|^2] = \sum_{n=N}^\infty \lambda_n = \frac{4T^2}{\pi^2}\sum_{n=N}^\infty \frac{1}{(2n+1)^2}
\]
so
\[
\frac{E[\|W - W^{(N)}\|^2]}{E[\|W\|^2]} = \frac{8}{\pi^2}\sum_{n=N}^\infty \frac{1}{(2n+1)^2}
\leq \frac{4}{\pi^2}\int_{2N}^\infty \frac{dx}{x^2} = \frac{2}{\pi^2 N} \leq 0.01
\]
if $N \geq \frac{2}{(0.01)\pi^2} \approx 20.26$, so $N = 21$ suffices.
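As a consistency sketch on the eigenvalues: the full sum $\sum_n \lambda_n$ must equal $E[\|W\|^2] = T^2/2$ (this reflects $\sum_{n\geq 0} (2n+1)^{-2} = \pi^2/8$), and the exact tail ratio at $N = 21$ can be compared with the 1% target:

```python
import math

T = 1.0
M = 500_000                      # truncation of the infinite eigenvalue sum
lam = [4 * T**2 / ((2 * n + 1) ** 2 * math.pi**2) for n in range(M)]

total = sum(lam)                 # should approach T^2/2 = 0.5
tail_ratio = sum(lam[21:]) / (T**2 / 2)

print(total)        # close to 0.5
print(tail_ratio)   # about 0.0096, under the 0.01 target
```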
6 First order differential equation driven by Gaussian white noise
(a) Since $\mu_N \equiv 0$ it follows that $\mu_X(t) = x_0 e^{-\lambda t}$. The covariance function of $X$ is given, for $s \leq t$, by:
\[
\begin{aligned}
C_X(s,t) &= \int_0^s\!\!\int_0^t e^{-\lambda(s-u)} e^{-\lambda(t-v)} \sigma^2\,\delta(u-v)\,dv\,du \\
&= \int_0^s e^{-\lambda(s-u)} e^{-\lambda(t-u)} \sigma^2\,du
= \sigma^2 e^{-\lambda(s+t)} \int_0^s e^{2\lambda u}\,du \\
&= \frac{\sigma^2}{2\lambda}\left(e^{-\lambda(t-s)} - e^{-\lambda(s+t)}\right)
\end{aligned}
\]
By the symmetry of $C_X$, it is given in general by
\[
C_X(s,t) = \frac{\sigma^2}{2\lambda}\left(e^{-\lambda|t-s|} - e^{-\lambda(t+s)}\right)
\]
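A hedged Euler-Maruyama simulation of the equation (with arbitrary test values $\lambda = \sigma = 1$ and $x_0 = 0$; a deterministic initial condition does not affect the covariance) can be compared against this formula:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, sigma = 1.0, 1.0                       # arbitrary test values
h, n_steps, n_paths = 0.004, 250, 100_000   # time grid covers [0, 1]

x = np.zeros(n_paths)       # X_0 = 0 in every path
x_half = None
for step in range(1, n_steps + 1):
    # Euler-Maruyama step for X' = -lam * X + sigma * (white noise)
    x = x * (1 - lam * h) + sigma * np.sqrt(h) * rng.standard_normal(n_paths)
    if step == n_steps // 2:
        x_half = x.copy()   # snapshot at s = 0.5

s, t = 0.5, 1.0
cov_mc = np.mean(x_half * x)
cov_formula = sigma**2 / (2 * lam) * (np.exp(-lam * (t - s)) - np.exp(-lam * (t + s)))
print(cov_mc, cov_formula)   # both close to 0.192
```

The Monte Carlo estimate of $\mathrm{Cov}(X_{0.5}, X_1)$ agrees with $\frac{\sigma^2}{2\lambda}(e^{-\lambda(t-s)} - e^{-\lambda(t+s)})$ up to discretization and sampling error.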
(b) Let $r < s < t$. It must be checked that
\[
\frac{C_X(r,s)\,C_X(s,t)}{C_X(s,s)} = C_X(r,t),
\]
or $\left(e^{-\lambda(s-r)} - e^{-\lambda(s+r)}\right)\left(e^{-\lambda(t-s)} - e^{-\lambda(t+s)}\right) = \left(e^{-\lambda(t-r)} - e^{-\lambda(t+r)}\right)\left(1 - e^{-2\lambda s}\right)$, which is easily done.
(c) As $t \to \infty$, $\mu_X(t) \to 0$ and $C_X(t+\tau, t) \to \frac{\sigma^2}{2\lambda} e^{-\lambda|\tau|}$.