
ECE 534 RANDOM PROCESSES FALL 2011

SOLUTIONS TO PROBLEM SET 7


1 Some filtering
(a) View Y as the output of a linear, time-invariant system with input X. For a deterministic input
x to the system, the output y satisfies $j\omega \hat{y}(\omega) = -\hat{y}(\omega) + (j\omega - 1)\hat{x}(\omega)$, so the system transfer
function is $H(\omega) = \frac{j\omega - 1}{j\omega + 1}$. Thus, $|H(\omega)|^2 = \frac{1 + \omega^2}{1 + \omega^2} = 1$, so $S_Y = |H|^2 S_X = S_X$; that is, Y has the same
power spectral density and the same total power as X.
(b) Letting $D_t = X_t - Y_t$, we can view D as the output of a linear time-invariant system with
input X and transfer function $K(\omega) = 1 - H(\omega) = \frac{j\omega + 1 - (j\omega - 1)}{j\omega + 1} = \frac{2}{j\omega + 1}$. So $|K(\omega)|^2 = \frac{4}{\omega^2 + 1}$. Thus,
\[ E[(X_t - Y_t)^2] = (\text{power of } D) = \int_{-\infty}^{\infty} S_X(\omega) \frac{4}{\omega^2 + 1} \, \frac{d\omega}{2\pi}. \]
Since $|K(\omega)|^2 \le |K(0)|^2 = 4$ for all $\omega$, the power of D is maximized over $S_X$, subject to the given
total power of X, by putting all the power at frequency 0. That is, taking $S_X(\omega) = 10(2\pi)\delta(\omega)$,
corresponding to $X = (X_t : t \in \mathbb{R})$ being constant in time. Then $E[(X_t - Y_t)^2] = 40$.
(c) Similarly, $E[(X_t - Y_t)^2]$ can be made arbitrarily close to zero by selecting $S_X$ to have its mass far
from zero. For example, if $S_X(\omega) = 5(2\pi)(\delta(\omega - \omega_0) + \delta(\omega + \omega_0))$, then $E[(X_t - Y_t)^2] = \frac{40}{1 + \omega_0^2}$, which
converges to zero as $\omega_0 \to \infty$. This corresponds to $X_t = \sqrt{20}\cos(\omega_0 t + \Theta)$, where $\Theta$ is uniformly
distributed over $[0, 2\pi]$.
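As a quick numerical sanity check (a minimal sketch in Python with NumPy; the frequency grid and the test frequencies $\omega_0$ are arbitrary choices), one can verify the all-pass property of $H$ and the error-power formula of parts (b) and (c):
\begin{verbatim}
import numpy as np

# Transfer functions from part (a): H(w) = (jw - 1)/(jw + 1), K = 1 - H
def H(w):
    return (1j * w - 1) / (1j * w + 1)

def K(w):
    return 1 - H(w)

w = np.linspace(-50, 50, 1001)
assert np.allclose(np.abs(H(w)) ** 2, 1.0)              # all-pass: |H|^2 = 1
assert np.allclose(np.abs(K(w)) ** 2, 4 / (w**2 + 1))   # |K|^2 = 4/(w^2 + 1)

# Part (c): error power when S_X puts total power 10 at frequencies +/- w0
for w0 in [0.0, 1.0, 10.0, 100.0]:
    print(f"w0 = {w0:6.1f}  error power = {10 * abs(K(w0))**2:.4f}")  # 40/(1+w0^2)
\end{verbatim}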
2 Synthesizing a random process with specified spectral density
Recall from Example 8.4.2 that a Gaussian random process Z with a rectangular spectral density
$S_Z(2\pi f) = I_{\{-f_o \le f \le f_o\}}$ can be represented as (note: if $\frac{1}{T} = 2f_o$, then $\frac{t - nT}{T} = 2f_o t - n$):
\[ Z_t = \sum_{n=-\infty}^{\infty} A_n \sqrt{2f_o} \, \mathrm{sinc}(2f_o t - n), \]
where the $A_n$'s are independent $N(0, 1)$ random variables. (To double check that Z is scaled
correctly, note that the total power of Z is equal to both the integral of the psd and to $E[Z_0^2]$.) The
desired psd $S_X$ can be represented as the sum of two rectangular psds: $S_X(2\pi f) = I_{\{-20 \le f \le 20\}} + I_{\{-10 \le f \le 10\}}$, and the psd of the sum of two independent WSS processes is the sum of the psds, so
X could be represented as:
\[ X_t = \sum_{n=-\infty}^{\infty} A_n \sqrt{40} \, \mathrm{sinc}(40t - n) + \sum_{n=-\infty}^{\infty} B_n \sqrt{20} \, \mathrm{sinc}(20t - n), \]
where the A's and B's are independent $N(0, 1)$ random variables. This requires 60 samples per
unit simulation time.
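A minimal simulation sketch of this representation (assuming the normalized convention $\mathrm{sinc}(u) = \sin(\pi u)/(\pi u)$, which is what np.sinc computes; the horizon, time grid, and truncation of the infinite sums are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 2001)          # simulation horizon (illustrative)

def band_process(t, two_f0, rng):
    # Z_t = sum_n A_n sqrt(2 f_0) sinc(2 f_0 t - n), truncated in n,
    # for a rectangular psd of height 1 on [-f_0, f_0]
    n = np.arange(-50, int(two_f0 * t.max()) + 50)
    A = rng.standard_normal(n.size)
    return np.sqrt(two_f0) * (A * np.sinc(two_f0 * t[:, None] - n)).sum(axis=1)

X = band_process(t, 40, rng) + band_process(t, 20, rng)
print("empirical power:", X.var())    # should be near 40 + 20 = 60
\end{verbatim}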
An approach using fewer samples is to generate a random process Y with psd $S_Y(2\pi f) = I_{\{-20 \le f \le 20\}}$
and then filter Y using a filter with transfer function H satisfying $|H|^2 = S_X$. For example, we could
simply take $H(2\pi f) = \sqrt{S_X(2\pi f)} = I_{\{-20 \le f \le 20\}} + (\sqrt{2} - 1) I_{\{-10 \le f \le 10\}}$, so X could be represented
as:
\[ X = \left( \sum_{n=-\infty}^{\infty} A_n \sqrt{40} \, \mathrm{sinc}(40t - n) \right) * h, \]
where $h(t) = (40)\mathrm{sinc}(40t) + (\sqrt{2} - 1)(20)\mathrm{sinc}(20t)$. This approach requires 40 samples per unit
simulation time.
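A discrete-time sketch of this filtered representation (the sample rate of 40 per unit time comes from the solution; the number of samples and the truncation length of h are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
fs = 40.0                                     # samples per unit simulation time
Y = np.sqrt(fs) * rng.standard_normal(4000)   # Y(n/40) = sqrt(40) A_n

# h(t) = 40 sinc(40 t) + (sqrt(2) - 1) 20 sinc(20 t), truncated to |t| <= 5
tau = np.arange(-200, 201) / fs
h = fs * np.sinc(fs * tau) + (np.sqrt(2) - 1) * 20 * np.sinc(20 * tau)

# Riemann-sum approximation of the convolution integral (dt = 1/fs)
X = np.convolve(Y, h, mode="same") / fs

# integral of S_X over f is 60, up to truncation and edge effects
print("empirical power of X:", X.var())
\end{verbatim}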
3 Some linear transformations of some random processes
(a) Yes, X is the result of passing the stationary process U through the linear time-invariant
system with impulse response function $h(k) = a^k I_{\{k \ge 0\}}$. Thus X is stationary. Since $\mu_U = \mu$ and
$C_U(n) = \sigma^2 I_{\{n = 0\}}$, we find $E[X_k] = \mu \sum_{k \in \mathbb{Z}} h(k) = \frac{\mu}{1 - a}$, and $C_X(n) = h * \widetilde{h} * C_U(n) = \sigma^2 h * \widetilde{h}(n) = \sigma^2 \sum_{k \in \mathbb{Z}} h(k) h(k - n) = \frac{\sigma^2 a^{|n|}}{1 - a^2}$. (Think of the cases $n \ge 0$ and $n < 0$ separately.) A short
simulation check appears after part (f) below.
(b) Yes. Fix a time k and view it as the present time. The past of X is determined by $(U_j : j \le k)$,
and the future of X is determined by $X_k$ and $(U_j : j \ge k + 1)$, through the update equations
$X_{n+1} = aX_n + U_{n+1}$ for $n \ge k$. Since $(U_j : j \le k)$ and $(U_j : j \ge k + 1)$ are independent, the
Markov property follows.
(c) Yes, X is mean ergodic in the m.s. sense, because $C_X(n) \to 0$ as $n \to \infty$.
(d) Yes. This is similar to part (a), but now h is random: $h(k) = A^k I_{\{k \ge 0\}}$. Y is stationary for
the same reason X is stationary. The mean is $\mu_Y = \mu E\left[\frac{1}{1 - A}\right]$. To find $C_Y$ we first find $R_Y$ as follows:
\[ R_Y(n) = E\left[ R_X(n) \Big|_{a = A} \right] = \mu^2 E\left[ \frac{1}{(1 - A)^2} \right] + E\left[ \frac{\sigma^2 A^{|n|}}{1 - A^2} \right]. \]
Then, $C_Y(n) = R_Y(n) - \mu_Y^2 = \mu^2 \mathrm{Var}\left( \frac{1}{1 - A} \right) + E\left[ \frac{\sigma^2 A^{|n|}}{1 - A^2} \right]$.
(e) No. Fix a time k and view it as the present time. The past of Y would yield a good estimate of the
value of A, which cannot be inferred from $Y_k$ alone. Thus, the future of Y, which depends on both
$Y_k$ and A, is not conditionally independent of the past given $Y_k$.
(f) No. For A fixed, the time average of Y is $\frac{\mu}{1 - A}$, by parts (a) and (c). Since this time average is
not a constant, Y is not mean ergodic in the m.s. sense. Another justification is to note that as
$n \to \infty$, $C_Y(n) \to \mu^2 \mathrm{Var}\left( \frac{1}{1 - A} \right) \ne 0$.
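The promised check of part (a) by simulation (a minimal sketch; the parameter values a = 0.8, mu = 1, sigma = 1 are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
a, mu, sigma = 0.8, 1.0, 1.0
N = 200_000
U = mu + sigma * rng.standard_normal(N)   # i.i.d., mean mu, variance sigma^2

X = np.empty(N)
X[0] = mu / (1 - a)                       # start near steady state
for n in range(N - 1):
    X[n + 1] = a * X[n] + U[n + 1]        # the update equation from part (b)

print("mean:", X.mean(), "theory:", mu / (1 - a))
Xc = X - X.mean()
for lag in [0, 1, 5]:
    emp = (Xc[:N - lag] * Xc[lag:]).mean()
    thy = sigma**2 * a**lag / (1 - a**2)
    print(f"C_X({lag}) empirical {emp:.4f}  theory {thy:.4f}")
\end{verbatim}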
4 A standard noncausal estimation problem
(a) $\hat{g}(\omega) = \int_{-\infty}^{0} g(t) e^{-j\omega t} \, dt + \int_{0}^{\infty} g(t) e^{-j\omega t} \, dt = \frac{1}{\alpha + j\omega} + \frac{1}{\alpha - j\omega} = \frac{2\alpha}{\alpha^2 + \omega^2}$.
(So $\int_{-\infty}^{\infty} \frac{\alpha}{\alpha^2 + \omega^2} \, \frac{d\omega}{2\pi} = \frac{1}{2}$.)
(b) $\int_{-\infty}^{\infty} \frac{1}{a + b\omega^2} \, \frac{d\omega}{2\pi} = \int_{-\infty}^{\infty} \frac{1/b}{a/b + \omega^2} \, \frac{d\omega}{2\pi} = \frac{1/b}{2\sqrt{a/b}} = \frac{1}{2\sqrt{ab}}$.
(c) By Example 9.1.1 in the notes, $H(\omega) = \frac{S_X(\omega)}{S_X(\omega) + S_N(\omega)}$. By the given and part (a),
$S_X(\omega) = \frac{2\alpha}{\alpha^2 + \omega^2}$ and $S_N(\omega) = \sigma^2$. So
\[ H(\omega) = \frac{2\alpha}{2\alpha + \sigma^2(\alpha^2 + \omega^2)} = \frac{2\alpha/\sigma^2}{(2\alpha/\sigma^2 + \alpha^2) + \omega^2} \;\leftrightarrow\; h(t) = \frac{\alpha/\sigma^2}{\sqrt{2\alpha/\sigma^2 + \alpha^2}} \exp\left( -\sqrt{2\alpha/\sigma^2 + \alpha^2} \, |t| \right). \]
(d) By Example 9.1.1 in the notes and part (b),
\[ \mathrm{MSE} = \int_{-\infty}^{\infty} H(\omega) S_N(\omega) \, \frac{d\omega}{2\pi} = \int_{-\infty}^{\infty} \frac{2\alpha}{(2\alpha/\sigma^2 + \alpha^2) + \omega^2} \, \frac{d\omega}{2\pi} = \frac{\alpha}{\sqrt{2\alpha/\sigma^2 + \alpha^2}} = \frac{1}{\sqrt{1 + 2/(\alpha\sigma^2)}}. \]
$\mathrm{MSE} \to 0$ as $\sigma^2 \to 0$ and $\mathrm{MSE} \to 1 = E[X_t^2]$ as $\sigma^2 \to \infty$, as expected.
(e) The estimation error $D_t$ is orthogonal to constants and to $Y_s$ for all s by the orthogonality
principle, so $C_{D,Y} \equiv 0$.
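A quick numerical cross-check of parts (c) and (d) (a sketch; the values of alpha and sigma^2 and the integration grid are arbitrary choices):
\begin{verbatim}
import numpy as np

alpha, sigma2 = 1.0, 0.5

w = np.linspace(-2000, 2000, 2_000_001)
S_X = 2 * alpha / (alpha**2 + w**2)
H = S_X / (S_X + sigma2)              # noncausal Wiener filter of part (c)

dw = w[1] - w[0]
mse_numeric = (H * sigma2).sum() * dw / (2 * np.pi)
mse_closed = 1 / np.sqrt(1 + 2 / (alpha * sigma2))
print(mse_numeric, mse_closed)        # agree up to truncation of the w-range
\end{verbatim}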
5 Linear and nonlinear filtering
(a) The equilibrium distribution is the solution to $\pi Q = 0$; $\pi = (0.25, 0.25, 0.25, 0.25)$. Thus, for each
t, $Z_t$ takes each of its possible values with probability 0.25. In particular, $\mu_Z = \sum_{i \in S} (0.25) i =
(0.25)(-3 + (-1) + 1 + 3) = 0$. The Kolmogorov forward equation $\frac{\partial \pi(t)}{\partial t} = \pi(t) Q$ and the fact
$\sum_i \pi_i(t) = 1$ for all t yield $\dot{\pi}_i(t) = -3\lambda \pi_i(t) + \lambda(1 - \pi_i(t)) = -4\lambda \pi_i(t) + \lambda$ for each state i. Thus,
$\pi(t) = \pi + (\pi(0) - \pi) e^{-4\lambda t}$. Considering the process starting in state i yields $p_{i,j}(\tau) = 0.25 + (\delta_{i,j} - 0.25) e^{-4\lambda \tau}$.
Therefore, for $\tau \ge 0$,
\begin{align*}
R_Z(\tau) = E[Z(\tau) Z(0)] &= \sum_{i \in S} \sum_{j \in S} ij \, \pi_i \, p_{i,j}(\tau) \\
&= (0.25) \sum_{i \in S} \sum_{j \in S} ij \, \delta_{i,j} \, e^{-4\lambda \tau} + (0.25)^2 (1 - e^{-4\lambda \tau}) \underbrace{\sum_{i \in S} \sum_{j \in S} ij}_{=0} \\
&= (0.25)\left( (-3)^2 + (-1)^2 + 1^2 + 3^2 \right) e^{-4\lambda \tau} = 5 e^{-4\lambda \tau}.
\end{align*}
So $R_Z(\tau) = 5 e^{-4\lambda |\tau|}$.
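A simulation sketch to check $R_Z(\tau) = 5e^{-4\lambda|\tau|}$ (assuming the state space $S = \{-3, -1, 1, 3\}$ and, for concreteness, jump rate $\lambda = 1$; the step size, horizon, and lags are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
states = np.array([-3.0, -1.0, 1.0, 3.0])
lam, dt, N = 1.0, 0.002, 500_000

# Euler-type simulation: in each step of length dt, jump to one of the
# other three states (each with rate lam), total probability 3*lam*dt.
idx = np.empty(N, dtype=int)
idx[0] = rng.integers(4)
jump = rng.random(N) < 3 * lam * dt
step = rng.integers(1, 4, size=N)     # uniform over the other states
for n in range(1, N):
    idx[n] = (idx[n - 1] + step[n]) % 4 if jump[n] else idx[n - 1]
Z = states[idx]

for tau in [0.0, 0.25, 0.5]:
    k = int(round(tau / dt))
    emp = (Z[:N - k] * Z[k:]).mean()
    thy = 5 * np.exp(-4 * lam * tau)
    print(f"R_Z({tau}) empirical {emp:.3f}  theory {thy:.3f}")
\end{verbatim}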
(b) Thus, $S_Z(\omega) = \frac{40\lambda}{16\lambda^2 + \omega^2}$. Also, $S_{YZ} = S_Z$. Thus, the optimal filter is given by
\[ H_{\mathrm{opt}}(\omega) = \frac{S_Z(\omega)}{S_Z(\omega) + S_N(\omega)} = \frac{40\lambda}{40\lambda + \sigma^2(16\lambda^2 + \omega^2)}. \]
The MSE is given by $\mathrm{MSE} = \int_{-\infty}^{\infty} H_{\mathrm{opt}}(\omega) S_N(\omega) \, \frac{d\omega}{2\pi} = \frac{5}{\sqrt{\frac{5}{2\lambda\sigma^2} + 1}}$.
(c) It is known that $P\{|Z_t| \le 3\} = 1$, so by hard limiting the estimator found in (b) to the interval
$[-3, 3]$, a smaller MSE results. That is, let
\[ \widehat{Z}^{(NL)}_t = \begin{cases} 3 & \text{if } \widehat{Z}_t \ge 3 \\ \widehat{Z}_t & \text{if } |\widehat{Z}_t| \le 3 \\ -3 & \text{if } \widehat{Z}_t \le -3 \end{cases} \]
Then $(Z_t - \widehat{Z}^{(NL)}_t)^2 \le (Z_t - \widehat{Z}_t)^2$, and the inequality is strict on the positive probability event
$\{|\widehat{Z}_t| > 3\}$.
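In code, the hard limiter is a one-liner (a sketch; z_hat stands for samples of the linear estimate $\widehat{Z}_t$):
\begin{verbatim}
import numpy as np

def hard_limit(z_hat, c=3.0):
    # Clip the linear estimate to [-c, c]; since P{|Z_t| <= c} = 1,
    # this can only move the estimate closer to Z_t.
    return np.clip(z_hat, -c, c)
\end{verbatim}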
(d) The initial distribution for the hidden Markov model should be the equilibrium distribution,
$\pi = (0.25, 0.25, 0.25, 0.25)$. By the definition of the generator matrix Q, the one-step transition
probabilities for a length-$\Delta$ time step are given by $p_{i,j}(\Delta) = \delta_{i,j} + q_{i,j}\Delta + o(\Delta)$. So we ignore the
$o(\Delta)$ term and let $a_{i,j} = \lambda\Delta$ if $i \ne j$ and $a_{i,i} = 1 - 3\lambda\Delta$, for $i, j \in S$. (Alternatively, we could
let $a_{i,j} = p_{i,j}(\Delta)$, that is, use the exact transition probability matrix for time duration $\Delta$.) If $\Delta$
is small enough, then Z will be constant over most of the intervals of length $\Delta$. Given $Z = i$ over
the time interval $[(k-1)\Delta, k\Delta]$, $\widetilde{Y}_k = i\Delta + \int_{(k-1)\Delta}^{k\Delta} N_t \, dt$, which has the $N(i\Delta, \sigma^2\Delta)$ distribution.
Thus, we set
\[ b_{i,y} = \frac{1}{\sqrt{2\pi\sigma^2\Delta}} \exp\left( -\frac{(y - i\Delta)^2}{2\sigma^2\Delta} \right). \]
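A sketch assembling these HMM ingredients (the values of lam, sigma2, and Delta are arbitrary choices):
\begin{verbatim}
import numpy as np

states = np.array([-3.0, -1.0, 1.0, 3.0])
lam, sigma2, Delta = 1.0, 0.5, 0.01

pi0 = np.full(4, 0.25)                 # equilibrium initial distribution

# Transition matrix: a_ij = lam*Delta off-diagonal, 1 - 3*lam*Delta on it
A = np.full((4, 4), lam * Delta)
np.fill_diagonal(A, 1 - 3 * lam * Delta)

def b(i, y):
    # Emission density: the integrated observation over one step is
    # N(i*Delta, sigma2*Delta) given hidden state value i.
    var = sigma2 * Delta
    return np.exp(-(y - i * Delta) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

print(A.sum(axis=1))                   # each row sums to 1
print(b(states, 0.0))                  # likelihoods of observing y = 0
\end{verbatim}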