
ECE 534 RANDOM PROCESSES FALL 2011

SOLUTIONS TO PROBLEM SET 3


1 Comparison of MMSE estimators for an example
(a) The minimum MSE estimator of X of the form g(U) is given by g(u) = u^3, because this
estimator has MSE equal to zero. That is, E[X|U] = X.
(b) We begin by calculating some moments.
\[
E[U^n] = \int_{-1}^{1} (0.5)\,u^n\,du =
\begin{cases} \frac{1}{n+1} & n \text{ even} \\ 0 & n \text{ odd} \end{cases}
\qquad
\mathrm{Var}(U) = E[U^2] - E[U]^2 = E[U^2] = \frac{1}{3}
\]
\[
E[X] = E[U^3] = 0 \qquad
\mathrm{Var}(X) = E[X^2] = E[U^6] = \frac{1}{7} \qquad
\mathrm{Cov}(X, U) = E[XU] - E[X]E[U] = E[U^4] = \frac{1}{5}
\]
So
\[
\widehat{E}[X|U] = E[X] + \frac{\mathrm{Cov}(X,U)}{\mathrm{Var}(U)}(U - E[U]) = \frac{\mathrm{Cov}(X,U)\,U}{\mathrm{Var}(U)} = \frac{3U}{5}
\]
and the mean square error is
\[
\mathrm{Var}(X) - \frac{\mathrm{Cov}(X,U)^2}{\mathrm{Var}(U)} = \frac{1}{7} - \frac{3}{25} = \frac{4}{175}.
\]
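As a quick sanity check (not part of the original solution), the following Python sketch simulates U
uniform on [-1, 1], sets X = U^3, and confirms empirically that the linear estimator 3U/5 has MSE
close to 4/175; the sample size is arbitrary.
\begin{verbatim}
# Illustrative Monte Carlo check of Problem 1(b); not part of the original solution.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
U = rng.uniform(-1.0, 1.0, size=n)   # U uniform on [-1, 1]
X = U**3                             # X = U^3, so E[X | U] = X has MSE zero

lin_est = 0.6 * U                    # best linear estimator (3/5) U
mse_linear = np.mean((X - lin_est)**2)

print("empirical MSE of (3/5)U:", mse_linear)   # close to 4/175
print("theoretical 4/175      :", 4/175)
\end{verbatim}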
2 Estimation with jointly Gaussian random variables
(a) E[W] = E[X + 2Y + 3] = E[X] + 2E[Y] + 3 = 13.
We use the fact \mathrm{Cov}(X, Y) = \rho\,\sigma_X\,\sigma_Y = (0.2)(3)(5) = 3 to get
\[
\mathrm{Var}(W) = \mathrm{Var}(X + 2Y + 3) = \mathrm{Var}(X) + 4\,\mathrm{Var}(Y) + 2 \cdot 2\,\mathrm{Cov}(X, Y) = 9 + 100 + 12 = 121.
\]
(b) Since W is a linear combination of jointly Gaussian random variables, W is Gaussian. So
\[
P\{W \ge 20\} = P\left\{ \frac{W - 13}{11} \ge \frac{20 - 13}{11} \right\} = Q\!\left( \frac{20 - 13}{11} \right) = Q(0.6364) = 0.2623.
\]
(c) Since W and Y are linear combinations of the jointly Gaussian random variables X and Y , the
variables W and Y are jointly Gaussian. Therefore, the best unconstrained estimator of Y given
W is the best linear estimator of Y given W.
So
\[
g^*(W) = \widehat{E}[Y|W] = E[Y] + \frac{\mathrm{Cov}(Y, W)}{\mathrm{Var}(W)}(W - E[W]).
\]
Using
\[
\mathrm{Cov}(Y, W) = \mathrm{Cov}(Y, X + 2Y + 3) = \mathrm{Cov}(Y, X) + 2\,\mathrm{Var}(Y) = 3 + 50 = 53,
\]
we find
\[
g^*(W) = \widehat{E}[Y|W] = 4 + \frac{53}{121}(W - 13)
\]
and
\[
\mathrm{MSE} = \mathrm{Var}(Y) - \frac{(\mathrm{Cov}(Y, W))^2}{\mathrm{Var}(W)} = 25 - \frac{(53)^2}{121} = 1.7851.
\]
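The numbers above are easy to verify numerically. The sketch below is illustrative only; it samples
(X, Y) jointly Gaussian with the moments used in the solution, where the means E[X] = 2 and
E[Y] = 4 are inferred from E[W] = 13 and part (c), so treat them as assumptions read off from
the problem.
\begin{verbatim}
# Illustrative Monte Carlo check of Problem 2; not part of the original solution.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([2.0, 4.0])                 # (E[X], E[Y]), inferred from the solution
cov = np.array([[9.0, 3.0], [3.0, 25.0]])   # Var(X)=9, Var(Y)=25, Cov(X,Y)=3
XY = rng.multivariate_normal(mean, cov, size=10**6)
X, Y = XY[:, 0], XY[:, 1]

W = X + 2*Y + 3
print("E[W], Var(W):", W.mean(), W.var())        # about 13 and 121
print("P(W >= 20)  :", np.mean(W >= 20))         # about 0.2623

Y_hat = 4 + (53/121)*(W - 13)                    # g*(W), the best estimator of Y given W
print("MSE         :", np.mean((Y - Y_hat)**2))  # about 1.7851
\end{verbatim}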
3 Projections onto nested linear subspaces
(a) By the Orthogonality Principle, since Z_1^* \in V_1, it must only be shown that (Z_0^* - Z_1^*) \perp Z for
all Z \in V_1. But this follows from the three facts:
\[
Z_0^* - Z_1^* = (X - Z_1^*) - (X - Z_0^*),
\]
(X - Z_1^*) \perp Z for all Z \in V_1, and (X - Z_0^*) \perp Z for all Z \in V_0, hence in particular for all
Z \in V_1, since V_1 \subset V_0.
(b) (i) V_0 = \{a + bY_1 + cY_2 : a, b, c \text{ are real constants}\} \supset V_1 = \{a + bY_1 : a, b \text{ are real constants}\}
(ii) V_0 = \{g(Y_1, Y_2) : g \text{ such that } E[g(Y_1, Y_2)^2] < \infty\} \supset V_1 = \{g(Y_1) : g \text{ such that } E[g(Y_1)^2] < \infty\}
(iii) V_0 = \{a + bY_1 : a, b \text{ are real constants}\} \supset V_1 = \text{the set of real constants}
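The nesting property of part (a) can be illustrated numerically for case (b)(i). The sketch below
(not part of the solution; the particular choice of X is arbitrary) approximates the projections by
sample-based least squares and checks that projecting X onto V_0 and then projecting the result onto
V_1 agrees with projecting X onto V_1 directly.
\begin{verbatim}
# Numerical illustration of Problem 3(a) for case (b)(i); not part of the original solution.
import numpy as np

rng = np.random.default_rng(0)
n = 10**5
Y1, Y2 = rng.standard_normal(n), rng.standard_normal(n)
X = 1.0 + 2.0*Y1 + 0.5*Y2 + Y1*Y2 + rng.standard_normal(n)  # an arbitrary square-integrable X

A0 = np.column_stack([np.ones(n), Y1, Y2])   # basis for V0 = span{1, Y1, Y2}
A1 = np.column_stack([np.ones(n), Y1])       # basis for V1 = span{1, Y1}

def proj(A, v):
    # sample-based least-squares projection of v onto the column span of A
    return A @ np.linalg.lstsq(A, v, rcond=None)[0]

Z0 = proj(A0, X)          # Z0*: projection of X onto V0
Z1_direct = proj(A1, X)   # Z1*: projection of X onto V1
Z1_nested = proj(A1, Z0)  # projection of Z0* onto V1

print(np.max(np.abs(Z1_direct - Z1_nested)))  # near zero, up to numerical error
\end{verbatim}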
4 Conditional third moment for jointly Gaussian variables
(a) Z has the same distribution as W + \mu, where W is a N(0, \sigma^2) random variable. Since E[W] = 0
and E[W^3] = 0,
\[
E[Z^3] = E[W^3 + 3W^2\mu + 3W\mu^2 + \mu^3] = 3\sigma^2\mu + \mu^3.
\]
An alternative approach is to use the characteristic function of Z.
(b) The conditional distribution of X given Y = y is N(\rho y, 1 - \rho^2). Therefore, the answer to
this part is obtained by replacing \mu by \rho Y and \sigma^2 by 1 - \rho^2 in the answer to part (a). That is,
\[
E[X^3 | Y] = 3\rho(1 - \rho^2)Y + \rho^3 Y^3.
\]
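Both formulas are easy to check by simulation. The sketch below is illustrative only; the values of
mu, sigma, rho, and y are arbitrary.
\begin{verbatim}
# Monte Carlo check of Problem 4; not part of the original solution.
import numpy as np

rng = np.random.default_rng(0)
n = 10**7

# Part (a): Z ~ N(mu, sigma^2), so E[Z^3] = 3*sigma^2*mu + mu^3.
mu, sigma = 1.5, 2.0
Z = mu + sigma*rng.standard_normal(n)
print(np.mean(Z**3), 3*sigma**2*mu + mu**3)

# Part (b): given Y = y, X ~ N(rho*y, 1 - rho^2),
# so E[X^3 | Y = y] = 3*rho*(1 - rho^2)*y + (rho*y)**3.
rho, y = 0.7, 0.8
X_given_y = rho*y + np.sqrt(1 - rho**2)*rng.standard_normal(n)
print(np.mean(X_given_y**3), 3*rho*(1 - rho**2)*y + (rho*y)**3)
\end{verbatim}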
5 Some identities for estimators, version 3
(a) TRUE. In fact, the estimators E[X|Y] and E[X|Y, Y^2] are identical, because any function of
(Y, Y^2) is a function of Y alone, so equality always holds in (a).
(b) FALSE. (It would be true if the inequality pointed in the other direction.) For example,
suppose Y is a N(0, 1) random variable and X = Y^2. Then \widehat{E}[X|Y] = E[X], because X and Y are
uncorrelated. However, \widehat{E}[X|Y, Y^2] = X, which has MSE equal to zero.
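The counterexample can also be seen numerically. In the sketch below (illustrative only), the best
linear estimator of X = Y^2 given Y alone has MSE close to Var(Y^2) = 2, while the best linear
estimator given (Y, Y^2) recovers X exactly.
\begin{verbatim}
# Monte Carlo illustration of the counterexample in Problem 5(b); not part of the original solution.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal(10**6)
X = Y**2

# Best linear estimator of X given Y alone: E[X] + Cov(X,Y)/Var(Y) * (Y - E[Y]).
cov_XY = np.mean(X*Y) - X.mean()*Y.mean()        # near 0, since E[Y^3] = 0
lin_Y = X.mean() + (cov_XY / Y.var())*(Y - Y.mean())
print("MSE given Y       :", np.mean((X - lin_Y)**2))    # about Var(Y^2) = 2

# The best linear estimator given (Y, Y^2) can use Y^2 itself, so it recovers X exactly.
lin_YY2 = Y**2
print("MSE given (Y, Y^2):", np.mean((X - lin_YY2)**2))  # 0
\end{verbatim}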
(c) TRUE. If X and Y are jointly Gaussian, then \widehat{E}[X|Y] = E[X|Y], so \widehat{E}[X|Y] has the minimum
MSE over all possible functions of Y. In particular, it has the minimum MSE over all possible
functions of Y of the form a + bY + cY^2. Therefore, \widehat{E}[X|Y] = \widehat{E}[X|Y, Y^2].
(d) TRUE. The estimator E[X|Y] minimizes the MSE over all functions of Y; in particular, its MSE
is at least as small as that of E[E[X|Z]|Y], which is also a function of Y.
(e) TRUE. The given condition implies that the mean E[X] has the minimum MSE over all possible
functions of Y (i.e., E[X] = E[X|Y]). Therefore, E[X] also has the minimum MSE over all possible
affine functions of Y, so \widehat{E}[X|Y] = E[X]. Thus, E[X|Y] = E[X] = \widehat{E}[X|Y].
6 Steady state gains for one-dimensional Kalman filter
(a) Let b_k = \sigma_k^2. The given equations show that the sequence b_k satisfies the recursion b_{k+1} = F(b_k),
where F(b) = \frac{b f^2}{1 + b} + 1, with the initial condition b_0 = \sigma_0^2 given. The function F is positive,
strictly increasing, and bounded. Thus if b_k \le b_{k+1} for some k, then b_{k+1} = F(b_k) \le F(b_{k+1}) = b_{k+2}.
Therefore, if b_0 \le b_1, then the sequence (b_k : k \ge 0) is monotone nondecreasing and bounded.
Similarly, if b_0 \ge b_1, then the sequence (b_k : k \ge 0) is monotone nonincreasing and bounded. Since
bounded monotone sequences have finite limits, the sequence (b_k : k \ge 0) converges.
(b) Denote the limit by b_\infty (so b_\infty = \sigma_\infty^2). Since F is a continuous function, letting k \to \infty
in the equation b_{k+1} = F(b_k) yields b_\infty = F(b_\infty), which has the unique nonnegative solution
\[
b_\infty = \frac{f^2 + \sqrt{f^4 + 4}}{2}.
\]
(c) If f = 0, the states x_k are uncorrelated with variance one. The observations y_0, \ldots, y_{k-1} are
therefore orthogonal to x_k, and the variance of the error, \sigma_k^2, is just the variance of x_k, equal to
one for all k. The limiting variance of error is thus also one.
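The convergence can be observed by iterating the recursion directly. The sketch below is
illustrative only; the initial value b_0 = 3 and the values of f are arbitrary.
\begin{verbatim}
# Iterating b_{k+1} = F(b_k) and comparing with the closed-form limit; not part of the solution.
import numpy as np

def F(b, f):
    # one step of the variance recursion: b f^2 / (1 + b) + 1
    return b*f**2/(1 + b) + 1

for f in [0.0, 0.5, 0.99, 2.0]:
    b = 3.0                      # arbitrary initial variance b_0 = sigma_0^2
    for _ in range(200):
        b = F(b, f)
    b_inf = (f**2 + np.sqrt(f**4 + 4))/2
    print(f"f = {f}: iterated value {b:.6f}, closed form {b_inf:.6f}")
\end{verbatim}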
7 Kalman filter for a rotating state with 2D observations
(a) Given a nonzero vector x_k, the vector F x_k is obtained by rotating the vector x_k one tenth of a
revolution counter-clockwise about the origin, and then shrinking the vector towards zero by one
percent. Thus, successive iterates F^k x_0 spiral in towards zero, with one-tenth revolution per time
unit, and shrinking by about ten percent per revolution.
(b) The equations for \Sigma_{k+1|k} and K_k can be written as
\[
\Sigma_{k+1|k} = F\left[ \Sigma_{k|k-1} - \Sigma_{k|k-1}\left( \Sigma_{k|k-1} + I \right)^{-1} \Sigma_{k|k-1} \right] F^T + I
\]
\[
K_k = F\,\Sigma_{k|k-1}\left( \Sigma_{k|k-1} + I \right)^{-1},
\]
with the usual initial condition \Sigma_{0|-1} = P_0.
(c) If P_0 = \sigma_0^2 I, then we see by induction that \Sigma_{k|k-1} is proportional to I for all k \ge 0. Moreover,
we can write \Sigma_{k|k-1} = \sigma_k^2 I, where \sigma_k^2 = \sigma_{k|k-1}^2 is the conditional variance sequence arising for the
one-dimensional Kalman filter in the previous problem, and
\[
K_k = \left( \frac{\sigma_k^2}{\sigma_k^2 + 1} \right) F.
\]
(d) The steady state covariance of error and gain do not depend on the initial covariance matrix
P_0, so we can assume without loss of generality that the initial condition of part (c) holds. Using
the results of the previous problem, we find
\[
\Sigma_\infty = \sigma_\infty^2 I, \qquad
K_\infty = \left( \frac{\sigma_\infty^2}{\sigma_\infty^2 + 1} \right) F,
\qquad \text{where } \sigma_\infty^2 = \frac{f^2 + \sqrt{f^4 + 4}}{2}.
\]
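The steady state claimed here can be checked by iterating the matrix recursion of part (b). The
sketch below is illustrative only: it takes F to be 0.99 times a rotation by one tenth of a revolution,
as described in part (a), starts from P_0 = I, and uses f = 0.99 in the scalar formula (an assumption
based on F F^T = (0.99)^2 I).
\begin{verbatim}
# Iterating the Riccati recursion of Problem 7(b); not part of the original solution.
import numpy as np

theta = 2*np.pi/10                              # one tenth of a revolution
F = 0.99*np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
I = np.eye(2)

Sigma = I.copy()                                # Sigma_{0|-1} = P_0 = I
for _ in range(200):
    core = Sigma @ np.linalg.inv(Sigma + I)     # Sigma (Sigma + I)^{-1}
    K = F @ core                                # K_k = F Sigma (Sigma + I)^{-1}
    Sigma = F @ (Sigma - core @ Sigma) @ F.T + I

f = 0.99                                        # assumed scalar, since F F^T = (0.99)^2 I
s2_inf = (f**2 + np.sqrt(f**4 + 4))/2
print("Sigma_infinity:\n", Sigma)               # close to s2_inf * I
print("s2_inf * I:\n", s2_inf*I)
print("K_infinity:\n", K)
print("(s2_inf/(s2_inf+1)) * F:\n", (s2_inf/(s2_inf + 1))*F)
\end{verbatim}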