
EE 623 - Assignment 5

1. The data x[n] for n = 0, 1, ..., N − 1 are observed, each sample having
   the conditional PDF

       p(x[n] | θ) = { exp(−(x[n] − θ)),  x[n] > θ
                     { 0,                 x[n] < θ

   and, conditioned on θ, the observations are independent. The prior PDF is

       p(θ) = { exp(−θ),  θ > 0
              { 0,        θ < 0.

   Find the MMSE estimator of θ.
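   As a sanity check on your derivation (not a substitute for it), the posterior
   can be evaluated numerically: the likelihood exp(−Σ(x[n] − θ)) is supported on
   θ < min x[n], and combined with the prior exp(−θ) the posterior is proportional
   to exp((N − 1)θ) on 0 < θ < min x[n]. The sketch below computes the posterior
   mean on a grid; all parameter values are illustrative.

   ```python
   import numpy as np

   rng = np.random.default_rng(0)

   # Simulate data from the model: theta drawn from the exp(-theta) prior,
   # then x[n] = theta + standard-exponential noise (so x[n] > theta).
   theta_true = rng.exponential(1.0)
   N = 20
   x = theta_true + rng.exponential(1.0, size=N)

   # Posterior: p(theta|x) ∝ exp(-sum(x) + N*theta) * exp(-theta)
   # on 0 < theta < min(x), i.e. ∝ exp((N-1)*theta) on that interval.
   grid = np.linspace(0.0, x.min(), 100_001)
   log_post = (N - 1) * grid                   # unnormalized log posterior
   w = np.exp(log_post - log_post.max())       # rescale to avoid overflow
   theta_mmse = np.sum(grid * w) / np.sum(w)   # posterior mean = MMSE estimate

   print(theta_mmse, theta_true, x.min())
   ```

   Comparing this grid-based posterior mean against your closed-form answer for a
   few random draws is a quick way to catch algebra slips.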
2. A quality assurance inspector has the job of monitoring the resistance
   values of manufactured resistors. He does so by choosing a resistor from
   a batch and measuring its resistance with an ohmmeter. He knows that the
   ohmmeter is of poor quality and imparts an error to each measurement,
   which he models as an N(0, 1) random variable. Hence, he takes N
   independent measurements. Also, he knows that the resistance should be
   100 ohms. Due to manufacturing tolerances, however, the resistors are
   generally in error by ε, where ε ~ N(0, 0.011). If the inspector chooses
   a resistor, how many ohmmeter measurements are necessary to ensure that
   an MMSE estimator of the resistance R yields the correct resistance to
   0.1 ohms on the average as he continues to choose resistors throughout
   the day? How many measurements would he need if he did not have prior
   knowledge about the manufacturing tolerances?
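   Once you have derived the Bayesian MSE, the required N can be found by a
   direct search. The sketch below assumes the standard Gaussian-prior,
   Gaussian-noise result Bmse = 1/(1/σ_ε² + N/σ²) (which this problem asks you
   to establish) and reads "0.1 ohms on the average" as √Bmse ≤ 0.1; check that
   reading against your own derivation.

   ```python
   import numpy as np

   sigma2_noise = 1.0      # ohmmeter error variance, N(0, 1)
   sigma2_prior = 0.011    # manufacturing-tolerance variance
   target = 0.1            # desired accuracy in ohms ("on the average")

   def bayesian_mse(N, s2_noise, s2_prior):
       """Bmse of the MMSE estimator of a Gaussian mean with a Gaussian prior:
       Bmse = 1 / (1/s2_prior + N/s2_noise)."""
       return 1.0 / (1.0 / s2_prior + N / s2_noise)

   # Smallest N meeting the target, with and without the prior knowledge
   N_with_prior = next(N for N in range(1, 10_000)
                       if np.sqrt(bayesian_mse(N, sigma2_noise, sigma2_prior)) <= target)
   N_no_prior = next(N for N in range(1, 10_000)
                     if np.sqrt(sigma2_noise / N) <= target)
   print(N_with_prior, N_no_prior)
   ```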
3. The data

       x[n] = A r^n + w[n],   n = 0, 1, ..., N − 1

   where r is known, w[n] is WGN with variance σ², and A ~ N(0, σ_A²)
   is independent of w[n], are observed. Find the MMSE estimator of A as
   well as the minimum Bayesian MSE.
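   A Monte Carlo run is a useful check on the minimum Bayesian MSE you obtain.
   The sketch below assumes the scalar Gaussian linear-model form of the answer
   (with h[n] = r^n), which is exactly what the problem asks you to derive; the
   specific values of r, σ², and σ_A² are illustrative.

   ```python
   import numpy as np

   rng = np.random.default_rng(1)
   N, r, sigma2, sigma2_A = 10, 0.9, 0.5, 2.0
   h = r ** np.arange(N)            # known signal vector, h[n] = r^n

   # Assumed closed form for this scalar Gaussian linear model:
   #   A_hat = sigma2_A * (h.x) / (sigma2 + sigma2_A * (h.h))
   #   Bmse  = 1 / (1/sigma2_A + (h.h)/sigma2)
   gain = sigma2_A / (sigma2 + sigma2_A * (h @ h))
   bmse_theory = 1.0 / (1.0 / sigma2_A + (h @ h) / sigma2)

   # Monte Carlo check: empirical MSE of A_hat should match bmse_theory
   trials = 200_000
   A = rng.normal(0.0, np.sqrt(sigma2_A), size=trials)
   X = A[:, None] * h + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
   A_hat = gain * (X @ h)
   errs = A - A_hat
   print(errs.mean(), errs.var(), bmse_theory)
   ```

   The empirical error variance should agree with the theoretical Bmse to within
   Monte Carlo noise, and the error mean should be near zero.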
4. A measure of the randomness of a random variable θ is its entropy,
   defined as

       H(θ) = E(−ln p(θ)) = −∫ ln p(θ) p(θ) dθ.

   If θ ~ N(μ_θ, σ_θ²), find the entropy and relate it to the concentration of
   the PDF. After observing the data, the entropy of the posterior PDF can
   be defined as

       H(θ|x) = E(−ln p(θ|x)) = −∫∫ ln p(θ|x) p(x, θ) dx dθ

   and should be less than H(θ). Hence a measure of the information
   gained by observing the data is

       I = H(θ) − H(θ|x).

   Prove that I ≥ 0. Under what conditions will I = 0? Hint: Express
   H(θ) as

       H(θ) = −∫∫ ln p(θ) p(x, θ) dx dθ.

   You will also need the inequality

       ∫ p1(u) ln (p1(u) / p2(u)) du ≥ 0

   for PDFs p1(u), p2(u). Equality holds if and only if p1(u) = p2(u).
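   For the Gaussian case, the entropy you should obtain is ½ ln(2πeσ_θ²), which
   grows with σ_θ² (a more spread-out PDF is "more random"). The sketch below,
   with illustrative parameter values, checks that closed form against a direct
   Monte Carlo estimate of E(−ln p(θ)).

   ```python
   import numpy as np

   rng = np.random.default_rng(2)
   mu, sigma = 1.0, 0.5

   # Closed-form differential entropy of N(mu, sigma^2)
   H_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

   # Monte Carlo estimate of E[-ln p(theta)]
   theta = rng.normal(mu, sigma, size=1_000_000)
   log_p = -0.5 * np.log(2 * np.pi * sigma**2) - (theta - mu) ** 2 / (2 * sigma**2)
   H_mc = -log_p.mean()
   print(H_closed, H_mc)
   ```

   Rerunning with a smaller sigma shows the entropy decreasing as the PDF
   concentrates, which is the relationship the problem asks you to describe.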
5. Assume that

       p(x[n] | θ) = { θ exp(−θ x[n]),  x[n] > 0
                     { 0,               x[n] < 0

   where the x[n]'s are conditionally IID, so that

       p(x | θ) = ∏_{n=0}^{N−1} p(x[n] | θ),

   and the prior PDF is

       p(θ) = { λ exp(−λθ),  θ > 0
              { 0,           θ < 0.

   Find the MAP estimator of θ.
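   A grid search over the log posterior is an easy cross-check here. The sketch
   below assumes the exponential likelihood and exponential prior stated above;
   the stationary-point expression theta_map_closed is what a derivative-based
   derivation would give under those assumptions, and the grid maximizer should
   agree with it.

   ```python
   import numpy as np

   rng = np.random.default_rng(3)
   theta_true, lam, N = 2.0, 1.0, 50
   # numpy's exponential takes the scale (mean) 1/theta, matching theta*exp(-theta*x)
   x = rng.exponential(1.0 / theta_true, size=N)

   # Log posterior up to a constant, for theta > 0:
   #   ln p(theta|x) = N*ln(theta) - theta*sum(x) - lam*theta + const
   grid = np.linspace(1e-6, 10.0, 1_000_001)
   log_post = N * np.log(grid) - grid * (x.sum() + lam)
   theta_map_grid = grid[np.argmax(log_post)]

   # Setting the derivative N/theta - (sum(x) + lam) to zero gives
   theta_map_closed = N / (x.sum() + lam)
   print(theta_map_grid, theta_map_closed)
   ```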


6. Prove the Bayesian Gauss-Markov theorem below.

   Bayesian Gauss-Markov theorem: If the data are described by the
   Bayesian linear model form

       x = Hθ + w

   where x is an N×1 data vector, H is a known N×p observation matrix,
   θ is a p×1 random vector of parameters whose realization is to be
   estimated and has mean E(θ) and covariance matrix C_θθ, and w is an
   N×1 random vector with zero mean and covariance C_w and is uncorrelated
   with θ, then the LMMSE estimator of θ is

       θ̂ = E(θ) + C_θθ H^T (H C_θθ H^T + C_w)^{−1} (x − H E(θ))
         = E(θ) + (C_θθ^{−1} + H^T C_w^{−1} H)^{−1} H^T C_w^{−1} (x − H E(θ)).

   The performance of the estimator is measured by the error ε = θ − θ̂,
   whose mean is zero and whose covariance matrix is

       C_ε = E_{x,θ}(ε ε^T)
           = C_θθ − C_θθ H^T (H C_θθ H^T + C_w)^{−1} H C_θθ
           = (C_θθ^{−1} + H^T C_w^{−1} H)^{−1}.
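   The equivalence of the two estimator forms (and of the two covariance forms)
   follows from the matrix inversion lemma, which your proof will need. The
   sketch below checks both equivalences numerically on random well-conditioned
   matrices; dimensions and seeds are arbitrary.

   ```python
   import numpy as np

   rng = np.random.default_rng(4)
   N, p = 6, 3
   H = rng.standard_normal((N, p))

   def random_spd(n):
       """Random symmetric positive-definite matrix."""
       A = rng.standard_normal((n, n))
       return A @ A.T + n * np.eye(n)

   C_theta, C_w = random_spd(p), random_spd(N)
   mu = rng.standard_normal(p)       # E(theta)
   x = rng.standard_normal(N)        # an arbitrary data vector

   inv = np.linalg.inv
   # Gain in the first form: N x N innovation covariance inverted
   K1 = C_theta @ H.T @ inv(H @ C_theta @ H.T + C_w)
   # Gain in the second form: p x p information-form matrix inverted
   K2 = inv(inv(C_theta) + H.T @ inv(C_w) @ H) @ H.T @ inv(C_w)
   est1 = mu + K1 @ (x - H @ mu)
   est2 = mu + K2 @ (x - H @ mu)

   # Error covariance in both stated forms
   Ce1 = C_theta - C_theta @ H.T @ inv(H @ C_theta @ H.T + C_w) @ H @ C_theta
   Ce2 = inv(inv(C_theta) + H.T @ inv(C_w) @ H)
   print(np.allclose(est1, est2), np.allclose(Ce1, Ce2))
   ```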
