
Exam et4386 Estimation and Detection

January 21st, 2016


Answer each question on a separate sheet. Make clear in your answer
how you reach the final result; the road to the answer is very important.
Write your name and student number on each sheet. You may answer in
Dutch or English.
You may use one double-sided, self-written A4 formula sheet.

Question 1 (Estimation theory) - 10 points


Using a sensor we observe data according to the following assumed model

    x[n] = A r^(n+1) + w[n],    n = 0, ..., N − 1,

where A is an unknown constant parameter, r > 0 is known, n is the time
index in samples, and w[n] is zero-mean uncorrelated Gaussian noise with
variance σ². The goal is to estimate A from the data.
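For intuition, a minimal simulation sketch of this observation model (the
values of A, r, σ², and N below are arbitrary illustrations, not given in the
exam; the least-squares estimate at the end is the classical linear-model
estimator whose optimality properties parts (b)-(d) ask you to establish):

    import numpy as np

    # Illustrative values (not part of the exam statement).
    A_true, r, sigma2, N = 2.0, 0.9, 0.5, 100

    rng = np.random.default_rng(0)
    n = np.arange(N)
    h = r ** (n + 1)                  # known signal shape h[n] = r^(n+1)
    x = A_true * h + rng.normal(scale=np.sqrt(sigma2), size=N)

    # For a linear model x = h*A + w with white Gaussian noise, the
    # least-squares estimate h^T x / (h^T h) is the classical choice.
    A_hat = h @ x / (h @ h)
    print(A_hat)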
(1 p) (a) Give the likelihood function p(x; A).
(2 p) (b) Derive the Cramér-Rao lower bound when observing x.
(2 p) (c) Does the minimum variance unbiased (MVU) estimator exist? If
so, determine the MVU estimator.
(1 p) (d) Give the best linear unbiased estimator (BLUE).
Instead of assuming that A is an unknown constant (deterministic) parameter,
we now assume A is the realization of a scalar Gaussian random variable,
i.e., A ~ N(0, σ_A²), independent of w[n].
(3 p) (e) Calculate the (Bayesian) MMSE estimator that is given by E[A|x].
(Hint: you can make use of the fact that the signal model is Gaussian
and linear: w ~ N(0, σ²I) and A ~ N(0, σ_A²).)
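As a numerical companion to (e): for this jointly Gaussian linear model the
conditional mean has a closed form. The hedged sketch below (parameter
values again arbitrary) evaluates it both via the full matrix expression and
via the scalar simplification obtained from the matrix inversion lemma; the
two should agree up to rounding:

    import numpy as np

    A_var, sigma2, r, N = 1.0, 0.5, 0.9, 100
    rng = np.random.default_rng(1)
    n = np.arange(N)
    h = r ** (n + 1)
    A = rng.normal(scale=np.sqrt(A_var))       # A ~ N(0, sigma_A^2)
    x = A * h + rng.normal(scale=np.sqrt(sigma2), size=N)

    # E[A|x] for x = h*A + w with A and w jointly Gaussian:
    #   A_mmse = sigma_A^2 h^T (sigma_A^2 h h^T + sigma^2 I)^(-1) x.
    C_x = A_var * np.outer(h, h) + sigma2 * np.eye(N)
    A_mmse_matrix = A_var * h @ np.linalg.solve(C_x, x)
    A_mmse_scalar = A_var * (h @ x) / (A_var * (h @ h) + sigma2)
    print(A_mmse_matrix, A_mmse_scalar)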
In question (d) we already calculated the BLUE for the case that A was
assumed to be deterministic. In the next question we again assume A is
deterministic. However, we change the assumed distribution of the noise
w[n] such that it is Rayleigh distributed given by the pdf


    p(w[n]) = (w[n]/σ²) exp(−w[n]²/(2σ²)),   w[n] > 0,
              0,                              w[n] < 0.
(1 p) (f) Give the best linear unbiased estimator under these new noise
characteristics and explain what changes compared to the BLUE estimator
in question 1(d).

Question 2 (Estimation theory) - 10 points


Within estimation theory we distinguish two main approaches to estimating a
variable: the classical approach and the Bayesian approach. Both approaches
can minimize the mean squared error (MSE); however, the MSE is defined
differently in each. The Bayesian MSE is given by

    Bmse(θ̂) = ∫∫ (θ̂ − θ)² p(x, θ) dx dθ,

and the classical MSE is given by

    mse(θ̂) = ∫ (θ̂ − θ)² p(x; θ) dx.
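The distinction can be made concrete with a small Monte Carlo experiment.
The sketch below uses a toy Gaussian model and a shrinkage estimator that
are purely illustrative (not the exam's model): the classical MSE is a
function of the fixed true θ, while the Bayesian MSE additionally averages
over the prior p(θ) and is a single number:

    import numpy as np

    rng = np.random.default_rng(2)
    N, trials = 10, 200_000
    shrink = lambda x: 0.8 * x.mean(axis=1)   # a (biased) shrinkage estimator

    def mse_classical(theta):
        # Classical MSE: theta fixed and deterministic, average over noise only.
        x = theta + rng.normal(size=(trials, N))     # toy model: x[n] = theta + w[n]
        return np.mean((shrink(x) - theta) ** 2)

    def bmse():
        # Bayesian MSE: theta random, averaged over its prior AND the noise.
        theta = rng.normal(size=trials)              # toy prior: theta ~ N(0, 1)
        x = theta[:, None] + rng.normal(size=(trials, N))
        return np.mean((shrink(x) - theta) ** 2)

    print(mse_classical(0.0), mse_classical(3.0))    # varies with the fixed theta
    print(bmse())                                    # one prior-averaged number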
(1 p) (a) Explain the difference between the Bayesian mean square error
(MSE) and the classical mean square error.
(2 p) (b) Show by derivation that the MMSE estimator is given by the
conditional expectation E[θ|x].
Assume that θ is an unknown scalar parameter with prior probability density
function (pdf) p(θ). To estimate θ, we use N noisy observations given by
x[n], with n = 0, 1, ..., N − 1. Assume that the observations are conditionally
independent and identically distributed (i.i.d.). The conditional pdfs are
given by

    p(x[n]|θ) = θ exp(−θ x[n]),   x[n] > 0,
                0,                x[n] < 0.

The prior pdf is given by

    p(θ) = λ exp(−λθ),   θ > 0,
           0,            θ < 0.

(2 p) (c) Two different Bayesian estimators are the MMSE and the MAP
estimator. Choose the most convenient of the two. Explain your
choice and calculate the estimate of θ using the observations x[n] with
n = 0, 1, ..., N − 1.
(1 p) (d) Indicate, based on your solution to question (c): 1) how the
final solution depends on the prior information and the data,
AND 2) how the prior information and the data are traded off against
each other as a function of N.
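A hedged numerical sanity check for (c): the sketch below assumes the
exponential likelihood and prior stated above, simulates data, and locates
the posterior maximum on a grid. The grid maximum should match whatever
closed-form estimator your derivation yields; all parameter values are
illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    lam, theta_true, N = 1.0, 2.0, 50                 # illustrative values
    x = rng.exponential(scale=1.0 / theta_true, size=N)  # x[n] ~ theta*exp(-theta*x)

    # log p(theta|x) = N*log(theta) - theta*sum(x) - lam*theta + const
    theta = np.linspace(1e-3, 10, 100_000)
    log_post = N * np.log(theta) - theta * x.sum() - lam * theta
    print(theta[np.argmax(log_post)])   # grid-search MAP; compare to closed form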

We now consider a different statistical model where the posterior pdf is given
by

    p(θ|x[0]) = exp(−(θ − x[0])),   θ > x[0],
                0,                  θ < x[0].
Note that this model is based on only one observation x[0], i.e., N = 1.
(2 p) (e) Determine the MMSE estimator.
(Hint: remember integration by parts, i.e., ∫ u dv = uv − ∫ v du.)
(1 p) (f) Determine the MAP estimator.
(1 p) (g) Which of the two estimators calculated in questions (e) and (f)
will have the smallest (Bayesian) mean square error?
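A quick numerical cross-check for (e) and (f), sketched under the
shifted-exponential posterior given above (x0 below is an arbitrary
illustrative observation): it integrates the posterior on a grid to obtain
the posterior mean (the MMSE estimate) and reads off the mode (the MAP
estimate).

    import numpy as np

    x0 = 1.5                                  # illustrative observation
    theta = np.linspace(x0, x0 + 50.0, 1_000_000)
    post = np.exp(-(theta - x0))              # p(theta|x[0]) on theta > x[0]

    d = theta[1] - theta[0]                   # simple Riemann-sum integration
    Z = post.sum() * d                        # normalization (should be ~1)
    mmse = (theta * post).sum() * d / Z       # posterior mean
    mode = theta[np.argmax(post)]             # posterior mode
    print(mmse, mode)                         # compare with your analytic answers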

Question 3 (Detection theory) - 11 points


We would like to detect the presence of a speech signal in noise by applying
an optimal Neyman-Pearson detector to N time domain samples. To do so,
we distinguish between the speech presence hypothesis (H1) and the speech
absence hypothesis (H0). The two hypotheses are given by

    H0: x[n] = w[n],           n = 0, 1, ..., N − 1,
    H1: x[n] = s[n] + w[n],    n = 0, 1, ..., N − 1,

where s[n] and w[n] denote a clean speech sample and a noise sample,
respectively. Let us assume that under H0, x ~ N(0, σ²I), and under H1,
x ~ N(0, (σ_s² + σ²)I).
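For intuition, a minimal simulation of the two hypotheses (the variance
values are arbitrary illustrations): it computes the sample energy of each
realization, a natural statistic for discriminating the two variances,
against which the derivation in (a) can be checked.

    import numpy as np

    rng = np.random.default_rng(4)
    N, sigma2, sigma_s2 = 100, 1.0, 0.5

    w = rng.normal(scale=np.sqrt(sigma2), size=N)
    s = rng.normal(scale=np.sqrt(sigma_s2), size=N)
    x_h0, x_h1 = w, s + w

    # Sample energy: larger on average under H1, since var(x[n]) grows by sigma_s2.
    print(np.sum(x_h0**2), np.sum(x_h1**2))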
(2 p) (a) Give the log-likelihood ratio (LLR) and determine the Neyman-
Pearson detector T(x) and the threshold γ′.

The optimal threshold γ′ against which T(x) is compared is as yet unknown.


(2 p) (b) Determine an expression for the threshold γ′ that maximizes the
detection performance for a given false alarm probability P_FA.
(2 p) (c) Calculate the detection probability P_D as a function of the false
alarm probability P_FA and the variances σ² and σ_s².
(1 p) (d) Give two ways to improve the detection performance P_D.
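For (b) and (c), a hedged numerical sketch: if the detector reduces to an
energy statistic (this reduction is exactly what (a) asks you to derive),
then T(x)/σ² is chi-square with N degrees of freedom under H0, and the
threshold and detection probability follow from chi-square tail functions.
Parameter values below are illustrative; scipy supplies the quantiles.

    import numpy as np
    from scipy.stats import chi2

    N, sigma2, sigma_s2, P_FA = 100, 1.0, 0.5, 0.01

    # Threshold: P(T > gamma'; H0) = P_FA, with T/sigma2 ~ chi2(N) under H0.
    gamma = sigma2 * chi2.ppf(1 - P_FA, df=N)

    # Under H1, T/(sigma2 + sigma_s2) ~ chi2(N), so:
    P_D = chi2.sf(gamma / (sigma2 + sigma_s2), df=N)
    print(gamma, P_D)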
Next we will look at a slightly generalized problem formulation, where the
distribution under H1 changes into x ~ N(0, Cs + σ²I), with diagonal matrix

    Cs = diag[σ_{s,0}², σ_{s,1}², ..., σ_{s,N−1}²].
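A compact numerical illustration of the estimator-correlator structure that
(e) asks for, sketched under the stated model (the per-sample weights below
implement the Gaussian conditional-mean estimate of s[n], which your
derivation should reproduce; the diagonal of Cs is drawn at random purely
for illustration):

    import numpy as np

    rng = np.random.default_rng(5)
    N, sigma2 = 100, 1.0
    sigma_s2 = rng.uniform(0.1, 2.0, size=N)   # diagonal of Cs, illustrative

    s = rng.normal(scale=np.sqrt(sigma_s2))
    x = s + rng.normal(scale=np.sqrt(sigma2), size=N)

    # Estimator part: per-sample weights give s_hat[n] = E[s[n] | x[n]].
    s_hat = (sigma_s2 / (sigma_s2 + sigma2)) * x
    # Correlator part: correlate the data with the signal estimate.
    T = x @ s_hat
    print(T)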
(2 p) (e) Give the Neyman-Pearson detector T(x) and show that T(x) can
be interpreted as an estimator-correlator (i.e., indicate which part is the
estimator and which part is the correlator).
(1 p) (f) What is this estimator called?
(1 p) (g) Explain which statistical assumptions cause this estimator to
appear in the final detector T(x).
