
**Theory of Detection and Estimation (ELE 530, Spring 2010-2011), Handout #9**

Homework #4 Solutions

1. Rayleigh Samples. Suppose $Y_i$ are i.i.d. samples from $p_\theta(y)$, for $i = 1, \ldots, n$, where $\theta > 0$ is unknown, and the family of distributions $p_\theta(y)$ is the Rayleigh family given by

$$p_\theta(y) = \frac{y}{\theta}\, e^{-y^2/(2\theta)}\, u(y),$$

where $u(y)$ is the unit step function. The distribution of the entire i.i.d. vector of samples $p_\theta(y^n)$ can be inferred from $p_\theta(y)$.

a. Is $p_\theta(y^n)$ an exponential family?

b. Find a complete sufficient statistic for $\theta$.

c. What is the minimum variance unbiased estimator (MVUE) for $\theta$?

d. Can the Cramer-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for $\theta$. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramer-Rao bound.)

e. What is the maximum likelihood estimator for $\theta$?

Solution:

a. Yes, $p_\theta(y^n)$ is an exponential family. We can write it in the proper form as

$$p_\theta(y^n) = \prod_{i=1}^n p_\theta(y_i) = \left(\frac{1}{\theta^n}\right)\left(\prod_{i=1}^n y_i\, u(y_i)\right) e^{-\frac{1}{2\theta}\sum_{i=1}^n y_i^2}.$$
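In standard exponential-family notation, this product can be summarized as follows (a restatement of the factorization above; the names $h$, $c$, $\eta$, $T$ are the usual textbook labels, not notation from the handout):

```latex
p_\theta(y^n) = h(y^n)\, c(\theta)\, e^{\eta(\theta)\, T(y^n)},
\qquad
h(y^n) = \prod_{i=1}^n y_i\, u(y_i),\quad
c(\theta) = \theta^{-n},\quad
\eta(\theta) = -\tfrac{1}{2\theta},\quad
T(y^n) = \sum_{i=1}^n y_i^2.
```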

b. The parameter $-\frac{1}{2\theta}$, in the exponent of the formula for $p_\theta(y^n)$ given above, takes on values in the set $(-\infty, 0)$, which is a rectangle in $\mathbb{R}^1$. Therefore, the statistic $\sum_{i=1}^n y_i^2$ is complete. It is a complete sufficient statistic for $\theta$.

c. The second moment of a Rayleigh distributed random variable is $2\theta$. Therefore,

$$\hat{\theta} = \frac{1}{2n}\sum_{i=1}^n Y_i^2$$

is the only unbiased function of the complete sufficient statistic found in part b. and is therefore the minimum variance unbiased estimator.
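A quick Monte Carlo sanity check of this estimator's unbiasedness (an illustration only, not part of the original solution; the sampler uses the inverse CDF of the Rayleigh density defined in the problem statement):

```python
import math
import random

# If U ~ Unif(0,1), then Y = sqrt(-2*theta*ln(1-U)) has the Rayleigh density
# p_theta(y) = (y/theta) * exp(-y^2/(2*theta)) for y >= 0.
random.seed(0)
theta, n, trials = 2.0, 50, 20000

def rayleigh_sample(theta):
    # inverse-CDF sampling from the Rayleigh family with parameter theta
    return math.sqrt(-2.0 * theta * math.log(1.0 - random.random()))

estimates = []
for _ in range(trials):
    y = [rayleigh_sample(theta) for _ in range(n)]
    estimates.append(sum(v * v for v in y) / (2 * n))  # MVUE from part c

print(sum(estimates) / trials)  # close to theta = 2.0
```

The empirical mean of the estimator settles near the true $\theta$, consistent with $E\,Y^2 = 2\theta$.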

d. First we calculate the score function.

$$s(\theta, Y^n) = \frac{\partial}{\partial\theta} \ln p_\theta(Y^n) = -\frac{n}{\theta} + \frac{1}{2\theta^2}\sum_{i=1}^n Y_i^2.$$

From the above equation, we can see that the expected value of the score function is zero, and the Cramer-Rao Bound can be applied. However, we will factor the score function further to show that there is an efficient estimator and identify the Fisher information directly. As a side note, once we've shown that there is an efficient estimator, then we already know that the estimator from part c. is the efficient estimator. However, it will be obvious from the score function what the efficient estimator is as well.

$$s(\theta, Y^n) = \frac{n}{\theta^2}\left(\frac{1}{2n}\sum_{i=1}^n Y_i^2 - \theta\right).$$

From the above factorization, we know that there is an efficient estimator, and we find that the Fisher information is

$$I(\theta) = \frac{n}{\theta^2}.$$

Therefore, the Cramer-Rao Lower Bound for all unbiased estimators is

$$\mathrm{Var}(\hat{\theta}) \ge \frac{\theta^2}{n}.$$
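Since the estimator from part c is efficient, its variance should sit right at $\theta^2/n$. The following simulation checks this empirically (an illustration only; it uses the fact that $Y^2 = -2\theta \ln(1-U)$ for a Rayleigh $Y$ and uniform $U$):

```python
import math
import random

# Empirical check: the MVUE (1/(2n)) * sum Y_i^2 should achieve the
# CRLB variance theta^2/n, since it is efficient.
random.seed(1)
theta, n, trials = 2.0, 50, 40000

ests = []
for _ in range(trials):
    # sum of Y_i^2 drawn directly: Y^2 = -2*theta*ln(1-U) is Exp(mean 2*theta)
    s2 = sum(-2.0 * theta * math.log(1.0 - random.random()) for _ in range(n))
    ests.append(s2 / (2 * n))

m = sum(ests) / trials
var = sum((e - m) ** 2 for e in ests) / (trials - 1)
print(var, theta**2 / n)  # both close to 0.08
```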

e. Also, from the score function we see that the maximum likelihood estimator is the same as the MVUE from part c. In problem 4 we show that this is always the case if an efficient estimator exists.


2. Uniform Samples. Suppose $Y_i$ are i.i.d. samples $\sim \mathrm{Unif}[-\theta, \theta]$, for $i = 1, \ldots, n$, where $\theta > 0$ is unknown. The distribution of the entire i.i.d. vector of samples $p_\theta(y^n)$ can be inferred from $p_\theta(y)$.

a. Is $p_\theta(y^n)$ an exponential family?

b. Find a scalar sufficient statistic for $\theta$.

c. What is an unbiased estimator based on the sufficient statistic of part b?

d. Can the Cramer-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for $\theta$. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramer-Rao bound.)

e. What is the maximum likelihood estimator for $\theta$?

Solution:

a. The family of distributions $p_\theta(y^n)$ is not an exponential family. The support of an exponential family must be the same for all $\theta$, and that is not the case with the uniform distribution we are dealing with.

b. We use the Neyman-Fisher Factorization Theorem to find a sufficient statistic.

$$p_\theta(y^n) = \prod_{i=1}^n p_\theta(y_i) = \frac{1}{(2\theta)^n}\prod_{i=1}^n \mathbf{1}_{y_i \in [-\theta,\theta]} = \frac{1}{(2\theta)^n}\, \mathbf{1}_{\max_i |y_i| \le \theta}.$$

Therefore, $T = \max_i |Y_i|$ is a sufficient statistic.

We can show that $T$ is complete. First we derive the distribution of $T$. Notice that the $|Y_i|$ are each uniformly distributed on $[0, \theta]$ and independent. Thus, the formula for the maximal statistic is:

$$p_{T|\theta}(t) = n\left[F_{|Y|\,|\theta}(t)\right]^{n-1} p_{|Y|\,|\theta}(t) = n\left(\frac{t}{\theta}\right)^{n-1}\frac{1}{\theta}\,\mathbf{1}_{t\in[0,\theta]} = \frac{n}{\theta}\left(\frac{t}{\theta}\right)^{n-1}\mathbf{1}_{t\in[0,\theta]}.$$

Now take any function $v(T)$ that has zero mean for all $\theta$ and notice the following chain of implications:

$$\int_0^\theta v(t)\,\frac{n}{\theta}\left(\frac{t}{\theta}\right)^{n-1} dt = 0 \quad \forall\, \theta > 0.$$

$$\frac{n}{\theta^n}\int_0^\theta v(t)\,t^{n-1}\, dt = 0 \quad \forall\, \theta > 0.$$

$$\int_0^\theta v(t)\,t^{n-1}\, dt = 0 \quad \forall\, \theta > 0.$$

$$v(t)\,t^{n-1} = 0 \quad \forall\, t \ge 0.$$

$$v(t) = 0 \quad \forall\, t > 0.$$

Therefore, $v(T)$ is zero with probability one. The fourth equality above only needs to hold for $t$ almost everywhere, but that's actually all that we need.

c. First we find the expected value of our sufficient statistic.

$$E\,T = \int_0^\theta t\,\frac{n}{\theta}\left(\frac{t}{\theta}\right)^{n-1} dt = n\int_0^\theta \left(\frac{t}{\theta}\right)^n dt = \frac{n}{n+1}\,\theta.$$

An unbiased estimator is $\hat{\theta} = \frac{n+1}{n}\max_i |Y_i|$. Since we showed that the statistic is complete, this estimator is the MVUE.
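A short simulation confirming the bias correction (an illustration only, not part of the original solution):

```python
import random

# Check that ((n+1)/n) * max_i |Y_i| is unbiased for theta
# when Y_i ~ Unif[-theta, theta].
random.seed(2)
theta, n, trials = 3.0, 20, 20000

ests = []
for _ in range(trials):
    t = max(abs(random.uniform(-theta, theta)) for _ in range(n))
    ests.append((n + 1) / n * t)  # MVUE from part c

print(sum(ests) / trials)  # close to theta = 3.0
```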

d. First we calculate the score function.

$$s(\theta, Y^n) = \frac{\partial}{\partial\theta} \ln p_\theta(Y^n) = \begin{cases} -\dfrac{n}{\theta}, & \max_i |Y_i| < \theta \\ \text{undefined}, & \text{elsewhere} \end{cases}$$

Therefore,

$$E\, s(\theta, Y^n) = -\frac{n}{\theta},$$

and the Cramer-Rao Bound does not apply.

e. We see from the score function that the likelihood is decreasing in $\theta$ wherever it is positive, so it is maximized at the smallest feasible value of $\theta$:

$$\hat{\theta}_{ML} = \max_i |Y_i|.$$
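The ML estimate here is biased low, since $\max_i |Y_i| \le \theta$ always. A side-by-side Monte Carlo comparison of its mean squared error against the unbiased MVUE from part c (an illustration only, under the same uniform setup):

```python
import random

# Compare squared error of the biased ML estimate max_i |Y_i| with
# the unbiased MVUE ((n+1)/n) * max_i |Y_i|.
random.seed(3)
theta, n, trials = 3.0, 20, 20000

se_ml = se_mvue = 0.0
for _ in range(trials):
    t = max(abs(random.uniform(-theta, theta)) for _ in range(n))
    se_ml += (t - theta) ** 2
    se_mvue += ((n + 1) / n * t - theta) ** 2

print(se_ml / trials, se_mvue / trials)  # ML shows the larger MSE here
```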


3. MMSE vs. MVU and ML. Consider jointly Gaussian random variables $X$ and $Y$ with zero mean, unit variance, and correlation $E\,XY = \rho$.

a. What is the MMSE estimate of $X$ given $Y$?

b. Now treat $X$ like a parameter (ignore the marginal distribution on $X$ and just consider the conditional distribution $p(y|x)$). What is the minimum variance unbiased (MVU) estimate of $X$ given $Y$, and what is the maximum likelihood (ML) estimate of $X$ given $Y$?

Solution:

a.

$$\hat{X}_{MMSE} = E(X|Y) = \rho Y.$$

b. Notice that the family of distributions $p(y|x)$ is an exponential family. Since $Y$ given $X = x$ is $\mathcal{N}(\rho x, 1 - \rho^2)$,

$$p(y|x) = \frac{1}{\sqrt{2\pi(1-\rho^2)}}\, e^{-\frac{(y-\rho x)^2}{2(1-\rho^2)}} = \left(\frac{1}{\sqrt{2\pi(1-\rho^2)}}\, e^{-\frac{y^2}{2(1-\rho^2)}}\right) \left(e^{-\frac{\rho^2 x^2}{2(1-\rho^2)}}\right) e^{\frac{\rho x}{1-\rho^2}\, y}.$$

Therefore, $Y$ is complete. The expected value of $Y$ is $\rho X$. To make it unbiased we must divide by $\rho$.

$$\hat{X}_{MVU} = \frac{Y}{\rho}.$$

It is obvious from $p(y|x)$ that the maximum likelihood estimate for $X$ will be the one that causes the term $(y - \rho x)^2$ to be zero. Thus,

$$\hat{X}_{ML} = \frac{Y}{\rho}.$$
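The contrast between the Bayesian MMSE estimate and the MVU/ML estimate can be seen numerically (an illustration only, not part of the original solution; $\rho = 0.5$ is an arbitrary choice):

```python
import math
import random

# Average squared error of the MMSE estimate rho*Y versus the MVU/ML
# estimate Y/rho, with X, Y jointly Gaussian, zero mean, unit variance,
# correlation rho.
random.seed(4)
rho, trials = 0.5, 50000

se_mmse = se_mvu = 0.0
for _ in range(trials):
    x = random.gauss(0.0, 1.0)
    # Y | X=x is N(rho*x, 1 - rho^2)
    y = rho * x + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    se_mmse += (rho * y - x) ** 2
    se_mvu += (y / rho - x) ** 2

print(se_mmse / trials, se_mvu / trials)
# about 1 - rho^2 = 0.75 versus (1 - rho^2)/rho^2 = 3.0
```

Averaged over the prior on $X$, the MMSE estimate is markedly better; the MVU estimate pays for unbiasedness at every fixed $x$.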


4. Efficiency of ML. Show that if an efficient unbiased estimator exists then it is the maximum likelihood estimator. Consider the necessary and sufficient condition for an efficient estimator to exist, stated in Theorem 3.1 of Kay Vol. 1.

Solution:

From Theorem 3.1 of Kay Vol. 1 involving the Cramer-Rao lower bound, we know that an efficient estimator exists if and only if the score function can be factored as

$$s(\theta, X) = \frac{\partial}{\partial\theta} \ln p_\theta(X) = I(\theta)(g(X) - \theta),$$

for some function $g(X)$, which is the efficient estimator, and $I(\theta)$, which is the Fisher Information.

Choosing $\theta$ to maximize the likelihood is equivalent to maximizing the log-likelihood. So to find $\hat{\theta}_{ML}$ we can inspect the derivative of the log-likelihood function with respect to $\theta$, which is precisely the score function.

Notice that the Fisher Information $I(\theta)$ is non-negative. Therefore, the score function is positive for $\theta < g(X)$ and negative for $\theta > g(X)$. This means that

$$\hat{\theta}_{ML} = g(X) = \hat{\theta}_{MVU}.$$

The maximum likelihood estimator is also unique except in the degenerate case where

I(θ) = 0 for some non-empty open interval of θ. However, this would mean that the

family of distributions does not change over that interval, which is not a reasonable

model to work with for parameter estimation.
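The sign argument above can be illustrated concretely using the Rayleigh family from problem 1, where the score factors as $\frac{n}{\theta^2}\big(\frac{1}{2n}\sum Y_i^2 - \theta\big)$ (a numerical sketch, not part of the original proof):

```python
import math
import random

# The log-likelihood is increasing for theta < g(Y^n) and decreasing for
# theta > g(Y^n), so a grid search for its maximizer lands at the
# efficient estimator g(Y^n) = (1/(2n)) * sum Y_i^2.
random.seed(5)
theta_true, n = 2.0, 40
y = [math.sqrt(-2.0 * theta_true * math.log(1.0 - random.random()))
     for _ in range(n)]

g = sum(v * v for v in y) / (2 * n)  # the efficient estimator

def loglik(theta):
    # Rayleigh log-likelihood: sum of ln(y_i/theta) - y_i^2/(2*theta)
    return sum(math.log(v / theta) - v * v / (2.0 * theta) for v in y)

grid = [0.01 * k for k in range(50, 500)]  # theta from 0.50 to 4.99
best = max(grid, key=loglik)
print(best, g)  # the grid maximizer sits within one grid step of g
```

Because the score changes sign exactly once, at $g(Y^n)$, the log-likelihood is unimodal and the grid maximizer brackets $g$ within the grid spacing.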
