An Introduction to Signal Detection and Estimation - Second Edition
Chapter IV: Selected Solutions
H. V. Poor
Princeton University
April 26, 2005
Exercise 1:
a.
\[
\hat{\theta}_{\rm MAP}(y) = \arg\max_{1\le\theta\le e}\left\{\log\theta - \theta|y| - \log\theta\right\} = \arg\max_{1\le\theta\le e}\left\{-\theta|y|\right\} = 1\,.
\]
b.
\[
\hat{\theta}_{\rm MMSE}(y) = \frac{\int_1^{e} \theta\,e^{-\theta|y|}\,d\theta}{\int_1^{e} e^{-\theta|y|}\,d\theta}
= \frac{1}{|y|} + \frac{e^{-|y|} - e^{1-e|y|}}{e^{-|y|} - e^{-e|y|}}\,.
\]
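As a quick numerical sanity check of the MMSE expression above, the closed form can be compared against direct numerical integration of the posterior mean. A minimal Python sketch (the test values of y and the use of scipy's quad routine are arbitrary choices):

# Compare the closed-form Exercise 1 MMSE estimate with direct numerical integration
# of the posterior mean over theta in [1, e].
import numpy as np
from scipy.integrate import quad

def mmse_closed_form(y):
    a = abs(y)
    return 1.0 / a + (np.exp(-a) - np.exp(1.0 - np.e * a)) / (np.exp(-a) - np.exp(-np.e * a))

def mmse_numerical(y):
    a = abs(y)
    num, _ = quad(lambda t: t * np.exp(-t * a), 1.0, np.e)
    den, _ = quad(lambda t: np.exp(-t * a), 1.0, np.e)
    return num / den

for y in (0.3, 1.0, 2.5):
    print(y, mmse_closed_form(y), mmse_numerical(y))  # the two columns should agree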
Exercise 3:
\[
w(\theta\,|\,y) = \frac{\theta^{y} e^{-\theta}\,\lambda e^{-\lambda\theta}}{\int_0^{\infty} \theta^{y} e^{-\theta}\,\lambda e^{-\lambda\theta}\,d\theta}
= \frac{\theta^{y} e^{-\theta(\lambda+1)}\,(1+\lambda)^{y+1}}{y!}\,.
\]
So:
\[
\hat{\theta}_{\rm MMSE}(y) = \frac{1}{y!}\int_0^{\infty} \theta^{y+1} e^{-\theta(\lambda+1)}\,d\theta\,(1+\lambda)^{y+1} = \frac{y+1}{\lambda+1}\,;
\]
\[
\hat{\theta}_{\rm MAP}(y) = \arg\max_{\theta>0}\left\{y\log\theta - \theta(\lambda+1)\right\} = \frac{y}{\lambda+1}\,;
\]
and $\hat{\theta}_{\rm ABS}(y)$ solves
\[
\int_0^{\hat{\theta}_{\rm ABS}(y)} w(\theta\,|\,y)\,d\theta = \frac{1}{2}\,,
\]
which reduces to
\[
\sum_{k=0}^{y} \frac{\left[(\lambda+1)\,\hat{\theta}_{\rm ABS}(y)\right]^{k}}{k!} = \frac{1}{2}\,e^{(\lambda+1)\,\hat{\theta}_{\rm ABS}(y)}\,.
\]
Note that the series on the left-hand side is the truncated power series expansion of $\exp\{(\lambda+1)\hat{\theta}_{\rm ABS}(y)\}$, so that $(\lambda+1)\hat{\theta}_{\rm ABS}(y)$ is the value at which this truncated series equals half of its untruncated value when the truncation point is $y$.
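The conditional-median equation above has no closed-form solution, but it is easily solved numerically. A minimal sketch (assuming, for illustration, the prior parameter lambda = 1 and a simple bracketing interval for the root):

# Solve sum_{k=0}^{y} x^k/k! = 0.5*exp(x) for x = (lambda+1)*theta_ABS, then recover theta_ABS.
from math import exp, factorial
from scipy.optimize import brentq

def theta_abs(y, lam=1.0):
    g = lambda x: sum(x**k / factorial(k) for k in range(y + 1)) - 0.5 * exp(x)
    x = brentq(g, 1e-9, 10.0 * (y + 1))   # g is positive near 0 and negative for large x
    return x / (lam + 1.0)

for y in (0, 1, 5):
    print(y, theta_abs(y))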
Exercise 7:
We have
\[
p_\theta(y) = \begin{cases} e^{-y+\theta}\,, & y \ge \theta \\ 0\,, & y < \theta \end{cases}\,;
\]
and
\[
w(\theta) = \begin{cases} 1\,, & \theta \in (0,1) \\ 0\,, & \theta \notin (0,1)\,. \end{cases}
\]
Thus,
\[
w(\theta\,|\,y) = \frac{e^{-y+\theta}}{\int_0^{\min\{1,y\}} e^{-y+\theta}\,d\theta} = \frac{e^{\theta}}{e^{\min\{1,y\}}-1}\,,
\]
for $0 \le \theta \le \min\{1,y\}$, and $w(\theta\,|\,y) = 0$ otherwise. This implies:
\[
\hat{\theta}_{\rm MMSE}(y) = \frac{\int_0^{\min\{1,y\}} \theta\,e^{\theta}\,d\theta}{e^{\min\{1,y\}}-1}
= \frac{\left[\min\{1,y\}-1\right]e^{\min\{1,y\}} + 1}{e^{\min\{1,y\}}-1}\,;
\]
\[
\hat{\theta}_{\rm MAP}(y) = \arg\max_{0\le\theta\le\min\{1,y\}} e^{\theta} = \min\{1,y\}\,;
\]
and
\[
\int_0^{\hat{\theta}_{\rm ABS}(y)} e^{\theta}\,d\theta = \frac{1}{2}\left(e^{\min\{1,y\}}-1\right)\,,
\]
which has the solution
\[
\hat{\theta}_{\rm ABS}(y) = \log\left(\frac{e^{\min\{1,y\}}+1}{2}\right)\,.
\]
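A brief numerical cross-check of these expressions (a sketch; the test values of y are arbitrary):

# Check the closed-form conditional mean and median against direct integration of the
# posterior w(theta|y) proportional to e^theta on [0, min(1, y)].
import numpy as np
from scipy.integrate import quad

for y in (0.4, 1.7):
    m = min(1.0, y)
    norm = np.exp(m) - 1.0
    mmse_closed = ((m - 1.0) * np.exp(m) + 1.0) / norm
    mmse_num = quad(lambda t: t * np.exp(t), 0.0, m)[0] / norm
    abs_closed = np.log((np.exp(m) + 1.0) / 2.0)
    half_mass = quad(lambda t: np.exp(t), 0.0, abs_closed)[0] / norm
    print(y, mmse_closed, mmse_num, half_mass)  # half_mass should be 0.5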
Exercise 8:
a. We have
\[
w(\theta\,|\,y) = \frac{e^{-y+\theta}\,e^{-\theta}}{e^{-y}\int_0^{y} d\theta} = \frac{1}{y}\,, \qquad 0 \le \theta \le y\,,
\]
and $w(\theta\,|\,y) = 0$ otherwise. That is, given $y$, $\theta$ is uniformly distributed on the interval $[0,y]$. From this we have immediately that
\[
\hat{\theta}_{\rm MMSE}(y) = \hat{\theta}_{\rm ABS}(y) = \frac{y}{2}\,.
\]
b. We have
\[
{\rm MMSE} = E\left\{{\rm Var}(\theta\,|\,Y)\right\}\,.
\]
Since $w(\theta\,|\,y)$ is uniform on $[0,y]$, ${\rm Var}(\theta\,|\,Y) = \dfrac{Y^2}{12}$. We have
\[
p(y) = e^{-y}\int_0^{y} d\theta = y\,e^{-y}\,, \qquad y > 0,
\]
from which
\[
{\rm MMSE} = \frac{E\{Y^2\}}{12} = \frac{1}{12}\int_0^{\infty} y^{3} e^{-y}\,dy = \frac{3!}{12} = \frac{1}{2}\,.
\]
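A Monte Carlo sketch of Part b (the sample size and seed are arbitrary choices): simulating theta ~ Exp(1) and Y = theta + Exp(1), the mean-square error of the estimate Y/2 should be close to 1/2.

# Empirical mean-square error of the MMSE estimator Y/2 under the Exercise 8 model.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
theta = rng.exponential(1.0, size=n)
y = theta + rng.exponential(1.0, size=n)
print(np.mean((y / 2.0 - theta) ** 2))  # approximately 0.5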
c. In this case,
\[
p_\theta(y) = \prod_{k=1}^{n} e^{-y_k+\theta}\,, \qquad \text{if } 0 < \theta < \min\{y_1,\ldots,y_n\}\,,
\]
from which (including the prior factor $e^{-\theta}$ in the exponent)
\[
\hat{\theta}_{\rm MAP}(y) = \arg\max_{0<\theta<\min\{y_1,\ldots,y_n\}} \exp\left\{-\sum_{k=1}^{n} y_k + (n-1)\theta\right\} = \min\{y_1,\ldots,y_n\}\,.
\]
Exercise 13:
a. We have
\[
p_\theta(y) = \theta^{T(y)}\,(1-\theta)^{\,n-T(y)}\,,
\]
where
\[
T(y) = \sum_{k=1}^{n} y_k\,.
\]
Rewriting this as
\[
p_\theta(y) = C(\theta)\,e^{\lambda T(y)}\,,
\]
with $\lambda = \log\big(\theta/(1-\theta)\big)$ and $C(\theta) = (1-\theta)^{n}$, we see from the Completeness Theorem for Exponential Families that $T(y)$ is a complete sufficient statistic for $\lambda$, and hence for $\theta$. (Assuming $\theta$ ranges throughout $(0,1)$.) Thus, any unbiased function of $T$ is an MVUE for $\theta$. Since $E_\theta\{T(Y)\} = n\theta$, such an estimate is given by
\[
\hat{\theta}_{\rm MV}(y) = \frac{T(y)}{n} = \frac{1}{n}\sum_{k=1}^{n} y_k\,.
\]
b.
\[
\hat{\theta}_{\rm ML}(y) = \arg\max_{0<\theta<1}\left\{\theta^{T(y)}(1-\theta)^{\,n-T(y)}\right\} = T(y)/n = \hat{\theta}_{\rm MV}(y)\,.
\]
Since the MLE equals the MVUE, we have immediately that $E_\theta\{\hat{\theta}_{\rm ML}(Y)\} = \theta$. The variance of $\hat{\theta}_{\rm ML}(Y)$ is easily computed to be $\theta(1-\theta)/n$.
c. We have
\[
\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y) = -\frac{T(Y)}{\theta^2} - \frac{n-T(Y)}{(1-\theta)^2}\,,
\]
from which
\[
I_\theta = \frac{n}{\theta} + \frac{n}{1-\theta} = \frac{n}{\theta(1-\theta)}\,.
\]
The CRLB is thus
\[
{\rm CRLB} = \frac{1}{I_\theta} = \frac{\theta(1-\theta)}{n} = {\rm Var}_\theta\!\left(\hat{\theta}_{\rm ML}(Y)\right)\,.
\]
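A quick simulation sketch (the values of theta, n, and the number of trials are arbitrary) illustrating that the variance of the Bernoulli MLE matches the CRLB theta(1-theta)/n:

# Empirical variance of the Bernoulli MLE (the sample mean) versus the CRLB.
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.3, 50, 200_000
estimates = rng.binomial(n, theta, size=trials) / n
print(np.var(estimates), theta * (1.0 - theta) / n)  # should be close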
Exercise 15:
We have
\[
p_\theta(y) = \frac{e^{-\theta}\,\theta^{y}}{y!}\,, \qquad y = 0, 1, \ldots\,.
\]
Thus
\[
\frac{\partial}{\partial\theta}\log p_\theta(y) = \frac{\partial}{\partial\theta}\left(-\theta + y\log\theta\right) = -1 + \frac{y}{\theta}\,,
\]
and
\[
\frac{\partial^2}{\partial\theta^2}\log p_\theta(y) = -\frac{y}{\theta^2} < 0\,.
\]
So
\[
\hat{\theta}_{\rm ML}(y) = y\,.
\]
Since $Y$ is Poisson, we have $E_\theta\{\hat{\theta}_{\rm ML}(Y)\} = {\rm Var}_\theta(\hat{\theta}_{\rm ML}(Y)) = \theta$. So $\hat{\theta}_{\rm ML}$ is unbiased.
Fisher's information is given by
\[
I_\theta = E_\theta\left\{-\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{E_\theta\{Y\}}{\theta^2} = \frac{1}{\theta}\,.
\]
So the CRLB is $\theta$, which equals ${\rm Var}_\theta(\hat{\theta}_{\rm ML}(Y))$. (Hence, the MLE is an MVUE in this case.)
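A small simulation sketch (the value of theta is arbitrary): here the MLE is the observation itself, so its empirical mean and variance should both be near theta.

# For a Poisson observation the MLE is Y itself; its variance equals theta, the CRLB.
import numpy as np

rng = np.random.default_rng(2)
theta = 4.0
samples = rng.poisson(theta, size=500_000)
print(np.mean(samples), np.var(samples), theta)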
Exercise 20:
a. Note that $Y_1, Y_2, \ldots, Y_n$ are independent, with $Y_k$ having the $N(0,\,1+\theta s_k^2)$ distribution.
Thus,
\[
\frac{\partial}{\partial\theta}\log p_\theta(y)
= \frac{\partial}{\partial\theta}\sum_{k=1}^{n}\left[-\frac{1}{2}\log\!\left(1+\theta s_k^2\right) - \frac{y_k^2}{2\left(1+\theta s_k^2\right)}\right]
= -\frac{1}{2}\sum_{k=1}^{n}\left[\frac{s_k^2}{1+\theta s_k^2} - \frac{y_k^2\, s_k^2}{\left(1+\theta s_k^2\right)^2}\right]\,,
\]
from which the likelihood equation becomes
\[
\sum_{k=1}^{n} \frac{s_k^2\left(y_k^2 - 1 - \hat{\theta}_{\rm ML}(y)\,s_k^2\right)}{\left(1+\hat{\theta}_{\rm ML}(y)\,s_k^2\right)^2} = 0\,.
\]
b.
\[
I_\theta = E_\theta\left\{-\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\}
= \sum_{k=1}^{n}\left[\frac{s_k^4\,E_\theta\{Y_k^2\}}{\left(1+\theta s_k^2\right)^3} - \frac{s_k^4}{2\left(1+\theta s_k^2\right)^2}\right]
= \frac{1}{2}\sum_{k=1}^{n}\frac{s_k^4}{\left(1+\theta s_k^2\right)^2}\,.
\]
So the CRLB is
\[
{\rm CRLB} = \frac{2}{\displaystyle\sum_{k=1}^{n}\frac{s_k^4}{\left(1+\theta s_k^2\right)^2}}\,.
\]
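For unequal s_k^2 the likelihood equation generally has no closed-form solution, but it is a one-dimensional root-finding problem. A minimal sketch (the synthetic s_k^2 values, the bracketing interval, and the seed are arbitrary assumptions):

# Solve the Exercise 20 likelihood equation numerically for theta, given data y and gains s_k^2.
import numpy as np
from scipy.optimize import brentq

def ml_estimate(y, s2):
    g = lambda th: np.sum(s2 * (y**2 - 1.0 - th * s2) / (1.0 + th * s2) ** 2)
    return brentq(g, -0.99 / s2.max(), 100.0)

rng = np.random.default_rng(3)
theta_true, n = 2.0, 2000
s2 = rng.uniform(0.5, 2.0, size=n)                     # s_k^2
y = rng.normal(0.0, np.sqrt(1.0 + theta_true * s2))    # Y_k ~ N(0, 1 + theta*s_k^2)
print(ml_estimate(y, s2))                              # should be close to theta_true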
c. With $s_k^2 \equiv 1$, the likelihood equation yields the solution
\[
\hat{\theta}_{\rm ML}(y) = \left(\frac{1}{n}\sum_{k=1}^{n} y_k^2\right) - 1\,,
\]
which is seen to yield a maximum of the likelihood function.
d. We have
\[
E_\theta\left\{\hat{\theta}_{\rm ML}(Y)\right\} = \left(\frac{1}{n}\sum_{k=1}^{n} E_\theta\left\{Y_k^2\right\}\right) - 1 = \theta\,.
\]
Similarly, since the $Y_k$'s are independent,
\[
{\rm Var}_\theta\!\left(\hat{\theta}_{\rm ML}(Y)\right) = \frac{1}{n^2}\sum_{k=1}^{n}{\rm Var}_\theta\left\{Y_k^2\right\} = \frac{1}{n^2}\sum_{k=1}^{n} 2\left(1+\theta\right)^{2} = \frac{2\left(1+\theta\right)^{2}}{n}\,.
\]
Thus, the bias of the MLE is 0 and the variance of the MLE equals the CRLB. (Hence, the MLE is an MVUE in this case.)
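A Monte Carlo sketch for Part d (theta, n, and the number of trials are arbitrary), comparing the empirical variance of the MLE with 2(1+theta)^2/n:

# With s_k^2 = 1 the MLE is the sample second moment minus one.
import numpy as np

rng = np.random.default_rng(4)
theta, n, trials = 1.5, 100, 100_000
y = rng.normal(0.0, np.sqrt(1.0 + theta), size=(trials, n))
mle = np.mean(y**2, axis=1) - 1.0
print(np.mean(mle), np.var(mle), 2.0 * (1.0 + theta) ** 2 / n)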
Exercise 22:
a. Note that
\[
p_\theta(y) = \exp\left\{\frac{1}{\theta}\sum_{k=1}^{n}\log F(y_k)\right\}\prod_{k=1}^{n}\frac{f(y_k)}{\theta\,F(y_k)}\,,
\]
which implies that the statistic $\sum_{k=1}^{n}\log F(y_k)$ is a complete sufficient statistic for $\theta$ via the Completeness Theorem for Exponential Families. We have
\[
E_\theta\left\{\sum_{k=1}^{n}\log F(Y_k)\right\} = n\,E_\theta\{\log F(Y_1)\} = \frac{n}{\theta}\int \log F(y_1)\,[F(y_1)]^{(1-\theta)/\theta}\,f(y_1)\,dy_1\,.
\]
Noting that $d\log F(y_1) = \frac{f(y_1)}{F(y_1)}\,dy_1$, and that $[F(y_1)]^{1/\theta} = \exp\{\log F(y_1)/\theta\}$, we can make the substitution $x = -\log F(y_1)$ to yield
\[
E_\theta\left\{\sum_{k=1}^{n}\log F(Y_k)\right\} = -\frac{n}{\theta}\int_0^{\infty} x\,e^{-x/\theta}\,dx = -n\theta\,.
\]
Thus, with $\hat{\theta}_{\rm MV}(y) = -\frac{1}{n}\sum_{k=1}^{n}\log F(y_k)$, we have
\[
E_\theta\left\{\hat{\theta}_{\rm MV}(Y)\right\} = \theta\,,
\]
which implies that $\hat{\theta}_{\rm MV}$ is an MVUE, since it is an unbiased function of a complete sufficient statistic.
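A simulation sketch of Part a, assuming for illustration that F is the standard uniform CDF, so F(y) = y on (0,1). In that case sampling from p_theta reduces to drawing Y = U^theta with U uniform, and -(1/n) * sum_k log F(Y_k) should concentrate around theta.

# Exercise 22a check with F(y) = y: Y = U**theta has density (1/theta)*y**(1/theta - 1).
import numpy as np

rng = np.random.default_rng(5)
theta, n = 0.7, 1_000_000
y = rng.uniform(size=n) ** theta
print(-np.mean(np.log(y)))  # approximately theta = 0.7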
b. [Correction: Note that, for the given prior, the prior mean should be $E\{\theta\} = c/(m-1)$.]
It is straightforward to see that $w(\theta\,|\,y)$ is of the same form as the prior, with $c$ replaced by $c - \sum_{k=1}^{n}\log F(y_k)$ and $m$ replaced by $n + m$. Thus, by inspection,
\[
E\{\theta\,|\,Y\} = \frac{c - \sum_{k=1}^{n}\log F(Y_k)}{m + n - 1}\,,
\]
which was to be shown. [Again, the necessary correction has been made.]
c. In this example, the prior and posterior distributions have the same form. The only change is that the parameters of that distribution are updated as new data are observed. A prior with this property is said to be a reproducing prior. The prior parameters, $c$ and $m$, can be thought of as coming from an earlier sample of size $m$. As $n$ becomes large compared to $m$, the importance of these prior parameters in the estimate diminishes. Note that $\sum_{k=1}^{n}\log F(Y_k)$ behaves like $n\,E\{\log F(Y_1)\}$ for large $n$. Thus, with $n \gg m$, the estimate is approximately given by the MVUE of Part a. Alternatively, with $m \gg n$, the estimate is approximately the prior mean, $c/(m-1)$. Between these two extremes, there is a balance between prior and observed information.
Exercise 23:
a. The log-likelihood function is
\[
\log p(y\,|\,A,\phi) = -\frac{1}{2\sigma^2}\sum_{k=1}^{n}\left[y_k - A\sin\!\left(\frac{\pi k}{2}+\phi\right)\right]^{2} - \frac{n}{2}\log\left(2\pi\sigma^2\right)\,.
\]
The likelihood equations are thus:
\[
\sum_{k=1}^{n}\left[y_k - \hat{A}\sin\!\left(\frac{\pi k}{2}+\hat{\phi}\right)\right]\sin\!\left(\frac{\pi k}{2}+\hat{\phi}\right) = 0\,,
\]
and
\[
\hat{A}\sum_{k=1}^{n}\left[y_k - \hat{A}\sin\!\left(\frac{\pi k}{2}+\hat{\phi}\right)\right]\cos\!\left(\frac{\pi k}{2}+\hat{\phi}\right) = 0\,.
\]
These equations are solved by the estimates:
\[
\hat{A}_{\rm ML} = \sqrt{y_c^{\,2} + y_s^{\,2}}\,, \qquad
\hat{\phi}_{\rm ML} = \tan^{-1}\!\left(\frac{y_c}{y_s}\right)\,,
\]
where
\[
y_c = \frac{1}{n}\sum_{k=1}^{n} y_k\cos\!\left(\frac{\pi k}{2}\right) \approx \frac{1}{n}\sum_{k=1}^{n/2}(-1)^{k}\,y_{2k}\,,
\]
and
\[
y_s = \frac{1}{n}\sum_{k=1}^{n} y_k\sin\!\left(\frac{\pi k}{2}\right) \approx \frac{1}{n}\sum_{k=1}^{n/2}(-1)^{k+1}\,y_{2k-1}\,.
\]
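One way to sanity-check the phase estimate numerically is to compare tan^{-1}(y_c/y_s) with a direct search over phi, fitting the amplitude by least squares at each candidate phase. A minimal sketch (the parameter values, noise level, and search grid are arbitrary assumptions):

# Compare the closed-form phase estimate with a brute-force profile search over phi.
import numpy as np

rng = np.random.default_rng(6)
A_true, phi_true, sigma, n = 2.0, 0.8, 0.5, 400
k = np.arange(1, n + 1)
y = A_true * np.sin(np.pi * k / 2 + phi_true) + rng.normal(0.0, sigma, size=n)

y_c = np.mean(y * np.cos(np.pi * k / 2))
y_s = np.mean(y * np.sin(np.pi * k / 2))

best_phi, best_sse = None, np.inf
for phi in np.linspace(-np.pi / 2, np.pi / 2, 10_001):
    s = np.sin(np.pi * k / 2 + phi)
    a = np.dot(y, s) / np.dot(s, s)      # least-squares amplitude for this phase
    sse = np.sum((y - a * s) ** 2)
    if sse < best_sse:
        best_phi, best_sse = phi, sse
print(np.arctan(y_c / y_s), best_phi)    # the two phase estimates should agree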
b. Appending the prior to the above problem yields the MAP estimates:
\[
\hat{\phi}_{\rm MAP} = \hat{\phi}_{\rm ML}\,,
\]
and
\[
\hat{A}_{\rm MAP} = \frac{\hat{A}_{\rm ML} + \sqrt{\hat{A}_{\rm ML}^{2} + \dfrac{2(1+\epsilon)\sigma^{2}}{n}}}{1+\epsilon}\,,
\]
where $\epsilon \triangleq \dfrac{2\sigma^{2}}{n\alpha^{2}}$.
c. Note that, as $\alpha \to \infty$ (and the prior diffuses), the MAP estimate of $A$ does not approach the MLE of $A$. However, as $n \to \infty$, the MAP estimate does approach the MLE.