
ECE 531: Detection and Estimation Theory, Spring 2011

Homework 6

Problem 1. (8.20) (Shu Wang)


Solution:
From the problem, we know that:

H = \begin{bmatrix} 1 \\ r \\ \vdots \\ r^{N-1} \end{bmatrix}

So we have h[n] = r^n. According to (8.46), we have:

\hat{A}(n) = \hat{A}(n-1) + K[n]\left(x[n] - h[n]^T \hat{A}(n-1)\right) = \hat{A}(n-1) + K[n]\left(x[n] - r^n \hat{A}(n-1)\right)

From the problem, we know that \sigma^2 = 1. According to (8.45), we have Var(\hat{A}(n)) = \Sigma[n]. We can get K[n] by using (8.47):

K[n] = \frac{Var(\hat{A}(n-1))\, h[n]}{1 + h[n]^T Var(\hat{A}(n-1))\, h[n]} = \frac{r^n\, Var(\hat{A}(n-1))}{1 + r^{2n}\, Var(\hat{A}(n-1))}

Also according to (8.48), we have:

Var(\hat{A}(n)) = \left(1 - K[n] h[n]^T\right) Var(\hat{A}(n-1)) = \left(1 - \frac{r^{2n} Var(\hat{A}(n-1))}{1 + r^{2n} Var(\hat{A}(n-1))}\right) Var(\hat{A}(n-1)) = \frac{Var(\hat{A}(n-1))}{1 + r^{2n} Var(\hat{A}(n-1))}

Let Var(\hat{A}(0)) = \sigma^2 = 1. Then we have:

Var(\hat{A}(1)) = \frac{1}{1 + r^2}

Var(\hat{A}(2)) = \frac{1/(1 + r^2)}{1 + r^4 \cdot \frac{1}{1 + r^2}} = \frac{1}{1 + r^2 + r^4}

Then we can conclude by induction that:

Var(\hat{A}(n)) = \frac{1}{\sum_{k=0}^{n} r^{2k}}, \quad \text{and in particular} \quad Var(\hat{A}(N-1)) = \frac{1}{\sum_{n=0}^{N-1} r^{2n}}
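The recursion and the closed form can be checked against each other numerically. A minimal sketch (the function names and the value r = 0.9 are our own illustrative choices, not from the text):

```python
def sequential_lse_var(r, N, var0=1.0):
    """Propagate Var(A_hat(n)) with the sequential LSE recursion (sigma^2 = 1)."""
    var = var0  # Var(A_hat(0)) = 1
    history = [var]
    for n in range(1, N):
        # (8.48) reduces to Var(n) = Var(n-1) / (1 + r^(2n) Var(n-1))
        var = var / (1.0 + r ** (2 * n) * var)
        history.append(var)
    return history

def closed_form_var(r, n):
    """Claimed closed form: 1 / sum_{k=0}^{n} r^(2k)."""
    return 1.0 / sum(r ** (2 * k) for k in range(n + 1))

r, N = 0.9, 8
recursive = sequential_lse_var(r, N)
closed = [closed_form_var(r, n) for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(recursive, closed))
```

The term-by-term agreement is exactly the induction step used above.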

Problem 2. (8.27) (Luke Vercimak)


Solution:
The model is x[n] = \exp(\theta) + w[n].

1. Newton-Raphson

We will assume that the signal model is s[n] = \exp(\theta), i.e. s = \exp(\theta)\mathbf{1}. We want to minimize:

J = (x - s(\theta))^T (x - s(\theta))

To do this we want to solve:

\frac{\partial s(\theta)^T}{\partial \theta} (x - s(\theta)) = 0

Using results (8.59) and (8.60) from the book, with

g(\theta) = \sum_{n=0}^{N-1} \exp(\theta)\,(x[n] - \exp(\theta))

and

\frac{\partial g(\theta)}{\partial \theta} = \sum_{n=0}^{N-1} \left[\exp(\theta)\, x[n] - 2\exp(2\theta)\right],

the Newton-Raphson iteration \theta_{k+1} = \theta_k - \left(\frac{\partial g(\theta)}{\partial \theta}\right)^{-1} g(\theta)\,\Big|_{\theta = \theta_k} becomes:

\theta_{k+1} = \theta_k + \frac{\sum_{n=0}^{N-1} \exp(\theta_k)\,(x[n] - \exp(\theta_k))}{\sum_{n=0}^{N-1} \left[2\exp(2\theta_k) - \exp(\theta_k)\, x[n]\right]}
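The iteration can be sketched in code. A minimal Python check (the data vector x and the starting point theta0 are our own illustrative choices); at the fixed point, g(\hat{\theta}) = 0 forces \exp(\hat{\theta}) = \bar{x}, which matches the analytical solution of part 2:

```python
import math

def newton_raphson_theta(x, theta0, iters=50):
    """theta_{k+1} = theta_k + g(theta_k) / sum(2 exp(2 theta_k) - exp(theta_k) x[n])."""
    theta = theta0
    for _ in range(iters):
        e = math.exp(theta)
        g = sum(e * (xn - e) for xn in x)           # g(theta)
        dg = sum(2.0 * e * e - e * xn for xn in x)  # minus dg/dtheta
        theta = theta + g / dg                      # equivalent to theta - g / (dg/dtheta)
    return theta

x = [2.7, 3.1, 2.9, 3.3]
theta_hat = newton_raphson_theta(x, theta0=1.0)
assert abs(math.exp(theta_hat) - sum(x) / len(x)) < 1e-9
```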

2. Analytically

Changing the model to vector form:

x = \exp(\theta)\mathbf{1} + w

We will assume that the signal model is s[n] = \exp(\theta). This model can be transformed to a linear model by the transformation:

\alpha = \exp(\theta) = g(\theta)

Therefore (since g(\theta) is invertible):

s(\theta) = s(g^{-1}(\alpha))

The signal model now becomes:

s = H\alpha = \mathbf{1}\alpha

Using the linear model results from the book to find the LSE:

\hat{\alpha}_{LSE} = (H^T H)^{-1} H^T x = (\mathbf{1}^T \mathbf{1})^{-1} \mathbf{1}^T x = \frac{1}{N} \sum_{n=0}^{N-1} x[n] = \bar{x}

\hat{\theta}_{LSE} = g^{-1}(\hat{\alpha}_{LSE}) = \ln(\hat{\alpha}_{LSE}) = \ln(\bar{x})
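The estimate \hat{\theta}_{LSE} = \ln(\bar{x}) can be sanity-checked by evaluating the least-squares cost J around it. A small sketch with our own illustrative data:

```python
import math

def J(theta, x):
    """Least-squares cost for the model s[n] = exp(theta)."""
    return sum((xn - math.exp(theta)) ** 2 for xn in x)

x = [2.7, 3.1, 2.9, 3.3]
theta_lse = math.log(sum(x) / len(x))  # ln(x_bar)
# Small perturbations in either direction should not decrease the cost
for d in (-0.1, -0.01, 0.01, 0.1):
    assert J(theta_lse, x) < J(theta_lse + d, x)
```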
Problem 3. (10.3) (Shu Wang)
Solution:
Because the x[n] are conditionally independent given \theta, and the prior is p(\theta) = \exp(-\theta)\, u(\theta), we have:

p(x|\theta) = \exp\left[-\sum_{n=0}^{N-1} (x[n] - \theta)\right] u(\min(x[n]) - \theta)

p(x, \theta) = p(x|\theta)\, p(\theta) = \exp\left[-\sum_{n=0}^{N-1} x[n] + (N-1)\theta\right] u(\min(x[n]) - \theta)\, u(\theta)
p(x) = \int_0^{\min(x[n])} p(x, \theta)\, d\theta = \int_0^{\min(x[n])} \exp\left[-\sum_{n=0}^{N-1} x[n] + (N-1)\theta\right] d\theta = \exp\left[-\sum_{n=0}^{N-1} x[n]\right] \frac{1}{N-1} \left(\exp[(N-1)\min(x[n])] - 1\right)

p(\theta|x) = \frac{p(x, \theta)}{p(x)} = \frac{\exp[(N-1)\theta]\, u(\min(x[n]) - \theta)\, u(\theta)}{\frac{1}{N-1}\left(\exp[(N-1)\min(x[n])] - 1\right)}

E(\theta|x) = \int_0^{\min(x[n])} \theta\, p(\theta|x)\, d\theta = \frac{N-1}{\exp[(N-1)\min(x[n])] - 1} \int_0^{\min(x[n])} \theta \exp[(N-1)\theta]\, d\theta

By using integration by parts, we have:

\hat{\theta}_{MMSE} = E(\theta|x) = \frac{\min(x[n])}{1 - \exp[-(N-1)\min(x[n])]} - \frac{1}{N-1}
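The closed form can be checked against direct numerical integration of the posterior mean; a sketch, where min(x[n]) = 0.8 and N = 5 are our own illustrative values:

```python
import math

def mmse_closed_form(xmin, N):
    """theta_hat = xmin / (1 - exp(-(N-1) xmin)) - 1/(N-1)."""
    a = N - 1
    return xmin / (1.0 - math.exp(-a * xmin)) - 1.0 / a

def mmse_numeric(xmin, N, steps=100000):
    """Posterior mean by trapezoidal integration; the posterior is
    proportional to exp((N-1) theta) on [0, xmin]."""
    a = N - 1
    h = xmin / steps
    num = den = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        num += w * t * math.exp(a * t)
        den += w * math.exp(a * t)
    return num / den

xmin, N = 0.8, 5
assert abs(mmse_closed_form(xmin, N) - mmse_numeric(xmin, N)) < 1e-6
```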
