
Alternate Form for CRLB

$$\mathrm{var}(\hat\theta) \ge \frac{1}{E\left\{\left[\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta}\right]^2\right\}}$$

(See Appendix 3A for the derivation.)

Sometimes it is easier to find the CRLB this way.
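This alternate form can be checked numerically against its defining expectation. A minimal sketch (assuming Python with NumPy, and using the Gaussian-mean case x ~ N(θ, σ²), for which E{[∂ ln p/∂θ]²} = 1/σ² exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma2, M = 2.0, 1.5, 200_000  # true parameter, noise variance, trials

# Draw M scalar observations x ~ N(theta, sigma2)
x = rng.normal(theta, np.sqrt(sigma2), M)

# Score = d/dtheta ln p(x; theta) = (x - theta) / sigma2 for the Gaussian pdf
score = (x - theta) / sigma2

# Monte Carlo estimate of E{score^2}; should match 1/sigma2, so that
# the CRLB 1/E{score^2} matches the familiar sigma2 bound
mc_fisher = np.mean(score**2)
exact_fisher = 1.0 / sigma2
print(mc_fisher, exact_fisher)
```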


This also gives a new viewpoint of the CRLB:


From Gardner's paper (IEEE Trans. on Info. Theory, July 1979)


Consider the normalized version of this form of the CRLB:

$$\frac{\mathrm{var}(\hat\theta)}{\theta^2} \ge \frac{1}{E\left\{\left[\theta\,\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta}\right]^2\right\}}$$

We'll derive this in a way that will reinterpret the CRLB.
Consider the incremental sensitivity of p(x;θ) to changes in θ:

If θ → θ + Δθ, then p(x;θ) → p(x;θ + Δθ). How sensitive is p(x;θ) to that change?

$$\tilde S_p(\mathbf{x}) = \frac{\Delta p(\mathbf{x};\theta)\,/\,p(\mathbf{x};\theta)}{\Delta\theta\,/\,\theta} = \frac{\%\ \text{change in}\ p(\mathbf{x};\theta)}{\%\ \text{change in}\ \theta}$$
Now let Δθ → 0:

$$S_p(\mathbf{x}) = \lim_{\Delta\theta \to 0} \tilde S_p(\mathbf{x}) = \theta\,\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta} = \frac{\theta}{p(\mathbf{x};\theta)}\,\frac{\partial p(\mathbf{x};\theta)}{\partial \theta}$$

(Recall from calculus: $\frac{\partial}{\partial x} \ln f(x) = \frac{1}{f(x)} \frac{\partial f(x)}{\partial x}$.)

The normalized CRLB then becomes

$$\frac{\mathrm{var}(\hat\theta)}{\theta^2} \ge \frac{1}{\theta^2\, E\left\{\left[\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta}\right]^2\right\}} = \frac{1}{E\left\{S_p^2(\mathbf{x})\right\}}$$

Interpretation: Normalized CRLB = inverse mean-square sensitivity.
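The "inverse mean-square sensitivity" interpretation can also be checked numerically. A sketch (Python/NumPy assumed; Gaussian-mean case, where S_p(x) = θ(x − θ)/σ² and the normalized CRLB works out to σ²/θ²):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma2, M = 4.0, 1.0, 200_000

x = rng.normal(theta, np.sqrt(sigma2), M)

# Sensitivity S_p(x) = theta * d/dtheta ln p(x; theta) = theta*(x - theta)/sigma2
S = theta * (x - theta) / sigma2

norm_crlb_mc = 1.0 / np.mean(S**2)     # inverse mean-square sensitivity
norm_crlb_exact = sigma2 / theta**2    # CRLB/theta^2 for the Gaussian mean
print(norm_crlb_mc, norm_crlb_exact)
```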

Definition of Fisher Information

The denominator in the CRLB is called the Fisher information, I(θ):

$$I(\theta) = -E\left\{\frac{\partial^2 \ln p(\mathbf{x};\theta)}{\partial \theta^2}\right\}$$

It is a measure of the expected goodness of the data for the purpose of making an estimate. It has the properties needed for an information measure (as does Shannon information):
1. I(θ) ≥ 0 (easy to see using the alternate form of the CRLB)
2. I(θ) is additive for independent observations, which follows from
   $\ln p(\mathbf{x};\theta) = \ln \prod_n p(x[n];\theta) = \sum_n \ln p(x[n];\theta)$,
   so that $I(\theta) = \sum_n I_n(\theta)$. If each $I_n(\theta)$ is the same: $I(\theta) = N\,I_n(\theta)$.
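Additivity makes the total information easy to tally for i.i.d. data. A small sketch (Python/NumPy assumed; the i.i.d. Gaussian-mean case, where each sample contributes i(θ) = 1/σ², so I(θ) = N/σ² and the CRLB is σ²/N):

```python
import numpy as np

sigma2, N = 2.0, 50

# Per-sample Fisher information for x[n] ~ N(theta, sigma2)
i_single = 1.0 / sigma2

# Additivity over independent samples: total info is the sum of per-sample infos
I_total = np.sum(np.full(N, i_single))

crlb = 1.0 / I_total   # = sigma2 / N, the familiar bound on the sample mean
print(I_total, crlb)
```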

3.5 CRLB for Signals in AWGN

When our data is a signal in additive white Gaussian noise (AWGN), we get a simple form for the CRLB.

Signal model: $x[n] = s[n;\theta] + w[n]$, $n = 0, 1, \ldots, N-1$, where $w[n]$ is white, Gaussian, and zero-mean with variance $\sigma^2$.

Q: What is the CRLB?

First write the likelihood function:

$$p(\mathbf{x};\theta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left(x[n] - s[n;\theta]\right)^2\right\}$$

Differentiate the log-likelihood function twice to get:

$$\frac{\partial^2 \ln p(\mathbf{x};\theta)}{\partial \theta^2} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left[\left(x[n] - s[n;\theta]\right) \frac{\partial^2 s[n;\theta]}{\partial \theta^2} - \left(\frac{\partial s[n;\theta]}{\partial \theta}\right)^2\right]$$

This depends on the random x[n], so we must take E{·}:

$$E\left\{\frac{\partial^2 \ln p(\mathbf{x};\theta)}{\partial \theta^2}\right\} = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left[\underbrace{\left(E\{x[n]\} - s[n;\theta]\right)}_{=\,0} \frac{\partial^2 s[n;\theta]}{\partial \theta^2} - \left(\frac{\partial s[n;\theta]}{\partial \theta}\right)^2\right] = -\frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left(\frac{\partial s[n;\theta]}{\partial \theta}\right)^2$$

Then using this we get the CRLB for Signal in AWGN:


$$\mathrm{var}(\hat\theta) \ge \frac{\sigma^2}{\displaystyle\sum_{n=0}^{N-1} \left(\frac{\partial s[n;\theta]}{\partial \theta}\right)^2}$$

Note: $\partial s[n;\theta]/\partial\theta$ tells how sensitive the signal is to the parameter. If the signal is very sensitive to a parameter change, then the CRLB is small, and we can get a very accurate estimate!
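The bound above reduces to a one-line computation once ∂s[n;θ]/∂θ is known. A sketch (Python/NumPy; the helper name `crlb_awgn` and the DC-level example s[n;A] = A are illustrative, not from the notes):

```python
import numpy as np

def crlb_awgn(ds_dtheta, sigma2):
    """CRLB for a scalar parameter of a known signal in AWGN:
    var(theta_hat) >= sigma2 / sum_n (ds[n;theta]/dtheta)^2."""
    ds = np.asarray(ds_dtheta, dtype=float)
    return sigma2 / np.sum(ds**2)

# Illustrative case: DC level A in noise, s[n;A] = A, so ds/dA = 1 for all n,
# and the bound collapses to sigma2/N.
N, sigma2 = 100, 0.5
bound = crlb_awgn(np.ones(N), sigma2)
print(bound)
```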

Ex. 3.5: CRLB of Frequency of Sinusoid

Signal model: $x[n] = A\cos(2\pi f_0 n + \phi) + w[n]$, $0 < f_0 < \tfrac{1}{2}$, $n = 0, 1, \ldots, N-1$

$$\mathrm{var}(\hat f_0) \ge \frac{1}{\eta \displaystyle\sum_{n=0}^{N-1} \left[2\pi n \sin(2\pi f_0 n + \phi)\right]^2}, \qquad \eta = \frac{A^2}{\sigma^2}\ (\text{SNR})$$

(Note: the corresponding result in the book contains an error.)

[Figure: CRLB in (cycles/sample)² (bound on variance, on the order of 10⁻⁴) and CRLB^{1/2} in cycles/sample (bound on std. dev.) plotted vs. f₀ from 0 to 0.5 cycles/sample. The signal is less sensitive if f₀ is near 0 or ½, so the bound grows sharply there.]
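The band-edge behavior shown in the figure can be reproduced numerically. A sketch (Python/NumPy; the helper name and parameter values are assumptions, with η = A²/σ² as above):

```python
import numpy as np

def freq_crlb(f0, N=25, eta=1.0, phi=0.0):
    """CRLB on f0 in (cycles/sample)^2 for A*cos(2*pi*f0*n + phi) in AWGN,
    with eta = A^2/sigma^2 (SNR); illustrative parameter choices."""
    n = np.arange(N)
    denom = eta * np.sum((2*np.pi*n*np.sin(2*np.pi*f0*n + phi))**2)
    return 1.0 / denom

mid = freq_crlb(0.25)     # mid-band: signal very sensitive to f0
edge = freq_crlb(0.001)   # near f0 = 0: signal barely changes with f0
print(mid, edge)          # bound is far larger near the band edge
```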

3.6 Transformation of Parameters

Say there is a parameter θ with known CRLB_θ. But imagine that we are instead interested in estimating some other parameter α that is a function of θ: α = g(θ).

Q: What is CRLB_α?

$$\mathrm{var}(\hat\alpha) \ge \mathrm{CRLB}_\alpha = \left(\frac{\partial g(\theta)}{\partial \theta}\right)^2 \mathrm{CRLB}_\theta$$

(Proved in Appendix 3B.)

The derivative term captures the sensitivity of α to θ. A large ∂g/∂θ means a small error in θ gives a larger error in α, which increases the CRLB (i.e., worsens accuracy).

Example: Speed of Vehicle From Elapsed Time

A laser sensor marks "start" and a second laser sensor, a known distance D away, marks "stop"; we measure the elapsed time T.

The possible accuracy of T̂ is set by CRLB_T, but we really want to measure the speed V = D/T.

Find CRLB_V using the transformation result:

$$\mathrm{CRLB}_V = \left(\frac{\partial V}{\partial T}\right)^2 \mathrm{CRLB}_T = \left(\frac{D}{T^2}\right)^2 \mathrm{CRLB}_T = \frac{V^4}{D^2}\,\mathrm{CRLB}_T$$

Accuracy bound on the standard deviation:

$$\sigma_V \ge \frac{V^2}{D}\sqrt{\mathrm{CRLB}_T} \quad (\mathrm{m/s})$$

Less accurate at high speeds (the standard-deviation bound grows quadratically with V); more accurate over large distances.

Effect of Transformation on Efficiency

Suppose you have an efficient estimator θ̂ of θ, but you are really interested in estimating α = g(θ), and you plan to use α̂ = g(θ̂).

Q: Is this an efficient estimator of α?
A: Theorem: If g(θ) has the affine form g(θ) = aθ + b, then α̂ = g(θ̂) is efficient.

Proof:
First,

$$\mathrm{var}(\hat\alpha) = \mathrm{var}(a\hat\theta + b) = a^2\,\mathrm{var}(\hat\theta) = a^2\,\mathrm{CRLB}_\theta$$

where the last step holds because θ̂ is efficient. Now, what is CRLB_α? Using the transformation result:

$$\mathrm{CRLB}_\alpha = \left(\frac{\partial (a\theta + b)}{\partial \theta}\right)^2 \mathrm{CRLB}_\theta = a^2\,\mathrm{CRLB}_\theta$$

So $\mathrm{var}(\hat\alpha) = \mathrm{CRLB}_\alpha$: efficient!
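The affine-efficiency result can be illustrated by Monte Carlo. A sketch (Python/NumPy; the sample mean of a Gaussian serves as the efficient θ̂, with a and b chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma2, N, M = 3.0, 2.0, 40, 20_000
a, b = 5.0, -1.0

# Efficient estimator of the Gaussian mean: the sample mean (var = sigma2/N = CRLB)
x = rng.normal(theta, np.sqrt(sigma2), size=(M, N))
theta_hat = x.mean(axis=1)

alpha_hat = a * theta_hat + b     # affine transform of the estimator
crlb_alpha = a**2 * sigma2 / N    # (dg/dtheta)^2 * CRLB_theta

print(alpha_hat.var(), crlb_alpha)  # the two should nearly agree
```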

Asymptotic Efficiency Under Transformation

If the mapping α = g(θ) is not affine, this result does NOT hold. But if the number of data samples used is large, then the estimator α̂ = g(θ̂) is approximately efficient (asymptotically efficient).

[Figure: the pdf of θ̂ mapped through α = g(θ). Small-N case: the pdf is widely spread over the nonlinear mapping. Large-N case: the pdf is concentrated onto a linearized section of the mapping.]
