
3.7 CRLB for Vector Parameter Case

Vector parameter: $\boldsymbol{\theta} = [\theta_1 \;\; \theta_2 \;\; \cdots \;\; \theta_p]^T$

Its estimate: $\hat{\boldsymbol{\theta}} = [\hat{\theta}_1 \;\; \hat{\theta}_2 \;\; \cdots \;\; \hat{\theta}_p]^T$

Assume that the estimate is unbiased: $E\{\hat{\boldsymbol{\theta}}\} = \boldsymbol{\theta}$


For a scalar parameter we looked at its variance,
but for a vector parameter we look at its covariance matrix:

$$\text{var}\{\hat{\boldsymbol{\theta}}\} = E\left\{[\hat{\boldsymbol{\theta}} - E\{\hat{\boldsymbol{\theta}}\}][\hat{\boldsymbol{\theta}} - E\{\hat{\boldsymbol{\theta}}\}]^T\right\} = \mathbf{C}_{\hat{\boldsymbol{\theta}}}$$

For example, for $\boldsymbol{\theta} = [x \;\; y \;\; z]^T$:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \begin{bmatrix} \text{var}(\hat{x}) & \text{cov}(\hat{x},\hat{y}) & \text{cov}(\hat{x},\hat{z}) \\ \text{cov}(\hat{y},\hat{x}) & \text{var}(\hat{y}) & \text{cov}(\hat{y},\hat{z}) \\ \text{cov}(\hat{z},\hat{x}) & \text{cov}(\hat{z},\hat{y}) & \text{var}(\hat{z}) \end{bmatrix}$$
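As a quick illustration of this definition (my addition, not in the notes; the numbers are made up), such a covariance matrix can be approximated from repeated estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 synthetic estimates of theta = [x, y, z] (hypothetical values)
true_theta = np.array([1.0, 2.0, 3.0])
estimates = true_theta + rng.multivariate_normal(
    mean=np.zeros(3),
    cov=[[1.0, 0.5, 0.0],
         [0.5, 2.0, 0.3],
         [0.0, 0.3, 1.5]],
    size=1000)

# np.cov expects variables in rows, observations in columns
C_hat = np.cov(estimates.T)
print(C_hat)  # approximates the 3x3 covariance matrix above
```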

Fisher Information Matrix


For the vector parameter case,
the Fisher information becomes the Fisher Information Matrix (FIM) $\mathbf{I}(\boldsymbol{\theta})$,
whose $mn^{\text{th}}$ element is given by:

$$[\mathbf{I}(\boldsymbol{\theta})]_{mn} = -E\left[\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_m\,\partial\theta_n}\right], \qquad m, n = 1, 2, \ldots, p$$

evaluated at the true value of $\boldsymbol{\theta}$.

The CRLB Matrix


Then, under the same kind of regularity conditions,
the CRLB matrix is the inverse of the FIM:

$$\mathbf{CRLB} = \mathbf{I}^{-1}(\boldsymbol{\theta})$$

So what this means is:

$$\sigma^2_{\hat{\theta}_n} = [\mathbf{C}_{\hat{\boldsymbol{\theta}}}]_{nn} \ge [\mathbf{I}^{-1}(\boldsymbol{\theta})]_{nn} \qquad (*)$$

Diagonal elements of the inverse FIM bound the parameter variances,
which are the diagonal elements of the parameter covariance matrix:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \begin{bmatrix} \text{var}(\hat{x}) & \text{cov}(\hat{x},\hat{y}) & \text{cov}(\hat{x},\hat{z}) \\ \text{cov}(\hat{y},\hat{x}) & \text{var}(\hat{y}) & \text{cov}(\hat{y},\hat{z}) \\ \text{cov}(\hat{z},\hat{x}) & \text{cov}(\hat{z},\hat{y}) & \text{var}(\hat{z}) \end{bmatrix}, \qquad \mathbf{I}^{-1}(\boldsymbol{\theta}) = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}$$

with $\text{var}(\hat{x}) \ge b_{11}$, $\text{var}(\hat{y}) \ge b_{22}$, and $\text{var}(\hat{z}) \ge b_{33}$.
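To make the bound concrete, here is a minimal numerical sketch (my addition, not in the notes) for the standard linear Gaussian model $\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$ with $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$, whose FIM is known to be $\mathbf{H}^T\mathbf{H}/\sigma^2$:

```python
import numpy as np

# Linear Gaussian model x = H @ theta + w, w ~ N(0, sigma^2 I)
# Example: line fit, theta = [A, B], x[n] = A + B*n + w[n]
N = 50
sigma2 = 0.25
n = np.arange(N)
H = np.column_stack([np.ones(N), n])

# FIM for this model: I(theta) = H^T H / sigma^2
FIM = H.T @ H / sigma2

# CRLB matrix is the inverse FIM; its diagonal bounds the variances
CRLB = np.linalg.inv(FIM)
print("var(A_hat) >=", CRLB[0, 0])
print("var(B_hat) >=", CRLB[1, 1])
```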

More General Form of The CRLB Matrix


$\mathbf{C}_{\hat{\boldsymbol{\theta}}} - \mathbf{I}^{-1}(\boldsymbol{\theta})$ is positive semi-definite.

Mathematical notation for this is:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} - \mathbf{I}^{-1}(\boldsymbol{\theta}) \ge \mathbf{0} \qquad (**)$$

Note: property #5 about p.d. matrices on p. 573 states that $(**)$ implies $(*)$.
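Positive semi-definiteness of the difference can be checked numerically through its eigenvalues; a small sketch (my addition, with made-up matrices):

```python
import numpy as np

# Hypothetical estimator covariance and CRLB matrix (made-up numbers)
C = np.array([[1.2, 0.3],
              [0.3, 0.9]])
CRLB = np.array([[1.0, 0.2],
                 [0.2, 0.8]])

# C - CRLB >= 0 (PSD) iff all eigenvalues are nonnegative
eigvals = np.linalg.eigvalsh(C - CRLB)
print(eigvals, "PSD:", np.all(eigvals >= -1e-12))
```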

CRLB Off-Diagonal Elements Insight

Not In Book

Let $\boldsymbol{\theta} = [x_e \;\; y_e]^T$ represent the 2-D x-y location of a transmitter (emitter) to be estimated.

Consider the two cases of scatter plots for the estimated location:
[Figure: two scatter plots of $(\hat{x}_e, \hat{y}_e)$, one with an axis-aligned cloud (uncorrelated errors) and one with a tilted cloud (correlated errors)]

Each case has the same variances, but the location accuracy characteristics are very different. This is the effect of the off-diagonal elements of the covariance matrix.

Should consider the effect of off-diagonal CRLB elements!!!
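A quick synthetic demonstration of this point (my addition; the covariance numbers are hypothetical): two Gaussian estimate clouds with identical diagonal variances but different off-diagonal correlation spread very differently along a diagonal direction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same variances on the diagonal, different off-diagonal correlation
C_uncorr = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
C_corr = np.array([[1.0, 0.9],
                   [0.9, 1.0]])

for C in (C_uncorr, C_corr):
    est = rng.multivariate_normal([0, 0], C, size=2000)
    # Spread along the 45-degree direction differs strongly between cases
    u = np.array([1, 1]) / np.sqrt(2)
    print("variance along [1,1]/sqrt(2):", np.var(est @ u))
```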

CRLB Matrix and Error Ellipsoids


(Not In Book)

Assume $\hat{\boldsymbol{\theta}} = [\hat{x}_e \;\; \hat{y}_e]^T$ is 2-D Gaussian w/ zero mean (only for convenience)
and a covariance matrix $\mathbf{C}_{\hat{\boldsymbol{\theta}}}$.

Then its PDF is given by:

$$p(\hat{\boldsymbol{\theta}}) = \frac{1}{(2\pi)^{N/2}\det^{1/2}(\mathbf{C}_{\hat{\boldsymbol{\theta}}})}\exp\left\{-\frac{1}{2}\hat{\boldsymbol{\theta}}^T\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}\hat{\boldsymbol{\theta}}\right\}$$

The exponent is a quadratic form!! (recall: it's scalar valued)

Let $\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1} = \mathbf{A}$ for ease.

So the equi-height contours of this PDF are given by the values of $\hat{\boldsymbol{\theta}}$ such that:

$$\hat{\boldsymbol{\theta}}^T\mathbf{A}\hat{\boldsymbol{\theta}} = k \quad \text{(some constant)}$$

Note: $\mathbf{A}$ is symmetric, so $a_{12} = a_{21}$, because any covariance matrix is symmetric and the inverse of a symmetric matrix is symmetric.

What does this look like?

$$a_{11}\hat{x}_e^2 + 2a_{12}\hat{x}_e\hat{y}_e + a_{22}\hat{y}_e^2 = k$$

An Ellipse!!! (Look it up in your calculus book!!!)


Recall: If $a_{12} = 0$, then the ellipse is aligned w/ the axes, and $a_{11}$ and $a_{22}$ control the size of the ellipse along the axes.

Note: $a_{12} = 0$

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1} = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix} \;\Rightarrow\; \mathbf{C}_{\hat{\boldsymbol{\theta}}} = \begin{bmatrix} 1/a_{11} & 0 \\ 0 & 1/a_{22} \end{bmatrix} \;\Rightarrow\; \hat{x}_e \;\&\; \hat{y}_e \text{ are uncorrelated}$$


Note: $a_{12} \neq 0 \;\Rightarrow\; \hat{x}_e \;\&\; \hat{y}_e$ are correlated:

$$\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \begin{bmatrix} \sigma^2_{\hat{x}_e} & \sigma_{\hat{x}_e\hat{y}_e} \\ \sigma_{\hat{y}_e\hat{x}_e} & \sigma^2_{\hat{y}_e} \end{bmatrix}$$
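A small sketch (my addition, with made-up covariance values) of how the off-diagonal term tilts the contour: the principal directions of the ellipse are the eigenvectors of $\mathbf{A} = \mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}$, with half-lengths $\sqrt{k/\lambda_i}$ (proved later in this section):

```python
import numpy as np

def contour_axes(C, k=1.0):
    """Principal directions and half-lengths of the contour
    theta^T inv(C) theta = k for a 2x2 covariance C."""
    A = np.linalg.inv(C)
    lam, V = np.linalg.eigh(A)          # A = V diag(lam) V^T
    half_lengths = np.sqrt(k / lam)     # axis half-length sqrt(k/lambda)
    return V, half_lengths

# Uncorrelated: axes aligned with x/y; correlated: axes tilted
C_uncorr = np.array([[2.0, 0.0], [0.0, 0.5]])
C_corr   = np.array([[2.0, 0.7], [0.7, 0.5]])
for C in (C_uncorr, C_corr):
    V, L = contour_axes(C)
    print("axis directions:\n", V, "\nhalf-lengths:", L)
```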

Error Ellipsoids and Correlation


If $\hat{x}_e$ & $\hat{y}_e$ are uncorrelated, the error ellipse is aligned w/ the axes, spanning $\sim 2\sigma_{\hat{x}}$ along $\hat{x}_e$ and $\sim 2\sigma_{\hat{y}}$ along $\hat{y}_e$.

If $\hat{x}_e$ & $\hat{y}_e$ are correlated, the error ellipse is tilted, but still fits within the same $\sim 2\sigma_{\hat{x}}$ by $\sim 2\sigma_{\hat{y}}$ box.

[Figure: axis-aligned ellipse vs. tilted ellipse, each spanning $\sim 2\sigma_{\hat{x}}$ by $\sim 2\sigma_{\hat{y}}$]

Choosing the k Value (Not In Book)

For the 2-D case:

$$k = -2\ln(1 - P_e)$$

where $P_e$ is the probability that the estimate will lie inside the ellipse.

See the posted paper by Torrieri.
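A quick Monte Carlo sanity check of this formula (my own sketch, with a hypothetical covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(2)

C = np.array([[2.0, 0.7],
              [0.7, 0.5]])
A = np.linalg.inv(C)

Pe = 0.9
k = -2 * np.log(1 - Pe)

# Fraction of zero-mean Gaussian samples with theta^T A theta <= k
samples = rng.multivariate_normal([0, 0], C, size=100_000)
inside = np.einsum('ni,ij,nj->n', samples, A, samples) <= k
print("target:", Pe, "empirical:", inside.mean())
```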

Ellipsoids and Eigen-Structure

Not In Book

Consider a symmetric matrix $\mathbf{A}$ & its quadratic form $\mathbf{x}^T\mathbf{A}\mathbf{x}$.

Ellipsoid: $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$, or $\langle\mathbf{A}\mathbf{x}, \mathbf{x}\rangle = k$

The principal axes of the ellipse are orthogonal to each other
and are orthogonal to the tangent line on the ellipse:

[Figure: ellipse in the $(x_1, x_2)$ plane with orthogonal principal axes]

Theorem: The principal axes of the ellipsoid $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$ are eigenvectors of the matrix $\mathbf{A}$.

Proof: From multi-dimensional calculus: the gradient of a scalar-valued function $\Phi(x_1, \ldots, x_n)$ is orthogonal to the surface $\Phi(\mathbf{x}) = \text{const}$:

$$\text{grad}\,\Phi(x_1, \ldots, x_n) = \nabla_{\mathbf{x}}\Phi(\mathbf{x}) = \frac{\partial\Phi(\mathbf{x})}{\partial\mathbf{x}} = \begin{bmatrix} \partial\Phi/\partial x_1 \\ \vdots \\ \partial\Phi/\partial x_n \end{bmatrix} \qquad \text{(different notations for the same thing)}$$

See the handout posted on Blackboard on Gradients and Derivatives.

For our quadratic-form function we have:

$$\Phi(\mathbf{x}) = \mathbf{x}^T\mathbf{A}\mathbf{x} = \sum_i\sum_j a_{ij}x_i x_j$$

so

$$\frac{\partial\Phi}{\partial x_k} = \sum_i\sum_j a_{ij}\frac{\partial(x_i x_j)}{\partial x_k} \qquad (\star)$$

Product rule:

$$\frac{\partial(x_i x_j)}{\partial x_k} = \underbrace{\frac{\partial x_i}{\partial x_k}}_{\delta_{ik}}x_j + x_i\underbrace{\frac{\partial x_j}{\partial x_k}}_{\delta_{jk}} \qquad (\star\star)$$

where

$$\delta_{ik} = \begin{cases} 1 & i = k \\ 0 & i \neq k \end{cases}$$

Using $(\star\star)$ in $(\star)$ gives:

$$\frac{\partial\Phi}{\partial x_k} = \sum_j a_{kj}x_j + \sum_i a_{ik}x_i = 2\sum_j a_{kj}x_j \qquad \text{(by symmetry: } a_{ik} = a_{ki}\text{)}$$

And from this we get:

$$\nabla_{\mathbf{x}}(\mathbf{x}^T\mathbf{A}\mathbf{x}) = 2\mathbf{A}\mathbf{x}$$
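A numerical spot check (my addition) of the gradient identity $\nabla_{\mathbf{x}}(\mathbf{x}^T\mathbf{A}\mathbf{x}) = 2\mathbf{A}\mathbf{x}$ using central finite differences:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random symmetric A and test point x
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2
x = rng.normal(size=4)

phi = lambda v: v @ A @ v

# Central finite-difference gradient
eps = 1e-6
grad_fd = np.array([
    (phi(x + eps * e) - phi(x - eps * e)) / (2 * eps)
    for e in np.eye(4)])

print(np.allclose(grad_fd, 2 * A @ x, atol=1e-5))  # True
```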

Since the gradient is $\perp$ to the ellipse, this says $\mathbf{A}\mathbf{x}$ is $\perp$ to the ellipse:

[Figure: ellipse $\langle\mathbf{A}\mathbf{x}, \mathbf{x}\rangle = k$ with $\mathbf{A}\mathbf{x}$ normal to the contour at a general point $\mathbf{x}$]

When $\mathbf{x}$ is a principal axis, then $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$ are aligned:

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$$

Eigenvectors are principal axes!!!

< End of Proof >

Theorem: The length of the principal axis associated with eigenvalue $\lambda_i$ is $\sqrt{k/\lambda_i}$.

Proof: If $\mathbf{x}$ is a principal axis, then $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$. Take the inner product of both sides of this with $\mathbf{x}$:

$$\underbrace{\langle\mathbf{A}\mathbf{x}, \mathbf{x}\rangle}_{=\,k} = \lambda\underbrace{\langle\mathbf{x}, \mathbf{x}\rangle}_{=\,\|\mathbf{x}\|^2} \;\Rightarrow\; \|\mathbf{x}\| = \sqrt{\frac{k}{\lambda}}$$

< End of Proof >

Note: This says that if $\mathbf{A}$ has a zero eigenvalue, then the error ellipse will have an infinite-length principal axis. NOT GOOD!!

So we'll require that all $\lambda_i > 0 \;\Leftrightarrow\; \mathbf{C}_{\hat{\boldsymbol{\theta}}}$ must be positive definite.
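A short numerical check of this theorem (my addition): scaling a unit eigenvector $\mathbf{v}_i$ of $\mathbf{A}$ by $\sqrt{k/\lambda_i}$ lands exactly on the contour $\mathbf{x}^T\mathbf{A}\mathbf{x} = k$:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
k = 3.0

lam, V = np.linalg.eigh(A)
for l, v in zip(lam, V.T):
    x = np.sqrt(k / l) * v          # point at the tip of a principal axis
    print(x @ A @ x)                # equals k for both eigenpairs
```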

Application of Eigen-Results to Error Ellipsoids


The error ellipsoid corresponding to the estimator covariance matrix $\mathbf{C}_{\hat{\boldsymbol{\theta}}}$ must satisfy:

$$\hat{\boldsymbol{\theta}}^T\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}\hat{\boldsymbol{\theta}} = k$$

Thus, finding the eigenvectors/eigenvalues of $\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}$ shows the structure of the error ellipse. (Note that the error ellipse is formed using the inverse covariance.)

Recall: a positive definite matrix $\mathbf{A}$ and its inverse $\mathbf{A}^{-1}$ have the same eigenvectors and reciprocal eigenvalues.

Thus, we could instead find the eigenvalues of $\mathbf{C}_{\hat{\boldsymbol{\theta}}} = \mathbf{I}^{-1}(\boldsymbol{\theta})$ (the inverse FIM!!), and then the principal axes would have lengths set by its eigenvalues, not inverted.
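A quick numerical check (my addition) of the same-eigenvectors/reciprocal-eigenvalues fact:

```python
import numpy as np

C = np.array([[2.0, 0.7],
              [0.7, 0.5]])

lam, V = np.linalg.eigh(C)
lam_inv, V_inv = np.linalg.eigh(np.linalg.inv(C))

# Reciprocal eigenvalues (eigh sorts ascending, so re-sort 1/lam)
print(np.allclose(np.sort(1 / lam), lam_inv))  # True
```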

Illustrate with the 2-D case: $\hat{\boldsymbol{\theta}}^T\mathbf{C}_{\hat{\boldsymbol{\theta}}}^{-1}\hat{\boldsymbol{\theta}} = k$

Let $\mathbf{v}_1$ & $\mathbf{v}_2$ and $\lambda_1$ & $\lambda_2$ be the eigenvectors/eigenvalues of $\mathbf{C}_{\hat{\boldsymbol{\theta}}}$ (not the inverse!).

[Figure: error ellipse with principal axes along $\mathbf{v}_1$ and $\mathbf{v}_2$, of lengths $\sqrt{k\lambda_1}$ and $\sqrt{k\lambda_2}$]

The CRLB/FIM Ellipse


Can make an ellipse from the CRLB matrix instead of the covariance matrix.
This ellipse will be the smallest error ellipse that an unbiased estimator can achieve!

We can re-state this in terms of the FIM. Once we find the FIM we can:
- Find the inverse FIM
- Find its eigenvectors, which give the principal axes
- Find its eigenvalues; the principal axis lengths are then $\sqrt{k\lambda_i}$

An end-to-end numerical sketch of these steps is given below.
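Putting the steps together (my addition, reusing the hypothetical linear model from earlier), from FIM to CRLB-ellipse axes:

```python
import numpy as np

# FIM for the earlier hypothetical linear model x = H theta + w
N, sigma2 = 50, 0.25
H = np.column_stack([np.ones(N), np.arange(N)])
FIM = H.T @ H / sigma2

# Step 1: inverse FIM = CRLB matrix
CRLB = np.linalg.inv(FIM)

# Steps 2-3: eigenvectors give the principal axes,
# eigenvalues give axis lengths sqrt(k * lambda_i)
Pe = 0.9
k = -2 * np.log(1 - Pe)          # 2-D concentration ellipse constant
lam, V = np.linalg.eigh(CRLB)
print("principal axes (columns):\n", V)
print("axis lengths:", np.sqrt(k * lam))
```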
