

Design of Schmidt-Kalman Filter for Target Tracking with Navigation Errors

Chun Yang, Sigtem Technology, Inc., San Mateo, CA 94402, chunynag@sigtem.com
Erik Blasch, Air Force Research Lab/RYAA, WPAFB, OH 45433, erik.blasch@wpafb.af.mil
Phil Douville, Air Force Research Lab/RYAA, WPAFB, OH 45433, philip.douville@wpafb.af.mil

Abstract—In most target tracking formulations, the tracking sensor location is typically assumed perfectly known. Without accounting for navigation errors of the sensor platform, regular Kalman filters tend to be optimistic (i.e., the covariance matrix falls far below the actual mean squared errors). In this paper, the Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors. The SKF does not estimate the navigation errors explicitly but rather takes into account (i.e., considers) the navigation error covariance provided by an on-board navigation unit in the tracking filter formulation. Including the navigation errors leads to the so-called "consider covariance." By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters. Monte Carlo simulation results are presented in the paper to demonstrate the operation and performance of the SKF for target tracking in the presence of navigation errors.

978-1-4244-3888-4/10/$25.00 ©2010 IEEE. IEEEAC paper #1081, Version 3, Updated January 5, 2010.

TABLE OF CONTENTS

1. INTRODUCTION
2. TARGET TRACKING FORMULATION
3. CONSIDER COVARIANCE IN TRACKING FILTER
4. RECURSIVE CALCULATION OF METRICS
5. SIMULATION RESULTS AND ANALYSIS
6. CONCLUSIONS
ACKNOWLEDGEMENT
REFERENCES
BIOGRAPHY

1. INTRODUCTION

Navigation errors are a performance-limiting factor for target tracking as ranging measurements increase in accuracy. In the past, a sensor platform was typically assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative location of a target for such applications as a homing missile. However, for other applications such as sensor data fusion or weapons cueing, the sensor/platform location accuracy is critical for high-speed maneuvering target tracking because the target's relative location has to be transferred into a global reference frame, and the sensor navigation errors get magnified in the target position solution. Tracking mission success therefore requires the use of such techniques as the Schmidt-Kalman filter (SKF) [9] to account for navigation errors.

Without accounting for navigation errors, regular Kalman filters tend to be optimistic in that the covariance matrix reaches a steady-state value (if it does not diverge) far below the actual mean squared errors. Heuristic techniques such as the α-β-γ filter are used to prevent divergence and boost the filter gain by adjusting the Q and R covariance parameters.

In [6, 7], it was shown that residual biases in radar measurements could be mitigated using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse, ground-based electronically scanned array (ESA) radars were considered. The range and off-boresight angles of a target measured in the radar frame are transformed into relative position measurements in a local east-north-up (ENU) frame. It was assumed that the measurement process is subject to frame transformation errors (in terms of Euler angles) and unknown but constant measurement errors (in terms of biases and scale factors) in addition to random noise. It was further assumed that a bias estimator was available to remove the angular and measurement biases. Furthermore, zero-mean Gaussian residual biases could be mitigated so long as the residual bias covariance matrix was provided by the bias estimator.

In this paper, the Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors. The Schmidt-Kalman filter does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes into account (i.e., considers) the navigation error covariance, provided by an on-board navigation unit, in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.

The paper is organized as follows. The conventional tracking filter is formulated in Section 2. The design of the SKF using the consider covariance in the presence of navigation errors is presented in Section 3. The recursive calculation of performance metrics in terms of root mean squared (RMS) errors and the root sample covariance trace (RSCT) is described in Section 4. Monte Carlo simulation results are presented in Section 5 to demonstrate the operation and performance of the SKF for target tracking in the presence of navigation errors. Finally, Section 6 concludes the paper with an outline of future work.

2. TARGET TRACKING FORMULATION

Consider a linear time-invariant discrete-time dynamic system together with its measurement as

x_{k+1} = A x_k + B u_k + w_k    (1a)

y_k = H x_k + v_k    (1b)

where the subscript k is the time index, x is the state vector, u is a known input, y is the measurement, and w and v are the state and measurement noise processes, respectively. It is implied that all vectors and matrices have compatible dimensions, which are omitted for simplicity.

The goal is to find an estimate, denoted by \hat{x}_k, of x_k given the measurements up to time k, denoted by Y_k = {y_0, ..., y_k}. Under the assumptions that the state and measurement noises are uncorrelated zero-mean white Gaussian with w ~ N{0, Q} and v ~ N{0, R}, where Q and R are positive semi-definite covariance matrices, the Kalman filter provides the optimal estimator in the form of \hat{x}_k = E{x_k | Y_k} [1, 2, 3, 4].

Starting from an initial estimate \hat{x}_0 = E{x_0} and its estimation error covariance P_0 = E{(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T}, where the superscript T stands for matrix transpose, the Kalman filter equations specify the propagation of \hat{x}_k and P_k over time as well as the update of \hat{x}_k and P_k by measurement y_k as

\bar{x}_{k+1} = A \hat{x}_k + B u_k    (2a)

\bar{P}_{k+1} = A P_k A^T + Q    (2b)

\hat{x}_{k+1} = \bar{x}_{k+1} + K_{k+1} (y_{k+1} - H \bar{x}_{k+1})    (2c)

P_{k+1} = (I - K_{k+1} H) \bar{P}_{k+1}    (2d)

K_{k+1} = \bar{P}_{k+1} H^T S_{k+1}^{-1}    (2e)

S_{k+1} = H \bar{P}_{k+1} H^T + R    (2f)

where \bar{x}_{k+1} and \bar{P}_{k+1} are the predicted state and prediction error covariance, respectively, and S_{k+1} is the measurement prediction error covariance.
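To make the recursion (2a)-(2f) concrete, the following minimal Python/NumPy sketch (illustrative only, not part of the original paper; the function and variable names are ours) implements one propagation step and one measurement update:

```python
import numpy as np

def kf_propagate(x_hat, P, A, B, u, Q):
    """Time propagation, Eqs. (2a)-(2b)."""
    x_bar = A @ x_hat + B @ u          # predicted state (2a)
    P_bar = A @ P @ A.T + Q            # prediction error covariance (2b)
    return x_bar, P_bar

def kf_update(x_bar, P_bar, y, H, R):
    """Measurement update, Eqs. (2c)-(2f)."""
    S = H @ P_bar @ H.T + R                        # innovation covariance (2f)
    K = P_bar @ H.T @ np.linalg.inv(S)             # Kalman gain (2e)
    x_hat = x_bar + K @ (y - H @ x_bar)            # updated state (2c)
    P = (np.eye(P_bar.shape[0]) - K @ H) @ P_bar   # updated covariance (2d)
    return x_hat, P
```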
In practice, however, the target measurements are nonlinear in nature, such as range and bearing. Consider a general nonlinear measurement taken from the sensor at x^s to a target at x^t as

z = f(x^t, x^s) + v    (3)

where f(·,·) is the nonlinear measurement equation and v is the measurement error, assumed to be zero-mean Gaussian, N(0, σ^2).

Assume that the sensor location x^s is known perfectly and that an initial estimate of the unknown target location, denoted by x_0, is available. The nonlinear measurement can be linearized around this initial estimate as

z ≈ f(x_0, x^s) + h^T (x^t - x_0) + v    (4a)

h^T = [∂z/∂x_1^t  ∂z/∂x_2^t  ...  ∂z/∂x_n^t] |_{x_0}    (4b)

In terms of the measurement prediction error, the equation can be further written in terms of x = x^t - x_0 as

\tilde{z} = z - f(x_0, x^s) = h^T x + v    (5)

For M measurements, the linearized measurements can be put into vector format as

\tilde{z} = H x + v    (6a)

\tilde{z}^T = [\tilde{z}_1  \tilde{z}_2  ...  \tilde{z}_M]    (6b)

v^T = [v_1  v_2  ...  v_M]    (6c)

H = \begin{bmatrix} h_1^T \\ \vdots \\ h_M^T \end{bmatrix}    (6d)

The use of the linearized measurements (6a) conforms to the linear formulation (1b), and its application to (2) leads to the extended Kalman filter (EKF) [1, 2, 3, 4].

In the two-dimensional setting with ranging measurements, let x and y be the coordinate axes. Then x^t = [x_t, y_t]^T and x^s = [x_s, y_s]^T. The nonlinear equation (3) for range is (ignoring the noise term)

r = f(x^t, x^s) = \sqrt{(x_t - x_s)^2 + (y_t - y_s)^2}    (7a)

The corresponding linearized equation around an initial estimate x_0 of x^t is

r ≈ f(x_0, x^s) + h^T (x^t - x_0)    (7b)
with

h = [(x_0 - x_s)/r,  (y_0 - y_s)/r]^T = [cos(θ), sin(θ)]^T    (7c)

where θ is the angle relative to the x-axis. Eq. (7c) affords an important interpretation: the observation matrix is made of the line-of-sight (LOS) vector from the sensor to the target [12].
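In code, this LOS interpretation amounts to normalizing the sensor-to-target displacement; a small illustrative sketch (ours, not from the paper) is:

```python
import numpy as np

def range_observation_row(x_target, x_sensor):
    # Eq. (7c): the range observation row h is the unit line-of-sight
    # (LOS) vector pointing from the sensor toward the target.
    d = np.asarray(x_target, dtype=float) - np.asarray(x_sensor, dtype=float)
    return d / np.linalg.norm(d)   # equals [cos(theta), sin(theta)] in 2-D
```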

3. CONSIDER COVARIANCE IN TRACKING FILTER

In the above formulation, the sensor location x^s is assumed to be known perfectly, which is approximately true when the ranging errors are much larger than the sensor positioning errors. However, when the ranging errors are comparable to the sensor positioning errors, it is necessary to account for the navigation errors explicitly to ensure tracking performance.

Assume that an onboard navigation system provides the sensor's location at \bar{x}^s, which is subject to an unknown instantaneous error δx^s with a location error covariance P^s. Further assume that the sensor provides range measurements given by

r = ||x^t - x^s||_2 + n_r    (8)

where x^t is the target location, ||·||_2 stands for the 2-norm, and n_r is the measurement noise, which is assumed to be of zero mean with a variance of σ^2.

Given the measured sensor location \bar{x}^s and the predicted target location \bar{x}^t, the predicted range is obtained as

\bar{r} = ||\bar{x}^t - \bar{x}^s||_2    (9)

The range equation is expanded around the measured sensor location and the predicted target location as

r = \bar{r} + (∂r/∂x^t)^T (x^t - \bar{x}^t) + (∂r/∂x^s)^T (x^s - \bar{x}^s) + n_r    (10a)

  = \bar{r} + (∂r/∂x^t)^T δx^t + (∂r/∂x^s)^T δx^s + n_r    (10b)

where δx^t is the target position error.

3.1 Without Sensor Position Errors

In the conventional EKF formulation as in (4a) and (7a), the sensor position errors are ignored. The resulting linearized measurement equation is

δr = r - \bar{r} = h_t^T δx^t + n_r    (11a)

h_t = ∂r/∂x^t    (11b)

3.2 With Sensor Position Errors

To account for the sensor positioning errors, consider three cases: zero mean, constant bias, and non-stationary errors.

Case 1: Zero Mean

A well-behaved navigation system provides a navigation solution whose error is uncorrelated (with itself over time and with the target state estimation error), with zero mean E{δx^s} = 0 and covariance P^s.

The concept of consider covariance as used in [5, 10] is the central idea of the Schmidt-Kalman filter. The consider covariance for the measurement equation (10) is given by

S = h_t^T P^t h_t + h_s^T P^s h_s + R    (12)

where P^t is the target state error covariance, P^s is the sensor location error covariance, and R is the measurement error covariance.
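For a single range measurement, (12) is a one-line computation; the sketch below (illustrative, with our own function name) shows how the sensor-position covariance P^s simply inflates the predicted measurement variance:

```python
import numpy as np

def consider_innovation_var(h_t, h_s, P_t, P_s, R):
    # Eq. (12): consider covariance for one scalar measurement.
    # h_t, h_s are the measurement gradients w.r.t. target and sensor states.
    return float(h_t @ P_t @ h_t + h_s @ P_s @ h_s + R)
```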
Case 2: Constant Bias

If the measurement exhibits a constant bias, the bias can be estimated directly when an external reference measurement is available. Alternatively, it can be estimated by a separate bias estimator. Any residual bias is accounted for by the consider covariance, as suggested in [6, 7].

To derive the Schmidt-Kalman filter, the bias is taken as an augmented state in the filter formulation for the state and covariance propagation. However, the update equations for the augmented state, as nuisance parameters, are discarded. More specifically, the augmented equations are given by

\begin{bmatrix} x_{k+1} \\ β_{k+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x_k \\ β_k \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k + \begin{bmatrix} w_k \\ 0 \end{bmatrix}    (13a)

y_k = [H_t  H_s] \begin{bmatrix} x_k \\ β_k \end{bmatrix} + v_k    (13b)

The bias β_k is not estimated, but its effect on x_k is accounted for through a guess about the bias in terms of its covariance matrix Cov{β_k} = P_k^s and its correlation with the target state estimate, defined as

P_k^{ts} = E{(x_k - \hat{x}_k) β_k^T},   P_0^{ts} = 0    (14)

The augmented state covariance matrix is given by
P_k = \begin{bmatrix} P_k^t & P_k^{ts} \\ (P_k^{ts})^T & P_k^s \end{bmatrix}    (15)

The Schmidt-Kalman filter is then obtained by applying the Kalman filter to the augmented equations (13) with the augmented covariance (15). The time propagation equations of the target state and its error covariance remain the same as (2a) and (2b). The time propagation of the bias covariance is the identity: P_{k+1}^s = P_k^s. The time propagation of the cross-correlation is given by

P_{k+1}^{ts} = A P_k^{ts}    (16)

The measurement update equation of the target state still has the same form as (2c), but the gain matrix (2e) is different, as given below:

K_{k+1} = (\bar{P}_{k+1}^t H_t^T + \bar{P}_{k+1}^{ts} H_s^T) S_{k+1}^{-1}    (17a)

where the predicted measurement error covariance S_{k+1} is given by

S_{k+1} = H_t \bar{P}_{k+1}^t H_t^T + H_s (\bar{P}_{k+1}^{ts})^T H_t^T + H_t \bar{P}_{k+1}^{ts} H_s^T + H_s P_{k+1}^s H_s^T + R_{k+1}    (17b)

However, the measurement update equation of the target state error covariance is given by

P_{k+1}^t = (I - K_{k+1} H_t) \bar{P}_{k+1}^t - K_{k+1} H_s (\bar{P}_{k+1}^{ts})^T    (18a)

The measurement update of the cross-correlation is given by

P_{k+1}^{ts} = \bar{P}_{k+1}^{ts} - K_{k+1} (H_t \bar{P}_{k+1}^{ts} + H_s P_{k+1}^s)    (18b)

From (17b), it can be seen that if there is no correlation, that is, P_k^{ts} = 0, it reduces to (12), which is a form of covariance inflation. This may occur when the sensor position is subject to zero-mean uncorrelated errors.

In general, however, even starting with the null initialization P_0^{ts} = 0 as in (14), the cross-correlation for a non-zero bias does not remain zero, as seen from (18b).
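A compact Python/NumPy sketch of this consider (Schmidt) propagation and update, following (16)-(18b), is given below. It is illustrative only (variable names and the use of a linearized predicted measurement H_t \bar{x} are our assumptions), not the authors' implementation:

```python
import numpy as np

def skf_propagate(x_hat, P_t, P_ts, A, B, u, Q):
    """Time propagation: (2a)-(2b) for the target state, (16) for the
    cross-correlation; the bias covariance P_s is carried along unchanged."""
    x_bar = A @ x_hat + B @ u
    P_t_bar = A @ P_t @ A.T + Q
    P_ts_bar = A @ P_ts
    return x_bar, P_t_bar, P_ts_bar

def skf_update(x_bar, P_t_bar, P_ts_bar, P_s, y, H_t, H_s, R):
    """Consider (Schmidt) measurement update, Eqs. (17a)-(18b)."""
    S = (H_t @ P_t_bar @ H_t.T + H_s @ P_ts_bar.T @ H_t.T
         + H_t @ P_ts_bar @ H_s.T + H_s @ P_s @ H_s.T + R)        # (17b)
    K = (P_t_bar @ H_t.T + P_ts_bar @ H_s.T) @ np.linalg.inv(S)   # (17a)
    x_hat = x_bar + K @ (y - H_t @ x_bar)                         # same form as (2c)
    I_n = np.eye(P_t_bar.shape[0])
    P_t = (I_n - K @ H_t) @ P_t_bar - K @ H_s @ P_ts_bar.T        # (18a)
    P_ts = P_ts_bar - K @ (H_t @ P_ts_bar + H_s @ P_s)            # (18b)
    return x_hat, P_t, P_ts
```

Note that only the target state is updated; the bias itself is never estimated, which is what distinguishes the consider update from a full augmented-state Kalman filter.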

Case 3: Non-Stationary Errors

For many practical navigation systems, however, the error characteristics of the navigation solution are non-stationary and correlated. This is mainly because most aided navigation systems utilize a Kalman filter to integrate an inertial navigation solution with various navaids such as GPS in order to curb the solution error growth while calibrating the inertial sensor errors.

For an unaided solution over short periods, the position error grows with time quadratically (~t^2) under accelerometer bias and cubically (~t^3) under gyro drift. The horizontal position errors oscillate with a Schuler period of 84.4 min (the Schuler frequency ω_s = \sqrt{g/R_0} ≈ 0.00124 rad/s) near the earth's surface, whereas the vertical position (height) errors grow unbounded.

Navaids such as GPS and a baro altimeter are employed to limit the navigation solution errors. The raw data rates of inertial sensors are typically 100 to 200 Hz, whereas the navaids are available at 10 Hz to 1 Hz or less frequently. Between two navaid updates, the errors grow. The updated error covariance therefore exhibits a sawtooth pattern, as exemplified in Figures 1 and 2 for the position and velocity components, respectively.

Fig. 1 Integrated Inertial: Position Errors

Fig. 2 Integrated Inertial: Velocity Errors

In the steady state, if the update rate of the target tracking algorithm is commensurate with that of the navaid update, a uniform quality of the integrated navigation solution can be expected. If the update interval of the target tracking algorithm is much longer than that of the navaid update, the navigation errors can be considered uncorrelated in time. Otherwise, variable quality can occur, as indicated by the navaid integration filter's covariance matrix P^s, and the navigation errors may be correlated if the target tracking update interval is comparable to or shorter than the navaid update interval.
update interval is comparable to or shorter than the navaid
update interval. 1 n
S n (k ) = ∑[xˆ i (k ) − m n (k )][xˆ i (k ) − m n (k )]T
n i =1
(23a)
To deal with variable and/or correlated errors, one way is to
introduce a first-order Gauss-Markov model for the bias, n −1 1
which is no longer constant but slow-varying. This aspect = S n −1 (k ) + [xˆ n (k ) − m n (k )][xˆ n (k ) − m n (k )]T
n n
will be investigated as part of our future work.
n −1 (23b)
4. RECURSIVE CALCULATION OF METRICS + [m n (k ) − m n −1 (k )][m n (k ) − m n −1 (k )]T
n
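For reference, a discrete-time first-order Gauss-Markov bias can be simulated as below (a minimal sketch under our own assumptions; the correlation time τ and steady-state standard deviation σ are not specified in the paper):

```python
import numpy as np

def gauss_markov_step(beta, tau, sigma, T, rng=None):
    # First-order Gauss-Markov model: beta_{k+1} = exp(-T/tau) * beta_k + w_k,
    # with w_k ~ N(0, sigma^2 * (1 - exp(-2*T/tau))) so that the bias keeps a
    # steady-state variance of sigma^2 rather than staying constant.
    if rng is None:
        rng = np.random.default_rng()
    phi = np.exp(-T / tau)
    q = sigma**2 * (1.0 - phi**2)
    return phi * beta + rng.normal(0.0, np.sqrt(q), size=np.shape(beta))
```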
4. RECURSIVE CALCULATION OF METRICS

In Section 5, Monte Carlo simulation will be used to evaluate the performance of the Schmidt-Kalman filter versus a conventional Kalman filter in the presence of navigation errors. The performance metrics will be the root mean squared (RMS) errors when the state truth is known and the root sample covariance trace (RSCT) when the state truth is not known.

Assume that a vector quantity x(k) ~ N(μ, Σ) at time k is estimated N times as in a Monte Carlo simulation. The estimate is denoted by \hat{x}_i(k) for i = 1, ..., N. There are two ways to characterize the estimation errors. One way is to consider the sample mean and sample covariance defined as

m_N(k) = (1/N) \sum_{i=1}^{N} \hat{x}_i(k)    (19a)

S_N(k) = (1/N) \sum_{i=1}^{N} [\hat{x}_i(k) - m_N(k)][\hat{x}_i(k) - m_N(k)]^T    (19b)

If the denominator in (19b) is (N-1), the subsequent equations need to change accordingly. However, the difference is small for large N.

The sample covariance S_N(k) does not tell how accurate the estimates are (i.e., how far away they are from the true value) but rather provides an indication of the spread of the estimates around the sample mean m_N(k) (i.e., their precision). Consider the bias in the estimates,

b_N(k) = m_N(k) - x(k)    (20)

The ellipses drawn with {m_N(k), S_N(k)} are often called confidence ellipses. The statistic

T^2 = N (m_N - μ)^T S_N^{-1} (m_N - μ)    (21)

follows the so-called Hotelling's T^2 distribution with parameters p and N-1 [8], where p is the dimension of x.

It can be easily shown that the sample mean and covariance can be calculated recursively inside a Monte Carlo simulation loop as

m_n(k) = (1/n) \sum_{i=1}^{n} \hat{x}_i(k) = \frac{n-1}{n} m_{n-1}(k) + \frac{1}{n} \hat{x}_n(k)    (22)

S_n(k) = (1/n) \sum_{i=1}^{n} [\hat{x}_i(k) - m_n(k)][\hat{x}_i(k) - m_n(k)]^T    (23a)

= \frac{n-1}{n} S_{n-1}(k) + \frac{1}{n} [\hat{x}_n(k) - m_n(k)][\hat{x}_n(k) - m_n(k)]^T + \frac{n-1}{n} [m_n(k) - m_{n-1}(k)][m_n(k) - m_{n-1}(k)]^T    (23b)

In contrast to (19), instead of using the sample mean, another widely used covariance is calculated with respect to the true vector x(k), if available, as

P_N(k) = (1/N) \sum_{i=1}^{N} [\hat{x}_i(k) - x(k)][\hat{x}_i(k) - x(k)]^T    (24)

The square root of the trace of P_N is the so-called root mean squared (RMS) error, defined as

RMS_N(k) = \sqrt{trace{P_N(k)}} = \sqrt{(1/N) \sum_{i=1}^{N} [\hat{x}_i(k) - x(k)]^T [\hat{x}_i(k) - x(k)]}    (25)

In calculating (24) and (25), the truth x(k) is assumed to be known, which is the preferred choice for performance evaluation in simulation. Without modeling errors, the Kalman filter provides the estimation error covariance as defined in (24). The error covariance is frequently used to validate a Kalman filter implementation and as a performance indicator. The ellipses drawn with {x(k), P_N(k)} are called error ellipses.

Similarly, inside a Monte Carlo simulation loop, the RMS errors can be calculated recursively as

P_n(k) = \frac{n-1}{n} P_{n-1}(k) + \frac{1}{n} [\hat{x}_n(k) - x(k)][\hat{x}_n(k) - x(k)]^T    (26)

RMS_n^2(k) = \frac{n-1}{n} RMS_{n-1}^2(k) + \frac{1}{n} [\hat{x}_n(k) - x(k)]^T [\hat{x}_n(k) - x(k)]    (27)

It is easy to show that the estimation error covariance P_N(k) is related to the sample covariance S_N(k) and the bias vector b_N(k) by

P_N(k) = S_N(k) + b_N(k) b_N(k)^T    (28)

Without knowing the truth x(k), the bias vector cannot be pre-calculated using (20). As a result, the sample mean m_N(k) and sample covariance S_N(k) alone cannot indicate the estimation accuracy. With the truth x(k), the two methods can be linked to each other via (28) and are thus equivalent.
Instead of calculating the sample covariance itself, its trace may be evaluated, leading to the root sample covariance trace (RSCT), a scalar quantity similar to the RMS error:

RSCT_n(k) = \sqrt{trace{S_n(k)}}    (29a)

RSCT_n^2(k) = \frac{n-1}{n} RSCT_{n-1}^2(k) + \frac{1}{n} [\hat{x}_n(k) - m_n(k)]^T [\hat{x}_n(k) - m_n(k)] + \frac{n-1}{n} [m_n(k) - m_{n-1}(k)]^T [m_n(k) - m_{n-1}(k)]    (29b)
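The recursions (22), (23b), (26)-(27), and (29a)-(29b) map directly to a few lines of Python/NumPy; the sketch below (ours, for illustration) updates the running metrics after the n-th Monte Carlo run:

```python
import numpy as np

def update_sample_stats(m_prev, S_prev, x_hat_n, n):
    """Recursive sample mean (22) and sample covariance (23b)."""
    m_n = (n - 1) / n * m_prev + x_hat_n / n
    d = (x_hat_n - m_n).reshape(-1, 1)
    dm = (m_n - m_prev).reshape(-1, 1)
    S_n = (n - 1) / n * S_prev + d @ d.T / n + (n - 1) / n * dm @ dm.T
    return m_n, S_n

def update_rms2(rms2_prev, x_hat_n, x_true, n):
    """Recursive truth-centered squared RMS error, Eq. (27)."""
    e = x_hat_n - x_true
    return (n - 1) / n * rms2_prev + (e @ e) / n

# RSCT then follows from the recursively updated sample covariance,
# Eq. (29a): RSCT_n = np.sqrt(np.trace(S_n)).
```

Per (28), a simple consistency check is that trace{S_n} plus the squared norm of the bias b_n should equal the squared truth-centered RMS.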

5. SIMULATION RESULTS AND ANALYSIS

Three simulation examples are presented in this section, wherein a more general case with both range and range rate measurements is considered:

ρ = \sqrt{x^2 + y^2}    (30a)

\dot{ρ} = (x \dot{x} + y \dot{y}) / ρ    (30b)

Taking partial derivatives of the range and range rate measurements (30a) and (30b) with respect to x^s and x^t, and retaining only the linear components, yields the observation matrices H_s and H_t, which assume the same form. Without the subscript, it is given by

H = \begin{bmatrix} ∂ρ/∂x & ∂ρ/∂y & ∂ρ/∂\dot{x} & ∂ρ/∂\dot{y} \\ ∂\dot{ρ}/∂x & ∂\dot{ρ}/∂y & ∂\dot{ρ}/∂\dot{x} & ∂\dot{ρ}/∂\dot{y} \end{bmatrix}    (31)

where

∂ρ/∂x = x/ρ    (32a)

∂ρ/∂y = y/ρ    (32b)

∂ρ/∂\dot{x} = 0    (32c)

∂ρ/∂\dot{y} = 0    (32d)

∂\dot{ρ}/∂x = \dot{x}/ρ - (x\dot{x} + y\dot{y}) x / ρ^3    (32e)

∂\dot{ρ}/∂y = \dot{y}/ρ - (x\dot{x} + y\dot{y}) y / ρ^3    (32f)

∂\dot{ρ}/∂\dot{x} = x/ρ    (32g)

∂\dot{ρ}/∂\dot{y} = y/ρ    (32h)
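These partial derivatives can be packaged into a small helper; the sketch below (our illustration, not the paper's code) returns H with the columns ordered as listed in (31), i.e., [x, y, ẋ, ẏ], so the columns must be permuted if the state is ordered as in (33)-(34):

```python
import numpy as np

def range_rangerate_jacobian(x, y, vx, vy):
    """Observation matrix of Eqs. (31)-(32) for relative position (x, y)
    and relative velocity (vx, vy); columns ordered [x, y, vx, vy]."""
    rho = np.hypot(x, y)                  # range, Eq. (30a)
    rdot = (x * vx + y * vy) / rho        # range rate, Eq. (30b)
    return np.array([
        [x / rho,                      y / rho,                      0.0,     0.0    ],
        [vx / rho - rdot * x / rho**2, vy / rho - rdot * y / rho**2, x / rho, y / rho],
    ])
```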
5.1 Simulation with Comparable Navigation Errors

To analyze the effects of sensor location errors (navigation errors) on targeting accuracy, consider a sensor-target crossing scenario as shown in Figure 3. The sensor moves along the x-axis from (-10000 m, 0) at a constant speed of 100 m/s while the target moves along the y-axis from (0, 20000 m) at a constant speed of -12 m/s.

The initial states are

x^s = [x  \dot{x}  y  \dot{y}]^T = [-10000 m  100 m/s  0 m  0 m/s]^T    (33a)

x^t = [x  \dot{x}  y  \dot{y}]^T = [0 m  0 m/s  20000 m  -12 m/s]^T    (33b)

The sensor provides range and range rate measurements with measurement error variances of 10^2 m^2 and 0.1^2 (m/s)^2, respectively.

The sensor navigation errors are assumed to be comparable to the measurement errors, with variances of 10^2 m^2 and 0.1^2 (m/s)^2, respectively. This represents a rather extreme case chosen to emphasize the effects.

A regular extended Kalman filter (EKF) is implemented based on the following discrete-time second-order kinematic model (nearly constant velocity)

x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^2/2 & 0 \\ T & 0 \\ 0 & T^2/2 \\ 0 & T \end{bmatrix} w_k    (34)

where the process noise w ~ N(0, Q) is zero-mean Gaussian. The covariance matrix Q = diag([σ_x^2  σ_y^2]) uses the particular values σ_x = σ_y = 0.1 m/s^2 for both the x- and y-axes in the first simulation.

The initial state estimate is drawn from the initial estimation error covariance of 100^2 m^2 and 5^2 (m/s)^2, respectively, for the position and velocity components in the x- and y-axes:

P_0 = diag([100^2  5^2  100^2  5^2])    (35)
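For reference, the model matrices of (34)-(35) and the stated noise levels can be written out as below (a sketch; the update interval T = 2 s is inferred from the figure axes and is our assumption):

```python
import numpy as np

T = 2.0                                    # s, tracking update interval (dt = 2 s in the figures)
F = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)  # state transition matrix in (34)
G = np.array([[T**2 / 2, 0],
              [T,        0],
              [0, T**2 / 2],
              [0,        T]], dtype=float) # process-noise input mapping in (34)
sigma_a = 0.1                              # m/s^2, first simulation
Q = np.diag([sigma_a**2, sigma_a**2])      # acceleration noise covariance
Qd = G @ Q @ G.T                           # equivalent process noise on the full state
R = np.diag([10.0**2, 0.1**2])             # range and range-rate measurement variances
P0 = np.diag([100.0**2, 5.0**2, 100.0**2, 5.0**2])   # Eq. (35)
```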
Under the same initial conditions, a Schmidt-EKF is implemented to calculate the predicted measurement covariance. One hundred Monte Carlo runs are used to calculate the root mean squared (RMS) errors.

Figure 4 shows the details of the sample sensor trajectory with the blue-colored truth (dashed line) and the green-colored navigation solution (solid line).

Figure 5 shows the details of the sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).

Figure 6 is another sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).

Figure 7 shows the RMS errors and standard deviations for the position estimate produced by the regular EKF and the consider EKF (the term consider EKF, or consider filter, is used interchangeably with Schmidt-EKF), respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Additionally, the consider filter provides a better performance bound than the optimistic regular EKF.

Figure 8 shows the RMS errors and standard deviations for the velocity estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

The discrepancy between the RMS errors and standard deviations of the consider EKF shown in Figures 7 and 8 may be explained by the fact that the standard deviations are calculated from the propagated and updated covariances with a rather large process noise covariance Q, whereas both the target and the sensor are moving at constant speed.

5.2 Simulation with Different Process Noise

In the next simulation, the process noise covariance Q is reduced to 0.001^2 (m/s^2)^2. The sensor navigation errors are also reduced, to 5^2 m^2 and 0.05^2 (m/s)^2 (twice better).

Compared to Figures 7 and 8, the errors in Figures 9 and 10 are all smaller due to the use of smaller Q and P^s. More importantly, the standard deviations calculated from the propagated and updated covariances of the consider EKF (CEKF) now match well with the actual RMS errors. The CEKF-to-RMS error match indicates that the consider covariance can better predict the targeting performance for target tracking, sensor fusion, and resource management.

Comparing Figures 9 and 10, one may ask why the RMS position errors of the regular EKF and the consider EKF differ so much in Figure 9, whereas their RMS velocity errors fairly agree in Figure 10. Note that in the simulation, navigation errors are introduced only in the sensor position, not in the sensor velocity, and the sensor position errors are translated directly into the target position errors. Typically the velocity components of an integrated navigation solution are more accurate than the position counterparts, although significant sensor velocity errors could be added in the simulation. Furthermore, the measurements used by both filters include both range and range rate as in (30). The range rate measurements, subject only to random noise, help produce the relatively good velocity estimates shown in Figure 10.

In the past, heuristic approaches have been used to adjust Q and R of a Kalman filter to prevent divergence due to unestimated error terms (which are totally ignored) and to match the filter's covariance matrix with the data-driven RMS. Such "blind" adjustments do not exploit the inherent structure of the actual errors. In contrast, even without estimating the error terms, the consider filter explicitly takes into account the distribution and structure of the errors, thus providing better results.

5.3 Evaluation with Unknown Truth

In the following simulation, the sample mean and covariance are calculated by the Monte Carlo simulation without knowing the target state truth. The estimated-mean-centered RMS is calculated, which is the root sample covariance trace (RSCT) defined in Section 4.

Note that the estimated-mean-centered RMS has a different meaning than the truth-centered RMS. It does not indicate how close the estimates are to the truth but how dispersed the estimates are. It may reveal how consistent the estimates are, but the estimates may be biased. So two aspects need to be shown: one is the bias and the other is the spread around the bias. If the target state estimate is unbiased, the sample statistics are close to the RMS error values.
Fig. 3 Sensor-Target Crossing Scenario

Fig. 4 Sample Sensor Trajectory

Fig. 5 Sample Target Trajectory (Run 1)

Fig. 6 Sample Target Trajectory (Run 2)

Fig. 7 Comparison of Position Errors (position errors per axis, m; P0 ~ 100, 5; Q ~ 0.1, 0.1; R ~ 10, 0.1; Ps ~ 10, 0.1; mc = 100; dt = 2 s)

Fig. 8 Comparison of Velocity Errors (velocity errors per axis, m/s; same parameters as Fig. 7)
Fig. 9 Position Errors with Small Q (position errors per axis, m; P0 ~ 100, 5; Q ~ 0.001, 0.001; R ~ 10, 0.1; Ps ~ 5, 0.05; mc = 50; dt = 2 s)

Fig. 10 Velocity Errors with Small Q (velocity errors per axis, m/s; same parameters as Fig. 9)

Fig. 11 Truth-Centered Position Errors

Fig. 12 Truth-Centered Velocity Errors

Figure 11 shows the RMS errors and standard deviations for the position estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Besides, the consider filter provides a better performance bound than the regular EKF, which is too optimistic.

Figure 12 shows the RMS errors and standard deviations for the velocity estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

Figure 13 shows the position biases (i.e., the difference between the sample mean and the state truth) of the regular Kalman filter and the consider Kalman filter. The CKF biases are comparable to, but slightly smaller than, the regular Kalman filter biases.

Figure 14 shows the velocity biases (i.e., the difference between the sample mean and the state truth) of the regular Kalman filter and the consider Kalman filter, which are comparable.

Figure 15 shows the position errors in terms of the sample mean-centered RMS error values, or the root sample covariance trace (RSCT), of the regular Kalman filter (RKF) and the consider or Schmidt-Kalman filter (CKF).
Fig. 13 Position Bias (position errors per axis, m; mc = 100; dt = 2 s)

Fig. 14 Velocity Bias (velocity errors per axis, m/s; mc = 100; dt = 2 s)

Fig. 15 Sample Mean-Centered Position Errors (mc = 100)

Fig. 16 Sample Mean-Centered Velocity Errors (mc = 100)

Fig. 17 Predicted Range Errors (CEKF anticipates navigation errors whereas REKF is optimistic)

Fig. 18 Predicted Range Rate Errors (CEKF anticipates navigation errors whereas REKF is optimistic)
From Figure 15, it is clear that the CKF is much better than the RKF. Because of the small position bias shown in Figure 13, the sample mean-centered RMS, or root sample covariance trace (RSCT), of Figure 15 is expected to be close to the truth-centered RMS shown in Figure 11. This is indeed true.

Similarly, Figure 16 shows the velocity errors in terms of the sample mean-centered RMS error values, or the root sample covariance trace (RSCT), of the regular Kalman filter and the consider Kalman filter. The consider Kalman filter outperforms the regular Kalman filter.

Because of the small velocity bias shown in Figure 14, the sample mean-centered RMS, or root sample covariance trace (RSCT), of Figure 16 is very close to the truth-centered RMS shown in Figure 12.

Finally, a comparison of (12) (and (17b)) to (2f) indicates an extra time-varying term, which is a function of the underlying navigation error covariance P^s and the observation matrix H_s (and extra time-varying terms, which are functions of the cross-correlation P_k^{ts} and the observation matrices H_t and H_s). Note that H_s is geometry-dependent, and so is H_t.

Figures 17 and 18 show the two diagonal elements (square roots) of the predicted measurement error covariance. The regular EKF predicted measurement errors are small; they cannot describe the large errors due to the navigation errors, and the filter is therefore optimistic. In contrast, the consider EKF faithfully produces large predicted errors because it anticipates the navigation errors. The net effect of incorporating navigation errors in the tracking filter is a time-varying, adaptive adjustment of the measurement error covariance.

6. CONCLUSIONS

In this paper, we investigated the effect of sensor location errors (navigation errors) on target tracking performance. Without accounting for such errors, a regular Kalman filter produces optimistic results. In contrast, the Schmidt-Kalman filter (also known as the consider Kalman filter), which incorporates the navigation error covariance, was shown not only to be more consistent but also to produce smaller estimation errors. One aspect of our ongoing work is to extend the formulation to maneuvering targets using such algorithms as the interacting multiple model (IMM) estimator. Another effort is to apply the consider covariance to track-to-track fusion. Yet another effort is to develop a track bias calibration and removal algorithm. Finally, we will investigate the proactive approach to sensor management with consider covariance in the presence of navigation errors [13].

ACKNOWLEDGEMENT

This research was supported in part under Contract No. FA8650-08-C-1407, which is greatly appreciated.

REFERENCES

[1] Y. Bar-Shalom and X.R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing, Storrs, CT, 1995.
[2] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House, Boston, 1999.
[3] A. Farina and F.A. Studer, Radar Data Processing (Vol. I & II), Wiley, New York, 1985.
[4] P. Maybeck, Stochastic Models, Estimation, and Control, Volume 1, Academic Press, 1979.
[5] O. Montenbruck and E. Gill, "Section 8.1.4: Consider Parameters," in Satellite Orbits: Models, Methods, and Applications, Springer, 2000.
[6] R.Y. Novoselov, S.M. Herman, S.M. Gadaleta, and A.B. Poore, "Mitigating the Effects of Residual Biases with Schmidt-Kalman Filtering," Proc. of Fusion 2005, July 2005.
[7] R. Paffenroth, R. Novoselov, S. Danford, M. Teixeira, S. Chan, and A. Poore, "Mitigation of Biases Using the Schmidt-Kalman Filter," Proc. SPIE, Vol. 6699, 2007.
[8] L.R. Paradowski, "Uncertainty Ellipses and Their Application to Interval Estimation of Emitter Position," IEEE Trans. on Aerospace and Electronic Systems, 33(1), January 1997, pp. 126-133.
[9] S. Schmidt, "Applications of State-Space Methods to Navigation Problems," in Advances in Control Systems, Vol. 6, C. Leondes (Ed.), Academic Press, New York, 1966, pp. 293-340.
[10] B.D. Tapley, B.E. Schutz, and G.H. Born, "Chapter 6: Consider Covariance Analysis," in Statistical Orbit Determination, Elsevier, Burlington, MA, 2004.
[11] C. Yang, Performance Monitoring and Prediction for Active Management of Distributed Sensors Fusion in Target Tracking, Technical Report (FA8650-08-C-1407), November 2008.
[12] C. Yang, E. Blasch, and I. Kadar, "Geometric Factor in Target Positioning and Tracking," Fusion 2009, Seattle, WA, July 2009.
[13] C. Yang, M. Miller, E. Blasch, and T. Nguyen, "Proactive Radio Navigation and Target Tracking," ION-GNSS 2009, Savannah, GA, September 2009.
BIOGRAPHY

Chun Yang received his Bachelor of Engineering from Northeastern University, Shenyang, China, in 1984 and his title of Docteur en Science from Université de Paris, Orsay, France, in 1989. After two years of postdoctoral research at the University of Connecticut, Storrs, CT, he moved on to his industrial R&D career. Since 1994, he has been with Sigtem Technology, Inc., working on adaptive array and baseband signal processing for GNSS receivers and radar systems as well as on nonlinear state estimation with applications in target tracking, integrated inertial navigation, and information fusion. Dr. Yang is an Adjunct Professor of Electrical and Computer Engineering at Miami University. He is a member of the ION, IEEE, ISIF, and SPIE.

Erik Blasch is a Fusion Evaluation Tech Lead for the Assessment & Integration Branch, Sensors Directorate, Air Force Research Laboratory, WPAFB, OH. He is an adjunct professor at Wright State University (WSU), the University of Dayton, and AFIT, and a Reserve Major at the Air Force Office of Scientific Research (AFOSR). He received his Ph.D. in Electrical Engineering from WSU in 1999, an MSEE from WSU in 1997, an MS in Mechanical Engineering (1994) and an MS in Industrial Engineering (1995) from Georgia Tech, and a BSME from MIT in 1992, among other advanced degrees in engineering, health science, economics, and business administration. He was the President of the International Society of Information Fusion (ISIF), is an SPIE Fellow, and is active in IEEE, AIAA, and ION. His research interests include target tracking, sensor and information fusion, automatic target recognition, biologically-inspired robotics, and controls.

Philip Lavern Douville graduated from the University of Detroit with a B.E.E. in 1972, the Air Force Institute of Technology with an M.S.E.E. in 1981, and the University of Dayton with a Ph.D. in Electrical Engineering in 1996. At the Air Force Research Laboratory (AFRL), he is currently working in the Sensors ATR Technology Division (RYA) of the Sensors Directorate (RY).
