
This article was downloaded by: [University of South Florida] on 21 September 2011, at 14:21. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954; registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

International Journal of Control
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tcon20

Kalman filters for non-linear systems: a comparison of performance
Tine Lefebvre, Herman Bruyninckx & Joris De Schutter

Katholieke Universiteit Leuven, Dept. of Mechanical Eng., Division P.M.A., Celestijnenlaan 300B, B-3001 Heverlee, Belgium. E-mail: Tine.Lefebvre@mech.kuleuven.ac.be

Available online: 19 Feb 2007

To cite this article: Tine Lefebvre, Herman Bruyninckx & Joris De Schutter (2004): Kalman filters for non-linear systems: a comparison of performance, International Journal of Control, 77:7, 639-653. To link to this article: http://dx.doi.org/10.1080/00207170410001704998


INT. J. CONTROL, 10 MAY 2004, VOL. 77, NO. 7, 639-653

Kalman filters for non-linear systems: a comparison of performance

TINE LEFEBVRE†*, HERMAN BRUYNINCKX† and JORIS DE SCHUTTER†

Received 1 September 2003. Revised and accepted 1 April 2004.

* Author for correspondence, e-mail: Tine.Lefebvre@mech.kuleuven.ac.be
† Katholieke Universiteit Leuven, Dept. of Mechanical Eng., Division P.M.A., Celestijnenlaan 300B, B-3001 Heverlee, Belgium.

International Journal of Control ISSN 0020-7179 print/ISSN 1366-5820 online © 2004 Taylor & Francis Ltd, http://www.tandf.co.uk/journals, DOI: 10.1080/00207170410001704998

The Kalman filter is a well-known recursive state estimator for linear systems. In practice the algorithm is often used for non-linear systems by linearizing the system's process and measurement models. Different ways of linearizing the models lead to different filters. In some applications these 'Kalman filter variants' seem to perform well, while for other applications they are useless. When choosing a filter for a new application, the literature gives us little to rely on. This paper tries to bridge the gap between the theoretical derivation of a Kalman filter variant and its performance in practice when applied to a non-linear system, by providing an application-independent analysis of the performances of the common Kalman filter variants. This paper separates performance evaluation of Kalman filters into (i) consistency and (ii) information content of the estimates, and it separates the filter structure into (i) the process update step and (ii) the measurement update step. This paper relates the consistency and information content of the estimates to (i) how the linearization is performed and (ii) how the linearization errors are taken into account. This decomposition provides the insights supporting an objective and systematic evaluation of the appropriateness of a particular Kalman filter variant in a particular application.

1. Introduction

During recent decades, many research areas looked into the matter of on-line state estimation. In Bayesian estimation (Bayes 1763, Laplace 1812), a state estimate is represented by a probability density function (pdf). The uncertainty on the state value varies over time due to the changes in the system state (the process updates) and due to the information in the measurements (the measurement updates). The uncertainty can be represented in different ways, e.g. by intervals or fuzzy sets. In order to have a computationally interesting update algorithm, fast analytical update algorithms are needed; these require the pdf to be an analytical function of a limited number of time-varying parameters, which is only true for some systems. A well-known example is systems with linear process and measurement models and with additive Gaussian uncertainties. The pdf is then a Gaussian distribution, which is fully determined by its mean vector and covariance matrix. This mean and covariance are updated analytically with the Kalman filter (KF) algorithm (Kalman 1960, Sorenson 1985).

For most non-linear systems the pdf cannot be written as an analytical function with time-varying parameters. In practice the KF is used as an approximation. This is achieved by linearization of the process and measurement models of the system; it also means that the true pdf is approximated by a Gaussian distribution. Different ways of linearization (different KF variants) lead to different results. The studied algorithms are:

1. The extended Kalman filter (EKF) (Gelb et al. 1974, Maybeck 1982, Bar-Shalom and Li 1993, Tanizaki 1996).
2. The iterated extended Kalman filter (IEKF) (Gelb et al. 1974, Bar-Shalom and Li 1993, Tanizaki 1996).
3. The linear regression Kalman filter (LRKF) (Lefebvre et al. 2002). This filter comprises the central difference filter (CDF) (Schei 1997), the first-order divided difference filter (DD1) (Nørgaard et al. 2000) and the unscented Kalman filter (UKF) (Uhlmann 1995, Julier and Uhlmann 1996, Julier et al. 2000 a, b).

The quality of the estimates of the KF variants can be expressed by two criteria, i.e. the consistency and the information content of the estimates (defined in § 3). This paper describes (i) how the common KF variants differ in their linearization of the process and measurement models, (ii) how they take the linearization errors into account and (iii) how the quality of their state estimates depends on the previous two choices. The paper gives the following new insights:

1. This paper describes all filter algorithms as the application of the basic KF algorithm to linearized process and measurement models. The difference between the KF variants is situated in the choice of linearization and the compensation for the linearization errors. In the existing literature the UKF is originally derived as a filter which does not linearize the models; in previous work (Lefebvre et al. 2001) this linearization was not always recognized.
2. The paper compares the filter performances separately for process updates and measurement updates instead of their overall performance when both are combined. Although the filters use similar linearization techniques for the linearization of the process and measurement models, there can be a substantial difference in their performance for both updates: (a) for the linearization of the process update (§ 4) the state estimate and its uncertainty are the only available information; (b) for the measurement update (§ 5), on the other hand, also the new measurement is available and can be used to linearize the measurement model. Hence, it can be interesting to use different filters for both updates.
3. The analysis starts from a general formulation of non-linear process and measurement models, making the results application independent. Additionally, the analysis clarifies how some filters automatically adapt their compensation for linearization errors, while other filters have constant (developer-determined) error compensation. Examples of 2D systems are provided; the models are chosen such that they provide a clear graphical demonstration of the discussed effects. Because the filters' performances are system dependent, in some cases several models are used for illustration.

Two new insights on the performance of specific KF variants are: (i) the IEKF measurement update outperforms the EKF and LRKF updates if the state (or at least the part of it that causes the non-linearity in the measurement model) is instantaneously fully observable (§ 5.2); and (ii) for large uncertainties on the state estimate, the LRKF measurement update yields consistent but non-informative state estimates (§ 5.3). The obtained insights are important for all researchers and developers who want to apply a KF variant to a specific application. They lead to a systematic choice of a filter, where previously the choice was mainly made based on success in similar applications or based on trial and error.

2. The (linear) Kalman filter

The Kalman filter (KF) (Kalman 1960, Maybeck 1982, Sorenson 1985) is a special case of Bayesian filtering theory. It applies to the estimation of a state x if the state space description of the estimation problem has linear process and measurement equations subject to additive Gaussian uncertainty:

    x_k = F_{k-1} x_{k-1} + b_{k-1} + C_{k-1} q_{p,k-1}    (1)
    z_k = H_k x_k + d_k + E_k q_{m,k}    (2)

Equation (1) describes the evolution of the state; z is the measurement vector. q_p denotes the process uncertainty, being a random vector sequence with zero mean and known covariance matrix Q; q_m is the measurement uncertainty and is a random vector sequence with zero mean and known covariance matrix R. q_p and q_m are mutually uncorrelated and uncorrelated between sampling times†. F, b, C, H, d and E are (possibly non-linear) functions of the system input. The subscripts k and k-1 indicate the time step.

2.1. The Kalman filter algorithm

For this system, assume a Gaussian prior pdf p(x_0) with mean x̂_{0|0} and covariance matrix P_{0|0}; this is the original formulation of the KF (Kalman 1960). Hence, the pdfs‡ p(x_k | Z^{k-1}) and p(x_k | Z^k) are also Gaussian distributions. The filtering formulas can be expressed as analytical functions calculating the mean vector x̂ and covariance matrix P of these pdfs:

    x̂_{k|k-1} = E_{p(x_k|Z^{k-1})}[x_k]
              = F_{k-1} x̂_{k-1|k-1} + b_{k-1}    (3)
    P_{k|k-1} = E_{p(x_k|Z^{k-1})}[(x_k - x̂_{k|k-1})(x_k - x̂_{k|k-1})^T]
              = F_{k-1} P_{k-1|k-1} F_{k-1}^T + C_{k-1} Q_{k-1} C_{k-1}^T    (4)

† Correlated uncertainties can be dealt with by augmenting the state vector. Expressed in this new state vector, the process and measurement models are of the form (1) and (2) with uncorrelated uncertainties.
‡ p(x_k | Z^j) denotes the pdf of the state x at time k, given the measurements Z^j = {z_1, ..., z_j} up to time j.
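For concreteness, the process update (3)-(4) and the measurement update (5)-(9) given below can be summarized in a short sketch. Python/NumPy is our assumption here (the paper contains no code); the variable names simply mirror the paper's symbols.

```python
import numpy as np

def kf_process_update(x, P, F, b, C, Q):
    # Process update, equations (3)-(4)
    x_pred = F @ x + b
    P_pred = F @ P @ F.T + C @ Q @ C.T
    return x_pred, P_pred

def kf_measurement_update(x_pred, P_pred, z, H, d, E, R):
    # Measurement update, equations (5)-(9)
    gamma = z - H @ x_pred - d                       # innovation, (7)
    S = E @ R @ E.T + H @ P_pred @ H.T               # innovation covariance, (8)
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain, (9)
    x_upd = x_pred + K @ gamma                       # (5)
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred   # (6)
    return x_upd, P_upd
```

If no measurement is available at time step k, the measurement update is simply skipped, as noted in the text.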

    x̂_{k|k} = E_{p(x_k|Z^k)}[x_k]
            = x̂_{k|k-1} + K_k γ_k    (5)
    P_{k|k} = E_{p(x_k|Z^k)}[(x_k - x̂_{k|k})(x_k - x̂_{k|k})^T]
            = (I_{n×n} - K_k H_k) P_{k|k-1}    (6)

where

    γ_k = z_k - H_k x̂_{k|k-1} - d_k    (7)
    S_k = E_k R_k E_k^T + H_k P_{k|k-1} H_k^T    (8)
    K_k = P_{k|k-1} H_k^T S_k^{-1}    (9)

γ is called the innovation; its covariance is S. K is the Kalman gain. x̂_{k|k-1} is called the predicted state estimate and x̂_{k|k} the updated state estimate. Equations (3)-(4) are referred to as the process update; equations (5)-(9), the measurement update, describe the fusion of the information in the state estimate with the information in the new measurement. If no measurement z_k is available at a certain time step k, then equations (5)-(9) reduce to x̂_{k|k} = x̂_{k|k-1} and P_{k|k} = P_{k|k-1}.

2.2. Kalman filters for non-linear systems

The KF algorithm is often applied to systems with non-linear process and measurement models†

    x_k = f_{k-1}(x_{k-1}) + C_{k-1} q_{p,k-1}    (10)
    z_k = h_k(x_k) + E_k q_{m,k}    (11)

Unfortunately, applying the KF (3)-(9) to systems with non-linear process and/or measurement models leads to non-optimal estimates and covariance matrices. The KF variants for non-linear systems calculate an estimate x̂_{k|i} and covariance matrix P_{k|i} for a pdf which is non-Gaussian. This is achieved by linearization of the models‡:

    x_k = F_{k-1} x_{k-1} + b_{k-1} + q*_{p,k-1} + C_{k-1} q_{p,k-1}    (12)
    z_k = H_k x_k + d_k + q*_{m,k} + E_k q_{m,k}    (13)

The difference between these models and models (1) and (2) is the presence of the terms q*_p and q*_m representing the linearization errors. The additional uncertainty on the linearized models due to these linearization errors is modelled by the covariance matrices Q* and R*: hence, instead of C_{k-1} Q_{k-1} C_{k-1}^T in equation (4), Q*_{k-1} + C_{k-1} Q_{k-1} C_{k-1}^T is used, and instead of E_k R_k E_k^T in equation (9), R*_k + E_k R_k E_k^T is used. Different choices for F, b, H, d, Q* and R*, i.e. different ways of linearizing the process and measurement models, yield other results. This paper aims at making an objective comparison of the performances of the commonly used linearizations (KF variants).

† Models which are non-linear functions of the uncertainties q_p and q_m can be dealt with by augmenting the state vector with the uncertainties. Expressed in this new state vector, the process and measurement models are of the form (10) and (11).
‡ C_{k-1} q_{p,k-1} of equation (1) corresponds to q*_{p,k-1} + C_{k-1} q_{p,k-1} of equation (12); E_k q_{m,k} of equation (2) corresponds to q*_{m,k} + E_k q_{m,k} of equation (13).

3. Consistency and information content of the estimates

The performance of these KFs depends on how representative the Gaussian pdf with mean x̂_{k|i} and covariance P_{k|i} is for the (unknown) pdf p(x_k | Z^i). Figure 1 shows a non-Gaussian pdf p(x_k | Z^i) and three possible Gaussian approximations p_1(x_k | Z^i), p_2(x_k | Z^i) and p_3(x_k | Z^i). Intuitively we feel that p_1(x_k | Z^i) is a good approximation because 'the same' values of x are probable. Pdf p_2(x_k | Z^i) is 'more uncertain' than p(x_k | Z^i) because a larger domain of x values is uncertain. Finally, pdf p_3(x_k | Z^i) is not a good approximation because a lot of probable values for x of the original distribution have a probability density of approximately zero in p_3(x_k | Z^i). These intuitive reflexions are formulated in two criteria: the consistency and the information content of the state estimate.

Figure 1. Non-Gaussian pdf p(x_k | Z^i) with three Gaussian approximations p_1(x_k | Z^i), p_2(x_k | Z^i) and p_3(x_k | Z^i).

3.1. The consistency of the state estimate

A state estimate x̂_{k|i} with covariance matrix P_{k|i} is called consistent if

    E_{p(x_k|Z^i)}[(x_k - x̂_{k|i})(x_k - x̂_{k|i})^T] ≤ P_{k|i}    (14)

i.e. for consistent results the matrix P_{k|i} is equal to or larger than the expected squared deviation with respect to the estimate x̂_{k|i} under the (unknown) distribution p(x_k | Z^i). The mean and covariance of pdfs p_1(x_k | Z^i) and p_2(x_k | Z^i) in figure 1 obey equation (14); p_3(x_k | Z^i), on the other hand, is inconsistent.

Inconsistency of the calculated state estimate x̂_{k|i} and covariance matrix P_{k|i} ('divergence' of the filter) is the most encountered problem with the KF variants: P_{k|i} is too small and no longer represents a reliable measure for the uncertainty on x̂_{k|i}. Even more, once an inconsistent state estimate is met, the subsequent state estimates are also inconsistent. This is because the filter believes the inconsistent state estimate to be more accurate than it is in reality and hence it attaches too much weight to this state estimate when processing new measurements. The consistency of the state estimate is a necessary condition for a filter to be acceptable. Because the true pdf is unknown, testing for inconsistency is done by consistency tests such as tests on the sum of a number of normalized innovation squared values (SNIS) (Willsky 1976, Bar-Shalom and Li 1993).

3.2. The information content of the state estimate

The calculated covariance matrix P_{k|i} indicates how uncertain the state estimate x̂_{k|i} is: a large covariance matrix indicates an inaccurate (and little useful) state estimate; the smaller the covariance matrix, the larger the information content of the state estimate. The information content of the state estimates defines an ordering between all consistent filters. For example, both pdfs p_1(x_k | Z^i) and p_2(x_k | Z^i) of figure 1 are consistent with p(x_k | Z^i); p_1(x_k | Z^i) has a smaller variance, hence it is more informative than p_2(x_k | Z^i) (a smaller domain of x values is probable). The most informative, consistent approximation is the Gaussian with the same mean and covariance as the original distribution, i.e. p_1(x_k | Z^i) for the example.

There is a trade-off between consistent and informative state estimates: inconsistency can be avoided by making P_{k|i} artificially larger (see equation (14)); making P_{k|i} too large, i.e. larger than necessary for consistency, however, corresponds to losing information about the actual accuracy of the state estimate. In order for the estimates to be informative, (i) the linearization errors need to be as small as possible and (ii) the extra uncertainty on the linearized models should not be larger than necessary to compensate for these errors.

The different KF variants linearize the process and the measurement models in the uncertainty region around the state estimate†. Consistent estimates are obtained by adding process and measurement uncertainty on the linearized models to compensate for the linearization errors. The following sections describe how the extended Kalman filter, the iterated extended Kalman filter and the linear regression Kalman filter differ in their linearization of the process and measurement models, how they take the linearization errors into account, and how the quality of the state estimates, expressed in terms of consistency and information content, depends on these two choices.

† The more non-linear the behaviour of the process or measurement model in the uncertainty region around the state estimate, the more pronounced the difference in quality performance (consistency and information content of the state estimates) between the KF variants.

4. Non-linear process models

This section contains a comparison between the process updates of the different KF variants when dealing with a non-linear process model (10) with linearization (12). The KF variants differ by their choice of F_{k-1}, b_{k-1} and Q*_{k-1}; after linearization, they all use process update equations (3) and (4) to update the state estimate and its uncertainty.
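The SNIS consistency test of § 3.1 can be sketched as follows. Under the filter's assumptions each normalized innovation squared γ_k^T S_k^{-1} γ_k has expectation m (the measurement dimension), so the sum over N steps is compared against χ² acceptance bounds with N·m degrees of freedom. The code below (ours, not the paper's) only computes the statistic and leaves the threshold to standard χ² tables:

```python
import numpy as np

def snis(innovations, S_matrices):
    """Sum of normalized innovation squared (SNIS) over a window.

    innovations: sequence of innovation vectors gamma_k, equation (7)
    S_matrices:  sequence of innovation covariances S_k, equation (8)
    """
    return float(sum(g @ np.linalg.solve(S, g)
                     for g, S in zip(innovations, S_matrices)))
```

For a consistent filter the SNIS over N steps of an m-dimensional measurement fluctuates around N·m; values systematically above the upper χ² bound signal divergence.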

Section 4.1 describes the linearization of the process model by the EKF and IEKF, § 4.2 that by the LRKF; § 4.3 discusses the addition of extra process uncertainty and § 4.4 presents some examples. The formulas are summarized in table 1.

4.1. The (iterated) extended Kalman filter

The EKF and the IEKF linearize† the process model by a first-order Taylor series around the updated state estimate x̂_{k-1|k-1}:

    F_{k-1} = ∂f_{k-1}/∂x |_{x̂_{k-1|k-1}}    (15)
    b_{k-1} = f_{k-1}(x̂_{k-1|k-1}) - F_{k-1} x̂_{k-1|k-1}    (16)

The basic (I)EKF algorithms do not take the linearization errors into account (n is the dimension of the state vector x):

    Q*_{k-1} ≡ 0_{n×n}    (17)

This leads to inconsistent state estimates when these errors cannot be neglected.

† The EKF and IEKF only differ in their measurement update (§§ 5.1 and 5.2).

4.2. The linear regression Kalman filter

The linear regression Kalman filter (LRKF) uses the function values of r regression points X^j_{k-1|k-1} in state space to model the behaviour of the process function in the uncertainty region around the updated state estimate x̂_{k-1|k-1}. The regression points are chosen such that their mean and covariance matrix equal the state estimate x̂_{k-1|k-1} and its covariance matrix P_{k-1|k-1}. The function values of the regression points are

    X^j_{k|k-1} = f_{k-1}(X^j_{k-1|k-1}),   j = 1, ..., r    (18)

The LRKF algorithm uses a linearized process function (12) where F_{k-1}, b_{k-1} and Q*_{k-1} are obtained by statistical linear regression through the (X^j_{k-1|k-1}, X^j_{k|k-1}) points, i.e. the deviations e_j between the function values in X^j_{k-1|k-1} for the non-linear and the linearized function are minimized in least-squares sense:

    e_j = X^j_{k|k-1} - (F X^j_{k-1|k-1} + b)    (19)
    (F_{k-1}, b_{k-1}) = arg min_{(F,b)} Σ_{j=1}^{r} e_j^T e_j    (20)

The sample covariance of the deviations e_j

    Q*_{k-1} = (1/r) Σ_{j=1}^{r} e_j e_j^T    (21)

gives an idea of the magnitude of the linearization errors in the uncertainty region around x̂_{k-1|k-1}. The CDF, DD1 and UKF filters correspond to specific choices of the regression points.

Intuitively we feel that when enough‡ regression points are taken, the state estimates of the LRKF process update are consistent and informative. They are consistent because Q*_{k-1} gives a well-founded approximation of the linearization errors (equation (21)). They are informative because the linearized model is a good approximation of the process model in the uncertainty region around x̂_{k-1|k-1} (equations (19) and (20)). Of course, it is not possible to guarantee that a given set of regression points is representative for the model behaviour in the uncertainty region; a possible approach is to increase the number of regression points until the resulting linearization (with error covariance) does not change any more.

‡ This depends on the non-linearity of the model in the uncertainty region around the state estimate.

Table 1. Summary of the linearization of the process model by the extended Kalman filter (EKF), the iterated extended Kalman filter (IEKF) and the linear regression Kalman filter (LRKF).

           F_{k-1}                            b_{k-1}                                          Q*_{k-1}
  EKF      ∂f_{k-1}/∂x |_{x̂_{k-1|k-1}}        f_{k-1}(x̂_{k-1|k-1}) - F_{k-1} x̂_{k-1|k-1}      0_{n×n}
  IEKF     ∂f_{k-1}/∂x |_{x̂_{k-1|k-1}}        f_{k-1}(x̂_{k-1|k-1}) - F_{k-1} x̂_{k-1|k-1}      0_{n×n}
  LRKF     arg min_{(F,b)} Σ_j e_j^T e_j      arg min_{(F,b)} Σ_j e_j^T e_j                    (1/r) Σ_j e_j e_j^T

4.3. Extra process uncertainty

In all of the presented filters the user can decide to add extra process uncertainty Q*_{k-1} (or to multiply the calculated covariance matrix P_{k|k-1} by a fading factor larger than 1 (Bar-Shalom and Li 1993)). This is useful if the basic filter algorithm is not consistent. For example, this is the case for the (I)EKF, or for an LRKF with a number of regression points too limited to capture the non-linear behaviour of the process model in the uncertainty region around x̂_{k-1|k-1}.
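The statistical linear regression of equations (19)-(21) is an ordinary least-squares fit through the propagated regression points, and its closed-form solution uses the sample covariances of the point sets. A sketch (the function name and layout are ours, not the paper's):

```python
import numpy as np

def statistical_linear_regression(X_in, X_out):
    """Fit X_out ~ F @ x + b in least-squares sense, returning (F, b, Q_star).

    X_in:  (r, n) array of regression points X^j
    X_out: (r, m) array of their images under the non-linear function
    """
    r = X_in.shape[0]
    mean_in, mean_out = X_in.mean(axis=0), X_out.mean(axis=0)
    dX_in, dX_out = X_in - mean_in, X_out - mean_out
    P_in = dX_in.T @ dX_in / r               # input sample covariance
    P_cross = dX_in.T @ dX_out / r           # input/output cross-covariance
    F = np.linalg.solve(P_in, P_cross).T     # arg-min of (20): F = P_yx P_xx^{-1}
    b = mean_out - F @ mean_in               # arg-min of (20), offset term
    e = X_out - (X_in @ F.T + b)             # deviations, equation (19)
    Q_star = e.T @ e / r                     # equation (21)
    return F, b, Q_star
```

For an exactly linear function the fit recovers the model and Q* vanishes; the more the propagated points deviate from a hyperplane, the larger Q* becomes, which is exactly the automatic error compensation discussed in the text.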

For a particular problem, values for Q*_{k-1} that result in consistent and informative state estimates are obtained by off-line tuning or on-line parameter learning (adaptive filtering, e.g. Mehra 1972). In many practical cases consistency is assured by taking the added uncertainty too large, e.g. by taking a constant Q* over time which compensates for decreasing linearization errors. This, however, results in less informative estimates.

4.4. Examples

The different process updates are illustrated by a simple 2D non-linear process model (x(i) denotes the ith element of x):

    x_k(1) = (x_{k-1}(1))^2
    x_k(2) = x_{k-1}(1) + 3 x_{k-1}(2)    (22)

with no process uncertainty: q_{p,k-1} ≡ 0_{2×1}. x_k(1) depends non-linearly on x_{k-1}; the process update of x_k(2) is linear. The updated state estimate and its uncertainty at time step k-1 are

    x̂_{k-1|k-1} = [10; 15],   P_{k-1|k-1} = [[36, 0], [0, 3600]]    (23)

4.4.1. Monte Carlo simulation. The mean value and the covariance of the true pdf p(x_k | Z^{k-1}) are calculated with a (computationally expensive) Monte Carlo simulation based on a Gaussian pdf p(x_{k-1} | Z^{k-1}) with mean x̂_{k-1|k-1} and covariance matrix P_{k-1|k-1}. The mean and covariance matrix of the pdf p(x_k | Z^{k-1}) calculated by the Monte Carlo algorithm are

    x̂_{k|k-1} = [136; 55],   P_{k|k-1} = [[16 994, 721], [721, 32 436]]    (24)

The results of this computation are used to illustrate the (in)consistency and information content of the state estimates of the different KF variants.

4.4.2. (I)EKF. Figure 2 shows the updated and (I)EKF predicted state estimates and their uncertainty ellipses†. The predicted state estimate is inconsistent due to the neglected linearization errors: the uncertainty ellipse of the IEKF predicted estimate is shifted with respect to the Monte Carlo uncertainty ellipse and is somewhat smaller.

Figure 2. Non-linear process model, (I)EKF. Uncertainty ellipses for the updated state estimate at k-1 (dashed line), for the (I)EKF predicted state estimate (full line) and the Monte Carlo uncertainty ellipse (dotted line).

† The uncertainty ellipsoid

    (x_k - x̂_{k|i})^T P_{k|i}^{-1} (x_k - x̂_{k|i}) = 1    (25)

is a graphical representation of the uncertainty on the state estimate x̂_{k|i}. Starting from the point x̂_{k|i}, the distance to the ellipse in a direction is a measure for the uncertainty on x̂_{k|i} in that direction.
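The numbers in this example can be reproduced directly from model (22) and prior (23). The sketch below (our code; the sample size and random seed are arbitrary choices) runs the Monte Carlo reference and the (I)EKF prediction side by side:

```python
import numpy as np

rng = np.random.default_rng(0)

# Process model (22) and prior (23)
f = lambda x: np.array([x[0]**2, x[0] + 3.0 * x[1]])
x_hat = np.array([10.0, 15.0])
P = np.array([[36.0, 0.0], [0.0, 3600.0]])

# Monte Carlo reference: propagate samples of the Gaussian prior through f
samples = rng.multivariate_normal(x_hat, P, size=200_000)
prop = np.stack([samples[:, 0]**2, samples[:, 0] + 3.0 * samples[:, 1]], axis=1)
mc_mean, mc_cov = prop.mean(axis=0), np.cov(prop.T)
# mc_mean is close to [136, 55] and mc_cov[0][0] close to 17 000, cf. (24)

# (I)EKF prediction: Jacobian linearization (15)-(16) with Q* = 0
F = np.array([[2.0 * x_hat[0], 0.0], [1.0, 3.0]])  # Jacobian of f at x_hat
x_pred = f(x_hat)                                   # [100, 55]
P_pred = F @ P @ F.T                                # [[14400, 720], [720, 32436]]

# The EKF mean misses the true mean of x(1) by the prior variance 36
# (since E[x^2] = mu^2 + sigma^2) and its variance 14 400 understates the
# true spread of about 17 000: the inconsistency discussed in section 4.4.2.
```

The analytic values for the propagated first component follow from the moments of a squared Gaussian: mean 10² + 36 = 136 and variance 4·10²·36 + 2·36² = 16 992, matching the Monte Carlo estimate in (24) up to sampling noise.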

The (I)EKF state prediction and its covariance matrix are

    x̂_{k|k-1} = [100; 55],   P_{k|k-1} = [[14 400, 720], [720, 32 436]]    (26)

Due to the neglected linearization errors the state estimate is inconsistent: the covariance P_{k|k-1} is smaller than the covariance calculated by the Monte Carlo simulation for the first state component x(1), which had a non-linear process update. For consistent results this covariance should even be larger, because the IEKF estimate x̂_{k|k-1} differs from the mean of the pdf p(x_k | Z^{k-1}) calculated by the Monte Carlo simulation.

4.4.3. LRKF. Figure 3 shows the X^j_{k-1|k-1} points (top figure) and the X^j_{k|k-1} points (bottom figure). The LRKF predicted state estimate is consistent and informative: its uncertainty ellipse coincides with the Monte Carlo uncertainty ellipse.

Figure 3. Non-linear process model, LRKF: the updated state estimate (top) and the predicted state estimate (bottom) and their uncertainty ellipses. The updated state estimate at k-1 is shown with a dashed uncertainty ellipse (top figure); the LRKF predicted state estimate is shown with a full uncertainty ellipse and the Monte Carlo result with a dotted uncertainty ellipse, which coincides with the full line (bottom figure).
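In this example the regression points are the unscented points of Julier and Uhlmann (1996) with κ = 3 − n = 1 (the paper counts six points because the centre point is included twice; the equivalent weighted form is used below). Applied to process model (22) and prior (23), their sample statistics reproduce the LRKF prediction; a sketch (our code):

```python
import numpy as np

# Prior (23) and process model (22)
x_hat = np.array([10.0, 15.0])
P = np.diag([36.0, 3600.0])
f = lambda x: np.array([x[0]**2, x[0] + 3.0 * x[1]])

n, kappa = 2, 1.0                        # kappa = 3 - n
L = np.linalg.cholesky((n + kappa) * P)
# 2n+1 sigma points; the centre carries weight kappa/(n+kappa) = 1/3,
# which matches counting the centre twice among six equally weighted points.
X = [x_hat] + [x_hat + L[:, i] for i in range(n)] + [x_hat - L[:, i] for i in range(n)]
w = np.array([kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n))

Y = np.array([f(x) for x in X])          # propagated points, equation (18)
y_mean = w @ Y
y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, Y))
# y_mean = [136, 55]; y_cov = [[16992, 720], [720, 32436]]
```

The weighted mean and covariance of the propagated points equal the LRKF prediction of this example exactly, illustrating how the regression-point statistics absorb the second-order effect that the (I)EKF prediction (26) misses.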

646 Hk .

@hk .

.

T. Lefebvre et al. dÞ Pr T j¼1 ej ej 1 r RÃ k 0mÂm 0mÂm Pr T j¼1 ej ej EKF IEKF LRKF @x xkjkÀ1 ^ @hk . dk ^ hk ð^kjkÀ1 Þ À H k xkjkÀ1 x ^ hk ð^kjk Þ À H k xkjk x minðH.

3) linearize the measurement model based only on the predicted state estimate and its uncertainty. The uncertainty ellipse obtained by Monte Carlo simulation coincides with the final uncertainty ellipse of the LRKF predicted state estimate (bottom figure). A large uncertainty on the linearized measurement model RÃ þ k E k Rk E T (due to a large uncertainty on the state estik mate) results in throwing away the greater part of the information of the possibly very accurate measurement. Section 5. After linearization they use the KF update equations (5)–(9). The EKF. If the measurement model is non-linear in the uncertainty region around the predicted state estimate. Summary of the linearization of the measurement model by the extended Kalman filter (EKF). The (I)EKF on the other hand only uses the function evaluation and its Jacobian in this state estimate.1) and LRKF (} 5.5. @x xkjk ^ minðH. For the latter filters. Downloaded by [University of South Florida] at 14:21 21 September 2011 estimate (bottom) and their uncertainty ellipses for the LRKF. The linearization of the measurement model by the IEKF (} 5. including two times the point xkÀ1jkÀ1 . 5. IEKF and LRKF choose H k . . where m is the dimension of the measurement vector zk . 3. Non-linear measurement models The previous section contains a comparison between the (I)EKF and LRKF process updates. This is an advantage where this Jacobian is difficult to obtain or non-existing (e. the iterated extended Kalman filter (IEKF) and the linear regression Kalman filter (LRKF). the linearization errors (RÃ ) are larger. The X jkÀ1jkÀ1 points are chosen with the UKF algorithm of Julier and Uhlmann (1996) where  ¼ 3 À n ¼ 1. d k and RÃ in a k different way. dÞ Pr T j¼1 ej ej Table 2. The LRKF linearizes the function based on its behaviour in the uncertainty region around the updated state estimate. The different linearization formulas are summarized in table 2. Unlike the (I)EKF. 
especially when the measurek ment function is quite non-linear in the uncertainty region around the predicted state estimate. The LRKF predicted state estimate and its covariance matrix are ! ! 136 16 992 720 ^ xkjkÀ1 ¼ . The LRKF deals with linearization errors in a theoretically founded way (provided that enough regression points are chosen). After processing the measured value.2) takes the measurement into account. given the linear measurement model and the measurement uncertainty. For instance. the state is believed y ‘Far from’ (and ‘close to’) must be understood as: the deviation of the true state with respect to the linearized measurement model is not justified (is justified) by the ~k ~ ~ k measurement uncertainty RÃ þ Ek Rk ET . Conclusion: the process update The LRKF performs better than the (I)EKF when dealing with non-linear process functions: 1.1. the EKF (} 5. 2.5 presents some examples. the linearization errors are not negligible.g. This means that the linearized measurement model does not reflect the relation between the true state value and the measurement. The extended Kalman filter The EKF linearizes the measurement model around ^ the predicted state estimate xkjkÀ1  @h  Hk ¼ k ð28Þ @x xkjkÀ1 ^ ^ ^ d k ¼ hk ðxkjkÀ1 Þ À H k xkjkÀ1 : ð29Þ The basic EKF algorithm does not take the linearization errors into account RÃ  0mÂm k ð30Þ 5. the LRKF does not need the function Jacobian. this section focuses on their measurement updates for a non-linear measurement model (11) with linearization (13). This corresponds to choosing six ^ regression points. This indicates consistent and informative results. The (I)EKF on the other hand needs trial and error for each particular example to obtain good values for the covariance matrix representing the linearization errors. for discontinuous process functions). the true state value is ‘far from’y the linearized measurement model. PkjkÀ1 ¼ : ð27Þ 55 720 32 436 4.

5.1. The result is also informative because no uncertainty due to linearization errors needs to be added.4. e. Extra measurement uncertainty In order to make the state estimates consistent. the user can tune an inconsistent filter by adding extra measurement uncertainty RÃ . the updated state estimate is inconsistent. ^ kjk x0 . First example. In case of a measurement model that instantaneously fully observes the state (or at least the part of the state that causes the non-linearities in the measurement model). Z jk Þ. the basic IEKF algorithm does not take the linearization errors into account RÃ  0mÂm : k ð33Þ The LRKF algorithm uses a linearized measurement function (13) where H k . by taking a constant RÃ over time which compensates for decreasing linearization errors. The IEKF tries to do better by linearizing the measurement model around the updated state estimate  @hk   Hk ¼ ð31Þ @x xkjk ^ ^ ^ d k ¼ hk ðxkjk Þ À H k xkjk : ð32Þ 647 are chosen such that their mean and covariance matrix ^ are equal to the predicted state estimate xkjkÀ1 and its covariance PkjkÀ1 . The function values of the regression points through the non-linear function are Z jk ¼ hk ðX jkjkÀ1 Þ: ð34Þ Downloaded by [University of South Florida] at 14:21 21 September 2011 This is achieved by iteration: the filter first linearizes ^ kjk the model around a value x0 (often taken equal to ^ the predicted state estimate xkjkÀ1 ) and calculates the updated state estimate. the (X jkjkÀ1 .2.5). However. Z jk ) points deviate substantially from a hyperplane. The linearizations are started around a freely choosen In order to assure quick and correct iteration. This results in a large RÃ and k non-informative updated state estimates (see the example in } 5. (part of) this value can be chosen based on the measurement information if this information is more accurate than the predicted state estimate. DD1 and UKF filters correspond to specific choices. 5. 
In case of a measurement model that instantaneously fully observes the state (or at least the part of the state that causes the non-linearities in the measurement model), the linearization errors will be small† in the uncertainty region around x̂_{k|k}. The true state value is then 'close to' the linearized measurement function and the updated state estimate is consistent.

† This assumes that the iterations lead to an accurate x̂^i_{k|k}.
5.3. The linear regression Kalman filter

The LRKF evaluates the measurement function in r regression points X^j_{k|k-1} in the uncertainty region around the predicted state estimate x̂_{k|k-1}. The X^j_{k|k-1} are chosen such that their mean and covariance matrix are equal to the predicted state estimate x̂_{k|k-1} and its covariance P_{k|k-1}. The function values of the regression points through the non-linear function are

Z^j_k = h_k(X^j_{k|k-1}).   (34)

The LRKF algorithm uses a linearized measurement function (13) where H_k, d_k and R*_k are obtained by statistical linear regression through the points (X^j_{k|k-1}, Z^j_k), j = 1, ..., r. The regression is such that the deviations e_j between the non-linear and the linearized function in the regression points are minimized in least-squares sense:

e_j = Z^j_k - (H X^j_{k|k-1} + d)   (35)
(H_k, d_k) = arg min_{(H, d)} Σ_{j=1}^{r} e_j^T e_j.   (36)

The sample covariance matrix of the deviations e_j gives an idea of the magnitude of the linearization errors:

R*_k = (1/r) Σ_{j=1}^{r} e_j e_j^T.   (37)

Intuitively we feel that when enough regression points (X^j_{k|k-1}, Z^j_k) are taken, the state estimates of the LRKF measurement update are consistent, because R*_k gives a well-founded approximation of the linearization errors (equation (37)). However, if the measurement model is highly non-linear in the uncertainty region around x̂_{k|k-1}, the (X^j_{k|k-1}, Z^j_k) points deviate substantially from a hyperplane. This results in a large R*_k and non-informative updated state estimates (see the example in § 5.5). The CDF, DD1 and UKF filters correspond to specific choices of the regression points.

5.4. Extra measurement uncertainty

In order to make the state estimates consistent, the user can tune an inconsistent filter by adding extra measurement uncertainty R*_k, e.g. by taking a constant R*_k over time which compensates for decreasing linearization errors. In many practical cases consistency is assured by choosing the added uncertainty too large. This reduces the information content of the results. Only off-line tuning or on-line parameter learning can lead to a good value for R*_k for a particular problem.
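The regression of equations (35)-(37) can be sketched as follows. This is our own illustrative NumPy code with unweighted regression points; the function name is hypothetical.

```python
import numpy as np

def statistical_linear_regression(X, Z):
    """Fit Z ≈ H X + d over r regression points and return (H, d, R_star).

    X has shape (r, n), Z has shape (r, m).  H and d minimize the squared
    deviations e_j (equations (35)-(36)); R_star is the sample covariance
    of the e_j (equation (37)), an estimate of the linearization errors.
    """
    r, n = X.shape
    A = np.hstack([X, np.ones((r, 1))])      # augment X with 1 for the offset d
    theta, *_ = np.linalg.lstsq(A, Z, rcond=None)
    H, d = theta[:n].T, theta[n]
    E = Z - (X @ H.T + d)                    # deviations e_j
    R_star = E.T @ E / r
    return H, d, R_star
```

For a function that is exactly linear over the regression points, R_star vanishes; the more the (X, Z) points deviate from a hyperplane, the larger R_star becomes.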

5.5. Examples

The comparison between the different measurement updates is illustrated with two examples.

First example. The measurement function is

z_k = h_1(x_k) + ρ_{m,k};   h_1(x_k) = (x_k(1))^2 + (x_k(2))^2,   (38)

where x_k = (15, 20)^T is the true value and x̂_{k|k-1} = (10, 15)^T is the predicted state estimate with covariance matrix

P_{k|k-1} = [36  0; 0  3600].

The processed measurement is ẑ_k = 630 and the measurement covariance is R_k = 400.

Second example. The measurement function is

z_k = h(x_k) + ρ_{m,k} = [h_1(x_k) + ρ_{m,k}(1); h_2(x_k) + ρ_{m,k}(2)]   (39)

with

h_1(x_k) = (x_k(1))^2 + (x_k(2))^2;   h_2(x_k) = 3 (x_k(2))^2 / x_k(1),   (40)

where x_k = (15, 20)^T is the true value and x̂_{k|k-1} = (10, 15)^T is the predicted state estimate with covariance matrix P_{k|k-1} = [36  0; 0  3600]. The processed measurement and the measurement covariance matrix are

ẑ_k = (630, 85)^T,   R_k = [400  0; 0  400].   (41)

In all figures, x_k is the true value of the state: if this value is 'far' outside the uncertainty ellipse of a state estimate, the corresponding estimate is inconsistent. If the measurement model fully observes the state, the uncertainty on the state estimate should drop considerably when the measurement is processed; the updated state estimate is not informative if this is not the case.
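The phrase "'far' outside the uncertainty ellipse" can be made quantitative with a squared Mahalanobis distance. The following check is our own illustration (not from the paper), applied to the predicted estimate of the first example:

```python
import numpy as np

def mahalanobis_sq(x_true, x_est, P):
    """Squared Mahalanobis distance of the true state from an estimate."""
    d = x_true - x_est
    return float(d @ np.linalg.solve(P, d))

# Numbers of the first example (equation (38)).
x_true = np.array([15.0, 20.0])
x_pred = np.array([10.0, 15.0])
P_pred = np.array([[36.0, 0.0], [0.0, 3600.0]])

d2 = mahalanobis_sq(x_true, x_pred, P_pred)
# d2 ≈ 0.70, well below the 95% chi-square bound for two degrees of
# freedom (5.99): the true state lies inside the uncertainty ellipse,
# so the predicted estimate is consistent before the update.
```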
5.5.1. EKF. Figure 4 shows the measurement function, the linearized measurement function, the true state value x_k, the state estimates and the uncertainty ellipses for the EKF applied to the first example (equation (38)). The true measurement function is non-linear, hence the linearization around the uncertain predicted state estimate is not a good approximation of the function around the true state value: the linearization errors are not negligible and the true value is 'far from' the linearized measurement function. The resulting updated state estimate

x̂_{k|k} = (10, 25)^T,   P_{k|k} = [36  -24; -24  16]   (42)

is inconsistent.

5.5.2. IEKF. Figure 5 shows the measurement function, the linearized measurement function around the point x̂^i_{k|k}, the true state value x_k and the state estimates for the IEKF applied to the first example (equation (38)). The measurement model does not fully observe the state; the iteration results in an uncertain updated state estimate x̂^i_{k|k} around which the filter linearizes the measurement function. As was the case for the EKF, the linearization errors are not negligible and the true value is 'far from' the linearized measurement function. The updated state estimate

x̂_{k|k} = (10, 23)^T,   P_{k|k} = [36  -16; -16  7.0]   (43)

is inconsistent.

To illustrate the consistency of the state estimate of an IEKF when the measurement observes the state completely, a second example is used. Figure 6 shows the non-linear measurement function, the linearized measurement function around the point x̂^i_{k|k}, the true state value x_k, the state estimates and the uncertainty ellipses for the IEKF applied to the second example (equations (39) and (40)). Because the measurement is accurate and the initial estimate is not, the IEKF updated state estimate is accurately known after processing the measured value. In this case, the linearization errors are small and the true state value is 'close to' the linearized measurement function. The updated state estimate and covariance matrix

x̂_{k|k} = (14, 21)^T,   P_{k|k} = [2.6  -1.7; -1.7  1.3]   (44)

are consistent and informative due to the small, hence ignored, linearization errors.
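The numbers of equations (42) and (43) can be reproduced with a sketch of the IEKF iteration (equations (31)-(32)); this is our own NumPy illustration, in which the first pass equals the plain EKF update and further passes relinearize around the current iterate.

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, R, h, H_jacobian, n_iter):
    """IEKF measurement update: relinearize around the current iterate."""
    x_i = x_pred.copy()
    for _ in range(n_iter):
        H = H_jacobian(x_i)                      # equation (31)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        # Update from x_pred with the model linearized around x_i.
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
    P_upd = P_pred - K @ H @ P_pred
    return x_i, P_upd

h = lambda x: np.array([x[0]**2 + x[1]**2])      # first example (38)
H_jac = lambda x: np.array([[2 * x[0], 2 * x[1]]])
x_pred = np.array([10.0, 15.0])
P_pred = np.array([[36.0, 0.0], [0.0, 3600.0]])
z, R = np.array([630.0]), np.array([[400.0]])

x_ekf, _ = iekf_update(x_pred, P_pred, z, R, h, H_jac, n_iter=1)
x_iekf, P_iekf = iekf_update(x_pred, P_pred, z, R, h, H_jac, n_iter=15)
# One pass gives roughly (10, 25) as in equation (42); iterating moves
# the estimate to roughly (10, 23) as in equation (43).  Both remain
# inconsistent, since the true state is (15, 20).
```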

Figure 4. Non-linear measurement model z = h_1(x_k) and EKF linearization around x̂_{k|k-1} (dotted lines). The true state x_k is 'far from' this linearization, leading to an inconsistent state estimate x̂_{k|k} (uncertainty ellipse in full line).

Figure 5. Non-linear measurement model z = h_1(x_k) that does not observe the full state, and its IEKF linearization around x̂_{k|k} (dotted lines). The true state x_k is 'far from' this linearization and the obtained state estimate x̂_{k|k} (uncertainty ellipse in full line) is inconsistent.

Figure 6. Non-linear measurement model z = h(x_k) that fully observes the state, and its IEKF linearization around x̂_{k|k} (dotted lines). The true state x_k is 'close to' this linearization, leading to a consistent state estimate (uncertainty ellipse in full line).

Figure 7. Non-linear measurement model z = h_1(x) and LRKF linearization (dotted lines). The linearization errors are large.

5.5.3. LRKF. An LRKF is run on the first example (equation (38)). The X^j_{k|k-1} points are chosen with the UKF algorithm with κ = 3 - n = 1 (Julier and Uhlmann 1996). This corresponds to choosing six regression points, including two times the point x̂_{k|k-1}. Figures 7 and 8 show the non-linear measurement function, the X^j_{k|k-1}-points and the linearization. The predicted state estimate is uncertain, hence the X^j_{k|k-1}-points are widespread and the (X^j_{k|k-1}, Z^j_k) points deviate substantially from a hyperplane. R*_k is large (R*_k = 2.6 × 10^7) due to the large deviations between the (X^j_{k|k-1}, Z^j_k) points and the linearized measurement function (see figure 8). The large linearization errors result in a large measurement uncertainty R*_k + E_k R_k E_k^T on the linearized function, and the information in the measurement is neglected. The updated state estimate and its covariance matrix are

x̂_{k|k} = (10, 2.6)^T,   P_{k|k} = [36  0; 0  3600].   (45)

Figure 9 shows the X^j_{k|k-1} points, the measurement function, the LRKF linearized measurement function, the true state value x_k, the state estimates and the uncertainty ellipses. The updated state estimate is consistent. However, it can hardly be called an improvement over the previous state estimate (P_{k|k} ≈ P_{k|k-1}).

Figure 8. Figure 7 seen from another angle.

Figure 9. Non-linear measurement model z = h_1(x) and LRKF linearization (dotted lines). The large linearization errors result in a large measurement uncertainty R*_k + E_k R_k E_k^T on the linearized function. The updated state estimate (uncertainty ellipse in full line) is consistent but non-informative.
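These numbers can be reproduced with a short script. This is our own illustration; the regression-point convention (centre point counted twice, so that six equally weighted points match the UKF weights for κ = 1) is an assumption based on Julier and Uhlmann (1996), and since the measurement noise is additive, E_k = I and the total measurement uncertainty is R*_k + R_k.

```python
import numpy as np

h = lambda x: x[0]**2 + x[1]**2                  # first example (38)
x_pred = np.array([10.0, 15.0])
P_pred = np.array([[36.0, 0.0], [0.0, 3600.0]])
z, R = 630.0, 400.0

# Six regression points for n = 2, kappa = 3 - n = 1: the centre point
# appears twice, the others are x_pred +/- columns of sqrt((n+kappa) P).
n, kappa = 2, 1
L = np.linalg.cholesky((n + kappa) * P_pred)
X = np.vstack([x_pred, x_pred,
               x_pred + L[:, 0], x_pred - L[:, 0],
               x_pred + L[:, 1], x_pred - L[:, 1]])
Z = np.array([h(x) for x in X])

# Statistical linear regression, equations (35)-(37).
A = np.hstack([X, np.ones((6, 1))])
theta, *_ = np.linalg.lstsq(A, Z, rcond=None)
H, d = theta[:2], theta[2]
e = Z - (X @ H + d)
R_star = float(e @ e) / 6                        # about 2.6e7

# Measurement update with the inflated uncertainty R_star + R.
S = H @ P_pred @ H + R_star + R
K = P_pred @ H / S
x_upd = x_pred + K * (z - (H @ x_pred + d))
P_upd = P_pred - np.outer(K, H @ P_pred)
# x_upd is close to (10, 2.6) and P_upd stays close to P_pred:
# consistent but non-informative, as in equation (45).
```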

Note that some kind of iterated LRKF (similar to the iterated EKF) would not solve this problem: the updated state estimate x̂_{k|k} and its covariance matrix P_{k|k} are more or less the same as the predicted state estimate x̂_{k|k-1} and its covariance matrix P_{k|k-1}; hence, the regression points and the linearization would be approximately the same after iteration.

5.6. Conclusion: the measurement update

Measurements which fully observe the part of the state that makes the model non-linear are best processed by the IEKF. In this case (and assuming that the algorithm iterates to a good linearization point†), the IEKF linearization errors are negligible: because the IEKF additionally takes the measurement into account when linearizing the measurement model, its linearization errors are smaller than those of the EKF and LRKF. This means that once a well-tuned IEKF is available, the state estimates it returns can be far more informative than those of the LRKF or a well-tuned EKF. In the other cases, none of the presented filters outperforms the others: the LRKF makes an estimate of its linearization errors (R*_k); the EKF and IEKF on the other hand require off-line tuning or on-line parameter learning of R*_k to yield consistent state estimates. Finally, note that the LRKF does not use the Jacobian of the measurement function, which makes it possible to process discontinuous measurement functions.

† This assumes that the iterations lead to an accurate x̂^i_{k|k}.
where previously the choice Note that some kind of iterated LRKF (similar to the iterated EKF) would not solve this problem: the ^ updated state estimate xkjk and its covariance matrix Pkjk are more or less the same as the predicted state ^ estimate xkjkÀ1 and its covariance matrix PkjkÀ1 . none of the presented filters outperforms the others: the LRKF makes an estimation of the linearization errors.g. the IEKF additionally uses the measurement value in order to linearize the measurement model. the EKF and IEKF on the other hand require extensive off-line tuning or on-line parameter learning in order to yield consistent state estimates. The (I)EKF on the other hand needs trial and error for each particular application to obtain a good covariance matrix representing the linearization errors. The quality of the state estimates is expressed by two criteria: the consistency and the information content of the estimates. A filter should be chosen for each specific application: the LRKF makes an estimate of its linearization errors (RÃ ). k Because the IEKF additionally takes the measurement into account when linearizing the measurement model. while other filters have constant (developer-determined) error compensation. In this case (and assuming that the algorithm iterates to a good linearization pointy). the iterated extended Kalman filter (IEKF) and the linear regression Kalman filter (LRKF). none of the presented filters outperforms the others. and (ii) the LRKF deals with linearization errors in a theoretically founded way. In previous work this linearization was not always recognized. The understanding of the linearization processes allows us to make a well-founded choice of filter for a specific application. The IEKF is the best way to handle non-linear measurement models that fully observe the part of the state that makes the measurement model non-linear. its linearization errors are smaller and once a well-tuned IEKF is available. Lefebvre et al. 
6. Conclusions

This paper gives insight into the advantages and drawbacks of the extended Kalman filter (EKF), the iterated extended Kalman filter (IEKF) and the linear regression Kalman filter (LRKF). The quality of the state estimates is expressed by two criteria: the consistency and the information content of the estimates. The paper relates the consistency and information content of the estimates to (i) how the linearization is performed and (ii) how the linearization errors are taken into account. These insights are a result of the distinct analysis approach taken in this paper:

1. The paper describes all filter algorithms as the application of the basic KF algorithm to linearized process and measurement models. In previous work this linearization was not always recognized; e.g. the UKF is originally derived as a filter which does not linearize the models. The difference between the KF variants is situated in the choice of linearization and the compensation for the linearization errors.
2. The analysis clarifies how some filters automatically adapt their compensation for linearization errors, while other filters have constant (developer-determined) error compensation.
3. The performance of the different filters is compared for the process and measurement updates separately, because a good performance for one of these updates does not necessarily mean a good performance for the other update. This makes it interesting in some cases to use different filters for both updates.
4. The understanding of the linearization processes allows us to make a well-founded choice of filter for a specific application, where previously the choice was mainly made based on success in similar applications or based on trial and error.

A filter should be chosen for each specific application. For process updates the LRKF performs better than the other mentioned KF variants because (i) the LRKF linearizes the process model based on its behaviour in the uncertainty region around the updated state estimate, and (ii) the LRKF deals with linearization errors in a theoretically founded way, provided that enough regression points are chosen. The (I)EKF on the other hand needs trial and error for each particular application to obtain a good covariance matrix representing the linearization errors. The IEKF is the best way to handle non-linear measurement models that fully observe the part of the state that makes the measurement model non-linear: because the IEKF additionally uses the measurement value in order to linearize the measurement model, its linearization errors are smaller, and once a well-tuned IEKF is available, the state estimates it returns can be far more informative than those of the LRKF or a well-tuned EKF. In the other cases, none of the presented filters outperforms the others: the LRKF makes an estimation of the linearization errors; the EKF and IEKF on the other hand require extensive off-line tuning or on-line parameter learning in order to yield consistent state estimates. The insights described in this paper are important for all researchers and developers who want to apply a KF variant to a specific application: they lead to a systematic choice of a filter.

Further work should report on practical applications using these insights, and an effort should be made to analyse and include future KF algorithms in this framework.

Acknowledgements

T. Lefebvre is a Postdoctoral Fellow of the Fund for Scientific Research-Flanders (FWO) in Belgium. Financial support by the Belgian Programme on Inter-University Attraction Poles initiated by the Belgian State, Prime Minister's Office, Science Policy Programme (IUAP), and by K.U. Leuven's Concerted Research Action GOA/99/04 is gratefully acknowledged.

References

Bar-Shalom, Y., and Li, X.-R., 1993, Estimation and Tracking: Principles, Techniques and Software (Boston, London: Artech House).
Bayes, T., 1763, An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53, 370-418. Reprinted in Biometrika, 45, 293-315, 1958.
Gelb, A., Kasper, J., Nash, R., Price, C., and Sutherland, A., 1974, Applied Optimal Estimation (Cambridge, MA: MIT Press).
Julier, S., and Uhlmann, J., 1996, A general method for approximating nonlinear transformations of probability distributions. Technical report, Robotics Research Group, Department of Engineering Science, University of Oxford, UK.
Julier, S., and Uhlmann, J., 2001, Data fusion in non-linear systems. In D. Hall and J. Llinas (Eds) Handbook of Multisensor Data Fusion (New York: CRC Press), Chapter 13.
Julier, S., Uhlmann, J., and Durrant-Whyte, H., 2000, A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 45(3), 477-482.
Kalman, R., 1960, A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82, 34-45.
Laplace, P., 1812, Théorie Analytique des Probabilités (Paris: Courcier Imprimeur).
Lefebvre, T., Bruyninckx, H., and De Schutter, J., 2002, Comment on 'A new method for the nonlinear transformation of means and covariances in filters and estimators'. IEEE Transactions on Automatic Control, 47(8), 1406-1408.
Maybeck, P., 1982, Stochastic Models, Estimation, and Control, Volume 2 (New York: Academic Press).
Mehra, R., 1972, Approaches to adaptive filtering. IEEE Transactions on Automatic Control, 17(5), 693-698.
Nørgaard, M., Poulsen, N., and Ravn, O., 2000a, Advances in derivative-free state estimation for nonlinear systems. Technical Report IMM-REP-1998-15 (revised edition), Technical University of Denmark, Denmark.
Nørgaard, M., Poulsen, N., and Ravn, O., 2000b, New developments in state estimations for nonlinear systems. Automatica, 36(11), 1627-1638.
Schei, T., 1997, A finite-difference method for linearisation in nonlinear estimation algorithms. Automatica, 33(11), 2053-2058.
Sorenson, H. (Ed.), 1985, Kalman Filtering: Theory and Application (New York, NY: IEEE Press).
Tanizaki, H., 1996, Nonlinear Filters: Estimation and Applications (Second, Revised and Enlarged Edition) (Berlin-Heidelberg: Springer-Verlag).
Uhlmann, J., 1995, Dynamic map building and localization: new theoretical foundations. PhD thesis, University of Oxford, UK.
Willsky, A., 1976, A survey of design methods for failure detection in dynamic systems. Automatica, 12, 601-611.