35
Kalman Filtering

Michael Athans
Massachusetts Institute of Technology, Cambridge, MA

35.1 Introduction
35.2 Problem Definition
    The State Estimation Problem
35.3 Summary of the Kalman Filter Equations
    Additional Assumptions • The Kalman Filter Dynamics • Properties of Model-Based Observers • The Kalman Filter Gain and Associated Filter Algebraic Riccati Equation (FARE) • Duality Between the KF and LQ Problems
35.4 Kalman Filter Properties
    Introduction • Guaranteed Stability • Frequency Domain Equality • Guaranteed Robustness Properties
35.5 The Accurate Measurement Kalman Filter
    Problem Definition • The Main Result
References
Further Reading

35.1 Introduction

The purpose of this chapter is to provide an overview of Kalman filtering concepts. We cover only a small portion of the material associated with Kalman filters. Our choice of material is primarily motivated by the material needed in the design and analysis of multivariable feedback control systems for linear time-invariant (LTI) systems and, more specifically, by the design philosophy commonly called the linear quadratic Gaussian (LQG) method. Thus, from a technical perspective, we cover, without proof, the so-called steady-state, constant-gain LTI Kalman filters. More details regarding different formulations and solutions of Kalman filtering problems (continuous-time linear time-varying, discrete-time linear time-varying, and time-invariant) can be found in several classic and standard textbooks on the subject of estimation theory; please see Further Reading at the end of this chapter.

The steady-state constant-gain Kalman filter is an algorithm that is used to estimate the state variables of a continuous-time LTI system, subject to stochastic disturbances, on the basis of noisy measurements of certain output variables.
Thus, the Kalman filtering algorithm combines the information regarding the plant dynamics, the probabilistic information regarding the stochastic disturbances that influence the plant state variables, as well as that regarding the measurement noise that corrupts the sensor measurements, and the deterministic controls.

Credit for "inventing" this algorithm is usually given to Dr. Rudolph E. Kalman, who presented the key ideas in the late 1950s and early 1960s. Although the Kalman filter is an alternative representation of the Wiener filter [1], Kalman's contribution was to tie the state estimation problem to the state-space models and the (then new) concepts of controllability and observability [2] to [5]. Thousands of papers have been written about the Kalman filter and its numerous applications to navigation, tracking, and estimator and controller design in defense and industrial applications.

35.2 Problem Definition

In this section we present the definition of the basic stochastic estimation problem for which the Kalman filter (KF) yields an "optimal" solution. First we present the plant state dynamics. We deal with a finite-dimensional LTI system whose state vector $x(t)$, $x(t) \in R^n$ (a real $n$-vector), obeys the stochastic differential equation

$$\frac{dx(t)}{dt} = A x(t) + B u(t) + L \xi(t) \qquad (35.1)$$

where $u(t)$, $u(t) \in R^m$, is the deterministic control vector (assumed known), and $\xi(t)$, $\xi(t) \in R^q$, is a vector-valued stochastic process, often called the process noise, that acts as a disturbance to the plant dynamics.
(Vectors are underlined lowercase letters, and matrices are underlined uppercase letters.) The process noise $\xi(t)$ is assumed to have certain statistical properties, corresponding to stationary (time-invariant) continuous-time white Gaussian noise with zero mean, i.e.,

$$E\{\xi(t)\} = 0, \quad \text{for all } t \qquad (35.2)$$

and its covariance matrix is defined by

$$\mathrm{cov}[\xi(t);\, \xi(\tau)] \triangleq E\{\xi(t)\xi'(\tau)\} = \Xi\, \delta(t - \tau) \qquad (35.3)$$

with $\delta(t - \tau)$ being the Dirac delta function (impulse at $t = \tau$). The matrix $\Xi$ is called the intensity matrix of $\xi(t)$, and it is a symmetric positive definite matrix, i.e.,

$$\Xi = \Xi' > 0 \qquad (35.4)$$

REMARK 35.1 Continuous-time white noise does not exist in nature; it is the limit of a broadband stochastic process. In the frequency domain, continuous-time white noise corresponds to a stochastic process with constant power spectral density as a function of frequency. This implies that continuous-time white noise has constant power at all frequencies, and therefore has infinite energy! White noise is completely unpredictable, as can be seen from Equation 35.3, because it is uncorrelated for any $t \neq \tau$, while it has infinite variance and standard deviation at $t = \tau$. This is obviously an approximation to reality. As with the Dirac delta function, $\delta(t)$, white noise creates some subtle mathematical issues, but is nonetheless extremely useful in engineering.

REMARK 35.2 The state $x(t)$, the solution of Equation 35.1, is a well-defined physical stochastic process, a so-called colored Gaussian random process, and it has finite energy. Its power spectral density rolls off at high frequencies.

Next, we turn our attention to the measurement equation. We assume that our sensors cannot directly measure all of the physical state variables, the components of the vector $x(t)$ of the plant given in Equation 35.1. Rather, in the classical Kalman filter formulation we assume that we can measure only certain output variables (linear combinations of the state variables) in the presence of additive continuous-time white noise.
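The white-noise idealization of Remark 35.1 has a practical consequence for simulation: a sampled, piecewise-constant approximation of continuous-time white noise with intensity $\Xi$ must use samples with covariance $\Xi/\Delta t$, so that the integrated effect over one step matches a Brownian increment of covariance $\Xi\,\Delta t$. A minimal numerical sketch (the matrix $\Xi$ and step size below are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
Xi = np.array([[2.0, 0.5],
               [0.5, 1.0]])   # assumed example noise intensity matrix
dt = 1e-3
n_steps = 200_000

# Piecewise-constant approximation: samples with covariance Xi/dt, so the
# integral over one step has covariance Xi*dt (a Brownian increment).
Lc = np.linalg.cholesky(Xi / dt)
xi = rng.standard_normal((n_steps, 2)) @ Lc.T

emp_cov = (xi.T @ xi) / n_steps   # empirical covariance, close to Xi/dt
print(emp_cov * dt)               # should be close to Xi
```

Note how the sample variance grows without bound as `dt` shrinks, which is the discrete-time face of the "infinite power" property of continuous-time white noise.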
The mathematical model of the measurement process is as follows:

$$y(t) = C x(t) + \theta(t) \qquad (35.5)$$

The vector $y(t) \in R^p$ represents the sensor measurements. The measurement or sensor noise $\theta(t) \in R^p$ is assumed to be a continuous-time white Gaussian random process, independent of $\xi(t)$, with zero mean, i.e.,

$$E\{\theta(t)\} = 0, \quad \text{for all } t \qquad (35.6)$$

and covariance matrix

$$\mathrm{cov}[\theta(t);\, \theta(\tau)] \triangleq E\{\theta(t)\theta'(\tau)\} = \Theta\, \delta(t - \tau) \qquad (35.7)$$

where the sensor noise intensity matrix is symmetric and positive definite, i.e.,

$$\Theta = \Theta' > 0 \qquad (35.8)$$

Figure 35.1 shows a visualization, in block diagram form, of Equations 35.1 and 35.5.

35.2.1 The State Estimation Problem

Imagine that we have been observing the control $u(t)$ and the output $y(t)$ over the infinite past up to the present time $t$. Let

$$U(t) = \{u(\tau);\ -\infty < \tau \le t\} \qquad (35.9)$$

$$Y(t) = \{y(\tau);\ -\infty < \tau \le t\} \qquad (35.10)$$

denote the past histories of the control and output, respectively.

The state estimation problem is as follows: given $U(t)$ and $Y(t)$, find a vector $\hat{x}(t)$, at time $t$, which is an "optimal" estimate of the present state $x(t)$ of the system defined by Equation 35.1.

Figure 35.1 A stochastic linear dynamic system.

Under the stated assumptions regarding the Gaussian nature of $\xi(t)$ and $\theta(t)$, the "optimal" state estimate is the same for an extremely large class of optimality criteria [6]. This generally optimal estimate is the conditional mean of the state, i.e.,

$$\hat{x}(t) \triangleq E\{x(t) \mid U(t), Y(t)\} \qquad (35.11)$$

One can relax the Gaussian assumption and define the optimality of the state estimate $\hat{x}(t)$ in different ways. One popular way is to demand that $\hat{x}(t)$ be generated by a linear transformation on the past "data" $U(t)$ and $Y(t)$, such that the state estimation error

$$\tilde{x}(t) \triangleq x(t) - \hat{x}(t) \qquad (35.12)$$

has zero mean,

$$E\{\tilde{x}(t)\} = 0 \qquad (35.13)$$

and the cost functional

$$J = E\{\tilde{x}'(t)\tilde{x}(t)\} = \mathrm{tr}\, E\{\tilde{x}(t)\tilde{x}'(t)\} \qquad (35.14)$$

is minimized. The cost functional has the physical interpretation that it is the sum of the error variances $E\{\tilde{x}_i^2(t)\}$ for each state variable. If we let $\Sigma$ denote the (stationary) covariance matrix of the state estimation error,

$$\Sigma \triangleq E\{\tilde{x}(t)\tilde{x}'(t)\} \qquad (35.15)$$

then the cost, $J$, of Equation 35.14 can also be written as

$$J = \mathrm{tr}[\Sigma] \qquad (35.16)$$
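Equations 35.1 and 35.5 can be simulated directly with an Euler-Maruyama discretization; the process noise enters through a Brownian increment of covariance $\Xi\,\Delta t$, and the sampled sensor noise has covariance $\Theta/\Delta t$. The matrices below are illustrative assumptions chosen only to make the sketch concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative system (assumed, not from the text): a damped second-order plant
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
L = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Xi = np.array([[1.0]])      # process noise intensity (Eq. 35.3)
Theta = np.array([[0.1]])   # sensor noise intensity (Eq. 35.7)

dt, n_steps = 1e-3, 5000
u = np.array([1.0])         # a constant deterministic control
x = np.zeros(2)
ys = []
for _ in range(n_steps):
    # Euler-Maruyama: white process noise enters as a Brownian increment
    # with covariance Xi*dt
    dw = np.sqrt(Xi[0, 0] * dt) * rng.standard_normal(1)
    x = x + dt * (A @ x + B @ u) + (L @ dw)
    # Sampled sensor noise with covariance Theta/dt approximates Eq. 35.7
    theta = np.sqrt(Theta[0, 0] / dt) * rng.standard_normal(1)
    ys.append((C @ x + theta)[0])
```

The record `ys` is the noisy output history $Y(t)$ from which the estimator of Section 35.2.1 must reconstruct $x(t)$.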
Bottom Line: We need an algorithm that translates the signals we can observe, $u(t)$ and $y(t)$, into a state estimate $\hat{x}(t)$, such that the state estimation error is "small" in some well-defined sense. The KF is the algorithm that does just that!

35.3 Summary of the Kalman Filter Equations

35.3.1 Additional Assumptions

In this section we summarize the on-line and off-line equations that define the Kalman filter. Before we do that, we make two additional "mild" assumptions:

$$[A, L] \text{ is stabilizable (or controllable)} \qquad (35.17)$$

$$[A, C] \text{ is detectable (or observable)} \qquad (35.18)$$

That $[A, L]$ is controllable means that the process noise $\xi(t)$ excites all modes of the system defined by Equation 35.1; that $[A, C]$ is observable means that the "noiseless" output $y(t) = Cx(t)$ contains information about all state variables. If $[A, L]$ is stabilizable, the modes of the system that are not excited by $\xi(t)$ are asymptotically stable; if $[A, C]$ is detectable, the unobserved modes are asymptotically stable.

35.3.2 The Kalman Filter Dynamics

The function of the KF is to generate in real time the state estimate $\hat{x}(t)$ of the state $x(t)$. The KF is actually an LTI dynamic system, of identical order ($n$) to the plant, Equation 35.1, and is driven by (1) the deterministic control input $u(t)$, and (2) the measured output vector $y(t)$. The Kalman filter dynamics are given as follows:

$$\frac{d\hat{x}(t)}{dt} = A\hat{x}(t) + Bu(t) + H[y(t) - C\hat{x}(t)] \qquad (35.19)$$

A block diagram visualization of Equation 35.19 is shown in Figure 35.2. Note that in Equation 35.19 all variables have been defined previously, except for the KF gain matrix $H$, whose calculation is carried out off-line and is discussed shortly. The filter gain matrix $H$ multiplies the so-called residual

$$r(t) \triangleq y(t) - C\hat{x}(t) \qquad (35.20)$$

and updates the time rate of change, $d\hat{x}(t)/dt$, of the state estimate $\hat{x}(t)$. The residual $r(t)$ is like an "error" between the measured output $y(t)$ and the predicted output $C\hat{x}(t)$.
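Equation 35.19 maps directly into code. The sketch below applies one forward-Euler step of the filter dynamics; the matrices in the usage example are assumed for illustration, and the gain `H` is an arbitrary placeholder rather than the optimal gain computed later in Section 35.3.4:

```python
import numpy as np

def kf_step(x_hat, u, y, A, B, C, H, dt):
    """One forward-Euler step of the steady-state KF dynamics (Eq. 35.19):
    dx_hat/dt = A x_hat + B u + H (y - C x_hat)."""
    r = y - C @ x_hat                              # residual, Eq. 35.20
    return x_hat + dt * (A @ x_hat + B @ u + H @ r)

# Illustrative matrices (assumed, not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
H = np.array([[1.0], [0.5]])                       # placeholder gain
dt = 1e-3

x_hat = np.zeros(2)
x_next = kf_step(x_hat, np.array([1.0]), np.array([0.0]), A, B, C, H, dt)
```

Note that when the residual is zero the step reduces to pure model propagation, $\hat{x} \leftarrow \hat{x} + \Delta t\,(A\hat{x} + Bu)$, which is exactly the "model-based" half of the filter structure.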
REMARK 35.3 From an intuitive point of view, the KF, defined by Equation 35.19 and illustrated in Figure 35.2, can be thought of as a model-based observer or state reconstructor. The reader should carefully compare the structures depicted in Figures 35.1 and 35.2. The plant/sensor properties, reflected by the matrices $A$, $B$, and $C$, are duplicated in the KF¹. The state estimate $\hat{x}(t)$ is continuously updated by the actual sensor measurements, through the formation of the residual $r(t)$ and the "closing" of the loop with the filter gain matrix $H$.

The KF dynamics of Equation 35.19 can also be written in the form

$$\frac{d\hat{x}(t)}{dt} = [A - HC]\hat{x}(t) + Bu(t) + Hy(t) \qquad (35.21)$$

Figure 35.2 The structure of the Kalman filter. The control, $u(t)$, and measured output, $y(t)$, are those associated with the stochastic system of Figure 35.1. The filter gain matrix $H$ is computed in a special way.

From the structure of Equation 35.21 we can immediately see that the stability of the KF is governed by the matrix $A - HC$. At this point of our development we remark that the assumption of Equation 35.18, i.e., the detectability of $[A, C]$, guarantees the existence of at least one filter gain matrix $H$ such that the KF is stable, i.e.,

$$\mathrm{Re}\,\lambda_i[A - HC] < 0; \quad i = 1, 2, \ldots, n \qquad (35.22)$$

where $\lambda_i$ is the $i$th eigenvalue of $[A - HC]$.

35.3.3 Properties of Model-Based Observers

We have remarked that the KF gain matrix $H$ is calculated in a very special way. However, it is extremely useful to examine the structure of Equations 35.19 or 35.21 and Figure 35.2 with a filter gain matrix $H$ that is arbitrary, except for the requirement that Equation 35.22 holds. Thus, for the development that follows in this subsection, think of $H$ as being a fixed matrix. As before, let $\tilde{x}(t)$ denote the state estimation error vector,

$$\tilde{x}(t) \triangleq x(t) - \hat{x}(t) \qquad (35.23)$$

so that

$$\frac{d\tilde{x}(t)}{dt} = \frac{dx(t)}{dt} - \frac{d\hat{x}(t)}{dt} \qquad (35.24)$$

¹The initial condition corresponding to $\hat{x}$ in Figure 35.2 is zero. This is because we assumed that $x(t)$ had zero mean and was completely unpredictable; thus, the best estimate for $x(t_0)$, given data up to time $t_0$, is zero.
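The existence claim behind Equation 35.22 is constructive: for a detectable (here, observable) pair, a stabilizing gain $H$ can be computed by pole placement on the dual pair $(A', C')$, since the eigenvalues of $A - HC$ equal those of $A' - C'H'$. A short sketch with assumed matrices (this produces *a* stabilizing gain, not the Kalman gain):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable pair (assumed, not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observer pole placement via duality: place the poles of A' - C'K,
# then take H = K'.
desired = np.array([-4.0, -5.0])
fb = place_poles(A.T, C.T, desired)
H = fb.gain_matrix.T          # H is n x p

eigs = np.sort(np.linalg.eigvals(A - H @ C).real)
```

Any such $H$ gives a stable model-based observer; the special choice of Section 35.3.4 is the one that also minimizes the error variance.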
Next, we substitute Equations 35.1, 35.5, and 35.21 into Equation 35.24 and use Equation 35.23 as appropriate. After some easy algebraic manipulations, we obtain the following stochastic vector differential equation for the state estimation error $\tilde{x}(t)$:

$$\frac{d\tilde{x}(t)}{dt} = [A - HC]\tilde{x}(t) + L\xi(t) - H\theta(t) \qquad (35.25)$$

Note that, in view of Equation 35.22, the estimation error dynamic system is stable. Also note that the deterministic signal $Bu(t)$ does not appear in the error equation (35.25).

Under our assumptions that the system is stable and was started in the indefinite past ($t_0 \to -\infty$), it is easy to verify that

$$E\{\tilde{x}(t)\} = 0 \qquad (35.26)$$

This implies that any stable model-based estimator of the form shown in Figure 35.2, with any filter gain matrix $H$, gives us unbiased (that is, $E\{\hat{x}(t)\} = E\{x(t)\}$) estimates.

Using next elementary facts from stochastic linear system theory, one can calculate the error covariance matrix $\Sigma$ of the state estimation error $\tilde{x}(t)$:

$$\mathrm{cov}[\tilde{x}(t);\, \tilde{x}(t)] = E\{\tilde{x}(t)\tilde{x}'(t)\} \triangleq \Sigma \qquad (35.27)$$

The matrix $\Sigma$ is the solution of the so-called Lyapunov matrix equation (linear in $\Sigma$)

$$[A - HC]\Sigma + \Sigma[A - HC]' + L\Xi L' + H\Theta H' = 0 \qquad (35.28)$$

with

$$\Sigma = \Sigma' \ge 0 \qquad (35.29)$$

Thus, for any given filter gain matrix $H$ we can calculate the associated error covariance matrix $\Sigma$ from Equation 35.28². Recalling Equation 35.16, we can evaluate, for a given $H$, the quality of the estimator by calculating

$$J = \mathrm{tr}[\Sigma] \qquad (35.30)$$

The specific way that the KF gain is calculated is by solving a constrained static deterministic optimization problem: minimize Equation 35.30 with respect to the elements $h_{ij}$ of the matrix $H$, subject to the algebraic constraints given in Equations 35.28 and 35.29.

35.3.4 The Kalman Filter Gain and Associated Filter Algebraic Riccati Equation (FARE)

We now summarize the off-line calculations that define fully the Kalman filter (Equation 35.19 or 35.21).

The KF gain matrix $H$ is computed by

$$H = \Sigma C' \Theta^{-1} \qquad (35.31)$$

²The MATLAB and MATRIX-X software packages can solve Lyapunov equations.
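The evaluation loop just described, namely fixing a stabilizing $H$, solving Equation 35.28 for $\Sigma$, and scoring the design with $J = \mathrm{tr}[\Sigma]$ (Equation 35.30), can be sketched with SciPy's Lyapunov solver. The system matrices and the gain `H` below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative data (assumed, not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
L = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Xi = np.array([[1.0]])
Theta = np.array([[0.1]])
H = np.array([[1.0], [0.5]])         # an arbitrary (non-optimal) gain

A_cl = A - H @ C
assert np.all(np.linalg.eigvals(A_cl).real < 0)   # Eq. 35.22 must hold

# Solve [A-HC] Sigma + Sigma [A-HC]' + L Xi L' + H Theta H' = 0  (Eq. 35.28).
# scipy solves A X + X A' = Q, so pass Q = -(L Xi L' + H Theta H').
Q = L @ Xi @ L.T + H @ Theta @ H.T
Sigma = solve_continuous_lyapunov(A_cl, -Q)
J = np.trace(Sigma)                  # estimator quality, Eq. 35.30
```

Sweeping `H` and comparing the resulting values of `J` is exactly the optimization problem whose closed-form solution is given next.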
where $\Sigma$ is the unique, symmetric, and at least positive semidefinite solution matrix of the so-called filter algebraic Riccati equation (FARE)

$$A\Sigma + \Sigma A' + L\Xi L' - \Sigma C'\Theta^{-1}C\Sigma = 0 \qquad (35.32)$$

with

$$\Sigma = \Sigma' \ge 0 \qquad (35.33)$$

REMARK 35.4 The formula for the KF gain can be obtained by setting

$$\frac{\partial J}{\partial H} = 0 \qquad (35.34)$$

where $\Sigma$ is given by Equation 35.28. The result is Equation 35.31. Substituting Equation 35.31 into Equation 35.28, one deduces the FARE (Equation 35.32).

35.3.5 Duality Between the KF and LQ Problems

The mathematical problems associated with the solution of the LQ and KF are dual. This duality was recognized by R. E. Kalman as early as 1960 [2]. The duality can be used to deduce several properties of the KF simply by "dualizing" the results of the LQ problem. A summary of the KF properties is given in Section 35.4.

35.4 Kalman Filter Properties

35.4.1 Introduction

In this section we summarize the key properties of the Kalman filter. These properties are the "dual" of those for the LQ controller.

35.4.2 Guaranteed Stability

Recall that the KF algorithm is

$$\frac{d\hat{x}(t)}{dt} = [A - HC]\hat{x}(t) + Bu(t) + Hy(t) \qquad (35.35)$$

Then, under the assumptions of Section 35.3, the matrix $[A - HC]$ is strictly stable, i.e.,

$$\mathrm{Re}\,\lambda_i[A - HC] < 0; \quad i = 1, 2, \ldots, n \qquad (35.36)$$

35.4.3 Frequency Domain Equality

One can readily derive a frequency domain equality for the KF. In the development that follows, let

$$\Phi(s) \triangleq (sI - A)^{-1} \qquad (35.37)$$

Let us make the following definitions. Let $G_{KF}(s)$ denote the KF loop-transfer matrix

$$G_{KF}(s) \triangleq C\Phi(s)H \qquad (35.38)$$

$$G_{KF}^H(s) \triangleq H'(-sI - A')^{-1}C' \qquad (35.39)$$

where $[A]^H$ denotes the complex conjugate of the transpose of an arbitrary complex matrix $A$. Let $G_{FOL}(s)$ denote the filter open-loop transfer matrix (from $\xi(t)$ to $y(t)$)

$$G_{FOL}(s) \triangleq C\Phi(s)L \qquad (35.40)$$

$$G_{FOL}^H(s) \triangleq L'(-sI - A')^{-1}C' \qquad (35.41)$$

Then the following equality holds:

$$[I + G_{KF}(s)]\,\Theta\,[I + G_{KF}(s)]^H = \Theta + G_{FOL}(s)\,\Xi\,G_{FOL}^H(s) \qquad (35.42)$$

If

$$\Theta = \mu I \quad \text{and} \quad \Xi = I \qquad (35.43)$$

then Equation 35.42 reduces to

$$[I + G_{KF}(s)][I + G_{KF}(s)]^H = I + \frac{1}{\mu}\,G_{FOL}(s)\,G_{FOL}^H(s) \qquad (35.44)$$

35.4.4 Guaranteed Robustness Properties

The KF enjoys the same type of robustness properties as the LQ regulator.
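The off-line computation (solve the FARE, Equation 35.32, for $\Sigma$, then form $H$ via Equation 35.31) and the robustness claim above can both be checked numerically. SciPy's `solve_continuous_are` solves the control-form Riccati equation, so the filter problem is passed in through its dual (transposed) arguments; the matrices below are the same illustrative assumptions used earlier. The closing loop checks the dual return-difference property $\sigma_{\min}[I + G_{KF}(j\omega)] \ge 1$, which applies here because $\Theta$ (being $1 \times 1$) is trivially a multiple of the identity:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative system (assumed, not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
L = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Xi = np.array([[1.0]])
Theta = np.array([[0.1]])

# FARE (Eq. 35.32): A S + S A' + L Xi L' - S C' Theta^{-1} C S = 0.
# Passing the duals (A', C') turns scipy's control ARE into the filter ARE.
Sigma = solve_continuous_are(A.T, C.T, L @ Xi @ L.T, Theta)
H = Sigma @ C.T @ np.linalg.inv(Theta)       # KF gain, Eq. 35.31

# Guaranteed stability of A - HC
assert np.all(np.linalg.eigvals(A - H @ C).real < 0)

# Return-difference property at a few sample frequencies
for w in (0.1, 1.0, 10.0):
    Phi = np.linalg.inv(1j * w * np.eye(2) - A)   # Eq. 35.37
    G_kf = C @ Phi @ H                            # Eq. 35.38
    assert np.linalg.svd(np.eye(1) + G_kf, compute_uv=False).min() >= 1.0 - 1e-9
```

The inequality holds at every frequency, not just the sampled ones; the loop above is only a spot check of the analytic guarantee.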
The following properties are valid if

$$\Theta = \mu I; \quad \mu > 0 \qquad (35.45)$$

From the frequency domain equality (Equation 35.42) we deduce the inequality

$$[I + G_{KF}(s)][I + G_{KF}(s)]^H \ge I \qquad (35.46)$$

From the definition of singular values we then deduce that

$$\sigma_{\min}[I + G_{KF}(j\omega)] \ge 1 \quad \text{or} \quad \sigma_{\max}\left[[I + G_{KF}(j\omega)]^{-1}\right] \le 1 \qquad (35.47)$$

$$\sigma_{\min}[I + G_{KF}^{-1}(j\omega)] \ge \frac{1}{2} \quad \text{or} \quad \sigma_{\max}\left[[I + G_{KF}^{-1}(j\omega)]^{-1}\right] \le 2 \qquad (35.48)$$

35.5 The Accurate Measurement Kalman Filter

We summarize the properties of the Kalman filter (KF) problem when the intensity of the sensor noise approaches zero. In a mathematical sense, this is the "dual" of the so-called "cheap-control" LQR problem. The results are fundamental to the loop transfer recovery (LTR) method applied at the plant input.

35.5.1 Problem Definition

Consider, as before, the stochastic LTI system

$$\frac{dx(t)}{dt} = Ax(t) + L\xi(t) \qquad (35.49)$$

$$y(t) = Cx(t) + \theta(t) \qquad (35.50)$$

We assume that the process noise $\xi(t)$ is white, zero-mean, and with unit intensity, i.e.,

$$E\{\xi(t)\xi'(\tau)\} = I\,\delta(t - \tau) \qquad (35.51)$$

We also assume that the measurement noise $\theta(t)$ is white, zero-mean, and with intensity indexed by $\mu$, i.e.,

$$E\{\theta(t)\theta'(\tau)\} = \mu I\,\delta(t - \tau); \quad \mu > 0 \qquad (35.52)$$

DEFINITION 35.1 The accurate measurement KF problem is defined by the limiting case

$$\mu \to 0 \qquad (35.53)$$

corresponding to essentially noiseless measurements.

Under the assumptions that $[A, L]$ is stabilizable and that $[A, C]$ is detectable, we know that the KF is a stable system and generates the state estimates $\hat{x}(t)$ by

$$\frac{d\hat{x}(t)}{dt} = [A - H_\mu C]\hat{x}(t) + H_\mu y(t) \qquad (35.54)$$

where we use the subscript $\mu$ to stress the dependence of the KF gain matrix $H_\mu$ upon the parameter $\mu$. We recall that $H_\mu$ is computed by

$$H_\mu = \frac{1}{\mu}\,\Sigma_\mu C' \qquad (35.55)$$

where the error covariance matrix $\Sigma_\mu$, also dependent upon $\mu$, is calculated from the solution of the FARE:

$$0 = A\Sigma_\mu + \Sigma_\mu A' + LL' - \frac{1}{\mu}\,\Sigma_\mu C'C\Sigma_\mu \qquad (35.56)$$

We seek insight about the limiting behavior of both $\Sigma_\mu$ and $H_\mu$ as $\mu \to 0$.

35.5.2 The Main Result

In this section we summarize the main result in terms of a theorem.

THEOREM 35.1
Suppose that the transfer function matrix from the white noise $\xi(t)$ to the output $y(t)$ for the system defined by Equations 35.49 and 35.50, i.e., the transfer function matrix

$$W(s) = C(sI - A)^{-1}L \qquad (35.57)$$

is minimum phase. Then,

$$\lim_{\mu \to 0} \Sigma_\mu = 0 \qquad (35.58)$$

and

$$\lim_{\mu \to 0} \sqrt{\mu}\, H_\mu = LW \qquad (35.59)$$

PROOF 35.1 This is Theorem 4.14 in Kwakernaak and Sivan [7], pp. 370–371.

REMARK 35.5 It can be shown that the requirement that $W(s)$, given by Equation 35.57, be minimum phase is both a necessary and sufficient condition for the limiting properties given by Equations 35.58 and 35.59.

REMARK 35.6 The implication of Equation 35.58 is that, in the case of exact measurements upon a minimum phase plant, the KF yields exact state estimates, since the error covariance matrix is zero. This assumes that the KF has been operating upon the data for a sufficiently long time, so that initial transient errors have died out.

REMARK 35.7 For a non-minimum phase plant,

$$\lim_{\mu \to 0} \Sigma_\mu \neq 0 \qquad (35.60)$$

Hence, perfect state estimation is impossible for non-minimum phase plants.

REMARK 35.8 The limiting behavior of the Kalman filter gain,

$$\lim_{\mu \to 0} \sqrt{\mu}\, H_\mu = LW; \quad W'W = I \qquad (35.61)$$

is the precise dual of the limiting behavior of the LQ control gain,

$$\lim_{\rho \to 0} \sqrt{\rho}\, G_\rho = WC; \quad W'W = I \qquad (35.62)$$

for the minimum phase plant

$$G(s) = C(sI - A)^{-1}B \qquad (35.63)$$

The relation (Equation 35.61) has been used by Doyle and Stein [8] to apply the LTR method at the plant input, while Equation 35.62 has been used by Kwakernaak [9] to apply the LTR method at the plant output (see also Kwakernaak and Sivan [7], pp. 419–427).

References

[1] Wiener, N., Extrapolation, Interpolation and Smoothing of Stationary Time Series, MIT Press and John Wiley & Sons, New York, 1950 (reprinted from a publication restricted for security reasons in 1942).
[2] Kalman, R.E., A new approach to linear filtering and prediction problems, ASME J. Basic Eng., Ser. D, 82, 34–45, 1960.
[3] Kalman, R.E., New methods and results in linear prediction and filtering theory, Proc. Symp. Engineering Applications of Random Function Theory and Probability, John Wiley & Sons, New York, 1961.
[4] Kalman, R.E.
and Bucy, R.S., New results in linear filtering and prediction theory, ASME J. Basic Eng., Ser. D, 83, 95–108, 1961.
[5] Kalman, R.E., New methods in Wiener filtering, Proc. First Symp. Engineering Applications of Random Function Theory and Probability, John Wiley & Sons, New York, 1963, chap. 9.
[6] Van Trees, H.L., Detection, Estimation, and Modulation Theory, John Wiley & Sons, New York, 1968.
[7] Kwakernaak, H. and Sivan, R., Linear Optimal Control Systems, John Wiley & Sons, New York, 1972.
[8] Doyle, J.C. and Stein, G., Multivariable feedback design: concepts for a classical/modern synthesis, IEEE Trans. Autom. Control, 1981.
[9] Kwakernaak, H., Optimal low sensitivity linear feedback systems, Automatica, 5, 279–286, 1969.

Further Reading

There is a vast literature on Kalman filtering and its applications. A reasonable starting point is the recent textbook Kalman Filtering: Theory and Practice by M.S. Grewal and A.P. Andrews, Prentice Hall, Englewood Cliffs, NJ, 1993.

The books by Gelb et al., Jazwinski, and Maybeck have been standard references for those interested primarily in applications for some time:

[1] Gelb, A., Kasper, J.F., Jr., Nash, R.A., Jr., Price, C.F., and Sutherland, A.A., Jr., Applied Optimal Estimation, MIT Press, Cambridge, MA, 1974.
[2] Jazwinski, A.H., Stochastic Processes and Filtering Theory, Academic Press, New York, 1970.
[3] Maybeck, P.S., Stochastic Models, Estimation, and Control, Vol. 1, Academic Press, San Diego, 1979.
[4] Maybeck, P.S., Stochastic Models, Estimation, and Control, Vol. 2, Academic Press, San Diego, 1982.

The reprint volume edited by H. Sorenson, Kalman Filtering: Theory and Application, IEEE Press, New York, 1985, contains reprints of many of the early theoretical papers on Kalman filtering, as well as a collection of applications. A very readable introduction to the theory is

[5] Davis, M.H.A., Linear Estimation and Stochastic Control, Halsted Press, New York, 1977.
