AN ENGINEERING APPROACH TO OPTIMAL CONTROL AND ESTIMATION THEORY

GEORGE M. SIOURIS
Air Force Institute of Technology, Wright-Patterson AFB, Ohio

A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York / Chichester / Brisbane / Toronto / Singapore

To Karin

This text is printed on acid-free paper.

Copyright © 1996 by John Wiley & Sons, Inc.
Published simultaneously in Canada.

All rights reserved. Reproduction or translation of any part of this work beyond that permitted by Section 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012.

Library of Congress Cataloging in Publication Data:

Siouris, George M.
An engineering approach to optimal control and estimation theory / George M. Siouris.
p. cm.
"A Wiley-Interscience publication."
Includes index.
ISBN 0-471-12126-6
1. Automatic control. 2. Control theory. I. Title.
TJ213.S474443 1996
629.8--dc20        95-6633

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

CONTENTS

PREFACE

CHAPTER 1   INTRODUCTION AND HISTORICAL PERSPECTIVE

CHAPTER 2   MATHEMATICAL PRELIMINARIES
    2.1  Random Variables
    2.2  Expectations and Moments
         2.2.1  Statistical Averages (Means)
         2.2.2  Moments
         2.2.3  Conditional Mean
    2.3  The Chebychev and Schwarz Inequalities
    2.4  Covariance
    2.5  The Autocorrelation Function and Power Spectral Density
    2.6  Linear Systems
    2.7  The Classical Wiener Filter
    2.8  White Noise
    2.9  System Input Error Models
    Problems

CHAPTER 3   LINEAR REGRESSION; LEAST-SQUARES AND MAXIMUM-LIKELIHOOD ESTIMATION
    3.1  Introduction
    3.2  Simple Linear Regression
    3.3  Least-Squares Estimation
         3.3.1  Recursive Least-Squares Estimator
    3.4  The Maximum-Likelihood Estimator (MLE)
         3.4.1  Recursive Maximum-Likelihood Estimator
    3.5  Bayes Estimation
    3.6  Concluding Remarks
    Problems

CHAPTER 4   THE KALMAN FILTER
    4.1  The Continuous-Time Kalman Filter
    4.2  Interpretation of the Kalman Filter
    4.3  The Discrete-Time Kalman Filter
         4.3.1  Real-World Model Errors
    4.4  The State Transition Matrix
    4.5  Controllability and Observability
         4.5.1  Observers
    4.6  Divergence
    4.7  The U-D Covariance Algorithm in Kalman Filters
    4.8  The Extended Kalman Filter
    4.9  Shaping Filters and Colored Noise
    4.10 Concluding Remarks
    Problems

CHAPTER 5   LINEAR REGULATORS
    5.1  Introduction
    5.2  The Role of the Calculus of Variations in Optimal Control
    5.3  The Continuous-Time Linear-Quadratic Regulator
    5.4  The Discrete-Time Linear-Quadratic Regulator
    5.5  Optimal Linear-Quadratic-Gaussian Regulators
         5.5.1  Introduction
         5.5.2  Properties of the LQG Regulators
    5.6  Pontryagin's Minimum Principle
    5.7  Dynamic Programming and the Hamilton-Jacobi Equation
    5.8  Concluding Remarks
    Problems

CHAPTER 6   COVARIANCE ANALYSIS AND SUBOPTIMAL FILTERING
    6.1  Covariance Analysis
         6.1.1  Concluding Remarks

PREFACE

Optimal control and estimation theory has grown rapidly and at the same time advanced significantly in the last three decades. During this time, many books and research papers on optimal control theory, on various levels of sophistication, have been published. Optimal control theory is playing an increasingly important role in the design of modern systems. In particular, control problems play an important role in aerospace, as well as in other applications, where, for example, temperature, pressure, and other variables must be kept at desired values regardless of disturbances. For example, the federal government is funding a multimillion-dollar effort to develop intelligent vehicle highway systems (IVHS) in the next decade. More specifically, the design of optimal control and stabilization systems, the determination of optimal flight paths (e.g., the optimization in flight mechanics), and the calculation of orbital transfers have a common mathematical foundation in the calculus of variations.

A distinction between classical and modern control theory is often made in the control community. Historically, the development of control theory in engineering emphasized stability. In particular, autopilots, flight controllers, and stability augmentation systems for manned aircraft are commonly designed using linearized analyses, where feedback gains or adjustable parameters are manipulated in order to obtain satisfactory system transient response to control inputs and gust inputs. These linear systems, which are subject to deterministic inputs, are frequently optimized with respect to common transient response criteria such as rise time, settling time, peak overshoot, bandwidth, and so on, which in turn depend upon the locations of the poles and zeros of the system transfer function. On the other hand, the design of a modern optimal controller requires the selection of a performance criterion.

More specifically, the major approaches to optimal control are minimizing some performance index, depending on the system error and time for completely linear systems; minimizing the root-mean-square error for statistical inputs; and searching for the maximum or minimum of a function. In many aeronautical applications, the selection of a performance criterion is based on real physical considerations such as payload, final velocity, etc. In many practical problems, the optimum solution lies in a region where the performance criterion is fairly flat, rather than at a sharp minimum, so that the increased benefits in moving from a near-optimum to an optimum solution may be quite small.

Kalman and Bucy have investigated optimal controllers for linear systems and obtained solutions to the combined optimal control and filtering problem. Consequently, during the last thirty years or so a great deal of research has been done on the subject of optimal control theory, and numerous very good books have been written on the subject. The methods of modern control theory have been developed in many instances by pure and applied mathematicians, with the result that much of the written presentation is formal and quite inaccessible to most control engineers. Above all, the field is so vast that no single book or paper now available to the student or engineer can possibly give him an adequate picture of the principal results. Furthermore, I have felt that several topics of great importance to practicing engineers as well as students have not appeared in a systematic form in a book. This book is intended to fill the need, especially by the practicing engineer, for a single source of information on major aspects of the subject. My intent is to serve a broad spectrum of users, from first-year graduate-level students to experienced engineers, scientists, and engineering managers.

Nonlinearities occur in all physical systems, and an understanding of the effects that they have on various types of signals is important to engineers in many fields. Optimization of linear systems with bounded controls and limited control effort is important to the control engineer because the linearized versions of many physical systems can be easily forced into this general formulation. Although the techniques of optimal control theory provide particularly elegant mathematical solutions to many problems, they do have shortcomings from a practical point of view. Therefore, in this book I have tried to keep the theory to a minimum while placing emphasis on applications.

My interest in optimal control theory was acquired and nurtured during my many years of research and development in industry, the government, and teaching. The mathematical background assumed of the reader includes concepts of elementary probability theory, statistics, matrix algebra, linear systems, and some familiarity with classical control theory. Several illustrative examples have been included in the text that show in detail how the principles discussed are applied. The examples chosen are sufficiently practical to give the reader a feeling of confidence in mastering and applying the concepts of optimal control theory. Furthermore, as in most textbooks, problems have been added at the end of each chapter. The problems have been selected with care and for the most part supplement the theory presented in the text. It is recommended that the student and/or engineer read and attempt to solve these problems.

ORGANIZATION OF THE TEXT

The structure of the book's organization is an essential part of the presentation. The material of the book is divided into eight chapters and two appendices. Chapter 1 is an introduction, giving a historical perspective and the evolution of optimal control and estimation theory. Chapter 2 presents an overview of the basic mathematical concepts needed for an understanding of the work that follows; the topics covered include random variables, moments, covariance, the autocorrelation function and power spectral density, linear systems, the classical Wiener filter, white noise, and system input error models. Chapter 3 is concerned with linear regression, least-squares, and maximum-likelihood estimation. Among the topics covered in this chapter are simple linear regression, least-squares estimation, the recursive least-squares estimator, maximum-likelihood estimation, and the recursive maximum-likelihood estimator. The material of Chapter 4 is devoted to the development of the Kalman filter. Topics covered in this chapter include continuous-time and discrete-time Kalman filters, real-world model errors, the state transition matrix, controllability and observability, divergence, the U-D covariance algorithms, and the extended Kalman filter. Chapter 5 is devoted to linear regulators and includes a detailed discussion of the role of the calculus of variations in optimal control, the continuous-time and discrete-time linear quadratic regulator (LQR), the optimal linear quadratic Gaussian (LQG) regulator, Pontryagin's minimum principle, and dynamic programming and the Hamilton-Jacobi-Bellman equation. Chapter 6 may be considered as a natural extension of Chapter 4, and deals with covariance analysis and suboptimal filtering. Chapter 7 discusses the alpha-beta-gamma tracking filters. The last chapter, Chapter 8, discusses decentralized Kalman filters. The book concludes with two appendices. Appendix A reviews matrix operations and analysis, since modern optimal control theory leans heavily on matrix algebra; the topics covered are basic matrix concepts, the eigenvalue problem, quadratic forms, and the matrix inversion lemma. This appendix has been added as a review for the interested reader, and may be of help to the student or engineer. Appendix B presents several matrix subroutines.

GEORGE M. SIOURIS
Dayton, OH
September 1995

ACKNOWLEDGMENTS

The problem of giving proper credit is a vexing one for an author. Nevertheless, preparation of this book has left me indebted to many people. I am indebted to many writers and colleagues whose work has deepened my understanding of modern control and estimation theory.

Grateful acknowledgment is due to Professor William M. Brown, Head, Department of Electrical Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, for his guidance and readiness to help at any time. My thanks also go to Dr. Shozo Mori of Tiburon Systems, Inc., San Jose, California, and to Dr. Kuo-Chu Chang, Associate Professor, Systems Engineering Department, George Mason University, Fairfax, Virginia. Dr. Mori read the entire manuscript, pointed out various errors, and offered constructive criticisms of the overall presentation of the text; Dr. Chang made several corrections and improvements in portions of the manuscript. The invaluable comments and suggestions of Dr. Stanley Shinners of the Unisys Corporation, Great Neck, New York, and Adjunct Professor, Department of Electrical Engineering, The Cooper Union for the Advancement of Science and Art, New York, and of Dr. Guanrong Chen, Associate Professor, Department of Electrical and Computer Engineering, University of Houston, Houston, Texas, have been of considerable assistance in the preparation of the final manuscript. The enthusiastic support and suggestions provided by both these gentlemen have materially increased its value as a text. Also, I am very grateful for the advice and encouragement I received from Professor Jang Gyu Lee, Department of Control and Instrumentation Engineering, Seoul National University, Seoul, Republic of Korea; Professor Victor A. Skormin, Department of Electrical Engineering, Thomas J. Watson School of Engineering and Applied Science, Binghamton University (SUNY), Binghamton, New York; and R. Craig Coulter of the Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania.

Any errors that remain, I will be grateful to hear of; the faults that remain are, of course, the responsibility of the author. Finally, I wish to acknowledge the patience, understanding, and support of my family during the preparation of the book.

G.M.S.

CHAPTER 1

INTRODUCTION AND HISTORICAL PERSPECTIVE

Many modern complex systems may be classed as estimation systems, combining several sources of (often redundant) data in order to arrive at an estimate of some unknown parameters. Among such systems are terrestrial or space navigators for estimating such parameters as position, velocity, and attitude; fire-control systems for estimating impact point; and radar systems for estimating position and velocity. Estimation theory is the application of mathematical analysis to the problem of extracting information from observational data. The application contexts can be deterministic or probabilistic, depending on the intended objectives and the available observational information, and the resulting estimates are required to have some optimality and reliability properties. Estimation is often characterized as prediction, filtering, or smoothing. Prediction usually implies the extension in some manner of the domain of validity of the information. Filtering usually refers to the extraction of the true signal from the observations. Smoothing usually implies the elimination of some noisy or useless component in the observed data.

One of the most widely used estimation algorithms is the Kalman filter, or more precisely the Kalman algorithm: an algorithm which generates estimates of the variables of the system being controlled by processing available sensor measurements. The Kalman filter theory, in its various forms, has become a fundamental tool for analyzing and solving a broad class of estimation problems. In classical design, for a nonminimum phase system, the closed-loop system is unstable for high controller gains; optimal estimation, however, always guarantees closed-loop system stability even in the event of high estimator gains. The Kalman filter equations, perhaps most importantly, save computer memory by updating the estimate of the signals between measurement times without requiring storage of all the past measurements.

Simply stated, the Kalman filter is an optimal recursive data-processing algorithm which, due primarily to its sequential nature, is ideally suited to digital computers. One prerequisite is that the filter requires a model of the system dynamics and of the a priori noise statistics involved. Furthermore, the filter is flexible in that it can handle measurements one at a time or in batches. Another important feature of the filter is that it includes failure detection capability. For example, in aided inertial navigation systems, when all measuring devices (e.g., the Global Positioning System, TACAN, Omega, Doppler radar, etc.) on board a vehicle are operating correctly, a measuring-device failure can be detected and isolated. That is, the different signals entering the optimal gain matrix [see Eqs. (4.13) and (4.20)] should be white sequences with zero mean and predictable covariance; if this condition is not met, a failure is indicated. Specifically, a bad-data rejection technique has been developed which compares the measurement residual magnitude with its standard deviation as computed from the Kalman filter measurement update algorithm. If the residual magnitude exceeded n times the standard deviation, the measurement was rejected. The value of n used was 3, corresponding to a 3-sigma residual magnitude test.

An important feature of the Kalman filter is its generation of a system error analysis, independent of any data inputs; the filter performs this error analysis in a very efficient way. In short, the purpose of the filter is to reconstruct the states which are not measured. This is done in real time by exercising a model of the system, which is stored in a computer's memory, and using the difference between the model predictions and the measurements. Then, in conjunction with a closed-loop algorithm, the control signals are fine-tuned to bring the estimate into agreement with nominal performance, and to minimize the process and measurement noise influence.

The standard Kalman filter provides the best sequential linear unbiased estimate (globally optimal estimate) when the noise processes are jointly Gaussian. In practice, the stochastic processes involved are often modeled as Gaussian ones to simplify the mathematical analysis of the corresponding estimation problems. In such a simplified case, the three best-known estimation methods (the least-squares, maximum-likelihood, and Bayesian methods) give almost identical estimates, even though the associated reliabilities may be different.

The modern theory of estimation has its roots in the early works of A. N. Kolmogorov and N. Wiener [72]. Kolmogorov in 1941 and Wiener in 1942 independently developed and formulated the important fundamental estimation theory of linear minimum mean-square estimation. However, the Wiener-Kolmogorov solution, as it came to be called later, expressed as an integral equation, was only tractable for stationary processes until the early 1960s. Specifically, the filtering of continuous-time signals was characterized by the solution to the classical Wiener-Kolmogorov problem, whereas linear regression techniques based on weighted least-squares or maximum-likelihood criteria were characteristic of the treatment of discrete-time filtering problems. These techniques were largely aimed at specific problems or classes of problems. The situation changed when R. E. Kalman, and later Kalman and R. S. Bucy, revolutionized the field with their now classical papers [41, 42]. The basis of the concept was attributed by Kalman [41] to the ideas of orthogonality and wide-sense conditional expectation discussed by J. L. Doob [27]. Based on the work of these researchers, the results of the Kalman-Bucy filter were quickly applied to large classes of linear systems, and attempts were made at extending the results to nonlinear systems. Several authors presented a series of results for a variety of extended Kalman filters, but closed-form expressions for the error bound were not found.

Kalman filtering techniques have seen widespread application in aerospace navigation, guidance, and control, the field where they were first used (viz., NASA's early work on the manned lunar mission and, later in the early sixties, the development of the navigation systems for the Apollo and the Lockheed C-5A aircraft programs). These techniques were rapidly adapted in such diverse fields as orbit determination, radar tracking, ship motion, chemical process control, on-line failure detection in nuclear plant instrumentation, natural gamma-ray spectroscopy in oil- and gas-well exploration, measurement of instantaneous flow rates and estimation and prediction of unmeasurable variables in industrial processes, mobile robotics, the automobile industry (as vehicles begin to incorporate smart navigation packages), and power station control systems. Engineers engaged in the aforementioned areas, as well as mathematicians, will find Kalman filtering techniques an indispensable tool.

One of the prime contributing factors to the success of the present-day estimation and control theory is the ready availability of high-speed, large-memory digital computers for solving the equations. A major thrust of Kalman mechanizations and architectures is the use of parallel, partitioned, or decentralized versions of the standard Kalman filter. However, as the filter's use gained in popularity in the scientific community, the problems of implementation on small spaceborne and airborne computers led to a square-root formulation to overcome the numerical difficulties associated with computer word length. Square-root filtering was developed in the early 1960s and can be found in the works of R. H. Battin [5], J. E. Potter [56], and S. F. Schmidt [50, 63]. Later researches in the square-root Kalman filtering method can be found in the works of L. A. McGee and S. F. Schmidt [60], N. A. Carlson [13], and G. J. Bierman [9]. Specifically, two different types of square-root filters have been developed. The first may be regarded as a factorization of the standard Kalman filter algorithm; it basically leads to the square-root error covariance matrix. The second involves the square root of the information matrix, which is defined as the inverse of the error covariance matrix. The work that led to this new formulation is also discussed in this book.

At this point, it is appropriate to define and/or explain what is meant by the term filter. Commonly, a filter is thought of as a computer program in a central processor. In particular, the algorithm is called a filter if it has the capability of ignoring, or filtering out, noise in the measured signals.
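For a single (scalar) measurement, the bad-data rejection test described above can be stated in the following generic form; the symbols P, H, and R here are the standard covariance, observation, and measurement-noise quantities of Chapter 4, and the threshold n = 3 is the value cited above:

    reject z(k)  if  |z(k) - H(k)x̂(k|k-1)| > n [H(k)P(k|k-1)H(k)^T + R(k)]^(1/2),   n = 3.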

The desire to determine estimators that were optimal led to the more fundamental problem of determining the conditional probability, based upon past measurements. The work of Kushner [45] and Bucy [42] addressed itself to this problem for continuous-time systems. Their approach resulted in the conditional density function being expressed as either a ratio of integrals in function space or the solution to a partial differential equation similar to the Fokker-Planck equation. Representation of this conditional density function gave rise to an approximation problem complicated by its changing form; an approximation such as quasimoments, based on the form of the initial distribution, may become a poor choice as time progresses. Still other researchers formulated the problem in Hilbert space and generated a set of integral equations for which the kernel had to be determined, and then concentrated on the solution for this kernel by gradient techniques. Notice that, from Hilbert space theory, the optimum is achieved when the error is orthogonal to the linear manifold generated by the finite polynomials of the observer. The techniques used for nonlinear problems all have limitations; even the determination of the conditional density function becomes suboptimal due to the necessity of using approximations to describe the density function.

The Kalman filter theory is well developed and has found wide application in the filtering of linear or linearized systems. The extension to optimal control is accomplished by exploiting, as we shall see later, the duality of linear estimation and control, which is derived from duality concepts in mathematical programming.

With the above preliminaries in mind, we can now state in more precise terms the function of the Kalman filter. The Kalman filter is an optimal recursive data-processing algorithm, a purely recursive loop, which generates estimates of the variables (or states) of the system being controlled by processing all available measurements. Specifically, the main function of the Kalman filter is to estimate the state vector using system sensors and measurement data corrupted by noise. Stated another way, the filter operates on the system errors and processes all available measurements, regardless of their precision, to estimate the current values of the variables of interest by using the following facts: (1) knowledge of the system model and measurement-device dynamics, (2) statistical description of system noises and uncertainties, and (3) information about the initial conditions of the variables.

The Kalman filter algorithm concerns itself with two types of estimation problems: (1) filtering (update), and (2) prediction (propagation). When the time at which an estimate of the state vector is desired coincides with the last measurement point, the estimation problem is known as filtering; that is, filtering refers to estimating the state vector at the current time. When the time of interest of state-vector estimation occurs after the last measurement, the estimation problem is termed prediction [58].

The application of Kalman filtering theory requires the definition of a linear mathematical model of the system. For this reason, a distinction is made between a truth model, sometimes referred to as a real-world model, and the filter model. A truth model is a description of the system dynamics and a statistical model of the errors in the system. It can represent the best available model of the true system or be hypothesized to test the sensitivity of a particular system design to modeling errors. The filter model, on the other hand, is the model from which the Kalman gains are determined. With regard to the system model, the actual system (or plant, as it is also called) differs from that being modeled by the filter, and the largest sources of Kalman filter estimation error are unmodeled errors. It should be pointed out, however, that Kalman filtering algorithms derived from complex system models can impose extremely large storage and processing requirements; thus, the filter model is in general of lower order than a truth model. When the ratio of the order of the truth model to that of the filter model is one, there is a perfect match between the two models. If the ratio is less than one, the suboptimal, or simplified, filter models provide performance almost as good as the optimum filter based on the exact model. Moreover, two simulation techniques are commonly used to study the effect of uncertainties or perturbations within the system model when the system truth model is present. These two techniques are (1) covariance analysis and (2) Monte Carlo simulation (see Chapter 6).

With regard to the covariance matrix P, it is noted that if very accurate measurements are processed by the filter, the covariance matrix will become so small that additional measurements would be ignored by the filter. When this happens, only very small corrections to the estimated state are made, and the estimate will diverge from the true state. This problem is due to modeling errors, and can be corrected by using pseudo noise in the time update equations; that is, if the Kalman filter is not properly optimized, the filter's noise covariance matrices must be adjusted accordingly.

In the traditional suboptimal Kalman filter, the implementation of a specific filter demands a tradeoff between core usage and duty cycle. Implementation of the full matrix element computations minimizes core use, but requires more execution time because of the automatic service of zero elements, and can require a large word count and excessive execution time. Computation of stored equations for individual matrix elements that are nonzero reduces the execution time but requires a higher word count.

A final note is appropriate at this point. Since there is no uniformity in the literature on optimal control and estimation theory concerning the mathematical symbols used, the reader will notice that different symbols are used to express the same concept or principle. This has been done to acquaint the reader with the various notations that will be encountered in the literature. Before we proceed with the Kalman filter and its solution, we will briefly review in Chapter 2 the mathematical concepts that are required in order to understand the work that follows.

CHAPTER 2

MATHEMATICAL PRELIMINARIES


CHAPTER 3

LINEAR REGRESSION; LEAST-SQUARES AND MAXIMUM-LIKELIHOOD ESTIMATION


CHAPTER 4

THE KALMAN FILTER

As stated in Chapter 1, the Kalman filter is one of the most widely used estimation algorithms. This algorithm generates estimates of the variables of the system being controlled by processing available sensor measurements. This is done in real time by exercising a model of the system, which is stored in the computer's memory, and using the difference between the model predictions and the measurements. Then, in conjunction with a closed-loop algorithm, the control signals are fine-tuned in order to bring the estimates into agreement with nominal performance. In essence, the Kalman filter consists of a linearized model of the system dynamics, employing statistical estimates of the system error sources in order to compute the time-varying gains for the processing of external measurement information. Thus, the measurement information is used to generate corrections and to improve the system compensation for critical error sources. If the system error dynamics and their associated statistics are exactly modeled in the filter, the optimum corrections for the available measurement information are generated.

4.1 THE CONTINUOUS-TIME KALMAN FILTER

We begin this chapter with certain important definitions and a discussion of the continuous-time Kalman filter for linear, time-varying systems. The continuous-time Kalman filter is used when the measurements are continuous functions of time. The Kalman filtering theory is concerned with overcoming the difficulties of the Wiener filter enumerated in Section 2.7. The Kalman filtering problem is treated entirely within the time domain. The theory accommodates both continuous-time and discrete-time linear systems, stationary or time-varying, one-dimensional or multidimensional, with generalized equations covering all cases, and the same equations are valid for filtering as for prediction problems. Although statistical data are required for the Kalman filter, they are presented in a much more simplified form than required for the Wiener problem. Consequently, the computation of the optimum filter becomes highly simplified.

Linear, time-varying state models are commonly expressed through state-space methods. Intrinsic to any state model are three types of variables: (1) input variables, (2) state variables, and (3) output variables, all generally expressed as vectors. The state model identifies the dynamics and interaction of these variables. The aforementioned variables will now be defined more formally.

Input-Output Variables: The input and output variables characterize the interface between the physical system and the external world. The input reflects the excitations delivered to the physical system, whereas the output reflects the signal returned to the external world.

State Variables: The state variables represent meaningful physical variables or linear combinations of such variables. In particular, the state vector is a set of n variables whose values describe the system behavior completely.

The diagram in Figure 4.1 illustrates the general composition of a linear, time-varying state model. From the figure, the following first-order, degree-n vector differential equation can be written:
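In generic state-space notation, a linear, time-varying state model of the kind meant here takes the following standard form (a sketch only; the exact noise terms and symbols of Eq. (4.1) may differ):

    dx(t)/dt = A(t) x(t) + B(t) u(t) + w(t),
    z(t) = H(t) x(t) + v(t),

where x(t) is the state vector, u(t) the input, z(t) the measurement, A(t) the system matrix, B(t) the input distribution matrix, H(t) the measurement matrix, and w(t) and v(t) are the process and measurement noise, respectively.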


4.2 INTERPRETATION OF THE KALMAN FILTER

The equation for the Kalman filter is given by Eq. (4.9). From the terms on the right-hand side of Eq. (4.9) it can be seen that, in the absence of any measurements, the optimal estimates of the states evolve in time according to the same dynamical relationships as in the actual system. Similarly, if the measurements are interrupted or shut off, the state estimates can be found by solving the known equations of motion of the real system. The second term on the right-hand side of Eq. (4.9) shows the effect of measurements on the state estimates: it is the difference between the actual measurement z(t) and the expected measurement H(t)x̂(t) that drives the estimator. (The quantity in the brackets is also known as the innovations.) Thus, if the chosen or designed filter has been tuned properly (see also Section 4.6), when the estimator is doing well the driving term to the Kalman filter is small.

The Kalman gains K(t) represent the weight given to the incoming measurements for tuning up the estimates of the state, and are the heart of Kalman filtering theory. At a given time t, K(t) is proportional to the covariance of the errors in the estimate, P(t), and inversely proportional to the covariance of the measurement errors, R(t). This is intuitively reasonable, since for a given R(t), decreased confidence in the estimates (indicated by a larger P(t)) would call for a heavier weighting of the incoming measurements. Similarly, for given P(t), an increase in R(t) means the measurements are noisier, implying that they should be weighted less heavily. These are both reasonable results.
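For reference, a continuous-time (Kalman-Bucy) filter of the type being interpreted here is commonly written in the following standard form, which presumably corresponds, up to notation, to Eqs. (4.9) and (4.13):

    dx̂(t)/dt = A(t) x̂(t) + B(t) u(t) + K(t) [z(t) - H(t) x̂(t)],
    K(t) = P(t) H^T(t) R^(-1)(t),
    dP(t)/dt = A(t) P(t) + P(t) A^T(t) + Q(t) - P(t) H^T(t) R^(-1)(t) H(t) P(t).

These equations exhibit the behavior described above: the estimate is driven by the residual z - Hx̂, and the gain grows with P(t) and shrinks as R(t) grows.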
The properties of the filter that make it useful as an estimation model can be summarized as follows:

1. The filter is linear, or it must be linearized. Linearization simplifies calculations, and the filter performs best for linear systems.
2. The estimate is a minimum-variance estimate. Under the restrictions of linearity, whiteness, and Gaussianness, the Kalman filter is the best filter of any conceivable form. The filter generates an unbiased estimate x̂ of the state vector x; that is, the expected value of the estimate is the value of the state vector at time t.
3. The filter is recursive, meaning it does not store past data.

In addition, the engineer must make certain that the stability of the Kalman filter is satisfied. By stability we mean that the asymptotic values of the estimated state and its error covariance matrix are independent of their respective initial values (note that filtering stability is one of the requirements for an unbiased estimate).

With the covariance analysis formulation, an error budget can be established. Error budget analysis consists of repeated covariance analyses whereby the error sources in the truth model are turned on individually in order to determine the separate effects of these error sources. In this way the system designer can determine which are the predominant error sources, so that he can utilize the best hardware that contributes smaller errors to the overall system accuracy. The concept of the error budget can be summarized by considering the following points (see also Section 6.1):

1. It can be used to assess the theoretical limits of the system under design, or technological performance.
2. It catalogs the contribution of the individual error sources to the total system error.
3. It provides clues to better design of the optimal (or suboptimal) filter.
4. It can be considered as a special form of sensitivity analysis; it relies on the linearity of the covariance analysis formulation.

A word now about white noise is appropriate (see Section 2.8 for a detailed mathematical description of white noise). The system noises which drive the filter are assumed to be white and Gaussian. Whiteness implies that the noise value is not correlated in time (i.e., the correlation time is zero). It also implies that the noise has equal power at all frequencies; this is in analogy with white light, which contains all frequencies. Since this results in a noise with infinite power, a true white noise cannot exist. However, white noise can be approximated with a wideband noise having essentially constant power at all frequencies within the system bandpass and low power at frequencies above the system bandpass. Within the bandpass of the system of interest, such noise looks identical to the fictitious white noise, and the mathematics involved in the Kalman filter is considerably simplified.

4.3 THE DISCRETE-TIME KALMAN FILTER

A Kalman filter can also be derived for discrete systems. In engineering applications, the measurements used to improve the estimate of the state of the system are made at discrete intervals of time. For example, a radar is often used in a pulsed, and thus sampled, mode, and measurements in certain avionics applications are available only at distinct points in time, even if the source of the measurements operates continuously. For sampled data systems, therefore, the discrete-time form of the Kalman filter is of interest. We note that some information is lost if a continuous source of measurement data is sampled at discrete points in time, but this situation can be avoided by prefiltering. In the discrete form, the equations are well suited for machine computation.

In applying the Kalman filtering theory, we make the following model assumptions:

1. The state vector x(t) exists at the time t in a random environment (i.e., system dynamics) that is Gaussian with zero mean and covariance matrix Q(t).
2. An observation made at a point in time t is corrupted by uncorrelated, Gaussian noise having a zero mean and covariance matrix R(t).
3. The state vector, which is unknown, can be estimated using observations or data samples that are functions of the state vector.

In Section 4.2 we discussed the importance of the Kalman gain in connection with the continuous-time case; the gain K(k) plays the corresponding role in the discrete-time algorithm. The recursive nature of the discrete-time case implies that there is no need to store past measurements and older estimates for the purpose of computing present estimates. This recursive characteristic is important because the burden on the onboard computer memory capacity is considerably reduced. Moreover, the usefulness of the discrete-time case lies in the fact that digital computers accept only discrete data. Since in practice the Kalman filter algorithm is implemented in a digital computer, and since the digital computer cannot perform the differentiation and integration operations required for solution of the continuous-time equations, a difference-equation formulation of the filter is appropriate.

The linear, discrete-time equivalent of the continuous-time Kalman filter state model given in Section 4.1 can be generated from various schemes available for digital simulation. Some of these schemes are: (1) Euler's technique, (2) Simpson's method, and (3) the trapezoidal method. For the present discussion, we will select the forward Euler approximation technique. Therefore, Eq. (4.1) may be written in discretized form, making use of the state-transition matrix defined by Eq. (4.18).

Equation (4.14) is called the state equation. It is a first-order difference equation in k, relating one value of x, namely x(k), the value of the state vector at time k, to the next value x(k + 1). The vector x(k) represents the parameter, or state vector, whose components we will try to estimate; when the components of the state vector are time-dependent, Eq. (4.14) structures their dynamical behavior. The dynamical system of Eq. (4.14) is thus a model for the true state of affairs. Equation (4.15), called the observation equation, is a model of the measurement process. Here z(k) represents the measurement (or observation) vector whose components are the individual scalar measurements made at time k, and the random measurement noise is represented by v(k) for k >= 1. Equation (4.15) relates these measurements to the state vector via the observation matrix H(k) for k >= 1. Most of the generality of the recursive model is due to the use of this equation. It should be pointed out that the observation equation contributes no dynamics to the model.

Consider now Eq. (4.17), which represents the covariance propagation and can be expressed as follows, where the superscript T denotes matrix transposition. This equation says that our best estimate at t = k, x̂(k), uses all observations up to and including time k.
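In standard notation, a discrete-time model and filter recursion of the kind referred to here as Eqs. (4.14)-(4.20) are usually written as follows (a generic sketch; Φ denotes the state transition matrix, and the specific numbering and symbols of the book's equations may differ):

    x(k+1) = Φ(k) x(k) + w(k)                                  (state equation)
    z(k) = H(k) x(k) + v(k)                                    (observation equation)

    x̂(k|k-1) = Φ(k-1) x̂(k-1|k-1)                               (state propagation)
    P(k|k-1) = Φ(k-1) P(k-1|k-1) Φ^T(k-1) + Q(k-1)             (covariance propagation)
    K(k) = P(k|k-1) H^T(k) [H(k) P(k|k-1) H^T(k) + R(k)]^(-1)  (gain)
    x̂(k|k) = x̂(k|k-1) + K(k) [z(k) - H(k) x̂(k|k-1)]            (measurement update)
    P(k|k) = [I - K(k) H(k)] P(k|k-1)                          (covariance update)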
Equations (4.14)-(4.20) constitute the discrete-time Kalman filter algorithm. A flowchart illustrating the computational order of Eqs. (4.14)-(4.20) is shown in Figure 4.10, Figure 4.9 shows the timing diagram of the discrete-time Kalman filter, and Figure 4.11 shows how the Kalman filter enters into a typical navigation loop [61]. As noted earlier, in recursive processing the Kalman filter provides an estimate of the state of the system at the current time based on all measurements obtained up to and including the present time. That is, the filter uses all available measurements, regardless of their precision, to improve the accuracy of the overall data system. It should be noted that the measurements z(t) are assumed to be discrete samples in time. In certain applications, the observation vector z is partitioned into components as follows [48]: for example, two simultaneous measurements of position can be incorporated in batch, and thus all measurements are simultaneously incorporated into the estimate.

The conventional sequential discrete-time Kalman filter algorithm can be implemented in real time with reasonable ease by most computer processors, and its algebraic simplicity and computational efficiency have contributed to its widespread application. However, it is well recognized that the Kalman filter can be numerically unstable due to the asymmetric nature of its covariance update equation

    P(k|k) = [I - K(k)H(k)] P(k|k-1).                          (4.19a)

Equation (4.19a) involves the product of a nonsymmetric matrix and a symmetric matrix. Under certain conditions, especially on finite-wordlength computers, finite-precision computation can cause the covariance matrix P(k|k) to become asymmetric (and also lose positive definiteness), which leads to numerical instability and eventual filter divergence. An alternative form of the covariance update equation which is commonly used is

    P(k|k) = P(k|k-1) - K(k)H(k)P(k|k-1).                      (4.19b)

This form is efficient, but in certain applications the subtraction may involve the small difference of large numbers when the measurements are rather accurate, which may lead to loss of positive definiteness. On the other hand, we note that this expression is better behaved numerically when many measurements of accuracy comparable to that of the prior information are processed.
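A symmetric form of the covariance update that avoids the difficulty just described is the Joseph form, stated here as a standard result rather than as one of the book's numbered equations:

    P(k|k) = [I - K(k)H(k)] P(k|k-1) [I - K(k)H(k)]^T + K(k) R(k) K(k)^T.

This form preserves symmetry and nonnegative definiteness for any gain K(k), at the cost of additional computation.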
We noted earlier in this example that the initial state x(0) and initial error covariance matrix P(0) were given. Therefore, in order to start the computational or estimation procedure, an initial estimate of the state, x(0), with its initial error covariance matrix P(0), must be specified. Furthermore, the initial values of the covariance matrices R(k) and Q(k) must be specified. The initial value of the measurement noise covariance matrix R(k) is specified by the characteristics of the particular tracking radar system used; that is, it can be obtained from a model of actual radar noise characteristics. The process noise covariance matrix Q(k) associated with the random forcing function (or process noise vector) w(k) is a function of the estimated state parameters and compensates for the model inaccuracies. That is, a good initial choice can be determined experimentally or from the physics of the problem. For more details on the covariance matrices R(k) and Q(k), see the next section.
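As a minimal illustration of how x(0), P(0), Q, and R enter the recursion, the following FORTRAN fragment runs a one-state, scalar filter with arbitrarily chosen (hypothetical) numerical values; it is a sketch only and is not the radar tracking example discussed in the text.

      PROGRAM SKF
C     MINIMAL SCALAR DISCRETE-TIME KALMAN FILTER SKETCH
C     (HYPOTHETICAL VALUES OF PHI, Q, R, X(0), P(0))
      REAL PHI, Q, R, X, P, Z, K, XP, PP
      INTEGER I
      PHI = 1.0
      Q   = 0.01
      R   = 0.25
      X   = 0.0
      P   = 1.0
      DO 10 I = 1, 5
C       TIME UPDATE (PREDICTION)
        XP = PHI*X
        PP = PHI*P*PHI + Q
C       HYPOTHETICAL MEASUREMENT OF A CONSTANT TRUE STATE OF 1.0
        Z  = 1.0
C       MEASUREMENT UPDATE
        K  = PP/(PP + R)
        X  = XP + K*(Z - XP)
        P  = (1.0 - K)*PP
        WRITE (*,*) I, X, P
   10 CONTINUE
      END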

4.3.1 Real-World Model Errors

The results of Sections 4.2 and 4.3 can be summarized by considering a general example. Specifically, we will consider the real-world (or truth) model navigation errors of an inertial navigation system. The state of, say, a navigation system is commonly described in terms of many different parameters (states) representing error sources that affect system performance. System error dynamics are modeled as a first-order linear vector-matrix differential equation of the form sketched below.
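A common generic form for such an error-state model is the following (a sketch; the symbols F and w are used here for illustration and may differ from those of the book's own equation):

    dx(t)/dt = F(t) x(t) + w(t),

where x is the error-state vector and w is a zero-mean white process noise with covariance Q(t). The covariance of x then propagates as dP/dt = F P + P F^T + Q.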

These statistical properties will now be discussed.

Initial System Error Statistics. The system error state vector xn can be described statistically in terms of the system error covariance matrix Pn, defined as [see also Eq. (4.10)]

    Pn = E{xn xn^T}.

The diagonal elements of Pn represent the variances (or mean-square values) of the system error states, and the off-diagonal elements represent the covariances (or cross-correlations) between pairs of error states. Therefore, the square roots of the diagonal elements represent the rms (root-mean-square) system errors. In particular, for error state xi, the rms error at time tn is given by the relation rms[xi]n = sqrt([Pii]n), where [Pii]n is the ith diagonal element of Pn. In other words, the square roots of the diagonal terms yield the time histories of the standard deviations (or 1-sigma values, equal to rms values if the processes are zero mean) of the errors in the estimates of the quantities of interest. Stated another way, the elements Pii on the diagonal of P are the variances of the state parameters:

    Pii = σi²,

where σi is the standard deviation of the ith parameter [see also Eq. (6.2)].

The error covariance matrix P provides a statistical representation of the uncertainty in the current estimate of the state and the correlation between the individual elements of the state. At time t = 0, Pn can be diagonalized (that is, the error cross-correlations are assumed to be zero initially), and its diagonal elements are set equal to the initial mean-square uncertainties of the system error states. One option is to allow certain diagonal and off-diagonal elements of P to be initialized to their steady-state values. As discussed previously, the real-world model of system error states is used as a baseline in order to evaluate the performance of the Kalman estimation filter. Thus, it is desirable to reduce the time required for the real-world model to compute the steady-state values of the system error covariance matrix P. One way to achieve steady-state conditions faster is to model the initial cross-correlations between key error states. Another way is to set the initial variances of the key error states to their approximate steady-state values. Finally, the time history of P can be generated from a covariance analysis program, depicting the covariance of the true estimation errors caused by the filter model under design. Table 4.1 presents a suggestion on how the system error covariance matrix may be initialized.

Reference [58, pp. 140-142] gives a detailed program listing for computing the state transition matrix, based on the Sylvester expansion theorem. However, a simpler computational FORTRAN program based on Eq. (4.49), which uses a Taylor series expansion of e^(Ft), is presented below. If more accuracy is needed in calculating e^(Ft), then the Pade approximation may appeal to the user.

C     ***********************************************************
C     *  SUBROUTINE: MEXP                                       *
C     *  PURPOSE:    COMPUTE THE MATRIX EXPONENTIAL EXP(F*T)    *
C     *              (THE STATE TRANSITION MATRIX) USING THE    *
C     *              TAYLOR SERIES APPROACH OF EQ. (4.49)       *
C     ***********************************************************
      SUBROUTINE MEXP (F, N, T, KMAX, AE, RE, EXPTA, T1, T2)
C     F      - N X N SYSTEM MATRIX
C     T      - TIME INTERVAL
C     KMAX   - MAXIMUM NUMBER OF SERIES TERMS (ITERATIONS)
C     AE, RE - ABSOLUTE AND RELATIVE ERROR TOLERANCES
C     EXPTA  - ON RETURN, EXP(F*T)
C     T1, T2 - N X N DOUBLE PRECISION WORK ARRAYS
      INTEGER N, KMAX, I, J, L, K
      REAL F(N,N), T, AE, RE
      DOUBLE PRECISION EXPTA(N,N), T1(N,N), T2(N,N)
      DOUBLE PRECISION D1, D2, TK, ER, ERMAX
      D1 = DBLE(AE)
      D2 = DBLE(RE)
C     INITIALIZE THE SUM AND THE CURRENT TERM TO THE IDENTITY MATRIX
      DO 20 I = 1, N
        DO 10 J = 1, N
          EXPTA(I,J) = 0.0D0
          T1(I,J)    = 0.0D0
   10   CONTINUE
        EXPTA(I,I) = 1.0D0
        T1(I,I)    = 1.0D0
   20 CONTINUE
C     ADD SUCCESSIVE TAYLOR TERMS:  TERM(K) = TERM(K-1)*(F*T)/K
      DO 70 K = 1, KMAX
        TK = DBLE(T)/DBLE(K)
        DO 50 I = 1, N
          DO 40 J = 1, N
            T2(I,J) = 0.0D0
            DO 30 L = 1, N
              T2(I,J) = T2(I,J) + T1(I,L)*DBLE(F(L,J))*TK
   30       CONTINUE
   40     CONTINUE
   50   CONTINUE
C       ACCUMULATE THE TERM AND TEST EACH ELEMENT FOR CONVERGENCE
        ERMAX = 0.0D0
        DO 60 I = 1, N
          DO 60 J = 1, N
            T1(I,J)    = T2(I,J)
            EXPTA(I,J) = EXPTA(I,J) + T1(I,J)
            ER = DABS(T1(I,J))/(D1 + D2*DABS(EXPTA(I,J)))
            IF (ER .GT. ERMAX) ERMAX = ER
   60   CONTINUE
        IF (ERMAX .LT. 1.0D0) THEN
          WRITE (*,100) K
          RETURN
        ENDIF
   70 CONTINUE
      STOP 'MAXIMUM NUMBER OF ITERATIONS EXCEEDED!'
  100 FORMAT (' NUMBER OF ITERATIONS IS: ', I3)
      END

C     ***********************************************************
C     *  SUBROUTINE: DRUK                                       *
C     *  PURPOSE:    PRINT AN N X M MATRIX, ROW BY ROW          *
C     ***********************************************************
      SUBROUTINE DRUK (A, N, M)
      INTEGER N, M, I, J
      DOUBLE PRECISION A(N,M)
      DO 1 I = 1, N
        PRINT 101, (A(I,J), J = 1, M)
    1 CONTINUE
      RETURN
  101 FORMAT (1X, 12E11.4)
      END


where

    x(t) = system state vector = [x1(t), x2(t), ..., xn(t)]^T,
    u(t) = input vector = [u1(t), u2(t), ..., um(t)]^T,
    y(t) = output vector = [y1(t), y2(t), ..., yp(t)]^T,
    r(t) = input vector to the controller,
    A(t) = system matrix,
    B(t) = input distribution matrix,
    C(t) = measurement (or observation) matrix,
    D(t) = noise coefficient matrix.

The function of the controller is twofold:

1. Observation: The controller must identify the state of the plant by observation of the output.
2. Control: The controller must guide the state of the plant along a desired trajectory by producing suitable control signals.

It should be pointed out that much of optimal control engineering is based on the following two assumptions:

1. Complete Controllability: The plant may be transferred from an arbitrary initial state x(t0) to any other state x(t1) by applying a suitable control u(t) for a finite time.
2. Complete Observability: The initial state x(t0) of the plant may be identified by observing the output y(t) for a finite time.

In certain control systems, however, the controller needs only partial information about the state of the plant; successful control of the output may often be achieved without complete control of the state.

4.5.1 Observers

From the discussion given in Section 4.5, it is sometimes desired to obtain a good estimate of the state x(t), given a knowledge of the input u(t) and the output y(t).
Specifically. called the observer. The structure of the dynamic observer mirrors the usual state model equations and depend implicitly on the known plant matrices A. In order for this to happen. the state x(t) must asymptotically approach x(t) for large t. The continuous measurements of u(t) and y(t) over the interval [to' t lJ drive a linear dynamical system. a dynamic observer builds around a replica of the given plant to provide an on-line. C.e. since the total state vector is reconstructed. in order for the replica to be a dynamic observer. That is. This problem is referred to as the problem. whose output x(t) approximates (i. from exact measurements of the state vector [47]. and its state (output) should be a good approximation to x(t). From intuition. the error state reconstruction .. deterministic linear system. B. The observers that will be presently discussed are called full-state or identity observers. the idea of an observer was formulated for reconstructing the state vector of an observable. the state is reconstructed from input-output measurements. Another dynamic system. C. B. and D. is to be constructed.164 u(t). where the control law may depend on knowledge of all the system states. but only limited combinations of the states are measurable. and D. continuous estimate of the system state. That is. asymptotically tracks) the original plant (or system) state vector. Its input will depend on y(t) and u(t). A. Observer concepts have found extensive use in the application of deterministic feedback control problems. known as a dynamic observer. THE KALMAN FILTER and the system matrices.

control design engineers insert dynamic observer structures in the feedback loop. however. Therefore. the additional dynamics do not interfere with the desired system behavior. a separate design of the observer can be used to provide the desired observer poles. additional eigenvalues. a feedback system with the desired poles can be designed. and additional natural frequencies. Because of the eigenvalue separation theorem. Consequently. oftentimes only the output variables or some subset thereof are available for measurement. However. the dynamic behavior of the observer does not interfere with the desired eigenstructure of the controlled plant. In order to remedy this situation. proceeding as if all the states were measurable. The eigenvalue separation theorem states that the characteristic polynomial of the feedback system with a dynamic observer equals the product of the characteristic polynomial of the observer and that of the state feedback control without the observer. (12). many feedback design schemes require complete state information. The diagram in Figure 4. creates additional system dynamics. arid use the state estimate in the feedback control law of the type indicated by Eq. Then. The feedback .19 illustrates the need for the eigenvalue separation theorem. Insertion of a dynamic observer in the feedback path.As stated earlier.

4.6 DIVERGENCE

We have seen earlier that the Kalman filter theoretically produces an increasingly accurate estimate of the state parameters as additional data become available and are processed by the filter. Specifically, the magnitude of the estimation errors, as measured by the determinant of the estimation error covariance matrix, is a monotonically decreasing function of the number of observations. However, it should be pointed out that the actual error may become unbounded, even though the error covariance in the Kalman filter algorithm is vanishingly small, causing the filter to diverge. In actual applications, since the system model is usually an approximation to a physical situation, the model parameters and noise statistics are seldom exact; that is, the system model used in constructing the filter differs from the real system that generates the observations. Consequently, the performance of the filter under actual operating conditions can be seriously degraded from the theoretical performance indicated by the state covariance matrix. It is clear, therefore, that an inexact filter model will degrade the filter performance. Hence the filter designer, in designing a filter model, must perform a tradeoff study or evaluate the effect on performance of the various approximations made. In particular, divergence takes place when the covariance matrix becomes too small, or optimistic. Another reason is that the gain K(k) in the Kalman filter algorithm approaches zero too rapidly. As the gain becomes small, subsequent observations are ignored by the filter; the estimate becomes decoupled from the observation sequence and is not affected by the growing observation error. Furthermore, divergence manifests itself by the inconsistency of the residuals (or innovations) in the filter.

The first step in the design of a Kalman filter is the development of a mathematical model of the system. When the filter is performing satisfactorily. Limiting of the error covariance. It should be pointed out here that the innovations or residual sequence was defined as the difference between the actual output of the system and its predicted value.. 4. that is. Lack of complete knowledge of the physical problem. Inaccuracies in the modeling* process used to determine the message or observation model. Both the tuning and the validation of the estimator model can be accomplished by performing simple statistical tests on the innovations sequence. filter is predicting the state accurately. 3. If the filter is operating in a nonoptimal fashion. for reasons that will be discussed shortly. Therefore. Divergence in the filter can be prevented by: 1. Model errors in some sense add uncertainty to the system. (3) to force the matrix P to be symmetric at every recursive step. and P(O) for satisfactory performance. and (4) to avoid deterministic processes in the filter modeling (e.170 THE KALMAN FILTER 4. references [31. For a more detailed discussion of divergence. The major causes of divergence in the Kalman filter can be summarized as follows: 1. 4. Failure of linearization. As with other filtering algorithms. particularly for off-line operations. all the parameters need to be identified. and the question is whether the filter designed with these estimates is performing in an optimal manner. Finally. since the filter designer must adjust Q. Bierman's method employs the covariance factorization P= UDUT. and is equivalent to the Carlson method without squareroot computations. Of the various algorithms that have been developed to compute these updates. and the initial estimate P(O) of the error covariance matrix. Commonly. 5. (2) to avoid propagating the error covariance matrix P in many small steps between measurements.) The tuning of the filter is clearly a hypothesis testing problem. once the state (i. The first type. a random constant). message) and measurement models are determined. Errors in the statistical modeling of noise variances and mean. an inexact filter model will degrade the filter performance. Measurement updating using the U-D factorization preserves the nonnegative definite structure of the covariance matrix P.7 THE U-D COVARIANCE ALGORITHM IN KALMAN FILTERS 171 hood of divergence. This sequence can be viewed as an error or residual in the sense that the predicted value was off from the actual output by that amount.g. 2. then the residuals should be large. This model is developed based on the knowledge of the physical system and statistical data. Its essence lies in updating the state estimate and its covariance matrix. The other type arises from the fact that optimum Kalman filtering requires an exact knowledge of the process noise covariance matrix Q. (Note that the matrices Q and R can be statistically estimated when they are not known. the measurement noise covariance matrix R.e. "Modeling is the process of obtaining a mathematical model of the physical system under consideration. and P(O) are available. Consequently. arises from a mismatch between the estimator model and the real process being estimated. given the previous outputs. Among the techniques recommended for minimizing the roundoff errors are: (1) to avoid using fixed-point arithmetic by using doubleprecision arithmetic. 
4.7 THE U-D COVARIANCE ALGORITHM IN KALMAN FILTERS

The discrete-time Kalman filter has been highly successful in a wide variety of real-time estimation problems. Its essence lies in updating the state estimate and its covariance matrix. Of the various algorithms that have been developed to compute these updates, the so-called U-D algorithm (also known as the U-D covariance factorization algorithm) is perhaps the most attractive. Bierman [9] recognized that the square-root calculations required by the Carlson algorithm are often costly and proposed a square-root-free measurement update scheme. Bierman's method employs the covariance factorization

    P = U D U^T,                                                  (4.70)

where U is a unit upper triangular matrix and D is a positive diagonal matrix, and is equivalent to the Carlson method without square-root computations. Measurement updating using the U-D factorization preserves the nonnegative definite structure of the covariance matrix P. In this section we will derive the U-D algorithm for measurement updates and time updates.

The P = UDU^T decomposition was chosen by Bierman in order to develop a numerically stable filtering algorithm. The U-D algorithms get their name from the fact that they involve the unit upper triangular matrix U (all diagonal elements of U are one, and all elements below the diagonal are zero) and the diagonal matrix D, which uniquely factor the n x n symmetric covariance matrix P as P = UDU^T. The U-D algorithms have several advantages over alternative algorithms. They assure the positive definiteness of the covariance matrices and implicitly preserve their symmetry. Moreover, as square-root algorithms do, they reduce the dynamic range of the numbers entering into the computations. This fact, together with the greater accuracy of the square-root algorithms, reduces the computer wordlength requirements for a specified accuracy; a rule of thumb is that square-root algorithms can use half the wordlength required by conventional algorithms. In addition, the U-D algorithms generally involve significantly less computational cost than the conventional algorithms. When U-D algorithms are used in Kalman filter applications, only the U-D factors are found, and not the covariance matrices themselves; however, any covariance matrix P can be recovered if desired by computing P = UDU^T. Therefore, in order to start the algorithm, the initial covariance matrix P0 must be converted into its U-D factors.
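As a minimal sketch of this conversion (our own illustration, not one of the listings given in this section), the U-D factors of a symmetric positive definite P can be computed column by column, working from the last column to the first, so that the factors needed for each column are already available:

    ! Factor a symmetric positive definite matrix P as P = U*D*U^T, with U unit
    ! upper triangular and D diagonal.  Illustrative sketch; no error checking.
    subroutine udut(p, u, d, n)
      implicit none
      integer, intent(in)  :: n
      real(8), intent(in)  :: p(n,n)
      real(8), intent(out) :: u(n,n), d(n)
      integer :: i, j, k
      real(8) :: acc
      u = 0.d0
      do j = 1, n
         u(j,j) = 1.d0                   ! unit diagonal of U
      end do
      do j = n, 1, -1
         acc = p(j,j)
         do k = j+1, n
            acc = acc - d(k)*u(j,k)**2
         end do
         d(j) = acc                      ! diagonal element D(j)
         do i = 1, j-1
            acc = p(i,j)
            do k = j+1, n
               acc = acc - d(k)*u(i,k)*u(j,k)
            end do
            u(i,j) = acc / d(j)          ! strictly upper triangular element U(i,j)
         end do
      end do
    end subroutine udut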

.

For convenience, we will consider only scalar measurements, and the time index will be dropped; that is, P(ti-) = P- and P(k|k) = U(k|k) D(k|k) U^T(k|k) = P+. The U-D data-processing algorithm can be derived by factoring Eq. (4.29). Using this simplified notation, the measurement update equations in a Kalman filter with scalar measurements are

Figure 4.21  FORTRAN program for the U-D measurement update algorithm (SUBROUTINE MEAS). For a scalar observation y with measurement noise variance R, the subroutine performs the covariance update P+ = P- - K*H^T*P-, with gain K = P-*H/(H^T*P-*H + R), and the state update x+ = x- + K*(y - H^T*x-), working directly on the U-D factors of P.

Figure 4.22  FORTRAN program (SUBROUTINE PCOMPT) for computing the full covariance matrix P = UDU^T from its U-D factors, with the diagonal matrix D assumed stored along the diagonal of U.

Figure 4.23  FORTRAN program (SUBROUTINE PVAR) for computing the diagonal elements of P = UDU^T, that is, the variances of the state estimates.

Before we leave this section, the Cholesky factorization (or decomposition) algorithm will be given for the interested reader. The Cholesky decomposition algorithm can be stated as follows: if P is a symmetric, positive definite n x n matrix, it can be uniquely factored into the product of a triangular matrix and its transpose, for example P = S S^T with S lower triangular [43].
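A minimal sketch of the Cholesky algorithm in the lower triangular convention P = S S^T (our own illustration; the text's listing is not reproduced here):

    ! Cholesky factorization of a symmetric positive definite matrix: P = S*S^T,
    ! with S lower triangular.  Illustrative sketch; no error checking.
    subroutine cholesky(p, s, n)
      implicit none
      integer, intent(in)  :: n
      real(8), intent(in)  :: p(n,n)
      real(8), intent(out) :: s(n,n)
      integer :: i, j, k
      real(8) :: acc
      s = 0.d0
      do j = 1, n
         acc = p(j,j)
         do k = 1, j-1
            acc = acc - s(j,k)**2
         end do
         s(j,j) = sqrt(acc)              ! diagonal element of the factor
         do i = j+1, n
            acc = p(i,j)
            do k = 1, j-1
               acc = acc - s(i,k)*s(j,k)
            end do
            s(i,j) = acc / s(j,j)        ! subdiagonal element of column j
         end do
      end do
    end subroutine cholesky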

Figure 4.25  FORTRAN program for the U-D time update algorithm (SUBROUTINE TIME). The subroutine propagates the U-D factors of P+ through the covariance time update P- = A*P+*A^T + G*Q*G^T (with the process noise covariance Q assumed diagonal) and propagates the state estimate through x- = A*x+.

The integers n and m are the dimensions of the state vector x and the noise vector w, respectively. Table 4.3 gives a comparison of the number of multiplications required by the U-D and the conventional algorithms to compute the Kalman gain k for measurement updates; the conventional algorithm is the direct implementation of Eqs. (4.71)-(4.73). Finally, we note that if the U-D algorithms are used and the full covariance matrix is desired, the extra computation P = UDU^T must be performed. If only the diagonals of P (that is, the variances of the state estimates) are desired, approximately n^2 multiplications are required.

4.8 THE EXTENDED KALMAN FILTER

We have seen in Section 4.3 that the conventional discrete-time Kalman filter is a recursive data-processing algorithm, which is usually implemented in software on a digital computer. Specifically, it combines all available measurements, plus prior knowledge about the system and the measuring devices, to produce an estimate of the state x(t) in such a manner that the mean-square error is minimized statistically. During propagation it advances the estimate in such a way as to again maintain optimality, and at update time it produces updated state estimates and covariance matrices. The conventional Kalman filter performs these tasks for linear systems and linear measurements in which the driving and measurement noises are assumed to be mutually uncorrelated, zero-mean, white, and Gaussian. In nature, however, most physical problems or processes are nonlinear. Consequently, nonlinear systems must be linearized (that is, approximated) before the linear filter theory can be applied, and the conditions for acceptability of the solution are, so to speak, vague. The problem of identification of nonlinear systems is divided into identification of (1) deterministic systems (that is, noise-free systems) and (2) stochastic systems (implying the existence of plant or system noise and observation noise). Systems of the latter kind can be solved by means of the extended Kalman filter (EKF); historically, the problem of combined state and parameter estimation was originally posed as a nonlinear state estimation problem using the EKF. Since this requires a linear approximation of a nonlinear system about the current estimate, divergence may result if the initial estimate is poor. Furthermore, not much is known about the convergence properties of the extended Kalman filter.
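As an illustrative summary (the standard discrete-time EKF recursion written in our own notation; it is not quoted from this section), the filter relinearizes the nonlinear model about the latest estimate at each step:

    x_{k+1} = f(x_k) + w_k, \qquad z_k = h(x_k) + v_k,
    \hat{x}_k^- = f(\hat{x}_{k-1}^+), \qquad P_k^- = F_{k-1} P_{k-1}^+ F_{k-1}^T + Q_{k-1}, \qquad F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1}^+},
    K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}, \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_k^-},
    \hat{x}_k^+ = \hat{x}_k^- + K_k \left[ z_k - h(\hat{x}_k^-) \right], \qquad P_k^+ = (I - K_k H_k) P_k^-.

Because the Jacobians F and H are evaluated at the estimate rather than at the true state, the computed covariance is only approximate, which is one reason the tuning adjustments discussed below are needed.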

.

.

.


Adding noise increases the uncertainty of the filter-assumed state dynamics, to compensate for nonlinear behavior or to keep the Kalman filter gains in that channel from going to zero. Usually, the states in the filter needing this type of noise adjustment are those that are not part of the measurement equation. For example, in an inertial navigation system, some of the states that are not part of the measurement equation need a small amount of added noise to compensate for nonlinear dynamics. If the states are not directly part of the measurement equation and the dynamics equations do not totally describe the true state behavior, the extended Kalman filter has difficulty estimating these states.

2. Dynamics noise needs to be added to a filter model when the filter variance of a state goes negative. This numerical difficulty is a normal occurrence when the range of numbers the filter estimates is large. The problem of negative variances arises from the limited numerical precision of computers, which must multiply high-order matrices together. To alleviate this problem, a small amount of added noise in the filter will keep the variance of the state positive and will not degrade the filter's state estimate.

3. Dynamics noise is added because of filter order reduction. States which are eliminated do not appear directly in the measurement equations, but they impact the states which are part of the measurement equations. Therefore, in order to compensate for the eliminated states, the noise they affect is increased. The noise increase is small, since the eliminated states all have small magnitudes, but it is necessary to ensure a well-tuned filter.

From the above discussion, it is clear that the measurement covariance matrix R(ti) must be adjusted upward in the full-order filters to increase the uncertainty in the measurement equation due to using linearized equations to model nonlinear systems. It should be pointed out that the EKF has access only to measurements that are a combination of states, and not to the states themselves. Increasing the measurement noise covariance matrix is also called for by filter order reduction; that is, when states that are part of the measurement equation are eliminated from the filter model, it is necessary to increase R(ti). Finally, it should be noted that these procedures are not universal to all Kalman filter tuning. They are, however, basic reasons why Kalman filters need to be tuned, and the engineer involved in Kalman filter tuning needs to be familiar with them.

4.9 SHAPING FILTERS AND COLORED NOISE

Up to now we have discussed systems whose noise processes are white, uncorrelated noise sequences. However, there are situations in the real world where white Gaussian noise may not be adequate to model the system. In this section we will discuss the basic problem of a system (or plant) driven by colored, or time-correlated, noise. The concept of a shaping filter has been used for many years in the analysis of physical systems, because it allows a system with a random input to be replaced by an augmented system (the original system plus the shaping filter) excited only by white noise. The shaping filter greatly simplifies the mathematics and has found wide application in the construction of optimum estimators via both the Wiener and Kalman techniques. The shaping-filter approach is generally applied in problems where the mean-square values of outputs (e.g., mean-square errors in estimation or control) are of prime importance. In such cases, only second-order statistics are important, so that complex input processes can sometimes be represented by the output of a relatively simple shaping filter driven by white noise.
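As an example of the idea (a standard first-order Gauss-Markov sketch written in our own notation, not an equation taken from this section), an exponentially correlated noise n(t) with correlation time \tau and steady-state variance \sigma^2 can be generated by the shaping filter

    \dot{n}(t) = -\frac{1}{\tau}\, n(t) + w(t), \qquad E\left[ w(t)\, w(s) \right] = \frac{2\sigma^2}{\tau}\,\delta(t - s),

where w(t) is white noise. Augmenting the plant state vector with n(t) then yields a combined system driven only by the white noise w(t), to which the standard Kalman filter equations apply directly.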

4.10 CONCLUDING REMARKS

This chapter has covered various aspects of estimation theory. In particular, the subject of estimation discussed in this chapter is essential for the analysis and control of stochastic systems. Estimation theory can be defined as a process of selecting the appropriate value of an uncertain quantity, based on the available information. Specifically, the need for sequential estimation arises when the measurement data are obtained on line as the process evolves and the measurements arrive one at a time. State estimation based on the Kalman-Bucy filter presents only a limited aspect of the problem, since it yields optimal estimates only for linear systems and Gaussian random processes. Approximation techniques can usually be extended to such application contexts with observational and computational noise. The stochastic processes involved are often modeled as Gaussian ones to simplify the mathematical analysis of the corresponding estimation problems. If the process is not Gaussian, the estimator is still optimal in the mean-square sense, but not necessarily most probable. In the case of nonlinear systems, linearization techniques have been developed so that the Kalman-Bucy filter can still be applied; it should be pointed out, however, that this approach is of value only for small deviations from the point of linearization, and no guarantee of the so-called approximate optimality can be made. As mentioned in Chapter 1, the Kalman-Bucy filter, an algorithm which generates estimates of the variables of the system being controlled by processing available sensor measurements, is one of the most widely used estimation algorithms. This is done in real time by using a model of the system and using the difference between the model predictions and the measurements. Then, in conjunction with a closed-loop algorithm, the control signals are fine-tuned in order to bring the estimates into agreement with the nominal performance, which is stored in the computer's memory. Recent research indicates that estimation theory is applicable not only to aerospace problems, but also to the automotive industry, the chemical industry, and other fields.

.

.

.

.

.

.

CHAPTER 5  LINEAR REGULATORS

5.1 INTRODUCTION

Optimal control of linear deterministic systems with quadratic performance criteria and bounded controls has been studied extensively. For more than a quarter of a century now, much attention has been focused on optimizing the behavior of systems. Historically, the theory and design of optimal control systems have a common mathematical foundation in the calculus of variations, the branch of mathematics concerned with finding trajectories that maximize or minimize a given functional. Many problems in modern system theory may be stated simply as extreme-value problems, such as maximizing the range of a rocket, determining optimal flight paths, and minimizing the error in estimation of the position of a vehicle. Finding the control which attains the desired objective while maximizing (or minimizing) a defined criterion constitutes the fundamental problem of optimization theory. Modern optimal control techniques such as the minimum principle of Pontryagin and the dynamic programming of Bellman are derived from the calculus of variations; in particular, the Bolza formulation in the calculus of variations leads to the proof of the Pontryagin minimum principle. Moreover, by adding a sufficient number of variables, almost all solvable problems in optimal control can be solved by the calculus of variations. Optimization techniques have been investigated extensively in the United States by Athans, Bellman, Bryson, Kalman, Leitmann, Miele, and Wiener, among others, and in Russia by Boltyanskii, Butkovskii, Fel'dbaum, Gamkrelidze, Krasovskii, Letov, Mishchenko, and Pontryagin.

In solving optimal control problems, the control engineer often sets up a mathematical performance criterion or index and then tries to find solutions that optimize that particular measure. In general, this performance index may take a variety of forms containing constraints or penalties on the control energy expended and on the deviations of the states from the desired values. Time-optimal, fuel-optimal, and energy-optimal problems are a class of optimal control problems that the control engineer is often called on to solve. Although several different approaches have been developed, and several methods have been advanced which help find solutions to certain classes of problems, all these methods are related to the calculus of variations. As we shall see later in this chapter, a full optimization study involves finding the optimum control law in the presence of stochastic disturbances. For linear problems with quadratic performance criteria, the separation theorem* may be invoked, which decouples the full stochastic control problem into two separate parts: (1) the control portion of the decoupled problem solves for the optimum deterministic controller with a quadratic performance measure, and (2) the remaining portion of the problem is that of a stochastic estimator which uses the noisy and incomplete measurements of the states of the system to give the least-square-error estimates of the system states. These estimates are then used as if they were known exactly by the optimum controller.

*Simply stated, the separation theorem obtained its name from the ability of such systems to perform state estimation and optimal control separately. The separation theorem assures that the composite system of controller and estimator will be the jointly optimum stochastic controller.

In missile guidance, whether it is for air-to-air or air-to-ground application, the approach normally taken in designing the missile guidance system is to model those missile functions that respond to attitude commands. For example, a missile may follow a preassigned trajectory specified by the mission planner, but be diverted in flight by the guidance system's update commands.

Deterministic models possess the following characteristics: (1) there are no plant (e.g., aircraft) disturbances, (2) plant and output variables can be measured exactly, and (3) plant and controller dynamics are known exactly. In this section we will briefly discuss the optimal linear-quadratic-regulator (LQR) problem, assuming that we have exact and complete knowledge of all the state variables of the plant (system), and discuss its solution in terms of the solution to the matrix Riccati differential equation. The general regulator problem has been treated by many authors [2, 28, and 58]. In recent years, numerical methods for approximating the gain have relied on approximating the solution to the associated Riccati equation; the feedback law generally arises in conjunction with an operator Riccati differential or integral equation. In particular, we will discuss the deterministic linear-quadratic state regulator and develop the optimal feedback control law in a framework which is readily applicable by the control engineer.
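For orientation (a sketch of the standard continuous-time LQR statement in our own notation; the chapter's own development and equation numbering are not reproduced here), the plant and the quadratic performance index are

    \dot{x}(t) = A\, x(t) + B\, u(t), \qquad J = \tfrac{1}{2}\int_{0}^{T} \left[ x^{T}(t) Q\, x(t) + u^{T}(t) R\, u(t) \right] dt, \qquad Q \ge 0,\; R > 0,

and the optimal control is the linear state-feedback law

    u(t) = -R^{-1} B^{T} P(t)\, x(t),

where P(t) is the solution of the matrix Riccati differential equation

    -\dot{P}(t) = A^{T} P(t) + P(t) A - P(t) B R^{-1} B^{T} P(t) + Q,

integrated backward in time from the terminal weighting P(T).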
5.2 THE ROLE OF THE CALCULUS OF VARIATIONS IN OPTIMAL CONTROL

As stated in the introduction (Section 5.1), modern optimal control theory has its roots in the calculus of variations. In particular, modern optimal control techniques such as the minimum principle of Pontryagin and the dynamic programming of Bellman are based on and/or inspired by the classical calculus of variations. One of the earliest instances of a variational problem is the problem of the brachistochrone (i.e., the curve of quickest descent), first formulated and solved by John Bernoulli (1696). The basic problem in the calculus of variations is to determine a function such that a certain definite integral involving that function and certain of its derivatives takes on a maximum or minimum value. Furthermore, the elementary part of the theory is concerned with a necessary condition (generally in the form of a differential equation with boundary conditions) which the optimizing function is required to satisfy. In the presentation that follows, we will discuss briefly the basic concepts of the calculus of variations necessary for understanding and solving the type of problems encountered in optimal control theory; we will forgo much of the mathematical rigor. In particular, the calculus of variations will be discussed by means of the Euler-Lagrange equation and the associated transversality conditions. For the reader who wishes to obtain a deeper insight into the calculus of variations, references [11 and 34] are recommended. We begin our discussion by noting that the simplest single-stage process with equality constraints is to maximize or minimize a cost function (or performance index) of the form

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

.

The equation for the optimal control, Eq. (5.67), implies that the optimal control at each stage is a linear combination of the states; therefore, the optimal control can be implemented by feedback of the states. Moreover, in solving the discrete-time linear regulator problem in a digital computer, the control designer may reduce the number of arithmetic operations by making certain substitutions. For example, one may let the matrix F(N - K) be a constant matrix as N approaches infinity; that is, for a large number of stages the feedback gain may be taken at its steady-state value. Since the calculations are usually done in a digital computer, it is conceivable that the discrete version of the optimal regulator problem would be computationally more efficient as well as more accurate.

5.5 OPTIMAL LINEAR-QUADRATIC GAUSSIAN REGULATORS

5.5.1 Introduction

In the 1960s, modern optimal control theory showed promise in application to flight control problems. Although the deterministic approach admits errors in modeling via feedback, it does not take into account the many disturbances, such as wind gusts acting on an aircraft (the plant) or target glint noise present in the angle tracking signal of an interceptor missile. Consequently, when noise is a significant part of the system, a linear-quadratic Gaussian (LQG) regulator must be used. Many engineering applications require that the controller be robust.* Robustness implies that the controller provides adequate (i.e., stable closed-loop) performance in the face of uncertainties and over a wide range of operating conditions and system parameters. A robust controller will provide stable closed-loop performance even when states of the real-world system have been ignored in the design model; in particular, robustness becomes a concern when controllers are designed using a reduced-order model. Feedback controllers built to stabilize systems must be able to keep the overall system stable in the presence of external disturbances, modeling errors, and changes in the operating environment. For example, a very important requirement of, say, a flight control system design is that it be robust. The LQR has desirable robustness properties in its guaranteed gain and phase margins of at least -6 dB to infinity and of plus or minus 60 degrees, respectively. In this section we will consider the continuous-time LQG regulator formulation for the design of controllers when uncertainties in the state, input, and measurement matrices are present.

*In Kalman filtering, robustness is the ability of the filter to cope with adverse environments and input conditions.

We will now present the equations used in designing optimal linear-quadratic Gaussian controllers for systems that are modeled as linear, time-invariant systems driven by zero-mean, Gaussian white noise, subject to quadratic costs defining the optimality criteria, using noisy measurements and allowing for process noise entering the plant, thereby corrupting the state equations.
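As a sketch of this formulation (the standard LQG structure written in our own notation; these are not the chapter's numbered equations), the model and cost are

    \dot{x} = A x + B u + w, \qquad z = H x + v, \qquad E[w(t) w^{T}(s)] = Q_w \delta(t-s), \quad E[v(t) v^{T}(s)] = R_v \delta(t-s),
    J = E\left\{ \tfrac{1}{2}\int_{0}^{T} \left[ x^{T} Q x + u^{T} R u \right] dt \right\},

and, by the separation theorem, the optimal controller feeds back the Kalman-Bucy estimate through the deterministic LQR gain:

    u = -R^{-1} B^{T} P\, \hat{x}, \qquad \dot{\hat{x}} = A \hat{x} + B u + K (z - H \hat{x}), \qquad K = \Sigma H^{T} R_v^{-1},

where P and \Sigma are the solutions of the control and filter Riccati equations, respectively.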
5.5.2 Properties of the LQG Regulator

In order to implement the deterministic LQR discussed earlier in this chapter, it is necessary to measure exactly all the states of the system under design. This is obviously not possible in the real world. What one can measure is outputs, by means of sensors in the system. These sensors (e.g., a missile seeker) have noise associated with them, which means that the measurements are not perfect. Moreover, real systems will always have some type of noises or biases affecting them. Since, as was stated above, the measurements of all states are generally not available, we need a method to reconstruct the states and produce estimates of them; that is, we must invoke stochastic estimation theory. In addition, it becomes necessary to include a filter or observer (the Kalman filter is an observer) in the controlled system to estimate the states. One disadvantage in such an application is that the resulting controllers require full-state feedback. Consider the continuous-time stochastic linear system model governed by the known linear differential equation [2, 28]

.

.

.

.

.

.

.

.

.

Dynamic programming is a computational technique which extends the optimal decision-making concept to sequences of decisions which together define an optimal policy (or control law) and trajectory. It is useful in solving multistage optimization problems in which there are only a small number of possible choices of the control at each stage and in which no derivative information is available. In the method of dynamic programming, an optimal policy is found by employing the intuitively appealing concept called Bellman's principle of optimality. We will begin this section with a discussion of Bellman's principle of optimality.

The principle of optimality can be stated as follows [6]: The optimal control sequence for an N-step process {u0*, ..., u*(N-1)} is such that, whatever the value of the first choice u0* and hence the value x1* is, the choice of the remaining N - 1 values in the sequence {u1*, ..., u*(N-1)} must constitute the optimal control sequence relative to the state x1* (which is now viewed as an initial state). Stated more simply, the principle of optimality says that the optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

The principle of optimality can best be visualized by considering a directed (or oriented) network. The network consists of links of given lengths, joined together at nodes (or points). The length of a link may represent the distance between its terminal nodes, the cost of transportation between the nodes, the time taken to perform a task, and so on. The network is said to be oriented because the admissible paths through it are in the same general direction, from left to right. A decision can be defined as a choice among the alternative paths leaving a given node. Given such a directed network, we would like to find the path from state a to state l with minimum cost, a cost being associated with each segment; if we consider the lengths in the diagram to be travel times, then we are looking for the minimum-time path. A worked numerical sketch of this backward recursion is given after this paragraph.

The basic result of dynamic programming is a nonlinear partial differential equation known as Bellman's equation. This equation can be derived heuristically, provided certain assumptions are made; with the aid of the principle of optimality, a candidate for an optimal control function is found as a function of the state trajectory. One of the advantages of dynamic programming lies in the insight it provides into the properties of the optimal control function and the optimal performance index. On the other hand, the Hamilton-Jacobi equation is, in general, quite difficult to solve. For more details on dynamic programming and the Hamilton-Jacobi equation, the reader is referred to references [2, 19, and 12].
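As a small worked sketch of the backward recursion implied by the principle of optimality (the network, node numbering, and link costs below are hypothetical, chosen only for illustration):

    ! Bellman's backward recursion on a small directed, staged network.
    ! Node 1 is the start, node 6 the terminal node; "big" marks a missing link.
    program dp_network
      implicit none
      integer, parameter :: n = 6, big = 999
      integer :: cost(n,n), j_opt(n), next_node(n)
      integer :: i, j, node, trial
      cost = big
      cost(1,2) = 2;  cost(1,3) = 5        ! stage 1 links (hypothetical costs)
      cost(2,4) = 4;  cost(2,5) = 1        ! stage 2 links
      cost(3,4) = 2;  cost(3,5) = 6
      cost(4,6) = 3;  cost(5,6) = 4        ! links into the terminal node
      j_opt = big
      next_node = 0
      j_opt(n) = 0                          ! cost-to-go of the terminal node
      do i = n-1, 1, -1                     ! backward sweep over the nodes
         do j = i+1, n
            if (cost(i,j) .lt. big) then
               trial = cost(i,j) + j_opt(j)
               if (trial .lt. j_opt(i)) then
                  j_opt(i) = trial          ! best cost-to-go from node i
                  next_node(i) = j          ! optimal successor of node i
               end if
            end if
         end do
      end do
      print *, 'minimum cost from node 1 to node', n, '=', j_opt(1)
      node = 1
      do while (node .ne. 0)
         print *, 'node', node              ! prints the optimal path 1, 2, 5, 6
         node = next_node(node)
      end do
    end program dp_network

Because each node's cost-to-go is built only from the already-computed costs of its successors, the N-stage search reduces to a sequence of single-stage minimizations rather than an enumeration of every path.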

.

.

.

.

.

.

5.8 CONCLUDING REMARKS

In this chapter we have discussed linear regulators with quadratic costs, the calculus of variations, Pontryagin's minimum principle, and dynamic programming and the Hamilton-Jacobi-Bellman equation.

As we saw, the combination of quadratic cost functions and linear state equations provides an important class of problems for which the application of Pontryagin's minimum principle leads to a set of linear equations. The general form of the linear-state problems with quadratic costs that we studied has an n-dimensional state vector x and an r-dimensional control vector u, and it is basically unconstrained in both x and u; the state equation is linear in both the state and control variables.

The technique presented in this chapter for solving the deterministic linear optimal regulator can be used in conjunction with frequency-domain methods in the analysis and design of missile guidance and aircraft flight control systems. As mentioned in Section 5.3 in connection with the missile interception, the LQR, used as a guidance law, offers optimal performance and a mature technology. Thus, the LQR can command the interceptor missile to the target (evader) along a precalculated or nominal trajectory by making use of a stored nominal state and control history and a set of precalculated varying steering gains. When the LQR algorithm is implemented in an onboard computer, at each guidance update the LQR will calculate the difference between the vehicle's estimated position and velocity (both of which may be obtained from an onboard inertial navigation system) and the vehicle's precomputed position and velocity. When this difference is multiplied by the steering gains, it produces corrections to the nominal control commands. Consequently, the control corrections are subtracted from the nominal controls to produce steering commands, which in turn are sent to the control system. The algorithm therefore uses full missile-state feedback provided by the inertial navigation system. Above all, the algorithm must be able to calculate commands for complex maneuvers, adapt to varying mission requirements, respond to in-flight perturbations, and interface with other software subsystems. Real-time implementation of a guidance law demands that these requirements be met.

From a practical point of view, the methods of the calculus of variations described in this chapter are limited basically to control processes that are (1) linear, (2) characterized by quadratic cost criteria, and (3) characterized by low dimensionality. There is clearly a need for methods that will overcome some of these limitations. Bellman's well-known dynamic programming, an ingenious method of computer programming, has gained considerable popularity as a complement to the classical methods, and it attained its greatest practical significance in conjunction with the modern digital computer. Since the digital computer accepts only discrete data or data sequences, it becomes necessary when using Bellman's method to discretize the otherwise continuous control processes. When a continuous control process is viewed in this way, it takes on the characteristics of what is referred to as a multistage decision process. In selecting the optimum policy, the obvious approach would be to compute the total cost index J along all possible path combinations and then choose the best one. The essential feature of dynamic programming, however, is that it can reduce the N-stage decision process to a sequence of N single-stage decision processes, enabling us to solve the problem in a simple iterative manner on a computer. This reduction is made possible by use of the fundamental principle of optimality.

We now summarize some of the important features of optimal control that we have discussed in this chapter:

1. There exists a large class of variational-type problems which are of great engineering importance but which cannot be handled by the Euler-Lagrange theory. The Pontryagin minimum principle, on the other hand, applies to a much wider class of optimal control problems: there is no restriction to linear state equations, and many different cost functions can be successfully handled. The proof of Pontryagin's minimum principle is quite difficult, and only a brief development was given here. It should be pointed out, however, that Pontryagin's minimum principle provides only necessary conditions that an optimal control solution, if it exists, must satisfy; there is, in general, no guarantee that a given problem will have an optimal solution.
2. The greatest value of optimal control theory is that it shows us the ultimate capabilities of a specific system subject to specific constraints. It gives us information about how this ultimate achievement can be reached, with the overall cost determining its usefulness.
3. Implementation of optimal control theory in hardware places a great burden on the computational equipment. Suboptimal control may be an acceptable compromise and perhaps represents a more attractive solution from an engineering point of view.

Table 5.2 summarizes the performance indices most frequently used in optimal control theory.

.

.

.

CHAPTER 6  COVARIANCE ANALYSIS AND SUBOPTIMAL FILTERING

Such error models may contain more than one hundred of these state variables; most of them, in general, contribute little to the performance and thus are not needed for most accuracy analyses. For error analysis, the system analyst has used two approaches: (1) covariance analysis and (2) Monte Carlo simulation. These methods will now be discussed.

At this point it is appropriate to define what we mean by simulation. In systems analysis, simulation is the verification of the accuracy of the modeling dynamics: since many physical processes have certain behavioral responses, one verifies the model by seeing whether its responses coincide with the observed responses of the process. Thus, simulation studies can determine the degree to which the model approximates a known physical process and hence verify its accuracy. Simulation studies are commonly used in the design, testing, and evaluation of feedback controllers, whereby the actual development of the control algorithm, the evaluation of the design, and the optimization of parameters, as well as the evaluation of performance in the presence of system disturbances, can all be addressed.

In essence, the objective of a covariance analysis and/or simulation is to develop a mission error analysis tool in order for the analyst to:

1. assess the relative critical aspects of individual error components, so as to aid in the design and construction of sensor hardware;
2. assess mission performance based on the mission trajectory;
3. serve as a software design tool in determining which error states should be modeled in the Kalman filter;
4. aid in the design of the navigation system with regard to sensor error calibration, the system error budget, the alignment filter, and any navigational aids that may be used, and in the generation of cost-performance tradeoff studies.

The covariance simulation mechanizes a real-world error state model (normally user-specified) which is a subset of the total error sources. Specifically, in simulations of unaided-system performance, the simulation extrapolates the covariance matrix of the real-world error states, utilizing the system state transition matrix. On the other hand, in simulations of aided-system performance, a design filter error state vector is defined (again, user-specified), which is a subset of the true-world error sources to be implemented in the simulated onboard-computer Kalman filter mechanization. For optimal filter performance, all of the real-world error sources must be included in the design filter state vector.

We will now recapitulate the covariance propagation and update equations that are used in covariance analysis. In linear covariance analysis (for example, of multisensor Kalman-integrated navigation systems), the error dynamics can be represented to good approximation by the first-order variational equation
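In standard notation (an illustrative sketch; the text's own equation numbers are not reproduced here), such a variational error model and the associated covariance relations take the form

    \delta\dot{x}(t) = F(t)\,\delta x(t) + G(t)\, w(t),
    \dot{P}(t) = F(t) P(t) + P(t) F^{T}(t) + G(t) Q(t) G^{T}(t)

between fixes (or, in discrete form, P_{k+1}^- = \Phi_k P_k^+ \Phi_k^T + Q_k), with the update at an external measurement or fix given by

    K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}, \qquad P_k^+ = (I - K_k H_k)\, P_k^-.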

.

The covariance propagation block involves the time solution of a set of navigation error equations which define how the navigation error states propagate during flight between one update point and the next. The system dynamics matrix for inertial system errors is typically a function of navigation parameters such as position, velocity, and acceleration. These parameters can be calculated from an aircraft trajectory model that can simulate maneuvers such as takeoff, climb, cruise, turns, great-circle routes, and landing. Navigation data can be produced by a time-history simulation. Three of the functions that produce the basic navigation data are as follows: (1) the trajectory generator, (2) the covariance propagation, and (3) the covariance update. Figure 6.2 depicts a functional flow diagram of the above functions. This loop solves the vehicle flight-path equations in the trajectory generator to provide the vehicle position, velocity, and acceleration data required by the covariance propagation and covariance update blocks. Other functions, such as input-output and data handling, will also be required for the navigation process.

The covariance update function mechanizes a set of equations which describes the improvement in the navigation error estimates resulting from incorporating an external measurement or fix. The time difference between the update time and the time of measurement is inserted into the system state transition matrix to propagate the measurements to the current time (update time); thus the covariance P(t) is updated at the measurement time, and the effects of this update are propagated to the current time. State corrections are then made at the current time. The above calculations are performed for each measurement that is made. Following the update, the updated covariance and trajectory state vector are stored for restarting the updating process; covariance matrix elements such as P(1,1), P(1,2), and P(2,2) are stored navigation data.

For most large problems the system dynamics matrix F(t) and the noise sensitivity matrix G(t) are sparse, and this can be used to reduce the computational effort. Important to the Runge-Kutta numerical integration algorithm is the step size T. If T is chosen too large, the accuracy will be poor, while if it is chosen too small, the computation time is increased and roundoff errors become pronounced. The discussion presented so far in this chapter can be summarized by noting that covariance analysis provides an effective tool for conducting a tradeoff analysis among various designs. In this connection, the Runge-Kutta method is presented below for the interested reader [24].
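A minimal sketch of such an integration step follows (our own illustration, not the formulas of reference [24]); the two-state F, the product G Q G^T, and the step size used here are hypothetical placeholders:

    ! One classical fourth-order Runge-Kutta step for Pdot = F*P + P*F^T + G*Q*G^T.
    program cov_rk4
      implicit none
      integer, parameter :: n = 2
      real(8) :: f(n,n), gqgt(n,n), p(n,n), k1(n,n), k2(n,n), k3(n,n), k4(n,n)
      real(8) :: t
      t = 0.1d0                                           ! step size T
      f    = reshape([0.d0, 0.d0, 1.d0, 0.d0], [n,n])     ! hypothetical F (column order)
      gqgt = reshape([0.d0, 0.d0, 0.d0, 0.01d0], [n,n])   ! hypothetical G*Q*G^T
      p    = reshape([1.d0, 0.d0, 0.d0, 1.d0], [n,n])     ! initial covariance P(0)
      k1 = pdot(p)
      k2 = pdot(p + 0.5d0*t*k1)
      k3 = pdot(p + 0.5d0*t*k2)
      k4 = pdot(p + t*k3)
      p  = p + (t/6.d0)*(k1 + 2.d0*k2 + 2.d0*k3 + k4)
      p  = 0.5d0*(p + transpose(p))                       ! enforce symmetry each step
      print *, 'P after one step:', p
    contains
      function pdot(pc) result(d)
        real(8), intent(in) :: pc(n,n)
        real(8) :: d(n,n)
        d = matmul(f, pc) + matmul(pc, transpose(f)) + gqgt
      end function pdot
    end program cov_rk4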

.

.

where the subscript j corresponds to the jth state of the state error vector. As mentioned earlier, in a Monte Carlo simulation the individual samples of a stochastic process can be generated by using a random number generator. A simple, efficient random-number-generator subroutine, URAND (uniform random number generator), is presented below [32].

Random Number Generator (URAND) Program Listing: FUNCTION URAND(IY), a uniform random number generator based on the linear congruential method. The returned values lie in the interval (0,1); the integer argument IY should be initialized to an arbitrary integer prior to the first call, and the calling program should not alter the value of IY between subsequent calls.

On the nth call of URAND the generator computes y(n+1) = a*y(n) + c (mod m), where the values of m (a power of 2), a, and c are computed automatically upon the initial entry; in the source code a and c are called IA and IC, respectively. The resulting value of y(n+1) is returned through the parameter IY and is converted into a floating-point number in the interval (0,1), which is returned as the value of URAND.
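A minimal free-form sketch of the same idea follows (a linear congruential generator; the modulus, multiplier, increment, and seed below are illustrative choices, not the constants computed by URAND):

    module lcg_mod
      implicit none
      integer(8) :: seed = 12345_8                        ! analogous to IY; set before first use
    contains
      function urand01() result(u)
        real(8) :: u
        integer(8), parameter :: m = 2147483648_8         ! 2**31 (illustrative modulus)
        integer(8), parameter :: a = 1103515245_8, c = 12345_8
        seed = mod(a*seed + c, m)                         ! y(n+1) = a*y(n) + c (mod m)
        u = real(seed, 8) / real(m, 8)                    ! scale to the interval (0,1)
      end function urand01
    end module lcg_mod

    program lcg_demo
      use lcg_mod
      implicit none
      integer :: i
      do i = 1, 3
         print *, urand01()
      end do
    end program lcg_demo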

Finally, it should be pointed out that Monte Carlo simulations are inefficient and costly. If the models are the same as those used for covariance analysis, the Monte Carlo results should, in the limit, match the covariance analysis results. As an example, we will show how the covariance equation (6.6) is used in practical applications. Consider the theoretical error models used in the statistical analysis of inertial navigation systems. Specifically, we will consider the following models: (1) angular random walk, which is often used to model ring laser gyros; (2) Markov (or exponentially correlated) drift rate; and (3) drift-rate random ramp. Each model will be analyzed to the point of predicting the time behavior of the angle variances. (For more details on these statistical processes, see Section 2.9.)

In filter design and performance analysis of multisensor systems, the first step is to define error models for each of the sensor systems (e.g., inertial, Doppler radar, Loran, Omega, Global Positioning System, etc.). The system error propagation equations describe mathematically how each of the error sources (in the sensor error models) generates a system error; note that the differential equations describing system error propagation are excited, or driven, directly by the individual sensor errors. Since this procedure utilizes the expected rms (root-mean-square) value of each error source, the result is the rms system response to that error source. The rss (root-sum-square) of the system responses over all error sources then determines the system performance. This is the truth model of system performance, and it represents actual system performance to the extent that the error model represents the magnitude and behavior of the actual errors.

The second step is to define the system performance. The Kalman filter definition consists of selecting the state variables and specifying the dynamics matrix F, the process matrix Q, the observation matrix H, the measurement noise matrix R, and the initial value of the covariance matrix P. One test of a good Kalman filter design is how well its predicted system performance, that is, the diagonal elements of the filter's covariance matrix, matches the actual system performance. This prediction is not necessarily the actual system performance: because the filter is necessarily a limited, approximate description of system error behavior, the performance predicted by the filter does not, in general, agree exactly with the actual performance. In order to obtain the actual system performance, one propagates each of the error sources individually and updates the resulting system errors with the Kalman gains as computed and stored from the preceding filter run. If the two are generally in agreement, the filter design is a good one. If not, one must redefine the Kalman filter by adding states or by changing the matrices Q or R; one then repeats the filter run to generate a new set of Kalman gains and reruns the battery of system error propagations.

If the filter-calculated error is much less than the real error, the filter gain will be too small and the filter will not update the real error adequately. This situation can cause filter divergence, wherein the real error continues to grow, in contrast to the filter-calculated error, which remains small.

To forestall this behavior, it is good design practice to make the filter somewhat pessimistic; that is, the filter-computed errors should be somewhat larger than the real errors. This is, however, difficult to do without a reasonably accurate knowledge of the magnitude of the real error. Stated in other words, covariance analysis plays a great role in filter design and performance evaluation.

6.2 SUBOPTIMAL FILTERING

As mentioned previously, Kalman filters provide optimum estimates of a linear system by processing noisy measurements of quantities which are functions of the system states. The optimal Kalman filter equations require an exact mathematical model of the system; that is, the design of Kalman filters depends upon models of the system (or plant) dynamics and upon the statistical characteristics of the error sources which drive the system and of the errors in the measurements. However, in most applications these models are not known exactly, and the best available models are often so complex that the corresponding filter is not easily implemented. In navigation systems, the optimal filter model yields the maximum navigation accuracy obtainable with linear filtering of the navigation sensor data, since it models all error sources in the system. The increase in accuracy over that available from conventional* filtering is significant. The increased accuracy is obtained by doubling the computer memory and processing-rate requirements for the navigation function over those of, say, a conventional position-referenced Doppler-inertial design. Fortunately, this does not directly double the computer cost, because the navigation function reflects only a fraction of the total computer memory and burden-rate requirements. In present-day weapon system designs, the onboard digital computer performs such tasks as weapon delivery, flight control, guidance, terrain following and terrain avoidance, reconnaissance, and navigation; these tasks require a computer with a processing time and memory size well within the capability of handling the additional computer requirements imposed by the navigation filter. Nevertheless, because of finite computer memory and throughput (or speed), it is often impractical to model all known states. Thus, filters are often implemented, either unintentionally or by design, using simplified models which approximate the behavior of the system; that is, the model used for the Kalman filter is an approximation of the real-world system behavior. Fortunately, these suboptimum filters (also known as "reduced-order" filters) often provide performance almost as good as the optimum filter based on the exact model. Hence, we wish to develop a tool to analyze cases where the Kalman filter model and the real-world model differ.

*By a conventional Kalman filter is meant one with a fixed gain.

.

Hypothetical n-state filter in an n-state real world (the best that can be achieved). the filter design is based on an incomplete and/or incorrect model of the actual system dynamics. Assuming that an (n + m)-state real world exists and that an m-state filter is mechanized. then the dimension of P will be n> m. An error budget summarizes the contribution of each error source or group of error sources to the total system error. They are designed to display less sensitivity than optimal filters to modeling errors. • Develop a complete real-world covariance truth model containing all known error sources. this analysis uses the reference system of equations (the optimal Kalman filter formulation) and the suboptimal system to derive a set of sensitivity equations relating the two systems. • Optimal Filter. • Compare the covariance analysis with a system simulation to verify the mechanization equations. That is.6. • Identify the dominant error states for the filter modeling by an error budget (see below as well as Section 4.1. • Redesign the filter so that it has minimum sensitivity to changes in the real-world statistics. the design of a suboptimal filter. called a truth model (also known as reference model). the Kalman gains produced are optimal and so are the performance figures indicated by the covariance matrices. the following steps are recommended: 6. 2. 3. It is useful in determining the dominant errors contributing to the total error. m-state filter in an optimistic m-state real world (what the filter thinks is being achieved). if we designate P as the n x n truth error covariance matrix and Ps as the m x m filter error covariance matrix. They are designed to enable near-optimum system performance. m-state filter in an (n + m)-state real world (what will be achieved). In the filter design. subject to the above constraints. Therefore. The indications of filter performance in the suboptimal filter covariance equation will be optimistic. The filter design model generates a set of time-varying gains which specify the filter.1 Concluding Remarks The problem addressed in Section 6.2. An important feature of the suboptimal filter analysis is the capability to generate an error budget. Sensitivity equations generate realistic performance projections for a practical filter mechanization by properly allowing for the unavoidable mismatch between filter design assumptions or simplifications and the real-world environment in which the filter must perform. the following assessments can be made: • Baseline. • Best Possible Suboptimal. as discussed in Section 6.1. Appropriate elements of the covariance matrices in these sensitivity equations indicate the performance of the suboptimal filter operating in the real-world reference system. A sensitivity analysis evaluates the performance of a suboptimal filter by determining the sensitivity to an incorrect or an incomplete dynamic and statistical modeling. • Suboptimal. These time-varying gains are then used in the larger set of reference covariance equations.2) experiments. Filter performance without selecting suboptimal filter parameters Q and R. The basic analysis technique used is called sensitivity analysis. because certain error sources which exist in the reference system have been left out of the filter. The suboptimal filter design is based on using the most significant subset of states from the real world. . For instance. the real world is commonly replaced by the most complete mathematical model that can be developed.2 is a model reduction. 
6.2.1 Concluding Remarks

The problem addressed in Section 6.2 is a model reduction.


CHAPTER 8

DECENTRALIZED KALMAN FILTERS

8.1 INTRODUCTION

It is well known that the conventional Kalman filtering algorithms, although globally optimal, can diverge and tend to be numerically unreliable. More numerically stable and better-conditioned implementations of the Kalman filtering algorithms can be obtained using, for example, the Bierman U-D and square-root formulations [9]. In recent years, decentralized estimation for linear systems has been an active area of research in which decentralized and parallel versions of the Kalman filter have been reported in the literature [16, 19, 20, 35, 36, and 65]. The study of decentralized filters began as an attempt to reduce throughput, and it provides significant advantages for real-time multisensor applications. Decentralized estimation offers numerous advantages in many applications, such as integrated inertial navigation systems, where the inertial navigation system may be aided by a number of different sensors. For example, the approaches by Speyer [65] and Willsky et al. [73] are based on decomposing a central estimation problem into smaller, local ones. These works show that the global estimates can be obtained using linear operations on the local filter estimates.

In essence, we can define decentralized filtering as a two-stage data-processing technique which processes data from multisensor systems. In the first stage, each local processor uses its own data to make a best local estimate; these estimates are obtained in a parallel processing mode. The local estimates are then fused by a master filter to make a best global estimate of the state vector of the master system. This chapter develops the concepts of decentralized Kalman filtering with parallel processing capabilities, for use in distributed multisensor systems.
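As a minimal illustration of the two-stage idea (a standard static-fusion sketch, not the derivation given later in this chapter), suppose two local filters produce unbiased estimates x̂1 and x̂2 of the same state with error covariances P1 and P2, and that their errors are uncorrelated. The master filter can then fuse them as

$$P_g = \left(P_1^{-1} + P_2^{-1}\right)^{-1},\qquad \hat{x}_g = P_g\left(P_1^{-1}\hat{x}_1 + P_2^{-1}\hat{x}_2\right),$$

each local estimate having been computed in parallel from that sensor's own data. The mechanizations developed later in this chapter must, in addition, account for prior information and process noise that the local filters share, which is what the parallel filter equations of Section 8.3 address.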

8.2 FILTER MECHANIZATIONS AND ARCHITECTURES

Kalman filtering techniques have been successfully utilized in a multitude of aerospace applications, especially in inertial navigation and guidance systems. It is a well-known fact that the conventional Kalman filter provides the best sequential linear, unbiased estimate, or a globally optimal estimate, when the noise processes are jointly Gaussian. However, as stated earlier (see Chapter 4), in practice the use of a large centralized Kalman filter may not be feasible due to such factors as (1) computational burden (i.e., excessive processing time), (2) high data rate, and (3) accuracy specifications. Due to the large size of the relevant aircraft models and severe constraints on airborne computer throughput capabilities, the mathematical ideal of a complete centralized Kalman filter has never been fully realized in the avionics environment. For this reason, real-time inertial navigation filters generally employ reduced-order (i.e., suboptimal) models. Such models are carefully designed to reflect the dominant error sources and to represent the remaining error sources by noise statistics or other simplified means. As the model increases in size and complexity, this strains the processing resources, and because of the rapidly increasing computational burden associated with large filter state size, decentralized or parallel versions of the standard Kalman filter began to receive increasing attention in recent years. Furthermore, centralized filtering in a multisensor environment can suffer from severe computational loads and must run at a high input-output rate. From the above discussions, we see that the centralized (or monolithic) Kalman filter is undoubtedly the best (optimal) estimator for simple, well-behaved linear systems, whereas decentralization increases the input data rates significantly and yields moderate improvements in throughput.

In a multisensor system, for example in inertial navigation applications, each individual sensor has its own built-in Kalman filter, and the filtering duties are divided so that each measurement is processed in a local filter which contains only those states that are directly relevant to the measurement. That is, each local filter is assigned to a sensor and processes a single measurement or group of measurements whenever data are acquired. One is interested in combining the estimates from these independent data sources (filters) in order to generate a global estimate that will, ideally, be optimal. Here, a number of sensor-dedicated local filters run in parallel, the outputs being fused into a master filter; that is, a bank of local filters feeds a master filter, whose own state is the fusion of all the local filter states. The state estimates and covariances of a given local filter are then weighted and combined in the master filter, yielding estimates that are globally optimal. Furthermore, the local filters contain all or part of the master filter state. This hierarchical filtering structure provides some important improvements: (1) increasing the data throughput (i.e., rate) by local parallel processing, (2) reducing the required bandwidth for information transmission to a central processor, and (3) allowing for a more fault-tolerant system design. Also, since the output of each local sensor filter can be tested, decentralization makes for easy fault detection and isolation (FDI); if a sensor should fail, it can expeditiously be removed from the sensor network before it affects the total filter output. In addition, decentralized Kalman filtering implementations enable one to allocate a multitude of Kalman filters for fast multisensor and multitarget threat tracking [20]. A comparison between the centralized and decentralized filters is given in Figure 8.1.

In reference [9], Bierman proposed a federated square-root information filter (SRIF). This filter architecture was designed to provide good throughput and an optimal estimate. (Note that all filters are implemented in SRIF form.) In this design, the federated SRIF inputs all the process noise information into one of the local filters and no process noise into the other filters. That is, the local filters must be specially constructed so that one filter keeps track of process noise information and the other local filters have no information about process noise, or infinite process noise injected into the dynamical nonbias states. The absence of process noise information means that there is no dynamics model information in the other local filters from time step to time step. The Bierman estimate is optimal; however, the master filter must be propagated and updated whenever data are acquired, must run at the highest data rate, and requires constant feedback between the local and master filters.

Consider the conventional discrete-time linear dynamic system (or global model) of the form
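The display that follows this sentence is not reproduced in this extract. A standard way of writing such a global model, with the measurement vector partitioned among N sensor-dedicated local filters, is sketched below; the symbols (Phi for the transition matrix, H for the measurement matrices, Q and R for the noise covariances) are assumed here and may differ from the text's own notation.

$$x_{k+1} = \Phi_k x_k + w_k,\qquad w_k \sim N(0, Q_k),$$

$$z_k^{\,i} = H_k^{\,i} x_k + v_k^{\,i},\qquad v_k^{\,i} \sim N(0, R_k^{\,i}),\qquad i = 1,\dots,N,$$

where z_k^i is the measurement block processed by the i-th local filter and the measurement noises are assumed mutually independent.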

8.3 PARALLEL KALMAN FILTER EQUATIONS

Decentralized or parallel concepts for Kalman filtering are gaining practical significance for real-time implementation as a result of the recent advances in chip technology. Specifically, the development and implementation of the very high speed integrated circuit (VHSIC) program resulted in Kalman filter architectures designed for maximum efficiency. Decentralized or parallel ideas for the Kalman filter started to emerge more than a decade ago, but it was not until recently that these ideas formed a solid discipline [9, 14, and 15]. References [19] and [35] provide a definitive formulation of these ideas.

A decentralized Kalman filtering system consists of a bank of N local subsystems (i.e., local Kalman filters), each local filter generating estimates of the full state vector. In decentralized or parallel filtering, the local Kalman filters are stand-alone, in that each of them generates estimates based solely on its own available raw data; they do not communicate with each other and are not interrupted by the central processor with any feedback information. References [19] and [35] have demonstrated that each local processor depends on its own information to generate its state estimates and covariances using its own local Kalman filter. These local processed data may then be sent to a central processing unit (also referred to as a master filter, fusion center, or collating unit) that combines the local results in order to generate a global or central result. That is, the local estimates are combined at a central (or master) processor to produce the global optimal estimates; the central collating filter combines the estimates from the local filters to produce the central estimate. It is shown in these references that the central estimate is globally optimal, as if all measurements were available in one location to the central collating unit to feed a centralized Kalman filter. In other words, theoretically there is no loss in performance of these decentralized structures as compared to the optimal centralized filter.

All decentralized Kalman filtering mechanizations require some processing at the local level, that is, at the subsystem level. The decentralized mechanizations discussed here are the most attractive ones, since they require no interprocessor communications and no communication from the collating filter back to the local filters (that is, no bidirectional communication), all communication being unidirectional. Stated another way, the local filter architectures are completely autonomous, so they can be run entirely in parallel. This architecture is useful for the purposes of FDI. In the case where there is hierarchy in the filtering structure (as, for example, local processors being at a lower level and a central processor at a higher level), information flows upwards in the hierarchy to the central processor [57]. Depending on the type of application, it is natural to employ multirate filtering, whereby lower levels can run at a faster rate than higher ones. This allows the local filters to run in parallel at a faster rate. Multirate filtering may also be desirable in situations where there are sensors of different nature, as for example in integrated inertial navigation systems, where an inertial navigation system is aided by such navigation aids as Doppler radar, Loran, Omega, the Global Positioning System, and Tercom.

In order to parallelize the conventional Kalman filter, consider again the standard discrete-time linear dynamic system.
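The parallel filter equations themselves follow in the original text and are not reproduced here. As a hedged sketch of one standard information-form arrangement (consistent with the decentralized-filtering literature cited above, though the book's own development may differ), assume each local filter i runs a conventional Kalman filter on its own measurement block z^i to produce x̂_i and P_i, and that all local filters start from a common time-propagated prior x̄ with covariance P̄ and process mutually independent measurement noises. The master filter then assimilates the local results as

$$P_m^{-1} = \bar{P}^{-1} + \sum_{i=1}^{N}\left(P_i^{-1} - \bar{P}^{-1}\right),$$

$$P_m^{-1}\hat{x}_m = \bar{P}^{-1}\bar{x} + \sum_{i=1}^{N}\left(P_i^{-1}\hat{x}_i - \bar{P}^{-1}\bar{x}\right),$$

so that the global estimate is recovered by linear operations on the local estimates, with no communication required between the local filters themselves.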


REFERENCES

[1] Astrom, K. J.: Introduction to Stochastic Control Theory, Academic Press, New York, 1970.
[2] Athans, M., and Falb, P. L.: Optimal Control: An Introduction to the Theory and Its Applications, McGraw-Hill, New York, 1966.
[3] Bar-Shalom, Y., and Fortmann, T. E.: Tracking and Data Association, Academic Press, San Diego, California, 1988.
[4] Bar-Shalom, Y. (ed.): Multitarget-Multisensor Tracking: Advanced Applications, Artech House, Norwood, Massachusetts, 1990.
[5] Battin, R. H.: Astronautical Guidance, McGraw-Hill, New York, 1964.
[6] Bellman, R.: Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1957.
[7] Bellman, R. (ed.): Mathematical Optimization Techniques, University of California Press, Berkeley, California, 1963.
[8] Bellman, R., Glicksberg, I., and Gross, O.: On the Bang-Bang Control Problem, Quart. Appl. Math., Vol. 14, 1956, pp. 11-18.
[9] Bierman, G. J.: Factorization Methods for Discrete Sequential Estimation, Mathematics in Science and Engineering, Academic Press, New York, 1977.
[10] Blair, W. D.: Two-Stage α-β-γ-λ Estimator for Continuous α-β Target Trackers, Proceedings of the American Control Conference (ACC), June 1992.
[11] Bliss, G. A.: Lectures on the Calculus of Variations, University of Chicago Press, Chicago, Illinois, 1946.
[12] Bryson, A. E., Jr., and Ho, Y. C.: Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell Publishing Co., Waltham, Massachusetts, 1975.

J. 1992. [42] Kalman.1. Marcel Dekker.: A New Approach to the Linear Filtering and Prediction Problems. L. [36] Hong. R E. 1958. Y. and Fomin.. [39] Kalata. R. p. [34] Gelfand.: Stochastic Processes.: Federated Kalman Filter Simulation Results. 28. E.: Design of Near-Optimal Linear Digital Tracking Filters with Colored Input. Automatic. New Jersey. 1991. R.. Basic Eng. 776-785. L. 88-94.): Optimization Techniques. New York.: Stochastic Processes and Filtering Theory. Proceedings of the 2nd MlTjONR Workshop. Prentice-Hall. Roy. 83.. Vol. London. Doctor of Philosophy Dissertation. R E. 35. Automat. 1967. pp. Control. 736-747.: An Introduction to Observers. Vol. No. Detection.6. C. pp.: Linear Control System Analysis and Design. Aerospace and Electronic System. McGrawof Random to the Theory [26] D'Azzo. March 1984. IEEE Trans. 1979. D. and Chang... 2nd [22] Chui. and Bucy.6. Estimation. B. Control. R A. Springer-Verlag. S. [18] Chen. September-October 1989. G. ASME J. 1. [38] Johnson.: Computational Techniquesfor the Matrix Pseudoinverse in Minimum Variance Reduced-Order Filtering and Control. pp. (Also as Chapter 2 in Large-Scale Systems. Approach. Inc. No. A H. March 1961. No. E. Academic Press. Vol. Trans. New York. 1966. and Control. 174-182. World Scientific Publishing Company. Vol. 1990. [46] Leitmann. Inc. S. Y. No. 1965. S. Inc. Vol. B. IEEE Trans. Vol.. November 1973. Control. ASME J. 82.: rx-fJ Target Tracking Systems: A Survey. AC-16. Statistics. [27] Doob. October 1992. Phil.1. Vol.) [17] Chen. [48] Maybeck. (ed. G. New York. A. C. pp. [45] Kushner. and Chui. Massachusetts. Edition.: Optimum Steady-State Position and Velocity Estimation Using Noisy Sampled Position Data.: Effect of Reduced Computer Precision on a Midcourse Navigation and Guidance System Using Optimal Filtering and Linear Prediction. January 1988. 1448-1456. 1995. August 1978. December 1971. C. Academic Press. AIAA Journal.: Linear Systems and Optimal Control. C. F. 22. Appl. C. H.H. D. pp. 1922. Inc. Vol. 33. No. Heidelberg. J. pp. Berlin. C. September 1973. AIAA 1. Stability and Control. 1982. 1993. K.. c. G.: Divergence of the Kalman Filter. Control. No. McGraw-HilI. A. and Control. M. 1. S. Malcolm. T. No. and Chen. and Berarducci. p. No. Academic Press. Boston. Control.: Kalman Filtering with Real-Time Applications. P. New York. December 1971. [16] Chang. Automat. AC-16. NAECON. A and Moler. [24] Conte. 353-370.: Stochastic Stability and Control. B. S. K.2. [41] Kalman. Teaneck. California. Automat. New York. 15. A. [20] Chong. New York.: Distributed Filtering Using Set Models. P. March 1960.: On the Mathematical Foundations of Theoretical Trans. New York. C3 [33] Friedland. 1963. 1259-1265.: Distributed Multitarget-Multisensor Tracking. and Root.). Proceedings of the National Aerospace and Electronics Conference. 1986. K. December 1971. Vol..5. Berlin.). C. N... 12. IEEE Trans. IEEE Trans. H. J. New Jersey. Roy. Soc. Chen. and Schmidt. 222. Advances in Algorithms and Computational Techniques for Dynamic Control Systems. AC-16. Proceedings of the American Control Conference (ACC). Comput. Automat. Math. [21] Chui. Norwood. Massachusetts. Kuo-Chu: Distributed Estimation in Distributed Sensor Networks. IEEE Trans.: Modified Extended Kalman Filtering and a Real. XXVII. pp. Inc. 596-602. A. K.: Computer Methods for M athematical Computations. Control. [50] McGee. 35-45. pp. [28] Doyle. [15] Carlson. John Wiley and Sons. IEEE Trans. c. pp. c. Estimation. M. N. W. 1144-1153..: Calculus of Variations. pp. H. 
21-23 June 1993. J. T. Trans. 309. 1970. [32] Forsythe. AC-23. IEEE Trans. Guanrong (ed. Inc. Vol. Inc. Automat. New York. pp. and Chen. IEEE Trans. [14] Carlson. Academic Press.: Fast Triangular Formulation of the Square Root Filter. [30] Fisher. pp. P. 727-735. Aerospace and Electronic Systems. pp. 1.. [23] Chui. 95-108. N. [49] Maybeck. Monterey. R. Aerospace and Electronic Systems. Ser. Connecticut. J. New York. C. Vol. Control.: Guaranteed Margins for LQG Regulators. R. Vol. Dayton. 11. Ser.: Elementary Numerical Analysis.. New York.: Hierarchical Estimation. Academic Press. AC-16.. [44] Kerr. in Control and Dynamic Systems. D. Vol. Automat. 1989. 1925. G. 1962. Englewood Cliffs. New York.. 1977. C. The University of Connecticut. 1979. A J. 906-911. Bryson. IEEE Trans.: On the Optimal Control of Stochastic Linear Systems. M. New York. Guidance and Control. L. No. an Algorithmic Hill. Inc. pp. Vol.Time Paralled Algorithm for System Parameter Identification. Academic Press. Prentice-Hall. New Jersey. June 1982. G. Y. [29] Fisher. Englewood Cliffs.. P... R: New Results in Linear Filtering and Prediction. Cambridge Phil.. Heidelberg. AES9. H. 681-690.9. c. R: The Tracking Index: A Generalized Parameter for rx-fJ and rx-fJ-y Target Trackers.: Discrete Square Root Filtering: A Survey of Current Techniques. Ohio. 1986. D. D. K.: Stochastic Models.4. No.6. Leondes (ed. Vol. 1989. V. [43] Kaminski. 2.6. 832-835. [40] Kalata. Basis Eng.: Stochastic Models. No. Vol. Vol. Chapter 8 in Distributed Multisensor-Multitarget Tracking: Advanced Applications. C.: Decentralized Structures for Parallel Kalman Filtering. [31] Fitzgerald. pp.: An Introduction Signals and Noise. [51] Oshman. 700. J. Springer-Verlag. G. [25] Davenport. Jr. 756-757. S. G. New York. and Houpis. Estimation. Vol. AES-20. Vol.: Approximate Kalman Filtering. [35] Hashemipour. [37] Jazwinski. NASA TN D-3382. No. G. Proceedings of the 49th Annual Meeting of the Institute of Navigation (ION). December 1971. January 1990. Artech House. McGraw-HilI. and Laub. [19] Chong. Mori. pp. Vol. A: Theory of Statistical Estimation. Soc. Proc. A. S. . IEEE Trans. 4th Edition. P. [47] Luenberger. and Chui.6. 1953.: Gain-Free Square Root Information Filtering Using the Spectral Decomposition. 18-22 May 1987. W. Storrs. 100-104. Inc. pp. Inc..366 REFERENCES REFERENCES 367 [13] Carlson.4.: Federated Square Root Filter for Decentralized Parallel Processing..

2. in Control and Dynamic Systems. AIAA J. October 1976. Vol. pp. 1962. June 1988. G. 5. 1981. J. L. Academic Press. John Wiley and Sons. 1114-1120. and Leros.: Gram-Schmidt Algorithms for Covariance Propagation.: Application of Statistical Filter Theory to the Optimal Estimation of Position and Velocity On-Board a Circumlunar Mission.: System Theory: A Unified State-Space Approach to Continuous and Discrete Systems. California. J. Castanon. J. [66] Stein. Massachusetts. V. . 276-289. A.. Academic Press.368 REFERENCES REFERENCES 369 [52] Padulo. Vol.: The LQG/LTR Procedure for Multivariable Feedback Control Design. N.. [59] Schlee. 489-498. and Toda. pp. AC-27. L.. New York. August 1968. A. J. G. June 1967. T. M. 1974. [60] Schmidt. 799-813. Technical Memorandum 33-798. S. 1964.. Inc.: Minimum-Time Intercept Guidance for Tactical Missiles. G. Levy. and Bierman. B.). and McGee. Cambridge. [69] Thornton. [54] Plant. (C-TAT). South Carolina. F. R. July 1964. Inc. L. Interscience Publishers. (Originally published as Philco WDL Technical REport No. T. H. AC-24. [73] Willsky. February 1987. Inc. 27. P. L. New York. S. [53] Papoulis. R. Aerospace and Electronic Systems.: Probabilistic Modeling and Analysis in Science and Engineering. A. Saunders Company. A. No.. pp. McGraw-Hill. and Laub. Inc. H. [65] Speyer. IEEE Trans. Instrumentation Laboratory Memo SGA 5-64.. August 1992.: W Matrix Augmentation. F. and Control. Jet Propulsion Laboratory.. Vol. No. M. Hashemi. Inc. G. M. M. 1249-1254. S. Control-Theory and Adv. and Athans.: The Extrapolation. IEEE Trans. Control. P. 1949. pp. New York. E. Proc. and Mishchenko.I. Boltyanskii. 251-263. Leondes (ed. K. Technol. A.: UDUT Covariance Factorizationfor Kalman Filtering. Pennsylvania. A. [70] Tse.: Divergence in the Kalman Filter. F. NASA TN D-1208. 1962. AC-32.: Application of State-Space Methods to Navigation Problems. pp. and Athans. pp.. April 1979.. E. L.. G.4. Inc. W. Random Variables. [71] Whang. A. G. 1966. Bello. J. 177-248. New York.: The Mathematical Theory of Optimal Processes. J. 1377-1398. [72] Wiener..2... IEEE Trans. M. J. H. T. Control. M.: Computation and Transmission Requirements for a Decentralized Linear-Quadratic-Gaussian Control Problem.: Combining and Updating of Local Estimates and Regional Maps Along Sets of One-Dimensional Tracks. in Advances in Control Systems. Vol.. C. pp. Automat. No. C.: On the Computation of Transtition Matrices for Time-Invariant Systems. Acdemic Press.) [61] Siouris. N. 405-434. 266-269.. Standish. Automat.. [64] Soong. Hilton Head. Vol. L. [62] Siouris. C. IEEE Tans. 1984. C. J. E. Inform. L. and Stochastic Processes. I.4. New York. [67] Thornton.: Aerospace Avionics Systems: A Modern Synthesis. New York. 1980. Philadelphia. 1971.: Estimation Theory with Applications to Communication and Control. [55] Pontryagin.. Schmidt..6. Gamkrelidge. pp. No.: Probability. Vol.4. J. [63] Smith. L.: Observer Theory for Continuous-Time Linear Systems. [68] Thornton. Proceedings of the 1992 AIAA Guidance Navigation and Control Conference. March 1991. [58] Sage. Vol.T.. B. Institute of Electrical and Electronic Engineers. [56] Potter.: A Modified Target Maneuver Estimation Technique Using Pseudo-Acceleration Information. C. 22. F. Interpolation and Smoothing of Stationary Time Series. IEEE.2. Vol. G. 3. San Diego. New York. August 1982. pp. 1993. and Verghese. No. S. John Wiley and Sons. 105-114. 2nd Edition.2. T.: Square Root Parallel Filtering Using Reduced-Order Local Filters. [57] Roy. G. and Bierman.. M. Control. 
Sung. 1973. Automat. Proceedings of the IEEE Conference on Decision and Control.: Triangular Covariance Factorization for Kalman Filtering. pp. G. G. L. B. McGraw-Hill. pp. F. 1975. and Lee. V. and Me1sa. No.. and Arbib..

APPENDIX A

MATRIX OPERATIONS AND ANALYSIS

A.1 INTRODUCTION

The purpose of this appendix is to provide the reader with the basic concepts of matrix theory. The theory of matrices plays an important role in the formulation and solution of problems in mathematics, engineering, and optimal control and estimation theory. Moreover, since many of the models of systems considered in this book are rather complicated, consisting for the most part of coupled systems of difference or differential equations, it becomes apparent that matrix theory must be used. Matrix theory not only provides an extremely helpful tool for designing a mathematical model of a system with many variables, but also affords a practical and convenient method of adapting the data for processing by a digital computer. In particular, matrix theory provides a convenient shorthand notation for treating sets of simultaneous linear algebraic equations.

A.2 BASIC CONCEPTS

Consider a system of m linear equations in the n unknowns $x_1, x_2, \dots, x_n$ of the form

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = c_1.$$
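For completeness, the remaining equations of the system and the matrix shorthand they motivate are restated below in standard form (the appendix's own display of these equations is not reproduced in this extract):

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= c_1,\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= c_2,\\ &\ \,\vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= c_m, \end{aligned} \qquad\text{or compactly}\qquad A\,\mathbf{x} = \mathbf{c},$$

where $A = [a_{ij}]$ is the $m \times n$ coefficient matrix, $\mathbf{x}$ is the $n$-vector of unknowns, and $\mathbf{c}$ is the $m$-vector of constants.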



APPENDIX B

MATRIX LIBRARIES

This appendix presents a brief discussion of applicable software tools that are available commercially. Software can be categorized as (1) library packages and (2) interactive packages. Two commonly used packages are:

1. EISPACK: includes functions for solving eigenvalue-eigenvector problems.
2. LINPACK: includes functions for solving and analyzing basic linear equations.

Canned algorithms for most of the mathematical operations discussed in the text are available in the EISPACK and IMSL libraries. Other commercially available subroutines which use many of the basic EISPACK and LINPACK libraries are those of (1) IMSL (International Mathematical and Statistical Libraries, Inc.) and (2) NAG (the Numerical Algorithms Group). These tools are expertly written and are transportable to various computer systems. The above libraries form the foundation of MATLAB. A package which carries most of the operations discussed in the text is the MATLAB package (see also Chapter 4, Section 4.3), available from Math Works, Inc., which incorporates many enhanced features of control, system identification, signal analysis, and nonlinear system analysis. A derivative of MATLAB is the MATRIXx package.

Also presented in this appendix are a number of basic matrix operation programs coded in FORTRAN IV that often arise in estimation theory and aerospace software applications. These subroutines are presented here as a convenience to the reader and/or systems analyst and as a starting point for further research. Furthermore, they support topics discussed in the text. The reader may use these subroutines as they are, or modify them to suit his need.

      SUBROUTINE MADD (R, A, B, NR, NC)
C
C     R = A + B
C
      DIMENSION R(NR,NC), A(NR,NC), B(NR,NC)
      DO 10 I = 1, NR
      DO 20 J = 1, NC
      R(I,J) = A(I,J) + B(I,J)
   20 CONTINUE
   10 CONTINUE
      RETURN
      END

      SUBROUTINE MSUB (R, A, B, NR, NC)
C
C     R = A - B
C
      DIMENSION R(NR,NC), A(NR,NC), B(NR,NC)
      DO 10 I = 1, NR
      DO 20 J = 1, NC
      R(I,J) = A(I,J) - B(I,J)
   20 CONTINUE
   10 CONTINUE
      RETURN
      END

      SUBROUTINE MMULT (R, A, B, NAR, NBR, NBC)
C
C     R = A * B
C
      DIMENSION R(NAR,NBC), A(NAR,NBR), B(NBR,NBC)
      DO 10 I = 1, NAR
      DO 20 J = 1, NBC
      R(I,J) = 0.
      DO 30 K = 1, NBR
      R(I,J) = R(I,J) + A(I,K)*B(K,J)
   30 CONTINUE
   20 CONTINUE
   10 CONTINUE
      RETURN
      END
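As an illustration of how these routines might be called, a small driver is sketched below. The driver itself (program name, matrix values, and output statements) is hypothetical and not part of the original listings; it simply assumes the argument conventions shown above (result array first, operand arrays next, then the dimensions).

      PROGRAM MDEMO
C     HYPOTHETICAL EXAMPLE: FORM D = (A + B)*C FOR 2 X 2 MATRICES
C     USING THE MADD AND MMULT ROUTINES LISTED ABOVE.
      DIMENSION A(2,2), B(2,2), C(2,2), S(2,2), D(2,2)
C     MATRICES ARE STORED COLUMN BY COLUMN
      DATA A /1., 3., 2., 4./
      DATA B /5., 7., 6., 8./
      DATA C /1., 0., 0., 1./
C     S = A + B
      CALL MADD (S, A, B, 2, 2)
C     D = S * C
      CALL MMULT (D, S, C, 2, 2, 2)
C     PRINT THE RESULT ROW BY ROW
      WRITE (6, 100) ((D(I,J), J = 1, 2), I = 1, 2)
  100 FORMAT (1X, 2F10.4)
      STOP
      END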

C. J) + A(K) C C C SUBROUTINE MCON (R.NCB C 10 20 30 C C C DO 10 I = 1. NR. NRA=THE NUMBER OF ROWS IN A D0100J=1. J) = R( IA(K). NCB). B(NCB. NC R(J. lA. CONTINUE CONTINUE RETURN END R = TRANSPOSE DIMENSION C A(NR.J) CONTINUE CONTINUE RETURN END 10 20 20 10 .J)=O. B.NR. IA = THE ROW INDICES OF THE NON ZERO ELEMENTS OF THE SPARE MATRIX.J) = 0. M IF ( JA(K) GT. R(NC. N DO 10 J = 1.392 MATRIX LIBRARIES APPENDIX B 393 SUBROUTINE SMUL T (R.0 CONTINUE DO 30 K = 1. C DIMENSION 10 C C DO 20 1= 1. A(NR. GO TO 20 R( IA(K).0 CONTINUE RETURN END A(N. NC) C C R=C * A. NC) C C DO 20 I = 1. NCB. NCB = THE NUMBER OF ROWS AND COLUMNS OF B. N) C C C C C C C C C C C SMUL T IS A SUBROUTINE WHICH MULTIPLIES AN UPPER TRIANGULAR MATRIX BY A SPARSE MATRIX R = THE RESULTANT MATRIX A * B A = THE NONZERO ELEMENTS OF A SPARSE MATRIX READ IN VECTOR FORM . NCB).J) *C CONTINUE CONTINUE RETURN END * B (JA(K). J). N A(I. NC R(I. NRA) SUBROUTINE ION (A. N RA R(I. N R DO 10 J = 1. J) = O. NR DO 10J=1. A(M).J)=A (I. A. NC) C C R = ZERO MATRIX DIMENSION R(NR. M = THE NUMBER OF NON ZERO ELEMENTS IN THE SPARSE MATRIX. NC) C=CONSTANT R (NR. 0 CONTINUE CONTINUE DO 30 I = 1.A. NR) C DO 10 I = 1.A. JA = THE CORRESPONDING COLUMN INDICES OF THE NON ZERO ELEMENTS OF THE SPARSE MATRIX. N A(I.NC) (A) C C C C SUBROUTINE ZER (R. NR. J) 10 20 C 20 30 C 100 CONTINUE CONTINUE CONTINUE RETURN END SUBROUTINE C C MTRA (R.I)=1. NR DO 20 J = 1.NC R (I. IA(M). JA(M) C C A = IDENTITY MATRIX DIMENSION C DO 20 I = 1.I)=A(I. JA. M. NC). NC).• B = A UPPER TRIANGULAR MATRIX. N) C DIMENSION R (NRA.

NR. N. RM(NR. A.J) =A (I. N IJ=IZ+I 10 IF (ABS (BIGA) . D. L.20.20 C C C RM-REAL MATRIX NR . M) 20 10 C C DO 30 J = 1. NR). L(900). NR IJ=IZ + I A(IJ) =RM(I. L(1). RM.394 MATRIX LIBRARIES APPENDIX B 395 SUBROUTINE MEQU (R.35. 10) STOP C C C C C C C D010J=1. NR) C SUBROUTINE IMINV (A.N DO 30 I = 1. NR IZ=NR*(J-1) DO 20 1=1. NC R(I.25 25 KI = K . M(1) C DIMENSION 10 20 C C A-INPUT AND OUTPUT SQUARE MATRIX C N-ORDER OF THIS SQUARE MATRIX CD-RESULTANT DETERMINANT C L-WORK VECTOR OF LENGTH N C M-WORK VECTOR OF LENGTH N C SEARCH FOR LARGEST ELEMENT M (900) RMI = INVERSE (RM) COMMON jMICOMj A(900). N R IJ=IZ +1 RMI (I. M) A(1). L. NR D010J=1. N KI=KI+N HOLD= -A(KI) JI=KI-K+J A(KI) = A(JI) 30 A(JI) = HOLD C INTERCHANGE 15. GT. NC) 30 CONTINUE RETURN END C C R=A DIMENSION R (NR. IF NR EXCEEDS THIS MAXIMUM THE CALLING PROGRAM WILL STOP HERE.0 NK= -N DO 80 K = 1. AND M ARE WORK VECTORS WHICH MUST BE DIMENSIONED AS THE SQUARE OF THE LARGEST MATRIX INVERSE WHICH WILL BE COMPUTED. C C DIMENSION RMI (NR. IF (NR.K) 35. N NK= NK+ N L (K)=K M (K)= K KK=NK+K BIGA=A (KK) DO 20 J = K. NR IZ=NR* (J -1) DO 40 I = 1. A(NR. J) =A(I J) CONTINUE 40 COLUMNS .NUMBER OF ROWS OF THIS SQUARE MATRIX THE VECTORS A. NC) C C C C C DO 20 1=1.J) CONTINUE CONTINUE RETURN END SUBROUTINE C C MINV (RMI. NC). L. D. NR.ABS(A (lJ))) 15 BIGA=A(IJ) L (K)=I M(K)=J 20 CONTINUE C INTERCHANGE ROWS J=L(K) IF (J . NR) D= 1. N IZ=N* (J-1) DO 20 I = K. J) CONTINUE CONTINUE CALL IMINV (A.

38 J P = N * (I .45. NR) A. N IF (I .125 125 KI=KN DO 130 I = 1. NR) C C CALL MEOU (R.K) 45.62 KJ = IJ .1) DO 40 J = 1.55.O RETURN DO 55 I = 1.150.50 IK=NK+I A(lK) =A(IK)/(-BIGA) CONTINUE REDUCE MATRIX DO 65 I = 1.J) IF (TEST .A(NR. NR. I) .1) DO 110 J = 1. N 100 K= (K -I) IF (K) 150. N IK=NK+I HOLD=A(IK) IJ=I-N DO 65 J = 1.NE.K) 70.75. J ) = 0.65. N IJ=IJ+N IF (I .0 R (I.396 35 38 MATRIX LIBRARIES APPENDIX B 397 I=M(K) IF (I .A( J.K) 100.K) 120.70 A(KJ) = A (KJ) / BIGA CONTINUE PRODUCT OF PIVOTS D=D* BIGA REPLACE PIVOT BY RECIPROCAL A(KK) = 1. E .1. N KI=KI+N HOLD=A (KI) J I = KI . N JK=JO+J HOLD =A (JK) JK=NK+J JI=JP+J HOLD= -A(JK) A(JK)=A(JI) 40 A(JI) = HOLD C C 45 46 48 50 55 C DIVIDE COLUMN BY MINUS PIVOT (VALUE OF PIVOT ELEMENT IS CONTAINED IN BIGA) IF (ABS (BIGA) .60 IF (J .65. NR) DO 10 1=1.48 D=O.20) 46. N R IF ( I.105 105 I=L (K) IF (1.100.NR 70 75 C C 80 C DO 20 J = 1. J ) GO TO 30 TEST = A( I.K + J A (KI)= -A (JI) 130 A (JI) = HOLD GO TO 100 150 RETURN END 60 62 65 C SUBROUTINE COVPRP (R. A. N KJ = KJ + N IF (J .N DO 75 J=1. 0.0 GO TO 40 ) GO TO 5 . A. NR).•.120. A = COVARIANCE MATRIX R = NORMALIZED UPPER .I + K A (lJ) =HOLD*A(KJ)+A(IJ) CONTINUE DIVIDE ROW BY PIVOT KJ = K .108 108 JO=N* (K-1) JR = N* (I .K) 62.K) 60. EO.T PART OF R CONTAINS CORRELATION DEVIATIONS COEFFICIENTS DIAGONAL PART CONTAINS STANDARD LOWER-T PART OF R CONTAINS CROSS-COVARIANCES DIMENSION R (NR.K) 50.46.A (JI) 110 A(JI) 120 J=M =HOLD (K) IF (J .0 / BIG A CONTINUE FINAL ROW AND COLUMN INTERCHANGE K=N C C C C C JI =JR+J A(JK) = .

X .NR. P (J. J = 1. P.D.j(6X. D (N) MATRIX.NE. U (N. I .10).) ATANYX= P1j2.) ATANYX = ATANYX RETURN END SUBROUTINE FACTOR (P.J)) ) C IF (X .GT. 5 (13. NC) PRINT MATRIX P TITLE = 6 CHARACTER LFN = OUTPUT FILE C C C COMPUTE POSE (U) FACTORS U & D WHERE P = U * D * TRANS- C I.J)=SQRT CONTINUE CONTINUE CONTINUE RETURN END SUBROUTINE (A(I.. 1X.. CONTINUE FORMAT (j. NR WRITE (LFN. A6) 220 FORMAT (1X. 10 100 200 PRMD (TITLE.TRIANGULAR D = DIAGONAL FACTOR STORED AS A VECTOR DIMENSION C C C C P (N. 1X. LFN) ZERO THE LOWER TRIANGULAR EXCLUDING THE DIAGONAL DO 20 1=2. 5 (13.398 5 30 40 20 10 MATRIX LIBRARIES APPENDIX B 395 R (I. NC) C C C G20.N). 1X. X .200) P (NR.N DO 10 J = I. J). P.O C N) 10 20 CONTINUE CONTINUE EPS: THRESHOLD AT WHICH AN ELEMENT OF D IS CONSIDERH TO BE ZERO.P(I. 0.J)jSQRT GO TO 40 R (I.X) C FUNCTION ATANYX C J=N C C 4-QUADRANT PI = 3.LFN) C 100 LABEL - + PI PI C C C C C DIMENSION C WRITE (LFN .10) ) RETURN END C C C C EPS = 1. 13.N).LT. I) *A(J. N. O. O.LT.100) TITLE DO 101=1.) ATANYX=ATANYX IF (Z . (J. AND. 1X.N) C PRM(TITLE.LE. GO TO 100 C 50 Z =Y j X AT ANYX = AT AN (Z) IF (Z . 0.J) = A(I.NC. C C C SUBROUTINE C DIMENSION P (N. 13. 0. N) WRITE (LFN.0E -30 (Y. 10 CONTINUE 100 FORMAT (j.J.J)=O. 0. A6) FORMAT (4(3X. IF (Y . AND. 13. G20.10) ) ) RETURN END 2X.200) (J.U.J).1 U (I. J=1. 100) TITLE WRITE (LFN. 0. P = COVARIANCE MATRIX U = UNIT UPPER .LT.J) (A(I.1415927 ARC-TANGENT C GO TO 150 C C .GT. 1X.) GO TO 50 IF (Y . G20. (P SINGULAR).) ATANYX= -Plj2. 1X.

C C B = INVERSE (A). J) = 0.A.Verlag. Dongarra. Society for Industrial and Applied Mathematics. EPS ) GO TO 160 ALPHA = 1. 1-1 B(I.LT.B(K. B.400 MATRIX LIBRARIES REFERENCES 401 100 J=J-l CONTINUE C TRIANGULAR DIMENSION MATRIX B(N. J. and Moler.1) C IF (0(1) .. Bunch.0 o (J)=O..: Matrix Eigensystem Routines-EISPACK Guide Extension./A(J. N DO 10 J = 1. J M 1 SUM = SUM ./A(1. 1) = 1.0 / D(J) GO TO 170 C 10 20 CONTINUE CONTINUE B(l.. 1979. 1) C 160 170 ALPHA = 0. C 250 C DO 100 I = K. N B (J.. 1)= 1. J. Philadelphia.J) 200 CONTINUE CONTINUE C DO 150 K= 1. J) = 1. Pennsylvania.O Jl=J-l C C C DO 250 K= 1. EPS) 0(1)=0. R. C 150 C C IF (J . [2J Garbow. Lecture Notes in Computer Science. G. J. M.0 C C IF (D(J) . 2) GO TO 300 U (J. Moler.K)=P (I. C. W. N) REFERENCES [1J Dongarra. JMl SUM =0. B. S.J) U(K. I)*A(I.0 C 200 C C C RETURN END C C C SUBROUTINE TRIINV (B. D(J) = P (J. J. B..LT.J) =ALPHA*BETA DO 200 J = 2.J) = 1. J.LT.J)=SUM*B(J. J) CONTINUE B (K. Jl BETA=P (K. C 150 CONTINUE CONTINUE RETURN END C C D(1)=P(1.J) C GO TO 100 C 100 C 300 CONTINUE U(1. Vol. N).: LINP ACK User's Guide.J) C C C ZERO THE LOWER TRIANGULAR EXCLUDING THE DIAGONAL DO 20 I = 2. N) MATRIX. 1977. Boyle. C.K)-BETA*U(I.. and Stewart. K P (I. WHERE A IS AN UPPER- . Springer. 51. J) JM1=J-l C DO 200 I = 1. J. A(N.

