SKF Final Documentation

Sections

  • 1.1 Literature Survey
  • 2.0 Introduction
  • 2.1 Block diagram
  • 2.2.1 System Identification
  • 2.2.2 Inverse System Identification
  • 2.2.3 Noise or Interference Cancellation
  • 2.2.4 Signal Prediction
  • 2.3 Conclusion
  • 3.0 Introduction
  • 3.1 Advantages of Kalman Filter
  • 3.2 The Filtering Problem
  • 3.3 Overview of the calculation
  • 3.4 Kalman filter in computer vision
  • 3.5 Underlying Dynamic System Model
  • 3.6 The Process to be estimated
  • 3.7 The Computational Origins of the Filter
  • 3.8 The Probabilistic Origins of the Filter
  • 3.9 The Discrete Kalman Filter Algorithm
  • 3.10 Filter Parameters and Tuning
  • 3.11 Conclusion
  • 4.0 Introduction
  • 4.1 The Process to be estimated
  • 4.2 The Computational Origins of the Filter
  • 4.3 Conclusion:
  • 5.0 Introduction
  • 5.1.1 Adaptive Background Module
  • 5.1.2 Stauffer’s Algorithm
  • 5.1.3 Measurement Module
  • 5.1.4 Kalman Tracking Module
  • 5.1.5 Target tracking formulation derivation
  • 5.2.1 Without Sensor Position Errors
  • 5.2.2 With Sensor Position Errors
  • 5.3 Recursive Calculation of Matrices
  • 5.4 Simulation Results and Analysis
  • 6.1 Simulation with Comparable Navigation Errors
  • 6.2 Simulation with Different Process Noise
  • 6.3 Evaluation with Unknown Truth
  • 6.4 Comparison between Kalman Filter and Schmidt Kalman Filter
  • 6.5 Applications
  • 6.6 Conclusion

Chapter-1

1. Introduction
We humans have been filtering things for virtually our entire history. Water filtering is a simple example: we can filter impurities from water as simply as using our hands to skim dirt and leaves off the top of the water. Another example is filtering out noise from our surroundings. If we paid attention to all the little noises around us we would go crazy, so we learn to ignore superfluous sounds (traffic, appliances, etc.) and focus on important sounds, like the voice of the person we are speaking with. There are also many examples in engineering where filtering is desirable. Radio communications signals are often corrupted with noise, and a good filtering algorithm can remove the noise from electromagnetic signals while still retaining the useful information. Another example is line voltage: many countries require in-home filtering of line voltages in order to power personal computers and peripherals, because without filtering the power fluctuations would drastically shorten the useful lifespan of the devices.

1.1 Literature Survey
Kalman filtering is a relatively recent (1960) development in filtering, although it has its roots as far back as Gauss (1795). Kalman filtering has been applied in areas as diverse as aerospace, marine navigation, nuclear power plant instrumentation, demographic modeling, manufacturing, and many others. In the field of motion estimation for video coding many techniques have been applied, and it is now quite common to see the Kalman filtering technique and some of its extensions used for the estimation of motion within image sequences. Particularly in the pixel-recursive approaches, which suit the Kalman formulation very well, one finds various ways of applying this estimation technique in both the time and frequency domains. From a very general perspective, we find use of the Kalman filter (KF), which implies linear state-space representations, and the extended Kalman filter (EKF), which uses linearized expressions of non-linear state-space formulations. Moreover, the parallel extended Kalman filter (PEKF), which consists of a parallel bank of EKFs, is often encountered in practice.

Navigation errors are a performance-limiting factor for target tracking as ranging measurements increase in accuracy. In the past, a sensor platform has typically been assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative location of a target, as in homing-missile applications. However, for other applications such as sensor data fusion or weapons cueing, the sensor/platform location accuracy is critical for tracking high-speed maneuvering targets, because the target's relative location has to be transferred into a global reference frame and the sensor navigation errors are magnified in the target position solution. Tracking mission success therefore requires the use of techniques such as the Schmidt-Kalman filter (SKF) to account for navigation errors. Without accounting for navigation errors, regular Kalman filters tend to be optimistic in that the covariance matrix reaches a steady-state value (if it does not diverge) far below the actual mean squared errors. Heuristic techniques such as the α-β-γ filter are used to prevent divergence and boost filter gain by adjusting the Q and R covariance parameters.

It has been shown that residual biases in radar measurements can be mitigated using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse, ground-based electronically scanned array (ESA) radars were considered. The range and off-boresight angles of a target measured in the radar frame are transformed into relative position measurements in a local east-north-up (ENU) frame. It was assumed that the measurement process is subject to frame transformation errors (in terms of Euler angles) and unknown but constant measurement errors (in terms of biases and scale factors) in addition to random noise. It was further assumed that a bias estimator was available to remove the angular and measurement biases. Further, zero-mean Gaussian residual biases could be mitigated so long as the residual bias covariance matrix was provided by the bias estimator.


The Schmidt-Kalman filter (SKF) is formulated here for target tracking with navigation errors. The Schmidt-Kalman filter does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes into account (i.e., considers) the navigation error covariance, provided by an on-board navigation unit, in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.

Chapter-2 Adaptive Filter
2.0 Introduction

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. By way of contrast, a non-adaptive filter has a static transfer function.[16] Adaptive filters are required for some applications because some parameters of the desired processing operation (for instance, the locations of reflective surfaces in a reverberant space) are not known in advance. The adaptive filter uses feedback in the form of an error signal to refine its transfer function to match the changing parameters. Generally speaking, the adaptive process involves the use of a cost function, which is a criterion for optimum performance of the filter, to feed an algorithm that determines how to modify the filter transfer function to minimize the cost on the next iteration.[16] As the power of digital signal processors has increased, adaptive filters have become much more common and are now routinely used in devices such as mobile phones and other communication devices, camcorders and digital cameras, and medical monitoring equipment.[16]

2.1 Block diagram

Fig 2.1 Block diagram of an adaptive filter

To start the discussion of the block diagram we make the following assumptions:

• The input signal is the sum of a desired signal d(n) and interfering noise v(n):
x(n) = d(n) + v(n)   (2.1)
• The variable filter has a Finite Impulse Response (FIR) structure. For such structures the impulse response is equal to the filter coefficients. The coefficients for a filter of order p are defined as [17]
w_n = [w_n(0), w_n(1), ..., w_n(p)]ᵀ   (2.2)
• The error signal or cost function is the difference between the desired and the estimated signal:
e(n) = d(n) − d̂(n)   (2.3)

The variable filter estimates the desired signal by convolving the input signal with the impulse response. In vector notation this is expressed as [17]
d̂(n) = w_nᵀ x(n)   (2.4)
where
x(n) = [x(n), x(n−1), ..., x(n−p)]ᵀ   (2.5)
is an input signal vector. Moreover, the variable filter updates the filter coefficients at every time instant:
w_{n+1} = w_n + Δw_n   (2.6)
where Δw_n is a correction factor for the filter coefficients. The adaptive algorithm generates this correction factor based on the input and error signals; LMS and RLS define two different coefficient update algorithms.[17]
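As a concrete illustration of equations (2.1) to (2.6), the following is a minimal sketch of an LMS-type adaptive FIR filter in Python (NumPy). The function name lms_filter, the step size mu, and the specific update w ← w + 2·mu·e·x are illustrative assumptions chosen for this sketch; they are not taken from this document.

```python
import numpy as np

def lms_filter(x, d, order=8, mu=0.01):
    """Minimal LMS adaptive FIR filter sketch.

    x  : input signal x(n), eq. (2.1)
    d  : desired signal d(n)
    mu : step size (assumed value; must be tuned for stability)
    Returns the estimate d_hat(n) and the error e(n) = d(n) - d_hat(n).
    """
    n_samples = len(x)
    w = np.zeros(order + 1)               # filter coefficients w_n, eq. (2.2)
    d_hat = np.zeros(n_samples)
    e = np.zeros(n_samples)
    for n in range(order, n_samples):
        x_vec = x[n - order:n + 1][::-1]  # input vector x(n), eq. (2.5)
        d_hat[n] = w @ x_vec              # convolution estimate, eq. (2.4)
        e[n] = d[n] - d_hat[n]            # error signal, eq. (2.3)
        w = w + 2.0 * mu * e[n] * x_vec   # LMS correction factor, eq. (2.6)
    return d_hat, e
```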

2.2 Applications of Adaptive Filters

2.2.1 System Identification

One common adaptive filter application is to use adaptive filters to identify an unknown system, such as the response of an unknown communications channel or the frequency response of an auditorium, to pick fairly divergent applications. Other applications include echo cancellation and channel identification.[18]

Fig 2.2 System Identification Model

In this configuration the same input feeds both the adaptive filter and the unknown system. Clearly, when e(k) is very small, the adaptive filter response is close to the response of the unknown system. If, for example, the unknown system is a modem, the input often represents white noise and is a part of the sound you hear from your modem when you log in to your Internet service provider.[17]

2.2.2 Inverse System Identification

By placing the unknown system in series with your adaptive filter, your filter adapts to become the inverse of the unknown system as e(k) becomes very small. As shown in the figure, the process requires a delay inserted in the desired signal d(k) path to keep the data at the summation synchronized. Adding the delay keeps the system causal; including the delay to account for the delay caused by the unknown system prevents a non-causal condition.[18]

Fig 2.3 Determining an Inverse Response to an Unknown System

Plain old telephone systems (POTS) commonly use inverse system identification to compensate for the copper transmission medium. When you send data or voice over telephone lines, the copper wires behave like a filter, having a response that rolls off at higher frequencies (or data rates) and having other anomalies as well. Adding an adaptive filter that has a response which is the inverse of the wire response, and configuring the filter to adapt in real time, lets the filter compensate for the rolloff and anomalies, increasing the available frequency output range and data rate for the telephone system.[18]

2.2.3 Noise or Interference Cancellation

In noise cancellation, adaptive filters let you remove noise from a signal in real time. Here, the desired signal (the one to clean up) combines noise and desired information. To remove the noise, feed a signal n'(k) to the adaptive filter that represents noise correlated with the noise to be removed from the desired signal.[18]

Fig 2.4 Using an Adaptive Filter to Remove Noise from an Unknown System

So long as the input noise to the filter remains correlated to the unwanted noise accompanying the desired signal, the adaptive filter adjusts its coefficients to reduce the value of the difference between y(k) and d(k), removing the noise and resulting in a clean signal in e(k). Notice that in this application the error signal actually converges to the input data signal, rather than converging to zero.[19]
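To make the noise-cancellation configuration concrete, the sketch below reuses the hypothetical lms_filter function from the earlier sketch. The signal model and parameter values are illustrative assumptions, not taken from this document.

```python
import numpy as np

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)                   # assumed sampling setup
clean = np.sin(2 * np.pi * 5 * t)                 # desired information signal
noise = np.random.randn(len(t))                   # noise source
d = clean + 0.5 * noise                           # d(k): desired signal corrupted by noise
x = np.convolve(noise, [0.8, 0.2], mode="same")   # n'(k): reference correlated with the noise

# The filter output y(k) tracks the noise content of d(k);
# the error e(k) is the cleaned signal, not a signal converging to zero.
y, e = lms_filter(x, d, order=8, mu=0.005)
```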

2.2.4 Signal Prediction

Predicting signals requires that you make some key assumptions: assume that the signal is either steady or slowly varying over time, and periodic over time as well. Accepting these assumptions, the adaptive filter must predict the future values of the desired signal based on past values. When s(k) is periodic and the filter is long enough to remember previous values, this structure, with the delay in the input signal, can perform the prediction. You might also use this structure to remove a periodic signal from stochastic noise signals.[19]

Fig 2.5 Predicting Future Values of a Periodic Signal

2.3 Conclusion

Finally, it can be noticed that most systems of interest contain elements of more than one of the four adaptive filter structures, so carefully reviewing the real structure may be required to determine what the adaptive filter is adapting to. Also, for clarity in the figures, the analog-to-digital (A/D) and digital-to-analog (D/A) components do not appear: since the adaptive filters are assumed to be digital in nature and many of the problems produce analog data, converting the input signals to and from the analog domain is usually necessary. These models are used in the design of adaptive filters, which are in turn used in various applications of advanced communication.

Chapter-3 Kalman Filter

3.0 Introduction

In statistics, the Kalman filter is a mathematical method named after Rudolf E. Kalman. Its purpose is to use measurements observed over time, containing noise (random variations) and other inaccuracies, and produce values that tend to be closer to the true values of the measurements and their associated calculated values. The Kalman filter has many applications in technology, and is an essential part of space and military technology development. A very simple example, and perhaps the most commonly used type of Kalman filter, is the phase-locked loop, which is now ubiquitous in FM radios and most electronic communications equipment. Extensions and generalizations to the method have also been developed.[14]

The Kalman filter produces estimates of the true values of measurements and their associated calculated values by predicting a value, estimating the uncertainty of the predicted value, and computing a weighted average of the predicted value and the measured value. The most weight is given to the value with the least uncertainty. The estimates produced by the method tend to be closer to the true values than the original measurements because the weighted average has a better estimated uncertainty than either of the values that went into it.[14]

From a theoretical standpoint, the Kalman filter is an algorithm for efficiently doing exact inference in a linear dynamical system, which is a Bayesian model similar to a hidden Markov model but where the state space of the latent variables is continuous and where all latent and observed variables have a Gaussian distribution (often a multivariate Gaussian distribution).[14]

3.1 Advantages of Kalman Filter

Below are some advantages of the Kalman filter, compared with another famous filter known as the Wiener filter.[20]

1. The Kalman filter algorithm is implementable on a digital computer, which replaced the analog circuitry used for estimation and control when the Kalman filter was first introduced. This implementation may be slower than Wiener's analog filters; however, it is capable of much greater accuracy.
2. Stationary properties are not required of the deterministic dynamics or the random processes; many applications of importance involve non-stationary stochastic processes.
3. The Kalman filter is compatible with the state-space formulation of optimal controllers for dynamic systems, and it proves useful for both properties, estimation and control, of these systems.
4. The Kalman filter requires less additional mathematical preparation to learn for the modern control engineering student, compared to the Wiener filter.
5. Necessary information for mathematically sound, statistically based decision methods for detecting and rejecting anomalous measurements is provided through the use of the Kalman filter.

3.2 The Filtering Problem

This section formulates the general filtering problem and explains the conditions under which the general filter simplifies to a Kalman filter (KF). Figure 3.1, reproduced from [21], illustrates the application context in which the Kalman filter is used. A physical system (e.g., a mobile robot, a chemical process, a satellite) is driven by a set of external inputs or controls, and its outputs are evaluated by measuring devices or sensors, such that the knowledge of the system's behavior is solely given by the inputs and the observed outputs. The observations convey the errors and uncertainties in the process, namely the sensor noise and the system errors.[21]

Figure 3.1: Typical application of the Kalman Filter

Based on the available information (control inputs and observations), it is required to obtain an estimate of the system's state that optimizes a given criterion. This is the role played by a filter. In particular situations, explained in the following sections, this filter is a Kalman filter.[21]

3.3 Overview of the calculation

The Kalman filter uses a system's dynamics model (e.g., physical laws of motion), known control inputs to that system, and measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion algorithm.[15]

All measurements and calculations based on models are estimates to some degree. Noisy sensor data, approximations in the equations that describe how a system changes, and external factors that are not accounted for all introduce some uncertainty about the inferred values for a system's state. The Kalman filter averages a prediction of a system's state with a new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess" (not the entire history) of a system's state to calculate a new state.[16]

When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of calculations. This allows for representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.[16]
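The weighted-average idea of Section 3.3 can be seen in one dimension: the prediction and the measurement are combined with weights determined by their variances. The numbers below are illustrative assumptions only.

```python
# One-dimensional fusion of a predicted value and a measured value.
x_pred, var_pred = 10.0, 4.0   # predicted state and its variance (assumed)
z, var_z = 12.0, 1.0           # measurement and its variance (assumed)

k = var_pred / (var_pred + var_z)   # gain: weight given to the measurement
x_new = x_pred + k * (z - x_pred)   # fused estimate, pulled toward the more certain value
var_new = (1 - k) * var_pred        # fused variance is smaller than either input

print(x_new, var_new)               # 11.6, 0.8
```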

3.4 Kalman filter in computer vision

Data fusion using a Kalman filter can assist computers to track objects in videos with low latency (not to be confused with a low number of latent variables). The tracking of objects is a dynamic problem, using data from sensors and camera images that always suffer from noise. This noise can sometimes be reduced by using higher quality cameras and sensors, but it can never be eliminated, so it is often desirable to use a noise reduction method. The iterative predictor-corrector nature of the Kalman filter can be helpful, because at each time instance only one constraint on the state variable need be considered. This process is repeated, considering a different constraint at every time instance. All the measured data are accumulated over time and help in predicting the state. Video can also be pre-processed, perhaps using a segmentation technique, to reduce the computation and hence the latency.[20]

3.5 Underlying Dynamic System Model

Kalman filters are based on linear dynamic systems discretized in the time domain. They are modelled on a Markov chain built on linear operators perturbed by Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to the discrete state space of the hidden Markov model). Additionally, the hidden Markov model can represent an arbitrary distribution for the next value of the state variables, in contrast to the Gaussian noise model that is used for the Kalman filter. There is a strong duality between the equations of the Kalman filter and those of the hidden Markov model.[20]

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying, for each time step k, the following matrices: Fk, the state-transition model; Hk, the observation model; Qk, the covariance of the process noise; Rk, the covariance of the observation noise; and sometimes Bk, the control-input model, as described below.

Fig 3.2 The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1)

3.6 The Process to be estimated

The Kalman filter addresses the general problem of trying to estimate the state x ∈ ℜⁿ of a discrete-time controlled process that is governed by the linear stochastic difference equation[22]

x_k = A x_{k−1} + B u_k + w_{k−1}   (3.1)

with a measurement z ∈ ℜᵐ that is

z_k = H x_k + v_k   (3.2)

The random variables w_k and v_k represent the process and measurement noise respectively. They are assumed to be independent of each other, white, and with normal probability distributions

p(w) ∼ N(0, Q)   (3.3)
p(v) ∼ N(0, R)   (3.4)

In practice, the process noise covariance Q and the measurement noise covariance R might change with each time step or measurement, but here we assume they are constant. The n × n matrix A in the difference equation (3.1) relates the state at the previous time step k−1 to the state at the current step k, in the absence of either a driving function or process noise. In practice A might change with each time step, but here we assume it is constant. The n × l matrix B relates the optional control input u ∈ ℜˡ to the state x. The m × n matrix H in the measurement equation (3.2) relates the state to the measurement z_k. In practice H might change with each time step or measurement, but here we assume it is constant.[22]

3.7 The Computational Origins of the Filter

We define x̂_k⁻ ∈ ℜⁿ (note the "super minus") to be our a priori state estimate at step k given knowledge of the process prior to step k, and x̂_k ∈ ℜⁿ to be our a posteriori state estimate at step k given measurement z_k. We can then define a priori and a posteriori estimate errors as

e_k⁻ ≡ x_k − x̂_k⁻   and   e_k ≡ x_k − x̂_k

The a priori estimate error covariance is then

P_k⁻ = E[e_k⁻ e_k⁻ᵀ]   (3.5)

and the a posteriori estimate error covariance is

P_k = E[e_k e_kᵀ]   (3.6)

In deriving the equations for the Kalman filter, we begin with the goal of finding an equation that computes an a posteriori state estimate x̂_k as a linear combination of an a priori estimate x̂_k⁻ and a weighted difference between an actual measurement z_k and a measurement prediction H x̂_k⁻, as shown below in (3.7). Some justification for (3.7) is given in "The Probabilistic Origins of the Filter" below.

x̂_k = x̂_k⁻ + K (z_k − H x̂_k⁻)   (3.7)

The difference (z_k − H x̂_k⁻) in (3.7) is called the measurement innovation, or the residual. The residual reflects the discrepancy between the predicted measurement H x̂_k⁻ and the actual measurement z_k; a residual of zero means that the two are in complete agreement.[22] The n × m matrix K in (3.7) is chosen to be the gain or blending factor that minimizes the a posteriori error covariance (3.6). This minimization can be accomplished by first substituting (3.7) into the above definition for e_k, substituting that into (3.6), performing the indicated expectations, taking the derivative of the trace of the result with respect to K, setting that result equal to zero, and then solving for K. One form of the resulting gain is

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹   (3.8)

Looking at (3.8) we see that as the measurement error covariance R approaches zero, the gain K weights the residual more heavily. On the other hand, as the a priori estimate error covariance P_k⁻ approaches zero, the gain K weights the residual less and less heavily. Another way of thinking about the weighting by K is that as the measurement error covariance R approaches zero, the actual measurement z_k is "trusted" more and more, while the predicted measurement H x̂_k⁻ is trusted less and less. Conversely, as the a priori estimate error covariance P_k⁻ approaches zero, the actual measurement z_k is trusted less and less, while the predicted measurement H x̂_k⁻ is trusted more and more.[22]

3.8 The Probabilistic Origins of the Filter

The justification for (3.7) is rooted in the probability of the a priori estimate x̂_k⁻ conditioned on all prior measurements z_k (Bayes' rule). For now let it suffice to point out that the Kalman filter maintains the first two moments of the state distribution:

E[x_k] = x̂_k
E[(x_k − x̂_k)(x_k − x̂_k)ᵀ] = P_k

The a posteriori state estimate (3.7) reflects the mean (the first moment) of the state distribution; it is normally distributed if the conditions of (3.3) and (3.4) are met. The a posteriori estimate error covariance (3.6) reflects the variance of the state distribution (the second non-central moment). In other words,

p(x_k | z_k) ∼ N(E[x_k], E[(x_k − x̂_k)(x_k − x̂_k)ᵀ]) = N(x̂_k, P_k)   [20]

3.9 The Discrete Kalman Filter Algorithm

We will begin this section with a broad overview, covering the "high-level" operation of one form of the discrete Kalman filter. After presenting this high-level view, we will narrow the focus to the specific equations and their use in this version of the filter.[20]

The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations for the Kalman filter fall into two groups: time update equations and measurement update equations. The time update equations are responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e. for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.[20]

The time update equations can also be thought of as predictor equations, while the measurement update equations can be thought of as corrector equations. Indeed the final estimation algorithm resembles a predictor-corrector algorithm for solving numerical problems, as shown below in Figure 3.3.

Figure 3.3 The ongoing discrete Kalman filter cycle. The time update projects the current state estimate ahead in time; the measurement update adjusts the projected estimate by an actual measurement at that time.

The specific equations for the time and measurement updates are presented below in Table 3.1 and Table 3.2.[20]

x̂_k⁻ = A x̂_{k−1} + B u_k   (3.9)
P_k⁻ = A P_{k−1} Aᵀ + Q   (3.10)

Table 3.1: Discrete Kalman filter time update equations

Again notice how the time update equations in Table 3.1 project the state and covariance estimates forward from time step k−1 to step k. A and B are from (3.1), while Q is from (3.3). Initial conditions for the filter are discussed in the earlier references.

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹   (3.11)
x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻)   (3.12)
P_k = (I − K_k H) P_k⁻   (3.13)

Table 3.2: Discrete Kalman filter measurement update equations

The first task during the measurement update is to compute the Kalman gain, K_k. Notice that the equation given here as (3.11) is the same as (3.8). The next step is to actually measure the process to obtain z_k, and then to generate an a posteriori state estimate by incorporating the measurement as in (3.12); again, (3.12) is simply (3.7) repeated here for completeness. The final step is to obtain an a posteriori error covariance estimate via (3.13).[20]

After each time and measurement update pair, the process is repeated with the previous a posteriori estimates used to project or predict the new a priori estimates. This recursive nature is one of the very appealing features of the Kalman filter: it makes practical implementations much more feasible than an implementation of a Wiener filter, which is designed to operate on all of the data directly for each estimate. The Kalman filter instead recursively conditions the current estimate on all of the past measurements. Figure 3.4 below offers a complete picture of the operation of the filter, combining the high-level diagram of Figure 3.3 with the equations from Table 3.1 and Table 3.2.[20]

Figure 3.4 A complete picture of the operation of the Kalman filter, combining the high-level diagram of Figure 3.3 with the equations from Table 3.1 and Table 3.2.

3.10 Filter Parameters and Tuning

In the actual implementation of the filter, the measurement noise covariance R is usually measured prior to operation of the filter. Measuring the measurement error covariance R is generally practical (possible) because we need to be able to measure the process anyway (while operating the filter), so we should generally be able to take some off-line sample measurements in order to determine the variance of the measurement noise.[8]

The determination of the process noise covariance Q is generally more difficult, as we typically do not have the ability to directly observe the process we are estimating. Sometimes a relatively simple (poor) process model can produce acceptable results if one "injects" enough uncertainty into the process via the selection of Q. Certainly in this case one would hope that the process measurements are reliable.[7]

In either case, whether or not we have a rational basis for choosing the parameters, superior filter performance (statistically speaking) can often be obtained by tuning the filter parameters Q and R. The tuning is usually performed off-line, frequently with the help of another (distinct) Kalman filter, in a process generally referred to as system identification. In closing we note that under conditions where Q and R are in fact constant, both the estimation error covariance P_k and the Kalman gain K_k will stabilize quickly and then remain constant (see the filter update equations in Figure 3.4). If this is the case, these parameters can be pre-computed by either running the filter off-line, or by determining the steady-state value of P_k.[7]

It is frequently the case, however, that the measurement error (in particular) does not remain constant. For example, when sighting beacons in our optoelectronic tracker ceiling panels, the noise in measurements of nearby beacons will be smaller than that in far-away beacons. Also, the process noise Q is sometimes changed dynamically during filter operation (becoming Q_k) in order to adjust to different dynamics. For example, in the case of tracking the head of a user of a 3D virtual environment we might reduce the magnitude of Q_k if the user seems to be moving slowly, and increase the magnitude if the dynamics start changing rapidly. In such cases Q_k might be chosen to account for both uncertainty about the user's intentions and uncertainty in the model.[7]

3.11 Conclusion

Filtering refers to estimating the state vector at the current time, based upon all past measurements. Prediction refers to estimating the state at a future time, based on all measurements taken up to the current time. Smoothing means estimating the value of the state at some prior time, based on all measurements taken up to the current time. An estimate, x̂, is the computed value of a quantity, x, based upon a set of measurements, y.[17]
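Before moving on to the extended Kalman filter, here is a minimal sketch of one cycle of the discrete Kalman filter in Python (NumPy), directly following equations (3.9) to (3.13). The function and variable names are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def kalman_step(x_hat, P, z, A, B, u, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter (eqs. 3.9-3.13)."""
    # Time update (predict)
    x_pred = A @ x_hat + B @ u                       # (3.9)  a priori state estimate
    P_pred = A @ P @ A.T + Q                         # (3.10) a priori error covariance

    # Measurement update (correct)
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # (3.11) Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)            # (3.12) a posteriori state estimate
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred    # (3.13) a posteriori error covariance
    return x_new, P_new
```

For example, a one-dimensional constant-velocity target (state made of position and velocity, an assumed model) could use A = [[1, dt], [0, 1]] and H = [[1, 0]], with Q and R chosen by tuning as discussed in Section 3.10.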

Chapter-4 The Extended Kalman Filter (EKF)

4.0 Introduction

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about the current mean and covariance. In the extended Kalman filter, the state transition and observation models need not be linear functions of the state but may instead be non-linear functions.[17]

The Kalman filter of Chapter 3 addresses the general problem of trying to estimate the state of a discrete-time controlled process that is governed by a linear stochastic difference equation. But what happens if the process to be estimated and (or) the measurement relationship to the process is non-linear? Some of the most interesting and successful applications of Kalman filtering have been such situations. In something akin to a Taylor series, we can linearize the estimation around the current estimate using the partial derivatives of the process and measurement functions to compute estimates even in the face of non-linear relationships. A Kalman filter that linearizes about the current mean and covariance is referred to as an extended Kalman filter or EKF.[17]

4.1 The Process to be estimated

To do so, we must begin by modifying some of the material presented for the Kalman filter. Let us assume that our process again has a state vector x ∈ ℜⁿ, but that the process is now governed by the non-linear stochastic difference equation

x_k = f(x_{k−1}, u_{k−1}, w_{k−1})   (4.1)

with a measurement z ∈ ℜᵐ that is

z_k = h(x_k, v_k)   (4.2)

where the random variables w_k and v_k again represent the process and measurement noise as in (3.3) and (3.4). In this case the non-linear function f in the difference equation (4.1) relates the state at the previous time step k−1 to the state at the current time step k; it includes as parameters any driving function u_{k−1} and the zero-mean process noise w_k. The non-linear function h in the measurement equation (4.2) relates the state x_k to the measurement z_k.[17]

In practice of course one does not know the individual values of the noise w_k and v_k at each time step. However, one can approximate the state and measurement vectors without them as

x̃_k = f(x̂_{k−1}, u_{k−1}, 0)   (4.3)

and

z̃_k = h(x̃_k, 0)   (4.4)

where x̂_{k−1} is some a posteriori estimate of the state (from a previous time step k−1). It is important to note that a fundamental flaw of the EKF is that the distributions (or densities in the continuous case) of the various random variables are no longer normal after undergoing their respective nonlinear transformations. The EKF is simply an ad hoc state estimator that only approximates the optimality of Bayes' rule by linearization.[17]

4.2 The Computational Origins of the Filter

To estimate a process with non-linear difference and measurement relationships, we begin by writing new governing equations that linearize an estimate about (4.3) and (4.4):

x_k ≈ x̃_k + A (x_{k−1} − x̂_{k−1}) + W w_{k−1}   (4.5)
z_k ≈ z̃_k + H (x_k − x̃_k) + V v_k   (4.6)

where x_k and z_k are the actual state and measurement vectors, x̃_k and z̃_k are the approximate state and measurement vectors from (4.3) and (4.4), x̂_{k−1} is an a posteriori estimate of the state at step k−1, the random variables w_k and v_k represent the process and measurement noise as in (3.3) and (3.4), and[24]

A is the Jacobian matrix of partial derivatives of f with respect to x:
A[i,j] = ∂f[i]/∂x[j], evaluated at (x̂_{k−1}, u_{k−1}, 0)

W is the Jacobian matrix of partial derivatives of f with respect to w:
W[i,j] = ∂f[i]/∂w[j], evaluated at (x̂_{k−1}, u_{k−1}, 0)

H is the Jacobian matrix of partial derivatives of h with respect to x:
H[i,j] = ∂h[i]/∂x[j], evaluated at (x̃_k, 0)

V is the Jacobian matrix of partial derivatives of h with respect to v:
V[i,j] = ∂h[i]/∂v[j], evaluated at (x̃_k, 0)

Note that for simplicity in the notation we do not use the time step subscript k with the Jacobians A, W, H, and V, even though they are in fact different at each time step.[24]

Now we define a new notation for the prediction error,

ẽ_{x_k} ≡ x_k − x̃_k   (4.7)

and the measurement residual,

ẽ_{z_k} ≡ z_k − z̃_k   (4.8)

Remember that in practice one does not have access to x_k in (4.7); it is the actual state vector, i.e. the quantity one is trying to estimate. On the other hand, one does have access to z_k in (4.8); it is the actual measurement that one is using to estimate x_k. Using (4.7) and (4.8) we can write governing equations for an error process as[24]

ẽ_{x_k} ≈ A (x_{k−1} − x̂_{k−1}) + ε_k   (4.9)
ẽ_{z_k} ≈ H ẽ_{x_k} + η_k   (4.10)

where ε_k and η_k represent new independent random variables having zero mean and covariance matrices W Q Wᵀ and V R Vᵀ, with Q and R as in (3.3) and (3.4) respectively.

Notice that the equations (4.9) and (4.10) are linear, and that they closely resemble the difference and measurement equations (3.1) and (3.2) of the discrete Kalman filter. This motivates us to use the actual measurement residual ẽ_{z_k} in (4.8) and a second (hypothetical) Kalman filter to estimate the prediction error ẽ_{x_k} given by (4.9). This estimate, call it ê_k, could then be used along with (4.7) to obtain the a posteriori state estimates for the original non-linear process as

x̂_k = x̃_k + ê_k   (4.11)

The random variables of (4.9) and (4.10) have approximately the following probability distributions:[24]

p(ẽ_{x_k}) ∼ N(0, E[ẽ_{x_k} ẽ_{x_k}ᵀ])
p(ε_k) ∼ N(0, W Q_k Wᵀ)
p(η_k) ∼ N(0, V R_k Vᵀ)

Given these approximations and letting the predicted value of ê_k be zero, the Kalman filter equation used to estimate ê_k is

ê_k = K_k ẽ_{z_k}   (4.12)

By substituting (4.12) back into (4.11) and making use of (4.8) we see that we do not actually need the second (hypothetical) Kalman filter:

x̂_k = x̃_k + K_k ẽ_{z_k} = x̃_k + K_k (z_k − z̃_k)   (4.13)

Equation (4.13) can now be used for the measurement update in the extended Kalman filter, with x̃_k and z̃_k coming from (4.3) and (4.4), and the Kalman gain K_k coming from (3.11) with the appropriate substitution for the measurement error covariance.[24]

The complete set of EKF equations is shown below in Table 4.1 and Table 4.2. Note that we have substituted x̂_k⁻ for x̃_k to remain consistent with the earlier "super minus" a priori notation, and that we now attach the subscript k to the Jacobians A, W, H, and V, to reinforce the notion that they are different at (and therefore must be recomputed at) each time step.[24]

x̂_k⁻ = f(x̂_{k−1}, u_{k−1}, 0)   (4.14)
P_k⁻ = A_k P_{k−1} A_kᵀ + W_k Q_{k−1} W_kᵀ   (4.15)

Table 4.1: Extended Kalman filter time update equations

As with the basic discrete Kalman filter, the time update equations in Table 4.1 project the state and covariance estimates from the previous time step k−1 to the current time step k. Again f in (4.14) comes from (4.3), A_k and W_k are the process Jacobians at step k, and Q_k is the process noise covariance (3.3) at step k.[24]

K_k = P_k⁻ H_kᵀ (H_k P_k⁻ H_kᵀ + V_k R_k V_kᵀ)⁻¹   (4.16)
x̂_k = x̂_k⁻ + K_k (z_k − h(x̂_k⁻, 0))   (4.17)
P_k = (I − K_k H_k) P_k⁻   (4.18)

Table 4.2: Extended Kalman filter measurement update equations

As with the basic discrete Kalman filter, the measurement update equations in Table 4.2 correct the state and covariance estimates with the measurement z_k. Again h in (4.17) comes from (4.4), H_k and V_k are the measurement Jacobians at step k, and R_k is the measurement noise covariance (3.4) at step k. (Note we now subscript R, allowing it to change with each measurement.)

The basic operation of the EKF is the same as the linear discrete Kalman filter, as shown in Figure 3.3. Figure 4.1 below offers a complete picture of the operation of the EKF, combining the high-level diagram of Figure 3.3 with the equations from Table 4.1 and Table 4.2.[24]

Figure 4.1 A complete picture of the operation of the extended Kalman filter, combining the high-level diagram of Figure 3.3 with the equations from Table 4.1 and Table 4.2.

An important feature of the EKF is that the Jacobian H_k in the equation for the Kalman gain K_k serves to correctly propagate or "magnify" only the relevant component of the measurement information. For example, if there is not a one-to-one mapping between the measurement z_k and the state via h, the Jacobian H_k affects the Kalman gain so that it only magnifies the portion of the residual z_k − h(x̂_k⁻, 0) that does affect the state. Of course if over all measurements there is not a one-to-one mapping between the measurement z_k and the state via h, then as you might expect the filter will quickly diverge; in this case the process is unobservable.[24]

4.3 Conclusion

The non-linear extended Kalman filter (EKF) uses the standard EKF formulation to achieve nonlinear state estimation. Inside, it uses the complex-step Jacobian to linearize the nonlinear dynamic system; the linearized matrices are then used in the Kalman filter calculation.
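As a complement to Tables 4.1 and 4.2, here is a minimal EKF sketch in Python (NumPy) for a process x_k = f(x_{k−1}) with measurement z_k = h(x_k). It assumes additive noise (so the W and V Jacobians are identity) and user-supplied Jacobian functions; all names are illustrative assumptions, not from a specific library.

```python
import numpy as np

def ekf_step(x_hat, P, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter cycle (eqs. 4.14-4.18) with additive noise assumed."""
    # Time update: propagate the estimate through the nonlinear model.
    x_pred = f(x_hat)                                # (4.14)
    F = F_jac(x_hat)                                 # A_k: Jacobian of f at the previous estimate
    P_pred = F @ P @ F.T + Q                         # (4.15), with W_k = I assumed

    # Measurement update: linearize h around the predicted state.
    H = H_jac(x_pred)                                # H_k: Jacobian of h at the prediction
    S = H @ P_pred @ H.T + R                         # innovation covariance, V_k = I assumed
    K = P_pred @ H.T @ np.linalg.inv(S)              # (4.16)
    x_new = x_pred + K @ (z - h(x_pred))             # (4.17)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred    # (4.18)
    return x_new, P_new
```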

Chapter-5 Schmidt-Kalman Filter (SKF)

5.0 Introduction

Navigation errors are a performance-limiting factor for target tracking as ranging measurements increase in accuracy. In the past, a sensor platform has typically been assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative location of a target, as in homing-missile applications. However, for other applications such as sensor data fusion or weapons cueing, the sensor/platform location accuracy is critical for tracking high-speed maneuvering targets, because the target's relative location has to be transferred into a global reference frame and the sensor navigation errors are magnified in the target position solution. Tracking mission success therefore requires the use of such techniques as the Schmidt-Kalman filter (SKF) [9] to account for navigation errors. Without accounting for navigation errors, regular Kalman filters tend to be optimistic in that the covariance matrix reaches a steady-state value (if it does not diverge) far below the actual mean squared errors. Heuristic techniques such as the α-β-γ filter are used to prevent divergence and boost filter gain by adjusting the Q and R covariance parameters.

It has been seen that residual biases in radar measurements could be mitigated using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse, ground-based electronically scanned array (ESA) radars were considered. The range and off-boresight angles of a target measured in the radar frame are transformed into relative position measurements in a local east-north-up (ENU) frame. It was assumed that the measurement process is subject to frame transformation errors (in terms of Euler angles) and unknown but constant measurement errors (in terms of biases and scale factors) in addition to random noise. It was further assumed that a bias estimator was available to remove the angular and measurement biases. Further, zero-mean Gaussian residual biases could be mitigated so long as the residual bias covariance matrix was provided by the bias estimator.[6,7]

The Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors. The Schmidt-Kalman filter does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes into account (i.e., considers) the navigation error covariance, provided by an on-board navigation unit, in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.[6]

The conventional tracking filter is formulated in Section 1. The design of the SKF using the consider covariance in the presence of navigation errors is presented in Sections 2 and 3. The recursive calculation of performance metrics in terms of root mean squared (RMS) errors and root sample covariance trace (RSCT) is described in Section 4. The Monte Carlo simulation is presented in Section 5, and the results of the SKF for target tracking in the presence of navigation errors are given in Section 6. The final chapter concludes the thesis with an outline of future work.[7]

The steps of the Schmidt-Kalman filter algorithm are:
1. Target Tracking Formulation
2. Consider Covariance in Target Filter
3. Recursive Calculation of Matrices
4. Simulation of SKF
5. Results of SKF for Target Tracking in the presence of Navigation Errors

5.1 Target Tracking Formulation

The block diagram of the tracking system is shown in Figure 5.1. It comprises three modules: adaptive background, measurement, and Kalman filtering. The adaptive background module produces the foreground pixels of each video frame and passes this evidence to the measurement module. The measurement module associates the foreground pixels to targets, initializes new ones if necessary, and manipulates existing targets by merging or splitting them based on an analysis of the foreground evidence. The existing or new target information is passed to the Kalman filtering module to update the state of the tracker, i.e. the position, velocity and size of the targets. The output of the tracker is the state information, which is also fed back to the adaptive background module to guide the spatio-temporal adaptation of the algorithm.[7]

Figure 5.1: Block diagram of the complete feedback tracker architecture

5.1.1 Adaptive Background Module

The targets of the proposed system (vehicles and humans) are mostly moving. The changes in the video frames due to this movement are used to identify and segment the foreground (pixels of the moving targets) from the background (pixels without movement). The foreground pixels thus obtained are readily grouped into target regions. If a background image were available, this segmentation would simply be the difference of the current frame from the background image; a static image of the empty scene viewed by the camera could be used as the background. Unfortunately this is not practical, and adaptive background approaches are adopted, primarily for two reasons. First, such an empty-scene image might not be available due to the system setup. Secondly, and most importantly, the background (outdoors and indoors) also changes: natural light conditions change slowly as time goes by, fluorescent light flickers at the power supply frequency, and the wind causes swaying movements of flexible background objects (e.g. foliage). All these changes need to be learnt into an adaptive background model.[7]

5.1.2 Stauffer's Algorithm

Stauffer's adaptive background algorithm is capable of learning such changes, with their different speeds of change, by learning into the background any pixel whose color in the current frame resembles the colors that this pixel often has. So no changes, periodic changes, or changes that occurred in the distant past (like the motion of a chair) lead to pixels that are considered background. To do so, a number of weighted Gaussians model the appearance of different colors in each pixel. The mean is a three-dimensional vector indicating the color modeled for that pixel, while the covariance matrix indicates the extent around the mean within which a color of that pixel is to be considered similar to the one modeled. The weights indicate the amount of time the modeled color is active in that particular pixel. Colors in any given pixel similar to that modeled by any of the Gaussians of that pixel lead to an update of that Gaussian, an increase of its weight and a decrease of all the weights of the other Gaussians of that pixel. Colors not matching any of the Gaussians of that pixel lead to the introduction of a new Gaussian with minimum weight.[7]

Hence the weights of the Gaussians of the pixel located at (x, y) are updated at every frame t: the weight of the Gaussian matching the current color is increased, and the weights of all the other Gaussians of that pixel are decreased, with the speed of both changes governed by the learning rate a. The original algorithm and most of its modifications lead to a binary decision for each pixel, foreground or background; in this system a pixel persistence map (PPM) is used instead. This is a map of the same dimensions as the frames, with a value at each location (x, y) equal to the weight of the Gaussian matching the current color of the pixel at (x, y). Small PPM values indicate foreground objects, while large values indicate background. The foreground/background threshold is left unspecified though.[7]

The drawback of the existing Stauffer's algorithm is that stationary foreground objects tend to fade into the background at a rate set by the learning rate. Large rates favor background adaptation but tend to fade a target into the background when it stops, while small rates fade foreground objects slowly but are also slow in adapting to genuine background changes. This fading progressively destroys the region of the tracked object, deforms its perceived shape and finally leads to losing track of the object altogether. When the target resumes moving, foreground pixels will be marked only at the locations not previously occupied by the stationary target; when the target has fairly uniform coloration, this can lead to track loss even in the presence of movement.[7]

The proposed feedback tracking architecture addresses these problems. The state of the Kalman tracker contains the ellipse that describes every target. Thus, instead of a constant value, a spatio-temporal adaptation of the learning rate is used: the learning rate is modified in elliptical regions around these targets, according to the mobility of each target
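For reference, a minimal sketch of the standard Stauffer-Grimson weight update that this paragraph describes is given below. The exact expressions used by the proposed tracker may differ, so treat this as an assumed baseline rather than the thesis's own equations.

```python
import numpy as np

def update_weights(weights, matched_index, a):
    """Stauffer-Grimson style weight update for the Gaussians of one pixel.

    weights       : current weights of the K Gaussians of this pixel
    matched_index : index of the Gaussian matching the current color, or None
    a             : learning rate (larger a adapts faster but fades stopped targets sooner)
    """
    m = np.zeros_like(weights)
    if matched_index is not None:
        m[matched_index] = 1.0             # ownership indicator for the matched Gaussian
    weights = (1.0 - a) * weights + a * m  # matched weight grows, the others decay
    return weights / weights.sum()         # renormalize so the weights sum to one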

as estimated from the state of the Kalman tracker for that particular target. This delays fading of the targets and, depending on the selection of the small learning rate and the motion of the targets, can be sufficient. In some cases though, where targets stay put for very long periods, even the small learning rate will gradually fade them into the background. If this starts happening (the target becomes smaller while its mobility is small), the normal weight update mechanism is bypassed: the weight of the current Gaussian is decreased and that of all the rest is increased, with a rate that is inversely proportional to the mobility of the target, regardless of whether the pixel is actually foreground or not. This fading prevention mechanism is not always in effect; it is only activated when targets are small and rather immobile, since the tampering of the weights is very forceful and affects the whole elliptical disc around the target. The penalty of this strategy is the delayed detection of new, very small targets.[7]

These proposed feedback mechanisms on the learning rate lead to robust foreground regions regardless of the flickering in the images or the lack of target mobility, while they do not affect the adaptation of the background around the targets.[7]

The second major proposed modification of Stauffer's algorithm addresses extreme flickering situations often encountered in night vision cameras. In such scenes the PPM needs to be bounded by a very low threshold in order not to consider flickering pixels as foreground; the threshold on the other hand tends to discard actual foreground pixels as well. The proposed solution is to adapt the threshold T in a spatio-temporal fashion similar to the learning rate in (15). This way flickering pixels are avoided far from the targets, while the targets themselves are not affected.

When such flickering and mobility conditions occur, the resulting PPM is more suitable for target region forming than that of the original algorithm. The forming of target regions is the goal of the measurement module, detailed next.[7]

5.1.3 Measurement Module

The measurement module finds foreground segments, assigns them to known targets or initializes new ones, and checks targets for possible merging or splitting. The measurement process begins by processing the adaptively thresholded PPM to obtain foreground segments. This involves shadow detection, dilation, filling of any holes in the segments, and erosion. The obtained segments are checked for possible merging based on their Mahalanobis distance and are further considered only if they are large enough. These segments are then associated to targets based on their Mahalanobis distance from the targets. The associated foreground evidence is used to update the existing targets, while non-associated segments generate new target requests to the Kalman module. Since we are using a Kalman tracker, the targets are described by two-dimensional Gaussians.

The targets are subsequently checked for possible merging based on how similar they are. For this, the minimum Mahalanobis distance of the one with respect to the other is used; if two such Gaussians are too similar, the targets are merged. Finally, very large targets are checked for splitting. This is necessary as, for example, two monitored people can be walking together and then separate, and their tracks should separate as well. Splitting is performed using the k-means algorithm on the pixels of the foreground segment comprising the target. Two parts are requested from the k-means algorithm, and these parts are subsequently checked to determine if they are distinct. If the two parts are found distinct, then they form two targets: the one part is used to update the existing target, while the other part is used to request a new target from the Kalman tracker. The information for new targets or targets to be updated is passed to the Kalman module. All the found targets are then processed to identify the number of bodies in them and to detect the heads. This is done by processing the height of the target as a function of its column number, where the height is measured from the bottom of the box bounding the target. The processing identifies peaks that correspond to heads and valleys that correspond to points where the target can be split into more than one body.

As the width of the body increases rapidly at the shoulders, this point can be easily detected, and heads are found by examining the smoothed derivative of the width of the detected peaks. Finally, if the lighting conditions are normal, the face position can be refined inside the head region using skin color histograms. The process is illustrated in Figure 5.2 and works well with upright people.[7]

Fig. 5.2: Processing a target to extract bodies and heads. (a) Scene of a target with the two associated bodies and heads marked; head estimates are also marked with black lines. (b) PPM of the target and 2D Gaussian approximation. (c) Target height profile (blue line) used to identify the peaks and valleys that correspond to the head tops (red circles) and body splitting points (vertical red lines) respectively.

5.1.4 Kalman Tracking Module

The Kalman module maintains the states of the targets. It creates new targets should it receive a request from the measurement module, and performs measurement updates based on the foreground segments associated to the targets. The states of the targets are fed back to the adaptive background module to adapt the learning rate and the threshold for the PPM binarization. Every target is approximated by an elliptical disc, i.e. it can be described by a single Gaussian; this facilitates the use of a Kalman tracker.

The target states are seven-dimensional: they comprise the mean of the Gaussian describing the target (horizontal and vertical components), the velocity of the mean (horizontal and vertical components), and the three independent terms of the covariance matrix. The prediction step uses a loose dynamic model of constant velocity for the update of the mean position and velocity. Further, the covariance terms are modeled as constant; the variations of the velocity and the covariance terms are permitted by the state update variance term. This loose dynamic model permits arbitrary movement of the targets. It is very different from the more elaborate models used for tracking aircraft. Aircraft can perform a limited set of maneuvers that can be learned and expected by the tracking system, and flying aircraft can be modeled as rigid bodies, so strict and multiple dynamic models are appropriate and have been used extensively in Interacting Multiple Model Kalman trackers. Unlike aircraft, street vehicles and especially humans have more degrees of freedom in their movement, which includes, apart from speed and direction changes, avoiding obstacles arbitrarily, rendering the learning of a strict dynamic model impractical. Further, their exact model is non-linear and hence cannot be used with the Kalman tracker, and a strict dynamic model in this case can mislead a tracker to a particular track even in the presence of contradicting evidence. The loose constant-velocity model is therefore used instead of linearization and an extended Kalman tracker.[7]

5.1.5 Target tracking formulation derivation

Consider a linear time-invariant discrete-time dynamic system together with its measurement as

x_{k+1} = A x_k + B u_k + w_k   (5.1a)
y_k = H x_k + v_k   (5.1b)

where the subscript k is the time index, x is the state vector, u is a known input, y is the measurement, and w and v are state and measurement noise processes, respectively. It is implied that all vectors and matrices have compatible dimensions, which are omitted for simplicity.[23]

The goal is to find an estimate of x_k, denoted by \hat{x}_k, given the measurements up to time k, denoted by y^k = \{y_0, ..., y_k\}. Under the assumptions that the state and measurement noises are uncorrelated zero-mean white Gaussian with w ~ N\{0, Q\} and v ~ N\{0, R\}, where Q and R are positive semi-definite covariance matrices, the Kalman filter provides an optimal estimator in the form of \hat{x}_k = E\{x_k | y^k\} [1, 2, 3, 4].

Starting from an initial estimate \hat{x}_0 = E\{x_0\} and its estimation error covariance P_0 = E\{(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\}, where the superscript T stands for matrix transpose, the Kalman filter equations specify the propagation of \hat{x}_k and P_k over time as well as their update by the measurement y_k as

\bar{x}_{k+1} = A \hat{x}_k + B u_k                                  (5.2a)
\bar{P}_{k+1} = A P_k A^T + Q                                        (5.2b)
\hat{x}_{k+1} = \bar{x}_{k+1} + K_{k+1} (y_{k+1} - H \bar{x}_{k+1})  (5.2c)
P_{k+1} = (I - K_{k+1} H) \bar{P}_{k+1}                              (5.2d)
K_{k+1} = \bar{P}_{k+1} H^T S_{k+1}^{-1}                             (5.2e)
S_{k+1} = H \bar{P}_{k+1} H^T + R                                    (5.2f)

where \bar{x}_{k+1} and \bar{P}_{k+1} are the predicted state and prediction error covariance, respectively, and S_{k+1} is the measurement prediction error covariance. [24]
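For readers following along in MATLAB, one predict/update cycle of (5.2a)-(5.2f) can be written as below. This is a generic sketch rather than code from the cited references; the matrices A, B, H, Q, R and the previous estimate are assumed to be supplied by the caller.

% One predict/update cycle of the discrete Kalman filter, equations (5.2a)-(5.2f).
% xhat, P : previous estimate and covariance; u : known input; y : new measurement.
function [xhat, P] = kalmanStep(xhat, P, u, y, A, B, H, Q, R)
    % Time propagation (5.2a)-(5.2b)
    xbar = A*xhat + B*u;
    Pbar = A*P*A' + Q;
    % Measurement update (5.2c)-(5.2f)
    S = H*Pbar*H' + R;                 % innovation covariance (5.2f)
    K = Pbar*H'/S;                     % Kalman gain (5.2e)
    xhat = xbar + K*(y - H*xbar);      % state update (5.2c)
    P = (eye(size(P)) - K*H)*Pbar;     % covariance update (5.2d)
end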

In practice, however, the target measurements are nonlinear in nature, such as range and bearing. Consider a general nonlinear measurement taken from the sensor at x^s to a target x^t as

z = f(x^t, x^s) + v                                                  (5.3)

where f(\cdot,\cdot) is the nonlinear measurement equation and v is the measurement error, assumed to be zero-mean Gaussian N(0, \sigma^2). Assume that the sensor location x^s is known perfectly and that an initial estimate of the unknown target location, denoted by x_0, is available. The nonlinear measurement can be linearized around this initial estimate as

z \approx f(x_0, x^s) + h^T (x^t - x_0) + v                          (5.4a)

with

h^T = \left[ \frac{\partial z}{\partial x_1^t} \;\; \frac{\partial z}{\partial x_2^t} \;\; \cdots \;\; \frac{\partial z}{\partial x_n^t} \right]_{x_0}     (5.4b)

In terms of the measurement prediction error, the equation can be further written in terms of x = x^t - x_0 as

\tilde{z} = z - f(x_0, x^s) = h^T x + v                              (5.5)

For M measurements, the linearized measurements can be put into a vector format as

\tilde{z} = H x + v                                                  (5.6a)
\tilde{z}^T = [\tilde{z}_1 \;\; \tilde{z}_2 \;\; \cdots \;\; \tilde{z}_M]     (5.6b)
v^T = [v_1 \;\; v_2 \;\; \cdots \;\; v_M]                            (5.6c)

H = \begin{bmatrix} h_1^T \\ h_2^T \\ \vdots \\ h_M^T \end{bmatrix}   (5.6d)

The use of the linearized measurements (5.6a) is conformal with the linear formulation (5.1b), and its application to (5.2) leads to the extended Kalman filter (EKF) [1, 2, 3, 4]. In the two-dimensional setting with ranging measurements, let x and y be the coordinate axes, x^t = [x^t, y^t]^T and x^s = [x^s, y^s]^T. The nonlinear equation (5.3) for range is (ignoring the noise terms)

r = f(x^t, x^s) = \sqrt{(x^t - x^s)^2 + (y^t - y^s)^2}               (5.7a)

The corresponding linearized equation around an initial estimate x_0 of x^t is

r \approx f(x_0, x^s) + h^T (x^t - x_0)                              (5.7b)

with

h = \begin{bmatrix} \frac{x_0 - x^s}{r} \\ \frac{y_0 - y^s}{r} \end{bmatrix} = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}     (5.7c)

where \theta is the angle relative to the x-axis. Eq. (5.7c) affords an important interpretation: the observation matrix is made of the line of sight (LOS) vector from the sensor to the target. [12]
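As a quick numerical illustration of (5.7a)-(5.7c), the MATLAB fragment below returns the predicted range and the LOS observation vector for a 2-D sensor/target pair; the function name and interface are invented for this sketch.

% Predicted range and line-of-sight (LOS) observation vector, per (5.7a)-(5.7c).
% xt0 = [x0; y0] initial target position estimate, xs = [xs; ys] sensor position.
function [rPred, h] = rangeLOS(xt0, xs)
    d = xt0 - xs;          % displacement from sensor to the target estimate
    rPred = norm(d);       % predicted range, f(x0, xs) in (5.7a)
    h = d / rPred;         % LOS unit vector [cos(theta); sin(theta)], (5.7c)
end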

5.2 Consider Covariance in Tracking Filter

In the above formulation, the sensor location x^s is assumed to be known perfectly, which is approximately true when the ranging errors are much larger than the sensor positioning errors. However, when the ranging errors are comparable to the sensor positioning errors, it is required to account for navigation errors explicitly to ensure tracking performance. [24]

Assume that an onboard navigation system provides the sensor's location at \hat{x}^s, which is subject to an unknown instantaneous error \delta x^s and a location error covariance P^s. Further assume that the sensor provides range measurements given by

r = \| x^t - x^s \|_2 + n_r                                          (5.8)

where x^t is the target location, \|\cdot\|_2 stands for the 2-norm, and n_r is the measurement noise, which is assumed to be of zero mean with a variance of \sigma_r^2. Given the measured sensor location \hat{x}^s and the predicted target location \bar{x}^t, the predicted range is obtained as

\bar{r} = \| \bar{x}^t - \hat{x}^s \|_2                              (5.9)

The range equation is expanded around the measured sensor location and the predicted target location as

r = \bar{r} + \left( \frac{\partial r}{\partial x^t} \right)^T (x^t - \bar{x}^t) + \left( \frac{\partial r}{\partial x^s} \right)^T (x^s - \hat{x}^s) + n_r     (5.10a)
  = \bar{r} + \left( \frac{\partial r}{\partial x^t} \right)^T \delta x^t + \left( \frac{\partial r}{\partial x^s} \right)^T \delta x^s + n_r                   (5.10b)

5.2.1 Without Sensor Position Errors

In the conventional EKF formulation as in (5.4a) and (5.7a), the sensor position errors are ignored. The resulting linearized measurement equation is

\delta r = r - \bar{r} = h_t^T \delta x^t + n_r                      (5.11a)

with

h_t = \frac{\partial r}{\partial x^t}                                (5.11b)

5.2.2 With Sensor Position Errors

To account for the sensor positioning errors, consider three cases: zero mean, constant bias, and non-stationary. [23]

Case 1: Zero Mean

(a) Zero-mean normalization. When zero-mean normalization is applied, all the data in each profile are slid vertically so that their average is zero.

Fig 5.3 Zero-Mean Normalization

The blue profile was obtained from the red one by a vertical translation so that the average of the profile is zero. [23]

(b) Unit-norm normalization. When unit-norm normalization is applied, all the data in each profile are multiplied so that the length of the associated vector equals 1. The length of the vector is the square root of the sum of squares of all values. [23]

Fig 5.4 Normalization toward 1

A well-behaved navigation system provides a navigation solution whose error is uncorrelated (with itself over time and with the target state estimation error), with a zero mean E\{\delta x^s\} = 0 and covariance P^s. The concept of consider covariance as used here is the central idea of the Schmidt-Kalman filter. The consider covariance for the measurement equation (5.10) is given by

S = h_t^T P^t h_t + h_s^T P^s h_s + R                                (5.12)

where P^t is the target state error covariance, P^s is the sensor location error covariance, and R is the measurement error covariance.
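Equation (5.12) is straightforward to evaluate once the two Jacobians are available. The one-line MATLAB sketch below is illustrative; ht, hs, Pt, Ps and R are assumed to come from the linearization and from the navigation system, and the variable names are invented for the example.

% Consider (Schmidt) innovation covariance for a scalar measurement, per (5.12).
% ht, hs : observation Jacobians w.r.t. target state and sensor location (column vectors)
% Pt, Ps : target state and sensor location error covariances;  R : measurement variance
S = ht'*Pt*ht + hs'*Ps*hs + R;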

Case 2: Constant Bias

If the measurement exhibits a constant bias, the bias can be estimated directly when an external reference measurement is available. Alternatively, it can be estimated by a separate bias estimator. Any residual bias is accounted for by the consider covariance as suggested in [6, 7]. More specifically, the bias is taken as an augmented state in the filter formulation for the state and covariance propagation; however, the update equations for the augmented state, treated as nuisance parameters, are discarded. [6] To derive the Schmidt-Kalman filter, the augmented equations are given by

\begin{bmatrix} x_{k+1} \\ \beta_{k+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k + \begin{bmatrix} w_k \\ 0 \end{bmatrix}     (5.13a)

y_k = \begin{bmatrix} H_t & H_s \end{bmatrix} \begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + v_k     (5.13b)

The bias \beta_k is not estimated, but its effect on x_k is accounted for through a guess about the bias in terms of its covariance matrix Cov\{\beta_k\} = P_k^s and its correlation with the target state estimate, defined as

P_k^{ts} = E\{(x_k - \hat{x}_k)\beta_k^T\}, \qquad P_0^{ts} = 0      (5.14)

The augmented state covariance matrix is given by

\mathbf{P}_k = \begin{bmatrix} P_k^t & P_k^{ts} \\ (P_k^{ts})^T & P_k^s \end{bmatrix}     (5.15)

The Schmidt-Kalman filter is then obtained by applying the Kalman filter to the augmented equations (5.13) with the augmented covariance (5.15), while discarding the update of the bias states.

The time propagation equations of the target state and its error covariance remain the same as (5.2a) and (5.2b). The time propagation of the bias covariance is the identity, P^s_{k+1} = P^s_k. The time propagation of the cross-correlation is given by

\bar{P}^{ts}_{k+1} = A P^{ts}_k                                       (5.16)

The measurement update equation of the target state still has the same form as (5.2c), but the gain matrix (5.2e) is different, as given below:

K_{k+1} = (\bar{P}^t_{k+1} H_t^T + \bar{P}^{ts}_{k+1} H_s^T) S_{k+1}^{-1}     (5.17a)

where the predicted measurement error covariance S_{k+1} is given by

S_{k+1} = H_t \bar{P}^t_{k+1} H_t^T + H_t \bar{P}^{ts}_{k+1} H_s^T + H_s (\bar{P}^{ts}_{k+1})^T H_t^T + H_s P^s_{k+1} H_s^T + R_{k+1}     (5.17b)

The measurement update equation of the target state error covariance is given by

P^t_{k+1} = (I - K_{k+1} H_t) \bar{P}^t_{k+1} - K_{k+1} H_s (\bar{P}^{ts}_{k+1})^T     (5.18a)

The measurement update of the cross-correlation is given by

P^{ts}_{k+1} = \bar{P}^{ts}_{k+1} - K_{k+1} (H_t \bar{P}^{ts}_{k+1} + H_s P^s_{k+1})     (5.18b)

From (5.17a) and (5.17b), it can be seen that if there is no correlation, that is, P^{ts}_k = 0, which may occur when the sensor position is subject to zero-mean uncorrelated errors, the formulation reduces to the inflated covariance of (5.12), a form of covariance inflation. However, for a non-zero bias the cross-correlation is not zero, as seen from (5.18b), even when starting with the null initialization P^{ts}_0 = 0 of (5.14). [24]
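Putting (5.16)-(5.18b) together with the unchanged state equations, one Schmidt-Kalman cycle can be sketched in MATLAB as below. This is a generic, assumption-laden sketch (the variable names and function interface are invented), not an implementation taken from [6], [7] or [24].

% One Schmidt-Kalman (consider) filter cycle, following (5.2a)-(5.2c) and (5.16)-(5.18b).
% Pt, Pts, Ps : target covariance, target/bias cross-correlation, bias covariance
% Ht, Hs      : observation matrices w.r.t. the target state and the bias states
function [xhat, Pt, Pts] = skfStep(xhat, Pt, Pts, Ps, u, y, A, B, Ht, Hs, Q, R)
    % Time propagation: target state and covariance as in (5.2a)-(5.2b), cross term (5.16)
    xbar   = A*xhat + B*u;
    Ptbar  = A*Pt*A' + Q;
    Ptsbar = A*Pts;                     % the bias covariance Ps is not propagated
    % Consider innovation covariance (5.17b) and gain (5.17a)
    S = Ht*Ptbar*Ht' + Ht*Ptsbar*Hs' + Hs*Ptsbar'*Ht' + Hs*Ps*Hs' + R;
    K = (Ptbar*Ht' + Ptsbar*Hs')/S;
    % Measurement update: state as in (5.2c), covariances per (5.18a)-(5.18b)
    xhat = xbar + K*(y - Ht*xbar);
    Pt   = (eye(size(Pt)) - K*Ht)*Ptbar - K*Hs*Ptsbar';
    Pts  = Ptsbar - K*(Ht*Ptsbar + Hs*Ps);
end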

5.3 Recursive Calculation of Matrices

Monte Carlo simulation will be used to evaluate the performance of a Schmidt-Kalman filter vs. a conventional Kalman filter in the presence of navigation errors. The performance metrics will be root mean squared (RMS) errors when the state truth is known and the root sample covariance trace (RSCT) when the state truth is not known. [24]

Assume that a vector quantity x(k) ~ N(\mu, \Sigma) at time k is estimated N times as in a Monte Carlo simulation. The estimate is denoted by \hat{x}_i(k) for i = 1, ..., N. There are two ways to characterize the estimation errors. One way is to consider the sample mean and sample covariance defined as

m_N(k) = \frac{1}{N} \sum_{i=1}^{N} \hat{x}_i(k)                     (5.19a)
S_N(k) = \frac{1}{N} \sum_{i=1}^{N} [\hat{x}_i(k) - m_N(k)][\hat{x}_i(k) - m_N(k)]^T     (5.19b)

Consider the bias in the estimate

b_N(k) = m_N(k) - x(k)                                               (5.20)

The statistic

T^2 = n (m - \mu)^T P^{-1} (m - \mu)                                 (5.21)

follows the so-called Hotelling's distribution with n-1 degrees of freedom [8], where n is the dimension of x. The ellipses drawn with \{m_N(k), S_N(k)\} are often called confidence ellipses. The sample covariance S_N(k) does not tell how accurate the estimates are (i.e., how far away they are from the true value) but rather provides an indication of the spreading of the estimates around the sample mean m_N(k) (i.e., their precision). The approximation error is small for large N. It can be easily shown that the sample mean and covariance can be calculated recursively inside a Monte Carlo simulation loop as

m_n(k) = \frac{1}{n} \sum_{i=1}^{n} \hat{x}_i(k) = \frac{n-1}{n} m_{n-1}(k) + \frac{1}{n} \hat{x}_n(k)     (5.22)

S_n(k) = \frac{1}{n} \sum_{i=1}^{n} [\hat{x}_i(k) - m_n(k)][\hat{x}_i(k) - m_n(k)]^T     (5.23a)
       = \frac{n-1}{n} S_{n-1}(k) + \frac{1}{n} [\hat{x}_n(k) - m_n(k)][\hat{x}_n(k) - m_n(k)]^T + \frac{n-1}{n} [m_n(k) - m_{n-1}(k)][m_n(k) - m_{n-1}(k)]^T     (5.23b)

In contrast to (5.19), instead of using the sample mean, another widely used covariance is calculated with respect to the true vector x(k), if available, as

P_N(k) = \frac{1}{N} \sum_{i=1}^{N} [\hat{x}_i(k) - x(k)][\hat{x}_i(k) - x(k)]^T     (5.24)

The square root of the trace of P is the so-called root mean squared (RMS) error, defined as

RMS_N(k) = \sqrt{\mathrm{trace}\{P_N(k)\}} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} [\hat{x}_i(k) - x(k)]^T [\hat{x}_i(k) - x(k)] }     (5.25)

These truth-centered quantities can likewise be calculated recursively as

P_n(k) = \frac{n-1}{n} P_{n-1}(k) + \frac{1}{n} [\hat{x}_n(k) - x(k)][\hat{x}_n(k) - x(k)]^T     (5.26)

RMS_n^2(k) = \frac{n-1}{n} RMS_{n-1}^2(k) + \frac{1}{n} [\hat{x}_n(k) - x(k)]^T [\hat{x}_n(k) - x(k)]     (5.27)
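Inside a Monte Carlo loop, the recursions (5.22), (5.23b) and (5.27) avoid storing all N estimates. The MATLAB fragment below shows the update for one new run at a fixed time index; it assumes m, S and rms2 were initialized to zeros before the loop, n is the current run index, and the variable names are invented for this sketch.

% Recursive sample mean, sample covariance and truth-centered squared RMS,
% per (5.22), (5.23b) and (5.27). xhat_n is the n-th estimate, xTrue the truth.
m_prev = m;                                     % keep m_{n-1} for the covariance update
m  = ((n-1)/n)*m + (1/n)*xhat_n;                % sample mean, (5.22)
S  = ((n-1)/n)*S + (1/n)*(xhat_n - m)*(xhat_n - m)' ...
     + ((n-1)/n)*(m - m_prev)*(m - m_prev)';    % sample covariance, (5.23b)
rms2   = ((n-1)/n)*rms2 + (1/n)*(xhat_n - xTrue)'*(xhat_n - xTrue);   % (5.27)
rmsErr = sqrt(rms2);                            % RMS error, (5.25)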

It is easy to show that the estimation error covariance P_N(k) is related to the sample covariance S_N(k) and the bias vector b_N(k) by

P_N(k) = S_N(k) + b_N(k) b_N(k)^T                                    (5.28)

Without knowing the truth x(k), the sample mean m_N(k) and sample covariance S_N(k) cannot indicate the estimation accuracy, and the bias vector cannot be pre-calculated using (5.20). With the truth x(k), the two methods can be linked to each other via (5.28), thus being equivalent. [24]

Instead of calculating the sample covariance itself, its trace may be evaluated, leading to the root sample covariance trace (RSCT), a scalar quantity similar to the RMS error:

RSCT_n(k) = \sqrt{\mathrm{trace}\{S_n(k)\}}                          (5.29a)

RSCT_n^2(k) = \frac{n-1}{n} RSCT_{n-1}^2(k) + \frac{1}{n} [\hat{x}_n(k) - m_n(k)]^T [\hat{x}_n(k) - m_n(k)] + \frac{n-1}{n} [m_n(k) - m_{n-1}(k)]^T [m_n(k) - m_{n-1}(k)]     (5.29b)

5.4 Simulation Results and Analysis

Three simulation examples are presented in this section, wherein a more general case with both range and range rate measurements is considered:

\rho = \sqrt{x^2 + y^2}                                              (5.30a)

\dot{\rho} = \frac{x \dot{x} + y \dot{y}}{\rho}                      (5.30b)

Taking partial derivatives of the range and range rate measurements (5.30a) and (5.30b) with respect to x^s and x^t, and retaining only the linear components, yields the observation matrices H_s and H_t, which assume the same form. Omitting the subscript, it is given by [24]

H = \begin{bmatrix} \frac{\partial \rho}{\partial x} & \frac{\partial \rho}{\partial y} & \frac{\partial \rho}{\partial \dot{x}} & \frac{\partial \rho}{\partial \dot{y}} \\ \frac{\partial \dot{\rho}}{\partial x} & \frac{\partial \dot{\rho}}{\partial y} & \frac{\partial \dot{\rho}}{\partial \dot{x}} & \frac{\partial \dot{\rho}}{\partial \dot{y}} \end{bmatrix}     (5.31)

where

\frac{\partial \rho}{\partial x} = \frac{x}{\rho}                    (5.32a)
\frac{\partial \rho}{\partial y} = \frac{y}{\rho}                    (5.32b)
\frac{\partial \rho}{\partial \dot{x}} = 0                           (5.32c)
\frac{\partial \rho}{\partial \dot{y}} = 0                           (5.32d)
\frac{\partial \dot{\rho}}{\partial x} = \frac{\dot{x}}{\rho} - \frac{(x\dot{x} + y\dot{y})\,x}{\rho^3}     (5.32e)
\frac{\partial \dot{\rho}}{\partial y} = \frac{\dot{y}}{\rho} - \frac{(x\dot{x} + y\dot{y})\,y}{\rho^3}     (5.32f)
\frac{\partial \dot{\rho}}{\partial \dot{x}} = \frac{x}{\rho}        (5.32g)
\frac{\partial \dot{\rho}}{\partial \dot{y}} = \frac{y}{\rho}        (5.32h)
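The measurement prediction and the Jacobian of (5.30)-(5.32) translate into the short MATLAB function below. It assumes the relative state (target minus sensor) is ordered as [x; xdot; y; ydot], as in (6.1a); the function name is invented for this sketch.

% Range / range-rate measurement (5.30) and its Jacobian (5.31)-(5.32)
% with respect to the relative state rel = [x; xdot; y; ydot].
function [z, H] = rangeRateMeas(rel)
    x = rel(1); xd = rel(2); y = rel(3); yd = rel(4);
    rho  = sqrt(x^2 + y^2);                 % range, (5.30a)
    rhod = (x*xd + y*yd)/rho;               % range rate, (5.30b)
    z = [rho; rhod];
    H = [ x/rho,                  0,      y/rho,                  0;
          xd/rho - rhod*x/rho^2,  x/rho,  yd/rho - rhod*y/rho^2,  y/rho ];
end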

6. Results and Discussions

6.1 Simulation with Comparable Navigation Errors

To analyze the effects of sensor location errors (navigation errors) on targeting accuracy, consider a sensor-target crossing scenario as shown in Figure 6.1. The sensor moves along the x-axis from (-10000 m, 0) at a constant speed of 100 m/s while the target moves along the y-axis from (0, 20000 m) at a constant speed of -12 m/s.

Fig. 6.1 Sensor-Target Crossing Scenario

The initial states are

x^s = [x \;\; \dot{x} \;\; y \;\; \dot{y}]^T = [-10000 \, m \;\; 100 \, m/s \;\; 0 \, m \;\; 0 \, m/s]^T     (6.1a)
x^t = [x \;\; \dot{x} \;\; y \;\; \dot{y}]^T = [0 \, m \;\; 0 \, m/s \;\; 20000 \, m \;\; -12 \, m/s]^T     (6.1b)

The sensor provides range and range rate measurements with measurement error variances of 10^2 m^2 and 0.1^2 (m/s)^2, respectively.

The sensor navigation errors are assumed to be comparable to the measurement errors, with variances of 10^2 m^2 and 0.1^2 (m/s)^2, respectively. This represents a rather extreme case to emphasize the effects. A regular extended Kalman filter (EKF) is implemented based on the following discrete-time second-order kinematic model (nearly constant velocity):

x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} \frac{1}{2}T^2 & 0 \\ T & 0 \\ 0 & \frac{1}{2}T^2 \\ 0 & T \end{bmatrix} w_k     (6.2)

where the process noise w ~ N(0, Q) is zero-mean Gaussian. The covariance matrix Q = diag([\sigma_x^2 \;\; \sigma_y^2]) uses the particular values \sigma_x = \sigma_y = 0.1 m/s^2 for both the x and y axes in the first simulation. The initial state estimate is drawn from the initial estimation error covariance of 100^2 m^2 and 5^2 (m/s)^2, respectively, for the position and velocity in the x and y axes:

P_0 = diag([100^2 \;\; 5^2 \;\; 100^2 \;\; 5^2])                     (6.3)
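The model of (6.2)-(6.3) can be set up in MATLAB as below. The sampling interval T is not stated in this excerpt, so it is left as an assumed parameter; in the text Q denotes the acceleration-level covariance, and mapping it into state coordinates through the noise-gain matrix is one common implementation choice, shown here only as an illustration.

% Nearly-constant-velocity model of (6.2)-(6.3) for the state [x; xdot; y; ydot].
T  = 1;                                    % assumed sampling interval, s (not given in the text)
F  = [1 T 0 0; 0 1 0 0; 0 0 1 T; 0 0 0 1]; % state transition matrix of (6.2)
G  = [T^2/2 0; T 0; 0 T^2/2; 0 T];         % noise gain matrix of (6.2)
sigx = 0.1; sigy = 0.1;                    % acceleration noise std dev, m/s^2
Qs = G*diag([sigx^2 sigy^2])*G';           % process noise covariance in state coordinates
P0 = diag([100^2 5^2 100^2 5^2]);          % initial estimation error covariance (6.3)
R  = diag([10^2 0.1^2]);                   % range / range-rate measurement noise variances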

Under the same initial conditions, a Schmidt-EKF is implemented to calculate the predicted measurement covariance. One hundred Monte Carlo runs are used to calculate the root mean squared (RMS) errors.


Fig. 6.2 Sample Sensor Trajectory

Figure 6.2 shows the details of the sample sensor trajectory, with the blue-colored truth (dashed line) and the green-colored navigation solution (solid line).

Fig. 6.3 Sample Target Trajectory (Run 1)

Figure 6.3 shows the details of the sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).


Fig. 6.4 Sample Target Trajectory (Run 2)

Figure 6.4 is another sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).

Fig. 6.5 Comparison of Position Errors


Figure 6.5 shows the RMS errors and standard deviations for the position estimate produced by the regular EKF and the consider EKF (the term consider EKF or consider filter is used interchangeably with Schmidt-EKF), respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Additionally, the consider filter provides a better performance bound than the optimistic regular EKF.

Fig. 6.6 Comparison of Velocity Errors

Figure 6.6 shows the RMS errors and standard deviations for the velocity estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

The discrepancy between the RMS errors and standard deviations of the consider EKF, as shown in Figures 6.5 and 6.6, may be explained by the fact that the standard deviations are calculated from the propagated and updated covariances with a rather large process noise covariance Q, whereas both the target and the sensor are moving at constant speed.

6.2 Simulation with Different Process Noise

In the next simulation, the process noise covariance Q is reduced to 0.001^2 (m/s^2)^2. The sensor navigation errors are also reduced to 5^2 m^2 and 0.05^2 (m/s)^2 (twice better). Compared to Figures 6.5 and 6.6, the errors in Figures 6.7 and 6.8 are all smaller due to the use of smaller Q and P^s. More importantly, the standard deviations calculated from the propagated and updated covariance of the consider EKF (CEKF) now match well with the actual RMS errors. The CEKF-to-RMS error match indicates that the consider covariance can better predict the targeting performance for target tracking, sensor fusion, and resource management.

Fig. 6.7 Position Errors with Small Q

Fig. 6.8 Velocity Errors with Small Q

By comparing Figures 6.7 and 6.8, one would ask why the RMS position errors of the regular EKF and the consider EKF differ so much in Figure 6.7, whereas their RMS velocity errors fairly agree in Figure 6.8. Note that in the simulation, navigation errors are introduced only in the sensor position, not in the sensor velocity, and the sensor position errors are translated directly into the target position errors. The measurements used by both filters include both range and range rate as in (5.30). The range rate measurements, subject only to random noise, help produce relatively good velocity estimates, as shown in Figure 6.8. Significant sensor velocity errors could, however, be added in the simulation. Furthermore, the velocity components of an integrated navigation solution are typically more accurate than the position counterparts.

In the past, heuristic approaches have been used to adjust Q and R of a Kalman filter to prevent divergence due to unestimated error terms (which are totally ignored) and to match the filter's covariance matrix with the data-driven RMS. Such "blind" adjustments do not explore the inherent structures of the actual errors. In contrast, the consider filter explicitly takes into account the distribution and structure of the errors, even without estimating the error terms, thus providing better results.

6.3 Evaluation with Unknown Truth

In the following simulation, the sample mean and covariance are calculated by the Monte Carlo simulation without knowing the target state truth. The estimated-mean centered RMS, which is the root sample covariance trace (RSCT) defined in Section 5.3, is calculated. Note that the estimated-mean centered RMS has a different meaning than the truth-centered RMS: it does not indicate how close the estimates are to the truth but how dispersed the estimates are. It may reveal how consistent the estimates are, but the estimates may be biased. So two aspects need to be shown: one is the bias and the other is the spread around the bias. If the target state estimate is unbiased, the sample estimates are close to the RMS error values.

Fig. 6.9 Truth-Centered Position Errors

Figure 6.9 shows the RMS errors and standard deviations for the position estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Besides, the consider filter provides a better performance bound than the regular EKF, which is too optimistic.

Fig. 6.10 Truth-Centered Velocity Errors

Figure 6.10 shows the RMS errors and standard deviations for the velocity estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

Fig. 6.11 Position Bias

Figure 6.11 shows the position biases (i.e., the difference between the sample mean and the state truth) of the regular Kalman filter and the consider Kalman filter, which are comparable.

Fig. 6.12 Velocity Bias

Figure 6.12 shows the velocity biases (i.e., the difference between the sample mean and the state truth) of the regular Kalman filter and the consider Kalman filter. The CKF biases are comparable with, but slightly better than, the regular Kalman filter biases.

Fig. 6.13 Sample Mean-Centered Position Errors

Figure 6.13 shows the position errors in terms of the sample mean-centered RMS error values, or the root sample covariance trace (RSCT), of the regular Kalman filter (RKF) and the consider or Schmidt Kalman filter (CKF). From Figure 6.13, it is clear that the CKF is much better than the RKF. Because of the small position bias shown in Figure 6.11, the sample mean-centered RMS (RSCT) of Figure 6.13 is expected to be close to the truth-centered RMS shown in Figure 6.9, and this is indeed true.

Fig. 6.14 Sample Mean-Centered Velocity Errors

Similarly, Figure 6.14 shows the velocity errors in terms of the sample mean-centered RMS error values, or the root sample covariance trace (RSCT), of the regular Kalman filter and the consider Kalman filter. The consider Kalman filter outperforms the regular Kalman filter. Because of the small velocity bias shown in Figure 6.12, the sample mean-centered RMS (RSCT) of Figure 6.14 is very close to the truth-centered RMS shown in Figure 6.10.

Fig. 6.15 Predicted Range Errors (the CEKF anticipates navigation errors whereas the REKF is optimistic)

Fig. 6.16 Predicted Range Rate Errors (the CEKF anticipates navigation errors whereas the REKF is optimistic)

Finally, Figures 6.15 and 6.16 show the two diagonal elements of the predicted measurement error covariance (square roots). Comparison of (5.12) (and (5.17b)) to (5.2e) indicates an extra time-varying term, which is a function of the underlying navigation error covariance P^s and the observation matrix H_s (and further time-varying terms, which are functions of the cross-correlation P^{ts}_k and the observation matrices H_t and H_s). Note that H_s is geometry-dependent and so is H_t.

Fig 6.17 RMS Value

Figure 6.17 shows the RMS value of the consider covariance in the tracking filter. Here the green line indicates the RMS value of the regular EKF and the blue line indicates the consider EKF (SKF).

Fig 6.18 RMS Position

Figure 6.18 shows the RMS position of the target tracking filter. Here the green line indicates the RMS value of the regular EKF and the blue line indicates the consider EKF (SKF).

Fig 6.19 RMS Correlation

Figure 6.19 shows the RMS correlation of the Schmidt Kalman filter. Here the blue line indicates the gain of the consider EKF (SKF).

6.4 Comparison between Kalman Filter and Schmidt Kalman Filter

Parameters | Kalman Filter | Schmidt Kalman Filter
Coordinates in the presence of navigation errors | Does not give the exact coordinates of the target | Gives the exact coordinates of the target
Speed | Of the order of 1 ms | Same as the Kalman filter but a touch more (~1 ms)
Accuracy | Best suited in the absence of navigation errors | Very high accuracy in the presence of navigation errors
Standard deviation in the presence of navigation errors | Very high when compared to the Schmidt Kalman filter | Less
RMS error in the presence of navigation errors | Less standard deviation | Almost equal to the Kalman filter

Table 6.1 Comparison between KF and SKF

6.5 Applications

As the project is used for target tracking, the applications of this filter are enormous, and in the presence of navigation errors the filter has proved to be very effective in giving the exact coordinates. Some of the applications are given below:

1. Radar tracking
2. Satellite navigation systems
3. Weather forecasting (to detect tornadoes or cyclones)
4. Navigation systems
5. Orbit determination
6. Simultaneous Localization and Mapping

6.6 Conclusion

The regular EKF predicted measurement errors are small; the regular EKF cannot describe large errors due to the navigation errors and is thus optimistic. In contrast, the consider EKF faithfully produces large errors because it anticipates the navigation errors. The net effect of incorporating navigation errors in the tracking filter is a time-varying adaptive adjustment of the measurement error covariance.

Conclusion and Future Scope

Conclusion

Navigation errors are a performance-limiting factor for target tracking as ranging measurements increase in accuracy. In the past, a sensor platform was typically assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative location of a target for such applications as a homing missile. However, for other applications such as sensor data fusion or weapons cueing, the sensor/platform location accuracy is critical for high-speed maneuvering target tracking, as the target's relative location has to be transferred into a global reference frame and the sensor navigation errors get magnified into the target position solution. Without accounting for such errors, a regular Kalman filter produces optimistic results. For tracking mission success, this requires the use of such techniques as the Schmidt-Kalman Filter (SKF) [9] to account for navigation errors.

This work examined the effect of sensor location errors (navigation errors) on target tracking performance. The Schmidt-Kalman filter (also known as the consider Kalman filter), which incorporates the navigation error covariance, was shown not only to be more consistent but also to produce smaller estimation errors. Finally, it has been seen that the proactive approach to sensor management with consider covariance is effective in the presence of navigation errors.

Future Scope

The thesis can be extended to formulate maneuvering (direction-changing) targets using such algorithms as the interacting multiple model (IMM) estimator. Another effort is to apply the consider covariance for track-to-track fusion. Yet another effort is to develop a track bias calibration and removal algorithm. In general, comparison to Monte Carlo methods should give more insight into the SKF's performance.

Since our state-space representation assumes the normal distribution, we are likely to face difficulties. The underlying problem is that in state space the distribution we are tracking is not Gaussian, even if our sensor measurements are. Generally, the problem with using simple parametric distributions (like Gaussians) is that error propagation through non-linear functions can transform the p.d.f. into highly non-Gaussian shapes. For instance, in stereo, the input p.d.f. is quite nearly Gaussian while the transformed range p.d.f. is not. There are many solutions, such as nonparametric density estimation [25] or tracking higher-order moments to estimate the a posteriori density more accurately [26], to name just two.

In the near future there are plans to use the estimation machinery developed in this work to improve the structure of maps built from moving robot platforms. This leads naturally into Structure from Motion (SFM) and Simultaneous Localization and Mapping (SLAM) research – problems that we are actively working on.

Appendix

This appendix introduces the reader to programming with the software package MATLAB. MATLAB is a mathematical software package based on matrices. The package consists of an extensive library of numerical routines, easily accessed two- and three-dimensional graphics, and a high-level programming format. The ability to implement and modify programs quickly makes MATLAB an appropriate format for exploring and executing the algorithms in this textbook. It is assumed that the reader has had previous experience with a high-level programming language and is familiar with the techniques of writing loops, branching using logical relations, calling subroutines, and editing. These techniques are directly applicable in the windows-type environment of MATLAB. The reader should work through the following tutorial introduction (MATLAB commands are in typewriter type). The examples illustrate typical input and output from the MATLAB Command Window. To find additional information about commands, options, and examples, the reader is urged to make use of the on-line help facility and the Reference and User's Guides that accompany the software.

1.1 Arithmetic Operations

+    Addition
-    Subtraction
*    Multiplication
/    Division
^    Power
pi, e, i    Constants

Ex. >>(2+3*pi)/2
ans = 5.7124

1.2 Built-in Functions

Below is a short list of some of the functions available in MATLAB. Descriptions of other available functions may be found by using the on-line help facility.

abs(#)  sin(#)  cos(#)  tan(#)  sqrt(#)  exp(#)  log(#)  log10(#)  tanh(#)  cosh(#)  floor(#)  acos(#)

The following example illustrates how functions and arithmetic operations are combined.

Ex. >>3*cos(sqrt(4.7))
ans = -1.6869

The default format shows approximately five significant decimal figures. Entering the command format long will display approximately 15 significant decimal figures.

Ex. >>format long
>>3*cos(sqrt(4.7))
ans = -1.68686892236893

1.3 Assignment Statements

Variable names are assigned to expressions by using an equal sign.

Ex. >>a=3-floor(exp(2.9))
a = -15

A semicolon placed at the end of an expression suppresses the computer echo (output).

Ex. >>b=sin(a);     Note: b was not displayed.
>>2*b^2
ans = 0.8457

1.4 Defining Functions

In MATLAB the user can define a function by constructing an M-file (a file ending in .m) in the M-file Editor/Debugger. Once defined, a user-defined function is called in the same manner as built-in functions.

Ex. Place the function fun(x) = 1 + x - x^2/4 in the M-file fun.m. In the Editor/Debugger one would enter the following:

function y=fun(x)
y=1+x-x.^2/4;

We will explain the use of ".^" shortly. Once this function has been saved as an M-file named fun.m, it can be called in the MATLAB Command Window in the same manner as any function.

Ex. >>cos(fun(3))
ans = -0.1782

Different letters could be used for the variables and a different name could be used for the function, but the same format would have to be followed. A useful and efficient way to evaluate functions is to use the feval command. This command requires that the function be called as a string.

1.5 Matrices

All variables in MATLAB are treated as matrices or arrays. Matrices can be entered directly:

Ex. >>A=[1 2 3;4 5 6;7 8 9]
A =
1 2 3
4 5 6
7 8 9

Semicolons are used to separate the rows of a matrix. Note that the entries of the matrix must be separated by a single space. Alternatively, a matrix can be entered row by row:

Ex. >>A=[1 2 3

4 5 6
7 8 9]
A =
1 2 3
4 5 6
7 8 9

Matrices can be generated using built-in functions.

Ex. >>Z=zeros(3,5);     creates a 3 x 5 matrix of zeros
>>X=ones(3,5);          creates a 3 x 5 matrix of ones
>>Y=0:0.5:2             creates the displayed 1 x 5 matrix
Y =
0  0.5000  1.0000  1.5000  2.0000
>>cos(Y)                creates a 1 x 5 matrix by taking the cosine of each entry of Y
ans =
1.0000  0.8776  0.5403  0.0707  -0.4161

The components of matrices can be manipulated in several ways.

Ex. >>A(2,3)             select a single entry of A
ans = 6
>>A(1:2,2:3)             select a submatrix of A
ans =
2 3
5 6
>>A([1 3],[1 3])         another way to select a submatrix of A
ans =
1 3
7 9
>>A(2,2)=tan(7.8);       assign a new value to an entry of A

Additional commands for matrices can be found by using the on-line help facility or by consulting the documentation accompanying the software.

1.6 Array Operations

One of the most useful characteristics of the MATLAB package is the number of functions that can operate on the individual elements of a matrix. This was demonstrated earlier when the cosine of the entries of a 1 x 5 matrix was taken. The matrix operations of addition, subtraction, and scalar multiplication already operate elementwise, but the matrix operations of multiplication, division, and power do not. These three operations can be made to operate elementwise by preceding them with a period: .*, ./, and .^. It is important to understand how and when to use these operations. Array operations are crucial to the efficient construction and execution of MATLAB programs and graphics.

Matrix Operations
+    Addition
-    Subtraction
*    Multiplication
^    Power
'    Conjugate transpose

Ex. >>B=[1 2;3 4];
>>C=B'                   C is the transpose of B
C =
1 3
2 4
>>3*(B*C)^3              3(BC)^3
ans =
13080  29568
29568  66840

>>x=0:0.0707 -0. Options and additional features of graphics in MATLAB can be found in the on-line facility and the documentation accompanying the software. >>A"2 produces the matrix product AA ans= 7 1 0 15 22 >>A. >>A=[1 2.5403 0.1:pi. and then each entry in the matrix cos(x) is squared using the command. 74 . >>z=cos(x). The last term. Ex. The semicolon is necessary to suppress the echoing of the matrices x.z. >>y=cos(x). The plot command is used to generate graphs oftwo-dimensional functions. The following example will create the plot of the graphs of y = cos (x) and y = cos2 (x) over the interval [0.1./2) divides each entry of A by 2. The next two lines define the two functions.and three-dimensional plots of curves and surfaces. and z on the command screen. In the third line the use of the array operation ". z k ) where z k = cos2 (x k ).7 Graphics MATLAB can produce two. then takes the cosine of each entry ans= 0. x and z.Ex. >>plot(x. y. The third and fourth terms.x.n].y."2 squares each entry of A ans= 1 4 916 >>cos(A. 'o'. The first two terms in the plot command. produce the plot of y = cos2(x). The fourth line contains the plot command that produces the graph. Note that the first three lines all end in a semicolon.'o') The first line specifies the domain with a step size of 0. First the cosine of each entry in the matrix x is taken.8776 0. x and y.3 4].4161 1.~2. plot the function y = cos (x). results in o's being plotted at each point (x k ."" is critical.

The graphics command fplot is a useful alternative to the plot command. The form of the command is fplot('name',[a,b],n). This command creates a plot of the function name.m by sampling n points in the interval [a, b]. The default number for n is 25.

Ex. >>fplot('tanh',[-2,2])     plots y = tanh(x) over [-2, 2]

The plot and plot3 commands are used to graph parametric curves in two- and three-dimensional space, respectively. These commands are particularly useful in the visualization of the solutions of differential equations in two and three dimensions.

Ex. The plot of the ellipse c(t) = (2cos(t), 3sin(t)), where 0 <= t <= 2*pi, is produced with the following commands:
>>t=0:0.2:2*pi;
>>plot(2*cos(t),3*sin(t))

Ex. The plot of the curve c(t) = (2cos(t), t^2, 1/t), where 0.1 <= t <= 4*pi, is produced with the following commands:
>>t=0.1:0.1:4*pi;
>>plot3(2*cos(t),t.^2,1./t)

Three-dimensional surface plots are obtained by specifying a rectangular subset of the domain of a function with the meshgrid command and then using the mesh or surf command to obtain a graph. These graphs are helpful in visualizing the solutions of partial differential equations.

Ex. >>x=-pi:0.1:pi;
>>y=x;
>>[x,y]=meshgrid(x,y);
>>z=sin(cos(x+y));
>>mesh(z)

1.8 Loops and Conditionals

Relational operators
==   Equal to
~=   Not equal to
<    Less than
>    Greater than
<=   Less than or equal to
>=   Greater than or equal to

Logical operators
~    Not (complement)
&    And (true if both operands are true)
|    Or (true if either or both operands are true)

Boolean values
1    True
0    False

The for, if, and while statements in MATLAB operate in a manner analogous to their counterparts in other programming languages. These statements have the following basic form:

for (loop-variable = loop-expression)
    executable-statements
end

if (logical-expression)
    executable-statements
else
    executable-statements
end

while (while-expression)
    executable-statements
end

Ex. The following example shows how to use nested loops to generate a matrix. The following file was saved as an M-file named nest.m. Typing nest in the MATLAB Command Window produces the matrix A. Note that, when viewed from the upper-left corner, the entries of the matrix A are the entries in Pascal's triangle.

for i=1:5
    A(i,1)=1; A(1,i)=1;
end
for i=2:5
    for j=2:5
        A(i,j)=A(i,j-1)+A(i-1,j);
    end
end
A

Ex. The break command is used to exit from a loop.

for k=1:100
    x=sqrt(k);
    if ((k>10)&(x-floor(x)==0))
        break
    end
end
k

Ex. The disp command can be used to display text or a matrix.

n=10;
k=0;
while k<=n
    x=k/3;
    disp([x x^2 x^3])
    k=k+1;
end

1.9 Programs

An efficient way to construct programs is to use user-defined functions. These functions are saved as M-files. These programs allow the user to specify the input and output parameters, and they are easily called as subroutines in other programs. The following example allows one to visualize the effect of moding out Pascal's triangle with a prime number. Type the following function in the MATLAB Editor/Debugger and then save it as an M-file named pasc.m.

function P=pasc(n,m)
%Input  - n is the number of rows
%       - m is the prime number
%Output - P is Pascal's triangle
for j=1:n
    P(j,1)=1;
    P(1,j)=1;
end
for k=2:n
    for j=2:n
        P(k,j)=rem(P(k,j-1)+P(k-1,j),m);
    end
end

Now in the MATLAB Command Window enter P=pasc(5,3) to see the first five rows of Pascal's triangle mod 3. Or try P=pasc(175,3); (note the semicolon) and then type spy(P) (generates a sparse matrix plot for large values of n).

1.10 Conclusion

At this point the reader should be able to create and modify programs based on the algorithms in this textbook. Additional information on commands and information regarding the use of MATLAB on your particular platform can be found in the on-line help facility or in the documentation accompanying the software.

I & II). Elsevier.B. Tapley. Seattle. M. Danford. and Control.R.” IEEE Trans. R.” in Advances in Control Systems.M. Paradowski. E. Kadar. [2] S. Volume 1. C. Gill. 33(1). of Fusion’2005. Yang. S. Novoselov. New York. Teixeira. [8] L. and G. 1995. 2000. Vol.A. and I. Chan. Springer.References [1] Y. “Geometric Factor in Target Positioning and Tracking. [10] B. [5] O. SPIE. Leondes (Ed. Multitarget-Multisensor Tracking: Principles and Techniques. Studer. New York. Born. and A. Stochastic Models. [6] R.1. on Aerospace and Electronic Systems. MA. 2007.E. Yang. and A. [7] R. “Applications of State-Space Methods to Navigation Problems. Vol. [11] C. S. Farina and F.” Proc. Poore. 293-340. “Uncertainty Elllipses and Their Application to Interval Estimation of Emitter Position. [9] S.R. 79 . 1966. S. Poore. Burlington. Popoli. YBS Publishing. Wiley. “Chapter 6: Consider Covariance Analysis” in Statistical Orbit Determination. Inc. Novoselov.D. [12] C. Radar Data Processing (Vol. Performance Monitoring and Prediction for Active Management of Distributed Sensors Fusion in Target Tracking. 1999. Artech House. and Applications. “Mitigating The Effects of Residual Biases with Schmidt-Kalman Filtering.” Proc.4: Consider Parameters” in Satellite Orbits: Models. Boston.H. “Mitigation of Biases Using the Schmidt-Kalman Filter. 2004. Blackman and R. Herman. 6. Paffenroth. 1985. Storrs. 1979. CT. November 2008. [3] A. Academy Press. 126-133. 6699.” Fusion’2009.). Academic Press. Estimation. July 2009. Gadaleta. July 2005. Schmidt. WA. Maybeck. Bar-Shalom and X. [4] P. Design and Analysis of Modern Tracking Systems. Schutz. Montenbruck and E.Y. Technical Report (FA8650-08-C1407). Methods. B. Blasch. S. “Section 8. Li. January 1997.M.

[13] C. Yang, M. Miller, E. Blasch, and T. Nguyen, "Proactive Radio Navigation and Target Tracking," Proc. of ION-GNSS'2009, Savannah, GA, September 2009.
[14] Ingvar Strid and Karl Walentin, "Block Kalman Filtering for Large-Scale DSGE Models," April 2009.
[15] Martin Møller Andreasen, "Non-linear DSGE Models, The Central Difference Kalman Filter, and The Mean Shifted Particle Filter," 2008.
[16] Catherine L. Thornton, Triangular Covariance Factorizations for Kalman Filtering, Technical Report, 15 October 1976.
[17] S.J. Julier and J.K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," 1997.
[18] Eric A. Wan and Rudolph van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation."
[19] S.J. Julier and J.K. Uhlmann, "Unscented Filtering and Nonlinear Estimation," Proceedings of the IEEE, 2004, 401-422.
[20] T. Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice Hall, 2000.
[21] H.W. Sorenson, Parameter Estimation: Principles and Problems, Marcel Dekker, Inc., 1980.
[22] Thomas Kailath, Lectures Notes on Wiener and Kalman Filtering, Springer-Verlag, 1981.
[23] A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1965.
[24] M. Isabel Ribeiro, "Gaussian Probability Density Functions: Properties and Error Characterization," Institute for Systems and Robotics, February 2004.
[25] A. Smith and A. Gelfand, "Bayesian Statistics without Tears: A Sampling-Resampling Perspective," The American Statistician, 46(2):84-88, May 1992.
[26] H. Van Trees, Detection, Estimation and Modulation Theory, John Wiley & Sons, 1968.
