
Kalman filters

Inspiration
I was genuinely interested in the application of this topic to robotics. I am part of the UPennalizers, the
autonomous robotic soccer team at the University of Pennsylvania. I have worked in the area of
locomotion and one of the big projects for next year is to rewrite the code for the walking, diving and
standing up routine. As of now, all of these processes are done by keyframing, meaning that there are
files full of different positions all the motors have to reach (there are 25 motors in each player) in a
certain given time. The problem with this is that there is no dynamic feedback from the sensors. The
new process is meant to incorporate the information gathered by the sensors (especially the inertial
sensor) and, based on that information, generate a responsive action in each motor to perform the
desired activity.

In order for this implementation to work efficiently, Kalman filters, tracking and data fusion can be
leveraged.

Kalman filters
The Kalman filter is an algorithm that estimates the true value of a quantity from measurements affected
by uncertainty, such as statistical noise. It operates on a continuous series of measurements over a certain
time. It is also a recursive process, since the output is continuously fed back to the inputs of the
filter.

- Inputs
The inputs to the Kalman filter are the initial estimate (which could simply be the first
measurement), the initial error in the estimate, the error in the measurement, and the continuous data input.
- Outputs
The output of the filter is the updated estimate, which is fed back into the filter as the new
estimate (replacing the original estimate stated in "Inputs").
- Processes
There are 3 main processes that happen when calculating the updated estimate:
• Calculate the Kalman gain:
This process takes as inputs the error in the estimate (coming from the third process
explained in this section) and the error in the measurement; it weighs these two errors against
each other to obtain a more reliable estimate.
• Obtain the current estimate:
The inputs to this process are the Kalman gain from process 1, the previous estimate and the
measured value (the one that comes directly from the data input we are measuring).
• Finally, there is a new error estimation that takes as inputs the Kalman gain from process 1
and the previous error in the estimate. Its output goes back to feed the error-in-the-estimate
input of process 1.

As we can see, the processes are interconnected, with the outputs of one feeding the inputs of the
others and vice versa; that is why we can say it is a recursive process.
1D Version
Calculating the Kalman gain:
The Kalman gain is defined as:

$$K_g = \frac{E_{es}}{E_{es} + E_{mea}}$$
Where Ees is the error in the estimate and Emea is the error in the measurement. What the Kalman gain
does is very smart: the gain gets close to one when the error in the measurement is small, while if the error
in the measurement is very large, the Kalman gain is reduced toward zero. This becomes very relevant in the next
step.
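As a quick illustration (a minimal sketch in Python; the numeric error values are invented purely for demonstration), the gain can be computed directly from the two errors:

```python
def kalman_gain(e_est, e_mea):
    """1D Kalman gain: how much weight the new measurement gets."""
    return e_est / (e_est + e_mea)

# Small measurement error -> gain close to 1 (trust the measurement)
print(kalman_gain(e_est=2.0, e_mea=0.1))   # ~0.95
# Large measurement error -> gain close to 0 (trust the previous estimate)
print(kalman_gain(e_est=2.0, e_mea=50.0))  # ~0.04
```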

Obtaining the current estimate


The current estimate in the Kalman filter is defined as:

$$E_s = E_{sp} + K_g\,(Mea - E_{sp})$$


Where Es is the estimate, Kg is the Kalman gain, Mea is the current measurement and Esp is the previous
estimate, i.e. the output of this function the last time we performed it. It is very interesting to note that
when the error in the measurement is very large, the Kalman gain is small, meaning that it reduces the
importance of the second part of the sum in the estimate equation above, preserving the last estimate
as it is more reliable than the current one. On the other hand, when the error in the measurement is
small, we can trust the measurement and therefore give it more importance in order to update the
estimate.

Estimating the new error


The new error is estimated by:

$$E_{es} = \frac{E_{mea}\,E_{esp}}{E_{mea} + E_{esp}}$$
Which can also be written as:

$$E_{es} = (1 - K_g)\,E_{esp}$$
Where Eesp is the error in the estimate from the previous step. It is very easy to see in the second expression that
if the Kalman gain is large (meaning a small error in the measurement), then the error in the estimate will tend to
zero, as we are getting closer to the actual value that we are ultimately looking for. On the contrary, if
the Kalman gain is very small, the error in the estimate will stay close to the previous one.
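Putting the three 1D formulas together, a minimal sketch of the full recursion could look like the following. The readings and initial values are made up purely for illustration, and the measurement error is assumed constant.

```python
def kalman_1d(measurements, est, e_est, e_mea):
    """Run the 1D Kalman filter recursion over a series of measurements.

    est   -- initial estimate
    e_est -- initial error in the estimate
    e_mea -- error in the measurement (assumed constant here)
    """
    for mea in measurements:
        kg = e_est / (e_est + e_mea)     # 1. Kalman gain
        est = est + kg * (mea - est)     # 2. updated estimate
        e_est = (1 - kg) * e_est         # 3. updated error in the estimate
    return est, e_est

# Noisy readings of a quantity whose true value is around 72 (made-up data)
readings = [75, 71, 70, 74]
estimate, error = kalman_1d(readings, est=68, e_est=2, e_mea=4)
print(estimate, error)  # the estimate moves toward 72 while the error shrinks
```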

Multidimensional Version
For the multidimensional version things change slightly.

First of all, the initial state is not a single measurement or value but a set of them, and it is denoted X0. It
typically contains the positions and velocities in all the dimensions we are trying to track. We also have a
process (state) covariance matrix, P, that represents the error in the estimate.

An example of a state matrix in 2D would be the following:

$$X = \begin{bmatrix} x \\ y \\ \dot{x} \\ \dot{y} \end{bmatrix}$$
Where x and y are the positions and the other two values are the corresponding velocities in both.
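As a minimal sketch (the numbers are arbitrary), this state and the covariance matrix that represents its error can be written directly as NumPy arrays:

```python
import numpy as np

# Initial state X0: positions x, y and their velocities (arbitrary values)
X0 = np.array([[0.0],   # x
               [0.0],   # y
               [1.0],   # x_dot
               [0.5]])  # y_dot

# Process (state) covariance matrix P0: the error in each part of the estimate.
# Using a diagonal matrix assumes the initial errors are uncorrelated.
P0 = np.diag([1.0, 1.0, 0.25, 0.25])
print(X0.shape, P0.shape)  # (4, 1) (4, 4)
```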

Obtaining the current estimate


The state matrix is updated with every iteration by following this formula for a stochastic, time-variant
linear system:

$$X_k = A\,X_{k-1} + B\,u_k + w_{k-1}$$

Where Xk is the current state matrix (the one "to be updated", in a certain way) and Xk-1 is the previous state
matrix. A is the matrix that transforms the previous state based on the inherent
characteristics of the process. For example, if we follow the example of motion, the A matrix has ones on
the diagonal and contains the change in time in the entries that couple position to velocity, so that when it is
multiplied by the previous state matrix, it produces 1 * x + (change in time) * velocity for each position.

The B matrix is the input or control matrix. Again, going back to the example of motion, this change
would correspond to the acceleration part. Matrix B only handles the time-related part, while the vector
u, also called the control variable matrix, holds the information related to the specific process that the
system is undergoing. In other words, the matrix B deals with the general characteristics of the process
(for example, motion that has a second-order change in time, like a falling object), and the vector u holds
the value for that specific process (for example, g for the falling object, or a for a given motion).

So, in summary for this formula, A expresses state transition and B the changes due to the input or
control. In the example of motion, A expresses the first order changes and B the second order changes
in time (i.e. velocity and acceleration).

Finally, Wk-1 represents the process noise vector.
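A minimal sketch of this prediction step for the 2D motion example, assuming a time step dt and using gravity as the control input (the falling-object case mentioned above):

```python
import numpy as np

dt = 0.1    # assumed time step
g = -9.81   # control variable: gravity, acting on y in this sketch

# A: state transition; adds velocity * dt to each position
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

# B: control matrix; the second-order (acceleration) part of the motion
B = np.array([[0.0],
              [0.5 * dt**2],
              [0.0],
              [dt]])

u = np.array([[g]])                   # control variable matrix
w = np.zeros((4, 1))                  # process noise, taken as zero here

X_prev = np.array([[0.0], [100.0], [5.0], [0.0]])  # x, y, x_dot, y_dot

# X_k = A X_{k-1} + B u_k + w_{k-1}
X_k = A @ X_prev + B @ u + w
print(X_k.ravel())  # approx [0.5, 99.95, 5.0, -0.98]
```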

Then we have the measurement part of the model, which expresses the measurement in matrix form:

$$Y_k = H_k\,X_k + V_k$$

Hk is the observation matrix, and it selects which variables of the state we actually measure and integrate
into the Kalman filter process. Vk is the measurement noise.
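A minimal sketch of this measurement equation, assuming only the two positions (and not the velocities) can be measured:

```python
import numpy as np

# Observation matrix H: keeps only the position components of the state
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])

X_k = np.array([[0.5], [99.95], [5.0], [-0.98]])  # state from the prediction step
V_k = np.random.normal(0.0, 0.1, size=(2, 1))     # measurement noise (illustrative)

# Y_k = H X_k + V_k
Y_k = H @ X_k + V_k
print(Y_k.ravel())  # a noisy reading of x and y only
```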

Estimating the new error


The error in the estimate in the multidimensional version of the filter is accounted for by the covariance matrix
of the state, P. The error in the measurement is now the measurement covariance matrix, R, and we
introduce a new variable, the process noise covariance matrix, Q, which contains the information about the
noise of the process.

To estimate the new error, we have:

$$P_k = A\,P_{k-1}\,A^{T} + Q$$

Where Q is this new process noise covariance matrix.
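Continuing the same sketch, the predicted covariance for the 2D motion model would be computed like this (Q is assumed small and diagonal here):

```python
import numpy as np

dt = 0.1
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

P_prev = np.diag([1.0, 1.0, 0.25, 0.25])  # previous state covariance
Q = np.eye(4) * 1e-3                      # process noise covariance (assumed)

# P_k = A P_{k-1} A^T + Q
P_k = A @ P_prev @ A.T + Q
print(P_k)
```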

I will not discuss in depth what the covariance matrix measures, but it is defined as the matrix whose
diagonal entries are the variances of the different variables and whose off-diagonal entries are the
covariances between the corresponding pairs of variables. Remember that a 0 in an off-diagonal entry
means that the two variables are uncorrelated with each other.

Calculating the Kalman gain:


The Kalman gain for the multidimensional version is given by:

$$K = P_k\,H^{T}\,(H\,P_k\,H^{T} + R)^{-1}$$

Where R is the measurement covariance matrix, meaning the error in the measurement, and H is the
observation matrix. And as we see, we repeat the same logic as in the one-dimensional case: if the error
in the measurement is large, then the gain is small, and vice versa.
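Putting the multidimensional pieces together, here is a minimal sketch of one predict/update cycle in NumPy, reusing the matrices from the earlier sketches (the noise covariances and the measurement are invented for illustration):

```python
import numpy as np

dt = 0.1
A = np.array([[1, 0, dt, 0],        # state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],         # observation matrix: positions only
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3                # process noise covariance (assumed)
R = np.eye(2) * 0.1                 # measurement noise covariance (assumed)

def predict(X, P):
    """Prediction step (no control input in this sketch)."""
    return A @ X, A @ P @ A.T + Q

def update(X_pred, P_pred, Y):
    """Correction step: Kalman gain, updated state, updated covariance."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    X_new = X_pred + K @ (Y - H @ X_pred)    # fold the measurement in
    P_new = (np.eye(4) - K @ H) @ P_pred     # shrink the uncertainty
    return X_new, P_new

X = np.array([[0.0], [0.0], [1.0], [0.5]])   # x, y, x_dot, y_dot
P = np.diag([1.0, 1.0, 0.25, 0.25])
Y = np.array([[0.12], [0.06]])               # a made-up position measurement

X, P = predict(X, P)
X, P = update(X, P, Y)
print(X.ravel())
```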

Example
In order to see this working, and also as a starting point on the rather confusing path of thinking about
how to implement Kalman filters in autonomous robots, I followed the MATLAB example for estimating the
position of a vehicle:

The module in dark orange is the actual filter, while the part on the left is the modeling of the vehicle.

The variables to be tracked are the east and north positions and velocities of the vehicle:

$$X = \begin{bmatrix} x_e \\ x_n \\ V_{xe} \\ V_{xn} \end{bmatrix}$$

similar to the example talked through in the middle of the explanation.

And, as we can see, the definition of the model follows the same structure as above: a state equation
X[n+1] = A X[n] + B u[n] + w[n] and a measurement equation Y[n] = C X[n] + v[n], where C plays the role
of the observation matrix H. We can observe that the control part of the model (the B u[n] term) is 0.

Where

$$A = \begin{bmatrix} 1 & 0 & T_s & 0 \\ 0 & 1 & 0 & T_s \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

which, by matrix multiplication with X[n], gives: xe + Vxe * Ts, xn + Vxn * Ts, Vxe, Vxn. This means that
it is indeed calculating the new position, as it adds to the initial position the displacement given by
speed * time. Note as well that the velocities do not change.

The observation matrix is

$$C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

This matrix tells us that there are only position measurements available, just as stated in the discussion
of what C was above.
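To connect this back to the MATLAB example, a hedged Python sketch of the same model matrices (the sample time Ts and the state values are assumed here) would look like this:

```python
import numpy as np

Ts = 0.1  # assumed sample time

# State order follows the discussion above: [xe, xn, Vxe, Vxn]
A = np.array([[1, 0, Ts, 0],
              [0, 1, 0, Ts],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

# Only the two positions are measured
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])

x = np.array([[2.0], [3.0], [1.0], [0.5]])  # made-up state
print((A @ x).ravel())  # [xe + Vxe*Ts, xn + Vxn*Ts, Vxe, Vxn] -> [2.1, 3.05, 1.0, 0.5]
print((C @ x).ravel())  # positions only -> [2.0, 3.0]
```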

After performing the movements proposed in the example and graphing both the actual and the
estimated values, we can see that the filter tracks the true position well. Even when the noise signal is
amplified by 100, the filter still gives a very good estimate, remaining closer to the actual value than the
measurement by itself.

Conclusion
In summary, for the multidimensional version of the Kalman filter, we track the different variables
involved in the movement or change of the state by using matrices, a very effective way of organizing
data. At the same time, the integration of the previous data with the new measurement becomes apparent
when we see the way the filter works. The Kalman filter reminded me of the PID controller, which
tries to mitigate the error by measuring it and then applying a proportional, integral and/or derivative
gain. The hardest part of the PID controllers I have had experience with was the tuning of those
gains, which depends on the nature of the process. With the Kalman filter, on the contrary, the
gain is calculated automatically, and the feedback loop is completed.

It is definitely something I will look into in the future to implement in the programming of the robot
players for the UPennalizers, modeling every single degree of freedom as a variable. Truly, the Kalman
filter is a wonder for autonomous systems.

Bibliography
Terejanu, G. A. (2013). Discrete Kalman filter tutorial. University at Buffalo, Department of Computer
Science and Engineering, NY, 14260.

Welch, G., & Bishop, G. (1995). An introduction to the Kalman filter. University of North Carolina at
Chapel Hill, Department of Computer Science, NC.

Girija, G., Raol, J. R., Appavu, R. R., & Kashyap, S. K. (2000). Tracking filter and multi-sensor data
fusion. Sadhana, 25(Part 2), 159-167.

Van Biezen, M. (2015). Special Topics – The Kalman Filter (1-30) [video playlist]. YouTube.
https://www.youtube.com/playlist?list=PLX2gX-ftPVXU3oUFNATxGXY90AULiqnWT

MathWorks. State Estimation Using Time-Varying Kalman Filter. MATLAB example. Available at
https://www.mathworks.com/help/control/examples/state-estimation-using-time-varying-kalman-filter.html
