Thursday, February 23, 2012

Kalman Filtering for Dummies - Part III

This part is adapted from Wikipedia. The Wikipedia text on the Kalman filter is licensed under a Creative Commons license, so I hereby acknowledge its authors.

Let's do some recap.

q[k] = F * q[k-1] --- Equation -1
w[k] = H * q[k] --- Equation 0

Equations -1 and 0 are the state equation and the output equation, respectively, where q is the state vector, w is the output vector, F is the state transition matrix and H is the output matrix. I have changed the notation from the previous part so as not to confuse it with the Kalman formulas below.

Moving on, the Kalman formulas will now be applied. The formulas given below are adapted from Wikipedia, with some notation modified.

Before that, I would like to explain the subscript notation:

k-1|k-1: This refers to the previous filtered sample.
k|k-1: This is the a priori prediction of the Kalman filter. Simply put, these are the predicted variables.
k|k: This is the a posteriori update of the Kalman filter. These are the updated variables, i.e. the filtered data.

From Equations 1 and 2, we predict the state variables q and the state covariance P. Note that at sampling time 0, q0|0 can take any value, while P0|0 can be a zero matrix if the initial state is known, or a diagonal matrix (typically with large values) if it is unknown. Q is the process covariance, or the prediction covariance; put differently, it is the variance introduced during the prediction stage.
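For reference, these are Equations 1 and 2 (the prediction step) as they appear in the Wikipedia article, rewritten here in the notation above (state q, no control input, since our model has none):

```latex
q_{k|k-1} = F \, q_{k-1|k-1}                        \quad \text{(Equation 1)}
P_{k|k-1} = F \, P_{k-1|k-1} \, F^{T} + Q           \quad \text{(Equation 2)}
```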

Obtaining the value of Q is not practical because the process is quite hidden. But I would use F * transpose(F). I do not guarantee this value to be correct, but I have used it without any major problems. Sometimes it does come down to intuition too.

From Equation 3, z is the measured data; in the example from the previous part, the measured data are position, velocity and acceleration. H * q is the output computed from the state predicted in Equation 1, and y is the residual between the measured data and the predicted data.
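Equation 3 (the residual, or innovation), in the same notation:

```latex
y_{k} = z_{k} - H \, q_{k|k-1}                      \quad \text{(Equation 3)}
```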

Equations 4 and 5 are just the calculation of the Kalman gain from the various covariances. This amounts to calculating a degree of trust, which is later used to update the predicted state variables. R is the measurement covariance, needed because sensors do not always give correct data. In our position-velocity-acceleration example, since all measurements are independent of each other, R is just a diagonal matrix: the first diagonal element is the variance of the position measurement, the second that of the velocity measurement, and the third that of the acceleration measurement.
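Equations 4 and 5 (the residual covariance S and the Kalman gain K), again in the notation above:

```latex
S_{k} = H \, P_{k|k-1} \, H^{T} + R                 \quad \text{(Equation 4)}
K_{k} = P_{k|k-1} \, H^{T} \, S_{k}^{-1}            \quad \text{(Equation 5)}
```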

Obtaining the measurement covariance is also quite impractical, unless there is another, more accurate sensor against which the variance of each sensor can be estimated. Sometimes the tuning of R will come down to intuition.

Equation 6 is the correction of the predicted state variables: the product of the Kalman gain and the residual is added to the prediction. Equation 7 is the corresponding update of the state covariance.
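Equations 6 and 7 (the a posteriori update step), completing the cycle:

```latex
q_{k|k} = q_{k|k-1} + K_{k} \, y_{k}                \quad \text{(Equation 6)}
P_{k|k} = (I - K_{k} H) \, P_{k|k-1}                \quad \text{(Equation 7)}
```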

In part IV, I will give a coding example in Matlab for the position, velocity and acceleration system. I will also plot the data to show how effective the Kalman filter algorithm is, despite the presence of noise.
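Ahead of the Matlab example in Part IV, here is a minimal sketch of one predict-update cycle in Python with NumPy. The sampling period, the measurement z and the R values below are made up purely for illustration; H is the identity because position, velocity and acceleration are all measured directly, and Q uses the F * transpose(F) heuristic from the text.

```python
import numpy as np

dt = 0.1  # assumed sampling period (made up for this sketch)

# State: [position, velocity, acceleration], constant-acceleration model
F = np.array([[1.0, dt,  0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.eye(3)                   # all three states are measured directly
Q = F @ F.T                     # the heuristic suggested in the text
R = np.diag([0.5, 0.2, 0.1])    # made-up measurement variances

def predict(q, P):
    q_pred = F @ q                        # Equation 1
    P_pred = F @ P @ F.T + Q              # Equation 2
    return q_pred, P_pred

def update(q_pred, P_pred, z):
    y = z - H @ q_pred                    # Equation 3: residual
    S = H @ P_pred @ H.T + R              # Equation 4: residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Equation 5: Kalman gain
    q_new = q_pred + K @ y                # Equation 6: state update
    P_new = (np.eye(3) - K @ H) @ P_pred  # Equation 7: covariance update
    return q_new, P_new

# Unknown initial state: zero state, large diagonal covariance
q = np.zeros(3)
P = np.eye(3) * 1000.0
z = np.array([1.0, 0.5, 0.0])   # a fabricated first measurement

q, P = predict(q, P)
q, P = update(q, P, z)
```

With the initial covariance this large, the gain is close to the identity, so the first update pulls the estimate almost all the way to the measurement, and the covariance shrinks toward R.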

Part I
Part II
Part III
Part IV
Part V
Part VI


Kerry said...

The equations that update the Kalman gain (4, 6 and 7) run independently of the measurement. The only input to these equations is Rk. Yet I gather Rk and Qk are generally not measured? If they are constant estimates, then the Kalman gain equations become filter design equations. That is, we can run the equations offline, independent of any measurement, until the value of K settles to a constant. Then the Kalman filter is implemented at runtime by equations 3 and 6. Is my understanding correct?

Khin Hooi Ng said...


First of all, I have to mention that I am no expert in this algorithm.

But from my perspective, the initial value of the estimate covariance, P-sub-k|k, does affect the Kalman gain.

If the initial values of your states are known, the estimate covariance should be initialized to zero; otherwise, it should be initialized with a large value.

If the estimate covariance has already converged to a steady-state value (as you mentioned), the user no longer has the freedom to specify whether the initial values of the state variables are known or not.

Hope this answers your question.

P.s. For the steady-state version of the algorithm, you might be interested in the alpha-beta filter, also known as the alpha-beta tracker. That filter and the Kalman filter have much in common; the alpha-beta filter is essentially a steady-state Kalman filter.
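To illustrate the steady-state behaviour discussed in this exchange, here is a quick sketch (Python, with a made-up scalar system) showing that iterating the covariance and gain equations (2, 4, 5 and 7) offline, with no measurements at all, drives the Kalman gain to a constant:

```python
# Scalar system; F, H, Q, R values are invented purely for illustration.
F, H = 1.0, 1.0
Q, R = 0.01, 0.5

P = 1000.0        # large initial covariance: initial state unknown
gains = []
for _ in range(200):
    P_pred = F * P * F + Q       # Equation 2: predicted covariance
    S = H * P_pred * H + R       # Equation 4: residual covariance
    K = P_pred * H / S           # Equation 5: Kalman gain
    P = (1.0 - K * H) * P_pred   # Equation 7: updated covariance
    gains.append(K)
```

After a few dozen iterations K stops changing, which is exactly the constant gain a steady-state (alpha-beta style) filter would hard-code.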