Kalman Filters, In My Own Words

Kalman filters are discrete systems that let you estimate a hidden state (the independent variable) from noisy measurements (the dependent variable). You model how the state produces measurements, then invert that relationship: given your measurements, you infer an estimate of the underlying state. The trick is that you assume noise exists in the measurements themselves, and also in how you've modeled the world with your math equations, because of inevitably unaccounted-for factors in the non-sterile real world.

Input variables become more valuable when modeled as a system of equations, i.e. a matrix, which makes it possible to capture the relationships between those values. Every variable in every dimension contains noise, so introducing related inputs allows weighted averaging to take place, based on the predicted change at the next step, the noise unaccounted for in the system model, and the noise introduced by the sensor inputs. The predicted changes are established by modeling your next state in terms of the previous state (e.g. x(t + 1) = x(t) + 1.23456). Once you understand the above concepts and can comfortably navigate matrices and derivatives, the only remaining task is mapping those terms into a pre-defined Kalman filter.
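To make "next state in terms of the previous state" concrete, here is a minimal sketch of a state transition in matrix form. The state, the 0.1 second time step, and the constant-velocity model are my own illustrative assumptions, not anything prescribed by the filter:

```python
import numpy as np

# Hypothetical 2-D state: [position, velocity].
# The state transition matrix F expresses x(t + 1) in terms of x(t):
# position advances by velocity * dt, velocity stays put.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])

x = np.array([0.0, 1.5])   # previous state: position 0, velocity 1.5
x_next = F @ x             # predicted next state
print(x_next)              # position advanced by 1.5 * 0.1 = 0.15
```

The same multiply applies at every step, which is what makes the relationship between the variables explicit: the matrix rows are the equations.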

A standard Kalman filter is easily conflated with an extended Kalman filter. A (non-extended) Kalman filter defines a single state transition matrix at each point in time, based only on the previous state. That matrix multiplied by the previous state, plus some noise tracked by the filter, gives an estimate that falls within the sensor measurement's multi-dimensional standard deviation. The model is completely dependent on time, and therefore linear in the previous state. Thus, it is a linear Kalman filter, and a purely linear model would never truly match the real world with high accuracy.
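The predict-then-update cycle described above can be sketched in a few lines. This is a generic textbook formulation, not tied to any particular application; the 1-D example values at the bottom are my own:

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict + update cycle of a linear Kalman filter.

    x, P : previous state estimate and its covariance
    z    : new measurement
    F, Q : state transition matrix and process noise covariance
    H, R : measurement matrix and measurement noise covariance
    """
    # Predict: propagate the state and its covariance through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: weight prediction vs. measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# One step on a 1-D example: prior at 0, measurement at 1,
# with equal confidence in each.
x_new, P_new = kf_step(
    x=np.array([0.0]), P=np.array([[1.0]]), z=np.array([1.0]),
    F=np.array([[1.0]]), Q=np.array([[0.01]]),
    H=np.array([[1.0]]), R=np.array([[1.0]]),
)
# x_new lands between the prediction (0) and the measurement (1).
```

Note that the estimate always falls between the model's prediction and the measurement, weighted by their respective noise, which is the weighted averaging described earlier.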

If a matrix is defined as a set of equations, multiple inputs can produce outputs for the same variable. The data is therefore overdetermined, and we can pick a better estimate for a particular dimension based on the estimated error along each input axis. An extended Kalman filter, in my opinion, is actually simpler to implement because there's no need to think in discrete time steps. Instead, you more generally take the derivative of each state variable with respect to each other state variable. This is more tedious, but in the abstract more straightforward. A Jacobian (the matrix of derivatives of each state equation with respect to every state variable, one variable per column) serves as the model: the time delta and the derivatives evaluated at the previous predicted state are used to estimate the overdetermined state, where the lowest error as a function of modeled noise produces a value per dimension, which populates a predicted, or output, state.
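As a sketch of what "the derivative of each state variable in terms of another state variable" looks like in practice, here is a hand-derived Jacobian for a hypothetical [px, py, heading, speed] state with a nonlinear transition. The state layout and motion model are assumptions for illustration only:

```python
import numpy as np

def f(s, dt):
    """Nonlinear transition for a hypothetical [px, py, heading, speed] state."""
    px, py, th, v = s
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th,
                     v])

def jacobian_f(s, dt):
    """Jacobian of f: row i, column j holds the partial derivative of
    output i with respect to state variable j, evaluated at s."""
    _, _, th, v = s
    return np.array([
        [1.0, 0.0, -v * np.sin(th) * dt, np.cos(th) * dt],
        [0.0, 1.0,  v * np.cos(th) * dt, np.sin(th) * dt],
        [0.0, 0.0,  1.0,                 0.0],
        [0.0, 0.0,  0.0,                 1.0],
    ])

s = np.array([1.0, 2.0, 0.3, 5.0])   # example previous predicted state
J = jacobian_f(s, dt=0.1)
```

In an EKF, f itself propagates the state, while J stands in for the linear filter's transition matrix when propagating the covariance (P_pred = J P J.T + Q), re-evaluated at each new predicted state.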

On the streets, there is noise. So much noise I haven’t been able to model it correctly.

But your input measurements will typically have standard deviations that are easy to define, since real-world inputs come from sensors whose standard deviations can be measured. It can generally be assumed that the inputs have no relation to each other whatsoever, but if you really want to go hardcore, inputs are likely tied together in the real world in some fashion, especially if a single sensor provides multiple inputs (e.g. GPS bearing and speed are related in some way that is difficult to measure but can provide a helpful, yet marginal, benefit). Therefore, noise can be expressed as a diagonal matrix where each non-zero element along the diagonal corresponds to a sensor's variance, and for the not faint of heart, the variance of a sensor as a function of another sensor input (rather than time) can populate the off-diagonal entries.
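Constructing that diagonal noise matrix is a one-liner. The sensor names and standard deviations below are invented for illustration:

```python
import numpy as np

# Hypothetical per-sensor standard deviations, taken from datasheets
# or measured empirically: GPS position (m), bearing (rad), speed (m/s).
sigma = {"gps_pos_m": 3.0, "bearing_rad": 0.05, "speed_mps": 0.5}

# Assuming the inputs are unrelated, the measurement noise covariance R
# is diagonal, with each sensor's variance (sigma squared) on the diagonal.
R = np.diag([s ** 2 for s in sigma.values()])
print(R)
```

If you do believe two inputs are correlated (the GPS bearing/speed case), you would populate the corresponding symmetric off-diagonal entries with their estimated covariance instead of leaving them zero.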

For fusing sensor inputs, it's not necessary to separate your sensors (e.g. laser range finder and ultrasonic sensor) into prediction and update steps. These are both measurements, and therefore both can be updates. Your prediction step can include a control vector, but it is equally valid to assume no additional system inputs. The control vector is perhaps more appropriately thought of as values established by a sister application on the same system feeding its output into the Kalman filter: its values are computer generated, and therefore its associated noise can be trusted completely. An update step can include all possible input values even when only a subset of real-world values is available to feed into the system. Since we include the confidence of the input measurements, we can associate arbitrarily high error with any input. Therefore your laser range finder + camera + ultrasonic sensor combo can be thought of as a single device with super spotty confidence per sensor, updating at a common frequency (the least common multiple of all the devices' frequencies).
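The "arbitrarily high error" trick can be sketched as follows: build the full measurement vector every cycle, and give any absent sensor a huge variance so the filter effectively ignores it. The sensor names and the 1e9 variance are my own assumptions:

```python
import numpy as np

BIG = 1e9  # effectively "no information" variance for a missing reading

def build_measurement(readings):
    """Assemble a full measurement vector z and noise covariance R from
    whichever sensors actually reported this cycle.

    readings maps a (hypothetical) sensor name to (value, variance).
    Missing sensors get a dummy value with an arbitrarily large variance,
    so the Kalman gain for those rows is effectively zero.
    """
    order = ["laser_range", "ultrasonic_range", "camera_range"]
    z, variances = [], []
    for name in order:
        value, var = readings.get(name, (0.0, BIG))
        z.append(value)
        variances.append(var)
    return np.array(z), np.diag(variances)

# This cycle, only the laser reported.
z, R = build_measurement({"laser_range": (4.2, 0.01)})
```

Every update then has the same shape regardless of which devices fired, which is what lets the mixed-frequency combo behave like one spotty super-sensor.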

Providing inputs at the update step, rather than the prediction step, has the added benefit of correcting the prediction by applying a weighted average between the predicted state and the most recent measurement. If you instead supply sensor inputs to the prediction step, correcting the state via the Kalman gain only happens at lower-frequency intervals.

  • Pstoppani

    Your sensor fusion point is interesting, as I've been struggling to understand the possible differences in results between separating sensors into control and measurement, vs. combining them, vs. chaining them somehow with multiple Kalman filters.

    Have you considered creating a second version of your GPS+INS code which combines them into the update/measurement function? I'm curious how you manage the different update frequencies and the measurement covariance matrix.