GPS and Accelerometer Sensor Fusion with a Kalman Filter, a Practical Walkthrough (Golang)
The Up Front Video
Chances are, you’re reading this blog post while sitting on the toilet. That means I have a meager 30 seconds to 8 minutes to really grab your attention and show off what this post is all about. Therefore, peep the video first:
Please excuse the blimp icon for the actual car I’m traveling in. The actual project is sensor fusion for a blimp; I just happen to test the sensor fusion aspect by recording a trip to Taco Bell to pick up 3 crispy tacos, 2 cheesy gordita crunches, a large diet coke, and sure, I’ll try the new double stacked tacos nacho crunch.
5 star chefs as always working in the back over there. They’re the everyday heroes of this world. I’m just a programmer that fell into the notorious airship game.
True Story Follows
So if you’re one of my avid followers, which I’m not sure is something that actually exists, then you might be privy to my autonomous blimp project. Well, it turns out that last weekend I was on the grind with none other than Larry Fleming AKA “Airshippimus Dirigiblamous”, and we were doing a test run of another version of code I had prepared. I told him to not worry, and that you might as well start calling me Rumpelstiltskin, because I straight spit out gold when I type on the keyboard. I had mentioned that we might as well codename this project “Fort Knox”, because that’s how much gold was now inside the code. I told him: I know I said this would work before, but now, this is definitely going to work. Like: I know we’re all born equal, and I put my pants on one leg at a time, but the only difference is that once my pants are on, I make gold. Well anyway, I came over after declaring all this, and the blimp autopilot didn’t work. In fact, it was actually worse than before.
But the test flight was valuable for a number of intangible insights. In short, the blimp generally appeared to be attempting the right thing, but in all actions there was pilot induced oscillation, which basically means that the blimp was overcompensating to correct a previous fault. Which in turn means: the sensor updates are too slow and out of date. The autopilot, while responding appropriately, is basing its actions off of old data, and the implications of the corrective measures are not realized until seconds later, which only perpetuates the problem.
Now, we’re using reasonably cheap hardware, and meanwhile there are autopilot algorithms for quadcopters and such that use the same cheap hardware, so clearly I’m missing something. While I’m referencing velocity to calculate corrective force, that velocity value is from the average of previous GPS data, which is clearly outdated. So, the most recent sensor I have after that is an accelerometer. However, an accelerometer will not be able to give me any usable information unless I have an initial velocity so that the integral can mean something. And I cannot have an initial velocity because I don’t have a velocity sensor; therefore I must get the velocity from GPS. Ah! I am a genius! So somehow, I must “fuse”, if you will, the inputs between GPS and accelerometer. Ah! Oh. Wait. Yeah. That’s already a solved problem. You just use a kalman filter. There’s like 50,000 white papers about it.
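To put a number on why integrating the accelerometer alone gets you nowhere, here’s a toy Go sketch (my own illustration, not from the blimp code): even a small uncorrected sensor bias, integrated twice, produces position error that grows quadratically with time, which is exactly why you need an absolute reference like GPS to anchor it.

```go
package main

import "fmt"

// driftAfter integrates a constant accelerometer bias twice and returns the
// resulting position error. The true acceleration here is zero, so everything
// accumulated is pure sensor error.
func driftAfter(seconds, dt, bias float64) float64 {
	velocity := 0.0 // the unknown initial velocity, assumed zero
	positionError := 0.0
	for t := 0.0; t < seconds; t += dt {
		velocity += bias * dt          // first integral: velocity error
		positionError += velocity * dt // second integral: position error
	}
	return positionError
}

func main() {
	// hypothetical numbers: 0.05 m/s^2 of bias at a 100 Hz sample rate
	fmt.Printf("drift after 10s: %.1f m\n", driftAfter(10, 0.01, 0.05))
	fmt.Printf("drift after 60s: %.1f m\n", driftAfter(60, 0.01, 0.05))
}
```

Six times the elapsed time buys you roughly thirty-six times the position error, which is the quadratic blow-up in action.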
The Problem
So, after feeling on top of the world for a few minutes upon recognizing that I must indeed be on to something if the idea I had was one that smart people before me had already solved, I sought to understand and/or find sample code for what I was trying to do. All over the internets, the billboards read: “Use a kalman filter to merge GPS and accelerometer data”, but, as trivial as the internets made it out to be, there were no examples of EXACTLY what I was trying to do. There were white papers that explained concepts in the pure abstract, then there were a few blog posts that referenced said white papers and said concepts, and then there were a few examples on GitHub that were so intertwined with the hardware platform and matrix libraries that it was difficult to extract what I needed to know.
So, after a few days of reading through a few white papers, watching a few YouTube videos, and reading a few blog posts several times through, I managed to put together a Kalman Filter that works properly from sensors found exclusively on the Emlid Navio2.
Context For Following Example
It should be said that whatever information is displayed in this blog post requires upfront context. It should only serve to bridge gaps that I could not already find on the internets. Among the existing information:
 What exactly a kalman filter is
 Why a kalman filter works
 The mathematical proofs behind a kalman filter
In abundance, there already appeared to be erudite information about what a kalman filter is, but there was hardly anything in concrete form to tie the abstract into something useful. With that said, the two posts that helped me out a ton in understanding kalman filters:
The example below also relies on clean, pre-processed inputs for sensor reads. Therefore, there are a few prerequisites to get the example below working. One, you’re using an Emlid Navio2 to read sensor data. Two, you have a working implementation of a compass to get raw data from the accelerometer. I have an example of this code.
Since my previous post, I’ve added some updates to the compass code. As I left it, the compass could output pitch, yaw, and roll angles, which, as it turns out, are very difficult to work with in 3D space. Therefore, I went and learned about quaternions so that I could take my raw accelerometer data and simply turn it into an absolute acceleration vector in three-dimensional space.
For starters, accelerometer data needs to be converted to values stripped of acceleration due to gravity. To do this, you need to first:
 Calibrate your accelerometer by reading its raw magnitude while at rest. This value should be equal to the force of gravity, so apply the appropriate divisor to ensure that this magnitude is indeed 1G.
 Strip out the acceleration due to gravity. To do this:
```c
float a12 = 2.0f * (q[1] * q[2] + q[0] * q[3]);
float a22 = q[0] * q[0] + q[1] * q[1] - q[2] * q[2] - q[3] * q[3];
float a31 = 2.0f * (q[0] * q[1] + q[2] * q[3]);
float a32 = 2.0f * (q[1] * q[3] - q[0] * q[2]);
float a33 = q[0] * q[0] - q[1] * q[1] - q[2] * q[2] + q[3] * q[3];

float gravity[3];
gravity[0] = a32;
gravity[1] = a31;
gravity[2] = a33;

float pureForwardAcceleration = -1 * (ay - gravity[1]);
float pureUpAcceleration = -1 * (az - gravity[2]);
float pureRightAcceleration = -1 * (ax + gravity[0]);
```
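The calibration step, finding the divisor that makes an at-rest reading equal 1G, can be sketched in Go as follows. This is my own illustration, not the compass code; `accelScale` and the raw sample format are hypothetical:

```go
package main

import (
	"fmt"
	"math"
)

// accelScale returns the mean magnitude of raw [x, y, z] accelerometer
// samples taken while the sensor is stationary. Dividing subsequent raw
// readings by this value normalizes them to units of G.
func accelScale(restSamples [][3]float64) float64 {
	sum := 0.0
	for _, s := range restSamples {
		sum += math.Sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2])
	}
	return sum / float64(len(restSamples))
}

func main() {
	// made-up at-rest samples from a sensor that already reads close to 1G
	samples := [][3]float64{
		{0.01, -0.02, 1.02},
		{-0.01, 0.0, 0.98},
		{0.0, 0.01, 1.0},
	}
	fmt.Printf("calibration divisor: %.3f\n", accelScale(samples))
}
```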
A quaternion, which is an array of 4 values, can be converted into a rotation matrix. Hence, [x, y, z, w] can be converted to:
```c
// this effectively converts a quaternion to a matrix and then applies that rotation to a
// vector. Normally I wouldn't do these whacky comments and just move to an aptly named
// function, but we care about performance a bit
float num1 = q[0] * 2.0;
float num2 = q[1] * 2.0;
float num3 = q[2] * 2.0;
float num4 = q[0] * num1;
float num5 = q[1] * num2;
float num6 = q[2] * num3;
float num7 = q[0] * num2;
float num8 = q[0] * num3;
float num9 = q[1] * num3;
float num10 = q[3] * num1;
float num11 = q[3] * num2;
float num12 = q[3] * num3;

float accelerationEast = (1.0 - (num5 + num6)) * pureAx + (num7 - num12) * pureAy + (num8 + num11) * pureAz;
float accelerationNorth = (num7 + num12) * pureAx + (1.0 - (num4 + num6)) * pureAy + (num9 - num10) * pureAz;
float accelerationUp = -1 * ((num8 - num11) * pureAx + (num9 + num10) * pureAy + (1.0 - (num4 + num5)) * pureAz);

// rotate from magnetic north to true north by the local declination
float sinMagneticDeclination = sin(magneticDeclinationOffset * TO_RADIANS_COEFF);
float cosMagneticDeclination = cos(magneticDeclinationOffset * TO_RADIANS_COEFF);

float easternNorthComponent = sinMagneticDeclination * accelerationNorth;
float northernEastComponent = -1 * sinMagneticDeclination * accelerationEast;
float northernNorthComponent = cosMagneticDeclination * accelerationNorth;
float easternEastComponent = cosMagneticDeclination * accelerationEast;

accelerationEast = easternEastComponent + easternNorthComponent;
accelerationNorth = northernNorthComponent + northernEastComponent;
```
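For readability, the same quaternion-to-rotation math (minus the declination correction) can be written as a named Go helper. This is my own sketch, not the compass code; it assumes the same [x, y, z, w] component order the num1..num12 block above uses:

```go
package main

import "fmt"

// rotateByQuaternion applies the rotation encoded by the unit quaternion q
// (in [x, y, z, w] order) to the vector v, by expanding the quaternion into
// its rotation matrix and multiplying. Same arithmetic as the inlined
// num1..num12 version, just with names.
func rotateByQuaternion(q [4]float64, v [3]float64) [3]float64 {
	x2, y2, z2 := q[0]*2, q[1]*2, q[2]*2
	xx, yy, zz := q[0]*x2, q[1]*y2, q[2]*z2
	xy, xz, yz := q[0]*y2, q[0]*z2, q[1]*z2
	wx, wy, wz := q[3]*x2, q[3]*y2, q[3]*z2
	return [3]float64{
		(1-(yy+zz))*v[0] + (xy-wz)*v[1] + (xz+wy)*v[2],
		(xy+wz)*v[0] + (1-(xx+zz))*v[1] + (yz-wx)*v[2],
		(xz-wy)*v[0] + (yz+wx)*v[1] + (1-(xx+yy))*v[2],
	}
}

func main() {
	// 90 degrees about the z axis: [0, 0, sin(45°), cos(45°)]
	q := [4]float64{0, 0, 0.7071067811865476, 0.7071067811865476}
	// rotating the x axis should land (approximately) on the y axis
	fmt.Println(rotateByQuaternion(q, [3]float64{1, 0, 0}))
}
```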
So now, the prerequisites should be met. You’re using an Emlid Navio2 HAT for Raspberry Pi (and if you’re not, it’s actually fine, but you’ll need to plug in slightly different values throughout this example), and you have working compass code that outputs acceleration in absolute terms of north, east, and up. Moreover, you have GPS reads working (from the Navio2).
The Code
The working code for this entire example can be found on GitHub. You should be able to use the individual module to create a kalman filter from an abstract position and an abstract acceleration. The only assumption is that this filter works in exactly one dimension. Therefore, if you have 2 or 3 dimensions, simply use 2 or 3 kalman filters, respectively. For simplicity’s sake, the file on GitHub is inclusive of the main function and some helper functions. The kalman filter itself, distilled, with inputs only in terms of meters and acceleration in meters per second per second, is as follows in Golang:
```go
import (
	"github.com/slobdell/basicMatrix"
)

/*
Although these variables aren't expressive, they're based on existing mathematical
conventions and in reality should be completely abstract. The variables in my own
words expressed below:

H: For our usage, this should just be an identity matrix. In practice this is meant
   to be a transformation matrix to standardize inputs to the system, but I'm
   enforcing this in the API itself; this should simplify usage and save a bit of
   performance by not having to use it.

P: Newest estimate for the average error for each part of the state. This value will
   evolve internally from the kalman filter, so initializing as an identity matrix
   is acceptable.

Q: Abstractly, the process error variance. Explicitly for our use case, this is the
   covariance matrix for the accelerometer. To find it, you can leave the
   accelerometer at rest, take the standard deviation of its readings, then square
   that for the variance. The matrix would then be:

       [AVariance 0]
       [0 AVariance]

   Additionally, when computing the standard deviation, in this context it would
   make sense to override the mean value of the readings to be 0 to account for a
   blatant offset from the sensor.

R: Abstractly, the measurement error variance. Explicitly for our use case, this is
   the covariance matrix of the GPS. If you can get the actual standard deviation of
   the GPS, this might work, but if you take GPS readings at rest, you might have a
   GPS lock that results in extremely minimal error. In practice, I just took the
   advertised +/- value from the GPS (i.e. u-blox is accurate +/- 1 meter allegedly,
   so you can use that).

u: Overridden during each prediction step; set as a struct attribute for performance
   reasons. This is the input matrix of high frequency sensor readings that, if not
   subject to any error, would give us an accurate state of the world. In our case,
   it's a 1x1 matrix of accelerometer input in a given direction.

z: Overridden during each update step; set as a struct attribute for performance
   reasons. This is the input matrix of low frequency sensor readings that are
   absolute but presumably have a high standard deviation. In our case, it's a 2x1
   matrix of GPS position and velocity:

       [P]
       [v]

A: The state transition matrix. Abstractly, this is a matrix that defines a set of
   equations that define what the next state would look like given no additional
   inputs but a "next step" (or more than likely, a change in time). Given that this
   struct is explicitly for fusing position and acceleration, it's:

       [1 t]
       [0 1]

   To explain the above, if you have a position, then its next position is the
   previous position + current velocity * time. If you have a velocity, then its
   next velocity will be the current velocity.

B: Control matrix. Given input changes to the system, this matrix multiplied by the
   input will present new deltas to the current state. In our case, these are the
   equations needed to handle input acceleration. Specifically:

       [0.5t^2]
       [t]
*/
type KalmanFilterFusedPositionAccelerometer struct {
	H *basicMatrix.Matrix // transformation matrix for input data
	P *basicMatrix.Matrix // initial guess for covariance
	Q *basicMatrix.Matrix // process (accelerometer) error variance
	R *basicMatrix.Matrix // measurement (GPS) error variance
	u *basicMatrix.Matrix // INPUT control (accelerometer) matrix
	z *basicMatrix.Matrix // INPUT measurement (GPS) matrix
	A *basicMatrix.Matrix // state transition matrix
	B *basicMatrix.Matrix // control matrix

	currentState                 *basicMatrix.Matrix
	currentStateTimestampSeconds float64
}

func (k *KalmanFilterFusedPositionAccelerometer) Predict(accelerationThisAxis, timestampNow float64) {
	deltaT := timestampNow - k.currentStateTimestampSeconds

	k.recreateControlMatrix(deltaT)
	k.recreateStateTransitionMatrix(deltaT)

	k.u.Put(0, 0, accelerationThisAxis)

	k.currentState = (k.A.MultipliedBy(k.currentState)).Add(k.B.MultipliedBy(k.u))
	k.P = ((k.A.MultipliedBy(k.P)).MultipliedBy(k.A.Transpose())).Add(k.Q)
	k.currentStateTimestampSeconds = timestampNow
}

func (k *KalmanFilterFusedPositionAccelerometer) Update(position, velocityThisAxis float64) {
	k.z.Put(0, 0, position)
	k.z.Put(1, 0, velocityThisAxis)

	y := k.z.Subtract(k.currentState)
	s := k.P.Add(k.R)
	sInverse, err := s.Inverse()
	if err != nil {
		// matrix has no inverse, abort
		return
	}
	K := k.P.MultipliedBy(sInverse)

	k.currentState = k.currentState.Add(K.MultipliedBy(y))
	k.P = (basicMatrix.NewIdentityMatrix(2, 2).Subtract(K)).MultipliedBy(k.P)

	/*
		above is equivalent to:
			updatedP := k.P.Subtract(K.MultipliedBy(k.P))
		which would explain some confusion on the internets
	*/
}

func (k *KalmanFilterFusedPositionAccelerometer) recreateControlMatrix(deltaSeconds float64) {
	dtSquared := 0.5 * deltaSeconds * deltaSeconds

	k.B.Put(0, 0, dtSquared)
	k.B.Put(1, 0, deltaSeconds)
}

func (k *KalmanFilterFusedPositionAccelerometer) recreateStateTransitionMatrix(deltaSeconds float64) {
	k.A.Put(0, 0, 1.0)
	k.A.Put(0, 1, deltaSeconds)
	k.A.Put(1, 0, 0.0)
	k.A.Put(1, 1, 1.0)
}

func (k *KalmanFilterFusedPositionAccelerometer) GetPredictedPosition() float64 {
	return k.currentState.Get(0, 0)
}

func (k *KalmanFilterFusedPositionAccelerometer) GetPredictedVelocityThisAxis() float64 {
	return k.currentState.Get(1, 0)
}

func NewKalmanFilterFusedPositionAccelerometer(
	initialPosition float64,
	positionStandardDeviation float64,
	accelerometerStandardDeviation float64,
	currentTimestampSeconds float64,
) *KalmanFilterFusedPositionAccelerometer {
	currentState := basicMatrix.NewMatrix(2, 1)
	currentState.Put(0, 0, initialPosition)
	currentState.Put(1, 0, 0.0)

	u := basicMatrix.NewMatrix(1, 1)
	z := basicMatrix.NewMatrix(2, 1)
	H := basicMatrix.NewIdentityMatrix(2, 2)
	P := basicMatrix.NewIdentityMatrix(2, 2)

	Q := basicMatrix.NewMatrix(2, 2)
	Q.Put(0, 0, accelerometerStandardDeviation*accelerometerStandardDeviation)
	Q.Put(1, 1, accelerometerStandardDeviation*accelerometerStandardDeviation)

	R := basicMatrix.NewMatrix(2, 2)
	R.Put(0, 0, positionStandardDeviation*positionStandardDeviation) // TODO might need to play with this value
	R.Put(1, 1, positionStandardDeviation*positionStandardDeviation)

	B := basicMatrix.NewMatrix(2, 1)
	A := basicMatrix.NewMatrix(2, 2)

	return &KalmanFilterFusedPositionAccelerometer{
		A:            A,
		B:            B,
		z:            z,
		u:            u,
		H:            H,
		P:            P,
		Q:            Q,
		R:            R,
		currentState: currentState,
		currentStateTimestampSeconds: currentTimestampSeconds,
	}
}
```
Note that this code by itself has no dependencies, and this should work for all cases involving position and acceleration assuming the two are already using the same units for distance.
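To show how the filter is meant to be driven, Predict on every (fast) accelerometer sample and Update on every (slow) GPS fix, here’s a self-contained sketch. It re-implements the same two-state equations with plain arrays so it runs without the basicMatrix dependency; this is my own illustration, not the project’s code. For 2D motion you would run two of these, one for north and one for east.

```go
package main

import "fmt"

// scalarKalman mirrors the roles in the struct above for one axis:
// state x = [position, velocity], covariance p, process variance q
// (accelerometer), measurement variance r (GPS).
type scalarKalman struct {
	x  [2]float64
	p  [2][2]float64
	q  float64
	r  float64
	ts float64 // timestamp of the current state, seconds
}

// predict advances the state with an accelerometer reading:
// x = A*x + B*u with A = [[1, dt], [0, 1]] and B = [0.5*dt^2, dt].
func (f *scalarKalman) predict(accel, now float64) {
	dt := now - f.ts
	f.x[0] += f.x[1]*dt + 0.5*accel*dt*dt
	f.x[1] += accel * dt
	// P = A*P*A' + Q, written out by hand
	p00 := f.p[0][0] + dt*(f.p[0][1]+f.p[1][0]) + dt*dt*f.p[1][1] + f.q
	p01 := f.p[0][1] + dt*f.p[1][1]
	p10 := f.p[1][0] + dt*f.p[1][1]
	p11 := f.p[1][1] + f.q
	f.p = [2][2]float64{{p00, p01}, {p10, p11}}
	f.ts = now
}

// update folds in a GPS position/velocity measurement (H is the identity).
func (f *scalarKalman) update(position, velocity float64) {
	y0, y1 := position-f.x[0], velocity-f.x[1] // innovation y = z - x
	// S = P + R, K = P * S^-1 via the closed-form 2x2 inverse
	s00, s01 := f.p[0][0]+f.r, f.p[0][1]
	s10, s11 := f.p[1][0], f.p[1][1]+f.r
	det := s00*s11 - s01*s10
	k00 := (f.p[0][0]*s11 - f.p[0][1]*s10) / det
	k01 := (-f.p[0][0]*s01 + f.p[0][1]*s00) / det
	k10 := (f.p[1][0]*s11 - f.p[1][1]*s10) / det
	k11 := (-f.p[1][0]*s01 + f.p[1][1]*s00) / det
	f.x[0] += k00*y0 + k01*y1
	f.x[1] += k10*y0 + k11*y1
	// P = (I - K) * P
	p00 := (1-k00)*f.p[0][0] - k01*f.p[1][0]
	p01 := (1-k00)*f.p[0][1] - k01*f.p[1][1]
	p10 := -k10*f.p[0][0] + (1-k11)*f.p[1][0]
	p11 := -k10*f.p[0][1] + (1-k11)*f.p[1][1]
	f.p = [2][2]float64{{p00, p01}, {p10, p11}}
}

func main() {
	f := &scalarKalman{p: [2][2]float64{{1, 0}, {0, 1}}, q: 0.01, r: 1.0}
	// simulate 10s of constant 1 m/s^2 acceleration: accel at 100 Hz, GPS at 1 Hz
	for i := 1; i <= 1000; i++ {
		now := float64(i) * 0.01
		f.predict(1.0, now)
		if i%100 == 0 {
			f.update(0.5*now*now, now) // noiseless "GPS" for the sketch
		}
	}
	fmt.Printf("fused position: %.1f m, velocity: %.1f m/s\n", f.x[0], f.x[1])
}
```

With real sensors the predict calls happen in the compass read loop and the update calls in the GPS callback, which is exactly the high-frequency/low-frequency split the u and z comments above describe.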