GPS and Accelerometer Sensor Fusion with a Kalman Filter, a Practical Walkthrough (Golang)

The Up Front Video

Chances are, you’re reading this blog post while sitting on the toilet. That means I have a meager 30 seconds to 8 minutes to really grab your attention and show off what this post is all about. Therefore, peep the video first:

Please excuse the blimp icon for the actual car I’m traveling in. The actual project is sensor fusion for a blimp; I just happen to test the sensor fusion aspect by recording a trip to Taco Bell to pick up 3 crispy tacos, 2 cheesy gordita crunches, a large Diet Coke, and sure, I’ll try the new double stacked tacos nacho crunch.

5 star chefs as always working in the back over there. They’re the everyday heroes of this world. I’m just a programmer that fell into the notorious airship game.

True Story Follows

So if you’re one of my avid followers, which I’m not sure is something that actually exists, then you might be privy to my autonomous blimp project. Well, it turns out that last weekend I was on the grind with none other than Larry Fleming AKA “Airshippimus Dirigiblamous”, and we were doing a test run of another version of code I had prepared. I told him to not worry, and that you might as well start calling me Rumpelstiltskin, because I straight spit out gold when I type on the keyboard. I had mentioned that we might as well codename this project “Fort Knox”, because that’s how much gold was now inside the code. I told him: I know I said this would work before, but now, this is definitely going to work. Like: I know we’re all born equal, and I put my pants on one leg at a time, but the only difference is that once my pants are on, I make gold. Well anyway, I came over after declaring all this, and the blimp autopilot didn’t work. In fact, it was actually worse than before.

But the test flight was valuable for a number of intangible insights. In short, the blimp generally appeared to be attempting the right thing, but in all actions there was pilot induced oscillation, which basically means that the blimp was overcompensating to correct a previous fault. Which basically means: the sensor updates are too slow and out of date. The autopilot, while responding appropriately, is basing its actions off of old data, and the implications of the corrective measures are not realized until seconds later, which only perpetuates the problem.

Now, we’re using reasonably cheap hardware, and meanwhile there are autopilot algorithms for quadcopters and such that use the same cheap hardware, so clearly I’m missing something. While I’m referencing velocity to calculate reactionary force, that velocity value comes from the average of previous GPS data, which is clearly outdated. The most recent sensor I have after that is an accelerometer. However, an accelerometer will not give me any usable information unless I have an initial velocity so that the integral can mean something. And I cannot have an initial velocity because I don’t have a velocity sensor; therefore I must get the velocity from GPS. Ah! I am a genius! So somehow, I must “fuse”, if you will, the inputs between GPS and accelerometer. Ah! Oh. Wait. Yeah. That’s already a solved problem. You just use a Kalman filter. There’s like 50,000 white papers about it.
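To make the integration problem above concrete, here is a toy dead-reckoning sketch (every number in it is invented for illustration): two runs integrate identical accelerometer samples, but only the run seeded with the true initial velocity ends up in the right place.

```go
package main

import "fmt"

// integrate dead-reckons position from acceleration samples given an
// assumed initial velocity; without GPS there is no way to know v0.
func integrate(v0 float64, accels []float64, dt float64) float64 {
	position, velocity := 0.0, v0
	for _, a := range accels {
		velocity += a * dt
		position += velocity * dt
	}
	return position
}

func main() {
	// one second of zero acceleration sampled at 100 Hz
	accels := make([]float64, 100)
	dt := 0.01

	right := integrate(5.0, accels, dt) // true initial velocity: 5 m/s
	wrong := integrate(0.0, accels, dt) // assumed at rest

	fmt.Printf("correct v0: %.2f m, wrong v0: %.2f m\n", right, wrong)
}
```

Identical accelerometer data, a 5 meter disagreement after one second; the error grows linearly forever, which is exactly the gap the GPS has to fill.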

The Problem

So, after feeling on top of the world for a few minutes after recognizing that I must indeed be on to something if the idea I had is one that smart people before me have already solved, I sought to understand and/or find sample code for what I was trying to do. All over the internets, the billboards read: “Use a Kalman filter to merge GPS and accelerometer data”, but, as trivial as the internets made it out to be, there were no examples of EXACTLY what I was trying to do. There were white papers that explained concepts in the pure abstract, then there were a few blog posts that referenced said white papers and said concepts, and then there were a few examples on GitHub that were so intertwined with the hardware platform and matrix libraries that it was difficult to extract what I needed to know.

So, after a few days of reading through a few white papers, watching a few YouTube videos, and reading a few blog posts several times through, I managed to put together a Kalman filter that works properly from sensors found exclusively on the Emlid Navio2.

Context For Following Example

It should be said that the information in this blog post requires upfront context; it’s only meant to bridge gaps that I could not already find on the internets. The following is already well covered elsewhere:

  • What exactly a Kalman filter is
  • Why a Kalman filter works
  • The mathematical proofs behind a Kalman filter

There already appeared to be an abundance of erudite information about what a Kalman filter is, but there was hardly anything in concrete form to tie the abstract into something useful. With that said, the two posts that helped me out a ton in understanding Kalman filters:

The example below also relies on already having clean sensor reads. Therefore, there are a few prerequisites to get the example below working. One, you’re using an Emlid Navio2 to read sensor data. Two, you have a working implementation of a compass to get raw data from the accelerometer. I have an example of this code.

Since my previous post, I’ve added some updates to the compass code. As I left it, the compass could output pitch, yaw, and roll angles, which, as it turns out, are very difficult to work with in 3D space. Therefore, I went and learned about quaternions so that I could take my raw accelerometer data and simply turn it into an absolute acceleration vector in three dimensional space.

For starters, accelerometer data needs to be converted to values stripped of acceleration due to gravity. To do this, you need to first:

  • Calibrate your accelerometer by reading its raw magnitude while at rest. This value should be equal to the force of gravity, so apply the appropriate divisor to ensure that this magnitude is indeed 1G
  • Strip out the acceleration due to gravity. To do this (ax, ay, az are the calibrated accelerometer readings and q is the current orientation quaternion):
        float a31 =   2.0f * (q[0] * q[1] + q[2] * q[3]);
        float a32 =   2.0f * (q[1] * q[3] - q[0] * q[2]);
        float a33 =   q[0] * q[0] - q[1] * q[1] - q[2] * q[2] + q[3] * q[3];
        float gravity[3];
        gravity[0] = a32;
        gravity[1] = a31;
        gravity[2] = a33;
        float pureForwardAcceleration = -1 * (ay - gravity[1]);
        float pureUpAcceleration = -1 * (az - gravity[2]);
        float pureRightAcceleration = -1 * (ax + gravity[0]);
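The calibration bullet above can be sketched in Go (the function name and sample values here are illustrative, not taken from the original project):

```go
package main

import (
	"fmt"
	"math"
)

// calibrationScale returns a factor such that raw accelerometer magnitudes,
// multiplied by it, read 1.0 G while the device is at rest.
func calibrationScale(restSamples [][3]float64) float64 {
	var meanMagnitude float64
	for _, s := range restSamples {
		meanMagnitude += math.Sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2])
	}
	meanMagnitude /= float64(len(restSamples))
	return 1.0 / meanMagnitude
}

func main() {
	// hypothetical raw readings collected while the device sits still
	samples := [][3]float64{
		{0.01, -0.02, 1.02},
		{-0.01, 0.00, 1.01},
		{0.02, 0.01, 0.99},
	}
	fmt.Printf("scale factor: %.4f\n", calibrationScale(samples))
}
```

Every subsequent raw reading gets multiplied by this factor before the gravity vector is subtracted.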

A quaternion, which is an array of 4 values, can be converted into a rotation matrix. Hence, [x, y, z, w] can be converted to:

	// this effectively converts a quaternion to a matrix and then applies that rotation to a
	// vector.  Normally I wouldn't do these wacky comments and would just move to an aptly
	// named function, but we care about performance a bit
	float num1 = q[0] * 2.0f;
	float num2 = q[1] * 2.0f;
	float num3 = q[2] * 2.0f;
	float num4 = q[0] * num1;
	float num5 = q[1] * num2;
	float num6 = q[2] * num3;
	float num7 = q[0] * num2;
	float num8 = q[0] * num3;
	float num9 = q[1] * num3;
	float num10 = q[3] * num1;
	float num11 = q[3] * num2;
	float num12 = q[3] * num3;

	float accelerationEast = (1.0f - (num5 + num6)) * pureAx + (num7 - num12) * pureAy + (num8 + num11) * pureAz;
	float accelerationNorth = (num7 + num12) * pureAx + (1.0f - (num4 + num6)) * pureAy + (num9 - num10) * pureAz;
	float accelerationUp = -1 * ((num8 - num11) * pureAx + (num9 + num10) * pureAy + (1.0f - (num4 + num5)) * pureAz);

	float sinMagneticDeclination = sin(magneticDeclinationOffset * TO_RADIANS_COEFF);
	float cosMagneticDeclination = cos(magneticDeclinationOffset * TO_RADIANS_COEFF);

	float easternNorthComponent = sinMagneticDeclination * accelerationEast;
	float northernEastComponent = -sinMagneticDeclination * accelerationNorth;

	float northernNorthComponent = cosMagneticDeclination * accelerationNorth;
	float easternEastComponent = cosMagneticDeclination * accelerationEast;

	accelerationEast = easternEastComponent + easternNorthComponent;
	accelerationNorth = northernNorthComponent + northernEastComponent;
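Since the filter later in this post is in Go, the rotation snippet above ports over directly. Here is a minimal Go transcription of that same quaternion-applied-to-vector math (the identity-quaternion check in main is just a sanity test, not project code):

```go
package main

import "fmt"

// rotate applies the rotation encoded by quaternion q ([x, y, z, w] order,
// matching the snippet above where q[3] is the scalar part) to vector v,
// taking a body-frame vector into the world frame.
func rotate(q [4]float64, v [3]float64) [3]float64 {
	num1 := q[0] * 2
	num2 := q[1] * 2
	num3 := q[2] * 2
	num4 := q[0] * num1
	num5 := q[1] * num2
	num6 := q[2] * num3
	num7 := q[0] * num2
	num8 := q[0] * num3
	num9 := q[1] * num3
	num10 := q[3] * num1
	num11 := q[3] * num2
	num12 := q[3] * num3

	return [3]float64{
		(1-(num5+num6))*v[0] + (num7-num12)*v[1] + (num8+num11)*v[2],
		(num7+num12)*v[0] + (1-(num4+num6))*v[1] + (num9-num10)*v[2],
		(num8-num11)*v[0] + (num9+num10)*v[1] + (1-(num4+num5))*v[2],
	}
}

func main() {
	// the identity quaternion leaves the vector unchanged
	fmt.Println(rotate([4]float64{0, 0, 0, 1}, [3]float64{1, 2, 3}))
}
```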

So now the prerequisites should be met. You’re using an Emlid Navio2 hat for Raspberry Pi (and if you’re not, that’s actually fine, but you’ll need to plug in slightly different values throughout this example), and you have working compass code that outputs acceleration in absolute terms of North, East, and up. Moreover, you have GPS reads working (from the Navio2).

The Code

The working code for this entire example can be found on GitHub. You should be able to use the individual module to create a Kalman filter from an abstract position and an abstract acceleration. The only assumption is that this filter works in exactly one dimension. Therefore, if you have 2 or 3 dimensions, simply use 2 or 3 Kalman filters, respectively. For simplicity’s sake, the file on GitHub includes the main function and some helper functions. The Kalman filter then, distilled, with inputs only in terms of meters and acceleration in meters per second per second, is as follows in Golang:

import (
	// matrix helpers come from the basicMatrix package bundled with the
	// example repository on GitHub
)

/*
	Although these variables aren't expressive, they're based on existing mathematical conventions
	and in reality should be completely abstract.  The variables in my own words, expressed below:

	H: For our usage, this should just be an identity matrix.  In practice this is meant to be
	a transformation matrix to standardize inputs to the system, but I'm enforcing that in the
	API itself; this should simplify usage and save a bit of performance by not having to
	multiply by it.

	P: Newest estimate for the average error of each part of the state.  This value will evolve
	internally from the Kalman filter, so initializing it as an identity matrix is acceptable.

	Q: Abstractly, the process error variance.  Explicitly for our use case, this is the covariance
	matrix for the accelerometer.  To find it, you can leave the accelerometer at rest, take the
	standard deviation of its readings, then square that for the variance.  The matrix would then be

	[ AVariance 0         ]
	[ 0         AVariance ]

	Additionally, when computing the standard deviation, in this context it would make sense to
	override the mean value of the readings to be 0 to account for a blatant offset from the sensor.

	R: Abstractly, the measurement error variance.  Explicitly for our use case, this is the
	covariance matrix of the GPS.  If you can get the actual standard deviation of the GPS, that
	might work, but if you take GPS readings at rest, you might have a GPS lock that results in an
	unrealistically small error.

	In practice, I just took the advertised +/- value from the GPS (i.e. uBlox is allegedly
	accurate to +/- 1 meter, so you can use that).

	u: Overridden during each prediction step; set as a struct attribute for performance reasons.
	This is the input matrix of high frequency sensor readings that, if not subject to any error,
	would give us an accurate state of the world.

	In our case, it's a 1x1 matrix of accelerometer input in a given direction.

	z: Overridden during each update step; set as a struct attribute for performance reasons.
	This is the input matrix of low frequency sensor readings that are absolute but presumably
	carry a high standard deviation.

	In our case, it's a 2x1 matrix of GPS position and velocity
	[ p
	  v ]

	A: The state transition matrix.  Abstractly, this is a matrix that defines a set of equations
	describing what the next state would look like given no additional inputs but a "next step"
	(or, more likely, a change in time).  Given that this struct is explicitly for fusing position
	and acceleration, it's:

	[ 1 t
	  0 1 ]

	To explain the above: if you have a position, then its next position is the previous position
	plus the current velocity times the change in time.  If you have a velocity, then its next
	velocity will be the current velocity.

	B: Control matrix.  Given input changes to the system, this matrix multiplied by the input
	will produce new deltas to the current state.  In our case, these are the equations needed to
	handle input acceleration.  Specifically:

	[ 0.5t^2
	  t     ]
*/
type KalmanFilterFusedPositionAccelerometer struct {
	H                            *basicMatrix.Matrix // transformation matrix for input data
	P                            *basicMatrix.Matrix // initial guess for covariance
	Q                            *basicMatrix.Matrix // process (accelerometer) error variance
	R                            *basicMatrix.Matrix // measurement (GPS) error variance
	u                            *basicMatrix.Matrix // INPUT control (accelerometer) matrix
	z                            *basicMatrix.Matrix // INPUT measurement (GPS) matrix
	A                            *basicMatrix.Matrix // State Transition matrix
	B                            *basicMatrix.Matrix // Control matrix
	currentState                 *basicMatrix.Matrix
	currentStateTimestampSeconds float64
}

func (k *KalmanFilterFusedPositionAccelerometer) Predict(accelerationThisAxis, timestampNow float64) {
	deltaT := timestampNow - k.currentStateTimestampSeconds

	k.recreateStateTransitionMatrix(deltaT)
	k.recreateControlMatrix(deltaT)

	k.u.Put(0, 0, accelerationThisAxis)

	k.currentState = (k.A.MultipliedBy(k.currentState)).Add(k.B.MultipliedBy(k.u))

	k.P = ((k.A.MultipliedBy(k.P)).MultipliedBy(k.A.Transpose())).Add(k.Q)

	k.currentStateTimestampSeconds = timestampNow
}

func (k *KalmanFilterFusedPositionAccelerometer) Update(position, velocityThisAxis float64) {
	k.z.Put(0, 0, position)
	k.z.Put(1, 0, velocityThisAxis)

	y := k.z.Subtract(k.currentState)
	s := k.P.Add(k.R)
	sInverse, err := s.Inverse()
	if err != nil {
		// matrix has no inverse, abort
		return
	}
	K := k.P.MultipliedBy(sInverse)

	k.currentState = k.currentState.Add(K.MultipliedBy(y))

	k.P = (basicMatrix.NewIdentityMatrix(2, 2).Subtract(K)).MultipliedBy(k.P)

	/*
		above is equivalent to:
			updatedP := k.P.Subtract(K.MultipliedBy(k.P))

		which would explain some confusion on the internets
	*/
}

func (k *KalmanFilterFusedPositionAccelerometer) recreateControlMatrix(deltaSeconds float64) {
	halfDtSquared := 0.5 * deltaSeconds * deltaSeconds

	k.B.Put(0, 0, halfDtSquared)
	k.B.Put(1, 0, deltaSeconds)
}

func (k *KalmanFilterFusedPositionAccelerometer) recreateStateTransitionMatrix(deltaSeconds float64) {
	k.A.Put(0, 0, 1.0)
	k.A.Put(0, 1, deltaSeconds)

	k.A.Put(1, 0, 0.0)
	k.A.Put(1, 1, 1.0)
}

func (k *KalmanFilterFusedPositionAccelerometer) GetPredictedPosition() float64 {
	return k.currentState.Get(0, 0)
}

func (k *KalmanFilterFusedPositionAccelerometer) GetPredictedVelocityThisAxis() float64 {
	return k.currentState.Get(1, 0)
}

func NewKalmanFilterFusedPositionAccelerometer(initialPosition float64,
	positionStandardDeviation float64,
	accelerometerStandardDeviation float64,
	currentTimestampSeconds float64) *KalmanFilterFusedPositionAccelerometer {

	currentState := basicMatrix.NewMatrix(2, 1)

	currentState.Put(0, 0, initialPosition)
	currentState.Put(1, 0, 0.0)

	u := basicMatrix.NewMatrix(1, 1)
	z := basicMatrix.NewMatrix(2, 1)
	H := basicMatrix.NewIdentityMatrix(2, 2)
	P := basicMatrix.NewIdentityMatrix(2, 2)

	Q := basicMatrix.NewMatrix(2, 2)
	Q.Put(0, 0, accelerometerStandardDeviation*accelerometerStandardDeviation)
	Q.Put(1, 1, accelerometerStandardDeviation*accelerometerStandardDeviation)

	R := basicMatrix.NewMatrix(2, 2)
	R.Put(0, 0, positionStandardDeviation*positionStandardDeviation)
	// TODO might need to play with this value
	R.Put(1, 1, positionStandardDeviation*positionStandardDeviation)

	B := basicMatrix.NewMatrix(2, 1)
	A := basicMatrix.NewMatrix(2, 2)

	return &KalmanFilterFusedPositionAccelerometer{
		A:                            A,
		B:                            B,
		z:                            z,
		u:                            u,
		H:                            H,
		P:                            P,
		Q:                            Q,
		R:                            R,
		currentState:                 currentState,
		currentStateTimestampSeconds: currentTimestampSeconds,
	}
}

Note that this code has no dependencies beyond the small basicMatrix helper bundled with the example, and it should work for any case involving position and acceleration, assuming the two already use the same units for distance.
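If the basicMatrix dependency is a barrier, the same filter can be written with the 2x2 algebra expanded by hand. The following is a self-contained sketch of the identical math, not the code from the repository, and the constants in main are invented purely as a smoke test:

```go
package main

import "fmt"

// filter1D fuses position and acceleration along one axis; state is
// [position, velocity], exactly as in the matrix version above.
type filter1D struct {
	p, v          float64       // state estimate
	P             [2][2]float64 // state covariance
	accelVariance float64       // Q diagonal (accelerometer variance)
	gpsVariance   float64       // R diagonal (GPS variance)
	lastTimestamp float64
}

func newFilter1D(initialPosition, gpsStdDev, accelStdDev, t float64) *filter1D {
	return &filter1D{
		p:             initialPosition,
		P:             [2][2]float64{{1, 0}, {0, 1}},
		accelVariance: accelStdDev * accelStdDev,
		gpsVariance:   gpsStdDev * gpsStdDev,
		lastTimestamp: t,
	}
}

func (f *filter1D) Predict(accel, t float64) {
	dt := t - f.lastTimestamp
	f.lastTimestamp = t

	// x = A*x + B*u with A = [1 dt; 0 1], B = [0.5dt^2; dt]
	f.p += f.v*dt + 0.5*dt*dt*accel
	f.v += dt * accel

	// P = A*P*A' + Q, expanded by hand for A = [1 dt; 0 1]
	p00 := f.P[0][0] + dt*(f.P[1][0]+f.P[0][1]) + dt*dt*f.P[1][1] + f.accelVariance
	p01 := f.P[0][1] + dt*f.P[1][1]
	p10 := f.P[1][0] + dt*f.P[1][1]
	p11 := f.P[1][1] + f.accelVariance
	f.P = [2][2]float64{{p00, p01}, {p10, p11}}
}

func (f *filter1D) Update(position, velocity float64) {
	// innovation y = z - x
	y0 := position - f.p
	y1 := velocity - f.v

	// S = P + R (H is the identity, as in the matrix version)
	s00 := f.P[0][0] + f.gpsVariance
	s01 := f.P[0][1]
	s10 := f.P[1][0]
	s11 := f.P[1][1] + f.gpsVariance

	det := s00*s11 - s01*s10
	if det == 0 {
		return // S has no inverse, skip this update
	}
	// K = P * S^-1
	i00, i01, i10, i11 := s11/det, -s01/det, -s10/det, s00/det
	k00 := f.P[0][0]*i00 + f.P[0][1]*i10
	k01 := f.P[0][0]*i01 + f.P[0][1]*i11
	k10 := f.P[1][0]*i00 + f.P[1][1]*i10
	k11 := f.P[1][0]*i01 + f.P[1][1]*i11

	f.p += k00*y0 + k01*y1
	f.v += k10*y0 + k11*y1

	// P = (I - K) * P
	p00 := (1-k00)*f.P[0][0] - k01*f.P[1][0]
	p01 := (1-k00)*f.P[0][1] - k01*f.P[1][1]
	p10 := -k10*f.P[0][0] + (1-k11)*f.P[1][0]
	p11 := -k10*f.P[0][1] + (1-k11)*f.P[1][1]
	f.P = [2][2]float64{{p00, p01}, {p10, p11}}
}

func main() {
	f := newFilter1D(0, 2.0, 0.3, 0)
	// simulate an object moving at a constant 5 m/s with noiseless sensors
	for t := 0.1; t < 10.0; t += 0.1 {
		f.Predict(0, t) // zero acceleration at constant velocity
		f.Update(5.0*t, 5.0)
	}
	fmt.Printf("position=%.2f velocity=%.2f\n", f.p, f.v)
}
```

Writing the algebra out this way also makes it easy to see why the filter is deterministic given the same timestamps: there is no hidden clock anywhere, only the values passed in.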

  • Peter Fang

Well done! This information helps me a lot. Thanks!
    One question: how did you calculate the standard deviation (e.g. accelerometerEastStandardDeviation, …)? Was it based on the “abs_north_acc” in your demo data “taco_bell_trip”?

  • Mike

    ” Then, you have a working implementation of a compass to get raw data from the accelerometer”
    Are you trying to say that you need a compass in order to be able to calibrate the accelerometer? I’m confused by this statement, since you can certainly get raw data from an accelerometer without a compass. Thanks!

    • Scott Benedict Lobdell

      Right, sorry for the ambiguity…the accelerometer will give you values that are relative to the body of the device…to get any meaningful data, you need acceleration in terms that you can work with…namely East, North, and down. I meant “compass” like a 3d compass where you can get the rotation matrix of the device so that you can rotate your acceleration vector into usable values.

  • Ben Crossman

    Hi Scott

Wonderful work. Just a few things I noticed on GitHub:

    – In your code, defaultPositionErr is always null, which therefore means “k.R.Put(0, 0, *positionError**positionError)” never happens, but I guess you just leave the matrix R at its initial value created using “latLonStandardDeviation” (2m)?

    – You create a different standard deviation for each world axis:
    accelerometerEastStandardDeviation := ACTUAL_GRAVITY * 0.033436506994600976
    accelerometerNorthStandardDeviation := ACTUAL_GRAVITY * 0.05355371135598354
    accelerometerUpStandardDeviation := ACTUAL_GRAVITY * 0.2088683796078286
    but since these axes were rotated from the accelerometer device space it doesn’t really make sense to have these separate, because it would depend on the orientation of the blimp when you were recording the stationary noise. I think the best you could do is include all three axes of unrotated data in the same standard deviation calculation and come up with one value which you use for all three dimensions.

    – Your sample data “taco_bell_data.tar.gz” doesn’t contain any information for:

    VelNorth float64 `json:"vel_north"`
    VelEast float64 `json:"vel_east"`
    VelDown float64 `json:"vel_down"`
    VelError float64 `json:"vel_error"`
    AltitudeError float64 `json:"altitude_error"`

    That’s fine but I was wondering how you obtain these values.

    – My GPS receiver reports a speed (from the Doppler effect) but no direction to make it a 3D velocity. Could I create a velocity by using a normalized vector from the last GPS location to the new one and then scaling by GPS speed?

    – In regards to VelError, my GPS receiver only reports location accuracy (representing 1 standard deviation) so I could use that for position accuracy (and hence the position part of the measurement error variance) but what about velocity error?


    • Scott Benedict Lobdell

      Hey Ben,
      Astute observations from you! It’s been a bit since I’ve looked at this code sample specifically, but what I did here and what I’ve been doing since has been to always define covariance matrices up front…In this case I do it in the initialization function…you are also correct about how defining different standard deviation on a per axis basis is misleading…in practice what I’ve done up to now is just use one standard deviation value for any accelerometer input. I think that until I get to a point of actually defining covariance matrices dynamically at each step, the ratio between the two is “good enough”.

      As far as GPS velocity goes, it looks like I’ve embarrassingly published an inaccurate approach to this problem. I think at the time I just used velocity inferred from the previous 2 GPS measurements, which is wrong…the GPS should provide velocity readings directly, I think I just didn’t have access to velocity at the time.

      For position error, yes, you are correct as well (you’re basically correct all over)…The GPS will give you a PDOP value (I believe the lower the better), but I haven’t actually used this myself yet. But…that can only serve as an input parameter to a function that defines a heuristic to output some standard deviation. I haven’t messed with this yet myself.

      For velocity error, you might double check this. I think that the available outputs depend on what kind of protocol your GPS conforms to, but so far I’ve used uBlox (which definitely provides an exact standard deviation in terms of meters), and then Android also has experimental support for getting speed accuracy.

      • Ben Crossman

        Thanks for the reply.

        Great, so it sounds like:
        – A single value for accelerometer standard deviation is fine
        – I can use the Doppler speed and give it direction using the last GPS location
        – I can use the accuracy of the GPS position as is (it already represents 1 standard deviation, so no need for a heuristic function)
        – For velocity accuracy I can probably just use a small fixed value since the Doppler calculation should be pretty precise. Do you have a link to that experimental support for getting speed accuracy you mentioned?


        • Scott Benedict Lobdell

          Yup…here’s the link to the API function for Android:

          In my experience the velocity readings were always something like 4 meters/second standard deviation pretty consistently. But that was also only a few runs in the same location, so that might just be anecdotal.

          Also, for the uBlox GPS I’d been using, you got an actual velocity from the GPS, not just speed. If you’re using Android, there’s also a getBearing() function you’d want to take advantage of.

          As far as using previous GPS location to infer some value…generally it’s probably a bad idea to rely on something like this directly, but if you can constrain your problem to the actual use case, that probably helps. I.e. if you’re driving in a car, you could assume that all driving between 2 GPS reads happens in one continuous arc, and therefore an “S” pattern is not possible (an “S” pattern would be possible such as a lane change, but this is far less frequent than the general case of the bearing being off by a few degrees).

          I can probably be more specific if you contact me over email or LinkedIn or something. Hope it helps!


          • Ben Crossman

            Thanks for the link. Couldn’t find it anywhere myself.
            And getBearing too. Very useful.

            Will email you.

  • Ravindra ji

    Hello Scott,
    First thank you very much for sharing the knowledge and the open source code for this project.
    I would like to ask how you tested this algorithm. Did you install the Go code directly on the phone, or did you just collect the data along with video and then test it on your PC?
    Please give some insights, which will provide an opportunity for newbies like me to learn a little bit more.
    Thank you very much in advance.
    With best regards,

    • Scott Benedict Lobdell

      the easiest way to test is to go and do a drive or flight or whatever and just log all the sensor data that you can with the timestamp. Include everything like GPS signal strength and anything else.

      From there you can test over and over again until you get compelling results. So to answer your question, I logged data from a phone…with an Android you can get output from adb logcat. You can also use something like pubnub to publish data over the network with a remote computer listening and collecting the data.

      It’s well worth making the time to set up a testing environment and even making a web interface with Google Maps or something to output your results.

      Make sure that timestamps are passed in as parameters to your kalman filter so that it’s easier to test and not dependent on a particular moment in time; your code should be 100% deterministic.

      • Ravindra ji

        Thank you very much!!

  • Doobie M

    Hi, thank you very much for sharing this with the community.
    Can you also provide pseudocode for the application of the Kalman filter algorithm in this project?
    Also, why didn’t you include output.json on GitHub? It would be helpful (for me) to compare the effects of using the algorithm.

  • Ashok

    Thank you for sharing.

    Is this kind of approach needed in iOS devices? It seems iPhone offers its own.

  • Laura G

    This code is fantastic,
    It is quite ingenious and easy to understand.
    Thank you very much Mr. Scott for sharing this blog.
    I am an electronic engineering student and I am working on a sensor fusion project for an autonomous land vehicle. If I understand correctly: once I have processed the IMU and GPS data, I make a car trip to collect this data in a file (.txt, for example), and once the route is finished, I merge this data on the computer using the Kalman filter to later plot it and check the deviation.
    Is it like that or am I wrong?

    • Scott Benedict Lobdell

      Yes, log all raw sensor data to a text file so you can refine and tune. You’ll need some form of ground truth to compare it to. An industry standard would be to have a separate high-precision sensor kit to compare against, but for an educational context you probably just want a complementary video feed. Make the timestamp alignment as straightforward as possible.

      • Laura G

        ok, thanks for the tips!

  • Sam

    Do you have this filter developed in C? I’m working with Arduino and I have some problems.

    Thank you in advance!

  • Sam

    Hi Scott,
    could you tell me how you calculate the velocity error?

    • Scott Benedict Lobdell

      With a UBlox GPS receiver, velocity error is included from the GPS measurements. If you’re using a phone, you might not get that same information

  • Sam

    Okay, thank you very much!
    I have another question. I would like to predict the angular velocity and position from the z axis to the XY plane. I have the yaw, but I don’t know how to implement the equations. If the x axis is like that:
    with angular position it should be like that:
    but I don’t know where to take the variables from.
    Sorry for my bad English.

  • Даніїл

    Hey Scott, first of all this is incredible. Thanks for sharing. I apologize in advance if I’m asking a question you’ve answered previously, you can just tell me where to look, so I don’t waste your time 🙂

    To the point:
    I have a question about the use of velocity that the GPS provides. I’ve tested a uBlox m8n and the velocity it outputs, updates quite slowly. Meaning if I go 30 mph, it shows 30mph, but if I stop quickly, it takes around 10-15 seconds to go down to 0 (or thereabout). It’s like uBlox uses some kind of a Kalman filter to smooth the speed readings internally, but uses a very low speed of reaction to change. I’ve checked the absolute Velocity data output (Velocity North, East and down) and they update in the same rate (I assume they and the ground speed reading are correlated).

    So the question is: how does this affect the accuracy of the predicted Lat and Lon in your code? I mean the object can stop, but the Kalman filter will think that we are still moving, which will give us incorrect predicted location. Or am I doing something wrong in terms of getting the velocity from uBlox device?


  • Laura G

    Hi Scott!

    I’m trying to include a 3rd axis with yaw angle, to see the car yaw rate in the XY plane. Do you have any idea how to implement it?

    Don’t know if this dynamic model is valid:

    yaw = yaw(0) + yaw_rate*dt

    [ yaw      ]   [ 1 0 ] [ yaw      ]   [ dt ]
    [ yaw_rate ] = [ 0 1 ] [ yaw_rate ] + [ 0  ] * [ yaw_rate ]

    x = A*x + B*u

    yaw_rate from IMU
    yaw angle from GPS or IMU…

    If you could help me with this, I would be really grateful.
    Thank you!

  • hotigeftas

    thank you for the guide, it helps a lot.
    I’m trying to implement a Kalman filter on raw data coming from an Android phone. So I have 2 rather simple questions:
    1: is it a problem if I don’t have access to GPS accuracy, but all the other inputs are available? And if so, what are the possible solutions to this issue?
    2: for another type of raw data, I have only the GPS position and the raw accelerometer logged. I’m almost certain the data can’t be filtered with a Kalman filter due to the lack of information, but is there a method to better predict position with this much of a resource (don’t want you to explain it, just give me keywords please)?
    Thank you very much

  • Lei Sun

    Hi Scott:
    Thanks for the great post. Could you please explain the algorithm you used to remove the gravity component from the sensor reading? A high-pass filter or some sort? And are the values q[0]–q[3] the output from the accelerometer? It’s 3-axis, so how come there are 4 numbers?

  • Fayçal A-z

    Hello Scott, thanks a lot for sharing.

    I’ve just a question about the z vector : when you write z=[p v], is p the relative position or GPS coordinates (lat, lon, i.e. absolute coordinates) ?

    In case p is GPS coordinates, how do you convert it to relative position ?

    Can someone bring more information about that ?

    Thanks !

  • Kenan Senagic

    Hi Scott, you made awesome tutorials regarding sensor fusion, but there are little things that are still unclear to me.
    If I understood you correctly, in your video tutorial about the accelerometer, gyro, and magnetometer you first calculate a quaternion, then you calculate a rotation matrix from that quaternion, which is used to project the device frame to the world frame, and then you multiply that by the accelerometer vector to get acceleration in NORTH, EAST, and DOWN. The part that confuses me is gravity: when I calculate acceleration in NORTH, EAST, and DOWN and then offset that by [0, 0, 1], it is ok if the device is parallel to the floor, but when I tilt it, let’s say up, I will get earth gravity in the NORTH or DOWN values. Also, I noticed here on the blog that you have some code to strip out gravity from acceleration, so I am not sure in which step you do this?