fusing a mag, accel, and gyro to estimate orientation | understanding sensor fusion and tracking, part 2
from the series: understanding sensor fusion and tracking
brian douglas
this video describes how we can use a magnetometer, accelerometer, and a gyro to estimate an object’s orientation. the goal is to show how these sensors contribute to the solution and to explain a few things to watch out for along the way.
we’ll cover what orientation is and how we can determine orientation using an accelerometer and a magnetometer. we’ll also talk about calibrating a magnetometer for hard and soft iron sources and ways to deal with corrupting accelerations.
we’ll also show a simple dead reckoning solution that uses the gyro on its own. finally, we’ll cover the concept of blending the solutions from the three sensors.
in this video, we’re going to talk about how we can use sensor fusion to estimate an object’s orientation. now you may call orientation by other names, like attitude, or maybe heading if you’re just talking about direction along a 2d plane. this is why the fusion algorithm can also be referred to as an attitude and heading reference system. but it’s all the same thing; we want to figure out which way an object is facing relative to some reference.
we can use a number of different sensors to do this. for example, satellites could use star trackers to estimate attitude relative to the inertial star field, whereas an airplane could use an angle of attack sensor to measure orientation of the wing relative to the incoming free air stream.
now, in this video, we’re going to focus on using a very popular set of sensors that you will find in every modern phone and a wide variety of autonomous systems: a magnetometer, accelerometer, and a gyro.
the goal of this video is not to develop a fully fleshed-out inertial measurement system. there’s just too much to cover to really do a thorough job. instead, i want to conceptually build up the system and explain what each sensor is bringing to the table, and a few things to watch out for along the way. i’ll also call out some other really good sources that i’ve linked to below where you can dive into more of the details. so let’s get to it. i’m brian, and welcome to a matlab tech talk.
when we are talking about orientation, we’re really describing how far an object is rotated away from some known reference frame. for example, the pitch of an aircraft is how far the longitudinal axis is rotated off of the local horizon. so in order to define an orientation, we need to choose the reference frame that we want to describe the orientation against, and then specify the rotation from that frame using some representation method. we have several different ways to represent a rotation. perhaps the easiest to visualize and understand at first is the idea of roll, pitch, and yaw. this representation works great in some situations; however, it has some widely known drawbacks in others. so, we have other ways to define rotations for different situations, things like the direction cosine matrix and the quaternion.
the important thing for this discussion is not what a quaternion is or how a dcm is formulated, but rather just to understand that these groups of numbers all represent a fixed three-dimensional rotation between two different coordinate frames: the object’s own coordinate frame that is fixed to the body and rotates with it, and some external coordinate frame. and it’s this rotation, or these sets of numbers, that we’re trying to estimate by measuring some quantity with sensors.
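just to make that concrete, here’s a tiny matlab sketch, assuming the quaternion class from the sensor fusion and tracking toolbox is available, showing the same rotation written as euler angles, as a quaternion, and as a dcm; the specific angles are just example values.

```matlab
% the same fixed 3-d rotation, expressed three different ways
% (assumes the sensor fusion and tracking toolbox quaternion class)
eulZYX = deg2rad([30 10 -5]);                          % example yaw, pitch, roll in radians
q      = quaternion(eulZYX, 'euler', 'ZYX', 'frame');  % quaternion representation
dcm    = rotmat(q, 'frame');                           % 3x3 direction cosine matrix
% all three describe the one rotation between the body frame and the reference frame
```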
so let’s get to our specific problem. let’s say we want to know the orientation of a phone that’s sitting on a table: the phone’s body coordinate frame relative to the local north, east, and down coordinate frame. we can find the absolute orientation using just a magnetometer and an accelerometer. a little later on, we’ll add a gyro to improve accuracy and correct for problems that occur when the system is moving, but for now we’ll stick with these two sensors. simply speaking, we could measure the phone’s acceleration, which would just be due to gravity since it’s sitting stationary on the table, and we’d know which direction is up: the direction opposite gravity. and then we can measure the magnetic field in the body frame to determine north.
but here’s something to think about: the mag field points north but it also points up or down depending on the hemisphere you’re in. and it’s not just a little bit. in north america, the field lines are angled around 60 to 80 degrees down, which means the field is mostly in the gravity direction. the reason a compass points north and not down is that the needle is constrained to rotate within a 2d plane. however, our mag sensor has no such constraint, so it’s going to return a vector that is largely in the direction of gravity as well. so to get north, we need to do some cross products. we can start with our measured mag and accel vectors in the body frame. down is in the opposite direction of the acceleration vector. east is the cross product of down and the magnetic field, and north is the cross product of east and down. so the orientation of the body is simply the rotation between the body frame and the ned frame, and i can build the direction cosine matrix directly from the n, e, and d vectors that i just calculated.
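to make that concrete, here’s a minimal matlab sketch of those cross products, assuming accel and mag are the raw 1-by-3 accelerometer and magnetometer readings in the body frame:

```matlab
% minimal sketch of the cross-product construction described above;
% accel and mag are assumed to be 1x3 body-frame measurements
down  = -accel / norm(accel);           % gravity is opposite the measured acceleration
east  = cross(down, mag);               % strips away the vertical part of the mag field
east  = east / norm(east);
north = cross(east, down);
north = north / norm(north);
dcm   = [north; east; down];            % rows are the n, e, d axes in body coordinates
```

for what it’s worth, the toolbox function ecompass performs essentially this same accel-plus-mag construction and returns the orientation directly.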
so let's go check out an implementation of this fusion algorithm. i have a physical imu, the mpu-9250, which has an accelerometer, magnetometer, and gyro, although for now we’re not going to use the gyro. i’ve connected it to an arduino through i2c, which is then connected to matlab through usb. i’ve pretty much just followed along with this example from the mathworks website, which provides some of the functions that i’m using, and i’ve linked it below if you want to do the same.
but let me show you my simple script. i connect to the arduino and the imu, and i’m using a matlab viewer to visualize the orientation, which i update each time i read the sensors. this viewer is a built-in function that comes with the sensor fusion and tracking toolbox. the small amount of math here is basically reading the sensors, performing the cross products, and building the dcm.
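as a rough idea of the structure (not my exact script), it looks something like this, where arduino, mpu9250, and the read functions come from the support package used in the linked example, and poseplot is just a stand-in for whatever viewer you use:

```matlab
% simplified sketch of the script described above; the hardware calls come from
% the matlab support package for arduino, and poseplot is a stand-in viewer
a   = arduino;                          % connect to the arduino over usb
imu = mpu9250(a);                       % mpu-9250 attached over i2c

viewer = poseplot;                      % orientation display
while true
    accel = readAcceleration(imu);      % m/s^2, body frame
    mag   = readMagneticField(imu);     % uT, body frame

    % same cross-product math as before
    down  = -accel / norm(accel);
    east  = cross(down, mag);   east  = east  / norm(east);
    north = cross(east, down);  north = north / norm(north);
    dcm   = [north; east; down];

    % note: double-check the frame/point convention your viewer expects
    set(viewer, "Orientation", quaternion(dcm, "rotmat", "frame"));
    drawnow limitrate
end
```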
and that’s pretty much the whole of it. so if i run this, we can watch the algorithm in action. notice that when it’s sitting on the table it does a pretty good job of finding down; it’s along the positive x-axis. and if i rotate it to another orientation, you can see that it follows pretty well with my physical movements. so overall, pretty easy and straightforward, right?
well, there are some problems with this simple implementation and i want to highlight two of them. the first is that accelerometers aren’t just measuring gravity; they measure all linear accelerations. so if the system is moving around a lot, it’s going to throw off the estimate of where down is. you can see here that i’m not really rotating the sensor much, but the viewer is jumping all over the place. this might not be much of a problem if your system is largely not accelerating, like a plane while it’s cruising at altitude or a phone that’s sitting on a table. but linear accelerations aren’t the only problem. even rotations can throw off the estimate because an accelerometer that’s not located at the center of rotation will sense an acceleration when the system rotates. so we have to figure out a way to deal with these corruptions.
and a second problem is that magnetometers are affected by disturbances in the magnetic field. obviously, you can see that if i get a magnet near the imu, the estimate is corrupted. so what can we do about these two problems? well, let’s start with the magnetometer. if the magnetic disturbance is part of the system and rotates with the magnetometer, then it can be calibrated out.
these are the so-called hard iron and soft iron sources. a hard iron source is something that generates its own magnetic field. this would be an actual magnet like the ones in an electric motor or it could be a coil that has a current running through it from the electronics themselves. if you tried to measure an external magnetic field, a hard iron source near the magnetometer would contribute to the measurement. if we rotate the system around a single axis and measure the magnetic field, the result would be a circle that is offset from the origin. so your magnetometer would read a larger intensity in some directions and a smaller intensity in the opposite direction.
a soft iron source is something that doesn’t generate its own magnetic field but is what you might call magnetic; you know, like a nail that is attracted to a magnet or the metallic structure of your system. this type of metal will bend the magnetic field as it passes through and around it and the amount of bending changes as that metal rotates. so a soft iron source that rotates with the magnetometer would distort the measurement, creating an oval rather than a circle.
so even if you had a perfect noiseless magnetometer, it would still return an incorrect measurement simply because of the hard and soft iron sources that are near it. and your phone, and pretty much every other system, has both of them.
so let’s talk about calibration. if the system had no hard or soft iron sources and you rotated the magnetometer all around, through a full four pi steradians of directions, then the magnetic field vector would trace out a perfect sphere with the radius being the magnitude of the field. now, a hard iron source would offset the sphere and a soft iron source would distort it into some ellipsoid. if we could measure this ahead of time, we could calibrate the magnetometer by finding the offset and transformation matrix that would convert it back into a perfect sphere centered at the origin. this transformation matrix and bias would then be applied to each measurement, essentially removing the effects of the hard and soft iron sources.
this is exactly what your phone does when it asks you to spin it around in all directions before using the compass. here, i’m demonstrating this by calibrating my imu using the matlab function, magcal. i’m collecting a bunch of measurements in different orientations and then finding the calibration coefficients that will fit them to an ideal sphere.
now that i have an a matrix that will correct for soft iron sources and a b vector that will remove the hard iron bias, i can add a calibration step to the fusion algorithm that i showed you previously, and this will produce a more accurate result than what i had before.
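in code, the calibrate-then-correct step looks roughly like this, where rawMag is an n-by-3 log of magnetometer samples collected while rotating the sensor through as many orientations as possible:

```matlab
% one-time calibration from logged data
[A, b, expectedFieldStrength] = magcal(rawMag);   % soft-iron matrix, hard-iron bias

% ... then, for every new magnetometer reading inside the fusion loop:
magCorrected = (mag - b) * A;                     % remove hard- and soft-iron effects
```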
all right, now let’s go back to solving the other problem of the corrupting linear accelerations. one way to address this is by predicting the linear acceleration and removing it from the measurement prior to using it. this might sound difficult to do, but it is possible if the acceleration is the result of the system actuators, you know, rather than an unpredictable external disturbance. we can take the commands that are sent to the actuators and play them through a model of the system to estimate the expected linear acceleration, and then subtract that value from the measurement. this is something that is possible, say, if your system is a drone and you’re flying around by commanding the four propellers.
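as a purely illustrative fragment, with predictLinearAccel standing in for a hypothetical model of your own vehicle:

```matlab
% hypothetical: map the actuator commands through a system model to get the
% expected body-frame linear acceleration, then remove it from the measurement
aPredicted = predictLinearAccel(actuatorCommands);   % placeholder model function
aForFusion = accel - aPredicted;                     % what remains should be mostly gravity
```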
if we can’t predict the linear acceleration or the external disturbances are too high, another option is to ignore accelerometer readings that are outside of some threshold from a 1 g measurement. if the magnitude of the reading is not close to the magnitude of gravity, then clearly the sensor is picking up on other movement and it can’t be trusted.
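a simple way to sketch that gate, with the 0.5 m/s^2 threshold being just an illustrative choice:

```matlab
% only trust the accelerometer when its magnitude is close to 1 g
g = 9.81;                                % m/s^2
if abs(norm(accel) - g) < 0.5            % within roughly 0.05 g of gravity
    down = -accel / norm(accel);         % safe to update the down estimate
else
    % skip this sample and keep the previous down estimate
end
```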
this keeps corrupted measurements from getting into our fusion algorithm, but it’s not a great solution because we stop estimating orientation during these times and we lose track of the state of the system. again, not really a problem if we’re trying to estimate orientation for a static object; this algorithm would work perfectly fine. however, often we want to know the orientation of something that is rotating and accelerating. so we need something else here to help us out.
what we can do is add a gyro into the mix to measure the angular rate of the system. in fact, the combination of magnetometer, accelerometer, and gyro is so popular that they are often packaged together as an inertial measurement unit, like i have with my mpu-9250. so how does the gyro help?
well, to start, i think it’s useful to think about how we can estimate orientation for a rotating object with just the gyro on its own, no accel and no magnetometer. for this, we can multiply the angular rate measurement by the sample time to get the change in angle during that time. then, if we knew the orientation of the phone at the previous sample time, we can add this delta angle to it and have an updated estimate of the current orientation. if the object isn’t rotating, then the delta angle would be zero and the orientation wouldn’t change, so it all works out. and by repeating this process for the next sample and the one after that, we will know the orientation of the phone over time. this process is called dead reckoning, and essentially it’s just integrating the gyro measurement.
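here’s a hedged sketch of that integration, using the toolbox quaternion class and the gyro reading from the same imu object as before; note that the multiplication order depends on the rotation convention you adopt:

```matlab
% dead reckoning: integrate the gyro from a known starting orientation
q  = quaternion(1, 0, 0, 0);              % assumed known initial orientation
dt = 1/100;                               % sample time, e.g. 100 hz

while true
    gyro = readAngularVelocity(imu);          % rad/s, body frame
    dq   = quaternion(gyro * dt, "rotvec");   % delta rotation over one sample
    q    = normalize(q * dq);                 % accumulate; order depends on convention
end
```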
there are downsides to dead reckoning. one, you still have to know the initial orientation before you can begin, so we have to figure that out; and two, sensors aren’t perfect. they have bias and other high-frequency noise that will corrupt our estimation. now, integration acts like a low-pass filter, so that high-frequency noise is smoothed out a little bit, which is good, but the result drifts away from the true orientation due to random walk as well as integrating any bias in the measurements. so, over time, the orientation will smoothly drift away from the truth.
so at this point we have two different ways to estimate orientation: one using the accelerometer and the magnetometer, and the other using just the gyro. and each has its own respective benefits and problems. and this is where sensor fusion comes in once again. we can use it to combine these two estimates in a way that emphasizes each of their strengths and minimizes their weaknesses. now, there’s a number of sensor fusion algorithms that we can use, like a complementary filter or a kalman filter, or the more specialized but very common madgwick or mahony filters, but at their core, every one of them does essentially the same thing.
they initialize the attitude, either by setting it manually or using the initial results of the mag and accelerometer, then, over time, they use the direction of the mag field and gravity to slowly correct for the drift in the gyro. now, i go into a lot more detail in my video on the complementary filter, and mathworks has a series on the mechanics of the kalman filter, both linked below, but in case you don’t go and watch them right away, let me go over a very high-level concept of how this blending works.
let’s put our two solutions at opposite ends of a scale that represents our trust in each solution. and we can place a slider that specifies which solution we trust more. if the slider is all the way left, then we trust our mag/accel solution 100% and we just use that value for our orientation. all the way to the right, and we use the dead reckoning solution 100%. when the slider is in between, this is saying that we trust both solutions some amount and therefore want to take a portion of one solution and add it to the complementary portion of the other solution. by putting the slider almost entirely to the dead reckoning solution, we are mostly trusting the smoothness and quick updates of the integrated gyro measurements, which gives us good estimates during rotations and linear accelerations, but we are ever so gently correcting that solution back toward the absolute measurement of the mag and accel to remove the bias before it has a chance to grow too large. so these two approaches complement each other.
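written out for a single angle just to show the blend, where alpha is the slider and the value here is only an example:

```matlab
% complementary-filter style blend for one angle
alpha      = 0.98;                                     % slider: mostly trust the gyro
angleFused = alpha * (anglePrev + gyroRate * dt) ...   % dead-reckoned piece
           + (1 - alpha) * angleAccelMag;              % absolute mag/accel piece
anglePrev  = angleFused;
```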
now, for the complementary filter, you as the designer figure out manually where to place this slider, how much you trust one measurement over the other. with a kalman filter, the optimal gain or position of the slider is calculated for you after you specify things like how much noise there is in the measurements and how good you think your system model is. so the bottom line is that we’re doing some kind of fancy averaging between the two solutions based on how much trust we have in them. now, if you want to practice this yourself, the matlab tutorial i used earlier goes through a kalman filter approach using the matlab function ahrsfilter.
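if you go that route, the usage is roughly this, with the sensor logs supplied as n-by-3 arrays:

```matlab
% kalman-filter approach with ahrsfilter, roughly as in the linked tutorial
fuse = ahrsfilter('SampleRate', 100);            % tune the noise properties as needed
q    = fuse(accelData, gyroData, magData);       % accel m/s^2, gyro rad/s, mag uT
% q is the estimated orientation (quaternion) at each sample time
```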
and that’s where i’m going to leave this video. in the next video, we’re going to take this one step further and add gps and show how our imu and orientation estimate can help us improve the position we get from the gps sensor.
so, if you don’t want to miss that or other future tech talk videos, don’t forget to subscribe to this channel. also, if you want to check out my channel, control system lectures, i cover more control topics there as well. i’ll see you next time.