Tracking the relative orientation of two IMUs and their position within a limited area

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
We are busy with a project where we need to be able to track the position of an IMU (or two) and also the relative orientation between two IMUs.

We are trying to keep the project simple and are using an Arduino Nano with two connected BNO055 IMUs to obtain Euler and quaternion measurements.

The physical setup has the two IMUs mounted one above the other, with both boards oriented in the same direction and on the same horizontal plane.
The bottom IMU is mounted at a midpoint (centre of gravity) for stability, and the top IMU can move forward, sideways and backwards, or any combination of these, relative to the bottom one.
The Arduino is mounted between the sensors. We are using RF to transmit the data to a collection Arduino.

We are currently able to obtain the measurements with the Arduino and have implemented a basic comparison between the Euler measurements to see if there are deviations from the original starting point, plotting these to determine whether there is possible bending in one of the directions.

We need to establish the movement of the sensors in the horizontal plane (and possibly the vertical) so we can see at what point the deviation between the sensors occurred.

We have been reading quite a lot but are struggling to convert the quaternion measurements into movement. We received this comment:

"Quaternions (or Euler Angles) are used to create a rotation matrix that modifies the raw acceleration values. This gives you accelerations in an inertial coordinate system (e.g. the room your robot is in). From there, you multiply your timestep (integrate time) and use it to update a 1x3 velocity array (matrix), and multiply your timestep again to determine displacement. The displacement is added to a position array (matrix) to determine current position."
but we are not really making any progress in calculating this on the Arduino.

This is for a project for my son and could possibly be expanded in different directions as it is turning out to be an interesting area.

The basic Euler comparison is the following (Data is a structure into which the measurements are simultaneously logged):
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

sensors_event_t event;
bno1.getEvent(&event);
data.OX2 = event.orientation.x;
data.OY2 = event.orientation.y;
data.OZ2 = event.orientation.z;

// OX1/OY1/OZ1 are filled in the same way from the second sensor
//data.qOR = quat / quat2;
data.xOR = data.OX1 - data.OX2;
data.yOR = data.OY1 - data.OY2;
data.zOR = data.OZ1 - data.OZ2;

This is then transmitted.
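One caveat worth noting with the simple subtraction above: the BNO055 reports heading in the range 0-360°, so two readings straddling the wrap point (say 359° and 1°) come out as a difference of -358° instead of 2°. A minimal wrap-aware difference might look like this (a sketch -- `angleDiff` is just an illustrative helper, not a library call):

```cpp
#include <cmath>

// Difference a - b between two angles in degrees, wrapped into (-180, 180].
// Avoids the spurious ~360 degree jump when readings straddle 0/360.
double angleDiff(double a, double b) {
    double d = std::fmod(a - b, 360.0);
    if (d > 180.0)   d -= 360.0;
    if (d <= -180.0) d += 360.0;
    return d;
}
```

The same wrap handling would apply to each axis difference before transmitting.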

Any help in translating the comment to actual code would be appreciated. We know that we will not have absolute values, and that we will need to add filters to the processing at a later stage.
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
If this falls outside the scope of what can be posted, or outside this forum's remit, please let me know.
 

jpanhalt

Joined Jan 18, 2008
7,893
At first, I thought it was a typo for a pair of these:
[attached image: upload_2019-3-28_14-23-33.png]

Then, when I got to the part about mounting and whether it was the correct forum, I thought, that has to be it.

Finally, it dawned on me that it must be some sort of Inertial Measurement Unit.
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
I saw a posting on this site by Mark Hughes regarding IMUs and their use and explanations regarding the use of quaternions.
 

Mark Hughes

Joined Jun 14, 2016
395
Hi @AJDickinson ,
Thanks for creating a post! For the other contributors -- IMU stands for Inertial Measurement Unit; these devices track angular and translational accelerations. They can be used to perform what navigators call "dead reckoning" -- essentially using sensor readings and math to calculate a position in space.
IMUs measure accelerations -- from those, velocity is calculated, and from velocity, position. This is true for both translational (straight-line) and rotational motion.

[attached image: ExplainationInegrationIMU.jpg]
MEMS accelerometers are relatively stable -- so if you integrate the acceleration readings, you'll have a very good idea of changes in your velocity (assuming you do not saturate the sensor), and the velocity readings, when integrated, give you a pretty good idea of your displacement. This is true in three dimensions. But life gets complicated quickly. First -- the accelerometer axes are never perfectly orthogonal (let alone orthonormal) -- that means acceleration in one direction can and does get picked up in the other two axes -- fortunately, it's just a few percent. Still, to get increased accuracy, you have to calibrate and subtract the influences.

But most people never get to that level of precision because of larger problems -- it's not enough to continuously integrate the readings if you don't know which direction the sensor is pointing. And you don't know which direction the sensor is pointing because MEMS gyroscopes are famously unstable. A property called "bias instability" describes how many degrees a perfectly stable and motionless gyroscope can drift in a second -- so gyroscopes have to be continuously recalibrated. They're very good at detecting changes in orientation -- but not absolute orientation. A second reference has to be added -- in the case of the BNO055, that is the magnetometer.

Inside the BNO055, the magnetometer data is used to determine long-term orientation, and the gyroscope is used to determine short-term angular accelerations. Those two bits of data are "fused" to create "Absolute Orientation" -- a calibration sequence is performed, or loaded into the sensor's memory. Then, the sensor provides the 3-DOF angular displacement from that original position.

We need that angular displacement to determine the orientation of the sensor in the room when the acceleration measurements are recorded because the acceleration measurements are in reference to the sensor, but the sensor is moving in reference to the room.

So, the general procedure is this: Use the absolute orientation (quaternion or Euler angles) to determine appropriate transformations for acceleration data. Multiply the acceleration reading by the refresh-period (0.01 s) to determine the change in velocity -- then use the change in velocity to update the previous velocity -- finally, multiply the velocity by the refresh-period to determine displacement -- add the displacement to the previous position to determine the current position.
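In plain C++, that whole procedure might be sketched like so (purely an illustration -- the struct and function names are mine, not BNO055 library calls, and there's no gravity handling or filtering yet):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Standard vector cross product.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Rotate a sensor-frame vector v into the inertial frame using a unit
// quaternion (w, x, y, z), via v' = v + 2*u x (u x v + w*v), u = (x, y, z).
// Only products and sums -- no trig calls.
Vec3 rotateByQuat(double w, double x, double y, double z, const Vec3& v) {
    Vec3 u{x, y, z};
    Vec3 c  = cross(u, v);   // u x v
    Vec3 uc = cross(u, c);   // u x (u x v)
    return { v.x + 2*(uc.x + w*c.x),
             v.y + 2*(uc.y + w*c.y),
             v.z + 2*(uc.z + w*c.z) };
}

struct State { Vec3 vel; Vec3 pos; };

// One dead-reckoning step: rotate the measured acceleration into the
// inertial frame, integrate once for velocity, once more for position.
void deadReckonStep(State& s, double w, double x, double y, double z,
                    const Vec3& accelSensor, double dt) {
    Vec3 a = rotateByQuat(w, x, y, z, accelSensor);
    s.vel.x += a.x * dt;  s.vel.y += a.y * dt;  s.vel.z += a.z * dt;
    s.pos.x += s.vel.x * dt;  s.pos.y += s.vel.y * dt;  s.pos.z += s.vel.z * dt;
}
```

Each sensor reading, you'd call `deadReckonStep` with the current quaternion, the raw acceleration, and the timestep.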

There are people out there who have done things similar to what you are doing -- I did a quick search, and found the following (which I have looked over, but not carefully analyzed).

https://github.com/coder543/lifereckoner/blob/master/task_dr.cpp

Let me go finish dinner with the fam and I'll post a bit more.
 

Mark Hughes

Joined Jun 14, 2016
395
Hi @AJDickinson ,
Okay -- I'm back -- so coming up with the position from acceleration is the trivial aspect of this programming challenge. It's keeping track of and updating three variables (or three elements of an array).
The hard part is the angular conversion -- taking the accelerations, as measured in the accelerating frame, and determining how those accelerations are then projected into the inertial frame for computation.
[attached image: MovingAxesRigidAxes.gif]

So -- you take the angles from the Absolute Orientation readings and you can then apply them to an appropriate transformation matrix: (taken from my article on the introduction to Quaternions)
[attached image: RotationMatrices.jpg]
As you can imagine -- this is computationally demanding for a tiny Arduino Nano running at (I assume) 8-16 MHz. It's conceivable that the MCU cannot process the data fast enough to keep up with the sensor's refresh rate (100 Hz). FYI -- addition and subtraction require a few clock cycles, multiplication several, division dozens, and trig hundreds.

So -- that's why programmers have grown fond of Quaternions. If you want to convert it to that same rotation matrix -- you can do it without using one bit of trig. Just use the raw quaternion data that comes off the sensor -- it's already normalized. (This assumes that A0 corresponds to w, the scalar part of the quaternion)



Is this starting to make sense?

I kind of regret writing the article on the BNO055 the way I did, and I'd write it as a series of articles if I had it all to do over again -- I really need a standalone article on "Here's the data that comes off the sensor....here's code to interpret it." But it's a several-year-old sensor at this point -- I doubt I could get the article proposal through editorial.
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
Please let me know if we are on the right track:
In the setup we do the IMU calibrations
and zero a position:
phyPOS.X=0;
phyPOS.Y=0;
phyPOS.Z=0;

In the loop we:
//Load the quaternions
imu::Quaternion quat = bno1.getQuat();
imu::Quaternion quat2 = bno2.getQuat();
//


//build the quaternion rotation matrix; mat1 is a struct with nine entries
mat1.a00 = quat.w()*quat.w() + quat.x()*quat.x() + quat.y()*quat.y() + quat.z()*quat.z();
mat1.a10 = 2*(quat.x()*quat.y() + quat.w()*quat.z());
mat1.a20 = 2*(-1*quat.w()*quat.y() + quat.x()*quat.z());
mat1.a01 = 2*(quat.x()*quat.y() - quat.w()*quat.z());
mat1.a11 = quat.w()*quat.w() - quat.x()*quat.x() + quat.y()*quat.y() - quat.z()*quat.z();
mat1.a21 = 2*(quat.w()*quat.x() + quat.y()*quat.z());
mat1.a02 = 2*(quat.w()*quat.y() + quat.x()*quat.z());
mat1.a12 = 2*(-1*quat.w()*quat.x() + quat.y()*quat.z());
mat1.a22 = quat.w()*quat.w() - quat.x()*quat.x() - quat.y()*quat.y() + quat.z()*quat.z();
//last step is the acceleration, if I understand all the reading correctly
eACC.X = mat1.a02 * 0.1;
eACC.Y = mat1.a12 * 0.1;
eACC.Z = mat1.a22 * 0.1;
//??update previous velocity
mat1.a01 = (mat1.a01 + eACC.X) * 0.1;
mat1.a11 = (mat1.a11 + eACC.Y) * 0.1;
mat1.a21 = (mat1.a21 + eACC.Z) * 0.1;
//update the original position to the new one
mat1.a00 = mat1.a00 + mat1.a01 ;
mat1.a10 = mat1.a10 + mat1.a11 ;
mat1.a20 = mat1.a20 + mat1.a21 ;

//Set in the position to store or send
phyPOS.X=mat1.a00 ;
phyPOS.Y=mat1.a10 ;
phyPOS.Z=mat1.a20 ;

Am I on the right track or missing the boat?
 

Mark Hughes

Joined Jun 14, 2016
395
Hi @AJDickinson,
Looks like you're on the right track! There's a whole lot to take in here, so my compliments on jumping in with both feet. You do need to make a few modifications.
1) The time-step is determined by how quickly the sensor updates -- and you want to read and use every bit of data you can get -- so look up the refresh rate in the BNO055 datasheet; your time step will be 1 divided by that number (100 Hz → 0.01 s). I chose 0.1 as an example in my explanation -- I should have made that clearer. Sorry about that.
2) Your matrix definition and multiplication seem to be a bit off. You need to apply the whole [3x3] rotation matrix to the measured acceleration [3x1] vector -- each calculated component is a combination of all three measured components, not a single measured component times a few matrix elements. I'm going to change the variable names I've been using to see if I can make things a bit clearer. Written out fully (Rij are the matrix elements built from the quaternion):
aXcalculated = R00*aXmeasured + R01*aYmeasured + R02*aZmeasured
aYcalculated = R10*aXmeasured + R11*aYmeasured + R12*aZmeasured
aZcalculated = R20*aXmeasured + R21*aYmeasured + R22*aZmeasured

3) Then you'd update your velocity and position :
velocityXcurrent+=(aXcalculated*timeStep)
velocityYcurrent+=(aYcalculated*timeStep)
velocityZcurrent+=(aZcalculated*timeStep)
positionXcurrent+=velocityXcurrent*timeStep
positionYcurrent+=velocityYcurrent*timeStep
positionZcurrent+=velocityZcurrent*timeStep
4) (You can do this fix last.) Multiplication is clock-cycle consuming -- avoid repeating calculations if you can -- so if you have the available memory, create additional variables that precalculate repeated operations. For example, w², x², y², and z² are each used three times, and 2xy, 2xz, 2wy, 2wz are each used twice. With those precalculated, you'll only compute about 8 products per measurement cycle, versus 24 the way you've got it set up in your code.
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
New code:
Problem is that "bno1.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);"
seems to always return a value, even when the sensors are lying on a table, so distance is continuously being added to the position on all axes.
Please have a look -- maybe I have some more issues in the code.

/* Get a new sensor event for Euler measurements */
sensors_event_t event;
sensors_event_t event2;

//read the event
bno1.getEvent(&event);
bno2.getEvent(&event2);

//Load the quaternions
imu::Quaternion quat = bno1.getQuat();
imu::Quaternion quat2 = bno2.getQuat();
//
imu::Vector<3> accel = bno1.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);
accelMeasure.X= accel[0];
accelMeasure.Y= accel[1];
accelMeasure.Z= accel[2];

float qw =(float) quat.w();
float qx = (float) quat.x();
float qy =(float) quat.y();
float qz = (float) quat.z();
//build the quaternion rotation matrix
mat1.a00 = pow(qw,2) + pow(qx,2) - pow(qy,2) - pow(qz,2);
mat1.a10 = 2*(qx*qy + qw*qz);
mat1.a20 = 2*(-1*qw*qy + qx*qz);
mat1.a01 = 2*(qx*qy - qw*qz);
mat1.a11 = pow(qw,2) - pow(qx,2) + pow(qy,2) - pow(qz,2);
mat1.a21 = 2*(qw*qx + qy*qz);
mat1.a02 = 2*(qw*qy + qx*qz);
mat1.a12 = 2*(-1*qw*qx + qy*qz);
mat1.a22 = pow(qw,2) - pow(qx,2) - pow(qy,2) + pow(qz,2);
//rotate the measured acceleration into the inertial frame (matrix row times vector)
accelCalc.X = mat1.a00*accelMeasure.X + mat1.a01*accelMeasure.Y + mat1.a02*accelMeasure.Z;
accelCalc.Y = mat1.a10*accelMeasure.X + mat1.a11*accelMeasure.Y + mat1.a12*accelMeasure.Z;
accelCalc.Z = mat1.a20*accelMeasure.X + mat1.a21*accelMeasure.Y + mat1.a22*accelMeasure.Z;
Serial.print("ACC X: ");
Serial.print(accelCalc.X,4);
Serial.print("ACC Y: ");
Serial.print(accelCalc.Y,4);
Serial.print("ACC Z: ");
Serial.print(accelCalc.Z,4);
Serial.print("Timestep: ");
Serial.print(timeStep,10);
//integrate acceleration into velocity
velocity.X = velocity.X + accelCalc.X * timeStep;
velocity.Y = velocity.Y + accelCalc.Y * timeStep;
velocity.Z = velocity.Z + accelCalc.Z * timeStep;
Serial.print("VEL X: ");
Serial.print(velocity.X,8);
Serial.print("VEL Y: ");
Serial.print(velocity.Y,8);
Serial.print("VEL Z: ");
Serial.print(velocity.Z,8);
phyPOS.X= phyPOS.X + velocity.X * timeStep ;
phyPOS.Y= phyPOS.Y + velocity.Y * timeStep ;
phyPOS.Z= phyPOS.Z + velocity.Z * timeStep ;

Output: X Y Z values
0.000179 0.001155 0.000104
0.000538 0.003467 0.000317
0.002687 0.017335 0.001598
0.003759 0.024268 0.002242
0.00501 0.032359 0.002992
0.006438 0.041608 0.003853
0.008045 0.052018 0.004819
0.00983 0.063587 0.005891
0.011791 0.076314 0.007072
0.013929 0.0902 0.008361
0.016242 0.105239 0.009759
0.018734 0.121437 0.011265
0.021402 0.13879 0.012879
0.024246 0.157304 0.014601
0.027268 0.176973 0.01643
0.030465 0.197799 0.018367
0.033843 0.219785 0.020411
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
XYZ position after 2070 iterations
392.9152 1461.272 279.1302
393.3614 1462.464 279.5459
393.8078 1463.656 279.962
394.2545 1464.848 280.3783
394.7013 1466.041 280.7949
395.1483 1467.235 281.2118
395.5955 1468.429 281.629
396.0429 1469.624 282.0465
396.4906 1470.819 282.4643
 

Mark Hughes

Joined Jun 14, 2016
395
Hi @AJDickinson ,
Whoops! A stupid oversight on my part -- that's due to the gravity vector. The magnitude of those values, (Ax² + Ay² + Az²)^(1/2), should be about 9.8 m/s² (or 980 cm/s², or 9800 mm/s², etc.). If you are getting a different number, there could be a multiplier that has to be introduced due to different measurement ranges. The datasheet should state what numbers come off and what 1 bit is equal to -- look for "LSB" near the accelerometer section. For some reason I was under the impression that the calibration eliminated that offset, but it's been so long since I've played with the BNO055 that I must be remembering incorrectly.
So now we have to subtract that effect out... let me think about how to do that for a bit. I want to say we can simply subtract 9.8 from the z-component after the vector is rotated.

@DickCappels Would we get a bit more traction if this post was in the programming forum instead of projects?
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
3.6.4.1 Accelerometer offset
The accelerometer offset can be configured in the registers shown in the table below. There are 6 bytes required to configure the accelerometer offset (2 bytes for each of the 3 axes X, Y and Z). Configuration will take place only when the user writes the last byte (i.e., ACC_OFFSET_Z_MSB).

Table 3-15: Accelerometer default register settings
Reg Name            Default Reg Value (Bit 0 - Bit 7)
ACC_OFFSET_X_LSB    0x00
ACC_OFFSET_X_MSB    0x00
ACC_OFFSET_Y_LSB    0x00
ACC_OFFSET_Y_MSB    0x00
ACC_OFFSET_Z_LSB    0x00
ACC_OFFSET_Z_MSB    0x00

The range of the offsets varies based on the G-range of the accelerometer sensor.

Table 3-16: Accelerometer G-range settings
Accelerometer G-range    Maximum offset range in mg
2G                       +/- 2000
4G                       +/- 4000
8G                       +/- 8000
16G                      +/- 16000

Table 3-17: Accelerometer unit settings
Unit    Representation
m/s²    1 m/s² = 100 LSB
mg      1 mg = 1 LSB
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
found the following also
3.5 Sensor Configuration
The fusion outputs of the BNO055 are tightly linked with the sensor configuration settings. Due to this fact, the sensor configuration is limited when the BNO055 is configured to run in any of the fusion operating modes. In any of the non-fusion modes the configuration settings can be updated by writing to the configuration registers as defined in the following sections.

3.5.1 Default sensor configuration
At power-on the sensors are configured with the default settings as defined in Table 3-7 below.

Table 3-7: Default sensor configuration at power-on
Sensor          Parameter     Value
Accelerometer   Power Mode    NORMAL
                Range         +/- 4g
                Bandwidth     62.5 Hz
                Resolution    14 bits
Gyroscope       Power Mode    NORMAL
                Range         2000 °/s
                Bandwidth     32 Hz
                Resolution    16 bits
Magnetometer    Power Mode    FORCED
                ODR           20 Hz
                Repetition    XY: 15, Z: 16
                Resolution    x/y/z 13/13/15 bits
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
The magnitude of those values (Ax^2+Ay^2+Az^2)^1/2 should be about 9.8 m/s² (or 980 cm/s², or 9800 mm/s², etc...)
They seem to be correct: Test acc: 95.65 : 9.78
Serial.print("Test acc: ");
Serial.print(pow(accelMeasure.X,2) + pow(accelMeasure.Y,2) + pow(accelMeasure.Z,2));
Serial.print(" : ");
Serial.print(pow(pow(accelMeasure.X,2) + pow(accelMeasure.Y,2) + pow(accelMeasure.Z,2),0.5));
 

Mark Hughes

Joined Jun 14, 2016
395
Hi Andrew,
Sorry for the delayed reply -- I was writing some super-exciting (not-really -- actually mind-numbingly boring) technical articles for the site today.
First off -- you've made tremendous progress! That's good news! Great news even. Remember -- this stuff isn't easy. There are guys out there who get their master's degrees and Ph.D.s playing with this stuff. I know this because they often email me for help ;) Sadly, I'm not often able to help them because while they've collected data -- they've usually dropped a lot of frames, failed to keep track of initial conditions, or done other things that render the data useless.

Second -- remember that these devices have inherent instabilities built into them. They will never be perfect -- even after careful calibration. MEMS-based devices are getting better all the time, but for spot-on accuracy (think weapons systems, submarines, etc...), people still use mechanical accelerometers and mechanical gyroscopes.

Third -- if I had to guess the next problem, it could quite possibly be the rotation matrix I provided you. There are a dozen or so Euler rotation matrices, and I picked the one I did on a hunch -- not by reading the BNO datasheet, which would clearly show the axis orientation.

Fourth -- if I had to make another guess, the problem is in the initial setup and calibration of the device. The quaternions returned to you are based on movement from the initial orientation of the device, which may or may not have the gravity vector lying along the z-axis when you turn the thing on. Therefore, when you use the rotation matrix to rectify the movement, you're only measuring against the initial position -- which may or may not have the inertial frame aligned with the device frame.
[attached image: Hughes_TechnicalArticles_Quaternions_MisalignedAxes.gif]

So -- we should be able to use the gravity vector at startup to establish a rotation to orient z.
But that still leaves us to deal with orientation for x and y -- which could be rotated at pretty much any angle. Maybe we can use the compass reading to put x along a 0° heading?
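For the z part, a stationary accelerometer reading gives the startup tilt directly (a sketch -- `tiltFromGravity` is an illustrative name, angles are in radians, and heading about z would still need the magnetometer, as suggested above):

```cpp
#include <cmath>

// Estimate the startup tilt from a stationary accelerometer reading:
// when the only acceleration present is gravity, roll and pitch follow
// directly. Heading about z cannot be recovered from gravity alone.
void tiltFromGravity(double ax, double ay, double az,
                     double& roll, double& pitch) {
    roll  = std::atan2(ay, az);                          // rotation about x
    pitch = std::atan2(-ax, std::sqrt(ay*ay + az*az));   // rotation about y
}
```

At startup, you'd average a short burst of stationary readings, compute roll and pitch from the average, and fold that correction into the rotation before integrating.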

What do you think -- is this an initial conditions problem?
 

Thread Starter

AJDickinson

Joined Mar 26, 2019
14
It could well be the initialization, as the device initializes on startup each time.

How would you go about doing the orientation on the z, x and y axes in code?

I have moved the initial calibration into setup() rather than the loop, so that by the time it enters the loop phase the calibration is done.
 