Sensor Fusion
In PSMoveService, there are two mutually exclusive ways we may use the term 'sensor fusion'. First, we may use sensor fusion to combine readings from different sensors to estimate orientation. Second, we may use sensor fusion to combine readings from even more sensors to estimate both orientation and position (i.e. the pose).
When we first started PSMoveService, we used an orientation filter to do sensor fusion to estimate the orientation only and the position was pulled directly from the optical tracker. Now we use a pose filter to do sensor fusion to estimate the orientation and position together.
In both cases, we use the set of sensors in the inertial measurement unit (IMU). In the PSMove controller, the IMU has 3 sensors:
- Accelerometer - Measures the direction of gravity
- Magnetometer - Measures the direction of your local magnetic field (basically a 3d compass)
- Gyroscope - Measures the rate at which the controller is rotating about the controller local X, Y and Z axes.
TO DO: IMAGE DEPICTING EACH SENSOR AND THE XYZ ORIENTATION. ALSO SENSOR OFFSETS
And we also use an optical tracker (Optical-Tracker-Algorithms) to get a first estimate of the position.
TODO: SIMPLE IMAGE OF OPTICAL TRACKER
# Calculating Orientation (only)
PSMoveService originally calculated orientation in its own filter, independent of any position data. We still fall back to this method if no optical tracker is available.
There are a few ways you could compute orientation from an IMU, though they aren't all recommended.
You could track a controller's orientation just by adding up the deltas (i.e., integrating) from the Gyroscope but there are two problems with this:
- You need to start from a known orientation
- Errors in the gyro readings will accumulate and within a few seconds the orientation will be completely off
Another way to determine orientation is to use a sensor that measures some locally static field pointing in a known direction, like Earth's gravity. For example, suppose you calibrated while the controller was standing upright, so gravity pointed down the controller's local Y-axis; if you now measure gravity pointing down the controller's local X-axis, then the controller must be rotated 90 degrees. To measure the direction of gravity the PS Move uses a 3-axis accelerometer.
Measuring the direction of one field allows you to compute two rotation values, typically yaw and pitch. Measuring the direction of two different fields pointing in different directions allows you to compute all 3 rotation values: roll, pitch, and yaw.
Just as in the single field case, if you measure two independent fields in one known controller pose (i.e. calibrate the controller sitting upright) and then compare those to current readings you can compute orientation by determining the rotation you would need to align the calibration readings with the current readings. Using the PS Move's 3-Axis Magnetometer we get the second field direction we need from the Earth's magnetic field.
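The two-field idea can be illustrated with the classic TRIAD construction: build an orthonormal frame from each pair of field measurements, then compose the frames into the rotation that aligns the calibration readings with the current readings. This is a sketch, not PSMoveService's actual code, and the field directions below are made up:

```python
import numpy as np

def triad_rotation(v1, v2, w1, w2):
    """Find R such that R @ v ~= w, given two reference directions measured
    at calibration (v1, v2) and the same fields measured now (w1, w2)."""
    def frame(a, b):
        # Orthonormal basis built from two non-parallel unit vectors.
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    return frame(w1, w2) @ frame(v1, v2).T

# Calibration pose (made-up directions): gravity down local -Y, field along +Z.
grav_cal = np.array([0.0, -1.0, 0.0])
mag_cal  = np.array([0.0,  0.0, 1.0])
# Current readings after the controller rotates 90 degrees about its Z axis:
grav_now = np.array([-1.0, 0.0, 0.0])
mag_now  = np.array([ 0.0, 0.0, 1.0])
# The rotation that aligns the calibration readings with the current readings:
R = triad_rotation(grav_cal, mag_cal, grav_now, mag_now)
```

Note that if the two measured fields are nearly parallel, the cross product degenerates and the orientation becomes ambiguous, which is one reason magnetometer calibration matters.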
So if we can completely compute the orientation from the accelerometer and the magnetometer, why do we care about the gyroscope? The answer is noise and bias. Both the accelerometer and the magnetometer give much "noisier" readings than the gyroscope does. If you compute orientation from just the accelerometer and the magnetometer, you'll get a really jittery-looking orientation.
If, however, we compute a weighted blend of the accelerometer/magnetometer-derived orientation with the integrated gyroscope readings, we can get the best of both worlds: smooth orientation changes without drift. This process of combining information from multiple sensors is called "Sensor Fusion". There are many algorithms for doing Sensor Fusion. They vary in what kinds of sensors they use, how they deal with noise, whether they need calibration, and how computationally expensive they are to run.
The PSMoveService library uses a modified version of Madgwick's AHRS orientation filter which I adapted from his paper describing the filter. His original method and implementation require no calibration because the default magnetometer direction is supposed to converge on the correct value. In practice I've found that it never quite did this for me and the filter suffered from drift in many circumstances as a result. For increased reliability (at the cost of ease of use) we instead do a manual calibration of the magnetometer. This calibration process is outlined on the PSMove Magnetometer Calibration wiki page.
TODO: Also mention basic Madgwick, complementary filter and Kalman filter
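The weighted-blend idea is easiest to see in a one-axis complementary filter. This is a minimal illustrative example, not PSMoveService code:

```python
def complementary_filter_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a one-axis complementary filter (illustrative only).
    Blends the smooth-but-drifting integrated gyro estimate with the
    noisy-but-drift-free accelerometer tilt estimate."""
    gyro_estimate = angle + gyro_rate * dt        # integrate angular rate
    return alpha * gyro_estimate + (1.0 - alpha) * accel_angle

# Start with a 0.5 rad error; the gyro reports no rotation, and the
# accelerometer (correctly) says the tilt is zero.
angle = 0.5
for _ in range(500):
    angle = complementary_filter_step(angle, gyro_rate=0.0, accel_angle=0.0, dt=0.01)
# The accelerometer term slowly pulls the estimate back toward the truth.
```

The blend weight `alpha` controls how quickly the absolute reference corrects gyro drift versus how much accelerometer jitter leaks into the estimate.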
# Calculating Pose (Orientation + Position)
To calculate pose (orientation + position), we perform sensor fusion using all the sensors in the IMU plus the rough position estimate from the optical tracker.
We want to estimate the state of the controller, which comprises several variables. Our sensors don't directly measure all of these variables, and the relationships between sensors and state are noisy.
A common way to calculate a cleaner estimate of an unobserved state using a set of noisy sensors is with a Kalman filter. In our case, the math governing how our states evolve over time and how our sensors relate to our states are non-linear, so we use a non-linear sigma-point Kalman filter (SPKF), specifically the square-root formulation of the Unscented Kalman filter (SRUKF). TODO: Links for the different types of KFs.
The system has 2 driving functions. First, the "process function" describes how the state x at time k-1 leads to a state at time k (i.e., x_k). It is also assumed that there is some process noise v in the state transition. Second, the "observation function" determines how the underlying state can generate the observed sensor values in y. It is also assumed that the observation function has some noise n.
The basic steps are as follows:
- Initialize with best guess for state, state covariance, noise, and noise covariance
- Time update i.e. predict state xbar_k from state x_(k-1)
- Calculate sigma points X_(k-1)
- These are special points that fully describe the mean and variance of the state x_(k-1)
- 2L+1 points are needed, where L is the length of the state vector, and the computation is complicated so it's helpful to keep the state vector small.
- Depends on prior state covariance matrix P_x_(k-1) (alternatively, in square-root implementation, S_x_(k-1) where S*S^T = P)
- Propagate sigma points through process function to get X_k
- Weighted sum sigma points to get xbar_k
- Calculate sigma point covariance P_xbar_k (or S_xbar_k)
- Depends on process noise covariance, usually considered constant
- Measurement (observation) update i.e. predict sensor observation ybar_k
- Redraw sigma points to incorporate updated covariance in previous step (I'm not sure about this...)
- Propagate sigma points through observation model to get Y_k
- Weighted sum of Y_k to get ybar_k (predicted observations)
- Calculate predicted observation covariance P_ybar_k (actually S_ybar_k)
- Depends on observation noise covariance, which might be dynamic
- Calculate state-observation cross correlation P_xk_yk (NOT square root)
- Calculate Kalman gain K from P_ybar_k (S_ybar_k) and P_xk_yk
- A posteriori update using true measurements y_k to get x_k and P_x_k
- Update state estimate x_k = xbar_k + K*innovation, where innovation = y_k - ybar_k (true measurements minus predicted)
- Update state covariance P_x_k (S_x_k)
Note: In another implementation I've seen (Appendix A.C), the state vector is augmented with both process and measurement noise vectors, and the (sqrt) state covariance includes (sqrt) process and measurement noise covariance, and is thus incorporated into the calculation of the sigma points that get propagated through the models. It is then no longer necessary to incorporate the noise covariance when calculating the state and observation covariances.
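The time-update steps above can be sketched as follows. This is a toy, non-square-root UKF predict with additive process noise on a made-up constant-velocity model, not PSMoveService's implementation:

```python
import numpy as np

def ukf_predict(x, P, f, Q, alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF time update (non-square-root form, for clarity).
    x: state mean (L,), P: state covariance (L,L), f: process function,
    Q: additive process noise covariance."""
    L = x.size
    lam = alpha**2 * (L + kappa) - L
    # 2L+1 sigma points that capture the mean and covariance of x.
    S = np.linalg.cholesky((L + lam) * P)
    X = np.vstack([x, x + S.T, x - S.T])            # (2L+1, L)
    # Weights for the mean and covariance sums.
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    # Propagate sigma points through the process function.
    Xk = np.array([f(xi) for xi in X])
    # Weighted sum gives the predicted mean...
    xbar = Wm @ Xk
    # ...and the predicted covariance (plus additive process noise).
    d = Xk - xbar
    Pbar = (Wc[:, None] * d).T @ d + Q
    return xbar, Pbar

# Constant-velocity toy model: state = [position, velocity].
dt = 0.1
f = lambda s: np.array([s[0] + s[1] * dt, s[1]])
xbar, Pbar = ukf_predict(np.array([0.0, 1.0]), np.eye(2), f, 0.01 * np.eye(2))
```

For a linear process function like this toy model, the predicted mean and covariance match the ordinary Kalman filter prediction; the sigma-point machinery pays off only when f is nonlinear, as it is for our quaternion state.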
Luckily, the SR-UKF has several open-source implementations, and most of the steps above are coded generically in them. All we have to do is provide the following items:
- Define the state vector
- Define the process model (i.e., state transition function)
- Define the process noise (probably constant)
- Define the observation (sensor) vector
- Define the observation model (i.e., how state relates to sensor values)
- Define the observation noise (might be dynamic)
A minimal pose state vector includes:
- Position x/y/z
- Orientation
- Represented as a unit quaternion w/x/y/z
- 4 variables but only 3 degrees of freedom
But, for gaming purposes, and also to make some of the filtering work better, we add the following variables to our state vector:
- Linear velocity x/y/z
- Linear acceleration x/y/z
- Angular velocity x/y/z
For a total of 16 state variables.
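One possible flat layout for this 16-element state vector is sketched below; the field names are illustrative, not PSMoveService's actual identifiers:

```python
import numpy as np

# A flat layout for the 16-element state vector described above.
STATE_FIELDS = [
    "pos_x", "pos_y", "pos_z",                   # position
    "quat_w", "quat_x", "quat_y", "quat_z",      # orientation (unit quaternion)
    "vel_x", "vel_y", "vel_z",                   # linear velocity
    "acc_x", "acc_y", "acc_z",                   # linear acceleration
    "angvel_x", "angvel_y", "angvel_z",          # angular velocity
]
IDX = {name: i for i, name in enumerate(STATE_FIELDS)}

def identity_state():
    """Controller at rest at the origin with identity orientation."""
    x = np.zeros(len(STATE_FIELDS))
    x[IDX["quat_w"]] = 1.0   # unit quaternion (w, x, y, z) = (1, 0, 0, 0)
    return x
```

Keeping the quaternion components contiguous makes it easy to normalize them after each filter update, which is necessary because the 4 components carry only 3 degrees of freedom.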
The process function describes how we can predict a future state assuming no input. We need process models for every one of our state variables. For now I'll include the noise terms, even though they may be ignored depending on the specific UKF implementation.
There are several ways to describe the physical evolution of our controller state. Note that orientation, represented in a quaternion, is updated via quaternion multiplication, but angular velocity is a 3-vector updated via addition.
```
new_position     = old_position + old_velocity*dt + 0.5*old_acceleration*dt*dt + position_noise
new_velocity     = old_velocity + old_acceleration*dt + velocity_noise
new_acceleration = old_acceleration + acceleration_noise  # assume constant acceleration frame-to-frame
new_orientation  = old_orientation * orientation_noise * delta_orientation_from_old_angular_velocity
new_angular_velocity = old_angular_velocity + angular_velocity_noise  # assume constant angular velocity frame-to-frame
```
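The noise-free part of these process equations can be sketched in Python as follows (an illustrative version, not PSMoveService's actual implementation; delta_q is computed from angular velocity via axis-angle):

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def process_model(pos, vel, acc, quat, ang_vel, dt):
    """Noise-free version of the process equations above (a sketch)."""
    new_pos = pos + vel * dt + 0.5 * acc * dt * dt
    new_vel = vel + acc * dt
    new_acc = acc                       # constant-acceleration assumption
    speed = np.linalg.norm(ang_vel)
    if speed > 1e-12:                   # delta_q via axis-angle
        angle = speed * dt
        axis = ang_vel / speed
        delta_q = np.concatenate([[np.cos(angle / 2)], axis * np.sin(angle / 2)])
    else:
        delta_q = np.array([1.0, 0.0, 0.0, 0.0])
    new_quat = quat_mult(quat, delta_q)
    new_ang_vel = ang_vel               # constant-angular-velocity assumption
    return new_pos, new_vel, new_acc, new_quat, new_ang_vel
```

Note how position, velocity, and acceleration update by vector addition while orientation updates by quaternion multiplication, which is exactly why quaternion states need special handling in the UKF.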
There are a few ways to get delta_orientation_from_old_angular_velocity, or delta_q:
#1) Axis-angle:
```
rot_angle = mag(angular_velocity) * dt
rot_axis  = angular_velocity / mag(angular_velocity)
delta_q   = [cos(rot_angle/2), rot_axis*sin(rot_angle/2)]
```
#2) From van der Merwe, Wan, Julier 2004 (below), where g_p, g_q, g_r are bias-, earth-, and noise-corrected gyroscope readings:
```
OMEGA_gyro = [   0   g_p   g_q   g_r
              -g_p     0  -g_r   g_q
              -g_q   g_r     0  -g_p
              -g_r  -g_q   g_p     0 ]
delta_q = -1/2 * OMEGA_gyro * orientation
```
Also see equations 42-49. This might be faster / more robust than what we are currently using.
#3) See Crassidis and Markley, eq. 19:
```
qX = [  0  -qz   qy
       qz    0  -qx
      -qy   qx    0 ]
Xi = [ qw*eye(3) + qX
      -qx  -qy  -qz ]
delta_q = 0.5*Xi*angular_velocity
```
#4) I can't remember where I got this. It seems incomplete.
```
omega_q = [old_ang_vel*0.5 0.0]
delta_q = dt*omega_q.conjugate()
```
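As a sanity check, the axis-angle form (#1) and the Crassidis-Markley form (#3) agree to first order in dt. This sketch uses scalar-first (w, x, y, z) quaternions with Xi rearranged accordingly, whereas the paper uses scalar-last:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def delta_q_axis_angle(ang_vel, dt):
    """Method #1: exact delta quaternion via axis-angle."""
    speed = np.linalg.norm(ang_vel)
    if speed < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    angle = speed * dt
    axis = ang_vel / speed
    return np.concatenate([[np.cos(angle / 2)], axis * np.sin(angle / 2)])

def q_dot_crassidis(q, ang_vel):
    """Method #3: quaternion derivative 0.5 * Xi(q) * omega, with Xi
    rearranged here for scalar-first (w, x, y, z) storage."""
    w, x, y, z = q
    Xi = np.array([
        [-x, -y, -z],
        [ w, -z,  y],
        [ z,  w, -x],
        [-y,  x,  w],
    ])
    return 0.5 * Xi @ ang_vel

q = np.array([0.9, 0.1, 0.3, 0.3])           # a unit quaternion
omega = np.array([0.1, -0.2, 0.3])           # rad/s
dt = 1e-3
exact = quat_mult(q, delta_q_axis_angle(omega, dt))   # method #1
euler = q + dt * q_dot_crassidis(q, omega)            # method #3, one Euler step
```

The difference between the two is O(dt^2), so at IMU sample rates either form works; the axis-angle form stays exactly unit-length, while the derivative form needs periodic renormalization.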
There are many things we could change. Among them:
- Add angular acceleration to the state vector
- Assume angular acceleration is constant but angular velocity is changing
- If a game requires angular acceleration, PSMoveService would no longer have to calculate it as diff(ang_vel) from the last 2 angular velocity measurements.
- Compensate for offsets between individual sensors and the controller 0,0,0 point (centre of bulb)
- May manifest as lever-arm effect in acceleration.
- Treat the accelerometer and gyroscope readings as inputs to the model (so far we have described a 0-input system).
- Account for delays in optical measurements relative to IMU measurements. See Appendix B
- Add dynamics models or kinematic constraints if we assume the controller will always be clutched in someone's hand
- Not guaranteed and it's a lot of work to implement.
While there are probably some relationships between variables, due to the physical nature of the controller or to human kinematics, let's assume that each direction of motion is independent of the others. Within a direction, however, position, velocity, and acceleration are related, and these covariances can be captured in the process noise matrix Q.
So our matrix Q will have 1st- through 3rd-order entries (1st: position, 2nd: velocity, 3rd: acceleration) in linear space and 1st- and 2nd-order entries in angular space (1st: orientation, 2nd: angular velocity), provided we represent orientation error as a 3D axis-angle rather than a 4-element quaternion. sigma^2 can be hand-tuned and can be unique per direction.
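For a concrete example, the widely used discrete constant-acceleration (piecewise white noise) model gives the following block for a single linear axis. This is a standard form assumed here for illustration, not taken from PSMoveService:

```latex
Q_{\text{axis}} = \sigma^2
\begin{bmatrix}
\tfrac{1}{4}\,\Delta t^{4} & \tfrac{1}{2}\,\Delta t^{3} & \tfrac{1}{2}\,\Delta t^{2} \\
\tfrac{1}{2}\,\Delta t^{3} & \Delta t^{2} & \Delta t \\
\tfrac{1}{2}\,\Delta t^{2} & \Delta t & 1
\end{bmatrix}
```

This is sigma^2 * G * G^T with G = [dt^2/2, dt, 1]^T, i.e. white noise entering through the acceleration term and propagating into velocity and position.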
If we expected the process noise to change over time, we could set it manually or have it adapt on every iteration. For example, the ReBeL toolbox does this using either an 'anneal' method or a 'robbins-monro' method.
Our observation (sensor) vector contains:
- Accelerometer x/y/z
- Gyroscope x/y/z
- Magnetometer x/y/z
- Optical tracker x/y/z
The observation model, again including noise terms:
```
x_world_to_controller = orientation_quat.toConjugate().toMatrix()
accelerometer  = x_world_to_controller * (linear_acceleration + gravity) + accelerometer_bias + accelerometer_noise
gyroscope      = angular_velocity + gyroscope_bias + gyroscope_noise
magnetometer   = x_world_to_controller * local_magnetic_field + magnetometer_bias + magnetometer_noise
optical_tracker = position + position_noise
```
For x_world_to_controller, see van der Merwe, Wan, Julier 2004 equation 36, or quaternion_conjugate followed by quaternion_matrix here. These are equivalent to within epsilon.
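A noise- and bias-free Python sketch of this observation model follows; the gravity direction (+Y up) and the magnetic-field vector are assumed values for illustration:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix for a unit quaternion stored as (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

GRAVITY = np.array([0.0, 9.81, 0.0])          # world frame, +Y up (assumed)
LOCAL_MAG_FIELD = np.array([0.0, 0.4, 0.9])   # calibrated direction (made up)

def observation_model(position, orientation_q, lin_accel, ang_vel):
    """Noise- and bias-free version of the sensor equations above (a sketch)."""
    w, x, y, z = orientation_q
    # The conjugate quaternion rotates world-frame vectors into the controller frame.
    world_to_controller = quat_to_matrix(np.array([w, -x, -y, -z]))
    accelerometer = world_to_controller @ (lin_accel + GRAVITY)
    gyroscope = np.asarray(ang_vel)
    magnetometer = world_to_controller @ LOCAL_MAG_FIELD
    optical_tracker = np.asarray(position)
    return accelerometer, gyroscope, magnetometer, optical_tracker
```

In the filter, this function plays the role of the observation function: it maps a candidate state (and each sigma point) to the sensor values we would expect to read in that state.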
For now, let's assume that the sensor biases are constant, that sensor scaling is properly captured during controller calibration, and that the sensor values returned by PSMoveService are properly corrected. Thus we can drop the _bias terms from the above.
The observation noise terms:
- Position x/y/z
  - Position noise is anisotropic when using few cameras (the to-from-camera axis is noisier than left-right or up-down). Do we know the camera z-direction in the world frame? Maybe there should be some off-diagonal entries here when the camera is not perfectly aligned with the world frame.
- Accelerometer x/y/z: Taken from calibration?
- Gyroscope x/y/z: Taken from calibration?
- Magnetometer x/y/z: Taken from calibration?
# References
- van der Merwe and Wan, 2003
  - The SRUKF paper that everyone cites, and the one used by most online algorithms.
- van der Merwe and Julier, 2004
  - First example of the augmented state vector.
- van der Merwe, Wan, Julier 2004
  - Multiple UKF formulations, and a specific application of sensor fusion.
  - Algorithm differs from their previous, more highly cited paper.
  - This new algorithm is the one used in srukf.m in their ReBeL toolbox (no longer at the original site, but can be found here).
  - Process noise Q and measurement noise R are incorporated into an augmented state variable and augmented sigma points, and are not used directly to calculate S (S*S^T = P).
  - Sigma points are not redrawn after state prediction.
  - Process model is a little unusual in that they use the sensor readings and biases directly in the process model.
    - delta_velocity = acceleration, where 'acceleration' is the bias-, noise-, and gravity-corrected value taken directly from the accelerometer.
    - Also corrects for the lever-arm effect if the accelerometer is not at the center of gravity.
  - Also includes update functions for sensor bias.
- Kraft 2003 (backup link)
  - Details on how to deal with quaternions in the state vector.
  - Process noise only has 3 dimensions for orientation.
  - Calculation of sigma points is customized for quaternion multiplication using an augmented vector.
    - Process noise (Q) is incorporated into S before propagation through the process model.
    - Axis-angle converted to quaternion.
    - Process noise Q is not used in the propagation of sigma points because it was already included in their calculation.
  - Uses gradient descent to calculate the state estimate (mean) of the quaternion part from the propagated sigma points.
  - The update to the covariance estimate (P) also requires calculating rotational differences.
- Enayati 2013
  - Section 3.4.
  - Their state model matches ours perfectly.
  - Their treatment of quaternions follows exactly that used by Kraft.
- Crassidis and Markley, 2003
  - Represents orientation with modified Rodrigues parameters.
  - An entirely different way of dealing with the orientation math. Seems much more complicated.
- Baldassano 2008
  - SRUKF using an IMU. Same algorithm as van der Merwe and Wan, 2003.
  - Includes Matlab code.
- Excellent filterpy notebook and its accompanying repository
  - Great UKF explanation, though it doesn't cover the SRUKF.
- Diaz 2010
  - The appendix has Matlab code implementing the van der Merwe and Julier, 2004 algorithm, including some adjustments for quaternions. Starts at page 103. Has direct references to the above paper, but some of the code reference points are a bit off.
- Yuan et al., Sensors, 2015
  - Some details on how to deal with quaternions in the UKF.
- Matlab code by Yi Cao, 2008
  - Simple Matlab code for the UKF; non-augmented, non-square-root.