
UC Santa Cruz

UC Santa Cruz Electronic Theses and Dissertations

Title
IMU Data Processing to Recognize Activities of Daily Living with Smart Headset

Permalink
https://escholarship.org/uc/item/5rs5t0sf

Author
Aranburu, Alex

Publication Date
2018

Peer reviewed|Thesis/dissertation



UNIVERSITY OF CALIFORNIA
SANTA CRUZ

IMU DATA PROCESSING TO RECOGNIZE ACTIVITIES OF
DAILY LIVING WITH A SMART HEADSET

A thesis submitted in partial satisfaction
of the requirements for the degree of

MASTER OF SCIENCE

in

Electrical Engineering

by

Alex Aranburu

June 2018

The thesis of Alex Aranburu is approved:

Professor Yu Zhang, Chair

Professor Dejan Milutinovic, Advisor

Mr. Shridhar Mukund

Tyrus Miller
Vice Provost and Dean of Graduate Studies
Contents

1 Introduction
1.1 State of the Art on ADL Tracking
1.2 Objectives

2 Data Acquisition
2.1 Sensing
2.1.1 Accelerometer
2.1.2 Gyroscope
2.1.3 Magnetometer
2.2 Sensor Characterization
2.3 Calibration
2.3.1 Accelerometer and Gyroscope Calibration
2.3.2 Magnetometer Calibration
2.4 Hardware

3 Data Processing
3.1 Orientation Estimation
3.1.1 Euler Angles Estimation
3.1.2 Quaternion-based Estimation
3.1.3 Comparison
3.2 Position Estimation
3.2.1 Linear Acceleration Estimation
3.2.2 Velocity and Position Estimation

4 Pattern Recognition
4.1 Event Detection Algorithm
4.2 Classification Algorithm

5 Tests and Results

6 Conclusions and Future Lines

References
List of Figures

2.1 Example of a 9 DoF IMU.
2.2 Measured acceleration at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes.
2.3 Measured angular velocity at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes.
2.4 Vectorial representation of the geomagnetic field, as a function of the inclination angle δ and magnitude B.
2.5 Measured magnetic flux at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes, in Santa Cruz, California.
2.6 Gaussian approximation of measured probability density function (histogram) of the accelerometer at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.
2.7 Gaussian approximation of measured probability density function (histogram) of the gyroscope at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.
2.8 Gaussian approximation of measured probability density function (histogram) of the magnetometer at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.
2.9 Locus of uncalibrated raw magnetometer measurements under a strong hard-iron and soft-iron environment with optimum ellipsoidal fit superimposed.
2.10 Transformation from uncalibrated locus to calibrated locus with optimum spherical fit superimposed.
2.11 MPU-9250 breakout, showing the orientation of all axes for each sensor.

3.1 Euler angles roll (φ), pitch (θ) and yaw (ψ).
3.2 Roll angle error surface |∆φ| for µ = 0.01.
3.3 Roll angle error surface |∆φ| for µ = 0.1.
3.4 Phase unwrapping example.
3.5 Roll and pitch estimation using accelerometer data and Euler angles.
3.6 Roll and pitch estimation using gyroscope data.
3.7 Improved roll and pitch estimation after fusing accelerometer and gyroscope estimations by a complementary filter.
3.8 Yaw estimation using magnetometer data and Euler angles.
3.9 Yaw estimation using gyroscope data.
3.10 Improved yaw estimation after fusing magnetometer and gyroscope estimations by a complementary filter.
3.11 Visual representation of complex symbols properties.
3.12 Simplified 2D comparison between Linear intERPolation (LERP) and Spherical Linear IntERPolation (SLERP). Note that in this case qa = ∆L_G qacc or ∆L_G qmag, qb = qI and qint = ∆̄L_G qacc or ∆̄L_G qmag, used below.
3.13 Gain factor function f(eg) for the accelerometer and magnetometer error quaternion complementary filter.
3.14 Roll, pitch and yaw angles estimation (colored lines) with quaternion-based complementary filter compared to gyroscope angle estimation (dashed line).
3.15 Block diagram of orientation estimation process through quaternions and complementary filter.
3.16 Roll, pitch and yaw angles estimation (colored lines) with quaternion-based complementary filter compared to gyroscope angle estimation (dashed line).
3.17 Comparison of the three analyzed orientation estimation approaches: Euler angles (green), quaternion-based complementary filter (red) and quaternion-based Multiplicative Extended Kalman Filter (blue).
3.18 Comparison of the quaternion-based complementary filter (red) and quaternion-based Multiplicative Extended Kalman Filter (blue).
3.19 Linear acceleration estimation at rest (stationary conditions).
3.20 Comparison of filtered linear acceleration estimation (colored lines) and raw accelerometer estimation (dashed lines) when standing up (around t = 12 s).
3.21 Velocity estimation when standing up (around t = 12 s).
3.22 Position estimation when standing up (around t = 12 s).

4.1 Detection signal (in blue) and k = 40 times the short-term standard deviation kσST (dashed line) when standing up (around t = 14 s). The event has been correctly detected at the red dot.
4.2 Detection signal (in blue), k = 40 times the short-term standard deviation kσST (dashed gray line), minimum threshold thmin = 16 (dashed red line) and maximum threshold thmax = 100 (dashed green line) with standing up (around t = 3.5 s), walking (around t = 7.5 s), sitting down (around t = 13.5 s) and head movement (around t = 18 s) movement events. Events have been correctly detected and head movement has been successfully neglected.
4.3 Template of the standing up pattern represented with the scaled vertical linear velocity (along Z axis).
4.4 Template of the sitting down pattern represented with the scaled vertical linear velocity (along Z axis).
4.5 Template of the noisy movement pattern represented with the scaled vertical linear velocity (along Z axis).
4.6 Test signal of a sitting down event represented with the scaled vertical linear velocity (along Z axis).
4.7 Pattern recognition of a standing up event through template matching between aligned test signal and template patterns using Euclidean distance (d in the plots).
4.8 Pattern recognition of a sitting down event through template matching between aligned test signal and template patterns using Euclidean distance (d in the plots).

5.1 Illustration with probability distributions of an event happening and not happening.
5.2 Illustration of hit (true positive), missed detection (false negative), correct rejection (true negative) and false alarm (false positive) in signal detection theory.
List of Tables

2.1 Full-scale range and sampling rate options of the MPU-9250.

5.1 Test result distributions showing the event detection algorithm performance.

5.2 Test result distribution showing the classification algorithm performance.
IMU Data Processing to Recognize Activities of
Daily Living with a Smart Headset

Master’s Thesis

Alex Aranburu

Abstract

This thesis proposes a novel approach for the detection and recognition of human movements. Specifically, Activities of Daily Living (ADL) such as standing up, sitting down and walking are tracked. In order to monitor human motion, a headset prototype with a single Inertial Measurement Unit (IMU) comprising an accelerometer, a gyroscope and a magnetometer has been developed. Accurate estimation of orientation and velocity through computationally cheap IMU data processing allows a representative characterization of human behavior patterns. Tests performed with different subjects show that events are robustly detected in 89% of the cases and that activities are recognized with a success rate of 92%. Moreover, adequate rejection of noisy movements has been achieved 94% of the time. Thus, an efficient and robust solution for human physical activity tracking is presented.
1. Introduction

In recent years, the use of wearable devices has spread widely across various industrial and academic domains. In addition, the increasing quality, accuracy and affordability of inertial sensors has led to quantitative analysis of human orientation and displacement behaviors. This thesis proposes and explains a new, efficient solution to detect and recognize Activities of Daily Living (ADL) through a smart headset containing a single Inertial Measurement Unit (IMU) and performing minimal processing.

First, the state of the art is covered in Section 1, followed by the objectives pursued throughout the thesis. The proposed method is then exhaustively explained, presenting the data acquisition, data processing and pattern recognition procedures in Section 2, Section 3 and Section 4, respectively. After that, the performed tests and obtained results are reported in Section 5. To conclude, Section 6 summarizes the fundamental concepts and closes with the conclusions and future lines of the work.
1.1 State of the Art on ADL Tracking

Activities of Daily Living is a term employed in healthcare that embraces any of a person's daily self-care activities. In this regard, the scientific community has recently shown a special interest in monitoring human motion and physical activity. Valuable information about ADL such as walking, sitting down and standing up may help clinicians to monitor rehabilitation and pharmaceutical treatment processes [1], [2]. Moreover, the wish to constantly track daily activity has significantly increased among people who take particular care in pursuing a healthy lifestyle. It is well known that the information provided by inertial sensor data and processing suits human motion applications in virtual reality environments, smartphones, GPS-aided vehicles and more [3], [4]. As a result, several automatic methods and systems capable of detecting and classifying such activities have been developed.

Following previous research studies, there are several approaches that attempt to recognize human motion and ADL through IMUs. However, most of them use more than a single sensor, or the sensors are placed strategically in a specific part of the body that changes from application to application [5]–[7].

Moreover, a wide range of pattern recognition techniques has been employed to perform activity or movement detection and classification, from Hidden Markov Models (HMM) to Gaussian Mixture Models (GMM), k-Nearest Neighbour (k-NN), Support Vector Machines (SVM), etc. [8], [9]. Recently, machine learning techniques have become very popular for monitoring human physical activity [10]. Many authors required a large dataset to classify ADL successfully, especially when using Artificial Neural Networks (ANNs) [11].

At the time of writing of this thesis, there is no known solution that places inertial sensors on the head and still performs ADL recognition with reasonable results. Besides, many of the existing approaches require long and expensive training procedures or high computational resources and processing.

1.2 Objectives

Prior to the completion of this thesis, several objectives were listed so that satisfactory results could be obtained. The main goal has been to find a solution for the detection and recognition of simple ADL, such as standing up, sitting down and walking, by means of a headset equipped with an IMU. To be more specific, a list of secondary objectives has been written down.

Firstly, one of the aims has been to rely on only a single IMU. Extra sensors and hardware considerably add to the energy consumption, computational effort and equipment cost. In this way, there is no need for additional communication and synchronization strategies in the algorithm.

In addition, the inertial sensors must be placed on the head with non-invasive methods. A headset perfectly satisfies this condition, as headsets are used by millions of people all around the globe for different purposes. Other solutions in the past placed their IMUs on the waist through a belt, on the back, on the feet or in other places on the body where a specific device needs to be worn, against the convenience of the user.

Furthermore, it has been intended to provide a fully working headset prototype so that an added-value product can be built by headset manufacturers. Headsets nowadays come with their own processing platforms for noise suppression, echo cancellation, etc. For that reason, the lowest possible computational cost has been sought from the beginning. This might imply using the simplest algorithms and traditional techniques instead of the latest complex solutions. Thus, efficiency has been one of the main priorities in many aspects throughout the completion of this thesis.

To conclude, the new hardware should not notably increase manufacturing (and market) prices. While some other work in the field relies on high-accuracy sensors with a prohibitive cost, the proposed hardware does not exceed $5 in manufacturing price. That is why only cheap hardware and low-cost sensors have been integrated into the given solution.
2. Data Acquisition

The first step of the solution consists of acquiring human motion and environmental conditions through low-cost sensors. Inertial Measurement Units (IMUs) have been selected for that purpose, as they rightfully meet such requirements. Considering that an IMU provides measurements in the local reference frame, with variable scale and frequency, several aspects have to be taken into account in the data acquisition process. It is essential to build meaningful measurement models of the sensors, which allow predicting possible effects of external interferences during sensing. Moreover, special care has to be taken to characterize and calibrate the sensors carefully to ensure successful data acquisition.

2.1 Sensing

Typically, IMUs are composed of MEMS (Micro-Electro-Mechanical System) based sensors like an accelerometer and a gyroscope, in addition to a magnetometer or compass (see an example in Figure 2.1). They are popularly used to track aircraft, spacecraft, satellites, cars, etc. [12]. In this case, a 9-axis or 9 DoF (Degrees of Freedom) IMU, able to obtain measurements in 3 dimensions, has been placed in a headset with the aim of sensing head movements. The next sections describe the measurements of the sensors, the units employed and the sources of measurement errors.

Figure 2.1: Example of a 9 DoF IMU

2.1.1 Accelerometer

The accelerometer measures the acceleration along three orthogonal dimensions in m/s² or g units, where 1 g = 9.81 m/s². The sensed acceleration ã can be approximated as the sum of the gravitational acceleration a(g), the external or linear acceleration a(l) and additive Gaussian random noise ϕ with zero mean and variance σ²acc, i.e., ϕ ∼ N(0, σ²acc):

ã = a(g) + a(l) + ϕ   (2.1)

If the sensor is properly calibrated, at rest and lying on a flat surface, the measured acceleration in stationary conditions ãss points up (along the Z axis) and corresponds to a(g). In that case there is no linear or external acceleration, so a(l) = 0 g and a(g)z = 1 g; if the noise is ignored (see Figure 2.2),

ãss = a(g) + ϕ = [0, 0, 1]ᵀ + [ϕx, ϕy, ϕz]ᵀ ≈ [0, 0, 1]ᵀ   (2.2)

This fact is commonly employed as the key assumption of many estimation algorithms, which are covered in Section 3.

Figure 2.2: Measured acceleration at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes.

In general, estimations computed from accelerometer measurements are not reliable in the short term due to motion and measurement noise. Nevertheless, they are accurate in the long term, as there is no drift; in other words, the noise does not accumulate over successive measurements.
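The stationary model of Equations 2.1 and 2.2 can be sketched numerically. The noise level and function name below are illustrative assumptions, not taken from the thesis implementation:

```python
import numpy as np

# Sketch of the accelerometer model of Eq. (2.1), a~ = a(g) + a(l) + phi,
# with phi ~ N(0, sigma^2). Noise level and names are illustrative.

rng = np.random.default_rng(0)

def simulate_accel(a_lin, n=1000, sigma=0.03):
    """Return n simulated accelerometer samples (in g units)."""
    a_grav = np.array([0.0, 0.0, 1.0])           # gravity along +Z, in g
    phi = rng.normal(0.0, sigma, size=(n, 3))    # additive Gaussian noise
    return a_grav + np.asarray(a_lin) + phi

# Stationary case (Eq. 2.2): no linear acceleration, so averaging the
# noisy samples recovers the gravity vector [0, 0, 1].
a_ss = simulate_accel([0.0, 0.0, 0.0])
print(np.round(a_ss.mean(axis=0), 2))
```

Averaging many stationary samples illustrates exactly the long-term reliability noted above: the zero-mean noise cancels out instead of accumulating.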

2.1.2 Gyroscope

The gyroscope is capable of measuring angular velocities in all three dimensions in °/s or rad/s. In contrast with the accelerometer, the gyroscope contains a significant non-stationary sensor bias β, which is temperature dependent and can be approximated as a constant. Together with the zero-mean Gaussian noise η ∼ N(0, σ²gyro), the bias also adds to the actual angular velocity w:

w̃ = w + β + η   (2.3)

As a result, the total measurement noise (β + η) has a non-zero-mean Gaussian distribution N(β, σ²gyro). Figure 2.3 shows gyroscope data recorded in stationary conditions, w̃ss, which corresponds to zero angular velocity wss = 0:

w̃ss = β + η ≈ [βx, βy, βz]ᵀ   (2.4)

Three-dimensional orientation angles Θ̂[t] can be directly estimated by integrating angular velocities if the sample time ∆t is known.
Figure 2.3: Measured angular velocity at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes.

 
Θ̂gyro[t + ∆t] = [φgyro[t + ∆t], θgyro[t + ∆t], ψgyro[t + ∆t]]ᵀ = Θ̂gyro[t] + w̃∆t   (2.5)

Unfortunately, this method also integrates the noise, thus adding drift over time. This fact makes it impossible to accurately estimate the orientation angles φ (roll, rotation around the X axis), θ (pitch, rotation around the Y axis) and ψ (yaw, rotation around the Z axis) using a gyroscope alone.

Consequently, unlike the accelerometer, the gyroscope measurements are considered to be very accurate at high frequencies or in the short term, but they are not reliable at all in the long term due to the drift. In fact, as the accelerometer does provide reliability in the long term, there are many sensor fusion algorithms that exploit this feature and complement both sensors.
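The drift produced by the naive integration of Equation 2.5 can be illustrated with a short simulation; the sampling rate and noise figures below are assumed values chosen to resemble the characterization results:

```python
import numpy as np

# Sketch of gyroscope drift: integrating rates as in Eq. (2.5) also
# integrates the bias beta, so the angle estimate drifts even at rest.
# Sampling rate and noise values are illustrative assumptions.

rng = np.random.default_rng(1)
dt = 0.01                                  # assumed 100 Hz sample time (s)
beta = -0.0285                             # one-axis bias (deg/s), cf. Eq. (2.8)
n = 6000                                   # 60 s of data

# Sensor at rest: true rate is zero, measurement adds bias and noise (Eq. 2.3).
w_meas = beta + rng.normal(0.0, 0.04, n)

theta = np.cumsum(w_meas) * dt             # naive integration, Eq. (2.5)
drift = theta[-1]                          # roughly beta * 60 s, about -1.7 deg
print(drift)
```

Even this small bias accumulates to a degree-scale error within a minute, which is why gyroscope-only orientation estimation fails in the long term.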

2.1.3 Magnetometer

The magnetometer or compass measures the magnetic flux of the geomagnetic field in µT or Gauss, where 1 Gauss = 100 µT. While the IMU body-related measurements are obtained by the accelerometer and gyroscope, the magnetometer senses the magnetic field of the earth. It is essential to remark this aspect, as this is what gives a sense of a global coordinate frame to IMU measurements. Without a compass, all measurements and estimated parameters could only be described within the body or local reference system.

Having said that, the geomagnetic field has some particularities that are worth mentioning. The horizontal component of the field always points to the north. In the northern hemisphere, the vertical component points downward, whereas it points upward in the southern hemisphere. In either case, the vector points with an angle of inclination δ, which depends on the location (-90° at the south magnetic pole, 0° through the equator and +90° at the north magnetic pole) [13]. Besides, the magnitude of the geomagnetic field B also varies over the earth's surface, from a minimum of 22 µT over South America to a maximum of 67 µT south of Australia. Figure 2.4 illustrates the vectorial representation of the geomagnetic field.

Figure 2.4: Vectorial representation of the geomagnetic field, as a function of the inclination angle δ and magnitude B.

Another significant difference from the other sensors is related to the calibration. The magnetometer suffers from the so-called soft- and hard-iron offsets, produced by external factors such as nearby materials, radio waves, etc., and by the permanent magnetic field existing in the sensor's Printed Circuit Board (PCB), respectively. The soft-iron offset is represented by a 3x3 matrix W that multiplies the actual 3-dimensional magnetic flux vector b. The hard-iron offset vector V and random noise ζ, modeled with a zero-mean Gaussian distribution N(0, σ²mag), are added to the previous component and describe the measured magnetic flux b̃:

b̃ = Wb + V + ζ   (2.6)

Figure 2.5 shows magnetometer data recorded in stationary conditions in Santa Cruz, California.

Similar to the accelerometer, the compass is too noisy to be reliable in the short term but is accurate in the long term or at low frequencies. Therefore, it also complements the gyroscope adequately in some sensor fusion algorithms.

Figure 2.5: Measured magnetic flux at rest (stationary conditions) in X (blue), Y (red) and Z (green) axes, in Santa Cruz, California.
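Under the measurement model of Equation 2.6, inverting the distortion (as later formalized in Equation 2.14) recovers the true flux once W and V are known. The matrix and vector values below are made up for illustration:

```python
import numpy as np

# Sketch of the magnetometer model of Eq. (2.6), b~ = W b + V + zeta, and
# its inversion. W, V and b are made-up example values; the noise zeta is
# omitted so that the inversion is exact.

b_true = np.array([22.0, -3.9, -43.9])    # local field (uT), illustrative
W = np.array([[1.10, 0.02, 0.00],
              [0.02, 0.95, 0.01],
              [0.00, 0.01, 1.05]])        # soft-iron distortion matrix
V = np.array([5.0, -12.0, 3.0])           # hard-iron offset (uT)

b_meas = W @ b_true + V                   # distorted measurement, Eq. (2.6)
b_cal = np.linalg.solve(W, b_meas - V)    # calibrated estimate
print(np.allclose(b_cal, b_true))         # True
```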

2.2 Sensor Characterization

Since raw IMU measurements might differ significantly from real values, properly characterizing and calibrating the IMU sensors is essential in many scenarios. Random noise, bias, interferences, offsets and other phenomena may often contribute to signal degradation. Sometimes it is uncertain how these artifacts affect the sensor measurements, which is why probabilistic methods become effective in this field.

It has been proven that IMU sensor measurements follow normal or Gaussian distributions [14], which simplifies many of the assumptions considered when modeling uncertainties. For example, the sensor noise in Equation 2.1 was easily represented with normal distributions.

In order to obtain approximate values of the noise distribution mean and variance, data from all sensors has been recorded in stationary conditions for a long period (over 20 seconds). Figures 2.6, 2.7 and 2.8 show the probability density functions of the measurements of each sensor (approximated by histograms) and confirm that Gaussian distributions are followed.

Figure 2.6: Gaussian approximation of measured probability density function (histogram) of the accelerometer at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.

At this point, the sensors have already been characterized, and the variances σ²acc, σ²gyro, σ²mag and the gyroscope bias β = µgyro can be defined along the X, Y and Z axes. While the gyroscope bias will be removed directly from every measurement, the noise variances will be used in further estimation steps. The specific values found for our sensors are given below.

Figure 2.7: Gaussian approximation of measured probability density function (histogram) of the gyroscope at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.

     
5.97 × 10−7  4.25 × 10−3  0.617
     
     
σ 2acc =
7.08 × 10−7 
 σ 2gyro =
1.35 × 10−3 
 σ 2mag =
0.648
 (2.7)
     
     
1.40 × 10−6 1.22 × 10−3 0.575

14
Figure 2.8: Gaussian approximation of measured probability density function (histogram) of the magnetometer at rest (stationary conditions) along X, Y and Z axes, from top to bottom. The mean µ and standard deviation σ are represented by u and std, respectively.

 
−0.0135
 
 
β = µgyro = −0.0285

 (2.8)
 
 
−0.0126

It is relevant to underline that other, more sophisticated characterization methods exist [15]. For instance, the Allan variance breaks up the IMU error sources into five basic noise terms, including random walk, bias instability, rate random walk and rate ramp [16]. Additionally, Power Spectral Density (PSD) analysis examines the spectrum of the sensor noise to determine the error sources and to see which frequency ranges can actually be modeled as Gaussian noise [17]. Nevertheless, Gaussian process modeling has been sufficient for this use case.
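In practice, the characterization step amounts to estimating a mean and a variance per axis from the stationary log. A minimal sketch, with synthetic samples standing in for the 20-second recordings and an assumed true bias and noise level:

```python
import numpy as np

# Sketch of the characterization step: noise mean and variance of one axis
# estimated from a stationary recording. Synthetic data stands in for the
# real 20 s logs; the generating parameters are assumptions.

rng = np.random.default_rng(2)
gyro_rest = rng.normal(-0.0285, 0.04, size=5000)   # one gyro axis at rest

beta_hat = gyro_rest.mean()       # bias estimate, cf. beta = mu_gyro (Eq. 2.8)
var_hat = gyro_rest.var(ddof=1)   # noise variance, cf. sigma^2_gyro (Eq. 2.7)
print(beta_hat, var_hat)
```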

2.3 Calibration

Once the sensors have been characterized, a sensor calibration must be performed to correct for additional offsets. The accelerometer and gyroscope calibrations can simply rely on the characterization data and do not require complex procedures. Calibrating a magnetometer demands deeper mathematical treatment and least-squares problem solving in order to remove the soft- and hard-iron offsets.

2.3.1 Accelerometer and Gyroscope Calibration

The accelerometer calibration takes advantage of its stationary-condition properties. That is, the accelerometer data magnitude or norm should always be 1 g. In reality, this is not likely to happen because of measurement noise. Hence, a scaling factor acalib can be approximated from the mean values µacc = [µaccx, µaccy, µaccz]ᵀ obtained after the characterization process. As a result, quasi-unit-norm corrected acceleration values â are obtained from

||ass|| = acalib ||ãss|| = acalib √(ãx² + ãy² + ãz²) = 1   (2.9)

which yields

acalib ≈ 1 / ||µacc||₂ = 1 / √(µx² + µy² + µz²)   (2.10)

and finally

â = acalib ã   (2.11)

Although more elaborate calibration methods are available for high-precision accelerometers [18], this approach has led to accurate results in our work. The gyroscope calibration is as simple as removing the bias component β = µgyro estimated during characterization (Equation 2.12). Thus, the gyroscope-corrected measurements can be approximated to be zero in stationary conditions if the noise η is neglected:

ŵ = w̃ − β   (2.12)

ŵss = 0 + η = η ≈ 0   (2.13)
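Equations 2.10 to 2.12 translate directly into a few lines of code. The sample values below are illustrative, loosely based on Figures 2.6 and 2.8, not the actual calibration data:

```python
import numpy as np

# Sketch of the simple calibrations: accelerometer scaling (Eqs. 2.9-2.11)
# and gyroscope bias removal (Eq. 2.12). Sample values are illustrative.

mu_acc = np.array([0.044, -0.017, 1.027])      # stationary means (g)
a_calib = 1.0 / np.linalg.norm(mu_acc)         # scale factor, Eq. (2.10)

a_raw = np.array([0.045, -0.016, 1.026])       # one raw sample (g)
a_hat = a_calib * a_raw                        # corrected sample, Eq. (2.11)

beta = np.array([-0.0135, -0.0285, -0.0126])   # gyro bias, Eq. (2.8)
w_raw = np.array([-0.010, -0.030, -0.015])     # raw gyro sample at rest (deg/s)
w_hat = w_raw - beta                           # Eq. (2.12): roughly zero
print(np.linalg.norm(a_hat))                   # close to 1 g after scaling
```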

2.3.2 Magnetometer Calibration

The magnetometer calibration aims to get rid of the hard-iron and soft-iron offsets by estimating the 3x3 matrix W and the vector V defined in Equation 2.6. These components are then applied inversely to remove the offsets and calibrate the compass data [19]:

b̂ = W⁻¹(b̃ − V)   (2.14)
For the purpose of visually comparing uncalibrated and calibrated data, the magnetometer locus is defined as b̂ᵀb̂. Figure 2.9 illustrates the locus of some compass measurements before calibration. Note that an ellipsoidal fit has been superimposed onto the measurement space.

b̂ᵀb̂ = (W⁻¹(b̃ − V))ᵀ (W⁻¹(b̃ − V)) = B²   (2.15)

Figure 2.9: Locus of uncalibrated raw magnetometer measurements under a strong hard-iron and soft-iron environment with optimum ellipsoidal fit superimposed.

Looking at the general expression of an ellipsoid centered at R₀, with shape defined by the symmetric matrix A and size set by the constant k (see Equation 2.16), the fit to magnetometer data is carried out with R₀ = V, A = (W⁻¹)ᵀW⁻¹ = Aᵀ and the size defined by the geomagnetic field magnitude B. Thanks to the symmetry of matrix A, the soft-iron offset matrix results in W⁻¹ = A¹ᐟ².

(R − R₀)ᵀ A (R − R₀) = k   (2.16)

It turns out that if V and W⁻¹ are computed from the ellipsoidal fit and applied to the compass measurements as defined in Equation 2.15, the modified locus can be superimposed by a sphere centered at the origin (R₀ = 0) with the geomagnetic field magnitude B as the radius. Figure 2.10 shows the locus transformation.

Figure 2.10: Transformation from uncalibrated locus to calibrated locus with optimum spherical fit superimposed.

Whereas the described procedure corrects for both soft-iron and hard-iron offsets, the scenarios observed in this case have empirically shown that hard-iron effects dominate over soft-iron effects in most cases. Consequently, a simplified four-parameter calibration has been designed with Equation 2.15 as a reference, assuming an identity soft-iron offset matrix (W⁻¹ = I):

b̂ᵀb̂ = (b̃ − V)ᵀ(b̃ − V) = B²   (2.17)

b̃ᵀb̃ − 2b̃ᵀV + VᵀV − B² = 0   (2.18)

If the residual error r for the i-th measurement is considered, the main goal of this approach is to minimize r while fitting the four parameters (V = [Vₓ V_y V_z]ᵀ and B) from a series of M magnetometer measurements. Essentially, this becomes a least squares problem. The residual of the i-th measurement is

r[i] = \tilde{b}_x^2[i] + \tilde{b}_y^2[i] + \tilde{b}_z^2[i] - 2\tilde{b}_x[i]V_x - 2\tilde{b}_y[i]V_y - 2\tilde{b}_z[i]V_z + V_x^2 + V_y^2 + V_z^2 - B^2    (2.19)

which can be written as

r[i] = \tilde{b}_x^2[i] + \tilde{b}_y^2[i] + \tilde{b}_z^2[i] -
\begin{bmatrix} \tilde{b}_x[i] & \tilde{b}_y[i] & \tilde{b}_z[i] & 1 \end{bmatrix}
\begin{bmatrix} 2V_x \\ 2V_y \\ 2V_z \\ B^2 - V_x^2 - V_y^2 - V_z^2 \end{bmatrix}    (2.20)

or

\begin{bmatrix} r[0] \\ r[1] \\ \vdots \\ r[M-1] \end{bmatrix} =
\begin{bmatrix} \tilde{b}_x^2[0] + \tilde{b}_y^2[0] + \tilde{b}_z^2[0] \\ \tilde{b}_x^2[1] + \tilde{b}_y^2[1] + \tilde{b}_z^2[1] \\ \vdots \\ \tilde{b}_x^2[M-1] + \tilde{b}_y^2[M-1] + \tilde{b}_z^2[M-1] \end{bmatrix} -
\begin{bmatrix} \tilde{b}_x[0] & \tilde{b}_y[0] & \tilde{b}_z[0] & 1 \\ \tilde{b}_x[1] & \tilde{b}_y[1] & \tilde{b}_z[1] & 1 \\ & & \vdots & \\ \tilde{b}_x[M-1] & \tilde{b}_y[M-1] & \tilde{b}_z[M-1] & 1 \end{bmatrix}
\begin{bmatrix} 2V_x \\ 2V_y \\ 2V_z \\ B^2 - V_x^2 - V_y^2 - V_z^2 \end{bmatrix}    (2.21)

Typically, around M = 10 measurements at different angles suffice to achieve reliable calibration parameters. It is necessary to remark that such measurements need to be taken at different orientations; that is why the sensor is generally moved in a figure-eight pattern while magnetometer calibration data is recorded. Returning to the residual error definition, Equation 2.21 can also be rewritten in compressed matrix notation, with Y, X and ν defined as follows.

r = Y − Xν (2.22)

where

\mathbf{r} = \begin{bmatrix} r[0] \\ r[1] \\ \vdots \\ r[M-1] \end{bmatrix} \qquad
Y = \begin{bmatrix} \tilde{b}_x^2[0] + \tilde{b}_y^2[0] + \tilde{b}_z^2[0] \\ \tilde{b}_x^2[1] + \tilde{b}_y^2[1] + \tilde{b}_z^2[1] \\ \vdots \\ \tilde{b}_x^2[M-1] + \tilde{b}_y^2[M-1] + \tilde{b}_z^2[M-1] \end{bmatrix}    (2.23)

and

X = \begin{bmatrix} \tilde{b}_x[0] & \tilde{b}_y[0] & \tilde{b}_z[0] & 1 \\ \tilde{b}_x[1] & \tilde{b}_y[1] & \tilde{b}_z[1] & 1 \\ & & \vdots & \\ \tilde{b}_x[M-1] & \tilde{b}_y[M-1] & \tilde{b}_z[M-1] & 1 \end{bmatrix} \qquad
\nu = \begin{bmatrix} 2V_x \\ 2V_y \\ 2V_z \\ B^2 - V_x^2 - V_y^2 - V_z^2 \end{bmatrix}    (2.24)

Next, the sum of squares of residuals P = ||r||² = rᵀr is minimized. This is a well known least squares optimization problem with a closed form solution [20]. The sum of squares of residuals is

P = ||\mathbf{r}||^2 = \mathbf{r}^T\mathbf{r} = r^2[0] + r^2[1] + \dots + r^2[M-1] = (Y - X\nu)^T (Y - X\nu)    (2.25)

and its minimization with respect to ν yields

\nu = (X^T X)^{-1} X^T Y    (2.26)

V = \begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{bmatrix} \qquad B = \sqrt{\nu_4 + V_x^2 + V_y^2 + V_z^2}    (2.27)
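The closed-form least-squares fit of Equations 2.21-2.27 can be sketched in a few lines of NumPy (a Python sketch of the Matlab processing used in this work; the function name and the synthetic test data are illustrative):

```python
import numpy as np

def fit_hard_iron(b):
    """Four-parameter hard-iron fit (Eqs. 2.21-2.27), assuming W^-1 = I.

    b: (M, 3) array of raw magnetometer samples b-tilde.
    Returns the hard-iron offset V (3,) and the field magnitude B.
    """
    b = np.asarray(b, dtype=float)
    # Y[i] = bx^2 + by^2 + bz^2, rows of X = [bx, by, bz, 1]
    Y = np.sum(b**2, axis=1)
    X = np.column_stack([b, np.ones(len(b))])
    # Closed-form least squares: nu = (X^T X)^-1 X^T Y
    nu, *_ = np.linalg.lstsq(X, Y, rcond=None)
    V = 0.5 * nu[:3]
    B = np.sqrt(nu[3] + np.dot(V, V))
    return V, B

# Synthetic check: M = 20 points on a sphere of radius B, shifted by V
rng = np.random.default_rng(0)
true_V, true_B = np.array([13.4, 10.5, -70.6]), 60.3
u = rng.normal(size=(20, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
samples = true_B * u + true_V
V_est, B_est = fit_hard_iron(samples)
```

On noiseless synthetic data the fit recovers the offset and field magnitude essentially exactly; with real, noisy compass samples the recovery is of course only approximate.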

 
In the case of a test scenario with M = 20 we obtained V = 13.4347 10.5466 −70.6098

and B = 60.3417 and all compass measurements have been automatically calibrated

to get b̂ according to Equation 2.14 (recall that W−1 = I as soft-iron offset is

22
ignored).

Now that all sensors have been carefully and adequately calibrated, more accu-

rate estimations can be obtained. For example, orientation and position estimation

with calibrated sensors will certainly contain much less drift error than estimation

through uncalibrated sensor.

2.4 Hardware

Low-cost yet accurate IMU boards have been considered when choosing the hardware for the sensing process. The popular MPU-9250 IMU has been selected in our work. Specifically, we have used the breakout designed by Sparkfun [21], whose System in Package (SiP) combines two different chips: the MPU-6500, containing a 3-axis accelerometer plus a 3-axis gyroscope, and the AK8963, containing a 3-axis magnetometer. Figure 2.11 provides a picture of the actual breakout and the orientation of the axes for each of the sensors. As shown, the magnetometer has a different orientation with respect to the accelerometer and gyroscope. Therefore, data must be correctly interpreted in order to align all the measurements.

Different sampling frequencies and full-scale ranges can be configured. Table

2.1 summarizes all the possible full-scale ranges and frequencies for each sensor. It

is worth mentioning that with higher sampling rates and wider full-scale ranges, a

more responsive sensing is achieved but less accurate values are measured. This is

mainly because the MPU-9250 is composed of 16-bit Analog-to-Digital Converters

(ADCs) for digitizing all 9 axes. The board is designed to connect to a microcontroller through I²C communication and requires a power supply between +2.4 V and +3.6 V for correct operation.

Figure 2.11: MPU-9250 breakout, showing the axis orientation for each sensor.

                    Accelerometer   Gyroscope     Magnetometer
Full-scale ranges   ±2g             ±250 °/s      ±4800 µT
                    ±4g             ±500 °/s
                    ±8g             ±1000 °/s
                    ±16g            ±2000 °/s
Sampling rates      4-1000 Hz       4-1000 Hz     8 Hz, 100 Hz

Table 2.1: Full-scale range and sampling rate options of the MPU-9250.

For the sake of achieving a balanced trade-off between accuracy and responsive-

ness, a sampling rate of 100 Hz has been chosen for the accelerometer, gyroscope and

magnetometer. Moreover, the full-scale ranges have been defined as ± 4g, ± 500◦ /s

and ±4800µT, respectively.
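Since the 16-bit ADCs map the selected full-scale range onto signed counts, raw readings must be scaled before use. A minimal Python sketch, assuming plain two's-complement outputs spanning the full 16-bit range (the exact register format is device-specific):

```python
# Convert signed 16-bit IMU readings to physical units for the
# full-scale ranges chosen in this work (+/-4 g, +/-500 deg/s, +/-4800 uT).
ACCEL_FS_G = 4.0        # +/- 4 g
GYRO_FS_DPS = 500.0     # +/- 500 deg/s
MAG_FS_UT = 4800.0      # +/- 4800 uT

def raw_to_units(raw, full_scale):
    """Map a signed 16-bit count (-32768..32767) onto +/- full_scale."""
    return raw * full_scale / 32768.0

accel_g = raw_to_units(8192, ACCEL_FS_G)      # 8192 counts -> 1.0 g at +/-4 g
gyro_dps = raw_to_units(-16384, GYRO_FS_DPS)  # -> -250.0 deg/s at +/-500 deg/s
```

This also makes the accuracy trade-off concrete: doubling the full-scale range doubles the physical value of one ADC count.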

In our solution, connection and communication between the IMU, the microcontroller and the PC have been set up. Specifically, an Arduino Bluno Nano [22] has been selected as the microcontroller, due to its fast deployment and simplicity for prototyping purposes. With respect to the PC, Matlab has been employed as the processing platform. While the PC and the Arduino currently communicate via serial port, a Bluetooth Low Energy (BLE) connection could be established in the future. The microcontroller could even be further programmed to support all the processing on-board.

3. Data Processing

Once the IMU sensors have been properly calibrated, the data is ready to be processed in order to estimate parameters that allow human activity patterns to be detected and classified. This section describes the analysis and processing carried out to estimate the orientation, linear acceleration, linear velocity and position of the headset. Some of these parameters are then used as features in the pattern recognition stage in Section 4.

3.1 Orientation Estimation

Different algorithms have been compared and implemented in order to obtain the angular information of the IMU. Equation 2.5 in Section 2 shows that gyroscope measurements are not sufficient for the estimation of the sensor orientation in three dimensions, due to the accumulation of drift over time. Consequently, sensor fusion algorithms that combine gyroscope, accelerometer and magnetometer data and compensate the drift are required.

In the literature, Euler angle and quaternion based methods have been primarily highlighted [23], [24]. Within quaternion-based estimation, complementary filter and Multiplicative Extended Kalman Filter (MEKF) implementations have been distinguished. Thus, all three methods (Euler angles, quaternion-based complementary filter and quaternion-based MEKF estimation) have been studied, implemented and applied, enabling a thorough comparison of both theoretical and practical results.

3.1.1 Euler Angles Estimation

Leonhard Euler (15 April 1707 - 18 September 1783), the Swiss mathematician, physicist, astronomer, logician and engineer, introduced the Euler angles to describe the orientation of a rigid body with respect to a reference coordinate frame. Basically, he claimed that any three-dimensional orientation can be achieved by the composition of three elemental rotations, one around each axis. Furthermore, an (Euler) angle can be assigned to each rotation. The rotation angle around the X axis is the roll angle, the rotation angle around the Y axis is the pitch angle, and the rotation angle around the Z axis is the yaw angle. Euler angles are usually denoted as φ (roll), θ (pitch) and ψ (yaw) and can be defined in the local, i.e., body reference frame (Figure 3.1) [25].

There are many factors that need to be taken into account and many challenges that must be addressed when estimating Euler angles from IMU data. Due to inherent sensor characteristics, the roll and pitch angles derived from accelerometer data and the yaw angle derived from magnetometer data are employed to compensate the drift generated by the gyroscope in the integration phase.

Figure 3.1: Euler angles roll (φ), pitch (θ) and yaw (ψ).

Magnetometer data is required because the accelerometer is only able to estimate the roll and pitch angles. The reason is that, at rest and with no linear acceleration, the measurements are always the same no matter what the actual yaw angle is. For example, when the Z axis is aligned with gravity, ã_ss ≈ [0 0 1]ᵀ for every actual yaw angle, so the same accelerometer measurement ã_ss is sensed over the whole yaw angle range. The same holds for other sensor orientations at rest. That is why the accelerometer cannot perceive yaw angle variations. As a result, the magnetometer yaw angle has to be computed using the roll and pitch information and then used to correct the drifted gyroscope yaw angle.

3.1.1.1 Roll and Pitch Estimation

The algorithm to estimate roll and pitch from sensed acceleration [26] basically relies on the assumption that there is no linear acceleration at the starting point, and thus any normalized accelerometer measurement can be related to the gravitational acceleration through a rotation matrix. This approximation tends to amplify estimation errors whenever external accelerations like sensor shake are present at the start [27]. However, it is still considered accurate enough. Given that any rotation can be described as a composition of roll, pitch and yaw rotations, the rotation matrix can be decomposed as one of six possible sequences of the Euler angle rotation matrices Rx(φ), Ry(θ) and Rz(ψ). Note that due to matrix multiplication properties, every possible sequence of these rotation matrices gives a different result (see Equation 3.4). In our work, we use the sequence common in the aerospace field, namely Rxyz, which first rotates around Z (yaw), then around Y (pitch) and finally around X (roll). Another reason for choosing Rxyz is that the rotation matrix in Equation 3.2 depends only on the roll and pitch angles, so the yaw angle does not need to be estimated. Following the Direction Cosine Matrix (DCM) derivation [28], it can be easily proven that the combinations are not equal and therefore rotations are not commutative. For instance, Equations 3.2 and 3.3 below show that Rxyz ≠ Ryxz, and the same holds for the rest of the sequences. This is one of the main drawbacks of Euler angles estimation, as each sequence or combination provides a different estimation result.

     
1 0 0  cosθ 0 −sinθ  cosψ sinψ 0
     
Rx (φ) = Ry (θ) = Rz (ψ) = (3.1)
     
0 cosφ sinφ  0 1 0  −sinψ cosψ 0
     
     
     
0 −sinφ cosφ sinθ 0 cosθ 0 0 1

R_{xyz} = R_x(\phi) R_y(\theta) R_z(\psi)
= \begin{bmatrix}
\cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\
\cos\psi\sin\theta\sin\phi - \cos\phi\sin\psi & \cos\phi\cos\psi + \sin\theta\sin\phi\sin\psi & \cos\theta\sin\phi \\
\cos\phi\cos\psi\sin\theta + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \cos\psi\sin\phi & \cos\theta\cos\phi
\end{bmatrix}    (3.2)

R_{yxz} = R_y(\theta) R_x(\phi) R_z(\psi)
= \begin{bmatrix}
\cos\psi\cos\theta - \sin\theta\sin\phi\sin\psi & \sin\psi\cos\theta + \sin\theta\sin\phi\cos\psi & -\sin\theta\cos\phi \\
-\cos\phi\sin\psi & \cos\phi\cos\psi & \sin\phi \\
\cos\theta\sin\phi\sin\psi + \sin\theta\cos\psi & -\cos\psi\cos\theta\sin\phi + \sin\psi\sin\theta & \cos\theta\cos\phi
\end{bmatrix}    (3.3)

R_{xyz} \neq R_{xzy} \neq R_{yxz} \neq R_{yzx} \neq R_{zxy} \neq R_{zyx}    (3.4)

Returning to the initial assumption that requires no linear acceleration at the start (a⁽ˡ⁾ = 0g), Equation 3.5 relates any normalized accelerometer measurement to the gravitational acceleration a_init = a⁽ᵍ⁾ = [0 0 1]ᵀ by means of the selected rotation matrix sequence Rxyz. Having derived R a_init, it is then straightforward to solve for the roll and pitch angle values, φ_acc and θ_acc. It is essential to mention that the roll and pitch angles must be limited to [−π, π] and [−π/2, π/2], respectively, to avoid duplicate results. If this were not done, multiple roll, pitch and yaw rotation combinations would lead to the same absolute rotation. As a result, IMU applications must not exceed these ranges as long as Euler angles are used.

   
0  −sinθ 
   
ã    
= R ainit = Rxyz 0 = cosθsinφ (3.5)
||ã||   
  


   
1 cosθcosφ

   
ãx   −sinθ 
   
ã 1    
=p 2  =
ãy  cosθsinφ
  (3.6)
||ã|| ãx + ã2y + ã2z 
  


   
ãz cosθcosφ

Solving for the roll angle φ and pitch angle θ,

\phi = \tan^{-1}\left( \frac{\tilde{a}_y}{\tilde{a}_z} \right) \in [-\pi, \pi]    (3.7)

\theta = \tan^{-1}\left( \frac{-\tilde{a}_x}{\tilde{a}_y\sin\phi + \tilde{a}_z\cos\phi} \right) = \tan^{-1}\left( \frac{-\tilde{a}_x}{\sqrt{\tilde{a}_y^2 + \tilde{a}_z^2}} \right) \in [-\pi/2, \pi/2]    (3.8)

Nonetheless, there are still certain singularities that can cause an unstable estimation. This is the case in which both the numerator and denominator inside the arctangent are zero. While Equation 3.8 never suffers from this problem (as it is not possible to have ã = 0), Equation 3.7 indeed does when ã_y = 0 and ã_z = 0. Even though there is no perfect solution to this problem, a widely used fix is to use a tiny fraction µ of the accelerometer measurement in the X axis to prevent the denominator from becoming zero.

\phi_{acc} = \tan^{-1}\left( \frac{\tilde{a}_y}{\mathrm{sign}(\tilde{a}_z)\sqrt{\tilde{a}_z^2 + \mu\tilde{a}_x^2}} \right) \in [-\pi, \pi], \quad \mu \approx 0    (3.9)

\theta_{acc} = \tan^{-1}\left( \frac{-\tilde{a}_x}{\sqrt{\tilde{a}_y^2 + \tilde{a}_z^2}} \right) \in [-\pi/2, \pi/2]    (3.10)
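Equations 3.9 and 3.10 can be sketched in a few lines (a Python sketch rather than the Matlab used in this work; `atan2` supplies the quadrant handling implied by the [−π, π] range):

```python
import math

MU = 0.01  # small stabilization constant from Eq. 3.9

def roll_pitch_from_accel(ax, ay, az, mu=MU):
    """Roll and pitch (rad) from one accelerometer sample (Eqs. 3.9-3.10).

    Assumes the sensor measures gravity only (no linear acceleration).
    """
    sign_az = 1.0 if az >= 0 else -1.0
    roll = math.atan2(ay, sign_az * math.sqrt(az * az + mu * ax * ax))
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

# Sensor at rest with Z aligned with gravity: zero roll and pitch
roll, pitch = roll_pitch_from_accel(0.0, 0.0, 1.0)

# Pitched by 30 deg with zero roll: a = [-sin(30), 0, cos(30)] per Eq. 3.6
roll2, pitch2 = roll_pitch_from_accel(-0.5, 0.0, math.sqrt(3) / 2)
```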

This stable approximation also introduces an additional error when the roll and pitch angles approach ±90°. Figures 3.2 and 3.3 illustrate the absolute difference |Δφ| (in degrees) between using the fix and not using it when estimating the roll angle, computed from Equation 3.13. A value of µ = 0.01 has been selected, as it achieves a smaller error than µ = 0.1.

   
\Delta\phi = \tan^{-1}\left( \frac{\tilde{a}_y}{\mathrm{sign}(\tilde{a}_z)\sqrt{\tilde{a}_z^2 + \mu\tilde{a}_x^2}} \right) - \tan^{-1}\left( \frac{\tilde{a}_y}{\tilde{a}_z} \right)    (3.11)

\tan^{-1}(a) - \tan^{-1}(b) = \tan^{-1}\left( \frac{a - b}{1 + ab} \right)    (3.12)

\Delta\phi = \tan^{-1}\left( \frac{\tilde{a}_y\tilde{a}_z - \tilde{a}_y\sqrt{\tilde{a}_z^2 + \mu\tilde{a}_x^2}}{\tilde{a}_y^2 + \tilde{a}_z\sqrt{\tilde{a}_z^2 + \mu\tilde{a}_x^2}} \right)
= \tan^{-1}\left( \frac{\sin\phi\left( \cos\theta\cos\phi - \sqrt{\cos^2\theta\cos^2\phi + \mu\sin^2\theta} \right)}{\cos\theta\sin^2\phi + \cos\phi\sqrt{\cos^2\theta\cos^2\phi + \mu\sin^2\theta}} \right)    (3.13)

Figure 3.2: Roll angle error surface |∆φ| for µ = 0.01.

Figure 3.3: Roll angle error surface |∆φ| for µ = 0.1.

Apart from that, there are other singularities that cannot be avoided efficiently when the IMU is not at rest. They happen when two of the three axes are aligned and driven into a parallel configuration after rotation. As a consequence, one of the degrees of freedom is "locked" and lost, turning the reference frame into a two-dimensional space. This effect is called gimbal lock [29] and it is actually one of the main causes of instability with Euler angles.
The estimated roll and pitch angles have to be unwrapped in order to obtain a more realistic representation of the angles. Phase unwrapping basically consists of removing sudden jumps from the angle signals that exceed the [−π, π] and [−π/2, π/2] ranges in this case [30]. Specifically, multiples of ±2π and ±π are added whenever the absolute jump between consecutive angle estimations is greater than or equal to the jump tolerance of π radians and π/2 radians for the roll and pitch angles, respectively. Figure 3.4 provides a case with a phase signal before and after phase unwrapping.
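The unwrapping step described above is exactly what `numpy.unwrap` implements (a Python sketch of the processing described here; the sample signal is illustrative):

```python
import numpy as np

# Wrapped roll angle (rad) crossing the +/-pi boundary
wrapped = np.array([3.0, 3.1, -3.1, -3.0, -2.9])

# np.unwrap adds multiples of 2*pi whenever consecutive samples
# jump by more than the tolerance (pi by default)
unwrapped = np.unwrap(wrapped)
```

After unwrapping, no consecutive pair of samples differs by more than the π tolerance, so the signal evolves continuously past the boundary.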

[Plot: wrapped signal (°) vs. unwrapped signal (°) over t (sec)]

Figure 3.4: Phase unwrapping example

At this stage, improved roll and pitch angles can be computed from the accelerometer measurements. Figure 3.5 shows an example of the estimation results obtained while the IMU was moving for 10 seconds. As can be seen in the plots, the signal is noisy, but it is still accurate at low frequencies.
[Plot: accelerometer roll (°) and pitch (°) over t (sec)]

Figure 3.5: Roll and pitch estimation using accelerometer data and Euler angles.

Roll and pitch angles can also be estimated with the integration method previously described in Equation 2.5 using gyroscope data. However, we can observe in Figure 3.6 that, due to the integration process, the estimations have a drift that accumulates over time.

Because of that, we take both the accelerometer and gyroscope angle estimates and combine them with a sensor fusion algorithm. Although the Kalman Filter (KF) gives the most accurate estimation results, the literature frequently uses the complementary filter [31]. The complementary filter makes a rougher approximation that is less accurate than the KF but is simple and cheap to implement. Hence, a complementary filter (defined as follows) has been employed in this case to fuse the different sensor estimates of the roll and pitch Euler angles.
[Plot: gyroscope roll (°) and pitch (°) over t (sec)]

Figure 3.6: Roll and pitch estimation using gyroscope data.

     
\begin{bmatrix} \hat{\phi}[t+\Delta t] \\ \hat{\theta}[t+\Delta t] \end{bmatrix} = (1-\alpha) \begin{bmatrix} \phi_{gyro}[t+\Delta t] \\ \theta_{gyro}[t+\Delta t] \end{bmatrix} + \alpha \begin{bmatrix} \phi_{acc}[t+\Delta t] \\ \theta_{acc}[t+\Delta t] \end{bmatrix}, \quad 0 \le \alpha \le 1    (3.14)

Looking at the literature and by empirical testing, a filter coefficient value of α = 0.02 has been used to combine the gyroscope and accelerometer roll and pitch angle estimates as

\begin{bmatrix} \hat{\phi}[t+\Delta t] \\ \hat{\theta}[t+\Delta t] \end{bmatrix} = 0.98 \begin{bmatrix} \phi_{gyro}[t+\Delta t] \\ \theta_{gyro}[t+\Delta t] \end{bmatrix} + 0.02 \begin{bmatrix} \phi_{acc}[t+\Delta t] \\ \theta_{acc}[t+\Delta t] \end{bmatrix}    (3.15)
In essence, a complementary filter like this improves the accuracy by taking the best of each sensor: the high frequency components from the gyroscope and the low frequency components from the accelerometer. The results based on the data from Figures 3.5 and 3.6 are presented in Figure 3.7. As can be seen, the drift in the gyroscope angle estimation is successfully removed by the end of the time interval. This helps to maintain precise estimation results not only in the short but also in the long run.
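One step of the complementary filter in Equation 3.15 can be sketched as follows (a Python sketch of the Matlab processing; the constant-bias gyroscope scenario is illustrative):

```python
def complementary_filter(prev_angle, gyro_rate, meas_angle, dt, alpha=0.02):
    """One complementary filter step (Eq. 3.15) for a single angle.

    prev_angle: previous fused estimate (rad)
    gyro_rate:  angular rate from the gyroscope (rad/s)
    meas_angle: absolute angle from the accelerometer (or magnetometer)
    """
    gyro_angle = prev_angle + gyro_rate * dt  # integrate gyroscope
    return (1.0 - alpha) * gyro_angle + alpha * meas_angle

# True angle constant at 0 rad, but a biased gyro reads 0.5 rad/s:
# the accelerometer term keeps the estimate bounded instead of drifting.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate=0.5, meas_angle=0.0,
                                 dt=0.01, alpha=0.02)
```

With pure integration the estimate would drift to 5 rad after 10 s; with the filter it settles at a small constant offset, illustrating how the low-frequency accelerometer component removes the gyroscope drift.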

[Plot: improved vs. gyroscope roll (°) and pitch (°) over t (sec)]

Figure 3.7: Improved roll and pitch estimation after fusing accelerometer and gyroscope estimations with a complementary filter.

3.1.1.2 Yaw Estimation

Since the yaw angle cannot be estimated from the accelerometer, most IMUs nowadays come equipped with a magnetometer. While accelerometer and gyroscope estimates are combined for the roll and pitch angle calculations, the yaw angle estimation is commonly improved by fusing magnetometer and gyroscope estimates. It should be noted that both the accelerometer and magnetometer estimates are based on the assumption that initially the accelerometer measures zero linear acceleration (see Equation 3.5) and the X axis of the global coordinate frame is pointing northwards while the local Z axis is pointing downwards (b_init = B[cosδ 0 sinδ]ᵀ, where b_init is in the local reference system, δ is the inclination and B is the magnitude of the geomagnetic field, as stated in Figure 2.4). In this way, no external forces are assumed and the global and local coordinate frames are assumed to be aligned at the initial conditions. Again, the Rxyz = Rx(φ)Ry(θ)Rz(ψ) sequence has been selected as the rotation matrix for the approximated magnetometer measurement model in Equation 3.16, similarly to what was assumed in Equation 3.5.

 
\tilde{b} = W R\,b_{init} + V = W R_x(\phi) R_y(\theta) R_z(\psi)\, B \begin{bmatrix} \cos\delta \\ 0 \\ \sin\delta \end{bmatrix} + V    (3.16)

Recall that W and V represent the soft-iron matrix and hard-iron vector offsets, respectively, as shown in Equation 2.6. As the roll and pitch values are now known, the terms in Equation 3.16 can be rearranged such that only the rotation matrix around Z, Rz(ψ), gets multiplied with b_init, with the rest multiplied by Rx(−φ) and Ry(−θ). Note that the soft-iron offset matrix is neglected in our work (W⁻¹ = I), as explained in Section 2.3.2. Also recall that Rx(φ), Ry(θ) and Rz(ψ) have already been defined in Equation 3.1. Solving for yaw, a closed form expression is finally obtained. Looking at Equation 3.19, there is no need to know the value of the geomagnetic field magnitude B, because this term cancels out. A comparison of magnetometer and gyroscope yaw angle estimates can be found in Figures 3.8 and 3.9 below.

 
R_z(\psi)\, B \begin{bmatrix} \cos\delta \\ 0 \\ \sin\delta \end{bmatrix} = R_y(-\theta) R_x(-\phi)\, W^{-1}(\tilde{b} - V)    (3.17)

\begin{bmatrix} B\cos\delta\cos\psi \\ -B\cos\delta\sin\psi \\ B\sin\delta \end{bmatrix} = \begin{bmatrix} (\tilde{b}_x - V_x)\cos\theta + (\tilde{b}_y - V_y)\sin\theta\sin\phi + (\tilde{b}_z - V_z)\sin\theta\cos\phi \\ (\tilde{b}_y - V_y)\cos\phi - (\tilde{b}_z - V_z)\sin\phi \\ -(\tilde{b}_x - V_x)\sin\theta + (\tilde{b}_y - V_y)\cos\theta\sin\phi + (\tilde{b}_z - V_z)\cos\theta\cos\phi \end{bmatrix} = \begin{bmatrix} b_{fx} \\ b_{fy} \\ b_{fz} \end{bmatrix}    (3.18)

\psi_{mag} = \tan^{-1}\left( \frac{-b_{fy}}{b_{fx}} \right) = \tan^{-1}\left( \frac{(\tilde{b}_z - V_z)\sin\phi - (\tilde{b}_y - V_y)\cos\phi}{(\tilde{b}_x - V_x)\cos\theta + (\tilde{b}_y - V_y)\sin\theta\sin\phi + (\tilde{b}_z - V_z)\sin\theta\cos\phi} \right)    (3.19)

\psi_{mag} \in [-\pi, \pi]    (3.20)
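The tilt-compensated yaw of Equation 3.19 can be sketched as follows (a Python sketch; the inclination and the 90° test rotation are illustrative, while V and B reuse the calibration values from Section 2.3.2):

```python
import math

V = (13.4347, 10.5466, -70.6098)         # hard-iron offset (Section 2.3.2)
B, delta = 60.3417, math.radians(60.0)   # field magnitude; illustrative inclination

def yaw_from_mag(b, V, roll, pitch):
    """Tilt-compensated yaw (Eq. 3.19), soft-iron matrix assumed identity."""
    bx, by, bz = b[0] - V[0], b[1] - V[1], b[2] - V[2]
    sp, cp = math.sin(roll), math.cos(roll)
    st, ct = math.sin(pitch), math.cos(pitch)
    bfx = bx * ct + by * st * sp + bz * st * cp
    bfy = by * cp - bz * sp
    return math.atan2(-bfy, bfx)  # psi_mag in [-pi, pi]

# Level sensor (roll = pitch = 0) rotated 90 deg about Z:
# b = Rz(psi) B [cos d, 0, sin d]^T + V  (Eq. 3.16 with W = I)
psi = math.radians(90.0)
b = (B * math.cos(psi) * math.cos(delta) + V[0],
     -B * math.sin(psi) * math.cos(delta) + V[1],
     B * math.sin(delta) + V[2])
yaw = yaw_from_mag(b, V, 0.0, 0.0)
```

Note that the field magnitude B never enters `yaw_from_mag`, matching the observation above that B cancels out.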

[Plot: magnetometer yaw (°) over t (sec)]

Figure 3.8: Yaw estimation using magnetometer data and Euler angles.

[Plot: gyroscope yaw (°) over t (sec)]

Figure 3.9: Yaw estimation using gyroscope data.

After unwrapping the estimated yaw angles, the same complementary filter as in Equation 3.14 is applied, again with a coefficient value of α = 0.02. As previously, the low frequency signal components from the magnetometer and the high frequency components from the gyroscope are combined, which results in the removal of the drift produced by the gyroscope in the yaw angle estimation (see Figure 3.10).

\hat{\psi}[t+\Delta t] = (1-\alpha)\,\psi_{gyro}[t+\Delta t] + \alpha\,\psi_{mag}[t+\Delta t] = 0.98\,\psi_{gyro}[t+\Delta t] + 0.02\,\psi_{mag}[t+\Delta t]    (3.21)

[Plot: improved vs. gyroscope yaw (°) over t (sec)]

Figure 3.10: Improved yaw estimation after fusing magnetometer and gyroscope estimations with a complementary filter.

In summary, an approximate estimation of the IMU 3D orientation with Euler angles is possible, although the approach has many limitations. First, the rotation combinations are not commutative, so each sequence gives a different result. Second, additional errors arise from all the assumptions made throughout the algorithm. Third, there are singularities that lead to gimbal lock, losing a degree of freedom. Last but not least, the use of Euler angles is limited to applications in which the pitch angle does not exceed ±90°; otherwise, the application becomes unstable. Nevertheless, Euler angles remain widely popular because they only involve basic trigonometric and algebraic calculations.
3.1.2 Quaternion-based Estimation

Quaternions provide a different and interesting way of dealing with orientation. While Euler angles describe changes in orientation by three elemental rotations, quaternions describe 3D orientations by combining a three-dimensional rotation axis with a rotation angle, using an axis-angle representation [32].

Quaternions were invented by William Rowan Hamilton between 1843 and 1853 [33] and are composed of a real component q₀ and three complex components q₁i, q₂j and q₃k. Any quaternion q is said to be a combination of a scalar q₀ and a complex vector [q₁ q₂ q₃]ᵀ.

q = q0 + q1 i + q2 j + q3 k (3.22)

Due to their abstract nature, many people try to avoid quaternions in the field of attitude or orientation estimation. Nevertheless, they offer several advantages that make them suitable for many applications: beyond attitude estimation, quaternions are employed to solve many computer animation and robotics problems as well [34].

3.1.2.1 Background

Quaternion algebra differs from conventional linear algebra in certain aspects [35]. The complex symbols i, j and k each behave like the imaginary unit of an ordinary complex number (a + bi). The particularity comes with the inter-symbol multiplications, as illustrated in Figure 3.11 and defined as
as illustrated in Figure 3.11 and defined as

i2 = j 2 = k 2 = −1 (3.23)

ij = k, jk = i, ki = j (3.24)

ji = −k, kj = −i, ik = −j (3.25)

Figure 3.11: Visual representation of complex symbols properties.

Continuing with arithmetic operations, the summation of two arbitrary quaternions a and b follows the same distributive and associative properties as complex numbers.

a \pm b = (a_0 + a_1 i + a_2 j + a_3 k) \pm (b_0 + b_1 i + b_2 j + b_3 k) = (a_0 \pm b_0) + (a_1 \pm b_1)i + (a_2 \pm b_2)j + (a_3 \pm b_3)k    (3.26)

Commutativity, however, does not hold for all operations. Even though summation is commutative, multiplication is not. The quaternion product of two arbitrary quaternions a and b is typically represented with the symbol ⊗ and is not commutative (a ⊗ b ≠ b ⊗ a). Scalar multiplications, on the other hand, do commute.

a \otimes b = (a_0 + a_1 i + a_2 j + a_3 k)(b_0 + b_1 i + b_2 j + b_3 k)
= (a_0 b_0 - a_1 b_1 - a_2 b_2 - a_3 b_3)
+ (a_0 b_1 + a_1 b_0 + a_2 b_3 - a_3 b_2)\,i
+ (a_0 b_2 - a_1 b_3 + a_2 b_0 + a_3 b_1)\,j
+ (a_0 b_3 + a_1 b_2 - a_2 b_1 + a_3 b_0)\,k    (3.27)

a \otimes (\alpha b) = \alpha\,(a \otimes b) = (a \otimes b)\,\alpha    (3.28)
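The product rule in Equation 3.27 translates directly into code; a minimal Python sketch that also demonstrates the non-commutativity of the inter-symbol products (ij = k but ji = −k):

```python
def quat_mult(a, b):
    """Quaternion product a ⊗ b (Eq. 3.27); quaternions as (q0, q1, q2, q3)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

# Non-commutativity: i ⊗ j = k but j ⊗ i = -k
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
ij = quat_mult(i, j)  # -> (0, 0, 0, 1), i.e. k
ji = quat_mult(j, i)  # -> (0, 0, 0, -1), i.e. -k
```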

Besides, the conjugate is one of the most commonly used properties. The conjugate of a quaternion, denoted q∗, consists of changing the sign of the complex terms.

q^* = q_0 - q_1 i - q_2 j - q_3 k    (3.29)

(a ⊗ b)∗ = b∗ ⊗ a∗ (3.30)

Regarding the norm of a quaternion (defined through |q|²), it assigns a strictly positive length or size to the vector formed by the quaternion components. It turns out that the norm of a product is the same as the product of the norms.

|q|^2 = q \otimes q^*    (3.31)

|a \otimes b|^2 = a \otimes b \otimes (a \otimes b)^* = a \otimes b \otimes b^* \otimes a^* = |a|^2 |b|^2    (3.32)

With that in mind, it is also straightforward to compute the inverse of a quaternion, 1/q, given that 1/q = q∗/|q|².

\frac{1}{q} = \frac{q^*}{|q|^2} = \frac{q_0 - q_1 i - q_2 j - q_3 k}{(q_0 + q_1 i + q_2 j + q_3 k)(q_0 - q_1 i - q_2 j - q_3 k)} = \frac{q_0 - q_1 i - q_2 j - q_3 k}{q_0^2 + q_1^2 + q_2^2 + q_3^2}    (3.33)

Apart from that, it is very useful to know certain properties of unit quaternions, namely quaternions with unit norm (|q|² = 1). The main feature of unit quaternions is that the product c of two unit quaternions a and b is always another unit quaternion. Note that the symbol ⇐⇒ means "if and only if" in the expressions below. This is later exploited by the estimation algorithms, because rotations can in fact be represented as a multiplication of unit quaternions over time.

c = a \otimes b \Rightarrow |c|^2 = 1 \iff |a|^2 = |b|^2 = 1    (3.34)

What is more, the inverse of a unit quaternion, 1/q, is the same as the conjugate of that quaternion, q∗.

\frac{1}{q} = q^* \iff |q|^2 = 1    (3.35)
q

In addition, pure quaternion is the term given to a quaternion with only complex components, i.e. q = q₁i + q₂j + q₃k, and real quaternion is just a quaternion with no complex components at all (q = q₀). Moreover, any quaternion may be expressed in vector notation (q) or with an angle and a vector (q{α, e}), following the axis-angle representation.

   
q0   cos α2

   
   
q1  e1 sin α2 
   
q =   = q {α, e} =  (3.36)
   

   
q2  e2 sin α 
   2
   
   
q3 e3 sin α2

46
α = cos−1 (2q0 )
 
q1
 sin α2 
 
  (3.37)
e =  q2α 
 
 sin 2 
 
 
q3
sin α2

In the same way that sensor data, e.g. accelerometer data ã = [ãₓ ã_y ã_z]ᵀ, can be referenced in a local (L) or global (G) coordinate frame, the corresponding quaternion (ã_q) may also be defined in either reference system. For instance, accelerometer data is denoted as ᴸã_q if it is referenced in the local coordinate frame L.

 
 0 
  

 
0  L ãx 

L

ãq = 
 = 
   (3.38)
L L ã 
ã  y
 
 
L
ãz

The notation {}^L_G q is used to indicate that a given quaternion expresses an arbitrary 3D orientation of the global frame with respect to the local frame. Conversely, the conjugate quaternion {}^L_G q^* may be employed to represent the orientation of the local frame with respect to the global frame ({}^G_L q).

{}^L_G q = {}^G_L q^* = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}    (3.39)

{}^G_L q = {}^L_G q^* = \begin{bmatrix} q_0 \\ -q_1 \\ -q_2 \\ -q_3 \end{bmatrix}    (3.40)

Whenever a rotation takes place, it can be expressed by a quaternion multiplication. Given coordinate frames A, B and C, if the quaternion {}^B_A q describes the orientation of frame A with respect to B and the quaternion {}^C_B q describes the orientation of frame B with respect to C, the orientation {}^C_A q of frame A with respect to C can be obtained by quaternion multiplication.

{}^C_A q = {}^C_B q \otimes {}^B_A q    (3.41)

Therefore, if local accelerometer data ({}^L\tilde{a}) and a quaternion expressing the relationship between the global and local frames are available, we can express the data in the global coordinate frame using the product in Equation 3.42. Inversely, one can always return the measurements to the local reference system with Equation 3.43. It is necessary to mention that all the quaternions in these expressions must be unit quaternions to avoid the orientation quaternion going to infinity.

{}^G\tilde{a}_q = {}^G_L q \otimes {}^L\tilde{a}_q \otimes {}^G_L q^* = {}^L_G q^* \otimes {}^L\tilde{a}_q \otimes {}^L_G q    (3.42)

{}^L\tilde{a}_q = {}^L_G q \otimes {}^G\tilde{a}_q \otimes {}^L_G q^* = {}^G_L q^* \otimes {}^G\tilde{a}_q \otimes {}^G_L q    (3.43)

Any unit quaternion expressing the coordinate frame transformations in Equations 3.42 and 3.43 can be associated with the corresponding rotation matrix R({}^L_G q).

{}^G\tilde{a}_q = R({}^G_L q)\, {}^L\tilde{a}_q = R({}^L_G q^*)\, {}^L\tilde{a}_q    (3.44)

{}^L\tilde{a}_q = R({}^L_G q)\, {}^G\tilde{a}_q = R({}^G_L q^*)\, {}^G\tilde{a}_q    (3.45)

where the rotation matrix R({}^L_G q) is derived from the Direction Cosine Matrix (DCM) and the convention from Equation 3.2 (the derivation can be found in [36]).

R({}^L_G q) = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}    (3.46)
Given that {}^L_G q is a unit quaternion,

R({}^L_G q) = \begin{bmatrix} 1 - 2(q_2^2 + q_3^2) & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & 1 - 2(q_1^2 + q_2^2) \end{bmatrix} \iff |{}^L_G q|^2 = 1    (3.47)

To finish the quaternion review, it is fundamental to define the direct relationship between the Euler angles [φ θ ψ]ᵀ and unit quaternions.

\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \tan^{-1}\left( \dfrac{R_{32}}{R_{33}} \right) \\ \sin^{-1}\left( -R_{31} \right) \\ \tan^{-1}\left( \dfrac{R_{21}}{R_{11}} \right) \end{bmatrix} = \begin{bmatrix} \tan^{-1}\left( \dfrac{2(q_2 q_3 + q_0 q_1)}{1 - 2(q_1^2 + q_2^2)} \right) \\ \sin^{-1}\left( 2(q_0 q_2 - q_1 q_3) \right) \\ \tan^{-1}\left( \dfrac{2(q_1 q_2 + q_0 q_3)}{1 - 2(q_2^2 + q_3^2)} \right) \end{bmatrix} \iff |{}^L_G q|^2 = 1    (3.48)
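The quaternion-to-Euler mapping in Equation 3.48 can be sketched as follows (a Python sketch; `atan2` and `asin` supply the [−π, π] and [−π/2, π/2] ranges):

```python
import math

def quat_to_euler(q):
    """Euler angles (roll, pitch, yaw) in rad from a unit quaternion (Eq. 3.48)."""
    q0, q1, q2, q3 = q
    roll = math.atan2(2 * (q2 * q3 + q0 * q1), 1 - 2 * (q1 * q1 + q2 * q2))
    pitch = math.asin(2 * (q0 * q2 - q1 * q3))
    yaw = math.atan2(2 * (q1 * q2 + q0 * q3), 1 - 2 * (q2 * q2 + q3 * q3))
    return roll, pitch, yaw

# Pure 90-degree yaw rotation: q = (cos 45°, 0, 0, sin 45°)
roll, pitch, yaw = quat_to_euler((math.cos(math.pi / 4), 0.0, 0.0,
                                  math.sin(math.pi / 4)))
```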

With all the background material covered, it is time to delve into the estimation algorithm [37]. As in the Euler angles method, the accelerometer is used to compensate the gyroscope drift in the pitch and roll angles. This is then complemented with the magnetometer, which corrects the drift in the gyroscope yaw angle.

3.1.2.2 Gyroscope Quaternion

First of all, before the correction quaternions derived from the Earth field sensors (accelerometer and magnetometer) are applied, the gyroscope quaternion needs to be updated. For that, the gyroscope data quaternion {}^L\tilde{w}_q has to be defined from the original data {}^L\tilde{w} = [\tilde{w}_x \; \tilde{w}_y \; \tilde{w}_z] rad/sec, which is measured in the local frame L.

{}^L\tilde{w}_q = \begin{bmatrix} 0 & \tilde{w}_x & \tilde{w}_y & \tilde{w}_z \end{bmatrix}^T    (3.49)

Apart from that, the derivative of the forward gyroscope unit quaternion {}^G_L \dot{q}_{w,t_k} is well defined in the literature (the derivation can be found in [38]).

{}^G_L \dot{q}_{w,t_k} = \frac{1}{2}\, {}^G_L q_{t_{k-1}} \otimes {}^L\tilde{w}_{q,t_k}    (3.50)
2

However, there is special interest in the inverse unit quaternion derivative {}^L_G \dot{q}_{w,t_k} = {}^G_L \dot{q}^*_{w,t_k}, as this specifies the global reference frame with respect to the local frame.

{}^L_G \dot{q}_{w,t_k} = {}^G_L \dot{q}^*_{w,t_k} = \left( \frac{1}{2}\, {}^G_L q_{t_{k-1}} \otimes {}^L\tilde{w}_{q,t_k} \right)^* = \frac{1}{2}\, {}^L\tilde{w}^*_{q,t_k} \otimes {}^G_L q^*_{t_{k-1}} = -\frac{1}{2}\, {}^L\tilde{w}_{q,t_k} \otimes {}^L_G q_{t_{k-1}}    (3.51)

Now that the expression for the inverse unit gyroscope quaternion derivative has been formulated, an integration is carried out to obtain the gyroscope orientation quaternion {}^L_G q_{w,t_k}, using the inverse unit quaternion derivative, the sampling time Δt and the previous corrected orientation quaternion {}^L_G q_{t_{k-1}}. Finally, the quaternion is normalized to make sure that the unit quaternion property is satisfied at every time step. Note that the initial corrected orientation quaternion {}^L_G q_{t_0} is simply the real unit quaternion q_I.

{}^L_G q_{w,t_k} = {}^L_G q_{t_{k-1}} + {}^L_G \dot{q}_{w,t_k}\, \Delta t    (3.52)

{}^L_G q_{w,t_k} = \frac{{}^L_G q_{w,t_k}}{|{}^L_G q_{w,t_k}|_2}    (3.53)

{}^L_G q_{t_0} = q_I = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T    (3.54)

At any given time, Equations 3.48 and 3.47 can be sequentially applied to the gyroscope orientation quaternion {}^L_G q_{w,t_k} to get the gyroscope Euler angles. As this estimate itself drifts, a filtered accelerometer error (or delta) quaternion \hat{\Delta}{}^L_G q_{acc,t_k} and a filtered magnetometer error quaternion \hat{\Delta}{}^L_G q_{mag,t_k} are estimated to form the filtered angle error quaternion \hat{\Delta}{}^L_G q_{t_k} = \hat{\Delta}{}^L_G q_{acc,t_k} \otimes \hat{\Delta}{}^L_G q_{mag,t_k} and rotate the gyroscope angle quaternion {}^L_G q_{w,t_k} to the corrected and improved orientation quaternion {}^L_G q_{t_k}.

{}^L_G q_{t_k} = {}^L_G q_{w,t_k} \otimes \hat{\Delta}{}^L_G q_{t_k} = {}^L_G q_{w,t_k} \otimes \hat{\Delta}{}^L_G q_{acc,t_k} \otimes \hat{\Delta}{}^L_G q_{mag,t_k}    (3.55)
(3.55)

3.1.2.3 Accelerometer Error Quaternion

The main point of the accelerometer error quaternion $\Delta^L_G q_{acc,t_k}$ is to correct the roll and pitch orientation of the drifted gyroscope angle quaternion. Additionally, the correction can be further improved with a complementary or Kalman filter, which gives a filtered accelerometer error quaternion $\hat{\Delta}^L_G q_{acc,t_k}$ (note the difference between $\Delta^L_G q_{acc,t_k}$ and $\hat{\Delta}^L_G q_{acc,t_k}$). The accelerometer-corrected quaternion is denoted as $^L_G q_{acc,t_k}$. For simplicity and convenience, the time variable $t$ has been removed from all the quaternions, and accelerometer measurements $\tilde{a}$ are considered to be normalized from this point on.

$$^L_G q_{acc} = {}^L_G q_w \otimes \hat{\Delta}^L_G q_{acc} \tag{3.56}$$

Furthermore, given the normalized local measurement $^L\tilde{a} = \begin{bmatrix} ^L\tilde{a}_x & ^L\tilde{a}_y & ^L\tilde{a}_z \end{bmatrix}^T$ and its quaternion defined as $^L\tilde{a}_q = \begin{bmatrix} 0 & ^L\tilde{a}_x & ^L\tilde{a}_y & ^L\tilde{a}_z \end{bmatrix}^T$, the global acceleration value $^G\tilde{a}$ may be computed by transforming the local frame measurements to the global reference system using the gyroscope angle quaternion or its respective rotation matrix $R\!\left({}^L_G q^{\,*}_w\right)$. Then, this global measurement is approximated to the global true gravity vector $^G g = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T$ through the accelerometer error quaternion. Below, Equations 3.57, 3.58 and 3.59 include both the rotation matrix domain (left) and the quaternion domain (right).

$$^G\tilde{a} = R\!\left({}^L_G q^{\,*}_w\right) {}^L\tilde{a} \;\rightarrow\; {}^G\tilde{a}_q = {}^L_G q^{\,*}_w \otimes {}^L\tilde{a}_q \otimes {}^L_G q_w \tag{3.57}$$

$$^G\tilde{a} = R\!\left(\Delta^L_G q_{acc}\right) {}^G g \;\rightarrow\; {}^G\tilde{a}_q = \Delta^L_G q_{acc} \otimes {}^G g_q \otimes \Delta^L_G q^{\,*}_{acc} \tag{3.58}$$

$$\begin{bmatrix} {}^G\tilde{a}_x \\ {}^G\tilde{a}_y \\ {}^G\tilde{a}_z \end{bmatrix} = R\!\left(\Delta^L_G q_{acc}\right)\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \;\rightarrow\; \begin{bmatrix} 0 \\ {}^G\tilde{a}_x \\ {}^G\tilde{a}_y \\ {}^G\tilde{a}_z \end{bmatrix} = \Delta^L_G q_{acc} \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \Delta^L_G q^{\,*}_{acc} \tag{3.59}$$

As expected, the system of equations in 3.59 is underdetermined and has infinite solutions (see Equation 3.60). This is due to the fact that the accelerometer cannot detect rotations around the Z axis (yaw angle), as explained in Section 3.1.1.1. Therefore, the solutions can be restricted to a finite number by choosing $\Delta^L_G q_{3acc} = 0$. Even though this new fully determined system gives four solutions, two of them are discarded because they have a negative form and a third one is the sign-inverted version of the fourth one. Hence, the last solution is selected, with positive scalar value $\Delta^L_G q_{0acc}$, presented in Equation 3.62.



 
$$\begin{cases} 2\left(\Delta^L_G q_{1acc}\,\Delta^L_G q_{3acc} + \Delta^L_G q_{0acc}\,\Delta^L_G q_{2acc}\right) = {}^G\tilde{a}_x \\[4pt] 2\left(\Delta^L_G q_{2acc}\,\Delta^L_G q_{3acc} - \Delta^L_G q_{0acc}\,\Delta^L_G q_{1acc}\right) = {}^G\tilde{a}_y \\[4pt] \Delta^L_G q_{0acc}^2 - \Delta^L_G q_{1acc}^2 - \Delta^L_G q_{2acc}^2 + \Delta^L_G q_{3acc}^2 = {}^G\tilde{a}_z \end{cases} \tag{3.60}$$



$$\begin{cases} 2\,\Delta^L_G q_{0acc}\,\Delta^L_G q_{2acc} = {}^G\tilde{a}_x \\[4pt] -2\,\Delta^L_G q_{0acc}\,\Delta^L_G q_{1acc} = {}^G\tilde{a}_y \\[4pt] \Delta^L_G q_{0acc}^2 - \Delta^L_G q_{1acc}^2 - \Delta^L_G q_{2acc}^2 = {}^G\tilde{a}_z \end{cases} \quad \Longleftrightarrow \quad \Delta^L_G q_{3acc} = 0 \tag{3.61}$$

$$\Delta^L_G q_{acc} = \begin{bmatrix} \sqrt{\dfrac{{}^G\tilde{a}_z + 1}{2}} & -\dfrac{{}^G\tilde{a}_y}{\sqrt{2({}^G\tilde{a}_z + 1)}} & \dfrac{{}^G\tilde{a}_x}{\sqrt{2({}^G\tilde{a}_z + 1)}} & 0 \end{bmatrix}^T \quad \Longleftrightarrow \quad \Delta^L_G q_{3acc} = 0 \tag{3.62}$$

Unfortunately, there is a singularity in the expression (when $^G\tilde{a}_z = -1$) that may affect the stability of the estimation. That is why an alternative solution is sought by setting $\Delta^L_G q_{2acc} = 0$ instead of $\Delta^L_G q_{3acc} = 0$. While the new simplified system of equations has two different solutions, the one shown in Equation 3.64 is chosen. This also has a singularity (when $^G\tilde{a}_z = 1$ in this case), so a combination of the solutions in Equations 3.62 and 3.64 is used (see Equation 3.65).




$$\begin{cases} 2\,\Delta^L_G q_{1acc}\,\Delta^L_G q_{3acc} = {}^G\tilde{a}_x \\[4pt] -2\,\Delta^L_G q_{0acc}\,\Delta^L_G q_{1acc} = {}^G\tilde{a}_y \\[4pt] \Delta^L_G q_{0acc}^2 - \Delta^L_G q_{1acc}^2 + \Delta^L_G q_{3acc}^2 = {}^G\tilde{a}_z \end{cases} \quad \Longleftrightarrow \quad \Delta^L_G q_{2acc} = 0 \tag{3.63}$$

$$\Delta^L_G q_{acc} = \begin{bmatrix} -\dfrac{{}^G\tilde{a}_y}{\sqrt{2(1 - {}^G\tilde{a}_z)}} & \sqrt{\dfrac{1 - {}^G\tilde{a}_z}{2}} & 0 & \dfrac{{}^G\tilde{a}_x}{\sqrt{2(1 - {}^G\tilde{a}_z)}} \end{bmatrix}^T \quad \Longleftrightarrow \quad \Delta^L_G q_{2acc} = 0 \tag{3.64}$$

$$\Delta^L_G q_{acc} = \begin{cases} \begin{bmatrix} \sqrt{\dfrac{{}^G\tilde{a}_z + 1}{2}} & -\dfrac{{}^G\tilde{a}_y}{\sqrt{2({}^G\tilde{a}_z + 1)}} & \dfrac{{}^G\tilde{a}_x}{\sqrt{2({}^G\tilde{a}_z + 1)}} & 0 \end{bmatrix}^T, & {}^G\tilde{a}_z \geq 0 \\[16pt] \begin{bmatrix} -\dfrac{{}^G\tilde{a}_y}{\sqrt{2(1 - {}^G\tilde{a}_z)}} & \sqrt{\dfrac{1 - {}^G\tilde{a}_z}{2}} & 0 & \dfrac{{}^G\tilde{a}_x}{\sqrt{2(1 - {}^G\tilde{a}_z)}} \end{bmatrix}^T, & {}^G\tilde{a}_z < 0 \end{cases} \tag{3.65}$$

That is how a closed-form solution for the accelerometer error quaternion is obtained. Specifically, the singularities are avoided by having two different solutions for the accelerometer error quaternion, depending on the hemisphere in which the global measurement $^G\tilde{a}$ is pointing.
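The hemisphere-switched solution of Equation 3.65 can be transcribed directly. The sketch below uses the hypothetical name `accel_delta_quaternion`; the input is assumed to be the gyro-rotated global acceleration:

```python
import numpy as np

def accel_delta_quaternion(a):
    """Closed-form accelerometer error quaternion (Eq. 3.65).
    `a` is the gyro-rotated global acceleration [ax, ay, az]."""
    ax, ay, az = a / np.linalg.norm(a)        # measurements are assumed normalized
    if az >= 0.0:
        # Solution with Δq3 = 0 (Eq. 3.62), singular only at az = -1
        return np.array([np.sqrt((az + 1.0) / 2.0),
                         -ay / np.sqrt(2.0 * (az + 1.0)),
                         ax / np.sqrt(2.0 * (az + 1.0)),
                         0.0])
    # Solution with Δq2 = 0 (Eq. 3.64), singular only at az = +1
    return np.array([-ay / np.sqrt(2.0 * (1.0 - az)),
                     np.sqrt((1.0 - az) / 2.0),
                     0.0,
                     ax / np.sqrt(2.0 * (1.0 - az))])
```

A gravity-aligned measurement yields the identity quaternion (no roll/pitch correction needed), and the result is a unit quaternion by construction for any unit input.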

Before correcting the gyroscope angle quaternion, the accelerometer error quaternion is filtered with a complementary filter or a Multiplicative Extended Kalman Filter (see Sections 3.1.2.5 and 3.1.2.6, respectively, for the final derivations of $\hat{\Delta}^L_G q_{acc}$ and $\hat{\Delta}^L_G q_{mag}$). As the complementary filter is applied almost identically to both the accelerometer and magnetometer error quaternions, it makes sense to explain the derivation of the magnetometer error quaternion first and then delve into the complementary filter.

3.1.2.4 Magnetometer Error Quaternion

Essentially, the magnetometer error quaternion takes care of correcting the yaw orientation of the gyroscope angle quaternion. In this case, this quaternion is later filtered to rotate the accelerometer-corrected quaternion (recall that $^L_G q_{acc} = {}^L_G q_w \otimes \hat{\Delta}^L_G q_{acc}$) and remove the gyroscope yaw angle drift with the filtered error quaternion $\hat{\Delta}^L_G q_{mag}$. As with acceleration, the sensed magnetic field $\tilde{b}$ is normalized.

$$^L_G q = {}^L_G q_{acc} \otimes \hat{\Delta}^L_G q_{mag} \tag{3.66}$$

Whereas accelerometer local measurements are transformed from the local to the global frame with the gyroscope angle quaternion, magnetometer local measurements ($^L\tilde{b} = \begin{bmatrix} ^L\tilde{b}_x & ^L\tilde{b}_y & ^L\tilde{b}_z \end{bmatrix}^T$ with quaternion $^L\tilde{b}_q = \begin{bmatrix} 0 & ^L\tilde{b}_x & ^L\tilde{b}_y & ^L\tilde{b}_z \end{bmatrix}^T$) are rotated with the inverse accelerometer-corrected quaternion $^L_G q^{\,*}_{acc}$ to get a corrected global measurement $^{Gc}\tilde{b}$, defined in the global coordinate frame corrected by the accelerometer ($Gc$). Next, the relationship between the corrected global magnetic field measurement and the $X_{positive}$-$Z$ semiplane $^G\Pi(x^+ z)$ is defined through the magnetometer error quaternion. In that way, the heading is enforced to the magnetic north (assumed to lie along the global X axis). Again, Equations 3.67, 3.68 and 3.69 have been formulated in the rotation matrix domain (left) and the quaternion domain (right). Note that $\Gamma = {}^{Gc}\tilde{b}_x^2 + {}^{Gc}\tilde{b}_y^2$.

$$^{Gc}\tilde{b} = R\!\left({}^L_G q^{\,*}_{acc}\right) {}^L\tilde{b} \;\rightarrow\; {}^{Gc}\tilde{b}_q = {}^L_G q^{\,*}_{acc} \otimes {}^L\tilde{b}_q \otimes {}^L_G q_{acc} \tag{3.67}$$

$$R\!\left(\Delta^L_G q_{mag}\right) {}^{Gc}\tilde{b} \in {}^G\Pi(x^+ z) \;\rightarrow\; \Delta^L_G q_{mag} \otimes {}^{Gc}\tilde{b}_q \otimes \Delta^L_G q^{\,*}_{mag} \in {}^G\Pi_q(x^+ z) \tag{3.68}$$

$$\begin{bmatrix} {}^{Gc}\tilde{b}_x \\ {}^{Gc}\tilde{b}_y \\ {}^{Gc}\tilde{b}_z \end{bmatrix} = R\!\left(\Delta^L_G q_{mag}\right)\begin{bmatrix} \sqrt{\Gamma} \\ 0 \\ {}^{Gc}\tilde{b}_z \end{bmatrix} \;\rightarrow\; \begin{bmatrix} 0 \\ {}^{Gc}\tilde{b}_x \\ {}^{Gc}\tilde{b}_y \\ {}^{Gc}\tilde{b}_z \end{bmatrix} = \Delta^L_G q_{mag} \otimes \begin{bmatrix} 0 \\ \sqrt{\Gamma} \\ 0 \\ {}^{Gc}\tilde{b}_z \end{bmatrix} \otimes \Delta^L_G q^{\,*}_{mag} \tag{3.69}$$

When solving for $\Delta^L_G q_{mag}$, it is desired that the magnetometer error quaternion only affects the yaw angle. Consequently, $\Delta^L_G q_{1mag} = \Delta^L_G q_{2mag} = 0$ can be set, enforcing the quaternion to have a single degree of freedom (around the Z axis). In that way, the shortest-rotation solution (shown in Equation 3.72) is then selected among all possible solutions.

$$\Delta^L_G q_{mag} = \begin{bmatrix} \Delta^L_G q_{0mag} & 0 & 0 & \Delta^L_G q_{3mag} \end{bmatrix}^T \tag{3.70}$$



 
$$\begin{cases} \left(\Delta^L_G q_{0mag}^2 - \Delta^L_G q_{3mag}^2\right)\sqrt{\Gamma} = {}^{Gc}\tilde{b}_x \\[4pt] 2\,\Delta^L_G q_{0mag}\,\Delta^L_G q_{3mag}\,\sqrt{\Gamma} = {}^{Gc}\tilde{b}_y \\[4pt] \left(\Delta^L_G q_{0mag}^2 + \Delta^L_G q_{3mag}^2\right) {}^{Gc}\tilde{b}_z = {}^{Gc}\tilde{b}_z \end{cases} \tag{3.71}$$

$$\Delta^L_G q_{mag} = \begin{bmatrix} \dfrac{\sqrt{\Gamma + {}^{Gc}\tilde{b}_x\sqrt{\Gamma}}}{\sqrt{2\Gamma}} & 0 & 0 & \dfrac{{}^{Gc}\tilde{b}_y}{\sqrt{2\left(\Gamma + {}^{Gc}\tilde{b}_x\sqrt{\Gamma}\right)}} \end{bmatrix}^T \tag{3.72}$$

Although singularities exist when $^{Gc}\tilde{b}_y = 0$ and $^{Gc}\tilde{b}_x < 0$, a combined solution can again cope with potential system instabilities. Hereafter, Equation 3.73 represents the closed-form solution of the magnetometer error quaternion.

$$\Delta^L_G q_{mag} = \begin{cases} \begin{bmatrix} \dfrac{\sqrt{\Gamma + {}^{Gc}\tilde{b}_x\sqrt{\Gamma}}}{\sqrt{2\Gamma}} & 0 & 0 & \dfrac{{}^{Gc}\tilde{b}_y}{\sqrt{2\left(\Gamma + {}^{Gc}\tilde{b}_x\sqrt{\Gamma}\right)}} \end{bmatrix}^T, & {}^{Gc}\tilde{b}_x \geq 0 \\[16pt] \begin{bmatrix} \dfrac{{}^{Gc}\tilde{b}_y}{\sqrt{2\left(\Gamma - {}^{Gc}\tilde{b}_x\sqrt{\Gamma}\right)}} & 0 & 0 & \dfrac{\sqrt{\Gamma - {}^{Gc}\tilde{b}_x\sqrt{\Gamma}}}{\sqrt{2\Gamma}} \end{bmatrix}^T, & {}^{Gc}\tilde{b}_x < 0 \end{cases} \tag{3.73}$$
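The case-split solution of Equation 3.73 can be sketched as follows (hypothetical name `mag_delta_quaternion`; the inputs are the X and Y components of the accelerometer-corrected global field):

```python
import numpy as np

def mag_delta_quaternion(bx, by):
    """Closed-form yaw-only magnetometer error quaternion (Eq. 3.73)."""
    gamma = bx**2 + by**2                     # Γ as defined above Eq. 3.69
    sq = np.sqrt(gamma)
    if bx >= 0.0:
        q0 = np.sqrt(gamma + bx * sq) / np.sqrt(2.0 * gamma)
        q3 = by / np.sqrt(2.0 * (gamma + bx * sq))
    else:
        # alternative branch, singular only for bx > 0
        q0 = by / np.sqrt(2.0 * (gamma - bx * sq))
        q3 = np.sqrt(gamma - bx * sq) / np.sqrt(2.0 * gamma)
    return np.array([q0, 0.0, 0.0, q3])       # rotation purely about the Z axis
```

When the corrected field already points north (positive X, zero Y), the function returns the identity quaternion, i.e. no yaw correction; in both branches the result is a unit quaternion.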

Now that the accelerometer and magnetometer error quaternions can be calculated, it is convenient to filter them in order to improve the accuracy of the estimates. The purpose of filtering the error quaternions in the algorithm is to fuse the gyroscope orientation quaternion with the acceleration and magnetic field readings so that the estimation becomes robust against high noise levels and time-varying biases. In this case, the performance of a quaternion-based complementary filter and a Multiplicative Extended Kalman Filter have been analyzed and compared.

3.1.2.5 Complementary Filter

On one hand, the complementary filter with quaternions aims to provide a simple fusion method to correct the drift present in the gyroscope orientation as a result of the constant integration over time. The logic remains the same as with the complementary filter applied in the Euler angles method. It is considered a cheap filtering process, with low complexity and computational cost, robust and with fast convergence. What is more, it is highly responsive to aggressive movements of the sensor. These features make it a suitable algorithm for embedded systems.

Basically, the filter consists of removing high-frequency noise from the accelerometer and magnetometer error quaternions, $\Delta^L_G q_{acc}$ and $\Delta^L_G q_{mag}$ [39]. In this whole section $\Delta^L_G q_{acc}$ is used in the derivations as an example, although the same holds exactly for $\Delta^L_G q_{mag}$ as well. There are many ways to achieve the mentioned filtering effect, one of the most effective being Spherical Linear intERPolation (SLERP) with an identity quaternion $q_I$. However, the complexity and accuracy of SLERP may not be required at all times, especially when the error quaternions are small due to the reduced deviation of the measurements from the real gravity or the $X_{positive}$-$Z$ semiplane. For such cases, Linear intERPolation (LERP) is enough. Figure 3.12 provides a pictorial representation of LERP and SLERP.

Figure 3.12: Simplified 2D comparison between Linear intERPolation (LERP) and Spherical Linear intERPolation (SLERP). Note that in this case $q_a = \Delta^L_G q_{acc}$ or $\Delta^L_G q_{mag}$, $q_b = q_I$ and $q_{int} = \bar{\Delta}^L_G q_{acc}$ or $\bar{\Delta}^L_G q_{mag}$ used below.

Once more, the selection of the technique becomes a tradeoff between efficiency and accuracy. While LERP is very efficient but less accurate, SLERP is well known for its precision at a higher computational complexity. Thus, a relevant metric that allows switching between techniques becomes essential in this context. Here, when the angle $\Omega$ between the error quaternion and the identity quaternion is smaller than a certain threshold $\lambda$, LERP is employed ($\Omega < \lambda$). Otherwise, if the angle is big enough ($\Omega \geq \lambda$), SLERP is implemented for better accuracy. The angle between two quaternions can easily be computed from the relationship between the cosine of the angle and the dot product. A value of $\lambda = 0.436$ rad $= 25°$ has been set, following literature reference [39].

 
1
 
 
0
 
qI =  
  (identity quaternion) (3.74)
0
 
 
 
0

cosΩ = ∆LG qacc qI = ∆LG q0acc q0I + ∆LG q1acc q1I + ∆LG q2I q2acc + ∆LG q3acc q3I (3.75)

Ω = cos−1 ∆LG q0acc q0I + ∆LG q1acc q1I + ∆LG q2I q2acc + ∆LG q3acc q3I

(3.76)

Below, the expressions of LERP and SLERP can be found in Equations 3.77 and 3.78 [40], respectively, where $\alpha \in [0, 1]$ is the gain that characterizes the cutoff frequency of the filter [41]. After interpolation, the resulting interpolated error quaternion $\bar{\Delta}^L_G q_{acc}$ is normalized to obtain the filtered error quaternion $\hat{\Delta}^L_G q_{acc}$.

$$\bar{\Delta}^L_G q_{acc} = (1 - \alpha)\, q_I + \alpha\,\Delta^L_G q_{acc} \tag{3.77}$$

$$\bar{\Delta}^L_G q_{acc} = \frac{\sin\!\left((1 - \alpha)\Omega\right)}{\sin\Omega}\, q_I + \frac{\sin(\alpha\Omega)}{\sin\Omega}\,\Delta^L_G q_{acc} \tag{3.78}$$

$$\hat{\Delta}^L_G q_{acc} = \frac{\bar{\Delta}^L_G q_{acc}}{\left\|\bar{\Delta}^L_G q_{acc}\right\|} \tag{3.79}$$
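Equations 3.74-3.79 combine into a small routine. The sketch below (hypothetical name `filter_error_quaternion`) exploits the fact that, since $q_I = [1, 0, 0, 0]$, the dot product of Equation 3.75 reduces to the scalar part of the error quaternion:

```python
import numpy as np

def filter_error_quaternion(dq, alpha, threshold=0.436):
    """LERP/SLERP filtering of an error quaternion toward the identity
    (Eqs. 3.77-3.79). `alpha` in [0, 1] is the filter gain and
    `threshold` the switching angle λ in radians."""
    q_I = np.array([1.0, 0.0, 0.0, 0.0])
    omega = np.arccos(np.clip(dq[0], -1.0, 1.0))   # cosΩ = Δq0 (Eq. 3.76)
    if omega < threshold:
        # LERP: cheap linear interpolation for small deviations (Eq. 3.77)
        q = (1.0 - alpha) * q_I + alpha * dq
    else:
        # SLERP: constant angular-velocity interpolation (Eq. 3.78)
        q = (np.sin((1.0 - alpha) * omega) * q_I
             + np.sin(alpha * omega) * dq) / np.sin(omega)
    return q / np.linalg.norm(q)                   # normalization (Eq. 3.79)
```

With `alpha = 0` the output collapses to the identity (full trust in the gyroscope), with `alpha = 1` the raw error quaternion is applied unchanged, and intermediate gains blend the two.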

Nevertheless, establishing a constant filter gain sets a clear constraint on the complementary filter. For example, whenever a high movement acceleration is present, the orientation quaternion is estimated using a false assumption, a fact that may cause critical estimation errors. That is why an adaptive algorithm has been specifically designed for the gain $\alpha$. In essence, the goal is to disable the filtering when the norm of the raw accelerometer measurements $\|\tilde{a}\|$ deviates significantly from the true gravity norm value $\|^G g\| = 1\,g$. At static conditions, the error quaternions get fully filtered. Based on this, a gravity error magnitude $e_g$ and a gain factor function $f(e_g)$ have been defined. This adaptive gain factor then multiplies the static value of the gain, denoted as $\bar{\alpha}$. The static gain should be set to the value that gives optimal results when the IMU is at stationary conditions.

$$e_g = \frac{\left|\,\|^L\tilde{a}\| - \|^G g\|\,\right|}{\|^G g\|} \tag{3.80}$$
$$\alpha = \alpha(e_g) = \bar{\alpha}\, f(e_g) \tag{3.81}$$

Regarding the definition of the gain factor, this is a continuous function dependent on a low and a high threshold ($T_L$ and $T_H$). It is equal to one when the magnitude error is lower than the low threshold and equal to zero when $e_g$ is higher than the high threshold. For all remaining values, a continuous linear function is fit. Equation 3.82 shows the formal definition of the gain factor, and Figure 3.13 illustrates the actual gain factor function $f(e_g)$ used for the accelerometer and magnetometer error quaternion filtering.

$$f(e_g) = \begin{cases} 1, & e_g \leq T_L \\[4pt] 1 - \dfrac{e_g - T_L}{T_H - T_L}, & T_L < e_g < T_H \\[4pt] 0, & e_g \geq T_H \end{cases} \tag{3.82}$$
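The piecewise gain factor of Equation 3.82 is a few lines of code. The sketch below uses the thresholds $T_L = 0.0005$ and $T_H = 0.01$ from Figure 3.13 as defaults (the function name is illustrative):

```python
def gain_factor(e_g, t_low=0.0005, t_high=0.01):
    """Piecewise-linear adaptive gain factor f(e_g) of Eq. 3.82."""
    if e_g <= t_low:
        return 1.0      # static conditions: filtering fully enabled
    if e_g >= t_high:
        return 0.0      # strong non-gravitational acceleration: filtering disabled
    # linear ramp between the two thresholds
    return 1.0 - (e_g - t_low) / (t_high - t_low)
```

The adaptive gain of Equation 3.81 is then simply `alpha = alpha_static * gain_factor(e_g)`.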

Figure 3.13: Gain factor function $f(e_g)$ for the accelerometer and magnetometer error quaternion complementary filter ($T_L = 0.0005$, $T_H = 0.01$).

In general, it has been observed that the estimation is quite sensitive to the filtering, so low gains have been set. In the case of the accelerometer complementary filter, a static gain of $\bar{\alpha} = 0.2$ has been set. For the magnetometer complementary filter, a static gain an order of magnitude lower has been defined ($\bar{\alpha} = 0.02$). As for the thresholds, both filters use a low threshold of $T_L = 0.0005$ and a high threshold of $T_H = 0.01$.

Thanks to this algorithm, the gyroscope angle quaternion is corrected once the filtered accelerometer and magnetometer error quaternions have been computed. Lastly, the Euler angles are obtained from the estimated orientation quaternion using Equations 3.47 and 3.48. As a summary, a block diagram with all the fundamental quaternion computations and complementary filtering processes has been included in Figure 3.15. A test sample of the algorithm with a duration of 20 seconds can be found in Figure 3.14, where the estimated orientation angles and gyroscope angles are compared. It is clearly appreciated how the drift is removed in the final estimation.

$$\hat{\Delta}^L_G q_{t_k} = \hat{\Delta}^L_G q_{acc,t_k} \otimes \hat{\Delta}^L_G q_{mag,t_k} \tag{3.83}$$

$$^L_G q_{t_k} = {}^L_G q_{w,t_k} \otimes \hat{\Delta}^L_G q_{t_k} = {}^L_G q_{w,t_k} \otimes \hat{\Delta}^L_G q_{acc,t_k} \otimes \hat{\Delta}^L_G q_{mag,t_k} \tag{3.84}$$
 
$$R\!\left({}^L_G q_{t_k}\right) = \begin{bmatrix} 1 - 2(q_2^2 + q_3^2) & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\[4pt] 2(q_1 q_2 + q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 - q_0 q_1) \\[4pt] 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & 1 - 2(q_1^2 + q_2^2) \end{bmatrix} \tag{3.85}$$

$$\begin{bmatrix} \phi \\[4pt] \theta \\[4pt] \psi \end{bmatrix} = \begin{bmatrix} \tan^{-1}\!\left(\dfrac{R_{32}}{R_{33}}\right) \\[4pt] -\sin^{-1}\!\left(R_{31}\right) \\[4pt] \tan^{-1}\!\left(\dfrac{R_{21}}{R_{11}}\right) \end{bmatrix} = \begin{bmatrix} \tan^{-1}\!\left(\dfrac{2(q_2 q_3 + q_0 q_1)}{1 - 2(q_1^2 + q_2^2)}\right) \\[4pt] -\sin^{-1}\!\left(2(q_1 q_3 - q_0 q_2)\right) \\[4pt] \tan^{-1}\!\left(\dfrac{2(q_1 q_2 + q_0 q_3)}{1 - 2(q_2^2 + q_3^2)}\right) \end{bmatrix} \tag{3.86}$$
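The quaternion-to-Euler conversion of Equation 3.86 translates directly into code. A sketch (hypothetical name `quat_to_euler`) using `arctan2` to recover full-range roll and yaw, with the pitch sign following the ZYX convention assumed here:

```python
import numpy as np

def quat_to_euler(q):
    """Roll, pitch, yaw (rad) from a unit quaternion [q0, q1, q2, q3],
    following Eq. 3.86 (ZYX convention)."""
    q0, q1, q2, q3 = q
    roll = np.arctan2(2.0 * (q2*q3 + q0*q1), 1.0 - 2.0 * (q1**2 + q2**2))
    pitch = -np.arcsin(np.clip(2.0 * (q1*q3 - q0*q2), -1.0, 1.0))
    yaw = np.arctan2(2.0 * (q1*q2 + q0*q3), 1.0 - 2.0 * (q2**2 + q3**2))
    return roll, pitch, yaw
```

The `clip` guards against arguments marginally outside $[-1, 1]$ caused by floating-point round-off in the normalization.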


Figure 3.14: Roll, pitch and yaw angles estimation (colored lines) with quaternion-
based complementary filter compared to gyroscope angle estimation (dashed line).

Figure 3.15: Block diagram of orientation estimation process through quaternions
and complementary filter.

3.1.2.6 Multiplicative Extended Kalman Filter

The Kalman Filter (KF) provides an optimal sensor fusion method for linear systems. The filtering process can be divided into two parts: the prediction or time update and the estimation or measurement update. The prediction step is mainly based on the dynamic model of the sensor body, whereas the measurement step focuses on an estimation algorithm combining prediction and measurements. For nonlinear systems, the Extended Kalman Filter (EKF) offers an approximate solution. The Multiplicative Extended Kalman Filter (MEKF) relies on multiplication rather than addition in the measurement update [42].

Generally, compared to the complementary filter, the MEKF requires more in-depth knowledge about estimation theory. This makes it more complex and harder to implement. Several matrix multiplications are necessary every time step as well, so the computational cost may be high. Nonetheless, it offers improved estimation and robustness against vibrations. The Kalman Filter approach is typically considered a very solid algorithm in the scientific community [43].

Regarding attitude estimation, the state vector $x$ of the MEKF is usually defined by error quaternions, while the gyroscope angle quaternion is propagated separately. Here we follow a similar strategy, but the error vector $a_e = \alpha e$ (with $\|e\| = 1$) is employed instead of the error quaternion (note that $a_e$ does not represent accelerometer data error but the error vector derived from a delta quaternion). In that way, only three state variables are required, instead of the four variables that compose a quaternion. The relationship between a delta quaternion and its corresponding error vector and vice versa is defined as

   
$$\hat{\Delta}^L_G q_{acc}\{\alpha, e\} = \begin{bmatrix} \hat{\Delta}^L_G q_{0acc} \\[4pt] \hat{\Delta}^L_G q_{1acc} \\[4pt] \hat{\Delta}^L_G q_{2acc} \\[4pt] \hat{\Delta}^L_G q_{3acc} \end{bmatrix} = \begin{bmatrix} \cos\!\left(\frac{\|a_e\|}{2}\right) \\[4pt] \frac{a_{xe}}{\|a_e\|}\sin\!\left(\frac{\|a_e\|}{2}\right) \\[4pt] \frac{a_{ye}}{\|a_e\|}\sin\!\left(\frac{\|a_e\|}{2}\right) \\[4pt] \frac{a_{ze}}{\|a_e\|}\sin\!\left(\frac{\|a_e\|}{2}\right) \end{bmatrix} \tag{3.87}$$

$$x = a_e = \alpha e = 2\cos^{-1}\!\left(\hat{\Delta}^L_G q_{0acc}\right) \frac{\begin{bmatrix} \hat{\Delta}^L_G q_{1acc} & \hat{\Delta}^L_G q_{2acc} & \hat{\Delta}^L_G q_{3acc} \end{bmatrix}^T}{\sin\!\left(\cos^{-1}\!\left(\hat{\Delta}^L_G q_{0acc}\right)\right)} \tag{3.88}$$
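The conversions of Equations 3.87 and 3.88 form a mutually inverse pair and can be sketched as follows (hypothetical names; the small-angle guards avoid division by zero for vanishing rotations):

```python
import numpy as np

def error_vector_to_quat(a_e):
    """Delta quaternion from an error (axis-angle) vector a_e = αe (Eq. 3.87)."""
    angle = np.linalg.norm(a_e)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])  # no rotation -> identity
    axis = a_e / angle
    return np.concatenate(([np.cos(angle / 2.0)], axis * np.sin(angle / 2.0)))

def quat_to_error_vector(dq):
    """Error vector from a delta quaternion (Eq. 3.88)."""
    angle = 2.0 * np.arccos(np.clip(dq[0], -1.0, 1.0))
    s = np.sin(angle / 2.0)                    # = sin(cos^-1(Δq0))
    if s < 1e-12:
        return np.zeros(3)
    return angle * dq[1:] / s
```

Round-tripping an error vector through both functions reproduces it exactly, which is what the MEKF relies on when moving between the quaternion and the three-component state representation.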

As with the complementary filter, all the derivations are shown with the accelerometer error quaternion ($\hat{\Delta}^L_G q_{acc}$) as an example, although the magnetometer error quaternion $\hat{\Delta}^L_G q_{mag}$ shows the exact same behavior.

Starting with the prediction step, the dynamic system of equations needs to be defined first. This yields the differential equations of the state space variable $a_e$ and covariance matrix $P$, where $G$ is the cross-correlation matrix, $F$ is the Jacobian and $Q$ is the covariance of the stationary, zero-mean process noise $W(t)$.

$$\text{Standard EKF prediction step} = \begin{cases} \dot{x} = f(x, u, t) + G\, W(t) \\[4pt] \dot{P} = F P + P F^T + G Q G^T \\[4pt] F = \dfrac{\partial f(x, u, t)}{\partial x} \end{cases} \tag{3.89}$$

$$\text{Error vector quaternion MEKF prediction step} = \begin{cases} \dot{a}_e = -\tilde{w} \times a_e + G\, W(t) \\[4pt] \dot{P} = F P + P F^T + G Q G^T \\[4pt] F = \dfrac{\partial f(x, u, t)}{\partial x} = -\hat{w} = \begin{bmatrix} 0 & \tilde{w}_z & -\tilde{w}_y \\ -\tilde{w}_z & 0 & \tilde{w}_x \\ \tilde{w}_y & -\tilde{w}_x & 0 \end{bmatrix} \end{cases} \tag{3.90}$$

Considering that $W(t)$ is zero-mean and that the cross-correlation matrix $G = I$, i.e. the state space variables ($a_{xe}$, $a_{ye}$ and $a_{ze}$) are uncorrelated, the predicted value $\hat{a}_e(-)$ and covariance matrix $P(-)$ can be calculated through integration. At this point, the gyroscope angle quaternion $^L_G q_{w,t_k}$ needs to be updated and normalized concurrently as well.

$$\text{Prediction} = \begin{cases} \hat{a}_e(-) = 0 \\[4pt] P(-) = P + \left(F P + P F^T + Q\right)\Delta t \end{cases} \tag{3.91}$$

$$^L_G\dot{q}_{w,t_k} = -\frac{1}{2}\, {}^L\tilde{w}_{q,t_k} \otimes {}^L_G q_{t_{k-1}} \tag{3.92}$$

$$^L_G q_{w,t_k} = {}^L_G q_{t_{k-1}} + {}^L_G\dot{q}_{w,t_k}\,\Delta t \tag{3.93}$$

$$^L_G q_{w,t_k} = \frac{^L_G q_{w,t_k}}{\left|{}^L_G q_{w,t_k}\right|_2} \tag{3.94}$$

After that, the measurement step must be performed. Once more, the standard EKF and the error vector quaternion MEKF equations are compared. Here $z(k)$ is the raw measurement ($a_e$ obtained directly from $\Delta^L_G q_{acc}$ or $\Delta^L_G q_{mag}$), $H$ is the measurement output matrix (which is the identity matrix because measurements are directly taken from sensor observations), $K$ is the Kalman gain, $R$ is the covariance of the stationary measurement noise $V(t)$, $\hat{a}_e(+)$ is the final estimate of the error vector and $P(+)$ is the final estimate of the covariance matrix.
$$\text{Standard EKF measurement step} = \begin{cases} z(k) = h(x, k) + G\, V(t) \\[4pt] H = \dfrac{\partial h(x, k)}{\partial x} \\[4pt] K = P(-) H^T \left(H P(-) H^T + R\right)^{-1} \\[4pt] \hat{x}(+) = \hat{x}(-) + K\left(z(k) - H\hat{x}(-)\right) \\[4pt] P(+) = \left(I - K H\right) P(-) \end{cases} \tag{3.95}$$

$$\text{Error vector quaternion measurement step} = \begin{cases} a_e = h(x, k) + V(t) \\[4pt] H = \dfrac{\partial h(x, k)}{\partial x} = I \\[4pt] K = P(-)\left(P(-) + R\right)^{-1} \\[4pt] \hat{a}_e(+) = K a_e \\[4pt] P(+) = \left(I - K\right) P(-) \end{cases} \tag{3.96}$$
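With $H = I$, one full MEKF cycle (Equations 3.90, 3.91 and 3.96) fits in a short function. The sketch below uses the hypothetical name `mekf_step`; the quaternion propagation of Equations 3.92-3.94 is kept outside the function, as in the text:

```python
import numpy as np

def mekf_step(P, w, z, Q, R, dt):
    """One MEKF cycle on the 3-component error vector state.
    `w` is the gyro rate (rad/s), `z` the error vector measured from the
    accelerometer or magnetometer delta quaternion."""
    # Prediction (Eq. 3.91): the predicted error vector resets to zero;
    # only the covariance is propagated with F = -[w]x (Eq. 3.90)
    F = np.array([[0.0,   w[2], -w[1]],
                  [-w[2], 0.0,   w[0]],
                  [w[1], -w[0],  0.0]])
    P = P + (F @ P + P @ F.T + Q) * dt
    # Measurement update (Eq. 3.96): H = I simplifies the gain and estimates
    K = P @ np.linalg.inv(P + R)
    a_e = K @ z                        # final error-vector estimate â_e(+)
    P = (np.eye(3) - K) @ P            # final covariance estimate P(+)
    return a_e, P
```

The behavior matches the intuition behind the gain: with a very small $R$ the update trusts the measurement ($\hat{a}_e(+) \approx z$), while with a very large $R$ the correction vanishes.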

Apart from that, an adaptive Kalman Filter strategy has been designed to account for the estimation errors due to incorrectly assumed references. Basically, the goal is to take advantage of an approach similar to the one used with the complementary filter. In this case, the measurement noise model gets adapted instead of adapting filter gains directly. When the sensors are not at steady state, the accelerometer measurement magnitude can actually be quite different from the real gravity value, and the assumption in Equation 3.59 then produces additional estimation errors. Hence, an adaptive measurement noise covariance matrix $R$ is defined in order to give a higher or lower trust level to the measurements. A vector $k$ with tunable gains is used to find the optimal $R$ matrix from the static and initial measurement noise matrix $R_0$.

$$R = R_0\left(1 + k\, e_g\right) \tag{3.97}$$

Here $e_g$ is declared in Equation 3.80 as the gravity error magnitude and represents the deviation of the accelerometer measurement norm from the global true gravity vector $^G g$ with unit norm ($\|^G g\| = 1\,g$). Empirically, the $k_{acc}$ and $k_{mag}$ vectors have been set as a result of a fine tuning process. In essence, if $e_g \approx 0$, the MEKF relies mostly on the measurements, whereas the predictions become increasingly reliable and certain as $e_g$ grows.

$$k_{acc} = k_{mag} = \begin{bmatrix} 10^4 & 10^4 & 10^4 \end{bmatrix} \tag{3.98}$$

Even though the approach might look theoretically straightforward, in practice one of the most challenging parts of Kalman Filter modeling is to correctly characterize the process and sensor noise covariance matrices ($Q$ and $R$, respectively). Thanks to the sensor characterization procedure described in Section 2.2, the measurement noise covariance matrix can already be defined as uncorrelated, with variances $\sigma^2_{acc}$, $\sigma^2_{gyro}$ and $\sigma^2_{mag}$ (see Equation 2.7). Regarding the covariance matrix $Q$, uncorrelated noise has also been assumed, but the variances have been tuned empirically. In the experiments it has been observed that the MEKF does a great job if there is more uncertainty in the measurements and more trust in the predictions. Therefore, the variances of the process noise have been defined as two orders of magnitude lower than the corresponding measurement noise variances. This increases the uncertainty of the measurements while making the predictions more trustworthy.

 
$$R_{0,acc} = \begin{bmatrix} \sigma^2_{x,acc} & 0 & 0 \\ 0 & \sigma^2_{y,acc} & 0 \\ 0 & 0 & \sigma^2_{z,acc} \end{bmatrix} \qquad Q_{acc} = \frac{R_{0,acc}}{100} \tag{3.99}$$

$$R_{0,mag} = \begin{bmatrix} \sigma^2_{x,mag} & 0 & 0 \\ 0 & \sigma^2_{y,mag} & 0 \\ 0 & 0 & \sigma^2_{z,mag} \end{bmatrix} \qquad Q_{mag} = \frac{R_{0,mag}}{100} \tag{3.100}$$

After the final corrected error vectors $a_{e,acc}$ and $a_{e,mag}$ have been estimated by means of the Multiplicative Extended Kalman Filter, Equation 3.87 is used to get the respective corrected error quaternions $\hat{\Delta}^L_G q_{acc}$ and $\hat{\Delta}^L_G q_{mag}$. To finish, the gyroscope angle quaternion is corrected and converted to Euler angles (through Equations 3.83, 3.84, 3.85 and 3.86) in the same way as with the complementary filter. Below, Figure 3.16 illustrates the angle estimation results obtained through the quaternion-based MEKF, using the mentioned tunable values.

Figure 3.16: Roll, pitch and yaw angles estimation (colored lines) with quaternion-based Multiplicative Extended Kalman Filter compared to gyroscope angle estimation (dashed line).

3.1.3 Comparison

Backed by a working implementation of the methods and theoretical conclusions, orientation angle estimation through Euler angles, the quaternion-based complementary filter and the quaternion-based Multiplicative Extended Kalman Filter has been compared. The chosen method will then be used to fuse the signals measured by the gyroscope, accelerometer and magnetometer. In order to help make the best decision, the results of each approach using the same raw data have been plotted together to see if any significant differences exist. This can be found hereafter in Figure 3.17.


Figure 3.17: Comparison of the three analyzed orientation estimation approaches:


Euler angles (green), quaternion-based complementary filter (red) and quaternion-
based Multiplicative Extended Kalman Filter (blue).

First of all, a clear distinction needs to be made between Euler angles estimation and quaternion-based estimation. Even just looking at the results, it is evident that the former contains more estimation errors, especially in the yaw angle. That is because the computation of yaw is dependent on the roll and pitch angles. Thus, most of the big errors in yaw estimation can be primarily identified when roll and pitch angle estimation errors happen.

Besides being less human-understandable and having to make sure that unit quaternions are preserved, quaternion estimation offers a more complete solution. Orientation changes are described with a single rotation in the axis-angle representation, unlike the Euler angles method, which requires three elemental rotations and therefore more computational resources. Moreover, ambiguities and singularities such as gimbal lock are avoided. Adaptive filtering strategies implemented with quaternions also reduce errors.

Focusing on the complementary filter and the MEKF with error quaternions, there is a clear tradeoff between accuracy and cost. It seems that the MEKF can do slightly better than the complementary filter, although the difference is not really obvious in this case. Robustness against noise is also a factor in favor of the Kalman Filter, while the complementary filter is more responsive to aggressive sensor movements. Finally, the MEKF requires more matrix-wise operations due to the state space formulation, which demands additional memory and computational resource allocation. Hereafter, Figure 3.18 represents the comparison between the quaternion-based complementary filter and the quaternion-based MEKF.

Figure 3.18: Comparison of the quaternion-based complementary filter (red) and


quaternion-based Multiplicative Extended Kalman Filter (blue).

Considering the target application of this work, that is, a smart and portable headset with IMU sensors capable of detecting human behavioral movements, it has been decided that the quaternion-based complementary filter is more suitable than the quaternion-based MEKF. Basically, it is assumed that the accuracy of the complementary filter method is enough for the proposed task. Additionally, this approach fits better into an embedded system thanks to its simplicity and cheap implementation.

3.2 Position Estimation

Unfortunately, it has been observed that orientation angles by themselves are not enough to perform a successful detection and classification of the stand up, sit down and walking behaviors, even though certain patterns have been identified in the pitch angles. Many people tend to move the head a little bit up and down when sitting down or standing up. Using that information, several movements have been classified correctly. Nonetheless, not every subject has that characteristic, and when head movements are noisy it is very easy to classify a false positive. Thus, angles alone cannot be employed as the unique features of the classification strategy. For that reason, position has also been estimated as an additional feature to orientation. The most common approach to estimate position is to apply double integration to linear acceleration [44], [45], extracted from accelerometer data. Therefore, linear acceleration needs to be estimated first.

3.2.1 Linear Acceleration Estimation

Considering that the measured acceleration in the local coordinate frame $^L\tilde{a}$ is composed of the gravitational acceleration $^L a^{(g)}$, the linear acceleration $^L a^{(l)}$ and the measurement noise $^L\varphi$ (recall Equation 2.1), it is possible to compute (noisy) linear acceleration after subtracting the gravity component.

$$^L\tilde{a} = {}^L a^{(g)} + {}^L a^{(l)} + {}^L\varphi \tag{3.101}$$

$$^L\tilde{a} - {}^L a^{(g)} = {}^L a^{(l)} + {}^L\varphi \approx {}^L a^{(l)} \tag{3.102}$$

However, the IMU might be at a different orientation at any given time, so there is no way to know how the gravitational acceleration is distributed along the X, Y and Z axes in the local coordinate frame. As a result, the estimated orientation quaternion must be used to rotate from the local reference system to the global reference system. After that, $1\,g$ has to be subtracted from the global Z axis to finally obtain the linear acceleration. Figure 3.19 shows that there is no linear acceleration at stationary conditions ($a^{(l)} = 0$).

$$^G\tilde{a} = R\!\left({}^L_G q^{\,*}_{t_k}\right) {}^L\tilde{a} \;\rightarrow\; {}^G\tilde{a}_q = {}^L_G q^{\,*}_{t_k} \otimes {}^L\tilde{a}_q \otimes {}^L_G q_{t_k} \tag{3.103}$$

   
$$^G\tilde{a}^{(l)} = \begin{bmatrix} {}^G\tilde{a}^{(l)}_x \\[4pt] {}^G\tilde{a}^{(l)}_y \\[4pt] {}^G\tilde{a}^{(l)}_z \end{bmatrix} = \begin{bmatrix} {}^G\tilde{a}_x \\[4pt] {}^G\tilde{a}_y \\[4pt] {}^G\tilde{a}_z - 1 \end{bmatrix} \tag{3.104}$$
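The gravity-removal step of Equations 3.103 and 3.104 amounts to one quaternion rotation and one subtraction. A sketch in normalized units of $g$ (the names are hypothetical and quaternions are stored as `[w, x, y, z]`):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p ⊗ q for quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def linear_acceleration(a_local, q_LG):
    """Rotate a local accelerometer sample (in g) into the global frame
    with the estimated orientation quaternion, then subtract gravity.
    The rotation follows Eq. 3.103: q* ⊗ a ⊗ q."""
    a_q = np.concatenate(([0.0], a_local))
    q_conj = q_LG * np.array([1.0, -1.0, -1.0, -1.0])   # quaternion conjugate
    a_global = quat_mul(quat_mul(q_conj, a_q), q_LG)[1:]
    a_global[2] -= 1.0                                   # remove 1 g on global Z (Eq. 3.104)
    return a_global
```

At rest in any orientation the function returns a zero vector, which is exactly the stationary behavior shown in Figure 3.19.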


Figure 3.19: Linear acceleration estimation at rest (stationary conditions).

Note that measurement noise is still present in the estimation. In consequence, double integration is going to generate an exponentially increasing drift. In Section 3.1, several sensor fusion techniques were employed to remove the drift in the orientation angles resulting from single integration of the gyroscope data. Nevertheless, there are no known similar fusion algorithms to eliminate velocity and position drift. That means that a double integration stage on the raw linear acceleration gives extremely inaccurate velocity and position estimates. For example, position estimates may very well accumulate a drift error of ±10 meters after 5 seconds [46]. In order to avoid that, various techniques can be applied to increase estimation reliability.

First of all, the linear acceleration signal can be low-pass filtered to reduce some high-frequency noise. In this case, our solution implements a moving average filter, where the linear acceleration is downsampled from 100 Hz to 25 Hz, taking the average of the last 4 samples every time step and obtaining a filtered linear acceleration estimate $^G\hat{a}^{(l)}$. What is more, when the absolute value of $^G\hat{a}^{(l)}$ is still smaller than ten times the standard deviation $\sigma_{acc}$, the linear acceleration is automatically set to zero. That fix helps to avoid estimating a non-zero velocity when the IMU is at rest. An illustration of the filtered linear acceleration when a person is standing up can be found in Figure 3.20.

$$^G\hat{a}^{(l)} = \begin{cases} 0, & |{}^G\hat{a}^{(l)}| < 10\,\sigma_{acc} \\[4pt] {}^G\hat{a}^{(l)}, & |{}^G\hat{a}^{(l)}| \geq 10\,\sigma_{acc} \end{cases} \tag{3.105}$$

Figure 3.20: Comparison of filtered linear accelerometer estimation (colored lines)


and raw accelerometer estimation (dashed lines) when standing up (around t = 12s).

3.2.2 Velocity and Position Estimation

Next, the filtered linear acceleration is integrated and the linear velocity $^G\hat{v}^{(l)}$ is estimated. As with the gyroscope integration (see Equation 2.5), this is carried out every time step $\Delta t$ (where now $\Delta t = \frac{1}{25}$ seconds). Due to remaining noise in the filtered linear acceleration, the velocity might not get to zero at rest after integration. Hence, whenever the filtered linear acceleration is zero for a period of 5 samples, the velocity is also set to zero. Hereafter, Figure 3.21 represents the velocity estimation of the data shown in Figure 3.20 when a person is standing up. Note that the velocity is forced to zero after integration.

G v̂(l)[t + ∆t] = G v̂(l)[t] + G â(l)[t] ∆t                           (3.106)


Figure 3.21: Velocity estimation when standing up (around t = 12s).

Finally, position G p̂(l) is computed by integrating estimated linear velocity. Po-

sition estimation from linear velocity obtained in Figure 3.21 has been included in

Figure 3.22. It is essential to remark that these estimates contain double-integration drift and therefore cannot be employed as true absolute values of

velocity and position. However, the relative information of such estimations can be

very useful when having to detect movement events such as sitting down, standing

up and walking. The detection algorithm of our work is thoroughly explained in

Section 4.

G p̂(l)[t + ∆t] = G p̂(l)[t] + G v̂(l)[t] ∆t                           (3.107)
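The two integration steps together with the zero-velocity fix can be sketched as follows. This is a per-axis sketch; the `zupt_len` name and the loop structure are our assumptions, not the thesis' implementation:

```python
import numpy as np

DT = 1.0 / 25.0  # time step at 25 Hz

def integrate_motion(a_filt, dt=DT, zupt_len=5):
    """Integrate filtered linear acceleration into velocity and position
    (Equations 3.106 and 3.107), forcing velocity to zero whenever the
    filtered acceleration has been zero for `zupt_len` consecutive samples."""
    v = np.zeros_like(a_filt)
    p = np.zeros_like(a_filt)
    zeros_run = 0
    for k in range(1, len(a_filt)):
        v[k] = v[k - 1] + a_filt[k - 1] * dt
        zeros_run = zeros_run + 1 if a_filt[k] == 0.0 else 0
        if zeros_run >= zupt_len:
            v[k] = 0.0                     # zero-velocity update at rest
        p[k] = p[k - 1] + v[k - 1] * dt
    return v, p
```

The zero-velocity update is what keeps the drift bounded between movement events, even though the absolute position still accumulates error during each event.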


Figure 3.22: Position estimation when standing up (around t = 12s).

4. Pattern Recognition

Having derived precise orientation angles and approximate linear acceleration, ve-

locity and position, the relevance and contribution of each of these features to be-

havioral movements have been analyzed. Specifically, the final goal has been to

determine different events and identify the transition among the following states:

standing up, sitting down and walking.

For that purpose, the main focus has been the development of a robust detection algorithm. As classification only takes place among three different patterns, pattern recognition itself is not the hardest stage. The toughest part is to identify standing up, sitting down and walking events while discarding random head

movements and noise. Our solution has been to look at the rate of change of the

estimated position along the Z axis to observe if any of the events happened, which

is actually equivalent to looking at the vertical estimated linear velocity. Regarding

the classification algorithm, traditional techniques and metrics such as template

matching with correlation and Euclidean distance turned out to be very effective as

long as an accurate detection algorithm is implemented.

4.1 Event Detection Algorithm

In essence, the event detection algorithm consists of computing the accumulated velocity over the last 2 seconds (called the detection signal) and performing peak detection

on the signal using the short-term standard deviation σST as a threshold. For the

accumulated velocity, the last 50 linear velocity samples are summed up because it

has been observed that it takes about two seconds (which is equal to 50 samples at

25 Hz) to stand up or sit down. It is believed that this signal is representative of

the vertical head movements when a person is wearing a headset.

Apart from that, looking at the short-term standard deviation is a good way

to detect rounded peaks in a varying real-time signal. The distribution formed by the last 10 samples (or 0.4 seconds at 25 Hz) is utilized to calculate σST in this case. That means that the threshold becomes adaptive and depends on the last few estimated

samples. Then, it is said that an event has been detected if the detection signal is

greater than kσST , where k = 40 has been set after empirical analysis in our work.

Figure 4.1 below illustrates the detection signal while a person is standing up and

the red dot represents the detection of the event.


Figure 4.1: Detection signal (in blue) and k = 40 times the short-term standard
deviation kσST (dashed line) when standing up (around t = 14s). The event has
been correctly detected at the red dot.

Even though this method perfectly detects standing up and sitting down events

when the user is not moving the head randomly, the algorithm becomes prone to

false positives once neck rotations come into play. Therefore, a minimum threshold

thmin has been calculated empirically. Whenever the detection signal is greater than

thmin = 16, an event is detected as long as the kσST condition is satisfied as well. Noise and

person’s natural head movements are mostly discarded through this thresholding

strategy.

Besides, it has been noted that the detection signal increases linearly if the

person wearing the headset is walking. It is assumed that this phenomenon is due

to how velocity is approximated through integration, because the estimated velocity

turns out to be always non-zero and positive after each step. In consequence, a

maximum threshold thmax = 100 has been set by empirical analysis as well. In this way, the walking event can be detected when the detection signal exceeds thmax after

2-3 seconds of continuous walking.
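The thresholding logic described in this section can be sketched as follows, using the values quoted above (k = 40, thmin = 16, thmax = 100, a 50-sample accumulation window and a 10-sample short-term window). The function name and the per-sample labeling scheme are assumptions:

```python
import numpy as np

K = 40          # adaptive-threshold multiplier (empirical, Section 4.1)
TH_MIN = 16     # minimum threshold discarding noise
TH_MAX = 100    # above this level the event is treated as walking
WIN_ACC = 50    # 2 s of velocity samples at 25 Hz
WIN_STD = 10    # samples used for the short-term standard deviation

def detect_events(vz):
    """Label each sample of vertical linear velocity `vz` as 'none',
    'event' (candidate stand-up/sit-down) or 'walking'."""
    labels = []
    for k in range(len(vz)):
        det = np.sum(vz[max(0, k - WIN_ACC + 1):k + 1])      # accumulated velocity
        sigma_st = np.std(vz[max(0, k - WIN_STD + 1):k + 1])  # short-term std
        if det > TH_MAX:
            labels.append('walking')
        elif det > TH_MIN and det > K * sigma_st:
            labels.append('event')
        else:
            labels.append('none')
    return labels
```

Only samples labeled 'event' would be passed on to the classification algorithm of Section 4.2.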

It is worth mentioning that only the events detected between thmin and thmax

will go through the classification algorithm and pattern recognition phase afterwards,

while all the events above thmax are attributed to walking and below thmin no events

are detected. Thanks to this strategy, very few false positives are detected in total.

Experimental and numerical results are later covered in Section 5 and in fact, they

confirm the statements made in this section.

Hereafter, a sample case of all possible events has been included in Figure 4.2.

The sequence of events shown is defined by the person first standing up (around t = 3.5 seconds), secondly walking (around t = 7.5 seconds), then sitting down (around t = 13.5 seconds) and finally moving the head to simulate noisy cases (around t = 18 seconds). It can be clearly seen how the first three events are perfectly detected, whereas the random head movement is neglected by the detection algorithm.

Sometimes head movements are neglected because the detection signal is below thmin

and other times they are discarded because the signal does not exceed kσST . In

conclusion, it can be said that a proper and precise event detection algorithm has

been developed from which the classification algorithm can take advantage in the

next stage.


Figure 4.2: Detection signal (in blue), k = 40 times the short-term standard de-
viation kσST (dashed gray line), minimum threshold thmin = 16 (dashed red line)
and maximum threshold thmax = 100 (dashed green line) with standing up (around
t = 3.5s), walking (around t = 7.5s), sitting down (around t = 13.5s) and head
movement (around t = 18s) movement events. Events have been correctly detected
and head movement has been successfully neglected.

4.2 Classification Algorithm

The classification algorithm takes care of performing pattern recognition from de-

tected events in the detection algorithm. These events typically correspond to stand-

ing up or sitting down movements, although strong random head movements might

also get to this stage. Walking events are never classified by this algorithm, as the

event detection algorithm alone can automatically do that job. Thus, any event

introduced to the classification algorithm can only have three possible outcomes:

standing up, sitting down or noisy movement. Because only a few patterns need to be distinguished, simple and computationally cheap traditional pattern recognition techniques suffice for successful classification. In our solution, vertical linear velocity

(along Z axis) of detected events has been employed for pattern representation.

Basically a database with three templates is created (one for each event type) and

these are then aligned with the detected event or test event through cross-correlation

[47] in order to maximize the similarity among them. After that, the Euclidean distance

between the aligned test event and patterns is computed. Lastly, the lowest distance determines the matched template and the outcome is provided [48].

It is necessary to mention that the templates have been recorded under controlled

environments and circumstances so that the inherent nature of the movements could

be captured. The more representative a template is, the higher the probability of

successful classification becomes. The templates of standing up, sitting down and

noisy movement can be found in Figure 4.3, Figure 4.4 and Figure 4.5, respectively.

Note that each template lasts three seconds and that the patterns have been scaled to the [−1, 1] range to account for the different vertical linear velocity intensities of the movements. That means that the shape of the vertical linear velocity is mainly compared rather than its amplitude.
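The thesis does not spell out the exact scaling; one plausible linear mapping onto [−1, 1] is:

```python
import numpy as np

def scale_to_unit_range(x):
    """Map a signal linearly onto [-1, 1] so that templates and test
    events are compared by shape rather than amplitude (one plausible
    normalization; the exact mapping is an assumption)."""
    lo, hi = x.min(), x.max()
    if hi == lo:                      # constant signal: map to zeros
        return np.zeros_like(x)
    return 2.0 * (x - lo) / (hi - lo) - 1.0
```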


Figure 4.3: Template of the standing up pattern represented with the scaled vertical
linear velocity (along Z axis).


Figure 4.4: Template of the sitting down pattern represented with the scaled vertical
linear velocity (along Z axis).


Figure 4.5: Template of the noisy movement pattern represented with the scaled
vertical linear velocity (along Z axis).

In the case of the pattern for the noisy movement, a random movement similar

to a walking pattern has been employed. This pattern has been selected because

it is sufficiently different from standing up and sitting down patterns and therefore

anything that does not match these two movements will match the noisy movement template.

Once an event recognized by the detection algorithm is fed to the classification algorithm, the test signal is directly defined as the three seconds of scaled vertical linear velocity around the detection time instant. As an example,

Figure 4.6 shows the test signal defined from a sitting down event in the detection

algorithm.


Figure 4.6: Test signal of a sitting down event represented with the scaled vertical
linear velocity (along Z axis).

As a next step, a cross-correlation is carried out between each template and

the test signal. Thanks to this, both signals can be aligned so that the similarity

between them is maximized. Cross-correlation Rxy simply consists of computing the

correlation coefficient (defined in the range [−1, 1]) at a lag of m samples between two signals x and y, where m = −(N − 1), ..., N − 1 (2N − 1 lags in total), N being the length of the signals.

Rxy(m) = E{x_{n+m} y_n*} = E{x_n y_{n−m}*},     −∞ < n < ∞          (4.1)

         { Σ_{n=0}^{N−m−1} x_{n+m} y_n*,   m ≥ 0
Rxy(m) = {                                                           (4.2)
         { R_yx*(−m),                      m < 0

Here ∗ indicates complex conjugation and E { } is the expected value operator. If

a cross-correlation between two identical signals is calculated, then this is denoted

as auto-correlation and the maximum value happens to be at m = 0 (Rxy (0) = 1).

Therefore, the optimal lag mopt that maximizes the similarity between x and y can be derived from the cross-correlation as the lag at which the highest correlation value is obtained, i.e. Rxy(mopt) = 1 when the aligned signals are identical.
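The alignment step can be sketched with NumPy's cross-correlation. The circular shift used here for realignment is a simplification of whatever boundary handling the original implementation uses:

```python
import numpy as np

def align_to_template(template, test):
    """Find the lag that maximizes the cross-correlation between a
    template and a test signal, then shift the test signal accordingly."""
    r = np.correlate(test, template, mode='full')   # lags -(N-1) .. N-1
    lag = np.argmax(r) - (len(template) - 1)        # optimal lag m_opt
    aligned = np.roll(test, -lag)                   # circular shift for simplicity
    return aligned, lag
```

After this step the aligned test signal and the template can be compared sample by sample with the Euclidean distance.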

After the test signal is shifted by mopt samples, the Euclidean distance deucl. between the now aligned test signal and the template is calculated (see Equation 4.3 below for a generalized expression of the Euclidean distance). Finally, all the computed distance metrics are compared and the template with the smallest distance value is defined to be the matched template and hence the output of the classification algorithm. Other

distance calculation methods have been tried out, such as Dynamic Time Warping (DTW) [49], but it has been observed that they only increase the computational

cost and do not make a difference in the template matching. Hereafter, test signals

from the detected events in Figure 4.2 have been aligned with all the three templates

to compute the euclidean distance in Figure 4.7 and Figure 4.8. Both events are

classified correctly, as Figure 4.7 recognizes a standing up movement while Figure

4.8 is matched with a sitting down event.

deucl.(x, y) = √((x_1 − y_1)² + (x_2 − y_2)² + ... + (x_N − y_N)²)
             = √( Σ_{i=1}^{N} (x_i − y_i)² )                         (4.3)
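The final matching step can be sketched as follows; the template names and the dictionary layout are assumptions:

```python
import numpy as np

def classify_event(test_signal, templates):
    """Match an aligned test signal against named templates using the
    Euclidean distance of Equation 4.3; the closest template wins."""
    distances = {name: np.sqrt(np.sum((test_signal - tpl) ** 2))
                 for name, tpl in templates.items()}
    return min(distances, key=distances.get), distances
```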

[Figure 4.7 plot: three panels of scaled amplitude vs t (sec), comparing the aligned test signal against the standing up pattern (d = 6e−01), the sitting down pattern (d = 7e+00) and the noisy movement pattern (d = 7e+00).]

Figure 4.7: Pattern recognition of a standing up event through template matching


between aligned test signal and template patterns using euclidean distance (d in the
plots).

[Figure 4.8 plot: three panels of scaled amplitude vs t (sec), comparing the aligned test signal against the standing up pattern (d = 2e+01), the sitting down pattern (d = 7e+00) and the noisy movement pattern (d = 3e+01).]

Figure 4.8: Pattern recognition of a sitting down event through template matching
between aligned test signal and template patterns using euclidean distance (d in the
plots).

In summary, it can be underlined that a robust pattern recognition implemen-

tation has been achieved, thanks to the event detection algorithm and the classifi-

cation algorithm. Walking events are directly classified upon detection, most noisy

movements are properly treated throughout the process and the distinction between

standing up and sitting down is efficiently performed as well.

5. Tests and Results

Recordings of different subjects have been made to see if our work can correctly

track behavioral events regardless of the person wearing the headset and verify the

accuracy of the proposed solution. In total, tests with 16 people have been carried

out, where each subject was recorded three times, each with different activity events.

Of course, every subject was told to act naturally and in a relaxed manner. All

the tests included walking, standing up, sitting down and noisy head movement

events. What is more, some of the subjects even moved the head quite aggressively

in some cases.

When running the tests and obtaining the results, different statistical parameters have been observed. This allows for a more exhaustive modeling of the performance of this solution. For that, signal detection theory principles have been

studied and applied to evaluate the effectiveness of the event detection algorithm

[50]. Given that the occurrence and non-occurrence of an event depend on various

factors, both cases follow a certain probability distribution (see Figure 5.1). As both

distributions can overlap due to conflicting edge cases, there are different outcomes

that a detection algorithm might output.

Figure 5.1: Illustration with probability distributions of an event happening and not
happening.

According to signal detection theory, there are four possible outcomes in any

detection scheme when the detection decision is based on a certain criterion or

threshold. First of all, if the algorithm detects an event that actually happened,

it is said that there was a hit or true positive detection. On the contrary, if the

event was not detected, there was a missed detection or false negative. Moreover,

when artifacts are intentionally not detected, a correct rejection or true negative

detection is carried out. Lastly, a false alarm or false positive detection takes

place when the algorithm is not able to neglect the fake event. Figure 5.2 illustrates

all the four different possible outcomes from a probability distribution standpoint.

Figure 5.2: Illustration of hit (true positive), missed detection (false negative), cor-
rect rejection (true negative) and false alarm (false positive) in signal detection
theory.

In the case of the classification algorithm, two possible outcomes may be distinguished: correct match and incorrect match. Recall that this only applies to detected standing up, sitting down and noisy head movement events, whereas walking

event pattern classification corresponds to the detection algorithm. Hereafter, the

statistical results computed from the tests have been analyzed. Whereas Table 5.1

includes the distribution of the results in the event detection algorithm, the success

rate of the classification algorithm can be found in Table 5.2.

    Outcome type                                                Results distribution
    Event happening        Hits (true positives)                        89 %
                           Missed detections (false negatives)          11 %
    Event not happening    Correct rejections (true negatives)          94 %
                           False alarms (false positives)                6 %

Table 5.1: Test result distribution showing the event detection algorithm performance.

    Outcome type                           Results distribution
    Event happening     Correct match              92 %
                        Incorrect match             8 %

Table 5.2: Test result distribution showing the classification algorithm performance.
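The percentages in both tables follow directly from the raw outcome counts; a minimal sketch (the counts in the test below are hypothetical, chosen only to reproduce the table's percentages):

```python
def detection_rates(tp, fn, tn, fp):
    """Convert raw signal-detection outcome counts into the hit and
    false-alarm percentages of the kind reported in Table 5.1."""
    hit_rate = 100.0 * tp / (tp + fn)           # share of real events detected
    false_alarm_rate = 100.0 * fp / (fp + tn)   # share of non-events flagged
    return hit_rate, false_alarm_rate
```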

As can be observed, there are a few cases in which the solution fails. Sometimes the detection signal is not high enough during events and that is why a few of them are

not detected. Reducing the threshold would only increase false alarms, so it becomes

a trade-off between true and false positives. Furthermore, it can be said that noisy

head movements and fake events are effectively discarded. Regarding the template

matching, a reasonable success rate has been obtained, i.e. 92 %. The remaining

8 % was not matched correctly primarily because the estimation of vertical linear

velocity was not accurate enough. In addition to Table 5.1 and Table 5.2 results,

walking event detection and classification success rate has been seen to be close to

99 %. Overall, a solution with robust detection and classification stages has been

achieved.

6. Conclusions and Future Lines

To finish, a summary of the most fundamental concepts, potential future lines and

principal conclusions of the work has been included in this section. Essentially, a

prototype of a headset with an IMU has been built and raw sensor data has been

thoroughly analyzed and processed.

Different orientation estimation algorithms have been compared and the quaternion-

based complementary filter has been selected because it computes accurate enough

estimations for the planned task at a considerably lower computational cost than

other approaches, such as Euler angles or Multiplicative Extended Kalman Filter

estimation. Furthermore, efficient sensor fusion techniques have been followed to

correct orientation drift. Even so, it has been concluded that orientation

alone is not enough to detect and classify movements. With regards to linear ac-

celeration, velocity and position, approximated estimations have been obtained in

order to reinforce detection and classification stages. With that information, reliable

and efficient pattern recognition has been performed. Finally, satisfactory results

have been obtained with the proposed solution in the tests using different subjects.

Besides, there are still some aspects that could be improved. First, it

is believed that there is still room for improvement in the velocity and position

estimation algorithms. What is more, it is expected that calculating more exact

velocity estimates would really help to reduce the incorrect match statistics, as

movement patterns would become cleaner and more representative. On the other

hand, making the orientation estimation algorithm more accurate would not bring

relevant advantages as a precise enough method has already been achieved to rotate

measurements from local to global coordinate frame.

Moreover, it has been observed that much of the work in the literature has used

labeled data and trained supervised machine learning algorithms, e.g. Neural Net-

works, Deep Learning, etc. It would be interesting to see whether these approaches

have certain aspects that might be merged with the proposed solution.

A kinematic model of the human body could also be built so that artificial data can be created. This would allow further analysis of the detection and classification

algorithms. Combining real and artificial data, resulting statistics would be richer

as well.

In addition, some hardware changes could be made to the prototype and a more

appealing end device could be built. For instance, new sensors such as small cameras,

infrared sensors or GPS could highly contribute to a better estimation of the position

and would definitely help to strengthen the current implementation. In terms of

communication, an interesting future line would be to build a wireless prototype or

end device in the headset that can communicate via Bluetooth Low Energy.

Basically, such small enhancements would probably make it possible to add more behavioral

events or activities to the solution. For example, movements such as walking up and

down stairs could be detected and classified. Running states could be analyzed as

well. Another attractive idea would be to track human posture while sitting down,

by means of orientation angles. All this would contribute to a better tracking of

human daily activities.

In conclusion, even though certain enhancements might be performed in order

to increase the performance and improve results, it can be said that an efficient and

robust solution has been developed and that the main initial objectives have been

successfully fulfilled.

References

[1] V. Chiang, K.-H. Lo, and K.-S. Choi, “Rehabilitation of activities of daily
living in virtual environments with intuitive user interface and force feedback”,
vol. 12, pp. 1–9, Oct. 2016.

[2] J. Millán Calenti, J. Tubı́o, S. Pita-Fernández, I. González Abraldes, T. Lorenzo,


T. Fernández-Arruty, and A. Maseda, “Prevalence of functional disability in
activities of daily living (adl), instrumental activities of daily living (iadl) and
associated factors, as predictors of morbidity and mortality”, vol. 50, pp. 306–
10, Jun. 2009.

[3] M. Kok, J. D. Hol, and T. B. Schön, “Using inertial sensors for position and orientation estimation”, Apr. 2017.

[4] O. Sprdlı́k, “Detection and estimation of human movement using inertial sen-
sors: Applications in neurology”, 2012.

[5] B. Yu, T. Bao, D. Zhang, W. Carender, K. Sienko, and P. Shull, “Determining


inertial measurement unit placement for estimating human trunk sway while
standing, walking and running”, vol. 2015, pp. 4651–4654, Aug. 2015.

[6] H. Nguyen, K. Lebel, S. Bogard, E. Goubault, P. Boissy, Q. Clinicians, and C.


Duval, “Using inertial sensors to automatically detect and segment activities
of daily living in people with parkinson’s disease”, vol. PP, pp. 1–1, Aug. 2017.

[7] R. Soangra and T. E. Lockhart, “A comparative study for performance evaluation of sit-to-stand task with body worn sensor and existing laboratory methods”, 48:407–414, 2012.

[8] F. Attal, S. Mohammed, M. Dedabrishvili, F. Chamroukhi, L. Oukhellou, and Y. Amirat, “Physical human activity recognition using wearable sensors”, 2015.

[9] S. Asht and R. Dass, “Pattern recognition techniques: A review”, vol. 3,


2012.

[10] A. Mannini and A. M. Sabatini, “Machine learning methods for classifying


human physical activity from on-body accelerometers”, 2010.

[11] I. M. Pires, N. M. Garcia, N. Pombo, F. Flórez-Revuelta, and S. Spinsante, “Pattern recognition techniques for the identification of activities of daily living using mobile device accelerometer”, 2013.

[12] W Geiger, J Bartholomeyczik, U Breng, W Gutmann, M Hafen, E Handrich, M


Huber, A Jackle, U Kempfer, H Kopmann, J Kunz, P Leinfelder, R Ohmberger,
U Probst, M Ruf, G Spahlinger, A Rasch, J Straub-Kalthoff, M. Stroda, and
S Zimmermann, “Mems imu for ahrs applications”, pp. 225 –231, Jun. 2008.

[13] W. D. C. for Geomagnetism. (2015). International geomagnetic reference field


(igrf-12), [Online]. Available: http://wdc.kugi.kyoto-u.ac.jp/igrf/.

[14] N. Kj, A Sreejith, J. Mathew, M. Sarpotdar, A. Suresh, A. Prakash, M. Sa-


fonova, and J. Murthy, “Noise modeling and analysis of an imu-based attitude
sensor: Improvement of performance by filtering and sensor fusion”, Jul. 2016.

[15] Y. Zhao, M. Horemuz, and L. E. Sjöberg, “Stochastic modelling and analysis of IMU sensor errors”, Archives of Photogrammetry, Cartography and Remote Sensing, vol. 22, pp. 437–449, Dec. 2011.

[16] N. El-Sheimy, H. Hou, and X. Niu, “Analysis and modeling of inertial sensors
using allan variance”, vol. 57, pp. 140 –149, Feb. 2008.

[17] V. Vukmirica, I. Trajkovski, and N. Asanović, “Two methods for the determi-
nation of inertial sensor parameters”, Apr. 2018.

[18] F. Semiconductor. (2015). High-precision calibration of a three-axis accelerom-
eter. Application Note 4399, [Online]. Available: http://cache.freescale.
com/files/sensors/doc/app_note/AN4399.pdf.

[19] F. Semiconductor. (2015). Calibrating an ecompass in the presence of hard-


and soft-iron interference. Application Note 4246, [Online]. Available: https:
//www.nxp.com/docs/en/application-note/AN4246.pdf.

[20] L. Ljung, System Identification: Theory For The User. Dec. 1999.

[21] Sparkfun. (2016). Mpu-9250 hookup guide, [Online]. Available: https : / /


learn . sparkfun . com / tutorials / mpu - 9250 - hookup - guide ? _ga = 2 .
10569713.436974227.1524011107-1454688882.1517868149.

[22] DFRobot. (2017). Bluno nano sku:dfr0296 wiki, [Online]. Available: https:
//www.dfrobot.com/wiki/index.php/Bluno_Nano_SKU:DFR0296.

[23] H. Fourati and D. Belkhiat, Recent Advances on Multisensor Attitude and


Heading Estimation: Fundamental Concepts and Applications. Aug. 2016.

[24] E. Bergamini, G. Ligorio, A. Summa, G. Vannozzi, A. Cappozzo, and A. Saba-


tini, “Estimating orientation using magnetic and inertial sensors and differ-
ent sensor fusion approaches: Accuracy assessment in manual and locomotion
tasks”, vol. 14, pp. 18 625–18 649, Oct. 2014.

[25] P. Corke, Robotics, Vision and Control. Springer, 2011, pp. 19-37, isbn: 978-
3-642-20143-1.

[26] F. Semiconductor. (2013). Tilt sensing using a three-axis accelerometer. Ap-


plication Note 3461, [Online]. Available: https://cache.freescale.com/
files/sensors/doc/app_note/AN3461.pdf.

[27] F. Semiconductor. (2015). Accuracy of angle estimation in ecompass and 3d


pointer applications. Application Note 4249, [Online]. Available: https : / /
www.nxp.com/docs/en/application-note/AN4249.pdf.

[28] M. Subbiah, A. M. Sharan, and C. Dhanraj, “New methods for obtaining
the matrix of direction cosines in three-dimensional structures”, Finite Ele-
ments in Analysis and Design, vol. 10, no. 4, pp. 283 –291, 1992, issn: 0168-
874X. doi: https : / / doi . org / 10 . 1016 / 0168 - 874X(92 ) 90016 - 6. [On-
line]. Available: http://www.sciencedirect.com/science/article/pii/
0168874X92900166.

[29] G Vass, “Avoiding gimbal lock”, vol. 32, pp. 10–11, Jun. 2009.

[30] G. F. Baldi A. Bertolino F., Phase Unwrapping Algorithms: A Comparison.


Berlin, Heidelberg: Springer, 2000, in Interferometry in Speckle Light pp. 483-
490, isbn: 978-3-642-63230-3.

[31] G. Perez Paina, D. Gaydou, J. Redolfi, C. Paz, and L. Canali, “Experimental


comparison of kalman and complementary filter for attitude estimation”, Aug.
2011.

[32] J. Diebel, Representing attitude: Euler angles, unit quaternions, and rotation
vectors, 2006.

[33] W. Hamilton, Lectures on Quaternions. Hodges and Smith, 1853. [Online].


Available: https://books.google.com/books?id=TCwPAAAAIAAJ.

[34] A. J. Hanson, “Quaternion applications”, in SA ’12, 2012.

[35] L. Rodman, “Topics in quaternion linear algebra”, Jan. 2014.

[36] M. Jafari and Y. Yayli, “Generalized quaternion and rotation in 3-space e


(3-alfa,beta)”, Apr. 2012.

[37] G. P. Davailus and B. Newman, “The application of quaternion algebra to


gyroscopic motion, navigation, and guidance”, vol. 5, Aug. 2005.

[38] M. Boyle, “The integration of angular velocity”, Apr. 2016.

[39] R. Valenti, I. Dryanovski, and J. Xiao, “Keeping a good attitude: A quaternion-
based orientation filter for imus and margs”, vol. 15, pp. 19 302–19 330, Aug.
2015.

[40] K. Shoemake, “Animating rotation with quaternion curves”, SIGGRAPH Com-


put. Graph., vol. 19, no. 3, pp. 245–254, Jul. 1985, issn: 0097-8930. doi:
10.1145/325165.325242. [Online]. Available: http://doi.acm.org/10.
1145/325165.325242.

[41] M. De Franceschi and D. Zardi, “Evaluation of cut-off frequency and correc-


tion of filter-induced phase lag and attenuation in eddy covariance analysis of
turbulence data”, vol. 108, pp. 289–303, Jan. 2003.

[42] E. J. Lefferts, F. L. Markley, and M. D. Shuster, “Kalman filtering for spacecraft attitude estimation”, Jan. 1982.

[43] R. Leishman and T. McLain, “A multiplicative extended kalman filter for


relative rotorcraft navigation”, Dec. 2014.

[44] F. Semiconductor. (2007). Implementing positioning algorithms using accelerom-


eters. Application Note 3397, [Online]. Available: http://cache.freescale.
com/files/sensors/doc/app_note/AN3397.pdf?fsrch=1&sr=2.

[45] G. Llorach, A. Evans, J. Agenjo, and J. Blat, “Position estimation with a


low-cost inertial measurement unit”, pp. 1–4, Jun. 2014.

[46] C. Robotics. (2012). Estimating velocity and position using accelerometers.


Application Note 1007, [Online]. Available: https://www.pololu.com/file/
0J587/AN- 1007- EstimatingVelocityAndPositionUsingAccelerometers.
pdf.

[47] D. Sarwate and M. Pursley, “Crosscorrelation properties of pseudorandom and


related sequences”, vol. 68, pp. 593 –619, Jun. 1980.

[48] K. Briechle and U. Hanebeck, “Template matching using fast normalized cross
correlation”, vol. 4387, Mar. 2001.

[49] P. Senin, “Dynamic time warping algorithm review”, Jan. 2009.

[50] V. Tuzlukov, Signal Detection Theory, ser. Signal Detection Theory. Springer,
2001, isbn: 9780817641528. [Online]. Available: https://books.google.com/
books?id=lZP7IZtktY8C.
