Navigation System Design
Eduardo Nebot Centre of Excellence for Autonomous Systems The University of Sydney NSW 2006 Australia nebot@acfr.usyd.edu.au http://acfr.usyd.edu.au/homepages/academic/enebot/
May 5, 2005
Version 1.1
Navigation System Design (KC4)
Contents

1 Introduction to Navigation Systems
   1.1 An Historical Perspective
   1.2 A Modern Perspective
       1.2.1 The Kalman Filter
       1.2.2 Other Issues
   1.3 Course Outline

2 Introduction to Vehicle Modelling
   2.1 Kinematics
       2.1.1 Basic Kinematic Model
   2.2 Kinematic Model of the Utility Car (UTE)

3 Beacon Based Navigation
   3.1 Linear Kalman Filter Estimator
       3.1.1 Prediction Stage
       3.1.2 Update Stage
   3.2 The Extended Kalman Filter
   3.3 The Information Filter
   3.4 The Extended Information Filter
   3.5 Sensors Used for Beacon Based Navigation Systems
   3.6 Bearing Only Navigation Systems
       3.6.1 State Transition (Prediction) Equations
       3.6.2 Sensor Prediction
       3.6.3 Update Equations
   3.7 Range-Bearing Navigation Systems
   3.8 Data Association
   3.9 Filter Implementation

4 The Global Positioning System (GPS)
   4.1 GPS Observables
   4.2 Position Evaluation
   4.3 Most Common GPS Errors
   4.4 Fundamentals of DGPS (Differential GPS)
   4.5 Carrier Phase GPS
   4.6 Coordinate Transformation

5 Inertial Sensors
   5.1 Fundamental Principles of Accelerometers and Gyroscopes
       5.1.1 Accelerometers
       5.1.2 Gyroscopes
       5.1.3 Vibratory Gyroscopes
       5.1.4 Fibre Optic Gyros
   5.2 Sensor Errors
       5.2.1 Evaluation of the Error Components
       5.2.2 Faults Associated with Inertial Sensors
   5.3 Applications of Inertial Sensors
   5.4 Coordinate Frames
   5.5 Inertial Navigation Equations
       5.5.1 The Coriolis Theorem
       5.5.2 Navigation in Large Areas
       5.5.3 Navigation in Local Areas, Earth Frame
       5.5.4 Schuler Damping
   5.6 Attitude Algorithms
       5.6.1 Euler Representation
       5.6.2 Direction Cosine Matrix (DCM) Representation
       5.6.3 Quaternion Representation
       5.6.4 Discussion of Algorithms
   5.7 Attitude Evaluation Error Sources
       5.7.1 Errors in Attitude Computation
       5.7.2 Vibration
       5.7.3 Minimising the Attitude Errors
   5.8 Error Models
   5.9 Calibration and Alignment of an Inertial Measurement Unit
       5.9.1 Alignment Techniques
       5.9.2 Alignment for Low Cost Inertial Units
       5.9.3 Calibration
   5.10 Position, Velocity and Attitude Algorithms

6 GPS/INS Integration
   6.1 Navigation Architectures for Aided Inertial Navigation Systems
   6.2 Linear, Direct Feedback Implementation
   6.3 Aiding in Direct Feedback Configurations
       6.3.1 Real Time Implementation Issues
   6.4 Aiding Using a Vehicle Model
       6.4.1 General Three-Dimensional Motion
       6.4.2 Motion of a Vehicle on a Surface
   6.5 Estimation of the Vehicle State Subject to Constraints
       6.5.1 Observability of the States
       6.5.2 Multiple Aiding of an Inertial System
   6.6 Tightly Coupled GPS/INS
       6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations

7 Simultaneous Localisation and Mapping (SLAM)
   7.1 Fundamentals of SLAM
       7.1.1 Process Model
       7.1.2 The Observation Model
   7.2 The Estimation Process
       7.2.1 Example
   7.3 Fundamental Results in SLAM
   7.4 Nonlinear Models
   7.5 Optimisation of SLAM
       7.5.1 Standard Algorithm Optimization
       7.5.2 Compressed Filter
       7.5.3 Sub-Optimal SLAM
1 Introduction to Navigation Systems
Navigation may be defined as the process of determining vehicle position (also known as Localisation). This is distinct from Guidance (or Control), which is the process of controlling a vehicle to achieve a desired trajectory. An autonomous vehicular system generally must include these two basic competencies in order to perform any useful task. In these notes, the problem of navigation is discussed in detail. Probabilistic methods are used in order to perform accurate, predictable localisation.
1.1 An Historical Perspective
The navigation information that is generally needed is the orientation, speed and position of the vehicle. Modern navigation techniques were introduced several hundred years ago to aid vessels crossing the ocean. The first navigation techniques were used to estimate the position of a ship through dead reckoning, using observations of the ship's speed and heading. With this information, the trajectory of the ship could be predicted, albeit with errors that accumulated over time. The heading of the ship was determined by observing the position of the sun or other stars. The velocity was found with a log, compensating for the effects of local currents. Absolute information was used to provide a position fix, to compensate for the errors accumulated through dead reckoning. These fixes were obtained when well-known natural or artificial landmarks were recognised. The Mediterranean was the proving ground for these navigational techniques. It was an appropriate environment, as the water currents are light in most places and many natural landmarks can be seen along most possible journeys.

A new navigational problem arose amongst the explorers of the fifteenth century. Even with the invention of the compass, dead reckoning was only useful for short periods of time: difficulties in obtaining accurate measurements of velocity and in compensating for local currents limited it to periods of a few hours. In the open sea, natural landmarks are scarce, making accurate position updates impossible. It was clear at that time that new methods, or new sensors, were needed to obtain absolute position information. Techniques to determine Latitude were developed in the early 1550s by the Portuguese. Latitude could be found with relative accuracy (30 miles) by observation of the Pole star. The determination of Longitude is a much more complicated problem and, in fact, took another 300 years to solve.
This problem was of fundamental economic importance, since many ships were lost at sea due to total uncertainty in Longitude. In 1714, England's parliament offered a significant prize to anyone who could formulate a reliable method to obtain absolute position information in the open sea. Two approaches were mainly investigated at this time. One method was based on accurate prediction and observation of the moon. Difficulties in predicting the lunar orbit with enough precision, and the accuracy required of the instrumentation, made this approach almost impossible to implement on standard ships at that time. The other important approach was based on knowing the time with enough accuracy to evaluate the Longitude. By 1770 both techniques were mature enough to be used in many journeys across the ocean. By the end of the century, navigation in the open sea had become predictable and safe, enabling the growth of commercial trade all over the world. In fact, Captain Cook tested one of Harrison's chronometers (the Mark IV) on his last journey.
1.2 A Modern Perspective
The previous section introduced the essential elements of navigation: Prediction and Correction. Prediction can be considered to be the use of a model of some description to provide dead reckoning information. Dead reckoning has the property of accumulating position error with time. Correction is the process whereby the observation of landmarks (either natural or artificial) can reduce the location uncertainty inherent in dead reckoning. It may be argued that, with the advent of modern sensors such as the Global Positioning System (GPS), dead reckoning is no longer a necessary part of navigation. This assertion may be readily refuted by the following statement: there is no such thing as a perfect sensor. All sensors have some measure of error or uncertainty present within every measurement. Similarly, if it were possible to perfectly model vehicle motion, external sensors would not be needed. Therefore it is essential to understand not only the sensors used for navigation, but also the model used for prediction, as they both contribute to the fidelity of the position solution. As both prediction and correction steps contain uncertainty, it is useful to pose navigation as an Estimation problem. If the error in prediction and the error in correction can be modelled as probability distributions (they can, as will be seen in later sections), then the Kalman filter can be used to fuse all available information into a common estimate that may then be used for guidance.
1.2.1 The Kalman Filter
A consistent methodology for estimating position from navigation sensors is through the use of Kalman filtering and, for nonlinear systems, through the use of the extended Kalman filter. The Kalman filter is a linear statistical algorithm used to recursively estimate the states of interest. The states of interest will usually consist of the vehicle pose and other relevant vehicle parameters. In map building, the state vector can be augmented with feature positions, so that they too may be estimated. To aid in the estimation of the states, the Kalman filter requires two mathematical models: the process model and the observation model. The process model describes how the states evolve over time, together with an estimate of the errors committed by the system. The observation model explicitly describes the information supplied by a sensor as a function of the state, together with a model of the measurement noise. These models correspond to prediction and correction respectively. For a linear system subject to Gaussian, uncorrelated, zero mean measurement and process noises, the Kalman filter is the optimal minimum mean squared error estimator. An additional benefit of using the Kalman filter estimator is that it keeps track of the uncertainties present in the system via the computation of a covariance matrix. This is important in many applications where it is desired to know how well (or how poorly) a system is performing. The Kalman filter has been used successfully in many multi-sensor AGV applications, including transport vehicles [9], road vehicles [13, 15] and indoor mobile robotics [5].

1.2.2 Other Issues
There is considerable research into different types of dead reckoning and external sensors that can work reliably in different environments and weather conditions. Different sensor suites are usually required for each application, such as navigation in underground mining, underwater vehicles, ships, and aerospace. Each system requires a particular combination of dead reckoning and absolute sensors. In many of these applications, reliability and integrity are very important issues. Such is the case in the autonomous operation of very large machines. In these applications, the navigation system must be designed with enough redundancy to be able to detect any possible fault that may occur. This usually requires that multiple redundant sensors are used, each based on different physical principles to avoid common failure modes.
1.3 Course Outline
This course presents the tools and techniques necessary for designing navigation systems based on natural and artificial landmarks working in structured and unstructured environments.

Part 1 presents the introductory navigation topics of vehicle modelling and beacon (known feature) based navigation. Laboratory: localisation assignment with outdoor experimental data.

Part 2 presents an introduction to inertial sensors and inertial navigation. Laboratory: calibration, alignment and integration of data obtained with a full six degree of freedom inertial measurement unit (IMU).

Part 3 presents the fundamentals of the Global Positioning System (GPS) and GPS/INS data fusion algorithms. Laboratory: integration of GPS/INS data.

Part 4 presents the fundamentals of Simultaneous Localisation and Mapping, together with optimal and suboptimal algorithms to reduce the computational and memory requirements of SLAM. Laboratory: SLAM assignment with outdoor experimental data.
2 Introduction to Vehicle Modelling
This section introduces the topic of vehicle modelling. The vehicle model can be a very important component of a navigation system; however, the issue of vehicle modelling for navigation is largely neglected in the literature. One of the most important issues with modelling is knowing the limitations of the model selected. For example, in the case of a pure kinematic model it is important to know the operational range where the model is valid. A simple kinematic model can be very accurate in predicting the trajectory of a vehicle moving in a straight line, yet degrade significantly when the vehicle is turning or under different load conditions, due to tyre deformation. There are different types of models that can be used for a particular application, such as kinematic, dynamic or lumped parameter models. In this course, basic kinematic models of Ackerman steered vehicles will be presented and used as a running example for the different navigation algorithms introduced. A much more detailed reference on the various aspects of vehicle modelling as they relate to navigation can be found in [14].
2.1 Kinematics
Perhaps the simplest method of vehicle modelling is simply to exploit the vehicle's kinematic constraints. These constraints are known as the rigid body and rolling motion constraints. The rigid body constraint makes the assumption that the vehicle frame is rigid, whereas the rolling motion constraint assumes that all points on the vehicle rotate about the Instantaneous Centre of Rotation with the same angular velocity. It is further assumed that there is no slip between the vehicle tyres and the ground, and that vehicle motion may adequately be represented as a two-dimensional model whose motion is restricted to a plane. The constraints mentioned above have the additional effect of reducing the total number of degrees of freedom for an Ackerman steered vehicle from six (the speeds of each wheel plus the steer angle of each of the front wheels) to two. It is often convenient for modelling purposes to replace the four wheels of an Ackerman steered vehicle with two 'virtual' wheels located along the vehicle centreline, as can be seen in Figure 1. These virtual wheels encapsulate the two degrees of freedom we are left with once the rigid body and rolling motion constraints are applied.

2.1.1 Basic Kinematic Model
Consider the vehicle model shown in Figure 2. It can be shown that the continuous time form of the vehicle model (with respect to the centre of the front wheel) can be derived as follows:
Figure 1: Ackerman Steered Vehicle Geometry
    \dot{x}(t) = V(t) \cos(\phi(t) + \gamma(t))
    \dot{y}(t) = V(t) \sin(\phi(t) + \gamma(t))
    \dot{\phi}(t) = \frac{V(t) \sin(\gamma(t))}{B}        (1)
where x(t) and y(t) denote the position of the vehicle, the angle φ(t) is the orientation of the vehicle with respect to the x axis, and V(t) represents the linear velocity of the front wheel. The angle γ is defined as the steer angle of the vehicle, and B is the base length of the vehicle. These equations are reported widely in the literature [6, 8, 13, 19]. Measurements are made of the wheel's angular velocity ω and steer angle γ. It should be noted that for nonlinear models such as this, a closed form solution for the discrete time vehicle model cannot in general be found by integrating Equation 1 over the interval from t_{k-1} to t_k, as is possible in the linear case. For nonlinear models, an Euler approximation of the integral can be used, assuming the control inputs are approximately constant over this interval. This approximation yields a discrete time
Figure 2: Simplified Kinematic Vehicle Model
vehicle model of the form

    x(k) = x(k-1) + \Delta T \, V(k-1) \cos(\phi(k-1) + \gamma(k-1))
    y(k) = y(k-1) + \Delta T \, V(k-1) \sin(\phi(k-1) + \gamma(k-1))
    \phi(k) = \phi(k-1) + \Delta T \, \frac{V(k-1) \sin(\gamma(k-1))}{B}        (2)

where \Delta T is defined as the sample time from t_{k-1} to t_k. This discrete time vehicle model is now implementable within a discrete time Extended Kalman Filter.
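As a concrete illustration, the Euler-discretised model of Equation 2 can be sketched in a few lines of Python. The function name and the base length B = 2.5 m are illustrative assumptions, not values taken from these notes.

```python
import math

def predict_pose(x, y, phi, V, gamma, dT, B=2.5):
    """One Euler step of the discrete-time kinematic model (Equation 2).

    (x, y) is the position of the centre of the front wheel, phi the
    orientation, V the front-wheel speed, gamma the steer angle, dT the
    sample time and B the base length (B = 2.5 m is an assumed value).
    """
    x_new = x + dT * V * math.cos(phi + gamma)
    y_new = y + dT * V * math.sin(phi + gamma)
    phi_new = phi + dT * V * math.sin(gamma) / B
    return x_new, y_new, phi_new

# Driving straight (gamma = 0) for one second at 1 m/s moves the vehicle
# one metre along its heading and leaves the orientation unchanged.
pose = predict_pose(0.0, 0.0, 0.0, V=1.0, gamma=0.0, dT=1.0)
```

Within an EKF, this function plays the role of the state transition F; applied repeatedly with measured V and γ it accumulates dead reckoning error exactly as described in Section 1.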
2.2 Kinematic Model of the Utility Car (UTE)
This section presents a more realistic model of the standard outdoor vehicle shown in Figure 3. In general, both the models that predict the trajectory of the vehicle and the models that relate the observations to the states are nonlinear. The navigation algorithms to be used require the linearisation of these models; in this case the Jacobians of the process and observation models are needed to propagate the uncertainties. Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable
of measuring the relative distance between the vehicle and the environment, as shown in Figure 4.

Figure 3: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, a linear variable differential transformer sensor for the steering, and back wheel velocity encoders.

The steering control α and the speed v_c are used with the kinematic model to predict the position of the vehicle. In this case the external sensor returns range and bearing information to the different features B_i, i = 1..n. This information is obtained with respect to the vehicle coordinates (x_l, y_l); that is, z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame. Considering that the vehicle is controlled through a demanded velocity v_c and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

    \begin{bmatrix} \dot{x}_c \\ \dot{y}_c \\ \dot{\phi}_c \end{bmatrix} =
    \begin{bmatrix} v_c \cos(\phi) \\ v_c \sin(\phi) \\ \frac{v_c}{L} \tan(\alpha) \end{bmatrix} + \gamma        (3)
where L is the distance between the wheel axles, as shown in Figure 5. To simplify the equations in the update stage, the kinematic model of the vehicle is designed to represent the trajectory of the centre of the laser. Based on Figures 4 and 5, the translation from the centre of the back axle is given by

    P_L = P_C + a \cdot T_\phi + b \cdot T_{\phi + \pi/2}        (4)
P_L and P_C are the positions of the laser and the centre of the back axle in global coordinates respectively. The transformation is defined by the orientation angle, according to the following vectorial expression:
Figure 4: Vehicle Coordinate System
    T_\phi = (\cos(\phi), \sin(\phi))

The scalar representation is

    x_L = x_c + a \cos(\phi) + b \cos(\phi + \pi/2)
    y_L = y_c + a \sin(\phi) + b \sin(\phi + \pi/2)        (5)

Finally, the full state representation can be written

    \begin{bmatrix} \dot{x}_L \\ \dot{y}_L \\ \dot{\phi}_L \end{bmatrix} =
    \begin{bmatrix} v_c \cos(\phi) - \frac{v_c}{L} (a \sin(\phi) + b \cos(\phi)) \tan(\alpha) \\
                    v_c \sin(\phi) + \frac{v_c}{L} (a \cos(\phi) - b \sin(\phi)) \tan(\alpha) \\
                    \frac{v_c}{L} \tan(\alpha) \end{bmatrix} + \gamma        (6)
The velocity v_c is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

    v_c = \frac{\nu_e}{1 - \tan(\alpha) \cdot H / L}        (7)

where for this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m. Finally, the discrete
Figure 5: Kinematics parameters
model in global coordinates can be approximated with the following set of equations:

    \begin{bmatrix} x(k) \\ y(k) \\ \phi(k) \end{bmatrix} =
    \begin{bmatrix} x(k-1) + \Delta T \left( v_c(k-1) \cos(\phi(k-1)) - \frac{v_c(k-1)}{L} (a \sin(\phi(k-1)) + b \cos(\phi(k-1))) \tan(\alpha(k-1)) \right) \\
                    y(k-1) + \Delta T \left( v_c(k-1) \sin(\phi(k-1)) + \frac{v_c(k-1)}{L} (a \cos(\phi(k-1)) - b \sin(\phi(k-1))) \tan(\alpha(k-1)) \right) \\
                    \phi(k-1) + \Delta T \, \frac{v_c(k-1)}{L} \tan(\alpha(k-1)) \end{bmatrix} + \gamma        (8)

where \Delta T is the sampling time, which in this case is not constant.
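Equations 7 and 8 can be sketched directly in Python, using the parameter values quoted above for the utility car. The function names are illustrative and the process-noise term γ is omitted.

```python
import math

# Vehicle parameters quoted in the notes for the utility car.
H, L = 0.75, 2.83
b, a = 0.5, L + 0.95

def encoder_to_axle_velocity(nu_e, alpha):
    """Translate the back-left wheel encoder velocity to the centre of the
    back axle (Equation 7)."""
    return nu_e / (1.0 - math.tan(alpha) * H / L)

def predict_laser_pose(x, y, phi, nu_e, alpha, dT):
    """One Euler step of the discrete model for the laser centre
    (Equation 8), without the process-noise term."""
    vc = encoder_to_axle_velocity(nu_e, alpha)
    t = math.tan(alpha)
    x_new = x + dT * (vc * math.cos(phi)
                      - (vc / L) * (a * math.sin(phi) + b * math.cos(phi)) * t)
    y_new = y + dT * (vc * math.sin(phi)
                      + (vc / L) * (a * math.cos(phi) - b * math.sin(phi)) * t)
    phi_new = phi + dT * (vc / L) * t
    return x_new, y_new, phi_new
```

With α = 0 the steering correction vanishes, v_c = ν_e, and the model reduces to straight-line dead reckoning, which is a useful sanity check of the implementation.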
The observation equation relating the vehicle states to the observations is

    z = \begin{bmatrix} z_r^i \\ z_\beta^i \end{bmatrix} = h(X, x_i, y_i) =
    \begin{bmatrix} \sqrt{(x_i - x_L)^2 + (y_i - y_L)^2} \\
                    \operatorname{atan}\!\left( \frac{y_i - y_L}{x_i - x_L} \right) - \phi_L + \pi/2 \end{bmatrix} + \gamma_h        (9)

where z is the observation vector, (x_i, y_i) are the coordinates of the landmarks, x_L, y_L and \phi_L are the vehicle states defined at the external sensor location, and \gamma_h is the sensor noise. The complete nonlinear model can be expressed in general form as:

    X(k+1) = F(X(k), u(k) + \gamma_u(k)) + \gamma_f(k)
    z(k) = h(X(k)) + \gamma_h(k)        (10)
The propagation of uncertainties and the estimation of the states will require the evaluation of the Jacobians of the process and measurement models. The Jacobian of the process model with respect to the state variables is:

    F_x = \frac{\partial F}{\partial X} =
    \begin{bmatrix}
    1 & 0 & -\Delta T \left( v_c \sin\phi + \frac{v_c}{L} \tan\alpha \, (a \cos\phi - b \sin\phi) \right) \\
    0 & 1 & \Delta T \left( v_c \cos\phi - \frac{v_c}{L} \tan\alpha \, (a \sin\phi + b \cos\phi) \right) \\
    0 & 0 & 1
    \end{bmatrix}

The evaluation of the Jacobian with respect to the inputs requires a few additional mathematical steps. In this section the full procedure is shown for two different cases [11]:

• Using the steering encoder: \dot{\phi} = \frac{1}{L} v_1 \beta, with \beta = \tan\alpha
• Using gyro information: \dot{\phi} = \phi_z

The general kinematic model is

    F(v_1, \dot{\phi}) = \begin{bmatrix} v_1 \cos(\phi) + \dot{\phi} \, \eta_x(\phi) \\
                                         v_1 \sin(\phi) + \dot{\phi} \, \eta_y(\phi) \\
                                         \dot{\phi} \end{bmatrix},
    \qquad G(v_2, \beta, \phi_z) = F(v_1, \dot{\phi})

with

    v_1 = \frac{v_2}{1 - \beta H / L}, \qquad \dot{\phi} = \frac{1}{L} v_1 \beta \ \text{ or } \ \dot{\phi} = \phi_z

    \eta_x = -(a \sin\phi + b \cos\phi), \qquad \eta_y = (a \cos\phi - b \sin\phi)

The general partial derivatives with respect to the three inputs are:
    \frac{\partial G}{\partial v_2} = \frac{\partial F}{\partial v_1} \cdot \frac{\partial v_1}{\partial v_2} + \frac{\partial F}{\partial \dot{\phi}} \cdot \frac{\partial \dot{\phi}}{\partial v_1} \cdot \frac{\partial v_1}{\partial v_2}

    \frac{\partial G}{\partial \beta} = \frac{\partial F}{\partial v_1} \cdot \frac{\partial v_1}{\partial \beta} + \frac{\partial F}{\partial \dot{\phi}} \cdot \left( \frac{\partial \dot{\phi}}{\partial v_1} \cdot \frac{\partial v_1}{\partial \beta} + \frac{\partial \dot{\phi}}{\partial \beta} \right)

    \frac{\partial G}{\partial \phi_z} = \frac{\partial F}{\partial \dot{\phi}} \cdot \frac{\partial \dot{\phi}}{\partial \phi_z}

with

    \frac{\partial F}{\partial v_1} = \begin{bmatrix} \cos(\phi) & \sin(\phi) & 0 \end{bmatrix}^T, \qquad
    \frac{\partial F}{\partial \dot{\phi}} = \begin{bmatrix} \eta_x(\phi) & \eta_y(\phi) & 1 \end{bmatrix}^T

    v_1 = \frac{v_2}{1 - \beta H / L}, \qquad
    \frac{\partial v_1}{\partial v_2} = \frac{1}{1 - \beta H / L}, \qquad
    \frac{\partial v_1}{\partial \beta} = \frac{v_2}{\left(1 - \beta H / L\right)^2} \cdot \frac{H}{L}

and, for the steering-encoder and gyro cases respectively,

    \frac{\partial \dot{\phi}}{\partial v_1} = \frac{\beta}{L} \text{ or } 0, \qquad
    \frac{\partial \dot{\phi}}{\partial \beta} = \frac{v_1}{L} \text{ or } 0, \qquad
    \frac{\partial \dot{\phi}}{\partial \phi_z} = 0 \text{ or } 1

If the steering angle is used to predict (as the derivative) the orientation, then the gradient is
    \frac{\partial G}{\partial (v_2, \beta, \phi_z)} = \begin{bmatrix} \frac{\partial G}{\partial v_2} & \frac{\partial G}{\partial \beta} & \frac{\partial G}{\partial \phi_z} \end{bmatrix}

    \frac{\partial G}{\partial v_2} = \frac{\partial F}{\partial v_1} \cdot \frac{\partial v_1}{\partial v_2} + \frac{\partial F}{\partial \dot{\phi}} \cdot \frac{\partial \dot{\phi}}{\partial v_1} \cdot \frac{\partial v_1}{\partial v_2} =
    \left( \begin{bmatrix} \cos(\phi) \\ \sin(\phi) \\ 0 \end{bmatrix} + \begin{bmatrix} \eta_x(\phi) \\ \eta_y(\phi) \\ 1 \end{bmatrix} \cdot \frac{\beta}{L} \right) \cdot \frac{1}{1 - \beta H / L}

    \frac{\partial G}{\partial \beta} = \begin{bmatrix} \cos(\phi) \\ \sin(\phi) \\ 0 \end{bmatrix} \cdot \frac{v_2}{\left(1 - \beta H / L\right)^2} \cdot \frac{H}{L} +
    \begin{bmatrix} \eta_x(\phi) \\ \eta_y(\phi) \\ 1 \end{bmatrix} \cdot \left( \frac{\beta}{L} \cdot \frac{v_2}{\left(1 - \beta H / L\right)^2} \cdot \frac{H}{L} + \frac{v_1}{L} \right)

    \frac{\partial G}{\partial \phi_z} = \begin{bmatrix} \eta_x(\phi) \\ \eta_y(\phi) \\ 1 \end{bmatrix} \cdot 0 = 0

If a gyro is used to predict the change in orientation, then the following gradient is used:

    \frac{\partial G}{\partial v_2} = \begin{bmatrix} \cos(\phi) \\ \sin(\phi) \\ 0 \end{bmatrix} \cdot \frac{1}{1 - \beta H / L}, \qquad
    \frac{\partial G}{\partial \beta} = \begin{bmatrix} \cos(\phi) \\ \sin(\phi) \\ 0 \end{bmatrix} \cdot \frac{v_2}{\left(1 - \beta H / L\right)^2} \cdot \frac{H}{L}, \qquad
    \frac{\partial G}{\partial \phi_z} = \begin{bmatrix} \eta_x(\phi) \\ \eta_y(\phi) \\ 1 \end{bmatrix}

Finally, the Jacobian of the observation model with respect to the state variables, evaluated at (x_L, y_L, \phi), is
    \frac{\partial H}{\partial (x_L, y_L, \phi)} = \frac{\partial (r, \beta)}{\partial (x_L, y_L, \phi)} =
    \begin{bmatrix}
    \frac{x_L - x_i}{\sqrt{d}} & \frac{y_L - y_i}{\sqrt{d}} & 0 \\
    -\frac{y_L - y_i}{d} & \frac{x_L - x_i}{d} & -1
    \end{bmatrix},
    \qquad d = (x_L - x_i)^2 + (y_L - y_i)^2
These models will be used with standard EKF algorithms to navigate, and to build and maintain a navigation map of the environment.
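The process and observation Jacobians above translate directly into code. The sketch below uses hypothetical helper names, with the utility-car parameters as defaults (a = L + 0.95 = 3.78 m).

```python
import math

def process_jacobian(phi, vc, alpha, dT, L=2.83, a=3.78, b=0.5):
    """Jacobian F_x of the discrete laser-centre model with respect to
    (x_L, y_L, phi).  Default parameter values follow the utility-car
    example in the notes."""
    t = math.tan(alpha)
    return [
        [1.0, 0.0, -dT * (vc * math.sin(phi)
                          + (vc / L) * t * (a * math.cos(phi) - b * math.sin(phi)))],
        [0.0, 1.0,  dT * (vc * math.cos(phi)
                          - (vc / L) * t * (a * math.sin(phi) + b * math.cos(phi)))],
        [0.0, 0.0, 1.0],
    ]

def observation_jacobian(xL, yL, xi, yi):
    """Jacobian of the range-bearing observation (r, beta) with respect to
    (x_L, y_L, phi), for a landmark at (xi, yi)."""
    d = (xL - xi) ** 2 + (yL - yi) ** 2   # squared range to the landmark
    sd = math.sqrt(d)
    return [
        [(xL - xi) / sd, (yL - yi) / sd, 0.0],
        [-(yL - yi) / d, (xL - xi) / d, -1.0],
    ]
```

A quick check against finite differences of the corresponding models is a good habit when implementing Jacobians by hand, since sign errors here are a common source of filter divergence.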
3 Beacon Based Navigation
In this section, a navigation system employing artificial beacons (or targets) is developed. The kinematic vehicle model developed in the previous section will be used, and sensor models will be described. The generally accepted definition of a 'beacon' is a target whose position (and perhaps other parameters) is known to the navigation system. By observing a beacon whose absolute position is known, the position of the vehicle with respect to the beacon may be easily computed. Therefore, beacon based navigation systems are perhaps the easiest to implement. The drawback, of course, is that infrastructure is usually needed to implement a navigation system of this type. A form of navigation that overcomes this limitation will be discussed in Section 7.
3.1 Linear Kalman Filter Estimator
This section presents a brief introduction to Kalman filter techniques, to make the reader familiar with the notation and terms that will be used throughout the rest of the course. The Kalman filter is a linear estimator that fuses information obtained from a model, observations and prior knowledge in an optimal manner. We can start by assuming that the system we are interested in can be modelled by:

    x(k) = F(k) x(k-1) + B(k) u(k) + G(k) v(k)        (11)

where x(k-1) is the state of the system at time k-1, u(k) is an input, v(k) is additive noise, B(k) and G(k) are the input and noise transition matrices, and F(k) is the transition matrix that propagates x(k-1) to x(k), with k the index denoting one incremental step of time. We can also assume that the observation model has the following form:

    z(k) = H(k) x(k) + w(k)        (12)

where z(k) is the observation at time k, H(k) is the observation matrix that relates the states to the observations, and w(k) is additive noise. It is also assumed that all noises are Gaussian, uncorrelated and zero mean:

    E[v(k)] = E[w(k)] = 0

with covariances

    E[v(i) v^T(j)] = \delta_{ij} Q(i), \qquad E[w(i) w^T(j)] = \delta_{ij} R(i)
The process and observation noises are also assumed uncorrelated:

    E[v(i) w^T(j)] = 0

Finally, it is assumed that we know the initial state with a given uncertainty:

    x(0) = x_0, \qquad P(0) = P_0
where P_0 is in general a diagonal covariance matrix with an initial guess of the uncertainty in the initial value x_0. With the process and observation models F, B, G, H, the covariances Q, R and P_0, and the initial state x_0, the following Kalman filter equations can be applied to recursively estimate the state in two stages: prediction and update.

3.1.1 Prediction Stage
The prediction stage is usually responsible for the high frequency information and uses dead reckoning sensor inputs to drive the process model, generating a prediction of the state x at time k given all the information up to time k-1:

    \hat{x}(k/k-1) = F(k) \hat{x}(k-1/k-1) + B(k) u(k)        (13)

The uncertainty of this prediction is also evaluated:

    P(k/k-1) = F(k) P(k-1/k-1) F^T(k) + G(k) Q(k) G^T(k)        (14)

3.1.2 Update Stage
The filter will work in prediction mode until an observation becomes available. The state x(k) will then be updated with the observation obtained at time k. In this stage we obtain the estimate \hat{x}(k/k), which is usually read as the estimate of the state x(k) given all the information up to time k. The update equation for the state is:

    \hat{x}(k/k) = \hat{x}(k/k-1) + W(k) \left( z(k) - H(k) \hat{x}(k/k-1) \right)        (15)

It can be seen that the state x at time k, given all the information up to time k, is obtained from the prediction \hat{x}(k/k-1) plus a correction factor formed by the product of the Kalman gain W and the difference between the observation z(k) and the prediction of the observation, H(k) \hat{x}(k/k-1). At the same time, since an observation was included, the uncertainty of the state, P(k/k), must decrease. This can be seen in the update equation
for the state covariance P:

    P(k/k) = P(k/k-1) - W(k) S(k) W^T(k)        (16)

where the Kalman gain matrix W(k) and the innovation covariance S(k) are given by

    W(k) = P(k/k-1) H^T(k) S(k)^{-1}        (17)

    S(k) = H(k) P(k/k-1) H^T(k) + R(k)

The last matrix is the covariance of the innovation sequence:

    \nu(k) = z(k) - H(k) \hat{x}(k/k-1)        (18)
Since in most cases the true value of the state is never known, the innovation sequence gives an indication of how well the filter is working. This measure is obtained by comparing the actual values of the innovation sequence with the variance predicted by S(k). This approach will be used to validate the data before incorporating an observation, and for data association purposes; more on this later in this chapter.
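To make the two stages concrete, here is a minimal scalar sketch of Equations 13 to 18 in Python. With a one-dimensional state all matrices collapse to numbers; the noise values and measurements below are illustrative assumptions, not data from the notes.

```python
# Scalar Kalman filter: static state, direct observation.
F, H = 1.0, 1.0       # transition and observation "matrices"
Q, R = 0.01, 1.0      # process and observation noise variances (assumed)

def predict(x, P, u=0.0):
    x_pred = F * x + u                 # Equation 13 (B(k)u(k) term)
    P_pred = F * P * F + Q             # Equation 14
    return x_pred, P_pred

def update(x_pred, P_pred, z):
    nu = z - H * x_pred                # innovation, Equation 18
    S = H * P_pred * H + R             # innovation covariance
    W = P_pred * H / S                 # Kalman gain, Equation 17
    x = x_pred + W * nu                # Equation 15
    P = P_pred - W * S * W             # Equation 16
    return x, P, nu, S

x, P = 0.0, 10.0                       # uncertain initial state x0, P0
for z in [1.2, 0.9, 1.1]:
    x, P = predict(x, P)
    x, P, nu, S = update(x, P, z)
# The covariance P shrinks with each update, and each innovation nu can be
# gated against S (e.g. nu**2 / S < 9) to reject outlier observations.
```

The gating test in the final comment is exactly the innovation-based validation described above: an observation whose normalised innovation falls outside the variance predicted by S(k) is rejected rather than fused.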
3.2 The Extended Kalman Filter
In most real cases the process and/or observation models do not behave linearly, and hence the linear Kalman filter described above cannot be implemented. To overcome this problem, the extended Kalman filter (EKF) is used. It provides an approximate minimum mean squared error estimate: in principle it is a linear estimator that linearises the process and observation models about the current estimated state. The nonlinear discrete time system is described as

    x(k) = F(x(k-1), u(k), k) + v(k),        (19)

where F(\cdot, \cdot, k) is a nonlinear state transition function at time k which forms the current state from the previous state and the current control input. The nonlinear observation model is represented as

    z(k) = H(x(k)) + w(k).        (20)
Following the same definitions outlined for the Kalman filter, the state prediction equation is

x̂(k|k−1) = F(x̂(k−1|k−1), u(k))    (21)

and the predicted covariance matrix is

P(k|k−1) = F_x(k) P(k−1|k−1) F_x^T(k) + Q(k)    (22)
The term F_x(k) is the Jacobian of the nonlinear state transition function F, evaluated at the predicted state x̂(k|k−1). When an observation occurs, the state vector is updated according to

x̂(k|k) = x̂(k|k−1) + W(k)ν(k)    (23)

where ν(k) is the innovation,

ν(k) = z(k) − H(x̂(k|k−1))    (24)

The gain and innovation covariance matrices are

W(k) = P(k|k−1) H_x^T(k) S^{−1}(k)    (25)

S(k) = H_x(k) P(k|k−1) H_x^T(k) + R(k)    (26)
where the term H_x(k) is the Jacobian of the observation model, evaluated at the predicted state x̂(k|k−1). The updated covariance matrix is

P(k|k) = (I − W(k)H_x(k)) P(k|k−1) (I − W(k)H_x(k))^T + W(k)R(k)W^T(k)    (27)
In the linear Kalman filter, if the models are time invariant, the Kalman gains can be computed off line and used within the filter in real time. In the nonlinear implementation this benefit is not available, since the linearised process and observation models depend on the current state. Furthermore, the nonlinear filter must be properly initialised to ensure that the linearised models are accurate, as poor linearisation can lead to filter instabilities. Finally, the presence of the Jacobians adds a higher level of complexity and increases the computational processing required.
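Despite these caveats, the EKF cycle itself (Equations 21–27) is compact. The following sketch assumes the user supplies the models f, h and their Jacobians F_x, H_x as callables; the function and argument names are illustrative only:

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, F_x, h, H_x, Q, R):
    """One EKF predict/update cycle (Equations 21-27).
    f, h: nonlinear process/observation models; F_x, H_x: their Jacobians."""
    # Prediction (Equations 21-22)
    x_pred = f(x_est, u)
    Fx = F_x(x_est, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update (Equations 23-27); Equation 27 is the Joseph form
    Hx = H_x(x_pred)
    nu = z - h(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    W = P_pred @ Hx.T @ np.linalg.inv(S)
    x_upd = x_pred + W @ nu
    I = np.eye(len(x_upd))
    P_upd = (I - W @ Hx) @ P_pred @ (I - W @ Hx).T + W @ R @ W.T
    return x_upd, P_upd
```

With linear models this reduces exactly to the Kalman filter of Section 3.1, which is a convenient sanity check for an implementation.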
3.3
The Information Filter
The information filter is mathematically equivalent to the Kalman filter. That is, if both filters use the same data, identical results are obtained (the reader is referred to [20] for a derivation of the information filter). The key components of the information filter are the information matrix Y(k|k) and the information vector ŷ(k|k). Assuming that the noise is Gaussian, Y(k|k) is the inverse of the covariance matrix,

Y(k|k) = P^{−1}(k|k)    (28)
and hence represents the amount of information present amongst the states of interest and their correlations. The information vector represents the information content of the states themselves and can be evaluated by transforming the state estimate into information space,

ŷ(k|k) = Y(k|k) x̂(k|k)    (29)

The predicted information vector is

ŷ(k|k−1) = L(k|k−1) ŷ(k−1|k−1) + B(k)u(k)    (30)

where

L(k|k−1) = Y(k|k−1) F(k) Y^{−1}(k−1|k−1)
The transformation matrix B(k) transforms the control input u(k) into information space. The corresponding predicted information matrix is

Y(k|k−1) = [F(k) Y^{−1}(k−1|k−1) F^T(k) + Q(k)]^{−1}    (31)
Again, these two equations are evaluated at each sample of the dead reckoning sensors. The information contribution to the states due to an observation takes the form of the information observation vector,

i(k) = H^T(k) R^{−1}(k) z(k)    (32)
The amount of certainty associated with this observation takes the form of the information observation matrix,

I(k) = H^T(k) R^{−1}(k) H(k)    (33)
The beneﬁt of implementing the information ﬁlter lies in the update stage. Since the observation has been transformed over to information space, the update procedure is
simply the addition of the information contribution of this observation to the prediction:

ŷ(k|k) = ŷ(k|k−1) + i(k)    (34)

Y(k|k) = Y(k|k−1) + I(k)    (35)

If there is more than one sensor providing observations, the total contribution is simply the sum of the individual observation information vectors and matrices, and hence

ŷ(k|k) = ŷ(k|k−1) + Σ_j i_j(k)    (36)

Y(k|k) = Y(k|k−1) + Σ_j I_j(k)    (37)

where j indexes the sensors providing information at time k. There are a number of advantages associated with the information filter:

• No innovation covariance matrix S(k) has to be computed;
• No gain matrix W(k) has to be computed;
• The initial information matrix can easily be initialised to zero information;
• It is computationally easier to implement an information filter in a multisensor environment, since the update is simply the summation of the individual information contributions. A Kalman filter, in contrast, cannot accommodate multiple simultaneous observations in a single update, due to the correlation of the innovations amongst the observations.
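The multi-sensor update described above (Equations 32–37) can be sketched as follows; the function signature is a hypothetical choice, not from the text:

```python
import numpy as np

def info_update(y_pred, Y_pred, observations):
    """Information filter update with multiple sensors (Equations 32-37).
    observations: list of (z, H, R) triples, one per sensor."""
    y_upd = y_pred.copy()
    Y_upd = Y_pred.copy()
    for z, H, R in observations:
        R_inv = np.linalg.inv(R)
        y_upd = y_upd + H.T @ R_inv @ z   # information vector contribution, Eq. 32/36
        Y_upd = Y_upd + H.T @ R_inv @ H   # information matrix contribution, Eq. 33/37
    return y_upd, Y_upd
```

The state estimate can be recovered at any time as x̂(k|k) = Y^{−1}(k|k) ŷ(k|k), per Equation 29.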
3.4
The Extended Information Filter
The derivation of the extended information filter (EIF) can be found in [20]. As with the EKF, the EIF estimates the states of interest given nonlinear process and/or observation models, but in information space. The predicted information vector and matrix are

ŷ(k|k−1) = Y(k|k−1) F(x̂(k−1|k−1), u(k))    (38)

Y(k|k−1) = [F_x(k) Y^{−1}(k−1|k−1) F_x^T(k) + Q(k)]^{−1}    (39)
When an observation occurs, the information contribution and its associated information matrix are

i(k) = H_x^T(k) R^{−1}(k) [ν(k) + H_x(k) x̂(k|k−1)]    (40)

I(k) = H_x^T(k) R^{−1}(k) H_x(k)    (41)
where the innovation is

ν(k) = z(k) − H(x̂(k|k−1))    (42)
Although the nontrivial derivation of the Jacobians is still required in this form, the EIF has the benefit of being easily initialised; that is, one can set the initial conditions to zero information. There are also the added benefits of decentralising the filter across multiple nodes and of incorporating multiple observations into a single filter [20].
3.5
Sensors Used for Beacon Based Navigation Systems
The sensors used in beacon based navigation systems typically fall into two distinct categories. The first, bearing only, refers to sensors that simply return the bearing (azimuth) to a given beacon. Such laser systems are usually cheaper since they do not need to implement the range measurement circuitry. Single cameras also fall into this category, unless some form of stereo vision also provides range information. The second category comprises sensors that return both range and bearing to a beacon. Typical range–bearing sensors include sonar, laser, radar, and stereo vision.
Figure 6: Bearing-only laser, and range–bearing radar and laser

In most navigation applications the beacons will be clearly differentiated from the background but will not report their identity. The navigation system will have a list of the locations of all the beacons and will be responsible for determining which beacon has been detected. This process is called data association, and it is more difficult and less reliable with bearing only sensors.
3.6
Bearing Only Navigation Systems
This section presents bearing only localisation algorithms using the basic nonlinear kinematic and observation models presented in the previous chapter. The UTE model will be used in the laboratory assignments and will be referred to in this chapter when necessary. The extended Kalman filter is used to deal with the nonlinear models.

3.6.1 State Transition (Prediction) Equations
Consider the Ackerman steered vehicle discussed in Section 2.1. Equation 1 gives the continuous time vehicle model. For implementation in a discrete time EKF the model must be discretised, as was shown in Equation 2. This discretised vehicle model (state transition equation) is reproduced here for clarity:

x(k) = x(k−1) + ΔT V(k−1) cos(φ(k−1) + γ(k−1))
y(k) = y(k−1) + ΔT V(k−1) sin(φ(k−1) + γ(k−1))    (43)
φ(k) = φ(k−1) + ΔT (V(k−1)/B) sin(γ(k−1))
To best illustrate this example, assume that the kinematic assumptions (rigid body, rolling motion) hold true. When this is the case, the only sources of process noise are injected via the control inputs, the wheel velocity and the steer angle, so that these parameters may be modelled as

ω(t) = ω̄(t) + δω(t)
γ(t) = γ̄(t) + δγ(t)

with ω̄(t) and γ̄(t) the measured control inputs,
where δω(t) and δγ(t) are Gaussian, uncorrelated, zero mean random noise processes with variances σ_ω² and σ_γ² respectively. The discrete time state vector at time k+1 can now be defined as

x(k+1) = f(x(k), u(k), k) + v(k) = [x(k+1), y(k+1), φ(k+1)]^T    (44)

where u(k) = [ω(k), γ(k)]^T is the control vector at time k, v(k) is additive process noise representing the uncertainty in slip angles and wheel radius, and f(·) is the nonlinear state transition function mapping
the previous state and current control inputs to the current state, here represented by Equation 43. Assume the system has an estimate available at time k which is equal to the conditional mean

x̂(k|k) = E[x(k) | Z^k]    (45)

The prediction x̂(k+1|k) for the state at time k+1, based on information up to time k, is given by expanding Equation 44 in a Taylor series about the estimate x̂(k|k), eliminating second and higher order terms, and taking expectations conditioned on the first k observations, giving

x̂(k+1|k) = E[x(k+1) | Z^k] = f(x̂(k|k), u(k), k)    (46)
The vector state prediction function f(·) is defined by Equation 43 assuming zero process and control noise. The prediction of the state is therefore obtained by simply substituting the previous state and current control inputs into the state transition equation with no noise. Define the noise source vector as

δw(k) = [δω(k), δγ(k)]^T    (47)

The error between the true state and the estimated state is given by

δx(k|k) = x(k) − x̂(k|k)    (48)

The prediction of the covariance is then obtained as

P(k+1|k) = E[δx(k|k) δx^T(k|k) | Z^k] = f_x(k) P(k|k) f_x^T(k) + f_w(k) Σ(k) f_w^T(k)    (49)

where f_x(k) represents the gradient or Jacobian of f(·) evaluated at time k with respect to the states, f_w(k) is the Jacobian of f(·) with respect to the noise sources, and Σ(k) is the noise strength matrix given by

Σ(k) = [ σ_ω²   0   ]
       [  0    σ_γ² ]    (50)

f_x(k) and f_w(k) are given by the following:
f_x(k) = [ 1   0   −ΔT V sin(φ̂(k|k) + γ(k)) ]
         [ 0   1    ΔT V cos(φ̂(k|k) + γ(k)) ]
         [ 0   0    1                        ]    (51)

and

f_w(k) = [ ΔT cos(φ̂(k|k) + γ(k))   −ΔT V sin(φ̂(k|k) + γ(k)) ]
         [ ΔT sin(φ̂(k|k) + γ(k))    ΔT V cos(φ̂(k|k) + γ(k)) ]
         [ ΔT sin(γ(k)) / B          ΔT V cos(γ(k)) / B      ]    (52)
For the UTE model the reader needs to select the type of information used to determine the change of bearing (steering encoder or gyro), and then select the appropriate equation to evaluate the Jacobian.

3.6.2 Sensor Prediction
Assume that the vehicle has a laser sensor attached such that the bearing to a number of fixed beacons B_i = [X_i, Y_i], i = 1, …, n can be obtained. This observation can be predicted by a nonlinear model

z(t) = h(x(t)) + v(t)    (53)
The bearing to a beacon is given by arctan((Y_i − y(t)) / (X_i − x(t))). However, the platform is oriented in the direction φ, so for this system the measurement equation for each beacon detected by the laser is given by the model

z_θ^i(t) = arctan( (Y_i − y(t)) / (X_i − x(t)) ) − φ(t) + v_θ^i(t)    (54)

where the laser observation noise v_θ^i is uncorrelated, zero mean and Gaussian with strength σ_θ², and is assumed to be identical for each beacon observed.
The predicted observation is found by taking expectations conditioned on all previous observations, truncating at first order, to give

ẑ(k+1|k) = E[z(k+1) | Z^k] = h(x̂(k+1|k))    (55)
If there is a predicted state for the vehicle, x̂(k+1|k), we can therefore predict the observations that will be made at that state. From Equation 54 and Equation 55,
we have the predicted observations as

ẑ_θ^i(k+1|k) = arctan( (Y_i − ŷ(k+1|k)) / (X_i − x̂(k+1|k)) ) − φ̂(k+1|k)    (56)
Now the innovation (observation prediction error) covariance S(k), used in the calculation of the Kalman gains, must be computed. This is found by squaring the estimated observation error and taking expectations conditioned on the first k measurements, giving

S(k+1) = h_x(k+1) P(k+1|k) h_x^T(k+1) + R(k+1)    (57)

In this case the Jacobian h_x(k+1) is given by

h_x(k+1) = [ −(ŷ(k+1|k) − Y_i)/d²    (x̂(k+1|k) − X_i)/d²    −1 ]    (58)

where d = sqrt( (X_i − x̂(k+1|k))² + (Y_i − ŷ(k+1|k))² ) is the predicted distance between the beacon and the vehicle. The observation variance term R(k) is given by

R(k) = σ_θ²    (59)
3.6.3
Update Equations
The state update equations for the EKF are given by

x̂(k+1|k+1) = x̂(k+1|k) + W(k+1)[z(k+1) − h(x̂(k+1|k))]    (60)

where

W(k+1) = P(k+1|k) h_x^T(k+1) S^{−1}(k+1)    (61)
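Putting Equations 54–61 together, a single bearing-only update might look like the sketch below. Normalising the innovation to (−π, π] is an implementation detail not spelled out in the text, but it is essential when differencing angles obtained from arctan; the function names are illustrative:

```python
import numpy as np

def wrap(angle):
    """Normalise an angle to (-pi, pi]."""
    return np.arctan2(np.sin(angle), np.cos(angle))

def bearing_update(x_pred, P_pred, z_bearing, beacon, sigma_theta):
    """Bearing-only EKF update (Equations 54-61). State: [x, y, phi]."""
    xi, yi = beacon
    dx, dy = xi - x_pred[0], yi - x_pred[1]
    d2 = dx**2 + dy**2
    z_hat = np.arctan2(dy, dx) - x_pred[2]                # Equation 56
    Hx = np.array([[dy / d2, -dx / d2, -1.0]])            # Equation 58
    R = np.array([[sigma_theta**2]])                      # Equation 59
    S = Hx @ P_pred @ Hx.T + R                            # Equation 57
    W = P_pred @ Hx.T @ np.linalg.inv(S)                  # Equation 61
    nu = wrap(z_bearing - z_hat)                          # wrapped innovation
    x_upd = x_pred + (W @ np.array([nu])).ravel()         # Equation 60
    P_upd = P_pred - W @ S @ W.T
    return x_upd, P_upd
```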
3.7
RangeBearing Navigation Systems
The implementation of a range–bearing beacon based navigation system is very similar to the bearing only system presented in Section 3.6. The only major diﬀerence is in the derivation of the sensor model. The sensor still observes a number of ﬁxed beacons, however it now also returns the range to these beacons.
The prediction equations for such a sensor can be derived as

ẑ_r^i(k+1|k) = sqrt( (X_i − x̂(k+1|k))² + (Y_i − ŷ(k+1|k))² )
ẑ_θ^i(k+1|k) = arctan( (Y_i − ŷ(k+1|k)) / (X_i − x̂(k+1|k)) ) − φ̂(k+1|k)    (62)
where z_r^i is the range measurement returned by the sensor when observing the i'th beacon. In this case the Jacobian h_x(k+1) is given by

h_x(k+1) = [ (x̂(k+1|k) − X_i)/d      (ŷ(k+1|k) − Y_i)/d      0 ]
           [ −(ŷ(k+1|k) − Y_i)/d²    (x̂(k+1|k) − X_i)/d²    −1 ]    (63)
where d = sqrt( (X_i − x̂(k+1|k))² + (Y_i − ŷ(k+1|k))² ) is, as before, the predicted distance between the i'th beacon and the vehicle. The observation variance term R(k) is given by

R(k) = [ σ_r²   0   ]
       [  0    σ_θ² ]    (64)

where σ_r² is the variance associated with the range measurement. As before, it is assumed that the sensor is corrupted by Gaussian, zero mean, uncorrelated measurement noise.
The state and covariance prediction, and update steps are identical to those in Section 3.6.
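A sketch of the range–bearing observation model and its Jacobian (Equations 62–63), with illustrative names:

```python
import numpy as np

def range_bearing_model(x_pred, beacon):
    """Predicted range-bearing observation and Jacobian (Equations 62-63).
    State: [x, y, phi]; beacon: (X_i, Y_i)."""
    xi, yi = beacon
    dx, dy = xi - x_pred[0], yi - x_pred[1]
    d = np.hypot(dx, dy)                                   # predicted distance
    z_hat = np.array([d, np.arctan2(dy, dx) - x_pred[2]])  # Equation 62
    Hx = np.array([[-dx / d,     -dy / d,      0.0],
                   [ dy / d**2,  -dx / d**2,  -1.0]])      # Equation 63
    return z_hat, Hx
```

The second row of the Jacobian is identical to the bearing-only case (Equation 58), so the same update code can be reused with a 2x2 R matrix.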
3.8
Data Association
This is a fundamental problem in beacon navigation: we have a list of beacons with position information, but when a beacon is sensed the question arises: what is the identity of this beacon? There are several ways to address this problem. The first (and simplest) is to use beacons that are not identical. A common method for achieving this in bearing only navigation is to use bar codes to uniquely identify targets. The second method uses statistics to determine how likely it is that a given beacon is the one that was sensed. The first method makes the association very simple, but it can become limited in the number of possible beacons, or expensive to use over large areas. The second method can use an unlimited number of identical passive beacons (reflective tape), but makes the data association more complex. This section discusses basic data association algorithms based on statistical methods.
The innovation ν for a navigation filter is defined as the difference between the observed and predicted observations:

ν(k) = z(k) − H(k) x̂(k|k−1)    (65)

The innovation mean can be found as

E[ν(k) | Z^{k−1}] = E[z(k) − ẑ(k|k−1) | Z^{k−1}] = E[z(k) | Z^{k−1}] − ẑ(k|k−1) = 0    (66)

with variance

S(k) = E[ν(k) ν^T(k)]
     = E[(z(k) − H(k)x̂(k|k−1)) (z(k) − H(k)x̂(k|k−1))^T]
     = E[(H(k)[x(k) − x̂(k|k−1)] + v(k)) (H(k)[x(k) − x̂(k|k−1)] + v(k))^T]
     = H(k) P(k|k−1) H^T(k) + R(k)    (67)
So the innovation sequence is white, with zero mean and variance S(k). For beacon matching (data validation), these properties can be exploited to determine how likely it is that a given beacon produced the current observation. The normalised innovation squared q(k) is defined as

q(k) = ν^T(k) S^{−1}(k) ν(k)    (68)
which, under the assumption that the filter is consistent, can be shown to be a χ² distributed random variable with m degrees of freedom, where m = dim(z(k)) is the dimension of the measurement vector [3]. For gating observations, q is simply formed (usually for each beacon B_i), a threshold is chosen from χ² tables (given a confidence value, 95% or 99% for example), and q is compared to the threshold. If the value of q for a given beacon is less than the threshold, then that beacon is likely to have been the one observed. One caveat, however: it is important to keep track of the number of beacons gated for a given observation. Depending on the geometry of the beacons, it is sometimes possible for multiple beacons to (quite validly) match an observation. In these cases it is usually prudent to ignore the observation rather than guess which beacon was observed. In other words, it is usually safer to throw away good data than to use bad data. This is very important, since the incorporation of a wrong beacon will most likely generate a catastrophic fault in the filter: the filter will not be able to match any more
beacons unambiguously, since the vehicle will effectively be at another position, and the state covariance error will grow indefinitely. There are other methods that improve the robustness of data association by considering more than one observation at a time, or by performing delayed batch association.
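A minimal sketch of the gating test of Equation 68, using the 95% χ² thresholds for one and two degrees of freedom (3.84 and 5.99 from standard tables):

```python
import numpy as np

CHI2_95 = {1: 3.84, 2: 5.99}   # 95% chi-square thresholds for m = 1, 2 d.o.f.

def gate_beacons(z, z_pred_per_beacon, S, m=1):
    """Return indices of beacons whose normalised innovation squared
    q = nu' S^-1 nu (Equation 68) falls inside the 95% gate."""
    S_inv = np.linalg.inv(S)
    matches = []
    for i, z_pred in enumerate(z_pred_per_beacon):
        nu = np.atleast_1d(z - z_pred)
        q = float(nu @ S_inv @ nu)
        if q < CHI2_95[m]:
            matches.append(i)
    return matches
```

Following the caveat above, the caller should use the observation only when exactly one beacon passes the gate, and discard it otherwise.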
3.9
Filter Implementation
Any real localisation application will have dead reckoning sensors that provide high frequency information at fixed intervals, and external sensors that provide low frequency information, not necessarily at a predefined interval. Examples of dead reckoning sensors are velocity and steering encoders that are sampled at given intervals. External sensors usually monitor the environment and report information whenever a possible beacon is detected. In general there are several prediction cycles between each piece of external information, and the external information will in most cases not arrive exactly at a prediction time. Furthermore, the external information will in some cases arrive with a significant delay, due to the processing necessary to extract the possible features, or due to the validation process. One example is shown in Figure 7. In this case the system performs predictions 1, 2 and 3. At time 3 we realise that an observation is available with a time stamp somewhere between 2 and 3. The correct procedure to incorporate this information is to save the previous state at time 2, make a new prediction up to the observation time, perform an update at that time, and then predict to time 3 again. This process requires the filter to save a few previous states, in accordance with the maximum delay expected in the external information.
Figure 7: Asynchronous Prediction Update Cycle
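The rollback procedure of Figure 7 can be sketched as follows. The predict and update callables, the class name and the fixed-length history are all illustrative assumptions, not part of the text:

```python
from collections import deque

class DelayedObservationFilter:
    """Sketch of the rollback procedure of Figure 7: keep a short history of
    past states so a delayed observation can be fused at its own time stamp.
    predict(state, t_from, t_to) and update(state, z) are assumed to be
    supplied by the user (e.g. the EKF equations of Section 3.6)."""

    def __init__(self, predict, update, history_len=10):
        self.predict, self.update = predict, update
        self.history = deque(maxlen=history_len)   # (time, state) pairs

    def on_prediction(self, state, t):
        # Save each prediction so it can serve as a rollback point later
        self.history.append((t, state))
        return state

    def on_delayed_observation(self, z, t_obs, t_now):
        # Find the last saved state at or before the observation time stamp
        t_past, state = max((s for s in self.history if s[0] <= t_obs),
                            key=lambda s: s[0])
        state = self.predict(state, t_past, t_obs)   # re-predict to t_obs
        state = self.update(state, z)                # fuse the observation
        return self.predict(state, t_obs, t_now)     # predict forward to now
```

The history length should be chosen from the maximum expected delay of the external information, as noted above.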
4
The Global Positioning System (GPS)
The Global Positioning System is an all-weather, continuous signal system designed to provide the information needed to evaluate accurate position worldwide. The GPS system achieved initial operational capability on 8 December 1993. It consists of three major segments: space, control and user.

Space Segment: This consists of the GPS satellites. The GPS operational constellation consists of 24 satellites that orbit the earth with a period of 12 hours. There are 6 different orbital planes with 55 degrees of inclination, at an altitude of approximately 20200 km, with 4 satellites following each orbital path. This distribution ensures that at least four satellites are always in view anywhere around the world. The maximum number of satellites that can be seen from any point on the surface of the earth is 12, although this is very rare due to natural terrain undulation or other obstructions.

Control Segment: This segment consists of a system of tracking stations distributed around the world. Their main objective is to determine the position of the satellites in order to update their ephemerides. Satellite clock corrections are also updated.

User Segment: The user segment consists of the GPS receivers that use the GPS signal information. The satellites transmit information at the frequencies L1 = 1575.42 MHz and L2 = 1227.6 MHz. The carriers are modulated with the pseudo random C/A and P codes and with a navigation message. The C/A code is a repeating pseudo random binary sequence with a chipping rate of about 1 MHz. There is a separate C/A code for each satellite in operation. The code is used by the receiver to identify the satellite and to obtain range information. The Precise Positioning Service (PPS) is provided by the P code; this code is only available to authorised users. The Standard Positioning Service (SPS) is based on the C/A code. The original accuracy of SPS was 100 m horizontal (2drms), 156 m vertical (95%) and 340 ns (95%) in time, with Selective Availability.
The drms metric stands for distance root mean square; in this case 2drms is the radius of the circle that bounds the error with 95% probability. The accuracy of the PPS is 22 m horizontal (2drms), 27.7 m vertical (95%), and 200 ns (95%) in time. The navigation message is present on both the L1 and L2 frequencies. It contains information about the precise satellite ephemeris, clock correction parameters, and low precision ephemeris data for the other satellites. After 30 seconds of continuous tracking of a satellite, the GPS receiver is able to use that satellite for position determination.
4.1
GPS observables
Two types of observations are of main interest. One is the pseudorange, which is the distance from the satellite to the vehicle plus additional errors due to clock drifts, the ionosphere, the troposphere, multipath and Selective Availability (SA). SA was a deliberate error introduced by the US Department of Defense, consisting of additional noise in the transmitted satellite clock and satellite ephemeris. SA has been eliminated since May 2000, making standard GPS more accurate. Another observation used by GPS receivers is Doppler frequency information. As the vehicle and the satellites are in constant motion with respect to each other, the received signal experiences a change in frequency proportional to their relative velocities. This information, usually referred to as delta range, can be used to generate very accurate velocity estimates. Although most GPS receivers generate position and velocity information, only the more sophisticated systems generate velocity from Doppler observations, making it independent of the position information. This is an important consideration for data fusion algorithms, since position and velocity observation errors must not be correlated. The precision of the solution is affected by two main factors: the geometry of the relative positions of the satellites with respect to the receiver, usually characterised by PDOP (position dilution of precision), and the precision of the range determination. With time stamped range and satellite position information from at least 4 satellites, the 3D position x, y and z and the GPS receiver clock drift Δt can be obtained using standard triangulation techniques. Equation 69 presents the set of nonlinear equations that need to be solved to evaluate the position of the GPS receiver. The position of the satellites is predicted with the ephemeris information. Some GPS receivers include all the satellites in view to obtain a position fix.
They usually implement a least squares or Kalman filter algorithm to estimate the position and velocity of the receiver.
4.2
Position evaluation
Once the ephemerides and the ranges from 4 satellites become available, the set of nonlinear equations presented in Equation 69 can be solved. These equations are functions of the known ranges and satellite positions, and of the unknown position of the GPS receiver and its clock drift:

r1 = sqrt( (x − x1)² + (y − y1)² + (z − z1)² ) + cΔt
r2 = sqrt( (x − x2)² + (y − y2)² + (z − z2)² ) + cΔt
r3 = sqrt( (x − x3)² + (y − y3)² + (z − z3)² ) + cΔt    (69)
r4 = sqrt( (x − x4)² + (y − y4)² + (z − z4)² ) + cΔt
Although there are closed form solutions for this system, GPS receivers usually linearise these equations and evaluate the position by solving a linear set of equations, or by implementing a least squares solution for the case where redundant information is available. Linearising Equation 69 about a nominal solution p0 = [x0, y0, z0, cΔt0]^T we have:

[ r1(p) ]   [ r1(p0) ]       [ x − x0 ]   [ εx     ]
[ r2(p) ] = [ r2(p0) ] + A · [ y − y0 ] + [ εy     ]    (70)
[ r3(p) ]   [ r3(p0) ]       [ z − z0 ]   [ εz     ]
[ r4(p) ]   [ r4(p0) ]       [ cΔt    ]   [ εclock ]

with

    [ (x0 − x1)/S1   (y0 − y1)/S1   (z0 − z1)/S1   1 ]
A = [ (x0 − x2)/S2   (y0 − y2)/S2   (z0 − z2)/S2   1 ]    (71)
    [ (x0 − x3)/S3   (y0 − y3)/S3   (z0 − z3)/S3   1 ]
    [ (x0 − x4)/S4   (y0 − y4)/S4   (z0 − z4)/S4   1 ]

and

S_i = sqrt( (x0 − x_i)² + (y0 − y_i)² + (z0 − z_i)² )    (72)

where p = [x, y, z, cΔt]^T and the ε are the errors due to range noise and to the higher order terms missing from the linearisation. With this approximation the change in position can then be evaluated as:

Δp = A^{−1} Δr    (73)
When ranges from more than four satellites are available, a least squares solution can be implemented:

Δp = (A^T A)^{−1} A^T Δr    (74)
Finally, the previous position is updated with the correction to obtain the position of the rover at the time stamp of the pseudorange information:

p = p0 + Δp    (75)
The pseudorange is obtained by the GPS hardware through a correlation process, and the position of the satellites is computed using the precise satellite ephemerides broadcast by each satellite as part of the navigation message.
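The iterative solution of Equations 69–75 can be sketched as below. The satellite coordinates in any test of this sketch are hypothetical, and the units are left to the caller:

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, p0=np.zeros(4), iters=5):
    """Iterative (linearised) GPS position fix, Equations 69-75.
    sat_pos: (n, 3) satellite positions; pseudoranges: (n,) measured ranges
    including the clock term c*dt; p0: initial guess [x, y, z, c*dt]."""
    p = p0.astype(float).copy()
    for _ in range(iters):
        diff = p[:3] - sat_pos                  # (n, 3) receiver - satellite
        S = np.linalg.norm(diff, axis=1)        # geometric distances S_i, Eq. 72
        r_pred = S + p[3]                       # predicted pseudoranges, Eq. 69
        A = np.hstack([diff / S[:, None],
                       np.ones((len(S), 1))])   # geometry matrix, Eq. 71
        # Least squares step (Eq. 74); exact solve when exactly 4 satellites
        dp = np.linalg.lstsq(A, pseudoranges - r_pred, rcond=None)[0]
        p += dp                                 # Equation 75
    return p
```

Because Equation 70 drops the higher order terms, the linearised step is repeated from the updated position until the correction Δp becomes negligible.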
4.3
Most Common GPS Errors
There are several sources of ranging error for GPS. Depending on the type of GPS implementation, they can introduce significant errors into the position estimation.

Satellite Clock Errors: The GPS system ultimately depends on the accuracy of the satellite clocks. The ground stations are responsible for estimating the clock errors. These stations do not correct the clocks themselves, but upload the parameters of the correction formula that the satellites broadcast as part of the navigation message. Each GPS receiver needs to compensate the pseudorange information with a polynomial equation that is a function of these parameters. The major source of clock error was SA: with SA on, the RMS value of the error could be as large as 20 m. Without SA the clock drift can be predicted, reducing the error to less than 2 metres over a 12 hour period.

Ephemeris Errors: These errors result from the satellite transmitting ephemeris parameters with errors. They translate into errors in the prediction of the position of the satellite, which in turn generate an incorrect position fix in the GPS receiver. These errors grow with the time since the last update performed by the ground stations. GPS receivers usually do not use satellites with ephemerides older than 2 hours.

Ionosphere Errors: Since there are free electrons in the ionosphere, the GPS signal does not travel at the speed of light while in transit through this region. The delay is proportional to the number of free electrons encountered, and inversely proportional to the square of the signal frequency. The user can compensate for these errors using a diurnal model of these delays; the parameters of this model are part of the GPS navigation message. The error remaining after this compensation is of the order of 2-5 m. Another compensation method uses the signals at both frequencies, L1 and L2, to solve for the delay. This method can reduce the errors to 1-2 m.
Troposphere Errors: Another delay is introduced by the variation of the speed of the signal due to variations in temperature, pressure and humidity. Model corrections can reduce this error to the order of 1 metre.

Receiver Errors: This is the error introduced by the GPS receiver when evaluating the range through the correlation process. It is mostly dependent on nonlinear effects and thermal noise. The magnitude of this error is of the order of 0.2-0.5 m.
Multipath Errors: The range information is evaluated by measuring the travel time of the signal from the satellite to the rover. Sometimes this path may be obstructed, but the signal will still reach the rover antenna through an indirect path by multiple reflections. The receiver will then obtain erroneous range and carrier phase difference information, and an incorrect position fix will be generated. The multipath problem can be reduced with an appropriate selection of the antenna and GPS receiver, and by accepting observations only from satellites above a minimum elevation angle: since multipath is due to multiple reflections, satellites with lower elevation angles are more prone to generate these errors. The antenna can also be retrofitted with a choke-ring ground plane to eliminate low angle reflections. Some GPS receivers include algorithms and extra hardware to detect and eliminate various types of multipath errors. Although this technology has made enormous progress in the past few years, it is almost impossible to guarantee a given GPS accuracy 100% of the time when working in applications such as mining or stevedoring. Large pieces of equipment, such as cranes and draglines, and large structures will almost certainly cause important perturbations to the system at unpredictable times.

Selective Availability: Selective Availability (SA) was a deliberate error introduced by the US Department of Defense, consisting of additional noise in the transmitted satellite clock and satellite ephemeris. SA has been switched off since May 2000, and it is not the intent of the US to ever use SA again on the civil GPS signal. This section therefore gives only a brief description of this type of error, to demonstrate the complexity of filtering coloured noise. Figure 8 shows the longitude and latitude errors in metres obtained with a standard GPS receiver with SA on.
Figure 8: Latitude and Longitude errors due to SA
It is clear from this figure that the error is time correlated, showing a characteristic oscillatory behaviour with an amplitude of approximately 60 metres. Figure 9 presents the average PSD obtained from sequences of 512 samples of longitude information. Similar results are obtained for latitude information.
Figure 9: PSD of longitude error. The PSD has a gain of approximately 43 dB below 2 mHz, continuing with a sharp roll-off of approximately 40 dB/decade up to 6 mHz, where the magnitude starts to decrease at approximately 20 dB/decade.

The position estimation is thus corrupted with coloured noise of very low frequency. It becomes clear that without additional information it is practically impossible to eliminate this added noise in short periods of time. For more information about SA the reader can see [22].

Geometric Dilution of Precision: The quality of the solution of the position and clock bias error, in relation to the error in the pseudorange measurements, is a function of the matrix A of Equation 73. Assuming that the standard deviation of the pseudorange observations is σ, it can be proved that the covariance matrix for the state p of Equation 75 is given by:

P = (A^T A)^{−1} σ² = H σ²    (76)

From this equation the various definitions of estimation accuracy are defined:

VDOP = sqrt( H33 )    (77)
HDOP = sqrt( H11 + H22 )    (78)
PDOP = sqrt( H11 + H22 + H33 )    (79)
GDOP = sqrt( H11 + H22 + H33 + H44 )    (80)
Finally, the estimated errors of the individual components of the state vector p can be given as functions of the DOP variables:

sqrt( var(z) ) = σ VDOP    (81)
sqrt( var(x) + var(y) ) = σ HDOP    (82)
sqrt( var(x) + var(y) + var(z) ) = σ PDOP    (83)
sqrt( var(x) + var(y) + var(z) + var(cΔt) ) = σ GDOP    (84)
Most GPS receivers evaluate these uncertainties in real time, allowing the user to monitor the accuracy of the solution. This is important in real time applications of stand-alone GPS, or when integration with INS is used. In some cases the GPS may be able to report a solution, but due to the geometric configuration of the satellites in view, the accuracy of the solution can be very poor.
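A sketch of how the DOP values of Equations 76–80 can be computed from the geometry matrix A; the function name is illustrative:

```python
import numpy as np

def dops(A):
    """Dilution of precision values from the linearised geometry matrix A
    (Equations 76-80). Rows of A: [unit line-of-sight components, 1]."""
    H = np.linalg.inv(A.T @ A)              # Equation 76, with sigma = 1
    vdop = np.sqrt(H[2, 2])                 # Equation 77
    hdop = np.sqrt(H[0, 0] + H[1, 1])       # Equation 78
    pdop = np.sqrt(H[0, 0] + H[1, 1] + H[2, 2])  # Equation 79
    gdop = np.sqrt(np.trace(H))             # Equation 80
    return vdop, hdop, pdop, gdop
```

Multiplying each DOP value by the pseudorange standard deviation σ then gives the position error estimates of Equations 81–84.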
4.4
Fundamentals of DGPS (Differential GPS)
The errors in range determination can be significantly reduced with the use of another station placed at a known, surveyed location. This base station processes the information from all the satellites in view and determines the errors in the range information. These range corrections are then broadcast by the base station with a radio modem, and the other GPS receivers use them to compensate their pseudorange observations. The base station is usually placed in a location with good sky visibility, and processes only satellites at least 5 degrees above the horizon to avoid multipath problems. This implementation is called differential GPS (DGPS) and is shown in Figure 10. Although the user can provide the range corrections, base station and telemetry, there are also a number of commercial services that provide standard RTCM corrections. With DGPS the errors due to delays in the troposphere and ionosphere are significantly reduced, and almost all the errors due to SA (satellite ephemeris and clock dither) are eliminated. DGPS was of fundamental importance with SA on: with this approach the positioning errors were reduced from approximately 100 metres to the metre level. Without SA the reduction is not as important, especially for low quality DGPS systems, as can be seen in the following tables. The predicted magnitudes of the errors for different types of GPS (SPS/PPS/DGPS, with and without SA) are presented in Table 1 (SPS, autonomous and differential, without SA), Table 2 (SPS, autonomous and differential, with SA) and Table 3 (PPS, autonomous and differential). Finally, Table 4 presents actual experimental errors obtained from a 24 hour data set logged with a standard stand-alone GPS receiver. It can be seen that the magnitudes of the errors are smaller than the predicted values.
Error Source                          Bias    Random   Total   DGPS
Ephemeris                             2.1     0.0      2.1     0.0
Satellite Clock                       2.0     0.7      2.1     0.0
Ionosphere                            4.0     0.5      4.0     0.4
Troposphere                           0.5     0.5      0.7     0.2
Multipath                             1.0     1.0      1.4     1.4
Receiver measurement                  0.5     0.2      0.5     0.5
User Equiv. Range Error               5.1     1.4      5.3     1.6
Vertical 1-sigma error (VDOP 2.5)                      12.8    3.9
Horizontal 1-sigma error (HDOP 2.0)                    10.2    3.1

Table 1: Standard Errors (metres) - L1 C/A, without SA
Error Source                          Bias    Random   Total   DGPS
Ephemeris                             2.1     0.0      2.1     0.0
Satellite Clock                       20.0    0.7      20.0    0.0
Ionosphere                            4.0     0.5      4.0     0.4
Troposphere                           0.5     0.5      0.7     0.2
Multipath                             1.0     1.0      1.4     1.4
Receiver measurement                  0.5     0.2      0.5     0.5
User Equiv. Range Error               20.5    1.4      20.6    1.6
Vertical 1-sigma error (VDOP 2.5)                      51.4    3.9
Horizontal 1-sigma error (HDOP 2.0)                    41.1    3.1

Table 2: Standard Errors (metres) - L1 C/A, with SA
Error Source                          Bias    Random   Total   DGPS
Ephemeris                             2.1     0.0      2.1     0.0
Satellite Clock                       2.0     0.7      2.1     0.0
Ionosphere                            1.0     0.5      1.2     0.1
Troposphere                           0.5     0.5      0.7     0.2
Multipath                             1.0     1.0      1.4     1.4
Receiver measurement                  0.5     0.2      0.5     0.5
User Equiv. Range Error               3.3     1.5      3.6     1.5
Vertical 1-sigma error (VDOP 2.5)                      8.6     3.7
Horizontal 1-sigma error (HDOP 2.0)                    6.6     3.0

Table 3: Standard Errors (metres) - PPS Dual Frequency P/Y Code
Confidence   Hor. GPS   Hor. DGPS   Vert. GPS   Vert. DGPS
50%          2.5        1.6         5.6         1.8
68.27%       3.8        2.2         7.4         2.7
95.45%       7.0        4.2         14.4        6.4
99.7%        9.8        6.7         21.3        12.0

Table 4: Comparison of the errors (metres) between GPS and DGPS, without SA
Figure 10: Differential GPS implementation
4.5 Phase Carrier GPS
The more sophisticated GPS receivers make use of carrier phase information to improve the accuracy of the position fix. Differential carrier phase tracking consists of measuring the phase shift between the same signal received at the base and rover stations. This phase shift is proportional to the distance between the two stations. Much more elaborate hardware and software algorithms are needed for this implementation, since these measurements are subject to phase ambiguities, that is, uncertainty in the number of whole carrier waves between the receiver and the satellite. Algorithm convergence is a vital factor in achieving the accuracy of the carrier phase measurements. The resolution of the integer ambiguity plays a very important role in the solution of the baseline vector. When the receiver unit is switched on and commences logging, the initial whole cycle difference (ambiguity) between the satellite and the receiver is unknown. As time progresses and the receiver continuously locks on to satellite signals without interruption, the receiver moves towards generating a fixed integer ambiguity for each corresponding satellite signal. Once the state of the ambiguity is held fixed, the receiver is said to have "converged" and the ambiguity is resolved. In some cases the receiver is unable to reach a fixed integer ambiguity state; the ambiguity is then held floating with minimal tolerance and the receiver is also said to have converged. Once the ambiguities have converged, each corresponding satellite signal ambiguity, N, is held constant and the change in phase, ∆ψ, is used to calculate the change in the receiver's position.

Convergence Time for Phase Carrier GPS: In any navigation application it is expected that on many occasions the number of satellites in view will not be enough to obtain a carrier phase solution. The system will then be restarted and the ambiguities
will need to be resolved again. During the convergence period the accuracy is considerably degraded. The convergence time depends on a number of different factors, including the baseline, the number of visible satellites, the satellite configuration, and the method of ambiguity resolution: i.e. single frequency, dual frequency, or combined dual frequency and code pseudorange data. The most important factor influencing the time taken for a receiver to solve the unknown ambiguities, to either a fixed or floating solution, is the applied method of ambiguity resolution. However, both the visible satellite count and the length of the baseline vector also have an important influence.

Method of Ambiguity Resolution: In this section a comparison between two high quality phase carrier implementations is presented. The experimental data was obtained with a GPS receiver manufactured by NovAtel working with two different algorithms, named RT20 and RT2. The RT20 algorithm is capable of generating a predictable position accuracy of within 20 cm CEP in real time using the single L1 frequency. The RT2 algorithm is capable of generating a predictable accuracy of within 2 cm CEP, also in real time. The RT2 algorithm utilizes both the L1 and L2 frequencies, making it possible to resolve ambiguities faster and more accurately using linear combinations of the L1 and L2 signals. Figure 11 presents the GPS receiver's estimate of its position standard deviation for the RT2 and RT20 algorithms. For this test the GPS antenna was located at a perfectly known position so that the real convergence time could be evaluated. It can be seen that, depending on the algorithm implementation, the system will take between 100 and 180 seconds to be within specifications.
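The reason carrier phase tracking reaches centimetre accuracy can be sketched numerically: once the integer ambiguity N for a satellite is fixed, the phase measurement maps to range through the L1 wavelength. The ambiguity value below is arbitrary and purely illustrative.

```python
import math

C = 299_792_458.0          # speed of light (m/s)
F_L1 = 1575.42e6           # GPS L1 carrier frequency (Hz)
WAVELENGTH = C / F_L1      # about 19 cm

def carrier_phase_range(N, phase_rad):
    """Range implied by a resolved integer ambiguity N and a measured phase."""
    return WAVELENGTH * (N + phase_rad / (2.0 * math.pi))

# With the ambiguity held fixed, a 90-degree phase change corresponds to a
# quarter-wavelength (~4.8 cm) change in range -- hence centimetre accuracy.
N = 105_263_000            # arbitrary illustrative ambiguity value
delta = carrier_phase_range(N, math.pi / 2) - carrier_phase_range(N, 0.0)
```

This also shows why losing lock is so costly: if N is lost, the whole ambiguity search must restart before the phase observable is usable again.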
Figure 11: Convergence time for steady state position

The RT2 algorithm had a very distinct and sudden time of convergence when compared to the RT20 algorithm. The RT20 algorithm demonstrates a more gradual process of solving
for the ambiguities, slowly moving towards convergence. This distinct difference is a result of the dual frequency capabilities of the RT2 algorithm. Each individual ambiguity value for any particular observation corresponds to one possible pseudorange value, defined as a lane. So, for any particular observation there are a large number of possible lane combinations, meaning the receiver must analyze each possible lane value in order to select the correct one. For receivers with only single frequency capabilities, there is no alternative but to tediously sort through the possible lanes in search of the correct solution. However, the application of dual frequency monitoring of the L1 and L2 frequencies introduces the advantage of building linear combinations of the two frequencies, leading to additional "wider" and "narrower" lanes. The increased wide lane wavelength provides an increased ambiguity spacing, thus making it easier to choose the correct lane. When the correct wide lane has been selected, the software can then search for the narrow lane and as a result resolve the integer ambiguities faster and more accurately. Once the ambiguities are fixed, these values are considered constant, resulting in a more consistent and accurate position estimate. However, when the number of satellites in view becomes less than five the ambiguities are lost and a full cycle must be restarted.

Figure 12: Convergence time for the RT2 algorithm with different baselines

Visible Satellites: An adequate number of visible satellites is an essential component of the GPS system. Not only does the number of visible satellites influence the accuracy of the generated position but, in the case of phase measurements, it also influences the resolution of the signal ambiguities. Therefore, the number of visible satellites does influence the convergence time of a receiver. There are a large number of possible lane combinations
for every observation. Therefore, in a situation of increased satellite visibility the number of possible lanes will also increase, thus providing a greater chance of selecting the correct lane and improving the ambiguity resolution time.

Baseline Length: The length of the baseline between the remote and reference stations has arguably the second largest influence on the time taken for a receiver to reach a converged state since, essentially, the key to convergence is solving the starting vector. For a receiver to reach convergence, the calculations performed by the relative carrier phase GPS system must generate the position of the remote station with respect to the reference station, i.e. the baseline vector. Therefore the length of the baseline plays an important part in the convergence time. Figure 12 shows the time required for the same receiver, using the RT2 algorithm, to converge to a fixed integer ambiguity solution, firstly at a baseline of 100 m and secondly at a baseline of 1 km. At both locations the number of visible satellites was six.

GLONASS: GLONASS is the Russian equivalent of the GPS system, in this case named the Global Navigation Satellite System. It also transmits navigation and range data on two frequencies, L1 and L2. Unlike GPS, where the satellites are distinguished by their spread-spectrum code, the satellites in GLONASS are distinguished by the frequency of the signal: L1 between 1597 and 1617 MHz and L2 between 1240 and 1260 MHz. The system is also envisioned to handle 24 satellites in almost the same configuration as the GPS equivalent. Both systems employ a military-only code package which is modulated onto both the L1 and L2 frequencies. The corresponding civilian access code lies only on the L1 frequency. The main purpose of implementing two frequencies is to remove the ionospheric errors associated with the transmission of the signal through the atmosphere.
If both systems are combined then a theoretical doubling of the number of satellites in view is achievable. This increase in the number of satellites increases both the navigation performance and fault detection capabilities. There are various commercial GPS receivers that use both systems.
4.6 Coordinate Transformation
This section presents the coordinate transformation equations that convert position and velocity given in the standard ECEF frame to the local navigation frame used by the data fusion algorithms presented in this course. The position Pg = {PX, PY, PZ} and velocity Vg = {VX, VY, VZ} obtained from the GNSS receiver are delivered in the WGS84 datum, represented by the ECEF coordinate frame, Figure 13.
The latitude and longitude are obtained by transforming the position observation received by the GNSS receiver unit from ECEF to geodetic coordinates. The constants are the semi-major Earth axis a = 6378137.0 (metres), the flattening f = 0.003352810664747 and the semi-minor Earth axis b = a(1 − f). The intermediate quantities are:

p = sqrt(PX² + PY²)
t = atan( (PZ a) / (p b) )
e1 = 2f − f²
e2 = (a² − b²) / b²

latitude:   φ = atan( (PZ + e2 b sin³t) / (p − e1 a cos³t) )
longitude:  λ = atan( PY / PX )

Given the latitude and longitude, a transformation matrix C_g^n can be formed which transforms the position and velocity data given by the GNSS receiver from the ECEF frame to the navigation frame:

          [ − sin φ cos λ    − sin φ sin λ      cos φ ]
C_g^n  =  [ − sin λ            cos λ              0   ]     (85)
          [ − cos φ cos λ    − cos φ sin λ    − sin φ ]

Thus the position and velocity of the vehicle in the local navigation frame are

P^n = C_g^n Pg     (86)
V^n = C_g^n Vg     (87)
If the vehicle's work area has negligible change in latitude and longitude, then C_g^n is effectively fixed. Equations 86 and 87 form the observations needed for the aided inertial navigation system.
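The transformation above can be sketched directly in code. This is a transcription of the closed-form equations and of matrix (85); the sample ECEF point at the end is illustrative only (roughly in Australia).

```python
import math

A_WGS84 = 6378137.0               # semi-major Earth axis (m)
F_WGS84 = 0.003352810664747       # flattening

def ecef_to_geodetic(px, py, pz):
    """ECEF position -> latitude/longitude (rad), following the closed-form
    equations in the text."""
    a, f = A_WGS84, F_WGS84
    b = a * (1.0 - f)
    p = math.hypot(px, py)
    t = math.atan2(pz * a, p * b)
    e1 = 2.0 * f - f * f                   # first eccentricity squared
    e2 = (a * a - b * b) / (b * b)         # second eccentricity squared
    lat = math.atan2(pz + e2 * b * math.sin(t) ** 3,
                     p - e1 * a * math.cos(t) ** 3)
    lon = math.atan2(py, px)
    return lat, lon

def ecef_to_ned_matrix(lat, lon):
    """Rotation matrix C_g^n of Equation 85, taking ECEF vectors into NED."""
    sp, cp = math.sin(lat), math.cos(lat)
    sl, cl = math.sin(lon), math.cos(lon)
    return [[-sp * cl, -sp * sl,  cp],
            [     -sl,       cl, 0.0],
            [-cp * cl, -cp * sl, -sp]]

# Illustrative ECEF point (roughly in Australia); converted to degrees.
lat, lon = ecef_to_geodetic(-4646e3, 2553e3, -3534e3)
lat_deg, lon_deg = math.degrees(lat), math.degrees(lon)
```

Using `atan2` rather than `atan` keeps the longitude in the correct quadrant, which matters for any point west of the prime meridian or, as here, in the eastern hemisphere with a negative X coordinate.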
Figure 13: Coordinate representations. XYZ represents the orthogonal axes of the ECEF coordinate frame used by the GNSS receiver. λ represents the longitude and φ the latitude of the vehicle. NED represents the local navigation frame in which the vehicle states are evaluated.
5 Inertial Sensors
Inertial sensors make measurements of the internal state of the vehicle. A major advantage of inertial sensors is that they are non-radiating and non-jammable, and may be packaged and sealed from the environment. This makes them potentially robust in harsh environmental conditions. Historically, Inertial Navigation Systems (INS) have been used in aerospace vehicles and military applications such as ships, submarines and missiles, and to a much lesser extent in land vehicle applications. Only a few years ago the application of inertial sensing was limited to high performance, high cost aerospace and military applications. However, motivated by the requirements of the automotive industry, a whole variety of low cost inertial systems have now become available for diverse applications such as heading and attitude determination. The most common types of inertial sensors are accelerometers and gyroscopes. Accelerometers measure acceleration with respect to an inertial reference frame. This includes gravitational and rotational acceleration as well as linear acceleration. Gyroscopes measure the rate of rotation independent of the coordinate frame. The most common application of inertial sensors is in the use of a heading gyro: integration of the gyro rate information provides the orientation of the vehicle. Another application of inertial sensors is the use of accelerometers as inclinometers and to perform vibration analysis. The most sophisticated use of inertial sensors is in full six degree of freedom navigation systems. In this case at least three gyros are used to track the orientation of the platform and a minimum of three accelerometers are integrated to provide velocity and position. They can also provide 3D position information and, unlike encoders, have the potential of observing wheel slip. This section presents the different types of inertial sensors and the fundamental principles of inertial navigation.
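The heading-gyro application mentioned above can be sketched with a simple trapezoidal integration of yaw-rate samples. The sample rate and rate values below are invented for illustration.

```python
def integrate_heading(rates, dt, heading0=0.0):
    """Trapezoidal integration of gyro yaw-rate samples (rad/s) into a
    heading angle (rad)."""
    heading = heading0
    for r0, r1 in zip(rates, rates[1:]):
        heading += 0.5 * (r0 + r1) * dt
    return heading

# A constant 0.1 rad/s turn sampled at 10 Hz for 10 s yields ~1 rad of heading.
rates = [0.1] * 101
heading = integrate_heading(rates, dt=0.1)
```

Any constant bias in the rate samples is integrated in exactly the same way, which is why gyro bias produces a heading error that grows linearly with time (Section 5.2).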
5.1 Fundamental Principles of Accelerometers and Gyroscopes

5.1.1 Accelerometers
Accelerometers measure the inertial force generated when a mass is affected by a change in velocity. This force may change the tension of a string, cause a deflection of a beam, or even change the vibrating frequency of a mass. Accelerometers are composed of three main elements: a mass, a suspension mechanism that positions the mass, and a sensing element that returns an observation proportional to the acceleration of the mass. Some devices include an additional servo loop that generates an opposing force to improve the linearity of the sensor. A basic one degree of freedom accelerometer is shown in Figure 14. This accelerometer is usually referred to as open loop, since the acceleration is indicated by the displacement of the mass. The accelerometer described in this example can be modelled with a second order equation:
Figure 14: Basic Components of an open loop accelerometer
F = m (d²x/dt²) + c (dx/dt) + k x     (88)
where F is the applied force to be measured. The damping can be adjusted to obtain fast responses without oscillatory behaviour. Many commercial accelerometers are based on the pendulum principle. They are built with a proof mass, a spring hinge and a sensing device. These accelerometers are usually constructed with a feedback loop to constrain the movement of the mass and avoid cross-coupling accelerations. There are accelerometers that implement variations of this principle, such as the Sundstrand "Q-Flex" design and the Analog Devices silicon micromachined accelerometer. One important specification of an accelerometer is the minimum acceleration that can be measured. This is of fundamental importance when working in land vehicle applications, where the expected acceleration is usually in the range of 0.1-0.3 g. Figure 15 shows the acceleration measured when travelling in a car at low speed. A standard accelerometer for such applications is usually capable of measuring accelerations of less than 500 µg. The dependence of the bias on temperature and the linearity of the device are also important specifications.

5.1.2 Gyroscopes
These devices return a signal proportional to the rotational velocity. There is a large variety of gyroscopes based on different principles, and the price and quality of these sensors vary significantly. The following sections present some of the most common types of gyroscopes available for industrial applications.

Figure 15: Typical acceleration in land vehicle applications

5.1.3 Vibratory Gyroscopes
These types of gyroscopes can be manufactured in very different forms, but they are all based on obtaining the rotation rate by measuring Coriolis acceleration. The device can be modelled by a simple mass-spring system as shown in Figure 16. The purpose of the gyroscope is to measure the angular velocity of the particle, which is assumed to be rotating about the axis OZ. In this approach the particle is made to vibrate with constant amplitude along the axis OX. This motion is usually referred to as the primary motion and is controlled by an embedded circuit that maintains the oscillation at constant amplitude. Under rotation the mass will experience a Coriolis inertial force that is proportional to the applied rate of turn and has a direction parallel to the OY axis. This motion is referred to as the secondary motion, and its amplitude is measured to provide information proportional to the angular rotation. Some devices provide an additional feedback mechanism for this secondary motion to increase the linearity of the sensor. The performance of gyroscopes is mainly characterized by two parameters: scale factor and drift. A gyro with low distortion in the scale factor will be able to accurately sense angular rates at different angular velocities. Various system imperfections as well as environmental conditions can cause significant variations in the scale factor. Gyro drift is the nominal output of the gyro in steady state.
This nonzero output is usually temperature dependent. Murata manufactures a gyroscope, model Gyrostar ENV05, that is based on this principle. This device presents very good high frequency characteristics but poor low frequency performance. The expected drift is in the order of 1 degree/minute. Noise and a significant bias temperature dependence also affect the performance of this particular sensor. Better performance can be expected from the Vibrating Structure Gyroscopes from BAE and the quartz rate sensors built by Systron Donner. These devices have a drift of less than 0.15 degree/minute and a linearity better than 0.05 % of the full range.

Figure 16: Mass-spring model

5.1.4 Fiber Optic Gyros
Fiber optic gyros are based on the Sagnac effect, discovered by Georges Sagnac in 1913. This effect can be explained by assuming two waves of light circulating in opposite directions around a path of radius R. If the source is rotating at speed ω, the light travelling against the rotation will reach the source sooner than the wave travelling with it, as shown in Figure 17. The wave travelling with the rotation covers a distance D1 in a transit time t1, while the other wave covers a distance D2 in time t2:

D1 = 2πR + Rωt1
D2 = 2πR − Rωt2     (89)

Making the waves travel the path N times, the difference in transit time becomes:

∆t = N (t1 − t2) = 4πN R²ω / c²     (90)
It is important to relate this time difference to a phase shift at a particular frequency f:

φ = 2π ∆t f     (91)

For a given rotation ω the phase shift will therefore be

∆φ = 8π² N R² f ω / c²     (92)

Most low cost implementations of these devices work in an open loop manner. The maximum phase shift that can be evaluated without ambiguities is 90 degrees.
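Equations 90-92 can be checked numerically. The sketch below assumes a hypothetical fibre coil (5 cm radius, 1000 turns, 850 nm light source); all parameter values are illustrative only.

```python
import math

C = 299_792_458.0   # speed of light (m/s)

def sagnac_phase_shift(radius, turns, omega, wavelength=850e-9):
    """Phase shift (rad) for a fibre coil of given radius (m) and number of
    turns rotating at omega (rad/s), using Equations 90 and 91."""
    dt = 4.0 * math.pi * turns * radius ** 2 * omega / C ** 2    # Eq. (90)
    freq = C / wavelength
    return 2.0 * math.pi * dt * freq                             # Eq. (91)

# Hypothetical coil: 5 cm radius, 1000 turns, rotating at 1 deg/s.
phi = sagnac_phase_shift(radius=0.05, turns=1000, omega=math.radians(1.0))
```

Even with 1000 turns the phase shift at 1 deg/s is a small fraction of the 90 degree unambiguous range, which illustrates both why many turns of fibre are needed and why open loop devices stay within their ambiguity limit at practical rotation rates.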
Figure 17: Transmission time difference

There are commercial fiber optic gyros, such as the KVH E-Core 2000 and 1000, which are capable of drifts as low as 0.036 and 0.12 degrees per minute respectively. Closed loop optical gyros are also available, but they are more expensive.
5.2 Sensor Errors
The measurement errors associated with inertial sensors are dependent on the physical operational principle of the sensor itself. The general error equation used for gyroscopic performance along, say, the x direction, is written as

δωx = b + bg [ax ay az]ᵀ + sf ωx + my ωy + mz ωz + η     (93)

and that of the accelerometers as

δfx = b + sf ax + my ay + mz az + η     (94)

where

• b is the residual bias
• bg is a 1 × 3 vector representing the g-dependent bias coefficients
• sf is the scale factor term
• m represents mounting and internal misalignments, and
• η is random noise on the sensor signal
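A minimal sketch of evaluating the gyro error model of Equation 93 is given below. All coefficient values are invented for illustration; they are not calibration numbers for any real sensor.

```python
def gyro_error(b, bg, sf, my, mz, accel, omega, noise=0.0):
    """Gyro measurement error along x, following Equation 93:
    residual bias + g-dependent bias + scale factor + misalignment + noise."""
    ax, ay, az = accel
    wx, wy, wz = omega
    g_dependent = bg[0] * ax + bg[1] * ay + bg[2] * az
    return b + g_dependent + sf * wx + my * wy + mz * wz + noise

# Invented coefficients: 0.001 rad/s residual bias, small g-dependent bias
# along z, 1 % scale factor error, 0.1 % misalignment couplings.
err = gyro_error(b=0.001, bg=(0.0, 0.0, 0.0002), sf=0.01,
                 my=0.001, mz=0.001,
                 accel=(0.0, 0.0, 1.0),    # sensing 1 g along z
                 omega=(0.5, 0.0, 0.0))    # rotating at 0.5 rad/s about x
```

Each term is deterministic given the motion, which is exactly why the calibration tests of the next section can isolate and remove them one at a time, leaving only the noise η.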
Other terms, such as anisoelastic and anisoinertia errors, mainly affect mechanical gyros, while vibropendulous errors affect mechanical accelerometers, and hence will not be considered here. Apart from the random noise term η, all other error terms are, in principle, predictable, thus allowing for some form of compensation.

5.2.1 Evaluation of the Error Components
This section discusses tests which can be performed on gyros in order to systematically remove the errors. Similar tests can be conducted on accelerometers. It should be noted that temperature and memory effects play a significant role in the stability of the output of low cost inertial sensors. It is for this reason that, when one purchases low cost inertial units, not all the values for the error terms are available, and so testing is required based on the application at hand. If the gyro is left stationary then the only error term encountered is the g-independent bias. One of the strongest correlations that can be found in inertial sensors is that between the g-independent bias and temperature, also known as the in-run bias. Thus, by cycling through the temperatures that would be encountered in the target application, the bias of the gyro can be determined. The better the gyro, the smaller the bias variation over the temperature range. Furthermore, the better the sensor, the greater the linearity of this bias variation. There is also a hysteresis effect encountered with most inertial sensors. Thus, in an ensemble of tests cycling through the temperature in the same manner each time, the variation in the bias between ensembles can be determined. This is known as the switch-on to switch-on bias. Again, the better the gyro, the better the switch-on to switch-on bias. When testing for the remaining error components, the g-independent bias is assumed
known and is removed. Starting with a rate table, the gyro is placed such that its supposed sensitive axis is orthogonal to the input rotation vector; any output signal will then be due to internal misalignment of the sensor's input axis. The purpose of mounting the sensor orthogonal to the input rotation is to deal with a small error about null, as opposed to a small error about a large value (as would be encountered if the sensitive axis were parallel to the rotation input). With the misalignment and g-independent bias available, the gyro is placed on the rate table with its sensitive axis parallel to the rotation input. The rate table is cycled from stationary to the maximum rotation measurable by the gyro in steps, holding the rotation rate at particular points. The scale factor and the scale factor linearity can then be determined by comparing the output of the gyro to the rotation rate input. With low cost gyros the temperature also has a part to play in scale factor stability, thus the tests should also be conducted with varying temperature. Finally, the gyro can be placed in a centrifuge machine which applies a rotation rate and acceleration to the gyro. The output reading from the gyro, minus the previously calibrated terms, results in the determination of the bias due to acceleration, or g-dependent bias. By mathematically integrating Equations 93 and 94, the effect of each error term on the performance of the gyro can be determined. For each error term that is accounted for there is a corresponding increase in performance. Table 5 compares the error values available for the RLG, FOG, ceramic and silicon VSGs.
Characteristic                     RLG         FOG        Ceramic VSG   Si VSG
g-independent bias (°/hr)          0.001-10    0.5-50     360-1800      > 2000
g-dependent bias (°/hr/g)          0           < 1        36-180        36-180
scale factor nonlinearity (%)      0.2-0.3     0.05-0.5   5-100         5-100
Table 5: Comparison of some of the major errors with various gyro implementations
5.2.2 Faults Associated with Inertial Sensors
The accelerometers measure the absolute acceleration with respect to an inertial frame. We are interested in the translational acceleration, hence the algorithms used should compensate for all other accelerations. For practical purposes it can be considered that gravity is the only other acceleration present. In Figure 15 it can be seen that the average acceleration obtained from a land vehicle is smaller than 0.3 g. This implies that the orientation of the accelerometer has to be known with very good accuracy in order to compensate for gravity without introducing errors comparable to the actual translational acceleration.
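The sensitivity to attitude errors can be quantified with a one-line sketch: an uncompensated tilt error δθ leaks g·sin(δθ) into the horizontal acceleration estimate.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def gravity_leakage(tilt_error_deg):
    """Horizontal acceleration error caused by an uncompensated tilt error."""
    return G * math.sin(math.radians(tilt_error_deg))

# A 1 degree attitude error already leaks ~0.017 g into the acceleration
# estimate -- a sizeable fraction of the 0.1-0.3 g a land vehicle produces.
err_1deg = gravity_leakage(1.0)
```

This is why attitude accuracy, not accelerometer quality alone, ultimately limits low cost inertial positioning on land vehicles.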
This has been the main limitation preventing the widespread use of inertial sensors as position sensors in low cost land navigation applications. The orientation of the accelerometer can be tracked using a gyroscope. These sensors provide an output proportional to the rotation speed but are always contaminated by noise and drift. For short periods of time the drift can be approximated by a constant bias. The orientation is evaluated with a single integration of the gyroscope output:

θm = ∫ (θ̇ + b + ν) dt = θ + b t + ∫ ν dt     (95)
The integration given in Equation 95 returns the rotation angle with two additional undesired terms: a random walk due to the noise, and a term that grows with time proportionally to the gyro bias. As shown in Figure 18, the gyro drift will introduce an error in acceleration given by

a = ax sin(bt)     (96)

Figure 18: Errors due to drift in the gyro
For small angle increments the errors in acceleration, velocity and position are given by:

a = ax b t           (97)
v = (1/2) ax b t²    (98)
p = (1/6) ax b t³    (99)
It is important to remark that a constant gyro bias introduces an error in position determination that grows proportionally to t³. For standard low cost, good quality gyros the expected bias is in the order of 1-10 degrees/hour. Without calibration, the bias could introduce an error of approximately 140 metres, due to the incorrect compensation of gravity, after only 2 minutes of operation. Another aspect that needs to be considered is the resolution of the analog to digital converter used, especially when working with low drift gyroscopes. The standard deviation of the gyro noise can effectively be increased by selecting an inappropriate analog to digital converter: a gyro with a stability in the order of 1 degree per hour that works over a range of ±100 deg/sec will require a 16-bit converter to operate within its specifications. The bias in the accelerometer will also increase the error in position, proportionally to the square of time:

v = ba t             (100)
p = (1/2) ba t²      (101)
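The t³ and t² growth rates above can be checked numerically. The sketch below reproduces the roughly 140 m figure quoted in the text, assuming the attitude error leaks the full gravity vector into the acceleration (a_x = g); the accelerometer bias value used is illustrative.

```python
import math

G = 9.81  # m/s^2

def gyro_bias_position_error(bias_deg_per_hr, t):
    """Position error p = (1/6) a_x b t^3 (Equation 99), taking a_x = g
    (the gravity component leaked by the growing attitude error)."""
    b = math.radians(bias_deg_per_hr) / 3600.0   # bias in rad/s
    return G * b * t ** 3 / 6.0

def accel_bias_position_error(bias, t):
    """Position error p = (1/2) b_a t^2 (Equation 101)."""
    return 0.5 * bias * t ** 2

# A 10 deg/hr gyro bias left uncalibrated for 2 minutes:
p_gyro = gyro_bias_position_error(10.0, 120.0)   # roughly 140 m
```

The cubic dependence means halving the run time cuts the gyro-induced position error by a factor of eight, which is why frequent stationary bias recalibration is so effective.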
The error introduced by the accelerometer is too important to be neglected. Figure 19 presents the bias measured in a set of identical accelerometers belonging to an inertial unit over a period of six hours whilst they were stationary. Evidently the biases do not remain constant, and in fact it can be clearly seen that any one accelerometer is not indicative of the general performance of the other accelerometers. The different bias values indicate that they are physically different low cost inertial sensors. The changes in the values occur due to an increase of the temperature of the unit caused by ambient temperature variations. The estimation of the biases needs to be done each time the vehicle is stationary to minimize this effect. The calibration procedure must incorporate the identification of gyro and accelerometer biases at the beginning of the navigation task and become active each time the vehicle stops. The identification of the biases can also be done online, but information from other sensors is required. A second major error is due to the integration of the noise (∫ν dt), which causes a growing error term known as random walk. Figure 20 presents the effect of integrating a gyro signal on several occasions after the bias was removed, with the unit stationary. Although the average error is zero, in any particular run the error growth will occur in a random direction. Assuming unity strength white Gaussian noise and letting x(t) = ∫₀ᵗ ν(u) du, then the ensemble
mean value is:

E[ ∫₀ᵗ ν(u) du ] = ∫₀ᵗ E[ν(u)] du = 0     (102)

Figure 19: Bias change of the accelerometers over a period of 6 hours
and the variance is

E[x²(t)] = E[ ∫₀ᵗ ν(u) du ∫₀ᵗ ν(v) dv ] = ∫₀ᵗ ∫₀ᵗ E[ν(u)ν(v)] du dv     (103)

E[ν(u)ν(v)] is the autocorrelation function, which in this case (assuming uncorrelated errors) is simply a Dirac delta function δ(u − v). Hence Equation 103 becomes:

E[x²(t)] = ∫₀ᵗ ∫₀ᵗ δ(u − v) du dv = t     (104)
The standard deviation of the integrated noise is then:

σν = √t     (105)
Therefore, without any aiding, the white noise will cause an unbounded error growth in the inertial sensors, whose value at any particular point will be bounded by ±2√t 95% of the time. For white noise of strength K the standard deviation is σν = K√t. The larger the standard deviation of the noise, the greater the standard deviation of the error. This is an important specification of the system, since a good gyro can be degraded with
inappropriate signal conditioning and electronics.

Figure 20: Random walk − X gyro
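The √t growth predicted by Equations 102-105 can be verified with a quick Monte Carlo sketch. The step size, run count and random seed below are arbitrary choices.

```python
import math
import random

def integrated_noise_std(t, n_runs=1000, dt=0.02, seed=1):
    """Monte Carlo estimate of the standard deviation of x(t) = integral of
    unit-strength white noise, which Equation 105 predicts grows as sqrt(t)."""
    rng = random.Random(seed)
    n_steps = int(round(t / dt))
    finals = []
    for _ in range(n_runs):
        x = 0.0
        for _ in range(n_steps):
            # each step contributes variance dt for unit-strength noise
            x += rng.gauss(0.0, math.sqrt(dt))
        finals.append(x)
    mean = sum(finals) / n_runs
    var = sum((v - mean) ** 2 for v in finals) / n_runs
    return math.sqrt(var)

s = integrated_noise_std(t=4.0)   # theory predicts sqrt(4) = 2
```

Each simulated run wanders off in a random direction, exactly as the ensemble of integrated gyro traces in Figure 20, while the spread across runs follows the √t law.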
5.3 Applications of Inertial Sensors
Inclinometers: These sensors are based on accelerometers that resolve the direction of the gravity vector. The gravity vector is a known quantity that is constant in magnitude and direction. Figure 21 presents a basic example of a tilt sensor. These sensors are usually calibrated through a lookup table to determine the offset as a function of temperature. There are tilt sensors available for this purpose that can provide accurate information while the vehicle is stationary. Low cost devices can provide bank and elevation information with an accuracy of up to 0.1 degrees. These sensors cannot be used when the platform is moving, since they are sensitive to translational and rotational accelerations. Pendulum accelerometers are also used to determine the bank and elevation angles. These devices use the principle that a pendulum with an equivalent radius of the earth will always point to the vertical irrespective of the acceleration. This effect can be approximated with appropriate feedback loop compensation. Although they are able to obtain some rejection of the undesired acceleration due to the movement of the platform, they can only be used under very low vehicle accelerations.

Vibration: Vibration analysis consists of measuring the frequency and strength of vibrations over a range of frequencies. Machinery vibration problems can consume excessive power and impose additional wear on bearings, seals, couplings and foundations. Vibrations
are typically caused by machinery misalignment and unbalance. These problems can be detected by obtaining the frequency response of the vibration of the machine under investigation. This is commonly done via analysis of the FFT of an acceleration signal. There are other applications where vibration monitoring may be useful, such as the analysis and identification of vibration sources, problems in structures, vibration and shock testing, and analysis to ensure products comply with the vibration specifications required for the application.

Figure 21: Basic principle of inclinometers. For a bank angle B, Vout = ScaleFactor × G × sin(B) + AccelerometerOffset

Heading: Another application of inertial sensors is in the use of a heading gyro. Integration of the gyro rate information provides the orientation of the vehicle. A good quality gyro will have zero or constant bias, and small noise variance. There are many types of gyroscopes, but few in the price range that can be afforded in commercial mobile robot applications. Fiber optic and vibratory gyroscopes have been released with satisfactory specifications and reasonable price ranges for many industrial applications.

Inertial Measurement Unit (IMU): A full inertial navigation system (INS) consists of at least three (triaxial) accelerometers and three orthogonal gyroscopes that provide measurements of acceleration in three dimensions and rotation rates about three axes. The physical implementation of inertial sensors can take one of two forms:

• Gimballed arrangement: This unit consists of a platform suspended by a gimbal structure that allows three degrees of freedom, as shown in Figure 22. This device is usually attached to a vehicle through the outermost gimbal. The vehicle can then perform any trajectory in 3D while the platform is maintained level with respect to a desired navigation frame using feedback control loops. These loops use the gyro signals to provide appropriate control signals to the actuators placed in each gimbal.
With this approach the platform can be maintained aligned with respect to a coordinate system. The information provided by the accelerometers can then be integrated directly to obtain position and velocities.
Figure 22: Gimballed six-degree-of-freedom INS. Each axis has a control loop to maintain the axis at a particular orientation with respect to a coordinate frame. The gyros provide the feedback information to control the actuators of each gimbal.

• Strapdown arrangement: An IMU assembled from low cost solid-state components is almost always constructed in a "strapdown" configuration. This term means that all of the gyros and accelerometers are fixed to a common chassis and are not actively controlled on gimbals to align themselves in a pre-specified direction. As shown in Figure 23, a gyro and an accelerometer are placed on each axis. This design has the advantage of eliminating all moving parts. The strapdown construction, however, means that substantially more complex software is required to compute and distinguish true linear acceleration from angular acceleration and body roll or pitch with respect to gravity. Once true linear acceleration has been determined, vehicle position may be obtained, in principle, by double integration of the acceleration. Vehicle orientation and attitude may also, in principle, be determined by integration of the rotation rates of the gyros. In practice this integration leads to unbounded growth in the position errors with time due to the noise associated with the measurements and the nonlinearity of the sensors.
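The unbounded error growth from double integration can be illustrated with a one-dimensional toy simulation (not from the text; the bias and noise figures are invented for the example):

```python
import random

def double_integrate(accels, dt):
    """Double-integrate acceleration samples to position (simple Euler steps)."""
    v = x = 0.0
    for a in accels:
        v += a * dt      # acceleration -> velocity
        x += v * dt      # velocity -> position
    return x

# Stationary vehicle, but the accelerometer output carries a small constant
# bias plus noise (illustrative values): the position error grows roughly
# quadratically, as 0.5 * bias * t^2.
random.seed(1)
dt, bias = 0.01, 0.01                               # 100 Hz, 0.01 m/s^2 bias
accels = [bias + random.gauss(0.0, 0.05) for _ in range(60 * 100)]
pos_err = double_integrate(accels, dt)
# After 60 s the bias alone accounts for about 0.5 * 0.01 * 60^2 = 18 m.
```

This is why a pure strapdown solution must be aided by external sensors over any extended period.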
5.4 Coordinate Frames
Inertial navigation systems require the transformation of measurement quantities between different frames of reference. The accelerometers and gyros measure
acceleration and rotation rates with respect to an inertial frame. Other sensors, such as the Global Positioning System (GPS), report information in the Earth-Centered Earth-Fixed (ECEF) coordinate frame. The navigation system needs to transform all the information to a common coordinate frame to provide the best estimate, and sometimes to report the output in various other coordinate frames. This section introduces the different coordinate systems used in this course.

Figure 23: Strapdown six-degree-of-freedom INS. Each axis has an accelerometer and a gyro. The gyro signals are used to determine the attitude of the unit and to compensate the acceleration information for gravity.

Inertial Frame

According to Newton's laws of motion, an inertial frame is a frame that does not rotate or accelerate. It is almost impossible to find a true inertial frame; a good approximation is a frame fixed with respect to a distant star. For practical purposes we define an inertial frame with origin at the earth's centre of mass, with the x axis pointing towards the vernal equinox at time T0, the z axis along the earth's spin axis, and the y axis completing the right-handed system.
Earth-Centered Earth-Fixed (ECEF)

This frame has its origin at the mass centre of the Earth; the x axis points to the Greenwich meridian in the equatorial plane, the y axis lies at 90 degrees east in the same plane, and the z axis points along the rotation axis of the reference ellipsoid. This frame is not an inertial frame. It can be approximated to an inertial frame by a negative rotation equivalent to the Greenwich Mean Sidereal Time (GMST). The earth's geoid is a surface that minimizes the difference with the earth's sea level. This geoid is approximated with an ellipsoid around the earth's minor axis. Different parameters are available for different areas of the world; the most common reference ellipsoid used is the WGS 84.

Local Level Earth Frame

This frame, also known as the Geographic or Earth frame, is defined locally relative to the earth's geoid. The x axis points in the North direction, the z axis is perpendicular to the tangent of the ellipsoid, pointing towards the interior of the earth (not necessarily to its centre), and the y axis points east to complete a right-handed orthogonal system. This frame is also referred to as the Transport frame.

Wander Frame

The Local Level frame presents some numerical problems when integrating the data from an inertial system close to the poles. In these areas significant rotations around the z axis are required to maintain the x axis pointing North, even for small movements in the y direction. This problem can be solved by performing all the computation in a frame that is not required to point North. The x axis then wanders from North at a rate chosen by the user.

Geocentric Frame

This frame is similar to the Local Level frame, with the difference that the z axis points to the centre of the earth.

Local Geodetic Frame

This coordinate system is very similar to the Local Level frame. The main difference is that when the system is in motion the origin of the tangent plane is fixed, while in the Local Level frame the origin is the projection of the platform origin onto the earth's geoid. This frame is mainly used to perform navigation in local areas relative to a given origin point.

Body Frame

Most navigation systems have sensors attached to the vehicle. The body frame is a frame that is rigidly attached to the vehicle. The origin of this frame can be arbitrarily chosen, although some locations may simplify the kinematic models.
Summary on frames
Figure 24 shows the coordinate systems used in this course. Absolute sensors like GPS provide position in ECEF coordinates, using either X, Y, Z or latitude, longitude and height (Φ, λ, H). Inertial sensors return information in the body frame, which is in permanent rotation and translation with respect to the navigation frame N, E, D shown in the figure. All the sensory information needs to be converted to the navigation frame prior to performing the data fusion task that obtains the best position estimate. This is one of the tasks of the navigation system.
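As a concrete aside, converting a GPS geodetic position (Φ, λ, H) to ECEF X, Y, Z on the WGS 84 ellipsoid is a standard computation; a minimal sketch using the published WGS 84 constants:

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS 84 geodetic coordinates (deg, deg, m) to ECEF X, Y, Z in metres."""
    a = 6378137.0                 # WGS 84 semi-major axis (m)
    e2 = 6.69437999014e-3         # WGS 84 first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1.0 - e2) + h) * math.sin(lat)
    return x, y, z

x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
# On the equator at the Greenwich meridian: X = a = 6378137 m, Y = Z = 0.
```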
Figure 24: Different navigation frames, showing the ECEF axes X, Y, Z, latitude and longitude (φ, λ) and the local N, E, D axes.
5.5 Inertial Navigation Equations
This section presents the derivation of the inertial navigation equations for global and local area navigation. For additional information the reader is referred to [24].

5.5.1 The Coriolis Theorem
Navigation with respect to a rotating frame such as the earth requires the Coriolis theorem. The theorem states that the velocity of a vehicle with respect to a ﬁxed inertial frame vi ,
is equal to the ground speed of the vehicle ve (the velocity of the vehicle with respect to the earth), plus the velocity of the vehicle due to the rotation rate of the earth with respect to the inertial frame, ωie, at the point on earth where the vehicle is located, r. That is,

vi = ve + ωie × r,    (106)

where ωie = [0 0 Ω] and Ω is the rotation rate of the Earth. Differentiating this equation with respect to the inertial frame gives

v̇i|i = v̇e|i + ω̇ie|i × r + ωie × vi.    (107)

Now assuming that the angular acceleration of the Earth is zero, that is ω̇ie = 0, and substituting Equation 106 into 107,

ωie × vi = ωie × [ve + ωie × r] = ωie × ve + ωie × [ωie × r],    (108)

hence Equation 107 becomes

v̇i|i = v̇e|i + ωie × ve + ωie × [ωie × r].    (109)

Now v̇i|i = f + g, where f is the specific force encountered due to the vehicle motion and g is the force due to gravity. Hence Equation 109 becomes

f + g = v̇e|i + ωie × ve + ωie × [ωie × r],    (110)

that is,

v̇e|i = f − [ωie × ve] + [g − ωie × [ωie × r]].    (111)
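Equation 111 can be checked numerically; a minimal pure-Python sketch with a hand-rolled cross product (the position, velocity and specific-force values are illustrative, not from the text):

```python
def cross(a, b):
    """Cross product of two 3-vectors represented as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ve_dot(f, g, omega_ie, v_e, r):
    """Equation 111: ground acceleration from specific force f, gravity g,
    Earth rate omega_ie, ground speed v_e and position r."""
    coriolis = cross(omega_ie, v_e)
    centripetal = cross(omega_ie, cross(omega_ie, r))
    return tuple(f[i] - coriolis[i] + g[i] - centripetal[i] for i in range(3))

OMEGA = 7.292115e-5                      # Earth rotation rate (rad/s)
omega_ie = (0.0, 0.0, OMEGA)
r = (6378137.0, 0.0, 0.0)                # a point on the equator (illustrative)
v_e = (0.0, 100.0, 0.0)                  # 100 m/s eastward (illustrative)
f = (9.81, 0.0, 0.0)                     # specific force (illustrative)
g = (-9.81, 0.0, 0.0)
a_e = ve_dot(f, g, omega_ie, v_e, r)
# The small residual in a_e[0] is the Coriolis term plus the centripetal term.
```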
Equation 111 simply states that the acceleration over the Earth's surface is equal to the acceleration measured by the accelerometers, compensated for the Coriolis acceleration due to the velocity of the vehicle over a rotating Earth, ωie × ve, and for the local gravity acceleration, which comprises the Earth's gravity, g, and the term due to the rotation of the Earth, also known as the centripetal acceleration, ωie × [ωie × r].

5.5.2 Navigation in large areas
The Transport frame is used when a vehicle travels over large distances around the earth such as in missile or air transport applications. The objective of this framework is to obtain the ground speed of a vehicle with respect to a navigation frame, generally expressed in North, East and Down coordinates, at a given location, expressed in latitude, longitude
and height. In such situations the rotation of the earth needs to be taken into consideration, along with the fact that the navigation frame is also rotating. As an example, constantly keeping the North axis aligned on a flight from Sydney to London requires the navigation frame to rotate constantly. This, coupled with the fact that the earth itself is rotating during the transit, causes a Coriolis acceleration involving the navigation frame rotation, the earth rotation and the ground speed of the vehicle. Hence, the Transport framework is defined as: the acceleration of the vehicle with respect to the navigation frame, v̇e|n, is equal to the acceleration of the vehicle with respect to the inertial frame, v̇e|i, minus the Coriolis acceleration due to the navigation frame rotating, ωen, on a rotating earth, ωie. That is,

v̇e|n = v̇e|i − (ωie + ωen) × ve.    (112)

Substituting in Equation 111,

v̇e|n = fn − [ωie × ve] + [g − ωie × [ωie × r]] − (ωie + ωen) × ve
     = fn − (2ωie + ωen) × ve + [g − ωie × [ωie × r]],    (113)
where fn represents the acceleration in the navigation frame. Since the inertial unit measures the acceleration in the body frame, fb, this acceleration needs to be transformed into the navigation frame, that is,

fn = Υfb.    (114)
The transformation Υ can take one of three forms, as will be discussed in Section 5.6. Regardless of how Υ is obtained, the rotation data relating the body to the navigation frame, ωbn, is required. However, the gyros measure the total inertial rotation ωib, which comprises ωbn plus the rotation of the navigation frame with respect to the inertial frame; the latter is the rotation rate of the navigation frame with respect to the earth plus the rotation rate of the earth with respect to the inertial frame,

ωbn = ωib − [ωie + ωen],    (115)

where ωen is known as the transport rate and is defined as

ωen = [ ve/(Ro + h),  −vn/(Ro + h),  −ve tan L/(Ro + h) ].    (116)
L is the latitude of the vehicle, Ro is the radius of the Earth and h is the height of the vehicle above the earth’s surface. The transport rate is what rotates the navigation frame as the vehicle travels across the surface of the earth.
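Equation 116 is straightforward to evaluate; a small sketch with invented vehicle values:

```python
import math

def transport_rate(vn, ve, lat_deg, h, Ro=6378137.0):
    """Equation 116: rotation rate of the navigation frame over the Earth
    (rad/s), given north/east ground speed (m/s), latitude and height."""
    L = math.radians(lat_deg)
    return (ve / (Ro + h),
            -vn / (Ro + h),
            -ve * math.tan(L) / (Ro + h))

# A vehicle moving east at 30 m/s at 45 degrees latitude (illustrative values):
w_en = transport_rate(vn=0.0, ve=30.0, lat_deg=45.0, h=0.0)
# The rates are of the order of 5e-6 rad/s, which is why the transport term
# only matters when navigating over very large distances.
```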
5.5.3 Navigation in local areas, Earth Frame
In the Earth frame the acceleration of the vehicle with respect to the earth, v̇e|e, is equal to the acceleration of the vehicle with respect to the inertial frame, v̇e|i, minus the Coriolis acceleration due to the velocity of the vehicle, ve, over a rotating earth, ωie,

v̇e|e = v̇e|i − ωie × ve.    (117)
The Earth frame will be used throughout this work since its definition is well suited to land vehicle applications. Substituting in Equation 111 gives

v̇e|e = fe − 2[ωie × ve] + [g − ωie × [ωie × re]].    (118)
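A single discrete velocity update based on Equation 118 might be sketched as follows, assuming fe has already been rotated into the earth frame and the total gravity term (gravity minus the centripetal term) has been precomputed; all numeric values are illustrative:

```python
def cross(a, b):
    """Cross product of two 3-vectors represented as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def velocity_update(v_e, f_e, g_t, omega_ie, dt):
    """One Euler step of Equation 118: v_e(k+1) = v_e(k) + dt * a_e, where
    a_e = f_e - 2*(omega_ie x v_e) + g_t and g_t is the total gravity."""
    cor = cross(omega_ie, v_e)
    a_e = tuple(f_e[i] - 2.0 * cor[i] + g_t[i] for i in range(3))
    return tuple(v_e[i] + dt * a_e[i] for i in range(3))

omega_ie = (0.0, 0.0, 7.292115e-5)       # Earth rate (rad/s)
v = (10.0, 0.0, 0.0)                     # illustrative ground speed
f_e = (0.0, 0.0, 9.81)                   # illustrative specific force
g_t = (0.0, 0.0, -9.81)                  # total gravity (illustrative)
v = velocity_update(v, f_e, g_t, omega_ie, 0.01)
# The Coriolis term deflects the velocity by a tiny amount each step.
```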
Again, since the inertial unit measures the acceleration in the body frame, the acceleration measurements need to be transformed into the earth frame,

fe = Υfb,    (119)
where Υ now comprises the rotation rates ωbe which relate the body to the earth frame. However, the gyros measure the total inertial rotation ωib, which comprises ωbe plus the rotation of the earth with respect to the inertial frame transformed over to the body frame, so

ωbe = ωib − Υb_e ωie.    (120)

5.5.4 Schuler Damping
The major difference between the Transport and Earth frame equations is the transport rate defined by Equation 116. The transport rate is subtracted from the body rates and hence defines the attitude of the system with respect to the Transport frame. However, incorporating the transport rate also adds a bound to the errors associated with inertial systems. This is because the acceleration in the navigation frame is twice integrated to provide velocity and then position. The velocity is used to determine the transport rate, which is then subtracted from the body rates. For example, assume that the pitch is in error by a small angle δβ. This error in turn causes an incorrect calculation of the acceleration due to the misalignment and the measurement of gravity, given by gδβ. Repeating this process shows that the misalignment feeds back onto itself. Looking at the characteristic equation of such a system, it may be observed that it exhibits simple harmonic motion with angular frequency

ωs = √(g/Ro),
corresponding to a period of 84.4 minutes. This is known as the Schuler oscillation. In essence, by implementing the Transport frame, the errors are bounded within the Schuler oscillations. If a system is tuned to the Schuler frequency, then some general rules of thumb can be obtained for inertial systems:

• An initial velocity error δvo causes a position error of δvo sin(ωs t)/ωs;

• An initial attitude error δβ causes a position error of δβ R (1 − cos(ωs t));

• An acceleration bias δf causes a position error of δf (1 − cos(ωs t))/ωs²; and

• A gyro bias δωb causes a position error of δωb R (t − sin(ωs t)/ωs).
Thus for all errors, except for gyro bias, the position error is contained within the Schuler oscillation. A gyro bias causes Schuler oscillations superimposed on an error that grows linearly with time. It is for this reason that, with accurate gyros, passenger airlines can travel vast distances relying solely on inertial navigation. A reasonable question is: "Why not use the Transport frame for land vehicle applications and hence benefit from this bounding property?" The Transport frame can be used for any inertial navigation implementation and thus allows for bounded error growth, irrespective of how large this error may be. However, as an example, a low cost inertial system with a typical gyro bias of 0.01 deg/sec will give rise to a bounded position error of 160000 km, while a gyro bias of less than 0.001 deg/hr, such as encountered with the ring laser gyros found in passenger airlines, gives rise to a bounded position error of 4 km. Thus Schuler bounding serves no purpose when using low grade inertial units. If the transport term in Equation 113 is removed, the resulting equation has a similar structure to that of the Earth frame, Equation 118. However, the Earth frame provides a better conceptual understanding when dealing with navigation within confined areas.
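The Schuler figure quoted above follows directly from g and the Earth's radius:

```python
import math

g = 9.81            # gravitational acceleration (m/s^2)
Ro = 6378137.0      # Earth radius (m)

omega_s = math.sqrt(g / Ro)                  # Schuler angular frequency (rad/s)
period_min = 2.0 * math.pi / omega_s / 60.0  # Schuler period in minutes
# period_min evaluates to roughly 84.4 minutes, the Schuler period.
```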
5.6 Attitude Algorithms
The transformation matrix Υ is required to evaluate the acceleration vector in the desired frame given the acceleration vector in the body frame. This transformation matrix has to be accurate, since misalignment (errors in the estimated attitude) causes the components of the gravity vector measured by the accelerometers to be confused with true vehicle acceleration. Integrated over time, even small misalignment errors will cause large estimated errors. The transformation matrix consists of the roll, pitch and yaw angles required to rotate the body frame to the navigation frame, and hence is updated continuously, since the body frame is always rotating with respect to the navigation frame. The updating process propagates this matrix based on data obtained from the gyros; thus any errors in Υ are caused
by both the physical errors associated with the gyros and the errors in the algorithms used to propagate the transformation matrix. The physical errors are discussed in Section 5.2. There are a number of algorithms available for attitude propagation. However, regardless of the type of attitude algorithm implemented, in their analytical forms they provide identical results. The choice of algorithm depends on the processing power available, on the errors introduced by the discretisation of the algorithms, and on the errors introduced by the simplification of the algorithms. Algorithms are simplified to ease the computational load on the processor. There are in principle three approaches to propagating the transformation matrix: the Euler, Direction Cosine and Quaternion methods. The Euler approach is conceptually easy to understand, although it has the greatest computational expense. Although the Quaternion approach is computationally inexpensive compared to the other two, it has no direct physical interpretation in terms of the motion of the vehicle. The Direction Cosine approach fits in between, both in terms of physical understanding and computational expense.

5.6.1 Euler Representation
The Euler representation is the easiest form to understand. It is the mathematical analog of a gimbal system. The roll θ, pitch β and yaw γ angles used to represent the rotations from the body frame to the navigation frame are placed in a transformation matrix En_b by multiplying the consecutive rotation matrices representing a roll, followed by a pitch and then a yaw,

En_b = [γc  −γs  0]   [βc   0  βs]   [1   0    0]
       [γs   γc  0] × [0    1   0] × [0  θc  −θs]
       [0    0   1]   [−βs  0  βc]   [0  θs   θc]

     = [βc γc    θs βs γc − θc γs    θc βs γc + θs γs]
       [βc γs    θs βs γs + θc γc    θc βs γs − θs γc]    (121)
       [−βs      θs βc               θc βc            ]

where the subscripts s and c represent the sine and cosine of the angle. As the body frame rotates, the new angles are obtained from

θ̇ = (ωy sin θ + ωz cos θ) tan β + ωx    (122)
β̇ = ωy cos θ − ωz sin θ    (123)
γ̇ = (ωy sin θ + ωz cos θ) sec β,    (124)
where ω is the rotation rate measured by the gyros about the x, y and z axes. As is apparent from these equations, both the roll and yaw updates have singularities when the pitch of the vehicle is π/2. This is equivalent to the physical phenomenon of "gimbal lock"
in gimbal systems. In a strapdown system, as the vehicle approaches π/2, theoretically infinite word length and iteration rates are required in order to accurately obtain the result. It is for this reason that Euler updates are not generally implemented, although this is not a problem for land vehicle applications. Furthermore, this approach is computationally expensive due to the trigonometry required in the updates and in forming En_b.

Discretisation

The discretisation procedure is as follows:

• The new roll angle is obtained through integration by

θ(i + 1) = ∫ θ̇ dt + θ(i),    (125)
and similarly for pitch and yaw. The angles are then placed into the matrix to form En_b(i + 1).

• Obtain the acceleration data in the body frame, fb = [fx, fy, fz], and evaluate the acceleration in the navigation frame,

fn = En_b fb.    (126)

• Integrate the acceleration to obtain velocity and then position.

5.6.2 Direction Cosine Matrix (DCM) Representation
The direction cosine matrix Cn_b is a 3 × 3 matrix containing the cosines of the angles between the body frame and the navigation frame. Cn_b is propagated by

Ċn_b = Cn_b Ωbn,    (127)

where Ωbn is a skew-symmetric matrix representing the rotation rates as measured by the gyros,

Ωbn = [0    −ωz   ωy]
      [ωz    0   −ωx]    (128)
      [−ωy   ωx   0 ]

The initial Cn_b is simply the initial En_b.
Discretisation

The discretisation procedure is as follows:
• Obtain the gyro outputs ωx, ωy, ωz and integrate to determine the change in angle φx, φy, φz.

• Determine the angular vector magnitude

φ = √(φx² + φy² + φz²).    (129)

• Determine the series coefficients

α = sin φ / φ,    (130)
β = (1 − cos φ) / φ².    (131)

• Form the angular skew matrix

Φ = [0    −φz   φy]
    [φz    0   −φx]    (132)
    [−φy   φx   0 ]

• Update the direction cosine matrix

Cn_b(i + 1) = Cn_b(i)[I3×3 + αΦ + βΦ²].    (133)

The transformation matrix update can be simplified if one assumes small angular rotations (< 0.05 deg), thus

Cn_b(i + 1) = Cn_b(i)[I + ∆θ],    (134)

with

∆θ = [0     −∆θY   ∆θP]
     [∆θY    0    −∆θR]    (135)
     [−∆θP   ∆θR   0  ]

• Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame,

fn = Cn_b(i + 1) fb.    (136)

• Integrate the acceleration to obtain velocity and then position.
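The DCM discretisation steps above (Equations 129 to 133) can be sketched in pure Python, using nested lists for the 3 × 3 matrices; the test rotation at the bottom is an invented example:

```python
import math

def dcm_update(C, wx, wy, wz, dt):
    """Update a direction cosine matrix with gyro rates (rad/s) over dt using
    the series form C(i+1) = C(i) [I + alpha*Phi + beta*Phi^2]."""
    px, py, pz = wx * dt, wy * dt, wz * dt            # angle increments
    phi = math.sqrt(px * px + py * py + pz * pz)      # Eq. 129
    if phi < 1e-12:
        return C
    alpha = math.sin(phi) / phi                       # Eq. 130
    beta = (1.0 - math.cos(phi)) / (phi * phi)        # Eq. 131
    Phi = [[0.0, -pz, py], [pz, 0.0, -px], [-py, px, 0.0]]   # Eq. 132

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    Phi2 = matmul(Phi, Phi)
    upd = [[(1.0 if i == j else 0.0) + alpha * Phi[i][j] + beta * Phi2[i][j]
            for j in range(3)] for i in range(3)]
    return matmul(C, upd)                             # Eq. 133

# Rotating at 90 deg/s about z for 1 s in 100 small steps turns the identity
# DCM into (approximately) a 90-degree yaw rotation.
C = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(100):
    C = dcm_update(C, 0.0, 0.0, math.radians(90.0), 0.01)
```

Because α and β are evaluated exactly here, each step is an exact rotation; the truncated-series versions discussed in Section 5.7.1 introduce the drift shown in Table 6.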
5.6.3 Quaternion Representation
In the Quaternion approach the rotation from one frame to another is accomplished by a single rotation about a vector through an angle. A quaternion consists of four parameters which are a function of this vector and angle. The initial quaternion is obtained from the roll, pitch and yaw angles defined in the Euler representation, or alternatively through the Direction Cosine parameters,

q(i) = [ 0.5 √(1 + Cn_b11 + Cn_b22 + Cn_b33) ]
       [ (Cn_b32 − Cn_b23) / (4 q(1))        ]    (137)
       [ (Cn_b13 − Cn_b31) / (4 q(1))        ]
       [ (Cn_b21 − Cn_b12) / (4 q(1))        ]

Discretisation

The discretisation procedure is as follows:

• Obtain the gyro outputs ωx, ωy, ωz and integrate to determine the change in angle φx, φy, φz.

• Determine the quaternion operator coefficients

γ = sin(φ/2) / φ,    (138)
δ = cos(φ/2).    (139)

• Determine the quaternion operator

h(i) = [δ, γφx, γφy, γφz]T.    (140)

• Generate the quaternion matrix and update

q(i + 1) = [q(i)1  −q(i)2  −q(i)3  −q(i)4]
           [q(i)2   q(i)1  −q(i)4   q(i)3]  h(i).    (141)
           [q(i)3   q(i)4   q(i)1  −q(i)2]
           [q(i)4  −q(i)3   q(i)2   q(i)1]
• Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame,

fn = q(i + 1) fb q∗(i + 1),    (142)
where q∗ is the complex conjugate of q. A less computationally expensive approach is to convert the quaternion back into Cn_b by

Cn_b = [q1² + q2² − q3² − q4²    2(q2q3 − q1q4)          2(q2q4 + q1q3)       ]
       [2(q2q3 + q1q4)           q1² − q2² + q3² − q4²   2(q3q4 − q1q2)       ]    (143)
       [2(q2q4 − q1q3)           2(q3q4 + q1q2)          q1² − q2² − q3² + q4²]

and then

fn = Cn_b fb.

• Integrate the acceleration to obtain velocity and then position.

5.6.4 Discussion of Algorithms
All three approaches produce identical results in their complete form. The Euler approach is not commonly used due to the presence of the roll/yaw singularity. Although this does not pose a problem for land vehicle applications, the Euler approach requires high computational effort and thus powerful processing if fast updates are required. The Quaternion approach has the benefit of requiring the update of only four variables. It is most often used in military and space applications. In order to save on computational cost, the quaternion is converted to a DCM representation for the transformation of the acceleration from the body frame to the navigation frame. This can be a significant computational burden if fast sampling rates are required. To overcome this, most military users implement a dual sampling system. As an example, a Laser INS (LINS) system typically samples Ring Laser Gyros (RLGs) at 1200 Hz and the accelerometers at 400 Hz. The lower sampling rate is sufficient for typical LINS navigation applications. Three samples of the RLGs are used to update the quaternion, thus forming a "super" quaternion representing a single rotation. This quaternion is then converted to the DCM representation and used to transform the acceleration measurements. The DCM approach is commonly used in all forms of inertial navigation. It also offers the best representation for land navigation. Although nine variables are updated at each sample, it is less computationally expensive than the Euler representation, and unlike the Quaternion approach, it requires no conversion to another representation in order to transform the acceleration data from the body frame to the navigation frame. In order
to satisfy this small angle assumption, the update rates generally have to be high. For example, land vehicles can undertake rotation rates of 50deg/sec, and so a sampling rate of at least 1kHz is required (to keep the angles below 0.05deg). Such a sampling rate is high for low cost inertial systems although in more expensive systems used in military and space applications 20kHz is not uncommon.
5.7 Attitude evaluation error sources
The accuracy of the attitude evaluation can be affected by a number of factors, such as the approximations made by the computation algorithm and the hardware limitations of the technology used. This section presents a description of the most important sources of error.

5.7.1 Errors in attitude computation
Equations 130 and 131 presented the series coefficients required for the DCM updating process. If these coefficients could be determined exactly then the DCM generated would be exact. However, in practice a Taylor series expansion is required,

α = 1 − φ²/3! + φ⁴/5! − ...,    (144)
β = 1/2! − φ²/4! + φ⁴/6! − ....    (145)
The number of terms included in the expansion, and hence the accuracy of the series, describes the order of the algorithm. In [27] it is shown that the error in the DCM algorithm can be represented as

Ddc = (1/δt)(φ a1 cos φ − sin φ + φ² a2 sin φ),    (146)

where a1 = 1 and a2 = 0 for a first order algorithm, a1 = 1 and a2 = 0.5 for a second order algorithm, a1 = 1 − φ²/3! and a2 = 0.5 for a third order algorithm, and so on.
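The truncated series of Equations 144 and 145 can be compared against the exact coefficients numerically; a 20 deg/sec rotation sampled at 100 Hz is an illustrative choice of values:

```python
import math

def alpha_beta(phi, order):
    """Truncated Taylor series (Eqs 144, 145) for the DCM coefficients
    alpha = sin(phi)/phi and beta = (1 - cos(phi))/phi^2, keeping `order` terms."""
    alpha = sum((-1) ** k * phi ** (2 * k) / math.factorial(2 * k + 1)
                for k in range(order))
    beta = sum((-1) ** k * phi ** (2 * k) / math.factorial(2 * k + 2)
               for k in range(order))
    return alpha, beta

phi = math.radians(20.0) * 0.01      # per-sample angle: 20 deg/s at 100 Hz
exact = (math.sin(phi) / phi, (1.0 - math.cos(phi)) / phi ** 2)
for order in (1, 2, 3):
    a, b = alpha_beta(phi, order)
    err_a = abs(a - exact[0])
    # err_a shrinks by several orders of magnitude with each extra term,
    # mirroring the trend of Table 6.
```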
where a1 = 1 and a2 = 0 for a ﬁrst order algorithm, a1 = 1 and a2 = 0.5 for a second 2 order algorithm, a1 = 1 − φ and a2 = 0.5 for a third order algorithm and so on. 3! For example, consider a land vehicle whose axis experiences rotations in the order of 20deg/sec. Table 6 shows the drift in the algorithm when the sampling rates are set to 10, 125, 200 and 500Hz and the order of the algorithm is increased from a ﬁrst to a third. What is apparent from the table is that although the performance of the algorithm increases at each increase in sampling rate, there is a more marked improvement by increasing the order of the algorithm. Furthermore, although the sampling rates suggested in this table are not comparable to
the kHz sampling rates implemented by high grade inertial systems, it is the relative magnitude between the sampling rate and the continuous rotation rate that is of concern. Hence, as is apparent from the table, the drift errors caused by the simplification of the algorithm are kept small as long as the sampling rate is of higher magnitude than the continuous rotation rate.

Order of Algorithm    10 Hz     125 Hz       200 Hz        500 Hz
        1             30        0.23         0.04          0.012
        2             14.7      0.11         0.021         0.006
        3             0.001     5 × 10⁻⁷     4.85 × 10⁻⁷   1.4 × 10⁻⁹

Table 6: Algorithm drift rates in °/hr caused by sampling a continuous rotation rate of 20 deg/sec.

5.7.2 Vibration
Vehicle vibration has a detrimental effect on the performance of inertial navigation systems. This is due to the low bandwidth of low cost inertial sensors, causing attenuation or total rejection of motion vibration at high frequencies. Vibration is generally a combination of both angular and translational motion. If the bandwidths of the gyros and accelerometers are equal, and the vibration of the vehicle at frequencies above this bandwidth has smaller magnitude than the resolution of the sensors (which is likely with low cost units), then vibratory motion can safely be ignored. However, the bandwidth of low cost gyros is almost always significantly less than that of low cost accelerometers. Vibrations cause small changes in attitude which are measured by the accelerometers as a component of gravity. This acceleration component will be inaccurately evaluated as apparent vehicle motion, since the gyros cannot measure this change in attitude. To overcome this problem it is necessary to introduce vibration absorbers into the physical system to remove high frequency vibration. As an example, ceramic VSGs have a bandwidth of approximately 70 Hz, while a particular brand of piezo-accelerometers has a bandwidth of approximately 300 Hz. Thus the approach would be to obtain vibration absorbers that attenuate as much of the signal as possible above 70 Hz. However, while low cost inertial units have the benefit of being light weight, it is difficult to find absorbers that can attenuate high frequencies given the light load on the absorbers themselves. Furthermore, it is important to ensure that the natural frequency associated with the absorbers does not lie in a region where any excitement of the vibration causes inaccurate readings of the sensors. The effect of vibration on the attitude algorithms is termed "coning" or "non-commutativity".
Coning motion is simply defined as the cyclic motion of one axis due to the rotational motion of the other two axes. If one axis has an angular vibration rotation of A sin(ωt) and another axis A sin(ωt + ε), where ε is a phase difference, then the third axis will have a motion describing a cone. A perfect cone motion would occur if the phase difference between the two axes is ninety degrees; otherwise the motion will exhibit some other form (and in reality would be random). In [10] three forms of coning errors are discussed:

• True Coning Errors Type 1: Where the true coning motion due to vibration stimulates the presence of coning motion in the algorithms used to determine the attitude of the inertial unit.

• True Coning Errors Type 2: Where the true coning motion due to vibration causes a real net angular rotation rate ωc, which if not properly taken into account will be assumed to be rotation of the vehicle when it is not.

• Pseudo Coning Errors Type 3: When there is no real coning motion due to vibration, but coning motion is produced in the algorithms due to induced errors in the inertial unit or in the algorithms themselves.

Type 1 errors arise because of sensor errors, namely gyro scale factor and gyro orthogonality errors, both of which are large with low cost inertial units. An example of Type 2 errors may occur when an inertial unit uses RLGs. This gyro works on the principle of two opposing light beams operating in a fixed optical path having the same optical frequencies. When the gyro undergoes rotation the frequencies change in order to maintain the resonant frequency required for the laser beams. At very low rotations this does not happen and the two beams assume the same frequency, thus it appears as though there is no rotation; a phenomenon known as "lock-in". To remove this the gyros are "dithered". That is, a low amplitude angular vibration is applied to the gyro. This vibration occurs at high frequency and hence can cause coning motion.
One way to minimise the effect is to dither the gyros at different frequencies. Type 3 errors are of most concern. They arise from errors associated with truncation in the attitude algorithm and the limited bandwidth of the algorithm, both of which are alleviated as the order of the algorithm is increased along with the sampling rate. The problem with coning motion is determining whether the right order of algorithm or sampling rate has been chosen, and whether effects such as quantisation errors or dithering are causing any errors. The approach taken in industry is to place the system on a vibration table which can subject the unit to coning motion. The drift in position and attitude is an indication of the coning error magnitude. This, however, is only beneficial in cases where the application is known and the inertial unit is built to suit. For generic low cost inertial applications one purchases the unit "off the shelf" and hence such techniques are not available. In these situations vibration can cause errors in the attitude evaluation and hence drive navigation errors, thus requiring aiding from an external sensor in order to bound these errors.
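The coning geometry can be visualised with a toy signal pair; with a ninety degree phase difference the two vibrating axes trace a circle (the base of the cone). The amplitude and frequency below are invented for the illustration:

```python
import math

A = 0.01                     # angular vibration amplitude (rad), illustrative
w = 2.0 * math.pi * 50.0     # 50 Hz vibration frequency, illustrative
phase = math.pi / 2.0        # 90 degree phase difference -> perfect cone

ts = [i * 1e-4 for i in range(1000)]
xy = [(A * math.sin(w * t), A * math.sin(w * t + phase)) for t in ts]
# With a 90 degree phase difference every sample satisfies x^2 + y^2 = A^2,
# i.e. the two axes trace a circle of radius A (the cone's base).
radius_err = max(abs(math.hypot(x, y) - A) for x, y in xy)
```

With any other phase difference the trace becomes an ellipse, matching the text's observation that imperfect phase produces "some other form" of motion.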
5.7.3 Minimising the Attitude Errors
In light of the arguments presented, the approach taken to minimise the errors associated with attitude algorithms for low cost inertial units consists of the following steps:

• The bandwidth of the gyros employed will be the limiting factor in performance. Generally the bandwidth of the accelerometers employed and the sampling rate achievable will be higher than the bandwidth of the gyros. If vehicle vibration exceeds the bandwidth of the gyros, then the only reasonable choice is to use vibration absorbers to attenuate incoming signals above the gyro bandwidth, taking into consideration the natural frequency of the absorber.

• The sampling rate of the inertial sensors should be above the Nyquist frequency and, furthermore, should be as high as possible.

• The higher the order of the attitude algorithm the better; the choice comes down to the sampling rate and the processing power available.
5.8 Error Models
The accuracy of inertial navigation depends on the accuracy of the sensors employed and on the computer implementation of the inertial algorithms. To analyse how these two effects contribute to the inertial navigation system development, inertial error equations are derived. These equations provide the position, velocity and attitude error growth, given the sensor and algorithm errors. The development of the error equations is accomplished by perturbing (or linearising) the nominal equations. The perturbation can take two forms:

• The Computer Approach (ψ angle), where the analysis occurs about the computed geographical position determined by the inertial computer. Since the computer frame is known, the perturbations of the angular position and rate are zero.

• The True Frame Approach (φ angle), where the perturbation occurs about the true position of the vehicle. The true position is not known and hence all the elements are perturbed.

The benefit of perturbing in the computer frame is that the misalignment ψ between the computer frame and the true frame is independent of the position of the computer frame. When perturbing in the true frame the misalignment is coupled with the position. However, perturbation in the true frame is simpler. Both perturbation techniques produce identical results, as shown in [1, 2, 4]. Perturbation analysis has traditionally centred on the Transport or Wander Azimuth frames because of their widespread use. The perturbation of the Earth frame is the objective of this section and is carried out using the true frame approach. It will be shown that in this frame misalignment propagation is independent of position, thus delivering the same benefit as the computer frame approach.

Perturbation in the true frame is accomplished by stating that the error in a particular state is the difference between the estimated state and the true state. Given that the Earth frame velocity equation (Equation 118) is

    \dot{v}^e = C^e_b f^b - 2[\omega^e_{ie} \times v^e] + [g - \omega^e_{ie} \times [\omega^e_{ie} \times r^e]],    (147)

and defining the total gravity acceleration as

    g^t = g - \omega^e_{ie} \times [\omega^e_{ie} \times r^e],    (148)

then by definition of the perturbation

    \delta\dot{v}^e = \dot{\tilde{v}}^e - \dot{v}^e
                    = [\tilde{C}^e_b \tilde{f}^b - C^e_b f^b] - [2\tilde{\omega}^e_{ie} \times \tilde{v}^e - 2\omega^e_{ie} \times v^e] + [\tilde{g}^t - g^t],    (149)
where \tilde{v}^e is the evaluated velocity vector and v^e is the true velocity vector. The evaluated transformation matrix \tilde{C}^e_b is the true transformation matrix C^e_b misaligned by the misalignment angles \delta\psi. The misalignment is represented in skew-symmetric form [\delta\psi\times]. Hence

    \tilde{C}^e_b = [I_{3\times3} - [\delta\psi\times]] C^e_b,    (150)

thus

    \delta\dot{v}^e = C^e_b \tilde{f}^b - C^e_b f^b - [\delta\psi\times] C^e_b \tilde{f}^b - \delta[2\omega^e_{ie} \times v^e] + \delta g^t.    (151)

If the errors in the Coriolis and gravity terms are insignificant then

    \delta\dot{v}^e = C^e_b \delta f^b - [\delta\psi\times] C^e_b \tilde{f}^b
                    = C^e_b \delta f^b - [\delta\psi\times] \tilde{f}^e.    (152)

Now

    -[\delta\psi\times] \tilde{f}^e = [\tilde{f}^e\times] \delta\psi,    (153)

hence

    \delta\dot{v}^e = [\tilde{f}^e\times] \delta\psi + C^e_b \delta f^b.    (154)

Thus the propagation of velocity errors in the Earth frame, \delta\dot{v}^e, is equal to the acceleration error in the Earth frame due to the misalignment of the frame, [\tilde{f}^e\times]\delta\psi, together with the errors associated with the measurement of the acceleration, \delta f^b, transformed to the Earth frame via C^e_b. The errors in the acceleration measurements are associated with the accelerometers and are discussed in the next section.

Rearranging Equation 150,

    [\delta\psi\times] = I_{3\times3} - \tilde{C}^e_b C^{eT}_b,    (155)

and differentiating gives

    [\delta\dot{\psi}\times] = -\dot{\tilde{C}}^e_b C^{eT}_b - \tilde{C}^e_b \dot{C}^{eT}_b.    (156)
The updating processes of the transformation matrix for the true and evaluated frames are

    \dot{C}^e_b = C^e_b \Omega^b_{be}   and   \dot{\tilde{C}}^e_b = \tilde{C}^e_b \tilde{\Omega}^b_{be}.    (157)

Substituting into Equation 156 gives

    [\delta\dot{\psi}\times] = -[\tilde{C}^e_b \tilde{\Omega}^b_{be}] C^{eT}_b - \tilde{C}^e_b [C^e_b \Omega^b_{be}]^T
                             = -\tilde{C}^e_b [\tilde{\Omega}^b_{be} - \Omega^b_{be}] C^{eT}_b.    (158)

Perturbation of the rotation update matrix gives

    \delta\Omega^b_{be} = \tilde{\Omega}^b_{be} - \Omega^b_{be},    (159)

therefore

    [\delta\dot{\psi}\times] = -\tilde{C}^e_b \delta\Omega^b_{be} C^{eT}_b.    (160)

Substituting in Equation 150,

    [\delta\dot{\psi}\times] = -[[I_{3\times3} - [\delta\psi\times]] C^e_b] \delta\Omega^b_{be} C^{eT}_b
                             = -C^e_b \delta\Omega^b_{be} C^{eT}_b + [\delta\psi\times] C^e_b \delta\Omega^b_{be} C^{eT}_b.    (161)
The product of the small terms [\delta\psi\times] and \delta\Omega^b_{be} is assumed zero. Therefore

    [\delta\dot{\psi}\times] = -C^e_b \delta\Omega^b_{be} C^{eT}_b
                             = -C^e_b \delta\Omega^b_{be} C^b_e
                             = -[C^e_b \delta\omega^b_{be} \times].    (162)

The error form of Equation 120 is

    \delta\omega^b_{be} = \delta\omega^b_{ib} - C^b_e \delta\omega^e_{ie}.    (163)

Given that the rotation rate of the Earth is known, so that \delta\omega^e_{ie} = 0, Equation 162 can be rewritten as

    [\delta\dot{\psi}\times] = -[C^e_b \delta\omega^b_{ib} \times]   or   \delta\dot{\psi} = -C^e_b \delta\omega^b_{ib}.    (164)
That is, the propagation of attitude errors \delta\dot{\psi} is simply the error associated with the rotation rate measurements, \delta\omega^b_{ib}, transformed to the Earth frame via C^e_b. The errors associated with these measurements are defined purely by the gyros and are discussed in the next section. Note that the propagation of the attitude errors is independent of position. Thus although the derivation was approached through perturbation of the true frame, the result delivers the same benefit as found in the computer frame approach; this is due to the structure of the inertial equations in the Earth frame.

Given the velocity and attitude error propagation equations and an input trajectory, the error growth of the inertial system can therefore be determined. The only terms which remain to be determined are the errors in the acceleration measurement, \delta f^b = [\delta f_x, \delta f_y, \delta f_z]^T, and the errors in the rotation rate, \delta\omega^b_{ib} = [\delta\omega_x, \delta\omega_y, \delta\omega_z]^T. These terms can be experimentally evaluated.
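As a concrete illustration, the two propagation equations (154 and 164) can be integrated numerically for a given sensor error. The following is a minimal sketch, not part of the original derivation: it assumes simple Euler integration, an aligned Earth frame (C^e_b = I) and illustrative error magnitudes; the function names are hypothetical.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x] such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate_errors(dv, dpsi, C_be, f_e, df_b, dw_b, dt):
    """One Euler step of the Earth-frame error equations:
       velocity error rate = [f_e x] dpsi + C_be df_b   (Equation 154)
       attitude error rate = -C_be dw_b                 (Equation 164)"""
    dv_new = dv + (skew(f_e) @ dpsi + C_be @ df_b) * dt
    dpsi_new = dpsi + (-C_be @ dw_b) * dt
    return dv_new, dpsi_new

# Example: aligned frames, a small constant gyro error tilts the platform,
# which in turn couples the specific force into a growing velocity error.
C_be = np.eye(3)
f_e = np.array([0.0, 0.0, -9.81])    # specific force in the Earth frame
df_b = np.zeros(3)                   # no accelerometer error
dw_b = np.array([0.0, 1e-4, 0.0])    # 0.1 mrad/s gyro error
dv, dpsi = np.zeros(3), np.zeros(3)
for _ in range(1000):                # 10 s at 100 Hz
    dv, dpsi = propagate_errors(dv, dpsi, C_be, f_e, df_b, dw_b, 0.01)
```

After ten seconds the misalignment has grown linearly while the velocity error has grown quadratically, which is the characteristic unaided error behaviour discussed in this section.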
5.9 Calibration and Alignment of an Inertial Measurement Unit
The objective of calibration is to determine the biases in the accelerometers and gyros. This is achieved by first determining the initial alignment of the inertial unit and hence, in turn, evaluating the initial direction cosine matrix.

5.9.1 Alignment Techniques
If the accelerometer readings are known perfectly then the attitude of the inertial unit can be determined by resolving the gravity component. In order to determine the gravity component measured by the accelerometers, Equation 136 is rearranged to give

    f^b = (C^n_b)^{-1} f^n.    (165)

Since C^n_b is orthogonal, its inverse is simply its transpose. The inertial unit is stationary and hence the only acceleration measured is that due to gravity along the vertical axis. Thus

    f^n = [0  0  -g]^T.    (166)

Equation 165 becomes (writing c and s for cosine and sine)

    [f_{xT}]   [ c_\beta c_\gamma                                c_\beta s_\gamma                                -s_\beta           ] [ 0  ]
    [f_{yT}] = [ -c_\theta s_\gamma + s_\theta s_\beta c_\gamma  c_\theta c_\gamma + s_\theta s_\beta s_\gamma  s_\theta c_\beta   ] [ 0  ]
    [f_{zT}]   [ s_\theta s_\gamma + c_\theta s_\beta c_\gamma   -s_\theta c_\gamma + c_\theta s_\beta s_\gamma  c_\theta c_\beta  ] [ -g ]    (167)

where the subscript T represents the true acceleration component due to gravity. Hence

    f_{xT} = g\sin\beta    (168)
    f_{yT} = -g\sin\theta\cos\beta    (169)
    f_{zT} = -g\cos\theta\cos\beta    (170)
Although no sensor is perfect, the higher the accuracy, the tighter the error tolerances and thus the better the alignment accuracy which can be obtained. As the accuracy of the sensors decreases, due to the errors mentioned previously, the alignment accuracy also decreases. Rearranging Equation 168 to determine the pitch β, and substituting this into either Equation 169 or 170, solves for the roll θ. This procedure for determining alignment is called "coarse alignment". If the accuracy provided by coarse alignment is not sufficient for navigation performance then external alignment information will be required. This information can come from tilt sensors or GNSS attitude information, for example. Coarse alignment is generally used for rapid alignment, such as in missile applications, where the time required for averaging data from external sensors is not available.

The final term which needs to be evaluated is the heading of the vehicle, γ. Gyrocompassing is the self-determination of the heading. Once the attitude of the unit is found, the readings from the gyros can be used to determine the components of the Earth's rotation, and by knowing the initial position of the vehicle, the heading can be resolved. However, with low cost gyros, gyrocompassing is generally not possible due to the high noise and low resolution of these sensors. In such cases external information is required to determine the initial heading.
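The coarse alignment described above can be sketched directly from Equations 168–170: pitch from the x-accelerometer, roll from the ratio of the y and z channels. The function name is hypothetical and noise-free readings are assumed.

```python
import math

def coarse_alignment(fx, fy, fz, g=9.81):
    """Coarse alignment from stationary accelerometer readings.
    Equation 168: fx = g sin(beta)          ->  beta  = asin(fx / g)  (pitch)
    Equations 169/170: fy / fz = tan(theta) ->  theta = atan2(-fy, -fz) (roll)"""
    beta = math.asin(max(-1.0, min(1.0, fx / g)))   # clamp against noise
    theta = math.atan2(-fy, -fz)
    return beta, theta

# A unit pitched up 10 deg and rolled 5 deg would measure (Equations 168-170):
beta_true, theta_true = math.radians(10.0), math.radians(5.0)
fx = 9.81 * math.sin(beta_true)
fy = -9.81 * math.sin(theta_true) * math.cos(beta_true)
fz = -9.81 * math.cos(theta_true) * math.cos(beta_true)
beta, theta = coarse_alignment(fx, fy, fz)
```

With real low cost sensors the recovered angles would be corrupted by the bias and noise errors discussed previously, which is exactly why external tilt or heading information is needed.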
5.9.2 Alignment for Low Cost Inertial Units
Due to the low accuracy of the inertial units implemented here, none of the common self-alignment or calibration methods provides results accurate enough for navigation. Instead, the inertial unit uses two tilt sensors which provide measurements of the bank and elevation of the inertial platform, thus providing the initial alignment information required. A positive bank will cause the y-accelerometer to measure a component of gravity equal to

    f_{yT} = -g\sin(bank).    (171)

Similarly, a positive elevation will cause the x-accelerometer to measure

    f_{xT} = g\sin(elevation).    (172)

Equating Equation 168 with 172, and Equation 169 with 171, the pitch and roll angles are

    \beta = elevation    (173)
    \theta = \sin^{-1}\left(\frac{\sin(bank)}{\cos\beta}\right).    (174)
To resolve the heading, a compass is used. Equations 173 and 174 are then used along with the initial heading angle to determine the initial C^n_b.

5.9.3 Calibration
The simplest method of obtaining the biases of the inertial sensors is to measure the readings from each sensor whilst the vehicle is stationary. These bias values are used to calibrate the IMU. For the gyros, the bias is simply the reading from these sensors when the vehicle is stationary. However, the alignment of the inertial unit is required in order to determine the biases on the accelerometers. This is accomplished during the alignment stage, since the "expected" acceleration due to gravity can be determined and hence any anomalies in these values are attributed to bias. Thus the bias on the x-accelerometer is obtained by

    b_{fx} = f_x - f_{xT}    (175)
where f_x is the measured acceleration and f_{xT} is the expected acceleration obtained during the alignment stage. The bias is obtained similarly for the remaining accelerometers. For more information and experimental results the reader is referred to [21].
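The stationary calibration procedure above can be sketched as follows. This is a minimal illustration, assuming simulated stationary data and averaging over the samples; the function name and noise figures are assumptions, not part of the original text.

```python
import numpy as np

def calibrate_biases(gyro_samples, accel_samples, f_expected):
    """Estimate sensor biases from stationary data.
    Gyro bias: the mean stationary reading (the true rate is ~zero for
    low cost gyros, which cannot resolve the Earth rate).
    Accelerometer bias (Equation 175): b_f = mean(f_measured) - f_expected,
    where f_expected is the gravity component found during alignment."""
    gyro_bias = np.mean(gyro_samples, axis=0)
    accel_bias = np.mean(accel_samples, axis=0) - f_expected
    return gyro_bias, accel_bias

# Simulated 20 s of stationary data at 100 Hz for a level, aligned unit.
rng = np.random.default_rng(0)
f_expected = np.array([0.0, 0.0, -9.81])
gyro = 0.02 + 0.001 * rng.standard_normal((2000, 3))              # bias 0.02 rad/s
accel = f_expected + 0.15 + 0.01 * rng.standard_normal((2000, 3))  # bias 0.15 m/s^2
gyro_bias, accel_bias = calibrate_biases(gyro, accel, f_expected)
```

Averaging suppresses the white noise component, so the recovered values converge on the injected biases; in practice the averaging window is limited by how long the vehicle can be kept stationary.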
5.10 Position, Velocity and Attitude Algorithms
Figure 25 presents the tasks required to work with a full inertial measurement unit. The bank, elevation and heading information is used to calibrate and align the unit while the vehicle is stationary. Once the initial transformation matrix C^n_b(0) is obtained, it is updated at a fast rate with the gyroscope information to obtain C^n_b(k). This matrix is used to transform acceleration in the body coordinate frame to the navigation coordinate frame. Single and double integration are then performed to obtain the inertial velocity and position.
Figure 25: Inertial attitude, position and velocity prediction.
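The prediction loop of Figure 25 can be sketched as below. This is a first-order illustration only: it assumes a simple DCM update C ← C(I + [ω×]Δt), whereas a real implementation would use the higher-order, coning-compensated attitude algorithms of Section 5.7; the function name and NED gravity convention are assumptions.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def ins_step(C_bn, v, p, f_b, w_b, dt, g=np.array([0.0, 0.0, 9.81])):
    """One cycle of the prediction loop in Figure 25 (first-order sketch):
    1. update the direction cosine matrix with the gyro rates,
    2. transform the body-frame specific force to the navigation frame,
    3. add gravity and integrate to velocity and position (N, E, D)."""
    C_bn = C_bn @ (np.eye(3) + skew(w_b) * dt)   # attitude update
    a_n = C_bn @ f_b + g                          # navigation-frame acceleration
    v = v + a_n * dt                              # first integral: velocity
    p = p + v * dt                                # second integral: position
    return C_bn, v, p

# Stationary, level unit: the accelerometers read -g along Down and the
# gyros read zero, so velocity and position should remain zero.
C_bn, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    C_bn, v, p = ins_step(C_bn, v, p, np.array([0.0, 0.0, -9.81]),
                          np.zeros(3), 0.01)
```

Any bias left in the sensors after calibration would break this cancellation and produce exactly the drifting position solution that motivates the aiding architectures of the next section.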
6 GPS/INS Integration
Inertial sensors have been used in numerous applications for the past 50 years. This technology, originally developed for military purposes, has started to appear in industrial applications, made possible by the significant reduction in the cost of inertial sensors. Low cost, full six degree of freedom inertial systems are currently available for less than US$10,000. Unfortunately, this reduction in cost comes with a substantial reduction in quality: without any aiding, these units can only perform navigation for very short periods of time. The solution to this problem is to aid the inertial system with external information to maintain the error within certain bounds. The most common aiding sensor for outdoor applications has been GPS in all its forms (autonomous / differential / RTK). This section presents various navigation architectures that fuse GPS, INS and modelling information in an optimal manner.
6.1 Navigation Architectures for Aided Inertial Navigation Systems
The navigation architecture depends on the types of sensors and models employed. For aided inertial navigation systems, the inertial component can either be an Inertial Measurement Unit (IMU), which provides only the raw acceleration and rotation rate data, or an Inertial Navigation System (INS), which provides position, velocity and attitude information. The aiding source can either be considered as a sensor providing raw sensor information, or as a navigation system providing position, velocity and/or attitude information. The principal states of interest estimated by the filter, which govern the type of model implemented, are the position, velocity and attitude of the vehicle, or the position, velocity and attitude errors.

The most natural implementation of an aided inertial navigation system is to drive a non-linear filter with the raw acceleration and rotation rate data provided by the IMU, as shown in Figure 26. This implementation is known as a "direct" filter structure. The process model usually represents the kinematic relationship between the vehicle and the states of interest. The state vector is propagated by the model and the inertial data. The aiding information can be obtained from a navigation system, where observations such as position and velocity are supplied to the system. The estimate is in the form of the vehicle states.

Figure 26: The "direct" structure implements a non-linear filter to estimate the position, velocity and attitude of the vehicle. The inertial data is provided by an IMU and the aiding data by a navigation system.

The disadvantage of such an implementation is that the prediction equations have to be evaluated at each sample of the inertial data. This requires substantial processing due to the high sample rates of IMUs. Another problem is that, in general, high frequency manoeuvres are filtered out by the linear (or linearised) model. The consequence of this omission is that the filter will unnecessarily attenuate the high frequency information provided by the INS, and the system may not be able to track fast manoeuvres.

To overcome this, an INS should be employed so that a constant stream of vehicle state information is available external to the filter. To correct any errors, the filter estimates the errors in these states. The inertial information can still be obtained even if no aiding information is available. Figure 27 illustrates this implementation, which is known as the "indirect feedback" method. The observation delivered to the filter is the "observed error" of the inertial navigation solution, that is, the difference between the inertial navigation solution and the navigation solution provided by the aiding source. In this implementation the inertial high frequency information is fed directly to the output without attenuation, while the Kalman filter provides the low frequency correction to the IMU. Since the observation is the observed error of the inertial navigation solution, and since the filter is estimating the errors in the inertial navigation solution, the process model has to be in the form of an error model of the standard inertial navigation equations. Thus the inertial navigation equations are linearised to form error equations, and since the equations are linearised, the filter implementation takes on a linear form. Although the prediction stage is still implemented, it can run at the same rate as the sampling rate of the INS or at longer intervals.

Figure 27: The "indirect feedback" method allows a linear filter implementation and minimises the computational overhead on the filter structure.

The disadvantage of the indirect feedback method is the unbounded error growth of the INS, which causes an unbounded growth in the error observation delivered to the filter. This poses a problem for the linear filter, since only small errors are allowed due to the linearisation process. That is, large drift in the state values from the INS causes large observation errors to be fed into the filter, invalidating the assumptions held by the filter. The optimal implementation is illustrated in Figure 28 and is known as the "direct feedback" method. In this structure the estimated errors are fed back to the INS, thus minimising the growth of the observed error that is delivered as an observation to the filter.

Figure 28: The "direct feedback" method overcomes the problem of large observation values being provided to the filter by correcting the INS directly.
6.2 Linear, Direct Feedback Implementation
This navigation architecture is driven with observation errors formed from the predicted and observed positions and/or velocities. In order to implement the proposed Kalman filter, a model of the propagation of the errors is needed. Equations 154 and 164 describe the inertial error equations in the Earth frame for velocity and attitude. The rate of change of the position error is simply the velocity error. Thus the error equations are

    \delta\dot{p} = \delta v    (176)
    \delta\dot{v} = [\tilde{f}^e\times]\delta\psi + C^e_b \delta f^b    (177)
    \delta\dot{\psi} = -C^e_b \delta\omega^b_{ib}.    (178)

C^e_b transforms vectors from the body frame to the Earth frame. As in the GNSS implementation, the Earth frame is represented by North (N), East (E) and Down (D) axes and hence, for clarity, the transformation matrix is written as C^n_b.

The navigation loop implements a linear Kalman filter. The state vector consists of the error states,

    x = [\delta p_N  \delta p_E  \delta p_D  \delta v_N  \delta v_E  \delta v_D  \delta\psi_N  \delta\psi_E  \delta\psi_D]^T.    (179)

The process model is

    F = [ 0_{3\times3}  I_{3\times3}  0_{3\times3}
          0_{3\times3}  0_{3\times3}  [\tilde{f}^n\times]
          0_{3\times3}  0_{3\times3}  0_{3\times3} ]    (180)

where

    [\tilde{f}^n\times] = [  0    -f_D   f_E
                             f_D   0    -f_N
                            -f_E   f_N   0  ].
In compact form,

    [ \dot{r}    ]   [ 0  I  [\tilde{f}^n\times] ] [ r    ]   [ I_{3\times3}  0      0     ] [ 0           ]
    [ \dot{v}    ] = [ 0  0  0                   ] [ v    ] + [ 0             C^n_b  0     ] [ f_a         ]
    [ \dot{\phi} ]   [ 0  0  0                   ] [ \phi ]   [ 0             0      C^n_b ] [ \omega_{ib} ]

where the first block row of the system matrix pairs r with v and the second pairs v with \phi as in Equation 180, i.e.

    \dot{x} = F_g x + G w.    (181)
This model can be extended to consider the biases and drifts of the accelerometers and gyros. The gyro error model presented here consists of a first order Markov process with correlation time τ plus white noise ν_g:

    \dot{\theta} = -(1/\tau)\theta + \nu_g,    E[\nu_g] = 0,    E[\nu_g \nu_g^T] = R_g.    (182)
The matrix T_g that takes into account the time constants of the three gyros is

    T_g = [ -1/\tau_x   0          0
             0         -1/\tau_y   0
             0          0         -1/\tau_z ].    (183)
A standard error model for the accelerometers consists of a random constant component plus white noise:

    \dot{a} = \nu_a,    E[\nu_a] = 0,    E[\nu_a \nu_a^T] = R_a.    (184)
The matrix T_a that models the accelerometer errors is

    T_a = 0_{3\times3}.    (185)

Finally, the augmented model that considers the new states has the following extended matrices:

    F = [ F_g          B ]        B = [ 0_{3\times3}  0_{3\times3}
          [ 0_{6\times9}  T ],          C^n_b         0_{3\times3}
                                        0_{3\times3}  C^n_b       ],

    T = [ T_a           0_{3\times3}
          0_{3\times3}  T_g          ],

    G = [ I_{3\times3}  0      0      0             0
          0             C^n_b  0      0             0
          0             0      C^n_b  0             0
          0             0      0      I_{3\times3}  0
          0             0      0      0             I_{3\times3} ],    (186)

    x = [ r  v  \Phi  f_g  \omega^b_{ib} ]^T,

where r, v and \Phi are the errors in position, velocity and angular misalignment, and f_g and \omega^b_{ib} are the errors in the accelerometers and gyros in the body frame.
Note that in this implementation the acceleration inputs are fed directly into the process model and hence there is no control vector, u = 0. The process model F comprises time-varying terms, so in order to determine the state transition matrix, numerical methods are in general required. If, however, F is constant over the sampling interval, then the state transition matrix is simply F(k) = \exp(F\Delta t), where \Delta t is the sampling time of the inertial unit. In the case of land vehicles, the dynamics are of a much lower frequency than the sampling frequency. Hence over the sampling period F remains constant, and

    F(k) = \exp(F\Delta t) = I + F\Delta t + \frac{(F\Delta t)^2}{2!} + \ldots
The discretisation is only taken to the second order term, since any extra terms added to the final state transition matrix are of negligible value. Thus the state prediction is

    \hat{x}(k|k-1) = F(k)\hat{x}(k-1|k-1).    (187)
The advantage of using this model is that the implementation is linear and the model is independent of the vehicle dynamics. Initially the inertial sensors are calibrated and all the errors are removed, thus \hat{x}(1|0) is set to zero. From Equation 187, the state prediction in the next cycle is then also zero, and so forth. Therefore the state prediction is always zero and no correction of the inertial errors occurs during the prediction cycle. That is, the position, velocity and attitude information obtained from the navigation system is simply the data from the INS, since there are no predicted errors with which to correct it. However, due to the drift in the INS solution, there is a corresponding growth in the uncertainty in the states, and this is evaluated through the predicted covariance matrix

    P(k|k-1) = F(k)P(k-1|k-1)F^T(k) + Q(k).    (188)
This is a 9 × 9 matrix representing the uncertainty in the inertial predicted errors. Q(k) is the discrete process noise matrix, also of dimension 9 × 9, and is evaluated using [18]

    Q(k) = \frac{1}{2}\left[ F(k)G(k)Q_c(k)G^T(k)F^T(k) + G(k)Q_c(k)G^T(k) \right]\Delta t.    (189)
Equations 176 – 178 show how the noise terms (in acceleration and angular velocity) are converted from the body frame to the navigation frame, and hence Q_c(k), which is the uncertainty in the process model over a period of one second, is defined as

    Q_c(k) = [ \delta\tilde{p}  0           0
               0                \delta f^b  0
               0                0           \delta\omega^b_{ib} ].    (190)

\delta\tilde{p} is the noise injected into the position error evaluation, and its value is purely dependent on the uncertainty in the evaluation of the position from the integration of velocity. The remaining error terms in this matrix are the noise values on the sensor readings, that is, the errors in the body frame, which can be obtained from manufacturer specifications or through experimentation. G(k) is

    G(k) = [ I_{3\times3}  0          0
             0             C^n_b(k)   0
             0             0          -C^n_b(k) ].    (191)
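The prediction stage described by Equations 180 and 187–191 can be sketched as follows for the 9-state error filter. This is an illustrative sketch under assumed noise values; the function name and the choice of Q_c entries are not from the original text.

```python
import numpy as np

def predict(x, P, f_n, Qc, C_bn, dt):
    """Prediction stage of the 9-state error filter (Equations 180, 187-191).
    F is built from the navigation-frame specific force f_n; the transition
    matrix keeps terms to second order: F(k) = I + F dt + (F dt)^2 / 2."""
    fN, fE, fD = f_n
    F = np.zeros((9, 9))
    F[0:3, 3:6] = np.eye(3)                    # d(dp)/dt = dv
    F[3:6, 6:9] = np.array([[0.0, -fD, fE],    # [f x] misalignment coupling
                            [fD, 0.0, -fN],
                            [-fE, fN, 0.0]])
    Fk = np.eye(9) + F * dt + (F @ F) * dt**2 / 2.0
    G = np.zeros((9, 9))                       # Equation 191
    G[0:3, 0:3] = np.eye(3)
    G[3:6, 3:6] = C_bn
    G[6:9, 6:9] = -C_bn
    GQG = G @ Qc @ G.T
    Qk = 0.5 * (Fk @ GQG @ Fk.T + GQG) * dt    # Equation 189
    x = Fk @ x                                 # Equation 187
    P = Fk @ P @ Fk.T + Qk                     # Equation 188
    return x, P

x = np.zeros(9)                  # errors start at zero after calibration
P = np.eye(9) * 1e-4
Qc = np.diag([1e-4] * 3 + [1e-3] * 3 + [1e-6] * 3)   # assumed noise values
x, P = predict(x, P, np.array([0.0, 0.0, -9.81]), Qc, np.eye(3), 0.01)
```

Consistent with the discussion above, the predicted error state stays at zero while the covariance grows; only the update stage of the next section changes the state itself.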
6.3 Aiding in Direct Feedback Configurations
In a direct feedback structure, the model implemented in the filter is a linear error model representing the errors in the vehicle states, generally position, velocity and attitude. When an observation becomes available, the filter estimates the errors in these states. Since the model is an error model of the inertial equations, the observation z(k) is the observed error of the inertial navigation solution, and not the observation delivered by the aiding sensor. Thus if an aiding system provides position and velocity data, the observation vector becomes

    z(k) = [ z_P(k) ]  =  [ P_{inertial}(k) - P_{GPS}(k) ]
           [ z_V(k) ]     [ V_{inertial}(k) - V_{GPS}(k) ].    (192)
Figure 29 illustrates the observation structure [18]. The true acceleration, velocity and position of the vehicle have noise added to them to represent the measurements taken by the sensors. The acceleration, as measured by the inertial navigation system, is integrated twice to obtain the indicated velocity and position of the vehicle. The acceleration information is obtained from the accelerometers, and it is assumed that the acceleration due to gravity has been compensated for.
Figure 29: Illustration of how the observation measurements z_P and z_V are obtained from the inertial and aiding information.

Defining the terms \delta P(k) and \delta V(k) as the position and velocity errors in the inertial data after the integration process, the observation model becomes

    z(k) = [ P_{inertial}(k) - P_{GPS}(k) ]
           [ V_{inertial}(k) - V_{GPS}(k) ]

         = [ (P_T(k) + \delta P(k)) - (P_T(k) - v_P(k)) ]
           [ (V_T(k) + \delta V(k)) - (V_T(k) - v_V(k)) ]

         = [ \delta P(k) ] + [ v_P(k) ]
           [ \delta V(k) ]   [ v_V(k) ].    (193)

The observation is thus the error between the inertial indicated position and velocity and that of the aiding sensor, and the uncertainty in this observation is reflected by the noise of the aiding observation. This offers another benefit of the direct feedback implementation, concerning filter tuning: the noise on the observation is the noise on the aiding sensor. Thus once an inertial unit and process model are fixed, the process noise matrix Q(k) is also fixed, and tuning the filter is based solely on the observation noise
matrix R(k). The estimate of the error states at time k, given all observations up to time k, can then be evaluated using the standard Kalman filter update equations:

    \hat{x}(k|k) = \hat{x}(k|k-1) + W(k)\nu(k),    (194)
where W(k) is a gain matrix produced by the Kalman filter and \nu(k) is the innovation vector,

    \nu(k) = z(k) - H(k)\hat{x}(k|k-1).    (195)

The gain matrix is

    W(k) = P(k|k-1)H^T(k)S^{-1}(k),    (196)

where S(k) is known as the innovation covariance and is obtained by

    S(k) = H(k)P(k|k-1)H^T(k) + R(k).    (197)

Finally, the covariance matrix is updated for this observation:

    P(k|k) = (I - W(k)H(k))P(k|k-1)(I - W(k)H(k))^T + W(k)R(k)W^T(k).    (198)
However, with this approach, since the prediction of the error states \hat{x}(k|k-1) is always zero, the state estimate becomes

    \hat{x}(k|k) = W(k)z(k).    (199)
That is, the update is simply a weighted sum of the observations. The observation model H(k) is

    H(k) = [ I_{3\times3}  0             0
             0             I_{3\times3}  0 ].    (200)
The updated position and velocity errors can now be used to correct the position and velocity of the INS:

    P_{inertial}(k|k) = P_{inertial}(k|k-1) - \delta p(k|k)    (201)
    V_{inertial}(k|k) = V_{inertial}(k|k-1) - \delta v(k|k).    (202)
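The update stage of Equations 194–202 can be sketched as follows for the 9-state error filter, using the Joseph form of Equation 198. This is an illustrative sketch with assumed covariance values; the function name is not from the original text.

```python
import numpy as np

def update(x_pred, P_pred, z, R):
    """Update stage of the direct feedback filter (Equations 194-200).
    z is the observed error (inertial minus GNSS position and velocity);
    since the predicted error state is always zero, x(k|k) = W(k) z(k)."""
    H = np.zeros((6, 9))
    H[0:3, 0:3] = np.eye(3)                       # position errors observed
    H[3:6, 3:6] = np.eye(3)                       # velocity errors observed
    S = H @ P_pred @ H.T + R                      # innovation covariance, Eq. 197
    W = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain, Eq. 196
    x = x_pred + W @ (z - H @ x_pred)             # Equation 194
    IWH = np.eye(9) - W @ H
    P = IWH @ P_pred @ IWH.T + W @ R @ W.T        # Joseph form, Equation 198
    return x, P

P_pred = np.eye(9) * 4.0                          # assumed uncertainty after drift
R = np.eye(6) * 1.0                               # assumed GNSS noise
z = np.concatenate([np.array([2.0, -1.0, 0.5]),   # P_inertial - P_GPS
                    np.array([0.1, 0.0, -0.1])])  # V_inertial - V_GPS
x, P = update(np.zeros(9), P_pred, z, R)
```

The estimated position and velocity errors in x would then be subtracted from the INS solution as in Equations 201 and 202; the attitude errors are handled through the direction cosine matrix correction described next.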
Equation 150 related the true direction cosine matrix to the estimated direction cosine matrix. In the terminology used in this section, this is written as

    C^n_b(k|k-1) = [I_{3\times3} - [\delta\psi\times]]C^n_b(k|k).    (203)

Rearranging this equation as

    C^n_b(k|k) = [I_{3\times3} - [\delta\psi\times]]^{-1}C^n_b(k|k-1)    (204)

provides the update equation for the direction cosine matrix once an estimate of the attitude errors is available. Since the term in the square brackets is orthogonal to first order, its inverse is simply its transpose (A^{-1} = A^T), and furthermore [\delta\psi\times] is skew-symmetric ([\delta\psi\times]^T = -[\delta\psi\times]), hence Equation 204 is written as

    C^n_b(k|k) = [I_{3\times3} + [\delta\psi\times]]C^n_b(k|k-1)    (205)

where

    [\delta\psi\times] = [  0             -\delta\psi_D   \delta\psi_E
                            \delta\psi_D   0              -\delta\psi_N
                           -\delta\psi_E   \delta\psi_N    0            ].    (206)
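The correction of Equations 205–206 can be sketched as follows. The example misalignment values are illustrative only, and the function name is an assumption; the check simply misaligns an identity matrix via Equation 203 and recovers it to first order.

```python
import numpy as np

def correct_dcm(C_bn_pred, dpsi):
    """Correct the direction cosine matrix with the estimated attitude
    errors dpsi = [dpsi_N, dpsi_E, dpsi_D] (Equations 205-206)."""
    dN, dE, dD = dpsi
    skew_dpsi = np.array([[0.0, -dD, dE],
                          [dD, 0.0, -dN],
                          [-dE, dN, 0.0]])
    return (np.eye(3) + skew_dpsi) @ C_bn_pred

# Sanity check: a small misalignment applied (Equation 203) and then
# corrected should recover the true matrix to first order.
dpsi = np.array([1e-3, -2e-3, 0.5e-3])
C_true = np.eye(3)
dN, dE, dD = dpsi
skew_dpsi = np.array([[0.0, -dD, dE],
                      [dD, 0.0, -dN],
                      [-dE, dN, 0.0]])
C_pred = (np.eye(3) - skew_dpsi) @ C_true     # misaligned matrix
C_corrected = correct_dcm(C_pred, dpsi)
```

The residual after correction is of order |δψ|², which illustrates why the small-misalignment assumption discussed below is required.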
Note that [\delta\psi\times] is in the navigation frame, yet it is used to correct the C^n_b matrix, whose elements are defined in the body frame. It is assumed that the misalignment errors are small and hence the errors about the navigation frame are equal to those about the body frame. When large misalignments are encountered, these linear assumptions are not valid; [16] deals with such situations.

6.3.1 Real Time Implementation Issues
Filter Consistency and Fault Detection The filter implementations discussed provide various methods of estimating the states of interest. However, unless the true values of those states are known, there is no way of determining whether the filter is computing correct estimates. The only information available from the real world is the observation, and hence the only measure available for determining the behaviour of the filter is the difference between the observation and the predicted observation, that is, the innovation (Equations 195 and 225 for the Kalman filter and EKF respectively). If the filter is operating correctly, the innovation must be both unbiased and white, with covariance S(k). To determine whether this is the case, the innovation is normalised,

    \gamma = \nu^T(k)S^{-1}(k)\nu(k).    (207)
If the filter assumptions are correct then the samples of γ follow a χ² distribution with m degrees of freedom (the number of observations). Besides indicating filter consistency, Equation 207 can be used as a gating function. When an observation is obtained, Equation 207 is formed, and if the value γ is less than a predefined threshold, the observation is accepted. This provides a means of detecting faults in the observations. The threshold value is obtained from standard χ² tables and is chosen based on the confidence level required. For example, with a 95% confidence level and an observation vector comprising three position and three velocity components, the threshold is γ = 12.6.

Detection of Multipath High frequency faults arise when the GNSS signals undergo multipath errors. Multipath lengthens the time delay of the signals, affecting the fix of a standard differential receiver, and also alters the phase of the signal, thus affecting a carrier phase differential fix. Another high frequency fault, although it occurs less often and with less effect, arises when the receiver utilises a different set of satellites to determine the position fix. The accuracy of the fix is dependent on the geometry of the observed satellites, and changes in satellite configuration due to blockages of the satellite view will in turn alter the resulting fix. Both forms of high frequency fault cause abrupt jumps in the position and velocity fixes obtained by the GNSS receiver.

High frequency faults are therefore environment dependent. An open area such as a quarry will be less likely to produce multipath errors than, for example, a container terminal. Consequently, the tuning of the filter which fuses the inertial and GNSS data is dependent on the environment. The most common method of rejecting multipath errors requires the receiver to distinguish between the true signal and the reflected signal.
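The gating test described above (Equation 207 against a χ² threshold) can be sketched as follows; the innovation values are illustrative, with the multipath case mimicking the abrupt jump in the Down axis just described.

```python
import numpy as np

def gate(nu, S, threshold=12.6):
    """Innovation gating (Equation 207): accept the observation when the
    normalised innovation gamma = nu^T S^-1 nu falls below the chi-squared
    threshold (12.6 for 95% confidence with 6 degrees of freedom)."""
    gamma = float(nu.T @ np.linalg.inv(S) @ nu)
    return gamma, gamma < threshold

S = np.eye(6) * 2.0                                       # innovation covariance
nu_good = np.full(6, 0.5)                                 # consistent GNSS fix
nu_multipath = np.array([0.5, 0.5, 15.0, 0.5, 0.5, 0.5])  # jump in Down axis
g1, ok1 = gate(nu_good, S)
g2, ok2 = gate(nu_multipath, S)
```

As discussed below, the per-axis entries of S(k) matter: if the poorer vertical accuracy of GNSS is not reflected in R(k), legitimate Down-axis innovations can trip the gate.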
How well the receiver can distinguish between these two signals depends on the accuracy of the receiver's internal timing procedures. However, these systems cannot reject multipath errors completely, and even with the constant improvement of GNSS technology, high frequency faults such as multipath always need to be factored into the development of the navigation system. Equation 207 can be used to maintain filter consistency by evaluating a gating function which has the property of being χ² distributed. This gating function can be used to determine whether the process or observation models are valid, or whether any observations are spurious; thus it can potentially determine whether multipath errors have occurred.

Due to satellite geometry, the GNSS fix in the vertical plane is significantly less accurate than that in the horizontal plane. Consequently, the fix in North and East may lie well within the validation region, whilst that of Down may exceed it and force the result of the gating function beyond the threshold if the same noise values were used for all the terms in the observation noise matrix R(k). However, if these noise terms are accounted for by taking the values from the GNSS receiver directly, then the validation gate will
detect multipath errors correctly. Similarly, changes in satellite geometry cause the Dilution of Precision (DOP) to vary. The DOP is a measure of the accuracy of the GNSS fix and is used in the GNSS position solution to obtain the optimum configuration of satellites which the receiver should track, if it cannot track all of them due to hardware limitations. Changes in satellite geometry occur when part of the sky is invisible to the receiver antenna due to obstacles blocking the GNSS signals; the receiver then has to obtain a new set of satellites. Consequently, a change in DOP will affect the GNSS solution, causing high frequency faults. These faults can be detected using the same technique as discussed for multipath errors. However, these changes are not as large as those encountered with multipath errors and generally go undetected, unless the accuracy of the inertial unit is comparable to that of the GNSS receiver.

Filter Tuning There are two stages in the filter flow: the prediction stage, where the predicted inertial errors are always zero and the uncertainty grows with time, and the estimation stage, where the estimates of the inertial errors are obtained from a weighted sum of the observations and the uncertainty is bounded. If no observation is obtained for an extended period of time, or equivalently if GNSS fixes are rejected due to errors, the filter will continuously cycle in prediction mode and no corrections to the inertial navigation solution will be made. The longer the duration without correction, the greater the uncertainty in the navigation solution. When a fix does occur, the observation may pass the gating function test even though the fix may be in error, since the uncertainty in the inertial navigation solution is large. This is shown in Figure ??.
As with any Kalman ﬁlter implementation, tuning lies in the choice of values for the process Q(k) and observation R(k) covariance matrices. For example, a large Q(k) will imply an inaccurate inertial system. During the prediction stage the uncertainty in the
inertial data will grow according to the magnitude of Q(k). When a GNSS fix does occur, there is a greater possibility the inertial unit will be corrected using the first available GNSS fix, irrespective of the accuracy of this fix. Likewise, small values in R(k) will imply accurate GNSS fixes, which may pose a problem when the fix is in error and is being fused with low accuracy inertial sensors. Thus tuning becomes a delicate adjustment of both the Q(k) and R(k) matrices, along with the employment of the gating function, Equation 207, in order to reject the high frequency faults of the GNSS. The variances along the diagonal of R(k) are determined simply by obtaining the GDOP values from the receiver and assuming there is no correlation between the fixes of the navigation axes (which is how the GDOP is provided). Determining the values for Q(k) depends on the noise level of the sensors implemented, which can be obtained from the manufacturer or through experimentation. The principal tuning parameters which need to be addressed are the variances of velocity and attitude. A large variance in the velocity terms will imply greater reliance on the GNSS velocity fixes. Furthermore, the greater the accuracy of the velocity estimates, the greater the dampening of the attitude errors. If there are no attitude fixes then only the velocity information can contain the drift in attitude, and how quickly and accurately this can happen depends on the variance of the attitude.

Algorithm

There are two points of concern in a real time implementation of an inertially aided GNSS navigation system: processing power and latency. Data latency is extremely important, especially with the use of GNSS sensors, where it is common to find position latencies in the order of tens of milliseconds and velocity latencies reaching over a second.
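Building R(k) from the receiver-reported per-axis accuracies, as described above, might look like the sketch below. The function name and numeric values are illustrative only, not part of any receiver's actual interface.

```python
import numpy as np

def observation_noise(sigma_n, sigma_e, sigma_d):
    """Diagonal R(k): the per-axis standard deviations reported by the
    receiver are squared, with no cross-correlation between navigation
    axes (which is how the GDOP-derived accuracies are provided)."""
    return np.diag([sigma_n**2, sigma_e**2, sigma_d**2])

# Down is typically the least accurate axis due to satellite geometry.
R = observation_noise(1.2, 1.2, 3.5)
print(R[2, 2])   # 12.25
```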
The processing power internally in a GNSS sensor is used to control the correlators to lock onto satellites, determine the ephemeris of the satellites, and then compute the position and velocity of the receiver with respect to some coordinate frame. The complexity of these calculations increases with the number of satellites used to determine the solution. Standard pseudorange solutions require the least computational processing, while RTK technology requires the most; if attitude information is required, this demands additional processing as well. The processing time is usually of no concern to most GNSS users, such as surveyors, or more generally those users that do not require real time information. To overcome this latency problem, careful consideration needs to be given to the real time implementation, and when the latency is large a complete redesign of the filter structure may be required. When the latency of the solution is low, the position information can be propagated using velocity data and a constant velocity model. This is quite sufficient for low speed vehicles. Problems arise when the latency of the GNSS solution is high and the vehicle dynamics are fast. A GNSS position solution with a latency of 50 ms will have an error of 0.8 m for a vehicle traveling at 60 km/h. If the
vehicle is moving with constant velocity in a straight line, then the position can simply be propagated forward. However, any deviation from a straight line will lead to incorrect estimates when the GNSS fix is fused with the current inertial state. What is required is to store the inertial data, process the estimates at the time that the GNSS fix should have occurred, and then propagate the inertial solution through the backlog of data. All is well if both the position and velocity GNSS data have equal latencies. However, if the GNSS receiver is required to obtain velocity and position with independent measurements, then alternative methods are needed. For example, in RTK systems a Doppler approach is used to obtain velocity, while the determination of the phase ambiguity is used for the position measurement. In such systems the latency of the velocity data can sometimes be over 1 s, and hence the position and velocity determination occurs at different points in time. To overcome this, GNSS manufacturers propagate the velocity through the position data so that both the position and velocity outputs occur at the same time with the same latencies. However, such an approach produces correlation between the position and velocity data, which is not ideal for navigation systems.

Hardware

It was shown before that extremely high sampling of inertial measurement sensors is not required in land field robotics, since the maximum rotation and acceleration frequencies are not high (with obvious consideration to vibration). In addition, the power of modern processors is sufficient to handle the data throughput required in these applications. The navigation functions required for the aided INS strapdown system are usually partitioned between the fast guidance unit and the navigation processor, as shown in Figure 30.
[Figure 30 shows the INS platform providing gyro and acceleration data at 200 Hz to the fast guidance unit (prediction), and the DGPS receiver providing position and velocity at 10 Hz to the navigation unit (Kalman filter, estimation). The guidance unit passes accelerations and the transformation matrix to the navigation unit at 10 Hz; the navigation unit returns bias and misalignment, and position and velocity corrections. Position and velocity are output at 200 Hz and 10 Hz respectively.]

Figure 30: INS/GPS Navigation architecture
The system can be implemented as two independent processes. The first process is implemented in the fast guidance unit. This process communicates with the inertial unit through a very fast link. The inertial measurement unit transmits triaxial gyro and acceleration information at a high frequency rate. The guidance unit integrates the direction cosine matrix and transforms the acceleration to the navigation frame. It also generates the predictions for velocity and position through single and double integration respectively. The positions, velocities, accelerations and the transformation matrix are transmitted to the navigation unit at a much lower rate. Although the vehicle state information is already transformed into the local navigation coordinate frame, the transformation matrix is still required to implement the data fusion algorithm. The navigation unit receives observed position and velocity information from the DGPS receiver at much lower rates, 5 to 20 Hz. It is responsible for implementing the direct feedback Kalman filter algorithm. The navigation algorithm flow can be implemented as follows:

1. Since the vehicle is generally autonomous, all guidance commands are known in advance and hence the navigation filter knows when the vehicle is stationary. Whilst the vehicle is stationary, the system reads in all acceleration, rotation rate and tilt sensor data from the inertial unit and provides the average of these sensor readings. The logging time for this average depends on the actual vibration present in the system.

2. Once the averaging process has been completed, calibration of the inertial unit is accomplished as described in Section 5.9. This is used to determine biases and to obtain the initial C^n_b matrix. At this stage the GPS position fix is used to determine the initial C^n_g matrix, Section 4.6.

3.
The navigation system then proceeds onto the inertial navigation solution, with the initial position as determined by the GPS receiver and the initial attitude as obtained from the alignment stage. Initially the GPS sensor may not provide the position of the vehicle until it has reached its most accurate solution, which in the case of RTK systems may take up to a few minutes. The filter is initialised, and the guidance computer is then informed that the navigation system is ready to provide the navigation solution.

4. As the vehicle moves, the acceleration and gyro values are read in and the biases removed, Equation 175. C^n_b is updated, Equation 133, and the acceleration in the navigation frame is computed, Equation 136. These values are then integrated to provide the position and heading of the vehicle.

5. If no GPS fix is available then return to Step 4; otherwise determine the GPS fix in the navigation frame, Equations 86 and 87. When the latency is small and the
vehicle dynamics are low, the velocity data can be used to propagate the position information using a constant velocity model with reasonable accuracy.

6. Fuse the inertial and GPS data as described in Section 6.2.

7. Use the gating function, Equation 207, to determine if there are any high frequency faults. If high frequency faults exist, then only send out the current state values as determined by the inertial system and return to Step 4. If the validation check has passed, then correct the inertial unit as shown in Section 6.2 and return to Step 4.

The estimation procedure occurs within the sampling time of the inertial unit; however, if this were not the case and again the latency is low, then the corrected velocity is used to propagate the corrected position value.
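The constant-velocity propagation of a delayed fix mentioned in Step 5 can be sketched as follows. This is a minimal illustration using the 50 ms / 60 km/h figures discussed earlier; the function and variable names are our own.

```python
import numpy as np

def propagate_fix(p_fix, v_fix, latency):
    """Bring a stale GNSS position fix forward to the current time
    using a constant-velocity model."""
    return p_fix + v_fix * latency

p = np.array([100.0, 50.0, 0.0])   # position (m, NED) at the measurement epoch
v = np.array([16.7, 0.0, 0.0])     # ~60 km/h due North
print(propagate_fix(p, v, 0.05))   # 50 ms latency -> ~0.83 m shift North
```

As the text notes, this simple propagation is only valid for near-straight-line motion; any turning during the latency interval requires re-processing the backlog of inertial data instead.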
6.4
Aiding Using a Vehicle Model
In this section we present a new type of aiding information that does not come from external sensors. This alternative method is based on the application of constraints on the inertial equations if the motion of the vehicle is itself constrained in some way. This approach uses the land vehicle’s motion to constrain the error growth of inertial units. The ﬁlter structure illustrated in Figure 31 is presented here.
[Figure 31 shows the filter structure: the IMU provides accelerations and rotation rates to a non-linear Kalman filter, the vehicle constraints provide velocity observations, and the filter outputs position, velocity and attitude estimates.]

Figure 31: Typical acceleration in land vehicle applications

The general three dimensional inertial equations are derived from Newton's laws and
are thus applicable to all forms of motion. The inertial equations on board a train apply equally as well to an aerobatic aircraft. However, the motion of the train is clearly different to that of the aircraft; in particular, the train is constrained to move along the rails. This fact can be used as a "virtual observation", constraining the error growth along the lateral and vertical directions. The motion of a general land vehicle is not much different from that of the train, and hence constrained motion equations can also be used. If the vehicle undergoes excessive slip or movement in the vertical direction off the ground, then extra modelling is required; however, the use of constraints as virtual observations is still valid.

6.4.1 General Three-Dimensional Motion
Let the motion of the vehicle be described by the state equation

\dot{x} = f(x, u) \quad (208)

where the vehicle state vector is x = [V_n^T, P_n^T, \gamma, \beta, \theta]^T and the measurements are u = [f_b^T, \omega_b^T]^T. Using the kinematic relationship between \omega_b and the rates of change of the Euler angles, and assuming that the rate of rotation of the earth is negligible, the state equations for vehicle motion can be written as

\dot{V}^n = C^n_b f^b \quad (209)
\dot{P}^n = V^n \quad (210)
\dot{\gamma} = (\omega_y \sin\theta + \omega_z \cos\theta) / \cos\beta \quad (211)
\dot{\beta} = \omega_y \cos\theta - \omega_z \sin\theta \quad (212)
\dot{\theta} = \omega_x + (\omega_y \sin\theta + \omega_z \cos\theta) \tan\beta \quad (213)

Equations 209-213 are the fundamental equations that enable the computation of the state x of the vehicle from an initial state x(0) and a series of measurements f_b and \omega_b. The Euler method is implemented here for a better conceptual understanding. It is important to note the following with respect to these equations.

1. These equations are valid for the general motion of a body in three-dimensional space.

2. Equations 209-213 represent a set of non-linear differential equations that can easily be solved using a variety of different techniques.
3. It is possible to linearise these equations, for sufficiently small sampling intervals, by incorporating all the elements of the direction cosine matrix C^n_b into the state equation.

Clearly, the rate of error growth can be reduced if the velocity of the vehicle and the Euler angles \beta and \theta can be estimated directly. It will be shown in the next section that this is indeed possible for a vehicle moving on a surface.

6.4.2 Motion of a Vehicle on a Surface
Motion of a wheeled vehicle on a surface is governed by two non-holonomic constraints. When the vehicle does not jump off the ground and does not slide on the ground, the velocity of the vehicle in the plane perpendicular to the forward direction (x-axis) is zero. Under ideal conditions, there is no side slip along the direction of the rear axle (y-axis) and no motion normal to the road surface (z-axis), so the constraints are

V_y = 0 \quad (214)
V_z = 0 \quad (215)
In any practical situation, these constraints are to some degree violated due to the presence of side slip during cornering and vibrations caused by the engine and suspension system. In particular the side slip is a function of the vehicle state as well as the interaction between the vehicle tyres and the terrain. A number of models are available for determining side slip, but these models require knowledge of the vehicle, tyre and ground characteristics that is not generally available. Alternatively, information from external sensors can be used to estimate slip online. As a first approximation however, it is possible to model the constraint violations as follows:

V_y - \eta_y = 0 \quad (216)
V_z - \eta_z = 0 \quad (217)
where \eta_y and \eta_z are Gaussian, white noise sources with zero mean and variances \sigma_y^2 and \sigma_z^2 respectively. The strength of the noise can be chosen to reflect the extent of the expected constraint violations. Using the following equation that relates the velocities in the body frame V^b = [V_x, V_y, V_z]^T to V^n,

V^b = [C^n_b]^T V^n
it is possible to write the constraint Equations 216 and 217 as a function of the vehicle state x and a noise vector w = [\eta_y, \eta_z]^T,

\begin{bmatrix} V_y \\ V_z \end{bmatrix} = M + \begin{bmatrix} \eta_y \\ \eta_z \end{bmatrix} \quad (218)

where

M = \begin{bmatrix} V_N(\sin\theta\sin\beta\cos\gamma - \cos\theta\sin\gamma) + V_E(\cos\theta\cos\gamma + \sin\theta\sin\beta\sin\gamma) + V_D\sin\theta\cos\beta \\ V_N(\sin\theta\sin\gamma + \cos\theta\sin\beta\cos\gamma) + V_E(\cos\theta\sin\beta\sin\gamma - \sin\theta\cos\gamma) + V_D\cos\theta\cos\beta \end{bmatrix} \quad (219)

The navigation frame implemented here is North (N), East (E) and Down (D). It is now required to obtain the best estimate for the state vector x, modelled by the state Equations 209-213, from a series of measurements f_b and \omega_b, subject to the constraint Equation 218.
6.5
Estimation of the Vehicle State Subject to Constraints
The state equation, obtained by the discretisation of Equations 209-213, is

\hat{x}(k|k-1) = f(\hat{x}(k-1|k-1), u(k)) \quad (220)

and the discrete time version of the constraint equation, obtained from Equation 218, is

z(k) = H(x(k)) + v(k) \quad (221)

Estimation of the state vector x subject to stochastic constraints can be done in the framework of an extended Kalman filter. It is proposed to treat Equation 221 as an observation equation where the "observation" at each time instant k is in fact identically zero. The Kalman filter algorithm proceeds recursively in three stages:

Prediction: The algorithm first generates a prediction for the state estimate and the state estimate covariance at time k according to

\hat{x}(k|k-1) = F(\hat{x}(k-1|k-1), u(k)) \quad (222)
P(k|k-1) = F_x(k) P(k-1|k-1) F_x^T(k) + Q(k) \quad (223)

The term F_x(k) is the Jacobian of the non-linear state transition function with respect to the predicted state x(k|k-1).
Observation: Following the prediction, it is assumed that an observation z(k) that is identically zero is made. An innovation is then calculated as follows

\nu(k) = z(k) - \hat{z}(k|k-1) \quad (224)

where z(k) is in fact set to zero. An associated innovation covariance is computed using

S(k) = H_x(k) P(k|k-1) H_x^T(k) + R(k) \quad (225)

Update: The state estimate and corresponding state estimate covariance are then updated according to

\hat{x}(k|k) = \hat{x}(k|k-1) + W(k)\nu(k) \quad (226)

where \nu(k) is the non-linear innovation,

\nu(k) = z(k) - H(\hat{x}(k|k-1)) \quad (227)

The gain matrix is

W(k) = P(k|k-1) H_x^T(k) S^{-1}(k) \quad (228)

where the term H_x(k) is the Jacobian of the observation model with respect to the estimated state. The updated covariance matrix is

P(k|k) = (I - W(k)H_x(k)) P(k|k-1) (I - W(k)H_x(k))^T + W(k)R(k)W^T(k) \quad (229)

6.5.1 Observability of the States
The extended Kalman filter algorithm obtains estimates of the state x. However, not all the state variables in this estimate are observable. For example, inspection of the state and observation equations suggests that the estimation of the vehicle position, P^n, requires direct integration and is therefore not observable. Furthermore, if the vehicle moves in a trajectory that does not excite the relevant degrees of freedom, the number of observable states may be further reduced. Intuitively, during motion along straight lines the forward velocity is the direct integral of the measured forward acceleration, and is therefore not observable. Clearly an analysis is required to establish the conditions for observability. As the state and observation equations are non-linear, this is not straightforward. In this
section, an alternative formulation of the state equations, which directly incorporates the non-holonomic constraints, is developed in order to examine this issue.

Consider the motion of a vehicle on a surface as shown in Figure 32.

Figure 32: Motion of a vehicle on a surface. The navigation frame is fixed and the body frame is on the local tangent plane to the surface and is aligned with the kinematic axes of the vehicle.

Assume that the non-holonomic constraints are strictly enforced and therefore the velocity vector V of the vehicle in the navigation frame is aligned with b_x. Let s, \dot{s} and \ddot{s} be the distance measured from some reference location to the current vehicle location along its path, together with its first and second derivatives with respect to time. Therefore,

V = \dot{s}\, b_x

The acceleration of the vehicle is given by

f = \dot{V} = \ddot{s}\, b_x + \dot{s}\, \dot{b}_x

As the angular velocity of the coordinate frame b is given by \omega_b,

f = \ddot{s}\, b_x + \dot{s}\, \omega_b \times b_x
f = \ddot{s}\, b_x + \dot{s}\omega_z b_y - \dot{s}\omega_y b_z
The components of the acceleration of the vehicle in the body frame b become

f \cdot b_x = \ddot{s} \quad (230)
f \cdot b_y = \dot{s}\omega_z \quad (231)
f \cdot b_z = -\dot{s}\omega_y \quad (232)

Given that f^n = C^n_b f^b and using this in the above equations,

\begin{bmatrix} f_x \\ f_y \\ f_z \end{bmatrix} = \begin{bmatrix} \dot{V}_f \\ V_f \omega_z \\ -V_f \omega_y \end{bmatrix} + \begin{bmatrix} \cos\beta\cos\gamma & \cos\beta\sin\gamma & -\sin\beta \\ -\cos\theta\sin\gamma + \sin\theta\sin\beta\cos\gamma & \cos\theta\cos\gamma + \sin\theta\sin\beta\sin\gamma & \sin\theta\cos\beta \\ \sin\theta\sin\gamma + \cos\theta\sin\beta\cos\gamma & -\sin\theta\cos\gamma + \cos\theta\sin\beta\sin\gamma & \cos\theta\cos\beta \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix}
where g is the gravitational acceleration and V_f = \dot{s} is the speed of the vehicle. Rearranging this, the following three equations relating the vehicle motion to the measurements from the inertial unit can now be obtained:

\dot{V}_f - f_x + g\sin\beta = 0 \quad (233)
V_f\omega_z - f_y - g\sin\theta\cos\beta = 0 \quad (234)
V_f\omega_y + f_z + g\cos\theta\cos\beta = 0 \quad (235)

It is clear from the above equations that:

• when the forward acceleration is zero, the roll (\theta) and pitch (\beta) can be directly computed from the inertial measurements;

• if one of the angular velocities \omega_y or \omega_z is not zero, the forward velocity can also be computed directly;

• even when the forward acceleration is non-zero, it is possible to write a differential equation containing only the forward velocity and the inertial measurements by substituting Equations 234 and 235 into 233. Therefore, V_f can be obtained by one integration step involving the inertial measurements. If the constraints are not used, two integration steps are required to obtain velocities. This result is of significant importance: because the forward velocity is obtained with a single integration, its error growth is only a function of the random walk due to the noise present in the observed acceleration;

• it is possible to use Equations 234 and 235 directly to obtain the complete vehicle state without going through the Kalman filter proposed in the previous section. This, however, makes it difficult to incorporate models for constraint violations in the solution. Also, when the constraint violation is significant, such as in off-road situations or cornering at high speeds, the white noise model is inadequate. For example, if there is significant side slip, explicit slip modelling may be required.
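The first observation above can be made concrete: with the vehicle stationary (V_f = 0, \dot{V}_f = 0), Equations 233 and 234 reduce to f_x = g sin(β) and f_y = -g sin(θ) cos(β), so roll and pitch follow directly from the accelerometer readings. A minimal sketch, with illustrative names:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def attitude_from_specific_force(fx, fy):
    """Roll (theta) and pitch (beta) from Eqs. 233-234 with the vehicle
    stationary: f_x = g sin(beta), f_y = -g sin(theta) cos(beta)."""
    beta = np.arcsin(fx / G)
    theta = np.arcsin(-fy / (G * np.cos(beta)))
    return theta, beta

# Vehicle pitched nose-up 10 degrees with zero roll:
theta, beta = attitude_from_specific_force(G * np.sin(np.radians(10.0)), 0.0)
print(np.degrees(beta))   # ~10.0
print(np.degrees(theta))  # ~0.0
```

This is exactly the alignment calculation used when the vehicle is known to be stationary, as in Step 1 of the algorithm in Section 6.3.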
Figure ?? shows the performance of the algorithm in a large area with the vehicle moving at low speed. It can clearly be seen that the incorporation of the constraints, V_y = 0 and V_z = 0, results in performance very close to that of GPS/INS. For comparison purposes the raw data integration is also presented, showing that a large error is obtained after the first minute of operation.
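A minimal sketch of the constrained update of Section 6.5, treating V_y = V_z = 0 as a zero-valued "virtual observation" on a velocity-only state, is given below. All names are our own, and a full implementation would also carry position and attitude in the state vector.

```python
import numpy as np

def constraint_update(x, P, C_nb, R):
    """EKF update using the non-holonomic constraints V_y = V_z = 0 as a
    virtual observation z = 0 (in the style of Eqs. 224-229). The state
    here is just the navigation-frame velocity."""
    # Body-frame velocity is [C^n_b]^T V^n; observe its y and z components.
    H = (C_nb.T)[1:3, :]              # rows selecting V_y and V_z (Eq. 218)
    z = np.zeros(2)                   # the observation is identically zero
    nu = z - H @ x                    # innovation
    S = H @ P @ H.T + R               # innovation covariance
    W = P @ H.T @ np.linalg.inv(S)    # gain
    x = x + W @ nu                    # state update
    I_WH = np.eye(len(x)) - W @ H
    P = I_WH @ P @ I_WH.T + W @ R @ W.T   # Joseph-form covariance update
    return x, P

# Level vehicle pointing North: body axes aligned with N-E-D.
C_nb = np.eye(3)
x = np.array([10.0, 0.4, -0.2])      # drifted lateral/vertical velocities
P = np.eye(3)
R = np.diag([0.05, 0.05])            # strength of expected constraint violation
x, P = constraint_update(x, P, C_nb, R)
print(x)   # lateral and vertical components pulled towards zero
```

The forward velocity is untouched by this update when the vehicle is level, which is consistent with the observability discussion above: the constraints act only on the lateral and vertical channels.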
[Figure 33 plots North (m) against East (m), comparing the constrained data, the IMU/GPS solution and the raw data integration.]

Figure 33: Comparison between integration of raw data, GPS/INS and constraint algorithms
6.5.2
Multiple Aiding of an Inertial System
This section will discuss the aiding of an inertial unit with three forms of external observations: position and velocity derived from GPS, speed from a drive shaft encoder, and the vehicle model constraints. By incorporating all three sources of observations, more information is provided for the correction of errors in the inertial data. Furthermore, the constraint algorithms contain the growth of the attitude error when there are no GPS fixes, thus sustaining the accuracy of the inertial unit. The speed information provided by the encoder makes the forward velocity observable. The approach taken is to use a linear information filter with the inertial error model developed as a process model. Thus a direct feedback structure is implemented, as shown in Figure 34. This makes the system practically achievable, and also allows easy addition of information from the aiding sensors, especially since their observations are delivered asynchronously and at different rates. It is shown that the additional sensors significantly improve the quality
of the location estimate. This is of fundamental importance since it makes the inertial system less dependent on external information.
[Figure 34 shows the direct feedback structure: the estimated errors of position, velocity and attitude are fed back to correct the INS output, while the Kalman filter receives observed errors formed from the GNSS position and velocity observations and from the velocity observations of the vehicle constraints plus wheel encoder.]

Figure 34: Direct feedback structure

The inertial error model as outlined in Section 6.2 is implemented here using the information form of the Kalman filter. This form is adopted in this implementation since various types of external information are incorporated into the system asynchronously. Nevertheless, identical results can be obtained with the standard Kalman filter.

Prediction: The prediction stage is implemented using

\hat{y}(k|k-1) = L(k|k-1)\hat{y}(k-1|k-1) + B(k)u(k) \quad (236)

where

L(k|k-1) = Y(k|k-1) F(k) Y^{-1}(k-1|k-1)

and

Y(k|k-1) = [F(k) Y^{-1}(k-1|k-1) F^T(k) + Q(k)]^{-1} \quad (237)
Observation: When an observation from an aiding sensor or constraint is made, the observation vector generated is the observed error of the inertial system, z(k) = z_inertial(k) - z_aiding(k). At each sampling of the inertial unit, a constraint observation is made, that is, V_y = 0 and V_z = 0. At this stage the velocity vector is only partly complete, requiring the speed of the vehicle along the body x-axis, V_x, which is obtained from the speed encoder.
The sampling rate of the speed encoder is 25 Hz. However, it can be safely assumed that the velocity is constant between these samples, due to the slow dynamics of the vehicle, and hence the observation can be formed at the sampling rate of the inertial unit. This observation, which is in the body frame, is converted to the North (N), East (E), Down (D) frame using C^n_b. That is,

z^V_{velocity}(k) = C^n_b \begin{bmatrix} V_x(k) \\ V_y(k) \\ V_z(k) \end{bmatrix} \quad (238)

= \begin{bmatrix} V_x(k)\cos\beta\cos\gamma \\ V_x(k)\cos\beta\sin\gamma \\ -V_x(k)\sin\beta \end{bmatrix} \quad (239)

Thus the observation vector is

z(k) = z^V_{inertial}(k) - z^V_{velocity}(k) \quad (240)

The observation model is given by

H_{velocity} = [\, 0_{3\times3} \;\; I_{3\times3} \;\; 0_{3\times3} \,]

and the observation covariance matrix is

R_{velocity} = \begin{bmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{bmatrix} \quad (241)
Since the velocity vector is transformed from the body frame to the navigation frame, the observation noise covariance needs to be transformed as well:

\bar{R}_{velocity} = C^n_b R_{velocity} [C^n_b]^T \quad (242)
When the position and velocity are obtained from the GNSS receiver, the observation vector is

z(k) = \begin{bmatrix} z^P_{inertial}(k) - z^P_{GNSS}(k) \\ z^V_{inertial}(k) - z^V_{GNSS}(k) \end{bmatrix} \quad (243)

The observation model is

H_{GNSS}(k) = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} \end{bmatrix} \quad (244)
and the observation noise matrix is

R_{GNSS}(k) = \mathrm{diag}(\sigma_{PN}^2, \sigma_{PE}^2, \sigma_{PD}^2, \sigma_{VN}^2, \sigma_{VE}^2, \sigma_{VD}^2) \quad (245)

where the individual variances are obtained from the GDOP values.
Update: Once the observations are formed, the information state vector is generated,

i(k) = H^T(k) R^{-1}(k) z(k) \quad (246)

The amount of certainty associated with this observation is in the form of the information observation matrix,

I(k) = H^T(k) R^{-1}(k) H(k) \quad (247)

The estimate then proceeds according to

\hat{y}(k|k) = \hat{y}(k|k-1) + i(k) \quad (248)
Y(k|k) = Y(k|k-1) + I(k) \quad (249)

and if there is more than one sensor at this time stamp,

\hat{y}(k|k) = \hat{y}(k|k-1) + \sum_j i_j(k) \quad (250)
Y(k|k) = Y(k|k-1) + \sum_j I_j(k) \quad (251)

where j indexes the sensors providing information at time k. The algorithms were tested in the very large area shown in Figure 35. In this test we used the velocity constraint and GPS information. The position and velocity errors are shown in Figures ?? and ??. It can be seen that the error in velocity tends to zero as expected, and the position error is also controlled by the close-to-zero velocity error.
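The additive structure of Equations 246-251, which is what makes asynchronous multi-sensor fusion so convenient in the information form, can be sketched as follows. This is a toy two-state example with illustrative names and numbers, not the filter of the experiments above.

```python
import numpy as np

def info_contribution(H, R, z):
    """Information-form contribution of one observation (Eqs. 246-247)."""
    Ri = np.linalg.inv(R)
    return H.T @ Ri @ z, H.T @ Ri @ H

def info_update(y, Y, contributions):
    """Fuse any number of asynchronous sensors by addition (Eqs. 248-251)."""
    for i, I in contributions:
        y = y + i
        Y = Y + I
    return y, Y

# Toy 2-state example: two sensors of different accuracy observe state 0.
y = np.zeros(2)
Y = np.eye(2) * 0.1                    # predicted information state and matrix
H = np.array([[1.0, 0.0]])
c1 = info_contribution(H, np.array([[0.5]]), np.array([2.0]))
c2 = info_contribution(H, np.array([[2.0]]), np.array([2.2]))
y, Y = info_update(y, Y, [c1, c2])
print(np.linalg.solve(Y, y))           # recovered state estimate x = Y^-1 y
```

If only one sensor reports at a given time step, the same loop simply receives a single contribution, which is precisely why the information form suits sensors delivered asynchronously and at different rates.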
6.6
Tightly Coupled GPS/INS
The three methods discussed so far are also known as “loosely coupled” implementations since there is no feedback to the aiding sensor/navigation system. If feedback is provided
to the aiding source, a tighter configuration is obtained, which in turn improves system integrity.

Figure 35: Vehicle Path

Figure 38 illustrates what is known as a "tightly coupled" configuration. It offers the advantage of being robust and increases performance, since it allows the system designer to delve into the operation and algorithms of both sensors. The inertial sensor provides raw data to the filter, which usually incorporates a kinematic model of the vehicle. The aiding sensor also provides raw information. The filter estimates the states of the vehicle, and uses these estimates to assist the aiding sensor in its attainment of observations. For example, the aiding information can help GNSS with tracking satellites, or assist a scanning radar with pointing and stabilisation.

6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations
The loosely coupled configuration offers the distinct advantage of being highly modular in accuracy and cost. The system designer can implement the model of choice along with the desired INS in whatever navigation structure is preferred. Any aiding sensor can then be added to the navigation system. Although the tightly coupled configuration is more robust than the loosely coupled configuration, it is more expensive to implement and more difficult to develop. Furthermore, if a different aiding sensor is employed, the models and algorithms must change substantially. The use of GNSS as an aiding system for inertial navigation systems has been the subject
of study for a number of years. The majority of implementations have been loosely coupled. This is because companies specialising in inertial systems develop INS units with built-in filtering techniques in a loosely coupled configuration, while GNSS companies focus on delivering state of the art GNSS navigation systems. To implement a tightly coupled configuration requires close collaboration with GNSS companies, since the aiding information from the navigation filter which is fed back to the GNSS sensor assists with the satellite tracking algorithms. These algorithms are kept secret, since the speed of satellite reacquisition, the ability to locate and track satellites after they have been lost, is what separates the quality of receivers between different manufacturers, and hence is a strong marketing tool.

Figure 36: Position error incorporating constraint and GPS
Figure 37: Velocity error incorporating constraint and GPS
Figure 38: The “tightly coupled” conﬁguration treats both inputs as sensors and not as navigation systems. Furthermore, the ﬁlter estimates are sent to the aiding sensor and not the inertial sensor. This conﬁguration allows for a more robust system.
7
Simultaneous Localisation and Mapping (SLAM)
Reliable localisation is an essential component of any autonomous vehicle system. The basic navigation loop is based on dead reckoning sensors that predict high frequency vehicle manoeuvres and low frequency absolute sensors that bound positioning errors. The problem of localisation given a map of the environment, or of estimating the map knowing the vehicle position, has been addressed and solved using a number of different approaches. Section 3 presents a Kalman filter technique to estimate the position of the vehicle based on the known positions of artificial landmarks. Although this method can be made very reliable, it has the drawback that it requires the modification of the environment with the addition of special infrastructure. In addition, the location of this infrastructure needs to be surveyed. A related problem arises when both the map and the vehicle position are unknown. In this case the vehicle starts at an unknown location in an unknown environment and proceeds to incrementally build a navigation map of the environment while simultaneously using this map to update its location. In this problem, vehicle and map estimates are highly correlated and cannot be obtained independently of one another. This problem is usually known as Simultaneous Localisation and Map Building (SLAM) and was originally introduced in [26]. During the past few years significant progress has been made towards the solution of the SLAM problem [17], [23], [12], [7], [25]. Kalman filter methods can also be extended to perform simultaneous localisation and map building. There have been several applications of this technology in a number of different environments, such as indoors, underwater and outdoors. One of the main problems with the SLAM algorithm has been its computational requirements. Although the algorithm is originally O(N^3), the complexity of the SLAM algorithm can be reduced to O(N^2), N being the number of landmarks in the map.
For long duration missions the number of landmarks will increase and eventually computer resources will be insufficient to update the map in real time. This N^2 scaling problem arises because each landmark is correlated to all other landmarks. The correlation appears because the observation of a new landmark is obtained with a sensor mounted on the mobile robot, and thus the landmark location error will be correlated with the error in the vehicle location and with the errors in the other landmarks of the map. This correlation is of fundamental importance for the long-term convergence of the algorithm and needs to be maintained for the full duration of the mission. This section presents an introduction to the SLAM problem, a description of its computational complexity, and different approaches that make possible the implementation of SLAM in real time in very large environments.
7.1 Fundamentals of SLAM
The SLAM algorithm addresses the problem of a vehicle with a known kinematic model, starting at an unknown position and moving through an unknown environment populated with artificial or natural features. The objective of SLAM is then to localize the vehicle and at the same time build an incremental navigation map of the observed features. The vehicle is equipped with a sensor capable of measuring the relative location between the features and the vehicle itself. This scenario is shown in Figure 39. To facilitate the introduction of the SLAM equations, linear models for the vehicle and the observations are used.
Figure 39: A vehicle taking relative measurements to environmental landmarks
7.1.1 Process Model
The state of the system consists of the position and orientation of the vehicle augmented with the positions of the landmarks. Assuming that the state of the vehicle is given by x_v(k), the motion of the vehicle through the environment can be modeled by the following equation:

    x_v(k+1) = F_v(k) x_v(k) + u_v(k+1) + v_v(k+1)                    (252)
where Fv (k) is the state transition matrix, uv (k) a vector of control inputs, and vv (k) a vector of temporally uncorrelated process noise errors with zero mean and covariance
Q_v(k) (see [18] for further details). The location of the ith landmark is denoted p_i. SLAM considers all landmarks to be stationary, so the "state transition equation" for the ith landmark is

    p_i(k+1) = p_i(k) = p_i.                    (253)

Note that the model for the evolution of the landmarks does not have any uncertainty. Assuming that N landmarks have been validated and incorporated by the system, the vector of all N landmarks is denoted

    p = [p_1^T ... p_N^T]^T.                    (254)
The augmented state vector containing both the state of the vehicle and the states of all landmark locations is denoted

    x(k) = [x_v^T(k)  p_1^T ... p_N^T]^T.                    (255)
The augmented state transition model for the complete system may now be written as

    [x_v(k+1)]   [F_v(k)  0     ...  0    ] [x_v(k)]   [u_v(k+1)]   [v_v(k+1)]
    [p_1     ]   [0       I_p1  ...  0    ] [p_1   ]   [0_p1    ]   [0_p1    ]
    [  ...   ] = [...           ...       ] [ ...  ] + [  ...   ] + [  ...   ]    (256, 257)
    [p_N     ]   [0       0     ...  I_pN ] [p_N   ]   [0_pN    ]   [0_pN    ]

or, more compactly,

    x(k+1) = F(k) x(k) + u(k+1) + v(k+1)                    (258)
where I_pi is the dim(p_i) × dim(p_i) identity matrix and 0_pi is the dim(p_i) null vector. As can be seen from Equation 256, the size of the matrices involved is augmented by n·N, where n is the number of states required to represent a landmark and N is the number of landmarks incorporated into the map. In a large environment this number will tend to grow, and eventually the computer resources will not be sufficient to process the information from the external sensor in real time.

7.1.2 The Observation Model
The vehicle is equipped with a sensor that can obtain observations of the relative location of the landmarks with respect to the vehicle. Assuming the observations to be linear and synchronous, the observation model for the ith landmark is written in the form

    z_i(k) = H_i x(k) + w_i(k)                    (259)
           = H_pi p_i − H_v x_v(k) + w_i(k)       (260)

where w_i(k) is a vector of temporally uncorrelated observation errors with zero mean and covariance R_i(k). The term H_i is the observation matrix that relates the output of the sensor z_i(k) to the state vector x(k) when observing the ith landmark, and has the structure

    H_i = [−H_v, 0 ... 0, H_pi, 0 ... 0]                    (261)
This structure reﬂects the fact that the observations are “relative” between the vehicle and the landmark, often in the form of relative location, or relative range and bearing (see Section 4).
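The sparse structure of Equation 261 can be exploited directly when implementing the filter. The sketch below (in NumPy; the function name and the generic block shapes are illustrative assumptions, not from the text) assembles H_i by placing the −H_v block in the vehicle columns and H_pi in the columns of the observed landmark:

```python
import numpy as np

def make_Hi(Hv, Hp, i, N):
    """Assemble the sparse observation matrix of Equation 261 for landmark i.

    Hv: (m x nv) vehicle block, Hp: (m x np) landmark block,
    i : index of the observed landmark (0-based), N: landmarks in the map."""
    m, nv = Hv.shape
    npl = Hp.shape[1]
    Hi = np.zeros((m, nv + N * npl))
    Hi[:, :nv] = -Hv                 # relative observation: -Hv in the vehicle columns
    c = nv + i * npl
    Hi[:, c:c + npl] = Hp            # Hpi in the columns of landmark i
    return Hi

# Example: a 1-D vehicle position observed relative to landmark 1 of 3
Hi = make_Hi(np.array([[1.0]]), np.array([[1.0]]), 1, 3)
print(Hi.tolist())  # [[-1.0, 0.0, 1.0, 0.0]]
```

All columns other than the vehicle block and the observed landmark's block remain zero, which is what makes the sparse update optimizations of later sections possible.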
7.2 The Estimation Process
In the estimation-theoretic formulation of the SLAM problem, the Kalman filter is used to provide estimates of the vehicle and landmark locations. We briefly summarise the notation and the main stages of this process. The Kalman filter recursively computes estimates for a state x(k) which evolves according to the process model in Equation 256 and which is observed according to the observation model in Equation 259. The Kalman filter computes an estimate equivalent to the conditional mean x̂(p|q) = E[x(p) | Z^q] (p ≥ q), where Z^q is the sequence of observations taken up until time q. The error in the estimate is denoted x̃(p|q) = x̂(p|q) − x(p). The Kalman filter also provides a recursive estimate of the covariance P(p|q) = E[x̃(p|q) x̃^T(p|q) | Z^q] of the estimate x̂(p|q). The Kalman filter algorithm proceeds recursively in three stages:

• Prediction: Given that the models described in Equations 256 and 259 hold, and that an estimate x̂(k|k) of the state x(k) at time k together with an estimate of the covariance P(k|k) exist, the algorithm first generates a prediction for the state estimate, the observation (relative to the ith landmark) and the state estimate covariance at time k+1 according to

    x̂(k+1|k) = F(k) x̂(k|k) + u(k)                    (262)
    ẑ_i(k+1|k) = H_i(k) x̂(k+1|k)                     (263)
    P(k+1|k) = F(k) P(k|k) F^T(k) + Q(k)              (264)

respectively.
• Observation: Following the prediction, an observation z_i(k+1) of the ith landmark is made according to Equation 259. Assuming correct landmark association, the innovation is calculated as

    ν_i(k+1) = z_i(k+1) − ẑ_i(k+1|k)                  (265)

together with an associated innovation covariance matrix given by

    S_i(k+1) = H_i(k) P(k+1|k) H_i^T(k) + R_i(k+1)    (266)
• Update: The state estimate and the corresponding state estimate covariance are then updated according to

    x̂(k+1|k+1) = x̂(k+1|k) + W_i(k+1) ν_i(k+1)                    (267)
    P(k+1|k+1) = P(k+1|k) − W_i(k+1) S_i(k+1) W_i^T(k+1)          (268)

where the gain matrix W_i(k+1) is given by

    W_i(k+1) = P(k+1|k) H_i^T(k) S_i^{-1}(k+1)                    (269)
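The three stages above can be collected into a single recursion. The sketch below is a minimal NumPy rendering of Equations 262–269 for one observed landmark; the function name and the toy system at the end are illustrative assumptions:

```python
import numpy as np

def kf_step(x, P, F, u, Q, Hi, Ri, zi):
    """One SLAM Kalman cycle: prediction (262-264), innovation (265-266), update (267-269)."""
    # Prediction
    x_pred = F @ x + u                       # Eq. 262
    z_pred = Hi @ x_pred                     # Eq. 263
    P_pred = F @ P @ F.T + Q                 # Eq. 264
    # Observation
    nu = zi - z_pred                         # Eq. 265, innovation
    S = Hi @ P_pred @ Hi.T + Ri              # Eq. 266
    # Update
    W = P_pred @ Hi.T @ np.linalg.inv(S)     # Eq. 269, gain
    x_new = x_pred + W @ nu                  # Eq. 267
    P_new = P_pred - W @ S @ W.T             # Eq. 268
    return x_new, P_new

# One cycle on a scalar toy system: a single state observed directly
x, P = np.zeros(1), np.eye(1)
F, u, Q = np.eye(1), np.zeros(1), np.eye(1) * 0.01
Hi, Ri, zi = np.eye(1), np.eye(1) * 0.1, np.array([1.0])
x, P = kf_step(x, P, F, u, Q, Hi, Ri, zi)   # pulls x toward z and shrinks P
```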
The update of the state estimate covariance matrix is of paramount importance to the SLAM problem. Understanding the structure and evolution of the state covariance matrix is the key to this solution of the SLAM problem.

7.2.1 Example
Assume a vehicle moving in one dimension and observing the relative range to a number of landmarks. We would like to design a filter to track the position and velocity of the vehicle. We can also assume that the position of the vehicle is initially known with some uncertainty, although this is not relevant for this example. Since no additional absolute information, such as GPS, is available, if one assumes the vehicle is travelling at constant velocity then the uncertainty of its position estimate will grow with time. In addition, due to the initial errors in position and velocity, the difference between the estimated and the real position will grow indefinitely.

    [pos(k+1)]   [1  Δt] [pos(k)]   [0]
    [vel(k+1)] = [0   1] [vel(k)] + [1] v(k)                    (270)
In this example we can assume that two landmarks are already incorporated into the map. The process model is then extended as follows:
    [pos(k+1)]   [1  Δt  0  0] [pos(k)]   [0]
    [vel(k+1)]   [0   1  0  0] [vel(k)]   [1]
    [p1(k+1) ] = [0   0  1  0] [p1(k) ] + [0] v(k)                    (271)
    [p2(k+1) ]   [0   0  0  1] [p2(k) ]   [0]
Once a landmark is incorporated into the map it remains part of the state vector, and the full augmented system needs to be used each time a prediction or an observation is made. In a later section we will see that optimizations are possible to reduce the computational complexity of SLAM. The observation model, on the other hand, is a function of the number of landmarks observed. In the case where the two landmarks are observed we have:

    [z1(k)]   [1  0  −1   0] [pos(k)]   [1]
    [z2(k)] = [1  0   0  −1] [vel(k)] + [1] w(k)                    (272)
                             [p1(k) ]
                             [p2(k) ]
In this example it was also assumed that both observations were taken with the same sensor, or with another sensor with similar noise characteristics.
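The complete one-dimensional example can be simulated in a few lines. The sketch below runs the filter of Equations 270–272; the time step, noise levels, true positions and initial estimates are all illustrative values, not from the text. It shows the landmark uncertainties shrinking as relative observations accumulate:

```python
import numpy as np

# Numerical sketch of the 1-D example (Equations 270-272); values are illustrative.
dt, q, r = 0.1, 0.01, 0.25
F = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, 0],
              [0, 0,  0, 1.]])            # Eq. 271
G = np.array([0, 1, 0, 0.])              # process noise enters through the velocity
Q = np.outer(G, G) * q
H = np.array([[1, 0, -1, 0],
              [1, 0, 0, -1.]])           # Eq. 272
R = np.eye(2) * r

x = np.array([0, 1, 10, 20.])            # true [pos, vel, p1, p2]
xh = np.array([0.5, 0.8, 9, 21.])        # initial estimate
P = np.diag([1, 0.5, 4, 4.])
rng = np.random.default_rng(0)

for _ in range(200):
    x = F @ x + G * rng.normal(0, np.sqrt(q))
    xh, P = F @ xh, F @ P @ F.T + Q                    # predict
    z = H @ x + rng.normal(0, np.sqrt(r), 2)           # observe both relative ranges
    S = H @ P @ H.T + R
    W = P @ H.T @ np.linalg.inv(S)
    xh, P = xh + W @ (z - H @ xh), P - W @ S @ W.T     # update

print(np.sqrt(P.diagonal()[2:]))   # landmark std devs: reduced, but bounded below
```

The landmark variances fall below their initial values but do not reach zero, and the two landmark errors become positively correlated, anticipating the convergence results of the next section.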
7.3 Fundamental Results in SLAM
This section presents three fundamental results in the solution of SLAM. For a full demonstration of these results the reader is encouraged to see [7]. The state covariance matrix may be written in block form as

    P(i|j) = [P_vv(i|j)  P_vm(i|j)]
             [P_mv(i|j)  P_mm(i|j)]

where P_vv(i|j) is the error covariance matrix associated with the vehicle state estimate, P_mm(i|j) is the map covariance matrix associated with the landmark state estimates, and P_vm(i|j) is the cross-covariance matrix between the vehicle and landmark states.

Theorem 1  The determinant of any submatrix of the map covariance matrix decreases monotonically as successive observations are made.

The algorithm is initialised using a positive semidefinite (psd) state covariance matrix P(0|0). The matrices Q and R_i are both psd, and consequently the matrices P(k+1|k), S_i(k+1), W_i(k+1) S_i(k+1) W_i^T(k+1) and P(k+1|k+1) are all psd. From Equation 268, for any landmark i,

    det P(k+1|k+1) = det(P(k+1|k) − W_i(k+1) S_i(k+1) W_i^T(k+1)) ≤ det P(k+1|k)    (273)

The determinant of the state covariance matrix is a measure of the volume of the uncertainty ellipsoid associated with the state estimate. Equation 273 states that the total uncertainty of the state estimate does not increase during an update.

Theorem 2  In the limit the landmark estimates become fully correlated.

As the number of observations taken tends to infinity, a lower limit on the map covariance will be reached such that

    lim_{k→∞} [P_mm(k+1|k+1)] = P_mm(k|k)                    (274)

Also, in the limit the determinant of the covariance matrix of a map containing more than one landmark tends to zero:

    lim_{k→∞} [det P_mm(k|k)] = 0                    (275)

This result implies that the landmarks become progressively more correlated as successive observations are made. In the limit, then, given the exact location of one landmark the locations of all the other landmarks can be deduced with absolute certainty, and the map is fully correlated.

Theorem 3  In the limit, the lower bound on the covariance matrix associated with any single landmark estimate is determined only by the initial covariance of the vehicle estimate P_0v at the time of the first sighting of the first landmark.

When the process noise is not zero, the two competing effects of loss of information content due to process noise and increase of information content through observations determine the limiting covariance. It is important to note that this limit on the covariance applies because all the landmarks are observed and initialised solely from observations made from the vehicle. The covariances of the landmark estimates cannot be further reduced by making additional observations of previously unknown landmarks. However, the incorporation of external information, for example an observation of a landmark whose location is available through external means such as GPS, will reduce the limiting covariance.

In summary, the three theorems presented above describe, in full, the convergence properties of the map and its steady state behaviour. As the vehicle progresses through the
environment the total uncertainty of the estimates of landmark locations reduces monotonically to the point where the map of relative locations is known with absolute precision. In the limit, errors in the estimates of any pair of landmarks become fully correlated. This means that given the exact location of any one landmark, the location of any other landmark in the map can also be determined with absolute certainty. As the map converges in the above manner, the error in the absolute location estimate of every landmark (and thus the whole map) reaches a lower bound determined only by the error that existed when the ﬁrst observation was made. Thus a solution to the general SLAM problem exists and it is indeed possible to construct a perfectly accurate map describing the relative location of landmarks and simultaneously compute vehicle position estimates without any prior knowledge of landmark or vehicle locations.
7.4 Nonlinear Models
In general, the models that predict the trajectory of the vehicle and the models that relate the observations to the states are nonlinear. SLAM can still be formulated, but it requires the linearization of these models; the Jacobians of the process and observation models are then used to propagate the covariances. In this section we present a more realistic model of a standard outdoor vehicle. Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable of measuring relative distances between the vehicle and the environment, as shown in Figure 40. The steering control α and the speed v_c are used with the kinematic model to predict the position of the vehicle. The external sensor returns range and bearing information to the different features B_i, i = 1..n. This information is obtained with respect to the vehicle coordinates (x_l, y_l), that is z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame. Considering that the vehicle is controlled through a demanded velocity v_c and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

    [ẋ_c]   [v_c · cos(φ)      ]
    [ẏ_c] = [v_c · sin(φ)      ] + γ                    (276)
    [φ̇_c]   [(v_c / L) · tan(α)]
Where L is the distance between wheel axles as shown in Figure 41. To simplify the equation in the update stage, the kinematic model of the vehicle is designed to represent
the trajectory of the centre of the laser. Based on Figures 40 and 41, the position of the laser can be obtained by translating the centre of the back axle:

    P_L = P_C + a · T_φ + b · T_{φ+π/2}                    (277)

Figure 40: Vehicle coordinate system
P_L and P_C are the positions of the laser and of the centre of the back axle in global coordinates, respectively. The transformation is defined by the orientation angle according to the following vectorial expression:

    T_φ = (cos(φ), sin(φ))

The scalar representation is

    x_L = x_c + a · cos(φ) + b · cos(φ + π/2)
    y_L = y_c + a · sin(φ) + b · sin(φ + π/2)                    (278)
Figure 41: Kinematic parameters
Finally, the full state representation can be written

    [ẋ_L]   [v_c · cos(φ) − (v_c / L) · (a · sin(φ) + b · cos(φ)) · tan(α)]
    [ẏ_L] = [v_c · sin(φ) + (v_c / L) · (a · cos(φ) − b · sin(φ)) · tan(α)] + γ                    (279)
    [φ̇_L]   [(v_c / L) · tan(α)                                           ]
The velocity v_c is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

    v_c = v_e / (1 − tan(α) · H / L)                    (280)

where for this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m. Finally, the discrete model in global coordinates can be approximated with the following set of equations:

    [x(k)]   [x(k−1) + ΔT · (v_c(k−1) · cos(φ(k−1)) − (v_c(k−1)/L) · (a · sin(φ(k−1)) + b · cos(φ(k−1))) · tan(α(k−1)))]
    [y(k)] = [y(k−1) + ΔT · (v_c(k−1) · sin(φ(k−1)) + (v_c(k−1)/L) · (a · cos(φ(k−1)) − b · sin(φ(k−1))) · tan(α(k−1)))] + γ    (281)
    [φ(k)]   [φ(k−1) + ΔT · (v_c(k−1)/L) · tan(α(k−1))                                                                  ]

where ΔT is the sampling time, which in this case is not constant. The observation equation relating the vehicle states to the observations is
    z = h(X, x_i, y_i) = [z_r^i]   [sqrt((x_i − x_L)^2 + (y_i − y_L)^2)      ]
                         [z_β^i] = [atan((y_i − y_L)/(x_i − x_L)) − φ_L + π/2] + γ_h                    (282)

where z is the observation vector, (x_i, y_i) are the coordinates of the landmark, x_L, y_L and φ_L are the vehicle states defined at the external sensor location, and γ_h is the sensor noise. The complete nonlinear model can be expressed in general form as:

    X(k+1) = F(X(k), u(k) + γ_u(k)) + γ_f(k)
    z(k) = h(X(k)) + γ_h(k)                    (283)

The effect of the input signal noise is approximated by a linear representation

    F(X(k), u(k) + γ_u(k)) + γ_f(k) ≈ F(X(k), u(k)) + γ(k)
    γ(k) = J_u · γ_u(k) + γ_f(k)
    J_u = ∂F/∂u |_{X=X(k), u=u(k)}                    (284)

The noise terms are assumed zero mean and white:

    E{γ_f(k)} = E{γ_u(k)} = E{γ_h(k)} = 0
    E{γ_f(i) · γ_f^T(j)} = δ_ij · Q_f(i)
    E{γ_u(i) · γ_u^T(j)} = δ_ij · Q_u(i)
    E{γ_h(i) · γ_h^T(j)} = δ_ij · R(i)
    E{γ(i) · γ^T(j)} = δ_ij · (J_u · Q_u(i) · J_u^T + Q_f(i)) = δ_ij · Q(i)                    (285)

where δ_ij = 1 if i = j and 0 otherwise, and the noise sources γ_f, γ_u and γ_h are mutually uncorrelated.

An Extended Kalman Filter (EKF) observer based on the process and output models can be formulated in two stages: a prediction stage and an update stage. The prediction stage obtains the predicted value of the states X and their error covariance P at time k+1 based on the information available up to time k:

    X(k+1, k) = F(X(k, k), u(k))
    P(k+1, k) = J · P(k, k) · J^T + Q(k)                    (286)

The update stage is a function of the observation model and the covariances:
    S(k+1) = H · P(k+1, k) · H^T + R(k+1)
    W(k+1) = P(k+1, k) · H^T · S^{-1}(k+1)
    ϑ(k+1) = Z(k+1) − h(X(k+1, k))
    X(k+1, k+1) = X(k+1, k) + W(k+1) · ϑ(k+1)
    P(k+1, k+1) = P(k+1, k) − W(k+1) · S(k+1) · W^T(k+1)                    (287)

where

    J = J(k) = ∂F/∂X |_{(X,u)=(X(k),u(k))},    H = H(k) = ∂h/∂X |_{X=X(k)}                    (288)

are the Jacobian matrices of the vectorial functions F(X, u) and h(X) with respect to the state X, and R is the covariance matrix characterizing the noise in the observations. Under the SLAM framework the system will detect new features at the beginning of the mission and when exploring new areas. Once these features become reliable and stable they are incorporated into the map, becoming part of the state vector. The state vector is then given by

    X = [X_L]
        [X_I],    X_L = (x_L, y_L, φ_L)^T ∈ R^3,    X_I = (x_1, y_1, ..., x_N, y_N)^T ∈ R^{2N}                    (289)

where (x, y, φ)_L are the states of the vehicle and (x, y)_i the states of the features incorporated into the map. Since the environment is considered to be static, the dynamic model that includes the new states becomes:

    X_L(k+1) = f(X_L(k)) + γ
    X_I(k+1) = X_I(k)                    (290)
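As a concrete illustration, the discrete process model of Equation 281 and the EKF prediction of Equation 286 can be sketched for the vehicle states alone. The code below uses the vehicle parameters quoted in the text (H = 0.75 m, L = 2.83 m, b = 0.5 m, a = L + 0.95 m) and approximates the Jacobian J numerically; the input values and the placeholder Q are illustrative assumptions:

```python
import numpy as np

L_, H_, b_, a_ = 2.83, 0.75, 0.5, 2.83 + 0.95   # vehicle geometry quoted in the text

def f(x, u, dT):
    """Discrete process model of Eq. 281 for the states (xL, yL, phi), noise omitted."""
    xL, yL, phi = x
    ve, alpha = u                                # encoder velocity and steering angle
    vc = ve / (1 - np.tan(alpha) * H_ / L_)      # Eq. 280: encoder to axle-centre speed
    ta = np.tan(alpha)
    return np.array([
        xL + dT * (vc * np.cos(phi) - (vc / L_) * (a_ * np.sin(phi) + b_ * np.cos(phi)) * ta),
        yL + dT * (vc * np.sin(phi) + (vc / L_) * (a_ * np.cos(phi) - b_ * np.sin(phi)) * ta),
        phi + dT * (vc / L_) * ta,
    ])

def num_jacobian(fun, x, eps=1e-6):
    """Numerical Jacobian, standing in for the analytic J of Eq. 288."""
    fx = fun(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x)); dx[i] = eps
        J[:, i] = (fun(x + dx) - fx) / eps
    return J

# EKF prediction (Eq. 286) for the vehicle states alone; Q here is a placeholder
x, P = np.array([0.0, 0.0, 0.1]), np.eye(3) * 0.1
u, dT = (2.0, 0.05), 0.025
J = num_jacobian(lambda s: f(s, u, dT), x)
x, P = f(x, u, dT), J @ P @ J.T + np.eye(3) * 1e-4
```

In a full implementation the analytic Jacobian would normally be preferred; the numerical version is shown only to keep the sketch short.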
It is important to remark that the landmarks are assumed to be static. The Jacobian matrix for the extended system is then

    ∂F/∂X = [∂f/∂X_L  ∅]
            [∅^T      I],    J_1 = ∂f/∂X_L ∈ R^{3×3},  ∅ ∈ R^{3×2N},  I ∈ R^{2N×2N}                    (291)
The observations zr and zβ are obtained from a range and bearing sensor relative to the vehicle position and orientation. The observation equation given in Equation 282 is a function of the states of the vehicle and the states representing the position of the landmark. The Jacobian matrix of the vector h with respect to the variables (xL , yL , φL , xi , yi )
can be evaluated using

    ∂h/∂X = [∂z_r/∂X]   [∂r_i/∂(x_L, y_L, φ_L, {x_j, y_j}_{j=1..N})]
            [∂z_β/∂X] = [∂β_i/∂(x_L, y_L, φ_L, {x_j, y_j}_{j=1..N})]                    (292)
This Jacobian will always have a large number of null elements since only a few landmarks will be observed and validated at a given time. For example, when only one feature is observed the Jacobian has the following form:
    [∂z_r/∂X]   [ Δx/Δ     Δy/Δ    0   0 ... 0   −Δx/Δ    −Δy/Δ    0 ... 0 ]
    [∂z_β/∂X] = [ −Δy/Δ²   Δx/Δ²  −1   0 ... 0    Δy/Δ²   −Δx/Δ²   0 ... 0 ]                    (293)

where

    Δx = (x_L − x_i),    Δy = (y_L − y_i),    Δ = sqrt((Δx)² + (Δy)²)
These models can then be used with a standard EKF algorithm to build and maintain a navigation map of the environment and to track the position of the vehicle.
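The sparse Jacobian of Equation 293 can be assembled directly. A minimal NumPy sketch, where the function name and the state ordering (x_L, y_L, φ_L, x_1, y_1, ..., x_N, y_N) are assumptions consistent with Equation 289:

```python
import numpy as np

def slam_H(xL, yL, landmarks, i):
    """Sparse observation Jacobian of Eq. 293 for landmark i (0-based)."""
    xi, yi = landmarks[i]
    dx, dy = xL - xi, yL - yi
    d = np.hypot(dx, dy)
    H = np.zeros((2, 3 + 2 * len(landmarks)))
    H[0, :3] = [dx / d, dy / d, 0]           # range row, vehicle block
    H[1, :3] = [-dy / d**2, dx / d**2, -1]   # bearing row, vehicle block
    c = 3 + 2 * i
    H[0, c:c + 2] = [-dx / d, -dy / d]       # range row, landmark block
    H[1, c:c + 2] = [dy / d**2, -dx / d**2]  # bearing row, landmark block
    return H

# Vehicle at the origin, two landmarks in the map, observing landmark 0 at (3, 4)
H = slam_H(0.0, 0.0, [(3.0, 4.0), (10.0, 0.0)], 0)
# Only the vehicle columns and the two columns of landmark 0 are non-zero
```

The columns of all unobserved landmarks remain zero, which is the property exploited by the optimizations of the next section.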
7.5 Optimization of SLAM
Under the SLAM framework the size of the state vector is equal to the number of vehicle states plus twice the number of landmarks, that is M = 2N + 3 when working with point landmarks in 2-D environments. In most SLAM applications the number of vehicle states is insignificant with respect to the number of landmarks. The number of landmarks grows with the area of operation, making the standard filter computation impractical for on-line applications. In this section we present a series of optimizations in the prediction and update stages that reduce the complexity of the SLAM algorithm from O(M^3) to O(M^2). Then a compressed filter is presented that reduces the real-time computational requirement to O(N_a^2), N_a being the number of landmarks in the local area. This also makes the SLAM algorithm extremely efficient when the vehicle remains navigating in this area, since the computational complexity becomes independent of the size of the global map. These algorithms do not make any approximations and the results are exactly equivalent to a full SLAM implementation.

7.5.1 Standard Algorithm Optimization
Prediction Stage  Considering the zeros in the Jacobian matrix of Equation 291, the prediction Equation 286 can be written:
    P⁺ = J · P · J^T + Q = [J_1  ∅] [P_11  P_12] [J_1^T  ∅]   [Q_V  ∅]
                           [∅^T  I] [P_21  P_22] [∅^T    I] + [∅    ∅]                    (294)

    J_1 ∈ R^{3×3},  P_11 ∈ R^{3×3},  ∅ ∈ R^{3×2N},  I ∈ R^{2N×2N}
    P_12 ∈ R^{3×2N},  P_21 = P_12^T,  P_22 ∈ R^{2N×2N}
The time subindexes are omitted in this explanation for clarity of presentation. Performing the matrix operations explicitly, the following result is obtained:

    J · P = [J_1  ∅] [P_11  P_12]   [J_1 · P_11  J_1 · P_12]
            [∅^T  I] [P_21  P_22] = [P_21        P_22      ]

    J · P · J^T = [J_1 · P_11  J_1 · P_12] [J_1^T  ∅]   [J_1 · P_11 · J_1^T  J_1 · P_12]
                  [P_21        P_22      ] [∅^T    I] = [(J_1 · P_12)^T      P_22      ]                    (295)

It can be proved that the evaluation of this matrix requires only approximately 9M multiplications. In general, more than one prediction step is executed between two update steps. This is because the prediction stage is usually driven by high frequency sensory information that acts as the input to the dynamic model of the vehicle and needs to be evaluated in order to control the vehicle, while the low frequency external sensors report the observations used in the estimation stage of the EKF and are processed at a much lower frequency. For example, the steering angle and wheel speed can be sampled every 20 milliseconds while the laser frames are obtained with a sample time of 200 milliseconds. In this case we have a ratio of approximately 10 prediction steps to one update step. The compact form for n prediction steps without an update is

    P(k+n, k) = [P_11(k+n, k)          G_1 · P_12(k, k)]
                [(G_1 · P_12(k, k))^T  P_22(k, k)      ]                    (296)

where

    G_1 = G_1(k, n) = ∏_{i=0}^{n−1} J_1(k+i) = J_1(k+n−1) · ... · J_1(k)                    (297)
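The block results of Equations 294–296 translate directly into code: only the vehicle block P_11 and the cross-covariance P_12 need to be recomputed during a prediction, while the landmark block P_22 is left untouched. A NumPy sketch (the function name is an assumption), which can be checked against the full product J · P · J^T + Q with the block-diagonal J of Equation 291:

```python
import numpy as np

def block_predict(P, J1, Qv):
    """Covariance prediction exploiting Eq. 295: only P11 and P12 change.

    P: full (3+2N)x(3+2N) covariance, J1: 3x3 vehicle Jacobian, Qv: vehicle noise."""
    P[:3, :3] = J1 @ P[:3, :3] @ J1.T + Qv    # J1 P11 J1^T + Qv
    P[:3, 3:] = J1 @ P[:3, 3:]                # J1 P12 ; P22 is untouched
    P[3:, :3] = P[:3, 3:].T                   # keep symmetry: P21 = (J1 P12)^T
    return P
```

Because P_22, by far the largest block, is never touched, the cost per prediction step is linear in M rather than quadratic.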
Update Stage  Since only a few of the features associated with the state vector are observed at a given time, the Jacobian matrix H has a large number of zeros. When only one feature is incorporated into the observation vector we have:

    H = H(k) = ∂h/∂X |_{X=X(k)} = [H_1, ∅_1, H_2, ∅_2] ∈ R^{2×M},    M = 2N + 3                    (298)

    H_1 = ∂h/∂X_L |_{X=X(k)} = ∂h/∂(x_L, y_L, φ_L) |_{X=X(k)} ∈ R^{2×3}
    H_2 = ∂h/∂X_i |_{X=X(k)} = ∂h/∂(x_i, y_i) |_{X=X(k)} ∈ R^{2×2}
    ∅_1, ∅_2 : null matrices,    ∂h/∂X_j = ∅  ∀ j ≠ i

At a given time k, the Kalman gain matrix W requires the evaluation of P · H^T:

    P · H^T = P_1 · H_1^T + P_2 · H_2^T,    P_1 ∈ R^{M×3},  P_2 ∈ R^{M×2}

It can be proved that this evaluation requires 10M multiplications. Using the previous result, the matrices S and W can be evaluated with a cost of approximately 20M:

    S = H · P · H^T + R ∈ R^{2×2}
    W = P · H^T · S^{-1} ∈ R^{M×2}                    (299)
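The sparse update of Equations 298–299 avoids ever forming the full H. The sketch below (function name assumed; state ordering as in Equation 289, with landmark i occupying columns 3 + 2i and 3 + 2i + 1) computes P · H^T, S and W using only the non-zero blocks H_1 and H_2:

```python
import numpy as np

def sparse_gain(P, H1, H2, i, R):
    """Evaluate S and W (Eq. 299) from the non-zero blocks of H (Eq. 298).

    H1: 2x3 block for the vehicle states, H2: 2x2 block for landmark i (0-based)."""
    c = 3 + 2 * i                                  # first column of landmark i
    PHt = P[:, :3] @ H1.T + P[:, c:c + 2] @ H2.T   # P H^T without forming H
    S = H1 @ PHt[:3] + H2 @ PHt[c:c + 2] + R       # H P H^T + R
    W = PHt @ np.linalg.inv(S)
    return S, W
```

The result is identical to the dense computation with the full H, but the dominant products involve only five columns of P.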
The cost of the state update operation is proportional to M. The main computational requirement is in the evaluation of the covariance update, whose complexity is O(M^2).

Experimental results  The SLAM algorithm presented was tested in an outdoor environment with a standard utility vehicle retrofitted with dead reckoning sensors and a laser range sensor, as shown in Figure 42. In this application the most common relevant features in the environment were trees. The profiles of the trees were extracted from the laser information, and a Kalman filter was implemented to reduce the errors due to the different profiles obtained when observing the trunks of the trees from different locations. The vehicle was started at a location with known uncertainty and driven in this area for approximately 20 minutes. Figure 43 presents the vehicle trajectory and the navigation landmarks incorporated into the relative map. This run includes all the features in the environment and the optimisations presented. The system built a map of the environment and localized itself. The accuracy of this map is determined by the initial vehicle position uncertainty and the quality of the combination of dead reckoning and external sensors. In this experimental run an initial uncertainty in coordinates x and y was assumed. Figure 44 presents the estimated errors of the vehicle position and of selected landmarks. The states corresponding to the vehicle present an oscillatory behaviour, displaying the maximum deviation farthest from the initial position. This result is expected since there is no
absolute information incorporated into the process. The only way this uncertainty can be reduced is by incorporating additional information not correlated with the vehicle position, such as GPS position information or the recognition of a beacon located at a known position. It can also be seen that the covariances of all the landmarks decrease with time. This means that the map is learned with more accuracy while the vehicle navigates. The theoretical limit uncertainty, in the case of no additional absolute information, is the original uncertainty in the vehicle location. Figure 45 presents the final estimation of the landmarks in the map. It can be seen that after 20 minutes the estimated errors of all the landmarks are below 60 cm.
Figure 42: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, linear variable diﬀerential transformer sensor for the steering and back wheel velocity encoders.
7.5.2 Compressed Filter
In this section we demonstrate that it is not necessary to perform a full SLAM update when working in a local area. This is a fundamental result because it reduces the computational requirement of the SLAM algorithm to the order of the number of features in the vicinity of the vehicle, independent of the size of the global map. A common scenario is a mobile robot moving in an area and observing features within this area. This situation is shown in Figure 46, where the vehicle is operating in a local area A; the rest of the map is the global area B. This approach presents significant advantages when the vehicle navigates for long periods of time in a local area, or when the external information is available at a high rate. Although high frequency external sensors are desirable to reduce position error growth, they also introduce a high computational cost in the SLAM algorithm. For example, a laser sensor can return 2-D information at frequencies of 4 to 30 Hz. To incorporate this information using the full SLAM algorithm would require updating M states at 30 Hz. In this work we show that while working in a local area and observing local landmarks we can preserve all the information by running a SLAM algorithm of the order of the number of landmarks in the local area. When the vehicle departs from this area, the information acquired can be propagated to the global landmarks without loss of information. This also allows the incorporation of high frequency external information with very low computational cost. Another important implication is that the global map does not need to be updated sequentially at the same rate as the local map.

Figure 43: Vehicle trajectory and landmarks. The '*' marks show the estimated positions of objects that qualified as landmarks for the navigation system. The dots are laser returns that were not stable enough to qualify as landmarks. The solid line shows the estimated vehicle trajectory over the 20 minute run using full SLAM.

Figure 44: History of selected states' estimated errors. The vehicle states show an oscillatory behaviour, with an error magnitude that decreases with time due to the learning of the environment. The landmarks always present an exponentially decreasing estimated error, with a limit given by the initial uncertainty of the vehicle position.

Figure 45: Final estimated errors of all states. For each state the final estimated error is presented. The maximum error is approximately 60 cm.

Update step  Consider the states divided into two groups:

    X = [X_A]
        [X_B],    X_A ∈ R^{2N_A+3},  X_B ∈ R^{2N_B},  X ∈ R^{2N+3},  N = N_A + N_B                    (300)
Figure 46: Local and global areas

The states X_A can be initially selected as all the states representing landmarks in an area of a certain size surrounding the vehicle. The states representing the vehicle pose are also included in X_A. Assume that for a period of time the observations obtained are only related to the states X_A and do not involve states of X_B, that is,

    h(X) = h(X_A)                    (301)

Then at a given time k

    H = ∂h/∂X |_{X=X(k)} = ∂h/∂(X_A, X_B) |_{X=X(k)} = [∂h/∂X_A  ∂h/∂X_B] = [H_a  0]                    (302)
Considering the zeros of the matrix H, the Kalman gain matrix W is evaluated as follows:

    P = [P_aa  P_ab]
        [P_ba  P_bb]

    P · H^T = [P_aa · H_a^T]
              [P_ba · H_a^T]

    H · P · H^T = H_a · P_aa · H_a^T

    S = H_a · P_aa · H_a^T + R                    (303)

    W = P · H^T · S^{-1} = [P_aa · H_a^T · S^{-1}]   [W_a]
                           [P_ba · H_a^T · S^{-1}] = [W_b]
From these equations it is possible to see that:

1. The Jacobian matrix H_a has no dependence on the states X_B.

2. The innovation covariance matrix S and the Kalman gain W_a are functions of P_aa and H_a only. They do not have any dependence on P_bb, P_ab, P_ba or X_B.

The update term dP of the covariance matrix can then be evaluated as

    dP = W · S · W^T = [P_aa · κ · P_aa    ξ · P_ab      ]
                       [(ξ · P_ab)^T      P_ba · κ · P_ab]                    (304)

with κ = H_a^T · S^{-1} · H_a and ξ = P_aa · κ. In the previous demonstration the time subindexes were omitted for clarity of presentation. These indexes are now incorporated to present the recursive update equations. The covariance matrix after one update is
    P(k+1, k+1) = P(k+1, k) − dP(k+1, k)
    P_aa(k+1, k+1) = P_aa(k+1, k) − P_aa(k+1, k) · κ(k) · P_aa(k+1, k)
    P_ab(k+1, k+1) = P_ab(k+1, k) − ξ(k) · P_ab(k+1, k) = (I − ξ(k)) · P_ab(k+1, k)                    (305)
    P_bb(k+1, k+1) = P_bb(k+1, k) − P_ba(k+1, k) · κ(k) · P_ab(k+1, k)

The covariance variation after t consecutive updates is then

    P_ab(k+t, k+t) = Φ(k+t−1) · P_ab(k, k)
    P_bb(k+t, k+t) = P_bb(k, k) − P_ba(k, k) · ψ(k+t−1) · P_ab(k, k)                    (306)

with

    Φ(k+t) = (I − ξ(k+t)) · (I − ξ(k+t−1)) · ... · (I − ξ(k)) = ∏_{i=k}^{k+t} (I − ξ(i))
    ψ(k+t) = Σ_{i=k}^{k+t} Φ^T(i−1) · κ(i) · Φ(i−1)                    (307)
    Φ(k−1) = I,    ψ(k−1) = 0

The evaluation of the matrices Φ(k) and ψ(k) can be done recursively according to:

    Φ(k+t) = (I − ξ(k+t)) · Φ(k+t−1)
    ψ(k+t) = ψ(k+t−1) + Φ^T(k+t−1) · κ(k+t) · Φ(k+t−1)                    (308)
with Φ(k), ψ(k), κ(k), ξ(k) ∈ R^{2N_a×2N_a}. During long-term navigation missions the number of states in X_A will in general be much smaller than the total number of states in the global map, that is N_a ≪ N_b < M. The matrices ξ(k) and κ(k) are sparse, and the calculation of Φ(k) and ψ(k) has complexity O(N_a^2). It is noteworthy that X_B, P_ab, P_ba and P_bb are not needed while the vehicle is navigating in a local region 'looking' only at the states X_A; they are only required when the vehicle enters a new region. The evaluation of X_B, P_bb, P_ab and P_ba can then be done in one iteration, at full SLAM computational cost, using the compressed expressions. The estimates X_B can be updated after t update steps using

    X_B(k+t, k+t) = X_B(k+t, k) − P_ba(k, k) · θ(k+t)                    (309)

with

    θ(k+t) = Σ_{i=k}^{k+t−1} Φ^T(i−1) · H_a^T(i) · S^{-1}(i) · ϑ(i)

where m is the number of observations (in this case range and bearing), θ(k) ∈ R^{2N_a×m}, Z(k) ∈ R^m, Φ(k) ∈ R^{2N_a×2N_a}, H_a(k) ∈ R^{m×2N_a} and S(k) ∈ R^{m×m}. Similarly, since H_a is a sparse matrix, the evaluation cost of the matrix θ is proportional to N_a. The derivation of these equations is presented in [12].

Extended Kalman Filter formulation for the compressed filter  In order to maintain the information gathered in a local area it is necessary to extend the EKF formulation presented in Equations (286) and (287). The following equations must be added to the prediction and update stages of the filter to be able to propagate the information to the global map once a full update is required:

    Prediction step:  Φ(k) = J_aa(k, k−1) · Φ(k−1),    ψ(k) = ψ(k−1)
    Update step:      Φ(k) = (I − ξ(k)) · Φ(k−1),      ψ(k) = ψ(k−1) + Φ^T(k−1) · κ(k) · Φ(k−1)                    (310)
    Initial values:   Φ(0) = I,    ψ(0) = 0

When a full update is required, the global covariance matrix P and state X are updated with Equations (306) and (309) respectively. At this time the matrices Φ and ψ are also reset to their initial values, since the next observations will be a function of a new set of landmarks.

Map Management  It has been shown that while the vehicle operates in a local area all the information gathered can be maintained with a cost complexity proportional to the number of landmarks in this area. The next problem to address is the selection of the local areas. One convenient approach consists of dividing the global map into rectangular
Navigation System Design (KC4)
129
regions with side length at least equal to the range of the external sensor. The map management method is presented in Figure 47. When the vehicle navigates in region r, the compressed filter includes in the group XA the vehicle states and all the states related to landmarks that belong to region r and its eight neighbouring regions. This implies that the local states belong to 9 regions, each with side length equal to the range of the external sensor. The vehicle will be able to navigate inside this area using the compressed filter. A full update will only be required when the vehicle leaves the central region r.

Figure 47: Map management for the compressed algorithm. The local area is composed of nine squares with side length approximately equal to the range of the sensor. The vehicle is always within the central region r or inside the threshold area.

Every time the vehicle moves to a new region, the active state group XA changes to those states that belong to the new region r and its adjacent regions. The active group always includes the vehicle states. In addition to the swapping of the XA states, a global update is also required, at full SLAM algorithm cost. Each region has a list of landmarks that are known to be within its boundaries. Each time a new landmark is detected, the region that owns it appends an index of the landmark definition to its list of owned landmarks. It is not critical if the landmark belongs to this region or a closely connected region. In the case of strong updates, where the estimated positions of the landmarks change significantly, the owners of those landmarks can also be changed. A hysteresis region bounding the local area r is included to avoid multiple map switching when the vehicle navigates in areas close to the boundaries between region r and the surrounding areas. If the side length of the regions is smaller than the range of the external sensor, or if the hysteresis region is made
too large, there is a chance of observing landmarks outside the defined local area. These observations will be discarded since they cannot be associated with any local landmarks. In such cases the resulting filter will not be optimal, since this information is not incorporated into the estimates. Although these marginal landmarks would not contribute significant information, since they are far from the vehicle, this situation can easily be avoided with an appropriate selection of the size of the regions and the hysteresis band. Figure 47 presents an example of the application of this approach. The vehicle navigates in the central region r, and if it never leaves this region the filter will maintain its position and the local map at the cost of a SLAM over the number of features in the local area formed by the 9 neighbouring regions. A full SLAM update is only required when the vehicle leaves the region.

Computational Cost
The total computational requirement of this algorithm is O(Na^2), and the cost of the update when the vehicle leaves the local area is O(Na · Nb^2). Provided that the vehicle remains for a period of time in a given area, the computational saving will be considerable. This has important implications, since in many applications it will allow the exact implementation of SLAM in very large areas, given an appropriate selection of local areas. The system evaluates the location of the vehicle and the landmarks of the local map continuously at the cost of a local SLAM. Although a full update is required at a transition, this update can be implemented as a parallel task. The only states that need to be fully updated immediately are the new states in the new local area. A selective update can then be done for those states only, while the full update for the rest of the map runs as a background task with lower priority.
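The region bookkeeping described above is straightforward to implement. The following Python sketch illustrates the idea under stated assumptions: square regions indexed on an integer grid, a 3×3 active neighbourhood, and a hysteresis band around the central region. All function names and the parameter values are illustrative, not part of the original text.

```python
import math

def region_of(x, y, side):
    """Integer (row, col) index of the square region containing (x, y)."""
    return (math.floor(y / side), math.floor(x / side))

def active_regions(r):
    """Region r plus its eight neighbours: the 3x3 block of local regions."""
    ri, rj = r
    return {(ri + di, rj + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)}

def maybe_switch_region(vehicle_xy, current_r, side, hysteresis):
    """Return the new central region, switching only when the vehicle has
    left region current_r by more than the hysteresis margin."""
    x, y = vehicle_xy
    ri, rj = current_r
    # Boundaries of the current central region, grown by the hysteresis band.
    x_min, x_max = rj * side - hysteresis, (rj + 1) * side + hysteresis
    y_min, y_max = ri * side - hysteresis, (ri + 1) * side + hysteresis
    if x_min <= x <= x_max and y_min <= y <= y_max:
        return current_r              # still inside: no full update needed
    return region_of(x, y, side)      # left the band: trigger a full update
```

With 40 m regions and a 5 m hysteresis band, the vehicle can drift slightly past a region boundary without triggering the costly full update and state-group swap.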
These results are important since they demonstrate that even in very large areas the computational limitations of SLAM can be overcome with the compression algorithm and an appropriate selection of local areas.

Experimental Results
The compressed algorithm was implemented using local regions of 40 × 40 metres. These regions are appropriate for the Sick laser range sensor used in this experiment. Figure 48 shows part of the trajectory of the vehicle, with the local area composed of the 9 squares surrounding the vehicle. To demonstrate that the compressed algorithm maintains and propagates all the information obtained, the history of the covariances of the landmarks was compared with the one obtained with the full SLAM algorithm. Figure 49 shows the time evolution of the standard deviation of a few landmarks. The dotted line corresponds to the compressed filter and the solid line to the full SLAM. It can be seen that the estimated errors of some landmarks are not continuously updated with the compressed filter; these landmarks are not in the local area. Once the vehicle makes a transition, the system updates all the landmarks, performing a full SLAM update. At this time the landmarks outside the local area are updated in one iteration and their estimated errors become exactly equal to those of the full SLAM. This is clearly
shown in Figure 50, where at the full-update time stamps both estimated covariances become identical. Figure 51 shows the difference between the full SLAM and compressed filter estimated landmark covariances. It can be seen that at the full-update time stamps the difference between the estimates of the two algorithms becomes zero, as demonstrated before. This shows that while working in a local area it is possible to maintain all the information gathered with a computational cost proportional to the number of landmarks in the local area. This information can then be propagated to the rest of the landmarks in the map without any loss of information.
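The auxiliary recursions of Equation (310) amount to simple matrix bookkeeping between full updates. The sketch below, in Python with NumPy, shows one way to maintain them. The class name is hypothetical; ξ(k), κ(k) and Jaa are assumed to be supplied by the local EKF step (in the real filter they are sparse), and the initial values Φ(0) = I, Ψ(0) = 0 are assumed from the reset behaviour described in the text.

```python
import numpy as np

class CompressedFilterAux:
    """Auxiliary matrices of the compressed filter (a sketch of Eq. 310).

    phi accumulates the product of per-step factors between full updates;
    psi accumulates the information to be transferred to the passive part
    of the map when the vehicle leaves the local area."""

    def __init__(self, n_a):
        self.phi = np.eye(2 * n_a)               # Phi(0) = I (reset value)
        self.psi = np.zeros((2 * n_a, 2 * n_a))  # Psi(0) = 0

    def predict(self, J_aa):
        # Prediction step: Phi(k) = Jaa * Phi(k-1); Psi is unchanged.
        self.phi = J_aa @ self.phi

    def update(self, xi, kappa):
        # Update step: Psi(k) = Psi(k-1) + Phi^T(k-1) kappa Phi(k-1),
        #              Phi(k) = (I - xi) Phi(k-1).
        self.psi = self.psi + self.phi.T @ kappa @ self.phi
        self.phi = (np.eye(self.phi.shape[0]) - xi) @ self.phi

    def reset(self):
        # After a full update the accumulators return to their time-0 values.
        self.phi = np.eye(self.phi.shape[0])
        self.psi[:] = 0.0
```

At a region transition the accumulated phi and psi are used once for the full update of the passive states, then reset, so the per-step cost stays proportional to the local map only.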
[Plot: estimated trajectory; Latitude (metres) vs Longitude (metres).]
Figure 48: Vehicle and local areas. This plot presents the estimated trajectory and the estimated landmark positions. It also shows the local region r with its surrounding regions. The local states XA are the ones included in the nine regions shown enclosed by a rectangle in the bottom left corner of the figure.
7.5.3 Sub-Optimal SLAM

In this section we present a series of simplifications that can further reduce the computational complexity of SLAM. This suboptimal approach reduces the computational requirements by considering only a subset of the navigation landmarks present in the global map. It is demonstrated that this approach is conservative and consistent, and can generate close-to-optimal results when combined with an appropriate relative map representation. Most of the computational requirements of the EKF arise during the update of the error covariance matrix. Once an observation is validated and associated to
Figure 49: Landmark estimated position error for full SLAM and compressed filter. The solid line shows the estimated error provided by the full SLAM algorithm, which updates all the landmarks with each observation. The dotted line shows the estimated error provided by the compressed filter. The landmarks that are not in the local area are only updated when the vehicle leaves the local area. At this time a full update is performed and the estimated error becomes exactly equal to that of the full SLAM.
[Plot: Compression Filter - Full SLAM Comparison; standard deviation vs time, with the full update instant marked.]
Figure 50: Landmark estimated position error for full SLAM and compressed filter (enhanced). This plot presents an enhanced view of the instant when the compressed algorithm performs a full update. At time 165 the full SLAM (solid line) and the compressed algorithm (solid line with dots) report the same estimated error, as predicted.
[Plot: Compression Filter - Full SLAM Comparison; covariance difference vs time, with the full update instant marked.]
Figure 51: Estimated error differences between full SLAM and compressed filter. The estimated error difference between the two algorithms becomes identically zero when the full update is performed by the compressed algorithm.
a given landmark, the error covariance matrix of the states is updated according to

P = P − ΔP,    ΔP = W · S · W^T    (311)
The time subindexes are omitted where possible to simplify the equations. The state vector can be divided into two groups, the preserved states XP and the discarded states XD:

X = [ XP ; XD ],   XP ∈ R^NP,   XD ∈ R^ND,   X ∈ R^N,   N = NP + ND    (312)
With this partition it is possible to generate conservative estimates by updating the states XD but not updating the covariance and cross-covariance matrices corresponding to this subvector. The covariance matrix can then be written in the following form:

P = [ PPP    PPD ]          ΔP = [ ΔPPP   ΔPPD ] = W · S · W^T    (313)
    [ PPD^T  PDD ] ,             [ ΔPDP   ΔPDD ]
Conservative updates are obtained if the nominal update matrix ΔP is replaced by the suboptimal ΔP*:

ΔP* = [ ΔPPP   ΔPPD ]  =  ΔP − [ 0   0    ]
      [ ΔPDP   0    ]          [ 0   ΔPDD ]
                                                  (314)
P* = P − ΔP* = P − ΔP + [ 0   0    ]
                        [ 0   ΔPDD ]

It can be shown that this simplification generates consistent error covariance estimates.

Demonstration: The error covariance matrix P*(k + 1) can be rewritten as follows:

P*(k + 1) = P(k) − ΔP* = P(k) − ΔP + δ    (315)

where

ΔP* = [ ΔPPP   ΔPPD ] = ΔP − δ,   ΔP = [ ΔPPP   ΔPPD ] ≥ 0,   δ = [ 0   0    ] ≥ 0
      [ ΔPDP   0    ]                  [ ΔPDP   ΔPDD ]            [ 0   ΔPDD ]

The matrices ΔP and δ are positive semidefinite since:
ΔP = [ ΔPPP    ΔPPD ] = W · S · W^T ≥ 0
     [ ΔPPD^T  ΔPDD ]
                                          (316)
ΔPDD = WD · SD · WD^T ≥ 0
As given in Equation (314), the total update is formed by the optimal update plus an additional positive semidefinite noise matrix δ. The matrix δ increases the covariance uncertainty:

P*(k + 1) = P(k + 1) + δ    (317)
so the suboptimal update P* is more conservative than the full update:

P(k + 1) ≤ P*(k + 1) ≤ P(k)    (318)
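The conservativeness argument can be checked numerically. The following Python sketch (toy dimensions, a random gain and innovation covariance, all names illustrative) builds a full update ΔP = W·S·W^T, zeroes its discarded block, and verifies that the difference δ is positive semidefinite, which is what makes the suboptimal estimate conservative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy update: 6 states (4 preserved, 2 discarded), 2 observations.
n, n_p, m = 6, 4, 2
W = rng.standard_normal((n, m))      # gain matrix (arbitrary values here)
A = rng.standard_normal((m, m))
S = A @ A.T + m * np.eye(m)          # innovation covariance, positive definite
dP = W @ S @ W.T                     # full update, PSD by construction

# Suboptimal update: zero only the discarded/discarded block.
dP_star = dP.copy()
dP_star[n_p:, n_p:] = 0.0
delta = dP - dP_star                 # = [[0, 0], [0, dP_DD]]

# delta is PSD, so P* = P - dP + delta is never smaller (in the PSD
# ordering) than the optimal P(k+1) = P - dP: the filter stays conservative.
assert np.min(np.linalg.eigvalsh(delta)) >= -1e-9
```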
Finally, the only submatrices that need to be evaluated are PPP, PPD and PDP. The significance of this result is that PDD is not evaluated. In general this matrix will be of high order, since it includes most of the landmarks. The fundamental problem then becomes the selection of the partition P and D of the state vector. The diagonal of the matrix ΔP can be evaluated online with low computational cost. Inspecting the diagonal elements of ΔP, many terms are very small compared to the corresponding previous covariance value in the matrix P. This indicates that the new observation does not contribute significant information to this particular state, and is an indication that the state can be assigned to the subset D. The other criterion used is based on the value of the actual covariance of the state: if it is below a given threshold, the state can be a candidate for the subvector D. In many practical situations a large number of landmarks can be associated to the subvector D. This introduces significant computational savings, since PDD can potentially become much larger than PPP. The cross-covariances PPD and PDP are still maintained, but they are in general of lower order, as can be appreciated in Figure 52. Finally, the selection criteria to obtain the partition of the state vector can be given with the union of the following sets Ii:

I1 = {i : ΔP(i, i) < c1 · P(i, i)},   I2 = {i : P(i, i) < c2},   I = I1 ∪ I2    (319)
Then ΔP* is evaluated as follows:

ΔP*(i, j) = 0          ∀ i, j : i ∈ I and j ∈ I
ΔP*(i, j) = ΔP(i, j)   ∀ i, j : i ∉ I or j ∉ I    (320)
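Equations (319) and (320) translate directly into a masked covariance update. The sketch below is a minimal Python illustration; the threshold values for c1 and c2 and the function names are hypothetical, not taken from the original text:

```python
import numpy as np

def select_discarded(P, dP, c1=0.01, c2=1e-4):
    """Boolean mask of the set I = I1 U I2 of Eq. (319): states whose
    update is negligible (I1) or whose covariance is already tiny (I2).
    Only the diagonals are inspected, so the cost is linear in N."""
    d_dP, d_P = np.diag(dP), np.diag(P)
    I1 = d_dP < c1 * d_P
    I2 = d_P < c2
    return I1 | I2

def suboptimal_update(P, dP, I):
    """Eq. (320)/(321): zero dP on the I x I block, then apply it."""
    dP_star = dP.copy()
    dP_star[np.ix_(I, I)] = 0.0      # np.ix_ accepts boolean masks
    return P - dP_star
```

Only the I×I block of the update is dropped; rows and columns linking preserved and discarded states are still applied, matching the grey region of Figure 52.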
Figure 52: Full covariance matrix divided into the covariance blocks corresponding to the vehicle and preserved landmark states (XP) and the discarded landmark states (XD). The cross-covariances between the preserved and discarded states are fully updated, as shown in grey. The cross-covariances between the elements of the discarded landmark states are not updated, as shown in white; only their diagonal elements are evaluated.

The error covariance matrix is updated with the simplified matrix ΔP*:

P*(k + 1, k + 1) = P(k + 1, k) − ΔP*    (321)
The practical meaning of the set I1 is that, with an appropriate selection of c1, we can reject negligible covariance updates. As mentioned before, the selection of I1 requires the evaluation of the diagonal elements of the matrix ΔP. The evaluation of the ΔP(i, i) elements requires a number of operations proportional to the number of states, instead of the quadratic cost required for the evaluation of the complete matrix ΔP. The second subset, defined by I2, is related to the states whose covariances are small enough to be considered practically zero. In the case of natural landmarks they become almost equivalent to beacons at known positions. The number of elements in the set I2 will increase with time and can eventually make the computational requirements of SLAM algorithms comparable to those of standard beacon localisation algorithms. Finally, the magnitude of the computational saving depends on the size of the set I. With appropriate exploration policies and real-time mission planning, the computational requirements can be maintained within the bounds of the onboard resources.

Implementation Issues: Relative Map Representation
The suboptimal approach presented becomes less conservative when the cross-correlations between the non-relevant landmarks become small. This is very unlikely if an absolute reference frame is used, that is, when the vehicle, landmarks and observations are represented with respect to
a single reference frame. The cross-correlations between landmarks of different regions can be substantially decreased by using a number of different bases and making the observations relative to those bases. With this representation the map becomes grouped in constellations. Each constellation has an associated frame based on two landmarks that belong to it. The 'base' landmarks that define the associated frame are represented in the global frame; all the other landmarks that belong to the constellation are defined in the local frame. For a particular constellation, the local frame is based on the two base landmarks

La = (xa, ya),   Lb = (xb, yb)    (322)
It is possible to define two unit vectors that describe the orientation of the base frame:

v1 = (Lb − La) / ‖Lb − La‖ = (xb − xa, yb − ya) / √((xb − xa)^2 + (yb − ya)^2) = (v11, v12)

v2 = (−v12, v11),   ⟨v2, v1⟩ = 0    (323)
The rest of the landmarks in this particular constellation are represented in a local frame with origin at La and axes parallel to the vectors v1 and v2:

Li = (xi, yi),   Li^a = (ξi, ηi)    (324)

with

ξi = ⟨Li − La, v1⟩ = (Li − La)^T · v1
ηi = ⟨Li − La, v2⟩ = (Li − La)^T · v2    (325)
The following expression can be used to obtain the absolute coordinates from the relative coordinate representation:

Li = La + ξi · v1 + ηi · v2    (326)
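The frame construction and the two coordinate conversions above can be sketched in a few lines of Python. This is a minimal illustration of Equations (322) to (326); the function names are hypothetical:

```python
import numpy as np

def base_frame(La, Lb):
    """Unit vectors of the constellation frame defined by base landmarks
    La and Lb (Eq. 323)."""
    v1 = (Lb - La) / np.linalg.norm(Lb - La)
    v2 = np.array([-v1[1], v1[0]])     # perpendicular, so <v1, v2> = 0
    return v1, v2

def to_relative(Li, La, v1, v2):
    """Eq. (325): coordinates (xi_i, eta_i) of landmark Li in the local frame."""
    d = Li - La
    return np.array([d @ v1, d @ v2])

def to_absolute(rel, La, v1, v2):
    """Eq. (326): back to global coordinates."""
    return La + rel[0] * v1 + rel[1] * v2
```

Because v1 and v2 are orthonormal, the two conversions are exact inverses: converting a landmark to relative coordinates and back recovers its global position.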
The reference frame is formed with two landmarks, as shown in Figure 53. The observations are then obtained relative to this frame. Assuming that the external sensor returns range and bearing, the observation functions are:
hi = Li − XL − Ri · (cos(βi), sin(βi)) = 0
βi = αi + φ − π/2    (327)

where αi is the object angle with respect to the laser frame, Ri is the object range with respect to the laser, and (XL, φ) = (xL, yL, φ) are the vehicle states. Finally,

hi = La + ξi · v1 + ηi · v2 − XL − Ri · (cos(βi), sin(βi)) = 0    (328)
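The relative observation function can be sketched directly from Equation (328). The Python function below returns the residual hi for a range/bearing return to a landmark stored in constellation coordinates; all names are illustrative, and the base vectors are assumed to come from the frame construction described above:

```python
import numpy as np

def observation_residual(La, v1, v2, rel_i, XL, phi, R_i, alpha_i):
    """Residual of Eq. (328) for a range/bearing return (R_i, alpha_i) to a
    landmark stored as (xi_i, eta_i) relative to the base landmark La."""
    beta = alpha_i + phi - np.pi / 2            # bearing in the global frame
    Li = La + rel_i[0] * v1 + rel_i[1] * v2     # landmark in global coordinates
    return Li - XL - R_i * np.array([np.cos(beta), np.sin(beta)])
```

When the predicted range and bearing match the stored landmark position exactly, the residual is zero; in the filter this residual drives the update through the usual innovation machinery.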
With this representation, the landmarks defining the bases become the natural 'preserved' landmarks. The observations in each constellation are evaluated with respect to the bases and can be considered, in the limit, as observations contaminated with white noise. This makes the relative elements of one constellation uncorrelated with the relative elements of the other constellations. The only landmarks that will maintain strong correlations are the ones defining the bases, which are represented in absolute form.
Figure 53: Local reference frame
Experimental Results
The next set of plots presents a comparison of the performance of the suboptimal algorithm. Figures 54 and 55 present two runs, one using most of the states and the other with only 100 states. The plots show that the total number of states used by the system grows with time as the vehicle explores new areas. The number of states used by the system is shown in grey, and the number of states not updated is shown with stars '*'. In the first run, very conservative values for the constants c1 and c2 were selected, so most of the states were updated with each observation. The second run corresponds to a less conservative selection plus a limit on the maximum number of states. Figure 55 shows that a large number of states are not updated at every time step, resulting in a significant reduction in the computational cost of the algorithm. From Figures 56 and 57 it can be seen that the accuracy of the SLAM algorithm has not been degraded by this simplification. These figures present the final estimated error of all the states for both runs. It is noteworthy that only the bases are represented in absolute form; the other states are represented in relative form, and their standard deviations become much smaller. One important remark regarding the advantage of the relative representation with respect to the simplification presented: since the bases are in absolute form they will maintain strong correlations with the other bases and the vehicle states. They are therefore more likely to be chosen as 'preserved' landmarks, since the observations will contribute more to them than to the relative states belonging to distant bases. In fact, the states chosen will most likely be the bases and the states associated with the landmarks in the local constellation. It is also important to remark that with this representation the simplification becomes less conservative than with the absolute representation.
This can clearly be seen by looking at the correlation coefficients for all the states in each case. This is shown in Figures 58 and 59, where the correlations of the relative and absolute maps, respectively, are presented. In Figure 58 each block of the diagonal corresponds to a particular constellation, and the last block contains the vehicle states and the bases. The different constellations become decorrelated from each other and remain correlated only to the last block, whose cross-correlations are updated by the suboptimal algorithm presented. These results imply that with the relative representation the cross-correlations between constellations become zero and the suboptimal algorithm becomes close to optimal. This is not the case for the absolute representation, as shown in Figure 59, where all the states maintain strong cross-correlations. Finally, Figure 60 presents the results of a 4 km trajectory using the compressed algorithm in a large area. In this case there are approximately 500 states in the global map. The system created 19 different constellations to implement the relative map. The cross-correlation coefficients between the different constellations become very small, as shown in Figure 61. This demonstrates the advantages of the compressed algorithm, since the local areas are significantly smaller than the global map. When compared with the full SLAM implementation, the algorithm generated identical results (states and covariance) with the advantage of very low computational requirements. For larger areas the algorithm becomes more efficient, since the cost is basically a function of the number of local
landmarks. These results are important since they demonstrate that even in very large areas the computational limitations of SLAM can be overcome with the compressed algorithm and an appropriate selection of local areas.
Figure 54: Total number of states, states used, and states not updated. The figure presents the total number of states with a solid black line. This number increases as the vehicle explores new areas and validates new landmarks. The states used by the system are represented in grey. The number of states not used is represented with '*'. In this run the system used most of the available states.
Figure 55: Total number of states, states used, and states not updated. In this run a maximum number of states was fixed as a constraint for the suboptimal SLAM algorithm. This can be appreciated in the grey plot, where the number of states used remains below the given threshold. The number of states not updated increases with time.
[Plot: final deviations (1 sigma); metres vs landmark states.]
Figure 56: Final estimation errors for relative and absolute states using most of the states.
[Plot: final deviations (1 sigma); metres vs landmark states.]
Figure 57: Final estimated error of relative and absolute states using a reduced number of states. These results are similar to the ones using most of the states. This result shows that the algorithm is not only consistent but close to optimal when used with the appropriate map representation.
[Plot: covariance coefficients, states ordered by constellation.]
Figure 58: Correlation coefficients for the relative representation. Each block represents the cross-correlation coefficients of the elements of a different constellation. The block in the corner contains the vehicle states and all the bases. It can be seen that the cross-correlation between different constellations is very small. The nonzero cross-correlations between the bases and the different constellations are also clear; these correlations are updated by the suboptimal filter.
[Plot: covariance coefficients.]
Figure 59: Correlation coefficients for the absolute representation. In this case the map appears completely correlated, and the suboptimal algorithm will generate consistent but more conservative results.
Figure 60: Constellation map and vehicle trajectory. 19 constellations were formed by the algorithm. The origin of each base is presented with a '+' and the other end of the base segment with a 'o'. The relative landmarks are represented with '*', and their association with a base is represented with a line joining the landmark to the origin of the relative coordinate system.
[Plot: cross-correlation coefficients, states ordered by constellation.]
Figure 61: Cross-correlation coefficients. The plot shows 19 constellations and a block in the corner containing the correlation coefficients for the bases and the vehicle states. It can be appreciated that the cross-correlation between the relative states of the different bases is very small.
References
[1] I.Y. Bar-Itzhack. Identity Between INS Position and Velocity Error Models. Journal of Guidance and Control, pages 568-570, September 1981.
[2] I.Y. Bar-Itzhack and D. Goshen-Meskin. Identity Between INS Position and Velocity Error Equations in the True Frame. Journal of Guidance and Control, pages 590-592, November 1988.
[3] Y. Bar-Shalom and X. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Boston, 1993.
[4] D.O. Benson. A Comparison of Two Approaches to Pure-Inertial and Doppler-Inertial Error Analysis. IEEE Transactions on Aerospace and Electronic Systems, pages 447-455, July 1975.
[5] R. Chatila and S. Lacroix. Adaptive Navigation for Autonomous Mobile Robots. International Symposium on Robotics Research, 1995.
[6] I. Cox. Blanche - An Experiment in Guidance and Navigation of an Autonomous Robot Vehicle. IEEE Transactions on Robotics and Automation, 7:193-204, 1991.
[7] G. Dissanayake, P. Newman, S. Clark and H. Durrant-Whyte. A Solution to the Simultaneous Localisation and Map Building (SLAM) Problem. IEEE Transactions on Robotics and Automation, 17(3), June 2001.
[8] H. Durrant-Whyte. Where Am I? Industrial Robot, 21(2), 1994.
[9] H.F. Durrant-Whyte. An Autonomous Guided Vehicle for Cargo Handling Applications. International Journal of Robotics Research, 15, 1996.
[10] D.J. Flynn. A Discussion of Coning Errors Exhibited by Inertial Navigation Systems. Royal Aircraft Establishment Technical Memorandum Rad-Nav 243, 1984.
[11] J. Guivant. Efficient Simultaneous Localisation and Mapping in Large Environments. PhD thesis, University of Sydney, 2003.
[12] J. Guivant and E. Nebot. Optimization of the Simultaneous Localization and Map Building Algorithm for Real Time Implementation. IEEE Transactions on Robotics and Automation, 17(3):731-747, June 2001.
[13] S. Julier. Process Models for the High-Speed Navigation of Conventional Road Vehicles. Technical report, University of Oxford, 1994.
[14] S. Julier. Process Models for the Navigation of High Speed Land Vehicles. PhD thesis, University of Oxford, 1996.
[15] A. Kelly. A 3D State Space Formulation of a Navigation Kalman Filter for Autonomous Vehicles. CMU Robotics Institute Technical Report, 1994.
[16] X. Kong. Development of a Non-Linear Psi-Angle Model for Large Misalignment Errors and its Application in INS Alignment and Calibration. IEEE International Conference on Robotics and Automation, pages 1430-1435, May 1999.
[17] J. Leonard and H. Feder. Decoupled Stochastic Mapping for Mobile Robot and AUV Navigation. IEEE Journal of Oceanic Engineering, 26(4):561-571, 2001.
[18] P. Maybeck. Stochastic Models, Estimation and Control, volume 1. Academic Press, 1982.
[19] P. McKerrow. Introduction to Robotics. Addison-Wesley, 1991.
[20] A.G.O. Mutambara. Decentralised Estimation and Control with Applications to a Modular Robot. PhD thesis, Department of Engineering Science, University of Oxford, 1994.
[21] E. Nebot and H. Durrant-Whyte. Initial Alignment and Calibration of Low Cost Inertial Navigation Units for Land Vehicle Applications. Journal of Robotic Systems, 16(2):81-92, February 1999.
[22] E. Nebot, H. Durrant-Whyte and S. Scheding. Frequency Domain Modelling of Aided GPS for Vehicle Navigation Systems. Robotics and Autonomous Systems, volume 25, pages 73-82, February 1990.
[23] J. Neira and J.D. Tardos. Data Association in Stochastic Mapping Using the Joint Compatibility Test. IEEE Transactions on Robotics and Automation, pages 890-897, 2001.
[24] S. Sukkarieh. Aided Inertial Navigation Systems for Autonomous Land Vehicles. PhD thesis, Department of Mechanical and Mechatronic Engineering, University of Sydney, 2000.
[25] M. Montemerlo, S. Thrun, D. Koller and B. Wegbreit. FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem. http://citeseer.nj.nec.com/503340.html.
[26] R. Smith, M. Self and P. Cheeseman. A Stochastic Map for Uncertain Spatial Relationships. 4th International Symposium on Robotics Research, MIT Press, 1987.
[27] D.H. Titterton and J.L. Weston. Strapdown Inertial Navigation Technology. Peter Peregrinus Ltd, 1997.