Inertial Navigation Meets Deep Learning: A Survey of Current Trends and Future Directions

Nadav Cohen and Itzik Klein
The Hatter Department of Marine Technologies, Charney School of Marine Sciences, University of Haifa, Haifa, Israel

Abstract—Inertial sensing is used in many applications and platforms, ranging from day-to-day devices such as smartphones to very complex ones such as autonomous vehicles. In recent years, the development of machine learning and deep learning techniques has increased significantly in the field of inertial sensing. This is due to the development of efficient computing hardware and the accessibility of publicly available sensor data. These data-driven approaches are used to empower model-based navigation and sensor fusion algorithms. This paper provides an in-depth review of those deep learning methods. We examine each vehicle operation domain separately, including land, air, and sea. Each domain is divided into pure inertial advances and improvements based on filter parameter learning. In addition, we review deep learning approaches for calibrating and denoising inertial sensors. Throughout the paper, we discuss these trends and future directions. We also provide statistics on the commonly used approaches to illustrate their efficiency and stimulate further research in deep learning embedded in inertial navigation and fusion.

Index Terms—Inertial sensing, Navigation, Deep learning, Data-driven, Sensor fusion, Autonomous platforms, Robotics.

I. INTRODUCTION

Research on the concepts of inertial sensing has been conducted for several decades, and inertial sensors have been used in the navigation process for a variety of platforms during the last century [1]. As of today, most inertial sensing relies on accelerometers, which provide specific force measurements, and gyroscopes, which provide angular velocity measurements [2]. An inertial measurement unit (IMU) is composed of three orthogonal accelerometers and three orthogonal gyroscopes that differ in performance and cost [3, 4].

The IMU readings are processed in real time to provide a navigation solution. Such a system, which executes the strapdown inertial navigation algorithm, is known as an inertial navigation system (INS) [5]. The INS provides the navigation solution, consisting of position, velocity, and orientation, as illustrated in Fig. 1a.

It is important to note that the navigation solution accuracy and efficiency are affected by the duration of the mission, the platform dynamics, and the quality of the IMU. Using high-end sensors results in a more accurate navigation solution for extended periods of time, while using a low-cost INS results in a much faster accumulation of errors. There is, however, a common approach for dealing with error accumulation in both cases, which is to use an accurate aiding sensor to ensure that the solution is bounded or the error is mitigated [6]. Sensors such as the global navigation satellite system (GNSS), which accurately measures position, or the Doppler velocity log (DVL), which gives accurate velocity measurements, are examples of aiding sensors. Further, it is possible to use additional information instead of, or in addition to, a physical sensor, such as zero velocity updates (ZUPT), zero angular rates (ZAR), and more in specific applications and scenarios, all to prevent the inertial navigation solution from accumulating errors [7]. The INS and aiding sensors are commonly fused in nonlinear filters such as the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). Filters of this type can propagate the uncertainty of a model while taking into account the process noise covariance, thereby providing additional useful information along with preventing the accumulation of errors within an inertial navigation algorithm [8]. A diagram of the sensor fusion is shown in Fig. 1b.

During the past decade, deep learning has made remarkable progress due to advances in neural network architectures, large datasets, and innovative training methods. In the field of computer vision, convolutional neural networks (CNNs) have revolutionized tasks such as image classification, object detection, and semantic segmentation [9]. It has been demonstrated that recurrent neural networks (RNNs), including long short-term memory (LSTM) networks, are particularly effective at modeling sequences and undertaking language-related tasks, such as translation and sentiment analysis, when used for natural language processing [10]. Moreover, techniques such as generative adversarial networks (GANs) and transfer learning have extended the capabilities of deep learning models to enable tasks such as image synthesis and the use of pre-trained models [11, 12]. In light of these advancements, deep learning has been significantly enhanced, propelling it to new frontiers in the field of artificial intelligence.

The recent advances in hardware and computational efficiency have proven deep learning (DL) methods to be useful for real-time applications ranging from image processing and signal processing to natural language processing, owing to their ability to address nonlinear problems [13, 14, 15, 16]. As a result, DL methods began to be integrated into inertial navigation algorithms. One of the first papers to use neural networks (NNs) in inertial navigation was written by Chiang et al. [17] in 2003. A multi-sensor integration was proposed using multi-layer, feed-forward neural networks and a back-propagation learning algorithm to regress the accurate land vehicle position. It demonstrated the effectiveness of the NN in addressing navigation problems.

Fig. 1: (a) Strapdown inertial navigation algorithm. For a given initial condition, the gyroscope's angular velocity and the accelerometer's specific force measurements are integrated over time to obtain the navigation solution in the desired reference frame (local, geographic, and so on). (b) The process by which the navigation solution is corrected using a nonlinear filter and an aiding sensor. In the output, the black diverging curve shows how the navigation solution accumulates error, and the red curve illustrates how the aiding sensor corrects this error, resulting in a sawtooth-like signal.
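To make the behavior sketched in Fig. 1b concrete, the following toy example (written for this survey, not taken from any of the reviewed papers; all signal values and noise levels are assumed) dead-reckons a 1D position from a biased, noisy accelerometer and periodically resets the accumulated error with an accurate aiding fix, producing the sawtooth-like error profile described in the caption.

```python
import numpy as np

# Toy 1D example: dead-reckoning from a biased, noisy accelerometer,
# periodically corrected by an accurate aiding position measurement.
dt, steps = 0.01, 5000            # 100 Hz IMU, 50 s of data (assumed values)
rng = np.random.default_rng(0)

true_acc = np.zeros(steps)        # the platform is actually at rest
meas_acc = true_acc + 0.05 + 0.02 * rng.standard_normal(steps)  # bias + noise

pos, vel = 0.0, 0.0
pos_err = []
for k in range(steps):
    # Strapdown-style propagation: integrate acceleration to velocity, then position.
    vel += meas_acc[k] * dt
    pos += vel * dt
    # Every 5 s an aiding sensor (e.g., GNSS) provides an accurate position fix,
    # and the accumulated error is reset: the sawtooth pattern of Fig. 1b.
    if (k + 1) % 500 == 0:
        pos, vel = 0.0, 0.0       # crude reset in place of a full Kalman update
    pos_err.append(abs(pos))

print(f"max position error between fixes: {max(pos_err):.2f} m")
```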

Inspired by this success and impact, papers were published suggesting the use of deeper neural networks. Noureldin et al. [18] designed a multi-layer perceptron (MLP) network to predict the INS position error during GNSS outages using the INS position component and the instantaneous time t. This work was continued and modified in [19], where the authors replaced the MLP network with a radial basis function (RBF) neural network to address the same scenario and were successful in reducing the position error. In [20], further improvements were described, utilizing multi-layer feed-forward neural networks to regress not only the vehicle's position but also its velocity, in an INS/DGNSS integration employing a low-cost IMU. Further research concerning GNSS outages and INS/GNSS fusion is available in [21], using input-delayed neural networks that regress velocity and position utilizing fully connected layers. Additionally, in [22], fully connected neural networks with hidden layers constructed of wavelet basis functions were proposed for INS/GNSS integration, eliminating the complexities associated with the KF by providing a reliable positioning solution when GNSS signals are not available. In addition to position regression, Chiang et al. in [23, 24, 25] introduced NNs based on fully connected layers for the enhancement of orientation estimates provided by INS/GNSS fusion when using low-cost MEMS IMUs or when GNSS signals are not available. Another concept is to improve a specific block within the KF; a suggestion in [26] was to provide the innovation process in an adaptive KF for an integrated INS/GNSS system using a three-layer, fully connected network.

These researchers were among the pioneers in using inertial data in DNNs to improve navigation. A performance analysis of the networks above was conducted in [27] for INS/GNSS fusion.

Aside from this paper, there are several other papers that conducted surveys on the general topic of data-driven navigation. Most of them looked at specific platforms or DL methods. In [28, 29], a review of the navigation of spacecraft was made in addition to other concepts such as dynamics and control. Furthermore, approaches of only deep reinforcement learning were reviewed for different platforms in [30, 31, 32]. In light of DL's significant advances in the fields of image processing and computer vision, vision-based navigation surveys were conducted for different platforms and in general [33, 34, 35, 36, 37]. Some papers focused only on machine-learning-based navigation, which is based primarily on determining the features through preprocessing data analysis [38, 39, 40]. In [41] there is a discussion of end-to-end DL methods employed in autonomous navigation, including subjects other than navigation such as obstacle detection, scene perception, path planning, and control. A survey was conducted in [42] to examine inertial positioning using recent DL methods. It targeted tasks such as pedestrian dead-reckoning and human activity recognition. An overview of all the survey papers is given in Table I.

TABLE I: An overview of survey papers reviewing DL methods used in different aspects of navigation. A total of fifteen papers.

  Paper    Topic
  28       DL methods for spacecraft dynamics, navigation and control
  29       DL methods of relative navigation of a spacecraft
  30, 31   DRL for mobile robot navigation
  32       UAV autonomous navigation using RL
  33       DL methods for visual indoor navigation
  34, 36   Visual navigation using RL
  35       DL methods of perception and navigation in unstructured environments
  37       DL for perception and navigation in autonomous systems
  38       Machine learning for maritime vehicle navigation
  39       Survey of inertial sensing and machine learning
  40       Machine learning for indoor navigation
  41       DL applications and methods for autonomous vehicles
  42       DL methods for positioning including pedestrian dead-reckoning and human activity recognition

Contrary to all of the surveys above, this paper examines DL methods utilized exclusively in inertial navigation algorithms, as defined in the literature, and focuses entirely on vehicles regardless of their operating environment. The contributions of this paper are:
1) Provide an in-depth review of DL methods applied to inertial navigation tasks for land, aerial, and maritime vehicles.
2) Examine DL methods for calibrating and denoising inertial sensor data, suitable for any vehicle and any inertial sensor.
3) Provide insights into current trends on the subject and describe the common DL architectures for inertial navigation tasks.
4) Discuss potential future directions for the use of DL approaches for improving inertial navigation algorithms.
The rest of the paper is organized as follows: Sections II, III, and IV address DL approaches for improving land, aerial, and maritime inertial sensing, respectively, and are further divided into subsections on pure inertial navigation and aided inertial navigation. Section V discusses methods of improving the calibration and denoising of inertial data using DL methods. Section VI discusses the survey, and Section VII draws conclusions.

II. LAND VEHICLE INERTIAL SENSING

A. Pure Inertial Navigation

Several studies have investigated deep learning compensation of inertial errors when GNSS signals are not available. Shen et al. [43] presented an approach for improving MEMS-INS/GNSS navigation during GNSS outages. This article proposed two neural networks for a dual optimization process, wherein the first NN compensates for the INS error, while the second NN compensates for the error generated by a filter, using an RBF network for accurate position data.

Considering the great interest in scenarios of GNSS outages, many papers have been published suggesting more complex DNNs. For cases of GNSS outages, Lu et al. introduced a multi-task learning method in [44]. In the first phase, the inertial data is denoised using a convolutional auto-encoder, followed by a temporal convolutional network (TCN) to bridge any GNSS gaps and a one-dimensional CNN (1DCNN) to detect zero-velocity scenarios. An accurate navigation solution is then derived from this aiding data in the KF. An additional paper proposed a CNN model for accurate speed estimation using the inertial data when there are no aiding sensors such as GNSS or internal wheel speed measurements [45].
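As an illustration of the zero-velocity detection idea mentioned above, the sketch below shows a minimal 1DCNN classifier over IMU windows. It is a hypothetical example written for this survey, not the architecture of [44]; the window length, channel counts, and sampling rate are assumptions.

```python
import torch
import torch.nn as nn

class ZeroVelocityNet(nn.Module):
    """Toy 1DCNN that classifies whether an IMU window corresponds to a
    zero-velocity (ZUPT) event. Illustrative sketch only."""
    def __init__(self, imu_channels: int = 6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(imu_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, imu_channels, samples); returns a ZUPT probability.
        return torch.sigmoid(self.classifier(self.conv(window).squeeze(-1)))

detector = ZeroVelocityNet()
batch = torch.randn(4, 6, 200)   # four 2 s windows at an assumed 100 Hz rate
zupt_prob = detector(batch)      # (4, 1); a detection above a chosen threshold
                                 # would trigger a zero-velocity update in the KF
```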
GNSS signals are not viable in all scenarios, such as indoor or tunnel navigation, requiring not only compensation for gaps but also accounting for the entire process. For example, in [46, 47], Tong et al. regressed the change in velocity and heading of a vehicle in GNSS-blocked environments such as tunnels, using a TCN architecture with residual blocks and low-cost IMU data provided by smartphones. Additionally, "DeepVIP", an LSTM-based architecture, was introduced for indoor vehicle positioning and trained on low-cost inertial data from smartphones. "DeepVIP" is available in two variations: the first achieves a higher level of positioning accuracy and is appropriate for scenarios requiring the highest degree of positioning accuracy, while the other is more appropriate for situations requiring computational efficiency, and therefore its accuracy is slightly lower [48]. The use of data-driven methods in inertial vehicle navigation also provides the benefit of improving the output of the inertial sensors independently of the other aiding sensors. Zhao et al. examined high-end sensors in [49] and proposed "GyroNet" and "FoGNet", which are based on bi-LSTMs. The first estimates the bias and noise of a gyroscope to improve angular velocity measurements, while the second corrects the drift of the fiber optic gyroscope (FOG) to improve vehicle localization. A similar approach was taken in [50], where the authors introduced a novel approach to produce IMU-like data from GNSS data to train a fully connected network to regress angular velocity and acceleration for better positioning based on MEMS IMU data. Gao et al. [51] proposed "VeTorch", an inertial tracking system that tracks the location of a vehicle in real time using inertial data obtained from a smartphone. For acceleration and orientation sequence learning as well as pose estimation, the authors used a TCN. As a means of estimating the speed of the vehicle, Freydin & Or employed an LSTM-based model that takes as input low-cost IMU readings provided by a smartphone [52]. Apart from enhancing the inertial readings to provide a more accurate navigation solution, DL approaches are useful for improving sensor fusion. Li et al. [53] introduced a novel recurrent convolutional neural network (RCNN)-based architecture for scan-to-scan laser/inertial data fusion for pose estimation. After studying the extreme dynamics of an autonomous race car, [54] proposed an end-to-end RNN-based approach that utilizes IMU data along with wheel odometry and motor current data to estimate velocity. In the "GALNet" framework, the authors utilized inertial, kinematic, and wheel velocity data to train an LSTM-based network that regressed the relative pose of a car [55]. In [56], a smartphone accelerometer, gyroscope, magnetometer, and gravity sensor were integrated to train the "XDRNet" architecture. Using a 1DCNN, the network regresses vehicle speed and heading changes and, as a consequence, reduces inertial positioning error drift. Moreover, Liu et al. presented a hybrid CNN-LSTM-based network that integrates a dual-antenna satellite receiver and a MEMS IMU to forecast the residual of the position, velocity, and orientation at each time step [57].

B. Aided Inertial Navigation

It has been demonstrated that using a filter such as the KF is a common approach to achieving a high level of accuracy and reliability in sensor fusion. Aside from providing a navigation solution, the filter also provides information regarding the propagation of navigation uncertainty over time. As a result, DL methods were suggested as a way to account for different blocks in the filter that tend to have a greater impact on the solution. A deep KF was suggested by Hosseinyalamdary [58]. In addition to the prediction and update steps of the EKF, a modeling step was added to the deep KF to correct IMU positioning and to model IMU errors. GNSS measurements were used to learn IMU error models with RNN and LSTM methods, and in the absence of GNSS observations, the trained model predicted the IMU errors. Further, the SL-SRCKF (self-learning square-root cubature KF) enables continuous observation vectors during GNSS outages by employing an LSTM-based network to learn the relationship between observation vectors and internal filter parameters, enhancing the accuracy of integrated MEMS-INS/GNSS navigation systems [59]. Additionally, DL methods have been used to identify specific scenarios in the inertial data that may prevent the navigation solution from accumulating errors. Using an RNN, a zero-velocity situation or lack of lateral slip could be identified and incorporated later on into a KF for localization processes [60].
The literature indicates that the noise covariance matrix plays an important role in the KF, and therefore DL methods were employed to adapt it continuously. Brossard, Barrau, & Bonnabel used a CNN-based method to dynamically adapt the noise covariance matrix of an invariant EKF using moderate-cost IMU measurements [61]. Previously, the authors developed in [62] a procedure combining Gaussian processes with RBF neural networks and stochastic variational inference in an attempt to improve a state-space dynamical model's propagation and measurement functions by learning the residual error between the physical prediction and the ground truth. It was also shown that these corrections may be used to design EKFs. An alternative method of estimating the process noise covariance relies on reinforcement learning, as explained in [63], which uses an adaptive KF to determine position, velocity, and orientation. In [64], not only the parameters of the measurement noise covariance but also the parameters of the process noise covariance were regressed. These parameters can be more accurately estimated using a multitask TCN, resulting in higher position accuracy than traditional GNSS/INS-integrated navigation systems. The authors in [65] introduced a residual network incorporating an attention mechanism to predict individual velocity elements of the noise covariance matrix. As a result of the empirical study, it was demonstrated that adjusting the non-holonomic constraint uncertainty during large dynamic vehicle motions, rather than strictly setting the lateral and vertical velocities to zero, could improve positioning accuracy under such motions.
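The following sketch illustrates how a learned process noise covariance can be plugged into the EKF time update, in the spirit of the adaptive approaches above. It is a simplified, hypothetical example: the network name QNet, its layer sizes, and the nine-state dimension are assumptions, not the designs of [61], [63], [64], or [65].

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Toy 1DCNN that maps a window of IMU samples to a positive diagonal
    of the process noise covariance Q. Illustrative sketch only."""
    def __init__(self, imu_channels: int = 6, state_dim: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, state_dim)

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, imu_channels, window_length)
        z = self.features(imu_window).squeeze(-1)
        # Softplus keeps the predicted variances strictly positive.
        return nn.functional.softplus(self.head(z)) + 1e-9


# EKF time update with the learned process noise (F and P would come from the
# model-based filter; here they are placeholders).
qnet = QNet()
imu_window = torch.randn(1, 6, 100)          # placeholder IMU buffer
Q = torch.diag(qnet(imu_window).squeeze(0))  # (9, 9) adapted process noise
F = torch.eye(9)                             # placeholder state-transition matrix
P = torch.eye(9) * 0.1                       # prior covariance
P_pred = F @ P @ F.T + Q                     # standard covariance propagation
```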
TABLE II: Papers discussing DL and inertial sensing of land vehicles, categorized by their improvement goal. A total of thirty-three papers.

  Improvement Goals   Papers
  Position            17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 43, 48, 50, 51, 57
  Velocity            20, 21, 45, 46, 47, 49, 52, 54, 56, 57
  Orientation         23, 24, 25, 46, 47, 49, 50, 51, 53, 55, 56, 57
  Filter Parameters   26, 43, 44, 58, 59, 60, 62, 63, 64, 65
III. AERIAL VEHICLE INERTIAL SENSING

A. Pure Inertial Navigation

A number of papers examined visual-inertial odometry as a tool for aerial inertial navigation. Clark et al. [66] developed the "VINet" architecture. It comprises LSTM blocks that process the camera output at the camera rate and IMU LSTM blocks that process data at the IMU rate. For the first time, this study used deep learning to determine a micro air vehicle's (MAV's) orientation. In [67], a similar approach was employed; however, the platform was not a MAV, but rather an unmanned aerial vehicle (UAV). Another vision-inertial fusion study was conducted in [68], in which a camera and IMU sensor fusion method was used to estimate the position of an unmanned aircraft system (UAS) using a CNN-LSTM-based network known as "HVIOnet", which stands for hybrid visual-inertial odometry network. A more complex method was introduced by Yusefi et al., where an end-to-end, multi-model, DL-based, monocular, visual-inertial localization system was utilized to resolve the global pose regression problem for UAVs in indoor environments. Using the proposed deep RCNN, experimental findings demonstrate an impressive degree of time efficiency, as well as a high degree of accuracy, in UAV indoor localization [69].

For assessing a vehicle's attitude with only inertial sensing, Liu et al. employed an LSTM-based network using IMU data from a UAV [70], while in [71] the authors utilized a hybrid, more complex network that incorporates CNN and LSTM blocks to estimate the MAV pose as the current position and unit quaternion. An inertial odometry network known as "AbolDeepIO" was developed by Esfahani et al. [72]. A deep inertial odometry LSTM network architecture based on IMU readings was proposed and tested on MAV data, yielding better results than "VINet" [66]. Seven sub-architectures (AbolDeepIO1-AbolDeepIO7) were also tested and evaluated. As a follow-up to "AbolDeepIO", the authors brought forward "OriNet", which is also based on LSTM and capable of estimating the full 3D orientation of a flying robot with a single particular IMU in quaternion form [73]. A robust inertial attitude estimator called "RIANN" was also proposed, a network whose name stands for Robust IMU-based Attitude Neural Network. Several motions, including MAV motion, were regressed using a gated recurrent unit (GRU) network. As well as proposing three domain-specific advances in neural networks for inertial attitude estimation, the authors also propose two methods that enable neural networks to handle a wide variety of sampling rates [74]. It has been suggested by Chumuang et al. [75] that CNNs and LSTMs are both effective for predicting the orientation of a MAV, the former using quaternions predicted from the IMU data with Madgwick's adaptive algorithm [76], and the latter using raw gyroscope measurements. For improving MAV navigation, rather than using two separate networks, the authors in [77] suggested three models, including a CNN, an RNN, and a CNN-LSTM hybrid model, and performed a comparison amongst them.
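A minimal sketch of an LSTM-based orientation regressor in the spirit of the attitude networks discussed above is shown below. It is not the actual architecture of [70]-[75]; the hidden size, window length, and the simplified quaternion loss are assumptions.

```python
import torch
import torch.nn as nn

class GyroAttitudeLSTM(nn.Module):
    """Toy LSTM that maps a gyroscope sequence to a unit quaternion describing
    the orientation change over the window. Illustrative sketch only."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 4)

    def forward(self, gyro_seq: torch.Tensor) -> torch.Tensor:
        # gyro_seq: (batch, time, 3) angular-rate samples
        out, _ = self.lstm(gyro_seq)
        q = self.head(out[:, -1])                 # use the last hidden state
        return q / q.norm(dim=-1, keepdim=True)   # normalize to a unit quaternion

model = GyroAttitudeLSTM()
window = torch.randn(8, 200, 3)                   # 8 windows of 200 gyro samples
delta_q = model(window)                           # (8, 4) unit quaternions
# Placeholder training target (identity rotation); a real loss would also
# handle the quaternion double cover, which this simplification ignores.
target = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).expand(8, 4)
loss = nn.functional.mse_loss(delta_q, target)
```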
In aerial navigation, as in other areas, GNSS signals can be useful when integrated with the INS, and in the absence of these signals, difficulties arise. To achieve better performance than traditional GNSS/INS fusion, an LSTM-based network was proposed in [78] to estimate the 3D position of an aerial vehicle. When the aerial vehicle encounters GNSS-denied environments, DL approaches are used to compensate. Continuing their research in [70], as discussed above, Liu et al. proposed a 1DCNN/GRU hybrid deep learning model that predicts the GNSS position increments for integrated INS/GNSS navigation in the event of GNSS outages [79, 80, 81]. A similar approach to deal with scenarios of denied GNSS environments was implemented in [82], in which a GRU-based network was used to estimate position and velocity. A novel method known as "QuadNet" was proposed by Shurin & Klein in [83], in which they utilized periodic quadrotor drone motion along with its inertial readings to develop a 1DCNN and LSTM method for regression of the distance and altitude changes of the quadrotor.

B. Aided Inertial Navigation

To improve the quality of Kalman attitude estimates, Al-Sharman et al. proposed a simple, fully connected network in [84]. The network uses the Kalman state estimates, provided by the inertial readings, and the control vector as inputs, and the UAV attitude is regressed. A recurring aspect is predicting the noise covariance information using DL. In [85], the authors proposed a CNN-based adaptive Kalman filter for enhancing high-speed navigation with low-cost IMUs. The paper presents a 1DCNN approach to predict the noise covariance information of the 3D acceleration and angular velocity by utilizing windowed inertial measurements, in an attempt to outperform classical KFs and Sage-Husa adaptive filters in highly dynamic conditions. In a subsequent paper, Or & Klein [86] developed a data-driven, adaptive noise covariance approach for an error-state EKF in INS/GNSS fusion. Using a 1DCNN, they were able to estimate the process noise covariance matrix, also known as the Q matrix, and use this information to provide a better navigation solution for a quadrotor drone.

TABLE III: Papers discussing DL and inertial sensing of aerial vehicles, categorized by their improvement goal. A total of twenty papers.

  Improvement Goals   Papers
  Position            68, 77, 78, 79, 80, 81, 82, 83
  Velocity            77, 82
  Orientation         66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 83
  Filter Parameters   84, 85, 86

IV. MARITIME VEHICLE INERTIAL SENSING

A. Pure Inertial Navigation

Zhang et al. [87, 88] were the first to apply deep learning techniques to autonomous underwater vehicle (AUV) navigation. To regress the position displacements of an AUV, the authors used an LSTM-based network trained over GNSS data that was utilized as the position displacement training targets. The output of this network is incorporated into the EKF for the purpose of making the position directly observable. Later, the authors developed a system called "NavNet", which utilizes data from the IMU and Doppler velocity log (DVL) sensors through a deep learning framework using LSTM and attention mechanisms to regress the position displacement of an AUV, and compares it to the EKF and UKF [89]. Further improvements and revisions were made in [90], which used TCN blocks instead of LSTM blocks. As a way of rectifying the error accumulation in the navigation system, Ma et al. proposed a similar procedure, as previously mentioned, and developed an adaptive navigation algorithm for AUV navigation that uses deep learning to generate low-frequency position information. Based on LSTM blocks, the network receives velocity measurements from the DVL and Euler angles from the attitude and heading reference system (AHRS) [91].

A task such as navigation on or under water may encounter difficulties due to the dynamics of the environment or the inaccessibility of aiding signals, such as GNSS signals. In [92] the authors used a hybrid TCN-LSTM network to predict the pitch and heave movement of a ship in different challenging scenarios. A fully connected network is suggested in [93] to improve AUV navigation in rapidly changing environments, such as in waves near or on the surface. This network is based on data from an accelerometer and is used to predict the pitch angle. The Kalman filter, neural network, and velocity compensation are then combined in the "NN-DR" method to provide a more accurate navigation solution.

The DVL is used in underwater applications as a sensor to assist in navigation, similar to GNSS data used in above-water applications. Several papers have been published examining scenarios of DVL failure. There was, for example, a proposal in [94] to aid dead-reckoning navigation for AUVs with limited sensor capabilities. Using an RNN architecture, the algorithm predicts relative velocities based on the data obtained from the IMU, pressure sensors, and control actions of the actuators on the AUV. In [95] the Nonlinear AutoRegressive with Exogenous Input (NARX) approach is used in cases where DVL malfunctions occur to determine the velocity measurements using INS data. To obtain the navigation solution, the output of the network is integrated into a robust Kalman filter (RKF). Cohen & Klein proposed "BeamsNet", which replaces the model-based approach for deriving the AUV velocity measurements from the DVL raw beam measurements with a 1DCNN that uses inertial readings [96]. The authors continued the work by looking at cases of partial DVL measurements and succeeded in recovering the velocity with a similar architecture called "LiBeamsNet" [97]. In the case of a complete DVL outage, in [98] the authors introduced "ST-BeamsNet", which is a Set-Transformer-based network that uses inertial readings and past DVL measurements to regress the current velocity. Lately, Topini et al. [99] conducted an experimental comparison of data-driven strategies for AUV navigation in DVL-denied environments, where they compared MLP, CNN, LSTM, and hybrid CNN-LSTM networks to predict the velocity of the AUV.
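To illustrate the general idea of regressing vehicle velocity from inertial windows when DVL data is limited, the sketch below shows a toy 1DCNN regressor. It is a hypothetical example, not the BeamsNet architecture of [96], [97], [98]; all layer sizes and the window length are assumptions.

```python
import torch
import torch.nn as nn

class InertialVelocityNet(nn.Module):
    """Toy 1DCNN that regresses a 3D AUV velocity vector from a window of
    accelerometer and gyroscope samples. Illustrative sketch only."""
    def __init__(self, imu_channels: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.regressor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, 6, samples) -> (batch, 3) body-frame velocity
        return self.regressor(self.encoder(imu_window))

net = InertialVelocityNet()
velocity = net(torch.randn(2, 6, 100))   # stand-in for a DVL-style velocity update
```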
targets. The output of this network is incorporated into EKF
for the purpose of making the position directly observable.
Later, the authors developed a system called ”Navnet”, which B. Aided Inertial Navigation
utilizes data from the IMU and Doppler velocity log (DVL) According to the authors of [100], the RBF network can be
sensors through a deep learning framework using LSTM and augmented with error state KF to improve the state estimation
attention mechanisms to regress the position displacement of an underwater vehicle. Through the application of the RBF
of an AUV and compares it to the EKF and UKF [89]. neural network, the proposed algorithm compensates for the
Further improvements and revisions were made in [90], which lack of error state KF performance by enhancing innovation
used TCN blocks instead of LSTM blocks. As a way of error terms. Or & Klein [86, 101, 102] developed an adaptive
rectifying the error accumulation in the navigation system, Ma EKF for velocity updates in INS/DVL fusions. Initially, they
Initially, they demonstrated that by correcting the noise covariance matrix, using a 1DCNN to predict the variance at each sample time as a classification problem, they could significantly improve the navigation results. In recent work, the authors introduced "ProNet", which uses regression instead of classification to accomplish the same task.

TABLE IV: Papers describing DL and inertial sensing of maritime vehicles, categorized by their improvement goal. A total of sixteen papers.

  Improvement Goals   Papers
  Position            87, 88, 89, 90, 91, 92, 93
  Velocity            93, 94, 96, 97, 98, 99
  Orientation         92, 93
  Filter Parameters   86, 100, 101, 102

V. CALIBRATION AND DENOISING

Since the INS is based on the integration of inertial data over time, it accumulates errors due to structured errors in the sensors. Calibration and denoising are therefore crucial to minimizing these errors. In Chen et al. [103], deep learning was used for the first time to reduce IMU errors. IMU data, containing deterministic and random errors, is fed as input to a CNN, which filters the data. According to Engelsman & Klein [104, 105], an LSTM-based network can be used for denoising accelerometer signals, and a CNN-based network can be used to eliminate bias in low-cost gyroscopes.

Apart from the papers mentioned above, the majority of research focuses on the denoising and calibration of gyroscopes. A series of papers [106, 107, 108] examined the denoising of gyroscope data by utilizing various variations of RNNs. One paper demonstrated the performance of a simple RNN structure, while the others utilized LSTMs. A comparison was made between LSTM, GRU, and hybrid LSTM-GRU approaches for gyroscope denoising in [109]. Additional comparisons between GRU, LSTM, and hybrid GRU-LSTM were conducted; however, the performance was tested not only in static conditions but in dynamic ones too [110]. In Brossard et al. [111], a deep learning method is presented for reducing the gyroscope noise of a low-cost IMU in order to achieve accurate attitude estimation. For feature extraction, a dilated convolutional network was used, and for training on orientation increments, an appropriate loss function was utilized. Additional kinds of CNN-based architectures addressed gyroscope corrections. One study demonstrated a denoising autoencoder architecture built on a deep convolutional model and designed to recover clean, undistorted output from corrupted data; it was found that the LKF angle prediction was boosted in this scenario [112]. Furthermore, a TCN and a 1DCNN were integrated for MEMS gyroscope calibration in [113], and Liu et al. introduced "LGC-Net" as a method for extracting local and global characteristics from IMU measurements to regress the gyroscope compensation components in a dynamic manner. To create "LGC-Net", special convolution layers are employed along with attention mechanisms [114]. Finally, Yuan et al. introduced "IMUDB", which stands for IMU denoising BERT. This model was inspired by the natural language processing field. This self-supervised IMU denoising method, besides obtaining good results, also addressed the difficulty of obtaining sufficient and accurate annotations for supervised learning [115].
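The sketch below illustrates gyroscope correction with a small dilated convolutional network, loosely in the spirit of the denoising approaches above. It is a hypothetical example: the layer sizes are assumptions, and the mean-squared-error loss against a clean reference is a simplification of the orientation-increment loss used, for example, in [111].

```python
import torch
import torch.nn as nn

class GyroDenoiser(nn.Module):
    """Small dilated 1D convolutional network that predicts an additive
    correction to raw gyroscope samples. Illustrative sketch only."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=3, padding=1, dilation=1), nn.GELU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=2, dilation=2), nn.GELU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=4, dilation=4), nn.GELU(),
            nn.Conv1d(32, channels, kernel_size=1),
        )

    def forward(self, gyro: torch.Tensor) -> torch.Tensor:
        # gyro: (batch, 3, time); output has the same shape.
        return gyro + self.net(gyro)          # residual (corrective) formulation

# Simplified training step: the target here is a clean reference angular rate,
# whereas methods such as [111] train on integrated orientation increments.
model = GyroDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
raw = torch.randn(16, 3, 256)                 # placeholder noisy gyro windows
clean = raw - 0.05                            # placeholder reference (assumed)
loss = nn.functional.mse_loss(model(raw), clean)
loss.backward()
opt.step()
opt.zero_grad()
```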
VI. DISCUSSION

By analyzing the course of the studies depicted above, this section presents an in-depth analysis of current trends in using DL methods for inertial navigation.

Taking a closer look at the most common courses of action, it appears there are four repeating baseline approaches, as illustrated in Fig. 2. The first approach involves inserting the inertial data into a DL architecture and regressing one or more aspects of the full navigation solution, as shown in Fig. 2a. Other papers, however, regressed the desired residual/delta that needs to be added to the current measurements to update the data, rather than regressing the full state of the navigation components. Looking at the residual was demonstrated to result in an approximately normally distributed output, which makes it easier for the network to handle the problem. An illustration is shown in Fig. 2b.
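A small numerical illustration of the two labeling choices follows (all values are assumed): full-state targets grow with the trajectory, while per-window residual targets stay in a narrow, roughly stationary range.

```python
import numpy as np

# Ground-truth positions at consecutive window boundaries (assumed values).
gt_positions = np.array([[0.0, 0.0], [1.2, 0.4], [2.5, 0.9], [3.9, 1.5]])

full_state_targets = gt_positions[1:]             # Fig. 2a: regress the state itself
residual_targets = np.diff(gt_positions, axis=0)  # Fig. 2b: regress the per-window change

print(full_state_targets[-1])   # grows without bound as the trajectory lengthens
print(residual_targets[-1])     # remains bounded and roughly stationary
```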
Rather than relying solely on inertial data, and as discussed before, most navigation solutions integrate inertial data with other sensors to provide a more accurate result. Fig. 2c and Fig. 2d show how DL is incorporated in the sensor fusion operation. The former uses both the inertial data and the aiding measurements as input to the network to give the navigation solution. The latter targets parameters of the nonlinear filter responsible for the sensor fusion, such as the noise covariance matrix estimation.

Fig. 2: Different techniques for improving inertial navigation using DL. (a) Using DL to regress the full state of the vehicle. (b) Using DL to regress the residual between the current state and the previous one of the vehicle. (c) Implementing sensor fusion with a DL block to regress the full state or the required increment. (d) In sensor fusion scenarios, DL methods are applied to obtain one or more filter parameters.

The methods listed above are applicable to all three domains: land, aerial, and maritime. By examining the details of the survey, we determined the most common goals that current research focuses on improving and present them in a pie chart; see Fig. 3. According to the chart, 81% of the papers focus on improving position, velocity, and orientation, while 19% address filter parameter improvement. Most of the latter papers were published within the past three years.

Fig. 3: DL goals in improving the navigation performance.

In addition to the initial analysis, we carried out a study of the general DL architectures that have been employed. We identified four distinct architectural streams: MLP, CNN, RNN, and others. MLP includes all fully connected networks, CNN includes networks such as 1DCNN and TCN, RNN includes LSTM and GRU, and "others" consists of transformers, reinforcement learning, and so on. As shown in Fig. 4, the networks are also divided into single and combined networks, where combined refers to architectures that comprise more than one method, such as CNN-RNN. The bar plot indicates that the main architecture is RNN with its derivatives, which makes sense since this architecture was specifically designed for time-series problems. However, CNN methods also have a significant impact on inertial navigation, and in some cases are more accurate than RNNs. MLP is a more basic architecture and was among the first to be incorporated into inertial navigation.
As a result of improvements in the field of natural language processing, other architectures such as transformers and BERT have started to appear in recent papers, demonstrating a great deal of promise.

Fig. 4: Common DL architectures for improving navigation tasks, divided into single architectures and combined architectures.

VII. CONCLUSIONS

Since inertial sensors can operate on any platform regardless of the environment, inertial navigation has been a major topic of study in the past decade. As data-driven methods became increasingly popular, DL methods were integrated into this field, which had relied on model-based algorithms. This paper presents a survey of DL methods applied to inertial navigation. The paper reviews research conducted in three different domains (land, air, and sea) as well as calibration and denoising methods. It also provides insights into the course of the research based on statistical analysis.

Most of the research was conducted on land vehicles rather than on aerial or maritime vehicles, or on calibration and denoising techniques.
However, some papers mention that, despite the different mechanics, maneuvers, and so on, the techniques can be adapted to different platforms. Most of the reviewed papers dealt with improving one or more aspects of the inertial navigation algorithm for a better solution. However, in recent years, a focus has been placed on improving filter parameters for an enhanced sensor fusion process. Although the leading DL architectures are based on RNNs and CNNs, it appears that recent research is inspired by natural language processing approaches, as it imports and adapts their leading architectures.

To conclude, since 2018 there has been a significant increase in the use of DL methods in inertial navigation applications. It has been found that these approaches outperform the known model-based techniques and show great promise for future research in the area of inertial sensing. The initial research focused on improving the navigation aspects defined in the literature, namely position, velocity, and orientation. In recent years, more attention has been paid to improving filter parameters utilizing more advanced networks.

ACKNOWLEDGMENT

N.C. is supported by the Maurice Hatter Foundation and the University of Haifa presidential scholarship for students on a direct Ph.D. track.

REFERENCES

[1] D. MacKenzie, Inventing accuracy: A historical sociology of nuclear missile guidance. MIT Press, 1993.
[2] D. Titterton, J. L. Weston, and J. Weston, Strapdown inertial navigation technology. IET, 2004, vol. 17.
[3] A. Noureldin, T. B. Karamat, and J. Georgy, Fundamentals of inertial navigation, satellite-based positioning and their integration. Springer Science & Business Media, 2012.
[4] N. El-Sheimy and A. Youssef, "Inertial sensors technologies for navigation applications: State of the art and future trends," Satellite Navigation, vol. 1, no. 1, pp. 1–21, 2020.
[5] K. R. Britting, Inertial navigation systems analysis. Artech House, 2010.
[6] J. Farrell, Aided navigation: GPS with high rate sensors. McGraw-Hill, Inc., 2008.
[7] D. Engelsman and I. Klein, "Information aided navigation: A review," arXiv preprint arXiv:2301.01114, 2023.
[8] P. Groves, Principles of GNSS, inertial, and multisensor integrated navigation systems, second edition. Artech House, 2013.
[9] S. S. A. Zaidi, M. S. Ansari, A. Aslam, N. Kanwal, M. Asghar, and B. Lee, "A survey of modern deep learning based object detection models," Digital Signal Processing, p. 103514, 2022.
[10] D. W. Otter, J. R. Medina, and J. K. Kalita, "A survey of the usages of deep learning for natural language processing," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, 2020.
[11] M. Durgadevi et al., "Generative adversarial network (GAN): A general review on different variants of GAN and applications," in 2021 6th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2021, pp. 1–8.
[12] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He, "A comprehensive survey on transfer learning," Proceedings of the IEEE, vol. 109, no. 1, pp. 43–76, 2020.
[13] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[14] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT Press, 2016.
[15] P. P. Shinde and S. Shah, "A review of machine learning and deep learning applications," in 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA). IEEE, 2018, pp. 1–6.
[16] M. Mahrishi, K. K. Hiran, G. Meena, and P. Sharma, Machine learning and deep learning in real-time applications. IGI Global, 2020.
[17] K.-W. Chiang, A. Noureldin, and N. El-Sheimy, "Multisensor integration using neuron computing for land-vehicle navigation," GPS Solutions, vol. 6, no. 4, pp. 209–218, 2003.
[18] A. Noureldin, A. Osman, and N. El-Sheimy, "A neuro-wavelet method for multi-sensor system integration for vehicular navigation," Measurement Science and Technology, vol. 15, no. 2, p. 404, 2003.
[19] R. Sharaf, A. Noureldin, A. Osman, and N. El-Sheimy, "Online INS/GPS integration with a radial basis function neural network," IEEE Aerospace and Electronic Systems Magazine, vol. 20, no. 3, pp. 8–14, 2005.
[20] N. El-Sheimy, K.-W. Chiang, and A. Noureldin, "The utilization of artificial neural networks for multisensor system integration in navigation and positioning instruments," IEEE Transactions on Instrumentation and Measurement, vol. 55, no. 5, pp. 1606–1615, 2006.
[21] A. Noureldin, A. El-Shafie, and M. Bayoumi, "GPS/INS integration utilizing dynamic neural networks for vehicular navigation," Information Fusion, vol. 12, no. 1, pp. 48–57, 2011.
[22] X. Chen, C. Shen, W.-b. Zhang, M. Tomizuka, Y. Xu, and K. Chiu, "Novel hybrid of strong tracking Kalman filter and wavelet neural network for GPS/INS during GPS outages," Measurement, vol. 46, no. 10, pp. 3847–3854, 2013.
[23] K.-W. Chiang, A. Noureldin, and N. El-Sheimy, "Constructive neural-networks-based MEMS/GPS integration scheme," IEEE Transactions on Aerospace and Electronic Systems, vol. 44, no. 2, pp. 582–594, 2008.
[24] K.-W. Chiang, H.-W. Chang, C.-Y. Li, and Y.-W. Huang, "An artificial neural network embedded position and orientation determination algorithm for low cost MEMS INS/GPS integrated sensors," Sensors, vol. 9, no. 4, pp. 2586–2610, 2009.
[25] K.-W. Chiang and H.-W. Chang, "Intelligent sensor positioning and orientation through constructive neural network-embedded INS/GPS integration algorithms," Sensors, vol. 10, no. 10, pp. 9252–9285, 2010.
[26] J. J. Wang, W. Ding, and J. Wang, "Improving adaptive Kalman filter in GPS/SDINS integration with neural network," in Proceedings of the 20th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2007), 2007, pp. 571–578.
[27] M. Malleswaran, V. Vaidehi, A. Saravanaselvan, and M. Mohankumar, "Performance analysis of various artificial intelligent neural networks for GPS/INS integration," Applied Artificial Intelligence, vol. 27, no. 5, pp. 367–407, 2013.
[28] S. Silvestrini and M. Lavagna, "Deep learning and artificial neural networks for spacecraft dynamics, navigation and control," Drones, vol. 6, no. 10, p. 270, 2022.
[29] J. Song, D. Rondao, and N. Aouf, "Deep learning-based spacecraft relative navigation methods: A survey," Acta Astronautica, vol. 191, pp. 22–40, 2022.
[30] H. Jiang, H. Wang, W.-Y. Yau, and K.-W. Wan, "A brief survey: Deep reinforcement learning in mobile robot navigation," in 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2020, pp. 592–597.
[31] K. Zhu and T. Zhang, "Deep reinforcement learning based mobile robot navigation: A review," Tsinghua Science and Technology, vol. 26, no. 5, pp. 674–691, 2021.
[32] F. AlMahamid and K. Grolinger, "Autonomous unmanned aerial vehicle navigation using reinforcement learning: A systematic review," Engineering Applications of Artificial Intelligence, vol. 115, p. 105321, 2022.
[33] X. Ye and Y. Yang, "From seeing to moving: A survey on learning for visual indoor navigation (VIN)," arXiv preprint arXiv:2002.11310, 2020.
[34] F. Zeng, C. Wang, and S. S. Ge, "A survey on visual navigation for artificial agents with deep reinforcement learning," IEEE Access, vol. 8, pp. 135426–135442, 2020.
[35] D. C. Guastella and G. Muscato, "Learning-based methods of perception and navigation for ground vehicles in unstructured environments: A review," Sensors, vol. 21, no. 1, p. 73, 2020.
[36] F. Zhu, Y. Zhu, V. Lee, X. Liang, and X. Chang, "Deep learning for embodied vision navigation: A survey," arXiv preprint arXiv:2108.04097, 2021.
[37] Y. Tang, C. Zhao, J. Wang, C. Zhang, Q. Sun, W. X. Zheng, W. Du, F. Qian, and J. Kurths, "Perception and navigation in autonomous systems in the era of learning: A survey," IEEE Transactions on Neural Networks and Learning Systems, 2022.
[38] S. Azimi, J. Salokannel, S. Lafond, J. Lilius, M. Salokorpi, and I. Porres, "A survey of machine learning approaches for surface maritime navigation," in Maritime Transport VIII: Proceedings of the 8th International Conference on Maritime Transport: Technology, Innovation and Research: Maritime Transport'20. Barcelona, 2020, pp. 103–117.
[39] Y. Li, R. Chen, X. Niu, Y. Zhuang, Z. Gao, X. Hu, and N. El-Sheimy, "Inertial sensing meets machine learning: Opportunity or challenge?" IEEE Transactions on Intelligent Transportation Systems, 2021.
[40] P. Roy and C. Chowdhury, "A survey of machine learning techniques for indoor localization and navigation systems," Journal of Intelligent & Robotic Systems, vol. 101, no. 3, p. 63, 2021.
[41] A. A. Golroudbari and M. H. Sabour, "Recent advancements in deep learning applications and methods for autonomous navigation–A comprehensive review," arXiv preprint arXiv:2302.11089, 2023.
[42] C. Chen, "Deep learning for inertial positioning: A survey," arXiv preprint arXiv:2303.03757, 2023.
[43] C. Shen, Y. Zhang, J. Tang, H. Cao, and J. Liu, "Dual-optimization for a MEMS-INS/GPS system during GPS outages based on the cubature Kalman filter and neural networks," Mechanical Systems and Signal Processing, vol. 133, p. 106222, 2019.
[44] S. Lu, Y. Gong, H. Luo, F. Zhao, Z. Li, and J. Jiang, "Heterogeneous multi-task learning for multiple pseudo-measurement estimation to bridge GPS outages," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–16, 2020.
[45] R. Karlsson and G. Hendeby, "Speed estimation from vibrations using a deep learning CNN approach," IEEE Sensors Letters, vol. 5, no. 3, pp. 1–4, 2021.
[46] Y. Tong, S. Zhu, Q. Zhong, R. Gao, C. Li, and L. Liu, "Smartphone-based vehicle tracking without GPS: Experience and improvements," in 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2021, pp. 209–216.
[47] Y. Tong, S. Zhu, X. Ren, Q. Zhong, D. Tao, C. Li, L. Liu, and R. Gao, "Vehicle inertial tracking via mobile crowdsensing: Experience and enhancement," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–13, 2022.
[48] B. Zhou, Z. Gu, F. Gu, P. Wu, C. Yang, X. Liu, L. Li, Y. Li, and Q. Li, "DeepVIP: Deep learning-based vehicle indoor positioning using smartphones," IEEE Transactions on Vehicular Technology, vol. 71, no. 12, pp. 13299–13309, 2022.
[49] X. Zhao, C. Deng, X. Kong, J. Xu, and Y. Liu, "Learning to compensate for the drift and error of gyroscope in vehicle localization," in 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020, pp. 852–857.
[50] Z. Fei, S. Jia, and Q. Li, "Research on GNSS/DR method based on B-spline and optimized BP neural network," in 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021, pp. 161–168.
[51] R. Gao, X. Xiao, S. Zhu, W. Xing, C. Li, L. Liu, L. Ma, and H. Chai, "Glow in the dark: Smartphone inertial odometry for vehicle tracking in GPS blocked environments," IEEE Internet of Things Journal, vol. 8, no. 16, pp. 12955–12967, 2021.
[52] M. Freydin and B. Or, "Learning car speed using inertial sensors for dead reckoning navigation," IEEE Sensors Letters, vol. 6, no. 9, pp. 1–4, 2022.
[53] C. Li, S. Wang, Y. Zhuang, and F. Yan, "Deep sensor fusion between 2D laser scanner and IMU for mobile robot localization," IEEE Sensors Journal, vol. 21, no. 6, pp. 8501–8509, 2019.
[54] S. Srinivasan, I. Sa, A. Zyner, V. Reijgwart, M. I. Valls, and R. Siegwart, "End-to-end velocity estimation for autonomous racing," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6869–6875, 2020.
[55] R. C. Mendoza, B. Cao, D. Goehring, and R. Rojas, "GALNet: An end-to-end deep neural network for ground localization of autonomous cars," in ROBOVIS, 2020, pp. 39–50.
[56] B. Zhou, P. Wu, Z. Gu, Z. Wu, and C. Yang, "XDRNet: Deep learning-based pedestrian and vehicle dead reckoning using smartphones," in 2022 IEEE 12th International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2022, pp. 1–8.
[57] N. Liu, Z. Hui, Z. Su, L. Qiao, and Y. Dong, "Integrated navigation on vehicle based on low-cost SINS/GNSS using deep learning," Wireless Personal Communications, vol. 126, no. 3, pp. 2043–2064, 2022.
[58] S. Hosseinyalamdary, "Deep Kalman filter: Simultaneous multi-sensor integration and modelling; a GNSS/IMU case study," Sensors, vol. 18, no. 5, p. 1316, 2018.
[59] C. Shen, Y. Zhang, X. Guo, X. Chen, H. Cao, J. Tang, J. Li, and J. Liu, "Seamless GPS/inertial navigation system based on self-learning square-root cubature Kalman filter," IEEE Transactions on Industrial Electronics, vol. 68, no. 1, pp. 499–508, 2020.
[60] M. Brossard, A. Barrau, and S. Bonnabel, "RINS-W: Robust inertial navigation system on wheels," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 2068–2075.
[61] M. Brossard, A. Barrau, and S. Bonnabel, "AI-IMU dead-reckoning," IEEE Transactions on Intelligent Vehicles, vol. 5, no. 4, pp. 585–595, 2020.
[62] M. Brossard and S. Bonnabel, "Learning wheel odometry and IMU errors for localization," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 291–297.
[63] X. Gao, H. Luo, B. Ning, F. Zhao, L. Bao, Y. Gong, Y. Xiao, and J. Jiang, "RL-AKF: An adaptive Kalman filter navigation algorithm based on reinforcement learning for ground vehicles," Remote Sensing, vol. 12, no. 11, p. 1704, 2020.
[64] F. Wu, H. Luo, H. Jia, F. Zhao, Y. Xiao, and X. Gao, "Predicting the noise covariance with a multitask learning model for Kalman filter-based GNSS/INS integrated navigation," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–13, 2020.
[65] Y. Xiao, H. Luo, F. Zhao, F. Wu, X. Gao, Q. Wang, and L. Cui, "Residual attention network-based confidence estimation algorithm for non-holonomic constraint in GNSS/INS integrated navigation system," IEEE Transactions on Vehicular Technology, vol. 70, no. 11, pp. 11404–11418, 2021.
[66] R. Clark, S. Wang, H. Wen, A. Markham, and N. Trigoni, "VINet: Visual-inertial odometry as a sequence-to-sequence learning problem," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
[67] F. Baldini, A. Anandkumar, and R. M. Murray, "Learning pose estimation for UAV autonomous navigation and landing using visual-inertial sensor data," in 2020 American Control Conference (ACC). IEEE, 2020, pp. 2961–2966.
[68] M. F. Aslan, A. Durdu, A. Yusefi, and A. Yilmaz, "HVIOnet: A deep learning based hybrid visual-inertial odometry approach for unmanned aerial system position estimation," Neural Networks, vol. 155, pp. 461–474, 2022.
[69] A. Yusefi, A. Durdu, M. F. Aslan, and C. Sungur, "LSTM and filter based comparison analysis for indoor global localization in UAVs," IEEE Access, vol. 9, pp. 10054–10069, 2021.
[70] Y. Liu, Y. Zhou, and X. Li, "Attitude estimation of unmanned aerial vehicle based on LSTM neural network," in 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018, pp. 1–6.
[71] J. P. Silva do Monte Lima, H. Uchiyama, and R.-i. Taniguchi, "End-to-end learning framework for IMU-based 6-DOF odometry," Sensors, vol. 19, no. 17, p. 3777, 2019.
[72] M. A. Esfahani, H. Wang, K. Wu, and S. Yuan, "AbolDeepIO: A novel deep inertial odometry network for autonomous vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 5, pp. 1941–1950, 2019.
[73] M. A. Esfahani, H. Wang, K. Wu, and S. Yuan, "OriNet: Robust 3-D orientation estimation with a single particular IMU," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 399–406, 2019.
[74] D. Weber, C. Gühmann, and T. Seel, "RIANN—A robust neural network outperforms attitude estimation filters," AI, vol. 2, no. 3, pp. 444–463, 2021.
[75] N. Chumuang, A. Farooq, M. Irfan, S. Aziz, and M. Qureshi, "Feature matching and deep learning models for attitude estimation on a micro-aerial vehicle," in 2022 International Conference on Cybernetics and Innovations (ICCI). IEEE, 2022, pp. 1–6.
[76] S. O. Madgwick, A. J. Harrison, and R. Vaidyanathan, "Estimation of IMU and MARG orientation using a gradient descent algorithm," in 2011 IEEE International Conference on Rehabilitation Robotics. IEEE, 2011, pp. 1–7.
[77] A. A. Golroudbari and M. H. Sabour, "End-to-end deep learning framework for real-time inertial attitude estimation using 6DOF IMU," arXiv preprint arXiv:2302.06037, 2023.
[78] P. Narkhede, A. Mishra, K. Hamshita, A. K. Shubham, and A. Chauhan, "Inertial sensors and GPS fusion using LSTM for position estimation of aerial vehicle," in 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT). IEEE, 2022, pp. 671–675.
[79] Y. Liu, Q. Luo, W. Liang, and Y. Zhou, "GPS/INS integrated navigation with LSTM neural network," in 2021 4th International Conference on Intelligent Autonomous Systems (ICoIAS). IEEE, 2021, pp. 345–350.
[80] Y. Liu, Y. Zhou, and Y. Zhang, "A novel hybrid attitude fusion method based on LSTM neural network for unmanned aerial vehicle," in 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2021, pp. 1630–1635.
[81] Y. Liu, Q. Luo, and Y. Zhou, "Deep learning-enabled fusion to bridge GPS outages for INS/GPS integrated navigation," IEEE Sensors Journal, vol. 22, no. 9, pp. 8974–8985, 2022.
[82] P. Geragersian, I. Petrunin, W. Guo, and R. Grech, "An INS/GNSS fusion architecture in GNSS denied environment using gated recurrent unit," in AIAA SCITECH 2022 Forum, 2022, p. 1759.
[83] A. Shurin and I. Klein, "QuadNet: A hybrid framework for quadrotor dead reckoning," Sensors, vol. 22, no. 4, p. 1426, 2022.
[84] M. K. Al-Sharman, Y. Zweiri, M. A. K. Jaradat, R. Al-Husari, D. Gan, and L. D. Seneviratne, "Deep-learning-based neural network training for state estimation enhancement: Application to attitude estimation," IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 1, pp. 24–34, 2019.
[85] Z. Zou, T. Huang, L. Ye, and K. Song, "CNN based adaptive Kalman filter in high-dynamic condition for low-cost navigation system on highspeed UAV," in 2020 5th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS). IEEE, 2020, pp. 103–108.
[86] B. Or and I. Klein, "A hybrid model and learning-based adaptive navigation filter," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–11, 2022.
[87] X. Zhang, X. Mu, H. Liu, B. He, and T. Yan, "Application of modified EKF based on intelligent data fusion in AUV navigation," in 2019 IEEE Underwater Technology (UT). IEEE, 2019, pp. 1–4.
[88] X. Mu, B. He, X. Zhang, Y. Song, Y. Shen, and C. Feng, "End-to-end navigation for autonomous underwater vehicle with hybrid recurrent neural networks," Ocean Engineering, vol. 194, p. 106602, 2019.
[89] X. Zhang, B. He, G. Li, X. Mu, Y. Zhou, and T. Mang, "NavNet: AUV navigation through deep sequential learning," IEEE Access, vol. 8, pp. 59845–59861, 2020.
[90] X. Zhang, B. He, S. Gao, L. Zhou, and R. Huang, "Sequential learning navigation method and general correction model for Autonomous Underwater Vehicle," Ocean Engineering, vol. 278, p. 114347, 2023.
[91] H. Ma, X. Mu, and B. He, "Adaptive navigation algorithm with deep learning for autonomous underwater vehicle," Sensors, vol. 21, no. 19, p. 6406, 2021.
[92] G. He, Y. Chaobang, D. Guohua, and S. Xiaoshuai, "The TCN-LSTM deep learning model for real-time prediction of ship motions," Available at SSRN 4405121.
[93] S. Song, J. Liu, J. Guo, J. Wang, Y. Xie, and J.-H. Cui, "Neural-network-based AUV navigation for fast-changing environments," IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9773–9783, 2020.
[94] I. B. Saksvik, A. Alcocer, and V. Hassani, "A deep learning approach to dead-reckoning navigation for autonomous underwater vehicles with limited sensor payloads," in OCEANS 2021: San Diego–Porto. IEEE, 2021, pp. 1–9.
[95] D. Li, J. Xu, H. He, and M. Wu, "An underwater integrated navigation algorithm to deal with DVL malfunctions based on deep learning," IEEE Access, vol. 9, pp. 82010–82020, 2021.
[96] N. Cohen and I. Klein, "BeamsNet: A data-driven approach enhancing Doppler velocity log measurements for autonomous underwater vehicle navigation," Engineering Applications of Artificial Intelligence, vol. 114, p. 105216, 2022.
[97] N. Cohen and I. Klein, "LiBeamsNet: AUV velocity vector estimation in situations of limited DVL beam measurements," in OCEANS 2022, Hampton Roads. IEEE, 2022, pp. 1–5.
[98] N. Cohen, Z. Yampolsky, and I. Klein, "Set-transformer BeamsNet for AUV velocity forecasting in complete DVL outage scenarios," in 2023 IEEE Underwater Technology (UT), 2023, pp. 1–6.
[99] E. Topini, F. Fanelli, A. Topini, M. Pebody, A. Ridolfi, A. B. Phillips, and B. Allotta, "An experimental comparison of deep learning strategies for AUV navigation in DVL-denied environments," Ocean Engineering, vol. 274, p. 114034, 2023.
[100] N. Shaukat, A. Ali, M. Javed Iqbal, M. Moinuddin, and P. Otero, "Multi-sensor fusion for underwater vehicle localization by augmentation of RBF neural network and error-state Kalman filter," Sensors, vol. 21, no. 4, p. 1149, 2021.
[101] B. Or and I. Klein, "Adaptive step size learning with applications to velocity aided inertial navigation system," IEEE Access, vol. 10, pp. 85818–85830, 2022.
[102] B. Or and I. Klein, "ProNet: Adaptive process noise estimation for INS/DVL fusion," in 2023 IEEE Underwater Technology (UT), 2023, pp. 1–5.
[103] H. Chen, P. Aggarwal, T. M. Taha, and V. P. Chodavarapu, "Improving inertial sensor by reducing errors using deep learning methodology," in NAECON 2018 - IEEE National Aerospace and Electronics Conference. IEEE, 2018, pp. 197–202.
[104] D. Engelsman, "Data-driven denoising of accelerometer signals," Ph.D. dissertation, University of Haifa (Israel), 2022.
[105] D. Engelsman and I. Klein, "A learning-based approach for bias elimination in low-cost gyroscopes," in 2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE). IEEE, 2022, pp. 1–5.
[106] C. Jiang, S. Chen, Y. Chen, Y. Bo, L. Han, J. Guo, Z. Feng, and H. Zhou, "Performance analysis of a deep simple recurrent unit recurrent neural network (SRU-RNN) in MEMS gyroscope de-noising," Sensors, vol. 18, no. 12, p. 4471, 2018.
[107] C. Jiang, S. Chen, Y. Chen, B. Zhang, Z. Feng, H. Zhou, and Y. Bo, "A MEMS IMU de-noising method using long short term memory recurrent neural networks (LSTM-RNN)," Sensors, vol. 18, no. 10, p. 3470, 2018.
[108] Z. Zhu, Y. Bo, and C. Jiang, "A MEMS gyroscope noise suppressing method using neural architecture search neural network," Mathematical Problems in Engineering, vol. 2019, pp. 1–9, 2019.
[109] C. Jiang, Y. Chen, S. Chen, Y. Bo, W. Li, W. Tian, and J. Guo, "A mixed deep recurrent neural network for MEMS gyroscope noise suppressing," Electronics, vol. 8, no. 2, p. 181, 2019.
[110] S. Han, Z. Meng, X. Zhang, and Y. Yan, "Hybrid deep recurrent neural networks for noise reduction of MEMS-IMU with static and dynamic conditions," Micromachines, vol. 12, no. 2, p. 214, 2021.
[111] M. Brossard, S. Bonnabel, and A. Barrau, "Denoising IMU gyroscopes with deep learning for open-loop attitude estimation," IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4796–4803, 2020.
[112] P. Russo, F. Di Ciaccio, and S. Troisi, "DANAE: A denoising autoencoder for underwater attitude estimation," arXiv preprint arXiv:2011.06853, 2020.
[113] F. Huang, Z. Wang, L. Xing, and C. Gao, "A MEMS IMU gyroscope calibration method based on deep learning," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–9, 2022.
[114] Y. Liu, W. Liang, and J. Cui, "LGC-Net: A lightweight gyroscope calibration network for efficient attitude estimation," arXiv preprint arXiv:2209.08816, 2022.
[115] K. Yuan and Z. J. Wang, "A simple self-supervised IMU denoising method for inertial aided navigation," IEEE Robotics and Automation Letters, 2023.

Nadav Cohen received his B.Sc. degree in Electrical and Computer Engineering from Ben-Gurion University of the Negev, Be'er Sheva, Israel, in 2021. He is currently pursuing a Ph.D. (direct track) degree at the Hatter Department of Marine Technologies, Charney School of Marine Sciences, University of Haifa, Israel. His research interests are inertial sensing, data-driven navigation, sensor fusion, signal processing, and estimation theory.

Itzik Klein received B.Sc. and M.Sc. degrees in Aerospace Engineering from the Technion – Israel Institute of Technology, Haifa, Israel, in 2004 and 2007, respectively, and a Ph.D. degree in Geo-information Engineering from the Technion – Israel Institute of Technology, in 2011. He is currently an Assistant Professor, heading the Autonomous Navigation and Sensor Fusion Lab, at the Hatter Department of Marine Technologies, Leon H. Charney School of Marine Sciences, University of Haifa. He is an IEEE Senior Member and a member of the IEEE Journal of Indoor and Seamless Positioning and Navigation (J-ISPIN) Editorial Board. His research interests lie in the intersection of artificial intelligence, inertial sensing, and sensor fusion.
