Article history: Received 9 April 2022; Received in revised form 28 July 2022; Accepted 6 September 2022; Available online 13 September 2022. Communicated by Hever Moncayo.

Keywords: Autonomous landing; Image-based visual servoing; Sensor fusion; Unmanned aerial vehicle

Abstract

Autonomous takeoff and landing are crucial for unmanned aerial vehicles (UAVs) to perform various missions automatically. However, navigation sensors with low accuracy, such as the global positioning system (GPS), are of limited use for autonomous takeoff and landing in marine applications, where ships move fast and oscillate due to sea waves. In this study, an image-based visual servoing (IBVS) technique, originally used to land a UAV on a static target, is extended to a moving target. In particular, the ship velocity is estimated and used as a feed-forward term in the IBVS controller. To accurately estimate the velocity of the ship, velocity data from the GPS on the ship, image information obtained from the camera on the UAV, and a dynamic model of the ship are combined in a Kalman filter framework. In addition, considering the under-actuated nature of the UAV and the oscillation of the ship, a virtual plane concept, an adaptive IBVS gain, and feature shape compensation are introduced. Lastly, to apply the IBVS controller to landing on a moving ship deck, a robust and safe autonomous landing procedure, from the approach phase to the touchdown phase, is also developed. The proposed autonomous landing system is validated via simulations and various real-world flight experiments simulating situations in which the ship moves fast and oscillates due to sea waves. In the flight experiments, the UAV lands successfully on the landing pad with an average touchdown error of 0.2 m while the ship oscillates at Sea State 4 and moves faster than 5 m/s.

© 2022 Elsevier Masson SAS. All rights reserved.
1. Introduction

Unmanned aerial vehicles (UAVs) have been widely used for surveillance, reconnaissance missions, search and rescue operations, and wind turbine and bridge inspections in both military and industrial fields [1–5]. For such missions, the takeoff and landing of the UAV should preferably be performed autonomously. In particular, if the UAV is operated in marine environments far from land, it should be able to land on the possibly small and narrow deck of a moving ship oscillated by sea waves [6]. However, owing to significant position errors, conventional sensing systems such as the global positioning system (GPS) are insufficient for autonomous landing in this condition. Although real-time kinematic GPS (RTK-GPS) can be employed for accurate position estimation [7], installing RTK-GPS on a moving ship is expensive and challenging. To address the inaccuracy of the GPS, vision sensors can be added to the sensing system; hence, in this research, visual servoing is exploited for the autonomous landing of a UAV on a moving ship deck.

There are many studies on the autonomous landing of a UAV using visual servoing. However, few of them focus on targets that oscillate while moving at high speed. Regarding the autonomous landing system, there are also insufficient systematic studies on the entire landing procedure, from the approach phase to the touchdown phase. Therefore, this study proposes an autonomous landing system for a UAV based on feed-forward image-based visual servoing (FF-IBVS), to achieve landing on a small ship deck that moves at high speed and oscillates due to sea waves. To make the landing system robust and stable, an adaptive IBVS gain and compensation of the landing feature shape are applied. In addition, to improve the landing performance, reliable velocity estimation of the ship is newly introduced, and the entire landing procedure is made fully autonomous.

The main contributions of this study are as follows. First, we enhance the autonomous landing performance, building upon the FF-IBVS method [8], via several techniques: an adaptive IBVS gain, feature shape compensation for IBVS, and improved estimation using the Kalman filter and sensor fusion.
The adaptive IBVS gain is introduced to maintain the features in the field of view (FOV) of the camera by slowing down the altitude rate. The features are compensated into a square shape to eliminate unnecessary IBVS commands caused by the distortion triggered by the changing attitude of the ship. Second, a system for robust and safe autonomous landing is designed. To detect the features and land on the target (e.g., a ship) from a long distance, the size and placement of the augmented reality (AR) tags (i.e., the markers for landing) are carefully determined. The landing procedure, from the approach phase to the touchdown phase, is designed as a state machine. It ensures that if the marker is missed, the UAV holds its position near the target or changes its altitude to find the marker again. Lastly, comprehensive realistic simulations and flight experiments in harsh conditions are conducted to validate the proposed algorithm. In these simulations and experiments, the environment is set to a harsh condition in which the ship moves at a speed of 5 m/s while oscillating at Sea State 4. Here, Sea State 4 refers to sea waves with heights between 1.25 and 2.5 m [9]. To the best of our knowledge, 5 m/s is the fastest target speed reported for a vision-based autonomous landing experiment of a quadrotor UAV. Under such severe circumstances, it is very difficult for UAVs to land on a moving ship deck. The objectives of this paper are to apply several techniques for vision-based autonomous landing, such as feature compensation, ship state estimation, and a robust landing procedure based on the state machine, and to verify the performance of the proposed approach via simulations and real-world flight experiments.

The rest of this paper is organized as follows. In Section 2, related work is reviewed, and in Section 3, IBVS is briefly introduced and additional processes for under-actuated systems such as quadrotor UAVs are explained. Subsequently, feed-forward IBVS, which compensates for the velocity of a moving target to achieve precision landing, is presented in Section 4. In Section 5, the entire autonomous landing system, including the marker setup for IBVS and the landing procedure, is proposed. The performance of the proposed controller and landing system is verified via simulations and experiments in Sections 6 and 7, respectively. The conclusions and future work are given in Section 8.

2. Related work

Visual servoing is a control method for guiding unmanned vehicles or robots (especially robot manipulators [10–12]) to a target position using a vision sensor. It can be categorized into position-based visual servoing (PBVS) and image-based visual servoing (IBVS) [13]. In PBVS, the states are defined as the pose of the target, and the target pose is estimated with respect to the camera frame. The error is expressed as the relative pose between the target and the camera, and PBVS outputs a command to reduce this error. PBVS enables the camera to move to the target along an optimal trajectory. However, poor state estimation can destabilize the pose of the camera, which triggers issues such as perturbations in the trajectory and inaccuracies after convergence [14].

Several studies have been conducted on the autonomous landing of UAVs using PBVS. Jung et al. estimated the horizontal distance error between the landing target and the UAV using the center of the measured feature position and marker length information [15]; a proportional–integral–derivative controller was employed for the autonomous landing on the target. To handle the limited FOV of the camera, Chen et al. used a pan-tilt camera, estimated the landing target, and then conducted autonomous landing [16]. Yang et al. exploited PBVS to achieve the takeoff and landing of a UAV, with a square-root unscented Kalman filter estimating the pose of the UAV [17]. Zhao et al. proposed a PBVS controller that is robust to time delays in the translational and rotational dynamics [18]. Acevedo et al. conducted a study to achieve landing on a moving platform using PBVS [19], where the platform moved with a maximum speed of 10 km/h. Falanga et al. developed a fully autonomous quadrotor system that can land on a moving target using PBVS [20]. In the study conducted by Santana et al., autonomous landing using PBVS on a platform oscillating with a heave motion was presented [21], and simulations were carried out with the virtual robot experimentation platform (V-REP) [22] to verify the algorithm.

In contrast, in IBVS, the states are defined as the positions of the features in the image plane, and the error is defined as the pixel position error between the desired and current feature positions. From the pixel error in the image plane, the desired velocity command to move the camera to a target pose is calculated with the image Jacobian, which describes the relationship between the velocity of the camera in 3-D space and the feature velocity in the 2-D image plane. IBVS is known to be more robust against pixel measurement errors than PBVS [14].

Several studies have also been actively conducted on autonomous landing using IBVS. Tang et al. applied spherical image centroid-based IBVS for flying a UAV through a window and landing on a target [23]. Hamel et al. applied IBVS to an under-actuated system for the first time using a robust backstepping technique [24]; they considered the full dynamics of the camera motion fixed to the rigid body. As an extension of this work, Guenard et al. carried out a hovering experiment with a quadrotor UAV [25]. Lee et al. applied a virtual image plane to IBVS to compensate for the effect of the attitude of the UAV; furthermore, they designed an adaptive sliding mode controller [26,27] to keep the image within the FOV of the camera. In particular, in [27], a patrol mode for finding the target was added. Serra et al. proposed a control law for landing on a platform with a heave motion, and then conducted simulations and indoor experiments [28]. In the research conducted by Truong et al. [29], a controller combining IBVS and translational rate commands was introduced for the ship landing of a helicopter, and simulations were carried out. In the study by Rakotomamonjy et al. [30], to land on a moving ship deck, the velocity of the ship was estimated using the response amplitude operator and an autoregressive moving average model; the motion of the ship was compensated in the IBVS controller, and the performance was verified via simulations. Borshchova et al. conducted autonomous landing simulations and experiments on a ship deck [31,32]. They adopted a color detection method for the IBVS features to reduce the computational load; simulations for a moving target were also conducted with V-REP. Wynn et al. proposed feed-forward IBVS (FF-IBVS) to compensate for the velocity of a moving ship [8]. The velocity of the ship was estimated using an extended Kalman filter (EKF) that fuses visual and GPS measurements, and the estimated velocity was used as a feed-forward term combined with the IBVS controller. They also proposed the entire process for autonomous landing on a moving target starting from the approach phase. In their experiment, the velocity of the target was set to approximately 1 m/s with a heave motion, and precision landing performance was verified in flight experiments.

As discussed above, there are existing studies on landing a UAV on a moving ship. However, few of them deal with a harsh outdoor environment in which the landing pad is oscillated by sea waves while moving forward at high speed. For the autonomous landing system, the entire landing procedure, from the approach phase to touchdown, has also not been studied sufficiently. Therefore, in this paper, first, to land the UAV on a ship that is moving fast and oscillating, several techniques such as the adaptive IBVS gain, feature shape compensation, and sensor fusion are exploited. Second, for robust and safe autonomous landing, the entire landing procedure is designed as a state machine. Lastly, to verify the performance of the autonomous landing system, comprehensive simulations and real-world flight experiments are conducted.
Fig. 2. Projections of the target positions on the image plane and virtual image plane: (a) on the image plane; and (b) on the virtual image plane, respectively.

the UAV will be moved to the right by the IBVS controller to make the center of the image plane coincide with the target position, which is opposite to the actual target position. To address this issue, the features are transformed into a virtual coordinate frame [26]. The virtual coordinate frame is defined such that its origin coincides with that of the camera coordinate frame and its z-axis is parallel to the z-axis of the inertial frame. The virtual image plane is then defined by rotating the image plane from the camera frame to the virtual coordinate frame. This implies that the virtual image plane is always parallel to the ground; if the landing pad is also parallel to the ground, the roll and pitch rates are always zero, such that the under-actuated system can be decoupled.

The virtual image plane transformation can be conducted using the roll and pitch angles of the UAV. The 3-D position of the i-th marker in the camera frame, P_i = [x_i y_i z_i]^T, is expressed in the virtual coordinate frame as P_ri = [x_ri y_ri z_ri]^T, and the corresponding 2-D positions in the image plane and virtual image plane are s_i and s_ri, respectively. The relationship between P_i and P_ri can be expressed as:

$$ P_{ri} = R(1, \phi)\, R(2, \theta)\, P_i, \qquad (8) $$

where

$$ R(1, \phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix}, \qquad R(2, \theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, $$

and φ and θ are the roll and pitch angles of the UAV, respectively. Using Eq. (8) and the pinhole camera model (i.e., Eqs. (2)–(3)), the i-th marker position in the virtual image plane can be computed as:

$$ s_{ri} = \frac{f}{z_{ri}} \begin{bmatrix} x_{ri} \\ y_{ri} \end{bmatrix} = f \begin{bmatrix} \dfrac{x_i \cos\theta + f \sin\theta}{-x_i \cos\phi \sin\theta + y_i \sin\phi + f \cos\phi \cos\theta} \\[2mm] \dfrac{x_i \sin\phi \sin\theta + y_i \cos\phi - f \sin\phi \cos\theta}{-x_i \cos\phi \sin\theta + y_i \sin\phi + f \cos\phi \cos\theta} \end{bmatrix}. $$
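To make the transformation concrete, the following Python sketch re-implements Eq. (8) followed by the pinhole re-projection above. It is illustrative only: the function name and interface are ours, and the feature coordinates are assumed to be expressed in the same units as the focal length f.

```python
import numpy as np

def to_virtual_image_plane(s_i, f, phi, theta):
    """Map an image-plane feature s_i = (x_i, y_i) onto the virtual
    image plane using the UAV roll (phi) and pitch (theta) angles.
    Sketch of Eq. (8) plus the pinhole re-projection."""
    x, y = s_i
    # R(1, phi): rotation about the x-axis (roll).
    R_phi = np.array([[1, 0, 0],
                      [0, np.cos(phi), -np.sin(phi)],
                      [0, np.sin(phi),  np.cos(phi)]])
    # R(2, theta): rotation about the y-axis (pitch).
    R_theta = np.array([[ np.cos(theta), 0, np.sin(theta)],
                        [0, 1, 0],
                        [-np.sin(theta), 0, np.cos(theta)]])
    # Rotate the viewing ray (x_i, y_i, f) into the virtual frame.
    x_r, y_r, z_r = R_phi @ R_theta @ np.array([x, y, f])
    # Pinhole projection onto the virtual image plane: s_ri = (f / z_r) [x_r, y_r].
    return f * np.array([x_r, y_r]) / z_r
```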
3.3. Adaptive-gain IBVS

Fig. 3. Features are out of FOV at a low altitude.

One existing approach to this problem reduces the velocity in the x- and y-directions to ensure that the features stay in the FOV. However, this method can degrade target tracking performance because it reduces the velocity of the UAV, and it also does not account for the smaller scan area as the UAV decreases its altitude. Thus, in this research, we propose an adaptive IBVS gain with which the altitude rate is adjusted by the feature error in the image plane, so as to maintain a small control input when the UAV has a large distance error to the center of the landing target. The adaptive gain ad_z is designed with a sigmoid function as:

$$ ad_z = 1 - \frac{1}{1 + e^{-kc}}, \qquad (9) $$

where c represents the distance between the center of the features and the center of the image plane, and k denotes the gain of the sigmoid function. The IBVS control command with the adaptive gain is then given as:

$$ v_d = -\lambda \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & ad_z & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \hat{L}_e^{+} e. $$
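The adaptive gain of Eq. (9) and its insertion into the 6-DOF command can be sketched in a few lines of Python; the names are illustrative, and v_d stands for the unscaled IBVS velocity command −λ L̂_e⁺ e.

```python
import numpy as np

def adaptive_altitude_gain(c, k):
    """Eq. (9): ad_z = 1 - 1 / (1 + exp(-k c)).
    c is the pixel distance between the feature centroid and the image
    center; k is the sigmoid gain.  The value decays from 0.5 (features
    centered) toward 0 as the horizontal feature error grows, slowing
    the descent until the UAV is centered above the pad."""
    return 1.0 - 1.0 / (1.0 + np.exp(-k * c))

def apply_adaptive_gain(v_d, c, k):
    """Scale only the z-axis entry of the 6-DOF IBVS command, as done
    by the diagonal weighting matrix in the text (sketch)."""
    w = np.ones(6)
    w[2] = adaptive_altitude_gain(c, k)  # third component is v_z
    return w * v_d
```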
Fig. 4. Square fitting of the features: (a) before square fitting; and (b) after square fitting.

When the landing pad is inclined, one side of the projected square is longer than the others, and an unnecessary desired velocity command toward the left will be generated, as illustrated in Fig. 4(a). To eliminate the effect of the inclination of the landing pad, the four features are fitted to a square via the least-squares method. The plane comprising the four features is thereby made parallel to the image plane, and the IBVS controller will not generate unnecessary commands regardless of the attitude of the landing pad.
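A minimal least-squares square-fitting sketch in Python is shown below, assuming the four AR-tag corners are given in a consistent (counter-clockwise) order; this is our illustration of the idea, not the paper's implementation.

```python
import numpy as np

def fit_square(corners):
    """Fit four feature points (4x2 array, counter-clockwise order) to
    the closest square in a least-squares sense, and return the four
    corners of that square."""
    c = corners.mean(axis=0)                   # least-squares center
    d = corners - c
    # Corner k of a square lies at angle a + k*pi/2 around the center;
    # removing the k*pi/2 offsets turns each corner into a vote for a,
    # which we average with a circular mean.
    votes = np.arctan2(d[:, 1], d[:, 0]) - np.arange(4) * np.pi / 2
    a = np.arctan2(np.sin(votes).mean(), np.cos(votes).mean())
    r = np.linalg.norm(d, axis=1).mean()       # mean circumradius
    angles = a + np.arange(4) * np.pi / 2
    return c + r * np.stack([np.cos(angles), np.sin(angles)], axis=1)
```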
4. Feed-forward IBVS

The conventional IBVS assumes that the target is stationary [13,14]. To apply IBVS to a moving target, the target velocity should be explicitly considered. In this study, the horizontal linear velocity of the moving ship is used. The desired velocity command from IBVS, combined with the target velocity compensation as a feed-forward term, forms the new feed-forward IBVS (FF-IBVS) command:

$$ v_{d,ff} = v_{d,4DOF} + \hat{v}_{target}, \qquad (10) $$

where $\hat{v}_{target} = [\hat{v}_{target,x}\ \hat{v}_{target,y}\ 0\ 0]^T$ is the estimated horizontal velocity of the ship.

We employ information from the GPS and the camera for the target velocity estimation. A GPS receiver is attached to the ship deck and measures the position and velocity of the ship with respect to the world frame. These measurements are then transformed into relative values with respect to the UAV using the information from the GPS onboard the UAV. Meanwhile, the camera provides the relative pose of the landing pad on the ship with respect to the UAV using the features captured in the image plane. From these relative sensing data, the state of the landing pad relative to the UAV is estimated by a Kalman filter (KF). Moreover, to combine the information from the two different sensors, we apply the track-to-track fusion algorithm [33], as presented in Fig. 5. The algorithm comprises two distinct KFs updating the ship state via the GPS and the camera, and their outputs are fused accordingly. Note that the position measurements from the camera are relatively accurate, and their update rate is faster than that of the GPS. However, camera measurements may not be available when the features are out of the camera FOV. Thus, by fusing these two sensors with Kalman filters, we can make the estimation more robust against intermittent measurement losses of the camera or GPS while also improving the estimation accuracy.
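As an illustration of the fusion step, a simplified information-form combination of the two track estimates is sketched below. Note that the full track-to-track method of [33] additionally accounts for the cross-covariance between the two tracks caused by the common process noise, which this sketch omits for brevity.

```python
import numpy as np

def fuse_tracks(x_gps, P_gps, x_cam, P_cam):
    """Fuse the GPS-track and camera-track estimates of the ship state.
    Simplified sketch: weights each track by its inverse covariance and
    ignores the cross-covariance term of [33]."""
    W_gps = np.linalg.inv(P_gps)
    W_cam = np.linalg.inv(P_cam)
    P_fused = np.linalg.inv(W_gps + W_cam)
    x_fused = P_fused @ (W_gps @ x_gps + W_cam @ x_cam)
    return x_fused, P_fused
```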
Let us define the state of the ship (or, equivalently, the landing pad) as:

$$ X = \begin{bmatrix} x_{ship} & y_{ship} & \dot{x}_{ship} & \dot{y}_{ship} & \cdots & x_{ship}^{(5)} & y_{ship}^{(5)} \end{bmatrix}^T, $$

where x_ship and y_ship represent the x- and y-axis positions of the ship in the UAV vehicle-1 frame, respectively. For details of the UAV vehicle-1 frame, which is similar to the body-fixed frame but with no pitch and roll considered, refer to [34]. Because the ship may oscillate due to waves in the marine environment, the designed filter should be able to account for this effect. Various approaches have been presented to estimate the pose of a ship in such conditions [35]. Nonetheless, many existing studies require an accurate dynamic model of the ship, which is usually hard to obtain. In this study, we assume that the motion of the ship can be represented by a simple constant-crackle (fifth time derivative of the position) model inspired by [36]. Assuming the ship moves with a constant velocity while slightly oscillating due to the waves, this model is sufficient to represent the minor rotation of the ship.

The transition matrix of the constant-crackle model, F, can be defined as:

$$ F = \begin{bmatrix}
I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 & \frac{T^4}{24} I_2 & \frac{T^5}{120} I_2 \\
0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 & \frac{T^4}{24} I_2 \\
0_2 & 0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 \\
0_2 & 0_2 & 0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 \\
0_2 & 0_2 & 0_2 & 0_2 & I_2 & T I_2 \\
0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2
\end{bmatrix}, $$

where T is the sampling time, I_2 is the 2 × 2 identity matrix, and 0_2 is the 2 × 2 zero matrix. The estimated state of the ship at time step k − 1 for the filter with GPS is represented as $\hat{X}^{GPS}_{k-1|k-1}$. Hence, the prediction step of the filter can be expressed as:

$$ \hat{X}^{GPS}_{k|k-1} = F \hat{X}^{GPS}_{k-1|k-1}, \qquad \hat{P}^{GPS}_{k|k-1} = F \hat{P}^{GPS}_{k-1|k-1} F^T + Q_{k-1}, $$

where $\hat{X}^{GPS}_{k|k-1}$ is the predicted state, $\hat{P}^{GPS}_{k-1|k-1}$ is the error covariance matrix at time step k − 1, $\hat{P}^{GPS}_{k|k-1}$ is the predicted error covariance matrix, and $Q_{k-1}$ is the system noise at time step k − 1. The correction step can be written as:

$$ \begin{aligned}
S^{GPS}_k &= H_{GPS} \hat{P}^{GPS}_{k|k-1} H_{GPS}^T + R^{GPS}_k, \\
\nu^{GPS}_k &= y^{GPS}_k - H_{GPS} \hat{X}^{GPS}_{k|k-1}, \\
K^{GPS}_k &= \hat{P}^{GPS}_{k|k-1} H_{GPS}^T \left( S^{GPS}_k \right)^{-1}, \\
\hat{X}^{GPS}_{k|k} &= \hat{X}^{GPS}_{k|k-1} + K^{GPS}_k \nu^{GPS}_k, \\
\hat{P}^{GPS}_{k|k} &= \left( I_{12} - K^{GPS}_k H_{GPS} \right) \hat{P}^{GPS}_{k|k-1},
\end{aligned} $$
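The constant-crackle transition matrix and one predict/correct cycle translate directly into Python. The sketch below mirrors the equations above for the GPS filter (the camera filter differs only in y and H); the function names are ours.

```python
import math
import numpy as np

def crackle_transition(T):
    """Build the 12x12 constant-crackle transition matrix F: Taylor-series
    blocks T^(j-i)/(j-i)! * I_2 over [pos, vel, acc, jerk, snap, crackle]."""
    F = np.zeros((12, 12))
    for i in range(6):
        for j in range(i, 6):
            F[2*i:2*i+2, 2*j:2*j+2] = (T**(j - i) / math.factorial(j - i)) * np.eye(2)
    return F

def kf_step(x, P, y, F, H, Q, R):
    """One prediction + correction cycle of the per-sensor Kalman filter."""
    # Prediction
    x = F @ x
    P = F @ P @ F.T + Q
    # Correction
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (y - H @ x)                # state update with innovation
    P = (np.eye(12) - K @ H) @ P           # covariance update
    return x, P
```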
$$ y^{GPS}_k = \begin{bmatrix} x^{GPS}_{ship} & y^{GPS}_{ship} & \dot{x}^{GPS}_{ship} & \dot{y}^{GPS}_{ship} & 0 & 0 \end{bmatrix}^T, $$

where $x^{GPS}_{ship}$ and $y^{GPS}_{ship}$ denote the measured locations of the ship in the x and y directions in the UAV vehicle-1 frame, respectively, and $\dot{x}^{GPS}_{ship}$ and $\dot{y}^{GPS}_{ship}$ represent their velocities. The x and y position crackles, which are pseudo-measurements, are set to zero to represent Eq. (11). The measurement matrix H_GPS is then defined as:

$$ H_{GPS} = \begin{bmatrix} I_2 & 0_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & I_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2 \end{bmatrix}. $$

For simplicity, we only describe the detailed equations for the GPS case. Its counterpart for the camera case can be expressed in the same form while modifying the measurements and the measurement matrix as:

$$ y^{CAM}_k = \begin{bmatrix} x^{CAM}_{ship} & y^{CAM}_{ship} & 0 & 0 \end{bmatrix}^T, \qquad H_{CAM} = \begin{bmatrix} I_2 & 0_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2 \end{bmatrix}. $$

To measure the relative position of the landing platform with respect to the UAV, AR tag detection (which will be explained in the next section) and the perspective-n-point method [37] are exploited. The perspective-n-point method estimates the pose of the camera from a set of 3-D points and their 2-D projections onto the image plane; its implementation uses the OpenCV function SolvePnP. The error of the perspective-n-point method is computed to be approximately 0.1 m in the UAV landing simulation. In case of flight experiments, the er-
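For reference, the perspective-n-point step can be reproduced with a few OpenCV calls, as sketched below; the marker side length, camera intrinsics, and pixel detections are illustrative placeholders rather than the values used in the paper.

```python
import cv2
import numpy as np

L = 0.5  # assumed marker side length [m]
# Known 3-D positions of the four tag corners on the landing pad.
object_pts = np.array([[-L/2, -L/2, 0.0], [ L/2, -L/2, 0.0],
                       [ L/2,  L/2, 0.0], [-L/2,  L/2, 0.0]])
# Detected 2-D corners in the image [px] (placeholder values).
image_pts = np.array([[512., 300.], [700., 310.], [690., 500.], [505., 490.]])
# Assumed pinhole intrinsics and an undistorted image.
K = np.array([[800., 0., 1024.], [0., 800., 768.], [0., 0., 1.]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
# tvec is the landing-pad position in the camera frame; it feeds the
# camera-track Kalman filter described above.
```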
Fig. 11. Time history of the motion of the ship: (a) linear motion; and (b) angular motion.

Fig. 12. Distance between the center of the features and the center of the image plane: (a) without adaptive IBVS gain; and (b) with adaptive IBVS gain.

Fig. 13. Comparison of the effect of the square compensation for landing pad oscillation.

Fig. 14. Simulation results: (a) time history of the velocity of the ship and UAV; (b) time history of the altitudes of the ship and UAV; (c) time history of the horizontal position error; (d) time history of the estimated velocity of the ship; (e) time history of the orientation of the ship and UAV; and (f) time history of the roll and pitch angles of the UAV.
To estimate the velocity of the ship, the relative pose of the ship with respect to the camera frame and the position and velocity from the GPS in the global frame are used in the sensor fusion module. For the IBVS term, the virtual image plane transform and square compensation are conducted. Using the modified feature positions and the estimated horizontal velocity of the ship, the FF-IBVS control input is calculated in the form of a velocity command.
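Pulling the pieces together, one control cycle of this module might look like the following sketch, reusing the helper functions from the earlier listings; ibvs_velocity stands in for the interaction-matrix solve v_d = −λ L̂_e⁺ e, which is not reproduced here.

```python
import numpy as np

def ff_ibvs_step(features_px, roll, pitch, v_ship_hat, f, k, ibvs_velocity):
    """One FF-IBVS control cycle: virtual-plane transform -> square
    compensation -> adaptive-gain IBVS -> ship-velocity feed-forward
    (Eq. (10)).  Illustrative glue code, not the paper's implementation."""
    # Remove the UAV attitude from the measured features.
    s_virtual = np.array([to_virtual_image_plane(s, f, roll, pitch)
                          for s in features_px])
    # Remove the landing-pad tilt by fitting the features to a square.
    s_square = fit_square(s_virtual)
    # The adaptive gain uses the centroid offset from the image center.
    c = np.linalg.norm(s_square.mean(axis=0))
    v_d = apply_adaptive_gain(ibvs_velocity(s_square), c, k)
    # Feed-forward the estimated horizontal ship velocity.
    v_d[:2] += v_ship_hat[:2]
    return v_d
```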
Fig. 17. Landing platforms for flight experiments: (a) RC car leading a landing pad; and (b) motion platform on a truck for simulating motion of the ship.
Fig. 18. UAVs for experiments: (a) Tarot X4 equipped with a gimbal camera; and (b) Tarot 650 pro without a gimbal camera.
Fig. 19. Images captured during the experiments for situation (a).
At each IBVS level, the completion of the corresponding IBVS state is determined by the size of the square formed by the feature points. If the pixel error of the length of one side of the square remains less than a threshold for more than 3 s, the IBVS n (n = 1, 2, and 3) flag becomes true, and the landing state machine advances to the next state. If the IBVS 3 flag is true, the state machine enters the landing state, which is the final state of the entire landing procedure. In the landing state, the UAV decreases its altitude at a constant descent rate of 1 m/s while maintaining the feed-forward velocity, i.e., the estimated horizontal velocity of the ship. During any IBVS state, if the UAV misses the markers, the state is changed to the hold state. In the hold state, the UAV holds its altitude and the feed-forward velocity. If the markers are not detected for more than 3 s, the state enters the rising state, in which the UAV increases its altitude at a constant climb rate of 1 m/s. In the hold or rising state, if the UAV detects the markers again, the state is changed back to the appropriate IBVS level. If the UAV cannot detect the markers continuously for more than 5 s, all parameters are initialized, and the state is changed back to the approach state.
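The procedure above maps naturally onto a small state machine; the Python sketch below captures the transitions described in the text, with the predicates (marker visibility, the 3 s square-error condition, the IBVS level matching the current altitude) passed in as arguments since their implementations are not reproduced here.

```python
from enum import Enum, auto

class S(Enum):
    APPROACH = auto(); IBVS1 = auto(); IBVS2 = auto(); IBVS3 = auto()
    HOLD = auto(); RISING = auto(); LANDING = auto()

NEXT = {S.IBVS1: S.IBVS2, S.IBVS2: S.IBVS3, S.IBVS3: S.LANDING}

def transition(state, marker_visible, lost_time, ibvs_done, level_for_altitude):
    """One tick of the landing state machine (illustrative sketch).

    marker_visible     : AR tags currently detected
    lost_time          : seconds since the markers were last seen
    ibvs_done          : square-side pixel error below threshold for > 3 s
    level_for_altitude : IBVS state matching the current relative altitude
    """
    if state in NEXT:                       # one of the three IBVS levels
        if not marker_visible:
            return S.HOLD                   # hold altitude and feed-forward
        if ibvs_done:
            return NEXT[state]              # next level, or final landing
    elif state in (S.HOLD, S.RISING):
        if marker_visible:
            return level_for_altitude       # resume the appropriate level
        if lost_time > 5.0:
            return S.APPROACH               # re-initialize everything
        if state is S.HOLD and lost_time > 3.0:
            return S.RISING                 # climb at 1 m/s to reacquire
    return state
```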
6. Simulations

To verify the feasibility and performance of the proposed autonomous landing system, numerical simulations are first conducted. Fig. 10 shows the simulation in the Gazebo environment using the PX4 autopilot (i.e., flight controller) with an Iris UAV model from 3DR. The bottom and top of the left side of each figure present the images acquired from the camera and the features extracted from the image, respectively. In the feature view, the rectangular and circular shapes indicate the coordinates of the desired and measured features, respectively. The horizontal FOV and resolution of the camera are set to 102° and 2048 × 1536 pixels, respectively. The simulation is carried out in a scenario in which the ship moves forward at a speed of 10 knots (≈ 5.14 m/s) in the Sea State 4 environment [9]. The ship motion, modeled as the superposition of three sinusoidal functions according to Sea State 4, is presented in Table 1, and the corresponding time history of the motion of the ship is illustrated in Fig. 11. The position errors of the GPS of the ship and the UAV are modeled as Gauss-Markov processes [39]:

$$ \nu_{k+1} = e^{-k_{GPS} T_s} \nu_k + \eta_k, $$

where ν_{k+1} and ν_k represent the errors simulated at time steps k + 1 and k, respectively, η_k denotes Gaussian white noise at time step k, 1/k_GPS is the time constant of the process, and T_s represents the sampling time. The velocity error of the GPS is set to a standard deviation of 0.05 m/s [34]. In these simulations, the standard deviation of η_k and the time constant 1/k_GPS are set to 0.21 m and 1100 s, respectively.
Table 1
Simulated ship motion in the Sea State 4 environment.
Fig. 21. Images captured during the experiments for situation (b).
Fig. 22. Experiment results for situation (b): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.

Fig. 23. Experiment results for situation (c): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.
Table 3
Parameters related to FF-IBVS for the experiments.

Table 4
Experiment results for situation (a): touchdown error.
Fig. 24. Images captured during the experiments for situation (d).
Table 5
Experiment results for situation (b): touchdown error.

                      Trial               Mean error   Std. deviation
                      1st    2nd    3rd
Touchdown error [m]   0.70   0.77   0.95   0.81        0.13
Table 6
Experiment results for situation (c): touchdown error.

                      Trial               Mean error   Std. deviation
                      1st    2nd    3rd
Touchdown error [m]   1.1    0.9    0.5    0.83        0.31
Table 7
Experiment results for situation (d): touchdown error.

                      Trial               Mean error   Std. deviation
                      1st    2nd    3rd
Touchdown error [m]   0.2    0.1    0.3    0.2         0.1

is set to 10 m. Table 7 shows the touchdown error for all experiments: a mean error of 0.2 m, a standard deviation of 0.1 m, and a maximum error of 0.3 m are obtained. A movie clip showing the experiments for situation (d) can be found at https://www.youtube.com/watch?v=q4PM37adCuk.

8. Conclusions and future work

This study proposed a vision-based autonomous ship deck landing strategy using feed-forward image-based visual servoing (FF-IBVS). Conventional IBVS schemes cannot guarantee the convergence of the position error to zero if the target is not stationary. To resolve this issue, the velocity of the ship was added as a feed-forward term in IBVS. The motion of the ship was measured by the GPS on the ship deck and the camera attached to the UAV, and then estimated via Kalman filtering. In addition, the adaptive IBVS gain was used so that the features remained in the FOV, and the features were compensated by fitting them to a square to avoid the tilt effects of the ship. To accomplish the entire landing procedure autonomously, a landing scheme in the form of a state machine, including the approach state, three IBVS levels according to the relative altitude between the ship and the UAV, and the hold, rising, and landing states, was designed. The proposed autonomous landing algorithm was verified via various simulations and real flight experiments.

In the method proposed in this paper, the use of an additional sensor (i.e., the GPS on the ship deck) was necessary to estimate the velocity of the ship accurately. However, a sensor attached to the ship complicates the system, and if issues such as communication failures occur, autonomous landing might not be possible. Therefore, as future work, a method for estimating the velocity of the ship without the aid of GPS will be investigated. In addition, to cope with strong winds and gusts in harsh marine environments, robust control techniques (e.g., sliding mode control and/or a disturbance observer) will be considered.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

The authors are unable or have chosen not to specify which data has been used.

Acknowledgement

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03040570), the Development of Drone System for Ship and Marine Mission of the Civil Military Technology Cooperation Center (18-CM-AS-22), and the Operation of an Unmanned Aerial System for a VTOL funded by the Agency for Defense Development (111885-912828001).

References

[1] S. Kim, J. Park, D. Han, E. Kim, D. Lee, Development of a vision-based recognition and position measurement system for cooperative missions of multiple heterogeneous unmanned vehicles, Int. J. Aeronaut. Space Sci. 22 (2) (2021) 468–478.
[2] Q. Wang, Y. Zhang, Heterogeneous sensor-based target tracking with constant time delay, Int. J. Aeronaut. Space Sci. 22 (1) (2021) 186–194.
[3] A. Nayyar, B.-L. Nguyen, N.G. Nguyen, The internet of drone things (IODT): future envision of smart drones, in: First International Conference on Sustainable Technologies for Computational Intelligence, Springer, 2020, pp. 563–580.
[4] A. Kumar, R. Krishnamurthi, A. Nayyar, A.K. Luhach, M.S. Khan, A. Singh, A novel software-defined drone network (SDDN)-based collision avoidance strategies for on-road traffic monitoring and management, Veh. Commun. 28 (2021) 100313.
[5] N.A. Khan, N.Z. Jhanjhi, S.N. Brohi, A. Nayyar, Emerging use of UAV's: secure communication protocol issues and challenges, in: Drones in Smart-Cities, Elsevier, 2020, pp. 37–55.
[6] T. Dautermann, B. Korn, K. Flaig, M.U. de Haag, GNSS double differences used as beacon landing system for aircraft instrument approach, Int. J. Aeronaut. Space Sci. 22 (6) (2021) 1455–1463.
[7] Y. Kang, B. Park, A. Cho, C. Yoo, Y. Kim, S. Choi, S. Koo, S. Oh, A precision landing test on motion platform and shipboard of a tilt-rotor UAV based on RTK-GNSS, Int. J. Aeronaut. Space Sci. 19 (4) (2018) 994–1005.
[8] J.S. Wynn, T.W. McLain, Visual servoing with feed-forward for precision shipboard landing of an autonomous multirotor, in: 2019 American Control Conference, IEEE, 2019, pp. 3928–3935.
[9] Manual on Codes – International Codes, Volume I.3 – Annex II to the WMO Technical Regulations: Part D – Representations derived from data models.
[10] N. Shahriari, S. Fantasia, F. Flacco, G. Oriolo, Robotic visual servoing of moving targets, in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 77–82.
[11] I. Siradjuddin, L. Behera, T.M. McGinnity, S. Coleman, Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller, IEEE/ASME Trans. Mechatron. 19 (2) (2013) 512–523.
[12] J.P. Alepuz, M.R. Emami, J. Pomares, Direct image-based visual servoing of free-floating space manipulators, Aerosp. Sci. Technol. 55 (2016) 1–9.
[13] S. Hutchinson, G.D. Hager, P.I. Corke, A tutorial on visual servo control, IEEE Trans. Robot. Autom. 12 (5) (1996) 651–670.
[14] F. Chaumette, S. Hutchinson, Visual servo control. I. Basic approaches, IEEE Robot. Autom. Mag. 13 (4) (2006) 82–90.
[15] S. Jung, K.B. Ariyur, Automated wireless recharging for small UAVs, Int. J. Aeronaut. Space Sci. 18 (3) (2017) 588–600.
[16] C. Chen, S. Chen, G. Hu, B. Chen, P. Chen, K. Su, An auto-landing strategy based on pan-tilt based visual servoing for unmanned aerial vehicle in GNSS-denied environments, Aerosp. Sci. Technol. (2021) 106891.
[17] S. Yang, J. Ying, Y. Lu, Z. Li, Precise quadrotor autonomous landing with SRUKF vision perception, in: 2015 IEEE International Conference on Robotics and Automation, 2015, pp. 2196–2201.
[18] W. Zhao, H. Liu, X. Wang, Robust visual servoing control for quadrotors landing on a moving target, J. Franklin Inst. 358 (4) (2021) 2301–2319.
[19] J.J. Acevedo, M. García, A. Viguria, P. Ramón, B.C. Arrue, A. Ollero, Autonomous landing of a multicopter on a moving platform based on vision techniques, in: Iberian Robotics Conference, Springer, 2017, pp. 272–282.
[20] D. Falanga, A. Zanchettin, A. Simovic, J. Delmerico, D. Scaramuzza, Vision-based autonomous quadrotor landing on a moving platform, in: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics, 2017, pp. 200–207.
[21] R.O. de Santana, L.A. Mozelli, A.A. Neto, Vision-based autonomous landing for micro aerial vehicles on targets moving in 3D space, in: 2019 19th International Conference on Advanced Robotics, IEEE, 2019, pp. 541–546.
[22] E. Rohmer, S.P. Singh, M. Freese, V-REP: a versatile and scalable robot simulation framework, in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1321–1326.
[23] Z. Tang, R. Cunha, D. Cabecinhas, T. Hamel, C. Silvestre, Quadrotor going through a window and landing: an image-based visual servo control approach, Control Eng. Pract. 112 (2021) 104827.
[24] T. Hamel, R. Mahony, Visual servoing of an under-actuated dynamic rigid-body system: an image-based approach, IEEE Trans. Robot. Autom. 18 (2) (2002) 187–198.
[25] N. Guenard, T. Hamel, R. Mahony, A practical visual servo control for an unmanned aerial vehicle, IEEE Trans. Robot. 24 (2) (2008) 331–340.
[26] D. Lee, H. Lim, H.J. Kim, Y. Kim, K.J. Seong, Adaptive image-based visual servoing for an underactuated quadrotor system, J. Guid. Control Dyn. 35 (4) (2012) 1335–1353.
[27] D. Lee, T. Ryan, H.J. Kim, Autonomous landing of a VTOL UAV on a moving platform using image-based visual servoing, in: 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 971–976.
[28] P. Serra, R. Cunha, T. Hamel, D. Cabecinhas, C. Silvestre, Landing on a moving target using image-based visual servo control, in: 53rd IEEE Conference on Decision and Control, 2014, pp. 2179–2184.
[29] Q.H. Truong, T. Rakotomamonjy, A. Taghizad, J.-M. Biannic, Vision-based control for helicopter ship landing with handling qualities constraints, IFAC-PapersOnLine 49 (17) (2016) 118–123.
[30] T. Rakotomamonjy, Q.H. Truong, Helicopter ship landing using visual servoing on a moving platform, IFAC-PapersOnLine 50 (1) (2017) 10507–10512.
[31] I. Borshchova, S. O'Young, Visual servoing for autonomous landing of a multirotor UAS on a moving platform, J. Unmanned Veh. Syst. 5 (1) (2016) 13–26.
[32] I. Borshchova, S. O'Young, Marker-guided auto-landing on a moving platform, Int. J. Intell. Unmanned Syst. (2017).
[33] Y. Bar-Shalom, L. Campo, The effect of the common process noise on the two-sensor fused-track covariance, IEEE Trans. Aerosp. Electron. Syst. 6 (1986) 803–805.
[34] R.W. Beard, T.W. McLain, Small Unmanned Aircraft, Princeton University Press, 2012.
[35] T. Perez, M. Blanke, Simulation of ship motion in seaway, Technical Report, The University of Newcastle, Callaghan, Australia, 2002, pp. 1–13.
[36] J.L. Sanchez-Lopez, S. Saripalli, P. Campoy, J. Pestana, C. Fu, Toward visual autonomous ship board landing of a VTOL UAV, in: 2013 International Conference on Unmanned Aircraft Systems, IEEE, 2013, pp. 779–788.
[37] M.A. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
[38] S. Garrido-Jurado, R. Muñoz-Salinas, F.J. Madrid-Cuevas, M.J. Marín-Jiménez, Automatic generation and detection of highly reliable fiducial markers under occlusion, Pattern Recognit. 47 (6) (2014) 2280–2292.
[39] J. Rankin, An error model for sensor simulation GPS and differential GPS, in: Proceedings of IEEE Position, Location and Navigation Symposium, PLANS'94, 1994, pp. 260–266.