
Aerospace Science and Technology 130 (2022) 107869


Autonomous ship deck landing of a quadrotor UAV using feed-forward image-based visual servoing

Gangik Cho a, Joonwon Choi b, Geunsik Bae a, Hyondong Oh a,*

a Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
b School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN 47906, United States

* Corresponding author. E-mail addresses: chogi89@unist.ac.kr (G. Cho), choi774@purdue.edu (J. Choi), baegs94@unist.ac.kr (G. Bae), h.oh@unist.ac.kr (H. Oh). https://doi.org/10.1016/j.ast.2022.107869

Article history: Received 9 April 2022; Received in revised form 28 July 2022; Accepted 6 September 2022; Available online 13 September 2022. Communicated by Hever Moncayo.

Keywords: Autonomous landing; Image-based visual servoing; Sensor fusion; Unmanned aerial vehicle

Abstract

Autonomous takeoff and landing are crucial for unmanned aerial vehicles (UAVs) to perform various missions automatically. However, low-accuracy navigation sensors such as the global positioning system (GPS) are of limited use for autonomous takeoff and landing in marine applications, where ships move fast and oscillate due to sea waves. In this study, an image-based visual servoing (IBVS) technique, originally used to land a UAV on a static target, is extended to a moving target. In particular, the ship velocity is estimated and used as a feed-forward term in the IBVS controller. To accurately estimate the velocity of the ship, velocity data from the GPS on the ship, image information obtained from the camera on the UAV, and a dynamic model of the ship are combined in a Kalman filter framework. Besides, considering the under-actuated nature of the UAV and the oscillation of the ship, a virtual plane concept, an adaptive IBVS gain, and feature shape compensation are introduced. Lastly, to apply the IBVS controller to moving ship deck landing, a robust and safe autonomous landing procedure, from the approach to the touchdown phase, is also developed. The proposed autonomous landing system is validated via simulations and various real-world flight experiments simulating situations in which the ship moves fast and oscillates due to sea waves. In the flight experiments, the UAV lands successfully on the landing pad with an average touchdown error of 0.2 m while the ship oscillates at Sea State 4 and moves faster than 5 m/s.

© 2022 Elsevier Masson SAS. All rights reserved.

1. Introduction

Unmanned aerial vehicles (UAVs) have been widely used for surveillance, reconnaissance missions, search and rescue operations, and wind turbine and bridge inspections in both military and industrial fields [1–5]. For such missions, the takeoff and landing of the UAV are better performed autonomously before and after the mission. In particular, if the UAV is operated in marine environments far from land, it should be able to land on the possibly small and narrow deck of a moving ship oscillated by sea waves [6]. However, owing to significant position errors, conventional sensing systems such as the global positioning system (GPS) are insufficient for autonomous landing in this condition. Although real-time kinematic GPS (RTK-GPS) can be employed for accurate position estimation [7], installing RTK-GPS on a moving ship is expensive and challenging. To address the inaccuracy of GPS, vision sensors can be added to the sensing system; hence, in this research, visual servoing is exploited for the autonomous landing of a UAV on a moving ship deck.

Many studies have addressed the autonomous landing of a UAV using visual servoing. However, studies focusing on targets that oscillate while moving at high speed are insufficient, and systematic studies on the entire landing procedure, from the approach to the touchdown phase, are also lacking. Therefore, this study proposes an autonomous landing system for a UAV based on feed-forward image-based visual servoing (FF-IBVS) to achieve landing on a small ship deck that moves at high speed and oscillates due to sea waves. To make the landing system robust and stable, an adaptive IBVS gain and compensation of the landing feature shape are applied. In addition, to improve the landing performance, reliable velocity estimation of the ship is newly introduced, and the entire landing procedure is made fully autonomous.

The main contributions of this study are as follows. First, we enhance the autonomous landing performance, building upon the FF-IBVS method [8], via several techniques: an adaptive IBVS gain, feature shape compensation for IBVS, and improved estimation using the Kalman filter and sensor fusion.
The adaptive IBVS gain is introduced to maintain the features in the FOV by slowing down the altitude rate. The features are compensated to a square shape to eliminate unnecessary IBVS commands caused by the distortion triggered by the changing attitude of the ship. Second, a landing system for robust and safe autonomous landing is designed. To detect the features and land on the target (e.g., a ship) from a long distance, the size and placement of augmented reality (AR) tags (i.e., markers for the landing) are carefully determined. The landing procedure, from the approach to the touchdown phase, is also designed with a state machine structure. It ensures that if the marker is missed, the UAV holds its position near the target or changes its altitude to find the marker again. Lastly, comprehensive realistic simulations and flight experiments in harsh conditions are conducted to validate the proposed algorithm. In these simulations and experiments, the environment is set to a harsh condition, where the ship is moving at a speed of 5 m/s while oscillating at Sea State 4. Here, Sea State 4 refers to a sea-wave height between 1.25 and 2.5 m [9]. To the best of our knowledge, 5 m/s is the fastest setup for a vision-based autonomous landing experiment of a quadrotor UAV. Under such severe circumstances, it is very difficult for UAVs to land on a moving ship deck. The objectives of this paper are to apply several techniques for vision-based autonomous landing, such as feature compensation, ship state estimation, and a robust landing procedure based on the state machine, and to verify the performance of the proposed approach via simulations and real-world flight experiments.

The rest of this paper is organized as follows. In Section 2, related work is reviewed, and in Section 3, IBVS is briefly introduced and additional processes for an under-actuated system such as a quadrotor UAV are explained. Subsequently, a feed-forward IBVS for compensating the velocity of a moving target to achieve precision landing is presented in Section 4. In Section 5, the entire autonomous landing system, including the marker setup for IBVS and the landing procedure, is proposed. The performance of the proposed controller and landing system is verified via simulations and experiments in Sections 6 and 7, respectively. The conclusions and future work are given in Section 8.

2. Related work

Visual servoing is a control method for guiding unmanned vehicles or robots (especially robot manipulators [10–12]) to a target position using a vision sensor. It can be categorized into position-based visual servoing (PBVS) and image-based visual servoing (IBVS) [13]. In PBVS, the states are defined as the pose of the target, and the target pose is estimated with respect to the camera frame. The error is expressed as the relative pose between the target and the camera, and PBVS outputs the command to reduce this error. PBVS enables the camera to move to the target along an optimal trajectory. However, poor state estimation could destabilize the pose of the camera, which triggers issues such as perturbations in the trajectory and inaccuracies after convergence [14].

Several studies have been conducted on the autonomous landing of UAVs using PBVS. Jung et al. estimated the horizontal distance error between the landing target and the UAV using the center of the measured feature position and marker length information [15]. For the autonomous landing on the target, a proportional–integral–derivative controller was employed. To handle the limited field of view (FOV) of the camera, Chen et al. used a pan-tilt camera, estimated the landing target, and then conducted autonomous landing [16]. Yang et al. exploited PBVS to achieve the takeoff and landing of a UAV with a square-root unscented Kalman filter to estimate the pose of the UAV [17]. Zhao et al. proposed a PBVS controller that is robust to time delays in the translational and rotational dynamics [18]. Acevedo et al. conducted a study to achieve landing on a moving platform using PBVS [19], where the platform moved with a maximum speed of 10 km/h. Falanga et al. developed a fully autonomous quadrotor system that can land on a moving target using PBVS [20]. In the study conducted by Santana et al., autonomous landing using PBVS on a platform oscillating with heave motion was presented [21], and simulations were carried out with the virtual robot experimentation platform (V-REP) [22] to verify the algorithm.

In contrast, in IBVS, the states are defined as the positions of the features in the image plane, and the error is defined as the pixel position error between the desired and current feature positions. From the pixel error in the image plane, the desired velocity command to move the camera to a target pose is calculated with the image Jacobian, which describes the relationship between the velocity of the camera in 3-D space and the feature velocity in the 2-D image plane. IBVS is known to be more robust against pixel measurement errors than PBVS [14].

Several studies have also been actively conducted on autonomous landing using IBVS. Tang et al. applied spherical image centroid-based IBVS for going through a window and landing on a target with a UAV [23]. Hamel et al. applied IBVS to an under-actuated system for the first time using a robust backstepping technique [24]. They considered the full dynamics of the camera motion fixed to the rigid body. As an extension of this work, Guenard et al. carried out a hovering experiment of a quadrotor UAV [25]. Lee et al. applied a virtual image plane to IBVS to compensate for the effect of the attitude of the UAV. Furthermore, they designed an adaptive sliding mode controller [26,27] in order to keep the image within the FOV of the camera. In particular, in [27], a patrol mode to search for the target was added. Serra et al. proposed a control law for landing on a platform with heave motion, and then conducted simulations and indoor experiments [28]. In the research conducted by Truong et al. [29], a controller was introduced for the ship landing of a helicopter, using a combination of IBVS and translational rate command, and then simulations were carried out. In the study by Rakotomamonjy et al. [30], to land on a moving ship deck, the velocity of the ship was estimated using the response amplitude operator and an autoregressive moving average model. The motion of the ship was compensated in the IBVS controller, and its performance was verified via simulations. Borshchova et al. conducted autonomous landing simulations and experiments on a ship deck [31,32]. They adopted a color detection method for the IBVS features to reduce the computational load. Simulations were also conducted for a moving target in V-REP. Wynn et al. proposed feed-forward IBVS (FF-IBVS) to compensate for the velocity of the moving ship [8]. The velocity of the ship was estimated using an extended Kalman filter (EKF), which fuses visual and GPS measurements, and the estimated velocity was used as a feed-forward term combined with the IBVS controller. They also proposed the entire process for autonomous landing on a moving target starting from the approach phase. In the experiment, the velocity of the target was set at approximately 1 m/s, with heave motion; in addition, the precision landing performance was verified in flight experiments.

As mentioned above, there are existing studies on landing a UAV on a moving ship. However, there are few studies dealing with a harsh outdoor environment where the landing pad is oscillated by sea waves while moving forward at high speed. For the autonomous landing system, the entire landing procedure from the approach phase to touchdown has also not been studied sufficiently. Therefore, in this paper, first, to land the UAV on a ship that is moving fast and oscillating, several techniques such as an adaptive IBVS gain, feature shape compensation, and sensor fusion are exploited. Second, for robust and safe autonomous landing, the entire landing procedure based on the state machine is designed. Lastly, to verify the performance of the autonomous landing algorithm, real-world flight experiments as well as numerical simulations in harsh conditions are conducted.
Fig. 1. Pinhole camera model.

3. Image-based visual servoing

In this section, conventional IBVS is briefly reviewed [14], and then several techniques for applying IBVS to autonomous landing are presented.

3.1. Conventional IBVS

As mentioned in Section 1, the objective of IBVS is to reduce the pixel error between the desired and measured feature positions in the image plane. The error e for the IBVS is defined as:

e = s_d - s,

where s_d and s denote the desired and measured feature positions, respectively. The relationship between the camera and feature velocity is given by:

ṡ = L_s v_c,   (1)

where L_s denotes the image Jacobian matrix, and v_c ∈ R^6 represents the camera velocity comprising three linear motions and three angular motions (i.e., v_c = [v_{c,x} v_{c,y} v_{c,z} ω_{c,x} ω_{c,y} ω_{c,z}]^T). The image Jacobian matrix L_s can be derived from the pinhole camera model illustrated in Fig. 1. An arbitrary point P = [x y z]^T, expressed in the camera frame (O XYZ), is projected to the image plane as s = [x' y']^T:

x' = f x / z,   (2)
y' = f y / z,   (3)

where f denotes the camera focal length. Taking the time derivative of Eqs. (2) and (3), we have:

ẋ' = (f ẋ - x' ż) / z,   (4)
ẏ' = (f ẏ - y' ż) / z.   (5)

When the camera is moving in 3-D space, the velocity of the arbitrary point in the camera frame is expressed as:

Ṗ = [ẋ, ẏ, ż]^T = -v_{c,lin} - v_{c,ang} × P
  = [-v_{c,x} - ω_{c,y} z + ω_{c,z} y,  -v_{c,y} - ω_{c,z} x + ω_{c,x} z,  -v_{c,z} - ω_{c,x} y + ω_{c,y} x]^T,   (6)

where v_{c,lin} = [v_{c,x} v_{c,y} v_{c,z}]^T and v_{c,ang} = [ω_{c,x} ω_{c,y} ω_{c,z}]^T represent the linear and angular velocities of the camera, respectively. From Eqs. (1)–(6), the image Jacobian matrix, which relates the velocity of the feature in the image plane to the camera velocity, can be obtained as:

L_s = \begin{bmatrix} -f/z & 0 & x'/z & x'y'/f & -(f^2 + x'^2)/f & y' \\ 0 & -f/z & y'/z & (f^2 + y'^2)/f & -x'y'/f & -x' \end{bmatrix}.

Assuming the desired feature position s_d is static (i.e., ė = ṡ), and to ensure that the feature error decreases exponentially (i.e., ė = -λe), the relationship between the camera velocity and feature error can be obtained as:

v_c = -λ L_s^+ e,

where λ = diag(λ_x, λ_y, λ_z, λ_φ, λ_θ, λ_ψ) is a diagonal matrix of positive gains and L_s^+ denotes the pseudo-inverse of L_s. Note that knowing the exact value of L_s is difficult because of pixel error, depth estimation error, and focal length uncertainty. Hence, the IBVS controller is designed as:

v_d = -λ L̂_e^+ e,   (7)

where L̂_e^+ and v_d denote the approximation of L_e^+ and the desired camera velocity, respectively. Assuming that the center of the quadrotor UAV coincides with the center of the camera, v_d can be considered as the desired UAV velocity.
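As a concrete illustration of Eqs. (1)–(7), the following minimal sketch stacks the interaction-matrix rows for a set of image features and computes the desired camera velocity. It is not the authors' implementation; the feature coordinates, depths, and focal length are placeholders, and the diagonal gain values simply reuse the IBVS level 1 simulation gains listed later in Table 2.

```python
import numpy as np

def interaction_matrix(xp, yp, z, f):
    """Image Jacobian (2x6) of one feature at image coords (xp, yp),
    depth z, focal length f, following the L_s expression of Section 3.1."""
    return np.array([
        [-f / z, 0.0, xp / z, xp * yp / f, -(f**2 + xp**2) / f,  yp],
        [0.0, -f / z, yp / z, (f**2 + yp**2) / f, -xp * yp / f, -xp],
    ])

def ibvs_command(features, desired, depths, f, lam):
    """v_d = -lambda * pinv(L_e) * e  (Eq. (7)), with e = s_d - s stacked
    over all features. 'lam' is the 6x6 diagonal gain matrix."""
    L = np.vstack([interaction_matrix(xp, yp, z, f)
                   for (xp, yp), z in zip(features, depths)])
    e = (np.asarray(desired) - np.asarray(features)).reshape(-1)
    return -lam @ np.linalg.pinv(L) @ e

# Example with four hypothetical features (image coordinates in pixels)
f = 1300.0                                      # placeholder focal length [px]
feats = [(120, 80), (-110, 75), (-115, -90), (125, -85)]
des = [(100, 100), (-100, 100), (-100, -100), (100, -100)]
depths = [5.0] * 4                              # placeholder depth estimates [m]
lam = np.diag([2.0, 2.0, 5.0, 0.0, 0.0, 0.2])   # level 1 gains from Table 2
v_d = ibvs_command(feats, des, depths, f, lam)
```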
3.2. IBVS for under-actuated systems

Fig. 2. Projections of the target positions on the image plane and virtual image plane: (a) on the image plane; and (b) on the virtual image plane, respectively.

From the IBVS controller, a six degrees-of-freedom (DOF) desired velocity (v_d ∈ R^6 = [v_x v_y v_z ω_x ω_y ω_z]^T) can be calculated as in Eq. (7). However, in an under-actuated system such as the quadrotor UAV, the roll and pitch rates are coupled with the linear velocities in the y and x directions, respectively. In other words, the UAV cannot make roll and pitch rate motions independently of the y- and x-axis velocities. Furthermore, owing to this under-actuated nature, the UAV could generate velocity commands in the direction opposite to the target, depending on the situation. For example, as described in Fig. 2 (a), although the target position is on the left side of the UAV, the target is projected on the right half of the image plane when the UAV is inclined; hence, the UAV will move to the right under the IBVS controller to make the center of the image plane coincide with the target position, which is opposite to the target position. To address this issue, the features are transformed into a virtual coordinate frame [26]. The virtual coordinate frame is defined such that its origin coincides with that of the camera coordinate frame and its z-axis is parallel to the z-axis of the inertial frame. The virtual image plane is also defined such that the image plane is rotated from the camera frame to the virtual coordinate frame. This implies that the virtual image plane is always parallel to the ground, and if the landing pad is parallel to the ground, the roll and pitch rates are always zero, such that the under-actuated system can be decoupled.

The virtual image plane transformation can be conducted by using the roll and pitch angles of the UAV. The 3-D position of the i-th marker in the camera frame, P_i = [x_i y_i z_i]^T, is expressed in the virtual coordinate frame as P_{ri} = [x_{ri} y_{ri} z_{ri}]^T, and its corresponding 2-D positions in the image plane and virtual image plane are s_i and s_{ri}, respectively. The relationship between P_i and P_{ri} can be expressed as:

P_{ri} = R(1, φ) R(2, θ) P_i,   (8)

where

R(1, φ) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & cosφ & -sinφ \\ 0 & sinφ & cosφ \end{bmatrix},   R(2, θ) = \begin{bmatrix} cosθ & 0 & sinθ \\ 0 & 1 & 0 \\ -sinθ & 0 & cosθ \end{bmatrix},

and φ and θ are the roll and pitch angles of the UAV, respectively. Using Eq. (8) and the pinhole camera model (i.e., Eqs. (2)–(3)), the i-th marker position in the virtual image plane can be computed as:

s_{ri} = \begin{bmatrix} x'_{ri} \\ y'_{ri} \end{bmatrix} = \frac{f}{z_{ri}} \begin{bmatrix} x_{ri} \\ y_{ri} \end{bmatrix}
       = f \begin{bmatrix} \dfrac{x'_i cosθ + f sinθ}{-x'_i cosφ sinθ + y'_i sinφ + f cosφ cosθ} \\[2mm] \dfrac{x'_i sinφ sinθ + y'_i cosφ - f sinφ cosθ}{-x'_i cosφ sinθ + y'_i sinφ + f cosφ cosθ} \end{bmatrix},

where x'_i and y'_i denote the image-plane coordinates of the i-th feature.
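The virtual-plane mapping is a pointwise operation on each detected feature. The following sketch (not the authors' code; variable and function names are illustrative) applies Eq. (8) together with the pinhole model, as in the s_{ri} expression above, using the current roll and pitch of the UAV.

```python
import numpy as np

def to_virtual_plane(features, roll, pitch, f):
    """Map image-plane feature coords (x', y') to the virtual image plane
    that stays parallel to the ground (Section 3.2). 'features' is an
    (N, 2) array of image coordinates; angles are in radians."""
    cph, sph = np.cos(roll), np.sin(roll)
    cth, sth = np.cos(pitch), np.sin(pitch)
    out = np.empty((len(features), 2))
    for i, (x, y) in enumerate(features):
        denom = -x * cph * sth + y * sph + f * cph * cth
        out[i, 0] = f * (x * cth + f * sth) / denom
        out[i, 1] = f * (x * sph * sth + y * cph - f * sph * cth) / denom
    return out

# Example: four features observed while the UAV is tilted by 5 deg in roll and pitch
feats = np.array([[120.0, 80.0], [-110.0, 75.0], [-115.0, -90.0], [125.0, -85.0]])
virtual_feats = to_virtual_plane(feats, np.deg2rad(5.0), np.deg2rad(5.0), f=1300.0)
```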
3.3. Adaptive-gain IBVS

Fig. 3. Features are out of the FOV at a low altitude.

Even though the virtual image plane is applied to IBVS, if the UAV decreases its altitude via the IBVS control command while its horizontal position is far from the center of the landing target, the features can easily leave the image plane because of the limited FOV, as illustrated in Fig. 3. To operate IBVS, it is important to keep the features inside the image plane as long as possible. To this end, Lee et al. [26] suggested an adaptive IBVS gain using the inverse tangent function, applied to the horizontal plane to reduce the roll and pitch angles by lowering the velocity in the x- and y-directions and thereby keep the features in the FOV. However, this method can degrade target tracking performance because it reduces the velocity of the UAV, and the effect of the smaller scan area as the UAV decreases its altitude is also not taken into account. Thus, in this research, we propose an adaptive IBVS gain for the altitude rate, adjusted by the feature error in the image plane, so that a small control input is maintained when the UAV has a large distance error to the center of the landing target. The adaptive gain (ad_z) is designed with the sigmoid function as:

ad_z = 1 - \frac{1}{1 + e^{-kc}},   (9)

where c represents the distance between the center of the features and the center of the image plane, and k denotes the gain of the sigmoid function. The IBVS control command with the adaptive gain is then given as:

v_d = -λ diag(1, 1, ad_z, 1, 1, 1) L̂_e^+ e.
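The adaptive altitude gain of Eq. (9) is a one-line function of the pixel distance c; the sketch below (illustrative only, with the value of k taken from Table 2) scales the z-velocity channel of the IBVS command accordingly. Because λ and the diagonal matrix in the command above commute, scaling the third component of an already-computed v_d is equivalent.

```python
import numpy as np

def adaptive_altitude_gain(c, k=0.002):
    """Eq. (9): ad_z -> 0 when the features are far from the image center,
    slowing the descent; ad_z -> 0.5 when they are centered."""
    return 1.0 - 1.0 / (1.0 + np.exp(-k * c))

def apply_adaptive_gain(v_d, c, k=0.002):
    """Scale only the vertical velocity component of the 6-DOF IBVS output."""
    scaled = np.array(v_d, dtype=float)
    scaled[2] *= adaptive_altitude_gain(c, k)
    return scaled

# Example: features are 400 px away from the image center
v_d = np.array([0.4, -0.2, 1.0, 0.0, 0.0, 0.1])
v_d_adapted = apply_adaptive_gain(v_d, c=400.0)   # descent rate reduced
```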
3.4. Square compensation for landing pad oscillation

Fig. 4. Square fitting of the features: (a) before square fitting; and (b) after square fitting.

After the transformation of the features to the virtual image plane, the x- and y-axis velocities and the roll and pitch rates are decoupled. However, if the landing target is oscillating, that is, the plane made up of the features is not parallel to the image plane, the coupling cannot be entirely discarded. As illustrated in Fig. 4, four markers are employed, and the right marker is closer to the center of the camera than the left marker. Consequently, the distance between the features on the right side is longer than the others, and a spurious desired velocity command to the left is obtained, as illustrated in Fig. 4(a). To eliminate the effect of the inclination of the landing pad, the four features are fitted to a square via the least squares method. Then, the plane comprising the four features is made parallel to the image plane, and the IBVS controller does not generate unnecessary commands regardless of the attitude of the landing pad.
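The paper states that the four features are fitted to a square by least squares but does not give the exact formulation; the sketch below shows one common realization, fitting a rotated, scaled, and translated square template to the four ordered corners via a complex-number similarity (Procrustes-style) fit. Corner coordinates are illustrative.

```python
import numpy as np

# Unit-square template corners, ordered to match the detected markers
_TEMPLATE = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2.0)

def fit_square(points):
    """Least-squares fit of a square to four ordered feature points.
    Solves p ~= alpha * t + beta over a complex similarity (rotation, scale,
    translation) and returns the projected, perfectly square corners."""
    p = points[:, 0] + 1j * points[:, 1]        # observed corners as complex numbers
    t = _TEMPLATE
    p_c, t_c = p - p.mean(), t - t.mean()
    alpha = np.sum(p_c * np.conj(t_c)) / np.sum(np.abs(t_c) ** 2)
    beta = p.mean() - alpha * t.mean()
    q = alpha * t + beta                        # corners of the fitted square
    return np.column_stack([q.real, q.imag])

# Example: corners distorted by landing-pad tilt are replaced by the fitted square
observed = np.array([[130.0, 95.0], [-100.0, 90.0], [-105.0, -100.0], [115.0, -88.0]])
square_feats = fit_square(observed)
```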
4. Feed-forward IBVS

The conventional IBVS assumes that the target is stationary [13,14]. To apply IBVS to a moving target, the target velocity should be explicitly considered. In this study, the horizontal linear velocity of the moving ship is used. The desired velocity command from IBVS, combined with the target velocity compensation as a feed-forward term, forms the new feed-forward IBVS (FF-IBVS) command:

v_{d,ff} = v_{d,4DOF} + v̂_{target},   (10)

where v̂_{target} = [v̂_{target,x} v̂_{target,y} 0 0]^T is the estimated horizontal velocity of the ship.

We employ the information from the GPS and camera for the target velocity estimation. The GPS is attached to the ship deck and measures the position and velocity of the ship with respect to the world frame. Then, these measurements are transformed into relative values with respect to the UAV using the information from the GPS onboard the UAV. Meanwhile, the camera calculates the relative pose of the landing pad on the ship with respect to the UAV using the features captured in the image plane. From these relative sensing data, the state of the landing pad relative to the UAV is estimated by the Kalman filter (KF). Besides, to facilitate the information from the two different sensors, we apply the track-to-track fusion algorithm [33] as presented in Fig. 5. The algorithm comprises two distinct KFs updating the ship state via the GPS and camera, and their estimates are fused accordingly. Note that the position measurements from the camera are relatively accurate, and their update rate is faster compared with GPS. However, camera measurements may not be available when the features are out of the camera field-of-view (FOV). Thus, by fusing these two sensors with Kalman filters, we can make the estimation more robust against intermittent measurement loss of the camera or GPS and also improve the estimation accuracy.

Fig. 5. Track-to-track fusion structure.

Let us define the state of the ship (or, equivalently, the landing pad) as:

X = [ x_ship  y_ship  ẋ_ship  ẏ_ship  ...  x_ship^(5)  y_ship^(5) ]^T,

where x_ship and y_ship represent the x- and y-axis positions of the ship in the UAV vehicle-1 frame, respectively. For details of the UAV vehicle-1 frame, which is similar to the body-fixed frame but with no pitch and roll considered, refer to [34]. Because the ship might oscillate due to waves in the marine environment, the designed filter should be able to consider this effect. Various approaches have been presented to estimate the pose of a ship in such conditions [35]. Nonetheless, many existing studies require an accurate dynamic model of the ship, which is usually hard to obtain. In this study, we assume that the motion of the ship can be represented by a simple constant-crackle (5th time derivative of the position) model inspired by [36]. Assuming the ship moves with a constant velocity while slightly oscillating due to the waves, this model would be sufficient to represent the minor rotation of the ship.

The transition matrix of the constant crackle model (F) can be defined as:

F = \begin{bmatrix}
I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 & \frac{T^4}{24} I_2 & \frac{T^5}{120} I_2 \\
0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 & \frac{T^4}{24} I_2 \\
0_2 & 0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 & \frac{T^3}{6} I_2 \\
0_2 & 0_2 & 0_2 & I_2 & T I_2 & \frac{T^2}{2} I_2 \\
0_2 & 0_2 & 0_2 & 0_2 & I_2 & T I_2 \\
0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2
\end{bmatrix},

where T is the sampling time, I_2 is the 2 × 2 identity matrix, and 0_2 is the 2 × 2 zero matrix. The estimated state of the ship at time step k - 1 for the filter with GPS is represented as X̂^GPS_{k-1|k-1}. Hence, the prediction step of the filter can be expressed as:

X̂^GPS_{k|k-1} = F X̂^GPS_{k-1|k-1},
P̂^GPS_{k|k-1} = F P̂^GPS_{k-1|k-1} F^T + Q_{k-1},

where X̂^GPS_{k|k-1} is the predicted state, P̂^GPS_{k-1|k-1} is the error covariance matrix at time step k - 1, P̂^GPS_{k|k-1} is the predicted error covariance matrix, and Q_{k-1} is the system noise at time step k - 1. The correction step can be written as:

S^GPS_k = H_GPS P̂^GPS_{k|k-1} H_GPS^T + R^GPS_k,
ν^GPS_k = y^GPS_k - H_GPS X̂^GPS_{k|k-1},
K^GPS_k = P̂^GPS_{k|k-1} H_GPS^T (S^GPS_k)^{-1},
X̂^GPS_{k|k} = X̂^GPS_{k|k-1} + K^GPS_k ν^GPS_k,
P̂^GPS_{k|k} = (I_{12} - K^GPS_k H_GPS) P̂^GPS_{k|k-1},
Fig. 6. Flowchart of the FF-IBVS algorithm.

Fig. 7. Overview of the autonomous landing system.

Fig. 8. Markers used for vision-based autonomous landing.

where H_GPS is the measurement matrix, K^GPS_k is the Kalman gain, y^GPS_k is the measurement, and R^GPS_k is the measurement noise covariance.

Along with the sensor measurements, to further improve the estimation performance, we impose the following state constraints:

d^5 x_ship / dt^5 = d^5 y_ship / dt^5 = 0.   (11)

This comes from the assumption that the dynamics of the ship is not highly maneuverable, so the crackles would be zero. In preliminary numerical simulations, we observed that the use of these constraints improves the estimation performance to some extent.

To realize Eq. (11) in our KFs, we adopt the pseudo-measurement technique [36], which treats the constraint of a system as additional (i.e., pseudo) measurements. Hence, the measurement at each time step k becomes:
Fig. 9. State machine structure for autonomous landing.

Fig. 10. PX4 simulation in a Gazebo environment.

y^GPS_k = [ x^GPS_ship  y^GPS_ship  ẋ^GPS_ship  ẏ^GPS_ship  0  0 ]^T,

where x^GPS_ship and y^GPS_ship denote the measured locations of the ship in the x and y directions in the UAV vehicle-1 frame, respectively, and ẋ^GPS_ship and ẏ^GPS_ship represent their velocities. The x and y position crackles, the pseudo-measurements, are set to zero to represent Eq. (11). Then, the measurement matrix H_GPS is defined as:

H_GPS = \begin{bmatrix} I_2 & 0_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & I_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2 \end{bmatrix}.

For simplicity, we only describe the detailed equations for the GPS case. Nevertheless, its counterpart for the camera case can be expressed in the same form while modifying the measurements and the measurement matrix as:

y^CAM_k = [ x^CAM_ship  y^CAM_ship  0  0 ]^T,

H_CAM = \begin{bmatrix} I_2 & 0_2 & 0_2 & 0_2 & 0_2 & 0_2 \\ 0_2 & 0_2 & 0_2 & 0_2 & 0_2 & I_2 \end{bmatrix}.
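To make the filter structure concrete, the sketch below assembles the 12-state constant-crackle model and runs one predict/correct cycle with the GPS measurement vector augmented by the zero pseudo-measurements of Eq. (11). It is an illustrative reconstruction; the noise covariances Q and R and the example measurement are placeholders, not the values used in the paper.

```python
import math
import numpy as np

def crackle_F(T, blocks=6):
    """Block upper-triangular transition matrix of the constant-crackle
    model: block (i, j) is (T**(j-i) / (j-i)!) * I_2 for j >= i."""
    F = np.zeros((2 * blocks, 2 * blocks))
    for i in range(blocks):
        for j in range(i, blocks):
            F[2*i:2*i+2, 2*j:2*j+2] = (T ** (j - i) / math.factorial(j - i)) * np.eye(2)
    return F

I2, O2 = np.eye(2), np.zeros((2, 2))
H_GPS = np.block([[I2, O2, O2, O2, O2, O2],    # position
                  [O2, I2, O2, O2, O2, O2],    # velocity
                  [O2, O2, O2, O2, O2, I2]])   # crackle pseudo-measurement = 0

def kf_step(x, P, y, F, H, Q, R):
    """One Kalman predict/correct cycle as written in Section 4."""
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    nu = y - H @ x_pred
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example step with hypothetical numbers (T = 0.1 s, GPS fix 20 m ahead at 5 m/s)
F = crackle_F(T=0.1)
x, P = np.zeros(12), np.eye(12)
Q = 1e-3 * np.eye(12)
R = np.diag([0.2, 0.2, 0.05, 0.05, 1e-4, 1e-4]) ** 2
y = np.array([20.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # last two entries: crackle pseudo-meas.
x, P = kf_step(x, P, y, F, H_GPS, Q, R)
```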
Fig. 11. Time history of the motion of the ship: (a) linear motion; and (b) angular motion.

Fig. 12. Distance between the center of the features and the center of the image plane: (a) without adaptive IBVS gain; and (b) with adaptive IBVS gain.

Fig. 13. Comparison of the effect of the square compensation for landing pad oscillation.

Fig. 14. Simulation results: (a) time history of the velocity of the ship and UAV; (b) time history of the altitudes of the ship and UAV; (c) time history of the horizontal position error; (d) time history of the estimated velocity of the ship; (e) time history of the orientation of the ship and UAV; and (f) time history of the roll and pitch angles of the UAV.

To measure the relative position of the landing platform with respect to the UAV, AR tag detection (explained in the next section) and the perspective-n-point method [37] are exploited. The perspective-n-point method estimates the pose of the camera from a set of 2-D points projected onto the image plane from a set of 3-D points, and its implementation is done by using the OpenCV function SolvePnP. The error of the perspective-n-point is computed as approximately 0.1 m in the UAV landing simulation. In the case of flight experiments, the error cannot be measured, as it is difficult to obtain the ground truth data.
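Since the relative pose is obtained with OpenCV's solvePnP as described above, a minimal usage sketch is given below. The camera intrinsics, distortion values, and pixel coordinates are placeholders; the 2.2 m square is the layout of the four IBVS level 1 tag centers stated in Section 5, used here as the assumed 3-D object points.

```python
import cv2
import numpy as np

# 3-D positions of the four level-1 tag centers in the landing-pad frame (metres);
# per Section 5 they form a 2.2 m square.
half = 2.2 / 2.0
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]], dtype=np.float32)

# 2-D pixel locations of the same points detected in the image (placeholders).
image_points = np.array([[780.0, 520.0], [1260.0, 525.0],
                         [1255.0, 1010.0], [775.0, 1005.0]], dtype=np.float32)

camera_matrix = np.array([[1300.0, 0.0, 1024.0],
                          [0.0, 1300.0, 768.0],
                          [0.0, 0.0, 1.0]])          # assumed intrinsics
dist_coeffs = np.zeros(5)                            # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# tvec is the landing-pad position in the camera frame; after the vehicle-1
# frame transform it feeds the camera-side Kalman filter of Section 4.
```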
Fig. 15. Simulation results: touchdown error.

After the updates of both filters are completed, their states are fused by the state-vector fusion method [33]. Each KF estimate is weighted based on the magnitude of the corresponding error covariance matrix. Accordingly, the more accurate the estimate from one KF is, the larger its portion in the fused state. In contrast, if one filter becomes defective, its error covariance increases significantly, and it is automatically excluded from the fused state. The fused covariance matrix can be computed as:

P_{Tk} = [ (P̂^GPS_{k|k})^{-1} + (P̂^CAM_{k|k})^{-1} ]^{-1},

where P̂^CAM_{k|k} represents the error covariance matrix of the camera filter at time step k. From P_{Tk}, the fused state X̂_{Tk} is represented as:

X̂_{Tk} = P_{Tk} [ (P̂^GPS_{k|k})^{-1} X̂^GPS_{k|k} + (P̂^CAM_{k|k})^{-1} X̂^CAM_{k|k} ].

Note that from X̂_{Tk}, we can extract the estimated ship velocity v̂_{target} used as the feed-forward input of FF-IBVS in Eq. (10).

To verify the performance of the proposed constant-crackle model and pseudo-measurement technique, the estimation accuracy is compared with a second-order constant-velocity model. For the ship moving with a constant speed of 10 m/s and a wave effect represented as the sum of three sinusoidal functions, 1.0 sin(2πt/12), 0.36 sin(2πt/8.75), and 0.096 sin(2πt/3.75), the averaged position and velocity errors of the proposed estimation approach are 0.376 m and 0.629 m/s, respectively, whereas those of the second-order model are 0.394 m and 0.661 m/s, respectively.

The flowchart of the FF-IBVS algorithm is shown in Fig. 6. First, the image is captured from the camera, and the feature points of the AR tags on the landing pad are extracted. To estimate the velocity of the ship, the relative pose of the ship with respect to the camera frame and the position and velocity from the GPS in the global frame are used in the sensor fusion module. For the IBVS term, the virtual image plane transform and square compensation are conducted. Using the modified feature positions and the estimated horizontal velocity of the ship, the FF-IBVS control input in the form of a velocity command is calculated.
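The covariance-weighted fusion of the two filter tracks described above is a direct transcription of the two expressions for P_{Tk} and X̂_{Tk}; the sketch below is illustrative and assumes both filters expose their state vector and covariance at the same time step.

```python
import numpy as np

def fuse_tracks(x_gps, P_gps, x_cam, P_cam):
    """State-vector fusion of the GPS-driven and camera-driven KF tracks.
    The track with the smaller covariance dominates the fused estimate."""
    P_gps_inv = np.linalg.inv(P_gps)
    P_cam_inv = np.linalg.inv(P_cam)
    P_fused = np.linalg.inv(P_gps_inv + P_cam_inv)
    x_fused = P_fused @ (P_gps_inv @ x_gps + P_cam_inv @ x_cam)
    return x_fused, P_fused

# Example: the fused velocity components (indices 2 and 3 of the 12-state
# vector) provide the feed-forward term v_hat_target of Eq. (10).
x_f, P_f = fuse_tracks(np.zeros(12), np.eye(12), np.zeros(12), 4.0 * np.eye(12))
v_hat_target = x_f[2:4]
```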
Fig. 16. Setup for flight experiments.

Fig. 17. Landing platforms for flight experiments: (a) RC car with a landing pad; and (b) motion platform on a truck for simulating the motion of the ship.

Fig. 18. UAVs for experiments: (a) Tarot X4 equipped with a gimbal camera; and (b) Tarot 650 pro without a gimbal camera.

5. Autonomous ship deck landing system

This section describes the autonomous landing system that exploits the feed-forward IBVS algorithm. The entire autonomous ship deck landing system is illustrated in Fig. 7. The ship is assumed to oscillate with six-DOF motions (roll, pitch, yaw, surge, sway, and heave) generated by sea waves while also moving forward. The landing pad is positioned at the stern of the ship, and the GPS is installed on the landing pad. Multiple markers are placed on the landing pad and are used as the features for IBVS. The mission of the UAV is to land on the ship deck using the GPS and the on-board camera of the UAV.

The markers used in this research are illustrated in Fig. 8. We designed three IBVS levels to enable autonomous visual landing from a high altitude, where a different marker size is used for each IBVS level. At a high altitude, small markers cannot be detected owing to the limited camera resolution, and at a low altitude, large markers can be out of the FOV. For the markers, AR tags, specifically ArUco markers, are exploited, which can be detected from a distant camera in a highly reliable manner [38]. The center points of the AR tags at IBVS levels 1 and 2 form squares with side lengths of 2.2 m and 0.68 m, respectively. The marker for IBVS level 3 is located at the center of the landing pad. In a 3-D space, more than three feature points are required for IBVS, and we use four feature points. After the detection of the AR tags, the center points of the four AR tags for IBVS levels 1 and 2, or the four corner points of the single AR tag for IBVS level 3, are used as the feature points for IBVS.
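Since the features are ArUco tags, the feature extraction step can be illustrated with OpenCV's aruco module. The paper does not state which detector implementation or marker dictionary was used, so the dictionary choice and the function-style API (as in OpenCV versions before 4.7) below are assumptions.

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)  # assumed dictionary

def detect_level_features(gray, level_ids):
    """Return the IBVS feature points for one level: the tag centers for
    levels 1-2 (four separate tags) or all four corners of the single
    level-3 tag."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    found = {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}
    if len(level_ids) == 1:                       # level 3: one tag, four corners
        return found.get(level_ids[0])
    pts = [found[i].mean(axis=0) for i in level_ids if i in found]
    return np.array(pts) if len(pts) == len(level_ids) else None
```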
Fig. 19. Images captured during the experiments for situation (a).

Fig. 9 describes the entire landing procedure, in which the decision making for autonomous landing is carried out by the state machine structure. There are seven states: approach, three IBVS states (levels 1, 2, and 3), hold, rising, and landing. In the approach state, the UAV is guided to an approach point 12 m above the center of the landing pad using the GPS information. Once the UAV reaches the approach point and the AR tags for IBVS level 1 are detected, the approach flag becomes true, and FF-IBVS is started. For each IBVS level, the completion of the corresponding IBVS is determined by the size of the square formed by the feature points. If the pixel error of the length of one side of the square remains less than the threshold for more than 3 s, the IBVS n (n = 1, 2, 3) flag becomes true, and the state machine moves to the next state. Lastly, if the IBVS 3 flag is true, the state machine enters the landing state, which is the final state of the entire landing procedure. In the landing state, the UAV decreases its altitude with a constant descent rate of 1 m/s while maintaining the feed-forward velocity, which is the estimated horizontal velocity of the ship. During an IBVS state, if the UAV misses the markers, the state is altered to the hold state. In the hold state, the UAV holds its altitude and the feed-forward velocity. If the markers are not detected for more than 3 s, the state enters the rising state, where the UAV increases its altitude with a constant climb rate of 1 m/s. In the hold or rising state, if the UAV detects the markers again, the state is changed back to the appropriate IBVS level. If the UAV cannot detect the markers continuously for more than 5 s, all parameters are initialized and the state is changed back to the approach state.
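The state logic above can be summarized by a small transition table. The sketch below is a schematic rendering of the described behavior; the 3 s and 5 s timers come from the text, while the class and function names, and the simplification of always re-entering IBVS level 1 after a hold, are illustrative.

```python
import enum
import time

class LandingState(enum.Enum):
    APPROACH = enum.auto()
    IBVS1 = enum.auto()
    IBVS2 = enum.auto()
    IBVS3 = enum.auto()
    HOLD = enum.auto()
    RISING = enum.auto()
    LANDING = enum.auto()

class LandingStateMachine:
    MARKER_LOST_RISE_S = 3.0    # hold -> rising after markers lost this long
    MARKER_LOST_RESET_S = 5.0   # any state -> approach after this long

    def __init__(self):
        self.state = LandingState.APPROACH
        self.last_seen = time.time()

    def step(self, markers_visible, ibvs_level_done, at_approach_point):
        now = time.time()
        if markers_visible:
            self.last_seen = now
        lost_for = now - self.last_seen

        if lost_for > self.MARKER_LOST_RESET_S:
            self.state = LandingState.APPROACH            # re-initialize
        elif self.state == LandingState.APPROACH and at_approach_point and markers_visible:
            self.state = LandingState.IBVS1
        elif self.state in (LandingState.IBVS1, LandingState.IBVS2, LandingState.IBVS3):
            if not markers_visible:
                self.state = LandingState.HOLD
            elif ibvs_level_done:                         # square-size error < threshold for 3 s
                self.state = {LandingState.IBVS1: LandingState.IBVS2,
                              LandingState.IBVS2: LandingState.IBVS3,
                              LandingState.IBVS3: LandingState.LANDING}[self.state]
        elif self.state == LandingState.HOLD and lost_for > self.MARKER_LOST_RISE_S:
            self.state = LandingState.RISING
        elif self.state in (LandingState.HOLD, LandingState.RISING) and markers_visible:
            self.state = LandingState.IBVS1               # simplified: re-enter an IBVS level
        return self.state
```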
6. Simulations

To verify the feasibility and the performance of the proposed autonomous landing system, numerical simulations are first conducted. Fig. 10 shows the simulation in the Gazebo environment using the PX4 autopilot (i.e., flight controller) with an Iris UAV model from 3DR. The bottom and top of the left side of each figure present the images acquired from the camera and the features extracted from the image, respectively. In the feature view, the rectangular and circular shapes indicate the coordinates of the desired and measured features, respectively. The horizontal FOV and resolution of the camera are set to 102° and 2048 × 1536 pixels, respectively. The simulation is carried out in a scenario in which a ship moves forward with a speed of 10 knots (≈ 5.14 m/s) in the Sea State 4 environment [9]. The ship motion, modeled as the superposition of three sinusoidal functions according to Sea State 4, is presented in Table 1, and the corresponding time history of the motion of the ship is illustrated in Fig. 11. The position errors of the GPS of the ship and UAV are modeled as Gauss-Markov processes [39]:

ν_{k+1} = e^{-k_GPS T_s} ν_k + η_k,

where ν_{k+1} and ν_k represent the errors simulated at time steps k + 1 and k, respectively, η_k denotes the Gaussian white noise at time step k, 1/k_GPS is the time constant of the process, and T_s represents the sampling time. The velocity error of the GPS is set to a standard deviation of 0.05 m/s [34]. In these simulations, η_k and 1/k_GPS are set to 0.21 m and 1100, respectively.

In the simulations, the effects of the adaptive-gain IBVS and the square compensation of the features are verified first, and then the entire landing procedure is conducted.
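The simulated ship motion and GPS error can be reproduced with a few lines; the sketch below uses the surge amplitudes and periods quoted in Section 4 and this section's Gauss-Markov parameters, interpreting the quoted 0.21 m as the standard deviation of the white noise (an assumption).

```python
import numpy as np

def surge_motion(t):
    """Sea State 4 surge as the sum of three sinusoids (Table 1, 1st-3rd order)."""
    return (1.0 * np.sin(2 * np.pi * t / 12.0)
            + 0.36 * np.sin(2 * np.pi * t / 8.75)
            + 0.096 * np.sin(2 * np.pi * t / 3.75))

def gps_error(n_steps, Ts, k_gps=1.0 / 1100.0, sigma_eta=0.21, rng=None):
    """First-order Gauss-Markov GPS position error:
    nu[k+1] = exp(-k_gps * Ts) * nu[k] + eta[k]."""
    rng = rng or np.random.default_rng()
    nu = np.zeros(n_steps)
    a = np.exp(-k_gps * Ts)
    for k in range(n_steps - 1):
        nu[k + 1] = a * nu[k] + rng.normal(0.0, sigma_eta)
    return nu

t = np.arange(0.0, 60.0, 0.1)
ship_surge = surge_motion(t)
gps_noise = gps_error(len(t), Ts=0.1)
```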
Table 1
Simulated ship motion in the Sea State 4 environment (amplitude in m or deg, period in s).

Motion   1st order (Amp., Period)   2nd order (Amp., Period)   3rd order (Amp., Period)
Surge    1,     12                  0.36,  8.75                0.096,  3.75
Sway     0.9,   16.3                0.32,  5                   0.112,  2.5
Heave    1,     11.3                0.33,  6.25                0.12,   3.75
Roll     11,    7.5                 3.97,  3.75                1.28,   1
Pitch    3.1,   6.3                 1.54,  3.75                0.5,    2.25
Yaw      2.14,  12.5                0.69,  6.25                0.24,   3.75

Fig. 20. Experiment results for situation (a): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.

6.1. Adaptive-gain IBVS

To verify the effect of the adaptive IBVS gain, two simulation results, with and without the adaptive gain, are compared. Fig. 12 describes the distance between the center of the features and the center of the image plane (i.e., the value c in Eq. (9)). Whenever features are detected, a red circle is marked on the graph. If the adaptive IBVS gain is not applied, the IBVS control command makes the UAV descend regardless of the horizontal error, so that the features leave the image plane from 15 to 20 s and from 25 to 32 s. On the other hand, with the adaptive IBVS gain, when the features are far from the center of the image plane, a small control input is applied in the altitude direction (i.e., a small descent rate); this allows sufficient time for the features to get closer to the center of the image plane. The simulation results demonstrate that fewer features are missed with the use of the adaptive IBVS gain during the landing process.

6.2. Square compensation for landing pad oscillation

In this subsection, the effect of the square compensation of the features (illustrated in Fig. 4) is verified. The simulation environment is set to a situation where the landing pad performs rotational motion (i.e., rolling, pitching, and yawing) with the amplitudes and periods described in Table 1. The position errors with and without square compensation were compared during the simulation of IBVS level 3 for 30 seconds. As shown in Fig. 13, only a slight position error exists when square compensation is used, whereas the UAV oscillates according to the motion of the landing pad when the compensation is not conducted. The mean values of the horizontal position error with and without compensation are 0.0694 m and 0.2237 m, respectively.

6.3. Landing procedure

In the simulations, the entire landing procedure is conducted to verify the performance of the proposed autonomous landing algorithm, including the approach, FF-IBVS, and landing with the state machine structure. The simulations are conducted ten times, and Fig. 14 presents one sample result, showing the time history of the relevant states and errors. As shown in Fig. 14 (a), in the approach phase, the UAV approaches the ship at a high speed, and at approximately 14 s, the velocity command for FF-IBVS starts to be generated. Note that the horizontal error in Fig. 14 (c) represents the magnitude of the horizontal error with respect to the world coordinate frame, which has no relation to the attitude of the ship. The estimation error of the ship velocity is presented in Fig. 14 (d). The performance of the orientation alignment is shown in Fig. 14 (e), and Fig. 14 (f) shows the attitude log data of the UAV. The initial altitude and horizontal position error are approximately 16.2 m and 67.9 m, respectively. In the approach phase, the UAV approaches 12 m above the ship using the GPS with a horizontal position error of 8 m. In the IBVS level 1 to 3 states, the UAV tracks the ship while decreasing its altitude according to the desired velocity command generated by the FF-IBVS algorithm. Note that, in the final state, the UAV touches down on the ship using only the ship velocity estimated by the GPS, as no camera measurement is available just above the landing pad. Fig. 15 presents the touchdown errors for all simulations, where an average touchdown error of 0.52 m, a standard deviation of 0.15 m, and a maximum touchdown error of 0.77 m are obtained. The parameters related to FF-IBVS for the simulations are presented in Table 2. A movie clip for these simulations can be found at https://www.youtube.com/watch?v=bTBZL4SuFLM.
Fig. 21. Images captured during the experiments for situation (b).

Table 2
Parameters related to FF-IBVS for the simulations.

IBVS Level 1:  λ in Eq. (7) = diag(2.0, 2.0, 5.0, 0, 0, 0.2);  saturation values for v_{d,4DOF} = [1.0 1.0 2.0 0.5];  k in Eq. (9) = 0.002
IBVS Level 2:  λ in Eq. (7) = diag(2.0, 2.0, 5.0, 0, 0, 0.2);  saturation values for v_{d,4DOF} = [1.5 1.5 1.0 0.5];  k in Eq. (9) = 0.002
IBVS Level 3:  λ in Eq. (7) = diag(2.0, 2.0, 8.0, 0, 0, 0.2);  saturation values for v_{d,4DOF} = [3.0 3.0 0.5 0.5];  k in Eq. (9) = 0.002

7. Experiments

To verify the proposed autonomous landing algorithm in the real world, various flight experiments are conducted. The setup for the flight experiments is illustrated in Fig. 16. For the camera and lens, a Teledyne Dalsa Genie Nano C2020 and a 3.5 mm, f/5.6 Cr Series Fixed Focal Length Lens from Edmund Optics are used, respectively. The combination of the selected camera and lens has a horizontal FOV of 102° and a resolution of 2048 × 1536 pixels. To process the GPS data on the landing pad, the Pixhawk 4 (i.e., a commercial autopilot) is used. The image and velocity data of the landing pad acquired from the camera and GPS are transmitted, via Gigabit Ethernet and Wi-Fi communication, to an Nvidia Jetson TX2, which is used as the mission computer (MC) for autonomous landing. At the MC, the AR tags for IBVS are extracted, and the desired velocity command from FF-IBVS is calculated. Note that, since the AR tag extraction takes the most computation time in the entire process, the update rate of FF-IBVS is set by the AR tag extraction speed. In particular, on the TX2 board, the AR tag extraction algorithm can run up to about 12 Hz, so the update rate of the velocity control command generated from FF-IBVS is set to 10 Hz in consideration of computational stability. The generated velocity command is transmitted to the velocity controller of the UAV.

The landing platforms for the flight experiments are designed in two types. The first design comprises a radio-controlled (RC) car with the landing pad, as illustrated in Fig. 17 (a). The GPS on the RC car estimates the velocity of the landing platform, and the data is transmitted to the UAV via a Wi-Fi router. The second design of the landing platform is a combination of a truck and a motion platform, as illustrated in Fig. 17 (b). The motion platform on the truck is utilized to simulate ship motions triggered by sea waves.

The experiments were performed in various situations of gradually increasing complexity: (a) the UAV is equipped with a gimbal system to maintain the camera parallel to the ground at all times, replacing the use of the virtual coordinate frame explained in Section 3.2, while starting from IBVS level 2; (b) simulated 6-DOF ship motions (roll, pitch, yaw, surge, sway, and heave) triggered by sea waves using the motion platform are added to the setup of situation (a); (c) the entire procedure, from the approach to the touchdown phase, is conducted but without the gimbal and 6-DOF ship motions; and (d) simulated ship motions are added to the setup of situation (c). For situations (b) and (d), the amplitude and period of the simulated ship motion are set to the first-order sinusoidal functions of Table 1, and the amplitude of the translational motion is scaled down by 1/10 because of the physical limitation of the motion platform. The reason for using the gimbal system is that, when the image plane transform is applied, the FOV does not always point downwards, so the features of the landing pad can easily leave the FOV; this makes autonomous landing harder.
Fig. 22. Experiment results for situation (b): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.

Fig. 23. Experiment results for situation (c): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.

Table 3
Parameters related to FF-IBVS for the experiments.

IBVS Level 1:  λ in Eq. (7) = diag(4.0, 4.0, 5.0, 0, 0, 0.5);  saturation values for v_{d,4DOF} = [1.5 1.5 0.7 0.5];  k in Eq. (9) = 0.002
IBVS Level 2:  λ in Eq. (7) = diag(2.0, 2.0, 3.0, 0, 0, 0.5);  saturation values for v_{d,4DOF} = [1.5 1.5 0.5 0.6];  k in Eq. (9) = 0.002
IBVS Level 3:  λ in Eq. (7) = diag(2.0, 2.0, 3.0, 0, 0, 0.5);  saturation values for v_{d,4DOF} = [1.2 1.2 0.5 0.6];  k in Eq. (9) = 0.002

Table 4
Experiment results for situation (a): touchdown error.

                      Trial 1   Trial 2   Trial 3   Mean error   Std. deviation
Touchdown error [m]   0.21      0.24      0.26      0.24         0.03

Thus, the experiments are carried out by first using the gimbal system for ease of FF-IBVS implementation and then using the virtual image plane to reduce the weight of the payload by removing the gimbal system. For all experiments, the landing platforms are set to move forward at a speed of approximately 3 to 6 m/s. As illustrated in Fig. 18, in situations (a) and (b), a Tarot X4 equipped with the gimbal camera is used as the UAV, and in situations (c) and (d), a Tarot 650 Pro and a customized UAV are employed, respectively. Here, the customized UAV is a coaxial quadrotor-type UAV with a diagonal length of 1,500 mm and a payload of 5 kg. The parameters for the experiments are presented in Table 3.

For situation (a), the experiments are conducted three times. Images captured during the experiment for situation (a) are shown in Fig. 19. The top left side of each image shows the image obtained from the camera and the result of the AR tag extraction. The UAV starts the autonomous landing when the features are recognized (Fig. 19 (a)), and then the UAV executes IBVS. Figs. 19 (b) and (c) present the captured images when the UAV completes IBVS levels 2 and 3, respectively. When the completion of IBVS level 3 is detected, the UAV decreases its altitude and completes the landing procedure (Fig. 19 (d)). The speed of the landing platform and the altitude of the UAV in situation (a) are presented in Fig. 20 (a) and (b), respectively. The maximum speed of the landing platform is approximately 4.5 m/s. The UAV starts landing at an altitude of 7 m from IBVS level 2. At approximately 7 s, IBVS level 3 commences, and at that time, the altitude of the UAV is approximately 4 m. At approximately 22 s, IBVS level 3 is completed. The time from IBVS level 2 to touchdown is approximately 24 s. Table 4 presents the touchdown error for the experiments of situation (a). From three experiments, a mean touchdown error of 0.24 m, a standard deviation of 0.03 m, and a maximum touchdown error of 0.26 m are obtained.

Situation (b) uses the motion platform simulating the motion of the ship. Fig. 21 presents the images captured during the experiments for situation (b). The UAV employs IBVS to start landing on the landing pad on which the ship motion is simulated (Fig. 21 (a)), and completes IBVS levels 2 and 3 sequentially (Fig. 21 (b) and (c)). When the completion of IBVS level 3 is detected, the UAV decreases its altitude. Fig. 21 (d) presents the captured image of touchdown. Fig. 22 presents the time history of the velocity of the landing platform and the altitude of the UAV. The maximum speed of the landing platform is approximately 5.5 m/s. The altitude of the UAV at the starting time is set to 7 m, the same as in situation (a). The touchdown errors for situation (b) are presented in Table 5. From the three experiments for situation (b), a mean touchdown error of 0.81 m, a standard deviation of 0.13 m, and a maximum touchdown error of 0.95 m are obtained. A movie clip showing the experiments for situations (a) and (b) can be found at https://www.youtube.com/watch?v=TSdlVZ9bgXw.

The experiments using the RC car with the landing pad for situation (c) are also conducted three times. As mentioned earlier, in these experiments, the gimbal is not employed; instead, the virtual coordinate transform is used, which is much more challenging.
Fig. 24. Images captured during the experiments for situation (d).

Fig. 25. Experiment results for situation (d): (a) time history of the velocity of the landing platform; and (b) time history of the altitude of the UAV.

Table 5
Experiment results for situation (b): touchdown error.

                      Trial 1   Trial 2   Trial 3   Mean error   Std. deviation
Touchdown error [m]   0.70      0.77      0.95      0.81         0.13

Table 6
Experiment results for situation (c): touchdown error.

                      Trial 1   Trial 2   Trial 3   Mean error   Std. deviation
Touchdown error [m]   1.1       0.9       0.5       0.83         0.31

Fig. 23 presents a sample result from the three trials, showing the time history of the velocity of the landing platform and the altitudes of the landing platform and the UAV. The landing platform maintains a reference velocity of approximately 5 m/s while the autonomous landing is executed. In the approach phase, the UAV approaches the landing platform. Then, at approximately 14 s, the state enters IBVS level 1, and the UAV starts to decrease its altitude via the IBVS velocity command. Table 6 presents the touchdown errors for all experiments, where a mean touchdown error of 0.83 m, a standard deviation of 0.31 m, and a maximum touchdown error of 1.1 m are obtained. Because the gimbal is not utilized and the RC car with the landing pad moves on a slippery grass field, the touchdown error obtained is larger than that of situation (a).

For situation (d), the most comprehensive experiments are carried out. The entire landing procedure, from the approach phase to landing, is conducted, and the virtual coordinate transform is used while the ship motion is simulated. Fig. 24 shows images captured during the experiment for situation (d). Fig. 25 presents a sample result from the three trials, showing the time history of the velocity of the landing platform and the altitude of the UAV. The maximum speed of the landing platform is approximately 6 m/s, and the initial altitude of the UAV is set to 10 m.
Table 7
Experiment results for situation (d): touchdown error.

                      Trial 1   Trial 2   Trial 3   Mean error   Std. deviation
Touchdown error [m]   0.2       0.1       0.3       0.2          0.1

Table 7 shows the touchdown errors for all experiments, where a mean error of 0.2 m, a standard deviation of 0.1 m, and a maximum error of 0.3 m are obtained. A movie clip showing the experiments for situation (d) can be found at https://www.youtube.com/watch?v=q4PM37adCuk.

8. Conclusions and future work

This study proposed a vision-based autonomous ship deck landing strategy using feed-forward image-based visual servoing (FF-IBVS). Conventional IBVS schemes cannot guarantee the convergence of the position error to zero if the target is not stationary. To resolve this issue, the velocity of the ship was added as a feed-forward term in IBVS. The motion of the ship was measured by the GPS on the ship deck and the camera attached to the UAV, and then estimated via Kalman filtering. In addition, an adaptive IBVS gain was used such that the features remained in the FOV, and the features were fitted to a square to avoid the tilt effects of the ship. To accomplish the entire landing procedure autonomously, a landing scheme in the form of a state machine structure, including the approach, three IBVS levels according to the relative altitude between the ship and the UAV, hold, rising, and landing states, was designed. The proposed autonomous landing algorithm was verified via various simulations and real flight experiments.

In the method proposed in this paper, the use of an additional sensor (i.e., GPS on the ship deck) was necessary to estimate the velocity of the ship accurately. However, the sensor attached to the ship could complicate the system, and if issues such as communication failure occur, autonomous landing might not be possible. Therefore, as future work, a method for estimating the velocity of a ship without the aid of GPS will be investigated. In addition, to cope with strong wind and gusts in harsh marine environments, robust control techniques (e.g., sliding mode control and/or a disturbance observer) will be considered.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

The authors are unable or have chosen not to specify which data has been used.

Acknowledgement

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03040570), Development of Drone System for Ship and Marine Mission of Civil Military Technology Cooperation Center (18-CM-AS-22), and Operation of an Unmanned Aerial System for a VTOL funded by the Agency of Defense Development (111885-912828001).

References

[1] S. Kim, J. Park, D. Han, E. Kim, D. Lee, Development of a vision-based recognition and position measurement system for cooperative missions of multiple heterogeneous unmanned vehicles, Int. J. Aeronaut. Space Sci. 22 (2) (2021) 468–478.
[2] Q. Wang, Y. Zhang, Heterogeneous sensor-based target tracking with constant time delay, Int. J. Aeronaut. Space Sci. 22 (1) (2021) 186–194.
[3] A. Nayyar, B.-L. Nguyen, N.G. Nguyen, The internet of drone things (IODT): future envision of smart drones, in: First International Conference on Sustainable Technologies for Computational Intelligence, Springer, 2020, pp. 563–580.
[4] A. Kumar, R. Krishnamurthi, A. Nayyar, A.K. Luhach, M.S. Khan, A. Singh, A novel software-defined drone network (SDDN)-based collision avoidance strategies for on-road traffic monitoring and management, Veh. Commun. 28 (2021) 100313.
[5] N.A. Khan, N.Z. Jhanjhi, S.N. Brohi, A. Nayyar, Emerging use of UAV's: secure communication protocol issues and challenges, in: Drones in Smart-Cities, Elsevier, 2020, pp. 37–55.
[6] T. Dautermann, B. Korn, K. Flaig, M.U. de Haag, GNSS double differences used as beacon landing system for aircraft instrument approach, Int. J. Aeronaut. Space Sci. 22 (6) (2021) 1455–1463.
[7] Y. Kang, B. Park, A. Cho, C. Yoo, Y. Kim, S. Choi, S. Koo, S. Oh, A precision landing test on motion platform and shipboard of a tilt-rotor UAV based on RTK-GNSS, Int. J. Aeronaut. Space Sci. 19 (4) (2018) 994–1005.
[8] J.S. Wynn, T.W. McLain, Visual servoing with feed-forward for precision shipboard landing of an autonomous multirotor, in: 2019 American Control Conference, IEEE, 2019, pp. 3928–3935.
[9] Manual on Codes – International Codes, Volume I.3 – Annex II to the WMO Technical Regulations: Part D – Representations derived from data models.
[10] N. Shahriari, S. Fantasia, F. Flacco, G. Oriolo, Robotic visual servoing of moving targets, in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 77–82.
[11] I. Siradjuddin, L. Behera, T.M. McGinnity, S. Coleman, Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller, IEEE/ASME Trans. Mechatron. 19 (2) (2013) 512–523.
[12] J.P. Alepuz, M.R. Emami, J. Pomares, Direct image-based visual servoing of free-floating space manipulators, Aerosp. Sci. Technol. 55 (2016) 1–9.
[13] S. Hutchinson, G.D. Hager, P.I. Corke, A tutorial on visual servo control, IEEE Trans. Robot. Autom. 12 (5) (1996) 651–670.
[14] F. Chaumette, S. Hutchinson, Visual servo control. I. Basic approaches, IEEE Robot. Autom. Mag. 13 (4) (2006) 82–90.
[15] S. Jung, K.B. Ariyur, Automated wireless recharging for small UAVs, Int. J. Aeronaut. Space Sci. 18 (3) (2017) 588–600.
[16] C. Chen, S. Chen, G. Hu, B. Chen, P. Chen, K. Su, An auto-landing strategy based on pan-tilt based visual servoing for unmanned aerial vehicle in GNSS-denied environments, Aerosp. Sci. Technol. (2021) 106891.
[17] S. Yang, J. Ying, Y. Lu, Z. Li, Precise quadrotor autonomous landing with SRUKF vision perception, in: 2015 IEEE International Conference on Robotics and Automation, 2015, pp. 2196–2201.
[18] W. Zhao, H. Liu, X. Wang, Robust visual servoing control for quadrotors landing on a moving target, J. Franklin Inst. 358 (4) (2021) 2301–2319.
[19] J.J. Acevedo, M. García, A. Viguria, P. Ramón, B.C. Arrue, A. Ollero, Autonomous landing of a multicopter on a moving platform based on vision techniques, in: Iberian Robotics Conference, Springer, 2017, pp. 272–282.
[20] D. Falanga, A. Zanchettin, A. Simovic, J. Delmerico, D. Scaramuzza, Vision-based autonomous quadrotor landing on a moving platform, in: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics, 2017, pp. 200–207.
[21] R.O. de Santana, L.A. Mozelli, A.A. Neto, Vision-based autonomous landing for micro aerial vehicles on targets moving in 3D space, in: 2019 19th International Conference on Advanced Robotics, IEEE, 2019, pp. 541–546.
[22] E. Rohmer, S.P. Singh, M. Freese, V-REP: a versatile and scalable robot simulation framework, in: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1321–1326.
[23] Z. Tang, R. Cunha, D. Cabecinhas, T. Hamel, C. Silvestre, Quadrotor going through a window and landing: an image-based visual servo control approach, Control Eng. Pract. 112 (2021) 104827.
[24] T. Hamel, R. Mahony, Visual servoing of an under-actuated dynamic rigid-body system: an image-based approach, IEEE Trans. Robot. Autom. 18 (2) (2002) 187–198.
[25] N. Guenard, T. Hamel, R. Mahony, A practical visual servo control for an unmanned aerial vehicle, IEEE Trans. Robot. 24 (2) (2008) 331–340.
[26] D. Lee, H. Lim, H.J. Kim, Y. Kim, K.J. Seong, Adaptive image-based visual servoing for an underactuated quadrotor system, J. Guid. Control Dyn. 35 (4) (2012) 1335–1353.
[27] D. Lee, T. Ryan, H.J. Kim, Autonomous landing of a VTOL UAV on a moving platform using image-based visual servoing, in: 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 971–976.
[28] P. Serra, R. Cunha, T. Hamel, D. Cabecinhas, C. Silvestre, Landing on a moving target using image-based visual servo control, in: 53rd IEEE Conference on Decision and Control, 2014, pp. 2179–2184.
[29] Q.H. Truong, T. Rakotomamonjy, A. Taghizad, J.-M. Biannic, Vision-based control for helicopter ship landing with handling qualities constraints, IFAC-PapersOnLine 49 (17) (2016) 118–123.
[30] T. Rakotomamonjy, Q.H. Truong, Helicopter ship landing using visual servoing on a moving platform, IFAC-PapersOnLine 50 (1) (2017) 10507–10512.
[31] I. Borshchova, S. O'Young, Visual servoing for autonomous landing of a multirotor UAS on a moving platform, J. Unmanned Veh. Syst. 5 (1) (2016) 13–26.
[32] I. Borshchova, S. O'Young, Marker-guided auto-landing on a moving platform, Int. J. Intell. Unmanned Syst. (2017).
[33] Y. Bar-Shalom, L. Campo, The effect of the common process noise on the two-sensor fused-track covariance, IEEE Trans. Aerosp. Electron. Syst. 6 (1986) 803–805.
[34] R.W. Beard, T.W. McLain, Small Unmanned Aircraft, Princeton University Press, 2012.
[35] T. Perez, M. Blanke, Simulation of ship motion in seaway, Technical Report, The University of Newcastle, Callaghan, Australia, 2002, pp. 1–13.
[36] J.L. Sanchez-Lopez, S. Saripalli, P. Campoy, J. Pestana, C. Fu, Toward visual autonomous ship board landing of a VTOL UAV, in: 2013 International Conference on Unmanned Aircraft Systems, IEEE, 2013, pp. 779–788.
[37] M.A. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
[38] S. Garrido-Jurado, R. Muñoz-Salinas, F.J. Madrid-Cuevas, M.J. Marín-Jiménez, Automatic generation and detection of highly reliable fiducial markers under occlusion, Pattern Recognit. 47 (6) (2014) 2280–2292.
[39] J. Rankin, An error model for sensor simulation GPS and differential GPS, in: Proceedings of IEEE Position, Location and Navigation Symposium, PLANS'94, 1994, pp. 260–266.