
High Accuracy Active Stand-off Target Geolocation Using UAV Platform

Xiwen Yang1*, Defu Lin1, Fubiao Zhang1, Tao Song1, Tao Jiang1
1 Institute of UAV Autonomous Control, School of Aerospace Engineering, Beijing Institute of Technology, Beijing, China
*Email address: xwyang@bit.edu.cn

Abstract—A method for the high-accuracy geolocation of a distant stationary ground target using a UAV platform is developed. The UAV is equipped with an electro-optical device and a laser rangefinder. Using the attitude and position of the UAV, the LOS angles, and the relative distance between the target and the UAV, the target is localized in world coordinates. Compared with the one-shot localization method, the proposed method employs multiple angle and range measurements of the target to mitigate the random noise of the sensors. The target geolocation is calculated by weighted least-squares estimation after data collection. The main contribution of this paper is that the mounting error between the high-accuracy navigation module and the electro-optical device, which seriously degrades the localization result if not calibrated, is estimated as a byproduct. The simulation results show that the target is localized with an accuracy of 10 meters when the UAV is 4000 meters away from it.

Keywords—UAV, active localization, electro-optical payload, stand-off

I. INTRODUCTION

Unmanned aerial vehicles (UAVs) are currently being developed and widely used in military and civilian applications because of their flexibility and mobility [1]. A key technology being explored in UAV systems is the detection and high-accuracy localization of a stationary or moving ground target [2]. In many cases, the electro-optical payload, which contains a gimbaled camera, is one of the common sensors employed on UAVs to acquire information about the target and provide the line-of-sight (LOS) angles used to localize it [3]. The electro-optical payload is controlled by an operator via the ground control station to search for and select the target while the UAV flies along a given trajectory.

Electro-optical target localization with a UAV can be divided into passive localization and active localization. Passive localization obtains the target coordinates from the image or bearing information of the target. The main types include map registration [4], which matches images taken in the air against a pre-existing reference map; here the accuracy of the target geolocation depends on the precision of the map, which may be limited. In environments where the background is monotonous, it is difficult to extract features from the image and apply an image-matching algorithm, so this kind of method may become invalid [5]. Another type of passive positioning determines the target coordinates from the position and attitude of the UAV, the camera focal parameters, and the camera bearings by solving the collinearity equations. This method is low-cost and applicable in short-range scenarios, but when a long stand-off distance is required its accuracy can be poor, because a tiny bias or error in the angle measurements leads to a large error in the localization result. The observability problem is also complicated in bearing-only estimation [6]. In active localization, apart from the gimbaled camera, the system is aided by a laser rangefinder, which measures the range between the target and the UAV after the target has been selected; a high-precision inertial navigation module is also required. As a result, the accuracy of active localization is generally higher than that of passive localization. Some related works assume that the camera and body axis frames are aligned [7]. In practical applications, however, the mounting errors between the navigation module and the base of the gimbaled camera can introduce a large bias into the result, so it is important to identify and calibrate them.

In this paper, a method for the high-accuracy active geolocation of a stationary ground target in a stand-off situation using a UAV platform is developed. Considering the mounting error between the high-precision navigation module and the electro-optical payload, we propose a weighted linear regression algorithm to jointly estimate the mounting error and the target position. Once the mounting error is obtained and calibrated, the one-shot method can be applied to localize a target quickly, and its accuracy is greatly enhanced.

The rest of this paper is organized as follows. Section II presents the geometry of active target localization. Section III describes the linear regression scheme for mounting error and target location estimation. Simulation results are shown in Section IV, where the effect of mounting error calibration on the one-shot method is also presented. Conclusions are given in Section V.

II. GEOMETRY

The UAV active geolocation model is presented in Fig. 1. Four coordinate frames are used in this problem. The local NED system is chosen as the inertial system, with axes denoted by xn, yn, zn. The UAV body system is denoted by xb, yb, zb, with its origin at the mass center of the UAV. The gimbal base system, denoted by xg, yg, zg, is fixed to the body system; these two frames would be completely aligned if there were no mounting error between them. The camera coordinate system is not shown in the figure; its x-axis, xc, points along the line of sight, i.e., the optical axis. There is no need to define the y- and z-axes of the camera system, because the line of sight has no projection on the camera y-o-z plane.

The human operator identifies and selects the ground target of interest via the ground control station, and the rangefinder starts working after the target is selected. During the data collection process, the image processing system tracks the target autonomously, and it is assumed that the vision-based tracking algorithm is robust enough that the target stays locked throughout the data collection period.

978-1-7281-2345-5/19/$31.00 ©2019 IEEE

Authorized licensed use limited to: Cornell University Library. Downloaded on August 27,2020 at 13:17:59 UTC from IEEE Xplore. Restrictions apply.
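To make the frame chain of Section II concrete, the following Python snippet sketches the one-shot geolocation of (2): the range vector [L, 0, 0]^T is rotated from the camera frame through the gimbal base and body frames into NED and added to the UAV position. This is an illustrative sketch under the rotation factorizations of (3) and (4), not the authors' implementation; all function names are our own.

```python
import numpy as np

def rot_y(a):
    """Rotation matrix about the y-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def geolocate_one_shot(uav_ned, C_n_b, los_az, los_el,
                       mount_yaw, mount_pitch, rng):
    """One-shot target geolocation, cf. (2): project the range vector
    [L, 0, 0] from the camera frame to NED and add the UAV position."""
    C_b_g = rot_y(mount_yaw) @ rot_z(mount_pitch)  # gimbal-base mounting, cf. (3)
    C_g_c = rot_y(los_az) @ rot_z(los_el)          # LOS rotation, cf. (4)
    return uav_ned + C_n_b @ C_b_g @ C_g_c @ np.array([rng, 0.0, 0.0])
```

With identity attitude and all angles zero, the target lies L meters north of the UAV at the same altitude, which serves as a quick sanity check.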
Fig. 1. UAV active geolocation model (showing the NED axes x_n, y_n, z_n; the body axes x_b, y_b, z_b; the gimbal base axes x_g, y_g, z_g; the vectors R_U, R, R_T; and the LOS angles α, β).

As shown in Fig. 1, the relationship between the target and the UAV location is described as

R_T = R_U + R    (1)

Hence, by projecting (1) onto the NED frame, we obtain

R_T^n = R_U^n + C_b^n C_g^b C_c^g R^c    (2)

where R^c = [L, 0, 0]^T is the target coordinate in the camera frame and L is the distance between the UAV and the target.

The matrix C_g^b = C_g^b(ξ, ς) is given in (3), where ξ and ς are the yaw and pitch mounting angles of the gimbal base, respectively. Because the camera is pitch-yaw gimbaled, and for simplicity, we omit the influence of the roll mounting angle in this paper.

C_g^b = \begin{bmatrix} \cos\xi & 0 & \sin\xi \\ 0 & 1 & 0 \\ -\sin\xi & 0 & \cos\xi \end{bmatrix} \begin{bmatrix} \cos\varsigma & -\sin\varsigma & 0 \\ \sin\varsigma & \cos\varsigma & 0 \\ 0 & 0 & 1 \end{bmatrix}    (3)

The transformation matrix between the camera frame and the gimbal base frame is a function of the LOS angles:

C_c^g = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix}    (4)

where α is the bearing LOS angle and β is the elevation LOS angle of the target, as shown in Fig. 1.

III. PARAMETER ESTIMATION

In order to estimate the coordinates of the target and the mounting angles, we use the weighted least-squares method to process the data collected during the UAV flight. The estimation process is as follows.

Equation (2) can be written as

\begin{bmatrix} x_T \\ y_T \\ z_T \end{bmatrix} = \begin{bmatrix} x_U \\ y_U \\ z_U \end{bmatrix} + C_b^n C_g^b C_c^g \begin{bmatrix} L \\ 0 \\ 0 \end{bmatrix}    (5)

where [x_T, y_T, z_T]^T are the coordinates of the target to be estimated and [x_U, y_U, z_U]^T is the position of the UAV measured by the onboard GPS receiver.

The parameters to be estimated are

Θ = [Θ_1^T, Θ_2^T]^T    (6)

where Θ_1 = [x_T, y_T, z_T]^T and Θ_2 = [ξ, ς]^T is the mounting angle between the navigation module, which provides the UAV attitude, and the base of the gimbaled camera.

The measurement variable vector is

y = [x_U, y_U, z_U, α, β, ϕ, ϑ, ψ, L]^T    (7)

The actual measurement z is

z = y + v,    v ~ N(0, R_0)    (8)

where v is Gaussian white measurement noise with 9×9 covariance matrix R_0.

Then (5) becomes

Θ_1 = f(y, Θ_2) = f(z − v, Θ_2)    (9)

where f is the nonlinear function defined by (5).

Assuming that the measurement noise of the sensors and the mounting angles are small, a first-order Taylor expansion turns (9) into

Θ_1 = f(z − v, Θ_2) ≈ f(z, 0) − (∂f/∂y)|_{z,0} v + (∂f/∂Θ_2)|_{z,0} Θ_2    (10)

Moving the unknown parameters to the right-hand side of the equation gives

f(z, 0) ≈ Θ_1 + (∂f/∂y)|_{z,0} v − (∂f/∂Θ_2)|_{z,0} Θ_2    (11)

Suppose N groups of data are collected. Stacking them in the form of (11) yields

\begin{bmatrix} f(z_1, 0) \\ \vdots \\ f(z_N, 0) \end{bmatrix} = \begin{bmatrix} I_3 & -(∂f/∂Θ_2)|_{z_1,0} \\ \vdots & \vdots \\ I_3 & -(∂f/∂Θ_2)|_{z_N,0} \end{bmatrix} Θ + V    (12)

where V is the equation error with covariance

R = Diag( { (∂f/∂y)|_{z_k,0} R_0 ((∂f/∂y)|_{z_k,0})^T }_{k=1}^{N} )    (13)

Since the sensors used in active localization are generally of high accuracy, the assumptions leading to (12) are reasonable, and the target localization problem is thus converted into a linear estimation problem. Using the weighted least-squares method in [7], the unbiased parameter estimate is

Θ̂ = \left[ \sum_{k=1}^{N} \begin{bmatrix} I_3 \\ B_k^T \end{bmatrix} (A_k R_0 A_k^T)^{-1} \begin{bmatrix} I_3 & B_k \end{bmatrix} \right]^{-1} \sum_{k=1}^{N} \begin{bmatrix} I_3 \\ B_k^T \end{bmatrix} (A_k R_0 A_k^T)^{-1} f(z_k, 0)    (14)

where A_k and B_k are the partial derivatives with respect to the measurements and the mounting angles, respectively:

A_k = (∂f/∂y)|_{z_k,0},    B_k = −(∂f/∂Θ_2)|_{z_k,0}    (15)

IV. SIMULATION RESULTS

A simulation is created in this section to demonstrate the effectiveness of the proposed target localization method.

In order to mitigate the adverse effect of random measurement errors, the UAV is assumed to follow a circular trajectory around a ground target located at the origin. The UAV's velocity is 8 m/s, and the constant altitude of the flight is 300 meters above sea level. The position of the UAV is measured by a differential GPS receiver with a horizontal standard deviation of σ_x = σ_y = 0.6 m and a vertical error of σ_z = 1 m. Considering the stand-off requirement in military use, the radius of the trajectory is R_U = 4000 m to keep the UAV away from potential attack. The error of the range measurement is σ_L = 5 m, and the LOS angle error of the electro-optical payload is σ_α = σ_β = 0.7°. The attitude uncertainty of the navigation module is σ_ϕ = σ_ϑ = 0.1° and σ_ψ = 0.3°. The UAV is assumed to head towards the target during the flight, and the nominal values of the pitch and roll angles are ϑ_n = −1° and ϕ_n = 5°. The actual mounting angles between the electro-optical payload and the navigation module are ξ = −3° and ς = 1°. Because the detection frequency of the laser rangefinder is limited, the data acquisition frequency is set to 1 Hz.
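Before turning to the results, the stacked weighted least-squares solve of (14) can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the Jacobians A_k and B_k are taken as given inputs, R_0 is the 9×9 measurement covariance, and the unknown is the stacked 5-vector Θ = [x_T, y_T, z_T, ξ, ς].

```python
import numpy as np

def wls_estimate(f_vals, A_list, B_list, R0):
    """Weighted least-squares solve of the stacked system (12):
    f(z_k, 0) = [I3  B_k] @ Theta + V_k, with cov(V_k) = A_k R0 A_k^T.
    Returns the 5-vector Theta_hat, cf. (14)."""
    M = np.zeros((5, 5))
    rhs = np.zeros(5)
    for f_k, A_k, B_k in zip(f_vals, A_list, B_list):
        H_k = np.hstack([np.eye(3), B_k])      # row block [I3  B_k] of (12)
        W_k = np.linalg.inv(A_k @ R0 @ A_k.T)  # per-sample weight, cf. (13)
        M += H_k.T @ W_k @ H_k
        rhs += H_k.T @ W_k @ f_k
    return np.linalg.solve(M, rhs)
```

On noise-free synthetic data (each f_k built exactly as [I_3 B_k]Θ), the estimator recovers Θ exactly, which is a useful sanity check before adding the simulated sensor noise.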
A. Linear Regression Results

The UAV's longitude, latitude, and altitude information is generated by ground station software, and the trajectory in the local NED coordinate frame is shown in Fig. 2.

Fig. 2. UAV trajectory

The target is located at the origin, marked by the red asterisk. The entire circle contains 3140 sampling points. Each time before the parameter estimation (14) is performed, a trajectory like that in Fig. 2 and other sensor data with random noise are generated. The results of 100 Monte Carlo simulation runs are shown in Fig. 3, where the CEP of the localization error at each sampling point is presented.

Fig. 3. Convergence of the target localization horizontal error

As shown in Fig. 3, the horizontal error converges to 10 meters after 180 s. The mounting angle estimates are shown in Fig. 4 and Fig. 5.

Fig. 4. Yaw mounting angle estimate

Fig. 5. Pitch mounting angle estimate

In Fig. 4, the yaw mounting angle converges to its true value, and its convergence is also faster than that of the pitch mounting angle in Fig. 5. Because the altitude of the UAV trajectory is constant, the pitch mounting angle is not sufficiently excited and its estimate does not converge well [8]. This problem involves the design and optimization of the trajectory, which is not the major point of this article.

Using the proposed method, the localization error reaches an accuracy of 10 meters CEP within 3 minutes. The byproduct, the estimated mounting angles, can be used in one-shot target localization to enhance its accuracy, as the next part of this section shows.
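The CEP reported above can be estimated empirically from Monte Carlo runs as the radius containing half of the horizontal misses; the median miss distance is one common estimator. The following is our own minimal sketch, not the authors' code:

```python
import numpy as np

def cep(est_positions, true_position):
    """Empirical circular error probable: the radius containing 50% of
    the horizontal (north-east) errors across Monte Carlo runs."""
    est = np.asarray(est_positions, dtype=float)
    err = est[:, :2] - np.asarray(true_position, dtype=float)[:2]
    radii = np.hypot(err[:, 0], err[:, 1])  # horizontal miss distances
    return np.median(radii)
```

For example, five runs with horizontal miss distances of 1 m to 5 m give a CEP of 3 m regardless of the altitude errors.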

B. One-shot Localization Results

In some battlefield situations it is risky to fly around the target for a long time to locate it, and the one-shot method described by (2) has to be used instead. However, if the mounting angles are not identified and calibrated, the localization error can be large, because the installation may not be accurate and, at long stand-off distances, a tiny error in the angle measurement causes a large divergence in the geolocation result. The mounting angles estimated in the previous part are therefore used to improve the performance of the one-shot method and achieve fast, relatively accurate localization of the ground target.

The estimated mounting angles from Fig. 4 and Fig. 5, ξ̂ = 2.916° and ς̂ = 1.482° respectively, are used to enhance the accuracy of the one-shot method.

Fig. 6. Single-shot target localization with mounting angle calibration

Fig. 6 shows the CEP of 100 Monte Carlo runs of the one-shot localization model with and without mounting error calibration. The horizontal localization error decreases by nearly 200 m after mounting angle identification. Therefore, before a one-shot target localization mission is executed, the proposed method can be performed to improve the localization accuracy.

V. CONCLUSIONS

In active target geolocation using a UAV platform at a long stand-off distance, the proposed method can be used to enhance the localization accuracy. The mounting angles between the navigation module and the electro-optical payload are identified as a byproduct and improve the one-shot localization accuracy.

As the simulation results show, the estimated pitch mounting angle cannot converge to its true value, which affects the altitude accuracy of the target localization. In our further study, the trajectory of the UAV will be designed to speed up the convergence of the mounting angle estimates.

REFERENCES
[1] Ousingsawat, J. and Campbell, M. E., "Optimal Cooperative Reconnaissance Using Multiple Vehicles," Journal of Guidance, Control, and Dynamics, vol. 30, no. 1, pp. 122-132, 2007.
[2] Barber, D. B., Redding, J. D., McLain, T. W., et al., "Vision-based Target Geo-location Using a Fixed-wing Miniature Air Vehicle," Journal of Intelligent and Robotic Systems, vol. 47, no. 4, pp. 361-382, 2006.
[3] Cheng, X., Daqing, H., and Wei, H., "High Precision Passive Target Localization Based on Airborne Electro-optical Payload," International Conference on Optical Communications and Networks, IEEE, 2015.
[4] Conte, G., Hempel, M., Rudol, P., et al., "High Accuracy Ground Target Geo-location Using Autonomous Micro Aerial Vehicle Platforms," AIAA Guidance, Navigation and Control Conference and Exhibit, 2013.
[5] Rathinam, S., Kim, Z., Soghikian, A., et al., "Vision Based Following of Locally Linear Structures Using an Unmanned Aerial Vehicle," IEEE Conference on Decision and Control / European Control Conference (CDC-ECC '05), IEEE, 2005.
[6] Corbets, J., "Real-Time Trajectory Generation for Target Localization Using Micro Air Vehicles," Journal of Aerospace Computing, Information, and Communication, vol. 7, no. 8, pp. 1-26, 2012.
[7] Pachter, M., Ceccarelli, N., and Chandler, P., "Vision-Based Target Geo-Location Using Feature Tracking," AIAA Guidance, Navigation and Control Conference and Exhibit, 2007.
[8] Ponda, S. S., "Trajectory Optimization for Target Localization Using Small Unmanned Aerial Vehicles," 2008.
