1 Introduction
Some governmental and European projects have been launched to identify, improve
and integrate technologies with the objective of building an efficient counter-drone
system (commonly called C-UAV for Counter-Unmanned Aerial Vehicle). An example of a
C-UAV system is BOREADES [1] (Fig. 1), which was launched in 2015 and introduced
for the first time the use of panoramic thermal sensors in a C-UAV system. This project,
funded by the French national research agency (ANR) [2], had as its objective the
development and integration of a fully operational C-UAV system, including detection,
localization, decision and neutralization. In this paper we focus on the use of panoramic
thermal sensors and particularly on methods for detection and localization of UAVs.
Some of the key challenges for designing such a system are listed below:
Image Definition. As most drones are both small and not very hot, a panoramic
thermal imaging camera applied to a C-UAV system needs to reach high performance
through: (a) high image definition, and (b) high thermal sensitivity.
Image Refresh Period. UAVs are highly dynamic and responsive. Beyond detection,
tracking performance is also a key indicator of overall system performance. Tracking
© Springer Nature Switzerland AG 2019
D. Tzovaras et al. (Eds.): ICVS 2019, LNCS 11754, pp. 754–767, 2019.
https://doi.org/10.1007/978-3-030-34995-0_69
UAV Localization Using Panoramic Thermal Cameras 755
of UAVs requires keeping the distance between two consecutive positions as short as
possible, to limit the number of potential candidates for each track and thus keep
mistracking as low as possible. A good compromise has been identified at a panoramic
image refresh rate of 2 Hz.
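As a rough illustration of why the refresh rate matters for track gating, the sketch below (the speeds and rates are illustrative values, not from the paper) computes how far a target can travel between two consecutive panoramic images:

```python
# Sketch: displacement of a UAV between two panoramic refreshes.
# Assumes straight-line flight; speeds are illustrative values.

def displacement_per_frame(speed_kmh: float, refresh_hz: float) -> float:
    """Distance (m) traveled by a target between two consecutive images."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms / refresh_hz        # meters per refresh period

# At the 2 Hz rate retained in the text, a 60 km/h UAV moves about
# 8.3 m between frames; halving the rate doubles the gating distance.
for hz in (1.0, 2.0, 4.0):
    print(f"{hz:.0f} Hz -> {displacement_per_frame(60.0, hz):.1f} m per frame")
```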
Thermal panoramic imaging systems can be designed using two different approaches:
(a) fixed head: multiple detectors and lenses, and (b) rotating head: single detector and
lens. In Fig. 2, an example of panoramic thermal cameras based on
both the fixed head and the rotating head is presented. On the left part of Fig. 2 is the
fixed-head, multiple-detector VisionView 180 from PureTech Systems [3]. This camera
is composed of 3 individual detectors and lenses. That approach is valid for uncooled
detectors, which have a much lower price and do not require any regular maintenance. The
problem with uncooled detectors is that, due to their limited performance, they are not
756 A. Thomas et al.
applicable to UAV surveillance (thermal resolution ten times lower than that of cooled detectors).
The price of a panoramic cooled system based on that architecture would be so high, and its
reliability so compromised, that it has never been retained by any manufacturer. On the right
part of Fig. 2, the Spynel S2000 from HGH, one of the current panoramic thermal camera
manufacturers, is presented [4]. Their cameras are based on the second principle (single detector
and rotating head).
Fig. 2. VisionView 180 (Puretech Systems) on the left and Spynel S2000 (HGH Infrared Systems)
on the right.
Having a rotating head may seem the obvious choice, but it will inevitably degrade the
image quality if nothing is done to compensate for the movement. An example of image
quality degradation, from two images captured by the same camera rotating at 720°/s
with and without movement compensation, is depicted in Fig. 3.
The image degradation can be roughly quantified from the data in Table 1.
From the data in the table, during the 3 ms of image capture, the camera
head rotates by 2.16°. Considering the detector aspect ratio (1280 px/720 px = 1.77),
the angular sector covered by an individual image will be 20°/1.77 = 11.25°.
As the detector horizontal resolution is 720 px for 11.25°, the head movement will
smear the image over 2.16° × 720 px/11.25° = 138.24 px. As a conclusion, it is mandatory
to cancel this effect in order to make the camera usable for any application. Another key
element in the design of such an optical system is the ability to generate highly resolved
images while the camera head is in constant rotation. This requires a custom lens
design. The optical design must also consider the temporal aspect of the counter-rotation
system in the simulation.
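The smear figure above can be reproduced with a short sketch (the numbers are those given in the text; the function itself is only an illustration):

```python
# Sketch: motion smear (in pixels) for a rotating-head camera with no
# counter-rotation compensation. Values from the text: 720 deg/s rotation,
# 3 ms exposure, 720 px covering an 11.25 deg sector.

def smear_px(rot_speed_dps: float, exposure_s: float,
             px: int, fov_deg: float) -> float:
    """Pixels swept by the head rotation during one exposure."""
    rotation_deg = rot_speed_dps * exposure_s   # head motion during capture
    return rotation_deg * px / fov_deg          # convert degrees to pixels

print(smear_px(720.0, 0.003, 720, 11.25))  # -> 138.24
```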
r = 1.22 × λ × N (1)
where r is the diameter (not radius) of the Airy disk, λ the considered wavelength, and N the
optical aperture number (f-number) of the lens. Considering the average wavelength of the
detector sensitivity (λ = 4 μm), and the fact that almost all lenses have an optical aperture of
N = 4, the Airy disk will have a standard diameter of r = 19.52 μm. It is obvious that a
detector with a pixel size of 10 μm coupled with a lens of such limited resolution will not
help to generate the expected images. As a conclusion, when using 10 μm detectors in
the MWIR, lenses must ideally have an optical aperture of 2.
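Equation (1) can be checked numerically; the sketch below matches the f-number to the 10 μm pixel pitch (the relation itself is the standard diffraction-limit formula):

```python
# Sketch: Airy disk diameter r = 1.22 * lambda * N (Eq. 1)
# for an MWIR detector (lambda = 4 um) at different f-numbers.

def airy_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Diffraction-limited Airy disk diameter in micrometers."""
    return 1.22 * wavelength_um * f_number

print(airy_diameter_um(4.0, 4.0))  # -> 19.52 (too large for 10 um pixels)
print(airy_diameter_um(4.0, 2.0))  # -> 9.76  (well matched to 10 um pixels)
```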
Fig. 4. 640 × 512 MWIR detector (left image) and High resolution 1280 × 720 Daphnis HD
(Sofradir) detector (right image)
One camera introducing the above concepts is manufactured by HGH INFRARED SYSTEMS
under the reference SP-X3500. This camera is able to produce rich and contrasted
images while scanning the environment at 720°/s. Below are two pictures captured with
the same field of view and the same distance from a house. On the left part of Fig. 6, the
sensor resolution is 640 × 512, while on the right part of the figure the latest detector with
1280 × 720 px is used.
Fig. 6. Image produced by SP-S2000 (HGH) on the left and by SP-X3500 (HGH) on the right.
3 Target Localization
Range estimation methods are commonly divided into two main categories: active ones
and passive ones.
Range estimation by active methods is based on the emission of a pulse (RF, light,
microwave) and the computation of the time needed to receive the echo. The radar-based
example is depicted in Fig. 7. One problem with active solutions like radars concerns the
authorization to use such systems in urban environments. Secondly, radars cannot provide an
image of the environment, making target characterization more difficult. Last but not
least, those systems are not covert and can be detected, which in certain situations is
not acceptable.
The use of thermal imaging devices is an efficient way to detect and track flying objects
while staying fully passive. Like common radars, panoramic thermal cameras (Fig. 8)
can be used as part of the detection means of an anti-drone system and can be connected
to effectors (Jammer, Spoofing engine) to neutralize unauthorized UAVs. Panoramic
thermal cameras usually rotate continuously and provide high resolution images of the
environment. Using some image processing algorithms, they can detect objects of interest
in the scene and provide their respective Azimuth and Elevation (Fig. 9). The distance
will remain unknown as long as the target has no contact with the ground.
One of the challenging questions when trying to localize targets from their position
within images is how to properly associate the sets of targets detected by the different
sensors. This is critical as most thermal panoramic cameras will not provide a very
resolved picture of the targets. Several precautions can be taken, namely: (a) ensure that
the distance between the lines of sight of the same target seen from the multiple sensors
is small enough (Fig. 10); (b) ensure that the speed vector of the localized target
is consistent with the predicted trajectory, and reject the pairing otherwise; or (c)
ensure that the attributes of the target are coherent with the camera specifications and
the estimated distance.
In Fig. 11, we illustrate the fact that attribute consistency is important for the
object pairing function. In the example, the UAV has physically the same size but pro-
duces a larger image on Camera 1 due to its proximity to this camera (we consider
that both cameras have the same resolution). Therefore, a bird flying close to Camera 2
produces the same image size as the UAV on Camera 1. As the green UAV, the green
bird and Camera 2 are all almost aligned, there is a risk of confusion if nothing is done
to avoid this problem.
Fig. 11. Importance of taking the object attribute (like the size) in the object pairing function.
(Color figure online)
d = d1 + d2 (2)
tan ϕ1 = d1 / D (3)
tan ϕ2 = d2 / D (4)
Considering that optical axes are parallel, the distance of the object can then be
calculated as:
D = d / (tan ϕ1 + tan ϕ2) (5)
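The parallel-axis triangulation of Eq. (5) is straightforward to sketch (the numerical example below is illustrative, not from the paper):

```python
import math

# Sketch: 2D triangulation with parallel optical axes (Eq. 5).
# d is the baseline between the two cameras; phi1 and phi2 are the
# angles of the target's line of sight measured from each optical axis.

def triangulate_distance(d: float, phi1_deg: float, phi2_deg: float) -> float:
    """Target distance D = d / (tan(phi1) + tan(phi2))."""
    return d / (math.tan(math.radians(phi1_deg)) +
                math.tan(math.radians(phi2_deg)))

# Illustrative example: 100 m baseline, target centered 500 m away,
# so each camera sees it offset by atan(50/500) from its axis.
phi = math.degrees(math.atan(50.0 / 500.0))
print(round(triangulate_distance(100.0, phi, phi), 1))  # -> 500.0
```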
3D Localization. In three dimensions and in the general case (Fig. 13), two non-
coplanar lines (D1 and D2) do not intersect. In that case, the triangulated position
has to be determined using another method.
Fig. 13. Two non-coplanar lines of sight D1 (from sensor O1) and D2 (from sensor O2).
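One common choice for such a method (an assumption on our part; the paper does not name the method it uses) is to take the midpoint of the shortest segment joining the two lines of sight:

```python
# Sketch: midpoint of the common perpendicular between two skew lines
# of sight P(t) = O1 + t*u and Q(s) = O2 + s*v (pure-Python 3D vectors).
# This particular method is our assumption; the paper only states that
# something other than a plain intersection is needed.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_point_midpoint(o1, u, o2, v):
    w = [p - q for p, q in zip(o1, o2)]
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w), dot(v, w)
    denom = a * c - b * b                    # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p = [o + t * x for o, x in zip(o1, u)]   # closest point on line 1
    q = [o + s * x for o, x in zip(o2, v)]   # closest point on line 2
    return [(pi + qi) / 2 for pi, qi in zip(p, q)]

# Two skew lines: the estimated target sits midway between them.
print(closest_point_midpoint([0, 0, 0], [1, 0, 0],
                             [0, 1, 1], [0, 0, 1]))  # -> [0.0, 0.5, 0.0]
```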
Sources of Localization Errors. Sources of localization errors can be sorted into two
categories: the sources of uncertainty and the error amplifiers. We also have to consider
the case of static targets separately from the case of moving targets.
Static Targets. A poor camera resolution limits the localization accuracy.
The impact of the camera resolution grows as the target distance increases,
as shown in Fig. 14.
Fig. 14. A shift of 1 pixel (δϕ) generates an error in the distance estimation D. On the right,
we can easily see that the error is larger than in the diagram on the left.
For example, considering a panoramic camera generating 360° × 20° images with
a resolution of W = 24 × 512 = 12,288 by H = 640 pixels, the angular horizontal error is
360°/12,288 ≈ 0.029°. Considering a distance between the cameras of 100 m, the impact
on the localization error depending on the target distance is presented in Table 2:
Table 2. Example of distance estimation error function of the target range for a given camera
resolution (iFoV = 0.029°)
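The trend behind Table 2 can be approximated with the standard small-angle stereo-error formula ΔD ≈ D² · δϕ / B (this closed form is our assumption; the paper only gives the parameters iFoV = 0.029° and a 100 m baseline):

```python
import math

# Sketch: first-order distance error for a 1-pixel angular shift.
# delta_D ~ D**2 * iFoV / baseline (small-angle approximation,
# valid when the target range D is much larger than the baseline).

def distance_error_m(target_range_m: float,
                     ifov_deg: float = 0.029,
                     baseline_m: float = 100.0) -> float:
    return target_range_m ** 2 * math.radians(ifov_deg) / baseline_m

for rng in (500, 1000, 2000):
    print(f"{rng} m -> +/- {distance_error_m(rng):.1f} m")
```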
Azimuth and Elevation Calibration Accuracy. This error is generated during the calibra-
tion process. The different cameras must be calibrated accurately in terms of Azimuth
and Elevation, otherwise the triangulation equations cannot be solved correctly.
Considering an error equivalent to 1 pixel, the impact on the distance
estimation is equivalent to the one introduced by the camera resolution (see Fig. 14).
When the relative positioning of the cameras is not well measured and/or set in the 3D
localization system, a resulting target positioning error will be introduced. This error is
illustrated in Fig. 15.
Fig. 15. A camera positioning error leads to an error in distance estimation and a shifted
estimated target position.
Sensors & Targets Positions. Errors on distance estimation based on triangulation are
mainly impacted by three metrics:
The diagram below (Fig. 16) shows the impact (represented in 2D to simplify the
understanding) of the direction of observation on the distance estimation accuracy. The
diagram represents an architecture where the sensors are about 140 m apart (100 m in
both latitude and longitude). The camera iFoV used to compute the diagram is 13 mrad.
Fig. 16. Example (in 2D) of a diagram showing the impact of the target location on the positioning
accuracy
Fig. 17. Example where the same target is imaged by the different cameras at different times.
The acquisition time must then be considered as a function of the Azimuth. The
image acquisition time can be represented by the below profile (Fig. 18) where each
level covers the hFoV of the sensor:
Fig. 18. Image acquisition time as a function of Azimuth for Camera 1 and Camera 2.
Fig. 19. Impact on the distance estimation in case of transversal moving target
the direction of the target at any time between the actually imaged positions. If the dynamic
model is accurate and the clock synchronization effective, the errors can be perfectly
compensated. We will now consider a time desynchronization of the two cameras to
illustrate the impact on the target localization.
We observe that the approaching case (Fig. 20) is the most favorable situation
(compared to Fig. 19). This is fortunate, as approaching targets are often
the use case of interest.
Absolute Time Synchronization. Even if the clocks of the different cameras are well
synchronized with each other, they may be globally desynchronized from the actual time.
The introduced error will be the distance traveled by the target during that period,
proportional to the target speed. Having in mind that most UAVs fly at about 60 km/h
(~17 m/s), and considering that it should be possible in all cases to synchronize the
cameras with an accuracy of about 0.5 s, the distance estimation could be given with
an error of about 8 m.
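The 8 m figure follows directly from speed times clock offset (a one-line check; the values are those quoted above):

```python
# Sketch: distance error introduced by an absolute clock offset.
# error = target speed * synchronization error (first-order model).

def sync_error_m(speed_kmh: float, clock_offset_s: float) -> float:
    return (speed_kmh / 3.6) * clock_offset_s

print(round(sync_error_m(60.0, 0.5), 1))  # -> 8.3
```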
Fig. 21. In this example, four cameras are used to cover a site. Having more than two cameras is
required to compensate the blind areas due to the vegetation and existing buildings.
4 Conclusion
Several research projects allowed the prototyping of:
• The most spatially resolved high frame rate IR panoramic camera, involving a 10 μm
pitch MCT megapixel detector.
• An advanced function of localization involving multiple IR panoramic cameras.
Those two major advances make IR panoramic imaging a key function of counter-
UAV systems. The use of 3D localization helps to communicate accurate information
to a counter-UAV system as it enhances the metadata that can be computed from the
individual images. Combining target distance estimation with the angular data provided
by the 360° cameras, the system calculates the true size and true speed of the target. Those
are key parameters enabling the filtering of false alarms (making the difference between a
close UAV and a distant plane). Providing diverse and accurate information about targets
is also one of the challenges for taking advantage of artificial intelligence algorithms,
especially those used for target classification. Having an accurate target classifier becomes a
mandatory specification to address most of the new security markets where the false alarm
rate is a critical key indicator. Even if 3D localization can be achieved in many cases
using two sensors, the integration of more than two sensors allows optimal performance
in every direction, but this also comes with new considerations, especially in the pairing
function, where the pairing result should not depend on the processing order.
Acknowledgements. This work was supported by the European Union’s Horizon 2020 Research
and Innovation Programme Advanced holistic Adverse Drone Detection, Identification and
Neutralization (ALADDIN) under Grant Agreement No. 740859.
References
1. Boreades Project Website. http://boreades.fr/
2. Agence Nationale De La Recherche. https://anr.fr/Projet-ANR-15-FLDR-0001
3. PureTech Systems. https://www.puretechsystems.com/visionview.html
4. HGH Infrared Systems Website. https://www.hgh.fr/content/download/2344952/37530497/
version/16/file/HGH_CaseStudy_Spynel_DroneUAVDetection.pdf
5. Lynred.com. https://www.lynred.com/produit/daphnis-hd-mw
6. University of Sheffield. https://www.sheffield.ac.uk/polopoly_fs/1.19213!/file/Lecture11.pdf
7. Wikipedia. https://en.wikipedia.org/wiki/Radar
8. Lee, Y.J., Yilmaz, A.: Real-time object detection, tracking, and 3D-positioning in a multiple
camera setup. In: ISPRS (2013)
9. Straw, A.D., Branson, K., Neumann, T.R., Dickinson, M.H.: Multi-camera real-time three-
dimensional tracking of multiple flying animals. J. R. Soc. Interface 8(56), 395–409 (2011)