You are on page 1of 5

Cloud Detection System for UAV Sense and Avoid:

Cloud Distance Estimation using Triangulation


1st Adrian Dudek 2nd Peter Stütz
Institute of Flight Systems Institute of Flight Systems
University of the Bundeswehr Munich University of the Bundeswehr Munich
Neubiberg, Germany Neubiberg, Germany
adrian.dudek@unibw.de peter.stuetz@unibw.de

Abstract—Sense and Avoid is gaining importance in the II. STATE OF THE ART
integration of unmanned aerial vehicles (UAVs) into civil
airspace. In order to be seen by other air traffic participants A. Cloud detection on-board UAV
operating under visual flight rules (VFR), own visibility must be Cloud detection as part of UAV Sense & Avoid is an
granted. This includes the observance of legal minimum cloud
upcoming topic in the integration of unmanned systems in
distances. An airborne electro-optical cloud detection and
avoidance system could cope with this task on unmanned
European airspace. However, research on cloud detection has
platforms. In this paper we propose an approach for cloud mainly focused on ground-based or space-based applications
distance estimation using computer vision algorithms. In for current weather determination in meteorology. As shown
addition to the functional architecture, results from a simulation in the previous chapter, an on-board cloud distance
environment are shown and the demands for future estimation system is indispensable in airspaces with traffic
developments are addressed. under visual flight rules.
An early publication dealing with UAV-based cloud
Keywords—UAV Sense and Avoid, cloud distance estimation detection has already shown that it is possible to determine
I. INTRODUCTION the distance of clouds in flight altitude using electro-optical
systems. In their work the authors use feature point ranging
The integration of unmanned aircraft into civil airspace in combination with an image-based classifier that classifies
remains a challenge. While systems such as the Traffic local image patches as ground, cloud or sky [2]. In
Collision Avoidance System (TCAS) are used to avoid
comparison to [2] we try to separate cloud objects from the
collisions in controlled airspaces in which mainly air traffic
participants operate according to instrument flight rules (IFR), background using colour-based segmentation first and
the pilots flying in uncontrolled airspaces according to visual subsequently determine the distance in these areas with the
flight rules (VFR) are responsible for avoiding other aircrafts presented method.
and ensuring their own visibility. Depending on the airspace Previous research work at our Institute of Flight Systems
used, certain minimum cloud distances must therefore be has already focused on determining the degree of coverage
respected. With regard to our application of unmanned air and estimating vertical cloud distance using a two-view
cargo transportation, it is expected that it will take place triangulation approach in order to avoid dangers during
primarily in the lower airspace. Thus, a Sense & Avoid system ascent and descent due to icing and turbulence. However, in
that is able to detect and avoid clouds is essential as on-board our recent project the preservation of visibility is of highest
equipment. Apart from ensuring own visibility, there are priority. Therefore, we propose to investigate whether this
further reasons not to enter clouds. On the one hand, there is method using 3D reconstruction from two views is also
an increased risk of turbulence in the vicinity of clouds caused suitable for applications with perspective in flight direction
by convection. On the other hand, at low temperatures, for a short-term avoidance of clouds in flight altitude.
especially near clouds, icing can occur and influence the
aerodynamic structure. B. Monocular cloud distance estimation using two-view
The focus of our previous investigations, described in [1], triangulation
was on segmentation of clouds in a simulation environment. The advantage of using monocular approaches compared
As a second important step the results of cloud segmentation to stereoscopic ones is that the baseline necessary for
are used to perform an approach of cloud distance estimation, triangulation is less limited by the aircrafts design. By using
which is examined in the following paper. First, state of the the monocular two-view triangulation, only one camera is
art cloud detection research projects are presented followed by needed. The collection of images from different perspectives
an explanation of the 3D reconstruction method used. The is done at different times. However, in case of rapid changes
procedure for estimating cloud distance is explained step by
of the cloud structure or shifting due to wind, the estimation
step beginning with cloud segmentation methods, shown in
result will be negatively affected. Therefore, the time interval
[1]. Moreover, results are shown and compared with ground
truth data from the simulation. Finally, a possibility is shown between the images needs to be chosen according to the flight
how cloud ground truth data can be collected in real flight speed [3].
images to validate the determined cloud distance. The purpose of cloud detection as part of a Sense & Avoid
system requires the application of the approach for cloud
distance estimation in perspective of flight direction, which
brings along the challenge of optical flow divergence besides
The KoKo II project was funded by the German federal aviation research varying backgrounds (sky, ground and horizon). When
programme (LuFo V, 3rd call).

978-1-7281-9825-5/20/$31.00 ©2020 IEEE


Authorized licensed use limited to: UNIVERSIDAD DE OVIEDO. Downloaded on December 28,2020 at 12:28:08 UTC from IEEE Xplore. Restrictions apply.
observing clouds in upward or downward perspective, the K is the intrinsic matrix containing camera characteristics
optical flow vectors maintain one direction throughout the such as the focal length ( 𝑓𝑥 , 𝑓𝑦 ) resulting from image
image and there are barely any differences in the flow vector resolution and field of view. It also takes into account the
length depending on the image position for objects at equal optical centre (𝑐𝑥 , 𝑐𝑦 ) and the axis skew (s). The extrinsic
distances. In contrast, when an object is approached in matrix [𝑅|𝑡] contains rotation and translation parameters of
direction of flight (Fig. 1, left) the optical flow vectors start at the camera [9]. The result of the triangulation are 3D points
a point called focus of expansion and the flow vectors of that include the required depth information. With this distance
objects of equal distance become larger the further away they information, statements can be made about the cloud structure
are from this point in the image (Fig. 1, right) [4]. It is thus on the front and the averaged oblique distance to the cloud can
expected that the accuracy of the distance determination will be given if several features within a cloud contour could be
depend on the cloud position in the image. detected, tracked and triangulated. As a last step the 3D points
are transformed into the WGS84 coordinate system. Thus,
features with a very low altitude can be eliminated with a
known terrain height, if structures on the ground have been
classified as potential clouds by the cloud segmentation.
1st input
image

Cloud
Fig. 1. Apparent growth of a circle when approaching a sphere (left) segmentation
and corresponding radial optical flow vectors (right) [4]

Feature
III. PROPOSED CLOUD DISTANCE ESTIMATION APPROACH detection

The unaccelerated horizontal flight without wind


influence is examined in the following. First a sensor image is 2nd input Feature Remove
taken from a simulation environment based on the Presagis image tracking untracked
toolkit including aircraft state data and camera orientation
(Fig. 2). Before starting with cloud distance estimation, cloud RANSAC Remove
segmentation methods are applied, such as described in [1]. algorithm outliers
There we have shown that colour-based segmentation
methods are highly dependent on lighting conditions,
Triangulation
visibility and ground texture, so false detections are probable.
Image areas that were classified as clouds serve as regions of
interest while others are not considered. As a next step a Shi- Absolute
Thomasi feature detector is applied on these regions [5]. It is feature
positions
expected that more features will be recognized in the cloud
edge areas, because the cloud structure is usually striking in Cloud
the cloud edge area, while the middle of the cloud can have distance
rather homogeneous structures [2]. After a certain flight
distance the second input image is taken and the aircraft state Fig. 2. Cloud distance estimation approach
data and camera orientation and position is known. On the one
hand, the distance lying between the two images should not be
too small, otherwise the optical flow vectors are too small for IV. FIRST RESULTS AND VALIDATION
robust triangulation. On the other hand, the time between the The following discussion presents the results obtained on
images should not be too long, otherwise tracking will be more simple, single cloud objects to obtain basic performance
difficult if the characteristics of a cloud change substantially measures. These results form the basis for later investigations
[6]. with more complex cloud formations. First, the distance
In the second input image features detected in the first determination in unaccelerated horizontal straight flight
image are tracked using the Lucas-Kanade algorithm [7]. Here without a visual simulation of atmospheric effects and terrain
features that were detected in the first image but could not be is considered. Due to the optical flow divergence, it is
tracked in the second image are eliminated. In order to reduce expected that the determination of distances in clouds near the
the number of erroneously tracked features, the Random focus of expansion will provide less accurate values due to the
Sample Consensus (RANSAC) algorithm is used. This small change in feature position. In the following, clouds are
procedure is able to eliminate large errors from a data set, even systematically placed in a plane frontal to the UAV using
if the proportion of errors is relatively large [8]. In this use Presagis simulation environment (Fig. 3). According to the
case, tracked features that do not correspond to the main proposed procedure, cloud segmentation is applied first. In
direction of movement can be removed. Subsequently, this case, a brightness filter with an adaptive threshold allows
triangulation is performed using the camera matrix P (1) as the clouds to be separated from the black background very
input. easily (Fig. 4).

fx s cx
 P = K [R | t] = [ 0 fy cy ] [R | t] 
0 0 1

Authorized licensed use limited to: UNIVERSIDAD DE OVIEDO. Downloaded on December 28,2020 at 12:28:08 UTC from IEEE Xplore. Restrictions apply.
Fig. 3. Systematically placed Fig. 4. Segmentation result using
clouds without simulated terrain brightness threshold
and atmospheric effects

Then features are detected in these segmented areas (Fig.


5, green) and tracked in the second input image (Fig. 5, blue).
This shows the expected different change of feature positions
depending on the position in the image. After eliminating the
untracked features and the features that do not correspond to
the flow direction, triangulation is applied to estimate the Fig. 6. Averaged cloud distance estimation results (orange) and actual
distance. All feature distance values within a cloud contour are distance to the cloud center (green)
averaged and displayed in the cloud object (Fig. 6, orange).
The actual distance to the cloud centre is projected in green The fluctuation of the distance estimation of cloud objects
into the corresponding cloud object (Fig. 6, green). near the focus of expansion is visualized in the following
Since the detected features are located on the cloud example. As shown in Fig. 2 as the last step, absolute positions
surface, but according to the current state of the simulation are calculated after triangulation of the features. These
environment only the distance to the cloud centre is known, absolute positions can then be plotted on a map. For this task
the two distance values can be compared only conditionally, we use ArcGIS Runtime SDK. If three clouds are placed in a
because the cloud surface is closer than the cloud centre plane of the same distance in flight altitude and are detected
position from a UAV point of view. However, comparing the by the proposed method (Fig. 7), the feature positions shown
distance values allows to determine if the magnitude is in Fig. 8 are obtained. The true cloud centre and the bounding
correct. During the flight towards the cloud formation shown boxes of the clouds are shown in black. The cloud features
in Fig. 3, the distance values of the outer clouds are constantly whose positions could be determined are shown as a "+"
within the range of the actual distance with a relatively small symbol. Depending on their estimated altitude, they are
deviation (Fig. 6). For the cloud in the centre of the image, colour-coded.
distances are estimated that oscillate more around the value of The absolute altitude of a feature can be specified by using
the actual distance (Fig. 6). However, the estimation of the previously estimated slant distance and the feature position
distances becomes better as the cloud occupies increasing in the image. When operating under visual flight rules, a
areas of the image and the vectors of the optical flow become minimum horizontal distance of 1.5km and a minimum
longer. vertical distance of 1000ft apply in most airspaces. Therefore,
features whose determined altitude falls below the minimum
vertical distance of 1000ft are displayed in red, while features
which are more than 1000ft lower than the UAV are displayed
in green and those which are more than 1000ft above the UAV
flight altitude are displayed in blue. The current UAV position
is shown in orange (Fig. 8).

Fig. 5. Detected (green) and tracked (blue) features

Fig. 7. Clouds placed in flight altitude with detected (green) and


tracked (blue) features

Authorized licensed use limited to: UNIVERSIDAD DE OVIEDO. Downloaded on December 28,2020 at 12:28:08 UTC from IEEE Xplore. Restrictions apply.
symbol in three colours depending on altitude. The flight
altitude of the UAV is 1000m in this scenario.

Fig. 11. Second input image with detected (green) and tracked (blue)
features

Fig. 8. Visualization of estimated cloud feature positions: cloud


ground truth (black), features below (green), above (blue) and within
(red) legal vertical cloud distance of 1000 ft

The clouds shown in Fig. 7 have their centre at flight


altitude and have a maximum vertical extent of 1000m, so
most features would have to be drawn in red, but some also in
green and blue. It is noticeable that the features of the outer
clouds are too far out compared to the ground truth. The cloud
features of the cloud in the middle scatter very much (Fig. 8),
which explains the fluctuating average distance of this cloud.
When approaching this cloud, the features are more
concentrated in the area of the bounding box and the features
of the outer clouds move towards the actual cloud centre. As
expected, the closer the UAV gets to a cloud object, the better
the distance estimation. The fact that a cloud approached
head-on by this method provides less accurate distance values
compared to clouds in other parts of the image is problematic, Fig. 12. Averaged cloud distance estimation result (orange), actual
distance to cloud center (green) and segmented contour (black)
because the cloud in the centre of the image has the highest
priority when aiming to avoid clouds.
The next scenario shows a cloud formation with a terrain
model from our simulation environment (Fig. 9). The result of
the colour-based segmentation approach introduced in [1] is
shown in Fig. 10.

Fig. 9. Cloud scene in simulation Fig. 10. Segmentation result


environment

Fig. 11 shows the second input image containing detected Fig. 13. Visualization of estimated cloud feature positions: cloud ground
features (green) and tracked features (blue). Fig. 12 illustrates truth (black), features below (green), above (blue) and within (red) legal
vertical cloud distance of 1000 ft
the estimated averaged cloud distance (orange) and the actual
distance to the cloud centre (green) of each segmented cloud
The cloud centres of the two upper clouds are located at an
object. The 2D map display (Fig. 13) shows the true bounding
altitude of 3000m and have a maximum vertical extension of
boxes and centre positions of the clouds in black, the UAV
1000m. The estimated distance result of these clouds is close
position in orange, and the cloud feature positions as a "+"
to the actual distance from the cloud centre (Fig. 12). In Fig.13

Authorized licensed use limited to: UNIVERSIDAD DE OVIEDO. Downloaded on December 28,2020 at 12:28:08 UTC from IEEE Xplore. Restrictions apply.
it can be seen that some absolute positions of the cloud  Elimination of erroneously detected ground objects
features are within the range of the bounding box, even if by known terrain height and estimated altitude of the
some positions were estimated too far out. Both clouds are features or adjustment of cloud segmentation
placed well above the minimum vertical distance, so all
associated features should be marked blue, which is the case  Investigations to improve the determination of
(Fig. 13). The two lower and front clouds are estimated too far distances in the area of focus of expansion
away (Fig. 12). The distance values should tend to be slightly  Clustering of cloud features in groups of similar
less than the distance to the cloud centre, since the features distance for separation of clouds at high coverage.
whose distance is determined lie on the cloud surface.
However, the cloud on the lower left is estimated to be much  Utilization of multiple perspectives, meaning more
too far away. If the feature positions of this cloud are analysed, than two input images, for distance estimation.
some features are scattered upwards along a line (Fig. 13). The
Apart from improving the presented method we want to
features that are very far away from the cloud bounding box
evaluate different depth determination methods of image
are all marked green, meaning they are estimated to be below
processing in order to apply them in the future in a context
about 700m. Some tests have shown that this behaviour only
sensitive adaptive system during flight. This adaptive system
occurs in clouds with the ground as background. The more will be extended by further cloud segmentation methods based
sensitive the cloud segmentation was chosen, the more the on deep learning, which are described in [10] and [11], for
distance estimation was worsened towards too large values. It example. In addition, investigation of real-time cloud
turns out that although clouds have prominent structures in the detection algorithms in real flight plays a central role in our
edge areas, which are more likely to be detected by feature
future research. Compared to simulation, the collection of
detectors than more homogeneous areas in the middle of the
ground truth data in the atmosphere is more complicated. One
cloud [2] (Fig. 11), these edge areas also tend to be more
possibility of determining the true cloud positions is briefly
transparent, so that features on the ground are also detected in
explained in the following. The satellite data portal
this area, which means that the determined distance for the
Copernicus Open Access Hub of the European Space Agency
entire cloud object is increased. This behaviour can be seen in
(ESA) provides images taken by the satellites Sentinel-2A and
the lower left cloud. Some of the triangulated features do not
Sentinel-2B. Every five days one of the satellites flies over a
belong to the cloud, but to the ground behind the cloud. Since
large area south of Munich and records this area. Exactly at
the height of the features is also determined, ground features
this moment we try to record airborne clouds and determine
could be eliminated if the terrain height at this position is
the distance to them. Afterwards the actual cloud position
known. An alternative would be to shrink segmented cloud
provided by satellite data can be compared with the cloud
objects artificially, for example by using the morphological
position determined by computer vision methods. For these
operation erosion or by rescaling the segmented cloud
flights we mount an action camera, which records the GPS
contours, so that transparent cloud edge areas are not
position, at the wing of a general aviation aircraft.
segmented. The cloud centre of the cloud on the lower left is
placed at 600m height and has a vertical extension of 300m. REFERENCES
So most of the features should be marked green and the rest
[1] A. Dudek, F. Funk, M. Russ, and P. Stuetz, “Cloud Detection System
red. The cloud centre of the cloud on the lower right is placed for UAV Sense and Avoid : First Results of Cloud Segmentation in a
in flight altitude with a vertical extension of 1000m that means Simulation Environment,” in IEEE International Workshop on
features would have to be classified in green, red and blue. Metrology for AeroSpace, 2019.
Both clouds show the expected colour distribution of the [2] H. Nguyen et al., “EO/IR Due Regard Capability for UAS Based on
features. Intelligent Cloud Detection and Avoidance,” in AIAA
Infotech@Aerospace, 2010
V. CONCLUSION AND FUTURE WORK [3] F. Funk and P. Stütz, “A Passive Cloud Detection System for UAV :
Concept and First Results,” in International Symposium on Enhanced
The presented results have shown that an estimation of Solutions for Aircraft and Vehicle Surveillance Applications (ESAVS),
cloud distance in perspective of the flight direction for short- 2016.
term cloud avoidance is possible with the proposed approach. [4] D. A. Forsyth and J. Ponce, Computer Vision - A Modern Approach,
Issues with distance estimation arise near the focus of 2nd ed. Edinburgh: Pearson Education, p. 345, 2012.
expansion and in transparent cloud areas when the distance to [5] J. S. J. Shi and C. Thomasi, “Good features to track,” in IEEE
features on the ground is determined. The distance Computer Society Conference on Computer Vision and Pattern
Recognition, 1994.
information obtained is valuable in several aspects. On the one
hand, avoidance manoeuvres can be initiated when [6] F. Funk and P. Stutz, “A passive cloud detection system for UAV:
Analysis of issues, impacts and solutions,” 5th IEEE Int. Work. Metrol.
approaching a cloud object. On the other hand, the determined AeroSpace, Metroaerosp. 2018 - Proc., pp. 236–241, 2018.
dimension of the cloud can be used to classify the cloud type [7] J.-Y. Bouguet, “Pyramidal Implementation of the Lucas Kanade
and draw conclusions about convective activity in the cloud Feature Tracker - Description of the algorithm,” Intel Corporation -
area. However, the reliability of the distance estimation Microprocessor Research Labs. 1999.
approach strongly depends on the background respectively the [8] M. A. Fischler and R. C. Bolles, “Random sample consensus,”
terrain characteristics. In addition, more complex cloud Commun. ACM., vol. 24, no. 6, pp. 381–395, 1981
situations and also hazy visibility conditions lead to increasing [9] R. Szeliski, “Computer Vision: Algorithms and Applications.”
challenge of separation between individual cloud objects, for Springer-Verlag, London, pp. 46–47, 2011.
example by colour segmentation. In these cases, it is suggested [10] M. Shi, F. Xie, Y. Zi and J. Yin, "Cloud detection of remote sensing
images by deep learning," 2016 IEEE International Geoscience and
to triangulate features in the large cloud areas and then cluster Remote Sensing Symposium (IGARSS), pp. 701-704, 2016.
them into groups of similar distance. It should be mentioned
[11] S. Dev, A. Nautiyal, Y. H. Lee and S. Winkler, "CloudSegNet: A Deep
that flying under or over a cloud layer is preferable to avoiding Network for Nychthemeron Cloud Image Segmentation," in IEEE
individual clouds in high cloud cover. Based on the gained Geoscience and Remote Sensing Letters, vol. 16, no. 12, pp. 1814-
knowledge, following improvements are proposed: 1818, 2019.

Authorized licensed use limited to: UNIVERSIDAD DE OVIEDO. Downloaded on December 28,2020 at 12:28:08 UTC from IEEE Xplore. Restrictions apply.

You might also like