
Image and Vision Computing 32 (2014) 568–578


Timely autonomous identification of UAV safe landing zones☆


Timothy Patterson a,⁎, Sally McClean a, Philip Morrow a, Gerard Parr a, Chunbo Luo b

a School of Computing and Information Engineering, University of Ulster, Cromore Road, Coleraine, BT52 1SA, Northern Ireland, United Kingdom
b School of Computing, University of the West of Scotland, Paisley, Scotland PA1 2BE, United Kingdom

Article history: Received 14 September 2012; received in revised form 14 February 2014; accepted 26 June 2014; available online 3 July 2014.

Keywords: UAV safe landing zone detection; Terrain classification; Fuzzy logic; UAV safety

Abstract

For many applications such as environmental monitoring in the aftermath of a natural disaster and mountain search-and-rescue, swarms of autonomous Unmanned Aerial Vehicles (UAVs) have the potential to provide a highly versatile and often relatively inexpensive sensing platform. Their ability to operate as an ‘eye-in-the-sky’, processing and relaying real-time colour imagery and other sensor readings, facilitates the removal of humans from situations which may be considered dull, dangerous or dirty. However, as with manned aircraft, they are likely to encounter errors, the most serious of which may require the UAV to land as quickly and safely as possible. Within this paper we therefore present novel work on autonomously identifying Safe Landing Zones (SLZs) which can be utilised upon occurrence of a safety critical event. Safe Landing Zones are detected and subsequently assigned a safety score either solely using multichannel aerial imagery or, whenever practicable, by fusing knowledge in the form of Ordnance Survey (OS) map data with such imagery. Given the real-time nature of the problem we subsequently model two SLZ detection options, one of which utilises knowledge, enabling the UAV to choose an optimal, viable solution. Results are presented based on colour aerial imagery captured during manned flight, demonstrating practical potential in the methods discussed.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Unmanned Aerial Vehicles (UAVs) have the potential to revolutionise current working practices for many military and civilian applications such as assisting in search-and-rescue missions [1] and environmental monitoring [2]. The widespread availability of multiple Commercial Off-The-Shelf (COTS) air frame designs, when coupled with recent advances in sensing technologies such as lightweight, high resolution colour cameras, ensures that in comparison to manned aircraft UAVs offer a versatile and often inexpensive solution to many such applications. A key advantage provided by UAVs is the removal of humans from situations which may be classified as dull, dangerous or dirty, for example power line inspection, aerial surveillance and monitoring of atmospheric pollution.

There are two main types of UAV control, piloted and autonomous. Piloted UAVs are controlled in real-time by a human operator often located many miles from the deployment area. On the other hand, autonomous UAVs generate low-level flight control commands in response to high-level goals, for example GPS waypoints. One such project concerned with the creation and evaluation of autonomous UAVs is the Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project [3], which has a primary focus of utilising swarms of communicating, autonomous UAVs for a mountain search-and-rescue type scenario. Currently, there are three types of adapted, Ascending Technologies rotor based UAV platforms used within the SUAAVE project. Each platform has a flight speed of around 10 m/s with approximately 20 min battery life. Colour cameras of varying dimensions, power requirements and capabilities are fitted to the UAVs. Additionally, all platforms contain an IEEE 802.11 wireless networking card, a GPS receiver, an Inertial Navigation System (INS) and an ATOM processing board enabling automated, in-flight processing and analysis of captured colour aerial imagery and other sensory inputs.

1.1. Aims and motivation

As with manned aircraft, the dependability and integrity of a UAV platform can be influenced by the occurrence of various endogenous and exogenous events; for example, a change in wind strength may impact upon the UAV's remaining battery life. Due to the potential ethical and legal implications of an in-flight UAV failure, the UK Civil Aviation Authority's UAV regulations are currently similar to those specified for model aircraft [4]. As such, one regulation is that UAVs must remain within 500 m of the human operator at all times, thereby limiting the usefulness of UAVs for many real-world applications. Before these operational constraints can be relaxed there are a number of safety related technical challenges which must be addressed, including sense-and-avoid capabilities and provision of a safe landing system.

☆ This paper has been recommended for acceptance by Konrad Schindler, PhD.
⁎ Corresponding author. E-mail addresses: t.patterson@ulster.ac.uk (T. Patterson), si.mcclean@ulster.ac.uk (S. McClean), pj.morrow@ulster.ac.uk (P. Morrow), gp.parr@ulster.ac.uk (G. Parr), chunbo.luo@uws.ac.uk (C. Luo).
http://dx.doi.org/10.1016/j.imavis.2014.06.006
0262-8856/© 2014 Elsevier B.V. All rights reserved.

For many safety critical situations the safest course of action may be to instruct the UAV to land as quickly and safely as possible. In particular there are three types of error which may impact upon the safe operation of a UAV, thus necessitating an emergency landing.

1. Loss of communication link. A key system requirement of the SUAAVE project is that each swarm member must maintain a direct or multi-hop communication link with the base station. This communication link is via the IEEE 802.11 wireless networking protocol and enables each UAV to receive commands such as return home or land immediately from a human-in-the-loop. The loss of this link would potentially result in the UAV being uncontrollable and is therefore deemed a safety critical event.
2. Hardware/software errors. The most serious of this class of error, for example an actuator failure, may require the UAV to descend immediately in a controlled fashion and land on the ground directly below. Other less serious errors, for example a software module crashing, may require the UAV to land and perform a soft reset.
3. GPS failure. During normal flight conditions the UAVs used within the SUAAVE project navigate using coordinates obtained from a GPS receiver. However, the signal strength and reliability of GPS can be influenced by terrain profile [5]. Whilst a method of position estimation using Ordnance Survey (OS) map data and aerial imagery has been developed [6], in the event of prolonged loss of GPS signal the UAV would be commanded to land.

Intuitively it cannot be assumed that the ground directly beneath the UAV is suitable for landing in, as it may contain humans, animals or hazards. Furthermore, due to the limited flight time of the UAVs it cannot be assumed that previously determined landing sites are attainable. It is therefore desirable to have an autonomous method of Safe Landing Zone (SLZ) detection which can be executed on colour aerial imagery obtained from onboard, passive colour cameras. Due to the flight speed of the UAVs, algorithms using such imagery, for example SLZ detection, must execute in real-time as failure to do so will result in areas on the UAV's flight path remaining unprocessed.

In Fig. 1 the primary phases of UAV operation are illustrated in conjunction with an overview of SLZ detection. Upon bootstrap success the UAV becomes airborne and recursively cycles through the self diagnostics and operation modes. During operation mode the SLZ detection

Fig. 1. Primary modes of UAV operation demonstrating when SLZ detection is considered as a soft or hard real-time system. Potential SLZs are identified within an input aerial image using a
combination of edge detection and dilation. These potential SLZs are then assigned a safety score and depending on the mode of operation either used immediately as a SLZ or stored for
future use.
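The mode-dependent behaviour summarised in the caption can be sketched as follows. This is an illustrative skeleton, not the authors' implementation: the function names, the injected `detect_slzs`/`score_slz` helpers and the 0.8 safety threshold are all assumptions.

```python
def process_frame(image, mode, detect_slzs, score_slz, database, threshold=0.8):
    """Illustrative SLZ pipeline from Fig. 1.

    detect_slzs(image) -> list of candidate regions (edge detection + dilation);
    score_slz(image, slz) -> safety score in [0, 1].
    In operation mode (soft real-time), safe zones are stored for future use;
    in abort mode (hard real-time), the best zone is returned for immediate use.
    """
    scored = [(slz, score_slz(image, slz)) for slz in detect_slzs(image)]
    safe = [pair for pair in scored if pair[1] >= threshold]
    if mode == "abort":
        # Hard real-time: land in the highest-scoring zone, if any exists.
        return max(safe, key=lambda pair: pair[1], default=None)
    # Soft real-time: store safe zones as a safeguard against future failures.
    database.extend(safe)
    return None
```

In operation mode the function only grows the database; only an abort command makes it return a landing decision.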

algorithm is continuously executed on sensed aerial imagery. Detected SLZs with a high safety score are stored in a database, thereby providing an invaluable measure of robustness should a UAV encounter an error in a location where no SLZs exist. We consider this mode of operation to be a soft real-time system whereby it is not regarded as a safety critical event if processing a colour image requires slightly longer than a predetermined threshold. Should diagnostics fail, the UAV will attempt to recover by, for example, navigating back within communication range of other swarm members. In the event where this recovery fails the UAV is issued with an abort command and must determine if a suitable SLZ exists from its current location. We consider abort mode to be a hard real-time system whereby failure to process a colour aerial image within the required time frame may have catastrophic consequences.

Within this paper we present an extension of our previous work on autonomous SLZ detection discussed in [7] and [8], resulting in a SLZ detection algorithm which has the capability of exploiting knowledge in the form of OS data in addition to utilising the multichannel nature of colour aerial imagery. There are two primary contributions. Firstly, we incorporate OS data into the potential SLZ detection phase, thus providing a measure of robustness against image noise. Furthermore, we use OS data to assist with the assignment of SLZ safety scores, fusing the OS data and multichannel aerial imagery to perform terrain classification and also to determine the Euclidean distance between a SLZ and man-made objects. Secondly, given the scenario of an abort command, the available time frame under which the hard real-time system must execute is variable and strongly influenced by the UAV's remaining battery life. Whilst incorporating knowledge into the SLZ detection algorithm provides a more reliable result, there is a time overhead incurred. We therefore model the execution time of two SLZ detection options, one of which incorporates knowledge, enabling the UAV to choose an optimal, viable method.

The remainder of this paper is structured as follows: in Section 2 a brief overview of work related to SLZ detection is given. In Section 3 we discuss autonomously identifying potential SLZs. These potential SLZs are then assigned a safety score as presented in Section 4. In Section 5 we model the execution times of two SLZ detection options, thus enabling the UAV to choose an optimal, viable SLZ detection method. An evaluation is presented in Section 6. Finally, conclusions and proposed further work are outlined in Section 7.

2. Related work

There are two main types of SLZ detection algorithm within the literature, namely semi-autonomous and fully autonomous. Semi-autonomous approaches rely on a human operator delineating a generally suitable landing area, after which the UAV detects a specific landing site. Alternatively, fully autonomous approaches rely solely on SLZ detection algorithms on-board a UAV. For completeness a brief overview of the relevant literature is provided in the following subsections.

2.1. Semi-autonomous SLZ detection

Many of the early semi-autonomous approaches to UAV landing, such as the work by Sharp and Shakernia [9] and Saripalli et al. [10], focused on specially constructed landing pads of known size, shape and location. The design of many of these landing pads enabled the UAV to utilise image processing techniques in order to reliably estimate altitude and pose, thus providing low level flight control with invaluable positional information. For example, Merz et al. [11] propose a landing pad consisting of 5 circle triplets. The pose of the UAV is determined from the projection of three circles positioned on the corner points of an equilateral triangle. This is fused with inertial measurement data using a Kalman filter to provide a more accurate estimation of UAV pose and altitude. A further example landing site design is presented by Lange et al. [12] where the pattern consists of concentric circles with increasing diameters. Each circle has a unique ratio of inner-to-outer radius which can be used in conjunction with camera properties to estimate the UAV's height above ground level (AGL). However, extending an approach which utilised landing pads for use in a safety critical situation would necessitate a significant amount of human effort as such pads would be required in multiple geographic locations.

More recent advances have focused on reconstructing the terrain profile of an area chosen by a human operator. Such approaches are based on the underlying assumption that planar regions are suitable landing areas. This assumption is reasonable given an operator's ability to use human intuition in collating contextual information in order to choose a potentially suitable landing area.

The use of active sensing devices, for example laser scanners, provides a relatively robust and accurate method of determining terrain profile [13,14]; however, due to their high weight and power requirements such sensors are generally impracticable for small rotor-based platforms. In the semi-autonomous SLZ detection algorithms proposed by Templeton et al. [15] and Cesetti et al. [16], passive sensors, for example colour cameras, are used in conjunction with image processing techniques such as computation of optical flow to detect planar regions.

In the work by Templeton et al. [15] the terrain is reconstructed using a single camera. However, in order to achieve this, multiple passes of the same area are required. In the scenario of an emergency forced landing this may not be achievable due to limited battery life. A further disadvantage is the requirement of an accurate estimation of camera movement. For a UAV with constantly changing velocity this may be difficult.

In the semi-autonomous approach by Cesetti et al. [16] a user chooses a safe landing area via a series of navigation waypoints, either from an aerial image or from the live UAV camera feed. A modification of the Scale Invariant Feature Transform (SIFT) [17] is used to extract and match natural features in the navigation waypoints to the UAV images. A hierarchical control structure is used to translate high level commands, for example navigation waypoints, to low level controls, for example rotor speed.

The SIFT algorithm was chosen in [16] due to the difficulties in robustly identifying reliable features in an outdoor environment from a moving platform. SIFT image descriptors are invariant to scale, translation, rotation and, to some extent, illumination conditions and can therefore overcome these difficulties with relative success. However, one of the main disadvantages of this algorithm is that the SIFT feature description of an image can be computationally expensive to compute. Cesetti et al. overcome this by dividing an input image into sub-images, upon which SIFT is executed only if the sub-image under consideration has a high contrast value, based upon the sum and mean of the sub-image's greyscale intensity. This results in SIFT image descriptors being computed at a rate of 5 frames per second on 320 × 240 pixel images.

The computed SIFT features are utilised for two possible safe landing site identification scenarios. In the first scenario, the UAV is maintaining a steady altitude and, with a translational motion, is tasked with identifying a landing site. The optical flow between two successive images is estimated using the SIFT features and used to estimate depth structure information. An assumption is made that areas with low variance between optical flow vectors indicate flat areas and are therefore deemed to be a safe landing site. A threshold for determining the boundaries between safe and unsafe areas is calculated during a supervised training phase. The second scenario is where the UAV is descending vertically. In this scenario, a graph of distances is calculated between the features of two successive images. A hypothesis is proposed that linear zooming over a flat surface will result in constant variance in graph distances between two images. Areas with a low variance between successive images are considered to be safe as they are assumed to be flat.

Both heuristics, i.e. that areas with a low variance between optical flow vectors and that low variance between successive image features for linear zooming indicate flat areas, are validated to some extent using both simulations and real data. For the first scenario, an example

is provided using real data in which the variance of the optical flow vector for an unsafe landing site is 302.06 as opposed to a safe landing site which is 2.78. Cesetti et al. [18] further the potential autonomy of their original work by incorporating terrain classification into the SLZ detection algorithm; however, human interaction is still required.

Fundamentally there are two primary reasons why a semi-autonomous approach does not provide a robust solution to SLZ detection for our application of multiple autonomous UAVs conducting a mountain search-and-rescue mission. Firstly, there is a requirement of an available communication link between the UAV and a human operator, the failure of which may have been the very source of the error. Secondly, the human operator is responsible for a number of tasks including validating images which have been flagged as potentially containing a missing person. Placing on them the additional burden of identifying suitable, attainable SLZs may result in the neglect of other tasks, possibly negatively impacting upon the overall success of the mission.

2.2. Autonomous SLZ detection

The work contained within [19–21] presents an approach to autonomous SLZ detection using colour aerial imagery for a fixed wing UAV. The system architecture is divided into two main stages. Firstly, potential landing sites are identified using edge detection, dilation and the identification of homogeneous areas of sufficient size. Secondly, the suitability of potential landing sites is determined based on terrain classification and slope.

The terrain of potential landing sites is classified using a back propagation neural network. By using a multi-layered classification approach, selecting appropriate input features and implementing an automated subclass generation algorithm, an overall terrain type classification accuracy of 91.95% was achieved. An estimation of terrain slope was derived from digital elevation maps (DEM). The DEM used in the work by Fitzgerald et al. was a grid based model of approximately 90 m intervals, i.e. one square in the grid represented an area of 90 m². The DEM data is then projected onto the image plane and each pixel assigned a linguistic slope measure based on the maximum DEM value between 4 grid points. However, it should be noted that the work by Fitzgerald et al. is primarily for a large fixed wing UAV which generally operates at much higher altitudes than the quadrotor UAV used within SUAAVE. In the case of a UAV determining slope at a lower altitude, a DEM of this resolution is not sufficient.

Within [19–21] results are presented indicating a 92% potential landing site detection accuracy and a 93% terrain classification accuracy. However, when considered from the perspective of SLZ detection for a small quadrotor UAV there are two main limitations with the approach described. Firstly, the identification of potentially suitable SLZs is solely based upon edge detection on a greyscale representation of the input image, which may render the method susceptible to noise such as shadows. Secondly, the textural features utilised are not invariant to scale and rotation, which may reduce terrain classification accuracy for imagery captured at multiple scales and rotations caused by frequent UAV movements such as altitude and yaw adjustments. A further limitation is that, aside from Digital Elevation Models (DEM), potentially useful external knowledge such as OS map data is not exploited. Within this paper, we therefore seek to address these limitations.

3. Potential SLZ detection

Following our previous work and that of Fitzgerald et al. [19,20], the SLZ detection algorithm is divided into two main components (Fig. 1). Firstly, potential SLZs are detected within an input colour aerial image. Secondly, these potential SLZs are assigned a numeric safety score which either confirms or discounts their suitability (Section 4).

3.1. Identification of region and object boundaries

The focus of region and object boundary identification is to detect areas in an input aerial image which are relatively homogeneous and free from obstacles, for example animals. Two sources of data are utilised for this stage, namely aerial imagery and OS data.

3.1.1. Edge detection

Whilst OS data specifies region boundaries, due to its static nature it cannot ensure that such boundaries accurately reflect the real world area. Such examples include locations where houses, roads or paths are constructed after the survey date. It is therefore desirable to complement static OS data with real-time aerial imagery. Consequently, the process of edge detection is of fundamental importance to the overall success or failure of the SLZ detection algorithm. Edge detection identifies points within an image exhibiting a steep change in, typically, intensity values. This property renders it particularly useful for the problem of locating suitable SLZs as, generally speaking, such areas, for example grass regions, have relatively constant intensity values and therefore, at higher altitudes, do not contain edges. Furthermore, there are types of man-made objects such as power pylons or wind turbines which pose a risk to safe UAV landing yet may not be represented in OS data. Such objects typically exhibit sudden changes in greyscale intensity and are therefore identified by edge detection, and the neighbouring area subsequently discounted as a potential SLZ.

At this stage in development the Canny edge detector [22] is used. This method conducts a smoothing operation using a Gaussian filter as a prerequisite to edge detection, thereby reducing its susceptibility to noise. The width of the Gaussian filter, w, can be defined, which provides a useful advantage as it is likely that at very low altitudes safe areas such as grass regions may contain a number of edges which do not represent significant region boundaries. It is therefore desirable that in a real world implementation the width of the Gaussian filter would be related to altitude. Further user defined parameters are a high threshold, tH, and a low threshold, tL.

Due to the fundamental role of edge detection within the SLZ identification algorithm it is desirable that detected edges correspond, as accurately as possible, to region boundaries. Whilst it is possible to set low values for thresholds tH and tL, this results in many edges which would be considered spurious. With this in mind, an offline training phase is conducted during which a human operator chooses edge detection parameters which yield intuitive results for a series of images. For the work presented in this paper these parameters are fixed to tH = 16.75, tL = 8.5 and w = 3; however, in the future they will be related to UAV altitude.

3.1.2. OS data

For the majority of locations within the UK, invaluable knowledge regarding the landscape may be derived from OS data. This data specifies regions as a series of line, point or polygonal features in easting/northing coordinates to an accuracy of ±0.4 m [23]. Whilst such data is inherently historic and its incorporation increases the overall execution time of the algorithm, it is nevertheless an invaluable resource which can be utilised to complement captured aerial imagery in assisting with the detection of potential SLZs. Of particular interest to this stage of the algorithm is the relatively reliable specification of region boundaries such as roads, buildings and vegetation extents.

In order to ensure seamless compatibility with the image based components of SLZ detection, vector format OS data for the area enclosed by an image's geographic bounding box is converted into raster format. Raster format data represents a real world area as a matrix with each cell containing a value. For this component of the algorithm a matrix of dimensions equal to those of the input colour aerial image is created, with each cell containing either 1, indicating that a region boundary is present, or conversely 0.

For the purposes of identifying potential SLZs we consider relevant region boundaries specified in OS data analogous to edges detected

Fig. 2. (A) is an input aerial image on which edge detection is executed resulting in the edges highlighted in white displayed in (B). In (C), OS data is combined with the output of the edge
detection phase. After dilating the edges in (C), regions which do not contain edges are considered as potential SLZs (D).
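The edge detection and fusion steps behind panels B and C can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it uses the parameter values reported in Section 3.1.1 (tH = 16.75, tL = 8.5, w = 3) but omits Canny's non-maximum suppression, and the single-pass hysteresis step (with wrap-around at image borders) is a deliberate shortcut.

```python
import numpy as np

def gaussian_kernel(w, sigma=1.0):
    # w x w Gaussian smoothing kernel, normalised to sum to 1.
    ax = np.arange(w) - (w - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(image, w=3):
    # Brute-force 2-D convolution with edge-replicated padding.
    k = gaussian_kernel(w)
    pad = w // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + w, j:j + w] * k)
    return out

def edge_map(image, t_high=16.75, t_low=8.5, w=3):
    """Canny-style edge map: Gaussian smoothing, gradient magnitude,
    then hysteresis thresholding (non-maximum suppression omitted)."""
    s = smooth(image, w)
    gy, gx = np.gradient(s)
    mag = np.hypot(gx, gy)
    strong = mag >= t_high
    weak = mag >= t_low
    # Keep weak edges only if 8-connected to a strong edge (single pass;
    # np.roll wraps at the borders, a simplification).
    grown = strong.copy()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            grown |= np.roll(np.roll(strong, di, axis=0), dj, axis=1)
    return weak & grown

def fuse_with_os(edges, os_boundary_mask):
    # Logical OR keeps every boundary either detected in the image or
    # specified in the rasterised OS data, as in Fig. 2C.
    return np.logical_or(edges, os_boundary_mask)
```

On a synthetic greyscale image with a vertical intensity step, the edge map fires along the step and stays empty over the flat regions; OR-ing in a rasterised OS boundary mask then restores any boundaries the detector missed.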

within an image. This enables the lightweight fusion of such boundaries with the results of edge detection. An example which demonstrates the potential usefulness of utilising OS data in this way is displayed in Fig. 2. In Fig. 2B the results of executing edge detection on a greyscale version of an input colour aerial image are overlaid in white. However, it can be seen that whilst many region boundaries are successfully detected, portions of a hedge, road and a path are unnoticed. Such boundaries are generally specified in OS data (Fig. 2C). Therefore the results of edge detection are combined with OS data using logical OR, ensuring that all specified and detected edges are included in the output. Whilst logical OR currently yields an acceptable output, it may be desirable in the future to consider combining the edges from OS data and the results of edge detection using, for example, weighted linear combination [24] or the generative model described in [25]. This would subsequently enable the reliability of both sources at detecting certain types of edges to be weighted in a principled fashion.

3.2. Dilation

The morphological process of dilation increases the width of the detected edges discussed in the previous subsection. The dilation of a binary image containing edges, E, with a structuring element, S, is denoted by E ⊕ S and defined as:

E ⊕ S = {z | [(Ŝ)_z ∩ E] ⊆ E},   (1)

where E and S have coordinates in 2-D integer space, Z². This equation is based on reflecting S about its origin to form Ŝ and translating the reflection by z.

For the objective of identifying potentially suitable SLZs, dilation has two main purposes. Firstly, from a safety perspective, assuming detected edges correspond to region or object boundaries, the process of dilation enables a safety buffer to be placed around such boundaries. This safety buffer allows for a margin of error when performing the actual landing. Furthermore, the process of dilation closes small gaps in region boundaries, helping to ensure consistency between detected and real world boundaries.

Secondly, an important component of potential SLZ detection is the identification of areas of sufficient size to contain the UAV. In our previous work [7] and that of Fitzgerald et al. [19,20], a second safety related parameter was included which specifies the size of an area surrounding a candidate pixel or group of pixels¹ which must be free from edges before such candidates can be considered as potential SLZs. This was implemented by passing a mask over each pixel(s) in the image. If the region under the mask did not contain edges then the pixel(s) were flagged as a potentially suitable SLZ. As a post processing step, groups of adjacent flagged pixels were merged to form larger regions which correspond to potentially suitable SLZs.

However, a similar result may be achieved solely using dilation, as a pixel is specified as an edge in the output dilated image if E and Ŝ overlap by ≥ 1 element [26]. We therefore determine the width, Sw, in pixels, of the square structuring element by:

Sw = b/Ir + n/Ir + u/Ir,   (2)

where b is the required buffer size in metres to be placed around each detected edge, Ir is the spatial resolution in metres of a single pixel, and n is the required surrounding neighbourhood size in metres which must be free from edges before a pixel/group of pixels are considered as a potential SLZ. The size of the UAV in metres is represented by u. An example demonstrating the result of dilation is shown in Fig. 2, where a set of edges (Fig. 2C) is dilated resulting in the potential SLZs displayed in Fig. 2D.

¹ Whilst at a higher altitude the size of a real world area represented by a single pixel may be sufficient to contain the UAV, it is likely that at lower altitudes it will be necessary to analyse an area surrounding groups of pixels.

4. Assignment of SLZ safety score

Having identified a set of potential SLZs within the input image which are of sufficient size to contain the UAV and additionally are located within homogeneous regions, the next stage is to assign each SLZ a safety score in the range [0–1]. This safety score is a measure of a SLZ's suitability as a UAV landing area. For UAV safety critical contingency management such as the identification of SLZs, Cox et al. [27] propose 3 main objectives:

1. Minimise the expectation of human casualty.
2. Minimise the expectation of external property damage.
3. Maximise the chance of survival for the aircraft and its payload.

With these priorities in mind we evaluate each potential SLZ and assign it a safety score based on terrain classification, roughness and distance to man-made objects.

4.1. Terrain classification

Intuitively, a key parameter when determining the suitability of a SLZ is its terrain classification; for example, in the majority of scenarios it may be assumed that grass is more suitable for landing in than water. At this stage in development a Maximum Likelihood Classifier (MLC) is used which estimates the probability of a pixel represented by a multivariate feature vector x belonging to class ωi by [28]:

p(x|ωi) = (2π)^(−1/2) |Σi|^(−1/2) exp[−(1/2)(x − mi)^t Σi^(−1) (x − mi)].   (3)

The covariance matrix, Σi, and mean vector, mi, for each class i are calculated during an offline training phase during which a human expert delineates examples of each class.

Within the literature there are many types of features which may be used to assist in discriminating between classes. These include statistical measures derived from grey-level co-occurrence matrices [29], the use of Gabor filters [30] and utilising colour based features such as mean RGB within a pixel's neighbourhood [31]. It is likely that in an actual implementation the input aerial image will be subject to rotation and scaling due to UAV movements. We therefore focus entirely on colour based […] accuracy of 85.6% in comparison to 78.6% when using mean RGB and 79.1% when using mean HSV.

A key advantage provided by the probabilistic nature of the MLC classifier is the ability to leverage prior knowledge, p(ωi), in a principled fashion using Bayes' rule:

p(ωi|x) = p(x|ωi) p(ωi) / p(x).   (4)

For the purposes of SLZ terrain classification such prior knowledge regarding an area's terrain may be inferred from feature codes specified within OS data. Of particular relevance to this stage of the algorithm are the feature codes for roads, paths and vegetation. To enable compatibility between OS feature codes and the probabilities returned by the MLC classifier, an offline knowledge solicitation phase is required during which a human expert quantifies the prior probability of a class, p(ωi), given a specific feature code. A list of the feature codes used for the terrain classification component and the associated prior probabilities is presented in Table 1. There are two significant advantages provided by incorporating expert knowledge in this way.

Firstly, whilst OS data is generally a relatively reliable indicator as to the type of terrain in an area, it is nevertheless historic in nature. As such, new features, for example roads or paths, may be constructed over previously green field areas. Furthermore, terrain changes caused by precipitation may result in previously suitable landing areas becoming unsafe, for example waterlogged fields. Conversely, changes induced by evaporation may result in previously unsafe landing areas such as lakes or streams becoming suitable. Thus by using prior probabilities the likelihood of such changes may be incorporated. Secondly, an OS feature code may be imprecise. For example, the OS feature code ‘1228’ specifies an ‘extent of vegetation’ which may refer to classes ranging from gorse and grass to trees. Thus higher prior probabilities may be assigned across a number of classes given an imprecise feature code.

The output probabilities from Eq. (3) are fused with the relevant priors specified in Table 1 using Bayes' rule and subsequently assigned membership of a class using the decision rule:

x ∈ ωi if p(ωi|x) > p(ωj|x) for all j ≠ i,   (5)

resulting in a set of classified pixels for each SLZ. A SLZ is subsequently
statistical features computed within each pixel's neighbourhood as assigned membership to the class for which the majority of its constitu-
these features are invariant to such movements thereby ensuring that ent pixels belong. When knowledge in the form of OS data was fused
training data accurately reflects the spectral appearance of classes, re- with the probabilities returned by colour based MLC the terrain classifi-
gardless of the UAV's pose. cation accuracy for the labelled dataset increased to 88.1%.
In order to determine an appropriate set of features a manually la- Intuitively different terrain types have varying levels of suitability as
belled dataset was created using aerial imagery captured during manned a SLZ. Therefore each terrain type is assigned a suitability measure in the
flight. A total of 490 samples were created for 9 classes. The terrain clas- range [0–1] by a human expert (Table 2). These values are allocated
ses subsequently used throughout this work are, ω1 = Gorse, ω2 = bearing in mind the overall objectives outlined at the beginning of
Grass, ω3 = Heather, ω4 = Path, ω5 = Scrubland, ω6 = Stone, ω7 = Section 4. Highest priority is given to ensuring that expectation of
Tarmac, ω8 = Trees 1 − (Coniferous), and ω9 = Trees 2 − (Deciduous). human casualty is minimised resulting in terrain type road and path re-
A series of tests were conducted during which 70% of the labelled data set ceiving low suitability values. The suitability value associated with a
was used for training and 30% for testing. Fifty iterations were conducted SLZs terrain classification is input to the terrain suitability membership
for each test and the overall classification results averaged to form a function (Fig. 3) which in turn determines an output classification of ei-
mean classification accuracy for each group of features. It was subse- ther ‘unsuitable’, ‘risky’ or ‘suitable’ along with an associated degree of
quently decided to use mean RGB and mean HSV computed within a 3 membership subsequently influencing the overall safety score which a
× 3 pixel window. For the labelled dataset this provided an overall SLZ receives.
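The classification pipeline of Eqs. (3)–(5) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class statistics and pixel value are invented, a diagonal covariance replaces the full covariance matrix of Eq. (3), and the priors are the Grass/Tarmac entries of Table 1's 'Vegetation' row (restricted to two classes, so only their ratio matters).

```python
import math

# Illustrative per-class statistics (NOT the paper's trained values); a
# diagonal covariance is assumed here for brevity.
CLASSES = {
    "grass":  {"mean": [80.0, 140.0, 60.0], "var": [90.0, 110.0, 80.0]},
    "tarmac": {"mean": [90.0, 90.0, 95.0],  "var": [60.0, 60.0, 70.0]},
}

def likelihood(x, mean, var):
    """p(x | class): independent Gaussian per feature, cf. Eq. (3)."""
    p = 1.0
    for xi, mi, vi in zip(x, mean, var):
        p *= math.exp(-0.5 * (xi - mi) ** 2 / vi) / math.sqrt(2 * math.pi * vi)
    return p

def classify(x, priors):
    """Fuse likelihoods with OS-derived priors via Bayes' rule, Eqs. (4)-(5);
    the evidence p(x) cancels in the argmax and is omitted."""
    post = {c: likelihood(x, s["mean"], s["var"]) * priors[c]
            for c, s in CLASSES.items()}
    return max(post, key=post.get)

pixel = [86.0, 112.0, 78.0]                             # an ambiguous pixel
flat = classify(pixel, {"grass": 0.5, "tarmac": 0.5})   # no OS knowledge
veg = classify(pixel, {"grass": 0.1, "tarmac": 0.03})   # 'Vegetation' priors
```

With flat priors this pixel falls marginally on the tarmac side of the decision boundary; the Vegetation feature-code priors flip it to grass, illustrating how the OS data can override a weakly supported spectral decision.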

Table 1
OS feature codes with assigned prior probabilities of class membership.

OS feature code Assigned prior probabilities of class occurrence

Gorse Grass Heather Path Scrubland Stone Tarmac Trees 1 Trees 2

Road 0.07 0.07 0.07 0.07 0.07 0.09 0.5 0.03 0.03
Track 0.09 0.09 0.09 0.5 0.04 0.09 0.04 0.03 0.03
Vegetation 0.14 0.1 0.14 0.03 0.14 0.12 0.03 0.15 0.15
Parcel print 0.175 0.25 0.06 0.06 0.175 0.1 0.06 0.06 0.06
574 T. Patterson et al. / Image and Vision Computing 32 (2014) 568–578

Table 2
Terrain types with assigned suitability measure.

Class Gorse Grass Heather Path Scrubland Stone Tarmac Trees 1 Trees 2

Suitability measure 0.4 1 0.4 0.1 0.7 0.5 0.1 0.3 0.3
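The mapping from a suitability measure (Table 2) to degrees of membership of 'unsuitable', 'risky' and 'suitable' (Fig. 3) might be sketched with triangular membership functions. The breakpoints below are hypothetical; Fig. 3's exact shapes are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical breakpoints standing in for the curves of Fig. 3.
def terrain_memberships(s):
    return {
        "unsuitable": tri(s, -0.4, 0.0, 0.4),
        "risky":      tri(s, 0.1, 0.5, 0.9),
        "suitable":   tri(s, 0.6, 1.0, 1.4),
    }

m = terrain_memberships(0.7)   # e.g. scrubland's suitability measure
```

A measure of 0.7 would then be partly 'risky' and partly 'suitable', with the degrees of membership carried forward into the fuzzy combination of Section 4.4.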

4.2. Distance from man-made objects

In order to ensure that the expectation of human casualty and damage to property is minimised it is, heuristically speaking, undesirable to land in close proximity to man-made structures such as houses, roads or schools. We therefore utilise OS data to assist in calculating the distance from a SLZ to nearby man-made structures. There are two main advantages provided by exploiting OS data for this task in comparison to relying solely on image processing based techniques. Firstly, an image may contain noise such as fog or shadows, thus obscuring potentially important details such as corners. Secondly, it is possible that a man-made structure may be located 'off-frame', resulting in an erroneous assumption that a SLZ is located in an area free from such structures.

At this stage we focus solely on static man-made structures such as roads, paths and buildings; however, as part of future work it may be desirable to incorporate the ability to detect moving objects such as cars. The Euclidean distance measure is used to compute the distance between a SLZ's centroid position and each point of a man-made structure. The minimum distance in metres is subsequently used as input to the man-made structure distance membership function to determine a fuzzy classification of 'near', 'medium' or 'far' (Fig. 4).
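The minimum-distance computation described above can be sketched as follows; the coordinates are hypothetical, and OS structures are reduced to lists of vertices in a local metric grid.

```python
import math

def min_distance(centroid, structure_points):
    """Minimum Euclidean distance (metres) from a SLZ centroid to any
    vertex of a man-made structure extracted from OS data."""
    cx, cy = centroid
    return min(math.hypot(px - cx, py - cy) for px, py in structure_points)

# Hypothetical road vertices and SLZ centroid (metres, local grid)
road = [(0.0, 0.0), (30.0, 40.0), (60.0, 80.0)]
d = min_distance((33.0, 44.0), road)
```

The resulting distance would then be passed through the membership function of Fig. 4 to obtain the 'near'/'medium'/'far' classification.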
4.3. Roughness

For the purposes of preserving the UAV and its payload it is undesirable to choose areas which may be considered rough, for example stony areas. With natural textures such areas generally exhibit high variance of greyscale values. Following [32] we therefore use the greyscale standard deviation of a SLZ's member pixels as a simple, albeit relatively effective, approach to determining the roughness of a SLZ. An offline training phase is conducted during which a human expert specifies examples of 'very rough', 'rough' and 'smooth' textures from which ranges for class membership are computed (Fig. 5).
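The roughness measure itself is simply a standard deviation over the SLZ's member pixels, as sketched below. The greyscale values are invented, and the thresholds separating 'very rough', 'rough' and 'smooth' are learned offline from expert-labelled examples rather than fixed here.

```python
import math

def roughness(grey_values):
    """Greyscale standard deviation of a SLZ's member pixels (cf. [32])."""
    n = len(grey_values)
    mean = sum(grey_values) / n
    return math.sqrt(sum((g - mean) ** 2 for g in grey_values) / n)

# Hypothetical pixel samples: a uniform grassy patch vs. a stony patch
smooth = roughness([120, 122, 121, 119, 120, 121])
stony = roughness([60, 200, 90, 180, 40, 210])
```

The stony patch's far larger standard deviation is what pushes it towards the 'rough'/'very rough' fuzzy classes of Fig. 5.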
4.4. Combination of attribute values

In order to calculate an overall safety score for a SLZ it is necessary to combine the attribute values of terrain suitability, roughness and distance to man-made features. Fuzzy logic is used for this stage primarily as it enables human experts to linguistically describe rules in an intuitive manner, for example: if terrain suitability = suitable and roughness = smooth and distance to man-made structures is far then a SLZ is safe. The extent to which a SLZ is considered safe, i.e. the output safety score (Fig. 6), is determined by the values of the three input parameters. A fuzzy linguistic rule base containing such rules is created offline. When the fuzzy attributes of a potential SLZ are input into the fuzzy system the relevant rules are fired, aggregated and then defuzzified to give a crisp numeric output which is the SLZ safety weighting. Depending on the mode of operation (Fig. 1), SLZs with a high safety weighting are either stored for future use or used in conjunction with the decision control process described in [8].

5. Modelling options

Whilst it may be desirable to always incorporate knowledge into the SLZ detection algorithm, there may be occasions where such an inclusion is impracticable, as the increased execution time which it requires may be greater than the UAV's remaining battery life. Within this section we model two SLZ detection options, one of which incorporates knowledge, and illustrate how these models may assist with choosing an optimal, viable solution.

We assume a log-normal distribution, which is commonly used when modelling duration data such as execution times [33]. This assumption was made as the profile of the log-normal Probability Density Function (P.D.F.) fitted the histogram of observed times. These times, and subsequently the parameters for each model, μ and σ, were obtained by executing a C++ implementation of the SLZ detection algorithm on a dataset of 1024 colour aerial images captured during manned flight. These aerial images are of the Antrim Plateau region in Northern Ireland and primarily contained mountainous and agricultural terrain. An Ascending Technologies Pelican UAV [34] equipped with an ATOM processing board containing a 1.6 GHz processor and 1 GB RAM was used to obtain the timings. In Table 3 the measured μ and σ parameters are displayed for the major components of each option. It is likely that with a state-of-the-art onboard computer such as the Ascending Technologies Mastermind [35], and with further optimization and refinement of code, it would be possible to process multiple frames per second. However, at this stage in development the presented timings provide a useful indication as to the performance of the SLZ detection algorithm.
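A simplified version of the rule firing, aggregation and defuzzification step of Section 4.4 might look as follows. The rule base, membership values and representative output values are illustrative only, and a weighted-average defuzzifier stands in for whichever defuzzification method the paper's fuzzy system actually uses.

```python
# Simplified Mamdani-style combination: each rule takes the min of its
# antecedent memberships; output sets are reduced to representative
# safety values and defuzzified by a weighted average. The paper's actual
# rule base and membership shapes are not reproduced here.
RULES = [
    # (terrain, roughness, distance) -> representative safety value
    (("suitable", "smooth", "far"), 0.9),    # "... then a SLZ is safe"
    (("suitable", "rough", "far"), 0.6),
    (("risky", "smooth", "medium"), 0.4),
    (("unsuitable", "smooth", "far"), 0.1),
]

def safety_score(terrain, rough, dist):
    """terrain/rough/dist: dicts of fuzzy memberships for one SLZ."""
    num = den = 0.0
    for (t, r, d), value in RULES:
        w = min(terrain.get(t, 0.0), rough.get(r, 0.0), dist.get(d, 0.0))
        num += w * value   # rule strength scales its output value
        den += w
    return num / den if den else 0.0

score = safety_score({"suitable": 0.8, "risky": 0.2},
                     {"smooth": 0.7, "rough": 0.3},
                     {"far": 1.0})
```

Here two rules fire with strengths 0.7 and 0.3, and the defuzzified safety weighting lands between their representative outputs, closer to the more strongly supported 'safe' rule.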

Fig. 3. The membership functions of terrain suitability which determine the extent to which an input terrain is appropriate for landing in.

Fig. 4. The membership functions which determine if an input distance in metres from a SLZ to a man-made structure is considered as 'near', 'medium' or 'far'.

The time required to identify boundaries and perform dilation is relatively constant across all images for each option, with a correspondingly low σ value. It can be seen that there is additional overhead incurred by incorporating OS data for this phase. This overhead is primarily caused by the 'selector' operation, which reads relevant OS data from a large shapefile, and also the process of converting OS vector format data to raster format.

As the process of computing SLZ attributes is conducted for each SLZ, the required execution time is directly related to the number and size of detected SLZs within a colour image. Generally more SLZs were detected when knowledge was not incorporated, as there were fewer region and object boundaries, thereby increasing the number of potential SLZs. Within the dataset a total of 8166 SLZs were detected when knowledge was included, resulting in a mean execution time when assigning a safety score of 0.06 s per SLZ. In comparison, when knowledge was not used a total of 9388 SLZs were detected, with a mean execution time to assign a safety score of 0.03 s per SLZ. Thus the difference in execution time between options, when considered from the perspective of a single SLZ, is greater than indicated in Table 3. It should be further noted that when knowledge is not incorporated into the SLZ detection algorithm the distance from a SLZ to man-made objects is not computed.

A key process in the overall SLZ detection algorithm is determining if it is viable to incorporate knowledge (Fig. 1). Bearing in mind the overall safety objectives outlined at the beginning of Section 4, we consider incorporating knowledge to provide a more robust and reliable option for SLZ detection, and therefore consider it to be optimal in terms of ensuring the safety objectives are met. In order to determine the viability of incorporating knowledge we use the parameters obtained from the experiments to construct a model of the execution time of each option (Fig. 7), which can be used in conjunction with an execution threshold. If the UAV is in normal operation mode this threshold is a soft real-time constraint representing a maximum desired execution time. This is particularly useful if the previous execution of the SLZ detection algorithm required longer than expected, resulting in the formation of a queue of unprocessed images.

Upon receiving an abort command the threshold is based upon the UAV's remaining battery life and represents a hard real-time constraint. Given the models and a required threshold, the probability of an option completing execution can be calculated using the Cumulative Distribution Function, thus enabling the UAV to choose an optimal, viable solution. This is illustrated in Fig. 7 where a threshold of 5 s is input. Using the models it can be computed that option 1 has a probability of 0.492 and option 2 has a probability of 0.66 of completing execution before the threshold is breached.
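The viability check can be sketched with the log-normal CDF. The total-time parameters below are those consistent with the 0.492 and 0.66 probabilities quoted for the 5 s threshold (option 1 including knowledge, option 2 without), and the 0.6 acceptance floor follows the example given in the text.

```python
import math

def lognorm_cdf(t, mu, sigma):
    """P(T <= t) for a log-normal execution-time model with parameters
    mu, sigma of ln(T)."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

# Total-time log-normal parameters consistent with the quoted example
p1 = lognorm_cdf(5.0, mu=1.63, sigma=1.04)      # option 1, incl. knowledge
p2 = lognorm_cdf(5.0, mu=1.254, sigma=0.857)    # option 2, no knowledge

def choose_option(threshold, floor=0.6):
    """Prefer the (optimal) knowledge-based option when it is sufficiently
    likely to finish before the threshold; fall back to option 2."""
    if lognorm_cdf(threshold, 1.63, 1.04) >= floor:
        return 1
    return 2 if lognorm_cdf(threshold, 1.254, 0.857) >= floor else None
```

With a 5 s threshold this reproduces the probabilities of roughly 0.492 and 0.66, so the UAV would fall back to option 2, as in the worked example.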

Fig. 5. The membership functions determining if a SLZ is considered ‘very rough’, ‘rough’ or ‘smooth’.

Fig. 6. The membership functions which determine the overall safety score which a SLZ receives.

In an implementation of this approach to decision making in a real-time system it is likely that an acceptable lower limit would be set. Thus, for example, if option 1 had a probability of completion of more than 0.6 it would be chosen. However, in this example the UAV would subsequently choose to execute option 2. Whilst a constraint of 5 s is used to illustrate the approach to decision making, it should be noted that in a real-world implementation an abort command would be issued when the UAV's remaining battery life falls below a predetermined threshold, which would be in the range of 1–2 min.

6. Experimental results/evaluation

For the purposes of evaluating SLZ detection accuracy, identified SLZs within a subset of 100 aerial images were manually validated by a human expert using a Matlab based GUI. To collate the results we consider a true positive (TP) to be a correctly identified SLZ with a high safety weighting; a true negative (TN) is a correctly identified SLZ which is subsequently assigned a low safety weighting. A false positive (FP) is an area which is incorrectly identified as a SLZ, or is in an unsafe area, yet is assigned a high safety weighting. A false negative (FN) is a suitable SLZ which is assigned a low safety weighting. An overview of the SLZ detection results is displayed in Table 4.

Due to the boundaries specified in OS data, fewer potential SLZs were identified when knowledge was included within the algorithm. However, overall it was found that incorporating knowledge provided a more reliable method of SLZ detection, with 94.7% of potentially suitable SLZs assigned a correct safety score. In contrast, when knowledge was not included there was a significant number of false positives (10.1%), with 86.3% of potential SLZs being assigned a correct safety score.

A fundamental component in the computation of a SLZ's overall safety score is its terrain classification, with each terrain type assigned a suitability measure (Section 4.1). In both cases all of the false negatives were caused by misclassified terrain. When knowledge was fused into the terrain classification component a slightly greater number of SLZs had misclassified terrain. This was not significant, however it was somewhat unexpected, and to some extent most likely caused by priors not accurately reflecting the existent terrain types within small portions of the dataset. Additionally, in both cases an overarching cause of terrain misclassification is likely to be low separability within the chosen feature space for certain classes. Such examples include grass, which may appear similar to deciduous trees, and scrubland, which can appear similar to paths. As paths and trees were both assigned low suitability measures, SLZs which were erroneously classified as these terrain types were subsequently assigned a low safety score.

A key advantage provided by incorporating knowledge is the ability to compute the minimum distance between a SLZ and nearby man-made structures such as roads or paths. SLZs which are very close to such structures are assigned a low safety weighting as they are deemed to present a risk to humans. This resulted in substantially more true negatives when knowledge was incorporated into the SLZ detection algorithm. In comparison, when OS data was not used, 100 SLZs (9.6%) were erroneously assigned a high safety weighting, i.e. false positives, despite their close proximity to such structures. Furthermore, 2 SLZs spanned a man-made path which did not exhibit steep changes in greyscale intensity at its borders and therefore remained unidentified by the edge detection stage. A large shadow region cast over a road by trees also resulted in 2 areas being considered as potential SLZs; however, these were assigned a low safety score as they were classified as coniferous trees. We consider false positives to be the most serious type of error, as they may result in wholly unsuitable landing areas being assigned high safety scores and therefore deemed safe. This may subsequently have potentially catastrophic consequences for humans, property, and the UAV and its payload.
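The headline figures of Section 6 can be recovered, to within rounding, from the Table 4 counts as the fraction of SLZs given a correct safety score, (TP + TN) / total:

```python
def correct_rate(tp, tn, fp, fn):
    """Fraction of detected SLZs assigned a correct safety score."""
    return (tp + tn) / (tp + tn + fp + fn)

with_knowledge = correct_rate(688, 169, 0, 47)       # Table 4, row 1
without_knowledge = correct_rate(807, 86, 104, 37)   # Table 4, row 2
```

These evaluate to roughly 0.948 and 0.864, matching the quoted 94.7% and 86.3% up to rounding and confirming the advantage of the knowledge-based option.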
Table 3
Timings in seconds per image for each SLZ detection option.

Task                            Option 1 (incl. knowledge)   Option 2 (no knowledge)
                                μ        σ                   μ        σ
Potential SLZ detection
  Boundary identification       0.15     0.1                 0.093    0.009
  Dilation                      0.2      0.05                0.144    0.03
Compute SLZ attributes
  Terrain classification        0.81     1.01                0.656    0.82
  Roughness                     0.032    0.03                0.034    0.03
  Distance to man-made objects  0.0006   0.0001              NA       NA
Misc. functions                 0.435    0.03                0.325    0.026
Total time                      1.63     1.04                1.254    0.857

7. Conclusions/future work

Within this paper we have presented an autonomous approach to SLZ detection which utilises colour aerial imagery and additionally has the potential to incorporate external knowledge in the form of OS data. We propose incorporating knowledge into the potential SLZ detection phase by combining the boundaries specified in OS data with the results of edge detection, thus providing a measure of robustness against image noise.
image noise. We further use knowledge in the assignment of safety
Fig. 7. P.D.F. of execution time for each option along with example execution threshold.

We further use knowledge in the assignment of safety scores, fusing OS data with the probabilities of class membership returned by a MLC using Bayes' rule. Additionally, the distance between a SLZ and nearby man-made structures is computed, enabling SLZs which are close to such structures to be assigned a low safety score, thereby helping minimise the expectation of human casualty.

Whilst the boundaries of many man-made features exhibit sudden changes in greyscale intensity and are thus identified during the edge detection stage, it may be useful to consider additional man-made classes that are likely to obstruct a UAV's flight path. One such example is power pylons, which may not be explicitly represented in OS data. Autonomously identifying such structures in 3D space would require see-and-avoid capabilities such as those described in [36] and is therefore likely to form an important part of an overall UAV safety system.

A key potential improvement to the SLZ detection algorithm may be to consider sequences of images, i.e. video, as opposed to considering the images in isolation. When used in conjunction with an estimate of the UAV's motion and feature descriptors such as SIFT, such sequences of imagery would enable moving objects such as animals to be detected, thus forming an important part of a real-world implementation. A further area of future work may be to incorporate additional knowledge in the form of DTMs, thus enabling the slope of a SLZ to be taken into consideration.

Within this paper we show that by using knowledge the accuracy of potential SLZ detection and the subsequent assignment of safety scores can be improved. Overall, 94.7% of detected SLZs were assigned a correct safety score when knowledge was incorporated, in comparison to 86.3% when knowledge was omitted. Whilst incorporating knowledge into the algorithm provides a more reliable method of SLZ detection, there is an additional computational overhead incurred, resulting in increased execution time. Therefore, due to the real-time nature of the problem of SLZ detection, and the potential hard constraints imposed by remaining battery life, it may not always be practicable to include knowledge. We therefore model the execution times of two SLZ detection options and demonstrate how they could be used to assist a UAV in autonomously choosing an optimal, viable solution.

Results are presented based on colour aerial imagery captured during manned flight of the Antrim Plateau region in Northern Ireland. A main advantage of utilising such imagery at this stage is that it is geo-registered, thereby enabling OS data to be readily incorporated into the algorithm. Whilst this imagery has enabled us to perform a proof-of-concept implementation and evaluation of the SLZ detection algorithm, an immediate extension of this work is to implement the approach using real, and thus potentially noisy, UAV aerial imagery. This is likely to present a number of technical challenges; however, it will enable us to further refine and validate the approach, thereby helping to ensure the real-world usefulness of the SLZ detection algorithm. It is hoped that this will ultimately increase the safety of autonomous UAV systems, thus expediting their integration into civilian airspace.

Table 4
Validated results based on SLZs detected within 100 images.

                 TP          TN           FP           FN          Total SLZs
Incl. knowledge  688 (76%)   169 (18.7%)  0            47 (5.3%)   904
No knowledge     807 (78%)   86 (8.3%)    104 (10.1%)  37 (3.6%)   1034

Acknowledgements

This research was supported by a Department for Employment and Learning studentship and through the Engineering and Physical Sciences Research Council (EPSRC) funded Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project under grants EP/F064217/1, EP/F064179/1 and EP/F06358X/1.

References

[1] H. Almurib, Control and path planning of quadrotor aerial vehicles for search and rescue, no. 2, 2011, pp. 700–705 (Tokyo, Japan).
[2] M. Bryson, A. Reid, F. Ramos, S. Sukkarieh, Airborne vision-based mapping and classification of large farmland environments, J. Field Robot. 27 (5) (2010) 632–655.
[3] S. Cameron, G. Parr, R. Nardi, S. Hailes, A. Symington, S. Julier, L. Teacy, S. McClean, G. Mcphillips, S. Waharte, N. Trigoni, M. Ahmed, SUAAVE: combining aerial robots and wireless networking, Unmanned Air Vehicle Systems, no. 01865, 2010, pp. 7–20 (Bristol).
[4] C. Haddon, Whittaker, UK-CAA policy for Light UAV Systems, UK Civil Aviation Authority, London, 2004.
[5] W.Y. Ochieng, K. Sauer, D. Walsh, G. Brodin, S. Griffin, M. Denney, GPS integrity and potential impact on aviation safety, J. Navig. 56 (1) (2003) 51–65.
[6] T. Patterson, S. McClean, P. Morrow, G. Parr, Utilizing geographic information system data for unmanned aerial vehicle position estimation, 2011 Canadian Conference on Computer and Robot Vision, IEEE, St. Johns, Newfoundland, 2011, pp. 8–15.
[7] T. Patterson, S. McClean, P. Morrow, G. Parr, Towards autonomous safe landing site identification from colour aerial images, 2010 Irish Machine Vision and Image Processing Conference, Cambridge Scholars Publishing, Ireland, 2010, pp. 291–304.
[8] T. Patterson, S. McClean, G. Parr, P. Morrow, L. Teacy, J. Nie, Integration of terrain image sensing with UAV safety management protocols, The Second International ICST Conference on Sensor Systems and Software, S-Cube 2010, Springer, Miami, Florida, USA, 2010, pp. 36–51.
[9] C. Sharp, O. Shakernia, A vision system for landing an unmanned aerial vehicle, International Conference on Robotics and Automation, IEEE, Seoul, Korea, 2001, pp. 1720–1727.
[10] S. Saripalli, J. Montgomery, G. Sukhatme, Vision-based autonomous landing of an unmanned aerial vehicle, IEEE International Conference on Robotics and Automation, 2002, Proceedings ICRA'02, vol. 3, 2002, pp. 371–380.
tion, 2002, Proceedings. ICRA'02, vol. 3, 2002, pp. 371–380.
[11] T. Merz, S. Duranti, G. Conte, Autonomous landing of an unmanned helicopter based on vision and inertial sensing, 9th International Symposium on Experimental Robotics, Springer, Singapore, 2004, pp. 57–65.
[12] S. Lange, N. Sunderhauf, P. Protzel, A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments, Advanced Robotics, 2009, IEEE, Munich, Germany, 2009, pp. 1–6.
[13] K.W. Sevcik, N. Kuntz, P.Y. Oh, Exploring the effect of obscurants on safe landing zone identification, J. Intell. Robot. Syst. 57 (1–4) (2009) 281–295.
[14] S. Scherer, L. Chamberlain, S. Singh, First results in autonomous landing and obstacle avoidance by a full-scale helicopter, IEEE International Conference on Robotics and Automation, IEEE, St. Paul, Minnesota, USA, 2012.
[15] T. Templeton, D.H. Shim, C. Geyer, S. Sastry, Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft, Proceedings of the IEEE International Conference on Robotics and Automation, 2007, pp. 1349–1356.
[16] A. Cesetti, E. Frontoni, A. Mancini, P. Zingaretti, S. Longhi, A vision-based guidance system for UAV navigation and safe landing using natural landmarks, J. Intell. Robot. Syst. 1–4 (2010) 233–257.
[17] D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 30 (2) (2004) 91–110.
[18] A. Cesetti, E. Frontoni, A. Mancini, P. Zingaretti, Autonomous safe landing of a vision guided helicopter, Mechatronics and Embedded Systems and Applications (MESA), IEEE/ASME International Conference on, IEEE, Qingdao, China, 2010, pp. 125–130.
[19] D. Fitzgerald, R. Walker, D. Campbell, A computationally intelligent framework for UAV forced landings, IASTED Computational Intelligence Conference, Calgary, Canada, 2005, pp. 187–192.
[20] D. Fitzgerald, R. Walker, D. Campbell, A vision based emergency forced landing system for an autonomous UAV, Australian International Aerospace Congress, Melbourne, Australia, 2005, pp. 60–85.
[21] L. Mejias, D. Fitzgerald, P. Eng, L. Xi, Forced landing technologies for unmanned aerial vehicles: towards safer operations, in: M.T. Lam (Ed.), Aerial Vehicles, 1st edition, In-Tech, Kirchengasse, Austria, 2009, pp. 415–442 (Ch. 21).
[22] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. (1986) 679–698.
[23] Ordnance Survey Northern Ireland, OSNI Large-Scale Technical Specification, http://www.osni.gov.uk/large-scale_spec.pdf (Accessed September 2012).
[24] I. Oruç, L.T. Maloney, M.S. Landy, Weighted linear cue combination with possibly correlated error, Vision Res. 43 (23) (2003) 2451–2468.
[25] C. Zhou, B.W. Mel, Cue combination and color edge detection in natural scenes, J. Vis. 8 (2008) 1–25.
[26] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 3rd edition, Pearson Education, New Jersey, 2008.
[27] T.H. Cox, C.J. Nagy, M.A. Skoog, I.A. Somers, Civil UAV Capability Assessment, Tech. Rep., NASA, December 2004.
[28] J. Richards, J. Xiuping, Remote Sensing Digital Image Analysis, 3rd edition, Springer, New York, 1999.
[29] R.M. Haralick, Statistical and structural approaches to texture, Proc. IEEE 67 (5) (1979) 786–804.
[30] L. Chen, G. Lu, D. Zhang, Effects of different Gabor filter parameters on image retrieval by texture, Multimedia Modelling Conference, Brisbane, Australia, 2004, pp. 273–278.
[31] B. Majidi, A. Bab-Hadiashar, Real time aerial natural image interpretation for autonomous ranger drone navigation, Proceedings Digital Image Computing: Techniques and Applications, IEEE, Australia, 2005, pp. 448–453.
[32] A. Howard, H. Seraji, Multi-sensor terrain classification for safe spacecraft landing, IEEE Transactions on Aerospace and Electronic Systems, vol. 40, 2004, pp. 1122–1131.
[33] H. Pacheco, J. Pino, J. Santana, P. Ulloa, J. Pezoa, Classifying execution times in parallel computing systems: a classical hypothesis testing approach, in: C. San Martin, S.-W. Kim (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Vol. 7042 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2011, pp. 709–717.
[34] Ascending Technologies, AscTec Pelican, http://www.asctec.de/uav-applications/research/products/asctec-pelican/ (Accessed January 2013).
[35] Ascending Technologies, AscTec Mastermind, http://www.asctec.de/uav-applications/research/products/asctec-mastermind/ (Accessed July 2013).
[36] T. Zsedrovits, A. Zarandy, B. Vanek, T. Peni, J. Bokor, T. Roska, Visual detection and implementation aspects of a UAV see and avoid system, 2011 20th European Conference on Circuit Theory and Design (ECCTD), IEEE, 2011, pp. 472–475.
