
Cornell University Unmanned Air Systems

2019 AUVSI SUAS Competition


Orion Technical Design

Abstract
Cornell University Unmanned Air Systems (CUAir) designed, manufactured, and programmed its most innovative,
customized, and ambitious Unmanned Aerial System (UAS) to date. With its fully composite aircraft design,
modular distributed imaging system, and intuitively-tailored autopilot, the 2019 Orion system surpasses its
predecessors in efficiency, flight performance, and precision. The joint effort of 60 undergraduate students from
the fields of Computer Science, Electrical Engineering, Mechanical & Aerospace Engineering, Mathematics, and
Physics, Orion is the culmination of interdisciplinary design and collaboration. Throughout the year, CUAir
rigorously tested the system to ensure the reliability of each component.
1. Systems Engineering Approach

1.1. Mission Requirement Analysis

The 2019 Orion system was designed, manufactured, and tested to perform an autonomous delivery mission as safely, reliably, and accurately as possible. Capable of autonomous waypoint navigation, obstacle avoidance, image processing, target recognition, and air delivery, the Orion system has been systematically engineered to fulfill and exceed all AUVSI SUAS requirements. The following sections describe mission tasks, identify requirements for successful mission execution, and disclose the system's comprehensive design rationale.

The mission demonstration simulates a package delivery operation. After deployment, the system must fly to the delivery area, locate the delivery point, safely and accurately drop the payload, and move the package to the target location on the ground. Table 1 describes the goals of each mission task and the requirements CUAir and the UAS must achieve to reach each goal.

Task: Timeline (10%)
Description:
• Complete the mission in minimal flight and post-processing time (80%)
• Refrain from taking a timeout (20%)
Successful Mission Execution Requirements:
• Well-practiced full mission tests
• Completion of the mission as quickly and safely as possible

Task: Autonomous Flight (20%)
Description:
• Fly autonomously with minimal manual takeovers (40%)
• Fly a waypoint sequence within 100 ft of each waypoint (10%)
• Hit waypoints in a sequence as accurately as possible (50%)
Successful Mission Execution Requirements:
• A fully functioning autopilot capable of path planning, autonomous takeoff, and autonomous landing
• An agile airframe that can be easily maneuvered to hit waypoints

Task: Obstacle Avoidance (20%)
Description:
• Avoid stationary obstacles, i.e., cylinders with radius 30 ft to 300 ft and no constraint on height
Successful Mission Execution Requirements:
• An autopilot ground station algorithm adept at recalculating flight paths
• An autopilot system capable of relaying telemetry data to the Interoperability Server at a rate of 1 Hz

Task: Object Detection, Classification, Localization (20%)
Description:
• Identify the correct shape, shape color, alphanumeric, alphanumeric color, and alphanumeric orientation of objects (20%)
• Accurately provide the GPS location of objects (30%)
• Submit objects during the first flight via the Interoperability System (30%)
• Submit objects autonomously (20%)
Successful Mission Execution Requirements:
• A system capable of manual detection and classification of objects and object submission
• A system capable of autonomous detection and classification of objects and object submission
• A mechanically stable imaging system and high-resolution camera
• An algorithm that can use telemetry metadata to accurately compute the location of each object

Task: Air Drop (20%)
Description:
• Using no means of propulsion, drop and land the UGV with the water bottle gently and without damage (50%)
• On the ground, navigate the UGV with the water bottle to the correct location and stop (50%)
Successful Mission Execution Requirements:
• A mechanism capable of reliably deploying the payload in a controlled fashion without causing damage
• An algorithm that can calculate the optimal release time
• A UGV system capable of driving autonomously to a given location

Task: Operational Excellence (10%)
Description:
• Exercise operational professionalism, communication between members, reaction to system failures, and attention to safety
Successful Mission Execution Requirements:
• Gracious professionalism and practice, with clear communication and alertness from all team members

Table 1: Mission Requirement Analysis



The Autonomous Flight and Obstacle Avoidance tasks necessitate an agile aircraft equipped with an autopilot capable of precisely calculating waypoint paths and steering clear of all stationary obstacles. For Object Detection, Classification, and Localization (ODCL), the UAS must accommodate a high-resolution camera capable of capturing images of sufficient resolution to identify objects while imaging the entire search area. The task also requires stabilization of the camera, a high-speed data link, and a server to process images. Furthermore, the Air Drop task demands a system to reliably deploy the payload, control its descent autonomously and without use of propulsion, and lower it safely to the ground. To move the payload to the correct destination, the team uses an unmanned ground vehicle (UGV) capable of autonomous navigation over natural terrain. Regarding the Timeline and Operational Excellence tasks, full mission tests must be conducted to ensure each system is fully functional and to train all mission personnel.

Throughout the aircraft design period, the team worked to balance the trade-off between agile performance for the Autonomous Flight and Obstacle Avoidance tasks and greater stability for the ODCL task. Additionally, significant changes were made to the Air Drop system as a result of a new aircraft design and updated mission requirements. Further, while building a reliable and robust system, the team kept physical weight in consideration.

1.2. Design Rationale

The Orion system was designed to maximize mission performance. The team identified the importance of each mission task and made decisions according to the team's skill set and budget limits. Figure 1 shows the flow of design decisions.

Figure 1: Flow of design decisions; thicker lines indicate a higher priority dependency.

The first design decision was the choice of navigation system, due to its importance in achieving mission tasks such as Autonomous Flight and Obstacle Avoidance. The Pixhawk 2.1 autopilot, chosen for its customizability, large support community, and familiarity to the team, provides accurate and reliable autonomous navigation capabilities. The autopilot's open-source nature also enables the team to write custom firmware to improve waypoint accuracy.

The second decision was the choice of an imaging system. The Sony R10C camera was chosen for its improved image quality; compared to the uEye LE from last year's system, the R10C is heavier, but higher in image quality and capable of hardware triggering, which allows for improved ODCL in terms of resolution and geotag accuracy. In the past, large latency between the camera and ground system decreased geotag accuracy. However, with the new capabilities of the R10C, geotag accuracy will be significantly improved. A gimbal was also necessary to control the camera for the point-at-ground and off-axis object capabilities.

In past years, the team has had difficulties obtaining target images of sufficient resolution to pass into the ODCL system. As a result, a new flight plan was devised which includes two passes of the search grid: one with the camera at minimum zoom, and one with the camera at maximum zoom. The goal of the first pass is solely to identify regions of interest (ROIs), so the system flies over the search grid as the gimbal remains pointed at the ground. The aircraft will fly at a relatively high altitude of 450 feet at minimum zoom. During the second pass, the camera is at maximum zoom, and the gimbal points directly at ROIs that were identified as potential targets during the first pass in an attempt to obtain high-resolution images of the objects in question. With these two changes, the imaging system chosen exceeds the necessary capabilities.

The third decision was the design of the air delivery system. This year, in addition to dropping a payload, a second autonomous vehicle must navigate to the ground, without means of propulsion, and then drive to a final location. The UGV was designed to minimize form factor in an effort to decrease the risk of drop malfunctions. Specifically, the UGV is shaped cylindrically to avoid obstructions as it is being dropped from the plane. Additionally, the UGV wheel materials were chosen to optimize impact performance upon landing.

The fourth decision was the choice of the network and telemetry systems that would be used to enable Actionable Intelligence and ODCL in flight. To fulfill the networking requirements of the autopilot and the camera, wireless data protocols and antennae were chosen to enable fast and accurate data transmission between the aircraft and the ground station. These links are established with a Ubiquiti Rocket-AC pair for WiFi and an RFD900+ radio pair for telemetry.



The fifth decision was the design of the payload controllers. To meet the requirements of the camera, gimbal, communication, airdrop, and autopilot systems, the payload controllers must be able to provide power to all of these components, communicate with the ground station, control the airdrop system and UGV descent, and point the gimbal to assist in the discovery of objects. The team designed custom circuit boards to fit these requirements.

The sixth design decision was the aerodynamic design of the system's airframe. The aircraft must support a fuselage carrying the camera, gimbal, payloads, airdrop package, and power supply. Additionally, the aircraft requires improved agility for Autonomous Flight and Obstacle Avoidance, accurate waypoint capture, speed for Timeline restrictions, and stability for better image quality. To optimize overall mission success, a large wing surface area was chosen to slow flight, improving waypoint accuracy and image quality. Deviating from past CUAir aircraft, Orion employs both rolling takeoff and landing, as opposed to catapult usage. Although this approach makes the UAS heavier due to the extra weight of landing gear, it allows for faster testing times and safer landings. Cruising at a height of 450 ft, the Orion system flies at the highest altitude in CUAir history to accommodate the new higher-quality imaging system. The fuselage is made of a honeycomb foam core, allowing for a lighter composition. Although honeycomb foam is more expensive, the weight decrease was worthwhile.

Lastly, mission personnel were decided based on experience operating task-related functions and familiarity with the design.

2. System Design

2.1. Aircraft

For this year's competition, CUAir decided to rethink its entire platform for its new aircraft, Orion, in order to enhance flight performance in several categories. The aircraft geometry was designed in consideration of endurance, flight speed, maneuverability, mass, stability and control, and manufacturability requirements. New fabrication methods were tested to create a stronger, lighter, and more advanced aircraft, culminating in a hollow carbon fiber fabrication method to replace the previous solid foam and balsa method. A twin-boom aircraft configuration with two motors and vertical stabilizers was developed for redundancy and modularity and to allow for optimal camera placement. Using Athena Vortex Lattice (AVL), we then confirmed the stability of our aircraft in its dynamical modes (Figure 3).

Figure 2: Orion Aircraft Projections, Dimensions Shown (in)

Figure 3: AVL vortex lattice simulating calculated load distribution.

The fuselage skin consists of a single layer of 3k twill weave carbon fiber, one layer of honeycomb-structure aramid sandwich core, and two layers of 2 oz fiberglass. The sandwich core was chosen due to its low density and flexibility, which allows it to form curves with tight radii. The carbon fiber was chosen due to its conformability and high strength-to-weight properties. However, the carbon fiber also presented challenges, such as obstructing antenna signals and an increased need for safety measures during manufacturing.



Figure 4: Fuselage Inner Section

Figure 5: Airframe Design Close-up. Labels refer to: (1) Flaps, (2) Ailerons, (3) Polyhedral Support and Alignment, (4) Interior Wing Ribs, (5) Modular Wing to Fuselage Connection, (6) Removable Propellers, (7) Fully Integrated Boom Section in Wing, (8) Load Bearing Carbon Fiber Spars, (9) Detachable Rear Tail Boom, (10) Slide-In Boom to Vertical Connection, (11) Rudders, (12) Elevator

All flight surfaces and fuselage skin components underwent loading tests at a safety factor of 2 to ensure that they were capable of supporting all payloads and forces in flight. Drop tests were conducted by loading the fuselage with two pounds in the gimbal dome and seven pounds in the electronics bay section to approximate 1.5 times the weight distribution during flight. The fuselage was then dropped from both a height of 0.2 meters and 1 meter with the front landing gear being the first point of contact. These drops were then repeated for the rear landing gear, replicating all possible landing conditions. The fuselage was checked for damage after each drop, and footage was reviewed to ensure that the correct landing gear, front or rear, was the first point of contact; the test concluded once damage was sustained. Additionally, the landing gear was rigorously tested with 1 meter drops on both grass and asphalt. The tests were conducted for flared landings, where the nose was pitched up between 15 and 20 degrees, and for flat landings, to ensure the capability of surviving a rough landing with minimal damage. All drops were successful, as the fuselage did not sustain damage.

To verify the connections between the nose gear and fuselage, further tests were conducted using 5 lb sandbags attached to the landing gear. The bags were attached to the shaft of the landing gear and hung for 1 minute while the firewall was held upright. Every minute, an additional bag was added, one at a time, until a total of 15 lbs was reached. Then, the firewall sat with the suspended 15 lb weight for an additional 5 minutes. The tests resulted in no slippage of the shaft within the mounted bearings or the brass collar during any part of the process.

Figure 6: Orion Airframe Design


Flight Surfaces:

                      Wing       Vertical Stab.   Horizontal Stab.
Airfoil               NACA6412   NACA0012         NACA0012
Span (ft)             9.84       1.14             2.5
Area (ft^2)           13.95      0.969            2.056
Incidence Angle (°)   1.860      0.000            -3.000
Aspect Ratio          6.944      1.338            3.048
Mean Aero Chord (ft)  1.42       0.85             0.82

Aircraft Dimensions and Specifications / Performance:

Length (ft)            6.68      Cruise Speed (ft/s)         60.37
Width (ft)             9.61      Max Speed (ft/s)            98.4
Height (ft)            0.96      Min Speed (ft/s)            32.8
Weight (lb)            35.27     Avg. Min Turn Radius (ft)   49.21
Prop. Size (in x in)   20 x 10   Flight Hours (hrs)          0.75
Payload Weight (lb)    21.3      Assembly Time (min)         10

Table 2: Aircraft Specifications and Performance

2.2. Autopilot

The Orion system uses the Pixhawk 2.1 autopilot along with a full sensor suite for autonomous navigation. The Pixhawk autopilot has plug-and-play support for common sensors, making it easy and highly affordable to test many systems simultaneously on appropriate platforms before integration into the Orion system. This enabled the team to tune the autopilot for autonomous flight while testing obstacle avoidance algorithms on a test airframe. The Pixhawk autopilot also makes it easier to meet competition safety requirements with numerous features, such as redundant power and support for redundant telemetry radios. The full autopilot system diagram is shown in Figure 9.

The Pixhawk runs a modified ArduPilot 3.8 firmware. ArduPilot was the best choice for the Pixhawk since it is the most popular open-source firmware currently available, provides developer support, and has many essential flight features required for autonomous flight. ArduPilot is capable of waypoint navigation, autonomous takeoff, and autonomous landing, so the team will not need to incur any pilot takeover penalties.

The system is configured for autonomous rolling takeoff. To ensure that the aircraft lands safely, the system is able to touch down consistently using only GPS and a properly tuned barometer.

Another important advantage of ArduPilot is that it is open source, which allows the team to customize the autopilot firmware. Two key customizations were implemented: autonomous initialization of flaps prior to takeoff, which provides additional lift at the beginning of the launch; and precise flight termination maneuvers, which allow the system to meet AUVSI flight termination requirements. Additionally, the team developed an improved waypoint path following algorithm in place of the ArduPilot L1 Navigation Controller. The algorithm uses properties of Bézier spline curves to plan smooth curved paths that drastically reduce the curvature of the system's trajectory, allowing the aircraft to take smoother turns and hit waypoints more accurately. These spline curves were developed using Theia, the team's previous aircraft. This autopilot improvement, combined with the increased maneuverability of the Orion airframe, gives the Orion system an average waypoint hit radius of 10 feet. A side-by-side comparison of the system's path using the L1 Controller and using CUAir's spline controller is shown in Figure 8.

Figure 8: The left image shows the system flying a sequence of waypoints using the default ArduPilot L1 Controller. The right image shows the system flying the same waypoints using CUAir's improved spline controller.
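To illustrate the idea behind the spline controller, the sketch below joins a waypoint sequence with cubic Bézier segments whose control points are placed along chords to neighboring waypoints. This is a minimal Python sketch of the general technique, not the team's firmware code; the waypoint values, the tension constant, and the Catmull-Rom-style tangent heuristic are assumptions made for the example.

```python
import numpy as np

def bezier_path(waypoints, samples_per_segment=20, tension=0.25):
    """Illustrative only: join waypoints with cubic Bezier segments.

    Control points are chosen from neighboring waypoints so the path
    enters and exits each waypoint smoothly, keeping curvature low
    enough for a fixed-wing aircraft to track with gentle banking.
    """
    wp = np.asarray(waypoints, dtype=float)
    path = []
    for i in range(len(wp) - 1):
        p0, p3 = wp[i], wp[i + 1]
        prev = wp[i - 1] if i > 0 else p0          # neighbor behind
        nxt = wp[i + 2] if i + 2 < len(wp) else p3  # neighbor ahead
        c1 = p0 + tension * (p3 - prev)  # pulls the exit direction forward
        c2 = p3 - tension * (nxt - p0)   # pulls the entry direction backward
        t = np.linspace(0.0, 1.0, samples_per_segment)[:, None]
        # Cubic Bezier: B(t) = (1-t)^3 p0 + 3(1-t)^2 t c1 + 3(1-t) t^2 c2 + t^3 p3
        seg = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * c1
               + 3 * (1 - t) * t ** 2 * c2 + t ** 3 * p3)
        path.append(seg)
    return np.vstack(path)

# Example: a 90-degree corner that the smoothed path rounds off.
smooth = bezier_path([(0, 0), (400, 0), (400, 400)])
```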



CUAir developed a custom autopilot Ground Control Station (GCS), shown in Figure 10. The custom GCS enables the autopilot operator to meet the AUVSI mission requirements of connecting to the interoperability server, relaying telemetry data, and displaying obstacles, all without leaving the GCS interface. By combining GCS features such as aircraft vitals and waypoint modification into a single interface, the GCS operator is able to optimize for mission objectives more efficiently than with an alternative GCS. For example, the GCS operator can view the aircraft's position relative to stationary obstacles and determine whether it is necessary to modify waypoints. The GCS displays the areas of the search grid that have not been imaged, helping the GCS operator ensure the UAS has collected images of every object. Additionally, as the aircraft flies the waypoint path, the GCS indicates the point value achieved for each waypoint, helping the GCS operator determine whether the waypoint path must be flown again. The team also loads waypoints from the interoperability server to more quickly determine the optimal flight plan for hitting waypoints.

Figure 10: Autopilot GCS
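As an illustration of the telemetry-relay requirement (position updates to the Interoperability Server at 1 Hz), the sketch below reads MAVLink position messages and forwards them over HTTP. It is a simplified stand-in for the GCS's actual relay: the server address, credentials, endpoint paths, and JSON field names are modeled on the published AUVSI interop conventions and should be treated as assumptions here.

```python
import time
import requests
from pymavlink import mavutil

INTEROP = "http://10.10.130.10:8000"  # assumed interop server address

session = requests.Session()
session.post(f"{INTEROP}/api/login",
             json={"username": "testuser", "password": "testpass"})

# The autopilot telemetry stream arrives over the RFD900+ serial link.
mav = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)
mav.wait_heartbeat()

while True:
    msg = mav.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    session.post(f"{INTEROP}/api/telemetry", json={
        "latitude": msg.lat / 1e7,               # 1e7 degrees -> degrees
        "longitude": msg.lon / 1e7,
        "altitude_msl": msg.alt / 1000 * 3.281,  # mm -> feet MSL
        "uas_heading": msg.hdg / 100,            # centidegrees -> degrees
    })
    time.sleep(1.0)  # the competition requires at least 1 Hz
```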
The team evaluated the autopilot system both virtually in software simulations and in the field on a test platform before integration into the Orion system. The virtual tests required the autopilot to perform in a simulated environment, using simulated autopilot hardware and a flight dynamics model provided by the JSBSim library. The Software in the Loop (SITL) simulation environment provides the ability to rapidly perform many tests without the man-hours required to test on a physical aircraft. Four simulated flights are required to prove that new parameters or slight modifications are ready to move forward in testing. Once the simulated tests passed all requirements, the feature was integrated into a hardware test platform. The team used a HobbyKing Bix3 airframe, a low-cost and easy-to-fly foam platform. With this hardware, the team tested the actual autopilot hardware and its interfaces to motors and sensors. New parameters or small changes to the system were verified by successfully completing four test flights across two different days. A summary of testing completed in the SITL, on the Bix3, and on Orion is shown in Table 3.

Before every flight with the Bix3 or the Orion system, the team tested all on-board sensors, including the gyroscope, accelerometer, airspeed sensor, and GPS, to confirm that no damage had occurred during transport or previous flights.

                                       SITL     Bix3   Orion
Total Flight Hours                      630       25       9
Total Autonomous Flight Hours           630       20       6
Total # Waypoints Hit               550,000    4,900   2,500
Average Waypoint Miss Distance (ft)       8       10      11

Table 3: Summary of Autopilot Testing

2.3. Obstacle Avoidance

CUAir developed an Obstacle Avoidance System (OAS) for the Obstacle Avoidance task. The waypoints are automatically generated, but can be edited by mission personnel. While an autonomous system has an advantage over a human operator in terms of speed of path rerouting, a human is necessary to account for safety concerns and to ensure the augmented paths allow for correct execution of the mission requirements. However, it would be inefficient to have a human manually edit every waypoint.

For the OAS algorithm, the team ultimately decided to use an artificial potential field algorithm. The algorithm works by modeling the airspace and obstacles as physical entities: obstacles are treated like hills, and waypoints are considered valleys.

Figure 9: Autopilot Full System



The factors involved in choosing the algorithm included the total distance of the path, the number of waypoints, the maximum angle of the path, the number of collisions, maximum coverage for the MDLC, and proximity to obstacles. The potential flow algorithm finds the optimal path that combines attraction to waypoints and repulsion from obstacles, producing short paths that minimally impact coverage. This algorithm has the advantage of demanding minimal computation compared to other algorithms. Other algorithms that were considered during the research process include variants of rapidly-exploring random trees, Dijkstra's algorithm, and gradient descent.
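To make the hills-and-valleys analogy concrete, the sketch below shows one common way a single artificial potential field step can be computed: an attractive pull toward the next waypoint plus an inverse-square push away from each obstacle cylinder. It is a generic illustration of the technique, not CUAir's OAS code; the gain constants and step size are arbitrary.

```python
import numpy as np

def potential_step(pos, waypoint, obstacles, k_att=1.0, k_rep=5e4, step=10.0):
    """One step of a 2D artificial potential field planner (illustrative).

    pos, waypoint: (x, y) in feet; obstacles: list of (x, y, radius).
    Waypoints act as valleys (attraction); obstacles act as hills
    (repulsion), so the path rolls downhill toward the waypoint.
    """
    pos = np.asarray(pos, dtype=float)
    force = k_att * (np.asarray(waypoint, dtype=float) - pos)  # downhill pull
    for ox, oy, r in obstacles:
        offset = pos - np.array([ox, oy])
        dist = max(np.linalg.norm(offset) - r, 1.0)  # distance to cylinder wall
        force += k_rep * offset / (np.linalg.norm(offset) * dist ** 2)
    # Move a fixed step along the net force (assumes force is nonzero).
    return pos + step * force / np.linalg.norm(force)

# March from the current position toward a waypoint around a 300 ft cylinder.
p = np.array([0.0, 0.0])
for _ in range(200):
    p = potential_step(p, waypoint=(2000.0, 0.0), obstacles=[(1000.0, 50.0, 300.0)])
```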
Throughout the development of the OAS, the team conducted extensive software unit testing to validate the correctness and reliability of the system. The team also performed extensive testing with the SITL simulation to ensure proper function of the OAS features of the GCS. Over 80 hours of testing in the SITL were conducted to ensure correct functionality of the OAS. This included hitting over 7,500 waypoints and avoiding over 3,000 obstacles. Once the team was confident in the correctness and reliability of the system, over a dozen test flights with the Bix3 system were conducted to determine the system's efficacy in a real-world environment. The Bix3 system successfully avoided 63 stationary obstacles. The OAS was then integrated with the Orion system, on which the team conducted the rest of the flight tests, as shown in Table 4.

Total Flight Hours with OAS Active          4
Total # Static Obstacles Tested Against    98
Total # Static Obstacles Avoided           94

Table 4: Summary of OAS tests in Orion

2.4. Imaging System

The Sony R10C has an E384/6 mapping sensor package that makes data collection and post-processing quicker and easier. The R10C has no power timeout, so the camera remains turned on for the entire duration of the flight. Paired with the Companion Computer, a Raspberry Pi Model 3B, its images are automatically geotagged in flight and saved to an external thumb drive.

2.4.1. Gimbal

The Orion system features a custom two-axis gimbal, composed of aluminum and carbon fiber, that is housed in the nosecone of the plane. The Sony R10C camera housed within the gimbal is an expensive payload that needs to be protected to ensure that the image quality is minimally affected by the vibration of the system. The gimbal must also fit into the nose of the plane and move freely with no visual or physical obstruction from the skin. The gimbal motors are controlled by a SimpleBGC 32-bit Extended gimbal controller, which receives instructions directly from the on-board computer. The gimbal system protects and stabilizes the R10C camera and enables precise targeted image capture.

2.5. Object Detection, Classification, Localization

The Imaging Ground Server (IGS) enables manual and autonomous agents to complete the Object Detection, Classification, and Localization task. The IGS interfaces with the on-board computer over the communications network described in Section 2.6. The full ODCL system is diagrammed in Figure 12.



Figure 11: Example image of a green 'W' on a black pentagon captured from 500 feet

Figure 12: Diagram of the Object Detection, Classification, and Localization system

This year's flight plan consists of two passes of the search grid. The first pass has the camera at minimum zoom in order to obtain wide-angle shots. Normally, the camera is in idle mode, its standard resting position. During this first pass, the camera is in fixed mode, meaning it is retracted and stationary, pointed with the face of the lens parallel to the ground. In this mode, the camera takes fixed photos and sends telemetry to the ground to determine GPS coordinates with MDLC. Then, on the second pass, the camera obtains high-resolution targets that are easier to tag manually. In this second pass, the camera is in tracking mode: it is at maximum zoom and pointed directly at regions of interest that were identified as potential targets during the first pass. The plane flies at the cruising altitude of 450 feet throughout. During the first pass, with the camera zoomed to the minimum level, images cover around 422,500 square feet and each target has a resolution of around 800 square pixels. During the second pass, with the camera zoomed to the maximum level, images cover around 5,500 square feet and each target has a resolution of around 70,000 square pixels.

In addition to the two-pass system, CUAir employs a number of other techniques to ensure the best possible image quality. First, the team mounted the camera with vibration-damping screws to minimize image blur from high-speed vibrations and the rolling shutter effect. Post-processing techniques on the ground allow operators to adjust the contrast and saturation of photos during classification, providing better distinction between colors in images. Another improvement comes from the hardware triggering enabled by the software used to run the camera. It enables communication with the Pixhawk to report the plane's telemetry at the moment an image is taken and therefore helps to determine more accurate GPS locations. In the past, timing was an issue that caused difficulties when matching telemetry data to the time a photo was taken; the hardware triggering feature now avoids this issue. Also, in an effort to minimize latency between the camera and gimbal, a Camera-Gimbal server was developed. In addition to maintaining zoom and controlling the gimbal and camera, the server also tracks the camera mode (idle, fixed, tracking) and determines which mode to use depending on information received from the ground server. Further, it allows for immediate tracking of the next object once an image capture has been completed. Finally, the Camera-Gimbal server has a database of states that allows for quick responses to emergency situations, such as if the system shuts down.

$$\theta_{Obj} = (\psi + \theta_{Obj/Img}) \bmod 360 \qquad (1)$$

To find the orientation of the alphanumeric heading, $\theta_{Obj}$, the IGS system computes Equation 1, where $\psi$ is the yaw of the system relative to north at the time of image capture and $\theta_{Obj/Img}$ is the yaw of the object with respect to the top of the image.
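As a worked instance of Equation 1 (with invented example values, not competition data): for an aircraft yaw of ψ = 275° and an object rotated θ_Obj/Img = 120° from the top of the image, the object's heading is (275 + 120) mod 360 = 35°. A one-line sketch:

```python
def object_heading(psi_deg, theta_obj_img_deg):
    """Equation 1: object heading = aircraft yaw plus in-image yaw, mod 360."""
    return (psi_deg + theta_obj_img_deg) % 360

assert object_heading(275, 120) == 35
```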



The gimbal points the camera towards the ground for all objects within the geofence. When the gimbal is not pointing the camera towards the ground (which occurs when it is tracking regions of interest), post-processing of images corrects for image skew from the camera lens and for small errors in the gimbal's orientation. During testing, the team found that a first-order linear transformation was sufficient to remove image skew. The resulting skew-corrected image can be scaled by a constant factor to convert from pixels to feet. This simplifies geolocation calculations by effectively removing the aircraft's roll and pitch. The following algorithm describes the process of geolocating an object identified within a skew-corrected image.

1. Orient the image, $P$, to axis-align it with North and East using the aircraft yaw, $\psi$:

$$P' = \begin{bmatrix} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{bmatrix} P$$

2. Find the displacement, $D_{xy}$, of the object from the center of the image in pixels:

$$D_{xy} = \begin{bmatrix} O'_x - \frac{W}{2} \\ O'_y - \frac{H}{2} \end{bmatrix}$$

$O'$ is the object location in the rotated image. $W$ and $H$ are the width and height of the image in pixels, respectively.

3. Convert the displacement to feet:

$$D_{ft} = \begin{bmatrix} \frac{2\,T_{alt}\tan(FOV_h/2)}{W} \\ \frac{2\,T_{alt}\tan(FOV_v/2)}{H} \end{bmatrix} \odot D_{xy}$$

$T_{alt}$ is the altitude at the time of image capture from telemetry data. $FOV_h$ and $FOV_v$ are the horizontal and vertical fields of view, respectively.

4. Compute the object latitude and longitude using the displacement and image center:

$$O_{lat,lon} = T_{lat,lon} + \frac{\mathrm{degrees}}{\mathrm{feet}} \times D_{ft}$$

$O_{lat,lon}$ is the object location in degrees latitude and longitude. $T_{lat,lon}$ is the latitude and longitude of the center of the image as determined from the telemetry data at the time of image capture. Since the search area is small relative to the size of the Earth, latitude and longitude are assumed to be orthogonal and can be scaled to convert from feet back into degrees.

Testing the geolocation algorithm at test flights demonstrated sufficient accuracy, as shown in Table 5. The localization algorithm was tested on different images of the same objects and on objects at both the edges and center of the image. The algorithm for calculating the object's orientation was tested for all eight cardinal and intercardinal orientations. The system consistently localizes the object in the image within 17 ft of the true position.

Number of objects tested    55
Mean error                  12 ft
Std. Dev. of error          21 ft
Orientation accuracy        100%

Table 5: Localization System Statistics. Differences in distance were computed by comparing GPS locations calculated by the IGS to GPS locations measured on the ground.
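A compact sketch of the four steps above, assuming a locally flat Earth (the 364,000 ft-per-degree-of-latitude constant is a standard approximation, and the example inputs, sign conventions, and sensor dimensions are invented for illustration):

```python
import math

FT_PER_DEG_LAT = 364_000.0  # flat-Earth approximation near the survey area

def geolocate(obj_px, img_size, yaw_deg, alt_ft, fov_deg, center_latlon):
    """Sketch of steps 1-4: pixel displacement -> feet -> degrees.

    Sign conventions (image y down, yaw clockwise from north) depend
    on the camera and autopilot setup, so treat them as assumptions.
    """
    W, H = img_size
    psi = math.radians(yaw_deg)
    # Steps 1-2: displacement from the image center, rotated by yaw so
    # the axes align with East (x) and North (y).
    dx, dy = obj_px[0] - W / 2.0, obj_px[1] - H / 2.0
    east_px = math.cos(psi) * dx + math.sin(psi) * dy
    north_px = math.sin(psi) * dx - math.cos(psi) * dy  # image y grows down
    # Step 3: feet per pixel from altitude and per-axis field of view.
    ft_px_x = 2.0 * alt_ft * math.tan(math.radians(fov_deg[0]) / 2.0) / W
    ft_px_y = 2.0 * alt_ft * math.tan(math.radians(fov_deg[1]) / 2.0) / H
    east_ft, north_ft = east_px * ft_px_x, north_px * ft_px_y
    # Step 4: scale feet into degrees around the image-center telemetry fix.
    lat0, lon0 = center_latlon
    lat = lat0 + north_ft / FT_PER_DEG_LAT
    lon = lon0 + east_ft / (FT_PER_DEG_LAT * math.cos(math.radians(lat0)))
    return lat, lon

# Invented example: object right of center in a 20 MP frame at 450 ft AGL.
print(geolocate((3200, 1500), (5456, 3632), 30.0, 450.0,
                (63.0, 44.0), (38.1446, -76.4281)))
```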
2.5.1. Manual Vision System

The Manual Vision System (MVS) allows for manual detection, localization, and classification of objects in images. Once an image is received on the ground, an MVS operator identifies objects within the image, tags their characteristics, and marks whether or not the image contains the emergent object. The MVS User Interface (UI) allows the operator to adjust image brightness, saturation, and contrast to improve image clarity. Once an object is tagged on the UI, the IGS system uses the associated telemetry and gimbal metadata to compute the orientation and GPS location of the object. Since the tagging procedure is time-consuming and there are many images, the MVS system supports multiple MVS operators simultaneously tagging images across different computers. This allows the team to consistently achieve Actionable Intelligence and minimizes flight time, improving the team's Timeline score.

The IGS and MVS pipeline, from image capture to front-end tagging, was tested extensively with mock data on the ground and real data in the air to discover and resolve any and all software issues. Specifically, running the image pipeline places images on the ground server so that MDLC taggers can tag them. The MDLC pipeline is tested every week, since many changes are made to improve the pipeline each week. These tests also verify that target sightings are correctly merged into a single target and that deletions of target sightings on the frontend are accurately reflected on the backend.

2.5.2. Autonomous Vision System

Detection and Classification

The Autonomous Vision System (AVS) performs detection and classification of objects for Autonomy. The AVS relies on heuristics and machine learning algorithms to detect and identify object sightings in images retrieved from the aircraft through the IGS.



Figure 13: Autonomous Vision System

The system starts by finding regions of interest (ROIs) using the SURF detection algorithm, which has high recall for blob features similar to objects. ROIs are filtered by various heuristics. After an ROI is identified as containing a target, a floodfill algorithm generates a shape segmentation. Further algorithms use these contours to classify the target's shape, alphanumeric, orientation, and shape and alphanumeric colors. The shape classification method uses Fourier analysis on the shape contours to retrieve approximate shape descriptors. The shape descriptors are fed into a neural network trained on generated data. The segmented alphanumeric is passed into an MNIST-style CNN. The orientation of the image is reported by a template matching algorithm that compares the segmented alphanumeric with an upright generated version of the same character. The contours are also used to find the average color of each region. Color classification uses a Monte Carlo model to classify based on several hundred crowd-sourced, human-identified colors. Merging is completed by relying on the GPS coordinates of target sightings.
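One standard way to realize the Fourier-analysis step is to sample the contour's centroid-distance signature and keep the low-frequency magnitude spectrum, whose magnitudes are insensitive to rotation and starting point once normalized. The sketch below is a generic illustration of that idea, not the team's classifier; the coefficient count and toy contour are invented.

```python
import numpy as np

def fourier_shape_descriptor(contour, n_coeffs=16):
    """Approximate shape descriptor from a closed contour (illustrative).

    contour: (N, 2) array of boundary points. The distance of each
    boundary point from the centroid is Fourier-transformed; dividing
    by the DC coefficient removes scale.
    """
    pts = np.asarray(contour, dtype=float)
    signature = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    spectrum = np.abs(np.fft.fft(signature))
    return spectrum[1:n_coeffs + 1] / spectrum[0]  # input to the shape network

# Toy example: a square's contour sampled at 64 points around the boundary.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
scale = np.maximum(np.abs(np.cos(t)), np.abs(np.sin(t)))
square = np.c_[np.cos(t), np.sin(t)] / scale[:, None]
descriptor = fourier_shape_descriptor(square)
```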
False Positive Elimination

False positives are occasionally reported from SURF. These objects are noise on the field or a different object altogether. To maximize our score, the system attempts to eliminate false positives to avoid extra-object penalties. One way the system avoids false positives is by not reporting any region of interest that fails shape and alphanumeric segmentation. If either the shape or the alphanumeric returns a low confidence value, the object is not reported. Objects are also filtered out if the colors of the alphanumeric and shape are classified to be the same.

Autonomous Region Merging

During the mission, the object sightings classified by the AVS are sent to the IGS, where they are localized and stored in a database. At the end of the mission, all object sightings are sent to the AVS, which merges similar sightings into a single object based on location. Merged objects are given higher confidence values if they have similar characteristics. Objects above a certain confidence threshold are reported. Should two sightings be located close together but classified as different shapes or alphanumeric characters, a voting algorithm is applied. Both filtering object sightings by confidence values and applying a voting algorithm help reduce false positives.

Tests were conducted on approximately 6,000 images taken from test flights, previous competitions, and synthetically generated datasets. These images contained approximately 3,270 real target sightings, which were manually tagged and used for testing purposes.

The statistics in Table 6 indicate that the system can locate approximately 1.5% of all objects, and that among the retrieved objects, 79% are actual objects. On actual objects, the system performs well at identifying shape, alphanumeric, and shape color characteristics. The complete Autonomous Vision System is diagrammed in Figure 13.

Precision                      79%
Recall                         1.5%
Alphanumeric Accuracy          74%
Shape Accuracy                 65%
Shape Color Accuracy           51%
Alphanumeric Color Accuracy    31%
Orientation Accuracy           44%

Table 6: Autonomous Vision System Statistics
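A minimal sketch of the merging-and-voting scheme described above, assuming sightings carry a position, a shape label, and a confidence; the 50 ft merge radius and confidence bonus are invented thresholds, not the team's tuned values.

```python
from collections import Counter

MERGE_RADIUS_FT = 50.0  # assumed threshold for "close together"

def merge_sightings(sightings, dist_ft, min_confidence=0.6):
    """Cluster sightings by location, then vote on each cluster's label.

    sightings: list of dicts with 'pos', 'shape', 'confidence'.
    dist_ft(a, b): ground distance between two positions in feet.
    """
    clusters = []
    for s in sightings:
        for cluster in clusters:
            if dist_ft(s["pos"], cluster[0]["pos"]) < MERGE_RADIUS_FT:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    objects = []
    for cluster in clusters:
        # Voting: the most common shape among the merged sightings wins.
        shape = Counter(s["shape"] for s in cluster).most_common(1)[0][0]
        # Agreement across sightings raises confidence in the merged object.
        conf = max(s["confidence"] for s in cluster)
        conf += 0.1 * (sum(s["shape"] == shape for s in cluster) - 1)
        if conf >= min_confidence:  # report only confident objects
            objects.append({"pos": cluster[0]["pos"], "shape": shape,
                            "confidence": min(conf, 1.0)})
    return objects
```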



2.6. Communications

The UAS has three wireless links between the aircraft and the ground station, as diagrammed in Figure 14: a 915 MHz TBS Crossfire spread-spectrum RC link, a 5 GHz WiFi AC link, and a 915 MHz autopilot telemetry link. The RC link, established using a Crossfire RC radio module, acts as the backup RC link, allowing for manual control of the aircraft from the ground. The WiFi link, established by a Ubiquiti Rocket AC pair, provides the main data path between the aircraft and the IGS. The 915 MHz telemetry link, established by an RFD900+ pair, provides a backup for the autopilot ground station. Each link has a set of carefully selected antennae to ensure a reliable connection between the aircraft and the ground station. Additionally, the team uses a NETGEAR R6700 wireless router, operating within the 2.4 GHz frequency band, for communication between systems on the ground.

Figure 14: Communications System Diagram

On the aircraft, the team uses two 19 dBi polarized panel antennae, chosen for their omnidirectional radiation pattern between 4.9 GHz and 5.8 GHz. This ensures that connectivity is independent of aircraft orientation. An array of two 8 dBi patch antennae is located on the ground. These antennae are highly directional and improve the system's link budget and UAS range.

Testing the communications system involved reliability and distance testing with the two main wireless links. The WiFi and telemetry links were both established between two locations roughly 650 meters apart on Cornell's campus, within line of sight. A test matrix was constructed with various transmission powers and antenna arrays used at both ends of the link. RSSI and data rates were collected over a period of one minute and averaged for each point in the matrix.

The data collected, plotted for the RFD900s in Figure 15, allowed the team to establish not only the best performing setup, determined by the highest data rate, but also the most reliable, determined by the highest received power. It was determined from these tests that a 1/2 λ dipole antenna setup on the aircraft maximizes signal strength during flight.

Figure 15: RFD900 Distance Test Results

2.7. Air Drop

In the design of the UGV, three main stages of the mission were considered from a hardware perspective. The first stage was the release from the plane. The team implemented a dual-phase drop mechanism that separates the process of opening the doors on the plane from dropping the vehicle. The rationale for this design choice was to ensure that even if the drop mechanism fails, the doors on the plane will still safely hold the UGV inside.

The second key stage was the controlled descent of the UGV as it drops. The team prioritized controlled descent during the design process due to the competition's propulsion constraints. As a result, the team decided between two main design options for controlled descent. The first was autorotation, which the team determined to be too difficult to implement this year. The second design option was to use a parafoil. The team ultimately chose to implement parafoils due to their light form factor and precedent in military and recreational applications.

Figure 16: UGV Rover



The third key factor in the hardware design was the design of the rover itself. The team chose a two-wheel design to minimize total UGV volume. Additionally, the cylindrical geometry allows the rover to fall at any orientation. The UGV is composed of two parts: the bottom plate, which controls driving, and the top plate, which controls descent. Because the parafoil obstructs the driving functionality, the top plate, which is attached to the parafoil, separates just before landing. The wheel material was chosen to be thermoplastic polyurethane on the outside and nylon on the inside, because the geometry of this combination of materials allows for elastic deformation of the wheels on landing, which lessens some of the impact load.

Figure 17: UGV Rover, Labeled

The drop location is decided based on the capability of the parafoil. Specifically, the parafoil is limited to control in only two axes, and it has difficulty flying against the wind. It was therefore determined that the parafoil could handle a glide ratio (distance travelled laterally per unit distance dropped vertically) of approximately three. This means that the UGV will be dropped farther out from the target, taking wind speed, descent speed, altitude, and the glide ratio into account when determining the path to the target. Once the UGV is above a region close to the target, it will move in a figure-eight pattern around the target to minimize the distance between the landing site and the target. The rover will then drive in a straight path to the target.
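As an illustration of the release-point arithmetic (with invented numbers, not the team's planner): from 450 ft with a glide ratio of three, the parafoil can cover roughly 1,350 ft of ground, and wind drift during descent shifts the reachable region. A sketch under those assumptions:

```python
import math

def release_offset_ft(alt_ft, glide_ratio, descent_rate_fps,
                      wind_speed_fps, wind_dir_rad):
    """Estimate lateral reach and wind offset for release planning.

    Illustrative only: lateral reach is glide_ratio * altitude, and the
    wind displaces the canopy by wind speed times descent time. Both
    are crude planning numbers, not a full trajectory model.
    """
    reach_ft = glide_ratio * alt_ft
    descent_time_s = alt_ft / descent_rate_fps
    drift_ft = wind_speed_fps * descent_time_s
    # Offset the release point into the wind by the expected drift.
    dx = drift_ft * math.cos(wind_dir_rad)
    dy = drift_ft * math.sin(wind_dir_rad)
    return reach_ft, (-dx, -dy)

# 450 ft AGL, glide ratio 3, 15 ft/s sink, 10 ft/s wind along the x-axis.
reach, offset = release_offset_ft(450.0, 3.0, 15.0, 10.0, 0.0)
```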
The team has completed 35 drop tests, utilizing segmented testing for each of the components. The tests began with 40-50 ft drops and then increased to 90 ft. Later, the team was able to test with 400 ft drops using a DJI Matrice multirotor. The UGV remained intact 94% of the time, and the bottle remained intact 100% of the time. Additionally, the team conducted rolling tests to ensure the accuracy of the rover's motion. In testing drop and delivery location accuracy, the UGV landed a mean distance of 14 ft from the desired drop location and drove to within the 10 ft bound of the delivery location 100% of the time.

2.8. Cyber Security

The team takes a comprehensive approach to ensuring that the Orion system thwarts cyber attacks whenever possible. In particular, the system is designed to meet three main security goals.

Network

RFD900 Radio:
• Protected by AES, the NIST-recommended symmetric encryption protocol
• FHSS prevents interference from nearby signals on similar frequencies
• As a fallback, all capabilities of the radios are duplicated by the WiFi system

WiFi:
• Protected by WPA2, NIST-recommended encryption
• Using a 128-bit key for AES encryption, the network is effectively protected from wireless sniffing

915 MHz RC Transmitter:
• Frequency hopping prevents intentional jamming or accidental interference

Systems

GPS:
• The Here GNSS is resilient to GPS spoofing and jamming due to hardware built directly into the GPS; the module tracks incoming GPS data for inconsistencies to detect spoofing attacks

APIs:
• Write access to API endpoints is restricted to users with authentication
• The WiFi network is encrypted, protecting confidentiality and integrity

On-Board Computer:
• A log-in password protects the on-board computer in case the aircraft is intercepted

Table 7: Cyber Attack Mitigation Techniques



1. Ensure the aircraft only responds to control signals from authorized users, such as the GCS or the pilot.

2. Maintain the confidentiality of data collected by the aircraft.

3. Maximize the availability of the connection between the aircraft and the ground.

The Orion system implements several layers of defense to meet these goals. First, because the wireless networks used by the UAS are the most available to an attacker, the team takes extra measures to protect these networks. The "Network" column of Table 7 describes the use of various protocols to prevent adversaries from eavesdropping on or modifying data on the networks, and how the system limits an adversary's ability to interfere with the availability of the networks. When possible, all protocols are used according to the recommendations of the National Institute of Standards and Technology (NIST), which defines best practices for security protocols in the United States. In addition to securing the network from external threats, the team also employs WPA2 and a WAN firewall to prevent an adversary from gaining direct access to the network. In the unlikely event that an intruder gains access to the network, the team employs the access control protocols described further in the "APIs" section of Table 7.

Additional security includes an authentication mechanism on the IGS and Autopilot GCS that limits access to critical functions to authorized users and tracks the actions of each system operator. This enables a tiered safety permission structure. Further, a private key stored on the host machine is used as a key for end-to-end encryption and session signing. The Autopilot GCS requires the operator to enter a password on startup to send actions to the aircraft, and all passwords on both systems are independently hashed to prevent password leakage.
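Password hashing of the kind described is typically done with a salted key-derivation function. The sketch below uses Python's standard library as a generic illustration of the practice, not the team's actual code:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted PBKDF2 hash: a leaked database entry cannot be reversed
    into the password, and identical passwords hash differently."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```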
3. Safety, Risks, & Mitigation

Safety is fundamental to the development and operation of a UAS. This section describes potential safety risks and details the steps taken to mitigate them. Risks are classified by likelihood of occurrence: Rare, Infrequent, and Frequent. Severity is classified as: Low, Medium, and High.

3.1. Developmental Risks & Mitigation

As personnel safety is of the utmost importance, numerous safety considerations were made during the development of the system to ensure the safety of team members. CUAir has designated a safety lead to enforce safety protocol and keep relevant medical supplies and safety equipment readily available during development and testing. CUAir has also developed a safety manual that outlines safe operating procedures. All team members are required to read and take a quiz on these procedures. Table 8 illustrates the three main risks that the team considered during development.

Developmental Risk: Fabrication error and resulting personnel injury (Occurrence: Rare; Severity: High)
Mitigation:
• Wear Personal Protective Equipment (PPE), including eye protection, closed-toed shoes, and protective gloves
• Keep lab area and manufacturing area clean
• Keep First Aid medical supplies readily accessible

Developmental Risk: Insufficient safety training of new team members (Occurrence: Infrequent; Severity: High)
Mitigation:
• All personnel must attend a safety training seminar
• Team members do not use power tools until they are specifically trained to use them

Developmental Risk: Personal injury due to unit testing hazardous components, such as the motor (Occurrence: Rare; Severity: High)
Mitigation:
• Unit testing is never performed alone, in order to ensure safety
• The safety training seminar prepares members to handle possibly dangerous components

Table 8: Summary of Developmental Risk Analysis and Mitigation Strategies



Mission Risk: Loss of telemetry or RC link to the UAS (Occurrence: Frequent; Severity: Medium)
Mitigation:
• Preflight range tests
• Return to Launch (RTL) and aerodynamic termination autopilot failsafes
• Redundant links over RFD900+ and WiFi

Mission Risk: Unexpected throttle ramp-up on ground (Occurrence: Infrequent; Severity: High)
Mitigation:
• All personnel stand behind the horizontal plane of the propeller before the throttle is plugged in
• Plug in the throttle from behind the aircraft propeller
• Dual throttle arming necessitates that Orion is armed both in software and by a physical safety switch

Mission Risk: Unexpected Air Delivery release (Occurrence: Infrequent; Severity: Medium)
Mitigation:
• The air delivery algorithm does not run until the ground station operator gives authorization with the GCS; the operator will only give this authorization when given the requisite clearance from the judges
• Do not test over populated areas

Mission Risk: Autopilot parameter configuration error (Occurrence: Infrequent; Severity: Medium)
Mitigation:
• Test all surfaces and autopilot sensors before flight
• Comprehensive checklist to ensure every system is properly examined

Mission Risk: Loss of power to flight controller or electrical failure (Occurrence: Rare; Severity: High)
Mitigation:
• Check battery levels and all connections before flight
• Use locking connectors whenever possible
• Redundant power supply to the autopilot through the BEC to the servo rail and through the power module

Mission Risk: Damage to LiPo batteries (Occurrence: Rare; Severity: High)
Mitigation:
• Monitored charging
• Fireproof cases for LiPo transportation
• Class D fire extinguisher and a protocol for crashes and disposal in the event of a crash

Mission Risk: Pilot error (Occurrence: Rare; Severity: High)
Mitigation:
• Only fly in weather conditions deemed safe by the pilot
• No talking right before take-off or during flight, to avoid distracting the pilot

Table 9: Summary of Mission Risk Analysis and Mitigation Strategies

3.2. Mission Risks & Mitigation

CUAir has completed a comprehensive analysis of potential safety risks during mission testing and has taken a number of steps to mitigate them. The team determined these risks through test flights, ground tests, and general knowledge of the dangers of air systems technology. Table 9 describes the risks and their respective mitigation steps. CUAir has further developed a safety checklist that is consulted before every flight to verify that all autopilot, electrical, and mechanical components are adequately secured and functional. Moreover, all test flight personnel must stand behind the line of the propeller and at least 30 feet away from the aircraft before the throttle is armed. The pilot then gives the GCS operator verbal confirmation of the team's preparedness before the aircraft takes off. This procedure is followed during all usages of the Orion system.

4. Conclusion

During the past year, CUAir dedicated itself to designing, building, and testing the Orion UAS in preparation for the AUVSI SUAS 2019 mission. Extensive testing was performed on the entire system to ensure that the UAS meets all competition specifications and will perform safely and effectively during the mission.

