Abstract
Cornell University Unmanned Air Systems (CUAir) designed, manufactured, and programmed its most innovative,
customized, and ambitious Unmanned Aerial System (UAS) to date. With its fully composite aircraft design,
modular distributed imaging system, and intuitively tailored autopilot, the 2019 Orion system surpasses its
predecessors in efficiency, flight performance, and precision. The joint effort of 60 undergraduate students from
the fields of Computer Science, Electrical Engineering, Mechanical & Aerospace Engineering, Mathematics, and
Physics, Orion is the culmination of interdisciplinary design and collaboration. Throughout the year, CUAir
rigorously tested the system to ensure the reliability of each component.
1. Systems Engineering Approach

The 2019 Orion system was designed, manufactured, and tested to perform an autonomous delivery mission as safely, reliably, and accurately as possible. Capable of autonomous waypoint navigation, obstacle avoidance, image processing, target recognition, and air delivery, the Orion system has been systematically engineered to fulfill and exceed all AUVSI SUAS requirements. The following sections describe the mission tasks, identify the requirements for successful mission execution, and disclose the system’s comprehensive design rationale.

1.1. Mission Requirement Analysis

The mission demonstration simulates a package delivery operation. After deployment, the system must fly to the delivery area, locate the delivery point, safely and accurately drop the payload, and move the package to the target location on the ground.

Table 1 describes the goals of each mission task and the requirements CUAir and the UAS must achieve to reach each goal. The Autonomous Flight and Obstacle Avoidance tasks necessitate an agile aircraft equipped with an autopilot
Timeline (10%)
  Goals:
  • Complete the mission in minimal flight and post-processing time (80%)
  • Refrain from taking a timeout (20%)
  Requirements:
  • Well-practiced full mission tests
  • Completion of the mission as quickly and safely as possible

Autonomous Flight (20%)
  Goals:
  • Fly autonomously with minimal manual takeovers (40%)
  • Fly a waypoint sequence within 100 ft of each waypoint (10%)
  • Hit waypoints in a sequence as accurately as possible (50%)
  Requirements:
  • A fully functioning autopilot capable of path planning, autonomous takeoff, and autonomous landing
  • An agile airframe that can be easily maneuvered to hit waypoints

Obstacle Avoidance (20%)
  Goals:
  • Avoid stationary obstacles, i.e. cylinders with radius 30 ft to 300 ft and no constraint on height
  Requirements:
  • An autopilot ground station algorithm adept at recalculating flight paths
  • An autopilot system capable of relaying telemetry data to the Interoperability Server at a rate of 1 Hz

Object Detection, Classification, Localization (20%)
  Goals:
  • Identify the correct shape, shape color, alphanumeric, alphanumeric color, and alphanumeric orientation of objects (20%)
  • Accurately provide the GPS location of objects (30%)
  • Submit objects during the first flight via the Interoperability System (30%)
  • Submit objects autonomously (20%)
  Requirements:
  • A system capable of manual detection and classification of objects and object submission
  • A system capable of autonomous detection and classification of objects and object submission
  • A mechanically stable imaging system and high-resolution camera
  • An algorithm that can use telemetry metadata to accurately compute the location of each object

Air Drop (20%)
  Goals:
  • Using no means of propulsion, drop and land the UGV with water bottle gently and without damage (50%)
  • On the ground, navigate the UGV with water bottle to the correct location and stop (50%)
  Requirements:
  • A mechanism capable of deploying the payload in a controlled, reliable fashion without causing damage
  • An algorithm that can calculate the optimal release time
  • A UGV system capable of driving autonomously to a given location

Operational Excellence (10%)
  Goals:
  • Exercise operational professionalism, communication between members, reaction to system failures, and attention to safety
  Requirements:
  • Gracious professionalism and practice, with clear communication and alertness from all team members

Table 1: Mission tasks, goals, and the requirements needed to reach each goal
2. System Design

2.1. Aircraft

For this year’s competition, CUAir decided to rethink its entire platform for its new aircraft, Orion, in order to enhance flight performance in several categories. The aircraft geometry was designed in consideration of endurance, flight speed, maneuverability, mass, stability and control, and manufacturability requirements. New fabrication methods were tested to create a stronger, lighter, and more advanced aircraft, culminating in a hollow carbon fiber fabrication method that replaces the previous solid foam and balsa method. A twin-boom aircraft configuration with two motors and two vertical stabilizers was developed for redundancy and modularity and to allow for optimal camera placement. Using Athena Vortex Lattice (AVL), we then confirmed the stability of our aircraft in dynamical modes (Figure 3).

Figure 2: Orion Aircraft Projections, Dimensions Shown (in)

Figure 3: AVL vortex lattice simulating calculated load distribution.

The fuselage skin consists of a single layer of 3k Twill
distance of the path, the number of waypoints, the max angle of the path, the number of collisions, maximum coverage for the MDLC, and proximity to obstacles. The potential flow algorithm finds the optimal path by combining attraction to waypoints with repulsion from obstacles, producing short paths that minimally impact coverage. This algorithm has the advantage of demanding minimal computation compared to other algorithms. Other algorithms considered during the research process include variants of rapidly-exploring random trees, Dijkstra’s algorithm, and gradient descent.

Throughout development of the OAS, the team conducted extensive software unit testing to validate the correctness and reliability of the system. The team also performed extensive testing with SITL simulation to ensure proper function of the OAS features of the GCS. Over 80 hours of testing in the SITL were conducted to ensure correct functionality of the OAS. This included hitting over 7,500 waypoints and avoiding over 3,000 obstacles. Once the team was confident in the correctness and reliability of the system, over a dozen test flights with the Bix3 system were conducted to determine the system’s efficacy in a real-world environment. The Bix3 system successfully avoided 63 stationary obstacles. The OAS was then integrated with the Orion system, on which the team conducted the rest of the flight tests, as shown in Table 4.

Total Flight Hours with OAS Active: 4
Total # Static Obstacles Tested Against: 98
Total # Static Obstacles Avoided: 94

Table 4: Summary of OAS tests in Orion

2.4. Imaging System

The Sony R10C has an E384/6 mapping sensor package that makes data collection and post-processing quicker and easier. The R10C has no power timeout, so the camera remains turned on for the entire duration of the flight. Paired with the Companion Computer, a Raspberry Pi Model 3B, its images are automatically geotagged in flight and saved to an external thumb drive.

2.4.1. Gimbal

The Orion system features a custom two-axis gimbal, composed of aluminum and carbon fiber, that is housed in the nosecone of the plane. The Sony R10C camera housed within the gimbal is an expensive payload that must be protected so that image quality is minimally affected by the vibration of the system. The gimbal must also fit into the nose of the plane and move freely with no visual or physical obstruction from the skin. The gimbal motors are controlled by a SimpleBGC 32-bit Extended gimbal controller, which receives instructions directly from the on-board computer. The gimbal system protects and stabilizes the R10C camera and enables precise targeted image capture.

2.5. Object Detection, Classification, Localization

The Imaging Ground Server (IGS) enables manual and autonomous agents to complete the Object Detection, Classification, and Localization task. The IGS interfaces with the on-board computer over the communications network described in Section 2.6. The full ODCL system is diagrammed in Figure 12.
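To illustrate the potential-flow idea described for the OAS above (attraction to waypoints combined with repulsion from obstacles), the sketch below steps a position through such a field. The gains, step size, and function names are illustrative assumptions, not CUAir’s implementation.

```python
import math

# Sketch of a potential-flow-style planner: each step moves toward the
# active waypoint while being pushed away from cylindrical obstacles.

def next_position(pos, waypoint, obstacles, step_ft=10.0, k_rep=5.0e4):
    """Take one step of `step_ft` feet along the combined force direction."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(dx, dy)
    fx, fy = dx / dist, dy / dist  # unit attraction toward the waypoint
    for ox, oy, radius in obstacles:  # (center_x, center_y, radius) in feet
        rx, ry = pos[0] - ox, pos[1] - oy
        clearance = max(math.hypot(rx, ry) - radius, 1.0)
        # Repulsion grows sharply as clearance to the cylinder wall shrinks.
        fx += k_rep * rx / clearance ** 3
        fy += k_rep * ry / clearance ** 3
    norm = math.hypot(fx, fy)
    return pos[0] + step_ft * fx / norm, pos[1] + step_ft * fy / norm

def plan_path(start, waypoint, obstacles, tol_ft=25.0, max_steps=2000):
    """Follow the field until within `tol_ft` of the waypoint (or give up)."""
    path = [start]
    while math.hypot(waypoint[0] - path[-1][0],
                     waypoint[1] - path[-1][1]) > tol_ft:
        if len(path) >= max_steps:
            break
        path.append(next_position(path[-1], waypoint, obstacles))
    return path
```

In this simple form the field can stall in local minima between obstacles, which is one reason alternatives such as RRT variants are often considered alongside it.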
This year’s flight plan consists of two passes of the search grid. The first pass will have the camera at minimum zoom in order to obtain wide-angle shots. Normally, the camera is in idle mode, its standard resting position. During this first pass, the camera will be in fixed mode, meaning it is retracted and stationary, pointed with the face of the lens parallel to the ground. In this mode, the camera takes fixed photos and sends telemetry to the ground to determine GPS coordinates with MDLC. Then, on the second pass, the camera will obtain high-resolution targets that are easier to tag manually. In this second pass, the camera will be in tracking mode: zoomed to the maximum level and pointed directly at regions of interest identified as potential targets during the first pass. The plane will fly at a cruising altitude of 450 feet. During the first pass, the camera will be zoomed to the minimum level and images will cover around 422,500 square feet; each target will have a resolution of around 800 square pixels. During the second pass, the camera will be zoomed to the maximum level and images will cover around 5,500 square feet; each target will have a resolution of around 70,000 square pixels.

In addition to the two-pass system, CUAir employs a number of other techniques to ensure the best possible image quality. First, the team mounted the camera with vibration-damping screws to minimize image blur from high-speed vibrations and the rolling shutter effect. Post-processing techniques on the ground allow operators to adjust the contrast and saturation of photos during classification, providing better distinction between colors in images. Another improvement comes from the hardware triggering enabled by the software used to run the camera. It enables communication with the Pixhawk to report the plane’s telemetry when an image is taken, which helps determine more accurate GPS locations. In the past, timing issues caused difficulties when matching telemetry data to the time a photo was taken; the hardware triggering feature now avoids this issue. Also, in an effort to minimize latency between the camera and gimbal, a Camera-Gimbal server was developed. In addition to maintaining zoom and controlling the gimbal and camera, the server also stores the camera mode (idle, fixed, tracking) and determines which mode to use depending on information received from the ground server. Further, it allows for immediate tracking of the next object once an image capture has been completed. Finally, the Camera-Gimbal server has a database of states that allows for quick responses to emergency situations, such as if the system shuts down.

θObj = (ψ + θObj/Img) mod 360    (1)

To find the orientation of the alphanumeric heading, θObj, the IGS system computes Equation 1, where ψ is the yaw of the system relative to north at the time of image capture and θObj/Img is the yaw of the object with respect to the top of the image.

The gimbal points the camera towards the ground for all objects within the geofence. However, when the gimbal is not pointing the camera towards the ground, post-processing of images corrects for image skew from the camera lens and for small errors in the gimbal’s
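Equation 1 reduces to adding the two yaw angles and wrapping the sum into [0, 360). A minimal sketch, with illustrative names:

```python
def object_heading_deg(psi_deg, theta_obj_img_deg):
    """Equation 1 sketch. psi_deg: aircraft yaw relative to north at capture
    time (degrees). theta_obj_img_deg: object yaw relative to the top of the
    image (degrees). Returns the object heading in [0, 360)."""
    return (psi_deg + theta_obj_img_deg) % 360
```

For example, an aircraft yaw of 350 degrees combined with an in-image object yaw of 30 degrees wraps around to a heading of 20 degrees.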
2. Find the displacement, Dxy, of the object from the center of the image in pixels:

   Dxy = [ O′x − W/2,  O′y − H/2 ]

   O′ is the object location in the rotated image. W and H are the width and height of the image in pixels, respectively.

3. Convert the displacement to feet:

   Dft = [ Dxy,x · (2 Talt tan(FOVh/2)) / W,  Dxy,y · (2 Talt tan(FOVv/2)) / H ]

The Manual Vision System (MVS) allows for manual detection, localization, and classification of objects in images. Once an image is received on the ground, an MVS operator identifies objects within the image, tags their characteristics, and marks whether or not the image contains the emergent object. The MVS User Interface (UI) allows the operator to adjust image brightness, saturation, and contrast to improve image clarity. Once an object is tagged on the UI, the IGS system uses the associated telemetry and gimbal metadata to compute the orientation and GPS location of the object. Since the tagging procedure is time-consuming and there are many images, the MVS system supports multiple MVS operators simultaneously tagging images across different
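The pixel-to-feet localization steps above can be sketched as follows, assuming a nadir-pointing camera; the function and parameter names are illustrative, not the IGS’s actual code.

```python
import math

def pixel_displacement_to_feet(ox, oy, width, height,
                               alt_ft, fov_h_deg, fov_v_deg):
    """Convert an object's pixel location to a ground displacement in feet.
    (ox, oy): object pixel coordinates in the rotated image.
    alt_ft: aircraft altitude above ground (Talt), in feet.
    fov_h_deg, fov_v_deg: horizontal/vertical fields of view, in degrees."""
    # Step 2: displacement of the object from the image center, in pixels.
    dx_px = ox - width / 2
    dy_px = oy - height / 2
    # Step 3: feet per pixel = ground footprint / image size, where the
    # footprint along each axis is 2 * altitude * tan(FOV / 2).
    ft_per_px_x = 2 * alt_ft * math.tan(math.radians(fov_h_deg) / 2) / width
    ft_per_px_y = 2 * alt_ft * math.tan(math.radians(fov_v_deg) / 2) / height
    return dx_px * ft_per_px_x, dy_px * ft_per_px_y
```

The resulting displacement in feet can then be added to the aircraft’s GPS position (after rotating by the aircraft yaw) to estimate the object’s location.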
using the SURF detection algorithm, which has a high recall for blob features similar to objects. ROIs are filtered by various heuristics. After an ROI is identified as containing a target, a floodfill algorithm generates a shape segmentation. Further algorithms use these contours to classify the target’s shape, alphanumeric, orientation, and shape and alphanumeric colors. The shape classification method uses Fourier analysis on the shape contours to retrieve approximate shape descriptors. The shape descriptors are fed into a neural network trained on generated data. The segmented alphanumeric is passed into an MNIST-style CNN. Orientation is reported by a template matching algorithm that compares the segmented alphanumeric with an upright generated version of the same character. The contours are also used to find the average color of each region. Color classification uses a Monte Carlo model to classify based on several hundred crowd-sourced, human-identified colors. Merging is completed by relying on the GPS coordinates of target sightings.

False Positive Elimination

False positives are occasionally reported from SURF. These objects are noise on the field or a different object altogether. To maximize our score, the system attempts to eliminate false positives to avoid extra-object penalties. One way the system avoids false positives is by not reporting any region of interest that fails shape and alphanumeric segmentation. If either the shape or the alphanumeric returns a low confidence value, the object is not reported. Objects are also filtered if the colors of the alphanumeric and the shape are classified to be the same.

Autonomous Region Merging

During the mission, the object sightings classified by the AVS are sent to the IGS, where they are localized and stored in a database. At the end of the mission, all object sightings are sent to the AVS, which merges similar sightings into a single object based on location. Merged objects are given higher confidence values if they have similar characteristics. Objects above a certain confidence threshold are reported. Should two sightings be located close together but classified as different shapes or alphanumeric characters, a voting algorithm is applied. Both filtering object sightings by confidence values and applying a voting algorithm help reduce false positives.

Tests were conducted on approximately 6,000 images taken from test flights, previous competitions, and synthetically generated datasets. These images contained approximately 3,270 real target sightings, which were manually tagged and used for testing purposes.

The statistics in Table 6 indicate that the system can locate approximately 1.5% of all objects and that, among the retrieved objects, 79% are actual objects. On actual objects, the system performs well at identifying shape, alphanumeric, and shape color characteristics. The complete Autonomous Vision System is diagrammed in Figure 13.

Precision: 79%
Recall: 1.5%
Alphanumeric Accuracy: 74%
Shape Accuracy: 65%
Shape Color Accuracy: 51%
Alphanumeric Color Accuracy: 31%
Orientation Accuracy: 44%

Table 6: Autonomous Vision System Statistics

2.6. Communications

The UAS has three wireless links between the aircraft and the ground station, as diagrammed in Figure 14.
These three links are the 915 MHz TBS Crossfire spread-spectrum RC link, a 5 GHz WiFi AC link, and a 915 MHz autopilot telemetry link. The RC link, established using a Crossfire RC radio module, acts as the backup RC link, allowing for manual control of the aircraft from the ground. The WiFi link, established by a Ubiquiti Rocket AC pair, provides the main data path between the aircraft and the IGS. The 915 MHz telemetry link, established by an RFD900+ pair, provides a backup for the autopilot ground station. Each link has a set of carefully selected antennae to ensure a reliable connection between the aircraft and the ground station. Additionally, the team uses a NETGEAR R6700 wireless router, operating in the 2.4 GHz frequency band, for communication between systems on the ground.

On the aircraft, the team uses two 19 dBi polarized panel antennae, chosen for their omnidirectional radiation pattern between 4.9 GHz and 5.8 GHz. This ensures that connectivity is independent of aircraft orientation. An array of two 8 dBi patch antennae is located on the ground. These antennae are highly directional and improve the system’s link budget and UAS range.

Testing the communications system involved reliability and distance testing with the two main wireless links. The WiFi and telemetry links were established roughly 650 meters apart at two locations on Cornell’s campus within line of sight. A test matrix was constructed with various transmission powers and antenna arrays used at both ends of the link. RSSI and data rates were collected over a period of one minute and averaged for each point in the matrix.

The data collected, plotted for the RFD900s in Figure 15, allowed the team to establish not only the best-performing setup, determined by the highest data rate, but also the most reliable, determined by the highest received power. It was determined from these tests that a 1/2λ dipole antenna setup on the aircraft maximizes signal strength during flight.

2.7. Air Drop

In the design of the UGV, three main stages of the mission were considered from a hardware perspective. The first stage was the release from the plane. The team implemented a dual-phase drop mechanism that separates the process of opening the doors on the plane from dropping the vehicle. The rationale for this design choice was to ensure that even if the drop mechanism fails, the doors on the plane would still safely hold the UGV inside.

The second key stage was the controlled descent of the UGV as it drops. The team prioritized controlled descent during the design process due to the competition’s propulsion constraints. As a result, the team decided between two main design options for controlled descent. The first was autorotation, which the team determined to be too difficult to implement this year. The second option was a parafoil. The team ultimately chose to implement parafoils due to their light form factor and precedent in military and recreational applications.

Figure 16: UGV Rover

The third key factor in the hardware design was the design of the rover itself. The team chose a two wheel
Network Systems

RFD900 Radio
  • Protected by AES, the NIST-recommended symmetric encryption protocol
  • FHSS prevents interference from nearby signals on similar frequencies

WiFi
  • Protected by WPA2, NIST-recommended encryption
  • Using a 128-bit key for AES encryption, the network is effectively protected from wireless sniffing

915 MHz RC Transmitter
  • Frequency hopping prevents intentional jamming or accidental interference

GPS
  • The Here GNSS is resilient to GPS spoofing and jamming due to hardware built directly into the GPS. The module tracks incoming GPS data for inconsistencies to detect spoofing attacks

APIs
  • Write access to API endpoints is restricted to users with authentication
  • The WiFi network is encrypted, protecting confidentiality and integrity

On-Board Computer
  • A log-in password protects the on-board computer in case the aircraft is intercepted
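The write-access restriction described for the API endpoints could look like the following minimal token check; the endpoint behavior and token values here are hypothetical, not CUAir’s actual implementation.

```python
# Hypothetical sketch: reads are open, writes require a recognized token.
AUTHORIZED_TOKENS = {"gcs-operator-token"}  # issued to authenticated users

def handle_request(method, token):
    """Return an HTTP-style status: 403 for unauthenticated writes,
    200 otherwise."""
    if method in ("POST", "PUT", "DELETE") and token not in AUTHORIZED_TOKENS:
        return 403  # Forbidden: write access is restricted
    return 200
```

In practice this check would sit in front of every mutating endpoint on the ground servers, while the encrypted WiFi link protects the token in transit.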
The Orion system implements several layers of defense to meet these goals. First, because the wireless networks used by the UAS are the most accessible to an attacker, the team takes extra measures to protect these networks. The “Network” column of Table 7 describes the use of various protocols to prevent adversaries from eavesdropping on or modifying data on the networks and to limit an adversary’s ability to interfere with the availability of the networks. When possible, all protocols are used according to the recommendations of the National Institute of Standards and Technology (NIST), which defines best practices for security protocols in the United States. In addition to securing the network from external threats, the team also employs WPA2 and a WAN firewall to prevent an adversary from gaining direct access to the network. In the unlikely event that an intruder gains access to the network, the team employs the access control protocols described further in the “APIs” section of Table 7.

Additional security includes an authentication mechanism on the IGS and Autopilot GCS that limits access to critical functions to authorized users and tracks the

3. Safety, Risks, & Mitigation

Safety is fundamental to the development and operation of a UAS. This section describes potential safety risks and details the steps taken to mitigate them. Risks are classified by likelihood of occurrence: Rare, Infrequent, and Frequent. Severity is classified as Low, Medium, or High.

3.1. Developmental Risks & Mitigation

As personnel safety is of the utmost importance, numerous safety considerations were made during the development of the system to ensure the safety of team members. CUAir has designated a safety lead to enforce safety protocols and keep relevant medical supplies and safety equipment readily available during development and testing. CUAir has also developed a safety manual that outlines safe operating procedures. All team members are required to read these procedures and take a quiz on them. Table 8 illustrates the three main risks that the team considered during development.
Developmental Risk: Personal injury due to unit testing hazardous components, such as the motor
  Occurrence: Rare
  Severity: High
  Mitigation:
  • Unit testing is never performed alone, in order to ensure safety
  • A safety training seminar prepares members to handle possibly dangerous components
3.2. Mission Risks & Mitigation

CUAir has completed a comprehensive analysis of potential safety risks during mission testing and has taken a number of steps to mitigate them. The team determined these risks through test flights, ground tests, and general knowledge of the dangers of air systems technology. Table 9 describes the risks and their respective mitigation steps that the team considered. CUAir has further developed a safety checklist that is consulted before every flight to verify that all autopilot, electrical, and mechanical components are adequately secured and functional. Moreover, all test flight personnel must stand behind the line of the propeller and at least 30 feet away from the aircraft before the throttle is armed. The pilot then gives the GCS operator verbal confirmation of the team’s preparedness before the aircraft takes off. This procedure is followed during all usages of the Orion system.

4. Conclusion

During the past year, CUAir dedicated itself to designing, building, and testing the Orion UAS in preparation for the AUVSI SUAS 2019 mission. Extensive testing was performed on the entire system to ensure that the UAS meets all competition specifications and will perform safely and effectively during the mission.