
University of Southern Denmark

MSc in Engineering - Robot Systems

Master's thesis

Autonomous Airborne Tool-carrier



Hjalte B. L. Nygaard

Ulrik P. Schultz, Rasmus N. Jørgensen, Kjeld Jensen

Peter E. Madsen

May 30, 2012

Abstract

The field of Unmanned Aerial Vehicles has been under rapid development during the last decade. Eyes in the sky have proven to be a powerful tool in a variety of domains, and many different approaches have hence been taken to create these types of systems. However, these varying approaches differ mainly in the task they complete and thus the payload they carry. This thesis proposes a modular approach to the fixed-wing UAV domain, aiming to enable reuse of the same flying platform in a multitude of applications. This is done by the use of swappable software as well as hardware structures, utilizing ROS as a common communication basis. Although a completely autonomous flying prototype is not yet implemented, promising results have been achieved. A hardware base has been constructed, using MEMS sensors and an ARM processor. As state estimation is a common problem for all airborne platforms, this has been a main concern of this thesis. A cascaded Extended Kalman Filter has been implemented, and some of its performance measures have been verified using vision-based horizon tracking. Aided flights have been conducted, utilizing simple PID loops to maintain plane attitude, based on the pilot commands and feedback from the state estimator.


We owe our deepest gratitude to a number of people who have been involved in this project, one way or the other. First of all our beloved and patient sweethearts; without their love and support, this thesis would have remained a dream. We really appreciate the time, patience and especially the steady hand and eagle eye of Carsten Albertsen, in assisting us with SMD soldering and bugfixing during the PCB prototype manufacturing. In the same phrase, gratitude shall be expressed to David Brandt, for helping us out on Gumstix-related hardware design issues and ad-hoc components. Morten Larsen should be thanked for his tireless support on Linux and CMake related issues; we have benefited from his ability to decipher illegible compiler errors. We also thank Stig Hansen and Kold College for kindly lending us field and airspace to perform flight tests. We are deeply grateful for Jimmi Friis' guidance into the world of R/C model planes; his patience and kind instructions were invaluable. Without the initial help of Henning Porsby we would never have gotten in the air. Andreas Rune Fugl's input and ideas have proved inspirational for our implementation and future work. We owe our gratitude to Lars Peter Ellekilde for breaking the ice on the subjects of quaternions and Kalman filtering, and Dirk Kraft for suggesting vision procedures. Lastly we would like to thank Bent Bennedsen and Henrik Midtiby for their input on the use of imagery in agriculture.

"Flexible is much too rigid, in aviation you have to be fluid" - Verne Jobst

Contents

Introduction

1 Background & Analysis
  1.1 Related Work
  1.2 Application areas
      1.2.1 Conditions
      1.2.2 Regulations
      1.2.3 Practicalities
  1.3 System requirements & Architecture

2 Auto Pilot
  2.1 Low level control loops
      2.1.1 PID controller
      2.1.2 Bank and Heading control
      2.1.3 Climb and Altitude control
      2.1.4 Miscellaneous control
  2.2 Flight Management System
      2.2.1 Trajectory Smoothing
      2.2.2 Trajectory Tracking
      2.2.3 Tool Interaction
  2.3 Flight Computer
  2.4 Conclusion

3 State Feedback
  3.1 Sensors
      3.1.1 Velocities and course parameters
      3.1.2 Position
      3.1.3 Altitude
      3.1.4 AHRS
  3.2 Sensor fusion
      3.2.1 Methods
      3.2.2 Estimator architecture
      3.2.3 Kinematic models
  3.3 Noise considerations
  3.4 Conclusion

4 Visualization and Simulation
  4.1 Telemetry
      4.1.1 Link hardware
      4.1.2 Message passing
  4.2 Visualization
  4.3 Simulator
  4.4 Conclusion

5 Implementation
  5.1 Airframe
  5.2 Autopilot PCB
      5.2.1 Processor and data buses
      5.2.2 Airspeed sensor
      5.2.3 Accelerometer
      5.2.4 Gyroscope
      5.2.5 Magnetometer
      5.2.6 Barometer
      5.2.7 GPS
      5.2.8 Zigbee
      5.2.9 Ultrasonic proximity
      5.2.10 R/C interface & Failsafe operation
      5.2.11 Power management
      5.2.12 Auxiliary components
  5.3 Software building blocks
      5.3.1 fmFusion
      5.3.2 fmController
      5.3.3 GPS
      5.3.4 fmTeleAir and fmTeleGround
  5.4 Ground Control system
  5.5 Simulator
  5.6 Flight test
      5.6.1 Vision based post flight roll verification
      5.6.2 Aided Flight

6 Perspective
  6.1 Future work
  6.2 Project analysis
  6.3 Conclusion

Bibliography
Nomenclature
Appendix A Attitude kinematics
Appendix B Heading kinematics
Appendix C Position and Wind kinematics
Appendix D Wind statistics
Appendix E Wikis and how-tos
Appendix F PCB design
Appendix G Thesis proposal
Appendix H Sensor Fusion for Miniature Aerial Vehicles
Appendix I Project Log


During the past decade, the field of small scale autonomous airborne vehicles has undergone rapid development. This development has become possible through new sensor technologies: small, light-weight, integrated circuit components now replace larger, heavier mechanical sensors. Development has been seen in a wide variety of applications, including military, education, commercial and recreational use. Example uses can be seen in agriculture, aerial photography, disaster management and inspection systems. Common to these areas are specialized systems, with integrated sensors for very specific areas of application. Through this thesis, the state of the art will be assessed. Automated flight and ways to control unmanned aerial vehicles will be reviewed. By implementing state estimation and low level control on a highly capable embedded platform, we aim to break with the single purpose paradigm seen in today's solutions. We propose a single, modular system, applicable to various different use cases and airframes. Through increased computational capabilities, the same autopilot can be used for many different tools, and thereby application areas. We call this the Autonomous Tool Carrier. It is hypothesized that, through aided control, unmanned aerial vehicles can be controlled by inexperienced pilots, and that the implementation of such a system can be used as the first step towards realizing this new concept.


Background & Analysis

This chapter will analyse existing systems available in the domain of Unmanned Aerial Vehicles (UAVs), identify application areas and assess the available technologies. Based on this analysis, it will be formulated how an Autonomous Tool Carrier can supplement the current state of the art. Furthermore, the overall building blocks of such a system will be identified and introduced. The remainder of this introductory Chapter is structured as follows: Section 1.1 gives an overview of the existing commercial and open source based projects, Section 1.2 on the next page surveys applications for Unmanned Aerial Vehicles, and finally Section 1.3 on page 12 describes the proposed system and its architecture. The latter Section also outlines the structure of this thesis.


Related Work

A review of existing autopilot systems is conducted in this section. The available systems have been divided into two groups: commercial and open source autopilots. A select number of open source autopilot projects are presented in Table 1.1 on the next page. Common to these systems are the relatively low hardware costs, compared to the commercial units. The majority of the projects use complementary filtering for state estimation, as it is easier to implement than the Kalman filters used by the commercial units. These projects generally seem to lack coherent documentation and scientific foundation. None of the systems provide tool interaction, as the purpose of the projects is limited to enabling automated flight with R/C model planes. A number of commercial autopilots are available, as can be seen in Table 1.2 on the following page. Based on various sales documents, it can be deduced that these autopilots are commonly based on Extended Kalman Filters. The turn-key prices are relatively high, ranging from 4,000 Euro and up. System information beyond sales documents is generally sparsely available. The Kestrel autopilot[12] is an exception. This autopilot originates from Brigham Young University[23] and has been used comprehensively by Beard et al. in their research into various aspects of automated flight of miniature aerial

Table 1.1: Table of open source autopilot projects

Name            Computer              State Est.       HW Price [EUR]   Payload comm.
Paparazzi[14]   60 MHz ARM            Complementary    400              no
GluonPilot[4]   80 MHz dsPIC          EKF              480              no
ArduPilot[1]    20 MHz AVR            Complementary    160              no
OpenPilot[6]    150 MIPS Cortex-M3    Complementary    70               no

Table 1.2: Table of commercial autopilot systems

Name               Computer        State Est.      Price [EUR]   Payload                  Payload comm.
Kestrel[12]        29 MHz 8-bit    EKF             4,000         -                        one-way
SenseFly[8]        -               -               8,500         12 MP Canon PowerShot    no
MicroPilot[5]      150 MIPS        12-state EKF    6,400         -                        one-way
Piccolo II[7]      -               15-state EKF    6,000         Gimbal camera            one-way (a)
Avior 100[2]       600 MHz         -               N/A           EUR 19,250               one-way
Gatewing x100[3]   -               -               N/A           10 MP digital camera     no

(a) Might have two-way communication. No specific information found.

vehicles[19, 45, 54, 59]. Due to the academic history of this autopilot, some insight into its internal workings is available. Most of the commercial autopilots carry some sort of tool, and most of them are indeed capable of communicating the position and orientation of the airframe to the tool. However, none of them suggest capabilities for involving the tool in the flight itself. Others simply fix a regular camera to the airframe. When surveying the field, it is soon recognized that state estimation plays a major role in the overall system performance. This is due to the fact that this subsystem provides the feedback for the actual controller to react on. Thus, a bad state estimate yields poor overall autopilot performance. This is reflected in the hobbyists using relatively simple methods, as opposed to the more complex, and thus expensive, commercial systems.
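Most of the open source projects in Table 1.1 rely on complementary filtering rather than a Kalman filter. As a rough illustration of why this method is attractive to hobbyists, a single-axis complementary filter can be sketched in a few lines. This is a generic textbook formulation, not code from any of the listed projects, and the blend factor `alpha` is an arbitrary choice here:

```python
def complementary_roll(gyro_rate, accel_roll, roll_prev, dt, alpha=0.98):
    """One complementary-filter step for the roll angle: trust the integrated
    gyro rate at high frequency and the accelerometer-derived angle at low
    frequency. alpha is a tuning constant (an arbitrary choice here)."""
    return alpha * (roll_prev + gyro_rate * dt) + (1.0 - alpha) * accel_roll

# Steady, level flight: the gyro reads zero and the accelerometer indicates
# zero roll, so a wrong initial estimate decays toward the accelerometer angle.
roll = 0.2  # start roughly 11 degrees off
for _ in range(500):
    roll = complementary_roll(gyro_rate=0.0, accel_roll=0.0, roll_prev=roll, dt=0.01)
print(abs(roll) < 0.01)  # → True
```

The appeal is that a single gain trades off gyro drift against accelerometer noise, whereas an EKF requires kinematic models and covariance tuning.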


Application areas

The primary role of UAVs is to provide an eye in the sky. This is useful for ad hoc data collection in many domains. This section will survey some different application areas of UAVs and delimit the requirements in terms of payloads, versatility and various other aspects.

Aerial photography in agriculture

In recent decades there has been an increasing interest in remote sensing-based precision agriculture[44, 52, 70]. Especially Site Specific Weed Management (SSWM) has great potential for reducing the amount of pesticides needed for adequate weed management[33]. Historically, weed management has involved a vast amount of manual labor. In the past decades the use of herbicides has greatly

reduced the amount of manual labor. However, these chemical agents impact both the environment and the yield of the crop. Gutjahr and Gerhards show that the use of herbicides can be reduced by approximately 80% by spraying selectively in weed-infested areas, rather than broad-spraying the entire field. Further optimization can be achieved by selectively activating the relevant spraying nozzles when driving a spraying boom over a weed plant[40]. The two examples require different vision system approaches; the former needs to localize weed patches and the latter individual plants within the patch. As weed control is relevant in the germination period, weed patches can be identified by excessive biomaterial density. This can be measured using the Normalized Difference Vegetation Index (NDVI)[70]. This indexing method exploits the fact that photosynthesis utilizes most red light with a wavelength of 680 nm, while plants reflect most near-infrared light with a wavelength of 750 nm, as illustrated in Figure 1.1a. This steep rise in reflection is known as the red edge.
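The NDVI itself is just a normalized ratio of the two reflectance bands. A minimal sketch (the reflectance values used below are purely illustrative, not measurements from the cited works):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    if nir + red == 0.0:
        return 0.0
    return (nir - red) / (nir + red)

# Vegetation absorbs red light (~680 nm) and reflects NIR (~750 nm), so its
# NDVI is high; bare soil reflects both bands similarly, giving NDVI near 0.
vegetation = ndvi(nir=0.50, red=0.08)  # illustrative reflectances
soil = ndvi(nir=0.30, red=0.25)
print(round(vegetation, 2), round(soil, 2))  # → 0.72 0.09
```

Thresholding the NDVI per pixel is what separates weed-infested (high biomass) areas from the surrounding soil in the satellite and aerial imagery discussed here.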

(a) Reflection curves for soil and different plant species, illustrating typical red edges [70, figure 8.1.]

(b) Weed detection by row-relative placement [22, figure 10.c]

Figure 1.1: Reprinted figures, illustrating two different approaches to image-based weed detection.

Another method, when dealing with row crops, is identification of biomass between rows, as seen in [22] and depicted in Figure 1.1b. As pointed out by Weis and Sökefeld [70], successful crop-to-weed discrimination is essential for SSWM. Therefore, spatial and temporal variations of weed populations need to be assessed if treatment is to vary within the field. Satellite-based image data has proven useful for localizing big weed patches[39], by combining RGB (red, green & blue) and NIR (Near Infra Red) images with NDVI. Weed patches are detected in images from the increased biomass, compared to the surrounding areas. However, due to the limited spatial resolution of the satellite-based images, small patches remain undetectable. Furthermore, it is expensive, and the revisiting time of the QuickBird satellite used in [39] is up to 3.5 days.

Disaster overview and search & rescue aid

As noted by Bendea et al. [20], small scale unmanned aerial vehicles (UAVs) are ideal for disaster overview. In a multitude of situations, be that floods, earthquakes or post-hurricane situations, an immediate overview of the situation can prove very helpful. The revisiting times of satellites and their limited spatial resolution limit their use in these cases; thus supplementary imagery from a UAV can aid disaster management. An eye in the sky could also be invaluable in the search for missing persons. Combined with thermal cameras, an effective search could be conducted without the cost of a manned aircraft. The aircraft also requires less space for storage, and can easily be on standby for immediate dispatch. Additionally, multiple UAVs can be launched to perform a swift overflight of an area without additional personnel requirements.

Other applications

The need to inspect gas pipelines is explained by Hausamann et al. [34], where it is concluded that the remote sensing possibilities of UAVs could potentially be applied. Espinar and Wiese [29] use a UAV to collect dust samples carried by tropical trade winds from North Africa's Sahara, in an effort to link them with health effects on humans and the ecosystem. Data could also be sampled from other, more inhospitable environments, like gamma radiation at the site of the recent Fukushima disaster in Japan. Furthermore, regular civil aerial photography is also an area of application. This can be used for producing pictures for sale, wildlife monitoring, and cattle inspection and counting. The different application areas impose different constraints on the UAV and particularly the payload it carries. In agriculture, RGB and IR images are required to map fields and extract NDVI maps, which can be used to localize weed patches that need treatment. The spatial resolution should be high enough to clearly distinguish row from inter-row areas. In applications such as search and rescue, thermal cameras are deemed useful, as they can detect body heat. Thermal cameras are likewise useful in agriculture, as they can be used to spot deer[69] in the field before harvesting. The majority of the application areas utilize cameras to provide the grand overview. However, the specifications of these cameras vary. As has been pointed out, other applications require very different sensing tools.



In order to identify the requirements of UAVs in Denmark, a number of practical and regulatory considerations must be addressed.



The Civil Aviation Administration - Denmark has a number of regulations regarding recreational model plane flying and automated unmanned aerial vehicles. The most essential parts of the regulations, contained in article BL 9-4 [48], have been identified as:

Weight (BL 9-4 4.2): model planes with a take-off weight of 7-25 kg require special permission and registration of the aircraft. A maximum weight of 7 kg therefore applies to anyone not possessing this permit.

Flying height (BL 9-4 2.2.e): model planes are limited to a maximum altitude of 100 m.

Line of sight (BL 9-4): a model plane must always be in line of sight (without the use of aids such as binoculars), and if the plane is autonomous, the person operating it must always be able to switch to manual mode in flight, to be able to avert potential collisions.



Apart from the regulatory considerations, a number of other practicalities must be assessed. When selecting an aerial platform, a number of requirements and desired capabilities must be defined. The platform needs to operate outdoors and has to be operable on any, weather-wise, regular day. When dealing with light-weight, smaller aircraft, the main concern before take-off is the wind. DMI [42] provides a wind rose average spanning 1993-2002, see Figure 1.2. From the wind rose it can be concluded, see Appendix D, that if a UAV can fly in winds up to 10.7 m/s, it can operate 347 days of the year. The wind tends to slow down at night, in the morning and in the evening. Thus, for all practical purposes, a plane that flies stably in 10.7 m/s wind can be used all year in Denmark.
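The 347-day figure can be reproduced directly from the wind-class percentages reported by DMI (a trivial sketch; the class labels are taken from the wind rose table):

```python
# Fraction of days per wind class, read off the DMI wind rose
# (Odense Airport, 1993-2002).
wind_classes = {"0.3-5.4 m/s": 0.58, "5.5-10.7 m/s": 0.37, ">10.8 m/s": 0.05}

# A UAV that handles winds up to 10.7 m/s can fly in the first two classes.
flyable_fraction = wind_classes["0.3-5.4 m/s"] + wind_classes["5.5-10.7 m/s"]
flyable_days = round(flyable_fraction * 365)
print(flyable_days)  # → 347
```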

Wind speed [m/s]    Days a year [%]
0.3 - 5.4           58
5.5 - 10.7          37
> 10.8              5
Figure 1.2: DMI wind rose [42]. Average wind speeds and directions in the period from 1993 to 2002 at Odense Airport.

It is desirable to use electrical propulsion over fuel-driven motors, as electrical motors are less demanding in maintenance, cleaner, quieter and simpler to work with and control. As the plane will be used as a tool carrier, it has to be able to carry extra weight. It is estimated that a typical tool, like a camera, would weigh up to 1 kg; this would be the minimum extra weight capability of the


plane. It is desirable to have as low a stall speed as possible. A low stall speed is essential for slow flights and allows for added weight while still maintaining realistic take-off and landing speeds. When adding weight to the plane, the flight dynamics will change. The added mass increases the inertia of the system, and as a result the aircraft step response to control input will be slower. As seen in (1.1), the lift force of a wing depends on the wing area A, the traveling speed v, the lift coefficient C_L and the air density ρ:

    L = (1/2) ρ v² A C_L    (1.1)

Thus, as an effect of added weight, the take-off and landing speed will increase proportionally to the square root of the added weight, as seen in (1.2) through (1.3), where WS = M/A denotes the wing loading:

    v² = 2L / (ρ A C_L) = 2 M g / (ρ A C_L) = 2 g WS / (ρ C_L)    (1.2)

    v = sqrt(2 g WS / (ρ C_L))    (1.3)

When dealing with UAVs, a number of issues must be dealt with. The foremost challenge is that a fixed-wing platform must be in motion in order to maintain lift. Thus, stopping in the air in case of an unforeseen event is clearly not an option. As a result, it must be ensured that a pilot or operator is always ultimately in control of the airframe; even when it is flying autonomously, he or she must be ready to manually take over the flight. The fact that the platform is airborne also introduces a number of challenges in the development phase. First of all, it is non-trivial to do a flight test. A number of conditions must be met: 1) one must have a capable pilot, 2) the weather must be right, 3) one must have an airspace to perform the test in. As debugging directly on the flying platform is not applicable, every piece of data that might be, or become, relevant in later analysis must be logged or transmitted wirelessly to the ground during flights. That being said, the domain of autonomous flight is an inspiring area to do development in, and the many challenges inspire long working hours.


System requirements & Architecture

Based on the above analysis, it is concluded that a modular system capable of interfacing a range of tools would enable future developers to focus on the task, and thus the payload, at hand, rather than building the vehicle carrying it. Furthermore, by providing bi-directional platform-to-tool interfacing, new possibilities could arise. In order to provide such a system, it is of the utmost importance that basic building blocks like state estimation, autopilot and telemetry link are in place. Providing these fundamental subsystems will be the main focus of this thesis. In order to provide modularity, it was chosen to take the abstraction to a higher level than seen in any of the existing UAV systems. This has been done by

building the system using ROS[58] as middleware. By utilizing ROS, a well-established open source community with a broad code and sensor base is drawn upon. In order to accommodate ROS, it is necessary to provide a hardware platform capable of running the system. Figure 1.3 depicts the suggested structure of such a system. As seen in this block diagram, a simulator is capable of replacing the entire airframe. This is done in order to enhance development and debugging, which is necessary, as flight tests are not trivially conducted, as concluded in Subsection 1.2.3 on page 11.
[Figure: block diagram with the Ground Control Station (Chapter 4), Flight Plan (Chapter 2) and State Estimator (Chapter 3) among the connected subsystems; the remaining blocks reference Chapters 2, 4 and 5.]

Figure 1.3: Overview of information flow in the proposed system structure. It should be noted that chapter references point to the main discussion of each topic.

The remainder of this thesis is structured as follows. In Chapter 2, Auto Pilot on the following page, the subject of autonomous flight is discussed along with the processing needs of the flight computer. Chapter 3, State Feedback on page 26, surveys sensors, methods and algorithms for determining quantities such as position and orientation of the moving UAV. Various tools developed for aiding debugging and monitoring are discussed in Chapter 4, Visualization and Simulation on page 44. Chapter 5, Implementation on page 48, condenses all these aspects into a flying prototype of the proposed system and evaluates some overall performance aspects. Finally, Chapter 6, Perspective on page 68, debriefs this thesis and outlines the future of the project.



Auto Pilot
The overall purpose of the autopilot is to enable automated, purposeful flight. The purpose of the flight depends on the payload and the task at hand, but will typically be some sort of overflight of a predefined area. When dealing with aircraft control, a number of standard aviation terms are used. The most important ones are listed in the Nomenclature on page 76. Different approaches to autopilot design are found in the literature. Commonly, they are built in some sort of hierarchy, with the lowest layer stabilising the roll axis, pitch axis and velocity of the airframe. On top of this, methods for controlling altitude and heading can be built. The higher abstraction layers utilize these abilities to provide execution of a plan, in one format or another. The plan is typically composed of a set of waypoints, i.e. dots which the airframe must connect during the flight, in a specified order. As fixed-wing aircraft need velocity to maintain lift, turning on the spot is not an option. Hence, some sort of smoothing must be applied when interpolating the waypoints, to provide a flyable path. Although the autopilot is not fully implemented, an overview of the proposed structure will be given here, as depicted in Figure 2.1 on the next page.
The top level is Planning: the act of creating a set of waypoints, i.e. a list of chronologically ordered places, velocities, measurements and other conditions which must be satisfied, one by one. This list will be referred to as the flight plan. As planning involves the operator and can be done pre-flight, it will be done on the Ground Control Station (GCS). The intermediate level is the Flight Management System (FMS), which carries out the flight plan on a higher abstraction level, waypoint by waypoint. The actual real-time control of the airframe is done by a set of controllers, which aim to position the airframe correctly in terms of attitude, altitude, velocity etc., according to the current setpoints given by the FMS. Typically one controller is devoted to each set of control surfaces. Such a



[Figure: Planning produces the Flight plan, which the Flight Management System (Section 2.2) executes using State Feedback (Chapter 3) and the Tool / Payload link (Section 2.2.3); the FMS passes control modes and setpoints to the Controllers (Section 2.1) running on the Flight Computer (Section 2.3), whose control efforts drive the Actuation layer (Section 5.2.10), where the Operator can intervene.]

Figure 2.1: Overview of the autopilot sub-system structure. The operator is in control in the planning phase, but is also capable of overriding the control efforts and thus manually controlling the airframe.

controller can work in different modes, depending on what condition the FMS requires it to maintain. As pointed out by Low [47], the separation between the FMS and the controllers introduces an abstraction which allows the FMS to control different types of airframes without modification. The controllers must be tuned to the airframe in question, but the way of controlling it is in essence the same. It should be noted that key parameters of the airframe and its controllers, such as turn radius and cruising speed, need to be known in the planning phase, as they vary with the type and proportions of the airframe. The FMS also communicates with the payload tool. This communication provides the required information to the tool for sampling data points when a set of conditions are met (i.e. when the airframe is at the appropriate position). It is likewise possible to change flight parameters during flight, based on observations acquired by the tool. E.g. if the tool misses a data sample, it can request the FMS to fly back to the point in question for another attempt. Other conceivable scenarios could be following a rabbit or path monitored by the tool, or expanding the search area based on the tool's observations. The final layer of the system is actuation. In this layer, the servos are signalled to position the control surfaces at the deflection commanded by the controllers. Here, too, the operator can get involved, as he shall at all times be able to disengage the autopilot system and take over control via a remote control.
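The mode-based separation between the FMS and the controllers can be sketched as a thin data interface. All names and fields below are illustrative assumptions, not the thesis implementation:

```python
from dataclasses import dataclass
from enum import Enum

class ControlMode(Enum):
    """Conditions a low-level controller can be asked to maintain."""
    HEADING = 1
    ALTITUDE = 2
    AIRSPEED = 3

@dataclass
class Waypoint:
    lat: float
    lon: float
    altitude_m: float
    speed_ms: float

def setpoints_for(wp, heading_to_wp_rad):
    """Translate the current waypoint into (mode, setpoint) pairs handed to
    the controllers; airframe-specific tuning stays inside the controllers,
    so the FMS itself is airframe-agnostic."""
    return [
        (ControlMode.HEADING, heading_to_wp_rad),
        (ControlMode.ALTITUDE, wp.altitude_m),
        (ControlMode.AIRSPEED, wp.speed_ms),
    ]

wp = Waypoint(lat=55.37, lon=10.43, altitude_m=80.0, speed_ms=15.0)
setpoints = setpoints_for(wp, heading_to_wp_rad=1.2)
```

Because the FMS only emits (mode, setpoint) pairs, retargeting the system to a new airframe means retuning controllers, not rewriting the flight plan logic.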
The technical details presented in the remainder of this Chapter are structured bottom-up: Section 2.1 on the following page describes the low level controllers, and Section 2.2 on page 20 describes how the low level controllers can be utilized by the FMS when executing a flight plan. Section 2.3 on page 23 evaluates the need for computational power and other constraints associated with


[Figure: PID block diagram; the error e(t) between setpoint (desired value) and feedback (actual value) feeds the proportional (Kp), integral and derivative (d/dt) branches, which are summed and low-pass filtered to form the output to the system.]

Figure 2.2: Block diagram of a basic PID loop with bandwidth-limited inputs and output. Please note that implementations might deviate from the depicted version.

the UAV domain and finally chooses a platform. Section 5.2.10 on page 55 describes the subsystem for interacting with the R/C components, which performs the actual execution of the flight. It should be noted that throughout this Chapter, the state of the airframe is assumed to be known and correct. Chapter 3, State Feedback on page 26, describes methods for acquiring this state.


Low level control loops

The low level controllers serve to control flight parameters such as roll, pitch, yaw and velocity. They do so by continuously controlling the deflection of the control surfaces. The output to these surfaces is based on the requirements from the FMS and the feedback from the state estimator.


PID controller

The lowest level of control is stabilisation of the roll and pitch axes of the airframe. For this form of control, the PID scheme is applicable, as it is generally sufficient for controlling most types of systems and can be tuned with relative ease. Figure 2.2 depicts such a controller. It takes two input signals, referred to as setpoint and feedback. The setpoint is the desired value of the plant under control. The actual value of the plant must be measured and returned as feedback. By subtracting the current plant value (feedback) from the desired one (setpoint), the error is found. The controller outputs a single value, the output, which must be connected to an actuator or subsystem controlling the plant. The output of the controller is the weighted sum of the current error, the integral of the error and the derivative of the error. The applied weights are referred to as Kp, Ki and Kd in the ideal form, for proportional, integral and derivative respectively. Various schemes exist for tuning the three parameters of a PID controller without modelling the system under control, most notably Ziegler-Nichols[72]. This method is a step-by-step recipe for tuning a PID controller for an arbitrary system, by applying various gains and inputs and observing the system reaction. In Figure 2.2, the PID controller is depicted with bandwidth-limiting low pass filters on setpoint, feedback and output. These are not strictly necessary, but


can be applied to reduce the bandwidths, in order to filter out noise in feedback and setpoint. Such a filter can likewise limit the rate of change of the output. If the cut-off frequencies of these filters are not too low, they will not compromise the functionality of the PID controller, as it does not make any sense to control the airframe any faster than its natural frequency. Moving average filtering can also be applied to e.g. the setpoint, to reshape a step input into a linear ramp. Integral wind-up is the situation where the controller aims to control the output towards an unreachable setpoint; the integrator term never gets satisfied and keeps integrating. This scenario will make the output stick to the upper limit, even when the setpoint is back in the reachable region. Integral wind-up is handled by limiting the value of the integrator. By nesting PID controllers, more complex controller structures can be built. This is done by letting one controller output the setpoint of the next. This way e.g. the heading of the airframe can be controlled by outputting the desired roll angle to another controller. When dealing with nested controller loops, the controller being controlled by another is referred to as the inner control loop, and the controller controlling it the outer control loop. It should be noted that the innermost controller should always be the fastest. Controlling a slow controller with a fast one will result in an unstable system.
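As a concrete illustration, a minimal PID controller with output limiting and integrator clamping might be sketched as follows. The class and parameter names are ours, and clamping the integrator to the output limits is one of several possible anti-windup strategies:

```python
class PID:
    """Minimal PID controller with output limiting and integral anti-windup."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, feedback, dt):
        error = setpoint - feedback
        # Derivative of the error; no history on the first call, so zero.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        if self.ki:
            # Integrate, then clamp the integrator so its contribution can
            # never exceed the output limits (prevents wind-up against an
            # unreachable setpoint).
            self.integral += error * dt
            self.integral = max(self.out_min / self.ki,
                                min(self.out_max / self.ki, self.integral))
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))
```

Nesting controllers is then simply a matter of feeding one controller's return value into the next controller's setpoint argument.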


Bank and Heading control

The bank angle of the airframe is controlled using a PID loop, as seen in Figure 2.3a on the next page. The feedback is the actual roll angle, φa, and the setpoint is given by outer control loops as the desired roll angle, φd. The output of the controller is connected to the ailerons, with opposite signs for left and right, to induce roll. An outer control loop can control the heading of the airframe by banking it, under the assumption that a lateral controller is used to maintain the altitude and the velocity is constant, as:

r = Va² / (g tan(φ))    (2.1)

Where Va is the velocity through air, r is the turn radius and g is the gravitational acceleration. Based on this, the angular velocity around the yaw axis, ψ̇ (i.e. the rate of turning), can be controlled, as:

ψ̇ = Va / r    (2.2)

ψ̇ = Va / (Va² / (g tan(φ))) = g tan(φ) / Va    (2.3)

When using a PID loop for controlling the heading, the output must be limited. If the airframe rolls too much, the lift is compromised, such that the lateral controller is unable to maintain the altitude.
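Inverting (2.3) gives the bank angle required for a desired turn rate, φ = atan(ψ̇·Va/g), which is also where the outer loop's output limit is naturally applied. A minimal sketch; the function names and the 30° limit are our illustrative choices, not values from the airframe:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def bank_for_turn_rate(turn_rate, v_air, max_bank=math.radians(30)):
    """Bank angle [rad] for a coordinated turn at the given yaw rate
    [rad/s] and airspeed [m/s], obtained by inverting
    psi_dot = g*tan(phi)/Va, and limited so lift is not compromised."""
    phi = math.atan(turn_rate * v_air / G)
    return max(-max_bank, min(max_bank, phi))

def turn_radius(v_air, phi):
    """Turn radius r = Va^2 / (g*tan(phi)) of a coordinated turn."""
    return v_air ** 2 / (G * math.tan(phi))
```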


(a) Longitudinal low level controller loops.

(b) Lateral low level controller loops.

(c) Vertical low level controller / turn coordinator.

(d) Throttle controller / Cruise control.

Figure 2.3: Low level PID controllers.


By controlling the yaw of the airframe, the course over ground can be controlled, as course and heading covary:

χ = ψ + β    (2.4)

Where χ is the course over ground and β is the side slip angle, due to the wind component. This time- and heading-varying offset can be trimmed away by the integral term of the controller.


Climb and Altitude control

The lateral controller, depicted in Figure 2.3b on the preceding page, is similar to the longitudinal; the inner PID loop controls the pitch of the airframe, by outputting the elevator control effort. The feedback is the current pitch angle, θa, and the setpoint is given by an outer control loop, as the desired pitch angle, θd. The outer loop delivers the setpoint for the inner loop, in order to maintain a given altitude. The coupling between pitch and altitude is more intuitive, compared to that between roll and heading, as:

Ȧlt ≈ Va sin(γ)    (2.5)

Where the course climb angle, γ, is given by the pitch angle, θ, and the angle of attack, α:

γ = θ − α    (2.6)

Strictly speaking, (2.5) is an approximation, as the wind is neglected. Please refer to Figure 3.2b on page 29. However, this approximation is viable for altitude control purposes. Similarly to the longitudinal controller, the output of the outer loop must be limited, as it is undesirable to climb or descend too steeply - at least under normal circumstances.


Miscellaneous control

The vertical controller outputs the rudder control effort. See Figure 2.3c on the preceding page. This controller will per default maintain the y-acceleration of the airframe at zero[38], such that coordinated turns are performed. As the servo used to control the rudder also controls the steering wheel of the airframe, a special controller must be used to handle take-off, landing and taxiing situations. Figure 2.3d on the previous page depicts a simple PID control to maintain the airframe's velocity, referred to as throttle- or cruise control. It should be noted that either the indicated airspeed, Vi, or the true airspeed, Va, can be utilized as feedback for this controller. By using the true airspeed as feedback, time-of-flight and ground speed can be controlled. But if the airframe experiences strong tailwind, the velocity relative to the air might get too small to produce proper lift, possibly resulting in stall situations. A solution could be to use the true airspeed per default and incorporate a safety mechanism based on the indicated airspeed.


Figure 2.4: Examples of circle fitting trajectory smoothing. Reprinted from [17, Fig. 13].


Flight Management System

The foremost task of the Flight Management System (FMS) is to ensure that the flight is executed correctly, by keeping the flight plan and its state of execution up-to-date. This includes a set of subtasks:

1. Interpolate the flight plan into a smooth, flyable trajectory.
2. Ensure that the low level controllers do in fact fly the trajectory.
3. Communicate the state of plan and airframe to the tool and receive updates from it.


Trajectory Smoothing

Trajectory smoothing is the task of interpolating the waypoints from the flight plan into a continuous, flyable trajectory. Normally, it is desirable to fly as straight as possible in between waypoints, as these will be the lines where data must be sampled by the tool. Thus, flying straight lines in between the waypoints and turning in minimum circles in the line segment transitions does intuitively seem like a good idea. However, one must carefully consider how the circle segments are used: In Figure 2.4, Anderson et al. [17] illustrate three different approaches to this, as the FMS can either:

a) Fly the minimum distance path (left path)
b) Force the trajectory through the waypoint (center path)
c) Fly the equal length distance (right path)


(a) Nested PID loops can be used for trajectory tracking, by minimising the distance from the desired path. Partial reprint from [38, Fig. 17]

(b) Non-linear approach to trajectory tracking. The method aims to intersect the desired path a fixed distance ahead of the airframe. Reprinted from [57, Figure 1]

(c) A vector field can be described that guides the UAV towards the desired trajectory and gradually towards the direction of it. Partial reprint from [54, Fig. 1]

Figure 2.5: Reprinted figures of different approaches to trajectory tracking.

If the purpose of the flight is to gather data along the line segments, option b) seems advantageous, as it maximizes the line segments' length. Is the purpose on the other hand to get from A to B fast, option a) is the way to go. Finally, option c) is advantageous if the flight time is important, as the flown distance is easily calculated from the sum of waypoint distances. The information on which scheme to use should be part of the flight plan description, to ensure that the planner and FMS share their interpretation of the trajectory.


Trajectory Tracking

A number of different approaches to trajectory tracking can be found in the literature. A few examples are picked and reviewed here. Commonly, they rely on the longitudinal low level controller, but differ in the way the desired heading is produced. Ippolito [38] proposes a controller based on nested PID controllers. This controller essentially extends the longitudinal controller depicted in Figure 2.3a on page 18, by letting an additional PID loop control the desired heading. The controller has two modes, as it can either track a line, by controlling the desired

heading from the orthogonal distance between the line and the airframe, or track a circle segment, by comparing the distance from the circle center to the desired circle radius. The controller is depicted in Figure 2.5a on the preceding page. This scheme is temptingly simple, but not quite flawless. As the orthogonal distance between the airframe and the trajectory is used, the controller can only work reactively. This will inevitably result in the airframe overshooting the trajectory, before any error is present for the PID to react on. Park et al. [57] propose a non-linear controller, which follows an imaginary rabbit a fixed distance ahead of the airframe on the trajectory. A conceptual illustration is given in Figure 2.5b on the previous page. The turn radius is continuously set such that the airframe will intercept the trajectory at that very distance. Thus, the airframe will smoothly approach the trajectory and adjust the turn radius such that the curve is tracked. Nelson et al. [54] propose a vector field based navigation and control scheme for fixed wing UAVs. The general idea of this scheme is that various interest maps can be combined. E.g. the UAV can be directed away from densely populated areas, towards unexplored areas etc. Thus this scheme is not necessarily classified as a strict trajectory tracker, but interesting indeed. In Figure 2.5c on the preceding page, such a vector field is depicted for a straight line segment of the trajectory.
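The guidance logic of Park et al. commands a lateral acceleration a_cmd = 2·Va²·sin(η)/L1, where η is the angle between the velocity vector and the line of sight to the reference point on the trajectory. A sketch with our own names; using a fixed lookahead distance L1 and 2D kinematics is a simplification:

```python
import math

def lateral_accel_cmd(pos, vel, ref_point, l1):
    """Lateral acceleration command [m/s^2] steering towards a reference
    point a distance l1 ahead on the trajectory, per the nonlinear
    guidance logic of Park et al.  pos, vel, ref_point are 2D
    (north, east) tuples."""
    v = math.hypot(*vel)
    # Direction of travel and direction of the line of sight.
    course = math.atan2(vel[1], vel[0])
    los = math.atan2(ref_point[1] - pos[1], ref_point[0] - pos[0])
    # Signed angle from velocity vector to line of sight, wrapped to [-pi, pi].
    eta = (los - course + math.pi) % (2 * math.pi) - math.pi
    return 2.0 * v ** 2 * math.sin(eta) / l1
```

A positive command accelerates the airframe to the right of its velocity vector; flying straight at the reference point yields zero command.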


Tool Interaction

Beyond the capabilities of a regular autopilot in a UAV, the autopilot of an ATC must be coupled with the tool it is carrying. This coupling is bidirectional, as the tool affects the dynamics of the airplane, while the autopilot provides information about the current state of the airframe. A use-case could be a camera tool; mounting a camera under the belly of the plane increases its weight and thus changes its flight characteristics, i.e. the take-off and landing speeds are increased, as well as the step response time to control input. Therefore the tool must provide information about its size and weight to the autopilot, such that these parameters can be accounted for. Furthermore, the tool needs information from the autopilot. Assume the camera is mounted in a gimbal and the mission goal is to photograph a specific location on the ground from 360° around. The tool would then need to know where to point the gimbal, and when to take a photo. Should the tool fail, it could inform the autopilot, such that a second circle could be flown. The publish/subscribe[30] model is ideal for accommodating this sort of feature, as the tool can just subscribe to this information. The autopilot could likewise subscribe to the information provided by the tool. Various tools can be contemplated, and a theoretical classification has been made:

1) Passive tool: Works independently of the airframe state. Ex. a camera filming the entire flight. Requires no autopilot support or interfacing.
2) Interactive tool: Executes commands sent from the FMS. Ex. take picture, turn gimbal here. Requires the tool to subscribe to the autopilot commands.
3) Controlling tool: Tool provides autopilot control commands. Ex. the tool tracks a red car and provides target coordinates to the autopilot.

The various tool categories require different levels of support. 1) Passive requires little implementation in the autopilot software; as long as its dimensions and weight are known, it is just a matter of fixating it in the tool mount. 2) Interactive requires some support. Typical message types must be predefined, to which the tool can subscribe. 3) Controlling requires a more complete integration, as it must be ensured that the commands sent from the tool do not result in hazardous control - i.e. the autopilot should supply failsafe mechanisms. At this stage of the project, tool interfacing has not been implemented. It has however played a major role in the design and architecture of both hardware and software, such that interfaces are provided and the software has been module based.
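The decoupling that the publish/subscribe model provides can be illustrated with a minimal plain-Python stand-in for the ROS topic bus. The topic name and message contents here are hypothetical examples, not part of the implemented system:

```python
class Bus:
    """Tiny publish/subscribe dispatcher, standing in for the ROS topic bus."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        """Register a callback for all future messages on a topic."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self.subscribers.get(topic, []):
            callback(message)

bus = Bus()
received = []

# An interactive tool (category 2) subscribes to autopilot commands...
bus.subscribe("/autopilot/tool_cmd", received.append)
# ...and the FMS publishes a command when e.g. a waypoint is reached.
bus.publish("/autopilot/tool_cmd", {"action": "take_picture", "gimbal": (0.0, -1.2)})
```

Neither side needs to know the other exists; adding a second tool is just another `subscribe` call.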


Flight Computer

At the heart of the ATC lies the main computer. Its tasks include, but are not limited to, state estimation, control, tool interfacing and telemetry linkage. In those regards, a number of requirements have to be met. The ATC being intended as a development platform outlines the requirements for the main computer. It is desirable to have enough processing power to try out new ideas, without compromising the main task of keeping the platform airborne. It is likewise desired to use the publish/subscribe model, as described earlier. This inherently includes some amount of computational message transport overhead, which must be accommodated. The performance requirements are similar to those of a modern desktop PC. Due to space and power constraints, carrying a desktop PC around is not an option. Furthermore, a variety of hardware interfaces must be available, including SPI, I2C, UART, USB and Ethernet. High performance embedded computers have these interfaces. As it is seen in for example the Paparazzi project 1, a dedicated processor like the STM32/LPC214x can handle the inertial navigational system. This approach yields a fine solution for a dedicated autopilot. However, it limits expandability and modularity, as all resources are dedicated to keeping the airframe in the air. Thus, there are very limited options for adding extra sensors and tools. Another approach is to use a processor like ARM, and run a Linux kernel on it. There are various distributors of this type of processor boards. Most notable are the BeagleBoard and Gumstix. A comparison of the features offered by the two is listed in Table 2.1 on the next page.

1. http://paparazzi.enac.fr/wiki/Umarim_v10


Feature                          | BeagleBoard xM     | Gumstix FE
Dimensions                       | 78.74 x 76.2 mm    | 58 x 17 x 4.2 mm
Weight                           | 37 g               | 5.6 g
Processor                        | DM3730             | OMAP3530
Clock                            | 1000 MHz           | 600 MHz
RAM                              | 512 MB             | 512 MB
Flash                            | 0 MB               | 512 MB
Power                            | 5V USB or DC power | 3.3V DC
Video                            | DVI/LCD/S-Video    | DVI/LCD
DSP                              | 800 MHz            | 220 MHz
JTAG / UART / USB OTG / USB HOST | yes                | yes
Ethernet                         | yes                | no
Audio I/O / Camera / microSD     | yes                | yes
Bluetooth / Wi-Fi                | no                 | yes
Expansion boards                 | yes                | yes
I2C / SPI / ADC / PWM / GPIO     | yes                | yes
Table 2.1: Comparison of BeagleBoard xM and Gumstix FE

The BeagleBoard has the advantage of integrated Ethernet; however, it comes at the price of a rather large motherboard type PCB, similar to that of a standalone PC, whereas the Gumstix provides WiFi and a processor-only board, meant for use in expansion boards. Thus the design of the Gumstix is more in line with the design criteria of this system - allowing for module based hardware. Gumstix has been deemed the optimal compromise between size, interfaces and computational power. Gumstix offers small self-contained computers, capable of running higher abstraction operating systems, such as Linux. The form factor allows for easy installation in custom designed boards. Furthermore, Gumstix has a vibrant open source based user community, using OpenEmbedded2 and BitBake. This allows for quick access to support on numerous hardware and software related issues.



In this Chapter, various aspects of automated flight have been examined, with emphasis on modularity and independence of specific fixed wing airframes. By splitting the executive autopilot into a Flight Management System and a controller layer, airframe specific tunings only need to be handled at the lowest layer. Abstract tasks, such as tool interfacing, trajectory smoothing and flight control can be handled by the generic Flight Management System. Although such a system has not been implemented, the lower control layer it relies on has been. The PID controller scheme was found useful for this task, which inherently means that well known methods can be used to tune the controller for a specific airframe. In Section 5.6.2 on page 65 it will be verified by field tests that the implemented parts of the controller are indeed capable of controlling the airframe. A Gumstix has been chosen as the main flight computer, such that interfacing with various tools can be done with minor effort, utilizing Linux and ROS. The lack of realtime capabilities of the Linux operating system is dealt with by letting a small coprocessor interface with the R/C components. This coprocessor also makes sure that the operator can reclaim manual control of the airframe in case of an operating system crash.



State Feedback
To enable control of the airframe, state feedback is essential. Together with setpoints, this feedback will enable real time control of the airframe, as discussed in Chapter 2. Several state variables have to be monitored, to enable low- as well as higher level control. The airframe's bank, climb and heading need to be known. Rotating from world to plane body frame is done using Euler angles, in the order Z-Y-X. This de facto standard does intuitively relate ψ to heading, θ to elevation and φ to bank, as seen in Figure 3.1b on page 28. Combined, these three will be referred to as the pose of the airframe. A subset of the pose, excluding heading information, will be referred to as the attitude of the airframe. The airframe coordinate frame is positioned (xa, ya, za) = Nose, Starboard, Down. The global coordinate frame used is Universal Transverse Mercator (UTM). The Cartesian UTM coordinate system is chosen over the spherical latitude/longitude system, as it eases calculation. The UTM frame is positioned (x0, y0, z0) = North, East, Down. However, as it is unintuitive that negative movement on the z0-axis maps to positive climb, the altitude is referred to as positive above ground or sea level, as seen in Figure 3.1a on page 28. This figure also shows the relation of the position of the airframe, (Pn, Pe), to the global UTM frame. It is assumed that the UTM frame is a valid inertial reference frame. Although this is not exactly true, due to the rotation of the Earth and the solar system, the massive scale of these motions reduces the impact on our, relatively speaking, much faster moving system, and they are thus neglected. It is vital to know the velocity of the airframe for control purposes, but also for attitude estimation, as will be discussed in Section 3.2 on page 33. In aeronautics, velocity is a more complicated issue compared to ground based vehicles, as a total of three different velocities must be considered:

1. The Indicated airspeed, Vi, is the velocity of the airframe with respect to the surrounding air.
2. The True airspeed, Va, is the total 3D velocity of the airframe, with respect to the inertial reference frame. On a completely windless day, Va and Vi are identical.


3. The Speed over ground or Course speed, Vg, is the total 2D (x-y) velocity of the airframe, with respect to the inertial reference frame.

Please refer to Figures 3.2a and 3.2b on page 29 for a graphical visualisation of the different velocities and their relations. Especially Vi and Va must be evaluated separately, as Vi is important with respect to flight dynamics (e.g. stall speed, control surface response) and Va with respect to inertial measurements. For control purposes, it is convenient to know the direction and magnitude of the wind. E.g., this will allow for better dead reckoning navigation in the case of a lost GPS fix. Likewise, the Flight Management System will be capable of controlling the course heading, χ, rather than the airframe heading, ψ. A complete list of variables used to describe the state of the airframe is given in Table 3.1.

Variable | Description             | Type           | Unit
Vi       | Indicated Airspeed      | Direct         | [m/s]
Va       | True Airspeed           | Derived        | [m/s]
Vg       | Speed over Ground       | Direct         | [m/s]
φ        | Roll/bank angle         | Fused          | [rad]
θ        | Pitch/elevation angle   | Fused          | [rad]
ψ        | Yaw/heading angle       | Fused          | [rad]
Pn       | Position north          | Fused / Direct | [m]
Pe       | Position east           | Fused / Direct | [m]
Alt      | Altitude                | Fused / Direct | [m]
Wn       | Wind in north direction | Unobservable   | [m/s]
We       | Wind in east direction  | Unobservable   | [m/s]
α        | Angle of Attack         | Unobservable   | [rad]
β        | Side slip angle         | Derived        | [rad]
χ        | Course heading          | Derived        | [rad]
γ        | Course climb            | Derived        | [rad]

Table 3.1: Table of state variables to be determined. Quantities marked Direct are directly measurable; Fused quantities are deduced by fusing various measurements. Unobservable quantities are not measurable and must be estimated. Derived quantities are redundant and can be calculated from other variables.

The remainder of this Chapter is structured as follows. Section 3.1 on the following page will examine sensors applicable for measuring the quantities listed in Table 3.1. In Section 3.2 on page 33, methods and kinematic equations for fusing noisy data and estimating unobserved quantities will be explored. Noise modelling will briefly be discussed in Section 3.3 on page 42. Finally, Section 3.4 on page 43 will assess the overall performance of the implemented methods.


(a) The airframe positioned in the world frame.

(b) ZYX Euler rotation. First around the world frame z-axis, then around the new intermediate y-axis and finally around the body frame x-axis.

Figure 3.1: Illustrations of position and orientation variables, relating the body frame to the world frame.



A number of different sensors must be combined to provide the information required. This section will review the different sensor types capable of measuring the variables. Later sections will review methods for extracting the information from the noisy sensor data streams.


Velocities and course parameters

Vi can be measured using a pitot tube. This sensor is a specialized anemometer and operates by comparing the static air pressure with the dynamic pressure induced by the airframe's velocity through the surrounding air. A variety of other types of anemometers can be used to measure the airspeed, but the pitot tube is most commonly used in aeronautics. If the airframe pose, angle-of-attack and the wind components are known, the true airspeed, Va, can be calculated from the indicated airspeed, Vi:

Va = √((Vi cos(ψ) cos(γ) + Wn)² + (Vi cos(γ) sin(ψ) + We)² + (Vi sin(γ))²)    (3.1)

where the course climb angle γ = θ − α. Similarly, the speed over ground, Vg, can be determined from the indicated airspeed:

Vg = √((Vi cos(ψ) cos(γ) + Wn)² + (Vi cos(γ) sin(ψ) + We)²)    (3.2)

GPS based sensors can also be used to determine Vg directly, and from this measurement Va can be determined, if the airframe's attitude is known:

Va = Vg / cos(γ)    (3.3)

Which also implies that:

Vg = Va cos(γ)    (3.4)

From the wind components, indicated airspeed and attitude, the course over ground can be determined[19]:

χ = tan⁻¹((Vi sin(ψ) cos(γ) + We) / (Vi cos(ψ) cos(γ) + Wn))    (3.5)

Which also implies that:

β = χ − ψ    (3.6)

As the aforementioned velocities refer to the inertial reference frame, they can also be estimated over short time spans by integration of airframe acceleration, but long term integration will soon lose precision due to various noise sources.
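The velocity relations (3.1), (3.2) and (3.5) can be collected into a small helper. This is a sketch; the function and variable names are ours:

```python
import math

def velocity_triangle(v_i, psi, gamma, w_n, w_e):
    """Compute true airspeed Va, speed over ground Vg and course heading
    chi from indicated airspeed Vi, heading psi, course climb angle
    gamma and the wind components, per (3.1), (3.2) and (3.5)."""
    vn = v_i * math.cos(psi) * math.cos(gamma) + w_n  # north component
    ve = v_i * math.cos(gamma) * math.sin(psi) + w_e  # east component
    vd = v_i * math.sin(gamma)                        # vertical component
    v_a = math.sqrt(vn ** 2 + ve ** 2 + vd ** 2)
    v_g = math.hypot(vn, ve)
    chi = math.atan2(ve, vn)
    return v_a, v_g, chi
```

On a windless day with level flight (γ = 0, Wn = We = 0) the three speeds coincide and χ = ψ, as expected.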

(a) Top view, showing the relation between yaw ψ, wind components Wn and We, side slip angle β, course heading χ, indicated airspeed Vi and true airspeed Va.

(b) Side view, showing the relation between wind W, angle of attack α, course climb angle γ, indicated airspeed Vi, true airspeed Va, and speed over ground Vg.

Figure 3.2: Illustration of angles and velocities related to the airframe movement.

Pitot model

The pitot tube is connected to a differential pressure sensor, such that Bernoulli's equation (3.7) can be applied:

Pt = Ps + (ρair Vi²) / 2    (3.7)

Where Pt is the total pressure, Ps is the static pressure (the two pressures given by the pitot tube) and ρair is the air density. Thus all variables are accounted for, as the differential pressure sensor measures the dynamic pressure Pd = Pt − Ps. Therefore:

Vi = √(2 Pd / ρair)    (3.8)

The air density ρair varies with temperature, humidity and pressure, but is in the vicinity of 1.22 kg/m³ under normal conditions.
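Equation (3.8) translates directly into code. A sketch, with ρair fixed at the nominal 1.22 kg/m³ and negative readings (sensor noise around zero dynamic pressure) clamped:

```python
import math

RHO_AIR = 1.22  # nominal air density [kg/m^3]

def indicated_airspeed(p_dynamic):
    """Indicated airspeed [m/s] from the differential (dynamic) pressure
    Pd = Pt - Ps measured across the pitot tube, per (3.8)."""
    return math.sqrt(2.0 * max(p_dynamic, 0.0) / RHO_AIR)
```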



The position of the airframe within the reference frame can be measured by GPS (Global Positioning System). This system uses global landmarks in the form of satellites. Satellites with known positions and synchronised clocks frequently transmit radio signals containing time stamps. By receiving time stamps from multiple satellites, the difference in time-of-flight of the radio signals can be determined, and thus the position of the receiver. Integrated GPS receivers are available in small modules and can be interfaced via a serial communication line. The update rate is typically 1 to 10 Hz. The GPS outputs the position in spherical longitude/latitude coordinates. These coordinates need to be converted to UTM northing and easting coordinates. This conversion is somewhat complex, but is already an integrated part of FroboMind and thus available without further effort.



The altitude of the airframe can be measured using GPS. However, the resolution of the altitude from this sensor is somewhat limited, due to satellite versus receiver geometry. The static air pressure, Ps, can also be used, as the pressure decreases up through the atmosphere[19]:

Ps = ρair g Alt  ⟺  Alt = Ps / (ρair g)    (3.9)

Specialized barometric altimeters are available in IC sized packages. These sensors precisely measure air pressure and temperature, from which the altitude can be calculated. However, as atmospheric pressure varies with the weather, the pressure at sea level must be known to calculate the absolute altitude above sea level. If, on the other hand, the altitude above the ground station is adequate, a series of measurements can be conducted on the ground before take-off and used as reference. During long flights or changes in weather, the pressure at the surface will vary and this reference will not be accurate. In certain situations, the barometric altimeter is not precise enough. For instance, in take-off and landing situations, the distance to the runway must be known at cm-scale. An ultrasonic range finder is useful at this scale and is mounted on the belly of the airframe. The sensor operates on the time-of-flight principle: an ultrasonic chirp is transmitted and detected when it is reflected by the surface of the runway. The time of travel is proportional to the distance travelled, scaled by the speed of sound. The range of this type of sensor is however limited to approximately 7 meters.
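For altitude above the ground station, the linear approximation (3.9) can be applied against a reference pressure averaged on the ground before take-off. A minimal sketch; the constants are the nominal values used above:

```python
G = 9.81        # gravitational acceleration [m/s^2]
RHO_AIR = 1.22  # nominal air density [kg/m^3]

def altitude_above_reference(p_static, p_reference):
    """Altitude [m] above the ground station from the static pressure,
    using the linear approximation (3.9). p_reference is the mean
    static pressure sampled on the ground before take-off."""
    return (p_reference - p_static) / (RHO_AIR * G)
```

As noted above, the reference drifts with the weather, so this is only valid over short flights.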



The Attitude and Heading Reference System (AHRS for short) serves, as the name suggests, as a reference for the airframe's orientation in 3D. Historically, mechanical gimbal-mounted spinning mass gyroscopes have been used for this type of instrumentation, as they maintain their orientation with reference to the inertial reference frame as the airframe is maneuvered. While these mechanical systems are bulky, expensive and heavy, MEMS (Micro Electro-Mechanical System) technology has introduced much cheaper and smaller rate gyroscopes. These devices are based on vibrating micro structures and are available in IC-sized packages for direct PCB mounting. As the airframe rotates, the vibrating mass tends to maintain its direction of travel and will therefore exert a force (the Coriolis force) on its supports. This force can be measured and is proportional to the rate of rotation[53]. These devices do not sense the orientation of the airframe, but the rate of rotation. Integration of the rate of rotation will yield the actual orientation. The integration must however be done through the Euler rate transformation (3.10), as rotation is not cumulative:

[φ̇, θ̇, ψ̇]ᵀ = [[1, sin(φ)tan(θ), cos(φ)tan(θ)], [0, cos(φ), −sin(φ)], [0, sin(φ)/cos(θ), cos(φ)/cos(θ)]] · [p, q, r]ᵀ    (3.10)

[φ, θ, ψ]ᵀ(n) = ∫ [φ̇, θ̇, ψ̇]ᵀ dt ≈ Σₖ₌₀ⁿ [φ̇k, θ̇k, ψ̇k]ᵀ Δtk    (3.11)
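One propagation step of the rate transformation and integration in (3.10)-(3.11) might be sketched as follows, with (p, q, r) denoting the body rotation rates and the remaining symbols defined in the surrounding text:

```python
import math

def integrate_euler_rates(euler, gyro, dt):
    """One step of attitude propagation: map body rotation rates
    (p, q, r) [rad/s] to Euler angle rates via (3.10), then integrate
    over the sampling period dt per (3.11).
    euler = (phi, theta, psi) in radians."""
    phi, theta, psi = euler
    p, q, r = gyro
    phi_dot = p + math.sin(phi) * math.tan(theta) * q + math.cos(phi) * math.tan(theta) * r
    theta_dot = math.cos(phi) * q - math.sin(phi) * r
    psi_dot = (math.sin(phi) * q + math.cos(phi) * r) / math.cos(theta)
    return (phi + phi_dot * dt, theta + theta_dot * dt, psi + psi_dot * dt)
```

Note the division by cos(θ): like any Euler angle formulation, this propagation is singular at θ = ±90°.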



Where the subscript k denotes the angular rotation rates sampled at time k, and Δtk is the sampling period at time k. Sensor imperfection, discrete time resolution, vibrations and integration errors will accumulate over time, and eventually render the information useless. Hence, a way of measuring the attitude and heading with an absolute reference is necessary and will be reviewed here.

Attitude sensing

The direction of Earth's gravitational field relative to the airframe can be used to determine the attitude (pitch and roll). This acceleration can be measured using MEMS accelerometers, which, like rate gyroscopes, are available in IC-sized packages. These devices consist of micro machined masses, suspended by cantilever beams acting as spring elements. Accelerations acting on the mass will deflect the suspending beams and move the proof mass relative to the base. This displacement is typically measured by the changing capacitance between the mass and the end walls. During stable, level flight, the created lift is equal to gravity, such that the measured acceleration is exactly g. Maneuvering will change the direction of the airframe, by using the control surfaces to produce centripetal acceleration in the desired direction, according to Newton's 3rd law of motion.


As these sources of acceleration are indistinguishable, the sensor is useless if the centripetal accelerations are not estimated. Additionally, acceleration from various noise sources, such as vibrations, wind gusts etc., will affect the airframe. These noise sources must be filtered out. Alternatively, infrared thermopile sensors can be used to detect the horizon and thus the attitude of the airframe. This method works by a pair of thermopiles, facing opposite directions, in a collinear configuration (see Figure 3.3). As the airframe banks, one thermopile sees more sky and the other more Earth. As the Earth is relatively warmer than the sky, the attitude angle can be estimated from this temperature difference. However, it is not trivial to extract the exact magnitude of the roll and pitch angles from this information, as the temperature of Earth and the sky varies. Hilly terrain will likewise distort the information provided by the sensors. Apart from these imperfections, the system is simple and intuitive. Some issues might arise in the implementation: the sensors need to be mounted externally on the airframe, with a clear sight of the Earth and sky. This will constrain the mounting possibilities. While the system is attractively simple, it is dismissed on the basis of the aforementioned imperfections.

Figure 3.3: Two thermopiles on opposite sides of the airframe can be used to determine the roll angle. Reprinted from http://paparazzi.enac.fr/wiki/Sensors/IR [14].

Heading sensing

As the gravitational field tends to point to the center of Earth, it does not contribute any information about the airframe's heading. However, Earth's magnetic field is useful for determining the airframe's heading. A digital magnetometer can be used to measure the field strength along three orthogonal axes and thus provide a three dimensional vector pointing towards the North Magnetic Pole. A number of constraints must be considered when using Earth's magnetic field for heading determination:

1. Earth's magnetic field is not uniform across the planet.
2. Other (high power) electrical equipment and soft iron masses affect the magnetic field around the airframe.
3. As the airframe pitches and rolls, the sensed field rotates, as it is measured with respect to the base frame on all axes, not only heading.


If the location is known, the first problem is easily overcome by a look-up table based on the World Magnetic Model[16]. Ignoring the declination of the magnetic field will in some areas of the world be a reasonable approximation. In other areas, especially near Earth's magnetic poles, the magnetometer will be perfectly useless if the exact location is not known and the specifications of the magnetic field cannot be looked up. Electromagnetic noise from on-board power components is harder to eliminate. Shielding the sensor is clearly not a possibility, as such a shield would not discriminate signal from noise. Shielding the power system will probably prove cumbersome. However, by positioning the magnetometer as remotely as possible from the power system, the magnitude of this issue can be reduced. Alternatively, a GPS based sensor can be used to measure heading. As this sensor calculates the heading from changes in position, it will obviously only work when the airframe is in motion. Also, the sensor measures the course-over-ground, rather than the actual orientation of the airframe. These two headings are not necessarily the same, due to wind and thus the side slip angle, as described in Section 3.1.1 on page 28. Alternatively, two GPS receivers can be used in a differential configuration. This would provide heading information even when standing still.
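A common way of handling the third constraint above is to de-rotate the measured field vector by the estimated roll and pitch before computing the heading. A sketch under the nose-starboard-down body frame and Z-Y-X Euler conventions used in this thesis, ignoring declination; the function name is ours:

```python
import math

def tilt_compensated_heading(mag, phi, theta):
    """Heading [rad] from a body-frame magnetometer vector (mx, my, mz),
    de-rotated by the estimated roll (phi) and pitch (theta) so only
    the horizontal field components remain. Ignores declination."""
    mx, my, mz = mag
    # Undo roll, then pitch, to recover the level-frame field components.
    mx_h = (mx * math.cos(theta)
            + my * math.sin(phi) * math.sin(theta)
            + mz * math.cos(phi) * math.sin(theta))
    my_h = my * math.cos(phi) - mz * math.sin(phi)
    return math.atan2(-my_h, mx_h)
```

In level flight this reduces to atan2(−my, mx), and the vertical field component mz drops out entirely.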


Sensor fusion

In order to obtain a precise and fast-responding state estimate, suitable as feedback for the control loops, some of the aforementioned sensors must be fused. Sensor fusion is the discipline of combining multiple noise-infested sensor data streams into a combined estimate of the actual state. E.g. both accelerometers and gyroscopes are indeed capable of measuring the attitude of the airframe, but neither of them is capable of doing so very well. Vibrations and un-modelled accelerations impact the accelerometer measurements, making short-term extraction of the state unreliable. Similar sources of noise apply to magnetometer measurements. Integration errors, on the other hand, render the gyroscope's long-term precision worthless. The authors have conducted a survey of suitable sensor fusion methods for UAVs (see Appendix H, Sensor Fusion for Miniature Aerial Vehicles, on page 105). This section will describe the major findings of this survey and derive the mathematical formulation of an applicable state estimator.



Several methods of varying complexity are capable of sensor fusion. The simplest method is complementary filtering[35, 49]. Complementary filtering is useful for combining fast but biased data with slow, noise-disturbed but absolute data. This is done by a pair of complementary filters, one high-pass and the other low-pass, as seen in Figure 3.4 on the following page. Pilot experiments have shown that this filter type is indeed capable of fusing gyroscope and accelerometer data into a viable attitude estimate. The complementary filter is however not a state estimator, and is as such not capable of incorporating unobserved variables, such as the wind components, angle of attack, gyroscope biases etc.
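A first-order complementary filter for, e.g., the pitch angle can be sketched in a few lines; alpha, the crossover weight, is a tuning parameter whose value here is illustrative:

```python
class ComplementaryFilter:
    """First-order complementary filter fusing a rate gyro (fast, but
    biased) with an accelerometer-derived angle (noisy, but absolute).

    alpha close to 1 trusts the integrated gyro on short timescales
    (the high-pass branch); the accelerometer pulls the estimate back
    on long timescales (the low-pass branch).
    """

    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.angle = 0.0

    def update(self, gyro_rate, accel_angle, dt):
        # Blend the gyro-integrated angle with the absolute reference.
        self.angle = (self.alpha * (self.angle + gyro_rate * dt)
                      + (1.0 - self.alpha) * accel_angle)
        return self.angle
```

With a constant accelerometer angle and zero rotation rate, the estimate converges to the accelerometer value, illustrating the low-pass branch.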


Figure 3.4: Basic complementary filter. The high-pass filter (top) and the low-pass filter (bottom) filter out the weakness of each sensor. As the sensors have opposite weaknesses, their strengths can be combined.

Bayes filters[18, 21, 71], on the other hand, are recursive state estimators. Bayes filters work under the assumption that the true state, x, is not directly observable; it is a hidden Markov process. However, the sensor measurements, z, are derived from the true state. Because of the Markov assumption, the probability density function (PDF) of the current state, given the previous state, p(x_k | x_{k-1}), is independent of the history of states, such that

p(x_k | x_{k-1}, x_{k-2}, ..., x_0) = p(x_k | x_{k-1})        (3.13)

As the observation, z, is derived from the unobserved true state, it is likewise fair to conclude that it depends only on the current state and not the history of states:

p(z_k | x_k, x_{k-1}, ..., x_0) = p(z_k | x_k)        (3.14)

Bayes' theorem[51, 71] provides the mathematical foundation for describing the probability of the state given a measurement, based on the probability of the measurement given the state. According to (3.15), it is necessary to describe the probabilities of the state and the measurement:

p(x|z) = p(z|x)·p(x) / p(z)        (3.15)

Implementations of Bayes filters work in a two-step recursion. First, the PDF is propagated, using the current system input, u_k, and a model of the system in question. As the PDF is propagated, it is smeared a bit in order to reflect the uncertainty associated with the state transition, w_k, referred to as the state transition noise. The projected state estimate is referred to as the a priori state estimate, x̂⁻_k, as this is the estimate prior to incorporation of a measurement. The second step is to incorporate a measurement, z_k. Based on Bayes' theorem this measurement can describe a PDF of the state. As the measurement-based state estimate is incorporated, the PDF is focused, to reflect that knowledge has been acquired. The new estimate is referred to as the a posteriori state estimate, x̂⁺_k. Many different implementations of Bayes filters exist. They mainly differ in the way they represent the PDF of the state estimate. Generally speaking, the more detailed the description of the state estimate needs to be, the more computation is required. E.g. particle filters[68] are capable of representing an arbitrary PDF, but at limited resolution. The Kalman filter[27, 43] represents the PDF as a Gaussian distribution and can thus use a parametric description, containing the

mean and covariance. This representation does limit the PDF to be symmetric and unimodal (i.e. only one hypothesis of the state is kept). The original formulation[43] uses the state space model to propagate the state estimate forward in time. The filter can however be extended to work for non-linear systems as well. The Kalman filter is chosen, as the complexity of multimodal PDFs is not required in this application. The Kalman filter will be reviewed here. As mentioned, the Kalman filter uses the linear state space model to project the state estimate and covariance forward in time:

x̂⁻_k = x̂⁺_{k-1} + (A·x̂⁺_{k-1} + B·u_k)·Δt_k        (3.16)
P⁻_k = P⁺_{k-1} + (A·P⁺_{k-1} + P⁺_{k-1}·Aᵀ + Q)·Δt_k        (3.17)

Where the state transition model, A, is used to project the state estimate, x̂, and the state covariance matrix, P, forward in time by the time step Δt_k = t_k - t_{k-1}. The system input, u, is projected into the state vector by the input model, B, and the transition noise covariance, Q, is added to the state covariance matrix. Note that A and B are not step-dependent, as the system is linear. A measurement is incorporated in the measurement update step:

K_k = P⁻_k·Hᵀ·(H·P⁻_k·Hᵀ + R)⁻¹        (3.18)
x̂⁺_k = x̂⁻_k + K_k·(z_k - H·x̂⁻_k)        (3.19)
P⁺_k = (I - K_k·H)·P⁻_k        (3.20)

Where K is the Kalman gain, which is calculated from the state covariance matrix, the observer model, H, and the observation covariance matrix, R. Based on the Kalman gain and the innovation term, z_k - H·x̂⁻_k, the a posteriori state estimate is calculated. Finally, the state covariance matrix is corrected to reflect the information gain introduced by the measurement. As mentioned, the Kalman filter is limited to linear systems by the linear state space model. However, it can be extended[63] to work for non-linear systems as well. This is done by using a pair of non-linear models, f(x, u, w) and h(x, v):

x_k = f(x_{k-1}, u_k, w_k)        (3.21)
    = A_k·x_k + B_k·u_k + W_k·w_k        (3.22)
z_k = h(x_k, v_k)        (3.23)
    = H_k·x_k + V_k·v_k        (3.24)

f(x, u, w) models the state transition. It takes the current state vector, x_k, the current system input vector, u_k, and the noise vector, w_k, as inputs. w_k is a random noise vector, drawn from a zero-mean Gaussian distribution with covariance Q. f(x, u, w) need not be formulated in the linear form (3.22), but the linear models A_k and W_k must be calculated for each time update. These can be calculated as the Jacobians, i.e. by linearising around the state and noise vectors respectively, such that:

A_k[i,j] = ∂f_[i](x̂_k, u_k) / ∂x_[j]        (3.25)

and

W_k[i,j] = ∂f_[i](x̂_k, u_k) / ∂w_[j]        (3.26)


h(x, v) models the sensor output, based on the current state estimate, x̂_k, and the measurement noise vector, v_k, which is drawn from a zero-mean Gaussian distribution with covariance R. h(x, v) need not be in the linear form (3.24), but the linear models H_k and V_k need to be known. They can be found by linearising the model around the state and noise vectors respectively, such that:

H_k[i,j] = ∂h_[i](x̂_k) / ∂x_[j]        (3.27)

and

V_k[i,j] = ∂h_[i](x̂_k) / ∂v_[j]        (3.28)


The time update step of the extended Kalman filter[18, 21, 71] (EKF) is formulated as:

x̂⁻_k = x̂⁺_{k-1} + f(x̂⁺_{k-1}, u_k, 0)·Δt_k        (3.29)
P⁻_k = P⁺_{k-1} + (A_k·P⁺_{k-1} + P⁺_{k-1}·A_kᵀ + W_k·Q·W_kᵀ)·Δt_k        (3.30)

Where the state is propagated using the state transition function, f(x, u, w). Note that the noise input is set to zero, as no intentional noise is added. The state transition noise covariance matrix, Q, is projected by the state transition noise model, W_k, before it is added to the state covariance matrix. The measurement update step is formulated as:

K_k = P⁻_k·H_kᵀ·(H_k·P⁻_k·H_kᵀ + V_k·R·V_kᵀ)⁻¹        (3.31)
x̂⁺_k = x̂⁻_k + K_k·(z_k - h(x̂⁻_k, 0))        (3.32)
P⁺_k = (I - K_k·H_k)·P⁻_k        (3.33)

Where the measurement noise model, V_k, is used to project the measurement noise covariance matrix, R, and the measurement model, h(x, v), is used in the innovation term. Here too, no intentional noise is injected, so v_k = 0. This concludes the theoretical description of the EKF used in the system. It was found that the EKF was the most suitable for state estimation, and the formulation of this filter has been reviewed and explained. The state estimator can be configured in various ways; this will be investigated further in the next subsection. After this, the kinematic models needed for the filtering are derived.
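The two-step recursion above can be condensed into a short sketch. The following is a generic EKF with numerical Jacobians, whereas the thesis derives analytic Jacobians in the appendices; it assumes additive noise (W_k = V_k = I) and Euler-discretized propagation as in (3.29) and (3.30). All names are illustrative.

```python
import numpy as np

def num_jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f around x (one column per state element)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - fx) / eps
    return J

class EKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, u, dt):
        # Time update, cf. (3.29)-(3.30): x- = x+ + f(x+, u, 0) * dt
        A = num_jacobian(lambda x: f(x, u), self.x)
        W = np.eye(self.x.size)              # additive noise assumed
        self.x = self.x + f(self.x, u) * dt
        self.P = self.P + (A @ self.P + self.P @ A.T + W @ self.Q @ W.T) * dt

    def update(self, h, z):
        # Measurement update, cf. (3.31)-(3.33)
        H = num_jacobian(h, self.x)
        V = np.eye(z.size)                   # additive noise assumed
        S = H @ self.P @ H.T + V @ self.R @ V.T
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (z - h(self.x))
        self.P = (np.eye(self.x.size) - K @ H) @ self.P
```

Run on a trivial constant 1-state system, the estimate converges to the repeated measurement while the covariance shrinks.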


Estimator architecture

The architecture of the state estimator is vital to performance in terms of calculation efficiency and precision. From a debugging and development point of view, some architectures are also easier to work with than others. Three different architectures have been considered.


The first scheme is the obvious single 9-state filter[61], with the measurement vector composed of the acceleration, the magnetic field strength, and the UTM position and altitude, utm, as depicted in Figure 3.5a on the next page. However, as the four sensors (accelerometer, magnetometer, GPS receiver and barometric altimeter) needed to compose the measurement vector have very different update rates, the lowest frequency must be used; in this case the GPS receiver, with an update frequency of approximately 5 Hz. This measurement frequency is not acceptable for attitude estimation. It is however possible to use separate measurement vectors[56] within the same estimator, by defining different H, V and R matrices for each of the measurement vectors, as illustrated in Figure 3.5b on the following page. With this arrangement, the measurement rate dependency between the sensors can be eliminated. To simplify the problem even more, a third approach can be taken: splitting the estimator into smaller sub-estimators[28, 60], each responsible for a coherent part of the state vector. Such a scheme is depicted in Figure 3.5c on the next page. The advantage of this decoupling is that debugging and development can be done in a segmented fashion. This structure also reduces the size of the matrices remarkably, which is advantageous as it reduces the need for computational resources. However, this segmentation results in lost coupling between measurement and state, i.e. the magnetometer cannot aid attitude estimation, and the GPS cannot aid heading estimation. But as the primary and intended sensor data is still available to the respective estimators, this is not considered a major setback.
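The second scheme, separate measurement vectors within one estimator, can be sketched as follows; the class and sensor names are illustrative, and the time-update step is omitted for brevity. Each registered sensor carries its own (H, R) pair and is applied at its own rate, so a 5 Hz GPS does not throttle a fast IMU.

```python
import numpy as np

class MultiRateEstimator:
    """Single state vector with a per-sensor (H, R) pair, so each
    sensor can correct the estimate at its own update rate."""

    def __init__(self, x0, P0):
        self.x, self.P = x0, P0
        self.sensors = {}

    def register(self, name, H, R):
        self.sensors[name] = (H, R)

    def update(self, name, z):
        # Standard Kalman measurement update with this sensor's models.
        H, R = self.sensors[name]
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(self.x.size) - K @ H) @ self.P
```

In use, a fast sensor would call update() at, say, 100 Hz while a slow one calls it at 5 Hz, each converging independently on the state components it observes.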


Kinematic models

The Extended Kalman filter, which was chosen for state estimation earlier in this Section, requires kinematic models of the system. These models are used to propagate the estimate and its covariance forward in time, as discussed in subsection 3.2.1 on page 33. A total of three different estimators are needed, as explained in the previous subsection. The outlines of the models for each estimator will be given in the following subsection. Appendices A to C on pages 80-88 describe the derivations in more detail.

Attitude kinematic model

The attitude is estimated by fusing gyroscope and accelerometer data. The gyroscope is used as the system input vector, propagating the estimate. Accelerometer data is used to correct the estimate. Up till this point, the airframe pose has been described in Euler angles, as these map directly to the flight parameters bank, climb and heading, and thus provide an intuitive representation of the airframe's rotation. However, there is a snake in paradise, as the Euler angle representation is numerically unstable near the limits of definition: As the pitch angle approaches ±π/2, the roll and yaw axes are collinear and a degree of freedom is lost. This situation is referred to as a gimbal lock, as the mathematical gimbal is locked from rotating freely in any direction. Also, when the pitch angle is zero and the roll angle approaches ±π/2, the pitch and yaw axes align and a similar situation arises. In order to cope with this


(a) 9-state estimator.

(b) 9-state estimator with three different measurement vectors.

(c) Cascaded 2+1+6 state estimator.

Figure 3.5: Different estimator architectures considered. V_i is the indicated airspeed; the remaining inputs are the angular rotation rate, acceleration and magnetic field strength vectors, all measured in the plane body frame. utm is the measured position vector in the inertial reference frame.


issue, the quaternion rotation representation is used in the attitude estimator instead. The representation is a hypercomplex number with a total of four parameters. The representation is thus redundant, as rotation in three dimensions strictly needs only three parameters.

q = w + x·i + y·j + z·k        (3.34)
i² = j² = k² = i·j·k = -1        (3.35)

By constraining the quaternion to have unit magnitude, a total of three degrees of freedom remains:

w² + x² + y² + z² = 1        (3.36)

Rotation of one quaternion by another can be expressed by quaternion multiplication:

q⁰_b = q⁰_a ⊗ qᵃ_b        (3.37)

Where qˣ_y denotes the rotation from frame x to frame y. Quaternion multiplication can be done in matrix form:

q⁰_b = M(q⁰_a)·qᵃ_b,   M(q) = [[ w, -x, -y, -z],
                               [ x,  w, -z,  y],
                               [ y,  z,  w, -x],
                               [ z, -y,  x,  w]]        (3.38)

Letting q⁰_a be the state vector, it can be projected forward in the time update by the angular velocities in the airframe, measured by the rate gyroscope. Under the small angle approximation, the infinitesimal rotation can be written in quaternion form as:

Δq = [1,  ω_x/2·Δt,  ω_y/2·Δt,  ω_z/2·Δt]ᵀ        (3.39)

Equations (3.38) and (3.39) form the basis of the state transition function, f(x, u, w). The function is fully derived in (A.1) to (A.9) on page 80. The final function formulation is given here:

f(x, u, w) = ẋ = ½·[[-x, -y, -z],
                    [ w, -z,  y],
                    [ z,  w, -x],
                    [-y,  x,  w]] · [ω_x + ε_x, ω_y + ε_y, ω_z + ε_z]ᵀ        (3.40)-(3.41)

Where x is the state vector, composed of the quaternion components w, x, y, z; the system input vector, u, is composed of the angular rotation rates, ω_x, ω_y, ω_z, measured by the gyroscope; and the state transition noise vector, w, is the noise on the gyroscope measurements, ε_x, ε_y, ε_z. As the small angle approximation deforms the quaternion, such that it over time no longer has unit magnitude, it needs to be normalized frequently. This normalization can be performed by (A.10).
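A minimal sketch of the quaternion time update, assuming numpy; one Euler step of the quaternion derivative (equivalent to multiplying by the small-angle quaternion (3.39)), followed by the re-normalization step described above.

```python
import numpy as np

def quat_propagate(q, omega, dt):
    """One small-angle time-update step of the attitude quaternion.

    q = [w, x, y, z]; omega = measured body rates [wx, wy, wz] in rad/s.
    """
    w, x, y, z = q
    wx, wy, wz = omega
    # qdot = 1/2 * q (x) [0, omega]  (quaternion product in matrix form)
    qdot = 0.5 * np.array([
        -x * wx - y * wy - z * wz,
         w * wx - z * wy + y * wz,
         z * wx + w * wy - x * wz,
        -y * wx + x * wy + w * wz,
    ])
    q = q + qdot * dt
    # Re-normalize: the small-angle step slowly drifts off unit length.
    return q / np.linalg.norm(q)
```

Integrating a constant 1 rad/s rotation about the body z-axis for one second should yield the quaternion for a 1 rad rotation, [cos(0.5), 0, 0, sin(0.5)].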


In the measurement update step, the accelerometer measurement model, h(x, v), models the measurement based on the state vector, x, and the measurement noise vector, v. The acceleration in the airframe is the sum of gravity, the centripetal forces and measurement noise. The measured gravity is modelled by rotating the vector [0, 0, g]ᵀ into the airframe by a rotation matrix, R_q(x), calculated from the quaternion state vector. The centripetal force, which the airframe is subject to, is the cross product of the vectors of velocities and angular velocities in the plane body frame. The angular velocities are measured by the gyroscope. The velocities in the plane body frame are not directly known and must be computed from true airspeed, V_a, angle of attack, α, and side slip angle, β. The measurement noise vector, v, is simply added to the equation:

h(x, v) = ẑ = R_q(x)·[0; 0; g] + [ω_x; ω_y; ω_z] × (V_a·[cos α·cos β; sin β; sin α·cos β]) + v        (3.42)-(3.43)

Appendix A, Attitude kinematics on page 80 describes the mathematical derivations in detail. The derivation of the linear models, A(x, u), W(x, u), H(x) and V(x), based on (3.25) to (3.28) on page 35, can also be found in this appendix. As the remaining system uses the Euler angle representation, conversion back and forth between the two representations is needed. This can be done according to (A.23) and (A.24) on page 83, respectively.

Heading kinematic model

Based on gyroscope and magnetometer data, the heading of the airframe can be estimated. Equation (3.44)[67, eq. (1.4-15)] forms the basis for the estimator's time update step. It relates the changes in Euler angles to the angular velocities in the airframe, by introducing the Euler angle derivatives in the appropriate intermediate frames of the rotation:

u + w = [φ̇; 0; 0] + R_x(φ)·[0; θ̇; 0] + R_x(φ)·R_y(θ)·[0; 0; ψ̇]        (3.44)

Where R_x(φ) is the rotation matrix by φ around the x-axis and R_y(θ) is the rotation matrix by θ around the y-axis. u is the vector of angular rotation velocities in the plane body frame and w is the noise vector on these measurements. Through Equations B.1 to B.5 on page 85, the state transition model is derived on this foundation, such that:

f(x, u, w) = ψ̇ = (sin φ / cos θ)·(ω_y + ε_y) + (cos φ / cos θ)·(ω_z + ε_z)        (3.45)


Where the state vector, x, is the heading, ψ. The system input vector, u, contains the angular velocities in the plane body frame around the y and z axes, ω_y and ω_z respectively. The x-axis angular velocity is not needed, as it maps directly to


the roll angle. The state transition noise vector, w, is the noise on the gyroscope readings, ε_y and ε_z respectively. The airframe's attitude, φ and θ, is presumed known from the attitude estimator, described in Section 3.2.3 on page 37. The measurement update step is based on the measurement model, h(x, v), which models the magnetometer's three-dimensional reading of Earth's magnetic field. Earth's magnetic field varies across the planet, but can be determined for a given position from the World Magnetic Model[16]. The field strengths, in north, east, down components, at the test site are given in Table B.1 on page 84. These components will be referred to as:

B₀ = [B_n; B_e; B_d]        (3.46)

To model the magnetometer measurement, these components are rotated into the airframe. This is done by three rotation matrices, R_x(φ), R_y(θ) and R_z(ψ). The measurement noise vector, v, is added to the expression:

h(x, v) = ẑ = R_x(φ)·R_y(θ)·R_z(ψ)·B₀ + v        (3.47)-(3.48)
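The measurement model can be written out directly as a sketch, assuming numpy and the passive (navigation-to-body) rotation convention; B0 holds the local field components in north, east, down order, and the values used below are illustrative rather than the Table B.1 figures.

```python
import numpy as np

def rot(axis, a):
    """Passive (frame) rotation matrix by angle a about a principal axis."""
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 'y':
        return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def mag_measurement_model(roll, pitch, heading, B0):
    """h(x, v) of (3.47): rotate the local field B0 = [Bn, Be, Bd]
    into the plane body frame (the noise term v is omitted, v = 0)."""
    return rot('x', roll) @ rot('y', pitch) @ rot('z', heading) @ B0
```

At level attitude with zero heading the model returns B0 unchanged; with the nose pointing east, the northward field component appears on the body's negative y-axis.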

Appendix B, Heading kinematics on page 84 describes the mathematical derivations in detail. The derivation of the linear models, A(x, u), W(x, u), H(x) and V(x), based on Equations 3.25 to 3.28 on pages 35-36, can also be found in this appendix.

Position and Wind kinematic models

The final stage of the cascaded filter estimates the position, P_n and P_e, and altitude, Alt, along with the angle of attack, α, and the wind components in the north and east directions, W_n and W_e respectively. The system input is the indicated airspeed, V_i, and the measurement vector is composed of GPS and barometric altimeter readings, converted into UTM coordinates utm_n, utm_e and utm_d respectively. The wind components and angle of attack are not directly measurable and are thus unobserved state variables. The forward kinematic formulation used in the time update is:

f(x, u, w) = ẋ = [Ṗ_n; Ṗ_e; Ȧlt; α̇; Ẇ_n; Ẇ_e]
           = [V_i·cos γ·cos ψ + W_n + ε_Pn;
              V_i·cos γ·sin ψ + W_e + ε_Pe;
              V_i·sin γ + ε_Alt;
              ε_α; ε_Wn; ε_We]        (3.49)

Where the course and climb angles follow from the heading and attitude estimates of the preceding filter stages, together with the angle of attack.




Please refer to Figures 3.2a and 3.2b on page 29 for an illustrative description of the relations between the various angles and velocities. The measurement model is simply composed by picking elements from the state vector:

h(x, v) = ẑ = [P_n; P_e; Alt] + v        (3.52)-(3.53)

Appendix C, Position and Wind kinematics on page 88 describes the mathematical derivations in detail.
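One Euler step of the time update (3.49) can be sketched in plain Python. The relation γ = θ - α between climb angle, pitch and angle of attack is an assumption made here for the sketch, following the wind-triangle geometry of Figure 3.2; the noise inputs are set to zero, as in the filter's time update.

```python
import math

def position_time_update(state, Vi, psi, theta, dt):
    """One Euler step of the position/wind time update, cf. (3.49).

    state = [Pn, Pe, Alt, alpha, Wn, We]; Vi is the indicated airspeed,
    psi the heading and theta the pitch from the upstream estimators.
    The wind components and angle of attack are random-walk states,
    so their deterministic derivative is zero.
    """
    Pn, Pe, Alt, alpha, Wn, We = state
    gamma = theta - alpha            # assumed climb-angle relation
    Pn += (Vi * math.cos(gamma) * math.cos(psi) + Wn) * dt
    Pe += (Vi * math.cos(gamma) * math.sin(psi) + We) * dt
    Alt += Vi * math.sin(gamma) * dt
    return [Pn, Pe, Alt, alpha, Wn, We]
```

Flying north at 20 m/s in level flight with a 1 m/s north wind for one second advances the north position by 21 m and leaves the altitude unchanged.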


Noise considerations

The state transition noise covariance matrix, Q, and the measurement noise covariance matrix, R, need to be modelled for each state estimator. These matrices describe the variances of the state transition and observation, such that the Kalman estimator can effectively weight the a priori state covariance matrix, P⁻, and the innovation term. In other terms, the estimators need an idea of the certainty of the observations used to correct the state estimate in the measurement step, and an idea of how much certainty is lost as the state estimate is projected forward in the time update step. The covariance matrices are defined as:

Q = E[(w - E[w])·(w - E[w])ᵀ] = E[(w - 0)·(w - 0)ᵀ]        (3.54)-(3.55)
R = E[(v - E[v])·(v - E[v])ᵀ] = E[(v - 0)·(v - 0)ᵀ]        (3.56)-(3.57)

Where E[x] is the expected value of x (similar to the mean value of x). The numerical values are hard to determine, as many different sources of noise may impact the exact values. Therefore, it is convenient to think of the Q and R matrices as uncertainty levels. I.e. the less confidence we have in the state transition, the greater the values in Q should be. Similarly, the less confidence we have in the observations, the greater the values placed in the R matrix should be. It should be noted that the ratio between the two matrices is vital for the filter performance. I.e. large confidence in the state transition and low confidence in the measurements tends to smooth the state estimate at the cost of slower convergence, and vice versa.


The uncertainties are expressed as variances, σ². The values on the diagonal represent the variance of the measure itself, and the off-diagonal elements represent coherent deviations. For all measurements, we do not expect any co-variation and will only place values on the diagonal.
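Constructing such diagonal matrices is a one-liner; the noise levels below are made-up illustrations, not the thesis' tuned values.

```python
import numpy as np

gyro_std = 0.02    # rad/s: assumed state-transition (process) noise level
accel_std = 0.5    # m/s^2: assumed measurement noise level

# Only the diagonal is populated, as no co-variation is expected;
# each entry is a variance, sigma^2.
Q = np.diag([gyro_std ** 2] * 3)
R = np.diag([accel_std ** 2] * 3)
```

As noted above, what matters for filter behaviour is the ratio between Q and R: scaling both by the same factor changes little, while shifting the ratio trades smoothness against convergence speed.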



In this Chapter, the variables describing the state of the airframe have been described, and the mathematical formulation of their relationships has been deduced. It has been concluded that not all variables are directly measurable and must thus be estimated. The theoretical and practical foundation for such state estimation, using the Extended Kalman Filter, has been described in the context of fixed wing aircraft. Various architectures for the state estimator have been suggested and a cascaded scheme has been found appropriate. Finally, the models for state transition and sensor estimation have been deduced, and it has been discussed how the noise models influence the performance of the state estimators.



Visualization and Simulation

When using a MAV as an ATC, it is essential to be able to monitor the status of the aircraft. This is typically done on a Ground Control Station (GCS), as seen in a number of open source projects. When the ATC becomes fully autonomous, the main operator control interface changes from the radio to the GCS. When it is no longer a concern to keep the ATC in the air, focus moves to the task at hand: What is the status of the aircraft, are there any errors, where is the aircraft, is the tool behaving correctly, is the plan executing correctly, and many other questions may arise. It is the job of the GCS to answer all these questions in a user-friendly way. Furthermore, it should be possible for the operator to take action if the need for changes arises. Therefore, a bidirectional link must be provided, allowing the operator to send corrective commands to the ATC in order to change the plan, control parameters or other aspects of the mission. The remainder of this Chapter is structured as follows. First, the requirements and proposed solutions for telemetry are outlined in Section 4.1. Open source projects for visualization and simulation are surveyed in Sections 4.2 on the next page and 4.3 on page 46. The latter two sections select the open source projects which will be used as the base for the GCS and simulator respectively.



In order for the operator to have a satisfactory overview, information such as airframe, system and mission status must be provided wirelessly. This enables remote visualization of the mission on the GCS. By providing two-way communication, it is likewise possible to perform in-flight correction of the flight plan. When tuning the controller, it is also desirable to be able to change control parameters in-flight.


Link hardware

As stated in the Danish Civil Aviation Administration regulation BL 9-4 [48], one must always have a model plane in line of sight. It is estimated that a plane,

with the legal maximum weight of 7 kg, is not visible at ranges above 1 km. This therefore serves as the minimum required link range. The necessary data rate of the link depends on the amount, resolution and frequency of the data transmissions. The feedback should come at a minimum frequency of 10 Hz. It is desired to have an airframe and system state feedback resolution in double precision, i.e. 64 bits per data field. With 11 data fields (roll, pitch, yaw, latitude, longitude, altitude, velocity, CPU, memory, battery and number of satellites) this yields a minimum data rate of 7 kbit/s. Message overhead and checksums add to this. The infrequent uplink user input is assumed to be far below this rate. Therefore the minimum acceptable data rate is approximately 10 kbit/s. Several link candidates are available. Four of the most common technologies are presented and compared in Table 4.1.

Technology    Range [typ.]    Transfer rate       Power consumption
Bluetooth     1 - 100 m       1-24 Mbit/s         1-100 mW
Wi-Fi         250 m           Up to 300 Mbit/s    Typ. 1 W
Zigbee [26]   90 m - 40 km    10-250 kbit/s       1-100 mW
3G            Coverage dep.   Up to 30 Mbit/s     Typ. < 5 W

Table 4.1: Telemetry comparison overview. As with all wireless networks, range is limited by transmission power, antenna type, environment, etc. Considering standard equipment, both Bluetooth and Wi-Fi are inadequate for in-field telemetry due to their limited range. Wi-Fi can have extended range, but requires non-standard and heavy equipment. Both 3G and ZigBee could be used as the link; however, 3G requires coverage that is not available everywhere. Zigbee is deemed more generic, as it functions standalone.
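The minimum data-rate estimate above can be reproduced in a couple of lines:

```python
fields = 11            # roll, pitch, yaw, lat, lon, alt, velocity,
                       # CPU, memory, battery, satellite count
bits_per_field = 64    # double precision
rate_hz = 10           # minimum feedback frequency

payload_bps = fields * bits_per_field * rate_hz
print(payload_bps)     # 7040 bit/s, i.e. ~7 kbit/s before framing overhead
```

Adding framing, checksums and headroom for uplink traffic motivates the stated ~10 kbit/s minimum, comfortably within ZigBee's 10-250 kbit/s.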


Message passing

It is important that the messages passed between the GCS and the ATC are valid, especially on the up-link. It could be catastrophic if a corrupted way-point, desired altitude or control parameter were passed through to the airplane control. Therefore, steps must be taken to ensure that only valid data is passed through. To ensure that binary data on the Zigbee serial line is correctly transmitted and received, an implementation of Serial Line Internet Protocol (SLIP [62]) encapsulation and checksumming can be used. SLIP provides a means of knowing the start and stop of packages, whilst the checksum validates the data. Lastly, an acknowledgement must be sent if package transfer is to be guaranteed.
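A sketch of such an encapsulation, using the SLIP special characters from RFC 1055 and a single additive checksum byte appended before encoding; the thesis does not specify the exact checksum scheme, so the one below is illustrative.

```python
# SLIP special characters (RFC 1055)
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    checksum = sum(payload) & 0xFF
    out = bytearray([END])              # leading END flushes line noise
    for b in payload + bytes([checksum]):
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes):
    data, esc = bytearray(), False
    for b in frame.strip(bytes([END])):
        if esc:
            data.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        else:
            data.append(b)
    payload, checksum = bytes(data[:-1]), data[-1]
    if checksum != sum(payload) & 0xFF:
        return None                     # corrupted: drop and await resend
    return payload
```

A round trip through encode/decode recovers the payload, while a corrupted frame fails the checksum and is rejected, which is where the acknowledgement/retransmission step takes over.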



In order to have a decent Human Computer Interface (HCI), usable by end users, it is necessary to display the abundance of airframe information graphically. As none of the authors are particularly skilled in graphical programming, it has been deemed unnecessary to reinvent the wheel; there is already a multitude of open source GCSs targeted at just this problem. Mentioning a few

includes QGroundControl [15], HappyKillmore [10], openPilot GCS [13] and CCNY ground station [9]. For a quick overview of the graphical quality of the four mentioned GCSs, please refer to Figure 4.1.

(a) QGroundControl

(b) OpenPilot GCS

(c) Happykillmore GCS

(d) CCNY Ground Station

Figure 4.1: Visual comparison of four different open source ground stations. All four GCSs feature similar capabilities; however, CCNY ground station stands out as being a bit more simplistic and is already an integrated part of ROS through the CityFlyer project hosted by The City College of New York [55]. Therefore it is the GCS of choice at this point in the project.



In order to reduce development time and cost, a simulator was deemed useful. A simulator can be utilized for rapid testing of new control paradigms, state estimators and control parameters, without the risks involved in a real test flight. Also, pure simulation experiments can be conducted independently of wind, weather and other inconvenient phenomena. Lastly, pilot skills can be practiced. Furthermore, it is desirable that the simulator is open source and that it can be ported to ROS. By doing so, work can be contributed to the community, adding a feature not seen in ROS before. For the simulator to be ROS compatible, it is essential that it runs under Linux. In order for a simulator to accommodate these needs, it has to fulfill a number of criteria:
Be controllable via an external input device, such as a joystick or mouse.


Include radio controlled airplanes, such that practicing and controller tests give useful results and experience.
Provide or give access to state information, such that sensor output can be simulated.
Be open source and free, such that it can be ported to ROS and shared.
Run natively on Linux.

With these requirements in mind, three potential candidates, all fulfilling the requirements, have been found:

Name            Size [MB]   URL
CRRCsim         7.4
FlightGear      300
Slope Soaring   35

Any of the three candidates could potentially do the job; it has been chosen to go with CRRCsim, as it is the smallest of the three, very easy to overview, and has pre-implemented sensor models, making it rather convenient to simulate the sensors available in the hardware. It could be argued that FlightGear is more sophisticated and accurate, but it is mainly focused on real scale airplanes and requires significant graphical hardware. Maintaining a high frame rate is essential for the simulated sensor data to mimic real hardware.



It has been decided to use Zigbee as the radio link for the GCS, based on the fact that it features the best combination of range, weight, power consumption and transfer speed for the stated requirements. It has been discussed how it can be guaranteed that data packages are correctly transferred. The CCNY ground station has been selected. It is not the most complete of the available GCSs, but for the current stage of the project it fulfills the needs. For future implementation of more complicated tasks, CCNY ground station can be expanded, or a more complete GCS can be integrated into a ROS node. The future requirements of the ground station have been stated, and there is room for expansion.



(Figure 5.1 is a block diagram of the system; its components include the Ground Control Station, Telemetry, the Flight Plan Tool and the State Estimator.)
Figure 5.1: Overview of the entire system; green boxes have been implemented while red ones have not. This Chapter summarises the implementations done, based on the subsystems described in the previous Chapters. As not all of the subsystems discussed earlier have been implemented, Figure 5.1 provides an overview of the extent of the current implementation. Everything has been designed with modularity in mind. An effort has been made to build the entire system in the spirit of ROS, ranging all the way from hardware to software to simulator. Initially the project was branched out from the Frobomind [41] project, and it still incorporates the system structure defined there. The remainder of this Chapter is structured as follows: Section 5.1 describes the selection of the model airframe, which forms the base for the flying prototype. The developed and prototyped PCB, which forms the base for the

Figure 5.2: Photograph of the Multiplex Mentor, used as the implementation base for this thesis.

autopilot, is described in Section 5.2 on the next page. Section 5.3 on page 57 explains the software structure and its individual sub-components. Aspects of porting the open source projects which form the base for the GCS and simulator are described in Sections 5.4 on page 60 and 5.5 on page 61 respectively. Finally, Section 5.6 on page 62 describes the conducted flight tests and the results hereof.



Under guidance of an R/C reseller, it was decided to use the Multiplex Mentor airframe as the base for the prototype of this project. The airframe is depicted in Figure 5.2. The Mentor is a high-winged foam model with a moderate wing aspect ratio. It comes at a very reasonable price, has a wingspan of 1.6 m and weighs around 2.0 kg when fitted with standard equipment. The high-winged design gives the airframe inherent stability, as the wing is positioned above the center of gravity. The downside of this design is reduced manoeuvrability compared to a low-winged design. As the airframe is not meant to perform swift manoeuvres, this is a fair trade-off. The construction material plays a vital role in parameters such as price, weight and durability. Generally, three alternatives are available: balsa wood, foam and composites. The composite and balsa-wood constructions are lightweight hollow bodies and are rather expensive. The hollow body designs result in spacious fuselages, which allow for easy fitting of additional equipment. The foam models are not as rigid, but come at a much lower price. By embedding carbon fibre stiffeners in the foam to support the wings, the low rigidity of the foam is somewhat compensated. The foam models are far less spacious, as the low material strength is compensated by increased wall thickness of the fuselage.

It is expected that the model might endure crashes during development, and hence repairability is a property that must be considered. The foam models can simply be glued together, whereas repairing balsa and composite models is in the best case more complicated. As several modifications will be needed on the fuselage, the workability of the foam material is a great advantage. The airframe has one of the largest and most spacious fuselages of any small-scale aircraft available, leaving adequate room for large batteries, autopilot and payload alike. Furthermore, spare parts are readily available for the Multiplex Mentor. The Mentor has a total of five control channels: one for throttle and one for each control plane, namely ailerons, rudder and elevator. In addition, an extra channel is needed to accommodate in-flight switching between automated and manual remote control. In case of error or potential collision, the manual overrule can be activated and the plane brought down safely. Furthermore, being able to switch to manual allows for manual take-off and landing if necessary. The flight-ready Multiplex Mentor is composed of:
Motor: Brushless outrunner motor, 870 rpm/V, capable of producing 720 W.
Servos: Common hobby servos control ailerons, rudder and elevator.
Electronic Speed Control: 60 A AC controller with PWM input enables easy control of the motor.
Battery: A 4-cell lithium-polymer battery provides 6150 mAh at 14.8 V at a maximum discharge rate of 150 A.
Receiver: A 2.4 GHz receiver decodes the radio signal and outputs it as servo PWM pulses.
Propeller: A propeller with a diameter of 13 inches and a pitch of 6.5 inches, yielding a theoretical maximum speed of 870 rpm/V · 14.8 V · 6.5 inches ≈ 126 km/h.
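The theoretical top-speed figure above follows directly from the motor kV rating, the battery voltage and the propeller pitch (the distance the propeller would advance per revolution with no slip). A minimal sketch of the calculation - it yields roughly 127 km/h with these numbers, so the 126 km/h figure presumably reflects slightly different rounding:

```cpp
#include <cassert>

// Zero-slip top speed estimate: unloaded motor rpm times propeller pitch.
double theoretical_top_speed_kmh(double kv_rpm_per_volt, double battery_volt,
                                 double pitch_inches) {
    const double metres_per_inch = 0.0254;
    double rpm = kv_rpm_per_volt * battery_volt;                   // unloaded motor speed
    double metres_per_min = rpm * pitch_inches * metres_per_inch;  // advance per minute
    return metres_per_min * 60.0 / 1000.0;                         // convert to km/h
}
```

With the Mentor's values, `theoretical_top_speed_kmh(870, 14.8, 6.5)` lands between 125 and 130 km/h, consistent with the figure stated in the text.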


Autopilot PCB

A printed circuit board (PCB) has been designed to accommodate the flight computer and its associated sensors and I/O units. The PCB is designed with compactness and integration in mind, leading to most devices being mounted directly on the board rather than remotely on breakout boards. The width is constrained to 74 mm in order to fit the desired mounting face in the Mentor. A maximum length of 140 mm is available in this location; however, only 118 mm was required. As the Mentor fuselage surrounds both long sides of the PCB, access is restricted to the short sides and the top/bottom of the PCB. This constrains the locations of the RJ45, USB-A and differential pressure gauge, as these need to be located at the edge of the PCB for connectivity. The Xbee module also needs to be located at the edge of the PCB, in the interest of antenna performance. The remaining antennas (GPS, WiFi and Bluetooth) are not mounted directly on the PCB, and can thus be mounted remotely, for

Figure 5.3: Block diagram of the developed autopilot board

Figure 5.4: Image of the final developed autopilot board

the power planes not to interfere with antenna performance. For full schematics and PCB layout, please refer to Appendix F on page 94.


Processor and data buses

The OpenEmbedded file system environment available for the Gumstix processor is based on the Ångström distribution. However, due to lacking support for cross-compilation of ROS, ROS is not easily installed on the Ångström file system. Therefore it was decided to install a ROS-supported Ubuntu distribution on the Gumstix. In order not to lose the highly useful customizability of OpenEmbedded, a precompiled Ubuntu distribution could not be used. Thus OpenEmbedded was utilized to compile a Linux kernel matched against a minimal Ubuntu file system generated by Rootstock¹, thereby substituting OpenEmbedded's use of Ångström with Ubuntu, while still maintaining a fully customizable operating system. Ubuntu does have a slightly larger memory usage compared to Ångström; however, the 512 MB RAM available to the Gumstix is more than adequate when using it as a non-graphical/console system. For a complete


guide to installing Ubuntu and ROS on the Overo Gumstix board, please refer to Appendix E.1 on page 92. As can be seen in Figure 5.3, every sensor except the GPS is connected to the Gumstix via the I²C bus. The analog sensors - the differential pressure from the pitot tube, the battery voltage and the ultrasonic range finder - have all been connected to the I²C bus via the on-board AVR. For future tool connections, USB, SPI and Ethernet support has been added to the system. Ethernet is obtained with the use of the LAN9221 [66], a parallel-interface LAN chip also used in the Tobi expansion board sold by Gumstix. USB and SPI are integrated parts of the processor, where SPI has been wired as pin headers, and USB has been added with the required power management. Lastly, some General Purpose Input/Outputs (GPIOs) of the OMAP processor have been wired for future use. None of these have been used yet, but they are provided by the board for future tool and development support.


Airspeed sensor

Signal Conditioning

According to the differential pressure sensor MPXV7002DP datasheet [64], the voltage output is given by (5.1).

V_out = 5 V · (0.2 · p_d[kPa] + 0.5) ± 6.25% · V_FSS    (5.1)

Where V_FSS is defined as the Full Scale Span, typically 4 V at V_Supply = 5 V. A differential amplifier, based on an operational amplifier, is used for signal conditioning. The designed amplifier subtracts 2.5 V, as only positive pressure, and thus velocity, is deemed relevant. The amplifier then scales the output to match the 3.3 V level of the ADC (see Section 5.2.10, R/C interface & Failsafe operation, on page 55). A low-pass filter is built in to remove various high-frequency noise components. Please refer to Appendix F.2 on page 95 for detailed schematics.

Calibration

According to (3.8) on page 30, and assuming an air density ρ of 1.25 kg/m³, the full range of the differential pressure sensor corresponds to velocities from zero to

V = √(2 · 2·10³ [Pa] / 1.25 [kg/m³]) ≈ 56.6 m/s ≈ 204 km/h

A field test was conducted in a car, comparing the pitot model output with GPS-indicated speed. From the test it was concluded that the model fits, but with a nonlinear factor of 1.22. After applying the factor, as seen in Figure 5.5, the model is reliable.
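The resulting airspeed model - dynamic pressure inverted through Bernoulli's relation, scaled by the empirically found correction factor of 1.22 - can be sketched as follows, using the density and factor values stated above:

```cpp
#include <cmath>

// Airspeed [m/s] from differential (dynamic) pressure [Pa], via v = k*sqrt(2*dp/rho).
// k = 1.22 is the empirical correction factor found in the car-based field test.
double airspeed_ms(double dp_pa, double rho = 1.25, double k = 1.22) {
    if (dp_pa <= 0.0) return 0.0;  // only positive pressure is deemed relevant
    return k * std::sqrt(2.0 * dp_pa / rho);
}
```

With k = 1 and the sensor's 2 kPa full scale, this gives about 56.6 m/s (≈ 204 km/h), matching the range computed above.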



Ref. vel. [m/s]    7.6   10.3   12.9   15.3
Meas. mean [m/s]   7.6   10.2   12.6   15.6

Figure 5.5: Field test results, pitot output, up- and downwind, compared with GPS reference. Model output has been corrected with a factor of 1.22.

The accelerometer used is the triple-axis ADXL345 [25]. It can measure up to ±16 g with a resolution of 3.9 mg/LSB. To calibrate the sensor, the PCB has been levelled on a still, flat surface to measure the g force in all directions; that way the nonlinearity of the sensor is accounted for. According to the datasheet, the maximum sampling frequency is 3.2 kHz. However, the acceleration is only sampled at approximately 50 Hz. In order to satisfy the Nyquist theorem, the device is set up to band-limit the output to 25 Hz.



The gyroscope used is the InvenSense ITG-3200 [37]. It is a triple-axis gyroscope with a resolution of 0.0696 (°/s)/LSB, saturating at ±2000 °/s. When dealing with MEMS gyroscopes, calibration considerations must be made. Unlike a traditional mechanical gyroscope, a MEMS gyroscope measures the angular rate of change rather than the angle. A number of factors must be considered when using a MEMS gyroscope; as explained by Analog Devices in [46], the errors of a gyroscope can be classified as:

Bias_error = ∫ ω_bias dt = ω_bias · t


Thus, the bias error influences the gyro measurements even when the gyro is standing still. Nonlinearity in the construction of the sensor also results in skewed measurements when moving the gyro; this error is stated in the datasheet to be 0.2% typically. In order to compensate for these errors, two methods are suggested in the paper. The bias can be estimated by sampling the gyroscope for a short time (3 seconds) while it is standing still, and permanently subtracting the average of these measurements when using the gyroscope; this is also what has been implemented in practice. The scaling error has been neglected in the setup, but it could be estimated by rotating the gyroscope a known angle and comparing the output to a known reference. There are not a lot of 3-axis digital standalone gyroscopes available; one is however the L3G4200D [50], very similar to the ITG-3200. Either could be used

as they share most specifications; however, the ITG-3200 was in stock when components were ordered. As the gyroscope is sampled at approximately 100 Hz, the internal low-pass filter is set to 50 Hz in order to satisfy the Nyquist theorem.
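The stationary bias-estimation procedure described above can be sketched as follows. The 3-second averaging window comes from the text; the surrounding sampling code is of course not part of the thesis implementation:

```cpp
#include <vector>
#include <numeric>

// Estimate the gyro bias as the mean of samples taken while the sensor is at
// rest (the thesis uses ~3 s of samples, i.e. ~300 samples at ~100 Hz, per axis).
double estimate_bias(const std::vector<double>& stationary_samples) {
    if (stationary_samples.empty()) return 0.0;
    double sum = std::accumulate(stationary_samples.begin(),
                                 stationary_samples.end(), 0.0);
    return sum / stationary_samples.size();
}

// The estimated bias is then permanently subtracted from every measurement.
double correct(double raw_rate, double bias) { return raw_rate - bias; }
```

The scale-factor error mentioned in the text would require a known reference rotation and is deliberately not handled here, mirroring the thesis setup.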



The initial digital magnetometer used was the triple-axis HMC5883L. It can measure up to ±800 µT with a resolution of 0.5 µT/LSB. However, experiments soon showed that, due to its placement relatively close to the motor driving the plane, the measured magnetic field was highly distorted. It was decided to move the magnetometer away from the noise source; therefore a second magnetometer has been mounted, further towards the back of the airframe.



To complement the altitude estimate of the GPS, a Bosch BMP085 [65] barometric pressure sensor has been installed. It measures 300-1100 hPa at a resolution of 0.03 hPa/LSB, corresponding to a range from 500 m below to 9000 m above sea level at a resolution of 36 cm.



For global positioning, a u-blox 5 based 50-channel D2523T GPS receiver has been used. It features a helical antenna and a 5 Hz update rate with a Circular Error Probability of 2.5 m. It was chosen mainly because of its low price and high update rate. The GPS does however only have volatile memory, and does not come with a backup battery. Therefore, all configurations done to the GPS are lost after a power cycle. This presents a problem, as the software used to configure the GPS, called u-center, is Windows based and not intended for embedded Linux. u-center can save a configuration file containing hex codes for the configuration commands, such as the desired baud rate, update frequency and other operation modes. The configuration protocol was reverse engineered and a C++ based parser was created. This parser can read the configuration file, send the commands to the GPS, and receive the acknowledgements confirming correct configuration with checksums. The parser has then been configured to run at every boot of the system - thus, the GPS will always be correctly configured. The main configurations done are enabling only the GPGGA and GPVTG message types, as they provide all the information the state estimator needs: latitude, longitude, altitude, dilution of precision, number of satellites fixed, course and speed over ground. Furthermore, the GPS is configured to update as fast as possible, at 5 Hz.
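The checksums mentioned above follow u-blox's binary UBX protocol, which uses an 8-bit Fletcher checksum computed over the message class, ID, length and payload (everything between the sync bytes and the checksum itself). This detail comes from the publicly documented UBX protocol, not from the thesis; a minimal sketch:

```cpp
#include <cstdint>
#include <vector>
#include <utility>

// 8-bit Fletcher checksum as used by the u-blox UBX protocol.
// 'bytes' must contain class, id, length (little endian) and payload,
// i.e. everything between the 0xB5 0x62 sync bytes and the checksum.
std::pair<uint8_t, uint8_t> ubx_checksum(const std::vector<uint8_t>& bytes) {
    uint8_t ck_a = 0, ck_b = 0;
    for (uint8_t b : bytes) {
        ck_a = static_cast<uint8_t>(ck_a + b);
        ck_b = static_cast<uint8_t>(ck_b + ck_a);
    }
    return {ck_a, ck_b};
}
```

For example, the body of a UBX-CFG-RATE poll request (class 0x06, id 0x08, zero-length payload) checksums to 0x0E, 0x30.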



The Zigbee modem connects to the same serial port used by the default kernel terminal. This was done as it was the only free UART in hardware, and if care is taken it can be used to advantage: when debugging the software, especially Wi-Fi, it is very useful to have a terminal providing full access to the system, and by attaching a Zigbee modem this terminal connection becomes wireless.

Figure 5.6: Block diagram of the low-level coprocessor. Gray boxes represent hardware pieces, green boxes AVR hardware modules and blue boxes software modules.


Ultrasonic proximity

An LV-EZ4 ultrasonic proximity sensor has been installed pointing out of the belly of the plane, intended for use during landing procedures, where the correct distance to the ground is essential. The sensor installed was the longest-range, narrowest-beam lightweight proximity sensor we could identify. It measures from 20 cm to 7 m with a resolution of 2.5 cm/LSB. The sensor has a multitude of interfaces, including PWM, analog and serial; it has been connected via the analog interface to the AVR. In the current state of the project it has not yet seen use.


R/C interface & Failsafe operation

In the R/C hobby world, the de facto standard interface for controlling servos and motor controllers is a simple fixed-frequency PWM signal. The high period of the signal is used to set the servo set-point and varies from approximately 1.0 ms to 2.0 ms, 1.5 ms being center. The radio system uses 5 channels for controlling the servos and the motor controller, and a 6th to switch between automatic and manual flight. It is of the utmost importance that the system is capable of switching into manual mode, even in the event of a crash of the main flight computer. This ensures that the pilot has the opportunity to avoid a crash. Thus it is desirable to keep the switching between manual and automated flight at the lowest possible level. Meanwhile, the timing of the servo signals needs to be precise, as the resolution is approximately 1° per 5.6 µs, which is unreachable from Linux user space. An 8-bit ATmega328 microprocessor [24] is used to handle the low-level servo input and output, as it is capable of µs precision and offers the desired decoupling


from the Linux system, ensuring that manual-to-automatic switching is always available. The co-processor samples the pulse width from the radio receiver via hardware interrupts. On both the rising and falling edge of each radio channel, the interrupt samples the value of a 1 MHz 16-bit timer; the two samples are then subtracted to find the pulse width. The signal is reconstructed by compare-match interrupts on the same timer. The first channel's output is set high, and a compare-match interrupt is scheduled to trigger the desired pulse width later. This interrupt then sets the current channel's output low, the next channel's output high, and schedules the next channel's compare-match interrupt based on the desired pulse width. This scheme continues until all channels have been served, and it restarts in a 50 Hz cycle. As the co-processor has built-in ADCs, these are utilized for sampling the battery voltage, the pitot pressure via the differential pressure sensor, and the ultrasonic range to ground. Please refer to Figure 5.6 on the previous page for an overview of the hardware/software structure of the co-processor, and to the Doxygen documentation for detailed descriptions.
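The pulse-width measurement described above amounts to subtracting two 16-bit timer captures. Using unsigned modular arithmetic, the subtraction stays correct even when the free-running timer overflows between the rising and the falling edge; a sketch of just that subtraction (the capture mechanism itself is hardware):

```cpp
#include <cstdint>

// Pulse width in timer ticks (1 tick = 1 us at a 1 MHz timer) between a
// rising-edge capture and a falling-edge capture of a free-running 16-bit timer.
// Unsigned wrap-around makes this correct across timer overflow, as long as
// the true pulse is shorter than 65536 ticks (65.5 ms).
uint16_t pulse_width_us(uint16_t rise_tick, uint16_t fall_tick) {
    return static_cast<uint16_t>(fall_tick - rise_tick);
}
```

For a 1.5 ms center pulse this returns 1500 regardless of whether the timer wrapped during the pulse.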


Power management

A total of four different supply voltages are present on the PCB: 14.8 V input from the 4-cell LiPo battery, 5 V for the USB interfaces and the differential pressure gauge, and 3.3 V for the majority of devices. This 3.3 V group includes the Gumstix, various sensor chips (BMP085, ADXL345, ITG-3200, HMC5883L), I/O muxing (ATmega328), the Ethernet interface (LAN9221) and various auxiliary devices, e.g. level shifters. As the Gumstix I/Os operate at 1.8 V, all interfaces to this chip need to be at this voltage level; hence a 1.8 V supply is needed for level conversion. An estimate of the maximum power consumption at each level has been conducted to dimension the voltage regulators accordingly; the summary of this estimation can be seen in Table 5.1.

Consumer        1.8 V [mA]   3.3 V [mA]   5.0 V [mA]
I²C                              20
LAN                             190
AVR                              25
Diff. press                                      5
USB Host                                       500
Misc. sensors                    10
Gumstix             20          700
Zigbee                          250
LEDs                             70
Regulator           10                          30
FTDI                              8
Total               30        1,273            535

Table 5.1: Power consumption at the different voltage levels.

Based on the amount of current needed at 3.3 V and 5.0 V, it has been decided to maximize efficiency by using switch-mode regulators for these levels. As only 30 mA are drawn at 1.8 V, a linear regulator is used there.


Figure 5.7: The software blocks and their communication (fmFusion, fmController and fmTeleAir/Zigbee, connected to the airframe or simulator).

Apart from the voltage levels on the control board, the RC servos, motor and receiver also need power. This power is supplied by the Electronic Speed Controller (ESC). The motor is driven by the AC regulator, while the servos and receiver are driven by the Battery Eliminating Circuit (BEC) in the ESC. To optimize efficiency, it is recommended to use either an ESC with a switch-mode BEC or a separate power supply for the servos, as a linear regulator for the servos would waste up to 3 A · 9.8 V = 29.4 W.


Auxiliary components

A number of other components have been used to solve different needs. An FTDI chip connects the Linux system terminal UART to a USB plug on the board, such that the board can be interfaced directly with a USB cable. To interface the Gumstix, level conversion from 1.8 V to 3.3 V and vice versa was needed; this has been done with the SN74LVC16T245 [36] 16-bit dual-supply bus transceiver.


Software building blocks

The software needed to accommodate the system blocks has been implemented as seen in Figure 5.7. It is based on five major nodes:

1. fmFusion: State estimation and hardware interfaces.
2. fmController: Low-level PID controllers.
3. fmTeleAir & fmTeleGround: Wireless telemetry package handling.
4. GCS: Ground control station.
5. Simulator: Identical interface to the actual hardware, for grounded development.

The following sections will discuss these nodes one by one.



fmFusion has two major roles. Firstly, it is the only node in the system that has access to the I²C port, and as such is the only node that can interact with the low-level sensors; this scheme must be used to prevent concurrency issues on the bus. Its second responsibility is to execute the state estimator. The state estimator is run in the same node that has a handle on the I²C bus, such that data does not have to be passed around - internal pointers can be used. This structure allows for three different modes, as can be seen in Table 5.2.

Mode         Subscribes to   Publishes        Note
Autopilot    none            State estimate   Used for the finished autopilot and aided flight
Simulator    Sensor data     State estimate   Used to test new algorithms in the simulator
Data logger  none            Sensor data      Used to log raw sensor data

Table 5.2: fmFusion and its modes.

Also, by implementing the ability for fmFusion to subscribe to sensor data rather than polling them from the I²C bus, it is possible to run the state estimator on a desktop computer without the actual sensors. The open source KFilter [11] library has been used to implement the state estimator. The kinematic models are described in detail in Chapter 3, with the kinematics derived in Appendices A through C. The filter library provides filter classes as well as vector and matrix types, such that the implementation effort can be focused on producing the right filter inputs, rather than on worrying whether low-level operations like matrix and vector multiplication function properly. When instantiating the filter class, all the known elements of the filter are initialized in separate functions; i.e. makeA() is overloaded to contain the A matrix of the filter, while makeProcess() defines the process update function, etc. In practice this results in the instantiation of three EKF classes, where the output from one is used as input to the next. Please refer to the Doxygen documentation for details.
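The structure described above - a filter base class whose known elements are filled in by overriding dedicated functions - can be illustrated with a self-contained scalar Kalman filter. This is an illustrative miniature, not the actual KFilter API; the function names merely mirror the makeA()/makeProcess() convention mentioned in the text:

```cpp
// Miniature scalar Kalman filter illustrating the "override the known parts"
// pattern used with the KFilter library (illustrative only).
class ScalarKF {
public:
    ScalarKF(double x0, double p0, double q, double r)
        : x_(x0), p_(p0), q_(q), r_(r) {}

    void step(double z) {
        makeProcess();                 // time update: x = A*x, P = A*P*A + Q
        double k = p_ / (p_ + r_);     // Kalman gain
        x_ += k * (z - x_);            // measurement update
        p_ *= (1.0 - k);
    }

    double state() const { return x_; }
    double variance() const { return p_; }

protected:
    virtual double makeA() { return 1.0; }  // constant-state process model
    virtual void makeProcess() {
        double a = makeA();
        x_ = a * x_;
        p_ = a * p_ * a + q_;
    }

private:
    double x_, p_, q_, r_;
};
```

Fed a constant measurement, the estimate converges towards it while the estimated variance shrinks - the same behaviour, in one dimension, that the cascaded EKFs exhibit per state.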



fmController implements the low-level controls in the system. It subscribes to the airframeState published by the state estimator (fmFusion); furthermore it also subscribes to the remote control radio data, such that set-points can be input via the radio. In future versions it will subscribe to the set-points published by the FMS. The PID controllers used in the implementation were originally part of an Arduino library², but due to various shortcomings it was decided to expand this critical component. The source is documented via Doxygen. The implementation complies with the description given in Section 2.1.1 on page 16. Various


practical issues nevertheless had to be dealt with. When switching from manual to automated flight, the PID integral terms have to be initialized such that the output is the same as it was during manual flight. Derivative kicks also needed to be dealt with: by letting the feedback, rather than the error, feed the derivative term, these are overcome. By defining the minimum and maximum output of each controller, integral wind-up is easily dealt with inside the PID class. Tuning of the parameters was mainly done manually; first in the simulator, to get an idea of the magnitude of the values, and then, during later field tests, fine-tuning to the airframe was conducted iteratively. Tuning schemes such as Ziegler-Nichols could have been used, but it was found that hand-tuning was neither complicated nor time-consuming, and adequate to obtain acceptable performance.
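The three measures above - bumpless transfer on mode switch, derivative-on-measurement, and output clamping for anti-windup - can be sketched in a minimal PID class. This is an illustrative sketch, not the actual fmController source:

```cpp
#include <algorithm>

// Minimal PID with output clamping (anti-windup), derivative on measurement
// (avoids derivative kick on set-point changes) and bumpless initialization.
class Pid {
public:
    Pid(double kp, double ki, double kd, double out_min, double out_max)
        : kp_(kp), ki_(ki), kd_(kd), out_min_(out_min), out_max_(out_max) {}

    // Bumpless transfer: seed the integrator so that the first automatic
    // output equals the last manual output.
    void init(double measurement, double manual_output) {
        integral_ = manual_output;
        last_meas_ = measurement;
    }

    double update(double setpoint, double measurement, double dt) {
        double error = setpoint - measurement;
        integral_ += ki_ * error * dt;
        integral_ = std::max(out_min_, std::min(out_max_, integral_));  // anti-windup
        double deriv = (measurement - last_meas_) / dt;  // derivative on measurement
        last_meas_ = measurement;
        double out = kp_ * error + integral_ - kd_ * deriv;
        return std::max(out_min_, std::min(out_max_, out));             // clamp output
    }

private:
    double kp_, ki_, kd_, out_min_, out_max_;
    double integral_ = 0.0, last_meas_ = 0.0;
};
```

On the mode switch, `init()` is called with the current measurement and the last manual actuator command, so the handover produces no output step.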



The GPS device provides data in NMEA format. This standard contains a long list of different messages; two of these message types, GGA and VTG, contain the data used in this system. Position data is provided in longitude/latitude format by the GGA message. The remainder of the system uses UTM coordinates, so a conversion must be conducted. An original FroboMind structure, consisting of fmCSP, fmSensors and fmExtractors, is used to obtain the desired information from the GPS. fmCSP has the low-level handle on the serial port, taking parameters for the actual port name, the baud rate and the desired publish topic name; it publishes the NMEA strings containing data from the GPS device. fmSensors subscribes to and parses the NMEA strings and assigns the values to appropriate message types; i.e. GGA- and VTG-type messages are published by fmSensors. However, as the UTM coordinates need to be extracted, fmExtractors subscribes to the GGA data and converts longitude and latitude to UTM northings and eastings. A small overview can be seen in Table 5.3.

Node          Subscribes to   Publishes       Note
fmCSP         none            Serial string   Attaches directly to the serial port, publishing NMEA data as strings
fmSensors     Serial string   GGA, VTG        Parses the string data and puts it into appropriate message structures
fmExtractors  GGA             UTM position    Converts latitude and longitude to UTM coordinates

Table 5.3: The node collection used to extract positional data from a GPS receiver connected to a UART.


fmTeleAir and fmTeleGround

To accommodate the need for a wireless link between the aircraft and the ground station while the ATC is operating, a Zigbee modem is needed at either end of the system. The Zigbee modems are capable of a variety of network topologies,

and were originally designed for home automation, as seen in [32]. As there is only one ATC in this project, the modems are configured for simple point-to-point serial communication, effectively using the modems as a 1.5 km [26] wireless serial connection. With the hardware used, the project could however later be expanded with more ATCs, with the Zigbee modems allowing them to interconnect as well, enabling possibilities such as swarm flying and coordinated flight, as proposed by Floreano et al. [31]. Transporting this data raw over the serial line would deviate from the ROS design terminology; therefore steps have been taken to make the serial link transparent. By having a ROS node at either end of the Zigbee connection, the link becomes transparent: using ROS' own serialization methods, predefined message types can be subscribed to by one Zigbee node, serialized and sent to the other Zigbee node, where they are deserialized and published as though they had always been part of ROS. Utilizing this message transport protocol, the data already published on the autopilot board can also be published on the ground computer for the ground station to visualize. All that is required is that the two nodes are compiled with the same predefined message types. fmTeleAir executes on the autopilot and is responsible for the link to the GCS; whatever this node subscribes to gets published at the other end by fmTeleGround, executing on the GCS. To ensure data packages are read properly, SLIP [62] has been implemented together with the ROS serializer, as can be seen in Figure 5.8.

Figure 5.8: The data structure transmitted over the Zigbee serial link, wrapped in SLIP to ensure valid packages. STA and END are SLIP definitions for start and end of package.
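The SLIP framing referenced above is, in its standard RFC 1055 form, a byte-stuffing scheme built around a single frame delimiter; the figure's STA/END pair is a variant of the same idea with an explicit start marker. A sketch of the standard encoder:

```cpp
#include <cstdint>
#include <vector>

// SLIP framing as defined in RFC 1055. The frame delimiter END (0xC0) is
// escaped inside the payload as ESC ESC_END, and ESC (0xDB) as ESC ESC_ESC,
// so a receiver can always resynchronize on END bytes.
namespace slip {
constexpr uint8_t END = 0xC0, ESC = 0xDB, ESC_END = 0xDC, ESC_ESC = 0xDD;

std::vector<uint8_t> encode(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> out;
    out.push_back(END);                 // leading END flushes any line noise
    for (uint8_t b : payload) {
        if (b == END)      { out.push_back(ESC); out.push_back(ESC_END); }
        else if (b == ESC) { out.push_back(ESC); out.push_back(ESC_ESC); }
        else               { out.push_back(b); }
    }
    out.push_back(END);                 // closing frame delimiter
    return out;
}
}  // namespace slip
```

Because every delimiter inside the payload is escaped, a corrupted or truncated frame cannot masquerade as a valid one, which is exactly the property needed before handing bytes to the ROS deserializer.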


Ground Control system

The information needed by the operator on the ground consists of the airframe state, the signal quality of the GPS fix and various other information. The telemetry capability provided by fmTeleGround and fmTeleAir is used for this purpose. A ground station developed for the CityFlyer by the CCNY Robotics and Intelligent Systems Lab [55] has been converted and modified to suit the current needs of the project. The ground station is ROS based and runs in the GTK+ graphical environment. The virtual cockpit, as seen in Figure 5.9a, is made up of a configurable number of instruments. These instruments are drawn with vector graphics, such that the ground station can scale to any screen size. Furthermore, the individual instruments can be modified in scale, size and color, and their placement can also

Figure 5.9: Ground Control Station views: (a) virtual cockpit, (b) map and tracker.

be rearranged. The ground station also features a map tab, where the user can choose between a large array of online map providers like OpenStreetMap or Google Maps. Here the airframe track can be visualized with a line, and the map can also be set to move with the airframe. By having the ROS-enabled ground station subscribe to the aforementioned Zigbee serial node topics, all the necessary information is available and ready to be visualized.



CRRCsim is built directly from source, as this allows the integration of ROS-based sensor data publishers and steering input subscribers. CRRCsim uses the GNU Autoconf build system, which proved difficult to mate with the ROS make system. Therefore the dependencies were installed manually and a CMake system was configured to include ROS in the build, thereby enabling the whole ROS suite in the source. Having done that, one of the existing unused interfaces was modified into a ROS node subscribing to radio data and publishing sensor data.

Figure 5.10: Rxgraph of the simulator (crrcsim) running alongside the state estimator (fmFusion), the PID controller (ystixController), the Zigbee-connected radio input (fmRemoteGround) and finally a ground station (ground station).


As can be seen in the overview in Fig. 5.10 on the preceding page, the simulator subscribes to servo data and publishes sensor data on a multitude of topics. This architecture allows direct replacement of the simulator node with the actual hardware, by changing the mode of fmFusion, as described in Section 5.3.1 on page 57.


Flight test
Vision based post ight roll verication

One of the major challenges of estimating the state of an airplane is the lack of ground truth, and hence the lack of a solid, correct reference. The GPS can be used to some extent: if the wind speed is known, the pitot tube can be calibrated, as seen in Section 3.1.1 on page 29. Also, the heading, deduced mainly from the magnetometer, can be verified against the GPS heading measured when moving in a constant direction. This method can however not be used accurately during turns, as the internal filter of any GPS introduces a latency in the output. Furthermore, the GPS is of no help when estimating roll and pitch. Thus, we are faced with the problem of acquiring a reference. As has been shown, roll and pitch can be deduced from a video stream. Therefore a test flight was conducted with a camera mounted on the airframe, facing out of the front of the plane. In itself this video footage can be synchronized with the logged state estimate, visualized on a virtual cockpit. This task is however inaccurate, tedious and time consuming, as a person has to watch the video stream and visually determine whether the state estimate corresponds to the video. Therefore an automated vision system has been developed in Matlab. The flow of this script is depicted in Figure 5.11 on the next page.



For each frame the script proceeds as follows: load image N as greyscale; apply a Gaussian blur; apply Otsu two-level thresholding; run Canny edge detection; perform a Hough-space transform with peak detection; deduce the roll angle from the peak position; repeat until the last image has been processed.

Figure 5.11: Horizon detection algorithm flow chart.


The method is stable as long as there is a clear shade difference, enabling Otsu's method to distinguish sky from ground. If an image contains only sky, the most dominant line is typically the propeller, and if only ground is filmed, typically field boundaries are found. From a video section of 1:20 minutes, during which sky and ground are constantly visible, a data sample has been extracted. Figures 5.12 through 5.14 illustrate this particular 1:20 minute flight, and compare the vision estimate with the Extended Kalman Filter state estimate.

Figure 5.12: Comparing vision (solid) and EKF (dashed) roll estimates

Figure 5.13: Roll error over time


Figure 5.14: Roll error distribution. The standard deviation is 5.21°.

The EKF state estimate has been recorded at a frequency of 50 Hz, whereas the camera used had a frame rate below 15 fps. Therefore Matlab has been used to resample the vision roll estimate, and to ensure synchronization, cross-correlation has also been conducted. It is noteworthy that the pitch angle could also have been deduced from the video; this was however not tested. It is also estimated that the script, as presented in Figure 5.11 on page 63, at times deviates by up to 6 degrees in seemingly well-defined horizon images. Therefore the standard deviation of the error between the vision and EKF estimates, as seen in Figure 5.14, is just as likely to come from the vision script as from the EKF. Thereby the test indicates that the EKF state estimate of roll is viable.
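The synchronization step mentioned above can be sketched as a discrete cross-correlation: slide one signal over the other and pick the lag that maximises their dot product. An illustrative stand-in for the Matlab step, not the thesis script itself:

```cpp
#include <vector>
#include <cstddef>

// Return the lag (in samples) of b relative to a that maximises the
// cross-correlation sum, searching lags in [-max_lag, max_lag].
int best_lag(const std::vector<double>& a, const std::vector<double>& b,
             int max_lag) {
    int best = 0;
    double best_score = -1e300;
    for (int lag = -max_lag; lag <= max_lag; ++lag) {
        double score = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            int j = static_cast<int>(i) + lag;
            if (j >= 0 && j < static_cast<int>(b.size()))
                score += a[i] * b[j];   // overlap-only correlation
        }
        if (score > best_score) { best_score = score; best = lag; }
    }
    return best;
}
```

In practice the vision estimate would first be resampled to the EKF's 50 Hz grid, and the lag returned here then gives the time offset between the two logs.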


Aided Flight

Once the state estimator had been implemented and verified, the first step towards the autopilot described in Chapter 2 on page 14 is the ability to control the plane based on its state and user input. Stabilized flight is characterized by the plane stabilizing itself, in roll and pitch, when the remote control sticks are released by the pilot. This can be used to teach aspiring pilots how to fly, with a reduced risk of crashing the plane. If panic occurs, a dangerous situation can typically be averted by releasing all control sticks and, if lift is desired, applying elevator action. However, this limited form of stabilization does not provide a means for good control, only correction if the situation is about to get, or already is, out of hand. If instead the user's control input could be used to guide the plane in the right direction, rather than directly controlling the actuators of the plane, a more intuitive way of flying could be realized. If a robust way of controlling the plane

can be implemented, an autopilot utilizing the same control input would be more intuitive to program. Therefore a control model has been implemented, where the right-hand control stick, in the horizontal direction, defines the roll angle set-point rather than the aileron deflection; please refer to Figure 5.17 on the following page. If the controller is then limited to a reasonable set of maximum roll angles (i.e. no more than 90°), this input actually controls the roll angle and thereby the turn radius of the plane - the same ultimate goal as regular direct aileron control, but without the pilot having to do the actual control. Likewise, traditional control of the right-hand vertical stick maps to elevator deflection; see Figure 5.18 on the next page. Here the ultimate goal is to pitch at a certain rate or angle in order to change the plane's altitude. Therefore a control loop has been implemented where this stick input maps to a pitch angle rather than elevator deflection. By limiting the maximum and minimum pitch angle to safe values (i.e. ±45° for safe regular flight), the plane flies reasonably safely no matter the user input, while still leaving great freedom of control to the pilot. The pitch controller currently maps to the elevator and does not account for roll. This results in the vertical control stick influencing the turn radius, rather than the pitch, when the airframe is not horizontal; this effect can be seen in Figure 5.16 on the following page. The aided flight procedure has been implemented and tested in the field. On a day with a moderate wind of about 5-6 m/s, the plane was hand-launched in aided mode, flown for five minutes, and safely landed, never changing back to manual mode. The flight was mainly conducted in a large circle, turning left. Figure 5.15 shows the relation between the set-point from the radio and the state estimate of the roll.
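The stick-to-set-point mapping described above amounts to a linear map from the stick's pulse width to a clamped roll angle. A sketch, where the 1.0-2.0 ms pulse range comes from the R/C interface section, while the ±60° limit is a hypothetical choice of "reasonable" maximum roll:

```cpp
#include <algorithm>

// Map an R/C pulse width (1000-2000 us, 1500 us = stick centred) linearly to
// a roll set-point in degrees, clamped to +/- max_roll_deg. In manual mode
// the same pulse would instead drive the aileron deflection directly.
double roll_setpoint_deg(int pulse_us, double max_roll_deg = 60.0) {
    double norm = (pulse_us - 1500) / 500.0;      // -1 .. +1 over the stick range
    norm = std::max(-1.0, std::min(1.0, norm));   // saturate outside 1.0-2.0 ms
    return norm * max_roll_deg;
}
```

The pitch mapping works identically, with the ±45° limit mentioned in the text, and the resulting set-point is what the roll/pitch PID loops in fmController track.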

Figure 5.15: Roll angle as a function of user input


Figure 5.16: Pitch angle as a function of user input, shown for roll angles of 0° and 40°

Mode         Function
Manual       Aileron deflection
Stabilized   Stabilize horizontally if centered
Aided        Roll angle set-point

(a) Radio aileron    (b) Radio aileron function

Figure 5.17: Radio aileron input function, dependent on flight mode

Mode         Function
Manual       Elevator deflection
Stabilized   Stabilize horizontally if centered
Aided        Pitch angle set-point

(a) Radio elevator    (b) Radio elevator function

Figure 5.18: Radio elevator input function, dependent on flight mode



6.1 Future work

The ultimate goal of this project, to make an Autonomous Tool Carrier capable of interfacing various tools and flying autonomously, has not yet been fully realized. The work that has been done has, however, been conducted in such a way that it makes a good starting point for future work. The most prominent topic of future work is the tool interfacing. Hardware interfaces have been made available, and the software has been designed to easily accommodate changes and modules. The interfacing protocol has, however, not been defined and implemented. The effect of adding weight to the system has been investigated, but the impact on control response and the tuning of the controllers as a function of added weight should be explored further. Of the state estimate parameters, pitot vs. GPS speed and roll angle vs. video recordings have been verified. The other state parameters have not been scientifically verified. Verification of parameters such as pitch, yaw, position etc. should also be conducted. It is proposed to extract the pitch using the same approach as has been used for the roll estimate. Heading and position are more complex to verify, as ground truth is not readily available. So far, verification of the remaining parameters has been done by manually inspecting the state, visualized by the ground station, and comparing it to video recordings. Pitch has also been verified in the sense that it works in aided flight mode, but again, a more scientific method should be used. The last major field of future work lies in the development of the Flight Management System. Various concepts have been investigated, but none of them have been implemented. This would be a natural next step of the project, as the state estimator now provides all the data necessary for flying autonomously.



Project analysis

When the project was first defined, a number of time consuming factors were underestimated. Initially, the idea was to create the entire prototype, starting at the hardware design level and ending with an ATC capable of autonomous flight and tool interactions. This is in itself a vast project, and even if everything had worked on the first try, it was ambitious to think that the flying prototype could have been finished within the time limits. Apart from this, a number of factors introduced unforeseen delays; dealing with aircraft automation presents a number of challenges in the transition from idea to test, as errors can potentially destroy the entire physical platform. Unlike automation of ground vehicles, hitting an emergency stop is a very bad idea when airborne. Also, none of the authors could fly model planes, and they were thus faced with either hiring a pilot for every test flight, or having to learn how to fly. Hiring a pilot would be the safest option, but it is expensive. Oftentimes it is also desirable to be able to fly within the hour, if the weather conditions are just right. Therefore it was decided that one author would spend time learning to fly. This obviously delayed the initial flights, and increased the risk of crashes due to lack of experience. The trade-off was worth it though, as many more flights could be conducted, at the right times. Initially the project work was slightly unstructured. A time plan had been forged, but it did not become an integrated part of the day to day work. This was later realized, and the plan was revised. This resulted in a SCRUM-like process, where sub-deliveries were formed and logged for the supervisors to see. The generated log can be seen in Appendix I on page 124. This structure greatly focused and increased the productivity of the team, resulting in an effective round-off of the project. Finally, in the last stages of the project, the usefulness of an on-line process log was discovered.
Through the log, supervisors and students alike had a common forum for discussing the day to day progress and direction of the project. The log is attached in Appendix I on page 124.



Through a study of state of the art autonomous unmanned aerial vehicles, it has been discovered that, though several robust systems are available, they tend to focus on specific application areas and use specialized, integrated tools. By the use of a multitude of disciplines, ranging from hardware layout, embedded programming and software development to kinematics and digital filtering, a prototype foundation of a new concept in the area of autonomous aerial vehicles has been made. By building the system from the bottom up, with modularity in mind, it is anticipated that various tools can interface and interact with the same autopilot. The cornerstone has been set for a continued project. A platform capable of stable state estimation and proven control of the longitudinal axis has been developed, implemented and tested. A combination of tests and practical verification has shown that a 9 state Extended Kalman Filter, based on cheap digital sensors, can be used to accurately determine the state of an aircraft in motion.


Furthermore, it has been demonstrated that the state estimate and the nested PID loops are capable of maintaining control of the airframe under aided flight. It has been shown that by substituting direct control with attitude control, a novice pilot can fly a model aircraft, even in moderate winds. Nothing suggests that this should not be the case for stronger winds as well. It is postulated that, as this control is now in place, the next step towards an Autonomous Tool Carrier can be taken. By building an autopilot, as suggested in this thesis, and implementing a tool communication protocol on the provided hardware and software, the system can become a reality.



[1] ArduPilot.
[2] Avior 100 autopilot. ati-avior-100-specifications/.
[3] Gatewing X100.
[4] Gluonpilot.
[5] MicroPilot.
[6] OpenPilot.
[7] Piccolo II.
[8] Sensey.
[9] CCNY ground station.
[10] Happy Killmore ground control station. happykillmore-gcs/.

[11] KFilter - free C++ Extended Kalman Filter library.

[12] Procerus Technologies, Kestrel autopilot.
[13] OpenPilot ground control station. openpilot-gcs/.
[14] Paparazzi.
[15] QGroundControl.
[16] World Magnetic Model 2010. National Geophysical Data Center, National Oceanic and Atmospheric Administration. geomagmodels/IGRFWMM.jsp?defaultModel=WMM.
[17] E.P. Anderson, R.W. Beard, and T.W. McLain. Real-time dynamic trajectory smoothing for unmanned air vehicles. Control Systems Technology, IEEE Transactions on, 13(3):471–477, May 2005. ISSN 1063-6536. doi: 10.1109/TCST.2004.839555.

[18] A. L. Barker, D. E. Brown, and W. N. Martin. Bayesian estimation and the Kalman filter. Computers Math. Applic, pages 55–77, 1994.
[19] R. Beard. State estimation for micro air vehicles. In Javaan Chahl, Lakhmi Jain, Akiko Mizutani, and Mika Sato-Ilic, editors, Innovations in Intelligent Machines - 1, volume 70 of Studies in Computational Intelligence, pages 173–199. Springer Berlin / Heidelberg, 2007. ISBN 978-3-540-72695-1. doi: 10.1007/978-3-540-72696-8_7.
[20] H. Bendea, P. Boccardo, S. Dequal, F. Giulio Tonolo, D. Marenchino, and M. Piras. Low cost UAV for post-disaster assessment. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII(B8), 2008.
[21] W. Burgard, D. Fox, and S. Thrun. Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.
[22] Xavier P. Burgos-Artizzu, Angela Ribeiro, Maria Guijarro, and Gonzalo Pajares. Real-time image processing for crop/weed discrimination in maize fields. Computers and Electronics in Agriculture, 75(2):337–346, 2011. ISSN 0168-1699. doi: 10.1016/j.compag.2010.12.011.
[23] R. S. Christiansen. Design of an autopilot for small unmanned aerial vehicles. Master's thesis, Brigham Young University, Aug 2004.
[24] Atmel Corporation. ATmega328 datasheet, Dec 2009. 8271A-AVR-12/09.
[25] Analog Devices. Digital accelerometer ADXL345. Technical report, AD, 2009.
[26] Digi. XBee - RF family comparison matrix. Technical report, Digi, 2011.
[27] Roger M. du Plessis. Poor Man's Explanation of Kalman Filtering or How I Stopped Worrying and Learned to Love Matrix Inversion. June 1967.
[28] A. M. Eldredge. Improved state estimation for micro air vehicles. Master's thesis, Brigham Young University, Dec 2006.
[29] Vanessa Espinar and Dana Wiese. An extreme makeover - scientists upgrade a toy plane with robotic technologies. GPS World, pages 20–27, Feb 2006.
[30] Patrick Th. Eugster, Pascal A. Felber, Rachid Guerraoui, and Anne-Marie Kermarrec. The many faces of publish/subscribe. ACM Computing Surveys, 35(2):114–131, June 2003.
[31] Dario Floreano, Sabine Hauert, Severin Leven, and Jean-Christophe Zufferey. Evolutionary swarms of flying robots. Laboratory of Intelligent Systems, École Polytechnique Fédérale de Lausanne, Switzerland.
[32] K. Gill, Shuang-Hua Yang, Fang Yao, and Xin Lu. A ZigBee-based home automation system. Consumer Electronics, IEEE Transactions on, 55(2):422–430, May 2009. ISSN 0098-3063. doi: 10.1109/TCE.2009.5174403.


[33] C. Gutjahr and R. Gerhards. Decision rules for site-specific weed management. In Erich-Christian Oerke, Roland Gerhards, Gunter Menz, and Richard A. Sikora, editors, Precision Crop Protection - the Challenge and Use of Heterogeneity, pages 223–239. Springer Netherlands, 2010. ISBN 978-90-481-9277-9. doi: 10.1007/978-90-481-9277-9_14.
[34] Dieter Hausamann, Werner Zirnig, Gunter Schreier, and Peter Strobl. Monitoring of gas pipelines - a civil UAV application. Aircraft Engineering and Aerospace Technology, 77(5):352–360, 2005.
[35] W.T. Higgins. A comparison of complementary and Kalman filtering. Aerospace and Electronic Systems, IEEE Transactions on, AES-11(3):321–325, May 1975. ISSN 0018-9251. doi: 10.1109/TAES.1975.308081.
[36] Texas Instruments. SN74LVC16T245 16-bit dual-supply bus, 2005.
[37] InvenSense. ITG-3200 Product Specification, 1.4 edition, March 2010.
[38] Corey Ippolito. An autonomous autopilot control system design for small-scale UAVs. Technical report, Carnegie Mellon University, 2005. NASA Ames Research Center.
[39] Jan Jacobi and Matthias Backes. Classification of weed patches in QuickBird images: Verification by ground truth data. EARSeL European Association of Remote Sensing Laboratories, 5(2):173–179, July 2006. ISSN 1729-3782.
[40] Niels Jul Jacobsen. Den intelligente sprøjtebom. Syddansk Universitet, Mærsk Mc-Kinney Møller Instituttet.
[41] Kjeld Jensen. FroboMind, a conceptual architecture for field robot software.
[42] Bent Vraae Jørgensen and Bjarne Siewertsen. DMI wind statistics.
[43] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, (82 (Series D)):35–45, 1960.
[44] Wajahat Kazmi, Morten Bisgaard, Francisco Garcia-Ruiz, Karl Damkjær Hansen, and Anders la Cour-Harbo. Adaptive surveying and early treatment of crops with a team of autonomous vehicles. Proceedings of the 5th European Conference on Mobile Robots ECMR 2011, pages 1–6, 2011.
[45] Derek Kingston, Randal Beard, Al Beard, Timothy McLain, Michael Larsen, and Wei Ren. Autonomous vehicle technologies for small fixed-wing UAVs. In AIAA Journal of Aerospace Computing, Information, and Communication, pages 2003–6559, 2003.
[46] Mark Looney. A simple calibration for MEMS gyroscopes. Technical report, Analog Devices, 2010.


[47] Chang Boon Low. A trajectory tracking control design for fixed-wing unmanned aerial vehicles. In Control Applications (CCA), 2010 IEEE International Conference on, pages 2118–2123, Sept. 2010. doi: 10.1109/CCA.2010.5611328.
[48] Statens Luftfartsvæsen. BL 9-4: Bestemmelser om luftfart med ubemandede luftfartøjer, som ikke vejer over 25 kg. Bestemmelser for Civil Luftfart, (3), January 2004.
[49] R. Mahony, M. Euston, P. Coote, J. Kim, and T. Hamel. A complementary filter for attitude estimation of a fixed-wing UAV. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 340–345, Sept. 2008. doi: 10.1109/IROS.2008.4650766.
[50] ST Microelectronics. L3G4200D MEMS motion sensor: ultra-stable three-axis digital output gyroscope. Technical report, ST, 2010.
[51] E. C. Molina. Bayes' theorem - an expository presentation. Bell System Technical Journal, pages 273–283, 1931.
[52] M.S. Moran, Y. Inoue, and E.M. Barnes. Opportunities and limitations for image-based remote sensing in precision crop management. Remote Sensing of Environment, 61(3):319–346, 1997. ISSN 0034-4257. doi: 10.1016/S0034-4257(97)00045-X.
[53] S. Nasiri, M. Lim, and M. Housholder. A critical review of the market status and industry challenges of producing consumer grade MEMS gyroscopes. Technical report, InvenSense, 2009.
[54] D.R. Nelson, D.B. Barber, T.W. McLain, and R.W. Beard. Vector field path following for miniature air vehicles. Robotics, IEEE Transactions on, 23(3):519–529, June 2007. ISSN 1552-3098. doi: 10.1109/TRO.2007.898976.
[55] The City College of New York. City-Flyer robotics lab.
[56] S. M. Oh. Multisensor fusion for autonomous UAV navigation based on the unscented Kalman filter with sequential measurement updates. In Multisensor Fusion and Integration for Intelligent Systems (MFI), 2010 IEEE Conference on, pages 217–222, Sept. 2010. doi: 10.1109/MFI.2010.5604461.
[57] Sanghyuk Park, John Deyst, and Jonathan P. How. A new nonlinear guidance logic for trajectory tracking. In Proceedings of the AIAA Guidance, Navigation and Control Conference, pages 2004–4900, 2004.
[58] Morgan Quigley, Brian Gerkey, Ken Conley, Josh Faust, Tully Foote, Jeremy Leibs, Eric Berger, Rob Wheeler, and Andrew Ng. ROS: an open-source robot operating system. Willow Garage, 2009.
[59] J.D. Redding, T.W. McLain, R.W. Beard, and C.N. Taylor. Vision-based target localization from a fixed-wing miniature air vehicle. In American Control Conference, 2006, 6 pp., June 2006. doi: 10.1109/ACC.2006.1657153.

[60] S. Riaz and A. B. Asghar. INS/GPS based state estimation of micro air vehicles using inertial sensors. In Innovative Systems Design and Engineering, Vol 2, No 5, 2011.
[61] S. Riaz and Dr. A. M. Malik. Single seven state discrete time extended Kalman filter for micro air vehicle. In Proceedings of the World Congress on Engineering 2010, Vol II, 2010.
[62] J. Romkey. A nonstandard for transmission of IP datagrams over serial lines: SLIP. Network Working Group, June 1988. http://tools.ietf.org/html/rfc1055.
[63] S. F. Schmidt. Kalman filter: Its recognition and development for aerospace applications. Journal of Guidance and Control, 4(1):4–7, 1981.
[64] Freescale Semiconductor. MPXV7002 - Integrated Silicon Pressure Sensor On-Chip Signal Conditioned, Temperature Compensated and Calibrated, 2009. Rev 2.
[65] Bosch Sensortec. BMP085 digital pressure sensor - data sheet, 2009. BST-BMP085-DS000-05.
[66] SMSC. High-performance 16-bit non-PCI 10/100 Ethernet controller with variable voltage I/O, 2012.
[67] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 1st edition, 1992. ISBN 978-0471613978.
[68] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Monte Carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343–349, July 1999.
[69] Andrés Villa-Henriksen. Automatic thermal based animal detection system for mowing operations. Master's thesis, University of Aarhus, August 2011.
[70] Martin Weis and Markus Sökefeld. Detection and identification of weeds. In Erich-Christian Oerke, Roland Gerhards, Gunter Menz, and Richard A. Sikora, editors, Precision Crop Protection - the Challenge and Use of Heterogeneity, pages 119–134. Springer Netherlands, 2010. ISBN 978-90-481-9277-9. doi: 10.1007/978-90-481-9277-9_8.
[71] G. Welch and G. Bishop. An introduction to the Kalman filter, 2001.
[72] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of the ASME, 64:759–768, 1942.



Abbreviations

ADC    Analog to Digital Converter
ATC    Autonomous Tool Carrier
AVR    Officially just a name of the Atmel microcontroller line. Most likely it is derived from Alf and Vegard's RISC
BL     Bestemmelser for Civil Luftfart (Danish law on civilian aviation)
DMI    Dansk Meteorologisk Institut (Danish Meteorological Institute)
EKF    Extended Kalman Filter
FMS    Flight Management System
FTDI   Future Technology Devices International. Company specialized in USB to UART adapters
GCS    Ground Control Station
GPIO   General Purpose Input / Output
GPS    Global Positioning System
I2C    Inter-Integrated Circuit bus
IC     Integrated Circuit
IMU    Inertial Measurement Unit
IR     Infrared
LAN    Local Area Network
MAV    Micro Aerial Vehicle
MEMS   Micro Electro Mechanical System
NIR    Near InfraRed colour channel



NDVI      Normalized Difference Vegetation Index
PCB       Printed Circuit Board
PDF       Probability distribution function
PID       Proportional, Derivative & Integral Controller
PWM       Pulse Width Modulation
RAM       Random Access Memory
RC        Radio Controlled
RGB       Red, Green and Blue colour channels
SMD       Surface mounted devices
SPI       Serial Peripheral Interface
SSWM      Site Specific Weed Management
UART      Universal Asynchronous Receiver / Transmitter
UAS       Unmanned Aerial System
UAV       Unmanned Aerial Vehicle
USB       Universal Serial Bus
USB OTG   USB On-The-Go
UTM       Universal Transverse Mercator
WiFi      Wireless Fidelity

$a$           Scalar
$\mathbf{a}$  Vector
$A$           Matrix
$x_k$         Value at the k-th time step
$s_\alpha$    Sine of alpha, $\sin(\alpha)$
$c_\alpha$    Cosine of alpha, $\cos(\alpha)$
$t_\alpha$    Tangent of alpha, $\tan(\alpha)$
$\dot{x}$     Derivative
$\hat{x}$     Estimate
$\hat{x}^-$   A priori estimate
$\hat{x}^+$   A posteriori estimate


Symbols

p(a)          The probability of a
p(a|b)        The probability of a given b
$R_k$         Euler rotation matrix, rotating around the k-axis, k being either the x, y or z axis
$R_q$         Quaternion rotation matrix
$a_n$         Acceleration along the n-axis (not to be confused with the angle of attack, $\alpha$)
$\alpha$      Angle of attack (not to be confused with acceleration, $a_n$)
$B_n$         Magnetic field strength along the n-axis
$B_0$         Magnetic field strength in the reference frame
$\chi$        Course over ground
$\gamma$      Course climb angle
$K$           Kalman gain
$\lambda$     Wavelength
$R$           Measurement noise covariance matrix
$H$           Measurement model; relates x to z
$V$           Measurement noise model; relates v to z
$h(x, v)$     Non-linear measurement model
$f(x, u, w)$  Non-linear state transition model
$z$           Measurement or observation
$\omega_n$    Angular velocity around the n-axis
$\phi$        Roll; rotation around the longitudinal axis
$P_e$         Position in east direction
$P_n$         Position in north direction
$\psi$        Yaw; rotation around the vertical axis
$q$           Quaternion
$P$           State covariance matrix
$\hat{x}$     State estimate
$Q$           State transition noise covariance matrix
$A$           State transition model

$W$           State transition noise model; relates w to x
$w$           State transition noise; mean zero Gaussian with covariance Q
$u$           System input
$v$           Measurement noise; mean zero Gaussian with covariance R
$\theta$      Pitch; rotation around the lateral axis
$W_e$         Wind in east direction
$W_n$         Wind in north direction

Terminology

Ailerons      Control surfaces, typically located on the outer half of the wing towards the wing tip. Deflection upwards causes reduced lift, deflection downwards causes increased lift. Operated in contra mode to induce roll.
Course        Direction of flight.
Elevator      Control surface, located at the trailing edge of the tailplane. Deflection causes movement around the lateral axis.
Lateral       Axis going from wingtip to wingtip, through the center of gravity.
Longitudinal  Axis going from the plane nose to the tail, through the center of gravity.
ROS           Robot Operating System. A middleware for inter-task communication.
Rudder        Control surface, located on the trailing edge of the vertical stabilizer. Deflection causes movement around the vertical axis in the direction of the deflection.
Throttle      Controls the rotational speed of the propeller and thus the generated thrust. Usually used to generate a force out the nose of the plane. It can however be used as a brake for landing, as a slowly rotating propeller causes more drag than a fixated propeller.
Vertical      Axis from top to bottom of the plane, through the center of gravity.





Attitude kinematics

Figure A.1: Illustration of symbols and axes relevant to this appendix.

$$x = \begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix}, \quad u = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}, \quad z = \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}, \quad w = \begin{bmatrix} \tilde\omega_x \\ \tilde\omega_y \\ \tilde\omega_z \end{bmatrix}, \quad v = \begin{bmatrix} \tilde v_x \\ \tilde v_y \\ \tilde v_z \end{bmatrix}$$


Time update

Under the small angle approximation, the quaternion representation of an infinitesimal rotation is

$$q(x_k, u_k) = \begin{bmatrix} 1 \\ (\omega_x + \tilde\omega_x)\,\Delta t/2 \\ (\omega_y + \tilde\omega_y)\,\Delta t/2 \\ (\omega_z + \tilde\omega_z)\,\Delta t/2 \end{bmatrix} \quad (A.1)$$

As a quaternion is rotated by another quaternion by multiplication, the state can be extrapolated by quaternion multiplication in the matrix form:


$$x_{k+1} = x_k \otimes q(x_k, u_k) \quad (A.2)$$
$$= \begin{bmatrix} w_k & -x_k & -y_k & -z_k \\ x_k & w_k & -z_k & y_k \\ y_k & z_k & w_k & -x_k \\ z_k & -y_k & x_k & w_k \end{bmatrix} \begin{bmatrix} 1 \\ (\omega_x + \tilde\omega_x)\,\Delta t/2 \\ (\omega_y + \tilde\omega_y)\,\Delta t/2 \\ (\omega_z + \tilde\omega_z)\,\Delta t/2 \end{bmatrix} \quad (A.3)$$

However, the derivative of the state transition is wanted for the Kalman estimator, and is derived in (A.4) through (A.8). The derivative is approximated, as the linear model $\dot{x}_k \approx (x_{k+1} - x_k)/\Delta t$ is used.

$$\dot{x}_k \approx \frac{x_k \otimes q(x_k, u_k) - x_k}{\Delta t} \quad (A.4)$$
$$= \frac{1}{\Delta t}\left( \begin{bmatrix} w_k & -x_k & -y_k & -z_k \\ x_k & w_k & -z_k & y_k \\ y_k & z_k & w_k & -x_k \\ z_k & -y_k & x_k & w_k \end{bmatrix} \begin{bmatrix} 1 \\ (\omega_x + \tilde\omega_x)\,\Delta t/2 \\ (\omega_y + \tilde\omega_y)\,\Delta t/2 \\ (\omega_z + \tilde\omega_z)\,\Delta t/2 \end{bmatrix} - x_k \right) \quad (A.5)-(A.7)$$
$$= \frac{1}{2} \begin{bmatrix} -x_k & -y_k & -z_k \\ w_k & -z_k & y_k \\ z_k & w_k & -x_k \\ -y_k & x_k & w_k \end{bmatrix} \begin{bmatrix} \omega_x + \tilde\omega_x \\ \omega_y + \tilde\omega_y \\ \omega_z + \tilde\omega_z \end{bmatrix} \quad (A.8)$$




Thus, the state transition function f(x, u, w) is given by (A.9).

$$f(x_k, u_k, w_k) = \frac{1}{2}\begin{bmatrix} -x_k(\omega_x+\tilde\omega_x) - y_k(\omega_y+\tilde\omega_y) - z_k(\omega_z+\tilde\omega_z) \\ w_k(\omega_x+\tilde\omega_x) - z_k(\omega_y+\tilde\omega_y) + y_k(\omega_z+\tilde\omega_z) \\ z_k(\omega_x+\tilde\omega_x) + w_k(\omega_y+\tilde\omega_y) - x_k(\omega_z+\tilde\omega_z) \\ -y_k(\omega_x+\tilde\omega_x) + x_k(\omega_y+\tilde\omega_y) + w_k(\omega_z+\tilde\omega_z) \end{bmatrix} \quad (A.9)$$

The small angle approximation, introduced in (A.1), deforms the unit quaternion, such that over time it will no longer hold that $\|x\| = 1$. To avoid this, the quaternion must be normalized frequently by the following equation:

$$x_n = \frac{x}{\|x\|} = \frac{\begin{bmatrix} w & x & y & z \end{bmatrix}^T}{\sqrt{w^2 + x^2 + y^2 + z^2}}$$
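The time update above amounts to one Euler-integration step of the quaternion derivative, followed by renormalization. A minimal sketch, with the noise terms set to zero:

```python
import math

def propagate_quaternion(q, omega, dt):
    """One integration step of the state transition f in (A.9), noise terms
    omitted. q = (w, x, y, z), omega = (wx, wy, wz) in rad/s."""
    w, x, y, z = q
    ox, oy, oz = omega
    dw = 0.5 * (-x * ox - y * oy - z * oz)
    dx = 0.5 * ( w * ox - z * oy + y * oz)
    dy = 0.5 * ( z * ox + w * oy - x * oz)
    dz = 0.5 * (-y * ox + x * oy + w * oz)
    return (w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt)

def normalize(q):
    """Renormalize to counter the drift caused by the small angle
    approximation, as described above."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Rotating at 1 rad/s around the body x-axis for one 10 ms step:
q = normalize(propagate_quaternion((1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.01))
```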


The state transition matrix, A, is given by the partial derivative of f(x, u, 0) with respect to x:

$$A_{[i,j]} = \frac{\partial f_{[i]}(x, u, 0)}{\partial x_{[j]}} = \frac{1}{2}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix} \quad (A.11)-(A.12)$$

The state transition noise model, W, is given by the partial derivative of f(x, u, w) with respect to the noise vector, w:

$$W_{[i,j]} = \frac{\partial f_{[i]}(x, u, w)}{\partial w_{[j]}} = \frac{1}{2}\begin{bmatrix} -x & -y & -z \\ w & -z & y \\ z & w & -x \\ -y & x & w \end{bmatrix} \quad (A.13)-(A.14)$$





The measurement, z, is the three dimensional acceleration of the plane in the body frame. This vector is the sum of the gravity field towards the center of Earth, the centripetal forces produced by manoeuvring the plane and the sensor noise, measured in m/s². The measurement model is given by:

$$h_k(x_k, v_k) = z_k \quad (A.15)$$
$$\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} = R(x_k)\begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix} + \begin{bmatrix} c_\alpha c_\beta \\ s_\beta \\ s_\alpha c_\beta \end{bmatrix}\dot{V}_a + v_k \quad (A.16)$$

where $\alpha$ is the angle of attack, $\beta$ the side slip angle and R(x) is the orthogonal rotation matrix by the quaternion $x = \begin{bmatrix} w & x & y & z \end{bmatrix}^T$, defined as

$$R(x) = \begin{bmatrix} w^2+x^2-y^2-z^2 & 2(xy+wz) & -2(wy-xz) \\ -2(wz-xy) & w^2-x^2+y^2-z^2 & 2(yz+wx) \\ 2(xz+wy) & -2(wx-yz) & w^2-x^2-y^2+z^2 \end{bmatrix} \quad (A.17)$$

Thus, expanding (A.16) gives the following expression:

$$h(x, v) = \begin{bmatrix} -2(wy - xz)\,g + c_\alpha c_\beta\,\dot{V}_a + \tilde v_x \\ 2(wx + yz)\,g + s_\beta\,\dot{V}_a + \tilde v_y \\ (w^2 - x^2 - y^2 + z^2)\,g + s_\alpha c_\beta\,\dot{V}_a + \tilde v_z \end{bmatrix} \quad (A.18)$$

Note that level flight, with $\alpha$ and $\beta$ both equal to zero, significantly reduces this expression. The observation matrix, H, is given by the partial derivative of h with respect to the state vector, x:

$$H_{[i,j]} = \frac{\partial h_{[i]}(x, 0)}{\partial x_{[j]}} = 2g\begin{bmatrix} -y & z & -w & x \\ x & w & z & y \\ w & -x & -y & z \end{bmatrix} \quad (A.19)-(A.20)$$

The observation noise model, V, is given by the partial derivative of h with respect to the measurement noise vector, v:

$$V_{[i,j]} = \frac{\partial h_{[i]}(x, v)}{\partial v_{[j]}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (A.21)-(A.22)$$
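With f, A, W, h, H and V in place, each estimator stage runs the standard EKF predict/update cycle. The sketch below uses a single scalar state purely to keep it dependency free; in the estimator itself all quantities are matrices of the dimensions derived above.

```python
def ekf_step(x, P, f, A, W, Q, z, h, H, V, R, dt):
    """One predict/update cycle of an Extended Kalman Filter, written for a
    scalar state and measurement. Symbol names follow the appendix."""
    # Time update: propagate the state with f and the covariance with A and W
    x = x + f(x) * dt
    P = P + (A * P + P * A + W * Q * W) * dt
    # Measurement update: fold in the observation z via the Kalman gain
    S = H * P * H + V * R * V          # innovation covariance
    K = P * H / S                      # Kalman gain
    x = x + K * (z - h(x))
    P = (1.0 - K * H) * P
    return x, P

# A stationary state (f = 0) observed directly (h(x) = x): the estimate
# moves towards the measurement and the covariance shrinks.
x, P = ekf_step(x=0.0, P=1.0, f=lambda x: 0.0, A=0.0, W=1.0, Q=0.01,
                z=1.0, h=lambda x: x, H=1.0, V=1.0, R=1.0, dt=0.1)
```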

A.3 Conversion between Quaternion and Euler angles

The quaternion representation is only applied inside the attitude estimator. In the remainder of the system, the Euler angle representation is deemed more intuitive. The Euler angles can be computed from the unit quaternion by the following equation:

$$\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \tan^{-1}\dfrac{2(wx + yz)}{w^2 - x^2 - y^2 + z^2} \\ \sin^{-1}\left(2(wy - xz)\right) \\ \tan^{-1}\dfrac{2(wz + xy)}{w^2 + x^2 - y^2 - z^2} \end{bmatrix} \quad (A.23)$$

Likewise, converting Euler angles to a quaternion is needed for initialisation and can be done with the following equation:

$$\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} c(\phi/2)\,c(\theta/2)\,c(\psi/2) + s(\phi/2)\,s(\theta/2)\,s(\psi/2) \\ s(\phi/2)\,c(\theta/2)\,c(\psi/2) - c(\phi/2)\,s(\theta/2)\,s(\psi/2) \\ c(\phi/2)\,s(\theta/2)\,c(\psi/2) + s(\phi/2)\,c(\theta/2)\,s(\psi/2) \\ c(\phi/2)\,c(\theta/2)\,s(\psi/2) - s(\phi/2)\,s(\theta/2)\,c(\psi/2) \end{bmatrix} \quad (A.24)$$
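The conversions in (A.23) and (A.24) can be sketched directly; the (w, x, y, z) component order of this appendix is assumed:

```python
import math

def quat_to_euler(w, x, y, z):
    """Unit quaternion to (roll, pitch, yaw) in radians, as in (A.23)."""
    roll = math.atan2(2 * (w * x + y * z), w * w - x * x - y * y + z * z)
    pitch = math.asin(2 * (w * y - x * z))
    yaw = math.atan2(2 * (w * z + x * y), w * w + x * x - y * y - z * z)
    return roll, pitch, yaw

def euler_to_quat(roll, pitch, yaw):
    """(roll, pitch, yaw) in radians to a unit quaternion, as in (A.24)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)
```

The two functions are inverses of each other away from the θ = ±π/2 singularity, which makes a round-trip a convenient self-test during initialisation.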



Heading kinematics

Figure B.1: Illustration of symbols and axes relevant to this appendix.

$$x = \psi, \quad u = \begin{bmatrix} \omega_y \\ \omega_z \end{bmatrix}, \quad z = \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix}, \quad w = \begin{bmatrix} \tilde\omega_y \\ \tilde\omega_z \end{bmatrix}, \quad v = \begin{bmatrix} \tilde v_x \\ \tilde v_y \\ \tilde v_z \end{bmatrix}$$

The heading estimator is the second stage of the cascaded state estimator. The measurement vector, z, is a three dimensional vector of the measured magnetic field in the airframe. Extrapolation of the state estimate between measurements is based on the angular rates measured by the gyroscope and the knowledge of the pitch and roll angles from the attitude estimator (see Appendix A, Attitude kinematics, on page 80). The magnetic field vector around the surface of Earth varies with position and time. Table B.1 summarizes the field properties at the test site.

                          North       East     Down
Component [nT]            17,266.95   604.12   46,922.54
Rate-of-change [nT/yr]    7.85        39.66    30.55

Table B.1: Earth's magnetic field at 55°23'N, 10°23'E as of April 2012. Data courtesy of NOAA, World Magnetic Model 2010.
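Before the EKF formulation below, it is worth noting how a heading can be recovered directly from one magnetometer reading once roll and pitch are known: the body-frame measurement is de-rotated into the horizontal plane, matching the $R_x(\phi)R_y(\theta)R_z(\psi)$ factorization used in this appendix. This sketch is only an illustration, not the estimator itself, and it ignores declination.

```python
import math

def tilt_compensated_heading(m, roll, pitch):
    """m = (mx, my, mz): magnetic field measured in the body frame.
    Returns the heading, in radians, relative to magnetic north."""
    mx, my, mz = m
    # Undo roll, then pitch, projecting the field onto the horizontal plane
    hx = math.cos(pitch) * mx + math.sin(pitch) * (
        math.sin(roll) * my + math.cos(roll) * mz)
    hy = math.cos(roll) * my - math.sin(roll) * mz
    return math.atan2(-hy, hx)

# Level flight pointing magnetic north: the field lies in the x-z plane
heading = tilt_compensated_heading((17.27, 0.0, 46.92), 0.0, 0.0)
```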



Time update

The estimator operates with Euler angles, as opposed to the attitude estimator, which uses the quaternion representation. The Euler angle representation is chosen as it is more intuitive, and the complexity of the quaternion representation is not needed for the heading estimator: no singularity issues are associated with this rotation, given that θ ≠ ±π/2. This special case is handled separately. The kinematics is based on [67]. The gist is to introduce the derivatives of the Euler angles in their respective intermediate frames:

$$\begin{bmatrix} \omega_x + \tilde\omega_x \\ \omega_y + \tilde\omega_y \\ \omega_z + \tilde\omega_z \end{bmatrix} = \begin{bmatrix} \dot\phi \\ 0 \\ 0 \end{bmatrix} + R_x(\phi)\begin{bmatrix} 0 \\ \dot\theta \\ 0 \end{bmatrix} + R_x(\phi)\,R_y(\theta)\begin{bmatrix} 0 \\ 0 \\ \dot\psi \end{bmatrix} \quad (B.1)$$
$$= \begin{bmatrix} \dot\phi \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & s_\phi \\ 0 & -s_\phi & c_\phi \end{bmatrix}\begin{bmatrix} 0 \\ \dot\theta \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & s_\phi \\ 0 & -s_\phi & c_\phi \end{bmatrix}\begin{bmatrix} c_\theta & 0 & -s_\theta \\ 0 & 1 & 0 \\ s_\theta & 0 & c_\theta \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ \dot\psi \end{bmatrix} \quad (B.2)$$
$$= \begin{bmatrix} \dot\phi - s_\theta\,\dot\psi \\ c_\phi\,\dot\theta + s_\phi c_\theta\,\dot\psi \\ -s_\phi\,\dot\theta + c_\phi c_\theta\,\dot\psi \end{bmatrix} = \begin{bmatrix} 1 & 0 & -s_\theta \\ 0 & c_\phi & s_\phi c_\theta \\ 0 & -s_\phi & c_\phi c_\theta \end{bmatrix}\begin{bmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{bmatrix} \quad (B.3)-(B.4)$$

By inverse transformation, the airframe rates of change $\omega_x$, $\omega_y$ and $\omega_z$ are mapped into the world frame via the Euler angles:

$$\begin{bmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{bmatrix} = \begin{bmatrix} 1 & t_\theta s_\phi & t_\theta c_\phi \\ 0 & c_\phi & -s_\phi \\ 0 & \dfrac{s_\phi}{c_\theta} & \dfrac{c_\phi}{c_\theta} \end{bmatrix}\begin{bmatrix} \omega_x + \tilde\omega_x \\ \omega_y + \tilde\omega_y \\ \omega_z + \tilde\omega_z \end{bmatrix} \quad (B.5)$$

The projection function $f(x_{k-1}, u_k, w_k) = \dot\psi$ propagates the yaw only, and is thus given by

$$f(x, u, w) = \begin{bmatrix} \dfrac{s_\phi}{c_\theta} & \dfrac{c_\phi}{c_\theta} \end{bmatrix}(u + w) \quad (B.6)-(B.7)$$

Note that $\omega_x$ does not contribute to the yaw propagation, as it maps directly to $\dot\phi$. The state transition matrix, A, is given by the partial derivative of f with respect to x:

$$A_{[i,j]} = \frac{\partial f_{[i]}(x, u, 0)}{\partial x_{[j]}} = 0 \quad (B.8)-(B.9)$$

The state transition noise model, W, is given by the partial derivative of the state transition function, f, with respect to the state transition noise vector, w:

$$W_{[i,j]} = \frac{\partial f_{[i]}(x, u, w)}{\partial w_{[j]}} = \begin{bmatrix} \dfrac{s_\phi}{c_\theta} & \dfrac{c_\phi}{c_\theta} \end{bmatrix} \quad (B.10)-(B.11)$$
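The yaw propagation in (B.6)-(B.7) can be sketched as a single integration step; the θ = ±π/2 singularity noted above is not handled here, and noise terms are omitted.

```python
import math

def propagate_heading(psi, omega_y, omega_z, roll, pitch, dt):
    """One Euler step of the yaw state per (B.6)-(B.7), noise terms omitted.
    Angles in radians, rates in rad/s."""
    psi_dot = (math.sin(roll) * omega_y
               + math.cos(roll) * omega_z) / math.cos(pitch)
    return psi + psi_dot * dt
```

In level flight the heading follows the z-gyro directly; in a steep bank the y-gyro takes over, which is exactly why the roll and pitch estimates from the attitude stage are required inputs.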



The measurement, z, is the three dimensional magnetic field strength, measured in the plane body frame. This vector reflects Earth's magnetic field, at the location of the plane, rotated into the body frame. The declination, inclination and field strength of Earth's magnetic field vary with the location on Earth and change over time. In Odense, Denmark, where the tests were conducted, the components of Earth's magnetic field strength are given in Table B.1 on page 84:

$$m^0 = \begin{bmatrix} m_n \\ m_e \\ m_d \end{bmatrix} = \begin{bmatrix} 17.267 \\ 0.604 \\ 46.923 \end{bmatrix} [\mu T] \quad (B.12)$$

$$h(x, v) = \begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} \quad (B.13)$$
$$= R_x(\phi)\,R_y(\theta)\,R_z(\psi)\,m^0 + v \quad (B.14)$$
$$= \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & s_\phi \\ 0 & -s_\phi & c_\phi \end{bmatrix}\begin{bmatrix} c_\theta & 0 & -s_\theta \\ 0 & 1 & 0 \\ s_\theta & 0 & c_\theta \end{bmatrix}\begin{bmatrix} c_\psi & s_\psi & 0 \\ -s_\psi & c_\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} m^0 + v \quad (B.15)$$
$$= \begin{bmatrix} c_\theta c_\psi\,m_n + c_\theta s_\psi\,m_e - s_\theta\,m_d + \tilde v_x \\ (c_\psi s_\phi s_\theta - c_\phi s_\psi)\,m_n + (c_\phi c_\psi + s_\phi s_\theta s_\psi)\,m_e + s_\phi c_\theta\,m_d + \tilde v_y \\ (s_\phi s_\psi + c_\phi s_\theta c_\psi)\,m_n + (c_\phi s_\theta s_\psi - s_\phi c_\psi)\,m_e + c_\phi c_\theta\,m_d + \tilde v_z \end{bmatrix} \quad (B.16)$$

The observation matrix, H, is given by the partial derivative of h with respect to the state vector, x:

$$H_{[i,j]} = \frac{\partial h_{[i]}(x, 0)}{\partial x_{[j]}} \quad (B.17)$$
$$= \begin{bmatrix} -c_\theta s_\psi\,m_n + c_\theta c_\psi\,m_e \\ -(c_\phi c_\psi + s_\phi s_\theta s_\psi)\,m_n + (c_\psi s_\phi s_\theta - c_\phi s_\psi)\,m_e \\ (s_\phi c_\psi - c_\phi s_\theta s_\psi)\,m_n + (s_\phi s_\psi + c_\phi s_\theta c_\psi)\,m_e \end{bmatrix} \quad (B.18)$$
$$= \begin{bmatrix} -c_\theta s_\psi & c_\theta c_\psi & 0 \\ -(c_\phi c_\psi + s_\phi s_\theta s_\psi) & c_\psi s_\phi s_\theta - c_\phi s_\psi & 0 \\ s_\phi c_\psi - c_\phi s_\theta s_\psi & s_\phi s_\psi + c_\phi s_\theta c_\psi & 0 \end{bmatrix}\begin{bmatrix} m_n \\ m_e \\ m_d \end{bmatrix} \quad (B.19)$$

Note the last column of the matrix in (B.19) is zero, as $m_d$ does not contribute to the heading estimate. The observation noise model, V, is given by the partial derivative of h with respect to the measurement noise vector, v. As the measurement noise is in the same frame as the sensor, the noise is mapped directly to the measurements, as seen in (B.21):

$$V_{[i,j]} = \frac{\partial h_{[i]}(x, v)}{\partial v_{[j]}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (B.20)-(B.21)$$



Position and Wind kinematics

This appendix serves to explain the kinematics governing the position and wind estimator. The estimated state vector, x, holds the UTM position, $P_n$ and $P_e$, of the airframe and the wind components in the north, $W_n$, and east, $W_e$, directions. Also the angle-of-attack, $\alpha$, and the altitude, Alt, of the airframe are estimated. The measurement vector, z, is composed of the measured UTM coordinates from the GPS receiver and the altitude measured by the GPS or the barometric pressure sensor. The system input vector, u, is composed of the heading, $\psi$, and pitch, $\theta$, of the airframe along with the indicated airspeed, $V_i$:

$$x = \begin{bmatrix} P_n \\ P_e \\ Alt \\ W_n \\ W_e \\ \alpha \end{bmatrix}, \quad u = \begin{bmatrix} \psi \\ \theta \\ V_i \end{bmatrix}, \quad z = \begin{bmatrix} UTM_n \\ UTM_e \\ UTM_d \end{bmatrix}, \quad w = \begin{bmatrix} \tilde w_{P_n} \\ \tilde w_{P_e} \\ \tilde w_{Alt} \\ \tilde w_{W_n} \\ \tilde w_{W_e} \\ \tilde w_{\alpha} \end{bmatrix}, \quad v = \begin{bmatrix} \tilde v_{UTM_n} \\ \tilde v_{UTM_e} \\ \tilde v_{UTM_d} \end{bmatrix}$$


Time update

The state transition function, $f(x^+_{k-1}, u_k, w_k) = \dot{x}^-_k$, is the non-linear function which outputs the derivative of the state estimate at time k. The input vectors are the previous state estimate, $x_{k-1}$, the system input, $u_k$, and the state transition noise, $w_k$. The state vector is composed of the UTM position northing, $P_n$, and easting, $P_e$, and the wind in the north, $W_n$, and east, $W_e$, directions. It should be noted that none of the measurements in the vector z measure $W_n$, $W_e$ or $\alpha$ directly. These state variables are unobserved. The system input is the indicated airspeed, $V_i$, the airframe heading, $\psi$, and pitch, $\theta$. The state transition noise vector is composed of the variances of the mean zero random noise changes in the state vector. f(x, u, w) is given in (C.1):

$$f(x_{k-1}, u_k, w_k) = \dot{x}_k = \begin{bmatrix} V_i\,c_\psi\,c_{(\theta-\alpha)} + W_n + \tilde w_{P_n} \\ V_i\,s_\psi\,c_{(\theta-\alpha)} + W_e + \tilde w_{P_e} \\ V_i\,s_{(\theta-\alpha)} + \tilde w_{Alt} \\ \tilde w_{W_n} \\ \tilde w_{W_e} \\ \tilde w_{\alpha} \end{bmatrix} \quad (C.1)$$

The state transition matrix, $A_k$, is the partial derivative of f(x, u, w) with respect to $x_k$:

$$A_{[i,j]} = \frac{\partial f_{[i]}(x_k, u_k, 0)}{\partial x_{[j]}} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & V_i\,s_{(\theta-\alpha)}\,c_\psi \\ 0 & 0 & 0 & 0 & 1 & V_i\,s_{(\theta-\alpha)}\,s_\psi \\ 0 & 0 & 0 & 0 & 0 & -V_i\,c_{(\theta-\alpha)} \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \quad (C.2)$$

The state transition noise model, $W_k$, is given by the partial derivative of f(x, u, w) with respect to $w_k$:

$$W_{[i,j]} = \frac{\partial f_{[i]}(x_k, u_k, w_k)}{\partial w_{[j]}} = I_6 \quad (C.3)$$




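The time update in (C.1) is a dead-reckoning step: airspeed is projected through heading and climb angle, and the wind estimate is added in the world frame. A noise-free sketch:

```python
import math

def propagate_position(pn, pe, alt, wn, we, vi, psi, theta, alpha, dt):
    """One Euler step of the position states per (C.1), noise terms omitted.
    Angles in radians, speeds in m/s, dt in seconds."""
    gamma = theta - alpha                  # climb angle
    pn += (vi * math.cos(psi) * math.cos(gamma) + wn) * dt
    pe += (vi * math.sin(psi) * math.cos(gamma) + we) * dt
    alt += vi * math.sin(gamma) * dt
    return pn, pe, alt

# Flying due east at 20 m/s with a 2 m/s northward wind component:
pn, pe, alt = propagate_position(0.0, 0.0, 100.0, 2.0, 0.0, 20.0,
                                 psi=math.pi / 2, theta=0.0, alpha=0.0, dt=1.0)
```

Because $W_n$, $W_e$ and $\alpha$ have zero derivative in (C.1), they only change through the measurement updates, which is what makes them estimable despite not being measured directly.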
The measurement model, $h(x_k, v_k)$, outputs the expected measurement based on the current state vector and the measurement noise vector, v, and is described in (C.4) and (C.5):

$$h(x_k, v_k) = z_k \quad (C.4)$$
$$= \begin{bmatrix} P_n + \tilde v_{UTM_n} \\ P_e + \tilde v_{UTM_e} \\ Alt + \tilde v_{UTM_d} \end{bmatrix} \quad (C.5)$$

Note the sign inconsistency between Alt and $UTM_d$, due to the differing conventions (altitude is positive upwards, while down is positive downwards in the North-East-Down right handed reference frame). The observation matrix, $H_k$, is given by the partial derivative of $h(x_k, v_k)$ with respect to the state vector, $x_k$:

$$H_{[i,j]} = \frac{\partial h_{[i]}(x, 0)}{\partial x_{[j]}} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} \quad (C.6)-(C.7)$$

The observation noise model, V, is given by the partial derivative of $h(x_k, v_k)$ with respect to the measurement noise vector, v:

$$V_{[i,j]} = \frac{\partial h_{[i]}(x, v)}{\partial v_{[j]}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (C.8)-(C.9)$$



Wind statistics

Based on the graphical data obtained from DMI [42], Table D.1 has been deduced. The table illustrates the distribution of wind from 12 different directions.

Wind direction [°]   Amount [days/year]   0.3-5.4 m/s [days]   5.5-10.7 m/s [days]   ≥10.8 m/s [days]
345-15               13                   9                    4                     0
15-45                15                   10                   5                     0
45-75                21                   14                   6                     1
75-105               29                   19                   10                    0
105-135              36                   22                   13                    1
135-165              19                   14                   5                     0
165-195              33                   23                   10                    0
195-225              46                   28                   16                    1
225-255              57                   27                   25                    5
255-285              52                   24                   23                    5
285-315              29                   14                   13                    2
315-345              15                   10                   4                     0
Total                365                  213                  135                   17

Table D.1: Wind statistics deduced from [42]



Wikis and how-tos

E.1 Installing Ubuntu and ROS on Overo Gumstix
1. Make a minimal headless Ubuntu file system with rootstock (use rootstock 0.1.99.4 on Ubuntu 10.04, as the version provided in the repositories is buggy):

   $ sudo ./rootstock --fqdn overo --login ubuntu --password tmppwd --imagesize 2G --serial ttyS2 -d lucid --seed build-essential,openssh-server

   This takes a few hours.
2. Download wireless-tools.deb and libiw29.deb for armel from Debian.
3. Create MLO, u-boot.bin and uImage using OpenEmbedded. Remember to match the kernel number to that produced by rootstock: add PREFERRED_VERSION_linux-omap3 = "2.6.34" after PREFERRED_PROVIDER_virtual/kernel = "linux-omap3" in /overo-oe/
4. Format an SD card as per usual for Overo, and extract MLO, u-boot.bin, uImage, the file system, the modules and the downloaded .debs.
5. Boot the Overo, log in (user: ubuntu, pwd: tmppwd), and consider activating the root user: sudo passwd root
6. Install the debs: dpkg -i libiw29.deb wireless-tools.deb
7. Now you should be able to connect to any unprotected network.
8. Add deb http://ports.ubuntu. something lucid main restricted multiverse universe to /etc/apt/sources.list (no quotes).
9. Add deb lucid main to /etc/apt/sources.list.d/ros-latest.list (no quotes).
10. Run the commands:
    apt-get install wget
    apt-get install ros-electric-ros-base
    echo "source /opt/ros/electric/setup.bash" >> ~/.bashrc
    source ~/.bashrc
11. You now have Ubuntu and ROS on your Overo Gumstix.



PCB design
F.1 Power management


F.2 Servo interface, i2c-bus, pitot signal conditioning



F.3 Gumstix, Ethernet



F.4 Level conversion



F.5 USB interfaces



F.6 PCB Layout



Thesis proposal


Development of a modular autonomous airborne tool-carrier for automated data acquisition
Peter Emil Madsen, University of Southern Denmark
Hjalte Bording Lerche Nygaard, University of Southern Denmark

October 3, 2011

This project concerns the development of an avionic tool carrier, capable of autonomous navigation from a given set of way points. The platform will be capable of interfacing with a range of tools, through a common hardware and software interface. The ability to exchange tools enables the platform to be used in a multitude of contexts. This project will focus on designing and applying the platform in an agricultural context.

A major motivation for this project is decreasing the use of herbicides through optimized spraying. Today's common practice is broad spraying an entire field to cope with weeds. However, weeds tend to grow in patches, rather than uniformly across the field. Studies [11, 12, 18, 19] show that it is possible to reduce the use of sprays by selectively spraying these patches. In order to localize the weed patches, various vision based systems have been proposed in the literature [18]. The proposed platform could be utilized to gather the imagery data used to localize weed patches. By speciation and growth stage determination of weeds from imagery data [18, 14, 15], the selection of proper herbicides and dose can be optimized. A Danish system called Crop Protection Online (CPO) [17, 10] combines year-to-year knowledge of weed populations, derived from manual field inspections, with a knowledge base of various crops, weeds, diseases, pests and Plant Protection Products (PPP) to suggest efficient treatment, using a minimum of PPP. Studies [13, 10, 16, 17] have proven the system to be robust and to have considerable potential in reducing herbicide usage (approx. 40% in grain and 20% in other crops). However, the system needs to be fed with data gathered from labor intense field inspections, which is a setback for the system's deployment. By automating the data collection process, the labor involved with using CPO could be reduced, and thus it would potentially be used more widely, resulting in a more efficient use of herbicides.

Apart from the potential improvement in an agricultural context, an autonomous avionic tool carrier system could also be used in a multitude of other situations. Inhospitable environments, like the nuclear disaster in Fukushima, Japan, could be inspected, and radiation and imagery data could be collected. Wildfires, as are currently seen in Texas, USA, could be inspected live to improve firefighting tactics, and other natural disasters could be monitored fast and accurately. More tedious tasks, such as perimeter guarding, livestock counting etc., could also be automated, freeing labour resources to perform other tasks.

Related work

Several commercial systems are currently available from various vendors. Commonly, they consist of a lightweight fixed-wing airframe, with onboard


GPS receivers and live radio links to a base station, performing route planning and post-flight data processing and representation. The company SenseFly [8] focuses on aerial photographing, computation and meteorological data acquisition, based on three different UAVs and associated base-station software. The SmartPlanes [9] system is very similar to SenseFly; the sensors and post-flight software are however more focused on orthophotographical mapping. Both SenseFly and SmartPlanes provide the hard- and software, to be used by unskilled labourers, without any expert knowledge needed to gather data. GeoSense [3] uses a less easy-to-use system, based on the arduPilot [2] project. GeoSense does not provide ready-to-fly hard-/software, but merely a commercial mapping service. ASETA [1] (Adaptive Surveying and Early treatment of crops with a Team of Autonomous vehicles) is a research project at Aalborg and Copenhagen Universities, aiming to reduce the usage of pesticides through early automated treatment, utilizing coordinated airborne helicopters and ground vehicles. The helicopters provide relevant data for analysis on a ground station, which prompts ground vehicles to further inspection and appropriate treatment. The ASETA project uses two different helicopters, for different types of sensor equipment.

Several open source autopilot projects exist; the most noteworthy ones are OpenPilot [5], Paparazzi [6], arduPilot [2] and Gluonpilot [4]. Both OpenPilot and Paparazzi utilize STM32 ARM processors for flight control, whereas the arduPilot is based on a much simpler (AVR-based) Arduino platform. The Gluonpilot is based on a 16-bit 80 MHz DSP microprocessor; a ground-station GUI is currently being developed. Common to the open source projects is a great flexibility in airframe choices, ranging from fixed-wing to quad-rotors and even in-between [7]. However, in most of the projects the setups are far from straightforward. The OpenPilot project has, however, developed a plug-in based system, which allows for easy setup of various airframes.

The mentioned state-of-the-art projects cover very specific problem areas. Whereas the open source pilots provide a way of getting small aircraft airborne, no specific use of the autonomy is covered. The companies that fly autonomously offer commercial products that focus on providing a specific service - and as such have to be profitable and are thus expensive. The ASETA project operates in somewhat the same field as this project; however, this project focuses on a fixed wing solution - which generally gives a longer flight time than hovering. By providing an autonomous tool carrying agent, capable of carrying a range of sensors, the same platform could be used in a multitude of areas. By making the project modularized and open source, a community could provide new sensor modules and improve on the existing system. Thus, instead of solving a specific problem, the project provides a base for further development.

Learning Goals

At the end of this project, the student will be able to:

- Identify the problem domain through literature study.
- Evaluate existing commercial and research solutions related to the identified problem.
- Use this domain knowledge to propose a novel solution.
- Use an engineering approach to design and build a demonstrator based on the proposed novel solution.
- Apply acquired knowledge within mobile robotics to the proposed solution.
- Apply acquired knowledge within embedded autonomous robot software design to the proposed solution.


At the end of this project, a system containing the following will be delivered:

- An autonomous flying tool-carrier capable of:
  - Automatically following a predefined path, based on way-points.


  - Aided take-off and landing.
  - Carrying different tools in a mechanically, electrically and software-wise standardized interface.
  - Interfacing with and interchanging data with the mounted tool, e.g. data acquisition via a camera tool.
- A prototype proof-of-concept tool, demonstrating the capabilities of the system.
- A graphical user interface used to:
  - Aid path planning.
  - Represent data.
  - Communicate live with the airborne unit.

5.1 Delimitation

This project will not provide a tool-range for mounting on the tool-carrier. Neither will it concern the utilization of the mounted tools. The project provides a tool-carrier, capable of carrying tools designed for the carrier. How the tools are utilized is up to the developer/user. A proof of concept tool will be developed.

References

[1] Aseta.
[2] Diy drones.
[3] Geosense.
[4] Gluonpilot.
[5] Openpilot.
[6] Paparazzi.
[7] The quadshot.
[8] SenseFly.
[9] Smartplanes.
[10] L. Hagelskjær and L. N. Jørgensen. Planteværn online - sygdomme og skadedyr. DJF rapport, Markbrug(98):39-50, January 2004.
[11] S.R. Herwitz, L.F. Johnson, S.E. Dunagan, R.G. Higgins, D.V. Sullivan, J. Zheng, B.M. Lobitz, J.G. Leung, B.A. Gallmeyer, M. Aoyagi, R.E. Slye, and J.A. Brass. Imaging from an unmanned aerial vehicle: agricultural surveillance and decision support. Computers and Electronics in Agriculture, 44(1):49-61, 2004.
[12] B. T. Jones. An evaluation of a low-cost UAV approach to noxious weed mapping. Master's thesis, Brigham Young University, September 2007.
[13] L. N. Jørgensen, E. Noe, A. M. Langvad, P. Rydahl, J. E. Jensen, J. E. Ørum, H. Pinnschmidt, and O. Q. Bøjer. Plantevärn online - et värktöj til at reducere pesticidforbruget i landbruget. Bekämpelsesmiddelforskning fra Miljöstyrelsen, page 115, 2007.
[14] P.J. Komi, M.R. Jackson, and R.M. Parkin. Plant classification combining colour and spectral cameras for weed control purposes. In Industrial Electronics, 2007. ISIE 2007. IEEE International Symposium on, pages 2039-2042, June 2007.
[15] Zhao Peng. Image-blur-based robust weed recognition. In Artificial Intelligence and Computational Intelligence (AICI), 2010 International Conference on, volume 1, pages 116-119, October 2010.
[16] Per Rydahl. PC-Plantevärn - optimerede herbicidblandinger i bederoer, volume I of DJF rapport Markbrug, pages 185-196. DJF, 2001.
[17] Per Rydahl. Plantevärn online - ukrudt. DJF rapport, Markbrug(98):27-38, January 2004.
[18] D. Slaughter, D. Giles, and D. Downey. Autonomous robotic weed control systems: A review. Computers and Electronics in Agriculture, 61(1):63-78, April 2008.
[19] A.M. Smith and R.E. Blackshaw. Crop/weed discrimination using remote sensing. In Geoscience and Remote Sensing Symposium, 2002. IGARSS '02. 2002 IEEE International, volume 4, pages 1962-1964, 2002.




Sensor Fusion for Miniature Aerial Vehicles


Appendix : Sensor Fusion for Miniature Aerial Vehicles

Sensor Fusion for Miniature Aerial Vehicles

Peter E. Madsen Hjalte B. L. Nygaard University of Southern Denmark January 9, 2012

Abstract: In the field of autonomous robotics, there is a desire to determine precise attitude and position, utilizing cheap, and thus often noisy, sensors. This paper surveys a range of existing methods for Micro Air Vehicle sensor fusion, aimed at this problem. Furthermore, a partial implementation of a previously proposed three-stage extended Kalman filter is conducted on a newly developed autopilot board. The board is based on MEMS technology, GPS and other sensors. Promising results have been obtained for Pitch and Roll estimation by fusing gyroscope and accelerometer measurements. It is concluded that by investing time into sensor fusion, it is indeed possible to gain high quality results, even using cheap noisy sensors.



Contents

1 Introduction
2 Theory & methods
  2.1 Filter types
  2.2 Bayes Filters
  2.3 Kalman Filter
  2.4 Extended Kalman Filter
3 Implementation
  3.1 Hardware
  3.2 Three-stage Extended Kalman Filter
    3.2.1 Stage 1 - Pitch & Roll
    3.2.2 Stage 2 - Heading
    3.2.3 Stage 3 - Position & Wind
4 Visualization
  4.1 Octave
  4.2 3D
5 Validation
6 Conclusion
7 References

A Notations
B Pitch-roll kinematics
C Measurements




This paper is the result of a 5 ECTS self study conducted by two master's students at the University of Southern Denmark, Odense. The paper aims to survey methods for fusing noisy sensor data, and to implement a method deemed suitable to the task of estimating the attitude and position of a Micro Air Vehicle (MAV). The task of optimizing robot attitude and position estimation is of increasing interest to a growing community of roboticists. Turnkey solutions, in the form of Inertial Measurement Unit (IMU) sensors, with or without global positioning, are available commercially [3, 4]. However, these sensor packages are relatively expensive. In this paper, it is shown that reliable results can be obtained using cheaper individual sensors, by the use of sensor fusion. Throughout this paper, the reader can refer to Figure 1 for a summary of the measured and estimated variables: the accelerations measured by the on board accelerometer, the magnetic field strengths measured by the on board magnetometer, and the angular velocities measured by the on board rate gyroscope. For a more complete list of mathematical notation, Appendix A contains a full list of notations and variables used throughout this paper. It should also be noted that the airframe is oriented nose-starboard-down, and that the right handed coordinate system is used.

Figure 1: Illustration of symbols and axes.

The paper is split in four main sections: Section 2 covers the theory of the surveyed sensor fusion methods. Section 3 explains the method that has been implemented, and describes the mathematics in detail, such that reimplementation can be conducted. Section 4 gives a brief overview of the tools that have been used to visualize experimental results. Finally, Appendices A through C contain material like derivations and graphs, considered too detailed to make it into the paper, but useful to the curious reader.

Theory & methods

A fundamental challenge of mobile robotics is the effect of inaccurate measurements on position and attitude. The robot's knowledge of its pose is vital for interacting with and maneuvering in its environment. Two simple methods for localization and pose determination are feasible, as the robot can either 1) use internal sensors, which keep track of its movements and thus its pose, or 2) use global landmarks to periodically determine its pose. However, problems arise when using either of the two methods, as



the former (referred to as dead-reckoning) will integrate even the slightest disturbance into significant, irreversible errors. The latter can work excellently in known, engineered environments, containing landmarks of adequate quantity and quality (e.g. RFID-tags, coloured markings, ultrasonic beacons etc.). If, however, we want the robot to know its position in a less adapted environment, the position updates tend to have low update rates and imprecise measurements. No matter how expensive and precise the sensing equipment built into the robots, these challenges might be reduced, but cannot be eliminated. However, by combining both approaches, the difficulties can be overcome. This chapter presents a number of methods for such sensor fusion.


Filter types

Four types of filtering have been found suitable for MAV sensor fusion:

1. The Kalman filter, which is a recursive estimator. It uses parametrized Gaussian distributions to model the uncertainties in the system and a state space model to project the estimate forward in time. The filter uses a least-squares curve-fit to optimize the state estimate.

2. The Extended Kalman filter uses a non-linear function to project the estimate forward, rather than the linear state space model used in the Kalman filter. The linear models, needed to project the covariance parameters, are found by linearizing the non-linear functions around the state estimate.

3. Particle filters use a cloud of particles in the state space to represent the probability distribution, rather than the parametric Gaussians used by the Kalman filter. This effectively means that the filter is multi-modal, as it is not limited by the uni-modal nature of the Gaussian representation.

4. Complementary filters are the only non-Bayes filter discussed in this text. They work in the frequency domain rather than the time domain and do not interpret the probability of states or outputs. A complementary filter uses two inputs describing the state, assuming one responds fast and is accurate, but biased, while the other is bias free, but might be slower in response and noisy. By using a pair of complementary filters, one low-pass and the other high-pass, the two signals can be combined with maximum yield of both signals. See Figure 2 for a basic block diagram.

Figure 2: Basic complementary filter.

Particle filters have proven very strong in mobile robot localization (known in the field under the name Monte Carlo Localization [17, 18]), where multiple hypotheses are needed. This is especially true for indoor simultaneous localization and mapping, as


no good source for absolute position estimates is available. These filters are however fairly complex and computationally heavy. As an absolute position estimate can be acquired from GPS in the outdoor habitat of MAVs, the complexity of particle filters is not needed. Both the Kalman filter [6, 8, 13, 14] and complementary filters [10, 12] have been used for state estimation of MAVs. The complementary filter is computationally lighter, compared to the Kalman filter, but seems to respond slower. When the gyroscope is at rest, after a rapid change, it does not contribute to the state estimate. However, the accelerometer data does still to some extent reflect the old position due to the phase delay introduced by the low pass filter. The Kalman filter on the other hand does not operate on old data and hence it introduces no phase delay. In addition, the Kalman filter is capable of rejecting statistical outliers in the data stream. The Extended Kalman filter (EKF) is chosen for the later implementation. The following sections will give an overview of its theoretical background and formulation.
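The complementary filter structure of Figure 2 can be sketched in a few lines. The blend factor `alpha`, the gyro bias and the loop length below are illustrative values of our own choosing, not parameters from the paper; the example shows how the accelerometer path bounds the drift that a biased gyro would otherwise accumulate.

```python
# Minimal complementary filter sketch: blend the integrated gyro angle
# (fast but drifting) with the accelerometer-derived angle (slow but unbiased).

def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """High-pass the gyro path, low-pass the accelerometer path."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
for _ in range(200):                  # stationary airframe, gyro biased by 0.01 rad/s
    angle = complementary(angle, gyro_rate=0.01, accel_angle=0.0, dt=0.01)
```

With pure integration the bias would grow without bound; here the estimate settles just below the fixed point alpha·bias·dt/(1−alpha) ≈ 0.0049 rad.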


Bayes Filters

The Kalman filter is a Bayes filter, which means that it works under the assumption that each observation is independent of previous observations. I.e. the true state, x_k, is assumed to be a hidden Markov process and the observations z_k are derived from this process. In other words, the true state can only be observed indirectly. Because of the Markov assumption, the probability of the current state, p(x_k), can be derived from just the previous one, p(x_{k-1}), and is independent of all prior states.

p(x_k | x_{k-1}, x_{k-2}, ..., x_0) = p(x_k | x_{k-1})    (2.1)

Similarly, the probability of the current observation, p(z_k), depends only on the current state, x_k, and is independent of all prior states.

p(z_k | x_k, x_{k-1}, ..., x_0) = p(z_k | x_k)    (2.2)

This allows the Bayes filters to work recursively, as there is no need to keep track of the history of state estimates and observations, along with their probability distributions. The filters are implemented in a two-step recursion: First, the previous state probability distribution, p(x̂_{k-1}), is projected into the next time step with the system input, u_k. While this is done, uncertainty is added to the estimate to reflect the system noise. This new estimate is the a priori estimate, p(x̂_k^-). The second step includes an observation z_k, which based on Bayes' Theorem (see eq. (2.3)) can provide a probability distribution of the state, if p(x_k) and p(z_k) are known. This sensor-based state estimate can then be combined with x̂_k^- to give a better estimate, x̂_k, of the state.

p(x_k | z_k) = p(z_k | x_k) p(x_k) / p(z_k)    (2.3)



We do not wish to go deeper into the mathematical background of Bayes filters, but refer curious readers to [11], [19], [7] and [5]. To get a conceptual idea of the Bayes filters, imagine a simple robot, capable of moving in a straight line, as illustrated in Figure 3. The robot is capable of measuring its position along the track, by measuring the distance to a wall at the end of the track. However, these measurements are noisy and occur at a slow rate. As the robot moves, it utilizes dead reckoning to keep track of its position - but due to integration errors and external disturbances, this estimate soon grows unreliable. By accepting the fact that both measurements include uncertainties, and by modeling these, the two data sets can be combined in a beneficial way. As the movements of the robot project the state estimate forward, the uncertainty of the estimate is increased. This estimate is called the a priori, as it is an estimate prior to incorporation of an observation. When the robot receives an external measurement of its location (an observation), the certainty of this measurement is assessed and combined with the a priori to form the next location estimate. As the two are combined, the uncertainty is reduced, bringing forward a better estimate of the location than either of the two sensors can provide by themselves.
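The track-robot thought experiment can be made concrete with a discrete (histogram) Bayes filter. The track length, step size and noise parameters below are our own toy numbers; the point is the two-step recursion — prediction spreads the belief, the measurement update sharpens it.

```python
import math

# Toy recursive Bayes filter for a robot on a 1-D track of 50 cells.

def predict(belief, step, spread=1):
    """Shift the belief by the commanded step and blur it to model motion noise."""
    n = len(belief)
    shifted = [belief[(i - step) % n] for i in range(n)]
    blurred = [sum(shifted[max(0, i - spread):i + spread + 1]) for i in range(n)]
    total = sum(blurred)
    return [b / total for b in blurred]          # a priori estimate

def update(belief, measured_cell, sigma=2.0):
    """Weight the belief by a Gaussian likelihood centred on the wall measurement."""
    likelihood = [math.exp(-0.5 * ((i - measured_cell) / sigma) ** 2)
                  for i in range(len(belief))]
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]        # Bayes' theorem, normalized

belief = [1.0 / 50] * 50                         # uniform prior over the track
belief = predict(belief, step=3)                 # dead reckoning: move 3 cells
belief = update(belief, measured_cell=4)         # noisy wall-distance observation
```

After the update, the belief peaks at the measured cell while remaining a proper probability distribution.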




Figure 3: Conceptual model of the Kalman filter.


Kalman Filter


Figure 4: Linear state space model.

Kalman filters, first proposed by R. E. Kálmán in 1960 [9], have proven very efficient in avionics, when used in the extended version. The first use of the filter was in the Apollo


space program, where it was used for trajectory estimation on the lunar missions [15]. The Kalman filter uses the Gaussian distribution function for parametric probability representation and assumes Gaussian zero-mean noise models for both process and observation noise. The filter is based around a linear state-space model (see Figure 4) of the system, which is used to project the old state estimate into the next one and to estimate the observation:

x_k = A x_{k-1} + B u_k + w_k    (2.4)
z_k = H x_k + v_k    (2.5)

where w is the process noise, and v is the observation noise. The matrix A projects the state estimate ahead and B relates the system input to the state space. At the time update, the a prioris of the state estimate x̂ and the estimate covariance P_k are calculated. This is done by projecting the previous values with A and adding the system input B u_k and the process noise covariance Q respectively:

x̂_k^- = A x̂_{k-1} + B u_k    (2.6)
P_k^- = A P_{k-1} A^T + Q    (2.7)

In the measurement update, x̂_k and P_k are corrected, based on an observation z_k:

K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}    (2.8)
x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)    (2.9)
P_k = (I - K_k H) P_k^-    (2.10)
where R is the measurement noise covariance, H relates state to measurement. The matrix K is the Kalman gain, used to mix a priori estimate with the observation based state estimate.


Extended Kalman Filter

As the state space model is a linear model, the Kalman lter is limited to linear systems. In real-world application, only very few of such systems exists. The Kalman lter can however be extended [5, 7, 19] to work for non-linear systems. Rather than using the linear state space models, the two non-linear functions f(x, u, w) and h(x, v) are used to project the state and estimate sensor output, so that: xk = f(xk1 , uk , wk ) zk = h(xk , vk ) (2.11) (2.12)

The Kalman lter needs the linear model matrices, A and H still, though. But these can be derived by linearizion of the f(x, u, w) and h(x, v), as their partial derivatives with respect to the state vector. A[i,j] (x, u) = H[i,j] (x) = f[i] (x, u, 0) x[j] h[i] (x, 0) x[j] (2.13) (2.14)



Besides, two new Jacobians are needed to project the system noise, w, and the sensor noise, v, into respectively the state space and the measurement space. These can similarly be derived by linearization of f(x, u, w) and h(x, v), as the partial derivatives with respect to the noise vectors:

W[i,j](x) = ∂f[i](x, u, w) / ∂w[j]    (2.15)
V[i,j](x) = ∂h[i](x, v) / ∂v[j]    (2.16)

In the time update of the EKF, the projection of the estimated state is given by f(x, u, 0). The estimate covariance is projected much like in the original filter; W_k terms are added to project the process noise covariance, Q, into state space:

x̂_k^- = f(x̂_{k-1}, u_k, 0)    (2.17)
P_k^- = A_k P_{k-1} A_k^T + W_k Q W_k^T    (2.18)

Similarly, the measurement update is upgraded with a projection of R using V_k, and the sensor output is estimated by h(x, 0):

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + V_k R V_k^T)^{-1}    (2.19)
x̂_k = x̂_k^- + K_k (z_k - h(x̂_k^-, 0))    (2.20)
P_k = (I - K_k H_k) P_k^-    (2.21)

This concludes the theory covered in this paper.
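To make the EKF recursion (2.17)-(2.21) concrete, the sketch below runs one cycle on a scalar toy model (an angle integrating a rate input, observed through a sine). The model, Jacobians and noise values are our own illustrative assumptions, not the filter implemented later in this paper.

```python
import math

# Generic scalar EKF sketch of eqs. (2.17)-(2.21); f/h are the non-linear
# models, A/H/W/V their (here hand-derived) Jacobians.

def ekf_step(x, P, u, z, f, h, A, H, W, V, Q, R):
    # Time update (2.17)-(2.18)
    x_prior = f(x, u)
    a, w = A(x, u), W(x)
    P_prior = a * P * a + w * Q * w
    # Measurement update (2.19)-(2.21)
    Hk, Vk = H(x_prior), V(x_prior)
    K = P_prior * Hk / (Hk * P_prior * Hk + Vk * R * Vk)
    x_post = x_prior + K * (z - h(x_prior))
    P_post = (1.0 - K * Hk) * P_prior
    return x_post, P_post

f = lambda x, u: x + 0.1 * u      # angle integrates the rate input (dt = 0.1)
h = lambda x: math.sin(x)         # non-linear observation of the angle
A = lambda x, u: 1.0              # df/dx
H = lambda x: math.cos(x)         # dh/dx, linearized at the a priori estimate
W = lambda x: 0.1                 # df/dw: noise enters like the input
V = lambda x: 1.0                 # dh/dv: additive measurement noise

x, P = 0.0, 0.5
x, P = ekf_step(x, P, u=1.0, z=math.sin(0.1),
                f=f, h=h, A=A, H=H, W=W, V=V, Q=0.01, R=0.05)
```

With a consistent observation the innovation is zero, so the state lands on the prediction while the covariance contracts.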


This section will discuss our implementation of a special 3-stage EKF, tailored for MAVs. An overview of the hardware developed to realize the sensor fusion is provided. This hardware includes both sensors used in this project and other sensors, not yet utilized, but deemed useful to make a completely autonomous MAV.



For this and another project, an autopilot board, see Figure 5, has been developed. The board is centered around a 600 MHz OMAP3 based Gumstix with 512 MB RAM. The purpose of the board is to sense, process and control everything needed to keep a MAV in the air, independent of any ground links. To do so, it has been equipped with a range of sensors, listed in Table 1. Some of the sensors work standalone, whereas others have to be fused to provide useful measurements. All of the sensors, except the GPS, are connected to a 400 kHz I2C bus of the Gumstix. The GPS has a serial interface, and is thus connected to a UART of the Gumstix. An overview of the sensors fused in Section 3.2 can be seen in Figure 6. Linux drivers for the sensors had to be developed, as it was decided to run the Gumstix with a stripped down Ubuntu distribution. This is however not the focus of this project, and thus interfacing details are not discussed in this paper.



Figure 5: The developed Gumstix autopilot expansion board.

Sensor        | Name     | Notes
Gyroscope     | ITG3200  | Triple axis. ±2000 °/s @ 0.0696 (°/s)/LSB
Accelerometer | ADXL345  | Triple axis. ±16 g @ 3.9 mg/LSB
Magnetometer  | HMC5883L | Triple axis. ±8 G @ 5 mG/LSB
GPS           | D2523T   | 50 Ch. Helical receiver, 4 Hz, u-blox chipset
Airspeed      | MPXV7002 | Differential press. sensor w. pitot tube. ±2 kPa
Altitude      | BMP085   | Barometric press. 300-1100 hPa @ 0.03 hPa/LSB
Proximity     | LV-EZ4   | Ultrasonic ground dist. 0-7 m @ 2.5 cm/LSB

Table 1: Sensors available on the autopilot board.


Three-stage Extended Kalman Filter

In 2006, Andrew M. Eldredge wrote his Master's thesis, Improved state estimation for miniature air vehicles [8]. There, Eldredge proposed a new use of the EKF: a three stage filter, designed especially for Micro Air Vehicles (MAVs). This approach, as opposed to other implementations [14], separates the complete position and orientation filter into three cascaded estimation steps: 1) Pitch & Roll, 2) Heading and 3) Position & Wind, as seen in Figure 7.
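One practical consequence of the cascade is that each stage can be clocked at its own sensor rate. The sketch below mirrors that structure; the `Stage` class is a placeholder (a real stage would run an EKF cycle), and the 100 Hz / 50 Hz / 4 Hz rates are illustrative, chosen to echo the gyro-fast, GPS-slow spread discussed in the text.

```python
# Placeholder cascade: each stage runs at its own period, consuming the
# outputs of the previous stage, mirroring the three-stage structure.

class Stage:
    def __init__(self, name, period):
        self.name, self.period = name, period
        self.next_run, self.runs = 0.0, 0

    def maybe_update(self, t):
        if t >= self.next_run:
            self.next_run += self.period
            self.runs += 1          # a real stage would run its EKF cycle here

attitude = Stage("pitch/roll", 0.01)    # 100 Hz, gyro/accelerometer driven
heading  = Stage("heading",    0.02)    # 50 Hz, adds magnetometer
nav      = Stage("navigation", 0.25)    # 4 Hz, GPS driven

t, dt = 0.0, 0.01
for _ in range(100):                    # simulate one second
    for stage in (attitude, heading, nav):
        stage.maybe_update(t)
    t += dt
```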




Figure 6: Autopilot board block diagram.

Figure 7: Three-stage state estimation, illustrating the three stages, inputs and outputs. The diagram is a replication of [8, Fig. 2.9, p. 21].

By separating the filter into three steps, the implementation is greatly simplified and the conceptual understanding is more convenient to grasp. A complete state-space model contains seven parameters: roll, pitch, yaw, north/east position and north/east wind speed - yielding rather large matrices if all is handled at once. Furthermore, each step can be updated individually, which is very convenient, as our sensors have different update rates (the gyroscope has an internal sample rate of up to 8 kHz, whereas the GPS updates with a maximum of 4 Hz). We have successfully implemented Stage 1 of the proposed filter. The following three sections will explain our implementation and discuss how the last two steps will be implemented in the future.

3.2.1 Stage 1 - Pitch & Roll

In this first of three steps, the goal is to determine the state vector, x, composed of Pitch, θ, and Roll, φ. This is done utilizing the gyroscope's angular rates as system input, u, and with the accelerometer's linear accelerations as observations, z:


x = [φ, θ]^T (state vector)    u = [ωx, ωy, ωz]^T (system input vector)    z = [ax, ay, az]^T (observation vector)

Figure 8: Stage 1 components.

Note that compared to the original implementation by Eldredge, the total airspeed vair has been neglected for now. With x, u and z in place, we are ready to define the first Time-update, or Predict, stage.

Time-update of Pitch & Roll

Here we wish to propagate the gyro angular rates to estimate the change in Roll and Pitch. This is done using the estimated current state and the kinematics proposed in [16, p. 27], derived for the reader's convenience in Appendix B. Thus:

f(x, u) = [ φ + Δt (ωx + sin(φ) tan(θ) ωy + cos(φ) tan(θ) ωz) ]
          [ θ + Δt (cos(φ) ωy - sin(φ) ωz)                    ]    (3.1)

Linearizing by partial derivatives, with respect to x, at the current estimate yields the Jacobian matrix A(x, u); please refer to Equations (B.7) through (B.10) for a complete derivation:

A[i,j](x, u) = ∂f[i]/∂x[j] =
[ 1 + Δt tan(θ) (cos(φ) ωy - sin(φ) ωz)    Δt (ωy sin(φ) + ωz cos(φ)) / cos²(θ) ]
[ -Δt (ωy sin(φ) + ωz cos(φ))              1                                    ]    (3.2)


That concludes the functions needed to compute the Time-update of an EKF, stated in equations (2.17) and (2.18). The next goal is to calculate the Measurement-update, or Correction.

Measurement-update of Pitch & Roll

In order to complete the Measurement-update, we are required to calculate the Kalman gain. The gain depends on the accelerometer output prediction model h(x) and its Jacobian matrix H(x). The sensor prediction model simply maps the gravity vector into the airframe, as derived in Equation (B.16):

h(x) = [ ax ]   [ -sin(θ)        ]
       [ ay ] = [ sin(φ) cos(θ)  ]
       [ az ]   [ cos(φ) cos(θ)  ]    (3.3)

The Jacobian matrix, H(x), is composed of the partial derivatives of h(x) with respect to x; please refer to Equations (B.17) through (B.19) for a complete derivation:

H[i,j](x) = ∂h[i]/∂x[j] =
[ 0                  -cos(θ)          ]
[ cos(φ) cos(θ)      -sin(φ) sin(θ)   ]
[ -sin(φ) cos(θ)     -cos(φ) sin(θ)   ]    (3.4)

Finally we only need to consider noise before we can proceed to the next stage.

Noise considerations of Pitch & Roll

V relates the measurement (accelerometer) noise, v, to the sensor estimate z, as V is the partial derivative of h(x, v) (Eq. (3.3)) with respect to v. The complete derivation of this Jacobian matrix can be found in Equations (B.20) through (B.22) on page 22, where it is proved that V is given by the identity matrix:

$$V(x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (3.5)$$

The derivation simply proves that the noise on the accelerometers maps directly onto what we are measuring.

R denotes the measurement noise covariance, and is hard to model; thus it is usually found by trial-and-error tuning.

W relates the system noise, w, to the state estimate x, as W is the matrix of partial derivatives of f(x, u, w) (Eq. (3.1)) with respect to w. The complete derivation of this Jacobian matrix can be found in Equations (B.11) through (B.14) on page 21, where it is proved that W is given by:

$$W(x) = \begin{bmatrix} 1 & \tan\theta \sin\phi & \tan\theta \cos\phi \\ 0 & \cos\phi & -\sin\phi \end{bmatrix} \quad (3.6)$$

This shows that our gyroscope noise needs to be mapped into the system model. Q denotes the process noise covariance and is, like R, usually found by trial-and-error tuning.

This concludes the measures necessary to compute an estimate of the Roll and Pitch angles of the airframe. Simulation results and 3D visualization of the implementation can be viewed in Sections 4.1 and 4.2 on page 13. The following two sections will cover how Heading, Position and Wind can be estimated in future work.

3.2.2 Stage 2 - Heading
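To illustrate how W maps the gyro noise into the process noise of the state, here is a small Python sketch. The helper names are ours, and it assumes isotropic gyro noise with standard deviation sigma on each axis, i.e. Q = sigma^2 I, forming the discrete contribution W Q W^T dt:

```python
import math

def W(phi, theta):
    """Jacobian of f with respect to the gyro noise vector."""
    return [[1.0, math.tan(theta) * math.sin(phi), math.tan(theta) * math.cos(phi)],
            [0.0, math.cos(phi), -math.sin(phi)]]

def process_noise(phi, theta, sigma, dt):
    """Discrete process noise for the (roll, pitch) state: W Q W^T dt."""
    w = W(phi, theta)
    q = sigma ** 2
    return [[q * dt * sum(w[i][k] * w[j][k] for k in range(3))
             for j in range(2)]
            for i in range(2)]
```

At level attitude W reduces to [[1, 0, 0], [0, 1, 0]], so the gyro noise passes straight through to roll and pitch; at large pitch angles the tan(theta) terms inflate the roll uncertainty, matching the singularity of the Euler kinematics.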

In Stage 2, the goal is to determine Heading, $\psi$, from the Pitch and Roll angles found in Stage 1 and the gyroscope measurements, combined with magnetometer measurements. Thus:

$x = \psi$  State vector

$u = \begin{bmatrix} \omega_y & \omega_z \end{bmatrix}^T$  System input vector

$z = \begin{bmatrix} m_x & m_y & m_z \end{bmatrix}^T$  Observation vector

Figure 9: Stage 2 components

The kinematics described in [8, Sec. 2.7] map the current Pitch, Roll and angular rates to the Heading rate. Furthermore, a sensor model [8, Eq. 2.6] maps airframe magnetometer measurements into the Earth's magnetic field. With this data, and a map of the Earth's magnetic field, the procedure for obtaining Heading through Kalman filtering is similar to that described for Stage 1 in Section 3.2.1 on page 10. A noteworthy property of this method for obtaining Heading is that, as opposed to similar implementations, it does not rely on GPS to find $\psi$. This is desirable, as GPS updates are generally relatively infrequent and loss of satellite connection is not uncommon.
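The heading kinematics come from the same family of Euler-angle rate equations as used in Stage 1. A minimal Python sketch (our own naming) of the yaw-rate relation looks like:

```python
import math

def psi_dot(phi, theta, wy, wz):
    """Heading rate from body angular rates, given current roll and pitch.
    Note the 1/cos(theta) factor: the model degenerates at +/-90 deg pitch."""
    return (math.sin(phi) * wy + math.cos(phi) * wz) / math.cos(theta)
```

In wings-level flight, psi_dot equals the yaw gyro rate wz directly; in a banked turn, the pitch-axis gyro also contributes to the heading change.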


3.2.3 Stage 3 - Position & Wind

Once the complete attitude has been determined in Stages 1 and 2, it is time to estimate the absolute position, $p_n, p_e$, and wind, $w_n, w_e$. This is accomplished using the GPS as our sensor and the previously estimated Pitch, Heading and Airspeed:

$x = \begin{bmatrix} p_n & p_e & w_n & w_e \end{bmatrix}^T$  State vector

$u = v_{air}$  System input vector

$z = \begin{bmatrix} gps_n & gps_e \end{bmatrix}^T$  Observation vector

Figure 10: Stage 3 components

The kinematic procedure suggested in [8, Sec. 2.8] deals with the dynamics, and again the EKF approach is applied to the kinematic model. Finally, after three cascaded steps, the complete state vector is obtained:

$$x_{complete} = \begin{bmatrix} \phi & \theta & \psi & p_n & p_e & w_n & w_e \end{bmatrix}^T$$
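A constant-wind kinematic model in the spirit of the one referenced above can be sketched as follows (Python, our own naming; ground velocity is taken as the horizontal projection of the air-relative velocity plus the wind, and the wind states have zero dynamics):

```python
import math

def position_dynamics(v_air, psi, theta, wn, we):
    """Time derivatives of (pn, pe, wn, we) under a constant-wind model."""
    pn_dot = v_air * math.cos(psi) * math.cos(theta) + wn
    pe_dot = v_air * math.sin(psi) * math.cos(theta) + we
    return pn_dot, pe_dot, 0.0, 0.0  # wind modelled as a random constant
```

An EKF over this model observes only the GPS position; the wind states become observable because a wind error produces a consistent drift between predicted and measured positions.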



This section gives a short introduction to the procedures used to test and visualize the sensor fusion algorithms.



All code is initially written in Octave. Octave allows for quick development and does not impose the same rigidity as a pure C/C++ implementation. Thus all experimentation and development is done in Octave, and once the desired filter performance is obtained, the code is ported to C/C++. The C/C++ implementations are based on the open-source library KFilter [2], which provides a framework for the EKF algorithm. Real-world sensor data has been logged to .csv files, and these are imported and analyzed in Octave. An example of such an experiment is shown in Figures 13 and 14 on page 23.
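For illustration, parsing such a log could look like the sketch below, written in Python rather than Octave and with a hypothetical column layout (the thesis log format is not specified here):

```python
import csv
import io

# Hypothetical row layout: timestamp, gyro rates, accelerations.
SAMPLE_LOG = """t,wx,wy,wz,ax,ay,az
0.00,0.01,-0.02,0.00,0.12,0.05,-9.79
0.02,0.02,-0.01,0.01,0.20,0.10,-9.70
"""

def load_log(text):
    """Parse a CSV sensor log into a list of dicts of floats."""
    reader = csv.DictReader(io.StringIO(text))
    return [{key: float(val) for key, val in row.items()} for row in reader]

samples = load_log(SAMPLE_LOG)
```

Replaying such logged samples through the filter offline is what allows the tuning to be done on the ground rather than in the air.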



In order to visualize the filter performance live, much as it is seen with many commercial products [3, 4], live communication between the autopilot board and a computer with a monitor and 3D renderer is necessary. For this communication, a middleware called Robot Operating System (ROS) is utilized. ROS allows for link-transparent communication between two or more TCP/IP enabled units; thus we use the Gumstix WiFi to connect to a visualizing computer. ROS also has a built-in 3D renderer called RViz, which is used to visualize the filter output as seen in Figure 11 on the next page.
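RViz expects orientations as quaternions rather than Euler angles, so publishing the filter output involves a conversion along these lines (a Python sketch of the standard Z-Y-X Euler-to-quaternion formula; the function and variable names are ours):

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert Z-Y-X Euler angles to an (x, y, z, w) quaternion,
    the orientation convention used by ROS messages."""
    cr, sr = math.cos(roll / 2.0), math.sin(roll / 2.0)
    cp, sp = math.cos(pitch / 2.0), math.sin(pitch / 2.0)
    cy, sy = math.cos(yaw / 2.0), math.sin(yaw / 2.0)
    return (sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy,   # z
            cr * cp * cy + sr * sp * sy)   # w
```

The identity attitude maps to the unit quaternion (0, 0, 0, 1), and the result is always unit-norm, which RViz requires for orientation fields.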




Figure 11: The 3D model used in RViz for live visualization of filter output

RViz with ROS allows .dae graphics to be imported. Many 3D models are freely available in this format in the Google 3D Warehouse [1]; thus an airplane model is easily found and imported into RViz. The visualization gives very intuitive feedback as to how well the filter performs. Furthermore, it allows for simultaneous comparison of different implementations, and real-world quantities such as vibrations are conveyed better than in static plots.


In order to assess the quality of the implemented filter, a reference experiment has been conducted. A commercial Attitude and Heading Reference System (AHRS) [3] was fixed to the autopilot board frame, and both systems estimated pitch and roll angles. In Figures 12a and 12b on the following page the two estimates are plotted together. As seen from the figures, the estimated angles are very similar, though the implemented filter seems to respond slightly differently under g-forces (t = 15-20 s). This is expected to be a matter of filter tuning and g-force compensation.




(a) Pitch estimates.

(b) Roll estimates.

Figure 12: Estimated pitch and roll angles from conducted reference experiment.


Future work includes implementation of the last two stages of the cascaded Extended Kalman filter, mentioned in Sections 3.2.2 and 3.2.3. Furthermore, these stages would need to be validated. The global position and wind estimates are hard to evaluate. Position could be evaluated using a carefully defined path or another sensor known to provide a certain accuracy. The wind estimate is not as interesting in itself as is its impact on the position estimate; thus, if it can be approximated well enough to aid the position observer, that would suffice.

An Extended Kalman filter has been implemented. Simulation, visualization and validation reflect that the implemented EKF is indeed capable of fusing gyroscope and accelerometer data into a viable state estimate. The mathematical and theoretical background of the filter has been described.

We would like to thank Lars-Peter Ellekilde for his patience and guidance throughout this project.




[1] Google 3D Warehouse, open source 3D models.

[2] KFilter - free C++ Extended Kalman Filter library.

[3] VectorNav homepage, inertial measurement systems.

[4] Xsens homepage, inertial measurement systems.

[5] A. L. Barker, D. E. Brown, and W. N. Martin. Bayesian estimation and the Kalman filter. Computers & Mathematics with Applications, pages 55-77, 1994.

[6] R. Beard. State estimation for micro air vehicles. In Javaan Chahl, Lakhmi Jain, Akiko Mizutani, and Mika Sato-Ilic, editors, Innovations in Intelligent Machines - 1, volume 70 of Studies in Computational Intelligence, pages 173-199. Springer Berlin / Heidelberg, 2007. 10.1007/978-3-540-72696-8_7.

[7] W. Burgard, D. Fox, and S. Thrun. Probabilistic Robotics. MIT Press, Cambridge, MA, 2005.

[8] A. M. Eldredge. Improved state estimation for micro air vehicles. Master's thesis, Brigham Young University, Dec 2006.

[9] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82(Series D):35-45, 1960.

[10] R. Mahony, M. Euston, P. Coote, J. Kim, and T. Hamel. A complementary filter for attitude estimation of a fixed-wing UAV. In Intelligent Robots and Systems (IROS 2008), IEEE/RSJ International Conference on, pages 340-345, Sept. 2008.

[11] E. C. Molina. Bayes' theorem - an expository presentation. Bell System Technical Journal, pages 273-283, 1931.

[12] W. Premerlani and P. Bizard. Direction Cosine Matrix IMU: Theory. DIY Drones community, May 2007.

[13] S. Riaz and A. B. Asghar. INS/GPS based state estimation of micro air vehicles using inertial sensors. In Innovative Systems Design and Engineering, Vol 2, No 5, 2011.

[14] S. Riaz and A. M. Malik. Single seven state discrete time extended Kalman filter for micro air vehicle. In Proceedings of the World Congress on Engineering 2010, Vol II, 2010.

[15] S. F. Schmidt. Kalman filter: Its recognition and development for aerospace applications. Journal of Guidance and Control, 4(1):4-7, 1981.

[16] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 2nd edition, 2003. ISBN 978-0-471-37145-8.




[17] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Monte Carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343-349, July 1999.

[18] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Robust Monte Carlo localization for mobile robots. In Artificial Intelligence, volume 128, pages 99-141, 2000.

[19] G. Welch and G. Bishop. An introduction to the Kalman filter, 2001.




Project Log


This is the plane group's log. It will be updated daily with a new to-do list and a little reflection on how progress has gone.

01/05 We have now converted from practical chores to report writing. A draft of the report structure has been made and put up for debate between supervisors and us. Report writing has commenced.

25/04 Flight tests with aided flight are carried out. Two flights have initially been completed with aided flight enabled - and it works. Pitch and roll can now be controlled directly from the remote. The PID is tuned so it responds smoothly. The system is so stable that, while we were in the air, we could calmly hand the remote to Peter, who has never flown in real life before. He steered the plane around like a professional and handed the remote back to Hjalte after approximately 6-10 minutes of good flying. We then landed the plane in aided mode, and a finer landing is rarely seen. We are now tuning various small things while reviewing the data generated this morning, and weather permitting we will go on a new mission this afternoon/evening.

24/04 - Peter changes some procedures regarding pitot calibration, so that offset and gradient are now adjusted by the Kalman filter via GPS. - Hjalte is on a company visit all day.

23/04 The raw data has been analyzed and the state estimator tuned. Furthermore, the 3rd stage of the Kalman filter has been implemented.

20/04 After a long period of silence there is a little life in the log again.. We have been out flying in the lovely (= windless) weather this afternoon. From the last flight we learned that it was perhaps a bit naive to believe everything would work at once. We have taken that lesson to heart, and instead of running the state estimator we chose this time to log all sensor data (there is not enough CPU power to do both at once). It should thus now be possible to tune the state estimator on the ground from the sensor data plus video out of the windshield. The day offered two successful flights, filmed both from the plane and from the ground. This gives us a good starting point for moving on.
Next week we will look at these data, and a video will also be published so you can all follow the status. In addition, in the quiet days since last time, we have completed five important tasks:
- The magnetometer placed on the PCB has turned out (as feared) to sit too close to the motor's magnetic field. We have made an interface to a new magnetometer, now placed further back in the plane -> so the plane no longer always flies north when the motor draws current!! (The new (larger) motor is also mounted.)
- In addition to the new magnetometer's yaw estimation, we now also use GPS measurements to compute heading. Furthermore, the GPS now also supplements the pitot tube in measuring speed.
- The GPS has no flash or backup battery. We therefore previously had to remember to configure the GPS after every power cycle - this configuration is now implemented as part of boot, so the GPS is always correctly configured - leaving less room for human errors/forgetfulness.
- A structural change has been made to the code, so the same code can be run in different modes. Thus only one place needs changing, whether one is simulating, flying with the state estimate, or flying and logging sensor data. All of this is controlled by a single parameter in our launch file.
- Last but not least, we have held mini-meetings with Ulrik and Rasmus (separately), where we discussed fulfilment of learning goals and deliverables. It has also been aired that we should soon begin a transition phase, slowly phasing out practical project work in favour of report writing -> neither of us wants the report to be written in a rush at the end. (We have probably had enough classic thesis blunders by now.) We have realized, and regret, that we will not achieve everything we would like to, but we are in good spirits and hope to deliver a usable project that can be taken over by others - while properly rounding off what we have achieved, both practically and in the report. A good weekend to all of you reading along.

4/4 We have prepared for flying tomorrow. The purpose of the trip is to test pitch and roll estimation as well as semi-automatic/aided flight, where the plane adjusts itself to a desired pitch and roll via PID. Setpoints are the stick positions on the remote. The PID parameters have been trimmed as well as currently possible in the simulator, but we will probably trim them further during tomorrow's flights. In addition, a battery fixture has been made and a camera mounted on the plane, to be used for verifying the pitch/roll estimate by comparing the artificial/estimated horizon with the actual one on the video.

3/4 The day has been tedious. A compiler difference meant that code verified on x86 did not work on ARM. The error was finally found, and state estimation now works properly with quaternions in stage 1.

2/4 Bugfix of quaternions. Quaternions are now functional in stage 1 (pitch and roll estimation). Thus we avoid the singularities of the Euler angles, and we can estimate the plane's pitch and roll through 360 degrees.

30/3 Made a new motor flange so we avoid resonance at 70%+ throttle. Converted from Euler angles to quaternions.

29/3 Held a mini-meeting with Anders Fugl.

Wrote on the report. Replaced the motor shaft.

28/3 Noise has been added to the simulator's sensor output; its magnitude is based on measurements taken during the first flight. The ground station can now also be connected at the same time as the simulator, so the state estimate can be visualized live. A new battery box has been made for the plane, minimizing the risk of the battery slamming into the electronics in case of a hard landing.

27/3 Work has been done on the simulator and the state estimator. To be able to validate our Kalman filters, code has been generated from the simulator to simulate sensor outputs (gyros, accelerometers, magnetometers). These sensor data are published via ROS to an external node containing the state estimators for attitude and heading - it appears to work. Further work will add noise (here our flight with Henning gives a good indication of the noise magnitude) and fiddle with the filter tuning. In addition, we need the simulator to publish GPS, pitot and altitude data for the last stage of the state estimator.

26/3 Continued work on the PID loops. Mixing of elevator and rudder depending on roll. Altitude control ok, roll control ok, pitch control ok.

22/3 CRRCSim has now been rossified -> a completely basic PID loop can keep the plane at a given altitude. The PID library must be modified and adapted so we get a nicer response. Kjeld: Code for my mini web server (and thus a stable socket server) is now at: http://

20/3 - Work is being done on the report. Aided flight and the PID control loops are described and illustrated. - The estimator and I2C nodes have been merged, but are still split into 2 classes -> 25-30% CPU has been freed. - PID tuning, as well as can be done on the ground. - Have reminded Wheelspinmodels about the motor shafts; they expect them to be shipped today or tomorrow. Kjeld :-)

19/3 Peter optimizes the code. Sticking 100% to the ROS idea of separate nodes turned out to be too hard on the ARM. Therefore the Kalman and I2C/sensor nodes have been merged into one. This removes the biggest resource hog in terms of data rate and message size. Hjalte documents pitot hardware, theory and validation test. Furthermore, we have agreed to meet with Andreas Fugl late in week 13 (26/3 or later). [Kjeld] Regarding stickers for the plane, plain advertising for SDU-Tek is perfectly fine. I am sure Bo will find a good solution. However, he should take into account that wings do break once in a while, so it probably pays to print an extra set of foils.

16/3 Rasmus: See what they have at Hohenheim Uni, which I have just visited ;-)

- The motor has been disassembled, and a new shaft has been ordered. - A camera mount for the Nokia phone has been made, so we can film the horizon. - Have written to Andreas Rune Fugl (MMMI PhD), who should have experience with model planes, to hear if we can tap into his experience. - Have delivered templates of the plane's stickers to Bo from Communications, so we can get our own stickers (open discussion about rate/image(s)).

15/3-12: Hjalte had a visitor from Germany and took the day off. Peter implemented PID for use in pitch and roll aided flight.

14/3-12: (Kjeld) I just stumbled upon this article - have not had time to check its quality, but it contains some data that might be useful in your report, and it also describes a slightly different application. See if you can use anything:

2008%20LOW%20COST%20UAV%20FOR%20POST-DISASTER%20ASSESSMENT.pdf (login and password are robotbil)

The first independent flight has been completed. It resulted in a bent motor shaft.

13/3-12: We have been at Kold College to talk to Stig Hansen, who has given us permission to use two fields (see below) for test flights.

We are allowed to fly on them whenever it suits us until the project ends in June. Use is free as long as we take reasonable care of the crops. Stig would like to hear about the project, and we have agreed that they can watch a flight once we have something more concrete.

New project management. As agreed at the meeting on 12/3, we will run sprints of about 2 weeks' duration, where a predefined work task ends in a product that can be demonstrated and closed (the whole SCRUM methodology). In this way the daily project work becomes structured, and supervisors can easily follow the project's progress (or lack thereof). Below is our first draft of such work tasks and their time priorities. SPRINT:


Date    Sprint
26/3    Sprint 1
9/4     Sprint 2
23/4    Sprint 3
7/5     Sprint 4
21/5    Sprint 5 (half sprint)

Sprint products:
- Stage 1 (Kalman roll+pitch estimate, incl. airspeed).
- Aided bank.
- Aided climb.
- Stage 2 (Kalman yaw estimate).
- Aided heading.
- Stage 3 (Kalman position and wind estimate).
- Literature study of navigation and control methods.
- Choice and definition of navigation and control method.
- Rossify simulator.
- Tool / showcase OR waypoint navigation.
- Finish project.

------------------------------------------------------------- x ----------------------------------------------------
Sprint 1:
1) Sensor model for pitot - Done
   - Mathematical model - Done
   - Verification - wing out of the car window versus GPS speed - Done
   - Document - Done
2) - Verify stage 1, roll & pitch
   - Incorporate airspeed into the estimation of generated g in the sensor model - Done
   - Small, straight flights with known attitude (first level, then with small deflections etc.)
   - Generate some g-forces, see if the filter reacts as expected
   - Generate some more g, see if the filter reacts as expected
   - Field test: video from behind, overlay / PiP artificial horizon / video on the plane (old phone camera)
   - Document (YouTube link, something...)
3) - Aided flight:
   - Aided bank / roll: - Done

     stick/roll_desired -> [PID] -> ailerons -> roll angle
            ^                                      |
            '-------------- (roll_actual) --------'

   - Tuning of PID parameters. First in the lab, then in the field.
   - Aided climb: - Done

     stick/pitch_desired -> [PID] -> elevator -> pitch angle
            ^                                       |
            '-------------- (pitch_actual) --------'

   - Tuning of PID parameters. First in the lab, then in the field.
   - Field test: stick position versus artificial horizon or video
   - Document - Begun
------------------------------------------------------------- x ----------------------------------------------------
Sprint 2:
1) - Verify stage 2 (heading / yaw)
   - First straight, short stretches in different directions
   - Then small circle segments
   - Fly in a given yaw direction:

     yaw_desired -> [PID] -- (roll_desired) --> [PID] -> ailerons -> bank angle
          ^                       ^                                      |
          |                       '---------- (roll_actual) ------------|
          '-------------------------- (yaw_actual) ---------------------'

2) - Stage 3 of the Kalman filter (position and wind estimate)
   - Rosbag: Fly nicely! Short is fine, but nicely.
   - Implement, debug etc. on the workstation
   - Verify in the air
   - Ground truth..? Video
   - Document
3) - Finish report chapters
   - Introduction
   - Sensor fusion theory
   - 3-stage model
   - Literature study (quaternions, 7-state model, DCM, etc.)
   - Partial conclusion
------------------------------------------------------------- x ----------------------------------------------------
Sprint 3:
1) - Investigate and define flight plan
   - Literature study
   - Methods: primitives (line, circle etc.), splines, AIDL (possibly modified)
   - Assess flyability
   - Coupling with measurement points (not just a flight, but a flight with a purpose..)
   - Document
2) - Choose / develop a method for solving the flight plan
   - [Dx, Dy, Dz, Dh] = f(Px, Py, Ph) (desired (Pn, Pe, altitude, heading) as a function of current (Px, Py, heading))
   - Will give position, heading and altitude errors, which can enter as inputs to control loops
   - Ideas:
     - Vector field around "primitives" -?
3) - Prepare simulator - Done
   - Rossify simulator with AHRS out and servo in. - Done
4) - Document
------------------------------------------------------------- x ----------------------------------------------------
Sprint 4:
Option 1: Autonomous navigation.
1) - Implement navigation (found suitable in the previous sprint) in the simulator.
2) - Validate navigation in field test (ground truth...?)
Option 2: Tool.
1) - Define interfaces (HW, SW; discuss whether mechanical should be standard)
   - Supply voltage, current, buses (parallel, Ethernet, USB, etc...)
   - SW interface (rostopics, trigger signals, data transfer etc.)
2) - Implement example tool.
3) - Document.
4) - Get nice grade.
------------------------------------------------------------- x ----------------------------------------------------
12/3-2012: Meeting held with all supervisors. The project is being tightened up and new guidelines set. New SCRUM-ish approach with sprints. We have ordered a model flyers' union membership for Hjalte so he is insured: 700 kr through MMMI. We have ordered high-risk spare parts from Austria through MMMI / Richard Beck:

Nylon screws:   2x  3.60
Wing fittings:  2x  5.40
Wings:          1x 37.90
Tail:           1x 18.90
Propellers:     3x  6.00
Postage:        1x  6.00
Total:             98.80 = 741 DKK

5/3-12 - Todo: Prepare for the supervisor meeting on 12/3-12 (see list below). Move the differential pressure sensor out into the wing, so we do not get problems with a bent air hose on future flights. Finish modelling pitot -> airspeed. Implement the 3rd stage of the Kalman filter (GPS -> position + wind estimate).

Speculate on how we define a route plan / waypoint set - Ideas: AIDL - reduced (can we find a specification?). Composition of primitives (straight line, arc... etc?). Splines through waypoints - possibly with some constraints. How do we chain measurement data / measurement points into the route plan? Possibly criteria for whether a measurement can be completed or the plane must turn around for a new pass over the measurement point. Consider the flyability of waypoints / path. (We cannot bank instantaneously.) New mounting for the PCB - possibly a re-design...

For the meeting, Monday 12/3-12:
Synopsis
Report outline - (begun)
Write a few sections - (Done)
Budget
What we said we would spend:

Sum total - Estimated before purchases
RC gear:                                                      5,155 kr
Electronics:                                                  4,843 kr
Show-case / proof of concept: camera tool (Gumstix + camera
+ gimbal + 3D ABS prototype + PCB) (rough estimate):          4,000 kr
Total sum:                                                   13,998 kr

What we have spent:

Sum total - Actual purchases
RC gear:                                                      6,676 kr
Electronics:                                                  4,915 kr
Show-case / proof of concept: camera:                           410 kr
Replacement parts (fried electronics):                        2,149 kr
  Motor (665), receiver (649), motor controller (475), faulty PCB (360)
Total sum:                                                   14,150 kr

Difference between budget estimate and actual purchases: 13,998 - 14,150 = -152 kr, i.e. 152 kr over budget. Note that no showcase has been purchased. The 152 kroner (+ showcase) over budget is due to: the RC equipment and electronics being 1,595 kr more expensive (better gear than first budgeted, i.e. Gumstix FE, 2 batteries, a good charger), and unforeseen extra costs in the form of a burnt motor, receiver, motor controller, LAN chip and a couple of other small things. Furthermore, the budget did not account for pilot wages. Pilot wages = 3x ??? kr/hour

What do we need? Plastic bolts. Plastic flange thingy. Can be bought from Multiplex - but somewhat expensive and probably cumbersome.. Danish dealer?

Diary / log on Google Docs, so the supervisors can follow along (could start with this document).
Schedule: old schedule -> /dev/null. Make a new / revised one. The old schedule must be revised because:
- Sensor fusion takes longer than assumed.
- Data collection is much more time-consuming than first assumed, due to coordination with Henning, his wages and the weather.
- Flying is not just something you go out and do; it requires more time/planning than assumed.
- Pilot / learn-to-fly-yourself issue. MMMI money for a trainer plane? Club / union memberships? Airfield? (Kold College / Odense Airport / a field in the middle of nowhere?)

Agenda
1. Welcome.
2. Project status.
3. Budget:
   a. What we budgeted.
   b. What we have spent.
   c. What we will still need.
4. Data collection + plane testing -> fly ourselves, or pay an external pilot.
5. Review of the first flight with data collection.
6. Schedule:
   a. Revision of deliverables.
   b. The project's further course.
7. Open discussion:
   a. Are expectations being met?
   b. What can we do better?
   c. What is good?