
Institutionen för systemteknik
Department of Electrical Engineering

Examensarbete

Support System for Landing with an Autonomous
Unmanned Aerial Vehicle

Examensarbete utfört i Reglerteknik vid Tekniska högskolan i Linköping
(Master's thesis in Automatic Control at Linköping Institute of Technology)
av
Christian Östman
Anna Forsberg

LiTH-ISY-EX--09/4261--SE
Linköping 2009

Department of Electrical Engineering
Linköpings universitet
SE-581 83 Linköping, Sweden
Support System for Landing with an Autonomous
Unmanned Aerial Vehicle

Examensarbete utfört i Reglerteknik vid Tekniska högskolan i Linköping
av
Christian Östman
Anna Forsberg

LiTH-ISY-EX--09/4261--SE

Handledare (Supervisors): Zoran Sjanic, ISY, Linköpings universitet
                          Daniel Andersson, Saab AB
Examinator (Examiner):    Thomas Schön, ISY, Linköpings universitet

Linköping, 9 January, 2009
Avdelning, Institution (Division, Department):
Division of Automatic Control
Department of Electrical Engineering
Linköpings universitet
SE-581 83 Linköping, Sweden

Datum (Date): 2009-01-09
Språk (Language): Engelska/English
Rapporttyp (Report category): Examensarbete

URL för elektronisk version:
http://www.control.isy.liu.se
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-ZZZZ

ISRN: LiTH-ISY-EX--09/4261--SE

Titel (Title): Support System for Landing with an Autonomous Unmanned Aerial Vehicle
Författare (Author): Christian Östman, Anna Forsberg
Nyckelord (Keywords): Helicopter, Skeldar, Landing, Autonomous, Positioning, UAV, Guidance, Adaptive estimation, Vision based estimation, Autopilot guidance, Vision, EKF, Kalman filter, Image processing
Abstract

There are a number of ongoing projects developing autonomous vehicles, both helicopters and airplanes. The purpose of this thesis is to study a concept for calculating the height and attitude of a helicopter. The system will be active during landing. This thesis includes building an experimental setup and developing algorithms and software.

The basic idea is to illuminate the ground with a certain pattern; in our case we used laser pointers to create this pattern. The ground is then filmed and the images are processed to extract the pattern. This provides us with information about the height and attitude of the helicopter. Furthermore, the concept implies that no equipment on the ground is needed. With further development the sensor should be able to calculate the movement of the underlying surface relative to the helicopter. This is very important when landing on a moving surface, e.g. a ship at sea.

To study the concept empirically an experimental setup was constructed. The setup provides us with the necessary information to evaluate how well the system could perform in reality. The setup is built with simple and cheap materials: an ordinary web camera and laser pointers that are available to everyone.
Sammanfattning

There are several ongoing projects in the field of autonomously flying vehicles, for both helicopters and airplanes. The purpose of our master's thesis is to investigate a concept for a landing sensor for autonomous landing with a helicopter. The thesis work consists of building a physical model for testing the concept and developing software.

The sensor concept consists of illuminating the ground with a special pattern, in our case created by laser pointers, which is then photographed and image processed. The pattern provides information about the height and attitude of the helicopter in the air. Furthermore, the concept implies that no ground equipment is required for the sensor to work. In the long run, this concept should make it possible to calculate how the surface moves relative to the helicopter, which is very important when landing on moving objects, for example a ship.

To investigate how well the sensor performs in reality, an experimental rig has been built. The rig is built with simple and cheap materials; in this case a web camera and laser pointers available in ordinary electronics stores are used.
Östman, Forsberg, 2008.
Acknowledgments

We would like to thank the people who helped us make this thesis what it is. First and foremost we would like to thank Anders Bodin, Saab AB, who introduced us to our supervisor and this master's thesis. We would also like to thank our supervisors Daniel Andersson, Saab AB, and Zoran Sjanic, Linköping University. A special thanks to our examiner Thomas Schön, who led us in the right direction during the whole thesis.

We would also like to thank everyone else who has been involved in our thesis, such as the employees at Saab AB and our families. Our opponents Johan Fältström and Fredrik Gidén also deserve our thanks.

We hope you enjoy reading our report.

Linköping, December 2008
Christian Östman and Anna Forsberg
Contents
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Basic Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Purpose and Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Saab AB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Topics covered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Helicopter Background 5
2.1 History of the Helicopter . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Generals of a helicopter . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 The Dynamics of a Helicopter . . . . . . . . . . . . . . . . . . . . . 7
2.4 The Limitations of a Helicopter . . . . . . . . . . . . . . . . . . . . 10
2.5 Unmanned Aerial Vehicle . . . . . . . . . . . . . . . . . . . . . . . 11
3 Modeling 13
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 Physical Modeling . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.2 System Identification . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Model of the System . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2.2 Height and Attitude Calculations . . . . . . . . . . . . . . . 17
3.3 State-Space Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 The Camera Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4.1 Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . 21
3.4.2 Geometric Camera Models . . . . . . . . . . . . . . . . . . 21
3.4.3 Camera Calibration . . . . . . . . . . . . . . . . . . . . . . 24
4 Filtering 27
4.1 Image Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.1.1 Camera Setup . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.1.2 Identifying the Laser Spots . . . . . . . . . . . . . . . . . . 27
4.1.3 Identifying the Center of the Spots . . . . . . . . . . . . . . 29
4.2 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2.1 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2.2 Kalman Filters . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.3 Implementing EKF . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Association Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3.1 Object-Orientation in Matlab . . . . . . . . . . . . . . . . . 36
4.3.2 Association - Assigning Measurements to the Objects . . . 36
4.4 Resulting Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 imageProcess . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.2 transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.3 findDxDy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4.4 animate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Experiments and Results 41
5.1 Measuring and Estimating the Distances Δx and Δy . . . . . . . . 41
5.1.1 Distances Δx . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.1.2 Analysis of the Estimations of Δx and Δy . . . . . . . . . . 44
5.2 Estimating the Height . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.2.1 Height = 0.1 meters . . . . . . . . . . . . . . . . . . . . . . 45
5.2.2 Height = 0.2 meters . . . . . . . . . . . . . . . . . . . . . . 46
5.2.3 Height = 0.3 meters . . . . . . . . . . . . . . . . . . . . . . 46
5.2.4 Step in height . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2.5 Landing process . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2.6 Analyzing the Height Estimation . . . . . . . . . . . . . . . 48
5.3 Estimating the Angles . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.3.1 Roll = 0° . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.3.2 Roll = 11° . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.3.3 Roll = 17° . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.3.4 Analysis of the Angle Estimation . . . . . . . . . . . . . . . 50
5.4 Summarized RMS-values . . . . . . . . . . . . . . . . . . . . . . . . 51
6 Concluding Remarks 53
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.2.1 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.2.2 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . 54
6.2.3 Landing on a Moving Surface . . . . . . . . . . . . . . . . . 54
Bibliography 57
A A Prototype of the Sensor 59
A.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
A.2 Used Camera Parameters . . . . . . . . . . . . . . . . . . . . . . . 61
B Flow Chart 63
List of Figures
1.1 Photograph of Skeldar, one of several UAV projects at Saab . . . . 2
1.2 The pattern created by the laser pointers. The basic idea in this
thesis is based upon this pattern. . . . . . . . . . . . . . . . . . . . 2
2.1 Leonardo da Vinci's sketch of the rotating airscrew, one of the
inspiration sketches for the modern helicopter. . . . . . . . . . . . 5
2.2 The figure shows an autogyro, which is a combination of an airplane
and a helicopter. . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Fundamental parts of a helicopter . . . . . . . . . . . . . . . . . . . 7
2.4 The figure shows the different types of movement a helicopter can
do. A helicopter can move up, down, right, left, forward, backward
and rotate around a vertical axis. . . . . . . . . . . . . . . . . . . 8
2.5 The figure shows how the tail rotor is used to compensate for the
rotational torque. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 The figure shows the swash plate assembly, the connection between
the engine and the main rotor. The swash plate assembly is the
reason why the helicopter can move in different directions. It can
change the angle of attack of the blades both simultaneously and
individually. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.7 Up and down forces on a helicopter . . . . . . . . . . . . . . . . . . 9
2.8 Forces when banking helicopter . . . . . . . . . . . . . . . . . . . . 10
2.9 Neuron [11], a new project in the field of UAVs. Many different actors
are involved in this project; one of them is Saab AB . . . . . . . . 11
3.1 Illustration of the experimental setup . . . . . . . . . . . . . . . . . 13
3.2 The figure shows the main structure of the system; the dotted part
is the one made in this thesis . . . . . . . . . . . . . . . . . . . . . 15
3.3 The figure shows the geometry of the concept. In the figure you
can see all the angles and definitions for one axis. . . . . . . . . . 16
3.4 The figure shows the geometry of the concept, with the simplification
γ = 0°. In the pictures you can see all the angles and definitions.
The newly defined H is also shown for one axis. . . . . . . . . . . 18
3.5 The illustration shows the different coordinate systems that are
present in a camera model. P = arbitrary point in the real world
that has been captured in the image. . . . . . . . . . . . . . . . . 22
3.6 The illustration shows a pinhole camera. In a camera of this type
all rays go through the optical center. . . . . . . . . . . . . . . . . 23
3.7 A calibration picture of the checkerboard. [10] . . . . . . . . . . . 25
4.1 Close-up on the laser spots shot by the camera in auto mode. Here
you can see that the laser spots are overexposed and therefore have
become white in the middle. . . . . . . . . . . . . . . . . . . . . . 28
4.2 Image capture after adjusting exposure, brightness and gain. . . . 28
4.3 The figure shows how the search algorithm examines a red pixel.
The first thing the algorithm does is to look for a red cross around
the red pixel. When the algorithm finds a red cross it searches for
black pixels around the cross. . . . . . . . . . . . . . . . . . . . . 29
4.4 A red spot with 13 pixels with the center of the spot marked with
a circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 The measured and the EKF-estimated values for the distance Δx₁. 35
4.6 Assigning algorithm when there are more than five dots detected. . 36
4.7 Two measurements inside one of the observer windows . . . . . . . 37
4.8 No measurements inside one of the observer windows . . . . . . . . 38
4.9 Flow chart of the resulting algorithm . . . . . . . . . . . . . . . . . 38
4.10 The output from animate.m . . . . . . . . . . . . . . . . . . . . . . 40
5.1 Plots of the calculated, the estimated and the ground truth Δx₁ in
the roll axis at height 0.1 meter. . . . . . . . . . . . . . . . . . . . 42
5.2 Plots of the calculated, the estimated and the ground truth Δx₂ in
the roll axis at height 0.1 meter. . . . . . . . . . . . . . . . . . . . 42
5.3 Plots of the calculated, the estimated and the ground truth Δx₁ in
the roll axis at height 0.2 meter. . . . . . . . . . . . . . . . . . . . 43
5.4 Plots of the calculated, the estimated and the ground truth Δx₂ in
the roll axis at height 0.2 meter. . . . . . . . . . . . . . . . . . . . 43
5.5 Plots of the calculated, the estimated and the ground truth Δx₁ in
the roll axis at height 0.3 meter. . . . . . . . . . . . . . . . . . . . 44
5.6 Plots of the calculated, the estimated and the ground truth Δx₂ in
the roll axis at height 0.3 meter. . . . . . . . . . . . . . . . . . . . 44
5.7 The estimated, the calculated and the real height at 0.1 meters . . 45
5.8 The estimated, the calculated and the real height at 0.2 meters . . 46
5.9 The estimated, the calculated and the real height at 0.3 meters . . 47
5.10 Step in height . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.11 Height in a landing process started at approx. 0.3 meters . . . . . 48
5.12 Plot of the estimated, calculated and real pitch at 0°. . . . . . . . 49
5.13 Plot of the estimated, calculated and real pitch at 11°. . . . . . . 50
5.14 Plot of the estimated, calculated and real pitch at 17°. . . . . . . 50
A.1 The dimensions of the experimental setup seen from three angles. . 59
A.2 The experimental setup seen from the side. . . . . . . . . . . . . . 60
A.3 The experimental setup seen from below. . . . . . . . . . . . . . . 60
B.1 Flow Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Chapter 1
Introduction
In this first chapter an introduction to this thesis will be given. The background
and the goals of this thesis are explained. A brief outline of the chapters to follow
will also be given.
1.1 Background
Unmanned Aerial Vehicles (UAVs) are a highly topical field; many companies
invest large amounts of money in it, and one of these companies is Saab AB in
Linköping, Sweden. The most recent UAV project is a helicopter named Skeldar,
see Figure 1.1. Compared to an ordinary helicopter, Skeldar is relatively small,
with great potential in many applications, e.g. surveillance. This type of vehicle
contains many advanced control systems, and every system should be able to work
without any human intervention. In the UAV case a reliable landing depends on
sensors and a control system. It is desirable to have a control system without
large transients and without oscillations, to obtain stability when flying. To create
a control system with this behavior you need a lot of information, and the only
information available is what you get from the sensors and from knowledge about
the system. The more reliable and precise the sensor information, the better the
control system.
1.2 Basic Idea
The experimental setup is constructed to resemble a scaled undercarriage of a
helicopter. Five laser pointers and a high-resolution web camera are mounted on
the experimental setup. The laser pointers are mounted so that they create the
pattern illustrated in Figure 1.2. If the helicopter moves up, the spots move away
from each other. If the helicopter moves down, the pattern gets smaller, until the
spots are gathered in one spot when the helicopter is standing on the ground. If
the helicopter is angled, one of the distances, in either the x-axis or the y-axis
depending on the angle, will be greater than the corresponding one. The web
camera is mounted
Figure 1.1. Photograph of Skeldar, one of several UAV projects at Saab
as close as possible to the middle laser pointer in order to get as small a bias error
as possible. For the technical details of the sensor, see Appendix A.
Figure 1.2. The pattern created by the laser pointers. The basic idea in this thesis is
based upon this pattern.
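To make the scaling idea above concrete, here is a small numerical sketch of our own (not code from the thesis); the laser-pointer angle alpha below is a made-up value for illustration. For a level helicopter over flat ground, two opposite laser spots separate linearly with height, which is exactly why the pattern size encodes altitude:

```python
import math

def spot_separation(height_m, alpha_rad):
    """Distance on flat ground between two opposite laser spots, for a
    level helicopter whose lasers are angled alpha_rad outward from the
    vertical camera axis: each spot lies height*tan(alpha) from center."""
    return 2.0 * height_m * math.tan(alpha_rad)

alpha = math.radians(10.0)  # hypothetical laser-pointer angle
for h in (0.1, 0.2, 0.3):   # the heights later used in the experiments
    print(f"H = {h:.1f} m -> spot separation = {spot_separation(h, alpha):.3f} m")
```

At height zero the spots coincide, matching the observation in the text that the spots gather in one point when the helicopter stands on the ground.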
1.3 Purpose and Goal
The purpose of this thesis is to create a system which estimates the height and
attitude of the helicopter relative to the ground. This thesis includes building an
experimental setup and processing the collected data to extract valuable information.
The goals of this thesis are to construct a system that:

- tells the main control system when the helicopter is standing on the ground
- provides the distance between the helicopter and the ground
- provides the angle between the helicopter and the ground
- tells the main control system when the angle between the helicopter and the ground is small enough to land
1.4 Saab AB
Saab AB is a global company with 13,700 employees in more than 50 countries [11].
The company was founded in 1937 and is now one of Sweden's most famous brands
worldwide. Saab AB is active in many different areas, both civil and military. The
most famous product of Saab AB is the fighter aircraft JAS 39 Gripen.

This master's thesis was performed at Saab Aerosystems, where almost all avionics
development is done.
1.5 Topics covered
There are five chapters and two appendices. The main topics dealt with are:

Chapter 2: This chapter explains the background of the helicopter.
Chapter 3: This chapter covers modeling and the models used in this thesis.
Chapter 4: This chapter covers the filtering and image processing used in this thesis.
Chapter 5: This chapter explains the conclusions of the constructed system based
on different validation data. It also contains a section on future work.
Appendix A: This appendix explains the experimental setup.
Appendix B: This appendix contains the flow chart for the code.
Chapter 2
Helicopter Background
In this chapter some basic theory of helicopters and UAVs is presented. Besides
the theory, this chapter also contains the history of helicopters. The theory and
history are collected from [6] and [9].
2.1 History of the Helicopter
In the middle of the 18th century inventors started to experiment with building
helicopters for the first time. They got their inspiration from a popular Chinese
toy: a vertical stick with two sticks that looked like rotor blades at one end.
Another source of inspiration was Leonardo da Vinci, with his sketches of a
rotating airscrew, see Figure 2.1.
Figure 2.1. Leonardo da Vinci's sketch of the rotating airscrew, one of the inspiration
sketches for the modern helicopter.
Since then many ideas, more or less imaginative, have been tried out. The first
ones were driven by steam or battery, but they were too heavy and not able
to fly. In the beginning of the 20th century the gasoline engine enabled the
prototypes to be lighter. This, together with more knowledge about aerodynamics
and physical laws, resulted in a flying helicopter. The remaining problem was to
control it in the air. The aircraft that would become the basis for the modern
helicopter rotor began to take shape in 1923, in the form of an autogyro. An
autogyro is a combination of an airplane and a modern helicopter, see Figure 2.2.
It was Juan de la Cierva who developed this first practical rotorcraft.
Figure 2.2. The figure shows an autogyro, which is a combination of an airplane and
a helicopter.
The next step in development was taken by the German pilot and aircraft
manufacturer Heinrich Focke. He used the idea of the autogyro and tried his own
theories, in which the rotors had a power source of their own. One remaining
problem was that the helicopters were still unstable and shaky in the air, until the
Russian Igor Sikorsky in 1939 created the first helicopter with a tail rotor. The
tail rotor made the helicopter stable in the air, and the helicopter did not shake
as much as before.
Today helicopters are an important part of our society; they are frequently
used by the police, military and as ambulances. One of the reasons is that they
are able to reach places where other vehicles can not.
2.2 Generals of a helicopter
In this section an ordinary helicopter is explained in general, including the
fundamental parts of a helicopter and the movements a helicopter can perform.
The main parts are basically the same for all helicopters, see Figure 2.3. Only
the most important and common parts of a helicopter will be explained.
The biggest rotor on top of the helicopter is the main rotor. The blades of the
rotor are very similar to an airplane wing, but narrow and thin to allow a fast
rotational motion. The blades of the rotor can be given an angle of attack to
control the lift of the helicopter. This is in fact the main control of the lift;
normally you do not change the revolutions per minute (rpm) of the engine. Read
more about the main rotor in Section 2.3.
The drive shaft is the connection between the main rotor and the engine. On
the drive shaft the swash plate assembly is mounted, see Figure 2.6. This device
Figure 2.3. Fundamental parts of a helicopter
has as its main function controlling the helicopter's movement. The swash plate
assembly makes it possible to angle the main rotor blades in order to turn, and
to move forward, backward, up and down. A helicopter can move in several
directions, see Figure 2.4. In order to gain or lose altitude the swash plate changes
the collective pitch of the blades, which means that the angle of all the blades
changes simultaneously. A steeper angle creates more lift than a shallow angle.
When you command forward, backward, left, right or a turn, the swash plate
uses cyclic control, which means that the angle of the blades changes individually
throughout the revolution and creates the desired movement. Read more about
the swash plate assembly in Section 2.3.
The tail rotor has two main tasks.

- The first task, and perhaps the most obvious, is rotating the helicopter around
its vertical z-axis. When you change the angle of attack on the tail rotor, the
helicopter rotates around the vertical z-axis, see the rotational movement in
Figure 2.4.
- The second task is to stabilize the helicopter. When the main rotor spins it
creates a torque. The tail rotor must counteract this torque, otherwise the
helicopter will begin to spin uncontrollably, see Figure 2.5 [6], which in the
worst case can result in a crash.
2.3 The Dynamics of a Helicopter
The dynamics of a helicopter are somewhat less intuitive than the dynamics of an
aircraft. In this chapter the basics of helicopter dynamics are explained; once
explained, the dynamic terms will be used throughout the report.
Figure 2.4. The figure shows the different types of movement a helicopter can perform.
A helicopter can move up, down, right, left, forward, backward and rotate around a
vertical axis.
Figure 2.5. The figure shows how the tail rotor is used to compensate for the rotational
torque.
As said earlier, the main rotor is connected to the drive shaft via the swash
plate assembly; the swash plate assembly is the reason why the helicopter can
move in different directions. Figure 2.6 shows the assembly.
In order for the helicopter to ascend or descend we have to change the angle
of the blades simultaneously. When we want the helicopter to ascend, the thrust
and lift forces have to overcome the drag and weight forces; for descending the
opposite holds, see Figure 2.7. In order to achieve greater or less force in the
upward direction we have to change the blades' angle of attack: the steeper the
angle, the greater the upward force. The thin vertical rods that are connected to
Figure 2.6. The figure shows the swash plate assembly, the connection between the
engine and the main rotor. The swash plate assembly is the reason why the helicopter
can move in different directions. It can change the angle of attack of the blades both
simultaneously and individually.
the blades in Figure 2.6 allow the rotating swash plate to change the angle of the
blades, giving the possibility to change the lift force. This procedure of changing
the angle of attack of all blades simultaneously is called applying collective pitch.
Figure 2.7. Up and down forces on a helicopter
To fly the helicopter forward, the main rotor disc is tilted forward. To make a
turn with the helicopter you tilt the main rotor disc both forward and to the side.
The swash plate assembly can change the angle of the blades individually as they
revolve, allowing the helicopter to tilt in any direction. This procedure is called
applying cyclic pitch. It means that the lift force can be separated into two
components, one acting upward (the vertical component) and one acting horizontally
(the centripetal force), see Figure 2.8. The more the rotor is tilted, the more of the
total lift force is directed horizontally. This decreases the lift acting vertically, so
to maintain altitude the angle of attack of the rotor blades must be increased.
Figure 2.8. Forces when banking helicopter
2.4 The Limitations of a Helicopter
Even though a helicopter has more degrees of freedom than e.g. an airplane,
there are still limitations.
A helicopter needs relatively flat ground to land on. The slope of the ground
on which a helicopter can land depends on the type of helicopter. If the slope
is too steep the helicopter is exposed to the risk of tipping over due to side
forces. Another limitation is that if it is too windy the helicopter cannot fly. The
helicopter can handle some wind, but then it has to bank to create a force opposing
the wind. This can be a problem when the helicopter is about to
land, because the helicopter cannot land if the angle between the helicopter and
the ground is too steep.
2.5 Unmanned Aerial Vehicle
A UAV is an aircraft that has no pilot on board. UAVs can be remotely controlled
or fly autonomously, based on pre-programmed flight plans or on more complex,
dynamically evolving flight plans. There are many different shapes, sizes,
configurations and characteristics when it comes to UAVs. Most of them are used
for military purposes. There are also some civil applications, for cases where a
human observer would be at risk, such as firefighting, reconnaissance support in
natural disasters and police observation of civil disturbances or crime scenes.
Saab AB currently has two UAV projects: Skeldar and Neuron.
Skeldar is a helicopter that is fully autonomous, mobile and has hovering
capacity. The Skeldar system can be certified for flying in non-restricted
airspace. [11]
Neuron is an Unmanned Combat Aerial Vehicle (UCAV) and is developed in
cooperation with five other European countries and their aircraft industries. [11]
Figure 2.9. Neuron [11], a new project in the field of UAVs. Many different actors are
involved in this project; one of them is Saab AB.
Chapter 3
Modeling
This chapter covers some basic theory about modeling and cameras. It also
contains the model and the equations used in this thesis. The experimental setup
is shown in Figure 3.1. The setup was made to imitate reality as closely as possible.
Figure 3.1. Illustration of the experimental setup
3.1 Introduction
A model is often used to describe reality; e.g. a mathematical model is often
used to describe the laws of nature. There are many reasons for using a model.
Two very important ones are that a model is more cost efficient than using the
real system, and that sometimes the real system does not exist yet. Because of
this, much of the development takes place in the simulated world, with models.
When making technical applications it is very common to use a mathematical
model. The theory about models is collected from [3]. The most common way to
describe a model is as a state-space model. A fairly general description of a
non-linear state-space model is:
is:
x = f(x, u) (3.1)
y = h(x, u) (3.2)
x - state of the system
u - the input signal
y - the measured signal
f() & h() are non-linear functions
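To make the notation concrete, the following sketch (our own illustration, not code from the thesis) steps a non-linear state-space model ẋ = f(x, u), y = h(x, u) forward in time with a simple forward-Euler discretization; the functions f and h below are arbitrary placeholders, not the helicopter model:

```python
def f(x, u):
    # placeholder dynamics: a damped first state driven by the input,
    # and a second state that integrates the first
    return [-0.5 * x[0] + u, x[0]]

def h(x, u):
    # placeholder measurement: observe the second state
    return x[1]

def simulate(x0, inputs, dt=0.01):
    """Forward-Euler integration of xdot = f(x, u), collecting y = h(x, u)."""
    x, ys = list(x0), []
    for u in inputs:
        dx = f(x, u)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        ys.append(h(x, u))
    return x, ys

x_final, y = simulate([0.0, 0.0], [1.0] * 100)
```

Any continuous-time model on the form (3.1)-(3.2) can be simulated this way once f and h are written down, which is how such a model is typically validated against measured data.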
There are basically two ways of constructing a model: physical modeling and
identification from observations.
3.1.1 Physical Modeling
The physical way of modeling is to split the system into smaller subsystems,
and then describe each subsystem with mathematical expressions. These smaller
systems are normally easier to describe mathematically than the whole system
at once. When the mathematical descriptions of the subsystems are done, they
are combined into a complete model. This way of creating models is often very
difficult and demands a lot of knowledge of the system that is to be modeled.
3.1.2 System Identication
When you construct a model via system identification you use observations from
the real system to create the mathematical expressions. This way of modeling is
often used together with the physical modeling principle.
3.2 Model of the System
To validate the experimental setup, a mathematical model of the system was
derived. The derived equations were also used in an Extended Kalman Filter
(EKF), which is described further on in the report, see Section 4.2.2.
3.2.1 General
Figure 3.2 shows the main structure of the system. The dotted part is what
this thesis covers. Further on in this section, definitions and explanations of the
equations are presented.
Figure 3.2. The figure shows the main structure of the system; the dotted part is the
one made in this thesis.
The G2 box in Figure 3.2 is meant to symbolize the part where the camera
acquire an image and process it. This box symbolizes the real world in which the
system is meant to act in. The output from that box is Y = (x
1
, x
2
, y
1
, y
2
).
The model G2 takes the input values:
- pitch of the helicopter
- roll of the helicopter

x
-angle of the ground

y
-angle of the ground
H - height above the ground
- constant. Angle of the laser pointers
The values that we want to calculate are the distances x
1
, x
2
, y
1
and
y
2
.
The calculations below are done for one axis (the roll axis); the other is equivalent due to symmetry. For definitions, see Figure 3.3.
In step one of the calculations we start by using the law of sines (3.3a) and a calculation of a1 as in (3.3b):

    sin(π/2 − (γx − φ) − α) / H = sin(α) / a1   (3.3a)

    a1 = Δx1 / cos(γx)   (3.3b)
Figure 3.3. The figure shows the geometry of the concept. In the figure you can see all the angles and definitions for one axis.
After eliminating a1 and further simplification, (3.4) is obtained:

    sin(π/2 − (γx − φ) − α) / H = sin(α) cos(γx) / Δx1

    ⇒ Δx1 = H sin(α) cos(γx) / sin(π/2 − (γx − φ) − α)
          = H tan(α) cos(γx) / (cos(γx − φ) − sin(γx − φ) tan(α))   (3.4)
This is the final step in the calculation of Δx1. For Δx2 the calculation is done in exactly the same way, which ends up in Equation (3.5):

    sin(π/2 + (γx − φ) − α) / H = sin(α) / a2 ,   a2 = Δx2 / cos(γx)

    ⇒ sin(π/2 + (γx − φ) − α) / H = sin(α) cos(γx) / Δx2

    ⇒ Δx2 = H sin(α) cos(γx) / sin(π/2 + (γx − φ) − α)
          = H tan(α) cos(γx) / (cos(γx − φ) + sin(γx − φ) tan(α))   (3.5)
Due to symmetry, the equations will be the same for the pitch axis. Equations (3.6a) to (3.6d) show the equations for both the roll axis and the pitch axis.
The complete model:

    Δx1 = H tan(α) cos(γx) / (cos(γx − φ) − sin(γx − φ) tan(α))   (3.6a)
    Δx2 = H tan(α) cos(γx) / (cos(γx − φ) + sin(γx − φ) tan(α))   (3.6b)
    Δy1 = H tan(α) cos(γy) / (cos(γy − θ) − sin(γy − θ) tan(α))   (3.6c)
    Δy2 = H tan(α) cos(γy) / (cos(γy − θ) + sin(γy − θ) tan(α))   (3.6d)
In this thesis it is assumed that the ground is flat, that is γx = 0° and γy = 0°. In this simplified model the height is also redefined: we have transformed the height into the vertical height H̃, see Figure 3.4 and (3.7). The figure shows how this is done for the roll axis. All this results in a simplified model, but the basic principles are still the same.

    H = H̃ / cos(φ)   (3.7)
With the simplified model, as in Figure 3.4, the equations for Δx1, Δx2, Δy1 and Δy2 are given by (3.8a) to (3.8d):

    Δx1 = H̃ sin(α) / (cos(α − φ) cos(φ))   (3.8a)
    Δx2 = H̃ sin(α) / (cos(α + φ) cos(φ))   (3.8b)
    Δy1 = H̃ sin(α) / (cos(α − θ) cos(θ))   (3.8c)
    Δy2 = H̃ sin(α) / (cos(α + θ) cos(θ))   (3.8d)
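The simplified forward model (3.8a)–(3.8d) is compact enough to sketch directly. The thesis implementation is in Matlab; the Python version below is only an illustrative translation, with function and variable names of our own choosing.

```python
import math

def laser_distances(H_tilde, phi, theta, alpha):
    """Distances from the center dot to the four laser spots, per the
    simplified model (3.8a)-(3.8d): flat ground, vertical height
    H_tilde, roll phi, pitch theta, laser angle alpha (all radians)."""
    dx1 = H_tilde * math.sin(alpha) / (math.cos(alpha - phi) * math.cos(phi))
    dx2 = H_tilde * math.sin(alpha) / (math.cos(alpha + phi) * math.cos(phi))
    dy1 = H_tilde * math.sin(alpha) / (math.cos(alpha - theta) * math.cos(theta))
    dy2 = H_tilde * math.sin(alpha) / (math.cos(alpha + theta) * math.cos(theta))
    return dx1, dx2, dy1, dy2
```

With φ = θ = 0 all four distances collapse to H̃ tan(α), which makes a convenient sanity check of the formulas.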
3.2.2 Height and Attitude Calculations
The equations below are further used to validate the Kalman filter estimation, see Section 4.2.2. The equations are derived from the original model in Figure 3.3 and then simplified to suit the simplified model in Figure 3.4.

Figure 3.4. The figure shows the geometry of the concept, with the simplification γ = 0°. In the picture you can see all the angles and definitions. The newly defined H̃ is also shown for one axis.

All equations below are derived for the roll axis; due to symmetry the equations for the pitch axis will be exactly the same.
Height calculation, H
Begin by using the law of sines to derive the height, according to (3.9):

    H = b sin(π/2 + (γx − φ) − α) / sin(π/2 − (γx − φ))   (3.9a)

    b = (a1 + a2) sin(π/2 − (γx − φ) − α) / sin(2α)   (3.9b)

    a1 + a2 = (Δx1 + Δx2) / cos(γx)   (3.9c)

By inserting (3.9c) into (3.9b) we end up with:

    b = (Δx1 + Δx2) sin(π/2 − (γx − φ) − α) / (sin(2α) cos(γx))   (3.10)
Then (3.10) is put into (3.9a), which gives (3.11) and, after simplifications, ends up in (3.12):

    H = [sin(π/2 + (γx − φ) − α) / sin(π/2 − (γx − φ))] · [sin(π/2 − (γx − φ) − α) (Δx1 + Δx2) / (sin(2α) cos(γx))] =   (3.11)

      = [cos((γx − φ) − α) / cos(γx − φ)] · [cos((γx − φ) + α) (Δx1 + Δx2) / (sin(2α) cos(γx))]

    ⇒ H = [cos((γx − φ) − α) cos((γx − φ) + α) / (cos(γx − φ) sin(2α) cos(γx))] (Δx1 + Δx2)   (3.12)

After the simplification γx = 0° and adding the definition of H̃ from Figure 3.4, we get a simpler equation for the height (3.13):

    H̃ = [cos(φ + α) cos(φ − α) / (cos(φ) sin(2α))] (Δx1 + Δx2)   (3.13)
Calculation of (γx − φ)
In order to validate the attitude estimates in the EKF, equations for the angle are also derived. These are again done for the roll axis, but look the same for the pitch axis.
Equations (3.6a) and (3.6b) are used to derive the roll angle: solve for H and eliminate H. For convenience, (3.6a) and (3.6b) are repeated below.

    Δx1 = H tan(α) cos(γx) / (cos(γx − φ) − sin(γx − φ) tan(α))   (3.14a)
    Δx2 = H tan(α) cos(γx) / (cos(γx − φ) + sin(γx − φ) tan(α))   (3.14b)
The simplifications follow below, and end up in (3.15):

    Δx1 / Δx2 = (cos(γx − φ) + sin(γx − φ) tan(α)) / (cos(γx − φ) − sin(γx − φ) tan(α))

    ⇒ Δx1 / Δx2 = (1 + tan(γx − φ) tan(α)) / (1 − tan(γx − φ) tan(α))

    ⇒ Δx1 (1 − tan(γx − φ) tan(α)) = Δx2 (1 + tan(γx − φ) tan(α))

    ⇒ (Δx1 + Δx2) tan(γx − φ) tan(α) = Δx1 − Δx2

    ⇒ tan(γx − φ) = (Δx1 − Δx2) / ((Δx1 + Δx2) tan(α))   (3.15)
This finally gives us the angle according to (3.16):

    γx − φ = tan⁻¹[ (Δx1 − Δx2) / ((Δx1 + Δx2) tan(α)) ]   (3.16)

With the simplification (γx = 0°) as in Figure 3.4, the equation ends up in (3.17):

    φ = tan⁻¹[ (Δx2 − Δx1) / ((Δx1 + Δx2) tan(α)) ]   (3.17)
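Equations (3.13) and (3.17) together invert the simplified one-axis model: the roll angle follows from the difference of the two distances and the height from their sum. A minimal sketch (Python rather than the thesis's Matlab; the function name is ours):

```python
import math

def height_and_roll(dx1, dx2, alpha):
    """Invert the simplified (gamma = 0) roll-axis model:
    roll angle from (3.17), vertical height from (3.13)."""
    phi = math.atan((dx2 - dx1) / ((dx1 + dx2) * math.tan(alpha)))
    h = (math.cos(phi + alpha) * math.cos(phi - alpha)
         / (math.cos(phi) * math.sin(2 * alpha))) * (dx1 + dx2)
    return h, phi
```

Feeding in distances generated by (3.8a)–(3.8b) recovers the roll angle exactly, and for the level case (Δx1 = Δx2 = H̃ tan α) the height expression reduces to H̃.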
3.3 State-Space Model
The equations derived in the previous sections will now be assembled into a state-space model. The state vector is chosen as:

    x_t = ( φ  θ  H̃  Δx1  Δx2  Δy1  Δy2 )ᵀ   (3.18)

Since there is no known dynamic relation in the system, the best approximation is that the states do not change, that is ẋ = 0. Because of this, the discrete-time dynamic equation will be:

    f(x_t) = I₇ x_t + w_t   (3.19)
The measured values are Δx1, Δx2, Δy1 and Δy2. To make the system observable, four extra measurements were added. These measurements are always zero, because they represent the estimated Δ-values minus the first four measurement equations. If the estimates are correct, these values should even out to zero.
The measurement equations are:

    y_t = ( Δx1,t  Δx2,t  Δy1,t  Δy2,t  0  0  0  0 )ᵀ = h(x_t) + e_t   (3.20)

where, with the states denoted x1 = φ, x2 = θ, x3 = H̃ and x4, …, x7 = Δx1, Δx2, Δy1, Δy2, the components of h(x_t) are:

    h1(x_t) = x3,t tan(α) / [(cos(x1,t) − sin(x1,t) tan(α)) cos(x1,t)]
    h2(x_t) = x3,t tan(α) / [(cos(x1,t) + sin(x1,t) tan(α)) cos(x1,t)]
    h3(x_t) = x3,t tan(α) / [(cos(x2,t) − sin(x2,t) tan(α)) cos(x2,t)]
    h4(x_t) = x3,t tan(α) / [(cos(x2,t) + sin(x2,t) tan(α)) cos(x2,t)]
    h5(x_t) = x4,t − h1(x_t)
    h6(x_t) = x5,t − h2(x_t)
    h7(x_t) = x6,t − h3(x_t)
    h8(x_t) = x7,t − h4(x_t)
3.4 The Camera Sensor
The theory behind cameras and about using cameras with Matlab is gathered from [7] and [10]. In this section, basic facts about the camera sensor are presented.
3.4.1 Coordinate Systems
There are three different coordinate systems that are needed to model the camera sensor, see Figure 3.5. They are:

    Earth (e) – fixed to the earth
    Camera (c) – attached to the moving camera
    Image (i) – perpendicular to the optical axis, located one focal length from the optical center
3.4.2 Geometric Camera Models
A regular camera creates a 2D projection of the real 3D world. In order to obtain the distances in the actual world along all coordinate axes, we must create a mathematical model for this purpose. We can split our model in two: one linear model and one non-linear model. The reason for this classification is that a regular camera is not linear, and therefore the image has to be transformed before we can apply a linear model to measure real-world distances.
Figure 3.5. The illustration shows the different coordinate systems that are present in a camera model. P = arbitrary point in the real world that has been captured in the image.
Linear pinhole model
In this model we assume that all rays go through the optical center, see Figure 3.6. A pinhole camera as in Figure 3.6 actually records the image upside-down at z = −f. It is also possible, with the same result, to imagine a non-inverted image at z = f. The length between the optical center and the image plane, f, is often referred to as the focal length. The actual image center is the intersection between the image plane and the optical axis, which is perpendicular to the image plane and goes through the optical center.
In Figure 3.6, a point in the real world (3D) is denoted

    P = ( X_e  Y_e  Z_e )ᵀ   (3.21)

and the point where the ray from the optical center to this point intersects the image plane is denoted

    p = ( x_i  y_i )ᵀ.   (3.22)
To obtain a very simple camera model, let us assume that f = 1. With similar triangles we can now show that

    ( x_i  y_i )ᵀ = (1 / Z_e) ( X_e  Y_e )ᵀ   (3.23)

This formula, Equation (3.23), is referred to as the normalized camera model.
In a digital camera the coordinate axes are attached to the upper left corner of the image, and the image coordinates are

    p′ = ( u  v )ᵀ   (3.24)

Figure 3.6. The illustration shows a pinhole camera. In a camera of this type all rays go through the optical center.

With further calculations as in [7] you end up with a pinhole camera model that describes the projection of the 3D position onto the 2D image plane. The model is presented in (3.25):

    λ ( u  v  1 )ᵀ = K · ( I₃  0₃ₓ₁ ) · ( X  Y  Z  1 )ᵀ   (3.25)

where the camera calibration matrix is

    K = ⎡ f·s_x  f·s_θ  u₀ ⎤
        ⎢   0    f·s_y  v₀ ⎥
        ⎣   0      0     1 ⎦
    Symbol       Description
    f            Focal length
    s_x          Pixel size (in metric units) along the x-axis
    s_y          Pixel size (in metric units) along the y-axis
    s_θ          Skew factor, explaining non-rectangular pixels
    u₀           Horizontal (x) position of the principal point in pixels
    v₀           Vertical (y) position of the principal point in pixels
    s_x / s_y    Aspect ratio, displayed width divided by displayed height

Table 3.1. Definitions and explanations of the elements in the pinhole camera model and camera calibration matrix, K.
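The linear projection (3.25) can be sketched in a few lines. The intrinsic values below are made up for illustration and are not taken from the thesis camera's calibration:

```python
def project(K, P):
    """Pinhole projection (3.25): map a 3D point P = (X, Y, Z), given in
    the camera frame, to pixel coordinates (u, v). K is the 3x3
    calibration matrix; the scale factor lambda equals Z here, because
    the last row of K is (0, 0, 1)."""
    X, Y, Z = P
    u = (K[0][0] * X + K[0][1] * Y + K[0][2] * Z) / Z
    v = (K[1][1] * Y + K[1][2] * Z) / Z
    return u, v

# Illustrative intrinsics: f*s_x = f*s_y = 500 pixels, zero skew,
# principal point at (320, 240).
K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
```

A point on the optical axis lands exactly on the principal point (u₀, v₀), which is a quick check that K is wired up correctly.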
Nonlinear lens distortion
This section describes how the distorted image you get from a digital camera is transformed into an undistorted image. The undistorted image is the type of image you would get from a pinhole camera. When the picture has been transformed, the pinhole camera model can be applied, and you get the relations between the 3D world and the 2D picture that were desired from the beginning. When a camera is calibrated you get the variables necessary to create the undistorted image.
When an image in this thesis is undistorted, the following equations are used:

    u_normalized = (u − u₀) / (f s_x)   (3.26a)
    v_normalized = (v − v₀) / (f s_y)   (3.26b)

    d₁ = u²_normalized + v²_normalized   (3.26c)
    d₂ = d₁²   (3.26d)
This sums up into:

    u_undistorted = u + (u − u₀)(k₀ d₁ + k₁ d₂)   (3.27a)
    v_undistorted = v + (v − v₀)(k₀ d₁ + k₁ d₂)   (3.27b)

where k₀ and k₁ are distortion coefficients which are taken from the calibration.
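The undistortion equations (3.26a)–(3.27b) translate almost line by line; a sketch in Python instead of the thesis's Matlab:

```python
def undistort_point(u, v, f, sx, sy, u0, v0, k0, k1):
    """Radial undistortion following (3.26a)-(3.27b): normalize the
    pixel, form the radial terms d1 and d2 = d1**2, and move the point
    by (k0*d1 + k1*d2) times its offset from the principal point."""
    u_n = (u - u0) / (f * sx)
    v_n = (v - v0) / (f * sy)
    d1 = u_n ** 2 + v_n ** 2
    d2 = d1 ** 2
    u_ud = u + (u - u0) * (k0 * d1 + k1 * d2)
    v_ud = v + (v - v0) * (k0 * d1 + k1 * d2)
    return u_ud, v_ud
```

With k₀ = k₁ = 0 a point is returned unchanged, and the principal point never moves regardless of the coefficients, since its offset (u − u₀, v − v₀) is zero.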
3.4.3 Camera Calibration
A camera reduces the dimensions of the light that is taken in from a 3D scene to a 2D image. Light from the environment is focused on an image plane and captured. Each pixel on the image plane therefore corresponds to a cone of light from the original scene. By calibrating a camera you determine which incoming light is associated with each pixel in the resulting image. If the camera is an ideal pinhole camera, the calibration can be described by a simple projection matrix. When using more complex camera systems, errors resulting from misaligned lenses and deformations in their structures can result in more complex distortions in the final image. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space, and is derived from the intrinsic and extrinsic parameters of the camera.
To calibrate a camera, a simple planar checkerboard can be used, see Figure 3.7. The calibration is done by taking pictures of the checkerboard from different angles and distances. Using the relation p′ = K (R T) P, an optimization problem can be constructed to find the calibration matrix K. R = R_ce and T = c_e denote the orientation and translation of the camera with respect to the earth coordinate system. By using the maximum likelihood method the problem can be formulated and solved. The parameters can be calculated in Matlab by loading the pictures and using a calibration toolbox [10].
Figure 3.7. A calibration picture of the checkerboard. [10]
Chapter 4
Filtering
This chapter covers the basic theory of filtering and image processing. It also contains the approach and algorithms for creating the system.

4.1 Image Processing
To be able to extract the distances between the laser dots, an image was captured from the camera. This was done using Video For Matlab (VFM), a free frame-grabbing tool for Matlab [12]. The captured image was then processed to obtain the necessary information. All image processing is described below. All algorithms that are used were created by the authors.
4.1.1 Camera Setup
Due to the intensity of the laser beams, the image created by the camera with the automatic settings caused the laser spots on the surface to appear as circles with white centers and red edges, see Figure 4.1.
Because of this problem it became difficult to identify the laser spots, especially when they were surrounded by a bright environment or reflections from a light. To prevent this, the exposure time was lowered; the intense laser spots then became red and the surrounding environment became dark, see Figure 4.2.
After decreasing the exposure time, the picture contained fewer red pixels, and therefore the identification of the laser spots became easier. On top of this, as a positive side effect, the computational time for the image processing became shorter, because there were fewer interesting pixels that needed to be processed.
4.1.2 Identifying the Laser Spots
A picture is represented by a matrix with three layers: one for the red color, one for the green color and one for the blue color. The first thing the algorithm does is to select the red layer and set the blue and green layers to zero. After selecting the red layer, we sort out the most intense pixels in it. Then a function

Östman, Forsberg, 2008.

Figure 4.1. Close-up of the laser spots shot by the camera in auto mode. Here you can see that the laser spots are overexposed and therefore have become white in the middle.

Figure 4.2. Image captured after adjusting exposure, brightness and gain.

is called that finds the spots where the circles are located, based on the sorted-out intense pixels.
The algorithm for finding the circle spots is based upon the circle symmetry, see Figure 4.3. The algorithm searches through the intense pixels and checks their surroundings. First it checks whether there are red pixels in a cross; if there is a cross of red pixels, the algorithm checks whether there are black pixels surrounding the red cross. If there are enough black pixels surrounding the cross, the algorithm decides that this is a part of a laser spot.
When the algorithm has decided which pixels belong to the laser spots, we build a new image with only these spots. From this new picture we calculate the centers of the spots.
Since the lens makes the image distorted, it also needs to be undistorted. This was done using the camera calibration. Only the interesting pixels needed to be undistorted.
Figure 4.3. The figure shows how the search algorithm examines a red pixel. The first thing the algorithm does is to look for a red cross around the red pixel. When the algorithm finds a red cross, it searches for black pixels around the cross.
4.1.3 Identifying the Center of the Spots
In order to calculate the distances between the laser spots, it is necessary to know where the centers of the spots are. The pixels of each spot were identified by looking at one pixel and then iterating through the rest to find which pixels were located in the same spot. Then, to find the center of each spot, a simple averaging was done: for each of the five spots, the pixel locations in both directions were summed and then divided by the number of pixels in the spot. For example, for a spot like the one in Figure 4.4, the equations would be:

    x_center: (2·1 + 3·3 + 4·5 + 5·3 + 6·1) / 13 = 4   (4.1a)
    y_center: (2·1 + 3·3 + 4·5 + 5·3 + 6·1) / 13 = 4   (4.1b)

That is, the center is in pixel (4, 4), which is correct according to the figure. If the equations result in numbers that are not integers, the numbers are rounded to point out the right pixel location.
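The averaging in (4.1a)–(4.1b) can be reproduced with a short sketch. The exact pixel layout of the spot in Figure 4.4 is an assumption on our part (a diamond shape consistent with the worked sums):

```python
def spot_center(pixels):
    """Center of a laser spot: the rounded mean of the pixel
    coordinates, as in equations (4.1a)-(4.1b)."""
    n = len(pixels)
    x_c = round(sum(x for x, _ in pixels) / n)
    y_c = round(sum(y for _, y in pixels) / n)
    return x_c, y_c

# A 13-pixel diamond-shaped spot matching the column counts of the
# worked example (the exact shape of Figure 4.4 is assumed here):
spot = [(4, 2),
        (3, 3), (4, 3), (5, 3),
        (2, 4), (3, 4), (4, 4), (5, 4), (6, 4),
        (3, 5), (4, 5), (5, 5),
        (4, 6)]
# spot_center(spot) -> (4, 4)
```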
Figure 4.4. A red spot with 13 pixels, with the center of the spot marked with a circle.
4.2 Filters
There is almost always some noise in the measurements, and to suppress it, filters are needed. In the experimental setup the measurements are the distances Δx1, Δx2, Δy1 and Δy2.
4.2.1 Digital Filters
A digital filter is useful for reducing noise in data or for shaping the frequency distribution. A digital filter is characterized by its transfer function or, equivalently, its difference equation. A linear digital filter can be expressed with a transfer function that is a transform in the Z-domain [5]:

    H(z) = B(z)/A(z) = (b₀ + b₁z⁻¹ + b₂z⁻² + … + b_N z⁻ᴺ) / (1 + a₁z⁻¹ + a₂z⁻² + … + a_M z⁻ᴹ)   (4.2)
To Separate a Signal from Noise
A linear, time-invariant filter can be characterized by its impulse response h(k) and transfer function H(z):

    H(z) = Σ_{k=−∞}^{∞} h(k) z⁻ᵏ   (4.3)

When {u(t), t = 0, 1, …, N−1} is filtered through H, the filtered signal is

    y(t) = Σ_{s=0}^{N−1} h(t − s) u(s)   (4.4)
The most common way to use linear filters is to attenuate noise components in the measured signal. Assume that the signal can be written u(t) = s(t) + n(t), where s is the real signal and n is the noise. If the signal u(t) is filtered through a linear filter, as in (4.4), the output y(t) should resemble s(t) as much as possible. Let ε(t) = y(t) − s(t). The time-discrete Fourier transform (TDFT) becomes:

    E(e^{iω}) = Y(e^{iω}) − S(e^{iω}) = H(e^{iω})U(e^{iω}) − S(e^{iω}) =
              = (H(e^{iω}) − 1)S(e^{iω}) + H(e^{iω})N(e^{iω})   (4.5)
According to Parseval's formula:

    Σ_{t=0}^{∞} ε²(t) = (1/2π) ∫_{−π}^{π} |E(e^{iω})|² dω   (4.6)
This implies that to make ε(t) small, the filter H(z) needs to be chosen such that:

    |H(e^{iω}) − 1| is small for the ω where |S(e^{iω})| is large
    |H(e^{iω})| is small for the ω where |N(e^{iω})| is large

There are two different approaches to succeed with this:
1. Make mathematical models for |S(e^{iω})| and |N(e^{iω})| and use them to calculate the H that minimizes Equation (4.6).
2. Make an approximate estimate of the frequencies at which |S(e^{iω})| and |N(e^{iω})| are large/small, and choose H(e^{iω}) close to 1 and 0, respectively, at those frequencies.
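Approach 2 is essentially what a simple low-pass filter does when the noise sits at high frequencies. As an illustration (this specific filter is our example, not necessarily the one used in the thesis), a first-order IIR filter:

```python
def lowpass(u, a):
    """First-order IIR low-pass filter y(t) = (1-a)*y(t-1) + a*u(t),
    i.e. H(z) = a / (1 - (1-a)*z^-1): gain 1 at DC (omega = 0) and a
    small gain near the Nyquist frequency. Requires 0 < a <= 1."""
    y = []
    prev = u[0]  # start at the first sample to reduce the transient
    for sample in u:
        prev = (1 - a) * prev + a * sample
        y.append(prev)
    return y
```

A constant (low-frequency) signal passes through unchanged, while a ±1 alternation at the Nyquist frequency is strongly attenuated.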
4.2.2 Kalman Filters
The Kalman filter (KF) is an efficient recursive linear filter that estimates the state of a dynamic system from a series of noisy measurements. In this thesis the model is nonlinear, so the KF cannot be used directly. Instead the Extended Kalman Filter (EKF) can be used. The EKF is obtained by linearizing the system around the estimated trajectory and then applying the KF. [8]
Kalman Filter
A linear discrete-time state-space model is described as:

    x_t = F_{t−1} x_{t−1} + G^u_{t−1} u_{t−1} + w_{t−1}   (4.7a)
    y_t = H_t x_t + e_t   (4.7b)

where w_t and e_t are assumed to be zero-mean Gaussian white noise with covariances Q_t and R_t respectively, see (4.8a) and (4.8b). There is no input signal u in our system, so the term G^u_{t−1} u_{t−1} disappears.

    w_t ∼ N(0, Q_t)   (4.8a)
    e_t ∼ N(0, R_t)   (4.8b)
The algorithm for the Kalman filter is:
1. Initialize: Set x̂_{0|0} = x₀ and P_{0|0} = P₀.
2. Time update:

    x̂_{t|t−1} = F_{t−1} x̂_{t−1|t−1}   (4.9a)
    P_{t|t−1} = F_{t−1} P_{t−1|t−1} Fᵀ_{t−1} + Q_{t−1}   (4.9b)

3. Measurement update:

    K_t = P_{t|t−1} Hᵀ_t (H_t P_{t|t−1} Hᵀ_t + R_t)⁻¹   (4.10a)
    x̂_{t|t} = x̂_{t|t−1} + K_t (y_t − H_t x̂_{t|t−1})   (4.10b)
    P_{t|t} = (I − K_t H_t) P_{t|t−1}   (4.10c)

4. Set t := t + 1 and repeat from step 2.
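The four steps above translate almost directly into code. A scalar (single-state) sketch in Python follows; the thesis implementation is in Matlab, and the variable names here are ours:

```python
def kalman_filter_1d(measurements, f, h, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter following steps (4.9a)-(4.10c). f and h are
    the scalar state-transition and measurement coefficients; q and r
    are the process- and measurement-noise variances."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # Time update (4.9a)-(4.9b)
        x = f * x
        p = f * p * f + q
        # Measurement update (4.10a)-(4.10c)
        k = p * h / (h * p * h + r)
        x = x + k * (y - h * x)
        p = (1 - k * h) * p
        estimates.append(x)
    return estimates
```

Run on a constant signal with the random-walk model f = h = 1, the estimate settles on the signal value after a handful of iterations.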
Extended Kalman Filter
To use the KF, the model needs to be linearized. For an arbitrary non-linear system:

    x_t = f_{t−1}(x_{t−1}, u_{t−1}) + w_{t−1}   (4.11a)
    y_t = h_t(x_t, u_t) + e_t   (4.11b)
The system can be linearized around the latest state estimate with a first-order Taylor expansion,

    f(x_t, u_t) ≈ f(x̂_t, u_t) + F_t (x_t − x̂_{t|t−1})   (4.12a)
    h(x_t, u_t) ≈ h(x̂_{t|t−1}, u_t) + H_t (x_t − x̂_{t|t−1})   (4.12b)

where

    F_t = ∂f(x, u, v)/∂x |_{(x,u,v) = (x̂_{t|t}, u_t, 0)}   (4.13a)
    H_t = ∂h(x, u)/∂x |_{(x,u) = (x̂_{t|t}, u_t)}   (4.13b)
The algorithm for the EKF is:
1. Initialize: Set x̂_{0|0} = x₀ and P_{0|0} = P₀.
2. Time update:

    x̂_{t|t−1} = f_{t−1}(x̂_{t−1|t−1}, u_{t−1})   (4.14a)
    P_{t|t−1} = F_{t−1} P_{t−1|t−1} Fᵀ_{t−1} + Q_{t−1}   (4.14b)

3. Measurement update:

    x̂_{t|t} = x̂_{t|t−1} + K_t (y_t − h(x̂_{t|t−1}, u_t))   (4.15a)
    P_{t|t} = (I − K_t H_t) P_{t|t−1}   (4.15b)
    K_t = P_{t|t−1} Hᵀ_t (H_t P_{t|t−1} Hᵀ_t + R_t)⁻¹   (4.15c)

4. Set t := t + 1 and repeat from step 2.
4.2.3 Implementing the EKF
With the spots identified, we can now calculate the relative distances between them. By simply counting the pixels between the centers we get a relative distance. These distances are then transformed to real values using the camera model. With these distances, the height and attitude of the helicopter can be estimated using the EKF. Some equations are repeated from Chapter 3 for convenience.
The states are:

    x_t = ( φ  θ  H̃  Δx1  Δx2  Δy1  Δy2 )ᵀ   (4.16)

    f(x_t) = I₇ x_t   (4.17)
The measurement equations:

    y_t = ( Δx1,t  Δx2,t  Δy1,t  Δy2,t  0  0  0  0 )ᵀ = h(x_t)   (4.18)

with the components of h(x_t) given by:

    h1(x_t) = x3,t tan(α) / [(cos(x1,t) − sin(x1,t) tan(α)) cos(x1,t)]
    h2(x_t) = x3,t tan(α) / [(cos(x1,t) + sin(x1,t) tan(α)) cos(x1,t)]
    h3(x_t) = x3,t tan(α) / [(cos(x2,t) − sin(x2,t) tan(α)) cos(x2,t)]
    h4(x_t) = x3,t tan(α) / [(cos(x2,t) + sin(x2,t) tan(α)) cos(x2,t)]
    h5(x_t) = x4,t − h1(x_t)
    h6(x_t) = x5,t − h2(x_t)
    h7(x_t) = x6,t − h3(x_t)
    h8(x_t) = x7,t − h4(x_t)
By differentiating (4.17) and (4.18) we get:

    F_t = I₇   (4.19)

    H_t = ⎡ ∂h1/∂x1 ⋯ ∂h1/∂x7 ⎤
          ⎢    ⋮    ⋱    ⋮    ⎥
          ⎣ ∂h8/∂x1 ⋯ ∂h8/∂x7 ⎦   (4.20)
To get an easier overview of the equations, a(x_t) can be set to the first four measurement equations:

    a(x_t) = ⎛ x3 tan(α) / [(cos(x1) − sin(x1) tan(α)) cos(x1)] ⎞
             ⎜ x3 tan(α) / [(cos(x1) + sin(x1) tan(α)) cos(x1)] ⎟
             ⎜ x3 tan(α) / [(cos(x2) − sin(x2) tan(α)) cos(x2)] ⎟
             ⎝ x3 tan(α) / [(cos(x2) + sin(x2) tan(α)) cos(x2)] ⎠   (4.21)
Then, if the matrix A_t contains the derivatives of a(x_t) with respect to x1, x2 and x3, H_t becomes:

    H_t = ⎡  A_t   0₄ₓ₄ ⎤
          ⎣ −A_t   I₄   ⎦   (4.22)
The derivatives in A_t that are not zero are:

    ∂h1/∂x1 = x3 tan(α) [sin(2x1) + tan(α)(cos²(x1) − sin²(x1))] / [cos²(x1) (cos(x1) − sin(x1) tan(α))²]   (4.23a)

    ∂h1/∂x3 = tan(α) / [cos(x1) (cos(x1) − sin(x1) tan(α))]   (4.23b)

    ∂h2/∂x1 = x3 tan(α) [sin(2x1) − tan(α)(cos²(x1) − sin²(x1))] / [cos²(x1) (cos(x1) + sin(x1) tan(α))²]   (4.23c)

    ∂h2/∂x3 = tan(α) / [cos(x1) (cos(x1) + sin(x1) tan(α))]   (4.23d)

    ∂h3/∂x2 = x3 tan(α) [sin(2x2) + tan(α)(cos²(x2) − sin²(x2))] / [cos²(x2) (cos(x2) − sin(x2) tan(α))²]   (4.23e)

    ∂h3/∂x3 = tan(α) / [cos(x2) (cos(x2) − sin(x2) tan(α))]   (4.23f)

    ∂h4/∂x2 = x3 tan(α) [sin(2x2) − tan(α)(cos²(x2) − sin²(x2))] / [cos²(x2) (cos(x2) + sin(x2) tan(α))²]   (4.23g)

    ∂h4/∂x3 = tan(α) / [cos(x2) (cos(x2) + sin(x2) tan(α))]   (4.23h)
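Hand-derived Jacobians are easy to get wrong, so a numerical cross-check is cheap insurance. The sketch below (Python; the evaluation-point values are arbitrary) compares (4.23a) against a central difference of the first measurement equation:

```python
import math

def h1(x1, x3, alpha):
    """First row of h(x): the Delta-x1 measurement as a function of the
    roll state x1 and the height state x3, cf. (4.18)."""
    t = math.tan(alpha)
    return x3 * t / ((math.cos(x1) - math.sin(x1) * t) * math.cos(x1))

def dh1_dx1(x1, x3, alpha):
    """The analytic Jacobian entry (4.23a)."""
    t = math.tan(alpha)
    num = x3 * t * (math.sin(2 * x1) + t * (math.cos(x1) ** 2 - math.sin(x1) ** 2))
    den = math.cos(x1) ** 2 * (math.cos(x1) - math.sin(x1) * t) ** 2
    return num / den

# Cross-check the analytic entry against a central difference:
eps = 1e-6
x1, x3, alpha = 0.1, 0.5, 0.3
numeric = (h1(x1 + eps, x3, alpha) - h1(x1 - eps, x3, alpha)) / (2 * eps)
# numeric and dh1_dx1(x1, x3, alpha) agree up to finite-difference error
```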
Example of EKF Filtering
Figure 4.5 shows the measured and the estimated Δx1. All further calculations depend on the EKF estimates of the distances being good enough. In the plot in Figure 4.5 it is obvious that the EKF converges, since the estimated value follows the measured one.

Figure 4.5. The measured and the EKF-estimated values for the distance Δx1.
4.3 Association Problem
A basic requirement for getting the EKF to work as expected is that the measurements are sent to the filter in the correct order. This means that it is necessary to keep track of which measurement belongs to which laser dot. The following section explains the solution.
4.3.1 Object Orientation in Matlab
In order to make the laser dots remember their last locations, the laser dots can be made into objects in Matlab. Matlab has built-in support for object orientation, and an instance of the object was created for each laser dot. The main function of the object is to keep track of the laser dot's last and present positions and its distance to the center laser dot, the Δ-values.
4.3.2 Association - Assigning Measurements to the Objects
When assigning measurements to the laser dot objects, an observation window around the old position of the laser dot is applied, see Figure 4.6. The observation window is the outer limit of the search; hereby we easily filter out measurements that are not interesting, see the crossed-out dots in Figure 4.6. The size of the observation window depends on the height of the helicopter: the closer to the ground, the smaller the window.

Figure 4.6. The assignment algorithm when there are more than five dots detected.
There are three cases that can occur during the assignment:

1. Only one measurement inside the observation window
2. Two or more measurements inside the observation window, see Figure 4.7
3. No measurement inside the observation window, see Figure 4.8

The first case, when only one measurement is inside the observation window, is the easiest: the laser dot object is assigned to the measurement and the object is updated with the position of the measurement.

Figure 4.7. Two measurements inside one of the observation windows.

The second case is a bit critical; the assignment has to be correct in order to get the correct measurement into the EKF and the correct estimates out of it. To ensure that the correct measurement is chosen, it is assumed that the measurement nearest the old position of the laser dot object is the correct one. The distances to the different measurements are calculated and the nearest one is chosen.
When no measurement is present inside one of the observation windows, the algorithm assumes that this measurement is lost, and the position of the laser dot object is updated with the estimated position. In the case when the estimate is too far from the last position, we update the object with its last position.
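The three association cases can be sketched as a small helper; this is a simplified illustration of the logic in this section, not the thesis code:

```python
def associate(old_pos, est_pos, candidates, window):
    """Assign one measurement to a laser-dot object, mirroring the
    three cases of Section 4.3.2. Candidates outside the observation
    window around old_pos are discarded; the nearest remaining one
    wins; if none remain, the estimated position est_pos is used."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    inside = [c for c in candidates if dist2(c, old_pos) <= window ** 2]
    if inside:                    # cases 1 and 2: nearest candidate wins
        return min(inside, key=lambda c: dist2(c, old_pos))
    return est_pos                # case 3: measurement lost
```

With several candidates in the window the nearest one is returned; with none, the fallback is the estimate, as described above.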
Figure 4.8. No measurements inside one of the observation windows.
4.4 Resulting Algorithm
The resulting algorithm is described below in pseudo-code. Several help functions are not included, to give a better overview. The whole flow chart of the system can be seen in Appendix B.

Figure 4.9. Flow chart of the resulting algorithm.
4.4.1 imageProcess
Input: Picture
Output: Center locations of the found dots

    pixels = find(pixels that are brighter than the threshold value);
    pixels = undistortImage(pixels);
    pixels = findCircle(pixels);
    pixels = calculateCenters(pixels);

Algorithm 1: The algorithm for the first image processing step, which extracts the centers of the dots.
4.4.2 transform
Input: Center locations of the found dots in pixel coordinates
Output: Center locations of the found dots in real coordinates

    for number of centers do
        realCenters = K⁻¹ · pixelCenters;
    end
    realCenters = realCenters · H̃

Algorithm 2: The algorithm for the transformation from pixel coordinates to real coordinates.
4.4.3 findDxDy
Input: Center locations of the found dots in real coordinates
Output: Δ-distances, height and angles

    if height is small and fewer than three centers found then
        all positions are set to zero;
    else
        for all centers do
            look in a window around the last position;
            if center not found then
                look in a window around the estimated position;
            end
        end
        if the center dot has not been found the last two times but there are five centers then
            initiate the system again;
        end
        if one of the dots has not been found the last eight times but there are five centers then
            initiate the system again;
        end
    end
    estimations = EKF(distances);

Algorithm 3: The algorithm for finding the estimates of the Δ-distances, height and angles.
4.4.4 animate
This code section handles the animation. The output can be seen in Figure 4.10.

Figure 4.10. The output from animate.m.
Chapter 5
Experiments and Results
In this chapter the results from the experimental setup are reviewed. The results should be considered in light of the built-in flaws of the experimental setup. The setup is not very accurate, since it has been built from simple materials that are not made especially for this application. This means that the laser pattern does not always show the actual angle of the setup. Due to limitations in Video for Matlab we were not able to capture at the highest resolution possible with the camera, which also makes the experimental setup less accurate.
The results consist of three types of values:

    Calculated values - values that have not been filtered through the EKF
    Estimated values - values that have been estimated by the EKF
    Ground truth values - values that have been measured in the actual world
5.1 Measuring and Estimating the Distances Δx and Δy
The distances Δx and Δy were measured using image processing. To make the system less affected by noise in the measurements, the distances were also estimated by the Kalman filter. Below are some plots of the calculated, the estimated and the ground truth distances. We have chosen to show these plots because all further results depend on the accuracy of these values. Only the results for Δx are presented; due to symmetry we have chosen not to present results from the Δy measurements. Keep in mind that most of the stationary faults are caused either by the inaccuracy of the experimental setup or by incorrect positioning of the setup when holding it manually.
5.1.1 Distances Δx
The distances when placing the experimental setup horizontally at 0.1, 0.2 and 0.3 meters above the ground can be estimated and calculated as seen in the plots below, Figures 5.1 to 5.6. Notice the scale in the figures.

Figure 5.1. Plots of the calculated, the estimated and the ground truth Δx1 in the roll axis at a height of 0.1 meters.

Figure 5.2. Plots of the calculated, the estimated and the ground truth Δx2 in the roll axis at a height of 0.1 meters.
Figure 5.3. Plots of the calculated, the estimated and the ground truth Δx1 in the roll axis at a height of 0.2 meters.

Figure 5.4. Plots of the calculated, the estimated and the ground truth Δx2 in the roll axis at a height of 0.2 meters.

Figure 5.5. Plots of the calculated, the estimated and the ground truth Δx1 in the roll axis at a height of 0.3 meters.

Figure 5.6. Plots of the calculated, the estimated and the ground truth Δx2 in the roll axis at a height of 0.3 meters.
5.1.2 Analysis of the Estimations of Δx and Δy
As seen in the plots, the estimated values are close to the ground truth values. One observation is that the estimated values do not differ much between the samples. The biggest fault in the figures is the stationary fault. This fault can generally be attributed to possibly incorrect manual positioning during the measurements. As the helicopter is held at a zero-degree angle relative to the ground, the values of Δx1 and Δx2, and respectively Δy1 and Δy2, should be equal. Even though the experimental setup is not very exact, and we probably did not hold the setup in an exactly horizontal attitude, we get a good result. A good result for the estimated distances is crucial, since all further calculations and estimations depend on these values. If these values are improved further, we can get even more accurate height and attitude estimates.
5.2 Estimating the Height
One of the goals of this thesis is to estimate the height above the ground for a helicopter. In the plots below the estimated, the calculated and the real heights are shown for different real heights. For all measurements a Root Mean Square (RMS) value is presented, because the RMS value gives a good rating of the estimated values. The RMS value presented is based on the mean of several RMS measurements.
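The RMS value used in the subsections below is the standard root mean square of the height samples; as a one-line sketch (Python, illustrative):

```python
import math

def rms(values):
    """Root Mean Square of a sequence of samples."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```

For height estimates clustered around the true height, the RMS lands close to that height, which is why the values below (e.g. 0.1007 m at a true height of 0.1 m) indicate good accuracy.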
5.2.1 Height = 0.1 meters
The height when holding the experimental setup 0.1 meters above the ground can be estimated as seen in the plot below, Figure 5.7. The RMS value for five measurements was 0.1007 meters.

Figure 5.7. The estimated, the calculated and the real height at 0.1 meters.

5.2.2 Height = 0.2 meters
The height when holding the experimental setup 0.2 meters above the ground can be estimated as seen in the plot below, Figure 5.8. The RMS value for five measurements was 0.2007 meters.

Figure 5.8. The estimated, the calculated and the real height at 0.2 meters.

5.2.3 Height = 0.3 meters
The height when holding the experimental setup 0.3 meters above the ground can be estimated as seen in the plot below, Figure 5.9. The RMS value for five measurements was 0.3054 meters.
5.2.4 Step in height
Figure 5.10 shows the estimated and the calculated height when a step in the
height is made. The experimental setup is held at 0.1 meters, lifted up to
0.2 meters, and then taken down to 0.1 meters again. As seen in the
plot, the estimated and the measured height follow each other, and both are very
close to the real values.
Figure 5.9. The estimated, the calculated and the real height at 0.3 meters
Figure 5.10. Step in height
5.2.5 Landing process
Figure 5.11 shows the estimated and the calculated height during a landing process.
The experimental setup is held at approximately 0.3 meters and then lowered until
it reaches the ground. As seen in the plot, the estimated and the measured height
follow each other, and both are very close to the real values.
Figure 5.11. Height in a landing process started at approx. 0.3 meters
5.2.6 Analyzing the Height Estimation
There are several explanations why the estimated height and the real height
are not exactly the same. The first and most important explanation is that the
experimental setup is not perfect, which makes the measurements differ. Another
big influence is that it is hard to hold the experimental setup in exactly the
right place. That reason alone can be estimated to cause stationary errors that
are larger than the errors in the measurements. This indicates that the software
in the system is in fact quite accurate. Looking at the RMS values for these
measurements, they are close to the ground truth values, also indicating
a high accuracy. Because of the simple experimental setup, most of all the
camera, we do not show results above 0.3 meters. We can identify the dots
at greater heights; the highest level is approximately 0.5 meters.
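The height estimate rests on simple triangulation. As a rough sketch of the underlying geometry (our simplified reading of the concept, not the thesis's exact equations): if a laser is mounted at an angle beta from the body vertical and its dot is observed a distance x from the point straight below the sensor, the height follows as H = x / tan(beta).

```python
import math

def height_from_dot(x, beta_deg):
    """Triangulated height for a laser angled beta degrees from vertical,
    whose dot lies a distance x (meters) from the nadir point."""
    return x / math.tan(math.radians(beta_deg))

# Hypothetical numbers: dot 0.058 m from the nadir, laser angled 30 degrees.
print(height_from_dot(0.058, 30.0))
```

Both x and beta here are illustrative; in the real setup the dot distance would come from the image processing described earlier in the thesis.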
5.3 Estimating the Angles
A goal of this thesis is to determine the attitude of the helicopter, both pitch and
roll. In the plots below the results of the pose estimation are shown. Only the
pitch angle is shown in the plots, but the results for roll are similar since they are
based on the same calculations.
5.3.1 Roll = 0°
The pitch angle when holding the experimental setup in a horizontal pose can be
estimated as seen in Figure 5.12 below. The RMS value for five measurements
was 0.4596 degrees.
Figure 5.12. Plot of the estimated, calculated, and real pitch at 0°
5.3.2 Roll = 11°
The pitch angle when holding the experimental setup at approximately 11° can
be estimated as seen in Figure 5.13 below. The RMS value for five measurements
was 10.80 degrees.
5.3.3 Roll = 17°
The pitch angle when holding the experimental setup at approximately 17° can
be estimated as seen in Figure 5.14 below. The RMS value for five measurements
was 15.70 degrees.
Figure 5.13. Plot of the estimated, calculated, and real pitch at 11°
Figure 5.14. Plot of the estimated, calculated, and real pitch at 17°
5.3.4 Analysis of the Angle Estimation
In this section we showed the angle estimates. As can be seen in these plots, the
estimation of the angle could be somewhat better, and the RMS values differ from
the ground truth values. We once again refer to the experimental setup as one
cause of the error. The other cause is that we have no tools to measure the actual
pitch; the actual pitch angle is only measured with a ruler. If the experimental
setup were more rigid, and the lasers were mounted at a more exact angle, the
results would have been better.
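One way such an angle estimate could work, sketched under our own simplifying assumptions (symmetric lasers at ±beta from the body vertical, flat ground, known height; this is not necessarily the thesis's exact formulation): a pitch theta moves the forward dot outward and the rear dot inward, and theta can be recovered as half the difference of the observed dot angles.

```python
import math

def pitch_from_dots(x1, x2, height):
    """Pitch estimate (degrees) from the forward and rear laser dots.

    With lasers at +/-beta from the body vertical, a pitch theta moves
    the dots to x1 = H*tan(beta + theta) and x2 = H*tan(beta - theta),
    so theta is half the difference of the observed dot angles.
    """
    return math.degrees((math.atan2(x1, height) - math.atan2(x2, height)) / 2.0)

# Hypothetical check: beta = 30 deg, true pitch 11 deg, height 0.2 m.
beta, theta, h = math.radians(30.0), math.radians(11.0), 0.2
x1 = h * math.tan(beta + theta)
x2 = h * math.tan(beta - theta)
print(pitch_from_dots(x1, x2, h))
```

Note that the mounting angle beta cancels out of the estimate, which is why an error in beta would bias the height but not this simplified pitch recovery.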
5.4 Summarized RMS-values
Below we have summarized the RMS values in two tables: one for the height
measurements (Table 5.1) and one for the angle measurements (Table 5.2).
Ground truth height    RMS-value
0.1 meters             0.1007
0.2 meters             0.2007
0.3 meters             0.3054
Table 5.1. RMS values for the height measurements
Ground truth angle    RMS-value
0°                    0.4596°
11°                   10.80°
17°                   15.70°
Table 5.2. RMS values for the angle measurements
Chapter 6
Concluding Remarks
6.1 Conclusion
In this thesis we have studied a new concept for estimating the attitude and the
height of a helicopter. The concept is intended to be used as a modular sensor on
a helicopter. We had four goals in this thesis and we believe that all of them are
now fulfilled. We have estimated the attitude and height with an accuracy that is
highly acceptable given the conditions. The experimental setup was built to test
the concept; it is not intended to be a final product. As mentioned earlier, we have
used simple materials that are not made for this application. This inaccuracy in
the setup reveals itself when calculations and estimations are performed. Despite
the built-in errors in the system, we have created a system that illustrates that
the concept works, with an accuracy that is acceptable.
In this thesis we have assumed that the ground is flat and horizontal. This may
seem like a big limitation, but in fact it is not too hard to include sloping ground
in the system. The only problem is to separate the slope of the ground from the
angle of the helicopter. The main control system of the helicopter has information
about the attitude of the helicopter; this information can be forwarded to the
sensor, which makes the separation possible.
Further improvements can be made to the system, which will lead to better
accuracy and, in the end, a better sensor. This is why we have come to the
conclusion that this concept can be used as part of a landing system for a
helicopter or a UAV helicopter.
6.2 Future Work
In this section we present improvements that we think can result in a better and
more reliable solution. We also present ideas for further development of the system.
Östman, Forsberg, 2008.
6.2.1 Sensors
We used an ordinary web camera for our experimental setup; if a better camera
were used, there would be a lot to gain in accuracy. Another idea would be to use
an Inertial Measurement Unit (IMU). With this unit the model used in the Kalman
filter can be expanded to include dynamic relations, which will improve both
accuracy and robustness. One more benefit is that we could separate the inclination
of the helicopter from the inclination of the ground. This information can then be
used to also calculate the movement of the ground.
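As a rough sketch of what such an expanded filter could look like (the matrices, noise levels, and measurement values below are illustrative assumptions, not values from the thesis): a 1-D Kalman filter with a height/velocity state, where the IMU would supply the acceleration input u.

```python
import numpy as np

# Minimal 1-D Kalman filter sketch: state = [height, vertical velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input (from an IMU)
H = np.array([[1.0, 0.0]])              # laser sensor measures height only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-3]])                  # measurement noise (assumed)

x = np.array([[0.3], [0.0]])            # start: height 0.3 m, at rest
P = np.eye(2)

def kf_step(x, P, z, u=0.0):
    """Predict with the dynamic model, then correct with laser height z."""
    x = F @ x + B * u
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.29, 0.28, 0.27, 0.26]:      # simulated descent measurements
    x, P = kf_step(x, P, np.array([[z]]))
print(float(x[0, 0]))
```

With real IMU data, u would be the measured vertical acceleration at each step, letting the filter coast through frames where the laser dots are temporarily lost.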
6.2.2 Experimental Setup
In our experimental setup there are many built-in errors and inaccuracies. This
is due to the characteristics of the wooden material and the attachments of the
laser pointers. An experimental setup in a more rigid material, built with more
accuracy, would improve the measurements.
6.2.3 Landing on a Moving Surface
The system is designed to have all equipment on board the helicopter. With
the IMU improvements we could describe the underlying surface and from that
calculate when the time is right to land. This improvement would be of use in,
for example, marine applications.
Nomenclature
Symbols
x (y)          The distance between the center point below the helicopter and one of the side points.
               The angle of the ground below the helicopter.
               The pitch angle of the helicopter.
               The roll angle of the helicopter.
               The angle of the laser pointers.
H              The perpendicular height above the ground for the helicopter.
x              State vector.
y              Measurements.
f(x_t)         Dynamic state equations.
h_t(x_t)       Measurement equations.
X_e, Y_e, Z_e  Real-world coordinates.
x_i, y_i       The point where the optical centre intersects the image plane.
u, v           Digital camera coordinate system.
e, c, i        Indices for the coordinate systems in the camera sensor: Earth (e), Camera (c) and Image (i).
f              Focal length.
s_x, s_y       Pixel size.
s              Skew factor.
u_0, v_0       Position of the principal point.
               Aspect ratio.
K              Camera calibration matrix.
Abbreviations
UAV Unmanned Aerial Vehicle
UCAV Unmanned Combat Aerial Vehicle
CS Control System
KF Kalman Filter
EKF Extended Kalman Filter
VFM Video For Matlab
TDFT Time Discrete Fourier Transform
IMU Inertial Measurement Unit
RPM Revolutions Per Minute
RMS Root Mean Square
Appendix A
A Prototype of the Sensor
To demonstrate that the sensor could work in real life, a prototype was built. The
prototype was built of wood and the lasers were mounted using simple attachments.
Both the wood and the attachments can be bought at any retailer of building
supplies. The dimensions of the prototype are given in Figure A.1.
Figure A.1. The dimensions of the experimental setup seen from three angles.
Below there are pictures of the experimental setup from different angles: seen
from the side (Figure A.2) and seen from below (Figure A.3).
Figure A.2. The experimental setup seen from the side.
Figure A.3. The experimental setup seen from below.
A.1 Components
The components of the prototype are:
Web camera - Logitech QuickCam Pro 9000
- Carl Zeiss lens
- Auto focus system
- Ultra-high resolution 2-megapixel sensor with RightLight2 Technology
- Color depth: 24-bit true color
- Video capture: up to 1600 x 1200 pixels (HD quality; HD video 960 x 720 pixels)
- Frame rate: up to 30 frames per second
- Still image capture: 8 million pixels (with software enhancement)
- Built-in microphone with RightSound Technology
Diode pumped red laser 1 mW - TIM-201-1
- Size: 10.5 mm x 32.5 mm
- Laser head case material: copper
- Output wavelength: 635-680 nm
- Output power: 1 mW
- Beam divergence: 2 mrad
- Operating temperature: -10 °C to +40 °C
- Storage temperature: -40 °C to +50 °C
- Operating current: 45-50 mA
- Operating voltage: 4.5 V DC
- Beam aperture diameter: 4 mm
Battery case with 3x1.5 V batteries
A.2 Used Camera Parameters
Exposure: 1/500 s, manual
Gain: 1529, manual
Brightness: 0
Contrast: 667
Saturation: 1922
Sharpness: 4118
White balance: 0, manual
Backlight compensation: 1
Focus: 0, manual
Appendix B
Flow Chart
This appendix contains a flow chart for the system, see Figure B.1 on the next page.
Figure B.1. Flow Chart
Copyright
The publishers will keep this document online on the Internet - or its possible
replacement - for a period of 25 years from the date of publication barring excep-
tional circumstances. The online availability of the document implies a permanent
permission for anyone to read, to download, to print out single copies for your own
use and to use it unchanged for any non-commercial research and educational pur-
pose. Subsequent transfers of copyright cannot revoke this permission. All other
uses of the document are conditional on the consent of the copyright owner. The
publisher has taken technical and administrative measures to assure authenticity,
security and accessibility. According to intellectual property law the author has
the right to be mentioned when his/her work is accessed as described above and to
be protected against infringement. For additional information about the Linköping
University Electronic Press and its procedures for publication and for assurance of
document integrity, please refer to its WWW home page: http://www.ep.liu.se/
© 2008, Christian Östman, Anna Forsberg