
Adaptive Backstepping
Flight Control for Modern
Fighter Aircraft

ISBN 978-90-8570-573-4

Printed by Wöhrmann Print Service, Zutphen, The Netherlands.

Typeset by the author with the LaTeX Documentation System.


Cover design based on an F-16 image by James Dale.

Copyright © 2010 by L. Sonneveldt. All rights reserved. No part of the material
protected by this copyright notice may be reproduced or utilized in any form or by any
means, electronic or mechanical, including photocopying, recording or by any
information storage and retrieval system, without the prior permission of the author.
Adaptive Backstepping
Flight Control for Modern
Fighter Aircraft

DISSERTATION

for the degree of doctor
at the Technische Universiteit Delft,
by authority of the Rector Magnificus Prof. ir. K.Ch.A.M. Luyben,
chairman of the Board for Doctorates,
to be defended in public on Wednesday 7 July 2010 at 15.00
by

Lars SONNEVELDT

aerospace engineer (ingenieur luchtvaart en ruimtevaart)

born in Rotterdam.
This dissertation has been approved by the promotor:
Prof. dr. ir. J.A. Mulder

Copromotor:
Dr. Q.P. Chu

Composition of the doctoral committee:
Rector Magnificus, chairman
Prof. dr. ir. J.A. Mulder, Technische Universiteit Delft, promotor
Dr. Q.P. Chu, Technische Universiteit Delft, copromotor
Prof. lt. gen. b.d. B.A.C. Droste, Technische Universiteit Delft
Prof. dr. ir. M. Verhaegen, Technische Universiteit Delft
Prof. dr. A. Zolghadri, Université de Bordeaux
Prof. Dr.-Ing. R. Luckner, Technische Universität Berlin
Ir. W.F.J.A. Rouwhorst, Nationaal Lucht- en Ruimtevaartlaboratorium
Prof. dr. ir. Th. van Holten, Technische Universiteit Delft, reserve member
To Rianne
Summary

Over the last few decades, pushed by developments in aerospace technology, the per-
formance requirements of modern fighter aircraft have become ever more challenging
throughout an ever-increasing flight envelope. Extreme maneuverability is achieved by
designing the aircraft with multiple redundant control actuators and by allowing static in-
stabilities in certain modes. A good example is the Lockheed Martin F-22 Raptor, which
makes use of thrust-vectored control to increase maneuverability. Furthermore, the sur-
vivability requirements in modern warfare are constantly evolving for both manned and
unmanned combat aircraft. Taking into account all these requirements when designing
the control systems for modern fighter aircraft poses a huge challenge for flight control
designers.
Traditionally, aircraft control systems were designed using linearized aircraft models at
multiple trimmed flight conditions throughout the flight envelope. For each of these
operating points a corresponding linear controller is derived using well-established
linear control design methods. One of the many gain-scheduling methods can then
be applied to derive a single flight control law for the entire flight envelope. However,
a problem of this approach is that good performance and robustness properties cannot
be guaranteed for a highly nonlinear fighter aircraft. Nonlinear control methods have
been developed to overcome the shortcomings of linear design approaches. The theo-
retically established nonlinear dynamic inversion (NDI) approach is the best known and
most widely used of these methods.
NDI is a control design method that can explicitly handle systems with known nonlin-
earities. By using nonlinear feedback and exact state transformations rather than linear
approximations the nonlinear system is transformed into a constant linear system. This
linear system can in principle be controlled by just a single linear controller. However,
to perform perfect dynamic inversion all nonlinearities have to be precisely known. This
is generally not the case with modern fighter aircraft, since it is very difficult to precisely
know and model their complex nonlinear aerodynamic characteristics. Empirical data
is usually obtained from wind tunnel experiments and flight tests, augmented by com-
putational fluid dynamics (CFD) results, and thus is not 100% accurate. The problem
of model deficiencies can be dealt with by closing the control loop with a linear, robust
controller. However, even then desired performance cannot be expected in case of gross
errors, due to large, sudden changes in the aircraft dynamics that could result from struc-
tural damage, control effector failures or adverse environmental conditions.
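
To make the inversion idea concrete, consider a minimal single-loop sketch (an illustration only, not the controller structure developed later in this thesis). For input-affine dynamics $\dot{x} = f(x) + g(x)u$ with an invertible input map $g(x)$, the onboard model is used to cancel the nonlinear terms exactly:
\[
u = g(x)^{-1}\big(\nu - f(x)\big) \quad \Longrightarrow \quad \dot{x} = \nu,
\]
so that the dynamics from the virtual input $\nu$ to the state are linear and can be closed with a single linear controller. Any mismatch between the onboard $f$, $g$ and the true aerodynamics leaves uncancelled terms, which is precisely the model-accuracy problem described above.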

A more sophisticated way of dealing with large model uncertainties is to introduce an
adaptive control system with some form of online model identification. In recent years,
the increase in available onboard computational power has made it possible to implement
more complex adaptive flight control designs. It is clear that a nonlinear adaptive flight
control system with onboard model identification can do more than just compensate for
inaccuracies in the nominal aircraft model. It is also possible to identify any sudden
changes in the dynamic behavior of the aircraft. Such changes will in general lead to an
increase in pilot workload or can even result in a complete loss of control. If the post-
failure aircraft dynamics can be identified correctly by the online model identification,
the redundancy in control effectors and the fly-by-wire system of modern fighter planes
can be exploited to reconfigure the flight control system.
There are several methods available to design an identifier that updates the onboard model
of the NDI controller online, e.g. neural networks or least squares techniques. A disad-
vantage of an adaptive design with separate identifier is that the certainty equivalence
property does not hold for nonlinear systems, i.e. the identifier is not fast enough to cope
with potentially faster-than-linear growth of instabilities in nonlinear systems. To over-
come this problem a controller with strong parametric robustness properties is needed.
An alternative solution is to design the controller and identifier as a single integrated
system using the adaptive backstepping design method. By systematically constructing
a Lyapunov function for the closed-loop system, adaptive backstepping offers the pos-
sibility to synthesize a controller for a wide class of nonlinear systems with parametric
uncertainties.
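
The faster-than-linear growth mentioned above can be made concrete with a standard scalar example (not specific to aircraft dynamics):
\[
\dot{x} = \theta x^{2}, \quad x(0) = x_{0} > 0, \quad \theta > 0
\quad \Longrightarrow \quad
x(t) = \frac{x_{0}}{1 - \theta x_{0} t},
\]
which escapes to infinity at the finite time $t = 1/(\theta x_{0})$. An identifier that underestimates $\theta$ can therefore simply run out of time, which is why either strong parametric robustness of the controller or an integrated controller-identifier design is required.
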
The main goal of this thesis is to investigate the potential of the nonlinear adaptive back-
stepping control technique in combination with online model identification for the design
of a reconfigurable flight control (RFC) system for a modern fighter aircraft. The follow-
ing features are aimed for:
• the RFC system uses a single nonlinear adaptive flight controller for the entire
domain of operation (flight envelope), which has provable theoretical performance
and stability properties.
• the RFC system enhances performance and survivability of the aircraft in the pres-
ence of disturbances related to failures and structural damage.
• the algorithms, on which the RFC system is based, possess excellent numerical sta-
bility properties and their computational costs are low (real-time implementation
is feasible).
Adaptive backstepping is a recursive, Lyapunov-based, nonlinear design method, that
makes use of dynamic parameter update laws to deal with parametric uncertainties. The
idea of backstepping is to design a controller recursively by considering some of the
state variables as ‘virtual controls’ and designing intermediate control laws for these.
Backstepping achieves the goals of global asymptotic stabilization of the closed-loop
states and tracking. The proof of these properties is a direct consequence of the recur-
sive procedure, since a Lyapunov function is constructed for the entire system including
the parameter estimates. The tracking errors drive the adaptation process of the proce-
dure. Furthermore, it is possible to take magnitude and rate constraints on the control
inputs and system states into account in such a way that the identification process is not
corrupted during periods of control effector saturation. A disadvantage of the integrated
adaptive backstepping method is that it only yields pseudo-estimates of the uncertain sys-
tem parameters. There is no guarantee that the real values of the parameters are found,
since the adaptation only tries to satisfy a total system stability criterion, i.e. the Lya-
punov function. Increasing the adaptation gain will not necessarily improve the response
of the closed-loop system, due to the strong coupling between the controller and the es-
timator dynamics.
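
The integrated structure just described can be illustrated with a first-order sketch in generic notation (not the notation or the models of the later chapters). For a scalar plant $\dot{x} = \varphi(x)^{T}\theta + u$ with unknown constant parameter vector $\theta$, tracking error $z = x - x_{d}$ and estimate $\hat{\theta}$, take
\[
V = \tfrac{1}{2}z^{2} + \tfrac{1}{2}\tilde{\theta}^{T}\Gamma^{-1}\tilde{\theta}, \quad \tilde{\theta} = \hat{\theta} - \theta,
\qquad
u = \dot{x}_{d} - k z - \varphi(x)^{T}\hat{\theta}, \qquad \dot{\hat{\theta}} = \Gamma\,\varphi(x)\,z,
\]
with $k > 0$ and $\Gamma = \Gamma^{T} > 0$, which gives $\dot{V} = -k z^{2} \le 0$: the tracking error converges while $\hat{\theta}$ remains bounded. The tracking error drives the update law, a single Lyapunov function certifies the whole interconnection, and nothing forces $\hat{\theta}$ to converge to the true $\theta$ unless the regressor $\varphi$ is persistently exciting; the backstepping recursion extends this construction to higher-order systems through the virtual controls mentioned above.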

The immersion and invariance (I&I) approach provides an alternative way of construct-
ing a nonlinear estimator. This approach allows for prescribed stable dynamics to be
assigned to the parameter estimation error. The resulting estimator is combined with a
backstepping controller to form a modular adaptive control scheme. The I&I based esti-
mator is fast enough to capture the potential faster-than-linear growth of nonlinear sys-
tems. The resulting modular scheme is much easier to tune than the ones resulting from
the standard adaptive backstepping approaches with tracking error driven adaptation pro-
cess. In fact, the closed-loop system resulting from the application of the I&I based
adaptive backstepping controller can be seen as a cascaded interconnection between two
stable systems with prescribed asymptotic properties. As a result, the performance of the
closed-loop system with adaptive controller can be improved significantly.
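
The key idea can be sketched for a scalar state in generic notation (an illustration of the mechanism only; the designs later in the thesis additionally use dynamic scaling and filters). For $\dot{x} = \varphi(x)^{T}\theta + g(x)u$, the I&I estimate of $\theta$ is $\hat{\theta} + \beta(x)$ with update law
\[
\dot{\hat{\theta}} = -\frac{\partial \beta}{\partial x}\Big(\varphi(x)^{T}\big(\hat{\theta} + \beta(x)\big) + g(x)u\Big),
\]
so that the off-manifold coordinate $z = \hat{\theta} + \beta(x) - \theta$ obeys
\[
\dot{z} = -\frac{\partial \beta}{\partial x}\,\varphi(x)^{T} z,
\qquad \text{e.g.}\quad \frac{\partial \beta}{\partial x} = \gamma\,\varphi(x),\ \gamma > 0
\;\Longrightarrow\;
\dot{z} = -\gamma\,\varphi(x)\varphi(x)^{T} z .
\]
The estimation error dynamics are thus assigned by the choice of $\beta$ and do not depend on the tracking error, which is what makes the controller and estimator modular.
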
To make a real-time implementation of the adaptive controllers feasible, the computa-
tional complexity has to be kept to a minimum. As a solution, a flight envelope partition-
ing method is proposed that decomposes the globally valid aerodynamic model into multiple
locally valid aerodynamic models. The estimator only has to update a few local models
at each time step, thereby decreasing the computational load of the algorithm. An addi-
tional advantage of using multiple, local models is that information of the models that are
not updated at a certain time step is retained, thereby giving the approximator memory
capabilities. B-spline networks, selected for their favorable numerical properties, ensure
smooth transitions between the different regions.

The adaptive backstepping flight controllers developed in this thesis have been evaluated
in numerical simulations on a high-fidelity F-16 dynamic model involving several control
problems. The adaptive designs have been compared with the gain-scheduled baseline
flight control system and a non-adaptive NDI design. The performance has been com-
pared in simulation scenarios at several flight conditions with the aircraft model suffering
from actuator failures, longitudinal center of gravity shifts and changes in aerodynamic
coefficients. All numerical simulations can be easily performed in real-time on an ordi-
nary desktop computer. Results of the simulations demonstrate that the adaptive flight
controllers provide a significant performance improvement over the non-adaptive NDI
design for the simulated failure cases.
Of the evaluated adaptive flight controllers, the I&I based modular adaptive backstep-
ping design has the overall best performance and is also easiest to tune, at the cost of
a small increase in computational load and design complexity when compared to inte-
grated adaptive backstepping control designs. Moreover, the flight controllers designed
with the I&I based modular adaptive backstepping approach have even stronger provable
stability and convergence properties than the integrated adaptive backstepping flight con-
trollers, while at the same time achieving modularity in the design of the controller and
identifier. On the basis of the research performed in this thesis, it can be concluded that an
RFC system based on the I&I based modular adaptive backstepping method shows considerable
potential, since it possesses all the features aimed for in the thesis goal.

Further research that explores the performance of the RFC system based on the I&I
based modular adaptive backstepping method in other simulation scenarios is suggested.
The evaluation of the adaptive flight controllers in this thesis is limited to simulation
scenarios with actuator failures, symmetric center of gravity shifts and uncertainties in
individual aerodynamic coefficients. The research would be more valuable if scenarios
with asymmetric failures, such as partial surface loss, were also considered. Generating the nec-
essary realistic aerodynamic data for the F-16 model would take a separate study in itself.
Still an open issue is the development of an adaptive flight envelope protection system
that can estimate the reduced flight envelope of an aircraft post-failure and that can feed
this information back to the controller, the pilot and the guidance system. Another im-
portant research direction would be to perform a piloted evaluation and validation of the
proposed RFC framework in a simulator. Post-failure workload and handling qualities
should be compared with those of the baseline flight control system. Simultaneously, a
study of the interactions between the pilot's reactions to a failure and the actions taken by
the adaptive element in the flight control system can be performed.
Contents

Summary i

1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Reconfigurable Flight Control . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Reconfigurable Flight Control Approaches . . . . . . . . . . . 5
1.3.2 Reconfigurable Flight Control in Practice . . . . . . . . . . . . 9
1.4 Thesis Goal and Research Approach . . . . . . . . . . . . . . . . . . . 10
1.4.1 Nonlinear Adaptive Backstepping Control . . . . . . . . . . . . 11
1.4.2 Flight Envelope Partitioning . . . . . . . . . . . . . . . . . . . 11
1.4.3 The F-16 Dynamic Model . . . . . . . . . . . . . . . . . . . . 12
1.5 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Aircraft Modeling 17
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Aircraft Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.1 Reference Frames . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.2 Aircraft Variables . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.3 Equations of Motion for a Rigid Body Aircraft . . . . . . . . . 20
2.2.4 Gathering the Equations of Motion . . . . . . . . . . . . . . . . 24
2.3 Control Variables and Engine Modeling . . . . . . . . . . . . . . . . . 26
2.4 Geometry and Aerodynamic Data . . . . . . . . . . . . . . . . . . . . 28
2.5 Baseline Flight Control System . . . . . . . . . . . . . . . . . . . . . . 31
2.5.1 Longitudinal Control . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.2 Lateral Control . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.3 Directional Control . . . . . . . . . . . . . . . . . . . . . . . . 31


2.6 MATLAB/Simulink® Implementation . . . . . . . . . . . . . . . . . . 32

3 Backstepping 33
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Lyapunov Theory and Stability Concepts . . . . . . . . . . . . . . . . . 34
3.2.1 Lyapunov Stability Definitions . . . . . . . . . . . . . . . . . . 34
3.2.2 Lyapunov’s Direct Method . . . . . . . . . . . . . . . . . . . . 36
3.2.3 Lyapunov Theory and Control Design . . . . . . . . . . . . . . 38
3.3 Backstepping Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.1 Integrator Backstepping . . . . . . . . . . . . . . . . . . . . . 41
3.3.2 Extension to Higher Order Systems . . . . . . . . . . . . . . . 44
3.3.3 Example: Longitudinal Missile Control . . . . . . . . . . . . . 47

4 Adaptive Backstepping 53
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2 Tuning Functions Adaptive Backstepping . . . . . . . . . . . . . . . . 54
4.2.1 Dynamic Feedback . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.2 Extension to Higher Order Systems . . . . . . . . . . . . . . . 58
4.2.3 Robustness Considerations . . . . . . . . . . . . . . . . . . . . 63
4.2.4 Example: Adaptive Longitudinal Missile Control . . . . . . . . 66
4.3 Constrained Adaptive Backstepping . . . . . . . . . . . . . . . . . . . 68
4.3.1 Command Filtering Approach . . . . . . . . . . . . . . . . . . 69
4.3.2 Example: Constrained Adaptive Longitudinal Missile Control . 73

5 Inverse Optimal Adaptive Backstepping 77


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Nonlinear Control and Optimality . . . . . . . . . . . . . . . . . . . . 78
5.2.1 Direct Optimal Control . . . . . . . . . . . . . . . . . . . . . . 78
5.2.2 Inverse Optimal Control . . . . . . . . . . . . . . . . . . . . . 80
5.3 Adaptive Backstepping and Optimality . . . . . . . . . . . . . . . . . . 80
5.3.1 Inverse Optimal Design Procedure . . . . . . . . . . . . . . . . 81
5.3.2 Transient Performance Analysis . . . . . . . . . . . . . . . . . 85
5.3.3 Example: Inverse Optimal Adaptive Longitudinal Missile Control 86
5.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

6 Comparison of Integrated and Modular Adaptive Flight Control 93


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2 Modular Adaptive Backstepping . . . . . . . . . . . . . . . . . . . . . 94
6.2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . 95
6.2.2 Input-to-state Stable Backstepping . . . . . . . . . . . . . . . . 97
6.2.3 Least-Squares Identifier . . . . . . . . . . . . . . . . . . . . . 98
6.3 Aircraft Model Description . . . . . . . . . . . . . . . . . . . . . . . . 101

6.4 Flight Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 103


6.4.1 Feedback Control Design . . . . . . . . . . . . . . . . . . . . . 103
6.4.2 Integrated Model Identification . . . . . . . . . . . . . . . . . . 105
6.4.3 Modular Model Identification . . . . . . . . . . . . . . . . . . 106
6.5 Control Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.5.1 Weighted Pseudo-inverse . . . . . . . . . . . . . . . . . . . . . 108
6.5.2 Quadratic Programming . . . . . . . . . . . . . . . . . . . . . 108
6.6 Numerical Simulation Results . . . . . . . . . . . . . . . . . . . . . . 110
6.6.1 Tuning the Controllers . . . . . . . . . . . . . . . . . . . . . . 110
6.6.2 Simulation Scenarios . . . . . . . . . . . . . . . . . . . . . . . 111
6.6.3 Controller Comparison . . . . . . . . . . . . . . . . . . . . . . 112
6.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

7 F-16 Trajectory Control Design 119


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.2 Flight Envelope Partitioning . . . . . . . . . . . . . . . . . . . . . . . 120
7.2.1 Partitioning the F-16 Aerodynamic Model . . . . . . . . . . . . 121
7.2.2 B-spline Networks . . . . . . . . . . . . . . . . . . . . . . . . 124
7.2.3 Resulting Approximation Model . . . . . . . . . . . . . . . . . 128
7.3 Trajectory Control Design . . . . . . . . . . . . . . . . . . . . . . . . 128
7.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.3.2 Aircraft Model Description . . . . . . . . . . . . . . . . . . . . 130
7.3.3 Adaptive Control Design . . . . . . . . . . . . . . . . . . . . . 131
7.3.4 Model Identification . . . . . . . . . . . . . . . . . . . . . . . 139
7.4 Numerical Simulation Results . . . . . . . . . . . . . . . . . . . . . . 141
7.4.1 Controller Parameter Tuning . . . . . . . . . . . . . . . . . . . 142
7.4.2 Maneuver 1: Upward Spiral . . . . . . . . . . . . . . . . . . . 143
7.4.3 Maneuver 2: Reconnaissance . . . . . . . . . . . . . . . . . . 145
7.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

8 F-16 Stability and Control Augmentation Design 149


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.2 Flight Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.2.1 Outer Loop Design . . . . . . . . . . . . . . . . . . . . . . . . 151
8.2.2 Inner Loop Design . . . . . . . . . . . . . . . . . . . . . . . . 152
8.2.3 Update Laws and Stability Properties . . . . . . . . . . . . . . 153
8.3 Integrated Model Identification . . . . . . . . . . . . . . . . . . . . . . 154
8.4 Modular Model Identification . . . . . . . . . . . . . . . . . . . . . . . 155
8.5 Controller Tuning and Command Filter Design . . . . . . . . . . . . . 157
8.6 Numerical Simulations and Results . . . . . . . . . . . . . . . . . . . . 159
8.6.1 Simulation Scenarios . . . . . . . . . . . . . . . . . . . . . . . 162
8.6.2 Simulation Results with Cmq = 0 . . . . . . . . . . . . . . . . 162

8.6.3 Simulation Results with Longitudinal c.g. Shifts . . . . . . . . 163


8.6.4 Simulation Results with Aileron Lock-ups . . . . . . . . . . . . 164
8.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

9 Immersion and Invariance Adaptive Backstepping 167


9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
9.2 The Immersion and Invariance Concept . . . . . . . . . . . . . . . . . 168
9.3 Extension to Higher Order Systems . . . . . . . . . . . . . . . . . . . 173
9.3.1 Estimator Design . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.3.2 Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . 175
9.4 Dynamic Scaling and Filters . . . . . . . . . . . . . . . . . . . . . . . 177
9.4.1 Estimator Design with Dynamic Scaling . . . . . . . . . . . . . 177
9.4.2 Command Filtered Control Law Design . . . . . . . . . . . . . 180
9.5 Adaptive Flight Control Example . . . . . . . . . . . . . . . . . . . . . 182
9.5.1 Adaptive Control Design . . . . . . . . . . . . . . . . . . . . . 183
9.5.2 Numerical Simulation Results . . . . . . . . . . . . . . . . . . 185
9.6 F-16 Stability and Control Augmentation Design . . . . . . . . . . . . 187
9.6.1 Adaptive Control Design . . . . . . . . . . . . . . . . . . . . . 187
9.6.2 Numerical Simulation Results . . . . . . . . . . . . . . . . . . 189
9.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

10 Conclusions and Recommendations 193


10.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.2 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

A F-16 Model 203


A.1 F-16 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
A.2 ISA Atmospheric Model . . . . . . . . . . . . . . . . . . . . . . . . . 204
A.3 Flight Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

B System and Stability Concepts 207


B.1 Lyapunov Stability and Convergence . . . . . . . . . . . . . . . . . . . 207
B.2 Input-to-state Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 211
B.3 Invariant Manifolds and System Immersion . . . . . . . . . . . . . . . 211

C Command Filters 213

D Additional Figures 215


D.1 Simulation Results of Chapter 6 . . . . . . . . . . . . . . . . . . . . . 216
D.2 Simulation Results of Chapter 7 . . . . . . . . . . . . . . . . . . . . . 221
D.3 Simulation Results of Chapter 8 . . . . . . . . . . . . . . . . . . . . . 227
D.4 Simulation Results of Chapter 9 . . . . . . . . . . . . . . . . . . . . . 234

Bibliography 239

Samenvatting 263

Acknowledgements 269

Curriculum Vitae 271


Chapter 1
Introduction

This chapter provides an introduction to modern high performance fighter aircraft and
their flight control systems. It describes the current situation, the ongoing research and
the challenges for these systems. The position of the work performed in this thesis in
relation to existing research on control methods is explained. Furthermore, the solu-
tion proposed in this thesis, as well as the research approach and scope are discussed.
The thesis outline is clarified in the final part of the chapter, by means of a short topic
description for each chapter and an explanation of the interconnections between the dif-
ferent chapters.

1.1 Background
At the moment, most western countries, including the Netherlands, have started to re-
place or are considering replacing their current fleet of fighter aircraft with aircraft of the
new generation. Some of the better known examples of this new generation of fighter
aircraft are the F-22 Raptor, the JAS-39 Gripen, the Eurofighter and the F-35 Lightning
II (better known as Joint Strike Fighter). Pushed by Air Force requirements and by de-
velopments in aerospace technology, the performance specifications for modern fighter
aircraft have become ever more challenging. Extreme maneuverability over a large flight
envelope is achieved by designing the aircraft to be unstable in certain modes and by using
multiple redundant control effectors. Examples include the F-22 Raptor (Fig-
ure 1.1(a)), which makes use of thrust vectored control to achieve extreme angles of
attack, and the highly unstable Su-47 prototype (Figure 1.1(b)) with its forward swept
wings and thrust vectoring.
Human pilots are not able to control these highly complex nonlinear systems without
some kind of assistance for their various tasks. Modern fighter aircraft require digital
flight control systems to ensure that aircraft possess the flying qualities pilots desire. In
fact, flight control systems have been considered by inventors even before the first flight


(a) The F-22 Raptor (b) The Su-47 Berkhut

Figure 1.1: Two examples of modern high performance fighter aircraft. The F-22 picture is by
courtesy of the USAF and the Su-47 picture is a photo by Andrey Zinchuk.

of the Wright brothers in 1903 [139]. In 1893 Sir Hiram Maxim had already built a work-
ing model of a steam-powered gyroscope and servo cylinder to maintain the longitudinal
attitude of an aircraft. The pitch controller weighed over 130 kg, still only a
fraction of the 3.5-ton total weight of his self-developed steam-powered flying machine
depicted in Figure 1.2. In 1913 Lawrence Sperry demonstrated hands-off flight when he
and his co-pilot each stood on a wing of his biplane as it passed the exuberant crowd.
Sperry used a lightweight version of his father’s gyroscope to control the pitch and roll
motion of his aircraft with compressed air.
The two world wars not only stimulated the development of more advanced flight control
systems, but also laid the foundations of classical control theory.

Figure 1.2: Sir Hiram Maxim’s ‘heavier-than-air’ steam-powered aircraft.

In the early 1950’s it was found that constant-gain, linear feedback controllers had prob-
lems to perform well over the whole flight regime of the new high-performance prototype
aircraft such as the X-15. After a considerable development effort it was found that gain-
scheduling was a suitable technique to achieve good performance over a wide range of
operating conditions [9]. Even today, modern fighter aircraft still make use of flight
control systems based on various types of linear control algorithms and gain-scheduling.
The main benefit of this strategy is that it is based on the well-developed classical linear
control theory. However, nonlinear effects, occurring in particular at high angles of in-
cidence, and the cross-couplings between longitudinal and lateral motion are neglected
in the control design. Furthermore, it is difficult to guarantee stability and performance
of the gain-scheduled controller in between operating points for which a linear controller
has been designed. This motivates the use of nonlinear control techniques for the flight
control system design of high performance aircraft.
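
In its simplest interpolated form (shown here only to fix ideas, not as the implementation of any particular aircraft), the gain-scheduling strategy described above blends the point designs as
\[
u = -K(\rho)\,x, \qquad K(\rho) = \sum_{i} w_{i}(\rho)\,K_{i}, \qquad \sum_{i} w_{i}(\rho) = 1,
\]
where each $K_{i}$ is the linear gain designed at trim point $i$ and the scheduling vector $\rho$ typically contains Mach number and dynamic pressure. Between the design points the closed loop is governed by an interpolated gain acting on the true nonlinear dynamics, which is why stability and performance are hard to guarantee there.
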
In the early 1990’s a new nonlinear control methodology called feedback linearization
(FBL) emerged [88, 187]. Nonlinear dynamic inversion (NDI) is a special form of FBL
especially suited for flight control applications; see e.g. [48, 123, 129]. The main idea
behind NDI is to use an accurate model of the system to cancel all system nonlinearities
in such a way that a single linear system valid over the entire flight envelope remains. A
classical linear controller can be used to close the outer loop of the system under NDI
control. The F-35 Lightning II will be the first production fighter aircraft equipped with
such an NDI based flight control system [20, 205]. The control law structure, presented
in Figure 1.3, permits a decoupling of the flying-qualities-dependent portions of the design
from those that depend on the airframe and engine dynamics.

[Figure 1.3 block diagram: stick input → command shaping / precompensation (desired dynamics) → nonlinear dynamic inversion (onboard model) → control allocation, with sensor processing in the feedback path; the command shaping block is flying-qualities dependent, the inversion and allocation blocks are airframe/engine dependent.]

Figure 1.3: The nonlinear dynamic inversion controller structure of the F-35: The onboard model
is used to cancel all system nonlinearities and a single linear controller which enforces the flying
qualities closes the outer loop.

1.2 Problem Definition


The main weakness of the NDI technique is that an accurate model of the aircraft dynam-
ics is required. A dynamic aircraft model is costly to obtain, since it takes a large number
of (virtual) wind tunnel experiments and an intensive flight testing program. Small un-
certainties can be dealt with by designing a robust, linear outer loop controller. However,
especially for larger model uncertainties, robust control methods tend to yield rather
conservative control laws and, consequently, result in poor closed-loop performance
[106, 202, 215]. A more sophisticated way of dealing with large model uncertainties
is to introduce an adaptive control system with some form of online model identification.
Adaptive control was originally studied in the 1950’s as an alternative to gain-scheduling
methods for flight control and there has been a lot of theoretical development over the
past decades [8]. In recent years, the increase in available onboard computational power
has made it possible to implement adaptive flight control designs.
There are several methods available to design an identifier that updates the onboard model
of the NDI controller online, e.g. neural networks or least squares techniques. A disad-
vantage of a nonlinear adaptive design with separate identifier is that the certainty equiv-
alence property [106] does not hold for nonlinear systems, i.e. the identifier is not fast
enough to cope with potentially explosive instabilities of nonlinear systems. To overcome
this problem a controller with strong parametric robustness properties is needed [119].
An alternative solution is to design the controller and identifier as a single integrated sys-
tem using the adaptive backstepping design method [101, 117, 118]. By systematically
constructing a Lyapunov function for the closed-loop system, adaptive backstepping of-
fers the possibility to synthesize a controller for a wide class of nonlinear systems with
parametric uncertainties.
Obviously, a nonlinear adaptive (backstepping based) flight control system with onboard
model identification has the potential to do more than just compensate for inaccuracies in
the nominal aircraft model. It is also possible to identify sudden changes in the dynamic
behavior of the aircraft that could result from structural damage, control effector failures
or adverse environmental conditions. Such changes will in general lead to an increase in
pilot workload or can even result in a complete loss of control. If the post-failure aircraft
dynamics can be identified correctly by the online model identification, the redundancy
in control effectors and the fly-by-wire system of modern fighter planes can be exploited
to reconfigure the flight control system.

1.3 Reconfigurable Flight Control


The idea of control reconfiguration can be traced back throughout the history of flight in
cases where pilots had to manually exploit the remaining control capability of a degraded
aircraft. In 1971 an early theoretical basis for control reconfiguration appeared in [13],
where the number of control effectors needed for the controllability of a linear system for
failure accommodation was considered. In fact, most of the studies in the 1970’s were
based on the idea of introducing backup flight control effectors to compensate for the
failure of a primary control surface. Many of these studies are also relevant for control
reconfiguration. Two early studies that first showed the value of control reconfiguration
were performed by the Grumman Aerospace Corporation for the United States Air Force
(USAF) [23] and by the United States Navy [72]. The study done by Grumman demon-
strated the importance of considering reconfiguration during the initial design process.
One of the aircraft studied at the time was the F-16, which would become a focus of later
USAF studies as it appeared to be well suited for reconfiguration.
Flight control reconfiguration became an important research subject in the 1980’s and has
remained a major field of study ever since. This section provides an overview
of the many different reconfigurable flight control (RFC) approaches that have been pro-
posed in literature over the past decades. Methods for accommodating sensor failures,
software failures or for switching among redundant hardware will not be considered,
although they are sometimes referred to as flight control reconfiguration. Here ‘recon-
figurable flight control’ is only used to refer to software algorithms designed specifically
to compensate for failures or damage to the flight control effectors or structure of the
aircraft (e.g. lifting surfaces). This section is based on survey papers on reconfigurable
flight control by Huzmezan [84], Jones [92] and Steinberg [195]. Other relevant articles
are the more general fault-tolerant control surveys by Stengel [197] and Patton [162].

1.3.1 Reconfigurable Flight Control Approaches


Most of the control reconfiguration methods developed in the 1980's required a separate
system for explicit failure detection, isolation and estimation (FDIE). An important early
example of this type of approach was developed by General Electric Aircraft Controls
[55]. This design used a single extended Kalman estimator to perform all FDIE and a
pseudo-inverse approach based on a linearized model of the aircraft was used to deter-
mine controller effector commands, so that the degraded aircraft would generate the same
accelerations as the nominal aircraft. The single Kalman estimator approach turned out
to be impractical, but the pseudo-inverse methods would become a major focus of re-
search, even resulting in some limited flight testing at the end of the 1980’s [140].
By the beginning of the 1990’s a set of flight tested techniques was available, which
could be used to add limited reconfigurable control capability to otherwise conventional
flight control laws for fixed wing aircraft. FDIE was the main limiting factor and required
complicated tuning based on known failure models, particularly for surface damage de-
tection and isolation. Similarly, the control approaches could require quite a bit of design
tuning and there was a lack of theoretical proofs of stability and robustness. However,
these approaches were shown to be quite effective when optimized for a small number of
failure cases [195].
The increase of onboard computational power and advanced control development soft-
ware packages in the 1990’s led to an rapid increase in the number and types of ap-
proaches applied to RFC problems. It became much easier and cheaper to experiment
with complex nonlinear design approaches. Furthermore, there had been considerable
theoretical advances in the areas of adaptive [9] and nonlinear control methods [187]
throughout the 1980’s. The late 1980’s also saw a renewed interest in the use of emerging
machine intelligence techniques, such as neural networks and fuzzy logic [148]. These
approaches could potentially improve FDIE or support new control architectures that do
not use explicit FDIE at all.

An attempt is now made to organize the various RFC methods developed during the
1990's and up until now. This has become increasingly difficult, because many
combinations of different methods have been attempted over the years. In [92] the RFC
methods are subdivided in four categories. A short overview of each category will now
be given. Note that this overview is by no means complete, but only serves to give an il-
lustration of the advantages and disadvantages of the different methodologies. Also note
that many combinations of different methodologies have emerged over the years.

Multiple Model Control


Multiple model control basically involves a control law consisting of several fault models
and their corresponding controllers. Three types of multiple model control exist in liter-
ature: multiple model switching and tuning (MMST), interacting multiple model (IMM)
and propulsion controlled aircraft (PCA). In the first two cases all expected failure sce-
narios are collected during a failure modes and effects analysis, where fault models are
constructed that cover each situation. When a failure occurs MMST switches to a pre-
computed control law corresponding to the current fault situation. Some examples of
MMST approaches can be found in [24, 25, 26, 71]. IMM removes the extensive fault
modeling limitation of MMST, by considering fault models which are a convex combi-
nation of models in a predetermined model set. Again the control law can be based on
a variety of methods. In [137, 138] a fixed controller is used, while in [99, 100] an MPC
scheme with minimization of the past tracking error is used. PCA is a special case of
MMST, where the only anticipated fault is total hydraulics failure and only the engines
can be used for control. There have been some successful flight tests with PCA on an F-15
and an MD-11 in the beginning of the 1990’s [32, 33].
The advantage of multiple model methods is that they are fast and provably stable. The
main disadvantages are the lack of correct models when dealing with failures that were
not considered during the control design and the exponential growth in the number of
models required as the number of considered failures increases for large systems.

Controller Synthesis
Controller synthesis methods make use of a fault model provided by some form of FDIE.
FDIE provides information about the onset, location and severity of any faults and hence
the reconfiguration problem is reduced to finding a proper FDIE. Many FDIE approaches
can be found in literature, see e.g. [41, 169, 217, 216] and the references therein. Eigen-
structure assignment (EA), the pseudo-inverse method (PIM) and model predictive con-
trol (MPC) are three of the methodologies which can be used in this reconfigurable con-
trol framework.
• The main idea of EA is to design a stabilizing output feedback law such that the eigen-
structure of the closed-loop system of the linear fault model provided by the FDIE unit
is as close as possible to that of the original closed-loop system. The limitations
when applying EA to reconfigurable flight control are obvious: only linear models
are considered and actuator dynamics are not taken into account. Also, a perfect
fault model is assumed and the effect of eigenvectors in the failed system being
not exactly equal to those in the nominal system is not well understood. Despite
these problems, some examples of EA and reconfigurable flight control exist in
literature, see e.g. [112, 214].

• A method which closely resembles EA is the pseudo-inverse method. The idea of


the PIM is to recover the closed-loop behavior by calculating an output feedback
law which minimizes the difference in closed-loop dynamics between the fault
model and the nominal model. The PIM was popular in the 1980’s and the early
1990's, but has fallen out of favor due to difficulties in ensuring stability. A survey
with several attempts to make this method stable can be found in [162].

• MPC is an interesting method to use for RFC due to its ability to handle constraints
and changing model dynamics systematically when failure occurs. MPC also re-
quires the use of a fault model since it relies on an internal model of the system.
Several methods for changing the internal model have been proposed, such as the
multiple model method in [99]. More examples of RFC using MPC can be found
in [94, 95]. In [74] a combination of MPC and a subspace predictor is suggested
and demonstrated in a reconfigurable flight control problem for a damaged Boeing
747 model. A disadvantage of MPC is that the method requires a computation-
ally intensive online optimization at each time step, which makes it difficult to
implement MPC as an aircraft controller. There is no guarantee that there exists a
solution to the optimization problem at all times.

Actuator Only
Actuator only methodologies are limited in the sense that they can only provide recon-
figurable control in case of actuator failures. Sliding mode control (SMC) and control
allocation (CA) are two such methodologies:

• SMC is a nonlinear control method, which has become quite popular for RFC
research [82, 83, 179, 180]. The advantages of SMC are its excellent provable
robustness properties, achieved by the use of a discontinuous term in the control law. A
major disadvantage is that assumptions have to be made which require that there is
one control surface for each controlled variable and that none of the control
surfaces is ever completely lost. This is not very realistic, as actuators are
usually jammed completely when they fail.

• CA is mainly used in aircraft with redundant control surfaces, like high perfor-
mance jet fighters [34, 35]. CA distributes the forces and moments demanded by the
controller over the actuators. CA handles actuator failures without the need to
modify the control law and has therefore received a lot of attention in the literature;
see [54] for a survey. A limitation of this approach is that the post-failure aircraft
and actuator dynamics are not taken into account by the control law, so that the
controller will still be attempting to achieve the original system performance while
the actuators may not be capable of achieving this. Another problem is that the
system will not necessarily be stable, even with a stabilizing control law, as the in-
put seen by the system may not be equal to that intended by the controller. Several
extensions to the basic CA method have been proposed in literature, see e.g. [76]
for an overview. A standard weighted pseudo-inverse allocation is sketched below as a reference point.
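
The weighted pseudo-inverse allocator can be written in a generic form (shown as an illustration; the thesis itself compares it with quadratic-programming allocation in Chapter 6). Given a commanded moment vector $\nu$, a control effectiveness matrix $B$ with full row rank and a weighting matrix $W = W^{T} > 0$,
\[
\min_{u}\ \tfrac{1}{2}\,u^{T} W u \quad \text{s.t.} \quad B u = \nu
\qquad \Longrightarrow \qquad
u = W^{-1} B^{T}\big(B W^{-1} B^{T}\big)^{-1}\nu .
\]
Actuator position and rate limits do not appear in this closed-form solution, which illustrates the limitation noted above: the allocated input may not be realizable by the failed actuator suite.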

Adaptive Control
Adaptive control approaches are by far the largest research area in RFC, especially in
recent decades. An adaptive controller is a controller with adjustable parameters and a
mechanism for adjusting these parameters. A ‘good’ adaptive control law removes the
need for an FDIE system. However, it is often difficult or impossible to prove robust-
ness and/or stability of such an algorithm. All the previously mentioned methods are
also somewhat adaptive, but all require FDIE or use pre-computed control laws and fault
models. The bulk of the adaptive flight control approaches in literature can be roughly di-
vided into two categories: model reference adaptive control (MRAC) and model inversion
based control, e.g. NDI or backstepping (BS), in combination with an online parameter
estimation method, e.g. neural networks (NN), recursive least squares (RLS).

• MRAC is usually used as a final stage in another algorithm. The goal of MRAC
is to force the output of the system to track a reference model with the desired
specifications. Adaptation is used to estimate the controller parameters needed to
track the model when a failure occurs. There exist direct and indirect adaptation
approaches; both are compared in [18, 19]. Other publications
about RFC using MRAC are [73, 108, 107, 142]. In [85] a discrete version of this
method is proposed. Several modifications of standard MRAC have been proposed
to provide stable adaptation in the presence of input constraints [91, 123].

• NDI/BS in combination with NN/RLS basically uses nonlinear control for refer-
ence tracking and NN/RLS to compensate for all modeling errors. In [30, 31, 37,
38, 39] a controller using NDI in combination with NN is designed and (limited)
flight tested on a tailless fighter aircraft under the USAF RESTORE program and
on the unmanned X-36 (Figure 1.4(a)). NDI with NN was also used on the F-15
ACTIVE (Figure 1.4(b)) under the intelligent flight control system program of the
NASA [21, 22]. A Boeing 747 fitted with an NDI controller combined with RLS for
the online model identification was successfully tested in a moving base simulator
[131, 132, 133]. In recent years adaptive backstepping flight control in combina-
tion with some form of neural networks has become a popular research subject, see
e.g. [58, 125, 161, 176, 177, 196]. The main advantages of adaptive backstepping
over NDI are its strong stability and convergence properties.

• Some other approaches suggested in literature over the years include the adaptive
LQR methods in [2, 69]. In [62] a linear matrix inequalities framework for a
robust, adaptive nonlinear flight control system is proposed. In [81, 185] an RFC system
for the NASA F-18/HARV based on a QFT compensator and an adaptive filter is
used. Flight control based on reinforcement learning is the subject of [89, 90, 126].
Indirect adaptive control using a moving window/batch estimation for partial loss
of the horizontal tail surface is studied in [157].

(a) The X-36 (b) The F-15 ACTIVE

Figure 1.4: Two examples of aircraft used for reconfigurable flight control testing. Pictures by
courtesy of NASA.

1.3.2 Reconfigurable Flight Control in Practice


In 1998, an F-18E/F Super Hornet (Figure 1.5) was in the middle of a flutter test flight
when the right stabilizer actuator experienced a failure [53]. This failure would have trig-
gered a reversion to a mechanical control mode in previous versions of the F-18, which
usually caused substantial transients and slightly degraded handling qualities. However,
the E/F design included the replacement of the mechanical backup system with a recon-
figurable control law. For this particular failure, the left stabilizer and rudder toe-in can
be used to restore some of the lost pitching moment and the flaps, ailerons and rudders
can be used to compensate for the coupling in the lateral/directional axes caused by asym-
metric stabilizer deflection. Although this control reconfiguration approach had been
demonstrated with simulated failures in flight tests, this was the first successful demon-
stration with an actual failure.
In 1999 the F-18E/F was the first production aircraft delivered with a reconfigurable
flight control law, which can only compensate for a single stabilizer actuator failure
mode. Several more advanced RFC systems have been flight tested on the X-36 and
the F-15 ACTIVE, but manufacturers are cautious about implementing them in production air-
craft. One reason for this has been the difficulty of certifying RFC approaches for safety
of flight. Therefore, part of the current research is focusing on the development of tools
for the analysis of RFC laws and adaptive control algorithms that are easier to certify and
implement. For instance, in [27, 141] a ‘retrofit’ RFC law using a modified sequential
least-squares algorithm for online model identification is proposed, which does not alter
Figure 1.5: The F-18E/F Super Hornet with RFC Law. Picture by courtesy of Boeing.

the baseline inner loop control and could be treated more like an autopilot for certifica-
tion purposes. A limited flight test program has been performed by Boeing and the Naval
Air Systems Command [158]. However, again only certain types of actuator failures are
considered.

1.4 Thesis Goal and Research Approach


The main goal of this thesis is to investigate the potential of the nonlinear adaptive back-
stepping control technique in combination with online model identification for the design
of a reconfigurable flight control system for a modern fighter aircraft. The following fea-
tures are aimed for:

• the RFC system uses a single nonlinear adaptive flight controller for the entire
domain of operation (flight envelope), which has provable theoretical performance
and stability properties.

• the RFC system enhances performance and survivability of the aircraft in the pres-
ence of disturbances related to failures and structural damage.

• the algorithms, on which the RFC system is based, possess excellent numerical sta-
bility properties and their computational costs are low (real-time implementation
is feasible).

As a study model the Lockheed Martin F-16 is selected, since it is the current fighter
aircraft of the Royal Netherlands Air Force and an accurate high-fidelity aerodynamic
model has been obtained. The MATLAB/Simulink® software package will be used to
design, refine and evaluate the RFC system. A short discussion on the motivation of the
methods and the aircraft model used in this thesis is now presented.

1.4.1 Nonlinear Adaptive Backstepping Control

Adaptive backstepping is a recursive, Lyapunov-based, nonlinear design method, which
makes use of dynamic parameter update laws to deal with parametric uncertainties. The
idea of backstepping is to design a controller recursively by considering some of the state
variables as ‘virtual controls’ and designing intermediate control laws for these. Back-
stepping achieves the goals of global asymptotic stabilization and tracking. The proof
of these properties is a direct consequence of the recursive procedure, since a Lyapunov
function is constructed for the entire system including the parameter estimates. The
tracking errors drive the adaptation process of the procedure.
Furthermore, it is possible to take magnitude and rate constraints on the control inputs
and system states into account such that the identification process is not corrupted during
periods of control effector saturation [58, 61]. A disadvantage of the integrated adaptive
backstepping method is that it only yields pseudo-estimates of the uncertain system pa-
rameters. There is no guarantee that the real values of the parameters are found, since
the adaptation only tries to satisfy a total system stability criterion, i.e. the Lyapunov
function. Furthermore, since the controller and identifier are designed as one integrated
system it is very difficult to tune the performance of one subsystem without influencing
the performance of the other. In this thesis several possible improvements to the basic
adaptive backstepping approach are introduced and evaluated.

1.4.2 Flight Envelope Partitioning

To simplify the online approximation of a full nonlinear dynamic aircraft model and
thereby reduce the computational load, the flight envelope can be partitioned into mul-
tiple connecting operating regions called hyperboxes or clusters [152, 153]. This can
be done manually using a priori knowledge of the nonlinearity of the system, automat-
ically using nonlinear optimization algorithms that cluster the data into hyperplanar or
hyperellipsoidal clusters [10] or a combination of both. In each hyperbox a locally valid
linear-in-the-parameters nonlinear model is defined, which can be updated using the up-
date laws of the Lyapunov-based adaptive backstepping control law.
For an aircraft, the aerodynamic model can be partitioned using different state variables,
the choice of which depends on the expected nonlinearities of the system. Fuzzy logic
or some form of neural network can be used to interpolate between the local nonlinear
models, ensuring smooth transitions. Because only a small number of local models is
updated at any given time step, the computational expense is relatively low. Another
advantage is that storing the local models means retaining information about all flight
conditions, because the local adaptation does not interfere with the models outside the
closed neighborhood. Hence, the estimator has memory capabilities and learns instead
of continuously adapting one global nonlinear model.
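
A minimal sketch of this partial-update mechanism is given below. It is illustrative Python, not the thesis implementation: the hat-function weights stand in for the B-spline networks of Chapter 7, the scalar scheduling variable and regressor are hypothetical, and the gradient-type update stands in for the Lyapunov-based update laws.

import numpy as np

# Partition a scheduling variable (here: angle of attack) into overlapping
# regions; each region carries its own linear-in-the-parameters local model.
knots = np.linspace(-10.0, 45.0, 12)     # hypothetical knot grid [deg]
theta = np.zeros((knots.size, 3))        # one local parameter vector per region

def weights(alpha):
    """First-order B-spline (hat function) weights: a partition of unity
    inside the knot grid, with at most two weights active at any point."""
    d = knots[1] - knots[0]
    w = np.maximum(0.0, 1.0 - np.abs(alpha - knots) / d)
    s = w.sum()
    return w / s if s > 0 else w

def predict(alpha, regressor):
    """Blend the local models into one smooth global approximation."""
    return weights(alpha) @ theta @ regressor

def update(alpha, regressor, error, gamma=0.5):
    """Update only the local models whose basis functions are active, so
    models for other flight conditions keep the information already learned."""
    w = weights(alpha)
    for i in np.flatnonzero(w):
        theta[i] += gamma * w[i] * error * regressor

Because at most a few weights are nonzero at any time step, only those local models are touched, which bounds the computational load and gives the approximator the memory property described above.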

1.4.3 The F-16 Dynamic Model


Throughout this thesis the theoretical results are illustrated, where possible, by means
of numerical simulation examples. The most accurate dynamic aircraft model available
for this research is that of the Lockheed Martin F-16 single-seat fighter aircraft. This
aircraft model has been constructed using high-fidelity aerodynamic data obtained from
[149] which is valid over the entire subsonic flight envelope of the F-16. Detailed engine
and actuator models are also available, as well as a simplified version of the baseline
flight control system. However, structural failure models are not available, which poses a
limitation on the reconfigurable flight control research in this thesis. In other words, the
simulation scenarios with the F-16 model are limited to actuator hard-overs or lock-ups,
longitudinal center of gravity shifts and uncertainties in one or more aerodynamic coef-
ficients.
Without any form of FDIE, these limited failure scenarios still pose a challenge, espe-
cially the actuator failures, and can be used to evaluate the theoretical results in this
thesis work. Therefore, an FDIE system, such as sensor feedback of actuator positions
or actuator health monitoring systems, is not included in the investigated adaptive con-
trol designs. In this way, the actuator failures are used as a substitute for more complex
(a)symmetric structural failure scenarios.
Note that the baseline flight control system of the F-16 model makes use of full state
measurement and hence these measurements are also assumed to be available for the
nonlinear adaptive control designs developed in this thesis.

1.5 Thesis Outline


The outline of the thesis is as follows:

In Chapter 2 the high-fidelity dynamic model of the F-16 is constructed. The model
is implemented as a C S-function in MATLAB/Simulink®. The available aerodynamic
data is valid over a large, subsonic flight envelope. Furthermore, the characteristics of the
classical baseline flight control system of the F-16 are discussed. The dynamic aircraft
model and baseline controller are needed to evaluate and compare the performance of the
nonlinear adaptive control designs in later chapters.
Chapter 3 starts with a discussion on stability concepts and the concept of Lyapunov
functions. Lyapunov’s direct method forms the basis for the recursive backstepping pro-
cedure, which is highlighted in the second part of the chapter. Simple control examples
are used to clarify the design procedure.
In Chapter 4 nonlinear systems with parametric uncertainty are introduced. The back-
stepping method is extended with a dynamic feedback part, i.e. a parameter update law,
that constantly updates the static control part. The parameter adaptation part is designed
recursively and simultaneously with the static feedback part using a single control Lya-
punov function. This approach is referred to as tuning functions adaptive backstepping.
Techniques to robustify the adaptive design against non-parametric uncertainties are also
discussed. Finally, command filters are introduced in the design to simplify the tuning
functions adaptive backstepping design and to make the parameter adaptation more robust
to actuator saturation. This approach is referred to as constrained adaptive backstepping.
Chapter 5 explores the possibilities of combining (inverse) optimal control theory and
adaptive backstepping. The standard adaptive backstepping designs are mainly focused
on achieving stability and convergence, the transient performance and optimality are not
taken explicitly into account. The inverse optimal adaptive backstepping technique re-
sulting from combining the tuning functions approach and inverse optimal control theory
is validated with a simple flight control example.
In Chapter 6 the constrained adaptive backstepping technique is applied to the design
of a flight control system for a simplified, nonlinear over-actuated fighter aircraft model
valid at two flight conditions. It is demonstrated that the extension of the method to
multi-input multi-output systems is straightforward. A comparison with a modular adap-
tive controller that employs a least squares identifier is made. Furthermore, the interac-
tions between several control allocation algorithms and the online model identification
for simulations with actuator failures are studied.
Chapter 7 extends the results of Chapter 6 to nonlinear adaptive control for the F-16 dy-
namic model of Chapter 2, which is valid for the entire subsonic flight envelope. A flight
envelope partitioning method to simplify the online model identification is introduced.
The flight envelope is partitioned into multiple connecting operating regions and locally
valid models are defined in each region. B-spline networks are used for smooth inter-
polation between the models. As a study case a trajectory control autopilot is designed,
after which it is evaluated in several maneuvers with actuator failures and uncertainties
in the onboard aerodynamic model.
Chapter 8 again considers constrained adaptive backstepping flight control for the high-
fidelity F-16 model. A stability and control augmentation system is designed in such a
way that it has virtually the same handling qualities as the baseline F-16 flight control
system. A comparison is made between the performance of the baseline control system,
a modular adaptive controller with least-squares identifier and the constrained adaptive
backstepping controller in several realistic failure scenarios.
Chapter 9 introduces the immersion and invariance method to construct a new type of
nonlinear adaptive estimator. The idea behind the immersion and invariance approach is
to assign prescribed stable dynamics to the estimation error. The resulting estimator in
combination with a backstepping controller is shown to improve transient performance
and to radically simplify the tuning process of the integrated adaptive backstepping de-
signs of the earlier chapters.
In Chapter 10 the concluding remarks and recommendations for further research are
discussed.

Figure 1.6 depicts a flow chart of the thesis illustrating the connections between the
different chapters. Although this thesis is written as a monograph, Chapters 5 to 9 can
be viewed as a collection of edited versions of previously published papers. An (approx-
imate) overview of the papers on which these chapters are based is given below.

Chapter 5:
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Comparison of Inverse
Optimal and Tuning Functions Design for Adaptive Missile Control”, Journal of
Guidance, Control and Dynamics, Vol. 31, No. 4, July-Aug 2008, pp. 1176-1182

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Comparison of Inverse
Optimal and Tuning Function Designs for Adaptive Missile Control”, Proc. of the
2007 AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton
Head, South Carolina, AIAA-2007-6675

Chapter 6:
• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, “Full Envelope Modular
Adaptive Control of a Fighter Aircraft using Orthogonal Least Squares”, Journal
of Guidance, Control and Dynamics, Accepted for publication

• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, “Modular Adaptive Input-
to-State Stable Backstepping of a Nonlinear Missile Model”, Proc. of the 2007
AIAA Guidance, Navigation, and Control Conference and Exhibit, Hilton Head,
South Carolina, AIAA-2007-6676

• E.R. van Oort, L. Sonneveldt, Q.P. Chu and J.A. Mulder, “A Comparison of Adap-
tive Nonlinear Control Designs for an Over-Actuated Fighter Aircraft Model”,
Proc. of the 2008 AIAA Guidance, Navigation, and Control Conference and Ex-
hibit, Honolulu, Hawaii, AIAA-2008-6786
Chapter 7:
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Nonlinear Adaptive
Backstepping Trajectory Control”, Journal of Guidance, Control and Dynamics,
Vol. 32, No. 1, Jan-Feb 2009, pp. 25-39

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Nonlinear Adaptive
Trajectory Control Applied to an F-16 Model”, Proc. of the 2008 AIAA Guidance,
Navigation, and Control Conference and Exhibit, Honolulu, Hawaii, AIAA-2008-
6788
Chapter 8:
• L. Sonneveldt, Q.P. Chu and J.A. Mulder, ‘‘Nonlinear Flight Control Design Using
Constrained Adaptive Backstepping”, Journal of Guidance, Control and Dynam-
ics, Vol. 30, No. 2, Mar-Apr 2007, pp. 322-336

• L. Sonneveldt, Q.P. Chu and J.A. Mulder, “Constrained Nonlinear Adaptive Back-
stepping Flight Control: Application to an F-16/MATV Model”, Proc. of the 2006
AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, Col-
orado, AIAA-2006-6413

• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Nonlinear Adaptive
Flight Control Law Design and Handling Qualities Evaluation”, Joint 48th IEEE
Conference on Decision and Control and 28th Chinese Control Conference, Shang-
hai, 2009
• L. Sonneveldt, et al., “Lyapunov-based Fault Tolerant Flight Control Designs for a
Modern Fighter Aircraft Model”, Proc. of the 2009 AIAA Guidance, Navigation,
and Control Conference and Exhibit, Chicago, Illinois, AIAA-2009-6172
Chapter 9:
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Immersion and Invari-
ance Adaptive Backstepping Flight Control”, Journal of Guidance, Control and
Dynamics, Under review
• L. Sonneveldt, E.R. van Oort, Q.P. Chu and J.A. Mulder, “Immersion and Invari-
ance Based Nonlinear Adaptive Flight Control”, Proc. of the 2010 AIAA Guid-
ance, Navigation, and Control Conference and Exhibit, To be presented

Figure 1.6: Flow chart of the thesis chapters.


Chapter 2
Aircraft Modeling

This chapter utilizes basic flight dynamics theory to construct a nonlinear dynamical
model of the Lockheed Martin F-16, which is the main study model in this thesis work.
The available geometric and aerodynamic aircraft data, as well as the assumptions made
are discussed in detail. Furthermore, a description of the baseline flight control system
of the F-16, which can be used for comparison purposes, is also included. The final part
of the chapter discusses the implementation of the model and the baseline control system
in the MATLAB/Simulink software package.

2.1 Introduction
In this chapter a nonlinear dynamical model of the Lockheed-Martin F-16 is constructed.
The F-16 is a single-seat, supersonic, multi-role tactical aircraft with a blended wing-
fuselage that has been in production since 1976. Over 4,400 have been produced for 24
countries, making it the most common fighter type in the world. A three-view of the
single-engined F-16 aircraft is depicted in Figure 2.1.
This chapter will start with a derivation of the equations of motion for a general rigid
body aircraft. After that, the available control variables and the engine model of the F-
16 are discussed. The geometry and the aerodynamic data are given in Section 2.4. In
Section 2.5 a simplified version of the baseline F-16 flight control system is discussed.
The implementation in MATLAB/Simulink of the complete F-16 dynamic model with
flight control system is detailed in the last part of the chapter.

2.2 Aircraft Dynamics


In this section the equations of motion for the F-16 model are derived, this derivation is
based on [16, 45, 127]. A very thorough discussion on flight dynamics can be found in


Figure 2.1: Three-view of the Lockheed-Martin F-16.

the course notes [143].

2.2.1 Reference Frames


Before the equations of motion can be derived, some frames of reference are needed to
describe the motion in. The reference frames used in this thesis are
• the earth-fixed reference frame FE , used as the inertial frame and the vehicle car-
ried local earth reference frame FO with its origin fixed in the center of gravity of
the aircraft which is assumed to have the same orientation as FE ;
• the wind-axes reference frame FW , obtained from FO by three successive rotations
of flight path heading angle χ, flight path climb angle γ and aerodynamic bank
angle µ;
• the stability-axes reference frame FS , obtained from FW by a rotation of minus
sideslip angle β;
• and finally the body-fixed reference frame FB , obtained from FS by a rotation of
angle of attack α.
The body-fixed reference frame FB can also be obtained directly from FO by three suc-
cessive rotations of yaw angle ψ, pitch angle θ and roll angle φ. All reference frames are
right-handed and orthogonal. In the earth-fixed reference frame the zE -axis points to the
center of the earth, the xE -axis points in some arbitrary direction, e.g. the north, and the

yE -axis is perpendicular to the xE -axis.


The transformation matrices from FB to FS and from FB to FW are defined as
\[
T_{s/b} = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}, \qquad
T_{w/b} = \begin{bmatrix} \cos\alpha\cos\beta & \sin\beta & \sin\alpha\cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}.
\]

2.2.2 Aircraft Variables


A number of assumptions has to be made, before proceeding with the derivation of the
equations of motion:

1. The aircraft is a rigid-body, which means that any two points on or within the
airframe remain fixed with respect to each other. This assumption is quite valid for
a small fighter aircraft.

2. The earth is flat and non-rotating and regarded as an inertial reference. This
assumption is valid when dealing with control design of aircraft, but not when
analyzing inertial guidance systems.

3. Wind gust effects are not taken into account, hence the undisturbed air is assumed
to be at rest w.r.t. the surface of the earth. In other words, the kinematic velocity is
equal to the aerodynamic velocity of the aircraft.

4. The mass is constant during the time interval over which the motion is considered,
the fuel consumption is neglected during this time-interval. This assumption is
necessary to apply Newton’s motion laws.

5. The mass distribution of the aircraft is symmetric relative to the XB OZB -plane,
this implies that the products of inertia Iyz and Ixy are equal to zero. This assump-
tion is valid for most aircraft.

Note that the last assumption is no longer valid when the aircraft gets asymmetrically
damaged. However, the aerodynamic effects resulting from such damage will, in gen-
eral, be much larger than the influence of the center of gravity shift for a small fighter
aircraft. A derivation of the equations of motion without this last assumption can be
found in [11].
Under the above assumptions the motion of the aircraft has six degrees of freedom (rota-
tion and translation in three dimensions). The aircraft dynamics can be described by its
position, orientation, velocity and angular velocity over time. pE = (xE , yE , zE )T is the
position vector expressed in an earth-fixed coordinate system. V is the velocity vector
given by V = (u, v, w)T , where u is the longitudinal velocity, v the lateral velocity and
w the normal velocity. The orientation vector is given by Φ = (φ, θ, ψ)T , where φ is
the roll angle, θ the pitch angle and ψ the yaw angle, and the angular velocity vector is
given by ω = (p, q, r)T , where p, q and r are the roll, pitch and yaw angular velocities,
respectively. Various components of the aircraft motions are illustrated in Figure 2.2.

Figure 2.2: Aircraft orientation angles φ, θ and ψ, aerodynamic angles α and β, and the angular
rates p, q and r. The frame of reference is body-fixed and all angles and rates are defined positive
in the figure [178].

The relation between the attitude vector Φ and the angular velocity vector ω is given as
\[
\dot{\Phi} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix}\omega \tag{2.1}
\]

Defining VT as the total velocity and using Figure 2.2, the following relations can be
derived:
\[
V_T = \sqrt{u^2 + v^2 + w^2}, \qquad \alpha = \arctan\frac{w}{u}, \qquad \beta = \arcsin\frac{v}{V_T} \tag{2.2}
\]
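As an illustration, a minimal MATLAB sketch of the airdata relations (2.2); the function name is illustrative, and atan2 is used instead of arctan(w/u) to avoid problems when u = 0, which is an implementation choice rather than part of the model description:

function [VT, alpha, beta] = airdata(u, v, w)
% Total velocity and aerodynamic angles from body-axis velocities, cf. Eq. (2.2).
VT    = sqrt(u^2 + v^2 + w^2);  % total velocity
alpha = atan2(w, u);            % angle of attack
beta  = asin(v/VT);             % sideslip angle
end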

Furthermore, when β = φ = 0, the flight path angle γ can be defined as

γ =θ−α (2.3)

2.2.3 Equations of Motion for a Rigid Body Aircraft


The equations of motion for the aircraft can be derived from Newton’s Second Law of
motion, which states that the summation of all external forces acting on a body must be
equal to the time rate of change of its momentum, and the summation of the external

moments acting on a body must be equal to the time rate of change of its angular mo-
mentum. In the inertial, earth-fixed reference frame FE , Newton’s Second Law can be
expressed by two vector equations [143]
\[
\mathbf{F} = \left.\frac{d}{dt}(m\mathbf{V})\right|_E \tag{2.4}
\]
\[
\mathbf{M} = \left.\frac{d\mathbf{H}}{dt}\right|_E \tag{2.5}
\]
where F represents the sum of all externally applied forces, m is the mass of the aircraft,
M represents the sum of all applied torques and H is the angular momentum.

Force Equation
First, to further evaluate the force equation (2.4) it is necessary to obtain an expression
for the time rate of change of the velocity vector with respect to earth. This process is
complicated by the fact that the velocity vector may be rotating while it is changing in
magnitude. Using the equation of Coriolis in appendix A of [16] results in
\[
\mathbf{F} = \left.\frac{d}{dt}(m\mathbf{V})\right|_B + \omega \times m\mathbf{V}, \tag{2.6}
\]
where ω is the total angular velocity of the aircraft with respect to the earth (inertial
reference frame). Expressing the vectors as the sum of their components with respect to
the body-fixed reference frame FB gives
V = iu + jv + kw (2.7)
ω = ip + jq + kr (2.8)
where i, j and k are unit vectors along the aircraft’s xB , yB and zB axes, respectively.
Expanding (2.6) using (2.7), (2.8) results in
Fx = m(u̇ + qw − rv)
Fy = m(v̇ + ru − pw) (2.9)
Fz = m(ẇ + pv − qu)
where the external forces Fx , Fy and Fz depend on the weight vector W, the aerodynamic
force vector R and the thrust vector E. It is assumed the thrust produced by the engine,
FT , acts parallel to the aircraft’s xB -axis. Hence,
Ex = FT
Ey = 0 (2.10)
Ez = 0
The components of W and R along the body-axes are
Wx = −mg sin θ
Wy = mg sin φ cos θ (2.11)
Wz = mg cos φ cos θ

and

Rx = X̄
Ry = Ȳ (2.12)
Rz = Z̄

where g is the gravity constant. The size of the aerodynamic forces X̄, Ȳ and Z̄ is
determined by the amount of air diverted by the aircraft in different directions. The
amount of air diverted by the aircraft mainly depends on the following factors:
• the total velocity VT (or Mach number M ) and density of the airflow ρ,
• the geometry of the aircraft: wing area S, wing span b and mean aerodynamic
chord c̄,
• the orientation of the aircraft relative to the airflow: angle of attack α and side slip
angle β,
• the control surface deflections δ,
• the angular rates p, q and r.
There are other variables such as the time derivatives of the aerodynamic angles that also
play a role, but these effects are less prominent, since it is assumed that the aircraft is a
rigid body. This motivates the standard way of modeling the aerodynamic force:

X̄ = q̄SCXT (α, β, p, q, r, δ, ...)


Ȳ = q̄SCYT (α, β, p, q, r, δ, ...) (2.13)
Z̄ = q̄SCZT (α, β, p, q, r, δ, ...)

where q̄ = ½ρVT² is the aerodynamic pressure. The air density ρ is calculated according
to the International Standard Atmosphere (ISA) as given in Appendix A.2. The coeffi-
cients CXT , CYT and CZT are usually obtained from (virtual) wind tunnel data and flight
tests. Combining equations (2.11) and (2.12) and the thrust components (2.10) with (2.9),
results in the complete body-axes force equation:

X̄ + FT − mg sin θ = m(u̇ + qw − rv)


Ȳ + mg sin φ cos θ = m(v̇ + ru − pw) (2.14)
Z̄ + mg cos φ sin θ = m(ẇ + pv − qu)

Moment Equation
To obtain the equations for angular motion, consider again Equation (2.5). The time rate
of change of H is required and since H can change in magnitude and direction, (2.5) can
be written as
dH i
M= +ω×H (2.15)
dt B

In the body-fixed reference frame, under the rigid body and constant mass assumptions,
the angular momentum H can be expressed as

H = Iω (2.16)

where, under the symmetrical aircraft assumption, the inertia matrix is defined as
 
Ix 0 −Ixz
I= 0 Iy 0  (2.17)
−Ixz 0 Iz

Expanding (2.15) using (2.16) results in

Mx = ṗIx − ṙIxz + qr(Iz − Iy ) − pqIxz


My = q̇Iy + pr(Ix − Iz ) + (p² − r²)Ixz (2.18)
Mz = ṙIz − ṗIxz + pq(Iy − Ix ) + qrIxz .

The external moments Mx , My and Mz are those due to aerodynamics and engine angu-
lar momentum. As a result the aerodynamic moments are

Mx = L̄
My = M̄ − rHeng (2.19)
Mz = N̄ + qHeng

where L̄,M̄ and N̄ are the aerodynamic moments and Heng is the engine angular mo-
mentum. Note that the engine angular momentum is assumed to act parallel to the body
x-axis of the aircraft. The aerodynamic moments can be expressed in a similar way as
the aerodynamic forces in Equation (2.13):

L̄ = q̄SbClT (α, β, p, q, r, δ, ...) (2.20)


M̄ = q̄Sc̄CmT (α, β, p, q, r, δ, ...)
N̄ = q̄SbCnT (α, β, p, q, r, δ, ...)

Combining (2.18) and (2.19), the complete body-axis moment equation is formed as

L̄ = ṗIx − ṙIxz + qr(Iz − Iy ) − pqIxz


M̄ − rHeng = q̇Iy + pr(Ix − Iz ) + (p² − r²)Ixz (2.21)
N̄ + qHeng = ṙIz − ṗIxz + pq(Iy − Ix ) + qrIxz .

2.2.4 Gathering the Equations of Motion


Euler Angles
The equations of motion derived in the previous sections are now collected and written
as a system of twelve scalar first order differential equations.
\begin{align}
\dot{u} &= rv - qw - g\sin\theta + \tfrac{1}{m}\left(\bar{X} + F_T\right) \tag{2.22}\\
\dot{v} &= pw - ru + g\sin\phi\cos\theta + \tfrac{1}{m}\bar{Y} \tag{2.23}\\
\dot{w} &= qu - pv + g\cos\phi\cos\theta + \tfrac{1}{m}\bar{Z} \tag{2.24}\\
\dot{p} &= (c_1 r + c_2 p)q + c_3\bar{L} + c_4(\bar{N} + qH_{eng}) \tag{2.25}\\
\dot{q} &= c_5 pr - c_6(p^2 - r^2) + c_7(\bar{M} - rH_{eng}) \tag{2.26}\\
\dot{r} &= (c_8 p - c_2 r)q + c_4\bar{L} + c_9(\bar{N} + qH_{eng}) \tag{2.27}\\
\dot{\phi} &= p + \tan\theta(q\sin\phi + r\cos\phi) \tag{2.28}\\
\dot{\theta} &= q\cos\phi - r\sin\phi \tag{2.29}\\
\dot{\psi} &= \frac{q\sin\phi + r\cos\phi}{\cos\theta} \tag{2.30}\\
\dot{x}_E &= u\cos\psi\cos\theta + v(\cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi) + w(\cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi) \tag{2.31}\\
\dot{y}_E &= u\sin\psi\cos\theta + v(\sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi) + w(\sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi) \tag{2.32}\\
\dot{z}_E &= -u\sin\theta + v\cos\theta\sin\phi + w\cos\theta\cos\phi \tag{2.33}
\end{align}

where
\[
\begin{aligned}
\Gamma c_1 &= (I_y - I_z)I_z - I_{xz}^2, & \Gamma c_4 &= I_{xz}, & c_7 &= \frac{1}{I_y},\\
\Gamma c_2 &= (I_x - I_y + I_z)I_{xz}, & c_5 &= \frac{I_z - I_x}{I_y}, & \Gamma c_8 &= I_x(I_x - I_y) + I_{xz}^2,\\
\Gamma c_3 &= I_z, & c_6 &= \frac{I_{xz}}{I_y}, & \Gamma c_9 &= I_x,
\end{aligned}
\]
with \(\Gamma = I_x I_z - I_{xz}^2\).
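The rotational dynamics (2.25)-(2.27) together with these inertia coefficients can be coded compactly. The following MATLAB sketch is illustrative only; the actual inertia values belong to Table A.1 of Appendix A and are therefore passed in as arguments rather than hard-coded:

function [pdot, qdot, rdot] = rot_dynamics(p, q, r, Lbar, Mbar, Nbar, Heng, I)
% Rotational dynamics (2.25)-(2.27) with the inertia coefficients c1..c9.
% I = [Ix Iy Iz Ixz]; the inertia values are taken from Table A.1.
Ix = I(1); Iy = I(2); Iz = I(3); Ixz = I(4);
Gam = Ix*Iz - Ixz^2;
c1 = ((Iy - Iz)*Iz - Ixz^2)/Gam;   c2 = (Ix - Iy + Iz)*Ixz/Gam;
c3 = Iz/Gam;                       c4 = Ixz/Gam;
c5 = (Iz - Ix)/Iy;                 c6 = Ixz/Iy;
c7 = 1/Iy;                         c8 = (Ix*(Ix - Iy) + Ixz^2)/Gam;
c9 = Ix/Gam;
pdot = (c1*r + c2*p)*q + c3*Lbar + c4*(Nbar + q*Heng);
qdot = c5*p*r - c6*(p^2 - r^2) + c7*(Mbar - r*Heng);
rdot = (c8*p - c2*r)*q + c4*Lbar + c9*(Nbar + q*Heng);
end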

Quaternions
The above equations of motion make use of Euler angle approach for the orientation
model. The disadvantage of the Euler angle method is that the differential equations for

φ̇ and ψ̇ become singular when the pitch angle θ passes through ±π/2. To avoid these singular-
ities quaternions are used for the aircraft orientation presentation. A detailed explanation
about quaternions and their properties can be found in [127]. With the quaternions pre-
sentation the aircraft system representation consists of 13 scalar first order differential
equations:
\begin{align}
\dot{u} &= rv - qw + \tfrac{1}{m}\left(\bar{X} + F_T\right) + 2(q_1 q_3 - q_0 q_2)g \tag{2.34}\\
\dot{v} &= pw - ru + \tfrac{1}{m}\bar{Y} + 2(q_2 q_3 + q_0 q_1)g \tag{2.35}\\
\dot{w} &= qu - pv + \tfrac{1}{m}\bar{Z} + (q_0^2 - q_1^2 - q_2^2 + q_3^2)g \tag{2.36}\\
\dot{p} &= (c_1 r + c_2 p)q + c_3\bar{L} + c_4(\bar{N} + qH_{eng}) \tag{2.37}\\
\dot{q} &= c_5 pr - c_6(p^2 - r^2) + c_7(\bar{M} - rH_{eng}) \tag{2.38}\\
\dot{r} &= (c_8 p - c_2 r)q + c_4\bar{L} + c_9(\bar{N} + qH_{eng}) \tag{2.39}
\end{align}
\[
\dot{\mathbf{q}} = \begin{bmatrix}\dot{q}_0\\ \dot{q}_1\\ \dot{q}_2\\ \dot{q}_3\end{bmatrix}
= \frac{1}{2}\begin{bmatrix} 0 & -p & -q & -r\\ p & 0 & r & -q\\ q & -r & 0 & p\\ r & q & -p & 0\end{bmatrix}
\begin{bmatrix} q_0\\ q_1\\ q_2\\ q_3\end{bmatrix} \tag{2.40}
\]
\[
\begin{bmatrix}\dot{x}_E\\ \dot{y}_E\\ \dot{z}_E\end{bmatrix}
= \begin{bmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2)\\
2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1)\\
2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix}
\begin{bmatrix} u\\ v\\ w\end{bmatrix} \tag{2.41}
\]
where
\[
\begin{bmatrix} q_0\\ q_1\\ q_2\\ q_3\end{bmatrix}
= \pm\begin{bmatrix}
\cos\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} + \sin\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2}\\
\sin\frac{\phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} - \cos\frac{\phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2}\\
\cos\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2} + \sin\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2}\\
\cos\frac{\phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} - \sin\frac{\phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2}
\end{bmatrix}.
\]

Using (2.40) to describe the attitude dynamics means that the four differential equations are integrated as if all quaternion components were independent. Therefore, the normalization condition \(|q| = \sqrt{q_0^2 + q_1^2 + q_2^2 + q_3^2} = 1\) and the derivative constraint \(q_0\dot{q}_0 + q_1\dot{q}_1 + q_2\dot{q}_2 + q_3\dot{q}_3 = 0\) may not be satisfied after performing an integration step
due to numerical round-off errors. After each integration step the constraint may be re-
established by subtracting the discrepancy from the quaternion derivatives. The corrected
quaternion dynamics are [170]

q̇′ = q̇ − δq, (2.42)

where δ = q0 q̇0 + q1 q̇1 + q2 q̇2 + q3 q̇3 .
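A minimal MATLAB sketch of the quaternion kinematics (2.40) with the drift correction (2.42); the function name and calling convention are illustrative and do not correspond to the actual S-function implementation:

function qdot = quat_dynamics(quat, p, q, r)
% Quaternion kinematics (2.40) with the drift correction (2.42).
Omega = 0.5*[ 0 -p -q -r;
              p  0  r -q;
              q -r  0  p;
              r  q -p  0 ];
qdot  = Omega*quat(:);
delta = quat(:).'*qdot;       % q0*q0dot + q1*q1dot + q2*q2dot + q3*q3dot
qdot  = qdot - delta*quat(:); % corrected derivative, Eq. (2.42)
end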



Wind-axes Force Equations


For control design it is more convenient to transform the force equations (2.34)-(2.36) to the wind-axes reference frame. Taking the derivative of (2.2) results in [127]
\begin{align}
\dot{V}_T &= \frac{1}{m}\left(-D + F_T\cos\alpha\cos\beta + mg_1\right) \tag{2.43}\\
\dot{\alpha} &= q - (p\cos\alpha + r\sin\alpha)\tan\beta - \frac{1}{mV_T\cos\beta}\left(L + F_T\sin\alpha - mg_3\right) \tag{2.44}\\
\dot{\beta} &= p\sin\alpha - r\cos\alpha + \frac{1}{mV_T}\left(Y - F_T\cos\alpha\sin\beta + mg_2\right) \tag{2.45}
\end{align}
where the drag force D, the side force Y and the lift force L are defined as

D = −X̄ cos α cos β − Ȳ sin β − Z̄ sin α cos β


Y = −X̄ cos α sin β + Ȳ cos β − Z̄ sin α sin β
L = X̄ sin α − Z̄ cos α

and the gravity components as

g1 = g (− cos α cos β sin θ + sin β sin φ cos θ + sin α cos β cos φ cos θ)
g2 = g (cos α sin β sin θ + cos β sin φ cos θ − sin α sin β cos φ cos θ)
g3 = g (sin α sin θ + cos α cos φ cos θ) .

2.3 Control Variables and Engine Modeling


The F-16 model allows control over thrust, elevator, ailerons and rudder. The thrust is
measured in Newtons. All deflections are defined positive in the conventional way, i.e.
positive thrust causes an increase in acceleration along the xB -axis, a positive elevator
deflection results in a decrease in pitch rate, a positive aileron deflection gives a decrease
in roll rate and a positive rudder deflection decreases the yaw rate. The F-16 also has a
leading edge flap, which helps to fly the aircraft at high angles of attack. The deflection
of the leading edge flap δlef is not controlled directly by the pilot, but is governed by
the following transfer function dependent on angle of attack α and static and dynamic
pressures:
\[
\delta_{lef} = 1.38\,\frac{2s + 7.25}{s + 7.25}\,\alpha - 9.05\,\frac{\bar{q}}{p_{stat}} + 1.45. \tag{2.46}
\]
The differential elevator deflection, trailing edge flap, landing gear and speed brakes are
not included in the model, since no data is publicly available. The control surfaces of
the F-16 are driven by servo-controlled actuators to produce the deflections commanded
by the flight control system. The actuators of the control surfaces are modeled as first-order low-pass filters with certain gains and saturation limits on deflection range and deflection rate.
These limits can be found in Table 2.1. The gains of the actuators are 1/0.136 for the
leading edge flap and 1/0.0495 for the other control surfaces. The maximum values and

Table 2.1: The control input units and maximum values

Control units MIN. MAX. rate limit


Elevator deg -25 25 ± 60 deg/s
Ailerons deg -21.5 21.5 ± 80 deg/s
Rudder deg -30 30 ± 120 deg/s
Leading edge flap deg 0 25 ± 25 deg/s

units for all control variables are given in Table 2.1.
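The actuator model described above can be sketched in MATLAB as a single Euler integration step with rate and position saturation; the function and variable names, and the explicit Euler discretization, are assumptions made for illustration and are not taken from the thesis implementation:

function d_new = actuator_step(d, d_cmd, dt, gain, rate_lim, pos_min, pos_max)
% One Euler step of a first-order actuator with rate and position saturation,
% as described in Section 2.3 and Table 2.1.
ddot  = gain*(d_cmd - d);                    % first-order lag, e.g. gain = 1/0.0495
ddot  = max(min(ddot, rate_lim), -rate_lim); % rate limit, e.g. 60 deg/s for the elevator
d_new = d + ddot*dt;
d_new = max(min(d_new, pos_max), pos_min);   % deflection limit, e.g. +/-25 deg
end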


The Lockheed Martin F-16 is powered by an after-burning turbofan jet engine, which
is modeled taking into account throttle gearing and engine power level lag. The thrust
response is modeled with a first order lag, where the lag time constant is a function of the
current engine power level and the commanded power. The commanded power level is a linear function of the throttle position, apart from a change in slope when the military power setting is reached at a throttle setting of 0.77 [149]:
\[
P_c^*(\delta_{th}) = \begin{cases} 64.94\,\delta_{th} & \text{if } \delta_{th} \le 0.77\\ 217.38\,\delta_{th} - 117.38 & \text{if } \delta_{th} > 0.77 \end{cases}. \tag{2.47}
\]

Note that the throttle position is limited to the range 0 ≤ δth ≤ 1. The derivative of the actual power level Pa is given by [149]
\[
\dot{P}_a = \frac{1}{\tau_{eng}}\left(P_c - P_a\right), \tag{2.48}
\]
where
\[
P_c = \begin{cases}
P_c^* & \text{if } P_c^* \ge 50 \text{ and } P_a \ge 50\\
60 & \text{if } P_c^* \ge 50 \text{ and } P_a < 50\\
40 & \text{if } P_c^* < 50 \text{ and } P_a \ge 50\\
P_c^* & \text{if } P_c^* < 50 \text{ and } P_a < 50
\end{cases}
\]
\[
\frac{1}{\tau_{eng}} = \begin{cases}
5.0 & \text{if } P_c^* \ge 50 \text{ and } P_a \ge 50\\
\dfrac{1}{\tau_{eng}^*} & \text{if } P_c^* \ge 50 \text{ and } P_a < 50\\
5.0 & \text{if } P_c^* < 50 \text{ and } P_a \ge 50\\
\dfrac{1}{\tau_{eng}^*} & \text{if } P_c^* < 50 \text{ and } P_a < 50
\end{cases}
\]
\[
\frac{1}{\tau_{eng}^*} = \begin{cases}
1.0 & \text{if } (P_c - P_a) \le 25\\
0.1 & \text{if } (P_c - P_a) \ge 50\\
1.9 - 0.036\,(P_c - P_a) & \text{if } 25 < (P_c - P_a) < 50
\end{cases}.
\]


The engine thrust data are available in tabular form as a function of actual power level, altitude and Mach number over the ranges 0 ≤ h ≤ 15240 m and 0 ≤ M ≤ 1 for the idle, military and maximum power settings [149]. The thrust is computed as
\[
F_T = \begin{cases}
T_{idle} + (T_{mil} - T_{idle})\,\dfrac{P_a}{50} & \text{if } P_a < 50\\[2mm]
T_{mil} + (T_{max} - T_{mil})\,\dfrac{P_a - 50}{50} & \text{if } P_a \ge 50
\end{cases}. \tag{2.49}
\]

The engine angular momentum is assumed to be acting along the xB-axis with a constant value of 216.9 kg·m²/s.
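The engine power-level logic of Eqs. (2.47)-(2.48) can be summarized in the following MATLAB sketch; the function names are illustrative, and the thrust computation (2.49) would additionally require interpolation in the Tidle, Tmil and Tmax tables of [149], which is omitted here:

function Padot = engine_power_rate(Pa, dth)
% Rate of change of the actual engine power level, Eqs. (2.47)-(2.48).
if dth <= 0.77
    Pcs = 64.94*dth;               % commanded power from throttle, Eq. (2.47)
else
    Pcs = 217.38*dth - 117.38;
end
if Pcs >= 50
    if Pa >= 50
        Pc = Pcs; invtau = 5.0;
    else
        Pc = 60;  invtau = rtau(Pc - Pa);
    end
else
    if Pa >= 50
        Pc = 40;  invtau = 5.0;
    else
        Pc = Pcs; invtau = rtau(Pc - Pa);
    end
end
Padot = invtau*(Pc - Pa);          % Eq. (2.48)
end

function invtau = rtau(dP)
% Reciprocal engine time constant as a function of the power error.
if dP <= 25
    invtau = 1.0;
elseif dP >= 50
    invtau = 0.1;
else
    invtau = 1.9 - 0.036*dP;
end
end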

2.4 Geometry and Aerodynamic Data


The relevant geometry data of the F-16 can be found in Table A.1 of Appendix A. The
aerodynamic data of the F-16 model have been derived from low-speed static and dy-
namic (force oscillation) wind-tunnel tests conducted with sub-scale models in wind-
tunnel facilities at the NASA Ames and Langley Research Centers [149]. The aerody-
namic data in [149] are given in tabular form and are valid for the following subsonic
flight envelope:

• −20 ≤ α ≤ 90 degrees;

• −30 ≤ β ≤ 30 degrees.

Two examples of the aerodynamic data for the F-16 model can be found in Figure 2.3.
The pitching moment coefficient Cm and the normal force coefficient CZ both depend on three variables: angle of attack, sideslip angle and elevator deflection.

Figure 2.3: Two examples of the aerodynamic coefficient data for the F-16 obtained from wind-tunnel tests: (a) Cm for δe = 0, (b) CZ for δe = 0.

The various aerodynamic contributions to a given force or moment coefficient as given in [149] are summed as follows.


For the X-axis force coefficient CXT:
\[
C_{X_T} = C_X(\alpha,\beta,\delta_e) + \delta C_{X_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)
+ \frac{q\bar{c}}{2V_T}\Bigl[C_{X_q}(\alpha) + \delta C_{X_{q_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr] \tag{2.50}
\]
where
\[
\delta C_{X_{lef}} = C_{X_{lef}}(\alpha,\beta) - C_X(\alpha,\beta,\delta_e = 0^\circ).
\]

For the Y-axis force coefficient CYT:
\[
\begin{aligned}
C_{Y_T} ={}& C_Y(\alpha,\beta) + \delta C_{Y_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)
+ \Bigl[\delta C_{Y_{\delta_a}} + \delta C_{Y_{\delta_{a_{lef}}}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]\frac{\delta_a}{20}
+ \delta C_{Y_{\delta_r}}\frac{\delta_r}{30}\\
&+ \frac{rb}{2V_T}\Bigl[C_{Y_r}(\alpha) + \delta C_{Y_{r_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]
+ \frac{pb}{2V_T}\Bigl[C_{Y_p}(\alpha) + \delta C_{Y_{p_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]
\end{aligned} \tag{2.51}
\]
where
\[
\begin{aligned}
\delta C_{Y_{lef}} &= C_{Y_{lef}}(\alpha,\beta) - C_Y(\alpha,\beta)\\
\delta C_{Y_{\delta_a}} &= C_{Y_{\delta_a}}(\alpha,\beta) - C_Y(\alpha,\beta)\\
\delta C_{Y_{\delta_{a_{lef}}}} &= C_{Y_{\delta_{a_{lef}}}}(\alpha,\beta) - C_{Y_{lef}}(\alpha,\beta) - \delta C_{Y_{\delta_a}}\\
\delta C_{Y_{\delta_r}} &= C_{Y_{\delta_r}}(\alpha,\beta) - C_Y(\alpha,\beta).
\end{aligned}
\]

For the Z-axis force coefficient CZT:
\[
C_{Z_T} = C_Z(\alpha,\beta,\delta_e) + \delta C_{Z_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)
+ \frac{q\bar{c}}{2V_T}\Bigl[C_{Z_q}(\alpha) + \delta C_{Z_{q_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr] \tag{2.52}
\]
where
\[
\delta C_{Z_{lef}} = C_{Z_{lef}}(\alpha,\beta) - C_Z(\alpha,\beta,\delta_e = 0^\circ).
\]



For the rolling-moment coefficient ClT:
\[
\begin{aligned}
C_{l_T} ={}& C_l(\alpha,\beta,\delta_e) + \delta C_{l_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)
+ \Bigl[\delta C_{l_{\delta_a}} + \delta C_{l_{\delta_{a_{lef}}}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]\frac{\delta_a}{20}
+ \delta C_{l_{\delta_r}}\frac{\delta_r}{30}\\
&+ \frac{rb}{2V_T}\Bigl[C_{l_r}(\alpha) + \delta C_{l_{r_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]
+ \frac{pb}{2V_T}\Bigl[C_{l_p}(\alpha) + \delta C_{l_{p_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr] + \delta C_{l_\beta}(\alpha)\,\beta
\end{aligned} \tag{2.53}
\]
where
\[
\begin{aligned}
\delta C_{l_{lef}} &= C_{l_{lef}}(\alpha,\beta) - C_l(\alpha,\beta,\delta_e = 0^\circ)\\
\delta C_{l_{\delta_a}} &= C_{l_{\delta_a}}(\alpha,\beta) - C_l(\alpha,\beta,\delta_e = 0^\circ)\\
\delta C_{l_{\delta_{a_{lef}}}} &= C_{l_{\delta_{a_{lef}}}}(\alpha,\beta) - C_{l_{lef}}(\alpha,\beta) - \delta C_{l_{\delta_a}}\\
\delta C_{l_{\delta_r}} &= C_{l_{\delta_r}}(\alpha,\beta) - C_l(\alpha,\beta,\delta_e = 0^\circ).
\end{aligned}
\]


For the pitching-moment coefficient CmT:
\[
\begin{aligned}
C_{m_T} ={}& C_m(\alpha,\beta,\delta_e) + C_{Z_T}\bigl(x_{cg_r} - x_{cg}\bigr) + \delta C_{m_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\\
&+ \frac{q\bar{c}}{2V_T}\Bigl[C_{m_q}(\alpha) + \delta C_{m_{q_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr] + \delta C_m(\alpha) + \delta C_{m_{ds}}(\alpha,\delta_e)
\end{aligned} \tag{2.54}
\]
where
\[
\delta C_{m_{lef}} = C_{m_{lef}}(\alpha,\beta) - C_m(\alpha,\beta,\delta_e = 0^\circ).
\]
For the yawing-moment coefficient CnT:
\[
\begin{aligned}
C_{n_T} ={}& C_n(\alpha,\beta,\delta_e) + \delta C_{n_{lef}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr) - C_{Y_T}\bigl(x_{cg_r} - x_{cg}\bigr)\frac{\bar{c}}{b}
+ \Bigl[\delta C_{n_{\delta_a}} + \delta C_{n_{\delta_{a_{lef}}}}\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]\frac{\delta_a}{20}
+ \delta C_{n_{\delta_r}}\frac{\delta_r}{30}\\
&+ \frac{rb}{2V_T}\Bigl[C_{n_r}(\alpha) + \delta C_{n_{r_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr]
+ \frac{pb}{2V_T}\Bigl[C_{n_p}(\alpha) + \delta C_{n_{p_{lef}}}(\alpha)\Bigl(1 - \frac{\delta_{lef}}{25}\Bigr)\Bigr] + \delta C_{n_\beta}(\alpha)\,\beta
\end{aligned} \tag{2.55}
\]
where
\[
\begin{aligned}
\delta C_{n_{lef}} &= C_{n_{lef}}(\alpha,\beta) - C_n(\alpha,\beta,\delta_e = 0^\circ)\\
\delta C_{n_{\delta_a}} &= C_{n_{\delta_a}}(\alpha,\beta) - C_n(\alpha,\beta,\delta_e = 0^\circ)\\
\delta C_{n_{\delta_{a_{lef}}}} &= C_{n_{\delta_{a_{lef}}}}(\alpha,\beta) - C_{n_{lef}}(\alpha,\beta) - \delta C_{n_{\delta_a}}\\
\delta C_{n_{\delta_r}} &= C_{n_{\delta_r}}(\alpha,\beta) - C_n(\alpha,\beta,\delta_e = 0^\circ).
\end{aligned}
\]
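To illustrate how such a build-up is evaluated in practice, the X-force coefficient (2.50) is sketched below in MATLAB. The table-lookup handles CX_tab, CXlef_tab, CXq_tab and dCXqlef_tab are placeholders for interpolation in the wind-tunnel data of [149]; they are assumptions for illustration and not functions defined in this thesis. The remaining coefficients (2.51)-(2.55) follow the same pattern of a base table plus blended increments.

function CXT = cx_total(alpha, beta, de, dlef, q, cbar, VT, ...
                        CX_tab, CXlef_tab, CXq_tab, dCXqlef_tab)
% Build-up of the total X-axis force coefficient, cf. Eq. (2.50).
klef   = 1 - dlef/25;                                  % leading-edge-flap blending factor
dCXlef = CXlef_tab(alpha, beta) - CX_tab(alpha, beta, 0);
CXT    = CX_tab(alpha, beta, de) + dCXlef*klef ...
       + q*cbar/(2*VT)*(CXq_tab(alpha) + dCXqlef_tab(alpha)*klef);
end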



2.5 Baseline Flight Control System


The NASA technical report [149] also contains a description of a stability and control
augmentation system for the F-16 model. This flight control system is a simplified ver-
sion of the actual baseline F-16 flight controller, which retains its main characteristics. A
description of the different control loops of the system is given in this section, for more
details see [149].

2.5.1 Longitudinal Control


A diagram of the longitudinal flight control system can be found in Figure A.2 of Ap-
pendix A.3. It is a command augmentation system where the pilot commands normal
acceleration with a longitudinal stick input. Washed-out pitch rate and filtered normal
acceleration are fed back to achieve the desired response. A forward-loop integration is
included to make the steady-state acceleration response match the commanded acceler-
ation. At low Mach numbers the F-16 model has a minor negative static longitudinal
stability; therefore angle of attack feedback is used to provide artificial static stability.
The pitch control system incorporates an angle of attack limiting system, where again
angle of attack feedback is used to modify the pilot-commanded normal acceleration.
The resulting angle of attack limit is about 25 deg in 1g flight. Finally, the system also
makes sure that the pitch control is deflected in the proper direction to oppose the nose-up
coupling moment generated by rapid rolling at high angles of attack.

2.5.2 Lateral Control


The lateral flight control system is depicted in the block diagram given in Figure A.3. The
pilot can command roll rates up to 308 deg/s through the lateral stick movement. Above
angles of attack of 29 deg, an automatic departure-prevention system is activated. This
system disengages the roll-rate control augmentation system and uses yaw rate feedback
to drive the roll control surfaces to oppose any yaw rate buildup. At high angles of attack
the pilot-commanded roll rate is limited to prevent pitch-out departures. The roll rate
limiting is scheduled on angle of attack, elevator deflection and dynamic pressure.

2.5.3 Directional Control


A scheme of the directional control system can be found in Figure A.4. The pilot rudder
input is computed directly from pedal force and is limited to ±30 deg. Between 20 and 30
deg angle of attack this command signal is gradually reduced to zero to prevent departures
from excessive pilot rudder usage at high angles of attack. Also, between 20 and 40 deg/s
roll rate the command signal is gradually reduced to zero to prevent pitch-out departures.
Yaw stability augmentation consists of lateral acceleration and approximated stability
yaw rate feedback. The stability-axis yaw damper provides increased lateral-directional
damping in addition to reducing sideslip during high angle of attack roll maneuvers.

An aileron-rudder interconnection exists to improve coordination and roll performance.


At low speeds the gain for the interconnection is scheduled as a linear function of angle
of attack. As in the lateral control system, above angles of attack of 29 deg, a departure-
/spin-prevention mode is activated which uses the rudder to oppose any yaw rate buildup.

2.6 MATLAB/Simulink Implementation


The F-16 dynamic model is written as a C S-function in MATLAB/Simulink. The in-
puts of the model are the control surface deflections and the throttle setting. The outputs
are the aircraft states and the dimensionless normal accelerations ny and nz . The aero-
dynamic data, interpolation functions, the engine model and the ISA atmosphere model
are obtained from separate C files. A rudimentary trim function obtained from [173] is
included. The baseline flight control system and the leading edge flap control system are
constructed with Simulink blocks. Sensor models have been obtained from ADMIRE
[63] and are also included in the Simulink model. Full state measurement is assumed to
be available for the control systems. Note that wind or turbulence effects are not taken
into account in the simulation model.
Figure 2.4 depicts the resulting Simulink model of the closed-loop system. The Flight-
gear block can be used to fly the aircraft on a desktop computer with a joystick in the
open-source Flightgear flight simulator in real-time. All simulation model files are included on CD-ROM, but can also be downloaded from www.mathworks.com. Descriptions
are included in the header of each file.

Figure 2.4: The MATLAB/Simulink F-16 model with baseline flight control system.
Chapter 3
Backstepping

In this chapter the backstepping approach to control design is introduced. Since all the
adaptive design methods discussed throughout the chapters of this thesis are based on the
backstepping technique, this chapter, together with the next chapter about adaptive back-
stepping, form the theoretical basis of the thesis. First, the Lyapunov theory and stability
concepts on which backstepping is based are reviewed. After that, the design approach
itself is introduced and its characteristics are explained with illustrative examples.

3.1 Introduction
Backstepping is a systematic, Lyapunov-based method for nonlinear control design. The
backstepping method can be applied to a broad class of systems. The name ‘backstep-
ping’ refers to the recursive nature of the design procedure. The design procedure starts
at the scalar equation which is separated by the largest number of integrations from the
control input and ‘steps back’ toward the control input. At each step an intermediate or ‘virtual’ control law is calculated, and in the last step the real control law is found. Two
comprehensive textbooks that deal with backstepping and Lyapunov theory are [106] and
especially [118]. The origins of the backstepping method are traced in the survey paper
by Kokotović [110].
An important feature of backstepping is the flexibility of the method, for instance deal-
ing with nonlinearities is a designer choice. If a nonlinearity acts stabilizing, i.e. it is
useful in a sense, it can be retained in the closed-loop system. This is in contrast with
the NDI and FBL methods. An additional advantage is that the controller relies on less
precise model information: the designer does not need to know the size of a stabilizing
nonlinearity. In [75, 77, 78] this notion is used to design a robust nonlinear controller
for a fighter aircraft model. Other examples of backstepping control designs where the
cancellation of useful nonlinearities is avoided can be found in [116, 118].
However, it is often difficult to ascertain if a nonlinearity in the aircraft dynamics acts


stabilizing over the entire flight envelope, especially with model uncertainties or sudden
changes in the aircraft’s dynamic behavior. Therefore, this feature of backstepping is
not exploited in this thesis. Instead, the research focuses on more advanced adaptive
backstepping techniques that guarantee stability and convergence even in the presence
of unknown parameters. Nevertheless, this chapter serves as an introduction before the
more complex adaptive backstepping techniques are introduced in Chapter 4.
This chapter starts with a discussion on Lyapunov theory and stability concepts. Lya-
punov’s direct method, which forms the basis of the backstepping technique, is outlined.
In Section 3.3 the idea behind backstepping is introduced on a general second order
nonlinear system and extended to a recursive procedure for higher order systems. The
chapter closes with an example where the backstepping procedure is applied to the pitch
autopilot design for a longitudinal missile model.

3.2 Lyapunov Theory and Stability Concepts


3.2.1 Lyapunov Stability Definitions
Consider the nonlinear dynamical system

ẋ = f (x(t), t), x(t0 ) = x0 (3.1)

where x(t) ∈ Rn and f : Rn × R+ → Rn is locally Lipschitz in x and piecewise


continuous in t.

Definition 3.1 (Lipschitz condition). A function f (x, t) satisfies a Lipschitz condition on


D with Lipschitz constant L if

|f (x, t) − f (y, t)| ≤ L|x − y|, (3.2)

for all points (x, t) and (y, t) in D.1

An equilibrium point xe ∈ Rn of (3.1) is such that f (xe ) = 0. It can be assumed, without


loss of generality, that the system (3.1) has an equilibrium point xe = 0. The following
definition gives the stability of this equilibrium point [106].

Definition 3.2 (Stability in the sense of Lyapunov). The equilibrium point xe = 0 of the
system (3.1) is

• stable if for each ǫ > 0 and any t0 > 0, there exists a δ(ǫ, t0 ) > 0 such that

|x(t0 )| < δ(ǫ, t0 ) ⇒ |x(t)| < ǫ ∀t ≥ t0 ;


1
Note that Lipschitz continuity is a stronger condition than continuity. For example, the function f(x) = √x is continuous on D = [0, ∞), but it is not Lipschitz continuous on D since its slope approaches infinity as x approaches zero.

• uniformly stable if for each ǫ > 0 and any t0 > 0, there exists a δ(ǫ) > 0 such
that
|x(t0 )| < δ(ǫ) ⇒ |x(t)| < ǫ ∀t ≥ t0 ;

• unstable if it is not stable;

• asymptotically stable if it is stable, and for any t0 > 0, there exists an η(t0 ) > 0
such that
|x(t0 )| < η(t0 ) ⇒ |x(t)| → 0 as t → ∞;

• uniformly asymptotically stable if it is uniformly stable, and there exists a δ > 0


independent of t such that ∀ǫ > 0 there exists a T (ǫ) > 0 such that

|x(t0 )| < δ ⇒ |x(t)| < ǫ ∀t ≥ t0 + T (ǫ);

• exponentially stable if for any ǫ > 0 there exists a δ(ǫ) > 0 such that

|x(t0 )| < δ ⇒ |x(t)| < ǫe−α(t−t0 ) ∀t > t0 ≥ 0

for some α > 0.

Stability in the sense of Lyapunov is a very mild requirement on equilibrium points. In


particular, it includes the idea that solutions are bounded, but at the same time requires
that the bound on the solution can be made arbitrarily small by restriction of the size
of the initial condition. The main difference between stability and uniform stability is
that in the latter case δ is independent of t0 . Asymptotic stability additionally requires
solutions to converge to the origin, while exponential stability requires this convergence
rate to be exponential. Lyapunov stability can be further illustrated in R2 by Figure 3.1.
All trajectories that start in the inner disc will remain in the outer disc forever (bounded).

Figure 3.1: Different types of stability illustrated in R2 [136].

The set of initial conditions D = {x0 ∈ Rn | x(t0 ) = x0 and |x(t)| → 0 as t → ∞}
is the domain of attraction of the origin. If D is equal to Rn , then the origin is said
to be globally asymptotically stable. A globally asymptotically stable equilibrium point
implies that xe is the unique equilibrium point, i.e. all solutions, regardless of their
starting point, converge to this point.
In some relevant cases it may not be possible to prove stability of xe , but it may still be
possible to use Lyapunov analysis to show boundedness of the solution [106].

Definition 3.3 (Boundedness). The equilibrium point xe = 0 of the system (3.1) is

• uniformly ultimately bounded if there exist positive constants R, T (R), and b


such that |x(t0 )| ≤ R implies that

|x(t)| < b ∀t > t0 + T ;

• globally uniformly ultimately bounded if it is uniformly ultimately bounded and


R = ∞.

The constant b is referred to as the ultimate bound.

3.2.2 Lyapunov’s Direct Method


To be of practical interest the stability conditions must not require that the differential equation (3.1) be solved explicitly, since this is in general not possible analytically. The Russian mathematician A. M. Lyapunov [135] found another way of proving stability, nowadays
referred to as Lyapunov’s direct method (or Lyapunov’s second method). The method is
a generalization of the idea that if there is some ‘measure of energy’ in a system, then
studying the rate of change of the energy in the system is a way to ascertain stability. To
make this more precise, this ‘measure of energy’ has to be defined in a more formal way.
Let B(r) be a ball of size r around the origin, B(r) = {x ∈ Rn : |x| < r}.

Definition 3.4. A continuous function V (x) is

• positive definite on B(r) if V (0) = 0 and V (x) > 0, ∀x ∈ B(r) such that x ≠ 0;

• positive semi-definite on B(r) if V (0) = 0 and V (x) ≥ 0, ∀x ∈ B(r) such that x ≠ 0;

• negative(semi-)definite on B(r) if −V (x) is positive (semi-)definite;

• radially unbounded if V (0) = 0, V > 0 on Rn − {0}, and V (x) → ∞ as


|x| → ∞.

A continuous function V (x, t) is



• positive definite on R × B(r) if there exists a positive definite function α(x) on


B(r) such that

V (0, t) = 0, ∀t ≥ 0 and V (x, t) ≥ α(x), ∀t ≥ 0, x ∈ B(r);

• radially unbounded if there exists a radially unbounded function α(x) such that

V (0, t) = 0, ∀t ≥ 0 and V (x, t) ≥ α(x), ∀t ≥ 0, x ∈ Rn ;

• decrescent on R × B(r) if there exists a positive definite function α(x) on B(r)


such that
V (x, t) ≤ α(x), ∀t ≥ 0, x ∈ B(r).

Using these definitions, the following theorem can be used to determine stability for a
system by studying an appropriate Lyapunov (energy) function V (x, t). The time deriva-
tive of V (x, t) is taken along the trajectories of the system (3.1)

\[
\dot{V}\Big|_{\dot{x}=f(x,t)} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x,t).
\]

Theorem 3.5 (Lyapunov’s Direct Method). Let V (x, t) : R+ × D → R+ be a continu-


ously differentiable and positive definite function, where D is an open region containing
the origin.

• If V̇ |ẋ=f(x,t) is negative semi-definite for x ∈ D, then the equilibrium xe = 0 is stable.

• If V (x, t) is decrescent and V̇ |ẋ=f(x,t) is negative semi-definite for x ∈ D, then the equilibrium xe = 0 is uniformly stable.

• If V̇ |ẋ=f(x,t) is negative definite for x ∈ D, then the equilibrium xe = 0 is asymptotically stable.

• If V (x, t) is decrescent and V̇ |ẋ=f(x,t) is negative definite for x ∈ D, then the equilibrium xe = 0 is uniformly asymptotically stable.

• If there exist three positive constants c1 , c2 and c3 such that c1 |x|² ≤ V (x, t) ≤ c2 |x|² and V̇ |ẋ=f(x,t) ≤ −c3 |x|² for all t ≥ 0 and for all x ∈ D, then the equilibrium xe = 0 is exponentially stable.

Proof: The proof can be found in chapter 4 of [106].



The requirement for negative definiteness of the derivative of the Lyapunov function
to guarantee asymptotic convergence is quite stringent. It may still be possible to con-
clude asymptotic convergence when this derivative is only negative semi-definite using
LaSalle’s invariance theorem (Theorem B.7 in Appendix B.1). However, this theorem is
only valid for autonomous systems. For time-varying systems Barbalat’s useful lemma
can be used [118].
Lemma 3.6 (Barbalat’s Lemma). Let φ : R+ → R be a uniformly continuous function on [0, ∞). If \(\lim_{t\to\infty}\int_0^t \phi(\tau)\,d\tau\) exists and is finite, then
\[
\lim_{t\to\infty} \phi(t) = 0.
\]

Combining this lemma with Lyapunov’s direct method leads to the powerful theorem by
LaSalle and Yoshizawa.
Theorem 3.7 (LaSalle-Yoshizawa). Let xe = 0 be an equilibrium point of (3.1) and
suppose that f is locally Lipschitz in x uniformly in t. Let V : Rn × R+ → R+ be a
continuously differentiable function such that
• γ1 (x) ≤ V (x, t) ≤ γ2 (x)
• V̇ = ∂V/∂t + (∂V/∂x) f (x, t) ≤ −W (x) ≤ 0
∀t ≥ 0, ∀x ∈ Rn , where γ1 and γ2 are continuous positive definite functions and where
W is a continuous function. Then all solutions of (3.1) satisfy

\[
\lim_{t\to\infty} W(x(t)) = 0.
\]

In addition, if W (x) is positive definite, then the equilibrium xe = 0 is globally uniformly


asymptotically stable.

Proof: The detailed proof can be found in Appendix B.1.

The key advantage of this theorem is that it can be applied without finding the solutions
of (3.1). Unfortunately, Theorem 3.7 does not give an actual prescription for determining
the Lyapunov function V (x, t). Since the theorem only gives sufficient conditions, it can
be tedious to find the correct Lyapunov function to establish the stability of an equilib-
rium point. However, the converse of the theorem also exists: if an equilibrium point is
stable, then there exists a function V (x, t) satisfying the conditions of the theorem. A
more formal explanation of Lyapunov stability theory can be found in Appendix B.1.

3.2.3 Lyapunov Theory and Control Design


In this section the Lyapunov function concept is extended to control design, i.e. Lya-
punov theory will now be applied to create a closed-loop system with desirable stability

properties. Consider the nonlinear system to be controlled


ẋ = f (x, u), x ∈ Rn , u ∈ R, f (0, 0) = 0 (3.3)
where x is the system state and u the control input. The control objective is to design
a feedback control law α(x) for the control input u such that the equilibrium x = 0 is
globally asymptotically stable. To prove stability a function V (x) is needed as a Lya-
punov candidate, and it is required that its derivative along the solutions of (3.3) satisfies
V̇ (x) ≤ −W (x), where W (x) is a positive semi-definite function. The straightforward
approach for finding α(x) would be to pick a positive definite, radially unbounded func-
tion V (x) and then choosing α(x) such that
\[
\frac{\partial V}{\partial x}(x)\, f(x, \alpha(x)) \le -W(x) \quad \forall x \in \mathbb{R}^n. \tag{3.4}
\]
Careful selection is needed: while there may exist a stabilizing control law for (3.3), it
may fail to satisfy (3.4). This problem motivated [5] and [190] to introduce the control
Lyapunov function (CLF) concept.
Definition 3.8 (Control Lyapunov function). A smooth positive definite and radially un-
bounded function V : Rn → R+ is called a control Lyapunov function (CLF) for the
system (3.3) if
\[
\inf_{u \in \mathbb{R}} \left\{ \frac{\partial V}{\partial x}(x)\, f(x, u) \right\} < 0 \quad \forall x \ne 0. \tag{3.5}
\]

Given a CLF for a system, a globally stabilizing control law can thus be found. In fact,
in [5] it was demonstrated that the existence of such a globally stabilizing control law is
equivalent to the existence of a CLF. This means that for each globally stabilizing control
law, a corresponding CLF can be found and vice versa. This is illustrated in the following
example [75].

Example 3.1 (A scalar system)


Consider the feedback linearizable system
ẋ = −x³ + x + u (3.6)
and let x = 0 be the desired equilibrium. Consider the simplest choice of CLF, the
quadratic CLF
\[
V(x) = \tfrac{1}{2}x^2, \tag{3.7}
\]
and its time derivative along the solutions of (3.6)
\[
\dot{V} = x\dot{x} = x(-x^3 + x + u). \tag{3.8}
\]
There exist multiple choices of control law to render the above expression negative
(semi-)definite. The most obvious choice is the control law
u = x³ − cx, c > 1, (3.9)

which is equivalent to applying FBL, since it cancels all nonlinearities, thus resulting in the linear feedback system ẋ = −(c − 1)x. Obviously, this control law does not recognize the fact that −x³ is a useful nonlinearity when stabilizing around x = 0 and thereby wastes control effort canceling this term. Also, the presence of x³ in the control law (3.9) is dangerous from a robustness perspective. Suppose that the true system is ẋ = −0.99x³ + x + u; applying control law (3.9) could then lead to an unstable closed-loop system.
As an alternative the much simpler feedback
\[
u = -cx, \qquad c > 1 \tag{3.10}
\]
is selected. This results in V̇ = −x⁴ − (c − 1)x² < 0 for x ≠ 0. By Theorem 3.7 this control law again renders the origin globally asymptotically stable. However, the new control is more efficient and also more robust to model uncertainty as compared to the previous control (3.9).
This can be illustrated using numerical simulations. Plots of the closed-loop system
response for both controllers can be found in Figure 3.2. The first plot of Figure 3.2
shows the regulation of the states for both controllers for x(0) = 5 and control gain
c = 2. As expected the system with the second ‘smart’ controller (3.10) has a more
rapid convergence because it makes use of the stabilizing nonlinearity. The bottom
plot of Figure 3.2 illustrates that far less control effort is required when the stabilizing
nonlinearity is not canceled.
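The comparison can be reproduced with a few lines of MATLAB; this script is only a sketch under the stated gain and initial condition, not the simulation setup used to generate Figure 3.2:

% Closed-loop responses of system (3.6) under the FBL-like law (3.9) and the
% simpler law (3.10), with x(0) = 5 and c = 2.
c = 2; x0 = 5; tspan = [0 5];
fbl   = @(t, x) -x^3 + x + (x^3 - c*x);  % control law (3.9): cancels -x^3
smart = @(t, x) -x^3 + x - c*x;          % control law (3.10): keeps -x^3
[t1, x1] = ode45(fbl,   tspan, x0);
[t2, x2] = ode45(smart, tspan, x0);
plot(t1, x1, t2, x2); xlabel('time (s)'); ylabel('x'); legend('(3.9)', '(3.10)');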

Figure 3.2: Regulation of x and control effort u for both stabilizing controllers with x(0) = 5 and c = 2.

The main deficiency of the CLF concept as a design tool is that for more complex nonlin-
ear systems a CLF is in general not known and the task of finding one may be as difficult
as that of designing a stabilizing feedback law. At the end of the 1980’s backstepping
was introduced in a number of papers, e.g. [111, 191, 201], as a recursive design tool to
solve this problem for several important classes of nonlinear systems.

3.3 Backstepping Basics


The previous section dealt with the general Lyapunov theory and introduced the concept
of the CLF. It was stated that if a CLF exists, a control law which makes the closed-loop
system globally asymptotically stable can be found. However, it can be a problem to find
a CLF or the corresponding control law. Using the backstepping procedure a CLF and a
control law can be found simultaneously as will be illustrated in this section.

3.3.1 Integrator Backstepping


Consider the second order system

ẋ1 = f (x1 ) + g(x1 )x2 (3.11)


ẋ2 = u (3.12)

where (x1 , x2 ) ∈ R² are the states, u ∈ R is the control input and g(x1 ) ≠ 0. The
control objective is to track the smooth reference signal yr (t) (all derivatives known
and bounded) with the state x1 . This tracking control problem can be transformed to a
regulation problem by introducing the tracking error variable z1 = x1 − yr and rewriting
the x1 -subsystem in terms of this variable as

ż1 = f (x1 ) + g(x1 )x2 − ẏr (3.13)

The idea behind backstepping is to regard the state x2 as a control input for the z1 -
subsystem. By a correct choice of x2 the z1 -subsystem can be made globally asymptoti-
cally stable. Since x2 is just a state variable and not the real control input, x2 is called a virtual control and its desired value x2des ≜ α(x1 , yr , ẏr ) a stabilizing function. For the
z1 -subsystem a CLF V1 (z1 ) can be selected such that the stabilizing virtual control law
renders its time-derivative along the solutions of (3.13) negative (semi-)definite, i.e.

\[
\dot{V}_1 = \frac{\partial V_1}{\partial z_1}\left[f(x_1) + g(x_1)\alpha(x_1, y_r, \dot{y}_r) - \dot{y}_r\right] \le -W(z_1), \tag{3.14}
\]

where W (z1 ) is positive definite. The difference between the virtual control x2 and its
desired value α(x1 , yr , ẏr ) is defined as the tracking error variable

z2 = x2 − x2des = x2 − α(x1 , yr , ẏr ). (3.15)

The system can now be rewritten in terms of the new state z2 as
\begin{align}
\dot{z}_1 &= f + g(z_2 + \alpha) - \dot{y}_r \tag{3.16}\\
\dot{z}_2 &= u - \frac{\partial\alpha}{\partial x_1}\left[f + g(z_2 + \alpha)\right] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r, \tag{3.17}
\end{align}
where the time derivative of α can be computed analytically, since it is a known expres-
sion. The task is now to find a control law for u that ensures that z2 converges to zero,
i.e. x2 converges to its desired value α. To help find this stabilizing control law, a CLF
for the complete (z1 , z2 )-system is needed. The most obvious solution is to augment the
CLF of the first design step, V1 , with an additional quadratic term that penalizes the error
z2 as
\[
V_2(z_1, z_2) = V_1(z_1) + \tfrac{1}{2}z_2^2. \tag{3.18}
\]
Taking the derivative of V2 results in
\[
\begin{aligned}
\dot{V}_2 &= \dot{V}_1 + z_2\dot{z}_2\\
&= \dot{V}_1 + z_2\left[u - \frac{\partial\alpha}{\partial x_1}\bigl[f + g(z_2 + \alpha)\bigr] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r\right]\\
&= \frac{\partial V_1}{\partial z_1}\bigl[f + g(z_2 + \alpha) - \dot{y}_r\bigr] + z_2\left[u - \frac{\partial\alpha}{\partial x_1}\bigl[f + g(z_2 + \alpha)\bigr] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r\right]\\
&= \frac{\partial V_1}{\partial z_1}\bigl[f + g\alpha - \dot{y}_r\bigr] + z_2\left[\frac{\partial V_1}{\partial z_1}g + u - \frac{\partial\alpha}{\partial x_1}\bigl[f + g(z_2 + \alpha)\bigr] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r\right]\\
&\le -W(z_1) + z_2\left[\frac{\partial V_1}{\partial z_1}g + u - \frac{\partial\alpha}{\partial x_1}\bigl[f + g(z_2 + \alpha)\bigr] - \frac{\partial\alpha}{\partial y_r}\dot{y}_r - \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r\right],
\end{aligned}
\]

where the cross term (∂V1/∂z1)g z2, which is due to the presence of z2 in (3.16), is grouped together with u. The first term of the above expression is already negative definite by the choice of the stabilizing function α, and the bracketed term can be made negative semi-definite by selecting the control law
\[
u = -cz_2 - \frac{\partial V_1}{\partial z_1}g + \frac{\partial\alpha}{\partial x_1}\bigl[f + g(z_2 + \alpha)\bigr] + \frac{\partial\alpha}{\partial y_r}\dot{y}_r + \frac{\partial\alpha}{\partial\dot{y}_r}\ddot{y}_r, \tag{3.19}
\]
where the gain c > 0. This control law yields

\[
\dot{V}_2 \le -W(z_1) - cz_2^2,
\]
and thus by Theorem 3.7 renders the equilibrium (z1 , z2 ) = 0 globally stable. Furthermore, the tracking problem is solved, since x1 → yr as t → ∞. Note that selecting the

CLF quadratic with a corresponding (virtual) feedback control law is usually the most
straightforward choice. However, other choices of CLF are also possible and in some
cases may even result in a more efficient controller by e.g. not canceling stabilizing
nonlinearities. This is demonstrated in the following example [75].

Example 3.2 (A second order system)


Consider the scalar system of Example 3.1 augmented with an integrator

ẋ1 = −x1³ + x1 + x2 (3.20)
ẋ2 = u. (3.21)

The control objective is to regulate x1 to zero. A control law for the x1 -subsystem
was already found in Example 3.1. This control law is now used as a virtual control
law for x2 with c = 2:
x2des = −2x1 ≜ α. (3.22)
The error between x2 and its desired value α is defined as the tracking error z

z = x2 − α = x2 + 2x1 . (3.23)

Rewriting the system in terms of the states x1 and z gives
ẋ1 = −x1³ − x1 + z (3.24)
ż = u + 2(−x1³ − x1 + z). (3.25)

Now the CLF of Example 3.1 is augmented for the (x1 , z)-system with an extra term
that penalizes the tracking error z as
\[
V_2(x_1, z) = \tfrac{1}{2}x_1^2 + \tfrac{1}{2}z^2. \tag{3.26}
\]
Taking the derivative of V2 results in

V̇2 = x1 ẋ1 + z ż = −x41 − x21 + z(u − 2x31 − x1 + 2z).

Examining (3.27) reveals that all indefinite terms can be canceled by the control law

u = −c2 z + 2x31 + x1 , c2 > 2. (3.27)

By Theorem 3.7 the control law (3.27) stabilizes the (x1 , z)-system. However, it may be possible to find another, more efficient controller that recognizes the naturally stabilizing dynamics of the x1-subsystem. In order to find this efficient controller the definition of the CLF V2 is postponed. Consider the CLF
\[
V_2(x_1, z) = Q(x_1) + \tfrac{1}{2}z^2, \tag{3.28}
\]

where Q(x1 ) is a CLF for the x1 -subsystem. Taking the derivative of V2 results in

V̇2 = −Q′(x1³ + x1) + z(Q′ + u − 2x1³ − 2x1 + 2z).

The extended design freedom can now be used to cancel the indefinite terms by se-
lecting Q′ = 2x31 + 2x1 , i.e.

1 4
Q(x1 ) = x + x21 (3.29)
2 1
which is positive definite and thus a valid choice of CLF. This reduces the derivative
of V2 to
V̇2 = −2x1⁶ − 4x1⁴ − 2x1² + z(u + 2z).

A much simpler control law

u = −c2 z, c2 > 2 (3.30)

can now be selected to render the derivative of the CLF V2 negative semi-definite.
Plots of the closed-loop system response of both controllers can be found in Figure
3.3. Backstepping controller 1 only takes the stabilizing nonlinearity into account in
the first design step and backstepping controller 2 was found using the non-quadratic
CLF. The system is initialized at x1 (0) = 2, x2 (0) = −2 and the control gains are selected as c = 2, c2 = 3. The required control effort for both controllers is much
lower when compared to a full cancellation FBL controller. This example illustrates
the design freedom the backstepping technique gives the control engineer.
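For reference, a minimal MATLAB sketch that simulates both closed loops of this example under the stated initial conditions and gains; it is an illustration, not the code used to produce Figure 3.3:

% Backstepping controller 1, Eq. (3.27), versus controller 2, Eq. (3.30),
% for system (3.20)-(3.21) with x1(0) = 2, x2(0) = -2, c = 2 and c2 = 3.
c = 2; c2 = 3; x0 = [2; -2]; tspan = [0 5];
z   = @(x) x(2) + c*x(1);                 % tracking error z = x2 - alpha, with alpha = -c*x1
u1  = @(x) -c2*z(x) + 2*x(1)^3 + x(1);    % cancels the indefinite terms
u2  = @(x) -c2*z(x);                      % exploits the non-quadratic CLF (3.28)
dyn = @(x, u) [-x(1)^3 + x(1) + x(2); u];
[t1, xa] = ode45(@(t, x) dyn(x, u1(x)), tspan, x0);
[t2, xb] = ode45(@(t, x) dyn(x, u2(x)), tspan, x0);
plot(t1, xa(:,1), t2, xb(:,1)); xlabel('time (s)'); ylabel('x_1'); legend('bs1', 'bs2');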

3.3.2 Extension to Higher Order Systems


The backstepping procedure demonstrated on second order systems in the previous sec-
tion can be applied recursively to higher order systems. The only difference is that there
are more virtual states to ‘backstep’ through. Starting with the state ‘furthest’ from the
actual control, each step of the backstepping technique can be divided into three parts:

1. Introduce a virtual control and an error state, and rewrite the current state equation
in terms of these,

2. Choose a CLF for the system, treating it as a final stage,

3. Choose a stabilizing feedback term for the virtual control that makes the CLF
stabilizable.

The CLF is augmented at subsequent steps to reflect the presence of new virtual states,
but the same three stages are followed at each step.

Figure 3.3: Response of x1, x2 and control effort u for both backstepping controllers with x1(0) = 2, x2(0) = −2 and c = 2, c2 = 3.

The backstepping procedure for general strict feedback systems is now stated more for-
mally, consider the nonlinear system

ẋ1 = f1 (x1 ) + g1 (x1 )x2


ẋ2 = f2 (x1 , x2 ) + g2 (x1 , x2 )x3
..
.
ẋi = fi (x1 , x2 , ..., xi ) + gi (x1 , x2 , ..., xi )xi+1 (3.31)
..
.
ẋn = fn (x1 , x2 , ..., xn ) + gn (x1 , x2 , ..., xn )u

where xi ∈ R, u ∈ R and gi ≠ 0. The control objective is to force the output y = x1


to asymptotically track the reference signal yr (t) whose first n derivatives are assumed
to be known and bounded. The backstepping procedure starts by defining the tracking
errors as

z1 = x1 − yr
zi = xi − αi−1 , i = 2, ..., n. (3.32)

The system (3.31) can be rewritten in terms of these new variables as

ż1 = f1(x1) + g1(x1)x2 − ẏr
ż2 = f2(x1, x2) + g2(x1, x2)x3 − α̇1
  ⋮
żi = fi(x1, x2, ..., xi) + gi(x1, x2, ..., xi)xi+1 − α̇i−1   (3.33)
  ⋮
żn = fn(x1, x2, ..., xn) + gn(x1, x2, ..., xn)u − α̇n−1.
The CLFs are selected as
Vi = Vi−1 + ½ zi²,   i = 1, ..., n,   (3.34)
and the (virtual) feedback controls as
α1 = (1/g1)[−c1 z1 − f1 + ẏr]
αi = (1/gi)[−gi−1 zi−1 − ci zi − fi + α̇i−1],   i = 2, ..., n
u = αn   (3.35)
with gains ci > 0.
Theorem 3.9 (Backstepping Design for Tracking). If Vn is radially unbounded and gi ≠ 0
holds globally, then the closed-loop system, consisting of the tracking error dynamics of
(3.33) and the control u specified according to (3.35), has a globally stable equilibrium at
(z1 , z2 , ..., zn ) = 0 and limt→∞ zi = 0. In particular, this means that global asymptotic
tracking is achieved:
lim [x1 − yr ] = 0.
t→∞

Proof: The time derivative of Vn along the solutions of (3.33) is


V̇n = − Σ_{i=1}^{n} ci zi²,

which proves that the equilibrium (z1 , z2 , ..., zn ) = 0 is globally uniformly stable. By
Theorem 3.7 it follows further that limt→∞ zi = 0.
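To make the recursion concrete, the sketch below writes out the tracking law (3.35) for the special case n = 3 with fi = 0 and gi = 1 (a triple integrator), where the derivatives α̇i can still be written out by hand; the gains and reference signal are illustrative choices only.

```python
# Sketch of the recursive backstepping tracking law (3.35) for n = 3 with
# f_i = 0 and g_i = 1 (triple integrator), so the alpha-derivatives are simple.
import numpy as np
from scipy.integrate import solve_ivp

c1, c2, c3 = 2.0, 2.0, 2.0

def reference(t):
    # yr and its first three derivatives (smooth and bounded)
    return np.sin(t), np.cos(t), -np.sin(t), -np.cos(t)

def control(t, x):
    x1, x2, x3 = x
    yr, dyr, ddyr, dddyr = reference(t)
    z1 = x1 - yr
    alpha1 = -c1 * z1 + dyr
    dalpha1 = -c1 * (x2 - dyr) + ddyr              # time derivative of alpha1
    z2 = x2 - alpha1
    alpha2 = -z1 - c2 * z2 + dalpha1
    ddalpha1 = -c1 * (x3 - ddyr) + dddyr           # second derivative of alpha1
    dalpha2 = -(x2 - dyr) - c2 * (x3 - dalpha1) + ddalpha1
    z3 = x3 - alpha2
    return -z2 - c3 * z3 + dalpha2                 # u = alpha_n in (3.35)

def plant(t, x):
    return [x[1], x[2], control(t, x)]

if __name__ == "__main__":
    sol = solve_ivp(plant, (0.0, 10.0), [1.0, 0.0, 0.0],
                    t_eval=np.linspace(0.0, 10.0, 201))
    print("final tracking error z1:", sol.y[0, -1] - reference(10.0)[0])
```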

A block scheme of the resulting closed-loop system for n = 3 and a constant refer-
ence signal yr is shown in Figure 3.4. The recursive nature of the procedure is clearly
visible. This concludes the discussion of the theory behind backstepping. In [118] it
is demonstrated that the procedure can be applied to all nonlinear systems of a lower
triangular form, including multivariable systems.

Figure 3.4: Closed-loop dynamics of a general strict feedback control system with backstepping controller for n = 3. It is assumed that yr^(i) = 0, i = 1, 2, 3.

3.3.3 Example: Longitudinal Missile Control

In this section the backstepping method is demonstrated in a first flight control example:
the tracking control design for a longitudinal missile model. A second order nonlinear
model of a generic surface-to-air missile has been obtained from [109]. The model is
nonlinear, but not overly complex. The model consists of the longitudinal force and
moment equations representative of a missile traveling at an altitude of approximately
6000 meters, with aerodynamic coefficients represented as third order polynomials in
angle of attack α and Mach number M .
The nonlinear equations of motion in the pitch plane are given by

α̇ = q + (q̄S/(m VT)) [Cz(α, M) + bz(M)δ]   (3.36)
q̇ = (q̄S d/Iyy) [Cm(α, M) + bm(M)δ],   (3.37)

while the aerodynamic coefficients of the model are approximated by

bz (M ) = 1.6238M − 6.7240,
bm (M ) = 12.0393M − 48.2246,
Cz (α, M ) = ϕz1 (α) + ϕz2 (α)M,
Cm (α, M ) = ϕm1 (α) + ϕm2 (α)M,

where
ϕz1(α) = −288.7α³ + 50.32α|α| − 23.89α,
ϕz2(α) = −13.53α|α| + 4.185α,
ϕm1(α) = 303.1α³ − 246.3α|α| − 37.56α,
ϕm2(α) = 71.51α|α| + 10.01α.
These approximations are valid for the flight envelope −10° < α < 10° and 1.8 <
M < 2.6. To facilitate the control design, the nonlinear missile model (3.36) and (3.37)
is rewritten in the more general state-space form as
ẋ1 = x2 + f1 (x1 ) + g1 u (3.38)
ẋ2 = f2 (x1 ) + g2 u, (3.39)
where
x1 = α, x2 = q,
 
f1 (x1 ) = C1 ϕz1 (x1 ) + ϕz2 (x1 )M ,
 
f2 (x1 ) = C2 ϕm1 (x1 ) + ϕm2 (x1 )M ,
g1 = C1 bz , g2 = C2 bm ,
C1 = q̄S/(m VT),   C2 = q̄S d/Iyy.
The control objective considered here is to design an autopilot with the backstepping
method that tracks a commanded reference yr (all derivatives known and bounded) with
the angle of attack x1 . It is assumed that the aerodynamic force and moment functions
are exactly known and the Mach number M is treated as a parameter available for mea-
surement. Furthermore, the contribution of the fin deflection on the right-hand side of the
force equation (3.38) is ignored during the control design, since the backstepping method
can only handle nonlinear systems of lower-triangular form, i.e. the assumption is made
that the fin surface is a pure moment generator. This is a valid assumption for most types
of aircraft and aerodynamically controlled missiles, often made in flight control system
design, see e.g. [56, 76].
The backstepping procedure starts by defining the tracking errors as
z1 = x1 − yr
z2 = x2 − α1
where α1 is the virtual control to be designed in this first design step.
Step 1: The z1 -dynamics satisfy
ż1 = x2 + f1 − ẏr = z2 + α1 + f1 − ẏr . (3.40)
Consider a candidate CLF V1 for the z1 -subsystem defined as
V1(z1) = ½ [z1² + k1 λ1²],   (3.41)

where the gain k1 > 0 and the integrator term λ1 = ∫₀ᵗ z1 dt are introduced to robustify
the control design against the effect of the neglected control term. The derivative of V1
along the solutions of (3.40) is given by

V̇1 = z1 ż1 + k1 λ1 z1 = z1 [z2 + α1 + f1 − ẏr + k1 λ1 ] .

The virtual control α1 is selected as

α1 = −c1 z1 − k1 λ1 − f1 + ẏr , c1 > 0 (3.42)

to render the derivative

V̇1 = −c1 z12 + z1 z2 .

The cross term z1 z2 will be dealt with in the second design step.
Step 2: The z2 -dynamics are given by

ż2 = f2 + g2 u − α̇1 , (3.43)

where α̇1 = −c1(x2 + f1 − ẏr) − k1 z1 − ḟ1 + ÿr. The CLF V1 is augmented with an
additional term to penalize z2 as
V2(z1, z2) = V1 + ½ z2².   (3.44)
The derivative of V2 along the solutions of (3.40) and (3.43) satisfies

V̇2 = −c1 z1² + z1 z2 + z2 [f2 + g2 u − α̇1] = −c1 z1² + z2 [z1 + f2 + g2 u − α̇1].

A control law for u can now be defined to cancel all indefinite terms; the most straightforward choice is given by

u = (1/g2)[−c2 z2 − z1 − f2 + α̇1].
By Theorem 3.7 limt→∞ z1 , z2 = 0, which means that the reference signal yr is asymp-
totically tracked with x1 .
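A minimal simulation sketch of this design is given below. The aerodynamic polynomials are those of the model above, but the constants C1 = q̄S/(mVT) and C2 = q̄Sd/Iyy are not listed in this section, so the numerical values used here are placeholders, as is the smooth reference signal.

```python
# Sketch of the longitudinal missile backstepping law of this section (angles in rad).
# C1 and C2 are NOT given in the text, so the values below are placeholders chosen
# only to make the sketch run; the reference is an illustrative smooth signal.
import numpy as np
from scipy.integrate import solve_ivp

M, C1, C2 = 2.0, 0.7, 40.0            # Mach number; C1, C2 are placeholder values
c1, c2, k1 = 10.0, 10.0, 10.0         # control and integral gains

bz, bm = 1.6238 * M - 6.7240, 12.0393 * M - 48.2246
f1  = lambda a: C1 * (-288.7*a**3 + 50.32*a*abs(a) - 23.89*a
                      + M * (-13.53*a*abs(a) + 4.185*a))
df1 = lambda a: C1 * (-866.1*a**2 + 100.64*abs(a) - 23.89
                      + M * (-27.06*abs(a) + 4.185))          # d f1 / d alpha
f2  = lambda a: C2 * (303.1*a**3 - 246.3*a*abs(a) - 37.56*a
                      + M * (71.51*a*abs(a) + 10.01*a))
g1, g2 = C1 * bz, C2 * bm

def control(x1, x2, lam1, yr, dyr, ddyr):
    z1 = x1 - yr
    alpha1 = -c1 * z1 - k1 * lam1 - f1(x1) + dyr              # Eq. (3.42)
    z2 = x2 - alpha1
    x1dot = x2 + f1(x1)                                       # g1*u neglected, as in the design
    dalpha1 = -c1 * (x2 + f1(x1) - dyr) - k1 * z1 - df1(x1) * x1dot + ddyr
    return (-c2 * z2 - z1 - f2(x1) + dalpha1) / g2            # inner-loop control law

A, w = np.deg2rad(5.0), 0.5                                   # smooth reference yr = A sin(w t)

def closed_loop(t, s):
    x1, x2, lam1 = s
    yr, dyr, ddyr = A*np.sin(w*t), A*w*np.cos(w*t), -A*w**2*np.sin(w*t)
    u = control(x1, x2, lam1, yr, dyr, ddyr)
    return [x2 + f1(x1) + g1*u, f2(x1) + g2*u, x1 - yr]       # full model (3.38)-(3.39)

sol = solve_ivp(closed_loop, (0.0, 30.0), [0.0, 0.0, 0.0], max_step=0.01)
print("final AoA tracking error (deg):",
      np.rad2deg(sol.y[0, -1] - A*np.sin(w*30.0)))
```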
Numerical simulations of the longitudinal missile model with the backstepping controller
have been performed in MATLAB/Simulink. A third-order fixed-time-step solver with
sample time 0.01s was used. First, consider the simulations using the ‘idealized’ missile
model, i.e. the lower triangular model as used for the control design with g1 = 0. Figure
3.5 shows the response of the system states and the control input for a series of angle
of attack doublets at Mach 2.0. The red line represents the reference signal, while the
closed-loop response of the system for three different gain selections is plotted in blue.
As can be seen in the plots perfect tracking is achieved by increasing the control gains.
However, when the full missile model is used, with g1 ≠ 0, the controllers without integral gain only achieve bounded tracking, as can be seen in Figure 3.6. Setting the integral gain to k1 = 10 removes this residual tracking error. Many other methods for robustifying

the backstepping design against unmodeled dynamics can be found in literature. How-
ever, for large uncertainties these robust methods fail to give adequate performance or
they tend to lead to conservative control laws. Adaptive backstepping is a more sophis-
ticated method of dealing with large model uncertainties and is the subject of the next
chapter.
Figure 3.5: Numerical simulations at Mach 2.0 of the idealized longitudinal missile model with backstepping control law for 3 different gain selections.

Figure 3.6: Numerical simulations at Mach 2.0 of the full longitudinal missile model with backstepping control law for 3 different gain selections.
Chapter 4
Adaptive Backstepping

In the previous chapter the basic ideas of the backstepping control design approach for
nonlinear systems were explained. The backstepping approach allows the designer to
construct controllers for a wide range of nonlinear systems in a structured, recursive
way. However, the method assumes that an accurate system model is available and this
may not be the case for real world physical systems. In this chapter the backstepping
framework is extended with a dynamic feedback part that constantly updates the static
feedback control part to deal with nonlinear systems with parametric uncertainties. In the
first part of the chapter the concept of dynamic feedback is explained in a simple example
and after that the standard tuning functions adaptive backstepping method is derived.
An overview of methods to deal with non-parametric uncertainties such as measurement
noise is also presented. In the second part command filters are introduced to simplify
the adaptive backstepping method and to make the dynamic update laws more robust to
input saturation.

4.1 Introduction
Backstepping can be used to stabilize a large class of nonlinear systems in a structured
manner, while giving the control designer a lot of freedom. However, the true potential
of backstepping was discovered only when the approach was developed for nonlinear
systems with structured uncertainty. With adaptive backstepping [101, 117] global stabi-
lization is achieved in the presence of unknown parameters, and with robust backstepping
[64, 66, 87] it is achieved in the presence of disturbances. The ease with which uncer-
tainties and unknown parameters can be incorporated in the backstepping procedure is
what makes the method so interesting.
Robust backstepping and other robust nonlinear control techniques have been studied
extensively in literature. However, these methods tend to yield rather conservative con-
trol laws, especially for cases where the uncertainties are large. Furthermore, nonlinear


damping terms and switching control functions are often used to guarantee robustness
in the presence of uncertainties, which may result in undesirable high gain control or
chattering in the control signal. High gain feedback may cause several problems, such as
saturation of the control (actuators), high sensitivity to measurement noise, excitation of
unmodeled dynamics and large transient errors.
Adaptive backstepping control has a more sophisticated way of dealing with large un-
certainties. Adaptive backstepping controllers do not only employ static feedback like
the controllers designed in the previous section, but also contain a dynamic feedback
part. This dynamic part of the control law is used as a parameter update law to continu-
ously adapt the static part to new parameter estimates. Adaptive backstepping achieves
boundedness of the closed-loop states and convergence of the tracking error to zero for
nonlinear systems with parametric uncertainties.
The first adaptive backstepping method [101] employed overparametrization, i.e. more
than one update law was used for each parameter. Overparametrization is not necessarily
disadvantageous from a performance point of view, but it is not very efficient in a numer-
ical implementation of the controller due to the resulting higher dynamical order. With
the introduction of the tuning functions adaptive backstepping method [117] the over-
parametrization was removed so that only one dynamic update law for each unknown
parameter is needed. The first part of this chapter, Section 4.2, discusses the tuning
functions adaptive backstepping approach. Dynamic feedback is introduced on a second
order system, after which the method is extended to higher order systems.
The tuning functions approach has a number of shortcomings, two of the most important
being its analytical complexity and its sensitivity to input saturation. In Section 4.3 the
constrained adaptive backstepping method is introduced, which makes use of command
filters to completely remove these drawbacks. The use of filters in the backstepping
framework was first proposed as dynamic surface control in [212, 213] to remove the
tedious analytical calculation of the time derivatives of the virtual control laws at each
design step. In [58] the idea of using command filters is extended in such a way that the
dynamic update laws of adaptive backstepping are robustified against the effects of the
input saturation, resulting in the constrained adaptive backstepping approach.

4.2 Tuning Functions Adaptive Backstepping

In this section the tuning functions adaptive backstepping method as conceived in [117]
is discussed. The ideas of the recursive backstepping approach of the previous chapter
are extended to nonlinear systems with parametric uncertainties. Dynamic feedback is
employed as parameter update law to continuously adapt the static feedback control to
new parameter estimates. The controller is still constructed in a recursive manner, in-
troducing a virtual control law and intermediate update laws at each design step, while
extending the CLF, until the control law and the dynamic update laws are found in the
last design step.

4.2.1 Dynamic Feedback


The difference between a static and a dynamic nonlinear design will be illustrated using
the scalar system of Example 3.1 augmented with an unknown constant parameter θ in
front of the nonlinear term.

Example 4.1 (An uncertain scalar system)


Consider the feedback linearizable system
ẋ = θx3 + x + u (4.1)
where θ ∈ R is an unknown constant parameter. The control objective is regulation of
x to zero. If θ were known, the control

u = −θx³ − cx,   c > 1,   (4.2)

would render the derivative of V0(x) = ½x² negative definite: V̇0 = −(c − 1)x². Since
θ is not known, its certainty equivalence form is employed in which θ is replaced by
the parameter estimate θ̂:
u = −θ̂x³ − cx,   c > 1.   (4.3)
Substituting (4.3) into (4.1) gives
ẋ = θ̃x³ − (c − 1)x,   (4.4)
where θ̃ is the parameter estimation error, defined as
θ̃ = θ − θ̂. (4.5)
The derivative of V0(x) = ½x² now satisfies

V̇0 = θ̃x⁴ − (c − 1)x².   (4.6)
It is not possible to conclude anything about the stability of (4.4), since the first term
of (4.6) contains the unknown parameter error θ̃. The idea is now to extend the control
law with a dynamic update law for θ̂. To design this update law, V0 is augmented with
a quadratic term to penalize the parameter estimation error θ̃ as
V1(x, θ̃) = ½ x² + (1/(2γ)) θ̃²,   (4.7)
where γ > 0 is the adaptation gain. The derivative of this function is

V̇1 = xẋ + (1/γ) θ̃θ̃̇
   = θ̃x⁴ − (c − 1)x² + (1/γ) θ̃θ̃̇   (4.8)
   = −(c − 1)x² + θ̃ [x⁴ + (1/γ) θ̃̇].

The above equation still contains an indefinite term with the unknown θ̃. However, the relation θ̃̇ = −θ̂̇ (which holds since θ is constant) can now be utilized, which means that the indefinite term can be canceled with an appropriate choice of θ̂̇. Choosing the update law

θ̂̇ = −θ̃̇ = γx⁴   (4.9)

yields

V̇1 = −(c − 1)x² ≤ 0.   (4.10)
It can be concluded that the equilibrium (x, θ̃) = 0 is globally stable and by Theorem
3.7 the regulation property limt→∞ x = 0 is satisfied. Note that since the parameter
estimation error term in (4.8) is completely canceled, it cannot be concluded that the
parameter estimation error θ̃ converges to zero. This is a characteristic of this type
of Lyapunov based adaptive controllers: the idea is to satisfy a total system stability
criterion, the CLF, rather than to optimize the error in estimation. The advantage is
that global asymptotic stability of the closed-loop system is guaranteed. This is in
contrast with a traditional estimation-based design, where the identifiers are too slow
to deal with nonlinear system dynamics [118].
The resulting adaptive system consists of (4.1) with control law (4.3) and update law
(4.9). The response of the closed-loop system with θ = 1 for several values of update
gain γ can be found in Figure 4.1. The initial state of the system is x(0) = 2, the
control gain c = 2 and the initial parameter estimate θ̂(0) = 0 . As can be seen
from the figure, the adaptive controller manages to stabilize the uncertain nonlinear
system. The parameter estimate converges to a constant value for each of the update
gain selections, but never converges to the true parameter value.
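The closed-loop behaviour described above can be reproduced with a few lines of simulation code; the sketch below implements (4.1), (4.3) and (4.9) directly, with the same initial conditions and gains as in Figure 4.1.

```python
# Sketch reproducing Example 4.1: adaptive regulation of x_dot = theta*x^3 + x + u
# with the certainty-equivalence control (4.3) and the update law (4.9).
import numpy as np
from scipy.integrate import solve_ivp

theta, c = 1.0, 2.0                       # true parameter and control gain

def closed_loop(t, s, gamma):
    x, theta_hat = s
    u = -theta_hat * x**3 - c * x         # control law (4.3)
    return [theta * x**3 + x + u,         # plant (4.1)
            gamma * x**4]                 # update law (4.9)

for gamma in (0.1, 1.0, 10.0):
    sol = solve_ivp(closed_loop, (0.0, 5.0), [2.0, 0.0],
                    args=(gamma,), max_step=0.005)
    print(f"gamma = {gamma:4}: x(5) = {sol.y[0, -1]:+.2e}, "
          f"theta_hat(5) = {sol.y[1, -1]:.3f}")
```

As in Figure 4.1, the state is regulated to zero for every update gain while the final parameter estimate depends on γ and does not equal the true value θ = 1.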

The adaptive design of the above example is very simple because the uncertainty is in the span of the control, i.e. matched. Adaptive backstepping extends the design approach
of the example to a recursive procedure that can deal with nonlinear systems containing
parametric uncertainties that are separated by one or more integrators from the control
input.
Consider the second order system

ẋ1 = ϕ(x1 )T θ + x2 (4.11)


ẋ2 = u (4.12)

where (x1 , x2 ) ∈ R2 are the states, u ∈ R is the control input, ϕ(x1 ) is a smooth, non-
linear function vector, i.e. the regressor vector, and θ is a vector of unknown constant
parameters. The control objective is to track the smooth reference signal yr (t) (all deriva-
tives known and bounded) with the state x1 . The adaptive backstepping procedure starts
by introducing the tracking errors z1 = x1 − yr and z2 = x2 − α. The virtual control α
is now defined in terms of the parameter estimate θ̂ as

α(x1 , θ̂, yr , ẏr ) = −c1 z1 − ϕT θ̂ + ẏr , c1 > 0. (4.13)



Figure 4.1: State x, control effort u and parameter estimate θ̂ for initial values x(0) = 2, θ̂(0) = 0 and control gain c = 2 with different values of update gain γ. The parameter estimate does not converge to the true parameter value θ = 1.

This virtual control reduces the (z1 , z2 )-dynamics to

ż1 = ϕᵀθ̃ + z2 − c1 z1   (4.14)
ż2 = u − (∂α/∂x1)ẋ1 − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr − (∂α/∂θ̂)θ̂̇,   (4.15)
where θ̃ = θ − θ̂ is the parameter estimation error. A CLF is defined that not only
penalizes the tracking errors, but also the estimation error as
V(z1, z2, θ̃) = ½ (z1² + z2² + θ̃ᵀΓ⁻¹θ̃)   (4.16)
with Γ = Γᵀ > 0. The time derivative of V along the solutions of (4.14) is

V̇ = −c1 z1² + z1 z2 + ϕᵀθ̃ z1
     + z2 [u − (∂α/∂x1)ẋ1 − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr − (∂α/∂θ̂)θ̂̇] − θ̃ᵀΓ⁻¹θ̂̇
   = −c1 z1² + z2 [z1 + u − (∂α/∂x1)(ϕᵀθ̂ + x2) − (∂α/∂yr)ẏr − (∂α/∂ẏr)ÿr − (∂α/∂θ̂)θ̂̇]
     − θ̃ᵀΓ⁻¹ [θ̂̇ − Γϕ (z1 − (∂α/∂x1)z2)].

In order to render the derivative of the CLF V negative definite, a control law for u and a
dynamic update law for θ̂ are selected as
u = −c2 z2 − z1 + (∂α/∂x1)(ϕᵀθ̂ + x2) + (∂α/∂yr)ẏr + (∂α/∂ẏr)ÿr + (∂α/∂θ̂)θ̂̇   (4.17)

θ̂̇ = Γϕ (z1 − (∂α/∂x1)z2)   (4.18)
where c2 > 0. This results in
V̇ = −c1 z1² − c2 z2²
and it follows that the equilibrium (z1 , z2 , θ̃) = 0 is globally uniformly stable. Further-
more, limt→∞ z1 , z2 → 0, i.e. global asymptotic tracking is achieved. Note again that
boundedness of the parameter estimate θ̂ is guaranteed, but not necessarily convergence
to the real value of θ. In this adaptive backstepping design the choice of parameter update
law was postponed until the second design step. This will become a lot more complicated
for higher order systems as considered in the next part of the chapter.
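Before moving to higher order systems, the second-order design above is summarized in the sketch below for the illustrative scalar regressor ϕ(x1) = x1² and a sinusoidal reference; the true parameter value and all gains are arbitrary choices made only for the example.

```python
# Sketch of the second-order adaptive backstepping design (4.11)-(4.18) with the
# illustrative regressor phi(x1) = x1**2 and a single unknown parameter theta.
import numpy as np
from scipy.integrate import solve_ivp

theta_true, c1, c2, Gamma = 2.0, 2.0, 2.0, 5.0

def reference(t):
    return np.sin(t), np.cos(t), -np.sin(t)           # yr, yr_dot, yr_ddot

def closed_loop(t, s):
    x1, x2, th = s
    yr, dyr, ddyr = reference(t)
    phi = x1**2
    z1 = x1 - yr
    alpha = -c1 * z1 - phi * th + dyr                 # virtual control (4.13)
    z2 = x2 - alpha
    da_dx1 = -c1 - 2.0 * x1 * th                      # partial derivatives of alpha
    da_dyr, da_ddyr, da_dth = c1, 1.0, -phi
    th_dot = Gamma * phi * (z1 - da_dx1 * z2)         # update law (4.18)
    u = (-c2 * z2 - z1 + da_dx1 * (phi * th + x2)     # control law (4.17)
         + da_dyr * dyr + da_ddyr * ddyr + da_dth * th_dot)
    return [phi * theta_true + x2, u, th_dot]         # plant (4.11)-(4.12)

sol = solve_ivp(closed_loop, (0.0, 20.0), [0.5, 0.0, 0.0], max_step=0.01)
print("final tracking error:", sol.y[0, -1] - reference(20.0)[0])
```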

4.2.2 Extension to Higher Order Systems


The adaptive backstepping method is now extended to higher order systems. Consider
the strict feedback system
ẋi = fi (x̄i ) + gi (x̄i )xi+1 , i = 1, ..., n − 1
ẋn = fn (x) + gn (x)u (4.19)
where xi ∈ R, u ∈ R and x̄i = (x1 , x2 , ..., xi ). Unlike before, the smooth functions fi
and gi now contain the unknown dynamics of the system and will have to be approxi-
mated. It is assumed that gi does not change sign, i.e. gi > 0 or gi < 0, in the domain of
operation. For most physical systems at least the sign of these functions is known. It is
assumed that there exist vectors θfi and θgi such that
fi (x̄i ) = ϕfi (x̄i )T θfi
gi (x̄i ) = ϕgi (x̄i )T θgi ,
where ϕ∗ are the regressors and θ∗ are vectors of unknown constant parameters. Then
the estimates of the nonlinear functions fi and gi are defined as
fˆi (x̄i , θ̂fi ) = ϕfi (x̄i )T θ̂fi
ĝi (x̄i , θ̂gi ) = ϕgi (x̄i )T θ̂gi

and the parameter estimation errors as θ̃fi = θfi − θ̂fi and θ̃gi = θgi − θ̂gi . The system
(4.19) can be rewritten as
ẋi = ϕfi (x̄i )T θfi + ϕgi (x̄i )T θgi xi+1
ẋn = ϕfn (x̄n )T θfn + ϕgn (x̄n )T θgn u.

The control objective is to force the output y = x1 to asymptotically track the refer-
ence signal yr (t) whose first n derivatives are assumed to be known and bounded. The
adaptive backstepping procedure is initiated by defining the tracking errors as
z1 = x1 − yr
zi = xi − αi−1 , i = 2, ..., n. (4.20)
Step 1: The task in the first design step is to stabilize the z1 -subsystem given by
ż1 = ϕTf1 θf1 + ϕTg1 θg1 (z2 + α1 ) − ẏr . (4.21)
Consider the CLF V1 given by
V1 = ½ z1² + ½ θ̃f1ᵀ Γf1⁻¹ θ̃f1 + ½ θ̃g1ᵀ Γg1⁻¹ θ̃g1,   (4.22)
where Γ∗ = Γ∗ᵀ > 0 and whose derivative along the solutions of (4.21) is

V̇1 = z1 [ϕf1ᵀθ̂f1 + ϕg1ᵀθ̂g1 (z2 + α1) − ẏr]
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − Γf1 ϕf1 z1] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − Γg1 ϕg1 x2 z1].

To cancel the indefinite terms the virtual control α1 and the intermediate update laws τf11, τg11 are defined as

α1 = (1/(ϕg1ᵀθ̂g1)) [−c1 z1 − ϕf1ᵀθ̂f1 + ẏr]   (4.23)
τf11 = Γf1 ϕf1 z1   (4.24)
τg11 = Γg1 ϕg1 x2 z1,   (4.25)
where c1 > 0. Similar to the construction of the control law, the parameter update laws
are built up recursively in the adaptive backstepping design to prevent overparametriza-
tion. These intermediate update functions τ are called tuning functions and therefore this
method is often referred to as the tuning functions approach in literature [117]. Substi-
tuting these expressions in the derivative of V1 leads to
V̇1 = −c1 z1² + ϕg1ᵀθ̂g1 z1 z2 − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − τf11] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − τg11].

If this were the final design step, the update laws would cancel the last two indefinite terms and z2 ≡ 0, reducing the derivative to

V̇1 = −c1 z1²

and the z1-system would be stabilized. The task in the next design step is therefore to make sure that z2 converges to zero.
Step 2: The z2 -dynamics satisfy
ż2 = ϕTf2 θf2 + ϕTg2 θg2 (z3 + α2 ) − α̇1 . (4.26)

The CLF V1 is now augmented with additional terms penalizing z2 and the parameter
estimation errors θ̃f2 , θ̃g2 , i.e.
V2 = V1 + ½ z2² + ½ θ̃f2ᵀ Γf2⁻¹ θ̃f2 + ½ θ̃g2ᵀ Γg2⁻¹ θ̃g2.   (4.27)
Taking the time derivative of V2 along the solutions of (4.21), (4.26) results in

V̇2 = −c1 z1² + ϕg1ᵀθ̂g1 z1 z2
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − τf11 + Γf1 ϕf1 (∂α1/∂x1) z2]
     − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − τg11 + Γg1 ϕg1 x2 (∂α1/∂x1) z2]
     + z2 [ϕf2ᵀθ̂f2 + ϕg2ᵀθ̂g2 (z3 + α2) − µ1]
     − θ̃f2ᵀΓf2⁻¹ [θ̂̇f2 − Γf2 ϕf2 z2] − θ̃g2ᵀΓg2⁻¹ [θ̂̇g2 − Γg2 ϕg2 x3 z2],

where µ1 represents the known parts of the dynamics of α̇1 and is defined as

µ1 = (∂α1/∂x1)[ϕf1ᵀθ̂f1 + ϕg1ᵀθ̂g1 x2] + (∂α1/∂θ̂f1)θ̂̇f1 + (∂α1/∂θ̂g1)θ̂̇g1 + (∂α1/∂yr)ẏr + (∂α1/∂ẏr)ÿr.
The virtual control and intermediate update laws are selected as

α2 = (1/(ϕg2ᵀθ̂g2)) [−c2 z2 − ϕg1ᵀθ̂g1 z1 − ϕf2ᵀθ̂f2 + µ1]   (4.28)
τf12 = τf11 − Γf1 ϕf1 (∂α1/∂x1) z2 = Γf1 ϕf1 [z1 − (∂α1/∂x1) z2]   (4.29)
τg12 = τg11 − Γg1 ϕg1 x2 (∂α1/∂x1) z2 = Γg1 ϕg1 x2 [z1 − (∂α1/∂x1) z2]   (4.30)
τf22 = Γf2 ϕf2 z2   (4.31)
τg22 = Γg2 ϕg2 x3 z2.   (4.32)
Substituting the above expressions in the derivative of V2 gives

V̇2 = −c1 z1² − c2 z2² + ϕg2ᵀθ̂g2 z2 z3
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − τf12] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − τg12]
     − θ̃f2ᵀΓf2⁻¹ [θ̂̇f2 − τf22] − θ̃g2ᵀΓg2⁻¹ [θ̂̇g2 − τg22].

This concludes the second design step.


Step i: The design steps until step n (where the real control u enters) are identical. The
zi -dynamics are given by
żi = ϕfiᵀθfi + ϕgiᵀθgi (zi+1 + αi) − α̇i−1.   (4.33)

The CLF for step i is defined as


Vi = Vi−1 + ½ zi² + ½ θ̃fiᵀ Γfi⁻¹ θ̃fi + ½ θ̃giᵀ Γgi⁻¹ θ̃gi.   (4.34)
The time derivative of Vi along the solutions of (4.33) satisfies

V̇i = − Σ_{j=1}^{i−1} cj zj² + ϕgi−1ᵀθ̂gi−1 zi−1 zi
     − Σ_{k=1}^{i−1} θ̃fkᵀΓfk⁻¹ [θ̂̇fk − τfk(i−1) + Γfk ϕfk (∂αi−1/∂xk) zi]
     − Σ_{k=1}^{i−1} θ̃gkᵀΓgk⁻¹ [θ̂̇gk − τgk(i−1) + Γgk ϕgk xk+1 (∂αi−1/∂xk) zi]
     + zi [ϕfiᵀθ̂fi + ϕgiᵀθ̂gi (zi+1 + αi) − µi−1]
     − θ̃fiᵀΓfi⁻¹ [θ̂̇fi − Γfi ϕfi zi] − θ̃giᵀΓgi⁻¹ [θ̂̇gi − Γgi ϕgi xi+1 zi],

where µi−1 is given by

µi−1 = Σ_{k=1}^{i−1} (∂αi−1/∂xk)[ϕfkᵀθ̂fk + ϕgkᵀθ̂gk xk+1]
       + Σ_{k=1}^{i−1} [(∂αi−1/∂θ̂fk)θ̂̇fk + (∂αi−1/∂θ̂gk)θ̂̇gk] + Σ_{k=1}^{i} (∂αi−1/∂yr^(k−1)) yr^(k).

Now the intermediate update laws and the virtual control αi are selected as

αi = (1/(ϕgiᵀθ̂gi)) [−ci zi − ϕgi−1ᵀθ̂gi−1 zi−1 − ϕfiᵀθ̂fi + µi−1]   (4.35)
τfki = τfk(i−1) − Γfk ϕfk (∂αi−1/∂xk) zi   (4.36)
τgki = τgk(i−1) − Γgk ϕgk xk+1 (∂αi−1/∂xk) zi   (4.37)
τfii = Γfi ϕfi zi   (4.38)
τgii = Γgi ϕgi xi+1 zi,   (4.39)
for k = 1, 2, ..., i − 1. This renders the derivative of Vi equal to
V̇i = − Σ_{j=1}^{i} cj zj² + ϕgiᵀθ̂gi zi zi+1 − Σ_{k=1}^{i} θ̃fkᵀΓfk⁻¹ [θ̂̇fk − τfki] − Σ_{k=1}^{i} θ̃gkᵀΓgk⁻¹ [θ̂̇gk − τgki].

Step n: In the final step the control law and the complete update laws are defined. Consider the final Lyapunov function

Vn = Vn−1 + ½ zn² + ½ θ̃fnᵀ Γfn⁻¹ θ̃fn + ½ θ̃gnᵀ Γgn⁻¹ θ̃gn
   = ½ Σ_{k=1}^{n} [zk² + θ̃fkᵀ Γfk⁻¹ θ̃fk + θ̃gkᵀ Γgk⁻¹ θ̃gk].   (4.40)

To render the derivative of Vn negative semi-definite, the real control and update laws are selected as

u = (1/(ϕgnᵀθ̂gn)) [−cn zn − ϕgn−1ᵀθ̂gn−1 zn−1 − ϕfnᵀθ̂fn + µn−1]   (4.41)

θ̂̇fk = τfk(n−1) − Γfk ϕfk (∂αn−1/∂xk) zn
     = Γfk ϕfk [zk − Σ_{j=k}^{n−1} (∂αj/∂xk) zj+1]   (4.42)

θ̂̇gk = P(τgk(n−1) − Γgk ϕgk xk+1 (∂αn−1/∂xk) zn)
     = P(Γgk ϕgk xk+1 [zk − Σ_{j=k}^{n−1} (∂αj/∂xk) zj+1])   (4.43)

θ̂̇fn = Γfn ϕfn zn   (4.44)
θ̂̇gn = P(Γgn ϕgn u zn),   (4.45)

where P represents the parameter projection operator to prevent singularity problems, i.e.
zero crossings, in the domain of operation. Although the true functions satisfy gi ≠ 0, the estimates ĝi driven by the update laws can still cross through zero if this modification is not made. Parameter projection
can be used to keep the parameter estimate within a desired bounded and convex region.
In section 4.2.3 the parameter projection method is discussed in more detail. Substituting
(4.41)-(4.45) in the derivative of Vn renders it equal to
V̇n = − Σ_{j=1}^{n} cj zj².

Theorem 4.1. The closed-loop system consisting of the system (4.19), the control (4.41)
and the dynamic update laws (4.42)-(4.45) has a globally uniformly stable equilibrium
at (zi , θ̃fi , θ̃gi ) = 0 and limt→∞ zi = 0, i = 1, ..., n.

Proof: The closed-loop stability result follows directly from Theorem 3.7.

A block scheme of the resulting closed-loop system with tuning functions controller for
n = 3 and a constant reference signal yr is shown in Figure 4.2. It is clear that controller
and update laws are part of one integrated system.

Figure 4.2: Closed-loop dynamics of an uncertain strict feedback control system with adaptive backstepping controller for n = 3. It is assumed that yr^(i) = 0, i = 1, 2, 3.

4.2.3 Robustness Considerations


The adaptive backstepping control design of Theorem 4.1 is based on ideal plant models
with parametric uncertainties. However, in practice the controllers will be designed for
real world physical systems, which means they have to deal with non-parametric uncer-
tainties such as

• low-frequency unmodeled dynamics, e.g. structural vibrations;

• measurement noise;

• computational round-off errors and sampling delays;

• time variations of the unknown parameters.



When the input signal (or the reference signal) of the system is persistently exciting (PE)
[3], i.e. the reference signal is sufficiently rich [28], these uncertainties will hardly af-
fect the robustness of the adaptive backstepping design. The PE property guarantees
exponential stability in the absence of modeling errors which in turn guarantees bounded
states in the presence of bounded modeling error inputs provided the modeling error term
does not destroy the PE property of the input.
However, when the reference signal is not persistently exciting even very small uncer-
tainties may already lead to problems. For example, the estimated parameters will, in
general, not converge to their true values. Although a parameter estimation error of zero
can be useful (e.g. for system health monitoring), it is not a necessary condition to guar-
antee stability of the adaptive backstepping design. A more serious problem is that the
adaptation process will have difficulty to distinguish between parameter information and
noise. This may cause the estimated parameters to drift slowly. More examples of in-
stability phenomena in adaptive systems can be found in [87]. The lack of robustness
is primarily due to the adaptive law which is nonlinear in general and therefore more
susceptible to modeling error effects.
Several methods of robustifying the update laws have been suggested in literature over
the years, an overview is given in [87]. These techniques have in common that they all
aim to guarantee that the properties of the modified adaptive laws are as close as possi-
ble to the ideal properties despite the presence of the non-parametric uncertainties. The
different methods are now discussed briefly for the general parameter update law

θ̂̇ = γϕz.   (4.46)

Dead-Zones
The dead-zone modification method is based on the observation that small tracking errors
are mostly due to noise and disturbances. The most obvious solution is to turn off the
adaptation process if the tracking errors are within certain bounds. This gives a closed-
loop system with bounded tracking errors. Modifying the update law (4.46) using the
dead-zone technique results in

θ̂̇ = γϕ(z + η),   η = { 0,  if |z| ≥ η0;   −z,  if |z| < η0 }   (4.47)
or

θ̂̇ = γϕ(z + η),   η = { η0,  if z < −η0;   −η0,  if z > η0;   −z,  if |z| ≤ η0 }   (4.48)

for a continuous version to prevent computational problems.

Leakage Terms
The idea behind leakage terms is to modify the update laws so that the time derivative
of the Lyapunov function used to analyze closed-loop stability becomes negative in the

space of the parameter estimates when these parameters exceed certain bounds. Basi-
cally, leakage terms add damping to the update laws:

θ̂̇ = γ(ϕz − ω θ̂),   (4.49)

where the term ω θ̂ with ω > 0 converts the pure integral action of the update law (4.46)
to a ‘leaky’ integration and is therefore referred to as the leakage. Several choices are
possible for the leakage term ω, the most widely used choices are called σ-modification
[86] and e-modification [147]. These modifications are as follows

• σ-modification:

θ̂̇ = γ(ϕz − σ θ̂)   (4.50)

• e-modification:

θ̂̇ = γ(ϕz − σ|z| θ̂)   (4.51)

where σ > 0 is a small constant. The advantage of e-modification is that the leakage
term will go to zero as the tracking error converges to zero.

Parameter Projection
A last effective method for eliminating parameter drift and keeping the parameter esti-
mates within some designer defined bounds is to use the projection method to constrain
the parameter estimates to lie inside a bounded convex set in the parameter space. Let
this convex region S be defined as

S ≜ {θ ∈ R^pθ | g(θ) ≤ 0},   (4.52)

where g : Rpθ → R is a smooth function. Applying the projection algorithm the standard
update law (4.46) becomes

θ̂̇ = P(γϕz) = { γϕz,   if θ̂ ∈ S⁰, or if θ̂ ∈ δS and ∇gᵀγϕz ≤ 0;
                γϕz − γ (∇g∇gᵀ/(∇gᵀγ∇g)) γϕz,   otherwise }   (4.53)

where S⁰ is the interior of S, δS the boundary of S and ∇g = dg/dθ̂. If the parameter
estimate θ̂ is inside the desired region S, then the standard adaptive law is implemented.
If θ̂ is on the boundary of S and its derivative is directed outside the region, then the
derivative is projected onto the hyperplane tangent to δS. Hence, the projection keeps
the parameter estimation vector within the desired convex region S at all time.
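The sketch below collects the modified update laws of this subsection for a scalar parameter estimate; the dead-zone width η0, the leakage constant σ and the projection bound are illustrative tuning constants, and the projection function implements (4.53) only for the simple scalar set g(θ) = θ² − θmax².

```python
# Sketch of the robustified scalar update laws applied to the nominal
# law theta_hat_dot = gamma * phi * z, Eq. (4.46). Constants are illustrative.
import numpy as np

def update_dead_zone(gamma, phi, z, eta0):
    """Continuous dead-zone (4.48): no adaptation while |z| <= eta0."""
    eta = np.clip(-z, -eta0, eta0)
    return gamma * phi * (z + eta)

def update_sigma_mod(gamma, phi, z, theta_hat, sigma):
    """Sigma-modification (4.50): constant leakage toward zero."""
    return gamma * (phi * z - sigma * theta_hat)

def update_e_mod(gamma, phi, z, theta_hat, sigma):
    """e-modification (4.51): leakage scaled by the tracking error magnitude."""
    return gamma * (phi * z - sigma * abs(z) * theta_hat)

def update_projection(gamma, phi, z, theta_hat, theta_max):
    """Projection (4.53) for the scalar set S = {|theta| <= theta_max}."""
    tau = gamma * phi * z
    grad = 2.0 * theta_hat                  # gradient of g(theta) = theta^2 - theta_max^2
    if abs(theta_hat) >= theta_max and grad * tau > 0.0:
        tau = 0.0                           # scalar case: the correction term stops outward motion
    return tau
```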

4.2.4 Example: Adaptive Longitudinal Missile Control


In this section the missile example of Chapter 3 is revisited. The generalized dynamics
of the missile, (3.38) and (3.39), are repeated here for convenience:

ẋ1 = x2 + f1 (x1 ) + g1 u (4.54)


ẋ2 = f2 (x1 ) + g2 u, (4.55)

where f1 , f2 , g1 and g2 are now unknown nonlinear functions containing the aerodynamic
stability and control derivatives. For the control design the g1u-term is again neglected
so that the system is in a lower triangular form. It is assumed that the sign of g2 is known
and fixed. The unknown functions are rewritten in a parametric form with unknown
parameter vectors θf1 , θf2 and θg2 as

f1 (x1 ) = ϕf1 (x1 )T θf1


f2 (x1 ) = ϕf2 (x1 )T θf2
g2 = ϕTg2 θg2

where the regressors ϕ∗ are given by

ϕf1 = C1 [x1³, x1|x1|, x1]ᵀ
ϕf2 = C2 [x1³, x1|x1|, x1]ᵀ
ϕg2 = C2.

Then the estimates of the nonlinear functions are defined as

f̂1(x1, θ̂f1) = ϕf1(x1)ᵀθ̂f1
f̂2(x1, θ̂f2) = ϕf2(x1)ᵀθ̂f2
ĝ2(θ̂g2) = ϕg2ᵀθ̂g2

and the parameter estimation errors as θ̃∗ = θ∗ − θ̂∗. The tracking control objective
remains the same, hence

z1 = x1 − yr
z2 = x2 − α1

where α1 is the virtual control to be designed in the first design step. The tuning functions
adaptive backstepping method is now used to solve this control problem.
Step 1: The z1 -dynamics satisfy

ż1 = z2 + α1 + ϕTf1 θf1 − ẏr . (4.56)

Consider the candidate CLF V1 for the z1 -subsystem defined as


V1(z1, θ̃f1) = ½ [z1² + k1 λ1² + θ̃f1ᵀ Γf1⁻¹ θ̃f1],   (4.57)

where the gain k1 > 0 and the integrator term λ1 = ∫₀ᵗ z1 dt are again introduced to robustify the design against the neglected g1u-term. The derivative of V1 along the solutions of (4.56) is given by

V̇1 = z1 [z2 + α1 + ϕf1ᵀθ̂f1 − ẏr + k1 λ1] − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − Γf1 ϕf1 z1].

To cancel all indefinite terms, the virtual control α1 is selected as


α1 = −c1 z1 − k1 λ1 − ϕTf1 θ̂f1 + ẏr , c1 > 0 (4.58)

and the intermediate update law for θ̂f1 as


τf11 = Γf1 ϕf1 z1 . (4.59)
This renders the derivative equal to
V̇1 = −c1 z1² + z1 z2 − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − τf11].

This concludes the outer loop design.


Step 2: The z2 -dynamics are given by
ż2 = ϕTf2 θf2 + ϕTg2 θg2 u − α̇1 . (4.60)
Consider the CLF V2 for the complete system
V2(z1, z2, θ̃f1, θ̃f2, θ̃g2) = V1(z1, θ̃f1) + ½ [z2² + θ̃f2ᵀ Γf2⁻¹ θ̃f2 + θ̃g2ᵀ Γg2⁻¹ θ̃g2].   (4.61)
The derivative of V2 along the solutions of (4.56) and (4.60) satisfies

V̇2 = −c1 z1² + z1 z2 − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − τf11 + Γf1 ϕf1 (∂α1/∂x1) z2]
     + z2 [ϕf2ᵀθ̂f2 + ϕg2ᵀθ̂g2 u − µ1]
     − θ̃f2ᵀΓf2⁻¹ [θ̂̇f2 − Γf2 ϕf2 z2] − θ̃g2ᵀΓg2⁻¹ [θ̂̇g2 − Γg2 ϕg2 u z2],

where µ1 is given by

µ1 = (∂α1/∂x1) ϕf1ᵀθ̂f1 + (∂α1/∂θ̂f1) θ̂̇f1 + (∂α1/∂yr) ẏr + (∂α1/∂ẏr) ÿr.
The control law and the update laws are selected as

u = (1/(ϕg2ᵀθ̂g2)) [−c2 z2 − z1 − ϕf2ᵀθ̂f2 + µ1]   (4.62)
θ̂̇f1 = τf11 − Γf1 ϕf1 (∂α1/∂x1) z2   (4.63)
θ̂̇f2 = Γf2 ϕf2 z2   (4.64)
θ̂̇g2 = P(Γg2 ϕg2 u z2),   (4.65)

where the projection operator P is introduced to ensure that the estimate of g2 does not
change sign. The above adaptive control law renders the derivative of V2 equal to

V̇2 = −c1 z1² − c2 z2².

By Theorem 3.7 limt→∞ z1 , z2 = 0, which means that the reference signal yr is again
asymptotically tracked with x1 .
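For implementation purposes, the regressors and the dynamic part of this controller can be coded compactly as below; the quantities z1, z2, ∂α1/∂x1 and u are assumed to be supplied by the static part of the control law, and C1, C2 and the gains are placeholder values.

```python
# Sketch of the regressors and the update laws (4.59), (4.63)-(4.65) of this example.
# C1, C2 and the update gains are placeholders; z1, z2, dalpha1_dx1 and u come
# from the rest of the control loop.
import numpy as np

C1, C2 = 0.7, 40.0                                   # placeholder model constants
Gf1 = Gf2 = 10.0 * np.eye(3)                         # update gains Gamma_f1, Gamma_f2
Gg2 = 0.01

def regressors(x1):
    base = np.array([x1**3, x1 * abs(x1), x1])
    return C1 * base, C2 * base, C2                  # phi_f1, phi_f2, phi_g2

def update_laws(x1, z1, z2, dalpha1_dx1, u, theta_g2_hat, g2_min=1e-3):
    phi_f1, phi_f2, phi_g2 = regressors(x1)
    th_f1_dot = Gf1 @ phi_f1 * (z1 - dalpha1_dx1 * z2)   # Eq. (4.63), tau_f11 = Gf1*phi_f1*z1
    th_f2_dot = Gf2 @ phi_f2 * z2                        # Eq. (4.64)
    th_g2_dot = Gg2 * phi_g2 * u * z2                    # Eq. (4.65) before projection
    # crude projection: keep the estimate g2_hat = phi_g2*theta_g2_hat away from zero
    g2_hat = phi_g2 * theta_g2_hat
    if abs(g2_hat) <= g2_min and phi_g2 * th_g2_dot * np.sign(g2_hat) < 0.0:
        th_g2_dot = 0.0
    return th_f1_dot, th_f2_dot, th_g2_dot
```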
The resulting closed-loop system has been implemented in MATLAB/Simulink. In
Figure 4.3 the response of the system with 4 different gain selections to a number of an-
gle of attack doublets at Mach 2.2 is shown. The onboard model contains the data of the
missile at Mach 2.0. The control gains are selected as c1 = c2 = 10 for all simulations,
the integral gain k1 is either 0 (‘noint’) or 10 (‘int’) and the update gains Γf1 = Γf2 = 0I,
Γg2 = 0 (‘nonad’) or Γf1 = Γf2 = 10I, Γg2 = 0.01 (‘ad’).
As can be seen from Figure 4.3, the modeling error is severe enough to render the system
unstable when adaptation is turned off and no integral gain is used. Adding an integral
gain ensures that the missile follows its reference again, but the transient performance is
not acceptable. Turning adaptation on instead gives a much better response, but there is
still a very small tracking error in the outer loop. This is due to the neglected g1 u-term.
The regressors are not defined rich enough to fully cancel the effect of these unmodeled
dynamics. Therefore, the final simulation with adaptation turned on and an integral gain
shows the best response.
The parameter estimation errors of the two simulations with adaptation turned on are
plotted in Figure 4.4. The errors can be seen to converge to constant values. However, the
true values are not found. This is a characteristic of the integrated adaptive approaches:
the estimation is performed to meet a total system stability criterion, the control Lya-
punov function, rather than to optimize the error in the estimation. Hence, convergence
of the parameters to their true values is not guaranteed. Note that dead-zones can be
added to the update laws to prevent the parameter drift due to numerical round-off errors.

4.3 Constrained Adaptive Backstepping


In the previous section the tuning functions adaptive backstepping method was derived.
The complexity of the design procedure is mainly due to the calculation of the derivatives
of the virtual controls at each intermediate design step. Especially for high order systems
or complex multivariable systems such as aircraft dynamics, it becomes very tedious to
calculate the derivatives analytically.
In this section an alternative approach involving command filters is introduced to reduce
the algebraic complexity of the adaptive backstepping control law formulated in Theorem
4.1. This approach is sometimes referred to as dynamic surface control in literature [212,
213]. An additional advantage of this approach is that it also eliminates the method’s
restriction to nonlinear systems of a lower triangular form. Finally, the command filters
can also be used to incorporate magnitude and rate limits on the input and states used as
virtual controls in the design [58, 60, 61, 163].

Figure 4.3: Numerical simulations at Mach 2.2 of the longitudinal missile model with adaptive backstepping control law with uncertainty in the onboard model. Results are shown for 4 different gain selections, including 2 with adaptation turned off.

For example, when a magnitude limit on
the input is in effect and the desired control cannot be achieved, then the tracking errors
will in general become larger and will no longer be the result of function approximation
errors exclusively. Since the dynamic parameter update laws of the adaptive backstepping
method are driven by the tracking errors, care must be taken that they do not ‘unlearn’
when the limits on the control input are in effect.
The command filtered approach for preventing corruption of the parameter estimation
process can be seen as a combination of training signal hedging [4, 105] and pseudo-
control hedging [91, 206]. Training signal hedging involves modifying the tracking error
definitions used in the parameter update laws to remove the effects of the saturation.
In the pseudo-control hedging method the commanded input to the next control loop
is altered so that the generated control signal is implementable without exceeding the
constraints.

4.3.1 Command Filtering Approach


Consider the non-triangular, feedback passive system

ẋi = fi (x) + gi (x)xi+1 , i = 1, ..., n − 1


ẋn = fn (x) + gn (x)u, (4.66)

Figure 4.4: The parameter estimation errors for the two simulations of the longitudinal missile model with adaptive backstepping control law at Mach 2.2 with adaptation turned on.

where x = (x1 , ..., xn ) is the state, xi ∈ R and u ∈ R the control signal. The smooth
functions fi and gi are again unknown. The sign of all gi(x) is known and gi(x) ≠ 0.
The control objective is to asymptotically track the reference signal x1,r (t) with first
derivative known. The tracking errors are defined as
zi = xi − xi,r , (4.67)
where xi,r , i = 2, ..., n will be defined by the backstepping controller.
Step 1: As with the standard adaptive backstepping procedure, the first virtual control is
defined as
α1 = (1/(ϕg1ᵀθ̂g1)) [−c1 z1 − ϕf1ᵀθ̂f1 + ẋ1,r]   (4.68)
where c1 > 0. However, instead of directly applying this virtual control, a new signal
x02,r is defined as
x02,r = α1 − χ2 , (4.69)
where χ2 will be defined in design step 2. The signal x02,r is filtered with a second order
command filter to produce x2,r and its derivative ẋ2,r . It is possible to enforce magnitude
and rate limits with this filter, see Appendix C for details. The effect that the use of this
command filter has on the tracking error z1 is estimated by the stable linear filter
χ̇1 = −c1 χ1 + ϕg1ᵀθ̂g1 (x2,r − x⁰2,r).   (4.70)

Note that by design of the second order command filter, the signal (x2,r − x02,r ) is
bounded and, when no limits are in effect, small. It is now possible to introduce the
compensated tracking errors as

z̄i = zi − χi,   i = 1, ..., n.   (4.71)

Select the first CLF V1 as a quadratic function of the compensated tracking error z̄1 and
the estimation errors:
V1 = ½ z̄1² + ½ θ̃f1ᵀ Γf1⁻¹ θ̃f1 + ½ θ̃g1ᵀ Γg1⁻¹ θ̃g1.   (4.72)
Taking the derivative of V1 results in

V̇1 = z̄1 [ϕf1ᵀθ̂f1 + ϕg1ᵀθ̂g1 (z2 + x2,r) − ẋ1,r − χ̇1]
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − Γf1 ϕf1 z̄1] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − Γg1 ϕg1 x2 z̄1]
   = z̄1 [ϕf1ᵀθ̂f1 + ϕg1ᵀθ̂g1 (z2 + x⁰2,r) − ẋ1,r + c1 χ1]
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − Γf1 ϕf1 z̄1] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − Γg1 ϕg1 x2 z̄1]
   = −c1 z̄1² + ϕg1ᵀθ̂g1 z̄1 z̄2
     − θ̃f1ᵀΓf1⁻¹ [θ̂̇f1 − Γf1 ϕf1 z̄1] − θ̃g1ᵀΓg1⁻¹ [θ̂̇g1 − Γg1 ϕg1 x2 z̄1].

Selecting the dynamic update laws as

θ̂̇f1 = Γf1 ϕf1 z̄1   (4.73)
θ̂̇g1 = P(Γg1 ϕg1 x2 z̄1)   (4.74)

finishes the first design step. The update laws for θ̂f1 and θ̂g1 are defined immediately, since there will be no additional derivative terms in the next steps due to the command filters. Note that the update laws are driven by the compensated tracking error.
Step i: (i = 2, ..., n − 1) The virtual controls are defined as

αi = (1/(ϕgiᵀθ̂gi)) [−ci zi − ϕgi−1ᵀθ̂gi−1 z̄i−1 − ϕfiᵀθ̂fi + ẋi,r]   (4.75)

where ci > 0 and the command filter inputs as

x⁰i,r = αi−1 − χi.   (4.76)

The effect that the use of the command filters has on the tracking errors is estimated by

χ̇i = −ci χi + ϕgiᵀθ̂gi (xi+1,r − x⁰i+1,r).   (4.77)

Finally, the update laws are given by

θ̂̇fi = Γfi ϕfi z̄i   (4.78)
θ̂̇gi = P(Γgi ϕgi xi+1 z̄i).   (4.79)

Step n: In the final design step the actual controller is found by filtering

u⁰ = αn = (1/(ϕgnᵀθ̂gn)) [−cn zn − ϕgn−1ᵀθ̂gn−1 z̄n−1 − ϕfnᵀθ̂fn + ẋn,r],   (4.80)

to generate u. The effect that the use of this filter has on the tracking error zn is estimated by

χ̇n = −cn χn + ϕgnᵀθ̂gn (u − u⁰)   (4.81)

and the update laws are defined as

θ̂̇fn = Γfn ϕfn z̄n   (4.82)
θ̂̇gn = P(Γgn ϕgn u z̄n).   (4.83)

Theorem 4.2. The closed-loop system consisting of the system (4.66), the control (4.80)
and update laws (4.73), (4.74), (4.78), (4.79), (4.82), (4.83) has a globally uniformly
stable equilibrium at (z̄i , θ̃fi , θ̃gi ) = 0, i = 1, ..., n. Furthermore, limt→∞ z̄i = 0.

Proof: Consider the CLF

Vn = ½ Σ_{i=1}^{n} [z̄i² + θ̃fiᵀ Γfi⁻¹ θ̃fi + θ̃giᵀ Γgi⁻¹ θ̃gi],   (4.84)

which, along the solutions of the closed-loop system with the control (4.80) and update laws (4.73), (4.78), (4.82), has the time derivative

V̇n = − Σ_{i=1}^{n} ci z̄i².

Hence, by Theorem 3.7 the stated stability properties follow.

The above theorem guarantees desirable properties for the compensated tracking errors
z̄i . The difference between z̄i and the real tracking errors zi is χi , which is the output of
the stable filters

χ̇i = −ci χi + ϕgiᵀθ̂gi (xi+1,r − x⁰i+1,r).




The magnitude of the input to this filter is determined by the design of the command filter
for x0i+1,r . If there are no magnitude or rate limits in effect on the command filters and
4.3 CONSTRAINED ADAPTIVE BACKSTEPPING 73

their bandwidth is selected sufficiently high, the error (xi+1,r − x⁰i+1,r) will be small


during transients and zero under steady-state conditions. Hence, the performance of the
command filtered adaptive backstepping approach can be made arbitrarily close to that
of the standard adaptive backstepping approach of Section 4.2. A formal proof of this
statement can be found in [59]. This rigorous proof is based on singular perturbation
theory and makes use of Tikhonov’s Theorem as given in [106].
If the limits on the command filter are in effect, the real tracking errors zi may increase,
but the compensated tracking errors z̄i that drive the estimation process are unaffected.
Hence, the dynamic update laws will not unlearn due to magnitude or rate limits on the
input and states used for virtual control.
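A typical realization of the second-order command filter referred to in step 1 is sketched below; the exact filter of Appendix C may differ in detail, and the natural frequency, damping and limits shown are illustrative.

```python
# Sketch of a common second-order command filter with magnitude and rate limits.
# The filter state is q = [q1, q2]; q1 approximates the limited command x_r and
# q2 its derivative x_r_dot. Parameters are illustrative.
import numpy as np

def command_filter_deriv(q, x_raw, wn=50.0, zeta=0.8, mag=None, rate=None):
    q1, q2 = q
    x_cmd = np.clip(x_raw, -mag, mag) if mag is not None else x_raw   # magnitude limit
    v = (wn / (2.0 * zeta)) * (x_cmd - q1)
    if rate is not None:
        v = np.clip(v, -rate, rate)                                   # rate limit on q2 target
    # with no limits active this reduces to q1'' + 2*zeta*wn*q1' + wn^2*q1 = wn^2*x_raw
    return np.array([q2, 2.0 * zeta * wn * (v - q2)])
```

The filter states are integrated together with the plant; q1 and q2 then provide xi+1,r and ẋi+1,r for the next design step, while the difference (xi+1,r − x⁰i+1,r) drives the compensation filter (4.77).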

4.3.2 Example: Constrained Adaptive Longitudinal Missile Control


In this section the command filtered adaptive backstepping approach is applied to the
tracking control design for the longitudinal missile model (3.38), (3.39) of the earlier ex-
amples. The nonlinear functions containing the aerodynamic stability and control deriva-
tives f1 , f2 , g1 and g2 are again unknown. Furthermore, it is again assumed that the sign
of g2 is known and fixed. Since the command filtered adaptive backstepping method can
deal with non-triangular nonlinear systems, the g1u-term does not have to be neglected
during the control design.
The tracking errors are defined as

z1 = x1 − yr
z2 = x2 − x2,r . (4.85)

where x2,r is the command filtered virtual control. The virtual controls are defined as

α1 = −c1 z1 − ϕTf1 θ̂f1 − ϕTg1 θ̂g1 u + ẏr , c1 > 0 (4.86)


α2 = (1/(ϕg2ᵀθ̂g2)) [−c2 z2 − z̄1 − ϕf2ᵀθ̂f2 + ẋ2,r],   c2 > 0,   (4.87)

where

z̄i = zi − χi,   i = 1, 2   (4.88)

are the compensated tracking errors. The signals

x02,r = α1 − χ2 (4.89)
0
u = α2 (4.90)

are filtered with second order command filters to produce x2,r , its derivative ẋ2,r and u.
The effect that the use of these command filters has on the tracking errors is measured by

χ̇1 = −c1 χ1 + (x2,r − x⁰2,r)   (4.91)
χ̇2 = −c2 χ2 + ϕg2ᵀθ̂g2 (u − u⁰).   (4.92)

Finally, the update laws are given by

θ̂̇f1 = Γf1 ϕf1 z̄1   (4.93)
θ̂̇g1 = Γg1 ϕg1 u z̄1   (4.94)
θ̂̇f2 = Γf2 ϕf2 z̄2   (4.95)
θ̂̇g2 = P(Γg2 ϕg2 u z̄2),   (4.96)

where Γ∗ = Γ∗ᵀ > 0 are the update gains. The adaptive controller renders the derivative of the CLF

V = ½ Σ_{i=1}^{2} [z̄i² + θ̃fiᵀ Γfi⁻¹ θ̃fi + θ̃giᵀ Γgi⁻¹ θ̃gi]   (4.97)

equal to

V̇ = −c1 z̄1² − c2 z̄2².   (4.98)

By Theorem 3.7 the equilibrium (z̄i , θ̃fi , θ̃gi ) = 0 for i = 1, 2 is globally stable and the
compensated tracking errors z̄1 , z̄2 converge asymptotically to zero.
The resulting constrained adaptive backstepping controller can be compared with the
standard adaptive backstepping controller of Section 4.2.4 in MATLAB/Simulink simulations. For the tuning functions controller the control gains are selected as c1 = c2 =
k1 = 10 and the update gains as Γf1 = Γf2 = 10I, Γg2 = 0.01. The gains of the com-
mand filtered controller are selected the same, except that the update gains of the outer
loop are selected as Γf1 = 1000I and Γg1 = 1. The outer loop update laws of both
designs differ, but with these update gain selections the response of both controllers is
nearly identical. Of course, the command filtered controller does not need the integral
term to achieve perfect tracking since it does not neglect the effect of the control surface
deflections on the aerodynamic forces.
The results of a simulation with an upper magnitude limit of 9.5 degrees on the control in-
put are more interesting, as can be seen in Figure 4.5. The maneuver has been performed
at Mach 2.2 with onboard model for Mach 2.0. The performance of the standard adaptive
backstepping degrades severely when compared to the performance without saturation in
Figure 4.3 of Section 4.2.4. The reason for this loss in performance can be found in
Figure 4.6 where the parameter estimation errors are plotted. During periods of control
saturation the tracking errors increase, since the parameter update laws are driven by
the tracking errors (which are now no longer the result of the function approximation
errors exclusively) so they tend to ‘unlearn’. The update laws of the command filtered
controller are driven by the compensated tracking errors, where the effect of the magni-
tude limit has been removed by proper definition of the command filters. As a result the
performance of the constrained adaptive backstepping controller is much better.

Figure 4.5: Numerical simulations at Mach 2.2 of the longitudinal missile model with the tuning functions versus the constrained adaptive backstepping (cabs) control law and an upper magnitude limit on the control input of 9.5 deg.

Figure 4.6: The parameter estimation errors for both adaptive backstepping designs. The update laws of the tuning functions adaptive backstepping controller ‘unlearn’ during periods when the upper limit on the input is in effect.
Chapter 5
Inverse Optimal Adaptive
Backstepping

The static and dynamic parts of the adaptive backstepping controllers of the previous
chapter are designed simultaneously in a recursive manner. The very strong stability and
convergence properties of the controllers can be proved using a single control Lyapunov
function. A drawback of this approach is that, because there is strong coupling between
the static and dynamic parts, it is unclear how changes in the adaptation gain affect the
tracking performance. This makes tuning of the controllers a very tedious and nonintu-
itive process. In this chapter an attempt is made to develop an adaptive backstepping
control approach that is optimal with respect to some meaningful cost functional. Be-
sides optimal control being an intuitively appealing approach, the resulting control laws
inherently possess certain robustness properties.

5.1 Introduction
The adaptive backstepping designs of Chapter 4 are focused on achieving stability and
convergence rather than performance or optimality. Some performance bounds can be
derived for the tracking errors, the system states and the estimated parameters, but those
bounds do not contain any estimates of the necessary control effort [121]. Furthermore,
increasing the update gains results in more rapid parameter convergence, but it is unclear
how the transient tracking performance is affected. The advantages of a control law that
is optimal with respect to some ‘meaningful’ cost functional1 are its inherent robustness
properties with respect to external disturbances and model uncertainties, as in the case of
linear quadratic control or H∞ control [215]. This would suggest combining or extend-
1 A meaningful cost functional is one that places a suitable penalty on both the tracking error and the control

effort, so that useless conclusions such as ‘every stabilizing control law is optimal’ can be avoided [67].

77
78 INVERSE OPTIMAL ADAPTIVE BACKSTEPPING 5.2

ing the Lyapunov based control with some form of optimal control theory.
Naturally, many attempts have been made to extend linear optimal control results to non-
linear control, see e.g. [130, 166, 167, 175]. However, the difficulty lies in the fact
that the direct optimal control problem for nonlinear systems requires the solving of
a Hamilton-Jacobi-Bellman (HJB) equation which is in general not feasible. Optimal
adaptive control is even more challenging, since the certainty equivalence combination
of a standard parameter estimation scheme with linear quadratic optimal control does not
even give any optimality properties [113].
The problems with direct nonlinear optimal control motivated the development of inverse
optimal design methods [65, 66]. In the inverse approach a positive definite Lyapunov
function is given and the task is to determine if a feedback control law minimizes some
meaningful cost functional. The term inverse refers to the fact that the cost functional
is determined after the design of the stabilizing feedback control law, instead of being
selected beforehand by the control designer. In [128] the inverse optimal control the-
ory for nonlinear systems was combined with the tuning functions approach, to develop
an inverse optimal adaptive backstepping control design for a general class of nonlinear
systems with parametric uncertainties. This adaptive controller compensates for the ef-
fect of the parameter estimation transients in order to achieve optimality of the overall
system. In [134] this result is extended to a nonlinear multivariable system with external
disturbances.
This chapter starts with a discussion on the differences between direct and inverse opti-
mal control for nonlinear systems. The inverse optimal control theory is combined with
the tuning function adaptive backstepping method in Section 5.3 following the approach
of [128]. The transient performance is analyzed, after which the method is applied to the
pitch autopilot design for a longitudinal missile model and compared with a design based
on the standard tuning functions adaptive backstepping approach.

5.2 Nonlinear Control and Optimality


This section discusses general optimal control theory. The difficulties with optimal con-
trol theory in the context of nonlinear control are explained and as an alternative inverse
optimal control theory is introduced.

5.2.1 Direct Optimal Control


Optimal control deals with the problem of finding a control law for a given system such
that a certain optimality criterion is achieved. Given the general nonlinear system
ẋ = f (x) + g(x)u (5.1)
where x ∈ Rn is the state vector and u ∈ Rm is the control input, the aim is to find a
control u(x) that stabilizes system (5.1) while minimizing the cost functional

J = ∫₀^∞ [l(x) + uᵀR(x)u] dt   (5.2)

with l(x) ≥ 0 and R(x) > 0 for all x. For a given feedback control u(x), the value of
J, if finite, is a function of the initial state x(0): J(x). When J is at its minimum, J(x)
is called the optimal value function. The optimal control law is denoted by u∗ (x). When
this optimal control law is applied, J(x) will decrease along the trajectory, since the
cost-to-go must continuously decrease by the principle of optimality [15]. This means
that J(x) is a Lyapunov function for the controlled system: V (x) = J(x). The functions
V (x) and u∗ (x) are related to each other by the following optimality condition [175,
194].
Theorem 5.1 (Optimality and Stability). Suppose that there exists a continuously dif-
ferentiable positive semi-definite function V (x) which satisfies the Hamilton-Jacobi-
Bellman equation [14]
l(x) + Lf V(x) − ¼ Lg V(x) R⁻¹(x) (Lg V(x))ᵀ = 0,   V(0) = 0   (5.3)
such that the feedback control
u*(x) = −½ R⁻¹(x) (Lg V(x))ᵀ   (5.4)
achieves asymptotic stability of the equilibrium x = 0. Then u∗ (x) is the optimal
stabilizing control which minimizes the cost functional (5.2) over all u guaranteeing
limt→∞ x(t) = 0, and V (x) is the optimal value function.

Proof: Substituting

v = u − u* = u + ½ R⁻¹(x)(Lg V(x))ᵀ   (5.5)

into (5.2) and using the HJB-identity results in:

J = ∫₀^∞ [l + vᵀRv − vᵀ(Lg V)ᵀ + ¼ Lg V R⁻¹(Lg V)ᵀ] dt
  = −∫₀^∞ [Lf V − ½ Lg V R⁻¹(Lg V)ᵀ + Lg V v] dt + ∫₀^∞ vᵀRv dt
  = −∫₀^∞ (∂V/∂x)(f + gu) dt + ∫₀^∞ vᵀRv dt
  = −∫₀^∞ (dV/dt) dt + ∫₀^∞ vᵀRv dt
  = V(x(0)) − lim_{T→∞} V(x(T)) + ∫₀^∞ vᵀRv dt.

The above limit of V(x(T)) is zero since the cost functional (5.2) is only minimized over those u which achieve limt→∞ x(t) = 0, thus

J = V(x(0)) + ∫₀^∞ vᵀRv dt.
It is easy to see that the minimum of J is V(x(0)). This minimum is reached for v(t) ≡ 0, which proves that u*(x) given by (5.4) is optimal and that V(x) is the optimal value function.
In [70] and [175] it is shown that, besides optimal control being an intuitively appealing approach, optimal control laws inherently possess certain robustness properties for the closed-loop system, including stability margins. However, a direct optimal control approach requires solving the Hamilton-Jacobi-Bellman equation, which is in general not feasible.
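As a simple illustration (a worked example added here for clarity, not taken from the cited references), consider the scalar integrator ẋ = u with l(x) = x² and R = 1. The HJB equation (5.3) reduces to
\[ x^2 - \tfrac{1}{4}\Big(\frac{dV}{dx}\Big)^2 = 0 \quad\Rightarrow\quad V(x) = x^2, \qquad u^*(x) = -\tfrac{1}{2}\frac{dV}{dx} = -x, \]
where the positive definite root is selected. The resulting closed loop ẋ = −x is asymptotically stable, and J = ∫₀^∞ (x² + u²) dt indeed equals V(x(0)) = x(0)² along it. For general nonlinear f and g no such closed-form solution of (5.3) is available, which is exactly the difficulty mentioned above.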

5.2.2 Inverse Optimal Control


The fact that the robustness achieved as a result of optimality is largely independent of the
choice of functions l(x) ≥ 0 and R(x) > 0 motivated the development of inverse optimal
control design methods [65, 66]. In the inverse approach a Lyapunov function V (x) is
given and the task is to determine whether a control law such as (5.4) is optimal for a
cost functional of the form (5.2). The term ‘inverse’ refers to the fact that the functions
l(x) and R(x) are determined after the design of the stabilizing feedback control instead
of being selected beforehand by the designer.
Definition 5.2. A stabilizing control law u(x) solves an inverse optimal control problem
for the system

ẋ = f (x) + g(x)u (5.6)

if it can be expressed as
u(x) = −k(x) = −(1/2) R⁻¹(x) (L_g V(x))ᵀ,   R(x) > 0,    (5.7)
where V(x) is a positive semi-definite function, such that the negative semi-definiteness
of V̇ is achieved with the control (5.7), that is
V̇ = L_f V(x) − (1/2) L_g V(x) k(x) ≤ 0.    (5.8)
When the function l(x) is selected equal to −V̇:
l(x) := −L_f V(x) + (1/2) L_g V(x) k(x) ≥ 0    (5.9)
then V(x) is a solution of the HJB equation
l(x) + L_f V(x) − (1/4) (L_g V(x)) R⁻¹(x) (L_g V(x))ᵀ = 0.    (5.10)
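As a worked illustration of this definition (an example added here, not part of the original text), consider the scalar system ẋ = x² + u with the globally stabilizing control u = −k(x) = −x − x³ and the Lyapunov function V(x) = ½x². Since L_gV = x, writing k(x) = ½R⁻¹(x)L_gV(x) gives R(x) = (2(1 + x²))⁻¹ > 0, and
\[ \dot{V} = L_f V - \tfrac{1}{2} L_g V\,k(x) = x^3 - \tfrac{1}{2}x(x + x^3) = -\tfrac{1}{2}x^2(x-1)^2 \le 0, \]
so that (5.8) holds. With l(x) := ½x²(x−1)² ≥ 0 the function V satisfies (5.10), and the control u = −x − x³ is therefore optimal with respect to a cost functional of the form (5.2) that was never specified beforehand.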

5.3 Adaptive Backstepping and Optimality


Since the introduction of adaptive backstepping in the beginning of the 1990’s, there
have been numerous publications that consider the inverse optimal problem and control
Lyapunov function designs, e.g. [57, 122] and [128]. Textbooks that deal with the subject
are [115] and [175]. However, inverse optimal adaptive backstepping control is only
considered in [128] and [134]. In [128] an inverse optimal adaptive tracking control
design for a general class of nonlinear systems is derived and [134] extends the results to
a nonlinear multi-input multi-output system with external disturbances.
In this section the approach of [128] is repeated in an organized manner and theoretical
transient performance bounds are given. The section concludes with an evaluation of the
performance and numerical sensitivity of the inverse optimal design approach applied to
the longitudinal missile pitch autopilot example as discussed in the earlier chapters.

5.3.1 Inverse Optimal Design Procedure


Consider the class of parametric strict feedback systems

ẋi = xi+1 + ϕi (x̄i )T θ, i = 1, ..., n − 1


ẋn = u + ϕn (x)T θ (5.11)

where xi ∈ R, u ∈ R and x̄i = (x1 , x2 , ..., xi ). The vector θ contains the unknown
constant parameters of the system.
The control objective is to force the output y = x1 to asymptotically track the reference
signal yr (t) whose first n derivatives are assumed to be known and bounded. To simplify
the control design, the tracking control problem is first transformed to a regulation prob-
lem. For any given smooth function yr (t) there exist functions ρ1 (t), ρ2 (t, θ), ..., ρn (t, θ)
and αr (t, θ) such that

ρ̇_i = ρ_{i+1} + ϕ_i(ρ̄_i)ᵀθ,   i = 1, ..., n − 1
ρ̇_n = α_r(t, θ) + ϕ_n(ρ)ᵀθ    (5.12)
y_r(t) = ρ_1(t).

Since ∂ρ_1/∂θ = 0 for all t ≥ 0 and for all θ ∈ Rᵖ, θ can be replaced by its estimate θ̂.
Consider the signal xr (t) = ρ(t, θ̂(t)), which is governed by

ẋ_ri = x_r(i+1) + ϕ_ri(x̄_ri)ᵀθ̂ + (∂ρ_i/∂θ̂) θ̂̇,   i = 1, ..., n − 1
ẋ_rn = α_r(t, θ̂) + ϕ_rn(x_r)ᵀθ̂ + (∂ρ_n/∂θ̂) θ̂̇    (5.13)
y_r(t) = ρ_1(t).

The dynamics of the tracking error e = x − xr satisfy

∂ρi ˙
ėi = ei+1 + ϕ̃i (e1 , ..., ei , θ̂)T θ + ϕri (xr1 , ..., xrn , θ̂)T θ̃ − θ̂, i = 1, ..., n − 1
∂ θ̂
∂ρn ˙
ėn = ũ + ϕ̃n (e1 , ..., ei , θ̂)T θ + ϕrn (xr , θ̂)T θ̃ − θ̂ (5.14)
∂ θ̂
where ũ = u−αr (t, θ̂) and ϕ̃i = ϕi (x1 , ..., xi )−ϕri (xr1 , ..., xri ), i = 1, ..., n. Now the
inverse optimal tracking problem has been transformed into an inverse optimal regulation
problem. Define the error states as

zi = ei − α̃i−1 , i = 1, ..., n (5.15)

where α̃i−1 are the virtual controls to be designed by applying the tuning functions adap-
tive backstepping method of Theorem 4.1. After that, the real control ũ is chosen in a
form that is inverse optimal.
Step i (i = 1, ..., n − 1):
α̃_i(t, ē_i, θ̂) = −c_i z_i − z_{i−1} + ∂α̃_{i−1}/∂t + Σ_{k=1}^{i−1} (∂α̃_{i−1}/∂e_k) e_{k+1}
        − ω̃_iᵀθ̂ − Σ_{k=1}^{i−1} (σ_{ki} + σ_{ik}) z_k − σ_{ii} z_i,   c_i > 0    (5.16)
where for notational convenience
ω̃_i(t, ē_i, θ̂) = ϕ̃_i − Σ_{k=1}^{i−1} (∂α̃_{i−1}/∂e_k) ϕ̃_k    (5.17)
σ_{ik} = −( ∂α̃_{i−1}/∂θ̂ + ∂ρ_i/∂θ̂ − Σ_{j=2}^{i−1} (∂α̃_{i−1}/∂e_j)(∂ρ_j/∂θ̂) ) Γ ω̃_k.    (5.18)

Step n: Consider the control Lyapunov function
V_n = (1/2) Σ_{k=1}^{n} z_k² + (1/2) θ̃ᵀΓ⁻¹θ̃.    (5.19)

Taking the derivative of V_n and substituting (5.16) gives
V̇_n = −Σ_{k=1}^{n−1} c_k z_k² + z_n [ z_{n−1} + ũ + Σ_{k=1}^{n−1} (σ_{kn} + σ_{nk}) z_k + σ_{nn} z_n
        − ∂α̃_{n−1}/∂t − Σ_{k=1}^{n−1} (∂α̃_{n−1}/∂e_k) e_{k+1} + ω̃_nᵀθ̂ ]    (5.20)
        + θ̃ᵀ( τ_n − Γ⁻¹ θ̂̇ ),
where
τ_i = τ_{i−1} + ω̃_i z_i,   i = 1, ..., n.    (5.21)


To eliminate the parameter estimation error θ̃ = θ − θ̂ from V̇_n, the update law
θ̂̇ = Γτ_n    (5.22)

is selected. Now the actual control u can be defined. Following the standard adaptive
backstepping procedure of Theorem 4.1 it is possible to define a control ũ which cancels
all indefinite terms and renders V̇_n negative semi-definite. However, this controller is not
designed in a way that it can be guaranteed to be optimal. By Theorem 5.1 a control law
of the form
u = −r⁻¹(z, θ̂) (∂V/∂e) g,   r(t, e, θ̂) > 0  ∀ t, e, θ̂    (5.23)
is suggested. For this control problem (5.23) simplifies to
u = −r⁻¹(z, θ̂) z_n,    (5.24)
i.e. z_n has to be a factor of the control. In order to get rid of the indefinite terms without
canceling them, nonlinear damping terms [118] are introduced. Since the expressions
−∂α̃_{n−1}/∂t − Σ_{k=1}^{n−1} (∂α̃_{n−1}/∂e_k) e_{k+1} and ω̃_nᵀθ̂ vanish at z = 0, there exist smooth functions φ_k, k = 1, ..., n, such that
−∂α̃_{n−1}/∂t − Σ_{k=1}^{n−1} (∂α̃_{n−1}/∂e_k) e_{k+1} + ω̃_nᵀθ̂ = Σ_{k=1}^{n} φ_k z_k.    (5.25)

Thus (5.20) becomes
V̇_n = −Σ_{k=1}^{n−1} c_k z_k² + z_n ũ + Σ_{k=1}^{n} z_k Φ_k z_n,    (5.26)
where
Φ_k = φ_k + σ_{kn} + σ_{nk},   k = 1, ..., n − 2
Φ_{n−1} = 1 + φ_{n−1} + σ_{(n−1)n} + σ_{n(n−1)}    (5.27)
Φ_n = φ_n + σ_{nn}.
A control law of the form (5.24) with
r(t, e, θ̂) = ( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) )⁻¹ > 0,   c_n > 0,  ∀ t, e, θ̂    (5.28)
results in
V̇_n = −(1/2) Σ_{k=1}^{n} c_k z_k² − Σ_{k=1}^{n} (c_k/2) ( z_k − (Φ_k/c_k) z_n )².    (5.29)
Note that incorporating command filters in this inverse optimal technique is not possible,
since the filtered derivatives of the virtual controls cannot be damped out in the same
way. By Theorem 3.7, it can be concluded that the tracking control problem is solved,
since V̇n is negative semi-definite. The properties of the controller are summarized in the
following theorem.

Theorem 5.3 (Inverse optimal adaptive backstepping). The dynamic feedback control
law

u* = −β r⁻¹(t, e, θ̂) z_n,   β ≥ 2
θ̂̇ = Γτ_n = Γ Σ_{j=1}^{n} ω̃_j z_j,    (5.30)
not only stabilizes the system (5.14) with respect to the control Lyapunov function
(5.19), but is also optimal with respect to the cost functional
J = β lim_{t→∞} |θ − θ̂(t)|²_{Γ⁻¹} + ∫₀^∞ [ l(t, e, θ̂) + r(t, e, θ̂) ũ² ] dt,   ∀ θ ∈ Rᵖ    (5.31)
where
l(z, θ̂) = −2β V̇_n + β(β − 2) r⁻¹ z_n²    (5.32)
with a value function
J* = β|θ − θ̂|²_{Γ⁻¹} + β|z|².    (5.33)

Proof: Since β ≥ 2, r(t, e, θ̂) > 0, and V̇n negative definite, it is clear that l(t, e, θ̂) is
positive definite. Therefore J defined in (5.31) is a ‘meaningful’ cost functional which
puts an integral penalty on both z and ũ (with complicated nonlinear scaling in terms
of the parameter estimate), as well as on the terminal value of |θ̃|. Note that an integral
penalty on θ̃ is not included, since adaptive backstepping controllers in general do not
guarantee parameter convergence to a true value.
Substituting l(t, e, θ̂) and

v = ũ − u∗ = ũ + βr−1 (t, e, θ̂)zn (5.34)


into J together with (5.26) gives
J = β lim_{t→∞} |θ̃|²_{Γ⁻¹} + ∫₀^∞ [ −2β( −Σ_{k=1}^{n−1} c_k z_k² − r⁻¹ z_n² + Σ_{k=1}^{n} z_k Φ_k z_n ) + r v² − 2β v z_n + β² r⁻¹ z_n² ] dt
  = β lim_{t→∞} |θ̃|²_{Γ⁻¹} − 2β ∫₀^∞ [ −Σ_{k=1}^{n−1} c_k z_k² + z_n ũ + Σ_{k=1}^{n} z_k Φ_k z_n ] dt + ∫₀^∞ r v² dt
  = β lim_{t→∞} |θ̃|²_{Γ⁻¹} − 2β ∫₀^∞ dV_n + ∫₀^∞ r v² dt    (5.35)
  = 2β V_n(z(0), θ̂(0)) + β|θ̃(0)|²_{Γ⁻¹} − 2β lim_{t→∞} V_s(z(t)) + ∫₀^∞ r v² dt,
where V_s = (1/2) Σ_{k=1}^{n} z_k². It was already shown that the control law ũ together with the
update law for θ̂ stabilizes the closed-loop system, which means lim_{t→∞} z(t) = 0 and
thus lim_{t→∞} V_s(z(t)) = 0. Therefore the minimum of (5.35) is reached only if v = 0
and thus the control ũ = u* minimizes the cost functional (5.31).

5.3.2 Transient Performance Analysis


An L₂ transient performance bound on the error state z and the control ũ can be found for
the inverse optimal design. By Theorem 5.3 the control law (5.24) for β = 2 is optimal
with respect to the cost functional
J = 2 lim_{t→∞} |θ̃|²_{Γ⁻¹} + 2 ∫₀^∞ [ Σ_{k=1}^{n} c_k z_k² + Σ_{k=1}^{n} c_k ( z_k − (Φ_k/c_k) z_n )² + ũ² / ( 2( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) ) ) ] dt    (5.36)

with a value function
J* = 2|θ̃|²_{Γ⁻¹} + 2|z|².    (5.37)
Therefore
2 ∫₀^∞ [ Σ_{k=1}^{n} c_k z_k² + ũ² / ( 2( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) ) ) ] dt
  ≤ 2 ∫₀^∞ [ Σ_{k=1}^{n} c_k z_k² + Σ_{k=1}^{n} c_k ( z_k − (Φ_k/c_k) z_n )² + ũ² / ( 2( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) ) ) ] dt
  ≤ J* = 2|θ̃(0)|²_{Γ⁻¹} + 2|z(0)|²    (5.38)


which yields the inequality
∫₀^∞ [ Σ_{k=1}^{n} c_k z_k² + ũ² / ( 2( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) ) ) ] dt ≤ |θ̃(0)|²_{Γ⁻¹} + |z(0)|².    (5.39)
The dependency on z(0) can be eliminated by employing trajectory initialization: z(0) = 0.
This results in the L₂ performance bound
∫₀^∞ [ Σ_{k=1}^{n} c_k z_k² + ũ² / ( 2( c_n + Σ_{k=1}^{n} Φ_k²/(2c_k) ) ) ] dt ≤ |θ̃(0)|²_{Γ⁻¹}.    (5.40)

5.3.3 Example: Inverse Optimal Adaptive Longitudinal Missile Control


The nonlinear adaptive controller developed in this chapter is inverse optimal with respect
to a cost functional that penalizes the tracking errors and the control effort. However,
nonlinear damping terms are used to achieve this inverse optimality. In [164] the numer-
ical sensitivity of the tuning functions adaptive backstepping method, with added non-
linear damping terms to robustify the controller against unknown external disturbances,
is studied. Increasing the nonlinear damping gains improves tracking performance, but
leads to undesirable high frequency components in the control signal. This illustrates
that using nonlinear damping in the feedback controller must be done with care, since it
can easily result in high gain feedback.
The effect of the nonlinear damping terms used in the inverse optimal design will become
more clear in the example outlined in this section. The inverse optimal nonlinear adaptive
control approach is applied to the longitudinal missile control example of Sections 3.3.3
and 4.2.4. The generalized dynamics of the missile (3.38), (3.39) are repeated here for
convenience sake:

ẋ1 = x2 + f1 (x1 ) + g1 u (5.41)


ẋ2 = f2 (x1 ) + g2 u, (5.42)

where f1 , f2 , g1 and g2 are unknown nonlinear functions containing the aerodynamic sta-
bility and control derivatives. For the control design the g₁u-term has to be neglected
so that the system is of a lower triangular form. The control objective is to track the
reference signal yr (t) with the state x1 . According to the inverse optimal adaptive back-
stepping procedure, the functions ρ1 (t), ρ2 (t, θ) and αr (t, θ) have to be selected such
that

ρ̇1 = ρ2 + ϕTf1 (ρ1 )θf1


ρ̇2 = αr (t, θf1 , θf2 ) + ϕTf2 (ρ1 )θf2 (5.43)
yr (t) = ρ1 (t).
Hence,
ρ_1 = y_r
ρ_2 = ẏ_r − ϕᵀ_f1(ρ_1)θ_f1    (5.44)
α_r = ÿ_r − ( ∂[ϕᵀ_f1(ρ_1)θ_f1]/∂ρ_1 ) ẏ_r − ϕᵀ_f2(ρ_1)θ_f2.
Since ∂y_r/∂θ_* = 0, it follows that ∂ρ_1/∂θ_* = 0 for all t ≥ 0 and all θ_*, so θ_* can be replaced
by its estimate θ̂_*. Consider the signal x_r(t) = ρ(t, θ̂_f1(t), θ̂_f2(t)), which satisfies
ẋ_r1 = x_r2 + ϕᵀ_rf1(x_r1) θ̂_f1
ẋ_r2 = α_r + ϕᵀ_rf2(x_r1) θ̂_f2 + ( ∂ρ_2/∂θ̂_f1 ) θ̂̇_f1    (5.45)
y_r(t) = x_r1(t).

Defining the tracking error e = x − x_r, the system can be rewritten as
ė_1 = e_2 + ϕ̃ᵀ_f1 θ_f1 + ϕᵀ_rf1 θ̃_f1    (5.46)
ė_2 = ũ + ϕ̃ᵀ_f2 θ_f2 + ϕᵀ_rf2 θ̃_f2 − ( ∂ρ_2/∂θ̂_f1 ) θ̂̇_f1    (5.47)
where ũ = ϕᵀ_g2 θ_g2 u − α_r and ϕ̃_* = ϕ_* − ϕ_r*. Now the tracking problem has been
transformed into a regulation problem. The error states are defined as

z1 = e1
z2 = e2 − α̃1 , (5.48)

where the standard adaptive backstepping approach is used to find the virtual control α̃1
as

α̃1 (e1 , θ̂f1 ) = −c1 z1 − ϕ̃Tf1 θ̂f1 , (5.49)

where c_1 > 0, and the update laws as
θ̂̇_f1 = Γ_f1 ( ϕ̃_f1 z_1 − (∂α̃_1/∂e_1) ϕ̃_f1 z_2 )    (5.50)
θ̂̇_f2 = Γ_f2 ϕ̃_f2 z_2    (5.51)
θ̂̇_g2 = P( Γ_g2 ϕ_g2 u z_2 ).    (5.52)
Consider the CLF
V_2 = (1/2) [ z_1² + z_2² + θ̃ᵀ_f1 Γ⁻¹_f1 θ̃_f1 + θ̃ᵀ_f2 Γ⁻¹_f2 θ̃_f2 + θ̃ᵀ_g2 Γ⁻¹_g2 θ̃_g2 ].    (5.53)
Taking the derivative of V_2 along the solutions of (5.46)-(5.47) and substituting (5.49)-(5.52) results in
V̇_2 = −c_1 z_1² + z_2 [ z_1 + ϕ̃ᵀ_f2 θ̂_f2 + ũ − (∂α̃_1/∂e_1) ϕ̃ᵀ_f1 θ̂_f1
        − ( ∂α̃_1/∂θ̂_f1 + ∂ρ_2/∂θ̂_f1 ) Γ ( ϕ̃_f1 z_1 − (∂α̃_1/∂e_1) ϕ̃_f1 z_2 ) ].    (5.54)
Instead of canceling all indefinite terms, scaling nonlinear damping terms are introduced as
Φ_1 = 1 − ( ∂α̃_1/∂θ̂_f1 + ∂ρ_2/∂θ̂_f1 ) Γ ϕ̃_f1 + φ_1    (5.55)
Φ_2 = ( ∂α̃_1/∂θ̂_f1 + ∂ρ_2/∂θ̂_f1 ) Γ (∂α̃_1/∂e_1) ϕ̃_f2 + φ_2,    (5.56)
where
−(∂α̃_1/∂e_1) e_2 − (∂α̃_1/∂e_1) ϕ̃ᵀ_f1 θ̂_f1 = φ_1 z_1 + φ_2 z_2.    (5.57)
This renders (5.54) equal to
V̇_2 = −c_1 z_1² + z_2 ũ + z_1 Φ_1 z_2 + z_2 Φ_2 z_2.    (5.58)
Finally, substituting the control law
ũ = −( c_2 + Φ_1²/(2c_1) + Φ_2²/(2c_2) ) z_2,   c_2 > 0,    (5.59)
gives
V̇_2 = −(1/2) c_1 z_1² − (1/2) c_2 z_2² − (c_1/2)( z_1 − (Φ_1/c_1) z_2 )² − (c_2/2)( z_2 − (Φ_2/c_2) z_2 )².    (5.60)

By Theorem 5.3 the inverse optimal tracking control problem is solved. An integral term
with gain k1 ≥ 0 can be added to the outer loop design to compensate for the neglected
control effectiveness term as was done with the tuning functions autopilot design of Sec-
tion 4.2.4.
The resulting inverse optimal closed-loop system is implemented in the MATLAB/Simulink
environment to evaluate the performance and the numerical sensitivity. The
gains are selected as c1 = 18, k1 = c2 = 10, Γf1 = Γf2 = 10I, Γg2 = 0.01. The simulation
is again performed with a third order fixed step solver with a sample time of 0.01 s.
The control signal is fed through a low pass filter to remove high frequency components
that would otherwise crash the solver. The controller is very sensitive to variations in the control gain c1.
The response of the system for a simulation at Mach 2.2 with onboard model data for
Mach 2.0 can be found in Figure 5.1. Tracking performance is excellent; there is not
even a poor transient at the start of the first doublet, as was the case with the tuning functions
design of Section 4.2.4. However, some high frequency components are visible in
the control signal at 5, 10, 15, 20 and 25 seconds, despite the use of the low pass filter.
This aggressive behavior is further illustrated in Figure 5.2, where the parameter estimation
errors are plotted. There is hardly any adaptation, since the controller already forces
the tracking errors rapidly to zero. In fact, turning adaptation off does not influence the
tracking performance.
The control law of the inverse optimal design contains the large nonlinear damping terms
Φ₁²/(2c₁) and Φ₂²/(2c₂). Especially the first term can grow very large and vary in size rapidly as it
contains the derivatives of the virtual control law, as is illustrated in Figure 5.3. The control
law is numerically very sensitive due to the fast nonlinear growth resulting from these
terms. It is not possible to reduce Φ₁²/(2c₁) by tuning c₁, since Φ₁ is also dependent on c₁. For other control
applications where the derivatives of the intermediate control law are much smaller,
such as attitude control problems, the design approach may be beneficial, because the
nonlinear growth will be more restricted.
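To make the role of the damping terms concrete, the fragment below is a small MATLAB-style sketch (illustrative only, not the thesis implementation) of how the final control signal of the missile example is formed; it assumes that z2, Φ1 and Φ2 are already available from the backstepping recursion (5.48)-(5.57).

% Illustrative sketch (not the thesis code): forming the inverse optimal
% control signal (5.59) from the damping terms Phi1, Phi2 and the error z2.
function [u_tilde, r_inv] = inverse_optimal_control(z2, Phi1, Phi2, c1, c2)
    r_inv   = c2 + Phi1^2/(2*c1) + Phi2^2/(2*c2);   % equals 1/r of (5.28)
    u_tilde = -r_inv*z2;                            % control law (5.59)
end

Because Φ1 contains the partial derivatives of the virtual control law, r_inv can grow rapidly during aggressive maneuvers, which is exactly the numerical sensitivity visible in Figure 5.3.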

5.4 Conclusions
In this chapter inverse optimal control theory is used to modify the last step of the tuning
functions adaptive backstepping approach of Chapter 4. The goal is to introduce a cost
functional to simplify the closed-loop performance tuning of the adaptive controller and
to exploit the inherent robustness properties of optimal controllers. However, nonlinear
damping terms were utilized to achieve the inverse optimality, resulting in high gain
feedback terms in the design. The numerical sensitivity due to the high gain feedback
terms makes the inverse optimal approach less suitable than the adaptive designs of the
previous chapter for the complex flight control design problems considered in this thesis.
Furthermore, the complexity of the cost functional associated with the inverse optimal
design does not make performance tuning any easier.
[Figure 5.1 shows time histories over 30 s of angle of attack (deg), pitch rate (deg/s) and control deflection (deg).]
Figure 5.1: Numerical simulations at Mach 2.2 of the longitudinal missile model using an inverse
optimal adaptive backstepping control law with uncertainty in the onboard model.

[Figure 5.2 shows time histories over 30 s of the parameter estimation error components of θ̃_f1, θ̃_f2 and θ̃_g2.]
Figure 5.2: The parameter estimation errors for the inverse optimal adaptive backstepping design.
The aggressive control law prevents the update laws from any serious adaptation.
[Figure 5.3 shows time histories over 30 s of the damping term Φ₁²/(2c₁), the error state z₂ and the gain r⁻¹.]
Figure 5.3: The size and variations of the nonlinear damping terms and the error state z2 during
the missile simulation.
Chapter 6
Comparison of Integrated and
Modular Adaptive Flight Control

The constrained adaptive backstepping approach of Chapter 4 is applied to the design


of a flight control system for a simplified, nonlinear over-actuated fighter aircraft model
valid at two flight conditions. It is demonstrated that the extension of the adaptive con-
trol method to multi-input multi-output systems is straightforward. A comparison with
a more traditional modular adaptive controller that employs a least squares identifier is
made to illustrate the advantages and disadvantages of an integrated adaptive design.
Furthermore, the interactions between several control allocation algorithms and the on-
line model identification for simulations with actuator failures are studied. The control
design for this simplified aircraft model will provide valuable insights before attempt-
ing the more complex flight control design for the high-fidelity F-16 dynamic model of
Chapter 2.

6.1 Introduction
In this chapter a nonlinear adaptive backstepping based reconfigurable flight control sys-
tem is designed for a simplified aircraft model, before attempting the more complex
F-16 model of Chapter 2. As a study case the control design problem for a nonlinear
over-actuated fighter aircraft model is selected. The key simplifications made here are
constant velocity and no lift or drag effects of the control surfaces. Furthermore, aerody-
namic data is only available for two flight conditions.
Since the aircraft model considered in this chapter is over-actuated, some form of control
allocation has to be applied to distribute the desired control moments over the actuators.
However, a characteristic of the adaptive backstepping designs as discussed in Chapter 4
is that the Lyapunov-based identifiers of the method only yield pseudo-estimates of the
unknown parameters, since the estimation is performed to satisfy a total system stability
criterion rather than to optimize the error in estimation. As a result the parameter esti-
mates are not guaranteed to converge to their true values over time and it is not clear what
effect this will have on the control allocation. Therefore, as an interesting side study, the
combination of constrained adaptive backstepping with two common types of control al-
location methods with different weightings will also be examined.
Furthermore, the integrated adaptive backstepping flight controller will be compared with
a more traditional modular adaptive design which makes use of a separate least-squares
identifier. This type of modular adaptive controller is referred to as ‘estimation-based’
designs in literature. An estimation-based adaptive control design does not suffer from
the restriction of a Lyapunov update law, since it achieves modularity of controller and
identifier: any stabilizing controller can be combined with any identifier. Especially a
least-squares based identifier is of interest, since this type of identifier possesses excel-
lent convergence properties and guaranteed parameter convergence to constant values.
In [131, 132, 188] an adaptive NDI design with recursive least-squares identifier is used
for the design of a reconfigurable flight control system for a fly-by-wire Boeing 747.
However, theoretical stability and convergence results for the closed-loop system are
not provided, since the least-squares identifier, like all traditional identifiers, is not fast
enough to capture the potential faster-than-linear growth of nonlinear systems. Hence,
the certainty equivalence principle does not hold and an alternative solution will have to
be found.
In [119, 120] a robust backstepping controller is introduced which achieves input-to-state
stability (ISS) with respect to the parameter estimation errors and the derivative of the
parameter estimate. Nonlinear state filters are used to compensate for the time varying
nature of the parameter estimation errors so that standard gradient or least-squares iden-
tifiers can be applied. The resulting identifier module guarantees boundedness of the
parameter estimation errors. The modular nonlinear adaptive flight controller will be de-
signed using this approach in combination with the different control allocation methods
so that a comparison can be made.
This chapter starts with a discussion on the problem of applying classical estimation-
based adaptive control designs to uncertain nonlinear systems. After that, the theory
behind modular adaptive backstepping with a least-squares identifier is explained. In the
second part of the chapter the aircraft model is introduced and the integrated and modular
adaptive backstepping flight control designs are constructed. The concept of control allo-
cation is explained and three common types of algorithms are introduced in both design
frameworks. Finally, the aircraft model with the adaptive flight controllers is evaluated
in numerical simulations where several types of actuator lockup failure scenarios are
performed.

6.2 Modular Adaptive Backstepping


One of the goals in this chapter is to compare a reconfigurable flight controller based on
the constrained adaptive backstepping technique with one based on a more traditional
modular adaptive design where the controller and identifier are separate modules. How-
ever, the latter adaptive design method fails to achieve any global stability results for
systems whose nonlinearities are not linearly bounded. In this section a robust backstep-
ping design with least-squares identifier is developed with strong provable stability and
convergence properties.

6.2.1 Problem Statement


Before the modular adaptive backstepping approach is derived, the problem of applying
traditional estimation-based adaptive control designs to nonlinear systems is illustrated
in the following simple example.

Example 6.1
Consider the scalar nonlinear system

ẋ = u + θx2 , (6.1)

where θ is an unknown constant parameter. A stabilizing certainty equivalence con-


troller is given by

u = −x − θ̂x2 , (6.2)

where θ̂ is the parameter estimate of θ. The parameter estimation error is defined as


θ̃ = θ − θ̂. Selecting the Lyapunov update law

θ̂̇ = x³    (6.3)
renders the derivative of the control Lyapunov function V = (1/2)x² + (1/2)θ̃² negative
semi-definite, i.e.

V̇ = −x2 . (6.4)

An alternative solution to this adaptive control problem is to employ a standard identi-


fier to provide the estimate for the certainty equivalence controller (6.2). However, in
general, the signal ẋ is not available for measurement and thus (6.1) cannot be solved
for unknown θ. This problem is solved by filtering both sides of (6.1) by 1/(s+1):
[s/(s+1)] x = [1/(s+1)] u + θ [1/(s+1)] x².    (6.5)
Introducing the filters

ẋ_f = −x_f + x²    (6.6)
u̇_f = −u_f + u + x = −u_f − θ̂x²    (6.7)
makes it possible to rewrite (6.5) as

x(t) = θ(t)xf (t) + uf (t). (6.8)

Since θ is unknown its estimate θ̂ has to be used. The corresponding predicted value
of x is

x̂(t) = θ̂(t)xf (t) + uf (t), (6.9)

and the prediction error e is defined as

e = x − x̂ = θ̃xf . (6.10)

To achieve the minimum of e², a parameter update law for θ̂ has to be defined. A
standard normalized gradient update law is selected:
θ̂̇ = x_f e / (1 + x_fᵀ x_f).    (6.11)

Substituting (6.10) and θ̂̇ = −θ̃̇ results in
θ̃̇ = − θ̃ x_f² / (1 + x_fᵀ x_f).    (6.12)

Hence, the parameter estimation error converges to zero. However, since this is a lin-
ear differential equation the error cannot converge faster than exponentially. Consider
the most favorable case where

θ̃ = e−t θ̃(0). (6.13)

The closed-loop system with controller (6.2) is

ẋ = −x + θ̃x2 . (6.14)

Substitution of (6.13) into (6.14) yields the equation

ẋ = −x + x2 e−t θ̃(0), (6.15)

whose explicit solution is
x(t) = 2x(0) / [ x(0)θ̃(0) e^{−t} + ( 2 − x(0)θ̃(0) ) e^{t} ].    (6.16)

If x(0)θ̃(0) < 2 then x(t) will converge to zero as t → ∞. However, if x(0)θ̃(0) > 2
the solution escapes to infinity in finite time, that is

x(t) → ∞  as  t → (1/2) ln [ x(0)θ̃(0) / ( x(0)θ̃(0) − 2 ) ].    (6.17)
This is illustrated in Figure 6.1, where the response of the system (6.1) with both the
Lyapunov- and the estimation-based adaptive control design is plotted. The identifier
of the estimation-based design is not fast enough to cope with the potential faster-than-
linear growth of nonlinear systems, and the state escapes to infinity, resulting in a simulation
crash.

[Figure 6.1 shows the state x and the control input u versus time (s) for the estimation-based and Lyapunov-based designs.]
Figure 6.1: State x and control effort u of the Lyapunov- and estimation-based adaptive controllers
for initial values x(0) = 2 and θ̂(0) = 0. The real value of θ is 2. The normalized gradient based
identifier of the estimation-based controller is not fast enough to cope with the nonlinear growth.

The above simple example illustrates the notion that to achieve stability either a faster
identifier is needed, such as the adaptive backstepping designs of Chapter 4, or a robust
controller that can deal with disturbances such as large transient parameter estimation
errors resulting from a slower identifier.
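The finite escape time predicted by (6.17) is easy to reproduce numerically. The fragment below is a minimal MATLAB sketch of Example 6.1 (illustrative only, not the thesis code), using simple Euler integration for both the Lyapunov-based design (6.2)-(6.3) and the estimation-based design with the filters (6.6)-(6.7) and the update law (6.11); with x(0) = 2, θ̂(0) = 0 and θ = 2 the product x(0)θ̃(0) = 4 > 2, so the estimation-based loop escapes in finite time.

% Minimal sketch of Example 6.1 (illustrative, not the thesis code).
theta = 2;  dt = 1e-3;  N = 5000;            % true parameter, step size, 5 s
xL = 2;  thL = 0;                            % Lyapunov-based design
xE = 2;  thE = 0;  xf = 0;  uf = 0;          % estimation-based design
for k = 1:N
    % Lyapunov-based: control (6.2) with update law (6.3)
    uL   = -xL - thL*xL^2;
    dxL  = uL + theta*xL^2;   dthL = xL^3;
    xL   = xL + dt*dxL;       thL  = thL + dt*dthL;
    % Estimation-based: control (6.2), filters (6.6)-(6.7), update (6.11)
    uE   = -xE - thE*xE^2;
    e    = xE - (thE*xf + uf);               % prediction error (6.10)
    dxE  = uE + theta*xE^2;
    dxf  = -xf + xE^2;                       % filter (6.6)
    duf  = -uf + uE + xE;                    % filter (6.7)
    dthE = xf*e/(1 + xf^2);                  % normalized gradient update (6.11)
    xE = xE + dt*dxE;  xf = xf + dt*dxf;  uf = uf + dt*duf;  thE = thE + dt*dthE;
    if abs(xE) > 1e6, break, end             % finite escape of the state
end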

6.2.2 Input-to-state Stable Backstepping


In this section a robust backstepping controller which is input-to-state stable (ISS) with
respect to the parameter estimation error is constructed. In other words, the states of the
closed-loop system remain bounded when the parameter estimation error is bounded, and
when the parameter estimation error converges to zero the closed-loop system states will
also converge to zero. A formal definition of input-to-state stability is given in Appendix
B.2.
The ISS backstepping design procedure is largely identical to the static feedback design
part of the command filtered adaptive backstepping approach as given by Theorem 4.2.
The only difference is that the virtual and real control laws are augmented with additional
nonlinear damping terms, i.e.
α_i = ( 1/(ϕᵀ_gi θ̂_gi) ) [ −c_i z_i − s_i z̄_i − ϕᵀ_g(i−1) θ̂_g(i−1) z̄_{i−1} − ϕᵀ_fi θ̂_fi + ẋ_{i,r} ],   i = 1, ..., n
u⁰ = α_n,    (6.18)
where s_i, i = 1, ..., n, are nonlinear damping terms defined as
s_i = κ_1i ϕᵀ_fi ϕ_fi + κ_2i ϕᵀ_gi ϕ_gi x_{i+1}²,   i = 1, ..., n,    (6.19)
with κ_* > 0 and u ≜ x_{n+1} for ease of notation. Note that when compared to
the complex nonlinear damping terms used in the inverse optimal design of Chapter
5, the size of the above damping terms is much easier to control. Consider again the
general system (4.66) and the control Lyapunov function V = (1/2) Σ_{i=1}^{n} z̄_i². Applying the
approach of Theorem 4.2, excluding the update laws but including the nonlinear damping
terms s_i defined above, reduces the derivative of V to
V̇ = Σ_{i=1}^{n} [ −(c_i + s_i) z̄_i² + ϕᵀ_fi θ̃_fi z̄_i + ϕᵀ_gi θ̃_gi x_{i+1} z̄_i ]
  = Σ_{i=1}^{n} [ −c_i z̄_i² − κ_1i ( ϕ_fi z̄_i − θ̃_fi/(2κ_1i) )ᵀ( ϕ_fi z̄_i − θ̃_fi/(2κ_1i) ) + (1/(4κ_1i)) θ̃ᵀ_fi θ̃_fi
      − κ_2i ( ϕ_gi x_{i+1} z̄_i − θ̃_gi/(2κ_2i) )ᵀ( ϕ_gi x_{i+1} z̄_i − θ̃_gi/(2κ_2i) ) + (1/(4κ_2i)) θ̃ᵀ_gi θ̃_gi ]
  ≤ Σ_{i=1}^{n} [ −c_i z̄_i² + (1/(4κ_1i)) θ̃ᵀ_fi θ̃_fi + (1/(4κ_2i)) θ̃ᵀ_gi θ̃_gi ].

If the parameter estimation errors θ̃_* are bounded, V̇ is negative outside a compact set,
which demonstrates that the modified tracking errors z̄i are decreasing outside the com-
pact set and are hence bounded. The size of the bounds is determined by the damping
gains κ∗ . Furthermore, if the parameter estimation errors are converging to zero, then the
modified tracking errors will also converge to zero. From an input-output point of view,
the nonlinear damping terms render the closed-loop system input-to-state stable with re-
spect to the parameter estimation errors. The values of κ∗ should be selected very small,
since nonlinear damping terms may result in high gain control for large disturbance sig-
nals if not tuned carefully.
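For the scalar system of Example 6.1 the modification is particularly transparent. The fragment below is a sketch (not the thesis code) of the ISS control law (6.18) with the damping term (6.19), under the assumptions n = 1, known input gain ϕᵀ_g θ̂_g = 1, regressor ϕ_f = x² and pure regulation, so that z = z̄ = x and ẋ_{1,r} = 0.

% Sketch of the ISS backstepping control law (6.18)-(6.19) for the scalar
% system xdot = u + theta*x^2 (assumptions: n = 1, g = 1 known, regulation).
function u = iss_control(x, theta_hat, c, kappa)
    phi_f = x^2;                              % regressor of the uncertain term
    s     = kappa*(phi_f'*phi_f);             % damping term (6.19): kappa*x^4
    u     = -c*x - s*x - phi_f'*theta_hat;    % control law (6.18)
end

Combined with any identifier that keeps θ̃ and θ̂̇ bounded, this control law keeps x bounded, with the size of the bound set by κ.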

6.2.3 Least-Squares Identifier


In this section a least-squares identifier that guarantees boundedness of the parameter
estimation error and its derivative is developed for the ISS backstepping design of the
previous section. The idea of [119] is to use nonlinear regressor filtering to convert the
dynamic parametric system into a static form in such a way that a standard least-squares
estimation algorithm can be used.
The system (4.66) can be rewritten in a general parametric form as

ẋ = h(x, u) + F (x, u)T θ, (6.20)

where h(x, u) represents the known system dynamics, F (x, u) the known regressor ma-
trix, θ ∈ Rp the unknown parameter vector and x = (x1 , ..., xn )T the system states.
Consider the x-swapping filter from [120], which is defined as

Ω̇₀ = ( A₀ − ρF(x, u)ᵀF(x, u)P )( Ω₀ + x ) − h(x, u),   Ω₀ ∈ Rⁿ    (6.21)
Ω̇ᵀ = ( A₀ − ρF(x, u)ᵀF(x, u)P ) Ωᵀ + F(x, u)ᵀ,   Ω ∈ R^{p×n},    (6.22)
Ω̇ = A0 − ρF (x, u) F (x, u)P Ω + F (x, u) , Ω ∈ R , (6.22)

where ρ > 0 and A0 is an arbitrary constant matrix such that

P A0 + AT0 P = −I, P = P T > 0. (6.23)

The estimation error vector is defined as

ǫ = x + Ω0 − ΩT θ̂, ǫ ∈ Rn , (6.24)

along with

ǫ̃ = x + Ω0 − ΩT θ, ǫ̃ ∈ Rn . (6.25)

Then ǫ̃ is governed by

ǫ̃̇ = ( A₀ − ρF(x, u)ᵀF(x, u)P ) ǫ̃,    (6.26)
which is exponentially decaying. The least-squares update law for θ̂ and the covariance
update are defined as
θ̂̇ = Γ Ω ǫ / ( 1 + ν trace(ΩᵀΓΩ) )    (6.27)
Γ̇ = − Γ Ω Ωᵀ Γ / ( 1 + ν trace(ΩᵀΓΩ) ),   Γ(0) = Γ(0)ᵀ > 0,    (6.28)
where ν ≥ 0 is the normalization coefficient. The properties of the least-squares identi-
fier are given by the following Lemma from [118].
Lemma 6.1. Let the maximal interval of existence of solutions of (6.20), (6.21)-(6.22)
with (6.27)-(6.28) be [0, tf ). Then for ν ≥ 0, the following identifier properties hold:

1. θ̃ ∈ L∞ [0, tf )
2. ǫ ∈ L2 [0, tf ) ∩ L∞ [0, tf )
3. θ̂̇ ∈ L₂[0, t_f) ∩ L∞[0, t_f)

Proof: Along the solutions of (6.22) the following holds:
d/dt ( Ω P Ωᵀ ) = Ω ( P A₀ + A₀ᵀ P ) Ωᵀ − 2ρ Ω P Fᵀ F P Ωᵀ + Ω P Fᵀ + F P Ωᵀ
  = −Ω Ωᵀ − 2ρ ( F P Ωᵀ − (1/(2ρ)) I_p )ᵀ ( F P Ωᵀ − (1/(2ρ)) I_p ) + (1/(2ρ)) I_p.    (6.29)
Taking the trace and using the Frobenius norm results in
d/dt trace( Ω P Ωᵀ ) = −|Ω|²_F − 2ρ | F P Ωᵀ − (1/(2ρ)) I_p |²_F + (1/(2ρ)) trace{I_p}
  ≤ −|Ω|²_F + p/(2ρ).    (6.30)
This proves that Ω ∈ L∞[0, t_f). From (6.26) it follows that
(d/dt) |ǫ̃|²_P ≤ −|ǫ̃|²,    (6.31)
which implies that ǫ̃ ∈ L₂[0, t_f) ∩ L∞[0, t_f). Consider the function
U = (1/2) |θ̃|²_{Γ(t)⁻¹} + |ǫ̃|²_P    (6.32)
which is positive definite because Γ(t)⁻¹ is positive definite for each t. The derivative of
U after some manipulations satisfies
U̇ ≤ − |ǫ|² / ( 1 + ν trace{ΩᵀΓΩ} ).
The fact that U̇ is non-positive proves that θ̃ ∈ L∞[0, t_f). Integration of the above inequality yields
ǫ / √( 1 + ν trace{ΩᵀΓΩ} ) ∈ L₂[0, t_f).
Since Ω is bounded, then ǫ ∈ L₂[0, t_f). Due to ǫ = Ωᵀθ̃ + ǫ̃ and the boundedness
of Ω it follows that ǫ ∈ L∞[0, t_f), which in turn proves that θ̂̇ = ΓΩǫ/(1 + ν trace(ΩᵀΓΩ)) ∈
L∞[0, t_f). Finally, the square-integrability of ǫ and the boundedness of Ω prove that
θ̂̇ = ΓΩǫ/(1 + ν trace(ΩᵀΓΩ)) ∈ L₂[0, t_f).

The robust backstepping controller of Section 6.2.2 allows the use of any identifier which
can independently guarantee that the parameter estimation errors and their derivatives are
bounded. The least-squares identifier with x-swapping filter as introduced in this section
has these properties. This concludes the discussion on the theory behind the modular
adaptive backstepping approach, in which the controller and identifier are designed separately.
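As an illustration of how the filters and update laws fit together, the following sketch (illustrative only, not the thesis implementation) evaluates the right-hand sides of (6.21), (6.22), (6.27) and (6.28) for the scalar system of Example 6.1, where h(x, u) = u, F(x, u) = x², and A₀ = −1 with P = 1/2 satisfies (6.23); the returned derivatives would be integrated by the surrounding simulation loop.

% Sketch of the x-swapping filter and least-squares identifier (scalar case,
% illustrative only): h = u, F = x^2, A0 = -1, P = 1/2 so P*A0 + A0'*P = -1.
function [dOmega0, dOmega, dtheta_hat, dGamma, epsilon] = ...
         ls_identifier(x, u, Omega0, Omega, theta_hat, Gamma, rho, nu)
    F  = x^2;  h = u;  A0 = -1;  P = 0.5;
    Af = A0 - rho*(F'*F)*P;                      % common filter matrix
    dOmega0 = Af*(Omega0 + x) - h;               % filter (6.21)
    dOmega  = Af*Omega + F';                     % filter (6.22), transposed form
    epsilon = x + Omega0 - Omega'*theta_hat;     % estimation error (6.24)
    m       = 1 + nu*trace(Omega'*Gamma*Omega);  % normalization
    dtheta_hat = Gamma*Omega*epsilon/m;          % update law (6.27)
    dGamma     = -Gamma*(Omega*Omega')*Gamma/m;  % covariance update (6.28)
end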
6.3 Aircraft Model Description


Before the adaptive flight control designs are discussed, the aircraft dynamic model for
which the controllers are designed is introduced in this section. The simplified nonlinear
aircraft dynamic model has been obtained from [159]. The aircraft dynamic model (6.33)
somewhat resembles that of an F-18 model.
α̇ = q − p β + z_α Δα + (g₀/V)(cos θ cos φ − cos θ₀)
β̇ = y_β β + p (sin α₀ + Δα) − r cos α₀ + (g₀/V) cos θ sin φ
φ̇ = p + q tan θ sin φ + r tan θ cos φ
θ̇ = q cos φ − r sin φ
ṗ = l_β β + l_q q + l_r r + (l_βα β + l_rα r) Δα + l_p p − i₁ q r
      + l_δel δ_el + l_δer δ_er + l_δal δ_al + l_δar δ_ar + l_δr δ_r
q̇ = m_α Δα + m_q q + i₂ p r − m_α̇ p β + m_α̇ (g₀/V)(cos θ cos φ − cos θ₀)
      + m_δel δ_el + m_δer δ_er + m_δal δ_al + m_δar δ_ar + m_δlef δ_lef + m_δtef δ_tef + m_δr δ_r
ṙ = n_β β + n_r r + n_p p + n_pα p Δα − i₃ p q + n_q q
      + n_δel δ_el + n_δer δ_er + n_δal δ_al + n_δar δ_ar + n_δr δ_r    (6.33)

Aerodynamic data are available in Tables 6.1 and 6.2 for two trimmed flight conditions:
flight condition 1 at an altitude of 30000 ft and a Mach number of 0.7, and flight condi-
tion 2 at 40000 ft altitude and a Mach number of 0.6. The model has seven independent
control surfaces, i.e. left and right elevators, left and right ailerons, leading and trailing
edge flaps, and collective rudders. A layout of the aircraft and its control surfaces can
be seen in Figure 6.2. The main simplifications made in the dynamic model are constant
airspeed and no lift or drag effects on the control surfaces. The latter simplifications have
been made to get the system into a lower triangular form required for standard adaptive
backstepping and feedback linearization designs. The designs considered in this chapter
do not suffer from this shortcoming since command filters are used to generate the inter-
mediate control laws. The aircraft model includes second order actuator dynamics. The
magnitude, rate and bandwidth limits of the actuators are specified in Table 6.3.

Table 6.1: Aircraft model parameters for trim condition I, h = 30000 ft and M = 0.7.

l_β = −11.04,  l_q = 0,  l_r = 0.4164,  l_βα = −19.72,  l_rα = 4.709
l_p = −1.4096,  z_α = −0.6257,  y_β = −0.1244,  m_α = −5.432,  m_α̇ = −0.1258
m_q = −0.3373,  n_β = 2.558,  n_r = −0.1122,  n_p = −0.0328,  n_pα = −0.0026
n_q = 0,  l_δel = 6.3176,  l_δer = −6.3176,  l_δal = 7.9354,  l_δar = −7.9354
l_δr = 1.8930,  i₁ = 0.7966,  i₂ = 0.9595,  i₃ = 0.6914,  m_δel = −4.5176
m_δer = −4.5176,  m_δal = −0.8368,  m_δar = 0.8368,  m_δlef = −1.2320,  m_δtef = 0.9893
m_δr = 0,  g₀ = 9.80665,  n_δel = 0.2814,  n_δer = −0.2814,  n_δal = −0.0698
n_δar = −0.0698,  n_δr = −1.7422,  V = 212.14,  α₀ = 0.0681,  θ₀ = 0.0681

All stability and control derivatives introduced in (6.33) are considered to be unknown
Table 6.2: Aircraft model parameters for trim condition II, h = 40000 ft and M = 0.6.

l_β = −7.0104,  l_q = 0,  l_r = 0.3529,  l_βα = −16.4015,  l_rα = 1.0461
l_p = −0.7331,  z_α = −0.2876,  y_β = −0.0700,  m_α = −1.4592,  m_α̇ = −0.0177
m_q = −0.1286,  n_β = 1.3612,  n_r = −0.0619,  n_p = −0.0177,  n_pα = 0.0696
n_q = 0,  l_δel = 2.7203,  l_δer = −2.7203,  l_δal = 4.2438,  l_δar = −4.2438
l_δr = 0.8920,  i₁ = 0.7966,  i₂ = 0.9595,  i₃ = 0.6914,  m_δel = −1.9782
m_δer = −1.9782,  m_δal = −0.3183,  m_δar = −0.3183,  m_δlef = −0.4048,  m_δtef = 0.3034
m_δr = 0,  g₀ = 9.80665,  n_δel = 0.1262,  n_δer = −0.1262,  n_δal = −0.0963
n_δar = −0.0963,  n_δr = −0.8018,  V = 177.09,  α₀ = 0.1447,  θ₀ = 0.1447

Figure 6.2: The control surfaces of the fighter aircraft model. The control surfaces which will lock
in place during the various simulation scenarios are indicated.

and will be estimated online by the parameter estimation process of the adaptive control
laws. The system (6.33) is rewritten in a more suitable form for the control design as

Ẋ1 = H1 (X1 , Xu ) + Φ1 (X1 , Xu )T Θ1 + B1 (X1 , Xu )X2


Ẋ2 = H2 (X1 , X2 , Xu ) + Φ2 (X1 , X2 , Xu )T Θ2 + B2 U (6.34)
Ẋu = Hu (X1 , X2 , Xu )

where X1 = (φ, α, β)T , X2 = (p, q, r)T , U = (δel , δer , δal , δar , δlef , δtef , δr )T and the
uncontrolled state Xu = θ. The known nonlinear aircraft dynamics are represented by
the vector functions H1 (X1 , Xu ), H2 (X1 , X2 , Xu ) and Hu (X1 , X2 , Xu ) and the matrix
function B1 (X1 , Xu ). The functions Φ1 (X1 , Xu ) and Φ2 (X1 , X2 , Xu ) are the regres-
sor matrices, while Θ1 , Θ2 and B2 are vectors and a matrix containing the unknown
Table 6.3: Aircraft model actuator specifications.

Surface                 Deflection Limit [deg]   Rate Limit [deg/s]   Bandwidth [rad/s]
Horizontal Stabilizer   [-24, 10.5]              ± 40                 50
Ailerons                [-25, 45]                ± 100                50
Leading Edge Flaps      [-3, 33]                 ± 15                 50
Trailing Edge Flaps     [-8, 45]                 ± 18                 50
Rudder                  [-30, 30]                ± 82                 50

parameters of the system, defined as

Θ₁ = ( z_α, y_β )ᵀ
Θ₂ = ( l_β, l_p, l_q, l_r, l_βα, l_rα, l₀, m_α, m_q, m_α̇, m₀, n_β, n_p, n_q, n_r, n_pα, n₀ )ᵀ
B₂ = [ l_δel  l_δer  l_δal  l_δar  0       0       l_δr
       m_δel  m_δer  m_δal  m_δar  m_δlef  m_δtef  m_δr
       n_δel  n_δer  n_δal  n_δar  0       0       n_δr ].

Note that the parameters l0 , m0 and n0 have been added to the vector Θ to compensate
for additional trim moments caused by locked actuators.
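For reference, a small sketch of how the (true) control effectiveness matrix B₂ for trim condition I would be assembled from the derivatives in Table 6.1 is given below; in the adaptive designs this matrix is of course unknown and only its estimate B̂₂ is used, for instance to initialize the identifiers in simulation.

% Control effectiveness matrix B2 of (6.34) for trim condition I (Table 6.1).
% Columns: del, der, dal, dar, dlef, dtef, dr; rows: roll, pitch, yaw moments.
B2 = [ 6.3176  -6.3176   7.9354  -7.9354   0        0        1.8930;
      -4.5176  -4.5176  -0.8368   0.8368  -1.2320   0.9893   0     ;
       0.2814  -0.2814  -0.0698  -0.0698   0        0       -1.7422];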

6.4 Flight Control Design


Now that the system has been rewritten in a structured form, the actual control design
methods can be discussed. The control objective is to track a smooth reference signal
X1,r with state vector X1 . The reference X1,r and its derivative Ẋ1,r are generated
by linear second order filters, which can also be used to enforce the desired transient
response of the controllers. The static feedback loops of the integrated and modular
adaptive controllers are designed identical for comparison purposes and are therefore
derived first. After that, the dynamic part of both controllers is introduced and their
closed-loop stability properties are discussed.

6.4.1 Feedback Control Design


The static feedback control design can be divided in two parts, an outer loop to control
the aerodynamic angles and the roll angle using the angular rates, and an inner loop
to control the angular rates using the control surfaces. The design procedure starts by
defining the tracking errors as
Z₁ = ( φ − φ_r,  α − α_r,  β − β_r )ᵀ = X₁ − X_{1,r}    (6.35)
Z₂ = ( p − p_r,  q − q_r,  r − r_r )ᵀ = X₂ − X_{2,r},    (6.36)
where X2,r is the virtual control law to be defined.
Step 1: The Z1 -dynamics satisfy
Ż1 = B1 Z2 + B1 X2,r + H1 + ΦT1 Θ1 . (6.37)
To stabilize (6.37), a stabilizing function X⁰_{2,r} is defined as
X⁰_{2,r} = B₁⁻¹( −C₁ Z₁ − S₁ Z̄₁ − H₁ − Φ₁ᵀ Θ̂₁ + Ẋ_{1,r} − Ξ₂ ),    (6.38)

where Θ̂1 is the estimate of Θ1 , C1 is a positive definite gain matrix and


S1 = κ1 ΦT1 Φ1 . (6.39)
The compensated tracking error Z̄1 and Ξ2 are to be defined. The stabilizing function
(6.38) is now fed through second order low pass filters as defined in Appendix C to
produce the virtual control law X2,r and its derivative. These filters can also be used to
enforce rate and magnitude limits on the signals. The magnitude and rate limits can be
selected equal to the physical limits of the actual actuators or states of the aircraft. The
effect that the use of these filters has on the tracking errors can be captured with the stable
linear filter
Ξ̇₁ = −C₁ Ξ₁ + B₁( X_{2,r} − X⁰_{2,r} ).    (6.40)
The compensated tracking error Z̄1 is defined as
Z̄1 = Z1 − Ξ1 . (6.41)
This concludes the outer loop design.
Step 2: The inner loop design starts with the Z2 -dynamics, which are given by
Ż2 = B2 U + H2 + ΦT2 Θ2 − Ẋ2,r . (6.42)
To stabilize (6.42), the stabilizing function M⁰_des is defined as
B̂₂ U⁰ = −C₂ Z₂ − S₂ Z̄₂ − B₁ᵀ Z̄₁ − H₂ − Φ₂ᵀ Θ̂₂ + Ẋ_{2,r} = M⁰_des,    (6.43)
where C₂ is a positive definite gain matrix, B̂₂ is the estimate of B₂ and
S₂ = κ₂ Φ₂ᵀ Φ₂ + Σ_{i=1}^{3} κ_{2i} U_i².    (6.44)
Note that the matrix B̂₂ is a 3 × 7 matrix. In Section 6.5 several control allocation
algorithms are introduced to determine U⁰. The real control B̂₂U = M_des is found by
filtering B̂₂U⁰. Finally, the stable linear filter
Ξ̇₂ = −C₂ Ξ₂ + M_des − M⁰_des    (6.45)
is defined. The derivative of the control Lyapunov function
V = (1/2) Z̄₁ᵀ Z̄₁ + (1/2) Z̄₂ᵀ Z̄₂    (6.46)
along the trajectories of the closed-loop system is reduced to
V̇ ≤ −Z̄₁ᵀ C₁ Z̄₁ − Z̄₂ᵀ C₂ Z̄₂ + (1/(4κ₁)) Θ̃₁ᵀ Θ̃₁ + (1/(4κ₂)) Θ̃₂ᵀ Θ̃₂ + Σ_{j=1}^{7} (1/(4κ_{2j})) B̃₂ⱼᵀ B̃₂ⱼ,

where B̃2j represents the j-th column of the matrix B̃2 . From the above expression it can
be deduced that the compensated tracking errors Z̄1 , Z̄2 are globally uniformly bounded
if the parameter estimation errors are bounded. The size of the bounds is determined by
the damping gains κ∗ . Furthermore, if the parameter estimation errors are converging to
zero, then the compensated tracking errors will also converge to zero. This concludes the
static feedback design for both adaptive controllers.

6.4.2 Integrated Model Identification


To design the Lyapunov update laws of the integrated adaptive design method the con-
trol Lyapunov function V (6.46) is augmented with additional terms that penalize the
estimation errors as
 
2 7
1 X   X  
Va = V + trace Θ̃Ti Γ−1
i Θ̃i +
T −1
trace B̃2j ΓB2j B̃2j  , (6.47)
2 i=1 j=1

where Γ∗ = ΓT∗ > 0 are the update gain matrices. Selecting the update laws
Θ̂̇₁ = Γ₁ Φ₁ Z̄₁
Θ̂̇₂ = Γ₂ Φ₂ Z̄₂    (6.48)
B̂̇₂ⱼ = P_{B2j}( Γ_{B2j} Z̄₂ U_j ),
where Uj represents the j-th element of the control vector U , reduces the derivative of
Va along the trajectories of the closed-loop system to
V̇a = −Z̄1T (C1 + S1 ) Z̄1 − Z̄2T (C2 + S2 ) Z̄2 ,
which is negative semi-definite. Hence, the modified tracking errors Z̄1 , Z̄2 converge
asymptotically to zero. Note that the nonlinear damping gains are not needed to guaran-
tee stability of this integrated adaptive design. However, for the purpose of comparison
the static feedback parts of both controllers are kept the same. Furthermore, the damping
terms can be used to improve transient performance bounds of the integrated design as
demonstrated in [118], although selecting them too large will result in high gain control
and related numerical problems.
The update laws (6.48) are driven by the compensated tracking errors Z̄i . If the mag-
nitude or rate limits of the command filters (selected equal to limits of the actuators
or states) are reached, the real tracking errors Zi may increase. However, the modified
tracking errors Z̄i will still converge to zero, since the effect of these constraints has been
filtered out. In this way unlearning of the update laws is prevented. Note that the update
laws for B̂2 include a projection operator to ensure that certain elements of the matrix
do not change sign and full rank is maintained at all times. For most elements the sign
is known based on physical principles. The update laws are also robustified against pa-
rameter drift with continuous dead-zones and e-modification. A scheme of the integrated
adaptive control law can be found in Figure 6.3.

_
Z Online Model U
Identification

Θ Command Filters

_ Backstepping
Pilot Y Z Z Mdes Control
Prefilters Control Law U0
Commands Allocation
(Onboard Model)
Constraint Effect
Estimator

X Sensor U
Processing

Figure 6.3: Integrated adaptive control framework.

6.4.3 Modular Model Identification


An alternative approach to the control design with the Lyapunov-based adaptive laws of
the previous section is to separate the identifier and control law designs. The theory be-
hind this approach, referred to as modular adaptive backstepping control, was discussed
in Section 6.2. A least-squares identification method is selected as the identifier module
for its excellent convergence properties. An advantage of the least-squares method is
that, in theory, the true system parameters can be found since the estimation is not driven
by the tracking error but rather by the state of the system.
The system (6.34) can be written as the general affine parametric model

Ẋ = H(X, U ) + F T (X, U )Θ, (6.49)

where X = (X₁ᵀ, X₂ᵀ, X_uᵀ)ᵀ represents the system states, H(X, U) are the known system
dynamics, Θ = (Θ₁ᵀ, Θ₂ᵀ, B₂₁ᵀ, ..., B₂₇ᵀ)ᵀ is a vector containing the unknown con-
stant parameters and F (X, U ) the known regressor matrix. The x-swapping filter and
prediction error are defined as

Ω̇₀ = ( A₀ − ρFᵀ(X, U)F(X, U)P )( Ω₀ + X ) − H(X, U)    (6.50)
Ω̇ᵀ = ( A₀ − ρFᵀ(X, U)F(X, U)P ) Ωᵀ + Fᵀ(X, U)    (6.51)
ǫ = X + Ω₀ − Ωᵀ Θ̂,    (6.52)

where ρ > 0 and A0 is an arbitrary constant matrix such that


P A0 + AT0 P = −I, P = P T > 0. (6.53)

The least-squares update law for Θ̂ and the covariance update are defined as
Θ̂̇ = Γ Ω ǫ / ( 1 + ν trace(ΩᵀΓΩ) )    (6.54)
Γ̇ = − ( Γ Ω Ωᵀ Γ − λΓ ) / ( 1 + ν trace(ΩᵀΓΩ) ),    (6.55)
where ν ≥ 0 is the normalization coefficient and λ ≥ 0 is a forgetting factor. By
Lemma 6.1 the modular controller with x-swapping filters and least-squares update law
achieves global asymptotic tracking of the modified tracking errors. Despite using a
mild forgetting factor in (6.55), the covariance matrix can become small after a period
of tracking, and hence reduces the ability of the identifier to adjust to abrupt changes in
the system parameters. A possible solution to this problem can be found by resetting
the covariance matrix Γ when a sudden change is detected. After an abrupt change in
the system parameters, the estimation error will be large. Therefore a good monitoring
candidate is the ratio between the current estimation error and the mean estimation error
over an interval tǫ . After a failure, the estimation error will be large compared to the
mean estimation error, and thus an abrupt change is declared when
ǫ − ǭ
> Tǫ (6.56)
ǭ
where Tǫ is a predefined threshold. Moreover, this threshold should be chosen large
enough such that measurement noise and other disturbances do not trigger the resetting.
However, it should also be sufficiently small such that failures will trigger resetting. The
modular scheme is depicted in Figure 6.4.

6.5 Control Allocation


The control designs discussed in the preceding section provide the desired body frame
moments M⁰_des. The problem of control allocation is to distribute these moments over the
available control effectors U⁰. For the control design of this chapter the control allocation
problem can be summarized as
B̂₂ U⁰ = M⁰_des,    (6.57)
where B̂₂ is a 3 × 7 matrix obtained from the identifiers. Without constraints on U⁰, the
expression (6.57) has infinitely many solutions. In the presence of magnitude and rate constraints
on U⁰, this equation has either an infinite number of solutions, a unique solution, or no
solution at all. Two different control allocation methods will be discussed in this section,
one based on the weighted pseudo-inverse and one based on quadratic programming. The
control allocation methods applied in this section are quite basic methods, many more
sophisticated methods exist. Overviews of the numerous control allocation techniques
can be found in [17, 52, 76, 154].

6.5.1 Weighted Pseudo-inverse


A simple and computationally efficient solution to the control allocation problem is found
by utilizing the weighted pseudo-inverse (WPI). Consider the following quadratic cost
function

J = (U 0 )T W U 0 (6.58)

where W is a weighting matrix. The solution of (6.58) is given by


h i−1
U 0 = W −1 (B̂2 )T B̂2 W −1 (B̂2 )T 0
Mdes . (6.59)

The above equation provides an unique solution to (6.57), but it does not take any con-
straints on the control effectors into account. The WPI approach can therefore be inter-
preted as a very crude approach to control allocation. When W = I, the solution of
(6.59) is referred to as the pseudo-inverse (PI) solution.
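A direct transcription of (6.59) into a small MATLAB function (a sketch, not the thesis implementation) reads:

% Weighted pseudo-inverse control allocation, eq. (6.59).
function U0 = wpi_allocation(B2hat, W, Mdes)
    Winv = inv(W);                                  % W is the weighting matrix of (6.58)
    U0   = Winv*B2hat'*((B2hat*Winv*B2hat')\Mdes);  % minimizes (6.58) subject to (6.57)
end

With W = I this reduces to the pseudo-inverse solution; the result may violate the actuator limits of Table 6.3, which is the main drawback discussed above.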

6.5.2 Quadratic Programming


The main disadvantage of the WPI method is that it does not take magnitude and rate
constraints on the control effectors into account. When online solving of an optimization
problem is allowed, these constraints can be taken into account. Quadratic optimization
problems, or quadratic programs, can be solved very efficiently and are therefore inter-
esting for online applications. The quadratic programming (QP) solution will be feasible

[Figure 6.4: block diagram with pilot commands, prefilters, backstepping control law (onboard model), control allocation, least-squares identifier with x-swapping filter and sensor processing.]
Figure 6.4: Modular adaptive control framework.


when the desired moment vector is within the attainable moment set (AMS), and infea-
sible if it is outside.
In [181] two approaches to modify the QP solution are proposed to guarantee that the
solution will always be feasible: direction preserving and sign preserving. The direction
preserving method scales down the magnitude of the desired moment with a scaling fac-
tor σ such that it falls within the attainable moment set. The sign preserving method is
very similar, but allows the scaling σ to be split amongst the three components of M⁰_des
individually as σroll , σpitch and σyaw . The difference between the scaling methods is
illustrated in Figure 6.5.

Figure 6.5: Illustration of two quadratic programming solutions: (a) Direction preserving method
(b) Sign preserving method [181].

The sign preserving control allocation method makes more effective use of the available
control authority, and therefore this method is implemented in the flight control designs.
The QP is formulated as [181]

min_{U⁰, σ}  (1/2) xᵀ H x + c_Uᵀ x    (6.60)
s.t.   B̂₂ U⁰ − Σᵀ M⁰_des = 0
       ( U_lbᵀ, 0, 0, 0 )ᵀ ≤ ( (U⁰)ᵀ, σ_roll, σ_pitch, σ_yaw )ᵀ ≤ ( U_ubᵀ, 1, 1, 1 )ᵀ
where x = ( (U⁰)ᵀ, 1 − σ_roll, 1 − σ_pitch, 1 − σ_yaw )ᵀ,
Σ = diag( σ_roll, σ_pitch, σ_yaw ),
H = blockdiag( Q_U, Q_σ ).
The weighting matrices QU , Qσ and cU are user specified. The scaling factors are more
heavily weighted than the control inputs to make sure that all the available control au-
thority is used: Qσ ≫ QU .
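A sketch of the sign preserving allocation, written against MATLAB's quadprog (Optimization Toolbox assumed; this is not the thesis implementation, and the linear cost term c_U is simply set to zero here), is:

% Sign preserving quadratic programming allocation, eq. (6.60).
% Decision vector x = [U0; 1-sigma_roll; 1-sigma_pitch; 1-sigma_yaw].
function [U0, sigma] = qp_allocation(B2hat, Mdes, QU, Qsigma, Ulb, Uub)
    nu  = size(B2hat, 2);                 % number of control effectors (here 7)
    H   = blkdiag(QU, Qsigma);            % quadratic weights, Qsigma >> QU
    f   = zeros(nu + 3, 1);               % linear cost c_U set to zero in this sketch
    Aeq = [B2hat, diag(Mdes)];            % B2hat*U0 + diag(Mdes)*(1-sigma) = Mdes,
    beq = Mdes;                           %   i.e. B2hat*U0 = Sigma'*Mdes
    lb  = [Ulb; zeros(3, 1)];             % actuator lower limits, sigma <= 1
    ub  = [Uub; ones(3, 1)];              % actuator upper limits, sigma >= 0
    x   = quadprog(H, f, [], [], Aeq, beq, lb, ub);
    U0    = x(1:nu);
    sigma = 1 - x(nu+1:end);
end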

6.6 Numerical Simulation Results


The control designs are evaluated on their tracking performance and parameter estima-
tion accuracy for several failure scenarios during two separate maneuvers of 60 seconds.
The task given to the controllers is to track roll angle and angle of attack reference sig-
nals, while the sideslip angle is regulated to zero. The simulations are performed in
MATLAB/Simulink with a third order solver and 0.01 s sampling time. The controllers,
identifiers and aircraft model are all written as MATLAB S-functions.

6.6.1 Tuning the Controllers


The gains of both controllers are selected as C1 = I, C2 = 2I and all damping terms
κ∗ are taken equal to 0.01. These gains were selected after a trial-and-error procedure
in order to get an acceptable nominal tracking response. Note that Lyapunov stability
theory only requires the control gains to be larger than zero, but it is natural to select
the gains of the inner loop largest. The dynamics and limits of the outer loop command
filters are selected equal to the actuator dynamics of the aircraft model. The inner loop
command filters do not contain any limits on the virtual control signals.
With the tuning of the static feedback designs finished, the identifiers can be tuned. Again
the theory for the integrated adaptive design only requires the update gains to be larger
than zero. Selecting larger gains results in a more rapid parameter convergence at the cost
of more control effort. However, the effect of the update gains on the transient perfor-
mance of the closed-loop system is unclear, since the dynamic behavior of the tracking
error driven update laws can be quite unexpected. As such, it turns out to be very time
consuming to find a unique set of update gains of the Lyapunov-based identifier for a
range of failure types and two different flight conditions. This is a clear disadvantage
of the integrated adaptive design. All tracking error driven update laws are normalized
and the update gains related to the symmetric coefficients are selected equal to 10 and
the gains related to the asymmetric coefficients equal to 3. The constant σ related to the
e-modification (see Section 4.2.3) is taken equal to 0.01 and the continuous dead-zone
bounds are taken equal to 0.01 deg in the outer loop and 0.1 deg/s in the inner loop.
The tuning of the least-squares identifier is much more straight-forward, since the gain
scaling is more or less automated and the dynamic behavior is similar to the aircraft
model. However, the selection of a proper resetting threshold may take some time. All
diagonal elements of the update gain matrix are initialized at 10 and the resetting thresh-
old is selected as Tǫ = 20. A disadvantage of the modular adaptive design is that the
least-squares identifier in combination with regressor filtering has a much higher dynam-
ical order (more states) than the Lyapunov identifier and hence the simulations with the
modular adaptive design take up more time.
6.6.2 Simulation Scenarios


The simulated failure scenarios are limited to individual locked control surfaces at dif-
ferent offsets from zero. As indicated in Figure 6.2, failures of the left aileron and the
left elevator surfaces are considered at both flight conditions: the left aileron locks at
−25, −10, 0, 10, 25 and 45 degrees, and the left elevator locks at −20, −10, −5, 0, 5
and 10 degrees. A positive deflection means trailing edge down for both control surfaces.
All simulations are started from the trimmed flight condition; scenarios 1 and 2 at flight
condition I, scenarios 3 and 4 at flight condition II. The simulated failures are initiated
1 second into the simulation and the failed surface is deflecting to the failure position
subject to the rate limit of the corresponding effector, i.e. 100 deg/s for the aileron and
40 deg/s for the elevator surface. Second order command filters are used to generate the
reference signals on angle of attack α and roll angle φ. The following two maneuvers are
considered:

1. three angle of attack doublets of ±15+α0 deg are flown, while a roll angle doublet
of ±90 degrees is commanded.

2. three multi axis doublets are flown, with angle of attack and roll angle of ±15 + α0
and ±60 deg, respectively.

Figure 6.6 shows the reference signals for the maneuvers, which have been generated
with second order command filters. The failure scenarios are summarized in Table 6.4.

Figure 6.6: The two simulated maneuvers.

Each scenario and failure case is simulated for each of the adaptive control laws and a
non-adaptive backstepping controller, used as a baseline, combined with each of the three
control allocation methods discussed in Section 6.5. Two different weight matrices are
used for the weighted pseudo-inverse and QP control allocation methods
 
$$W_{U1} = \operatorname{diag}\left(1,\ 1,\ 20,\ 20,\ 10,\ 10,\ 5\right), \qquad W_{U2} = \operatorname{diag}\left(20,\ 20,\ 1,\ 1,\ 10,\ 10,\ 5\right). \tag{6.61}$$

The first weight matrix favors deflections of the horizontal tail surfaces over the ailerons,
while the second weight matrix favors the use of ailerons over the horizontal tail surfaces.
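For reference, the weighted pseudo-inverse allocation amounts to a single matrix computation; the sketch below assumes the common convention of minimizing $u^T W_U u$ subject to $B_2 u$ equal to the commanded moments, so surfaces with larger weights are used less.

```python
import numpy as np

def weighted_pseudo_inverse_allocation(B2, m_des, weights):
    """Distribute the desired moment vector m_des over the effectors (sketch).

    Solves  min_u u^T W u  subject to  B2 @ u = m_des,
    with W = diag(weights); larger weights penalise an effector more.
    Note: actuator position and rate limits are NOT handled here, which is the
    shortcoming of this method discussed in the text.
    """
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    return W_inv @ B2.T @ np.linalg.solve(B2 @ W_inv @ B2.T, m_des)

# Illustrative call with weighting W_U1 from (6.61) and a hypothetical 3x7
# control effectiveness estimate B2_hat:
# u_cmd = weighted_pseudo_inverse_allocation(B2_hat, m_des, [1, 1, 20, 20, 10, 10, 5])
```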

Table 6.4: Definition of the simulation scenarios.

Scenario Maneuver Trim Condition Failed Effector Lock Positions


1 1 I Left Aileron 45, 25, 10, 0, -10, -25 degrees
2 2 I Left Elevator 10.5, 5, 0, -5, -10, -24 degrees
3 1 II Left Aileron 45, 25, 10, 0, -10, -25 degrees
4 2 II Left Elevator 10.5, 5, 0, -5, -10, -24 degrees

6.6.3 Controller Comparison


Nominal Performance

First of all, the results of the simulated maneuvers without any failures are presented.
The root mean square (RMS) error, or quadratic mean of the tracking errors over the
whole duration of the simulation for the different control approaches combined with
the control allocation methods is presented in Table 6.5. As a reference, the results
for a backstepping controller without adaptation, but with robustifying damping terms,
are included. The results show that the choice of control allocation method does not
have a significant impact on the nominal performance. When the estimate of the control
effectiveness matrix B2 is good, and the control moments commanded by the controller
are within the attainable moment set, the control allocation methods are able to generate
the commanded moment. The performance of the integrated design is better than the
nominal and modular designs, since the tracking error driven update laws will adapt
the system parameters even if their values are correctly initialized and the dead-zones
are in place. This is a general property of Lyapunov-based update laws for this type of
adaptive design. The modular design, on the other hand, recognizes that the parameter
estimates are at their correct value, and therefore does not adapt the parameters.
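For clarity, the tabulated measure is the quadratic mean of the sampled tracking errors over the run; with $e_k$ denoting the stacked angle of attack and roll angle tracking errors at sample $k$ (the exact stacking and weighting of the two channels is not detailed here), it reads

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\|e_k\right\|^2}.$$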

Table 6.5: Tracking performance, nominal case.

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
NOMINAL 1.0013 1.0139 1.0007 0.9964 0.9964
INTEGRATED 0.7692 0.8717 0.7402 0.8198 0.7953
MODULAR 1.0013 1.0139 1.0007 0.9964 0.9964

Performance in the Presence of Actuator Failures

The same reference tracking problem is considered with failures. To be able to present
some meaningful statistics on performance, simulation cases which were terminated due
to excessive tracking errors are not included in the comparison. The number of excluded
cases is given in Table 6.6. For each scenario, 6 failure cases were simulated for every
control allocation method, resulting in 120 failure simulations per controller. It is clear
that all the adaptive control laws reduce the number of failed simulations considerably
with respect to the non-adaptive control law. The robust non-adaptive control law only
results in satisfactory tracking for the mildest failure cases.
Another striking fact is that the integrated adaptive control law in combination with the
control allocation weight matrix WU1 performs poorly, especially for the WPI method.
Weight matrix WU1 gives priority to the horizontal stabilizers. If one of these surfaces
fails, and its loss of effectiveness is poorly estimated, the difference between the desired
moments and the actually generated moments will be large, resulting in performance
degradation. This effect is much larger when the weighted pseudo-inverse is used instead
of the more sophisticated QP control allocation method which can incorporate constraints
on the input. The modular adaptive design is less sensitive to the choice of control allocation algorithm.
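As an illustration of why constraint handling matters, a QP-style allocation can be approximated with a bounded least-squares solve; the sketch below is only a simplified stand-in for the QP method of Section 6.5 and assumes simple box limits on the deflections.

```python
import numpy as np
from scipy.optimize import lsq_linear

def constrained_allocation(B2, m_des, u_min, u_max, weights, eps=1e-3):
    """Sketch of a constraint-aware allocator: minimise the moment error plus a
    small weighted control-effort penalty, subject to box limits on the
    deflections.  A simplified stand-in for the QP allocator of Section 6.5."""
    W_sqrt = np.diag(np.sqrt(np.asarray(weights, dtype=float)))
    A = np.vstack([B2, np.sqrt(eps) * W_sqrt])            # stacked least-squares problem
    b = np.concatenate([m_des, np.zeros(B2.shape[1])])
    return lsq_linear(A, b, bounds=(u_min, u_max)).x
```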
The few terminated failure cases that occur for the adaptive designs are the most ex-
treme failure cases. For example, in scenario 4 with an elevator hard-over failure of
10.5 degrees, the simulation is terminated for all flight control laws. Stability can still
be maintained at straight, level flight, but the commanded maneuver is too demanding
for this failure at flight condition II. The RMS of the tracking errors over the whole du-

Table 6.6: Number of terminated simulation cases.

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
NOMINAL 15 19 18 15 18
INTEGRATED 2 14 4 8 3
MODULAR 4 15 5 3 2

ration of the simulations has been computed for a controller performance comparison.
The tabulated results of the numerical simulations can be found in Table 6.7. Note that
the results for the damaged aircraft are averaged over all the successful failure scenarios.
The average tracking performance of the modular approach is better than that of the in-
tegrated design, although it should be noted that for the PI control allocation the results
of the integrated design include two of the more severe failure cases. However, when a
weighted control allocation method is used, the performance of the modular adaptive
controller is clearly superior.
The performance of the nominal controller is included for comparison; note that the
tracking performance for the mild failure cases already degrades when compared to the

nominal performance. Unsurprisingly, the average performance with the QP control al-
location is better than for the (weighted) pseudo-inverse methods, and the number of
successful simulations is also higher. The WPI method does not take constraints on the
surface deflections into account, which can result in suboptimal use of the available con-
trol effectiveness, and thus reduced performance.
In [159] similar simulations were performed for a tuning function adaptive backstepping
design in combination with the weighted pseudo-inverse and direct control allocation.
Their results show that the weighted pseudo-inverse control allocation gave the best re-
sults due to the artificial lead it generates, although it is pointed out that this lead can also
result in poor performance during maneuvers. However, [159] did not consider realistic
failures; the investigation was limited to maneuvers with wrong initial estimates of the
aerodynamic parameters. With control surface failures, a more sophisticated control al-
location method is clearly more beneficial. A possible source for the better performance

Table 6.7: Post-failure tracking error RMS (terminated cases removed).

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
NOMINAL 5.7142 4.8384 4.4557 5.7652 4.2899
INTEGRATED 2.0363 2.6949 2.3331 2.4066 2.3583
MODULAR 1.7437 2.0602 2.4741 1.7660 1.8176

of the modular controller in combination with weighted control allocation is the accuracy
of its parameter estimation. To verify this hypothesis, the average errors between the
parameter estimates and the true post-failure parameter values are calculated over the
last 5 seconds of each simulation. The average estimation errors of the parameters not
related to the control surfaces are shown in Table 6.8, while Table 6.9 presents the
estimation errors of the elements of the control effectiveness matrix that did not change
due to the failure. Finally, the estimation errors in the elements of the control effective-
ness matrix related to the failed surfaces are shown in Table 6.10.
From these tables it becomes clear that the identifier of the modular design estimates
the parameters closest to their true values. In fact, if the simulations are continued the
estimates of the least squares algorithm keep converging closer to the true parameters.
For the integrated adaptive design the opposite is often true, which is why parameter pro-
jection methods are usually introduced to bound the values of the parameter estimation
errors for an adaptive backstepping design.
Most crucial for the weighted control allocation are the estimation errors in the effective-
ness of the failed surfaces as shown in Table 6.10. It is evident that the estimation quality
of the modular design is superior for these parameters, which explains why this control law
has the most successful reconfigurations when a weighted control allocation method is
used.

Table 6.8: Average parameter estimation error over last 5 seconds (terminated cases removed).

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
INTEGRATED 0.2064 0.2187 0.1950 0.2805 0.2272
MODULAR 0.0589 0.0242 0.0413 0.0739 0.0516

Table 6.9: Average estimation error over last 5 seconds of the elements of B2 that did not change
due to a failure.

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
INTEGRATED 0.3034 0.3294 0.2796 0.2484 0.2628
MODULAR 0.2125 0.0691 0.1508 0.1481 0.1310

Table 6.10: Average estimation error over last 5 seconds of B2 elements relating to failed surfaces.

Control Allocation
Controller      PI        WPI (WU1)   WPI (WU2)   QP (WU1)   QP (WU2)
INTEGRATED 1.7776 1.0579 1.5359 2.0979 1.8685
MODULAR 0.4046 0.7117 0.2266 0.2208 0.1005

Specific Failure Cases

The response of the aircraft during one of the maneuvers with a hard-over of the left
aileron is shown in Figure D.1 of Appendix D.1 for the integrated controller combined
with PI control allocation. Despite the 45 degree lock of the left aileron after 1 second,
stability is maintained and tracking performance is reasonable. The realized control sur-
face deflections are shown in Figure D.1(b). The remaining control surfaces compensate
for the trim moment introduced by the locked aileron, saturating both elevators during the
roll doublets. Figure D.1(c) shows the realized total control moment coefficients versus
the commanded control moment coefficients adjusted with the estimated trim moment
introduced by the failure.
Finally, the results of the parameter estimation of the elevator and aileron related control
derivatives can be found in Figure D.1(d). It can be seen that the parameter estimates con-
verge, but, as expected, not to their true values. Although the estimates do not converge
to the true values, the plots of Figure D.1(c) demonstrate that the difference between the
commanded control moment and the realized control moment is relatively small.
The results of the same simulation scenario with the modular adaptive controller can be
found in Figure D.2. Tracking performance of this controller is even better than for the
integrated design. As can be seen in Figure D.2(d), the parameter estimates generated by
the least-squares identifier converge to their true values.
The results of a comparison between both controllers for simulation scenario 4 are shown
in Figures D.3 and D.4. After 1 second of simulation time the aircraft experiences a left
stabilizer hard-over to 10.5 degrees. This failure results in even more coupling between
the longitudinal and lateral motions than the aileron hard-over. In this simulation the
controllers make use of the QP control allocation with weighting WU2. Again, both con-
trollers manage to stabilize the aircraft after the failure. Tracking performance is restored
close to the nominal performance, after some large initial tracking errors. The estimated
parameters of both designs converge, but only the estimated parameters of the modular
design converge to their true values.
As discussed earlier, the performance of both adaptive designs with weighting WU2
used in the control allocation is much better than with WU1. This performance difference
is more pronounced when the unsophisticated weighted pseudo-inverse control alloca-
tion is used. Furthermore, the modular adaptive design is less sensitive to the weighting
used than the integrated design due to its true parameter estimates. These statements are
illustrated in Figure D.5, where the tracking performance of both controllers, with the
WPI control allocation method, is compared in simulation scenario 2, where the aircraft
suffers a left stabilizer lockup at 0 degrees after 1 second.

6.7 Conclusions
Two nonlinear adaptive flight control designs for an over-actuated fighter aircraft model
have been studied. The first controller is a constrained adaptive backstepping design with
control law and dynamic update law designed simultaneously using a control Lyapunov

function, while the second design is an ISS-backstepping controller with a separate re-
cursive least-squares identifier. In addition, two control allocation methods with different
weightings have been used to distribute the desired moments over the available control
surfaces. The controllers have been compared in numerical simulations involving several
types of aileron and horizontal stabilizer failures.
Several important observations can be made based on this comparison:
1. Results of numerical simulations show that both adaptive controllers provide a sig-
nificant improvement over a non-adaptive NDI/backstepping design in the presence
of actuator lockup failures. The success rate and performance of both adaptive de-
signs with the pseudo inverse control allocation is comparable for most failure
cases. However, in combination with weighted control allocation methods the suc-
cess rate and also the performance of the modular adaptive design is shown to be
superior. This is mainly due to the better parameter estimates obtained by the least
squares identification method. The Lyapunov-based update laws of the integrated
adaptive backstepping designs, in general, do not estimate the true value of the
unknown parameters. It is shown that especially the estimate of the control effec-
tiveness of the damaged surfaces is much more accurate using the modular adaptive
design. It can be concluded that the constrained adaptive backstepping approach
is best used in combination with the simple pseudo inverse control allocation to
prevent unexpected results.
2. The computational load of the integrated adaptive design is much lower than for
the modular design. This is due to the higher dynamic order of the estimator of
the latter approach. The number of states of the Lyapunov-based estimator is equal
to the number of parameters to be estimated p. The least-squares identifier used
by the modular design has p × p + p states, while the x-swapping filter has an
additional p × n + n states, with n being the number of states of the system to
be controlled. This is a critical advantage of the integrated adaptive design when
considering real-time implementation.
3. The integrated adaptive design does not require the nonlinear damping terms used
in this experiment to compensate for the slowness of the identifier. The nonlin-
ear damping terms can easily result in high gain feedback control and numerical
instability.
4. The tuning of the update laws of the integrated design turns out to be quite time
consuming for this simplified aircraft model. Increasing the adaptation gain may
lead to unwanted transients in the closed-loop tracking performance. This tuning
process may have to be improved when attempting the control design for the high-
fidelity full envelope F-16 model or an alternative identifier may have to be found.
5. For some simulated failure cases, the adaptive controller managed to stabilize the
aircraft, but the commanded maneuver proved too challenging for the damaged
aircraft. Hence, an adaptive controller by itself may not be sufficient for a good
reconfigurable flight control system. The pilot or guidance system also needs to

be aware of the characteristics of the failure, since the post-failure flight envelope
might be a lot smaller. This observation has motivated a whole new area of research,
usually referred to as adaptive flight envelope estimation and/or protection; see e.g.
[198, 211]. Using the adaptive controllers developed in this thesis, it is possible to
indicate to the pilot which axes have suffered a failure, so that he is made aware that
there is a failure and that he should fly more carefully. However, a fully adaptive
flight envelope protection system is beyond the scope of this thesis work.
Chapter 7
F-16 Trajectory Control Design

The results of the previous chapter demonstrated that the constrained adaptive backstep-
ping flight control system improved the closed-loop performance in the case of sudden
changes in the dynamic behavior of the aircraft. In this chapter the control system de-
sign framework of the previous chapter is extended to nonlinear adaptive control for the
complex high-fidelity F-16 dynamic model of Chapter 2, which is valid over a large, sub-
sonic flight envelope. A flight envelope partitioning method using B-spline networks is
introduced to simplify the online model identification and make real-time implementation
feasible. As a study case a trajectory control autopilot is designed, which is evaluated in
several maneuvers with actuator failures and uncertainties in the onboard aerodynamic
model. The trajectory control problem is relatively challenging since the uncertain sys-
tem to be controlled has a high relative degree. It will be shown that the constrained
adaptive backstepping approach is well suited to tackle this problem.

7.1 Introduction
In this chapter the command filtered adaptive backstepping design method is applied to
the control design for the F-16 dynamic model of Chapter 2, thereby extending the re-
sults of Chapter 6 which are limited to a single point in the flight envelope. The size
of the aerodynamic forces and moments of the F-16 model varies nonlinearly with the
flight condition. Approximating uncertainties, resulting from modeling errors or sud-
den changes due to failures, in these complex force and moment functions means that
the regressors of the identifier will have to be selected very large in order to capture
the dynamic behavior of the complete aircraft model. On the other hand, a real-time
implementation of the adaptive control method is still an important goal, hence the com-
putational complexity should be kept at a minimum.
As a solution, a flight envelope partitioning method [152, 153, 203] is proposed to decompose
the globally valid aerodynamic model into multiple locally valid aerodynamic models.


The Lyapunov-based update laws of the adaptive backstepping method only update a few
local models at each time step, thereby decreasing the computational load of the algo-
rithm. B-spline networks are used to ensure smooth transitions between the different
regions. In Section 7.2 the flight envelope partitioning method and resulting local ap-
proximation scheme is further explained.
In the second part of the chapter an inertial trajectory controller in three-dimensional air
space for the F-16 model is designed using the adaptive backstepping approach com-
bined with a multiple-model approach in the parameter update laws. The trajectory control
problem is quite challenging, since the system to be controlled has a high relative de-
gree, resulting in a multivariable, four loop adaptive feedback design. The performance
of the autopilot is evaluated in numerical simulation scenarios involving several types of
trajectories and uncertainties in the onboard aerodynamic model. The conclusions are
presented in Section 7.5.

7.2 Flight Envelope Partitioning


In the previous chapter a reconfigurable flight control system based on the constrained
adaptive backstepping method was designed for a simplified fighter aircraft model. The
aircraft model was only valid at a single operating point and hence the identifier only
had to estimate the aerodynamic model error at that point. As discussed before, this
model error can be the result of modeling inaccuracies or sudden changes in the dynamic
behavior of the aircraft, e.g. due to structural damage or control surface failures. The
high-fidelity F-16 model, as detailed in Chapter 2, contains aerodynamic data valid over
the entire subsonic flight envelope. Hence, the model error in this case is, in general, a
complex nonlinear function dependent on the states and inputs of the aircraft valid over
a large domain of operation.
To ensure that the constrained adaptive backstepping method can also be applied to the
control design for this more complex aircraft model the regressors of the parameter up-
date laws will have to be selected in an appropriate way, i.e. in such a way that they are
‘rich’ enough to accurately identify the model error. One possible approach is to view
the model error as a black box and introduce some form of neural network for which the
weights are updated by the adaptive backstepping update laws, see e.g. [125, 176, 196].
The main motivation for this approach is that neural networks are universal approxi-
mators and can be used to approximate continuous functions at any arbitrary accuracy as
long as the network is large enough [79]. However, exactly how large the network should
be selected is very difficult to determine since it has no real physical meaning. Hence,
the trade-off between estimation accuracy and computational load is not transparent.
In this thesis a different and more intuitive approach based on [144, 153] is used. The
idea is to ‘partition’ the flight envelope into multiple connecting operating regions called
hyperboxes or clusters. In each hyperbox a locally valid linear-in-the-parameter (poly-
nomial) aerodynamic model is defined, for which the parameters can be updated with
the Lyapunov based update laws of the adaptive backstepping control law. An additional
advantage of using multiple, local models is that information of the models that are not

updated at a certain time step is retained, thereby giving the approximator memory capa-
bilities. The partitioning can be done using multiple state variables, the choice of which
depends on the expected nonlinearities of the system. In this thesis B-spline networks are
employed to ensure smooth transitions between the local aerodynamic models [42, 208],
but fuzzy sets or radial basis function networks could also be used.
More advanced local learning algorithms can be found in literature, see e.g. [145, 204]
and related works, where nonlinear function approximation with automatic growth of
the learning network according to the nonlinearities and the working domain of the con-
trol system is proposed. However, the computational load of these types of methods may
well be too large for a real-time implementation. Therefore the implementation of these
methods is not investigated in this thesis.

7.2.1 Partitioning the F-16 Aerodynamic Model


An earlier study at the Delft University of Technology [203] already examined the possi-
bilities of partitioning the F-16 aerodynamic model for modeling and identification pur-
poses. The study focused on fuzzy sets for the smooth transitions between the multiple
models and several manual and automatic methods for the partitioning were evaluated.
For the F-16 model it is not necessary to focus on such complex automatic partitioning
methods, since the aerodynamic data are already given in tabular form and have a polyno-
mial model structure. As an example, consider the normal force coefficient $C_{Z_T}$ and the
pitch moment coefficient $C_{m_T}$ as given in Section 2.4, but repeated here for convenience:

$$C_{Z_T} = C_Z(\alpha,\beta,\delta_e) + \delta C_{Z_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right) + \frac{q\bar{c}}{2V_T}\left[C_{Z_q}(\alpha) + \delta C_{Z_{q_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right]$$

and

$$C_{m_T} = C_m(\alpha,\beta,\delta_e) + C_{Z_T}\left[x_{cg_r} - x_{cg}\right] + \delta C_{m_{lef}}\left(1 - \frac{\delta_{lef}}{25}\right) + \frac{q\bar{c}}{2V_T}\left[C_{m_q}(\alpha) + \delta C_{m_{q_{lef}}}(\alpha)\left(1 - \frac{\delta_{lef}}{25}\right)\right] + \delta C_m(\alpha) + \delta C_{m_{ds}}(\alpha,\delta_e)$$

where

$$\delta C_{Z_{lef}} = C_{Z_{lef}}(\alpha,\beta) - C_Z(\alpha,\beta,\delta_e = 0^\circ), \qquad \delta C_{m_{lef}} = C_{m_{lef}}(\alpha,\beta) - C_m(\alpha,\beta,\delta_e = 0^\circ).$$

As can be seen, the effects of altitude or Mach number are not included in the aerody-
namic database itself. This is because the aerodynamic data is only valid at subsonic
flight conditions. All static and rotational coefficient terms in the above expressions are
determined from 1, 2 or 3-dimensional look-up tables depending on a combination of the
angle of attack α, the sideslip angle β and the elevator deflection δe . The density of the

data points in the look-up tables for the angle of attack varies between 5° and 10°,
the latter for angles of attack above 55°. For the sideslip angle tables the grid points are spaced
2° apart between −10° and 10°, while the points are 5° apart outside this range. Finally, the
grid points of the tables dependent on the elevator deflection are 5° apart. As
explained in Section 2.3, the leading edge flap is controlled automatically by a separate
system.
It is possible to translate this polynomial aerodynamic model to the proposed multiple
model form directly and even use the same grid, i.e. partitioning. However, this would
not be very realistic. In reality the aerodynamic model is never perfect, since it is ob-
tained from (virtual) wind tunnel experiments and flight tests. To make the experiments
more realistic, the onboard model used by the controller and the multiple polynomial
models used by the identifier are selected to be of a more basic structure. In [127] the
aerodynamic data of the F-16 was already simplified by integrating all leading edge flap
dependent tables into the rest of the tables and approximating some sideslip dependen-
cies. However, the range of the data has been reduced to −10o ≤ α ≤ 45o in the
process. These steps have greatly reduced the size of the database, but the response of
the approximate model constructed from this new data is still close to the response with
the full aerodynamic model in the reduced flight envelope. This aerodynamic data will
be referred to as the low-fidelity set.
The onboard model used by the backstepping controller will use the low-fidelity data to
simulate the modeling error. At angles of attack outside the range of the low-fidelity
model the data of the nearest known point will be used, e.g. at 75 degrees angle of attack
the low fidelity model will use the data from 45 degrees. The identifier should be able
to compensate for the large modeling errors in this region. The structure of the normal
force coefficient CZT and the pitch moment coefficient CmT for the low fidelity model,
in an affine form suitable for control, are given by
$$C_{Z_T}^l = C_Z^l(\alpha,\beta) + \frac{q\bar{c}}{2V_T}\,C_{Z_q}^l(\alpha) + C_{Z_{\delta_e}}^l(\alpha,\delta_e)\,\delta_e + \hat{C}_{Z_T}^l$$

and

$$C_{m_T}^l = C_m^l(\alpha,\beta) + C_{Z_T}^l\left[x_{cg_r} - x_{cg}\right] + \frac{q\bar{c}}{2V_T}\,C_{m_q}^l(\alpha) + C_{m_{\delta_e}}^l(\alpha,\delta_e)\,\delta_e + \hat{C}_{m_T}^l,$$

where $\hat{C}_{Z_T}^l$ and $\hat{C}_{m_T}^l$ are the estimates of the modeling errors. The other force and
moment coefficients of the low-fidelity model are similarly defined. All coefficient terms
are again given in tabular form. Note that the higher order elevator deflection dependent
terms are contained in the base terms $C_Z^l$ and $C_m^l$.
The next step is to further specify the estimates of the modeling errors $\hat{C}_{Z_T}^l$ and $\hat{C}_{m_T}^l$ in
such a way that they can account for all possible uncertainties. If the failure scenarios
are limited to symmetric damage and/or control surface failures the polynomial structure
of the estimates can be selected identical to the known onboard model structure, i.e.
$$\hat{C}_{Z_T}^l = \hat{C}_Z^l(\alpha,\beta) + \frac{q\bar{c}}{2V_T}\,\hat{C}_{Z_q}^l(\alpha) + \hat{C}_{Z_{\delta_e}}^l(\alpha,\delta_e)\,\delta_e \tag{7.1}$$

and

$$\hat{C}_{m_T}^l = \hat{C}_m^l(\alpha,\beta) + \frac{q\bar{c}}{2V_T}\,\hat{C}_{m_q}^l(\alpha) + \hat{C}_{m_{\delta_e}}^l(\alpha,\delta_e)\,\delta_e, \tag{7.2}$$

where each polynomial coefficient term $\hat{C}_*^l$ varies with the flight condition. It should be
possible to model all possible errors with these parameter estimate definitions. However, in the
case of asymmetric damage to an aircraft the longitudinal force and moment coefficients
will become dependent on more lateral states and vice versa.
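To connect these estimates to the update laws, (7.1) can be written in the linear-in-the-parameters form used throughout this thesis; the particular stacking shown below is only one possible choice (a sketch), with the entries of $\hat{\theta}_Z$ the flight-condition-dependent weights discussed in the next section:

$$\hat{C}_{Z_T}^l = \varphi_Z^T\,\hat{\theta}_Z, \qquad \varphi_Z = \begin{pmatrix} \varphi_{C_Z}(\alpha,\beta) \\ \frac{q\bar{c}}{2V_T}\,\varphi_{C_{Z_q}}(\alpha) \\ \delta_e\,\varphi_{C_{Z_{\delta_e}}}(\alpha,\delta_e) \end{pmatrix},$$

where the subvectors $\varphi_{(\cdot)}$ contain the basis functions of the corresponding coefficient approximators.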
The dependency of the total aerodynamic force and moment coefficients on the aircraft
states for the nominal (undamaged) aircraft model is given in the second column of Ta-
ble 7.1. It can be seen that for the most part the aerodynamic model for the undamaged
aircraft is almost decoupled with respect to the aircraft states. If the aerodynamic char-
acteristics of the aircraft change, due to a structural failure, then the dependencies of the
aerodynamic coefficients on the aircraft states change and dependencies on additional
states are possibly established. In [51] some specific asymmetric structural failures are

Table 7.1: Aerodynamic force and moment coefficients - dependency on aircraft states for the
nominal and damaged aircraft model [51].

Coefficient nominal aircraft model damaged aircraft model


CXT [α, β, VT , q, δe ] [α, β, VT , p, q, r, δe , δa , δr ]
CYT [α, β, VT , p, r, δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ]
CZT [α, β, VT , q, δe ] [α, β, VT , p, q, r, δe , δa , δr ]
ClT [α, β, VT , p, r, δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ]
CmT [α, β, VT , q, δe ] [α, β, VT , p, q, r, δe , δa , δr ]
CnT [α, β, VT , p, r, δa , δr ] [α, β, VT , p, q, r, δe , δa , δr ]

discussed, i.e. wing, fuselage and vertical stabilizer damage. It is concluded that the
aircraft aerodynamic characteristics become more coupled in the case of an asymmetric
failure. This means that all aerodynamic coefficients become dependent on
all longitudinal and lateral aircraft states, see the third column of Table 7.1. Some fail-
ures (wing damage) will cause stronger coupling than others (fuselage damage), because
the coupling directly depends on the degree of asymmetry introduced into the aerodynamic charac-
teristics of the aircraft. This means the parameter estimation structures (7.1), (7.2) would
have to be extended with more coefficient terms to accurately estimate the aerodynamic
model after such failures. However, as discussed in Chapter 2, no aerodynamic data is
available for any asymmetric damage cases for the F-16 model. Therefore, the research
in this thesis will be limited to symmetric structural damage or actuator failure scenar-
ios only. Hence, the polynomial approximation structures (7.1) and (7.2) are sufficiently
rich.

7.2.2 B-spline Networks


In the previous section the flight envelope was subdivided and a low order approximating
polynomial for the model error was defined on each of the resulting subregions. Numeric
splines are very suitable to connect such a set of polynomials in a continuous fashion to
fit a more complex nonlinear function over a certain domain. In this thesis B-splines are
used, which are computationally efficient and possess good numeric properties [46, 174].
In this section the properties of B-splines and their use in B-spline networks is discussed.
An adaptive B-spline network can be used to relate k inputs and a single output y on a
restricted domain of the input space.

One-dimensional B-spline Networks


First, consider the network of Figure 7.1, which shows a realization with one input.

Figure 7.1: One-dimensional network [44].

The network has two hidden layers. One hidden layer would be enough for a one-dimensional
network, but multi-dimensional networks use two hidden layers as will be shown later
on. The first hidden layer is used to distribute the inputs over the nodes. In the one-
dimensional network of Figure 7.1 one input is distributed over n nodes in the first hid-
den layer, so each node has only one input. To this input a basis function F is applied.
These basis functions are B-splines of any desired order. An n-th order B-spline function
consists of pieces of (n−1)th order polynomials, such that the resulting function is (n−2)
times continuously differentiable at simple knots. B-spline basis functions have the interesting property that they are
non-zero only on a few adjacent subintervals, which makes them ‘local’. B-spline
basis functions can be defined in the following way [49]:

Definition 7.1 (B-spline basis function). Let $U$ be a set of $m + 1$ non-decreasing numbers, $u_0 \le u_1 \le u_2 \le \ldots \le u_m$. The $u_i$'s are called knots, the set $U$ the knot vector, and the half-open interval $[u_i, u_{i+1})$ the $i$th knot span. Note that since some $u_i$'s may be equal, some knot spans may not exist. If a knot $u_i$ appears $k$ times (i.e., $u_i = u_{i+1} = \ldots = u_{i+k-1}$), where $k > 1$, $u_i$ is a multiple knot of multiplicity $k$, written as $u_i(k)$. Otherwise, if $u_i$ appears only once, it is a simple knot. If the knots are equally spaced (i.e., $u_{i+1} - u_i$ is a constant for $0 \le i \le m - 1$), the knot vector or the knot sequence is said to be uniform; otherwise, it is non-uniform.

The knots can be considered as division points that subdivide the interval $[u_0, u_m]$ into
knot spans. All B-spline basis functions are supposed to have their domain on $[u_0, u_m]$.
To define B-spline basis functions, we need one more parameter, the degree of these basis
functions, $p$. The $i$th B-spline basis function of degree $p$, written as $F_{i,p}(u)$, is defined
recursively by the Cox-De Boor recursion formula:

$$F_{i,0}(u) = \begin{cases} 1 & \text{if } u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}$$
$$F_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\,F_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\,F_{i+1,p-1}(u)$$
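A direct transcription of this recursion is given below as a sketch; the 0/0 = 0 convention at repeated knots is the usual one.

```python
def bspline_basis(i, p, u, knots):
    """Evaluate the i-th B-spline basis function of degree p at u,
    using the Cox-De Boor recursion with the convention 0/0 = 0."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    left = 0.0 if left_den == 0.0 else \
        (u - knots[i]) / left_den * bspline_basis(i, p - 1, u, knots)
    right = 0.0 if right_den == 0.0 else \
        (knots[i + p + 1] - u) / right_den * bspline_basis(i + 1, p - 1, u, knots)
    return left + right
```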

B-splines of order 2 through 6 are depicted as an example in Figure 7.2. Note that a
spline function differs from zero on a finite interval.

Figure 7.2: B-splines order 2 through 6.

The second hidden layer of Figure 7.1 also consists of n nodes and each node of this
layer also has only one input. To this input a function G is applied which is merely a
multiplication of this input with a weight w. The results of all second hidden layer nodes
are summed in the output node. When the spline functions of the various nodes are
properly spaced, every one-dimensional function can be approximated. This is shown in
Figure 7.3, where the various splines ($F_1$ to $F_n$) combined with the various weights
(w1 to wn ), together form an output function:
$$y = \sum_{i=1}^{n} w_i F_i(u). \tag{7.3}$$

As an example, the input could be the angle of attack over an input space of 0 to 10
degrees and the output one of the coefficients of the polynomial approximators (7.1) and
(7.2). Note that (7.3) can also be written in the standard notation used throughout this
thesis as
$$y = \varphi(u)^T\hat{\theta}, \tag{7.4}$$
where $\varphi(u) = (F_1(u), \ldots, F_n(u))^T$ is the known regressor and $\theta = [w_1, \ldots, w_n]^T$ is a
vector of unknown constant parameters.

Figure 7.3: The output function y as a combination of third order B-splines and weights.

Two-dimensional B-spline Networks


Two-dimensional B-spline networks have two input nodes. The first hidden layer, as with
the one-dimensional network, consists of nodes, to which a basis function F is applied.
This is shown in Figure 7.4 below. To the first input a group of n nodes are applied,
and to the second input a group of m nodes are applied. The second hidden layer now
consists of nodes which each have two inputs u1 and u2 . For every combination of a
node from one group and a node from the second group, a node exists. To each node
of the second hidden layer, a function G is applied which is now a multiplication of the
two inputs multiplied by a weight w. Again the output node sums the results of all second
hidden layer nodes:
$$y = \sum_{i=1}^{n}\sum_{j=1}^{m} w_{i+n(j-1)}\, F_{1i}(u_1)\, F_{2j}(u_2). \tag{7.5}$$

Figure 7.4: Two-dimensional network [44].

When the spline functions of the various nodes are properly spaced, any two-dimensional
function can be approximated. The extension to n-dimensional networks is evidently
straightforward.

B-spline Network Learning


Learning of B-spline networks can be done in several ways; the most common is to adapt after each sample:
$$\Delta w_i = \gamma\, e\, F_i(u) \tag{7.6}$$
where $\Delta w_i$ is the adaptation of weight $i$, $\gamma$ is the learning rate, $e$ is the output error, $F_i$ is B-spline function $i$ and $u$ is the input.
Given a certain input u, only a limited number of splines Fi (u) are nonzero. Therefore
only a few weights are adapted after each sample, i.e. the adaptation is local. There are
two practical methods for the network learning process available:
• offline learning, where a previously obtained set of data is available and the net-
work learns from this environment. The complete data set can be presented at the
same time, i.e. batch learning, but it is also possible to use only a part of the set at
each time step for training, i.e. stochastic learning. The learning phase is separated
from the simulation phase, i.e. offline learning;
• online learning, where no data set is available to train the network; the network can
be trained during a simulation. The network learns to include the new data points
in the network. Since the learning phase takes place during the simulation phase,
this is called online learning.
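Putting (7.3) and (7.6) together, a one-dimensional network with online learning is only a few lines; the sketch below reuses the bspline_basis helper given earlier, and the spline degree, knot grid and learning rate are illustrative assumptions.

```python
import numpy as np

class BSplineNetwork1D:
    """One-dimensional B-spline network y = sum_i w_i F_i(u) with the
    per-sample update rule Delta w_i = gamma * e * F_i(u)  (sketch)."""

    def __init__(self, knots, degree=2, gamma=0.5):
        self.knots = np.asarray(knots, dtype=float)
        self.degree = degree                       # degree 2 = third-order splines
        self.n = len(self.knots) - degree - 1      # number of basis functions/weights
        self.w = np.zeros(self.n)
        self.gamma = gamma

    def regressor(self, u):
        # Only a few adjacent basis functions are non-zero: evaluation is local.
        return np.array([bspline_basis(i, self.degree, u, self.knots)
                         for i in range(self.n)])

    def predict(self, u):
        return self.regressor(u) @ self.w          # eq. (7.3): y = phi(u)^T w

    def learn(self, u, y_target):
        phi = self.regressor(u)
        e = y_target - phi @ self.w
        self.w += self.gamma * e * phi             # eq. (7.6): only local weights move
        return e
```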

Application of B-spline Networks


Based on the definition of the B-spline networks and the properties of B-splines, it can
be concluded that B-spline networks have several characteristics that make them very
suitable for online adaptive control:

• Because only a small number of B-spline basis functions is non-zero at any given
time step, the weight updating scheme is local. This has the advantage that only a
few update laws are used at the same time, resulting in a lower computational load.
Another advantage is that the network retains information of all flight conditions,
since the local adaptation does not interfere with points outside the closed neigh-
borhood. This means the approximator has memory capabilities, and hence learns
instead of simply adapting the weights.

• The spline outputs are always positive and normalized, which provides numerical
stability.

7.2.3 Resulting Approximation Model


For the F-16 aerodynamic model error approximation each of the coefficient terms in
(7.1) and (7.2) is represented by a B-spline network. Third order B-spline basis func-
tions are used and the grid spacing for each of the scheduling parameters α, β and δe is selected as
2.5 degrees. In earlier work this combination provided enough accuracy to estimate the
model errors, even in the case of an aircraft model with sudden changes in the dynamic
behavior [189]. Note, however, that [203] demonstrated that less partitions are needed
to accurately identify the nominal aerodynamic F-16 model. Since sudden, unexpected
changes in the model are considered in this work, more partitions are used. Note that
using more partitions does not mean that more models are updated at a certain time step;
this is determined by the approximation structures, the order of the B-spline functions and
the order of the B-spline networks. The local behavior of the approximation process with
B-spline networks is illustrated in one of the simulation scenarios in Section 8.6.

7.3 Trajectory Control Design


In this section, a nonlinear adaptive autopilot is designed for the inertial trajectory control
of the six-degrees-of-freedom, high-fidelity F-16 aircraft model as introduced in Chapter
2. The control system is decomposed into four backstepping feedback loops, see Figure
7.5, constructed using a single control Lyapunov function. The aerodynamic force and
moment functions of the aircraft model are assumed not to be exactly known during the
control design phase and will be approximated online. B-spline networks are used to
partition the flight envelope into multiple connecting regions in the manner that was dis-
cussed in the previous section. In each partition a locally valid linear-in-the-parameters
nonlinear aircraft model is defined, of which the unknown parameters are adapted online
by Lyapunov based update laws. These update laws take aircraft state and input con-
straints into account so that they do not corrupt the parameter estimation process. The
performance of the proposed control system will be assessed in numerical simulations of
several types of trajectories at different flight conditions. Simulations with a locked con-
trol surface and uncertainties in the aerodynamic forces and moments are also included.

The section is outlined as follows. First, a motivation for applying the proposed con-
trol approach to this problem is given. After that, the nonlinear dynamics of the aircraft
model are written in a suitable form for the control design in Section 7.3.2. In Section
7.3.3 the adaptive control design is presented as decomposed in four feedback loops,
after which the identification process with the B-spline neural networks is discussed in
Section 7.3.4. Section 7.4 validates the performance of the control law using numerical
simulations performed in MATLAB/Simulink. Finally, a summary of the results and
the conclusions are given in Section 7.5.

7.3.1 Motivation
In recent years the advancements in micro-electronics and precise navigation systems
have led to an enormous rise of interest [43] in (partially) automated unmanned air ve-
hicle (UAV) designs for a large variety of missions in both civil [160, 209] and military
aviation [200]. Inertial trajectory control is essential for these UAVs, since they are
usually required to follow predetermined paths through certain target points in the three-
dimensional air space [29, 96, 97, 151, 171, 172, 184]. Other situations where trajectory
control is desired include formation control, aerial refueling and autonomous landing
maneuvers [68, 155, 156, 168, 182, 207]. This has led to a large body of literature dedicated
to formation and flight path control for UAVs, but also for other types of (un)manned
vehicles [80, 146].
Two different approaches can be distinguished in the design of these trajectory control
systems. The most popular approach is to separate the guidance and control laws: A
given reference trajectory is converted by the guidance laws to velocity and attitude
commands for the actual flight controller which in turn generates the actuator signals
[155, 156, 172]. For example, in [172] it is assumed a flight path angle control autopilot
exists and a guidance law is constructed that takes heading rate and velocity constraints of
the vehicle into account. The same holds for the formation control schemes of [155, 156].
Usually the assumption is made that the autopilot response to heading and airspeed com-
mands is first order in nature to simplify the design.
The other design approach is to integrate the guidance and control laws into one system to
achieve better stability guarantees and improve performance. For instance, [96] utilizes
an integrated guidance and control approach to trajectory tracking where the trimmed
flight conditions along the reference trajectory are the command input to the tracking
controllers. In [184] a combination of sliding mode control and adaptive control is used
for flight path control of an F/A-18 model.

Figure 7.5: Four-loop feedback design for flight path control.


In this section, a Lyapunov-based adaptive backstepping approach is used to design a
flight path controller for a nonlinear, high-fidelity F-16 model in three-dimensional air
space. It is assumed that the aerodynamic force and moment functions of the model are
not known exactly and that they can change during flight due to structural damage or
control surface failures. There is plenty of literature available on adaptive backstepping
designs for the control of aircraft and missiles; see e.g. [58, 61, 76, 109, 176, 183]. How-
ever, most of these designs consider control of the aerodynamic angles µ, α and β or
the angular rates. The design of a trajectory controller is much more complicated since
the system to be controlled is of a higher relative degree. This presents difficulties for a
standard adaptive backstepping design, since the derivatives of the intermediate control
variables have to be calculated analytically in each design step, which leads to a rapid ‘explosion
of terms’.
This phenomenon is the main motivation for the authors of [184] to select a sliding mode
design for the outer feedback loops: It simplifies the design considerably. Another dis-
advantage of standard backstepping designs and indeed most feedback linearizing de-
signs is that the contribution of the control surface deflections to the aerodynamic forces
cannot be taken into account. For these reasons the constrained adaptive backstepping
approach as explained in Section 4.3 is used in this chapter. Furthermore, to simplify the
approximation of the unknown aerodynamic force and moment functions and to reduce
computational load, the flight envelope is partitioned into multiple, connecting operating
regions as discussed in the previous section.

7.3.2 Aircraft Model Description


The aircraft model used in this study is that of an F-16 fighter aircraft with geometry
and aerodynamic data as reported in Section 2.4. The control inputs of the model are
the elevator, ailerons, rudder and leading edge flaps, as well as the throttle setting. The
leading edge flaps are controlled separately and will not be used for the control design.
The control surface actuators are modeled as first-order low pass filters with rate and
magnitude limits as given in Table 2.1. In Section 2.2.4 a representation of the equations
of motion for the F-16 model was given. These differential equations can be rewritten in
the following form, which is more suitable for the trajectory control problem:
 
$$\dot{X}_0 = \begin{pmatrix} V_T\cos\chi\cos\gamma \\ V_T\sin\chi\cos\gamma \\ -V_T\sin\gamma \end{pmatrix} \tag{7.7}$$

$$\dot{X}_1 = \begin{pmatrix} \frac{1}{m}\left(-D + F_T\cos\alpha\cos\beta\right) - g\sin\gamma \\ \frac{1}{mV_T\cos\gamma}\left(L\sin\mu + Y\cos\mu + F_T\left(\sin\alpha\sin\mu - \cos\alpha\sin\beta\cos\mu\right)\right) \\ \frac{1}{mV_T}\left(L\cos\mu - Y\sin\mu + F_T\left(\cos\alpha\sin\beta\sin\mu + \sin\alpha\cos\mu\right)\right) - \frac{g}{V_T}\cos\gamma \end{pmatrix} \tag{7.8}$$

$$\dot{X}_2 = \begin{pmatrix} \frac{\cos\alpha}{\cos\beta} & 0 & \frac{\sin\alpha}{\cos\beta} \\ -\cos\alpha\tan\beta & 1 & -\sin\alpha\tan\beta \\ \sin\alpha & 0 & -\cos\alpha \end{pmatrix} X_3 + \begin{pmatrix} 0 & \sin\gamma + \cos\gamma\sin\mu\tan\beta & \cos\mu\tan\beta \\ 0 & -\frac{\cos\gamma\sin\mu}{\cos\beta} & -\frac{\cos\mu}{\cos\beta} \\ 0 & \cos\gamma\cos\mu & -\sin\mu \end{pmatrix} \dot{X}_1 \tag{7.9}$$

$$\dot{X}_3 = \begin{pmatrix} \left(c_1 r + c_2 p\right) q + c_3\bar{L} + c_4\left(\bar{N} + q\,H_{eng}\right) \\ c_5\,p\,r - c_6\left(p^2 - r^2\right) + c_7\left(\bar{M} - r\,H_{eng}\right) \\ \left(c_8 p - c_2 r\right) q + c_4\bar{L} + c_9\left(\bar{N} + q\,H_{eng}\right) \end{pmatrix} \tag{7.10}$$

where X0 = (x, y, z)T , X1 = (VT , χ, γ)T , X2 = (µ, α, β)T , X3 = (p, q, r)T and the
definition of the inertia terms ci , i = 1, ..., 9 is given in Section 2.2.4.
These twelve differential equations are sufficient to describe the complete motion of the
rigid-body aircraft. Other states such as the attitude angles φ, θ and ψ are functions of
$X = (X_0^T, X_1^T, X_2^T, X_3^T)^T$.

7.3.3 Adaptive Control Design


In this section the aim is to develop an adaptive guidance and control system that asymp-
totically tracks a smooth, prescribed inertial trajectory Y ref = (xref , y ref , z ref )T with
position states X0 = (x, y, z)T . Furthermore, the sideslip angle β has to be kept at
zero to enable coordinated turning. It is assumed that the reference trajectory Y ref =
(xref , y ref , z ref )T satisfies

$$\dot{x}^{ref} = V^{ref}\cos\chi^{ref}, \qquad \dot{y}^{ref} = V^{ref}\sin\chi^{ref} \tag{7.11}$$

with V ref , χref , z ref and their derivatives continuous and bounded. It is also assumed
that the components of the total aerodynamic forces L, Y, D and moments L̄, M̄ , N̄ are
uncertain, so these will have to be estimated. The available controls are the control
surface deflections (δe , δa , δr )T and the engine thrust FT . The Lyapunov-based control
design based on Section 4.3 is done in four feedback loops, starting at the outer loop.

Inertial Position Control


The outer loop feedback control design is initiated by transforming the tracking control
problem into a regulation problem:
   
$$Z_0 = \begin{pmatrix} z_{01} \\ z_{02} \\ z_{03} \end{pmatrix} = \begin{pmatrix} \cos\chi & \sin\chi & 0 \\ -\sin\chi & \cos\chi & 0 \\ 0 & 0 & 1 \end{pmatrix}\left(X_0 - Y^{ref}\right), \tag{7.12}$$

where a new rotating reference frame for control, that is fixed to the aircraft and aligned
with the horizontal component of the velocity vector, is introduced [168, 172]. Differen-
tiating (7.12) gives
 
$$\dot{Z}_0 = \begin{pmatrix} V_T + z_{02}\dot{\chi} - V^{ref}\cos(\chi - \chi^{ref}) \\ -z_{01}\dot{\chi} + V^{ref}\sin(\chi - \chi^{ref}) \\ \dot{z}^{ref} - V_T\sin\gamma \end{pmatrix}. \tag{7.13}$$

The idea is to design virtual control laws for the flight path angles χ, γ and the total
airspeed VT to control the position errors Z0 . However, from (7.13) it is clear that it is
not yet possible to do something about z02 in this design step. The virtual control laws
are selected as

$$V^{des,0} = V^{ref}\cos\left(\chi - \chi^{ref}\right) - c_{01}\, z_{01} \tag{7.14}$$
$$\gamma^{des,0} = \arcsin\left(\frac{c_{03}\, z_{03} - \dot{z}^{ref}}{V_T}\right), \qquad -\pi/2 < \gamma < \pi/2, \tag{7.15}$$
where c01 , c03 > 0 are the control gains. The actual, implementable virtual control
signals V des and γ des as well as their derivatives V̇ des and γ̇ des are obtained by filtering
the virtual signals with a second order low pass filter with optional magnitude and rate
limits in place. As an example the state space representation of the filter for V des,0 is
given by
" #
q2
 
q̇1 (t)
= ω2 (7.16)
  
q̇2 (t) 2ζV ωV SR 2ζVVωV [SM (V des,0 ) − q1 ] − q2
 des   
V q1
= (7.17)
V̇ des q2

where SM (·) and SR (·) represent the magnitude and rate limit functions as given in
Appendix C. These functions enforce the state VT to stay within the defined limits. Note
that if the signal V des,0 is bounded, then V des and V̇ des are also bounded and continuous
signals. When the magnitude and rate limits are not in effect the transfer function from
V des,0 to V des is given by

$$\frac{V^{des}(s)}{V^{des,0}(s)} = \frac{\omega_V^2}{s^2 + 2\zeta_V\omega_V s + \omega_V^2} \tag{7.18}$$

and the error V des,0 − V des can be made arbitrarily small by selecting the bandwidth of
the filter sufficiently large.
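A discrete-time sketch of this filter is given below; the explicit Euler integration and the use of simple clipping in place of the S_M and S_R functions of Appendix C are assumptions made only for the sketch.

```python
import numpy as np

def command_filter_step(q1, q2, raw_cmd, wn, zeta, dt,
                        mag_lim=(-np.inf, np.inf), rate_lim=(-np.inf, np.inf)):
    """One Euler step of the second-order command filter (7.16)-(7.17).

    q1 tracks the (magnitude/rate limited) command, q2 is its derivative.
    np.clip stands in for the limit functions S_M and S_R of Appendix C.
    """
    desired_rate = wn**2 / (2.0 * zeta * wn) * (np.clip(raw_cmd, *mag_lim) - q1)
    q1_dot = q2
    q2_dot = 2.0 * zeta * wn * (np.clip(desired_rate, *rate_lim) - q2)
    return q1 + dt * q1_dot, q2 + dt * q2_dot

# Hypothetical usage: filter V^{des,0} at 100 Hz with wn = 10 rad/s, zeta = 0.7:
# V_des, V_des_dot = command_filter_step(V_des, V_des_dot, V_des0, 10.0, 0.7, 0.01)
```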

Flight Path Angle and Airspeed Control


In the second loop the objective is to steer VT and γ to their desired values as determined
in the previous section. Furthermore, the heading angle χ has to track the reference signal
χref , while the tracking error z02 is also regulated to zero. The available (virtual) controls

in this step are the aerodynamic angles µ and α as well as the thrust FT . Note that the
aerodynamic forces also depend on the control surface deflections U = (δe , δa , δr )T .
These forces are quite small, since the surfaces are primarily moment generators. How-
ever, since the current control surface deflections will be available from the command
filters that are used in the inner design loop, they can be taken into account in the control
design. The relevant equations of motion for this design step are given by

Ẋ1 = A1 F1 (X, U ) + B1 G1 (X, U, X2 ) + H1 (X) (7.19)

where
$$A_1 = \frac{1}{mV_T}\begin{pmatrix} 0 & 0 & -V_T \\ 0 & \frac{\cos\mu}{\cos\gamma} & 0 \\ 0 & -\sin\mu & 0 \end{pmatrix}, \qquad H_1 = \begin{pmatrix} -g\sin\gamma \\ -\frac{F_T\cos\alpha\sin\beta\cos\mu}{mV_T\cos\gamma} \\ \frac{F_T\cos\alpha\sin\beta\sin\mu}{mV_T} - \frac{g}{V_T}\cos\gamma \end{pmatrix},$$
$$B_1 = \frac{1}{mV_T}\begin{pmatrix} V_T\cos\alpha\cos\beta & 0 & 0 \\ 0 & \frac{1}{\cos\gamma} & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

are known (matrix) functions, and


   
$$F_1 = \begin{pmatrix} L(X,U) \\ Y(X,U) \\ D(X,U) \end{pmatrix}, \qquad G_1 = \begin{pmatrix} F_T \\ \left(L(X,U) + F_T\sin\alpha\right)\sin\mu \\ \left(L(X,U) + F_T\sin\alpha\right)\cos\mu \end{pmatrix}$$

are functions containing the uncertain aerodynamic forces. Note that the intermediate
control variables α and µ do not appear affine in the X1 -subsystem, which complicates
the design somewhat. Since the control objective in this step is to track the smooth
reference signal X1des = (V des , χref , γ des )T with X1 = (VT , χ, γ)T , the tracking errors
are defined as
 
$$Z_1 = \begin{pmatrix} z_{11} \\ z_{12} \\ z_{13} \end{pmatrix} = X_1 - X_1^{des}. \tag{7.20}$$

To regulate Z1 and z02 to zero simultaneously, the following equation needs to be satis-
fied [98]
 
$$B_1\hat{G}_1(X,U,X_2) = \begin{pmatrix} -c_{11}\, z_{11} \\ -V^{ref}\left(c_{02}\, z_{02} + c_{12}\sin z_{12}\right) \\ -c_{13}\, z_{13} \end{pmatrix} - A_1\hat{F}_1 - H_1 + \dot{X}_1^{des}, \tag{7.21}$$

where $\hat{F}_1$ is the estimate of $F_1$ and where
$$\hat{G}_1(X,U,X_2) = \begin{pmatrix} F_T \\ \left(\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha\right)\sin\mu \\ \left(\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha\right)\cos\mu \end{pmatrix} \tag{7.22}$$

with the estimate of the lift force decomposed as L̂(X, U ) = L̂0 (X, U ) + L̂α (X, U )α.
The estimate of the aerodynamic forces F̂1 is defined as

$$\hat{F}_1 = \Phi_{F_1}^T(X,U)\,\hat{\Theta}_{F_1} \tag{7.23}$$

where ΦTF1 is the known regressor function and Θ̂F1 is a vector with unknown constant
parameters. It is assumed that there exists a vector ΘF1 such that

$$F_1 = \Phi_{F_1}^T(X,U)\,\Theta_{F_1}. \tag{7.24}$$

This means the estimation error can be defined as Θ̃F1 = ΘF1 − Θ̂F1 . The next step
is to determine the desired values αdes and µdes . The right-hand side of (7.21) is en-
tirely known, so the left-hand side can be determined and the desired values extracted.
Introducing the coordinate transformation
 
$$x \equiv \left(\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha\right)\cos\mu \tag{7.25}$$
$$y \equiv \left(\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha\right)\sin\mu, \tag{7.26}$$

which can be seen as a transformation from the two-dimensional polar coordinates
$\left(\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha\right)$ and $\mu$ to Cartesian coordinates $x$ and $y$. The de-
sired signals $\left(F_T^{des,0}, y_0, x_0\right)^T$ are given by

$$B_1\begin{pmatrix} F_T^{des,0} \\ y_0 \\ x_0 \end{pmatrix} = \begin{pmatrix} -c_{11}\, z_{11} \\ -V^{ref}\left(c_{02}\, z_{02} + c_{12}\sin z_{12}\right) \\ -c_{13}\, z_{13} \end{pmatrix} - A_1\hat{F}_1 - H_1 + \dot{X}_1^{des}, \tag{7.27}$$

thus the virtual control signals are equal to


$$\hat{L}_\alpha(X,U)\,\alpha^{des,0} = \sqrt{x_0^2 + y_0^2} - \hat{L}_0(X,U) - F_T\sin\alpha \tag{7.28}$$

and
  
$$\mu^{des,0} = \begin{cases} \arctan\left(\dfrac{y_0}{x_0}\right) & \text{if } x_0 > 0 \\[2mm] \arctan\left(\dfrac{y_0}{x_0}\right) + \pi & \text{if } x_0 < 0 \text{ and } y_0 \ge 0 \\[2mm] \arctan\left(\dfrac{y_0}{x_0}\right) - \pi & \text{if } x_0 < 0 \text{ and } y_0 < 0 \\[2mm] \dfrac{\pi}{2} & \text{if } x_0 = 0 \text{ and } y_0 > 0 \\[2mm] -\dfrac{\pi}{2} & \text{if } x_0 = 0 \text{ and } y_0 < 0 \end{cases} \tag{7.29}$$
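In an implementation, the case distinction of (7.29) is simply the four-quadrant arctangent, e.g. (a sketch):

```python
import math

def mu_des0(x0, y0):
    # Four-quadrant arctangent, equivalent to the case distinction in (7.29).
    # Note: math.atan2(0.0, 0.0) returns 0.0, whereas (7.29) is undefined there;
    # that singular case is discussed in the text below.
    return math.atan2(y0, x0)
```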

Filtering the virtual signals to account for magnitude, rate and bandwidth limits will give
the implementable virtual controls αdes , µdes and their derivatives. The sideslip angle
command was already defined as β ref = 0, thus X2des = (µdes , αdes , 0)T and its derivative
are completely defined.

However, care must be taken since the desired virtual control $\mu^{des,0}$ is undefined when
both $x_0$ and $y_0$ are equal to zero, making the system momentarily uncontrollable. Such a
zero crossing of $\hat{L}_0(X,U) + \hat{L}_\alpha(X,U)\,\alpha + F_T\sin\alpha$ can only occur at very low or
negative angles of attack. This situation was not encountered during the maneuvers sim-
ulated in this study. To solve the problem altogether, the designer could measure the
rate of change of $x_0$ and $y_0$ and devise a rule base to change the sign when these terms
approach zero. Furthermore, problems will also occur at high angles of attack when the
control effectiveness term L̂α will become smaller and eventually change sign. Possible
solutions include limiting the angle of attack commands using the command filters or
proper trajectory planning to avoid high angle of attack maneuvers.

Aerodynamic Angle Control


Now that the reference signal X2des = (µdes , αdes , β ref )T and its derivative have been
found, the next feedback loop can be designed. The available virtual controls in this step
are the angular rates X3 . The relevant equations of motion for this part of the design are
given by
Ẋ2 = A2 F1 (X, U ) + B2 (X)X3 + H2 (X) (7.30)
where
 
$$A_2 = \frac{1}{mV_T}\begin{pmatrix} \tan\beta + \tan\gamma\sin\mu & \tan\gamma\cos\mu & 0 \\ -\frac{1}{\cos\beta} & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad B_2 = \begin{pmatrix} \frac{\cos\alpha}{\cos\beta} & 0 & \frac{\sin\alpha}{\cos\beta} \\ -\cos\alpha\tan\beta & 1 & -\sin\alpha\tan\beta \\ \sin\alpha & 0 & -\cos\alpha \end{pmatrix},$$
$$H_2 = \begin{pmatrix} \frac{T_0}{mV_T} - \frac{g}{V_T}\tan\beta\cos\gamma\cos\mu \\ -\frac{F_T\sin\alpha}{mV_T\cos\beta} + \frac{g\cos\gamma\cos\mu}{V_T\cos\beta} \\ -\frac{F_T\cos\alpha\sin\beta}{mV_T} + \frac{g}{V_T}\cos\gamma\sin\mu \end{pmatrix},$$
are known (matrix) functions, with
$$T_0 = F_T\left(\sin\alpha\tan\gamma\sin\mu + \sin\alpha\tan\beta - \cos\alpha\sin\beta\tan\gamma\cos\mu\right).$$
The tracking errors are defined as
$$Z_2 = X_2 - X_2^{des}. \tag{7.31}$$
To stabilize the $Z_2$-subsystem a virtual feedback control $X_3^{des,0}$ is defined as
$$B_2 X_3^{des,0} = -C_2 Z_2 - A_2\hat{F}_1 - H_2 + \dot{X}_2^{des}, \qquad C_2 = C_2^T > 0. \tag{7.32}$$


The implementable virtual control, i.e. the reference signal for the inner loop, X3des and
its derivative are again obtained by filtering the virtual control signal X3des,0 with a second
order command limiting filter.

Angular Rate Control


In the fourth step, an inner feedback loop for the control of the body-axis angular rates
X3 = (p, q, r)T is constructed. The control inputs for the inner loop are the control
surface deflections U = (δe , δa , δr )T . The dynamics of the angular rates can be written
as

Ẋ3 = A3 (F3 (X, U ) + B3 (X)U ) + H3 (X) (7.33)

where
   
$$A_3 = \begin{pmatrix} c_3 & 0 & c_4 \\ 0 & c_7 & 0 \\ c_4 & 0 & c_9 \end{pmatrix}, \qquad H_3 = \begin{pmatrix} \left(c_1 r + c_2 p\right) q \\ c_5\, p\, r - c_6\left(p^2 - r^2\right) \\ \left(c_8 p - c_2 r\right) q \end{pmatrix}$$

are known (matrix) functions, and


   
$$F_3 = \begin{pmatrix} \bar{L}_0 \\ \bar{M}_0 \\ \bar{N}_0 \end{pmatrix}, \qquad B_3 = \begin{pmatrix} \bar{L}_{\delta_e} & \bar{L}_{\delta_a} & \bar{L}_{\delta_r} \\ \bar{M}_{\delta_e} & \bar{M}_{\delta_a} & \bar{M}_{\delta_r} \\ \bar{N}_{\delta_e} & \bar{N}_{\delta_a} & \bar{N}_{\delta_r} \end{pmatrix}$$

are unknown (matrix) functions that have to be approximated. Note that for a more
convenient presentation the aerodynamic moments have been decomposed, e.g.

$$\bar{M}(X,U) = \bar{M}_0(X,U) + \bar{M}_{\delta_e}\delta_e + \bar{M}_{\delta_a}\delta_a + \bar{M}_{\delta_r}\delta_r \tag{7.34}$$

where the higher order control surface dependencies are still contained in M̄0 (X, U ).
The control objective in this feedback loop is to track the reference signal X3des =
(pref , q ref , rref )T with the angular rates X3 . Defining the tracking errors

Z3 = X3 − X3des (7.35)

and taking the derivatives results in

Ż3 = A3 (F3 (X, U ) + B3 (X)U ) + H3 (X) − Ẋ3des . (7.36)

To stabilize the system of (7.36) the desired control U 0 is defined as

A3 B̂3 U 0 = −C3 Z3 − A3 F̂3 − H3 + Ẋ3des , C3 = C3T > 0 (7.37)

where F̂3 and B̂3 are the estimates of the unknown nonlinear aerodynamic moment func-
tions F3 and B3 , respectively. The F-16 model is not over-actuated, i.e. the B3 matrix
is square. If this is not the case some form of control allocation would be required,
for instance the QP method used in the flight control problem discussed in the previous
chapter. The estimates are defined as

F̂3 = ΦTF3 (X, U )Θ̂F3


B̂3j = ΦTB3j (X)Θ̂B3j for j = 1, ..., 3 (7.38)

where ΦTF3 , ΦTB3j are the known regressor functions and Θ̂F3 , Θ̂B3j are vectors with un-
known constant parameters; also note that B̂3j represents the jth column of B̂3. It is
assumed that there exist vectors ΘF3 , ΘB3j such that

F3 = ΦTF3 (X, U )ΘF3


B3j = ΦTB3j (X)ΘB3j . (7.39)

This means the estimation errors can be defined as Θ̃F3 = ΘF3 − Θ̂F3 and Θ̃B3j =
ΘB3j − Θ̂B3j . The actual control signal U is found by applying a command filter similar
to (7.16) to U 0 .

Update Laws and Stability Properties

The static part of the trajectory control design has been completed. In this section the
stability properties of the control law are discussed and dynamic update laws for the
unknown parameters are derived. Define the control Lyapunov function
 
$$
V = \frac{1}{2}\left(Z_0^T Z_0 + z_{11}^2 + \frac{2 - 2\cos z_{12}}{c_{02}} + z_{13}^2 + Z_2^T Z_2 + Z_3^T Z_3\right)
+ \frac{1}{2}\,\mathrm{trace}\left(\tilde\Theta_{F_1}^T\Gamma_{F_1}^{-1}\tilde\Theta_{F_1}\right)
+ \frac{1}{2}\,\mathrm{trace}\left(\tilde\Theta_{F_3}^T\Gamma_{F_3}^{-1}\tilde\Theta_{F_3}\right)
+ \frac{1}{2}\sum_{j=1}^{3}\mathrm{trace}\left(\tilde\Theta_{B_{3j}}^T\Gamma_{B_{3j}}^{-1}\tilde\Theta_{B_{3j}}\right), \tag{7.40}
$$

with the update gains matrices ΓF1 = ΓTF1 > 0, ΓF3 = ΓTF3 > 0 and ΓB3j = ΓTB3j > 0.
Taking the derivative of V along the trajectories of the closed-loop system gives
$$
\begin{aligned}
\dot V ={}& -c_{01}z_{01}^2 + z_{02}z_{01}\dot\chi + \left(V_T - V^{des,0}\right)z_{01} + z_{02}\left(-z_{01}\dot\chi + V^{ref}\sin z_{12}\right) \\
& - c_{03}z_{03}^2 - \left(V_T\sin\gamma - \sin\gamma^{des,0}\right)z_{03} - c_{11}z_{11}^2 - V^{ref}\sin z_{12}\left(z_{02} + \frac{c_{12}}{c_{02}}\sin z_{12}\right) \\
& - c_{13}z_{13}^2 + Z_1^T\left(A_1\Phi_{F_1}^T\tilde\Theta_{F_1} + B_1\left(G_1(X_2) - \hat G_1(X_2)\right)\right) + Z_1^T B_1\left(\hat G_1(X_2) - \hat G_1(X_2^{des,0})\right) \\
& - Z_2^T C_2 Z_2 + Z_2^T A_2\Phi_{F_1}^T\tilde\Theta_{F_1} + Z_2^T B_2\left(X_3 - X_3^{des,0}\right) \\
& - Z_3^T C_3 Z_3 + Z_3^T A_3\left(\Phi_{F_3}^T\tilde\Theta_{F_3} + \sum_{j=1}^{3}\Phi_{B_{3j}}^T\tilde\Theta_{B_{3j}}U_j\right) + Z_3^T A_3\hat B_3\left(U - U^0\right) \\
& - \mathrm{trace}\left(\dot{\hat\Theta}_{F_1}^T\Gamma_{F_1}^{-1}\tilde\Theta_{F_1}\right) - \mathrm{trace}\left(\dot{\hat\Theta}_{F_3}^T\Gamma_{F_3}^{-1}\tilde\Theta_{F_3}\right) - \sum_{j=1}^{3}\mathrm{trace}\left(\dot{\hat\Theta}_{B_{3j}}^T\Gamma_{B_{3j}}^{-1}\tilde\Theta_{B_{3j}}\right).
\end{aligned}\tag{7.41}
$$

To cancel the terms depending on the estimation errors in (7.41), the update laws are
selected as

$$
\begin{aligned}
\dot{\hat\Theta}_{F_1} &= \Gamma_{F_1}\Phi_{F_1}\left(A_{1a}^T Z_1 + A_2^T Z_2\right) \\
\dot{\hat\Theta}_{F_3} &= \Gamma_{F_3}\Phi_{F_3}A_3^T Z_3 \\
\dot{\hat\Theta}_{B_{3j}} &= P_{B_{3j}}\left(\Gamma_{B_{3j}}\Phi_{B_{3j}}A_3^T Z_3 U_j\right),
\end{aligned}\tag{7.42}
$$
with $A_{1a}\Phi_{F_1}^T\tilde\Theta_{F_1} = A_1\Phi_{F_1}^T\tilde\Theta_{F_1} + B_1\left(G_1(X_2) - \hat G_1(X_2)\right)$. The update laws for B̂3 include a projection operator to ensure that certain elements of the matrix do not change sign and that full rank is always maintained. For most elements the sign is known from physical principles. Substituting the update laws in (7.41) leads to

$$
\begin{aligned}
\dot V ={}& -c_{01}z_{01}^2 - c_{03}z_{03}^2 - c_{11}z_{11}^2 - \frac{c_{12}}{c_{02}}V^{ref}\sin^2 z_{12} - c_{13}z_{13}^2 - Z_2^T C_2 Z_2 - Z_3^T C_3 Z_3 \\
& + \left(V_T - V^{des,0}\right)z_{01} - \left(V_T\sin\gamma - \sin\gamma^{des,0}\right)z_{03} + Z_1^T B_1\left(\hat G_1(X_2) - \hat G_1(X_2^{des,0})\right) \\
& + Z_2^T B_2\left(X_3 - X_3^{des,0}\right) + Z_3^T A_3\hat B_3\left(U - U^0\right),
\end{aligned}\tag{7.43}
$$

where the first line is already negative semi-definite, which is needed to prove stability in the sense of Lyapunov. Since the Lyapunov function V (7.40) is not radially unbounded, only local asymptotic stability can be guaranteed [98]. This is sufficient for the domain of operation considered here if the control law is properly initialized to ensure |z12| ≤ π/2.
However, the derivative expression of V also includes indefinite error terms due to the
tracking errors and due to the command filters used in the design. As mentioned before,
when no rate or magnitude limits are in effect the difference between the input and output
of the filters can be made small by selecting the bandwidth of the filters sufficiently larger
than the bandwidth of the input signal. Also, when no limits are in effect and the small,
bounded difference between the input and output of the command filters is neglected, the
feedback controller designed in the previous sections will drive the tracking errors to zero.
Naturally, when control or state limits are in effect the system will in general not track
the reference signal asymptotically. A problem with adaptive control is that this can
lead to corruption of the parameter estimation process, since the tracking errors that are
driving this process are no longer caused by the function approximation errors alone. To
solve this problem a modified definition of the tracking errors is used in the update laws
where the effect of the magnitude and rate limits has been removed. Define the modified
tracking errors

Z̄1 = Z1 − Ξ1
Z̄2 = Z2 − Ξ2 (7.44)
Z̄3 = Z3 − Ξ3

with the linear filters
$$
\begin{aligned}
\dot\Xi_1 &= -C_1\Xi_1 + B_1\left(\hat G_1(X, U, X_2) - \hat G_1(X, U, X_2^{des,0})\right) \\
\dot\Xi_2 &= -C_2\Xi_2 + B_2\left(X_3 - X_3^{des,0}\right) \\
\dot\Xi_3 &= -C_3\Xi_3 + A_3\hat B_3\left(U - U^0\right).
\end{aligned}\tag{7.45}
$$

The modified errors will still converge to zero when the constraints are in effect. The
resulting update laws are given by

$$
\begin{aligned}
\dot{\hat\Theta}_{F_1} &= \Gamma_{F_1}\Phi_{F_1}\left(A_{1a}^T \bar Z_1 + A_2^T \bar Z_2\right) \\
\dot{\hat\Theta}_{F_3} &= \Gamma_{F_3}\Phi_{F_3}A_3^T \bar Z_3 \\
\dot{\hat\Theta}_{B_{3j}} &= P_{B_{3j}}\left(\Gamma_{B_{3j}}\Phi_{B_{3j}}A_3^T \bar Z_3 U_j\right).
\end{aligned}\tag{7.46}
$$
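The interplay between the filter (7.45) and the modified-error update laws (7.46) can be sketched in discrete time as follows. The Euler step, step size and variable names are assumptions made for illustration; the thesis integrates these continuous-time laws inside the simulation environment.

    import numpy as np

    def inner_loop_adaptation_step(Xi3, Theta_F3_hat, Z3, A3, B3_hat, U, U0,
                                   Phi_F3, Gamma_F3, C3, dt=0.01):
        """One step of the inner-loop filter of (7.45) and the F3 update of (7.46).

        Phi_F3   : regressor matrix, shape (n_params, 3)
        Gamma_F3 : positive definite update gain matrix, shape (n_params, n_params)
        """
        # Linear filter capturing the effect of the limited control (U - U0),
        # so saturation does not corrupt the adaptation.
        Xi3_dot = -C3 @ Xi3 + A3 @ B3_hat @ (U - U0)
        Xi3 = Xi3 + dt * Xi3_dot

        # Modified tracking error drives the Lyapunov-based weight update.
        Z3_bar = Z3 - Xi3
        Theta_F3_dot = Gamma_F3 @ Phi_F3 @ A3.T @ Z3_bar
        Theta_F3_hat = Theta_F3_hat + dt * Theta_F3_dot

        return Xi3, Theta_F3_hat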

To better illustrate the structure of the control system a scheme of the adaptive inner loop
controller is shown in Figure 7.6.

Figure 7.6: Inner loop control system (block diagram of the control law (7.37), the command limiting filter, the angular rate dynamics (7.33), the linear filter Ξ3 from (7.45) and the update laws (7.46)).

7.3.4 Model Identification


To simplify the approximation of the unknown aerodynamic force and moment functions,
and thereby reduce the computational load, the flight envelope is partitioned into multiple
connecting operating regions with a locally valid linear-in-the-parameters model defined
in each region. B-spline networks are used to interpolate between the local nonlinear
models to ensure smooth transitions. In the previous section parameter update laws (7.46)

were defined for the unknown aerodynamic functions which were written as

F̂1 = ΦTF1 (X, U )Θ̂F1


F̂3 = ΦTF3 (X, U )Θ̂F3
B̂3j = ΦTB3j (X)Θ̂B3j . (7.47)

These unknown vectors and known regressor vectors can now be further defined. The
total force approximations are defined as
 
$$
\begin{aligned}
\hat L &= L_0 + \bar q S\left(\hat C_{L_0}(\alpha,\beta) + \hat C_{L_\alpha}(\beta,\delta_e)\,\alpha + \hat C_{L_q}(\alpha)\frac{q\bar c}{2V_T} + \hat C_{L_{\delta_e}}(\alpha,\delta_e)\,\delta_e\right) \\
\hat Y &= Y_0 + \bar q S\left(\hat C_{Y_0}(\alpha,\beta) + \hat C_{Y_p}(\alpha)\frac{pb}{2V_T} + \hat C_{Y_r}(\alpha)\frac{rb}{2V_T} + \hat C_{Y_{\delta_a}}(\alpha,\beta)\,\delta_a + \hat C_{Y_{\delta_r}}(\alpha,\beta)\,\delta_r\right) \\
\hat D &= D_0 + \bar q S\left(\hat C_{D_0}(\alpha,\beta) + \hat C_{D_q}(\alpha)\frac{q\bar c}{2V_T} + \hat C_{D_{\delta_e}}(\alpha,\delta_e)\,\delta_e\right),
\end{aligned}\tag{7.48}
$$
and the moment approximations
$$
\begin{aligned}
\hat{\bar L} &= \bar L_0 + \bar q S\left(\hat C_{\bar L_0}(\alpha,\beta) + \hat C_{\bar L_p}(\alpha)\frac{pb}{2V_T} + \hat C_{\bar L_r}(\alpha)\frac{rb}{2V_T} + \hat C_{\bar L_{\delta_a}}(\alpha,\beta)\,\delta_a + \hat C_{\bar L_{\delta_r}}(\alpha,\beta)\,\delta_r\right) \\
\hat{\bar M} &= \bar M_0 + \bar q S\left(\hat C_{\bar M_0}(\alpha,\beta) + \hat C_{\bar M_q}(\alpha)\frac{q\bar c}{2V_T} + \hat C_{\bar M_{\delta_e}}(\alpha,\delta_e)\,\delta_e\right) \\
\hat{\bar N} &= \bar N_0 + \bar q S\left(\hat C_{\bar N_0}(\alpha,\beta) + \hat C_{\bar N_p}(\alpha)\frac{pb}{2V_T} + \hat C_{\bar N_r}(\alpha)\frac{rb}{2V_T} + \hat C_{\bar N_{\delta_a}}(\alpha,\beta)\,\delta_a + \hat C_{\bar N_{\delta_r}}(\alpha,\beta)\,\delta_r\right),
\end{aligned}\tag{7.49}
$$

where L0 , Y0 , D0 , L̄0 , M̄0 and N̄0 represent the known, nominal values of the aerody-
namic forces and moments. Note that the approximation polynomial structures are somewhat different from the two example structures in Section 7.2. Estimating the aerodynamic forces in a wind-axes reference frame is more natural for this control problem. Furthermore, the lift force approximation contains an additional lift-curve term, since this term is needed in the flight path control loop.
These approximations do not account for asymmetric failures that will introduce cou-
pling of the longitudinal and lateral motions of the aircraft. If a failure occurs which
introduces a parameter dependency that is not included in the approximation, stability
can no longer be guaranteed. However, the failure scenarios considered in the next sec-
tion are limited to symmetric structural damage or actuator failure scenarios. Therefore,
these uncertainties can all be modeled with the above approximation structures. The

total nonlinear function approximations are divided into simpler linear-in-the-parameter


nonlinear coefficient approximations, e.g.

ĈL0 (α, β) = ϕTCL0 (α, β)θ̂CL0 , (7.50)

where the unknown parameter vector θ̂CL0 contains the B-spline network weights, i.e.
the unknown parameters, and ϕCL0 is a regressor vector containing the B-spline basis
functions. All other coefficient estimates are defined in similar fashion. In this case a
two-dimensional network is used with input nodes for α and β. Different scheduling
parameters can be selected for each unknown coefficient. Third order B-splines spaced 2.5 degrees apart and three scheduling variables, α, β and δe, have been used to partition the flight envelope. With these approximators a sufficient model accuracy was obtained.
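A minimal sketch of how such a local B-spline regressor could be evaluated is given below (Cox-de Boor recursion and a tensor-product construction over α and β). The knot spacing, the mapping of "third order" to a polynomial degree, and the function names are assumptions for illustration only.

    import numpy as np

    def bspline_basis(x, knots, degree):
        """Evaluate all B-spline basis functions of the given degree at x
        (Cox-de Boor recursion on a non-decreasing knot vector)."""
        n = len(knots) - degree - 1                       # number of basis functions
        B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                      for i in range(len(knots) - 1)])    # degree-0 functions
        for d in range(1, degree + 1):
            B_next = np.zeros(len(knots) - d - 1)
            for i in range(len(B_next)):
                left = right = 0.0
                if knots[i + d] > knots[i]:
                    left = (x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
                if knots[i + d + 1] > knots[i + 1]:
                    right = (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
                B_next[i] = left + right
            B = B_next
        return B[:n]

    def regressor_2d(alpha, beta, knots_a, knots_b, degree=3):
        # Tensor-product regressor for a 2-D (alpha, beta) network; only the few
        # basis functions whose support contains the operating point are non-zero,
        # which is what keeps the local update laws cheap.
        return np.outer(bspline_basis(alpha, knots_a, degree),
                        bspline_basis(beta, knots_b, degree)).ravel()

For example, knots_a = np.deg2rad(np.arange(-10, 47.5, 2.5)) would mimic the 2.5 degree spacing used here; the exact knot range is an assumption.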
Following the notation of (7.50) the estimates of the aerodynamic forces and moments
can be written as

$$
\begin{aligned}
\hat L &= \Phi_L^T(\alpha,\beta,\delta_e)\hat\Theta_L, & \hat{\bar L} &= \Phi_{\bar L}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar L}, \\
\hat Y &= \Phi_Y^T(\alpha,\beta,\delta_e)\hat\Theta_Y, & \hat{\bar M} &= \Phi_{\bar M}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar M}, \\
\hat D &= \Phi_D^T(\alpha,\beta,\delta_e)\hat\Theta_D, & \hat{\bar N} &= \Phi_{\bar N}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar N},
\end{aligned}\tag{7.51}
$$

which is a notation equivalent to the one used in (7.47). Therefore, the update laws
(7.46) can be used to adapt the B-spline network weights. However, the update laws have
not yet been robustified against non-parametric uncertainties. In this study dead zones
and e-modification are used to protect the estimated parameters from drifting.
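The combination of a dead zone and e-modification can be sketched as follows. This is a generic illustration of these standard robust-adaptive modifications, not the exact settings used in this study; the threshold and leakage gain are assumptions.

    import numpy as np

    def robust_update(Theta_hat, Theta_dot_nominal, z_bar,
                      dead_zone=0.05, sigma_e=1e-3):
        """Apply a dead zone and e-modification to a parameter update.

        Theta_dot_nominal : the Lyapunov-based update, e.g. from (7.46)
        z_bar             : the modified tracking error driving that update
        dead_zone         : no adaptation when the error norm is this small
        sigma_e           : e-modification leakage gain (scaled by |z_bar|)
        """
        if np.linalg.norm(z_bar) < dead_zone:
            return np.zeros_like(Theta_hat)          # freeze adaptation
        # Leakage proportional to the error norm pulls the weights gently
        # towards zero and prevents slow parameter drift.
        return Theta_dot_nominal - sigma_e * np.linalg.norm(z_bar) * Theta_hat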

7.4 Numerical Simulation Results


This section presents the simulation results from the application of the adaptive flight
path controller to the high-fidelity, six-degrees-of-freedom F-16 model of Section 7.3.2.
Both the adaptive flight control law and the aircraft model are written as C S-functions
in MATLAB/Simulink©. C S-functions are much more efficient than the MATLAB S-functions used for the simplified F-18 model of Chapter 6, which means that the simulations can easily be performed in real time despite the increased complexity of the aircraft model and controller. The tracking error driven update laws now have around 17000 states, but only a small number of updates is non-zero at each time step due to the local
model approximation structure. The simulations are performed at three different starting
flight conditions with the following trim conditions:
1. h = 5000 m, VT = 200 m/s, α = θ = 2.774 deg;
2. h = 0 m, VT = 250 m/s, α = θ = 2.406 deg;
3. h = 2500 m, VT = 150 m/s, α = θ = 0.447 deg;
where h is the altitude of the aircraft, and all other trim states are equal to zero. Further-
more, two maneuvers are considered:

1. a climbing helical path;

2. a reconnaissance and surveillance maneuver.

This last maneuver involves turns in both directions and some altitude changes. The
simulations of both maneuvers last 300 seconds. The reference trajectories are gener-
ated with second order linear filters to ensure smooth trajectories. The onboard model
in the nominal case contains the low-fidelity data, which means the online model iden-
tification has to compensate for any (small) differences between the low-fidelity data of
the onboard model and high-fidelity data of the aircraft model. To properly evaluate the
effectiveness of the online model identification, all maneuvers will also be performed
with a ±30% deviation in all aerodynamic stability and control derivatives used by the
controller, i.e. it is assumed that the onboard model is very inaccurate. Finally, the same
maneuvers are also simulated with a lockup at ±10 degrees of the left aileron.

7.4.1 Controller Parameter Tuning


The tuning process starts with the selection of the gains of the static control law and the
bandwidths of the command filters. Lyapunov stability theory only requires the control
gains to be larger than zero, but it is natural to select the largest gains for the inner loop.
Larger gains will of course result in smaller tracking errors, but at the cost of more control
effort. It is possible to derive certain performance bounds that can serve as guidelines
for tuning, see e.g. [121]. However, getting the desired closed-loop response is still
an extensive trial-and-error procedure. The control gains were selected as c01 = 0.1,
c02 = 1 × 10−5, c03 = 0.5, c11 = 0.01, c12 = 2.5, c13 = 0.5, C2 = diag(1, 1, 1) and
C3 = diag(2, 2, 2).
The bandwidths of the command filters for the actual control variables δe , δa , δr are
chosen equal to the bandwidths of the F-16 model actuators. The outer loop filters have
the smallest bandwidths. The selection of the other bandwidths is again trial-and-error.
A higher bandwidth in a certain feedback loop will result in more aggressive commands
to the next feedback loop. All damping ratios are equal to 1.0. It is possible to add
magnitude and rate limits to each of the filters. In this study magnitude limits on the
aerodynamic bank angle µ and the flight path angle γ are used to avoid singularities
in the control laws. Rate and magnitude limits, equal to the ones of the actuators, are
enforced on the actual control variables. The selected command filter parameters can be
found in Table 7.2.
As soon as the controller gains and command filter parameters have been defined, the update law gains can be selected. Again the theory only requires that the gains be larger than zero. Larger update gains mean higher learning rates and thus more rapid
changes in the B-spline network weights. It is not difficult to find a gain selection that
results in a good performance at all flight conditions and with the failures considered in
this section. This is probably because all flight path maneuvers are relatively slow and
smooth.

Table 7.2: Command filter parameters.

Command variable ωn (rad/s) mag. limit rate limit


V des 5 − −
γ des 3 ± 80 deg −
µdes 8 ± 80 deg −
αdes 8 − −
pdes 20 − −
q des 20 − −
rdes 10 − −
δe 40.4 ± 25 deg ± 60 deg/s
δa 40.4 ± 21.5 deg ± 80 deg/s
δr 40.4 ± 30 deg ± 120 deg/s

7.4.2 Maneuver 1: Upward Spiral

In this section the results of the numerical simulations of the first test maneuver, the
climbing helical path, are discussed. For each of the three flight conditions five cases
are considered: nominal, the aerodynamic stability and control derivatives used in the
control law perturbed with +30%, and with −30%, w.r.t. the real values of the model,
a lockup of the left aileron at +10 degrees, and a lockup at −10 degrees. No actuator
sensor information is used. In Figure D.6 of Appendix D.2 the results of the simulation
without uncertainty starting at flight condition 1 are plotted. The maneuver involves a
climbing spiral to the left with an increase in airspeed. It can be seen that the control law
manages to track the reference signal very well and that closed-loop tracking is achieved.
The sideslip angle does not become any larger than ±0.02 deg. The aerodynamic bank
angle µ does reach the limit set by the command filter, but this has no consequences
for the performance. The use of dead-zones ensures that the parameter update laws are
indeed not updating during this maneuver without any uncertainties. The responses at
the two other flight conditions are virtually the same, although less thrust is needed due
to the lower altitude of flight condition 2 and the lower airspeed of flight condition 3.
The other control surfaces are also more efficient. This is illustrated in Tables 7.3 to 7.5,
where the mean absolute values (MAVs) of the outer loop tracking errors, control sur-
face deflections and thrust can be found. Plots of the parameter estimation errors are not
included. However, the errors converge to constant values, but not to zero as is common
with Lyapunov based update laws.

The response of the closed-loop system during the same maneuver starting at flight con-
dition 1, but with +30% uncertainty in the aerodynamic coefficients, is shown in Figure
D.7. It can be observed that the tracking errors of the outer loop are now much larger,
but in the end the steady-state tracking error converges to zero. The sideslip angle still

Table 7.3: Maneuver 1 at flight condition 1: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.33, 0.24, 0.24)        (4.63, 0.12, 0.10)       5.59e+04
+30% uncertainty               (4.56, 3.75, 1.07)        (4.59, 0.13, 0.11)       5.57e+04
−30% uncertainty               (5.15, 3.88, 1.10)        (4.68, 0.16, 0.11)       5.62e+04
+10 deg. locked left aileron   (0.39, 0.32, 0.78)        (4.63, 0.56, 0.74)       5.59e+04
−10 deg. locked left aileron   (0.31, 0.25, 1.12)        (4.63, 0.46, 1.16)       5.59e+04

remains within 0.02 degrees. Some small oscillations are visible in Figure D.7J, but these
stay well within the rate and magnitude limits of the actuators. In Tables 7.3 to 7.5 the
MAVs of the tracking errors and control inputs are shown for all flight conditions with
this uncertainty. As was already seen in the plots, the average tracking errors increase, but
the magnitude of the control inputs stays approximately the same. The same simulations
have been performed for a −30% perturbation in the stability and control derivatives used
by the control law, the results are also shown in the tables. It appears that underestimated
initial values of the unknown parameters lead to larger tracking errors than overestimates
for this maneuver.
Finally, the maneuver is performed with the left aileron locked at ±10 degrees, i.e.
δa,damaged = 0.5(δa ± 10π/180). Figure D.8 shows the response at flight condition 3 with
the aileron locked at −10 degrees. Except for some small oscillations in the response of
roll rate p and aileron deflection δa at the start of the simulation, there is no real change
in performance visible. This is confirmed by the numbers of Table 7.5. However, from
Tables 7.3 and 7.4 it can be observed that aileron and rudder deflections become larger
for both locked aileron failure cases, while tracking performance hardly declines.

Table 7.4: Maneuver 1 at flight condition 2: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.30, 0.23, 0.21)        (3.97, 0.14, 0.21)       3.14e+04
+30% uncertainty               (1.55, 1.33, 0.41)        (3.96, 0.15, 0.23)       3.14e+04
−30% uncertainty               (2.01, 1.53, 0.52)        (3.98, 0.15, 0.20)       3.14e+04
+10 deg. locked left aileron   (0.36, 0.33, 0.72)        (3.97, 0.25, 1.20)       3.14e+04
−10 deg. locked left aileron   (0.30, 0.28, 1.01)        (3.96, 0.40, 1.52)       3.14e+04

Table 7.5: Maneuver 1 at flight condition 3: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.33, 0.22, 0.27)        (3.37, 0.08, 0.08)       4.41e+04
+30% uncertainty               (2.01, 1.43, 0.61)        (3.40, 0.10, 0.08)       4.44e+04
−30% uncertainty               (2.16, 1.49, 0.77)        (3.38, 0.09, 0.08)       4.41e+04
+10 deg. locked left aileron   (0.32, 0.33, 0.29)        (3.38, 0.08, 0.09)       4.41e+04
−10 deg. locked left aileron   (0.34, 0.24, 0.30)        (3.38, 0.08, 0.09)       4.41e+04

7.4.3 Maneuver 2: Reconnaissance

The second maneuver, called reconnaissance and surveillance, involves turns in both di-
rections and altitude changes, but airspeed is kept constant. Plots of the simulation at
flight condition 3 with −30% uncertainty are shown in Figure D.9. Tracking perfor-
mance is again excellent and the steady-state tracking errors converge to zero. There are
some small oscillations in the rudder deflection, but these are within the limits of the
actuator. To provide some insight in the online estimation process, the time histories of
the estimated coefficient errors are plotted in Figure D.10. The errors in the individual
components of the force and moment coefficients do in general not converge to the true
error values, as is expected with Lyapunov based update laws. However, the total force
and moment coefficients are identified correctly which explains the good tracking per-
formance.
The MAVs of the tracking errors and control inputs are compared with the ones for the
nominal case in Table 7.8. It can be observed that the average tracking errors have not
increased much for this uncertainty case. The degradation of performance for the uncer-
tainty cases is somewhat worse at the other two flight conditions as can be seen in Tables
7.6 and 7.7. The sideslip angle always remains within 0.05 degrees for all flight condi-
tions and uncertainties. Corresponding with the results of maneuver 1 overestimation of
the unknown parameters again leads to smaller tracking errors.

Table 7.6: Maneuver 2 at flight condition 1: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.42, 0.39, 0.46)        (3.17, 0.16, 0.13)       2.25e+04
+30% uncertainty               (2.69, 2.30, 1.13)        (3.16, 0.16, 0.14)       2.25e+04
−30% uncertainty               (3.02, 2.40, 1.12)        (3.19, 0.18, 0.14)       2.25e+04
+10 deg. locked left aileron   (0.43, 0.40, 0.45)        (3.17, 0.17, 0.16)       2.25e+04
−10 deg. locked left aileron   (0.42, 0.39, 0.46)        (3.17, 0.17, 0.15)       2.25e+04

Simulations of maneuver 2 with the locked aileron are also performed. Figure D.11
shows the results for flight condition 1 with a locked aileron at +10 degrees. Some very
small oscillations are again visible in the roll rate, aileron and rudder responses, but
tracking performance is good and the steady-state convergence is achieved. Table 7.6
confirms that the results of the simulations with actuator failure hardly differ from the
nominal one. There is only a small increase in the use of the lateral control surfaces. The
same holds at the other flight conditions as can be seen in Tables 7.7 and 7.8.

Table 7.7: Maneuver 2 at flight condition 2: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.58, 0.49, 0.34)        (2.95, 0.18, 0.21)       1.62e+04
+30% uncertainty               (1.27, 1.10, 0.48)        (2.95, 0.19, 0.22)       1.62e+04
−30% uncertainty               (1.73, 1.24, 0.55)        (2.97, 0.19, 0.21)       1.61e+04
+10 deg. locked left aileron   (0.58, 0.50, 0.35)        (2.95, 0.20, 0.22)       1.62e+04
−10 deg. locked left aileron   (0.59, 0.51, 0.34)        (2.95, 0.22, 0.22)       1.62e+04

Table 7.8: Maneuver 2 at flight condition 3: Mean absolute values of the tracking errors and
control inputs.

Case                           (z01, z02, z03)_MAV (m)   (δe, δa, δr)_MAV (deg)   T_MAV (N)
nominal                        (0.49, 0.40, 0.56)        (2.39, 0.12, 0.12)       2.33e+04
+30% uncertainty               (0.97, 0.78, 0.54)        (2.39, 0.12, 0.13)       2.33e+04
−30% uncertainty               (0.97, 0.56, 0.85)        (2.40, 0.13, 0.12)       2.33e+04
+10 deg. locked left aileron   (0.48, 0.40, 0.58)        (2.39, 0.12, 0.13)       2.33e+04
−10 deg. locked left aileron   (0.49, 0.40, 0.56)        (2.40, 0.13, 0.13)       2.33e+04

7.5 Conclusions
In this chapter, a nonlinear adaptive flight path control system is designed for a high-
fidelity F-16 model. The controller is based on a backstepping approach with four feed-
back loops which are designed using a single control Lyapunov function to guarantee
stability. The uncertain aerodynamic forces and moments of the aircraft are approxi-
mated online with B-spline neural networks for which the weights are adapted by Lya-
punov based update laws. Numerical simulations of two test maneuvers were performed
at several flight conditions to verify the performance of the control law. Actuator failures
and uncertainties in the stability and control derivatives were introduced to evaluate the
parameter estimation process.
Several observations can be made based on the simulation results:

1. The results show that trajectory control can still be accomplished with the inves-
tigated uncertainties and failures, while good tracking performance is maintained.
Compared to other nonlinear adaptive trajectory control designs found in litera-
ture, such as standard adaptive backstepping or sliding mode control in combina-
tion with feedback linearization, the approach is much simpler to apply, while the
online estimation process is more robust to saturation effects.

2. The flight envelope partitioning approach used to simplify the estimation process
makes real-time implementation of the adaptive control system feasible, while it
also keeps the estimation process more transparent. All performed simulations
easily run in real time in MATLAB/Simulink© with a standard third-order solver at 100 Hz.

3. In the general case, a detailed design study is needed to define the necessary par-
titions and approximator structure. For the F-16 aerodynamic model earlier modeling studies have already been performed and the data is already available in a
suitable tabular form.
4. Tuning of the integrated update laws of the backstepping controller is, in general,
a time consuming trial-and-error process, since increasing the gains can lead to
unexpected closed-loop system behavior. However, the maneuvers flown with the
trajectory controller are relatively slow and smooth, especially for this fighter air-
craft model. This smooth maneuvering simplified the tuning of the update gains,
since it was not hard to find a gain selection that provided adequate performance
for all considered failure scenarios and flight conditions. However, in Chapter 6
more aggressive maneuvering with a much simpler aircraft model was considered,
while finding an update gain selection that gave good performance at both flight
conditions for all failure types was much more difficult, if not impossible. In the
next chapter the stability and control augmentation system design for the F-16
model is considered and simulations involving more aggressive maneuvering will
again be performed. Hence, update gain tuning is expected to be much more time
consuming.
Chapter 8
F-16 Stability and Control
Augmentation Design

This chapter once again considers an adaptive flight control design for the high-fidelity
F-16 model, but here a stability and control augmentation system (SCAS) is developed in-
stead of a trajectory autopilot. This means that the flight control system must provide the
pilot with the handling qualities he or she desires. Command filters are used to enforce
these handling qualities and a frequency response analysis is included to verify that they
have been satisfied in the nominal case. The flight envelope partitioning method, which
results in multiple local models, is again used to simplify the online model identification.
In the final part of the chapter the constrained adaptive backstepping based SCAS is
compared with the baseline F-16 flight control system and an adaptive flight control sys-
tem that makes use of a least squares identifier in several realistic maneuvers and failure
scenarios. Furthermore, sensor models and time delays are introduced in the numerical
simulations.

8.1 Introduction
Nowadays most modern fighter aircraft are designed with relaxed static stability, or are even unstable in certain modes, to allow for extreme maneuverability. As a result these aircraft
have to be equipped with a stability and control augmentation system (SCAS) that ar-
tificially stabilizes the aircraft and provides the pilot with desirable flying and handling
qualities. Briefly stated, the flying and handling qualities of an aircraft are those proper-
ties which describe the ease and effectiveness with which it responds to pilot commands
in the execution of a flight task [45]. Flying qualities can be seen as being task related,
while handling qualities are response related.
In this chapter the constrained adaptive backstepping approach with B-spline networks


is used to design a SCAS for a nonlinear, high-fidelity F-16 model which satisfies the
handling qualities requirements [1] across the entire flight envelope of the model. It is
assumed that the aerodynamic force and moment functions of the model are not known
exactly and that they can change during flight due to structural damage or control surface
failures. There is plenty of literature available on adaptive backstepping designs for the
control of aircraft and missiles, see e.g. [107, 183]. However, none of these publications
considers the flying qualities during the controller design phase or performs a handling
qualities evaluation after the design is finished. An exception is [93], where a longi-
tudinal adaptive backstepping controller is designed for a simplified supersonic aircraft
model. The controller parameters are tuned explicitly via short period handling qualities
specifications [1]. The work in this chapter considers a full six degrees-of-freedom high-
fidelity aircraft model and enforces the handling qualities requirements with command
filters during the control design process.
A second adaptive SCAS is designed using the modular adaptive backstepping method
with recursive least squares as detailed in Section 6.2. In this method the control law and identifier are designed as separate modules, as is often done in adaptive control for linear systems. Since the certainty equivalence principle does not hold in general for nonlinear systems, the modular control law has to be robustified against the time-varying character of the parameter estimates. The estimation error and the derivative of the parameter estimate are viewed as unknown disturbance inputs, which are attenuated by adding
nonlinear damping terms to the control law. As identifier the well-established recursive
least-squares method in combination with an abrupt change detection algorithm is used.
As was illustrated in Chapter 6, a potential advantage of the modular method is that the
true values of the uncertain parameters can be found since the estimation is not driven by
the tracking error but rather by the state of the system.
Both fault tolerant SCAS systems are compared with the baseline F-16 flight control
system in numerical simulations where the F-16 model suffers several types of sudden
changes in the dynamic behavior. The comparison focuses on the performance, estima-
tion accuracy, computation time and controller tuning. In the first part of the chapter both
adaptive flight control designs are derived. In the second part the tuning of the controllers
and the handling qualities analysis are discussed, followed by the results of the numerical
simulations in MATLAB/Simulink c .

8.2 Flight Control Design


A full description of the F-16 model together with all necessary data can be found in
Chapter 2; the relevant equations of motion are repeated here for the sake of convenience:
$$
\dot V_T = \frac{1}{m}\left(-D + F_T\cos\alpha\cos\beta + mg_1\right) \tag{8.1}
$$
$$
\dot\alpha = q_s - p_s\tan\beta + \frac{-L - F_T\sin\alpha + mg_3}{mV_T\cos\beta} \tag{8.2}
$$
$$
\dot\beta = -r_s + \frac{Y - F_T\cos\alpha\sin\beta + mg_2}{mV_T} \tag{8.3}
$$
$$
\dot p = \left(c_1 r + c_2 p\right)q + c_3\bar L + c_4\bar N + H_{eng}\,q \tag{8.4}
$$
$$
\dot q = c_5\,pr - c_6\left(p^2 - r^2\right) + c_7\bar M - H_{eng}\,r \tag{8.5}
$$
$$
\dot r = \left(c_8 p - c_2 r\right)q + c_4\bar L + c_9\bar N + H_{eng}\,q \tag{8.6}
$$

The goal of this study is to design a SCAS that tracks pilot commands with responses
that satisfy the handling qualities, across the entire flight envelope of the aircraft, in the
presence of uncertain aerodynamic parameters. The pilot commands should control the responses as follows: longitudinal stick deflection commands the angle of attack $\alpha^0_{com}$, lateral stick deflection commands the stability-axis roll rate $p^0_{s,com}$ and the pedals command the sideslip angle $\beta^0_{com}$. The total velocity command $V^0_{T,com}$ is achieved with the total engine thrust FT, which is in turn controlled with the throttle lever deflection. The commanded signals are fed through command filters to produce the signals αcom, βcom, ps,com, VT,com and their derivatives. The command filters are also used for specifying
the desired aircraft handling qualities.

8.2.1 Outer Loop Design


The control design procedure starts by defining the new tracking error states as
   
$$
Z_1 = \begin{pmatrix} V_T \\ \alpha \\ \beta \end{pmatrix} - \begin{pmatrix} V_{T,com} \\ \alpha_{com} \\ \beta_{com} \end{pmatrix} = X_1 - X_{1,com} \tag{8.7}
$$
$$
Z_2 = \begin{pmatrix} p_s \\ q_s \\ r_s \end{pmatrix} - \begin{pmatrix} p_{s,com} \\ q_{s,des} \\ r_{s,des} \end{pmatrix} = X_2 - X_{2,com}, \tag{8.8}
$$

with qs,des and rs,des the intermediate control laws that will be defined by the adaptive
backstepping controller. The time derivative of Z1 can be written as
 
$$
\dot Z_1 = A_1 F_1 + H_1 + B_{11}X_2 + B_{12}\begin{pmatrix} F_T \\ 0 \\ 0 \end{pmatrix} - \dot X_{1,com}, \tag{8.9}
$$
where
$$
A_1 = \frac{1}{mV_T}\begin{pmatrix} 0 & 0 & -V_T \\ -\dfrac{1}{\cos\beta} & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
H_1 = \frac{1}{m}\begin{pmatrix} mg_1 \\ -p_s\tan\beta + \dfrac{-F_T\sin\alpha + mg_3}{V_T\cos\beta} \\ \dfrac{-F_T\cos\alpha\sin\beta + mg_2}{V_T} \end{pmatrix},
$$
$$
B_{11} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad
B_{12} = \begin{pmatrix} \cos\alpha\cos\beta & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
$$

are known (matrix) functions, and F1 = [L, Y, D]T is a vector containing the uncertain
aerodynamic forces. Furthermore, let
FT0
 
 
0
 qs,des  = B1−1 − C1 Z1 −K1 Λ1 −A1 F̂1 −H1 + Ẋ1,com −B11 Ξ2 ,
0
rs,des
(8.10)
Rt
where B1 = B11 + B12 and Λ1 = 0 Z̄1 (t)dt be a feedback control law with C1 =
C1T > 0, K1 = K1T ≥ 0, F̂1 the estimate of F1 , Ξ2 and Z̄1 to be defined later. The
estimate of the aerodynamic forces F̂1 is defined as

F̂1 = ΦTF1 (X, U )Θ̂F1 , (8.11)

where ΦTF1 is the known regressor function and Θ̂F1 is a vector with unknown constant
parameters. It is assumed that there exists a vector ΘF1 such that

F1 = ΦTF1 (X, U )ΘF1 , (8.12)

so that the estimation error can be defined as Θ̃F1 = ΘF1 − Θ̂F1 . Part of the feedback
control law (8.10) is now fed through second order low pass filters to produce the signals
FT , qs,des , rs,des and their derivatives. These filters can also be used to enforce rate and
magnitude limits on the signals, see the appendix of [61]. The effect that the use of these
command filters has on the tracking errors can be captured with the stable linear filter
$$
\dot\Xi_1 = -C_1\Xi_1 + B_{11}\left(X_{2,com} - X_{2,com}^0\right) + B_{12}\begin{pmatrix} F_T - F_T^0 \\ 0 \\ 0 \end{pmatrix}. \tag{8.13}
$$
Define the modified tracking errors as
Z̄i = Zi − Ξi , i = 1, 2. (8.14)

8.2.2 Inner Loop Design


Taking the derivative of Z2 results in

Ż2 = A2 (F2 + G2 U ) + H2 − Ẋ2,com (8.15)


T
where U = (δe , δa , δr ) is the control vector,
 
c3 0 c4
A2 = Ts/b  0 c7 0  ,
c4 0 c9
   
rs (c1 r + c2 p) q +c4 he q
H2 =  0  α̇ + Ts/b  c5 pr − c6 p2 − r2 − c7 he r  ,
ps (c8 p − c2 r) q + c9 he q

are known (matrix) functions, and


   
$$
F_2 = \begin{pmatrix} \bar L_0 \\ \bar M_0 \\ \bar N_0 \end{pmatrix}, \qquad
G_2 = \begin{pmatrix} \bar L_{\delta_e} & \bar L_{\delta_a} & \bar L_{\delta_r} \\ \bar M_{\delta_e} & \bar M_{\delta_a} & \bar M_{\delta_r} \\ \bar N_{\delta_e} & \bar N_{\delta_a} & \bar N_{\delta_r} \end{pmatrix},
$$
are unknown (matrix) functions containing the aerodynamic moment components. Note
that for a more convenient presentation the aerodynamic moments have been decom-
posed, e.g.
M̄ (X, U ) = M̄0 (X, U ) + M̄δe δe + M̄δa δa + M̄δr δr (8.16)
where the higher order control surface dependencies are still contained in M̄0 (X, U ). To
stabilize the system (8.15) the desired control U 0 is defined as
$$
A_2\hat G_2 U^0 = -C_2 Z_2 - K_2\Lambda_2 - B_{11}^T\bar Z_1 - A_2\hat F_2 - H_2 + \dot X_{2,com}, \tag{8.17}
$$
where $\Lambda_2 = \int_0^t \bar Z_2(\tau)\,d\tau$ with $C_2 = C_2^T > 0$, $K_2 = K_2^T \ge 0$ and where $\hat F_2$ and $\hat G_2$
are the estimates of the unknown nonlinear aerodynamic moment functions F2 and G2 ,
respectively. The estimates are defined as
F̂2 = ΦTF2 (X, U )Θ̂F2 (8.18)
Ĝ2j = ΦTG2j (X)Θ̂G2j for j = 1, 2, 3 (8.19)

where ΦTF2 , ΦTG2j are the known regressor functions and Θ̂F2 , Θ̂G2j are vectors with un-
known constant parameters, also note that Ĝ2j represents the jth column of Ĝ2 . It is
assumed that there exist vectors ΘF2 , ΘG2j such that
F2 = ΦTF2 (X, U )ΘF2
G2j = ΦTG2j (X)ΘG2j . (8.20)

This means the estimation errors can be defined as Θ̃F2 = ΘF2 − Θ̂F2 and Θ̃G2j =
ΘG2j − Θ̂G2j . The actual control U is found by again applying command filters, as was
also done in the outer loop design. Finally, with the definition of the stable linear filter
$$
\dot\Xi_2 = -C_2\Xi_2 + A_2\hat G_2\left(U - U^0\right), \tag{8.21}
$$
the static part of the control design is finished.

8.2.3 Update Laws and Stability Properties


In this section the stability properties of the control law are discussed and dynamic update
laws for the unknown parameters are derived. Define the control Lyapunov function
$$
V = \frac{1}{2}\sum_{i=1}^{2}\left(\bar Z_i^T\bar Z_i + \Lambda_i^T K_i\Lambda_i\right)
+ \frac{1}{2}\left(\mathrm{trace}\left(\tilde\Theta_{F_1}^T\Gamma_{F_1}^{-1}\tilde\Theta_{F_1}\right)
+ \mathrm{trace}\left(\tilde\Theta_{F_2}^T\Gamma_{F_2}^{-1}\tilde\Theta_{F_2}\right)
+ \sum_{j=1}^{3}\mathrm{trace}\left(\tilde\Theta_{G_{2j}}^T\Gamma_{G_{2j}}^{-1}\tilde\Theta_{G_{2j}}\right)\right)
$$

with the update gains matrices ΓF1 = ΓTF1 > 0, ΓF2 = ΓTF2 > 0 and ΓG2j = ΓTG2j > 0.
Selecting the update laws

$$
\begin{aligned}
\dot{\hat\Theta}_{F_1} &= \Gamma_{F_1}\Phi_{F_1}A_1^T\bar Z_1 \\
\dot{\hat\Theta}_{F_2} &= \Gamma_{F_2}\Phi_{F_2}A_2^T\bar Z_2 \\
\dot{\hat\Theta}_{G_{2j}} &= P_{G_{2j}}\left(\Gamma_{G_{2j}}\Phi_{G_{2j}}A_2^T\bar Z_2 U_j\right)
\end{aligned}\tag{8.22}
$$

and substituting (8.10), (8.13), (8.21) and (7.37) reduces the derivative of V along the
trajectories of the closed-loop system to

V̇ = −Z̄1T C1 Z̄1 − Z̄2T C2 Z̄2 , (8.23)

which is negative semi-definite. By using Theorem 3.7 it can be shown that Z̄ → 0 as


t → ∞. When the command filters are properly designed and the limits on the filters are not in effect, Z̄i will remain in a close neighborhood of Zi. If the limits are in effect the actual tracking errors Zi may increase, but the modified tracking errors Z̄i will still converge to zero and the update laws will not unlearn, since they are driven by the modified tracking error definitions. Note that the update laws for Ĝ2 include a projection operator to ensure that certain elements of the matrix do not change sign and that full rank is always maintained. For most elements the sign is known from physical principles.
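A scalar, sign-preserving projection of this kind can be sketched as follows. This is a generic illustration, not necessarily the exact operator used in the thesis; the margin and function name are assumptions.

    def project_sign(theta_hat, theta_dot, known_sign, margin=1e-4):
        """Stop an update that would push a parameter across zero.

        theta_hat  : current estimate of an element with physically known sign
        theta_dot  : proposed update from (8.22)
        known_sign : +1 or -1
        """
        # Freeze the update only when the estimate sits on the boundary and the
        # update points into the forbidden region; otherwise pass it through.
        on_boundary = known_sign * theta_hat <= margin
        pushing_out = known_sign * theta_dot < 0.0
        return 0.0 if (on_boundary and pushing_out) else theta_dot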

8.3 Integrated Model Identification


As was explained in Chapter 7, to simplify the approximation of the unknown aerody-
namic force and moment functions, and thereby reduce the computational load to make real-time implementation feasible, the flight envelope is partitioned into multiple connecting operating regions.
In the previous section parameter update laws (8.22) for the unknown aerodynamic func-
tions (8.11)-(8.19) were defined. Now these unknown vectors and known regressor vec-
tors will be further specified. The total force and moment approximations are written in
the standard coefficient notation. The total nonlinear function approximations are divided
into simpler linear-in-the-parameter nonlinear coefficient approximations, e.g.

ĈL0 (α, β) = ϕTCL0 (α, β)θ̂CL0 , (8.24)

where the unknown parameter vector θ̂CL0 contains the network weights, i.e. the un-
known parameters, and ϕCL0 is a regressor vector containing the B-spline basis func-
tions. All other coefficient estimates are defined in similar fashion. In this case a two-
dimensional network is used with input nodes for α and β. Different scheduling param-
eters can be selected for each unknown coefficient. In this chapter third order B-splines spaced 2.5 degrees apart and up to three scheduling variables (α, β, δe, depending on the coefficient) are once again used. With these approximators sufficient model accuracy is obtained. Following the notation of (7.50) the estimates of the aerodynamic forces and moments can be written as
$$
\begin{aligned}
\hat L &= \Phi_L^T(\alpha,\beta,\delta_e)\hat\Theta_L, & \hat{\bar L} &= \Phi_{\bar L}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar L}, \\
\hat Y &= \Phi_Y^T(\alpha,\beta,\delta_e)\hat\Theta_Y, & \hat{\bar M} &= \Phi_{\bar M}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar M}, \\
\hat D &= \Phi_D^T(\alpha,\beta,\delta_e)\hat\Theta_D, & \hat{\bar N} &= \Phi_{\bar N}^T(\alpha,\beta,\delta_e)\hat\Theta_{\bar N},
\end{aligned}\tag{8.25}
$$
which is a notation equivalent to the one used in (8.11)-(8.19). Therefore, the update laws
(8.22) can be used to adapt the B-spline network weights. A scheme of the integrated
adaptive backstepping controller can be found in Figure 6.4 of Chapter 6.

8.4 Modular Model Identification


An alternative to the Lyapunov-based indirect adaptive laws of the previous section is to
separately design the identifier and the control law. This approach is referred to as the
modular control design and was discussed in Section 6.2. The modular adaptive design
is not limited to Lyapunov-based identifiers, but allows for more freedom in the selection
of model identification. Especially (recursive) least-squares identification is of interest,
since it is considered to have good convergence properties and its parameter estimates
converge to true, constant values if the system is sufficiently excited.
A comparison of Lyapunov and least-squares model identification for a simplified air-
craft model in Chapter 6 demonstrated the more accurate approximation potential of the
latter approach. A disadvantage of the design is that nonlinear damping terms have to be
used to robustify the controller against the slowness of the parameter estimation method.
These nonlinear damping terms can lead to high gain control and related numerical prob-
lems. Another disadvantage is that the least-squares identifier with nonlinear regressor
filter is of a much higher dynamical order than the Lyapunov identifier of the integrated
model identification method.
First, the intermediate control (8.10) and the control (7.37) are augmented with the addi-
tional nonlinear damping terms −S1 Z̄1 and −S2 Z̄2 respectively, where
$$
S_1 = \kappa_1 A_1\Phi_{F_1}^T\Phi_{F_1}A_1^T \tag{8.26}
$$
$$
S_2 = \kappa_2 A_2\Phi_{F_2}^T\Phi_{F_2}A_2^T + A_2\left(\sum_{j=1}^{3}\kappa_{2j}\Phi_{G_{2j}}^T\Phi_{G_{2j}}U_j^2\right)A_2^T \tag{8.27}
$$

with the scalar gains $\kappa_1, \kappa_2, \kappa_{2j} > 0$, $j = 1, 2, 3$. With these additional terms the derivative of the control Lyapunov function V (6.46) becomes
$$
\begin{aligned}
\dot V &= -\bar Z_1^T\left(C_1 + S_1\right)\bar Z_1 - \bar Z_2^T\left(C_2 + S_2\right)\bar Z_2 + \bar Z_1^T A_1\Phi_{F_1}^T\tilde\Theta_{F_1}
+ \bar Z_2^T A_2\left(\Phi_{F_2}^T\tilde\Theta_{F_2} + \sum_{j=1}^{3}\Phi_{G_{2j}}^T\tilde\Theta_{G_{2j}}U_j\right) \\
&\le -\bar Z_1^T C_1\bar Z_1 - \bar Z_2^T C_2\bar Z_2 + \frac{1}{4\kappa_1}\tilde\Theta_{F_1}^T\tilde\Theta_{F_1} + \frac{1}{4\kappa_2}\tilde\Theta_{F_2}^T\tilde\Theta_{F_2} + \sum_{j=1}^{3}\frac{1}{4\kappa_{2j}}\tilde\Theta_{G_{2j}}^T\tilde\Theta_{G_{2j}},
\end{aligned}\tag{8.28}
$$

which demonstrates that the controller achieves boundedness of the modified tracking
errors Z̄i if the parameter estimation errors are bounded. The size of the bounds is de-
termined by the damping gains κ∗ . Nonlinear damping terms have to be used with care,
since they may result in large control effort for large signals and thereby have an adverse
effect on the robustness of the control scheme. An alternative is the use of so-called
composite update laws that include both a tracking-error-based and an estimation-based update term [186].
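As an illustration, the damping matrices of (8.26) and (8.27) amount to a few matrix products per time step. A NumPy sketch with assumed variable names and shapes follows; it is not the thesis code.

    import numpy as np

    def nonlinear_damping(A1, Phi_F1, A2, Phi_F2, Phi_G2, U,
                          kappa1=0.01, kappa2=0.01, kappa2j=(0.01, 0.01, 0.01)):
        """Damping matrices S1, S2 of (8.26)-(8.27).

        Phi_F1, Phi_F2 : regressor matrices of shape (n_params, 3)
        Phi_G2         : list of three regressor matrices, one per control input
        U              : control surface deflections (delta_e, delta_a, delta_r)
        """
        S1 = kappa1 * A1 @ Phi_F1.T @ Phi_F1 @ A1.T
        S2 = kappa2 * A2 @ Phi_F2.T @ Phi_F2 @ A2.T
        for j in range(3):
            # Each control channel contributes a damping term scaled by U_j^2.
            S2 += kappa2j[j] * (U[j] ** 2) * (A2 @ Phi_G2[j].T @ Phi_G2[j] @ A2.T)
        return S1, S2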
The resulting input-to-state stable controller allows the use of any identifier which can
independently guarantee that the parameter estimation errors are bounded. However, to
be able to use recursive least-squares techniques a swapping scheme is needed to account
for the time-varying behavior of the parameter estimates. The idea behind the swapping
technique is to use regressor filtering to convert the dynamic parametric system into a
static form in such a way that standard parameter estimation algorithms can be used. In
this study an x-swapping filter is used, which is defined as
$$
\dot\Omega_0 = \left(A_0 - \rho F^T(X,U)F(X,U)P\right)\left(\Omega_0 + X\right) - H(X,U) \tag{8.29}
$$
$$
\dot\Omega^T = \left(A_0 - \rho F^T(X,U)F(X,U)P\right)\Omega^T + F^T(X,U) \tag{8.30}
$$
$$
\epsilon = X + \Omega_0 - \Omega^T\hat\Theta, \tag{8.31}
$$

where H(X, U ) are the known dynamics, F (X, U ) is the known regressor matrix, ρ > 0
and A0 is an arbitrary constant matrix such that

P A0 + AT0 P = −I, P = P T > 0. (8.32)

The least-squares update law for Θ̂ and the covariance update are defined as

$$
\dot{\hat\Theta} = \Gamma\,\frac{\Omega\epsilon}{1 + \nu\,\mathrm{trace}\left(\Omega^T\Gamma\Omega\right)} \tag{8.33}
$$
$$
\dot\Gamma = -\frac{\Gamma\Omega\Omega^T\Gamma - \lambda\Gamma}{1 + \nu\,\mathrm{trace}\left(\Omega^T\Gamma\Omega\right)}, \tag{8.34}
$$

where ν ≥ 0 is the normalization coefficient and λ ≥ 0 is the forgetting factor. By Lemma 6.1 the modular controller with x-swapping filters and least-squares update law achieves global asymptotic tracking of the modified tracking errors. Note that flight envelope partitioning is again used for the modular design; only the parameters of the locally
valid nonlinear linear-in-the-parameter models in the current partitions are updated at
each time step. Although the whole updating process is slightly different, the same B-
spline neural networks are used. In this way the modular adaptive design has the same
memory capabilities as the integrated design. Note that for the modular adaptive design
the covariance matrix also has to be stored in each partition, which leads to a significant
increase in identifier states. However, again only a few partitions are updated at each
time step.
Despite using a mild forgetting factor in (8.34), the covariance matrix can become small after a period of tracking, which reduces the ability of the identifier to adjust to abrupt changes in the system parameters. A possible solution is to reset the covariance matrix Γ when a sudden change is detected. After an abrupt change in the system
parameters, the estimation error will be large. Therefore a good monitoring candidate is
the ratio between the current estimation error and the mean estimation error over an inter-
val tǫ . After a failure, the estimation error will be large compared to the mean estimation
error, and thus an abrupt change is declared when

$$
\frac{\epsilon - \bar\epsilon}{\bar\epsilon} > T_\epsilon \tag{8.35}
$$

where Tǫ is a predefined threshold. This threshold should be chosen large enough such
that measurement noise and other disturbances do not trigger the resetting, and suffi-
ciently small such that failures will trigger resetting. For the B-spline partitioned identi-
fier, Tǫ is weighted by the degree of membership of the partition. Due to this modification
partitions with low degree of membership, hence relatively inactive partitions, are more
unlikely to reset, while the active partitions will reset normally if required. The modular
scheme was already depicted in Figure 6.4.

8.5 Controller Tuning and Command Filter Design


In this section the gains of the adaptive controllers are tuned and the handling qualities
for the undamaged aircraft model are investigated. The goal of the control laws is to
provide the pilot with Level 1 handling qualities throughout the whole flight envelope of
the aircraft model as specified in MIL-STD-1797B [1]. The reference command filters
can be used to convert the commands of the pilot into smooth reference signals for the
control law as
$$
\frac{p_{s,com}}{p_{s,com,0}} = \frac{1}{T_p s + 1}, \qquad
\frac{\alpha_{com}}{\alpha_{com,0}} = \frac{\omega_\alpha^2}{s^2 + 2\zeta_\alpha\omega_\alpha s + \omega_\alpha^2}, \qquad
\frac{\beta_{com}}{\beta_{com,0}} = \frac{\omega_\beta^2}{s^2 + 2\zeta_\beta\omega_\beta s + \omega_\beta^2},
$$
where Tp = 0.5, ζα = ζβ = 0.8, ωβ = 1.25 and ωα is a linear function of the dynamic pressure q̄, with value 2.5 for low q̄ and 6.5 for high q̄. After a trial-and-error procedure,
the controller gains are selected as C1 = 0.5I, C2 = I and the integral gains as
   
$$
K_1 = \begin{pmatrix} 0.2 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 0.2 \end{pmatrix}, \qquad
K_2 = \begin{pmatrix} 0.5 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
$$

The nonlinear damping gains for the modular adaptive controller are all taken equal to
0.01. The update laws (8.22) for the integrated design are robustified against parameter
drift with continuous dead-zones and leakage terms. The update gains are all selected
positive definite and tuned in a trial-and-error procedure. As expected, tuning the update

laws of the integrated adaptive controller such that they give a good performance at all
flight conditions is again a very difficult and time consuming process. Selecting update
gains too large can easily result in undesired oscillatory behavior.
Low Order Equivalent System (LOES) analysis of frequency responses, obtained from
frequency sweeps (0.2→12 rad/s) performed at twenty flight conditions over the entire
operating range, was used as the primary means to verify the handling qualities in the
nominal case. The flight conditions used for verification are shown in Figure 8.1.

Figure 8.1: Flight conditions for handling qualities analysis.

The transformation of the time history data from the sweeps into the frequency domain and the transfer function fitting were done with the commercially available software package CIFER©. Good fitting results were achieved at all test flight conditions. The following LOES are considered:
$$
\frac{p}{\delta_{roll}} = \frac{K_\phi\, s\left(s^2 + 2\zeta_\phi\omega_\phi s + \omega_\phi^2\right)e^{-\tau_p s}}{\left(s + 1/T_s\right)\left(s + 1/T_r\right)\left(s^2 + 2\zeta_d\omega_d s + \omega_d^2\right)}
$$
$$
\frac{q}{\delta_{pitch}} = \frac{K_\theta\, s\left(s + 1/T_{\theta_1}\right)\left(s + 1/T_{\theta_2}\right)e^{-\tau_q s}}{\left(s^2 + 2\zeta_p\omega_p s + \omega_p^2\right)\left(s^2 + 2\zeta_{sp}\omega_{sp} s + \omega_{sp}^2\right)}
$$
$$
\frac{\beta}{\delta_{yaw}} = \frac{A_\beta\left(s + 1/T_{\beta_1}\right)\left(s + 1/T_{\beta_2}\right)\left(s + 1/T_{\beta_3}\right)e^{-\tau_\beta s}}{\left(s + 1/T_s\right)\left(s + 1/T_r\right)\left(s^2 + 2\zeta_d\omega_d s + \omega_d^2\right)}.
$$
For level 1 handling qualities the LOES parameters must satisfy the ranges
$$
0.28 \le \mathrm{CAP} \le 3.6, \qquad T_r \le 1.0\ \mathrm{s}, \qquad \omega_{sp} > 1.0\ \mathrm{rad/s}, \qquad \zeta_d \ge 0.4, \qquad 0.35 \le \zeta_{sp} \le 1.3, \qquad \zeta_d\omega_d \ge 0.4\ \mathrm{rad/s},
$$
where $\mathrm{CAP} = \omega_{sp}^2/(n_z/\alpha)$ is the Control Anticipation Parameter, and the equivalent time delays $\tau_*$ must be less than 0.10 seconds. Guidelines for estimating the substantial number of parameters in the LOES transfer functions are given in [1, 47]. For the longitudinal
account as recommended by [199]. Plots of the CAP versus the short period frequency
ωsp can be found in Figure 8.2, while the bandwidth criterion plot appears in Figure 8.3.
It can be seen that both criteria predict level 1 handling qualities. Short period damping
ζsp values were between 0.60 and 0.82, while the largest effective time delay was 0.084 s.
The gain margin was larger than 6 dB and the phase margin larger than 45 deg at all test
conditions. Finally, the Neal-Smith criterion [12] also predicts level 1 handling qualities.
The Neal-Smith method estimates the amount of pilot compensation required to prevent
pilot-in-the-loop resonance.

Figure 8.2: LOES short period frequency estimates (ωsp versus nz/α for Category A flight phases, with level 1, 2 and 3 boundaries).

Plots of the LOES roll mode time constant and effective time delay requirements can be
found in Figure 8.4 and the LOES Dutch roll frequency ωd and damping ζd requirements
in Figure 8.5. The figures demonstrate that also for lateral maneuvering all criteria for
level 1 handling qualities are met.

8.6 Numerical Simulations and Results


This section presents numerical simulation results from the application of the control sys-
tems developed in the previous sections to the high-fidelity, six-degrees-of-freedom F-16
model in a number of failure scenarios and maneuvers. The controllers are evaluated on
160 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 8.6

Figure 8.3: Pitch attitude bandwidth vs. phase delay (phase delay τp versus pitch attitude bandwidth ωBW, with Category A and C boundaries).


Figure 8.4: Roll mode time constant and effective time delay (plotted versus dynamic pressure in N/m², with level 1, 2 and 3 boundaries).

their tracking performance and parameter estimation accuracy. Both the control laws and
the aircraft model are written as C S-functions in MATLAB/Simulink c . Sensor models
taken from [63] and transport delays of 20 ms have been added to the controller to model
an onboard computer implementation of the control laws.
The analysis in the previous part demonstrates that it is quite straightforward to use the
command filters to enforce desired handling qualities of the adaptive backstepping con-
trollers. However, one of the goals in this section is to compare the adaptive designs
directly with the baseline F-16 control system of Section 2.5. For the purpose of this
comparison, the command filters, stick shaping functions and command limiting func-
tions in the numerical simulations are selected in such a way that the response of the
adaptive designs on the nominal F-16 model will be approximately the same as the base-
line control system response over the entire flight envelope. One problem is that a lon-
gitudinal stick command to the baseline controller generates a mixed pitch rate and load
factor response, while the adaptive designs generate an angle of attack response. The
desired mixed response is transformed to an angle of attack command for the adaptive
controllers using the nominal aircraft model data.
To verify whether the baseline control system achieves level 1 handling qualities or not,
simulations of frequency sweeps were again performed. The small amplitude responses
have been matched to LOES models. As expected the baseline control system also satis-
fies these criteria over the entire F-16 model flight envelope.

Figure 8.5: Dutch roll frequency vs. damping (ωd versus ζd, with level 1, 2 and 3 boundaries).



Table 8.1: Flight conditions used for evaluation.

Flight condition Mach number Altitude (m) dynamic pressure (kN/m2 ) α (deg)
FC1 0.8 8000 15.95 1.80
FC2 0.6 12000 4.87 9.40
FC3 0.6 5000 13.61 2.46
FC4 0.4 10000 2.96 14.99
FC5 0.8 2000 35.61 0.04

8.6.1 Simulation Scenarios


The simulated failure scenarios are limited to a right aileron locked at zero and at four different offsets from zero, two longitudinal center of gravity shifts, and a sudden change in the pitch damping (Cmq), for a total of eight different failure cases. Each simulation lasts between 150 and 200 seconds; after 20 seconds a failure is introduced. All simulation runs start at one of five different trimmed flight conditions as given in Table 8.1.

This gives a total of forty failure scenarios for each controller; the simulation results of three typical ones are discussed in detail in the next sections.

8.6.2 Simulation Results with Cmq = 0


The first series of simulations considers a sudden reduction of the longitudinal damping
coefficient Cmq to zero at all flight conditions. This is not a very critical change, since
the tracking performance of both the baseline and the backstepping controller with adap-
tation disabled is hardly affected. It does however serve as a nice example to evaluate
the ability of the adaptation schemes to accurately estimate inaccuracies in the onboard
model. Figure D.12 of Appendix D.3 contains the simulation results for the integrated
design starting at flight condition 2 with the longitudinal stick commanding a series of
pitch doublets, after 20 seconds of simulation the sudden change in Cmq takes place. The
left hand side plots show the inputs and response of the aircraft in solid lines, while the
dotted lines are the reference trajectories. Tracking performance both before and after
the change in pitch damping is excellent.
The solid lines in the right hand side plots of Figure D.12 show the changes in aerody-
namic coefficients w.r.t. the nominal values divided by the maximum absolute value of
the real aerodynamic coefficients to normalize them. The dotted lines are the normalized
real differences between the altered and nominal aircraft model. The change in Cmq is
clearly visible in the plots. However, the tracking error based update laws of the inte-
grated controller compensate by estimating changes in Cm0 and Cmδe instead, which
leads to the same total pitching moment. The time histories of the drag estimate and the total airspeed are not depicted in the figure. The flight control system is not able to follow this maneuver and at the same time hold the aircraft at the correct airspeed; hence there is some estimation of a non-existing drag coefficient error.

It is expected that the estimation-based update laws of the modular design will manage to
find the correct parameter values, since the reference signal flown should be rich enough in information. The results of the same simulation scenario for the F-16 with the mod-
ular controller can be seen in Figure D.13. The tracking performance of this controller is
also excellent, and, as can be seen from the right hand side plots, the correct change in
parameter value is found by the model identification.
The results of other simulations of this failure scenario are in correspondence with the
single case discussed above. Tracking performance is always good, but as expected only
the modular controller manages to find the true aerodynamic coefficient values. Natu-
rally, the speed at which the true values are found depends on the richness of information
in the reference signal.

8.6.3 Simulation Results with Longitudinal c.g. Shifts


The second series of simulations considers a more complex failure: Longitudinal center
of gravity shifts. Especially backward shifts can be quite critical, since they are destabilizing and can even result in a loss of the static stability margin. All pitching and yawing
aerodynamic moment coefficients will change as a result of a longitudinal c.g. shift. The
baseline classical controller is designed to deal with longitudinal c.g. shifts and, as is
demonstrated in [149], can even deal with shifts of ±0.06c̄. The tracking performance
degrades somewhat, but is still acceptable. However, for a non-adaptive model inversion
based design the changes are far more critical and stability loss often occurs for destabi-
lizing shifts, even with the integral gains.
Figure D.14 contains the simulation results for F-16 model with the integrated adaptive
controller starting at flight condition 1 with the longitudinal stick commanding a series of
small amplitude pitch doublets, after 20 seconds the c.g. instantly shifts backward 0.06c̄
and the small positive static margin is lost. Without adaptation, stability is lost immediately, but as can be seen in the left hand side plots, with adaptation turned on the tracking performance of the integrated design is acceptable, although small tracking errors remain.
The right hand side plots demonstrate that the estimates again do not converge to their
true values, and the change in yawing moment is not estimated at all, since it does not
result in large enough tracking errors.
In Figure D.15 the total pitch moment coefficient is plotted against the angle of attack
with a pitch rate and elevator deflection of zero, both before (blue line) and after the
failure occurs (red line). The difference or the error is plotted in Figure D.16 together
with the estimated error generated by the adaptive backstepping controller at the end of
the simulation. It is interesting to note that the pitch moment coefficient error is only
learned over the portion of the flight envelope over which training samples have been
accumulated. This is due to the local nature of the B-spline networks used for the flight
envelope partitioning.
The plots of the results for the modular design for the same scenario can be found in
Figure D.17. The tracking performance of the modular design is somewhat disappoint-
ing; even after 200 seconds of simulation there still remains a significant tracking error.
Also the parameter estimates do not converge to their true values and the total recon-
164 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 8.7

structed pitching moment is not equal to the real moment. However, if the same simu-
lation is performed without flight envelope partitioning with B-splines on a semi-linear
aircraft model the tracking performance and parameter convergence is excellent. It seems
the flight envelope partitioning negatively affects the estimation capabilities of the least-
squares algorithm for this failure scenario.
The simulation results of the rest of the c.g. shift failure scenarios correspond to this
single case: The tracking performance of the integrated design is better than for the mod-
ular design, with the modular design struggling to estimate the correct parameter values.
Tracking performance of both controllers is better for stabilizing c.g. shifts.

8.6.4 Simulation Results with Aileron Lock-ups

In the last series of simulations right aileron lockups or hard-overs are considered. At
20 seconds simulation time the right aileron suddenly moves to a certain offset: -21.5,
-10.75, 0, 10 or 21.5 degrees. Note that the public domain F-16 model does not contain a
differential elevator, hence only the rudder and the left aileron can be used to compensate
for these failures. Both the baseline control system and the adaptive SCAS designs with
adaptation turned off cannot compensate for the additional rolling and yawing moments
themselves, which means a very high workload for the pilot.
The results of a simulation performed with the integrated controller at flight condition 4
with a right aileron lock up at -10.75 degrees can be seen in Figure D.18. One lateral stick
doublet is performed before the failure occurs and three more 60 seconds after. As can
be seen the controller manages to compensate for most of the additional rolling moment
and after that the stability roll rate tracking error slowly converges to zero. Additional
sideslip is generated in the doublets and tracking performance improves over time. The
other plots of Figure D.18 demonstrate that parameter convergence to the true values is
not achieved. The change in yawing moment is even estimated as having an opposite
sign. However, tracking performance is adequate and improving over time.
Figure D.19 contains the results of the same scenario using the modular controller. It
can be seen that the aileron failure is quickly compensated for by the modular adaptive
controller: All tracking errors quickly converge to zero. However, the controller again
fails to identify the true aerodynamic coefficient changes. The total reconstructed forces
and moments are correct, but the individual coefficients do not match their true values.
This is partly because the reference signal is not rich enough, but also due to the flight
envelope partitioning. The same simulation without partitioning on the semi-linear F-16
model gives much better estimates.
The results of the above simulations were again characteristic for all scenarios with
aileron lockup failures. Tracking performance of the modular controller is excellent,
but parameter convergence to the true values is seldom achieved. The adaptation of the
integrated design is less aggressive, mainly due to the use of the continuous dead-zones,
but tracking performance is still good.
8.7 CONCLUSIONS 165

8.7 Conclusions
In this chapter two Lyapunov-based nonlinear adaptive stability and control augmen-
tation systems are designed for a high-fidelity F-16 model. The first controller is an
integrated design with feedback control and dynamic tracking error based update law
designed simultaneously using a control Lyapunov function. The second design is an
ISS-backstepping controller with a separate recursive least-squares identifier. In order
to make real-time implementation of the controllers feasible, the flight envelope is parti-
tioned in locally valid linear-in-the-parameters models using B-spline networks. Only a
few local aerodynamic models are updated at each time step, while information of other
local models is stored. The controllers are designed in such a way that they have nearly
identical handling qualities as the baseline F-16 control system over the entire subsonic
flight envelope for the nominal, undamaged aircraft model. Numerical simulations with
several types of failures were performed to verify the robust performance of the control
laws. The results show that good tracking performance can still be accomplished with
these failures and that pilot workload is reduced.
Several important observations can be made based on the simulation results and the com-
parison:
1. Results of numerical simulations show that adaptive flight controllers provide a
significant improvement over a non-adaptive NDI design with integral gains for the
simulated failure cases. Both adaptive designs show no degradation in performance
with the added sensor dynamics and time delays. The flight envelope partitioning
method makes real-time implementation of both controllers feasible, although the
difference in required computational load and storage space is quite significant.
For the least-squares identifier each locally valid aerodynamic model has its own
covariance matrix.
2. In general, the modular adaptive design provides the best estimates of the indi-
vidual aerodynamic coefficients. However, the nonlinear damping gains of the
modular design should be tuned with care to avoid high gain feedback signals.
3. The gain tuning of the update laws of the integrated adaptive controller is a very
time consuming process, since changing the gains can give very unexpected tran-
sients in the closed-loop tracking performance. This is especially true for aggres-
sive maneuvering. In the next chapter an alternative Lyapunov based parameter
estimation method is investigated.
4. Tuning of the modular identifier is a much less involved task. However, the re-
cursive least-squares identifier combined with flight envelope partitioning has un-
expected problems to estimate the true parameters. A different parametrization of
the approximator structure or another tuning setting may solve these problems.
5. Enforcing desired handling qualities using the command filters is a trivial task in
the nominal case, since most specifications can be implemented directly. The han-
dling qualities can be verified using frequency sweeps and lower order model fits.
166 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 8.7

Measurements of the handling qualities when a sudden aerodynamic change oc-


curs and the adaptation becomes active have not been obtained, since the dynamic
behavior of the closed-loop system is constantly changing, making it impossible
to fit a lower order equivalent model.
Chapter 9
Immersion and Invariance Adaptive
Backstepping

The earlier chapters have shown that the dynamic part of integrated adaptive backstep-
ping designs is very difficult to tune, since it is unclear how a higher update gain affects
the closed-loop tracking performance of the control system. Furthermore, the dynamic
behavior of the controllers is very unpredictable. In this chapter the dynamic part of the
controller is replaced with a new kind of estimator based on the immersion and invari-
ance approach. This approach allows for prescribed stable dynamics to be assigned to
the parameter estimation error and is therefore much easier to tune. The new immersion
and invariance backstepping technique is used to design a new stability and control aug-
mentation system for the F-16, which is compared to the designs of the previous chapter.
This chapter can be seen as a follow up of Chapter 5, where an attempt was made to sim-
plify the performance tuning of the controllers by designing an inverse optimal adaptive
backstepping controller.

9.1 Introduction
In the past two decades a considerable amount of literature has been devoted to nonlinear
adaptive control design methods for a variety of flight control problems where parametric
uncertainties in the system dynamics are involved, see e.g. [61, 124, 132, 150]. Recur-
sive, Lyapunov-based adaptive backstepping is among the most widely studied of these
methods. The main attractions of adaptive backstepping based control laws lie in their
provable convergence and stability properties as well as in the fact that they can be ap-
plied to a broad class of nonlinear systems.
However, despite a number of refinements over the years, the adaptive backstepping
method also has a number of shortcomings. The most important of these is that the pa-

167
168 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.2

rameter estimation error is only guaranteed to be bounded and converging to an unknown


constant value, yet little can be said about its dynamical behavior. Unexpected dynam-
ical behavior of the parameter update laws may lead to an undesired transient response
of the closed-loop system. Furthermore, increasing the adaptation gain will lead to faster
parameter convergence, but will not necessarily improve the response of the closed-loop
system. This makes it impossible to properly tune an adaptive backstepping controller,
especially for large and complex systems such as the high-fidelity F-16 model.
One solution to this problem is to introduce a modular input-to-state stable backstep-
ping approach with a separate identifier that is not of the Lyapunov-type, e.g. the well
known recursive least-squares identifier. Since the certainty equivalence principle does
not hold in general for nonlinear systems, the control law has to be robustified against
the time-varying character of the parameter estimates. However, the nonlinear damping
terms introduced to achieve this robustness can lead to undesirable high gain control.
Furthermore, the controller loses some of the strong stability properties with respect to
the integrated adaptive backstepping approach. Finally, in a real-time application for a
complex system, the high dynamic order resulting from using a least-squares identifier
with the necessary regressor filtering may be undesirable.
In [40, 102, 103], a different class of Lyapunov-based adaptive controllers has been de-
veloped based on the immersion and invariance (I&I) methodology [7]. This approach
allows for prescribed stable dynamics to be assigned to the parameter estimation error,
thus leading to a modular control scheme which is much easier to tune than an adaptive
backstepping controller. However, this shaping of the dynamics relies on the solution of a
partial differential matrix inequality, which is difficult to solve for multivariable systems.
This limitation is removed in [104] using a dynamic extension consisting of output filters
and dynamic scaling factors added to the estimator dynamics.
In this study, the approach of [104] is used to derive a nonlinear adaptive estimator which
in combination with a static backstepping feedback controller results in a nonlinear adap-
tive control framework with guaranteed global asymptotic stability of the closed-loop
system. The new design technique is applied to the flight control design problem for
the over-actuated, six-degrees-of-freedom fighter aircraft model of Chapter 6 and after
that to the SCAS design for the F-16 model of Chapter 2. The results of the numerical
simulations are compared directly to the results for the integrated and modular adaptive
backstepping controllers of Chapters 6 and 8.

9.2 The Immersion and Invariance Concept


Immersion and invariance is a relatively new approach to designing nonlinear controllers
or estimators for (uncertain) nonlinear systems [6]. As the name suggests, the method re-
lies on the well known notions of system immersion and manifold invariance1, but used
from another perspective. The idea behind the I&I approach is to capture the desired
behavior of the system to be controlled with a target dynamical system. This way the
1 Formal definitions of immersion and invariant manifolds can be found in Appendix B.3
9.2 THE IMMERSION AND INVARIANCE CONCEPT 169

control problem is reduced to the design of a control law which guarantees that the con-
trolled system asymptotically behaves like the target system.
The I&I method is applicable to a variety of control problems, but it is easiest to illustrate
the approach with a basic stabilization problem of an equilibrium point of a nonlinear
system. Consider the general system

ẋ = f (x) + g(x)u, (9.1)

where x ∈ Rn and u ∈ Rm . The control problem is to find a state feedback control law
u = v(x) such that the closed-loop system has a globally asymptotic stable equilibrium
at the origin. The first step of the I&I approach is to find a target dynamical system

ξ̇ = α(x), (9.2)

where ξ ∈ Rp , p < n, which has a globally asymptotically stable equilibrium at the


origin, a smooth mapping x = π(ξ), and a control law v(x) such that
∂π
f (π(ξ)) + g(π(ξ))v(π(ξ)) = α(ξ). (9.3)
∂ξ
If these conditions hold, then any trajectory x(t) of the closed-loop system

ẋ = f (x) + g(x)v(x), (9.4)

is the image through the mapping π(ξ) of a trajectory ξ(t) of the target system (9.2).
Note that the rank of π is equal to the dimension of ξ. The second step is to find a control
law that renders the manifold x = π(ξ) attractive and keeps the closed-loop trajectories
bounded. This way the closed-loop system will asymptotically behave like the desired
target system and hence stability is ensured.
From the above discussion, it follows that the control problem has been transformed into
the problem of the selection of a target dynamical system. This is, in general, a non-
trivial task, since the solvability of the underlying control design problem depends on
this selection. However, in many cases of practical interest it is possible to identify natu-
ral target dynamics. Examples of different applications are given in [6].
In this thesis, the focus lies on adaptive control, hence the I&I approach is used to develop
a framework for adaptive stabilization of nonlinear systems with parametric uncertain-
ties. Consider again the system (9.1) with an equilibrium xe to be stabilized, but where
the functions f (x) and g(x) now depend on an unknown parameter vector θ ∈ Rq . The
goal is to find an adaptive state feedback control law of the form

u = v(x, θ̂) (9.5)


˙
θ̂ = w(x, θ̂),

such that all trajectories of the closed-loop system (9.1), (9.5) are bounded and
limt→∞ x = xe . To this end it is assumed that a full-information control law v(x, θ)
exists. The I&I adaptive control problem is then defined as follows [7].
170 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.2

Definition 9.1. The system (9.1) is said to be adaptively I&I stabilizable if there exist
functions β(x) and w(x) such that all trajectories of the extended system

ẋ = f (x) + g(x)v(x, θ̂ + β(x)) (9.6)


˙
θ̂ = w(x, θ̂),

are bounded and satisfy


h i
lim g(x(t))v(x(t), θ̂(t) + β(x(t))) − g(x(t))v(x(t), θ) = 0. (9.7)
t→∞

It is not difficult to see that for all trajectories staying on the manifold
n o
M = (x, θ̂) ∈ Rn × Rq |θ̂ − θ + β(x) = 0

condition (9.7) holds. Moreover, by Definition 9.1, adaptive I&I stabilizability implies
that
lim x = xe . (9.8)
t→∞
Note that the adaptive controller designed with the I&I approach is not of the certainty
equivalence type in the strict sense, i.e. the parameter estimate is not used directly by the
static feedback controller. Furthermore, note that, in general, f (x) and g(x) depend on
the unknown θ and therefore the parameter estimate θ̂ does not necessarily converge to
the true parameter values. However, in many cases it is also possible to establish global
stability of the equilibrium (x, θ̂) = (xe , θ). This is illustrated in the following example.

Example 9.1 (Adaptive controller design)


Consider the feedback linearizable system

ẋ = θx3 + x + u, (9.9)

where θ ∈ R is an unknown constant parameter. If θ were known, the equilibrium


point x = 0 would be globally asymptotically stabilized by the control law

u = −θx3 − cx, c > 1. (9.10)

Since θ is not known, θ is replaced by its estimate θ̂ in the certainty equivalence


controller

u = −θ̂x3 − cx
˙
θ̂ = w,

where w is the parameter update law. As before, the control Lyapunov function is
selected as
1 1
V (x, θ̂) = x2 + (θ − θ̂)2 , (9.11)
2 2γ
9.2 THE IMMERSION AND INVARIANCE CONCEPT 171

with γ > 0. Selecting the update law

w = γx4 (9.12)

renders the derivative of V equal to

V̇ = −(c − 1)x2 . (9.13)

By Theorem 3.7 the equilibrium (x, θ̃) = 0 is globally stable and limt→∞ x = 0.
However, no conclusions can be drawn about the behavior of the parameter estimation
error θ − θ̂, except that it converges to a constant value. The dynamical behavior of the
estimation error may be unacceptable in terms of transient response of the closed-loop
system.
Alternatively, the adaptive control problem can be placed in the I&I framework by
considering the augmented system

ẋ = θx3 + x + u
˙
θ̂ = w (9.14)

and by defining the one-dimensional manifold


n o
M = (x, θ̂) ∈ R2 |θ̂ − θ + β(x) = 0

in the extended space (x, θ̂), where β(x) is a continuous function yet to be specified.
If the manifold M is invariant, the dynamics of the x-subsystem of (9.14) restricted
to this manifold can be written as

ẋ = (θ̂ + β(x))x3 + x + u. (9.15)

Hence, the dynamics of the system are completely known and the equilibrium x = 0
can be asymptotically stabilized by the control law

u = −cx − (θ̂ + β(x))x3 , c > 1. (9.16)

To render this design feasible, the first step of the I&I approach consists of finding
an update law w that renders the manifold M invariant. To this end, consider the
dynamics of the ‘off-the-manifold’ coordinate, i.e. the estimation error

σ , θ̂ − θ + β(x), (9.17)

which are given by


∂β  
σ̇ = w + (θ̂ + β(x) − σ)x3 + x + u . (9.18)
∂x
If the update law w is selected as
∂β   ∂β
w=− (θ̂ + β(x))x3 + x + u = (c − 1)x (9.19)
∂x ∂x
172 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.3

the manifold M is invariant and the off-the-manifold dynamics are described by

∂β 3
σ̇ = − x σ. (9.20)
∂x
1 2
Consider the Lyapunov function V = 2γ σ , whose time derivative along the trajecto-
ries of (9.20) satisfies

∂β 1 3 2
V̇ = − x σ , (9.21)
∂x γ

where γ > 0. To render this expression negative semi-definite, a possible choice for
the function β(x) is given as

x2
β(x) = γ . (9.22)
2
An alternative solution with dead-zones is given as
 γ 2
 2 (x − η0 ) if x > η0
γ 2
β(x) = 2 (x + η0 ) if x < −η0 , (9.23)
0 |x| ≤ η0

if

with η0 > 0 the dead-zone constant. It can be concluded that the system (9.20) has a
globally stable equilibrium at zero and limt→∞ x3 σ = 0. The resulting closed-loop
system can be written in (x, σ)-coordinates as

ẋ = −(c − 1)x − x3 σ
σ̇ = −γx4 σ (9.24)

which has a global stable equilibrium at the origin and x converges to zero. Moreover,
the extra β(x)x3 -term in the control law (9.16) renders the closed-loop system input-
to-state stable with respect to the parameter estimation error θ − θ̂.
The response of the closed-loop system with the I&I adaptive controller is compared
to the response of the system with the standard adaptive controller designed at the
beginning of this example. The tuning parameters of both designs are selected the
same. The real θ is equal to 2, but the initial parameter estimate is 0. As can be seen
in Figure 9.1, both controllers manage to regulate the state to zero. Note that it is not
guaranteed that the estimate of the I&I design converges to the true value, only that
limt→∞ x3 σ = 0.

The closed-loop system (9.24) can be regarded as a cascaded interconnection between


two stable systems which can be tuned via the constants c and γ. This modularity makes
the I&I adaptive controller much easier to tune than the standard adaptive design. As a
result, the performance of the adaptive system can be significantly improved.
9.3 EXTENSION TO HIGHER ORDER SYSTEMS 173

3
standard
I&I
2
state x

0
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
time (s)
0
−10
input u

−20
−30
−40
−50
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
time (s)
5
parameter estimate

4
3
2
1
0
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
time (s)

Figure 9.1: State x, control effort u and parameter estimate θ̂ for initial values x(0) = 2, θ̂(0) =
0, control gain c = 2 and update gain γ = 1 for the closed-loop system with standard adaptive
design and with the I&I adaptive design.

9.3 Extension to Higher Order Systems


Extending the I&I approach outlined in the last section to higher-order nonlinear systems
with unmatched uncertainties is by no means straight-forward. In [102] an attempt is
made for the class of lower-triangular nonlinear systems of the form
ẋi = xi+1 + ϕi (x1 , ..., xi )T θ, i = 1, ..., n − 1
ẋn = u + ϕn (x)T θ (9.25)
where xi ∈ R, i = 1, ..., n are the states, u ∈ R the control input, ϕi the smooth
regressors and θ ∈ Rp a vector of unknown constant parameters. The control problem is
to track the smooth reference signal yr (t) (all derivatives known and bounded) with the
state x1 . The adaptive control design is done in two steps. First, an overparametrized
estimator of order np for the unknown parameter vector θ is designed. In the second
step a controller is designed that ensures that limt→∞ x1 = yr and all other states are
bounded.

9.3.1 Estimator Design


The estimator design starts by defining the estimation errors as
σi = θˆi − θ + βi (x1 , ..., xi ), 1 = 1, ..., n, (9.26)
174 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.3

where θ̂i are the estimator states and βi are continuously differentiable functions to de-
fined later. The dynamics of σi are given by
i
X ∂βi
˙
xk+1 + ϕk (x1 , ..., xk )T θ

σ̇i = θ̂i +
∂xk
k=1
i
˙ X ∂βi   
= θ̂i + xk+1 + ϕk (x1 , ..., xk )T θ̂i + βi (x1 , ..., xi ) − σi ,
∂xk
k=1

˙
where xn+1 = u for the ease of notation. Update laws for θ̂i can be defined as
i
˙ X ∂βi   
θ̂i = xk+1 + ϕk (x1 , ..., xk )T θ̂i + βi (x1 , ..., xi ) (9.27)
∂xk
k=1

to cancel all the known parts of the σi dynamics, resulting in


" i #
X ∂βi
σ̇i = − ϕk (x1 , ..., xk )T σi . (9.28)
∂xk
k=1

The system (9.28) for i = 1, ..., n can be seen as a linear time varying system with a
block diagonal dynamic matrix. Hence, the problem of designing an estimator θ̂i is now
reduced to the problem of finding functions βi such that the diagonal blocks are rendered
negative semi-definite. In [102] the functions βi are selected as
Z xi
βi (x1 , ..., xi ) = γi ϕi (x1 , ..., xi−1 , χ)dχ + ǫi (xi ), γi > 0, (9.29)
o

where ǫi are continuously differentiable functions that satisfy the partial differential ma-
trix inequality
Fi (x1 , ..., xi )T + Fi (x1 , ..., xi ) ≥ 0, i = 2, ..., n, (9.30)
where
i−1 Z xi 
X ∂
Fi (x1 , ..., xi ) = γi ϕi (x1 , ..., xi−1 , χ)dχ ϕk (x1 , ..., xk )T
∂xk 0
k=1
∂ǫi
+ ϕi (x1 , ..., xi )T .
∂xi
Note that the solvability of (9.30) strongly depends on the structure of the regressors
ϕi . For instance, in the case that ϕi only depends on xi , the trivial solution ǫi (xi ) = 0
satisfies the inequality. If (9.30) is solvable, the following lemma can be established [6].
Lemma 9.2. Consider the system (9.25), where the functions βi are given by (9.29) and
functions ǫi exist which satisfy (9.30). Then the system (9.25) has a globally uniformly
stable equilibrium at the origin, σi (t) ∈ L∞ and ϕi (x1 (t), ..., xi (t))T σi (t) ∈ L2 , for
all i = 1, ..., n and for all x1 (t), ..., xi (t). If, in addition, ϕi and its time derivative are
bounded, then ϕi (x1 , ..., xi )T σi converges to zero.
9.3 EXTENSION TO HIGHER ORDER SYSTEMS 175

Pn T
Proof: Consider the Lyapunov function W (σ) = i=1 σi σi , whose time derivative
along the trajectories of (9.25) is given as
n
" i #
X X ∂βi
T T
Ẇ = − σi ϕk (x1 , ..., xk ) σi
i=1
∂xk
k=1
n
X
σiT 2γi ϕi ϕTi + F1 + F1T σi
 
= −
i=1
n
X
≤ − 2γi (ϕTi σi )2 ,
i=1

where (9.30) was used to obtain the last inequality. The stability properties follow di-
rectly from Theorem 3.7.
Note that the above inequality holds for any u. Furthermore, by definition (9.26) an
asymptotically converging estimate of each unknown term ϕTi θ of the system (9.25) is
given by

ϕi (x1 , ..., xi )T (θ̂ + βi (x1 , ..., xi )). (9.31)

Note that an estimate of the ϕTi θ terms is obtained, instead of only an estimate of the
parameter θ as with the Lyapunov based update laws of the earlier chapters.

9.3.2 Control Design


The properties of the estimator will now be exploited with a backstepping control law.
The design procedure starts by defining the tracking errors as

z1 = x1 − yr
zi = xi − αi−1 , i = 2, ..., n, (9.32)

where α∗ are the intermediate control laws to be defined. The dynamics of z1 satisfy

ż1 = z2 + α1 + ϕT1 θ − ẏr . (9.33)

Introducing the virtual control

α1 = −κ1 (z1 ) − ϕT1 (θ̂1 + β1 ) + ẏr , (9.34)

where κ1 (z1 ) is a stabilizing function to be defined, reduces the z1 -dynamics to

ż1 = z2 − κ1 − ϕT1 σ1 . (9.35)

Assume for the moment that z2 ≡ 0, i.e. α1 is the real control. Then the above expression
can be seen as a stable system perturbed by an L2 signal. Consider now the Lyapunov
176 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.4

function V1 (z1 , σ1 ) = z12 + σ1T σ1 . Taking the derivative along the trajectories (9.35) and
(9.28) results in

V̇1 = −2κ1 z1 − 2ϕT1 σ1 z1 − 2γ1 (ϕT1 σ1 )2 .


2


1 1
= −2κ1 z1 + ǫz12 − √ ϕT1 σ1 + ǫz1 − (2γ1 − )(ϕT1 σ1 )2
ǫ ǫ
1
≤ −2κ1 z1 + ǫz12 − (2γ1 − )(ϕT1 σ1 )2 ,
ǫ
where ǫ > 0 is a constant. Substituting κ1 = (c1 + ǫ/2) z1 , with gain c1 > 0, reduces
the derivative of V1 to
1
V̇1 ≤ −2c1 z12 − (2γ1 − )(ϕT1 σ1 )2 .
ǫ
1
By Theorem 3.7 it follows that if γ1 ≥ 2ǫ the closed-loop (z1 , σ1 )-subsystem has a
globally uniformly stable equilibrium at (0, θ) and limt→∞ z1 = 0, limt→∞ ϕT1 σ1 = 0.
Since, z2 6= 0, the approach is extended to design a backstepping controller for the
complete system, i.e.
i−1 i−1
X ∂αi h i X ∂αi ˙
αi+1 = −κi − ϕTi (θ̂i + βi ) + xk+1 + ϕTk (θ̂k + βk ) + θ̂k + yr(i)
∂xk ∂ θ̂k
k=1 k=1
u = αn+1 , (9.36)

where
i−1  2
 ǫ ǫX ∂αi
κi = ci + zi + zi + zi−1 , i = 1, ..., n,
2 2 ∂xk
k=1

where ci > 0 and ǫ > 0. Note that nonlinear damping terms have to be introduced to
compensate for derivative terms of the virtual controls. This is necessary, since command
filters are not used in this backstepping design. To proof stability of the closed-loop
system with the above backstepping control law and the I&I based Pestimator designed in
n
the previous section, the Lyapunov function V (z, σ) = W (σ) + k=1 zk2 is introduced.
Taking the derivative of V results in
n n  
X X 1
V̇ = −2 ci z12 − 2γi − (n − i + 1) (ϕTi σi )2 .
i=1 i=1
ǫ

It can be concluded that, if 2γi ≥ 1ǫ (n − i + 1) and the inequality (9.30) is satisfied,


the system (9.25), (9.36), (9.27), (9.29) has a globally stable equilibrium. Furthermore,
by Theorem 3.7 limt→∞ zi = 0 and limt→∞ ϕTi σi = 0. This concludes the over-
parametrized, nonlinear adaptive control design, which can be used as an alternative to
the tuning functions adaptive backstepping approach if functions ǫi (xi ) can be found that
satisfy (9.30), as is demonstrated in a wing rock example in [102].
9.4 DYNAMIC SCALING AND FILTERS 177

9.4 Dynamic Scaling and Filters

In the previous section a first attempt was made to design an adaptive backstepping con-
troller with I&I based estimator. The estimator allows for prescribed dynamics to be
assigned to the parameter estimation error, which leads to a modular adaptive backstep-
ping design that is much easier to tune than the integrated approaches discussed in earlier
chapters. Furthermore, the modular design does not suffer from the weaknesses of the
certainty equivalence modular design of Section 6.2. However, the shaping of the dynam-
ics relies on the solution of a partial differential matrix inequality, which is, in general,
very difficult to solve for most physical systems.
This limitation of the estimator design was removed in [104] with the introduction of a
dynamic scaling factor in the estimator dynamics and by adding an output filter to the
design. Dynamic scaling has been widely used in the design of high-gain observers, see
e.g. [165]. In this section an I&I estimator with dynamic scaling and output filter is
combined with a command filtered backstepping control design approach to arrive at a
modular adaptive control framework.
Consider the class of linearly parametrized systems of the form

ẋi = xi+1 + ϕi (x)T θi , i = 1, ..., n, (9.37)

with states xi ∈ R, i = i, ..., n and control input u ∈ R. Note that for notational
convenience xn+1 = u. The functions ϕi (x, u) are the known, smooth regressors and
θi ∈ Rpi are vectors of unknown constant parameters. The control objective is to track a
smooth reference signal x1,r , for which the first derivative is known and bounded, with
the state x1 .

9.4.1 Estimator Design with Dynamic Scaling

The construction of an estimator for θi starts by defining the scaled estimation errors as

θ̂i − θi + βi (xi , x̂)


σi = , i = 1, ..., n, (9.38)
ri

where ri are scalar dynamic scaling factors, θ̂i are the estimator states and βi (xi , x̂)
continuously differentiable vector functions yet to be specified. Let ei = x̂i − xi , then
the filtered states x̂i are obtained from

 
x̂˙ i = xi+1 + ϕi (x)T θ̂i + βi (xi , x̂) − ki (x, r, e)ei , (9.39)
178 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.4

where ki (x, r, e) are positive functions. Using the above definitions, the dynamics of σi
are given by
 
n
1 ˙ ∂βi X ∂βi ˙  ṙi
xi+1 + ϕi (x)T θi +

σ̇i = θ̂i + x̂j − σi
ri ∂xi j=1
∂ x̂j ri
 
n
1 ˙ ∂βi h  i X ∂βi ˙ 
= θ̂i + xi+1 + ϕi (x)T θ̂i + βi (xi , x̂) − ri σi + x̂j
ri ∂xi j=1
∂ x̂j
ṙi
− σi .
ri
˙
By selecting the update laws for θ̂i as
n
˙ ∂βi    X ∂βi ˙
θ̂i = − xi+1 + ϕi (x)T θ̂i + βi (xi , x̂) − x̂j , (9.40)
∂xi j=1
∂ x̂j

the dynamics of σi are reduced to


 
∂βi ṙi ∂βi ṙi
σ̇i = − ϕi (x)T σi − σi = − ϕi (x)T + σi . (9.41)
∂xi ri ∂xi ri
The system (9.41) can again be seen as a linear time varying system with a block diag-
onal dynamic matrix. In order to render the diagonal blocks negative semi-definite, the
functions βi (xi , x̂) are selected as
Z xi
βi (xi , x̂) = γi ϕi (x̂1 , ..., x̂i−1 , σ, x̂i+1 , ..., x̂n )dσ, (9.42)
0

where γi > 0. Since the regressors ϕi (x) are continuously differentiable, the expression
n
X
ej δij (x, e) = ϕi (x) − ϕi (x̂1 , ..., x̂i−1 , σ, x̂i+1 , ..., x̂n ), δii ≡ 0, (9.43)
j=1

holds for some functions δij (x, e). Substituting (9.42) and (9.43) into (9.41) yields the
σi -dynamics
n
X ṙi
σ̇i = −γi ϕi (x)ϕi (x)T σi + γi ej δij (x, e)ϕi (x)T σi − σi . (9.44)
j=1
ri

Furthermore, from (9.37) and (9.39), the dynamics of ei = x̂i − xi are given by
ėi = −ki (x, r, e)ei + ri ϕi (x)T σi . (9.45)
The system consisting of (9.44) and (9.45) has an equilibrium at zero, which can be
rendered globally uniformly stable by selecting the dynamics of the scaling factors ri
and the functions ki (x, r, e) as defined in the following lemma [104].
9.4 DYNAMIC SCALING AND FILTERS 179

Lemma 9.3. Consider the system (9.37) and let


n
X
ṙi = ci ri e2j |δij (x, e)|2 , ri (0) = 1, (9.46)
j=1

with ci ≥ γi n/2, where |.| denotes the 2-norm, and


n
X
ki (x, r, e) = λi ri2 +ǫ cj rj2 |δji (x, e)|2 (9.47)
j=1

where λi > 0 and ǫ > 0 are constants. Then the system consisting of (9.44), (9.45)
and (9.46) has a globally uniformly stable manifold of equilibria defined by M =
{(σ, r, e)|σ = e = 0}. Moreover, σi (t) ∈ L∞ , ri (t) ∈ L∞ , ei (t) ∈ L2 ∩ L∞ and
ϕi (x(t))T σi (t) ∈ L2 for all i = 1, ..., n. If, in addition, ϕi (x(t)) and its time derivative
are bounded, it follows that limt→∞ ϕi (x(t))T σi = 0.

1 T
Proof: Consider the Lyapunov function Vi (σi ) = 2γi σi σi . Taking the time derivative
of Vi along the trajectories of (9.44) results in
n
X ṙi
V̇i = −(ϕTi σi )2 + ej σiT δij ϕTi σi − |σi |2
j=1
γi ri
n  
T 2
X 1 T 2 n 2 T 2
= −(ϕi σi ) + (ϕ σi ) + ej (δij σi )
j=1
2n i 2
n  r 2
X 1 T n T ṙi
− √ ϕi σi − ej δij σi − |σi |2
j=1
2n 2 γ i ri
n
1 nX 2 T 2 ṙi
≤ − (ϕTi σi )2 + e (δ σi ) − |σi |2 .
2 2 j=1 j ij γi ri

Substituting the dynamic scaling terms ri as given by (9.46) and by applying the inequal-
T
ity |δij σi | < |δij ||σi |, the remaining indefinite term can be canceled such that

1
V̇i ≤ − (ϕTi σi )2 < 0, σi 6= 0.
2
Hence, the system (9.44) has a globally uniformly stable equilibrium at the origin,
σi (t) ∈ L∞ and ϕi (x(t))T σi (t) ∈ L2 for all i = 1, ..., n. If ϕi (x(t)) and its time
derivative are bounded, it follows from Barbalat’s lemma that limt→∞ ϕi (x(t))T σi = 0.
This implies that an asymptotic estimate
 of each parametric
 uncertainty term ϕi (x)T θi
T
in (9.37) is given by the term ϕi (x) θ̂i + βi (xi , x̂) .
The next design step is to select the positive functions ki (x, r, e) in such a way that the
180 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.4

dynamics of ei , given by (9.45), become globally asymptotically stable. Taking the time
derivative of the augmented Lyapunov function Wi (σi , ei ) = 21 e2i + λ1i Vi , with constant
λi > 0, results in
1
Ẇi ≤ −ki e2i + ri ϕTi σi ei − (ϕT σi )2
2λi i
"r #2
2 λi 2 2 λi 1 T
= −ki ei + ri ei − ri ei − √ ϕi σi
2 2 2λi
 
λi
≤ − ki − ri2 e2i .
2

It is clear that selecting ki (x, r, e) > λ2i ri2 renders the above expression negative definite,
thus the equilibrium (σi , ei ) = (0, 0) is globally uniformly stable and ei (t) ∈ L2 ∩ L∞ .
A final design step has to be made to ensure thatP the dynamic scalings ri remain bounded.
n 
Consider the Lyapunov function Ve (e, σ, r) = i=1 Wi (σi , ei ) + 2ǫ ri2 with ǫ > 0, for


which the time derivative is given by


 
n   n n
X λi 2 2 X
ci ri2
X
V̇e ≤ − ki (x, r, e) − ri ei + ǫ e2j |δij (x, e)|2  .
i=1
2 i=1 j=1

Selecting
Pn ki (x, r, e) as given by (9.47) to cancel the indefinite terms, ensures V̇e ≤
− i=1 λ2i ri2 e2i , which proves that ri (t) ∈ L∞ and limt→∞ ei (t) = 0. The functions
ki (x, r, e) contain a nonlinear damping term to achieve boundedness of ri , but the con-
stant ǫ multiplying the damping term can be chosen arbitrarily small.

This completes the design of the estimator, which consists of output filters (9.39), update
laws (9.40) and dynamic scalings (9.46). Note that the estimator, in general, employs
overparametrization, which is not necessarily disadvantageous from a performance point
of view. However, in a numerical implementation
Pn it can lead to a higher computational
load. The total order of the estimator is i=1 pi + 2n.

9.4.2 Command Filtered Control Law Design


In this section the command filtered backstepping approach is used to close the loop and
complete the adaptive control design. The procedure starts by defining the tracking errors
as

zi = xi − xi,r , i = 1, ..., n (9.48)

where xi,r are the intermediate control laws to be designed. The modified tracking errors
are defined as

z̄i = zi − χi . (9.49)
9.4 DYNAMIC SCALING AND FILTERS 181

with the signals χi to be defined. The dynamics of z̄i can be written as

z̄˙i = zi+1 + xi+1,r + ϕTi θi − χ̇i


˙z̄n = u + ϕTn θn − ẋn,r − χ̇n . (9.50)

The idea is now to design a control law that renders the closed-loop system L2 stable
from the ‘perturbation’ inputs ϕTi σi to the output z̄1 and keeps all signals bounded. To
stabilize (9.50) the following desired (intermediate) controls are proposed:

 
x0i+1,r = −κi + z̄i−1 − ϕTi θ̂i + βi − χi+1 , i = 1, ..., n − 1,
 
u0 = −κn + z̄n−1 − ϕTn θ̂n + βn + ẋn,r (9.51)

with the stabilizing functions κi given as

µri2
κi = c̄i zi + z̄i + k̄i λ̄i ,
2

for i = 1, ..., n, where c̄i > 0, µ > 0 and k̄i ≥ 0 are constants. The integral terms are
defined as

Z t
λ̄i = z̄i (t)dt.
0

The desired (intermediate) control laws (9.51) are fed through second order low pass
filters to produce the actual intermediate controls xi+1,r , u and their derivatives. The
effect that the use of these filters has on the tracking errors can be captured with the
stable linear filters

−c̄i χi + xi+1,r − x0i+1,r ,



χ̇i = i = 1, ..., n − 1
−c̄n χn + u − u0 .

χ̇n = (9.52)

The stability properties of the adaptive control framework based on this command filtered
backstepping controller in combination with the I&I based estimator design of Lemma
9.3 can be proved using the control Lyapunov function

n  
X 1
Vc (z̄, σ) = z̄i2 + k̄ λ̄2i + σiT σi . (9.53)
i=1
2
182 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.5

Taking the time derivative of Vc and following some of the steps used in the proof of
Lemma 9.3 results in
n−1
X
z̄i zi+1 + xi+1,r + ϕTi θi − χ̇i + 2z̄n u + ϕTn θn − ẋn,r − χ̇n
 
V̇c ≤ 2
i=1
n n  
X X 1
+ k̄λ̄i z̄i − 2γi − (ϕTi σi )2
i=1 i=1
2
n n 
µi ri2
  X 
X
T 1
= −2 z̄i c̄i z̄i + z̄i + ϕi σi ri − 2γi − (ϕTi σi )2
i=1
2 i=1
2
n n  2 X n  
X X 1 √ 1 1
= −2 c̄i z̄i2 − √ ϕTi σi + µri z̄i − 2γi − − (ϕTi σi )2
i=1 i=1
µ i=1
2 µ
n n  
X X 1 1
≤ −2 c̄i z̄i2 − 2γi − − (ϕTi σi )2 .
i=1 i=1
2 µ

It can be concluded that, if γi ≥ µ+2


4µ , the closed-loop system consisting of (9.37), (9.51)
and the I&I based estimator of the previous section, which consists of output filters
(9.39), update laws (9.40) and dynamic scalings (9.46), has a globally stable equilib-
rium. Furthermore, by Theorem 3.7 limt→∞ z̄i = 0 and limt→∞ ϕTi σi = 0 (if ϕ and
its time derivative are bounded). When the command filters are properly designed, i.e.
with bandwidths sufficiently high, and no rate or magnitude limits are in effect, z̄i will
converge to the close neighborhood of the real tracking errors zi . This concludes the
discussion on the modular I&I based adaptive backstepping control design.

9.5 Adaptive Flight Control Example


In this section the approach discussed in Section 9.4 is used to construct a nonlinear adap-
tive flight control law for the simplified aircraft model of Chapter 6 with the equations
of motion given by (6.33). It will be demonstrated that the I&I estimator design with
dynamic scalings can be applied directly to a multivariable system. The control objec-
tive is to track smooth reference signals with φ, α and β. It is assumed that all stability
and control derivatives are unknown. A scheme of the proposed modular adaptive flight
controller is depicted in Figure 9.2.

Before the adaptive control design procedure begins, the aircraft dynamic model (6.33)
is rewritten in a more general form. Define the states x1 = φ, x2 = α, x3 = β, x4 =
θ, x5 = p, x6 = q, x7 = r and the control inputs u = (δel , δer , δal , δar , δlef , δtef , δr )T ,
then the system (6.33) can be rewritten as

ẋi = fi (x) + ϕi (x, u)T θi , i = 1, ..., 7, (9.54)


9.5 ADAPTIVE FLIGHT CONTROL EXAMPLE 183

Backstepping
Pilot
Prefilters
y z Control Law
mdes Control u
Commands Allocation
(Onboard Model)

θˆ u

Parameter Update x̂
Output Filters x Sensor
Laws Processing

r e

Dynamic Scaling

Nonlinear Adaptive Estimator


x

Figure 9.2: Modular adaptive I&I backstepping control framework.

with fi (x) the known functions, the unknown parameter vectors

θ1 = 0, θ2 = z α , θ3 = y β , θ4 = 0,
T
θ5 = (lp , lq , lr , lβα , lrα , l0 , lδel , lδer , lδal , lδar , lδr ) ,
T
θ6 = mα , mq , mα̇ , m0 , mδel , mδer , mδal , mδar , mδlef , mδtef , mδr ,
T
θ7 = (nβ , np , nq , nr , npα , n0 , nδel , nδer , nδal , nδar , nδr ) ,

and the regressors

ϕ1 (x, u) = 0, ϕ2 (x, u) = x2 − α0 , ϕ3 (x, u) = x3 , ϕ4 (x, u) = 0,


T
ϕ5 (x, u) = (x5 , x6 , x7 , x3 (x2 − α0 ), x7 (x2 − α0 ), 1, u1 , u2 , u3 , u4 , u7 ) ,
 g0 T
ϕ6 (x, u) = x2 − α0 , x6 , x5 x3 + (cos x4 cos x1 − cos θ0 ), 1, u1 , ..., u7 ,
V
T
ϕ7 (x, u) = (x3 , x5 , x6 , x7 , x5 (x2 − α0 ), 1, u1 , u2 , u3 , u4 , u7 ) .

Note that the parameters l0 , m0 and n0 have been added to the unknown parameter vec-
tors to compensate for any additional moments caused by failures, e.g. actuator hard-
overs.

9.5.1 Adaptive Control Design


The design of the command filtered backstepping feedback control design is identical
to the static backstepping part of the flight control design of Chapter 6. Note that the
nonlinear damping terms are not needed for the combination with an I&I based estimator
to guarantee stability, but they are kept in for the sake of comparison. The I&I estimator
design of Section 9.4.1 can be applied directly to the rewritten aircraft equations of mo-
tion (9.54).
Following the estimator design procedure of Section 9.4.1, the scaled estimation errors
184 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.5

are defined as
θ̂i − θi + βi (xi , x̂)
σi = , i = 2, 3, 5, 6, 7. (9.55)
ri

Let the output errors be given by ei = x̂i − xi , then the output filters are defined as
 
x̂˙ i = fi + ϕTi θ̂i + βi − ki ei , i = 2, 3, 5, 6, 7.

Note that no output filters are needed for x1 - and x4 -dynamics, since they contain no
uncertainties. The estimator dynamics are given by

7 7
˙ ∂βi X ∂βi ˙ X ∂βi
θ̂i = − (x̂i + ki ei ) − x̂j − u̇k ,
∂xi j=1
∂ x̂j ∂uk
k=1

where the functions βi (xi , x̂) are obtained from (9.42), i.e.
 
1 2 1
β2 = γ2 x2 − α0 x2 , β3 = γ3 x23 ,
2 2
 T
1
β5 = γ5 x5 x̂3 , x̂6 , x̂7 , x̂3 (x̂2 − α0 ), x̂7 (x̂2 − α0 ), x5 , 1, u1 , u2 , u3 , u4 , u7 ,
2
 T
1 g0
β6 = γ6 x6 x̂2 − α0 , x6 , −x̂5 x̂3 + (cos x4 cos x1 − cos θ0 ) , 1, u1 , ..., u7 ,
2 V
 T
1
β7 = γ7 x7 x̂3 , x7 , x̂5 , x̂5 (x̂2 − α0 ), x̂6 , 1, u1 , u2 , u3 , u4 , u7 ,
2

with γi > 0. Note that the derivative of the control vector is required in the estimator
design. This derivative is obtained directly from the command filters used in the last step
of the static backstepping control design. Taking the time derivative of the functions β
results in

β̇2 = γ2 ϕ2 , β̇3 = γ3 ϕ3 ,
T
β̇5 = γ5 ϕ3 − γ5 e2 (0, 0, 0, 0, −x3, −x7 , 0, 0, 0, 0, 0, 0)
T
− γ5 e3 (0, −1, 0, 0, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0, 0)
T
− γ5 e6 (0, 0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0)
7
T
X ∂β5
− γ5 e7 (0, 0, 0, −1, 0, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0) + u̇k
∂uk
k=1
7
X ∂β5
= γ5 ϕ5 − γ5 e2 δ52 − γ5 e3 δ53 − γ5 e6 δ56 − γ5 e7 δ57 + u̇k ,
∂uk
k=1
9.5 ADAPTIVE FLIGHT CONTROL EXAMPLE 185

T
β̇6 = γ6 ϕ6 − γ6 e2 (0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0)
T
− γ6 e3 (0, 0, 0, x5 + e5 , 0, 0, 0, 0, 0, 0, 0)
7
T
X ∂β6
− γ6 e5 (0, 0, 0, x3 , 0, 0, 0, 0, 0, 0, 0) + u̇k
∂uk
k=1
7
X ∂β6
= γ6 ϕ6 − γ6 e2 δ62 − γ6 e3 δ63 − γ6 e5 δ65 + u̇k ,
∂uk
k=1
T
β̇7 = γ7 ϕ7 − γ7 e2 (0, 0, 0, 0, −x5, 0, 0, 0, 0, 0, 0)
T
− γ7 e3 (0, −1, 0, 0, 0, 0, 0, 0, 0, 0, 0)
T
− γ7 e5 (0, 0, 0, −1, α0 − x2 − e2 , 0, 0, 0, 0, 0, 0)
7
T
X ∂β7
− γ7 e6 (0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0) + u̇k
∂uk
k=1
7
X ∂β7
= γ7 ϕ7 − γ7 e2 δ72 − γ7 e3 δ73 − γ7 e5 δ75 − γ7 e6 δ76 + u̇k ,
∂uk
k=1

where the bracketed terms correspond to the functions δij (x, e) of (9.43). Finally, from
(9.46) and (9.47) the dynamic scaling parameters ri and the gains ki are given by
7
5 X
ṙi = γi ri e2j |δij (x, e)|2
2 j=1

and
7
X
ki (x, r, e) = λi ri2 + ǫ cj rj2 |δji (x, e)|2 ,
j=1

with λi > 0, ǫ > 0 and ri (0) = 1. This completes the nonlinear estimator design for the
over-actuated aircraft model. The tracking performance and parameter estimation capa-
bilities of the adaptive controller resulting from combining this nonlinear estimator with
the command filtered adaptive backstepping approach can now be evaluated in numerical
simulations.

9.5.2 Numerical Simulation Results


This section presents the simulation results from the application of the adaptive flight
controller developed in the previous section to the over-actuated fighter aircraft model
of Section 6.3, implemented in the MATLAB/Simulink c environment. The simulations
are performed at two flight conditions for which the aerodynamic data can be found in
Tables 6.1 and 6.2.
The command filtered, static backstepping controller is tuned in a trial-and-error proce-
dure on the nominal aircraft model. The final control and nonlinear damping gains were
186 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.5

chosen identical to the ones used for the adaptive controllers in Chapter 6.
Tuning of the I&I estimator is relatively straight-forward, since increasing the adaptation
gains γi not only increases the adaptation rate but also improves the closed-loop perfor-
mance. This is in contrast with the integrated adaptive backstepping approach used in
Chapter 6 where increasing the adaptation gains can lead to a worsened transient per-
formance. The influence of the size of the other estimator parameters, λi and ǫ, on the
tracking performance is very limited, simply selecting them inside the bounds defined in
Section 9.4 is enough to guarantee convergence of the filtered states to the true states and
boundedness of the dynamic scaling parameters. The final gain and parameter selection
is: γi = 10, λi = 0.01, i = 1, ..., 7 and ǫ = 0.01.

Simulation with Left Aileron Runaway


In this first simulation a mixed maneuver involving a series of angle of attack and roll
angle doublets is considered. The aircraft model starts at flight condition 2 in a trimmed
horizontal flight where after 1 second of simulation time the left aileron suffers a hard-
over failure and moves to its limit of 45 degrees. This failure results in a large additional
rolling moment and minor additional pitch and yawing moments. Note that the adaptive
controller does not use any sensor measurements of the control surface position or any
other form of fault detection.
The results of this simulation can be found in Figure D.20 of Appendix D. Note that this
maneuver is identical to scenario 3 of Chapter 6, which means the results in Figure D.20
can be compared directly with the plots of Figures D.1 and D.2.
The adaptive controller manages to rapidly return the states to their reference values
after the failure. Of course, the coupling between longitudinal and lateral motion is more
prominent in the response after the failure. It can be seen in Figure D.20(c) that the
total moment coefficients post-failure are estimated rapidly and accurately. However, the
individual parameters have not converged to their true values since this maneuver alone
does not provide the estimator with enough information.
In Figure D.20(d) some additional parameters of the I&I estimator are plotted, i.e. the
dynamic scaling parameters r∗ , the output filter states x̂ and the prediction errors e∗ .
All signals are behaving as they should be, the dynamic scalings converge to constant
values, the filter states follow the aircraft model states and the prediction errors converge
to zero. In the control surface plots it can be seen that most of the additional moment is
compensated by the right aileron and the left elevator. The simple pseudo-inverse control
allocation scheme does not give any preference to certain control surfaces or axes. The
tracking performance and parameter convergence of the adaptive controller are very good
for this failure case.

Simulation with Left Elevator Runaway


The second simulation is again of a mixed maneuver involving a series of angle of attack
and roll angle doublets at flight condition 2. The aircraft model starts in a straight, hori-
zontal flight where after 1 second of simulation time the left elevator suffers a hard-over
9.6 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 187

failure and moves to its limit of 10.5 degrees. The simulation results of this maneuver can
be found in Figure D.21, which is again divided in 4 subplots. The results of this same
simulation scenario for the adaptive controllers of Chapter 6 can be found in Figures D.3
and D.4. Note however, that a more sophisticated control allocation approach was used
there.
The results demonstrate again that the adaptive controller performs excellent. The total
moment coefficients are rapidly found by the estimator and tracking performance is ex-
cellent. However, the individual components of the parameter estimate vectors do not
converge to their true values. It is interesting to note that the new adaptive design man-
ages to recover good performance without saturating any of the other control surfaces for
this failures scenario, unlike the adaptive flight controllers of Chapter 6.

9.6 F-16 Stability and Control Augmentation Design


The next step is to apply the I&I adaptive backstepping approach to the problem of de-
signing a SCAS for the high-fidelity F-16 model of Chapter 2 and compare its perfor-
mance with the integrated and modular SCAS designs of the previous chapter. First, the
I&I estimator for the dynamic F-16 model uncertainties will be derived, after that, the
simulation scenarios of the previous chapter are performed once again for the adaptive
backstepping flight controller with the new estimator.

9.6.1 Adaptive Control Design


The static nonlinear backstepping SCAS design has already been discussed in Section
8.2. This flight controller will be used again, but the tracking error driven adaptation
process is replaced by an I&I based estimator. The I&I estimator with dynamic scaling
of Section 9.4 can be applied directly to the F-16 model if the multiple model approach
with B-spline networks is once again selected to simplify the approximation process. The
size and number of the networks is selected identical to the ones used for the adaptive
backstepping flight control laws of Chapter 8.
Before the design of the estimator starts, the relevant equations of motion are written in
the more general form
ẋi = fi (x, u) + ϕi (x, u)T θi , i = 1, ..., 6, (9.56)
with the states x1 = VT , x2 = α, x3 = β, x4 = p, x5 = q, x6 = r and the inputs
u1 = δe , u2 = δa , u3 = δr . Here fi (x) represent the known parts of the F-16 model
dynamics given by
1
f1 (x) = [D0 + FT cos x2 cos x3 + mg1 ]
m
−L0 − FT sin x2 + mg3
f2 (x) = x5 − (x4 cos x2 + x6 sin x2 ) tan x3 +
mx1 cos x3
Y0 − FT cos x2 sin x3 + mg2
f3 (x) = − (x6 cos x2 − x4 sin x2 ) +
mx1
188 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.6


f4 (x) = (c1 x6 + c2 x4 ) x5 + c3 L̄0 + c4 N̄0 + Heng x5
c5 x4 x6 − c6 x24 − x26 + c7 M̄0 − Heng x6
 
f5 (x) =

f6 (x) = (c8 x4 − c2 x6 ) x5 + c4 L̄0 + c9 N̄0 + Heng x5 ,

where L0 , Y0 , D0 , L̄0 , M̄0 and N̄0 are the known, nominal values of the aerodynamic
forces and moments. The second term of (9.56) describes the uncertainties in the aircraft
model. As an example the approximation of the uncertainty in the total drag is given as
 
T q̄S x5 c̄
ϕ1 (x, u) θ1 = ĈD0 (x2 , x3 ) + ĈDq (x2 ) + ĈDδe (x2 , u1 )u1
m 2x1
θCD0
 
 
q̄S x5 c̄
= ϕTCD0 (x2 , x3 ), ϕTCDq (x2 ) , ϕT (x2 , u1 )u1  θCDq  ,
m 2x1 CDδe θCDδ
e

where ϕTCD∗ (·) are vectors containing the third-order B-spline basis functions that form
a first or second order B-spline network, and where θCD∗ are vectors containing the
unknown parameters, i.e. the B-spline network weights, that will have to be estimated
online. The other approximation terms are defined as
 
q̄S x5 c̄
ϕ2 (x, u)T θ2 = ĈL0 (x2 , x3 ) + ĈLq (x2 ) + ĈLδe (x2 , u1 )u1
mx1 cos x3 2x1

q̄S x4 b x6 b
ϕ3 (x, u)T θ3 = ĈY0 (x2 , x3 ) + ĈYp (x2 ) + ĈYr (x2 )
mx1 2x1 2x1
i
+ĈYδa (x2 , x3 )u2 + ĈYδr (x2 , x3 )u3

x4 b x6 b
ϕ4 (x, u)T θ4 = c3 q̄Sb ĈL̄0 (x2 , x3 ) + ĈL̄p (x2 ) + ĈL̄r (x2 )
2x1 2x1
i
+ĈL̄δa (x2 , x3 )u2 + ĈL̄δr (x2 , x3 )u3
 
x5 c̄
ϕ5 (x, u)T θ5 = c7 q̄Sc̄ ĈM̄0 (x2 , x3 ) + ĈM̄q (x2 ) + ĈM̄δe (x2 , u1 )u1
2x1

x4 b x6 b
ϕ6 (x, u)T θ6 = c9 q̄Sb ĈN̄0 (x2 , x3 ) + ĈN̄p (x2 ) + ĈN̄r (x2 )
2x1 2x1
i
+ĈN̄δa (x2 , x3 )u2 + ĈN̄δr (x2 , x3 )u3 ,

where all the coefficients are again approximated with B-spline networks. Note that,
to avoid overparametrization, the roll and yaw moment error approximators are not de-
signed to estimate the real errors, but rather pseudo-estimates. It is possible to estimate
the real errors, but this would result in additional update laws and thus increase the dy-
namic order of the adaptation process.
Now that the system is rewritten in the standard form, the I&I estimator design of Section
9.6 F-16 STABILITY AND CONTROL AUGMENTATION DESIGN 189

9.4 can be followed directly. The scaled estimation errors are again defined as

θ̂i − θi + βi (xi , x̂)


σi = , i = 1, ..., 6. (9.57)
ri
In Section 9.4, the functions βi in the above expression were selected as
Z xi
βi (xi , x̂) = γi ϕi (x̂1 , ..., x̂i−1 , σ, x̂i+1 , ..., x̂n )dσ, (9.58)
0

where γi are the adaptation gains. The analytic calculation of βi (xi , x̂) for the F-16
model is relatively time-consuming, since the regressors ϕ∗ are quite large and contain
the B-spline basis functions. Furthermore, the expression
n
X
ej δij (x, e) = ϕi (x) − ϕi (x̂1 , ..., x̂i−1 , σ, x̂i+1 , ..., x̂n ), δii ≡ 0, (9.59)
j=1

has to be solved for some functions δij (x, e). This is an even more tedious process due
to the B-spline basis functions. However, it is still possible to solve the above expres-
sion. This concludes the discussion on the I&I estimator design for the high-fidelity F-16
dynamic model.

9.6.2 Numerical Simulation Results


In this section the numerical simulation results are presented for the application of the
flight control system with I&I based estimator, derived in the previous section, to the
high-fidelity, six-degrees-of-freedom F-16 model in a number of failure scenarios and
maneuvers. The scenarios are identical to the ones considered in the previous chapter,
so that closed-loop responses for the new adaptive flight control design can be com-
pared directly with the earlier results. For that same reason, the control gains and com-
mand filter parameters of the backstepping SCAS design are selected the same as in the
previous chapter. The I&I estimator is tuned in a trial and error procedure, using the
bounds derived in Section 9.4. Tuning is quite intuitive as expected and it is not difficult
to find an adaptation gain selection that provides good results in all considered failure
scenarios. The final gain and parameter selection for the estimator is: γ2 , γ5 = 0.1,
γ1 , γ3 , γ4 , γ6 = 0.01, λi = 0.01, i = 1, ..., 6 and ǫ = 0.01.

Simulation Results with Cmq = 0


The first series of simulations considers again the sudden reduction of the longitudinal
damping coefficient Cmq to zero. As discussed before, this is not a very critical change,
but it does however serve as a nice example to evaluate the ability of the adaptation
scheme to accurately estimate inaccuracies in the onboard model. Figure D.22 of Ap-
pendix D.4 contains the simulation results for the I&I backstepping design starting at
190 IMMERSION AND INVARIANCE ADAPTIVE BACKSTEPPING 9.6

flight condition 2 with the longitudinal stick commanding a series of pitch doublets, after
20 seconds of simulation the sudden change in Cmq takes place. The left hand side plots
show the inputs and response of the aircraft in solid lines, while the dotted lines are the
reference trajectories. Tracking performance both before and after the change in pitch
damping is excellent. The time histories of the dynamic scalings and the output errors
of the filters used by the I&I estimator are not shown, but the scalings all converge to
constant values and the errors converge to zero as expected.
The solid lines in the right hand side plots of Figure D.22 represent the changes in aero-
dynamic coefficients w.r.t. the nominal values, divided by the maximum absolute value
of the real aerodynamic coefficients to normalize them. The dotted lines are the normal-
ized true errors between the altered and nominal aircraft model. The change in Cmq is
clearly visible in the plots. It can be seen that the estimator does not succeed in estimating
the individual components of the pitch moment correctly. This is to be expected for such
an insignificant error. The simulation results of this failure at the other flight conditions
exhibit the same characteristics.

Simulation Results with Longitudinal c.g. Shifts


A second series of simulations considers a much more complex failure scenario where
the longitudinal center of gravity of the aircraft model is shifted. Especially backward
shifts can be quite critical, since they work destabilizing and can even result in a loss of
static stability margin. All pitching and yawing aerodynamic moment coefficients will
change as a result of a longitudinal c.g. shift. For a model inversion based design the
changes are far more critical and stability loss often will occur for destabilizing shifts
without robust of adaptive compensation.
Figure D.23 depicts the simulation results for the F-16 model with the I&I based back-
stepping controller starting at flight condition 1 with the longitudinal stick commanding
a series of small amplitude pitch doublets; after 20 seconds the c.g. instantly shifts back-
ward by 0.06c̄ and the small positive static margin is lost. As can be seen in the left hand
side plots the tracking performance of the I&I based backstepping design is very good.
However, once again the right hand side plots demonstrate that the individual compo-
nents are not estimated correctly.
Compared to the results of Chapter 8, the tracking performance of the new adaptive de-
sign in this simulation scenario is superior to the performance of the other two adaptive
designs. The integrated adaptive design of Chapter 8 is also more aggressive in its re-
sponse, resulting from the non-ideal adaptation gains selected after the difficult tuning
process.

Simulation Results with Aileron Lockups


The last series of simulations considers controlling the aircraft model with right aileron
lockups or hard-overs. At 20 seconds of simulation time the right aileron suddenly moves
to a certain offset: -21.5, -10.75, 0, 10 or 21.5 degrees. It should again be noted that the
public domain F-16 model does not contain a differential elevator, hence only the rudder
and the left aileron can be used to compensate for these failures. The pilot should be able
to compensate for this failure, but it would result in a very high workload.
The results of a simulation performed with the integrated controller at flight condition
4 with a right aileron lockup at -10.75 degrees can be seen in Figure D.24. One lateral
stick doublet is performed before the failure occurs and three more are performed after
60 seconds of further simulation. The response of the I&I adaptive design resembles the
response of the modular adaptive design in Chapter 8, i.e. much better than the response
of the integrated adaptive backstepping controller. The I&I based adaptive design still
achieves quite good tracking performance after the failure and even the sideslip angle is
regulated back to zero. The additional forces and moments resulting from the error are
identified correctly, but the individual components are not.

9.7 Conclusions
In this chapter, the immersion and invariance technique is combined with backstepping
and the resulting adaptive control scheme is applied to the flight control problems of
Chapters 6 and 8. The control scheme makes use of an invariant manifold based estimator
with dynamic scalings and output filters to help guarantee attractivity of the manifold.
The controller itself is based on the backstepping approach with command filters to avoid
the analytic computation of the virtual control derivatives. Global asymptotic stability of
the closed-loop system and parameter convergence of the complete adaptive controller
can be proved with a single Lyapunov function. The controllers have been evaluated
in numerical simulations and the results have been compared with the integrated and
modular adaptive designs of Chapters 6 and 8.
Based on the simulation results several observations can be made:

1. The main advantage of the invariant manifold approach over a conventional adap-
tive backstepping controller with tracking error driven update laws is that it allows
for prescribed stable dynamics to be assigned to the parameter estimation error.
Furthermore, this approach does not suffer from undesired transient performance
resulting from unexpected dynamical behavior of parameter update laws that are
strongly coupled with the static feedback control part. As a result the adaptive con-
troller is much easier to tune, since a large update gain will improve the closed-loop
transient performance. Therefore, it is possible to achieve a better performance of
the closed-loop system, as is demonstrated in several simulation scenarios. In fact,
the closed-loop system resulting from the application of the I&I based adaptive
backstepping controller can be seen as a cascade interconnection between two sta-
ble systems with prescribed asymptotic properties.

2. The new I&I based modular adaptive controller does not require nonlinear damping
terms, which could potentially result in high-gain feedback, to prove closed-loop
stability. This is a big advantage over the modular backstepping control design
with least-squares identifier. Obviously, least-squares still has the appeal of
automatically adjusting the adaptation gain matrix. However,
this comes at the cost of a higher dynamic order of the estimator.
3. A minor disadvantage of the I&I based modular adaptive backstepping approach is
that the estimator employs overparametrization, which means that, in general, the
dynamic order of the estimator is higher than for an integrated adaptive backstep-
ping controller. Hence, the computational load is also higher. However, this does
not play a role in the aircraft control design problems considered in Chapters 6 and
8. For the trajectory control problem of Chapter 7, however, the I&I estimator would
require more states than the tracking error driven update laws of the constrained
adaptive backstepping controller.
4. Another disadvantage is that the analytical derivation of the I&I estimator in com-
bination with the B-spline networks used for the partitioning of the F-16 model is
relatively time-consuming.
Chapter 10
Conclusions and Recommendations

This thesis describes the development of adaptive flight control systems for a modern
fighter aircraft. Adaptive backstepping techniques in combination with online model
identification based on multiple models connected with B-spline networks have been used
as the main design tools. Several algorithms have been considered for the online model
adaptation. In this chapter the main conclusions of the research are provided. New re-
search questions can be formulated based on these conclusions and these are formulated
in the form of recommendations for further research.

10.1 Conclusions

This thesis has aimed to contribute to the development of computationally efficient recon-
figurable or adaptive flight control systems using nonlinear control design techniques and
online model identification, all based on well founded mathematical proofs. As the main
design framework the adaptive backstepping technique was investigated; this choice was
based on the strong stability and convergence properties of the method as discussed in
the introduction. For the online model identification a multiple model approach based
on flight envelope partitioning was proposed to keep the required computational load at
an acceptable level and create a numerically stable algorithm. The considered methods
have been investigated and adapted throughout the thesis to address their weaknesses
for the considered flight control problems. Finally, numerical simulations involving a
high-fidelity F-16 dynamic model with several types of uncertainties and failures have
been used to validate the proposed adaptive flight control designs. The main conclusions
and results of the thesis are summarized below.


Constrained Adaptive Backstepping

The standard adaptive backstepping approach has a number of shortcomings, two of the
most important being its analytical complexity and its sensitivity to input saturation. The
analytical complexity of the design procedure is mainly due to the calculation of the
derivatives of the virtual controls at each intermediate design step. Especially for high
order systems or complex multivariable systems such as aircraft dynamics, it becomes
very tedious to calculate the derivatives analytically. The parameter update laws of the
standard adaptive backstepping procedure are driven by the tracking errors, which makes
them sensitive to input saturation. If input saturation is in effect and the desired control
cannot be achieved, the tracking errors will in general become larger and no longer be
the result of function approximation errors exclusively. As a result the parameter update
laws may start to ‘unlearn’.
In Chapter 4 both shortcomings are solved by introducing command filters in the design
approach. The idea is to filter the virtual controls to calculate the derivatives and at
the same time enforce the input or state limits. The effect that these limits have on
the tracking errors is measured using a set of first order linear filters. Compensated
tracking errors where the effect of the limits has been removed are defined and used to
drive the parameter update laws. If there are no magnitude or rate limits in effect on
the command filters and their bandwidth is selected sufficiently high, the performance
of the constrained adaptive backstepping approach can be made arbitrarily close to that
of the standard adaptive backstepping approach. If the limits on the command filters are
in effect, the real tracking errors may increase, but the compensated tracking errors that
drive the estimation process are unaffected. Hence, the dynamic update laws will not
unlearn due to magnitude or rate limits on the input and states used for (virtual) control.
An additional advantage of the command filters in the design is that the application is
no longer restricted to uncertain nonlinear systems of a lower triangular form. For these
reasons, the constrained adaptive backstepping approach serves as a basis for all the
control designs developed in this thesis.
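To make the compensation mechanism concrete, a schematic sketch is given below in generic notation; it is not the exact set of equations from Chapter 4, and the gains $c_i$ and $g_i(\cdot)$ are placeholders. If $x_{i+1,c}^0$ denotes the desired (unlimited) virtual control and $x_{i+1,c}$ its command-filtered, limited version, a first order linear filter
$$\dot{\chi}_i = -c_i\,\chi_i + g_i(\cdot)\left(x_{i+1,c} - x_{i+1,c}^0\right), \qquad \chi_i(0) = 0,$$
estimates the effect of the magnitude and rate limits on the $i$-th tracking error $z_i$, where $g_i(\cdot)$ stands for the (possibly state-dependent) gain with which the virtual control enters the error dynamics. The compensated tracking error
$$\bar z_i = z_i - \chi_i$$
then drives the parameter update laws instead of $z_i$. When no limits are active, $x_{i+1,c} \to x_{i+1,c}^0$, so $\chi_i \to 0$ and $\bar z_i \to z_i$, and the unconstrained design is recovered.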

Inverse Optimal Adaptive Backstepping

The tuning functions and constrained adaptive backstepping designs are both focused on
achieving stability and convergence rather than performance or optimality. To this end
the static and dynamic parts of the adaptive backstepping controllers are designed si-
multaneously in a recursive manner. This way the very strong stability and convergence
properties of the controllers can be proved using a single control Lyapunov function. A
drawback of this design approach is that because there is strong coupling between the
static and dynamic feedback parts, it is unclear how changes in the adaptation gain affect
the tracking performance.
In an attempt to solve this problem, inverse optimal control theory was combined with the
tuning functions backstepping approach to develop an inverse optimal adaptive backstep-
ping control design for a general class of nonlinear systems with parametric uncertainties
in Chapter 5. An additional advantage of a control law that is (inverse) optimal with
respect to some ‘meaningful’ cost functional is its inherent robustness with
respect to external disturbances and model uncertainties.
However, nonlinear damping terms were utilized to achieve the inverse optimality, result-
ing in high gain feedback terms in the design. These nonlinear damping terms resulted in
a very robust control design, but also in a numerically very sensitive design. The non-
linear damping terms even removed the need for parameter adaptation. Furthermore, the
complexity of the cost functional associated with the inverse optimal design did not make
performance tuning any more transparent. It can be concluded that the inverse optimal
adaptive backstepping approach is unsuitable for the type of control problems considered
in this thesis.

Integrated Versus Modular Adaptive Backstepping Flight Control


In Chapter 6 the constrained adaptive backstepping approach was applied to the de-
sign of a flight control system for a simplified, nonlinear over-actuated fighter aircraft
model valid at two flight conditions. It is demonstrated that the extension of the adap-
tive backstepping control method to multi-input multi-output systems is straightforward.
A comparison with a more traditional modular adaptive controller that employs a least
squares identifier was made to illustrate the advantages and disadvantages of an inte-
grated adaptive design. The modular controller employs regressor filtering and nonlinear
damping terms to guarantee closed-loop stability and robustify the design against po-
tential faster-than-linear growth of the nonlinear systems. Furthermore, the interactions
between several control allocation algorithms and the online model identification for sim-
ulations with actuator failures were studied.
The results of numerical simulations demonstrated that both adaptive flight controllers
provide a significant improvement over a non-adaptive NDI/backstepping design in the
presence of actuator lockup failures. The success rate and performance of both adaptive
designs with a simple pseudo inverse control allocation is comparable for most failure
cases. However, in combination with weighted control allocation methods the success
rate and also the performance of the modular adaptive design is shown to be superior.
This is mainly due to the better parameter estimates obtained by the least squares identi-
fication method. The Lyapunov-based update laws of the constrained adaptive backstep-
ping design, in general, do not estimate the true value of the unknown parameters. It is
shown that especially the estimate of the control effectiveness of the damaged surfaces
is much more accurate using the modular adaptive design. It can be concluded that the
constrained adaptive backstepping approach is best used in combination with the simple
pseudo inverse control allocation to prevent unexpected results.
An advantage of the constrained adaptive backstepping design is that even for this simple
example the computational load is much lower, since the gradient based identifier has
fewer states than the least-squares identifier and does not require any regressor filtering.
Furthermore, the modular adaptive design requires regressor filtering and nonlinear
damping terms to compensate for the fact that the least-squares identifier is too slow
to deal with nonlinear growth, i.e. the certainty equivalence principle does not hold. The
high gain associated with the nonlinear damping terms can lead to numerical instability
problems. The identifier of the constrained adaptive backstepping design is much faster and
does not suffer from this problem. For these reasons the integrated adaptive backstepping
approach is deemed more suitable than the modular approach for the design of a reconfigurable
flight control system and is therefore tested on the high-fidelity F-16 model.
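For reference, a generic continuous-time least-squares identifier of the kind being compared here is sketched below; this is an illustrative textbook form, not the exact filtered regressor equations of Chapter 6. For a linearly parameterized prediction $\hat y = \phi_f^{\mathsf T}\hat\theta$ built from a filtered regressor $\phi_f$ and a filtered measurement $y_f$,
$$\dot{\hat\theta} = P\,\phi_f\left(y_f - \phi_f^{\mathsf T}\hat\theta\right), \qquad \dot P = -P\,\phi_f\,\phi_f^{\mathsf T}P, \qquad P(0) = P_0 > 0.$$
The gain matrix $P$ adjusts the adaptation rates automatically, but its $n_\theta(n_\theta+1)/2$ extra states, together with the regressor filter states, account for the higher dynamic order of the modular design.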

Full Envelope Adaptive Backstepping Flight Control


In Chapters 7 and 8 two control design problems for the high-fidelity, subsonic F-16
dynamic model were considered: Trajectory control and SCAS design. The trajectory
control problem is quite challenging, since the system to be controlled has a high relative
degree, resulting in a multivariable, four loop adaptive feedback design. The SCAS de-
sign on the other hand can be compared directly with the baseline flight control system
of the F-16.
A flight envelope partitioning method is used to decompose the globally valid nonlinear
aerodynamic model into multiple locally valid aerodynamic models. The Lyapunov-based
update laws of the adaptive backstepping method only update a few local models at each
time step, thereby keeping the computational load of the algorithm at a minimum and
making real-time implementation feasible. An additional advantage of using multiple,
local models is that information of the models that are not updated at a certain time step
is retained, thereby giving the approximator memory capabilities. B-spline networks are
used to ensure smooth transitions between the different regions and have been selected
for their excellent numerical properties. The partitioning for the F-16 has been done man-
ually based on earlier modeling studies and the fact that the aerodynamic data is already
available in a suitable tabular form.
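As an illustration of this multiple-model idea, the following Python sketch blends locally valid parameter vectors with first-order B-spline weights and updates only the active local models. The knot values, regressor dimension and gradient-type update law are illustrative placeholders and do not correspond to the actual F-16 partitioning or update laws used in the thesis.

```python
import numpy as np

# Illustrative flight-envelope partitioning: knots over one scheduling variable
# (e.g. angle of attack in degrees). The values below are made up.
knots = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
n_local = len(knots)                      # one local model per knot
theta_local = np.zeros((n_local, 4))      # local parameter vectors (4 regressors each)

def bspline_weights(x):
    """First-order (triangular) B-spline weights: at most two are nonzero,
    they sum to one and vary smoothly with the scheduling variable x."""
    x = np.clip(x, knots[0], knots[-1])
    w = np.zeros(n_local)
    i = np.searchsorted(knots, x, side="right") - 1
    i = min(i, n_local - 2)
    lam = (x - knots[i]) / (knots[i + 1] - knots[i])
    w[i], w[i + 1] = 1.0 - lam, lam
    return w

def predict(x_sched, phi):
    """Blend the local models into one locally valid prediction."""
    w = bspline_weights(x_sched)
    theta_blend = w @ theta_local         # smooth interpolation between local models
    return theta_blend @ phi

def update(x_sched, phi, error, gamma=0.05):
    """Gradient-type update applied only to the (few) active local models,
    so the cost per time step does not grow with the number of partitions."""
    w = bspline_weights(x_sched)
    for i in np.nonzero(w)[0]:
        theta_local[i] += gamma * w[i] * error * phi
```

Because only the local models with nonzero weight are touched at each time step, the update cost is independent of the total number of partitions, while the inactive models retain their previously learned parameters.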
Numerical simulation results of several maneuvers demonstrate that trajectory control
can still be accomplished with the investigated uncertainties and failures, while good
tracking performance is maintained. Compared to the other nonlinear adaptive trajectory
control designs in literature, such as standard adaptive backstepping or sliding mode con-
trol in combination with feedback linearization, the approach is much simpler to apply,
while the online estimation process is more robust to saturation effects.
Results of numerical simulations for the SCAS design demonstrate that the adaptive con-
troller provides a significant improvement over a non-adaptive NDI design for the sim-
ulated failure cases. The adaptive design shows no degradation in performance with the
added sensor dynamics and time delays. Feeding the reference signal through command
filters makes it trivial to enforce desired handling qualities for the constrained adaptive
backstepping controller in the nominal case. The handling qualities were verified using
frequency sweeps and lower order equivalent model analysis.
However, the adaptation gain tuning for the update laws of the constrained adaptive back-
stepping controller is a very time consuming and unintuitive process, since changing the
identifier gains can result in unexpected transients in the closed-loop tracking perfor-
mance. This is especially true for the SCAS design, since more aggressive maneuvering
is considered. It is very difficult to find a set of gains that gives an adequate performance
for all considered failure cases at the selected flight conditions. These results demonstrate
that an alternative to the tracking error driven identifier has to be found when complex
flight control problems are considered.

I&I Adaptive Backstepping


Despite a number of refinements introduced in this thesis, the adaptive backstepping
method with tracking error driven gradient update laws still has a major shortcoming.
The estimation error is only guaranteed to be bounded and converging to an unknown
constant. However, not much can be said about its dynamical behavior which may be un-
acceptable in terms of the closed-loop transient performance. Increasing the adaptation
gain will not necessarily improve the response of the system, due to the strong coupling
between system and estimator dynamics. The modular adaptive backstepping designs
with least-squares identifier as derived in Chapter 6 do not suffer from this problem.
However, this type of design requires unwanted nonlinear damping terms to compensate
for the slowness of the estimation based identifier.
In Chapter 9 an alternative way of constructing a nonlinear estimator is introduced,
based on the I&I approach. This approach allows for prescribed stable dynamics to be
assigned to the parameter estimation error. The resulting estimator is combined with the
command filtered backstepping approach to form a modular adaptive control scheme.
Robust nonlinear damping terms are not required in the backstepping design, since the
I&I based estimator is fast enough to capture the potential faster-than-linear growth of
nonlinear systems. The new modular scheme is much easier to tune than the ones result-
ing from the constrained adaptive backstepping approach. In fact, the closed-loop system
resulting from the application of the I&I based adaptive backstepping controller can be
seen as a cascaded interconnection between two stable systems with prescribed asymp-
totic properties. As a result, the performance of the closed-loop system with adaptive
controller can be improved significantly.
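A scalar illustration of the underlying idea (leaving out the dynamic scaling and output filters that the full design in Chapter 9 requires): consider $\dot x = \varphi(x)\,\theta + u$ with unknown constant $\theta$ and define the off-the-manifold coordinate
$$z = \hat\theta + \beta(x) - \theta.$$
Choosing the update law $\dot{\hat\theta} = -\dfrac{\partial\beta}{\partial x}\big(\varphi(x)\,(\hat\theta + \beta(x)) + u\big)$ gives
$$\dot z = \dot{\hat\theta} + \frac{\partial\beta}{\partial x}\,\dot x = \frac{\partial\beta}{\partial x}\,\varphi(x)\big(\theta - \hat\theta - \beta(x)\big) = -\frac{\partial\beta}{\partial x}\,\varphi(x)\,z,$$
so selecting $\beta(x)$ such that $\partial\beta/\partial x = \gamma\,\varphi(x)$ with $\gamma > 0$ yields the prescribed stable error dynamics $\dot z = -\gamma\,\varphi(x)^2\,z$, independently of the tracking error that drives a conventional adaptive backstepping update law.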
The flight control problems of Chapters 6 and 8 have been tackled again using the new
I&I based modular adaptive backstepping scheme. A comparison of the simulation re-
sults has demonstrated that it is indeed possible to achieve a much higher level of tracking
performance with the new design technique. Moreover, the I&I based modular adaptive
backstepping approach has even stronger provable stability and convergence properties
than the integrated adaptive backstepping approaches discussed in this thesis, while at the
same time achieving a modularity in the design of the controller and identifier modules.
It can be concluded that the I&I based modular adaptive backstepping design has great
potential for these types of control problems: the resulting adaptive flight control systems
perform very well in the considered failure scenarios, while the identifier tuning process
is relatively straightforward.
A minor disadvantage of the I&I based modular adaptive backstepping approach is that
the estimator employs overparametrization, i.e. in general more than one update law is
used to estimate each unknown parameter. Overparametrization is not necessarily dis-
advantageous from a performance point of view, but it is less efficient in a numerical
implementation of the controller. Overparametrization does not play a role in the aero-
dynamic angle control design problems considered in Chapters 6 and 8. However, for
the trajectory control problem of Chapter 7 the I&I estimator would require more states
than the tracking error driven update laws of the constrained adaptive backstepping ap-
proach. Another minor disadvantage is that the analytical derivation of the I&I estimator
in combination with the B-spline networks used for the partitioning of the F-16 model is
relatively time consuming, but this additional effort is marginal compared to the effort
required to perform the tuning process of the integrated adaptive backstepping update
laws.

Comparison of Adaptive Flight Control Frameworks


The overall performance of the three methods used for flight control design in this thesis
is now compared using several important criteria, such as design complexity and track-
ing performance. A table with the results of this comparison can be found in Figure
10.1. It can be seen that the modular adaptive backstepping design with I&I identifier
outperforms the other methods, while also being the only method that does not display
an unacceptable performance for any of the criteria.

Figure 10.1: Comparison of the overall performance of integrated adaptive backstepping control,
modular adaptive backstepping control with RLS identifier and modular adaptive backstepping
control with I&I identifier. Green indicates the best performing method, yellow the second best
and orange the worst. A red table cell indicates an unacceptable performance.

A short explanation of each criterion is given.


1. Design complexity: The static feedback design of the controllers is nearly identi-
cal, but the identifier designs for the modular approaches are more complex than
for the integrated design. The analytical derivation of the I&I estimator is the most
time consuming, especially in combination with flight envelope partitioning.
2. Dynamic order: The dynamic order of the RLS identifier is by far the highest. The
I&I estimator requires a couple of extra filter states when compared to the tracking
error driven update laws of the integrated adaptive backstepping. Furthermore, due
to the overparametrization in the design, the dynamic order of the I&I estimator
can increase for some control problems.
3. Estimation quality: With sufficient excitation the RLS identifier will find the true
parameters of the system. The I&I identifier will find the total force and moment
coefficients, but not the individual parameters. Finally, the parameter estimates of
the integrated adaptive backstepping controller will in general never converge to
their true values.
4. Numerical stability: The nonlinear damping terms used for the modular design
with RLS identifier can lead to numerical problems. The integrated adaptive de-
sign is the simplest and therefore the most numerically stable, although it should
be noted that no problems were encountered for the modular design with I&I esti-
mator.
5. Tracking performance: The tracking performance of the modular designs is bet-
ter in general, with the I&I designs just outperforming the RLS designs.
6. Transient performance: Unexpected behavior of the update laws can lead to bad
transient performance for the integrated design. The nonlinear damping terms
sometimes result in unwanted oscillations with the modular RLS design.
7. Tuning complexity: Integrated backstepping designs are very hard or impossi-
ble to tune for complex systems due to the strong coupling between controller
and identifier. Tuning of the I&I estimator is quite straightforward, while the tuning of
the RLS identifier is almost automatic. However, finding the correct resetting algo-
rithm and nonlinear damping gains requires additional effort. Therefore, the tuning
of the modular adaptive backstepping controller with I&I identifier is the least time
consuming and the most transparent.

Final Conclusions
On the basis of the research performed in this thesis, it can be concluded that an RFC
system based on the modular adaptive backstepping method with I&I estimator shows a
lot of potential, since it possesses all the features aimed at in the thesis goal:
• a single nonlinear backstepping controller with an I&I estimator is used for the
entire flight envelope. The stability and convergence properties of the resulting
closed-loop system are guaranteed by Lyapunov theory and have been verified in
numerical simulations. Due to the modularity of the design, systematic gain tuning
can be performed to achieve the desired closed-loop performance.
• the numerical simulation results with the F-16 model suffering from various types
of sudden actuator failures and large aerodynamic uncertainties demonstrate that
the performance of the RFC system is superior with respect to a non-adaptive NDI
based control system or the baseline gain-scheduled control system for the con-
sidered situations. By extending the regressors, i.e. the local aerodynamic model
polynomial structures, of the identifier, the adaptive controller should also be able
to take asymmetric (structural) failures, which introduce additional coupling be-
tween longitudinal and lateral motion of the aircraft, into account.
• by making use of a multiple model approach based on flight envelope partitioning
with B-spline networks the computational load of the numerically stable adaptive
control algorithm is relatively low. The algorithm can easily run real-time on a
budget desktop computer. However, the current processors of onboard comput-
ers are sized for the current generation of flight controllers and are not powerful
enough to run the proposed adaptive control algorithm in real-time. Manufactur-
ers and clients will have to be convinced that the benefits of RFC are worth the
additional hardware cost and weight.

10.2 Recommendations
New questions and research directions can be formulated based on the research presented
in this thesis. The corresponding recommendations are presented in this section:
• As already discussed in the introduction, accurate failure models of realistic (struc-
tural) damage are lacking for the high-fidelity F-16 model used as the main study
object in this thesis. For this reason, the evaluation of the adaptive flight controllers
was limited to simulation scenarios with actuator failures, symmetric center of
gravity shifts and uncertainties in individual aerodynamic coefficients. If more re-
alistic aerodynamic data for asymmetric failures such as partial surface loss could
be obtained, the results of the study would be more valuable. Furthermore, the
adaptive controllers could be extended with an FDIE module that performs actua-
tor health monitoring, thereby simplifying the task of online model identification.
This was not done in this thesis work to make the limited failure scenarios more
challenging.
• A multiple local model approach resulting from flight envelope partitioning was
used to simplify the online model approximation and thereby reduce the compu-
tational load. The aerodynamic model of the F-16 considered in this thesis was
already examined in many earlier studies and is already in a form that lends it-
self to partitioning into locally valid models. However, in the more general case
finding a proper local approximation structure and partitioning may be a time con-
suming study in itself.
Many more advanced local approximation and learning algorithms are currently
being developed, see e.g. [50, 145, 204]. In [145] an algorithm is proposed that
employs nonlinear function approximation with automatic growth of the learning
network according to the nonlinearities and the working domain of the control sys-
tem. The unknown function in the dynamical system is approximated by piecewise
linear models using a nonparametric regression technique. Local models are allo-
cated as necessary and their parameters are optimized online. Such an advanced
technique eliminates the need for manual partitioning and the structure automat-
ically adapts itself in the case of failures. However, it is unclear if a real-time
implementation would be feasible for the F-16 model or similar high-fidelity air-
craft models.

• For some simulations with sudden failure cases, the adaptive controllers managed
to stabilize the aircraft, but the commanded maneuver proved too challenging for
the damaged aircraft. Hence, an adaptive controller by itself may not be sufficient
for a good reconfigurable flight control system. The pilot or guidance system also
needs to be aware of the characteristics of the failure, since the post-failure flight
envelope might be a lot smaller. This statement has resulted in a whole new area
of research, usually referred to as adaptive flight envelope estimation and/or pro-
tection, see e.g. [198, 211]. It is possible to indicate to the pilot which axes have
suffered a failure with the adaptive controllers developed in this thesis, so that he is
made aware that there is a failure and that he should fly more carefully. However,
the development of fully adaptive flight envelope protection systems or at least
systems that help make the pilot aware of the type and the size of failure should be
a key research focus for the coming years.

• Two main reasons for the gap between the research and the application of adap-
tive or reconfigurable flight control methods can be identified. Firstly, many of the
adaptive control techniques cannot be applied to existing aircraft without replacing
the already certified flight control laws. Secondly, the verification and validation
procedures needed to certify the novel reconfiguration methods have not received
the necessary attention. For these reasons, some designers have been developing
‘retrofit’ adaptive control systems which leave the baseline flight control system
intact, see e.g. [141, 158].
The nonlinear adaptive designs developed in this thesis cannot be used in a retrofit
manner, since all current flight control systems are based on linear design tech-
niques. However, this may change when the first aircraft with NDI based flight
control systems become available. Nevertheless, more research should be devoted
to the verification and validation procedures that can be applied directly to non-
linear and even adaptive control designs. The linear analysis tools currently used
have a lot of shortcomings.

• The contributions of this thesis are mainly of a theoretical nature, since all results
were obtained from numerical flight simulations with preprogrammed maneuvers.
No piloted simulations were performed. However, the adaptive flight control sys-
tems developed in the thesis have been test flown by the author on a desktop com-
puter using a joystick and a flight simulator. Compared to a normal NDI controller
and the baseline controller, the workload was indeed lowered for most of the fail-
ures considered. Results of these simulated test flights are not included in the
thesis, since the author is not a professional pilot. Nevertheless, simulations with
actual test pilots should be performed to examine the interactions between pilots
and the adaptive control systems. The fast reaction of the pilot to the unexpected
movements caused by an unknown, sudden change in dynamic behavior of the air-
craft in combination with the immediate online adaptation may lead to unexpected
results. Pilots may need to learn to trust the adaptive element in the flight con-
trol system, as was already observed in an earlier study involving a damaged large
passenger aircraft [133].
• As discussed in the conclusions, the I&I based estimator used in this thesis em-
ploys overparametrization. From a numerical point of view it would be beneficial
to obtain a single estimate of each unknown parameter. Moreover, should this
be achieved, it may be possible to combine the I&I based estimator with a least-
squares adaptation. The ability of least-squares to even out adaptation rates would
almost completely automate the tuning of the adaptive control design.
The regressor filters employed in the modular control designs with least-squares,
as derived in Chapter 6, result in a high dynamic order of the estimators and may
also make the estimator less responsive, thus affecting the performance. The com-
bination with I&I can possibly remove the need for these filters as demonstrated
for linearly parametrized nonlinear control systems in the ‘normal form’ in [114].
Furthermore, the need for nonlinear damping is also removed. However, to extend
the suggested approach of [114] to the broader class of strict-feedback systems
would require overparametrization. This would mean employing multiple Riccati
differential equations, which is of course unacceptable. Hence, the use of over-
parametrization should certainly be avoided if least-squares are considered.
• Control engineering is a broad field of study that encompasses many applications.
The adaptive backstepping techniques discussed in this thesis were studied and
evaluated purely for their usefulness in flight control design problems. Obviously,
most of the techniques studied in this thesis can be and have been used for other
types of control system problems in literature and sometimes even in practice.
However, the shortcomings and modifications discussed in this thesis may not be
relevant in other control design problems.
Appendix A
F-16 Model

A.1 F-16 Geometry

Figure A.1: F-16 of the Royal Netherlands Air Force Demo Team. Picture by courtesy of the F-16
Demo Team.


Table A.1: F-16 parameters.

Parameter Symbol Value


aircraft mass (kg) m 9295.44
wing span (m) b 9.144
wing area (m2 ) S 27.87
mean aerodynamic chord (m) c̄ 3.45
roll moment of inertia (kg.m2 ) Ix 12874.8
pitch moment of inertia (kg.m2 ) Iy 75673.6
yaw moment of inertia (kg.m2 ) Iz 85552.1
product moment of inertia (kg.m2 ) Ixz 1331.4
product moment of inertia (kg.m2 ) Ixy 0.0
product moment of inertia (kg.m2 ) Iyz 0.0
c.g. location (m) xcg 0.3c̄
reference c.g. location (m) xcgr 0.35c̄
engine angular momentum (kg.m2/s) Heng 216.9

A.2 ISA Atmospheric Model

For the atmospheric data an approximation of the International Standard Atmosphere


(ISA) is used [143].


$$T = \begin{cases} T_0 + \lambda h & \text{if } h \le 11000 \\ T_{(h=11000)} & \text{if } h > 11000 \end{cases}$$

$$p = \begin{cases} p_0\left(1 + \dfrac{\lambda h}{T_0}\right)^{-\frac{g_0}{R\lambda}} & \text{if } h \le 11000 \\[2mm] p_{(h=11000)}\, e^{-\frac{g_0 (h-11000)}{R\,T_{(h=11000)}}} & \text{if } h > 11000 \end{cases}$$

$$\rho = \frac{p}{RT}$$

$$a = \sqrt{\gamma R T},$$

where T0 = 288.15 K is the temperature at sea level, p0 = 101325 N/m2 the pressure
at sea level, R = 287.05 J/kg.K the gas constant of air, g0 = 9.80665 m/s2 the gravity
constant at sea level, λ = dT /dh = −0.0065 K/m the temperature gradient and γ = 1.41
the isentropic expansion factor for air. Given the aircraft’s altitude (h in meters) it returns
the current temperature (T in Kelvin), the current air pressure (p in N/m2 ), the current
air density (ρ in kg/m3 ) and the speed of sound (a in m/s).
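For illustration, a minimal Python sketch of this two-layer ISA approximation using the constants listed above; the function name and the use of γ = 1.4 for air are choices made here and are not taken from the thesis's simulation code.

```python
import math

T0, P0 = 288.15, 101325.0      # sea-level temperature (K) and pressure (N/m^2)
R, G0 = 287.05, 9.80665        # gas constant of air (J/kg.K), gravity (m/s^2)
LAM = -0.0065                  # temperature gradient dT/dh (K/m)
GAMMA = 1.4                    # isentropic expansion factor for air (assumed here)
H_TROPO = 11000.0              # altitude of the tropopause layer boundary (m)

def isa(h):
    """Return temperature (K), pressure (N/m^2), density (kg/m^3) and
    speed of sound (m/s) for a given altitude h (m)."""
    T11 = T0 + LAM * H_TROPO
    p11 = P0 * (1.0 + LAM * H_TROPO / T0) ** (-G0 / (R * LAM))
    if h <= H_TROPO:
        T = T0 + LAM * h
        p = P0 * (1.0 + LAM * h / T0) ** (-G0 / (R * LAM))
    else:
        T = T11
        p = p11 * math.exp(-G0 * (h - H_TROPO) / (R * T11))
    rho = p / (R * T)
    a = math.sqrt(GAMMA * R * T)
    return T, p, rho, a
```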

A.3 Flight Control System


These figures contain the schemes of the baseline flight control system of the F-16 model.
More details can be found in [149].

Figure A.2: Baseline pitch axis control loop of the F-16 model.

Figure A.3: Baseline F-16 roll axis control loop of the F-16 model.

Figure A.4: Baseline F-16 yaw axis control loop of the F-16 model.
Appendix B
System and Stability Concepts

This appendix clarifies certain system and stability concepts that are used in the main
body of the thesis. Most proofs are not included, but can be found in the main references
for this appendix: [106, 118, 192].

B.1 Lyapunov Stability and Convergence


For completeness, the main results of Lyapunov stability theory as discussed in Section
3.2 are reviewed. More comprehensive accounts can be found in [106] and [118].
Consider the non-autonomous system

ẋ = f (x, t) (B.1)

where f : Rn × R → Rn is locally Lipschitz in x and piecewise continuous in t.


Definition B.1. The origin x = 0 is the equilibrium point for (B.1) if

f (0, t) = 0, ∀t ≥ 0. (B.2)

The following comparison functions are useful tools to create more transparent stability
definitions.
Definition B.2. A continuous function α : [0, a) → R+ is said to be of class K if
it is strictly increasing and α(0) = 0. It is said to be of class K∞ if a = ∞ and
limr→∞ α(r) = ∞.
Definition B.3. A continuous function β : [0, a) × R+ → R+ is said to be of class KL
if, for each fixed s, the mapping β(r, s) is of class K with respect to r and, for each fixed
r, the mapping β(r, s) is decreasing with respect to s and lims→∞ β(r, s) = 0. It is said

to be of class KL∞ if, in addition, for each fixed s the mapping β(r, s) belongs to class
K∞ with respect to r.
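As simple examples (added here for illustration):
$$\alpha_1(r) = \arctan(r) \in \mathcal K \setminus \mathcal K_\infty, \qquad \alpha_2(r) = r^2 \in \mathcal K_\infty, \qquad \beta(r,s) = r\,e^{-s} \in \mathcal{KL}_\infty,$$
since $\arctan$ is strictly increasing but bounded, $r^2$ is strictly increasing and unbounded, and $r\,e^{-s}$ is of class $\mathcal K_\infty$ in $r$ for each fixed $s$ and decreases to zero in $s$ for each fixed $r$.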

Using these comparison functions the stability definitions of Chapter 3 are restated.
Definition B.4. The equilibrium point x = 0 of (B.1) is
• uniformly stable, if there exists a class K function γ(·) and a positive constant c,
independent of t0 , such that

|x(t)| ≤ γ(|x(t0)|), ∀t ≥ t0 ≥ 0, for all x(t0) such that |x(t0)| < c; (B.3)

• uniformly asymptotically stable, if there exists a class KL function β(·, ·) and a


positive constant c, independent of t0 , such that

|x(t)| ≤ β(|x(t0)|, t − t0), ∀t ≥ t0 ≥ 0, for all x(t0) such that |x(t0)| < c; (B.4)

• exponentially stable, if (B.4) is satisfied with β(r, s) = kre−αs , k > 0, α > 0;


• globally uniformly stable, if (B.3) is satisfied with γ ∈ K∞ for any initial state
x(t0 );
• globally uniformly asymptotically stable, if (B.4) is satisfied with β ∈ KL∞ for
any initial state x(t0 );
• globally exponentially stable, if (B.4) is satisfied for any initial state x(t0 ) and with
β(r, s) = kre−αs , k > 0, α > 0.

Based on these definitions, the main Lyapunov stability theorem is then formulated as
follows.
Theorem B.5. Let x = 0 be an equilibrium point of (B.1) and D = {x ∈ Rn | |x| < r}.
Let V : D × R+ → R+ be a continuously differentiable function such that ∀t ≥ 0, ∀x ∈
D,

γ1 (|x|) ≤ V (x, t) ≤ γ2 (|x|) (B.5)


$$\frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x,t) \le -\gamma_3(|x|). \qquad (B.6)$$
Then the equilibrium x = 0 is
• uniformly stable, if γ1 and γ2 are class K functions on [0, r) and γ3 (·) ≥ 0 on
[0, r);
• uniformly asymptotically stable, if γ1 , γ2 and γ3 are class K functions on [0, r);
• exponentially stable, if γi (ρ) = ki ρα on [0, r), ki > 0, α > 0, i = 1, 2, 3;
• globally uniformly stable, if D = Rn , γ1 and γ2 are class K∞ functions, and


γ3 (·) ≥ 0 on R+ ;
• globally uniformly asymptotically stable, if D = Rn , γ1 and γ2 are class K∞
functions, and γ3 is a class K function on R+ ; and
• globally exponentially stable, if D = Rn and γi (ρ) = ki ρα on R+ , ki > 0, α > 0,
i = 1, 2, 3.
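As a minimal illustration of how the theorem is applied without solving (B.1), consider the scalar system $\dot x = -x^3$ with $V(x) = \tfrac12 x^2$ (an example added here, not taken from the thesis):
$$\gamma_1(|x|) = \gamma_2(|x|) = \tfrac12 |x|^2, \qquad \dot V = x\,\dot x = -x^4 = -\gamma_3(|x|) \quad \text{with } \gamma_3(r) = r^4,$$
with $D = \mathbb R$, $\gamma_1, \gamma_2 \in \mathcal K_\infty$ and $\gamma_3 \in \mathcal K$, so the origin is globally uniformly asymptotically stable. The exponential stability condition of the theorem (all $\gamma_i(\rho) = k_i\rho^\alpha$ with a common exponent $\alpha$) is not met, which is consistent with the fact that solutions of $\dot x = -x^3$ converge slower than exponentially near the origin.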

The key advantage of this theorem is that it can be applied without solving the differential
equation (B.1). However, analysis of dynamic systems can result in situations where
the derivative of the Lyapunov function is only negative semi-definite. For autonomous
systems it may still be possible to conclude asymptotic stability in these situations via
the concept of invariant sets, i.e. LaSalle’s Invariance Theorem.
Definition B.6. A set Γ is a positively invariant set of a dynamic system if every trajectory
starting in Γ at t = 0 remains in Γ for all t > 0.

For instance, any equilibrium of a system is an invariant set, but also the set of all equi-
libria of a system is an invariant set. Using the concept of invariant sets, the following
invariant set theorem can be stated.
Theorem B.7. For an autonomous system ẋ = f (x), with f continuous on the domain
D, let V (x) : D → R be a function with continuous first partial derivatives on D. If
1. the compact set Ω ⊂ D is a positively invariant set of the system;
2. V̇ ≤ 0 ∀x ∈ Ω;
then every solution x(t) originating in Ω converges to M as t → ∞, where R = {x ∈
Ω|V̇ (x) = 0} and M is the union of all invariant sets in R.

LaSalle’s Theorem is only applicable to the analysis of autonomous systems, since it


may be unclear how to define the sets M and R for non-autonomous systems. For non-
autonomous systems Barbalat’s Lemma can be used:
Lemma B.8. Let φ(t) : R+ → R be uniformly continuous¹ on [0, ∞).
If $\lim_{t\to\infty}\int_0^t \phi(\tau)\,d\tau$ exists and is finite, then
$$\lim_{t\to\infty}\phi(t) = 0.$$

Note that the required uniform continuity of φ can be proven by showing either that φ̇ ∈
L∞([0, ∞)) or that φ(t) is Lipschitz on [0, ∞). Finally, the theorem due to LaSalle and
Yoshizawa is stated.
¹A function f : D ⊆ R → R is uniformly continuous if, for any ε > 0, there exists δ(ε) > 0 such that
|x − y| < δ ⇒ |f(x) − f(y)| < ε, for all x, y ∈ D.


Theorem B.9. Let x = 0 be an equilibrium point of (B.1) and suppose that f is locally
Lipschitz in x uniformly in t. Let V : Rn × R+ → R+ be a continuously differentiable
function such that

γ1 (|x|) ≤ V (x, t) ≤ γ2 (|x|) (B.7)


$$\dot V = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x,t) \le -W(x) \le 0, \qquad (B.8)$$
∀t ≥ 0, ∀x ∈ Rn , where γ1 and γ2 are class K∞ functions and W is a continuous
function. Then all solutions of (B.1) are globally uniformly bounded and satisfy

$$\lim_{t\to\infty} W(x(t)) = 0. \qquad (B.9)$$

In addition, if W (x) is positive definite, then the equilibrium x = 0 is globally uniformly


asymptotically stable.

Proof: Since V̇ ≤ 0, V is non-increasing. Hence, in view of (B.7), it can be concluded


that x is globally uniformly bounded, i.e. |x(t)| ≤ B, ∀t ≥ 0. Furthermore, since
V (x(t), t) is non-increasing and bounded from below by zero, it can be concluded that it
has a limit V∞ as t → ∞. Integrating (B.8) gives
$$\lim_{t\to\infty}\int_{t_0}^{t} W(x(\tau))\,d\tau \le -\lim_{t\to\infty}\int_{t_0}^{t} \dot V(x(\tau),\tau)\,d\tau = \lim_{t\to\infty}\left\{V(x(t_0),t_0) - V(x(t),t)\right\} = V(x(t_0),t_0) - V_\infty, \qquad (B.10)$$
which means that $\int_{t_0}^{\infty} W(x(\tau))\,d\tau$ exists and is finite. It remains to show that W(x(t))
is also uniformly continuous. Since |x(t)| ≤ B and f is locally Lipschitz in x uniformly
in t, it can be observed that for any t ≥ t0 ≥ 0,
$$|x(t) - x(t_0)| = \left|\int_{t_0}^{t} f(x(\tau),\tau)\,d\tau\right| \le L\int_{t_0}^{t} |x(\tau)|\,d\tau \le LB\,|t - t_0|, \qquad (B.11)$$
where L is the Lipschitz constant of f on {|x| ≤ B}. Selecting $\delta(\epsilon) = \frac{\epsilon}{LB}$ results in
$$|x(t) - x(t_0)| < \epsilon, \qquad \forall\, |t - t_0| \le \delta(\epsilon), \qquad (B.12)$$

which means that x(t) is uniformly continuous. Since W is continuous, it is uniformly


continuous on the compact set {|x| ≤ B}. It can be concluded that W (x(t)) is uniformly
continuous from the uniform continuity of W (x) and x(t). Hence, it satisfies the condi-
tions of Lemma B.8, which in turn guarantees that W (x(t)) → 0 as t → ∞.
If, in addition, W (x) is positive definite, there exists a class K function γ3 such that
W (x) ≥ γ3 (|x|). By Theorem B.7 it can be concluded that x = 0 is globally uniformly
asymptotically stable.

B.2 Input-to-state Stability


This section recalls the notion of input-to-state stability (ISS) [192, 193]. The ISS con-
cept plays an important role in the modular backstepping design technique as derived in
Section 6.2.2.
Consider the system
ẋ = f (t, x, u), (B.13)
where f is piecewise continuous in t and locally Lipschitz in x and u.
Definition B.10. The system (B.13) is said to be input-to-state stable (ISS) if there exist a
class KL function β and a class K function γ, such that, for any x(t0 ) and for any input
u that is continuous and bounded on [0, ∞) the solution exists for all t ≥ 0 and satisfies
$$|x(t)| \le \beta(|x(t_0)|, t - t_0) + \gamma\!\left(\sup_{\tau\in[t_0,t]}|u(\tau)|\right) \qquad (B.14)$$
for all t ≥ t0 ≥ 0.

The function γ(·) is often referred to as an ISS gain for the system (B.13). The above
definition implies that an ISS system is bounded-input bounded-state stable and has a
globally uniformly asymptotically stable equilibrium at zero when u(t) = 0.
The ISS property can be equivalently characterized in terms of Lyapunov functions, as
the following theorem shows.
Theorem B.11. The system (B.13) is ISS if and only if there exists a continuously differ-
entiable function V : R+ × Rn → R+ such that for all x ∈ Rn and u ∈ Rm ,

γ1 (|x|) ≤ V (x, t) ≤ γ2 (|x|) (B.15)


$$|x| \ge \rho(|u|) \;\Rightarrow\; \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(t,x,u) \le -\gamma_3(|x|), \qquad (B.16)$$

where γ1 , γ2 and ρ are class K∞ functions and γ3 is a class K function.

Note that an ISS gain for the system (B.13) can be obtained from the above theorem as
γ = γ1−1 ◦ γ2 ◦ ρ.
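A standard illustrative example (added here): the scalar system $\dot x = -x + u$ is ISS, since the variation-of-constants formula gives
$$|x(t)| \le |x(t_0)|\,e^{-(t-t_0)} + \sup_{\tau\in[t_0,t]}|u(\tau)|,$$
which has the form (B.14) with $\beta(r,s) = r\,e^{-s}$ and ISS gain $\gamma(r) = r$. Equivalently, $V(x) = \tfrac12 x^2$ satisfies (B.16) with $\rho(r) = 2r$: whenever $|x| \ge 2|u|$, $\dot V = -x^2 + xu \le -\tfrac12 x^2$.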

B.3 Invariant Manifolds and System Immersion


This section gives the definition of an invariant manifold [210] and of system immersion
[36], since these notions are used in Chapter 9.
Consider the autonomous system

ẋ = f (x), y = h(x), (B.17)

with state x ∈ Rn and output y ∈ Rm .



Definition B.12. The manifold M = {x ∈ Rn : s(x) = 0}, with s(x) smooth, is said to
be (positively) invariant for ẋ = f(x) if s(x(0)) = 0 implies s(x(t)) = 0 for all
t ≥ 0.
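A simple illustration (added here): for the harmonic oscillator $\dot x_1 = x_2$, $\dot x_2 = -x_1$, the manifold
$$\mathcal M = \{x \in \mathbb R^2 \mid s(x) = x_1^2 + x_2^2 - 1 = 0\}$$
is invariant, since $\dot s = 2x_1\dot x_1 + 2x_2\dot x_2 = 2x_1x_2 - 2x_2x_1 = 0$ along trajectories, so $s(x(0)) = 0$ implies $s(x(t)) = 0$ for all $t \ge 0$.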

Consider now the (target) system

ξ̇ = α(ξ), ζ = β(ξ), (B.18)

with state ξ ∈ Rp , p < n, and output ζ ∈ Rm .


Definition B.13. The system (B.18) is said to be immersed into the system (B.17) if there
exists a smooth mapping π : Rp → Rn satisfying x(0) = π(ξ(0)) and β(ξ1) ≠ β(ξ2) ⇒
h(π(ξ1)) ≠ h(π(ξ2)), and such that
$$f(\pi(\xi)) = \frac{\partial \pi}{\partial \xi}\,\alpha(\xi)$$
and
$$h(\pi(\xi)) = \beta(\xi)$$
for all ξ ∈ Rp.

Hence, roughly stated, a system Σ1 is said to be immersed into a system Σ2 if the input-
output mapping of Σ1 is a restriction of the input-output mapping of Σ2, i.e. any output
response generated by Σ1 is also an output response of Σ2 for a restricted set of initial
conditions.
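As a small example of Definition B.13 (added here for illustration): the target system $\dot\xi = -\xi$, $\zeta = \xi$ is immersed into the system $\dot x_1 = -x_1$, $\dot x_2 = -2x_2$, $y = x_1$ through the mapping $\pi(\xi) = (\xi, 0)^{\mathsf T}$, since
$$f(\pi(\xi)) = \begin{pmatrix}-\xi\\0\end{pmatrix} = \frac{\partial\pi}{\partial\xi}\,\alpha(\xi), \qquad h(\pi(\xi)) = \xi = \beta(\xi),$$
and distinct target outputs correspond to distinct outputs $y$. Every output response of the target system is thus reproduced by the full system when it is initialized on the manifold $\pi(\mathbb R)$.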
Appendix C
Command Filters

This appendix covers the second order command filters which are used for reference
signal generation and in the intermediate steps of the constrained adaptive backstepping
approach (taken from [61]).

Figure C.1: Filter that generates the command and command derivative while enforcing magni-
tude, bandwidth and rate limit constraints [61].

Figure C.1 shows an example of a filter which produces a magnitude, rate and bandwidth
limited signal xc and its derivative ẋc by filtering a signal x_c^0. The state space
representation of this filter is
$$\begin{bmatrix} \dot q_1(t) \\ \dot q_2(t) \end{bmatrix} = \begin{bmatrix} q_2 \\ 2\zeta\omega_n \left\{ S_R\!\left( \dfrac{\omega_n^2}{2\zeta\omega_n}\left[ S_M(x_c^0) - q_1 \right] \right) - q_2 \right\} \end{bmatrix} \qquad (C.1)$$
$$\begin{bmatrix} x_c \\ \dot x_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \qquad (C.2)$$


where $S_M(\cdot)$ and $S_R(\cdot)$ represent the magnitude and rate limit functions, respectively.
The functions $S_M$ and $S_R$ are defined similarly:
$$S_M(x) = \begin{cases} M & \text{if } x \ge M \\ x & \text{if } |x| < M \\ -M & \text{if } x \le -M. \end{cases}$$

Note that if the signal x_c^0 is bounded, then xc and ẋc are also bounded and continuous
signals. Note also that ẋc is computed without differentiation. When the state must re-
main in some operating envelope defined by the magnitude limit M and the rate limit R,
the command filter ensures that the commanded trajectory and its derivative satisfy these
same constraints.
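A minimal Python sketch of one integration step of the filter (C.1)–(C.2) may help to illustrate the structure; the explicit-Euler discretization, the function names and the argument defaults are choices made here and are not taken from [61].

```python
import numpy as np

def sat(x, limit):
    """Symmetric magnitude limiter: clips x to the interval [-limit, +limit]."""
    return np.clip(x, -limit, limit)

def command_filter_step(q1, q2, xc0, wn, zeta, M, R, dt):
    """One explicit-Euler step of the second order command filter (C.1)-(C.2).

    q1, q2   : filter states (limited command xc and its derivative xc_dot)
    xc0      : raw (unfiltered) command signal
    wn, zeta : filter bandwidth and damping ratio
    M, R     : magnitude and rate limits
    dt       : integration step size
    """
    # Magnitude-limit the raw command, convert the position error into a
    # desired rate, rate-limit it, and close the loop with the damping term.
    desired_rate = (wn**2 / (2.0 * zeta * wn)) * (sat(xc0, M) - q1)
    q1_dot = q2
    q2_dot = 2.0 * zeta * wn * (sat(desired_rate, R) - q2)
    return q1 + dt * q1_dot, q2 + dt * q2_dot   # new (xc, xc_dot)
```

Feeding a large step command through this filter produces a command xc that stays within the magnitude limit and whose derivative, available without numerical differentiation, remains within the rate limit up to the discretization error.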
If the only objective in the design of the command filter is to compute xc and its deriva-
tive, then M and R are infinitely large and the limiters do not need to be included in the
filter implementation. In the linear range of the functions SM and SR the filter dynamics
are
      
$$\begin{bmatrix} \dot q_1(t) \\ \dot q_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix} \begin{bmatrix} q_1 \\ q_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \omega_n^2 \end{bmatrix} x_c^0 \qquad (C.3)$$
$$\begin{bmatrix} x_c \\ \dot x_c \end{bmatrix} = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \qquad (C.4)$$

with the transfer function from the input to the first output defined as

$$\frac{X_c(s)}{X_c^0(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}. \qquad (C.5)$$

When command limiting is not in effect, the error xc − x_c^0 can be made arbitrarily small
by selecting ωn sufficiently larger than the bandwidth of the signal x_c^0. When command
filtering is in effect, the error xc − x_c^0 will be bounded since both xc and x_c^0 are bounded.
Appendix D
Additional Figures

This appendix contains the results for some of the numerical simulations performed in
Chapters 6 to 9.


D.1 Simulation Results of Chapter 6

[Plots: (a) Reference tracking; (b) Surface deflections; (c) Control moment; (d) Parameter estimation.]

Figure D.1: Simulation scenario 3 results for the integrated adaptive controller combined with PI control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second.

[Plots: (a) Reference tracking; (b) Surface deflections; (c) Control moment; (d) Parameter estimation.]

Figure D.2: Simulation scenario 3 results for the modular adaptive controller combined with PI control allocation where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second.

[Plots: (a) Reference tracking; (b) Surface deflections; (c) Control moment; (d) Parameter estimation.]

Figure D.3: Simulation scenario 4 results for the integrated adaptive controller combined with QP WU2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.

[Plots: (a) Reference tracking; (b) Surface deflections; (c) Control moment; (d) Parameter estimation.]

Figure D.4: Simulation scenario 4 results for the modular adaptive controller combined with QP WU2 control allocation where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees.

[Plots: (a) Integrated adaptive control with WPI WU1; (b) Integrated adaptive control with WPI WU2; (c) Modular adaptive control with WPI WU1; (d) Modular adaptive control with WPI WU2.]

Figure D.5: Simulation scenario 2 results for both controllers with WPI control allocation where the aircraft experiences a left horizontal stabilizer locked at 0 degrees.

D.2 Simulation Results of Chapter 7

[Plots (A)–(J): flight path (altitude versus north/east distance), tracking errors z01–z03, V, χ, γ, μ/α/β, φ/θ/ψ, p/q/r, thrust, and δe/δa/δr, all versus time.]

Figure D.6: Maneuver 1: Climbing helical path performed at flight condition 1 without any uncertainty or actuator failures.

[Plots (A)–(J): flight path (altitude versus north/east distance), tracking errors z01–z03, V, χ, γ, μ/α/β, φ/θ/ψ, p/q/r, thrust, and δe/δa/δr, all versus time.]

Figure D.7: Maneuver 1: Climbing helical path performed at flight condition 2 with +30% uncertainty in the aerodynamic coefficients.

[Plots (A)–(J): flight path (altitude versus north/east distance), tracking errors z01–z03, V, χ, γ, μ/α/β, φ/θ/ψ, p/q/r, thrust, and δe/δa/δr, all versus time.]

Figure D.8: Maneuver 1: Climbing helical path performed at flight condition 3 with left aileron locked at −10 deg.

[Plots (A)–(J): flight path (altitude versus north/east distance), tracking errors z01–z03, V, χ, γ, μ/α/β, φ/θ/ψ, p/q/r, thrust, and δe/δa/δr, all versus time.]

Figure D.9: Maneuver 2: Reconnaissance and surveillance performed at flight condition 3 with −30% uncertainty in the aerodynamic coefficients.

[Plots: estimated error components of CL, Cl, CY, Cm, CD and Cn versus time.]

Figure D.10: Maneuver 2: Estimated errors for the reconnaissance and surveillance performed at flight condition 3 with −30% uncertainty in the aerodynamic coefficients.
Figure D.11: Maneuver 2: Reconnaissance and surveillance path performed at flight condition 1 with left aileron locked at +10 deg. (Same panel layout as Figure D.6.)

D.3 Simulation Results of Chapter 8

Figure D.12: Simulation results for the integrated adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds. The left-hand panels show the pilot stick and rudder inputs δs, δl, δp, the normal accelerations ny, nz (g), the angle of attack α and sideslip angle β (deg), the angular rates ps, q, rs (deg/s) and the surface deflections δe, δr, δar, δal (deg); the right-hand panels show the estimated errors in the components of the aerodynamic coefficients CL, Cm, CY, Cl and Cn, all plotted against time (s).
Figure D.13: Simulation results for the modular adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.14: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.15: Simulation results for the integrated adaptive controller at flight condition 1 and failure scenario 2: Body pitch moment coefficient Cm versus angle of attack (deg). The blue line represents the nominal values, the red line the post-failure values.

Figure D.16: Simulation results for the integrated adaptive controller at flight condition 1 and
failure scenario 2: Body pitch moment coefficient error versus angle of attack. The blue line
represents the actual error, the red line the estimated error at the end of the simulation.
Figure D.17: Simulation results for the modular adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.18: Simulation results for the integrated adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.19: Simulation results for the modular adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Same panel layout as Figure D.12.)

D.4 Simulation Results of Chapter 9

Figure D.20: Simulation scenario 3 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left aileron to 45 degrees after 1 second. (a) Reference tracking of φ, α and β (deg); (b) surface deflections δal, δar, δr, δel, δer, δlef, δtef (deg); (c) realized and estimated total control moments ltot, mtot, ntot; (d) estimator quantities (dynamic scaling parameters r, state estimates and prediction errors e), all plotted against time (s).
Figure D.21: Simulation scenario 4 results for the modular adaptive controller with I&I estimator where the aircraft experiences a hard-over of the left horizontal stabilizer to 10.5 degrees after 1 second. (a) Reference tracking of φ, α and β (deg); (b) surface deflections δal, δar, δr, δel, δer, δlef, δtef (deg); (c) realized and estimated total control moments ltot, mtot, ntot; (d) estimated control effectiveness parameters, all plotted against time (s).
Figure D.22: Simulation results for the I&I based adaptive controller at flight condition 2 and failure scenario 1: Cmq = 0 after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.23: Simulation results for the I&I based adaptive controller at flight condition 1 and failure scenario 2: Loss of longitudinal static stability margin after 20 seconds. (Same panel layout as Figure D.12.)
Figure D.24: Simulation results for the I&I based adaptive controller at flight condition 4 and failure scenario 5: Right aileron negatively locked at half of maximum deflection after 20 seconds. (Same panel layout as Figure D.12.)
Nomenclature

Abbreviations

ABS Adaptive Backstepping

ADMIRE Aerodata Model in Research Environment

AIAA American Institute of Aeronautics and Astronautics

AMS Attainable Moment Set

BS Backstepping
CA Control Allocation

CABS Constrained Adaptive Backstepping

CAP Control Anticipation Parameter

CFD Computational Fluid Dynamics

CG Center of Gravity

CLF Control Lyapunov Function

DOF Degrees of Freedom

DUT Delft University of Technology

EA Eigenstructure Assignment

FBL Feedback Linearization

FBW Fly-By-Wire


FDIE Fault Detection, Isolation and Estimation


HJB Hamilton-Jacobi-Bellman
I&I Immersion and Invariance
IEEE Institute of Electrical and Electronics Engineers
IMM Interacting Multiple Model
ISS Input-to-state Stable
LOES Lower Order Equivalent System
LQR Linear Quadratic Regulator
MAV Mean Absolute Value
MMST Multiple Model Switching and Tuning
MPC Model Predictive Control
MRAC Model Reference Adaptive Control
NASA National Aeronautics and Space Administration
NDI Nonlinear Dynamic Inversion
NLR National Aerospace Laboratory
NN Neural Network
PCA Propulsion Controlled Aircraft
PE Persistently Exciting
PIM Pseudo Inverse Method
QFT Quantitative Feedback Theory
QP Quadratic Programming
RFC Reconfigurable Flight Control
RLS Recursive Least-Squares
RMS Root Mean Square
SCAS Stability and Control Augmentation System
SMC Sliding Mode Control
UAV Unmanned Aerial Vehicle

USAF United States Air Force

WPI Weighted Pseudo-Inverse

Greek Symbols

α Aerodynamic angle of attack

α∗ Virtual control

β Aerodynamic angle of sideslip

β∗ Continuously differentiable function vector

χ Flight path heading angle

δa Aileron deflection angle

δe Elevator deflection angle

δr Rudder deflection angle

δal Left aileron deflection angle

δar Right aileron deflection angle

δel Left elevator deflection angle

δer Right elevator deflection angle

δlef Leading edge flap deflection angle

δtef Trailing edge flap deflection angle

δth Throttle position

ε∗ Continuously differentiable function vector

γ Flight path climb angle

γ∗ Update gain

θ̂∗ Parameter estimate (vector)

κ∗ Nonlinear damping gain

µ Aerodynamic bank angle

φ Aircraft body axis roll angle

ψ Aircraft body axis yaw angle



ρ Air density

σ∗ Invariant manifold

τeng Engine lag time constant

θ Aircraft body axis pitch angle

θ∗ Unknown parameter vector

θ̃∗ Parameter estimation error (vector)

ϕ∗ Regressor vector

Roman Symbols

c̄ Mean aerodynamic chord length

L̄ Total rolling moment

M̄ Total pitching moment

N̄ Total yawing moment

q̄ Dynamic air pressure

X̄ Total force in body x-direction

Ȳ Total force in body y-direction

Z̄ Total force in body z-direction

z̄∗ Compensated tracking error

b Reference wing span

C∗ Non-dimensional aerodynamic coefficient

c∗ Control gain

e∗ Prediction error

FB Body-fixed reference frame

FE Earth-fixed reference frame

FO Vehicle carried local earth reference frame

FS Stability axes reference frame

FT Total thrust

FW Wind axes reference frame

g Gravity acceleration

g1 , g2 , g3 Wind axes gravity components

h Altitude

Heng Engine angular momentum

Ix Roll moment of inertia

Iy Pitch moment of inertia

Iz Yaw moment of inertia

Ixy , Ixz , Iyz Product moments of inertia

k∗ Integral gain

M Mach number

m Total aircraft mass

ny Normal Acceleration in body y-axis

nz Normal Acceleration in body z-axis

p Body axis roll rate

Pa Engine power, percent of maximum power

Pc Commanded engine power to the engine, percent of maximum power

Pc∗ Commanded engine power based on throttle position, percent of maximum power

ps Stability axis roll rate

pstat Static air pressure

q Body axis pitch rate

q0 , q1 , q2 , q3 Quaternion components

qs Stability axis pitch rate

r Body axis yaw rate

r∗ Dynamic scaling parameter

rs Stability axis yaw rate



S Reference wing area


T Air temperature
Tidle Idle thrust
Tmax Maximum thrust
Tmil Military thrust
u Aircraft velocity in body x-direction
u System input
v Aircraft velocity in body y-direction
V∗ (Control) Lyapunov function
VT Total velocity
w Aircraft velocity in body z-direction
x System states
x∗ System state
xE , yE , zE Aircraft position w.r.t. reference point
xcgr Reference center of gravity location
xcg Center of gravity location
y System output
yr Reference signal
z∗ Tracking error
Samenvatting

Onder de invloed van technologische ontwikkelingen in de lucht- en ruimtevaart techniek


zijn tijdens de afgelopen decennia de prestatie-eisen voor moderne gevechtsvliegtuigen
alsmaar hoger geworden, terwijl tegelijkertijd ook de grootte van het gewenste opera-
tionele vliegdomein flink is toegenomen. Om een extreme wendbaarheid te bereiken,
worden deze vliegtuigen vaak aërodynamisch instabiel ontworpen en uitgerust met re-
dundante besturingsactuatoren. Een goed voorbeeld hiervan is de Lockheed Martin
F-22 Raptor, die gebruik maakt van een zogenaamd thrust vectored control systeem
om een hogere mate van wendbaarheid te bereiken. Daarbij worden de overlevings-
eisen in de moderne oorlogsvoering steeds strenger voor zowel bemande als onbemande
gevechtsvliegtuigen. Het vormt een enorme uitdaging voor regeltechnisch ingenieurs om
rekening te houden met al deze eisen bij het ontwerp van de besturingssystemen voor dit
type vliegtuigen.
Tot op heden worden de meeste besturingssystemen voor vliegtuigen ontworpen met be-
hulp van gelineariseerde vliegtuigmodellen die elk geldig zijn op een trimconditie in het
operationele vliegdomein. Door gebruik te maken van de gevestigde klassieke regeltech-
nieken kan een lineaire regelaar worden afgeleid voor elk lokaal model. De versterkings-
factoren van de verschillende lineaire regelaars kunnen worden opgeslagen in tabellen
en door te interpoleren wordt in feite een unieke regelaar verkregen, die geldig is in
het gehele operationele vliegdomein. Echter, een probleem van deze aanpak is dat het
voor complexe niet-lineaire systemen zoals moderne gevechtsvliegtuigen niet mogelijk
is hoge prestatie- en robuustheidseisen te garanderen. Niet-lineaire regelmethodes zijn
ontwikkeld om de tekortkomingen van deze klassieke aanpak op te lossen. De theoretisch
ontwikkelde nonlinear dynamic inversion (NDI) methode is de bekendste en de meest
gebruikte van deze technieken.
NDI is een regelmethode die expliciet kan omgaan met systemen die niet-lineariteiten
bevatten. Door het toepassen van niet-lineaire terugkoppeling en toestandtransformaties
kan het niet-lineaire systeem worden omgezet in een constant lineair systeem, zonder
gebruik te maken van lineaire benaderingen van het systeem. Vervolgens kan er een
klassieke regelaar voor het resulterende lineaire systeem worden ontworpen. Echter, om
een perfecte niet-lineaire dynamische inversie toe te passen is er een zeer nauwkeurig
systeemmodel nodig. Het is een erg kostbaar en langdurig proces om zo een model voor
een gevechtsvliegtuig af te leiden, aangezien er windtunnel experimenten, computational
fluid dynamics (CFD) berekeningen en een uitgebreid testvluchtprogramma voor nodig
zijn. Het resulterende, empirische vliegtuigmodel zal nooit 100% accuraat zijn. De
tekortkomingen in het model kunnen worden gecompenseerd door een robuuste lineaire
regelaar voor het met NDI gelineariseerde systeem af te leiden. Maar zelfs dan kun-
nen de gewenste vliegprestaties niet worden gehandhaafd in het geval van grove fouten
als gevolg van grote, plotselinge veranderingen in de vliegtuig dynamica. Bijvoorbeeld
veroorzaakt door structurele schade of het falen van een actuator.

A more elegant way to deal with large model uncertainties is to use an adaptive control system with some form of real-time model identification. Recent developments in computers and available computing power have made it possible to implement more complex, adaptive flight control systems. Naturally, an adaptive control system has the potential to do more than compensate for model uncertainties; it can also identify sudden changes in the dynamic behavior of the aircraft. Such changes will generally lead to an increased pilot workload, or even to a complete loss of control of the aircraft. If the post-damage system dynamics of the aircraft can be estimated correctly by the model identification system, the redundant control actuators and the fly-by-wire structure of modern fighter aircraft can be exploited to reconfigure the flight control system.
Several methods are available for designing an estimator that can update the aircraft model used by the control system, for example neural networks or least-squares techniques. A drawback of an adaptive design with a separate estimator is that the certainty equivalence principle does not hold for nonlinear systems. In other words, the estimator dynamics are not fast enough to cope with the possibly faster-than-linear growth of instabilities in nonlinear systems. Overcoming this problem requires a controller with strong parametric robustness properties. As an alternative, the controller and estimator can be designed as one integrated system using the adaptive backstepping method. Adaptive backstepping makes it possible to derive a controller for a broad class of nonlinear systems with parametric uncertainties by systematically constructing a Lyapunov function for the closed-loop system.
The main goal of this thesis is to investigate the suitability of the nonlinear adaptive backstepping technique, combined with real-time model identification, for the design of a reconfigurable flight control (RFC) system for a modern fighter aircraft. This system should have the following characteristics:

• A single nonlinear adaptive controller is used that is valid throughout the entire operational envelope of the aircraft and whose performance and stability properties can be proven theoretically.

• The control system improves the performance and survivability of the aircraft when disturbances occur as a result of damage.

• The algorithms that make up the control system have excellent numerical stability properties and require little computational power (a real-time implementation is feasible).

Adaptive backstepping is a recursive nonlinear design method that is based on Lyapunov stability theory and uses dynamic parameter update laws to compensate for parametric uncertainties. The idea behind backstepping is to derive a controller recursively by treating some of the state variables as 'virtual' control inputs and designing intermediate, virtual control laws for them. Backstepping achieves global asymptotic stability of the state variables of the closed-loop system. The proof of these properties follows directly from the recursive procedure, since it constructs a Lyapunov function for the entire system, including the parameter estimates. The tracking errors drive the parameter estimation process of the procedure. It is also possible to include physical constraints on the system inputs and state variables in the design, so that the identification process is not corrupted during periods of actuator saturation. A downside of the integrated adaptive backstepping method is that the estimated parameters are only pseudo-estimates of the true uncertain parameters. There is no guarantee that the true values of the uncertain parameters are found, since the adaptation only tries to satisfy an overall system stability criterion, namely the Lyapunov function. Furthermore, increasing the adaptation gains does not necessarily improve the closed-loop response, because of the strong coupling between the controller and the estimator dynamics.
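
A textbook-style example gives a flavour of the recursive construction (a sketch with generic symbols; the actual flight control laws developed in this thesis are more involved). Consider the system with one unknown constant parameter theta and a known function phi,
\[
\dot{x}_1 = x_2 + \theta\,\varphi(x_1), \qquad \dot{x}_2 = u,
\]
and define
\[
z_1 = x_1, \quad \alpha_1(x_1,\hat{\theta}) = -c_1 z_1 - \hat{\theta}\,\varphi(x_1), \quad z_2 = x_2 - \alpha_1,
\]
\[
u = -z_1 - c_2 z_2 + \frac{\partial\alpha_1}{\partial x_1}\bigl(x_2 + \hat{\theta}\,\varphi(x_1)\bigr) + \frac{\partial\alpha_1}{\partial\hat{\theta}}\,\dot{\hat{\theta}}, \qquad
\dot{\hat{\theta}} = \gamma\,\varphi(x_1)\Bigl(z_1 - z_2\,\frac{\partial\alpha_1}{\partial x_1}\Bigr),
\]
with c_1, c_2, gamma > 0. The Lyapunov function V = z_1^2/2 + z_2^2/2 + (\theta - \hat{\theta})^2/(2\gamma) then satisfies \dot{V} = -c_1 z_1^2 - c_2 z_2^2 \leq 0, so the tracking errors converge while the estimate \hat{\theta} is only guaranteed to remain bounded, which is precisely the pseudo-estimate behaviour described above.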

The immersion and invariance (I&I) method offers an alternative way to construct a nonlinear estimator. With this approach it is possible to assign prescribed, stable dynamics to the parameter estimation error. The resulting estimator is combined with a backstepping controller to obtain a modular adaptive control method. The I&I-based estimator is fast enough to cope with the potential faster-than-linear growth of nonlinear systems. The resulting modular control method is much easier to tune than the standard adaptive backstepping method, in which the estimator is driven by the tracking errors. In fact, the closed-loop system obtained by applying the I&I-based adaptive backstepping controller can be regarded as a cascade interconnection of two stable systems with prescribed asymptotic characteristics. As a result, the performance of the closed-loop system with the new adaptive controller can be improved significantly.
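
A scalar example conveys the idea behind the I&I estimator (a simplified sketch with generic symbols, not the estimator equations used in this thesis). For \dot{x} = \varphi(x)\,\theta + u with unknown constant theta, the estimate is split into an integral part and a state-dependent part:
\[
\hat{\theta} = \hat{\theta}_I + \beta(x), \qquad \beta(x) = \gamma \int_0^x \varphi(s)\,ds, \qquad
\dot{\hat{\theta}}_I = -\gamma\,\varphi(x)\bigl(\varphi(x)\,\hat{\theta} + u\bigr),
\]
so that the estimation error z = \hat{\theta} - \theta obeys \dot{z} = -\gamma\,\varphi(x)^2\, z. The error dynamics are thus prescribed and decay whenever \varphi(x) \neq 0, independently of the tracking errors, which is what enables the modular design and the separate tuning of controller and estimator.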
To make a real-time implementation of adaptive controllers feasible, their complexity must be limited as much as possible. As a solution, the operational flight envelope is divided into multiple regions, each with a locally valid aircraft model. In this way the estimator only needs to update a few local models at each time step, which reduces the computational load of the algorithm. Another advantage of using multiple local models is that the information contained in models that are not updated at a given time step is retained. In other words, the estimator has memory. B-spline networks, selected for their excellent numerical properties, are used to provide smooth transitions between the local models in the different regions of the flight envelope.
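As an illustration of the blending mechanism (a simplified sketch; the envelope partitioning and spline orders used in this thesis may differ), a global estimate of an uncertain aerodynamic coefficient can be written as a B-spline weighted combination of local parameter estimates:
\[
\hat{C}(x,t) = \sum_{i=1}^{N} \mu_i(x)\,\hat{\theta}_i(t), \qquad \sum_{i=1}^{N}\mu_i(x) = 1,
\]
where the basis functions mu_i are nonzero only over a small part of the flight envelope. At each time step only the few local estimates \hat{\theta}_i with \mu_i(x) > 0 need to be updated; the remaining local models are left unchanged and thus retain previously learned information.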

The adaptive backstepping flight control systems developed in this thesis have been applied to a high-fidelity dynamic F-16 model and evaluated in numerical simulations focused on several control problems. The adaptive flight controllers are compared with the standard F-16 control system, which is based on classical control techniques, and with a non-adaptive NDI design. Their performance is compared in simulation scenarios at various flight conditions in which the aircraft model is suddenly subjected to an actuator failure, longitudinal center-of-gravity shifts and changes in the aerodynamic coefficients. All numerical simulations can be run in real time without difficulty on a standard desktop computer. The simulation results show that the various adaptive controllers provide a significant performance improvement over an NDI-based control system for the simulated damage cases.
The modular adaptive backstepping design with the I&I estimator provides the best performance and is the easiest to tune of all the adaptive flight control systems investigated. Furthermore, the controller with the I&I estimator has the strongest stability and convergence properties. Compared with the standard adaptive backstepping controllers, its design complexity and computational load are somewhat higher, but the controller and estimator can be designed and tuned independently of each other. Based on the research carried out for this thesis, it can be concluded that an RFC system based on the modular adaptive backstepping design with the I&I estimator has great potential, since it possesses all of the characteristics listed in the objectives above.

Further research into the performance of the RFC system based on the modular adaptive backstepping design with the I&I estimator in other simulation scenarios is recommended. The evaluation of the adaptive flight control systems in this thesis is limited to simulation scenarios with actuator failures, symmetric center-of-gravity shifts and uncertainties in the aerodynamic coefficients. The research would be of greater value if simulations with asymmetric disturbances, such as partial wing loss, were also carried out. A separate study is needed first, however, to obtain the required realistic aerodynamic data for the F-16 model. Developing an adaptive flight envelope protection system that can estimate the reduced flight envelope of the damaged aircraft and pass it on to the controller, the pilot and the guidance system remains an open problem. Finally, it is important to evaluate and validate the proposed RFC system with test pilots. The pilot workload and handling qualities after a damage case with the RFC system should be compared with those of the standard controller. At the same time, a study can be conducted into the interaction between the pilot's reactions and the actions of the adaptive element of the control system when damage or an actuator failure suddenly occurs.
Acknowledgements

This thesis is the result of four years of research within the Aerospace Software and
Technology Institute (ASTI) at the Delft University of Technology. During this period,
many people contributed to the realization of this work. I am very grateful to all of these
people, but I would like to mention some of them in particular.

First of all, I would like to thank my supervisor Dr. Ping Chu, my colleague Eddy van
Oort and my promotor Prof. Bob Mulder.

Dr. Ping Chu convinced me to pursue a Ph.D. degree and I am indebted for his en-
thusiastic scientific support that has kept me motivated in these past years. Moreover, I
always enjoyed our social discussions on practically anything. I want to thank Eddy van
Oort for his cooperation and the many inspiring discussions we have had. Eddy started
his related Ph.D. research a few months after me; the modular adaptive backstepping
flight control designs with a least-squares identifier, used for comparison in this thesis,
were mainly designed by him. I will always have many fond memories of the trips we
made to conference meetings around the world. I am very grateful to Prof. Bob Mulder
for his scientific support, his expert advice and for being my promotor. Thanks to Prof.
Bob Mulder’s extensive knowledge and experience in the field of aerospace control and
simulation, he could always provide me with a fresh perspective on my work.

This research would not have been possible without the efforts of Prof. Lt. Gen. (ret.)
Ben Droste, former commander of the Royal Netherlands Air Force and former dean of
the Faculty of Aerospace Engineering, and the support of the National Aerospace Labora-
tory (NLR). I would like to thank the people at the NLR and especially Jan Breeman for
their scientific input and support. I am also indebted to my thesis committee for taking
the time to read this book and making the (long) trip to The Netherlands.


I would like to thank all of my colleagues at ASTI, in particular Erikjan van Kampen,
Elwin de Weerdt, Meine Oosten and Vera van Bragt. I am also grateful to the people at
the Control and Simulation Division of the Delft University of Technology, especially
to Thomas Lombaerts and Bertine Markus for their assistance with the administrative
aspects of the thesis.

I would like to express my gratitude to the people at Lockheed Martin and the Royal
Netherlands Air Force, as well as to the many reviewers that read the journal papers con-
taining parts of this research, for providing me with valuable scientific input and practical
expertise.

Last but certainly not least, I am truly grateful to my family, especially my parents, my
brother Rutger and my girlfriend Rianne for their love and continuous support.

Rotterdam, May 2010

Lars Sonneveldt
Curriculum Vitae

Lars Sonneveldt was born in Rotterdam, The Netherlands on July 29, 1982. From 1994
to 2000 he attended the Emmaus College in Rotterdam, obtaining the Gymnasium cer-
tificate.

In 2000 he started his studies at the Delft University of Technology, Faculty of Aerospace
Engineering. In 2004 he completed an internship at the Command and Control depart-
ment of TNO-FEL in The Hague and obtained his B.Sc. degree. After that, he enrolled
with the Control and Simulation Division for his masters program, specializing in flight
control problems. In June 2006 he received his M.Sc. degree for his study on the suit-
ability of new nonlinear adaptive control techniques for flight control design.

In 2006 he started as a Ph.D. student at the Delft University of Technology within the
Aerospace Software and Technology Institute (ASTI). His Ph.D. research was conducted
in cooperation with the National Aerospace Laboratory (NLR) in Amsterdam and un-
der the supervision of the Control and Simulation Division at the Faculty of Aerospace
Engineering.
