
Stability of Switched Systems in Sense of

Lyapunov’s Theory

A Thesis Presented
By
Mamdouh Omar Khudaydus∗
Supervisor: Dr. Wajdi Kallel

to
Department of Mathematics,
in partial fulfillment of the requirements for the degree of
Master of Science
in
Control Theory

The College of Applied Sciences


Umm Alqura University, Saudi Arabia

April 25, 2018


Typeset using LaTeX
Dedicated to my parents, especially my mother, and to my lovely wife
and to my daughter and son.

ACKNOWLEDGEMENTS

I would especially like to thank my supervisor Dr. Wajdi Kallel for his
patience; I was lucky to have a great teacher like him. Particular thanks
must also go to Dr. Yousef Jaha, who helped me join the Department of
Mathematics.
CONTENTS

1. CHAPTER 1: Classical Stability . . . . . . . . . . . . . . . . . . 10


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.1 State-Space Models and Equations . . . . . . . . . . . 10
1.1.2 The Phase Plane of Autonomous Systems . . . . . . . 11
1.1.3 Constructing Phase Portraits . . . . . . . . . . . . . . 12
1.1.4 Qualitative Behaviour of Linear Systems . . . . . . . . 19
1.2 Multiple Equilibria . . . . . . . . . . . . . . . . . . . . . . . . 28
1.3 Existence and Uniqueness . . . . . . . . . . . . . . . . . . . . 29
1.4 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . 33
1.4.1 Autonomous Systems . . . . . . . . . . . . . . . . . . 33
1.4.2 Internal Stability . . . . . . . . . . . . . . . . . . . . . 34
1.4.3 Lyapunov Direct Method . . . . . . . . . . . . . . . . 36
1.4.4 Domain of Attraction . . . . . . . . . . . . . . . . . . 38
1.4.5 Lyapunov Quadratic Form . . . . . . . . . . . . . . . . 41
1.5 Linear Systems and Linearization . . . . . . . . . . . . . . . . 45
1.5.1 Investigating the Asymptotic Stability of the LTI dy-
namical system (1.31) via Theorem (1.4.5) . . . . . . . 49
1.5.2 Lyapunov Indirect Method . . . . . . . . . . . . . . . 52
1.6 The Invariant Principle(LaSalle’s Invariance Principle) . . . . 54
1.7 The Partial Stability of Nonlinear Dynamical Systems . . . . 58
1.7.1 Addressing Partial Stability When Both Initial Con-
ditions (x10 , x20 ) Lie in a Neighborhood of the Origin 62
1.8 Nonautonomous Systems . . . . . . . . . . . . . . . . . . . . . 64
1.8.1 The Unification between Time-Invariant Stability The-
ory and Stability Theory for Time-Varying Systems
Through The partial Stability Theory . . . . . . . . . 66
1.8.2 Rephrase The Partial Stability Theory for Nonlinear
Time-Varying Systems . . . . . . . . . . . . . . . . . . 68
1.9 Stability Theorems for Linear Time-Varying Systems and Lin-
earization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

1.9.1 Applying Lyapunov Indirect Method on Stabilizing


Nonlinear Controlled Dynamical System . . . . . . . . 79

2. CHAPTER 2: Stability of Switched Systems . . . . . . . . . . . . . 80


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.2 Classes of Hybrid and Switched Systems . . . . . . . . . . . . 80
2.2.1 State-Dependent Switching . . . . . . . . . . . . . . . 81
2.2.2 Time-Dependent Switching . . . . . . . . . . . . . . . 81
2.2.3 Stability to Non-Stability Switched Systems and Vice-
Versa . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.2.4 Autonomous Switching Versus Controlled Switching . 83
2.3 Sliding and Hysteresis Switching . . . . . . . . . . . . . . . . 84
2.4 Stability Under Arbitrary Switching . . . . . . . . . . . . . . 87
2.4.1 Commuting Linear Systems . . . . . . . . . . . . . . . 88
2.4.2 Common Lyapunov Function (CLF) . . . . . . . . . . 88
2.4.3 Switched Linear Systems . . . . . . . . . . . . . . . . 93
2.4.4 Solution of a Linear Switched System Under Arbitrary
Switching Signal . . . . . . . . . . . . . . . . . . . . . 96
2.4.5 Commuting Nonlinear Systems . . . . . . . . . . . . . 96
2.4.6 Commutation and The Triangular Systems . . . . . . 97
2.5 Stability Under Constrained Switching . . . . . . . . . . . . . 97
2.5.1 Multiple Lyapunov functions (MLF) . . . . . . . . . . 97
2.5.2 Stability Under State-Dependent Switching . . . . . . 100
2.6 Stability under Slow Switching . . . . . . . . . . . . . . . . . 104
2.6.1 Brief . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.6.2 Dwell Time . . . . . . . . . . . . . . . . . . . . . . . . 105
2.7 Stability of Switched System When all Subsystems are Hurwitz . . 108
2.7.1 State-Dependent Switching . . . . . . . . . . . . . . . 112

3. CHAPTER 3: Stability of Switched Systems When All Subsystems are
Hurwitz and Non-Hurwitz . . . . . . . . . . . . . . . . . . . . . . 114
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.2 Stability of Switched System When all Subsystems are Unsta-
ble Except the corresponding Negative Subsystems Matrices
(−Ai ) are Hurwitz . . . . . . . . . . . . . . . . . . . . . . . . 114
3.2.1 Average Dwell Time (ADT) . . . . . . . . . . . . . . . 115
3.2.2 Determine ADT Based on Positive Symmetric Matrix
Pi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.3 Stability of Switched Systems When All Subsystems Consist
of Stable and Unstable Subsystems . . . . . . . . . . . . . . . 125

3.3.1 Stability When One Subsystem is Stable and the Other


One is Unstable . . . . . . . . . . . . . . . . . . . . . . 126
3.3.2 Stability When a Set of Subsystems are Stable and
the Other Sets are Unstable . . . . . . . . . . . . . . . 127
3.4 Stability of Switched Systems When All Subsystems Unstable
via Dwell-Time Switching . . . . . . . . . . . . . . . . . . . . 129
3.4.1 Stabilization for Switched Linear System . . . . . . . . 131
3.4.2 Discretized Lyapunov Function Technique . . . . . . . 133

4. Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.1 Matrices and Matrix Calculus . . . . . . . . . . . . . . . . . . 138
4.2 Vector and Matrix Norms . . . . . . . . . . . . . . . . . . . . 139
4.3 Matrices p-norms . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.4 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.5 Matrix Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.6 Topological Concepts in Rn . . . . . . . . . . . . . . . . . . . 145
4.7 Mean Value Theorem . . . . . . . . . . . . . . . . . . . . . . . 146
4.8 Supremum and Infimum Bounds . . . . . . . . . . . . . . . . 146
4.9 Jordan Form . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.10 The Weighted Logarithmic Matrix Norm and Bounds of The
Matrix Exponential . . . . . . . . . . . . . . . . . . . . . . . . 150
4.11 Continuous and Differentiable Functions . . . . . . . . . . . . 151
4.12 Gronwall-Bellman Inequality . . . . . . . . . . . . . . . . . . 152
4.13 Solutions of the State Equations . . . . . . . . . . . . . . . . 154
4.13.1 Solutions of Linear Time-Invariant(LTI) State Equa-
tions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.13.2 Solutions of Linear Time-Variant (LTV) State Equations . . 157
4.13.3 Computing The Transition Matrix Through The Peano-
Baker Series . . . . . . . . . . . . . . . . . . . . . . . . 165
4.14 Solutions of Discrete Dynamical Systems . . . . . . . . . . . . 169
4.14.1 Discretization of Continuous-Time Equations . . . . . 169
4.14.2 Solution of Discrete-Time Equations . . . . . . . . . . 172
4.15 Solutions of Van Der Pol’s Equation . . . . . . . . . . . . . . 174
4.15.1 Parameter Perturbation Theory . . . . . . . . . . . . . 174
4.15.2 Solution of Van Der Pol Equation Via Parameter Per-
turbations Method . . . . . . . . . . . . . . . . . . . . 174
4.16 Limit Cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
ABSTRACT

Switched systems are a class of hybrid systems that involve switching between
several subsystems depending on various factors. In general, a switched
system consists of a family of continuous-time subsystems together with a
rule that governs the switching between them. One kind of hybrid system
is a switched linear system, which consists of several linear subsystems and
a rule that orchestrates the switching among them.
The presence of a disturbance in a system can even cause instability. Often,
the existence of a common Lyapunov function (CLF) is the most useful
approach for studying the stability of various control systems. Consider the
dynamic system of the form

ẋ(t) = Aσ(t) x(t), x(t0 ) = x0

where x(t) ∈ Rn is the state, t0 is the initial time and x0 is the initial state.
σ : [t0 , +∞) → {1, 2, · · · , N } is the switching signal. Thus,

Aσ(t) : [t0 , +∞) → {A1 , A2 , · · · , AN }

We can break the entire state space Ω into N subspaces Ωi , one corresponding
to each subsystem, in such a manner that the subspaces have no common
region:

Ωi ∩ Ωj = ∅ , i ≠ j

We can write Ωi = {x ∈ Rn : x^T (A_i^T Pi + Pi Ai )x ≤ 0}.
Now, the problem is:
What are the conditions required to ensure the stability of the
switched system under a switching signal when it is composed of:

1. stable subsystems.
2. stable/unstable subsystems.

3. unstable subsystems.

Our proposal aims to answer this problem. This work consists of three
major chapters:

Chapter 1: We recall some important theorems on the stability of autonomous

and non-autonomous systems. We also review some necessary
background material from control theory. In addition, we recall
some results on the existence and uniqueness of solutions to differential
equations.

Chapter 2: We focus on the stability of switched systems under arbitrary

switching when all subsystems are stable. The existence of a common
quadratic Lyapunov function for the subsystems plays a dominant role
in the stability analysis.

Chapter 3: Consists of two cases. In the first case, the switched system
consists of both Hurwitz (stable) and unstable subsystems, and these
are investigated. In the second case, all the subsystems are unstable,
so we need to determine a sufficient condition ensuring the stability of
switched continuous-time systems under a switching-signal strategy.
1. CLASSICAL STABILITY

1.1 Introduction

1.1.1 State-Space Models and Equations[1]


Dynamical systems are modeled by a finite number of coupled first-order
ordinary differential equations:

ẋ1 = f1 (t, x1 , x2 , x3 , . . . , xn , u1 , u2 , u3 , . . . , up )

ẋ2 = f2 (t, x1 , x2 , x3 , . . . , xn , u1 , u2 , u3 , . . . , up )
..
.
ẋn = fn (t, x1 , x2 , x3 , . . . , xn , u1 , u2 , u3 , . . . , up )
where each fi : [0, ∞) × D1 × D2 → R with D1 ⊂ Rn , D2 ⊂ Rp , and xi , ui are
the components of the state variables and the input variables, respectively.
To write the above equations in a compact form, known as vector notation,
we define:
   

x = (x1 , x2 , . . . , xn )^T ,  u = (u1 , u2 , . . . , up )^T ,
f (t, x, u) = (f1 (t, x, u), f2 (t, x, u), . . . , fn (t, x, u))^T

and rewrite the n first-order differential equations as follows,

ẋ = f (t, x, u) (1.1)

Equation (1.1) is called the state equation; x is the state and u
is the input.
There is also another equation associated with equation (1.1):

y = h (t, x, u) (1.2)
1. CHAPTER 1 :Classical Stability 11

which defines a q-dimensional output vector. We call equation (1.2) the
output equation and refer to equation (1.1) and equation (1.2) together as
the state-space model, or simply the state model.
We can model physical systems in this form by choosing the state variables.
Many times we will deal with the state equation without the presence of an
input u, that is, the so-called unforced state equation

ẋ = f (t, x) (1.3)

Working with the unforced state equation does not mean that the input to
the system is zero. It could be that the input has been specified as a given
function of time, u = γ (t), a given feedback function of the state, u = γ (x),
or both, u = γ (t, x). Substituting u = γ in equation (1.1) eliminates u
and yields an unforced state equation.

Definition 1.1.1. When the function f of equation (1.3) does not depend
on time t ; that is,
ẋ = f (x) (1.4)
the system is said to be autonomous or time-invariant.
This means that the system is invariant to shifts in the time origin,
since changing the time variable from t to τ = t − α does not change the
right-hand side of the state equation.
Otherwise, the system (1.3) is called non-autonomous or time-varying.

Definition 1.1.2. A point x = x∗ in the state space is said to be an
equilibrium point of ẋ = f (t, x) if it has the property that whenever the
state of the system starts at x∗ , it remains at x∗ for all future time.

The equilibrium points of the autonomous system (1.4) are the real roots
of the equation
f (x) = 0

1.1.2 The Phase Plane of Autonomous Systems


Autonomous systems occupy an important place in the study of nonlinear
systems because their solution trajectories can be represented by curves in the plane.
A second-order autonomous system is represented by two scalar differ-
ential equations

ẋ1 = f1 (x1 , x2 ) (1.5)


1. CHAPTER 1 :Classical Stability 12

ẋ2 = f2 (x1 , x2 ) (1.6)


Let x(t) = (x1 (t), x2 (t)) be the solution of (1.5), (1.6) that starts at the
initial state x0 = (x10 , x20 )T , where x0 = x(0).
The locus in the x1 − x2 plane of the solution x(t), for all t ≥ 0, is a curve
that passes through the point x0 . This curve is called a trajectory or orbit
of (1.5), (1.6) from x0 . The x1 − x2 plane is called the state plane or phase plane.
Equations (1.5), (1.6) can be written in vector form:

ẋ = f (x)

where f (x) is the vector (f1 (x), f2 (x)); thus we consider f (x) as a vector
field on the state plane.
The length of the arrow on the state plane at a given point x is proportional to
√(f1²(x) + f2²(x)). The family of all trajectories or solution curves is called
the phase portrait of equations (1.5) and (1.6).
An approximate picture of the phase portrait can be constructed by
plotting trajectories from a large number of initial states spread all over the
x1 − x2 plane.

1.1.3 Constructing Phase Portraits


There are a number of techniques [5] for constructing phase-plane trajectories
for linear and nonlinear systems.
The most common of these methods are the analytical method and the
isoclines method.
• Analytical Method: This method involves the analytical solution of
the differential equations describing the system. There are two
ways to construct the equation of the trajectories; both techniques lead
to a solution of the form
g(x1 , x2 , x0 ) = 0

(1) First Technique : Involves solving equations

ẋ1 = f1 (x1 , x2 )

ẋ2 = f2 (x1 , x2 )
for x1 and x2 as functions of time t, viz.,

x1 = g1 (t)

x2 = g2 (t)
1. CHAPTER 1 :Classical Stability 13

and then eliminating time t, leading to a function of the form

g(x1 , x2 , x0 ) = 0

(2) Second Technique: Involves eliminating the time variable using

dx2 /dx1 = f2 (x1 , x2 )/f1 (x1 , x2 )

then solving this equation to obtain a function relating x1 and x2 .

• Method of Isoclines [1]
The isoclines method can be applied to construct a phase portrait
when the system cannot be solved analytically.
An isocline is defined to be a locus of points at which the trajectories
have a given tangent slope; at each such point a short line segment is
drawn with the specified slope, as depicted in Figure 1.1.

Fig. 1.1: Isoclines method with slope α


1. CHAPTER 1 :Classical Stability 14

The slope α of the isocline is defined by

dx2 /dx1 = f2 (x1 , x2 )/f1 (x1 , x2 ) = α

so the points of the isocline satisfy

f2 (x1 , x2 ) = α f1 (x1 , x2 )

and all points on the isocline have the same tangent slope α.
By taking different values for the slope α, a set of isoclines can be
drawn.
The procedure is to plot the curve f2 (x1 , x2 )/f1 (x1 , x2 ) = α in the
x1 − x2 plane and along this curve draw short line segments having the
slope α. These line segments are parallel, and their directions are
determined by the signs of f1 (x) and f2 (x) at x, as seen in Figure 1.2.

Fig. 1.2: Isocline with positive slope

The curve f2 (x1 , x2 )/f1 (x1 , x2 ) = α is known as an isocline. This
procedure is repeated for sufficiently many values of the constant α
until the plane is filled with isoclines. Then, starting from a given
initial point x0 , one can construct the trajectory from x0 by moving
in the direction of the short line segments from one isocline to the next.

Example 1.1.1 (Example for the First Technique).

Consider the linear second-order differential equation

ẍ + x = 0

Assume that the mass is initially at position x0 and at rest.


The general solution of the equation is x(t) = x0 cos t.
By differentiation:
ẋ(t) = −x0 sin t

Now, transforming the state-space equations into an equation for the
trajectories requires eliminating the time t.
Hence, to eliminate the time t:

ẋ²(t) = x0² sin² t

∵ sin² t = 1 − cos² t
∴ ẋ²(t) = x0² (1 − cos² t)
∴ ẋ²(t) = x0² − x0² cos² t
∴ ẋ² = x0² − x²(t)
Thus, the equation of the trajectories is:

ẋ² + x² = x0²

This equation represents a circle in the phase plane. With different initial
values we obtain different circles plotted on the phase plane.
The phase portrait of this system is plotted with MATLAB.
Transforming the system to state-space form, let x = x1 and ẋ = x2 :
ẋ1 = x2
ẋ2 = −x1
MATLAB.m 1.
[x1, x2] = meshgrid(-5:0.5:5, -5:0.5:5);
x1dot = x2;
x2dot = -x1;
quiver(x1, x2, x1dot, x2dot);
xlabel('x_1');
ylabel('x_2');

We also see in Figure 1.3 that the system trajectories neither converge
to the origin nor diverge to infinity.
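These circular trajectories can also be checked numerically. The following sketch (illustrative Python, playing the same role as the MATLAB snippet above; not part of the thesis) integrates ẋ1 = x2 , ẋ2 = −x1 with a classical fourth-order Runge–Kutta step and verifies that x1² + x2² stays at x0²:

```python
import numpy as np

def f(x):
    # Harmonic oscillator in state-space form: x1' = x2, x2' = -x1
    return np.array([x[1], -x[0]])

def rk4_step(x, h):
    # One classical fourth-order Runge-Kutta step
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x0 = 2.0
x = np.array([x0, 0.0])  # start at position x0, at rest
h = 0.01
max_drift = 0.0
for _ in range(int(2 * np.pi / h)):  # roughly one full revolution
    x = rk4_step(x, h)
    max_drift = max(max_drift, abs(np.hypot(x[0], x[1]) - x0))
```

Since the exact trajectory is the circle x1² + x2² = x0², any drift of the radius away from x0 measures only the integration error.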

Fig. 1.3: Phase portrait of ẋ² + x² = x0²

Example 1.1.2 (Example for the Second Technique). [1] Consider the system

ẍ + x = 0

and assume that the mass is initially at position x0 .


Transforming the system to state-space form, let x = x1 and ẋ = x2 :
ẋ1 = x2
ẋ2 = −x1
Now, writing
ẋ1 = dx1 /dt , ẋ2 = dx2 /dt
we have
dx1 = x2 dt , dx2 = −x1 dt
so that x1 dx1 + x2 dx2 = 0. Integrating both sides:

∫ x1 dx1 + ∫ x2 dx2 = 0 ⇒ x1² + x2² = C where C = x0²

Hence, we obtain the phase-plane trajectory equation:

x1² + x2² = x0² or ẋ² + x² = x0²

which represents a closed curve (cycle) in the phase plane (for closed
trajectories and limit cycles, refer to the Appendix).

Example 1.1.3 (Example for the Isoclines Method). Consider the dynamical
equation
ẍ + x = 0
The slope of the trajectories is given by

ẋ1 = x2 = f1 (x1 , x2 )
ẋ2 = −x1 = f2 (x1 , x2 )

Thus,

dx2 /dx1 = −x1 /x2 = α

Therefore, the isocline equation for a slope α is

x1 + α x2 = 0

which is the equation of a straight line.


Along this straight line we can draw many short line segments with slope α,
and a set of isoclines can be plotted for different values of the slope α, as
depicted in Figure 1.4.

Fig. 1.4: Phase portrait of ẍ + x = 0

Example 1.1.4. Consider the pendulum equation without friction:

ẋ1 = x2

ẋ2 = − sin x1
The slope function s(x) is given by

s(x) = f2 (x1 , x2 )/f1 (x1 , x2 ) = − sin x1 /x2 = α

Hence, the isoclines are defined by

x2 = −(1/α) sin x1
α

Fig. 1.5: Graphical construction of the phase portrait of the pendulum equation
without friction by the isocline method

Figure 1.5 shows the isoclines for several values of α. One can easily
sketch the trajectory starting at any point, as shown in the figure for the
trajectory starting at (π/2, 0). Here one can see that the trajectory is a closed
curve. A closed trajectory shows that there is a periodic solution and the
system has a sustained oscillation.
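The closed-curve observation can be corroborated numerically: along solutions of the frictionless pendulum the energy E = x2²/2 − cos x1 is conserved, so the trajectory through (π/2, 0) must stay on a single level curve. A small sketch (illustrative Python, not part of the thesis):

```python
import numpy as np

def f(x):
    # Frictionless pendulum: x1' = x2, x2' = -sin(x1)
    return np.array([x[1], -np.sin(x[0])])

def rk4_step(x, h):
    # One classical fourth-order Runge-Kutta step
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(x):
    # Conserved energy of the frictionless pendulum
    return 0.5 * x[1] ** 2 - np.cos(x[0])

x = np.array([np.pi / 2, 0.0])  # the starting point used in Figure 1.5
e0 = energy(x)
h = 0.01
drift = 0.0
for _ in range(2000):  # integrate over several periods
    x = rk4_step(x, h)
    drift = max(drift, abs(energy(x) - e0))
```

Constant energy along the computed trajectory is exactly what forces the orbit to close on itself.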

1.1.4 Qualitative Behaviour of Linear Systems


We describe here the meaning of the visual motion of the system trajectories
plotted in the phase portrait, based on the eigenvalues of the linear
dynamic system.

Consider the linear time-invariant system


ẋ = Ax (1.7)

where A is a 2 × 2 real matrix. The solution of equation (1.7) for a given
initial state x0 is given by

x(t) = P e^{Jr t} P^{-1} x0

where Jr is the real Jordan form of A and P is a real non-singular matrix
such that P^{-1} A P = Jr .
In general, the solution of (1.7) is given by

x(t) = e^{At} x0

where e^{At} can be formally defined by the convergent power series

e^{At} = Σ_{k=0}^{∞} t^k A^k / k!

Note that

e^{At} = P e^{Jr t} P^{-1}

Indeed, since

e^{At} = I + At + (1/2) A² t² + · · ·
       = I + P Jr P^{-1} t + (1/2) (P Jr P^{-1})² t² + · · ·
       = I + P Jr P^{-1} t + (1/2) P Jr² P^{-1} t² + · · ·

multiplying both sides on the left by P^{-1} and on the right by P gives

P^{-1} e^{At} P = I + Jr t + (1/2) Jr² t² + · · · = e^{Jr t}
⇒ e^{At} = P e^{Jr t} P^{-1}

Hence, the general solution of (1.7) takes the form

x(t) = P e^{Jr t} P^{-1} x0
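The identity e^{At} = P e^{Jr t} P^{-1} is easy to verify numerically. The sketch below (illustrative Python, not part of the thesis; it uses a diagonalizable matrix with real distinct eigenvalues, so Jr is simply diag(λ1 , λ2 ) and P collects the eigenvectors) compares a truncated power series for e^{At} with the Jordan-form expression:

```python
import numpy as np

# A diagonalizable 2x2 matrix with real, distinct eigenvalues (-1 and -3)
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
t = 0.7

# e^{At} from the truncated power series: sum of (tA)^k / k!
series = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (t * A) / k   # builds (tA)^k / k! incrementally
    series = series + term

# e^{At} = P e^{Jr t} P^{-1} with Jr = diag(eigenvalues), P = eigenvectors
eigvals, P = np.linalg.eig(A)
via_jordan = P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P)

err = np.max(np.abs(series - via_jordan))
```

Thirty series terms are far more than enough for convergence at this value of t, so the two computations agree to machine precision.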

The eigenvalues of A are determined using the characteristic equation

det (A − λI) = 0

There exists a real, non-singular matrix P such that P^{-1} A P is in the
so-called real Jordan form, which may take one of the following three forms:

[λ1 0; 0 λ2] ,  [λ k; 0 λ] ,  [α −β; β α]

where k is either 0 or 1.
We distinguish between the following cases:
• Case 1: the eigenvalues λ1 , λ2 are real and distinct (λ1 ≠ λ2 , both nonzero)
Let Jr = [λ1 0; 0 λ2] and P = [v1 , v2 ], where v1 , v2 are the real
eigenvectors associated with λ1 and λ2 .
The change of coordinates z = P^{-1} x transforms the system into:

ż = P^{-1} ẋ = P^{-1} A x = P^{-1} (P Jr P^{-1}) x = Jr z

Hence

(ż1 , ż2 ) = (λ1 z1 , λ2 z2 )

so we have two decoupled first-order differential equations

ż1 = λ1 z1
ż2 = λ2 z2

whose solution is given by

z1 (t) = z10 e^{λ1 t} , z2 (t) = z20 e^{λ2 t} where z10 , z20 are initial states

Eliminating t between the two equations,

ln(z1 /z10 ) = λ1 t , ln(z2 /z20 ) = λ2 t

⇒ ln(z2 /z20 ) = (λ2 /λ1 ) ln(z1 /z10 ) = ln[(z1 /z10 )^{λ2 /λ1 }]

∴ z2 /z20 = (z1 /z10 )^{λ2 /λ1 }

we obtain

z2 = c z1^{λ2 /λ1 } (1.8)

where c = z20 /(z10 )^{λ2 /λ1 }.
The phase portrait of this system is given by the family of curves generated
from (1.8) by allowing the constant c to take arbitrary values.
The shape of the phase portrait depends on the signs of λ1 and λ2 .

(i) When λ2 < λ1 < 0, the equilibrium point is a stable node:
In this case, both exponential terms e^{λ1 t} and e^{λ2 t} tend to zero as
t → ∞.
The slope of the curve is given by

dz2 /dz1 = c (λ2 /λ1 ) z1^{(λ2 /λ1 − 1)}

Since (λ2 /λ1 − 1) is positive, the slope of the curve approaches zero
as |z1 | → 0 and approaches ∞ as |z1 | → ∞.
As a trajectory approaches the origin, it becomes tangent to
the z1 axis; as it approaches ∞, it becomes parallel to the z2
axis. These observations allow us to sketch the typical family of
trajectories shown in Figure 1.6.

Fig. 1.6: Phase portrait of a stable node in modal coordinates

When transformed back into the x-coordinates, the family of tra-


jectories will have the typical portrait shown in Figure 1.7(a).
In this case, the equilibrium point x = 0 is called a stable node.

(ii) When λ2 > λ1 > 0, the equilibrium point is an unstable node:
The phase portrait retains the character of Figure 1.7(a)
but with the trajectory directions reversed, since the exponential
terms e^{λ1 t} and e^{λ2 t} grow exponentially as t increases, as shown
in Figure 1.7(b).

Fig. 1.7: Phase portrait for (a) a stable node ; (b) an unstable node

(iii) When λ2 < 0 < λ1 , the equilibrium point is a saddle point:
In this case, e^{λ1 t} → ∞ while e^{λ2 t} → 0 as t → ∞. Hence, we call
λ2 the stable eigenvalue and λ1 the unstable eigenvalue. The trajectory
equation (1.8) has a negative exponent λ2 /λ1 . Thus,
the family of trajectories in the z1 − z2 plane takes the typical
form shown in Figure 1.8: trajectories become tangent to the z1 -axis
as |z1 | → ∞ and tangent to the z2 -axis as |z1 | → 0.

Fig. 1.8: Phase portrait of a saddle point

• Case 2: the eigenvalues λ1 , λ2 are complex; that is, λ1,2 = α ± jβ
Let Jr = [α −β; β α]

The change of coordinates z = P −1 x transforms the system (1.7) into


the form

ż1 = αz1 − βz2 (1.9)


ż2 = βz1 + αz2 (1.10)

The solution of these equations is oscillatory and can be expressed more
conveniently in polar coordinates:

r² = z1² + z2² ; θ = tan^{-1}(z2 /z1 ) or tan θ = z2 /z1

Thus,

2rṙ = 2z1 ż1 + 2z2 ż2 ; sec²θ θ̇ = (z1 ż2 − z2 ż1 )/z1²

Substituting (1.9), (1.10) into the above equations yields

rṙ = z1 (αz1 − βz2 ) + z2 (βz1 + αz2 ) = α(z1² + z2²) = αr² ⇒ ṙ = αr

sec²θ θ̇ = [z1 (βz1 + αz2 ) − z2 (αz1 − βz2 )]/z1² = β(1 + tan²θ) = β sec²θ ⇒ θ̇ = β

So we have two first-order differential equations:

ṙ = αr
θ̇ = β

The solution with a given initial state (r0 , θ0 ) is given by

r(t) = r0 e^{αt} ; θ(t) = θ0 + βt

which defines a logarithmic spiral in the z1 − z2 plane. Depending on
the value of α, the trajectory will take one of the shapes shown in Figure 1.9.

– When α < 0, the equilibrium point x = 0 is referred to as a


stable focus because the spiral converges to the origin.
– When α > 0, the equilibrium point x = 0 is referred to as an
unstable focus because the spiral diverges away from the origin.

– When α = 0, the equilibrium point x = 0 is referred to as a
center because the trajectory is a circle of radius r0 , as depicted
in Figure 1.10.

Fig. 1.9: Typical trajectories in the case of complex eigenvalues: (a) α < 0, (b) α > 0,
(c) α = 0

Fig. 1.10: Phase portrait for (a) a stable focus, (b) an unstable focus, (c)a center

• Case 3: the eigenvalues are real and equal; that is, λ1 = λ2 = λ ≠ 0
Let Jr = [λ k; 0 λ]
The change of coordinates z = P^{-1} x transforms the system (1.7) into
the form

ż1 = λz1 + kz2
ż2 = λz2

whose solution, with a given initial state (z10 , z20 ), is given by

z1 (t) = (z10 + z20 k t) e^{λt} , z2 (t) = z20 e^{λt}

Eliminating t, we obtain the trajectory equation

z1 = z2 [ z10 /z20 + (k/λ) ln(z2 /z20 ) ]

Fig. 1.11: Phase portrait for the case of nonzero multiple eigenvalues when k=0:(a)
λ < 0, (b)λ > 0

Fig. 1.12: Phase portrait for the case of nonzero multiple eigenvalues when k=1:(a)
λ < 0, (b)λ > 0

The equilibrium point x = 0 is referred to as:


– stable node if λ < 0.
– unstable node if λ > 0 .
Figure 1.11 shows the form of the trajectories when k = 0, while Figure
1.12 shows their form when k = 1.
• Case 4: one eigenvalue is zero; that is, λ1 = 0 and λ2 > 0 or λ2 < 0:
Let Jr = [0 0; 0 λ2]
Let P = [v1 , v2 ], where v1 , v2 are the real eigenvectors associated with
λ1 and λ2 . Thus, the change of coordinates z = P^{-1} x transforms the
system into:

ż1 = 0
ż2 = λ2 z2

whose solution is

z1 (t) = z10 ; z2 (t) = z20 e^{λ2 t}

The exponential term will grow or decay depending on the sign of λ2 :



– when λ2 < 0, all trajectories converge to the equilibrium subspace;

– when λ2 > 0, all trajectories diverge away from the equilibrium
subspace.
Both cases are depicted in Figure 1.13.

Fig. 1.13: Phase portrait for (a)λ1 = 0, λ2 < 0, (b) λ1 = 0, λ2 > 0

• Case 5: both eigenvalues are at the origin; that is, λ1 = λ2 = 0:
Let Jr = [0 1; 0 0]
The change of variables z = P^{-1} x results in

ż1 = z2
ż2 = 0

whose solution is

z1 (t) = z10 + z20 t , z2 (t) = z20

The term z20 t will increase or decrease depending on the sign of z20
as depicted in Figure 1.14

Fig. 1.14: Phase portrait when λ1 = λ2 = 0
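The case analysis above amounts to a classification rule on the eigenvalues of A. A compact sketch (illustrative Python, not part of the thesis; the function name is ours) for 2 × 2 matrices:

```python
import numpy as np

def classify_equilibrium(A, tol=1e-9):
    """Classify the origin of xdot = A x for a 2x2 real matrix A,
    following the cases of Section 1.1.4."""
    lam = np.linalg.eigvals(A)
    if abs(lam[0].imag) > tol:          # Case 2: complex pair alpha +/- j beta
        alpha = lam[0].real
        if alpha < -tol:
            return "stable focus"
        if alpha > tol:
            return "unstable focus"
        return "center"
    l1, l2 = sorted(lam.real)           # real eigenvalues (Cases 1, 3, 4, 5)
    if l2 < -tol:
        return "stable node"
    if l1 > tol:
        return "unstable node"
    if l1 < -tol and l2 > tol:
        return "saddle point"
    return "degenerate (zero eigenvalue)"

# Sample matrices, one per equilibrium type
stable_node  = np.array([[-2.0,  0.0], [0.0, -1.0]])
saddle       = np.array([[ 1.0,  0.0], [0.0, -1.0]])
center       = np.array([[ 0.0, -1.0], [1.0,  0.0]])
stable_focus = np.array([[-1.0, -2.0], [2.0, -1.0]])
```

The degenerate branch lumps Cases 4 and 5 together, since both involve a zero eigenvalue and an equilibrium subspace rather than an isolated equilibrium.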



1.2 Multiple Equilibria

The linear system (1.7) has an isolated equilibrium point at x = 0 if A has


no zero eigenvalues, that is, if det A ≠ 0.
When det A = 0, the system has a continuum of equilibrium points. A
nonlinear system can have multiple isolated equilibrium points.

Example 1.2.1. The state-space model of a tunnel-diode circuit (refer to
the Appendix) is given by

ẋ1 = (1/C) [−h(x1 ) + x2 ]
ẋ2 = (1/L) [−x1 − Rx2 + u]

Assume the circuit parameters are u = 1.2 V, R = 1.5 kΩ, C = 2 pF, and
L = 5 µH. Measuring time in nanoseconds and the currents x2 and h(x1 )
in mA, the state-space model is given by

ẋ1 = 0.5 [−h(x1 ) + x2 ]
ẋ2 = 0.2 (−x1 − 1.5x2 + 1.2)

Suppose h(·) is given by

h(x1 ) = 17.76x1 − 103.79x1² + 229.62x1³ − 226.31x1⁴ + 83.72x1⁵

Setting ẋ1 = ẋ2 = 0 and solving for the equilibrium points, it can be verified
that there are three equilibrium points, at (0.063, 0.758), (0.285, 0.61), and
(0.884, 0.21). The phase portrait of the system is shown in Figure 1.15.

Fig. 1.15: Phase portrait of the Tunnel diode circuit

Examination of the phase portrait shows that all trajectories tend to either
the point Q1 or the point Q3 . Experimentally, we would observe one of
the two steady-state operating points Q1 or Q3 , depending on the initial
capacitor voltage and inductor current. This tunnel-diode circuit with multiple
equilibria is referred to as a bistable circuit, because it has two steady-state
operating points.
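The three equilibria can be recovered numerically: setting ẋ1 = ẋ2 = 0 gives x2 = h(x1 ) together with the load line x2 = (1.2 − x1 )/1.5, i.e. a quintic equation in x1 . An illustrative Python sketch (assuming the quintic h(·) stated above; not part of the thesis):

```python
import numpy as np

# h(x1) - (1.2 - x1)/1.5 = 0, coefficients in descending powers of x1
coeffs = [83.72, -226.31, 229.62, -103.79, 17.76 + 1.0 / 1.5, -1.2 / 1.5]

roots = np.roots(coeffs)
# Keep the real roots; each gives an equilibrium (x1, x2) on the load line
x1_eq = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
equilibria = [(x1, (1.2 - x1) / 1.5) for x1 in x1_eq]
```

The three real roots land close to the equilibria quoted above; the remaining pair of roots of the quintic is complex.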

1.3 Existence and Uniqueness

In this section we recall some elements of mathematical analysis which will
be used throughout.
For the mathematical model to predict the future state of the system
from its current state at t0 , the initial value problem
ẋ = f (t, x), x(t0 ) = x0
must have a unique solution.
Existence and uniqueness can be ensured by imposing some constraints
on the right-hand-side function f (t, x).
Let us define
ẋ = f (t, x (t)) ∀ t ∈ [t0 , t1 ] (1.11)
If f (t, x) is continuous in t and x, then the solution x (t) will be continuously
differentiable. A differential equation with a given initial condition might
have several solutions.

Example 1.3.1. Consider the scalar equation

ẋ = x^{1/3} with x (0) = 0 (1.12)

It has the solution x (t) = (2t/3)^{3/2}. This solution is not unique, since
x (t) ≡ 0 is another solution. Note that the right-hand side of (1.12) is
continuous in x, but continuity of f (t, x) is not sufficient to ensure uniqueness
of the solution. Extra conditions must be imposed on the function f .
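Both solutions are easy to verify directly: for x(t) = (2t/3)^{3/2} the derivative is (2t/3)^{1/2}, which equals x^{1/3}, and for x(t) ≡ 0 the derivative 0 equals 0^{1/3}. A quick check on a grid (illustrative Python, not part of the thesis):

```python
# x(t) = (2t/3)^(3/2) should satisfy xdot = x^(1/3); its derivative is (2t/3)^(1/2)
ts = [0.1 * k for k in range(1, 50)]
max_residual = max(
    abs((2.0 * t / 3.0) ** 0.5 - ((2.0 * t / 3.0) ** 1.5) ** (1.0 / 3.0))
    for t in ts
)

# The zero function is a second solution: its derivative 0 equals 0^(1/3)
zero_residual = abs(0.0 - 0.0 ** (1.0 / 3.0))
```

Two distinct functions through the same initial condition thus satisfy the same equation, which is exactly the non-uniqueness claimed above.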

Definition 1.3.1. [1] Let D ⊂ Rn and let f : D → Rn . Then:

• f is Lipschitz continuous at x0 ∈ D if there exist a Lipschitz constant

L = L(x0 ) > 0 and a neighborhood M ⊂ D of x0 such that

||f (x) − f (y)|| ≤ L||x − y|| , x, y ∈ M

• f is Lipschitz continuous on D if f is Lipschitz continuous at every


point in D.

• f is uniformly Lipschitz continuous on D if there exists a Lipschitz


constant L > 0 such that

||f (x) − f (y)|| ≤ L||x − y|| , x, y ∈ D

• f is globally Lipschitz continuous, if f is uniformly Lipschitz contin-


uous on D = Rn .

Theorem 1.3.1. (Local Existence and Uniqueness): Let f (t, x) be


piecewise continuous in t and satisfy the Lipschitz condition

||f (t, x)−f (t, y) || ≤ L||x−y|| , ∀ x, y ∈ B = {x ∈ Rn : ||x − x0 || ≤ r} , ∀t ∈ [t0 , t1 ]


(1.13)
then, there exists some δ > 0 such that the state equation

ẋ = f (t, x) with x (t0 ) = x0

has a unique solution over [t0 , t0 + δ].


A function f (x) is said to be globally Lipschitz if it is Lipschitz on
Rn .The same terminology is extended to a function f (t, x), provided the
Lipschitz condition holds uniformly in t for all t in a given interval of time.

Corollary 1.3.1. Let D ⊂ Rn and let f : D → Rn be continuous on D. If
f ′ exists and is continuous on D, then f is Lipschitz continuous on D.

Example 1.3.2. The function f (x) = x^{1/3}, which was used in (1.12), is
not locally Lipschitz at x = 0, since f ′(x) = (1/3) x^{−2/3} → ∞ as x → 0.
On the other hand, if f ′(x) is bounded by a constant k over an interval,
then f (x) is Lipschitz on the same interval with Lipschitz constant L = k.
Lemma 1.3.1. Let f : [a, b] × D → R^m be continuous for some domain
D ⊂ R^n . Suppose the Jacobian matrix [∂f /∂x] exists and is continuous
on [a, b] × D. If, for a convex subset W ⊂ D, there is a constant L ≥ 0 such that

|| [∂f /∂x](t, x) || ≤ L

on [a, b] × W , then

||f (t, x) − f (t, y)|| ≤ L||x − y|| ∀ t ∈ [a, b] , x, y ∈ W
Lemma 1.3.2. Let f (t, x) be continuous on [a, b] × D, for some domain D ⊂ Rn . If [∂f /∂x] exists and is continuous on [a, b] × D, then f is locally Lipschitz in x on [a, b] × D.

Lemma 1.3.3. If f (t, x) and [∂f /∂x] are continuous on [a, b] × Rn , then f is globally Lipschitz in x on [a, b] × Rn if and only if [∂f /∂x] is uniformly bounded on [a, b] × Rn .

Example 1.3.3. Consider the function

f (x) =
[ −x1 + x1 x2 ]
[ x2 − x1 x2 ]

where f (x) is continuously differentiable on R2 . Hence, it is locally Lipschitz on R2 . It is not globally Lipschitz since [∂f /∂x] is not uniformly bounded on R2 . Suppose we are interested in calculating a Lipschitz constant over

W = { x ∈ R2 : |x1 | ≤ a1 , |x2 | ≤ a2 }


The Jacobian matrix [∂f /∂x] is given by

∂f /∂x =
[ −1 + x2      x1   ]
[ −x2       1 − x1  ]

Using || · ||∞ for vectors in Rn and the induced matrix norm for matrices, we have

||∂f /∂x||∞ = max { |−1 + x2 | + |x1 | , |−x2 | + |1 − x1 | }

All points in W satisfy

|−1 + x2 | + |x1 | ≤ 1 + a2 + a1
|−x2 | + |1 − x1 | ≤ a2 + 1 + a1

Hence,

||∂f /∂x||∞ ≤ 1 + a1 + a2 = L
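As a quick numerical sanity check of this bound (an illustrative Python sketch, not part of the thesis; the sampling and the choice a1 = 1, a2 = 2 are assumptions), one can sample pairs of points in W and verify that the infinity-norm Lipschitz ratio of f never exceeds L = 1 + a1 + a2:

```python
import numpy as np

# Sampled check that L = 1 + a1 + a2 bounds the Lipschitz ratio of
# f(x) = (-x1 + x1*x2, x2 - x1*x2) on the box W (assumed a1, a2 values).
def f(x):
    x1, x2 = x
    return np.array([-x1 + x1 * x2, x2 - x1 * x2])

a1, a2 = 1.0, 2.0
L = 1 + a1 + a2
rng = np.random.default_rng(0)
worst = 0.0
for _ in range(2000):
    x = rng.uniform([-a1, -a2], [a1, a2])
    y = rng.uniform([-a1, -a2], [a1, a2])
    d = np.linalg.norm(x - y, np.inf)
    if d > 0:
        worst = max(worst, np.linalg.norm(f(x) - f(y), np.inf) / d)
print(worst <= L)
```

The sampled worst-case ratio stays below L, consistent with the mean value inequality on the convex set W.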
Theorem 1.3.2. (Global Existence and Uniqueness)
Suppose f (t, x) is piecewise continuous in t and satisfies

||f (t, x) − f (t, y)|| ≤ L||x − y|| ∀ x, y ∈ Rn , ∀ t ∈ [t0 , t1 ]

Then the state equation

ẋ = f (t, x) , x(t0 ) = x0

has a unique solution over [t0 , t1 ].
Example 1.3.4. Consider the linear system

ẋ = A(t)x + g(t) = f (t, x)

where A(·) and g(·) are piecewise continuous functions of t. Over any finite interval of time [t0 , t1 ], the elements of A(t) and g(t) are bounded. Hence, ||A(t)|| ≤ a and ||g(t)|| ≤ b, where || · || can be any norm on Rn together with the induced matrix norm.
The conditions of Theorem (1.3.2) are satisfied since

||f (t, x) − f (t, y)|| = ||A(t)(x − y)|| ≤ ||A(t)|| ||x − y|| ≤ a ||x − y|| ∀ x, y ∈ Rn , ∀ t ∈ [t0 , t1 ]

Therefore, Theorem (1.3.2) shows that the linear system has a unique solution over [t0 , t1 ].
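The global Lipschitz bound above can also be checked numerically. The following Python sketch uses an assumed sample A(t) (not from the thesis) and verifies that the difference f (t, x) − f (t, y), in which g(t) cancels, is bounded by a ||x − y|| with a = sup_t ||A(t)|| over a time grid:

```python
import numpy as np

# Illustration of Example 1.3.4 with an assumed piecewise-continuous A(t):
# f(t, x) = A(t) x + g(t) is globally Lipschitz in x with constant
# a = sup over t of ||A(t)|| (here estimated on a grid).
def A(t):
    return np.array([[0.0, 1.0], [-np.sin(t), -1.0]])

ts = np.linspace(0.0, 1.0, 200)
a = max(np.linalg.norm(A(t), 2) for t in ts)

rng = np.random.default_rng(1)
ok = True
for t in ts:
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.linalg.norm(A(t) @ x - A(t) @ y)   # g(t) cancels in the difference
    ok = ok and lhs <= a * np.linalg.norm(x - y) + 1e-9
print(ok)
```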

1.4 Lyapunov Stability

1.4.1 Autonomous Systems


Consider the autonomous system

ẋ = f (x(t)) (1.14)

where f : D 7→ Rn is locally Lipschitz and x(t) is the system state vector.
Suppose x̄ ∈ D is an equilibrium point of (1.14); that is, f (x̄) = 0.
Our goal is to characterize and study the stability of x̄. For convenience, we state all definitions and theorems for the case when the equilibrium point is at the origin of Rn ; that is, x̄ = 0, because any equilibrium point can be shifted to the origin via a change of variables.
Suppose x̄ ≠ 0. By the change of variables y = x − x̄, the derivative of y is given by

ẏ = ẋ = f (x) = f (y + x̄) =: g(y) ; g(0) = 0

In the new variable y, the system has an equilibrium at the origin. We shall always assume that f (x) satisfies f (0) = 0 and study the stability of the origin x = 0.

Definition 1.4.1. The equilibrium point 0 of (1.14) is:

1. Stable if, ∀ ε > 0, ∃ δ = δ(ε) > 0 such that

||x(0)|| < δ ⇒ ||x(t)|| < ε ∀ t ≥ 0

2. Unstable if it is not stable.

3. Asymptotically stable if it is stable and δ > 0 can be chosen such that

||x(0)|| < δ ⇒ lim_{t→∞} x(t) = 0

4. Exponentially stable (locally) if there exist positive constants α, β and δ such that if ||x(0)|| < δ, then

||x(t)|| ≤ αe^{−βt} ||x(0)|| ∀ t ≥ 0

5. Asymptotically stable (globally) if it is stable and

lim_{t→∞} x(t) = 0 for every x(0) ∈ Rn

6. Exponentially stable (globally) if ∃ α, β > 0 such that

||x(t)|| ≤ αe^{−βt} ||x(0)|| , ∀ t ≥ 0, ∀ x(0) ∈ Rn

For more demonstrations see Figures 1.16,1.17.

Fig. 1.16: Lyapunov stability of an equilibrium point

Fig. 1.17: Asymptotic stability of an equilibrium point

The ε − δ argument demonstrates that if the origin is stable, then, for any value of ε, we must produce a value of δ, possibly dependent on ε, such that a trajectory starting in a δ-neighborhood of the origin will never leave the ε-neighborhood.
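A small numerical illustration of (local) exponential stability in the sense of Definition 1.4.1(4), for a linear system ẋ = Ax with an assumed Hurwitz A and hedged constants α, β (the matrix and constants are illustrative choices, not from the thesis):

```python
import numpy as np
from scipy.linalg import expm

# Check ||x(t)|| <= alpha * exp(-beta t) * ||x(0)|| on x(t) = e^{At} x(0)
# for an assumed Hurwitz A (eigenvalues -1 and -2) and hedged alpha, beta.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
alpha, beta = 5.0, 0.9     # assumed constants with beta < min |Re(lambda_i)|

x0 = np.array([1.0, -1.0])
ok = all(
    np.linalg.norm(expm(A * t) @ x0) <= alpha * np.exp(-beta * t) * np.linalg.norm(x0)
    for t in np.linspace(0.0, 10.0, 101)
)
print(ok)
```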

1.4.2 Internal Stability

Internal Uniform Stability [9]
Internal stability deals with the boundedness and asymptotic behaviour of solutions of the zero-input linear state equation

ẋ(t) = A(t)x(t) , x(t0 ) = x0 (1.15)

where boundedness properties hold regardless of the choice of fixed t0 and


various initial states x0 .

Theorem 1.4.1. We say that equation (1.15) is uniformly stable if there


exists a constant γ > 0 such that for any fixed t0 and x0 the corresponding
solution satisfies the inequality

||x(t)|| ≤ γ||x0 || , t ≥ t0

The word uniformly refers to the fact that γ must not depend on the
initial time t0 as illustrated in Figure 1.18.

Fig. 1.18: Uniformly stability implies the γ-bound is independent of t0

Theorem 1.4.2. The linear state equation (1.15) is uniformly stable if


and only if there exists γ such that

||Φ(t, τ )|| ≤ γ

for all t and τ such that t ≥ τ , where Φ(t, τ ) is the state transition matrix
of the solution and where we know that the solution of equation (1.15) is
given by
x(t) = Φ(t, τ )x(τ )

Theorem 1.4.3. The linear state equation (1.15) is uniformly exponentially stable if there exist two constants γ > 0 and λ > 0 such that

||Φ(t, τ )|| ≤ γe^{−λ(t−τ )}

for all t and τ such that t ≥ τ .

Both γ and λ are independent of the initial time t0 , as illustrated in Figure 1.19.

Fig. 1.19: A decaying exponential bound independent of t0

Theorem 1.4.4. Suppose there exists a constant α > 0 such that ||A(t)|| ≤ α for all t. Then the linear state equation (1.15) is uniformly exponentially stable if and only if there exists a constant β > 0 such that

∫_τ^t ||Φ(t, σ)|| dσ ≤ β , ∀ t ≥ τ

Proof. Assume the state equation is uniformly exponentially stable. Then, by Theorem (1.4.3), there exist γ, λ > 0 such that

||Φ(t, σ)|| ≤ γe^{−λ(t−σ)} , ∀ t ≥ σ

Then

∫_τ^t ||Φ(t, σ)|| dσ ≤ ∫_τ^t γe^{−λ(t−σ)} dσ = (γ/λ)(1 − e^{−λ(t−τ )} ) ≤ β

where β = γ/λ, for all t ≥ τ .
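For a time-invariant A the transition matrix is Φ(t, σ) = e^{A(t−σ)}, and the bound β = γ/λ can be checked numerically. The matrix and the constants γ, λ below are assumed illustrative choices, not from the thesis:

```python
import numpy as np
from scipy.linalg import expm

# Numerical illustration of Theorem 1.4.4 for an assumed Hurwitz A
# (double eigenvalue -1): the integral of ||Phi(t, sigma)|| over
# sigma in [tau, t] stays below beta = gamma / lambda.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])

t, tau = 8.0, 0.0
sigmas = np.linspace(tau, t, 400)
norms = [np.linalg.norm(expm(A * (t - s)), 2) for s in sigmas]
integral = float(np.sum(norms) * (sigmas[1] - sigmas[0]))  # Riemann sum

gamma, lam = 3.0, 0.5     # hedged constants with ||e^{A s}|| <= gamma e^{-lam s}
beta = gamma / lam
print(integral <= beta)
```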

1.4.3 Lyapunov Direct Method


In 1892, Lyapunov showed that functions other than the energy could be used to determine the stability of an equilibrium point. If Φ(t, x) is a solution of the autonomous system (1.14), then the derivative of the Lyapunov function V (x) along the trajectories of (1.14) is denoted by

V̇ (x) = (∂V /∂x) f (x)

which depends on the system's equation. In addition, if the solution Φ(t, x) starts at the initial state x at time t = 0, then the condition

V̇ (x) = (d/dt) V (Φ(t, x)) |_{t=0} < 0

ensures that V (x) decreases along the solution of (1.14).

Definition 1.4.2. [1] Let V : D → R be a continuously differentiable function defined in a domain D ⊂ Rn that contains the origin. The derivative of V along the trajectories of (1.14), denoted by V̇ (x), is given by

V̇ (x) = Σ_{i=1}^{n} (∂V /∂xi ) ẋi = Σ_{i=1}^{n} (∂V /∂xi ) fi (x)
      = [ ∂V /∂x1  ∂V /∂x2  · · ·  ∂V /∂xn ] [ f1 (x) , f2 (x) , . . . , fn (x) ]^T = (∂V /∂x) f (x)

The derivative of V along the trajectories of the system depends on the system's equation. If φ(t, x) is the solution of (1.14) that starts at the initial state x at time t = 0, then

V̇ (x) = (d/dt) V (φ(t, x)) |_{t=0} = V'(x) f (x)

Therefore, if V̇ (x) is negative, then V will decrease along the solution φ(t, x0 ) through x0 = x(t0 ) at t0 = 0.
through x0 = x(t0 ) at t0 = 0 .
Note that, by the chain rule,

(d/dt) V (φ(t, x)) = (∂V /∂x)|_{x=φ(t,x)} (∂φ/∂t)(t, x) = (∂V /∂x)|_{x=φ(t,x)} f (φ(t, x))

Since φ(0, x) = x, evaluating at t = 0 gives

V̇ (x) = (∂V /∂x) f (x) = V'(x) f (x)
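The decrease of V along trajectories can be seen on a small assumed example (the system and V below are illustrative choices, not from the thesis). For f (x) = (x2 , −x1 − x2 ) and V (x) = x1² + x2² , one has V̇ (x) = 2x1 x2 + 2x2 (−x1 − x2 ) = −2x2² ≤ 0, so V (φ(t, x)) should be non-increasing:

```python
import numpy as np

# Assumed toy system with Vdot = -2 x2^2 <= 0: integrate with RK4 and
# confirm that V evaluated along the solution decreases.
def f(x):
    return np.array([x[1], -x[0] - x[1]])

def V(x):
    return x[0]**2 + x[1]**2

x, dt = np.array([1.0, 1.0]), 1e-3
vals = [V(x)]
for _ in range(5000):      # integrate to t = 5
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    vals.append(V(x))
print(vals[0] > vals[-1])   # V decreased along the solution
```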

Theorem 1.4.5. Let x = 0 be an equilibrium point for (1.14) and D ⊂ Rn be a domain containing x = 0. Let V : D → R be a continuously differentiable function such that

V (0) = 0 ; V (x) > 0 ∀ x ∈ D, x ≠ 0 (1.16)

V̇ (x) ≤ 0 ∀ x ∈ D (1.17)

Then x(t) = 0 is stable. Moreover, if

V̇ (x) < 0 ∀ x ∈ D , x ≠ 0 (1.18)

then x = 0 is asymptotically stable. For the proof, see Khalil [1], p. 100.

Definition 1.4.3. [Positive Definite Functions]

• A function V (x) satisfying condition (1.16); that is, V (0) = 0 and


V (x) > 0 for x 6= 0 is said to be positive definite .

• A function V (x) satisfying the weaker condition V (x) ≥ 0 for x 6= 0


is said to be positive semi-definite.

• A function V (x) is said to be negative definite or semi-definite if


−V (x) is positive definite or positive semi-definite, respectively.

Theorem 1.4.6. We say that the origin is stable if there is a continuously


differentiable positive definite function V (x) so that V̇ (x) is negative semi-
definite, and it is asymptotically stable if V̇ (x) is negative definite .

1.4.4 Domain of Attraction


Consider the autonomous system

ẋ = f (x) (1.19)

where f : D 7→ Rn is locally Lipschitz.
When the origin x = 0 is asymptotically stable, we are often interested in determining how far from the origin a trajectory can start and still converge to the origin as t → ∞. This is what is called the region of attraction (also called the region of asymptotic stability, domain of attraction, or basin).
The region of attraction is defined as the set of all points x such that lim_{t→∞} Φ (t, x) = 0, where Φ (t, x) is the solution of (1.19) starting at x. Finding the region of attraction analytically might be difficult. However, Lyapunov functions can

be used to estimate the region of attraction by finding sets contained in the


region of attraction.If there is a Lyapunov function that satisfies the condi-
tions of asymptotic stability over a domain D and if Ωc = {x ∈ Rn : V (x) ≤ c}
is bounded and contained in D , then Ωc is a positive invariant set and ev-
ery trajectory starting in Ωc remains in Ωc and approaches the origin as
t → ∞ .Thus, Ωc is an estimate of the region of attraction.
If for every initial state x the trajectory Φ (t, x) approaches the origin as t → ∞, then the asymptotically stable equilibrium point at the origin is said to be globally asymptotically stable.
The problem is that, for large c, the set Ωc might not be bounded. To ensure that Ωc is bounded for all values of c > 0, we need the extra condition

V (x) → ∞ as ||x|| → ∞

A function satisfying this condition is said to be radially unbounded.

Theorem 1.4.7 (Barbashin-Krasovskii theorem). Consider the nonlinear dynamical system (1.19) and assume that there exists a continuously differentiable candidate function V : Rn 7→ R such that

V (0) = 0 (1.20)
V (x) > 0, x ∈ Rn , x ≠ 0 (1.21)
V'(x)f (x) < 0, x ∈ Rn , x ≠ 0 (1.22)
V (x) → ∞ as ||x|| → ∞ (1.23)

Then the zero solution x(t) ≡ 0 is globally asymptotically stable. The radial unboundedness condition (1.23) ensures that the system trajectories move from one energy surface to an inner energy surface and hence cannot drift away from the system equilibrium.

Example 1.4.1. Consider the function

V (x) = x1²/(1 + x1²) + x2²

x21
Fig. 1.20: Lyapunov surfaces for V (x) = + x22 ≤ c
(1+x21 )

Figure (1.20) shows the surfaces V (x) = c for various values of c. For small c, the surface V (x) = c is closed, and Ωc = {x ∈ Rn : V (x) ≤ c} is bounded since it is contained in a closed ball Br for some r > 0. However, as c increases, the surface V (x) = c becomes open and Ωc becomes unbounded.
For Ωc to be in the interior of a ball Br , c must satisfy

c < inf_{||x||≥r} V (x)

If

γ = lim_{r→∞} inf_{||x||≥r} V (x) < ∞

then Ωc will be bounded only if c < γ.


For our example,

γ = lim_{r→∞} min_{||x||=r} [ x1²/(1 + x1²) + x2² ] = lim_{|x1 |→∞} x1²/(1 + x1²) = 1

since the minimum over the circle ||x|| = r is approached along the x1-axis.

Hence, Ωc is bounded only for c < 1. The extra condition that ensures Ωc is bounded for all c > 0 is

lim_{||x||→∞} V (x) = ∞

in which case we say that V (x) is radially unbounded.

MATLAB.m 2 (Example 1.4.1).

syms x1 x2 c
eqn = ((x1^2 / (1 + x1^2)) + x2^2) - c
RelativeTOx2 = solve(eqn, 'x2')
x1 = -2:0.1:2
for c = 0.1:0.1:1   % c > 0 so that the level set V(x) = c is nonempty
    hold on
    plot(x1, eval(RelativeTOx2))
end

Example 1.4.2. Consider the second-order system

ẋ1 = x2
ẋ2 = −h(x1 ) − a x2

where a > 0, h(·) is locally Lipschitz, h(0) = 0 and y h(y) > 0 for all y ≠ 0, y ∈ (−b, c) for some positive constants b and c.
The Lyapunov function candidate is

V (x) = (1/2) γ a x1² + (1/2) δ x2² + γ x1 x2 + δ ∫_0^{x1} h(y) dy

where its derivative is

V̇ (x) = −[a δ − γ] x2² − γ x1 h(x1 )

Choosing δ > 0 and 0 < γ < a δ ensures that V (x) is positive definite for all x ∈ R2 and radially unbounded, and V̇ (x) is negative definite for all x ∈ R2 . Therefore, the origin is globally asymptotically stable.
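The expression for V̇ (x) in this example can be verified symbolically. The following SymPy sketch (an illustrative check; the thesis itself uses MATLAB for such computations) differentiates V along the trajectories for a general h and confirms the stated identity:

```python
import sympy as sp

# Symbolic check of Example 1.4.2: Vdot should equal
# -(a*delta - gamma) x2^2 - gamma x1 h(x1) for a general h.
x1, x2, a, g, d, y = sp.symbols('x1 x2 a gamma delta y')
h = sp.Function('h')

V = sp.Rational(1, 2)*g*a*x1**2 + sp.Rational(1, 2)*d*x2**2 + g*x1*x2 \
    + d*sp.Integral(h(y), (y, 0, x1))
f1 = x2
f2 = -h(x1) - a*x2
Vdot = sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2

expected = -(a*d - g)*x2**2 - g*x1*h(x1)
print(sp.expand(Vdot - expected))   # → 0
```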

1.4.5 Lyapunov Quadratic Form


A class of scalar functions V (x) for which sign definiteness can be easily checked is the class of quadratic forms

V (x) = xT P x = Σ_{i=1}^{n} Σ_{j=1}^{n} pij xi xj

where P is a real symmetric matrix. In this case, V (x) is positive definite (positive semi-definite) if and only if all the eigenvalues of P are positive (nonnegative), which is the case if and only if all the leading principal minors of P are positive (all the principal minors of P are nonnegative).
If V (x) = xT P x is positive definite (positive semi-definite), we say that the matrix P is positive definite (positive semi-definite) and we write P > 0 (P ≥ 0).

Example 1.4.3. Consider V (x) = a x1² + 2x1 x3 + a x2² + 4x2 x3 + a x3²

V (x) = [ x1 x2 x3 ]
[ a 0 1 ] [ x1 ]
[ 0 a 2 ] [ x2 ] = xT P x
[ 1 2 a ] [ x3 ]

The leading principal minors of P are a, a², and a(a² − 5). Therefore, V (x) is positive definite if a > √5.
For negative definiteness, the leading principal minors of −P should be positive; that is, the leading principal minors of P should have alternating signs, starting with a negative one. Therefore, V (x) is negative definite if a < −√5. It can be shown that V (x) is positive semi-definite if a ≥ √5, negative semi-definite if a ≤ −√5, and indefinite for a ∈ (−√5, √5).
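The threshold a = √5 can be cross-checked numerically by inspecting the eigenvalues of P (a) at sample values on either side of √5 ≈ 2.236 (an illustrative check, with assumed sample values):

```python
import numpy as np

# Eigenvalue check of Example 1.4.3: P(a) is positive definite for
# a > sqrt(5) and indefinite for 0 < a < sqrt(5).
def P(a):
    return np.array([[a, 0.0, 1.0], [0.0, a, 2.0], [1.0, 2.0, a]])

eig_pd = np.linalg.eigvalsh(P(3.0))    # a = 3 > sqrt(5): all eigenvalues > 0
eig_ind = np.linalg.eigvalsh(P(2.0))   # a = 2 < sqrt(5): mixed signs
print(eig_pd.min() > 0, eig_ind.min() < 0 < eig_ind.max())
```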

Remark:
Lyapunov Theorem can be applied without solving the differential equation
(1.14).There are natural Lyapunov function candidates like energy functions
in electrical or mechanical systems.
Example 1.4.4. Consider the first-order differential equation

ẋ = −g(x)

where g(x) is locally Lipschitz on (−a, a) and satisfies

g(0) = 0 ; x g(x) > 0 ∀ x ≠ 0 , x ∈ (−a, a)

This system has an isolated equilibrium point at the origin.
Consider the function

V (x) = ∫_0^{x} g(µ) dµ

Over the domain D = (−a, a), V (x) is continuously differentiable, V (0) = 0 and V (x) > 0 ∀ x ≠ 0. Thus, V (x) is a valid Lyapunov function candidate. To see whether or not V (x) is indeed a Lyapunov function, we calculate its derivative along the trajectories of the system:

V̇ (x) = (∂V /∂x)(−g(x)) = −g²(x) < 0 ∀ x ∈ D − {0}

Thus, by Theorem (1.4.5), we conclude that the origin is asymptotically stable.

Example 1.4.5. Consider the pendulum equation without friction

ẋ1 = x2 (1.24)
ẋ2 = −(g/l) sin x1 (1.25)

A natural Lyapunov function candidate is the energy function

V (x) = (g/l)(1 − cos x1 ) + (1/2) x2²

So V (0) = 0 and V (x) is positive definite over the domain −2π < x1 < 2π.
The derivative of V (x) along the trajectories of the system is given by

V̇ (x) = (g/l) ẋ1 sin x1 + x2 ẋ2 = (g/l) x2 sin x1 − (g/l) x2 sin x1 = 0

Thus, conditions (1.16) and (1.17) of Theorem (1.4.5) are satisfied and we conclude that the origin is stable.
Since V̇ (x) ≡ 0, we can also conclude that the origin is not asymptotically stable, because trajectories starting on a Lyapunov surface V (x) = c remain on the same surface for all future time.
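Since V̇ (x) = 0, the energy V should be constant along every trajectory. This can be illustrated numerically (an assumed normalization g/l = 1 and an RK4 integrator; not part of the thesis):

```python
import numpy as np

# RK4 simulation of the frictionless pendulum with assumed g/l = 1:
# the energy V should be conserved up to tiny numerical drift.
def f(x):
    return np.array([x[1], -np.sin(x[0])])

def V(x):
    return (1 - np.cos(x[0])) + 0.5 * x[1]**2

x, dt = np.array([1.0, 0.0]), 1e-3
E0 = V(x)
for _ in range(10000):      # integrate to t = 10
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
drift = abs(V(x) - E0)
print(drift)   # tiny numerical drift only
```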

Example 1.4.6. Consider the pendulum equation with friction

ẋ1 = x2 (1.26)
ẋ2 = −(g/l) sin x1 − (k/m) x2 (1.27)

Take

V (x) = (g/l)(1 − cos x1 ) + (1/2) x2²

as a Lyapunov function candidate. Then,

V̇ (x) = (g/l) ẋ1 sin x1 + x2 ẋ2 = −(k/m) x2²

V̇ (x) is negative semi-definite. It is not negative definite because V̇ (x) = 0 for x2 = 0, irrespective of the value of x1 ; that is, V̇ (x) = 0 along the x1-axis. Therefore, we can only conclude that the origin is stable. However, sketching the phase portrait of the pendulum equation when k > 0 reveals that the origin is in fact asymptotically stable.
Thus, the energy Lyapunov function fails to show this fact, but a theorem due to LaSalle will enable us to arrive at a different conclusion using the energy Lyapunov function.


Let us look for a Lyapunov function V (x) that would have a negative definite V̇ (x). Starting from the energy Lyapunov function, let us replace the term (1/2)x2² by the more general quadratic form (1/2) xT P x for some 2 × 2 positive definite matrix P :

V (x) = (1/2) xT P x + (g/l)(1 − cos x1 )
      = (1/2) [ x1 x2 ]
[ p11 p12 ] [ x1 ]
[ p21 p22 ] [ x2 ] + (g/l)(1 − cos x1 )
For the quadratic form (1/2) xT P x to be positive definite, the elements of the matrix P must satisfy

p11 > 0 ; p22 > 0 ; p11 p22 − p12² > 0

The derivative V̇ (x) is given by

V̇ (x) = (p11 x1 + p12 x2 + (g/l) sin x1 ) x2 + (p12 x1 + p22 x2 )(−(g/l) sin x1 − (k/m) x2 )
      = (g/l)(1 − p22 ) x2 sin x1 − (g/l) p12 x1 sin x1 + (p11 − p12 (k/m)) x1 x2 + (p12 − p22 (k/m)) x2²

Now we want to choose p11 , p12 and p22 such that V̇ (x) is negative definite. As the terms x2 sin x1 and x1 x2 are sign indefinite, we cancel them by taking p22 = 1 and p11 = (k/m) p12 . Then p12 must satisfy 0 < p12 < k/m for V (x) to be positive definite.
Let us take p12 = (1/2)(k/m); then V̇ (x) is given by

V̇ (x) = −(1/2)(g/l)(k/m) x1 sin x1 − (1/2)(k/m) x2²

The term x1 sin x1 is positive for all 0 < |x1 | < π. Taking D = { x ∈ R2 : |x1 | < π }, we see that V (x) is positive definite and V̇ (x) is negative definite over D. Thus, by Theorem (1.4.5), we conclude that the origin is asymptotically stable.
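The choice p22 = 1, p12 = (1/2)(k/m), p11 = (k/m)p12 and the resulting expression for V̇ (x) can be verified symbolically (an illustrative SymPy check, complementing the thesis's MATLAB listings):

```python
import sympy as sp

# Symbolic check of the pendulum Lyapunov construction:
# with p22 = 1, p12 = k/(2m), p11 = (k/m)*p12, Vdot should equal
# -(1/2)(g/l)(k/m) x1 sin(x1) - (1/2)(k/m) x2^2.
x1, x2, g, l, k, m = sp.symbols('x1 x2 g l k m', positive=True)
p12 = k / (2*m)
p11 = (k/m) * p12

V = sp.Rational(1, 2)*(p11*x1**2 + 2*p12*x1*x2 + x2**2) + (g/l)*(1 - sp.cos(x1))
f1 = x2
f2 = -(g/l)*sp.sin(x1) - (k/m)*x2
Vdot = sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2

expected = -sp.Rational(1, 2)*(g/l)*(k/m)*x1*sp.sin(x1) \
           - sp.Rational(1, 2)*(k/m)*x2**2
print(sp.simplify(Vdot - expected))   # → 0
```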

1.5 Linear Systems and Linearization

Definition 1.5.1. Consider a nonlinear dynamical system represented in the vector form

ẋ = f (x(t)) (1.28)

Then we say x̄ is an equilibrium point of (1.28) (sometimes referred to as a singular point or critical point) when ẋ = 0; that is, when f (x̄) = 0.

Definition 1.5.2. Linearization is a technique that expands f (x) in a first-order Taylor series plus higher-order terms in a neighbourhood of the equilibrium point x = x̄.
Thus, equation (1.28) can be written in the neighbourhood of the equilibrium point x = x̄ as:

f (x) = f (x̄) + (∂f /∂x)(x̄)(x − x̄) + H.O.T

Now, linearizing means that we remove the higher-order terms. Hence

ẋ = f (x) ≈ f (x̄) + (∂f /∂x)(x̄)(x − x̄)

Note that usually x̄ = 0, which means the equilibrium point is the origin (i.e. f (0) = 0 if x̄ = 0).
The nonlinear system (1.28) is represented by two scalar differential
equations

ẋ1 = f1 (x1 , x2 ) (1.29)


ẋ2 = f2 (x1 , x2 ) (1.30)

Let p = (p1 , p2 ) be an equilibrium point of the nonlinear system (1.29),(1.30), and suppose that the functions f1 , f2 are continuously differentiable.
Linearization requires expanding the right-hand side of (1.29),(1.30) into its Taylor series about the point (p1 , p2 ); we obtain

ẋ1 = f1 (p1 , p2 ) + a11 (x1 − p1 ) + a12 (x2 − p2 ) + H.O.T

x˙2 = f2 (p1 , p2 ) + a21 (x1 − p1 ) + a22 (x2 − p2 ) + H.O.T


where

a11 = ∂f1 /∂x1 |_{x1 =p1 ,x2 =p2} , a12 = ∂f1 /∂x2 |_{x1 =p1 ,x2 =p2}
a21 = ∂f2 /∂x1 |_{x1 =p1 ,x2 =p2} , a22 = ∂f2 /∂x2 |_{x1 =p1 ,x2 =p2}

H.O.T denotes terms of the form (x1 − p1 )², (x2 − p2 )², (x1 − p1 )(x2 − p2 ), and so on.
Since (p1 , p2 ) is an equilibrium point, we have

f1 (p1 , p2 ) = f2 (p1 , p2 ) = 0

Since we are interested in the trajectories near (p1 , p2 ), we define

y1 = x1 − p1 , y2 = x2 − p2

and rewrite the state equation as

y˙1 = x˙1 = a11 y1 + a12 y2 + H.O.T

y˙2 = x˙2 = a21 y1 + a22 y2 + H.O.T


Also, we may cancel the higher-order terms and approximate the non-
linear state equation by the linear state equation

y˙1 = a11 y1 + a12 y2

y˙2 = a21 y1 + a22 y2


Rewriting this equations in a vector form, we obtain

ẏ = Ay

where

A =
[ a11 a12 ]
[ a21 a22 ] = (∂f /∂x) |_{x=p}

The matrix [∂f /∂x] is called the Jacobian matrix of f (x), and A is the Jacobian matrix evaluated at the equilibrium point p.
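The procedure above can be mechanized. The following SymPy sketch linearizes an assumed toy system (not from the thesis) at its equilibrium at the origin and inspects the eigenvalues of the resulting A:

```python
import sympy as sp

# Jacobian linearization of an assumed toy system
# xdot1 = x2, xdot2 = -x1 + x1^3 - x2, at the equilibrium (0, 0).
x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x2, -x1 + x1**3 - x2])
J = f.jacobian([x1, x2])
A = J.subs({x1: 0, x2: 0})
print(A)            # Matrix([[0, 1], [-1, -1]])
print(A.eigenvals())
```

Both eigenvalues have negative real part (−1/2 ± √3 i/2), so the origin of the linearized system is asymptotically stable.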

Example 1.5.1. Consider The state-space model given in Example (1.2.1).


The Jacobian matrix of the function f (x) is given by,
 0 
∂f −0.5h (x1 ) 0.5
=
∂x −0.2 −0.3
1. CHAPTER 1 :Classical Stability 47

where,
0 dh
h (x1 ) = = 17.76 − 207.58x1 + 688.86x21 − 905.24x31 + 418.6x41
dx1
Evaluating the Jacobian matrix at the equilibrium points Q1 = (0.063, 0.758)
Q2 = (0.285, 0.61), and Q3 = (0.884, 0.21) respectively, yields the three ma-
trices  
−3.598 0.5
A1 = , Eigenvalues : −3.57, −0.33
−0.2 −0.3
 
1.82 0.5
A2 = , Eigenvalues : 1.77, −0.25
−0.2 −0.3
 
−1.427 0.5
A3 = , Eigenvalues : −1.33, −0.4
−0.2 −0.3
Thus, Q1 is a stable node, Q2 is a saddle point, and Q3 is a stable node.
Consider the nonlinear system (1.14)

ẋ = f (x)

where f : D 7→ Rn is a continuously differentiable map from a domain


D ⊂ Rn into Rn .Suppose that the origin x = 0 is in the interior of D and is
an equilibrium point for the system; that is, f (0) = 0.
By the mean value theorem,

fi (x) = fi (0) + (∂fi /∂x)(zi ) x

where zi is a point on the line segment connecting x to the origin.
Since f (0) = 0, we can write fi (x) as
 
∂fi (zi ) ∂fi (0) ∂fi (zi ) ∂fi (0)
fi (x) = x= x+ − x
∂x ∂x ∂x ∂x
which is corresponding to

f (x) = Ax + g(x)

where  
∂f (0) ∂fi (zi ) ∂fi (0)
A≡ , gi (x) = − x
∂x ∂x ∂x
The function gi (x) satisfies
 
∂fi (zi ) ∂fi (0)
||gi (x)|| ≤
− ||x||
∂x ∂x

h i
∂f
by continuity of ∂x , we have

||g(x)||
7→ 0 as ||x|| 7→ 0
||x||

This suggests that in a small neighbourhood of the origin we can approximate the nonlinear system (1.14) by its linearization about the origin

ẋ = Ax , where A = (∂f /∂x)|_{x=0}

The linear time-invariant system

ẋ = Ax(t) (1.31)

has an equilibrium point at the origin. The equilibrium point is isolated if and only if det(A) ≠ 0. If det(A) = 0, the matrix A has a nontrivial null space, and every point of the null space of A is an equilibrium point for the system (1.31); in other words, if det(A) = 0, the system has an equilibrium subspace.
Stability properties of the origin can be characterized by the locations of the eigenvalues of the matrix A.
The solution of (1.31) for a given initial state x(0) is given by

x(t) = eAt x(0) (1.32)

The structure of eAt can be understood by considering the Jordan form of A.

Theorem 1.5.1. The equilibrium point x = 0 of (1.31) is stable if and only if all eigenvalues of A satisfy Re λi ≤ 0 (nonpositive real part) and every eigenvalue with Re λi = 0 has an associated Jordan block of order one. The equilibrium point x = 0 is (globally) asymptotically stable if and only if all eigenvalues of A satisfy Re λi < 0; that is, A is Hurwitz.

Proof: From (1.32) we can see that the origin is stable if and only if eAt is a bounded function of t, ∀ t ≥ 0; that is, if there exists α > 0 such that ||eAt || < α, t ≥ 0. If an eigenvalue of A is in the open right-half plane, the corresponding exponential term eλi t in (1.33) will grow unbounded as t → ∞. Therefore, we must restrict the eigenvalues to the closed left-half plane.
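The contrast between bounded and unbounded ||eAt || can be seen numerically (the two matrices below are assumed sample choices, not from the thesis):

```python
import numpy as np
from scipy.linalg import expm

# ||e^{At}|| stays bounded for a Hurwitz A (eigenvalues -1, -2) and grows
# unbounded when an eigenvalue has positive real part (here a Jordan block
# with eigenvalue 0.1).
A_hurwitz = np.array([[0.0, 1.0], [-2.0, -3.0]])
A_unstable = np.array([[0.1, 1.0], [0.0, 0.1]])

ts = [0.0, 5.0, 10.0, 20.0, 35.0]
norms_h = [np.linalg.norm(expm(A_hurwitz * t), 2) for t in ts]
norms_u = [np.linalg.norm(expm(A_unstable * t), 2) for t in ts]
print(max(norms_h), norms_u[-1])
```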

Notation:
When all eigenvalues of A satisfy Re(λi ) < 0, A is called a stability matrix
or a Hurwitz matrix.

For any matrix A there is a nonsingular matrix P that transforms A into


its Jordan form; that is,

P −1 AP = J = block diag [J1 , J2 , · · · , Jm ]

where Ji is the Jordan block associated with the eigenvalue λi of A.


A Jordan block of order m takes the form

Ji =
[ λi  1   0  · · ·  0 ]
[ 0   λi  1  · · ·  0 ]
[ .       .        .  ]
[ .           .    1  ]
[ 0   · · · ·  0   λi ]   (m × m)

Therefore,
ni
m X
X
At (Jt) −1
e = Pe P = tj−1 e(λi t) pij (A) (1.33)
i=1 j=1

where m is the number of the distinct eigenvalues of A, ni is the order of the


ith Jordan block and pij (A) are constant matrices (refer to the Appendix).

1.5.1 Investigating the Asymptotic Stability of the LTI


dynamical system (1.31) via Theorem (1.4.5)
Consider a quadratic Lyapunov function candidate

V (x) = xT P x

where P is a real symmetric positive definite matrix.


the derivative of V along the trajectories of the linear system (1.31) is given
by
V̇ (x) = xT P ẋ + ẋT P x = xT (P A + AT P )x = −xT Qx
where Q is a symmetric matrix defined by

P A + AT P = −Q (1.34)

If Q is positive definite, so that V'(x)f (x) = −xT Qx < 0, we conclude by Theorem (1.4.5) that the origin is asymptotically stable; i.e., Re λi < 0 for all eigenvalues of A. So far we have followed the usual procedure of Lyapunov's method: choose V (x) to be positive definite and then check the negative definiteness of V̇ (x).
Suppose instead that we start by choosing Q as a real symmetric positive definite matrix and solve (1.34) for P . If (1.34) has a positive definite solution, then we conclude that the origin is asymptotically stable. Equation (1.34) is called the Lyapunov equation.
Theorem 1.5.2. A matrix A is a stability matrix (Hurwitz matrix); that
is, Re(λi ) < 0 for all eigenvalues of A,if and only if for any given positive
definite symmetric matrix Q there exists a positive definite symmetric
matrix P that satisfies the Lyapunov equation (1.34).Moreover, if A is a
stability matrix, then P is the unique solution of (1.34).
Proof. From Theorem (1.4.5) with Lyapunov function V (x) = xT P x, assume that all eigenvalues of A satisfy Re(λi ) < 0 and consider the matrix P defined by

P = ∫_0^∞ e^{AT t} Q e^{At} dt (1.35)

The integrand is a sum of terms of the form t^{k−1} e^{λi t} , where Re λi < 0. Therefore, the integral exists and P is symmetric. To show that P is positive definite, suppose that it is not; then there is a vector x ≠ 0 such that xT P x = 0. But

xT P x = 0 ⇒ ∫_0^∞ xT e^{AT t} Q e^{At} x dt = 0 ⇒ e^{At} x ≡ 0 , ∀ t ≥ 0 ⇒ x = 0
This contradiction shows that P is positive definite. Substitution of (1.35) in the left-hand side of (1.34) yields:

P A + AT P = ∫_0^∞ e^{AT t} Q e^{At} A dt + ∫_0^∞ AT e^{AT t} Q e^{At} dt
           = ∫_0^∞ (d/dt)( e^{AT t} Q e^{At} ) dt = e^{AT t} Q e^{At} |_0^∞ = −Q

which means that P is indeed a solution of (1.34).

Alternatively, to show that P ≥ 0 and P = PT directly, let C be such that Q = CT C. Then

xT P x = ∫_0^∞ xT e^{AT t} CT C e^{At} x dt = ∫_0^∞ ||C e^{At} x||₂² dt ≥ 0

which shows that P ≥ 0 and P = PT .

Example 1.5.2. Let

A =
[ 0 −1 ]
[ 1 −1 ] ; Q =
[ 1 0 ]
[ 0 1 ] ; P =
[ p11 p12 ]
[ p21 p22 ]

Since P is a symmetric positive definite matrix, p12 = p21 .
The Lyapunov equation P A + AT P = −Q can be written as

[ 0 2 0 ] [ p11 ]   [ −1 ]
[ −1 −1 1 ] [ p12 ] = [ 0 ]
[ 0 −2 −2 ] [ p22 ]   [ −1 ]

The unique solution of this equation is given by

[ p11 ]   [ 1.5 ]
[ p12 ] = [ −0.5 ] ⇒ P =
[ p22 ]   [ 1.0 ]
[ 1.5 −0.5 ]
[ −0.5 1.0 ]

which is positive definite.

• The Lyapunov equation can be used to test whether or not a matrix A is a stability matrix (Hurwitz matrix) as an alternative to calculating the eigenvalues of A.
One starts by choosing a positive definite matrix Q (most often Q = I) and then solves the Lyapunov equation (1.34) for P . If the equation has a positive definite solution, we conclude that A is a stability matrix (Hurwitz); otherwise, it is not.

MATLAB.m 3 (Example 1.5.2).

clear
A=[0 -1 ; 1 -1]
Q=[1 0 ; 0 1]
P=lyap(A',Q)   % lyap(A,Q) solves A*X + X*A' = -Q, so A' gives P*A + A'*P = -Q
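The same computation can be cross-checked in Python (an illustrative sketch; SciPy's `solve_continuous_lyapunov(a, q)` solves a X + X aᵀ = q, so equation (1.34), P A + Aᵀ P = −Q, is obtained with a = Aᵀ and q = −Q):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# SciPy cross-check of the Lyapunov equation P A + A^T P = -Q.
A = np.array([[0.0, -1.0], [1.0, -1.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

residual = np.linalg.norm(P @ A + A.T @ P + Q)
eigmin = np.linalg.eigvalsh(P).min()
print(residual < 1e-10, eigmin > 0)   # P solves (1.34) and is positive definite
```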

1.5.2 Lyapunov Indirect Method


Theorem 1.5.3. [4] Let x = 0 be an equilibrium point of the nonlinear
system
ẋ = f (x)
where f : D 7→ Rn is continuously differentiable and D is a neighbourhood
of the origin.Let
∂f (x)
A=
∂x x=0
Then

• The origin is asymptotically stable if Re(λi ) < 0 for all eigenvalues


of A.

• The origin is unstable if ∃ λi such that Re(λi ) > 0.


Proof. Let A be a stability matrix (Hurwitz matrix); then, by Theorem (1.5.2), we know that for any positive definite symmetric matrix Q, the solution P of the Lyapunov equation (1.34) is positive definite. We use

V (x) = xT P x

as a Lyapunov function candidate for the nonlinear system. The derivative of V (x) along the trajectories of the system is given by

V̇ (x) = xT P f (x) + f T (x)P x
      = xT P [Ax + g(x)] + [xT AT + g T (x)]P x
      = xT (P A + AT P )x + 2xT P g(x)
      = −xT Qx + 2xT P g(x)

the first term of the right-hand side is negative definite, while the second
term is indefinite.The function g(x) satisfies:
kg(x)k2
7→ 0 as kxk2 7→ 0
kxk2
Therefore, for any γ > 0 there exists r > 0 such that

||g(x)||2 < γ||x||2 , ∀ ||x||2 < r

Hence,
V̇ (x) < −xT Qx + 2γ||P ||2 ||x||22
but

xT Qx ≥ λmin (Q)||x||₂²

where λmin (·) denotes the minimum eigenvalue of a matrix. Note that λmin (Q) is real and positive since Q is symmetric and positive definite. Thus

V̇ (x) < −[λmin (Q) − 2γ||P ||₂ ] ||x||₂² , ∀ ||x||₂ < r

Choosing γ < (1/2) λmin (Q)/||P ||₂ ensures that V̇ (x) is negative definite. By Theorem (1.4.5), we conclude that the origin is asymptotically stable.

Theorem (1.5.3) provides us with a simple procedure for determining


stability of an equilibrium point at the origin.We calculate the Jacobian
matrix
∂f
A=
∂x x=0
and test its eigenvalues.
• If Re(λi ) < 0 ∀ i (or Re(λi ) > 0 for some i), we conclude that the origin is asymptotically stable (or unstable, respectively).
• When Re(λi ) < 0 ∀ i, we can also find a quadratic Lyapunov function for the system that will work locally in some neighbourhood of the origin. The Lyapunov function is the quadratic form V (x) = xT P x, where P is the solution of the Lyapunov equation (1.34) for any positive definite symmetric matrix Q.
• When Re(λi ) ≤ 0 ∀ i with Re(λi ) = 0 for some i, linearization fails to determine the stability of the equilibrium point.

To show local exponential stability, it should be noted that

λmin (P )||x||₂² ≤ V (x) ≤ λmax (P )||x||₂²
Remark:
If the equilibrium point is not the origin (xe ≠ 0), then Theorem (1.5.3) holds with A = (∂f /∂x)|_{x=0} replaced by A = (∂f /∂x)|_{x=xe} , where xe is the non-origin equilibrium point.
Example 1.5.3. Consider the pendulum equation

ẋ1 = x2
ẋ2 = −(g/l) sin x1 − (k/m) x2

which has two equilibrium points at (x1 , x2 ) = (0, 0) and (x1 , x2 ) = (π, 0).
Using linearization, the Jacobian matrix is given by:

∂f /∂x =
[ ∂f1 /∂x1   ∂f1 /∂x2 ]
[ ∂f2 /∂x1   ∂f2 /∂x2 ] =
[ 0                  1 ]
[ −(g/l) cos x1  −k/m  ]
Evaluating the Jacobian at x = 0:

A = (∂f /∂x)|_{x=0} =
[ 0      1   ]
[ −g/l  −k/m ]

The eigenvalues of A are:

λ1,2 = −k/(2m) ± (1/2) √( (k/m)² − 4g/l )

Since the eigenvalues satisfy Re(λi ) < 0 whenever k > 0, the equilibrium point at the origin is asymptotically stable. Meanwhile, if there is no friction (k = 0), then both eigenvalues are on the imaginary axis, and in this case we cannot determine the stability of the origin through linearization.

MATLAB.m 4 (Example 1.5.3).


clear
syms x y g l k m
A=jacobian([ y , (-(g/l) * sin(x)) - ((k/m)*y) ] , [x , y])
x=0
y=0
Ax0 = eval(A)
eig(Ax0)
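A Python/SymPy analogue of the MATLAB snippet above (a hedged sketch; the symbolic computation mirrors the by-hand linearization):

```python
import sympy as sp

# Linearize the pendulum at the origin and inspect the eigenvalues of A.
x, y, g, l, k, m = sp.symbols('x y g l k m', positive=True)
f = sp.Matrix([y, -(g/l)*sp.sin(x) - (k/m)*y])
A = f.jacobian([x, y]).subs({x: 0, y: 0})
print(A)                       # Matrix([[0, 1], [-g/l, -k/m]])

eigs = list(A.eigenvals())
print(sp.simplify(sum(eigs)))  # trace of A: -k/m
```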

1.6 The Invariant Principle (LaSalle's Invariance Principle)

In our study of the pendulum equation with friction (see Example (1.4.6)), we saw that the energy Lyapunov function fails to satisfy the asymptotic stability condition of Theorem (1.4.5) because V̇ (x) = −(k/m) x2² is only negative semidefinite. Notice that V̇ (x) is negative everywhere except on the line x2 = 0, where V̇ (x) = 0.
For the system to maintain the condition V̇ (x) = 0, the trajectory of the system must be confined to the line x2 = 0.
If, in a domain about the origin, we can find a Lyapunov function whose derivative along the trajectories of the system is negative semidefinite, and if we can establish that no trajectory can stay identically at points where V̇ (x) = 0 except at the origin, then the origin is asymptotically stable.
This idea follows from LaSalle's invariance principle. Before we state LaSalle's invariance principle, we need to introduce some definitions:

• Positive limit point: A point p is said to be a positive limit point


of x (t) if there is a sequence {tn } with tn → ∞ as n → ∞ such that
x (tn ) → p as n → ∞ .

• Positive limit set: The set of all positive limit points of x (t) is called
the positive limit set of x (t).

• Invariant set: A set M is said to be an invariant set with respect to

ẋ = f (x) (1.36)

if

x (0) ∈ M ⇒ x (t) ∈ M , ∀ t ∈ R

That is, if a solution belongs to M at some time instant, then it belongs
to M for all future and past time.

• Positively invariant set: A set M is said to be a positively invariant
set if

x (0) ∈ M ⇒ x (t) ∈ M , ∀ t ≥ 0

An asymptotically stable equilibrium point is the positive limit set of
every solution starting sufficiently near the equilibrium point.

• Stable limit cycle: A stable limit cycle is the positive limit set of
every solution starting sufficiently near the limit cycle; the solution
approaches the limit cycle as t → ∞. The equilibrium point and the
limit cycle are invariant sets, since any solution starting in either set
remains in the set for all t ∈ R.

Theorem 1.6.1. (LaSalle's Theorem) Let Ω ⊂ D be a compact set that
is positively invariant with respect to (1.36). Let V : D → R be a contin-
uously differentiable function such that V̇ (x) ≤ 0 in Ω. Let E be the set
of all points in Ω where V̇ (x) = 0; that is, E = { x ∈ Ω : V̇ (x) = 0 }, and
let M be the largest invariant set in E. Then every solution starting in Ω
approaches M as t → ∞.

Unlike Lyapunov's theorem, Theorem (1.6.1) does not require the function
V (x) to be positive definite.
Our interest is to show that x (t) → 0 as t → ∞, so we need to establish
that the largest invariant set in E is the origin (x = 0).
This is shown by noting that no solution can stay identically in E other
than the trivial solution x (t) ≡ 0.

Corollary 1.6.1. Let x = 0 be an equilibrium point for (1.36). Let V :
D → R be a continuously differentiable positive definite function on a
domain D containing the origin x = 0, such that V̇ (x) ≤ 0 in D, and let
S = { x ∈ D : V̇ (x) = 0 }. Suppose that no solution can stay identically in
S other than the trivial solution x (t) ≡ 0. Then the origin is asymptotically
stable.

Corollary 1.6.2. Let x = 0 be an equilibrium point for (1.36). Let V : Rn →
R be a continuously differentiable, radially unbounded, positive definite
function such that V̇ (x) ≤ 0 for all x ∈ Rn , and let S = { x ∈ Rn : V̇ (x) = 0 }.
Suppose that no solution can stay identically in S other than the trivial
solution. Then the origin is globally asymptotically stable.

Example 1.6.1. Consider the system

x˙1 = x2
x˙2 = −g(x1 ) − h(x2 )

where g (·) and h (·) are locally Lipschitz and satisfy

g (0) = 0 , yg (y) > 0 ∀ y 6= 0 , y ∈ (−a, a)

h (0) = 0 , yh (y) > 0 ∀ y 6= 0 , y ∈ (−a, a)


The system has an isolated equilibrium point at the origin. A Lyapunov
function candidate may be taken as the energy-like function

V (x) = ∫₀^{x1} g (y) dy + (1/2) x2²

Let D = { x ∈ R² : −a < xi < a }. V (x) is positive definite in D. The
derivative of V (x) along the trajectories of the system is given by

V̇ (x) = g (x1 ) x2 + x2 [−g (x1 ) − h (x2 )] = −x2 h (x2 ) ≤ 0



Thus, V̇ (x) is negative semidefinite.
To characterize the set S = { x ∈ D : V̇ (x) = 0 }, note that

V̇ (x) = 0 ⇒ x2 h (x2 ) = 0 ⇒ x2 = 0 , since − a < x2 < a

Hence, S = { x ∈ D : x2 = 0 }. Suppose x (t) is a trajectory that belongs to
S. Then

x2 (t) ≡ 0 ⇒ ẋ2 (t) ≡ 0 ⇒ g (x1 (t)) ≡ 0 ⇒ x1 (t) ≡ 0

Therefore, the only solution that can stay identically in S is the trivial
solution x (t) ≡ 0. Thus, the origin is asymptotically stable.
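The invariance argument can be illustrated numerically. The forward-Euler sketch below (Python) uses the illustrative choices g(y) = y and h(y) = y, which satisfy the sign conditions with a = ∞; these particular functions are assumptions, not from the text. The energy-like V decreases along the trajectory and the state approaches the origin, as LaSalle's principle predicts.

```python
import numpy as np

# Illustrative instance of the system: x1' = x2, x2' = -g(x1) - h(x2)
# with g(y) = y and h(y) = y (both satisfy y*g(y) > 0, y*h(y) > 0 for y != 0).
def step(x, dt=1e-3):
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-x1 - x2)])

V = lambda x: 0.5 * x[0]**2 + 0.5 * x[1]**2   # energy-like Lyapunov function

x = np.array([1.0, 0.0])           # initial condition
V0 = V(x)
for _ in range(20000):             # integrate to t = 20
    x = step(x)

# V only decreases along trajectories, and the state approaches the origin.
assert V(x) < V0
assert np.linalg.norm(x) < 1e-2
```

Note that V̇ = −x2 h(x2) vanishes on the whole line x2 = 0, yet the trajectory still converges; this is precisely the situation where LaSalle's principle, rather than the basic Lyapunov theorem, gives asymptotic stability.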

Example 1.6.2. Consider the system of Example (1.6.1), but this time with
a = ∞, and assume that g (·) satisfies the additional condition

∫₀^{y} g (z) dz → ∞ as |y| → ∞

The Lyapunov function

V (x) = ∫₀^{x1} g (y) dy + (1/2) x2²

is radially unbounded, it can be shown that V̇ (x) ≤ 0 in R², and the set

S = { x ∈ R² : V̇ (x) = 0 } = { x ∈ R² : x2 = 0 }

contains no solutions other than the trivial solution. Hence, the origin is
globally asymptotically stable.

Example 1.6.3. Consider the first-order system ẏ = ay + u with the feedback
control law

u = −ky ; k̇ = γy² ; γ > 0

Setting

x1 = y , x2 = k ⇒ ẋ1 = ẏ and ẋ2 = k̇

the closed-loop system is represented by

ẋ1 = − (x2 − a) x1
ẋ2 = γx1²

The line x1 = 0 is an equilibrium set for this system. We want to show


that the trajectory of the system approaches this equilibrium set as t → ∞ ,

which means that the feedback controller succeeds in regulating y to zero.
Consider the Lyapunov function candidate

V (x) = (1/2) x1² + (1/(2γ)) (x2 − b)²

where b > a. The derivative of V along the trajectories of the system is
given by

V̇ (x) = x1 ẋ1 + (1/γ)(x2 − b)ẋ2 = −x1² (b − a) ≤ 0

Hence, V̇ (x) ≤ 0, and since V (x) is radially unbounded, the set Ωc =
{ x ∈ R² : V (x) ≤ c } is a compact, positively invariant set. Thus, all
conditions of Theorem (1.6.1) are satisfied.
The set E is given by E = { x ∈ Ωc : x1 = 0 }.
Since any point on the line x1 = 0 is an equilibrium point, E is an in-
variant set. Therefore, M = E, and from Theorem (1.6.1) we conclude
that every trajectory starting in Ωc approaches E as t → ∞; that is,
x1 (t) → 0 as t → ∞.
Moreover, since V (x) is radially unbounded, this conclusion is global; that
is, it holds for all initial conditions x (0), because for any x (0) the constant
c can be chosen large enough that x (0) ∈ Ωc .
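A short simulation illustrates the conclusion. The forward-Euler sketch below (Python) uses the illustrative values a = 1, γ = 2, and y(0) = 2, which are assumptions, not from the text; x1 = y is driven to zero while the adaptive gain x2 = k settles at a finite value above a.

```python
# Euler sketch of the adaptive system x1' = -(x2 - a) x1, x2' = gamma * x1^2.
# The values a = 1 and gamma = 2 are illustrative, not from the text.
a, gamma, dt = 1.0, 2.0, 1e-3
x1, x2 = 2.0, 0.0                  # initial state: y(0) = 2, k(0) = 0

for _ in range(30000):             # integrate to t = 30
    x1, x2 = (x1 + dt * (-(x2 - a) * x1),
              x2 + dt * (gamma * x1**2))

# x1 = y is regulated to zero, while the gain x2 = k is nondecreasing
# and settles at a finite value larger than a.
assert abs(x1) < 1e-3
assert x2 > a
```

Note that x2 does not converge to a prescribed value; it only converges to *some* constant, which is exactly why the equilibrium set here is the whole line x1 = 0 rather than a single point.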

1.7 The Partial Stability of Nonlinear Dynamical Systems

Consider the nonlinear autonomous dynamical system

ẋ1 = f1 (x1 (t), x2 (t)) , x1 (t0 ) = x10 , t ≥ t0 (1.37)
ẋ2 = f2 (x1 (t), x2 (t)) , x2 (t0 ) = x20 (1.38)

where x1 ∈ D ⊂ Rn1 with 0 ∈ D, and x2 ∈ Rn2 ; f1 : D × Rn2 → Rn1 is
such that f1 (0, x2 ) = 0 and f1 (·, x2 ) is locally Lipschitz in x1 , and
f2 : D × Rn2 → Rn2 is such that f2 (x1 , ·) is locally Lipschitz in x2 .
Assume that the solution (x1 (t), x2 (t)) to the system (1.37)-(1.38) exists
and is unique.
The following theorem introduces eight types of partial stability, that is,
stability with respect to x1 , for the nonlinear dynamical system (1.37)-(1.38).

Theorem 1.7.1. [4] The nonlinear dynamical system (1.37)-(1.38) is

• Lyapunov stable with respect to x1 if, for every ε > 0 and x20 ∈
Rn2 , there exists δ = δ(ε, x20 ) > 0 such that ||x10 || < δ implies that
||x1 (t)|| < ε for all t ≥ 0 (see Figure 1.21(a)).

Fig. 1.21: (a) Partial Lyapunov stability with respect to x1 . (b) Partial asymptotic
stability with respect to x1 . Here x1 = [y1 y2 ]ᵀ , x2 = z, and x = [x1ᵀ x2 ]ᵀ

• Lyapunov stable with respect to x1 uniformly in x20 , if for
every ε > 0, there exists δ = δ(ε) > 0 such that ||x10 || < δ implies that
||x1 (t)|| < ε for all t ≥ 0 and for all x20 ∈ Rn2 .

• asymptotically stable with respect to x1 , if it is Lyapunov stable
with respect to x1 and, for every x20 ∈ Rn2 , there exists δ = δ(x20 ) > 0
such that ||x10 || < δ implies that lim_{t→∞} x1 (t) = 0 (see Figure 1.21(b)).

• asymptotically stable with respect to x1 uniformly in x20 , if it
is Lyapunov stable with respect to x1 uniformly in x20 and there exists
δ > 0 such that ||x10 || < δ implies that lim_{t→∞} x1 (t) = 0 uniformly in
x10 and x20 for all x20 ∈ Rn2 .

• globally asymptotically stable with respect to x1 , if it is Lya-
punov stable with respect to x1 and lim_{t→∞} x1 (t) = 0 for all x10 ∈ Rn1
and x20 ∈ Rn2 .

• globally asymptotically stable with respect to x1 uniformly in
x20 , if it is Lyapunov stable with respect to x1 uniformly in x20 and
lim_{t→∞} x1 (t) = 0 uniformly in x10 and x20 for all x10 ∈ Rn1 and
x20 ∈ Rn2 .

• exponentially stable with respect to x1 uniformly in x20 , if there
exist scalars α, β, δ > 0 such that ||x10 || < δ implies that ||x1 (t)|| ≤
α||x10 ||e−βt , t ≥ 0, ∀ x20 ∈ Rn2 .

• globally exponentially stable with respect to x1 uniformly in


x20 , if there exist scalars α, β > 0 such that ||x1 (t)|| ≤ α||x10 ||e−βt , t ≥
0, for all x10 ∈ Rn1 and x20 ∈ Rn2 .

The following theorem presents sufficient conditions for partial stability
of the nonlinear dynamical system (1.37)-(1.38) in the sense of Lyapunov
candidate functions. We now assume that:

• V̇ = V ′ (x1 , x2 ) f (x1 , x2 )

• f (x1 , x2 ) := [f1ᵀ (x1 , x2 ) f2ᵀ (x1 , x2 )]ᵀ

• V : D × Rn2 → R

Theorem 1.7.2. Consider the nonlinear dynamical system (1.37)-(1.38).
Then the following statements hold:

• Lyapunov stable with respect to x1 , if there exist a continuously
differentiable function V : D × Rn2 → R and a class K function α(·)
such that

V (0, x2 ) = 0 , x2 ∈ Rn2 (1.39)
α(||x1 ||) ≤ V (x1 , x2 ) , (x1 , x2 ) ∈ D × Rn2 (1.40)
V̇ (x1 , x2 ) ≤ 0 , (x1 , x2 ) ∈ D × Rn2 (1.41)

• Lyapunov stable with respect to x1 uniformly in x20 , if there


exist a continuously differentiable function V : D × Rn2 7→ R and class
K functions α(·), β(·) satisfying (1.40), (1.41), and

V (x1 , x2 ) ≤ β(||x1 ||) , (x1 , x2 ) ∈ D × Rn2 (1.42)

• asymptotically stable with respect to x1 , if there exist continu-


ously differentiable functions V : D × Rn2 7→ R and W : D × Rn2 7→
R and class K functions α(·), β(·), γ(·) such that Ẇ (x1 (·), x2 (·)) is
bounded from below or above, equations (1.39) and (1.40) hold, and

W (0, x2 ) = 0 , x2 ∈ Rn2 (1.43)
β(||x1 ||) ≤ W (x1 , x2 ) , (x1 , x2 ) ∈ D × Rn2 (1.44)
V̇ (x1 , x2 ) ≤ −γ(W (x1 , x2 )) , (x1 , x2 ) ∈ D × Rn2 (1.45)

• asymptotically stable with respect to x1 uniformly in x20 , if


there exist a continuously differentiable function V : D × Rn2 7→ R and
class K functions α(·), β(·), γ(·) satisfying (1.40), (1.42), and
V̇ (x1 , x2 ) ≤ −γ(||x1 ||) , (x1 , x2 ) ∈ D × Rn2 (1.46)

• globally asymptotically stable with respect to x1 , if D = Rn1


and there exist continuously differentiable functions V : Rn1 ×Rn2 7→ R
and W : Rn1 × Rn2 7→ R, class K functions β(·), γ(·) and a class K∞
function α(·) such that Ẇ (x1 (·), x2 (·)) is bounded from below or above,
and (1.39),(1.40), and (1.43) to (1.45) hold
• globally asymptotically stable with respect to x1 uniformly in
x20 , if D = Rn1 and there exist continuously differentiable functions
V : Rn1 × Rn2 7→ R , a class K function γ(·), and class K∞ functions
α(·), β(·) satisfying (1.40), (1.42), and (1.46)
• exponentially stable with respect to x1 uniformly in x20 , if
there exist a continuously differentiable function V : Rn1 × Rn2 7→ R
and positive constants α, β, γ, p ≥ 1 satisfying
α||x1 ||p ≤ V (x1 , x2 ) ≤ β||x1 ||p , (x1 , x2 ) ∈ D × Rn2 (1.47)
V̇ (x1 , x2 ) ≤ −γ||x1 ||p , (x1 , x2 ) ∈ D × Rn2 (1.48)

• globally exponentially stable with respect to x1 uniformly in


x20 , if D = Rn1 and there exist a continuously differentiable function
V : Rn1 × Rn2 7→ R and positive constants α, β, γ, p ≥ 1 satisfying
(1.47) and (1.48).
By setting n1 = n and n2 = 0, Theorem (1.7.2) specializes to the case
of nonlinear autonomous systems of the form ẋ1 (t) = f1 (x1 (t)). In this case,
Lyapunov stability with respect to x1 and Lyapunov stability with respect
to x1 uniformly in x20 are equivalent to the classical Lyapunov stability of
nonlinear autonomous systems.
Remark:
The condition V (0, x2 ) = 0, x2 ∈ Rn2 , allows us to prove partial stability in
the sense of Theorem (1.7.1).

Example 1.7.1. Consider the nonlinear dynamical systems


h i
(M + m)q̈(t) + me θ̈(t) cos θ(t) − θ̇2 (t) sin θ(t) + kq(t) = 0 (1.49)
(I + me2 )θ̈(t) + meq̈(t) cos θ(t) = 0 (1.50)

where t ≥ 0, q(0) = q0 , q̇(0) = q̇0 , θ(0) = θ0 , θ̇(0) = θ̇0 , and M, m, k ≥ 0.
Let x1 = q, x2 = q̇, x3 = θ, x4 = θ̇ and consider the Lyapunov function
candidate

V (x1 , x2 , x3 , x4 ) = (1/2) [ kx1² + (M + m)x2² + (I + me²)x4² + 2me x2 x4 cos x3 ]

which can be written as

V (x1 , x2 , x3 , x4 ) = (1/2) kx1² + (1/2) xᵀ P (x3 )x

where x = [x2 x4 ]ᵀ and

P (x3 ) = [ M + m  me cos x3 ; me cos x3  I + me² ]

Since

λmin (P ) = (1/2) [ M + m + I + me² − √( (M + m − I − me²)² + 4m²e² cos² x3 ) ]

and

λmax (P ) = (1/2) [ M + m + I + me² + √( (M + m − I − me²)² + 4m²e² cos² x3 ) ]

it follows that

(1/2) kx1² + (1/2) λmin (P )(x2² + x4²) ≤ V (x1 , x2 , x3 , x4 ) ≤ (1/2) kx1² + (1/2) λmax (P )(x2² + x4²)

which implies that V (·) satisfies (1.40) and (1.44).
Since

V̇ (x1 , x2 , x3 , x4 ) = 0

it follows from (1.41) that (1.49) and (1.50) are Lyapunov stable with respect
to x1 , x2 and x4 uniformly in x30 .
Furthermore, it follows that the zero solution (q(t), q̇(t), θ(t), θ̇(t)) = (0, 0, 0, 0)
to (1.49) and (1.50) is unstable in the standard sense but partially Lyapunov
stable with respect to q, q̇, and θ̇.

1.7.1 Addressing Partial Stability When Both Initial Conditions


(x10 , x20 ) Lie in a Neighborhood of the Origin
For this result, we modify Theorem (1.7.2) to reflect the fact that the entire
initial state x0 = [x10ᵀ x20ᵀ ]ᵀ lies in a neighborhood of the origin, so
that ||x10 || < δ is replaced by ||x0 || < δ.

Theorem 1.7.3. Consider the nonlinear dynamical system (1.37)-(1.38).
Then the following statements hold:

• Lyapunov stable with respect to x1 if, there exist a continuously
differentiable function V : D × Rn2 → R and a class K function α(·)
such that

V (0, 0) = 0 (1.51)
α(||x1 ||) ≤ V (x1 , x2 ) , (x1 , x2 ) ∈ D × Rn2 (1.52)
V̇ (x1 , x2 ) ≤ 0 , (x1 , x2 ) ∈ D × Rn2 (1.53)

• Lyapunov stable with respect to x1 uniformly in x20 if, there


exist a continuously differentiable function V : D × Rn2 7→ R and class
K functions α(·), β(·) satisfying (1.52),(1.53), and
V (x1 , x2 ) ≤ β(||x||) , (x1 , x2 ) ∈ D × Rn2 (1.54)

where x = [x1ᵀ x2ᵀ ]ᵀ .


• asymptotically stable with respect to x1 if, there exist continu-


ously differentiable functions V : D × Rn2 7→ R and W : D × Rn2 7→
R and class K functions α(·), β(·), γ(·) such that Ẇ (x1 (·), x2 (·)) is
bounded from below or above, (1.52) holds, and
β(||x||) ≤ W (x1 , x2 ) , (x1 , x2 ) ∈ D × Rn2 (1.55)
V̇ (x1 , x2 ) ≤ −γ(W (x1 , x2 )) , (x1 , x2 ) ∈ D × Rn2 (1.56)

• asymptotically stable with respect to x1 uniformly in x20 if,


there exist a continuously differentiable function V : D × Rn2 7→ R and
class K functions α(·), β(·), γ(·) satisfying (1.52) and (1.54), and
V̇ (x1 , x2 ) ≤ −γ(||x||) , (x1 , x2 ) ∈ D × Rn2 (1.57)

• globally asymptotically stable with respect to x1 if, D = Rn1


and there exist continuously differentiable functions V : Rn1 ×Rn2 7→ R
and W : Rn1 × Rn2 7→ R, class K functions β(·), γ(·) and a class K∞
function α(·) such that Ẇ (x1 (·), x2 (·)) is bounded from below or above,
and (1.52), (1.55) and (1.56) hold
• globally asymptotically stable with respect to x1 uniformly in
x20 if, D = Rn1 and there exist continuously differentiable functions
V : Rn1 × Rn2 → R, a class K function γ(·), and class K∞ functions
α(·), β(·) satisfying (1.52), (1.54), and (1.57)

• exponentially stable with respect to x1 uniformly in x20 if, there


exist a continuously differentiable function V : Rn1 × Rn2 7→ R and
positive constants α, β, γ, p ≥ 1 satisfying

α||x1 ||p ≤ V (x1 , x2 ) ≤ β||x||p , (x1 , x2 ) ∈ D × Rn2 (1.58)
V̇ (x1 , x2 ) ≤ −γ||x||p , (x1 , x2 ) ∈ D × Rn2 (1.59)

• globally exponentially stable with respect to x1 uniformly in


x20 if, D = Rn1 and there exist a continuously differentiable function
V : Rn1 × Rn2 7→ R and positive constants α, β, γ, p ≥ 1 satisfying
(1.58) and (1.59).

1.8 Nonautonomous Systems

Consider the nonautonomous system

ẋ = f (t, x) (1.60)

where f : [0, ∞) × D → Rn is piecewise continuous in t and locally
Lipschitz in x on [0, ∞) × D, and D ⊂ Rn is a domain that contains the
origin x = 0.
The origin is an equilibrium point for (1.60) at t = 0 if

f (t, 0) = 0 , ∀ t ≥ 0

An equilibrium at the origin could be a translation of a nonzero equilibrium


point.
To see this, suppose that ȳ(τ ) is a solution of the system

dy/dτ = g(τ, y)

defined for all τ ≥ a. The change of variables

x = y − ȳ(τ ) ; t = τ − a

transforms the system into the form

ẋ = g(τ, y) − ȳ̇(τ ) = g(t + a, x + ȳ(t + a)) − ȳ̇(t + a) := f (t, x)

Since

ȳ̇(t + a) = g(t + a, ȳ(t + a)) , t ≥ 0

the origin is an equilibrium point of the transformed system at t = 0.


Thus, by examining stability behaviour of the origin as an equilibrium point
for the transformed system, we determine the stability behaviour of the
solution ȳ(τ ) of the original system.
The stability and asymptotic stability of the equilibrium point of a
nonautonomous system are defined essentially as in Theorem (1.4.1) for
autonomous systems.
Note that, while the solution of an autonomous system depends only on
(t − t0 ), the solution of a nonautonomous system may depend on both t
and t0 .
The origin x = 0 is a stable equilibrium point for (1.60) if for each ε > 0
and any t0 ≥ 0 there is δ = δ(ε, t0 ) > 0 such that

||x(t0 )|| ≤ δ ⇒ ||x(t)|| ≤ ε , ∀ t ≥ t0

where the constant δ may depend on both ε and the initial time t0 .


Example 1.8.1. The linear first-order system

ẋ = − x/(1 + t)

has the solution

x(t) = x(t0 ) e^{ ∫_{t0}^{t} −1/(1+τ ) dτ } = x(t0 ) (1 + t0 )/(1 + t)

Since

||x(t)|| ≤ ||x(t0 )|| , ∀ t ≥ t0

the origin is stable. Moreover,

x(t) → 0 as t → ∞

hence, the origin is asymptotically stable.
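A quick numerical cross-check of this example (Python, forward Euler; the choices t0 = 0 and x(t0) = 1 are illustrative assumptions):

```python
# Numerical check of x' = -x/(1+t) against x(t) = x(t0) * (1 + t0) / (1 + t).
t0, x0, dt = 0.0, 1.0, 1e-4
t, x = t0, x0
while t < 9.0:
    x += dt * (-x / (1.0 + t))
    t += dt

closed_form = x0 * (1.0 + t0) / (1.0 + t)
assert abs(x - closed_form) < 1e-3       # matches the closed-form solution
assert abs(x) <= abs(x0)                 # stability: |x(t)| <= |x(t0)|
```

The decay here is only algebraic (like 1/(1 + t)), not exponential, which is consistent with the origin being asymptotically but not exponentially stable.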


Example 1.8.2. Consider the perturbed linear system

ẋ(t) = [A + B(t)] x(t) , x(0) = x0

where A is a constant matrix and B(t) is continuous. By the variation of
parameters technique, we obtain the solution

x(t) = Φ(t)x0 + ∫₀^{t} Φ(t − τ )B(τ )x(τ ) dτ

where Φ(t) = e^{At}. Assume that all the eigenvalues of A have negative
real parts; then there exist α, β > 0 such that ||Φ(t)|| ≤ βe^{−αt} for all
t ≥ 0. Hence,

||x(t)|| ≤ ||Φ(t)|| ||x0 || + ∫₀^{t} ||Φ(t − τ )|| ||B(τ )|| ||x(τ )|| dτ
        ≤ βe^{−αt} ||x0 || + ∫₀^{t} βe^{−α(t−τ )} ||B(τ )|| ||x(τ )|| dτ

Multiplying both sides by e^{αt},

||x(t)|| e^{αt} ≤ β||x0 || + ∫₀^{t} β||B(τ )|| ( ||x(τ )|| e^{ατ} ) dτ

Applying Gronwall's inequality to ||x(t)|| e^{αt},

||x(t)|| e^{αt} ≤ β||x0 || e^{ β ∫₀^{t} ||B(τ )|| dτ } ≤ β||x0 || e^{ β ∫₀^{∞} ||B(τ )|| dτ }

Thus,

||x(t)|| ≤ β||x0 || e^{ β ∫₀^{∞} ||B(τ )|| dτ } e^{−αt}

We conclude that if the integral ∫₀^{∞} ||B(τ )|| dτ is finite and all
eigenvalues of A have negative real parts, then

• all solutions are bounded, and hence stable;
• lim_{t→∞} ||x(t)|| = 0, since α > 0 (all solutions are asymptotically stable).

1.8.1 The Unification between Time-Invariant Stability Theory
and Stability Theory for Time-Varying Systems through
Partial Stability Theory

Partial stability theory provides a unification between time-invariant stability
theory and stability theory for time-varying systems.
Consider the time-varying nonlinear dynamical system

ẋ(t) = f (t, x(t)) , x(t0 ) = x0 , t ≥ t0

where x(t) ∈ Rn , t ≥ t0 , and f : [t0 , t1 ) × Rn → Rn .
Set x1 (τ ) ≡ x(t) and x2 (τ ) ≡ t, where τ = t − t0 . Then the solution
x(t), t ≥ t0 , to the nonlinear dynamical system above can be equivalently
characterized by the solution x1 (τ ), τ ≥ 0, to the nonlinear autonomous
dynamical system

ẋ1 (τ ) = f (x2 (τ ), x1 (τ )) , x1 (0) = x0 , τ ≥ 0 (1.61)
ẋ2 (τ ) = 1 , x2 (0) = t0 (1.62)

where ẋ1 (·) and ẋ2 (·) denote differentiation with respect to τ .
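The augmentation (1.61)-(1.62) is easy to exercise numerically: time is simply carried as an extra state with unit rate. The Python sketch below uses the scalar system ẋ = −x/(1 + t) from Example (1.8.1) as an assumed test case and integrates the augmented autonomous system.

```python
import numpy as np

# Treat time as an extra state: z = (x1, x2) with x2' = 1, x2(0) = t0.
def f(t, x):                 # original time-varying right-hand side
    return -x / (1.0 + t)

def F(z):                    # autonomous augmented right-hand side
    x1, x2 = z
    return np.array([f(x2, x1), 1.0])

z = np.array([1.0, 0.0])     # x(0) = 1, t0 = 0 (illustrative choices)
dt = 1e-4
for _ in range(50000):       # integrate to tau = 5
    z = z + dt * F(z)

x1, x2 = z
assert abs(x2 - 5.0) < 1e-6                    # x2 tracks t = t0 + tau
assert abs(x1 - 1.0 / (1.0 + x2)) < 1e-3       # matches x(t) = (1+t0)/(1+t)
```

The augmented system is autonomous, so all of the machinery for autonomous systems (in particular the partial stability theory above, with stability taken with respect to x1) applies directly.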
Example 1.8.3. Consider the linear time-varying dynamical system

f (t, x) = A(t)x

where A : [0, ∞) → Rn×n is continuous, so we are given

ẋ(t) = A(t)x(t) , x(t0 ) = x0 , t ≥ t0

Next, let n1 = n , n2 = 1, x1 (τ ) = x(t) , x2 (τ ) = t, f1 (x1 (t), x2 (t)) =
A(t)x(t), and f2 (x1 (t), x2 (t)) = 1.
Hence, the solution x(t), t ≥ t0 , to the dynamical system above can be
equivalently characterized by the solution x1 (τ ), τ ≥ 0, to the nonlinear
autonomous dynamical system

ẋ1 (τ ) = A(x2 (τ )) x1 (τ ) , x1 (0) = x0 , τ ≥ 0 (1.63)


ẋ2 (τ ) = 1 , x2 (0) = t0 (1.64)

where ẋ1 (·) and ẋ2 (·) denote differentiation with respect to τ .
Example 1.8.4. Consider the spring-mass-damper system with time-varying
damping coefficient

q̈(t) + c(t)q̇(t) + kq(t) = 0 , q(0) = q0 , q̇(0) = q̇0 , t ≥ 0

Assume that c(t) = 3 + sin t and k > 1.
Using Theorem (1.7.2) and setting z1 = q and z2 = q̇, the dynamical system
can be equivalently written as

ż1 (t) = z2 (t) , z1 (0) = q0 , t ≥ 0 (1.65)
ż2 (t) = −kz1 (t) − c(t)z2 (t) , z2 (0) = q̇0 (1.66)

Let n1 = 2, n2 = 1, x1 = [z1 z2 ]ᵀ , x2 = t, f1 (x1 , x2 ) = [x1ᵀ v  −x1ᵀ h(x2 )]ᵀ ,
and f2 (x1 , x2 ) = 1, where h(x2 ) = [k  c(x2 )]ᵀ and v = [0  1]ᵀ .



Now, the solution (z1 (t), z2 (t)), t ≥ 0, to the nonlinear time-varying


dynamical system (1.65) and (1.66), is equivalently characterized by the so-
lution x1 (τ ), τ ≥ 0, to the nonlinear autonomous dynamical system :

ẋ1 (τ ) = f1 (x1 (τ ), x2 (τ )) , x1 (0) = [q0 q̇0 ]T , τ ≥ 0 (1.67)


ẋ2 (τ ) = 1 , x2 (0) = 0 (1.68)

where ẋ1 (τ ) and ẋ2 (τ ) denote differentiation with respect to τ .


To observe the stability of this system, consider the Lyapunov function
candidate

V (x1 , x2 ) = x1ᵀ P (x2 )x1 , where P (x2 ) = [ k + 3 + sin(x2 )  1 ; 1  1 ]

Since

x1ᵀ P1 x1 ≤ V (x1 , x2 ) ≤ x1ᵀ P2 x1 , (x1 , x2 ) ∈ R² × R

where

P1 = [ k + 2  1 ; 1  1 ] , P2 = [ k + 4  1 ; 1  1 ]

it follows that V (x1 , x2 ) satisfies (1.47) with D = R2 and p = 2.


Next, since

V̇ (x1 , x2 ) = −2x1ᵀ [R + R1 (x2 )] x1
            ≤ −2x1ᵀ Rx1
            ≤ − min {k − 1, 1} ||x1 ||₂²

where

R = [ k − 1  0 ; 0  1 ] > 0

and

R1 (x2 ) = [ 1 − (1/2) cos(x2 )  0 ; 0  1 + sin(x2 ) ]
it follows from Theorem (1.7.2) that the dynamical system is globally expo-
nentially stable with respect to x1 uniformly in x20 .
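The exponential decay asserted by this analysis can be observed in simulation. The forward-Euler sketch below (Python) uses the illustrative choice k = 2 (any k > 1 works), which is an assumption, not from the text.

```python
import numpy as np

# Euler sketch of z1' = z2, z2' = -k z1 - (3 + sin t) z2 with k = 2
# (k > 1 as required; k = 2 is an illustrative choice).
k, dt = 2.0, 1e-3
t, z1, z2 = 0.0, 1.0, 0.0

for _ in range(20000):       # integrate to t = 20
    z1, z2 = (z1 + dt * z2,
              z2 + dt * (-k * z1 - (3.0 + np.sin(t)) * z2))
    t += dt

# Exponential stability with respect to x1 = (z1, z2): the state decays.
assert np.hypot(z1, z2) < 1e-2
```

Note that the decay rate is uniform in the initial time, reflecting the "uniformly in x20" part of the conclusion: the same Lyapunov bounds hold no matter where on the sin t cycle the motion starts.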

1.8.2 Rephrasing the Partial Stability Theory for Nonlinear
Time-Varying Systems
Consider the nonlinear time-varying dynamical system

ẋ(t) = f (t, x(t)) , x(t0 ) = x0 , t ≥ t0 (1.69)

where x(t) ∈ D, D ⊂ Rn with 0 ∈ D. Also, f : [t0 , t1 ) × D → Rn is such
that f (·, ·) is continuous in t and x, f (t, 0) = 0 for all t ∈ [t0 , t1 ), and
f (t, ·) is locally Lipschitz in x uniformly in t for all t in compact subsets
of [0, ∞).

Theorem 1.8.1. The nonlinear time-varying dynamical system (1.69) is:

• Stable if, ∀ ε > 0 and t0 ∈ [0, ∞), ∃ δ = δ(ε, t0 ) > 0 such that ||x0 || <
δ implies that ||x(t)|| < ε , ∀ t ≥ t0 .

• Uniformly stable if, ∀ ε > 0, ∃ δ = δ(ε) > 0 such that ||x0 || < δ
implies that ||x(t)|| < ε , ∀ t ≥ t0 and t0 ∈ [0, ∞)

• Asymptotically stable if, it is stable and ∀ t0 ∈ [0, ∞), ∃ δ =
δ(t0 ) > 0 such that ||x0 || < δ implies that lim_{t→∞} x(t) = 0.
t7→∞

• Uniformly asymptotically stable if, it is uniformly stable and ∃ δ >
0 such that ||x0 || < δ implies that lim_{t→∞} x(t) = 0 uniformly in x0 and
uniformly in t0 ; that is, ∀ ε > 0 , ∃ T = T (ε) > 0 such that

||x(t)|| < ε, ∀ t ≥ t0 + T (ε), and ∀ ||x(t0 )|| < δ , ∀ t0 ∈ [0, ∞)

• Globally asymptotically stable if, it is Lyapunov stable and
lim_{t→∞} x(t) = 0 , ∀ x0 ∈ Rn and t0 ∈ [0, ∞).

• Globally uniformly asymptotically stable if, it is uniformly stable
and lim_{t→∞} x(t) = 0 uniformly in x0 , ∀ x0 ∈ Rn , and uniformly in
t0 ; that is, ∀ ε > 0 , δ > 0 , ∃ T = T (ε, δ) > 0 such that

||x(t)|| < ε, ∀ t ≥ t0 + T (ε, δ), and ∀ ||x(t0 )|| < δ , t0 ∈ [0, ∞)

• Exponentially stable if, ∃ α, β, δ > 0 such that ||x0 || < δ implies
that ||x(t)|| ≤ α||x0 || e^{−βt} , ∀ t ≥ t0 and t0 ∈ [0, ∞).

• Globally (uniformly) exponentially stable if, ∃ α, β > 0 such that
||x(t)|| ≤ α||x0 || e^{−βt} , t ≥ t0 , ∀ x0 ∈ Rn and t0 ∈ [0, ∞).

Example 1.8.5. The linear first-order system

ẋ = (6t sin t − 2t)x

has the solution

x(t) = x(t0 ) e^{ ∫_{t0}^{t} (6τ sin τ − 2τ ) dτ }
     = x(t0 ) e^{ (6 sin t − 6t cos t − t² − 6 sin t0 + 6t0 cos t0 + t0²) } (1.70)

For any fixed t0 and for all t ≥ t0 , the term −t² in (1.70) eventually
dominates, so the exponential term is bounded by a constant c(t0 ) that
depends on t0 . Hence,

|x(t)| ≤ |x(t0 )| c(t0 ) , ∀ t ≥ t0

So, for any ε > 0 there exists δ = ε/c(t0 ) such that |x0 | < δ implies
|x(t)| < ε , ∀ t ≥ t0 . Hence, the origin is stable.

Uniform stability and uniform asymptotic stability can be char-
acterized in terms of special scalar comparison functions, known as class K
and class KL functions.

Definition 1.8.1. A continuous function α : [0, a) → [0, ∞) is said to
belong to class K if it is

• strictly increasing

• α(0) = 0

and is said to belong to class K∞ if, in addition,

• a = ∞ ; that is, α : [0, ∞) → [0, ∞)

• α(r) → ∞ as r → ∞.

Definition 1.8.2. A continuous function β : [0, a) × [0, ∞) → [0, ∞)
is said to belong to class KL if

• for each fixed s, the mapping β(r, s) belongs to class K with respect to
r

• for each fixed r, the mapping β(r, s) is decreasing with respect to s

• β(r, s) → 0 as s → ∞

Example 1.8.6.

• α(r) = tan⁻¹ r is strictly increasing, since α′ (r) = 1/(1 + r²) > 0. It
belongs to class K but not to class K∞ , since lim_{r→∞} α(r) = π/2 < ∞.

• α(r) = r^c , for any positive real number c, is strictly increasing, since
α′ (r) = cr^{c−1} > 0. In addition, lim_{r→∞} α(r) = ∞; thus, it belongs
to class K∞ .

• β(r, s) = r/(ksr + 1) , for any positive real number k, is strictly
increasing in r, since ∂β/∂r = 1/(ksr + 1)² > 0, and strictly decreasing
in s, since ∂β/∂s = −kr²/(ksr + 1)² < 0. In addition, β(r, s) → 0 as
s → ∞. Hence, it belongs to class KL.

• β(r, s) = r^c e^{−s} , for any positive real number c, belongs to class KL.
Theorem 1.8.2. Consider the time-varying dynamical system (1.69). Then
the following statements hold:

• The system is stable if, there exist a continuously differentiable function
V : [0, ∞) × D → R and a class K function α(·) such that

V (t, 0) = 0 , t ∈ [0, ∞) (1.71)
α(||x||) ≤ V (t, x) , (t, x) ∈ [0, ∞) × D (1.72)
V̇ (t, x) ≤ 0 , (t, x) ∈ [0, ∞) × D (1.73)

• The system is uniformly stable if, there exist a continuously differen-
tiable function V : [0, ∞) × D → R and class K functions α(·), β(·)
such that

V (t, x) ≤ β(||x||) , (t, x) ∈ [0, ∞) × D (1.74)

and inequalities (1.72) and (1.73) are satisfied

• The system is asymptotically stable if, there exist continuously dif-
ferentiable functions V : [0, ∞) × D → R and W : [0, ∞) × D → R and
class K functions α(·), β(·) and γ(·) such that

(A) Ẇ (·, x(·)) is bounded from below or above
(B) Conditions (1.71) and (1.72) hold
(C)

W (t, 0) = 0 , t ∈ [0, ∞) (1.75)
β(||x||) ≤ W (t, x) , (t, x) ∈ [0, ∞) × D (1.76)
V̇ (t, x) ≤ −γ (W (t, x)) , (t, x) ∈ [0, ∞) × D (1.77)

• The system is uniformly asymptotically stable if, there exist a
continuously differentiable function V : [0, ∞) × D → R and class
K functions α(·), β(·) and γ(·) satisfying (1.72), (1.74), and

V̇ (t, x) ≤ −γ(||x||) , (t, x) ∈ [0, ∞) × D (1.78)

• The system is globally asymptotically stable if, D = Rn and there
exist continuously differentiable functions V : [0, ∞) × D → R and
W : [0, ∞) × D → R, class K functions β(·), γ(·) and a class
K∞ function α(·) such that

(A) Ẇ (·, x(·)) is bounded from below or above
(B) Conditions (1.71), (1.72) and (1.75) to (1.77) hold

• The system is globally uniformly asymptotically stable if, D =
Rn and there exist a continuously differentiable function V : [0, ∞) ×
D → R, a class K function γ(·) and class K∞ functions α(·), β(·)
satisfying (1.72), (1.74), and (1.78)

• The system is (uniformly) exponentially stable if, there exist a
continuously differentiable function V : [0, ∞) × D → R and positive
constants α, β, γ, p with p ≥ 1 such that

α||x||p ≤ V (t, x) ≤ β||x||p , (t, x) ∈ [0, ∞) × D (1.79)
V̇ (t, x) ≤ −γ||x||p , (t, x) ∈ [0, ∞) × D (1.80)

• The system is globally (uniformly) exponentially stable if, D =
Rn and there exist a continuously differentiable function V : [0, ∞) ×
D → R and positive constants α, β, γ, p with p ≥ 1 satisfying (1.79)
and (1.80).

Proof. Let n1 = n and n2 = 1.
Setting x1 (τ ) = x(t) and x2 (τ ) = t, where τ = t − t0 ≥ 0,
let f1 (x1 , x2 ) = f (t, x) = f (x2 , x1 ),
and let f2 (x1 , x2 ) = 1.
The solution x(t), t ≥ t0 , to the nonlinear time-varying dynamical system
(1.69) is then equivalently characterized by the solution x1 (τ ), τ ≥ 0, to the
nonlinear autonomous dynamical system

ẋ1 = f1 (x1 (τ ), x2 (τ )) , x1 (0) = x0 , τ ≥ 0, (1.81)


ẋ2 = 1 , x2 (0) = t0 (1.82)

where ẋ1 (·) and ẋ2 (·) denote differentiation with respect to τ . Note that
since f (t, 0) = 0, t ≥ 0, it follows that f1 (0, x2 ) = 0, ∀ x2 . The result is
then a direct consequence of Theorem (1.7.2).
Lemma 1.8.1. The equilibrium point x = 0 of equation (1.69) is

• Uniformly stable if and only if there exist a class K function α(·)
and a positive constant δ, independent of t0 , such that

||x(t)|| ≤ α(||x(t0 )||) , ∀ t ≥ t0 ≥ 0, ∀ ||x(t0 )|| < δ (1.83)

• Uniformly asymptotically stable if and only if there exist a class
KL function β(r, s) and a positive constant δ, independent of t0 , such
that

||x(t)|| ≤ β(||x(t0 )||, t − t0 ) , ∀ t ≥ t0 ≥ 0 , ∀ ||x(t0 )|| < δ (1.84)

• Globally uniformly asymptotically stable if and only if inequality
(1.84) is satisfied for any initial state x(t0 ).
Remark:
A special case of uniform asymptotic stability arises when the class KL
function β in (1.84) takes the form

β(r, s) = kr e^{−γs}

This case is very important and will be designated as a distinct stability
property of equilibrium points.

Theorem 1.8.3. The equilibrium point x = 0 of (1.69) is exponentially
stable if inequality (1.84) is satisfied with

β(r, s) = kr e^{−γs} , k > 0 , γ > 0

and is globally exponentially stable if this condition is satisfied for any
initial state.

So, to establish uniform asymptotic stability of the origin, we need to
verify inequality (1.84).
Lemma 1.8.2. Let V (x) : D → R be a continuous positive definite function
defined on a domain D ⊂ Rn that contains the origin. Let Br ⊂ D for some
r > 0. Then there exist class K functions α1 , α2 , defined on [0, r], such that

α1 (||x||) ≤ V (x) ≤ α2 (||x||) , ∀ x ∈ Br

Moreover, if D = Rn and V (x) is radially unbounded, then α1 and α2 can
be chosen to belong to class K∞ and the foregoing inequality holds for all
x ∈ Rn .

Remark:
For the quadratic positive definite function V (x) = xᵀ P x,
the inequality of Lemma (1.8.2) becomes

λmin (P )||x||₂² ≤ xᵀ P x ≤ λmax (P )||x||₂²

Theorem 1.8.4. Let x = 0 be an equilibrium point for (1.69) and let D ⊂ Rn
be a domain containing x = 0. Let V : [0, ∞) × D → R be a continuously
differentiable function such that

W1 (x) ≤ V (t, x) ≤ W2 (x) (1.85)

∂V/∂t + (∂V/∂x) f (t, x) ≤ −W3 (x) (1.86)

∀ t ≥ 0, ∀ x ∈ D, where W1 (x), W2 (x) and W3 (x) are continuous positive
definite functions on D. Then, x = 0 is uniformly asymptotically stable.

A function V (t, x) satisfying the left inequality of (1.85) is said to be
positive definite. A function satisfying the right inequality of (1.85) is
said to be decrescent.
A function V (t, x) is said to be negative definite if −V (t, x) is positive
definite.
Therefore, Theorem (1.8.4) states that the origin is uniformly asymp-
totically stable if there is a continuously differentiable, positive definite,
decrescent function V (t, x) whose derivative along the trajectories of the
system is negative definite. In this case, V (t, x) is called a Lyapunov
function.

Corollary 1.8.1. Suppose that all the assumptions of Theorem (1.8.4) are
satisfied globally (∀ x ∈ Rn ) and W1 (x) is radially unbounded .Then,
x = 0 is globally uniformly asymptotically stable.

Example 1.8.7. Consider the system

ẋ1 = −x1 − g(t)x2
ẋ2 = x1 − x2

where g(t) is continuously differentiable and satisfies

0 ≤ g(t) ≤ k , ġ(t) ≤ g(t) , ∀ t ≥ 0


taking V (t, x) = x21 + [1 + g(t)]x22 as a Lyapunov function candidate. Also,


we see that

x21 + x22 ≤ V (t, x) ≤ x21 + (1 + k)x22 , ∀x ∈ R2

Hence, V (t, x) is positive definite ,decrescent ,and radially unbounded .


The derivative of V (t, x) along the trajectories of the system is given by

V̇ (t, x) = −2x21 + 2x1 x2 − [2 + 2g(t) − ġ(t)]x22

Using the inequality

2 + 2g(t) − ġ(t) ≥ 2 + 2g(t) − g(t) ≥ 2

we obtain

V̇(t, x) ≤ −2x1² + 2x1x2 − 2x2² = −[x1 x2] [ 2  −1 ; −1  2 ] [x1 ; x2] =: −xᵀPx

where the matrix P = [ 2  −1 ; −1  2 ] is positive definite, hence V̇(t, x) is negative definite. Thus, all the assumptions of Theorem (1.8.4) are satisfied globally with quadratic positive definite functions W1(x), W2(x) and W3(x), so the origin is globally exponentially stable.
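As a numerical sanity check (a Python sketch, not part of the thesis; the constant choice g(t) ≡ k and the initial state are illustrative assumptions satisfying 0 ≤ g ≤ k and ġ ≤ g), we can verify that the matrix P above is positive definite and that V decays along a simulated trajectory:

```python
# Numerical complement to Example 1.8.7 (a sketch; g(t) ≡ k is an assumption).
# (i) check P = [[2,-1],[-1,2]] > 0; (ii) check V decays under forward Euler.
import math

# (i) eigenvalues of the symmetric 2x2 matrix P via the quadratic formula
tr, det = 2.0 + 2.0, 2.0 * 2.0 - (-1.0) * (-1.0)
disc = math.sqrt(tr * tr - 4.0 * det)
eig_min, eig_max = (tr - disc) / 2.0, (tr + disc) / 2.0
assert eig_min > 0.0            # P > 0, so -x'Px is negative definite

# (ii) forward-Euler simulation of x1' = -x1 - k*x2, x2' = x1 - x2
k, dt = 0.5, 1e-3
x1, x2 = 1.0, -2.0
V = lambda x1, x2: x1 ** 2 + (1.0 + k) * x2 ** 2
V_prev = V(x1, x2)
for _ in range(int(10.0 / dt)):
    x1, x2 = x1 + dt * (-x1 - k * x2), x2 + dt * (x1 - x2)
    assert V(x1, x2) <= V_prev + 1e-12     # V never increases
    V_prev = V(x1, x2)
assert V(x1, x2) < 1e-5                    # decayed far below V(0) = 7
```

The eigenvalues of P are 1 and 3, so V̇ ≤ −xᵀPx ≤ −||x||², matching the analysis above.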

Example 1.8.8. The linear time-varying system

ẋ = A(t) x(t) (1.87)

has an equilibrium point at x = 0.


Let A(t) : [0, ∞) 7→ Rn×n be continuous ∀ t ≥ 0.
To examine the stability, consider a Lyapunov function candidate

V(t, x) = xᵀP(t)x

where P(t) : [0, ∞) 7→ Rn×n is a continuously differentiable, symmetric, bounded, positive definite matrix; that is, P(t) > 0 ∀ t ≥ 0 and

0 < αI ≤ P(t) ≤ βI , ∀ t ≥ 0    (1.88)

We also require P(t) to satisfy the matrix differential equation

− Ṗ(t) = P(t)A(t) + Aᵀ(t)P(t) + Q(t)    (1.89)



where Q(·) is continuous and satisfies

Q(t) ≥ γI > 0 , ∀ t ≥ 0

Hence, the function V(t, x) is positive definite, decrescent and radially unbounded, since

α||x||₂² ≤ V(t, x) ≤ β||x||₂²

The derivative of V(t, x) along the trajectories of the system (1.87) is given by

V̇(t, x) = xᵀṖ(t)x + xᵀP(t)ẋ + ẋᵀP(t)x
        = xᵀ[Ṗ(t) + P(t)A(t) + Aᵀ(t)P(t)]x = −xᵀQ(t)x ≤ −γ||x||₂²

Hence, V̇(t, x) is negative definite and all assumptions of Theorem (1.8.2) are satisfied globally with p = 2. Therefore, the origin is globally exponentially stable.
Hence, a sufficient condition for global exponential stability of a linear time-varying system is the existence of a continuously differentiable, bounded, positive definite matrix function P : [0, ∞) 7→ Rn×n satisfying (1.89).
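In the special case where A is a constant matrix, Ṗ ≡ 0 and (1.89) reduces to the algebraic Lyapunov equation AᵀP + PA = −Q. The following sketch (with an assumed diagonal Hurwitz A, chosen so the equation decouples entrywise) verifies this reduction numerically:

```python
# Sketch (not from the thesis): for a constant A the matrix differential
# equation (1.89) reduces to the algebraic Lyapunov equation AᵀP + PA = -Q.
# For diagonal A = diag(a1, a2) and Q = I the solution is P = diag(-1/(2a1), -1/(2a2)).
A = [[-1.0, 0.0], [0.0, -2.0]]          # Hurwitz, diagonal (an assumption)
Q = [[1.0, 0.0], [0.0, 1.0]]            # Q = I > 0
P = [[-1.0 / (2.0 * A[0][0]), 0.0], [0.0, -1.0 / (2.0 * A[1][1])]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# the residual AᵀP + PA + Q must vanish, and P must be positive definite
AtP, PA = matmul(transpose(A), P), matmul(P, A)
residual = [[AtP[i][j] + PA[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
assert all(abs(residual[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert P[0][0] > 0 and P[1][1] > 0
```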

Example 1.8.9. Consider the linear time-varying dynamical system

ẋ1 = −x1 − e^{−t} x2 , x1(0) = x10 , t ≥ 0    (1.90)
ẋ2 = x1 − x2 , x2(0) = x20    (1.91)

To examine stability of this system, consider the Lyapunov function candi-


date
V (t, x) = x21 + (1 + e−t )x22
Since,

x21 + x22 ≤ V (t, x) ≤ x21 + 2x22 , (x1 , x2 ) ∈ R × R , t ≥ 0

it follows that V(t, x) is positive definite, radially unbounded and satisfies (1.72) and (1.74). Since

V̇(t, x) = −2x1² + 2x1x2 − 2x2² − 3e^{−t} x2²    (1.92)
        ≤ −2x1² + 2x1x2 − 2x2²    (1.93)
        = −xᵀRx    (1.94)
        ≤ −λmin(R) ||x||₂²    (1.95)

where  
2 −1
R= >0
−1 2
and x = [x1 x2]ᵀ, it follows from Theorem 1.8.2 with p = 2 that the zero solution (x1(t), x2(t)) ≡ (0, 0) of the dynamical system is globally exponentially stable.
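A forward-Euler simulation (a sketch with assumed step size and initial conditions, not part of the thesis) confirms that V(t, x) decays along the trajectories of (1.90)-(1.91):

```python
# Forward-Euler check of Example 1.8.9 (a sketch; dt, T and x(0) are assumed):
# integrate x1' = -x1 - e^{-t} x2, x2' = x1 - x2 and monitor
# V(t, x) = x1^2 + (1 + e^{-t}) x2^2, which the analysis says must decay.
import math

def V(t, x1, x2):
    return x1 ** 2 + (1.0 + math.exp(-t)) * x2 ** 2

dt, T = 1e-3, 10.0
t, x1, x2 = 0.0, 2.0, -1.0
V0 = V(t, x1, x2)
V_prev = V0
while t < T:
    x1, x2 = x1 + dt * (-x1 - math.exp(-t) * x2), x2 + dt * (x1 - x2)
    t += dt
    assert V(t, x1, x2) <= V_prev + 1e-12    # V is non-increasing
    V_prev = V(t, x1, x2)

assert V(t, x1, x2) < 1e-4 * V0              # exponential decay
```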

1.9 Stability Theorems for Linear Time-Varying Systems and Linearization

Consider the linear time-varying system


ẋ(t) = A(t)x (1.96)
We know from linear system theory that the solution of (1.96) is given by

x(t) = Φ(t, t0) x(t0)

where Φ(t, t0) is called the state transition matrix.

Theorem 1.9.1. [1] The equilibrium point x = 0 of (1.96) is globally uniformly asymptotically stable if and only if the state transition matrix satisfies the inequality

||Φ(t, t0)|| ≤ k e^{−γ(t−t0)} , ∀ t ≥ t0 ≥ 0    (1.97)

for some positive constants k and γ.
Theorem (1.9.1) shows that, for linear systems, uniform asymptotic stability of the origin is equivalent to exponential stability.
We have seen in Example (1.8.8) that, if we can find a positive definite,
bounded matrix P (t) which satisfies the differential equation (1.89) for some
positive definite Q(t) then
V (t, x) = xT P x
is a Lyapunov function candidate for the system.
If the matrix Q(t) is chosen to be bounded in addition to being positive definite; that is,

0 < c3 I ≤ Q(t) ≤ c4 I , ∀ t ≥ 0
and if A(t) is continuous and bounded, then it can be shown that when the
origin is uniformly asymptotically stable, there is a solution of (1.89) that
possesses the desired properties.

Theorem 1.9.2. Let x = 0 be the uniformly asymptotically stable equilibrium of (1.96). Suppose A(t) is continuous and bounded. Let Q(t) be a continuous, bounded, positive definite, symmetric matrix. Then, there is a continuously differentiable, bounded, positive definite, symmetric matrix P(t) which satisfies (1.89). Hence, V(t, x) = xᵀP(t)x is a Lyapunov function for the system that satisfies the conditions of Theorem (1.8.4).
Example 1.9.1. Consider the second order linear system (1.96) with

A(t) = [ −1 + 1.5 cos² t      1 − 1.5 sin t cos t ;
         −1 − 1.5 sin t cos t    −1 + 1.5 sin² t ]

The eigenvalues of A(t) are −0.25 ± i 0.25√7; they are independent of t and lie in the open left-half plane. Nevertheless, the system is not uniformly asymptotically stable: for time-varying systems, stability cannot be characterized by the location of the eigenvalues of the matrix A(t). To verify that the origin is unstable, note that the state transition matrix

Φ(t, 0) = [ e^{0.5t} cos t    e^{−t} sin t ;
            −e^{0.5t} sin t    e^{−t} cos t ]    (1.98)

shows that there are initial states that make the solution unbounded as t → ∞.
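This behaviour can be confirmed numerically (a sketch; the sample times and the initial state are illustrative): the eigenvalues of A(t) have constant real part −0.25, yet the solution through x(0) = (1, 0)ᵀ, the first column of Φ(t, 0), grows without bound:

```python
# Numerical illustration of Example 1.9.1 (a sketch; sample times are assumed).
import math, cmath

def A(t):
    s, c = math.sin(t), math.cos(t)
    return [[-1.0 + 1.5 * c * c, 1.0 - 1.5 * s * c],
            [-1.0 - 1.5 * s * c, -1.0 + 1.5 * s * s]]

def eig2(M):
    # eigenvalues of a 2x2 matrix via the quadratic formula
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    rt = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + rt) / 2.0, (tr - rt) / 2.0

# the eigenvalues of A(t) have real part -0.25 for every sampled t ...
for t in (0.0, 0.7, 1.3, 2.9, 10.0):
    for lam in eig2(A(t)):
        assert abs(lam.real + 0.25) < 1e-9

# ... yet the solution through x(0) = (1, 0)ᵀ has norm e^{0.5 t}, unbounded
def sol_norm(t):
    return math.hypot(math.exp(0.5 * t) * math.cos(t),
                      -math.exp(0.5 * t) * math.sin(t))

assert sol_norm(20.0) > 1e4 * sol_norm(0.0)
```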
Remark:
It is important to note that it is not always necessary to construct a time-varying Lyapunov function to show stability of a nonlinear time-varying dynamical system.
Example 1.9.2. Consider the nonlinear time-varying dynamical system

ẋ1(t) = −x1³(t) + (sin ωt)x2(t) , x1(0) = x10 , t ≥ 0    (1.99)
ẋ2(t) = −(sin ωt)x1(t) − x2³(t) , x2(0) = x20    (1.100)

To show that the origin is globally uniformly asymptotically stable, consider the time-invariant Lyapunov function candidate

V(x) = (1/2)(x1² + x2²)

Clearly, V(x), x ∈ R², is positive definite and radially unbounded. Furthermore,

V̇(x) = x1[−x1³ + (sin ωt)x2] + x2[−(sin ωt)x1 − x2³]    (1.101)
      = −x1⁴ − x2⁴    (1.102)
      < 0 , (x1, x2) ∈ R × R , (x1, x2) ≠ (0, 0)    (1.103)

which shows that the zero solution (x1(t), x2(t)) ≡ (0, 0) of the above dynamical system is globally uniformly asymptotically stable.

1.9.1 Applying Lyapunov's Indirect Method to Stabilize a Nonlinear Controlled Dynamical System
Consider the nonlinear controlled system

ẋ(t) = F(x(t), u(t)) , x(0) = x0 , t ≥ 0    (1.104)

where F : Rn × Rm 7→ Rn and u(t) is the control input. We seek feedback controllers of the form u(t) = φ(x(t)), where φ : Rn 7→ Rm and φ(0) = 0, such that the zero solution x(t) ≡ 0 of the closed-loop system

ẋ = F(x, φ(x)) , x(0) = x0 , t ≥ 0    (1.105)

is asymptotically stable.
Theorem 1.9.3. Consider the nonlinear controlled system (1.104), where F : Rn × Rm 7→ Rn is a continuously differentiable function and F(0, 0) = 0. Define

A = ∂F(x, u)/∂x |_{(x,u)=(0,0)} , B = ∂F(x, u)/∂u |_{(x,u)=(0,0)}

and assume that the pair (A, B) is stabilizable. Then there exists a matrix K ∈ Rm×n such that A + BK is asymptotically stable (Hurwitz). It follows that if the linearization of (1.104) is stabilizable, then for every Q > 0 there exists a positive definite matrix P ∈ Rn×n satisfying the Lyapunov equation

(A + BK)ᵀ P + P(A + BK) = −Q

and we can construct a Lyapunov function for the nonlinear closed-loop system (1.105): the quadratic function V(x) = xᵀPx guarantees local asymptotic stability.
Proof. With u = Kx, the closed-loop system

ẋ(t) = F(x(t), Kx(t)) , x(0) = x0 , t ≥ 0

has the form

ẋ(t) = F(x(t), Kx(t)) =: f(x(t)) , x(0) = x0 , t ≥ 0

It follows that

∂f/∂x |_{x=0} = [ ∂F/∂x (x, Kx) + ∂F/∂u (x, Kx) K ] |_{x=0} = A + BK

Since (A, B) is stabilizable, K can be chosen such that A + BK is asymptotically stable.
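A scalar sketch of this construction (the plant F(x, u) = sin x + u and the gain K = −2 are illustrative assumptions, not from the thesis) shows each step of Theorem 1.9.3 in the simplest possible setting:

```python
# Scalar sketch of Theorem 1.9.3 (F(x,u) = sin x + u and K = -2 are assumed
# illustrative choices, not from the thesis).
import math

A, B = 1.0, 1.0          # A = dF/dx(0,0) = cos 0, B = dF/du(0,0)
K = -2.0                 # any K with A + B*K < 0 works
Acl = A + B * K
assert Acl < 0.0         # closed-loop linearization is Hurwitz

Q = 1.0
P = -Q / (2.0 * Acl)     # scalar Lyapunov equation: 2*Acl*P = -Q
assert P > 0.0

# V(x) = P x^2 gives Vdot(x) = 2 P x (sin x - 2x) < 0 for x != 0,
# since |sin x| <= |x|; check on a grid around the origin.
for i in range(1, 200):
    x = -2.0 + 0.02 * i
    if x != 0.0:
        Vdot = 2.0 * P * x * (math.sin(x) - 2.0 * x)
        assert Vdot < 0.0
```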
2. STABILITY OF SWITCHED SYSTEMS

2.1 Introduction [2]

A switched system is a dynamical system in which switching plays a nontrivial role. The system consists of continuous states that take values in a vector space and discrete states that take values in a discrete index set. The interaction between the continuous and discrete states makes switched systems described by

ẋ(t) = fσ(t)(x(t)) , where fp : Rn 7→ Rn and σ(t) ∈ P = {1, 2, · · · }

2.2 Classes of Hybrid and Switched Systems

Hybrid systems are an interaction between continuous and discrete dynamics that together form a complete dynamical system.
Many researchers in control theory regard hybrid systems as continuous systems with switching.
Consider the motion of an automobile, which might take the form

ẋ1 = x2
ẋ2 = f(a, q)

where x1 is the position, x2 is the velocity, a ≥ 0 is the acceleration input and q ∈ {1, 2, 3, 4, 5, −1, 0} is the gear shift position. Here x1 and x2 are the continuous states and q is the discrete state. In case of a manual transmission, the discrete transitions affect the continuous trajectory. For example, when q = −1 (reverse gear), the function f(a, q) should be decreasing in the acceleration a, and increasing when a forward gear is engaged. In case of an automatic transmission, the evolution of the continuous state x2 is in turn used to determine the discrete transitions.
Continuous-time systems with isolated discrete switching events are called switched systems.
Switching events in switched systems can be classified into
• State-Dependent versus Time-Dependent switching.
• Autonomous (Uncontrolled) switching versus Controlled switching.
2. CHAPTER 2 :Stability of Switched Systems 81

2.2.1 State-Dependent Switching


In this situation, the continuous state space is partitioned by a family of switching surfaces into a finite or infinite number of operating regions. In each of these regions, a different continuous-time dynamical system governs the evolution. Whenever the system trajectory hits a switching surface, the dynamics switch to the subsystem associated with the region being entered (possibly with an instantaneous jump of the state), as depicted in Figure 2.1.

Fig. 2.1: State-dependent switching, the thick curves denote the switching surface
and the thin curves with arrows denote the continuous portions of the
trajectory

2.2.2 Time-Dependent Switching


Suppose that we have a family of systems

ẋ = fi(x) , i ∈ P    (2.1)

where the functions fi : Rn 7→ Rn are assumed to be at least locally Lipschitz and P = {1, 2, · · · , m} is a finite index set. To define a switched system generated by the above family, we need what is called a switching signal. This is a piecewise constant function σ : [0, ∞) 7→ P that specifies, on every interval between two consecutive switching times, which subsystem from the family (2.1) is currently active, as depicted in Figure 2.2.
Hence, the switched system with time-dependent switching can be described by

ẋ(t) = fσ(t)(x(t))

or, for short,

ẋ(t) = fσ(x)    (2.2)

and in case of a switched linear system:

ẋ = Aσ x. (2.3)

Fig. 2.2: Switching signal σ : [0, ∞) 7→ P

2.2.3 Stability to Non-Stability of Switched Systems and Vice-Versa
Consider the following switched system

ẋ = fσ(t)(x(t)) , σ(t) ∈ P = {1, 2}

and assume that the two subsystems are asymptotically stable, with trajectories as depicted in Figure 2.3.

Fig. 2.3: the two individual subsystems are asymptotically stable

If we apply a switching signal σ(t), then the switched system might remain asymptotically stable, as shown on the left in Figure 2.4, or it might no longer be asymptotically stable (unstable), as shown on the right in Figure 2.4.

Fig. 2.4: Switching between stable subsystems

Similarly, when both subsystems are unstable, the switched system may be either asymptotically stable or unstable, depending on the switching strategy, as shown in Figure 2.5.

Fig. 2.5: Switching between unstable subsystems

Hence, we conclude that an unconstrained switching signal may force the trajectories of such a system to diverge, destabilizing the switched system even if all the individual subsystems are stable.

2.2.4 Autonomous Switching Versus Controlled Switching


Autonomous switching refers to the situation where we have no direct control over the switching mechanism, as in systems with state-dependent switching, where the rule that defines the switching signal is predetermined by the switching surfaces.
In contrast, controlled switching is imposed by the designer: direct control is applied over the switching mechanism in order to achieve a desired behaviour of the system. For example, if we are given a system governed by autonomous switching and the process suddenly encounters some failures, then it may be necessary to design a logic-based mechanism for detecting and correcting the faults that occur.

Definition 2.2.1. A switched system with controlled time-dependent switching can be described by recasting the switched system (2.2) into

ẋ = Σ_{i=1}^{m} fi(x) ui

where i ∈ P = {1, 2, . . . , m} and, when the k-th subsystem is active, the controls are defined as

uk = 1 , ui = 0 ∀ i ≠ k

In case of the switched linear system,

ẋ = Σ_{i=1}^{m} Ai x ui

2.3 Sliding and Hysteresis Switching

Consider a switched system

ẋ = fi(x) , i ∈ P = {1, 2}

with state-dependent switching described by a single switching surface S. If the continuous trajectory hits the surface S, it may simply cross over to the other side. This will indeed be the case if, at the point x ∈ S where the trajectory hits the surface, both vectors f1(x) and f2(x) point to the same side of S, as depicted in Figure 2.6. A solution is then obtained by following, at each time, the subsystem corresponding to the side of S on which the trajectory lies.

Fig. 2.6: Crossing a switching surface



Definition 2.3.1. A sliding mode is obtained when the trajectory hits the surface S and cannot leave it, because both vectors f1(x) and f2(x) point toward S, as depicted in Figure 2.7. Therefore, the only possible solution of the switched system is to slide on the surface S, in the sense of Filippov.

Fig. 2.7: a sliding mode

Definition 2.3.2 (Filippov's Definition).
According to Filippov's concept, a solution x(·) of the switched system is obtained from the differential inclusion

ẋ ∈ F(x) , ∀ x ∈ S

where F(x) is the set of all convex combinations of the vectors f1(x) and f2(x):

F(x) := {αf1(x) + (1 − α)f2(x) : α ∈ [0, 1]}

If x ∉ S, we simply set F(x) = {f1(x)} or F(x) = {f2(x)}, depending on which side of the surface S the point x lies on.
Note that in Figure 2.7, the tangent to the surface S at the point x is a unique convex combination of the two vectors f1(x) and f2(x). Sometimes, a sliding mode can be interpreted as fast switching, or chattering. Usually, this phenomenon is undesirable in mathematical models of real systems because, in electronics for example, it leads to low control accuracy and high heat losses in power circuits. Avoiding chattering requires maintaining the property that two consecutive switching events are always separated by a time interval of positive length [2]. Fast switching can also be avoided with the help of what is called a hysteresis switching strategy.

Definition 2.3.3 (Hysteresis Switching).
This strategy generates a piecewise constant signal σ which changes its value depending on the current value of x and the previous value of σ.

The hysteresis switching strategy can be formalized by introducing a discrete state σ, described as follows. First, we construct two overlapping regions Ω1 and Ω2 by offsetting the original switching surface; two new switching surfaces S1 and S2 are thus obtained, as shown in Figure 2.8.

Fig. 2.8: Hysteresis switching regions

Consider a switched system

ẋ = fi(x) , i ∈ P = {1, 2}

and assume that the subsystem ẋ = f1(x) is active in Ω1 and ẋ = f2(x) is active in Ω2. Switching events occur when the trajectory hits one of the switching surfaces S1 or S2.
At the initial time, let σ(0) = 1 if x(0) ∈ Ω1 and σ(0) = 2 otherwise. The switching signal then keeps its current value as long as the state remains in the region of the active subsystem: if σ(t⁻) = 1 and x(t) ∈ Ω1, keep σ(t) = 1; if σ(t⁻) = 1 but x(t) ∉ Ω1, set σ(t) = 2, and vice-versa. Following this procedure, the generated switching signal σ avoids chattering, with a typical solution trajectory as shown in Figure 2.9.

Fig. 2.9: Hysteresis with a typical trajectory
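The hysteresis logic can be sketched in a one-dimensional example (the regions and dynamics below are illustrative assumptions, not from the thesis): σ changes only when the state leaves the region of the active subsystem, so switches are separated by a positive dwell time:

```python
# Minimal 1-D sketch of hysteresis switching (illustrative choices):
# regions Ω1 = {x < 1}, Ω2 = {x > -1}; subsystem 1 drives x up (x' = +1),
# subsystem 2 drives x down (x' = -1).  No chattering is possible because
# σ only switches when x exits the region of the active subsystem.
dt, T = 1e-3, 20.0
x, sigma, t = 0.0, 1, 0.0
switch_times = []
while t < T:
    x += dt * (1.0 if sigma == 1 else -1.0)
    t += dt
    if sigma == 1 and x >= 1.0:        # trajectory left Ω1: switch to 2
        sigma = 2
        switch_times.append(t)
    elif sigma == 2 and x <= -1.0:     # trajectory left Ω2: switch to 1
        sigma = 1
        switch_times.append(t)

assert -1.1 < x < 1.1                  # state stays in the hysteresis band
dwell = [b - a for a, b in zip(switch_times, switch_times[1:])]
assert dwell and min(dwell) > 1.0      # consecutive switches ~2 s apart
```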

2.4 Stability Under Arbitrary Switching

Definition 2.4.1. Consider the switched system

ẋ = fσ(t) (x) (2.4)

• If there exist a positive constant δ and a class KL function β such that, for all switching signals σ, the solutions of (2.4) satisfy

|x(0)| ≤ δ ⇒ |x(t)| ≤ β(|x(0)|, t) , ∀ t ≥ 0    (2.5)

then the switched system is uniformly asymptotically stable at the origin.

• If the inequality (2.5) is valid for all switching signals and all initial conditions, we obtain global uniform asymptotic stability (GUAS).

• If β takes the form β(r, s) = k r e^{−λs} for some k, λ > 0, so that (2.5) becomes

|x(t)| ≤ k |x(0)| e^{−λt} , ∀ t ≥ 0    (2.6)

then the system (2.4) is uniformly exponentially stable. Moreover, if the inequality (2.6) is valid for all switching signals and all initial conditions, we obtain global uniform exponential stability (GUES).

Remark:
The term uniform refers to uniformity with respect to the switching signals. In general, if we have a Lyapunov function candidate V and the rate of decrease of V along the solutions is not affected by switching, then we say that the stability of the switched system is uniform with respect to σ(t).

2.4.1 Commuting Linear Systems


Definition 2.4.2. Two matrices A1 and A2 are said to commute if and only if

A1A2 = A2A1

and we often write this condition as

[A1, A2] = 0

where the commutator, or Lie bracket [·, ·], is defined as

[A1, A2] := A1A2 − A2A1    (2.7)

Moreover, if A1 and A2 commute, then

e^{A1} e^{A2} = e^{A2} e^{A1}    (2.8)

e^{A1} e^{A2} = e^{A1 + A2}    (2.9)

2.4.2 Common Lyapunov Function (CLF)


Consider the switched system

ẋ(t) = A(t)x(t) , x(t0) = x0    (2.10)

where x(t) ∈ Rn and the matrix A(t) switches between the stable matrices A1, A2, · · · , AN. If the subsystems of (2.10) share a common quadratic Lyapunov function of the form V(x) = xᵀPx with a negative definite time derivative along the trajectories of (2.10) (i.e., V̇(x) < 0), then the system (2.10) is exponentially stable for any arbitrary switching sequence.

Theorem 2.4.1. [13] Consider the switching system (2.10) with A(t) ∈ {A1, A2}. Assume that

• A1, A2 are asymptotically stable (Hurwitz) matrices;

• A1, A2 commute.

Then

(1) The system is exponentially stable under any arbitrary switching sequence between A1 and A2.

(2) For a given symmetric positive definite matrix P0, let P1, P2 be the unique symmetric positive definite solutions to the Lyapunov equations

AT1 P1 + P1 A1 = − P0 (2.11)
AT2 P2 + P2 A2 = − P1 (2.12)

then the function V(x) = xᵀP2x is a common Lyapunov function (CLF) for both individual systems ẋ(t) = Aix, i = 1, 2, and hence a Lyapunov function for the switching system (2.10).

(3) For a given choice of the matrix P0, the matrices A1, A2 can be taken in any order in (2.11), (2.12) to yield the same solution P2; that is, if

AT2 P3 + P3 A2 = −P0 (2.13)

then
AT1 P2 + P2 A1 = −P3 (2.14)

(4) The matrix P2 can be expressed in integral form as

P2 = ∫₀^∞ e^{A2ᵀ t} [ ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ ] e^{A2 t} dt
   = ∫₀^∞ e^{A1ᵀ t} [ ∫₀^∞ e^{A2ᵀ τ} P0 e^{A2 τ} dτ ] e^{A1 t} dt

Proof.

(1) If A1 , A2 commute, then eA1 t eA2 τ = eA2 τ eA1 t . To prove exponential


stability, let V (x) = xT P2 x. The derivative of V along the trajectories
of the system ẋ(t) = A2 x is given by

V̇ = xT (AT2 P2 + P2 A2 )x = −xT P1 x < 0

which means V is a Lyapunov function for this system.


Again, the derivative of V along the trajectories of the system ẋ(t) =
A1 x is given by
V̇ = xT (AT1 P2 + P2 A1 )x

We need to prove that this is negative definite. Substituting for P1 from (2.12) into (2.11) and using the commutativity of A1 and A2, we obtain

P0 = A1ᵀ(A2ᵀP2 + P2A2) + (A2ᵀP2 + P2A2)A1    (2.15)
   = A2ᵀ(A1ᵀP2 + P2A1) + (A1ᵀP2 + P2A1)A2

Since A2 is stable and P0 > 0, it follows that A1ᵀP2 + P2A1 < 0.
Finally, the derivative of V = xᵀP2x along the trajectories of the system (2.10) is given by

V̇ = xᵀ(Aᵀ(t)P2 + P2A(t))x
  = xᵀ(A2ᵀP2 + P2A2)x = −xᵀP1x < 0 , when A(t) = A2
  = xᵀ(A1ᵀP2 + P2A1)x < 0 , when A(t) = A1

Hence, V is a Lyapunov function for the switching system.
(3) Since P3 is the solution of (2.13), P3 is positive definite and there is a unique positive definite solution P4 to the Lyapunov equation

A1ᵀ P4 + P4 A1 = −P3    (2.16)

Using (2.13) and (2.16) and the commutativity of A1 and A2, it follows that

P0 = A1ᵀ(A2ᵀP4 + P4A2) + (A2ᵀP4 + P4A2)A1    (2.17)

Equating (2.17) with (2.15) yields

A2ᵀP4 + P4A2 = A2ᵀP2 + P2A2

Hence,

A2ᵀ(P2 − P4) + (P2 − P4)A2 = 0

Since A2 is stable, P2 = P4. Thus, P2 satisfies (2.14), which proves statement (3).
(4) The solution to the Lyapunov equation (2.11) is known to be

P1 = ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ

Hence, the solution P2 to the Lyapunov equation (2.12) is

P2 = ∫₀^∞ e^{A2ᵀ t} [ ∫₀^∞ e^{A1ᵀ τ} P0 e^{A1 τ} dτ ] e^{A2 t} dt

Example 2.4.1. Consider the switched system


ẋ = Ap x, p ∈ P = {1, 2}
   
where A1 = [ −1  0 ; 0  −1 ] , A2 = [ −2  0 ; 0  −1 ]
Let P0 = I. The matrices A1 and A2 commute, and by solving A1ᵀP1 + P1A1 = −I we obtain P1 = (1/2)I.
Substituting P1 into A2ᵀP2 + P2A2 = −P1 gives

P2 = [ 1/8  0 ; 0  1/4 ]

Thus, by Theorem (2.4.1), V = xᵀP2x is a QCLF for the switched system.
MATLAB.m 5 (Example 2.4.1).
A1=[-1 0 ; 0 -1]
A2=[-2 0 ; 0 -1]
P0=[1 0 ; 0 1]
% do A1 and A2 commute? (res==0 means they commute)
res= A1*A2 - A2*A1
% solve the Lyapunov equation A1'*P1 + P1*A1 = -P0 for P1
% (lyap(A,Q) solves A*X + X*A' = -Q; A1 and A2 are symmetric here)
P1=lyap(A1,P0)
% solve the Lyapunov equation A2'*P2 + P2*A2 = -P1 for P2
P2=lyap(A2,P1)
% the QCLF is V(x) = x'*P2*x
syms x1 x2
x=[x1 ; x2]
V=transpose(x)*P2*x
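As an independent cross-check of this computation (a Python sketch, separate from the MATLAB listing; since A1 and A2 are diagonal, the Lyapunov equations decouple into scalar equations):

```python
# Python cross-check of Example 2.4.1 (a sketch).  A1 = diag(-1,-1) and
# A2 = diag(-2,-1) are diagonal, so each Lyapunov equation reduces to the
# scalar equation 2*a*p = -q on the diagonal.
a1 = (-1.0, -1.0)                     # diagonal of A1
a2 = (-2.0, -1.0)                     # diagonal of A2

# diagonal matrices trivially commute: [A1, A2] = 0 entrywise
assert all(a1[i] * a2[i] - a2[i] * a1[i] == 0.0 for i in range(2))

# A1'P1 + P1A1 = -I  and  A2'P2 + P2A2 = -P1, solved entrywise
p1 = tuple(-1.0 / (2.0 * a) for a in a1)
p2 = tuple(-p1[i] / (2.0 * a2[i]) for i in range(2))
assert p1 == (0.5, 0.5)               # P1 = (1/2) I
assert p2 == (0.125, 0.25)            # P2 = diag(1/8, 1/4)

# V(x) = x'P2x is a QCLF: Ai'P2 + P2Ai < 0 for both subsystems
for a in (a1, a2):
    assert all(2.0 * a[i] * p2[i] < 0.0 for i in range(2))
```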
Theorem 2.4.2. Consider the switching system (2.10), where the matrices Ai, i = 1, · · · , N, are asymptotically stable and pairwise commute. Then,

• The system is exponentially stable for any arbitrary switching signal.

• For a given symmetric positive definite matrix P0, let P1, P2, · · · , PN be the unique symmetric positive definite solutions to the Lyapunov equations

Aiᵀ Pi + Pi Ai = −P_{i−1} , i = 1, 2, · · · , N    (2.18)

Then the function V(x) = xᵀPNx is a common Lyapunov function for each of the individual systems ẋ = Aix, i = 1, 2, · · · , N, and hence a Lyapunov function for the switching system (2.10).

• For a given choice of the matrix P0, the matrices A1, A2, · · · , AN can be taken in any order in (2.18) and yield the same solution PN.

• The matrix PN can be expressed in integral form as

PN = ∫₀^∞ e^{ANᵀ tN} · · · [ ∫₀^∞ e^{A2ᵀ t2} [ ∫₀^∞ e^{A1ᵀ t1} P0 e^{A1 t1} dt1 ] e^{A2 t2} dt2 ] · · · e^{AN tN} dtN

Definition 2.4.3. Consider a positive definite continuously differentiable function V : Rn 7→ R. We say V is a common Lyapunov function for the switched system (2.4) if there exists a positive definite continuous function W : Rn 7→ R such that

(∂V/∂x) fp(x) ≤ −W(x) , ∀ x , ∀ p ∈ P    (2.19)
Example 2.4.2. Consider the family (2.1) with fp(x) = −p x, p ∈ P = (0, 1]. Each individual system is globally asymptotically stable, with Lyapunov function V(x) = (1/2)x². The switched system

ẋ = −σ(t) x

has the solutions

x(t) = e^{−∫₀ᵗ σ(τ) dτ} x(0)

If the switching signal takes smaller and smaller values so quickly that ∫₀^∞ σ(τ) dτ < ∞, the trajectory does not converge to zero. Indeed, V fails to be a common Lyapunov function in the sense of (2.19): although (∂V/∂x) fp(x) = −p x², there is no single positive definite function W with −p x² ≤ −W(x) for all p ∈ (0, 1].

Theorem 2.4.3. If the individual subsystems of (2.4) share a radially unbounded common Lyapunov function (CLF), then the switched system

ẋ = fσ(x)

is GUAS.

Theorem 2.4.4 (Converse Theorem). If the switched system (2.4) is GUAS, the set {fp(x) : p ∈ P} is bounded for each x, and the function fp(x) is locally Lipschitz in x uniformly over p, then all the systems

ẋ = fp(x)

share a radially unbounded smooth common Lyapunov function.

Corollary 2.4.1. Under the assumptions of Theorem (2.4.4), each convex combination of the individual subsystems, defined by the vector fields

fp,q,α(x) := αfp(x) + (1 − α)fq(x) , p, q ∈ P , α ∈ [0, 1]

is globally asymptotically stable.

Remark:
A convex combination of two asymptotically stable vector fields is not necessarily asymptotically stable.

Example 2.4.3. Consider the two matrices

A1 = [ −0.1  −1 ; 2  −0.1 ] , A2 = [ −0.1  2 ; −1  −0.1 ]

Both of these matrices are Hurwitz, but some of their convex combinations are not; for instance, (1/2)(A1 + A2) has the eigenvalue 0.4 > 0. Thus, by Corollary (2.4.1), the switched system generated by A1 and A2 is not GUAS.
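The claim can be checked numerically (a sketch computing 2×2 eigenvalues with the quadratic formula):

```python
# Numerical check for Example 2.4.3 (a sketch): A1 and A2 are Hurwitz, but
# the midpoint of the segment between them, 0.5*(A1 + A2), is not.
import cmath

A1 = [[-0.1, -1.0], [2.0, -0.1]]
A2 = [[-0.1, 2.0], [-1.0, -0.1]]

def eig2(M):
    # eigenvalues of a 2x2 matrix via the quadratic formula
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    rt = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + rt) / 2.0, (tr - rt) / 2.0

# each individual matrix is Hurwitz
for A in (A1, A2):
    assert all(lam.real < 0 for lam in eig2(A))

# the 50/50 convex combination is not Hurwitz
Amid = [[0.5 * (A1[i][j] + A2[i][j]) for j in range(2)] for i in range(2)]
max_real = max(lam.real for lam in eig2(Amid))
assert abs(max_real - 0.4) < 1e-12    # unstable eigenvalue at 0.4
```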

2.4.3 Switched Linear Systems


Consider the LTI system

ẋ = Ax(t)

Recall that this system is asymptotically stable if and only if A is a Hurwitz matrix; i.e., the eigenvalues of A lie in the open left-half of the complex plane. Moreover, in that case there exists a quadratic Lyapunov function

V(x) = xᵀPx

where P is a positive definite symmetric matrix which satisfies, for some positive definite symmetric matrix Q, the inequality

AᵀP + PA ≤ −Q    (2.20)

The stability of switched linear systems is treated similarly to the classical Lyapunov stability concept.

Definition 2.4.4. For a switched linear system

ẋ = Aσx

assume that {Ai : i ∈ P} is a compact set of Hurwitz matrices. We say the system has a quadratic common Lyapunov function (QCLF) of the form

V(x) = xᵀPx

if, for some positive definite symmetric matrix Q,

AiᵀP + PAi ≤ −Q , ∀ i ∈ P

holds.
Theorem 2.4.5. Consider the switched linear system

ẋ = Aσ(t) x(t) (2.21)

The switched linear system (2.21) is GUES if and only if it is locally at-
tractive for every switching signal.
The example below demonstrates that even when a switched system is GUES, this does not imply the existence of a quadratic common Lyapunov function (QCLF).
Example 2.4.4. Assume that the following two matrices are Hurwitz

A1 = [ −1  −1 ; 1  −1 ] , A2 = [ −1  −10 ; 0.1  −1 ]

The systems ẋ = A1x and ẋ = A2x do not share a quadratic common Lyapunov function. To see this fact, we can look for a positive definite symmetric matrix P of the form

P = [ 1  q ; q  r ]

which satisfies the inequality (2.20). We have

−A1ᵀP − PA1 = [ 2 − 2q    2q + 1 − r ; 2q + 1 − r    2q + 2r ]

this is positive definite if and only if

q² + (r − 3)²/8 < 1    (2.22)

Similarly,

−A2ᵀP − PA2 = [ 2 − q/5    2q + 10 − r/10 ; 2q + 10 − r/10    20q + 2r ]

this is positive definite if and only if

q² + (r − 300)²/800 < 100    (2.23)
Each of the inequalities (2.22) and (2.23) describes the interior of an ellipse in the (q, r)-plane. The interiors of the two ellipses do not intersect, as depicted in Figure (2.10). Therefore, a quadratic common Lyapunov function does not exist.

Fig. 2.10: Ellipses in example 2.4.4

Although there is no quadratic common Lyapunov function, the switched linear system ẋ = Aσx is GUES.
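The disjointness of the two ellipse interiors can be confirmed by a brute-force scan (a sketch; the grid ranges are chosen to comfortably cover both ellipses):

```python
# Numerical confirmation for Example 2.4.4 (a sketch): scan a grid of (q, r)
# values and check that no point satisfies both ellipse inequalities (2.22)
# and (2.23), so no matrix P = [[1, q], [q, r]] works for both subsystems.
def in_first(q, r):                  # inequality (2.22)
    return q * q + (r - 3.0) ** 2 / 8.0 < 1.0

def in_second(q, r):                 # inequality (2.23)
    return q * q + (r - 300.0) ** 2 / 800.0 < 100.0

both = first_only = second_only = 0
for i in range(241):                 # q in [-12, 12]
    q = -12.0 + 0.1 * i
    for j in range(1281):            # r in [-20, 620]
        r = -20.0 + 0.5 * j
        f, s = in_first(q, r), in_second(q, r)
        both += f and s
        first_only += f and not s
        second_only += s and not f

assert both == 0                           # the ellipse interiors are disjoint
assert first_only > 0 and second_only > 0  # each ellipse alone is non-empty
```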
Corollary 2.4.2 ([2], p. 51). The second-order linear systems ẋ = A1x and ẋ = A2x share a quadratic common Lyapunov function (QCLF) if and only if all pairwise convex combinations of the matrices A1, A2, A1⁻¹ and A2⁻¹ are Hurwitz. In particular, if V(x) = xᵀPx is a quadratic Lyapunov function for ẋ = Ax, then it is also one for ẋ = A⁻¹x, and hence for all their convex combinations.

Proof. Note that

AᵀP + PA = −Q

implies

(A⁻¹)ᵀP + PA⁻¹ = −(A⁻¹)ᵀQA⁻¹

for every Hurwitz matrix A.

2.4.4 Solution of a Linear Switched System Under an Arbitrary Switching Signal
Consider the switched linear system

ẋ = Aσx , σ : [0, ∞) 7→ P , P = {1, 2}

and assume that the matrices A1 and A2 commute. Now, suppose that we apply an arbitrary switching signal σ, and denote by ti and τi the lengths of the time intervals on which σ equals 1 and 2, respectively (as shown in Figure 2.11).

Fig. 2.11: Switching between two systems under arbitrary switching signal

The solution under this arbitrary switching signal is

x(t) = · · · e^{A2τ2} e^{A1t2} e^{A2τ1} e^{A1t1} x(t0)

Since [A1, A2] = 0, we obtain

e^{A1t} e^{A2τ} = e^{A2τ} e^{A1t}    (2.24)

Thus, the factors can be reordered and the solution can be written as

x(t) = · · · e^{A2τ2} e^{A2τ1} e^{A1t2} e^{A1t1} x(t0)

Hence, according to relation (2.9), this becomes

x(t) = e^{A2(τ1+τ2+··· )} e^{A1(t1+t2+··· )} x(t0)    (2.25)
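The reordering argument can be checked numerically with the diagonal commuting matrices of Example 2.4.1, whose matrix exponentials reduce to scalar exponentials (the switching durations and initial state below are illustrative):

```python
# Check of (2.24)-(2.25) (a sketch): with diagonal A1 = diag(-1,-1) and
# A2 = diag(-2,-1), the interleaved product of exponentials equals the
# collected product of (2.25), entry by entry.
import math

a1 = (-1.0, -1.0)                    # diagonal of A1
a2 = (-2.0, -1.0)                    # diagonal of A2
t1, tau1, t2, tau2 = 0.3, 0.7, 1.1, 0.2   # assumed interval lengths
x0 = (2.0, -3.0)

# interleaved product: e^{A2 tau2} e^{A1 t2} e^{A2 tau1} e^{A1 t1} x0
interleaved = tuple(
    math.exp(a2[i] * tau2) * math.exp(a1[i] * t2) *
    math.exp(a2[i] * tau1) * math.exp(a1[i] * t1) * x0[i]
    for i in range(2))

# collected product from (2.25): e^{A2 (tau1+tau2)} e^{A1 (t1+t2)} x0
collected = tuple(
    math.exp(a2[i] * (tau1 + tau2)) * math.exp(a1[i] * (t1 + t2)) * x0[i]
    for i in range(2))

assert all(abs(interleaved[i] - collected[i]) < 1e-12 for i in range(2))
```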

2.4.5 Commuting Nonlinear Systems


Consider the linear vector fields f1(x) = A1x and f2(x) = A2x. The Lie bracket, or commutator, of two vector fields is defined as

[f1, f2](x) := (∂f2(x)/∂x) f1(x) − (∂f1(x)/∂x) f2(x)

After substitution, the right-hand side becomes (A2A1 − A1A2)x, which is consistent with the definition of the Lie bracket of two matrices (2.7) except for the difference in sign.

Theorem 2.4.6. If {fp : p ∈ P} is a finite set of commuting, continuously differentiable vector fields and the origin is a globally asymptotically stable equilibrium for all systems in the family ẋ = fp(x), p ∈ P, then the corresponding switched system (2.2) is GUAS.

2.4.6 Commutation and The Triangular Systems


Theorem 2.4.7. If {Ap : p ∈ P} is a compact set of upper-triangular Hur-
witz matrices, then the switched linear system (2.21) is GUES.
Example 2.4.5. Suppose that we have

ẋ = Apx(t) , p ∈ P = {1, 2} , x ∈ R²

and consider the two matrices

A1 = [ −a1  b1 ; 0  −c1 ] , A2 = [ −a2  b2 ; 0  −c2 ]

Assume that ai, ci > 0, i = 1, 2; thus, the eigenvalues have negative real parts. Now, under an arbitrary switching signal σ, the second component of x satisfies

ẋ2 = −cσ x2

whose solution with initial value x2(0) = x20 is

x2(t) = x20 e^{−∫₀ᵗ c_{σ(s)} ds}

Therefore, x2 decays to zero exponentially fast, with rate of decay at least min{c1, c2}. Similarly, the first component of x satisfies

ẋ1 = −aσ x1 + bσ x2

We can view this equation as consisting of two parts: the exponentially stable system ẋ1 = −aσx1, perturbed by the exponentially decaying input bσx2. Thus, x1 also converges to zero exponentially fast.
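An Euler simulation of this cascade (a sketch; the triangular parameters and the periodic switching signal are illustrative assumptions) shows the decay under switching:

```python
# Euler simulation of Example 2.4.5 (a sketch; the parameters (a_i, b_i, c_i)
# and the 0.5 s periodic switching signal are assumptions for the demo).
import math

params = {1: (1.0, 1.0, 1.0),    # (a1, b1, c1)
          2: (2.0, -1.0, 3.0)}   # (a2, b2, c2)

dt, T, period = 1e-3, 10.0, 0.5
x1, x2 = 3.0, -2.0
n0 = math.hypot(x1, x2)
t = 0.0
while t < T:
    sigma = 1 if int(t / period) % 2 == 0 else 2   # periodic switching
    a, b, c = params[sigma]
    # upper-triangular dynamics: x2 decays on its own, x1 is driven by x2
    x1, x2 = x1 + dt * (-a * x1 + b * x2), x2 + dt * (-c * x2)
    t += dt

assert math.hypot(x1, x2) < 1e-2 * n0   # the switched system decays
```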

2.5 Stability Under Constrained Switching

2.5.1 Multiple Lyapunov functions (MLF)[16]


Consider the switched system

ẋ = fσ(t)(x) , σ(t) ∈ P = {1, 2}    (2.26)

Assume that both ẋ = f1(x) and ẋ = f2(x) are asymptotically stable, and let V1, V2 be their respective radially unbounded Lyapunov functions. We are interested in the situation where a CLF for the two systems is not known or does not exist.
Let ti, i = 1, 2, · · · , denote the switching times.

• If it happens that the values of V1(x(t)) and V2(x(t)) coincide at each switching time, i.e., V_{σ(t_{i−1})}(x(ti)) = V_{σ(ti)}(x(ti)) for all i, then Vσ is a continuous Lyapunov function for the switched system and asymptotic stability follows (see Figure 2.12 (a)).

Fig. 2.12: Solid graphs correspond to V1 , dashed graphs correspond to V2 :(a) con-
tinuous Vσ , (b) discontinuous Vσ .

While each Vp decreases when the p-th subsystem is active, it may increase when the p-th subsystem is inactive, as depicted in Figure 2.12(b). For the system to be asymptotically stable, the values of Vp, sampled at the beginning of each interval on which the p-th subsystem is active, should form a decreasing sequence for each p.
Theorem 2.5.1. [26] Consider the switched system described by (2.26) and suppose that we have a Lyapunov-like function Vi for each individual subsystem. Let S = {t0, t1, · · · , ti, · · · } denote the sequence of switching times. If the following conditions are satisfied:

(1) V̇i ≤ 0 on every interval on which subsystem i is active;

(2) Vi(ti+1) ≤ Vi(ti), i.e., the values of Vi sampled at consecutive times at which subsystem i becomes active are non-increasing; and

(3) there exists a positive constant µ such that |Vj(t)| ≤ µ|Vi(t)| for i ≠ j;

then the switched system is asymptotically stable.



Assume V1(x(t)) and V2(x(t)) are two multiple-Lyapunov candidate functions, where V1 satisfies conditions (1) and (2) while V2 satisfies condition (3); as shown in Figure 2.13, V2 may increase or decrease during its active time intervals. If V̇2 ≥ 0 and subsystem 2 is active at the initial time, then V2 can only reach a finite value before subsystem 1 is switched on, due to condition (3).

Fig. 2.13: Multiple Lyapunov Stability, where m ≡ µ

Theorem 2.5.2. [2] Consider the system (2.26) and let {Vp(x) : p ∈ P} be a set of radially unbounded, real, positive definite functions, where P = {1, 2, · · · , N}. Suppose there exists a family of positive definite continuous functions Wp with the property that for every pair of switching times ti < ti+1 such that σ(ti+1) = σ(ti) = p ∈ P and σ(tk) ≠ p for all ti < tk < ti+1, we have

Vp(x(ti+1)) − Vp(x(ti)) ≤ −Wp(x(ti))    (2.27)

Then the switched system is globally asymptotically stable.

Proof. [16]
Since Vp (x(ti+1 )) − Vp (x(ti )) < 0 whenever x(ti ) 6= 0, the sequence
{Vp (x(ti ))} is strictly decreasing and bounded below by zero, so the limit
lim_{i→∞} Vp (x(ti )) = L ≥ 0 exists. Hence

lim_{i→∞} Vp (x(ti+1 )) − lim_{i→∞} Vp (x(ti )) = L − L = 0

On the other hand,

lim_{i→∞} [Vp (x(ti+1 )) − Vp (x(ti ))] ≤ lim_{i→∞} [−Wp (x(ti ))] ≤ 0

where Wp is a positive definite function.

In other words, lim_{i→∞} Wp (x(ti )) = 0, which implies x(ti ) → 0. It follows
from Lyapunov asymptotic stability that x(t) → 0 as t → ∞.

Assume that Wp (x(ti )) ≡ γ||x(ti )||² ; then from Theorem (2.5.2) we have
the following corollary:

Corollary 2.5.1. [16] Consider the system (2.26) and let {Vp (x) : p ∈ P}
be a set of radially unbounded positive definite functions, where P =
{1, 2, · · · , n}. If there exists a constant γ > 0 such that

Vp (x(ti+1 )) − Vp (x(ti )) ≤ −γ||x(ti )||²            (2.28)

then the switched system is globally asymptotically stable.

2.5.2 Stability Under State-Dependent Switching


In state-dependent switching, a switching event occurs when the tra-
jectory hits a switching surface. In this case, stability analysis is concerned
with the behavior of each individual subsystem in the region where that
subsystem is active.

Let us define a state-dependent switched linear system,


ẋ = A1 x , if x1 ≥ 0 (region Ω1 )
ẋ = A2 x , if x1 < 0 (region Ω2 )

where

A1 = [ −0.1 −1 ; 2 −0.1 ] ,  A2 = [ −0.1 −2 ; 1 −0.1 ]

Consider the two positive definite symmetric matrices

P1 = [ 2 0 ; 0 1 ] ,  P2 = [ 1/2 0 ; 0 1 ]

where V1 = xT P1 x and V2 = xT P2 x are Lyapunov function candidates for
the above systems.
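For these particular matrices the decrease requirement can be verified directly, since each AiT Pi + Pi Ai happens to be negative definite; a quick numerical check (a Python sketch, while the thesis's own listings use MATLAB):

```python
import numpy as np

# Matrices of the state-dependent example above
A1 = np.array([[-0.1, -1.0], [2.0, -0.1]])
A2 = np.array([[-0.1, -2.0], [1.0, -0.1]])
P1 = np.diag([2.0, 1.0])
P2 = np.diag([0.5, 1.0])

for A, P in ((A1, P1), (A2, P2)):
    Q = A.T @ P + P @ A                       # Vdot(x) = x^T Q x
    assert np.linalg.eigvalsh(Q).max() < 0    # Q negative definite => Vdot < 0
```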

Definition 2.5.1. It is enough to require that each function Vi decreases
(V̇i < 0) along solutions of the ith subsystem in the region Ωi where this
subsystem is active. It is important to note that Vi serves as a Lyapunov
function only in the suitable region for its subsystem.

Remark:
Multiple Lyapunov functions can be used when the stability analysis cannot
be based on a single Lyapunov function.

Example 2.5.1.
Consider the autonomous switched system ẋ = Aσ(t) x(t) with

A1 = [ 0 10 ; 0 0 ] ,  A2 = [ 1.5 2 ; −2 −0.5 ]

where
σ(t) = 1, if σ(t− ) = 2 and x2 (t) = −0.25 x1 (t)
σ(t) = 2, if σ(t− ) = 1 and x2 (t) = +0.50 x1 (t)

and the eigenvalues are λ1,2 (A1 ) = 0 and λ1,2 (A2 ) = 0.5 ± i√3.
Thus, both systems are unstable, and the phase-plane portraits of the indi-
vidual systems are depicted in Figure 2.14. These trajectories can be pieced
together, one in each region Ωi , to produce a stable state trajectory as shown in
Figure 2.15. This can be done by constructing multiple Lyapunov functions for
A1 , A2 as V1 (x) = xT P1 x and V2 (x) = xT P2 x.


Define the regions:

Ωi = { x : V̇i (x) = xT (ATi Pi + Pi Ai )x ≤ 0 } , i = 1, 2

By requiring ATi Pi + Pi Ai ≤ −I to hold on Ωi (so that V̇i (x) decreases
there), we obtain

P1 = [ 0.46875 −1.875 ; −1.875 15 ] ,  P2 = [ 1 1.2 ; 1.2 1.6 ]

Fig. 2.14: Dashed line for ẋ = A1 x and solid line for ẋ = A2 x.


2. CHAPTER 2 :Stability of Switched Systems 103

Fig. 2.15: Trajectory resulting from switching between A1 and A2 . The lines x2 =
0.5x1 and x2 = −0.25x1 lie in Ω1 ∪ Ω2 .

MATLAB.m 6 (Example 2.5.1).

clear
syms x1 x2
x = [x1; x2];
A1 = [0 10; 0 0];
SW1 = A1*x;
A2 = [1.5 2; -2 -0.5];
SW2 = A2*x;
xSPAN = -2:0.5:2;
LINE1 = -0.25*xSPAN;   % switching line x2 = -0.25 x1
LINE2 = 0.5*xSPAN;     % switching line x2 = +0.50 x1
% switched system 1
[x1, x2] = meshgrid(-2:0.5:2, -2:0.5:2);
x1dot = eval(SW1(1));
x2dot = eval(SW1(2))*x1;   % SW1(2) is 0; this yields a zero field of matching size
plot(xSPAN, LINE1, '-')
hold on
plot(xSPAN, LINE1, '*g')
hold on
quiver(x1, x2, x1dot, x2dot, 'r');
hold on
% switched system 2
x1dot = eval(SW2(1));
x2dot = eval(SW2(2));
plot(xSPAN, LINE2, '-')
hold on
plot(xSPAN, LINE2, '*r')
hold on
quiver(x1, x2, x1dot, x2dot, 'b');
% plot the state trajectory of subsystem 2
f = @(t, y) [(3/2)*y(1) + 2*y(2); -2*y(1) - (1/2)*y(2)];
hold on
y10 = 0.1;
y20 = 0.1;
t0 = 0;
t1 = 5;
[ts, ys] = ode45(f, [t0, t1], [y10; y20]);
plot(ys(:,1), ys(:,2), '-g')

2.6 Stability under Slow Switching (Time Domain


Constraints)

2.6.1 Brief
Theorem 2.6.1. [18] We say that the system
ẋ = f (x) , f (0) = 0
is globally exponentially stable with decay rate λ > 0 if
||x(t)|| ≤ k e^{−λ t} ||x0 ||
holds for any x0 and all t ≥ 0, for some constant k > 0.
Theorem 2.6.2. [15] The switched system
ẋ = Aσ(t) x(t), x(t0 ) = x0            (2.29)
is said to be globally exponentially stable with stability degree λ ≥ 0 if
||x(t)|| ≤ e^{α−λ(t−t0 )} ||x0 ||
holds for all t ≥ t0 and a known constant α.
Note that Theorem (2.6.1) is similar to Theorem (2.6.2) if we set k = e^α .

2.6.2 Dwell Time


The dwell-time approach constrains how fast the switching signal may
switch, so that each stable subsystem remains active long enough for its
transient to decay before the next switch occurs.

Definition 2.6.1 ([2]). The dwell time is the minimum time interval be-
tween two successive switching instants that ensures the stability of the
switched system. It is denoted by a number τd > 0 that forces the switching
times t1 , t2 , · · · of the switching signal to satisfy the inequality
ti − ti−1 ≥ τd , ∀ i
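The dwell-time requirement is easy to check mechanically for a given switching signal; a small sketch (in Python, as an illustration only):

```python
def satisfies_dwell_time(times, tau_d):
    """Check t_i - t_{i-1} >= tau_d for a sorted sequence of switching instants."""
    return all(b - a >= tau_d for a, b in zip(times, times[1:]))

assert satisfies_dwell_time([0.0, 0.5, 1.2, 2.0], 0.5)
assert not satisfies_dwell_time([0.0, 0.5, 0.8], 0.5)
```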
Theorem 2.6.3. [17] Consider the switched system
ẋ = Ai x , ∀ i ∈ P            (2.30)
If all the individual subsystems are asymptotically stable (i.e., all subsystem
matrices Ai are Hurwitz stable), then the switched system is exponentially
stable provided the dwell time is sufficiently large to allow each subsystem
to approach its steady state.

Proof.
Let Φi (t, τ ) denote the transition matrix of the system (2.30).
Since all the subsystems are asymptotically stable, Lyapunov stability theory
yields k > 0 and λ0 > 0 such that
||Φi (t, τ )|| ≤ k e^{−λ0 (t−τ )} , t ≥ τ ≥ 0, ∀ i ∈ P = {1, 2, · · · , N }
where N is the number of subsystems, and λ0 = min_{i∈P} λi and k = max_{i∈P} ki
are a common decay rate and overshoot for the family of subsystems, the
constants λi , ki describing the convergence of each individual subsystem.
Now, let {t1 , t2 , · · · , ti } denote the switching instants in the time interval
(τ, t), with
ti − ti−1 ≥ τd
The solution of the system at a given instant t is (refer to the Appendix,
Theorem 4.13.3)
x(t) = Φσ(ti ) (t, ti )Φσ(ti−1 ) (ti , ti−1 ) · · · Φσ(t1 ) (t2 , t1 )Φσ(τ ) (t1 , τ ) x(τ )

on the interval [τ, t]. Note that this product equals Φ(t, τ ) x(τ ), where

Φ(t, τ ) = e^{Aσ(ti ) (t−ti )} e^{Aσ(ti−1 ) (ti −ti−1 )} · · · e^{Aσ(τ ) (t1 −τ )} , ∀ t ≥ τ

Hence, the transition matrix between two successive switching times
satisfies the inequality:

||Φσ(ti−1 ) (ti , ti−1 )|| ≤ k e^{−λ0 (ti −ti−1 )} ≤ k e^{−λ0 τd}

For any λ ∈ (0, λ0 ) we can write k e^{−λ0 τd} = k e^{−(λ0 −λ)τd} e^{−λτd},
so the system is exponentially stable with rate λ provided
k e^{−(λ0 −λ) τd} ≤ 1.
Taking the logarithm on both sides of this inequality, the condition
can be written as
τd ≥ ln k / (λ0 − λ) , ∀ λ ∈ (0, λ0 )            (2.31)
This is the desired lower bound on the dwell time.
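The bound (2.31) is straightforward to evaluate; a small sketch (in Python, with hypothetical values of k and λ0 chosen only for illustration):

```python
import math

def dwell_time_lower_bound(k, lam0, lam):
    """tau_d >= ln(k)/(lam0 - lam) for any lam in (0, lam0), as in (2.31)."""
    assert k >= 1 and 0 < lam < lam0
    return math.log(k) / (lam0 - lam)

# Hypothetical overshoot k = 4 and common decay rate lam0 = 2:
tau_d = dwell_time_lower_bound(k=4.0, lam0=2.0, lam=1.0)
assert abs(tau_d - math.log(4.0)) < 1e-12
```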

Theorem 2.6.4.
Assume that all the subsystems of the switched system

ẋ = fp (x), p ∈ P            (2.32)

are globally exponentially stable. Then there exists a Lyapunov candidate
function Vp for each p ∈ P together with positive constants ap , bp and cp
satisfying:

ap ||x||² ≤ Vp (x) ≤ bp ||x||²            (2.33)

V̇p (x) ≤ −cp ||x||²            (2.34)

Moreover, there exists a constant scalar µ ≥ 1 such that

Vp (x(ti )) ≤ µVq (x(ti )), ∀ x(t) ∈ Rn , ∀ p, q ∈ P            (2.35)



Proof. Assume all the systems in the family (2.32) are globally exponentially
stable with sufficiently large dwell time τd . Then there exists a Lyapunov func-
tion Vp for each p ∈ P satisfying the first two inequalities of Theorem (2.6.4).
Combining the right-hand side of inequality (2.33) with inequality (2.34),
we obtain

V̇p ≤ −λp Vp , where λp = cp / bp

Integrating for t ∈ [t0 , t0 + τd ),

Vp (x(t0 + τd )) ≤ e^{−λp τd} Vp (t0 )            (2.36)

Without loss of generality, inequality (2.36) implies

Vp (x(ti+1 )) ≤ Vp (x(ti ))            (2.37)

if τd is large enough.
To simplify, consider the case where σ = 1 on the interval [t0 , t1 ) and
σ = 2 on the interval [t1 , t2 ), with P = {1, 2} and ti+1 − ti ≥ τd , i = 0, 1.
From the above inequality we have,
For σ = 1 on t ∈ [t0 , t1 ):

V1 (x(t1 )) ≤ e^{−λ1 τd} V1 (x(t0 ))            (2.38)

a1 ||x(t1 )||² ≤ V1 (t1 ) ≤ b1 ||x(t1 )||²            (2.39)
a1 ||x(t2 )||² ≤ V1 (t2 ) ≤ b1 ||x(t2 )||²            (2.40)

For σ = 2 on t ∈ [t1 , t2 ):

V2 (x(t2 )) ≤ e^{−λ2 τd} V2 (x(t1 ))            (2.41)

a2 ||x(t1 )||² ≤ V2 (t1 ) ≤ b2 ||x(t1 )||²            (2.42)
a2 ||x(t2 )||² ≤ V2 (t2 ) ≤ b2 ||x(t2 )||²            (2.43)

Now, combining the left-hand side of inequality (2.39) with the right-hand
side of inequality (2.42), with the help of inequality (2.38), we obtain

V2 (t1 ) ≤ (b2 /a1 ) V1 (t1 ) ≤ (b2 /a1 ) e^{−λ1 τd} V1 (x(t0 ))            (2.44)

Thus, we conclude that

V1 (x(t1 )) < V1 (x(t0 ))



provided τd is large enough.


In addition, from the last inequalities, we obtain
V1 (t2 ) ≤ (b1 /a2 ) V2 (t2 ) ≤ (b1 /a2 ) e^{−λ2 τd} V2 (t1 ) ≤ (b1 b2 )/(a2 a1 ) e^{−(λ1 +λ2 )τd} V1 (t0 )            (2.45)

Hence, from inequality (2.45) we conclude that

Vj (t) ≤ µ Vi (t) , µ ≥ 1

2.7 Stability of Switched System When all Subsystems are


Hurwitz

Consider the linear subsystems described by

ẋ = Ai x(t) , i = 1, 2, · · · , n            (2.46)

where n ≥ 2, Ai ∈ Rn×n , and x(t) ∈ Rn .

Theorem 2.7.1. Consider the switched subsystems (2.46) under the follow-
ing assumptions:
• Assumption 1: The matrices Ai are all Hurwitz stable (i.e., all their
eigenvalues lie in the open left half of the complex plane).
• Assumption 2: The matrices Ai commute pairwise; that is,

Ai Aj = Aj Ai , ∀ i 6= j

• Assumption 3: Let {ik } ⊂ {1, 2, · · · , n} for k ≥ 0 denote the switching
sequence and let [tk , tk+1 ] denote the time period during which the
ik -th subsystem is active; assume that

lim_{k→∞} tk = ∞

Then the origin of the switched system is globally exponentially
stable under any switching law satisfying Assumption 3.
Proof.
Since the Ai are Hurwitz, there exist constants αi , β > 0 such that for
each i = 1, 2, · · · , n the following inequality holds

||e^{Ai t} || ≤ αi e^{−βt}            (2.47)

Now, for any initial condition (x0 , t0 ) and t ∈ [tk , tk+1 ], the solution of
(2.46) is

x(t) = e^{Aik (t−tk )} e^{Aik−1 (tk −tk−1 )} e^{Aik−2 (tk−1 −tk−2 )} · · · e^{Ai1 (t2 −t1 )} e^{Ai0 (t1 −t0 )} x(t0 )

If we let Ti (t, t0 ), i = 1, 2, · · · , n, denote the total time during which the
i-th subsystem is active between t0 and t, so that Σ_{i=1}^{n} Ti (t, t0 ) = t − t0 ,
then, since the Ai commute, we can group the exponentials and rewrite the
above expression as

x(t) = e^{A1 T1 (t,t0 )} · · · e^{An Tn (t,t0 )} x0            (2.48)

Applying (2.47) yields

||x(t)|| ≤ ( Π_{i=1}^{n} αi ) e^{−β(T1 (t,t0 )+···+Tn (t,t0 ))} ||x0 || = ( Π_{i=1}^{n} αi ) e^{−β(t−t0 )} ||x0 ||

which implies that the switched system (2.46) is globally exponentially stable
under arbitrary switching laws.
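The key steps of the proof, commutativity allowing the exponentials to be regrouped and the resulting contraction, can be verified numerically; a sketch (in Python with SciPy; the matrices are hypothetical, built as polynomials in a single matrix S so that they commute):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical commuting Hurwitz pair: both are polynomials in the same matrix S
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
A1 = -np.eye(2) + S              # eigenvalues -1 +/- i
A2 = -2.0 * np.eye(2) + 3.0 * S  # eigenvalues -2 +/- 3i

assert np.allclose(A1 @ A2, A2 @ A1)     # Assumption 2

# Interleaved switching collapses to e^{A1 T1} e^{A2 T2}, T_i = total active time
M_switch = expm(A1*0.3) @ expm(A2*0.5) @ expm(A1*0.7) @ expm(A2*0.5)
M_group = expm(A1*1.0) @ expm(A2*1.0)
assert np.allclose(M_switch, M_group)

# Contraction over one period: here beta = 1, so the norm decays with T1 + T2
assert np.linalg.norm(M_group, 2) < 1.0
```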

Corollary 2.7.1.
If the matrices Ai do not commute, then the above argument fails, and global
exponential stability of the switched system (2.46) under arbitrary switching
laws is no longer guaranteed.
Theorem 2.7.2. Consider the switched nonlinear systems

ẋ(t) = Ai x(t) + fi (t, x(t)) , i = 1, 2, · · · , n            (2.49)

where the perturbation terms fi are vanishing in the sense that

||fi (t, x(t))|| ≤ γ||x(t)|| , i = 1, 2, · · · , n            (2.50)

Assume that αi , β, γ > 0 are such that condition (2.47) holds. If Assumptions
1 and 2 of Theorem (2.7.1) are true, then for the switched system (2.49) with
initial condition (t0 , x0 ), under any switching law satisfying Assumption 3,
it is true that

||x(t)|| ≤ K0 ||x0 || e^{−(β−K0 γ)(t−t0 )}

where K0 = Π_{i=1}^{n} αi . Therefore, under the condition

γ < β / K0

the equilibrium point is globally exponentially stable.

Proof.
The general solution of (2.49) takes the form

x(t) = e^{Ai (t−t0 )} x0 + ∫_{t0}^{t} e^{Ai (t−τ )} fi (τ, x(τ )) dτ

on any interval where subsystem i is active. For t ∈ [tk , tk+1 ], we therefore
have

x(t1 ) = e^{Ai0 (t1 −t0 )} x(t0 ) + ∫_{t0}^{t1} e^{Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ

x(t2 ) = e^{Ai1 (t2 −t1 )} x(t1 ) + ∫_{t1}^{t2} e^{Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ
      = e^{Ai1 (t2 −t1 )+Ai0 (t1 −t0 )} x(t0 ) + ∫_{t0}^{t1} e^{Ai1 (t2 −t1 )+Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ
      + ∫_{t1}^{t2} e^{Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ

By induction, we have

x(t) = e^{Aik (t−tk )+Aik−1 (tk −tk−1 )+···+Ai0 (t1 −t0 )} x(t0 )
     + ∫_{t0}^{t1} e^{Aik (t−tk )+···+Ai0 (t1 −τ )} fi0 (τ, x(τ )) dτ
     + ∫_{t1}^{t2} e^{Aik (t−tk )+···+Ai1 (t2 −τ )} fi1 (τ, x(τ )) dτ
     + · · · + ∫_{tk−1}^{tk} e^{Aik (t−tk )+Aik−1 (tk −τ )} fik−1 (τ, x(τ )) dτ
     + ∫_{tk}^{t} e^{Aik (t−τ )} fik (τ, x(τ )) dτ            (2.51)

Grouping the elapsed time intervals for each subsystem as in Theorem
(2.7.1), we obtain

||x(t)|| ≤ K0 [ ||x0 || e^{−β(t−t0 )} + ∫_{t0}^{t1} e^{−β(t−τ )} γ||x(τ )|| dτ + · · ·
     + ∫_{tk−1}^{tk} e^{−β(t−τ )} γ||x(τ )|| dτ + ∫_{tk}^{t} e^{−β(t−τ )} γ||x(τ )|| dτ ]

Therefore,

||x(t)|| e^{βt} ≤ K0 ||x0 || e^{βt0} + K0 ∫_{t0}^{t} e^{βτ} γ||x(τ )|| dτ

Using the Gronwall inequality (refer to the Appendix) yields

||x(t)|| ≤ ( K0 ||x0 || e^{βt0} ) e^{γK0 (t−t0 )} e^{−βt}

Thus,
||x(t)|| ≤ K0 ||x0 || e^{−(β−K0 γ)(t−t0 )}
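The Gronwall-based bound can be illustrated numerically on a single perturbed subsystem; the following sketch (in Python with SciPy; the matrix A, the perturbation, and the constants K0 = 2, β = 1.5 are hypothetical, chosen as conservative valid bounds for this particular A) integrates the system and checks the estimate at t = 10:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Perturbed subsystem xdot = A x + f(t, x) with ||f|| <= gamma ||x|| (hypothetical)
A = np.array([[-2.0, 1.0], [0.0, -2.0]])
gamma = 0.5

def rhs(t, x):
    return A @ x + gamma * np.sin(t) * x   # ||f|| = gamma |sin t| ||x|| <= gamma ||x||

x0 = np.array([1.0, 1.0])
sol = solve_ivp(rhs, (0.0, 10.0), x0, rtol=1e-8, atol=1e-10)
x_end = sol.y[:, -1]

# Conservative constants satisfying ||e^{At}|| <= K0 e^{-beta t} for this A:
K0, beta = 2.0, 1.5
bound = K0 * np.linalg.norm(x0) * np.exp(-(beta - K0 * gamma) * 10.0)
assert np.linalg.norm(x_end) <= bound      # Gronwall-based estimate holds
```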

2.7.1 State-Dependent Switching


Switching of a state-dependent switched system occurs when the trajectory
hits a switching surface.

Stability Analysis Based on a Single Lyapunov Function

Proposition 2.7.1. Consider a state-dependent switched linear system
ẋ = Ap x , p ∈ P
and suppose this switched system has a common Lyapunov function V (x) = xT P x
with V̇ < 0, so that the Lyapunov function decreases along all nonzero solu-
tions of each subsystem. Then the switched system is globally
asymptotically stable.
Example 2.7.1. Assume we have two matrices defined as

A1 = [ 0 −1 ; 2 0 ] ,  A2 = [ 0 −2 ; 1 0 ]

Now, define a state-dependent switched linear system in the plane by

ẋ = A1 x , if x1 x2 ≤ 0
ẋ = A2 x , if x1 x2 > 0

Let V (x) = xT x be a Lyapunov function candidate. It is easy to check that
V̇ = xT (AT + A)x equals 2x1 x2 ≤ 0 when A1 is active (x1 x2 ≤ 0) and
−2x1 x2 < 0 when A2 is active (x1 x2 > 0). Thus V (x) decreases along solu-
tions of each subsystem in its active region, and we have global asymptotic
stability even though the individual subsystems are not asymptotically stable.
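The claim that V never increases along the active subsystem can be checked directly, since V̇ = xT(AT + A)x reduces to ±2x1x2 for these matrices; a quick numerical check (a Python sketch):

```python
import numpy as np

A1 = np.array([[0.0, -1.0], [2.0, 0.0]])
A2 = np.array([[0.0, -2.0], [1.0, 0.0]])

def vdot(A, x):                    # V(x) = x^T x  =>  Vdot = x^T (A^T + A) x
    return float(x @ (A.T + A) @ x)

rng = np.random.default_rng(0)
for x in rng.standard_normal((100, 2)):
    active = A1 if x[0] * x[1] <= 0 else A2
    assert vdot(active, x) <= 0    # V never increases along the active subsystem
```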

Stability Analysis Based on Multiple Lyapunov Functions:


In fact, for the previous example there is no global common Lyapunov
function for the two matrices, since they do not commute; i.e., [A1 , A2 ] 6= 0.
Consider the two positive definite symmetric matrices

P1 = [ 2 0 ; 0 1 ] ,  P2 = [ 1/2 0 ; 0 1 ]

and let Vσ (x) = xT Pσ x be Lyapunov functions for the switched systems
ẋ = Aσ x. Let us define a switching surface S = {x : x1 = 0} such that the
function σ takes the value σ = 1 for x1 ≥ 0 and σ = 2 for x1 ≤ 0. Hence,
it is required that each function Vp decrease along solutions of the pth
subsystem; i.e.,
xT (ATp Pp + Pp Ap )x < 0 , ∀ x 6= 0
Proposition 2.7.2. Consider two linear systems ẋ = A1 x and
ẋ = A2 x associated with V1 (x) = xT P1 x and V2 (x) = xT P2 x, respectively.
Assume V̇1 < 0, so that V1 decreases along its solutions in a conic region
Ω1 = { x : V̇1 < 0 }. Similarly, V2 decreases along its solutions in a conic
region Ω2 = { x : V̇2 < 0 }. Then,

(1) In case no sliding motion occurs on the switching surface S :=
{ x : xT P1 x = xT P2 x }:
if

xT (AT1 P1 + P1 A1 )x < 0 whenever xT P1 x ≤ xT P2 x and x 6= 0

and

xT (AT2 P2 + P2 A2 )x < 0 whenever xT P1 x ≥ xT P2 x and x 6= 0

then the function Vσ is continuous and decreases along solutions of the
switched system. Thus, the system is globally asymptotically stable.

(2) In case a sliding motion occurs on the switching surface
S := { x : xT P1 x = xT P2 x }:

The existence of a sliding mode is characterized by the inequalities

xT [ AT1 (P1 − P2 ) + (P1 − P2 )A1 ] x ≥ 0 ∀ x ∈ S

and

xT [ AT2 (P1 − P2 ) + (P1 − P2 )A2 ] x ≤ 0 ∀ x ∈ S

Since a sliding motion occurs, σ is not defined on S. So, without loss
of generality, let σ = 1 on S and suppose that V1 decreases along the
corresponding Filippov solution. For α ∈ [0, 1] we obtain

xT [ (αA1 + (1 − α)A2 )T P1 + P1 (αA1 + (1 − α)A2 ) ] x
= α xT (AT1 P1 + P1 A1 )x + (1 − α) xT (AT2 P1 + P1 A2 )x
≤ α xT (AT1 P1 + P1 A1 )x + (1 − α) xT (AT2 P2 + P2 A2 )x

Therefore, the switched system is globally asymptotically stable.


3. STABILITY OF SWITCHED SYSTEM UNDER SWITCHING
SIGNAL WHEN SUBSYSTEMS ARE HURWITZ AND
NON-HURWITZ

3.1 Introduction

Handling the case when all subsystems are unstable is a significant challenge.
Stabilization of switched systems with unstable subsystems under ADT alone
is not adequate, since the ADT concept was built on the idea that slow
switching dissipates the transient effect introduced at each switching instant:
the decrease of the Lyapunov function during the average dwell time
compensates the possible increase of the Lyapunov function at the switching
instants. Stabilization with unstable subsystems is instead based on computing
minimal and maximal admissible dwell times.

3.2 Stability of Switched Systems When All Subsystems are Unstable
but the Negated Subsystem Matrices (−Ai ) are Hurwitz

Theorem 3.2.1. Consider the switched subsystems

ẋ = Ai x(t) , i = 1, 2, · · · , where Ai ∈ Rn×n , n ≥ 2            (3.1)

under the following assumptions:

• Assumption 1: The matrices Ai are all unstable.

• Assumption 2: The matrices Ai commute pairwise; that is,

Ai Aj = Aj Ai , ∀ i 6= j

• Assumption 3: Let {ik } ⊂ {1, 2, · · · , n} for k ≥ 0 denote the switching
sequence and let [tk , tk+1 ] denote the time period during which the
ik -th subsystem is active; assume that

lim_{k→∞} tk = ∞ (i.e., t0 < t1 < · · · < tk < · · · )

• Assumption 4: −Ai is Hurwitz stable for i = 1, 2, · · · , n.

In addition, assume that there exist constants αi , β, γ > 0 such that the
conditions

||fi (t, x(t))|| ≤ γ||x(t)|| , i = 1, 2, · · · , n

and
||e^{−Ai t} || ≤ αi e^{−βt} , i = 1, 2, · · · , n
hold.
Then for the switched system

ẋ(t) = Ai x(t) + fi (t, x(t)) , i = 1, 2, · · · , n            (3.2)

with initial condition (t0 , x0 ), and under any switching law satisfying
Assumption 3, it is true that

||x(t)|| ≥ (1/K0 ) ||x0 || e^{(β−K0 γ)(t−t0 )}

where K0 = Π_{i=1}^{n} αi . Therefore, if β > K0 γ, the switched system is
exponentially unbounded.
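The unbounded-growth conclusion can be illustrated on the simplest unperturbed case; a sketch (in Python with SciPy; the matrix is hypothetical, chosen so that the constants are trivial):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical unperturbed instance: A unstable, -A Hurwitz
A = np.array([[1.0, 0.0], [0.0, 2.0]])
x0 = np.array([1.0, 1.0])

# Here ||e^{-At}|| = e^{-t}, so alpha = 1, beta = 1, K0 = 1 and gamma = 0,
# and the lower bound reduces to ||x(t)|| >= e^{t} ||x0||
t = 5.0
xt = expm(A * t) @ x0
assert np.linalg.norm(xt) >= np.exp(t) * np.linalg.norm(x0)
```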

3.2.1 Average Dwell Time (ADT)


The purpose of the average dwell time (ADT), denoted τa , is to allow
the possibility of switching fast when necessary and to compensate for this
by switching sufficiently slowly later [2]. Thus, the ADT defines a restricted
class of switching signals that guarantees stability of the whole system. For
instance, subsystems that produce similar trajectories can be switched more
rapidly without causing instability [19], giving rise to a smaller dwell time τ ∗ .

Consider autonomous linear switched system

ẋ(t) = Aσ(t) x(t) , x(t0 ) = x0 (3.3)

where x(t) : [0, ∞) 7→ Rn , σ(t) : [t0 , ∞) 7→ p ∈ P = {1, 2, · · · , N } is
a piecewise constant function of time, called the switching signal, and
Aσ(t) : [t0 , ∞) 7→ {A1 , A2 , · · · , AN } ranges over the subsystem matrices,
which include both Hurwitz stable and unstable matrices.
Assume that A1 , A2 , · · · , Ar are the unstable subsystem matrices in (3.3)
and that Ai for r < i ≤ N are Hurwitz stable.
Then there exist positive scalars λi and αi such that the following
inequalities hold:
||e^{Ai t} || ≤ e^{αi +λi t} , i ≤ r            (3.4)
||e^{Ai t} || ≤ e^{αi −λi t} , r < i ≤ N            (3.5)
Now,at this time, we aim to derive a switching law that incorporates the
ADT such that the switched system (3.3) is exponentially stable.
Definition 3.2.1. [20]
For a switching signal σ(t) and any t > t0 ≥ 0, let Nσ (t, t0 ) denote the
number of switchings over the interval (t0 , t). We say that σ(t) has the ADT
property if
Nσ (t, t0 ) ≤ N0 + (t − t0 )/τa            (3.6)
holds for some N0 ≥ 1 and τa > 0.
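Property (3.6) is easy to test for a concrete switching signal; a small sketch (in Python, with a hypothetical signal chosen for illustration):

```python
def has_adt_property(switch_times, t0, t, N0, tau_a):
    """Check N_sigma(t, t0) <= N0 + (t - t0)/tau_a, as in (3.6)."""
    N = sum(1 for s in switch_times if t0 < s < t)
    return N <= N0 + (t - t0) / tau_a

# A fast burst of switches around t = 3 is still admissible on average:
times = [1.0, 2.5, 3.0, 3.2, 6.0]
assert has_adt_property(times, 0.0, 7.0, N0=2, tau_a=2.0)
```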
Theorem 3.2.2. [21] Consider the switched system (3.3) under the
switching law:
Ts (t)/Tu (t) ≥ (λu + λ∗ )/(λs − λ∗ )            (3.7)
where Ts (t), Tu (t) denote the total activation times of the Hurwitz stable and
unstable subsystems, respectively, during [t0 , t), λu = max_{i≤r} λi ,
λs = min_{i>r} λi , λ ∈ (0, λs ) and λ∗ ∈ (λ, λs ).
Then there is a finite constant τa∗ such that the switched system (3.3) is glob-
ally exponentially stable with stability degree λ over the set of all switching
signals satisfying condition (3.6) for any ADT τa ≥ τa∗ and any chatter
bound N0 ≥ 0.
Proof. Let t1 , · · · , tNσ (t0 ,t) be the switching instants that occur over
the interval (t0 , t). For any t ∈ [ti , ti+1 ), i ∈ {0, 1, · · · , Nσ (t0 ,t)}, we
obtain

x(t) = e^{Aσ(ti ) (t−ti )} e^{Aσ(ti−1 ) (ti −ti−1 )} e^{Aσ(ti−2 ) (ti−1 −ti−2 )} · · · e^{Aσ(t0 ) (t1 −t0 )} x0

Using the inequalities (3.4) and (3.5) and collecting the terms of
Hurwitz stable and unstable subsystems, we obtain:

||x(t)|| ≤ ( Π_{n=1}^{Nσ (t0 ,t)+1} e^{αin } ) e^{λu Tu (t0 ,t) − λs Ts (t0 ,t)} ||x0 ||
        ≤ e^{α + α Nσ (t0 ,t)} e^{λu Tu (t0 ,t) − λs Ts (t0 ,t)} ||x0 ||
        = C e^{α Nσ (t0 ,t) + λu Tu (t0 ,t) − λs Ts (t0 ,t)} ||x0 ||            (3.8)

where α = max_n αin and C = e^α . Now, the switching law (3.7) can be
written as

λu Tu (t0 , t) − λs Ts (t0 , t) ≤ −λ∗ (Tu + Ts ) = −λ∗ (t − t0 )            (3.9)

Thus, we obtain from (3.8) that

||x(t)|| ≤ C e^{α Nσ (t0 ,t) − λ∗ (t−t0 )} ||x0 ||            (3.10)

Since λ∗ > λ (i.e., λ∗ ∈ (λ, λs )), we have two cases here:

• If α ≤ 0, we obtain from (3.10)

||x(t)|| ≤ e^{−λ∗ (t−t0 )} ||x0 || ≤ e^{−λ (t−t0 )} ||x0 ||            (3.11)

since

− λ∗ (t − t0 ) ≤ −λ (t − t0 )            (3.12)

Hence, according to Theorem (2.6.2), (3.11) implies that the switched
system is exponentially stable with degree λ for any average dwell time
and any chatter bound.

• If α > 0, we require from (3.10) that

α Nσ (t0 , t) − λ∗ (t − t0 ) ≤ β − λ(t − t0 )            (3.13)

This is equivalent to

Nσ (t0 , t) ≤ N0 + (t − t0 )/τ ∗

where τ ∗ = α/(λ∗ − λ) and N0 = β/α; here β, and hence N0 , can be
specified arbitrarily.
Note that, since C = e^α , we have α = ln C.

The scalars αi , λi in (3.4), (3.5) are computed using algebraic matrix the-
ory, based on the eigenvalue decomposition A = P J P −1 and the triangle
inequality for norms, where J is the Jordan canonical form whose diagonal
entries are the eigenvalues of A, and the columns of P are linearly
independent (generalized) eigenvectors of A.
Since e^{At} = P e^{Jt} P −1 , we have

||e^{At} || ≤ ||P || ||P −1 || ||e^{Jt} ||

where α = ln(||P || ||P −1 ||) and ||e^{Jt} || ≤ e^{λmax (J) t} .

Finally, we conclude that

||x(t)|| ≤ C e^{β−λ(t−t0 )} ||x0 ||

Thus, the switched system is globally exponentially stable with stability
degree λ for any average dwell time τa ≥ τa∗ and any chatter bound N0 .

3.2.2 Determining the ADT Based on Positive Definite Symmetric
Matrices Pi [21]
Assume that {A1 , · · · , Ak } are unstable and {Ak+1 , · · · , AN } are Hurwitz
stable matrices. Then there exists a set of scalars λ1 , · · · , λN such that
Ai − λi I (i ≤ k) and Ai + λi I (i > k) are all Hurwitz stable. Determining
λi for i > k requires the eigenvalues of Ai + λi I to lie in the open left
half-plane; similarly, determining λi for i ≤ k requires the eigenvalues of
Ai − λi I to lie there.
Also, there exist positive definite matrices Pi such that

(Ai − λi I)T Pi + Pi (Ai − λi I) < 0 (i ≤ k)            (3.14)
(Ai + λi I)T Pi + Pi (Ai + λi I) < 0 (i > k)

hold. Based on the solution of (3.14), we propose a quadratic Lyapunov
function candidate of the form
Vi (t) = xT Pi x            (3.15)
with the following properties:

(A) Each Vi and its derivative satisfy:

V̇i ≤ +2λi Vi (i ≤ k)
V̇i ≤ −2λi Vi (i > k)            (3.16)

(B) There exist constant scalars α2 ≥ α1 > 0 such that

α1 ||x||² ≤ Vi (x) ≤ α2 ||x||² ∀ x ∈ Rn , ∀ i ∈ P            (3.17)


(C) There exists a constant scalar µ ≥ 1 such that

Vi ≤ µVj ∀ i, j ∈ P            (3.18)

where α1 = min {λm (P1 ), λm (P2 )}, α2 = max {λM (P1 ), λM (P2 )} and
µ = α2 /α1 .
Under the switching law (3.7), there is a τa such that the switched system is
globally exponentially stable.

Proof. Property (A) follows from (3.14). For i > k, (3.14) gives

(Ai + λi I)T Pi + Pi (Ai + λi I) < 0 ⇒ ATi Pi + Pi Ai < −2λi Pi

so, along solutions of ẋ = Ai x,

V̇i = ẋT Pi x + xT Pi ẋ = xT (ATi Pi + Pi Ai )x < −2λi xT Pi x = −2λi Vi

and similarly V̇i < 2λi Vi for i ≤ k.
For any t > t0 , let t1 < t2 < · · · < tNσ (t0 ,t) denote the switching instants
of σ(t) over the interval (t0 , t). Then, for any t ∈ [ti , ti+1 ], where
0 ≤ i ≤ Nσ (t0 ,t), λ+ = max_{i≤k} λi and λ− = min_{i>k} λi , we have

V (t) ≤ e^{−2λ− (t−ti )} V (ti ) (i > k)
V (t) ≤ e^{+2λ+ (t−ti )} V (ti ) (i ≤ k)            (3.19)

By induction, applying (3.18) at each switching instant and (3.19) on each
interval between switchings, and grouping the stable and unstable activation
intervals, we obtain

V (t) ≤ µ^{Nσ (t,t0 )} e^{−2λ− T − (t,t0 ) + 2λ+ T + (t,t0 )} V (t0 )            (3.20)

where T − and T + represent the total activation times of the Hurwitz stable
and unstable subsystems during [t0 , t], respectively.
Using (3.17),

||x(t)|| ≤ √(α2 /α1 ) e^{(ln µ/2) Nσ (t,t0 ) − λ− T − (t,t0 ) + λ+ T + (t,t0 )} ||x0 ||

and using (3.9), which holds under the switching law (3.7) for some
λ∗ ∈ (λ, λ− ), we obtain

||x(t)|| ≤ √(α2 /α1 ) e^{(ln µ/2) Nσ (t,t0 ) − λ∗ (t−t0 )} ||x0 ||
        ≤ √(α2 /α1 ) e^{β − λ(t−t0 )} ||x0 ||            (3.21)

Indeed, letting a = ln µ/2, the last step requires

a Nσ (t, t0 ) − λ∗ (t − t0 ) ≤ β − λ(t − t0 )

which is equivalent to

Nσ (t0 , t) ≤ N0 + (t − t0 )/τ ∗

where N0 = 2β/ln µ and τ ∗ = ln µ/(2(λ∗ − λ)).
Hence, (3.21) implies that the switched system (3.3) is globally exponentially
stable with degree λ under the switching law (3.7).

Example 3.2.1. Consider

A1 = [ −9 10 ; −10 11 ] ,  A2 = [ −20 10 ; 10 −20 ]

where A1 is unstable while A2 is Hurwitz stable: λ(A1 ) = {1, 1} and
λ(A2 ) = {−10, −30}. We choose λ+ = 10 such that A1 − λ+ I is Hurwitz
stable, λ− = 9 such that A2 + λ− I is Hurwitz stable, and solving the LMI
(3.14) we obtain

P1 = [ 0.45 −0.05 ; −0.05 0.45 ] ,  P2 = [ 0.5 0.1 ; 0.1 0.5 ]

Thus α1 = 0.4, α2 = 0.6 and µ = 1.5. We choose λ = 1 and λ∗ = 6
arbitrarily. Then, according to the switching law (3.7), we obtain

T − /T + ≥ (λ+ + λ∗ )/(λ− − λ∗ ) = 16/3

and the dwell time must satisfy τd ≥ τ ∗ = ln µ/(2(λ∗ − λ)) ≈ 0.04. Hence
we may activate the stable system for more than 0.04 (say τ = 0.08) and then
activate the unstable system for less than 0.015, since

(3/16) T − ≥ T + ⇒ (3 × 0.08)/16 = 0.015 ≥ T +

The results are shown in Figure 3.1.
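The data of this example can be verified numerically: the LMI (3.14) for the shifted matrices, and the constants α1, α2, µ; a sketch (in Python with NumPy, while the listing below uses MATLAB):

```python
import numpy as np

A1 = np.array([[-9.0, 10.0], [-10.0, 11.0]])
A2 = np.array([[-20.0, 10.0], [10.0, -20.0]])
P1 = np.array([[0.45, -0.05], [-0.05, 0.45]])
P2 = np.array([[0.5, 0.1], [0.1, 0.5]])
I = np.eye(2)

# LMI (3.14): the shifted matrices satisfy strict Lyapunov inequalities
for B, P in ((A1 - 10.0 * I, P1), (A2 + 9.0 * I, P2)):
    Q = B.T @ P + P @ B
    assert np.linalg.eigvalsh(Q).max() < 0

# Constants of properties (B) and (C)
eigs = np.concatenate([np.linalg.eigvalsh(P1), np.linalg.eigvalsh(P2)])
a1, a2 = eigs.min(), eigs.max()
assert np.isclose(a1, 0.4) and np.isclose(a2, 0.6)
assert np.isclose(a2 / a1, 1.5)            # mu = alpha2 / alpha1
```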
MATLAB.m 7 (Example 3.2.1).

clear
clc
A1 = [-9 10; -10 11];
A2 = [-20 10; 10 -20];
P1 = [0.45 -0.05; -0.05 0.45];
P2 = [0.5 0.1; 0.1 0.5];
I = eye(2);
syms x(t) y(t)
X = [x(t); y(t)];
SW1 = (A1 - 10*I)*X;   % shifted subsystem 1 (Hurwitz)
SW2 = (A2 + 9*I)*X;    % shifted subsystem 2 (Hurwitz)
[x1, x2] = dsolve(diff(x) == SW1(1), diff(y) == SW1(2), x(0) == 10, y(0) == 60);
[x3, x4] = dsolve(diff(x) == SW2(1), diff(y) == SW2(2), x(0) == 10, y(0) == 60);

t = 0:0.01:2;
dimB = zeros(length(t), 6);
dimB(:,1) = t;
dimB(:,2) = eval(x1);
dimB(:,3) = eval(x2);

% time histories of subsystem A1
hold on
xlabel('t')
ylabel('x1 & x2')
title('trajectories solution of A1: t vs (x1, x2)')
plot(dimB(:,1), dimB(:,2), 'r')
hold on
plot(dimB(:,1), dimB(:,3), 'g')
hold on

% phase plot of subsystem A1
xlabel('x1')
ylabel('x2')
title('Plot of x1-x2 of system A1')
figure
plot(dimB(:,2), dimB(:,3), 'b')
hold on

% Lyapunov function V1 along the trajectory
syms s1 s2
X = [s1; s2];
V1 = transpose(X)*P1*X;
V1dot = transpose(X)*((transpose(A1 - 10*I)*P1) + (P1*(A1 - 10*I)))*X;
s1 = dimB(:,2);
s2 = dimB(:,3);
dimB(:,4) = eval(V1);
dimB(:,5) = eval(V1);
dimB(:,6) = eval(V1dot);
hold on
xlabel('t')
ylabel('V1')
title('Plot of t, V1 of system A1')
figure
plot(dimB(:,1), dimB(:,4), '--r')
hold on

% vector field of the shifted subsystem 1
xdot12 = (A1 - 10*I)*X;
plotx1 = eval(xdot12(1));
ploty1 = eval(xdot12(2));
figure
plot(plotx1, ploty1, 'r')
hold on

% ------- A2 switched system -------
t = 0:0.01:2;
dimC = zeros(length(t), 6);
dimC(:,1) = t;
dimC(:,2) = eval(x3);
dimC(:,3) = eval(x4);
hold on
xlabel('t')
ylabel('x1 & x2')
title('Plot of t, x1-x2 of system A2')
figure
plot(dimC(:,1), dimC(:,2), ':g')
hold on
plot(dimC(:,1), dimC(:,3), ':b')
hold on
xlabel('x1')
ylabel('x2')
title('Plot of x1-x2 of system A2')
figure
plot(dimC(:,2), dimC(:,3), '--y')
hold on

% Lyapunov function V2 along the trajectory
syms s3 s4
X = [s3; s4];
V2 = (transpose(X)*P2)*X;
V2dot = transpose(X)*((transpose(A2 + 9*I)*P2) + (P2*(A2 + 9*I)))*X;
s3 = dimC(:,2);
s4 = dimC(:,3);
dimC(:,4) = eval(V2);
dimC(:,5) = eval(V2);
dimC(:,6) = eval(V2dot);
hold on
xlabel('t')
ylabel('V2')
title('Plot of t, V2 of system A2')
figure
plot(dimC(:,1), dimC(:,4), '--r')
hold on
plot(dimB(:,1), dimB(:,4), 'g')
hold on

% vector field of the shifted subsystem 2 (A2 + 9*I, matching SW2 above)
xdot34 = (A2 + 9*I)*X;
plotx2 = eval(xdot34(1));
ploty2 = eval(xdot34(2));
figure
hold on
plot(plotx2, ploty2, ':b')
hold on

Fig. 3.1: Results show the Lyapunov functions of A1 , A2 decreasing, since
V̇i (x) < 0 is satisfied.

Example 3.2.2.
Consider the switched system (3.3) with
   
2 2 −2 1
A1 = A2 =
1 3 1 −2

The eigenvalues of both matrices are λ(A1 ) = 1, 4 and λ(A2 ) = −1, −3


which indicate that A1 is unstable whereas A2 is Hurwitz stable.Thus, we
have λu = 4 and λs = 1.Using algebraic matrix theory, we can compute
the scalars λi , αi in (3.4) and (3.5) (we can solve them based on technique
described in the Appendix ) from
   
−1 1 0 2 1
P1 A1 P1 = , P1 =
0 4 −1 1
   
−1 0 1 −1
P2−1 A2 P2 = , P2 =
0 −3 1 1
Thus, we choose α_1 = 0, α_2 = 0.6 according to

||P_2 e^{Jt} P_2^{-1}|| = || [ 1  -1 ;  1  1 ] [ e^{-t}  0 ;  0  e^{-3t} ] [ 1/2  1/2 ;  -1/2  1/2 ] ||
= (1/2) || [ e^{-t}+e^{-3t}   e^{-t}-e^{-3t} ;  e^{-t}-e^{-3t}   e^{-t}+e^{-3t} ] ||
≤ ||P_2|| ||P_2^{-1}|| e^{λ_M(J) t} ≤ e^{α_2 - λ_2 t}
||P_1 e^{Jt} P_1^{-1}|| = || [ 2  1 ;  -1  1 ] [ e^{t}  0 ;  0  e^{4t} ] [ 1/3  -1/3 ;  1/3  2/3 ] ||
= (1/3) || [ 2e^{t}+e^{4t}   -2e^{t}+2e^{4t} ;  -e^{t}+e^{4t}   e^{t}+2e^{4t} ] ||
≤ ||P_1|| ||P_1^{-1}|| e^{λ_M(J) t} ≤ e^{α_1 + λ_1 t}

If we choose t = 0.0001 s, λ = 0.25 < λ⁻ and λ* = 0.5 ∈ (λ, λ⁻), we get 0.33 ≤ α. We choose α_1 = 0 and α_2 = 0.6. The switching law then requires

T_s(t, t_0) / T_u(t, t_0) ≥ (λ_u + λ*) / (λ_s − λ*) = 9

and the ADT is computed as τ_a ≥ τ_a* = α / (λ* − λ) = 2.4. If we choose τ_a ≈ 4.5 and activate the Hurwitz-stable subsystem for a time period T^s = 4.5, then we need to activate the other (unstable) subsystem for T^u = 0.5. The trajectory of the switched system is shown in Figure 3.2, where the initial state is x_0 = [10 20]^T.

Fig. 3.2: Trajectory resulting from switching between A1 and A2 (Example 3.2.2)

3.3 Stability of Switched Systems Consisting of Stable and Unstable Subsystems

Consider the switched system

ẋ = Ai x(t) , f or i = 1, 2 (3.22)
Assume that T > 0 is a constant, where T is the time for which each subsystem is activated; then:

• If subsystems A1, A2 are activated alternately with the same duration T, then the entire system is exponentially stable.

• If subsystems A1, A2 are activated alternately with durations T and 2T, respectively, then the entire system is uniformly stable, but not asymptotically stable.

• If subsystems A1, A2 are activated alternately with durations T and 3T, respectively, then the entire system is unstable.

So, stability of this kind of switched system requires the switching law to be specified.

3.3.1 Stability When One Subsystem is Stable and the Other is Unstable
Theorem 3.3.1. Consider two matrices A_1^u, A_2^s that are non-Hurwitz and Hurwitz, respectively. Assume that there exist β_1, β_2 > 0 and α > 0 such that

||e^{A_1^u t}|| ≤ α e^{β_1 t},   (Unstable subsystem)
||e^{A_2^s t}|| ≤ α e^{−β_2 t},   (Stable subsystem)

Let T^u denote the total time period during which the unstable subsystem {A_1^u} is activated, and let T^s denote the total time period during which the stable subsystem {A_2^s} is activated. Under the hypotheses of Theorem (3.2.1), if

T^s / T^u > β_1 / β_2

then the switched system is asymptotically stable under any switching signal satisfying this condition.
Proof. Let t_0, t_1, · · · , t_n be the switching times of the switched system over the interval (t_0, t). For the solution of (3.22), we obtain

x(t) = e^{A_2(t_n − t_{n−1})} e^{A_1(t_{n−1} − t_{n−2})} · · · e^{A_2(t_2 − t_1)} e^{A_1(t_1 − t_0)} x_0
||x(t)|| ≤ e^{−β_2(t_n − t_{n−1})} e^{β_1(t_{n−1} − t_{n−2})} · · · e^{−β_2(t_2 − t_1)} e^{β_1(t_1 − t_0)} ||x_0||
= e^{−β_2[(t_n − t_{n−1}) + · · · + (t_2 − t_1)]} e^{β_1[(t_{n−1} − t_{n−2}) + · · · + (t_1 − t_0)]} ||x_0||
= e^{−β_2 n T^s} e^{β_1 n T^u} ||x_0||

To obtain asymptotic stability, we have to set −β_2 n T^s + β_1 n T^u < 0. Thus, as n goes to infinity, ||x(t)|| goes to the origin if and only if

T^s / T^u > β_1 / β_2
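The trade-off in Theorem 3.3.1 can be checked numerically. The sketch below (illustrative values for β_1, β_2; not taken from the thesis) evaluates the bound e^{−β_2 n T^s} e^{β_1 n T^u} from the proof:

```python
import math

# Hypothetical rates: the unstable mode grows at beta1, the stable
# mode decays at beta2, so stability needs T_s/T_u > beta1/beta2 = 0.5.
beta1, beta2 = 1.0, 2.0

def bound_after_n_cycles(n, T_s, T_u):
    """Upper bound e^{-beta2 n T_s} e^{beta1 n T_u} on ||x(t)||/||x0||
    after n stable/unstable activation cycles."""
    return math.exp(-beta2 * n * T_s + beta1 * n * T_u)

decaying = bound_after_n_cycles(10, 1.0, 1.0)  # T_s/T_u = 1.0 > 0.5
growing = bound_after_n_cycles(10, 0.4, 1.0)   # T_s/T_u = 0.4 < 0.5
print(decaying, growing)
```

With the ratio above the threshold the bound shrinks toward zero; below it, the bound grows without limit.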

3.3.2 Stability When a Set of Subsystems is Stable and the Other Set is Unstable
Definition 3.3.1. Since the switched system consists of both stable and unstable subsystems, the matrices A_i can be separated into A_i^- and A_i^+. Thus,

{A_i : i = 1, 2, · · · , n} = {A_i^- : i = 1, 2, · · · , r} ∪ {A_i^+ : i = r + 1, · · · , n}

where 1 ≤ r ≤ n − 1, the A_i^- are Hurwitz stable, while the A_i^+ are unstable. Moreover, assume that there exist α_i, β_1, β_2 > 0 such that

||e^{A_i^- t}|| ≤ α_i e^{−β_1 t},   i = 1, 2, · · · , r   (3.23)
||e^{A_i^+ t}|| ≤ α_i e^{β_2 t},   i = r + 1, · · · , n   (3.24)

Let T^-(t, t_0) denote the total time period during which stable subsystems from A_i^- are activated. Similarly, let T^+(t, t_0) denote the total time period during which unstable subsystems from A_i^+ are activated. Now, assume the switching law guarantees that the inequality

inf_{t ≥ t_0} T^-(t, t_0) / T^+(t, t_0) = q > β_2 / β_1

holds for any given initial time t_0.
Theorem 3.3.2. Assume that there exist α_i, β_1, β_2 > 0 such that the conditions

||e^{A_i^- t}|| ≤ α_i e^{−β_1 t},   i = 1, 2, · · · , r
||e^{A_i^+ t}|| ≤ α_i e^{β_2 t},   i = r + 1, r + 2, · · · , n

hold. If assumption 2 of Theorem (3.2.1) is true, then for the switched system (3.1) with initial condition (t_0, x_0), under any switching law satisfying assumption 3 and

inf_{t ≥ t_0} T^-(t, t_0) / T^+(t, t_0) = q > β_2 / β_1   (3.25)
then,
||x(t)|| ≤ K_0 e^{−β(t − t_0)}   (3.26)
where
β = (β_1 q − β_2) / (1 + q)
Proof.
Notice that under assumption (3.25), it is true that

−β_1 T^-(t, t_0) + β_2 T^+(t, t_0) ≤ −(β_1 − β_2/q) T^-(t, t_0)
≤ −(β_1 − β_2/q) (q/(q+1)) (t − t_0)
= −β(t − t_0)
Thus, inequality (3.26) can be derived directly from (2.51):

||x(t)|| ≤ K_0 [ e^{−β_1(t−t_0)} e^{β_2(t−t_0)} ||x_0|| + ∫_{t_0}^{t_1} e^{−β_1(t−τ)} e^{β_2(t−τ)} γ ||x(τ)|| dτ + · · · + ∫_{t_{k−1}}^{t_k} e^{−β_1(t−τ)} e^{β_2(t−τ)} γ ||x(τ)|| dτ + ∫_{t_k}^{t} e^{−β_1(t−τ)} e^{β_2(t−τ)} γ ||x(τ)|| dτ ]

Using the T^-, T^+ notation:

||x(t)|| ≤ K_0 [ ||x_0|| e^{−β_1 T^-(t,t_0)} e^{β_2 T^+(t,t_0)} + ∫_{t_0}^{t_1} e^{−β_1 T^-(t,τ)} e^{β_2 T^+(t,τ)} γ ||x(τ)|| dτ + · · · + ∫_{t_{k−1}}^{t_k} e^{−β_1 T^-(t,τ)} e^{β_2 T^+(t,τ)} γ ||x(τ)|| dτ + ∫_{t_k}^{t} e^{−β_1 T^-(t,τ)} e^{β_2 T^+(t,τ)} γ ||x(τ)|| dτ ]

Grouping the elapsed time intervals for each subsystem,

||x(t)|| ≤ K_0 [ ||x_0|| e^{−β_1 T^-(t,t_0)} e^{β_2 T^+(t,t_0)} + ∫_{t_0}^{t} e^{−β_1 T^-(t,τ)} e^{β_2 T^+(t,τ)} γ ||x(τ)|| dτ ]

Replacing T(t, t_0) with (t − t_0) and then applying the Gronwall inequality yields

||x(t)|| ≤ K_0 ||x_0|| e^{−β_1 T^-(t,t_0) + β_2 T^+(t,t_0) + γ K_0 (t − t_0)}
Thus,
||x(t)|| ≤ K_0 ||x_0|| e^{−β_1 T^-(t,t_0) + β_2 T^+(t,t_0)}

Hence, for stability it is required that

−β_1 T^-(t, t_0) + β_2 T^+(t, t_0) < 0 ⇒ T^-(t, t_0) / T^+(t, t_0) > β_2 / β_1

3.4 Stability of Switched Systems When All Subsystems are Unstable via Dwell-Time Switching

When all subsystems are unstable, i.e., V̇_i(t) < κ V_i(t), κ > 0, i ∈ P, the increase of the Lyapunov function is compensated by a switching law that forces the Lyapunov function to decrease at each switching instant t.

Theorem 3.4.1. [22] Consider the switched system

ẋ = fσ(t) (x(t)) (3.27)

Assume there is a switching sequence T = {t_0, t_1, · · · , t_n} generated by σ(t) : [0, ∞) → {1, 2, · · · , N} = P, where N is the number of modes. If there exists a set of positive definite functions V_i(t, x) : R × R^n → R satisfying the following properties:

α(||x||) ≤ V_i(t, x) ≤ β(||x||),   where α, β ∈ K_∞   (3.28)

V̇_i ≤ κ V_i(t),   κ > 0   (3.29)

and there exists a constant 0 < µ < 1 such that

V_j(t_n^+) ≤ µ V_i(t_n^-),   t_n^+ ≠ t_n^-,   ∀ i ≠ j, i, j ∈ P   (3.30)
ln µ + κ τ_n < 0,   n = 0, 1, 2, · · ·   (3.31)

where V_i(t_n^-) = lim_{t→t_n^-} V_i(t), V_i(t_n^+) = lim_{t→t_n^+} V_i(t), and τ_n = t_{n+1} − t_n is the dwell time for n = 0, 1, 2, · · · , which denotes the length between successive switching instants, then the switched system (3.27) is GUAS with respect to the switching law σ(t).
Proof. When t ∈ (t_n, t_{n+1}), inequality (3.29) gives

V(t) ≤ e^{κ(t − t_n)} V(t_n) ⇒ V(t_{n+1}^-) ≤ e^{κ(t_{n+1} − t_n)} V(t_n)

Suppose the switched system switches from i to j at switching instant t_{n+1}; then, by (3.30),

V_j(t_{n+1}) ≤ µ V_i(t_{n+1}),   i ≠ j, ∀ i, j ∈ P

Combining the last two inequalities, we obtain

V(t_{n+1}) ≤ µ e^{κ(t_{n+1} − t_n)} V(t_n)   (3.32)

Thus,
V(t_{n+1}) ≤ ρ V(t_n)

where ρ = µ e^{κ τ_n}. It follows from inequality (3.31) that ρ < 1, since

ρ = µ e^{κ τ_n} < 1 ⇔ ln µ + κ τ_n < 0

By inequality (3.31), which can be written as

µ < e^{−κ τ_n}

there exists τ_max = sup_{n=0,1,2,···} τ_n. So, to conclude that the switched system (3.27) is GUAS with respect to the switching law σ(t), we can choose

δ(ε) = β^{-1}(e^{−κ τ_max} α(ε))

such that

||x(t_0)|| < δ(ε) ⇒ ||x(t_0)|| < β^{-1}(e^{−κ τ_max} α(ε))   (3.33)

Thus, we can write the above inequality as

β(||x(t_0)||) < e^{−κ τ_max} α(ε)   (3.34)

From inequality (3.28), we obtain

V(t_0) ≤ β(||x_0||)   (3.35)

Combining inequalities (3.34) and (3.35) yields

V(t_0) ≤ β(||x_0||) < e^{−κ τ_max} α(ε)   (3.36)


Since V(t_n) is strictly decreasing, i.e.,

V(t_{n+1}) < · · · < V(t) < · · · < V(t_n) < · · · < V(t_0)

we have
V(t_n) < e^{−κ τ_max} α(ε)   (3.37)

Since V(t) ≤ e^{κ τ_max} V(t_n) over t ∈ [t_n, t_{n+1}), we obtain

V(t) < α(ε) ⇒ α^{-1}(V(t)) < ε
Furthermore,

α(||x(t)||) ≤ V(t) ⇒ ||x(t)|| ≤ α^{-1}(V(t)) < ε

We conclude that

∀ ε > 0, ∃ δ(ε) > 0 such that ||x(t)|| < ε whenever ||x(t_0)|| < δ

which means the system (3.27) is GUS. Also, since the sequence V(t_n), n = 0, 1, 2, · · · , is strictly decreasing, we have lim_{t→∞} ||x(t)|| = 0. Hence, the switched system (3.27) is GUAS under the switching law σ(t).
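The dwell-time condition ln µ + κ τ_n < 0 can be read as an explicit upper bound τ_n < −ln µ / κ. The sketch below (hypothetical κ and µ, not the values of any example in the thesis) checks that a candidate dwell time respects the bound and that the per-switch factor ρ = µ e^{κτ} contracts:

```python
import math

kappa, mu = 0.4, 0.5               # hypothetical growth rate and jump factor
tau_bound = -math.log(mu) / kappa  # ln(mu) + kappa*tau < 0  <=>  tau < this
tau = 1.0                          # candidate dwell time
rho = mu * math.exp(kappa * tau)   # per-switch contraction factor
print(tau_bound, rho)
```

Here ρ < 1, so V(t_{n+1}) ≤ ρ V(t_n) and the Lyapunov sequence decreases geometrically.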

3.4.1 Stabilization for Switched Linear System


Consider the switched system

ẋ(t) = Aσ(t) x(t) (3.38)

with all subsystems unstable. The stability analysis of the switched system (3.38) focuses on computing the minimal and maximal admissible dwell times {τ_min, τ_max}. If the switching law stabilizes the switched system (3.38), then the dwell time τ should be confined between a pair of lower and upper bounds, τ_d ∈ [τ_min, τ_max]; that is, the activation time of each mode must be neither too long nor too short in order to ensure that the switched system (3.38) is GUAS. When unstable subsystems are involved, the Lyapunov functions V_i(t) are allowed to increase with a bounded growth rate described by V̇_i(t) < κ V_i(t).
Since no stable subsystem exists whose activation could compensate the increase of V_i(t), the idea in stabilizing the switched system is to compensate the increase of the Lyapunov function by the switching behavior, so that the Lyapunov function decreases at the switching instants and finally converges to zero.
Note that if we construct a Lyapunov function of the form

V_i(t) = x^T P_i x(t)

then inequality (3.30) turns into

P_j ≤ µ P_i,   i ≠ j

which can never be satisfied for all i ≠ j with 0 < µ < 1. Hence, we propose a Lyapunov function of the form

V_i(t) = x^T(t) P_i(t) x(t)

where P_i(t) is a time-dependent positive definite matrix. Then, inequality (3.30) can be expressed as

P_j(t_n) ≤ µ P_i(t_n)   (3.39)

In practice, however, it is not an easy task to check the existence of a matrix function P_i that satisfies this inequality. Therefore, we resort to a technique called the discretized Lyapunov function to solve this kind of problem.

Definition 3.4.1. [23][Lagrange Linear Interpolation] Linear interpolation is interpolation by the straight line through two arbitrary points (x_1, f_1) and (x_2, f_2), as depicted in Figure 3.3. Thus, the linear polynomial p_1(x) over [x_1, x_2] is given by

p_1(x) = [(x − x_2)/(x_1 − x_2)] f_1 + [(x − x_1)/(x_2 − x_1)] f_2   (3.40)
Fig. 3.3: Linear Interpolation

3.4.2 Discretized Lyapunov Function Technique


The technique is based on dividing the domain of the matrix function P_i(t) into finite discrete points over the interval [t_n, t_{n+1}] by splitting the interval into L segments. Each segment is described by the sub-interval T_{n,q} = [t̄_n, t̄_{n+1}), q = 0, 1, 2, · · · , (L − 1), n = 0, 1, · · · , of equal length h, where h = τ_min/L, t̄_n = t_n + qh and t̄_{n+1} = t_n + (q + 1)h.
Now, assume that P_i is chosen to be a continuous matrix function that is linear on each segment T_{n,q}, and let P_{i,q} describe the straight line between the two points P_i(t̄_n) and P_i(t̄_{n+1}). Resorting to any linear interpolation formula, such as Lagrange interpolation (refer to Definition (3.4.1)), we can define P_{i,q} as

P_{i,q}(t) = [(t − t̄_{n+1})/(t̄_n − t̄_{n+1})] P_i(t̄_n) + [(t − t̄_n)/(t̄_{n+1} − t̄_n)] P_i(t̄_{n+1})   (3.41)

where t̄_n = t_n + qh, t̄_{n+1} = t_n + (q + 1)h, q = 0, 1, · · · , (L − 1), L is the number of segments, and the corresponding discretized Lyapunov function is

V_i(t) = x^T(t) P_{i,q}(t) x(t),   t ∈ T_{n,q} = [t̄_n, t̄_{n+1}]   (3.42)
Moreover, on the interval [t_n + τ_min, t_{n+1}), we set P_i(t) = P_{i,L} (i.e., q = L). Thus, the corresponding discretized Lyapunov function is

V_i(t) = x^T(t) P_{i,L} x(t),   t ∈ [t_n + τ_min, t_{n+1})   (3.43)
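The discretization of (3.41)-(3.43) can be sketched with scalars standing in for the matrices P_{i,q} (the grid values below are hypothetical, for illustration only):

```python
def P_discretized(t, t_n, tau_min, L, P_vals):
    """Evaluate the piecewise-linear (discretized) P_i(t) of (3.41)-(3.43).
    P_vals[q] holds the value at grid point t_n + q*h for q = 0..L; scalars
    stand in for the matrices."""
    h = tau_min / L
    if t >= t_n + tau_min:           # beyond t_n + tau_min: frozen at P_{i,L}
        return P_vals[L]
    q = int((t - t_n) // h)          # segment index within [t_n, t_n + tau_min)
    t_a, t_b = t_n + q * h, t_n + (q + 1) * h
    return ((t - t_b) / (t_a - t_b) * P_vals[q]
            + (t - t_a) / (t_b - t_a) * P_vals[q + 1])

P_vals = [1.0, 2.0, 4.0]             # hypothetical grid values, L = 2
inside = P_discretized(0.25, 0.0, 1.0, 2, P_vals)   # mid-segment: 1.5
frozen = P_discretized(3.0, 0.0, 1.0, 2, P_vals)    # past tau_min: 4.0
print(inside, frozen)
```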

Theorem 3.4.2. Consider the switched linear system

ẋ = Aσ(t) x(t) (3.44)

Since the A_i are unstable matrices, there exists a sufficiently large scalar κ* > 0 such that the matrices A_i − (κ/2)I are Hurwitz stable for κ > κ*. For given scalars κ > κ* > 0, 1 > µ > 0 and 0 < τ_min ≤ τ_max, if there exists a set of matrices P_{i,q} > 0, q ∈ {0, 1, · · · , L − 1, L}, i ∈ P, such that

A_i^T P_{i,q} + P_{i,q} A_i + (1/h)(P_{i,q+1} − P_{i,q}) − κ P_{i,q} < 0   (3.45)
A_i^T P_{i,q+1} + P_{i,q+1} A_i + (1/h)(P_{i,q+1} − P_{i,q}) − κ P_{i,q+1} < 0   (3.46)
A_i^T P_{i,L} + P_{i,L} A_i − κ P_{i,L} < 0   (3.47)
P_{j,0} − µ P_{i,L} ≤ 0,   i ≠ j   (3.48)
ln µ + κ τ_max < 0   (3.49)

where h = τ_min/L, then the switched system (3.44) is GUAS under any switching law. Moreover, τ_min and τ_max can be computed from

τ_max < τ̄_max = max_{τ_max > τ_min} {τ_max : inequality (3.49) holds}   (3.50)
τ_min > τ̄_min = min_{τ_min < τ_max} {τ_min : inequalities (3.45)–(3.48) hold}   (3.51)

Thus, the admissible dwell time is bounded in the interval [τ_min, τ_max].

Proof. Since the A_i − (κ/2)I are Hurwitz stable, there must exist a scalar κ ≥ κ* > 0 satisfying inequality (3.47). Hence, there exists a discretized Lyapunov function candidate for the i-th subsystem:

V_i(t) = x^T(t) P_i(t) x(t) ⇒ V_i(t) = x^T P_{i,q}(t) x(t)

Differentiating and using (3.42), we obtain

V̇_i = x^T Ṗ_i(t) x + 2 ẋ^T P_i(t) x

and over each discretized segment T_{n,q} = [t̄_n, t̄_{n+1}], we obtain
• ∀ t ∈ T_{n,q},

Ṗ_i(t) = Ṗ_{i,q}   (3.52)
= (P_{i,q+1} − P_{i,q}) / (t̄_{n+1} − t̄_n)   (3.53)
= (P_{i,q+1} − P_{i,q}) / [(t_n + (q + 1)h) − (t_n + qh)]   (3.54)
= (P_{i,q+1} − P_{i,q}) / h   (3.55)

where h = τ_min/L, and

2 ẋ^T P_i(t) x = 2 ẋ^T P_{i,q}(t) x   (3.56)
= x^T (A_i^T P_{i,q} + P_{i,q} A_i) x   (3.57)

Since V̇_i < κ V_i, we obtain

x^T (A_i^T P_{i,q} + P_{i,q} A_i) x − κ x^T P_{i,q} x + x^T [(P_{i,q+1} − P_{i,q}) / h] x < 0   (3.58)

• Since P_i(t) = P_{i,L} when q = L, for all t ∈ [t_n + τ_min, t_{n+1}) we have

V_i(t) = x^T P_{i,L} x

Hence,
V̇_i(t) = x^T (A_i^T P_{i,L} + P_{i,L} A_i) x

Since V̇_i − κ V_i < 0, we obtain

A_i^T P_{i,L} + P_{i,L} A_i − κ P_{i,L} < 0   (3.59)

Thus, inequalities (3.58) and (3.59) imply that inequality (3.29) is satisfied.
Furthermore, from (3.39) we get P_j ≤ µ P_i; letting P_j = P_{j,0} (q = 0) and P_i = P_{i,L} (q = L) gives

P_{j,0} − µ P_{i,L} ≤ 0
that is to say, inequality (3.30) is satisfied.
Also, since τ_max ≥ τ_n, ∀ n ∈ {0, 1, 2, · · · }, inequality (3.31) is satisfied.
Finally, according to Theorem (3.4.1), the switched system (3.44) is GUAS under any switching law σ(t) with dwell times in [τ_min, τ_max].

Example 3.4.1. Consider the switched system (3.38) composed of two subsystems:

A_1 = [ -1.9  0.6 ;  0.6  -0.1 ],    A_2 = [ 0.1  -0.9 ;  0.1  -1.4 ]

The eigenvalues of A_1 are λ_1(A_1) = 0.0817, λ_2(A_1) = −2.0817, and the eigenvalues of A_2 are λ_1(A_2) = 0.0374 and λ_2(A_2) = −1.3374.
Since each matrix has an eigenvalue in the right half-plane, both subsystems are unstable.
Given x_0 = [3 5]^T and a periodic switching sequence σ satisfying t_{n+1} − t_n = 2, n = 0, 1, 2, ..., we see that τ_min = τ_max = τ_n = 2. If we choose L = 1, µ = 0.5 and κ = 0.4, then, through inequalities (3.45)–(3.49), the following feasible solution can be obtained:

P_{1,0} = [ 7.3707  2.2116 ;  2.2116  4.3990 ],    P_{1,1} = [ 45.2459  -12.3623 ;  -12.3623  28.8193 ]

P_{2,0} = [ 18.5432  -6.2729 ;  -6.2729  13.0147 ],    P_{2,1} = [ 17.0657  0.9100 ;  0.9100  59.5486 ]
As shown in Figure 3.4, the switched system can be stabilized by a switching signal σ(t) drawn from the set of all switching signals with dwell time τ_n ∈ [τ_min, τ_max] = [2, 2], ∀ n = 0, 1, 2, · · · . As illustrated in Figure 3.5, the corresponding Lyapunov function may increase during the evolution over t ∈ [0, 2), but the increments are compensated by the switching behavior: the Lyapunov function decreases at the switching instant t_1 from V_1(t_1^-), and finally converges to zero.
Fig. 3.4: State trajectories of x(t)

Fig. 3.5: The corresponding Lyapunov function


4. APPENDIX

4.1 Matrices and Matrix Calculus

Every vector x can be expressed as

x = x1 i1 + x2 i2 + · · · + xn in

Define the n × n square matrix I as a basis for the vector x:

I = [ i_1  i_2  · · ·  i_n ]

Hence,

x = I [ x_1 ; x_2 ; ... ; x_n ] = I x̄

where x̄ = [ x_1  x_2  · · ·  x_n ]^T is called the representation of the vector x w.r.t. the basis I = [ i_1  i_2  · · ·  i_n ], and the orthonormal basis vectors are defined as

i_1 = [ 1 ; 0 ; ... ; 0 ],   i_2 = [ 0 ; 1 ; ... ; 0 ],   · · · ,   i_n = [ 0 ; 0 ; ... ; 1 ]

Associating the representation of the vector x with respect to this basis, we have

x = [ x_1 ; ... ; x_n ] = x_1 i_1 + x_2 i_2 + · · · + x_n i_n = I_n [ x_1 ; ... ; x_n ]

where I_n is the n × n unit matrix. So, the representation of any vector x with respect to the orthonormal basis equals x itself.
4. Appendix 139

Example 4.1.1. Consider the vector x = [ 1  3 ]^T in R². The two vectors q_1 = [ 3  1 ]^T and q_2 = [ 2  2 ]^T are linearly independent and can be used as a basis for x. Thus, with Q = [ q_1  q_2 ], we have

x = Q x̄ = [ 3  2 ;  1  2 ] [ x̄_1 ; x̄_2 ] ⇒ x̄ = [ -1 ; 2 ]

MATLAB.m 8.

Q = [3 2; 1 2]
x = [1; 3]
xBAR = inv(Q)*x

Some Vector Properties in Rn

Vectors in R^n can be added or multiplied by a scalar. The inner product of two vectors x and y is

x^T y = Σ_{i=1}^{n} x_i y_i
i=1

4.2 Vector and Matrix Norms[4]

The norm ||x|| of a vector x is a real-valued function with the following


properties

• ||x|| ≥ 0 for all x ∈ Rn , with ||x|| = 0 if and only if x = 0.

• ||x + y|| ≤ ||x|| + ||y|| ∀x, y ∈ Rn .

• ||αx|| = |α| ||x|| ∀ α ∈ R , x ∈ Rn .

The second property is called the triangle inequality. We shall consider the class of p-norms, defined by

||x||_p = (|x_1|^p + |x_2|^p + . . . + |x_n|^p)^{1/p},   1 ≤ p < ∞

and
||x||∞ = max |xi |
i
The three most commonly used norms are ||x||_1, ||x||_∞, and the Euclidean norm

||x||_2 = (|x_1|^2 + |x_2|^2 + . . . + |x_n|^2)^{1/2} = (x^T x)^{1/2}

All p-norms are equivalent in the sense that if || · ||_α and || · ||_β are two different p-norms, then there exist positive constants c_1 and c_2 such that

c_1 ||x||_α ≤ ||x||_β ≤ c_2 ||x||_α
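The p-norms and the equivalence chain ||x||_∞ ≤ ||x||_2 ≤ ||x||_1 are easy to check numerically; the test vector below is arbitrary:

```python
def p_norm(x, p):
    """The p-norm ||x||_p for 1 <= p < infinity."""
    return sum(abs(xi) ** p for xi in x) ** (1 / p)

def inf_norm(x):
    """The infinity-norm ||x||_inf = max_i |x_i|."""
    return max(abs(xi) for xi in x)

x = [3.0, -4.0]
n1, n2, ninf = p_norm(x, 1), p_norm(x, 2), inf_norm(x)
print(n1, n2, ninf)   # 7.0 5.0 4.0
```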

4.3 Matrices p-norms

Definition 4.3.1. [4] A matrix norm || · || on Rn×m is a function || · || :


Rn×m 7→ R that satisfies the following axioms:
• ||A|| ≥ 0 , A ∈ Rn×m .

• ||A|| = 0 if and only if A = 0.

• ||αA|| = |α|||A|| , where α ∈ R.

• ||A + B|| ≤ ||A|| + ||B|| , where A, B ∈ Rn×m .


For A ∈ R^{n×m}, the induced p-norm of A is defined by

||A||_p = sup_{x ≠ 0} ||Ax||_p / ||x||_p,   1 ≤ p ≤ ∞

which for p = 1, 2, ∞ is given by

||A||_1 = max_j Σ_{i=1}^{n} |a_{ij}|;   ||A||_2 = (λ_max(A^T A))^{1/2};   ||A||_∞ = max_i Σ_{j=1}^{m} |a_{ij}|

where λ_max(A^T A) is the maximum eigenvalue of A^T A.
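The 1-norm (maximum absolute column sum) and ∞-norm (maximum absolute row sum) can be sketched directly; the sample matrix reuses A from Example (4.3.2):

```python
def norm_1(A):
    """Induced 1-norm: maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

def norm_inf(A):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[-1.0, 2.0], [-10.0, 8.0]]
print(norm_1(A), norm_inf(A))   # 11.0 18.0
```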




Example 4.3.1. Consider the matrix

A = [ a  1 ;  0  b ]

The norm ||A||_2 is obtained by computing the eigenvalues of A^T A:

det(λI − A^T A) = det [ λ − a²   −a ;  −a   λ − b² − 1 ]

and the roots of the quadratic λ² − (1 + a² + b²)λ + a²b² are

λ = [ 1 + a² + b² ± √((1 + a² + b²)² − 4a²b²) ] / 2

Thus, with the largest root we obtain

||A||_2 = [ √((a + b)² + 1) + √((a − b)² + 1) ] / 2
2
Definition 4.3.2. Let A be a square n × n matrix; the trace is the sum of the diagonal elements,

tr A = Σ_{i=1}^{n} a_{ii}

Moreover, tr A ≡ tr A^T.
Definition 4.3.3. [8] For any square matrix A of order n (i.e., n × n), the determinant of A is denoted by det(A) or |A| and defined as

det A = Σ_{j=1}^{n} (−1)^{i+j} a_{ij} M_{ij}

where i is chosen fixed, i ≤ n, and M_{ij} is the minor determinant of order (n − 1).
Definition 4.3.4. Let A be an n × n matrix. The equation

|A − λI| = 0

is called the characteristic equation of the matrix A, and the eigenvalues of A are its roots. After determining the eigenvalues λ_i, we solve the homogeneous system

(A − λ_i I)x = 0

for each λ_i, i = 1, 2, · · · , n, to obtain the corresponding eigenvectors x ≠ 0 of order n × 1.
Theorem 4.3.1 (Cayley-Hamilton).
If A is a square matrix of order n × n with characteristic polynomial

det(A − λI) = p_n λ^n + p_{n−1} λ^{n−1} + · · · + p_0,   where p_n = (−1)^n

then A satisfies

p_n A^n + p_{n−1} A^{n−1} + · · · + p_1 A + p_0 I = 0
Example 4.3.2. Consider the square 2 × 2 matrix

A = [ -1  2 ;  -10  8 ]

The eigenvalues λ and the corresponding eigenvectors of the matrix A are obtained by solving the characteristic equation and the homogeneous system (A − λI)x = 0, respectively:

|A − λI| = | −1 − λ   2 ;  −10   8 − λ | = 0

Thus λ² − 7λ + 12 = 0; hence λ_1 = 3, λ_2 = 4.
The eigenvector corresponding to the eigenvalue λ_1 is obtained from

(A − 3I)x = [ -4  2 ;  -10  5 ] [ x_1 ; x_2 ] = 0 ⇒ 2x_1 − x_2 = 0, i.e., x_1 = x_2/2

Hence, the eigenvector x is given by

x = [ x_1 ; x_2 ] = [ x_2/2 ; x_2 ] = x_2 [ 1/2 ; 1 ]

Setting x_2 = 2, we obtain the eigenvector [ 1  2 ]^T.
Corresponding to the eigenvalue λ_2 = 4, we have

(A − 4I)x = [ -5  2 ;  -10  4 ] [ x_1 ; x_2 ] = 0 ⇒ 5x_1 − 2x_2 = 0, i.e., x_1 = (2/5)x_2

Setting x_2 = 5, the eigenvector is x = [ x_1  x_2 ]^T = [ 2  5 ]^T.
Since

λ² − 7λ + 12 = 0

the Cayley-Hamilton Theorem gives

A² − 7A + 12I = 0

Hence A² = 7A − 12I:

A² = 7 [ -1  2 ;  -10  8 ] − 12 [ 1  0 ;  0  1 ] = [ -7  14 ;  -70  56 ] + [ -12  0 ;  0  -12 ] = [ -19  14 ;  -70  44 ]
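The Cayley-Hamilton identity above can be verified by computing A² directly and comparing it with 7A − 12I:

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 2], [-10, 8]]
A2 = matmul2(A, A)
# Cayley-Hamilton for this A: A^2 = 7A - 12I
rhs = [[7 * A[i][j] - 12 * (1 if i == j else 0) for j in range(2)]
       for i in range(2)]
print(A2)    # [[-19, 14], [-70, 44]]
print(rhs)   # same matrix
```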
Definition 4.3.5. Assume that A has distinct eigenvalues, and P is a ma-


trix that has columns corresponding to the eigenvectors for the matrix A,
then P −1 AP is a diagonal matrix, with the eigenvalues of A as the diagonal
entries.

Example 4.3.3. Based on Example (4.3.2), we have

[ 1  2 ;  2  5 ]^{-1} [ -1  2 ;  -10  8 ] [ 1  2 ;  2  5 ] = [ 3  0 ;  0  4 ]

where P = [ 1  2 ;  2  5 ] and P^{-1} = [ 5  -2 ;  -2  1 ].
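The diagonalization can be reproduced numerically; since det(P) = 1 here, the inverse in the sketch is exact:

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 2], [-10, 8]]
P = [[1, 2], [2, 5]]
Pinv = [[5, -2], [-2, 1]]           # exact, since det(P) = 1
D = matmul2(Pinv, matmul2(A, P))    # P^{-1} A P
print(D)                            # [[3, 0], [0, 4]]
```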

4.4 Quadratic Forms

Consider an n × n matrix P and any n × 1 vector x, both with real entries; then the product x^T P x is called a quadratic form in x.

Definition 4.4.1 (Rayleigh-Ritz Inequality [9]). For any n × n symmetric matrix P and any n × 1 vector x, the inequality

λ_min(P) x^T x ≤ x^T P x ≤ λ_max(P) x^T x

is satisfied, where λ_min, λ_max denote the smallest and largest eigenvalues of P.

Definition 4.4.2. We say that the matrix P is positive definite if and only if all of its leading principal minors are positive.

Example 4.4.1. The symmetric matrix

P = [ a  b ;  b  d ]

is positive definite if a > 0 and ad − b² > 0, and positive semidefinite if a ≥ 0 and ad − b² ≥ 0.
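The Rayleigh-Ritz bounds can be sketched for a concrete symmetric matrix; P = [[2, 1], [1, 2]] below has eigenvalues 1 and 3, and the test vector is arbitrary:

```python
# symmetric 2x2 P with known eigenvalues 1 and 3
P = [[2.0, 1.0], [1.0, 2.0]]
lam_min, lam_max = 1.0, 3.0

x = [1.0, -2.0]
xTx = x[0] ** 2 + x[1] ** 2
# quadratic form x^T P x for a symmetric 2x2 matrix
xTPx = P[0][0] * x[0] ** 2 + 2 * P[0][1] * x[0] * x[1] + P[1][1] * x[1] ** 2
print(lam_min * xTx, xTPx, lam_max * xTx)   # 5.0 6.0 15.0
```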

4.5 Matrix Calculus

Definition 4.5.1. [9]


• Differentiation d/dt A(t) and integration ∫_0^t A(δ) dδ of matrices are defined entry-by-entry:

∫_0^t A(δ) dδ = [ ∫_0^t a_{ij}(δ) dδ ],   (d/dt) A(t) = [ (d/dt) a_{ij}(t) ]

• The product rule holds for differentiation of matrices:

(d/dt)[A(t)B(t)] = Ȧ(t)B(t) + A(t)Ḃ(t)

and

(d/dt) A²(t) = Ȧ(t)A(t) + A(t)Ȧ(t)

• For matrix functions, we have

∫_0^t (d/dδ) A(δ) dδ = A(t) − A(0)

• Moreover, from the triangle inequality,

|| ∫_{t_0}^{t} x(τ) dτ || ≤ ∫_{t_0}^{t} ||x(τ)|| dτ

Definition 4.5.2 (Leibniz rule).
If a matrix A depends on two variables t and δ, then

(d/dt) ∫_{f(t)}^{g(t)} A(t, δ) dδ = A(t, g(t)) ġ(t) − A(t, f(t)) ḟ(t) + ∫_{f(t)}^{g(t)} (∂/∂t) A(t, δ) dδ
4.6 Topological Concepts in Rn

Convergence of Sequences
Definition 4.6.1. A sequence of n × 1 vectors x_0, x_1, . . . in R^n, denoted by {x_n}_{n=0}^∞, is said to converge to a vector x̄ (called the limit of the sequence) if

||x_n − x̄|| → 0 as n → ∞

which is equivalent to saying that, for any ε > 0, there is N(ε) such that

||x̄ − x_n|| < ε,   ∀ n ≥ N(ε)

written as lim_{n→∞} x_n = x̄.

Definition 4.6.2 ([9]). For an infinite series of vector functions

Σ_{i=0}^{∞} x_i(t)

with each x_i(t) defined on the interval [t_0, t_1], convergence is defined in terms of the sequence of partial sums

s_k(t) = Σ_{i=0}^{k} x_i(t)

The series converges to the function x(t) if, for each t_q ∈ [t_0, t_1],

lim_{k→∞} ||x(t_q) − s_k(t_q)|| = 0

and it is said to converge uniformly to x(t) on the interval [t_0, t_1] if, for a given ε > 0, there exists a positive integer K(ε) such that for every t ∈ [t_0, t_1],

||x(t) − s_k(t)|| < ε,   k > K(ε)

Sets
• Open Subsets:
A subset S ⊂ Rn is said to be open if, for every vector x ∈ S one can
find an  − neighborhood of x such that

N (x, ) = {z ∈ Rn : ||z − x|| < } ⊂ S


• Closed Subset:
A subset S is closed if every convergent sequence {x_n} with elements in S converges to a point in S.

• Bounded Set:
A set S is bounded if there is r > 0 such that ||x|| ≤ r ∀ x ∈ S.

• Compact Set:
A set S is compact if it is closed and bounded.

• The Interior of a Set S:
The interior of a set S is S − ∂S, where ∂S, the boundary of S, is the set of all boundary points of S.

• Connected Set:
An open set S is connected if every pair of points in S can be joined by an arc lying in S.

• Convex Set:
A set S is convex if, for every x, y ∈ S and every real number θ, 0 < θ < 1, the point θx + (1 − θ)y on the line segment joining x and y also belongs to S.

4.7 Mean Value Theorem

Assume that f : R^n → R is continuously differentiable at each point x. Let x, y ∈ R^n be such that the line segment L(x, y) ⊂ R^n. Then there exists a point z on the line segment L(x, y) such that

f(y) − f(x) = (∂f/∂x)|_{x=z} (y − x)

4.8 Supremum and Infimum Bounds

Definition 4.8.1. The supremum( sup ) and infimum( inf ) defined as

• The supremum of a nonempty set S ⊂ R of scalars, denoted by sup S,


is defined to be the smallest scalar x such that x ≥ y for all y ∈ S.

• The infimum of a nonempty set S ⊂ R of scalars,denoted by inf S, is


defined to be the largest scalar x such that x ≤ y for all y ∈ S.
Definition 4.8.2. Let D ⊂ R^n and let f : D → R^n. We say that f is bounded if there exists α > 0 such that ||f(x)|| ≤ α for all x ∈ D.
Definition 4.8.3. In the case where

• f : R^n → R is continuously differentiable,

∂f/∂x = f'(x) = [ ∂f/∂x_1 , · · · , ∂f/∂x_n ] ∈ R^{1×n}

is called the gradient of f at x;

• f : R^m → R^n is continuously differentiable,

∂f/∂x = f'(x) = [ ∂f_1/∂x_1 (x)  · · ·  ∂f_1/∂x_m (x) ;  ...  ;  ∂f_n/∂x_1 (x)  · · ·  ∂f_n/∂x_m (x) ] ∈ R^{n×m}

is called the Jacobian of f at x.

4.9 Jordan Form

Consider the linear dynamical system

ẋ = Ax(t),   t ≥ 0   (4.1)

whose general solution has the form

x(t) = e^{At} x(0)

where A is a real n × n matrix.

Definition 4.9.1. [25] We can transform the matrix A into Jordan canonical form J by using the transformation x = Py to obtain the equivalent system

ẏ = P^{-1} A P y = J y

Thus, J = P^{-1}AP, i.e., A = P J P^{-1}. Hence, the solution of the dynamical system is given by

x(t) = e^{At} x(0) = P e^{Jt} P^{-1} x(0)

where P is a real, nonsingular (invertible), constant n × n matrix.


Definition 4.9.2. Without loss of generality, the Jordan canonical form J of the matrix A is given by

J = diag( J_0, J_1, · · · , J_m )

where

J_0 = diag( λ_1, λ_2, · · · , λ_k )

and

J_i = λ_{k+i} I + N_i,   ∀ i ∈ {1, 2, .., m}

The structure of N_i takes the form

N_i = [ 0  1  0  · · ·  0 ;  0  0  1  · · ·  0 ;  ...  ;  0  0  0  · · ·  1 ;  0  0  0  · · ·  0 ]_{k_i × k_i}

where N_i, the k_i × k_i nilpotent matrix, has all zero entries except for 1's above the diagonal. So, we have

e^{Jt} = diag( e^{J_0 t}, e^{J_1 t}, · · · , e^{J_m t} )

and

e^{J_i t} = e^{λ_{k+i} t} [ 1  t  t²/2!  · · ·  t^{k_i−1}/(k_i−1)! ;  0  1  t  · · ·  t^{k_i−2}/(k_i−2)! ;  ...  ;  0  0  0  · · ·  1 ]

or, in equivalent form,

e^{λ_k I t} e^{N_k t} = e^{λ_k t} ( I + N t + (1/2!) N² t² + · · · + (1/(k−1)!) N^{k−1} t^{k−1} )
We are concerned with determining e^{At} = P e^{J_0 t} P^{-1} when all eigenvalues of A occur in the Jordan form only in J_0 and not in any of the Jordan blocks J_i, i = 1, 2, ..., m.

Definition 4.9.3. [3] There exists a nonsingular n × n matrix P such that P^{-1} A P = J, the so-called Jordan form. If the eigenvalues of the matrix A are distinct, then J is diagonal with the eigenvalues as diagonal elements; that is,

J = J_0 = diag( λ_1, λ_2, · · · , λ_n )
Definition 4.9.4. If A is n × n and v is a nonzero n × 1 vector such that for some scalar λ

A v = λ v

then v is an eigenvector corresponding to the eigenvalue λ. Also, there exists an invertible n × n matrix P whose columns form a set of linearly independent eigenvectors of A, such that P^{-1}AP is a diagonal matrix similar to A with the eigenvalues of A as the diagonal entries.

Definition 4.9.5. [9] For a square matrix A, there exists an invertible n × n matrix P such that J = P^{-1}AP has the eigenvalues of A on its diagonal. Moreover, if A has distinct eigenvalues, then P can be constructed from the eigenvectors of A. Finally, since P is a nonsingular (invertible) constant n × n matrix, for every t we have

e^{P^{-1}AP t} = P^{-1} e^{At} P
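The identity e^{At} = P e^{Jt} P^{-1} can be checked numerically for a diagonalizable matrix. The sketch below reuses A, P from Examples (4.3.2)-(4.3.3) (eigenvalues 3 and 4) and compares the Jordan-form expression with a truncated Taylor series for e^{At}:

```python
import math

def matmul2(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_taylor(A, t, terms=30):
    """Truncated Taylor series for e^{At}; adequate for small ||A||t."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = matmul2(term, [[a * t / k for a in row] for row in A])
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return E

A = [[-1.0, 2.0], [-10.0, 8.0]]
P = [[1.0, 2.0], [2.0, 5.0]]
Pinv = [[5.0, -2.0], [-2.0, 1.0]]
t = 0.1
Jt = [[math.exp(3 * t), 0.0], [0.0, math.exp(4 * t)]]  # e^{Jt} for J = diag(3, 4)
via_jordan = matmul2(P, matmul2(Jt, Pinv))
direct = expm_taylor(A, t)
print(via_jordan)
print(direct)   # agrees to high accuracy
```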

Example 4.9.1. Consider the dynamical system ẋ = A x(t), where A is the square matrix

A = [ 0  -2 ;  2  -2 ]

Since

det(λI − A) = det [ λ  2 ;  -2  λ + 2 ] = λ² + 2λ + 4

A has eigenvalues λ = −1 ± i√3.
The eigenvector of −1 + i√3 is [ 2   1 − i√3 ]^T; similarly, the eigenvector of −1 − i√3 is [ 2   1 + i√3 ]^T.
Then, the invertible matrix

P = [ 2   2 ;  1 − i√3   1 + i√3 ]

yields the diagonal form

J = P^{-1} A P = [ −1 + i√3   0 ;  0   −1 − i√3 ]

Hence,

e^{At} = e^{P J P^{-1} t} = P e^{Jt} P^{-1}

4.10 The Weighted Logarithmic Matrix Norm and Bounds


of The Matrix Exponential[27]

Definition 4.10.1. For any real matrix A, the induced matrix 2-norm, denoted by || · ||_2, and the logarithmic matrix norm, denoted by µ_2(A), are defined by

||A||_2 = √(λ_MAX(A^T A))   (4.2)
µ_2(A) = λ_MAX( (A + A^T)/2 )   (4.3)

where A = P J P^{-1} and λ_MAX stands for the maximal eigenvalue of a symmetric matrix.

Theorem 4.10.1. For a real matrix A, we have

||e^{At}||_2 ≤ β e^{µ_P[A] t}   (4.4)

where P is given by the Lyapunov equation, β = √(λ_max(P)/λ_min(P)) and µ_P[A] = −1/λ_max(P).
Example 4.10.1. Consider the Hurwitz stable matrix

A = [ -0.8  0.4  0.2 ;  1  -3  2 ;  0  1  -1 ]

On solving the Lyapunov equation, we obtain

P = [ 5.0333  3.0266  6.9401 ;  3.0266  2.5477  5.4324 ;  6.9401  5.4324  13.2528 ]

We have µ_2(A) = 0.0359, the maximum real part of the eigenvalues of A is −0.0566, µ_P[A] = −0.0514 and β = 8.1966. According to Theorem (4.10.1), we have

||e^{At}||_2 ≤ 8.1966 e^{−0.0514 t},   ∀ t ≥ 0
Definition 4.10.2. [28] Consider a matrix A that can be diagonalized, so that

P^{-1} A P = J = diag(λ_1, · · · , λ_m)

where the λ_i are the eigenvalues of A; then

||e^{At}|| = ||P e^{Jt} P^{-1}|| ≤ ||P|| ||P^{-1}|| e^{λ_Max(J) t}

4.11 Continuous and Differentiable Functions

Definition 4.11.1 (Continuous Functions). A function f mapping a set S_1 into a set S_2 is denoted by f : S_1 → S_2. A function f : R^n → R^m is said to be continuous at a point x if, ∀ ε > 0, ∃ δ > 0 such that

||x − y|| < δ ⇒ ||f(x) − f(y)|| < ε
Definition 4.11.2 (Piecewise Continuous Functions). A function f : R → Rⁿ is said to be piecewise continuous on an interval J ⊂ R if, for every bounded sub-interval J₀ ⊂ J, f is continuous for all x ∈ J₀ except at a finite number of points where f may have discontinuities. In addition, at each point of discontinuity x₀, the right-side limit lim_{h→0⁺} f(x₀ + h) and the left-side limit lim_{h→0⁺} f(x₀ − h) exist.
Definition 4.11.3 (Differentiable Functions). A function f : R → R is said to be differentiable at x if the limit
f′(x) = lim_{h→0} (f(x + h) − f(x))/h
exists.
4. Appendix 152

Definition 4.11.4. A function f : Rⁿ → Rᵐ is said to be continuously differentiable at a point x₀ if the partial derivatives ∂fᵢ/∂xⱼ exist and are continuous at x₀, ∀ 1 ≤ i ≤ m, 1 ≤ j ≤ n.

For a continuously differentiable function f : Rⁿ → R, the row vector ∂f/∂x is defined by
∂f/∂x = [∂f/∂x₁, · · · , ∂f/∂xₙ]
and the gradient vector, denoted by ∇f(x), is
∇f(x) = (∂f/∂x)ᵀ
For a continuously differentiable function f : Rⁿ → Rᵐ, the Jacobian matrix [∂f/∂x] is an m × n matrix whose element in the ith row and jth column is ∂fᵢ/∂xⱼ.

4.12 Gronwall-Bellman Inequality

Lemma 4.12.1. [5] Let λ : [t₀, t₁] → R be continuous and µ : [t₀, t₁] → R be continuous and non-negative. If a continuous function y : [t₀, t₁] → R satisfies

y(t) ≤ λ(t) + ∫_{t₀}^{t} µ(s) y(s) ds, ∀ t₀ ≤ t ≤ t₁   (4.5)

then, on the same interval,

y(t) ≤ λ(t) + ∫_{t₀}^{t} µ(s) λ(s) e^{∫_s^t µ(τ)dτ} ds

In particular, if λ(t) ≡ λ is a constant, then

y(t) ≤ λ e^{∫_{t₀}^{t} µ(τ)dτ}

Also, if µ(t) ≡ µ ≥ 0 is a constant, then

y(t) ≤ λ e^{µ(t−t₀)}

Proof. The proof is based on defining a new variable and transforming the
integral inequality into a differential equation, which can be easily solved.
From equation (4.5), let
Z t
v(t) = µ(s)y(s)ds
t0

Thus,
v̇ = µ(t)y(t) (4.6)
and multiplying both sides of inequality (4.5) by µ(t) ≥ 0 yields
µ(t)y(t) ≤ µ(t)λ(t) + µ(t) ∫_{t₀}^{t} µ(s)y(s)ds
⇒ µ(t)y(t) ≤ µ(t)λ(t) + µ(t)v(t)   (4.7)


Then, using (4.6), (4.7) lead to

v̇ = µ(t)y(t) ≤ µ(t)λ(t) + µ(t)v(t) (4.8)

Let us now define a new function that measures the slack in (4.8):
ζ(t) = µ(t)y(t) − [µ(t)λ(t) + µ(t)v(t)]
which is non-positive in view of inequality (4.8).
Thus,
⇒ ζ(t) = v̇(t) − µ(t)λ(t) − µ(t)v(t)
⇒ v̇(t) − µ(t)v(t) = ζ(t) + µ(t)λ(t) (4.9)
Solving equation (4.9) with the initial condition v(t₀) = 0 yields
v(t) = ∫_{t₀}^{t} [µ(τ)λ(τ) + ζ(τ)] e^{∫_τ^t µ(r)dr} dτ

Since ζ(t) is a non-positive function, therefore
v(t) ≤ ∫_{t₀}^{t} µ(τ)λ(τ) e^{∫_τ^t µ(r)dr} dτ

Using the definition of v(t) = ∫_{t₀}^{t} µ(s)y(s)ds,
∫_{t₀}^{t} µ(s)y(s)ds ≤ ∫_{t₀}^{t} µ(τ)λ(τ) e^{∫_τ^t µ(r)dr} dτ   (4.10)

and rearranging inequality (4.5) gives
y(t) − λ(t) ≤ ∫_{t₀}^{t} µ(s)y(s)ds   (4.11)

Thus, combining (4.10) and (4.11) yields
y(t) − λ(t) ≤ ∫_{t₀}^{t} µ(τ)λ(τ) e^{∫_τ^t µ(r)dr} dτ
⇒ y(t) ≤ λ(t) + ∫_{t₀}^{t} µ(τ)λ(τ) e^{∫_τ^t µ(r)dr} dτ
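The constant-coefficient case can be illustrated numerically (a sketch; λ, µ and the grid are arbitrary test data). The bound is attained with equality by the solution of ẏ = µy, y(t₀) = λ, so we check it on that worst case and on a damped copy:

```python
import numpy as np

lam, mu, t0 = 2.0, 0.8, 0.0
ts = np.linspace(t0, 5.0, 501)

# worst case: equality in the integral inequality gives y(t) = lam * exp(mu (t - t0))
y = lam * np.exp(mu * (ts - t0))

# check y(t) <= lam + \int_{t0}^{t} mu y(s) ds via cumulative trapezoidal quadrature
rhs = lam + np.concatenate(([0.0], np.cumsum(mu * 0.5 * (y[1:] + y[:-1]) * np.diff(ts))))
assert np.all(y <= rhs + 1e-6)

# any function dominated by y also satisfies the Gronwall bound
y2 = 0.7 * y
assert np.all(y2 <= lam * np.exp(mu * (ts - t0)))
```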

4.13 Solutions of the State Equations

4.13.1 Solutions of Linear Time-Invariant(LTI) State Equations


Theorem 4.13.1. [10] Consider the LTI state equation

ẋ(t) = Ax(t) + Bu(t) (4.12)


y(t) = Cx(t) + Du(t) (4.13)

where A ∈ Rⁿˣⁿ, B ∈ Rⁿˣᵖ, C ∈ R^{q×n}, and D ∈ R^{q×p} are constant matrices. Then, the solutions

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ

y(t) = C e^{A(t−t₀)} x(t₀) + C ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ + D u(t)

satisfy (4.12) and (4.13)



Proof. To find the solution of equations (4.12), (4.13) with dependence on the initial state x(t₀) = x₀ and the input u(t), we use the property
d/dt e^{At} = A e^{At} ≡ e^{At} A
together with the method of variation of parameters.
Now, multiplying both sides of (4.12) by e^{−At} yields

e−At ẋ(t) − e−At Ax(t) = e−At Bu(t)

which can be written as
d/dt [e^{−At} x(t)] = e^{−At} B u(t)
Integrating from t₀ to t, we obtain
[e^{−Aτ} x(τ)]_{τ=t₀}^{τ=t} = ∫_{t₀}^{t} e^{−Aτ} B u(τ) dτ

Thus,
e^{−At} x(t) − e^{−At₀} x(t₀) = ∫_{t₀}^{t} e^{−Aτ} B u(τ) dτ
Hence,
x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ   (4.14)

Substituting (4.14) into (4.13) yields the solution of (4.13) as
y(t) = C e^{A(t−t₀)} x(t₀) + C ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ + D u(t)   (4.15)
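The variation-of-constants formula above can be cross-checked against direct numerical integration (a sketch; the 2-state system, unit-step input, and horizon are arbitrary test data):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
x0 = np.array([[1.], [0.]])
u = lambda s: np.array([[1.0]])        # unit-step input

def expm(M, terms=30):
    # truncated Taylor series; adequate for the small ||M|| used here
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

T, N = 1.0, 2000
h = T / N
taus = np.linspace(0.0, T, N + 1)
# x(T) = e^{AT} x0 + \int_0^T e^{A(T-tau)} B u(tau) dtau  (trapezoidal quadrature)
vals = [expm(A * (T - tau)) @ B @ u(tau) for tau in taus]
integral = np.zeros((2, 1))
for i in range(N):
    integral += 0.5 * (vals[i] + vals[i + 1]) * h
x_formula = expm(A * T) @ x0 + integral

# reference: fixed-step RK4 on xdot = A x + B u
x = x0.copy()
f = lambda s, x: A @ x + B @ u(s)
for i in range(N):
    s = i * h
    k1 = f(s, x); k2 = f(s + h/2, x + h/2 * k1)
    k3 = f(s + h/2, x + h/2 * k2); k4 = f(s + h, x + h * k3)
    x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert np.allclose(x_formula, x, atol=1e-5)
```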

Corollary 4.13.1. [10] Consider a continuous function f(t, τ). Then
∂/∂t ∫_{t₀}^{t} f(t, τ) dτ = ∫_{t₀}^{t} (∂/∂t f(t, τ)) dτ + f(t, τ)|_{τ=t}   (4.16)
holds.

Proof. Since equation (4.14) satisfies (4.12), differentiating (4.14) should give back the state equation (4.12):

ẋ(t) = d/dt [e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ]   (4.17)

Applying (4.16) to the above equation yields
ẋ(t) = A e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} A e^{A(t−τ)} B u(τ) dτ + e^{A(t−τ)} B u(τ)|_{τ=t}
     = A [e^{A(t−t₀)} x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)} B u(τ) dτ] + e^{A·0} B u(t)

Substituting (4.14),yields

ẋ(t) = Ax(t) + Bu(t)

Definition 4.13.1. Let P ∈ Rⁿˣⁿ be a real non-singular matrix and let z = Px be an equivalence transformation. Then the state equations

ż(t) = Āz(t) + B̄u(t)

y(t) = C̄z(t) + D̄u(t)

are said to be equivalent to the state equations (4.12), (4.13), where

Ā = PAP⁻¹, B̄ = PB, C̄ = CP⁻¹, D̄ = D

which are obtained from (4.12), (4.13) by substituting

x(t) = P⁻¹z(t), ẋ(t) = P⁻¹ż(t)

Lemma 4.13.1. Consider the LTI system

ẋ(t) = Ax(t)

where x ∈ Rⁿ, A ∈ Rⁿˣⁿ. Then

x(t) = e^{A(t−t₀)} x₀

is the solution of the system with the initial condition x(t₀) = x₀, where e^{At}
is an n × n matrix defined by its Taylor series.

Example 4.13.1. Assume the linear system



ẋ1 = −x1
(4.18)
ẋ2 = 2x2

This system can be written in the form ẋ = Ax where,


 
−1 0
A=
0 2

The general solutions of the system (4.18) are given by

x1 (t) = c1 e−t
x2 (t) = c2 e2t

Alternately,
x(t) = [[e^{−t}, 0], [0, e^{2t}]] x₀ ≡ X̄(t) x₀

4.13.2 Solutions of Linear Time-Variant(LTV) State Equations


(A) LTV Homogeneous Time-Varying Equations:

Lemma 4.13.2. [11] Consider the LTV scalar equation

ẋ(t) = a(t)x(t)   (4.19)

where a(t) is a continuous function, t ≥ t₀. Then

x(t) = x₀ e^{∫_{t₀}^{t} a(τ)dτ}

is the solution of the homogeneous equation (4.19) that satisfies the initial condition x(t₀) = x₀.

Lemma 4.13.3. Consider the LTV equation

ẋ(t) = A(t)x(t)   (4.20)

where A(t) = (Aᵢⱼ(t)) is an n × n matrix with continuous components, t ≥ t₀, and assume that A(t) commutes with its integral ∫_{t₀}^{t} A(τ)dτ (as happens, for example, when A is constant or diagonal). Then

x(t) = e^{∫_{t₀}^{t} A(τ)dτ} x₀

is the solution of the homogeneous equation (4.20); without this commutativity assumption the solution must instead be expressed through the state transition matrix.
If we set ρ(t) = e^{∫_{t₀}^{t} A(τ)dτ}, then we can express the solution as x(t) = ρ(t)x₀, where x = [x₁ · · · xₙ]ᵀ and ρ(t) is a fundamental matrix satisfying the matrix ODE ρ̇(t) = A(t)ρ(t).

Corollary 4.13.2. In the case where A(t) = A, we have the Taylor series representation of the matrix exponential:
ρ(t) = e^{At} = I + At + A² t²/2! + · · ·
(B) LTV Non homogeneous Equations:

Theorem 4.13.2. Consider the LTV state equation

ẋ(t) = a(t)x(t) + b(t)   (4.21)

where a(t), b(t) are continuous scalar functions, t ≥ t₀. Then

x(t) = e^{∫_{t₀}^{t} a(τ)dτ} [x₀ + ∫_{t₀}^{t} e^{−∫_{t₀}^{τ} a(s)ds} b(τ) dτ]

or equivalently,

x(t) = ρ(t)ρ(t₀)⁻¹ x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ

or equivalently,

x(t) = ρ(t) x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ

is the solution of (4.21) that satisfies the initial condition x(t₀) = x₀,
where ρ(t) = e^{∫_{t₀}^{t} a(τ)dτ} and ρ(t₀) = ρ(t₀)⁻¹ = 1.

Proof. Using the variation of parameters technique, we seek a solution of the form

x(t) = ρ(t)u(t)   (4.22)

Thus, by differentiation,

ẋ(t) = ρ(t)u̇(t) + ρ̇(t)u(t)   (4.23)

Substituting (4.21) into (4.23) yields

ẋ(t) = ρ(t)u̇(t) + ρ̇(t)u(t) = a(t)x(t) + b(t)

Then, by using (4.22) we obtain

ẋ(t) = ρ(t)u̇(t) + ρ̇(t)u(t) = a(t)ρ(t)u(t) + b(t)

Since ρ̇(t) = a(t)ρ(t), comparing term by term we obtain

ρ(t)u̇(t) = b(t) ⇒ u̇(t) = ρ⁻¹(t)b(t)

Also,
ρ̇(t) = a(t)ρ(t) ⇒ ρ(t) = e^{∫_{t₀}^{t} a(τ)dτ}   (4.24)

If the initial condition is x(t0 ) = x0 then

x(t0 ) = ρ(t0 )u(t0 ) = x0

Therefore, we have to set ρ(t0 ) = 1 to let u(t0 ) = x0 .


Then, the solution will be
u(t) = x₀ + ∫_{t₀}^{t} ρ⁻¹(τ) b(τ) dτ   (4.25)


where ρ⁻¹(τ) = e^{−∫_{t₀}^{τ} a(s)ds} and x₀ ≡ u(t₀). Substituting (4.25) and (4.24) into (4.22) completes the proof.
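Theorem 4.13.2 can be exercised numerically (a sketch; a(t) and b(t) below are arbitrary test functions). The running integrals are accumulated by the trapezoidal rule and the result is compared to an RK4 reference:

```python
import numpy as np

a = lambda t: np.sin(t) - 0.5
b = lambda t: np.cos(2 * t)
t0, T, x0 = 0.0, 4.0, 1.5

ts = np.linspace(t0, T, 4001)
h = ts[1] - ts[0]
av = a(ts)

# cumulative integral \int_{t0}^{t} a(s) ds by the trapezoidal rule
Acum = np.concatenate(([0.0], np.cumsum(0.5 * (av[1:] + av[:-1]) * h)))
rho = np.exp(Acum)                     # rho(t), with rho(t0) = 1

# x(t) = rho(t) [ x0 + \int_{t0}^{t} rho(tau)^{-1} b(tau) dtau ]
integrand = b(ts) / rho
inner = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * h)))
x_formula = rho * (x0 + inner)

# reference: fixed-step RK4 on xdot = a(t) x + b(t), sampled on the same grid
x_num = np.empty_like(ts); x_num[0] = x0
f = lambda t, x: a(t) * x + b(t)
for i in range(len(ts) - 1):
    t, x = ts[i], x_num[i]
    k1 = f(t, x); k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2); k4 = f(t + h, x + h * k3)
    x_num[i + 1] = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert np.max(np.abs(x_formula - x_num)) < 1e-4
```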

(C) LTV Non-homogeneous Equations with a control input u(t):

Consider the LTV state equation

ẋ = A(t)x(t), A(t) ∈ Rⁿˣⁿ   (4.26)

Assume there is a unique solution xᵢ(t) for every initial state xᵢ(t₀), i = 1, · · · , n. Thus, there are n solutions corresponding to these initial conditions, which can be formed as the columns of the square matrix ρ(t) = [x₁(t) x₂(t) · · · xₙ(t)] of order n.
Since every solution xᵢ(t) satisfies (4.26), we can write

ρ̇ = A(t)ρ(t)   (4.27)

where the matrix ρ(t) is called a fundamental solution matrix if the n initial states are linearly independent (equivalently, the matrix ρ(t₀) is non-singular). But since we can choose arbitrary linearly independent initial states, the fundamental matrix is not unique.

Definition 4.13.2. [12] Let ρ(t) be any fundamental matrix of the homogeneous system ẋ = A(t)x, satisfying ρ̇ = A(t)ρ(t). Then

Φ(t, t₀) ≝ ρ(t)ρ⁻¹(t₀)

is called the state transition matrix of ẋ(t) = A(t)x(t). The state transition matrix is also the unique solution of

∂/∂t Φ(t, t₀) = A(t)Φ(t, t₀)   (4.28)

with the initial condition Φ(t₀, t₀) = I.
Moreover,

• Φ(t, t) = I

• Φ⁻¹(t, t₀) = [ρ(t)ρ⁻¹(t₀)]⁻¹ = ρ(t₀)ρ⁻¹(t) = Φ(t₀, t)

• Φ(t, t₀) = Φ(t, t₁)Φ(t₁, t₀)



Theorem 4.13.3. Consider the LTV state equation with control input u(t)

ẋ(t) = A(t)x(t) + B(t)u(t)   (4.29)

where A(t) is a continuous n × n matrix (the sufficient condition for the existence of unique solutions is that all elements aᵢⱼ of A(t) be continuous) and B(t) is an n × p matrix, for all t ≥ t₀. Then

x(t) = ρ(t)ρ⁻¹(t₀)x₀ + ∫_{t₀}^{t} ρ(t)ρ⁻¹(τ)B(τ)u(τ)dτ

or equivalently,

x(t) = Φ(t, t₀)x₀ + ∫_{t₀}^{t} Φ(t, τ)B(τ)u(τ)dτ   (4.30)

In the case A(t) = A, then [9]

x(t) = Φ(t, t₀)x₀ + ∫_{t₀}^{t} e^{A(t−τ)}B(τ)u(τ)dτ

or equivalently [10],

x(t) = Φ(t, t₀)x₀ + ∫_{t₀}^{t} Φ(t − τ)B(τ)u(τ)dτ

is the solution of (4.29) that satisfies the initial condition x(t₀) = x₀, where Φ is the state transition matrix satisfying the initial condition Φ(t₀, t₀) = I. If A(t) = A, then Φ(t, τ) = e^{A(t−τ)}.

Proof. [11] Applying the variation of parameters method, we seek a solution of the form

x(t) = ρ(t)M(t)

By differentiation,
ẋ = ρṀ + ρ̇M
Since
ẋ(t) = A(t)x(t) + B(t)u(t)
we have
ρṀ + ρ̇M = A(t)x(t) + B(t)u(t)
Comparing term by term yields

ρ̇(t) = A(t)ρ(t) ⇒ ρ(t) = e^{∫_{t₀}^{t} A(τ)dτ}   (4.31)

(this exponential form is valid when A(t) commutes with its integral, in particular for constant A), and

ρṀ = B(t)u(t) ⇒ Ṁ = ρ⁻¹B(t)u(t)   (4.32)

Also,
x(t₀) = ρ(t₀)M(t₀) = x₀
which means that M(t₀) = x₀ and ρ(t₀) = e^{∫_{t₀}^{t₀} A(τ)dτ} = e⁰ = I.
Now, the solution of (4.32) is
M(t) = M(t₀) + ∫_{t₀}^{t} ρ⁻¹(τ)B(τ)u(τ) dτ   (4.33)

Hence,

x(t) = ρ(t)M(t)
     = ρ(t)x₀ + ρ(t) ∫_{t₀}^{t} ρ⁻¹(τ)B(τ)u(τ) dτ
     = ρ(t)x₀ + ∫_{t₀}^{t} ρ(t)ρ⁻¹(τ)B(τ)u(τ) dτ
     = e^{A(t−t₀)}x₀ + ∫_{t₀}^{t} e^{A(t−t₀)}e^{−A(τ−t₀)}B(τ)u(τ) dτ   (for constant A)
     = e^{A(t−t₀)}x₀ + ∫_{t₀}^{t} e^{A(t−τ)}B(τ)u(τ) dτ
     = Φ(t, t₀)x₀ + ∫_{t₀}^{t} Φ(t − τ)B(τ)u(τ) dτ
     = Φ(t, t₀)x₀ + ∫_{t₀}^{t} Φ(t, τ)B(τ)u(τ) dτ

Moreover,

ρ(t)ρ⁻¹(τ) = e^{∫_{t₀}^{t} A(s)ds} e^{−∫_{t₀}^{τ} A(s)ds} = e^{∫_{τ}^{t} A(s)ds} = Φ(t, τ)

Also, if A(t) = A is constant [10], we have

Φ(t, τ) = e^{A(t−τ)} = Φ(t − τ)

where ρ(t) is called the fundamental matrix and Φ(t, τ) is known as the state transition matrix, the solution of

∂/∂t Φ(t, t₀) = A(t)Φ(t, t₀)

satisfying the initial condition Φ(t₀, t₀) = I.

Corollary 4.13.3. [10] Consider the homogeneous state equation

ẋ(t) = A(t)x, x(t₀) = x₀   (4.34)

If the input is identically zero, then according to (4.30) of Theorem (4.13.3),
x(t) = Φ(t, t₀)x₀
is the solution of the state equation ẋ = A(t)x.

Proof. [12] Since the LTV state equation

ẋ = A(t)x

gives rise to a fundamental solution matrix satisfying

ρ̇(t) = A(t)ρ(t)

the candidate solution to equation (4.34) with an arbitrary initial condition vector x(t₀) is
x(t) = ρ(t)ρ(t₀)⁻¹x₀
Indeed, by differentiation we obtain
ẋ(t) = ρ̇(t)ρ(t₀)⁻¹x₀ = A(t)ρ(t)ρ(t₀)⁻¹x₀ = A(t)x(t)
Thus, we conclude that

x(t) = ρ(t)ρ(t₀)⁻¹x₀ = Φ(t, t₀)x₀

4.13.3 Computing the Transition Matrix through the Peano-Baker Series

Since computing the state transition matrix Φ(t, t₀) is generally difficult, it is convenient to define the transition matrix Φ(t, τ) by the Peano-Baker series [9]

Φ(t, τ) = I + ∫_τ^t A(σ₁) dσ₁ + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) dσ₂ dσ₁
  + ∫_τ^t A(σ₁) ∫_τ^{σ₁} A(σ₂) ∫_τ^{σ₂} A(σ₃) dσ₃ dσ₂ dσ₁ + · · ·   (4.35)

Proof. Consider Equation (4.34). For an arbitrary initial time t₀, initial state x₀ and arbitrary time T > 0, we construct a sequence of n × 1 vector functions {xₖ}_{k=0}^{∞}, defined on the interval [t₀, t₀ + T], as a sequence of approximate solutions of (4.34).
The sequence of approximations is defined in an iterative manner as

x₀(t) = x₀
x₁(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀(σ₁) dσ₁
x₂(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₁(σ₁) dσ₁
...
xₖ(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x_{k−1}(σ₁) dσ₁   (4.36)

This iteration can be written as


xₖ(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) x₀ dσ₂ dσ₁
  + · · · + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) · · · ∫_{t₀}^{σ_{k−1}} A(σₖ) x₀ dσₖ · · · dσ₁   (4.37)

By taking the limit of the sequence as k → ∞,


x(t) = x₀ + ∫_{t₀}^{t} A(σ₁) x₀ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) x₀ dσ₂ dσ₁
  + · · · + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) · · · ∫_{t₀}^{σ_{k−1}} A(σₖ) x₀ dσₖ · · · dσ₁ + · · ·   (4.38)

By factoring x₀, we obtain

x(t) = [I + ∫_{t₀}^{t} A(σ₁) dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) dσ₂ dσ₁
  + · · · + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) · · · ∫_{t₀}^{σ_{k−1}} A(σₖ) dσₖ · · · dσ₁ + · · ·] x₀

and, denoting the n × n matrix series on the right side by Φ(t, t₀), the solution can be written as
x(t) = Φ(t, t₀)x₀

Theorem 4.13.4. For any t₀ and x₀, the linear state equation ẋ(t) = A(t)x with A(t) continuous has the unique solution

x(t) = Φ(t, t₀)x₀

where the transition matrix Φ(t, t₀) is given by the Peano-Baker series, which converges uniformly for t and τ ∈ [−T, T], where T > 0 is arbitrary.
Example 4.13.2. Consider the LTI state equation ẋ = Ax for A = a; then, using the approximation sequence in (4.36), the generated result is:

x₀(t) = x₀
x₁(t) = x₀ + a x₀ (t − t₀)/1!
x₂(t) = x₀ + a x₀ (t − t₀)/1! + a² x₀ (t − t₀)²/2!
...
xₖ(t) = (1 + a (t − t₀)/1! + · · · + aᵏ (t − t₀)ᵏ/k!) x₀

and by taking the limit of the sequence, we get the solution:

x(t) = ea(t−t0 ) x0

Example 4.13.3. Consider

A(t) = [[0, t], [0, 0]]

The Peano-Baker series is
Φ(t, τ) = [[1, 0], [0, 1]] + ∫_τ^t [[0, σ₁], [0, 0]] dσ₁ + ∫_τ^t [[0, σ₁], [0, 0]] ∫_τ^{σ₁} [[0, σ₂], [0, 0]] dσ₂ dσ₁ + · · ·

Since the third term and beyond are zero (as is easily verified), we have

Φ(t, τ) = [[1, (t² − τ²)/2], [0, 1]]
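The closed form above can be verified by integrating the defining matrix ODE ∂Φ/∂t = A(t)Φ, Φ(τ, τ) = I (a sketch; τ, the horizon, and the step count are arbitrary):

```python
import numpy as np

A = lambda t: np.array([[0., t], [0., 0.]])
tau, T, N = 0.5, 2.0, 2000
h = (T - tau) / N

Phi = np.eye(2)                        # Phi(tau, tau) = I
f = lambda t, P: A(t) @ P
t = tau
for _ in range(N):                     # fixed-step RK4 on the matrix ODE
    k1 = f(t, Phi); k2 = f(t + h/2, Phi + h/2 * k1)
    k3 = f(t + h/2, Phi + h/2 * k2); k4 = f(t + h, Phi + h * k3)
    Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

closed = np.array([[1., 0.5 * (T**2 - tau**2)], [0., 1.]])
assert np.allclose(Phi, closed, atol=1e-8)
```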

Example 4.13.4. Consider the following scalar equations

ẋ₁(t) = x₁(t), x₁(t₀) = x₁₀   (4.39)

ẋ₂(t) = a(t)x₂(t) + x₁(t), x₂(t₀) = x₂₀   (4.40)

which can be written as

ẋ = A(t)x, with A(t) = [[1, 0], [1, a(t)]]

The solution of (4.39) is

x₁(t) = e^{t−t₀} x₁₀

The second scalar equation (4.40) can be written as

ẋ₂(t) = a(t)x₂(t) + e^{t−t₀} x₁₀

which is identical in form to the forced scalar state equation (4.29) with B(t)u(t) = e^{t−t₀} x₁₀.
The transition matrix for the scalar a(t) is computed from the solution of the homogeneous equation (4.19),

Φ(t, t₀) = e^{∫_{t₀}^{t} a(τ)dτ}

Applying the general solution (4.30) gives

x(t) = [[e^{t−t₀}, 0], [∫_{t₀}^{t} e^{τ−t₀} e^{∫_τ^t a(σ)dσ} dτ, e^{∫_{t₀}^{t} a(τ)dτ}]] x₀

Property 4.13.1. If we have a constant matrix A(t) = A, then the general term of the Peano-Baker series becomes

∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) · · · ∫_{t₀}^{σ_{k−1}} A(σₖ) x₀ dσₖ · · · dσ₁
  = Aᵏ ∫_{t₀}^{t} ∫_{t₀}^{σ₁} · · · ∫_{t₀}^{σ_{k−1}} 1 dσₖ · · · dσ₁ x₀
  = Aᵏ ((t − t₀)ᵏ/k!) x₀   (4.41)
Property 4.13.2. If A(t) = A, an n × n constant matrix, then the transition matrix is
Φ(t, τ) = e^{A(t−τ)}
where the exponential matrix is defined by the power series

e^{At} = Σ_{k=0}^{∞} (1/k!) Aᵏ tᵏ

that converges uniformly on [−T, T], where T > 0 is arbitrary.


Example 4.13.5. Consider the following matrix
A(t) = [[a(t), a(t)], [0, 0]]
where a(t) is a continuous scalar function. Since
∫_τ^t A(σ)dσ = [[∫_τ^t a(σ)dσ, ∫_τ^t a(σ)dσ], [0, 0]]
we obtain
Φ(t, τ) = [[e^{∫_τ^t a(σ)dσ}, e^{∫_τ^t a(σ)dσ} − 1], [0, 1]]

4.14 Solutions of Discrete Dynamical Systems

4.14.1 Discretization of Continuous-Time Equations


(A) Weak Discretization Method
Theorem 4.14.1. Consider the continuous-time state equations

ẋ(t) = Ax(t) + Bu(t) (4.42)


y(t) = Cx(t) + Du(t) (4.43)

Assume that {t₀, t₁, · · · , tₖ, tₖ₊₁, · · · } is a set of discrete time points.
Thus, the transferred discrete-time state space equations for (4.42) and
(4.43) become,

x [k + 1] = (I + T A)x [k] + T Bu [k]


y [k] = Cx [k] + Du [k]

where x(t) and y(t) computed only at t = kT ∀ k = (0, 1, · · · ) and


def
x(t) = x(kT ) = x[k]

for all kT ≤ t < (k + 1)T, T = tk+1 − tk .

Proof. Since
x(t + T ) − x(t)
ẋ(t) = lim
T 7→0 T
which can be approximated as

ẋ(t)T + x(t) ≈ x(t + T )

we can use this idea to approximate (4.42) as

x(t + T) = x(t) + Ax(t)T + Bu(t)T   (4.44)

If we compute x(t) only at t = kT, where k = {0, 1, · · · }, then (4.42) and (4.43) become

x((k + 1)T) = (I + TA)x(kT) + TBu(kT)


y(kT ) = Cx(kT ) + Du(kT )

If we define

x(t) = x(kT ) = x [k] ∀ kT ≤ t < (k + 1)T



then the above equations can be written as

x [k + 1] = (I + T A)x [k] + T Bu [k] (4.45)


y [k] = Cx [k] + Du [k] (4.46)

(B) Accurate Discretization Method


Theorem 4.14.2. Consider the continuous-time state equations

ẋ(t) = Ax(t) + Bu(t) (4.47)


y(t) = Cx(t) + Du(t) (4.48)

Assume that {t0 , t1 , · · · , tk , tk+1 , · · · } is a set of discrete time points.


Thus, the transferred discrete-time state space equations for (4.47) and
(4.48) become,

x[k + 1] = Ad x[k] + Bd u[k] (4.49)


y[k] = Cd x[k] + Dd u[k] (4.50)
where A_d = e^{AT}, B_d = (∫₀ᵀ e^{Aτ} dτ) B, C_d = C, and D_d = D.
Moreover, x(t) and y(t) computed only at t = kT ∀ k = (0, 1, · · · ) and
def
x(t) = x(kT ) = x[k]

for all kT ≤ t < (k + 1)T, T = tk+1 − tk .

Proof. Since the solution of (4.47) equals

x(t) = Φ(t)x₀ + ∫_{t₀}^{t} Φ(t − τ)B(τ)u(τ)dτ

with Φ(t) = e^{At}, t₀ = 0.


Then, computing the above solution at t = kT yields,

x[k] = x(kT )
ZkT
=e AkT
x0 + eA(kT −τ ) Bu(τ ) dτ (4.51)
0

Again, computing at t = (k + 1)T yields

x[k + 1] = x((k + 1)T) = e^{A(k+1)T} x₀ + ∫₀^{(k+1)T} e^{A((k+1)T−τ)} B u(τ) dτ   (4.52)

Equation (4.52) can be written as

x[k + 1] = e^{AT} [e^{AkT} x₀ + ∫₀^{kT} e^{A(kT−τ)} B u(τ) dτ] + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} B u(τ) dτ   (4.53)

Combining (4.51) with (4.53) and substituting s = (k + 1)T − τ yields

x[k + 1] = e^{AT} x[k] + (∫₀ᵀ e^{As} ds) B u[k]   (4.54)

Alternatively, we can write (4.54) as

x[k + 1] = Ad x[k] + Bd u[k]


y[k] = Cd x[k] + Dd u[k]

where A_d = e^{AT}, B_d = (∫₀ᵀ e^{Aτ} dτ) B, C_d = C, and D_d = D.
0

Lemma 4.14.1. If A is a non-singular matrix, then B_d = (∫₀ᵀ e^{Aτ} dτ) B can be equivalently replaced by

B_d = A⁻¹(A_d − I)B



Proof. We have

∫₀ᵀ e^{Aτ} dτ = ∫₀ᵀ (I + Aτ + A² τ²/2! + · · ·) dτ = TI + (T²/2!)A + (T³/3!)A² + · · ·

If A is non-singular, then

A⁻¹ (TA + (T²/2!)A² + (T³/3!)A³ + · · · + I − I) = A⁻¹(e^{AT} − I)

Thus,
B_d = A⁻¹(A_d − I)B
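Both discretizations, and the identity of Lemma 4.14.1, can be compared numerically (a sketch; A, B, and the sampling period T are arbitrary test data):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
T = 0.1

def expm(M, terms=40):
    # truncated Taylor series; adequate for the small ||M|| used here
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# accurate discretization: Ad = e^{AT}, Bd = (\int_0^T e^{A tau} dtau) B
Ad = expm(A * T)
S, term = T * np.eye(2), T * np.eye(2)         # S = T I + T^2/2! A + T^3/3! A^2 + ...
for k in range(2, 40):
    term = term @ (A * T) / k
    S = S + term
Bd = S @ B

# Lemma 4.14.1: for non-singular A, Bd = A^{-1}(Ad - I)B
assert np.allclose(Bd, np.linalg.solve(A, (Ad - np.eye(2)) @ B), atol=1e-10)

# the Euler ("weak") discretization agrees to first order in T
assert np.allclose(Ad, np.eye(2) + T * A, atol=10 * T**2)
assert np.allclose(Bd, T * B, atol=10 * T**2)
```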

4.14.2 Solution of Discrete-Time Equations


Theorem 4.14.3. Consider the discrete-time state equations

x[k + 1] = Ad x[k] + Bd u[k]


y[k] = Cd x[k] + Dd u[k]

Then, the most general solutions of the two equations are:

x[k] = A_dᵏ x[0] + Σ_{i=0}^{k−1} A_d^{k−1−i} B_d u[i]

y[k] = C_d A_dᵏ x[0] + C_d Σ_{i=0}^{k−1} A_d^{k−1−i} B_d u[i] + D_d u[k]

Alternatively,

x[k] = Φ[k, 0]x[0] + Σ_{i=1}^{k} Φ[k, i] B_d u[i − 1]

Proof. Since
x[k + 1] = Ad x[k] + Bd u[k]

Computing for k = 0, 1, 2, · · · yields

x[1] = A_d x[0] + B_d u[0]

x[2] = A_d x[1] + B_d u[1] = A_d² x[0] + A_d B_d u[0] + B_d u[1]
...
x[k] = A_dᵏ x[0] + Σ_{i=0}^{k−1} A_d^{k−1−i} B_d u[i]

A change of summation index allows us to rewrite the solution as

x[k] = A_dᵏ x[0] + Σ_{i=1}^{k} A_d^{k−i} B_d u[i − 1]

Alternatively, whenever A_d is constant, the discrete transition matrix is given by
Φ[k, i] = A_d^{k−i}
Thus,
x[k] = Φ[k, 0]x[0] + Σ_{i=1}^{k} Φ[k, i] B_d u[i − 1]
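The closed-form sum can be checked against the recursion directly (a sketch; A_d, B_d, and the input sequence are arbitrary test data):

```python
import numpy as np

Ad = np.array([[0.9, 0.2], [0.0, 0.8]])
Bd = np.array([[0.0], [0.1]])
x0 = np.array([[1.0], [-1.0]])
u = [np.array([[np.sin(0.3 * i)]]) for i in range(20)]

# recursion x[k+1] = Ad x[k] + Bd u[k]
x = x0.copy()
for k in range(20):
    x = Ad @ x + Bd @ u[k]

# closed form x[k] = Ad^k x[0] + sum_{i=0}^{k-1} Ad^{k-1-i} Bd u[i]
k = 20
x_cf = np.linalg.matrix_power(Ad, k) @ x0
for i in range(k):
    x_cf = x_cf + np.linalg.matrix_power(Ad, k - 1 - i) @ Bd @ u[i]

assert np.allclose(x, x_cf)
```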

Corollary 4.14.1. Consider the discrete-time state equations

x[k + 1] = Ad [k]x[k] + Bd [k]u[k]


y[k] = Cd [k]x[k] + Dd [k]u[k]

Then, the most general solutions with initial state x[k₀] = x₀ and the input
u[k] for all k ≥ k₀ are given by:
k−1
X
x[k] = Φ[k, k0 ]x0 + Φ[k, i + 1]B[i]u[i]
i=k0
k−1
X
y[k] = C[k]Φ[k, k0 ]x0 + C[k] Φ[k, i + 1]B[i]u[i] + D[k]u[k]
i=k0

where
Φ[k, k0 ] = A[k − 1]A[k − 2] · · · A[k0 ]
for k > k0 and Φ[k0 , k0 ] = I.

4.15 Solutions of Van Der Pol’s Equation

4.15.1 Parameter Perturbation Theory


Consider the system

ẋ(t) = f(t, x, ε), x(t₀) = µ(ε)   (4.55)

where f : [t₀, t₁] × D × [−ε₀, ε₀] → Rⁿ, and the initial state is allowed to depend on ε.
The solution of (4.55) will depend on the parameter ε, so we write it as x(t, ε). Our goal is to construct approximate solutions that are valid for sufficiently small perturbations |ε|. The simplest approximation is produced by setting ε = 0, to obtain the unperturbed equation

ẋ(t) = f(t, x, 0), x(t₀) = µ(0)   (4.56)

Equation (4.56) can now be solved more readily. One seeks the approximate solution for small ε in powers of ε; that is,

x(t, ε) = x₀(t) + εx₁(t) + ε²x₂(t) + · · ·   (4.57)

where x₀(t) is the solution of the unperturbed equation (4.56) for ε = 0, and x₁, x₂, · · · , xₙ are functions independent of ε that are determined by substituting the expansion (4.57) into (4.55) and collecting the coefficients of like powers of ε [24].

4.15.2 Solution of Van Der Pol Equation Via Parameter


Perturbations Method
Consider Van Der Pol’s equation

ẍ − ε(1 − x²)ẋ + x = 0   (4.58)

For small ε: if ε = 0, we obtain the unperturbed equation

ẍ + x = 0   (4.59)

with the general solution

x = a cos(t + θ)   (4.60)

where a, θ are constants. To determine an approximate solution of (4.58), we seek a perturbation expansion of the form

x(t, ε) = x₀(t) + εx₁(t) + ε²x₂(t) + · · ·   (4.61)



Substituting (4.61) into (4.58) yields

(d²x₀/dt² + x₀) + ε(d²x₁/dt² + x₁) + ε²(d²x₂/dt² + x₂) + · · ·
  = ε[1 − (x₀ + εx₁ + ε²x₂ + · · ·)²](dx₀/dt + ε dx₁/dt + ε² dx₂/dt + · · ·)   (4.62)

Expanding the right-hand side, we obtain

(d²x₀/dt² + x₀) + ε(d²x₁/dt² + x₁) + ε²(d²x₂/dt² + x₂) + · · ·
  = ε(1 − x₀²) dx₀/dt + ε²[(1 − x₀²) dx₁/dt − 2x₀x₁ dx₀/dt] + · · ·   (4.63)
Collecting and equating the coefficients of like powers of ε on both sides yields

• Coefficient of ε⁰:
d²x₀/dt² + x₀ = 0   (4.64)

• Coefficient of ε¹:
d²x₁/dt² + x₁ = (1 − x₀²) dx₀/dt   (4.65)

• Coefficient of ε²:
d²x₂/dt² + x₂ = (1 − x₀²) dx₁/dt − 2x₀x₁ dx₀/dt   (4.66)

Substituting the solution (4.60) of equation (4.64) into (4.65), we have

d²x₁/dt² + x₁ = −[1 − a² cos²(t + θ)] a sin(t + θ)   (4.67)

Since
cos²(t + θ) sin(t + θ) = [sin(t + θ) + sin 3(t + θ)]/4
equation (4.65) can be written as

d²x₁/dt² + x₁ = ((a³ − 4a)/4) sin(t + θ) + (a³/4) sin 3(t + θ)   (4.68)

Its particular solution is

x₁ = −((a³ − 4a)/8) t cos(t + θ) − (a³/32) sin 3(t + θ)   (4.69)
In a similar fashion, we can find x₂(t), x₃(t), · · · , then substitute them into (4.61) to get the approximate solution of Van Der Pol's equation.
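The expansion predicts, via the secular term in (4.69), that a steady oscillation is only possible when a³ − 4a = 0, i.e. at amplitude a = 2. A numerical check of this prediction (a sketch; ε, the horizon, and the step size are arbitrary choices):

```python
import numpy as np

eps, h, T = 0.1, 0.01, 200.0
# Van Der Pol in state-space form: x1' = x2, x2' = eps (1 - x1^2) x2 - x1
f = lambda s: np.array([s[1], eps * (1 - s[0]**2) * s[1] - s[0]])

s = np.array([0.5, 0.0])               # start well inside the limit cycle
amps = []
for i in range(int(T / h)):            # fixed-step RK4
    k1 = f(s); k2 = f(s + h/2 * k1)
    k3 = f(s + h/2 * k2); k4 = f(s + h * k3)
    s = s + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    if i * h > T - 20.0:               # record |x1| over the last few periods
        amps.append(abs(s[0]))

assert abs(max(amps) - 2.0) < 0.05     # amplitude tends to 2 + O(eps^2)
```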

Fig. 4.1: (a) Basic oscillator circuit ; (b) Typical nonlinear driving-point character-
istic

Example 4.15.1. Figure 4.1 shows the basic circuit structure of an important class of electronic oscillators. The inductor and capacitor are assumed to be linear, time-invariant and passive, that is, L > 0 and C > 0. The resistive element is an active circuit characterized by the voltage-controlled i–v characteristic i = h(v). The function h(·) satisfies the conditions

h(0) = 0 ; h′(0) < 0

h(v) → ∞ as v → ∞, and h(v) → −∞ as v → −∞

where h′(v) is the first derivative of h(v) with respect to v. Such an i–v characteristic can be realized by the two tunnel-diode circuits of Figure 4.2.

Fig. 4.2: (a) Basic oscillator circuit ; (b) Typical nonlinear driving-point character-
istic

Using Kirchhoff's current law, we can write the equation

i_C + i_L + i = 0

Hence,
C dv/dt + (1/L) ∫_{−∞}^{t} v(s) ds + h(v) = 0
where
i_L = (1/L) ∫_{−∞}^{t} v(s) ds

Differentiating once with respect to t and multiplying through by L, we obtain

CL d²v/dt² + v + L h′(v) dv/dt = 0

This equation can be written in a form that is well-known in nonlinear systems theory.
Let us change the time variable from t to τ = t/√(LC). The derivatives of v with respect to t and τ are related by
dv/dτ = √(LC) dv/dt
d²v/dτ² = LC d²v/dt²
By setting dv/dτ ≡ v̇, we can write the circuit equation as

v̈ + ε h′(v)v̇ + v = 0 , ε = √(L/C)

When
h(v) = −v + (1/3)v³
the circuit equation takes the form

v̈ − ε(1 − v²)v̇ + v = 0

which is known as the Van Der Pol equation.
which is known as the Van Der Pol equation.


This equation is used by Van Der Pol to study oscillations in vacuum tube
circuits, is a fundamental example in nonlinear oscillation theory.
writing this equation in state-space model

ẋ₁ = x₂
ẋ₂ = −x₁ − ε h′(x₁)x₂

The system has only one equilibrium point at x₁ = x₂ = 0. The Jacobian matrix at this point (0, 0) is given by
A = ∂f/∂x |_{x=0} = [[0, 1], [−1, −ε h′(0)]]
Since h′(0) < 0, the origin is either an unstable node or an unstable focus, depending on the value of h′(0).

4.16 Limit Cycles

Oscillation is one of the most important phenomena that occur in dynamical systems. A system oscillates when it has a periodic solution

x(t + T) = x(t), ∀ t ≥ 0

for some T > 0.


Definition 4.16.1. [4] Consider a dynamical system of the form

ẋ = f (x(t)) , x(0) = x0 , t ∈ R (4.70)

A solution s(t, x0 ) of (4.70) is periodic if there exists a finite time T > 0


such that s(t, x0 ) = s(t + T, x0 ) for all t ∈ R.
As we have seen in Example 1.6.3 and in Case 2 of Section 1.1.4, for the second-order linear system with eigenvalues ±jβ the origin is a center, and the trajectories are closed orbits.
When the system is transformed into its real Jordan form, the solution is given by
z₁(t) = r₀ cos(βt + θ₀), z₂(t) = r₀ sin(βt + θ₀)
where
r₀ = √(z₁²(0) + z₂²(0)), θ₀ = tan⁻¹(z₂(0)/z₁(0))
Therefore, the system has a sustained oscillation of amplitude r₀. It is usually referred to as the harmonic oscillator.
Example 4.16.1. Consider the nonlinear dynamical system

ẋ₁(t) = −x₂(t) + x₁(t)[1 − x₁²(t) − x₂²(t)], x₁(0) = x₁₀, t ≥ 0   (4.71)

ẋ₂(t) = x₁(t) + x₂(t)[1 − x₁²(t) − x₂²(t)], x₂(0) = x₂₀   (4.72)

By transforming the system into polar coordinates:

ṙ(t) = r(t)[1 − r²(t)], r(0) = r₀ = √(x₁₀² + x₂₀²), t ≥ 0   (4.73)

θ̇(t) = 1, θ(0) = θ₀ = arctan(x₂₀/x₁₀)   (4.74)

it can be shown that the system has an equilibrium point at the origin; that is, x = [x₁ x₂]ᵀ = 0. All the solutions of the system starting from initial conditions (x₁₀, x₂₀) that are not on the unit circle Ω_c = {x ∈ R² : x₁² + x₂² = 1} approach the unit circle. In particular,



• since ṙ < 0 when r > 1, all solutions starting outside the unit circle move inward, toward the unit circle;

• since ṙ > 0 when 0 < r < 1, all solutions starting inside the unit circle (except the origin) move outward, toward the unit circle (see Figure 4.3).

Once again, using separation of variables with (4.73) and (4.74), the solution is given by

r(t) = [1 + (1/r₀² − 1) e^{−2t}]^{−1/2}   (4.75)

θ(t) = t + θ₀   (4.76)

Equation (4.75) shows that the dynamical system has a periodic solution corresponding to r = 1; that is, x₁₀² + x₂₀² = 1. Hence, the dynamical systems (4.71) and (4.72) have a periodic orbit, or limit cycle.
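The polar-form solution r(t) = [1 + (1/r₀² − 1)e^{−2t}]^{−1/2} can be verified against direct integration of (4.71)–(4.72) (a sketch; the initial condition is an arbitrary point off the unit circle):

```python
import numpy as np

x10, x20 = 2.0, 0.0                    # arbitrary start outside the unit circle
r0 = np.hypot(x10, x20)

f = lambda s: np.array([-s[1] + s[0] * (1 - s[0]**2 - s[1]**2),
                         s[0] + s[1] * (1 - s[0]**2 - s[1]**2)])

s, h, T = np.array([x10, x20]), 0.001, 6.0
for _ in range(int(T / h)):            # fixed-step RK4
    k1 = f(s); k2 = f(s + h/2 * k1)
    k3 = f(s + h/2 * k2); k4 = f(s + h * k3)
    s = s + h/6 * (k1 + 2*k2 + 2*k3 + k4)

r_num = np.hypot(s[0], s[1])
r_exact = (1 + (1 / r0**2 - 1) * np.exp(-2 * T)) ** -0.5
assert abs(r_num - r_exact) < 1e-5     # matches the polar-form solution (4.75)
assert abs(r_num - 1.0) < 1e-4         # trajectories approach the unit circle
```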

Fig. 4.3: Limit cycle of example 4.16.1

Theorem 4.16.1 (Bendixson Criterion [1]).

Consider the nonlinear dynamical system

ẋ₁ = f₁(x₁, x₂)
ẋ₂ = f₂(x₁, x₂)

A limit cycle can exist in a simply connected domain Ω ⊂ R² of the phase plane only if ∂f₁/∂x₁ + ∂f₂/∂x₂ vanishes identically or changes sign in Ω; if this divergence is strictly positive or strictly negative throughout Ω, no limit cycle can lie entirely in Ω.
Proof. From the slope of the phase trajectories,
dx₂/dx₁ = f₂(x₁, x₂)/f₁(x₁, x₂) → f₂ dx₁ = f₁ dx₂

which holds along any system trajectory, including any closed orbit such as a limit cycle. Integrating both sides along the closed curve C of a limit cycle, we obtain

∮_C (f₁ dx₂ − f₂ dx₁) = 0

By Green's Theorem, which states: let C be any closed curve and let D be the region enclosed by the curve; if P and Q are two vector functions with continuous first-order partial derivatives on the region D, then

∮_C P dx + Q dy = ∬_D (∂Q/∂x − ∂P/∂y) dA

Hence, with P = −f₂ and Q = f₁,

∮_C (f₁ dx₂ − f₂ dx₁) = ∬_D (∂f₁/∂x₁ + ∂f₂/∂x₂) dx₁ dx₂ = 0

which is impossible if ∂f₁/∂x₁ + ∂f₂/∂x₂ is strictly positive or strictly negative on D.

Example 4.16.2. Consider the system

ẋ₁ = g(x₂) + 4x₁x₂²   (4.77)

ẋ₂ = h(x₁) + 4x₁²x₂   (4.78)

Since

∂f₁/∂x₁ + ∂f₂/∂x₂ = 4(x₁² + x₂²)

which is strictly positive except at the origin (no sign change can occur), the system does not have a limit cycle.
CONCLUSION

This MSc thesis is dedicated to the stability of switched linear systems. The research is focused on continuous-time switched systems that are governed by a designated switching law. The stability problems are treated in the framework of Lyapunov theory.

In Chapter 1 of this thesis, we introduced the linearization of nonlinear systems, which makes it easier to stabilize dynamical systems. We also described the methods needed to construct the phase portrait, which gives a view of how the trajectory of a dynamical system's solution may evolve.

One of the most useful tools in studying the stability of a dynamical system is the concept of a Lyapunov function. While solving dynamical systems explicitly is tedious, building a Lyapunov function candidate is a more powerful method for identifying the parameters that force the system to be stable, without needing to search for a solution of its state equation.

In Chapter 2, we showed how to stabilize a switched system using slow-switching ideas, where the time between two successive switching instants is no less than a specified value, the so-called dwell time. The dwell-time concept guarantees the stability of the switched system. Dwell time evolved into a newer tool called ADT (average dwell time), which gives a method for constructing a switching law that forces the switched system to be asymptotically stable. The description of ADT is provided in Chapter 3.

A common Lyapunov function is not always easy to obtain. If we have more than two Hurwitz subsystems, it may be too hard to determine the QCLF needed to study the stability of the switched system. Therefore, the idea of multiple Lyapunov functions was introduced to solve many switched-systems problems.

Finally, in Chapter 3 the challenge was to study the situation where all the subsystems are unstable. In that case, the stability analysis of switched systems focuses on the computation of the minimal and maximal admissible dwell times {τmin, τmax}.
The switching law is then confined by the pair of lower and upper bounds {τmin, τmax}, which means the activation time of each mode must be neither too long nor too short, and must lie in the interval [τmin, τmax], to ensure GUAS of the switched system. This concept is based on a discretization technique that divides the domain of the defined matrix function Pᵢ(t) into a finite set of discrete points over the interval [tₙ, tₙ₊₁].
BIBLIOGRAPHY

[1] H. Khalil, Nonlinear Systems, 2nd edition, Michigan State University, 1996

[2] D. Liberzon, Switching in Systems and Control, Birkhäuser, 2003

[3] F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, 1985

[4] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach, Princeton University Press, 2008

[5] J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Massachusetts Institute of Technology, 1991

[6] L. Perko, Differential Equations and Dynamical Systems, 3rd edition, 2000

[7] K. Wulff, Quadratic and Non-Quadratic Stability Criteria for Switched Linear Systems, supervised by Dr. Robert Shorten, Department of Computer Science, National University of Ireland, Maynooth, December 2004

[8] I. J. S. Sarna, Mathematics for Engineers with Essential Theory, Delhi College of Engineering, Delhi University, 1st edition, 2011

[9] W. J. Rugh, Linear System Theory, 2nd edition, Department of Electrical and Computer Engineering, The Johns Hopkins University, 1996

[10] C.-T. Chen, Linear System Theory and Design, 3rd edition, Oxford University Press, 1999

[11] D. A. Sanchez, Ordinary Differential Equations, Texas A&M University

[12] W. L. Brogan, Modern Control Theory, 3rd edition, University of Nevada, 1991

[13] K. Narendra and J. Balakrishnan, A Common Lyapunov Function for Stable LTI Systems with Commuting A-Matrices, Department of Electrical Engineering, Yale University, USA, 1993

[14] B. Hu, X. Xu, A. N. Michel, and P. J. Antsaklis, Stability Analysis for a Class of Nonlinear Switched Systems, Department of Electrical Engineering, University of Notre Dame, USA, 1999

[15] G. Zhai, B. Hu, and K. Yasuda, Piecewise Lyapunov Functions for Switched Systems with Average Dwell Time, Faculty of Systems Engineering, Wakayama University, Japan; A. N. Michel, Department of Electrical Engineering, University of Notre Dame, USA

[16] P. Peleties and R. DeCarlo, Asymptotic Stability of m-Switched Systems Using Lyapunov-Like Functions, School of Electrical Engineering, Purdue University, IN 47907

[17] L. Hetel, Robust Stability and Control of Switched Linear Systems, Nancy University, France, 2007

[18] G. Zhai, X. Chen, S. Takai, and K. Yasuda, Stability and H∞ Disturbance Attenuation Analysis for LTI Control Systems with Controller Failures, Asian Journal of Control, Vol. 6, No. 1, pp. 104-111, March 2004

[19] Karabacak, Dwell Time and Average Dwell Time Methods Based on the Cycle Ratio of the Switching Graph, Electronics and Communication Engineering Department, Istanbul Technical University, Maslak, Istanbul, Turkey, 2013

[20] J. Wang, Two Characterizations of Switched Nonlinear Systems with Average Dwell Time, Province Key Laboratory of Aerospace Power Systems, College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, P. R. China, 2016

[21] G. Zhai, B. Hu, K. Yasuda, and A. N. Michel, Stability Analysis of Switched Systems with Stable and Unstable Subsystems: An Average Dwell Time Approach, Faculty of Systems Engineering, Wakayama University, Japan; Department of Electrical Engineering, University of Notre Dame, Notre Dame, USA

[22] W. Xiang and J. Xiao, Stabilization of Switched Continuous-Time Systems with All Modes Unstable via Dwell-Time Switching, School of Transportation and Logistics and School of Electrical Engineering, Southwest Jiaotong University, China

[23] E. Kreyszig, Advanced Engineering Mathematics, Ohio State University, Columbus, Ohio, 10th edition, 2015

[24] A. Nayfeh, Perturbation Methods, WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim, 2004

[25] A. N. Michel, Stability of Dynamical Systems: Continuous, Discontinuous, and Discrete Systems, Department of Electrical Engineering, University of Notre Dame, USA, 2008

[26] J. Lu and L. J. Brown, A Multiple Lyapunov Functions Approach for Stability of Switched Systems, American Control Conference, Marriott Waterfront, Baltimore, MD, USA, 2010

[27] G.-D. Hu and M. Liu, The Weighted Logarithmic Matrix Norm and Bounds of the Matrix Exponential, Department of Control Science and Engineering, Harbin Institute of Technology, China, 2003

[28] R. Narayanan and D. Schwabe, Interfacial Fluid Dynamics and Transport Processes, University of Florida, USA, 2003
