Poznan University of Technology
Poznań 2020
Notations and symbols
Description of symbols:
$\dot{z}$ – time derivative of the variable $z$
$\frac{\partial f(z)}{\partial z}$ – partial derivative of the function $f(z)$ with respect to the variable $z$
1. Revision of selected differential operations
$$\frac{\partial}{\partial x} g(x) = \frac{\partial A(x)}{\partial x}\, b,$$
where $\frac{\partial A(x)}{\partial x}$ is the partial derivative of $A(x)$ in the form:
$$\frac{\partial A(x)}{\partial x} := \begin{bmatrix} \frac{\partial a_{11}(x)}{\partial x} & \frac{\partial a_{12}(x)}{\partial x} & \cdots & \frac{\partial a_{1l}(x)}{\partial x} \\ \frac{\partial a_{21}(x)}{\partial x} & \frac{\partial a_{22}(x)}{\partial x} & \cdots & \frac{\partial a_{2l}(x)}{\partial x} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial a_{k1}(x)}{\partial x} & \frac{\partial a_{k2}(x)}{\partial x} & \cdots & \frac{\partial a_{kl}(x)}{\partial x} \end{bmatrix} \in \mathbb{R}^{k \times l \times n}.$$
Due to the size of $\frac{\partial A(x)}{\partial x}$ (which is $k \times l \times n$), the analytical notation is quite troublesome. In such a situation, we can use the stack operator and the Kronecker product, which simplify the notation of partial derivatives of functions such as $\frac{\partial g(x)}{\partial x}$. It is worth noting that if the multiplication $A(x)b$ is realized first, and only then the derivative $\frac{\partial}{\partial x}(A(x)b)$ is calculated, there is no problem with the notational difficulties of this derivative. At this point we will skip this fact and assume that there is a need to calculate $\frac{\partial A(x)}{\partial x}$.
1.1. Partial derivative
Stack operator
The stack operator, also called the Bellman operator, is a transformation that turns a matrix into a column vector by stacking all columns of the given matrix one under another.
Let us assume that the stack operator is denoted by the symbol $(\cdot)^S$, where $(\cdot)$ means any two-dimensional array and $S$ means the transformation of the $(\cdot)$ array.
For the purpose of explaining this operator, let’s define the G matrix:
$$G = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1l} \\ g_{21} & g_{22} & \cdots & g_{2l} \\ \vdots & \vdots & \ddots & \vdots \\ g_{k1} & g_{k2} & \cdots & g_{kl} \end{bmatrix} \in \mathbb{R}^{k \times l}.$$
The stack operator is defined as:
$$G^S := \begin{bmatrix} g_{11} \\ g_{21} \\ \vdots \\ g_{k1} \\ g_{12} \\ g_{22} \\ \vdots \\ g_{k2} \\ \vdots \\ g_{1l} \\ g_{2l} \\ \vdots \\ g_{kl} \end{bmatrix} \in \mathbb{R}^{k \cdot l}. \qquad (1.4)$$
Thus $(\cdot)^S : \mathbb{R}^{k \times l} \to \mathbb{R}^{k \cdot l}$, which means the two-dimensional matrix $G$ of size $k \times l$ is changed into a one-dimensional vector $G^S$ of size $k \cdot l$.
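The stack operation is easy to check numerically; a minimal sketch (assuming numpy is available), in which column-major ("Fortran-order") flattening reproduces $G^S$:

```python
import numpy as np

# Example matrix G of size k x l (here k = 2, l = 3)
G = np.array([[1, 2, 3],
              [4, 5, 6]])

# Stack (Bellman) operator: stack the columns of G one under another.
# Column-major ("Fortran") flattening does exactly this.
G_S = G.flatten(order="F")

print(G_S)  # -> [1 4 2 5 3 6]
```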
Kronecker product
The Kronecker product is an operation that realizes a specific multiplication of matrices: it multiplies every element of one matrix by the whole content of the other matrix. Let us assume that the Kronecker product is denoted by the symbol $\otimes$. Let us add that it is a noncommutative operation. In order to explain the operator $\otimes$, we will use the previously defined matrix $G$ and a matrix $H$:
$$H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1r} \\ h_{21} & h_{22} & \cdots & h_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ h_{p1} & h_{p2} & \cdots & h_{pr} \end{bmatrix} \in \mathbb{R}^{p \times r}.$$
It is worth noting that, in general, there is no requirement that the dimensions of $G$ and the dimensions of $H$ be related.
We define Kronecker’s product as:
$$H \otimes G := \begin{bmatrix} h_{11} G & h_{12} G & \cdots & h_{1r} G \\ h_{21} G & h_{22} G & \cdots & h_{2r} G \\ \vdots & \vdots & \ddots & \vdots \\ h_{p1} G & h_{p2} G & \cdots & h_{pr} G \end{bmatrix} \in \mathbb{R}^{(k \cdot p) \times (l \cdot r)}. \qquad (1.5)$$
Substituting the entries of the matrix $H$, we obtain the matrix
$$H \otimes G = \begin{bmatrix} h_{11}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & h_{12}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & \cdots & h_{1r}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} \\ \vdots & \vdots & \ddots & \vdots \\ h_{p1}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & h_{p2}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & \cdots & h_{pr}\begin{bmatrix} g_{11} & \cdots & g_{1l} \\ \vdots & \ddots & \vdots \\ g_{k1} & \cdots & g_{kl} \end{bmatrix} \end{bmatrix}.$$
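The block structure of (1.5) corresponds to numpy's `np.kron` with the arguments in the same order; a small sketch (assuming numpy), which also illustrates that the operation is noncommutative:

```python
import numpy as np

H = np.array([[1, 2],
              [3, 4]])          # p x r = 2 x 2
G = np.array([[0, 5],
              [6, 7]])          # k x l = 2 x 2

# H (x) G as in (1.5): each entry h_ij multiplies the whole matrix G
HG = np.kron(H, G)
GH = np.kron(G, H)

print(HG.shape)                 # (4, 4) = (k*p) x (l*r)
print(np.array_equal(HG, GH))   # False - the Kronecker product is noncommutative
```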
Calculation of partial derivative of matrix function with use of stack operator and
Kronecker product
Now let’s go back to the case of calculation of the partial derivative of g(x) described by (1.3). For
this case it is possible to use the stack operator and the Kronecker product in the form:
$$\frac{\partial}{\partial x} g(x) = [b \otimes I_{k \times k}]^T\, \frac{\partial A^S(x)}{\partial x}, \qquad (1.6)$$
where $A^S(x)$ is determined by (1.4) for the matrix $A(x)$ given in (1.1), $\frac{\partial A^S(x)}{\partial x}$ is the partial derivative of $A^S(x)$, $I_{k \times k} \in \mathbb{R}^{k \times k}$ is an identity matrix, $b$ is given by (1.2), and $[b \otimes I_{k \times k}]^T$ means the realization of the Kronecker product defined in (1.5) on the elements $b$ and $I_{k \times k}$, subjected to transposition. Using (1.1) and (1.4), $A^S(x)$ is determined in the form:
$$A^S(x) = \begin{bmatrix} a_{11}(x) \\ a_{21}(x) \\ \vdots \\ a_{k1}(x) \\ a_{12}(x) \\ a_{22}(x) \\ \vdots \\ a_{k2}(x) \\ \vdots \\ a_{1l}(x) \\ a_{2l}(x) \\ \vdots \\ a_{kl}(x) \end{bmatrix} \in \mathbb{R}^{k \cdot l},$$
where $[b \otimes I_{k \times k}]^T \in \mathbb{R}^{(k \cdot 1) \times (k \cdot l)}$. Next, using the definition (1.6), we calculate the final form of the derivative:
$$\frac{\partial}{\partial x} g(x) = \begin{bmatrix} b_1 & 0 & \cdots & 0 & b_2 & 0 & \cdots & 0 & \cdots & b_l & 0 & \cdots & 0 \\ 0 & b_1 & \cdots & 0 & 0 & b_2 & \cdots & 0 & \cdots & 0 & b_l & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_1 & 0 & 0 & \cdots & b_2 & \cdots & 0 & 0 & \cdots & b_l \end{bmatrix} \cdot \begin{bmatrix} \frac{\partial a_{11}(x)}{\partial x_1} & \frac{\partial a_{11}(x)}{\partial x_2} & \cdots & \frac{\partial a_{11}(x)}{\partial x_n} \\ \frac{\partial a_{21}(x)}{\partial x_1} & \frac{\partial a_{21}(x)}{\partial x_2} & \cdots & \frac{\partial a_{21}(x)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial a_{k1}(x)}{\partial x_1} & \frac{\partial a_{k1}(x)}{\partial x_2} & \cdots & \frac{\partial a_{k1}(x)}{\partial x_n} \\ \frac{\partial a_{12}(x)}{\partial x_1} & \frac{\partial a_{12}(x)}{\partial x_2} & \cdots & \frac{\partial a_{12}(x)}{\partial x_n} \\ \frac{\partial a_{22}(x)}{\partial x_1} & \frac{\partial a_{22}(x)}{\partial x_2} & \cdots & \frac{\partial a_{22}(x)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial a_{k2}(x)}{\partial x_1} & \frac{\partial a_{k2}(x)}{\partial x_2} & \cdots & \frac{\partial a_{k2}(x)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial a_{1l}(x)}{\partial x_1} & \frac{\partial a_{1l}(x)}{\partial x_2} & \cdots & \frac{\partial a_{1l}(x)}{\partial x_n} \\ \frac{\partial a_{2l}(x)}{\partial x_1} & \frac{\partial a_{2l}(x)}{\partial x_2} & \cdots & \frac{\partial a_{2l}(x)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial a_{kl}(x)}{\partial x_1} & \frac{\partial a_{kl}(x)}{\partial x_2} & \cdots & \frac{\partial a_{kl}(x)}{\partial x_n} \end{bmatrix} \in \mathbb{R}^{k \times n}.$$
$$\frac{\partial}{\partial x} g(x) = \begin{bmatrix} \frac{\partial}{\partial x_1} g(x) & \frac{\partial}{\partial x_2} g(x) & \cdots & \frac{\partial}{\partial x_n} g(x) \end{bmatrix},$$
then
$$\frac{\partial}{\partial x_1} g(x) = \begin{bmatrix} b_1 & 0 & \cdots & 0 \\ 0 & b_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_1 \end{bmatrix}\begin{bmatrix} \frac{\partial a_{11}(x)}{\partial x_1} \\ \frac{\partial a_{21}(x)}{\partial x_1} \\ \vdots \\ \frac{\partial a_{k1}(x)}{\partial x_1} \end{bmatrix} + \begin{bmatrix} b_2 & 0 & \cdots & 0 \\ 0 & b_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_2 \end{bmatrix}\begin{bmatrix} \frac{\partial a_{12}(x)}{\partial x_1} \\ \frac{\partial a_{22}(x)}{\partial x_1} \\ \vdots \\ \frac{\partial a_{k2}(x)}{\partial x_1} \end{bmatrix} + \cdots + \begin{bmatrix} b_l & 0 & \cdots & 0 \\ 0 & b_l & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & b_l \end{bmatrix}\begin{bmatrix} \frac{\partial a_{1l}(x)}{\partial x_1} \\ \frac{\partial a_{2l}(x)}{\partial x_1} \\ \vdots \\ \frac{\partial a_{kl}(x)}{\partial x_1} \end{bmatrix},$$
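Identity (1.6) can be checked numerically against a finite-difference derivative of $g(x) = A(x)b$; a sketch assuming numpy is available and using a hypothetical 2×2 matrix function $A(x)$ (the sample values of $b$ and $x$ are arbitrary):

```python
import numpy as np

def A(x):
    # Hypothetical 2x2 matrix function A(x), used only for this check
    return np.array([[x[0]**2,      x[0]*x[1]],
                     [np.sin(x[1]), x[0] + x[1]]])

b = np.array([1.5, -2.0])       # b has l = 2 entries
x = np.array([0.7, 0.3])        # n = 2 state variables
k, n, h = 2, 2, 1e-6

# Central-difference derivative of the stacked matrix A^S(x), column j = d/dx_j
dAS = np.zeros((2 * k, n))
for j in range(n):
    e = np.zeros(n); e[j] = h
    dAS[:, j] = (A(x + e).flatten(order="F") - A(x - e).flatten(order="F")) / (2 * h)

# Right-hand side of (1.6): [b (x) I]^T dA^S/dx
BI = np.kron(b.reshape(-1, 1), np.eye(k))   # (l*k) x k
lhs_16 = BI.T @ dAS                         # k x n

# Central-difference derivative of g(x) = A(x) b, column by column
dg = np.zeros((k, n))
for j in range(n):
    e = np.zeros(n); e[j] = h
    dg[:, j] = (A(x + e) @ b - A(x - e) @ b) / (2 * h)

print(np.allclose(lhs_16, dg, atol=1e-5))   # True - both sides of (1.6) agree
```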
Exercises
Exercise 1. Calculate the derivative $\frac{\partial}{\partial x}(A(x)b)$ using the stack operator and Kronecker product. Compare the results obtained with the derivative calculated in the standard way.
$$A(x) = \begin{bmatrix} \sin x_1 & x_1 x_2 + x_1 \\ \cos x_1 & x_2^2 + x_2 + \tan x_1 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$
Solution
$$\frac{\partial}{\partial x}(A(x)b) = [b \otimes I_{k \times k}]^T\, \frac{\partial A^S(x)}{\partial x},$$
$$A^S(x) = \begin{bmatrix} \sin x_1 \\ \cos x_1 \\ x_1 x_2 + x_1 \\ x_2^2 + x_2 + \tan x_1 \end{bmatrix},$$
$$\frac{\partial}{\partial x} A^S(x) = \begin{bmatrix} \cos x_1 & 0 \\ -\sin x_1 & 0 \\ x_2 + 1 & x_1 \\ \frac{1}{\cos^2 x_1} & 2x_2 + 1 \end{bmatrix},$$
$$[b \otimes I_{2 \times 2}]^T = \left(\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)^T = \begin{bmatrix} b_1 & 0 \\ 0 & b_1 \\ b_2 & 0 \\ 0 & b_2 \end{bmatrix}^T = \begin{bmatrix} b_1 & 0 & b_2 & 0 \\ 0 & b_1 & 0 & b_2 \end{bmatrix},$$
$$\frac{\partial}{\partial x}(A(x)b) = [b \otimes I_{2 \times 2}]^T\, \frac{\partial}{\partial x} A^S(x) = \begin{bmatrix} b_1 & 0 & b_2 & 0 \\ 0 & b_1 & 0 & b_2 \end{bmatrix}\begin{bmatrix} \cos x_1 & 0 \\ -\sin x_1 & 0 \\ x_2 + 1 & x_1 \\ \frac{1}{\cos^2 x_1} & 2x_2 + 1 \end{bmatrix} =$$
$$= \begin{bmatrix} b_1 \cos x_1 + b_2 (x_2 + 1) & b_2 x_1 \\ -b_1 \sin x_1 + b_2 \frac{1}{\cos^2 x_1} & b_2 (2x_2 + 1) \end{bmatrix}.$$
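The closed-form result above can be cross-checked against central finite differences of $A(x)b$; a standard-library sketch with arbitrarily chosen sample values $b = (2, 3)$ and $x = (0.4, -0.2)$:

```python
import math

b1, b2 = 2.0, 3.0
x1, x2 = 0.4, -0.2

def Ab(x1, x2):
    # g(x) = A(x) b for the A(x) and b of Exercise 1
    return (math.sin(x1) * b1 + (x1 * x2 + x1) * b2,
            math.cos(x1) * b1 + (x2**2 + x2 + math.tan(x1)) * b2)

# Closed-form derivative obtained above: closed[i][j] = d g_i / d x_j
closed = [[b1 * math.cos(x1) + b2 * (x2 + 1), b2 * x1],
          [-b1 * math.sin(x1) + b2 / math.cos(x1)**2, b2 * (2 * x2 + 1)]]

# Central finite differences: num[j][i] = d g_i / d x_j
h = 1e-6
num = [[(Ab(x1 + h, x2)[i] - Ab(x1 - h, x2)[i]) / (2 * h) for i in range(2)],
       [(Ab(x1, x2 + h)[i] - Ab(x1, x2 - h)[i]) / (2 * h) for i in range(2)]]

for i in range(2):
    for j in range(2):
        assert abs(closed[i][j] - num[j][i]) < 1e-5
print("closed-form derivative matches finite differences")
```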
Exercise 2. A mechanical system is given in a conservative force field. For this system the dynamic equation is defined as
$$M(q)\ddot q + C(q,\dot q)\dot q + G(q) = \tau,$$
where $q \in \mathbb{R}^n$ is a state, $M(q)$ is a symmetric mass matrix, $C(q,\dot q)$ is a Coriolis force matrix, $G(q)$ is a vector of gravity terms, and $\tau$ is a vector of external forcing. For this system, the kinetic energy is defined as
$$T = \frac{1}{2}\dot q^T M(q)\dot q,$$
and the potential energy in the form
$$V = V(q).$$
Prove that $\dot M(q) - 2C(q,\dot q) = -J$, where $J$ is some skew-symmetric matrix.
Solution
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T - \left(\frac{\partial L}{\partial q}\right)^T + \left(\frac{\partial R}{\partial \dot q}\right)^T = Q$$
$$L = T - V$$
$$\frac{\partial L}{\partial \dot q} = \frac{\partial T}{\partial \dot q} - \frac{\partial V}{\partial \dot q}$$
$$\frac{\partial T}{\partial \dot q} = \frac{\partial}{\partial \dot q}\left(\frac{1}{2}\dot q^T M(q)\dot q\right) = \frac{1}{2}\frac{\partial}{\partial \dot q}\left(\dot q^T\left(M(q)\dot q\right)\right)$$
$$\frac{\partial T}{\partial \dot q} = \frac{1}{2}\left(\left(M(q)\dot q\right)^T + \dot q^T M(q)\right) = \frac{1}{2}\left(\dot q^T M^T(q) + \dot q^T M(q)\right) = \frac{1}{2}\dot q^T\left(M^T(q) + M(q)\right)$$
Due to the symmetry of $M(q)$:
$$M^T(q) + M(q) = 2M^T(q),$$
$$\frac{1}{2}\dot q^T\left(M^T(q) + M(q)\right) = \frac{1}{2}\dot q^T\,2M^T(q) = \dot q^T M^T(q).$$
$$\frac{\partial V}{\partial \dot q} = 0$$
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T = \frac{d}{dt}\left(\frac{\partial T}{\partial \dot q} - \frac{\partial V}{\partial \dot q}\right)^T = \frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T$$
$$\left(\frac{\partial T}{\partial \dot q}\right)^T = M(q)\dot q$$
$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T = \frac{d}{dt}\left(M(q)\dot q\right) = \frac{d}{dt}\left(M(q)\right)\dot q + M(q)\frac{d}{dt}\left(\dot q\right) = \dot M(q)\dot q + M(q)\ddot q$$
$$\dot M(q) = \frac{\partial}{\partial q}\left(M(q)\right)\frac{d}{dt}(q) = \frac{\partial}{\partial q}\left(M(q)\right)\dot q$$
$$\dot M(q) = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)$$
$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\,\dot q + M(q)\ddot q$$
$$\frac{\partial L}{\partial q} = \frac{\partial T}{\partial q} - \frac{\partial V}{\partial q}$$
$$\frac{\partial T}{\partial q} = \frac{1}{2}\dot q^T\,\frac{\partial}{\partial q}\left(M(q)\right)\dot q$$
$$\frac{\partial}{\partial q}\left(M(q)\right)\dot q = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)$$
$$\frac{\partial T}{\partial q} = \frac{1}{2}\dot q^T\,[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)$$
$$\left(\frac{\partial V}{\partial q}\right)^T = G(q)$$
$$\frac{\partial L}{\partial q} = \frac{1}{2}\dot q^T\,[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q) - G^T(q)$$
$$\left(\frac{\partial L}{\partial q}\right)^T = \left(\frac{1}{2}\dot q^T\,[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T - G(q)$$
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T - \left(\frac{\partial L}{\partial q}\right)^T + \left(\frac{\partial R}{\partial \dot q}\right)^T = Q, \qquad \left(\frac{\partial R}{\partial \dot q}\right)^T = 0$$
$$M(q)\ddot q + \underbrace{[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\,\dot q - \left(\frac{1}{2}\dot q^T\,[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T}_{C(q,\dot q)\dot q} + G(q) = Q$$
$$C(q,\dot q)\dot q = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\,\dot q - \frac{1}{2}\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T\dot q =$$
$$= \left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q) - \frac{1}{2}\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T\right)\dot q,$$
$$C(q,\dot q) = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q) - \frac{1}{2}\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T =$$
$$= \underbrace{\frac{1}{2}[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)}_{\frac{1}{2}\dot M(q)} + \frac{1}{2}\underbrace{\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q) - \left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)\right)^T\right)}_{J}$$
With $J = A - A^T$ for $A = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}M^S(q)$, the matrix $J$ is skew-symmetric, so
$$C(q,\dot q) = \frac{1}{2}\dot M(q) + \frac{1}{2}J,$$
$$\dot M(q) - 2C(q,\dot q) = -J.$$
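The skew-symmetry property can be checked numerically for a concrete mass matrix; a sketch (assuming numpy) that builds $C(q,\dot q)$ from the Christoffel symbols, which is one standard choice of the Coriolis matrix consistent with this property, using the 3-DOF mass matrix that appears later in these notes with hypothetical values $I = 2.0$, $m = 0.7$:

```python
import numpy as np

I_, m_ = 2.0, 0.7   # hypothetical inertia and mass

def M(q):
    # Sample mass matrix M(q) = diag(I, m*q3^2, m)
    return np.diag([I_, m_ * q[2]**2, m_])

q  = np.array([0.3, -1.1, 1.6])   # arbitrary configuration
dq = np.array([0.5,  0.8, -0.4])  # arbitrary velocity
n, h = 3, 1e-6

# dM[k][i][j] = dM_ij/dq_k, by central differences
dM = np.array([(M(q + h*np.eye(n)[k]) - M(q - h*np.eye(n)[k])) / (2*h)
               for k in range(n)])

# Christoffel-symbol Coriolis matrix:
# C_ij = 1/2 * sum_k (dM_ij/dq_k + dM_ik/dq_j - dM_jk/dq_i) * dq_k
C = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        C[i, j] = 0.5 * sum((dM[k, i, j] + dM[j, i, k] - dM[i, j, k]) * dq[k]
                            for k in range(n))

# Mdot = sum_k dM/dq_k * dq_k
Mdot = sum(dM[k] * dq[k] for k in range(n))

N = Mdot - 2 * C
print(np.allclose(N, -N.T, atol=1e-6))  # True - Mdot - 2C is skew-symmetric
```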
Definition 4 (Vector field acting on a function). For a differentiation $X \in \Gamma(M)$ and for a point $a \in M$, the formula:
Referring to definition (1.7), the field $vf$, with $X \in \Gamma(M)$, specified on the curve $f$ at the point $a$ may be written in a different way.
Definition 5 (Vector field acting on a function – in the form of a differential operator). Let $X \in \Gamma(M)$ denote a differentiation, $a \in M$ a point of tangency, and let $\partial_1, \ldots, \partial_n$ form the base of the space module $\Gamma(M)$; then the field $X(f(a))$ can be written as:
$$X(f(a)) = \sum_{k=1}^{n} X^{(k)}(f(a))\,\partial_k, \qquad (1.8)$$
where $f \in C^{\infty}(M)$.
From (1.8) one can state that $vf$ can be written in the form of a summation:
where a ∈ M .
From Definition 5 it can be seen that the operation of the differentiation $X$ on the function $f$ at the point $a$ is based on assigning to the function $f$ its directional derivative at the point $a$ in the direction of the vector $X^{(k)}(f(a))$. Thus, we can say the vectors $X^{(k)}(f(a))$, for $k = 1, \ldots, n$, uniquely determine the differentiation $X$.
It is worth noting that (1.8) is a notation of $vf$ in the form of a differential operator, i.e. the sum of the individual components of $X(f)$ with the designation of operator no. $k$ in the form of $\partial_k$. Another commonly used $vf$ notation is the matrix form, i.e. using a single-column matrix. Then equation (1.8) can be rewritten in the form:
$$X(f(a)) = \begin{bmatrix} X^{(1)}(f(a)) & \ldots & X^{(n)}(f(a)) \end{bmatrix}^T.$$
This form of notation will be most commonly used in this document.
in the form:
$$L_{X_1(x)} X_2(x) = \frac{\partial X_2(x)}{\partial x}\,X_1(x) - \frac{\partial X_1(x)}{\partial x}\,X_2(x),$$
where X1 (x),X2 (x) ∈ T Rn . The directional derivative of the vector field is also called the Lie bracket,
which will be discussed later in this paper.
Solution
$$L_v f(x) := \frac{\partial f(x)}{\partial x}\,v$$
$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2 \cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}$$
$$L_v f(x) = \frac{\partial f(x)}{\partial x}\,v = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2 \cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \end{bmatrix} =$$
$$= 2\left(7 + x_2^2 + 2x_1 x_2 \cos x_1^2\right) + 3\left(2x_1 x_2 + \sin x_1^2 - 36x_2^2\right)$$
Exercise 9. Calculate the directional derivative of the scalar function $f(x) \in \mathbb{R}$ in the direction of the vector field $w(x) \in \mathbb{R}^2$, where
$$f(x) = 7x_1 + x_1 x_2^2 + x_2 \sin x_1^2 - 12x_2^3, \qquad w(x) = \begin{bmatrix} x_2 \\ x_1 x_2 \end{bmatrix}.$$
Solution
$$L_w f(x) := \frac{\partial f(x)}{\partial x}\,w(x)$$
$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2 \cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}$$
$$L_w f(x) = \frac{\partial f(x)}{\partial x}\,w(x) = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2 \cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}\begin{bmatrix} x_2 \\ x_1 x_2 \end{bmatrix} =$$
$$= x_2\left(7 + x_2^2 + 2x_1 x_2 \cos x_1^2\right) + x_1 x_2\left(2x_1 x_2 + \sin x_1^2 - 36x_2^2\right) =$$
$$= 7x_2 + x_2^3 + 2x_1 x_2^2 \cos x_1^2 + 2x_1^2 x_2^2 + x_1 x_2 \sin x_1^2 - 36x_1 x_2^3$$
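The result can be verified numerically: the directional derivative is the limit of $(f(x + h\,w(x)) - f(x - h\,w(x)))/2h$; a standard-library sketch at an arbitrary sample point:

```python
import math

def f(x1, x2):
    return 7*x1 + x1*x2**2 + x2*math.sin(x1**2) - 12*x2**3

def Lwf(x1, x2):
    # Closed-form result obtained above
    return (x2*(7 + x2**2 + 2*x1*x2*math.cos(x1**2))
            + x1*x2*(2*x1*x2 + math.sin(x1**2) - 36*x2**2))

x1, x2 = 0.9, -0.6          # arbitrary sample point
w1, w2 = x2, x1*x2          # w(x) evaluated at that point

h = 1e-6
numeric = (f(x1 + h*w1, x2 + h*w2) - f(x1 - h*w1, x2 - h*w2)) / (2*h)

print(abs(numeric - Lwf(x1, x2)) < 1e-5)  # True
```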
Solution
$$L_g f(x) := \frac{\partial f(x)}{\partial x}\,g(x)$$
$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} x_2^2 + 2x_1 x_2 \cos x_1^2 - 12x_2^3 x_3 & 2x_1 x_2 + \sin x_1^2 - 36x_1 x_2^2 x_3 & -12x_1 x_2^3 \\ 0 & x_3 & x_2 \end{bmatrix}$$
$$L_g f(x) = \frac{\partial f(x)}{\partial x}\,g(x) = \begin{bmatrix} x_2^2 + 2x_1 x_2 \cos x_1^2 - 12x_2^3 x_3 & 2x_1 x_2 + \sin x_1^2 - 36x_1 x_2^2 x_3 & -12x_1 x_2^3 \\ 0 & x_3 & x_2 \end{bmatrix}\begin{bmatrix} x_1 x_2 \\ 2x_1 \\ x_3 \end{bmatrix} =$$
$$= \begin{bmatrix} x_1 x_2\left(x_2^2 + 2x_1 x_2 \cos x_1^2 - 12x_2^3 x_3\right) + 2x_1\left(2x_1 x_2 + \sin x_1^2 - 36x_1 x_2^2 x_3\right) - 12x_1 x_2^3 x_3 \\ 2x_1 x_3 + x_2 x_3 \end{bmatrix}$$
Is it possible to calculate the above operation $\frac{\partial f(x)}{\partial x}\,g(x)$ using the Kronecker product and the Bellman operator?
Adjoint representation
The adjoint representation is a differential operator which is equivalent to the Lie bracket. In order to clarify the scope of the concept of the adjoint representation, let us note that there is a difference between the Lie group's adjoint representation and the Lie algebra's adjoint representation. Such a distinction is often lacking in the literature on the subject, which introduces terminological and semantic inaccuracy. The adjoint representation of a Lie group is denoted by $\mathrm{Ad}_g$, where $g$ is an element of the group, and the adjoint representation of a Lie algebra is denoted by $\mathrm{ad}$. In this paper, we will use only the Lie algebra adjoint representation and will call it, for short, the adjoint representation. We will omit the formal definition of the Lie algebra adjoint representation here, as it requires the definition of terms such as the Lie group and the Lie algebra. Instead, we will give the relationship of this adjoint representation to the Lie bracket and the directional derivative (the Lie derivative), which can be written as the equation:
$$\mathrm{ad}_{X_1} X_2 = [X_1, X_2] = L_{X_1} X_2,$$
where $X_1$ and $X_2$ are vector fields. Therefore, to put it simply, we can say that the adjoint representation realized on two vector fields is identical to the Lie bracket operator on these vector fields.
Nilpotent algebra
When using the Lie group description in control theory, one of the considered issues is the adequacy of local Lie group properties to global properties, and this is related to the possibility of mapping the Lie group structure through the associated Lie algebra. In order for such a mapping to give a correct result in the global sense, it is necessary that the algebra be nilpotent. Since this topic does not fall within the scope of this subject, the introduction to the nilpotency of algebras will be limited to this brief description.
Let us define the concept of algebra nilpotency.
Definition 12 (Lie algebra nilpotency – using the operator ad). A Lie algebra $\mathfrak{g}$ is nilpotent when the following equality holds:
$$\mathrm{ad}(X_1)\,\mathrm{ad}(X_2)\,\mathrm{ad}(X_3)\ldots\mathrm{ad}(X_r) = 0,$$
where $X_i \in \mathfrak{g}$ and $i = 1, 2, 3, \ldots, r$.
Thus, the nilpotency verification involves calculating the directional derivatives of the given vector fields and checking whether the directional derivatives of a sufficiently high order vanish. The above claim can be simplified to an expression using the Lie brackets in the form:
Calculate the iterative Lie brackets of the generator fields and check the nilpotency of the system algebra.
Solution
$$g_1(q) = \begin{bmatrix} 1 \\ 0 \\ -q_2 \end{bmatrix}, \qquad g_2(q) = \begin{bmatrix} 0 \\ 1 \\ q_1 \end{bmatrix}.$$
$$[g_1,g_2] = \frac{\partial g_2}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_2$$
$$\frac{\partial g_1}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}, \qquad \frac{\partial g_2}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
$$g_3 = [g_1,g_2] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ -q_2 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ q_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix}$$
$$g_4 = [g_1,g_3] = \frac{\partial g_3}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ -q_2 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$g_5 = [g_2,g_3] = \frac{\partial g_3}{\partial q}\,g_2 - \frac{\partial g_2}{\partial q}\,g_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ q_1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
Since the vector fields g4 and g5 are zero, any vector fields created using the Lie bracket and these fields
will produce zero vector fields. Thus the algebra associated with the given system is nilpotent.
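The bracket computations above can be reproduced with a small numeric helper (finite-difference Jacobians), a sketch assuming numpy is available:

```python
import numpy as np

def jac(f, q, h=1e-6):
    # Finite-difference Jacobian of a vector field f at q
    n = len(q)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(q + e) - f(q - e)) / (2*h)
    return J

def bracket(f, g, q):
    # Lie bracket [f, g](q) = (dg/dq) f - (df/dq) g
    return jac(g, q) @ f(q) - jac(f, q) @ g(q)

g1 = lambda q: np.array([1.0, 0.0, -q[1]])
g2 = lambda q: np.array([0.0, 1.0,  q[0]])

q = np.array([0.4, -1.2, 0.7])                 # arbitrary test point
g3_val = bracket(g1, g2, q)
g3 = lambda q: np.array([0.0, 0.0, 2.0])       # the constant field found above
g4_val = bracket(g1, g3, q)
g5_val = bracket(g2, g3, q)

print(np.allclose(g3_val, [0, 0, 2]))          # True
print(np.allclose(g4_val, 0), np.allclose(g5_val, 0))  # True True
```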
Exercise 14. The mechanical system is given in the form of kinematic equations:
$$\dot q = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u_1 + \begin{bmatrix} \frac{1}{I} \\ -\frac{I}{m}q_3^{-2} \\ 0 \end{bmatrix}u_2,$$
where $I$ is the inertia of a certain element of the system, and $m$ is the mass of a certain element of the system. Calculate the iterative Lie brackets of the generator fields and check the nilpotency of the system algebra.
Solution
$$g_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad g_2(q) = \begin{bmatrix} \frac{1}{I} \\ -\frac{I}{m}q_3^{-2} \\ 0 \end{bmatrix}.$$
$$[g_1,g_2] = \frac{\partial g_2}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_2$$
$$\frac{\partial g_1}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \frac{\partial g_2}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2\frac{I}{m}q_3^{-3} \\ 0 & 0 & 0 \end{bmatrix}$$
$$g_3 = [g_1,g_2] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2\frac{I}{m}q_3^{-3} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \frac{1}{I} \\ -\frac{I}{m}q_3^{-2} \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 2\frac{I}{m}q_3^{-3} \\ 0 \end{bmatrix}$$
$$g_4 = [g_1,g_3] = \frac{\partial g_3}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -6\frac{I}{m}q_3^{-4} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -6\frac{I}{m}q_3^{-4} \\ 0 \end{bmatrix}$$
$$g_5 = [g_2,g_3] = \frac{\partial g_3}{\partial q}\,g_2 - \frac{\partial g_2}{\partial q}\,g_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -6\frac{I}{m}q_3^{-4} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \frac{1}{I} \\ -\frac{I}{m}q_3^{-2} \\ 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2\frac{I}{m}q_3^{-3} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 2\frac{I}{m}q_3^{-3} \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
Is it possible to generate more non-zero vector fields using g4 ?
$$g_6 = [g_1,g_4] = \frac{\partial g_4}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_4 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 24\frac{I}{m}q_3^{-5} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 24\frac{I}{m}q_3^{-5} \\ 0 \end{bmatrix}$$
$$g_7 = [g_2,g_4] = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad g_8 = [g_3,g_4] = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
You may notice the possibility of generating successive non-zero vector fields using g1 and g4 . Let’s
list all previously obtained non-zero vector fields and present their notation using the ad operator.
$$g_3 = [g_1,g_2] = \mathrm{ad}_{g_1} g_2$$
$$g_4 = [g_1,g_3] = [g_1,[g_1,g_2]] = \mathrm{ad}_{g_1}[g_1,g_2] = \mathrm{ad}_{g_1}\left(\mathrm{ad}_{g_1} g_2\right) = \mathrm{ad}_{g_1}\mathrm{ad}_{g_1} g_2 = \mathrm{ad}^2_{g_1} g_2$$
$$g_6 = [g_1,g_4] = [g_1,[g_1,g_3]] = [g_1,[g_1,[g_1,g_2]]] = \mathrm{ad}_{g_1}[g_1,[g_1,g_2]] = \mathrm{ad}_{g_1}\left(\mathrm{ad}_{g_1}[g_1,g_2]\right) = \mathrm{ad}_{g_1}\left(\mathrm{ad}_{g_1}\left(\mathrm{ad}_{g_1} g_2\right)\right) = \mathrm{ad}_{g_1}\mathrm{ad}_{g_1}\mathrm{ad}_{g_1} g_2 = \mathrm{ad}^3_{g_1} g_2$$
Let us calculate the vector field for the $k$-times execution of the adjoint representation:
$$\mathrm{ad}^k_{g_1} g_2 = \begin{bmatrix} 0 \\ (-1)^{k+1}(k+1)!\,\frac{I}{m}\,q_3^{-(k+2)} \\ 0 \end{bmatrix}.$$
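Because $g_1 = [0\ 0\ 1]^T$ is constant, each bracket with $g_1$ just differentiates the other field with respect to $q_3$, and the second component of every iterated bracket stays of the form $c\,q_3^p$ (times $I/m$). The general formula can therefore be verified exactly with a tiny recurrence; a sketch using only the standard library:

```python
from math import factorial

# Second component of g2 is -(I/m) * q3**(-2):
# represent it by (coefficient of I/m, power of q3)
c, p = -1.0, -2

for k in range(1, 7):
    # ad_{g1} differentiates w.r.t. q3: d/dq3 (c * q3**p) = c*p * q3**(p-1)
    c, p = c * p, p - 1
    expected_c = (-1)**(k + 1) * factorial(k + 1)
    assert c == expected_c and p == -(k + 2)

print("ad^k formula verified for k = 1..6")
```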
1. Determine the system dynamics equation using the stack operator and Kronecker product.
4. Calculate the Lie bracket adkg1 g2 of the kinematic system, where k is the degree of the action ad.
Solution
Ad.1
$$E_K = \frac{1}{2}\dot q^T M\dot q, \qquad M = \begin{bmatrix} I & 0 & 0 \\ 0 & mq_3^2 & 0 \\ 0 & 0 & m \end{bmatrix}, \qquad V_P = 0, \qquad L = E_K - V_P = E_K.$$
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T - \left(\frac{\partial L}{\partial q}\right)^T = Q \;\Rightarrow\; \frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T - \left(\frac{\partial E_K}{\partial q}\right)^T = Q,$$
$$\frac{\partial E_K}{\partial \dot q} = \frac{1}{2}\dot q^T\left(M + M^T\right) = \dot q^T M \;\Rightarrow\; \left(\frac{\partial E_K}{\partial \dot q}\right)^T = M\dot q,$$
$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T = \frac{d}{dt}\left(M\dot q\right) = \dot M\dot q + M\ddot q,$$
$$\dot M = \frac{\partial M}{\partial q}\dot q = [\dot q \otimes I_{3\times 3}]^T\,\frac{\partial M^S}{\partial q},$$
$$[\dot q \otimes I_{3\times 3}]^T = \begin{bmatrix} \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 & 0 & 0 \\ 0 & \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 & 0 \\ 0 & 0 & \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 \end{bmatrix},$$
$$M^S = \begin{bmatrix} I & 0 & 0 & 0 & mq_3^2 & 0 & 0 & 0 & m \end{bmatrix}^T, \qquad \frac{\partial M^S}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2mq_3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$
$$\dot M = [\dot q \otimes I_{3\times 3}]^T\,\frac{\partial M^S}{\partial q} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2mq_3\dot q_2 \\ 0 & 0 & 0 \end{bmatrix},$$
$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2mq_3\dot q_2 \\ 0 & 0 & 0 \end{bmatrix}\dot q + \begin{bmatrix} I & 0 & 0 \\ 0 & mq_3^2 & 0 \\ 0 & 0 & m \end{bmatrix}\ddot q = \begin{bmatrix} I\ddot q_1 \\ mq_3^2\ddot q_2 + 2mq_3\dot q_2\dot q_3 \\ m\ddot q_3 \end{bmatrix},$$
$$\frac{\partial E_K}{\partial q} = \frac{\partial}{\partial q}\left(\frac{1}{2}\dot q^T M\dot q\right) = \frac{1}{2}\dot q^T\frac{\partial M}{\partial q}\dot q = \frac{1}{2}\dot q^T\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2mq_3\dot q_2 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & mq_3\dot q_2^2 \end{bmatrix} \;\Rightarrow\; \left(\frac{\partial E_K}{\partial q}\right)^T = \begin{bmatrix} 0 \\ 0 \\ mq_3\dot q_2^2 \end{bmatrix},$$
$$\partial W = u_1\,\partial q_1 - u_1\,\partial q_2 + u_2\,\partial q_3 \;\Rightarrow\; Q = \begin{bmatrix} u_1 \\ -u_1 \\ u_2 \end{bmatrix},$$
$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T - \left(\frac{\partial E_K}{\partial q}\right)^T = Q,$$
$$\begin{bmatrix} I\ddot q_1 \\ mq_3^2\ddot q_2 + 2mq_3\dot q_3\dot q_2 \\ m\ddot q_3 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ mq_3\dot q_2^2 \end{bmatrix} = \begin{bmatrix} u_1 \\ -u_1 \\ u_2 \end{bmatrix},$$
$$\begin{cases} I\ddot q_1 = u_1 \\ mq_3^2\ddot q_2 + 2mq_3\dot q_3\dot q_2 = -u_1 \\ m\ddot q_3 - mq_3\dot q_2^2 = u_2 \end{cases} \qquad (1.10)$$
Ad.2
Are the equations (1.10) integrable?
For the first equation we have:
$$I\ddot q_1 = u_1 \;\Rightarrow\; I\dot q_1 = \int u_1\,dt \quad\text{(integrable).}$$
The right side of the equation is integrable, but how do we prove that the left side is also integrable? For the left side of the second equation we can show that:
$$\frac{d}{dt}\left(q_3^2\dot q_2\right) = q_3^2\ddot q_2 + 2q_3\dot q_3\dot q_2 \quad\text{(integrable).}$$
The last equation:
$$m\ddot q_3 - mq_3\dot q_2^2 = u_2,$$
and using a new input $\tilde u_2 = u_2 + mq_3\dot q_2^2$ we can get rid of the integration problem:
$$m\ddot q_3 = \tilde u_2 \;\Rightarrow\; m\dot q_3 = \int \tilde u_2\,dt,$$
where we assume the integration constant $C = 0$, so
$$I\dot q_1 + mq_3^2\dot q_2 = 0.$$
This is a kinematic velocity-constraint equation and it is not integrable. It involves the moments of inertia and expresses the momentum conservation principle for the free system.
As a result, a new system of equations is obtained:
$$\begin{cases} I\ddot q_1 = u_1 \\ m\ddot q_3 = \tilde u_2 \\ I\dot q_1 + mq_3^2\dot q_2 = 0 \end{cases}$$
Ad.3
The kinematics equation will be calculated from the non-holonomic constraint in Pfaffian form:
$$\begin{bmatrix} I & mq_3^2 & 0 \end{bmatrix}\dot q = 0,$$
where the general form of this equation is $A(q)\dot q = 0$ and $A(q)$ is the constraint matrix. The velocities $\dot q$ therefore lie in the null space of the matrix $A(q)$, so we are looking for a basis that spans the space of admissible velocities $\dot q$. Based on the relationship between the system kinematics and the constraint matrix we can write:
$$A(q)\,S(q) = 0, \qquad (1.11)$$
where $S(q)$ is the kinematics matrix, so:
$$\dot q = S(q)\,v,$$
where $S(q) = \begin{bmatrix} g_1(q) & g_2(q) \end{bmatrix}$ and $v = \begin{bmatrix} v_1 & v_2 \end{bmatrix}^T$ is the input. Thus:
$$\dot q = \begin{bmatrix} g_1 & g_2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = g_1 v_1 + g_2 v_2.$$
On the basis of general guidelines for the simplification of vector fields, we assume that:
$$g_1(q) = g_1 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T, \qquad g_2(q) = \begin{bmatrix} g_{21} & g_{22} & g_{23} \end{bmatrix}^T.$$
Taking into account (1.11) we get:
$$\begin{bmatrix} I & mq_3^2 & 0 \end{bmatrix}\begin{bmatrix} 0 & g_{21} \\ 0 & g_{22} \\ 1 & g_{23} \end{bmatrix} = \begin{bmatrix} 0 & 0 \end{bmatrix},$$
where $v$ denotes velocities, and $\dot v$ denotes forces and torques. Then we introduce a general formula for the input transformation:
$$\dot v = T u^* \;\Rightarrow\; \begin{bmatrix} \dot v_1 \\ \dot v_2 \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{m} \\ \frac{1}{I} & 0 \end{bmatrix}\begin{bmatrix} u_1 \\ \tilde u_2 \end{bmatrix}.$$
The final form of the kinematics equations with the input transformation can be written as:
$$\begin{cases} \dot q = S(q)\,v \\ \dot v = T u^* \end{cases}$$
Ad.4
$$[g_1,g_2] = \mathrm{ad}_{g_1} g_2 = \frac{\partial g_2}{\partial q}\,g_1 - \frac{\partial g_1}{\partial q}\,g_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2\frac{I}{m}q_3^{-3} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 2\frac{I}{m}q_3^{-3} \\ 0 \end{bmatrix},$$
$$\mathrm{ad}^2_{g_1} g_2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -6\frac{I}{m}q_3^{-4} \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -6\frac{I}{m}q_3^{-4} \\ 0 \end{bmatrix},$$
$$\mathrm{ad}^k_{g_1} g_2 = \begin{bmatrix} 0 \\ (-1)^{k+1}(k+1)!\,\frac{I}{m}\,q_3^{-(k+2)} \\ 0 \end{bmatrix}.$$
2.1. State evolution for general case – the vector fields are given in undisclosed form
Exercise 16. Let us consider the state evolution of the two-input system described by the equation:
$$\dot x = g_1(x)\,u_1 + g_2(x)\,u_2. \qquad (2.1)$$
We assume the initial state of the system is $x(0) = x_0$. The input of the system is piecewise constant, with the sequence illustrated in Fig. 2.1. The task is to determine the state after the applied sequence of controls, i.e. at $t = 4\varepsilon$.
Solution
Phase I
The state of the system after the first control phase can be presented as an expansion in the Taylor series with respect to time $t$. It is then obtained:
$$x(t)|_{t=\varepsilon} = x(\varepsilon) = x_0 + \dot x(t)|_{t=0}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=0}\,\varepsilon^2 + O(\varepsilon^3), \qquad (2.2)$$
where $O(\varepsilon^3)$ describes all higher-order components (depending on $\varepsilon^p$ for $p \ge 3$)¹.
¹Note that $O(\varepsilon^3)$ is not a polynomial of a fixed degree (with respect to the variable $\varepsilon$); therefore it cannot be assumed that $O(\varepsilon^3)$ describes the exact form of the function. In subsequent relationships, $O(\varepsilon^3)$ can aggregate different components, so the detailed form of the components described by $O(\varepsilon^3)$ in each equation may be different.
Note that in equation (2.2) the derivatives $\dot x$ and $\ddot x$ are evaluated at $t = 0$. So we can write:
$$\dot x(t)|_{t=0} = \alpha g_1(x_0) \quad\text{and}\quad \ddot x(t)|_{t=0} = \alpha^2\,\frac{\partial g_1(x)}{\partial x}\Big|_{x=x_0}\,g_1(x_0) = \alpha^2 Dg_1(x_0)\,g_1(x_0), \qquad (2.3)$$
with the following notation introduced for simplicity: $Dg_1(x_0) := \frac{\partial g_1(x)}{\partial x}\big|_{x=x_0}$. Taking into account the dependence (2.3) in equation (2.2), we obtain the result describing the state of the system after the first phase:
$$x(\varepsilon) = x_0 + \alpha g_1(x_0)\,\varepsilon + \frac{1}{2}\alpha^2 Dg_1(x_0)\,g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.4)$$
Phase II
As in the previous phase, we use the series expansion to determine the state. Now we are interested in the state of the system at $t = 2\varepsilon$, that is, at the end of phase two. So we have:
$$x(t)|_{t=2\varepsilon} = x(\varepsilon) + \dot x(t)|_{t=\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.5)$$
The input signals in phase two are $u_1 = 0$ and $u_2 = \alpha$, which allows us to write:
$$\dot x(t) = \alpha g_2(x(t)) \quad\text{and}\quad \ddot x(t) = \alpha^2\,\frac{\partial g_2(x(t))}{\partial x}\,g_2(x(t)).$$
Now, let us note that in equation (2.5) the derivatives $\dot x$ and $\ddot x$ are evaluated at the moment $t = \varepsilon$ (not $t = 0$ as in the previous phase!). Referring to the result (2.3) we get:
$$\dot x(t)|_{t=\varepsilon} = \alpha g_2(x(\varepsilon)) \quad\text{and}\quad \ddot x(t)|_{t=\varepsilon} = \alpha^2 Dg_2(x(\varepsilon))\,g_2(x(\varepsilon)). \qquad (2.6)$$
By substituting (2.6) into (2.5) we can write:
$$x(2\varepsilon) = x(\varepsilon) + \alpha g_2(x(\varepsilon))\,\varepsilon + \frac{1}{2}\alpha^2 Dg_2(x(\varepsilon))\,g_2(x(\varepsilon))\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.7)$$
The state value at $t = \varepsilon$ is known from the result (2.4). From this we get:
$$g_2(x(\varepsilon)) = g_2(x_0 + \alpha g_1(x_0)\varepsilon + \ldots) \quad\text{and}\quad Dg_2(x(\varepsilon)) = Dg_2(x_0 + \alpha g_1(x_0)\varepsilon + \ldots).$$
Note that in the solution (2.7) it is assumed that all components in which the variable $\varepsilon$ occurs in a degree higher than two are aggregated in $O(\varepsilon^3)$. Taking this into consideration, we expand the following components² in equation (2.7) (this means that the expansions need not include terms that are absorbed by $O(\varepsilon^3)$):
$$g_2(x(\varepsilon))\,\varepsilon = g_2(x_0 + \alpha g_1(x_0)\varepsilon + \ldots)\,\varepsilon = \left(g_2(x_0) + \frac{\partial g_2(x)}{\partial x}\Big|_{x=x_0}\left(\alpha g_1(x_0)\varepsilon + \ldots\right)\right)\varepsilon =$$
$$= g_2(x_0)\,\varepsilon + \alpha Dg_2(x_0)\,g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3) \qquad (2.8)$$
and
$$Dg_2(x(\varepsilon))\,g_2(x(\varepsilon))\,\varepsilon^2 = Dg_2(x_0 + \ldots)\,g_2(x_0 + \ldots)\,\varepsilon^2 = Dg_2(x_0)\,g_2(x_0)\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.9)$$
Then, substituting the relationships (2.8) and (2.9) into (2.7), we get:
$$x(2\varepsilon) = x(\varepsilon) + \alpha g_2(x_0)\,\varepsilon + \alpha^2 Dg_2(x_0)\,g_1(x_0)\,\varepsilon^2 + \frac{1}{2}\alpha^2 Dg_2(x_0)\,g_2(x_0)\,\varepsilon^2 + O(\varepsilon^3).$$
Taking into account the solution (2.4), we obtain the result describing the state after the second phase of control:
$$x(2\varepsilon) = x_0 + \alpha\left(g_1(x_0) + g_2(x_0)\right)\varepsilon + \alpha^2\left(\frac{1}{2}Dg_1(x_0)\,g_1(x_0) + Dg_2(x_0)\,g_1(x_0) + \frac{1}{2}Dg_2(x_0)\,g_2(x_0)\right)\varepsilon^2 + O(\varepsilon^3).$$
²Note that in the Taylor expansions described by equations (2.8) and (2.9) it is assumed that the increment is a vector. For example, in equation (2.8) the increment is $\Delta x = \alpha g_1(x_0)\varepsilon + \ldots$. The formula used to describe the expansion results from the Taylor expansion of a vector function of many variables.
Phase III
As in the previous two phases, we can write:
$$x(t)|_{t=3\varepsilon} = x(2\varepsilon) + \dot x(t)|_{t=2\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=2\varepsilon}\,\varepsilon^2 + O(\varepsilon^3),$$
$$x(3\varepsilon) = x(2\varepsilon) - \alpha g_1(x(2\varepsilon))\,\varepsilon + \frac{1}{2}\alpha^2 Dg_1(x(2\varepsilon))\,g_1(x(2\varepsilon))\,\varepsilon^2 + O(\varepsilon^3), \qquad (2.10)$$
and
$$Dg_1(x(2\varepsilon))\,g_1(x(2\varepsilon))\,\varepsilon^2 = Dg_1(x_0)\,g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.12)$$
Phase IV
The final result will be given without a detailed derivation (we leave this as a task for the Reader). The state of the system after applying the given sequence of controls is
Let us consider the component in brackets in more detail. We can write:
$$Dg_2(x_0)\,g_1(x_0) - Dg_1(x_0)\,g_2(x_0) = \left(\frac{\partial g_2(x)}{\partial x}\,g_1(x) - \frac{\partial g_1(x)}{\partial x}\,g_2(x)\right)\Big|_{x=x_0} = [g_1, g_2](x_0).$$
It turns out that in the solution (2.14) there is a component which is a first-order Lie bracket evaluated for the vector fields at the starting point. So the total change of state results not only from the directions generated by the primary fields $g_1$ and $g_2$, but also from fields of higher orders. If the higher-order fields were equal to zero, then, due to the closed cycle and symmetric input, the system would return to the initial state $x(4\varepsilon) = x_0$. The components present in the rest of the expansion and marked as $O(\varepsilon^3)$ should be identified with the presence of vector fields of higher orders (i.e. orders higher than the Lie bracket $[g_1,g_2]$).
Calculate the final state and then check if the solution is in the direction of the first order Lie brackets.
Solution
Phase I
For $t \in (0; \varepsilon)$ the input signals are: $u_1 = \alpha$, $u_2 = 0$. After substituting the controls $u$, we get:
$$\dot x = \begin{bmatrix} \alpha \\ 0 \\ 0 \end{bmatrix}, \qquad (2.16)$$
$$x(\varepsilon) = x_0 + \dot x(t)|_{t=0}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=0}\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.17)$$
In this case $x_0 = x(0)$, so
$$x(\varepsilon) = \underbrace{\begin{bmatrix} x_{01} \\ x_{02} \\ x_{03} \end{bmatrix}}_{x_0} + \underbrace{\begin{bmatrix} \alpha \\ 0 \\ 0 \end{bmatrix}}_{\dot x(0)}\varepsilon + \frac{1}{2}\cdot\underbrace{\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}}_{\ddot x(0)}\cdot\varepsilon^2 + \underbrace{0}_{O(\varepsilon^3)} = \begin{bmatrix} \alpha\varepsilon + x_{01} \\ x_{02} \\ x_{03} \end{bmatrix}. \qquad (2.18)$$
Phase II
For $t \in (\varepsilon; 2\varepsilon)$ the input signals are: $u_1 = 0$, $u_2 = \alpha$. After substituting the controls $u$, we get:
$$\dot x = \begin{bmatrix} 0 \\ \alpha \\ \alpha\sin x_1 \end{bmatrix}, \qquad (2.19)$$
$$x(2\varepsilon) = x(\varepsilon) + \dot x(t)|_{t=\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.20)$$
In the above equation we know the value of $x(\varepsilon)$, but for the element $\dot x(t)$ we have an unresolved relationship in the third element, i.e. $\dot x_3(t)$. In it, the variable $x_1$ appears as an argument of the sine function, so we should calculate its value from the solution of the differential equation (2.19) for the variable $\dot x_1$:
$$\dot x_1(t) = 0 \;\Rightarrow\; \int dx_1(t) = \int 0\cdot dt \;\Rightarrow\; x_1(t) = C,$$
$$\dot x_1(t) = 0 \;\Rightarrow\; \int_{\varepsilon}^{2\varepsilon} dx_1(t) = \int_{\varepsilon}^{2\varepsilon} 0\cdot dt \;\Rightarrow\; x_1(2\varepsilon) - x_1(\varepsilon) = C - C \;\Rightarrow\; x_1(2\varepsilon) - x_1(\varepsilon) = 0 \;\Rightarrow\; x_1(2\varepsilon) = x_1(\varepsilon),$$
and on the basis of the value $x_1(\varepsilon)$ from (2.18) we have:
$$x_1(2\varepsilon) = \alpha\varepsilon + x_{01},$$
which gives the same result as in equation (2.21). So we can see that any integration method will lead to a correct result.
From (2.19) and (2.21) we can write:
$$\dot x = \begin{bmatrix} 0 \\ \alpha \\ \alpha\sin(\alpha\varepsilon + x_{01}) \end{bmatrix}.$$
Since $\dot x$ is a constant vector (independent of $t$), the vector $\ddot x$ is zero. On this basis, we also get a zero $O(\varepsilon^3)$ term.
By substituting the known values of the elements into (2.20), we obtain the solution of the equation $x(t)$ at $t = 2\varepsilon$:
$$x(2\varepsilon) = x(\varepsilon) + \begin{bmatrix} 0 \\ \alpha \\ \alpha\sin(\alpha\varepsilon + x_{01}) \end{bmatrix}\varepsilon = \begin{bmatrix} \alpha\varepsilon + x_{01} \\ \alpha\varepsilon + x_{02} \\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}. \qquad (2.22)$$
Phase III
For $t \in (2\varepsilon; 3\varepsilon)$ the input signals are: $u_1 = -\alpha$, $u_2 = 0$, so we have:
$$\dot x = \begin{bmatrix} -\alpha \\ 0 \\ 0 \end{bmatrix}.$$
Following the same procedure as for phase I, we calculate that the state at $t = 3\varepsilon$ is:
$$x(3\varepsilon) = x(2\varepsilon) + \begin{bmatrix} -\alpha \\ 0 \\ 0 \end{bmatrix}\varepsilon = \begin{bmatrix} x_{01} \\ \alpha\varepsilon + x_{02} \\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}. \qquad (2.23)$$
Phase IV
For $t \in (3\varepsilon; 4\varepsilon)$ the input signals are: $u_1 = 0$, $u_2 = -\alpha$. Then the kinematics simplifies to the form:
$$\dot x = \begin{bmatrix} 0 \\ -\alpha \\ -\alpha\sin x_1 \end{bmatrix}. \qquad (2.24)$$
As in the case of phase II, we have to calculate the value of $x_1$ for the third element of $\dot x$, i.e. $\dot x_3$. Then we obtain:
$$x(4\varepsilon) = x(3\varepsilon) + \dot x(t)|_{t=3\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=3\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \qquad (2.25)$$
By substituting the known elements $x(3\varepsilon)$ and $\dot x(t)$ for $t = 3\varepsilon$, and the zero vectors $\ddot x(t)$ and $O(\varepsilon^3)$, we get:
$$x(4\varepsilon) = x(3\varepsilon) + \begin{bmatrix} 0 \\ -\alpha \\ -\alpha\sin(x_{01}) \end{bmatrix}\varepsilon = \begin{bmatrix} x_{01} \\ \alpha\varepsilon + x_{02} \\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix} + \begin{bmatrix} 0 \\ -\alpha\varepsilon \\ -\alpha\sin(x_{01})\,\varepsilon \end{bmatrix} = \begin{bmatrix} x_{01} \\ x_{02} \\ -\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}.$$
The elements $x_{01}, x_{02}, x_{03}$ can be aggregated into the vector $x_0$, and then $x(4\varepsilon)$ can be presented in the form:
$$x(4\varepsilon) = x_0 + \begin{bmatrix} 0 \\ 0 \\ -\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon \end{bmatrix}. \qquad (2.26)$$
There are nonlinear relationships in the last component. For a more detailed analysis, let us consider the Taylor-series expansion of the sine function:
$$-\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon = -\alpha\sin(x_{01})\,\varepsilon + \alpha\left(\sin(x_{01}) + \cos(x_{01})\,\alpha\varepsilon - \frac{1}{2}\sin(x_{01})\,\alpha^2\varepsilon^2 + \ldots\right)\varepsilon =$$
$$= \cos(x_{01})\,\alpha^2\varepsilon^2 - \frac{1}{2}\sin(x_{01})\,\alpha^3\varepsilon^3 + \ldots$$
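The result can be reproduced by stepping the system through the four phases. Within each phase $\dot x$ is constant (in phases II and IV the variable $x_1$ does not change), so the updates below are exact; a standard-library sketch with the sample values $x_0 = (0.25, 0, 0)$, $\alpha = 0.5$, $\varepsilon = 1$:

```python
import math

alpha, eps = 0.5, 1.0
x1, x2, x3 = 0.25, 0.0, 0.0          # initial state x0
x01 = x1

# Phase I:   u1 = alpha, u2 = 0   -> xdot = (alpha, 0, 0)
x1 += alpha * eps
# Phase II:  u1 = 0, u2 = alpha   -> xdot = (0, alpha, alpha*sin(x1)), x1 constant
x2 += alpha * eps
x3 += alpha * math.sin(x1) * eps
# Phase III: u1 = -alpha, u2 = 0
x1 -= alpha * eps
# Phase IV:  u1 = 0, u2 = -alpha  -> x1 is back at x01
x2 -= alpha * eps
x3 -= alpha * math.sin(x1) * eps

expected_x3 = -alpha * math.sin(x01) * eps + alpha * math.sin(alpha * eps + x01) * eps
print(abs(x1 - x01) < 1e-12, abs(x2) < 1e-12, abs(x3 - expected_x3) < 1e-12)
# True True True
```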
Analysis of the direction of the system evolution after the control sequence
When calculating the Lie bracket of the control-system vector fields, i.e. $g_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T$ and $g_2 = \begin{bmatrix} 0 & 1 & \sin x_1 \end{bmatrix}^T$, we get:
$$[g_1,g_2] = \begin{bmatrix} 0 \\ 0 \\ \cos(x_1) \end{bmatrix}. \qquad (2.27)$$
Invoking the solution (2.26), one can see that the component listed there is derived from the Lie bracket (2.27), i.e.:
$$\alpha^2\varepsilon^2\begin{bmatrix} 0 \\ 0 \\ \cos(x_{01}) \end{bmatrix} = \alpha^2\varepsilon^2\,[g_1,g_2](x_0).$$
This is confirmed by the overall result described by relation (2.14). It should be noted, however, that Lie brackets of higher orders will not be zero, e.g. $[g_1,[g_1,g_2]]$ and $[g_1,[g_1,[g_1,g_2]]]$, so the direction indicated by $[g_1,g_2]$ is not the exact direction of evolution of the system, although it is the dominant one. To determine the direction of evolution of the system more precisely, one should take into account the vectors generated by Lie brackets of higher orders. Then components containing higher powers of the parameter $\varepsilon$ will appear. In this case, the additional elements connected with Lie brackets of higher orders influence only the third coordinate of the vector $x$, because these vector fields have a non-zero component only in the third element. Let us notice that the first and second elements of the vector field $[g_1,g_2]$ are zero, which is consistent with the fact that

where $R_{[\,,\,]}$ is a vector associated with higher-order Lie brackets. Therefore, it turns out that in this particular case, after applying the given control sequence, the state of the system changes in the direction indicated by the first-order Lie bracket, and if the derivatives of higher orders in the Taylor-series expansion (component $R$) are taken into account, it is necessary to also take into account the Lie brackets of higher orders (component $R_{[\,,\,]}$). The graphical illustration in Fig. 2.3 shows the trajectory of the solution, which was obtained assuming $x_0 = \begin{bmatrix} 0.25 & 0 & 0 \end{bmatrix}^T$, $\alpha = 0.5$ and $\varepsilon = 1$.
Solution
Phase I
For $t \in (t_0; t_K)$, $t_0 = 0$, $t_K = T$, $u_1 = 1$, $u_2 = 0$. The dynamics of the system simplifies to the form:
$$\dot x = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$
When integrating the given dynamics, we determine the state for the individual variables:
$$\dot x_1(t) = 1, \quad \int \dot x_1(t)\,dt = \int 1\,dt, \quad x_1(t) = t + C, \quad C = x_1(t_0), \quad x_1(t) = t + x_1(t_0), \quad x_1(T) = T + x_1(0) = T,$$
$$\dot x_2(t) = 0, \quad \int \dot x_2(t)\,dt = \int 0\,dt, \quad x_2(t) = C = x_2(t_0), \quad x_2(T) = x_2(0) = 0,$$
$$\dot x_3(t) = 0, \quad \int \dot x_3(t)\,dt = \int 0\,dt, \quad x_3(t) = C = x_3(t_0), \quad x_3(T) = x_3(0) = 0.$$
The final result is
$$x(T) = \begin{bmatrix} T \\ 0 \\ 0 \end{bmatrix}.$$
The same result can be obtained by using the definite integral. Then the calculations for the individual variables are as follows:
$$\dot x_1(t) = 1, \quad \int_{t_0}^{t_K} \dot x_1(t)\,dt = \int_{t_0}^{t_K} 1\,dt, \quad x_1(t)\Big|_{t_0}^{t_K} = t\Big|_{t_0}^{t_K}, \quad x_1(t_K) - x_1(t_0) = t_K - t_0, \quad x_1(T) - x_1(0) = T - 0, \quad x_1(T) = T,$$
$$\dot x_2(t) = 0, \quad \int_{t_0}^{t_K} \dot x_2(t)\,dt = \int_{t_0}^{t_K} 0\,dt, \quad x_2(t)\Big|_{t_0}^{t_K} = C\Big|_{t_0}^{t_K}, \quad x_2(t_K) - x_2(t_0) = C - C, \quad x_2(T) - x_2(0) = 0, \quad x_2(T) = 0,$$
$$\dot x_3(t) = 0, \quad \int_{t_0}^{t_K} \dot x_3(t)\,dt = \int_{t_0}^{t_K} 0\,dt, \quad x_3(t)\Big|_{t_0}^{t_K} = C\Big|_{t_0}^{t_K}, \quad x_3(t_K) - x_3(t_0) = C - C, \quad x_3(T) - x_3(0) = 0, \quad x_3(T) = 0.$$
Phase II
For $t \in (t_0; t_K)$, $t_0 = T$, $t_K = 2T$, $u_1 = 0$, $u_2 = 1$. The dynamics of the system simplifies to the form:
$$\dot x = \begin{bmatrix} 0 \\ 1 \\ x_1 \end{bmatrix}.$$
When integrating the given dynamics, we determine the state for the individual variables:
$$\dot x_1(t) = 0, \quad x_1(t) = C = x_1(t_0), \quad x_1(2T) = x_1(T) = T,$$
$$\dot x_2(t) = 1, \quad x_2(t) = t + C, \quad C = -t_0 + x_2(t_0), \quad x_2(t) = t - t_0 + x_2(t_0), \quad x_2(2T) = 2T - T + x_2(T) = T,$$
$$\dot x_3(t) = x_1(t_0), \quad x_3(t) = x_1(t_0)\,t + C, \quad C = x_3(t_0) - x_1(t_0)\,t_0, \quad x_3(t) = x_1(t_0)\,t + x_3(t_0) - x_1(t_0)\,t_0,$$
$$x_3(2T) = x_1(T)\cdot 2T + x_3(T) - x_1(T)\cdot T = 2T^2 - T^2 = T^2.$$
The calculation of $x_3$ can also be performed by substituting the value of $x_1(t)$ (constant and equal to $T$ throughout this phase) before the integration is performed, which allows us to write the derivative as $\dot x_3(t) = T$, hence:
$$x_3(t) = Tt + C, \quad C = -Tt_0 + x_3(t_0), \quad x_3(t) = Tt - Tt_0 + x_3(t_0), \quad x_3(2T) = 2T^2 - T^2 + x_3(T) = T^2.$$
The final result is
$$x(2T) = \begin{bmatrix} T \\ T \\ T^2 \end{bmatrix}.$$
Further calculations will be carried out without any detailed description of the integrals, as they are
similar to those of the previous phases.
Phase III
For \(t \in (t_0; t_K)\), \(t_0 = 2T\), \(t_K = 3T\), \(u_1 = -1\), \(u_2 = 0\).
The dynamics of the system for the period under consideration is of the form:
\[
\dot{x} = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}.
\]
When integrating the given dynamics, we determine the state for individual variables:
\[
\begin{aligned}
\dot{x}_1(t) &= -1, \qquad & x_1(3T) &= x_1(2T) - T = T - T = 0, \\
\dot{x}_2(t) &= 0, \qquad & x_2(3T) &= x_2(2T) = T, \\
\dot{x}_3(t) &= 0, \qquad & x_3(3T) &= x_3(2T) = T^2.
\end{aligned}
\]
The final result is
\[
x(3T) = \begin{bmatrix} 0 \\ T \\ T^2 \end{bmatrix}.
\]
Phase IV
For \(t \in (t_0; t_K)\), \(t_0 = 3T\), \(t_K = 4T\), \(u_1 = 0\), \(u_2 = -1\).
The dynamics of the system for the period under consideration is of the form:
\[
\dot{x} = \begin{bmatrix} 0 \\ -1 \\ -x_1 \end{bmatrix}.
\]
When integrating the given dynamics, we determine the state for individual variables:
\[
\begin{aligned}
\dot{x}_1(t) &= 0, \qquad & x_1(4T) &= x_1(3T) = 0, \\
\dot{x}_2(t) &= -1, \qquad & x_2(4T) &= x_2(3T) - T = 0, \\
\dot{x}_3(t) &= -x_1 = 0, \qquad & x_3(4T) &= x_3(3T) = T^2.
\end{aligned}
\]
The final result is
\[
x(4T) = \begin{bmatrix} 0 \\ 0 \\ T^2 \end{bmatrix}.
\]
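The four phases can also be chained in a single symbolic computation. A minimal sketch, assuming the underlying model \(\dot{x}_1 = u_1\), \(\dot{x}_2 = u_2\), \(\dot{x}_3 = x_1 u_2\) reconstructed from the phase dynamics above (the helper name `phase_end` is an assumption of this sketch):

```python
import sympy as sp

T, tau = sp.symbols('T tau', positive=True)

def phase_end(x0, u1, u2):
    """End state of one phase of length T with constant inputs,
    for the assumed model x1' = u1, x2' = u2, x3' = x1*u2."""
    x1 = x0[0] + u1 * tau
    x2 = x0[1] + u2 * tau
    x3 = x0[2] + sp.integrate(x1 * u2, (tau, 0, T))
    return [x1.subs(tau, T), x2.subs(tau, T), sp.simplify(x3)]

x = [0, 0, 0]
for u1, u2 in [(1, 0), (0, 1), (-1, 0), (0, -1)]:  # Phases I-IV
    x = phase_end(x, u1, u2)
print(x)  # [0, 0, T**2]
```

The loop reproduces the per-phase end states derived above, ending at \(x(4T) = (0, 0, T^2)\).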
Consider the system
\[
\dot{x} = f(x) + g(x)u, \qquad y = h(x).
\]
The relative degree \(r\) is determined by two conditions:
1. \(L_g L_f^i h(x) = 0\) for all \(i < r - 1\),
2. \(L_g L_f^{r-1} h(x) \neq 0\).
When conditions 1 and 2 are met, the relative degree is given by the number \(r\). The procedure is performed in such a way that we increase \(i\) by 1 as long as \(L_g L_f^i h(x) = 0\). If for a given \(i\) we get \(L_g L_f^{r-1} h(x) \neq 0\), the second condition is satisfied and the relative degree is \(r\).
The relative degree can also be interpreted as the number of times the system output must be differentiated before the system input appears explicitly.
Calculating the relative degree by differentiation of the output function:
1. We differentiate the output successively:
\[
\frac{d^i}{dt^i}y = \frac{d^i}{dt^i}h(x).
\]
2. For the \(i\)-th derivative of \(y\) in which the input \(u\) is directly present, i.e.
\[
\frac{d^i}{dt^i}y = \varphi(x, u),
\]
we conclude that the relative degree is equal to the order of this derivative of \(y\), i.e. \(r = i\).
3.2. Calculation of the relative degree of linear systems
Solution
Case a) y = h(x) = x1 .
We calculate \(L_g L_f^i h(x)\) for \(i = 0\):
\[
L_g L_f^0 h(x) = L_g h(x) = \frac{\partial h}{\partial x}g = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 0.
\]
We conclude that \(L_g L_f^0 h(x) = 0\), so the first condition is met and we continue the calculation for the increased \(i\). We calculate \(L_g L_f^i h(x)\) for \(i = 1\):
\[
L_f h(x) = \frac{\partial h}{\partial x}f = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_2 \\ -x_2 - x_1^3 - x_1 \end{bmatrix} = x_2,
\]
\[
L_g L_f h(x) = L_g(x_2) = \frac{\partial (x_2)}{\partial x}g = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1.
\]
We conclude that \(L_g L_f h(x) \neq 0\), which means the second condition is fulfilled. Using the equation in the second condition, we can write
\[
L_g L_f^{r-1} h(x) = L_g L_f^1 h(x),
\]
so
\[
r - 1 = 1 \Rightarrow r = 2.
\]
So the relative degree is 2.
Case b) y = h(x) = x2 .
We calculate \(L_g L_f^i h(x)\) for \(i = 0\):
\[
L_g L_f^0 h(x) = L_g h(x) = \frac{\partial h}{\partial x}g = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1.
\]
We find that \(L_g L_f^0 h(x) \neq 0\), so the second condition is fulfilled already for \(i = 0\); the first condition did not need to be checked in this case. Using the equation in the second condition, we can write
\[
L_g L_f^{r-1} h(x) = L_g L_f^0 h(x),
\]
so
\[
r - 1 = 0 \Rightarrow r = 1.
\]
So the relative degree is 1.
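The two cases above follow the same mechanical procedure, which can be automated. A minimal sympy sketch for the system of this exercise (the helper names `lie_deriv` and `relative_degree` are assumptions of this sketch):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x2 - x1**3 - x1])  # drift of the considered system
g = sp.Matrix([0, 1])                  # input vector field

def lie_deriv(h, v):
    """Directional (Lie) derivative of the scalar h along the vector field v."""
    return (sp.Matrix([h]).jacobian(x) * v)[0]

def relative_degree(h, max_iter=5):
    """Increase i while Lg Lf^i h == 0; return r = i + 1 when it is nonzero."""
    Lfi_h = h
    for i in range(max_iter):
        if sp.simplify(lie_deriv(Lfi_h, g)) != 0:
            return i + 1
        Lfi_h = lie_deriv(Lfi_h, f)
    return None

print(relative_degree(x1), relative_degree(x2))  # 2 1
```

For case c) the same loop applies away from the singular points \(x_2 = k\pi\), but the sign conditions on the state must still be inspected by hand.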
Case c) y = h(x) = cos x2 .
We calculate \(L_g L_f^i h(x)\) for \(i = 0\):
\[
L_g L_f^0 h(x) = \frac{\partial}{\partial x}(\cos x_2)\,g = \begin{bmatrix} 0 & -\sin x_2 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = -\sin x_2.
\]
If \(x_2 \neq k\pi\), where \(k = 0,1,2,\dots\), then \(L_g L_f^0 h(x) \neq 0\) and we conclude that the second condition is met. So
\[
r - 1 = 0 \Rightarrow r = 1,
\]
which ends the relative degree analysis.
If we take \(x_2 = k\pi\), where \(k = 0,1,2,\dots\), then we get \(L_g L_f^0 h(x) = 0\). This means it is necessary to increase \(i\) and check the conditions again. Let's consider this case. We calculate \(L_g L_f^i h(x)\) for \(i = 1\).
\[
L_f h(x) = \frac{\partial}{\partial x}(\cos x_2)\,f = \begin{bmatrix} 0 & -\sin x_2 \end{bmatrix}\begin{bmatrix} x_2 \\ -x_2 - x_1^3 - x_1 \end{bmatrix} = \sin x_2\,(x_2 + x_1^3 + x_1),
\]
\[
L_g L_f h(x) = \frac{\partial}{\partial x}\big(\sin x_2\,(x_2 + x_1^3 + x_1)\big)\,g = \begin{bmatrix} \sin x_2\,(3x_1^2 + 1) & \cos x_2\,(x_2 + x_1^3 + x_1) + \sin x_2 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \cos x_2\,(x_2 + x_1^3 + x_1) + \sin x_2.
\]
Given that \(x_2 = k\pi\), where \(k = 0,1,2,\dots\), we choose \(k = 0\); the condition then simplifies to
\[
L_g L_f h(x) = x_1^3 + x_1.
\]
Therefore further assumptions are necessary, this time on the value of \(x_1\). We will not analyse this case further due to its complexity. However, it should be remembered that the analysis of the relative degree may depend on the value of the state considered in the conditions used to calculate it.
Exercise 21. Determine the relative degree of the system
\[
\dot{x} = \begin{bmatrix} -ax_1 \\ -bx_2 + k - cx_1x_2 \\ \varepsilon x_1 x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}u,
\]
where \(a, b, c, k, \varepsilon\) are constant values. The analysis should be carried out for the cases
a) \(y = x_2\),
b) \(y = x_3\).
Solution
Case a) y = h(x) = x2 .
Analysis by differentiation of the output function \(y\):
\[
\dot{y} = \dot{x}_2 = -bx_2 + k - cx_1x_2,
\]
\[
\ddot{y} = \frac{d}{dt}(-bx_2 + k - cx_1x_2) = -b\dot{x}_2 - c\dot{x}_1x_2 - cx_1\dot{x}_2 = -b(-bx_2 + k - cx_1x_2) - c(-ax_1 + u)x_2 - cx_1(-bx_2 + k - cx_1x_2).
\]
The input \(u\) appears in \(\ddot{y}\), so under the assumption that \(c \neq 0\) and \(x_2 \neq 0\), differentiating the output twice was enough to obtain a direct dependence on the system input. Therefore, the relative degree is 2. If the conditions on \(c\) and \(x_2\) are not met, derivatives of \(y\) of higher order must be calculated. For simplicity, \(c \neq 0\) and \(x_2 \neq 0\) are assumed, which ends the analysis of this case.
Analysis using directional derivatives.
We calculate \(L_g L_f^i h(x)\) for \(i = 0\):
\[
L_g L_f^0 h(x) = L_g h(x) = \frac{\partial h}{\partial x}g = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 0.
\]
We find that \(L_g L_f^0 h(x) = 0\), so the first condition is met and we continue the calculation for the increased \(i\). We calculate \(L_g L_f^i h(x)\) for \(i = 1\):
\[
L_f h(x) = \frac{\partial h}{\partial x}f = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} -ax_1 \\ -bx_2 + k - cx_1x_2 \\ \varepsilon x_1x_2 \end{bmatrix} = -bx_2 + k - cx_1x_2,
\]
\[
L_g L_f h(x) = \frac{\partial}{\partial x}(-bx_2 + k - cx_1x_2)\,g = \begin{bmatrix} -cx_2 & -cx_1 - b & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = -cx_2.
\]
We conclude that \(L_g L_f h(x) = -cx_2\). To simplify the analysis we assume that \(cx_2 \neq 0\), which implies that \(L_g L_f h(x) \neq 0\), so the second condition is met. Hence
\[
r - 1 = 1 \Rightarrow r = 2.
\]
Case b) \(y = h(x) = x_3\).
We calculate \(L_g L_f^i h(x)\) for \(i = 0\):
\[
L_g L_f^0 h(x) = L_g h(x) = \frac{\partial h}{\partial x}g = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 0.
\]
We note that \(L_g L_f^0 h(x) = 0\), so the first condition is met and we continue the calculation for the increased \(i\). We calculate \(L_g L_f^i h(x)\) for \(i = 1\):
\[
L_f h(x) = \frac{\partial h}{\partial x}f = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -ax_1 \\ -bx_2 + k - cx_1x_2 \\ \varepsilon x_1x_2 \end{bmatrix} = \varepsilon x_1x_2,
\]
\[
L_g L_f h(x) = \frac{\partial}{\partial x}(\varepsilon x_1x_2)\,g = \begin{bmatrix} \varepsilon x_2 & \varepsilon x_1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \varepsilon x_2.
\]
For simplicity's sake, \(\varepsilon x_2 \neq 0\) is assumed, so we conclude that \(L_g L_f h(x) \neq 0\) and the second condition is met. Hence
\[
r - 1 = 1 \Rightarrow r = 2.
\]
The relative degree is 2.
where \(\omega, \eta, \mu\) are unspecified constant values. The analysis should be carried out for the cases
a) y = x1 ,
b) y = sin x2 .
Solution
Case a) y = h(x) = x1 .
We calculate \(L_g L_f^i h(x)\) for \(i = 0\) and find \(L_g L_f^0 h(x) = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 0\), so the first condition is met and we continue for \(i = 1\):
\[
L_f h(x) = \frac{\partial h}{\partial x}f = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_2 \\ 2\omega\eta(1 - \mu x_1^2)x_2 - \omega^2 x_1 \end{bmatrix} = x_2,
\]
\[
L_g L_f h(x) = L_g(x_2) = \frac{\partial (x_2)}{\partial x}g = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1.
\]
We conclude that \(L_g L_f h(x) \neq 0\), so the second condition is fulfilled. Hence
\[
r - 1 = 1 \Rightarrow r = 2.
\]
Case b) \(y = h(x) = \sin x_2\).
For \(i = 0\) we get \(L_g L_f^0 h(x) = \begin{bmatrix} 0 & \cos x_2 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \cos x_2\). If \(\cos x_2 \neq 0\), the second condition is met and
\[
r - 1 = 0 \Rightarrow r = 1.
\]
If \(\cos x_2 = 0\), we continue the calculation for \(i = 1\):
\[
L_f h(x) = \frac{\partial}{\partial x}(\sin x_2)\,f = \begin{bmatrix} 0 & \cos x_2 \end{bmatrix}\begin{bmatrix} x_2 \\ 2\omega\eta(1 - \mu x_1^2)x_2 - \omega^2 x_1 \end{bmatrix} = \cos x_2\,\big(2\omega\eta(1 - \mu x_1^2)x_2 - \omega^2 x_1\big),
\]
\[
L_g L_f h(x) = \frac{\partial}{\partial x}\Big(\cos x_2\,\big(2\omega\eta(1 - \mu x_1^2)x_2 - \omega^2 x_1\big)\Big)\,g = \dots
\]
The continuation of the calculations is left to the Reader.
Codistribution
Alternatively, instead of a distribution of vector fields, the set of their dual forms, i.e. covectors, can be considered; we then speak of a codistribution. A \(k\)-dimensional codistribution \(\Omega\), defined in a neighbourhood \(U \subset \mathbb{R}^n\) of a point \(x_0 \in \mathbb{R}^n\), is a set of \(k\) linearly independent covectors defined at each point \(x \in U\) and described by
\[
\Omega(x) = \mathrm{span}\{\omega_1(x), \dots, \omega_k(x)\},
\]
where \(\omega_i \in (\mathbb{R}^n)^*\) \((i = 1, \dots, k)\) is a covector field (covector). The dimension of a codistribution is defined analogously to the dimension of a distribution of vector fields.
It follows that the covectors belonging to the codistribution \(\Delta^{\perp}(x)\) annihilate ("null") any vector field belonging to the distribution \(\Delta(x)\). Such covectors are called annihilators.
Similarly, assuming that \(\forall x \in U \subset \mathbb{R}^n\), \(\dim \Omega(x) = k\), we can define a distribution \(\Delta = \Omega^{\perp}\) of dimension \(l = n - k\).
In general, we can state that the pairs \(\Delta^{\perp}(x)\) and \(\Delta(x)\) (respectively \(\Omega^{\perp}(x)\) and \(\Omega(x)\)) annihilate each other.
Distribution integrability
A distribution \(\Delta(x)\) of dimension \(d\), where \(\dim x = n\), is integrable if there exist \(n - d\) scalar functions \(\lambda_1, \dots, \lambda_{n-d}\) for which the condition
\[
\mathrm{span}\{d\lambda_1, \dots, d\lambda_{n-d}\} = \Delta^{\perp}(x)
\]
is fulfilled, where \(\Delta^{\perp}(x)\) is the codistribution of the distribution \(\Delta(x)\).
Frobenius theorem
A distribution \(\Delta(x)\) is integrable if and only if it is involutive.
Distribution involutiveness
The distribution \(\Delta(x)\) is involutive if for any two vector fields \(f_1(x)\) and \(f_2(x)\) belonging to \(\Delta(x)\), their Lie bracket also belongs to \(\Delta(x)\). This dependence can be written as
\[
f_1(x), f_2(x) \in \Delta(x) \Rightarrow [f_1(x), f_2(x)] \in \Delta(x).
\]
In practice, the easiest way to check the involutiveness of \(\Delta(x)\) is to use the condition
\[
\mathrm{rank}\,[f_1(x), \dots, f_d(x)] = \mathrm{rank}\,[f_1(x), \dots, f_d(x), [f_i(x), f_j(x)]],
\]
where \(x \in U\), \(\dim x = n\), \(\dim \Delta(x) = d\), \(1 \le i, j \le d\).
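The rank test above can be carried out mechanically. A minimal sympy sketch on two example vector fields (the fields `f1`, `f2` are illustrative assumptions of this sketch):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f1 = sp.Matrix([0, 0, 1 + x1])   # example vector field
f2 = sp.Matrix([-x2, 1, 2])      # example vector field

def lie_bracket(a, b):
    """[a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(x) * a - a.jacobian(x) * b

D = f1.row_join(f2)
br = lie_bracket(f1, f2)
# Involutive iff adding the bracket does not raise the generic rank
print(D.rank(), D.row_join(br).rank())
```

If the two ranks agree for generic \(x\), adding the bracket does not enlarge the distribution, which is exactly the involutiveness condition stated above.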
4.2. Exercises
Exercise 23. Annihilator calculation
The distribution is defined as:
\[
\Delta(x) = \mathrm{span}\left\{\begin{bmatrix} 0 \\ 0 \\ 1 + x_1 \end{bmatrix}, \begin{bmatrix} -x_2 \\ 1 \\ 2 \end{bmatrix}\right\}. \tag{4.1}
\]
1. Define the dimension of \(\Delta(x)\) at the point \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
2. Check if \(\Delta(x)\) is integrable.
3. Define a function \(\lambda(x)\) whose gradient is an annihilator of \(\Delta(x)\) (any method of calculating \(\lambda(x)\) is acceptable).
Solution
Ad.1
Define the dimension of \(\Delta(x)\) at the point \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
The rank condition is
\[
\mathrm{rank}\begin{bmatrix} 0 & -x_2 \\ 0 & 1 \\ 1 + x_1 & 2 \end{bmatrix},
\]
and for \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\)
\[
\mathrm{rank}\begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 2 \end{bmatrix} = 2,
\]
so \(\dim \Delta(x) = 2\).
Ad.2
Check if the ∆(x) is integrable.
To check if \(\Delta(x)\) is integrable, one has to calculate the Lie bracket
\[
\left[\begin{bmatrix} 0 \\ 0 \\ 1 + x_1 \end{bmatrix}, \begin{bmatrix} -x_2 \\ 1 \\ 2 \end{bmatrix}\right]
= \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 + x_1 \end{bmatrix}
- \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} -x_2 \\ 1 \\ 2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ -x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ x_2 \end{bmatrix}.
\]
Now one has to check if this new vector field increases the dimension of \(\Delta(x)\). Let's define \(\Delta_2(x)\) as:
\[
\Delta_2(x) = \mathrm{span}\left\{\begin{bmatrix} 0 \\ 0 \\ 1 + x_1 \end{bmatrix}, \begin{bmatrix} -x_2 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ x_2 \end{bmatrix}\right\}.
\]
Since \(\begin{bmatrix} 0 & 0 & x_2 \end{bmatrix}^T = \frac{x_2}{1 + x_1}\begin{bmatrix} 0 & 0 & 1 + x_1 \end{bmatrix}^T\) for \(1 + x_1 \neq 0\), the new vector field does not increase the dimension, so \(\Delta(x)\) is involutive and hence integrable.
Ad.3
Define a function \(\lambda(x)\) whose gradient is an annihilator of \(\Delta(x)\) (any method of calculating \(\lambda(x)\) is acceptable).
We look for a function that satisfies
\[
\begin{bmatrix} \frac{\partial\lambda}{\partial x_1} & \frac{\partial\lambda}{\partial x_2} & \frac{\partial\lambda}{\partial x_3} \end{bmatrix}
\begin{bmatrix} 0 & -x_2 \\ 0 & 1 \\ 1 + x_1 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 \end{bmatrix},
\]
which gives the conditions
\[
(1 + x_1)\frac{\partial\lambda}{\partial x_3} = 0, \qquad -x_2\frac{\partial\lambda}{\partial x_1} + \frac{\partial\lambda}{\partial x_2} + 2\frac{\partial\lambda}{\partial x_3} = 0.
\]
Let's assume that \(\frac{\partial\lambda}{\partial x_3} = 0\); then the first condition is fulfilled for any \(x_1\). Substituting \(\frac{\partial\lambda}{\partial x_3} = 0\) into the second equation we have:
\[
-x_2\frac{\partial\lambda}{\partial x_1} + \frac{\partial\lambda}{\partial x_2} = 0.
\]
So
\[
\frac{\partial\lambda}{\partial x_2} = x_2\frac{\partial\lambda}{\partial x_1}.
\]
We assume \(\frac{\partial\lambda}{\partial x_1} = 1\) and we have
\[
\frac{\partial\lambda}{\partial x_2} = x_2.
\]
The final gradient of \(\lambda(x)\) is
\[
\frac{\partial\lambda}{\partial x} = \begin{bmatrix} \frac{\partial\lambda}{\partial x_1} & \frac{\partial\lambda}{\partial x_2} & \frac{\partial\lambda}{\partial x_3} \end{bmatrix} = \begin{bmatrix} 1 & x_2 & 0 \end{bmatrix}.
\]
Performing a simple integration, we can write the proposed \(\lambda(x)\) function as:
\[
\lambda = x_1 + \frac{1}{2}x_2^2 + C,
\]
where \(C\) is a constant.
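The result can be verified by checking that the gradient of \(\lambda(x)\) annihilates both generators of \(\Delta(x)\). A minimal sympy sketch (taking \(C = 0\) is an assumption of this sketch):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
lam = x1 + x2**2 / 2  # candidate lambda(x) found above, with C = 0
grad = sp.Matrix([[sp.diff(lam, v) for v in (x1, x2, x3)]])  # [1, x2, 0]
D = sp.Matrix([[0, -x2],
               [0, 1],
               [1 + x1, 2]])  # generators of Delta(x) as columns
print(sp.simplify(grad * D))  # Matrix([[0, 0]])
```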
Exercise 24. Annihilator calculation no. 2
Two vector fields are given:
\[
f_1 = \begin{bmatrix} 1 \\ 0 \\ x_1 + x_2 \end{bmatrix}, \qquad f_2 = \begin{bmatrix} 0 \\ 1 \\ x_1 \end{bmatrix}.
\]
Calculate the annihilator of the distribution spanned by \(f_1\) and \(f_2\).
Solution
We define the distribution in the form:
\[
\Delta(x) = \mathrm{span}\left\{\begin{bmatrix} 1 \\ 0 \\ x_1 + x_2 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ x_1 \end{bmatrix}\right\}. \tag{4.2}
\]
5.2. Accessibility
Exercise 26. Check if the dynamics (5.1) is reachable.
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2^2 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u. \tag{5.1}
\]
Solution
For the system (5.1) the vector fields are:
\[
f(x) = \begin{bmatrix} x_2^2 \\ 0 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
We create the distribution \(\Delta_1 = \mathrm{span}\{f, g\}\). For this distribution \(\dim \Delta_1 = 2\) under the assumption \(x_2 \neq 0\). So we have \(\dim \Delta_1 = n\), which allows us to state that the system is accessible from any point \(x \in U\), \(U = \{x \in \mathbb{R}^2 : x_2 \neq 0\}\).
Is the system accessible from the point \(x_2 = 0\)? We calculate the vector field of the Lie bracket:
\[
[f, g] = \frac{\partial g}{\partial x}f - \frac{\partial f}{\partial x}g = \begin{bmatrix} -2x_2 \\ 0 \end{bmatrix}.
\]
Extending the distribution \(\Delta_1\) by \([f, g]\), we create a new distribution:
\[
\Delta_2 = \mathrm{span}\{f, g, [f, g]\} = \mathrm{span}\left\{\begin{bmatrix} x_2^2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2x_2 \\ 0 \end{bmatrix}\right\}.
\]
The vector field \([f, g]\) does not increase the distribution dimension, and for \(x_2 = 0\) we have \(\dim \Delta_2 = 1\). So we calculate a higher-order field (any combination of the fields \(f, g, [f, g]\) such that the newly created field increases the distribution dimension; in general we may not obtain such a field at once, in which case we continue the procedure of increasing the field order):
\[
[[f, g], g] = \frac{\partial g}{\partial x}[f, g] - \frac{\partial [f, g]}{\partial x}g = \begin{bmatrix} 2 \\ 0 \end{bmatrix}.
\]
We expand the distribution \(\Delta_2\) by \([[f, g], g]\) and create a new distribution:
\[
\Delta_3 = \mathrm{span}\{f, g, [f, g], [[f, g], g]\} = \mathrm{span}\left\{\begin{bmatrix} x_2^2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2x_2 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right\}.
\]
The vector field \([[f, g], g]\) increases the distribution dimension, and for \(x_2 = 0\) we have \(\dim \Delta_3 = 2\).
We find that the system (5.1) is accessible from the point x2 = 0.
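The bracket computations for (5.1) can be reproduced symbolically. A minimal sympy sketch (the helper name `bracket` is an assumption of this sketch):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2**2, 0])  # drift of system (5.1)
g = sp.Matrix([0, 1])      # input vector field

def bracket(a, b):
    """Lie bracket [a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(x) * a - a.jacobian(x) * b

fg = bracket(f, g)     # [-2*x2, 0]
ffgg = bracket(fg, g)  # [2, 0]
D3 = f.row_join(g).row_join(fg).row_join(ffgg)
print(D3.subs({x1: 0, x2: 0}).rank())  # 2
```

The rank of the extended distribution at \(x_2 = 0\) equals \(n = 2\), matching the conclusion above.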
The system is defined as:
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ 0 \\ x_2 \end{bmatrix} + \begin{bmatrix} x_2 \\ 1 \\ 0 \end{bmatrix}u. \tag{5.2}
\]
Check if the system is reachable from the point \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
Solution
For the given system the vector fields are:
\[
f(x) = \begin{bmatrix} x_1 \\ 0 \\ x_2 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} x_2 \\ 1 \\ 0 \end{bmatrix}.
\]
For this distribution \(\dim \Delta_1 = 2\) (under the assumption \(x_1 \neq 0\) or \(x_2 \neq 0\)), so \(\dim \Delta_1 \neq n\). Because of this, we extend the distribution \(\Delta_1\) by the Lie bracket
\[
[f, g] = \frac{\partial g}{\partial x}f - \frac{\partial f}{\partial x}g =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ 0 \\ x_2 \end{bmatrix}
- \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_2 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} -x_2 \\ 0 \\ -1 \end{bmatrix},
\]
\[
\det\begin{bmatrix} x_1 & x_2 & -x_2 \\ 0 & 1 & 0 \\ x_2 & 0 & -1 \end{bmatrix} = -x_1 + x_2^2.
\]
For the distribution \(\Delta_2\) we obtain \(\dim \Delta_2 = 3\) (under the assumption \(-x_1 + x_2^2 \neq 0\)), so \(\dim \Delta_2 = n\), and we state that the system is accessible from every point \(x \in U\), \(U = \{x \in \mathbb{R}^3 : -x_1 + x_2^2 \neq 0\}\). The system (5.2) has drift, so the accessibility of this system does not imply its controllability.
The system has not been shown to be accessible for \(x = 0\), so we extend the distribution \(\Delta_2\) by the Lie bracket
\[
[[f, g], g] = \frac{\partial g}{\partial x}[f, g] - \frac{\partial [f, g]}{\partial x}g =
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} -x_2 \\ 0 \\ -1 \end{bmatrix}
- \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_2 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
\]
and we create a new distribution:
\[
\Delta_3 = \mathrm{span}\{f, g, [f, g], [[f, g], g]\} = \mathrm{span}\left\{\begin{bmatrix} x_1 \\ 0 \\ x_2 \end{bmatrix}, \begin{bmatrix} x_2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -x_2 \\ 0 \\ -1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\},
\]
\[
\det\begin{bmatrix} x_2 & -x_2 & 1 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix} = -1.
\]
For the distribution \(\Delta_3\) we obtain \(\dim \Delta_3 = 3\) for every \(x \in \mathbb{R}^3\), so the system is accessible from \(x = 0\) as well.
The system is defined as:
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} (x_2 - 1)^2 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u. \tag{5.3}
\]
Solution
For the system (5.3) the vector fields are:
\[
f(x) = \begin{bmatrix} x_2^2 - 2x_2 + 1 \\ 0 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
For this distribution \(\dim \Delta_1 = 2\) under the assumption that \(x_2 \neq 1\). We obtain \(\dim \Delta_1 = n\), so the system is accessible from every point \(x \in U\), \(U = \{x \in \mathbb{R}^2 : x_2 \neq 1\}\).
\[
\Delta_2 = \mathrm{span}\{f, g, [f, g]\} = \mathrm{span}\left\{\begin{bmatrix} x_2^2 - 2x_2 + 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2x_2 + 2 \\ 0 \end{bmatrix}\right\}.
\]
The vector field \([f, g]\) does not increase the distribution dimension, and for \(x_2 = 1\) we obtain \(\dim \Delta_2 = 1\). So we calculate a higher-order field (any combination of the fields \(f, g, [f, g]\) such that the newly created field increases the distribution dimension; in general we may not obtain such a field at once, in which case we continue the procedure of increasing the field order):
\[
[[f, g], g] = \frac{\partial g}{\partial x}[f, g] - \frac{\partial [f, g]}{\partial x}g = \begin{bmatrix} 2 \\ 0 \end{bmatrix}.
\]
We extend the distribution \(\Delta_2\) by the Lie bracket \([[f, g], g]\) and create a new distribution:
\[
\Delta_3 = \mathrm{span}\{f, g, [f, g], [[f, g], g]\} = \mathrm{span}\left\{\begin{bmatrix} x_2^2 - 2x_2 + 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2x_2 + 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \end{bmatrix}\right\}.
\]
For the system with two inputs:
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}u_1 + \begin{bmatrix} 0 \\ 1 \\ x_1 \end{bmatrix}u_2, \tag{5.4}
\]
check if it is accessible and STLC (short-time locally controllable).
Solution
For a given system, vector fields are as follows:
0 1 0
f (x) = 0 , g1 (x) = 0 , g2 (x) = 1 .
0 0 x1
For distribution 42 we obtain dim 42 = 3, so dim 42 = n and we state, that the system is accessible
for any point x ∈ U, U = {x ∈ R3 }. The system (5.4) has no drift, therefore the accessibility implies
controllability (STLC).
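The rank argument for the driftless system (5.4) can be sketched as follows; a minimal sympy check (the helper name `bracket` is an assumption of this sketch):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
g1 = sp.Matrix([1, 0, 0])   # input field of u1 in (5.4)
g2 = sp.Matrix([0, 1, x1])  # input field of u2 in (5.4)

def bracket(a, b):
    """Lie bracket [a, b] = (db/dx) a - (da/dx) b."""
    return b.jacobian(x) * a - a.jacobian(x) * b

g12 = bracket(g1, g2)  # [0, 0, 1]
D2 = g1.row_join(g2).row_join(g12)
print(D2.rank())  # 3 for every x, so the driftless system is STLC
```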
Exercise 30. For the system with two inputs:
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ 0 \\ 0 \end{bmatrix}u_1 + \begin{bmatrix} 0 \\ 1 \\ x_1 \end{bmatrix}u_2, \tag{5.5}
\]
check if it is accessible and STLC (short-time locally controllable) at the point \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
Solution
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
0 x2 0
f (x) = 0 , g1 (x) = 0 , g2 (x) = 1 .
0 0 x1
Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:
x2 0
41 = span g1 , g2 = span 0 , 1 .
0 x1
Dla tej dystrybucji dim 41 = 2 (przy założeniu x2 6= 0), a wi˛ec dim 41 6= n. Z uwagi na to, roz-
szerzamy dystrybucj˛e 41 o nawias Liego
∂g2 ∂g1
g1 −
g1 , g2 g2 =
=
∂x ∂x
0 0 0 x2 0 1 0 0 −1
= 0 0 0 0 − 0 0 0 1 = 0
1 0 0 0 0 0 0 x1 x2
i tworzymy nowa˛ dystrybucj˛e:
x 2 0 −1
42 = span g1 , g2 , g1 , g2 = span 0 , 1 , 0 ,
0 x1 x2
x2 0 −1
det 0 1 0 = x22
0 x1 x2
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu x2 6= 0), a wi˛ec dim 42 = n i stwier-
dzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : x2 6= 0}. Układ (5.5) nie
posiada dryfu, a zatem fakt osiagalności
˛ tego układu implikuje jego sterowalność.
T
Układ nie jest osiagalny
˛ dla x = , wi˛ec rozszerzamy dystrybucj˛e 42 o nawias Liego
0 0 0
∂g2 ∂ g1 , g2
g1 , g2 , g2 = g1 , g2 − g2 =
∂x ∂x
0 0 0 −1 0 0 0 0 0
= 0 0 0 0 − 0 0 0 1 = 0
1 0 0 x2 0 1 0 x1 −2
i tworzymy nowa˛ dystrybucj˛e:
43 = span g1 , g2 , g1 , g2 , g1 , g 2 , g 2 =
x2 0 −1 0
= span 0 , 1 , 0 , 0 ,
0 x1 x2 −2
0 −1 0
det 1 0 0 = −2.
x1 x2 −2
Dla dystrybucji 43 uzyskujemy dim 43 = 3 dla x ∈ R3 .
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ 0 \\ 0 \end{bmatrix}u_1 + \begin{bmatrix} 0 \\ 2 \\ x_1 + x_2 \end{bmatrix}u_2. \tag{5.6}
\]
1. Check if the system is accessible in the neighbourhood of \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\) under the assumption \(x \neq \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
2. Check if the system is accessible for \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
Solution
Ad.1
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
0 x2 0
f (x) = 0 , g1 (x) = 0 , g2 (x) = 2 .
0 0 x1 + x2
Dla tej dystrybucji dim 41 = 2 (przy założeniu x2 6= 0), a wi˛ec dim 41 6= n. Z uwagi na to, roz-
szerzamy dystrybucj˛e 41 o nawias Liego
∂g2 ∂g1
g1 , g 2 = g1 − g2 =
∂x ∂x
0 0 0 x2 0 1 0 0 −2
= 0 0 0 0 − 0 0 0 2 = 0
1 1 0 0 0 0 0 x1 + x2 x2
i tworzymy nowa˛ dystrybucj˛e:
x2 0 −2
42 = span g1 , g2 , g1 , g2 = span 0 , 1 , 0 ,
0 x1 x2
x2 0 −2
det 0 1 0 = 2x22
0 x1 x2
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu x2 6= 0), a wi˛ec dim 42 = n i stwier-
dzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : x2 6= 0}.
Ad.2
The system has not been shown to be accessible for \(x = 0\), so we extend the distribution \(\Delta_2\) by the Lie bracket
\[
[[g_1, g_2], g_2] = \frac{\partial g_2}{\partial x}[g_1, g_2] - \frac{\partial [g_1, g_2]}{\partial x}g_2 =
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}\begin{bmatrix} -2 \\ 0 \\ x_2 \end{bmatrix}
- \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 2 \\ x_1 + x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ -4 \end{bmatrix}
\]
and we create a new distribution:
\[
\Delta_3 = \mathrm{span}\{g_1, g_2, [g_1, g_2], [[g_1, g_2], g_2]\} = \mathrm{span}\left\{\begin{bmatrix} x_2 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ x_1 + x_2 \end{bmatrix}, \begin{bmatrix} -2 \\ 0 \\ x_2 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ -4 \end{bmatrix}\right\},
\]
\[
\det\begin{bmatrix} 0 & -2 & 0 \\ 2 & 0 & 0 \\ x_1 + x_2 & x_2 & -4 \end{bmatrix} = -16.
\]
For the distribution \(\Delta_3\) we obtain \(\dim \Delta_3 = 3\) for every \(x \in \mathbb{R}^3\).
Ad.3
The system (5.6) has no drift, therefore the accessibility of this system implies its controllability.
\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ x_1 \\ 0 \end{bmatrix}u_1 + \begin{bmatrix} 0 \\ 1 \\ 2x_1 + x_2 \end{bmatrix}u_2. \tag{5.7}
\]
1. Check if the system is accessible in the neighbourhood of \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\) under the assumption \(x \neq \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
2. Check if the system is accessible for \(x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T\).
Solution
Ad.1
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
0 x2 0
f (x) = 0 , g1 (x) = x1 , g2 (x) = 1 .
0 0 2x1 + x2
Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:
x 2 0
41 = span g1 , g2 = span x1 , 1 .
0 2x1 + x2
Ad.2
The system has not been shown to be accessible for \(x = 0\), so we extend the distribution \(\Delta_2\) by the Lie bracket
\[
[[g_1, g_2], g_2] = \frac{\partial g_2}{\partial x}[g_1, g_2] - \frac{\partial [g_1, g_2]}{\partial x}g_2 =
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 2 & 1 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ 0 \\ 2x_2 + x_1 \end{bmatrix}
- \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 2 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 2x_1 + x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ -4 \end{bmatrix}
\]
and we create a new distribution:
\[
\Delta_3 = \mathrm{span}\{g_1, g_2, [g_1, g_2], [[g_1, g_2], g_2]\} = \mathrm{span}\left\{\begin{bmatrix} x_2 \\ x_1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 2x_1 + x_2 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 2x_2 + x_1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ -4 \end{bmatrix}\right\},
\]
\[
\det\begin{bmatrix} g_2 & [g_1, g_2] & [[g_1, g_2], g_2] \end{bmatrix} = -4.
\]
For the distribution \(\Delta_3\) we obtain \(\dim \Delta_3 = 3\) for every \(x \in \mathbb{R}^3\).
Ad.3
The system (5.7) has no drift, therefore the accessibility of this system implies its controllability.
• condition no 1,
• condition no 2:
\[
\left[\mathrm{ad}_f^q g_i(x), \mathrm{ad}_f^r g_j(x)\right] = 0. \tag{6.2}
\]
The system is defined as:
\[
\dot{x} = \begin{bmatrix} x_2 - 2x_2x_3 + x_3^2 \\ x_3 \\ 0 \end{bmatrix} + \begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix}u. \tag{6.3}
\]
1. Check the conditions for linearisation of the system with use of pure state transformation.
2. Define the diffeomorphism \(x = \Psi(z)\) which defines the pure state transformation.
3. Calculate the inverse of the diffeomorphism in the form \(z = \Psi^{-1}(x)\) and present the system equations after the linearisation.
Solution
Ad.1
The drift and the input vector field are defined as follows:
\[
f(x) = \begin{bmatrix} x_2 - 2x_2x_3 + x_3^2 \\ x_3 \\ 0 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix}. \tag{6.4}
\]
where
\[
\mathrm{ad}_f g = \begin{bmatrix} 2x_2 \\ -1 \\ 0 \end{bmatrix}, \tag{6.6}
\]
\[
\mathrm{ad}_f^2 g = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. \tag{6.7}
\]
On this basis we can state that \(\dim \Delta(x) = 3 = n\), so the first condition is satisfied.
Let us write out all the Lie brackets associated with the second condition (6.2) for \(0 \le q \le 2\) and \(r = 0\):
\[
\left[\mathrm{ad}_f^0 g(x), \mathrm{ad}_f^0 g(x)\right] = [g(x), g(x)] = 0 \quad \text{(satisfied by definition)},
\]
\[
\left[\mathrm{ad}_f^1 g(x), \mathrm{ad}_f^0 g(x)\right] = 0 \quad \text{(to be checked)}, \tag{6.8}
\]
\[
\left[\mathrm{ad}_f^2 g(x), \mathrm{ad}_f^0 g(x)\right] = 0 \quad \text{(to be checked)}, \tag{6.9}
\]
\[
\left[\mathrm{ad}_f^2 g(x), \mathrm{ad}_f^1 g(x)\right] = 0 \quad \text{(to be checked)}, \tag{6.10}
\]
\[
\left[\mathrm{ad}_f^1 g(x), \mathrm{ad}_f^2 g(x)\right] = 0 \quad \text{(satisfied by definition)},
\]
\[
\left[\mathrm{ad}_f^2 g(x), \mathrm{ad}_f^2 g(x)\right] = 0 \quad \text{(satisfied by definition)}.
\]
On the basis of (6.14) it can be proved analytically that condition (6.9) (i.e. \(\left[\mathrm{ad}_f^2 g(x), \mathrm{ad}_f^0 g(x)\right] = 0\)) is also satisfied. This can also be easily checked by substituting the values of the corresponding vector fields, which leads to:
\[
\left[\mathrm{ad}_f^2 g(x), \mathrm{ad}_f^0 g(x)\right] = \left[\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix}\right] = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
\]
Additionally, on the same basis it can be shown analytically that conditions (6.10), (6.11), (6.12) and (6.13) will also be satisfied. Thus "Condition 2", given by equation (6.2), for the system in the form (6.3) (a system with a single input \(u\)) finally reduces to checking only condition (6.8) (i.e. \(\left[\mathrm{ad}_f^1 g(x), \mathrm{ad}_f^0 g(x)\right] = 0\)).
On this basis we conclude that the conditions for linearisation of the system by a pure state transformation are satisfied.
Ad.2
The next step is to adopt vectors \(h_i\), \(i = 1, \dots, 3\), which span the distribution \(\Delta(x)\) and which we will use to compute the flows of the solutions of the differential equations. Since this distribution has already been defined by equation (6.5), we take:
\[
\Delta(x) = \mathrm{span}\{h_1\ h_2\ h_3\} = \mathrm{span}\left\{g\ \ \mathrm{ad}_f g\ \ \mathrm{ad}_f^2 g\right\} = \mathrm{span}\left\{\begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix}, \begin{bmatrix} 2x_2 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\}. \tag{6.15}
\]
\[
\Phi_{z_3}^{h_1} \circ \Phi_{z_2}^{h_2} \circ \Phi_{z_1}^{h_3}(x_0) = \Phi_{z_3}^{h_1}\Big(\Phi_{z_2}^{h_2}\big(\Phi_{z_1}^{h_3}(x_0)\big)\Big) = \Psi(x_0, z) = x. \tag{6.16}
\]
Note the order in which the variables \(z_1\), \(z_2\) and \(z_3\) were substituted (the reverse ordering of these variables is also possible).
We take the initial state as \(x_0 = \begin{bmatrix} x_{10} & x_{20} & x_{30} \end{bmatrix}^T\).
The first flow \(\Phi_{z_1}^{h_3}(x_0)\) is computed from:
\[
\dot{x} = h_3 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \Rightarrow
\Phi_t^{h_3}(x_0) = \begin{bmatrix} t + x_{10} \\ x_{20} \\ x_{30} \end{bmatrix} \Rightarrow
\Phi_{z_1}^{h_3}(x_0) = \begin{bmatrix} z_1 + x_{10} \\ x_{20} \\ x_{30} \end{bmatrix}.
\]
\[
\dot{x} = h_2 = \begin{bmatrix} 2x_2 \\ -1 \\ 0 \end{bmatrix} \Rightarrow
\Phi_t^{h_2}(x_0) = \begin{bmatrix} -t^2 + 2x_{20}t + x_{10} \\ -t + x_{20} \\ x_{30} \end{bmatrix},
\]
\[
\dot{x} = h_1 = \begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix} \Rightarrow
\Phi_t^{h_1}(x_0) = \begin{bmatrix} -t^4 - 4x_{30}t^3 + 2\big(x_{20} - 2x_{30}^2\big)t^2 + 4x_{20}x_{30}t + x_{10} \\ -t^2 - 2x_{30}t + x_{20} \\ t + x_{30} \end{bmatrix}.
\]
Composing the flows as in (6.16), the first component of \(x = \Psi(x_0, z)\) reads
\[
x_1 = -z_3^4 - 4x_{30}z_3^3 + 2\big((-z_2 + x_{20}) - 2x_{30}^2\big)z_3^2 + 4(-z_2 + x_{20})x_{30}z_3 - z_2^2 + 2x_{20}z_2 + z_1 + x_{10}.
\]
Ad.3
After inverting the transformation (6.18) we can write the dependence:
\[
z = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \Psi^{-1}(x) = \begin{bmatrix} \Psi_1^{-1}(x) \\ \Psi_2^{-1}(x) \\ \Psi_3^{-1}(x) \end{bmatrix}
= \begin{bmatrix} x_3^4 + 2\big(-x_3^2 - x_2\big)x_3^2 + \big(-x_3^2 - x_2\big)^2 + x_1 \\ -x_3^2 - x_2 \\ x_3 \end{bmatrix} = \tag{6.19}
\]
\[
= \begin{bmatrix} x_2^2 + x_1 \\ -x_3^2 - x_2 \\ x_3 \end{bmatrix}. \tag{6.20}
\]
To present the system dynamics after linearisation, it is necessary to compute the time derivative of equation (6.20):
\[
\dot{z} = \frac{d}{dt}\Psi^{-1}(x) = \frac{\partial \Psi^{-1}(x)}{\partial x}\frac{dx}{dt} = \frac{\partial \Psi^{-1}(x)}{\partial x}\dot{x}
= \begin{bmatrix} \frac{\partial z_1}{\partial x}\dot{x} \\ \frac{\partial z_2}{\partial x}\dot{x} \\ \frac{\partial z_3}{\partial x}\dot{x} \end{bmatrix}
= \begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{bmatrix}. \tag{6.21}
\]
We determine the individual partial derivatives in (6.21) using the equation of the original (nonlinear) system (6.3):
\[
\dot{z}_1 = \frac{\partial z_1}{\partial x}\dot{x} = \begin{bmatrix} 1 & 2x_2 & 0 \end{bmatrix}\left(\begin{bmatrix} x_2 - 2x_2x_3 + x_3^2 \\ x_3 \\ 0 \end{bmatrix} + \begin{bmatrix} 4x_2x_3 \\ -2x_3 \\ 1 \end{bmatrix}u\right) =
\]
\[
= x_2 - 2x_2x_3 + x_3^2 + 4x_2x_3u + 2x_2(x_3 - 2x_3u) = x_2 + x_3^2 \overset{(6.20)}{=} -z_2,
\]
\[
\dot{z}_2 = \frac{\partial z_2}{\partial x}\dot{x} = \begin{bmatrix} 0 & -1 & -2x_3 \end{bmatrix}\begin{bmatrix} x_2 - 2x_2x_3 + x_3^2 + 4x_2x_3u \\ x_3 - 2x_3u \\ u \end{bmatrix} = -x_3 + 2x_3u - 2x_3u = -x_3 \overset{(6.20)}{=} -z_3.
\]
Analogously, \(\dot{z}_3 = \frac{\partial z_3}{\partial x}\dot{x} = u\).
The system is defined as:
\[
\dot{x} = \begin{bmatrix} x_2 + x_3^2 \\ -x_3 \\ 0 \end{bmatrix} + \begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix}u.
\]
1. Check the conditions for linearisation of the system with use of pure state transformation.
2. Define the diffeomorphism \(x = \Psi(z)\) which defines the pure state transformation.
3. Calculate the inverse of the diffeomorphism in the form \(z = \Psi^{-1}(x)\) and present the system equations after the linearisation.
Solution
Ad.1
The drift and the input vector field are defined as follows:
\[
f(x) = \begin{bmatrix} x_2 + x_3^2 \\ -x_3 \\ 0 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix}.
\]
where
\[
\mathrm{ad}_f g = \begin{bmatrix} 0 & 0 & -4x_3 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_2 + x_3^2 \\ -x_3 \\ 0 \end{bmatrix}
- \begin{bmatrix} 0 & 1 & 2x_3 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \tag{6.26}
\]
\[
\mathrm{ad}_f^2 g = -\begin{bmatrix} 0 & 1 & 2x_3 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}. \tag{6.27}
\]
On this basis we can state that \(\dim \Delta(x) = 3 = n\), so the first condition is satisfied.
To satisfy the second condition of linearisation by a pure state transformation, we check:
\[
\left[\mathrm{ad}_f^1 g(x), \mathrm{ad}_f^0 g(x)\right] = [\mathrm{ad}_f g(x), g(x)] =
\left[\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix}\right]
= \begin{bmatrix} 0 & 0 & -4x_3 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
\]
On this basis we conclude that the conditions for linearisation of the system by a pure state transformation are satisfied.
Ad.2
The next step is to adopt vectors \(h_i\), \(i = 1, \dots, 3\), which span the distribution \(\Delta(x)\) and which we will use to compute the flows of the solutions of the differential equations. Since this distribution has already been defined by equation (6.25), we take:
\[
\Delta(x) = \mathrm{span}\{h_1\ h_2\ h_3\} = \mathrm{span}\left\{g\ \ \mathrm{ad}_f g\ \ \mathrm{ad}_f^2 g\right\} = \mathrm{span}\left\{\begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}\right\}. \tag{6.28}
\]
Note the order in which the variables \(z_1\), \(z_2\) and \(z_3\) were substituted (the reverse ordering of these variables is also possible).
We take the initial state as \(x_0 = \begin{bmatrix} x_{10} & x_{20} & x_{30} \end{bmatrix}^T\).
The first flow \(\Phi_{z_1}^{h_3}(x_0)\) is computed from:
\[
\dot{x} = h_3 = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix} \Rightarrow
\Phi_t^{h_3}(x_0) = \begin{bmatrix} -t + x_{10} \\ x_{20} \\ x_{30} \end{bmatrix} \Rightarrow
\Phi_{z_1}^{h_3}(x_0) = \begin{bmatrix} -z_1 + x_{10} \\ x_{20} \\ x_{30} \end{bmatrix},
\]
\[
\dot{x} = h_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \Rightarrow
\Phi_t^{h_2}(x_0) = \begin{bmatrix} x_{10} \\ t + x_{20} \\ x_{30} \end{bmatrix} \Rightarrow
\Phi_{z_2}^{h_2}(x_0) = \begin{bmatrix} x_{10} \\ z_2 + x_{20} \\ x_{30} \end{bmatrix},
\]
\[
\dot{x} = h_1 = \begin{bmatrix} -2x_3^2 \\ -2x_3 \\ 1 \end{bmatrix} \Rightarrow
\Phi_t^{h_1}(x_0) = \begin{bmatrix} -\tfrac{2}{3}t^3 - 2x_{30}t^2 - 2x_{30}^2t + x_{10} \\ -t^2 - 2x_{30}t + x_{20} \\ t + x_{30} \end{bmatrix}.
\]
Ad.3
After inverting the transformation (6.31) we can write the dependence:
\[
z = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \Psi^{-1}(x) = \begin{bmatrix} \Psi_1^{-1}(x) \\ \Psi_2^{-1}(x) \\ \Psi_3^{-1}(x) \end{bmatrix}
= \begin{bmatrix} -\tfrac{2}{3}x_3^3 - x_1 \\ x_3^2 + x_2 \\ x_3 \end{bmatrix}. \tag{6.32}
\]
To present the system dynamics after linearisation, it is necessary to compute the time derivative of equation (6.32):
\[
\dot{z} = \frac{d}{dt}z(x) = \frac{\partial z(x)}{\partial x}\dot{x}
= \begin{bmatrix} \frac{\partial z_1}{\partial x}\dot{x} \\ \frac{\partial z_2}{\partial x}\dot{x} \\ \frac{\partial z_3}{\partial x}\dot{x} \end{bmatrix}
= \begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{bmatrix}.
\]
\[
\dot{z}_1 = \frac{\partial z_1}{\partial x}\dot{x} = \begin{bmatrix} -1 & 0 & -2x_3^2 \end{bmatrix}\begin{bmatrix} x_2 + x_3^2 - 2x_3^2u \\ -x_3 - 2x_3u \\ u \end{bmatrix} = -x_2 - x_3^2 \overset{(6.32)}{=} -z_2,
\]
\[
\dot{z}_2 = \frac{\partial z_2}{\partial x}\dot{x} = \begin{bmatrix} 0 & 1 & 2x_3 \end{bmatrix}\begin{bmatrix} x_2 + x_3^2 - 2x_3^2u \\ -x_3 - 2x_3u \\ u \end{bmatrix} = -x_3 \overset{(6.32)}{=} -z_3,
\]
\[
\dot{z}_3 = \frac{\partial z_3}{\partial x}\dot{x} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_2 + x_3^2 - 2x_3^2u \\ -x_3 - 2x_3u \\ u \end{bmatrix} = u.
\]
Substituting the computed derivatives of the individual elements of the variable \(z\) into one equation, we obtain the system after linearisation:
\[
\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{bmatrix} = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u. \tag{6.33}
\]
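The transformation (6.32) and the linearised form (6.33) can be cross-checked symbolically. A minimal sympy sketch:

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2 + x3**2, -x3, 0])   # drift of the nonlinear system
g = sp.Matrix([-2*x3**2, -2*x3, 1])   # input vector field
z = sp.Matrix([-sp.Rational(2, 3)*x3**3 - x1,
               x3**2 + x2,
               x3])                   # inverse diffeomorphism (6.32)

zdot = sp.expand(z.jacobian(x) * (f + g*u))
print(zdot.T)  # (-x2 - x3**2, -x3, u), i.e. ż1 = -z2, ż2 = -z3, ż3 = u
```

The control terms cancel exactly in the first two components, which is what makes (6.33) a pure state transformation result.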
• condition no 1
• condition no 2
The system is defined as:
\[
\dot{x} = \begin{bmatrix} x_2 - 3x_3^2 - 3u \\ \sin x_1 + x_2^2 + x_3 \\ x_3^2 + u \end{bmatrix}.
\]
1. Check the conditions for linearisation of the system with state transformation and input transformation.
3. Define a feedback control allowing for trajectory tracking of a smooth admissible reference trajectory defined for the linearising output as \(z_{r1} = r(t)\).
Solution
Ad.1
The drift and the input vector field are defined as follows:
\[
f(x) = \begin{bmatrix} x_2 - 3x_3^2 \\ \sin x_1 + x_2^2 + x_3 \\ x_3^2 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix}. \tag{6.37}
\]
and
\[
\Delta_{n-1} = \Delta_2 = \mathrm{span}\left\{g, \mathrm{ad}_f g, \mathrm{ad}_f^2 g\right\}. \tag{6.39}
\]
Calculating the individual Lie brackets we have:
\[
\mathrm{ad}_f g = \begin{bmatrix} 6x_3 \\ 3\cos x_1 - 1 \\ -2x_3 \end{bmatrix} \tag{6.40}
\]
and
\[
\mathrm{ad}_f^2 g = \begin{bmatrix} -6x_3^2 + 1 - 3\cos x_1 \\ -3\big(x_2 - 3x_3^2\big)\sin x_1 - 6x_3\cos x_1 - 2x_2(3\cos x_1 - 1) + 2x_3 \\ 2x_3^2 \end{bmatrix}. \tag{6.41}
\]
We check the involutiveness of the distribution \(\Delta_1\). Calculating the Lie brackets
\[
\mathrm{ad}_g\,\mathrm{ad}_f g = [g, \mathrm{ad}_f g] = \begin{bmatrix} 6 \\ 9\sin x_1 \\ -2 \end{bmatrix}, \qquad
\mathrm{ad}_g^2\,\mathrm{ad}_f g = \begin{bmatrix} 0 \\ -27\sin x_1 \\ 0 \end{bmatrix}, \tag{6.42}
\]
\[
\mathrm{ad}_g^3\,\mathrm{ad}_f g = \begin{bmatrix} 0 \\ -81\cos x_1 \\ 0 \end{bmatrix}, \dots \tag{6.43}
\]
we show that for \(k = 1, 2, \dots\) we have \(\mathrm{ad}_g^k\,\mathrm{ad}_f g \in \Delta_1\), and thus we prove the involutiveness of the distribution \(\Delta_1\).
Next we check the dimension of the distribution \(\Delta_2\).
We need to check the rank of the matrix
\[
C = \begin{bmatrix} g & \mathrm{ad}_f g & \mathrm{ad}_f^2 g \end{bmatrix}.
\]
It follows that for
\[
\cos x_1 \neq \frac{1}{3} \tag{6.45}
\]
we have \(\dim \Delta_2 = 3\).
The system therefore satisfies the linearisation conditions in a neighbourhood of the point \(x = 0\).
Ad.2
We now look for the linearising function by considering the following distribution.
(Note the order in which the variables \(z_1\), \(z_2\) and \(z_3\) were substituted; in this case \(z_1\) is substituted first, and therefore we will later take \(h(x) = z_1\).)
Each of the flows is determined as follows:
\[
\Phi_{z_3}^{h_1}(x_0) = \begin{bmatrix} -3z_3 + x_{10} \\ x_{20} \\ z_3 + x_{30} \end{bmatrix},
\]
where \(\psi_2(x_0, z)\) is the integral of the solution of the differential equation \(\dot{x}_2 = 3\cos\big(-3x_{30}e^{-t} + x_{20}\big)\).
The resulting solution flow can be presented in the following form:
\[
z_1 = x_1 + 3x_3. \tag{6.50}
\]
According to the Frobenius theorem, the function \(h(x) = z_1\) has the property that its gradient \(\nabla h\) is an annihilator of the distribution \(\Delta\). It can be shown that in the considered case, with \(\nabla h = \frac{\partial h}{\partial x} = \begin{bmatrix} 1 & 0 & 3 \end{bmatrix}\), the relation
\[
\nabla h \cdot h_i = 0 \tag{6.51}
\]
holds, where \(i \in (1, n-1) \Rightarrow i = \{1, 2\}\).
Another method of determining the output function is to use the properties that its gradient must satisfy. In the analysed case we have:
\[
L_g h = 0, \quad L_g L_f h = 0, \quad L_g L_f^2 h \neq 0
\]
or alternatively:
\[
L_g h = 0, \quad L_{\mathrm{ad}_f g}h = 0, \quad L_{\mathrm{ad}_f^2 g}h \neq 0. \tag{6.52}
\]
It can be shown that an example annihilator of the vector fields \(g\) and \(\mathrm{ad}_f g\) is the following covector
\[
\frac{\partial h}{\partial x} = \begin{bmatrix} c & 0 & 3c \end{bmatrix}, \tag{6.55}
\]
where \(c \in \mathbb{R}\) is an arbitrary constant. At the same time, this covector satisfies relation (6.54) for \(1 - 3\cos x_1 \neq 0\). On the basis of (6.55) we find the scalar function \(h(x)\).
It is easy to show that the function (6.56) is a special case of the function defined by equation (6.50).
Ad.3
According to the task, we assume that the reference trajectory defined for the decoupling output has the form
\[
z_{r1} = r(t). \tag{6.61}
\]
Taking into account the linear system model (6.59), we can define:
\[
z_{r2} = \dot{r}(t), \qquad z_{r3} = \ddot{r}(t).
\]
Defining the control error
\[
e_z \triangleq z - z_r \tag{6.62}
\]
and differentiating it with respect to time, we obtain the linear differential equation
\[
\dot{e}_z = Ae_z + b(v - v_r), \tag{6.63}
\]
where
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \tag{6.64}
\]
and \(v_r = \dot{z}_{r3} = \dddot{r}(t)\). Next we design the linear control in the form
\[
v = Ke_z + v_r, \tag{6.65}
\]
where \(K = \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix}\) is a gain matrix chosen so that the closed-loop linear system matrix \(A + bK\) is a Hurwitz matrix. Directly at the input of the nonlinear control system we apply the signal:
\[
u = -\frac{1}{L_gL_f^2h}L_f^3h + \frac{1}{L_gL_f^2h}v = -\frac{1}{L_gL_f^2h}L_f^3h + \frac{1}{L_gL_f^2h}(Ke_z + v_r). \tag{6.66}
\]
Exercise 36. The system is defined as:
\[
\dot{x} = \begin{bmatrix} x_1^2 + 2x_2^2 + 4x_1x_2 + x_2 - 2(1 + x_2^2)u \\ x_2^2 + (1 + x_2^2)u \end{bmatrix}. \tag{6.67}
\]
1. Check the conditions for linearisation of the system with state transformation and input transformation.
2. Define the linearising input function.
Solution
Ad.1
The drift and the input vector field are defined as follows:
\[
f(x) = \begin{bmatrix} x_1^2 + 2x_2^2 + 4x_1x_2 + x_2 \\ x_2^2 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} -2(1 + x_2^2) \\ 1 + x_2^2 \end{bmatrix}. \tag{6.68}
\]
We check the linearisability conditions using state transformation and feedback. For the considered system the state is described in \(\mathbb{R}^2\), i.e. \(n = \dim x = 2\). One has to check whether the distribution \(\Delta_{n-2}\) is involutive and whether \(\Delta_{n-1}\) has dimension \(n\). We define two distributions:
and
\[
\Delta_{n-1} = \Delta_1 = \mathrm{span}\{g, \mathrm{ad}_f g\}. \tag{6.70}
\]
Calculating the individual Lie brackets we have:
\[
\mathrm{ad}_f g = \frac{\partial g}{\partial x}f - \frac{\partial f}{\partial x}g =
\begin{bmatrix} 0 & -4x_2 \\ 0 & 2x_2 \end{bmatrix}\begin{bmatrix} x_1^2 + 2x_2^2 + 4x_1x_2 + x_2 \\ x_2^2 \end{bmatrix}
- \begin{bmatrix} 2x_1 + 4x_2 & 4x_1 + 4x_2 + 1 \\ 0 & 2x_2 \end{bmatrix}\begin{bmatrix} -2(1 + x_2^2) \\ 1 + x_2^2 \end{bmatrix}
= \begin{bmatrix} 4x_2 - x_2^2 - 1 \\ -2x_2 \end{bmatrix}.
\]
Ad.2
We now look for the linearising function by considering the following distribution,
where \(h_1 = g\) is a vector field belonging to the distribution, while \(h_2\) is an arbitrarily chosen vector field linearly independent of \(h_1\). For example, we take \(h_2 = \begin{bmatrix} 1 & 0 \end{bmatrix}^T\).
Next we determine the flows of the solutions of the ordinary differential equations:
\[
\Phi_{z_2}^{h_1} \circ \Phi_{z_1}^{h_2}(x_0) = \Phi_{z_2}^{h_1}\big(\Phi_{z_1}^{h_2}(x_0)\big) = \Psi(x_0, z) = x. \tag{6.74}
\]
(Note the order in which the variables \(z_1\) and \(z_2\) were substituted; in this case \(z_1\) is substituted first, and therefore we will later take \(h(x) = z_1\).)
We calculate \(\Phi_{z_1}^{h_2}\):
\[
\dot{x} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad
\Phi_t^{h_2} = \begin{bmatrix} t + x_{10} \\ x_{20} \end{bmatrix} \Rightarrow
\Phi_{z_1}^{h_2} = \begin{bmatrix} z_1 + x_{10} \\ x_{20} \end{bmatrix}.
\]
We calculate \(\Phi_{z_2}^{h_1}\):
\[
\dot{x} = \begin{bmatrix} -2(1 + x_2^2) \\ 1 + x_2^2 \end{bmatrix} \Rightarrow \dot{x}_1 = -2(1 + x_2^2), \quad \dot{x}_2 = 1 + x_2^2.
\]
Solving for \(x_2\):
\[
\dot{x}_2 = 1 + x_2^2, \qquad \frac{dx_2}{dt} = 1 + x_2^2, \qquad \frac{dx_2}{1 + x_2^2} = dt, \qquad \arctan x_2 = t + C,
\]
for \(t = 0 \Rightarrow C = \arctan x_{20}\), and then
\[
\Phi_t^{h_1} = \begin{bmatrix} -2\tan(t + \arctan x_{20}) + x_{10} + 2x_{20} \\ \tan(t + \arctan x_{20}) \end{bmatrix} \Rightarrow
\Phi_{z_2}^{h_1} = \begin{bmatrix} -2\tan(z_2 + \arctan x_{20}) + x_{10} + 2x_{20} \\ \tan(z_2 + \arctan x_{20}) \end{bmatrix}.
\]
Taking \(x_0 = \begin{bmatrix} x_{10} \\ x_{20} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}\) we obtain:
\[
\Phi_{z_1}^{h_2}(x_0) = \begin{bmatrix} z_1 \\ 0 \end{bmatrix},
\]
and using \(\Phi_{z_1}^{h_2}(x_0)\) as the argument for \(\Phi_{z_2}^{h_1}\) we obtain:
\[
\Psi(x_0, z) = \Phi_{z_2}^{h_1}\big(\Phi_{z_1}^{h_2}(x_0)\big) = \begin{bmatrix} -2\tan z_2 + z_1 \\ \tan z_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. \tag{6.75}
\]
Hence
\[
z_1 = x_1 + 2x_2. \tag{6.76}
\]
According to the Frobenius theorem, the function \(h(x) = z_1\) has the property that its gradient \(\nabla h\) is an annihilator of the distribution \(\Delta\). It can be shown that in the considered case, with \(\nabla h = \frac{\partial h}{\partial x} = \begin{bmatrix} 1 & 2 \end{bmatrix}\), the relation
\[
\nabla h \cdot h_i = 0 \tag{6.77}
\]
holds, where \(i \in (1, n-1) \Rightarrow i = \{1\}\).
Another method of determining the output function is to use the properties that its gradient must satisfy. In the analysed case we have:
\[
L_g h = 0, \quad L_g L_f h \neq 0
\]
or alternatively:
\[
L_g h = 0, \quad L_{\mathrm{ad}_f g}h \neq 0. \tag{6.78}
\]
\[
\frac{\partial h}{\partial x} = \begin{bmatrix} c & 2c \end{bmatrix}, \tag{6.81}
\]
where \(c \in \mathbb{R}\) is an arbitrary constant. At the same time, this covector satisfies relation (6.80) for \(-1 - x_2^2 \neq 0\). On the basis of (6.81) we find the scalar function \(h(x)\).
It is easy to show that the function (6.82) is a special case of the function defined by equation (6.76).
After determining the output function \(h\), we find the coordinate transformation according to:
\[
z = T(x) = \begin{bmatrix} h \\ L_f h \end{bmatrix}. \tag{6.83}
\]
We take \(c = 1\) and obtain:
where
\[
v = L_gL_fh \cdot u + L_f^2h \tag{6.86}
\]
is the new control input. In the considered case we have
\[
L_f^2h = L_fL_fh = L_f\left(\begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} x_1^2 + 2x_2^2 + 4x_1x_2 + x_2 \\ x_2^2 \end{bmatrix}\right) = L_f\big(x_1^2 + 4x_2^2 + 4x_1x_2 + x_2\big) = \dots
\]
The determined \(v\) equals \(\dot{z}_2\), which confirms the correctness of the definition of the system in the form (6.85).
1. Check the conditions for linearisation of the system with state transformation and input transformation.

Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [−sin x1; x1 + x2^2],   g(x) = [2; 0].   (6.88)
and

Δ_{n−1} = Δ1 = span{g, ad_f g} = span{[2; 0], [2 cos x1; −2]}.   (6.90)
Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0,

or alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0.   (6.92)
∂h/∂x = [0 c],   (6.95)

where c ∈ R is an arbitrary constant. At the same time this covector satisfies relation (6.94) for c ≠ 0. Based on (6.95) we find the scalar function h(x):
We take c = 1 and obtain:

z = T(x) = [x2; x1 + x2^2].   (6.98)
where

v = Lg Lf h · u + Lf^2 h = 2u − sin x1 + 2x2 (x1 + x2^2).   (6.100)
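All the ingredients of this example (ad_f g in (6.90), the output choice, and the coefficients of v in (6.100)) can be reproduced symbolically. A sketch with helper functions of our own naming:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-sp.sin(x1), x1 + x2**2])
g = sp.Matrix([2, 0])

def lie_bracket(a, b):
    # ad_a b = (db/dx) a - (da/dx) b
    return b.jacobian(x) * a - a.jacobian(x) * b

def lie_d(h, a):
    # L_a h = (dh/dx) a for a scalar function h
    return (sp.Matrix([h]).jacobian(x) * a)[0, 0]

adfg = sp.simplify(lie_bracket(f, g))   # expect [2*cos(x1); -2]
h = x2                                  # output from the covector [0 c], c = 1
Lfh = lie_d(h, f)                       # x1 + x2**2
LgLfh = sp.simplify(lie_d(Lfh, g))      # 2
Lf2h = sp.simplify(lie_d(Lfh, f))       # -sin(x1) + 2*x2*(x1 + x2**2)
print(adfg.T, LgLfh, Lf2h)
```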
The system is defined as:

ẋ = [x1^2 + x2; cos x1 + x2 + 3u].   (6.101)

1. Check the conditions for linearisation of the system with state transformation and input transformation.
Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [x1^2 + x2; cos x1 + x2],   g(x) = [0; 3].   (6.102)
Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0,

or alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0.   (6.106)
On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] [0; 3] = 0   (6.107)

and

[∂h/∂x1  ∂h/∂x2] [−3; −3] ≠ 0.   (6.108)
It can be shown that an example annihilator of the vector field g is the following covector:

∂h/∂x = [c 0],   (6.109)

where c ∈ R is an arbitrary constant. At the same time this covector satisfies relation (6.108) for c ≠ 0. Based on (6.109) we find the scalar function h(x):
We take c = 1 and obtain:

z = T(x) = [x1; x1^2 + x2].   (6.112)
where

v = Lg Lf h · u + Lf^2 h = 3u + 2x1 (x1^2 + x2) + cos x1 + x2.   (6.114)
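The coefficients of (6.114) can be double-checked with sympy; a minimal sketch under the definitions (6.102) and h = x1:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1**2 + x2, sp.cos(x1) + x2])
g = sp.Matrix([0, 3])

h = x1                                          # from the covector [c 0] with c = 1
Lfh = (sp.Matrix([h]).jacobian(x) * f)[0, 0]    # x1**2 + x2
grad = sp.Matrix([Lfh]).jacobian(x)             # [2*x1, 1]
LgLfh = (grad * g)[0, 0]                        # 3
Lf2h = sp.expand((grad * f)[0, 0])              # 2*x1*(x1**2 + x2) + cos(x1) + x2
print(LgLfh, Lf2h)
```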
1. Check the conditions for linearisation of the system with state transformation and input transformation.

Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [sin x1; sin x1 + 2x1 + x2],   g(x) = [2; 2].   (6.116)
and

Δ_{n−1} = Δ1 = span{g, ad_f g} = span{[2; 2], [−2 cos x1; −2 cos x1 − 6]}.   (6.118)
We check the involutivity of the distribution Δ0.
The distribution Δ0 is involutive because it is spanned by a single vector field.
Next we check the dimension of the distribution Δ1 by examining the rank condition of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [[2, −2 cos x1], [2, −2 cos x1 − 6]] = −12.   (6.119)
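The bracket ad_f g in (6.118) and the constant determinant (6.119) follow directly from the definitions; a sympy sketch (our own helper, not from the text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.sin(x1), sp.sin(x1) + 2*x1 + x2])
g = sp.Matrix([2, 2])

# ad_f g = (dg/dx) f - (df/dx) g; the first term vanishes for constant g
adfg = sp.simplify(g.jacobian(x) * f - f.jacobian(x) * g)
C = sp.Matrix.hstack(g, adfg)
print(adfg.T, sp.simplify(C.det()))  # determinant is the constant -12
```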
Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0,

or alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0.   (6.120)
On this basis we obtain:

Lg h = [∂h/∂x1  ∂h/∂x2] [2; 2] = 0   (6.121)

and

L_{ad_f g} h = [∂h/∂x1  ∂h/∂x2] [−2 cos x1; −2 cos x1 − 6] ≠ 0.   (6.122)
It can be shown that an example annihilator of the vector field g is the following covector:

∂h/∂x = [c −c],   (6.123)

where c ∈ R is an arbitrary constant. At the same time this covector satisfies relation (6.122) for c ≠ 0. Based on (6.123) we find the scalar function h(x):

h(x) = c (x1 − x2).   (6.124)
Having determined the output function h, we find the coordinate transformation according to:

z = T(x) = [h; Lf h].   (6.125)

We take c = 1 and obtain:

z = T(x) = [x1 − x2; −2x1 − x2].   (6.126)
Ad.3
In the new coordinates the system equation is linear:

ż = [[0, 1], [0, 0]] z + [0; 1] v,   (6.127)

where

v = Lg Lf h · u + Lf^2 h = −6u − 3 sin x1 − 2x1 − x2.   (6.128)
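Relation (6.128) can be verified by computing the Lie derivatives of h = x1 − x2 symbolically; a short sketch:

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.sin(x1), sp.sin(x1) + 2*x1 + x2])
g = sp.Matrix([2, 2])

h = x1 - x2                                     # output from the covector [c -c], c = 1
Lfh = (sp.Matrix([h]).jacobian(x) * f)[0, 0]    # -2*x1 - x2 after cancellation
grad = sp.Matrix([Lfh]).jacobian(x)
v = sp.simplify((grad * g)[0, 0] * u + (grad * f)[0, 0])
print(v)  # -6*u - 3*sin(x1) - 2*x1 - x2, up to term ordering
```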
1. Check the conditions for linearisation of the system with state transformation and input transformation.

Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [x1 + x2; sin x2],   g(x) = [1 + x1; 2 + 2x1].   (6.130)
Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0,

or alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0.   (6.134)

On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] [1 + x1; 2 + 2x1] = 0   (6.135)

and

[∂h/∂x1  ∂h/∂x2] [−2x1 + x2 − 3; 2x1 + 2x2 − cos x2 (2 + 2x1)] ≠ 0.   (6.136)
It can be shown that an example annihilator of the vector field g is the following covector:

∂h/∂x = [2 −1].   (6.137)

At the same time this covector satisfies relation (6.136). Based on (6.137) we find the scalar function h(x):

h(x) = 2x1 − x2.   (6.138)
Ad.3
In the new coordinates the system equation is linear:

ż = [[0, 1], [0, 0]] z + [0; 1] v,   (6.141)

where

v = Lg Lf h · u + Lf^2 h = (2 + 2x1 + (2 − cos x2)(2 + 2x1)) u + 2x1 + 2x2 + (2 − cos x2) sin x2.   (6.142)
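In this example g depends on the state, so both Jacobian terms of the bracket contribute; (6.136) and (6.142) can be reproduced as follows (a sketch, with our own names):

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1 + x2, sp.sin(x2)])
g = sp.Matrix([1 + x1, 2 + 2*x1])

# With a state-dependent g: ad_f g = (dg/dx) f - (df/dx) g
adfg = sp.simplify(g.jacobian(x) * f - f.jacobian(x) * g)

h = 2*x1 - x2                                   # output (6.138)
Lfh = (sp.Matrix([h]).jacobian(x) * f)[0, 0]
grad = sp.Matrix([Lfh]).jacobian(x)
v = sp.expand((grad * g)[0, 0]) * u + sp.expand((grad * f)[0, 0])
print(adfg.T)
print(v)
```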
Exercise 41. The system is defined as:

ẋ = [x1 + x2 + 1; −sin x1 + 2u].   (6.143)

1. Check the conditions for linearisation of the system with state transformation and input transformation.
2. Define the diffeomorphism T(x) which defines the linearising output z = T(x).
3. Define the control signal v = f(x,u) for the linearised system.
Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [x1 + x2 + 1; −sin x1],   g(x) = [0; 2].   (6.144)
We check the linearisability conditions using state transformation and feedback. For the considered system the state is described in R^2, i.e. n = dim x = 2. One has to check whether the distribution Δ_{n−2} is involutive and whether Δ_{n−1} has dimension n. We define two distributions:

Δ_{n−2} = Δ0 = span{g} = span{[0; 2]}   (6.145)

and

Δ_{n−1} = Δ1 = span{g, ad_f g} = span{[0; 2], [−2; 0]}.   (6.146)
We check the involutivity of the distribution Δ0.
The distribution Δ0 is involutive because it is spanned by a single vector field.
Next we check the dimension of the distribution Δ1 by examining the rank condition of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [[0, −2], [2, 0]] = 4.   (6.147)

Hence det C ≠ 0 ⇒ dim Δ1 = 2.
The system therefore satisfies the linearisation conditions in a neighbourhood of the point x = 0.
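Both conditions of this step can be confirmed symbolically; a minimal sympy sketch for (6.146) and (6.147):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1 + x2 + 1, -sp.sin(x1)])
g = sp.Matrix([0, 2])

# ad_f g = (dg/dx) f - (df/dx) g; only the second term survives for constant g
adfg = sp.simplify(g.jacobian(x) * f - f.jacobian(x) * g)  # [-2; 0]
C = sp.Matrix.hstack(g, adfg)
print(adfg.T, C.det())  # nonzero det, so dim Delta_1 = 2
```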
Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0,

or alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0.   (6.148)
On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] [0; 2] = 0   (6.149)

and

[∂h/∂x1  ∂h/∂x2] [−2; 0] ≠ 0.   (6.150)
It can be shown that an example annihilator of the vector field g is the following covector:

∂h/∂x = [c 0],   (6.151)

where c ∈ R is an arbitrary constant. At the same time this covector satisfies relation (6.150) for c ≠ 0. Based on (6.151) we find the scalar function h(x):
We take c = 1 and obtain:

z = T(x) = [x1; x1 + x2 + 1].   (6.154)
Ad.3
In the new coordinates the system equation is linear:

ż = [[0, 1], [0, 0]] z + [0; 1] v,   (6.155)

where

v = Lg Lf h · u + Lf^2 h = 2u + x1 + x2 + 1 − sin x1.   (6.156)
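The control law (6.156) follows from two Lie derivatives of h = x1; a short symbolic check:

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1 + x2 + 1, -sp.sin(x1)])
g = sp.Matrix([0, 2])

h = x1
Lfh = (sp.Matrix([h]).jacobian(x) * f)[0, 0]    # x1 + x2 + 1
grad = sp.Matrix([Lfh]).jacobian(x)             # [1, 1]
v = (grad * g)[0, 0] * u + (grad * f)[0, 0]
print(sp.simplify(v))  # 2*u + x1 + x2 + 1 - sin(x1), up to term ordering
```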
The system is defined as:

ẋ = [x1^3 + x2; x2^3 + u].   (6.157)

1. Check the conditions for the system's linearisation with the use of state transformation and feedback.
Solution

Ad.1
The drift and the input vector field are determined as follows:

f(x) = [x1^3 + x2; x2^3],   g(x) = [0; 1].   (6.158)
Ad.2
We now look for the linearising function by considering the following distribution.
(Note the order in which the variables z1 and z2 were substituted; here the variable that is propagated through all the fields is z2, and therefore we will later take h(x) = z2.)
We compute the individual flows. For h1 we have:

ẋ = h1 = [0; 1]   ⇒   Φ^{h1}_t(x0) = [x10; t + x20],

and for h2 we have:

ẋ = h2 = [−1; −3x2^2],
where

ẋ1 = −1   ⇒   dx1/dt = −1,
∫ dx1 = −∫ dt   ⇒   x1 = −t + C,   C = x10,   x1 = −t + x10,
and

ẋ2 = −3x2^2   ⇒   dx2/dt = −3x2^2,
−(1/3) ∫ x2^{−2} dx2 = ∫ dt   ⇒   (1/3) x2^{−1} + C = t,
C = −(1/3) x20^{−1},
(1/3) x2^{−1} − (1/3) x20^{−1} = t   ⇒   x2 = 1 / (3t + x20^{−1}).

On this basis we obtain:

Φ^{h2}_t(x0) = [−t + x10; 1 / (3t + x20^{−1})].
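The closed-form flow of ẋ2 = −3x2^2 can be double-checked with sympy; the solution must also satisfy x2(0) = x20, which fixes the sign of the denominator (symbol names are ours):

```python
import sympy as sp

t, x20 = sp.symbols('t x20')
x2 = sp.Function('x2')

# Solve x2' = -3*x2^2 with the initial condition x2(0) = x20
sol = sp.dsolve(sp.Eq(x2(t).diff(t), -3*x2(t)**2), x2(t), ics={x2(0): x20})

expected = 1 / (3*t + 1/x20)
print(sp.simplify(sol.rhs - expected))       # expected: 0
print(sp.simplify(sol.rhs.subs(t, 0) - x20)) # expected: 0, i.e. flow is identity at t = 0
```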
z2 = −x1,
z1 = x2 − 1 / (−3z2 − 1) = x2 − 1 / (3x1 − 1).
According to the Frobenius theory, the function h(x) = z2 has the property that its gradient ∇h is an annihilator of the distribution Δ. One can show that in the considered case, with ∇h = ∂h/∂x = [−1 0], the following relation holds:

∇h · h_i = 0,   (6.166)

where i ∈ {1, …, n − 1} ⇒ i = {1}.
However, the variables z1 and z2 are not the state variables of the system after linearisation; according to the theory, these are computed from:

z = T(x) = [h; Lf h].   (6.167)

Thus, after the state transformation, the system has the form:

z = T(x) = [−x1; −x1^3 − x2].   (6.168)
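The transformation (6.168) can be verified symbolically, together with the facts that Lg h = 0 (so ż1 = z2) and that the Jacobian of T is nonsingular, making T a valid local diffeomorphism; a sketch:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1**3 + x2, x2**3])
g = sp.Matrix([0, 1])

h = -x1                                          # output h(x) = z2
Lgh = (sp.Matrix([h]).jacobian(x) * g)[0, 0]     # 0, so z1-dot contains no input term
Lfh = (sp.Matrix([h]).jacobian(x) * f)[0, 0]     # -x1**3 - x2
T = sp.Matrix([h, Lfh])
print(T.T)                   # Matrix([[-x1, -x1**3 - x2]])
print(T.jacobian(x).det())   # constant, nonzero
```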
Ad.3
Computation of v = f(x,u) ...
Appendix A